\section{Introduction}
\emph{"Golden Ratio is a powerful mathematical constant woven into the very fabric of biology. It is the unique visual tension between comforting symmetry and compelling asymmetry, and its thoughtful application can bring beauty and harmony and intrigue to all manner of designed things."} - Darrin Crescenzi\\
The Fibonacci sequence $\{1, 1, 2, 3, 5, 8, 13, \ldots\}$ is generated by the recurrence $F_n = F_{n-1} + F_{n-2}$ with $F_1 = F_2 = 1$, and is one of the most famous biologically-inspired mathematical sequences. The Golden Ratio, denoted $\phi$, is the limit of the ratio of consecutive numbers in this sequence. Fibonacci generated the sequence as an idealized model of a reproducing rabbit population with overlapping generations~\cite{dunlap1997golden}. It was documented in India many centuries earlier, and has been observed in numerous biological systems including the arrangement of pine cones, the unfurling of fern leaves, and the arrangement of sunflower seeds that optimally fills the circular area of the flower~\cite{naylor2002golden}. The Golden Ratio and Fibonacci numbers have been used in computer science for various applications such as obtaining optimal schedules for security games~\cite{kempe2018quasi}, Fibonacci hashing~\cite{knuth1973art}, bandwidth sharing~\cite{itai1984golden}, data structures~\cite{fredman1987fibonacci} and game-theoretic models for blocking-resistant communication~\cite{king2011conflict}.
In this paper, we use the Golden Angle created by arcs that form the Golden Ratio to develop a collective foraging algorithm that reduces the time to first discovery of clustered targets. In particular, we show that our new foraging algorithm, \textsc{GoldenFA}\xspace, performs better in theory and practice than a previous algorithm that chooses search directions uniformly at random~\cite{aggarwalIROS2019}.
\vspace{0.3em}\noindent \textbf{Motivation for using the ``most irrational" number.} Any number can be written as an integer plus $1$ over another number. The larger the denominator in the fractional part, the better the integer part is as an approximation. For example, $\pi = 3 + 1/x$ for some $x > 7$, and so $\pi$ is fairly well approximated by $3$. Thinking recursively, the number $x$ in the denominator can itself be written as an integer plus $1$ over another number. Thus, we can write any number as a (possibly infinite) continued fraction~\cite{khinchin1963continued} $$x_1 + \frac{1}{x_2 + \frac{1}{x_3 + \ldots}}$$ where the $x_i$ values are all integers for $i \geq 1$.
The degree to which the original number is well-approximated by a finite continued fraction depends on how large the $x_i$ values are. When $x_i = 1$ for all $i \geq 1$, we obtain an irrational number that is most difficult to approximate. To find this most difficult irrational number, we set $y = 1 + \frac{1}{y}$, and solve the resulting quadratic equation to obtain a solution $y = \frac{1 + \sqrt{5}}{2}$, which is the celebrated Golden ratio $\phi$.
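Concretely, multiplying $y = 1 + \frac{1}{y}$ by $y$ yields the quadratic $y^2 - y - 1 = 0$, whose positive root is $$y = \frac{1 + \sqrt{5}}{2} \approx 1.618 = \phi,$$ the negative root being discarded since the continued fraction is positive.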
The fact that $\phi$ is difficult to approximate with a rational number has useful implications for ensuring angles are ``well-spread". For example, if we start at the point $0$ on a unit circle, and iteratively add points by moving clockwise by distance $\phi$, then we will end up with points that spread out with ample distance between them.\footnote{In contrast, if we use $\pi$ instead of $\phi$, the points will cluster around $3$ spokes since $\pi$ is fairly well-approximated by $3$. See~\cite{numberphilevideo} for a fascinating simulation and discussion of these facts.} Interestingly, this approximates how plants add florets, leaves and petals as they grow. If the next leaf is added by moving distance $\phi$ along a unit circle, the leaves remain well-spread, increasing their total exposure to sunlight with minimal mutual shading.
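As a minimal illustration of this spreading property (not part of our foraging algorithm), the following sketch places points by advancing a fraction $\phi$ of a full turn at each step, a simplification of moving an arc distance of $\phi$, and reports the gaps between neighbouring points; the names are ours.
\begin{verbatim}
import math

PHI = (1 + math.sqrt(5)) / 2  # the Golden Ratio

def gaps(n, step=PHI):
    """Place n points by repeatedly advancing a fraction `step`
    (mod 1) of a full turn; return the gaps between neighbours."""
    pts = sorted((k * step) % 1.0 for k in range(n))
    out = [b - a for a, b in zip(pts, pts[1:])]
    out.append(1.0 - pts[-1] + pts[0])  # wrap-around gap
    return out

for n in (5, 20, 100):
    g = gaps(n)
    # At most three distinct gap lengths appear (Three-Gap Theorem),
    # and with phi the largest gap shrinks roughly like 1/n.
    print(n, len({round(x, 12) for x in g}), round(max(g), 4))
\end{verbatim}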
In this paper, we make use of the ``most irrational" property of $\phi$ to design a foraging algorithm. A searcher first forages in an initial direction from a nest site to a boundary of the search space, and then returns to the nest site. The angle for the next spoke from the nest site is chosen by moving a distance of $\phi$ along a unit circle. In this way, we can ensure that our foraging spokes are ``well-spread", thus minimizing overlap at the circle center while maximizing the probability of the first discovery of a cluster of resources somewhere in the circle area. If there are multiple searchers, it is straightforward to parallelize this process.
The rest of the paper is organized as follows: We first define our formal model and problem statement. Then we introduce \textsc{GoldenFA}\xspace and explain the theoretical upper bound on finding a single target of a given diameter for a single searcher and then multiple searchers. We then compare theoretical predictions to simulated searchers.
\section{Our Model and Problem Statement}
We assume a circular arena of radius $R$ with the \emph{nest} at the centre\footnote{Similar to existing work, this arena can be modelled as a discrete grid with the Manhattan distance as the metric; however, since working in the continuous Euclidean space introduces only a constant multiplicative blowup, we state our algorithm and the results for a continuous arena for mathematical simplicity.}. A \emph{cluster}\footnote{We assume that this cluster is circular in shape for mathematical simplicity; however, our results apply to all cluster shapes that can be circumscribed by a circle.} of targets is placed at a distance $D$ from the nest and has a diameter $\Delta$.
We assume an obstacle-free arena in which the cluster does not move or regenerate as targets are collected.
We assume $N \geq 1$ \emph{searchers}. Each searcher has limited memory and can complete straight-line motion in a specified direction from the nest to the edge of the search arena. Searchers can detect if they have encountered a target. Moreover, they know the direction in which they are currently moving, but no information from the past can be stored or communicated.
\vspace{0.3em}\noindent \textbf{Overview of our Approach.}
Previous work shows that foraging for a single target in an arena without any knowledge of the arena parameters requires time proportional to the area of the arena~\cite{feinerman2017ants}. We seek to reduce the time to discover a cluster with large diameter. In particular, when the cluster diameter is any increasing function of the arena diameter, our foraging strategy locates the cluster in sub-quadratic time.
We state our main result in the theorem below and defer its proof to the full version of our paper.
\begin{theorem}
\label{thm:mainGoldenFA}
The number of time steps taken by \textsc{GoldenFA}\xspace before the cluster is located for the first time is $O\paran{\paran{\frac{R}{N\Delta}+1}D}$.
\end{theorem}
\noindent Again, as long as $\Delta$ is some increasing function of $D$, \textsc{GoldenFA}\xspace has improved performance. Additionally, when $N = R$ searchers work in parallel, the distance travelled is $O(D)$, which is asymptotically optimal.
Technically, the proof of this theorem rests on two main facts. First, $\sin^{-1} x = \Theta(x)$ for $x < 1$ by the Taylor expansion, so the angle subtended at the nest by the cluster's diameter is $\Theta(\Delta/D)$. Second, to ensure the largest angle between consecutive spokes is less than $\Theta(\Delta/D)$, our algorithm requires adding $O(D/\Delta)$ spokes by the Three-Gap Theorem~\cite{swierczkowski1958successive}. Thus, no matter where the cluster is located in the arena, we require $O(D/\Delta)$ spokes before some spoke intersects the cluster.
\section{The Golden Foraging Algorithm}
We now describe our algorithm, \textsc{GoldenFA}\xspace.
\vspace{0.3em}\noindent \textbf{Single-searcher case:}
The searcher starts from the reference heading, moves along a spoke to the end of the arena and then returns to the nest. Next, the searcher turns an arc distance of $\phi$ along a unit circle centered at the nest, and moves along a spoke in that direction to the end of the arena. This process continues until the searcher locates the cluster.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth, trim={15cm 6.8cm 2cm 6.8cm},clip=true]{goldenFA_schematic.pdf}
\caption{A schematic of the \textsc{GoldenFA}\xspace algorithm for a single cluster (shown as the shaded area) of diameter $\Delta$ located at a distance $D$ from the source.}
\label{fig:goldenFA_schematic}
\end{figure}
\vspace{0.3em}\noindent \textbf{Multiple searchers case:}
When $N\geq 1$ searchers search for the cluster together, searcher $i$ starts the search at an arc distance of $(i-1) \phi$ from the reference heading and searches along spokes that are $N \phi$ arc distance apart (see Figure~\ref{fig:goldenFA_schematic}).
For this to work, each searcher must have a unique identifier and know its relative order in the sorted sequence of these identifiers. For simplicity, we assume that the search stops as soon as any searcher discovers a target.
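A minimal sketch of how searcher $i$ could generate its sequence of spoke headings under this scheme is given below; the names are ours, and motion, sensing, and termination are omitted.
\begin{verbatim}
import math

PHI = (1 + math.sqrt(5)) / 2

def spoke_headings(i, N, num_spokes):
    """Headings (radians from the reference heading) of the spokes
    traversed by searcher i (1-based) out of N: start at an arc
    distance of (i-1)*phi and advance by N*phi per spoke."""
    return [(((i - 1) + k * N) * PHI) % (2 * math.pi)
            for k in range(num_spokes)]

# Example: first three spoke headings of searcher 2 out of N = 5.
print([round(h, 3) for h in spoke_headings(2, 5, 3)])
\end{verbatim}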
\section{Experimental Evaluation}
The theoretical analysis in the previous section makes many assumptions that do not hold in a robotics setting. These include the lack of collisions between robots, which is a cause of significant slow-down in central place foraging~\cite{Lu2016mpfa}; the use of an exact value of $\phi$ instead of an approximation; and imprecise robot motion. To test whether the analysis holds under more realistic constraints, we implemented the multiple-searcher \textsc{GoldenFA}\xspace in the ARGoS swarm robotics simulator~\cite{pinciroli2012argos}, and evaluated the time required to discover a single square cluster of resources placed uniformly at random in a $100 \times 100$ meter square arena.
The resources in the cluster are placed 0.15 meters apart such that a square cluster with $k$ resources on a side has diameter $\Delta \approx 0.15k$ meters. We conducted experiments with square clusters of $8\times 8$, $16\times16$, $32\times 32$, $64\times 64$, and $128\times128$ resources. For each cluster and value of $N$, we computed the average time over 100 experiments.
\vspace{0.3em} \noindent \textbf{Time to Discover versus $\Delta$ and N.}
Figure~\ref{fig:golden-fa-performance} shows how the time to discover decreases as $\Delta$ increases. The decrease closely follows the prediction of Theorem~\ref{thm:mainGoldenFA} for $N=1$ and $N=10$, with a fitted constant of $32$. However, for $N=100$, the constant is larger, most likely because of congestion effects. We note that congestion arises because searchers are not infinitely small, as is assumed by the theory.
Figure~\ref{fig:scale-swarm} shows how the time to find the cluster depends on the number of searchers. For increasing $N$, we evaluated square clusters of size $16\times16$, $32\times 32$ and $64\times 64$ in a $100 \times 100$ meter arena. The figure shows that additional searchers reduce the search time, but also that adding too many searchers increases congestion, which in turn increases search times.
\vspace{0.3em} \noindent \textbf{Comparison to Ballistic Algorithm.} We compared our algorithm with the Ballistic Central Place Foraging Algorithm (b\textsc{CPFA})~\cite{aggarwalIROS2019}, for the case $N=10$. In b\textsc{CPFA}, the searchers traverse spokes in directions chosen uniformly at random, until the cluster is located. It can be shown that the number of spokes required by this algorithm is $O\paran{\frac{D}{\Delta}\log\frac{D}{\Delta}}$ with high probability, using the result about maximum distance between random points on an arc from~\cite{king2004choosing}. This is asymptotically worse than \textsc{GoldenFA}\xspace for all values of $D$ and $\Delta$.
Figure~\ref{fig:bcpfa-vs-golden} shows that the searchers using \textsc{GoldenFA}\xspace are able to find the cluster faster than those using the Ballistic \textsc{CPFA} (b\textsc{CPFA}). The results for varying cluster diameter show that the search time using b\textsc{CPFA} is both longer and more variable than the search time using \textsc{GoldenFA}\xspace. Ballistic \textsc{CPFA} searches have a large number of outliers that take considerably longer than average time to find the cluster.
\section{Related Work}
Search is a fundamental problem in biology, where survival depends on search for mates, prey and other resources. It is also a common problem in robotics and mobile computing. Collective search, where multiple searchers must coordinate, is a key problem in computer science, robotics and in social insects. Ant- and bee-inspired algorithms have been particularly influential in swarm robotics research~\cite{csahin2004swarm, krieger2000ant, karaboga2009survey}. In prior work, we have used algorithms inspired by foraging behaviours of desert seed-harvesting ants. These ants forage collectively as follows: each ant leaves a central nest and travels in a relatively straight line in an apparently randomly chosen direction.
Upon finding food, the ant determines whether to remember and return to that location or communicate the location to nest mates by laying a pheromone trail.
We mimic this behaviour in robots with a generic Central Place Foraging Algorithm (CPFA), which is effective at finding nearby resources quickly, particularly when resources are distributed in multiple clusters of unknown diameter~\cite{Hecker2015}. However, a simpler interlocking spiral algorithm finds targets faster than the bio-inspired CPFA, and the spiral is particularly fast at completely collecting all targets~\cite{heckerClusters, Fricke2016b, aggarwalIROS2019}.
Our prior work~\cite{aggarwalIROS2019} shows that the most efficient foraging algorithms that completely retrieve all items in a search arena in minimal time use geometric patterns that fill space with minimal traversal of already searched territory. In contrast, in this work, we seek foraging patterns that minimize the time to first discovery when resources are clustered in a single pile of unknown diameter.
\section{Discussion}
\vspace{0.3em} \noindent \textbf{Foraging for Multiple Clusters.} When multiple clusters are placed in the arena, the time to first detection will be determined by the largest cluster. For discovery of all clusters, the number of spokes must be large enough to find the cluster with the smallest diameter. Note that if the smallest cluster has a constant diameter, then the asymptotic performance of \textsc{GoldenFA}\xspace is no better than exhaustive search, which has quadratic cost.
\vspace{0.3em} \noindent \textbf{Conclusions and Future Work:} We have described an algorithm, \textsc{GoldenFA}\xspace, that can locate a cluster efficiently. Our algorithm has a search time that scales inversely with both the cluster diameter and the number of searchers. Moreover, our algorithm performs well in practice, with search times that generally match our theoretical predictions. Further, our algorithm empirically outperforms another common search algorithm in both mean time to discovery and variance. We believe our algorithm is a first step toward minimizing foraging time when resources are distributed in an unknown number of clusters of unknown and variable diameter.
Next steps of this work include the following. First, implementing \textsc{GoldenFA}\xspace in real, rather than simulated, robots to determine how well it performs with real-world noise and error. Second, a theoretical analysis of resilience of \textsc{GoldenFA}\xspace. Finally, measuring collection time of multiple clusters by \textsc{GoldenFA}\xspace and comparing to other foraging algorithms.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{goldenfa_performance.pdf}
\caption{Mean time to discover the single cluster in a $100 \times 100$ meter arena. The dashed lines show the predicted time to discover the cluster, $f(x) = 32\left(\frac{RD}{N\Delta}\right)$ (from Theorem~\ref{thm:mainGoldenFA}), where $D = \frac{2}{3}R$ (the expected distance of a randomly placed cluster from the nest). Error bars show plus or minus one standard deviation. As the swarm size increases, clusters are found more quickly; however, for $N=100$ the effects of congestion near the nest slow down the search and lead to longer search times than predicted.}
\label{fig:golden-fa-performance}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{goldenfa-scale-swarm.pdf}
\caption{Mean time to discover the cluster as the number of searchers increases for three cluster diameters. The effect of increased congestion is clearly visible as an increase in time to discovery in the largest swarms.}
\label{fig:scale-swarm}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{golden_vs_ballistic_box.pdf}
\caption{Comparison of b\textsc{CPFA} and \textsc{GoldenFA}\xspace. The time for 10 searchers to find a single cluster in a $100\times100$ meter arena as the cluster diameter increases. Black triangles indicate the mean. Not only does it take longer to find the cluster using the ballistic \textsc{CPFA}, but the search time is more variable compared to \textsc{GoldenFA}\xspace.}
\label{fig:bcpfa-vs-golden}
\end{figure}
\section{Introduction}
Mozilla is constantly balancing user privacy with the benefits that could be produced for users by collecting data.
There are processes in place to ensure that all data collected is reviewed and evaluated against principles of necessity, privacy, transparency, and accountability~\cite{firefox-data}.
When it comes to developing machine-learning-based products, however, many potential ideas would traditionally have required collecting highly sensitive user data.
Even if some users agree to share their data, it is difficult to build representative models in this way.
A promising solution to Mozilla's problem is \emph{Federated Learning\xspace}, a new technique that allows training models on data which only exists on user devices.
Rather than sharing their data, users only send model improvements to a server.
These updates are locally derived based on private data.
Since the data is not directly shared, and there exist various additional protection mechanisms in Federated Learning\xspace, this approach offers much better privacy.
To experiment with Federated Learning\xspace, we decided to use it to try to improve the Firefox URL bar.
When typing a query into the URL bar, Firefox tries to suggest history entries, bookmarks, and other items that the user might want to click on.
At the time of our experiment, the algorithm that selects and ranks these suggestions was based on handcrafted heuristics.
It used several constants for configuring how different features should be weighted.
The values of these weights were chosen because they intuitively seemed reasonable, and not because a data-driven process provided evidence that they work well.
Since many users regularly type queries into the URL bar and select from the suggestions shown to them, Firefox could collect data in order to optimize the URL bar with a learning algorithm.
However, search queries, history entries, and bookmarks are clearly private to users.
Collecting this data on a server would severely violate privacy constraints or produce non-representative results due to opt-in bias.
To still optimize the URL bar suggestions based on user interactions, we developed a system that uses Federated Learning\xspace.
This system allows us to compute model improvements based on the users' data, without directly collecting it.
Instead, the optimization algorithm is distributed so that the parts that directly touch the user data are also executed on the users' computers.
To the best of our knowledge this system represents the first use of Federated Learning\xspace in a major software product outside of Google.
Federated Learning\xspace research published so far has been based purely on simulations.
These simulations represent an easier problem setting since the developer can test many different optimization algorithms and hyperparameters in a short amount of time.
Deploying a Federated Learning\xspace system is more difficult because there is a much larger cost attached to testing out different versions.
Each experiment with real users takes much more time, and, even worse, can lead to a bad user experience.
For these reasons, we developed additional techniques to make our system robust enough to get good results by deploying it to Firefox just once.
These contributions include carefully designing the optimization algorithm, and implementing safeguards to ensure model quality.
We also designed our system in a way that it can optimize anything in Firefox that is currently based on hardcoded constants.
Since there are other places where handcrafted heuristics could be replaced by trained models, we believe our system to be widely applicable in Firefox.
All code that was used during the experiment, and in preparing it, is open sourced in accordance with Mozilla's philosophy~\cite{impl-client, impl-server, impl-simulations}.
Our system ended up being deployed to a large fraction of Firefox Beta users.
Over the course of under four days, roughly 360,000 users helped to train and evaluate a new model for the improved URL bar.
The new model leads to users typing over half a character less before selecting one of the proposed suggestions.
We have released an anonymized dataset containing the client data collected during this study [forthcoming].
\begin{figure}[h]
\centering
\includegraphics[width=250px]{bar.png}
\caption{The Firefox URL bar provides suggestions based on the browsing history}
\label{fig:fx-bar}
\end{figure}
In this paper, we describe how we built the system, what we learned from doing so, and how it could be improved in the future.
Concretely, our contributions include:
\begin{itemize}
\item We show that Federated Learning\xspace can be used successfully in major software products, rather than just in simulations.
\item Since little control over data can be exercised in this setting, we propose specialized optimization methods.
\item We discuss how arbitrary black-box functions can be optimized based on user interactions, without sacrificing user privacy, by using a custom way of computing gradients.
\end{itemize}
The remainder of the paper is structured as follows.
Section~\ref{sec:related-work} briefly introduces related work, while Section~\ref{sec:background} gives the necessary background knowledge.
In Section~\ref{sec:fx-search}, we show how the URL bar search in Firefox generally works.
Section~\ref{sec:optimization} explains how we designed the optimization process to be robust and easily reusable for future applications.
The process of launching our experiment as well as the analysis of the results are discussed in Section~\ref{sec:study}.
Finally, we provide a short conclusion in Section~\ref{sec:conclusion}.
\section{Related Work}
\label{sec:related-work}
\citet{fl} initially proposed Federated Learning\xspace and an algorithm for computing model updates in a distributed way.
This algorithm repeatedly receives gradients from clients, averages them and applies them to the model.
Since neural networks can contain millions of parameters, this approach could lead to a high communication cost.
To make the algorithm feasible for large models, various compression techniques have been suggested subsequently~\cite{compression}.
\section{Background}
\label{sec:background}
\subsection{Federated Learning}
\label{subsec:fl}
One of the reasons why machine learning has been applied so successfully to many problems in the past few years is that more data has become available.
For some applications of machine learning, collecting data can be privacy-invasive.
One such example application is predicting the next word that a person is going to use by considering the previously typed words.
This is typically done using machine learning nowadays, e.g.\ with recurrent neural networks and LSTMs~\cite{lstm}.
Although it is possible to train such a model using a text corpus from Wikipedia, the language found there differs from the one commonly used by people in daily life.
To train a model on the same data distribution that is also used for inference when the model is deployed, one would need to collect data directly from users.
This, however, would violate the privacy of users.
Users do not want to send everything they type to a server.
Federated Learning is a recently proposed technique for training models on this data, without sending it to a server.
Instead, we collaboratively train a model by distributing the training process among many users.
A server has the role of coordinating this process but most of the work is not performed by a central entity anymore but by a \emph{federation} of users.
Before the server starts off the distributed learning process, it needs to initialize the model.
Theoretically, this can be done arbitrarily, by using any of the common model initialization strategies.
In practice, it makes sense to intelligently initialize the model with sensible default values.
If some data is already available on the server, it can be used to pretrain the model.
In other cases, there might be a known configuration of model parameters that already leads to acceptable results.
Having a good first model gives the training process a head start and can reduce the time until convergence is reached.
After the model has been initialized, the iterative training process is kicked off.
At the beginning of an iteration, a subset of $K$ clients is randomly selected by the server.
They receive a copy of the current model parameters and use their locally available training data to compute an update.
The updates are then sent back to the server.
We denote the model parameter tensor by~$\theta$, the update of the \mbox{$i$-th} user by $H_i$, and the number of data points on the computer of the $i$-th user by $n_i$.
A visualization of the steps performed in each iteration is shown in Figure~\ref{fig:fl}.
\begin{figure*}[ht]
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=.8\linewidth]{FL2.png}
\caption{The server selects $K$ users}
\label{fig:sfig2}
\end{subfigure}
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=.8\linewidth]{FL3.png}
\caption{They receive the current model}
\label{fig:sfig2}
\end{subfigure}
\vspace{1cm}
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=.8\linewidth]{FL4.png}
\caption{and compute updates using their data}
\label{fig:sfig2}
\end{subfigure}
\begin{subfigure}{.45\textwidth}
\centering
\includegraphics[width=.8\linewidth]{FL5.png}
\caption{Updates are shared with the server}
\label{fig:sfig2}
\end{subfigure}
\vspace{0.2cm}
\caption{One communication round in a Federated Learning\xspace system}
\label{fig:fl}
\end{figure*}
The server waits until it has received all updates of the iteration and then combines them into one final update.
This is usually done by computing an average of all updates, weighted by how many training examples the respective clients used.
The model parameters for iteration $t$ are then computed using
\begin{equation*}
\theta^{(t)} = \theta^{(t - 1)} - \sum\limits_{i = 1}^K \frac{n_i}{N} H_i
\label{eq:federated-updates}
\end{equation*}
\noindent where $N = \sum_{i = 1}^K n_i$ is the total number of data points used in this round.
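A minimal sketch of this server-side aggregation step (ignoring client selection, communication, and the security mechanisms discussed below) is:
\begin{verbatim}
import numpy as np

def federated_update(theta, updates, counts):
    """One server iteration: theta is the current parameter tensor,
    updates the list of client updates H_i, counts the list of n_i."""
    total = float(sum(counts))                    # N = sum of the n_i
    avg = sum((n / total) * H for n, H in zip(counts, updates))
    return theta - avg        # theta^(t) = theta^(t-1) - sum (n_i/N) H_i
\end{verbatim}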
A new iteration begins after every model update.
In each iteration only $K$ users are queried for updates.
While requesting updates from all users would lead to more stable model improvements, it would also be extremely expensive to do because there could be millions of users.
Only querying a subset of them makes it more feasible to efficiently run many iterations.
This training process is then repeated until the model parameters converge, as determined by an appropriate criterion.
In some situations, it can also make sense to keep the training running indefinitely.
This can for example be the case in some recommender system applications, where the model needs to deal with new data all the time.
One potential attack vector against Federated Learning\xspace systems is trying to analyze updates sent to the server, in order to make conclusions about the original data.
To prevent such \mbox{man-in-the-middle} attacks, \emph{secure multi-party aggregation} mechanisms can be used~\cite{encryp, encryp2}.
These protocols use encryption techniques to allow only the analysis of the aggregated updates from a certain number of users, thus securing individual updates.
To ensure that the model itself does not implicitly memorize personal information, \emph{differential privacy}~\cite{dp-foundations} offers a framework to formalize privacy.
Differential privacy has been applied to Federated Learning\xspace, and there exist gradient descent variants that adhere to a specified level of $(\epsilon, \delta)$-differential privacy~\cite{noisy-sgd,fl-language-model}.
\subsection{Learning to Rank}
\label{subsec:ranking}
A ranking algorithm takes a set of items and sorts them by some criterion.
Before diving into ranking in the Firefox URL bar, it is worth taking a step back to understand how ranking in machine learning works.
This makes it easier to see how the current implementation fits into a learning system.
Fundamentally, there are three different approaches to learning a ranking algorithm~\cite{ranking1,ranking2}.
\emph{Pointwise ranking} algorithms process all items individually and assign a score to each item independently of the others.
The ranking is then determined by sorting all items using their respective scores.
Essentially, this is a special type of a regression model since we are assigning a real number to each input.
A \emph{pairwise ranking} model learns to compare pairs of items.
Its task is to decide which of the two items should be ranked higher.
The learned comparison function can then be used to sort the items.
In this approach, we treat the problem as a classification task since the model can only have two possible outputs.
The third approach is \emph{listwise ranking}, which refers to methods that operate directly on the entire list.
The motivation behind this idea is to optimize information retrieval metrics directly.
In practice, this turns out to be fairly difficult because many of those metrics are not differentiable and the models need to work with more inputs.
All these approaches have different advantages and disadvantages.
The existing ranking algorithm in Firefox is very similar to a pointwise ranking approach.
Since we want to optimize this algorithm using machine learning, this framing gives us a clear set of methods that could be useful for this project.
To optimize the current algorithm, we took a known pairwise ranking loss function and adapted it to work with our pointwise ranking system.
\section{Search in the Firefox URL bar}
\label{sec:fx-search}
The Firefox URL bar offers suggestions when users type a search query.
A part of these suggestions is provided directly by a search engine.
The others are generated by Firefox itself, for example based on the user's history, bookmarks, or open tabs.
We tried to optimize the history and bookmark suggestions using our project.
The search engine results are shown below those, and we have no influence over their selection and ranking since everything is provided directly by the search engine.
Searching for history and bookmark entries in the Firefox URL bar is a two-step process:
\begin{enumerate}
\item The search query is matched against the browser history and bookmarks. Matching is a binary decision. Pages either match the query or do not.
\item The set of matched links is ranked based on the user's history.
\end{enumerate}
Our project purely tries to optimize the ranking part of this process.
Future work could tackle the problem directly from the query matching.
The ranking of possible suggestions in the Firefox URL bar is determined using \emph{frecency}~\cite{frecency}, an algorithm that weights how \emph{frequently} and \emph{recently} a site was visited.
To do this, a frecency score is assigned to each history entry and bookmark entry.
After the score is computed, it is cached.
When searching, the matched results are then sorted using their frecency scores.
This makes frecency a pointwise ranking approach.
Frecency takes into account not only frequency and recency but also other information, such as how the page was visited and whether it is bookmarked.
It does this by looking at the latest visits to the respective site.
The value $\operatorname{visit}(v)$ of one single visit $v$ is then defined by how recent that visit was, scaled by the type of visit:
$$
\operatorname{visit}(v) = \operatorname{recency}(v) * \operatorname{type}(v)
$$
Frecency scores have to be cached in order to allow an efficient ranking while the user is typing.
This means that the recency aspect has to be modeled using time buckets.
Otherwise, the score would change all the time and caching would not work.
In the current Firefox implementation, there are five time buckets.
With this approach, the recency score only changes when a visit changes time buckets:
$$
\operatorname{recency}(v) = \begin{cases}
100 & \text{if visited in the past 4 days} \\
70 & \text{if visited in the past 14 days} \\
50 & \text{if visited in the past 31 days} \\
30 & \text{if visited in the past 90 days} \\
10 & \text{otherwise}
\end{cases}
$$
Sites can be visited in many different ways.
If the user typed the entire link themselves or if it was a bookmarked link, we want to weight that differently from visiting a page by clicking a link.
Other visit types, like some types of redirects, should not be worth any score at all.
We implement this by scaling the recency score with a type weight:
$$
\operatorname{type}(v) = \begin{cases}
1.2 & \text{if URL was visited by following a link on a website} \\
2 & \text{if URL was typed out} \\
1.4 & \text{if URL is bookmarked} \\
0 & \text{otherwise}
\end{cases}
$$
Now that we can assign a score to every visit, we could determine the full score of a page by summing up the scores of each visit to that page.
This approach has several disadvantages.
For one, it would scale badly because some pages are visited a lot.
Additionally, user preferences change over time and we might want to decrease the points in some situations.
Instead, we compute the average score of the last 10 visits.
This score is then scaled by the total number of visits.
The full frecency score can now be computed efficiently and changes in user behavior are reflected fairly quickly.
Let $S_x$ be the set of all visits to page $x$, and let $T_x$ be the set of the (up to) 10 most recent of these visits.
The full frecency score is then given by:
$$
\operatorname{frecency}(x) = \frac{|S_x|}{|T_x|} * \sum\limits_{v \in T_x} \operatorname{visit}(v)
$$
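The scoring logic described so far can be summarized in a short sketch; this is a simplification that ignores the score decay and the special cases discussed below, and the data layout and names are ours, not Firefox's.
\begin{verbatim}
RECENCY_BUCKETS = [(4, 100), (14, 70), (31, 50), (90, 30)]  # (days, weight)
TYPE_WEIGHTS = {"link": 1.2, "typed": 2.0, "bookmark": 1.4}

def visit_value(days_ago, visit_type):
    for max_days, weight in RECENCY_BUCKETS:
        if days_ago <= max_days:
            return weight * TYPE_WEIGHTS.get(visit_type, 0.0)
    return 10 * TYPE_WEIGHTS.get(visit_type, 0.0)  # older than 90 days

def frecency(visits):
    """visits: list of (days_ago, visit_type) tuples, newest first."""
    if not visits:
        return 0.0
    last = visits[:10]                      # T_x: up to 10 most recent visits
    total = sum(visit_value(d, t) for d, t in last)
    return len(visits) / len(last) * total  # |S_x| / |T_x| * sum of values
\end{verbatim}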
For legacy reasons, Firefox additionally decays frecency scores over time: once a day, all scores are decreased by a few percent. This effect could also be modeled using time buckets.
Note that this is a simplified version of the algorithm.
There is some additional logic for special cases, such as typing out bookmarks or different kinds of redirects.
The description here only shows the essence of the algorithm in a mathematical form.
The weights of the current algorithm were hand-chosen based on what seemed reasonable.
Our Federated Learning\xspace system tries to optimize the weights based on user interactions, while keeping the general logic of frecency intact.
This optimization process includes all constants in the previous formulas.
\section{Optimization System Design}
\label{sec:optimization}
\subsection{Optimization By User Interactions}
User interactions with the Firefox URL bar provide a feedback signal that can be used to optimize the weights in the frecency algorithm.
By checking what items users clicked on, the weights can be adapted to make it more likely to show these items earlier the next time similar searches are performed.
The general optimization loop works as follows:
Users search in the URL bar and click on a suggestion.
This provides data to our system that can be used to compute model improvements, in the form of gradients.
These gradients are computed for many users at the same time and are sent to a Mozilla server.
A job on the server averages all gradients of the current iteration and applies a scaled version of the result to the model.
The new model is redistributed periodically when a new iteration begins.
Part of the promise of our system is that this feedback signal from users is perfectly clean.
Users not only generate data by searching in the URL bar, but also label it for ranking by selecting an item.
Thus we know exactly what the ranking should have been.
Since machine learning relies on good data, this is an important point.
\subsection{Pointwise SVM Ranking}
To describe the ranking goal formally, we need to have a loss function that evaluates how well our model did.
To this end, we take a previously proposed SVM loss for pairwise ranking~\cite{svm-ranking,svm} and adapt it to our pointwise approach.
Essentially, the ranking loss function takes in a set of items with their assigned scores as well as the index of the item selected by the user.
The optimization goal is that the selected item should have the highest score.
But even if this was the case, our model might not have been too confident in that decision.
One example of this is the selected item having a score of 100 and the second item having a score of 99.9.
The model made the correct prediction, but only barely so.
To make sure it does a good job in similar cases, we need to provide a signal to the model which shows that it can still improve.
This is what we aim to do with the SVM loss.
If the URL bar displayed the suggestions for pages $x_1, \dots, x_n$ in that order and suggestion $x_i$ was chosen, then the SVM loss for the pointwise ranking is given by
$$
E = \sum\limits_{j \neq i} \max(0, f(x_j) + \Delta - f(x_i))
$$
\noindent where $f(x_i)$ denotes the pointwise ranking score of item $x_i$, which corresponds to frecency in our case.
We iterate over all suggestions that were not chosen and check that their scores were smaller than the one of the selected page by at least a margin of $\Delta$.
If not, an error is added.
The full loss should be minimized.
The margin $\Delta$ is a hyperparameter that needs to be decided on before the optimization process starts.
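A sketch of this loss in code, where \texttt{scores} holds the values $f(x_1), \dots, f(x_n)$ and \texttt{selected} is the index $i$ of the chosen suggestion:
\begin{verbatim}
def svm_ranking_loss(scores, selected, margin):
    """Every suggestion that is not ranked below the selected one by
    at least `margin` contributes to the loss."""
    f_i = scores[selected]
    return sum(max(0.0, f_j + margin - f_i)
               for j, f_j in enumerate(scores) if j != selected)

# The selected item (index 0) wins, but only by 0.1 points, so with a
# margin of 10 the second item still incurs a penalty of 9.9.
print(svm_ranking_loss([100.0, 99.9, 50.0], selected=0, margin=10.0))
\end{verbatim}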
\begin{figure}[H]
\begin{subfigure}{.5\textwidth}
\centering
\begin{tikzpicture}[baseline=0]
\fill[black] (-4, 0) rectangle (-3.5, 0.5);
\node at (-2.5, .25) {selected item};
\draw[pattern=north west lines, pattern color=black] (-1, 0) rectangle (-0.5, 0.5);
\node at (-0.05, .25) {loss};
\end{tikzpicture} \\
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\begin{tikzpicture}[samples=100,smooth,scale=0.75]
\begin{scope}
\draw[->,ultra thick] (-5,-4.2)--(5,-4.2) node[right]{$x_i$};
\draw[->,ultra thick] (-4.7,-4.5)--(-4.7,3.0) node[above]{$f(x_i)$};
\draw[black] (-4.7,-4.2) rectangle (-3.7,1);
\draw[pattern=north west lines, pattern color=black] (-4.7, 0) rectangle (-3.7,1);
\draw[black] (-3.7,-4.2) rectangle (-2.7,-.3);
\draw[black] (-2.7,-4.2) rectangle (-1.7,-.7);
\draw[black] (-1.7,-4.2) rectangle (-0.7,-.4);
\fill[black] (-0.7,-4.2) rectangle (0.3,2);
\draw[pattern=north west lines, pattern color=black] (0.3, 0) rectangle (1.3,1);
\draw[black] (0.3,-4.2) rectangle (1.3,1);
\draw[black] (1.3,-4.2) rectangle (2.3,-0.5);
\draw[black] (2.3,-4.2) rectangle (3.3,2.5);
\draw[pattern=north west lines, pattern color=black] (2.3, 0) rectangle (3.3,2.5);
\draw[black] (3.3,-4.2) rectangle (4.3,1);
\draw[pattern=north west lines, pattern color=black] (3.3, 0) rectangle (4.3,1);
\draw [decorate,decoration={brace,amplitude=3pt},xshift=-4pt,yshift=0pt]
(-.7,0.02) -- (-.7,1.98) node [black,midway,xshift=-0.3cm] {$\Delta$};
\end{scope}
\end{tikzpicture}
\end{subfigure}
\caption{A visualization of the SVM loss for pointwise ranking}
\label{fig:svm}
\end{figure}
A visualization of this loss function is given in Figure~\ref{fig:svm}.
Each bar represents a possible suggestion, with the selected one being shown in black.
The y-axis displays how many points the model assigned to the respective suggestion.
The hatched areas show the SVM loss.
Everything above the selected suggestion as well as everything below it by a margin of $\Delta$ adds to the full loss.
Even though the selected suggestion had the second highest score, four suggestions contribute to the penalty in our example.
\subsection{Rprop}
Gradient descent is a natural choice for optimization in machine learning.
But since we never collected any data, we have no idea what the gradient magnitudes for our problem look like.
Tuning the learning rate for vanilla gradient descent on the server prior to Federated Learning\xspace thus does not work.
Adam~\cite{adam} and other optimization algorithms automatically adapt learning rates and are better suited for our problem setting.
We experimented with several such algorithms but finally settled on Rprop~\cite{Rprop}.
Rprop ignores gradient magnitudes and dynamically adapts learning rates for each weight individually.
The optimization algorithm bounds the magnitude of updates by design and thus limits by how much individual iterations affect the model.
Let $\eta_i^{(t)}$ be the step size for the $i$-th weight in the $t$-th iteration of gradient descent.
The initial values $\eta_i^{(0)}$ and $\eta_i^{(1)}$ are hyperparameters that need to be chosen in advance.
This step size is then dynamically adapted for each weight, depending on the gradient.
The weights themselves are updated using
\begin{equation}
\theta_i^{(t)} = \theta_i^{(t - 1)} - \eta_i^{(t - 1)} * \operatorname{sgn}\left(\frac{\partial E^{(t -
1)}}{\partial \theta_i^{(t - 1)}}\right)
\label{eq:Rprop}
\end{equation}
\noindent where the sign of the partial derivative of the error in the last step
with respect to the given weight is computed.
We go in the direction of descent using the determined step size.
In each iteration of Rprop, the gradients are computed and the step sizes are
updated for each dimension individually.
This is done by comparing the gradient's sign of the current and previous
iteration.
The idea here is the following:
\begin{itemize}
\item When the signs are the same, we go in the same direction as in the
previous iteration. Since this seems to be a good direction, the step size
should be increased to go to the optimum more quickly.
\item If the sign changed, the new update is moving in a different direction.
This means that we just jumped over an optimum.
The step size should be decreased to avoid jumping over the optimum again.
\end{itemize}
To implement this update scheme, the following formula is used:
\begin{equation}
\eta_i^{(t)} = \begin{cases}
\min(\eta_i^{(t - 1)} * \alpha, \eta_{\max}) & \text{if } \frac{\partial E^{(t)}}{\partial \theta_i^{(t)}} * \frac{\partial E^{(t - 1)}}{\partial \theta_i^{(t - 1)}} > 0 \\
\max(\eta_i^{(t - 1)} * \beta, \eta_{\min}) & \text{if } \frac{\partial E^{(t)}}{\partial \theta_i^{(t)}} * \frac{\partial E^{(t - 1)}}{\partial \theta_i^{(t - 1)}} < 0 \\
\eta_i^{(t - 1)} & \text{otherwise}
\end{cases}
\label{eq:Rprop-lr}
\end{equation}
\noindent where $\alpha > 1 > \beta$ scale the step size, depending on whether
the speed should be increased or decreased. The step size is then clipped using
$\eta_{\min}$ and $\eta_{\max}$ to avoid it becoming too large or too small.
If a gradient was zero, a local optimum for this weight was found and the step
size is not changed.
There are well-known hyperparameters that tend to work well for Rprop~\cite{Rprop-empirical} and we validated those using simulations.
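For illustration, a minimal element-wise sketch of one Rprop iteration (the simple variant without weight backtracking, using commonly cited default values for $\alpha$, $\beta$ and the step-size bounds) might look as follows:
\begin{verbatim}
import numpy as np

def rprop_step(theta, step, grad, prev_grad,
               alpha=1.2, beta=0.5, step_min=1e-6, step_max=50.0):
    """One Rprop iteration. Returns new weights, new per-weight step
    sizes, and the gradient to remember for the next iteration."""
    change = grad * prev_grad
    step = np.where(change > 0, np.minimum(step * alpha, step_max), step)
    step = np.where(change < 0, np.maximum(step * beta, step_min), step)
    theta = theta - step * np.sign(grad)  # only the gradient's sign is used
    return theta, step, grad
\end{verbatim}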
Rprop turned out to be a great choice for us for multiple reasons.
In contrast to Adam and most other gradient descent variants, it completely ignores the gradient magnitude.
It can thus deal with any data we might see after deploying our system.
Furthermore, it adapts learning rates dynamically for each weight individually, which makes the initial choice of step size much less important.
A third reason is that the updates from Rprop are very interpretable and we can ensure that each iteration of optimization can only change frecency scores by a few points.
To get an additional compression advantage out of Rprop, it can be adapted to only use the signs of the gradient from each client.
Rather than computing the sign of the sum of gradients in Equation~\ref{eq:Rprop}, one can take the most common sign in the set of received gradients.
If we ignore the unlikely case of having a gradient of 0 in Equation~\ref{eq:Rprop-lr}, which would correspond to a perfect local optimum, clients now only need to share a single bit for each weight.
If one does not want to ignore this third case, two bits are required.
This is a strong compression factor, considering that we otherwise need to transfer 32 or 64 bits for each weight.
Additionally, this can also provide a privacy advantage because clients now need to share even less information with the server.
\subsection{Approximating Gradients}
Initially, we prototoyped our algorithms with computational graph libraries that can automatically compute gradients~\cite{tensorflow,pytorch-ad}.
However, it was difficult to add those to Firefox for our experiments.
The frecency algorithm is written in C++.
The corresponding modules are core to the browser and can only be changed if Firefox itself is updated.
To quickly prototype new ideas, Firefox has a separate mechanism for launching experiments, called \emph{Shield}~\cite{shield}.
This system lets us dynamically ship code to clients, completely independently of other major releases, by limiting which modules can be changed.
Since we wanted to use Shield for quick prototyping, we could not adapt the code behind the frecency algorithm itself.
However, it was still possible to change the weights behind the algorithm.
So instead of replacing the current implementation with a computational graph, we used a \emph{finite-difference method} for approximating the gradients.
If $g$ is any univariate function, its gradient can be approximated using:
\[
g'(x) \approx \frac{g(x + \epsilon) - g(x)}{\epsilon}
\]
\noindent where $\epsilon > 0$ is a very small number.
To compute the gradient of a multivariate function, such as the SVM loss based on frecency, this process is then performed by iterating through all dimensions.
In each dimension, the value is changed by $\epsilon$ in the two directions, while all other values stay constant.
The resulting vector is our gradient estimate.
Because $\epsilon$ needs to be a small value for the approximation to be good, this formula inherently has problems with numerical stability.
To improve on this, we used an alternative that has previously been proposed for better stability~\cite{ad}:
\[
g'(x) \approx \frac{g(x + \epsilon) - g(x - \epsilon)}{2 * \epsilon}
\]
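A sketch of this central-difference approximation for a black-box loss function, perturbing one weight at a time while holding the others fixed:
\begin{verbatim}
import numpy as np

def numerical_gradient(loss, weights, eps=1e-4):
    """Approximate the gradient of a black-box `loss` function by
    central finite differences, one weight at a time."""
    weights = np.asarray(weights, dtype=float)
    grad = np.zeros_like(weights)
    for i in range(len(weights)):
        shift = np.zeros_like(weights)
        shift[i] = eps
        grad[i] = (loss(weights + shift) - loss(weights - shift)) / (2 * eps)
    return grad
\end{verbatim}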
It is worth noting that this numerical way of approximating gradients scales badly with the number of weights.
Instead of a single pass to compute the full gradient, we now need two evaluations of the loss per weight.
In our case, this was not a problem because there were comparatively few weights.
For neural networks with millions of parameters this would not be the case.
Still, this approach saved us a lot of engineering time, while only adding a very small performance penalty.
We did not have to rewrite any C++ code, and could treat the existing implementation as a black box.
It also makes our system generally applicable since anything in Firefox that has configurable weights can now be optimized.
\subsection{General Protocol and Server Side}
The server provides clients with a model file.
Clients fetch the current model when the browser is first opened as well as every time a new gradient descent iteration is completed.
This happens every 30 minutes and is triggered by the server.
Every time a user participating in the optimization performs a history or bookmark search in the URL bar, a gradient is computed.
This gradient is pushed to a Mozilla server using the Firefox Telemetry system~\cite{telemetry}, which has several advantages.
It is a well-designed system with clear rules about what can be collected.
There is a lot of infrastructure around using it and dealing with the data on the server.
All messages sent by clients are stored in a Parquet~\cite{parquet} data store.
A Spark streaming job~\cite{spark} reads the new updates from clients and averages them in real-time.
Every 30 minutes, the average update is then given to the optimizer, and applied to the model.
The resulting model is published and fetched by clients.
We store all gradient updates on the server for later analysis.
For a proven production system this is not strictly necessary.
The system could also work by adding gradients to the current average after which they are directly discarded.
\subsection{Safeguards}
The model that we are training is being used by the URL bar at the very same time as we are optimizing it.
Thus, it was important to make sure that the experiment does not degrade the quality of the URL bar too much if the optimization process fails.
To keep this from happening, we carefully configured Rprop and implemented several safeguards on top of it.
First of all, we initialize our model with the weights that the traditional frecency algorithm used.
This replicates the previous ranking behavior perfectly.
The optimization process thus starts off with a decent initial model.
To gradually improve on this initial solution, updates are bounded so that the model slowly converges to an optimum.
We wanted to avoid huge model changes to make it unlikely to jump far over an optimum.
Because our weights are interpretable, we can understand by how much updates can change frecency scores in one iteration, and limit this to a reasonable value.
In conjunction with the smart initialization, this ensures that our optimization process gradually improves the traditional frecency weights.
Since the weights of the frecency algorithm have a clear meaning, we were able to implement several additional constraints.
Recent visits should always be weighted higher than old ones.
To enforce this, the value of each time bucket weight is bounded to be smaller than the values of newer time buckets.
Finally, we force all weights to be nonnegative to ensure that visits can never be worth a negative number of points.
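A sketch of how such constraints could be enforced after each update; the assumption that the first entries of the weight vector are the recency-bucket weights, ordered from newest to oldest, is ours for illustration.
\begin{verbatim}
import numpy as np

def apply_safeguards(weights, n_time_buckets):
    """Project the model back into the feasible region after an update."""
    weights = np.maximum(weights, 0.0)        # no negative visit values
    for i in range(1, n_time_buckets):
        # an older bucket may not be worth more than the next newer one
        weights[i] = min(weights[i], weights[i - 1])
    return weights
\end{verbatim}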
The exact safeguards we implemented for our system are highly domain-specific.
However, we expect that other domains should have similar constraints.
Bounding update size is a good idea, even if it is harder to interpret what exactly the weights represent.
Starting off from a decent initial model also helps to ensure that users do not interact with a bad model during the first few iterations of training.
\section{Experiment}
\label{sec:study}
\subsection{Simulations}
\label{subsec:simulations}
To test whether our ideas for optimizing the frecency weights could work, and to iterate on them quickly, we created a simulation before developing the actual system for Firefox~\cite{impl-simulations}.
This made it possible to simulate an entire Federated Learning\xspace optimization process in little time, without having to wait for code to be deployed to actual clients.
Much of the code that we wrote for the simulation ended up being reused in Firefox~\cite{impl-client}.
The only major part that differs between the simulation and the actual implementation is what data is used for training.
Since no data should be collected, the simulation could not be based on real data from users.
Instead, a mock dataset was created.
This dataset was designed to resemble the data we expected users to generate.
Since there was no way of knowing how the data is actually distributed, several assumptions had to be made.
We modeled how recently websites were visited using a made-up distribution that was skewed towards more recent visits.
To decide on how many pages were bookmarked, we used existing statistics.
The frequency with which websites are visited is modeled using an exponential distribution with $\lambda = 7$.
This distribution mirrors the assumption that there are many websites that are only visited few times and few websites that are visited a lot.
For simplicity's sake, recency, type, and frequency were assumed to be independent of each other.
To model what suggestion a user clicks on, the existing frecency algorithm is used to compute a score for each suggestion.
Random noise, sampled from a normal distribution with $\mu = 0, \sigma^2 = 30$, is then added to the score.
The dataset assumes that the suggestion with the highest score is selected.
By using the existing frecency algorithm with some noise, it is easy to see whether the simulation finds useful weights, as they should be similar to the ones of the current algorithm.
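A sketch of how the simulated user's choice could be generated under these assumptions, where \texttt{candidate\_scores} are frecency values computed with the existing algorithm; interpreting $\lambda$ as the rate parameter of the exponential distribution is our assumption.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def simulated_selection(candidate_scores, noise_var=30.0):
    """Add zero-mean Gaussian noise (variance 30) to each frecency
    score and return the index of the suggestion with the highest
    noisy score, i.e. the suggestion the simulated user clicks on."""
    noisy = np.asarray(candidate_scores) + rng.normal(
        0.0, np.sqrt(noise_var), size=len(candidate_scores))
    return int(np.argmax(noisy))

# Visit frequencies skewed towards rarely visited pages
# (exponential with rate lambda = 7, i.e. scale 1/7).
visit_frequencies = rng.exponential(scale=1.0 / 7.0, size=1000)
\end{verbatim}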
It is worth noting that this dataset is likely to differ substantially from the data generated by real users.
It is difficult to describe exactly what the data looks like without having seen any of it.
Still, creating the dataset allowed for quick prototyping, which made it much easier to make many design decisions.
We implemented variations of the optimization algorithms described in the previous section, and tested their properties using our simulations.
When the Firefox client-side and server-side implementations were ready, we used data from the simulation to test our final system end-to-end.
\subsection{Study Design}
After developing the Firefox client and server components of our system, 25\% of Firefox Beta users were enrolled in the experiment, which corresponds to roughly 500,000 daily active users.
Since it takes some time to roll out updates, only some of these users had been enrolled in the study before the optimization process completed.
Users were partitioned into three groups:
\begin{enumerate}
\item \emph{treatment}: The full study was shipped to these users. They compute updates, send them to the server, and start using a new model every 30 minutes.
\item \emph{control}: This group is solely observational. No behavior in the URL bar actually changes. We are just collecting statistics for comparison to treatment.
\item \emph{control-no-decay}: Firefox decays frecency scores over time. Our treatment group loses this effect because we are recomputing scores every 30 minutes. To check if the decay is actually useful, this group has no decay effect but uses the same original algorithm otherwise.
\end{enumerate}
60\% of users were assigned to the treatment group and 20\% to each of the two control groups.
To decide on these numbers, we performed a power analysis and used results from our simulation.
We carefully chose the number of people that should participate in the experiment for two reasons.
For one, if our study enrolled most Firefox users, it would block other studies that want to experiment with changes to the URL bar.
Another reason is that the experiment might break parts of Firefox.
If this happens, it should affect as few people as possible.
Concretely, our analysis consisted of two parts:
\begin{enumerate}
\item How many users do we need to have enough data to train a model?
(relevant for treatment)
\item How many users do we need to show certain effects confidently?
(relevant for treatment and control)
\end{enumerate}
The first part was answered using simulations.
By using an adapted form of the simulation we used to decide on optimization hyperparameters, we could get some idea of how many users we would need.
Existing Telemetry data was helpful for this, as it allowed us to get some idea of how many history searches people perform every day~\cite{telemetry-selected-rank}.
The second part of the power analysis was tackled using classical hypothesis testing, based on the \mbox{\emph{Mann-Whitney-U test}}.
This analysis concluded that the control groups required fewer users than the treatment group.
To be able to evaluate how well our new model worked after the optimization converged, we also collected two metrics:
\begin{enumerate}
\item The number of characters typed before selecting a result: Users should have to type few characters to find what they are looking for.
\item The rank of the suggestion that was selected: The item that is selected should be as far on top as possible.
\end{enumerate}
Clients shared these two metrics with the server by sending them jointly with gradients.
\subsection{Analyzing the Results}
Over the course of the experiment, 723,581 users were enrolled in the study.
The model was fetched 58,399,063 times from the server.
360,518 users participated in sending updates and evaluation data to the server, accounting for a total of 5,748,814 messages.
The optimization phase of the experiment consisted of 137 iterations of 30 minutes each, or just under three days.
In this phase, 186,315 users sent pings to help in the training process.
A separate phase of purely evaluating the model was started afterwards and took a total of 10 days.
In this phase, 306,200 users sent 3,674,063 pings, which included statistics detailing how well the model worked for them.
Since all these users were assigned to treatment or control groups, the new model can be compared well to the old one that was used by the control groups.
Some users were enrolled but did not help with optimization or evaluation because they performed no history or bookmark searches.
During the optimization process, we monitored the loss of the model to check how well the training was going.
Figure~\ref{fig:loss} shows how the loss changed over time, across all three study variations.
There is some noise in this plot, since each iteration only had a very limited number of users.
However, it can still be seen that the loss of the treatment group continues to decline over the course of the experiment.
This shows that the optimization process generally worked.
After 40 iterations, less than one day of optimization, the loss of the treatment group is clearly below the loss of the control groups.
\begin{figure}
\centering
\includegraphics[width=250px]{loss-smooth5.png}
\caption{Rolling average of reported validation loss over the last 5 iterations}
\label{fig:loss}
\end{figure}
After the optimization process ended, an evaluation phase began to determine how well the new model works.
This is equivalent to the testing phase in machine learning.
The model is evaluated on new data that was not used for training or validation.
Table~\ref{fig:results} shows these results.
On average, users in the treatment group type about half a character less to find what they are looking for.
This is a strong improvement over both control groups.
However, users in the treatment group also choose suggestions that were ranked slightly worse.
Hypothesis testing determined that the changes in the treatment group were highly significant, with p-values being below $10^{-75}$.
Because we compared results of several experiment branches, we used a Bonferroni-corrected significance level of $\alpha = 0.05 / 6$.
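To make the testing procedure concrete, the following sketch runs the same kind of comparisons on synthetic stand-in data; in the actual analysis the arrays contain the per-user values reported during the evaluation phase.
\begin{verbatim}
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
n = 5000  # hypothetical number of users per group

# Synthetic per-user metrics, loosely centered on the observed means.
groups = {
    "treatment":
        {"chars": rng.normal(3.67, 1.0, n), "rank": rng.normal(0.374, 0.2, n)},
    "control":
        {"chars": rng.normal(4.26, 1.0, n), "rank": rng.normal(0.354, 0.2, n)},
    "control-no-decay":
        {"chars": rng.normal(4.24, 1.0, n), "rank": rng.normal(0.358, 0.2, n)},
}

alpha = 0.05 / 6  # Bonferroni correction over the comparisons made in the study

for metric in ("chars", "rank"):
    for other in ("control", "control-no-decay"):
        _, p = mannwhitneyu(groups["treatment"][metric],
                            groups[other][metric], alternative="two-sided")
        print(f"treatment vs {other} on {metric}: "
              f"p = {p:.2e}, significant = {p < alpha}")
\end{verbatim}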
\begin{figure}
\centering
\begin{tabular}{c||c|c}
& \textbf{mean characters typed} & \textbf{mean rank chosen} \\
\hline
treatment & 3.6747 & 0.37435 \\
control & 4.26239 & 0.35350 \\
control-no-decay & 4.24125 & 0.35771
\end{tabular}
\caption{Results of the evaluation phase}
\label{fig:results}
\end{figure}
From a user perspective, it is not immediately clear whether these changes improve the user experience.
While users now have to type noticeably fewer characters, they also more often select suggestions that are not at the top of the list.
One potential explanation for this could be that the items they were looking for are displayed earlier in the suggestion list.
Since they spent less time typing, they might be willing to select an item that is not the top ranked one.
In the future, we plan to evaluate this hypothesis by collecting data about how the rank of the selected item changed while the user was typing.
Based purely on the two metrics we collected, it is difficult to determine whether this change is an improvement, since it is not clear how the two metrics should be weighted against each other.
Deciding which metric is more important would instead require surveying users.
But even if users are not satisfied with the new model, the Federated Learning system is still highly useful.
Since the optimization process works well, one would only need to find a loss function that correlates more closely with what users want.
\subsection{Analyzing the Optimization System}
To learn from this experiment for further Federated Learning studies, we additionally analyzed all the update data later on.
In retrospect, the Federated Learning protocol we used was too simple.
Figure~\ref{fig:pings} shows how Firefox Beta activity in our study varies over time.
Since Firefox Beta usage is biased towards Asian countries, we receive more pings during daytime in Asia.
The protocol could be improved by dynamically determining the iteration length depending on how many updates were sent to the server so far.
This way, there would be no iterations with very few updates.
Furthermore, there could be more iterations during periods with many active users, allowing for a faster optimization process.
\begin{figure}
\centering
\includegraphics[width=250px]{pings-over-time.png}
\caption{The number of pings sent by clients over time}
\label{fig:pings}
\end{figure}
A more sophisticated protocol could adapt the iteration length depending on how stable the current update estimate is.
We noticed that the later iterations of the optimization process require many fewer reports to compute a good estimate.
Figure~\ref{fig:update-quality} compares the update we actually used to updates we would get by randomly sampling 2,000 of these update reports.
The $L_1$-distance is used to perform this comparison.
Because of the randomness, the mean and standard deviation after 50 such simulations per iteration are reported.
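The following sketch shows how such a comparison can be computed; the stored update reports are replaced here by synthetic vectors, and we assume for simplicity that an iteration's update is the average of the received reports.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def subsample_distance(reports, sample_size=2000, repeats=50):
    """L1-distance between the full update and updates from random subsamples."""
    full_update = reports.mean(axis=0)
    distances = []
    for _ in range(repeats):
        idx = rng.choice(len(reports), size=sample_size, replace=False)
        distances.append(np.abs(reports[idx].mean(axis=0) - full_update).sum())
    return np.mean(distances), np.std(distances)

# Synthetic stand-in for one iteration: 20,000 reports for 20 model weights.
reports = rng.normal(0.0, 1.0, size=(20000, 20))
mean_d, std_d = subsample_distance(reports)
print(f"mean L1-distance {mean_d:.3f}, std {std_d:.3f}")
\end{verbatim}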
\begin{figure}
\centering
\includegraphics[width=250px]{update-quality.png}
\caption{Mean and standard deviation of difference in update quality when using 2,000 updates}
\label{fig:update-quality}
\end{figure}
It can be observed that the estimates become much more stable after iteration 100.
While the $L_1$-distance of two updates can be large without affecting the Rprop optimizer much, this is still an interesting result.
We observed similar results for the loss estimates.
The exact differences might be specific to our problem, but this observation can still be used to generally improve the system:
While updates are coming in, the server could check the variance of updates and start a new iteration earlier when it observes little variance.
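A minimal sketch of such a server-side check is given below; the aggregation by averaging, the minimum number of reports and the stopping threshold are all illustrative choices rather than parts of the deployed system.
\begin{verbatim}
import numpy as np

def should_close_iteration(updates, min_reports=500, rel_threshold=0.05):
    """Decide whether the running update estimate is stable enough.

    `updates` is the list of update vectors received so far in the current
    iteration.  The iteration is closed early once the standard error of the
    mean update is small relative to its magnitude.
    """
    if len(updates) < min_reports:
        return False
    stacked = np.stack(updates)
    mean = stacked.mean(axis=0)
    stderr = stacked.std(axis=0) / np.sqrt(len(updates))
    return stderr.sum() < rel_threshold * (np.abs(mean).sum() + 1e-12)
\end{verbatim}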
\section{Conclusion}
\label{sec:conclusion}
In this paper, we introduced a Federated Learning\xspace system built for use in Firefox.
Our system can optimize parts of Firefox purely based on user interactions.
The system is effective in optimization and preserves user privacy, as no personal data is ever shared with a server.
It is also widely applicable since it can replace or tune any heuristics, even if they can only be queried as black boxes.
We used the system to optimize the ranking of suggestions in the Firefox URL bar, which led to users typing around half a character less before selecting an item.
Future work is multi-fold.
For one, the existing protocol can be improved to make better use of available data by adapting iteration length dynamically.
Furthermore, many other parts of Firefox can now be optimized using Federated Learning\xspace.
The work here mostly lies in posing problems as learning tasks and in designing models.
Lastly, differentially-private mechanisms and secure multi-party aggregation could be added to our system in order to yield stronger privacy guarantees.
\begin{acks}
The authors would like to extend their thanks to several people at Mozilla:
Drew Willcoxon and Rob Helmer helped out with the Firefox client-side parts of the project.
Jeff Klukas and Katie Parlante provided additional support during the project.
Outside of Mozilla, we would like to thank Lukas Zilka and Ralf Hinze for giving valuable feedback while writing this paper.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}\label{S1}
The \emph{Wright--Fisher diffusion model with mutation and selection} is a classical model in population genetics. It describes the evolution forward in time of the type composition of an infinite haploid population with two types, which is subject to mutation and selection. Fit individuals reproduce at rate $1+\sigma$, $\sigma\geq 0$, whereas unfit individuals reproduce at rate $1$. In addition, individuals mutate at rate $\theta$, receiving the fit type with probability $\nu_0\in[0,1]$ and the unfit type with probability $\nu_1\in[0,1]$, with $\nu_0+\nu_1=1$. In this model, the proportion of fit individuals evolves forward in time according to the following SDE
\begin{equation}\label{WFD}
{\rm{d}} X(t)=\left[\theta\nu_0(1-X(t))-\theta\nu_1 X(t) + \sigma X(t)(1-X(t))\right]\,{\rm{d}} t+\sqrt{2 X(t)(1-X(t))}\,{\rm{d}} B(t),\quad t\geq 0,
\end{equation}
where $(B(t))_{t\geq 0}$ is a standard Brownian motion. The solution of \eqref{WFD} is known to be the \emph{diffusion approximation} of properly normalized continuous-time Moran models or discrete-time Wright--Fisher models. Its genealogical counterpart is given by the \emph{ancestral selection graph} (ASG), which traces back potential ancestors of an untyped sample of the population at present. It was introduced by Krone and Neuhauser in \citep{KroNe97,NeKro97}, and later extended to models evolving under general neutral reproduction mechanisms (\citep{EGT10,BLW16}), or including general forms of frequency dependent selection (see \citep{Ne99,BCH18,GS18,CHS19}).
\smallskip
In the absence of mutations, it is well-known that $1-X$ is in moment duality with the process $(R(t))_{t\geq 0}$ that keeps track of the evolution of the number of lineages present in the ASG, i.e.
\begin{equation}\label{mdintro}
\Eb_x\left[(1-X(t))^n\right]=\Eb_n\left[(1-x)^{R(t)}\right],\qquad n\in\Nb, \, x\in[0,1],\, t\geq 0.
\end{equation}
This relation allows us, in particular, to express the absorption probability of $X$ at $0$ in terms of the stationary distribution of $R$. In the presence of mutations, two variants of the ASG permit one to dynamically resolve mutation events and encode relevant information of the model: \textit{the killed ASG} and \textit{the pruned lookdown ASG}. Both processes coincide with the ASG in the absence of mutations. The killed ASG is reminiscent of the coalescent with killing \citep[Chap. 1.3.1]{D08} and it was introduced in \citep{BW18} for the Wright--Fisher diffusion with selection and mutation. The killed ASG helps to determine whether or not all the individuals in the initial sample are unfit. Moreover, its line-counting process extends the moment duality \eqref{mdintro} to this setting, see \cite[Prop. 1]{BW18} (see also \cite[Prop. 2.2]{CM19} for an extension to $\Lambda$-Wright--Fisher processes with mutation and selection). This duality leads to a characterization of the moments of the stationary distribution of $X$. The pruned lookdown ASG in turn was introduced in \citep{LKBW15} for the Wright--Fisher diffusion with selection and mutation (see also \citep{BLW16, FC17} for extensions), and it helps to determine the type distribution of the common ancestor of the population.
\smallskip
In the previous models the fitness of an individual is completely determined by its type. However, in many biological situations the strength of selection fluctuates in time, for example due to environmental changes. The influence of random fluctuations in selection intensities on the growth of populations has been the object of extensive research in the past (see e.g.\cite{G72,KL74,KL74b,KL75,Bu87, BG02, SJV10}) and it is currently experiencing renewed interest (see e.g. \cite{BCM19,CSW19, BEK19, ChK19, GJP18, GPS19}). In this paper, we consider extensions of the Moran model and the Wright--Fisher diffusion with mutation and selection covering the special scenario where the selective advantage of fit individuals is accentuated by exceptional environmental conditions (e.g. extreme temperatures, precipitation, humidity variation, abundance of resources, etc.).
\smallskip
In the classical two-type Moran model with mutation and selection, each individual, independently of the others, can either mutate or reproduce at a constant rate. Fit individuals are characterised by a higher reproduction rate. At a reproduction event, the parent produces a single offspring, which in turn replaces another individual, keeping the population size constant. We consider now the situation where the population evolves in a varying environment. We model the latter via a countable collection of points $(t_i,p_i)_{i\in I}$ in $[0,\infty)\times(0,1)$, satisfying that $\sum_{i: t_i \leq t} p_i<\infty$ for all $t\geq 0$. Each $t_i$ represents a time at which an exceptional external event occurs; the value $p_i$ models the strength of the event occurring at time $t_i$, and we refer to it as a \textit{peak}. The effect of the peak $p_i$ is that, at time $t_i$, each fit individual reproduces with probability $p_i$, independently of the others. Each newborn replaces a different individual of the population, thus keeping the population size constant. The summability condition on the peaks ensures that the number of reproductions in any compact interval of time is almost surely finite.
\smallskip
A first natural and important question arising in this setting is how sensitive the model is to the effect of the environment. Using coupling techniques, we show that the type-frequency process is continuous with respect to the environment. This result and its proof provide additional insight into the effect of small changes in the environment. Moreover, as a consequence, we can let the environment be random and given by a Poisson point process on $[0,\infty) \times (0,1)$ with intensity measure ${\rm{d}} t \times \mu$, where ${\rm{d}} t$ stands for the Lebesgue measure and $\mu$ is a measure on $(0,1)$ satisfying $\int x \mu({\rm{d}} x) < \infty$. As the population size tends to infinity, we show that, under an appropriate scaling of parameters and time, the fit-type-frequency process converges to the solution of the SDE
\begin{equation}\label{WFSDE}
{\rm{d}} X(t)=\theta\left(\nu_0(1-X(t))-\nu_1 X(t)\right){\rm{d}} t +X(t-)(1-X(t-)){\rm{d}} S(t) +\sqrt{2 X(t)(1-X(t))}{\rm{d}} B(t),\quad t\geq 0,
\end{equation}
where $S(t)\coloneqq \sigma t+J(t)$, $t\geq 0$, and $J$ is a pure-jump subordinator, independent of $B$, which represents the cumulative effect of the environment. We refer to $X$ as the \emph{Wright--Fisher diffusion in random environment}. For environments given by compound Poisson processes, we show that the convergence also holds in the quenched setting. In the case $\theta=0$ (i.e. no mutations), Eq. \eqref{WFSDE} is a particular case of \cite[Eq. (3.3)]{BCM19}, which arises as the (annealed) limit of large populations of a family of discrete-time Wright--Fisher models \cite[Thm. 3.2]{BCM19}.
\smallskip
Our next goal is to generalize the construction of the ASG and its relatives to this setting and to use them in order to characterize the long-term behavior of $X$ and its ancestral type distribution. In order to achieve these tasks, we first describe the ancestral structures associated with the Moran model. Next, we assume that selection, mutation and the environment are weak with respect to the population size, and we let the latter converge to infinity. A simple asymptotic analysis of the rates at which events occur in the ancestral structures gives rise to the desired generalizations of the ASG, the killed ASG and the pruned lookdown ASG. In the annealed case, we establish a moment duality between $1-X$ and the line-counting process of the killed ASG. This duality follows from a stronger relation, a \emph{reinforced moment duality}, involving the two processes and the total increment of the environment. As a corollary, we obtain a characterization of the asymptotic type frequencies. Similarly, we express the ancestral type distribution in terms of the line-counting process of the pruned lookdown ASG. Analogous results are obtained in the more involved quenched setting.
\smallskip
At this point, we would like to draw attention to a parallel development by \citet{CSW19}, which studies the accessibility of the boundaries and the fixation probabilities of a generalization of the SDE \eqref{WFSDE} with $\theta=0$ (no mutations). Their construction of the ASG and the corresponding moment duality work in a more general framework than the one considered in this paper. However, our results involving mutations, and therefore the different prunings of the ASG, the reinforced moment duality, and all the results obtained in the quenched setting, are to the best of our knowledge new for the models under study. Even though both papers make strong use of duality, they use it in different ways. We use properties of the ancestral process to infer properties of the forward process. By contrast, in \cite{CSW19} the duality is used to infer properties of the line-counting process of the ASG; the analysis of the forward process pursues an approach introduced by Griffiths in \cite{G14} that builds on a representation of the infinitesimal generator.
\smallskip
{\bf Outline.} The article is organized as follows. Section~\ref{S2} provides an outline of the paper and contains all our main results. The proofs and more in-depth analyses are deferred to the subsequent sections. Section~\ref{S3} is devoted to the continuity properties of the type-frequency process of a Moran model with respect to the environment. In Section \ref{S4} we prove the existence and pathwise uniqueness of strong solutions of \eqref{WFSDE}. Moreover, we show the quenched and annealed convergence of the type-frequency process of the Moran model towards the solution of \eqref{WFSDE}. Section \ref{S5} is devoted to the proofs of the annealed results related to: (i) the moment duality between the process $X$ and the line-counting process of the killed ASG, (ii) the long term behaviour of the type-frequency process, and (iii) the ancestral type distribution. Section \ref{S6} deals with the quenched versions of the previous results. We end the paper in Section \ref{S7}, with an application to the case where we have quenched information about the environment close to the present, but only annealed information about the environment in the distant past.
\section{Description of the model and main results}\label{S2}
In this section we provide a detailed outline of the paper and state the main results. We start by introducing some notation that will be used throughout the paper.
\smallskip
{\bf Notation.} The positive integers are denoted by $\mathbb N$ and we set $\mathbb N_0\coloneqq \mathbb N\cup\{0\}$. For $m\in \mathbb N$, \[[m]\coloneqq \{1,\ldots,m\},\quad [m]_0\coloneqq [m]\cup\{0\},\quad \text{and} \quad \noo{m}\coloneqq [m]\setminus\{1\}.\]
For $T>0$, we denote by $\Db_T$ (resp. $\Db$) the space of c\`{a}dl\`{a}g functions from $[0,T]$ (resp. $[0,\infty)$) to $\Rb$. For any Borel set $S\subset\Rb$, denote by $\Ms_f(S)$ (resp. $\Ms_1(S)$) the set of finite (resp. probability) measures on $S$. We use $\xrightarrow[]{(d)}$ to denote convergence in distribution of random variables and $\xRightarrow[]{(d)}$ for convergence in distribution of c\`{a}dl\`{a}g process, where the space of c\`{a}dl\`{a}g functions is endowed with the Billingsley metric, which induces the $J_1$-Skorokhod topology and makes the space complete (see Appendix \ref{A1}).
\smallskip
For $n,m,k\in \mathbb N_0$ with $k,m\in[n]_0$, we write $K\sim \hypdist{n}{m}{k}$ if $K$ is a hypergeometric random variable with parameters $n,m$, and $k$, i.e \[\P(K=i)= \frac{\binom{n-m}{k-i} \binom{m}{i}}{\binom{n}{k}},\quad i\in[k\wedge m]_0.\]
Furthermore, for~$x\in[0,1]$ and~$n\in \mathbb N$, we write $B\sim \bindist{n}{x}$ if $B$ is a binomial random variable with parameters $n$ and $x$, i.e. $$\P(B=i)= \binom{n}{i} x^i (1-x)^{n-i},\quad i\in[n]_0.$$
\subsection{Moran models in deterministic pure-jump environments}\label{s21}
\subsubsection{The model}\label{s211}
We consider a haploid population of size $N$ with two types, type $0$ and type $1$, subject to mutation and selection, that evolves in a deterministic environment. The environment is modeled by an at most countable collection $\zeta\coloneqq (t_k, p_k)_{k \in I}$ of points in $(-\infty,\infty) \times (0,1)$ satisfying that $\sum_{k: t_k \in[s, t]} p_k < \infty$, for any $s,t$ with $s\leq t$. We refer to $p_k$ as the peak of the environment at time $t_k$. The individuals in the population undergo the following dynamic. Each individual mutates at rate $\theta_N$ independently of each other; acquiring type $0$ (resp. $1$) with probability $\nu_{0}$ (resp. $\nu_1$), where $\nu_0,\nu_1\in[0,1]$ and $\nu_0+\nu_1=1$. Reproduction occurs independently of mutation, individuals of type $1$ reproduce at rate $1$, whereas individuals of type $0$ reproduce at rate $1+\sigma_N$, $\sigma_N\geq0$. In addition, at time $t_k$, $k\in I$, each type $0$ individual reproduces with probability $p_k$, independently from the others. At any reproduction time: (a) each individual produces at most one offspring, which inherits the parent's type, and (b) if $n$ individuals are born, $n$ individuals are randomly sampled without replacement from the extant population to die, hence keeping the size of the population constant.
\subsubsection{Graphical representation}\label{s212}
In the absence of environmental factors (i.e. $\zeta=\emptyset$), the evolution of the population is commonly described by means of its graphical representation as an interactive particle system. The latter allows to decouple the randomness of the model coming from the initial type configuration and the one coming from mutations and reproductions. Such a construction can be extended to include the effect of the environment as follows. Non-environmental events are as usual encoded via a family of independent Poisson processes
$$\Lambda\coloneqq \{\lambda_{i}^{0},\lambda_{i}^{1},\{\lambda_{i,j}^{\vartriangle},\lambda_{i,j}^{\blacktriangle}\}_{j\in[N]/\{i\}} \}_{i\in[N]},$$
where: (a) for each $i,j\in [N]$ with $i\neq j$, $(\lambda_{i,j}^{\vartriangle}(t))_{t\in\Rb}$ and $(\lambda_{i,j}^{\blacktriangle}(t))_{t\in\Rb}$ are Poisson processes with rates $\sigma_N/N$ and $1/N$, respectively, and (b) for each $i\in [N]$, $(\lambda_{i}^{0}(t))_{t\in\Rb}$ and $(\lambda_{i}^{1}(t))_{t\in\Rb}$ are Poisson processes with rates $\theta_N\nu_0$ and $\theta_N\nu_1$, respectively. We call $\Lambda$ the \textit{basic background}. The environment introduces a new independent source of randomness into the model, that we describe via the collection
$$\Sigma\coloneqq \{(U_i(t))_{i\in[N], t\in\Rb}, (\tau_A(t))_{A\subset[N], t\in\Rb}\}$$
where: (c) $(U_i(t))_{i\in[N], t\in\Rb}$ is a $[N] \times \Rb-$indexed family of i.i.d. random variables with $U_i(t)$ being uniformly distributed on $[0,1]$, and (d) $(\tau_A(t))_{A\subset[N], t\in\Rb}$ is a family of independent random variables with $\tau_A(t)$ being uniformly distributed on the set of injections from $A$ to $[N]$. We call $\Sigma $ the \textit{environmental background}. We assume that basic and environmental backgrounds are independent and we call $(\Lambda,\Sigma)$ the \textit{background}.
\smallskip
In the graphical representation of the Moran model individuals are represented by horizontal lines at the integer levels in $[N]$. Time runs from left to right. Each reproduction event is represented by an arrow with the parent at its tail and the offspring at its head. We distinguish between \textit{neutral reproductions}, depicted as filled head arrows, and \textit{selective reproductions}, depicted as open head arrows. Mutation events are depicted by crosses and circles on the lines. A circle (cross) indicates a mutation to type~$0$ (type~$1$). See Fig. \ref{particlepicture} for a picture. The appearance of all these random elements can be read off from the background as follows. At the arrival times of $\lambda_{i,j}^{\vartriangle}$ (resp. $\lambda_{i,j}^{\blacktriangle}$), we draw open (resp. filled) head arrows from level $i$ to level $j$. At the arrival times of $\lambda_{i}^{0}$ (resp. $\lambda_{i}^{1})$, we draw an open circle (resp. a cross) at level $i$. In order to draw the environmental elements, we define, for each $k\in I$,
$$I_{\zeta}(k)\coloneqq \{i\in[N]:U_i(t_k)\leq p_k\}\quad\textrm{and}\quad n_{\zeta}(k)\coloneqq |I_{\zeta}(k)|$$
and at time $t_k$, we draw for any $i\in I_{\zeta}(k)$ an open head arrow from level $i$ to level $\tau_{I_{\zeta}(k)}(i)$.
\begin{figure}[t!]
\centering
\scalebox{0.6}{\begin{tikzpicture}
\draw[dashed, opacity=0.4] (1,-0.2) --(1,1) (1,2)--(1,3);
\draw[dashed, opacity=0.4] (6.2,-0.2) --(6.2,0) (6.2,1) --(6.2,2);
\node [right] at (-0.2,-0.5) {$0$};
\node [right] at (0.8,-0.5) {$t_0$};
\node [right] at (6,-0.5) {$t_1$};
\node [right] at (8.3,-0.5) {$T$};
\draw[-{triangle 45[scale=5]},thick] (4.5,2) -- (4.5,1) node[text=black, pos=.6, xshift=7pt]{};
\draw[thick] (0,1) -- (8.5,1);
\draw[thick] (8.5,2) -- (0,2);
\draw[thick] (8.5,3) -- (0,3);
\draw[thick] (0,4) -- (8.5,4);
\draw[thick] (0,0) -- (8.5,0);
\draw[-{triangle 45[scale=5]},thick] (.4,1) -- (.4,0);
\draw[-{open triangle 45[scale=5]},thick] (7.6,1) -- (7.6,4);
\draw[-{open triangle 45[scale=5]},thick] (1.7,2) -- (1.7,4);
\draw[-{open triangle 45[scale=5]},thick] (1,3) -- (1,4);
\draw[-{open triangle 45[scale=5]},thick] (1,1) -- (1,2);
\draw[-{open triangle 45[scale=5]},thick] (6.2,4) -- (6.2,2);
\draw[-{open triangle 45[scale=5]},thick] (6.2,0) -- (6.2,1);
\draw[-{open triangle 45[scale=5]},thick] (2.5,0) -- (2.5,1);
\draw[-{open triangle 45[scale=5]},thick] (7,2) -- (7,0);
\draw[-{triangle 45[scale=5]},thick] (3,3) -- (3,1);
\draw[-{angle 60[scale=5]}] (3.25,-0.6) -- (5.25,-0.6) node[text=black, pos=.5, yshift=6pt]{};
\node[ultra thick] at (3.5,4) {$\bigtimes$} ;
\node[ultra thick] at (6.6,4) {$\bigtimes$} ;
\draw (2.1,2) circle (1.5mm) [fill=white!100];
\draw (4.1,0) circle (1.5mm) [fill=white!100];
\node [right] at (-0.5,0) {$1$};
\node [right] at (-0.5,1) {$1$};
\node [right] at (-0.5,2) {$1$};
\node [right] at (-0.5,3) {$0$};
\node [right] at (-0.5,4) {$1$};
\node [right] at (8.5,0) {$0$};
\node [right] at (8.5,1) {$0$};
\node [right] at (8.5,2) {$0$};
\node [right] at (8.5,3) {$0$};
\node [right] at (8.5,4) {$0$};
\end{tikzpicture} }
\caption{A realization of the Moran interacting particle system. Time runs forward from left to right. The environment has peaks at times $t_0$ and $t_1$.}
\label{particlepicture}
\end{figure}
\begin{remark}\label{finitenumber}
Note that, for any $s<t$, the number of basic graphical elements present in $[s,t]$ is almost surely finite. Moreover, since
$$\Eb\left[\sum_{k: t_k\in[s,t]}n_{\zeta}(k)\right]=N\sum_{k: t_k\in[s,t]} p_k<\infty,$$
we have $\sum_{k:t_k\in[s,t]}n_{\zeta}(k)<\infty$ almost surely. Hence, the number of arrows in $[s,t]$ due to peaks of the environment is almost surely finite.
\end{remark}
Given a realization of the particle system and an assignment of types to the lines at time $s$, we propagate types forward in time respecting mutations and reproduction events (see Fig \ref{particlepicture}). By this we mean that, as long as we move from left to right in the graphical picture, the type of a given line remains unchanged until the line encounters a cross, a circle or an arrowhead. If the line encounters a cross (resp. a circle) at time $t$, it gets type $1$ (resp. type $0$) from time $t$ until the next cross, circle or arrowhead. If at time $t$ the line encounters a neutral arrowhead, the line gets at time $t$ the type of the line at the tail of the arrow. The same happens if the line encounters a selective arrowhead and the line at its tail is fit, otherwise the selective arrow is ignored.
\begin{remark}
One of the advantages of using such a graphical representation is that one can couple two Moran models on the basis of the same basic background but evolving in different environments, or two Moran models with the same background but with different initial type compositions.
\end{remark}
\subsubsection{Type frequency process}\label{s213}
In what follows, we assume that we know the type composition at time $s=0$ and that there is no peak at that time. As mentioned in the introduction, it is convenient to work with the cumulative effect of the peaks rather than with the peaks themselves. Thus, we define the function $\omega:[0,\infty)\to\Rb$ via
$$\omega(t)\coloneqq \sum_{i:t_i\in[0,t]}p_i,\quad t\geq 0.$$
The function $\omega$ is c\`{a}dl\`{a}g, non-decreasing and satisfies
\begin{itemize}
\item[(i)] for all $t\in[0,\infty)$, $\Delta \omega(t)\coloneqq \omega(t)-\omega(t-)\in[0,1)$ and $\sum_{u\in[0,t]}\Delta \omega(u)<\infty$,
\item[(ii)] $\omega(0)=0$ and $\omega$ is pure-jump, i.e. for all $t\in[0,\infty)$, $\omega(t)=\sum_{u\in[0,t]}\Delta \omega(u)$.
\end{itemize}
We denote by $\Db^\star$ the set of all c\`{a}dl\`{a}g, non-decreasing functions satisfying (i) and (ii). Note that the environment in $[0,\infty)$ is given by the collection of points $\{(t,\Delta \omega(t)):\, \Delta\omega(t)>0)\}$. Hence, the function $\omega$ completely determines the environment in $[0,\infty)$. For this reason, we often abuse of the notation and refer to $\omega$ as the environment. In addition, an environment $\omega\in\Db^\star$ is said to be \textit{simple} if $\omega$ has only a finite number of jumps in any compact time interval. We denote by $\textbf{0}$ the null environment.
\smallskip
For any $t\geq 0$, we denote by $X_N(\omega,t)$ the proportion of fit individuals at time $t$ in the population. We denote by $\Pb_N^\omega$ the law of the process $X_N(\omega,\cdot)$, and we refer to it as the quenched probability measure.
\smallskip
For $\omega\equiv \textbf{0}$, $X_N(\textbf{0},\cdot)$ is the continuous-time Markov chain on $E_N\coloneqq \{k/N:k\in[N]_0\}$ with generator
\begin{align*}
\As_N^{0} f\left(x\right)=& \left(N(1+\sigma_N)x(1-x)+N\theta_N\nu_0(1-x)\right)\left(f\left(x+\frac{1}{N}\right)- f\left(x\right)\right)\\
&+ \left(Nx(1-x)+N\theta_N\nu_1 x\right)\left(f\left(x-\frac{1}{N}\right)- f\left(x\right)\right),\quad x\in E_N.
\end{align*}
For a simple environment $\omega$ with jumping times $t_1<\cdots<t_k$ in $[0,T]$, the evolution of $X_N(\omega,\cdot)$ is as follows. If $X_N(\omega,0)=x_0\in E_N$, then $X_N(\omega,\cdot)$ evolves in $[0,t_1)$ as $X_N(\textbf{0},\cdot)$ started at $x_0$. Similarly, in the intervals $[t_i,t_{i+1})$, $X_N(\omega,\cdot)$ evolves as $X_N(\textbf{0},\cdot)$ started at $X_N(\omega,t_{i})$. Moreover, if $X_N(\omega,t_{i}-)=x$, then $X_N(\omega,t_{i})=x+{H_{i}(x,B_i(x))}/{N}$, where the random variables $H_{i}(x,n)\sim \textrm{Hyp}(N,N(1-x),n)$, $n\in[N]_0$, and $B_i(x)\sim\bindist{Nx}{\Delta \omega(t_i)}$ are independent.
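For a simple environment this description translates directly into a simulation. The following Python sketch is purely illustrative (the parameter values are arbitrary and unrelated to the scalings used later) and is not used in any of our proofs.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

def simulate_moran(N, x0, sigma_N, theta_N, nu0, peaks, T):
    """Fit-type frequency X_N(omega, .) at time T for a simple environment.

    `peaks` is a list of pairs (t_i, p_i) with 0 < t_i <= T.
    """
    nu1 = 1.0 - nu0
    x, t = x0, 0.0
    for t_next, p in sorted(peaks) + [(T, 0.0)]:
        # Between peaks: Gillespie dynamics with the rates of A_N^0.
        while True:
            up = N * (1 + sigma_N) * x * (1 - x) + N * theta_N * nu0 * (1 - x)
            down = N * x * (1 - x) + N * theta_N * nu1 * x
            if up + down == 0.0:
                break
            t += rng.exponential(1.0 / (up + down))
            if t >= t_next:
                break
            x += 1.0 / N if rng.random() < up / (up + down) else -1.0 / N
        t = t_next
        if p > 0.0:
            # At a peak, each fit individual reproduces with probability p;
            # the offspring replace uniformly chosen individuals.
            births = rng.binomial(round(N * x), p)
            if births >= 1:
                unfit_replaced = rng.hypergeometric(round(N * (1 - x)),
                                                    round(N * x), births)
                x += unfit_replaced / N
    return x

print(simulate_moran(N=100, x0=0.3, sigma_N=0.02, theta_N=0.01, nu0=0.5,
                     peaks=[(1.0, 0.2), (2.5, 0.4)], T=4.0))
\end{verbatim}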
\smallskip
We now describe the dynamics of $X_N(\omega,\cdot)$ for a general environment $\omega$. From Remark \ref{finitenumber}, we know that the number of environmental reproductions is almost surely finite in any finite time interval. We can thus define $(S_i)_{i\in\Nb}$ as the increasing sequence of times at which environmental reproductions take place, and set $S_0\coloneqq 0$. By construction, $(S_i)_{i\in\Nb}$ is Markovian and its transition probabilities are given by
$$\mathbb{P}_N^\omega(S_{i+1} > t \mid S_i = s ) = \prod_{u \in (s, t]} (1- \Delta \omega(u))^N,\quad i\in\Nb_0, 0\leq s\leq t.$$ If $X_N(\omega,0)=x_0\in E_N$, then $X_N(\omega,\cdot)$ evolves in $[0,S_1)$ as $X_N(\textbf{0},\cdot)$ started at $x_0$.~In the intervals $[S_i,S_{i+1})$, $X_N(\omega,\cdot)$ evolves as $X_N(\textbf{0},\cdot)$ started at $X_N(\omega,S_{i})$. Moreover, if $X_N(\omega,S_{i}-)=x$, then $X_N(\omega,S_{i})=x+{H_{i}(x,\tilde B_i(x))}/{N}$, where the random variables $H_{i}(x,n)\sim \textrm{Hyp}(N,N(1-x),n)$, $n\in[N]_0$, and $\tilde B_i(x)$ are independent, and $\tilde B_i(x)$ has the distribution of a binomial random variable with parameters $Nx$ and $\Delta \omega(S_i)$ conditioned to be positive.
\smallskip
We end this section with our first main result, which provides the continuity of the type-frequency process with respect to the environment. To this end, let us restrict ourselves to realizations of the model in the time interval $[0,T]$. Note that the restriction of the environment to $[0,T]$ can be identified with an element of
$$\Db_T^\star\coloneqq \{\omega\in\Db_T: \omega(0)=0,\, \Delta \omega(t)\in[0,1)\textrm{ for all $t\in[0,T]$, $\omega$ is non-decreasing and pure-jump}\}.$$
Moreover, we equip $\Db_T^\star$ with the metric $d_T^\star$ defined in Appendix \ref{A1}.
\begin{theorem}[Continuity]\label{cont}
Let $\omega\in\Db_T^\star$ and let $\{\omega_k\}_{k\in\Nb}\subset\Db_T^\star$ be such that $d_T^\star(\omega_k,\omega)\to 0$ as $k\to\infty$. If $X_N(\omega_k,0)=X_N(\omega,0)$ for all $k\in\Nb$, then
$$(X_N(\omega_k, t))_{t\in[0,T]}\xRightarrow[k\to \infty]{(d)}(X_N(\omega, t))_{t\in[0,T]}.$$
\end{theorem}
\subsection{Moran models in an environment driven by a subordinator} \label{s22}
In contrast to Section \ref{s21}, we consider here a random environment given by a Poisson point process $(t_i, p_i)_{i \in I}$ on $[0,\infty) \times (0,1)$ with intensity measure ${\rm{d}} t \times \mu$, where ${\rm{d}} t$ stands for the Lebesgue measure and $\mu$ is a measure on $(0,1)$ satisfying $\int x \mu({\rm{d}} x) < \infty$. The integrability condition on $\mu$ implies that, for every $t\geq 0$, $J(t) \coloneqq \sum_{i : t_i \leq t} p_i<\infty$ almost surely. Moreover, $J\in\Db^\star$ almost surely.
Note that, by the L\'evy--It\^o decomposition, $(J(t))_{t\geq 0}$ is a pure-jump subordinator with L\'evy measure $\mu$. If the measure $\mu$ is finite, then $J$ is a compound Poisson process, so that the environment $J$ is almost surely simple.
\smallskip
Theorem \ref{cont} together with the graphical representation of the Moran model given in Section \ref{s212} provide a natural way to construct such a model. Indeed, on the basis of a common background $(\Lambda,\Sigma)$, we can construct simultaneously Moran models for any $\omega\in\Db^\star$. Next, we consider a pure-jump subordinator $(J(t))_{t\geq 0}$ independent of the background, with L\'evy measure $\mu$ on $(0,1)$ satisfying $\int x \mu({\rm{d}} x) < \infty$. Theorem \ref{cont} assures then that the process $(X_N(J,t))_{t\geq 0}$ is well defined. Its law $\Pb_N$ is called the annealed probability measure. The formal relation between the annealed and quenched measures is given by
\begin{equation}\label{avsqN}
\Pb_N(\cdot)=\int \Pb_N^\omega(\cdot) P({\rm{d}} \omega),
\end{equation}
where $P$ denotes the law of $J$. By construction the process $(X_N(J,t))_{t\geq 0}$ is a continuous-time Markov chain on $E_N$ and its infinitesimal generator $\As_N$ is given by
\begin{align*}
\As_N &f\left(x\right)\coloneqq \As_N^0 f(x) + \int\limits_{(0,1)}\left(\Eb\left[f\left(x+\frac{\Hs(N,N(1-x),\beta(Nx,u))}{N}\right)\right]-f(x)\right)\mu({\rm{d}} u),\quad x\in E_N,
\end{align*}
where $\beta(Nx,u)\sim \textrm{Bin}(Nx,u)$, and for any $i\in[Nx]_0$, $\Hs(N,N(1-x),i)\sim\textrm{Hyp}(N,N(1-x),i)$ are independent.
\smallskip
The dynamics of the graphical representation is as follows: For each $i,j\in [N]$ with $i\neq j$, open (resp. filled) arrowheads from level $i$ to level $j$ appear at rate $\sigma_N/N$ (resp. $1/N$). For each $i\in [N]$, open circles (resp. crosses) appear at level $i$ at rate $\theta_N\nu_0$ (resp. $\theta_N\nu_1$). For each $k \in [N]$, every group of $k$ lines is subject to a simultaneous reproduction at rate $$\sigma_{N,k}\coloneqq \int_{(0,1)}y^k (1-y)^{N-k}\mu({\rm{d}} y),$$ where the set of $k$ descendants is chosen uniformly at random among the $N$ individuals, and $k$ open arrowheads are drawn uniformly at random from the $k$ parents to the $k$ descendants.
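For a concrete choice of $\mu$ these rates can be evaluated numerically, e.g. as in the following sketch; the density $2(1-y)$ is only an example and is not assumed anywhere in the paper.
\begin{verbatim}
from scipy.integrate import quad

def sigma_rate(N, k, mu_density):
    """sigma_{N,k} = int_0^1 y^k (1-y)^(N-k) mu(dy) for mu with a density."""
    value, _ = quad(lambda y: y**k * (1 - y)**(N - k) * mu_density(y), 0.0, 1.0)
    return value

mu_density = lambda y: 2.0 * (1.0 - y)   # example: mu(dy) = 2(1-y) dy
for k in (1, 2, 5):
    print(k, sigma_rate(N=20, k=k, mu_density=mu_density))
\end{verbatim}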
\subsection{The Wright--Fisher diffusion in random environment}\label{s23}
In this section we are interested in the Wright--Fisher diffusion in random environment described in the introduction. The next result establishes the well-posedness of the SDE \eqref{WFSDE}.
\begin{proposition}[Existence and uniqueness]\label{eandu}
Let $\sigma,\theta\geq 0$, $\nu_0,\nu_1\in[0,1]$ with $\nu_0+\nu_1=1$. Let $J$ be a pure-jump subordinator with L\'{e}vy measure $\mu$ supported in $(0,1)$ and let $B$ be a standard Brownian motion independent of $J$. Then, for any $x_0\in[0,1]$, there is a pathwise unique strong solution $(X(t))_{t\geq 0}$ to the SDE \eqref{WFSDE} such that $X(0)=x_0$. Moreover, $X(t)\in[0,1]$ for all $t\geq 0$.
\end{proposition}
\begin{remark}
The Wright--Fisher diffusion defined via the SDE \eqref{WFSDE} with $\theta=0$ corresponds to \cite[Eq. (10)]{CSW19} with $K_y$, $y\in(0,1)$, being a random variable that takes the value $1$ with probability $1-y$ and the value $2$ with probability $y$.
\end{remark}
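To illustrate the dynamics described by \eqref{WFSDE}, the following sketch approximates one path by an Euler--Maruyama scheme in the case where $J$ is a compound Poisson subordinator; the time step, the clipping to $[0,1]$ and the uniform jump distribution are illustrative choices and play no role in our analysis.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)

def simulate_wf(x0, sigma, theta, nu0, T, dt=1e-4, jump_rate=1.0):
    """One approximate path of the Wright--Fisher SDE in random environment.

    The environment J is a compound Poisson process with rate `jump_rate`
    and Uniform(0, 1/2) jump sizes (an arbitrary example of a finite mu).
    """
    nu1 = 1.0 - nu0
    n_jumps = rng.poisson(jump_rate * T)
    jump_times = np.sort(rng.uniform(0.0, T, size=n_jumps))
    jump_sizes = rng.uniform(0.0, 0.5, size=n_jumps)
    x, t, j = x0, 0.0, 0
    for _ in range(int(T / dt)):
        drift = theta * (nu0 * (1 - x) - nu1 * x) + sigma * x * (1 - x)
        x += drift * dt + np.sqrt(max(2 * x * (1 - x) * dt, 0.0)) * rng.normal()
        x = min(max(x, 0.0), 1.0)        # keep the approximation in [0, 1]
        t += dt
        while j < n_jumps and jump_times[j] <= t:
            x += x * (1 - x) * jump_sizes[j]   # x -> x + x(1-x) * (jump of J)
            j += 1
    return x

print(simulate_wf(x0=0.2, sigma=1.0, theta=0.5, nu0=0.5, T=5.0))
\end{verbatim}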
Let us now consider $J=(J(s))_{s\geq 0}$ and $B=(B(s))_{s\geq 0}$ as in the previous proposition. Then, for any $t>0$, the solution of \eqref{WFSDE} in the time interval $[0,t]$ is a measurable function of $(B(s),J(s))_{s\in[0,t]}$, which we denote by $F(B,J)$.
We denote by $\Pb^J$ a regular version of the conditional law of $F(B,J)$ given $J$, which is called the \textit{quenched probability measure}. This allows us to write $\Pb^\omega$ for a.e. realization $\omega$ of $J$. If $P$ denotes the law of $J$, the \textit{annealed measure} is defined via
\begin{equation}\label{avsq}
\Pb(\cdot)=\int \Pb^\omega(\cdot) P({\rm{d}}\omega).
\end{equation}
We write $(X(t))_{t\geq 0}$ and $(X(\omega,t))_{t\geq 0}$ for the solution of \eqref{WFSDE} under $\Pb$ and $\Pb^\omega$, respectively.
The next result provides the annealed convergence, under a suitable scaling of parameters and of time, of the type-frequency of a sequence of Moran models to the Wright-Fisher diffusion in random environment.
\begin{theorem}[Annealed convergence]\label{alimit}
Let $J$ be a pure-jump subordinator with L\'{e}vy measure $\mu$ supported in $(0,1)$, and set $J_N(t)\coloneqq J(t/N)$, $t\geq 0$. Assume in addition that
\begin{enumerate}
\item $N \sigma_N\rightarrow \sigma$ and $N\theta_N\rightarrow \theta$ for some $\sigma \geq 0, \theta \geq 0$ as $N\to\infty$.
\item $X_N(J_N,0)\rightarrow x_0$ as $N\to\infty.$
\end{enumerate}
Then, we have
$$(X_N(J_N, Nt))_{t\geq 0}\xRightarrow[N\to\infty]{(d)} (X(t))_{t\geq 0},$$
where $X$ is the unique pathwise solution of \eqref{WFSDE} with $X(0)=x_0$.
\end{theorem}
\begin{remark}
The analogous result of Theorem \ref{alimit} in the context of discrete-time Wright--Fisher models without mutations is covered by the fairly general result \cite[Thm. 3.2]{BCM19} (see also \cite[Thm 2.12]{CSW19}).
\end{remark}
For $\omega$ simple, the quenched law $\Pb^\omega$ can be alternatively defined as follows. Denote by $t_1<\cdots<t_k$ the jumps of $\omega$ in $[0,T]$. If $X(\omega,0)=x_0$, $X(\omega,\cdot)$ evolves in $[0,t_1)$ as the solution of \eqref{WFD} starting at $x_0$. In the intervals $[t_i,t_{i+1})$, $X(\omega,\cdot)$ evolves as the solution of \eqref{WFD} starting at $X(\omega,t_i)$. Moreover, if $X(\omega,t_i-)=x$, then $X(\omega,t_i)=x+x(1-x)\Delta\omega(t_i)$. The next result extends Theorem \ref{alimit} to the quenched setting for simple environments.
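The following sketch mirrors this piecewise construction for a fixed simple environment, approximating the diffusion between jumps by an Euler--Maruyama scheme and averaging over Brownian paths to estimate a quenched expectation; all numerical choices are again purely illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)

def quenched_path(x0, sigma, theta, nu0, env, T, dt=1e-4):
    """One path of X(omega, .) for a fixed simple environment env = [(t_i, p_i)]."""
    nu1 = 1.0 - nu0
    x, t = x0, 0.0
    for t_next, p in sorted(env) + [(T, 0.0)]:
        while t < t_next:
            h = min(dt, t_next - t)
            drift = theta * (nu0 * (1 - x) - nu1 * x) + sigma * x * (1 - x)
            x += drift * h + np.sqrt(max(2 * x * (1 - x) * h, 0.0)) * rng.normal()
            x = min(max(x, 0.0), 1.0)
            t += h
        x += x * (1 - x) * p       # deterministic jump x -> x + x(1-x) Delta omega
    return x

# Monte Carlo estimate of the quenched mean at time T for a fixed environment.
env = [(0.5, 0.3), (1.2, 0.1)]
print(np.mean([quenched_path(0.2, 1.0, 0.5, 0.5, env, T=2.0) for _ in range(200)]))
\end{verbatim}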
\begin{theorem}[Quenched convergence]\label{qlimit}
Let $\omega\in\Db^\star$ be a simple environment and set $\omega_N(t)=\omega(t/N)$, $t\geq 0$. Assume in addition that
\begin{enumerate}
\item $N\sigma_N\rightarrow \sigma$ and $N\theta_N\rightarrow \theta$ for some $\sigma \geq 0, \theta \geq 0$ as $N\to\infty$.
\item $X_N(\omega_N,0)\rightarrow X(\omega,0)$ as $N\to\infty$.
\end{enumerate}
Then, we have
$$(X_N(\omega_N, Nt))_{t\geq 0}\xRightarrow[N\to\infty]{(d)} (X(\omega,t))_{t\geq 0}.$$
\end{theorem}
\begin{remark}
Recall that if $J$ is a compound Poisson process, then almost every environment is simple. In this case, Theorem \ref{qlimit} tells us that the quenched convergence holds for $P$-almost every environment. We conjecture that this is true in general. In fact, we prove the tightness of the sequence $(X_N(\omega_N, Nt))_{t\geq 0}$ for any environment $\omega$, see Proposition \ref{q-tight}. In order to get the desired conclusion, it would be enough to prove the continuity of $\omega\mapsto X(\omega,\cdot)$. Unfortunately, since the diffusion term in \eqref{WFSDE} is not Lipschitz, the standard techniques used to prove this type of result fail. Developing new techniques to cover non-Lipschitz diffusion coefficients is beyond the scope of this paper.
\end{remark}
\subsection{The ancestral selection graph}\label{s24}
The aim of this section is to generalize the construction of the classical ancestral selection graph (ASG) of Krone and Neuhauser \cite{KroNe97,NeKro97} to include the effect of the random environment. In order to motivate our definition, we briefly explain in Section \ref{s241} how to read off the ASG in a Moran model in deterministic pure-jump environment on the basis of its graphical representation. Inspired by this construction, we define in Section \ref{s242} the ASG for the Wright--Fisher process in random environment under the annealed and the quenched measure.
\subsubsection{Reading off the ancestries in the Moran model}\label{s241}
The ancestral selection graph (ASG) was introduced by Krone and Neuhauser in \cite{KroNe97} (see also \cite{NeKro97}) to study the genealogical relations of an untyped sample taken from the population at present, in the diffusion limit of the Moran model with mutation and selection. In this section we adapt this construction to the Moran model in deterministic pure-jump environment described in Section \ref{s211}.
\smallskip
Let us fix a pure-jump environment $\omega$. Consider a realization of the interacting particle system associated with the Moran model described in Section \ref{s211} in the time interval $[0,T]$. We start with a sample of $n$ individuals at time $T$, and we trace backward in time (from right to left in Figure \ref{ASGpicture}) the lines of their potential ancestors; the backward time $\beta\in[0,T]$ corresponds to the forward time $t=T-\beta$. When a neutral arrow joins two individuals in the current set of potential ancestors, the two lines coalesce into a single one, the one at the tail of the arrow. When a neutral arrow hits a potential ancestor from outside, the hit line is replaced by the line at the tail of the arrow. When a selective arrow hits the current set of potential ancestors, the individual that is hit has two possible parents, the \textit{incoming branch} at the tail and the \textit{continuing branch} at the tip. The true parent depends on the type of the incoming branch, but for the moment we work without types. These unresolved reproduction events can be of two types: a \emph{branching} event if the selective arrow emanates from an individual outside the current set of potential ancestors, and a \emph{collision} event if the selective arrow links two current potential ancestors. Note that at the jumping times of the environment, multiple lines in the ASG can be hit by selective arrows, and therefore, multiple branching and collision events may occur simultaneously. Mutations remain superposed on the lines of the ASG. The object that results from applying this procedure up to time $\beta=T$ is called the \emph{Moran-ASG in $[0,T]$ in pure-jump environment}. It contains all the lines that are potentially ancestral (ignoring mutation events) to the sampled lines at time $t=T$, see Fig. \ref{ASGpicture}. Note that, since the number of events occurring in $[0,T]$ is almost surely finite (see Remark \ref{finitenumber}), the ASG in $[0,T]$ is well-defined.
\begin{figure}[t!]
\centering
\scalebox{0.6}{\begin{tikzpicture}
\draw[dashed,thick,opacity=0.3] (1,-0.5) --(1,1) (1,2)--(1,3) (1,4)--(1,4.5);
\draw[dashed,thick,opacity=0.3] (6.2,-0.5) --(6.2,0) (6.2,1) --(6.2,2) (6.2,4)--(6.2,4.5);
\node [right] at (-0.2,-0.8) {$0$};
\node [right] at (0.4,-0.8) {$t_0$};
\node [right] at (5.6,-0.8) {$t_1$};
\node [right] at (8.3,-0.8) {$T$};
\node [right] at (3.5,-0.8) {$t$};
\node [right] at (-0.2,4.8) {$T$};
\node [right] at (0.4,4.8) {$T-t_0$};
\node [right] at (5.6,4.8) {$T-t_1$};
\node [right] at (8.3,4.8) {$0$};
\node [right] at (3.5,4.8) {$\beta$};
\draw[-{triangle 45[scale=5]}] (2.5,-0.5) -- (4.5,-0.5) node[text=black, pos=.6, xshift=7pt]{};
\draw[-{triangle 45[scale=5]}] (4.5,4.5) -- (2.5,4.5) node[text=black, pos=.6, xshift=7pt]{};
\draw[thick,opacity=0.3] (0,1) -- (8.5,1);
\draw[thick,opacity=0.3] (8.5,2) -- (0,2);
\draw[thick,opacity=0.3] (8.5,3) -- (0,3);
\draw[thick,opacity=0.3] (0,0) -- (8.5,0);
\draw[thick,opacity=0.3] (0,4) -- (8.5,4);
\draw[thick] (0,4) -- (6.2,4);
\draw[thick] (0,2) -- (8.5,2);
\draw[thick] (4.5,1) -- (8.5,1);
\draw[thick] (0.4,0) -- (6.2,0);
\draw[thick] (1,1) -- (0.0,1);
\draw[thick] (1,3) -- (0.0,3);
\draw[-{triangle 45[scale=5]},thick] (4.5,2) -- (4.5,1) node[text=black, pos=.6, xshift=7pt]{};
\draw[-{triangle 45[scale=5]},thick] (0.4,1) -- (0.4,0);
\draw[-{open triangle 45[scale=5]},thick] (1.7,2) -- (1.7,4);
\draw[-{open triangle 45[scale=5]},thick] (1,3) -- (1,4);
\draw[-{open triangle 45[scale=5]},thick] (1,1) -- (1,2);
\draw[-{open triangle 45[scale=5]},thick] (6.2,4) -- (6.2,2);
\draw[-{open triangle 45[scale=5]},thick] (6.2,0) -- (6.2,1);
\draw[-{triangle 45[scale=5]},thick, opacity=0.3] (3,3) -- (3,1);
\draw[-{open triangle 45[scale=5]},thick, opacity=0.3] (7.6,1) -- (7.6,4);
\draw[-{open triangle 45[scale=5]},thick, opacity=0.3] (2.5,0) -- (2.5,1);
\draw[-{open triangle 45[scale=5]},thick, opacity=0.3] (7,2) -- (7,0);
\node[ultra thick] at (3.5,4) {$\bigtimes$} ;
\draw[thick] (2.1,2) circle (1.5mm) [fill=white!100];
\draw[thick] (4.1,0) circle (1.5mm) [fill=white!100];
\node[ultra thick,opacity=0.3] at (6.6,4) {$\bigtimes$} ;
\end{tikzpicture} }
\caption{The ASG corresponding to the realization of the Moran interacting particle system given in Fig. \ref{particlepicture}.
Forward time runs from left to right and backward time from right to left.}
\label{ASGpicture}
\end{figure}
\smallskip
Given an assignment of types to the lines present in the ASG at time $t=0$, we can extract the true genealogy and determine the types of the sampled individuals at time $T$. For this, we propagate types forward in time along the lines of the ASG taking into account mutations and reproductions, with the rule that if a line is hit by a selective arrow, the incoming line is the ancestor if and only if it is of type~$0$, see Figure~\ref{fig:peckingorder}. This rule is called the pecking order. Proceeding in this way, the types in~$[0,T]$ are determined along with the true genealogy.
\begin{figure}[b!]
\begin{minipage}{0.23 \textwidth}
\centering
\scalebox{1}{
\begin{tikzpicture}
\draw[line width=0.5mm] (0,1) -- (2,1);
\draw[color=black] (0,0) -- (1,0);
\draw[-{open triangle 45[scale=2.5]},color=black] (1,0) -- (1,1) node[text=black, pos=.6, xshift=7pt]{};
\node[above] at (1.8,1) {\tiny D};
\node[above] at (0.3,1) {\tiny C};
\node[above] at (0.3,0) {\tiny I};
\node[left] at (0,1) {\tiny $1$};
\node[left] at (0,0) {\tiny $1$};
\node[right] at (2,1) {\tiny $1$};
\end{tikzpicture}}
\end{minipage}\hfill
\begin{minipage}{0.23 \textwidth}
\centering
\scalebox{1}{
\begin{tikzpicture}
\draw[line width=0.5mm] (0,1) -- (2,1);
\draw[color=black] (0,0) -- (1,0);
\draw[-{open triangle 45[scale=2.5]},color=black] (1,0) -- (1,1) node[text=black, pos=.6, xshift=7pt]{};
\node[above] at (1.8,1) {\tiny D};
\node[above] at (0.3,1) {\tiny C};
\node[above] at (0.3,0) {\tiny I};
\node[left] at (0,1) {\tiny $0$};
\node[left] at (0,0) {\tiny $1$};
\node[right] at (2,1) {\tiny $0$};
\end{tikzpicture}}
\end{minipage}\hfill
\begin{minipage}{0.23 \textwidth}
\centering
\scalebox{1}{
\begin{tikzpicture}
\draw[] (0,1) -- (2,1);
\draw[line width=0.5mm] (0,0) -- (1,0);
\draw[line width=0.5mm] (1,1) -- (2,1);
\draw[-{open triangle 45[scale=2.5]},color=black,line width=0.5mm] (1,-0.025) -- (1,1) node[text=black, pos=.6, xshift=7pt]{};
\node[above] at (1.8,1) {\tiny D};
\node[above] at (0.3,1) {\tiny C};
\node[above] at (0.3,0) {\tiny I};
\node[left] at (0,1) {\tiny $1$};
\node[left] at (0,0) {\tiny $0$};
\node[right] at (2,1) {\tiny $0$};
\end{tikzpicture}}
\end{minipage}\hfill
\begin{minipage}{0.23 \textwidth}
\centering
\scalebox{1}{
\begin{tikzpicture}
\draw[] (0,1) -- (2,1);
\draw[line width=0.5mm] (0,0) -- (1,0);
\draw[line width=0.5mm] (1,1) -- (2,1);
\draw[-{open triangle 45[scale=2.5]},color=black,line width=0.5mm] (1,-0.025) -- (1,1) node[text=black, pos=.6, xshift=7pt]{};
\node[above] at (1.8,1) {\tiny D};
\node[above] at (0.3,1) {\tiny C};
\node[above] at (0.3,0) {\tiny I};
\node[left] at (0,1) {\tiny $0$};
\node[left] at (0,0) {\tiny $0$};
\node[right] at (2,1) {\tiny $0$};
\end{tikzpicture}}
\end{minipage}
\caption{The descendant line (D) splits into the continuing line (C) and the incoming line (I). The incoming line is ancestral if and only if it is of type~$0$. The true ancestral line is drawn in bold.}
\label{fig:peckingorder}
\end{figure}
\subsubsection{The ASG for the Wright--Fisher process in random/deterministic environment}\label{s242}
The aim of this section is to associate an ASG to the Wright--Fisher diffusion in random/deterministic environment. In contrast to the Moran model setting described in Section \ref{s211}, we do not have a graphical representation of the forward process at our disposal here. One option would be to provide such a graphical construction in the spirit of the lookdown construction of \cite{ParBah15}. However, we opt here for the following alternative strategy. We first consider the graphical representation of a Moran model with parameters $\sigma/N$, $\theta/N$, $\nu_0$, $\nu_1$ and environment $\omega_N(\cdot)=\omega(\cdot/N)$, and we speed up time by $N$. Next, we sample $n$ individuals at time $T$ and we construct the ASG as in Section \ref{s241}.
\smallskip
Now, replace $\omega$ by a pure-jump subordinator $J$ with L\'evy measure $\mu$ supported in $(0,1)$. Note that the Moran-ASG in $[0,T]$ evolves according to the time reversal of $J$. The latter is the subordinator $\bar{J}^T\coloneqq(\bar{J}^T(\beta))_{\beta\in[0,T]}$ with $\bar{J}^T(\beta)\coloneqq J(T)-J((T-\beta)-)$, which has the same law as $J$ (its law does not depend on $T$). Hence, a simple asymptotic analysis of the rates and probabilities for the possible events leads to the following definition.
\begin{definition}[The annealed ASG] \label{defannealdasg}
The \emph{annealed ancestral selection graph} with parameters $\sigma,\theta,\nu_0,\nu_1$, and environment driven by a pure-jump subordinator with L\'evy measure $\mu$, associated to a sample of the population of size $n$ at time $T$ is the branching-coalescing particle system $(\Gs(\beta))_{\beta\geq 0}$ starting with $n$ lines and with the following dynamic.
\begin{itemize}
\item each line splits into two lines, an incoming line and a continuing line, at rate $\sigma$.
\item every given pair of lines coalesce into a single line at rate $2$.
\item every group of $k$ lines is subject to a simultaneous branching at rate $$\sigma_{m,k}\coloneqq \int_{(0,1)}y^k (1-y)^{m-k}\mu({\rm{d}} y),$$ where $m$ denotes the total number of lines in the ASG before the \textit{simultaneous branching event}. At the simultaneous branching event, each line in the group involved splits into two lines, an incoming line and a continuing line.
\item each line is decorated by a beneficial mutation at rate $\theta \nu_0$.
\item each line is decorated by a deleterious mutation at rate $\theta \nu_1$.
\end{itemize}
\end{definition}
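When $\mu$ is finite (compound Poisson environment), the dynamics of the line-counting process of this ASG can be simulated with a standard Gillespie scheme, as in the following sketch; simultaneous branching is realized by drawing a peak size and letting each line branch independently with that probability. The uniform peak distribution is only an example.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(6)

def asg_line_count(n, sigma, T, mu_mass=1.0,
                   sample_peak=lambda rng: rng.uniform(0.0, 0.5)):
    """Number of lines in the annealed ASG at backward time T (mu finite)."""
    k, beta = n, 0.0
    while True:
        total = k * sigma + k * (k - 1) + mu_mass
        beta += rng.exponential(1.0 / total)
        if beta > T:
            return k
        u = rng.random() * total
        if u < k * sigma:
            k += 1                        # a single line branches
        elif u < k * sigma + k * (k - 1):
            k -= 1                        # a pair of lines coalesces
        else:
            y = sample_peak(rng)          # environmental peak of size y
            k += rng.binomial(k, y)       # each line branches with prob. y

print(asg_line_count(n=3, sigma=1.0, T=2.0))
\end{verbatim}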
In order to define the ASG in the quenched setting, there are two natural approaches.
\smallskip
The first approach builds on the following observations. The line-counting process of the annealed ASG $K\coloneqq (K(\beta))_{\beta\geq 0}$, i.e. the process that keeps track of the evolution of the number of lines in the ASG, is non-explosive (this will follow from Lemma \ref{pr}). Moreover, $K$ can be constructed as the strong solution of a pure-jump SDE driven by the subordinator $\bar{J}^T$ and other two independent Poisson processes taking into account coalescences and non-environmental branchings. This allows us to define the quenched line-counting process, $K^{\bar{\omega}}\coloneqq (K^{\bar{\omega}}(\beta))_{\beta\geq 0}$ for almost every realization $\bar{\omega}$ of the environment by means of a regular version of the conditional law of $K$ given $\bar{J}^T$ (i.e. the quenched law). Given a realization of $K^{\bar{\omega}}$ it is straightforward to construct the quenched ASG: (1) at any jump of $K^{\bar{\omega}}$ choose uniformly at random the lines that are involved in the event (an increase or decrease of $K^{\bar{\omega}}$ leads to a branching or a coalescence, respectively), (2) decorate the lines in the ASG, independently, with beneficial and deleterious mutations at rate $\theta\nu_0$ and $\theta\nu_1$, respectively.
\smallskip
The second approach is more natural, but technically more involved. It will allow us to generalize the definition of the quenched ASG for any environment. The formal definition is as follows.
\begin{definition}[The quenched ASG in a fixed environment] \label{defquenchedasg}
Let $\omega:\Rb\to\Rb$ be a fixed environment. The \emph{quenched ancestral selection graph} with parameters $\sigma,\theta,\nu_0,\nu_1$, and environment $\omega$, associated to a sample of the population of size $n$ at time $T$ is a branching-coalescing particle system $(\Gs^T(\omega,\beta))_{\beta\geq 0}$ starting at $\beta=0-$ with $n$ lines and with the following dynamic.
\begin{itemize}
\item each line splits into two lines, an incoming line and a continuing line, at rate $\sigma$.
\item every given pair of lines coalesce into a single line at rate $2$.
\item if at time $\beta$, we have $\Delta \omega(T-\beta)>0$, then each line splits into two lines, an incoming line and a continuing line, with probability $\Delta \omega(T-\beta)$, independently of the other lines.
\item each line is decorated by a beneficial mutation at rate $\theta \nu_0$.
\item each line is decorated by a deleterious mutation at rate $\theta \nu_1$.
\end{itemize}
\end{definition}
It is plain that the branching-coalescing particle system $(\Gs^T(\omega,\beta))_{\beta\geq 0}$ is well defined in the case where the environment $\omega$ is simple. However, this is not trivial for environments $\omega$ that are not simple. The justification of the previous definition in the general case is given by the following proposition.
\begin{proposition}[Existence of the quenched ASG]\label{eqasg}
Let $\omega:\Rb\to\Rb$ be a fixed environment. For any $n \in\Nb$ and $T > 0$, there exists a branching-coalescing particle system $(\Gs^T(\omega,\beta))_{\beta\geq 0}$ starting at $\beta=0-$ with $n$ lines, that almost surely contains finitely many lines at each time $\beta \in [0,T]$, and that satisfies the requirements of Definition \ref{defquenchedasg}.
\end{proposition}
\subsection{Type frequency via the killed ASG: a moment duality}\label{s25}
The aim of this section is to relate the type-$0$ frequency process $X$ to the ASG. We start with a heuristic argument that leads to the definitions of the killed ASG in the annealed and quenched cases, respectively. Let us assume that the proportion of fit individuals at time $0$ is equal to $x\in[0,1]$. Conditionally on $X(T)$, the probability of sampling independently $n$ unfit individuals at time $T$ equals $(1-X(T))^n$. Now, consider the annealed ASG associated with the $n$ sampled individuals in $[0,T]$ and assign types independently at random to each line present in the ASG at time $\beta=T$ according to the initial distribution $(x,1-x)$. In the absence of mutations, the $n$ sampled individuals are unfit if and only if all the lines in the ASG at time $\beta=T$ are assigned the unfit type (since at any selective event a fit individual can only be replaced by another fit individual). Therefore, if $R(T)$ denotes the number of lines present in the ASG at time $\beta=T$, then conditionally on $R(T)$, the probability that the $n$ sampled individuals are unfit is $(1-x)^{R(T)}$. We would then expect to have $$\Eb[(1-X(T))^n\mid X(0)=x]=\Eb[(1-x)^{R(T)}\mid R(0)=n].$$
Mutations determine the types of some of the lines in the ASG even before we assign types to the lines at time $T$. Hence, we can prune away from the ASG all the sub-ASGs arising from a mutation event. If in the pruned ASG there is a line ending in a beneficial mutation, then we can infer that at least one of the sampled individuals has the fit type. If all the lines end up in a deleterious mutation, then we can infer directly that all the sampled individuals are unfit. In the remaining case, the sampled individuals are all unfit if and
only if all the lines present at time $T$ in the pruned ASG are assigned the unfit type. This motivates the following definition.
\begin{definition}[The annealed killed ASG]\label{defkilledasgannealed}
The \emph{annealed killed ASG} with parameters $\sigma,\theta,\nu_0,\nu_1$, and environment driven by a pure-jump subordinator with L\'evy measure $\mu$, associated to a sample of the population of size $n$ is the branching-coalescing particle system $(\Gs_\dagger(\beta))_{\beta\geq 0}$ starting with $n$ lines and with the following dynamic.
\begin{itemize}
\item each line splits into two lines, an incoming line and a continuing line, at rate $\sigma$.
\item every given pair of lines coalesce into a single line at rate $2$.
\item every group of $k$ lines is subject to a simultaneous branching at rate $\sigma_{m,k}$, where $m$ denotes the total number of lines in the ASG before the simultaneous branching event. At the simultaneous branching event, each line in the group involved splits into two lines, an incoming line and a continuing line.
\item each line is killed at rate $\theta \nu_1$.
\item each line sends the process to the cemetery state $\dagger$ at rate $\theta \nu_0$.
\end{itemize}
\end{definition}
A similar reasoning leads, in the quenched case, to the following definition.
\begin{definition}[The quenched killed ASG]\label{defkilledasgquenched}
Let $\omega:\Rb\to\Rb$ be a fixed environment. The \emph{quenched killed ASG} with parameters $\sigma,\theta,\nu_0,\nu_1$, and environment $\omega$, associated to a sample of the population of size $n$ at time $T$ is the branching-coalescing particle system $(\Gs_{\dagger}^T(\omega,\beta))_{\beta\geq 0}$ starting at $\beta=0-$ with $n$ lines and with the following dynamic.
\begin{itemize}
\item each line splits into two lines, an incoming line and a continuing line, at rate $\sigma$.
\item every given pair of lines coalesce into a single line at rate $2$.
\item if at time $\beta$, we have $\Delta \omega(T-\beta)>0$, then any line splits into two lines, an incoming line and a continuing line, with probability $\Delta \omega(T-\beta)$, independently from the other lines.
\item each line is killed at rate $\theta \nu_1$.
\item each line sends the process to the cemetery state $\dagger$ at rate $\theta \nu_0$.
\end{itemize}
\end{definition}
\begin{remark}
The branching-coalescing system underlying the quenched killed ASG is well-defined as it can be constructed on the basis of the quenched ASG (which is well-defined thanks to Proposition \ref{eqasg}).
\end{remark}
The formal annealed and quenched relations between the type-frequency process and the killed ASG are established in the subsequent sections.
\subsubsection{The annealed case}\label{s251}
We start this section with the duality relation between the process $X$ and the line-counting process of the killed ASG.
\smallskip
For each $\beta\geq 0$, we denote by $R(\beta)$ the number of lines present in the killed ASG at time $\beta$, with the convention that $R(\beta)=\dagger$ if $\Gs_\dagger(\beta)=\dagger$. The process $R\coloneqq (R(\beta))_{\beta\geq 0}$, called the line-counting process of the killed ASG, is a continuous-time Markov chain with values in $\Nb_0^\dagger\coloneqq \mathbb{N}_0\cup\{\dagger\}$ and infinitesimal generator matrix $Q^\mu_\dagger\coloneqq (q^\mu_\dagger(i,j))_{i,j\in\Nb_0^\dagger}$ defined via
\begin{equation}\label{krates}
q^\mu_\dagger(i,j)\coloneqq \left\{\begin{array}{ll}
i(i-1)+i \theta\nu_1 &\text{if $j=i-1$},\\
i(\sigma + \sigma_{i,1}) &\text{if $j=i+1$},\\
\binom{i}{k}\sigma_{i,k} &\text{if $j=i+k,\, i\geq k\geq 2$},\\
i\theta\nu_0&\textrm{if $j=\dagger$},\\
-i(i-1+\theta+\sigma)-\int_{(0,1)}(1-(1-y)^i)\mu({\rm{d}} y)&\textrm{if $j=i\in\Nb_0$}\\
0&\textrm{otherwise}.
\end{array}\right.
\end{equation}
If $\theta >0$ and $\nu_0, \nu_1 \in (0,1)$, the states $0$ and $\dagger$ are absorbing for $R$.
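Although it plays no role in the proofs, the generator \eqref{krates} is easy to assemble numerically. The following minimal Python sketch builds a finite truncation of $Q^\mu_\dagger$; it assumes, consistently with the diagonal term in \eqref{krates}, that $\sigma_{m,k}=\int_{(0,1)}y^k(1-y)^{m-k}\mu({\rm{d}} y)$, takes $\mu=\lambda\,\delta_{y_0}$ with arbitrary illustrative parameter values, and simply discards states above the truncation level.
\begin{verbatim}
import numpy as np
from math import comb

sigma, theta, nu0, nu1 = 1.0, 0.5, 0.4, 0.6   # illustrative parameter values
lam_env, y0 = 2.0, 0.3                        # environment mu = lam_env * delta_{y0}
M = 30                                        # truncation level for the line count

def s_mk(m, k):   # assumed form sigma_{m,k} = int y^k (1-y)^(m-k) mu(dy)
    return lam_env * y0 ** k * (1.0 - y0) ** (m - k)

DAG = M + 1                                   # index of the cemetery state
Q = np.zeros((M + 2, M + 2))                  # states 0, 1, ..., M, dagger
for i in range(1, M + 1):
    Q[i, i - 1] = i * (i - 1) + i * theta * nu1          # coalescence / deleterious mutation
    if i + 1 <= M:
        Q[i, i + 1] = i * (sigma + s_mk(i, 1))           # selection + single env. branching
    for k in range(2, i + 1):
        if i + k <= M:
            Q[i, i + k] = comb(i, k) * s_mk(i, k)        # simultaneous branchings
    Q[i, DAG] = i * theta * nu0                          # killing by a beneficial mutation
    Q[i, i] = -(i * (i - 1 + theta + sigma)
                + lam_env * (1.0 - (1.0 - y0) ** i))     # diagonal term of (krates)

# Rows whose targets all lie inside the truncation must sum to zero.
print("max |row sum| over non-truncated rows:", np.abs(Q[:M // 2].sum(axis=1)).max())
\end{verbatim}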
\smallskip
Let $J$ be a pure-jump subordinator with L\'evy measure $\mu$ supported in $(0,1)$. Recall that $X(J,\cdot)$ denotes the strong solution of \eqref{WFSDE} as a function of the environment $J$. Since in the annealed case backward and forward environments have the same law, we can construct the line-counting process of the killed ASG as the strong solution of an SDE involving $J$ and four other independent Poisson processes encoding the non-environmental events. We denote it by $(R(J,\beta))_{\beta\geq 0}$. The next result shows that, under this coupling, $X$ and $R$ satisfy a relation that generalizes the moment duality.
\begin{theorem}[Reinforced/standard moment duality] \label{momdualannealed}
For all $x\in[0,1]$, $n\in\mathbb{N}$ and $T\geq 0$, and any function $f\in\Cs^2([0,\infty))$ with compact support,
\begin{equation}\label{rmd}
\mathbb{E}[(1-X(J,T))^n f(J(T))|X(0)=x]=\mathbb{E}[(1-x)^{R(J,T)} f(J(T))|R(0)=n],
\end{equation}
with the convention $(1-x)^\dagger=0$ for all $x\in[0,1]$. In particular, $1-X$ and $R$ are moment dual, i.e.
\begin{equation}\label{md}
\mathbb{E}[(1-X(T))^n |X(0)=x]=\mathbb{E}[(1-x)^{R(T)} |R(0)=n].
\end{equation}
\end{theorem}
\begin{remark}
In the case $\theta=0$, \eqref{md} is a particular case of \cite[Lemma 2.14]{CSW19}.
\end{remark}
Assuming that $\theta >0$ and $\nu_0, \nu_1 \in (0,1)$, we define, for $n\in\Nb_0$, $$w_n\coloneqq \mathbb{P}(\exists \beta\geq 0: R(\beta)=0 \mid R(0)=n),$$ the probability that $R$ is eventually absorbed at $0$ conditionally on starting at $n$. Clearly $w_0=1$. The following result is a consequence of Theorem \ref{momdualannealed}.
\begin{theorem}[Asymptotic type-frequency: the annealed case] \label{annealdmoments}
Assume that $\theta >0$ and $\nu_0, \nu_1 \in (0,1)$. Then the diffusion $X$ has a unique stationary distribution $\pi_X\in\Ms_1([0,1])$. Let $X(\infty)$ be a random variable distributed according to $\pi_X$. Then $X(t)$ converges in distribution to $X(\infty)$, independently of the starting value of $X$. Moreover, for all $n\in\mathbb{N}$, we have
\begin{eqnarray}
\mathbb{E}\left[ (1-X(\infty))^n \right]=w_n, \label{cvmomentsannealed}
\end{eqnarray}
and the absorption probabilities $(w_n)_{n\geq 0}$ satisfy
\begin{align}
(\sigma+\theta+n-1)w_n=\sigma w_{n+1}+(\theta\nu_1+ n-1)w_{n-1}+ \frac{1}{n}\sum\limits_{k=1}^n\binom{n}{k} \sigma_{n,k}(w_{n+k}-w_n),\quad n\in\Nb. \label{recwn}
\end{align}
\end{theorem}
\begin{remark} \label{simpsonindexannealed}
A quantity of interest for biologists to measure the diversity in a population is the \emph{Simpson index}. It represents the probability that two individuals, chosen uniformly at random in the population, have the same type. In our case it is given by $S(t) \coloneqq X(t)^2 + (1-X(t))^2$. If the types represent different species, it gives a measure of biodiversity. If the types represent two alleles of a gene for a given species, it measures \emph{homozygosity}. As a consequence of Theorem \ref{annealdmoments}, one can express the moments of $S(\infty)$, the asymptotic Simpson index, in terms of the coefficients $(w_n)_{n\geq 0}$. In particular, we have
\[ \mathbb{E}[S(\infty)]=\mathbb{E}[X(\infty)^2 + (1-X(\infty))^2] = 1 - 2 w_1 + 2w_2. \]
\end{remark}
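The recursion \eqref{recwn} determines $(w_n)_{n\geq 0}$ only together with the appropriate boundary behaviour, but numerical approximations are readily obtained from the fact that the absorption probabilities are harmonic for the generator: $\sum_j q^\mu_\dagger(n,j)\,w_j=0$ for $n\in\Nb$, with $w_0=1$ and $w_\dagger=0$. The following Python sketch solves this system on a truncation (treating states above the cut-off as killed, which slightly underestimates $w_n$), for the same illustrative $\mu=\lambda\,\delta_{y_0}$ and assumed $\sigma_{m,k}$ as in the previous sketch, and evaluates the stationary Simpson index moment of Remark \ref{simpsonindexannealed}; it is an approximation, not part of the proofs.
\begin{verbatim}
import numpy as np
from math import comb

sigma, theta, nu0, nu1 = 1.0, 0.5, 0.4, 0.6   # illustrative parameter values
lam_env, y0 = 2.0, 0.3                        # environment mu = lam_env * delta_{y0}
M = 80                                        # truncation level

def s_mk(m, k):   # assumed sigma_{m,k} = int y^k (1-y)^(m-k) mu(dy)
    return lam_env * y0 ** k * (1.0 - y0) ** (m - k)

DAG = M + 1
Q = np.zeros((M + 2, M + 2))                  # states 0, 1, ..., M, dagger, as in (krates)
for i in range(1, M + 1):
    Q[i, i - 1] = i * (i - 1) + i * theta * nu1
    if i + 1 <= M:
        Q[i, i + 1] = i * (sigma + s_mk(i, 1))
    for k in range(2, i + 1):
        if i + k <= M:
            Q[i, i + k] = comb(i, k) * s_mk(i, k)
    Q[i, DAG] = i * theta * nu0
    Q[i, i] = -(i * (i - 1 + theta + sigma) + lam_env * (1.0 - (1.0 - y0) ** i))

# Absorption at 0: solve (Q w)_n = 0 for n = 1..M with w_0 = 1, w_dagger = 0
# and w_j = 0 for j > M (the truncation slightly underestimates w_n).
A = Q[1:M + 1, 1:M + 1]
b = -Q[1:M + 1, 0]
w = np.concatenate(([1.0], np.linalg.solve(A, b)))

print("w_1, ..., w_5              :", np.round(w[1:6], 4))
print("E[S(infty)] ~ 1-2w_1+2w_2  :", round(1.0 - 2.0 * w[1] + 2.0 * w[2], 4))
\end{verbatim}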
\subsubsection{The quenched case} \label{s252}
Let us fix $T \in \mathbb{R}$ and an environment $\omega$ on $(-\infty, T]$. Let $(R_T(\omega,\beta))_{ \beta \geq 0}$ be the line-counting process of the killed ASG $\Gs_\dagger^T(\omega,\cdot)$. As in the annealed case, when $\theta >0$ and $\nu_0, \nu_1 \in (0,1)$, the states $0$ and $\dagger$ are absorbing. The moment duality can be stated as follows.
\begin{theorem}[Quenched moment duality]\label{momdualgen}
For $P$-almost every environment $\omega\in\Db^\star$, we have
\begin{eqnarray}
\mathbb{E}^{\omega} \left [ (1-X(\omega,T))^n\mid X(\omega,0)=x \right ]= \mathbb{E}^{\omega} \left [ (1-x)^{R_T(\omega,T-)}\mid R_T(\omega,0-)=n \right ], \label{quenchedual}
\end{eqnarray}
for all $T>0$, $x\in[0,1]$, $n\in \mathbb{N}$, with the convention $(1-x)^\dagger=0$.
\end{theorem}
Note that in Theorem \ref{momdual} we do not need to require that $0$ or $T$ be continuity points of $\omega$.
\smallskip
Let us now assume $\theta >0$ and $\nu_0, \nu_1 \in (0,1)$. This implies in particular that the process $X(\omega,\cdot)$ is not absorbed at $\{0,1\}$. As in the annealed case, one would like to use the previous moment duality to characterize the asymptotic quenched distribution of $X$. However, in the quenched setting the situation is more involved. The reason is that, for a given realization of the environment $\omega$, as $T$ tends to infinity, the distribution of $X(\omega,T)$ depends strongly on the environment near instant $T$ (and only weakly on the environment that is far away in the past). Hence, unless $\omega$ is constant after some fixed time $t_0$, $X(\omega,T)$ will not converge in distribution as $T$ goes to infinity (see Remark \ref{periodic} below for the particular case of a periodic environment). In contrast, for a given realization of the environment $\omega$ in $(-\infty,0]$, we will see that the distribution of $X(\omega,0)$, conditionally on $X(\omega,-T)=x$, has a limit as $T\to\infty$, and we will characterize this limit law with the help of the moment duality \eqref{quenchedual}.
For $n\in\Nb_0$, we define the absorption probabilities
$$W_n(\omega)\coloneqq \mathbb{P}^{\omega}(\exists \beta\geq 0 \ \text{s.t.} \ R_0(\omega,\beta)=0 \mid R_{0}(\omega,0-)=n).$$ Clearly $W_0(\omega)=1$.
\begin{theorem}[Quenched type-frequency from the distant past] \label{x0condpastenv}
Assume that $\theta >0$ and $\nu_0, \nu_1 \in (0,1)$. Then, for $P$-a.e.\ environment $\omega$ and for any $x \in (0,1)$, the distribution of $X(\omega,0)$ conditionally on $\{ X(\omega,-T) = x \}$ has a limit distribution as $T$ goes to infinity, which is a function of $\omega$ and does not depend on $x$. Moreover, this limit distribution $\mathcal{L}^{\omega}$ satisfies
\begin{eqnarray}
\ \int_0^1 (1-y)^n \mathcal{L}^{\omega}({\rm{d}} y) = W_n(\omega),\quad n\in\Nb, \label{dualimite}
\end{eqnarray}
and the convergence of moments is exponential, i.e.
\begin{eqnarray}
\ \left | \mathbb{E}^{\omega} \left [ (1-X(\omega,0))^n \mid X(\omega,-T)=x \right ] - W_n(\omega) \right | \leq e^{-\theta\nu_0 T},\quad n\in\Nb. \label{approxwn}
\end{eqnarray}
\end{theorem}
\begin{remark} \label{periodic}
If $\omega$ is a periodic environment on $[0,\infty)$ with period $T_p > 0$, then the proof of Theorem \ref{x0condpastenv} allows one to show that, for any $x \in (0,1)$ and $r \in [0,T_p)$, the distribution of $X(\omega,nT_p + r)$, conditionally on $\{ X(\omega,0) = x \}$, has a limit distribution $\mathcal{L}_r^{\omega}$ as $n$ goes to infinity, which is a function of $\omega$ and $r$ and does not depend on $x$. Furthermore, $\mathcal{L}_r^{\omega}$ satisfies $\int_0^1 (1-y)^n \mathcal{L}_r^{\omega}({\rm{d}} y) = W_n(\omega_r)$, where $\omega_r$ is the periodic environment in $(-\infty,0]$ defined by $\omega_r(t) \coloneqq \omega(r+t+(\lfloor -t/T_p \rfloor + 1) T_p)$ for any $t \in (-\infty,0]$. The convergence of moments is also exponential, as in \eqref{approxwn}.
\end{remark}
\subsubsection{Quenched case for simple environments}
In what follows, we assume that $\omega$ is simple. In this case, $(R_T(\omega,\beta))_{\beta\geq 0}$ has transitions with rates $(q_\dagger^0(i,j))_{i,j\in\Nb_0^\dagger}$ (i.e., $\mu=0$ in \eqref{krates}) between the jumping times of $\omega$.
In addition, at each time $\beta \geq 0$ such that $\Delta\omega(T-\beta)>0$, we have
\[ \forall i \in \mathbb{N}, \ \forall k \in [i]_0, \ \mathbb{P}^{\omega} ( R_T(\omega,\beta) = i+k \mid R_T(\omega,\beta-) = i ) = \binom{i}{k} (\Delta \omega(T-\beta))^k (1-\Delta \omega(T-\beta))^{i-k}. \]
In other words, conditionally on $\{R_T(\omega,\beta-) = i\}$, $R_T(\omega,\beta) \sim i + Y$ where $Y \sim \bindist{i}{\Delta \omega(T-\beta)}$.
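The transition mechanism just described makes it straightforward to estimate the absorption probabilities $W_n(\omega)$ of Section \ref{s252} by Monte Carlo: between the jumps of $\omega$ the line-counting process is simulated with the rates $q^0_\dagger(i,j)$, and at each jump every line branches independently with probability $\Delta\omega$. The Python sketch below does this for a hypothetical simple environment with two jumps in the past and illustrative parameter values; it is only meant as a numerical illustration of the definition of $W_n(\omega)$.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

sigma, theta, nu0, nu1 = 1.0, 0.5, 0.4, 0.6     # illustrative parameter values
# Hypothetical simple environment on (-infty, 0]: jumps at forward times -0.7 and -1.8,
# i.e. at backward times beta = 0.7 and 1.8, with sizes 0.35 and 0.5.
env = [(0.7, 0.35), (1.8, 0.5)]                 # (backward time, Delta omega)

def absorbed_at_zero(n):
    lines, beta, jumps = n, 0.0, list(env)
    while True:
        total = lines * (lines - 1) + lines * theta * (nu0 + nu1) + lines * sigma
        wait = rng.exponential(1.0 / total)
        if jumps and beta + wait > jumps[0][0]:            # environmental jump comes first
            beta, dz = jumps.pop(0)
            lines += rng.binomial(lines, dz)               # binomial simultaneous branching
            continue
        beta += wait
        u = rng.uniform(0.0, total)
        if u < lines * (lines - 1) + lines * theta * nu1:  # coalescence / deleterious mutation
            lines -= 1
            if lines == 0:
                return 1                                   # absorbed at 0
        elif u < lines * (lines - 1) + lines * theta * (nu0 + nu1):
            return 0                                       # beneficial mutation: killed
        else:
            lines += 1                                     # selective branching

n, reps = 2, 20000
print("Monte Carlo estimate of W_n(omega), n = 2:",
      np.mean([absorbed_at_zero(n) for _ in range(reps)]))
\end{verbatim}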
\begin{theorem}[Quenched moment duality for simple environments]\label{momdual}
The statements of Theorems \ref{momdualgen} and \ref{x0condpastenv} hold true for any simple environment $\omega$.
\end{theorem}
The proof of the first part of Theorem \ref{momdual} uses arguments different from those used in the proof of Theorem \ref{momdualgen}. The proof of the second part is covered by the proof of Theorem \ref{x0condpastenv}.
\smallskip
Under the additional assumption that the selection parameter $\sigma$ is equal to zero, we can go further and express $W_n(\omega)$ as a function of the environment $\omega$. This will be possible thanks to the following explicit diagonalization of the matrix $Q_\dagger^0$ (the generator matrix of the process $R$ under the null environment).
\begin{lemma}\label{diag}
Assume that $\sigma=0$ and set, for $k\in\Nb_0^\dagger$, $\lambda_k^\dagger\coloneqq -q_\dagger^0(k,k)$, and, for $k\in\Nb$, $\gamma_k^\dagger\coloneqq q_\dagger^0(k,k-1)$. In addition, let
\begin{itemize}
\item $D_\dagger$ be the diagonal matrix with diagonal entries $(-\lambda_i^\dagger)_{i\in\Nb_0^\dagger}$.
\item $U_\dagger\coloneqq (u_{i,j}^\dagger)_{i,j\in\Nb_0^\dagger}$, where $u_{\dagger,\dagger}^\dagger \coloneqq 1$ and $u_{\dagger,j}^\dagger\coloneqq 0$ for $j\in\Nb_0$, and, for $i\in\Nb_0$
\begin{eqnarray}
u_{i,j}^\dagger \coloneqq \prod_{l=j+1}^{i} \left ( \frac{\gamma_{l}^\dagger}{\lambda_l^\dagger - \lambda_j^\dagger} \right ) \,\textrm{for } j\in[i]_0, \ u_{i,j}^\dagger\coloneqq 0,\, \textrm{for } j> i\ \textrm{and} \ \ u_{i,\dagger}^\dagger \coloneqq \theta \nu_0 \sum_{k = 1}^{i} \frac{k}{\lambda_{k}^\dagger} \prod_{l=k+1}^{i} \frac{\gamma_{l}^\dagger}{\lambda_l^\dagger}, \label{recrelalphani3}
\end{eqnarray}
\item $V_\dagger\coloneqq (v_{i,j}^\dagger)_{i,j\in\Nb_0^\dagger}$, where $v_{\dagger,\dagger}^\dagger \coloneqq 1$ and $v_{\dagger,j}^\dagger\coloneqq 0$ for $j\in\Nb_0$, and, for $i\in\Nb$
\begin{eqnarray}
v_{i,j}^\dagger \coloneqq \prod_{l=j}^{i-1} \left ( \frac{-\gamma_{l+1}^\dagger}{\lambda_i^\dagger - \lambda_l^\dagger} \right ) \,\textrm{for } j\in[i]_0, \ v_{i,j}^\dagger\coloneqq 0,\, \textrm{for } j> i\ \textrm{and} \ \ v_{i,\dagger}^\dagger \coloneqq \frac{- \theta \nu_0}{ \lambda_i^\dagger} \sum_{k = 1}^{i} k \prod_{l=k}^{i-1} \left ( \frac{- \gamma_{l+1}^\dagger}{\lambda_i^\dagger - \lambda_l^\dagger} \right ), \label{recrelalphani2}
\end{eqnarray}
\end{itemize}
with the convention that an empty sum equals $0$ and an empty product equals $1$. Then, we have
$$Q_\dagger^0=U_\dagger D_\dagger V_\dagger\quad \textrm{and}\quad U_\dagger V_\dagger=V_\dagger U_\dagger=Id.$$
\end{lemma}
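Lemma \ref{diag} can be checked numerically: for $\sigma=0$ and the null environment the matrix $Q_\dagger^0$ is lower triangular in the ordering $\dagger,0,1,2,\ldots$, so restricting all matrices to the states $\{\dagger,0,1,\ldots,M\}$ is exact. The following Python sketch builds $Q_\dagger^0$, $U_\dagger$, $D_\dagger$ and $V_\dagger$ from the formulas above, for arbitrary illustrative parameter values, and verifies $Q_\dagger^0=U_\dagger D_\dagger V_\dagger$ and $U_\dagger V_\dagger=Id$ up to floating-point error.
\begin{verbatim}
import numpy as np

theta, nu0, nu1 = 0.8, 0.3, 0.7    # illustrative values; sigma = 0 as in the lemma
M = 12                             # states dagger, 0, 1, ..., M (indices 0, 1, ..., M+1)

lam = lambda k: k * (k - 1 + theta)              # lambda_k^dagger
gam = lambda k: k * (k - 1) + k * theta * nu1    # gamma_k^dagger

def prod(f, a, b):                 # product over l = a, ..., b (empty product = 1)
    out = 1.0
    for l in range(a, b + 1):
        out *= f(l)
    return out

DAG = 0                            # cemetery first, then 0, 1, ..., M
Q = np.zeros((M + 2, M + 2))
for i in range(1, M + 1):
    Q[i + 1, i] = gam(i)
    Q[i + 1, DAG] = i * theta * nu0
    Q[i + 1, i + 1] = -lam(i)

U = np.zeros_like(Q); V = np.zeros_like(Q)
U[DAG, DAG] = V[DAG, DAG] = 1.0
for i in range(0, M + 1):
    for j in range(0, i + 1):
        U[i + 1, j + 1] = prod(lambda l: gam(l) / (lam(l) - lam(j)), j + 1, i)
        V[i + 1, j + 1] = prod(lambda l: -gam(l + 1) / (lam(i) - lam(l)), j, i - 1)
    U[i + 1, DAG] = theta * nu0 * sum((k / lam(k)) * prod(lambda l: gam(l) / lam(l), k + 1, i)
                                      for k in range(1, i + 1))
    if i >= 1:
        V[i + 1, DAG] = (-theta * nu0 / lam(i)) * sum(
            k * prod(lambda l: -gam(l + 1) / (lam(i) - lam(l)), k, i - 1)
            for k in range(1, i + 1))

D = np.diag([0.0] + [-lam(i) for i in range(0, M + 1)])
print("max |Q - U D V| :", np.abs(Q - U @ D @ V).max())
print("max |U V - Id|  :", np.abs(U @ V - np.eye(M + 2)).max())
\end{verbatim}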
Now, we consider the polynomials $P_k^\dagger$, $k \in\Nb_0$, defined via
\begin{eqnarray}
P_k^\dagger(x) \coloneqq \sum_{i=0}^{k} v_{k,i}^\dagger\, x^i,\quad x\in[0,1]. \label{newbasis9}
\end{eqnarray}
In addition, for $z\in(0,1)$, we define the matrices $\Bs(z)\coloneqq (\Bs_{i,j}(z))_{i,j\in\Nb_0^\dagger}$ and $\beta^\dagger(z)\coloneqq (\beta^\dagger_{i,j}(z))_{i,j\in\Nb_0^\dagger}$ via
\begin{equation} \label{defbetaki}
\Bs_{i,j}(z)\coloneqq \left\{\begin{array}{ll}
\Pb(i+B_i(z)=j)&\textrm{for $i,j\in\Nb,$}\\
1&\textrm{for $i=j\in\{0,\dagger\}$},\\
0&\textrm{otherwise}.
\end{array}\right.\quad\textrm{and}\quad
\beta^\dagger(z)= U_\dagger^\top \Bs(z)^\top V_\dagger^\top,
\end{equation}
where $B_i(z)\sim\bindist{i}{z}$. It will be justified in the proof of Theorem \ref{x0condpastenvII} that the matrix product defining $\beta^\dagger(z)$ is well-defined.
\begin{theorem} \label{x0condpastenvII}
Assume that $\sigma=0$, $\theta >0$ and $\nu_0, \nu_1 \in (0,1)$. Let $\omega$ be a simple environment. Denote by $N\coloneqq N(T)$ the number of jumps of $\omega$ in $(-T,0)$ and let $(T_i)_{i=1}^N$ be the sequence of those jumps in decreasing order, and set $T_0 \coloneqq 0$. For any $m\in[N]$, we define the matrix $A_m^{\dagger}(\omega)\coloneqq (A_{i,j}^{\dagger,m}(\omega))_{i,j\in\Nb_0^\dagger}$ via
\begin{eqnarray}
A^{\dagger}_m(\omega) \coloneqq \beta^\dagger(\Delta \omega(T_m)) \exp \left ( (T_{m-1} - T_m) D_\dagger \right ). \label{defmatA}
\end{eqnarray}
Then, for all $x \in (0,1), n \in \mathbb{N}$, we have
\begin{eqnarray}
\mathbb{E}^{\omega} \left [ (1-X(\omega,0))^n|X(\omega,-T)=x \right ] = \sum_{k = 0}^{n 2^N} C^\dagger_{n,k}(\omega,T) P_k^\dagger(1-x), \label{exprmomtpst}
\end{eqnarray}
where the matrix $C^\dagger(\omega,T)\coloneqq (C^\dagger_{n,k}(\omega,T))_{k,n\in\Nb_0^\dagger}$ is given by
\begin{equation}
C^\dagger(\omega,T)\coloneqq U_\dagger \left[A_N^\dagger(\omega)A_{N-1}^\dagger(\omega)\cdots A_1^\dagger(\omega)\right]^\top \exp \left ((T_N+T)D_\dagger \right ), \label{defcoeffmom}
\end{equation}
with the convention that an empty product of matrices is the identity matrix.
Moreover, for all $n\in\Nb$,
\begin{equation}
\ W_n(\omega) = C_{n,0}^\dagger(\omega,\infty)\coloneqq \lim_{T\to\infty} C_{n,0}^\dagger(\omega,T)=\lim_{T \to \infty} \left(U_\dagger \left[A_{N(T)}^\dagger(\omega)A_{{N(T)}-1}^\dagger(\omega)\cdots A_1^\dagger(\omega)\right]^\top\right)_{n,0}, \label{exprfctgenlt4infmom}
\end{equation}
where the previous limits are well-defined.
\end{theorem}
\begin{remark}
It will be justified in the proof of Theorem \ref{x0condpastenvII} that the matrix products appearing in \eqref{defmatA} and \eqref{defcoeffmom} are well-defined.
\end{remark}
\begin{remark}
Note that if $\omega$ has no jumps in $(-T,0)$, then $N(T)=0$ and $T_{N(T)}=T_0=0$. Hence, Eq.\eqref{defcoeffmom} reads $C^\dagger(\omega,T)=U_\dagger\exp(T D_\dagger).$ In particular, for the null-environment, we obtain
$W_n(0)=u_{n,0}^\dagger$.
\end{remark}
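For the null environment, the identity $W_n(0)=u_{n,0}^\dagger$ can also be recovered by an elementary first-step argument (a sanity check, not needed for the proofs). Since $\sigma=0$, starting from $n$ lines the next event is either a coalescence or a deleterious mutation, with probability $\gamma_n^\dagger/\lambda_n^\dagger$ (recall that $\gamma_n^\dagger+n\theta\nu_0=\lambda_n^\dagger$), after which absorption at $0$ must still occur from $n-1$ lines, or a killing event, after which absorption at $0$ is impossible. Hence
$$W_n(0)=\frac{\gamma_n^\dagger}{\lambda_n^\dagger}\,W_{n-1}(0)=\prod_{l=1}^{n}\frac{\gamma_l^\dagger}{\lambda_l^\dagger}=u_{n,0}^\dagger,$$
where the last equality uses $\lambda_0^\dagger=0$ in \eqref{recrelalphani3}.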
\begin{remark}
Recall the definition of the Simpson index in Remark \ref{simpsonindexannealed}. As a consequence of the previous results, we can give an expression for the moments of $S(\omega,\infty)$, the limit of the quenched Simpson index. In particular, under the assumptions of Theorem \ref{x0condpastenvII}, we have
\[ \mathbb{E}^{\omega}[S(\omega,\infty)]=\mathbb{E}^{\omega}[X(\omega,\infty)^2 + (1-X(\omega,\infty))^2] = 1 - 2 C_{1,0}^\dagger(\omega,\infty) + 2C_{2,0}^\dagger(\omega,\infty). \]
\end{remark}
\begin{remark} \label{periodic2}
If $\omega$ is a simple periodic environment on $(-\infty,0]$ with period $T_p > 0$ then \eqref{exprfctgenlt4infmom} can be re-written as $W_n(\omega) =\lim_{m \to \infty} (U_\dagger B(\omega)^m )_{n,0}$ where $B(\omega) \coloneqq [A_{N(T_p)}^\dagger(\omega)A_{N(T_p)-1}^\dagger(\omega)\cdots A_1^\dagger(\omega)]^\top$.
\end{remark}
\subsubsection{Mixed environments} \label{mixedenv}
In Section \ref{S7}, we consider the case where the environment in the recent past is subject to a big perturbation. To model this, we work with a mixed environment that is random and stationary in the distant past and deterministic close to the present. In this scenario, we obtain a formula \eqref{formulecompacteexw} for the moments of the diffusion $X$. The result is obtained by applying Theorem \ref{x0condpastenvII} and Theorem \ref{annealdmoments}, and is expressed in terms of the coefficients $w_n$, $C^\dagger_{n,k}(\omega,T)$, and $v_{i,j}^\dagger$.
\subsection{Ancestral type distribution via the pruned lookdown ASG}\label{s26}
In contrast to Section \ref{s25}, where the focus was on the study of the distribution of the type frequency, in this section we are interested in the ancestral type distribution. Heuristically, it represents the probability that the individual from time $0$ who is the ancestor of the entire population after some time is of the fit type. The biological motivation to study this quantity is that it measures how being of the fit type at one given gene favors the offspring of an individual, and therefore the survival of its other genes. We start by making this notion precise.
\smallskip
Consider a realization $\Gs_T\coloneqq (\Gs(\beta))_{\beta\in[0,T]}$ of the annealed ASG in $[0,T]$ started with one line, representing an individual sampled at forward time $T$. For $\beta\in[0,T]$, let $V_\beta$ be the set of lines present at time $\beta$ in $\Gs_T$. Consider a function $c:V_T\to\{0,1\}$ representing an assignment of types to the lines in $V_T$. Given $\Gs_T$ and $c$, we propagate types (forward in time) along the lines of $\Gs_T$, keeping track, at any time $\beta\in[0,T]$, of the true ancestor in $V_T$ of each line in $V_\beta$. We denote by $a_c(\Gs_T)$ the type of the ancestor in $V_T$ of the single line in $V_0$. Assume that, under $\Pb_x$, $c$ assigns independently to each line type $0$ with probability $x$ and type $1$ with probability $1-x$. We call
$$h_T(x)\coloneqq \Pb_x(a_c(\Gs_T)=0),\quad x\in[0,1],$$
the annealed ancestral type distribution at time $T$.
Now, consider an environment $\omega$ and let $\Gs_T(\omega)\coloneqq (\Gs^T(\omega,\beta))_{\beta\in[0,T]}$ be the corresponding quenched ASG in $[0,T]$ started with one line, representing the individual sampled at forward time $T$. For $\beta\in[0,T]$, let $V_\beta(\omega)$ be the set of lines present at time $\beta$ in $\Gs_T(\omega)$. For a given type assignment $c:V_T(\omega)\to\{0,1\}$ to the lines in $V_T(\omega)$, we proceed as in the annealed case and denote by $a_c(\Gs_T(\omega))$ the type of the ancestor in $V_T(\omega)$ of the single line in $V_0(\omega)$. We call
$$h_T^\omega(x)\coloneqq \Pb^{\omega}_x(a_c(\Gs_T(\omega))=0),\quad x\in[0,1],$$
the quenched ancestral type distribution at time $T$, where, under $\Pb^{\omega}_x$, $c$ assigns independently to each line in $V_T(\omega)$ type $0$ with probability $x$ and type $1$ with probability $1-x$.
\smallskip
In the case of the null environment, the ancestral type distribution can be expressed in terms of the line-counting process of the pruned lookdown ASG (pruned LD-ASG), see \cite{LKBW15}. The main goals of this section are: a) to incorporate the effect of the environment in the construction of the pruned LD-ASG, b) to express the annealed and quenched ancestral type distributions in terms of the corresponding line-counting processes, and c) to infer their asymptotic behavior as $T$ tends to infinity.
\subsubsection{The pruned LD-ASG} \label{pldasg}
In this section, we adapt the construction of the pruned LD-ASG to incorporate the effect of the environment. The following construction is done from the ASG both in the annealed and quenched settings. For the quenched setting, let us mention that the existence of the ASG is guaranteed by Proposition \ref{eqasg} for any environment. In particular, the following construction holds for general environments $\omega$ (i.e. $\omega$ is not assumed to be simple). We first build the \textit{lookdown-ASG} (LD-ASG), which consists in attaching levels to each line in the ASG encoding the hierarchy given by the pecking order. This is done as follows. Consider a realization of the (annealed or quenched) ASG in $[0,T]$ starting with one line. The starting line gets level $1$. When two lines, one with level $i$ and the other with level $j>i$, coalesce, the resulting line is assigned level $i$; the level of each line with level $k > j$ before the coalescence is decreased by $1$. When a group of lines with levels $i_1< i_2<\ldots<i_N$ experiences a simultaneous branching, the incoming (resp. continuing) line of the descendant line with level $i_k$ gets level $i_k+k-1$ (resp. $i_k+k$); a line having level $j$ before the branching, with $i_k<j<i_{k+1}$, gets level $j+k$; a line having level $j>i_N$ before the branching gets level $j+N$. Mutations do not affect the levels. See Fig. \ref{LDASG}(left) for an illustration.
\smallskip
On the basis of the LD-ASG the pruned LD-ASG is obtained via an appropriate pruning of its lines. In order to describe this pruning procedure, we need first to identify a special line in the LD-ASG: \textit{the immune line}. The immune line at time $\beta$ is the line in the ASG present at time $\beta$ that is the ancestor of the starting line if all the lines at time $\beta$ are assigned the unfit type. In the absence of mutations, the immune line evolves according to the following rules:
\begin{itemize}
\item It only changes at coalescence or branching times involving the current immune line.
\item At a coalescence event involving the immune line, the merged line becomes the immune line.
\item If the immune line is subject to a branching, then the continuing line becomes the immune line.
\end{itemize}
In the presence of mutations, the pruned LD-ASG is constructed simultaneously with the immune line as follows. Let $\beta_1<\cdots<\beta_n$ be the times at which mutations occur in the LD-ASG in $[0,T]$. In the time interval $[0,\beta_1)$ the pruned LD-ASG coincides with the LD-ASG and the immune line evolves as described above. Assume now that we have constructed the pruned LD-ASG together with its immune line up to time $\beta_i-$, where the pruned LD-ASG contains $N$ lines and the immune line has level $k_0\in[N]$. The pruned LD-ASG is extended up to time $\beta_i$ according to the following rules:
\begin{itemize}
\item If at time $\beta_i$ a line with level $k\neq k_0$ at $\beta_i-$ is hit by a deleterious mutation, we stop tracing back this line; all the other lines are extended up to time $\beta_i$; all the lines with level $j>k$ at time $\beta_i-$ decrease their level by $1$ and the others keep their levels unchanged; the immune line continues on the same line (possibly with a different level).
\item If at time $\beta_i$ the line with level $k_0$ at $\beta_i-$ is hit by a deleterious mutation, we extend all the lines up to time $\beta_i$; the immune line gets level $N$, but remains on the same line; all the lines having at time $\beta_i-$ a level $j>k_0$ decrease their level by $1$, and the others keep their levels unchanged.
\item If at time $\beta_i$ a line with level $k$ is hit by a beneficial mutation, we stop tracing back all the lines with level $j>k$; the remaining lines are extended up to time $\beta_i$, keeping their levels; the line hit by the mutation becomes the immune line.
\end{itemize}
\begin{figure}[t!]
\begin{center}
\begin{minipage}{0.5 \textwidth}
\centering
\scalebox{0.7}{\begin{tikzpicture}
\draw[dashed,thick,opacity=0.3] (-0.5,-0.5) --(-0.5,4.5);
\draw[dashed,thick,opacity=0.3] (8.5,-0.5) --(8.5,4.5);
\draw[dashed,thick,opacity=0.3] (0.8,-0.5) --(0.8,0) (0.8,1)--(0.8,4.5);
\draw[dashed,thick,opacity=0.3] (2,-0.5) --(2,1) (2,2)--(2,3) (2,4)--(2,4.5);
\draw[dashed,thick,opacity=0.3] (6.2,-0.5) --(6.2,0) (6.2,1) --(6.2,2) (6.2,4)--(6.2,4.5);
\draw[dashed,thick,opacity=0.3] (7.5,-0.5) --(7.5,1) (7.5,2) --(7.5,4.5) ;
\draw[dashed,thick,opacity=0.3] (4.5,-0.5) --(4.5,1) (4.5,2) --(4.5,4.5) ;
\node [right] at (-0.8,-0.9) {$T$};
\node [right] at (8.3,-0.9) {$0$};
\node [right] at (3.5,-1.5) {$\beta$};
\draw[-{triangle 45[scale=5]}] (4.5,-1.2) -- (2.5,-1.2) node[text=black, pos=.6, xshift=7pt]{};
\node [above] at (8.3,1) {$1$};
\node [above] at (7.2,2) {$1$};
\node [above] at (7.2,1) {$2$};
\node [above] at (5.8,0) {$3$};
\node [above] at (5.8,1) {$4$};
\node [above] at (5.8,2) {$2$};
\node [above] at (5.8,4) {$1$};
\node [above] at (4.2,0) {$3$};
\node [above] at (4.2,2) {$2$};
\node [above] at (4.2,4) {$1$};
\node [above] at (1.7,0) {$5$};
\node [above] at (1.7,1) {$3$};
\node [above] at (1.7,2) {$4$};
\node [above] at (1.7,3) {$1$};
\node [above] at (1.7,4) {$2$};
\node [above] at (0.5,1) {$3$};
\node [above] at (0.5,2) {$4$};
\node [above] at (0.5,3) {$1$};
\node [above] at (0.5,4) {$2$};
\draw[thick] (-0.5,4) -- (6.2,4);
\draw[thick] (-0.5,2) -- (7.5,2);
\draw[thick] (4.5,1) -- (8.5,1);
\draw[thick] (0.8,0) -- (6.2,0);
\draw[thick] (2,1) -- (-0.5,1);
\draw[thick] (2,3) -- (-0.5,3);
\draw[-{triangle 45[scale=5]},thick] (4.5,2) -- (4.5,1) node[text=black, pos=.6, xshift=7pt]{};
\draw[-{triangle 45[scale=5]},thick] (0.8,1) -- (0.8,0);
\draw[-{open triangle 45[scale=5]},thick] (7.5,2) -- (7.5,1);
\draw[-{open triangle 45[scale=5]},thick] (2,3) -- (2,4);
\draw[-{open triangle 45[scale=5]},thick] (2,1) -- (2,2);
\draw[-{open triangle 45[scale=5]},thick] (6.2,4) -- (6.2,2);
\draw[-{open triangle 45[scale=5]},thick] (6.2,0) -- (6.2,1);
\node[thick] at (0.2,4) {$\bigtimes$} ;
\node[ultra thick] at (8,1) {$\bigtimes$} ;
\draw[thick] (3.6,2) circle (1.5mm) [fill=white!100];
\draw[thick] (1.5,1) circle (1.5mm) [fill=white!100];
\end{tikzpicture} }
\end{minipage}\begin{minipage}{0.45 \textwidth}
\centering
\scalebox{0.7}{\begin{tikzpicture}
\draw[dashed,thick,opacity=0.3] (-0.5,-0.5) --(-0.5,4.5);
\draw[dashed,thick,opacity=0.3] (8.5,-0.5) --(8.5,4.5);
\draw[dashed,thick,opacity=0.3] (0.2,-0.5) --(0.2,4.5) ;
\draw[dashed,thick,opacity=0.3] (1.5,-0.5) --(1.5,4.5) ;
\draw[dashed,thick,opacity=0.3] (3.6,-0.5) --(3.6,4.5) ;
\draw[dashed,thick,opacity=0.3] (8,-0.5) --(8,4.5) ;
\node [right] at (-0.8,-0.9) {$T$};
\node [right] at (-0.1,-0.9) {$\beta_4$};
\node [right] at (1.2,-0.9) {$\beta_3$};
\node [right] at (3.3,-0.9) {$\beta_2$};
\node [right] at (7.7,-0.9) {$\beta_1$};
\node [right] at (8.3,-0.9) {$0$};
\node [right] at (3.5,-1.5) {$\beta$};
\draw[-{triangle 45[scale=5]}] (4.5,-1.2) -- (2.5,-1.2) node[text=black, pos=.6, xshift=7pt]{};
\node [above] at (8.3,1) {$1$};
\node [above] at (7.2,2) {$1$};
\node [above] at (7.2,1) {$2$};
\node [above] at (5.8,0) {$3$};
\node [above] at (5.8,1) {$4$};
\node [above] at (5.8,2) {$2$};
\node [above] at (5.8,4) {$1$};
\node [above] at (4.2,0) {$3$};
\node [above] at (4.2,2) {$2$};
\node [above] at (4.2,4) {$1$};
\node [above] at (1.7,1) {$3$};
\node [above] at (1.7,2) {$4$};
\node [above] at (1.7,3) {$1$};
\node [above] at (1.7,4) {$2$};
\node [above] at (0.5,1) {$3$};
\node [above] at (0.5,3) {$1$};
\node [above] at (0.5,4) {$2$};
\node [above] at (7.8,1) {$1$};
\node [above] at (3.3,4) {$1$};
\node [above] at (3.3,2) {$2$};
\node [above] at (1.2,4) {$2$};
\node [above] at (1.2,3) {$1$};
\node [above] at (1.2,1) {$3$};
\node [above] at (0,3) {$1$};
\node [above] at (0,1) {$2$};
\draw[semithick] (3.6,4) -- (6.2,4);
\draw[semithick] (0.2,4) -- (6.2,4);
\draw[line width=2.5pt] (1.5,2) -- (4.5,2);
\draw[semithick] (4.5,2) -- (7.5,2);
\draw[line width=2.5pt] (4.5,1) -- (8.5,1);
\draw[semithick] (3.6,0) -- (6.2,0);
\draw[semithick] (2,1) -- (1.5,1);
\draw[line width=2.5pt] (-0.5,1) -- (1.5,1);
\draw[semithick] (2,3) -- (-0.5,3);
\draw[-{triangle 45[scale=5]},thick] (4.5,2) -- (4.5,1) node[text=black, pos=.6, xshift=7pt]{};
\draw[-{open triangle 45[scale=5]},thick] (7.5,2) -- (7.5,1);
\draw[-{open triangle 45[scale=5]},thick] (2,3) -- (2,4);
\draw[-{open triangle 45[scale=5]},thick] (2,1) -- (2,2);
\draw[-{open triangle 45[scale=5]},thick] (6.2,4) -- (6.2,2);
\draw[-{open triangle 45[scale=5]},thick] (6.2,0) -- (6.2,1);
\node[thick] at (0.2,4) {$\bigtimes$} ;
\node[ultra thick] at (8,1) {$\bigtimes$} ;
\draw[thick] (3.6,2) circle (1.5mm) [fill=white!100];
\draw[thick] (1.5,1) circle (1.5mm) [fill=white!100];
\end{tikzpicture} }
\end{minipage}
\end{center}
\caption{LD-ASG (left) and its pruned LD-ASG (right). In the LD-ASG levels remain constant between the vertical dashed lines, in particular, they are not affected by mutation events. In the pruned LD-ASG, lines are pruned at mutation events, where an additional updating of the levels takes place. The bold line in the pruned LD-ASG represents the immune line.}
\label{LDASG}
\end{figure}
In the time intervals $[\beta_i,\beta_{i+1})$, $i\in[n-1]$, and $[\beta_n,T]$, the pruned LD-ASG evolves as the LD-ASG, and the immune line as in the case without mutations.
\smallskip
The pruning procedure has been defined so that the true ancestor always belongs to the pruned LD-ASG and so that the following result holds true.
\begin{lemma} [Theorem 4 in \cite{LKBW15}] \label{pruningdonnehtx}
If we assign types at instant $0$ in the pruned LD-ASG, the true ancestor is the line of type $0$ with smallest level or, if all lines have type $1$, it is the immune line.
\end{lemma}
The proof of Theorem 4 in \cite{LKBW15} can be adapted in a straightforward way to prove Lemma \ref{pruningdonnehtx}. The basic idea is to treat simultaneous branching events locally as simple branching events.
\smallskip
So far we have defined the pruned LD-ASG in a static way (as a function of a realization of the ASG). We can also provide definitions for the annealed and quenched pruned LD-ASG analogous to Definitions \ref{defkilledasgannealed} and \ref{defkilledasgquenched}. However, thanks to Lemma \ref{pruningdonnehtx}, in order to compute the ancestral type distribution, it is enough to keep track of the number of lines in the pruned LD-ASG. Hence, we will directly provide definitions for the corresponding line-counting processes.
\subsubsection{Ancestral type distribution: the annealed case} \label{s262}
We first describe the dynamic of the line-counting process associated to the pruned LD-ASG and relate it to $h_T(x)$. Next, we provide an expression for $h_T(x)$ and study its limiting behavior as $T\to\infty$.
\smallskip
By construction, the line-counting process of the annealed pruned LD-ASG, denoted by $L\coloneqq (L(\beta))_{\beta \geq 0}$, is a continuous-time Markov chain with values in $\mathbb{N}$ and generator matrix $Q^\mu\coloneqq (q^\mu(i,j))_{i,j\in\Nb}$ given by
\begin{equation}\label{kratespldasg}
q^\mu(i,j)\coloneqq \left\{\begin{array}{ll}
i(i-1)+(i-1) \theta\nu_1 + \theta\nu_0 &\text{if $j=i-1$},\\
i(\sigma + \sigma_{i,1}) &\text{if $j=i+1$},\\
\binom{i}{k}\sigma_{i,k} &\text{if $j=i+k,\, i\geq k\geq 2$},\\
\theta\nu_0&\textrm{if $1 \leq j < i-1$},\\
-(i-1)(i+\theta)-i\sigma-\int_{(0,1)}(1-(1-y)^i)\mu({\rm{d}} y)& \textrm{if $j=i$}\\
0&\textrm{otherwise}.
\end{array}\right.
\end{equation}
The next result establishes an important property of the process $L$.
\begin{lemma}[Positive recurrence]\label{pr}
The process $L$ is positive recurrent.
\end{lemma}
As a consequence, $L$ admits a unique stationary distribution that we denote by $\pi_L$. We let $L_{\infty}$ be a random variable with law $\pi_L$. The following result provides the formal relation between the pruned LD-ASG and the ancestral type distribution.
\begin{proposition} \label{exprhtannealed}
For all $T \geq 0$ and $x\in[0,1]$,
\begin{equation}\label{pldasgatd}
h_T(x)=1-\mathbb{E}[(1-x)^{L(T)}\mid L(0)=1].
\end{equation}
Moreover, $h(x)\coloneqq \lim_{T\to\infty} h_T(x)$ is well-defined, and
\begin{eqnarray}
h(x) = \sum_{n \geq 0} x(1-x)^{n} \mathbb{P}(L_{\infty} > n). \label{represh(x)tailpldasg}
\end{eqnarray}
\end{proposition}
In the absence of mutations, the previous proposition leads to the following result.
\begin{corollary}\label{hzero}
If $\theta=0$, then, for any $T>0$ and $x\in[0,1]$,
$$h_T(x)=\mathbb{E} [X(T) \mid X(0) = x].$$
Moreover, conditional on $X(0)=x$, the limit of $X(T)$ when $T\to\infty$ is almost surely well-defined and is a Bernoulli random variable with parameter $h(x)$. In particular, the absorbing states $0$ and $1$ are both accessible from any $x\in(0,1)$.
\end{corollary}
\begin{remark}
We point out that the accessibility of both boundaries given in the previous result is not covered by the criteria given in \cite[Thm. 3.2]{CSW19}. The latter holds for a fairly general class of neutral reproduction mechanisms, but excludes the possibility of a diffusive term.
\end{remark}
The next result characterizes the tail probabilities $a_n\coloneqq \mathbb{P}(L_\infty>n)$, $n\in\Nb_0$.
\begin{theorem}[Fearnhead's type recursion] \label{recursion}
For all $n\geq 1$, we have
\begin{equation}\label{fr}
(\sigma +\theta +n+1 )\, a_n= \sigma\, a_{n-1}+(\theta\nu_1+n+1)\,a_{n+1} + \frac{1}{n}\sum\limits_{j=1}^{n} \gamma_{n+1,j}\, (a_{j-1}-a_{j}),\quad n\in\Nb,
\end{equation}
where $\gamma_{i,j}\coloneqq \sum_{k=i-j}^{j}\binom{j}{k}\sigma_{j,k}$.
\end{theorem}
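For numerical purposes, $\pi_L$, the tail probabilities $a_n$ and $h(x)$ can be approximated by truncating the state space of $L$. The following Python sketch builds a finite version of the generator \eqref{kratespldasg} (with the diagonal adjusted so that the truncated chain remains conservative), solves $\pi Q^\mu=0$ for the stationary distribution, and evaluates \eqref{represh(x)tailpldasg}; it again uses the illustrative choice $\mu=\lambda\,\delta_{y_0}$ and the assumed binomial form of $\sigma_{m,k}$, and its output is only an approximation.
\begin{verbatim}
import numpy as np
from math import comb

sigma, theta, nu0, nu1 = 1.0, 0.5, 0.4, 0.6   # illustrative parameter values
lam_env, y0 = 2.0, 0.3                        # environment mu = lam_env * delta_{y0}
M = 60                                        # truncation level; states 1, ..., M

def s_mk(m, k):   # assumed sigma_{m,k} = int y^k (1-y)^(m-k) mu(dy)
    return lam_env * y0 ** k * (1.0 - y0) ** (m - k)

Q = np.zeros((M, M))                          # truncated generator (kratespldasg)
for a, i in enumerate(range(1, M + 1)):
    if i >= 2:
        Q[a, a - 1] += i * (i - 1) + (i - 1) * theta * nu1 + theta * nu0
    for j in range(1, i - 1):                 # beneficial mutation at a lower level
        Q[a, j - 1] += theta * nu0
    if a + 1 < M:
        Q[a, a + 1] += i * (sigma + s_mk(i, 1))
    for k in range(2, i + 1):
        if a + k < M:
            Q[a, a + k] += comb(i, k) * s_mk(i, k)
    Q[a, a] = -Q[a].sum()                     # keep the truncated chain conservative

# Stationary distribution: pi Q = 0 with sum(pi) = 1 (least squares on the truncation)
A = np.vstack([Q.T, np.ones(M)])
b = np.zeros(M + 1); b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]

tails = [pi[n:].sum() for n in range(1, M)]   # a_n = P(L_infty > n), n = 1, ..., M-1
def h(x):                                     # eq. (represh(x)tailpldasg), truncated
    return x * pi.sum() + sum(x * (1.0 - x) ** n * tails[n - 1] for n in range(1, M))

print("pi_L on the first five states:", np.round(pi[:5], 4))
print("h(0.3) ~", round(h(0.3), 4))
\end{verbatim}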
\subsubsection{Ancestral type distribution: the quenched case} \label{s263}
In this section the environment $\omega$ on $(-\infty,\infty)$ is fixed. We first describe the dynamic of the line-counting process associated to the pruned LD-ASG and relate it to $h^{\omega}_T(x)$. Next, we provide an expression for $h^{\omega}_T(x)$ and study its limit when $T\to\infty$.
\smallskip
Let us fix $T \in \mathbb{R}$ and let $(L_T(\omega,\beta))_{ \beta \geq 0}$ be the line-counting process of the quenched pruned LD-ASG constructed in Section \ref{pldasg}.
By construction, $(L_T(\omega,\beta))_{\beta \geq 0}$ is a continuous-time (inhomogeneous) Markov process with values in $\mathbb{N}$. Between the jumping times of $\omega$, it has transitions with rates given by $(q^0(i,j))_{i,j\in\Nb}$ (i.e., $\mu=0$ in \eqref{kratespldasg}); in addition, at each time $\beta \geq 0$ such that $\Delta\omega(T-\beta)>0$, we have, for all $i\in\Nb$ and $k\in[i]_0$,
\[ \ \mathbb{P}^{\omega} ( L_T(\omega,\beta) = i+k \mid L_T(\omega,\beta-) = i ) = \binom{i}{k} (\Delta \omega(T-\beta))^k (1-\Delta \omega(T-\beta))^{i-k}. \]
In other words, conditionally on $\{L_T(\omega,\beta-)=i\}$, we have $L_T(\omega,\beta) \sim i + Y$, where $Y \sim \bindist{i}{\Delta \omega(T-\beta)}$. In the case where $\omega$ is not simple, the fact that the line-counting process $(L_T(\omega,\beta))_{\beta\geq 0}$ is well-defined is guaranteed by the fact that the pruned LD-ASG constructed in Section \ref{pldasg} is well-defined and contains almost surely finitely many lines at all times, which follows from the same statement for the ASG (Proposition \ref{eqasg}).
The relation between the pruned LD-ASG and the ancestral type distribution can be stated as follows.
\begin{proposition} \label{exprht(x)quenched}
For all $T>0$, $x\in[0,1]$, we have
\begin{equation}\label{pldasgatdq}
h^{\omega}_T(x)=1-\mathbb{E}^{\omega}[(1-x)^{L_T(\omega,T-)}\mid L_T(\omega,0-)=1].
\end{equation}
\end{proposition}
\begin{remark}
In the case $\theta=0$ and $\omega$ simple, the processes $(R_T(\omega,\beta))_{ \beta \geq 0}$ and $(L_T(\omega,\beta))_{ \beta \geq 0}$ both coincide with the line-counting process of the quenched ASG. In particular, combining Theorem \ref{momdual} with Proposition \ref{exprht(x)quenched} leads to $h_T^\omega(x) = \mathbb{E} [X(\omega,T) \mid X(\omega,0) = x]$. Moreover, the diffusion $X(\omega,\cdot)$ is eventually almost surely absorbed at $\{0,1\}$. In particular, $h^\omega(x)$, the limit of $h_T^\omega(x)$ as $T\to\infty$, exists and equals the probability of fixation of the fit type.
\end{remark}
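In the same spirit, $h_T^\omega(x)$ can be estimated by simulating the line-counting process $L_T(\omega,\cdot)$ and averaging $(1-x)^{L_T(\omega,T-)}$ as in \eqref{pldasgatdq}. The Python sketch below does this for a hypothetical simple environment with two jumps in $(0,T)$ and illustrative parameter values (between the jumps the process moves with the rates $q^0(i,j)$).
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(7)

sigma, theta, nu0, nu1 = 1.0, 0.5, 0.4, 0.6     # illustrative parameter values
T = 3.0
env = [(0.5, 0.4), (2.1, 0.25)]                 # hypothetical jumps (t, Delta omega(t)), 0 < t < T
backward = sorted((T - t, dz) for t, dz in env) # seen from the ASG: beta = T - t

def sample_L_at_T():
    i, beta, jumps = 1, 0.0, list(backward)
    while True:
        down = i * (i - 1) + (i - 1) * theta * nu1 + theta * nu0 if i >= 2 else 0.0
        lower = (i - 2) * theta * nu0 if i >= 3 else 0.0   # beneficial mutation below the top
        up = i * sigma
        total = down + lower + up
        wait = rng.exponential(1.0 / total) if total > 0 else np.inf
        if jumps and beta + wait > jumps[0][0]:            # environmental jump comes first
            beta, dz = jumps.pop(0)
            i += rng.binomial(i, dz)
            continue
        if beta + wait >= T:
            return i                                       # value of L_T(omega, T-)
        beta += wait
        u = rng.uniform(0.0, total)
        if u < down:
            i -= 1
        elif u < down + lower:
            i = int(rng.integers(1, i - 1))                # uniform on {1, ..., i-2}
        else:
            i += 1

x, reps = 0.3, 20000
samples = np.array([sample_L_at_T() for _ in range(reps)])
print("h_T^omega(x) ~", round(1.0 - np.mean((1.0 - x) ** samples), 4))
\end{verbatim}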
In contrast to the annealed case, in the quenched case the line-counting process of the pruned LD-ASG does not have a stationary distribution. However, for the limit of $h^{\omega}_T(x)$ as $T\to\infty$ to be well-defined, we only need the distribution of $L_T(\omega,T-)$ to admit a limit as $T\to\infty$. The next theorem provides such a convergence result.
\begin{theorem} \label{h(x)=lim}
Assume that either 1) $\theta\nu_0 > 0$, or 2) $\omega$ is simple, has infinitely many jumps on $[0,\infty)$ and the distance between successive jumps does not converge to $0$. Then, for any $n \in \Nb$, the distribution of $L_T(\omega,T-)$ conditionally on $\{ L_T(\omega,0-) = n \}$ has a limit distribution $\mu^\omega\in\Ms_1(\Nb)$ as $T\to\infty$ that does not depend on $n$. In particular, the limit $h^{\omega}(x)\coloneqq \lim_{T\to\infty}h_T^\omega(x)$ is well-defined and
\begin{equation}\label{pldasgatdtpsinftyq}
h^{\omega}(x)\coloneqq 1- \sum_{n=1}^\infty \mu^\omega(\{n\})(1-x)^{n}.
\end{equation}
Moreover, if $\theta\nu_0 > 0$, then for any $x \in [0,1]$ and $T > 0$ we have
\begin{eqnarray}
\left | h^{\omega}(x) - h^{\omega}_T(x) \right | \leq 2e^{-\theta\nu_0 T}. \label{approxh(x)4}
\end{eqnarray}
\end{theorem}
Note that condition 2) is almost surely satisfied when $\omega$ is given by a realization of a compound Poisson process. Under Assumption 2) and the additional assumption that the selection parameter $\sigma$ equals zero, we can go further and express $h^{\omega}_T(x)$ as a function of the environment $\omega$. The next result will be crucial in order to obtain the expression for $h^{\omega}_T(x)$.
\begin{lemma}\label{diagpldasg}
Assume that $\sigma=0$ and set, for $k\in\Nb$, $\lambda_k\coloneqq -q^0(k,k)$, and, for $k\in\Nb$, $\gamma_k\coloneqq q^0(k,k-1)$. In addition, let
\begin{itemize}
\item $D$ be the diagonal matrix with diagonal entries $(-\lambda_i)_{i\in\Nb}$.
\item $U\coloneqq (u_{i,j})_{i,j\in\Nb}$ where, for all $i\in\Nb$, $u_{i,i} = 1$, $u_{i,j} = 0$ for $j > i$ and, when $i \geq 2$, $u_{i,i-1} \coloneqq \gamma_{i}/(\lambda_{i} - \lambda_{i-1})$ and the coefficients $(u_{i,j})_{j \in [i-2]}$ are defined via the recurrence relation
\begin{eqnarray}
u_{i,j} \coloneqq \frac{1}{\lambda_i - \lambda_j} \left ( \gamma_i u_{i-1,j} + \theta\nu_0 \left ( \sum_{l=j}^{i-2} u_{l,j} \right ) \right ). \label{recrelalphani4killed}
\end{eqnarray}
\item $V\coloneqq (v_{i,j})_{i,j\in\Nb}$ where, for all $i\in\Nb$, $v_{i,i} = 1$, $v_{i,j} = 0$ for $j > i$ and, when $i \geq 2$, the coefficients $(v_{i,j})_{j \in [i-1]}$ are defined via the recurrence relation
\begin{eqnarray}
v_{i,j} \coloneqq \frac{-1}{(\lambda_i - \lambda_j)} \left [ \left ( \sum_{l=j+2}^i v_{i,l} \right ) \theta\nu_0 + v_{i,j+1} \gamma_{j+1} \right ]. \label{recrelani4killed}
\end{eqnarray}
\end{itemize}
with the convention that an empty sum equals $0$. Then, we have
$$Q^0=U D V\quad \textrm{and}\quad U V=V U=Id.$$
\end{lemma}
\begin{remark}
Lemma \ref{diagpldasg} can be proved the same way as Lemma \ref{diag} so its proof will be omitted.
\end{remark}
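As for Lemma \ref{diag}, the recurrences \eqref{recrelalphani4killed} and \eqref{recrelani4killed} can be checked numerically: with $\sigma=0$ and the null environment the generator $Q^0$ is lower triangular, so restricting to the states $\{1,\ldots,M\}$ is exact. A minimal Python sketch with illustrative parameter values:
\begin{verbatim}
import numpy as np

theta, nu0, nu1 = 0.8, 0.3, 0.7               # illustrative values; sigma = 0
M = 12                                        # truncation; states 1, ..., M

lam = lambda k: (k - 1) * (k + theta)                                 # lambda_k = -q^0(k,k)
gam = lambda k: k * (k - 1) + (k - 1) * theta * nu1 + theta * nu0     # gamma_k  =  q^0(k,k-1)

U, V = np.eye(M), np.eye(M)
for i in range(2, M + 1):
    U[i - 1, i - 2] = gam(i) / (lam(i) - lam(i - 1))
    for j in range(i - 2, 0, -1):             # recurrence (recrelalphani4killed)
        tail = U[j - 1:i - 2, j - 1].sum()    # sum of u_{l,j}, l = j, ..., i-2
        U[i - 1, j - 1] = (gam(i) * U[i - 2, j - 1] + theta * nu0 * tail) / (lam(i) - lam(j))
    for j in range(i - 1, 0, -1):             # recurrence (recrelani4killed)
        tail = V[i - 1, j + 1:i].sum()        # sum of v_{i,l}, l = j+2, ..., i
        V[i - 1, j - 1] = -(theta * nu0 * tail + V[i - 1, j] * gam(j + 1)) / (lam(i) - lam(j))

Q = np.zeros((M, M))                          # generator q^0 restricted to {1, ..., M}
for i in range(2, M + 1):
    Q[i - 1, i - 2] = gam(i)
    Q[i - 1, :i - 2] = theta * nu0
    Q[i - 1, i - 1] = -lam(i)
D = np.diag([-lam(i) for i in range(1, M + 1)])

print("max |Q - U D V| :", np.abs(Q - U @ D @ V).max())
print("max |U V - Id|  :", np.abs(U @ V - np.eye(M)).max())
\end{verbatim}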
Now, we consider the polynomials $P_k$, $k \in\Nb$, defined via
\begin{eqnarray}
P_k(x) \coloneqq \sum_{i=0}^{k} v_{k,i} x^i. \label{newbasis9pldasg}
\end{eqnarray}
In addition, for $z\in(0,1)$, we define the matrices $\Bs(z)\coloneqq (\Bs_{i,j}(z))_{i,j\in\Nb}$ and $\beta(z)\coloneqq (\beta_{i,j}(z))_{i,j\in\Nb}$ via
\begin{equation} \label{defbetakipldasg}
\Bs_{i,j}(z)\coloneqq \Pb(i+B_i(z)=j),\quad \textrm{$i,j\in\Nb$}\quad\textrm{and}\quad\beta(z)= U^\top \Bs(z)^\top V^\top,
\end{equation}
where $B_i(z)\sim\bindist{i}{z}$. The fact that the matrix product defining $\beta(z)$ is well-defined can be justified similarly as in the proof of Theorem \ref{x0condpastenvII} where we show that $\beta^\dagger(z)$, defined in \eqref{defbetaki}, is well-defined.
\begin{theorem} \label{h(x)condenv}
Assume that $\sigma=0$ and let $\omega$ be a simple environment with infinitely many jumps on $[0,\infty)$ and such that the distance between the successive jumps does not converge to $0$. Let $N$ be the number of jumps of $\omega$ in $(0,T)$ and let $(T_i)_{i=1}^N$ be the sequence of those jumps in increasing order. We set $T_0 \coloneqq 0$ for convenience. For any $m\in[N]$, we define the matrix $A_m(\omega)\coloneqq (A_{i,j}^{m}(\omega))_{i,j\in\Nb}$ by
\begin{eqnarray}
A_m(\omega) \coloneqq \exp \left ( (T_m - T_{m-1}) D \right ) \beta(\Delta \omega(T_m)). \label{defmatApldasg}
\end{eqnarray}
Then for all $x \in (0,1), n \in \mathbb{N}$, we have
\begin{eqnarray}
h^{\omega}_T(x) = 1 - \sum_{k = 1}^{2^N} C_{1,k}(\omega,T) P_k(1-x), \label{exprfctgenlt4}
\end{eqnarray}
where the matrix $C(\omega,T)\coloneqq (C_{n,k}(\omega,T))_{k,n\in\Nb}$ is given by
\begin{align}
C(\omega,T)\coloneqq U \exp \left ( (T-T_N)D \right ) \left[A_1(\omega)A_{2}(\omega)\cdots A_N(\omega)\right]^\top. \label{defcoeff4}
\end{align}
Moreover, for any $x\in(0,1)$,
\begin{eqnarray}
\ h^{\omega}(x) = 1 - \sum_{k = 1}^{+\infty} C_{1,k}(\omega, \infty) P_k(1-x), \label{exprfctgenlt4inf}
\end{eqnarray}
where the series in \eqref{exprfctgenlt4inf} is convergent
and where
\begin{align}
C_{1,k}(\omega, \infty) \coloneqq \lim_{m \rightarrow +\infty} \left ( U \left[A_1(\omega)A_{2}(\omega)\cdots A_m(\omega)\right]^\top \right )_{1,k}, \label{defcoeff4inf}
\end{align}
and the above limit is well-defined.
\end{theorem}
\begin{remark}
The fact that the matrix products \eqref{defmatApldasg} and \eqref{defcoeff4} are well-defined can be justified similarly as in the proof of Theorem \ref{x0condpastenvII}.
\end{remark}
\section{Moran models: continuity with respect to the environment}\label{S3}
In this section, we aim to prove Theorem \ref{cont}, which states, in the context of a Moran model population, the continuity of the type-frequency process with respect to the environment. On the one hand, the paths of the type-frequency process are considered as elements of $\Db_T$, which is endowed with the $J_1$-Skorokhod topology, i.e. the topology induced by the Billingsley metric $d_T^0$ defined in Appendix \ref{A1}. On the other hand, the environment is described by means of a function in the set
$$\Db_T^\star\coloneqq \{\omega\in\Db_T: \omega(0)=0,\, \Delta \omega(t)\in[0,1)\textrm{ for all $t\in[0,T]$, $\omega$ is non-decreasing and pure-jump}\},$$
which is endowed with the topology induced by the metric $d_T^\star$ defined in Appendix \ref{A1}.
\smallskip
Let us denote by $\mu_N(\omega)$ the law of $(X_N(\omega,t))_{t\in[0,T]}$. Theorem \ref{cont} states the continuity of the mapping $\omega\mapsto \mu_N(\omega)$, where the set of probability measures on $\Db_T$ is equipped with the topology of weak convergence of measures. We will use the fact that the topology of weak convergence of probability measures on a complete and separable metric space $(E,d)$ is induced by the metric $\varrho_E$ defined in Appendix \ref{A2}. We will also prove some results about uniform convergence of finite dimensional distributions. For this we introduce some notation. For $\omega\in\Db_T^\star$, $n\in\Nb$ and $\vec{r}\coloneqq (r_i)_{i\in[n]}\in[0,T]^n$, $\mu_N^{\vec{r}}(\omega)$ denotes the law of $(X_N(\omega,r_i))_{i\in[n]}$. We consider $[0,1]^n$ equipped with the distance $d_1$ defined via $d_1( (x_i)_{i\in[n]}, (y_i)_{i\in[n]}) \coloneqq \sum_{i\in[n]}|x_i - y_i|$.
\smallskip
We start with a result that will help us to get rid of the small jumps of the environment. For $\delta>0$ and $\omega\in\Db_T^\star$, we define $\omega^\delta, \omega_\delta\in \Db_T$ via
$$\omega^\delta(t)\coloneqq \sum_{u\in[0,t]:\Delta \omega(u)\geq \delta} \Delta \omega(u)\quad\textrm{and}\quad \omega_\delta(t)\coloneqq \sum_{u\in[0,t]:\Delta \omega(u)< \delta} \Delta \omega(u).$$
Clearly, $\omega^\delta$ is simple and $\omega=\omega^\delta+\omega_\delta$. Moreover, $\omega_\delta \to 0$ pointwise as $\delta\to 0$, and hence for any $t\in[0,T]$,
$$d_t^\star(\omega,\omega^\delta)\leq \sum_{u\in[0,T]}\lvert \Delta \omega(u)-\Delta \omega^\delta(u)\rvert= \omega_\delta(T) \xrightarrow[\delta\to 0]{}0.$$
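The splitting $\omega=\omega^\delta+\omega_\delta$ is elementary to carry out in practice; the following toy Python sketch separates a hypothetical three-jump environment into its large-jump part $\omega^\delta$ and small-jump remainder $\omega_\delta$, whose total mass $\omega_\delta(T)$ bounds $d_T^\star(\omega,\omega^\delta)$ as above.
\begin{verbatim}
# A toy environment with three jumps on [0, T]; split it at level delta.
T, delta = 2.0, 0.1
jumps = [(0.4, 0.05), (1.1, 0.30), (1.7, 0.02)]           # (time u, Delta omega(u))

omega_big = [(u, dz) for u, dz in jumps if dz >= delta]   # jumps of omega^delta
omega_small = [(u, dz) for u, dz in jumps if dz < delta]  # jumps of omega_delta

print("omega^delta keeps the jumps:", omega_big)
print("omega_delta(T) =", sum(dz for _, dz in omega_small),
      "(bounds d_T^star(omega, omega^delta))")
\end{verbatim}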
\begin{proposition}\label{sjumps}
Let $\omega\in\Db_T^\star$, $N \geq 1$ and $T>0$. Assume that, for any $\delta>0$, we have $X_N(\omega^\delta,0)=X_N(\omega,0)$. Then
\begin{equation}\label{onedimbound}
\varrho_{[0,1]^n}(\mu_N^{\vec{r}}(\omega^\delta),\mu_N^{\vec{r}}(\omega))\leq n\,\omega_\delta(r_*)e^{\sigma r_*+ \omega(r_*)}, \ \forall \vec{r}\in[0,T]^n, n\in\Nb,
\end{equation}
where $r_*\coloneqq \max_{i\in[n]}r_i$. Moreover,
\begin{equation}\label{unibound}
\varrho_{\Db_T}(\mu_N(\omega^\delta),\mu_N(\omega))\leq\omega_\delta(T)e^{(1+\sigma) T+\omega(T)}.
\end{equation}
In particular,
$$(X_N(\omega^\delta, t))_{t\in[0,T]}\xrightarrow[\delta\to 0]{(d)}(X_N(\omega, t))_{t\in[0,T]}.$$
\end{proposition}
\begin{proof}
For $\delta>0$, we couple in $[0,T]$ a Moran model with parameters $(s,u,\nu_0,\nu_1)$ and environment $\omega$ to a Moran model with parameters $(s,u,\nu_0,\nu_1)$ and environment $\omega^\delta$ (both of size $N$) by using: (1) the same initial type configuration, (2) the same basic background, and (3) the same environmental background (see Section \ref{s212} for the definitions of basic and environmental backgrounds).
\smallskip
For any $t\in[0,T]$ and $a,b\in\{0,1\}$, we denote by $X_N^{a,b}(t)$ the proportion of individuals that at time $t$ have type $a$ under the environment $\omega$ and type $b$ under the environment $\omega^\delta.$ Clearly, we have
$$\lvert X_N(\omega^{\delta},t)-X_N(\omega,t)\rvert=\lvert X_N^{1,0}(t) - X_N^{0,1}(t) \rvert\leq X_N^{1,0}(t) + X_N^{0,1}(t)\coloneqq Z_N(t).$$
Note that $Z_N(t)$ corresponds to the proportion of individuals that have a different type at time $t$ under $\omega$ and $\omega^\delta$. Let us assume that at time $t$ a graphical element arises in the basic background. If the graphical element corresponds to a mutation event, then $Z_N(t)\leq Z_N(t-)$. If the graphical element is a neutral reproduction, we have
$$\Eb[Z_N(t)\mid Z_N(t-)]=Z_N(t-)+\frac{1}{N}Z_N(t-)(1-Z_N(t-))-\frac{1}{N}(1-Z_N(t-))Z_N(t-)=Z_N(t-).$$
If the graphical element corresponds to a selective event, then $$\Eb[Z_N(t)\mid Z_N(t-)]\leq\left(1+\frac1N\right)Z_N(t-).$$
Now, let us assume that $0\leq s<t\leq T$ are such that the interval $(s,t)$ does not contain jumps of $\omega^\delta$ nor selective events. In particular, in this interval only the population driven by $\omega$ is affected by the environment. Moreover, since neutral and mutation events do not increase the expected value of $Z_N(u)$, $u\in[0,T]$, we obtain $$\Eb[Z_N(t-)\mid Z_N(s)]\leq Z_N(s) +\sum_{u\in(s,t)}\Delta \omega(u).$$
In addition, if at time $t$ the environment $\omega^\delta$ jumps (there are only finitely many of these jumps), then $$\Eb[Z_N(t)\mid Z_N(t-)]=Z_N(t-)(1+\Delta \omega (t)).$$
Let $0\leq t_1<\cdots<t_n\leq T$ be the jumping times of $\omega^\delta$. From the previous discussion, we obtain
\begin{equation}\label{auxbound}
\Eb\left[Z_N(t_{i+1})\mid Z_N(t_i)\right]\leq \Eb\left[\left(1+\frac1N\right)^{K_i} \right]\left(Z_N(t_i)+\epsilon_i(\delta)\right)(1+\Delta \omega(t_{i+1})),
\end{equation}
where $\epsilon_i(\delta)\coloneqq \sum_{u\in(t_i,t_{i+1})}\Delta \omega(u)$ and $K_i$ is the number of selective events in $(t_i,t_{i+1})$. Note that $K_i$ has a Poisson distribution with parameter $N\sigma(t_{i+1}-t_i)$, and hence
$$\Eb\left[Z_N({t_{i+1}})\mid Z_N(t_i)\right]\leq e^{\sigma(t_{i+1}-t_i)}\left(Z_N(t_i)+\epsilon_i(\delta)\right)(1+\Delta \omega(t_{i+1})).$$
Iterating this formula and using that $Z_N(0)=0$ yields
\begin{equation}\label{boundMt}
\Eb\left[Z_N(t)\right]\leq e^{\sigma t} \omega_\delta(t)\prod_{i:t_i\leq t}(1+\Delta \omega(t_i))\leq\omega_\delta(t)e^{\sigma t+\sum_{u\in[0,t]}\Delta \omega(u)}.
\end{equation}
Recall the definition of the space $\textrm{BL}(E)$ from Appendix \ref{A2}. Note that, for any $n \geq 1$ and $F\in \textrm{BL}([0,1]^n)$, we have
\[ \left\lvert \int F {\rm{d}}\mu_N^{\vec{r}}(\omega^\delta)- \int F {\rm{d}}\mu_N^{\vec{r}}(\omega)\right\rvert = \left\lvert \mathbb{E} \left [ F((X_N(\omega^\delta,r_j))_{j\in[n]}) \right ] - \mathbb{E} \left [ F((X_N(\omega,r_j))_{j\in[n]}) \right ] \right\rvert. \]
Hence, if $\lVert F\rVert_{\textrm{BL}}\leq 1$ and we choose the above coupling for $X_N(\omega^{\delta},t)$ and $X_N(\omega,t)$, we get that
\begin{align*}
\left\lvert \mathbb{E} \left [ F((X_N(\omega^\delta,r_j))_{j\in[n]}) - F((X_N(\omega,r_j))_{j\in[n]}) \right ] \right\rvert & \leq \left\lvert \mathbb{E} \left [ d_1((X_N(\omega^\delta,r_j))_{j\in[n]}, (X_N(\omega,r_j))_{j\in[n]}) \right ] \right\rvert \\
& = \mathbb{E} \left [ \sum_{j\in[n]}|Z_N(r_j)| \right ] \leq \sum_{j\in[n]} \omega_\delta(r_j)e^{\sigma r_j+\sum_{u\in[0,r_j]}\Delta \omega(u)},
\end{align*}
where the last bound comes from \eqref{boundMt} applied at each $r_j$. Taking the supremum over all $F\in \textrm{BL}([0,1]^n)$ with $\lVert F\rVert_{\textrm{BL}}\leq 1$ and using the definition of the distance $\varrho_{[0,1]^n}$ in Appendix \ref{A2} we get \eqref{onedimbound}.
Now, define $Z_N^*(t)\coloneqq \sup_{u\in[0,t]}Z_N(u)$. If at time $t$ a neutral event occurs, then
$$\Eb[Z_N^*(t)\mid Z_N^*(t-)]\leq\left(1+\frac1N\right)Z_N^*(t-).$$
Other events can be treated as before, leading to \eqref{auxbound} with $K_i$ being this time the number of selective and neutral events in $(t_i,t_{i+1})$. Hence, Eq. \eqref{unibound} follows similarly to \eqref{onedimbound}. The convergence of $X_N(\omega^\delta,\cdot)$ towards $X_N(\omega,\cdot)$ is a direct consequence of \eqref{unibound}.
\end{proof}
\begin{proposition}\label{fjumps}
Let $\omega\in\Db_T^\star$ and $\{\omega_k\}_{k\in\Nb}\subset \Db_T^\star$ be such that $d_T^\star(\omega_k,\omega)\to 0$ as $k\to\infty$. If $\omega$ is simple and, for any $k\in\Nb$, $X_N(\omega,0)=X_N(\omega_k,0)$, then
\begin{equation}\label{convfinitejumps}
(X_N(\omega_k, t))_{t\in[0,T]}\xrightarrow[k\to \infty]{(d)}(X_N(\omega, t))_{t\in[0,T]}.
\end{equation}
\end{proposition}
\begin{proof}
The proof consists of two parts. In the first part, we construct a time deformation $\lambda_k\in\Cs_T^\uparrow$ with suitable properties. In the second part, we compare $X_N(\omega_k,\lambda_k(\cdot))$ and $X_N(\omega,\cdot)$ under an appropriate coupling of the underlying Moran models.
\smallskip
\textbf{Part 1:}
Without loss of generality, we can assume that $d_T^\star(\omega_k,\omega)>0$ for all $k\in\Nb$. Set $\epsilon_k\coloneqq 2d_T^\star(\omega_k,\omega)$, so that $d_T^\star(\omega_k,\omega)< \epsilon_k$. From the definition of the metric $d_T^\star$ in Appendix \ref{A1}, there is $\phi_k\in \Cs_T^\uparrow$ such that
$$\lVert \phi_{k}\rVert_T^0\leq \epsilon_k\quad\textrm{and}\quad\sum_{u\in[0,T]}|\Delta \omega(u)-\Delta (\omega_k\circ\phi_{k})(u)|\leq \epsilon_k. $$
Denote by $r_1<\cdots<r_n$ the consecutive jumps of $\omega$ in $[0,T]$. For simplicity we assume that $0<r_1\leq r_n<T$, but the proof can be easily adapted to the case where $\omega$ jumps at $T$. Set $\gamma_k\coloneqq T\,\sqrt{e^{\epsilon_k}-1}$. In the remainder of the proof we assume that $k$ is sufficiently large so that $\gamma_k\leq \min_{i\in[n]_0}(r_{i+1}-r_{i})/3$, where $r_0\coloneqq 0$ and $r_{n+1}\coloneqq T$. This condition ensures that the intervals $I_i^k\coloneqq [r_i-\gamma_k,r_i+\gamma_k]$, $i\in[n]$, are disjoint and contained in $[0,T]$. Now, we define $\lambda_k:[0,T]\to[0,T]$ via
\begin{itemize}
\item For $u\notin \cup_{i=1}^nI_i^k$: $\lambda_k(u)\coloneqq u$.
\item For $u\in [r_i-\gamma_k,r_i]:$ $\lambda_k(u)\coloneqq \phi_k(r_i)+m_i(u-r_i)$, where $m_i\coloneqq (\phi_k(r_i)-r_i+\gamma_k)/\gamma_k.$
\item For $u\in (r_i,r_i+\gamma_k]:$ $\lambda_k(u)\coloneqq \phi_k(r_i)+\bar{m}_i(u-r_i)$, where $\bar{m}_i\coloneqq (r_i+\gamma_k-\phi_k(r_i))/\gamma_k.$
\end{itemize}
For $k$ large enough so that $\epsilon_k < \log 2$, we can infer from $\lVert \phi_{k}\rVert_T^0\leq \epsilon_k$ and from $\gamma_k=T\,\sqrt{e^{\epsilon_k}-1}$ that $m_i$ and $\bar{m}_i$ are positive. It is then straightforward to check that $\lambda_k\in\Cs_T^\uparrow$,
$\lambda_k(I_i^k)=I_i^k$, $i\in[n]$, and that
$$\sum_{u\in[0,T]}|\Delta \omega(u)-\Delta \bar{\omega}_k(u)|\leq \epsilon_k,$$
where $\bar{\omega}_k\coloneqq \omega\circ\lambda_k$. Moreover, since $\lVert \phi_{k}\rVert_T^0\leq \epsilon_k$, we infer that $\phi_k(r_i)\in[e^{-\epsilon_k}r_i,e^{\epsilon_k}r_i]$. It follows that, for $k$ large enough,
$$1-2\sqrt{e^{\epsilon_k}-1} \leq m_i \leq 1+2\sqrt{e^{\epsilon_k}-1},\quad i\in[n],$$
and the same holds for $\bar{m}_i$. Therefore, the right derivative of $\lambda_k(\cdot)$, denoted $\lambda_k'(t)$, belongs to $[1-2\sqrt{e^{\epsilon_k}-1}, 1+2\sqrt{e^{\epsilon_k}-1}]$ for $t \in [0,T]$, and hence so does $(\lambda_k(s)-\lambda_k(t))/(s-t)$ for any $s,t\in[0,T]$ with $s\neq t$. We thus get that, for $k$ large enough,
$$\frac{\lambda_k(s)-\lambda_k(t)}{s-t}, \,\frac{s-t}{\lambda_k(s)-\lambda_k(t)}\leq 1+3\sqrt{e^{\epsilon_k}-1}.$$
Hence, using that $\log(1+x)\leq x$ for $x>-1$, we obtain $\lVert \lambda_k\rVert_T^0\leq 3\sqrt{e^{\epsilon_k}-1}$ for large $k$.
\smallskip
\textbf{Part 2:}
Recall the definition of basic and environmental background in Section \ref{s21}. We couple in $[0,T]$ a Moran model with parameters $(s,u,\nu_0,\nu_1)$ and environment $\omega$ to a Moran model with parameters $(s,u,\nu_0,\nu_1)$ and environment $\omega_k$ (both of size $N$) by using: (1) the same initial type configuration, (2) the same basic background, and (3) using in the second population the environmental background of the first one, time-changed by $\lambda_k^{-1}$. By construction of the function $\lambda_k$, under this coupling, the Moran model associated to $\omega$ and the Moran model associated to $\omega_k$, time-changed by $\lambda_k$, experience the same basic events outside the time intervals $I_i^k$, and at times $r_i$, the success of simultaneous environmental reproductions is decided according to the same uniform random variables.
\smallskip
For any $t\in[0,T]$, we denote by $Z_N(t)$ the proportion of individuals
that have a different type at time $t$ for $\omega$ and at time $\lambda_k(t)$ for $\omega_k$. Moreover, we set $Z_N^*(t)\coloneqq \sup_{u\in[0,t]}Z_N(u)$.
\smallskip
Consider the event $E_{k}\coloneqq \{\textrm{there are no basic events in $\cup_{i\in[n]}I_i^k$}\}$. Note that
$$\Pb(E_{k}^c)\leq n\left(1-e^{-2N(1+\sigma+\theta)\gamma_k}\right).$$
Moreover, on the event $E_{k}$, only the population driven by $\omega_k$ can change in $(r_i,r_i+\gamma_k]$, and this can only be due to environmental events. Hence,
\begin{equation}\label{eqc1}
\Eb[Z_N^*(r_i+\gamma_k)1_{E_k}]\leq \Eb[Z_N^*(r_i)1_{E_{k}}] +\sum_{u\in(r_i,r_i+\gamma_k]}\Delta\bar{\omega}_k(u).
\end{equation}
A similar argument allows us to infer that
\begin{equation}\label{eqc2}
\Eb[Z_N^*(r_{i+1}-)\,1_{E_{k}}]\leq \Eb[Z_N^*(r_{i+1}-\gamma_k)1_{E_{k}}]+ \sum_{u\in(r_{i+1}-\gamma_k,r_{i+1})}\Delta\bar{\omega}_k(u).
\end{equation}
Moreover, since in the interval $(r_i+\gamma_k,r_{i+1}-\gamma_k]$ there are no simultaneous jumps of the two environments, we can proceed as in the proof of Proposition \ref{sjumps} to obtain
\begin{equation}\label{eqc3}
\Eb[Z_N^*(r_{i+1}-\gamma_k)1_{E_{k}}]\leq e^{(1+\sigma)(r_{i+1}-r_{i})}\left(\Eb[Z_N^*(r_i+\gamma_k)1_{E_{k}}]+\sum_{u\in (r_i+\gamma_k,r_{i+1}-\gamma_k]}\Delta\bar{\omega}_k(u)\right).
\end{equation}
Moreover, at time $r_{i+1}$, there are two possible contributions to take into account: (i) the contribution of selective arrows arising simultaneously in both environments, and (ii) the contribution of selective arrows arising only in the environment with the larger jump. This leads to
\begin{equation}\label{eqc4}
\Eb[Z_N^*(r_{i+1})\,1_{E_{k}}]\leq \Eb[Z_N^*(r_{i+1}-)\,1_{E_{k}}](1+\Delta \omega(r_{i+1})\wedge \Delta \bar{\omega}_k(r_{i+1}))+ |\Delta \omega(r_{i+1})-\Delta \bar{\omega}_k(r_{i+1})|.
\end{equation}
Using \eqref{eqc1}, \eqref{eqc2}, \eqref{eqc3} and \eqref{eqc4}, we obtain
$$\Eb[Z_N^*(r_{i+1})\,1_{E_k}]\leq e^{(1+\sigma)(r_{i+1}-r_{i})}\left(\Eb[Z_N^*(r_{i})\,1_{E_k}]+\sum_{u\in
(r_i,r_{i+1}]}|\Delta \omega(u)-\Delta \bar{\omega}_k(u)|\right)(1+\Delta \omega(r_{i+1})).$$
Iterating this inequality, using that $Z_N^*(0)=0$, and adding the contribution of the interval $(r_n+\gamma_k,T]$, we obtain
$$\Eb[Z_N^*(T)\,1_{E_{k}}]\leq \epsilon_k e^{(1+\sigma)T+\sum_{u\in(0,T]}\Delta \omega(u)}.$$
Summarizing,
$$\Eb\left[d_T^0(X_N(\omega,\cdot),X_N(\omega_k,\cdot))\right] \leq 2n\left(1-e^{-2N(1+\sigma+\theta)\gamma_k}\right)+ \sqrt{e^{\epsilon_k}-1}\vee\left(\epsilon_k\, e^{(1+\sigma)T+\sum_{u\in(0,T]}\Delta \omega(u)} \right).$$
The result follows.
\end{proof}
\begin{proof}[Proof of Theorem \ref{cont} (Continuity)]
If $\omega$ has a finite number of jumps, the result follows directly from Proposition \ref{fjumps}. In the general case, note that for any $\delta>0$, we have
\begin{equation}\label{trii}
\Bl{\mu_N(\omega_k)}{\mu_N(\omega)}\leq \Bl{\mu_N(\omega_k)}{\mu_N(\omega_{k}^{\delta})}+ \Bl{\mu_N(\omega_{k}^{\delta})}{\mu_N(\omega^\delta)}+\Bl{\mu_N(\omega^{\delta})}{\mu_N(\omega)},
\end{equation}
where $\omega_{k}^{\delta}(t)\coloneqq \sum_{u\in[0,t]: \Delta\omega_k(u)\geq \delta}\Delta \omega_k(u)$.
Now, we claim that, for any $\delta\in A_\omega\coloneqq \{d>0:\Delta \omega(u)\neq d \textrm{ for any $u\in[0,T]$}\}$, we have
\begin{equation}\label{claim-cont}
d_T^\star(\omega_{k}^{\delta},\omega^\delta)\xrightarrow[k\to\infty]{}0.
\end{equation}
Assume the claim is true and let $\delta\in A_\omega$. Note that for any $\lambda\in\Cs^\uparrow_T$, we have
\begin{align*}
\omega_{k,\delta}(T)&\coloneqq \sum_{u\in[0,T]: \Delta\omega_k(u)< \delta}\Delta \omega_k(u)=\omega_k(T)-\omega_{k}^{\delta}(T)=\omega_k(\lambda(T))-\omega_{k}^{\delta}(\lambda(T))\\
&\leq |\omega_k(\lambda(T))-\omega(T)|+|\omega(T)-\omega^\delta(T)|+|\omega^\delta(T)-\omega_{k}^{\delta}(\lambda(T))|\\
&\leq d_T^0(\omega,\omega_k)+d_T^0(\omega_{k}^{\delta},\omega^\delta)+ \omega_\delta(T)\\
&\leq d_T^\star(\omega,\omega_k)+d_T^\star(\omega_{k}^{\delta},\omega^\delta)+ \omega_\delta(T).
\end{align*}
Using this, together with the claim and Proposition \ref{sjumps}, we infer that
\begin{equation*}
\limsup_{k\to\infty}\Bl{\mu_N(\omega_k)}{\mu_N(\omega_{k}^{\delta})}\leq \omega_\delta(T) e^{(1+\sigma)T+\omega(T)}.
\end{equation*}
Now, using Proposition \ref{fjumps} together with the claim, we conclude that
\begin{equation*}
\limsup_{k\to\infty}\Bl{\mu_N(\omega_{k}^{\delta})}{\mu_N(\omega^{\delta})}=0.
\end{equation*}
Hence, letting $k\to\infty$ in \eqref{trii} and using Proposition \ref{sjumps}, we obtain
$$\limsup_{k\to\infty}\Bl{\mu_N(\omega_k)}{\mu_N(\omega)}\leq 2\omega_\delta(T) e^{(1+\sigma)T+\omega(T)}.$$
The previous inequality holds for any $\delta\in A_\omega$. It is plain to see that $\inf A_\omega=0$. Hence, letting $\delta\to 0$ with $\delta\in A_\omega$ in the previous inequality yields the result.
It remains to prove the claim. Let $\delta\in A_\omega$. Since $d_T^\star(\omega_k,\omega)$ converges to $0$ as $k\to\infty$, there is a sequence $(\lambda_k)_{k\in\Nb}$ with $\lambda_k\in\Cs_T^\uparrow$ such that
$$\lVert \lambda_k\rVert_T^0\xrightarrow[k\to\infty]{}0\quad\textrm{ and }\quad \epsilon_k\coloneqq \sum_{u\in[0,T]}|\Delta(\omega_k\circ\lambda_k)(u)-\Delta \omega(u)| \xrightarrow[k\to\infty]{}0.$$
Set $\bar{\omega}_k=\omega_k\circ\lambda_k$. Clearly,
$\Delta\bar{\omega}_k(u)\leq \epsilon_k+\Delta \omega(u)$ and $\Delta\omega (u)\leq \epsilon_k+\Delta \bar{\omega}_k(u)$, $u\in[0,T].$
Therefore,
\begin{align*}
\omega_{k}^{\delta}(\lambda_k(t))-\omega^\delta(t)&=\sum_{u\in[0,t]: \Delta\bar{\omega}_k(u)\geq \delta}\Delta \bar{\omega}_k(u)-\sum_{u\in[0,t]: \Delta\omega(u)\geq \delta}\Delta \omega(u)\\
&\leq \sum_{u\in[0,t]: \Delta\omega(u)\geq \delta-\epsilon_k}\Delta \bar{\omega}_k(u)-\sum_{u\in[0,t]: \Delta\omega(u)\geq \delta}\Delta \omega(u)\\
&=\sum_{u\in[0,t]: \Delta\omega(u)\geq \delta-\epsilon_k}(\Delta \bar{\omega}_k(u)-\Delta \omega(u))+\sum_{u\in[0,t]: \Delta\omega(u)\in[\delta-\epsilon_k,\delta)}\Delta \omega(u)\\
&\leq d_T^\star(\omega_k,\omega)+\sum_{u\in[0,T]: \Delta\omega(u)\in[\delta-\epsilon_k,\delta)}\Delta \omega(u).
\end{align*}
Similarly, we obtain
\begin{align*}
\omega^\delta(t)-\omega_{k}^{\delta}(\lambda_k(t))&=\sum_{u\in[0,t]: \Delta\omega(u)\geq \delta}\Delta \omega(u)-\sum_{u\in[0,t]: \Delta\bar{\omega}_k(u)\geq \delta}\Delta \bar{\omega}_k(u)\\
&\leq \sum_{u\in[0,t]: \Delta\omega(u)\in[\delta,\delta+\epsilon_k)}\Delta \omega(u)+\sum_{u\in[0,t]: \Delta\bar{\omega}_k(u)\geq \delta}(\Delta \omega(u)-\Delta \bar{\omega}_k(u))\\
&\leq \sum_{u\in[0,T]: \Delta\omega(u)\in[\delta,\delta+\epsilon_k)}\Delta \omega(u)+d_T^\star(\omega_k,\omega).
\end{align*}
Thus, we deduce that
$$d^0_T(\omega_k^\delta,\omega^\delta)\leq d_T^\star(\omega_k,\omega)+\sum_{u\in[0,T]: \Delta\omega(u)\in(\delta-\epsilon_k,\delta+\epsilon_k)}\Delta \omega(u).$$
Since $\delta\in A_\omega$, letting $k\to\infty$ in the previous inequality yields $\lim_{k\to\infty }d^0_T(\omega_k^\delta,\omega^\delta)=0$. Recall that $\omega^\delta$ has a finite number of jumps, and hence, the claim follows using Lemma \ref{zero-star}.
\end{proof}
\section{Wright-Fisher process in random environment: existence, uniqueness and convergence}\label{S4}
We start this section with the proof of Proposition \ref{eandu} about the existence and pathwise uniqueness of strong solutions of \eqref{WFSDE}.
\begin{proof}[Proof of Proposition \ref{eandu} (Existence and uniqueness)]
In order to prove the existence and the pathwise uniqueness of strong solutions of \eqref{WFSDE} we use \cite[Thms. 3.2, 5.1]{LiPu12}. To this purpose, we first extend \eqref{WFSDE} to an SDE on $\Rb$ and we write it in the form of \cite[Eq. (2.1)]{LiPu12}. Note that by L\'{e}vy-It\^{o} decomposition, the pure-jump subordinator $J$ can be expressed as
$$J(t)=\int_{(0,1)} x\, N(t,{\rm{d}} x),\quad t\geq 0,$$
where $N({\rm{d}} s,{\rm{d}} x)$ is a Poisson random measure with intensity measure $\mu$. Hence, defining the functions $a,b:\Rb\to\Rb$ and $g:\Rb\times(0,1)\to\Rb$ via
$$a(x)\coloneqq \left\{\begin{array}{ll}
\sqrt{2x(1-x)},&\textrm{for $x\in[0,1]$}\\
0,&\textrm{otherwise}
\end{array}\right.,\quad
g(x,u)\coloneqq \left\{\begin{array}{ll}
x(1-x)u,&\textrm{for $x\in[0,1]$ and $u\in(0,1)$,}\\
0,&\textrm{otherwise.}
\end{array}\right.
$$
$$b(x)\coloneqq \left\{\begin{array}{ll}
\sigma x(1-x)+\theta\nu_0(1-x)-\theta\nu_1 x, &\textrm{for $x\in[0,1]$}\\
\theta \nu_0,&\textrm{for $x< 0$,}\\
-\theta\nu_1, &\textrm{for $x>1$,}
\end{array}\right.
$$
Eq. \eqref{WFSDE} can be extended to the following SDE on $\Rb$
\begin{equation}\label{ASDE}
X(t)=X(0)+\int_0^t a(X(s)){\rm{d}} B(s)+\int_0^t\int_{(0,1)}g(X(s-),u)N({\rm{d}} s,{\rm{d}} u)+\int_0^t b(X(s)){\rm{d}} s,\quad t\geq 0.
\end{equation}
By construction, any solution $(X(t))_{t\geq 0}$ of \eqref{ASDE} such that $X(t)\in[0,1]$ for any $t\geq 0$ is a solution of \eqref{WFSDE} and vice versa.
Note that the functions $a, b,g$ are continuous. Moreover, $b=b_1- b_2$, where
$$\begin{array}{llll}
b_1(x)\coloneqq \theta\nu_0+\sigma x\quad\textrm{for $x\in[0,1]$,} & b_1(x)\coloneqq \theta\nu_0\quad\textrm{for}\quad x\leq0,&\textrm{and}\quad b_1(x)\coloneqq \theta\nu_0+\sigma\quad\textrm{for}\quad x\geq1,\\
b_2(x)\coloneqq \theta x +\sigma x^2\quad\textrm{for $x\in[0,1]$,}& b_2(x)\coloneqq 0\quad\,\,\,\,\,\,\textrm{for}\quad x\leq0,&\textrm{and}\quad b_2(x)\coloneqq \theta+\sigma\quad\,\,\,\,\,\,\textrm{for}\quad x\geq1.
\end{array}$$
In addition, $b_2$ is non-decreasing. Thus, in order to apply \cite[Thms. 3.2, 5.1]{LiPu12}, we only need to verify the sufficient conditions (3.a), (3.b) and (5.a) therein. Condition (3.a) in our case amounts to proving that $x\mapsto b_1(x)+\int_{(0,1)}g(x,u)\mu({\rm{d}} u)$ is Lipschitz continuous. In fact, a straightforward calculation shows that
$$|b_1(x)-b_1(y)|+\int_{(0,1)}|g(x,u)-g(y,u)|\mu({\rm{d}} u)\leq \left(\sigma+\int_{(0,1)}u\mu({\rm{d}} u)\right)|x-y|,\quad x,y\in\Rb,$$
and hence (3.a) follows. Condition (3.b) amounts to proving that $x\mapsto a(x)$ is $1/2$-H\"older continuous. We claim that
\begin{equation}\label{lipsigma}
|a(x)-a(y)|^2\leq 2|x-y|,\quad x,y\in\Rb.
\end{equation}
One can easily check that \eqref{lipsigma} holds whenever $x\notin (0,1)$ or $y\notin (0,1)$. Now, assume that $x,y\in(0,1)$. If $x+y<1$, we have
\begin{align*}
|a(x)-a(y)|&=\frac{2|x-y|(1-(x+y))}{\sqrt{2x(1-x)}+\sqrt{2y(1-y)}}\leq \frac{\sqrt{2}|x-y|(1-(x+y))}{\sqrt{x(1-x)+y(1-y)}} \\
&=\frac{\sqrt{2}|x-y|(1-(x+y))}{\sqrt{(x+y)(1-(x+y))+2xy}}\leq \frac{\sqrt{2}|x-y|(1-(x+y))}{\sqrt{(x+y)(1-(x+y))}}\\
&\leq \sqrt{2|x-y|(1-(x+y))}.
\end{align*}
Since $a(z)=a(1-z)$ for all $z\in\Rb$, the same inequality holds for $x,y\in(0,1)$ such that $x+y> 1$, and the case $x+y=1$ is trivial. Hence, the claim is true and condition (3.b) follows. Therefore, \cite[Thm. 3.2]{LiPu12} yields the pathwise uniqueness for \eqref{ASDE}. Condition (5.a) follows from the fact that the functions $a, b$, $x\mapsto \int_{(0,1)} g(x,u)^2\mu({\rm{d}} u)$ and $x\mapsto \int_{(0,1)} g(x,u)\mu({\rm{d}} u)$ are bounded on $\Rb$. Hence, \cite[Thm. 5.1]{LiPu12} ensures the existence of a strong solution to \eqref{ASDE}. It remains to show that any solution of \eqref{ASDE} with $X(0)\in[0,1]$ is such that $X(t)\in[0,1]$ for any $t\geq 0$. Sufficient conditions implying such a result are provided in \cite[Prop. 2.1]{FuLi10}. The conditions on the diffusion and drift coefficients are satisfied, namely, $a$ is $0$ outside $[0,1]$ and $b(x)$ is positive for $x\leq 0$ and negative for $x\geq 1$. However, the condition on the jump coefficient, $x+g(x,u)\in[0,1]$ for every $x\in\Rb$, is not fulfilled. Nevertheless, the proof of \cite[Prop. 2.1]{FuLi10} works without modifications under the alternative condition $x+g(x,u)\in[0,1]$ for $x\in[0,1]$ and $g(x,u)=0$ for $x\notin[0,1]$, which is in turn satisfied. This ends the proof.
\end{proof}
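Although it plays no role in the argument, the following minimal Euler-type sketch (Python) may help to visualize solutions of \eqref{ASDE}; the time step, the finitely many jumps used in place of the subordinator $J$, and the parameter values are illustrative assumptions.
\begin{verbatim}
import numpy as np

def a(x):                                # diffusion coefficient, 0 outside [0,1]
    return np.sqrt(2 * x * (1 - x)) if 0 <= x <= 1 else 0.0

def b(x, sigma, theta, nu0, nu1):        # drift, extended to R as above
    if x < 0:
        return theta * nu0
    if x > 1:
        return -theta * nu1
    return sigma * x * (1 - x) + theta * nu0 * (1 - x) - theta * nu1 * x

def g(x, u):                             # jump coefficient, 0 outside [0,1]
    return x * (1 - x) * u if 0 <= x <= 1 else 0.0

def euler_path(x0, T, dt, jumps, sigma, theta, nu0, nu1, rng):
    """jumps: list of (time, size) pairs standing in for the subordinator J."""
    x, path, jumps, j = x0, [x0], sorted(jumps), 0
    for k in range(int(T / dt)):
        while j < len(jumps) and jumps[j][0] <= (k + 1) * dt:
            x += g(x, jumps[j][1])       # environmental jump
            j += 1
        x += b(x, sigma, theta, nu0, nu1) * dt \
             + a(x) * np.sqrt(dt) * rng.standard_normal()
        path.append(x)
    return np.array(path)

rng = np.random.default_rng(1)
path = euler_path(0.3, T=5.0, dt=1e-3, jumps=[(1.0, 0.4), (3.2, 0.2)],
                  sigma=1.0, theta=0.5, nu0=0.5, nu1=0.5, rng=rng)
\end{verbatim}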
\begin{lemma}\label{core} The solution of the SDE \eqref{WFSDE} is a Feller process with generator $\As$ satisfying for all $f\in \mathcal{C}^2([0,1],\Rb)$
\begin{align*}
\As f(x)&=x(1-x)f''(x)+ (\sigma x(1-x)+\theta\nu_0(1-x)-\theta\nu_1 x) f{'}(x)+\int\limits_{(0,1)}\left(f\left(x+ x(1-x)u\right)-f(x)\right)\mu({\rm{d}} u).
\end{align*}
Moreover, $\mathcal{C}^\infty([0,1],\Rb)$ is an operator core for $\As$.
\end{lemma}
\begin{proof}
Since pathwise uniqueness implies weak uniqueness (see \cite[Thm.~1]{BLP15}), we infer from \cite[Thm.~2.16]{Im16} that the martingale problem associated to $\As$ in $\Cs^{\infty}([0,1])$ is well-posed. Using \cite[Prop. 2.2]{vanC92}, we deduce that $X$ is Feller. The fact that $\Cs^\infty([0,1])$ is a core follows then from \cite[Thm.~2.5]{vanC92}.
\end{proof}
Now, we proceed to prove the annealed convergence of the Moran models towards the Wright--Fisher diffusion stated in Theorem \ref{alimit}.
\begin{proof}[Proof of Theorem \ref{alimit} (Annealed convergence)]
Let $\As_N^*$ and $\As$ be the infinitesimal generators of the processes $(X^N(Nt))_{t\geq 0}$ and $(X(t))_{t\geq 0}$, respectively. We will prove that, for all $f\in\Cs^\infty([0,1],\Rb)$
$$\sup\limits_{x\in E_N}|\As_N^* f(x)-\As f(x)|\xrightarrow[N\to\infty]{} 0.$$
Provided the claim is true, since $X$ is Feller and $\Cs^\infty([0,1],\Rb)$ is an operator core for $\As$ (see Lemma \ref{core}), the result follows by applying \cite[Theorem 19.25]{Kall02}. Thus, it remains to prove the claim. To this purpose, we decompose the generator $\As$ as $\As^{1}+\As^{2}+\As^3+\As^4$, where
\begin{align*}
\As^1f(x)\coloneqq x(1-x)f{''}(x),\qquad\qquad\qquad\qquad\qquad\qquad &\As^2f(x)\coloneqq (\sigma x(1-x)+\theta\nu_0(1-x)-\theta\nu_1 x) f{'}(x),\\
\As^3f(x)\coloneqq \int\limits_{(0,\varepsilon_N)}\!\!\left(f\left(x+ x(1-x)u\right)-f(x)\right)\mu({\rm{d}} u),\quad\quad &
\As^4f(x)\coloneqq \int\limits_{(\varepsilon_N,1)}\!\!\left(f\left(x+ x(1-x)u\right)-f(x)\right)\mu({\rm{d}} u).
\end{align*}
Similarly, we write $\As_N^*=\As_N^{1}+\As_N^{2}+\As_N^3+\As_N^4$, where
\begin{align*}
\As_N^1f(x)&\coloneqq N^2x(1-x)\left[f\left(x+\frac{1}{N}\right)+f\left(x-\frac{1}{N}\right)- 2f\left(x\right)\right],\\
\As_N^2f(x)&\coloneqq N^2(\sigma_N x(1-x)+\theta_N\nu_0(1-x))\left[f\left(x+\frac{1}{N}\right)- f\left(x\right)\right]+N^2\theta_N\nu_1x\left[f\left(x-\frac{1}{N}\right)- f\left(x\right)\right],\\
\As_N^3f(x)&\coloneqq \int\limits_{(0,\varepsilon_N)}\!\!\!\!\left(\Eb\left[f\left(x+\xi_N(x,u)\right)\right]-f(x)\right)\mu({\rm{d}} u),\quad
\As_N^4f(x)\coloneqq \int\limits_{(\varepsilon_N,1)}\!\!\!\!\left(\Eb\left[f\left(x+\xi_N(x,u)\right)\right]-f(x)\right)\mu({\rm{d}} u),
\end{align*}
and $\xi_N(x,u)\coloneqq \Hs(N,N(1-x),\beta(Nx,u))/N$, with
$\Hs(N,N(1-x),k)\sim\hypdist{N}{N(1-x)}{k}$, and $\beta(Nx,u)\sim\bindist{Nx}{u}$ being independent. We will choose $\varepsilon_N>0$ later in an appropriate way.
\smallskip
Let $f\in\Cs^\infty([0,1],\Rb)$. Note that
\begin{equation}\label{t0}
\sup_{x\in E_N} |\As_N^* f(x)-\As f(x)|\leq \sum_{i=1}^4 \sup_{x\in E_N} |\As_N^i f(x)-\As^i f(x)|.
\end{equation}
Using Taylor expansions of order three around $x$ for $f(x+1/N)$ and $f(x-1/N)$, we get
\begin{equation}\label{t1}
\sup\limits_{x\in E_N}|\As_N^1 f(x)-\As^1 f(x)|\leq \frac{\lVert f'''\rVert_\infty}{3N}
\end{equation}
Similarly, using the triangular inequality and appropriate Taylor expansions of order two yields
\begin{equation}\label{t2}
\sup\limits_{x\in E_N}|\As_N^2 f(x)-\As^2 f(x)|\leq \frac{(\sigma_N+\theta_N) \lVert f''\rVert_\infty}{2}+ (|\sigma-N\sigma_N|+|\theta-N\theta_N|)\lVert f'\rVert_{\infty}.
\end{equation}
For the third term, using that $\Eb[\xi_N(x,u)]=x(1-x)u$, we get
$$|\As^3_Nf(x)|\leq \lVert f'\rVert_\infty\int_{(0,\varepsilon_N)} u\mu({\rm{d}} u),\quad x\in[0,1],$$
and hence,
\begin{equation}\label{t3}
\sup\limits_{x\in E_N}|\As_N^3 f(x)-\As^3 f(x)|\leq 2 ||f'||_\infty\int\limits_{(0,\varepsilon_N)} u\mu({\rm{d}} u).
\end{equation}
For the last term, we first note that
\begin{align*}
\left|\Eb\left[f\left(x+\xi_N(x,u)\right)-f(x+x(1-x)u)\right]\right|&
\leq \lVert f'\rVert_{\infty}\Eb\left[\left|\xi_N(x,u)-x(1-x)u\right|\right]\\
&\leq \lVert f'\rVert_{\infty}\sqrt{\Eb\left[\left(\xi_N(x,u)-x(1-x)u\right)^2\right]}\\
&\leq\lVert f'\rVert_{\infty} \sqrt{\frac{u}{N}}.
\end{align*}
In the last inequality we used that
\begin{equation}\label{hybimix}
\Eb\left[\left(\xi_N(x,u)-x(1-x)u\right)^2\right]=\frac{x(1-x)^2u(1-u)}{N}+\frac{Nx^2(1-x)u^2}{N^2(N-1)}\leq \frac{u}{N},
\end{equation}
which is obtained from standard properties of the hypergeometric and binomial distributions. Hence,
\begin{equation}\label{t4}
\sup\limits_{x\in E_N}|\As_N^4 f(x)-\As^4 f(x)|\leq ||f'||_\infty\int\limits_{(\varepsilon_N,1)} \sqrt{\frac{u}{N}}\,\mu(du)\leq \frac{||f'||_\infty}{\sqrt{N\varepsilon_N}}\int\limits_{(\varepsilon_N,1)} u\,\mu({\rm{d}} u) .
\end{equation}
Since $\int_{(0,1)} u\mu({\rm{d}} u)<\infty$, choosing $\varepsilon_N\coloneqq 1/\sqrt{N}$, the result follows by plugging \eqref{t1}, \eqref{t2}, \eqref{t3} and \eqref{t4} in \eqref{t0} and letting $N\to\infty$.
\end{proof}
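As a purely illustrative sanity check of the estimate \eqref{t1} (not part of the argument), one can compare $\As_N^1 f$ and $\As^1 f$ numerically for a smooth test function; the choice $f(x)=x^4$ and the grid below are arbitrary assumptions.
\begin{verbatim}
import numpy as np

def A1_N(f, x, N):                 # discrete part A_N^1
    return N**2 * x * (1 - x) * (f(x + 1/N) + f(x - 1/N) - 2 * f(x))

def A1(fpp, x):                    # limiting part A^1, fpp = f''
    return x * (1 - x) * fpp(x)

f, fpp = (lambda x: x**4), (lambda x: 12 * x**2)
for N in (10, 100, 1000):
    xs = np.arange(N + 1) / N      # grid {0, 1/N, ..., 1}
    err = max(abs(A1_N(f, x, N) - A1(fpp, x)) for x in xs)
    print(N, err)                  # the error vanishes as N grows
\end{verbatim}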
Now, we turn our attention to the asymptotic behavior of a sequence of Moran models in the quenched case. We start with the following lemma that concerns the Moran model with null environment.
\begin{lemma}\label{nullest}
Let $X_N^0(t)\coloneqq X_N({\rm{\textbf{0}}},Nt)$, $t\geq 0$. For any $x\in E_N$ and $t\geq 0$, we have
$$\Eb_x\left[\left(X_N^0(t)-x\right)^2\right]\leq \left(\frac12 + N(\sigma_N+3\theta_N)\right)t,$$
and
$$-N\theta_N\nu_1 t\leq \Eb_x\left[X_N^0(t)-x\right]\leq N(\sigma_N+\theta_N\nu_0) t.$$
\end{lemma}
\begin{proof}
Fix $x\in E_N$ and consider the functions $f_x,g_x:E_N\to[-1,1]$ defined via $f_x(z)\coloneqq (z-x)^2$ and $g_x(z)\coloneqq z-x$, $z\in E_N$. The process $X_N^0$ is a Markov chain with generator $\As_N^0\coloneqq \As_N^1+\As_N^2$, where $\As_N^1$ and $\As_N^2$ are defined in the proof of Theorem \ref{alimit}. Moreover, for every $z\in E_N$, we have
\begin{align*}
\As_N^0 f_x(z)&=2z(1-z)+N\left[(\sigma_N z+\theta_N \nu_0)(1-z)\left(2(z-x)+\frac1N\right)+\theta_N\nu_1 z\left(2(x-z)+\frac1N\right)\right]\\
&\leq \frac12 + N(\sigma_N+3\theta_N),
\end{align*}
and
\begin{align*}
\As_N^0 g_x(z)&=N\left[(\sigma_N z+\theta_N \nu_0)(1-z)-\theta_N\nu_1 z\right]\in[-N\theta_N\nu_1,N(\sigma_N+\theta_N\nu_0)].
\end{align*}
Hence, Dynkin's formula applied to $X_N^0$ with the function $z\in E_N\mapsto(z-x)^2$ leads to
$$\Eb_x\left[\left(X_N^0(t)-x\right)^2\right]=\int_0^t \Eb\left[\As_N^0 f_x(X_N^0(s))\right]{\rm{d}} s\leq \left(\frac12 + N(\sigma_N+3\theta_N)\right)t.$$
Similarly, applying Dynkin's formula to $X_N^0$ with $g_x$, we obtain
$$\Eb_x\left[X_N^0(t)-x\right]=\int_0^t \Eb\left[\As_N^0 g_x(X_N^0(s))\right]{\rm{d}} s\in[-N\theta_N\nu_1 t,N(\sigma_N+\theta_N\nu_0)t],$$
which ends the proof.
\end{proof}
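The generator computations above translate directly into a birth--death simulation of $X_N^0$; the following Monte Carlo sketch (Python, purely illustrative) checks the first bound of Lemma \ref{nullest} for one arbitrary choice of $N$, $t$ and parameters.
\begin{verbatim}
import numpy as np

def moran_null_env(i0, N, t_end, sigma_N, theta_N, nu0, nu1, rng):
    # Gillespie simulation of X_N^0; rates read off from A_N^1 + A_N^2.
    i, t = i0, 0.0
    while True:
        z = i / N
        up = N**2 * (z * (1 - z) + sigma_N * z * (1 - z) + theta_N * nu0 * (1 - z))
        down = N**2 * (z * (1 - z) + theta_N * nu1 * z)
        if up + down == 0.0:
            return z
        t += rng.exponential(1.0 / (up + down))
        if t > t_end:
            return z
        i += 1 if rng.random() < up / (up + down) else -1

rng = np.random.default_rng(2)
N, t_end, x0 = 50, 0.2, 0.4
sigma_N, theta_N, nu0, nu1 = 1.0 / N, 1.0 / N, 0.5, 0.5
samples = [moran_null_env(int(x0 * N), N, t_end, sigma_N, theta_N, nu0, nu1, rng)
           for _ in range(2000)]
second_moment = np.mean([(s - x0) ** 2 for s in samples])
print(second_moment, "<=", (0.5 + N * (sigma_N + 3 * theta_N)) * t_end)
\end{verbatim}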
\begin{proposition}[Quenched tightness]\label{q-tight}
Assume that $N \sigma_N\rightarrow \sigma$ and $N\theta_N\rightarrow \theta$ as $N\to\infty$. Let $\omega\in\Db^\star$ and set $\omega_N(t)\coloneqq \omega(t/N)$ for $t\geq 0$. Then the sequence of processes $((X_N(\omega_N, Nt))_{t\geq 0})_{N\in\Nb}$ is tight.
\end{proposition}
\begin{proof}
We set $X_N^\omega(t)\coloneqq X_N(\omega_N, Nt)$ for $t\geq 0$ and $N\geq 1$. In order to prove the tightness of the sequence $((X_N^\omega(t))_{t\geq 0})_{N\geq 1}$ we use \cite[Thm. 1]{BKS16}. For this we will verify conditions A1) and A2) therein. Since $X_N^\omega(t)\in[0,1]$ for all $t\geq 0$ and $N\geq 1$, the compact containment condition A1) is satisfied. In addition, we claim that there exist constants $c,C>0$, independent of $N$, such that for all $s,t\geq 0$ with $s\leq t$
$$\Eb\left[(X_N^\omega(t)-X_N^\omega(s))^2\mid \Fs_s^N\right]\leq c\sum_{u\in[s,t]}\Delta \omega(u)+ C (t-s),$$
where $(\Fs_s^N)_{s\geq 0}$ denotes the natural filtration associated to the process $X_N^\omega$. If the claim is true, then condition A2) is satisfied with $\beta=2$ and $F_N(t)=F(t)=c\sum_{u\in[0,t]}\Delta \omega(u)+Ct$, and the result follows from \cite[Thm. 1]{BKS16}. The rest of the proof is devoted to proving the claim.
\smallskip
For $x\in E_N$ and $t\geq 0$, we set
$$\psi_x(\omega,t)\coloneqq \Eb_x[(X_N^\omega(t)-x)^2].$$
For $s\geq 0$, we set $\omega_s(\cdot)\coloneqq \omega(s+\cdot)$. From the definition of $X_N^\omega$, it follows that, for any $0\leq s<t$,
\begin{equation}\label{InMP}
\Eb\left[(X_N^\omega(t)-X_N^\omega(s))^2\mid \Fs_s^N\right]=\psi_{X_N^\omega(s)}(\omega_s,t-s).
\end{equation}
Let $0\leq s<t$. We split the proof of the claim in three cases.
\smallskip
\textbf{Case 1: $\omega$ has no jumps in $(s,t]$.} In particular, $\omega_s$ has no jumps in $(0,t-s]$. Hence, restricted to $[0,t-s]$, $X_N^{\omega_s}$ has the same distribution as $X_N^0$. Using Lemma \ref{nullest} with $x=X_N^\omega(s)$ and plugging the result in \eqref{InMP}, we infer that in this case, the claim holds for any $c\geq 1$ and $C\geq C_1\coloneqq 1/2+\sup_{N\in\Nb}(N(\sigma_N+3\theta_N))$.
\smallskip
\textbf{Case 2: $\omega$ has $n$ jumps in $(s,t]$.} Let $t_1,\ldots,t_n\in(s,t]$ be the jumping times of $\omega$ in $(s,t]$ in increasing temporal order. We set $t_0\coloneqq s$ and $t_{n+1}=t$. For any $i\in[n+1]$ and any $r\in(t_{i-1},t_{i})$, $\omega$ has no jumps in $(t_{i-1},r]$. In particular, $(t_{i-1},r]$ falls into case 1. Using the claim and letting $r\to t_i$, we obtain
$$\Eb\left[(X_N^\omega(t_i -)-X_N^\omega(t_{i-1}))^2\mid \Fs_{t_{i-1}}^N\right]\leq C_1(t_i-t_{i-1}).$$
Moreover,
$$\Eb\left[(X_N^\omega(t_i)-X_N^\omega(t_{i}-))^2\mid \Fs_{t_{i}-}^N\right]\leq \Eb\left[\left(\frac{B_{N,\Delta \omega(t_i)}}{N}\right)^2\right]\leq \Delta \omega(t_i).$$
Using the two previous inequalities and the tower property of the conditional expectation, we get
\begin{equation}\label{oneint}
\Eb\left[(X_N^\omega(t_i)-X_N^\omega(t_{i-1}))^2\mid \Fs_{s}^N\right]\leq 2C_1(t_i-t_{i-1})+2\Delta\omega(t_i).
\end{equation}
Now, note that
\begin{align*}
(X_N^\omega(t)-X_N^\omega(s))^2&=\left(\sum_{i=0}^n( X_N^\omega(t_{i+1})-X_N^\omega(t_i))\right)^2\\
&=\sum_{i=0}^n( X_N^\omega(t_{i+1})-X_N^\omega(t_i))^2+2\sum_{i=0}^n( X_N^\omega(t_{i+1})-X_N^\omega(t_i))(X_N^\omega(t_i)-X_N^\omega(s)).
\end{align*}
Using Eq. \eqref{oneint}, we see that
$$\Eb\left[\sum_{i=0}^n(X_N^\omega(t_{i+1})-X_N^\omega(t_{i}))^2\mid \Fs_{s}^N\right]\leq 2C_1(t-s)+2\sum_{i=1}^n\Delta\omega(t_i).$$
Moreover, we have
\begin{align*}
\Eb\left[(X_N^\omega(t_{i+1})-X_N^\omega(t_{i}))(X_N^\omega(t_{i})-X_N^\omega(s))\mid \Fs_{t_i}^N\right]=\phi_{X_N^\omega(s),X_N^\omega(t_i)}(\omega_{t_i},t_{i+1}-t_{i}),
\end{align*}
where $\phi_{x,y}(\omega,t)\coloneqq (y-x)\Eb_y[X_N^\omega(t)-y]$. Since, for any $r\in(t_i,t_{i+1})$, $\omega_{t_i}$ has no jumps in $(t_i,r]$, we can use Lemma \ref{nullest} to infer that
$$\phi_{x,y}(\omega_{t_i}, r-t_i)\leq N((\sigma_N+\theta_N\nu_0)\vee\theta_N\nu_1)(r-t_i).$$
Note that $(y-x)\Eb_y[X_N^{\omega_{t_i}}(t_{i+1}-t_i)-X_N^{\omega_{t_i}}((t_{i+1}-t_i)-)]\leq \Delta \omega(t_{i+1})$. Hence, letting $r\to t_{i+1}$, we get
$$\phi_{x,y}(\omega_{t_i}, t_{i+1}-t_i)\leq N((\sigma_N+\theta_N\nu_0)\vee\theta_N\nu_1)(t_{i+1}-t_i)+\Delta\omega(t_{i+1}).$$
Altogether, we obtain
$$\Eb\left[(X_N^\omega(t)-X_N^\omega(s))^2\mid \Fs_{s}^N\right]\leq C_2(t-s)+3\sum_{i=1}^n\Delta\omega(t_i),$$
where $C_2\coloneqq 2C_1+\sup_{N\in\Nb}N((\sigma_N+\theta_N\nu_0)\vee\theta_N\nu_1)$. Hence, the claim holds for any $C\geq C_2$ and $c\geq 3$.
\smallskip
\textbf{Case 3: $\omega$ has infinitely many jumps in $(s,t]$}. For any $\delta>0$, we consider $\omega^\delta$ as in Section \ref{S3} and we couple the processes $X_N^\omega$ and $X_N^{\omega^\delta}$ as in the proof of Proposition \ref{sjumps}. Note that $\omega^\delta$ has only a finite number of jumps in any compact interval, thus $\omega^\delta$ falls into case 2. Moreover, we have
\begin{align*}
\psi_x(\omega,t)&\leq 2\Eb_x[(X_N^\omega(t)-X_N^{\omega^\delta}(t))^2]+2\Eb_x[(X_N^{\omega^\delta}(t)-x)^2]\\
&\leq 2\Eb_x[|X_N^\omega(t)-X_N^{\omega^\delta}(t)|]+2\Eb_x[(X_N^{\omega^\delta}(t)-x)^2]\\
&\leq 2e^{N\sigma_N t+\omega(t)} \sum_{u\in[0,t]:\Delta \omega(u)<\delta}\Delta \omega(u)+2\Eb_x[(X_N^{\omega^\delta}(t)-x)^2],
\end{align*}
where in the last inequality we use Proposition \ref{sjumps}. Now, using the claim for $X_N^{\omega^\delta}$ and the previous inequality, we obtain
$$\Eb\left[(X_N^\omega(t)-X_N^\omega(s))^2\mid \Fs_{s}^N\right]\leq e^{N\sigma_N (t-s)+\omega(t-s)}\sum_{u\in[s,t]:\Delta \omega(u)<\delta}\Delta \omega(u)+2C_2(t-s)+6\sum_{u\in[s,t]}\Delta\omega(u).$$
We let $\delta\to 0$ and conclude that the claim holds for any $C\geq 2C_2$ and $c\geq 6$.
\end{proof}
Now, we proceed to prove the quenched convergence of the sequence of Moran models to the Wright--Fisher diffusion, under the assumption that the environment is simple.
\begin{proof}[Proof of Theorem \ref{qlimit} (Quenched convergence)]
Let $B\coloneqq (B(t))_{t\geq 0}$ be a standard Brownian motion. We denote by $X^0$ the unique strong solution of \eqref{WFSDE} with null environment associated to $B$.
\smallskip
For any simple environment $\omega$, we set $X_N^\omega(t)\coloneqq X_N(\omega_N, Nt)$ for $t\geq 0$ and $N\geq 1$. Theorem \ref{alimit} implies in particular that $X_N^0$ converges to $X^0$ as $N\to\infty$.
\smallskip
Now, assume that $\omega$ is simple, but not constant equal to $0$. We denote by $T_\omega$ the set of jumping times of $\omega$ in $(0,\infty)$. Moreover, for $0<i< |T_\omega|+1$, we denote by $t_i\coloneqq t_i(\omega)\in T_\omega$ the time of the $i$-th jump of $\omega$. We set $t_0=0$ and $t_{|T_\omega|+1}=\infty$.
Therefore, we need to prove that, under the assumptions of the Theorem, we have
$$(X_N^\omega(t))_{t\geq 0}\xrightarrow[N\to\infty]{(d)} (X^\omega(t))_{t\geq 0},$$
where the process $X^\omega$ is defined as follows.
\begin{itemize}
\item $X^\omega(0)=X_0$.
\item For $i\in\Nb$ with $i\leq |T_\omega|+1$, the restriction of $X^{\omega}$ to the interval $(t_{i-1},t_i)$ is given by a version of $X^0$ started at $X^\omega(t_{i-1})$.
\item For $0<i< |T_\omega|+1$, conditionally on $X^\omega(t_i-)$, $$X^\omega(t_i)=X^\omega(t_i-)+X^\omega(t_i-)(1-X^\omega(t_i-))\Delta\omega(t_i).$$
\end{itemize}
Since the sequence $(X_N^\omega)_{N\in\Nb}$ is tight, it is enough to prove the convergence at the level of the finite dimensional distributions. More precisely, we will prove by induction on $i\in\Nb$ with $i\leq |T_\omega|+1$ that for any finite set $I\subset[0,t_i)$, we have $$((X_N^\omega(t))_{t\in I},X_N^\omega(t_i-))\xrightarrow[N\to\infty]{(d)} ((X^\omega(t))_{t\in I},X^\omega(t_i-)).$$
The result for $i=1$ follows from Theorem \ref{alimit} and the fact that $X_N^\omega(t_1-)=X_N^0(t_1)$ and $X^\omega(t_1-)=X^0(t_1)$ almost surely. Now, assume that the result is true for some $i<|T_\omega|+1$ and let $I\subset(0,t_{i+1})$. Without loss of generality we assume that $I=\{s_1,\ldots,s_k, t_i,r_1,\ldots, r_m\}$, with $s_1<\cdots<s_k<t_i<r_1<\cdots<r_m$. We also assume that $i<|T_\omega|$; the other case, i.e. $i=|T_\omega|<\infty$, follows using an analogous argument.
Let $F:[0,1]^{k+1}\to\Rb$ be a Lipschitz function with $\lVert F\rVert_{\textrm{BL}}\leq 1$. Note that
\begin{align*}
\Eb\left[F((X_N^\omega(s_j))_{j=1}^k,X_N^\omega(t_i))\right]&=\Eb\left[F((X_N^\omega(s_j))_{j=1}^k,X_N^\omega(t_i-)+\xi_N(X_N^\omega(t_i-),\Delta\omega(t_i)))\right],
\end{align*}
where for $x\in E_N$ and $u\in(0,1)$, $\xi_N(x,u)\coloneqq \Hs(N,N(1-x),\beta(Nx,u))/N$ with $\Hs(N,N(1-x),k)\sim\hypdist{N}{N(1-x)}{k}$, $k\in[N]_0$, and $\beta(Nx,u)\sim\bindist{Nx}{u}$ being independent between them and independent of $X_N^\omega$. Now, set
\begin{align*}
D_N&\coloneqq \Eb\left[F((X_N^\omega(s_j))_{j=1}^k,X_N^\omega(t_i-)+\xi_N(X_N^\omega(t_i-),\Delta\omega(t_i)))\right]\\
&\,\,\,-\Eb\left[F((X_N^\omega(s_j))_{j=1}^k,X_N^\omega(t_i-)+X_N^\omega(t_i-)(1-X_N^\omega(t_i-))\Delta\omega(t_i))\right].
\end{align*}
Using that $\lVert F\rVert_{\textrm{BL}}\leq 1$ and \eqref{hybimix}, we see that
$|D_N| \leq\sqrt{\Delta\omega(t_i)/N}\to 0$ as $N\to\infty$.
Therefore, the induction hypothesis yields
\begin{align*}
\Eb\left[F((X_N^\omega(s_j))_{j=1}^k,X_N^\omega(t_i))\right]&=D_N+\Eb\left[F((X_N^\omega(s_j))_{j=1}^k,X_N^\omega(t_i-)+X_N^\omega(t_i-)(1-X_N^\omega(t_i-))\Delta\omega(t_i))\right]\\
&\xrightarrow[N\to\infty]{}\,\Eb\left[F((X^\omega(s_j))_{j=1}^k,X^\omega(t_i-)+X^\omega(t_i-)(1-X^\omega(t_i-))\Delta\omega(t_i))\right]\\
&=\Eb\left[F((X^\omega(s_j))_{j=1}^k,X^\omega(t_i))\right].
\end{align*}
Therefore,
\begin{equation}\label{uptoti}
((X_N^\omega(s_j))_{j=1}^k,X_N^\omega(t_i))\xrightarrow[N\to\infty]{(d)} ((X^\omega(s_j))_{j=1}^k,X^\omega(t_i)).
\end{equation}
Let $G:[0,1]^{k+m+2}\to\Rb$ be a Lipschitz function with $\lVert G\rVert_{\textrm{BL}}\leq 1$. For $x\in E_N$, define
$$H_N(z,x)\coloneqq \Eb_x[G(z,x,(X_N^0(r_j-t_i))_{j=1}^m,X_N^0(t_{i+1}-t_i))], \ \forall z \in\Rb^k.$$
Note that
\begin{equation}\label{mpr1}
\Eb[G((X_N^\omega(s_j))_{j=1}^k,X_N^\omega(t_i),(X_N^\omega(r_j))_{j=1}^m,X_N^\omega(t_{i+1}-))]=\Eb\left[H_N((X_N^\omega(s_j))_{j=1}^k,X_N^\omega(t_i))\right].
\end{equation}
Similarly, for $x\in[0,1]$, define
$$H(z,x)\coloneqq \Eb_x[G(z,x,(X^0(r_j-t_i))_{j=1}^m,X^0(t_{i+1}-t_i))],\quad z\in\Rb^k,$$
and note that
\begin{equation}\label{mpr2}
\Eb[G((X^\omega(s_j))_{j=1}^k,X^\omega(t_i),(X^\omega(r_j))_{j=1}^m,X^\omega(t_{i+1}-))]=\Eb\left[H((X^\omega(s_j))_{j=1}^k,X^\omega(t_i))\right].
\end{equation}
Using \eqref{uptoti} and the Skorokhod representation theorem, we can assume that $((X_N^\omega(s_j))_{j=1}^k,X_N^\omega(t_i))_{N\geq1}$ and $((X^\omega(s_j))_{j=1}^k,X^\omega(t_i))$ live in the same probability space and that the convergence holds almost surely. In particular, we can write
\begin{equation}\label{er1r2}
|\Eb\left[H_N((X_N^\omega(s_j))_{j=1}^k,X_N^\omega(t_i))\right]-\Eb\left[H((X^\omega(s_j))_{j=1}^k,X^\omega(t_i))\right]|\leq R_N^1+R_N^2,
\end{equation}
where
\begin{align*}
R_N^1&\coloneqq |\Eb\left[H_N((X_N^\omega(s_j))_{j=1}^k,X_N^\omega(t_i))\right]-\Eb\left[H_N((X^\omega(s_j))_{j=1}^k,X_N^\omega(t_i))\right]|,\\
R_N^2&\coloneqq |\Eb\left[H_N((X^\omega(s_j))_{j=1}^k,X_N^\omega(t_i))\right]-\Eb\left[H((X^\omega(s_j))_{j=1}^k,X^\omega(t_i))\right]|.
\end{align*}
Using that $\lVert G\rVert_{\textrm{BL}}\leq 1$, we obtain
\begin{equation}\label{er1}
R_N^1\leq \sum_{j=1}^k\Eb[|X_N^\omega(s_j)-X^\omega(s_j)|]\xrightarrow[N\to\infty]{}0.
\end{equation}
Moreover, since $X_N^\omega(t_i)$ converges to $X^\omega(t_i)$ almost surely, we conclude using Theorem \ref{alimit} that, for any $z\in[0,1]^k$, $H_N(z,X_N^\omega(t_i))$ converges to $H(z,X^\omega(t_i))$ almost surely. Therefore, using the dominated convergence theorem, we conclude that
\begin{equation}\label{er2}
R_N^2\xrightarrow[N\to\infty]{}0.
\end{equation}
Plugging \eqref{er1} and \eqref{er2} in \eqref{er1r2} and using \eqref{mpr1} and \eqref{mpr2} completes the proof.
\end{proof}
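To illustrate the limit process $X^\omega$ appearing in the proof, the following sketch (Python, purely illustrative) runs a crude Euler discretization of the null-environment Wright--Fisher diffusion between the jump times of a simple environment and applies the update $x\mapsto x+x(1-x)\Delta\omega(t_i)$ at each jump; the step size, jump times and parameter values are arbitrary assumptions, and clipping to $[0,1]$ is a discretization convenience.
\begin{verbatim}
import numpy as np

def wf_step(x, dt, sigma, theta, nu0, nu1, rng):
    # One Euler step of the null-environment Wright-Fisher SDE, clipped to [0,1].
    drift = sigma * x * (1 - x) + theta * nu0 * (1 - x) - theta * nu1 * x
    diff = np.sqrt(max(2 * x * (1 - x), 0.0))
    x_new = x + drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
    return min(max(x_new, 0.0), 1.0)

def x_omega(x0, T, jumps, dt, sigma, theta, nu0, nu1, rng):
    """jumps: list of (t_i, jump size) pairs of a simple environment omega."""
    x, t = x0, 0.0
    for (ti, dw) in sorted(jumps) + [(T, 0.0)]:
        while t + dt <= ti:                  # diffuse strictly before t_i
            x = wf_step(x, dt, sigma, theta, nu0, nu1, rng)
            t += dt
        t = ti
        x = x + x * (1 - x) * dw             # jump of the environment at t_i
    return x

rng = np.random.default_rng(3)
print(x_omega(0.2, T=2.0, jumps=[(0.5, 0.3), (1.4, 0.6)], dt=1e-3,
              sigma=1.0, theta=0.5, nu0=0.5, nu1=0.5, rng=rng))
\end{verbatim}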
\section{Type distribution and ancestral type distribution: the annealed case}\label{S5}
\subsection{Type distribution: the killed ASG}\label{s51}
In this section, we prove Theorem \ref{momdualannealed} about the moment duality between the diffusion $X$ and the killed ASG, and Theorem \ref{annealdmoments} about the moments of the stationary distribution of $X$.
\begin{proof} [Proof of Theorem \ref{momdualannealed}(Reinforced/standard moment duality)]
The proof of this result follows the same lines as the proof of a standard duality. More precisely, we closely follow the proof of \cite[Prop. 1.2]{Jaku}. Let $H:[0,1]\times \mathbb{N}_0^\dagger\times [0,\infty)\to\Rb$ be defined via $H(x,n,j)=(1-x)^nf(j)$. Let $(P_t)_{t\geq0}$ and $(Q_t)_{t\geq 0} $ denote the semigroups of $(X,J)$ and $(R,J)$, respectively, i.e. $P_t g(x,j)=\mathbb E[g(X(t),J(t)+j)\mid X(0)=x]$ and $Q_th(n,j)=\Eb[h(R(t),J(t)+j)\mid R(0)=n]$. Let $(\hat{R},\hat{J})$ be a copy of $(R,J)$, which is independent of $(X,J)$. A straightforward calculation shows that
$$P_t(Q_s H)(x,n,j)=E[(1-X(t))^{\hat{R}(s)}f(J(t)+\hat{J}(s)+j)\mid X(0)=x,\, \hat{R}(0)=n]=Q_s(P_t H)(x,n,j).$$
Let $G_{X,J}$ and $G_{R,J}$ be the infinitesimal generators of $(X,J)$ and $(R,J)$, respectively. Clearly, for any $x\in[0,1]$, $(n,j)\mapsto P_t H(\cdot,n,\cdot)(x,j)$ belongs to the domain of $G_{R,J}$. Hence, the previous commutation property between $P_t$ and $Q_s$, yields
\begin{equation}\label{comgensemi}
P_t G_{R,J} H(x,n,j)= G_{R,J} P_t H(x,n,j).
\end{equation}
We claim now that $G_{X,J} H(\cdot,n,\cdot)(x,j)=G_{R,J} H(x,\cdot,\cdot)(n,j)$.
Note first that
\begin{align}\label{gx}
G_{X,J} H(\cdot,n,\cdot)(x,j)&=\left(n(n-1)x(1-x)^{n-1}-\left(\sigma x(1-x)+ \theta\nu_0(1-x)-\theta\nu_1 x \right)n(1-x)^{n-1}\right)f(j)\nonumber\\
&+(1-x)^n\int\limits_{(0,1]}\left((1-xz)^nf(j+z)-f(j)\right)\mu({\rm{d}} z).
\end{align}
In addition,
\begin{align}\label{gr}
G_{R,J} H(x,\cdot,\cdot)(n,j)&=(n(n-1)+n\theta\nu_1)((1-x)^{n-1}-(1-x)^n)f(j)\nonumber\\
&- n\theta\nu_0 (1-x)^nf(j)+n\sigma ((1-x)^{n+1}-(1-x)^n)f(j)\nonumber\\
&+\sum\limits_{k=0}^n\binom{n}{k}\int\limits_{(0,1)}y^k (1-y)^{n-k}((1-x)^{n+k}f(j+y)-(1-x)^nf(j))\mu({\rm{d}} y)\nonumber\\
&=\left(n(n-1)x(1-x)^{n-1}-\left(\sigma x(1-x)+ \theta\nu_0(1-x)-\theta\nu_1 x \right)n(1-x)^{n-1}\right)f(j)\nonumber\\
&+(1-x)^n\sum\limits_{k=0}^n\binom{n}{k}\int\limits_{(0,1)}y^k (1-y)^{n-k}((1-x)^kf(j+y)-f(j))\mu({\rm{d}} y).
\end{align}
Moreover, we have
\begin{align}\label{pgr}
&\sum\limits_{k=0}^n\binom{n}{k}\int\limits_{(0,1)}y^k (1-y)^{n-k}((1-x)^kf(j+y)-f(j))\mu({\rm{d}} y)\nonumber\\
&=\int\limits_{(0,1)}\sum\limits_{k=0}^n\binom{n}{k}(((1-x)y)^kf(j+y)- y^kf(j))(1-y)^{n-k}\mu({\rm{d}} y)=\int\limits_{(0,1)}\left((1-xy)^nf(j+y)-f(j)\right)\mu({\rm{d}} y).
\end{align}
The claim follows by plugging \eqref{pgr} in \eqref{gr} and comparing the result with \eqref{gx}.
\smallskip
Now, set $u(t,x,n,j)=P_tH(\cdot,n,\cdot)(x,j)$ and $v(t,x,n,j)=Q_tH(x,\cdot,\cdot)(n,j)$. Using the Kolmogorov forward equation for $P$, the claim and \eqref{comgensemi}, we get
$$\frac{{\rm{d}}}{{\rm{d}} t} u(t,x,n,j)=P_t G_{X,J} H(\cdot,n,\cdot)(x,j)= P_t G_{R,J} H(x,\cdot,\cdot)(n,j)= G_{R,J} u(t,x,\cdot,\cdot)(n,j).$$
Moreover, the Kolmogorov forward equation for $Q$ yields
$$\frac{{\rm{d}}}{{\rm{d}} t} v(t,x,n,j)=G_{R,J} v(t,x,\cdot,\cdot)(n,j).$$
Hence, $u$ and $v$ satisfy the same equation. Since $u(0,x,n,j)=(1-x)^nf(j)=v(0,x,n,j)$, the result follows from the uniqueness of the initial value problem associated with $G_{R,J}$ (see \cite[Thm. 1.3]{D65}).
\end{proof}
\begin{proof}[Proof of Theorem \ref{annealdmoments}(Asymptotic type-frequency: the annealed case)]
We first show the convergence in distribution of $X(T)$ as $T\to \infty$ towards a limit law on $[0,1]$. Since $\theta >0$ and $\nu_0, \nu_1 \in (0,1)$, Theorem \ref{momdualannealed} implies that, for any $x\in[0,1]$, the limit of $\mathbb{E}[(1-X(T))^n\mid X(0)=x]$ as $T\to\infty$ exists and satisfies
\begin{equation}\label{mome}
\lim_{T \rightarrow \infty} \mathbb{E}[(1-X(T))^n|X(0)=x]=w_n,\quad n\in\Nb_0.
\end{equation}
Recall that on $[0,1]$ probability measures are completely determined by their moments and convergence of positive entire moments implies convergence in distribution. Therefore, Eq. \eqref{mome} implies that there is a probability distribution $\pi_X$ on $[0,1]$, such that, for any $x\in[0,1]$, conditionally on $\{X(0)=x\}$, the law of $X(T)$ converges in distribution to $\pi_X$ as $T\to\infty$ and
$$w_n=\int_{[0,1]} (1-z)^n\pi_X(dz), \quad n\in\Nb.$$
Using dominated convergence, the convergence of the law of $X(T)$ towards $\pi_X$ as $T\to\infty$ extends to any initial distribution. As a consequence of this and the Markov property of $X$, it follows that $X$ admits a unique stationary distribution, which is given by $\pi_X$.
It only remains to prove \eqref{recwn}. For this, we do a first step decomposition for the probability of absorption in $0$ of the process $R$, and obtain
$$t_n w_n= n\sigma w_{n+1}+ n(\theta\nu_1+ n-1)w_{n-1}+\sum\limits_{k=1}^n \binom{n}{k}\sigma_{n,k} w_{n+k},$$
where $t_n\coloneqq n(\sigma+\theta+ n-1)+\sum\limits_{k=1}^n \binom{n}{k}\sigma_{n,k}$. The result follows by dividing both sides of the previous identity by $n$ and rearranging terms.
\end{proof}
\subsection{Ancestral type distribution: the pruned lookdown ASG}\label{s52}
This section is devoted to prove the results stated in Section \ref{s262} about the ancestral type distribution in the annealed case.
\begin{proof}[Proof of Lemma \ref{pr}(Positive recurrence)]
Since the Markov chain is irreducible, it is enough to prove that the state $1$ is positive recurrent for $L$. The latter is clearly true if $\theta\nu_0>0$, because in this case the hitting time of $1$ is bounded above by an exponential random variable with parameter $\theta\nu_0$. Now assume that $\theta=0$ (the case $\theta\nu_0=0$ and $\theta\nu_1>0$ can easily be reduced to this one).
We proceed in a similar way as in \cite[Proof of Lem. 2.3]{F13}. Consider the function $f:\Nb\to\Rb_+$ defined via
$$f(n)\coloneqq \sum_{i=1}^{n-1} \frac1{i}\ln\left(1+\frac{1}{i}\right).$$
Note that the function $f$ is bounded. Note also that, for $n>1$
$$n(n-1)(f(n-1)-f(n))=-n\ln\left(1+\frac{1}{n-1}\right)\leq -1.$$
The previous inequality uses the fact that $\ln(1-h)\leq -h$ for $h\in(0,1)$. For any $\varepsilon>0$, set $n_0(\varepsilon)=\lfloor 1/\varepsilon\rfloor +1.$ Note that for $n>n_0(\varepsilon)$
$$n(f(n+i)-f(n))=n\sum_{j=n}^{n+i-1}\frac1{j}\ln\left(1+\frac{1}{j}\right)\leq n\ln\left(1+\frac1n\right)i\varepsilon\leq i\varepsilon.$$
Hence, for $n\geq n_0(\varepsilon)$,
$$G_L f(n)\leq -1 +\frac{\varepsilon}{n}\sum_{i=1}^n \binom{n}{i}\sigma_{n,i} \,i+\sigma\varepsilon=-1+\varepsilon\left(\int_{(0,1)}y\mu(dy)+\sigma\right).$$
Now, set $m_0\coloneqq n_0(\varepsilon_\star)$, where $\varepsilon_\star\coloneqq \left(2\int_{(0,1)}y\mu(dy)+2\sigma\right)^{-1}$. Thus, for any $n\geq m_0$ we have $$G_Lf(n)\leq -1/2.$$
Define $T_{m_0}\coloneqq \inf\{\beta>0: L(\beta)<m_0\}$. Applying Dynkin's formula to $L$ with the function $f$ and the stopping time $T_{m_0}\wedge k$, $k\in\Nb$, we obtain
$$\Eb\left[f(L(T_{m_0}\wedge k))\mid L(0)=n\right]=f(n)+\Eb\left[\int_0^{T_{m_0}\wedge k}G_Lf(L(\beta))d\beta\mid L(0)=n\right]$$
Therefore, for $n\geq m_0$
$$\Eb\left[f(L(T_{m_0}\wedge k))\mid L(0)=n\right]\leq f(n)-\frac1{2}\Eb[T_{m_0}\wedge k\mid L(0)=n].$$
Hence, using that $f$ is non-negative, we get
$$\Eb[T_{m_0}\wedge k\mid L(0)=n]\leq 2f(n).$$
Therefore, letting $k\to\infty$ in the previous inequality yields, for all $n\geq m_0$,
$$\Eb[T_{m_0}\mid L(0)=n]\leq 2f(n)<\infty.$$
Since $L$ is irreducible, it follows by standard arguments that $L$ is positive recurrent.
\end{proof}
\begin{proof}[Proof of Proposition \ref{exprhtannealed}] The first part of the statement is a straightforward consequence of Lemma \ref{pruningdonnehtx} and the fact that we assign types to the $L(T)$ lines present in the pruned LD-ASG at time $T$ according to independent Bernoulli random variables with parameter $x$. Since $L$ is positive recurrent, we have convergence in distribution of the law of $L(T)$ towards the stationary distribution $\pi_L$. In particular, we infer from Eq. \eqref{pldasgatd} that the limit $h(x)$ of $h_T(x)$ as $T\to\infty$ exists and satisfies
$$h(x)=1-\Eb[(1-x)^{L_\infty}]=1-\sum_{\ell=1}^\infty \Pb(L_\infty=\ell)(1-x)^\ell=\sum_{\ell=0}^\infty\Pb(L_\infty>\ell)(1-x)^\ell-\sum_{\ell=1}^\infty\Pb(L_\infty>\ell-1)(1-x)^{\ell},$$
and Eq. \eqref{represh(x)tailpldasg} follows.
\end{proof}
\begin{proof}[Proof of Corollary \ref{hzero}]
Since $\theta = 0$, the line-counting processes $R$ and $L$ are equal. Hence, combining Proposition \ref{exprhtannealed} and Theorem \ref{md} applied to $n=1$, we obtain
\begin{equation}\label{idzm}
h_T(x)=\Eb[X(T)\mid X(0)=x],
\end{equation}
which proves the first part of the statement.
Moreover, for $\theta=0$, $X$ is a bounded submartingale, and hence $X(T)$ has almost surely a limit as $T\to\infty$, which we denote by $X(\infty)$. Letting $T\to \infty$ in the identity \eqref{idzm} yields
\begin{equation}\label{auxh0}
h(x)=\Eb[X(\infty)\mid X(0)=x].
\end{equation}
Moreover, using Theorem \ref{md} with $n=2$, we get
$$\Eb[(1-X(T))^2\mid X(0)=x]=\Eb[(1-x)^{L(T)}\mid L(0)=2].$$
Letting $T\to \infty$ and using that $L$ is positive recurrent, we obtain
$$\Eb[(1-X(\infty))^2\mid X(0)=x]=1-h(x).$$
Plugging \eqref{auxh0} in the previous identity yields the desired result.
\end{proof}
The proof of Theorem \ref{recursion}, providing the characterization of the tail probabilities $\mathbb{P}(L_{\infty} > n)$ via a linear system of equations, is based on the notion of Siegmund duality. Let us consider the process $D\coloneqq (D(\beta))_{\beta \geq 0}$ with rates
$$q_D(i,j)\coloneqq \left\{\begin{array}{ll}
(i-1)(\sigma + \sigma_{i-1,1})&\textrm{if $j=i-1$},\\
(i-1)\theta\nu_1 +i(i-1) &\textrm{if $j=i+1$},\\
\gamma_{i,j}-\gamma_{i,j-1} &\textrm{if $ 1\leq j< i-1$},\\
(i-1)\theta\nu_0&\textrm{if $j=\dagger$},
\end{array}\right.
$$
where $\gamma_{i,j}\coloneqq \sum_{k=i-j}^{j}\binom{j}{k}\sigma_{j,k}$ and $\dagger$ is a cemetery point. Note that $1$ and $\dagger$ are absorbing states of $D$.
\begin{lemma}[Siegmund duality]\label{sd}
The processes $L$ and $D$ are Siegmund dual, i.e. for all $\ell, d\in \mathbb{N}$ and $\beta\geq 0$, we have
$$\mathbb{P}\left(L(\beta) \geq d\mid L(0)=\ell\right)=\mathbb{P}\left(\ell\geq D(\beta) \mid D(0)=d\right).$$
\end{lemma}
\begin{proof}
We consider the function $H:\mathbb{N}\times\mathbb{N}\cup\{\dagger\}\rightarrow \{0,1\}$ defined via $H(\ell,d)\coloneqq 1_{\ell\geq d}$ and $H(\ell,\dagger)\coloneqq 0$, $\ell,d\in\mathbb{N}$. Let $G_L$ and $G_D$ be the infinitesimal generators of $L$ and $D$, respectively. By \cite[Prop. 1.2]{Jaku} we only have to show that $G_L H(\cdot,d)(\ell)=G_D H(\ell,\cdot)(d)$ for all $\ell,d\in\mathbb{N}$.
From construction, we have
\begin{align}\label{gl}
G_L H(\cdot,d)(\ell)&=\sigma \ell \,1_{\{\ell+1=d\}} - (\ell-1)(\ell+\theta\nu_1)\,1_{\{\ell=d\}} -\theta\nu_0 \sum\limits_{j=1}^{\ell-1} 1_{\{j<d\leq \ell\}}+ \sum\limits_{k=1}^\ell\binom{\ell}{k}\sigma_{\ell,k} \, 1_{\{\ell< d\leq \ell+k\}}\nonumber\\
&=\sigma \ell \,1_{\{\ell+1=d\}} - (\ell-1)(\ell+\theta\nu_1)\,1_{\{\ell=d\}} -\theta\nu_0 (d-1) 1_{\{d\leq \ell\}}+ \gamma_{d,\ell} \, 1_{\{\ell< d\}}.
\end{align}
Similarly, we have
\begin{align}\label{gd}
G_D H(\ell,\cdot)(d)&=\sigma (d-1) \,1_{\{d-1=\ell\}} - (d-1)(d+\theta\nu_1)\,1_{\{\ell=d\}} -\theta\nu_0 (d-1) 1_{\{d\leq \ell\}}\nonumber\\
& + \sum\limits_{j=1}^{d-1}\left(\gamma_{d,j}-\gamma_{d,j-1}\right) \, 1_{\{j\leq \ell <d\}}.
\end{align}
In addition, using summation by parts yields
\begin{equation}\label{ix}
\sum\limits_{j=1}^{d-1}\left(\gamma_{d,j}-\gamma_{d,j-1}\right) \, 1_{\{j\leq \ell<d\}}=\gamma_{d,d-1}1_{\{d-1= \ell\}}+ \gamma_{d,\ell}1_{\{\ell\leq d-2\}}= \gamma_{d,\ell}1_{\{\ell< d\}}.
\end{equation}
The result follows plugging \eqref{ix} in \eqref{gd} and comparing with \eqref{gl}.
\end{proof}
Now, we can prove Theorem \ref{recursion}.
\begin{proof}[Proof of Theorem \ref{recursion}]
From Lemma \ref{sd} we infer that $a_n=d_{n+1}$, where
$$d_n\coloneqq \mathbb{P}(\exists \beta>0: D(\beta)=1\mid D(0)=n), \quad n\geq 1.$$
Applying a first step decomposition to the process $D$, we obtain
\begin{equation}\label{r1}
T_n d_n= (n-1)\sigma d_{n-1}+(n-1)(\theta\nu_1+n)d_{n+1} + \sum\limits_{j=1}^{n-1} (\gamma_{n,j}-\gamma_{n,j-1})d_j,\quad n>1,
\end{equation}
where $T_n\coloneqq (n-1)[\sigma +\theta +n ] + \sum_{k=1}^{n-1}\binom{n-1}{k}\sigma_{n-1,k}.$ Using summation by parts and rearranging terms in \eqref{r1} yields
\begin{equation}\label{r2}
(\sigma +\theta +n ) d_n= \sigma d_{n-1}+(\theta\nu_1+n)d_{n+1} +\frac{1}{n-1} \sum\limits_{j=1}^{n-1} \gamma_{n,j}(d_j-d_{j+1}),\quad n>1,
\end{equation}
The result follows.
\end{proof}
\section{Type distribution and ancestral type distribution: the quenched case}\label{S6}
\subsection{Existence of the quenched ASG: proof of Proposition \ref{eqasg}}
The aim of this section is to prove Proposition \ref{eqasg} by proposing an explicit construction of a branching-coalescing particle system $(\Gs^T(\omega,\beta))_{\beta\geq 0}$ that satisfies the requirements of Definition \ref{defquenchedasg}. We recall that such a construction is not difficult in the case of a simple environment. Thus, this construction is aimed at tackling general environments $\omega$ that have infinitely many jumps on each compact interval. Fix $T > 0$ and $n \in \Nb$ (sampling size) and define
$$\Lambda_{\textrm{mut}}\coloneqq\{\lambda_{i}^{0},\lambda_{i}^{1} \}_{i \geq 1}, \ \Lambda_{\textrm{\textrm{sel}}}\coloneqq\{\lambda_{i}^{\vartriangle}\}_{i \geq 1}, \ \Lambda_{\textrm{coal}}\coloneqq\{\lambda_{i,j}^{\blacktriangle} \}_{i,j \geq 1, i \neq j},$$
where $\lambda_{i}^{0}, \lambda_{i}^{1}, \lambda_{i}^{\vartriangle}$ and $\lambda_{i,j}^{\blacktriangle}$ are standard Poisson processes on $[0,T]$ with respective rates $\theta \nu_0, \theta \nu_1, \sigma$ and $1$. For $\beta \in [0,T]$, let $\tilde \omega (\beta) := \omega (T) - \omega ((T-\beta)-)$. Let $I_{\tilde \omega} := \{ \beta \in [0,T]: \Delta \tilde \omega (\beta) > 0 \}$ be the deterministic (countable) set of jumping times of $\tilde \omega$. Let $\mathcal{B}_{\tilde \omega} := \{ B_i(\beta) \}_{i \geq 1, \beta \in I_{\tilde \omega}}$ be a family of independent Bernoulli random variables; the parameter of $B_i(\beta)$, for $\beta\in I_{\tilde \omega}$, is $\Delta \tilde \omega(\beta)$. Fix a realization of $\{\Lambda_{\textrm{mut}}, \Lambda_{\textrm{\textrm{sel}}}, \Lambda_{\textrm{coal}}, \mathcal{B}_{\tilde \omega}\}$. We assume without loss of generality that the arrival times of these Poisson processes are countable, pairwise distinct and distinct from the jumping times of $\tilde \omega$. Let $I_{\textrm{coal}}$ (resp. $I_{\textrm{\textrm{sel}}}$) be the random set of arrival times of $\Lambda_{\textrm{coal}}$ (resp. $\Lambda_{\textrm{\textrm{sel}}}$). For $\beta \in I_{\textrm{coal}}$, let $(a_{\beta},b_{\beta})$ be the pair $(i,j)$ such that $\beta$ is an arrival time of $\lambda_{i,j}^{\blacktriangle}$. Since all Poisson processes $\lambda_{i,j}^{\blacktriangle}$ have distinct jumping times, $(a_{\beta},b_{\beta})$ is uniquely defined.
\smallskip
We first construct a set of \emph{virtual lines} in $[0,T]\times\Nb$, representing the set of lines that would be part of the ASG if there were no coalescence events. In particular, once a line enters this set, it will remain there. The set of virtual lines is constructed on the basis of the set of potential branching times $I_{\textrm{bran}}\coloneqq I_{\tilde{\omega}}\cup I_{\textrm{sel}}$ as follows. Consider the (countable) set $$S_{\textrm{bran}}\coloneqq\{(\beta_1,\ldots,\beta_k):k\in\Nb,\, 0\leq \beta_1<\cdots<\beta_k, \beta_i\in I_{\textrm{bran}}, i\in[k]\}$$
of finite sequences of potential branching times. Fix an injective function $i_\star:[n]\times S_{\textrm{bran}}\to\Nb\setminus[n]$. The set of virtual lines is determined according to the following rules.
\begin{itemize}
\item For any $i\in[n]$ (i.e. in the initial sample), $i$ is a virtual line for any $\beta\in[0,T]$.
\item For any $(\beta_1,\ldots,\beta_k)\in S_{\textrm{bran}}$ and any $j\in[n]$, line $i_\star(j,\beta_1,\ldots,\beta_k)$ is a virtual line at time $\beta$ if and only if: (1) $\beta_k\leq \beta$, (2) for any $\ell\in[k]$ such that $\beta_\ell\in I_{\tilde{\omega}}$, $B_{i_\star(j,\beta_1,\ldots,\beta_{\ell-1})}(\beta_\ell)=1$ ($B_{j}(\beta_1)=1$ if $\ell=1$), and (3) for any $\ell\in[k]$ such that $\beta_\ell\in I_{\textrm{sel}}$, $\beta_\ell$ is a jumping time of $\lambda_{i_\star(j,\beta_1,\ldots,\beta_{\ell-1})}^{\vartriangle}$ (of $\lambda_{j}^{\vartriangle}$ if $\ell=1$).
\end{itemize}
Moreover, these are the only possible virtual lines.
Let $V_{\beta} \subset \mathbb{N}$ be the set of virtual lines at time $\beta$. Lemma \ref{nbfinilines} below states $V_\beta$ is almost surely finite for all $\beta \in [0,T]$. Let $\tilde I_{\textrm{coal}} := \{ \beta \in I_{\textrm{coal}} : \ a_{\beta},b_{\beta} \in V_{\beta} \}$. Since $V_{T}$ is independent of $\Lambda_{\textrm{coal}}$ and almost surely finite, it is plain that $\tilde I_{\textrm{coal}}$ is almost surely finite. We say that a line $i\in V_{\beta}$ is \textit{inactive} at time $\beta$ if there is $\hat \beta \in \tilde I_{\textrm{coal}} \cap [0,\beta]$, such that $i=a_{\hat \beta}$ or $i=i_\star(j,\beta_1,\ldots,\beta_k)$ for some $(j,\beta_1,\ldots,\beta_k)\in [n]\times S_{\textrm{bran}}$ with either $\hat{\beta}<\beta_1$ and $j=a_{\hat{\beta}}$ or $\hat{\beta}\in[\beta_{\ell-1},\beta_\ell)$ and $i_\star(j,\beta_1,\ldots,\beta_{\ell-1})=a_{\hat{\beta}}$ for some $\ell\in[k]\setminus\{1\}$. Clearly, once a line becomes inactive, it remains inactive. A line $i\in V_\beta$ is \emph{active} at time $\beta$ if it is not inactive at time $\beta$. We set $A_\beta$ for the set of active lines at time $\beta$.
The ASG on $[0,T]$ is then the branching-coalescing system starting with $n$ lines at levels in $[n]$, consisting at any time $\beta\in[0,T]$ of the lines of $A_{\beta}$ (active lines) and where:
\begin{itemize}
\item for $\beta \in I_{\textrm{bran}}$ such that $A_{\beta-}\subsetneq A_\beta$ and $i\in A_{\beta}\setminus A_{\beta-}$, there is $(j,\beta_{1},\ldots,\beta_{k})\in[n]\times S_{\textrm{bran}}$ with $\beta_{k}<\beta$ such that $i=i_\star(j,\beta_1,\ldots,\beta_k,\beta)$, or there is $j\in[n]$ such that $i=i_\star(j,\beta)$.
In the first case, line $i_\star(j,\beta_1,\ldots,\beta_k)$ branches at time $\beta$ into $i_\star(j,\beta_1,\ldots,\beta_k)$ (continuing line) and $i$ (incoming line). In the second case, line $j$ branches at time $\beta$ into $j$ (continuing line) and $i$ (incoming line).
\item for $\beta \in I_{\textrm{coal}}$ such that $A_{\beta}\subsetneq A_{\beta-}$ and $i\in A_{\beta-}\setminus A_{\beta}$, $i=a_\beta$ and $b_{\beta}\in A_\beta$. Thus, lines $i$ and $b_\beta$ merge into $b_\beta$ at time $\beta$.
\item At each $\beta \in [0,T]$ that is an arrival time of $\lambda_{i}^{0}$ (resp. $\lambda_{i}^{1}$) for some $i \in A_{\beta}$, we mark line $i$ with a beneficial (resp. deleterious) mutation at time $\beta$.
\end{itemize}
It is straightforward to see that the so-constructed branching-coalescing particle system satisfies the requirement of Definition \ref{defquenchedasg}. Hence, the fact that the ASG is well-defined is given by the following lemma.
\begin{lemma} \label{nbfinilines}
The number of virtual lines is almost surely finite at any time $\beta \in [0,T]$.
\end{lemma}
Let $N^T(\omega,\beta)$ denote the number of virtual lines at time $\beta$. Let $\delta > 0$. We define the family $\mathcal{B}_{\tilde \omega^{\delta}}$ by setting $B^{\tilde \omega^{\delta}}_i(\beta)=B^{\tilde \omega}_i(\beta)$ if $\Delta \tilde \omega(\beta) > \delta$ and $B^{\tilde \omega^{\delta}}_i(\beta)=0$ otherwise. We then define $N^T(\omega^{\delta},\beta)$ from the family $\{\Lambda_{\textrm{sel}}, \mathcal{B}_{\tilde \omega^{\delta}}\}$, just like $N^T(\omega,\beta)$ is defined from the family $\{\Lambda_{\textrm{sel}}, \mathcal{B}_{\tilde \omega}\}$. It is easy to see from above that $N^T(\omega^{\delta},\beta)$ increases almost surely to $N^T(\omega,\beta)$ as $\delta$ goes to $0$. By the monotone convergence theorem we get for all $\beta \in [0,T]$:
\begin{eqnarray}
\lim_{\delta \rightarrow 0} \mathbb{E}[N^T(\omega^{\delta},\beta)|N^T(\omega^{\delta},0)=n] = \mathbb{E}[N^T(\omega,\beta)|N^T(\omega,0)=n]. \label{monotcv}
\end{eqnarray}
Let us bound $\mathbb{E}[N^T(\omega^{\delta},\beta)|N^T(\omega^{\delta},0)=n]$. Since $\tilde \omega^{\delta}$ has finitely many jumping times on $[0,T]$, let $T_1 <\cdots < T_{\kappa}$ denote those jumping times. From above we can see that $(N^T(\omega^{\delta},\beta))_{\beta \in [0,T]}$ has the following dynamics: on $(T_i,T_{i+1})$, if $N^T(\omega^{\delta},\beta)$ is currently equal to $m$, then it goes to state $m+1$ at rate $\sigma m$. At $T_i$ we have $N^T(\omega^{\delta},T_i) = N^T(\omega^{\delta},T_i-) + \beta(N^T(\omega^{\delta},T_i-), \Delta \tilde \omega^{\delta}(T_i))$, where $\beta(N^T(\omega^{\delta},T_i-), \Delta \tilde \omega^{\delta}(T_i)) \sim \bindist{N^T(\omega^{\delta},T_i-)}{\Delta \tilde \omega^{\delta}(T_i)}$. Note that for each $T_i$ we have $\Delta \tilde \omega^{\delta}(T_i)=\Delta \tilde \omega(T_i)$. This yields in particular
\begin{eqnarray}
\mathbb{E}[N^T(\omega^{\delta},T_i)|N^T(\omega^{\delta},T_i-)]=(1+\Delta \tilde \omega(T_i))N^T(\omega^{\delta},T_i-). \label{expectationatjumps}
\end{eqnarray}
We control the evolution of $N^T(\omega^{\delta},\beta)$ between the jumps in the following lemma:
\begin{lemma} \label{expectationbetweenjumps}
Let $0 \leq \beta_1 <\beta_2 \leq T$ be such that $\tilde \omega^{\delta}$ has no jumping times on $(\beta_1,\beta_2]$. Then we have $\mathbb{E}[N^T(\omega^{\delta},\beta_2)|N^T(\omega^{\delta},\beta_1)] \leq e^{\sigma (\beta_2-\beta_1)} N^T(\omega^{\delta},\beta_1)$.
\end{lemma}
\begin{proof}
Since $\tilde \omega^{\delta}$ has no jumping times on $(\beta_1,\beta_2]$, $(N^T(\omega^{\delta},\beta))_{\beta \in [0,T]}$, on this interval, is the Markov process on $\mathbb{N}$ with generator
\[ \mathcal{G}_N f(n) = \sigma n (f(n+1)-f(n)). \]
For $M \geq 1$, let $f_M(n)\coloneqq n \wedge M$. Note that for any $M,n \geq 1$ we have $\mathcal{G}_N f_M(n) \leq \sigma f_M(n)$. Applying Dynkin's formula to $(N^T(\omega^{\delta},\beta))_{\beta \in [0,T]}$ on $(\beta_1,\beta_2]$ with the function $f_M$ we obtain
\begin{align*}
\mathbb{E}\left[f_M(N^T(\omega^{\delta},\beta_2)) \mid N^T(\omega^{\delta},\beta_1)\right]&=f_M(N^T(\omega^{\delta},\beta_1))+\mathbb{E}\left[\int_{\beta_1}^{\beta_2} \mathcal{G}_N f_M(N^T(\omega^{\delta},\beta)){\rm{d}} \beta\mid N^T(\omega^{\delta},\beta_1)\right] \\
&\leq f_M(N^T(\omega^{\delta},\beta_1))+\sigma \mathbb{E}\left[\int_{\beta_1}^{\beta_2} f_M(N^T(\omega^{\delta},\beta)){\rm{d}} \beta\mid N^T(\omega^{\delta},\beta_1)\right] \\
&= f_M(N^T(\omega^{\delta},\beta_1))+\sigma \int_{\beta_1}^{\beta_2} \mathbb{E}\left[f_M(N^T(\omega^{\delta},\beta))\mid N^T(\omega^{\delta},\beta_1)\right] {\rm{d}} \beta.
\end{align*}
By Gronwall's lemma we deduce that $\mathbb{E}[f_M(N^T(\omega^{\delta},\beta_2)) \mid N^T(\omega^{\delta},\beta_1)]\leq e^{\sigma (\beta_2-\beta_1)} f_M(N^T(\omega^{\delta},\beta_1))$. Letting $M$ go to infinity and using the monotone convergence theorem, we get the result.
\end{proof}
\begin{proof} [Proof of Lemma \ref{nbfinilines}]
Clearly we only need to justify that $\mathbb{E}[N^T(\omega,T)|N^T(\omega,0)=n] < \infty$. Using successively Lemma \ref{expectationbetweenjumps} and \eqref{expectationatjumps} we get
\[ \mathbb{E}[N^T(\omega^{\delta},T)|N^T(\omega^{\delta},0)=n]\leq ne^{\sigma T} \prod_{\beta \in [0,T]:\, \Delta \tilde \omega (\beta) > \delta}(1+\Delta \tilde \omega(\beta)). \]
In particular, we have
\begin{eqnarray}
\mathbb{E}[N^T(\omega^{\delta},T)|N^T(\omega^{\delta},0)=n]\leq ne^{\sigma T} \prod_{\beta \in I_{\tilde \omega} \cap [0,T]}(1+\Delta \tilde \omega(\beta)) < \infty. \label{expectationatt}
\end{eqnarray}
Letting $\delta$ go to $0$ in \eqref{expectationatt} and using \eqref{monotcv} we get
\[ \mathbb{E}[N^T(\omega,T)|N^T(\omega,0)=n]\leq ne^{\sigma T} \prod_{\beta \in I_{\tilde \omega} \cap [0,T]}(1+\Delta \tilde \omega(\beta)) < \infty. \]
This concludes the proof.
\end{proof}
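A purely illustrative Monte Carlo sketch (Python) of the argument above: between the jumps of an environment with finitely many jumps, the virtual-line count behaves as a pure branching process at rate $\sigma$ per line, and at each jump it increases by a binomial number of lines; the environment, parameters and sample size below are arbitrary assumptions, and the empirical mean should essentially match the product bound from \eqref{expectationatt}.
\begin{verbatim}
import numpy as np

def virtual_lines(n, T, sigma, jumps, rng):
    """One realization of the virtual-line count; jumps = [(time, size), ...]."""
    t, m = 0.0, n
    for (tj, dw) in sorted(jumps) + [(T, 0.0)]:
        while True:                          # pure branching on (t, tj]
            w = rng.exponential(1.0 / (sigma * m))
            if t + w > tj:
                break
            t, m = t + w, m + 1
        t = tj
        m += rng.binomial(m, dw)             # binomial branching at the jump
    return m

rng = np.random.default_rng(0)
n, T, sigma = 2, 1.0, 1.0
jumps = [(0.3, 0.5), (0.7, 0.25)]
mean = np.mean([virtual_lines(n, T, sigma, jumps, rng) for _ in range(20000)])
bound = n * np.exp(sigma * T) * np.prod([1 + dw for _, dw in jumps])
print(mean, "vs", bound)
\end{verbatim}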
\subsection{Type distribution: the killed ASG}
\subsubsection{General environments}
This section is devoted to the proofs of the results presented in Section \ref{s252}. We start by proving Theorem \ref{momdualgen} that states that the quenched moment duality holds for $P$-almost every environment $\omega$.
\begin{proof}[Proof of Theorem \ref{momdualgen}(Quenched moment duality)]
Since both sides of Eq. \eqref{quenchedual} are right-continuous in $T$, it is sufficient to prove that, for any bounded measurable function $g:\Db^\star\to\Rb$,
\begin{equation}\label{condex}
\Eb[(1-X(J,T))^n g((J_s)_{s\in[0,T]})\mid X_0=x]=\Eb[(1-x)^{R^T(J,T)} g((J_s)_{s\in[0,T]})\mid R_{0}^T=n].
\end{equation}
Let $\Hs:=\{g:\Db_T^\star \to\Rb\textrm{ such that $\eqref{condex}$ is satisfied}\}$. Since the annealed duality holds, every constant function belongs to $\Hs$. Moreover, $\Hs$ is closed under increasing limits of non-negative bounded functions in $\Hs$. We claim that \eqref{condex} holds for functions of the form $g(\omega)=g_1(\omega(t_1))\cdots g_k(\omega(t_k))$, with $0<t_1<\cdots<t_k<T$ and $g_i\in\Cs^2([0,\infty))$ with compact support. If the claim is true, then as a consequence of the monotone class theorem, $\Hs$ contains any bounded measurable function $g$, thus achieving the proof.
\smallskip
We prove the claim by induction on $k$. For $k=1$, we have to prove that, for $t_1\in(0,T)$,
\begin{equation}\label{condexsimple}
\Eb[(1-X(J,T))^n g_1(J(t_1))\mid X(J,0)=x]=\Eb[(1-x)^{R^{T}(J,T)} g_1(J({t_1}))\mid R^T(J,0)=n].
\end{equation}
Note first that, using the Markov property for $X$ in $[0,t_1]$ followed by the annealed moment duality, we obtain
\begin{align*}
&\Eb\left[(1-X(J,T))^n g_1(J(t_1))\mid X(J,0)=x\right]\\
&= \Eb\left[g_1(J(t_1))\hat{\Eb}\left[(1-\hat{X}(\hat{J},T-t_1))^n\mid \hat{X}(\hat{J},0)=X(J,t_1)\right]\mid X(J,0)=x\right]\\
&= \Eb\left[g_1(J(t_1))\hat{\Eb}\left[(1-X(J,t_1))^{{\hat{R}^{T-t_1}(\hat{J},T-t_1)}}\mid \hat{R}^{T-t_1}(\hat{J},0)=n\right]\mid X(J,0)=x\right],
\end{align*}
where the subordinator $\hat{J}$ is defined via $\hat{J}(h)\coloneqq J(t_1+h)-J(t_1)$. The processes $\hat{X}$ and $\hat{R}^{T-t_1}$ are independent copies of $X$ and $R^{T-t_1}$, which are driven by $\hat{J}$ (which is in turn independent of $(J(u))_{u\in[0,t_1]}$). Using first Fubini's theorem, and then Eq. \eqref{rmd} in Theorem \ref{momdualannealed}, the last expression equals
\begin{align*}
&=\hat{\Eb}\left[\Eb\left[g_1(J(t_1))(1-X(J,t_1))^{{\hat{R}^{T-t_1}(\hat{J},T-t_1)}}\mid X(J,0)=x\right]\mid \hat{R}^{T-t_1}(\hat{J},0)=n\right]\\
&=\hat{\Eb}\left[\Eb\left[g_1(J(t_1))(1-x)^{{R^{t_1}(J,t_1)}}\mid R^{t_1}(J,0)={\hat{R}^{T-t_1}(\hat{J},T-t_1)}\right]\mid \hat{R}^{T-t_1}(\hat{J},0)=n\right].
\end{align*}
The proof of the claim for $k=1$ is achieved using the Markov property for $R^T$ in the (backward) interval $[0,T-t_1]$.
Let us now assume that the claim is true up to $k-1$. We proceed as before to prove that the claim holds for $k$. Using the Markov property for $X$ in $[0,t_1]$ followed by the induction hypothesis, we obtain
\begin{align*}
&\Eb\left[(1-X(J,T))^n \prod_{i=1}^k g_i(J(t_i))\mid X(J,0)=x\right]\\
&=\Eb\left[g_1(J(t_1))\hat{\Eb}\left[(1-\hat{X}(\hat{J},T-t_1))^n\, G(J(t_1),\hat{J})\mid \hat{X}(\hat{J},0)=X(J,t_1)\right]\mid X(J,0)=x\right]\\
&=\Eb\left[g_1(J(t_1))\hat{\Eb}\left[(1-X(J,t_1))^{\hat{R}^{T-t_1}(\hat{J},T-t_1)}\,G(J(t_1),\hat{J})\mid \hat{R}^{T-t_1}(\hat{J},0)=n\right]\mid X(J,0)=x\right],
\end{align*}
where $G(J(t_1),\hat{J})\coloneqq\prod_{i=2}^k g_i(J(t_1)+\hat{J}(t_i-t_1))$. Using first Fubini's theorem, and then Eq. \eqref{rmd} in Theorem \ref{momdualannealed}, the last expression equals
\begin{align*}
&=\hat{\Eb}\left[{\Eb}\left[(1-X(J,t_1))^{\hat{R}^{T-t_1}(\hat{J},T-t_1)}g_1(J(t_1))\,G(J(t_1),\hat{J})\mid X(J,0)=x\right] \mid\hat{R}^{T-t_1}(\hat{J},0)=n\right]\\
&=\hat{\Eb}\left[{\Eb}\left[(1-x)^{R^{t_1}(J,t_1)}g_1(J(t_1))\,G(J(t_1),\hat{J})\mid R^{t_1}(J,0)=\hat{R}^{T-t_1}(\hat{J},T-t_1)\right] \mid\hat{R}^{T-t_1}(\hat{J},0)=n\right],
\end{align*}
and the proof is achieved using the Markov property for $R^T$ in the (backward) interval $[0,T-t_1]$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{x0condpastenv}(Quenched type-frequency from the distant past)]
Let $\omega$ be such that \eqref{quenchedual} holds between $-T$ and $0$ (this is the case for $P$-a.e. environment and for any simple environment). In particular,
\begin{equation}\label{mdt0}
\mathbb{E}^{\omega} \left [ (1-X(\omega,0))^n|X(\omega,-T)=x \right ]= \mathbb{E}^{\omega} \left [ (1-x)^{R_0(\omega,T-)}|R_0(\omega,0-)=n \right ].
\end{equation}
Since we assume that $\theta >0$ and $\nu_0, \nu_1 \in (0,1)$, the right hand side converges to $W_n(\omega)$ as $T\to\infty$, which proves that the moment of order $n$ of $1-X(\omega,0)$ conditionally on $\{ X(\omega,-T) = x \}$ converges to $W_n(\omega)$. Since we are dealing with random variables supported on $[0,1]$, the convergence of all positive integer moments proves the convergence in distribution and the fact that the limit distribution $\mathcal{L}^\omega$ satisfies \eqref{dualimite}.
\smallskip
It remains to prove \eqref{approxwn}. If $\mu$ is a probability measure on $\mathbb{N}_0^\dagger$ with finite support, let $\tilde \mu_s(\omega)$ denote the distribution of $R_0(\omega,s-)$ given that $R_0(\omega,0-)\sim\mu$.
Note that as long as $R_0(\omega,\cdot)$ is not absorbed, it will either get absorbed at $\dagger$ at a rate that is greater than or equal to $\theta\nu_0 > 0$, due to the appearance of a mutation of type $0$, or go to $0$ before such a mutation occurs. As a consequence $T_{0, \dagger}$, the absorption time of $R_0(\omega,\cdot)$ at $\{ 0, \dagger \}$, is stochastically bounded by an exponential random variable with parameter $\theta\nu_0$. Therefore,
\begin{align}
\tilde \mu_{T}(\omega)(\mathbb{N}) & = \mathbb{P}^{\omega}_{\mu} \left ( R_0(\omega,T-) \in \mathbb{N} \right ) = \mathbb{P}^{\omega}_{\mu} \left ( T_{0, \dagger} > T \right ) \leq e^{- \theta\nu_0 T}. \label{binomialisation}
\end{align}
Hence,
we have
\begin{align*}
\mathbb{P}^{\omega}_{\mu} \left ( \exists s\geq 0 \ \text{s.t.} \ R_0(\omega,s)=0 \right ) & = \tilde \mu_{T}(\omega)(0) + \sum_{k \geq 1} \mathbb{P}^{\omega}_{\mu} \left ( R_0(\omega,T-) = k \ \text{and} \ \exists s\geq T \ \text{s.t.} \ R_0(\omega,s)=0 \right ) \\
& \leq \tilde \mu_{T}(\omega)(0) + \tilde \mu_T(\omega)(\mathbb{N}) \leq \tilde \mu_T(\omega)(0) + e^{-\theta \nu_0 T}.
\end{align*}
We thus get that
\begin{eqnarray}
\tilde \mu_T(\omega)(0) \leq \mathbb{P}^{\omega}_{\mu} \left ( \exists s\geq 0 \ \text{s.t.} \ R_0(\omega,s)=0 \right ) \leq \tilde \mu_T(\omega)(0) + e^{-\theta\nu_0 T}. \label{absetmu0}
\end{eqnarray}
Similarly, we have
\begin{align*}
\mathbb{E}^{\omega}_{\mu} \left [ (1-x)^{R_0(\omega,T-)} \right ] & = \tilde \mu_T(\omega)(0) + \sum_{k \geq 1} (1-x)^k \tilde \mu_T(\omega)(k) \\
& \leq \tilde \mu_T(\omega)(0) + \tilde \mu_T(\omega)(\mathbb{N}) \leq \tilde \mu_T(\omega)(0) + e^{-\theta\nu_0 T}.
\end{align*}
Hence,
\begin{eqnarray}
\tilde \mu_T(\omega)(0) \leq \mathbb{E}^{\omega}_{\mu} [ (1-x)^{R_0(\omega,T-)} ] \leq \tilde \mu_T(\omega)(0) + e^{-\theta\nu_0 T}. \label{absetmu00}
\end{eqnarray}
Recall from Section \ref{s252} that $W_n(\omega)\coloneqq \mathbb{P}^{\omega}(\exists s\geq 0 \ \text{s.t.} \ R_0(\omega,s)=0 \mid R_0(\omega,0-)=n)$. Choosing $\mu = \delta_n$ in \eqref{absetmu0} and in \eqref{absetmu00} and subtracting both inequalities we get
\[ \left | \mathbb{E}^{\omega} \left [ (1-x)^{R_0(\omega,T-)}\mid R_0(\omega,0-)=n \right ] - W_n(\omega) \right | \leq e^{-\theta\nu_0 T} \]
This inequality together with \eqref{quenchedual} yields the desired result.
\end{proof}
\subsubsection{Simple environments}
In what follows we assume that the environment $\omega$ is simple. We start by proving the first part of Theorem \ref{momdual}; the second part is already covered in the proof of Theorem \ref{x0condpastenv}. The two main ingredients are: 1) a moment duality between the jumping times and 2) a moment duality at the jumping times. These results are the object of the two following lemmas.
\begin{lemma}[Quenched moment duality between the jumps] \label{momdualbetween}
Let $0\leq s<t<T$ and assume that $\omega$ has no jumps in $(s,t)$. For all $x\in[0,1]$ and $n\in \mathbb{N}$, we have
\[ \mathbb{E}^{\omega} \left [ (1-X(\omega,t-))^n \mid X(\omega,s)=x \right ]= \mathbb{E}^{\omega} \left [ (1-x)^{R_t(\omega,(t-s)-)}\mid R_{t}(\omega,0)=n \right ]. \]
Recall that if $\omega$ has no jump at $t$, then $X(\omega,t-)=X(\omega,t)$ and $ R_{t}(\omega,0) = R_{t}(\omega,0-)$.
\end{lemma}
\begin{proof}
In $(s,t)$, the dynamics of $X(\omega,\cdot)$ and $R_t(\omega,\cdot)$ are the same as in the annealed case with $\mu=0$. Therefore the result follows from Theorem \ref{momdualannealed} applied with $\mu = 0$.
\end{proof}
\begin{lemma}[Quenched moment duality at jumps] \label{momdualatjumps}
For all $x\in[0,1]$ and $n\in \mathbb{N}$, if $\omega$ has a jump at time $t < T$, then we have
\[ \mathbb{E}^{\omega} \left [ (1-X(\omega,t))^n \mid X(\omega,t-)=x \right ]= \mathbb{E}^{\omega} \left [ (1-x)^{R_{T}(\omega,T-t)}\mid R_{T}(\omega,(T-t)-)=n \right ]. \]
\end{lemma}
\begin{proof}
On the one hand, the equation satisfied by $X(\omega,\cdot)$, that is, \eqref{WFSDE}, tells us that we have almost surely $X(\omega,t) = X(\omega,t-) + X(\omega,t-)(1-X(\omega,t-))\Delta \omega(t)$. Hence,
\begin{align}
\mathbb{E}^{\omega} \left [ (1-X(\omega,t))^n\mid X(\omega,t-)=x \right ] & = \left [ 1 - x (1 + (1-x) \Delta \omega(t)) \right ]^n \nonumber \\
& = \left [ 1 - x - \Delta \omega(t) x + \Delta \omega(t) x^2 \right ]^n. \label{jumponx}
\end{align}
On the other hand, recall that, conditionally on $\{R_{T}(\omega,(T-t)-)=n\}$, we have $R_{T}(\omega,T-t) \sim n + Y$ where $Y \sim \bindist{n}{\Delta \omega(t)}$. Therefore
\begin{align}
\mathbb{E}^{\omega} \left [ (1-x)^{R_{T}(\omega,T-t)}\mid R_{T}(\omega,(T-t)-)=n \right ] & = \mathbb{E}^{\omega} \left [ (1-x)^{n+Y} \right ] \nonumber \\
& = (1-x)^n \left [ (1-\Delta \omega(t)) + \Delta \omega(t)(1-x) \right ]^n \nonumber \\
& = \left [ 1 - x - \Delta \omega(t) x + \Delta \omega(t) x^2 \right ]^n. \label{jumponr}
\end{align}
The combination of \eqref{jumponx} and \eqref{jumponr} yields the result.
\end{proof}
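\smallskip
The common closed form in \eqref{jumponx} and \eqref{jumponr} is easy to check numerically. The following Python snippet is a minimal sketch (with arbitrary illustrative values of $x$, $\Delta\omega(t)$ and $n$, and not part of the proof) comparing a Monte Carlo estimate of the left-hand side of \eqref{jumponr} with the closed form.
\begin{verbatim}
# Minimal numerical sketch (not part of the proof): Monte Carlo check of
#   E[(1-x)^(n+Y)] = (1 - x - D*x + D*x^2)^n   with   Y ~ Binomial(n, D),
# for arbitrary illustrative values of x, D = Delta omega(t) and n.
import numpy as np

rng = np.random.default_rng(0)
x, D, n = 0.3, 0.45, 7
Y = rng.binomial(n, D, size=1_000_000)
lhs = np.mean((1.0 - x) ** (n + Y))        # Monte Carlo estimate of E[(1-x)^(n+Y)]
rhs = (1.0 - x - D * x + D * x ** 2) ** n  # closed form from the two displays
print(lhs, rhs)                            # the two numbers should agree closely
\end{verbatim}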
\begin{proof}[Proof of Theorem \ref{momdual}(Quenched moment duality for simple environments)]
Let $\omega$ be a simple environment. Let $(T_i)_{i=1}^N$ be the increasing sequence of jumping times of $\omega$ in $[0,T]$. Without loss of generality we assume that $0$ and $T$ are both jumping times of $\omega$. In particular $T_1 = 0$ and $T_N=T$. Let $(X(\omega, s))_{ s \in[0,T]}$ and $(R_T(\omega,\beta))_{\beta\in[0,T]}$ be independent realizations of the diffusion and the killed ASG, respectively. Partitioning on the values of $X(\omega,T_N-)$ and using Lemma \ref{momdualatjumps} at $t=T$, we get
\begin{align*}
& \mathbb{E}^{\omega} \left [ (1-X(\omega,T))^n\mid X(\omega,0)=x \right ] \\
= & \int_0^1 \mathbb{E}^{\omega} \left [ (1-X(\omega,T_N))^{n}\mid X(\omega,T_N-)=y \right ] \times \mathbb{P}^{\omega} \left ( X(\omega,T_N-) \in {\rm{d}} y \mid X(\omega,0)=x \right ) \\
= & \int_0^1 \mathbb{E}^{\omega} \left [ (1-y)^{R_T(\omega,0)}\mid R_{T}(\omega,0-) = n \right ] \times \mathbb{P}^{\omega} \left ( X(\omega,T_N-) \in {\rm{d}} y \mid X(\omega,0)=x \right ) \\
= & \mathbb{E}^{\omega} \left [ (1-X(\omega,T_N-))^{R_T(\omega,0)}\mid R_T(\omega,0-)=n, X(\omega,0)=x \right ].
\end{align*}
Partitioning on the values of $X(\omega,T_{N-1})$ and $R_T(\omega,0)$ and using Lemma \ref{momdualbetween} we get that the previous expression equals
\begin{align*}
& \sum_{k \in \mathbb{N}_0^\dagger} \mathbb{E}^{\omega} \left [ (1-X(\omega,T_N-))^{k}\mid X(\omega,0)=x \right ] \times \mathbb{P}^{\omega} \left ( R_T(\omega,0) = k \mid R_T(\omega,0-)=n \right ) \\
= & \sum_{k \in \mathbb{N}_0^\dagger} \int_0^1 \mathbb{E}^{\omega} \left [ (1-X(\omega,T_N-))^k \mid X(\omega,T_{N-1})=y \right ] \times \mathbb{P}^{\omega} \left ( X(\omega,T_{N-1}) \in {\rm{d}} y \mid X(\omega,0)=x \right ) \\
& \qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad \times \mathbb{P}^{\omega} \left ( R_T(\omega,0) = k \mid R_T(\omega,0-)=n \right ) \\
= & \sum_{k \in \mathbb{N}_0^\dagger} \int_0^1 \mathbb{E}^{\omega} \left [ (1-y)^{R_T(\omega,(T-T_{N-1})-)}\mid R_T(\omega,0)=k \right ] \times \mathbb{P}^{\omega} \left ( X(\omega,T_{N-1}) \in {\rm{d}} y \mid X(\omega,0)=x \right ) \\
& \qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad \times \mathbb{P}^{\omega} \left ( R_T(\omega,0) = k \mid R_T(\omega,0-)=n \right ) \\
= & \mathbb{E}^{\omega} \left [ (1-X(\omega,T_{N-1}))^{R_T(\omega,(T-T_{N-1})-)}\mid R_T(\omega,0-)=n, X(\omega,0)=x \right ].
\end{align*}
If $N=2$ the proof is already complete. If $N > 2$ we continue as follows. Partitioning first on the values of $R_T(\omega,(T-T_{N-1})-)$ and then on the values of $X(\omega,T_{N-1}-)$ and using Lemma \ref{momdualatjumps}, we get that the previous expression equals
\begin{align*}
& \sum_{k \in \mathbb{N}_0^\dagger} \mathbb{E}^{\omega} \left [ (1-X(\omega,T_{N-1}))^{k}\mid X(\omega,0)=x \right ] \times \mathbb{P}^{\omega} \left ( R_T(\omega,(T-T_{N-1})-) = k \mid R_T(\omega,0-)=n \right ) \\
= & \sum_{k \in \Nb_0^\dagger} \int_0^1 \mathbb{E}^{\omega} \left [ (1-X(\omega,T_{N-1}))^{k}\mid X(\omega,T_{N-1}-)=y \right ] \times \mathbb{P}^{\omega} \left ( X(\omega,T_{N-1}-) \in {\rm{d}} y \mid X(\omega,0)=x \right ) \\
& \qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad \qquad\qquad\qquad\quad \times \mathbb{P}^{\omega} \left ( R_T(\omega,(T-T_{N-1})-) = k \mid R_T(\omega,0-)=n \right ) \\
= & \sum_{k \in \mathbb{N}_0^\dagger} \int_0^1 \mathbb{E}^{\omega} \left [ (1-y)^{R_T(\omega,T-T_{N-1})}\mid R_{T}(\omega,(T-T_{N-1})-) = k \right ] \times \mathbb{P}^{\omega} \left ( X(\omega,T_{N-1}-) \in {\rm{d}} y \mid X(\omega,0)=x \right ) \\
& \qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad \qquad\qquad\qquad\quad \times \mathbb{P}^{\omega} \left ( R_T(\omega,(T-T_{N-1})-) = k \mid R_T(\omega,0-)=n \right ) \\
&= \mathbb{E}^{\omega} \left [ (1-X(\omega,T_{N-1}-))^{R_T(\omega,T-T_{N-1})}\mid R_T(\omega,0-)=n, X(\omega,0)=x \right ].
\end{align*}
Iterating this procedure, using successively Lemma \ref{momdualbetween} and Lemma \ref{momdualatjumps} (the first one is applied on the intervals $((T_{i-1}, T_i))_{i=2}^N$, while the second one is applied at the times $(T_{i})_{i=1}^N$), we finally obtain that $\mathbb{E}^{\omega} [ (1-X(\omega,T))^n\mid X(\omega,0)=x ]$ equals
\[ \mathbb{E}^{\omega} \left [ (1-X(\omega,0))^{R_T(\omega,T-)}\mid R_T(\omega,0-)=n, X(\omega,0)=x \right ] = \mathbb{E}^{\omega} \left [ (1-x)^{R_T(\omega,T-)}|R_T(\omega,0-)=n \right ], \]
achieving the proof.
\end{proof}
\begin{proof}[Proof of Lemma \ref{diag}]
For any $i\in\Nb_0^\dagger$, let $e_i\coloneqq (e_{i,j})_{j\in\Nb_0^\dagger}$ be the vector defined via $e_{i,i}=1$ and $e_{i,j}=0$ for $j\neq i$. Let us order $\Nb_0^\dagger$ as $\{ \dagger, 0, 1, 2,... \}$. The matrix $(Q_\dagger^0)^{\top}$ is upper triangular with diagonal elements $-\lambda_{\dagger}, -\lambda_{0}, -\lambda_{1}, -\lambda_{2}, \ldots$. For any $n\in\Nb_0^\dagger$, let $v_n \in \textrm{Span} \{ e_i, i \leq n \}$ denote the eigenvector of $(Q_\dagger^0)^{\top}$ associated with the eigenvalue $-\lambda_n$ and for which the coordinate with respect to $e_n$ is $1$. It is not difficult to see that these eigenvectors exist and that we have $v_{\dagger} = e_{\dagger}$ and $v_{0} = e_{0}$. For $n \geq 1$, writing $v_n = e_n + c_{n-1} e_{n-1} + ... + c_0 e_{0} + c_{\dagger} e_{\dagger}$ and multiplying by $\frac1{-\lambda_n} (Q_\dagger^0)^{\top}$ on both sides, we obtain another expression for $v_n$ as a linear combination of $e_n, e_{n-1},..., e_{0}, e_{\dagger}$. Hence, identifying both expressions, we get an expression for $c_{n-1}$ and, for each $k \leq n-2$, for the coefficient $c_k$ in terms of $c_{k+1},...,c_{n-1}$. This allows us to see that these coefficients are equal to the coefficients $v_{n,k}^\dagger$ defined in the statement of the lemma. More precisely,
\[ v_n = v_{n,n}^\dagger e_n + v_{n,n-1}^\dagger e_{n-1} + ... + v_{n,0}^\dagger e_{0} + v_{n,\dagger}^\dagger e_{\dagger}. \]
Similarly we can show that
\[ e_n = u_{n,n}^\dagger v_n + u_{n,n-1}^\dagger v_{n-1} + ... + u_{n,0}^\dagger v_{0} + u_{n,\dagger}^\dagger v_{\dagger}. \]
We thus get that $V_\dagger^{\top} U_\dagger^{\top} = U_\dagger^{\top} V_\dagger^{\top}=Id$ and $(Q_\dagger^0)^{\top} = V_\dagger^{\top} D_\dagger U_\dagger^{\top}$ where $D_\dagger$ is the diagonal matrix defined in the statement of the lemma and where the matrix products are well-defined since they involve sums of finitely many non-zero terms. The result follows.
\end{proof}
\begin{proof}[Proof of Theorem \ref{x0condpastenvII}]
We assume that $\sigma=0$, $\theta >0$ and $\nu_0, \nu_1 \in (0,1)$.
Let us first justify that the matrix products in \eqref{defbetaki}, \eqref{defmatA} and \eqref{defcoeffmom} are well-defined and that $C^\dagger_{n,k}(\omega,T) = 0$ for all $k > n2^N$. Note that if we order $\Nb_0^\dagger$ as $\{ \dagger, 0, 1, 2,... \}$ then the matrices $U_\dagger^\top$ and $V_\dagger^\top$ are upper triangular. Moreover, $\Bs_{j,i}(z)=0$ for $i>2j$. Therefore, for any $n \in\Nb$ and any vector $v = (v_i)_{i \in \Nb_0^\dagger}$ such that $v_i = 0$ for all $i > n$, $\tilde v \coloneqq U_\dagger^\top (\Bs(z)^\top (V_\dagger^\top v))$ is well-defined and satisfies $\tilde v_i = 0$ for all $i > 2n$. In particular $\beta^\dagger(z) = U_\dagger^\top \Bs(z)^\top V_\dagger^\top$, defined in \eqref{defbetaki}, is well-defined. Since $\exp ( (T_{m-1} - T_m) D_\dagger )$ is diagonal, we have that for any $m \geq 1$, the product defining the matrix $A^{\dagger}_m(\omega)$ in \eqref{defmatA} is well-defined. Moreover, for any $n \in\Nb$ and any vector $v = (v_i)_{i \in \Nb_0^\dagger}$ such that $v_i = 0$ for all $i > n$, the vector $\tilde v \coloneqq A^{\dagger}_m(\omega) v$ satisfies $\tilde v_i = 0$ for all $i > 2n$. In particular, for any $m \geq 1$, the product $\exp ( -(T_N+T)D_\dagger) A_m^\dagger(\omega)A_{m-1}^\dagger(\omega)\cdots A_1^\dagger(\omega) U_\dagger^\top$ is well-defined. Additionally, for $n \geq 1$ and a vector $v = (v_i)_{i \in \Nb_0^\dagger}$ such that $v_i = 0$ for all $i > n$, the vector $\tilde v \coloneqq \exp ( -(T_N+T)D_\dagger) A_m^\dagger(\omega)A_{m-1}^\dagger(\omega)\cdots A_1^\dagger(\omega) U_\dagger^\top v$ satisfies $\tilde v_i = 0$ for all $i > 2^m n$. Transposing, we see that the matrix $C^\dagger(\omega,T)$ in \eqref{defcoeffmom} is well-defined and satisfies $C^\dagger_{n,k}(\omega,T) = 0$ for all $k > n2^N$.
Now, for $s>0$, we define the stochastic matrix $\Ps^\dagger_s(\omega)\coloneqq (p_{i,j}^\dagger(\omega,s))_{i,j\in\Nb_0^\dagger}$ via
$$p_{i,j}^\dagger(\omega,s)\coloneqq \Pb^\omega(R_0(\omega,s-)=j\mid R_0(\omega,0-)=i).$$
Hence, defining $\rho(y)\coloneqq (y^i)_{i\in\Nb_0^\dagger}$, $y\in[0,1]$ (with the convention $y^\dagger\coloneqq 0$), we obtain
\begin{equation}\label{gfrt}
\mathbb{E}^{\omega}[y^{R_0(\omega,T-)} \mid R_0(\omega,0-)=n] = (\Ps_T^\dagger(\omega) \rho(y))_n=(\Ps_T^\dagger(\omega)U_\dagger P_\dagger(y))_n,
\end{equation}
where we have used that $\rho(y)=U_\dagger P_\dagger(y)$ with $P_\dagger(y)=(P_k^\dagger(y))_{k\in\Nb_0^\dagger}$.
Thus, Theorem \ref{momdual} and Eq. \eqref{gfrt} yield
\begin{equation}\label{mxt}
\mathbb{E}^{\omega} [ (1-X(\omega,0))^n\mid X(\omega,-T)=x ] = \sum_{k = 0}^{\infty} \left(\Ps_T^\dagger(\omega)U_\dagger\right)_{n,k} P_k^\dagger(1-x).
\end{equation}
Now, consider the semi-group $M_\dagger\coloneqq (M_\dagger(s))_{s\geq 0}$ of the killed ASG in the null environment, which is defined via $M_{\dagger}(s)\coloneqq \exp (sQ_\dagger^0 )$. Thanks to Lemma \ref{diag}, $M_\dagger(\beta)=U_\dagger E_\dagger(\beta)V_\dagger$, where $E_\dagger(\beta)$ is the diagonal matrix with diagonal entries $(e^{-\lambda^{\dagger}_j \beta})_{j\in\Nb_0^\dagger}$. We split the proof of the first statement into two cases:
\smallskip
\textbf{Case 1, when $\omega$ has no jumps in $[-T,0]$:} In this case, we have
$$\Ps_T^\dagger(\omega)U_\dagger=M_\dagger(T)U_\dagger=U_\dagger E_\dagger(T)V_\dagger U_\dagger=U_\dagger E_\dagger(T),$$
where in the last identity we used that $U_\dagger V_\dagger=Id$. Since the right hand side in the previous identity coincides with $C^\dagger(\omega,T)$, the proof of the first part of the statement follows from \eqref{mxt}.
\smallskip
\textbf{Case 2, when $\omega$ has at least one jump in $[-T,0]$:} In this case, let $T_N<T_{N-1}<\cdots<T_1$ denote the sequence of jumping times of $\omega$ in $[-T,0]$. Disintegrating on the values of $R_0(\omega, -(T_i-))$ and $R_0(\omega,-T_i)$, $i\in [N]$, we see that
\begin{equation}\label{ptdec}
\Ps_T^\dagger(\omega)=M_\dagger(-T_1)\Bs(\Delta\omega (T_1))M_\dagger(T_1-T_2)\Bs(\Delta\omega(T_2))\cdots \Bs(\Delta\omega(T_N))M_\dagger(T_N+T).
\end{equation}
Using this, the relation $M_\dagger(\beta)=U_\dagger E_\dagger(\beta)V_\dagger$, the definition of the matrices $\beta^\dagger$ and $A_i^\dagger$ (see \eqref{defbetaki} and \eqref{defmatA} resp.), and the fact that $U_\dagger V_\dagger=Id$, we obtain
\begin{align*}
\Ps_T^\dagger(\omega)U_\dagger&=U_\dagger E_\dagger(-T_1) \beta^\dagger(\Delta \omega(T_1))^\top E_\dagger(T_1-T_2)\beta^\dagger(\Delta \omega(T_2))^\top\cdots\beta^\dagger(\Delta \omega(T_N))^\top E_\dagger(T_N+T)\\
&= U_\dagger A_1^\dagger(\omega)^\top A_2^\dagger(\omega)^\top\cdots A_N^\dagger(\omega)^\top E_\dagger(T_N+T)\\
&= U_\dagger \left[A_N^\dagger(\omega)A_{N-1}^\dagger(\omega)\cdots A_1^\dagger(\omega)\right]^\top E_\dagger(T_N+T)=C^\dagger(\omega,T),
\end{align*}
proving the first part of the statement in this case. It remains to prove that $C^\dagger_{n,0}(\omega,T)$ converges to $W_n(\omega)$ as $T$ tends to infinity. In the case of the null environment, i.e. $\omega=0$, the first part of the statement together with \eqref{gfrt} yield
$$\mathbb{E}^{\omega}[y^{R_0(0,T-)} \mid R_0(0,0-)=n]=\sum_{k=0}^n e^{-\lambda_k^\dagger T} u_{n,k}^\dagger P_k^\dagger(y).$$
Since $\lambda_k^\dagger>0$ for $k\in\Nb$ and $\lambda_0^\dagger=0$, the proof of the second part of the statement follows by letting $T\to\infty $ in the previous identity. The general case is a direct consequence of the following proposition.
\end{proof}
\begin{proposition} \label{approxmn}
Assume that $\sigma=0$, $\theta >0$ and $\nu_0, \nu_1 \in (0,1)$, then we have
\[ \left | C^\dagger_{n,0}(\omega,T) - W_n(\omega) \right | \leq e^{-\theta\nu_0 T}. \]
\end{proposition}
\begin{proof}
Let $\omega_T$ be the environment that coincides with $\omega$ in $(-T,0]$ and that is constant and equal to $\omega(-T)$ in $(-\infty,-T]$. Since $\Ps_T^\dagger(\omega_T)=\Ps_T^\dagger(\omega)$ and $\omega_T$ has no jumps in $(-\infty,-T]$, we obtain
\begin{align}\label{wow}
W_n(\omega_T)&=\sum_{k\geq 0}p_{n,k}^\dagger(\omega_T,T)\,\Pb^{\omega_T}(\exists \beta\geq T\ \text{s.t.} \ R_0(\omega_T,\beta)=0 \mid R_0(\omega_T,T-)=k)\nonumber\\
&=\sum_{k\geq 0}p_{n,k}^\dagger(\omega,T)\,W_k(0)=(\Ps_T^\dagger(\omega)U_\dagger)_{n,0}=C^\dagger_{n,0}(\omega,T),
\end{align}
where in the last line we used Theorem \ref{x0condpastenvII} for the null environment (which was already proved) and the definition of $C^\dagger(\omega,T)$.
Now combining \eqref{wow} with \eqref{absetmu0} applied to $\omega_T$ with $\mu=\delta_n$ yields
\begin{equation}\label{coefetmu0}
p_{n,0}^\dagger(\omega,T)=p_{n,0}^\dagger(\omega_T,T)\leq C^\dagger_{n,0}(\omega,T) \leq p_{n,0}^\dagger(\omega_T,T)+e^{-\theta\nu_0 T}=p_{n,0}^\dagger(\omega,T)+e^{-\theta\nu_0 T}.
\end{equation}
Then, combining \eqref{absetmu0} applied to $\omega$ with $\mu=\delta_n$ and \eqref{coefetmu0} we get
$$C^\dagger_{n,0}(\omega,T)-e^{-\theta\nu_0 T}\leq W_n(\omega)\leq C^\dagger_{n,0}(\omega,T)+e^{-\theta\nu_0 T},$$
and the result follows.
\end{proof}
\subsection{Ancestral type distribution: The pruned look-down ASG}\label{h(x)quenchedmain}
In this section, we show the results presented in Section \ref{s263}.
\begin{proof}[Proof of Proposition \ref{exprht(x)quenched}]
The proof is completely analogous to the proof of Proposition \ref{exprhtannealed}.
\end{proof}
If $\mu$ is a probability measure on $\mathbb{N}$, let $\mu^T_\beta(\omega)$ denote the distribution of $L_T(\omega,\beta-)$ when the initial value $L_T(\omega,0-)$ follows distribution $\mu$.
\begin{proof}[Proof of Theorem \ref{h(x)=lim}]
Let us first assume that $\theta > 0$ and $\nu_0 > 0$. We will first show the convergence of $\mu^{T}_T(\omega)$ to a limit $\mu^{\infty}_{\infty}(\omega)$ when $T \rightarrow \infty$, which does not depend on the choice of $\mu$.
\smallskip
We first let $r_2 > r_1 > 0$ and study $d_{TV} (\mu^{r_2}_{r_2}(\omega), \mu^{r_1}_{r_1}(\omega))$. Note that we have $\mu^{r_2}_{r_2}(\omega) = (\mu^{r_2}_{r_2 - r_1}(\omega))^{r_1}_{r_1}(\omega)$ so
\begin{eqnarray}
d_{TV} (\mu^{r_2}_{r_2}(\omega), \mu^{r_1}_{r_1}(\omega)) = d_{TV} ((\mu^{r_2}_{r_2 - r_1}(\omega))^{r_1}_{r_1}(\omega), \mu^{r_1}_{r_1}(\omega)). \label{shifchanges startinglaw}
\end{eqnarray}
Thus, we need to study $d_{TV} (\mu^{t}_t(\omega), \tilde \mu^{t}_t(\omega))$, i.e. the total variation distance between the distributions of $L_t(\omega,t-)$ with two different starting laws $\mu$ and $\tilde \mu$.
\smallskip
From the dynamics of $L_T(\omega,\cdot)$, we know that from any state $i$ there is a transition to the state $1$ with rate $q^0(i,1)$, which is greater than or equal to $\theta\nu_0 > 0$. Let $\hat{L}_T^\mu(\omega,\cdot)$ be a process with initial distribution $\mu$ and with the same dynamics as $L_T(\omega,\cdot)$, except for the transition rate to the state $1$, which is $q^0(i,1) - \theta\nu_0 \geq 0$. We decompose the dynamics of $L_T(\omega,\cdot)$ with starting law $\mu$ as follows: (1) $L_T(\omega,\cdot)$ has the same dynamics as $\hat{L}_T^\mu(\omega,\cdot)$ on $[0, \xi]$, where $\xi$ is an independent exponential random variable with parameter $\theta\nu_0$, (2) at time $\xi$ the process jumps to the state $1$ regardless of its current position, and (3) conditionally on $\xi$, $L_T(\omega,\cdot)$ has the same law on $[\xi, \infty)$ as an independent copy of $L_{T-\xi}(\omega,\cdot)$ started with one line.
Using this idea, we couple, on the basis of the same random variable $\xi$, two copies $\tilde{L}_T(\omega,\cdot)$ and $\tilde{\tilde L}_T(\omega,\cdot)$ of $L_T(\omega,\cdot)$ with starting laws $\mu$ and $\tilde \mu$, respectively, so that the two processes are equal on $[\xi, \infty)$. Since $\tilde{L}_T(\omega,T-) \sim \mu^{T}_T(\omega)$ and $\tilde{\tilde L}_T(\omega,T-) \sim \tilde \mu^{T}_T(\omega)$, we have
\begin{eqnarray}
d_{TV} (\mu^{T}_T(\omega), \tilde \mu^{T}_T(\omega)) \leq \mathbb{P} \left ( \tilde{L}_T(\omega,T-) \neq \tilde{\tilde{L}}_T(\omega,T-) \right ) \leq \mathbb{P} \left ( \xi > T \right ) = e^{-\theta\nu_0 T}. \label{rapprochementexpo}
\end{eqnarray}
Plugging this into \eqref{shifchanges startinglaw} we obtain, for any starting law $\mu$ and any $r_2 > r_1 > 0$,
\begin{eqnarray}
\ d_{TV} (\mu^{r_2}_{r_2}(\omega), \mu^{r_1}_{r_1}(\omega)) \leq e^{-\theta\nu_0 r_1}. \label{cauchyexpo}
\end{eqnarray}
In particular $(\mu^{t}_t(\omega))_{t > 0}$ is Cauchy as $t \rightarrow \infty$ for the total-variation distance and therefore convergent. Hence, $\mu^{\infty}_{\infty}(\omega)$ is well-defined for any starting law $\mu$. Moreover, \eqref{rapprochementexpo} implies that $\mu^{\infty}_{\infty}(\omega)$ does not depend on $\mu$. Therefore, the first claim in Theorem \ref{h(x)=lim} is proved.
\smallskip
Setting $r_1 = T$ and letting $r_2\to\infty$ in \eqref{cauchyexpo} yields $$d_{TV} (\mu^{\infty}_{\infty}(\omega), \mu^{T}_{T}(\omega)) \leq e^{-\theta\nu_0 T}.$$ Since $h^{\omega}(x) = 1 - \mathbb{E} \left[(1-x)^{Z_{\infty}(\omega)} \right]$ and $h^{\omega}_T(x) = 1 - \mathbb{E} \left[(1-x)^{Z_{T}(\omega)} \right]$, where $Z_{\infty}(\omega) \sim (\delta_1)^{\infty}_{\infty}(\omega)$ and $Z_{T}(\omega) \sim (\delta_1)^{T}_{T}(\omega)$, we get
$$\lvert h_T^\omega(x)-h^\omega(x)\rvert\leq d_{TV}((\delta_1)^{\infty}_{\infty}(\omega), (\delta_1)^{T}_{T}(\omega))\leq e^{-\theta\nu_0 T},$$
achieving the proof in the case $\theta > 0$ and $\nu_0 > 0$.
\smallskip
Let us now assume that $\theta \nu_0=0$, $\omega$ is simple, has infinitely many jumps on $[0,\infty)$ and that the distance between the successive jumps does not converge to $0$. As before, the proof will be based on an appropriate upper bound for $d_{TV} (\mu^{T}_T(\omega), \tilde \mu^{T}_T(\omega))$.
\smallskip
Let $0 < T_1 < T_2 <\cdots$ denote the sequence of the jumping times of $\omega$ and set $T_0 \coloneqq 0$ for convenience. On each time interval $(T_i, T_{i+1})$, $L_T(\omega,\cdot)$ has transition rates given by $(q^0(i,j))_{i,j\in\Nb}$. For any $k > l$, let $H(k,l)$ denote the hitting time of $[l]$ by a Markov chain starting at $k$ and with transition rates given by $(q^0(i,j))_{i,j\in\Nb}$. Let $(S_l)_{l \geq 2}$ be a sequence of independent random variables such that for each $l \geq 2$, $S_l \sim H(l,l-1)$. By the Markov property we have that for any $k \geq 1$, $H(k,1)$ is stochastically upper bounded by $\sum_{l=2}^k S_l$. Therefore, for any $i$ such that $T_{i+1} < T$ and any $k \geq 1$ we have
\[ \mathbb{P} \left (L_T(\omega,(T-T_i)-) = 1 | L_T(\omega,T-T_{i+1}) = k \right ) \geq \mathbb{P} \left ( \sum_{l=2}^k S_l \leq T_{i+1} - T_{i} \right ) \geq \mathbb{P} \left ( \sum_{l=2}^{\infty} S_l \leq T_{i+1} - T_{i} \right ). \]
Let $l_0$ be such that $\sigma l\leq (l-1)l/4$ for all $l \geq l_0$. For $l \geq l_0$, we define a random walk $Z^l \coloneqq (Z^l(t))_{t \geq 0}$ on $\{l-1,l,l+1,\ldots\}$ starting at $l$ and such that: a) it jumps from $n\geq l$ at rate $(n-1)n$ to either $n-1$ or $n+1$, with probability $3/4$ and $1/4$, respectively, and b) it is absorbed at $l-1$. Let $W_l$ denote the hitting time of $l-1$ by $Z^l$. We can see that $\mathbb{E}[W_l] \leq \frac{2}{l(l-1)}$. For $l \geq l_0$, let $Y^l \coloneqq (Y^l(t))_{t \geq 0}$ denote the Markov chain with transition rates given by $(q^0(i,j))_{i,j\in\Nb}$, starting at $l$ and killed at the hitting time of $[l-1]$. Note that $Y^l$ always jumps to the right with a smaller rate than $Z^l$ and jumps to the left at a higher rate. Moreover, the jumps of both are always of size $1$. Therefore, it is possible to build a coupling of $Y^l$ and $Z^l$ such that $Y^l(t) \leq Z^l(t)$ for every $t \geq 0$. In this coupling the hitting time of $[l-1]$ by $Y^l$ (which is equal in law to $S_l$) is almost surely smaller than or equal to $W_l$. We thus get, for all $l\geq l_0$, $$\mathbb{E}[S_l] \leq \mathbb{E}[W_l] \leq \frac{2}{l(l-1)}.$$
Therefore, $\sum_{l=2}^{\infty} \mathbb{E}[S_l] < +\infty$. Thus, $S^\infty\coloneqq \sum_{l=2}^{+\infty} S_l<\infty$ almost surely. Moreover, since, for each $l \geq 2$, the support of $S_l$ contains $0$, the support of $S^\infty$ contains $0$. In particular $q(s) \coloneqq \mathbb{P} ( \sum_{l=2}^{+\infty} S_l \leq s)$ is positive for all $s > 0$. From above we get
\begin{eqnarray}
\forall k \geq 1, \ \mathbb{P} \left (L_T(\omega,(T-T_i)-) = 1 | L_T(\omega,T-T_{i+1}) = k \right ) \geq q(T_{i+1} - T_{i}). \label{probatouch0}
\end{eqnarray}
Let us consider $(V_T(\omega,\beta))_{\beta \geq 0}$ and $(\tilde V_T(\omega,\beta))_{\beta \geq 0}$, two independent versions of $(L_T(\omega,\beta))_{\beta \geq 0}$ with starting laws respectively $\mu$ and $\tilde \mu$ at instant $\beta = 0-$. For any $i \geq 1$, let $(V^{i}_{T_i}(\omega,\beta))_{\beta \geq 0}$ be a version of $(L_{T_i}(\omega,\beta))_{\beta \geq 0}$ starting at $1$ at instant $\beta = (T-T_i)-$, independent from $(V_T(\omega,\beta))_{\beta \geq 0}$ and $(\tilde V_T(\omega,\beta))_{\beta \geq 0}$. Let $I_0(T)$ be the largest index $i$ such that 1) $T_i < T$ and 2) $V_T(\omega,(T-T_i)-) = \tilde V_T(\omega,(T-T_i)-) = 1$, if such an index exists, and let $I_0(T) \coloneqq \dagger$ otherwise. We define $(U_T(\omega,\beta))_{\beta \geq 0}$ and $(\tilde U_T(\omega,\beta))_{\beta \geq 0}$ in the following way:
\[ U_T(\omega,\beta) \coloneqq \left\{\begin{array}{ll}
V_T(\omega,\beta) &\text{if $\beta < T-T_{I_0(T)}$},\\
V^{{I_0(T)}}_{T_{I_0(T)}}(\omega,\beta) &\text{if $\beta \geq T-T_{I_0(T)}$},\\
\end{array}\right.
\ \ \ \tilde U_T(\omega,\beta)\coloneqq \left\{\begin{array}{ll}
\tilde V_T(\omega,\beta) &\text{if $\beta < T-T_{I_0(T)}$},\\
V^{{I_0(T)}}_{T_{I_0(T)}}(\omega,\beta) &\text{if $\beta \geq T-T_{I_0(T)}$}.\\
\end{array}\right. \]
Note that $(U_T(\omega,\beta))_{\beta \geq 0}$ and $(\tilde U_T(\omega,\beta))_{\beta \geq 0}$ are realizations of $(L_T(\omega,\beta))_{\beta \geq 0}$ with starting laws respectively $\mu$ and $\tilde \mu$ at instant $\beta = 0-$ (in particular we have $U_T(\omega,T-) \sim \mu^{T}_T(\omega)$ and $\tilde U_T(\omega,T-)\sim \tilde \mu^{T}_T(\omega)$). Moreover we have $U_T(\omega,\beta)=\tilde U_T(\omega,\beta)$ for all $\beta \geq T-T_{I_0(T)}$. Therefore,
\begin{eqnarray}
d_{TV} (\mu^{T}_T(\omega), \tilde \mu^{T}_T(\omega)) \leq \mathbb{P} \left ( U_T(\omega,T-) \neq \tilde U_T(\omega,T-) \right ) \leq \mathbb{P} \left ( I_0(T) = \dagger \right ). \label{rapprochementsm}
\end{eqnarray}
Let $N(T)$ be the index of the last jumping time before $T$ (so that $T_1 < T_2 < \cdots < T_{N(T)}$ are the jumping times of $\omega$ on $[0,T]$). According to \eqref{probatouch0}, we have that for any $k_1, k_2 \geq 1$ with $k_1 \neq k_2$,
\begin{align*}
\mathbb{P} &\left (V_T(\omega,(T-T_i)-) \neq 1 \ \text{or} \ \tilde V_T(\omega,(T-T_i)-) \neq 1 \mid V_T(\omega,T-T_{i+1}) = k_1, \tilde V_T(\omega,T-T_{i+1}) = k_2 \right ) \\
&\leq 1-q(T_{i+1} - T_{i})^2.
\end{align*}
Therefore,
\[ \mathbb{P} \left ( I_0(T) = \dagger \right ) \leq \prod_{i=1}^{N(T)} \left ( 1-q(T_{i} - T_{i-1})^2 \right ) =: \varphi_{\omega}(T). \]
Plugging this into \eqref{rapprochementsm} we deduce that $d_{TV} (\mu^{T}_T(\omega), \tilde \mu^{T}_T(\omega)) \leq \varphi_{\omega}(T)$. Note that $\varphi_{\omega}$ does not depend on the particular choice of $\mu$ and $\tilde \mu$. Recall that by assumption the sequence of jumping times $T_1, T_2,\ldots$ is infinite and the distance between the successive jumps does not converge to $0$. Therefore there is $\epsilon > 0$ such that for infinitely many indices $i$ we have $T_{i+1}-T_i > \epsilon$. The number of factors bounded by $1-q(\epsilon)^2 < 1$ in the product defining $\varphi_{\omega}(T)$ thus converges to infinity when $T$ goes to infinity. We deduce that $\varphi_{\omega}(T)\to 0$ as $T\to\infty$, which concludes the proof.
\end{proof}
\begin{proof} [Proof of Theorem \ref{h(x)condenv}]
We are interested in the generating function of $L_T(\omega,T-)$.
For $s>0$, we define the stochastic matrix $\Ps^T_s(\omega)\coloneqq (p^T_{i,j}(\omega,s))_{i,j\in\Nb}$ via
$$p^T_{i,j}(\omega,s)\coloneqq \Pb^\omega(L_T(\omega,s-)=j\mid L_T(\omega,0-)=i).$$
We also define $(M(s))_{s\geq 0}$ via $M(s)\coloneqq \exp (sQ^0 )$, i.e $M$ is the semi-group of the pruned LD-ASG in the null environment. Let $T_1<T_{2}<\cdots<T_N$ denote the sequence of jumping times of $\omega$ in $[0,T]$. Then, disintegrating on the values of $L_T(\omega, (T-T_i)-)$ and $L_T(\omega, T - T_i)$, $i\in [N]$, we see that
\begin{equation}\label{ptdecpldasg}
\Ps^T_0(\omega)=M(T-T_N)\Bs(\Delta\omega (T_N))M(T_N-T_{N-1})\Bs(\Delta\omega(T_{N-1}))\cdots \Bs(\Delta\omega(T_1))M(T_1).
\end{equation}
In addition,
\begin{equation}\label{gfrtpldasg}
\mathbb{E}^{\omega}[y^{L_T(\omega,T-)} \mid L_T(\omega,0-)=n] = (\Ps^T_0(\omega) \rho(y))_n,\quad \textrm{where}\quad \rho(y)\coloneqq (y^i)_{i\in\Nb}.
\end{equation}
Thanks to Lemma \ref{diagpldasg}, we have that $M(\beta)=U E(\beta)V$, where $E(\beta)$ is the diagonal matrix with diagonal entries $(e^{-\lambda_j \beta})_{j\in\Nb}$. Moreover, from definition of the polynomials $P_k$, we have that $\rho(y)=U P(y)$, where $P(y)=(P_k(y))_{k\in\Nb}$. Using this, Eq. \eqref{ptdecpldasg}, the relation $M(\beta)=U E(\beta)V$, the definition of the matrices $\beta(\cdot)$ in \eqref{defbetakipldasg}, the fact that $U V=Id$, and the definition of the matrices $A_{\cdot}$ in \eqref{defmatApldasg}, we obtain
\begin{align}
\Ps^T_0(\omega) \rho(y)&=U E(T-T_N) \beta(\Delta \omega(T_N))^\top E(T_N-T_{N-1})\beta(\Delta \omega(T_{N-1}))^\top\cdots\beta(\Delta \omega(T_1))^\top E(T_1)P(y) \label{gfrtpldasg2} \\
&= U E(T-T_N) A_N(\omega)^\top A_{N-1}(\omega)^\top\cdots A_1(\omega)^\top P(y) \nonumber \\
&= U E(T-T_N) \left[A_1(\omega)A_{2}(\omega)\cdots A_N(\omega)\right]^\top P(y). \nonumber
\end{align}
Now using the previous identity, Proposition \ref{exprht(x)quenched} and Eq. \eqref{gfrtpldasg}, we obtain
\begin{align*}
h^{\omega}_T(x)=1-\mathbb{E}^{\omega}[(1-x)^{L_T(\omega,T-)} \mid L_T(\omega,0-)=1] = 1 - \sum_{k = 0}^{\infty} C_{1,k}(\omega,T) P_k(1-x).
\end{align*}
Arguing as in the proof of Theorem \ref{x0condpastenvII}, we can show that $C_{1,k}(\omega,T)=0$ for all $k > 2^N$, and \eqref{exprfctgenlt4} follows.
Let us now show that for each $k \geq 1$, the coefficient $C_{1,k}(\omega,T)$ converges to a real limit when $T$ goes to infinity. We see from \eqref{exprfctgenlt4} that the decomposition of the generating function of $L_T(\omega,T-)$ (conditionally on $L_T(\omega,0-)=1$) in the basis of polynomials $(P_k(y))_{k \in \Nb}$ is given by $\sum_{k=1}^{\infty} C_{1,k}(\omega,T) P_k(y)$. In the basis $(y^k)_{k \in \Nb}$, this decomposition is clearly given by $\sum_{k=1}^{\infty} \mathbb{P}^{\omega}(L_T(\omega,T-)=k|L_T(\omega,0-)=1) y^k$. Since $U^\top$ is the transition matrix from the basis $(y^k)_{k \in \Nb}$ to the basis $(P_k(y))_{k \in \Nb}$ we deduce that $$C_{1,k}(\omega,T) = \sum_{i \in \Nb} u_{i,k} \mathbb{P}^{\omega}(L_T(\omega,T-)=i \mid L_T(\omega,0-)=1),$$ or equivalently, $$C_{1,k}(\omega,T) = \mathbb{E} [ u_{L_T(\omega,T-),k} |L_T(\omega,0-)=1 ].$$
From Theorem \ref{h(x)=lim}, we know that the distribution of $L_T(\omega,T-)$ converges when $T\to\infty$. In addition, Lemma \ref{majoalphaki} tells us that the function $i \mapsto u_{i,k}$ is bounded, and hence $C_{1,k}(\omega,T)$ converges to a real limit.
Recall that $T_1 < T_2 < \ldots$ is the increasing sequence of the jumping times of $\omega$ and that this sequence converges to infinity. Therefore
\[ \lim_{T \rightarrow \infty} C_{1,k}(\omega,T) = \lim_{m \rightarrow \infty} C_{1,k}(\omega,T_m) = \lim_{m \rightarrow \infty} \left( U \left[A_1(\omega)A_{2}(\omega)\cdots A_m(\omega)\right]^\top \right)_{1,k}, \]
where we have used \eqref{defcoeff4} for the last equality. This shows in particular that the limit in the right hand side of \eqref{defcoeff4inf} exists and equals $\lim_{T \rightarrow \infty} C_{1,k}(\omega,T)$.
We now prove \eqref{exprfctgenlt4inf} together with the convergence of the series in this expression. We already know from Theorem \ref{h(x)=lim} that $h^{\omega}_T(x)$ converges to $h^{\omega}(x)$ when $T\to\infty$ and we have just proved that for any $k \geq 1$, $C_{1,k}(\omega,T)$ converges to $C_{1,k}(\omega,\infty)$, defined in \eqref{defcoeff4inf}, when $T\to\infty$. In addition, we claim that for all $y\in[0,1]$ and $T>T_1$, we have
\begin{eqnarray}
\ | C_{1,k}(\omega,T) P_k(y) | \leq 4^k \times (2ek)^{(k+\theta)/2} e^{- \lambda_k T_1} \label{bornecvdom}
\end{eqnarray}
Given this bound, the expression \eqref{exprfctgenlt4inf} for $h^{\omega}(x)$ and the convergence of the series follow from the dominated convergence theorem. It only remains to prove \eqref{bornecvdom}. We saw above that the decomposition of the generating function of $L_T(\omega,T-)$ (conditionally on $L_T(\omega,0-)=1$) in the basis of polynomials $(P_k(y))_{k \in \Nb}$ is given by $\sum_{k=1}^{\infty} C_{1,k}(\omega,T) P_k(y)$ and that $C_{1,k}(\omega,T) = \mathbb{E} [ u_{L_T(\omega,T-),k} |L_T(\omega,0-)=1 ]$. Similarly we can see that for $T > T_1$, the decomposition of the generating function of $L_T(\omega,T-T_1)$ (conditionally on $L_T(\omega,0-)=1$) in the basis of polynomials $(P_k(y))_{k \in \Nb}$ can be expressed as $\sum_{k=1}^{\infty} \tilde C_{1,k}(\omega,T) P_k(y)$, where $\tilde C_{1,k}(\omega,T) = \mathbb{E} [ u_{L_T(\omega,T-T_1),k} |L_T(\omega,0-)=1 ]$. Arguing as in the proof of \eqref{gfrtpldasg2}, we can prove that
\begin{eqnarray}
\Ps^T_{T_1 +}(\omega) \rho(y)&=U E(T-T_N) \beta(\Delta \omega(T_N))^\top E(T_N-T_{N-1})\beta(\Delta \omega(T_{N-1}))^\top\cdots\beta(\Delta \omega(T_1))^\top P(y) \label{gfrtpldasg3}
\end{eqnarray}
Since $E(T_1)$ is the diagonal matrix with diagonal entries $(e^{-\lambda_j T_1})_{j\in\Nb}$ we conclude from \eqref{gfrtpldasg2} and \eqref{gfrtpldasg3} that $C_{1,k}(\omega,T) = e^{-\lambda_k T_1} \tilde C_{1,k}(\omega,T)$. Therefore
\[ C_{1,k}(\omega,T) = e^{-\lambda_k T_1} \mathbb{E} [ u_{L_T(\omega,T-T_1),k} \mid L_T(\omega,0-)=1 ]. \]
This together with Lemma \ref{majoalphaki} implies that, for all $k\geq 1$ and $T> T_1$,
\[ \ |C_{1,k}(\omega,T)| \leq (2ek)^{(k+\theta)/2} e^{- \lambda_k T_1}. \]
Combining this with Lemma \ref{majosommedesaki}, we obtain \eqref{bornecvdom}, which concludes the proof.
\end{proof}
\begin{lemma} \label{majoalphaki} For all $k\geq 1$
\[ \ \sup_{j \geq 1} u_{j,k} \leq (2ek)^{(k+\theta)/2}. \]
\end{lemma}
\begin{proof}
Let $k \geq 1$. By the definition of the matrix $U$ in Lemma \ref{diagpldasg}, the sequence $(u_{j,k})_{j \geq 1}$ satisfies
\begin{align*}
u_{j,k} & = 0 \ \text{for} \ j < k, \ u_{k,k} = 1, \ u_{k+1,k} = \frac{\gamma_{k+1}}{\lambda_{k+1} - \lambda_{k}}, \\
u_{k+l,k} & = \frac{1}{\lambda_{k+l} - \lambda_k} \left ( \gamma_{k+l} u_{k+l-1,k} + \theta\nu_0 \left ( \sum_{j=0}^{l-2} u_{k+j,k} \right ) \right ) \ \text{for} \ l \geq 2.
\end{align*}
Let $M_k^{j} \coloneqq \sup_{i \leq j} u_{i,k}$. By the definitions of $\gamma_{j+1}$, $\lambda_{k+1}$, $\lambda_k$ in Lemma \ref{diagpldasg}, we see that $$\gamma_{k+1} = \lambda_{k+1}- (k-1) \theta\nu_0 > \lambda_{k+1} - \lambda_{k}.$$ This together with the above expressions yield that
\begin{align*}
M_k^k & = 1, \qquad M^{k+1}_k = \frac{\lambda_{k+1}- (k-1) \theta\nu_0}{\lambda_{k+1} - \lambda_{k}} \leq \frac{\lambda_{k+1}}{\lambda_{k+1} - \lambda_{k}} = 1 + \frac{\lambda_{k}}{\lambda_{k+1} - \lambda_k},
\end{align*}
and for $l\geq 2$,
\begin{align*}
M^{k+l}_k & \leq M^{k+l-1}_k \vee \frac{M^{k+l-1}_k}{\lambda_{k+l} - \lambda_k} \left ( \gamma_{k+l} + (l-1) \theta\nu_0 \right ) \\
& = M^{k+l-1}_k \vee M^{k+l-1}_k \times \frac{\lambda_{k+l}- (k-1) \theta\nu_0}{\lambda_{k+l} - \lambda_k} \\
& = M^{k+l-1}_k \times \frac{\lambda_{k+l}- (k-1) \theta\nu_0}{\lambda_{k+l} - \lambda_k} \\
& \leq M^{k+l-1}_k \times \frac{\lambda_{k+l}}{\lambda_{k+l} - \lambda_k} \\
& \leq M^{k+l-1}_k \times \left ( 1 + \frac{\lambda_{k}}{\lambda_{k+l} - \lambda_k} \right ).
\end{align*}
As a consequence, we have
\begin{eqnarray}
\sup_{j \geq 1} u_{j,k} \leq \prod_{l=1}^{+\infty} \left ( 1 + \frac{\lambda_{k}}{\lambda_{k+l} - \lambda_k} \right ) =: M^{\infty}_k. \label{majoparprodinf}
\end{eqnarray}
Since $\lambda_{k+l} \sim l^2$ as $l\to\infty$, it is easy to see that the infinite product $M^{\infty}_k$ is finite. For $k=1$ we have $\lambda_1=0$, so $M^{\infty}_1=1$ and the asserted bound is immediate; we thus assume $k\geq 2$ in what follows. Then,
\[ M^{\infty}_k = \exp \left [ \sum_{l=1}^{+\infty} \log \left ( 1 + \frac{\lambda_{k}}{\lambda_{k+l} - \lambda_k} \right ) \right ] \leq \exp \left [ \sum_{l=1}^{+\infty} \frac{\lambda_{k}}{\lambda_{k+l} - \lambda_k} \right ] \leq \exp \left [ \frac{\lambda_{k} \log(2ek)}{2(k-1)} \right ], \]
where we have used Lemma \ref{sommeinverselambda}, stated just after the proof, for the last inequality. Since, by the definition of $\lambda_k$ in Lemma \ref{diagpldasg}, we have $\lambda_{k} = (k-1)(k+\theta)$, we obtain the asserted result.
\end{proof}
\begin{lemma} \label{sommeinverselambda} For all integers $k\geq 2$,
\[ \ \sum_{l=1}^{+\infty} \frac{1}{\lambda_{k+l} - \lambda_k} \leq \frac{\log(2ek)}{2(k-1)}. \]
\end{lemma}
\begin{proof}
Using the definition of $\lambda_k$ in Lemma \ref{diagpldasg} we have
\begin{align*}
\sum_{l=1}^{+\infty} \frac{1}{\lambda_{k+l} - \lambda_k} & \leq \sum_{l=1}^{+\infty} \frac{1}{(k+l)(k+l-1) - k(k-1)} \\
& \leq \frac{1}{(k+1)k - k(k-1)} + \int_1^{+\infty} \frac{1}{(k+x)(k+x-1) - k(k-1)} {\rm{d}} x \\
& = \frac{1}{2k} + \int_1^{+\infty} \frac{1}{u^2 + (2k-1)u} {\rm{d}} u = \frac{1}{2k} + \lim_{a \rightarrow +\infty} \int_1^{a} \frac{1}{u(u + 2k-1)} {\rm{d}} u \\
& = \frac{1}{2k} + \lim_{a \rightarrow +\infty} \frac{1}{2k-1} \left ( \int_1^{a} \frac{1}{u} du - \int_1^{a} \frac{1}{u + 2k-1} {\rm{d}} u \right ) \\
& = \frac{1}{2k} + \lim_{a \rightarrow +\infty} \frac{1}{2k-1} \left ( \log(a) - \log(a + 2k-1) + \log(2k) \right ) \\
& = \frac{1}{2k} + \frac{\log(2k)}{2k-1} \leq \frac{1+\log(2k)}{2(k-1)} = \frac{\log(2ek)}{2(k-1)}.
\end{align*}
\end{proof}
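\smallskip
The bounds in Lemmas \ref{majoalphaki} and \ref{sommeinverselambda} can be checked numerically for small values of $k$. The Python sketch below does so, using $\lambda_j=(j-1)(j+\theta)$ and $\gamma_{j}=j(j-1)+(j-1)\theta\nu_1+\theta\nu_0$, which are the expressions appearing in the computations above, together with arbitrary illustrative values of $\theta$ and $\nu_0$. It truncates both the supremum and the series, so it is only a sanity check and not a proof.
\begin{verbatim}
# Numerical sanity check (a sketch, not part of the proofs) of the bounds in
# Lemmas majoalphaki and sommeinverselambda; theta, nu0 are illustrative values.
import math

theta, nu0 = 1.3, 0.4
nu1 = 1.0 - nu0
lam = lambda j: (j - 1) * (j + theta)                          # lambda_j
gam = lambda j: j * (j - 1) + (j - 1) * theta * nu1 + theta * nu0   # gamma_j

def sup_u(k, jmax=400):
    """Compute u_{j,k} for j <= jmax via the recursion in the proof; return the max."""
    u = {k: 1.0, k + 1: gam(k + 1) / (lam(k + 1) - lam(k))}
    for l in range(2, jmax - k + 1):
        s = sum(u[k + j] for j in range(0, l - 1))
        u[k + l] = (gam(k + l) * u[k + l - 1] + theta * nu0 * s) / (lam(k + l) - lam(k))
    return max(u.values())

for k in range(2, 8):
    bound_u = (2 * math.e * k) ** ((k + theta) / 2)            # Lemma majoalphaki
    tail = sum(1.0 / (lam(k + l) - lam(k)) for l in range(1, 100000))
    bound_tail = math.log(2 * math.e * k) / (2 * (k - 1))      # Lemma sommeinverselambda
    print(k, sup_u(k) <= bound_u, tail <= bound_tail)
\end{verbatim}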
\begin{lemma} \label{majosommedesaki} For all $k\in\Nb$, we have
\[ \ \sup_{y \in [0,1]} \left | P_k(y) \right | \leq 4^{k}. \]
\end{lemma}
\begin{proof}
By definition of the polynomials $P_k$ in \eqref{newbasis9pldasg}, we have for $k\geq 1$
\begin{eqnarray}
\ \sup_{y \in [0,1]} \left | P_k(y) \right | \leq \sum_{i=1}^{k} | v_{k,i} |. \label{sup<sumcoef}
\end{eqnarray}
Let us fix $k \geq 1$ and define $S^k_j \coloneqq \sum_{i = j}^{k} | v_{k,i} |$. Note that $S^k_j = 0$ for $j > k$ and that $S^k_k = 1$ by the definition of the matrix $(v_{i,j})_{i,j \in \Nb}$ in Lemma \ref{diagpldasg}. In particular, the asserted result is true for $k=1$; we thus assume $k > 1$ from now on. Using \eqref{recrelani4killed}, we see that for any $j \in [k-1]$,
\begin{align*}
S^k_j = S^k_{j+1} + | v_{k,j} | & = S^k_{j+1} + \left | \frac{-1}{(\lambda_k - \lambda_j)} \left [ \left ( \sum_{l=j+2}^k v_{k,l} \right ) \theta\nu_0 + v_{k,j+1} \gamma_{j+1} \right ] \right | \\
& \leq S^k_{j+1} + \frac{1}{(\lambda_k - \lambda_j)} \left [ S^k_{j+2} \theta\nu_0 + (S^k_{j+1}-S^k_{j+2}) \gamma_{j+1} \right ] \\
& \leq \left ( 1 + \frac{\gamma_{j+1}}{\lambda_k - \lambda_j} \right ) S^k_{j+1} + \frac{\theta\nu_0 - \gamma_{j+1}}{\lambda_k - \lambda_j} S^k_{j+2}.
\end{align*}
Note that $\frac{\theta\nu_0 - \gamma_{j+1}}{\lambda_k - \lambda_j}\leq 0$, because of the definition of the coefficients $\gamma_i$ in Lemma \ref{diagpldasg}. Thus, for $j\in[k-1]$
\begin{eqnarray}
\ S^k_j \leq \left ( 1 + \frac{\gamma_{j+1}}{\lambda_k - \lambda_j} \right ) S^k_{j+1} \label{relrecskj}
\end{eqnarray}
By the definitions of $\gamma_{j+1}$, $\lambda_k$, $\lambda_j$ in Lemma \ref{diagpldasg} we have
\begin{align*}
\frac{\gamma_{j+1}}{\lambda_k - \lambda_j} & = \frac{(j+1)j + j \nu_1 \theta + \theta\nu_0}{k(k-1) - j(j-1) + (k-j) \nu_1 \theta + (k-j) \theta\nu_0} \\
& \leq \frac{(j+1)j}{k(k-1) - j(j-1)} + \frac{j \nu_1 \theta}{(k-j) \nu_1 \theta} + \frac{\theta\nu_0}{(k-j) \theta\nu_0} \\
& \leq \frac{(j+1)j}{j(k-1) - j(j-1)} + \frac{j \nu_1 \theta}{(k-j) \nu_1 \theta} + \frac{\theta\nu_0}{(k-j) \theta\nu_0} \leq 2 \frac{j+1}{k-j}.
\end{align*}
In particular,
\[ 1 + \frac{\gamma_{j+1}}{\lambda_k - \lambda_j} \leq \frac{k + j + 2}{k-j}, \]
Plugging this into \eqref{relrecskj} yields, for all $j\in[k-1]$
\[ \ S^k_j \leq \frac{k + j + 2}{k-j} S^k_{j+1}. \]
Then, applying the above inequality recursively and combining with $S^k_k = 1$ we get
\[ \sum_{i=1}^{k} | v_{k,i} | = S^k_1 \leq \prod_{j=1}^{k-1} \frac{k + j + 2}{k-j} =\binom{2k+1}{k-1}= \frac{\binom{2k+1}{k-1} + \binom{2k+1}{k+2}}{2} \leq 2^{2k+1}/2 = 4^k. \]
Combining with \eqref{sup<sumcoef} we obtain the asserted result.
\end{proof}
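\smallskip
The final chain of identities and inequalities in the proof, $\prod_{j=1}^{k-1}\frac{k+j+2}{k-j}=\binom{2k+1}{k-1}\leq 4^k$, can also be verified numerically for small $k$, for instance as in the following short Python sketch.
\begin{verbatim}
# Quick sanity check (a sketch) of the last step of the proof:
#   prod_{j=1}^{k-1} (k+j+2)/(k-j) = binom(2k+1, k-1) <= 4^k.
from math import comb, prod

for k in range(2, 15):
    p = prod((k + j + 2) / (k - j) for j in range(1, k))
    assert abs(p - comb(2 * k + 1, k - 1)) < 1e-6 * comb(2 * k + 1, k - 1)
    assert comb(2 * k + 1, k - 1) <= 4 ** k
print("identity and bound hold for k = 2,...,14")
\end{verbatim}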
\section{An application: exceptional environment in the recent past}\label{S7}
Let us now present an application of the methods developed in the present paper that illustrates the interest of studying both quenched and annealed results. More precisely, consider a population living in a stationary random environment, which has recently been subject to a large perturbation. We are interested in the influence of this recent exceptional behavior of the environment, in the absence of knowledge of the environment in the far past.
\smallskip
The process starts at $-\infty$ and we are currently at instant $0$. Recall that the environment is a realization $\zeta$ of a Poisson point process $(t_i, p_i)_{i \in I}$ on $(-\infty,0] \times (0,1)$ with intensity measure ${\rm{d}} t \times \mu$, where ${\rm{d}} t$ stands for the Lebesgue measure and $\mu$ is a measure on $(0,1)$ satisfying $\int x \mu({\rm{d}} x) < \infty$. In this section, we assume that the intensity measure $\mu$ is finite, and hence, the environment is almost surely simple. We moreover assume that the parameter of selection $\sigma$ is zero. We assume that we know $\zeta_T \coloneqq (t_i, p_i)_{i \in I, t_i \in [-T,0]}$, the realization of the environment on the time interval $[-T,0]$, but we do not know the environment $\tilde Z$ on the interval $(-\infty, -T)$; we only know that it is a realization of a Poisson point process on $(-\infty,-T) \times (0,1)$ with intensity measure ${\rm{d}} t \times \mu$. Now, under the measure $\Pb^{\zeta_T}$ the environment $\zeta_T$ on $[-T, 0]$ is fixed and we integrate with respect to the random environment $\tilde Z$ on $(-\infty, -T)$. In addition, $\Pb$ simply denotes the annealed measure, i.e. when we integrate with respect to the environment $\zeta$ on $(-\infty, 0]$. From Theorem \ref{annealdmoments}, we have
\begin{eqnarray}
\mathbb{E}[(1-X(0))^n] = \mathbb{E} \left [ (1-X(\infty))^n \right ] = w_n. \label{formulecompacteex}
\end{eqnarray}
Our aim is to obtain a formula for $\mathbb{E}^{\zeta_T}[(1-X(\zeta_T, 0))^n]$. Let $\tilde \zeta$ denote a particular realization of the random environment $\tilde Z$ on $(-\infty, -T)$. Let $\mathbb{E}^{\tilde \zeta, \zeta_T}[(1-X(\tilde \zeta, \zeta_T, 0))^n]$ denote the quenched moment of order $n$ of $1-X(\tilde \zeta, \zeta_T, 0)$, given the global environment $(\tilde \zeta, \zeta_T)$ that is obtained by gluing together $\tilde \zeta$ and $\zeta_T$. Clearly, we have
\begin{eqnarray}
\mathbb{E}^{\zeta_T}[(1-X(\zeta_T, 0))^n] = \int \mathbb{E}^{\tilde \zeta, \zeta_T}[(1-X(\tilde \zeta, \zeta_T, 0))^n] \times \mathbb{P} (\tilde Z \in {\rm{d}} \tilde \zeta),\label{exanmoyquen}
\end{eqnarray}
and
\begin{eqnarray}
\mathbb{E}^{\tilde \zeta, \zeta_T}[(1-X(\tilde \zeta, \zeta_T, 0))^n] = \int \mathbb{E}^{\zeta_T} \left [ (1-X(\zeta_T, 0))^n | X(\zeta_T, -T) = x \right ] \times \mathbb{P}^{\tilde \zeta} \left ( X(\tilde \zeta, -T) \in {\rm{d}} x \right ). \label{exquen1}
\end{eqnarray}
Let $-T < T_N < \cdots < T_1 < 0$ denote the jumping times of $\zeta_T$ on $[-T,0]$. Let the matrices $(A_m^{\dagger}(\zeta_T))_{1 \leq m \leq N}$ and the coefficients $C^\dagger_{n,k}(\zeta_T,T)$ be defined as in Theorem \ref{x0condpastenvII}, from the environment $\zeta_T$. By \eqref{exprmomtpst} in Theorem \ref{x0condpastenvII} we have
\[ \mathbb{E}^{\zeta_T} \left [ (1-X(\zeta_T,0))^n|X(\zeta_T,-T)=x \right ] = \sum_{k = 0}^{n 2^N} C^\dagger_{n,k}(\zeta_T,T) P_k^\dagger(1-x). \]
Plugging this into \eqref{exquen1} and using \eqref{newbasis9}, we get
\begin{align*}
\mathbb{E}^{\tilde \zeta, \zeta_T}[(1-X(\tilde \zeta, \zeta_T, 0))^n] & = \sum_{k = 0}^{n 2^N} C^\dagger_{n,k}(\zeta_T,T) \mathbb{E}^{\tilde \zeta} \left [ P_k^\dagger \left (1- X(\tilde \zeta, -T) \right ) \right ] \\
& = \sum_{j = 0}^{n 2^N} \left ( \sum_{k = j}^{n 2^N} C^\dagger_{n,k}(\zeta_T,T) v_{k,j}^\dagger \right ) \mathbb{E}^{\tilde \zeta} \left [ \left (1- X(\tilde \zeta, -T) \right )^j \right ].
\end{align*}
Integrating both sides with respect to $\tilde \zeta$ and using \eqref{exanmoyquen} we get
\[ \mathbb{E}^{\zeta_T}[(1-X(\zeta_T, 0))^n] = \sum_{j = 0}^{n 2^N} \left ( \sum_{k = j}^{n 2^N} C^\dagger_{n,k}(\zeta_T,T) v_{k,j}^\dagger \right ) \mathbb{E} \left [ (1-X(-T))^j \right ]. \]
Here, $\mathbb{E} [ (1-X(-T))^j ]$ is the annealed moment of order $j$ of the proportion of individuals of type $1$ at time $-T$, when the environment is integrated on $(-\infty, -T)$. By translation invariance we have $\mathbb{E} [ (1-X(-T))^j ] = \mathbb{E} [ (1-X(0))^j ] = w_j$, where the last equality comes from \eqref{formulecompacteex}. Hence, the desired formula for the $n$-th moment of $1-X(\zeta_T,0)$ is given by
\begin{eqnarray}
\mathbb{E}^{\zeta_T}[(1-X(\zeta_T, 0))^n] = \sum_{j = 0}^{n 2^N} \left ( \sum_{k = j}^{n 2^N} C^\dagger_{n,k}(\zeta_T,T) v_{k,j}^\dagger \right ) w_j. \label{formulecompacteexw}
\end{eqnarray}
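\smallskip
Since $C^\dagger(\zeta_T,T)=\Ps_T^\dagger(\zeta_T)U_\dagger$ and $U_\dagger V_\dagger=Id$, we have $\sum_{k} C^\dagger_{n,k}(\zeta_T,T)\, v_{k,j}^\dagger=p_{n,j}^\dagger(\zeta_T,T)$, so \eqref{formulecompacteexw} can also be evaluated as $\sum_{j\geq 0} p_{n,j}^\dagger(\zeta_T,T)\, w_j$ (with the convention $w_\dagger=0$). The following Python sketch illustrates one way to carry out this computation on a truncated state space, using the factorization \eqref{ptdec} and matrix exponentials. It is only an illustration under explicit assumptions: the truncated generator $Q_\dagger^0$ of the killed ASG in the null environment and the annealed moments $(w_j)$ must be supplied by the user, and the toy generator and placeholder moments used in the demo are stand-ins, not the actual rates or moments of the model.
\begin{verbatim}
# Illustrative sketch: evaluate E[(1-X(zeta_T,0))^n] = sum_j p_{n,j}(zeta_T,T) w_j
# on a truncated state space {dagger, 0, 1, ..., K}.  The generator Q and the
# annealed moments w are inputs; the demo below uses STAND-IN values only.
import numpy as np
from math import comb
from scipy.linalg import expm

def binomial_jump_matrix(p, K):
    """B[i+1, j+1] = P(i + Bin(i, p) = j); index 0 is the cemetery state dagger."""
    B = np.zeros((K + 2, K + 2))
    B[0, 0] = 1.0                          # dagger is absorbing
    for i in range(0, K + 1):
        for j in range(i, min(2 * i, K) + 1):
            B[i + 1, j + 1] = comb(i, j - i) * p ** (j - i) * (1 - p) ** (2 * i - j)
        B[i + 1] /= B[i + 1].sum()         # renormalise mass clipped by the truncation
    return B

def quenched_moment(n, jumps, T, Q, w):
    """jumps: list of (t_i, p_i) with -T < t_N < ... < t_1 < 0, sorted decreasingly;
    Q: truncated generator of the killed ASG in the null environment;
    w: annealed moments w_0, ..., w_K."""
    K = Q.shape[0] - 2
    times = [0.0] + [t for t, _ in jumps] + [-T]
    P = np.eye(K + 2)
    for (t_prev, t_next), (_, p) in zip(zip(times, times[1:]), jumps + [(None, None)]):
        P = P @ expm((t_prev - t_next) * Q)      # evolution between jumps, as in (ptdec)
        if p is not None:
            P = P @ binomial_jump_matrix(p, K)   # binomial reshuffling at a jump
    w_full = np.concatenate(([0.0], np.asarray(w, dtype=float)))  # w_dagger = 0
    return float(P[n + 1] @ w_full)              # row of the starting state n

# Demo with STAND-IN ingredients (a toy pure-death generator and placeholder
# moments w_j = 2^{-j}); these are NOT the actual ASG rates or annealed moments.
K = 30
Q = np.zeros((K + 2, K + 2))
for i in range(1, K + 1):
    Q[i + 1, i] = i            # toy rate i -> i-1
    Q[i + 1, 0] = 0.2          # toy killing rate i -> dagger
    Q[i + 1, i + 1] = -(i + 0.2)
w = 0.5 ** np.arange(K + 1)
print(quenched_moment(2, [(-0.7, 0.3), (-1.5, 0.6)], T=3.0, Q=Q, w=w))
\end{verbatim}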
\section{Introduction}
Contextualized representations trained over large-scale text data have given remarkable improvements to a wide range of NLP tasks, including natural language inference \cite{Bowman2015ALA}, question answering \cite{rajpurkar-etal-2018-know} and reading comprehension \cite{Lai2017RACELR}. Since these models achieve new state-of-the-art results that approach or surpass human performance on several benchmark datasets, it is an interesting question what types of knowledge are learned in pre-trained contextualized representations, which would help us better understand how they benefit the NLP problems above. There has been work investigating the nature of syntactic \cite{liu-etal-2019-linguistic}, semantic \cite{liu-etal-2019-linguistic} and word sense \cite{kim-etal-2019-probing} knowledge contained in such contextualized representations, in particular BERT \cite{devlin-etal-2019-bert}, showing that such knowledge can be effectively learned via language model (LM) pre-training over large-scale data.
Commonsense knowledge spans ``a huge portion of human experience, encompassing
knowledge about the spatial, physical, social, temporal, and
psychological aspects of typical everyday life. '' \cite{Liu2004ConceptNetA}. Intuitively, such knowledge is at least as useful as semantic and syntactic knowledge in natural language inference, reading comprehension and coreference resolution. For example, the word ``it'' in the sentence ``the dog cannot cross the street because it is too X'' can refer to three different entities when the word ``X'' is ``timid'', ``wide'' and ``dark'', respectively, and resolving such ambiguity can require that a system has relevant commonsense knowledge beyond the sentence level. However, relatively little work has been conducted on systematically evaluating the nature of commonsense knowledge learned in contextualized representations.
\begin{table*}[!htbp]
\centering
\begin{tabular}{cl c c}
\toprule
& \hfill Token-level& \\
\hline
CA &They broadcast an announcement, \textbf{but} a subway came into the station and I couldn't hear it. & \cmark \\
& They broadcast an announcement, \textbf{before} a subway came into the station and I couldn't hear it . & \xmark \\
\hline
WSC &The trophy doesn't fit into the brown suitcase because the \textbf{trophy} is too large. & \cmark \\
& The trophy doesn't fit into the brown suitcase because the \textbf{suitcase} is too large. & \xmark \\
\hline
SM & money can be used for buying \textbf{cars} & \cmark \\
& money can be used for buying \textbf{stars} & \xmark \\
\toprule
& \hfill Sentence-level &\\
\hline
SMR & $\neg$`` he put an elephant into the fridge'' (because) $\leftarrow$ an elephant is much bigger than a fridge . & \cmark \\
& $\neg$`` he put an elephant into the fridge " (because) $\leftarrow$ elephants are usually gray... & \xmark \\
& $\neg$`` he put an elephant into the fridge " (because) $\leftarrow$ an elephant cannot eat a fridge . & \xmark \\
\hline
SWAG & Someone unlocks the door and they go in. $\rightarrow$ Someone leads the way in.& \cmark \\
& Someone unlocks the door and they go in. $\rightarrow$ Someone opens the door and walks out. &\xmark
\\
& Someone unlocks the door and they go in. $\rightarrow$ Someone walks out of the driveway. &\xmark\\
& Someone unlocks the door and they go in. $\rightarrow$ Someone walks next to someone and sits on a pew. &\xmark\\
\hline
HellaSwag & A carved pumpkin with a light in it glows on a counter. Supplies for carving are then shown. & \\
& $\rightarrow$ A woman cuts the top off the pumpkin, emptying the seeds. & \cmark \\
& $\rightarrow$ she cuts down all the pieces and dumps them in a trash bin in the end. &\xmark\\
& $\rightarrow$ she then carves the traced lines to cut out the design. &\xmark\\
& $\rightarrow$ she tapes the top shut as the continue carving the pumpkin. &\xmark\\
\hline
ARCT & People can choose not to use Google $\wedge$ Other search engines don’t redirect to Google \\ & $\rightarrow$ Google is not a harmful monopoly & \cmark \\
& People can choose not to use Google $\wedge$ All other search engines redirect to Google \\ & $\rightarrow$ Google is not a harmful monopoly & \xmark \\
\toprule
\end{tabular}
\caption{Example of reframed test instances corresponding to each of our test task. The key word is \textbf{bolded} in token-level tasks. $\wedge$, $\neg$, $\leftarrow$ and $\rightarrow$ are used for showing the logic flows and replaced by natural language in actual test data.}\smallskip
\label{table1}
\end{table*}
We fill this gap by evaluating five state-of-the-art contextualized embedding models on seven commonsense benchmarks. The models include off-the-shelf embeddings\footnote{https://github.com/huggingface/transformers} from GPT \cite{rad-2018}, GPT2 \cite{radford2019language}, BERT \cite{devlin-etal-2019-bert}, XLNet \cite{zhilin-19} and RoBERTa \cite{liu2019roberta}, and the benchmarks include Conjunction Acceptability, Sense Making \cite{wang-etal-2019-make}, Winograd Schema Challenge \cite{Levesque:2012:WSC:3031843.3031909}, SWAG \cite{zellers-etal-2018-swag}, HellaSwag \cite{zellers-etal-2019-hellaswag}, Sense Making with Reasoning \cite{wang-etal-2019-make}, and Argument Reasoning Comprehension \cite{habernal-etal-2018-argument}. We evaluate commonsense knowledge contained in the above models by unifying the form of all the datasets and comparing LM perplexities on positive and negative samples (i.e., sentences that make sense and those that do not make sense, respectively). Commonsense contained in our data covers a wide range of subjects, from physical world knowledge to social conventions, from scientific domains to daily life scenes. We further categorize them by the difficulty level, namely the number of inference steps necessary in making sense.
We reframe the datasets in order to conduct both word- and sentence-level testing. For word-level testing, negative samples are drawn by replacing words in positive samples. We focus on nouns, verbs, adjectives, adverbs, pronouns and conjunctions, which reflect different aspects of commonsense. For example, while verbs such as ``buy, throw, sell ...'' are relatively more associated with event knowledge, conjunctions such as ``because, but, so ...'' are more associated with logical reasoning. For sentence-level testing, negative samples are drawn by replacing a full subsentence (such as a clause) with irrelevant or conflicting content. Sentence-level tests are more concerned with commonsense inference.
From the results we have four salient observations. First, the pre-trained models give consistently better performances than random baselines, which demonstrates that language model pre-training is useful for learning commonsense knowledge. Second, models based on bi-directional contexts such as BERT, XLNet and RoBERTa are stronger in learning commonsense knowledge compared to those based on uni-directional contexts, such as GPT and GPT2. Third, more commonsense knowledge can be learned from larger training sets, which conforms well with intuition. Fourth, the models have a certain degree of commonsense reasoning ability. However, as the number of necessary inference steps increases, the model performances drop, which shows that commonsense is still a big challenge that is not completely solved by pre-trained contextualized language models (LMs).
Finally, we further test the robustness of the five models by making dual test samples. Here a dual test sample is built by adding, deleting or replacing words in a test sample, or by swapping two words in the sample, thereby resulting in a closely related test case. In theory, a model equipped with relevant commonsense should give consistent predictions on a pair of dual test cases. However, we find that none of the models is able to reach such consistency. Instead, the models are confused by the modification, tending to give the same predictions over a pair of dual samples even though they may have different gold labels. This further reveals that the commonsense contained in the pre-trained models may remain at a surface level, without deep semantic comprehension. We publicly release our datasets, named commonsense ability tests (CATs), and the test script at GitHub. \footnote{https://github.com/XuhuiZhou/CATS}
\section{Tasks for Evaluating Commonsense}
Commonsense ability can be broadly divided into two categories. First, a model with commonsense ability should have basic knowledge about the world, for example, \textit{water always goes down}. Second, it should have the ability to reason over commonsense knowledge, such as \textit{water always goes down because there is gravity on the earth} and \textit{if you are injured, you should go to the hospital}. To comprehensively test different models' commonsense ability, we synthesize six challenging tasks by taking positive and negative samples from existing benchmarks, and further introduce a new task called Conjunction Acceptability (CA).
We reframe all the tasks into sentence-scoring tasks by substitution or concatenation. For example, we create positive and negative samples by replacing a pronoun in the sentence of a WSC question with the candidates, obtaining a test instance as shown in Table \ref{example}. A model is asked to score the sentences, and we pick the sentence with the highest score as its prediction for a test instance. Below we introduce the data sources and reframed tasks in detail (the correct answer is \textbf{bolded}).
\begin{table}[t]
\centering
\begin{tcolorbox}[fontupper=\small, fontlower=\small]
{\bf Original:}\\
\textit{Paul tried to call George on the phone, but he wasn't successful.} \\
Who is he? \\
Candidate: A. Paul (correct) B. George
\tcblower
{\bf Reframed:} \\
\textit{A. Paul tried to call George on the phone, but Paul wasn't successful.} (Positive sample)\\
\textit{B. Paul tried to call George on the phone, but George wasn't successful.} (Negative sample)
\end{tcolorbox}
\caption{Example of reframing a WSC question; Note that there can be additional negative samples.}
\label{example}
\end{table}
\subsection{Sense Making (SM)}
Introduced by \citeauthor{wang-etal-2019-make} (2019), this task tests whether a model can differentiate sense-making from non-sense-making statements. Given a pair of statements (i.e., a test instance), it requires the model to choose the more sensible statement. One example is: \textit{\textbf{I work 8 hours a day}} / \textit{I work 25 hours a day}. This task conforms to our evaluation schema without change. More examples are shown in the SM section of Table \ref{table1}. The statements typically differ only in one key word, which covers nouns, verbs, adjectives, and adverbs.
\subsection{Winograd Schema Challenge (WSC)}
The Winograd Schema Challenge (WSC) dataset \cite{Levesque:2012:WSC:3031843.3031909} consists of 273 instances of the pronoun resolution problem. Each instance contains a sentence with a pronoun referring to one of the nouns; the original question is to pick the correct noun. For our task, we transform the test as shown in Table \ref{example}. More examples are shown in the WSC section of Table \ref{table1}. WSC is recognized as one of the most difficult commonsense datasets.
\subsection{Conjunction Acceptability (CA)}
As stated by \citeauthor{lobue-yates-2011-types} (2011), logic-based commonsense knowledge is an important part of world knowledge in addition to content-based knowledge. We aim to probe a model's ability to understand logical relations in language by extracting 189 positive samples from the WSC dataset and manually replacing the conjunction with another conjunction to obtain a negative sample. We pair the positive and negative samples to obtain a test instance. For example, \textit{The lawyer asked the witness a question, and the witness was reluctant to answer it} / \textit{\textbf{The lawyer asked the witness a question, but the witness was reluctant to answer it}}. More examples are shown in the CA section of Table \ref{table1}.
This task uses ``because'', ``before'', ``when'', ``but'' and ``and'' to correspond to the Cause and Effect, Preconditions, Simultaneous Conditions, Contradiction, and Addition logical relations, respectively. It is complementary to the other token-level tasks, which focus more on content-based knowledge.
\subsection{SWAG}
SWAG \cite{zellers-etal-2018-swag} is a dataset of multiple-choice questions about grounded situations. It tests a model's understanding of the relationship between two physical scenes. With the help of adversarial filtering (AF), \citeauthor{zellers-etal-2018-swag} created a sufficiently large number of questions automatically. For example, given \textit{On stage, a woman takes a seat at the piano. She}, the question is to choose among the following candidates: \textit{A. sits on a bench as her sister plays with the doll B. smiles with someone as the music plays C. is in the crowd, watching the dancers D. \textbf{nervously sets her fingers on the keys}}. We obtain a positive or negative sample by concatenating the context and a candidate together (e.g., \textit{On stage, a woman takes a seat at the piano. She nervously sets her fingers on the keys}). There are one positive sample and three negative samples in a SWAG test instance. More examples are shown in the SWAG section of Table \ref{table1}. By forcing the model to predict the next action, this task requires inductive and temporal reasoning.
\subsection{HellaSwag}
HellaSwag \cite{zellers-etal-2019-hellaswag} is an augmented version of SWAG with the same data format, more inference steps and higher data quality. While HellaSwag also includes data from WikiHow, we choose only the instances coming from ActivityNet to make the results comparable to the original SWAG dataset.
\subsection{Sense Making with Reasoning (SMR)}
Sense Making with Reasoning focuses on identifying the reason why a statement is against commonsense \cite{wang-etal-2019-make}. A model needs to understand that a specific statement (e.g., \textit{can is usually made of gold}) is against commonsense and then choose the reason behind this from three candidates (e.g., \textit{gold is too bright to make cans}, \textbf{\textit{gold is too soft to make cans}} and \textit{gold is too expensive to make cans}). We make a positive or negative sample by concatenating the statement and a candidate reason together. For each test instance in SMR, there is one positive sample and two negative samples. More examples are shown in the SMR section of Table \ref{table1}. This task is intuitively difficult since it requires a model to have deeper knowledge and higher-level inference, which belongs to abductive reasoning.
\subsection{Argument Reasoning Comprehension Task (ARCT)}
Similar to SMR, \citeauthor{habernal-etal-2018-argument} (2018) propose the ARCT dataset to test a model's abductive reasoning ability.
Its domain lies in social topics such as search engines and LGBT rights, which differ from daily-routine scenarios. For example, given a reason $R$: \textit{I find the idea that it is a sin to be born or live a life at all to be preposterous} and a claim $C$: \textit{Christians have created a harmful atmosphere for gays}, the task is to pick the correct warrant $W$ from two candidates: \textit{A. being gay isn't considered a sin B. \textbf{being gay is considered a sin}}, where $R \wedge W \rightarrow C$. We make a positive or negative sample by concatenating the reason, candidate warrant and claim together (e.g., \textit{I find the idea that it is a sin to be born or live a life at all to be preposterous and since being gay is considered a sin, Christians have created a harmful atmosphere for gays}). A test instance in ARCT contains a pair of positive and negative samples. More examples are shown in the ARCT section of Table \ref{table1}. We further break this task into two variants: ARCT1 denotes the original dataset, and ARCT2 denotes an augmented dataset obtained by adding negation to the original instances to alleviate the statistical cues in the dataset \cite{niven-kao-2019-probing}.
We integrate the above test sets into the commonsense ability tests (CATs) benchmark, which we release for future research.
\section{Pre-trained Models}
We take five contextualized representation model families that give state-of-the-art performances on NLP benchmarks such as GLUE \cite{wang-etal-2018-glue} and SQuAD \cite{rajpurkar-etal-2018-know}. Off-the-shelf models are used. Below we give the detailed settings.
\textbf{GPT} \cite{rad-2018} is a uni-directional transformer LM trained on 800M tokens of BookCorpus \cite{Zhu2015AligningBA}.
Given a text sequence $\mathbf{x} = [x_1, ..., x_T]$, GPT works in a way similar to conventional auto-regressive (AR) LM:
\[\max_{\theta} \log p_{\theta}(\mathbf{x}) = \sum_{t=1}^T\log p_{\theta}(x_t|\mathbf{x}_{<t}),\]
where $\mathbf{x}_{<t} = [x_1, ..., x_{t-1}]$. The model has dimension of hidden states $H = 768$, attention head numbers $A=12$, number of layers $L=12$ and total parameter size $P=110M$.
\textbf{GPT2} \cite{radford2019language} works similarly to GPT with a few modifications of the hyperparameters. In particular, GPT2 optimizes the layer normalization, expands the vocabulary size to 50,257, increases the context size from 512 to 1024 tokens, and optimizes with a larger batch size of 512. In addition, GPT2 is pre-trained on WebText, which was created by scraping web pages.
The dataset roughly contains 8 million documents (40 GB). We study GPT2-base and GPT2-medium, with model size $H=768, A=12, L=12, P=117M$ and $H=1024, A=16, L=24, P=345M$, respectively, where the definitions of H, L and A are the same as for GPT.
\textbf{BERT} \cite{devlin-etal-2019-bert} jointly trains on a masked language modeling task and a next sentence prediction task (NSP). The model is trained on the BookCorpus and English Wikipedia, a total of approximately 3300M tokens. BERT is designed with the following objective:
\[\max_{\theta} \log p_{\theta}(\bar{\mathbf{x}}|\tilde{\mathbf{x}}) \approx \sum_{t=1}^T m_t\log p_{\theta}(x_t|\tilde{\mathbf{x}}) ,\]
where $\tilde{\mathbf{x}}$ is a corrupted version of the text sequence $\mathbf{x}$, and $\bar{\mathbf{x}}$ denotes the masked tokens. $m_t=1$ if token $x_t$ belongs to $\bar{\mathbf{x}}$.
Here we consider BERT-base and BERT-large, with $H=768, A=12, L=12, P=117M$ and $H=1024, A=16, L=24, P=340M$, respectively, where the definitions of H, L and A are the same as for GPT.
\textbf{XLNet} \cite{zhilin-19} is trained with a permutation-based language modeling objective to capture bidirectional contexts while retaining the benefits of AR models. Specifically, they let $\mathcal{Z}_T$ be the set of all possible permutations of the length-$T$ sequence $\mathbf{x} = [x_1, ..., x_T]$:
\[\max_{\theta} \mathbf{E}_{\mathbf{z} \sim \mathcal{Z}_T} \left[ \sum_{t=1}^T \log p_{\theta}(x_{z_t}|\mathbf{x}_{\mathbf{z}_{<t}}) \right] ,\]
where $z_t$ and $\mathbf{z}_{<t}$ are the $t$-th element and the first $t-1$ elements of a permutation $\mathbf{z} \in \mathcal{Z}_T$, respectively. In this way, XLNet ensures that any specific token $x_t$ in $\mathbf{x}$ has seen all the tokens before or after it.
We consider XLNet-base and XLNet-large, whose model sizes are $H=768, A=12, L=12, P=117M$ and $H=1024, A=16, L=24, P=340M$, respectively, where the definitions of H, L and A are the same as for GPT. Note that XLNet-base is trained with the same data as BERT, while XLNet-large is trained with a larger dataset that consists of 32.98B subword pieces coming from Wiki, BookCorpus, Giga5, ClueWeb, and Common Crawl.
\textbf{RoBERTa} \cite{liu2019roberta} has the same architecture as BERT but is trained with dynamic masking, FULL-SENTENCES without the NSP loss, a larger batch size and a larger vocabulary size. Given the optimized design choices, one key difference between RoBERTa and the other models is its large training dataset, which consists of BookCorpus, CC-NEWS, OpenWebText, and STORIES. With a total of 160GB of text, RoBERTa has access to more potential knowledge than the other models.
\section{Experimental Design}
The CAT datasets are applicable to any model that has a method to score a sentence. They fit with the pre-trained models above, which are by nature language models. We derive the score of a sentence below with uni-directional-context LMs and bi-directional-context LMs, respectively.
\begin{table*}[t]
\centering
\begin{tabular}{c | c c c | c c c c c | c}
\toprule
& CA & WSC & SM & SMR & SWAG & HellaSwag & ARCT1 & ARCT2 & Average \\
\hline
RANDOM &0.500& 0.500& 0.500& 0.333& 0.250& 0.250& 0.500& 0.500 & 0.416\\
GPT &0.830& 0.558& 0.735& 0.354& 0.592& 0.263& 0.472& 0.528 & 0.542\\
GPT2-base &0.787 &0.512 &0.705 &0.355 &0.503 &0.300 &0.466 &0.509&0.517 \\
GPT2-medium &0.885 &0.568 &0.746 &0.385 &0.591 &0.338 &0.462 &0.527& 0.563\\
BERT-base &0.891 &0.523 &0.697 &0.419 &0.625 &0.373 &0.477 &0.503 & 0.563\\
BERT-large &0.934 &0.625 &0.694 &0.444 &0.696 &0.393 &0.468 &0.517 &0.596 \\
XLNet-base &0.809 &0.544 &0.662 &0.374 &0.494 &0.381 &0.516 &0.526 &0.543 \\
XLNet-large &0.891 &0.636 &0.583 &0.394 &0.662 &0.435 &0.563 &0.570 &0.591 \\
RoBERTa-base &0.901 &0.623 &0.750 &0.423 &0.712 &0.414 &0.501 &0.537 &0.565\\
RoBERTa-large &0.962 &0.694 &0.792 &0.512 &0.769 &0.500 &0.606 &0.599 & 0.679\\
\hline
HUMAN &0.993 &0.920 &0.991 &0.975 &0.880 &0.945 &0.909 &0.909 & 0.945\\
\toprule
\end{tabular}
\caption{Accuracy of each pre-trained contextualizer on each test set. The rightmost column shows the average accuracy of each model.}\smallskip
\label{table2}
\end{table*}
Formally, consider a sentence $S$ of $n$ words, $S = \{w_1,..., w_{k-1}, w_k, w_{k+1},...,w_n\}$. We define the score of a sentence as:
\[Score(S) = \frac{\sum_{k=1}^n \log(P_\theta(w_k|context_k))}{n} ,\]
where the denominator $n$ alleviates the influence of sentence length on the models' predictions, especially in sentence-level tasks. For a uni-directional model, $context_k = S_{<k} \equiv \{w_1, ..., w_{k-1}\}$. The numerator becomes $\sum_{k=1}^n \log(P_\theta(w_k|S_{<k}))$, which is factorized from $\log(P_\theta(w_1,..., w_{k-1}, w_k, w_{k+1},...,w_n))$. This is essentially an LM. For a bi-directional model, $context_k = S_{-k}$, which represents $S$ with the $k$-th word removed. In particular, in BERT the $k$-th word is removed by replacing it with the special token `[MASK]'. The numerator $\sum_{k=1}^n \log(P_\theta(w_k|S_{-k}))$ can also be factorized from $\log(P_\theta(w_1,..., w_{k-1}, w_k, w_{k+1},...,w_n))$ under the assumption that $w_k$ is independent of the successive words (i.e., $w_{k+1}, w_{k+2}, ..., w_n$), which is the bi-directional-context LM.
Intuitively, $P_\theta(w_k|context_k)$ can be interpreted as how probable a word $w_k$ is given $context_k$: $S_{<k}$ or $S_{-k}$. For example, let $S_{-k}=$ \textit{He put an [MASK] into the fridge}, $w_{k1} = elephant$ and $w_{k2} = turkey$. $P_\theta(w_{k2}|S_{-k})$ should have a relatively larger value, since filling in ``elephant'' results in an improper sentence, which is against commonsense.
As introduced earlier (Table \ref{table1}), all CATs tasks consist of instances with positive and negative sentences. After we score each sample in a test instance, the models predict the positive sample simply by taking the highest score in the instance.
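As an illustration, the following is a minimal Python sketch of this scoring scheme (not the exact released test script); the callable \texttt{token\_log\_prob} is a hypothetical stand-in that is assumed to return $\log P_\theta(w_k|context_k)$ for whichever pre-trained LM is being probed.
\begin{verbatim}
from typing import Callable, List

def sentence_score(tokens: List[str],
                   token_log_prob: Callable[[str, List[str]], float],
                   bidirectional: bool = True) -> float:
    # Length-normalized score: (1/n) * sum_k log P(w_k | context_k).
    total = 0.0
    for k, w_k in enumerate(tokens):
        if bidirectional:
            context = tokens[:k] + ["[MASK]"] + tokens[k + 1:]  # S_{-k}
        else:
            context = tokens[:k]                                 # S_{<k}
        total += token_log_prob(w_k, context)
    return total / len(tokens)  # divide by n to reduce length bias

def predict(instance: List[List[str]], token_log_prob,
            bidirectional: bool = True) -> int:
    # An instance is a list of candidate sentences (one positive, the rest
    # negative); the highest-scoring candidate is predicted as positive.
    scores = [sentence_score(s, token_log_prob, bidirectional)
              for s in instance]
    return max(range(len(scores)), key=scores.__getitem__)
\end{verbatim}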
\section{Commonsense Tests Results}
Table \ref{table2} shows the model performances with random choice as the baseline. Taking WSC as an example, the random baseline is 0.500, human performance is 0.920, and all the models range between 0.512 and 0.694, with RoBERTa-large giving the best result of 0.694. Except for the ARCT task, all tested models demonstrate stronger performances than RANDOM, which indicates that the models all have varying degrees of commonsense. However, for most of the tasks, all models are well below human performance.
\subsection{Uni-directional vs.\ Bi-directional LMs}
We compare uni-directional (GPT, GPT2-base, GPT2-medium) and bi-directional models (BERT-base, BERT-large, XLNet-base, XLNet-large, RoBERTa-base, and RoBERTa-large). Picking the strongest model from each group, RoBERTa-large outperforms GPT2-medium by a large margin for every task. As mentioned before, RoBERTa-large has the same parameter size as GPT2-medium. However, RoBERTa-large is trained with much more data than GPT2-medium.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{pic1.png}
\caption{Comparison between bidirectional and unidirectional models among different tasks.}
\label{fig1}
\end{figure}
From Figure \ref{fig1}, we can see that, except for the SM task, both BERT-large and XLNet-large outperform GPT2-medium, even though BERT-large is trained with a smaller dataset than GPT2-medium. This indicates that bi-directional context can be more useful for learning commonsense. Intuitively, models with bi-directional context can make more sentence-level inferences.
While in a uni-directional model each word is conditioned only on its preceding words, every word has the full context in a bi-directional model. Table \ref{example2} shows examples where RoBERTa-large makes the correct prediction but GPT2-medium does not; we can see that the key tokens, which are considered to be the most influential for making the correct prediction, lie in the middle of the sentence. This can be the main reason why bi-directional context is important for models' commonsense ability.
\begin{table}[t]
\centering
\begin{tcolorbox}[fontupper=\small, fontlower=\small]
{\bf Token-level:}\\
A. \textbf{Sam pulled up a chair to the piano, but the \textcolor{blue}{chair} was broken, so he had to stand instead.}\\
B. Sam pulled up a chair to the piano, but the \textcolor{blue}{piano} was broken, so he had to stand instead.
\tcblower
{\bf Sentence-level:} \\
A. \textbf{Comments sections permit a reader to analyze many different perspectives in one place, and since \textcolor{blue}{I want to see all these ideas}, even stupid ones, Comment sections have not failed.}\\
B. Comments sections permit a reader to analyze many different perspectives in one place, but since \textcolor{blue}{I don't want to see all these stupid ideas}, Comment sections have not failed.
\end{tcolorbox}
\caption{Examples that a bi-directional model (RoBERTa-large) predicts correctly while a uni-directional model (GPT2-medium) makes an incorrect prediction; the correct answer is \textbf{bolded}; the key tokens are colored in \textcolor{blue}{blue}.}
\label{example2}
\end{table}
\subsection{Scale of Training Data}
A larger training dataset intuitively allows a model to access more commonsense knowledge, and should thus lead to better performance in our tests. Trained with by far the most data, RoBERTa is the winner for every task. Most of the models are in fact trained on a subset of the dataset used to train RoBERTa. However, larger datasets do not always help when the model capacity is limited with regard to commonsense. For example, GPT2-base underperforms GPT for many tasks in our dataset, which suggests that GPT2-base underfits the WebText dataset with regard to commonsense. The fact that RoBERTa-base has the same parameter size as GPT2-base, yet benefits from the larger dataset, suggests that bi-directional models have greater representational power with regard to commonsense ability.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{pic3.png}
\caption{Model performances as the number of inference steps (IS) increases. Tasks are ranked by their IS in increasing order from left to right.}
\label{fig2}
\end{figure}
\subsection{Number of Inference Steps}
Similar to humans, model performance can intuitively be expected to drop when commonsense inference becomes more complicated. To verify this intuition, we pick 100 sentences randomly from each test dataset and manually annotate the number of required inference steps (IS) of each instance. The inference step count of each test dataset is defined as the average number of reasoning turns necessary for the instances from that dataset. We choose to answer the question
by counting the logical operations that exist in an instance. For example, for the sentence
\textit{They add a lot to the piece and I look forward to reading comments, but since comments sections always distract me from my work, Comment sections have failed.}, the logic chain is (\textit{They add a lot to the piece} $\wedge$ \textit{I look forward to reading comments}) $\wedge$ \textit{comments sections always distract me from my work} $\rightarrow$ \textit{Comment sections have failed}. Thus, this instance needs three inference steps.
In this way, we obtain the Inference Step (IS) for the seven test datasets. Each instance is labeled by two expert annotators, and the inter-annotator agreement is 93\%. The final IS is the average of both annotators. Figure \ref{fig2} shows the results \footnote{The performances on tasks with more than one negative sample are transformed to binary-choice scales.} on the test cases with different IS. There is a decrease in performance as IS increases. SWAG and HellaSwag fall outside this trend, which may suggest that the models have stronger commonsense ability in temporal reasoning.
Generally speaking, all of our tested models outperform the random baselines except for the ARCT task, which suggests that, despite using different modeling schemas, language modeling stands as an effective objective for extracting commonsense knowledge from large, raw texts. For each task, the overall performance increases with a larger model parameter size, a more sophisticated model design, and larger training data.
\section{Robustness Test}
The robustness of models in commonsense reasoning is an important perspective in evaluating deep commonsense ability. Intuitively, a person can reason about whether a statement makes sense because they have consistent knowledge. If the statement changes slightly, for example by changing a key word, they should still make the correct judgement.
We aim to test the robustness of the five models by making dual test samples. A dual instance should test the same commonsense knowledge point as the original instance, or one largely relevant to it. In this way, we expect the model to demonstrate consistency in its decisions. One example is shown in Table \ref{example3}, where choosing A in the original instance should lead to choosing B in the dual case (see Figure \ref{fig4} for more examples).
\begin{table}[t]
\centering
\begin{tcolorbox}[fontupper=\small, fontlower=\small]
{\bf Original:}\\
A. People usually like wealth. B. People hardly like wealth.\\
{\bf Dual:} \\
A. People usually hate wealth. B. People hardly hate wealth.
\end{tcolorbox}
\caption{Example of a robustness test case; it contains a test instance from the original test set together with a dual test instance created manually. When the key word changes from `like' to `hate', the correct answer switches from A to B. This is a unique feature of our robustness test sets.}
\label{example3}
\end{table}
We consider multiple ways to construct a dual test instance. In particular, a dual test instance is built by one of the following methods: adding, deleting or replacing words in a test sample, or swapping two words in the sample, thereby resulting in a closely related test instance. All of our dual test instances are constructed from the original commonsense test data.
We construct 75 dual instances for each method above over WSC, SM, and ARCT, keeping the number of instances from each dataset approximately balanced in order to evaluate the influence of the different duality methods on the models. We then pair each dual instance with its original instance to form a new test case. If the model's predictions are either both correct or both wrong for the two instances in a case, we recognize it as a \textit{consistent} case.
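For concreteness, the consistency measure can be computed as in the following minimal sketch; the list of prediction/gold pairs is a hypothetical data layout assumed here for illustration.
\begin{verbatim}
def consistency(pairs):
    # pairs: list of ((pred_orig, gold_orig), (pred_dual, gold_dual)).
    # A case is consistent if the model is correct on both instances of
    # the pair, or wrong on both of them.
    consistent = 0
    for (p_o, g_o), (p_d, g_d) in pairs:
        if (p_o == g_o) == (p_d == g_d):
            consistent += 1
    return consistent / len(pairs)
\end{verbatim}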
\begin{table}[t]
\centering
\begin{tabular}{c c c c c}
\toprule
& Add & Del & Swap & Sub \\
\hline
RANDOM &0.50& 0.50& 0.50& 0.50 \\
GPT &0.13& 0.17& 0.51& 0.19 \\
GPT2-base &0.20 &0.23 &0.45 &0.22 \\
GPT2-medium &0.24 &0.24 &0.52 &0.22 \\
BERT-base &0.26 &0.15 &0.50 &0.29 \\
BERT-large &0.26 &0.25 &\textbf{0.56} &0.24 \\
XLNet-base &\textbf{0.36} &0.16 &0.41 &0.26 \\
XLNet-large &\textbf{0.36} &\textbf{0.39} &0.37 &0.26 \\
RoBERTa-base &0.20 &0.27 &0.47 &0.35 \\
RoBERTa-large &0.29 &0.33 &\textbf{0.56} &\textbf{0.42} \\
\toprule
\end{tabular}
\caption{Proportion of consistent cases for each method and each contextualizer. Add stands for adding key words to the test sample; Del stands for deleting key words from the test sample; Swap stands for swapping the positions of words in the test sample; Sub stands for replacing key words in the test sample. The best contextualizer for each method is \textbf{bolded}.}\smallskip \label{exp_dataset}
\label{table4}
\end{table}
The results are shown in Table \ref{table4}. In theory, a model equipped with relevant commonsense should give consistent predictions on a pair of dual test cases. However, we find that none of the models reaches consistency. In fact, their consistency is well below the random baselines except for the Swap method.
To better investigate the reason behind the poor consistency, we look at inconsistent cases from the best pre-trained model (i.e., RoBERTa-large). Similar to \citeauthor{Trinh-2018-a} (2018), we investigate how the model makes a decision between two candidate sentences $S_{correct}$ and $S_{incorrect}$ that have the same number of words. In particular, we look at:
\[ q_k = \log\left(\frac{P_\theta(w_k|S_{correct}-\{w_k\})}{P_\theta(w_k|S_{incorrect}-\{w_k\})}\right),\]
where $1 \leq k \leq n$. It follows that the choice between $S_{correct}$ and $S_{incorrect}$ is determined by whether the value $Q = \sum_k q_k$ is greater than 0. Visualizing the value of each $q_k$ provides more insight into the decisions of the model.
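A minimal sketch of this diagnostic is given below; \texttt{token\_log\_prob} is the same hypothetical per-token scorer sketched earlier, and the two candidate sentences are assumed to be token-aligned.
\begin{verbatim}
from typing import Callable, List

def per_token_log_ratios(correct: List[str], incorrect: List[str],
                         token_log_prob: Callable[[str, List[str]], float]):
    # q_k = log P(w_k | S_correct - {w_k}) - log P(w_k | S_incorrect - {w_k});
    # at positions where the candidates differ, each sentence's own word
    # is scored in its own masked context.
    assert len(correct) == len(incorrect)
    q = []
    for k in range(len(correct)):
        ctx_c = correct[:k] + ["[MASK]"] + correct[k + 1:]
        ctx_i = incorrect[:k] + ["[MASK]"] + incorrect[k + 1:]
        q.append(token_log_prob(correct[k], ctx_c)
                 - token_log_prob(incorrect[k], ctx_i))
    return q  # the model prefers S_correct iff sum(q) > 0
\end{verbatim}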
From Figure \ref{fig4}, we can tell that the model is confused by the modification, tending to give the same predictions over a pair of dual samples even though they have different gold labels, especially for Sub, Add and Del. This further reveals that the commonsense knowledge contained in the pre-trained models may remain at a surface level, without deep semantic comprehension.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\columnwidth]{pic4.png}
\caption{Samples of questions from Add, Del, Sub and Swap predicted correctly for the original instance but incorrectly for the dual instance. Note that a sentence here represents a test instance with a pair of positive and negative samples, represented by (.../...). Here we mark the correct prediction by an asterisk and display the normalized $q_k$ by coloring its corresponding word.}
\label{fig4}
\end{figure}
\section{Related Work}
\citeauthor{liu-etal-2019-linguistic} (2019) evaluate BERT \cite{devlin-etal-2019-bert}, GPT \cite{rad-2018}, and ELMo \cite{peters-etal-2018-deep} on a variety of linguistic tasks. Their results suggest that the features generated by pre-trained contextualizers are sufficient for high performance on a broad set of tasks, but that models fail on tasks requiring fine-grained linguistic knowledge. \citeauthor{Tenney2019WhatDY} (2019) evaluate similar models on a variety of sub-sentence linguistic analysis tasks. Their results suggest that contextualized word representations encode both syntax and semantics. Our work is in line with these in the sense that contextualized representations encode rich knowledge to be `probed'. However, we focus on evaluating the commonsense in those representations. To the best of our knowledge, this is the first work to systematically evaluate commonsense in pre-trained models.
Our evaluation method is similar to that of \citeauthor{Trinh-2018-a} (2018), who make use of LMs to score sentences. However, they focus on Winograd schema questions with only self-trained recurrent LMs, while we test five models' commonsense with seven diverse tasks.
\section{Conclusion}
We studied the commonsense knowledge and reasoning ability of pre-trained contextualizers with a suite of seven diverse probing tasks. We showed that large-scale pre-trained contextualized representations have a certain degree of commonsense knowledge, but that there is still a large gap between current state-of-the-art representation models and robust human-level commonsense reasoning, which may require further breakthroughs in modeling. We publicly release our test sets, named CATs.
\section{Acknowledgments}
We would like to thank the anonymous reviewers
for their insightful comments, and Mr. Cunxiang Wang for his help on the collection of the data. Yue Zhang is the corresponding author.
\bibliographystyle{aaai}
\section{\label{sec:level1}Introduction}
Anderson localization (AL)
\cite{anderson1958absence,Lee1985,kramer1993localization,LGP} established that in the presence of uncorrelated on-site random potential, all eigenstates are exponentially localized in one and two dimensions. In three dimensions, there is an energy mobility edge separating localized and delocalized eigenstates.
The localization length $\xi_1$ is determined by many parameters such as eigenstate energy, hopping integrals between adjacent sites, and the amplitude of the random potential, as obtained for different lattices and various types of random potentials \cite{LGP}. AL results in a strong suppression of transport in low-dimensional systems \cite{anderson1958absence,LGP}. AL was observed experimentally in a variety of condensed matter and optical systems \cite{kramer1993localization,lahini2008anderson,billy2008direct,roati2008anderson,strozer2006observation,schwartz2007transport}.
The challenging study of the interplay of interaction and disorder leads to a number of unexpected results
for the localization properties of many-particle eigenstates. The seemingly simplest case of two interacting particles (TIP) in one space dimension
was analyzed in an impressive set of publications
\cite{shepelyansky1994coherent,jacquod1997breit,imry1995coherent,roemer1997no,vonoppen1996interaction,song1999general,frahm1995scaling,frahm2016eigenfunction,krimer2010statistics,krimer2011two,ortuno1999localized,Sergej_ivanchenko_2014,sergej_ivanchenko_2017}.
For uncorrelated disorder the TIP localization length $\xi_2$ is assumed to be finite, with the main questions addressing the way $\xi_2$ scales with $\xi_1$ in the
limit of weak disorder
\cite{shepelyansky1994coherent,jacquod1997breit,imry1995coherent,roemer1997no,vonoppen1996interaction,song1999general,frahm1995scaling,frahm2016eigenfunction,krimer2010statistics,krimer2011two,ortuno1999localized}, and the nature of the observed sub-diffusive wave packet spreading on length scales $\xi_1 \ll L \ll \xi_2$ \cite{Sergej_ivanchenko_2014,sergej_ivanchenko_2017}.
The lack of analytical results stresses the need for computational studies. However, in all of the above cases, there are limits set by the size of the system (in particular
for diagonalization routines due to immense Hilbert space dimensions), the largest evolution
times obtained through direct integrations of time-dependent Schr\"odinger equations with continuous time variables, and the energy dependence of the localization length
$\xi_1$.
An interesting alternative platform is given by Floquet unitary maps on two-level system networks known as
\textit{discrete-time quantum walks} (DTQW).
DTQWs were introduced for quantum computing purposes \cite{aharonov2001quantum,PhysRevA.48.1687,doi:10.1080/00107151031000110776,tregenna2003controlling}. Recently they have been used to study some numerically challenging complex problems of condensed matter physics, e.g. lattice Dirac transport \cite{chandrashekar2013two}, topological phases \cite{obuse2011topological}, Anderson localization
\cite{PhysRevA.94.023601,vakulchyk2017anderson}, and nonlinear transport in ordered and disordered lattices supporting flat bands \cite{vakulchyk2019wave,vakulchyk2018almost}.
Note that the resulting Floquet Anderson localization is proven analytically for a whole range of different cases, including those where the eigenvalue spectrum
is dense, homogeneous and gapless, and the localization length $\xi_1$ is governing all random eigenstates independent of their eigenvalues
\cite{vakulchyk2017anderson}.
We stress that DTQWs are particular examples of a Floquet driven quantum lattice, and, therefore,
apply to studies of AL under non-equilibrium or simply infinite temperature conditions in an elegant and simple way as compared to the approach defined
by time-periodic Hamiltonian systems (see e.g. \cite{Ducatez_2017}).
DTQWs have been implemented in various condensed matter and optics setups, see e.g. \cite{CoinOperImpl,Impl1,di2004cavity,Impl3}.
Stefanak et al \cite{stefanak_2011} and Ahlbrecht et al \cite{Ahlbrecht_2012} proposed an extension of the single particle DTQW to two interacting DTQWs using a local contact interaction which is parametrized by a phase shift $\gamma$. We use this Hubbard-like interaction and consider two interacting disordered
discrete time quantum walks (TIW).
Using direct numerical simulations, we compute the time-dependent spreading of the TIW wave packet and its dependence on the single particle localization length $\xi_1$ (controlled by the hopping angle $\theta$) and on the strength of the interaction $\gamma$.
The computational evolution of wave functions for Hamiltonian systems involves the need to control accumulating errors due to the discretization of the continuous time variable.
DTQWs do not require such approximations, making them superior when it comes to long time evolutions.
The paper is organized as follows: in Sec. II, we first present the model for a single one-dimensional DTQW. We then extend the model to two interacting DTQWs.
In Sec. III, we present the computational details and measures used for the study of the time evolution of two interacting DTQWs.
In Sec. IV we present the numerical results, and discuss them.
Sec. V provides the conclusions.
\section{\label{sec:level2}
Models}
We consider the dynamics of a single quantum particle with an internal spin-like degree of freedom on a one-dimensional lattice \cite{aharonov2001quantum,PhysRevA.48.1687,doi:10.1080/00107151031000110776,tregenna2003controlling,obuse2011topological,Venegas-Andraca2012,chandrashekar2008optimizing,vakulchyk2017anderson}.
Such a system is characterized by a two-component wave function $\ket{\Psi(t)}$ defined on a discrete chain of $N$ sites. The wave function is
embedded in a $2N$-dimensional Hilbert space:
\begin{eqnarray}\label{single_particle_wf}
\ket{\Psi(t)} =\sum_{n=1}^{N} \sum_{\alpha=\pm} \psi_n^{\alpha}(t) \ket{\alpha} \otimes \ket{n} = \nonumber \\ \sum_{n=1}^{N} \Big[\psi_n^+(t)\ket{+} + \psi_n^-(t) \ket{-}\Big] \otimes \ket{n},
\end{eqnarray}
where $\ket{\alpha}=\ket{\pm}$ are basis vectors of {\it local two-level systems}, $\ket{n}$ are basis vectors in a {\it one-dimensional coordinate space}, and $\psi_n^{\alpha}$ are the wave function amplitudes.
The Floquet time evolution of the system is realized by means of a unitary map involving coin $\hat C$ and shift $\hat S$ operators:
\begin{eqnarray}
\label{evolution}
\ket{\Psi(t+1)} = \hat S \hat C \ket{\Psi(t)}.
\end{eqnarray}
The coin operator $\hat C$ is a unitary matrix given by
\cite{vakulchyk2017anderson}
\begin{eqnarray}\label{Coin1}
\hat{C}=\sum_{n=1}^N
\hat{c}_n
\otimes\ket{n}\bra{n}
\end{eqnarray}
with local unitary coin operators $\hat{c}_n$
\begin{eqnarray}\label{Coin12}
\hat{c}_n=
e^{i\varphi_n}
\begin{pmatrix}
e^{i\varphi_{1,n}}\cos\theta_n & e^{i\varphi_{2,n}}\sin\theta_n \\
-e^{-i\varphi_{2,n}}\sin\theta_n & e^{-i\varphi_{1,n}}\cos\theta_n
\end{pmatrix}
\end{eqnarray}
which
are parametrized by four spatially dependent angles $\theta_n$, $\varphi_n$, $\varphi_{1,n}$ and $\varphi_{2,n}$. Such local coin operators can be implemented in various experimental setups through e.g. a periodic sequence of effective magnetic field pulses \cite{CoinOperImpl,Impl1,di2004cavity,Impl3}.
As was shown in Ref. \cite{vakulchyk2017anderson}, the angle $\varphi_n$ is related to a potential energy, the angles $\varphi_{1,n}$ and $\varphi_{2,n}$ to an external and an internal
magnetic flux respectively, and the angle $\theta_n$ to a local kinetic energy or hopping. In this work we intend to generalize the corresponding problem of two
interacting particles in a one-dimensional tight-binding chain with uncorrelated disorder. Therefore we choose $\varphi_{1,n}=\varphi_{2,n}=0$
and $\theta_n \equiv \theta$, which simplifies the local coins $\hat{c}_n$ in (\ref{Coin12}) to
\begin{eqnarray}\label{Coin13}
\hat{c}_n=
e^{i\varphi_n}
\begin{pmatrix}
\cos\theta & \sin\theta \\
-\sin\theta & \cos\theta
\end{pmatrix} \;.
\end{eqnarray}
The spatial local disorder will be introduced through the angles $\varphi_n$ \cite{vakulchyk2017anderson}. This particular choice of disorder resembles a random on-site potential of the original Anderson model \cite{anderson1958absence}.
The shift operator $\hat{S}$ in Eq. (\ref{evolution})
couples neighboring sites by shifting all the $\psi_n^+$ components one step to the right, and all the $\psi_n^-$ components to the left:
\begin{eqnarray}\label{Shift1}
\hat{S} = \sum_n \ket{n}\bra{n+1} \otimes \ket{-}\bra{-} \; +\; \ket{n}\bra{n-1} \otimes \ket{+}\bra{+}. \nonumber\\
\end{eqnarray}
This completes the definition of
a single particle discrete-time quantum walk
\cite{aharonov2001quantum,PhysRevA.48.1687,doi:10.1080/00107151031000110776,tregenna2003controlling,obuse2011topological,Venegas-Andraca2012,chandrashekar2008optimizing,vakulchyk2017anderson}.
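For illustration, a minimal NumPy sketch of one Floquet step of this walk is given below; the simplified coin of Eq.~(\ref{Coin13}) and periodic boundary conditions are assumed, and the parameter values in the usage example are illustrative only.
\begin{verbatim}
import numpy as np

def dtqw_step(psi_plus, psi_minus, phi, theta):
    # One step |Psi(t+1)> = S C |Psi(t)> of the disordered DTQW.
    # psi_plus, psi_minus: length-N complex arrays of the +/- amplitudes;
    # phi: length-N array of random phases phi_n; theta: hopping angle.
    c, s = np.cos(theta), np.sin(theta)
    phase = np.exp(1j * phi)
    up = phase * (c * psi_plus + s * psi_minus)    # local coin c_n
    down = phase * (-s * psi_plus + c * psi_minus)
    # shift S: '+' components move one site right, '-' components move left
    return np.roll(up, 1), np.roll(down, -1)

# usage: evolve a state initially localized at the centre of the chain
N, theta, W, t_max = 2000, np.pi / 8, np.pi, 1000
phi = np.random.uniform(-W / 2, W / 2, N)
psi_p = np.zeros(N, dtype=complex)
psi_m = np.zeros(N, dtype=complex)
psi_p[N // 2] = psi_m[N // 2] = 1 / np.sqrt(2)
for _ in range(t_max):
    psi_p, psi_m = dtqw_step(psi_p, psi_m, phi, theta)
\end{verbatim}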
We extend the above single particle walk to two interacting discrete time quantum walks (TIW) in analogy to the extension of a single quantum particle in an Anderson model
to two interacting particles:
\begin{eqnarray}\label{two_particle_wave_function}
\ket{\Psi(t)} =
\sum_{i,j=1}^N\sum_{\alpha,\beta = \pm} \psi_{ij}^{\alpha\beta}(t)\ket{\alpha,\beta} \otimes \ket{i,j}.
\end{eqnarray}
The wave function $\ket{\Psi(t)}$ is embedded in a $4N^2$-dimensional Hilbert space where $\ket{\alpha,\beta}$ are basis vectors of two local two-level systems,
and $\ket{i,j}$ are basis vectors in a two-dimensional square lattice.
The TIW evolution is obtained through the product of TIW coin $\hat{W}$, shift $\hat T$ and interaction $\hat G$ operators acting on the wave function
(see Fig.\ref{Scheme}):
\begin{eqnarray}\label{tip_evolution}
\ket{\Psi(t+1)} = \hat T \hat W \hat G \ket{\Psi(t)}.
\end{eqnarray}
\begin{figure}
\includegraphics[width=0.5\textwidth]{Scheme}
\caption{\label{Scheme}
A schematic view of the TIW. The four components of the wave function on each site of a square lattice are shifted in different directions indicated by the arrows.
}
\end{figure}
The coin $\hat{W}$ and shift $\hat{T}$ operators are tensor products of the corresponding single particle operators:
\begin{equation}
\hat{W} = \hat{C}\otimes\hat{C} \;,\; \hat{T} = \hat{S}\otimes\hat{S}\;.
\end{equation}
In the absence of an interaction $\hat{G} = \mathbb{1}$ they describe the
evolution of two independent single particle DTQWs.
The local Hubbard-like contact interaction between the two DTQWs was
introduced in Ref. \cite{Ahlbrecht_2012} as
\begin{eqnarray}\label{tip_interaction}
\hat{G} = \mathbb{1}_c \otimes \mathbb{1}_p +\left(
e^{i\gamma} -1\right) \mathbb{1}_c \otimes \hat N ,
\end{eqnarray}
where $\gamma$ is the interaction strength parameter.
$\hat{N}=\sum_i\ket{i,i}\bra{i,i}$ is a projector on the diagonal of the coordinate space,
$ \mathbb{1}_c$ is the $4 \times 4$ unity matrix in the coin space, and $ \mathbb{1}_p$ is the $N^2 \times N^2$ unity matrix in the position
space. Note that
$\gamma=0$ corresponds to two noninteracting DTQWs.
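A minimal NumPy sketch of one TIW step is given below; the four coin components are stored in an array \texttt{psi[a,b,i,j]} (index 0 for $+$, 1 for $-$), the same disorder realization $\varphi_n$ acts on both walks, and periodic boundary conditions are assumed for brevity.
\begin{verbatim}
import numpy as np

def tiw_step(psi, phi, theta, gamma):
    # One step |Psi(t+1)> = T W G |Psi(t)> of the two interacting walks.
    # psi: (2, 2, N, N) array; axes = (coin 1, coin 2, position i, position j).
    N = psi.shape[-1]
    psi = psi.copy()
    d = np.arange(N)
    psi[:, :, d, d] *= np.exp(1j * gamma)              # interaction G on i = j
    c, s = np.cos(theta), np.sin(theta)
    coin = np.array([[c, s], [-s, c]])
    psi = np.einsum('Aa,abij->Abij', coin, psi)        # coin of walk 1
    psi = np.einsum('Bb,abij->aBij', coin, psi)        # coin of walk 2
    psi = psi * np.exp(1j * phi)[None, None, :, None]  # phase e^{i phi_i}
    psi = psi * np.exp(1j * phi)[None, None, None, :]  # phase e^{i phi_j}
    out = np.empty_like(psi)
    out[0] = np.roll(psi[0], 1, axis=-2)               # walk 1, '+' moves right
    out[1] = np.roll(psi[1], -1, axis=-2)              # walk 1, '-' moves left
    out[:, 0] = np.roll(out[:, 0], 1, axis=-1)         # walk 2, '+' moves right
    out[:, 1] = np.roll(out[:, 1], -1, axis=-1)        # walk 2, '-' moves left
    return out

# initial state (|+,-> + |-,+>)/sqrt(2) localized at (N/2, N/2)
N = 200
psi0 = np.zeros((2, 2, N, N), dtype=complex)
psi0[0, 1, N // 2, N // 2] = psi0[1, 0, N // 2, N // 2] = 1 / np.sqrt(2)
\end{verbatim}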
\section{Anderson localization}
The local disorder is introduced through uncorrelated random values of the angle $\varphi_n$.
For a disorder strength $0 \leq W \leq 2\pi$, the set of $\varphi_n$ is drawn independently from a uniform distribution on the interval $[-W/2, W/2]$.
\subsection{Single particle DTQW}
As it was shown in Ref. \cite{vakulchyk2017anderson}, all eigenstates of the single particle DTQW are exponentially localized and characterized by a localization length
$\xi_1$, in full analogy to Anderson localization for Hamiltonian single particle systems \cite{anderson1958absence}.
The single particle DTQW possesses two distinct limiting parameter cases for which $\xi_1 \rightarrow \infty$. The first is obtained for $W \rightarrow 0$, again in full
analogy with Hamiltonian systems. The DTQW eigenvalues form a band spectrum and are located on the unit circle \cite{vakulchyk2017anderson}, which is in general gapped
for $W \rightarrow 0$. Consequently the localization length $\xi_1$ is a function of the eigenvalue and different for different eigenstates, reaching its largest value in the center of the above bands.
The second parameter case is unique for Floquet Anderson systems and is obtained for the case of {\it strongest} disorder $W=\pi$.
The DTQW spectrum is now dense, homogeneous and gapless on the unit circle, with all eigenstates having the same localization length irrespective of their
eigenvalue \cite{vakulchyk2017anderson}:
\begin{eqnarray}
\xi_1 = -\frac{1}{\ln \left( |\cos\theta| \right)}.
\label{loc_length_analyt}
\end{eqnarray}
The limit $\xi_1 \rightarrow \infty$ is obtained by varying the hopping angle $\theta \rightarrow 0$. We are not aware of a similar regime for
Hamiltonian systems. In the following, we will study the TIW in that novel regime.
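As a quick numerical illustration of Eq.~(\ref{loc_length_analyt}), the hopping angles used in the simulations below give the following single particle localization lengths:
\begin{verbatim}
import numpy as np

for theta in (np.pi / 8, np.pi / 12, np.pi / 16, np.pi / 20):
    xi1 = -1.0 / np.log(np.abs(np.cos(theta)))
    print(round(np.pi / theta), round(xi1, 1))
# yields roughly 12.6, 28.8, 51.5 and 80.7, consistent with the values
# xi_1 ~ 12 (theta = pi/8) and xi_1 ~ 81 (theta = pi/20) quoted below.
\end{verbatim}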
\subsection{TIW}
We will follow the time evolution of a TIW wave function
starting from the initial state
\begin{eqnarray}
\ket{\Psi(t=0)} = \frac{(\ket{+,-}+\ket{-,+})}{\sqrt{2}}\otimes\ket{N/2,N/2}
\end{eqnarray}
for which the two single particle DTQWs are localized on the lattice site $N/2$ where the TIW interaction is present.
The system size $N$ varies from $5000$ up to $25000$, such that the spreading wave packet does not reach the edges in order to exclude finite system size corrections.
We perform a direct numerical propagation of \eqref{tip_evolution} up to $t_\text{max}$, which varies from $10^4$ for $\gamma=0$ to $10^6$ for nonzero interaction
strength values.
We follow the wave function probability distribution in coordinate space
\begin{equation}\label{prob_distr_2d}
p_{ij}(t) = \sum_{\alpha,\beta=\pm} \left| \psi_{ij}^{\alpha\beta} \right|^2.
\end{equation}
To assess TIW localization length scales we will project $p_{ij}$ in three different ways onto a one-dimensional coordinate space and compute the
standard deviation of a probability distribution vector $\{v_i\}$ (see e.g. \cite{Sergej_ivanchenko_2014, sergej_ivanchenko_2017})
\begin{eqnarray}\label{generic_sd}
\sigma\left[\{v_i\}\right] = \left(\sum_i i^2 v_i - \left( \sum_i i v_i \right)^2 \right)^{1/2}.
\end{eqnarray}
\textit{Measure 1: projection on a one particle space}: we define $v_i = \sum_j p_{ij}(t)$, substitute in \eqref{generic_sd} and obtain $\sigma_{1}(t)$.
\textit{Measure 2: projection on the space of the center mass motion}: we define $v_i = \sum_j p_{i,j-i}(t)$, substitute in \eqref{generic_sd} and obtain
$\sigma_{\parallel}(t)$.
\textit{Measure 3: projection on the space of the (relative) distance between particles}: we define $v_i = \sum_j p_{i,i+j}(t)$, substitute in \eqref{generic_sd} and obtain
$\sigma_{\perp}$.
In addition to the above three TIW length scales $\sigma_1,\sigma_{\parallel},\sigma_{\perp}$ we also define a length scale $\sigma_{sp}$ which
follows from the numerical simulation of a single particle DTQW. We define
$v_i = |\psi_i^+ (t)|^2 + |\psi_i^- (t)|^2$, substitute in \eqref{generic_sd} and obtain $\sigma_{sp}$.
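A minimal sketch of how these length scales can be extracted from $p_{ij}$ is given below; reading the centre-of-mass and relative coordinates as $i+j$ and $i-j$ is one concrete interpretation of measures 2 and 3, and the geometric factor $1/\sqrt{2}$ between lattice and diagonal axes is ignored here.
\begin{verbatim}
import numpy as np

def std_dev(v):
    # standard deviation of a (normalized) 1D probability vector
    v = v / v.sum()
    idx = np.arange(len(v))
    return np.sqrt(np.sum(idx ** 2 * v) - np.sum(idx * v) ** 2)

def tiw_length_scales(p):
    # p: (N, N) array holding the probability distribution p_ij
    N = p.shape[0]
    sigma_1 = std_dev(p.sum(axis=1))                  # measure 1
    i, j = np.indices(p.shape)
    com = np.bincount((i + j).ravel(), weights=p.ravel(),
                      minlength=2 * N - 1)            # centre-of-mass profile
    rel = np.bincount((i - j + N - 1).ravel(), weights=p.ravel(),
                      minlength=2 * N - 1)            # relative-distance profile
    return sigma_1, std_dev(com), std_dev(rel)        # sigma_1, sigma_par, sigma_perp
\end{verbatim}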
In the presence of Anderson localization, all the above length scales are expected to grow in time and saturate at some finite values for $t \rightarrow \infty$.
For the single particle DTQW we expect $\sigma_{sp}(t\rightarrow \infty) \sim \xi_1$.
For the noninteracting TIW case $\gamma=0$ we expect the distribution $p_{ij}(t \rightarrow \infty)$ to have four-fold discrete rotational symmetry (see e.g. inset
in Fig.\ref{fig1}).
It follows $\sigma_1\approx\sigma_{\parallel}\approx\sigma_{\perp} \approx \sigma_{sp} \sim \xi_1$.
However, for $\gamma \neq 0$ the two walks are expected to be able to travel beyond the limits set by $\sigma_{sp}$ and $\xi_1$ as long as their
two coordinates are close enough such that $|i-j| < \xi_1$. This is in analogy to two interacting particles in Hamiltonian settings. The interaction introduces
nonzero matrix elements between the Anderson eigenstates of the noninteracting system, which leads to an effective internal degree of freedom of the two walks (or particles),
which form a weakly bound state. Consequently the distribution $p_{ij}(t \rightarrow \infty)$ should elongate along the diagonal $i=j$
and reduce its symmetry to a two-fold rotational symmetry (see e.g. inset
in Fig.\ref{fig3}). It follows $\sigma_1 \approx \sigma_{\parallel} \equiv \xi_2$, $\sigma_{\perp} \approx \sigma_{sp} \sim \xi_1$ and $\xi_2 \gg \xi_1$. The TIW is
therefore characterized by two length scales $\xi_2$ and $\xi_1$.
\section{Computational results}
\subsection{$\gamma=0$}
The time dependence $\sigma_1(t)$ (averaged over 100 disorder realizations) is shown in Fig.\ref{fig1} for various values of the hopping angle $\theta$ with solid lines.
We observe the expected saturation of $\sigma_1$ for large evolution times $t=10^4$.
The wave function probability distribution $p_{ij}(t=10^4)$ is shown in the inset of Fig.\ref{fig1} for $\theta=\pi/20$. It shows the above discussed
four-fold discrete rotational symmetry. In addition we plot the time dependence of $\sigma_{sp}(t)$ with dashed lines, which are averaged over
$10^4$ disorder realizations and nicely follow the
corresponding $\sigma_1(t)$ curves.
\begin{figure}
\includegraphics[width=0.5\textwidth]{fig1}
\caption{\label{fig1}
$\sigma_1(t)$ for $\gamma=0$ (solid lines, 100 disorder realizations) and $\sigma_{sp}(t)$ (dashed lines, average over $10^4$ disorder realizations). $\theta = \pi/8,\pi/12,\pi/16,\pi/20$ from bottom to top.
Here $N=5000$ for $\theta=\pi/8,\pi/12$ and $N=25000$ for $\theta=\pi/16,\pi/20$.
Inset: snapshot of the probability distribution $p_{ij}(t=10^4)$
for $\theta=\pi/20$.
}
\end{figure}
A first nontrivial test is the comparison of $\xi_1$ with $\sigma_{sp}(t \rightarrow \infty)$ and $\sigma_1(t \rightarrow \infty)$ for $\gamma=0$. While we expect
$\sigma_{sp} \approx \sigma_1$, the connection between $\xi_1$ and $\sigma_{sp}$ is far from obvious. The Hamiltonian case is known to obey
the single parameter scaling property \cite{mackinnon1983scaling}, which implies in our case $\xi_1 \sim \sigma_{sp}$.
In Fig.\ref{fig2} we compare the localization length $\xi_{1}$ \eqref{loc_length_analyt} (solid line) with $\sigma_{sp}(t=10^4)$ from Fig.\ref{fig1} (blue circles) and
$\sigma_{1}(t=10^4)$ from Fig.\ref{fig1} (red triangles) for different values of $\theta$. At first glance the single parameter scaling seems to be satisfied, since the data symbols follow the analytical curve reasonably closely.
However, the inset in Fig.\ref{fig2} plots the corresponding ratios $\sigma_{sp}/\xi_1$ and $\sigma_1/\xi_1$ versus $\theta$ which result in non-horizontal curves
and indicate a violation of the single parameter scaling hypothesis. To independently confirm the absence of the single parameter scaling property, we
diagonalize the single particle DTQW numerically for a system size $N=1500$, and obtain the participation numbers $P_{\nu}$ of all eigenfunctions $\ket{\Psi}_{\nu}$ as
$1/P_{\nu}=\sum_{n=1}^{N} \sum_{\alpha=\pm} |\psi_n^{\alpha}|^4$. The average $P=\sum_{\nu} P_{\nu}/{2N}$ is plotted in Fig.\ref{fig2} (green squares). We find that $P/\xi_1$ varies with $\xi_1$, and even shows
an opposite trend as compared to $\sigma_{sp}/\xi_1$, confirming the presence of a variety of different length scales in the problem.
At the same time, $\sigma_{sp}$ closely follows $\sigma_1$, implying that any changes in $\sigma_1$ upon increasing
the TIW interaction $\gamma$ away from $\gamma=0$ are solely due to the interaction, and not due to measurement ambiguities.
\begin{figure}
\includegraphics[width=0.5\textwidth]{fig2}
\caption{\label{fig2}
Various length scales $L$ versus $\theta$: $\xi_1$ (solid line), $\sigma_{sp}(t_f)$ (blue circles), $\sigma_1(t_f)$ (red triangles), $P$ (green squares). Here $t_f=2\cdot 10^4$, $\gamma=0$.
Inset: $\sigma_{sp}/\xi_1$, $\sigma_1/\xi_1$ and $P/\xi_1$ as a function of the angle $\theta$.
}
\end{figure}
\subsection{$\gamma = \pi$}
Let us present the numerical analysis of the dynamics of the TIW for non-vanishing interaction $\gamma \neq 0$.
The largest absolute value of the term $(e^{i\gamma} -1)$ in (\ref{tip_interaction}) is obtained for $\gamma=\pi$, which we choose as our operational value in this section.
We evolve a system of size $N=25000$ up to time $t_\text{max}=10^6$. We follow the time dependence of the standard deviation $\sigma_1$ for various values of the angle $\theta$. These results are presented in Fig. \ref{fig3} (solid lines).
$\sigma_1(t)$ shows ballistic-like growth
($\sigma \propto t$) up to $\sigma_1 \sim \xi_1$ in analogy to the noninteracting case. During this first part of the dynamics, the wave packet spreads up to a length scale
of the order of the single particle localization length $\xi_1$. At variance to the noninteracting case, the interacting dynamics continues beyond the limits set by
the single particle DTQW Anderson localization. The corresponding growth of $\sigma_1$ with time is close to a sub-diffusive one
$\sigma \propto t^\alpha$ with $\alpha \leq 0.5$.
For $\theta=\pi/8$ and $\xi_1\approx 12$ we observe saturation of $\sigma_1(t)$ at the largest computational time $t=10^6$. For smaller values of $\theta$ and correspondingly for
larger values of $\xi_1$, the saturation is shifted to larger time and spatial scales, and becomes barely visible for $\theta=\pi/20$ and $\xi_1 \approx 81$.
Choosing larger system sizes, although necessary, becomes hard due to CPU time and memory limitations. For practical purposes we therefore
present data which correspond to the largest evolution times.
\begin{figure}
\includegraphics[width=0.5\textwidth]{fig3}
\caption{\label{fig3} Time evolution of $\sigma_1$
of a TIW for different values of $\theta=\pi/8,\pi/12,\pi/16,\pi/20$ from bottom to top.
Here $\gamma=\pi$ and $N=25000$.
Inset: snapshot of the probability distribution $p_{ij}$ for $\theta=\pi/20$ at $t=10^6$, showing strongly anisotropic wave packet spreading.
}
\end{figure}
In the inset of Fig.\ref{fig3} we plot the probability distribution of wave function $p_{ij}(t =10^6)$ for $\theta=\pi/20$. It shows a clear reduction to
the two-fold rotational symmetry which leads to the emergence of at least two different length scales $\sigma_{\perp}$ and $\sigma_{\parallel} \gg \sigma_{\perp}$
which characterize the width and elongation of the cigar-like shape.
The dependence of the new length scales on the single particle length scale $\sigma_{sp}$ is shown in
Fig.\ref{fig4}. The relation $\sigma_{\perp} \approx \sigma_{sp}$ demonstrates that the limit on the relative distance over which the two single particle DTQW components of the TIW can propagate is set by $\sigma_{sp}$. However, the elongation $\sigma_{\parallel}$ shows a faster than linear growth with
$\sigma_{sp}$. A simple power law fit $\sigma_{\parallel} \approx \sigma_{sp}^{\beta}$ yields $\beta \approx 1.2$.
\begin{figure}
\includegraphics[width=0.5\textwidth]{fig4}
\caption{\label{fig4} Scaling of the TIW length scale $\sigma_{\perp}$ (green squares), $\sigma_{\parallel}$ (orange triangles) and $\sigma_1$ (blue circles) with the single particle DTQW
length scale $\sigma_{sp}$. The corresponding values of $\theta$ vary between $\pi/20$ and $\pi/3$.
Here $\gamma=\pi$ and $N=25000$.
Black dashed lines are algebraic fits.
}
\end{figure}
\subsection{Varying $\gamma$}
Finally we study the impact of varying the interaction strength $\gamma$ for two different values of $\theta=\pi/8$ and $\theta=\pi/12$ in Fig.\ref{fig5}.
In order to reduce fluctuations induced by individual disorder realizations, we evolve the wave packet up to $t=2 \cdot 10^5$ (which is sufficient for the chosen $\theta$ values) and average over
10 disorder realizations.
We first discuss the data for the width $\sigma_{\perp}$. Since we concluded that $\sigma_{\perp} \approx \sigma_{sp}$ is a single particle DTQW length scale,
it should not depend on the strength of $\gamma$. Indeed, the computational data demonstrate this very clearly.
At the same time, the elongation scale $\sigma_{\parallel}$ respectively $\sigma_1$ should strongly depend on $\gamma$. Again, the computational data in Fig.\ref{fig5}
demonstrate this very clearly. The curves $\sigma_{\parallel ,1}(\gamma)$ show a clear maximum at $\gamma_m(\theta)$. Surprisingly, $\gamma_m \neq \pi$, with a weak but observable dependence
on $\theta$. Therefore the value $\gamma=\pi$ does not, in general, correspond to the strongest enhancement of the TIW localization length. Possibly there is a hidden symmetry in the
TIW problem at $\gamma=\pi$ whose violation for $\gamma\neq \pi$ might lead to an enhancement of the localization length.
\begin{figure}
\includegraphics[width=0.5\textwidth]{fig5a}
\includegraphics[width=0.5\textwidth]{fig5b}
\caption{\label{fig5} $\sigma_1$ (blue circles), $\sigma_{\parallel}$ (orange triangles), $\sigma_{\perp}$ (green squares) as functions of the interaction parameter $\gamma$.
(a) $\theta=\pi/8$. (b) $\theta=\pi/12$.
}
\end{figure}
\section{Conclusion and outlook}
We analyzed the interplay of disorder and interaction in the Floquet Anderson localization problem of two interacting discrete time quantum walks.
The single particle DTQW is described by a Floquet unitary map
defined on a chain of two-level systems. Despite the action of strong disorder in one of the Floquet unitary map parameters, the resulting novel Anderson localization phase is
characterized by a gapless Floquet spectrum and one unique localization length $\xi_1$
for all single particle eigenstates. The ratio of the participation number of the eigenstates $P$ over $\xi_1$ is not constant, indicating a violation of the usually expected
single parameter scaling regime as known for Hamiltonian disordered systems.
We add a local contact interaction, which is parametrized by a phase shift $\gamma$.
A wave packet spreads subdiffusively beyond the bounds set by $\xi_1$ and saturates at a new length scale $\xi_2 \gg \xi_1$.
For the assumed strongest interaction case $\gamma=\pi$ this new length scale follows
$\xi_2 \sim \xi_1^{1.2}$.
We observe a nontrivial dependence of $\xi_2$ on $\gamma$, with a maximum value
observed for $\gamma$-values which are shifted away from the expected strongest interaction case $\gamma=\pi$. We currently lack an understanding of this intriguing fact,
which has to be addressed in future work.
In the absence of interaction $\gamma=0$ we confirm the persistence of the violation of the single parameter scaling. The explanation of this surprising observation
is another interesting topic to be addressed in future work.
\textbf{Acknowledgement.} This work was supported by the Institute for Basic Science, Project Code (IBS-R024-D1).
\subsection{Proof of Correctness for Sampling with a Fixed Partition}
Algorithm \ref{alg:nesting_sample} specifies a method for sampling from a weight function given a fixed partition tree and a bound that provably nests. Its proof of correctness is given in Proposition~\ref{prop:rejection_sampling_correctness}. Note that a simple property that follows from recursively applying the definition of a nesting bound is that
$\sum_{i \in \mathcal{S}} w(i) \leq \Zub_w (\mathcal{S})$.
More generally, given any node $v$ in $\mathcal{T}$ associated with the subset $S_v \subseteq \mathcal{S}$, we have
$\sum_{i \in S_v} w(i) \leq \Zub_w (S_v)$.
\begin{prop}[\citet{huber2006exact,law2009approximately}]
\label{prop:rejection_sampling_correctness}
Algorithm~\ref{alg:nesting_sample} samples an element $i \in \mathcal{S}$ from the normalized weight function $i \sim \frac{w(i)}{\sum_{j \in \mathcal{S}} w(j)}$.
\end{prop}
\begin{proof} The probability of sampling leaf node $v_i$ at depth $d$ in the partition tree, with ancestors $v^a_{d-1}$, \dots, $v^a_0$ (where $v^a_{d-1}$ is the parent node of $v_i$ and $v^a_0$ is the root node, so that $S^a_0 = \mathcal{S}$), associated ancestor subsets $S^a_{d-1}$, \dots, $S^a_0$, and whose own associated subset is the singleton $\{i\}$, is
\begin{align*}
& \frac{1}{p_{accept}} \times \frac{Z^{UB}_w(S^a_1)}{Z^{UB}_w(S^a_0)} \times \frac{Z^{UB}_w(S^a_2)}{Z^{UB}_w(S^a_1)} \times \dots \times \frac{Z^{UB}_w(\{i\})}{Z^{UB}_w(S^a_{d-1})} \\
= & \frac{1}{p_{accept}} \times \frac{Z^{UB}_w(\{i\})}{Z^{UB}_w(S^a_0)} = \frac{Z^{UB}_w(\mathcal{S})}{Z_w} \times \frac{w(i)}{Z^{UB}_w(\mathcal{S})}
= \frac{w(i)}{\sum_{j \in \mathcal{S}} w(j)}.
\end{align*}
\end{proof}
\begin{algorithm}[h]
\caption{Sample from a Normalized Weight Function}
\label{alg:nesting_sample}
\begin{flushleft}
\textbf{Inputs:}
\end{flushleft}
\begin{enumerate}
\item Non-empty state space $\mathcal{S} = \{1, \dots, N\}$
\item Partition tree $\mathcal{T}$ of $\mathcal{S}$
\item Unnormalized weight function $w:\mathcal{S} \rightarrow \mathbb{R}_{\geq 0}$
\item Nesting upper bound $\Zub_w(S)$ for $w$ with respect to $\mathcal{T}$
\end{enumerate}
\begin{flushleft}
\textbf{Output:} A sample $i \in \mathcal{S}$ distributed as $i \sim \frac{w(i)}{\sum_{j \in \mathcal{S}} w(j)}$.
\end{flushleft}
\begin{flushleft}
\textbf{Algorithm:}
\end{flushleft}
\begin{enumerate}
\item Set $v$ to the root node of $\mathcal{T}$ and $S = \mathcal{S}$.
\item Sample a child of $v$ (denoted $v_1, \dots, v_k$ with associated subsets $S_1, \dots, S_k$ of $\mathcal{S}$) or slack with probabilities:
\begin{align*}
p(v_l) & = \frac{\Zub_w(S_l)}{\Zub_w(S)} \hspace{3em}
p(\text{slack}) = 1 - \frac{\sum_{l = 1}^k \Zub_w(S_l)}{\Zub_w(S)}
\end{align*}
\item If a child was sampled with an associated subset containing a single element then return this element.
\item If a child, $v_l$, was sampled with an associated subset containing more than one element then set $v = v_l$, $S = S_l$, and go to step 2.
\item If the slack element was sampled then go to step 1.
\end{enumerate}
\end{algorithm}
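To make the control flow of Algorithm~\ref{alg:nesting_sample} concrete, the following Python sketch implements the same descent over an explicitly stored partition tree. The \texttt{Node} class and function names are our own illustrative choices, and the bound \texttt{z\_ub} is assumed to nest and to be tight on singletons; this is a sketch rather than the code used in our experiments.
\begin{verbatim}
import random

class Node:
    """A partition tree node: stores its subset of the state space and its children."""
    def __init__(self, subset, children=None):
        self.subset = list(subset)        # states associated with this node
        self.children = children or []    # child nodes partitioning `subset`

def sample_with_nesting_bound(root, z_ub):
    """One exact sample i ~ w(i)/Z_w, given a partition tree whose bound z_ub
    nests and is tight on singletons (so z_ub of a leaf equals its weight)."""
    while True:                           # restart whenever slack is sampled
        node = root
        rejected = False
        while len(node.subset) > 1:
            parent_ub = z_ub(node.subset)
            r = random.uniform(0.0, parent_ub)
            acc, chosen = 0.0, None
            for child in node.children:   # child chosen w.p. z_ub(child)/z_ub(parent)
                acc += z_ub(child.subset)
                if r < acc:
                    chosen = child
                    break
            if chosen is None:            # remaining mass is slack: reject and restart
                rejected = True
                break
            node = chosen
        if not rejected:
            return node.subset[0]         # leaf reached: accept its single element
\end{verbatim}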
\subsection{Adaptive Rejection Sampling}\label{sec:adaptive_rejection_sampling}
We can improve the efficiency of \method by tightening the upper bounds $\Zub_w$ whenever we encounter slack. This is done by subtracting the computed slack from the associated upper bounds, which still preserves nesting properties.
The resulting algorithm is an \emph{adaptive rejection sampler}~\citep{gilks1992adaptive}, where the ``envelope'' proposal is tightened every time a point is rejected.\footnote{The use of `adaptive' here is to connect this section with the rejection sampling literature, and is unrelated to `adaptive' partitioning discussed earlier.}
Formally, for any partition $P$ of $S$, we define a new, tighter upper bound as follows:
\begin{equation}
\Zubunder_w(S) = \min \left\{ \sum_{S_i \in P} \Zub_w(S_i),\, \Zub_w(S) \right\}.
\end{equation}
This is still a valid upper bound
on $\sum_{i \in S} w(i)$ because of the additive nature of $Z_w$,
and is, by definition, also nesting w.r.t.\ the partition $P$.
If we encounter any slack, there must exist some $S$ for which $\Zubunder_w(S) < \Zub_w(S)$, hence we can \emph{strictly} tighten our bound for subsequent steps of the algorithm (thereby making \method more efficient) by using $\Zubunder_w(S)$ instead of $\Zub_w(S)$.
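As a concrete illustration, if the upper bounds computed so far are cached per subset, this tightening step amounts to the following small Python sketch (the dictionary layout and function name are our own assumptions):
\begin{verbatim}
def tighten_bound(ub_cache, S, partition):
    """Tighten the cached bound for subset S to min(sum of child bounds, current bound).

    ub_cache:  dict mapping a frozenset subset -> current upper bound value
    S:         frozenset, the subset whose bound is tightened
    partition: list of frozensets forming a partition of S
    """
    children_sum = sum(ub_cache[Si] for Si in partition)
    ub_cache[S] = min(children_sum, ub_cache[S])   # still an upper bound on the weight in S
    return ub_cache[S]
\end{verbatim}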
For matrices sampled uniformly from $[0,1)$, we empirically find that bound tightening is most effective for small matrices. After 1000 samples we improve our bound on the permanent to roughly 64\%, 77\%, and 89\% of the original bound for matrices of size 10, 15, 25 respectively. However, bound tightening is more effective for other types of matrices. For the matrices from real world networks in Section~\ref{sec:experiments}, after drawing 10 samples we improve our bound on the permanent to roughly 25\%, 22\%, 8\%, 7\%, and 19\% for models ENZYMES-g192, ENZYMES-g230, ENZYMES-g479, cage5, and bcspwr01 respectively.
\subsection{Estimating the Partition Function with Adaptive Rejection Sampling}\label{sec:adaptive_rej_sampling_estimate}
The number of accepted samples, $a$, is a random variable with expectation $E[a] = \sum_{i=1}^T \frac{Z}{Z^{UB}_i}$, where $Z^{UB}_i$ is the upper bound on the entire state space $\mathcal{S}$ in effect when the $i$-th sample is drawn. This gives the unbiased estimator $\hat{Z} = a/\left(\sum_{i=1}^T \frac{1}{Z^{UB}_i}\right)$ for the partition function. We use bootstrap techniques \cite{chernick2008bootstrap} to perform Monte Carlo simulations that yield high probability bounds on the partition function. We used 100,000 bootstrap samples to compute the bounds in Table~\ref{table:real_matrices}, which took 2--6 seconds using our python script; we note that this computation is extremely parallelizable. We recomputed each bootstrap upper and lower bound 10 times and always obtained the same values, except for one case where the log upper bound changed by 0.06, indicating that 100,000 samples is sufficient.
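A minimal Python sketch of this estimator, together with a simple percentile bootstrap standing in for the procedure we actually ran (function names are ours), is:
\begin{verbatim}
import random

def estimate_partition_function(accepted, ub_per_draw):
    """Unbiased estimate Z_hat = a / sum_i (1 / Z_i^UB).

    accepted:    list of 0/1 indicators, one per proposal draw (1 = accepted)
    ub_per_draw: upper bound on the full state space in effect at each draw
    """
    a = sum(accepted)
    return a / sum(1.0 / z for z in ub_per_draw)

def bootstrap_bounds(accepted, ub_per_draw, n_boot=100000, alpha=0.05):
    """Percentile-bootstrap bounds on the partition function."""
    T = len(accepted)
    estimates = []
    for _ in range(n_boot):
        idx = [random.randrange(T) for _ in range(T)]   # resample draws with replacement
        estimates.append(estimate_partition_function(
            [accepted[i] for i in idx], [ub_per_draw[i] for i in idx]))
    estimates.sort()
    lower = estimates[int((alpha / 2) * n_boot)]
    upper = estimates[int((1 - alpha / 2) * n_boot) - 1]
    return lower, upper
\end{verbatim}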
\subsection{Runtime Guarantee of \method}\label{sec:adapart_guarantee_runtime}
\citet{law2009approximately} prove that the runtime of Algorithm~\ref{alg:nesting_sample} is $O(n^{1.5 + 0.5/(2\gamma - 1)})$ per sample when using their upper bound on the permanent \citep[p.~33]{law2009approximately}, where $\gamma$ controls the matrix density. \method has the same guarantee with a minor modification to the presentation in Algorithm~\ref{alg:nesting_sample_partition}: the repeat loop is removed and, if the terminating condition $\mathit{ub} \leq \Zub_w(\mathcal{S})$ is not met after a single call to $\refine$, Algorithm~\ref{alg:nesting_sample} is called with the upper bound and fixed partitioning strategy from \citet{law2009approximately}, as shown in Algorithm~\ref{alg:adapart_runtime_guarantee}.
\begin{algorithm}[t!]
\caption{\method: Sample from a Normalized Weight Function using Adaptive Partitioning with Polynomial Runtime Guarantee for Dense Matrices}
\label{alg:adapart_runtime_guarantee}
\begin{flushleft}
\textbf{Inputs:}
\end{flushleft}
\begin{enumerate}
\item Non-empty state space $\mathcal{S}$
\item Unnormalized weight function $w: \mathcal{S} \rightarrow \mathbb{R}_{\geq 0}$
\item Family of upper bounds
$\Zub_w(S):\mathcal{D} \subseteq 2^{\mathcal{S}} \rightarrow \mathbb{R}_{\geq 0}$ for $w$ that are tight on single element subsets
\item Refinement function $\refine: \mathcal{P} \rightarrow 2^{\mathcal{P}}$ where $\mathcal{P}$ is the set of all partitions of $\mathcal{S}$
\end{enumerate}
\begin{flushleft}
\textbf{Output:} A sample $i \in \mathcal{S}$ distributed as $i \sim \frac{w(i)}{\sum_{j \in \mathcal{S}} w(j)}$.
\end{flushleft}
\begin{algorithmic}
\STATE \algorithmicif\ {$\mathcal{S}=\{a\}$} \algorithmicthen\ {Return $a$}
\STATE $\mathit{ub} \leftarrow \Zub_w(\mathcal{S})$
\STATE $\{\{S^i_1, \cdots, S^i_{\ell_i}\}\}_{i=1}^K \leftarrow \refine(\mathcal{S})$
\FORALL {$i \in \{1, \cdots, K\}$}
\STATE $\mathit{ub}_i \leftarrow \sum_{j=1}^{\ell_i} \Zub_w(S^i_j)$
\ENDFOR
\STATE $j \leftarrow \arg \min_i \mathit{ub}_i$
\STATE $P \leftarrow \{S^j_1, \cdots, S^j_{\ell_j}\}$
\STATE $\mathit{ub} \leftarrow \mathit{ub} - \Zub_w(\mathcal{S}) + \mathit{ub}_j$
\IF {$\mathit{ub} > \Zub_w(\mathcal{S})$}
\STATE Return the output of Algorithm~\ref{alg:nesting_sample} called on $\mathcal{S}$ and $w$ with the bound and fixed partition of \citep{law2009approximately}
\ELSE
\STATE Sample a subset $S_i \in P$ with prob.\
$\frac{\Zub_w(S_i)}{\Zub_w(\mathcal{S})}$, or sample $\mathit{slack}$ with prob.~$1-\frac{\mathit{ub}}{\Zub_w(\mathcal{S})}$
\ENDIF
\IF {$S_m \in P$ is sampled}
\STATE Recursively call \method($S_m,w,\Zub_w,\refine$)
\ELSE
\STATE Restart, i.e., call \method($\mathcal{S},w,\Zub_w,\refine$)
\ENDIF
\end{algorithmic}
\end{algorithm}
\subsection{Additional Experiments} \label{sec:additional_exp}
\begin{figure}[ht]
\centering
\includegraphics[width=8cm]{figures/permanent_estimate_VS_exact_plots_diagMatrix.png}
\caption{Accuracy results on randomly sampled $n \times n$ block diagonal matrices constructed as described earlier, with blocks of size $k=10$. We plot the exact permanent, our estimate, and our high probability bounds calculated from 10 samples for each matrix.
} \label{fig:diag_correctness}
\end{figure}
While calculating the permanent of a large matrix is generally intractable, it can be done efficiently for certain special types of matrices. One example is block diagonal matrices, where an $n \times n$ matrix is composed of $\lfloor \frac{n}{k} \rfloor$ blocks of size $k \times k$ and a single block of size $(n \bmod k) \times (n \bmod k)$ along the diagonal. Only elements within these diagonal blocks may be non-zero. The permanent of a block diagonal matrix is simply the product of the permanents of the blocks along the diagonal, which can be calculated efficiently whenever the block size is sufficiently small. We plot the exact permanent, our estimate, and our high probability bounds for randomly sampled block diagonal matrices of various sizes in Figure~\ref{fig:diag_correctness}.
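The following Python sketch shows how such test matrices can be generated and how their exact permanent factorizes over the diagonal blocks (function names are ours; the brute-force block permanent is only intended for small $k$):
\begin{verbatim}
import itertools
import numpy as np

def permanent_bruteforce(A):
    """Exact permanent by summing over all permutations; feasible only for small blocks."""
    n = A.shape[0]
    return sum(np.prod([A[i, s[i]] for i in range(n)])
               for s in itertools.permutations(range(n)))

def random_block_diagonal(n, k):
    """n x n matrix with uniform [0,1) diagonal blocks of size k (last block is n mod k)."""
    A = np.zeros((n, n))
    start = 0
    while start < n:
        b = min(k, n - start)
        A[start:start + b, start:start + b] = np.random.random_sample((b, b))
        start += b
    return A

def permanent_block_diagonal(A, k):
    """Permanent of a block diagonal matrix = product of the permanents of its blocks."""
    n, start, result = A.shape[0], 0, 1.0
    while start < n:
        b = min(k, n - start)
        result *= permanent_bruteforce(A[start:start + b, start:start + b])
        start += b
    return result
\end{verbatim}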
\section{Introduction}
The permanent of a square, non-negative matrix $A$ is a quantity with natural graph theoretic interpretations. If $A$ is interpreted as the adjacency matrix of a directed graph, the permanent corresponds to the sum of weights of its cycle covers. If the graph is bipartite, it corresponds to the sum of weights of its perfect matchings. The permanent has many applications in computer science and beyond. In target tracking applications \cite{uhlmann2004matrix,morelande2009joint,oh2009markov,hamid2015joint}, it is used to calculate the marginal probability of measurement-target associations. More broadly, it is widely used in graph theory and network science. The permanent also arises in statistical thermodynamics \cite{beichl1999approximating}.
Unfortunately, computing the permanent of a matrix is believed to be intractable in the worst-case, as the problem has been formally shown to be \#P-complete~\cite{valiant1979complexity}.
Surprisingly, a fully polynomial randomized approximation scheme (FPRAS) exists, meaning that it is theoretically possible to accurately approximate the permanent in polynomial time. However, this algorithm is not practical: it is difficult to implement and it scales as $O(n^7 \log^4 n)$. Ignoring coefficients, this is no better than exact calculation until matrices of size $40 \times 40$, which takes days to compute on a modern laptop.
The problems of sampling from an unnormalized distribution and calculating the distribution's normalization constant (or partition function) are closely related and interreducible.
An efficient solution to one problem leads to an efficient solution to the other~\cite{jerrum1986random,jerrum1996markov}. Computing the permanent of a matrix is a special instance of computing the partition function of an unnormalized distribution~\cite{wainwright2008graphical}. In this case the distribution is over $n!$ permutations, the matrix defines a weight for each permutation, and the permanent is the sum of these weights.
\subsection{Contributions} First, we present \method, a novel method for \textit{drawing exact samples from an unnormalized distribution using any algorithm that upper bounds its partition function.} We use these samples to estimate and bound the partition function with high probability. This is a generalization of prior work~\cite{huber2006exact,law2009approximately}, which showed that a specific bound on the matrix permanent nests, or satisfies a Matryoshka doll like property where the bound recursively fits within itself, for a fixed partitioning of the state space. Our novelty lies in adaptively choosing a partitioning of the state space, which (a) is suited to the particular distribution under consideration, and (b) allows us to use any upper bound or combination of bounds on the partition function, rather than one that can be proven \emph{a priori} to nest according to a fixed partitioning.
Second, we provide a complete instantiation of \method for sampling permutations with weights defined by a matrix, and correspondingly computing the permanent of that matrix. To this end, we identify and use an upper bound on the permanent with several desirable properties, including being computable in polynomial time and being tighter than the best known bound that provably nests.
Third, we empirically demonstrate that \method is both computationally efficient and practical for approximating the permanent of a variety of matrices, both randomly generated and from real world applications. We find that \method can be over 25x faster compared to prior work on sampling from and approximating the permanent. In the context of multi-target tracking, \method facilitates sampling from the optimal proposal distribution during particle filtering, which improves multi-target tracking performance while reducing the number of samples by an order of magnitude.
\section{Background}
The \emph{permanent} of an $n \times n$ non-negative matrix $A$ is defined as $\text{per}(A) = \sum_{\sigma \in S_n} \prod_{j=1}^n A(j, \sigma(j))$, where the sum is over all permutations $\sigma$ of $\{1,2,\dots,n\}$ and $S_n$ denotes the corresponding symmetric group. Let us define the weight function, or unnormalized probability, of a permutation as $w(\sigma) = \prod_{j=1}^n A(j, \sigma(j))$. The permanent can then be written as $\text{per}(A) = \sum_{\sigma \in S_n} w(\sigma)$, which is the partition function (normalization constant) of $w$, also denoted $Z_w$.
We are interested in sampling from the corresponding probability distribution over permutations $p(\sigma) = \frac{w(\sigma)}{\sum_{\sigma' \in S_n} w(\sigma')}$, or more generally from any unnormalized
distribution where the exact partition function is unknown. Instead, we will assume access to a function that \emph{upper bounds} the partition function, for instance an upper bound on the permanent. By verifying (at runtime) that this upper bound satisfies a natural `nesting' property w.r.t.\ a partition of the permutations, we will be able to guarantee exact samples from the underlying distribution. Note that verification is critical since the `nesting' property does not hold for upper bounds in general.
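For very small $n$, the permanent and exact samples from $p(\sigma)$ can be obtained by brute-force enumeration, which is useful purely as a ground-truth check. A minimal Python sketch (not the method proposed in this paper; names are ours):
\begin{verbatim}
import itertools
import random
import numpy as np

def permutation_weights(A):
    """All permutations sigma of {0,...,n-1} and their weights
    w(sigma) = prod_j A[j, sigma(j)]."""
    n = A.shape[0]
    perms = list(itertools.permutations(range(n)))
    weights = [float(np.prod([A[j, s[j]] for j in range(n)])) for s in perms]
    return perms, weights

def permanent_and_exact_sample(A):
    """per(A) = Z_w, together with one exact sample sigma ~ w(sigma)/per(A)."""
    perms, weights = permutation_weights(A)
    Z = sum(weights)                                    # the permanent / partition function
    sigma = random.choices(perms, weights=weights)[0]   # exact sample by full enumeration
    return Z, sigma
\end{verbatim}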
In the next few sections, we will consider the general case of any non-negative weight function $w$ over $N$ states (i.e., $w : \mathcal{S} \rightarrow \mathbb{R}_{\geq 0}, |\mathcal{S}| = N$) and its partition function $Z_w$, rather than specifically discussing weighted permutations of a matrix and its permanent. This is to simplify the discussion and present it in a general form. We will return to the specific case of the permanent later on.
\subsection{Nesting Bounds}
\citet{huber2006exact} and \citet{law2009approximately} have noted that upper bounds on the partition function that `nest' can be used to draw exact samples from a distribution defined by an arbitrary, non-negative weight function. For their method to work, the upper bound must nest according to some fixed partitioning $\mathcal{T}$ of the weight function's state space, as formalized in Definition~\ref{def:partition_tree}. In Definition~\ref{nesting_UB}, we state the properties that must hold for an upper bound to `nest' according to the partitioning $\mathcal{T}$.
\begin{definition}[Partition Tree]
\label{def:partition_tree}
Let $\mathcal{S}$ denote a finite state space. A \emph{partition tree} $\mathcal{T}$ for $\mathcal{S}$ is a tree where each node is associated with a non-empty subset of $\mathcal{S}$ such that:
\begin{enumerate}
\item The root of $\mathcal{T}$ is associated with $\mathcal{S}$.
\item If $\mathcal{S} = \{a\}$, the tree $\{a\}$ formed by a single node is a partition tree for $\mathcal{S}$.
\item Let $v_1, \cdots, v_k$ be the children of the root node of $\mathcal{T}$, and $S_1, \cdots, S_k$ be their associated subsets of $\mathcal{S}$. $\mathcal{T}$ is a partition tree if $S_i, S_j$ are pairwise disjoint, $\cup_i S_i = \mathcal{S}$, and for each $\ell$ the subtree rooted at $v_\ell$ is a partition tree for $S_\ell$.
\end{enumerate}
\end{definition}
\begin{definition}[Nesting Bounds]
\label{nesting_UB}
Let $w:\mathcal{S} \rightarrow \mathbb{R}_{\geq 0}$ be a non-negative weight function with partition function $Z_w$.
Let $\mathcal{T}$ be a partition tree for $\mathcal{S}$ and let $\mathcal{S}_\mathcal{T}$ be the set containing the subsets of $\mathcal{S}$ associated with each node in $\mathcal{T}$.
The function $\Zub_w(S) : \mathcal{S}_\mathcal{T} \rightarrow \mathbb{R}_{\geq 0}$ is a \emph{nesting upper bound} for $Z_w$ with respect to $\mathcal{T}$ if:
\begin{enumerate}
\item The bound is tight for all single element sets: $\Zub_w(\{i\}) = w(i)$ for all $i \in \mathcal{S}$.\footnote{This requirement can be relaxed by defining a new upper bounding function that returns $w(i)$ for single element sets and the original upper bound for multi-element sets.}
\item The bound `nests' at every internal node $v$ in $\mathcal{T}$. Let $S$ be the subset of $\mathcal{S}$ associated with $v$. Let $S_1, \cdots, S_k$ be the subsets associated with the children of $v$ in $\mathcal{T}$. Then the bound `nests' at $v$ if:
\begin{align}
\label{eq:nestingbound}
\sum_{\ell=1}^k \Zub_w (S_\ell) \leq \Zub_w (S).
\end{align}
\end{enumerate}
\end{definition}
\subsection{Rejection Sampling with a Fixed Partition}\label{sec:fixed_nesting}
Setting aside the practical difficulty of finding such a bound and partition, suppose we are \emph{given} a fixed partition tree $\mathcal{T}$ and a guarantee that $\Zub_w$ nests according to $\mathcal{T}$. Under these conditions, \citet{law2009approximately} proposed a rejection sampling method to perfectly sample an element, $i \sim \frac{w(i)}{\sum_{j \in \mathcal{S}} w(j)}$, from the normalized weight function (see Algorithm~\ref{alg:nesting_sample} in the Appendix). Algorithm~\ref{alg:nesting_sample} takes the form of a rejection sampler whose proposal distribution matches the true distribution precisely---except for the addition of \emph{slack} elements with joint probability mass equal to $\Zub_w(\mathcal{S}) - Z_w$. The algorithm recursively samples a partition of the state space until the sampled partition contains a single element or slack is sampled. Samples of slack are rejected and the
procedure is repeated until
a valid single element is returned.
According to Proposition~\ref{prop:rejection_sampling_correctness} (see Appendix),
Algorithm~\ref{alg:nesting_sample} yields exact samples from the desired target distribution. Since it performs rejection sampling using $\Zub_w(S)$ to construct a proposal, its efficiency depends on how close the proposal distribution is to the target distribution. In our case, this is governed by two factors: (a) the tightness of the (nesting) upper bound, $\Zub_w(S)$, and (b) the tree $\mathcal{T}$ used to partition the state space (in particular, it is desirable for every node in the tree to have a small number of children).
In what follows, we show how to substantially improve upon Algorithm~\ref{alg:nesting_sample} by utilizing tighter bounds (even if they don't nest \emph{a priori}) and iteratively checking for the nesting condition at runtime until it holds.
\begin{algorithm}[t!]
\caption{\method: Sample from a Weight Function using Adaptive Partitioning}
\footnotesize
\label{alg:nesting_sample_partition}
\begin{flushleft}
\textbf{Inputs:}
\end{flushleft}
\begin{enumerate}
\item Non-empty state space, $\mathcal{S}$
\item Unnormalized weight function, $w: \mathcal{S} \rightarrow \mathbb{R}_{\geq 0}$
\item Family of upper bounds for $w$,
$\Zub_w(S):\mathcal{D} \subseteq 2^{\mathcal{S}} \rightarrow \mathbb{R}_{\geq 0}$
\item Refinement function, $\refine: \mathcal{P} \rightarrow 2^{\mathcal{P}}$, where $\mathcal{P}$ is the set of all partitions of $\mathcal{S}$
\end{enumerate}
\begin{flushleft}
\textbf{Output:} A sample $i \in \mathcal{S}$ distributed as $i \sim \frac{w(i)}{\sum_{j \in \mathcal{S}} w(j)}$.
\end{flushleft}
\begin{algorithmic}
\STATE \algorithmicif\ {$\mathcal{S}=\{a\}$} \algorithmicthen\ {Return $a$}
\STATE $P = \{\mathcal{S}\}$ \ \ ; \ \ $\mathit{ub} \leftarrow \Zub_w(\mathcal{S})$
\REPEAT
\STATE Choose a subset $S \in P$ to refine: $\{\{S^i_1, \cdots, S^i_{\ell_i}\}\}_{i=1}^K \leftarrow \refine(S)$
\FORALL {$i \in \{1, \cdots, K\}$}
\STATE $\mathit{ub}_i \leftarrow \sum_{j=1}^{\ell_i} \Zub_w(S^i_j)$
\ENDFOR
\STATE $j \leftarrow \arg \min_i \mathit{ub}_i$ \ ; \ $P \leftarrow (P \setminus \{S\}) \cup \{S^j_1, \cdots, S^j_{\ell_j}\}$ \ ; \ $\mathit{ub} \leftarrow \mathit{ub} - \Zub_w(S) + \mathit{ub}_j$
\UNTIL {$\mathit{ub} \leq \Zub_w(\mathcal{S})$}
\STATE Sample a subset $S_i \in P$ with prob.\
$\frac{\Zub_w(S_i)}{\Zub_w(\mathcal{S})}$, or sample $\mathit{slack}$ with prob.~$1-\frac{\mathit{ub}}{\Zub_w(\mathcal{S})}$
\STATE \algorithmicif\ {$S_m \in P$ is sampled} \algorithmicthen\ {Recursively call \method($S_m,w,\Zub_w,\refine$)}
\STATE \algorithmicelse\ {Reject $\mathit{slack}$ and restart with the call to \method($\mathcal{S},w,\Zub_w,\refine$)}
\normalsize
\end{algorithmic}
\end{algorithm}
\section{Adaptive Partitioning}
A key limitation of using the approach in Algorithm~\ref{alg:nesting_sample} is that it is painstaking to prove \emph{a priori} that an upper bound nests for a yet unknown weight function with respect to a complete, fixed partition tree. Indeed, a key contribution of prior work \cite{huber2006exact,law2009approximately} has been to provide a proof that a particular upper bound nests for any weight function $w:\{1,\dots, N\} \rightarrow \mathbb{R}_{\geq 0}$ according to a fixed partition tree whose nodes all have a small number of children.
In contrast, we observe that it is nearly trivial to empirically verify \emph{a posteriori} whether an upper bound respects the nesting property for a particular weight function for a particular partition of a state space; that is, whether the condition in Eq.~(\ref{eq:nestingbound}) holds for a \emph{particular} choice of $S, S_1, \cdots, S_k$ and $\Zub_w$. This corresponds to checking whether the nesting property holds at an individual node of a partition tree. If it doesn't, we can refine the partition and repeat the empirical check.
We are guaranteed to succeed if we repeat until the partition contains only single elements, but empirically find that the check succeeds after a single call to $\refine$ for the upper bound we use.
The use of this adaptive partitioning strategy provides two notable advantages: (a) it frees us to choose \emph{any} upper bounding method, rather than one that can be proven to nest according to a fixed partition tree; and (b) we can customize---and indeed optimize---our partitioning strategy on a per weight function basis. Together, this leads to significant efficiency gains relative to Algorithm~\ref{alg:nesting_sample}.
Algorithm~\ref{alg:nesting_sample_partition} describes our proposed method, \method. It formalizes the adaptive, iterative partitioning strategy and also specifies how the partition tree can be created on-the-fly during sampling without instantiating unnecessary pieces.
In contrast to Algorithm~\ref{alg:nesting_sample}, \method does not take a fixed partition tree $\mathcal{T}$ as input. Further, it operates with \emph{any} (not necessarily nesting) upper bounding method for (subsets of) the state space of interest.
Figure~\ref{fig:adaptive_partitioning} illustrates the difference between our adaptive partitioning strategy and a fixed partitioning strategy. We represent the entire state space as a 2 dimensional square. The left square in Figure~\ref{fig:adaptive_partitioning} illustrates a fixed partition strategy, as used by \cite{law2009approximately}. Regardless of the specific weight function defined over the square, the square is always partitioned with alternating horizontal and vertical splits. To use this fixed partitioning, an upper bound must be proven to nest for any weight function. In contrast, our adaptive partitioning strategy is illustrated by the right square in Figure~\ref{fig:adaptive_partitioning}, where we choose horizontal or vertical splits based on the particular weight function. Note that slack is not shown and that the figure illustrates the complete partition trees.
\begin{figure}[hb]
\centering
\textbf{Fixed vs. Adaptive Partitioning}\par\medskip
\floatbox[{\capbeside\thisfloatsetup{capbesideposition={right,top},capbesidewidth=5.6cm}}]{figure}[\FBwidth]
{\caption{Binary partitioning of a square in the order black, blue, orange, green. Left: each subspace is split in half according to a predefined partitioning strategy, alternating vertical and horizontal splits. Right: each subspace is split in half, but the method of splitting (vertical or horizontal) is chosen adaptively with no predefined order. This figure represents tight upper bounds without slack.}\label{fig:adaptive_partitioning}}
{\includegraphics[width=8cm]{figures/adaptive_partitioning_figure}}
\end{figure}
\method uses a function $\refine$, which takes as input a subset $S$ of the state space $\mathcal{S}$, and outputs a collection of $K \geq 1$ different ways of partitioning $S$. We then use a heuristic to decide which one of these $K$ partitions to keep. In Figure~\ref{fig:adaptive_partitioning}, $\refine$ takes a rectangle as input and outputs 2 partitionings, the first splitting the rectangle in half horizontally and the second splitting it in half vertically.
\method works as follows. Given a non-negative weight function $w$ for a state space $\mathcal{S}$, we start with the trivial partition $P$ containing only one subset---all of $\mathcal{S}$. We then call $\refine$ on $\mathcal{S}$, which gives
$K \geq 1$ possible partitions of $\mathcal{S}$. For each of the $K$ possible partitions, we sum the upper bounds on each subset in the partition, denoting this sum as $\mathit{ub}_i$ for the $i$-th partition.
At this point, we perform a local optimization step and choose the partition $j$ with the tightest (i.e., smallest) upper bound, $\mathit{ub}_j$. The rest of the $K-1$ options for partitioning $\mathcal{S}$ are discarded at this point. The partition $P$ is `refined' by replacing $\mathcal{S}$ with the disjoint subsets forming the $j$-th partition of $\mathcal{S}$.
This process is repeated recursively, by calling $\refine$ on another subset $S \in P$, until the sum of upper bounds on all subsets in $P$ is less than the upper bound on $\mathcal{S}$. We now have a valid nesting partition $P$ of $\mathcal{S}$ and can perform rejection sampling. Similar to Algorithm~\ref{alg:nesting_sample}, we draw a random sample from $P \cup \{\mathit{slack}\}$, where each $S_i \in P$ is chosen with probability $\frac{\Zub_w(S_i)}{\Zub_w(\mathcal{S})}$, and $\mathit{slack}$ is chosen with the remaining probability. If subset $S_m \in P$ is sampled, we recursively call \method on $S_m$. If $\mathit{slack}$ is selected, we discard the computation and restart the entire process. The process stops when $S_m$ is a singleton set $\{a\}$, in which case $a$ is output as the sample.
\method can be seen as using a greedy approach for optimizing over possible partition trees of $\mathcal{S}$ w.r.t.\ $\Zub_w$. At every node, we partition in a way that minimizes the immediate or ``local'' slack (among the $K$ possible partitioning options). This approach may be sub-optimal due to its greedy nature, but we found it to be efficient and empirically effective. The efficiency of \method can be improved further by tightening upper bounds whenever slack is encountered, resulting in an \emph{adaptive\footnote{The use of `adaptive' here is to connect this section with the rejection sampling literature, and is unrelated to `adaptive' partitioning discussed earlier.} rejection sampler}~\citep{gilks1992adaptive} (please refer to Section~\ref{sec:adaptive_rejection_sampling} in the Appendix for further details).
\subsection{Estimating the Partition Function}
Armed with a method, \method, for drawing exact samples from a distribution defined by a non-negative weight function $w$ whose partition function $Z_w$ is unknown, we now outline a simple method for using these samples to estimate the partition function $Z_w$. The acceptance probability of the rejection sampler embedded in \method can be estimated as
\begin{equation}
\label{eq:p_hat}
\hat{p} = \frac{\text{accepted samples}}{\text{total samples}} \approx p = \frac{Z_w}{\Zub}
\end{equation}
which yields $\hat{p} \times \Zub$ as an unbiased estimator of $Z_w$. The number of accepted samples out of $T$ total samples is distributed as a Binomial random variable with parameter $p = \frac{Z_w}{\Zub}$.
The Clopper–Pearson method \cite{clopper1934use} gives tight, high probability bounds on the true acceptance probability, which in turn gives us high probability bounds on $Z_w$. Please refer to Section~\ref{sec:adaptive_rej_sampling_estimate} in the Appendix for the unbiased estimator of $Z_w$ when performing bound tightening as in an adaptive rejection sampler.
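As an illustration, the Clopper-Pearson interval for the acceptance probability, and hence for $Z_w$, can be computed with SciPy as follows (a sketch assuming the upper bound is held fixed across draws; the function name is ours):
\begin{verbatim}
from scipy.stats import beta

def clopper_pearson_bounds_on_Z(accepted, total, z_ub, alpha=0.05):
    """Bounds on Z_w holding with probability >= 1 - alpha.

    accepted: number of accepted samples
    total:    total number of proposal draws
    z_ub:     the upper bound Z_w^UB(S) used by the rejection sampler
    """
    p_lo = 0.0 if accepted == 0 else beta.ppf(alpha / 2, accepted, total - accepted + 1)
    p_hi = 1.0 if accepted == total else beta.ppf(1 - alpha / 2, accepted + 1, total - accepted)
    return p_lo * z_ub, p_hi * z_ub
\end{verbatim}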
\section{Adaptive Partitioning for the Permanent}
In order to use \method for approximating the permanent of a non-negative matrix $A$, we need to specify two pieces: (a) the $\refine$ method for partitioning any given subset $S$ of the permutations defined by $A$, and (b) a function that upper bounds the permanent of $A$, as well as any subset of the state space (of permutations) generated by $\refine$.
\subsection[Refine for Permutation Partitioning]{$\refine$ for Permutation Partitioning}
We implement the $\refine$ method for partitioning an $n \times n$ matrix into a set of $K = n$ different candidate partitions as follows. One candidate partition is created for each column $i \in \{1, \ldots, n\}$. The $i$-th candidate partition of the $n!$ permutations contains $n$ subsets, where the $j$-th subset contains all permutations with $\sigma^{-1}(i) = j$, i.e., those that match row $j$ to column $i$, for $j \in \{1, \ldots, n\}$. This is inspired by the fixed partition of \citet[pp.~9-10]{law2009approximately}, modified to choose the column for partitioning adaptively.
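The sketch below illustrates this choice at the top level of the recursion for a dense NumPy matrix: each candidate column is scored by the summed upper bound over its $n$ subsets, the column with the smallest sum is kept, and the nesting condition of Eq.~(\ref{eq:nestingbound}) can then be checked. Representing each subset by a scaled submatrix, as well as all function names, are our own illustrative choices.
\begin{verbatim}
import numpy as np

def candidate_partitions(A, upper_bound):
    """Score the n candidate partitions produced by refine for an n x n matrix A.
    Candidate i fixes which row is matched to column i; its j-th subset (all
    permutations with sigma^{-1}(i) = j) is represented by A[j, i] times the
    submatrix with row j and column i removed. `upper_bound` can be any upper
    bound on the permanent, e.g. the bound described in the next subsection."""
    n = A.shape[0]
    scores = []
    for i in range(n):
        total = 0.0
        for j in range(n):
            if A[j, i] == 0.0:
                continue                          # empty subset contributes nothing
            sub = np.delete(np.delete(A, j, axis=0), i, axis=1)
            total += A[j, i] * (upper_bound(sub) if sub.size else 1.0)
        scores.append((i, total))
    return scores

def greedy_refine_step(A, upper_bound):
    """Keep the candidate with the smallest summed bound and check the nesting condition."""
    col, best = min(candidate_partitions(A, upper_bound), key=lambda t: t[1])
    return col, best, best <= upper_bound(A)
\end{verbatim}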
\subsection{Upper Bounding the Permanent}
\label{sec:permanentUBs}
There exists a significant body of work on estimating and bounding the permanent (cf.~an overview by \citet{zhang2016update}), on characterizing the potential tightness of upper bounds \cite{gurvits2006hyperbolic,samorodnitsky2008upper},
and on improving upper bounds \cite{hwang1998upper,soules2000extending,soules2003new,soules2005permanental}.
We use an upper bound from \citet{soules2005permanental}, which is computed as
follows. Define $\gamma(0) = 0$ and $\gamma(k) = (k!)^{1/k}$ for $k \in \mathbb{Z}_{\geq 1}$.
Let $\delta(k) = \gamma(k) - \gamma(k - 1)$.
Given a matrix $A \in \mathbb{R}^{n \times n}$ with entries $A_{ij}$, sort the entries
of each row from largest to smallest to obtain $a^*_{ij}$, where $a^*_{i1}
\geq \dots \geq a^*_{in}$.
This gives the upper bound,
\begin{equation}\label{eq:soulesUB}
\mathrm{per}(A) \leq \prod_{i=1}^{n} \sum_{j=1}^{n} a^*_{ij} \delta(j).
\end{equation}
If the matrix entries are either 0 or 1, this bound reduces to the Minc-Br\`{e}gman bound \citep{minc1963upper,bregman1973some}.
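A direct Python sketch of this bound, assuming a dense non-negative NumPy array (the function name is ours):
\begin{verbatim}
import math
import numpy as np

def soules_upper_bound(A):
    """Upper bound on per(A): prod_i sum_j a*_{ij} * delta(j), where each row is
    sorted in decreasing order, gamma(k) = (k!)^(1/k), delta(k) = gamma(k) - gamma(k-1)."""
    n = A.shape[0]
    gamma = [0.0] + [math.factorial(k) ** (1.0 / k) for k in range(1, n + 1)]
    delta = np.array([gamma[k] - gamma[k - 1] for k in range(1, n + 1)])
    rows_sorted = -np.sort(-A, axis=1)          # each row sorted from largest to smallest
    return float(np.prod(rows_sorted @ delta))  # product over rows of the weighted row sums
\end{verbatim}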
This upper bound has many desirable properties. It can be efficiently computed in polynomial time, while tighter bounds (also given by \cite{soules2005permanental}) require solving an optimization problem. It is significantly tighter than the one used by \citet{law2009approximately}. This is advantageous because the runtime of \method scales linearly with the bound's tightness (via the acceptance probability of the rejection sampler).
Critically, we empirically find that this bound never requires a second call to $\refine$ in the repeat-until loop of \method. That is, in practice we always find at least one column that we can partition on to satisfy the nesting condition. This bounds the number of subsets in a partition to $n$ and avoids a potentially exponential explosion. This is fortuitous, but also interesting, because this bound (unlike the bound used by \citet{law2009approximately}) does not nest according to any predefined partition tree for all matrices.
\subsection{Dense Matrix Polynomial Runtime Guarantee}
The runtime of \method is bounded for dense matrices as stated in Proposition~\ref{prop:runtime_gaurantee}. Please refer to Section~\ref{sec:adapart_guarantee_runtime} in the Appendix for further details.
\begin{prop}\label{prop:runtime_gaurantee}
The runtime of \method is $O(n^{1.5 + 0.5/(2\gamma - 1)})$ for matrices with $\gamma n$ entries in every row and column that all take the maximum value of entries in the matrix, as shown in Algorithm~\ref{alg:adapart_runtime_guarantee}.
\end{prop}
\section{Related Work on Approximating the Permanent}
The fastest exact methods for calculating the permanent have computational complexity that is exponential in the matrix dimension \cite{ryser1963combinatorial,bax1996finite,balasubramanian1980combinatorics,glynn2013permanent}. This is to be expected, because computing the permanent has been shown to be \#P-complete \cite{valiant1979complexity}. Work to approximate the permanent has thus followed two parallel tracks, sampling based approaches and variational approaches.
The sampling line of research has achieved complete (theoretical) success. \citet{jerrum2004polynomial} proved the existence of a fully polynomial randomized approximation scheme (FPRAS) for approximating the permanent of a general non-negative matrix, which was an outstanding problem at the time \cite{broder1986hard,jerrum1989approximating}. An FPRAS is the best possible solution that can be reasonably hoped for since computing the permanent is \#P-complete. Unfortunately, the FPRAS presented by \cite{jerrum2004polynomial} has seen little, if any, practical use. The algorithm is both difficult to implement and slow with polynomial complexity of $O(n^{10})$, although this complexity was improved to $O(n^7 \log^4 n)$ by \citet{bezakova2006accelerating}.
In the variational line of research, the Bethe approximation of the permanent \citep{huang2009approximating,vontobel2014bethe} is guaranteed to be accurate within a factor of ${2}^{n/2}$ \citep{anari2018tight}. This approach uses belief propagation to minimize the Bethe free energy as a variational objective. A closely related approximation, using Sinkhorn scaling, is guaranteed to be accurate within a factor of $2^n$ \citep{gurvits2014bounds}. The difference between these approximations is discussed in \citet{vontobel2014bethe}. The Sinkhorn based approximation has been shown to converge in polynomial time \citep{linial2000deterministic}, although the authors of \citep{huang2009approximating} could not prove polynomial convergence for the Bethe approximation.
\citet{aaronson2014generalizing} build on \citep{gurvits2002deterministic} (a precursor to \citep{gurvits2014bounds}) to estimate the permanent in polynomial time within additive error that is exponential in the largest singular value of the matrix.
While these variational approaches are relatively computationally efficient, their bounds are still exponentially loose.
There is currently a gap between the two lines of research. The sampling line has found a theoretically ideal FPRAS which is unusable in practice. The variational line has developed algorithms which have been shown to be both theoretically and empirically efficient, but whose approximations to the permanent are exponentially loose, with only specific cases where the approximations are good \citep{huang2009approximating,chertkov2008belief,chertkov2010inference}. \citet{huber2006exact} and \citet{law2009approximately} began a new line of sampling research that aims to bridge this gap. They present a sampling method which is straightforward to implement and
has a polynomial runtime guarantee for dense matrices.
While there is no runtime guarantee for general matrices, their method is significantly faster than the FPRAS of \cite{jerrum2004polynomial} for dense matrices. In this paper we present a novel sampling algorithm that builds on the work of \cite{huber2006exact, law2009approximately}. We show that \method leads to significant empirical speedups, further closing the gap between the sampling and variational lines of research.
\section{Experiments}\label{sec:experiments}
In this section we show the empirical runtime scaling of \method as matrix size increases, test \method on real world matrices, compare \method with the algorithm from \citet{law2009approximately} for sampling from a fixed partition tree, and compare with variational approximations \cite{huang2009approximating,anari2018tight,gurvits2014bounds}. Please see Section~\ref{sec:additional_exp} in the Appendix for additional experiments verifying that the permanent empirically falls within our high probability bounds.
\subsection{Runtime Scaling and Comparison with Variational Approximations}
To compare the runtime performance of \method with \citet{law2009approximately} we generated random matrices of varying size. We generated matrices in two ways: by uniformly sampling every element from $[0,1)$ (referred to as `uniform' in plots) and by sampling $\lfloor \frac{n}{k} \rfloor$ blocks of size $k \times k$ and a single $(n \bmod k) \times (n \bmod k)$ block along the diagonal of an $n \times n$ matrix, with all other elements set to 0 (referred to as `block diagonal' in plots).
Runtime scaling is shown in Figure~\ref{fig:matrix_scaling}.
While \method is faster in both cases, we observe the largest time reduction for the more challenging, low density block diagonal matrices. For reference, a Cython implementation of Ryser's algorithm for exactly computing the permanent in exponential time \cite{ryser1963combinatorial} requires roughly 1.5 seconds for a $20 \times 20$ matrix.
\begin{figure}[h]
\centering
\begin{subfigure}{.5\linewidth}
\centering
\includegraphics[width=.99\linewidth]{figures/compareNoSearchNestingUB_soules_FastTri_1logN.png}
\end{subfigure}%
\begin{subfigure}{.5\linewidth}
\centering
\includegraphics[width=.99\linewidth]{figures/compareNoSearchNestingUB_soules_FastTriDiagMatrix_1logN.png}
\end{subfigure}
\caption{Log-log plot of mean runtime over 5 samples against $n$ (matrices are of size $n \times n$).}
\label{fig:matrix_scaling}
\end{figure}
To demonstrate that computing the permanent of these matrices is challenging for variational approaches, we plot the bounds obtained from the Bethe and Sinkhorn approximations in Figure~\ref{fig:variational_comparison}. Note that the gap between the variational lower and upper bounds is exponential in the matrix dimension $n$. Additionally, the upper bound from \citet{soules2005permanental} (that we use in \method) is frequently closer to the exact permanent than all variational bounds.
\begin{figure}[h]
\centering
\begin{subfigure}{.33\linewidth}
\centering
\includegraphics[width=.99\linewidth]{figures/loglog_bound_tightness_comparison_sinkhornSoules_uniformMatrix.png}
\end{subfigure}%
\begin{subfigure}{.33\linewidth}
\centering
\includegraphics[width=.99\linewidth]{figures/loglog_bound_tightness_comparison_sinkhornSoules_blockDiagk=10.png}
\end{subfigure}
\begin{subfigure}{.33\linewidth}
\centering
\includegraphics[width=.99\linewidth]{figures/loglog_bound_tightness_comparison_sinkhornSoules_blockDiagk=3.png}
\end{subfigure}
\caption{Bounds on the permanent given by the Bethe approximation \cite{huang2009approximating,anari2018tight}, the Sinkhorn approximation \cite{gurvits2014bounds}, and the upper bound we use from \citet{soules2005permanental}.}\label{fig:variational_comparison}
\end{figure}
\subsection{Matrices from Real-World Networks}
In Table~\ref{table:real_matrices} we show the performance of our method on real world problem instances. In the context of directed graphs, the permanent represents the sum of weights of \emph{cycle covers} (i.e., a set of disjoint directed cycles that together cover all vertices of the graph) and defines a distribution over cycle covers. Sampling cycle covers is then equivalent to sampling permutations from the distribution defined by the permanent. We sampled 10 cycle covers from distributions arising from graphs\footnote{Matrices available at http://networkrepository.com.} in the fields of cheminformatics, DNA electrophoresis, and power networks and report mean runtimes in Table~\ref{table:real_matrices}.
Among the matrices that did not time out, \method can sample cycle covers 12 - 25x faster than the baseline from \citet{law2009approximately}.
We used 10 samples from \method to compute bounds on the permanent that are tight within a factor of 5 and hold with probability $0.95$, shown in the \method sub-columns of Table~\ref{table:real_matrices} (we show the natural logarithm of all bounds). Note that we would get comparable bounds using the method from \cite{law2009approximately}, as it also produces exact samples. For comparison we compute variational bounds using the method of \cite{gurvits2014bounds}, shown in the `Sinkhorn' sub-columns. Each of these bounds was computed in less than 0.01 seconds, but they are generally orders of magnitude looser than our sampling bounds. Note that our sampling bounds can be tightened arbitrarily by using more samples at the cost of additional (parallel) computation, while the Sinkhorn bounds cannot be tightened. We do not show bounds given by the Bethe approximation because the MATLAB code from \cite{huang2009approximating} was very slow for matrices of this size and the C++ code does not handle matrices with 0 elements.
\begin{table}[h!]
\small
\setlength\tabcolsep{4px}
\centering
\begin{tabular}{@{}lcccccccc@{}}
\toprule
\multicolumn{3}{c}{Model Information} & \multicolumn{2}{c}{Sampling Runtime (sec.)} &
\multicolumn{2}{c}{Lower Bounds} &
\multicolumn{2}{c}{Upper Bounds}\\
\cmidrule(r){1-3} \cmidrule(r){4-5} \cmidrule(r){6-7} \cmidrule(r){8-9}
Network Name & Nodes & Edges & \method & \citet{law2009approximately} & \method & Sinkhorn & \method & Sinkhorn \\\midrule
ENZYMES-g192 & 31 & 132 & \textbf{4.2} & 52.9 & \textbf{19.3} & 17.0 & \textbf{20.8} & 38.5\\
ENZYMES-g230 & 32 & 136 & \textbf{3.3} & 55.5 & \textbf{19.8} & 17.2 & \textbf{21.3} & 39.4\\
ENZYMES-g479 & 28 & 98 & \textbf{1.8} & 45.1 & \textbf{12.3}& 10.9 & \textbf{13.8}& 30.3\\
cage5 & 37 & 196 & \textbf{6.1} & TIMEOUT & \textbf{-20.2}& -29.2 & \textbf{-18.7}& -3.6\\
bcspwr01 & 39 & 46 & \textbf{4.2} & 74.8 & \textbf{18.7}& 13.2 & \textbf{20.1}& 40.3\\
\bottomrule
\end{tabular}
\caption{Runtime comparison of our algorithm (\method) with the fixed partitioning algorithm from \citet{law2009approximately} and bound tightness comparison of \method with the Sinkhorn based variational bounds from \cite{gurvits2014bounds} (logarithm of bounds shown). Best values are in \textbf{bold}.}
\label{table:real_matrices}
\end{table}
\subsection{Multi-Target Tracking}
The connection between measurement association in multi-target tracking and the matrix permanent arises frequently in tracking literature \cite{uhlmann2004matrix,morelande2009joint,oh2009markov,hamid2015joint}. It is used to calculate the marginal probability that a measurement was produced by a specific target, summing over all other joint measurement-target associations in the association matrix.
We implemented a Rao-Blackwellized particle filter that uses \method to sample from the optimal proposal distribution and compute approximate importance weights (see Section~\ref{sec:mtt_overview} in the Appendix).
We evaluated the performance of our particle filter using synthetic multi-target tracking data. Independent target motion was simulated for 10 targets with linear Gaussian dynamics. Each target was subjected to a unique spring force. As baselines, we evaluated against a Rao-Blackwellized particle filter using a sequential proposal distribution \cite{sarkka2004rao} and against the standard multiple hypothesis tracking framework (MHT) \cite{reid1979algorithm,chong2018forty,kim2015multiple}. We ran each method with varying numbers of particles (or tracked hypotheses in the case of MHT) and plot the maximum log-likelihood of measurement associations among sampled particles in Figure~\ref{fig:tracking_performance}. The mean squared error over all inferred target locations (for the sampled particle with maximum log-likelihood) is also shown in Figure~\ref{fig:tracking_performance}. We see that by sampling from the optimal proposal distribution (blue x's in Figure~\ref{fig:tracking_performance}) we can find associations with larger log-likelihood and lower mean squared error than baseline methods while using an order of magnitude fewer samples (or hypotheses in the case of MHT).
\begin{figure}
\centering
\begin{subfigure}{.5\linewidth}
\centering
\includegraphics[width=.99\linewidth]{tracking_figures/all_log_likelihoods.png}
\end{subfigure}%
\begin{subfigure}{.5\linewidth}
\centering
\includegraphics[width=.99\linewidth]{tracking_figures/all_mean_squared_errors.png}
\end{subfigure}
\caption{Multi-target tracking performance comparison. Left: maximum log-likelihoods among sampled particles (or top-$k$ hypotheses for the MHT baseline). Right: mean squared error over all time steps and target locations.
}
\label{fig:tracking_performance}
\end{figure}
\section{Conclusion and Future Work}
Computing the permanent of a matrix is a fundamental problem in computer science. It has many applications, but exact computation is intractable in the worst case. Although a theoretically sound randomized algorithm exists for \emph{approximating} the permanent in polynomial time, it is impractical. We proposed a general approach, \method, for drawing exact samples from an unnormalized distribution. We used \method to construct high probability bounds on the permanent in provably polynomial time for dense matrices. We showed that \method is significantly faster than prior work on both dense and sparse matrices which are challenging for variational approaches.
Finally, we applied \method to the multi-target tracking problem and showed that we can improve tracking performance while using an order of magnitude fewer samples.
In future work, \method may be used to estimate general partition functions if a general upper bound \cite{wainwright2003tree,liu2011bounding,lou2017anytime} is found to nest with few calls to $\refine$. The matrix permanent specific implementation of \method may benefit from tighter upper bounds on the permanent. Particularly, a computationally efficient implementation of the Bethe upper bound \citep{huang2009approximating,anari2018tight} would yield improvements on sparse matrices (see Figure~\ref{fig:variational_comparison}), which could be useful for multi-target tracking where the association matrix is frequently sparse.
The `sharpened' version of the bound we use (Equation~\ref{eq:soulesUB}), also described in \cite{soules2005permanental}, would offer performance improvements if the `sharpening' optimization problem can be solved efficiently.
\subsection*{Acknowledgements}
Research supported by NSF (\#1651565, \#1522054, \#1733686), ONR (N00014-19-1-2145), AFOSR (FA9550-19-1-0024), and FLI.
\nocite{barvinok2016computing}
\subsection{Multi-Target Tracking Overview}\label{sec:mtt_overview}
The multi-target tracking problem is very similar to classical inference problems in hidden Markov models, requiring the estimation of an unobserved state given a time series of noisy measurements. The non-standard catch is that at each time step the observer is given one noisy measurement per target, but is not told which target produced which measurement. Brute forcing the problem is intractable because there are $K!$ potential associations when tracking $K$ targets. The connection between measurement association and the matrix permanent arises frequently in tracking literature \cite{uhlmann2004matrix,morelande2009joint,oh2009markov,hamid2015joint}, and its computational complexity is cited when discussing the difficulty of multi-target tracking.
As brief background, the computational complexity of multi-target tracking has led to many heuristic approximations, notably including multiple hypothesis tracking (MHT) \cite{reid1979algorithm,chong2018forty,kim2015multiple} and joint probabilistic data association (JPDA) \cite{fortmann1983sonar,hamid2015joint}. As heuristics, they can succumb to failure modes. JPDA is known to suffer from target coalescence where neighboring tracks merge \cite{blom2000probabilistic}.
Alternatively, sequential Monte Carlo methods (SMC or particle filters) provide an asymptotically unbiased method for sequentially sampling from arbitrarily complex distributions. When targets follow linear Gaussian dynamics, a Rao-Blackwellized particle filter may be used to sample the measurement associations allowing sufficient statistics for distributions over individual target states to be computed in closed form (by Kalman filtering, see Algorithm~\ref{alg:SIS} in the Appendix for further details) \cite{sarkka2004rao}. The proposal distribution is a primary limitation when using Monte Carlo methods. Ideally it should match the target distribution as closely as possible, but this generally makes it computationally unwieldy.
In the case of a Rao-Blackwellized particle filter for multi-target tracking, the optimal proposal distribution \citep[p.~199]{doucet2000sequential} that minimizes the variance of each importance weight is a distribution over permutations defined by a matrix permanent (please see Section~\ref{sec:optimal_proposal_distribution} in the Appendix for further details). We implemented a Rao-Blackwellized particle filter that uses the optimal proposal distribution. We evaluated its effectiveness against a Rao-Blackwellized particle filter using a sequential proposal distribution \cite{sarkka2004rao} and against the standard multiple hypothesis tracking framework (MHT) \cite{reid1979algorithm,chong2018forty,kim2015multiple}.
Our work can be extended to deal with a variable number of targets and clutter measurements using a matrix formulation similar to that in \cite{atanasov2014semantic}.
\subsection{Optimal Single-Target Bayesian Filtering}
In this section we give a brief review of the optimal Bayesian filter for single-target tracking; throughout this section, $\mathbf{x}_t$ and $\mathbf{y}_t$ denote the state and measurement of a single target, in contrast to the multi-target arrays used elsewhere. Consider a hidden Markov model with unobserved state $\mathbf{x}_t$ and measurement $\mathbf{y}_t$ at time $t$. The joint distribution over states and measurements factors as
\begin{equation*}
\Pr(\mathbf{x}_{1:T}, \mathbf{y}_{1:T}) = \Pr(\mathbf{x}_{1}) \Pr(\mathbf{y}_1 | \mathbf{x}_{1}) \prod_{t=2}^{T} \Pr(\mathbf{x}_{t} | \mathbf{x}_{t-1}) \Pr(\mathbf{y}_t | \mathbf{x}_{t})
\end{equation*}
by the Markov property. This factorization of the joint distribution facilitates Bayesian filtering, a recursive algorithm that maintains a fully Bayesian distribution over the hidden state $\mathbf{x}_{t}$ as each measurement $\mathbf{y}_{t}$ is sequentially observed. Given the prior distribution $p(\mathbf{x}_{1})$ over the initial state, the Bayesian filter consists of the update step\footnote{Where we have abused notation and the initial distribution is $\Pr(\mathbf{x}_{1} | \mathbf{y}_{1:0}) = \Pr(\mathbf{x}_{1})$.}
\begin{equation*}
\Pr(\mathbf{x}_{t} | \mathbf{y}_{1:t}) = \frac{\Pr(\mathbf{y}_{t} | \mathbf{x}_{t}) \Pr(\mathbf{x}_{t} | \mathbf{y}_{1:t-1})}{\int \Pr(\mathbf{y}_{t} | \mathbf{x}_{t}) \Pr(\mathbf{x}_{t} | \mathbf{y}_{1:t-1}) d\mathbf{x}_{t}}
\end{equation*}
and the prediction step
\begin{equation*}
\Pr(\mathbf{x}_{t} | \mathbf{y}_{1:t-1}) = \int \Pr(\mathbf{x}_{t} | \mathbf{x}_{t-1}) \Pr(\mathbf{x}_{t-1} | \mathbf{y}_{1:t-1}) d\mathbf{x}_{t-1}.
\end{equation*}
In the special case of linear Gaussian models, where the state transition and measurement processes are linear but corrupted with Gaussian noise, the above integrals can be computed analytically, giving closed form update and prediction steps. The distribution over the hidden states remains Gaussian and is given by the Kalman filter, with update step
\begin{equation} \label{eq:kf_update}
\Pr(\mathbf{x}_{t} | \mathbf{y}_{1:t}) = \mathcal{N}(\hat{\mathbf{x}}_{t|t}, \mathbf{P}_{t|t})
\end{equation}
and prediction step
\begin{equation} \label{eq:kf_predict}
\Pr(\mathbf{x}_{t} | \mathbf{y}_{1:t-1}) = \mathcal{N}(\hat{\mathbf{x}}_{t|t-1}, \mathbf{P}_{t|t-1}).
\end{equation}
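For concreteness, both steps can be written in a few lines of Python for a linear Gaussian model with transition matrix $F$, measurement matrix $H$, and noise covariances $Q$ and $R$; these symbols are not defined in the text above and are introduced here only for illustration.
\begin{verbatim}
import numpy as np

def kf_predict(x, P, F, Q):
    """Prediction step: p(x_t | y_{1:t-1}) = N(x_pred, P_pred) for
    x_t = F x_{t-1} + noise, noise ~ N(0, Q)."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def kf_update(x_pred, P_pred, y, H, R):
    """Update step: p(x_t | y_{1:t}) = N(x_upd, P_upd) for
    y_t = H x_t + noise, noise ~ N(0, R)."""
    S = H @ P_pred @ H.T + R                  # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_upd = x_pred + K @ (y - H @ x_pred)
    P_upd = (np.eye(P_pred.shape[0]) - K @ H) @ P_pred
    return x_upd, P_upd
\end{verbatim}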
\subsection{Optimal Multi-Target Bayesian Filtering}
In this section we give a brief review of the optimal Bayesian filter for the multi-target tracking problem with a fixed cardinality (fixed number of targets and measurements over time) \citep[pp.~485-486]{oh2009markov} and of its computational intractability. We then show how to perform sequential Monte Carlo (SMC) on the distribution defined by the optimal Bayesian filter; using the sampling strategy from the previous sections, we can employ the optimal proposal distribution.
Given standard multi-target tracking assumptions (independent target motion, measurement errors that are independent across targets, a uniform prior over measurement-target associations, and the Markov property), the joint distribution over all target states $X$, measurements $Y$, and measurement-target associations $\pi$ can be factored as\footnote{For a tracking sequence of $K$ targets over $T$ time steps, $X$ is an array where row $X_t = (X_t^1, \dots, X_t^K)$ represents the state of all targets at time $t$ and element $X_t^k$ is a vector representing the state of the $k^{\text{th}}$ target at time $t$. Likewise $Y$ is an array where row $Y_t = (Y_t^1, \dots, Y_t^K)$ represents all measurements at time $t$ and element $Y_t^k$ is a vector representing the $k^{\text{th}}$ measurement at time $t$. Measurement-target associations are represented by the array $\pi$ where the element $\pi_t \in S_k$ is a permutation of $\{1,2,\dots,k\}$ ($S_k$ denotes the symmetric group).}
\begin{equation}
\begin{aligned}
\Pr(X, Y, \pi) & = \Pr(X_1) \Pr(\pi_1) \Pr(Y_1 | X_1, \pi_1) \\
&\times \prod_{t=2}^T \Pr(X_t | X_{t-1}) \Pr(\pi_t) \Pr(Y_t | X_t, \pi_t). \\
\end{aligned}
\end{equation}
The optimal Bayesian filter for multi-target tracking is a recursive algorithm, similar to the standard Bayesian filter in the single target tracking setting, that maintains a distribution over the joint state of all targets by incorporating new measurement information as it is obtained. It is more complex than the single target Bayesian filter because it must deal with uncertainty in measurement-target association. As in the single target tracking setting the filter is composed of prediction and update steps. The prediction step is
\begin{equation}
\begin{aligned}
&\Pr(X_t | Y_{1:t-1}) \\
= &\sum_{\pi_{1:t-1}} \Pr(X_t | Y_{1:t-1}, \pi_{1:t-1}) \Pr(\pi_{1:t-1} | Y_{1:t-1}) \\
= &\frac{1}{k!^{t-1}}\sum_{\pi_{1:t-1}} \Pr(X_t | Y_{1:t-1}, \pi_{1:t-1}) \\
= &\frac{1}{k!^{t-1}}\sum_{\pi_{1:t-1}} \Pr((X_t^1, \dots, X_t^K) | Y_{1:t-1}, \pi_{1:t-1}) \\
= &\frac{1}{k!^{t-1}}\sum_{\pi_{1:t-1}} \int \dots \int \Pr(X_t^1 | X_{t-1}^1) \Pr(X_{t-1}^1 | Y_{1:t-1}, \pi_{1:t-1}) \\
& \times \dots \times \Pr(X_t^K | X_{t-1}^K) \Pr(X_{t-1}^K | Y_{1:t-1}, \pi_{1:t-1}) dX_{t-1}^1 \dots dX_{t-1}^K\\
= &\frac{1}{k!^{t-1}}\sum_{\pi_{1:t-1}} \int \Pr(X_t^1 | X_{t-1}^1) \Pr(X_{t-1}^1 | Y_{1:t-1}, \pi_{1:t-1}) dX_{t-1}^1 \\
& \times \dots \times \\
& \int \Pr(X_t^K | X_{t-1}^K) \Pr(X_{t-1}^K | Y_{1:t-1}, \pi_{1:t-1}) dX_{t-1}^K. \\
\end{aligned}
\end{equation}
The update step is
\begin{equation}
\begin{aligned}
&\Pr(X_t | Y_{1:t}) \\
= &\sum_{\pi_{1:t}} \Pr(X_t | Y_{1:t}, \pi_{1:t}) \Pr(\pi_{1:t} | Y_{1:t}) \\
= &\frac{1}{k!^{t}} \sum_{\pi_{1:t}} \Pr(X_t | Y_{1:t}, \pi_{1:t}) \\
= &\frac{1}{k!^{t}} \sum_{\pi_{1:t}} \frac{\Pr(Y_t | X_t, \pi_{t}) \Pr(X_t | Y_{1:t-1}, \pi_{1:t-1})}{\int \Pr(Y_t | X_t, \pi_{t}) \Pr(X_t | Y_{1:t-1}, \pi_{1:t-1}) dX_t} \\
\end{aligned}
\end{equation}
Unfortunately, the multi-target optimal Bayesian filtering steps outlined above are computationally intractable. Even in special cases where the integrals are tractable, such as for linear Gaussian models, summation over $k!^t$ states is required.
\subsection{Sequential Monte Carlo}
Sequential Monte Carlo (SMC) or particle filtering methods can be used to sample from sequential models \jonathan{cite}. These methods can be used to sample from the distribution defined by the optimal Bayesian multi-target filter. When target dynamics are linear Gaussian a Rao-Blackwellized particle filter can be used to sample measurement-target associations and compute sufficient statistics for individual target distributions in closed form \cite{sarkka2004rao}.
Pseudo-code for Rao-Blackwellized sequential importance sampling is given in Algorithm~\ref{alg:SIS}. We use $KF_u(\cdot)$ and $KF_p(\cdot)$ to denote calculation of the closed-form Kalman filter update and prediction steps given in Equations~\ref{eq:kf_update} and~\ref{eq:kf_predict}, respectively.
\tiny
\begin{algorithm}\captionsetup{labelfont={sc,bf}, labelsep=newline}
\caption{Rao-Blackwellized Sequential Importance Sampling} \label{alg:SIS}
\jonathan{check for variable collisions in this algorithm. should we have this pseudocode or just reference another paper?}
\begin{flushleft}
\textbf{Outputs:} $N$ importance samples $\pi_{1:T}^{(i)} \sim \Pr(\pi_{1:T} | Y_{1:T})$ and weights $w_T^{(i)}$ ($i \in \{1, 2, \dots, N\}$) with corresponding state estimates $\hat{X}_{1:T}^{(i)}$ and covariance matrices $P_{1:T}^{(i)}$. Note $\hat{X}_{1:T}^{(i)}$ and $P_{1:T}^{(i)}$ are both arrays; $\hat{X}_t^{k(i)}$ is the $k^{\text{th}}$ target's estimated state vector at time $t$ for sample $i$.
\end{flushleft}
\begin{algorithmic}[1]
\FOR[Update particle at time $t$]{t = 1, \dots, T}
\FOR[Sample particle i]{i = 1, \dots, N}
\STATE $\pi_t^{(i)} \sim q(\pi_t | \pi_{1:t-1}^{(i)}, Y_{1:t})$
\STATE $\pi_{1:t}^{(i)} \gets \left(\pi_{1:t-1}^{(i)}, \pi_{t}^{(i)}\right)$
\FOR[Iterate over targets]{k = 1, \dots, K}
\STATE $\hat{X}_{t|t}^{k (i)}$, $P_{t|t}^{k (i)} \gets KF_u \left( \hat{X}_{t|t-1}^{k (i)}, P_{t|t-1}^{k (i)}, Y_t^{\pi_t^{(i)}(k)} \right)$
\STATE $\hat{X}_{t+1|t}^{k (i)}$, $P_{t+1|t}^{k (i)} \gets KF_p \left( \hat{X}_{t|t}^{k (i)}, P_{t|t}^{k (i)} \right)$
\ENDFOR
\STATE $\hat{X}_{1:t}^{(i)} \gets \left(\hat{X}_{1:t-1}^{(i)}, \hat{X}_{t}^{(i)}\right)$
\STATE $P_{1:t}^{(i)} \gets \left(P_{1:t-1}^{(i)}, P_{t}^{(i)}\right)$
\STATE $w_t^{*(i)} \gets w_{t-1}^{*(i)} \frac{\prod_{k=1}^K P\left(Y_{1:t}^{\pi_t(k)} | \hat{X}_{t|t-1}^{k(i)}, P_{t|t-1}^{k(i)}\right)}{q(\pi_t | \pi_{1:t-1}^{(i)}, Y_{1:t})}$
\ENDFOR
\FOR[Normalize importance weights]{i = 1, \dots, N}
\STATE $\Tilde{w}_t^{(i)} \gets \frac{ w_t^{*(i)}}{\sum_{j=1}^N w_t^{*(j)}}$
\ENDFOR
\STATE \jonathan{resample if switch to SIR, otherwise mention elsewhere}
\ENDFOR
\end{algorithmic}
\end{algorithm}
\normalsize
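For concreteness, the following plaintext Python sketch implements the standard Kalman prediction and update steps that $KF_p(\cdot)$ and $KF_u(\cdot)$ denote for a single target; it is a minimal, non-secure illustration in which the motion and measurement matrices $F$, $Q$, $H$, $R$ are placeholders standing in for the linear Gaussian model assumed by the filter.
\begin{verbatim}
import numpy as np

def kf_predict(x, P, F, Q):
    # Prediction: propagate the mean and covariance through the motion model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def kf_update(x_pred, P_pred, y, H, R):
    # Update: fold the assigned measurement y into the predicted state.
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_upd = x_pred + K @ (y - H @ x_pred)
    P_upd = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_upd, P_upd
\end{verbatim}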
\subsection{Optimal Proposal Distribution} \label{sec:optimal_proposal_distribution}
While SMC methods are asymptotically unbiased, their performance depends on the quality of the proposal distribution. The optimal proposal distribution, the one that minimizes the variance of the importance weight $w_t^{*(i)}$ \citep[p.~199]{doucet2000sequential}, is $q(x_t | x_{1:t-1}^{(i)}, Y_{1:t}) = \Pr(x_t | x_{t-1}^{(i)}, Y_{t})$\jonathan{where i have just abused notation with $x_t$}. In our setting we have hidden variables $X$ and $\pi$, so we need to rewrite this as $q(X_t, \pi_t | X_{1:t-1}^{(i)}, \pi_{1:t-1}^{(i)}, Y_{1:t}) = \Pr(X_t, \pi_t | X_{t-1}^{(i)}, \pi_{t-1}^{(i)}, Y_{t}) = \Pr(X_t, \pi_t | X_{t-1}^{(i)}, Y_{t})$ (note that $X_t$ and $\pi_t$ are conditionally independent of $\pi_{t-1}^{(i)}$ given $X_{t-1}^{(i)}$). Using Rao-Blackwellization we avoid sampling $X_t$ and instead compute sufficient statistics (mean and covariance) in closed form, so the optimal proposal distribution is
\begin{equation} \label{eq:permanent_for_tracking}
\begin{aligned}
& q(\pi_t | X_{1:t-1}^{(i)}, \pi_{1:t-1}^{(i)}, Y_{1:t}) \\
= & \Pr(\pi_t | \hat{X}_{t|t-1}^{(i)}, P_{t|t-1}^{(i)}, \pi_{1:t-1}^{(i)}, Y_{1:t}) \\
= & \Pr(\pi_t | \hat{X}_{t|t-1}^{(i)}, P_{t|t-1}^{(i)}, Y_{1:t}) \\
= & \frac{\Pr(\pi_t, \hat{X}_{t|t-1}^{(i)}, P_{t|t-1}^{(i)}, Y_{1:t})}{\sum_{\pi_t}\Pr(\pi_t, \hat{X}_{t|t-1}^{(i)}, P_{t|t-1}^{(i)}, Y_{1:t})} \\
= & \frac{\Pr(Y_{1:t} | \pi_t, \hat{X}_{t|t-1}^{(i)}, P_{t|t-1}^{(i)}) \Pr(\hat{X}_{t|t-1}^{(i)}, P_{t|t-1}^{(i)} | \pi_t) \Pr(\pi_t)}{\sum_{\pi_t}\Pr(Y_{1:t} | \pi_t, \hat{X}_{t|t-1}^{(i)}, P_{t|t-1}^{(i)}) \Pr(\hat{X}_{t|t-1}^{(i)}, P_{t|t-1}^{(i)} | \pi_t) \Pr(\pi_t)} \\
= & \frac{\Pr(Y_{1:t} | \pi_t, \hat{X}_{t|t-1}^{(i)}, P_{t|t-1}^{(i)}) \Pr(\hat{X}_{t|t-1}^{(i)}, P_{t|t-1}^{(i)}) \Pr(\pi_t)}{\sum_{\pi_t}\Pr(Y_{1:t} | \pi_t, \hat{X}_{t|t-1}^{(i)}, P_{t|t-1}^{(i)}) \Pr(\hat{X}_{t|t-1}^{(i)}, P_{t|t-1}^{(i)}) \Pr(\pi_t)} \\
= & \frac{\Pr(Y_{1:t} | \pi_t, \hat{X}_{t|t-1}^{(i)}, P_{t|t-1}^{(i)}) / K!} {\sum_{\pi_t}\Pr(Y_{1:t} | \pi_t, \hat{X}_{t|t-1}^{(i)}, P_{t|t-1}^{(i)}) / K!} \\
= & \frac{\prod_{k=1}^K \Pr(Y_{1:t}^{\pi_t(k)} | \hat{X}_{t|t-1}^{k(i)}, P_{t|t-1}^{k(i)})} {\sum_{\pi_t}\prod_{k=1}^K \Pr(Y_{1:t}^{\pi_t(k)} | \hat{X}_{t|t-1}^{k(i)}, P_{t|t-1}^{k(i)})}. \\
\end{aligned}
\end{equation}
Note that the denominator of the final line in Equation~\ref{eq:permanent_for_tracking} is the permanent of the matrix $A$ whose entries are $a_{jk} = \Pr(Y_{1:t}^{j} | \hat{X}_{t|t-1}^{k(i)}, P_{t|t-1}^{k(i)})$. Using the machinery developed throughout this paper we can sample from the optimal proposal distribution and compute approximate importance weights \jonathan{say more, how do we deal with this exactly?}.
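To make the role of the permanent concrete, the sketch below (illustrative only, and feasible only for small $K$) builds the matrix $A$ from hypothetical Gaussian measurement likelihoods, evaluates the normalizing constant by brute-force enumeration over all $K!$ permutations, and draws one association $\pi_t$ from the resulting optimal proposal; the sampling machinery developed in the earlier sections would replace the brute-force enumeration in practice.
\begin{verbatim}
import itertools
import numpy as np
from scipy.stats import multivariate_normal

def sample_association(Y_t, X_pred, P_pred, H, R, rng=np.random):
    # A[j, k]: likelihood of measurement j under the predicted state of target k.
    K = len(X_pred)
    A = np.zeros((K, K))
    for j in range(K):
        for k in range(K):
            mean = H @ X_pred[k]
            cov = H @ P_pred[k] @ H.T + R
            A[j, k] = multivariate_normal.pdf(Y_t[j], mean=mean, cov=cov)
    perms = list(itertools.permutations(range(K)))
    weights = np.array([np.prod([A[pi[k], k] for k in range(K)]) for pi in perms])
    permanent = weights.sum()            # denominator: the permanent of A
    probs = weights / permanent
    idx = rng.choice(len(perms), p=probs)
    return perms[idx], probs[idx]
\end{verbatim}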
\section{Preliminary Studies and Problem Definitions}
In this section, we first present the preliminaries of the proposed study, and then introduce the design goals of the proposed system as the technical problem definitions.
\subsection{Gradient Boosting and LightGBM}
As an ensemble learning technique, the Gradient Boosting classifier trains and combines multiple weak prediction models, such as decision trees, for better generalization performance~\cite{friedman2001greedy,friedman2002stochastic}. The key idea of gradient boosting is to consider the procedure of boosting as optimization over certain cost functions~\cite{breiman1997arcing}. As a result, the gradient descent directions for loss function minimization are transformed into decision trees that are obtained sequentially to improve the classifier.
Given a training dataset, where each data point $(x,y)\sim\mathcal{D}$, the problem of gradient boosting is to learn a function $\widehat F$ from all possible hypotheses $\mathcal{H}$ while minimizing the expectation of loss over the distribution $\mathcal{D}$, such that
\begin{equation}
\widehat F = \underset{F\in\mathcal{H}}{\mathrm{argmin}} \underset{(x,y)\sim\mathcal{D}}{\mathbb{E}} L(y, F(x)),
\end{equation}
where $L(y, F(x))$ refers to the prediction loss of $F(x)$ with respect to the label $y$. More specifically, gradient boosting intends to minimize the loss function and obtain $\widehat F$ in a gradient descent manner, such that
\begin{equation}
F_{k+1}(x)\gets F_k(x) + \alpha_k\cdot h_k(x),
\end{equation}
where $F_k(x)$ refers to the model learned in the $k^{th}$ iteration, $h_k(x)$ refers to the decision tree learned as the descent direction at the $k^{th}$ iteration based on the already-obtained model $F_k(x)$ and the training dataset, $\alpha_k$ refers to the learning rate of gradient boosting, or namely the weight of $h_k(x)$ in the ensemble of learners, the operator $a+b$ refers to the ensemble of models $a$ and $b$, and $F_{k+1}(x)$ refers to the result of the $k^{th}$ iteration. More specifically, the computation of $h_k(x)$ mainly addresses $\left(y-F_k(x)\right)$ for $\forall (x,y)\in\mathcal{D}$, i.e., the error between the already-estimated model $F_k(x)$ and the label $y$ that corresponds to $x$ in the training dataset. Note that in the first iteration, the algorithm starts from $F_1(x)$, which is a vanilla decision tree learned from the dataset. After $K$ iterations in total, the algorithm obtains the final model $\widehat F(x)$ as $F_{K+1}(x)$.
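As an illustration of the recursion above, the following non-federated Python sketch fits each $h_k$ to the current residuals $y - F_k(x)$ with a shallow regression tree under a squared loss; the constant base model, tree depth, and learning rate are illustrative choices rather than the settings used by LightGBM.
\begin{verbatim}
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gbm_fit(X, y, K=100, alpha=0.1, max_depth=3):
    # F_1: a constant base model (a stand-in for the initial decision tree).
    base = y.mean()
    F = np.full(len(y), base)
    trees = []
    for _ in range(K):
        residual = y - F                                  # descent direction for squared loss
        h = DecisionTreeRegressor(max_depth=max_depth).fit(X, residual)
        F = F + alpha * h.predict(X)                      # F_{k+1} <- F_k + alpha_k * h_k
        trees.append(h)
    return base, trees

def gbm_predict(base, trees, X, alpha=0.1):
    return base + alpha * sum(h.predict(X) for h in trees)
\end{verbatim}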
Recently, gradient boosting classifiers have attracted further attention from both application and algorithmic perspectives. For example, they have won the KDDCup 2016~\cite{sandulescu2016predicting} and numerous other competitions such as those on Kaggle\footnote{https://medium.com/@gautam.karmakar/xgboost-model-to-win-kaggle-e12b35cd1aad}. Gradient boosting trees and their variants have been used as major baselines for a great number of classification/regression tasks with decent results, ranging from genetic data analytics to click-through prediction~\cite{nielsen2016tree}. In terms of algorithm implementation, XGBoost~\cite{chen2016xgboost} and LightGBM~\cite{ke2017lightgbm} have been proposed to further improve the performance of gradient boosting trees, where the two works follow similar gradient boosting mechanisms for decision tree training while making significant contributions to scalability and efficiency.
\subsection{Homomorphic Encryption Models}
To ensure security and privacy during computation, homomorphic encryption (HE) has been proposed as a set of operations that work on encrypted data while producing encrypted results; more importantly, the results obtained can be decrypted to match the ``true results'' of the corresponding operations~\cite{gentry2010computing,vaikuntanathan2011computing}. Homomorphic encryption contains multiple types of encryption schemes, such as partially homomorphic encryption (PHE), fully homomorphic encryption (FHE) and pre-fully homomorphic encryption (Pre-FHE), that can perform different classes of computations over encrypted data~\cite{armknecht2015guide}. The progress along these lines of research has been well surveyed in~\cite{halevi2017homomorphic}.
As early as 1978, the tentative idea of building a fully homomorphic encryption scheme was proposed just after the publishing of RSA algorithm~\cite{demillo1978foundations}. Thirty years, Gentry emph{et al.} in 2009 sketched the first fully homomorphic encryption scheme based on the lattice cryptography~\cite{gentry2009fully}. One year later, van Dijk \emph{et al.} presented the second fully homomorphic encryption scheme~\cite{van2010fully} based on Gentry's work, but did not rely on the use of ideal lattices. The second generation of FHE starts from 2011, there were some fundemental techniques developed by Zvika Brakerski \emph{et al.}~\cite{brakerski2014leveled,brakerski2014efficient}, where the homomorphic cryptosystems currently used are stemmed. Thanks to these innovations, the second generation of FHE tends to be much more efficient compared with the first generation, and be applied to a lot of applications.
Later, Gentry \emph{et al.} proposed a new technique for building fully homomorphic encryption schemes, namely GSW, which avoids the use of expensive ``relinearization'' computation in homomorphic multiplication~\cite{gentry2013homomorphic}. Brakerski \emph{et al.} observed that, for certain types of circuits, the GSW cryptosystem features an even slower growth rate of noise, and hence better efficiency and stronger security~\cite{brakerski2014lattice}.
As fully homomorphic encryption is computationally expensive, most practical secure systems have indeed been implemented in a partially homomorphic encryption fashion~\cite{halevi2017homomorphic}, where only parts of the computation are encrypted with homomorphic encryption. In this work, we hope to secure the computation and communication of federated learning through partially homomorphic encryption. Our proposed method uses ciphertexts to protect parts of the computations and communications in gradient boosting tree learning.
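To give a flavor of the additive homomorphism that a PHE scheme provides, the toy sketch below implements textbook Paillier encryption with tiny hard-coded primes; it is for illustration only (neither secure nor the SEAL-based operators used in our system), and shows that the product of two ciphertexts decrypts to the sum of the plaintexts.
\begin{verbatim}
import math, random

p, q = 101, 113                      # toy primes; real deployments use large keys
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1
mu = pow(lam, -1, n)                 # valid since L(g^lam mod n^2) = lam mod n when g = n + 1

def L(u):
    return (u - 1) // n

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(17), encrypt(25)
assert decrypt((c1 * c2) % n2) == 42  # Enc(17) * Enc(25) decrypts to 17 + 25
\end{verbatim}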
\subsection{Problems and Overall Design Goals}
In this work, we intend to design a novel federated gradient boosting tree classifier that can learn from view-separated data in a distributed manner while avoiding leakage of data privacy and security.
\textbf{The Federated Learning Problem -} Suppose two training datasets $\mathcal{D}_1$ and $\mathcal{D}_2$ are owned by two parties $\mathbb{A}$ and $\mathbb{B}$ respectively, who hope to collaboratively learn one model but do not trust each other. The schemas of the two datasets are $\mathcal{D}_1=(I; X; Y)$ and $\mathcal{D}_2=(I;X)$, where
\begin{itemize}
\item $I(\mathcal{D}_1)$ and $I(\mathcal{D}_2)$ refer to the identity sets of samples in the two datasets respectively. When $I(\mathcal{D}_1)\cap I(\mathcal{D}_2)\neq\emptyset$, it indicates that the two datasets share a subset of samples but with different views (i.e., features);
\item $X(\mathcal{D}_1)$ and $X(\mathcal{D}_2)$ refer to the feature sets of samples in the two datasets respectively. More specifically, we denote $X_i(\mathcal{D}_1)$ as the set of features of the $i^{th}$ sample in $\mathcal{D}_1$ and further denote $X_{i,j}(\mathcal{D}_1)$ as the $j^{th}$ feature of the $i^{th}$ sample in $\mathcal{D}_1$;
\item $Y(\mathcal{D}_1)$ refers to the label set of the data samples in $\mathcal{D}_1$. More specifically, for $\forall y\in Y(\mathcal{D}_1)$ we have $y\in\mathbb{R}^{|\mathcal{C}|}$, where $\mathcal{C}$ refers to the set of classes. As was mentioned, in the settings discussed in this paper, only one party monopolizes the label information.
\end{itemize}
With the above settings, our federated learning problem consists of two parts as follows.
\textbf{Training - } We need a dual-party learning algorithm with secure communication and computation schemes that can train tree-based Gradient Boosting Machines based on $\mathcal{D}_1$ and $\mathcal{D}_2$ with respect to the following restrictions:
\begin{itemize}
\item \textbf{Training Set Identity Protection}: $I(\mathcal{D}_1)\backslash I(\mathcal{D}_2)$ would not be obtained by the party $\mathbb{B}$, while the information about $I(\mathcal{D}_2)\backslash I(\mathcal{D}_1)$ would be prohibited from $\mathbb{A}$;
\item \textbf{Training Set Feature Security}: The inference procedure needs to avoid the leakage of $X(\mathcal{D}_1)$ and $Y(\mathcal{D}_1)$ to $\mathbb{B}$, and the $X(\mathcal{D}_2)$ to the party $\mathbb{A}$.
\end{itemize}
\textbf{Testing - } Given two testing datasets $\mathcal{D'}_1=(I;X)$ and $\mathcal{D'}_2=(I;X)$ owned by the two parties $\mathbb{A}$ and $\mathbb{B}$ respectively, where $I(\mathcal{D'}_1)\cap I(\mathcal{D'}_2)\neq\emptyset$, we need an online inference algorithm in which the party $\mathbb{A}$ can initialize the inference procedure using the identity of the sample to be predicted (i.e., $i'\in I(\mathcal{D'}_1)\cap I(\mathcal{D'}_2)$). The party $\mathbb{A}$ obtains the prediction result of the sample (i.e., $y_{i'}$) through the secure inference procedure, with respect to the following restrictions:
\begin{itemize}
\item \textbf{Testing Set Identity Protection}: $I(\mathcal{D'}_1)\backslash I(\mathcal{D'}_2)$ would not be obtained by the party $\mathbb{B}$, while the information about $I(\mathcal{D'}_2)\backslash I(\mathcal{D'}_1)$ would be prohibited from $\mathbb{A}$;
\item \textbf{Testing Set Feature Security}: The inference procedure needs to avoid the leakage of $X(\mathcal{D'}_1)$ and $y_{i'}$ to $\mathbb{B}$, and $X(\mathcal{D'}_2)$ to the party $\mathbb{A}$.
\end{itemize}
In our research, we intend to design PHE-based encryption schemes to protect the training and inference procedures (derived from LightGBM~\cite{ke2017lightgbm}) and meet the above security goals.
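For intuition, the following small sketch (with hypothetical column names and IDs) shows what the view-separated schema above looks like in practice: party $\mathbb{A}$ holds $(I;X;Y)$, party $\mathbb{B}$ holds $(I;X)$, and only the intersection of identities can be used for joint training; in \texttt{SecureGBM}\ this intersection is computed with a private set intersection protocol rather than in the clear.
\begin{verbatim}
import pandas as pd

# Party A: identities, its own feature view, and the (monopolized) labels.
D1 = pd.DataFrame({"id": [1, 2, 3, 5],
                   "income": [40, 55, 30, 70],
                   "label":  [0, 1, 0, 1]})
# Party B: identities and a different feature view, no labels.
D2 = pd.DataFrame({"id": [2, 3, 4, 5],
                   "loan_amount": [10, 5, 8, 20],
                   "num_houses":  [1, 0, 1, 2]})

shared_ids = sorted(set(D1["id"]) & set(D2["id"]))   # [2, 3, 5]
# IDs 1 and 4 must remain hidden from the opposite party.
\end{verbatim}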
\section{Discussion and Conclusion}
In this work, we present \texttt{SecureGBM}, a secure multi-party (re-)design of LightGBM~\cite{ke2017lightgbm}, where we assume the view (i.e., set of features) of the same group of samples has been split into two parts and owned by two parties separately. To collaboratively train a model while preserving the privacy of the two parties, a group of partially homomorphic encryption (PHE) computation models and multi-party computation protocols have been used to cover the key operators of distributed LightGBM learning and inference over the two parties. As the use of PHE and multi-party computation models causes huge computational and communication overhead, certain statistical acceleration strategies have been proposed to lower the cost of communication while securing the statistical accuracy of the learned model through stochastic approximation. With such statistical acceleration strategies, \texttt{SecureGBM}\ becomes more and more efficient, with a decreasing slowdown ratio, as the scale of the training datasets increases.
The experiments based on several large real-world datasets show that \texttt{SecureGBM}\ can achieve decent testing accuracy (i.e., AUC and F1-score) as good as vanilla LightGBM (based on the aggregated datasets from the two parties), using a training procedure with tolerable time consumption (5x$\sim$64x slowdown), without compromising data privacy. Furthermore, the ablation study that compares \texttt{SecureGBM}\ to the learners that use the single dataset from one party shows that such collaboration between the two parties can improve the accuracy.
\section{Experiments and Results}
In this section, we mainly report the experiments that evaluate \texttt{SecureGBM}, and compare the performance of \texttt{SecureGBM}\ with other baseline methods, including vanilla LightGBM, XGBoost, and other downstream classifiers.
\begin{table*}[]
\centering
\caption{Overall Classification AUC (\%) Comparison (N/A: During the experiments, LightGBM reported failure to train the model because the features of the given dataset are too sparse to learn from.)}
{
\begin{tabular}{r|cc||cc||cc} \hline
& \multicolumn{2}{c}{Sparse} & \multicolumn{2}{c}{Adult} & \multicolumn{2}{c}{Phishing} \\ \hline
Methods & Training & Testing & Training &
Testing & Training & Testing \\ \hline
\texttt{SecureGBM} & 93.227 & 66.220 & 92.465 & 90.080 & 62.855 & 61.823 \\ \hline
\multicolumn{7}{c}{Using the Aggregated Datasets from $\mathbb{A}$ and $\mathbb{B}$}\\\hline
LightGBM-($\mathbb{A}$,$\mathbb{B}$) & 96.102 & 68.528 & 92.199 & 90.145 & 67.994 & 63.430 \\
XGBoost-($\mathbb{A}$,$\mathbb{B}$) & 93.120 & 67.220 & 91.830 & 89.340 & 67.090 & 61.990 \\
LIBSVM-Linear-($\mathbb{A}$,$\mathbb{B}$) & 73.490 & 64.560 & 58.641 & 59.280 & 50.073 & 50.980 \\
LIBSVM-RBF-($\mathbb{A}$,$\mathbb{B}$) & 79.850 & 63.210 & 75.549 & 72.060 & 52.789 & 47.479 \\ \hline
\multicolumn{7}{c}{Using the Single Dataset at $\mathbb{A}$}\\\hline
LightGBM-$\mathbb{A}$ & N/A* & N/A* & 89.849 & 88.052 & 64.693 & 59.743 \\
XGBoost-$\mathbb{A}$ & 65.170 & 57.370 & 89.490 & 87.620 & 64.070 & 59.740 \\
LIBSVM-Linear-$\mathbb{A}$ & 52.360 & 50.675 & 66.293 & 34.347 & 50.007 & 50.489 \\
LIBSVM-RBF-$\mathbb{A}$ & 56.740 & 52.380 & 72.909 & 55.076 & 50.248 & 50.306 \\ \hline
\multicolumn{7}{c}{Using the Datasets that Aggregate Features from $\mathbb{B}$ and Labels from $\mathbb{A}$ (does not exist in the real case)}\\\hline
LightGBM-$\mathbb{B}$* & 96.102 & 68.528 & 85.708 & 84.587 & 62.396 & 58.929 \\
XGBoost-$\mathbb{B}$* & 93.190 & 67.390 & 85.700 & 85.410 & 61.720 & 58.420 \\
LIBSVM-Linear-$\mathbb{B}$* & 67.480 & 60.990 & 46.527 & 46.840 & 50.627 & 48.567 \\
LIBSVM-RBF-$\mathbb{B}$* & 78.230 & 64.880 & 56.927 & 74.987 & 50.336 & 50.415 \\ \hline
\end{tabular}}
\label{tab:overall}
\end{table*}
\subsection{Experimental Setups}
In this section, we present the datasets, baseline algorithms, as well as the experimental settings of our evaluation study.
\subsubsection{Datasets} In our study, we intend to evaluate \texttt{SecureGBM}\ using the following three datasets.
\begin{itemize}
\item \textbf{\emph{Sparse - }} This is a private dataset consisting of 11,371 users' de-anonymized financial records, where each sample has 8,922 \emph{extremely sparse} features and a binary label. These features are separately owned by two parties --- the bank holds 5,000 features and the real estate lender owns the remaining 3,922 features, while the bank owns the label information about bankruptcy. The goal of this dataset is to predict the bankruptcy of a user, incorporating the sparse features distributed over the two parties. As the dataset is quite large, we set the mini-batch size $b$ to 1\% of the overall training set. Note that the labels in Sparse are extremely imbalanced, with most samples negative.
\item \textbf{\emph{Adult - }} This is an open-access dataset consisting of 27,739 web pages' information, where each web page has 123 features and 1 binary label (whether the web page contains adult content). We randomly split the features into two sets, with 61 and 62 features respectively. As this dataset is quite small, we use the whole dataset for each iteration, i.e., $b=100\%$ of the overall training set.
\item \textbf{\emph{Phishing - }} This is an open-access dataset consisting of 92,651 web pages' information, where each web page has 116 features and 1 binary label (whether the web page carries phishing risk). We randomly split the features into two sets, each with 58 features. As this dataset is comparatively large, we use a mini-batch with $b=10\%$ of the overall training set.
\end{itemize}
\subsubsection{Baseline Algorithms with Settings} To understand the advantages of \texttt{SecureGBM}, we compare \texttt{SecureGBM}\ with the baseline algorithms below.
\begin{itemize}
\item \textbf{\emph{LightGBM - }} We consider the vanilla implementation of LightGBM as a key baseline algorithm, where we include two settings, LightGBM-$\mathbb{A}$ and LightGBM-$(\mathbb{A},\mathbb{B})$. LightGBM-$\mathbb{A}$ refers to the LightGBM trained using the features and labels in the dataset of $\mathbb{A}$ only, while LightGBM-$(\mathbb{A},\mathbb{B})$ refers to the vanilla distributed LightGBM trained using both datasets of $\mathbb{A}$ and $\mathbb{B}$, without encryption protection. Finally, we also include a baseline LightGBM-$\mathbb{B}*$ that might not exist in real-world settings --- LightGBM-$\mathbb{B}*$ aggregates the features from the party $\mathbb{B}$ and the label information from $\mathbb{A}$ as the training dataset. The comparison to LightGBM-$\mathbb{A}$ and LightGBM-$\mathbb{B}*$ shows the information gain of federated learning beyond a model trained using any single party.
\item \textbf{\emph{XGBoost - }} Following the above settings, we include two settings, XGBoost-$\mathbb{A}$ and XGBoost-$(\mathbb{A},\mathbb{B})$. XGBoost-$\mathbb{A}$ refers to the vanilla XGBoost trained using the features and labels in the dataset of $\mathbb{A}$ only, while XGBoost-$(\mathbb{A},\mathbb{B})$ refers to the vanilla XGBoost trained by aggregating both datasets from $\mathbb{A}$ and $\mathbb{B}$ in a centralized manner. Similarly, XGBoost-$\mathbb{B}*$, which is trained by aggregating the features from the party $\mathbb{B}$ and the label information from $\mathbb{A}$, is given as a baseline to demonstrate the information gain of collaboration.
\item \textbf{\emph{LIBSVM - }} Following the same settings, we include two settings, LIBSVM-$\mathbb{A}$ and LIBSVM-$(\mathbb{A},\mathbb{B})$. LIBSVM-$\mathbb{A}$ refers to the vanilla LIBSVM trained using the features and labels in the dataset of $\mathbb{A}$ only, while LIBSVM-$(\mathbb{A},\mathbb{B})$ refers to the vanilla LIBSVM trained using both datasets of $\mathbb{A}$ and $\mathbb{B}$ in a centralized manner. Similarly, LIBSVM-$\mathbb{B}*$, which is trained by aggregating the features from the party $\mathbb{B}$ and the label information from $\mathbb{A}$, is given as a baseline. More specifically, LIBSVM with an RBF kernel and a linear kernel is used here.
\end{itemize}
Note that in all experiments, 80\% of the samples are used for training and the remaining 20\% are reserved for testing. The training and testing sets are randomly selected for 5-fold cross validation. The default learning rate for LightGBM, XGBoost and \texttt{SecureGBM}\ is set to 0.1.
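For reference, the following is a minimal sketch of how a non-federated LightGBM-($\mathbb{A}$,$\mathbb{B}$) style baseline could be trained and evaluated under the settings above (80\%/20\% split, learning rate 0.1, 200 boosting iterations); the synthetic data stands in for the aggregated features of the two parties, and the exact preprocessing and hyper-parameters of our experiments may differ.
\begin{verbatim}
import lightgbm as lgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the aggregated features (parties A and B) and labels (party A).
X_ab, y = make_classification(n_samples=5000, n_features=120, random_state=0)

X_train, X_test, y_train, y_test = train_test_split(
    X_ab, y, test_size=0.2, random_state=0)

model = lgb.LGBMClassifier(learning_rate=0.1, n_estimators=200)
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print("testing AUC:", round(auc, 3))
\end{verbatim}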
\begin{figure*}
\centering
\subfloat[$t=5$ on Sparse]{\includegraphics[width=0.3\textwidth]{fig/data1_model1_all.pdf}}
\subfloat[$t=4$ on Sparse]{\includegraphics[width=0.3\textwidth]{fig/data1_model2_all.pdf}}
\subfloat[$t=3$ on Sparse]{\includegraphics[width=0.3\textwidth]{fig/data1_model3_all.pdf}}\\
\subfloat[$t=5$ on Adult]{\includegraphics[width=0.3\textwidth]{fig/data2_model1_all.pdf}}
\subfloat[$t=4$ on Adult]{\includegraphics[width=0.3\textwidth]{fig/data2_model2_all.pdf}}
\subfloat[$t=3$ on Adult]{\includegraphics[width=0.3\textwidth]{fig/data2_model3_all.pdf}}\\
\subfloat[$t=5$ on Phishing]{\includegraphics[width=0.3\textwidth]{fig/data3_model1_all.pdf}}
\subfloat[$t=4$ on Phishing]{\includegraphics[width=0.3\textwidth]{fig/data3_model2_all.pdf}}
\subfloat[$t=3$ on Phishing]{\includegraphics[width=0.3\textwidth]{fig/data3_model3_all.pdf}}
\caption{The Comparison of Training AUC and F1-Score per Iteration, \texttt{SecureGBM}\ vs. LightGBM-($\mathbb{A}$,$\mathbb{B}$): Labels in the Sparse dataset (personal bankruptcy status) are imbalanced with most samples negative; in this case, the learned models are usually imbalanced with very low recall and F1-score.}
\label{fig:runtime}
\end{figure*}
\begin{figure*}
\centering
\subfloat[$t=5$ on Sparse]{\includegraphics[width=0.3\textwidth]{fig/task_data1_model1_all.pdf}}
\subfloat[$t=4$ on Sparse]{\includegraphics[width=0.3\textwidth]{fig/task_data1_model2_all.pdf}}
\subfloat[$t=3$ on Sparse]{\includegraphics[width=0.3\textwidth]{fig/task_data1_model3_all.pdf}}\\
\subfloat[$t=5$ on Adult]{\includegraphics[width=0.3\textwidth]{fig/task_data2_model1_all.pdf}}
\subfloat[$t=4$ on Adult]{\includegraphics[width=0.3\textwidth]{fig/task_data2_model2_all.pdf}}
\subfloat[$t=3$ on Adult]{\includegraphics[width=0.3\textwidth]{fig/task_data2_model3_all.pdf}}\\
\subfloat[$t=5$ on Phishing]{\includegraphics[width=0.3\textwidth]{fig/task_data3_model1_all.pdf}}
\subfloat[$t=4$ on Phishing]{\includegraphics[width=0.3\textwidth]{fig/task_data3_model2_all.pdf}}
\subfloat[$t=3$ on Phishing]{\includegraphics[width=0.3\textwidth]{fig/task_data3_model3_all.pdf}}
\caption{The Comparison of Testing AUC and F1-Score per Iteration, \texttt{SecureGBM}\ vs. LightGBM-($\mathbb{A}$,$\mathbb{B}$): Labels in the Sparse dataset (personal bankruptcy status) are imbalanced with most samples negative; in this case, the learned models are usually imbalanced with very low recall and F1-score.}
\label{fig:testing}
\end{figure*}
\subsection{Overall Performance}
To evaluate the overall accuracy of \texttt{SecureGBM}, we measured the Area Under Curve (AUC) of the \texttt{SecureGBM}\ prediction results and compared it to the baseline algorithms. Table~\ref{tab:overall} presents the overall comparison of the AUC achieved by these classifiers under the same settings. \texttt{SecureGBM}, LightGBM, and LIBSVM have all been trained using 200 iterations, where we measure the AUC on the training and testing datasets.
The results in Table~\ref{tab:overall} show that, compared to LightGBM-($\mathbb{A}$,$\mathbb{B}$), \texttt{SecureGBM}\ achieved similar training and testing AUC under the same settings, while significantly outperforming LightGBM-$\mathbb{A}$, which used the single dataset of $\mathbb{A}$. Furthermore, under both settings, LightGBM performed better than XGBoost in terms of testing AUC. Similar observations can be obtained through the comparisons between LIBSVM and LightGBM. In short, it is reasonable to conclude that multi-party gradient boosting over two distributed datasets can significantly improve the performance and outperforms the one that uses the dataset from the party $\mathbb{A}$ only.
Furthermore, the comparison to LightGBM-$\mathbb{B}$* shows that, except for the experiments based on the Sparse dataset, \texttt{SecureGBM}\ significantly outperforms the one that aggregates the features from $\mathbb{B}$ and the labels from $\mathbb{A}$. For the Sparse dataset, one can easily observe that LightGBM-$\mathbb{A}$ failed to train the model when using the dataset of $\mathbb{A}$ only, as the features in $\mathbb{A}$ are too sparse to learn from. The comparison between LightGBM-($\mathbb{A}$,$\mathbb{B}$) and LightGBM-$\mathbb{B}$* further demonstrates that incorporating the features of $\mathbb{A}$ cannot improve the performance of LightGBM learning. For the same reason, \texttt{SecureGBM}\ performed slightly worse than LightGBM-$\mathbb{B}$*, with marginal testing AUC degradation.
We conclude that \texttt{SecureGBM}\ boosts the testing accuracy of learners from the party $\mathbb{A}$'s perspective, as (1) \texttt{SecureGBM}\ consistently outperforms LightGBM-$\mathbb{A}$, XGBoost-$\mathbb{A}$ and other learners that use the dataset of $\mathbb{A}$ only; and (2) the algorithms that aggregate datasets from both sides, such as LightGBM-($\mathbb{A}$, $\mathbb{B}$) or LightGBM-$\mathbb{B}$*, only perform marginally better than \texttt{SecureGBM}, while sacrificing the data privacy of the two parties.
\subsection{Case Studies}
To further understand the performance of \texttt{SecureGBM}, we traced the models obtained after each iteration and analyzed them from both accuracy and efficiency perspectives.
\subsubsection{Trends of Accuracy Improved per Iteration} Figure~\ref{fig:runtime} presents the comparison of training AUC and F1-score per iteration between \texttt{SecureGBM}\ and vanilla LightGBM. More specifically, we evaluated the performance for $t=3,\ 4,\ $ and $5$ (as LightGBM and \texttt{SecureGBM}\ use a leaf-wise growth strategy, $t$ is equivalent to the depth of each decision tree learned), where we clearly observed the error convergence of the model.
It has been observed that, in most cases, the training F1-score gradually improves with an increasing number of iterations. For the Sparse and Adult datasets, the overall trends of AUC and F1-score for LightGBM and \texttt{SecureGBM}\ were almost the same under all settings --- even though, for the Sparse dataset, \texttt{SecureGBM}\ only used 1\% of the training data as the mini-batch for the model update per iteration (while LightGBM used the whole set). Furthermore, even though \texttt{SecureGBM}\ did not perform as well as LightGBM on the Phishing dataset when $t=5$ and $4$, it still achieved decent performance comparable to LightGBM under the appropriate setting $t=3$. Such observations are quite encouraging: the use of mini-batches does not seem to hurt the learning progress of \texttt{SecureGBM}\ on these datasets, given appropriate settings. The curves of \texttt{SecureGBM}\ exhibit more jitter, due to the use of stochastic approximation for statistical acceleration. Similar observations can be made in the comparison of testing AUC and F1-score per iteration, shown in Figure~\ref{fig:testing}.
\begin{table}[]
\centering
\caption{Time Consumption per Iteration (seconds) on a Synthesized Dataset over Varying Number of Training Samples}
\normalsize{
\begin{tabular}{r|ccccc} \hline
\# Samples & 1,000 & 2,000 & 4,000 & 8,000 & 16,000 \\ \hline
\texttt{SecureGBM} & 10.20 & 10.50 & 11.4 & 11.75 & 14.30 \\
LightGBM & 0.16 & 0.32 & 0.70 & 1.41 & 2.45 \\
XGBoost & 0.51 & 0.73 & 1.09 & 2.20 & 4.30 \\ \hline
\end{tabular}}
\label{tab:time}
\vspace{-3mm}
\end{table}
\subsubsection{Time Consumption over Scale of Problem} To test the time consumption of \texttt{SecureGBM}\ over a varying scale of the problem, we synthesized a dataset based on Sparse with an increasing number of samples. The experimental results show that the \emph{time consumption per iteration} of the training procedure of \texttt{SecureGBM}\ is significantly longer than that of LightGBM and XGBoost.
We estimate the slowdown ratio of \texttt{SecureGBM}\ as the ratio between the time consumption per iteration of \texttt{SecureGBM}\ versus vanilla LightGBM. The slowdown ratio ranges from around 3x to 64x in this experiment. Furthermore, as the number of samples increases, the slowdown ratio of \texttt{SecureGBM}\ decreases significantly. For example, the ratio is around 63.75x when comparing \texttt{SecureGBM}\ to LightGBM with 1,000 training samples, while it is only 5.8x when compared to LightGBM with 16,000 training samples. It is not difficult to conclude that, thanks to the statistical acceleration strategies used, \texttt{SecureGBM}\ becomes relatively more efficient as the scale of the training set increases. The experiments are carried out using two workstations with 8-core Xeon CPUs and 16GB memory. The two machines are interconnected with a 100 MBit cable with 1.55ms latency.
\section{Introduction}
Multi-party federated learning~\cite{chen2006algebraic} has become one of the most popular machine learning paradigms thanks to the increasing trend of distributed data collection, storage and processing, as well as its benefits for privacy preservation in various kinds of applications. In most multi-party machine learning applications, ``no raw data sharing'' is an important pre-condition, where the model should be trained using all data stored on distributed machines (i.e., parties) without any cross-machine raw data sharing. A wide range of machine learning models and algorithms, including logistic regression~\cite{pathak2010multiparty}, sparse discriminant analysis~\cite{bian2017multi,tian2016communication}, and stochastic gradient-based learners~\cite{jayaraman2018distributed,xing2015petuum,ormandi2013gossip}, have been re-implemented on distributed computing, encryption, and privacy-preserving computation/communication platforms, so as to incorporate secure computation paradigms~\cite{chen2006algebraic}.
\textbf{Backgrounds and Related Work.} Existing efforts mainly focus on the implementation of efficient federated learning systems. Two parallel computation paradigms --- \emph{data-centric} and \emph{model-centric}~\cite{xing2015petuum,zhou2008large,dean2012large,tsianos2012consensus,smyth2009asynchronous,ormandi2013gossip} --- have been proposed. On each machine, the data-centric algorithm first estimates the same set of parameters (of the model) using the local data, then aggregates the estimated parameters via model averaging for global estimation. The model with aggregated parameters is considered the trained model based on the overall data (from multiple parties), and before aggregation these parameters can be estimated easily through a parallel computing structure. Meanwhile, model-centric algorithms require multiple machines to share the same loss function with ``updatable parameters'', and allow each machine to update the parameters in the loss function using its local data so as to minimize the loss. Because of this characteristic, model-centric algorithms commonly update the parameters sequentially, so the additional time consumption in updating can be a bottleneck for specific applications. Even so, compared with the data-centric methods, the model-centric methods usually achieve better performance, as they directly minimize the risk of the model~\cite{xing2015petuum,ormandi2013gossip}. To advance the distributed performance of linear classifiers, Tian et al.~\cite{tian2016communication} proposed a data-centric sparse linear discriminant analysis algorithm, which leverages the advantage of parallel computing.
In terms of multi-party collaboration, federated learning algorithms can be categorized into two types: \emph{data separation} and \emph{view separation}. For data separation, the algorithms are assumed to learn from distributed datasets, where each dataset consists of a subset of samples of the same type~\cite{xing2015petuum,bian2017multi,tian2016communication,jayaraman2018distributed}. For example, hospitals are usually required to collaboratively learn a model to predict patients' future diseases through classifying their electronic medical records, where all hospitals follow the same scheme to collect patients' medical records while every hospital covers only a part of the patients. In this case, federated learning improves the overall performance of learning by incorporating the private datasets owned by different parties, while ensuring privacy and security~\cite{xing2015petuum,ormandi2013gossip,jayaraman2018distributed}. While the existing data/computation parallelism mechanisms were usually motivated to improve federated learning under the data separation settings, federated learning systems under the \emph{view separation} settings are seldom considered.
\textbf{Our Work.} We mainly focus on the \emph{view separation} setting of federated learning, which assumes the data views of the same group of samples are separately held by multiple parties who do not trust each other. For example, the healthcare, finance, and insurance records of the same group of healthcare users are usually stored in the data centers of healthcare providers, banks, and insurance companies separately. The healthcare users usually need recommendations on healthcare insurance products according to their health and financial status, while healthcare insurance companies need to learn from large-scale healthcare data together with personal financial data to build such recommendation models. However, owing to law enforcement on data privacy, it is difficult for these three parties to share their data with each other and learn such a predictive model. In this case, federated learning under view separation models is highly desirable. In this work, we aim at view separation federated learning algorithms using Gradient Boosting Machines (GBM) as the classifiers. GBM is studied here as it can deliver decent prediction results and can be interpreted by human experts for joint data analytics and cross-institution data understanding purposes.
\textbf{Our Contributions.} We summarize the contributions of the proposed \texttt{SecureGBM}{} algorithm in the following aspects.
\begin{itemize}
\item Firstly, we study and formulate the federated learning problem under (semi-)homomorphic encryption settings, while assuming the data owned by the two parties are not sharable. More specifically, in this paper, we assume each party owns a unique private view of the same group of samples, while the labels of these samples are monopolized by one party. To the best of our knowledge, this is the first study on tree-based Gradient Boosting Machine classifiers addressing 1) the two-party security constraint, 2) efficient \emph{model-centric} learning with views separated by two parties but labels ``monopolized'' by one, and 3) the trade-off between statistical accuracy and the communication cost caused by statistical learning over encrypted communication.
\item Secondly, to achieve these goals, we design the \texttt{SecureGBM}{} algorithm, which re-implements the vanilla gradient-boosting tree based learners using semi-homomorphic encrypted computation operators offered by Microsoft SEAL. More specifically, \texttt{SecureGBM}\ first replaces the \emph{addition} and \emph{multiplication} operators used in gradient-boosting trees with secured operators based on semi-homomorphic computation; then \texttt{SecureGBM}\ re-designs a new set of \emph{binary comparison} operators (i.e., $\geq$ or $\leq$) that cannot be exploited by attackers to exactly recover the ground truth through searching with the comparison operators (e.g., binary search).
\item Furthermore, we observe the trade-off between statistical accuracy and communication cost for GBM training. One can use a stochastic gradient boosting mechanism to update the training model with a mini-batch of data per round, so that the communication cost per round is reduced in a quadratic manner. However, compared to vanilla gradient boosting machines, additional rounds of the training procedure might be needed by such stochastic gradient boosting to achieve equivalent performance. In this way, \texttt{SecureGBM}\ makes a trade-off between statistical accuracy and communication complexity using mini-batch sampling strategies, so as to enjoy low communication costs and an accelerated training procedure.
\item Finally, we evaluate \texttt{SecureGBM}{} using a large-scale real-world user profile dataset and several benchmark datasets for classification. The results show that \texttt{SecureGBM}{} can compete with the state of the art of Gradient Boosting Machines --- LightGBM, XGBoost and the vanilla re-implementation of LightGBM based on Microsoft SEAL.
\end{itemize}
The rest of the paper is organized as follows. In Section II, we review the gradient-boosting trees based classifiers and the implementation of LightGBM, then we introduce the problem formulation of our work. In Section III, we propose the framework of \texttt{SecureGBM}{} and present the details of \texttt{SecureGBM}{} algorithm. In Section IV, we evaluate the proposed algorithms using the real-world user profile dataset and the benchmark datasets. In addition, we compare \texttt{SecureGBM}{} with baseline centralized algorithms. In Section V, we introduce the related work and present a discussion. Finally, we conclude the paper in Section VI.
\section{Frameworks and Algorithms Design}
In this section, we present the framework design of \texttt{SecureGBM}\ with key algorithms used.
\subsection{Overall Framework Design}
The overall framework of \texttt{SecureGBM}\ consists of two parts --- training and inference, where, given the distributed datasets, the training procedure obtains the distributed parameter models for the tree classifiers of \texttt{SecureGBM}\ and the inference procedure predicts the labels using the indices of samples.
\subsubsection{Statistically Accelerated Training Procedure} Given the training datasets $\mathcal{D}_1$ and $\mathcal{D}_2$ distributed over the two parties, as shown in Figure~\ref{fig:train_proc}, the training procedure learns the ensemble of decision trees for Gradient Boosting Machines with distributed parameters in a secure and statistically efficient way. More specifically, the training procedure incorporates an accelerated iterative process with a specific initialization as follows.
\begin{itemize}
\item \textbf{Initialization - } The owner of $\mathcal{D}_1$ initializes the whole training procedure. First of all, \texttt{SecureGBM}\ performs a \emph{secure join operation} to align the shared samples stored in $\mathcal{D}_1$ and $\mathcal{D}_2$ through matching $I(\mathcal{D}_1)$ and $I(\mathcal{D}_2)$ under \emph{Partially Homomorphic Encryption (PHE)} settings. Later, based on the data in $\mathcal{D}_1$, including both the features $X(\mathcal{D}_1)$ and the labels $Y(\mathcal{D}_1)$, \texttt{SecureGBM}\ learns a decision tree $F_0$ as the base model, which only uses features in $X(\mathcal{D}_1)$, for initialization. Please see Section III.B.1 for the detailed design and implementation of the \emph{Secure Join Operation} for sample alignment based on PHE.
\end{itemize}
With the model initialized, \texttt{SecureGBM}\ takes a statistically accelerated iterative process for GBM training, where each iteration uses \emph{mini-batch sampling to reduce the computational/communication costs}~\cite{friedman2002stochastic}. Specifically, each iteration (e.g., the $k^{th}$ iteration, $k\geq 0$) consists of three steps (a plaintext sketch of this loop is given right after the list):
\begin{itemize}
\item \textbf{Batched Secure Inference - } Given the shared samples in $\mathcal{D}_1\cap\mathcal{D}_2$, \texttt{SecureGBM}\ first randomly selects a subset of samples $\mathcal{B}_k\subseteq (I(\mathcal{D}_1)\cap I(\mathcal{D}_2))$, where $|\mathcal{B}_k|=b$ and $b$ refers to the batch size. With the already-estimated model, denoted as $F_{k}$, \texttt{SecureGBM}\ then obtains the \emph{``soft prediction''} results of all samples in $\mathcal{B}_k$ through secure inference under PHE settings, such that
\begin{equation}
y_i^{k}\gets F_{k}(X_i(\mathcal{D}_1);X_i(\mathcal{D}_2)),\ \ \forall i \in \mathcal{B}_k,
\end{equation}
where $F_{k}(X_i(\mathcal{D}_1);X_i(\mathcal{D}_2))$ refers to the inference result based on the features from both datasets. Please see Section III.A.2 for the PHE-based implementation of the inference procedure.
\item \textbf{Residual Error Estimation - } As was mentioned, both the labels and the soft prediction results are $|\mathcal{C}|$-dimensional vectors, where $|\mathcal{C}|$ refers to the number of classes.
Then, \texttt{SecureGBM}\ estimates the residual errors of the current model using the \emph{cross-entropy} as follow
\begin{equation}
\varepsilon_i^{k}\gets \mathrm{H}(y_i^{k})+ \mathrm{D_{KL}}\left( y_i^{k}||Y_i(\mathcal{D}_1)\right),\ \ \forall i \in \mathcal{B}_k.
\end{equation}
Note that to ensure the security of the labels, $y_i^{k}$ and $\varepsilon_i^{k}$ for $\forall i\in \mathcal{B}_k$ are all stored at the owner of $\mathcal{D}_1$.
\item \textbf{Secure Tree Creation and Ensemble - } Given the estimated residual errors $\varepsilon_i^{k},\ \forall i \in\mathcal{B}_k$, \texttt{SecureGBM}\ boosts the learned model $F_{k}$ by creating a new decision tree $h_k$ that fits the residual errors $\varepsilon_i^{k}$ using the features of both datasets, $X(\mathcal{D}_1)\cup X(\mathcal{D}_2)$, in an additive manner. \texttt{SecureGBM}\ then ensembles $h_k$ with the current model $F_k$ and obtains the new model $F_{k+1}$ through gradient boosting~\cite{friedman2001greedy}. As was mentioned in Eq.~(2), a specific learning rate $\alpha$ is given as the weight for model ensembling. Please see Section III.B.2 for the detailed design and implementation of the \emph{Secure Splits Search Operation} for decision tree creation based on PHE.
\end{itemize}
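Stripping away the PHE machinery, the iterative process above reduces to stochastic gradient boosting over the aligned samples. The plaintext Python sketch below (binary classification, with the residuals taken as negative log-loss gradients rather than the exact cross-entropy expression above) illustrates the mini-batch sampling, residual estimation, and additive ensembling that the secure operators carry out across the two parties.
\begin{verbatim}
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def stochastic_gbm(X_joint, y, K=50, batch_frac=0.1, alpha=0.1, max_depth=3):
    # X_joint stands for the (conceptually) joined features X(D1) and X(D2);
    # y are the labels held by party A.  Everything here is in the clear.
    n = len(y)
    F = np.zeros(n)                       # raw scores of the current ensemble
    trees = []
    for _ in range(K):
        batch = np.random.choice(n, size=max(1, int(batch_frac * n)), replace=False)
        p = sigmoid(F[batch])             # "batched secure inference": soft predictions
        residual = y[batch] - p           # "residual error estimation" (log-loss gradient)
        h = DecisionTreeRegressor(max_depth=max_depth).fit(X_joint[batch], residual)
        F += alpha * h.predict(X_joint)   # "secure tree creation and ensemble"
        trees.append(h)
    return trees
\end{verbatim}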
\subsubsection{Inference Procedure via Execution Flow Compilation} Given the model learned after $K$ iterations, denoted as $F_{K}$, the inference component first compiles all the trees into a distributed secure execution flow, where the nodes of every decision tree are assigned to the corresponding parties respectively. As shown in Figure~\ref{fig:infer_proc}, all \emph{communications}, \emph{computations} and \emph{binary comparisons} are protected through SEAL-based homomorphic encryption schemes. With the secure distributed execution flow, given the index of a sample, e.g., $i'\in I(\mathcal{D'}_1)\cap I(\mathcal{D'}_2)$, \texttt{SecureGBM}\ runs the inference procedure over the execution flow. Please see Section III.B.3 for the design and implementation of the \emph{PHE-based Binary Comparison Operator}. Note that in our research, we assume the party $\mathbb{B}$ has no way to access the labels of training and testing samples, thus securing the monopoly of the label information on the party $\mathbb{A}$ side. To protect the label information through inference, the result of the PHE-based Binary Comparison Operator (i.e., true or false) is secured and cannot be deciphered by the party $\mathbb{A}$.
Furthermore, for a sample whose index is contained in $I(\mathcal{D'}_1)$ only, i.e., $i'\in I(\mathcal{D'}_1)\backslash I(\mathcal{D'}_2)$, \texttt{SecureGBM}\ first learns a comprehensive GBM (a LightGBM classifier) based on the dataset $\mathcal{D}_1$, using the features $X(\mathcal{D}_1)$ and labels $Y(\mathcal{D}_1)$ only. With such a model, \texttt{SecureGBM}\ makes predictions for the samples in $I(\mathcal{D'}_1)\backslash I(\mathcal{D'}_2)$ using only the features in $X(\mathcal{D'}_1)$.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{execution.pdf}
\caption{Execution Flow Compilation for the Inference Procedure}
\label{fig:infer_proc}
\end{figure}
Please note that the overall framework of \texttt{SecureGBM}\ is derived from the vanilla implementation of LightGBM~\cite{ke2017lightgbm}, while most of the calculation and optimization of gradient boosting trees~\cite{friedman2001greedy} is preserved under the coverage of partially homomorphic encryption.
\subsection{Key Algorithms}
Here, we present the detailed design of several key algorithms.
\subsubsection{Secure Join for Sample Alignment} To align the samples with identical indices across the two index sets $I(\mathcal{D}_1)$ and $I(\mathcal{D}_2)$ for training (and $I(\mathcal{D'}_1)$ and $I(\mathcal{D'}_2)$ for inference), \texttt{SecureGBM}\ intends to obtain the intersection between the two index sets in a private and secure manner. Specifically, we adopt the private set intersection algorithms proposed in~\cite{pinkas2014faster,pinkas2018scalable} to achieve this goal. The security and privacy enforcement of the proposed component relies heavily on a technique called Oblivious Transfer Extension (OT Extension), which supports fast and private data transfer for small payloads with limited overhead~\cite{keller2015actively}. The use of OT Extension avoids time-consuming error-correcting codes and instead accelerates secure data transmission by leveraging a pseudo-random code. We also tried other OT-Extension-based private set intersection algorithms, such as the one using a Bloom filter~\cite{dong2013private}, but their speed and scalability are not as good as~\cite{pinkas2014faster,pinkas2018scalable}.
\subsubsection{Secure Splits Search for Tree Creation} In each round of iteration (e.g., the $k^{th}$ iteration), \texttt{SecureGBM}\ needs to create a decision tree of size $t$ (here $t$ refers to the number of learned split nodes in the tree) to fit the ``residual errors'' of the already-estimated model $F_k$, so as to enable the gradient boosting mechanism. Specifically, we adopt a leaf-wise tree growth mechanism derived from LightGBM~\cite{ke2017lightgbm} to learn the tree, where \texttt{SecureGBM}\ vertically grows the tree using $t$ rounds of computation/communication in total, and always picks the leaf node with the maximal ``residual error'' reduction to grow.
In each round of computation for decision tree creation, for the party $\mathbb{A}$ owning the features $X(\mathcal{D}_1)$ and labels $Y(\mathcal{D}_1)$, \texttt{SecureGBM}\ searches for new splits using the raw data. Similar to vanilla LightGBM, \texttt{SecureGBM}\ selects the ``best'' split with the maximal residual error reduction over the samples in the mini-batch $\mathcal{B}_k$ as the candidate split on the party $\mathbb{A}$ side. This candidate split is then compared to the ``best'' split from the party $\mathbb{B}$ for the final split selection of this round.
On the other hand, for the party $\mathbb{B}$, which only holds the features $X(\mathcal{D}_2)$, \texttt{SecureGBM}\ first proposes a set of potential splits over $X(\mathcal{D}_2)$ (in a random or unsupervised manner), and sends the potential classification results of the samples in $\mathcal{B}_k$ under every proposed split to the party $\mathbb{A}$. Note that the potential classification results are formed into multiple sets of samples (or sample indices), which are categorized according to their results. Such sets are encrypted as private sets to protect the privacy of the label information from the party $\mathbb{B}$. Certain secure domain isolation is used to protect the splits~\cite{liu2015thwarting}.
Then, on the party $\mathbb{A}$ side, \texttt{SecureGBM}\ estimates the residual errors of each split proposed by the party $\mathbb{B}$ using their potential classification results. Specifically, \texttt{SecureGBM}\ leverages the aforementioned private set intersection algorithms to estimate the overlap between the sample sets categorized by the potential classification results and the true labels, in order to obtain the prediction results and estimate the accuracy~\cite{pinkas2014faster,pinkas2018scalable}. Finally, \texttt{SecureGBM}\ selects the split (the best split from $\mathbb{A}$ versus $\mathbb{B}$) that further lowers the residual error as the split of this round and ``adds'' it to the decision tree.
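As an illustration of the scoring that happens on the party $\mathbb{A}$ side, the plaintext sketch below picks, among a set of candidate thresholds for one feature, the split with the largest reduction of the squared residual error on the mini-batch; in \texttt{SecureGBM}\ the candidate partitions from $\mathbb{B}$ arrive as encrypted sample sets and the overlap statistics are obtained via private set intersection rather than direct array indexing.
\begin{verbatim}
import numpy as np

def best_split(feature_values, residuals, candidate_thresholds):
    # Score each candidate threshold by its reduction of the summed squared residual error.
    base_err = np.sum((residuals - residuals.mean()) ** 2)
    best_thr, best_gain = None, 0.0
    for thr in candidate_thresholds:
        left = residuals[feature_values <= thr]
        right = residuals[feature_values > thr]
        if len(left) == 0 or len(right) == 0:
            continue
        err = np.sum((left - left.mean()) ** 2) + np.sum((right - right.mean()) ** 2)
        if base_err - err > best_gain:
            best_thr, best_gain = thr, base_err - err
    return best_thr, best_gain
\end{verbatim}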
To further secure the privacy of the label information, the splits at the party $\mathbb{B}$ are deployed in an isolated domain, and the party $\mathbb{B}$ cannot obtain the decision making results of the splits. Please refer to the section below for the implementation of the binary comparison.
\subsubsection{Secure Binary Comparison for Decision Making} As was mentioned, \texttt{SecureGBM}\ operates an isolated domain on the machines of the party $\mathbb{B}$, where the computation and comparison criteria for decision making are all stored in the isolated domain, which is trusted by both parties. To further secure the label information and prediction results during inference, \texttt{SecureGBM}\ uses public keys generated by the party $\mathbb{A}$ to encrypt the decision making results from $\mathbb{B}$, while the public keys keep being updated per inference task.
\subsection{Discussion and Remarks}
In this section, we intend to justify the proposed framework and algorithms from the cost and learning perspectives.
\textbf{Communication Costs - } In \texttt{SecureGBM}, we replace the gradient boosting mechanism used by GBM with stochastic gradient boosting~\cite{friedman2002stochastic}, in order to accelerate the learning procedure by lowering the computational/communication costs per iteration. Let us denote $\mathcal{N} = |I(\mathcal{D}_1)\cap I(\mathcal{D}_2)|$ as the total number of aligned samples shared by $\mathcal{D}_1$ and $\mathcal{D}_2$, while the size of the batch for each iteration is defined as $b=|\mathcal{B}_k|$.
In each iteration of GBM and \texttt{SecureGBM}, a $t$-sized decision tree needs to be created through $t$ rounds of communication between the two parties. For each round of such communication, GBM and \texttt{SecureGBM}\ need to exchange data with payload sizes of $\mathcal{O}(\mathcal{N}^2)$ and $\mathcal{O}(b^2)$, respectively. In this way, the cost of communication per iteration is $\mathcal{O}(t\cdot\mathcal{N}^2)$ and $\mathcal{O}(t\cdot b^2)$ for GBM and \texttt{SecureGBM}, respectively.
\textbf{Statistical Acceleration - } To simplify our analysis of the statistical performance, we make a mild assumption that considers the learning procedures of LightGBM and \texttt{SecureGBM}\ as gradient descent (GD) and stochastic gradient descent (SGD) based loss minimization over a certain convex loss~\cite{friedman2001greedy,mason2000boosting,friedman2002stochastic}. Under mild convexity and smoothness conditions, GD and SGD converge to the minimum of the loss function at error convergence rates~\cite{shapiro1996convergence,shalev2009stochastic,shamir2013stochastic} of $\mathcal{O}(1/k)$ and $\mathcal{O}(1/\sqrt{k})$ respectively, where $k$ denotes the number of iterations. More discussion can be found in~\cite{mason2000boosting}.
While the costs per iteration are $\mathcal{O}(t\cdot\mathcal{N}^2)$ and $\mathcal{O}(t\cdot b^2)$ respectively, we can roughly conclude that a trade-off exists between statistical performance and communication complexity for \texttt{SecureGBM}\ training.
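Under these (admittedly rough) rates, reaching a target error $\epsilon$ takes on the order of $1/\epsilon$ iterations for GD but $1/\epsilon^2$ iterations for SGD, so a back-of-envelope comparison of the total communication is
\begin{equation*}
\underbrace{\mathcal{O}\!\left(\frac{t\cdot\mathcal{N}^2}{\epsilon}\right)}_{\text{full batches}}
\quad\text{versus}\quad
\underbrace{\mathcal{O}\!\left(\frac{t\cdot b^2}{\epsilon^2}\right)}_{\text{mini-batches}},
\end{equation*}
which suggests that the mini-batch strategy is favorable roughly whenever $b^2/\mathcal{N}^2 \ll \epsilon$, i.e., when the batch is small relative to the number of aligned samples and only moderate accuracy is required.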
\section{Related Work}
\section{Preliminary Studies and Problem Definitions}
In this section, we first present the preliminary studies of the proposed study, then introduce the design goals for the proposed systems as the technical problem definitions.
\subsection{Gradient Boosting and LightGBM}
As an ensemble learning technique, the Gradient Boosting classifier trains and combines multiple weak prediction models, such as decision trees, for better generalization performance~\cite{friedman2001greedy,friedman2002stochastic}. The key idea of gradient boosting is to consider the procedure of boosting as the optimization over certain cost functions~\cite{breiman1997arcing}. As the result, the gradient descent directions for the loss function minimization can be transformed into the decision trees that were obtained sequentially to improve the classifier.
Given a training dataset, where each data point $(x,y)\sim\mathcal{D}$, the problem of gradient boosting is to learn a function $\widehat F$ from all possible hypotheses $\mathcal{H}$ while minimizing the expectation of loss over the distribution $\mathcal{D}$, such that
\begin{equation}
\widehat F = \underset{F\in\mathcal{H}}{\mathrm{argmin}} \underset{(x,y)\sim\mathcal{D}}{\mathbb{E}} L(y, F(x)),
\end{equation}
where $L(y, F(x))$ refers to the prediction loss of $F(x)$ to the label $y$. More specific, the gradient boosting intends to minimize the loss function and obtain $\widehat F$ in a gradient descent way, such that
\begin{equation}
F_{k+1}(x)\gets F_k(x) + \alpha_k\cdot h_k(x),
\end{equation}
where $F_k(x)$ refers to the learned model in the $k^{th}$ iteration, $h_k(x)$ refers to the decision tree learned as the descent direction at the $k^{th}$ iteration based on the model already obtained $F_k(x)$ and the training dataset, $\alpha_k$ refers to the learning rate of gradient boosting or namely the weight of $h_k(x)$ in the ensemble of learners, the operator $a+b$ refers to the ensemble of $a$ and $b$ models, and $F_{k+1}(x)$ refers to the results of the $k^{th}$ iteration. More specific, the computation of $h_k(x)$ majorly address $\left(y-F_k(x)\right)$ for $\forall (x,y)\in\mathcal{D}$, i.e.,the error between the model $f_k(x)$ that is already estimated and the label $y$ that corresponds to $x$ in training dataset. Note that in the first iteration, the algorithm starts from $F_1(x)$ which is a vanilla decision tree learned from the dataset. With totally $K$ iteration, the algorithm obtains the final model $\widehat F(x)$ as the $F_{K+1}$.
Recently, gradient boosting classifiers have attracted further attention from both application and algorithmic perspectives. For example, they have won KDDCup 2016~\cite{sandulescu2016predicting} and many other competitions, such as those hosted on Kaggle\footnote{https://medium.com/@gautam.karmakar/xgboost-model-to-win-kaggle-e12b35cd1aad}. Gradient boosting trees and their variants have served as major baselines for a great number of classification/regression tasks with decent results, ranging from genetic data analytics to click-through prediction~\cite{nielsen2016tree}. In terms of algorithm implementation, XGBoost~\cite{chen2016xgboost} and LightGBM~\cite{ke2017lightgbm} have been proposed to further improve the performance of gradient boosting trees; the two works follow similar gradient boosting mechanisms for decision tree training while making significant contributions to scalability and efficiency.
\subsection{Homomorphic Encryption Models}
To ensure security and privacy during computation, homomorphic encryption (HE) has been proposed as a set of operations that work directly on encrypted data and produce encrypted results. More importantly, the results obtained can be decrypted to match the ``true results'' of the corresponding operations on plaintexts~\cite{gentry2010computing,vaikuntanathan2011computing}. Homomorphic encryption comprises multiple types of encryption schemes, such as partially homomorphic encryption (PHE), fully homomorphic encryption (FHE) and pre-fully homomorphic encryption (Pre-FHE), that can perform different classes of computations over encrypted data~\cite{armknecht2015guide}. The progress along these lines of research has been well surveyed in~\cite{halevi2017homomorphic}.
As early as 1978, the tentative idea of building a fully homomorphic encryption scheme was proposed, just after the publication of the RSA algorithm~\cite{demillo1978foundations}. Thirty years later, in 2009, Gentry \emph{et al.} sketched the first fully homomorphic encryption scheme based on lattice cryptography~\cite{gentry2009fully}. One year later, van Dijk \emph{et al.} presented the second fully homomorphic encryption scheme~\cite{van2010fully}, which built on Gentry's work but did not rely on ideal lattices. The second generation of FHE started in 2011 with several fundamental techniques developed by Zvika Brakerski \emph{et al.}~\cite{brakerski2014leveled,brakerski2014efficient}, from which the homomorphic cryptosystems currently in use stem. Thanks to these innovations, the second generation of FHE is much more efficient than the first generation and has been applied in many applications.
Later, Gentry \emph{et al.} proposed a new technique for building fully homomorphic encryption schemes, namely GSW, which avoids the use of expensive ``relinearization'' computation in homomorphic multiplication~\cite{gentry2013homomorphic}. Brakerski \emph{et al.} observed that, for certain types of circuits, the GSW cryptosystem features an even slower growth rate of noise, and hence better efficiency and stronger security~\cite{brakerski2014lattice}.
Since fully homomorphic encryption is computationally expensive, most practical secure systems are in fact implemented in a partially homomorphic encryption fashion~\cite{halevi2017homomorphic}, where only parts of the computation are protected by homomorphic encryption. In this work, we hope to secure the computation and communication of federated learning through partially homomorphic encryption. Our proposed method uses ciphertexts to protect parts of the computations and communications in gradient boosting tree learning.
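As a concrete illustration of a partially homomorphic scheme, the following sketch uses the open-source python-paillier library, whose ciphertexts support addition with other ciphertexts and multiplication by plaintext scalars. This is only an illustrative stand-in chosen for brevity; \texttt{SecureGBM}\ itself is built on Microsoft SEAL rather than Paillier.
\begin{verbatim}
# Additively homomorphic (Paillier) encryption, shown for illustration only;
# it is not the SEAL library used by SecureGBM.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

a, b = 3.5, 4.25
enc_a = public_key.encrypt(a)
enc_b = public_key.encrypt(b)

enc_sum = enc_a + enc_b     # addition of two ciphertexts
enc_scaled = enc_a * 2      # multiplication of a ciphertext by a plaintext

assert abs(private_key.decrypt(enc_sum) - (a + b)) < 1e-9
assert abs(private_key.decrypt(enc_scaled) - 2 * a) < 1e-9
\end{verbatim}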
\subsection{Problems and Overall Design Goals}
In this work, we intend to design a novel federated gradient boosting tree classifier that can learn from view-separated data in a distributed manner while avoiding leakage of private data.
\textbf{The Federated Learning Problem -} Suppose two training datasets $\mathcal{D}_1$ and $\mathcal{D}_2$ are owned by two parties $\mathbb{A}$ and $\mathbb{B}$ respectively, who hope to collaboratively learn one model but do not trust each other (a toy illustration of this data layout is given after the list below). The schemas of the two datasets are $\mathcal{D}_1=(I; X; Y)$ and $\mathcal{D}_2=(I;X)$, where
\begin{itemize}
\item $I(\mathcal{D}_1)$ and $I(\mathcal{D}_2)$ refer to the identity sets of samples in the two datasets respectively. When $I(\mathcal{D}_1)\cap I(\mathcal{D}_2)\neq\emptyset$, it indicates that the two datasets share a subset of samples but with different views (i.e., features);
\item $X(\mathcal{D}_1)$ and $X(\mathcal{D}_2)$ refer to the feature sets of samples in the two datasets respectively. More specifically, we denote by $X_i(\mathcal{D}_1)$ the set of features of the $i^{th}$ sample in $\mathcal{D}_1$ and by $X_{i,j}(\mathcal{D}_1)$ the $j^{th}$ feature of the $i^{th}$ sample in $\mathcal{D}_1$;
\item $Y(\mathcal{D}_1)$ refers to the label set of the data samples in $\mathcal{D}_1$. More specifically, for every $y\in Y(\mathcal{D}_1)$ we have $y\in\mathbb{R}^{|\mathcal{C}|}$, where $\mathcal{C}$ refers to the set of classes. As was mentioned, in the settings discussed in this paper, only one party monopolizes the label information.
\end{itemize}
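The following small sketch illustrates the view-separated (vertically partitioned) data layout assumed above: the two parties hold different feature columns for an overlapping set of sample identities, and only party $\mathbb{A}$ holds the labels. All identifiers and values are hypothetical.
\begin{verbatim}
# Hypothetical illustration of view-separated (vertically partitioned) data.
import pandas as pd

# Party A: identities, its own feature columns, and the labels.
D1 = pd.DataFrame({
    "id":   [101, 102, 103, 104],
    "x_a1": [0.3, 1.2, 0.7, 0.9],
    "x_a2": [5.0, 2.1, 3.3, 4.8],
    "y":    [1, 0, 0, 1],
})

# Party B: a different set of feature columns for an overlapping set of ids.
D2 = pd.DataFrame({
    "id":   [102, 103, 104, 105],
    "x_b1": [7.2, 6.1, 8.0, 5.5],
})

# Shared identities I(D1) and I(D2); in SecureGBM this intersection is
# computed privately via the secure join operation of Section III.
shared_ids = sorted(set(D1["id"]) & set(D2["id"]))
print(shared_ids)   # [102, 103, 104]
\end{verbatim}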
With the above settings, our federated learning problem consists of the following two parts.
\textbf{Training - } We need a dual-party learning algorithm with secure communication and computation schemes that can train tree-based Gradient Boosting Machines on $\mathcal{D}_1$ and $\mathcal{D}_2$ subject to the following restrictions:
\begin{itemize}
\item \textbf{Training Set Identity Protection}: $I(\mathcal{D}_1)\backslash I(\mathcal{D}_2)$ would not be obtained by the party $\mathbb{B}$, while the information about $I(\mathcal{D}_2)\backslash I(\mathcal{D}_1)$ would be prohibited from $\mathbb{A}$;
\item \textbf{Training Set Feature Security}: The training procedure needs to avoid the leakage of $X(\mathcal{D}_1)$ and $Y(\mathcal{D}_1)$ to $\mathbb{B}$, and of $X(\mathcal{D}_2)$ to the party $\mathbb{A}$.
\end{itemize}
\textbf{Testing - } Given two testing datasets $\mathcal{D'}_1=(I;X)$ and $\mathcal{D'}_2=(I;X)$ owned by the two parties $\mathbb{A}$ and $\mathbb{B}$ respectively, where $I(\mathcal{D'}_1)\cap I(\mathcal{D'}_2)\neq\emptyset$, we need an online inference algorithm in which the party $\mathbb{A}$ initializes the inference procedure using the identity of the sample to be predicted (i.e., $i'\in I(\mathcal{D'}_1)\cap I(\mathcal{D'}_2)$) and obtains the prediction result of the sample (i.e., $y_{i'}$) through the secure inference procedure, subject to the following restrictions:
\begin{itemize}
\item \textbf{Testing Set Identity Protection}: $I(\mathcal{D'}_1)\backslash I(\mathcal{D'}_2)$ would not be obtained by the party $\mathbb{B}$, while the information about $I(\mathcal{D'}_2)\backslash I(\mathcal{D'}_1)$ would be prohibited from $\mathbb{A}$
\item \textbf{Testing Set Feature Security}: The inference procedure needs to avoid the leakage of $X(\mathcal{D'}_1)$ and $y_{i'}$ to $\mathbb{B}$, and $X(\mathcal{D'}_2)$ to the party $\mathbb{A}$.
\end{itemize}
In our research, we intend to design PHE-based encryption schemes that protect the training and inference procedures (derived from LightGBM~\cite{ke2017lightgbm}) and meet the above security goals.
\section{Discussion and Conclusion}
In this work, we present \texttt{SecureGBM}, a secure multi-party (re-)design of LightGBM~\cite{ke2017lightgbm}, where we assume the view (i.e., set of features) of the same group of samples has been split into two parts owned by two parties separately. To collaboratively train a model while preserving the privacy of the two parties, a group of partially homomorphic encryption (PHE) computation models and multi-party computation protocols are used to cover the key operators of distributed LightGBM learning and inference over the two parties. As the use of PHE and multi-party computation models causes huge computational and communication overhead, statistical acceleration strategies have been proposed to lower the cost of communication while preserving the statistical accuracy of the learned model through stochastic approximation. With such statistical acceleration strategies, \texttt{SecureGBM}\ becomes increasingly efficient, with a decreasing slowdown ratio, as the scale of the training dataset increases.
The experiments based on several large real-world datasets show that \texttt{SecureGBM}\ can achieve decent testing accuracy (i.e., AUC and F1-score), as good as vanilla LightGBM (based on the aggregated datasets from the two parties), with a tolerable training time overhead (5x$\sim$64x slowdown) and without compromising data privacy. Furthermore, the ablation study that compares \texttt{SecureGBM}\ to learners using the single dataset of one party shows that such collaboration between the two parties improves accuracy.
\section{Introduction}
Multi-party federated learning~\cite{chen2006algebraic} has become one of the most popular machine learning paradigms, thanks to the increasing trends of distributed data collection, storage and processing, as well as its benefits for privacy preservation in many kinds of applications. In most multi-party machine learning applications, ``no raw data sharing'' is an important pre-condition: the model should be trained using all data stored in distributed machines (i.e., parties) without any cross-machine raw data sharing. A wide range of machine learning models and algorithms, including logistic regression~\cite{pathak2010multiparty}, sparse discriminant analysis~\cite{bian2017multi,tian2016communication}, and stochastic gradient-based learners~\cite{jayaraman2018distributed,xing2015petuum,ormandi2013gossip}, have been re-implemented on distributed computing, encryption, and privacy-preserving computation/communication platforms, so as to incorporate secure computation paradigms~\cite{chen2006algebraic}.
\textbf{Backgrounds and Related Work.} Existing efforts mainly focus on the implementation of efficient federated learning systems. Two parallel computation paradigms, \emph{data-centric} and \emph{model-centric}~\cite{xing2015petuum,zhou2008large,dean2012large,tsianos2012consensus,smyth2009asynchronous,ormandi2013gossip}, have been proposed. On each machine, a data-centric algorithm first estimates the same set of model parameters using the local data, then aggregates the estimated parameters via model averaging for global estimation. The model with the aggregated parameters is regarded as the model trained on the overall data (from multiple parties); before aggregation, these parameters can easily be estimated in parallel. Meanwhile, model-centric algorithms require multiple machines to share the same loss function with ``updatable parameters'', and allow each machine to update the parameters in the loss function using its local data so as to minimize the loss. Because of this characteristic, model-centric algorithms commonly update the parameters sequentially, so the additional time spent on updates can be a bottleneck for specific applications. Even so, compared with data-centric methods, model-centric methods usually achieve better performance, as they directly minimize the risk of the model~\cite{xing2015petuum,ormandi2013gossip}. To advance the distributed performance of linear classifiers, Tian et al.~\cite{tian2016communication} proposed a data-centric sparse linear discriminant analysis algorithm, which leverages the advantages of parallel computing.
In terms of multi-party collaboration, federated learning algorithms can be categorized into two types: \emph{data separation} and \emph{view separation}. Under data separation, the algorithms learn from distributed datasets, where each dataset consists of a subset of samples of the same type~\cite{xing2015petuum,bian2017multi,tian2016communication,jayaraman2018distributed}. For example, hospitals are often required to collaboratively learn a model that predicts patients' future diseases by classifying their electronic medical records, where all hospitals follow the same schema to collect medical records while each hospital only covers a part of the patients. In this case, federated learning improves the overall learning performance by incorporating the private datasets owned by different parties, while ensuring privacy and security~\cite{xing2015petuum,ormandi2013gossip,jayaraman2018distributed}. While existing data/computation parallelism mechanisms are usually motivated by improving federated learning under data separation settings, federated learning systems under \emph{view separation} settings are seldom considered.
\textbf{Our Work.} We mainly focus on the \emph{view separation} setting of federated learning, which assumes the data views of the same group of samples are held separately by multiple parties who do not trust each other. For example, the healthcare, finance, and insurance records of the same group of healthcare users are usually stored in the data centers of healthcare providers, banks, and insurance companies separately. Healthcare users often need recommendations on healthcare insurance products according to their health and financial status, while healthcare insurance companies need to learn from large-scale healthcare data together with personal financial data to build such recommendation models. However, due to data privacy regulations, it is difficult for these three parties to share their data with each other and learn such a predictive model. In this setting, federated learning under view separation models is highly desirable. In this work, we aim at view-separation federated learning algorithms using Gradient Boosting Machines (GBM) as the classifiers. GBM is studied here as it delivers decent prediction results and can be interpreted by human experts for joint data analytics and cross-institution data understanding purposes.
\textbf{Our Contributions.} We summarize the contributions of the proposed \texttt{SecureGBM}{} algorithm in the following aspects.
\begin{itemize}
\item Firstly, we study and formulate the federated learning problem under (semi-)homomorphic encryption settings, while assuming the data owned by the two parties are not sharable. More specifically, in this paper, we assume each party owns a unique private view of the same group of samples, while the labels of these samples are monopolized by one party. To the best of our knowledge, this is the first study on tree-based Gradient Boosting Machine classifiers that addresses 1) the two-party security constraint, 2) efficient \emph{model-centric} learning with views separated across two parties but labels ``monopolized'' by one, and 3) the trade-off between statistical accuracy and the communication cost caused by statistical learning over encrypted communication.
\item Secondly, to achieve these goals, we design the \texttt{SecureGBM}{} algorithm, which re-implements the vanilla gradient boosting tree learner using the semi-homomorphic encrypted computation operators offered by Microsoft SEAL. More specifically, \texttt{SecureGBM}\ first replaces the \emph{addition} and \emph{multiplication} operators used in the gradient boosting trees with secured operators based on semi-homomorphic computation; then \texttt{SecureGBM}\ designs a new set of \emph{binary comparison} operators (i.e., $\geq$ or $\leq$) that cannot be exploited by attackers to exactly recover the ground truth through searching with the comparison operators (e.g., binary search).
\item Furthermore, we observe a trade-off between statistical accuracy and communication cost for GBM training. One can use a stochastic gradient boosting mechanism to update the model with a mini-batch of data per round, so that the communication cost per round is reduced quadratically. However, compared to vanilla gradient boosting machines, such stochastic gradient boosting might need additional training rounds to achieve equivalent performance. In this way, \texttt{SecureGBM}\ makes a trade-off between statistical accuracy and communication complexity using mini-batch sampling strategies, so as to enjoy low communication costs and an accelerated training procedure.
\item Finally, we evaluate \texttt{SecureGBM}{} using a large-scale real-world user profile dataset and several benchmark datasets for classification. The results show that \texttt{SecureGBM}{} can compete with state-of-the-art Gradient Boosting Machines, namely LightGBM, XGBoost, and a vanilla re-implementation of LightGBM based on Microsoft SEAL.
\end{itemize}
The rest of the paper is organized as follows. In Section II, we review the gradient-boosting trees based classifiers and the implementation of LightGBM, then we introduce the problem formulation of our work. In Section III, we propose the framework of \texttt{SecureGBM}{} and present the details of \texttt{SecureGBM}{} algorithm. In Section IV, we evaluate the proposed algorithms using the real-world user profile dataset and the benchmark datasets. In addition, we compare \texttt{SecureGBM}{} with baseline centralized algorithms. In Section V, we introduce the related work and present a discussion. Finally, we conclude the paper in Section VI.
\section{Frameworks and Algorithms Design}
In this section, we present the framework design of \texttt{SecureGBM}\ with key algorithms used.
\subsection{Overall Framework Design}
The overall framework of \texttt{SecureGBM}\ consists of two parts --- training and inference, where, given the distributed datasets, the training procedure obtains the distributed parameter models for the tree classifiers of \texttt{SecureGBM}\ and the inference procedure predicts the labels using the indices of samples.
\subsubsection{Statistically Accelerated Training Procedure} Given the training datasets $\mathcal{D}_1$ and $\mathcal{D}_2$ distributed over the two parties, as shown in Figure~\ref{fig:train_proc}, the training procedure learns the ensemble of decision trees for Gradient Boosting Machines with distributed parameters in a secure and statistically efficient way. More specifically, the training procedure incorporates an accelerated iterative process with a specific initialization as follows.
\begin{itemize}
\item \textbf{Initialization - } The owner of $\mathcal{D}_1$ initiates the whole training procedure. First of all, \texttt{SecureGBM}\ performs a \emph{secure join operation} to align the shared samples stored in $\mathcal{D}_1$ and $\mathcal{D}_2$
by matching $I(\mathcal{D}_1)$ and $I(\mathcal{D}_2)$ under \emph{Partially Homomorphic Encryption (PHE)} settings. Later, based on the data in $\mathcal{D}_1$, including both features $X(\mathcal{D}_1)$ and labels $Y(\mathcal{D}_1)$, \texttt{SecureGBM}\ learns a decision tree $F_0$ as the base model for initialization, using only the features in $X(\mathcal{D}_1)$. Please see Section III.B.1 for the detailed design and implementation of the \emph{Secure Join Operation} for sample alignment based on PHE.
\end{itemize}
With the model initialized, \texttt{SecureGBM}\ takes a statistically accelerated iterative process for GBM training, where each iteration uses \emph{mini-batch sampling to reduce the computational/communication costs}~\cite{friedman2002stochastic}. Specifically, each iteration (e.g., the $k^{th}$ iteration, $k\geq 0$) consists of three steps (a schematic sketch of one such iteration is given after this list):
\begin{itemize}
\item \textbf{Batched Secure Inference - } Given the shared samples in $\mathcal{D}_1\cap\mathcal{D}_2$, \texttt{SecureGBM}\ first randomly selects a subset of samples $\mathcal{B}_k\subseteq (I(\mathcal{D}_1)\cap I(\mathcal{D}_2))$, where $|\mathcal{B}_k|=b$ and $b$ refers to the batch size. With the model already estimated, denoted as $F_{k}$, \texttt{SecureGBM}\ then obtains the \emph{``soft prediction''} results of all samples in $\mathcal{B}_k$ through secure inference under the PHE setting, such that
\begin{equation}
y_i^{k}\gets F_{k}(X_i(\mathcal{D}_1);X_i(\mathcal{D}_2)),\ \ \forall i \in \mathcal{B}_k,
\end{equation}
where $F_{k}(X_i(\mathcal{D}_1);X_i(\mathcal{D}_2))$ refers to the inference result based on the features from both datasets. Please see Section III.A.2 for the PHE-based implementation of the inference procedure.
\item \textbf{Residual Error Estimation - } As was mentioned, both the labels and the soft prediction results are $|\mathcal{C}|$-dimensional vectors, where $|\mathcal{C}|$ refers to the number of classes.
Then, \texttt{SecureGBM}\ estimates the residual errors of the current model using the \emph{cross-entropy} as follows
\begin{equation}
\varepsilon_i^{k}\gets \mathrm{H}(y_i^{k})+ \mathrm{D_{KL}}\left( y_i^{k}||Y_i(\mathcal{D}_1)\right),\ \ \forall i \in \mathcal{B}_k.
\end{equation}
Note that, to protect the labels, $y_i^{k}$ and $\varepsilon_i^{k}$ for $\forall i\in \mathcal{B}_k$ are all stored at the owner of $\mathcal{D}_1$.
\item \textbf{Secure Tree Creation and Ensemble - } Given the estimated residual errors $\varepsilon_i^{k},\ \forall i \in\mathcal{B}_k$, \texttt{SecureGBM}\ boosts the learned model $F_{k}$ by creating a new decision tree $h_k$ that fits the residual errors $\varepsilon_i^{k}$ using the features of both datasets $X(\mathcal{D}_1)\cup X(\mathcal{D}_2)$ in an additive manner. \texttt{SecureGBM}\ then ensembles $h_k$ with the current model $F_k$ and obtains the new model $F_{k+1}$ through gradient boosting~\cite{friedman2001greedy}. As mentioned in Eq.~(2), a specific learning rate $\alpha$ is given as the weight for model ensembling. Please see Section III.B.2 for the detailed design and implementation of the \emph{Secure Splits Search Operation} for decision tree creation based on PHE.
\end{itemize}
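The sketch below summarizes one such iteration on toy data in plain, unencrypted Python; in the actual system, every cross-party step marked in the comments runs under the PHE-based operators described later in this section. The data, the helpers, and the use of scikit-learn trees are our own simplifications and are not part of the \texttt{SecureGBM}\ implementation.
\begin{verbatim}
# Schematic, unencrypted sketch of one SecureGBM-style training iteration.
import random
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
shared_ids = list(range(200))                   # aligned sample identities
X_A = rng.normal(size=(200, 3))                 # features held by party A
X_B = rng.normal(size=(200, 2))                 # features held by party B
y = (X_A[:, 0] + X_B[:, 0] > 0).astype(float)   # labels held by A only

trees, alpha = [], 0.1

def predict(ids):
    # Batched inference over both feature views (PHE-protected in SecureGBM).
    X = np.hstack([X_A[ids], X_B[ids]])
    out = np.full(len(ids), 0.5)
    for h in trees:
        out += alpha * h.predict(X)
    return out

def training_iteration(batch_frac=0.1):
    # Step 1: mini-batch sampling from the aligned samples.
    batch = random.sample(shared_ids, int(batch_frac * len(shared_ids)))
    # Step 2: batched secure inference with the current ensemble F_k.
    soft = predict(batch)
    # Step 3: residual estimation at party A, which holds the labels.
    residual = y[batch] - soft
    # Step 4: fit a small tree to the residuals over both feature views
    #         (done via secure split search in SecureGBM) and ensemble it.
    X = np.hstack([X_A[batch], X_B[batch]])
    trees.append(DecisionTreeRegressor(max_depth=3).fit(X, residual))

for k in range(20):
    training_iteration()
\end{verbatim}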
\subsubsection{Inference Procedure via Execution Flow Compilation} Given the model learned after $K$ iterations, denoted as $F_{K}$, the inference component first compiles all the trees into a distributed secure execution flow, where the nodes of every decision tree are assigned to the corresponding parties. As shown in Figure~\ref{fig:infer_proc}, all \emph{communications}, \emph{computations} and \emph{binary comparisons} are protected through SEAL-based homomorphic encryption schemes. With the secure distributed execution flow, given a sample index, e.g., $i'\in I(\mathcal{D'}_1)\cap I(\mathcal{D'}_2)$, \texttt{SecureGBM}\ runs the inference procedure over the execution flow. Please see Section III.B.3 for the design and implementation of the \emph{PHE-based Binary Comparison Operator}. Note that in our research, we assume the party $\mathbb{B}$ has no way to access the labels of training and testing samples, thereby securing the monopoly of the label information on the party $\mathbb{A}$ side. To protect the label information during inference, the result of the PHE-based Binary Comparison Operator (i.e., true or false) is also secured and cannot be deciphered by the party $\mathbb{A}$.
Furthermore, for a sample whose index is contained in $I(\mathcal{D'}_1)$ only, i.e., $i'\in I(\mathcal{D'}_1)\backslash I(\mathcal{D'}_2)$, \texttt{SecureGBM}\ first learns a comprehensive GBM (a LightGBM classifier) based on the dataset $\mathcal{D}_1$, using only the features $X(\mathcal{D}_1)$ and labels $Y(\mathcal{D}_1)$. With this model, \texttt{SecureGBM}\ makes predictions for the samples in $I(\mathcal{D'}_1)\backslash I(\mathcal{D'}_2)$ using only the features in $X(\mathcal{D'}_1)$.
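As an illustration of the compiled execution flow, the sketch below traverses a single decision tree whose internal nodes are tagged with the party that owns the split feature. In \texttt{SecureGBM}, each comparison would be evaluated through the PHE-based binary comparison operator so that neither the threshold nor the comparison outcome is revealed in the clear; here the comparison is done in plaintext purely to fix ideas, and the node structure and function names are our own simplification.
\begin{verbatim}
# Simplified illustration of distributed tree traversal during inference.
# Each node is tagged with the party that owns its split feature; in
# SecureGBM the comparisons would run under PHE-based operators.

tree = {
    "party": "A", "feature": "x_a1", "threshold": 0.5,
    "left":  {"party": "B", "feature": "x_b1", "threshold": 7.0,
              "left": {"leaf": 0.2}, "right": {"leaf": 0.9}},
    "right": {"leaf": 0.7},
}

def secure_compare(value, threshold):
    # Stand-in for the PHE-based binary comparison operator; evaluated in
    # the clear here for illustration only.
    return value <= threshold

def traverse(node, sample_a, sample_b):
    if "leaf" in node:
        return node["leaf"]
    value = (sample_a if node["party"] == "A" else sample_b)[node["feature"]]
    branch = "left" if secure_compare(value, node["threshold"]) else "right"
    return traverse(node[branch], sample_a, sample_b)

print(traverse(tree, {"x_a1": 0.3}, {"x_b1": 8.1}))   # -> 0.9
\end{verbatim}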
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{execution.pdf}
\caption{Execution Flow Compilation for the Inference Procedure}
\label{fig:infer_proc}
\end{figure}
Please note that the overall framework of \texttt{SecureGBM}\ is derived from the vanilla implementation of LightGBM~\cite{ke2017lightgbm}, while most of the calculation and optimization of gradient boosting trees~\cite{friedman2001greedy} is preserved under the coverage of partially homomorphic encryption.
\subsection{Key Algorithms}
Here, we present the detailed design of several key algorithms.
\subsubsection{Secure Join for Sample Alignment} To align the samples with identical indices across the two index sets $I(\mathcal{D}_1)$ and $I(\mathcal{D}_2)$ for training (and $I(\mathcal{D'}_1)$ and $I(\mathcal{D'}_2)$ for inference), \texttt{SecureGBM}\ obtains the intersection between the two index sets in a private and secure manner. Specifically, we adopt the private set intersection algorithms proposed in~\cite{pinkas2014faster,pinkas2018scalable} to achieve this goal. The security and privacy enforcement of the proposed component relies heavily on a technique called Oblivious Transfer Extension (OT Extension), which supports fast and private data transfer for small payloads with limited overhead~\cite{keller2015actively}. The use of OT Extension avoids time-consuming error-correcting codes and instead accelerates secure data transmission by leveraging a pseudo-random code. We also tried other OT-extension-based private set intersection algorithms, such as the one using Bloom filters~\cite{dong2013private}, but their speed and scalability are not as good as~\cite{pinkas2014faster,pinkas2018scalable}.
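For intuition, the toy sketch below computes the identity intersection using salted hashes. This naive scheme leaks information through the hashed identifiers and is shown only to fix ideas; the actual component relies on the OT-extension-based private set intersection protocols of~\cite{pinkas2014faster,pinkas2018scalable}.
\begin{verbatim}
# Naive hashed-identifier intersection, shown for intuition only.
# Real private set intersection uses OT-extension-based protocols.
import hashlib

def salted_hash(identifier, salt):
    return hashlib.sha256((salt + str(identifier)).encode()).hexdigest()

def naive_psi(ids_a, ids_b, salt="shared-public-salt"):
    hashed_b = {salted_hash(i, salt) for i in ids_b}
    return [i for i in ids_a if salted_hash(i, salt) in hashed_b]

print(naive_psi([101, 102, 103, 104], [102, 103, 104, 105]))
# -> [102, 103, 104]
\end{verbatim}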
\subsubsection{Secure Splits Search for Tree Creation} In each iteration (e.g., the $k^{th}$ iteration), \texttt{SecureGBM}\ needs to create a decision tree of size $t$ (here $t$ refers to the number of learned split nodes in the tree) that fits the ``residual error'' of the already-estimated model $F_k$, so as to enable the gradient boosting mechanism. Specifically, we adopt the leaf-wise tree growth mechanism derived from LightGBM~\cite{ke2017lightgbm} to learn the tree, where \texttt{SecureGBM}\ grows the tree vertically using a total of $t$ rounds of computation/communication and always picks the leaf node with the maximal ``residual error'' reduction to grow.
In each round of computation for decision tree creation, on the party $\mathbb{A}$ side, which owns the features $X(\mathcal{D}_1)$ and labels $Y(\mathcal{D}_1)$, \texttt{SecureGBM}\ searches for new splits using the raw data. As in vanilla LightGBM, \texttt{SecureGBM}\ selects the ``best'' split with the maximal residual error reduction over the samples in the mini-batch $\mathcal{B}_k$ as the candidate split on the party $\mathbb{A}$ side. This candidate split is then compared to the ``best'' split from the party $\mathbb{B}$ for the final split selection of this round.
On the other hand, for the party $\mathbb{B}$, which only holds the features $X(\mathcal{D}_2)$, \texttt{SecureGBM}\ first proposes a set of potential splits over $X(\mathcal{D}_2)$ (in a random or unsupervised manner), and sends the potential classification results of the samples in $\mathcal{B}_k$ under every proposed split to the party $\mathbb{A}$. Note that the potential classification results are formed into multiple sets of samples (or sample indices), categorized according to their outcomes. These sets are encrypted as private sets to protect the label information from the party $\mathbb{B}$. Certain secure domain isolation is used to protect the splits~\cite{liu2015thwarting}.
Then, on the party $\mathbb{A}$ side, \texttt{SecureGBM}\ estimates the residual errors of each split proposed by the party $\mathbb{B}$ using their potential classification results. Specifically, \texttt{SecureGBM}\ leverages the aforementioned private set intersection algorithm to estimate the overlap between the sample sets categorized by the potential classification results and the true labels, in order to obtain the prediction results and estimate the accuracy~\cite{pinkas2014faster,pinkas2018scalable}. Finally, \texttt{SecureGBM}\ selects the split (between the best candidates from $\mathbb{A}$ and $\mathbb{B}$) that further lowers the residual error as the split of this round and ``adds'' it to the decision tree; a plaintext illustration of this selection is sketched below.
To further secure the privacy of the label information, the splits of the party $\mathbb{B}$ are deployed in an isolated domain, and the party $\mathbb{B}$ cannot obtain the decision-making results of the splits. Please refer to the section below for the implementation of the binary comparison.
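The following sketch mirrors the split-selection logic in the clear: party $\mathbb{A}$ scores its own candidate splits directly against the residuals, scores the candidate partitions proposed by party $\mathbb{B}$ through the index sets they induce (which is what the private set intersection step provides), and keeps whichever split reduces the residual error most. All helper names and values are ours, and no encryption is shown.
\begin{verbatim}
# Plaintext illustration of split selection between parties A and B.
import numpy as np

def sse(values):
    # Sum of squared errors around the mean; lower means a better fit.
    return float(np.sum((values - values.mean()) ** 2)) if len(values) else 0.0

def split_score(residual, left_idx):
    mask = np.zeros(len(residual), dtype=bool)
    mask[list(left_idx)] = True
    return sse(residual[mask]) + sse(residual[~mask])

residual = np.array([1.0, 0.0, 0.0, 1.0, 1.0, 0.0])   # held by party A

# Candidate splits: A proposes splits from its own features; B only sends
# the index sets induced by its proposed splits (as encrypted private sets).
candidates = {
    ("A", "x_a1 <= 0.5"): {0, 3, 4},
    ("B", "x_b1 <= 7.0"): {1, 2, 5},
    ("B", "x_b2 <= 1.0"): {0, 1, 2},
}

best = min(candidates, key=lambda k: split_score(residual, candidates[k]))
print("selected split:", best)
\end{verbatim}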
\subsubsection{Secure Binary Comparison for Decision Making} As was mentioned, \texttt{SecureGBM}\ operates an isolated domain over the machines of the party $\mathbb{B}$, where the computation and comparison criteria for decision making are all stored in the isolated domain, which is trusted by both parties. To further secure the label information and prediction results during inference, \texttt{SecureGBM}\ uses public keys generated by the party $\mathbb{A}$ to encrypt the decision-making results from $\mathbb{B}$, and the public keys are refreshed for each inference task.
\subsection{Discussion and Remarks}
In this section, we justify the proposed framework and algorithms from the cost and learning perspectives.
\textbf{Communication Costs - } In \texttt{SecureGBM}, we replace the gradient boosting mechanism used by GBM with stochastic gradient boosting~\cite{friedman2002stochastic}, in order to accelerate the learning procedure by lowering the computational/communication costs per iteration. Let $\mathcal{N} = |I(\mathcal{D}_1)\cap I(\mathcal{D}_2)|$ denote the total number of aligned samples shared by $\mathcal{D}_1$ and $\mathcal{D}_2$, and let $b=|\mathcal{B}_k|$ denote the batch size used in each iteration.
Each iteration of GBM and \texttt{SecureGBM}\ creates a $t$-sized decision tree through $t$ rounds of communication between the two parties. In each round of such communication, GBM and \texttt{SecureGBM}\ need to exchange payloads of size $\mathcal{O}(\mathcal{N}^2)$ and $\mathcal{O}(b^2)$, respectively. Hence, the communication cost per iteration is $\mathcal{O}(t\cdot\mathcal{N}^2)$ for GBM and $\mathcal{O}(t\cdot b^2)$ for \texttt{SecureGBM}.
\textbf{Statistical Acceleration - } To simplify the analysis of the statistical performance, we make the mild assumption that the learning procedures of LightGBM and \texttt{SecureGBM}\ can be regarded as gradient descent (GD) and stochastic gradient descent (SGD) based loss minimization over a certain convex loss~\cite{friedman2001greedy,mason2000boosting,friedman2002stochastic}. Under mild convexity and smoothness conditions, GD and SGD converge to the minimum of the loss function at error convergence rates~\cite{shapiro1996convergence,shalev2009stochastic,shamir2013stochastic} of $\mathcal{O}(1/k)$ and $\mathcal{O}(1/\sqrt{k})$ respectively, where $k$ denotes the number of iterations. More discussion can be found in~\cite{mason2000boosting}.
Since the communication costs per iteration are $\mathcal{O}(t\cdot\mathcal{N}^2)$ and $\mathcal{O}(t\cdot b^2)$ respectively, we can roughly conclude that a trade-off exists between statistical performance and communication complexity for \texttt{SecureGBM}\ training.
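As a rough, purely heuristic reading of the rates above (and not a formal statement), suppose both procedures are run until the optimization error drops below $\epsilon$. The total communication is then roughly
\begin{align*}
\underbrace{\mathcal{O}\big(\tfrac{1}{\epsilon}\big)\cdot\mathcal{O}(t\cdot\mathcal{N}^2)}_{\text{GD}}
\qquad\text{versus}\qquad
\underbrace{\mathcal{O}\big(\tfrac{1}{\epsilon^{2}}\big)\cdot\mathcal{O}(t\cdot b^2)}_{\text{SGD}},
\end{align*}
so the mini-batch scheme is favorable whenever $b^2 \lesssim \epsilon\,\mathcal{N}^2$, i.e., when the batch is small relative to the number of aligned samples and the target accuracy is moderate.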
\section{Experiments and Results}
In this section, we report the experiments that evaluate \texttt{SecureGBM}\ and compare its performance with baseline methods, including vanilla LightGBM, XGBoost, and other downstream classifiers.
\begin{table*}[]
\centering
\caption{Overall Classification AUC (\%) Comparison (N/A: in these experiments, LightGBM failed to train the model because the features of the given dataset are too sparse to learn from.)}
{
\begin{tabular}{r|cc||cc||cc} \hline
& \multicolumn{2}{c}{Sparse} & \multicolumn{2}{c}{Adult} & \multicolumn{2}{c}{Phishing} \\ \hline
Methods & Training & Testing & Training &
Testing & Training & Testing \\ \hline
\texttt{SecureGBM} & 93.227 & 66.220 & 92.465 & 90.080 & 62.855 & 61.823 \\ \hline
\multicolumn{7}{c}{Using the Aggregated Datasets from $\mathbb{A}$ and $\mathbb{B}$}\\\hline
LightGBM-($\mathbb{A}$,$\mathbb{B}$) & 96.102 & 68.528 & 92.199 & 90.145 & 67.994 & 63.430 \\
XGBoost-($\mathbb{A}$,$\mathbb{B}$) & 93.120 & 67.220 & 91.830 & 89.340 & 67.090 & 61.990 \\
LIBSVM-Linear-($\mathbb{A}$,$\mathbb{B}$) & 73.490 & 64.560 & 58.641 & 59.280 & 50.073 & 50.980 \\
LIBSVM-RBF-($\mathbb{A}$,$\mathbb{B}$) & 79.850 & 63.210 & 75.549 & 72.060 & 52.789 & 47.479 \\ \hline
\multicolumn{7}{c}{Using the Single Dataset at $\mathbb{A}$}\\\hline
LightGBM-$\mathbb{A}$ & N/A* & N/A* & 89.849 & 88.052 & 64.693 & 59.743 \\
XGBoost-$\mathbb{A}$ & 65.170 & 57.370 & 89.490 & 87.620 & 64.070 & 59.740 \\
LIBSVM-Linear-$\mathbb{A}$ & 52.360 & 50.675 & 66.293 & 34.347 & 50.007 & 50.489 \\
LIBSVM-RBF-$\mathbb{A}$ & 56.740 & 52.380 & 72.909 & 55.076 & 50.248 & 50.306 \\ \hline
\multicolumn{7}{c}{Using the Datasets that aggregate Features from $\mathbb{B}$ and Labels from $\mathbb{A}$ (Not exist in the real case)}\\\hline
LightGBM-$\mathbb{B}$* & 96.102 & 68.528 & 85.708 & 84.587 & 62.396 & 58.929 \\
XGBoost-$\mathbb{B}$* & 93.190 & 67.390 & 85.700 & 85.410 & 61.720 & 58.420 \\
LIBSVM-Linear-$\mathbb{B}$* & 67.480 & 60.990 & 46.527 & 46.840 & 50.627 & 48.567 \\
LIBSVM-RBF-$\mathbb{B}$* & 78.230 & 64.880 & 56.927 & 74.987 & 50.336 & 50.415 \\ \hline
\end{tabular}}
\label{tab:overall}
\end{table*}
\subsection{Experimental Setups}
In this section, we present the datasets, the baseline algorithms, and the experimental settings of our evaluation study.
\subsubsection{Datasets} In our study, we evaluate \texttt{SecureGBM}\ using the following three datasets.
\begin{itemize}
\item \textbf{\emph{Sparse - }} This is a private dataset consisting of 11,371 users' de-anonymized financial records, where each sample has 8,922 \emph{extremely sparse} features and a binary label. The features are separately owned by two parties: the bank holds 5,000 features and the real-estate lender owns the remaining 3,922 features, while the bank owns the label information about bankruptcy. The goal is to predict the bankruptcy of a user by incorporating the sparse features distributed over the two parties. As the dataset is quite large, we set the mini-batch size $b$ to 1\% of the overall training set. Note that the labels in Sparse are extremely imbalanced, with most samples being negative.
\item \textbf{\emph{Adult - }} This is an open-access dataset consisting of 27,739 web pages, where each web page has 123 features and one binary label (whether the web page contains adult content). We randomly split the features into two sets with 61 and 62 features respectively. As this dataset is quite small, we use the whole dataset in each iteration, i.e., $b=100\%$ of the overall training set.
\item \textbf{\emph{Phishing - }} This is an open-access dataset consisting of 92,651 web pages, where each web page has 116 features and one binary label (whether the web page carries a phishing risk). We randomly split the features into two sets of 58 features each. As this dataset is comparatively large, we use a mini-batch with $b=10\%$ of the overall training set.
\end{itemize}
\subsubsection{Baseline Algorithms with Settings} To understand the advantages of \texttt{SecureGBM}, we compare \texttt{SecureGBM}\ with the baseline algorithms below.
\begin{itemize}
\item \textbf{\emph{LightGBM - }} We consider the vanilla implementation of LightGBM as a key baseline algorithm, with two settings, LightGBM-$\mathbb{A}$ and LightGBM-$(\mathbb{A},\mathbb{B})$. LightGBM-$\mathbb{A}$ refers to LightGBM trained using the features and labels of the dataset at $\mathbb{A}$ only, while LightGBM-$(\mathbb{A},\mathbb{B})$ refers to the vanilla distributed LightGBM trained using both datasets at $\mathbb{A}$ and $\mathbb{B}$, without encryption protection. Finally, we also include a baseline LightGBM-$\mathbb{B}*$ that might not exist in real-world settings: LightGBM-$\mathbb{B}*$ aggregates the features from the party $\mathbb{B}$ and the label information from $\mathbb{A}$ as the training dataset. The comparison to LightGBM-$\mathbb{A}$ and LightGBM-$\mathbb{B}*$ shows the information gain of federated learning beyond a model trained by any single party.
\item \textbf{\emph{XGBoost - }} Following the above settings, we include XGBoost-$\mathbb{A}$ and XGBoost-$(\mathbb{A},\mathbb{B})$. XGBoost-$\mathbb{A}$ refers to vanilla XGBoost trained using the features and labels of the dataset at $\mathbb{A}$ only, while XGBoost-$(\mathbb{A},\mathbb{B})$ refers to vanilla XGBoost trained by aggregating both datasets from $\mathbb{A}$ and $\mathbb{B}$ in a centralized manner. Similarly, XGBoost-$\mathbb{B}*$, trained by aggregating the features from the party $\mathbb{B}$ and the label information from $\mathbb{A}$, is given as a baseline to demonstrate the information gain of collaboration.
\item \textbf{\emph{LIBSVM - }} Following the same settings, we include LIBSVM-$\mathbb{A}$ and LIBSVM-$(\mathbb{A},\mathbb{B})$. LIBSVM-$\mathbb{A}$ refers to vanilla LIBSVM trained using the features and labels of the dataset at $\mathbb{A}$ only, while LIBSVM-$(\mathbb{A},\mathbb{B})$ refers to vanilla LIBSVM trained using both datasets at $\mathbb{A}$ and $\mathbb{B}$ in a centralized manner. Similarly, LIBSVM-$\mathbb{B}*$, trained by aggregating the features from the party $\mathbb{B}$ and the label information from $\mathbb{A}$, is given as a baseline. More specifically, LIBSVM with an RBF kernel and a linear SVM are used here.
\end{itemize}
Note that in all experiments, 80\% of the samples are used for training and the remaining 20\% are reserved for testing. The training and testing sets are randomly selected for 5-fold cross-validation. The default learning rate of LightGBM, XGBoost and \texttt{SecureGBM}\ is set to 0.1.
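The sketch below shows one reasonable reading of this evaluation protocol for the non-secure LightGBM baseline, using the public scikit-learn and LightGBM APIs on placeholder data; it illustrates the protocol rather than the actual experiment scripts.
\begin{verbatim}
# One reading of the evaluation protocol (illustrative placeholder data).
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X = np.random.rand(1000, 20)
y = np.random.randint(0, 2, 1000)

aucs = []
for seed in range(5):   # five random 80%/20% train/test resamplings
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              random_state=seed)
    model = lgb.LGBMClassifier(learning_rate=0.1, n_estimators=200)
    model.fit(X_tr, y_tr)
    aucs.append(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

print("mean test AUC over 5 runs:", np.mean(aucs))
\end{verbatim}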
\begin{figure*}
\centering
\subfloat[$t=5$ on Sparse]{\includegraphics[width=0.3\textwidth]{fig/data1_model1_all.pdf}}
\subfloat[$t=4$ on Sparse]{\includegraphics[width=0.3\textwidth]{fig/data1_model2_all.pdf}}
\subfloat[$t=3$ on Sparse]{\includegraphics[width=0.3\textwidth]{fig/data1_model3_all.pdf}}\\
\subfloat[$t=5$ on Adult]{\includegraphics[width=0.3\textwidth]{fig/data2_model1_all.pdf}}
\subfloat[$t=4$ on Adult]{\includegraphics[width=0.3\textwidth]{fig/data2_model2_all.pdf}}
\subfloat[$t=3$ on Adult]{\includegraphics[width=0.3\textwidth]{fig/data2_model3_all.pdf}}\\
\subfloat[$t=5$ on Phishing]{\includegraphics[width=0.3\textwidth]{fig/data3_model1_all.pdf}}
\subfloat[$t=4$ on Phishing]{\includegraphics[width=0.3\textwidth]{fig/data3_model2_all.pdf}}
\subfloat[$t=3$ on Phishing]{\includegraphics[width=0.3\textwidth]{fig/data3_model3_all.pdf}}
\caption{The Comparison of Training AUC and F1-Score per Iteration: \texttt{SecureGBM}\ vs. LightGBM-($\mathbb{A}$,$\mathbb{B}$). The labels in the Sparse dataset (personal bankruptcy status) are imbalanced, with most samples negative; in this case, the learned models are usually imbalanced, with very low recall and F1-score.}
\label{fig:runtime}
\end{figure*}
\begin{figure*}
\centering
\subfloat[$t=5$ on Sparse]{\includegraphics[width=0.3\textwidth]{fig/task_data1_model1_all.pdf}}
\subfloat[$t=4$ on Sparse]{\includegraphics[width=0.3\textwidth]{fig/task_data1_model2_all.pdf}}
\subfloat[$t=3$ on Sparse]{\includegraphics[width=0.3\textwidth]{fig/task_data1_model3_all.pdf}}\\
\subfloat[$t=5$ on Adult]{\includegraphics[width=0.3\textwidth]{fig/task_data2_model1_all.pdf}}
\subfloat[$t=4$ on Adult]{\includegraphics[width=0.3\textwidth]{fig/task_data2_model2_all.pdf}}
\subfloat[$t=3$ on Adult]{\includegraphics[width=0.3\textwidth]{fig/task_data2_model3_all.pdf}}\\
\subfloat[$t=5$ on Phishing]{\includegraphics[width=0.3\textwidth]{fig/task_data3_model1_all.pdf}}
\subfloat[$t=4$ on Phishing]{\includegraphics[width=0.3\textwidth]{fig/task_data3_model2_all.pdf}}
\subfloat[$t=3$ on Phishing]{\includegraphics[width=0.3\textwidth]{fig/task_data3_model3_all.pdf}}
\caption{The Comparison of Testing AUC and F1-Score per Iteration: \texttt{SecureGBM}\ vs. LightGBM-($\mathbb{A}$,$\mathbb{B}$). The labels in the Sparse dataset (personal bankruptcy status) are imbalanced, with most samples negative; in this case, the learned models are usually imbalanced, with very low recall and F1-score.}
\label{fig:testing}
\end{figure*}
\subsection{Overall Performance}
To evaluate the overall accuracy of \texttt{SecureGBM}, we measured the Area Under Curve (AUC) of the \texttt{SecureGBM}\ prediction results and compared them to the baseline algorithms. Table~\ref{tab:overall} presents the overall comparison of the AUC achieved by these classifiers under the same settings. \texttt{SecureGBM}, LightGBM, and LIBSVM were all trained for 200 iterations, and we measure the AUC on the training and testing datasets.
The results in Table~\ref{tab:overall} show that, compared to LightGBM-($\mathbb{A}$,$\mathbb{B}$), \texttt{SecureGBM}\ achieves similar training and testing AUC under the same settings, while significantly outperforming LightGBM-$\mathbb{A}$, which uses the single dataset of $\mathbb{A}$. Furthermore, under both settings, LightGBM performs better than XGBoost in terms of testing AUC. Similar observations can be obtained from the comparisons between LIBSVM and LightGBM. In short, it is reasonable to conclude that multi-party gradient boosting over the two distributed datasets can significantly improve performance and outperforms the learner that uses the dataset of the party $\mathbb{A}$ only.
Furthermore, the comparison to LightGBM-$\mathbb{B}$* shows that, except for the experiments based on the Sparse dataset, \texttt{SecureGBM}\ significantly outperforms the learner that aggregates the features from $\mathbb{B}$ and the labels from $\mathbb{A}$. For the Sparse dataset, one can easily observe that LightGBM-$\mathbb{A}$ failed to train the model when using the dataset at $\mathbb{A}$ only, as the features at $\mathbb{A}$ are too sparse to learn from. The comparison between LightGBM-($\mathbb{A}$,$\mathbb{B}$) and LightGBM-$\mathbb{B}$* further demonstrates that incorporating the features at $\mathbb{A}$ cannot improve the performance of LightGBM learning. For the same reason, \texttt{SecureGBM}\ performs slightly worse than LightGBM-$\mathbb{B}$*, with a marginal testing AUC degradation.
We conclude that \texttt{SecureGBM}\ boosts the testing accuracy of the learners from the perspective of the party $\mathbb{A}$, as (1) \texttt{SecureGBM}\ consistently outperforms LightGBM-$\mathbb{A}$, XGBoost-$\mathbb{A}$ and the other learners that use the dataset at $\mathbb{A}$ only; and (2) the algorithms that aggregate the datasets from both sides, such as LightGBM-($\mathbb{A}$, $\mathbb{B}$) or LightGBM-$\mathbb{B}$*, perform only marginally better than \texttt{SecureGBM}, while sacrificing the data privacy of the two parties.
\subsection{Case Studies}
To further understand the performance of \texttt{SecureGBM}, we traced the models obtained after each iteration and analyzed them from both accuracy and efficiency perspectives.
\subsubsection{Trends of Accuracy Improved per Iteration} Figure~\ref{fig:runtime} presents the comparison of training AUC and F1-score per iteration between \texttt{SecureGBM}\ and vanilla LightGBM. More specifically, we evaluated the performance for $t=3,\ 4,$ and $5$ (since LightGBM and \texttt{SecureGBM}\ use the leaf-wise growth strategy, $t$ is equivalent to the depth of each learned decision tree), where we clearly observe the error convergence of the model.
We observe that, in most cases, the training F1-score gradually improves as the number of iterations increases. For the Sparse and Adult datasets, the overall trends of AUC and F1-score for LightGBM and \texttt{SecureGBM}\ are almost the same under all settings, even though, for the Sparse dataset, \texttt{SecureGBM}\ only used 1\% of the training data as the mini-batch for each model update (while LightGBM used the whole set). Furthermore, even though \texttt{SecureGBM}\ did not perform as well as LightGBM on the Phishing dataset for $t=5$ and $4$, it still achieved performance comparable to LightGBM under the appropriate setting $t=3$. These observations are quite encouraging: with appropriate settings, the use of mini-batches does not seem to hurt the learning progress of \texttt{SecureGBM}\ on these datasets. The curves of \texttt{SecureGBM}\ show more jitter, due to the use of stochastic approximation for statistical acceleration. Similar observations hold for the comparison of testing AUC and F1-score per iteration shown in Figure~\ref{fig:testing}.
\begin{table}[]
\centering
\caption{Time Consumption per Iteration (seconds) on a Synthesized Dataset over Varying Number of Training Samples}
\normalsize{
\begin{tabular}{r|ccccc} \hline
\# Samples & 1,000 & 2,000 & 4,000 & 8,000 & 16,000 \\ \hline
\texttt{SecureGBM} & 10.20 & 10.50 & 11.4 & 11.75 & 14.30 \\
LightGBM & 0.16 & 0.32 & 0.70 & 1.41 & 2.45 \\
XGBoost & 0.51 & 0.73 & 1.09 & 2.20 & 4.30 \\ \hline
\end{tabular}}
\label{tab:time}
\vspace{-3mm}
\end{table}
\subsubsection{Time Consumption over Scale of Problem} To test the time consumption of \texttt{SecureGBM}\ over varying problem scales, we synthesized a dataset based on Sparse with an increasing number of samples. The experimental results show that the \emph{time consumption per iteration} of the \texttt{SecureGBM}\ training procedure is significantly longer than that of LightGBM and XGBoost.
We estimate the slowdown ratio of \texttt{SecureGBM}\ as the ratio between the time consumption per iteration of \texttt{SecureGBM}\ and that of the baselines. The slowdown ratio ranges from around 3x to 64x in this experiment. Furthermore, as the number of samples increases, the slowdown ratio of \texttt{SecureGBM}\ decreases significantly. For example, the ratio is around 63.75x when comparing \texttt{SecureGBM}\ to LightGBM with 1,000 training samples, while it is only 5.8x when comparing to LightGBM with 16,000 training samples. We conclude that, thanks to the statistical acceleration strategies used, \texttt{SecureGBM}\ becomes more and more efficient as the scale of the training set increases. The experiments were carried out on two workstations with 8-core Xeon CPUs and 16GB memory. The two machines are interconnected with a 100 MBit cable with 1.55ms latency.
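For reference, the slowdown ratios quoted above can be read directly off Table~\ref{tab:time}; the short computation below reproduces them.
\begin{verbatim}
# Slowdown ratios of SecureGBM computed from the time-per-iteration table.
samples  = [1000, 2000, 4000, 8000, 16000]
secure   = [10.20, 10.50, 11.4, 11.75, 14.30]
lightgbm = [0.16, 0.32, 0.70, 1.41, 2.45]
xgboost  = [0.51, 0.73, 1.09, 2.20, 4.30]

for n, s, l, x in zip(samples, secure, lightgbm, xgboost):
    print(f"{n:>6} samples: {s / l:5.1f}x vs LightGBM, {s / x:5.1f}x vs XGBoost")
# 1,000 samples: ~63.8x vs LightGBM; 16,000 samples: ~5.8x vs LightGBM.
\end{verbatim}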
\section{Related Work}
\section{Introduction }
The last years have seen several important works on the $Q$-curvature problem in dimensions greater than or equal to five, since the discovery by Gursky and Malchiodi \cite{GM} of a natural {\sl geometric} maximum principle associated with the Paneitz operator. Building on this work, Hang and Yang \cite{HY1} realized that one could give conformally covariant conditions under which such a maximum principle holds and provided an Aubin-type result for the existence of constant $Q$-curvature metrics on closed manifolds.
The present paper is devoted to the construction of singular solutions to the constant $Q$-curvature problem in dimensions greater than or equal to five. Before explaining our results, we review the by-now classical setting of the Yamabe problem. Let $(M,g)$ be a compact $n$-dimensional Riemannian manifold, $n \geq 3$. If $\Sigma \subset M$ is any
closed set, then the `standard' singular Yamabe problem concerns the existence and geometric properties of
complete metrics of the form $\bar g = u^{\frac{4}{n-2}} g$ with constant scalar curvature. This corresponds to solving
the partial differential equation
\begin{equation}\label{eq:cL}
-\Delta_{g} u + \frac{n-2}{4(n-1)} R^{g} u = \frac{n-2}{4(n-1)}R^{\bar g} \, u^{\frac{n+2}{n-2} }, \qquad u > 0,
\end{equation}
where $R^{\bar g}$ is constant and with a `boundary condition' that $u \to \infty$ sufficiently quickly at $\Sigma$ so that
$\bar g$ is complete. It is known
that solutions with $R^{\bar g}< 0$ exist quite generally if $\Sigma$ is large in a capacitary
sense \cite{Lab}, whereas for $R^{\bar g} > 0$ existence is only known when $\Sigma$ is a smooth
submanifold (possibly with boundary) of dimension $k \leq (n-2)/2$, see \cite{Mazzeo-Pacard96}, \cite{F}.
There are both analytic and geometric motivations for studying this problem. For example, in the positive case ($R > 0$), solutions
to this problem are actually weak solutions across the singular set, so these results fit into the broader investigation of
possibly singular sets of weak solutions of semilinear elliptic equations. On the geometric side is a well-known theorem
by Schoen and Yau \cite{SY} stating that if $(M,g)$ is a compact manifold with a locally conformally flat metric $g$ of
positive scalar curvature, then the developing map $D$ from the universal cover $\widetilde{M}$ to $\mathbb S^n$, which
by definition is conformal, is injective, and moreover, $\Sigma := \mathbb S^n \setminus D(\widetilde{M})$ has Hausdorff
dimension less than or equal to $(n-2)/2$. Regarding the lifted metric $\tilde{g}$ on $\widetilde{M}$, this provides an interesting class of solutions of the singular Yamabe problem which are periodic with respect to a
Kleinian group, and for which the singular set $\Sigma$ is typically nonrectifiable. More generally, that paper also
shows that if $g$ is the standard round metric on the sphere and if $\bar g = u^{\frac{4}{n-2}} g$ is a complete metric
with positive scalar curvature and bounded Ricci curvature on a domain $\mathbb S^n \setminus \Sigma$, then
$\dim_{\mathcal H} \Sigma \leq (n-2)/2$.
In this work, we address the same type of question for the $Q$-curvature equation. The equation involves a fourth order operator, the so-called Paneitz operator, and is therefore significantly more challenging to investigate. However, in recent years there have been several new insights into this difficult problem, thanks to the work of Gursky and Malchiodi \cite{GM}. A major issue for higher order equations is the lack of a maximum principle. In particular, one cannot in general ensure that a reasonable approximation yielding a weak solution of the equation is actually {\sl positive}. However, the breakthrough of Gursky and Malchiodi shows that, under some {\sl geometric} assumptions on the manifold, one can obtain a positive solution. Their maximum principle led to an existence result similar to the Yamabe problem using a flow approach. A more variational point of view was later implemented by Hang and Yang in \cite{HY1}, which also weakened the geometric conditions needed for a maximum principle. Keeping in mind the importance of the $Q$-curvature problem both analytically and geometrically, it is then natural to ask whether one can construct singular solutions as in the second order case. This is the main result of this paper.
We first describe the setting of our contribution: let $(M,g)$ be a smooth closed $n$-dimensional Riemannian manifold with $n\geq 5$. The $Q$-curvature $Q_g$ is given by \begin{align} Q_g&=-\frac{1}{2(n-1)}\Delta R-\frac{2}{(n-2)^2}|Ric|^2+\frac{n^3-4n^2+16n-16}{8(n-1)^2(n-2)^2}R^2\notag\\ &=-\Delta J-2|A|^2+\frac n2J^2,\end{align} where $R$ is the scalar curvature, $Ric$ is the Ricci curvature tensor, and $$J=\frac{R}{2(n-1)},\quad A=\frac{1}{n-2}(Ric-Jg).$$ The Paneitz operator is given by \begin{align}P\varphi&= \Delta^2\varphi+\frac{4}{n-2}div(Ric(\nabla\varphi,e_i)e_i)-\frac{n^2-4n+8}{2(n-1)(n-2)}div(R\nabla\varphi)+\frac{n-4}{2}Q\varphi\notag\\ &=\Delta^2\varphi+div(4A(\nabla\varphi,e_i)e_i-(n-2)J\nabla\varphi)+\frac{n-4}{2}Q\varphi. \end{align} Here $e_1,\dots,e_n$ is an orthonormal frame with respect to $g$.
For a given closed sub-manifold $\Sigma$ of $M$, we are interested in finding weak solutions to \begin{align} \label{main-1}Pu=u^\frac{n+4}{n-4}\quad\text{in }M\setminus \Sigma,\end{align} such that $u$ goes to infinity as one approaches $\Sigma$, that is, for every $p\in\Sigma$ and $x_k\to p$ with $x_k\in M\setminus\Sigma $, $u(x_k)\to\infty$.
Our main result is the following.
\begin{thm}\label{main-theorem}
Let $\Sigma$ be a connected smooth closed (compact without boundary) submanifold of $M$. Assume that $Q\geq 0$, $Q\not\equiv0$ and $R\geq0$. If $0<dim(\Sigma)=:k<\frac{n-4}{2}$ then there exists an infinite dimensional family of complete metrics on $M \backslash \Sigma$ with constant $Q-$curvature. \end{thm}
Actually, Theorem \ref{main-theorem} is a corollary of the following result.
\begin{thm}\label{main2-theorem}
Let $\Omega \subset \mathbb{R}^n$ be a smooth open set and $\Sigma= \cup_{i=1}^K \Sigma_i$ a disjoint union of smooth, closed submanifolds of dimensions $k_i$ in $\Omega$. Assume that $n$ and $k_i$ for $i=1,..., K$ satisfy
$$
\frac{n-k_i}{n-k_i-4}<p < \frac{n-k_i+4}{n-k_i-4}
$$
or equivalently
$$
n-\frac{4p+4}{p-1}<k_i<n-\frac{4p}{p-1}.
$$
Then there exists a positive weak solution to
\begin{align} \label{eq-domain} \left\{\begin{array}{ll} \Delta^2u=u^p &\quad\text{in }\Omega \backslash \Sigma \\ u=\Delta u=0&\quad\text{on }\partial\Omega\\u>0&\quad\text{in }\Omega.\end{array} \right.\end{align}
that blows up exactly at $\Sigma$. Furthermore if at least one of the $k_i >0$, there is an infinite dimensional solution space for \eqref{eq-domain}.
\end{thm}
\begin{rem}
Notice that in the previous theorem if $k_i=0$ for all $i=1,...,K $, then the exponent $p$ has to be subcritical with respect to the Sobolev exponent and supercritical with respect to the Serrin's exponent provided $n \geq 5$. On the other hand, if one of the $k_i$ is positive, then the critical exponent $\frac{n+4}{n-4}$ is allowed for $p$, but the dimension $n$ has to be large enough.
\end{rem}
The proof of Theorem \ref{main-theorem} uses several tools, ranging from the geometric theory of edge operators (as in \cite{Mazzeo-Pacard96}) to the more general viewpoint on this type of problem provided in \cite{Ao1}. Since we are dealing with a fourth order equation, even the ODE analysis, which is instrumental in \cite{Mazzeo-Pacard96}, is rather involved. On the other hand, the authors in \cite{Ao1} had to develop ODE-free methods to deal with their quite general operators. Using the model $\mathbb R^n \backslash \mathbb R^k$, which is conformally equivalent to the product $\mathbb S^{n-k-1} \times \mathbb H^{k+1}$ with the canonical metric, a straightforward computation of the $Q$-curvature on this model provides the condition $0 < k <\frac{n-4}{2}$ for positive $Q$-curvature metrics (see e.g. \cite{BPS}). This model plays a crucial role in our theory since it allows us to by-pass some tricky ODE arguments by having an ``explicit'' form of the solution using the Fourier-Helgason transform on hyperbolic space. See \cite{CHY} for much deeper results related to the dimension restriction and \cite{BPS} for multiplicity results on the $Q$-curvature problem. Finally, as in the second order case (the Yamabe problem), we use Delaunay-type solutions as the building blocks of the approximate solution (see \cite{GWZ} and the Appendix for existence results on these solutions). Note also that \cite{AB} provides singular solutions using the trivial profile $|x|^{-\frac{4}{p-1}}$, but these allow one to build only local solutions (see e.g. \cite{Mazzeo-Smale} for the Yamabe case).
\section{Preliminaries}
\subsection{Function spaces}
Let $\Sigma$ be a smooth $k$ dimensional submanifold of $\Omega \subset \mathbb{R}^n$ (or a union of submanifolds with different dimensions). For $\sigma>0$ small we let $N_\sigma$ to be the geodesic tubular neighborhood of radius $\sigma$ around $\Sigma$. For $\alpha\in(0,1)$, $s\in (0,\sigma)$, $k\in \mathbb{N}\cup\{0\}$ and $\nu\in\mathbb{R}$ we define the seminorms \begin{align} \notag|w|_{k,\alpha,s}:=\sum_{j=0}^k s^j\sup_{N_s\setminus N_\frac s2}|\nabla ^jw| +s^{k+\alpha}\sup_{x,x'\in N_s\setminus N_\frac s2} \frac{|\nabla^kw(x)-\nabla^kw(x')|}{|x-x'|^\alpha}, \end{align} and the weighted H\"older norm\begin{align} \notag \|w\|_{C^{k,\alpha}_\nu} :=|w|_{C^{k,\alpha}(\bar\Omega\setminus N_\frac\sigma2)} +\sup_{0<s<\sigma}s^{-\nu}|w|_{k,\alpha,s}.\end{align} The weighted H\"older space $C^{k,\alpha}_\nu(\Omega\setminus\Sigma)$ is defined by \begin{align} \notag C^{k,\alpha}_\nu(\Omega\setminus\Sigma):=\left\{w\in C^{k,\alpha}_{loc}(\bar \Omega\setminus\Sigma): \|w\|_{C^{k,\alpha}_\nu} <\infty \right\}. \end{align} The subspace of $C^{k,\alpha}_\nu(\Omega\setminus\Sigma)$ with Navier boundary conditions will be denoted by $$C^{k,\alpha}_{\nu,\mathscr{N}}(\Omega\setminus\Sigma):=\{w\in C^{k,\alpha}_\nu(\Omega\setminus\Sigma):w=\Delta w=0\text{ on }\partial\Omega\}.$$
The space $C^{k,\alpha}_{\nu,\nu'}(\mathbb{R}^N\setminus \{0\})$ is defined by \begin{align}\notag \|w\|_{C^{k,\alpha}_{\nu,\nu'}(\mathbb{R}^N\setminus \{0\})}:=\|w\|_{C^{k,\alpha}_{\nu}(B_2\setminus \{0\})} +\sup _{r\geq 1}(r^{-\nu'} \|w(r\cdot)\|_{C^{k,\alpha}(\bar B_2\setminus B_1)}).\end{align} We also set \begin{align}\notag \|w\|_{C^{k,\alpha}_{\nu,\nu'}(\mathbb{R}^n\setminus \mathbb{R}^m)}:=\|w\|_{C^{k,\alpha}_{\nu}(\mathcal{B}_2\setminus \mathbb{R}^m)} +\sup _{r\geq 1}(r^{-\nu'} \|w(r\cdot)\|_{C^{k,\alpha}(\bar{\mathcal{B}}_2\setminus \bar{\mathcal{B}}_1)}),\end{align} where $\mathcal{B}_r$ denotes the tubular neighborhood of radius $r$ of $\mathbb{R}^m$ in $\mathbb{R}^n$. \\
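A basic example to keep in mind: a smooth function on $\mathbb{R}^N\setminus\{0\}$ behaving like $|x|^{\nu}$ near the origin and like $|x|^{\nu'}$ near infinity, together with the corresponding scaled bounds on its derivatives, has finite $C^{k,\alpha}_{\nu,\nu'}$ norm. In particular, the radial singular solution constructed in Theorem \ref{exists1} below belongs, for instance, to $C^{4,\alpha}_{-\frac{4}{p-1},\,4-N}(\mathbb{R}^N\setminus\{0\})$ (its derivatives obey the corresponding scaled bounds by standard elliptic estimates). \\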
We now list some useful properties of the space $C^{k,\alpha}_{\nu}(\Omega\setminus\Sigma)$, see e.g. \cite{Mazzeo-Pacard96} and the book \cite{Pacard-Riviere}.
\begin{lem} \label{lem-properties}The following properties hold. \begin{itemize} \item[i)] If $w\in C^{k+1,\alpha}_{\gamma}(\Omega\setminus\Sigma)$ then $\nabla w\in C^{k,\alpha}_{\gamma-1}(\Omega\setminus\Sigma)$.
\item[ii)] If $w\in C^{k+1,0}_{\gamma}(\Omega\setminus\Sigma)$ then $ w\in C^{k,\alpha}_{\gamma}(\Omega\setminus\Sigma)$ for every $\alpha\in[0,1)$.
\item[iii)] For every $w_i\in C^{k,\alpha}_{\gamma_i}(\Omega\setminus\Sigma)$,\, i=1,2, we have
$$\|w_1 w_2\|_{k, \gamma_1+\gamma_2,\alpha}\leq C \|w_1 \|_{k, \gamma_1,\alpha} \|w_2\|_{k, \gamma_2,\alpha},$$ for some $C>0$ independent of $w_1,w_2$.
\item[iv)] There exists $C>0$ such that for every $w\in C^{k,\alpha}_{\gamma}(\Omega\setminus\Sigma)$ with $w>0$ in $\bar \Omega\setminus \Sigma$ we have $$\|w^p\|_{k, \gamma,\alpha}\leq C \|w \|^p_{k, \gamma,\alpha} .$$
\end{itemize} \end{lem}
\subsection{Fermi coordinates}
We now introduce the Fermi coordinates for our problem. For $\sigma>0$ small we can choose Fermi coordinates in $N_\sigma$ as follows: first we fix any local coordinate system $y=(y_1,\dots,y_k)$ on $\Sigma$ ($k$ is the dimension of $\Sigma$). For every $y_0\in\Sigma$ there exists an orthonormal frame field $E_1,\dots,E_{n-k}$ which is a basis of the normal bundle of $\Sigma$ near $y_0$. Then we consider the map $$\Sigma\times\mathbb{R}^{n-k}\ni (y,z)\to y+\sum z_iE_i(y).$$ For $|z|<\sigma$ with $\sigma$ small, this is a well-defined coordinate system in a neighborhood of $y_0$.
In this coordinate system the Euclidean metric has the following expansion $$g_{\mathbb{R}^n}=g_{\mathbb{R}^{n-k}}+g_\Sigma +O(|z|)dzdy+O(|z|)dy^2.$$
Therefore, \begin{align} \notag \Delta_{\mathbb{R}^n} =\Delta_{\mathbb{R}^{n-k}}+\Delta_\Sigma +\mathsf{e_1}\nabla+\mathsf{e_2} \nabla^2, \end{align} where $\mathsf{e_i}$, $i=1,2$ satisfy $$\|\mathsf{e_1}\|_{C^{0,\alpha}_0}+\|\mathsf{e_2}\|_{C^{0,\alpha}_1}\leq c. $$ Using this we also have \begin{align}\label{Delta}\Delta^2_{\mathbb{R}^n} =\Delta^2_{\mathbb{R}^{n-k}}+\Delta^2_\Sigma +2\Delta_{\mathbb{R}^{n-k}}\Delta_\Sigma +\sum_{i=1}^4\mathsf{e_i}\nabla^i ,\end{align} where \begin{align} \label{est-13}\|\mathsf{e_1}\|_{C^{0,\alpha}_{-2}}+\|\mathsf{e_2}\|_{C^{0,\alpha}_{-1}}+\|\mathsf{e_3}\|_{C^{0,\alpha}_0}+\|\mathsf{e_4}\|_{C^{0,\alpha}_1}\leq c.\end{align}
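As a sanity check (not needed in the sequel), in the flat model where $\Sigma=\mathbb{R}^k\subset\mathbb{R}^n$ the Fermi coordinates are global, the Euclidean metric is exactly the product metric, and all the error terms $\mathsf{e_i}$ vanish, so that
$$\Delta_{\mathbb{R}^n}=\Delta_{\mathbb{R}^{n-k}}+\Delta_{\mathbb{R}^k},\qquad \Delta^2_{\mathbb{R}^n}=\Delta^2_{\mathbb{R}^{n-k}}+2\Delta_{\mathbb{R}^{n-k}}\Delta_{\mathbb{R}^k}+\Delta^2_{\mathbb{R}^k}.$$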
\subsection{The singular solution}
The building block for our theory is the existence of a singular solution with different behaviour at the origin and at infinity. The following theorem provides such a solution. We refer the reader to the appendix for a proof of this result.
\begin{thm}\label{exists1}
Let $N \geq 5$.
Suppose that $\frac{N}{N-4}<p<\frac{N+4}{N-4}$. Then for every $\beta>0$ there exists a unique radial solution $u$ to \begin{align} \label{singu11} \left\{\begin{array}{ll} \Delta^2u=u^p\quad\text{in }\mathbb{R}^N\setminus\{0\}\\ u>0 \quad\text{in }\mathbb{R}^N\setminus\{0\}\\
\lim_{|x|\to0} u(x)=\infty, \end{array} \right.\end{align} such that $$\lim_{r\to\infty}r^{N-4}u(r)=\beta,\quad \lim_{r\to 0^+}r^\frac{4}{p-1}u(r)=c_p:=[k(p,N)]^\frac{1}{p-1},$$ where \begin{align*}k(p,N)&=\frac{8(p+1)}{(p-1)^4}\left[ N^2(p-1)^2+8p(p+1)+N(2+4p-6p^2) \right]. \end{align*} \end{thm}
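For the reader's convenience, let us point out that $c_p$ is exactly the constant for which the pure power $c_p|x|^{-\frac{4}{p-1}}$ solves $\Delta^2u=u^p$ in $\mathbb{R}^N\setminus\{0\}$. Indeed, applying the identity $\Delta r^{\beta}=\beta(\beta+N-2)r^{\beta-2}$ twice with $\beta=-\frac{4}{p-1}$ one finds
$$\Delta^2 r^{-\frac{4}{p-1}}=\frac{8(p+1)}{(p-1)^2}\Big(N-2-\frac{4}{p-1}\Big)\Big(N-4-\frac{4}{p-1}\Big)\,r^{-\frac{4p}{p-1}},$$
and expanding the product gives exactly the expression for $k(p,N)$ above; since $\frac{4}{p-1}+4=\frac{4p}{p-1}$, the choice $c_p=[k(p,N)]^{\frac{1}{p-1}}$ makes both sides of the equation match.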
\subsection{Approximate solutions}
Let $u$ be a singular radial solution to \eqref{singu11}. Then $u_\varepsilon(x):=\varepsilon^{-\frac{4}{p-1}}u(\frac x\varepsilon)$ is also a solution to \eqref{singu11}. Note that $$u_\varepsilon(x)\leq C(\delta,u) \varepsilon^{N-4-\frac{4}{p-1}}\quad\text{for }|x|\geq\delta, $$ which shows that $u_\varepsilon\to0$ locally uniformly in $\mathbb{R}^N\setminus\{0\}$. Due to this scaling and the asymptotic behavior of $u$ at infinity, for a given $\alpha>0$, we can find a solution $u_1$ such that $r^4u_1^{p-1}(r)\leq \alpha$ on $(1,\infty)$.
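For completeness, the scaling invariance can be checked directly: since $\frac{4}{p-1}+4=\frac{4p}{p-1}$,
$$\Delta^2u_\varepsilon(x)=\varepsilon^{-\frac{4}{p-1}-4}(\Delta^2u)\Big(\frac{x}{\varepsilon}\Big)=\varepsilon^{-\frac{4p}{p-1}}u^p\Big(\frac{x}{\varepsilon}\Big)=u_\varepsilon^p(x).$$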
\subsubsection{Isolated singularities}
Let $\Sigma=\{x_1,x_2,\dots,x_K\}$ be a finite set of points in $\Omega$. To construct a solution to \eqref{eq-domain} which is singular precisely at the points of $\Sigma$, we start by constructing an approximate solution to \eqref{eq-domain} which is singular exactly on $\Sigma$. Let us first fix a smooth cut-off function $\chi$ such that $\chi=1$ on $B_1$ and $\chi=0$ on $B_2^c$. Also fix $R>0$ such that $B_{2R}(x_i)\subset\Omega$ and $B_{2R}(x_i)\cap B_{2R}(x_j)=\emptyset$ for every $i\neq j$, $i,j\in\{1,2,\dots,K\}$. Let $\bar\varepsilon=(\varepsilon_1,\varepsilon_2,\dots,\varepsilon_K)$ be a $K$-tuple of dilation parameters. An approximate solution $\bar u_{\bar\varepsilon}$ is defined by $$\bar u_{\bar \varepsilon}(x)=\sum_{i=1}^K\chi_R(x-x_i)u_{\varepsilon_i}(x-x_i)=\sum_{i=1}^K \varepsilon_i^{-\frac{4}{p-1}}\chi(\frac{x-x_i}{R})u_1(\frac{x-x_i}{\varepsilon_i}) .$$ The asymptotic behavior of $u_1$ at infinity leads to the following error of approximation:
\begin{lem} The error $f_{\bar\varepsilon}:=\Delta^2 \bar u_{\bar\varepsilon}- \bar u^p_{\bar\varepsilon}$ satisfies $$\|f_{\bar\varepsilon}\|_{C^{0,\alpha}_{\gamma-4}}\leq C_\gamma \varepsilon_0^{N-\frac{4p}{p-1}}\quad\text{for } 0<\varepsilon_i\leq\varepsilon_0\leq1,$$ for every $\gamma\in\mathbb{R}$. \end{lem}
\subsubsection{Higher dimensional singularities} \label{section-higher}
Let $\Sigma_i\subset \Omega$ be a $k_i$-dimensional submanifold in $\Omega$ for $i=1,2,\dots,K$. We fix $\sigma>0$ small such that Fermi coordinates are well-defined on the Tubular neighborhood $N_{i,\sigma}$ of $\Sigma_i$ for every $i=1,\dots,K$, and $N_{i,2\sigma}\cap N_{j,2\sigma}=\emptyset$ for $i\neq j$. Fix a smooth radially symmetric cut-off function $\chi$ such that $\chi=1$ on $B_1$ and $\chi=0$ on $B_2^c$. Then for $0<\varepsilon_i<1$ and $0<R<\frac\sigma2$ we set $$\bar u_{\varepsilon_i}(x,y)=\varepsilon_i^{-\frac{4}{p-1}}u_1(\frac {x}{\varepsilon_i})\chi(\frac yR)=:u_{\varepsilon_i}(x)\chi_R(y).$$ An approximate solution which is singular only on $\cup_{i=1}^K\Sigma_i$ is defined by $$\bar u_{\bar \varepsilon}=\sum_{i=1}^K\bar u_{\varepsilon_i}.$$ Using the expansion \eqref{Delta} and the estimate \eqref{est-13} we see that the error $f_{\bar\varepsilon}:=\Delta^2\bar u_{\bar\varepsilon}-\bar u_{\bar\varepsilon}^p$ satisfies $$\|f_{\bar\varepsilon}\|_{C^{0,\alpha}_{\gamma-4}}\leq C\varepsilon_0^q,\quad 0<\varepsilon_i\leq \varepsilon_0\leq1,\, q:=\frac{p-5}{p-1}-\gamma>0,$$ for $\gamma< \frac{p-5}{p-1}$. In our applications, $\gamma$ will be bigger than $ -\frac{4}{p-1}$.
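Heuristically, the exponent $q$ can be read off from the region $\{d(x,\Sigma_i)\lesssim\varepsilon_i\}$: there each error term appearing in \eqref{Delta} satisfies $|\mathsf{e_i}\nabla^i u_{\varepsilon_i}|\lesssim s^{i-3}\cdot s^{-\frac{4}{p-1}-i}=s^{-\frac{4}{p-1}-3}$ with $s:=d(x,\Sigma_i)$, by \eqref{est-13}, so that
$$s^{4-\gamma}|f_{\bar\varepsilon}(x)|\lesssim s^{1-\frac{4}{p-1}-\gamma}=s^{q}\leq\varepsilon_0^{q}\quad\text{whenever }q>0 .$$
We stress that this is only a heuristic for the size of the error; the full estimate also takes into account the cut-off region and the nonlinear terms.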
\begin{rem}
To prove existence of solution to \eqref{eq-domain} with singular set $\Sigma$, we shall look for solutions of the form $u=\bar u_{\bar\varepsilon}+v$ (in both cases, that is, $\Sigma$ is finite and higher dimensional). Then, $v$ has to satisfy \begin{align}\label{eq-v} L_{\bar\varepsilon} v+f_{\bar\varepsilon}+Q(v)=0,\quad L_{\bar\varepsilon}:=\Delta^2-p\bar u_{\bar\varepsilon}^{p-1}, \end{align} where \begin{align}\label{Q}Q(v)=-(\bar u_{\bar\varepsilon}+v)^p+\bar u_{\bar\varepsilon}^p+p\bar u_{\bar\varepsilon}^{p-1} v . \end{align}
\end{rem}
\medskip
\section{Linearized operator in $\mathbb{R}^N \backslash \left \{ 0 \right \}$}
Since our purpose is to use an implicit function theorem, it is crucial to understand the linearized problem. For this, we invoke the analytic theory of edge operators as in \cite{mazzeoEdge,mazzeoVertman} but also some more general arguments in \cite{Ao1} as we mentioned in the introduction.
We consider the linearized operator $$L_1=\Delta^2-pu_1^{p-1}$$ where in polar coordinates we denote $$\Delta=\frac{\partial^2}{\partial r^2}+\frac{N-1}{r}\frac{\partial}{\partial r}+\frac{1}{r^2}\Delta_\theta.$$
\subsection{Indicial roots}
Next we compute indicial roots of the linearized operator $L_1$. We recall that $\gamma_j$ is an indicial root of $L_1$ at $0$ if $L_1(|x|^{\gamma_j}\varphi_j)=o(|x|^{\gamma_j-4})$, where $\varphi_j$ is the $j$-th eigenfunction of $-\Delta_\theta$, that is $-\Delta_\theta\varphi_j=\lambda_j\varphi_j$, $$\lambda_0=0,\quad \lambda_j=N-1,\quad \text{for }j=1,\dots,N,$$ and so on. One shows that $\gamma_j$ is a solution to $$[\gamma(\gamma-1)+(N-1)\gamma-\lambda_j][(\gamma-2)(\gamma-3)+(N-1)(\gamma-2)-\lambda_j]-A_p=0,$$ where \begin{align}\label{Ap}A_p:=p\lim_{r\to0}r^4u_1^{p-1}(r)=pk(p,N).\end{align} The solutions can be given by \begin{align*}\gamma_j^{\pm\pm} =\frac12\left[4-N\pm\sqrt{(N-2)^2+4+4\lambda_j\pm 4\sqrt{(N-2)^2+4\lambda_j+A_p}}\right] .\end{align*}
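Let us briefly indicate where this equation comes from (a short verification, using only the elementary identity $\Delta(r^\gamma\varphi_j)=[\gamma(\gamma+N-2)-\lambda_j]r^{\gamma-2}\varphi_j$ applied twice, together with \eqref{Ap}):
$$\Delta^2(r^{\gamma}\varphi_j)=\big[\gamma(\gamma+N-2)-\lambda_j\big]\big[(\gamma-2)(\gamma+N-4)-\lambda_j\big]r^{\gamma-4}\varphi_j,\qquad p\,u_1^{p-1}\,r^{\gamma}\varphi_j=\big(A_p+o(1)\big)r^{\gamma-4}\varphi_j\quad\text{as }r\to0 .$$
Since $\gamma(\gamma+N-2)=\gamma(\gamma-1)+(N-1)\gamma$ and $(\gamma-2)(\gamma+N-4)=(\gamma-2)(\gamma-3)+(N-1)(\gamma-2)$, the requirement $L_1(r^{\gamma}\varphi_j)=o(r^{\gamma-4})$ is precisely the displayed equation.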
We have that ($\Re$ denotes the real part) \begin{align}\label{indicial18}\gamma_0^{-+}<4-N<-\frac{4}{p-1}<\Re(\gamma_0^{--})\leq \frac{4-N}{2}\leq \Re(\gamma_0^{+-})<0<2<\gamma_0^{++}, \end{align} and \begin{align} \gamma_j^{-\pm}<-\frac{4}{p-1},\quad \Re(\gamma_0^{+-})<\gamma_j^{+\pm}\quad\text{for }j\geq1. \end{align}
To prove the above relations one uses that $A_p$ is monotone. Indeed, for any fixed $N\geq 5$, $\frac{\partial}{\partial p}A_p$ vanishes at the following points $$p_0:=\frac{N+2}{N-6},\quad p_1^\pm= \frac{N+4\pm 2\sqrt{N^2+4}}{3N-8}.$$ Using this one would get that $A_p$ is monotone increasing on $(\frac{N}{N-4},\frac{N+4}{N-4})$.
Since $\lim_{r\to\infty}r^4u_1^{p-1}(r)=0$, the indicial roots of $L_1$ at infinity are the same as those of $\Delta^2$ itself. These values are given by \begin{align*}\tilde \gamma_j^{\pm\pm} =\frac12\left[4-N\pm\sqrt{(N-2)^2+4+4\lambda_j\pm 4\sqrt{(N-2)^2+4\lambda_j}}\right] .\end{align*} In particular,
$$\tilde\gamma_0^{\pm\pm}\in\{2-N,4-N,0,2 \},\quad \tilde\gamma_1^{+\pm}\in\{1,3\},\quad \tilde \gamma_j^{+\pm}\geq 1\quad\text{and }\tilde \gamma_j^{-\pm}<4-N\quad\text{for }j\geq 1.$$
We shall choose $\mu,\nu$ in the region \begin{align}\label{mu-nu} \frac{-4}{p-1}<\nu<\Re(\gamma_0^{--})\leq\frac{4-N}{2}\leq\Re(\gamma_0^{+-})<\mu
, \end{align} so that $\mu+\nu=4-N$.
\subsection{Study of the linearized operator}
For a function $w=w(r,\theta)$ we decompose it as \begin{align}\label{decompose}w(r,\theta)=\sum_{j=0}^\infty w_j(r)\varphi_j(\theta). \end{align} Then \begin{align*}\Delta^2 (w_j\varphi_j)&=\left(w_j^{iv}+\frac{2(N-1)}{r}w_j'''+\frac{N^2-4N+3-2\lambda_j}{r^2}w_j''-\frac{(N-3)(N-1+2\lambda_j)}{r^3}w_j'\right. \\ &\quad \left.+\frac{2(N-4)\lambda_j+\lambda_j^2}{r^4}w_j\right)\varphi_j\\ &= : (w_j^{iv}+\frac{a_{1,j}}{r}w_j'''+\frac{a_{2,j}}{r^2}w_j''-\frac{a_{3,j}}{r^3}w_j'+\frac{a_{4,j}}{r^4}w_j)\varphi_j.
\end{align*}
Thus, $L_1 w=0$ if and only if, for every $j=0,1,2,\dots$ \begin{align}\label{eq-wj-22}w_j^{iv}+\frac{a_{1,j}}{r}w_j'''+\frac{a_{2,j}}{r^2}w_j''-\frac{a_{3,j}}{r^3}w_j'+\frac{a_{4,j}-V_p(r)}{r^4}w_j=0,\quad V_p(r):=pr^4u_1^{p-1}(r). \end{align}
One obtains
\begin{align} &r^{N-1}w_j(w_j^{iv}+\frac{a_{1,j}}{r}w_j'''+\frac{a_{2,j}}{r^2}w_j''-\frac{a_{3,j}}{r^3}w_j' +\frac{a_{4,j}}{r^4}w_j)\notag\\
&=(r^{N-1}w_jw'''_j)' -(r^{N-1}w_j'w_j'')'+(N-1)(r^{N-2}w_jw_j'')' - (N-1+2\lambda_j)(r^{N-3}w_jw_j')' \notag\\ &\quad +(N-1+2\lambda_j)r^{N-3}(w_j')^2+ r^{N-1}(w_j'')^2+a_{4,j}r^{N-5}w_j^2. \label{byparts25}
\end{align}
\begin{prop} \label{inj-prop-1} Let $w\in C^{4,\alpha}_{\mu,0}(\mathbb{R}^N\setminus\{0\})$ be a solution to $L_1w=0$. Then $w\equiv0$. \end{prop}
\begin{proof} From the definition of the space $C^{4,\alpha}_{\mu,0}(\mathbb{R}^N\setminus\{0\})$ we have that
\begin{align}\label{asymp-w-24} \left\{ \begin{array}{ll}|w(x)|\leq C\log(2+|x|),\quad |x|^k |\nabla ^kw(x)|\leq C &\quad\text{for }|x|\geq 1,\quad k=1,\dots,4\\ \rule{0cm}{.5cm}|x|^{-\mu+k}|\nabla^k w(x)|\leq C &\quad\text{for }0<|x|\leq1, \quad k=0,\dots,4.\end{array}\right. \end{align} Now we decompose $w$ as in \eqref{decompose}.
First we show that $w_0=0$. From the choice of $\mu$ we see that if $w_0\not\equiv 0$ then $w_0$ should behave like $r^{\gamma_0^{++}}$ around the origin. Therefore, without loss of generality, we can assume that $w_0\geq 0$ in a small neighborhood of the origin. Using the crucial fact $q:=\gamma_0^{++}>2$, thanks to \eqref{indicial18}, we shall show that $w_0$ is actually $C^2$ and $\Delta w_0(0)=0$. Indeed, as $w_0$ satisfies $\Delta^2w_0=pu_1^{p-1}w_0=:f$ in $\mathbb{R}^N\setminus\{0\}$, for $0<\varepsilon<r$ we have $$(\Delta w_0)'(r)r^{N-1}=(\Delta w_0)'(\varepsilon)\varepsilon^{N-1}+\omega_N\int_{\varepsilon<|x|<r}fdx, \quad\omega_N:=|S^{N-1}|^{-1}.$$ As $\mu>(4-N)/2$ we see that $$|(\Delta w_0)'(\varepsilon)|\varepsilon^{N-1}\leq C\varepsilon^{\mu-3}\varepsilon^{N-1}\to0\quad\text{as }\varepsilon\to0.$$ Hence, for $0<r_1<r_2$ we get $$\Delta w_0(r_2)=\Delta w_0(r_1)+\omega_N\int_{r_1}^{r_2}\frac{1}{t^{N-1}}\int_{|x|<t}f(x)dxdt.$$ As $(\Delta w_0)'>0$ on $(0,\varepsilon_0)$, that is $\Delta w_0$ is monotone increasing, we see from the above relation that $\lim_{r_1\to0^+}\Delta w_0(r_1)$ exists and is finite. Here we used that $f(x)\approx |x|^{q-4}$, $q>2$. Thus, $w_0$ is $C^2$, and again as $q>2$, we must have $\Delta w_0(0)=0$. In conclusion, $\Delta w_0(r)\geq \Delta w_0(1)>0$ for $r>1$, which leads to $w_0(r)\gtrsim r^2 $, a contradiction to \eqref{asymp-w-24}.
For $j\geq 1$, $w_j$ behaves like $r^{4-N-\tilde\gamma_j^{\pm\pm}}$ as $r\to\infty$. Since $\tilde \gamma_j^{-\pm}<4-N$ and $w_j=O(\log r) $ at infinity, we must have $w_j\approx r^{4-N-\tilde\gamma_j^{+\pm}}$.
Now multiplying the equation \eqref{eq-wj-22} by $r^{N-1}w_j$, and using \eqref{byparts25} we get (this is justified thanks to \eqref{asymp-w-24}, and the asymptotic behavior of $w_j$ at infinity) \begin{align*} (N-1+2\lambda_j) &\int_0^\infty r^{N-3}w_j'^2dr+\int_0^\infty r^{N-1}w_j''^2dr=\int_0^\infty [V_p(r)-2(N-4)\lambda_j-\lambda_j^2]r^{N-5}w_j^2dr \\ &\leq \left[p\frac{p+1}{2} k(p,N)-2(N-4)\lambda_j-\lambda_j^2\right]\int_0^\infty r^{N-5}w_j^2 dr \\ &\leq \left[ \frac{1}{16}N^3 (N+4)-2(N-4)\lambda_j-\lambda_j^2\right]\int_0^\infty r^{N-5}w_j^2dr\\ &=:C(N,j)\int_0^\infty r^{N-5}w_j^2dr. \end{align*}
An integration by parts gives $$\int_0^\infty r^{N-5}w_j^2dr\leq\frac{4}{(N-4)^2}\int_0^\infty r^{N-3}w_j'^2dr$$
$$\int_0^\infty r^{N-3}w_j'^2dr\leq\frac{4}{(N-2)^2}\int_0^\infty r^{N-1}w_j''^2dr.$$ This leads to \begin{align*} \int_0^\infty r^{N-1}w_j''^2dr &\leq \left[\frac{4 C(N,j)}{(N-4)^2} - (N-1+2\lambda_j) \right] \int_0^\infty r^{N-3}w_j'^2dr \\ & \leq \left[\frac{4 C(N,j)}{(N-4)^2} - (N-1+2\lambda_j) \right] \frac{4}{(N-2)^2}\int_0^\infty r^{N-1}w_j''^2dr \\ &=:\bar C(N,j)\int_0^\infty r^{N-1}w_j''^2dr.\end{align*} One can show that $\bar C(N,j)<1$ for $j\geq N+1$, and hence $w_j\equiv0$ for $j\geq N+1$.
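The two Hardy-type inequalities used above follow from a one-line integration by parts (the boundary terms vanish thanks to the behavior of $w_j$ at the origin and at infinity): writing $r^{N-5}=\frac{1}{N-4}(r^{N-4})'$ we get
$$\int_0^\infty r^{N-5}w_j^2\,dr=-\frac{2}{N-4}\int_0^\infty r^{N-4}w_jw_j'\,dr\leq \frac{2}{N-4}\Big(\int_0^\infty r^{N-5}w_j^2\,dr\Big)^{\frac12}\Big(\int_0^\infty r^{N-3}w_j'^2\,dr\Big)^{\frac12},$$
and the first inequality follows after dividing; the second one is obtained in the same way, with $w_j$ replaced by $w_j'$ and $N-4$ replaced by $N-2$.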
Finally, we consider the case $j=1,\dots,N$. For $\lambda_1=N-1$ we have that $\tilde\gamma_1^{+\pm}\in \{1,3\}$, and $\tilde\gamma_1^{-\pm} <4-N$. Therefore, $w_1$ should behave like $r^{1-N}$ or $r^{3-N}$ at infinity.
Let us first show that if $w_1=O(r^{1-N})$ at infinity and $w_1\varphi_1\in C^{4,\alpha}_{\mu,0}$ then $w_1\equiv 0$.
We set $$W(r)=-\int_{r}^\infty w_1(t)dt, \quad r>0,$$ so that $W'=w_1$.
For $\varepsilon>0$ small let $\Omega_\varepsilon$ be the domain $$\Omega_\varepsilon:=\left\{x\in\mathbb{R}^N:x_1>0,\, |x|>\varepsilon \right\}.$$
We know that $u'_1<0$, $\Delta u_1<0$, $(\Delta u_1)'>0 $ on $(0,\infty)$,
$$u_1(r)\approx r^{4-N},\quad u'_1 (r)\approx -r^{3-N},\quad \Delta u_1(r)\approx -r^{2-N}, \quad (\Delta u_1)'(r)\approx r^{1-N} \quad\text{as }r\to\infty ,$$ and $$u_1(r)\approx r^{-\frac{4}{p-1}},\quad u_1' (r)\approx -r^{-\frac{4}{p-1}-1},\quad \Delta u_1(r)\approx -r^{-\frac{4}{p-1}-2}, \quad (\Delta u_1)'(r)\approx r^{-\frac{4}{p-1}-3} \quad\text{as }r\to 0.$$ Therefore, $$W(r)=o(u_1(r)),\quad W'(r)=o(u_1'(r)), \quad (\Delta W)'(r)=o((\Delta u_1)'(r))\quad\text{as }r\to0\text{ or }\infty.$$ Setting $$ \tilde w_\rho(x)=(W'(|x|)-\rho u_1'(|x|))\frac{x_1}{|x|}=\frac{\partial }{\partial x_1}(W-\rho u_1)(|x|),$$ we see that for $\rho\gg1$ we have \begin{align*} \tilde w_\rho\geq 0\quad \text{and }\quad \Delta \tilde w_\rho=(\Delta W-\rho\Delta u_1)'(|x|)\frac{x_1}{|x|}\leq 0\quad \text{in }\Omega_\varepsilon,\end{align*} equivalently \begin{align}\label{w-rho} W'-\rho u_1'\geq 0\quad \text{and }\quad (\Delta W)'-\rho(\Delta u_1)'\leq 0\quad \text{in }\Omega_\varepsilon.\end{align}
Now we set $$\rho_\varepsilon:=\inf\{\rho>0:\eqref{w-rho}\text{ holds} \}.$$ We claim that $\rho_\varepsilon\to0$ as $\varepsilon\to0$. Indeed, as $\tilde w_{\rho_\varepsilon}$ satisfies (recall that $\varphi_1=\frac{x_1}{|x|}$) $$\Delta^2\tilde w_{\rho_\varepsilon}=pu_1^{p-1}\tilde w_{\rho_\varepsilon}\geq0,\quad \Delta \tilde w_{\rho_\varepsilon}\leq 0 \text{ in }\Omega_\varepsilon,$$ by the maximum principle \begin{align}
\label{maximum}\tilde w_{\rho_\varepsilon}>0\quad\text{and }\Delta\tilde w_{\rho_\varepsilon}<0\quad\text{in }\Omega_\varepsilon.\end{align} On the other hand, if $\rho_\varepsilon>0$ then there exists $x_\varepsilon\in\bar\Omega_\varepsilon$ such that $$W'(x_\varepsilon)-\rho_\varepsilon u'_1(x_\varepsilon) = 0\quad \text{ or }\quad (\Delta W)'(x_\varepsilon)-\rho_\varepsilon(\Delta u_1)'(x_\varepsilon)=0,$$ thanks to the definition of $\rho_\varepsilon$ and the asymptotic behavior of $W',(\Delta W)', u_1', (\Delta u_1)'$. Since $W$ and $u_1$ are radially symmetric, \eqref{maximum} implies that $|x_\varepsilon|=\varepsilon$. Hence, from the behavior of $W',(\Delta W)', u_1', (\Delta u_1)'$ around the origin, we conclude that $\rho_\varepsilon\to0$.
Thus we have shown that $W'\geq 0$ on $(0,\infty)$. In a similar way, taking $$\tilde w_\rho(x)=(-W'(|x|)-\rho u_1'(|x|))\frac{x_1}{|x|}$$ we would get that $W'\leq 0$ on $(0,\infty)$. Hence $w_1=W'\equiv 0$, and the claim is proved.
The same proof shows that there is no solution $w_1$ such that $w_1(r)=u_1'(r)(1+o(1))$ around the origin, and $w_1(r)=o(u_1'(r))$ at infinity.
Now we show that if $w_1=O(r^{3-N})$ at infinity and $w_1\varphi_1\in C^{4,\alpha}_{\mu,0}$ then $w_1\equiv 0$. Indeed, if $w_1\not\equiv 0$, then $\tilde w_1:=u_1'+a w_1$ would satisfy $\tilde w_1=u'_1(1+o(1))$ around the origin and $\tilde w_1=o(u_1')$ at infinity for some non-zero constant $a$, which is a contradiction.
This finishes the proof.
\end{proof}
\begin{prop}\label{inj-prop-2}Let $w\in C^{4,\alpha}_{\mu,\mu}(\mathbb{R}^N\setminus\{0\})$ be a solution to $$\Delta^2 w-\frac{A_p}{r^4}w=0\quad\text{in }\mathbb{R}^N\setminus\{0\},$$ where $A_p$ is given by \eqref{Ap}. Then $w\equiv 0$. \end{prop}
\begin{proof} Note that in this case we have the same indicial roots as before. Therefore, $w$ is given by $$w=\sum_{j\geq 0}(c_{1,j}r^{\gamma_j^{++}}+c_{2,j}r^{\gamma_j^{+-}}+c_{3,j}r^{\gamma_j^{-+}}+c_{4,j}r^{\gamma_j^{--}})\varphi_j,$$ for some $c_{i,j}\in\mathbb{R}$. Since $r^{\gamma_j^{\pm\pm}}$ cannot be bounded by $r^{\mu}$ simultaneously at the origin and at infinity, we must have $c_{i,j}=0$ for every $(i,j)$. This finishes the proof. \end{proof}
\medskip
\section{Linearized operator on $\mathbb{R}^n\setminus\mathbb{R}^k$}
The main goal of this section is to prove the following injectivity of the linearized operator on $\mathbb{R}^n\setminus\mathbb{R}^k$. We closely follow the approach in \cite{Ao1}.
\begin{prop}\label{inj-prop-3} The linearized operator $$\mathbb{L}_1= \Delta^2_{x,y}-pu_1^{p-1}$$ is injective in $C^{4,\alpha}_{\mu,0}(\mathbb{R}^n\setminus\mathbb{R}^k)$. \end{prop}
In order to prove the above proposition we shall use our previous injectivity results on $\mathbb{R}^N\setminus\{0\}$. The idea is to show that both operators have the same symbol. To be more precise, we write the Euclidean metric in $\mathbb{R}^N$ as $$|dx|^2=dr^2+r^2 g_{\mathbb S^{N-1}}.$$ We consider the conformal change $$g_0:=\frac{1}{r^2}|dx|^2=dt^2+g_{\mathbb S^{N-1}},\quad r=e^{-t},$$ which is a complete metric on the cylinder $\mathbb{R}\times \mathbb S^{N-1}$. Then the conformal Laplacian $P^{g_0}_{\gamma}$ of order $2\gamma$ with $0<\gamma<\frac N2$ is given by $$P^{g_0}_\gamma w=r^\frac{N+2\gamma}{2}(-\Delta)^\gamma u,\quad u:=r^{-\frac{N-2\gamma}{2}}w. $$ In what follows we use the following normalization for the Fourier transform on $\mathbb{R}$: $$\hat w(\xi):=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}e^{-i\xi t}w(t)dt.$$
The following lemma can be found in \cite{mar}.
\begin{lem}Let $P^{j}_\gamma$ be the projection of the operator $P^{g_0}_\gamma$ on the eigenspace $\langle\varphi_j\rangle$. Then, writing $w(t,\theta)=\sum_{j=0}^\infty w_j(t)\varphi_j(\theta)$ we have $$\widehat{P^j_\gamma w_j}=\Theta^j_\gamma(\xi)\hat w_j,$$ where the Fourier symbol is given by \begin{align}\label{symbol-1}\Theta^j_\gamma(\xi)=2^{2\gamma}\frac{|\Gamma(\frac12+\frac\gamma2+\frac12\sqrt{(\frac N2-1)^2+\lambda_j}+\frac\xi2 i)|^2}{|\Gamma(\frac12-\frac\gamma2+\frac12\sqrt{(\frac N2-1)^2+\lambda_j}+\frac\xi2 i)|^2}.\end{align} \end{lem}
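As an illustration (and a consistency check that we will not use later), for $\gamma=2$, which is the order relevant to the bi-Laplacian, the Gamma quotient in \eqref{symbol-1} simplifies using $\Gamma(z+1)=z\Gamma(z)$ twice: setting $a_j:=\sqrt{(\frac N2-1)^2+\lambda_j}$ one gets
$$\Theta^j_2(\xi)=\big[(a_j+1)^2+\xi^2\big]\big[(a_j-1)^2+\xi^2\big],$$
a polynomial of degree four in $\xi$, as expected for a fourth order differential operator. For $j=0$ one has $a_0=\frac N2-1$, and the zeros of $\Theta^0_2$ occur at $\xi=\pm\frac{N}{2}i$ and $\xi=\pm\frac{N-4}{2}i$, which matches, after the shift by $\frac{N-4}{2}$ coming from the conformal factor, the indicial roots $\{2-N,4-N,0,2\}$ of $\Delta^2$ recalled in the previous section.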
Now we move on to the case when the singularity is along $\mathbb{R}^k$. For a point $z=(x,y)\in\mathbb{R}^n\setminus\mathbb{R}^k$ we shall use the following notations: $x\in \mathbb{R}^N$, $y\in \mathbb{R}^k$ where $\mathbb{R}^n=\mathbb{R}^N\times\mathbb{R}^k$. We shall also write $z=(x,y)=(r,\theta,y)$ where $r=|x|$ and $\theta\in \mathbb S^{N-1}$. Then the Euclidean metric on $\mathbb{R}^n$ can be written as $$|dz|^2=|dx|^2+|dy|^2=dr^2+r^2g_{\mathbb S^{N-1}}+dy^2.$$ Now we consider the conformal metric $$g_k:=\frac{1}{r^2}|dz|^2=g_{\mathbb S^{N-1}}+\frac{dr^2+dy^2}{r^2}=g_{\mathbb S^{N-1}}+g_{\HH^{k+1}},$$ where $\HH^{k+1}$ is the Hyperbolic space. The conformal Laplacian is given by $$P^{g_k}_\gamma w=r^\frac{n+2\gamma}{2}(-\Delta)^\gamma u,\quad u=r^{-\frac{n-2\gamma}{2}}w.$$ For a function $w$ on $\mathbb S^{N-1}\times\HH^{k+1}$ we decompose it as $w(\theta,\zeta)=\sum_{j=0}^\infty w_j(\zeta)\varphi_j(\theta),$ with $\zeta\in\HH^{k+1}$.
The next lemma can be found in \cite{Ao1}.
\begin{lem} Let $P^j_\gamma$ be the projection of the operator $P^{g_k}_\gamma$ on the eigenspace $\langle \varphi_j\rangle $. Then \begin{align}\label{symbol-2}\widehat{P^j_\gamma w_j}=\Theta^j_\gamma(\lambda)\hat w_j,\quad \Theta^j_\gamma(\lambda)=2^{2\gamma}\frac{|\Gamma(\frac12+\frac\gamma2+\frac12\sqrt{(\frac N2-1)^2+\lambda_j}+\frac\lambda2 i)|^2}{|\Gamma(\frac12-\frac\gamma2+\frac12\sqrt{(\frac N2-1)^2+\lambda_j}+\frac\lambda2 i)|^2},\end{align}where $\,\widehat{\cdot} \,$ denotes the Fourier-Helgason transform on $\HH^{k+1}$. \end{lem}
\medskip
\noindent\emph{Proof of Proposition \ref{inj-prop-3}.} As we mentioned before, we shall use Proposition \ref{inj-prop-1}. Let $\phi$ be a solution to $$\Delta^2\phi-pu_1^{p-1}\phi=0\quad\text{in }\mathbb{R}^n\setminus\mathbb{R}^k.$$ We set $w=r^{-\frac{n+2\gamma}{2}}\phi$. Let $w_j$ be the projection of $w$ on the eigenspace $\langle \varphi_j\rangle$. Let $\hat w_j(\lambda,\omega)$ be the Fourier-Helgason transform of $w_j$, $(\lambda,\omega)\in\mathbb{R}\times \mathbb S^{k}$. As the symbol \eqref{symbol-2} coincides with the symbol \eqref{symbol-1} for every $\omega\in \mathbb S^{k}$, our problem is equivalent to that of Proposition \ref{inj-prop-1}. This concludes the proof.
\hfill $\square$
In a similar way, using Proposition \ref{inj-prop-2} one can prove the following proposition:
\begin{prop} \label{inj-prop-4} Let $w\in C^{4,\alpha}_{\mu,\mu}(\mathbb{R}^n\setminus\mathbb{R}^k)$ be a solution to $$\Delta^2 w-\frac{ A_p}{r^4}w=0\quad\text{in }\mathbb{R}^n\setminus\mathbb{R}^k.$$ Then $w\equiv 0$.\end{prop}
\section{Injectivity of $L_{\bar\varepsilon}$ on $C^{4,\alpha}_{\mu,\mathscr{N}}(\Omega\setminus\Sigma)$} In this section we study the injectivity of the linearized operator $$L_{\bar\varepsilon}w:=\Delta^2w-p\bar u_{\bar\varepsilon}^{p-1}w.$$
We shall use the following notations:
$$\Omega_{\bar \varepsilon}:=\Omega\setminus \cup _{i=1}^KB_{\varepsilon_i}(\Sigma_i),\quad f^+:=\max\{f,0\},\quad f^-:=\min\{f,0\}.$$
\begin{lem} There exists $\varepsilon_0>0$ such that if $\varepsilon_i<\varepsilon_0$ for every $i$, then after a suitable normalization of $u_1$, the operator $L_{\bar \varepsilon}$ satisfies the maximum principle in $\Omega_{\bar\varepsilon}$, that is \begin{align*} \left\{\begin{array}{ll} L_{\bar\varepsilon}w\geq 0&\quad\text{in }\Omega_{\bar\varepsilon} \\ w\geq0&\quad\text{on }\partial \Omega_{\bar \varepsilon} \\ \Delta w\leq0&\quad\text{on }\partial \Omega_{\bar \varepsilon} \end{array}\right.\quad \Longrightarrow \quad w\geq 0\text{ and }\Delta w \leq0\text{ in }\Omega_{\bar\varepsilon}.\end{align*} \end{lem}
\begin{proof} Let $v$ and $\tilde v$ be given by \begin{align*} \left\{\begin{array}{ll} - \Delta v=-(\Delta w)^-&\quad\text{in }\Omega_{\bar \varepsilon} \\ v=w&\quad\text{on }\partial\Omega_{\bar\varepsilon},\quad \end{array}\right. \left\{\begin{array}{ll} -\Delta \tilde v=(\Delta w)^+&\quad\text{in }\Omega_{\bar \varepsilon} \\ \tilde v=0&\quad\text{on }\partial\Omega_{\bar\varepsilon}.\quad \end{array}\right. \end{align*} Then $v\geq0$ and $\tilde v\geq 0$ in $\Omega_{\bar\varepsilon}$. We shall show that $\tilde v=0$, and hence $v=w$.
It follows that \begin{align*} \left\{\begin{array}{ll} - \Delta (v-w)=(\Delta w)^+\geq 0&\quad\text{in }\Omega_{\bar \varepsilon} \\ v-w=0&\quad\text{on }\partial\Omega_{\bar\varepsilon},\quad \end{array}\right. \left\{\begin{array}{ll} -\Delta (\tilde v+w)=-(\Delta w)^-\geq0&\quad\text{in }\Omega_{\bar \varepsilon} \\ \tilde v+w\geq 0&\quad\text{on }\partial\Omega_{\bar\varepsilon}.\quad \end{array}\right. \end{align*} Therefore, $$v-w\geq0,\quad -w\leq\tilde v\quad \text{in }\Omega_{\bar\varepsilon}\quad \text{and }\frac{\partial (v-w)}{\partial \nu}\leq0\text{ on }\partial\Omega_{\bar\varepsilon},$$ where $\nu$ is the outward unit normal vector. We compute \begin{align*} \int_{\Omega_{\bar\varepsilon}} (v-w)\Delta^2 wdx&=\int_{\Omega_{\bar\varepsilon}}\Delta(v-w)\Delta wdx+\int_{\partial\Omega_{\bar \varepsilon}}\left((v-w)\frac{\partial \Delta w}{\partial\nu}-\Delta w\frac{\partial (v-w)}{\partial\nu}\right) d\sigma \\ &=-\int_{\Omega_{\bar \varepsilon}}[(\Delta w)^+]^2dx -\int_{\partial\Omega_{\bar \varepsilon}} \Delta w\frac{\partial (v-w)}{\partial\nu}d\sigma \\ & \leq -\int_{\Omega_{\bar \varepsilon}}[(\Delta w)^+]^2dx.\end{align*} Thus \begin{align}\label{31} \int_{\Omega_{\bar \varepsilon}}[(\Delta w)^+]^2dx &\leq -\int_{\Omega_{\bar \varepsilon}}(v-w)\Delta^2 wdx \notag\\ &\leq p\int_{\Omega_{\bar \varepsilon}} \bar u_{\bar\varepsilon}^{p-1}(-w)(v-w) dx\notag\\ &\leq p\int_{\Omega_{\bar\varepsilon}}\bar u^{p-1}_{\bar\varepsilon}\tilde v(v-w)dx\notag \\&\leq \delta \int_{\Omega_{\bar\varepsilon}}\sum_{i=1}^K\frac{1}{|x-x_i|^4} \tilde v (v-w)dx\notag\\ & \leq \delta \sum_{i=1}^K\left(\int_{\Omega_{\bar\varepsilon}} \frac{\tilde v(x)^2}{|x-x_i|^4} dx \right)^\frac12\left(\int_{\Omega_{\bar\varepsilon}} \frac{(v(x)-w(x))^2}{|x-x_i|^4} dx \right)^\frac12,\end{align} where $\delta>0$ can be chosen arbitrarily small by normalizing $u_1$ so that $pr^4u_1^{p-1}(r)\leq \delta$ for $r\geq 1$. Since $\tilde v=0$ on $\partial\Omega_{\bar\varepsilon}$ (the same argument applies to $v-w$), integrating by parts we obtain \begin{align*} \int_{\Omega_{\bar\varepsilon}} \frac{\tilde v(x)^2}{|x-x_i|^4} dx &=\frac{-1}{2(n-4)} \int_{\Omega_{\bar\varepsilon}}\tilde v(x)^2 \Delta \frac{1}{|x-x_i|^2} dx\\ &=\frac{-1}{n-4} \int_{\Omega_{\bar\varepsilon}} \frac{\tilde v\Delta\tilde v+|\nabla\tilde v|^2}{|x-x_i|^2} dx\\ &\leq \frac{-1}{n-4} \int_{\Omega_{\bar\varepsilon}} \frac{\tilde v\Delta\tilde v}{|x-x_i|^2} dx\\ &\leq\frac{1}{n-4} \left( \int_{\Omega_{\bar\varepsilon}} \frac{\tilde v(x)^2}{|x-x_i|^4} dx \right)^\frac12 \left( \int_{\Omega_{\bar\varepsilon}} (\Delta\tilde v (x))^2 dx\right)^\frac12, \end{align*} which gives $$ \int_{\Omega_{\bar\varepsilon}} \frac{\tilde v(x)^2}{|x-x_i|^4} dx\leq \frac{1}{(n-4)^2}\int_{\Omega_{\bar\varepsilon}} (\Delta \tilde v(x))^2dx= \frac{1}{(n-4)^2}\int_{\Omega_{\bar\varepsilon}} [(\Delta w(x))^+]^2dx.$$ Going back to \eqref{31} we obtain $$ \int_{\Omega_{\bar \varepsilon}}[(\Delta w)^+]^2dx\leq \delta K \frac{1}{(n-4)^2} \int_{\Omega_{\bar \varepsilon}}[(\Delta w)^+]^2dx,$$ and hence $(\Delta w)^+=0$, provided $\delta$ is chosen so small that $\delta K<(n-4)^2$.
We conclude the lemma.
\end{proof}
\begin{rem} $L_{\bar\varepsilon}$ satisfies the maximum principle on $\cup_{i=1}^K( B_\sigma(\Sigma_i)\setminus B_{\varepsilon_i}(\Sigma_i))$ for $0<\varepsilon_i<\varepsilon_0$. \end{rem}
\begin{lem}\label{7.2}Fix $\varepsilon_0>0$ such that $L_{\bar \varepsilon}$ satisfies maximum principle on $\Omega_{\bar\varepsilon}$. Let $4-N<\gamma<0$ be fixed. Let $w_{\bar\varepsilon}$ be a solution to $L_{\bar\varepsilon} w_{\bar\varepsilon}=f_{\bar\varepsilon}$ on $\Omega_{\bar\varepsilon}$ for some $f_{\bar\varepsilon}\in C^{0,\alpha}_{\gamma-4}(\Omega_{\bar\varepsilon})$, and $0<\varepsilon_i\leq\varepsilon_0$. Assume that $w_{\bar\varepsilon}=\Delta w_{\bar\varepsilon}=0$ on $\partial\Omega$. Then there exists $C>0$ such that \begin{align}\label{wve-est}\|w_{\bar\varepsilon}\|_{4,\alpha,\gamma}\leq C\left( \|f_{\bar\varepsilon}\|_{0,\alpha,\gamma-4}+\sum_{i=1}^K\left(\varepsilon_i^{-\gamma}\|w_{\bar\varepsilon}\|_{C^0(\partial B_{\varepsilon_i}(\Sigma_i))}+\varepsilon_i^{2-\gamma}\|\Delta w_{\bar\varepsilon}\|_{C^0(\partial B_{\varepsilon_i}(\Sigma_i))} \right)\right) .\end{align} \end{lem}
\begin{proof} Let $\sigma>0$ be as in Section \ref{section-higher} so that $\bar u_\varepsilon$ is supported in $\cup_{i=1}^KB_{\sigma}(\Sigma_i)$. We fix a smooth positive function $\phi$ on $\Omega\setminus \cup \Sigma_i$ such that $\phi(x)=d(x,\Sigma_i)^\gamma$ in each $B_\sigma(\Sigma_i)$.
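In the computation below we use the elementary radial identities, valid in $\mathbb{R}^N\setminus\{0\}$,
$$\Delta |x|^{\gamma}=\gamma(\gamma+N-2)|x|^{\gamma-2},\qquad \Delta^2 |x|^{\gamma}=\gamma(\gamma-2)(\gamma+N-2)(\gamma+N-4)|x|^{\gamma-4},$$
the second one being obtained by applying the first one twice; note that $(\gamma+N-2)(\gamma+N-4)=N^2+\gamma^2+2N\gamma-6N-6\gamma+8$.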
For simplicity we assume that $\Sigma_i$ is a point $x_i$. Then $\phi(x)=|x-x_i|^\gamma$ on $B_\sigma(\Sigma_i)$. We compute $$\Delta^2 \phi(x)=c_{N,\gamma}|x-x_i|^{\gamma-4},\quad c_{N,\gamma}:=\gamma(\gamma-2)(N^2+\gamma^2+2N\gamma-6N-6\gamma+8)>0,$$ and $$\Delta \phi(x)=\tilde c_{N,\gamma}|x-x_i|^{\gamma-2},\quad \tilde c_{N,\gamma}:=\gamma(\gamma+N-2)<0.$$ This shows that for a suitable choice of $u_1$, we have for some $\delta>0$ $$L_{\bar\varepsilon} \phi(x)=\Delta^2\phi-p\bar u_{\bar\varepsilon}^{p-1}\phi\geq \delta |x-x_i|^{\gamma-4}\quad\text{on }{\bf \Omega}:=\cup_{i=1}^K B_{\sigma}(\Sigma_i)\setminus B_{\varepsilon_i}(\Sigma_i).$$ Therefore, we can choose $c_{1,{\bar\varepsilon}}\approx \|f_{\bar\varepsilon}\|_{0,\alpha,\gamma-4}$ so that $$ L_{\bar\varepsilon}(w_{\bar\varepsilon}+c_{1,{\bar\varepsilon}}\phi)\geq0 \quad\text{on } \bf\Omega.$$ We can also choose \begin{align*}c_{2,{\bar\varepsilon}}\approx & \sum_{i=1}^K\left(\varepsilon_i^{-\gamma}\|w_{\bar\varepsilon}\|_{C^0(\partial B_{\varepsilon_i}(\Sigma_i))}+\varepsilon_i^{2-\gamma}\|\Delta w_{\bar\varepsilon}\|_{C^0(\partial B_{\varepsilon_i}(\Sigma_i))} \right) \\ &\quad +\sum_{i=1}^K\left(\|w_{\bar\varepsilon}\|_{C^0(\partial B_{\sigma}(\Sigma_i))}+\|\Delta w_{\bar\varepsilon}\|_{C^0(\partial B_{\sigma}(\Sigma_i))} \right)\\&=: c_{3,{\bar\varepsilon}}+c_{4,{\bar\varepsilon}}, \end{align*} so that $$ w_{\bar\varepsilon}+(c_{1,{\bar\varepsilon}}+c_{2,{\bar\varepsilon}})\phi\geq 0\quad \text{and } \Delta w_{\bar\varepsilon}+(c_{1,{\bar\varepsilon}}+c_{2,{\bar\varepsilon}})\Delta \phi\leq 0 \quad\text{on } \bf\Omega. $$ Then by the maximum principle we have that (to get the other inequality use $-\phi$) $$|w_{\bar\varepsilon}|\leq (c_{1,{\bar\varepsilon}}+c_{2,{\bar\varepsilon}})\phi \quad\text{and }|\Delta w_{\bar\varepsilon}|\leq -(c_{1,{\bar\varepsilon}}+c_{2,{\bar\varepsilon}})\Delta\phi\quad\text{in }\bf\Omega.$$ Since $\Delta^2w_{\bar\varepsilon}=f_{\bar\varepsilon}$ in $\Omega_{\bar\varepsilon}\setminus\bf\Omega$, we get that $$|w_{\bar\varepsilon} (x) |+|\Delta w_{\bar\varepsilon} (x)|\lesssim (c_{1,{\bar\varepsilon}}+c_{2,{\bar\varepsilon}})\quad\text{for } x\in \Omega_{\bar\varepsilon}\setminus\bf\Omega.$$ We claim that \begin{align}c_{4,{\bar\varepsilon}}\lesssim c_{3,{\bar\varepsilon}}+\|f_{\bar\varepsilon}\|_{0,\alpha,\gamma-4}.\end{align} We assume by contradiction that the above claim is false. Then there exists a family of solutions $w_\ell=w_{{\bar\varepsilon}_\ell}$ to $L_{{\bar\varepsilon}_\ell}w_\ell=f_\ell$ with $0<\varepsilon_{i,\ell}<\varepsilon_0$, $f_\ell \in C^{0,\alpha}_{\gamma-4}(\Omega_{{\bar\varepsilon}_\ell})$, $w_\ell=\Delta w_\ell=0$ on $\partial\Omega$ such that \begin{align}\label{assump-33}c_{4,{\bar\varepsilon}_\ell}=1\quad\text{and } c_{3,{\bar\varepsilon}_\ell}+\|f_\ell\|\to0. \end{align} Then, up to a subsequence, $\Omega_{{\bar\varepsilon}_\ell}\to \tilde\Omega_{\bar\varepsilon}$, where $\tilde\Omega_{\bar\varepsilon}=\Omega\setminus\cup_{i=1}^KB_{\varepsilon_i}(\Sigma_i)$ for some $0\leq \varepsilon_i\leq\varepsilon_0$. Here $B_{\varepsilon_i}(\Sigma_i)=\Sigma_i$ if $\varepsilon_i=0$ for some $i$.
From the estimates on $w_\ell$ we see that $w_\ell\to w$ in $\bar \Omega\setminus\cup_{i=1}^KB_{\varepsilon_i}(\Sigma_i)$. Moreover, $w$ satisfies $$L_{\bar \varepsilon}w=0\quad\text{in }\tilde\Omega_{\bar\varepsilon},$$ where $L_{\bar\varepsilon}=\Delta^2-p\bar u_{\bar\varepsilon}^{p-1}$, with the understanding that if $\varepsilon_i=0$ for some $i$ then $\bar u_{\bar\varepsilon}=0$ on $B_\sigma(\Sigma_i)$. Notice that $w$ satisfies $$w=\Delta w=0\quad\text{on }\partial\Omega\cup\bigcup_{\varepsilon_i\neq 0}\partial B_{\varepsilon_i}(\Sigma_i).$$ If $\varepsilon_i=0$ for some $i$, then $w$ is bi-harmonic in $B_\sigma(\Sigma_i)\setminus \Sigma_i $, and as $w(x)=O(d(x,\Sigma_i)^{\gamma})$ with $4-N<\gamma<0$, we see that the singularity on $\Sigma_i$ is removable. Thus, we can use the maximum principle to conclude that $w=0$ in $\tilde\Omega_{\bar\varepsilon}$. This contradicts the first condition in \eqref{assump-33}.
In this way we obtain a constant $C>0$, independent of ${\bar\varepsilon}$ and depending only on the right-hand side of \eqref{wve-est}, such that $$|w_{\bar\varepsilon}|\leq C\phi\quad\text{and }|\Delta w_{\bar\varepsilon}|\leq C(1+|\Delta\phi| )\quad\text{in }\Omega_{\bar\varepsilon}. $$
The desired estimate follows from Schauder theory.
\end{proof}
\begin{lem} \label{7.3} Let $(w_\ell)\subset C^{4,\alpha}_\mu(B_\sigma(\Sigma_i))$ be a sequence of solutions to $L_1w_\ell=0$ in $B_\sigma(\Sigma_i)$, for some fixed $i\in\{1,2,\dots,K\}.$ If $|w_\ell|+|\Delta w_\ell|\leq C$ on $B_\sigma(\Sigma_i)\setminus B_\frac\sigma2(\Sigma_i)$ then $\|w_\ell\|_{C^{4,\alpha}_\mu(B_\sigma(\Sigma_i))}$ is uniformly bounded. \end{lem}
\begin{proof} It suffices to show that $$S_\ell:=\sup (r^{-\mu}|w_\ell|+r^{2-\mu}|\Delta w_\ell| )\leq C.$$ We assume by contradiction that the above supremum is not uniformly bounded. Let $x_\ell=(r_\ell,\theta_\ell,y_\ell)\in B_\sigma(\Sigma_i)$ be such that $$ S_\ell \approx r_\ell^{-\mu}|w_\ell (x_\ell)|+r_\ell^{2-\mu}|\Delta w_\ell (x_\ell)| .$$ We claim that $r_\ell\to 0$. On the contrary, if $r_\ell\to r_\infty\neq0$ (up to a subsequence, with $x_\ell\to x_\infty$), then setting $\bar w_\ell=\frac{w_\ell}{S_\ell}$ we see that $\bar w_\ell\to \bar w_\infty$, where $$ L_1\bar w_\infty=0\quad \text{in }B_\sigma(\Sigma_i),\quad \bar w_\infty\equiv 0\text{ in }B_\sigma(\Sigma_i)\setminus B_\frac\sigma2(\Sigma_i).$$ Therefore, $\bar w_\infty\equiv 0$ in $B_\sigma(\Sigma_i)$, which contradicts $$r_\infty^{-\mu}|\bar w_\infty (x_\infty)|+r_\infty^{2-\mu}|\Delta \bar w_\infty(x_\infty)| \approx 1 .$$
If $\Sigma_i=\{x_i\}$, we set $$\tilde w_\ell(r,\theta)=\frac{r_\ell^{-\mu}w_\ell(rr_\ell,\theta)}{S_\ell},\quad 0<r<\frac{\sigma}{r_\ell}.$$ Then $$r^{-\mu}|\tilde w_\ell(r,\theta)|+r^{2-\mu}|\Delta \tilde w_\ell(r,\theta)|\lesssim 1 \approx \sup_{\partial B_1}(|\tilde w_\ell|+|\Delta\tilde w_\ell|),$$ and $\tilde w_\ell$ satisfies $$L_{r_\ell}\tilde w_\ell=0\quad\text{in }B_\frac{\sigma}{r_\ell}.$$ If $\Sigma_i$ is higher dimensional, then $y_\ell\to y_\infty$, and we choose Fermi coordinates around $y_\infty$ so that $y_\infty=0$, and the coordinates are defined for $|y|<\tau$ for some $\tau>0$. Then we set $$\tilde w_\ell(r,\theta,y):=\frac{r_\ell^{-\mu}w_\ell(rr_\ell,\theta, r_\ell(y+\tilde{y}_\ell))}{S_\ell},\quad \tilde{y}_\ell:=\frac{y_\ell}{r_\ell},\quad 0<r<\frac{\sigma}{r_\ell},\, |y|<\frac{\tau}{2r_\ell}.$$ In this case $\tilde w_\ell$ satisfies the equation $L_{r_\ell}\tilde w_\ell =o(1)$.
In both cases we have that $\tilde w_\ell\to \tilde w_\infty\not\equiv 0$, and $\tilde w_\infty$ satisfies $r^{-\mu} |\tilde w_\infty|\leq C$. For the point singularity case, the limit function satisfies $$\Delta^2 \tilde w_\infty =\frac{A_p}{r^4}\tilde w_\infty\quad\text{in }\mathbb{R}^n\setminus\{0\},$$ where $A_p$ is given by \eqref{Ap}. By Proposition \ref{inj-prop-2} we get that $\tilde w_\infty\equiv0$, a contradiction.
For the higher dimensional case $$\Delta^2 \tilde w_\infty =\frac{A_p}{r^4}\tilde w_\infty\quad\text{in }\mathbb{R}^n\setminus \mathbb{R}^k,$$ and we get a contradiction by Proposition \ref{inj-prop-4}.
\end{proof}
\begin{lem} \label{inj-omega}There exists $\varepsilon_0>0$ sufficiently small such that if each $\varepsilon_i<\varepsilon_0$ then $$L_{\bar\varepsilon}:C^{4,\alpha}_{\mu,\mathscr{N}} (\Omega\setminus\Sigma)\to C^{0,\alpha}_{\mu-4}(\Omega\setminus\Sigma)$$ is injective. \end{lem}
\begin{proof} We assume by contradiction that for every $\varepsilon_0^\ell:=\frac1\ell$ there exists ${\bar\varepsilon}^\ell$ with each $\varepsilon_i^\ell<\varepsilon_0^\ell$ such that $L_{{\bar\varepsilon}^\ell}$ is not injective. Let $w_\ell \in C^{4,\alpha}_{\mu,\mathscr{N}} (\Omega\setminus\Sigma)$ be a non-trivial solution to $L_{{\bar\varepsilon}^\ell}w_\ell=0$.
We normalize $w_\ell$ so that $$\max_{\partial\Omega_{{\bar\varepsilon}^\ell}} \left( \rho(x)^{-\mu}|w_\ell(x)|+ \rho(x)^{2-\mu}|\Delta w_\ell(x)|\right)=1.$$ Then by Lemma \ref{7.2} we get that \begin{align}\label{est-35}\sup_{\Omega_{{\bar\varepsilon}^\ell}} \left( \rho(x)^{-\mu}|w_\ell(x)|+ \rho(x)^{2-\mu}|\Delta w_\ell(x)|\right)\leq C.\end{align}
First consider the case when $\Sigma$ is a set of finite points.
Assume that the above maximum is achieved on $\partial B_{\varepsilon_j^\ell}(x_j)$ for some $j$, and up to a translation, assume that $x_j=0$. We set $$\tilde w_\ell(x)=(\varepsilon_j^\ell)^{-\mu} w_\ell(\varepsilon_j^\ell x),\quad |x|<\frac{\sigma}{\varepsilon_j^\ell}.$$ Then $L_1\tilde w_\ell=0$ on $B_{R_\ell}$ for some $R_\ell\to\infty$, and $r^{-\mu}|\tilde w_\ell| +r^{2-\mu}|\Delta \tilde w_\ell|\leq C$ for $1\leq r\leq \frac{\sigma}{\varepsilon_j^\ell}$. Therefore, by Lemma \ref{7.3}, $$r^{-\mu}|\tilde w_\ell| +r^{2-\mu}|\Delta \tilde w_\ell|\leq C\quad\text{ for } r\leq \frac{\sigma}{\varepsilon_j^\ell}.$$ Hence $\tilde w_\ell\to \tilde w_\infty$, where $\tilde w_\infty $ satisfies $$L_1\tilde w_\infty=0\quad\text{in }\mathbb{R}^n\setminus\{0\},\quad r^{-\mu}|\tilde w_\infty| +r^{2-\mu}|\Delta \tilde w_\infty |\leq C .$$ Hence, by Proposition \ref{inj-prop-1} we have $\tilde w_\infty\equiv 0 $, a contradiction to $\max_{\partial B_1}(|\tilde w_\infty|+|\Delta \tilde w_\infty|)=1 $.
Next we consider the case of a higher dimensional singularity. Let $x_\ell=(r_\ell,\theta_\ell,y_\ell) $ be a point around $\Sigma_j$ for some $j$ such that $$r_\ell^{-\mu}|w_\ell(x_\ell)|+ r_\ell^{2-\mu}|\Delta w_\ell(x_\ell)| \approx \sup_{\Omega}\left( \rho(x)^{-\mu}|w_\ell|+ \rho(x)^{2-\mu}|\Delta w_\ell|\right) =:S_\ell. $$ We can also assume that $r_\ell\leq \varepsilon_j^\ell$, thanks to \eqref{est-35}. We shall take $r_\ell=\varepsilon_j^\ell$ if they are of the same order so that either $r_\ell=o(\varepsilon_j^\ell)$ or $r_\ell=\varepsilon_j^\ell$. We choose Fermi coordinates around $y_\infty$ (limit of $y_\ell$) so that $y_\infty=0$, and the coordinates are defined for $|y|<\tau$ for some $\tau>0$. We set $$\tilde w_\ell(r,\theta,y):=\frac{r_\ell^{-\mu}w_\ell(rr_\ell,\theta, r_\ell(y +\tilde{y}_\ell))}{S_\ell},\quad \tilde{y}_\ell:=\frac{y_\ell}{r_\ell},\quad 0<r<\frac{\sigma}{r_\ell},\, |y|<\frac{\tau}{2r_\ell}.$$
As before one gets a contradiction, thanks to Propositions \ref{inj-prop-3} and \ref{inj-prop-4}.
\end{proof}
\section{Uniform surjectivity of $L_{\bar\varepsilon}$ on $C^{4,\alpha}_{\mu,\mathscr{N}}(\Omega\setminus\Sigma)$}
Let $\rho$ be a smooth positive function on $\Omega\setminus\Sigma$, bounded away from zero outside the tubular neighborhoods $B_\sigma(\Sigma_i)$, and such that $\rho(\cdot )=\mathrm{dist}(\cdot,\Sigma_i)$ in $ B_\sigma(\Sigma_i)$ for every $i$. The weighted space $L^2_{\delta}(\Omega\setminus \Sigma)$ is defined by
$$L^2_{\delta}(\Omega\setminus \Sigma):=\left\{ w\in L^2_{loc}(\Omega\setminus\Sigma): \int_{\Omega} \rho^{-4-2\delta}|w|^2 dx<\infty \right\}.$$ Let $L^2_{-\delta}(\Omega\setminus\Sigma)$ be the dual of $L^2_{\delta}(\Omega\setminus \Sigma)$ with respect to the pairing $$L^2_{\delta}(\Omega\setminus \Sigma)\times L^2_{-\delta}(\Omega\setminus \Sigma)\,\ni\, (w_1,w_2)\longrightarrow \int_{\Omega}w_1w_2 \rho^{-4}dx.$$ We note that the following embedding is continuous $$C^{k,\alpha}_{\gamma}(\Omega\setminus\Sigma) \hookrightarrow L^2_{\delta}(\Omega\setminus\Sigma)\quad\text{for }\delta<\gamma+\frac{N-4}{2}.$$
The domain $D(L_{\bar\varepsilon})$ of the operator $L_{\bar\varepsilon}$ is the set of functions $w\in L^2_\delta$ (for simplicity we drop the domain $\Omega\setminus \Sigma$ from the notation) such that $L_{\bar\varepsilon} w=h\in L^2_{\delta-4}$ in the sense of distributions. One can show that the following elliptic estimate holds: $$\sum_{\ell=1}^4 \|\nabla^\ell w\|_{L^2_{\delta-\ell}(\Omega_\sigma)}\leq C(\|h\|_{L^2_{\delta-4}} +\|w\|_{L^2_\delta }),$$ where $\Omega_\sigma:=\cup_{i=1}^K(B_\sigma(\Sigma_i) \setminus \Sigma_i)$. In particular, $L_{\bar\varepsilon}: L^2_\delta\to L^2_{\delta-4}$ is densely defined, and it has closed graph. Moreover, if $\delta-\frac{N-4}{2}\not\in\{ \Re{\gamma_j^{\pm\pm}}:j=0,1,\dots\}$, then $L_{\bar\varepsilon}$ is Fredholm (see \cite{mazzeoEdge}).
We shall fix $\delta>0 $ slightly bigger than $\mu+\frac{N-4}{2}$,
where $\mu$ is fixed according to \eqref{mu-nu}. The adjoint of the operator \begin{align}\label{op}L_{\bar\varepsilon} :L^2_{-\delta}\to L^2_{-\delta-4}\end{align} is given by \begin{align} \label{op-adj}L^2_{\delta+4}\to L^2_{\delta},\quad w\mapsto \rho^4L_{\bar\varepsilon}(w\rho^{-4}).\end{align} Then the adjoint operator \eqref{op-adj} is injective, and $L_{\bar\varepsilon}$ in \eqref{op} is surjective. Using the isomorphism $$\rho^{2\delta}:L^2_{\tilde \delta}\to L^{2}_{2\delta+\tilde\delta},\quad w\mapsto \rho^{2\delta}w,$$ we identify the adjoint operator as $$L_{\bar\varepsilon}^*: L^2_{-\delta+4}\to L^2_{-\delta},\quad w\mapsto \rho^{4-2\delta}L_{\bar\varepsilon}(w\rho^{2\delta-4}) .$$ Now we consider the composition $$\mathcal{L}=L_{\bar\varepsilon} \circ L_{\bar\varepsilon}^* :L_{-\delta+4}^2\to L_{-\delta-4}^2, \quad w\mapsto L_{\bar\varepsilon}[\rho^{4-2\delta}L_{\bar\varepsilon}(w\rho^{2\delta-4})].$$ Then $\mathcal{L}$ is an isomorphism, and hence there exists a two sided inverse $$\GG_{\bar\varepsilon}:L^2_{-\delta-4}\to L^2_{-\delta+4}.$$ Consequently, the right inverse of $L_{\bar\varepsilon}$ is given by $G_{\bar\varepsilon}:=L_{\bar\varepsilon}^*\GG_{\bar\varepsilon}$. It follows from \cite{Mazzeo-Pacard96} that $$G_{\bar\varepsilon}: C^{0,\alpha}_{\nu-4} (\Omega\setminus\Sigma)\to C^{4,\alpha}_{\nu,\mathcal{N}}(\Omega\setminus\Sigma)$$
is bounded.
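In particular, $G_{\bar\varepsilon}$ is indeed a right inverse of $L_{\bar\varepsilon}$: since $\GG_{\bar\varepsilon}$ is a two sided inverse of $\mathcal{L}$,
$$L_{\bar\varepsilon}G_{\bar\varepsilon}=L_{\bar\varepsilon}L^*_{\bar\varepsilon}\GG_{\bar\varepsilon}=\mathcal{L}\,\GG_{\bar\varepsilon}=\mathrm{Id}.$$
Moreover, by construction $G_{\bar\varepsilon}f=L^*_{\bar\varepsilon}(\GG_{\bar\varepsilon}f)$ lies in the range of $L^*_{\bar\varepsilon}$, which explains the spaces appearing in the next two lemmas.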
\begin{lem} Let $\varepsilon_0>0$ be as in Lemma \ref{inj-omega}. Then the system $L_{\bar\varepsilon} w_1=0$, $w_1=L_{\bar\varepsilon}^* w_2$ with $w_1\in C^{4,\alpha}_{\nu,\mathcal{N}}(\Omega\setminus\Sigma)$ and $w_2\in C^{8,\alpha}_{\nu+4,\mathcal{N}}(\Omega\setminus\Sigma)$ has only the trivial solution. \end{lem}
\begin{proof} We set $w=\rho^{2\delta-4}w_2$. Then $L_{\bar\varepsilon}[\rho^{4-2\delta} L_{\bar\varepsilon} w]=0$. Multiplying the equation by $w$ and then integrating by parts we get $$0=\int_{\Omega}\rho^{4-2\delta}|L_{\bar\varepsilon} w|^2dx.$$ Since $\nu+2\delta>\mu$, we have $w\in C^{4,\alpha}_{\nu+2\delta}(\Omega\setminus\Sigma)\subset C^{4,\alpha}_{\mu}(\Omega\setminus \Sigma)$. Then by Lemma \ref{inj-omega} we get that $w=0$, equivalently $w_1=w_2=0$. \end{proof}
\begin{lem} There exists $\varepsilon_0>0 $ small such that if $0<\varepsilon_i<\varepsilon_0$ for every $i$, then the sequence of solutions $(w_{1,{\bar\varepsilon}^\ell})\subset C^{4,\alpha}_{\nu,\mathcal{N}}(\Omega\setminus\Sigma) \cap L^*_{{\bar\varepsilon}^\ell}[C^{8,\alpha}_{\nu+4,\mathcal{N}}(\Omega\setminus\Sigma)]$ to $L_{{\bar\varepsilon}^\ell}w_{1,{\bar\varepsilon}^\ell}=f_{{\bar\varepsilon}^\ell}$ is uniformly bounded in $C^{4,\alpha}_{\nu}(\Omega\setminus\Sigma)$, provided $(f_{{\bar\varepsilon}^\ell})$ is uniformly bounded in $C^{0,\alpha}_{\nu-4}(\Omega\setminus\Sigma)$. \end{lem}
\begin{proof} Assume by contradiction that the lemma is false. Then there exists a sequence of $K$-tuples $({\bar\varepsilon}^\ell)$ with $\varepsilon_i^\ell\to 0$ for each $i=1,\dots,K$, and $w_{1,{\bar\varepsilon}^\ell }\in C^{4,\alpha}_{\nu,\mathcal{N}}(\Omega\setminus\Sigma)\cap L^*_{{\bar\varepsilon}^\ell}[C^{8,\alpha}_{\nu+4,\mathcal{N}}(\Omega\setminus\Sigma)]$ with $L_{{\bar\varepsilon}^\ell}w_{1,{\bar\varepsilon}^\ell}=f_{{\bar\varepsilon}^\ell}$ such that $\|f_{{\bar\varepsilon}^\ell}\|_{C^{0,\alpha}_{\nu-4}(\Omega\setminus\Sigma)}\leq C$, while $\|w_{1,{\bar\varepsilon}^\ell}\|_{C^{4,\alpha}_{\nu}(\Omega\setminus\Sigma)}\to\infty$. By Lemma \ref{7.2} $$\|w_{1,{\bar\varepsilon}^\ell}\|_{C^{4,\alpha}_{\nu}(\Omega_{{\bar\varepsilon}^\ell})}\leq C+C \max_{\cup \partial B_{\varepsilon_i^\ell}(\Sigma_i)} \left( \rho^{-\nu}|w_{1,{\bar\varepsilon}^\ell}|+\rho^{2-\nu}|\Delta w_{1,{\bar\varepsilon}^\ell}|\right)=:C+CS_{{\bar\varepsilon}^\ell}.$$
First we consider the case when $\Sigma$ is a set of finitely many points. We distinguish the following two cases. \\
\noindent\textbf{Case 1} $S_{{\bar\varepsilon}^\ell}\leq C$.
In this case we proceed as in the proof of Lemma \ref{7.3}. Let $x_\ell=(r_\ell,\theta_\ell)$ be such that $$\sup_{\cup B_{\varepsilon_i^\ell}(\Sigma_i)} \left( \rho^{-\nu}|w_{1,{\bar\varepsilon}^\ell}|+\rho^{2-\nu}|\Delta w_{1,{\bar\varepsilon}^\ell}|\right)\approx \rho^{-\nu}(x_\ell)|w_{1,{\bar\varepsilon}^\ell}(x_\ell)|+\rho^{2-\nu}(x_\ell)|\Delta w_{1,{\bar\varepsilon}^\ell}(x_\ell)|=:S_\ell\to\infty.$$ Up to a subsequence, $x_\ell\in B_{\varepsilon_i^\ell}(\Sigma_i)$ for some fixed $i$. Then necessarily $r_\ell =o(\varepsilon_i^\ell)$. Setting $$\tilde w_{1,{\bar\varepsilon}^\ell}(r,\theta):=\frac{r_\ell^{-\nu}w_{1,{\bar\varepsilon}^\ell}(rr_\ell,\theta)}{S_\ell}$$ one would get that $ \tilde w_{1,{\bar\varepsilon}^\ell}\to \tilde w_1\not\equiv 0$ where $$\tilde L_1\tilde w_1=0\quad\text{in }\mathbb{R}^n\setminus\{0\}, \quad r^{-\nu}|\tilde w_1|\leq C,\quad \tilde L_1:=\Delta^2-\frac{A_p}{r^4}, $$ where $A_p$ is as in \eqref{Ap}. Since $\nu$ does not coincide with indicial roots of $\tilde L_1$, as in the proof of Proposition \ref{inj-prop-2} one obtains that $\tilde w_1\equiv0$, a contradiction. \\
\noindent\textbf{Case 2} $S_{{\bar\varepsilon}^\ell}\to\infty$.
In this case we first divide $w_{1,{\bar\varepsilon}^\ell}$ by $S_{{\bar\varepsilon}^\ell}$. Then we consider the scaling with respect to $\varepsilon_i^\ell$ instead of $r_\ell$ (see Lemma \ref{inj-omega}) and, proceeding as before, we get that the rescaled functions $\tilde w_{1,{\bar\varepsilon}^\ell}$ converge to some $\tilde w_1\not\equiv 0$ where $$L_1\tilde w_1=0\quad\text{in }\mathbb{R}^n\setminus\{0\}, \quad r^{-\nu}|\tilde w_1|\leq C. $$ Since $\tilde w_1$ decays at infinity, its decay rate is determined by the indicial roots of $L_1$ (which are exactly the same as those of $\Delta^2$) at infinity. In fact, $\tilde w_1$ would be bounded by $r^{4-N}$ at infinity, see e.g. the proof of Proposition \ref{inj-prop-1}.
Since $w_{1,{\bar\varepsilon}^\ell}\in L^*_{{\bar\varepsilon}^\ell}[C^{8,\alpha}_{\nu+4,\mathcal{N}}(\Omega\setminus\Sigma)]$, we have $w_{1,{\bar\varepsilon}^\ell}=\rho^{4-2\delta}L_{{\bar\varepsilon}^\ell} w_{2,{\bar\varepsilon}^\ell}$ for some $w_{2,{\bar\varepsilon}^\ell}\in C^{8,\alpha}_{\nu+2\delta,\mathcal{N}}(\Omega\setminus\Sigma)$. As $2\delta+\nu>\mu$, applying Lemmas \ref{7.2} and \ref{7.3}, we can show that the scaled functions $$\tilde w_{2,{\bar\varepsilon}^\ell}(r,\theta):=\frac{\varepsilon_{i,\ell}^{-\nu-2\delta}w_{2,{\bar\varepsilon}^\ell}(r\varepsilon_{i,\ell},\theta)}{S_\ell},$$ converge to a limit function $\tilde w_2$, where $$ L_1\tilde w_2= r^{2\delta-4}\tilde w_1\quad\text{in }\mathbb{R}^n\setminus\{0\}, \quad r^{-\nu-2\delta}|\tilde w_2|\leq C.$$ Thus, $ L_1[r^{4-2\delta} L_1 \tilde w_2]=0$. We multiply this equation by $\tilde w_2$ and integrate it on $\mathbb{R}^N$. Then an integration by parts leads to $ L_1 \tilde w_2=0$ (this is justified because of the decay of $\tilde w_1$ at infinity, provided we choose $\delta>0$ sufficiently close to $\mu+\frac{N-4}{2}$). Again, as $2\delta+\nu>\mu$, by Proposition \ref{inj-prop-1} we have $\tilde w_2=\tilde w_1=0$, a contradiction.
When $\Sigma$ is of positive dimension, we need to do the scaling as in Lemmas \ref{inj-omega} and \ref{7.3}. We now prove that the limit is independent of the variable $y$. The argument is based on the theory of edge operators and their parametrices.\footnote{We are very grateful to Rafe Mazzeo for explaining the argument to us; it was already mentioned in \cite{Mazzeo-Pacard96}.}
As in the previous section, we set $$\tilde w_{1,{\bar\varepsilon}^\ell}(r,\theta,y):=\frac{r_\ell^{-\nu}w_{1,{\bar\varepsilon}^\ell}(rr_\ell,\theta, r_\ell(y +\tilde{y}_\ell))}{S_\ell},\quad \tilde{y}_\ell:=\frac{y_\ell}{r_\ell},\quad 0<r<\frac{\sigma}{r_\ell},\, |y|<\frac{\tau}{2r_\ell}.$$
We now proceed as before to show, because of the normalization, that $\tilde w_{1,{\bar\varepsilon}^\ell}\to \tilde w_1\not\equiv 0$ and
$$\tilde L_1\tilde w_1=0\quad\text{in }\mathbb{R}^n\setminus \mathbb{R}^k, \quad r^{-\nu}|\tilde w_1|\leq C,\quad \tilde L_1:=\Delta^2-\frac{A_p}{r^4}. $$
{\bf Claim: The function $\tilde w_1$ does not depend on $y \in \mathbb{R}^k$}. By the standard theory of the edge calculus (see \cite{mazzeoEdge}), each operator $L_{{\bar\varepsilon}^\ell}$ has a left parametrix $G_{{\bar\varepsilon}^\ell}$ since the solutions are normalized. In other words, there exists a compact operator (in the sense of pseudo-differential operators) $R_{{\bar\varepsilon}^\ell}$ such that
$$
G_{{\bar\varepsilon}^\ell} L_{{\bar\varepsilon}^\ell}=Id+R_{{\bar\varepsilon}^\ell}
$$
along every sequence ${\bar\varepsilon}^\ell$. Furthermore, since $R_{{\bar\varepsilon}^\ell}$ is compact, it maps polyhomogeneous functions into functions with fast decay. Applying the previous identity to $\tilde w_{1,{\bar\varepsilon}^\ell}$, one sees right away that $\tilde w_{1,{\bar\varepsilon}^\ell}$ is itself polyhomogeneous. Consider now any derivative $\partial_y^\alpha w_{1,{\bar\varepsilon}^\ell}$, denoted for simplicity $w^{(\alpha)}_{1,{\bar\varepsilon}^\ell}$. By appropriately normalizing the latter function and using the fact that the compact operator $R_{{\bar\varepsilon}^\ell}$ is itself polyhomogeneous in $y$, one gets, by passing to the limit in the previous identity, that the limiting function has to be in the kernel of $\tilde L_1$ with faster decay; hence it is identically zero.
Hence the function $\tilde w_1$ is independent of $y$. Therefore, we are back to the case of a point singularity. This proves the lemma.
\end{proof}
\section{Fixed point arguments}
To prove existence of solution to \eqref{eq-v} we use a fixed point argument on the space $C^{4,\alpha}_{\nu,\mathcal{N}}(\Omega\setminus\Sigma)$. Since we need to find $v$ such that $\bar u_{\bar\varepsilon}+v$ is positive in $\Omega$, we shall solve the equation \begin{align}\label{41}\left\{ \begin{array}{ll} \Delta^2 (\bar u_{\bar\varepsilon}+v)=|\bar u_{\bar\varepsilon}+v|^p &\quad\text{in }\Omega \\ \rule{0cm}{.5cm} v =\Delta v=0 &\quad\text{on }\partial\Omega. \end{array}\right. \end{align} Equivalently, $$ L_{\bar\varepsilon} v+f_{\bar\varepsilon} +Q(v)=0,\quad Q(v):=-|\bar u_{\bar\varepsilon}+v|^p+\bar u_{\bar\varepsilon}^p+p\bar u_{\bar\varepsilon}^{p-1}v,$$ where $f_{\bar\varepsilon}=\Delta^2 \bar u_{\bar\varepsilon}-\bar u_{\bar\varepsilon}^p$ as before. Applying the inverse of $L_{\bar\varepsilon}$, that is $G_{\bar\varepsilon}$, we rewrite the above equation as $$v+G_{\bar\varepsilon} f_{\bar\varepsilon} +G_{\bar\varepsilon} Q(v)=0.$$ The crucial fact we shall use is that the norm of $G_{\bar\varepsilon}$ is uniformly bounded if $\varepsilon_0$ is sufficiently small.
We note that if $v\in C^{4,\alpha}_{\nu,\mathcal{N}}(\Omega\setminus\Sigma)$ is a weak solution to the above equation then by the maximum principle we have that $\bar u_{\bar\varepsilon} +v>0$ in $\Omega$. This is a simple consequence of the fact that $\nu>-\frac{4}{p-1}$, and therefore, $\Delta (\bar u_{\bar\varepsilon}+v)<0$ and $\bar u_{\bar\varepsilon}+v>0$ in a small neighborhood of $\Sigma$, thanks to the asymptotic behavior of $\bar u_{\bar\varepsilon}$, $\Delta \bar u_{\bar\varepsilon}$ near $\Sigma$.
We recall that the error $f_{\bar\varepsilon}=\Delta^2 \bar u_{\bar\varepsilon}-\bar u_{\bar\varepsilon}^p$ satisfies the estimate $\|f_{\bar\varepsilon}\|_{0,\alpha,\nu-4}\leq C\varepsilon_0^{N-\frac{4p}{p-1}}$ if $\Sigma$ is a discrete set, and $\|f_{\bar\varepsilon}\|_{0,\alpha,\nu-4}\leq C \varepsilon_0^q$, $q=\frac{p-5}{p-1}-\nu$ otherwise. Let us first consider the case when $\Sigma$ is a set of finitely many points. Then, there exists $C_0>0$ such that $\|G_{\bar\varepsilon} f_{\bar\varepsilon} \|_{4,\alpha,\nu}\leq C_0 \varepsilon_0^{N-\frac{4p}{p-1}}$. This suggests to work on the ball $$\mathscr{B}_{\varepsilon_0,M}=\left\{ v\in C^{4,\alpha}_{\nu} :\|v\|_{4,\alpha,\nu}\leq M\varepsilon_0^{N-\frac{4p}{p-1}}\right\},$$ for some $M>2C_0$ large. We shall show that the map $v\mapsto G_{\bar\varepsilon}[f_{\bar\varepsilon}+Q(v)]$ is a contraction on the ball $\mathscr{B}_{\varepsilon_0,M}$. To this end we shall assume that $\varepsilon_i\in [a\varepsilon_0,\varepsilon_0]$ for every $i=1,\dots,K$ for some fixed $a\in (0,1)$.
\begin{lem} Let $M_1>1$ be fixed. Then for $\varepsilon_0<<1$ we have $$\|Q(v_1)-Q(v_2)\|_{0,\alpha,\nu-4}\leq\frac{1}{M_1}\|v_1-v_2\|_{4,\alpha,\nu}\quad\text{for every }v_1,\,v_2\in \mathscr{B}_{\varepsilon_0,M}.$$ \end{lem}
\begin{proof} We start by showing that there exists $0<\tau<\sigma$ small, independent of $\varepsilon_0<<1$, such that \begin{align}\label{est-tau-1}|v(x)|\leq \frac{1}{10}\bar u_{\bar\varepsilon}(x)\quad \text{for every }x\in \cup_{i=1}^K B_\tau(x_i),\, v\in \mathscr{B}_{\varepsilon_0,M}.\end{align} To prove this we recall that for any fixed $R>1$ there exists $c_1,\,c_2>1$ such that $$\frac{1}{c_1}\leq |x|^\frac{4}{p-1}u_{\varepsilon_i}(x)\leq c_1\quad \text{for }|x|\leq R\varepsilon_0,$$ $$\frac{1}{c_2}\leq \varepsilon_0^{-N+\frac{4p}{p-1}}|x|^{N-4}u_{\varepsilon_i}(x)\leq c_2\quad\text{for }R\varepsilon_0\leq |x|\leq\tau .$$ On the other hand, $$\varepsilon_0^{-N+\frac{4p}{p-1}} \rho(x) ^{-\nu} |v(x)|\leq M .$$ As $\nu>4-N$, we have \eqref{est-tau-1} for some $\tau>0$ small.
We have \begin{align*} Q(v_1)-Q(v_2)&=\int_0^1\frac{d}{dt}|\bar u_{\bar\varepsilon} +v_1+t(v_2-v_1) |^pdt +p\bar u_{\bar\varepsilon}^{p-1}(v_1-v_2)\\ &=p(v_2-v_1)\int_0^1\left( |\bar u_{\bar\varepsilon} +v_1+t(v_2-v_1) |^{p-1}-\bar u_{\bar\varepsilon}^{p-1}\right)dt \\ &=: p(v_2-v_1)\int_0^1Q(v_1,v_2)dt. \end{align*}Next, using that $$ (1+a)^{p-1}=1+O(|a|)\quad \text{for } |a|\leq\frac{1}{2},$$
we estimate for $ x\in \cup_{i=1}^KB_{R\varepsilon_0}(x_i)$ \begin{align} |Q(v_1,v_2)|(x) & \leq C\bar u_{\bar\varepsilon}(x)^{p-2}(|v_1|(x)+|v_2|(x)) \notag \\ &\leq CMR^{\frac{4}{p-1}+\nu}\varepsilon_0^{N-4+\nu} \rho^{-4}(x) , \notag \end{align} and for $ x\in \cup_{i=1}^K(B_\tau(x_i)\setminus B_{R\varepsilon_0}(x_i))$ \begin{align} \notag |Q(v_1,v_2) |(x) & \leq C_{\tau,R}M \max\{\varepsilon_0^{N-\frac{4p}{p-1}},\, \varepsilon_0^{(N-\frac{4p}{p-1})(p-1)}\} \rho^{-4}(x). \end{align} For $ x\in \Omega\setminus \cup_{i=1}^KB_{\tau}(x_i)$ \begin{align}|Q(v_1,v_2)|(x) &\leq C(\bar u_{\bar\varepsilon}^{p-1}+|v_1|^{p-1}+|v_2|^{p-1} )(x) \notag \\ &\leq C_{\tau,M} \varepsilon_0^{(N-4)p-N}\rho^{-4}(x) , \notag \end{align} where in the last inequality we have used that, in this region, $$\bar u_{\bar\varepsilon} (x)+|v_1(x)|+|v_2(x)|\leq C_{\tau,M}\varepsilon_0^{N-\frac{4p}{p-1}}.$$ Combining these estimates we get for $\varepsilon_0<<1$ $$\|Q(v_1)-Q(v_2)\|_{0,0,\nu-4}\leq c_{\varepsilon_0}\|v_1-v_2\|_{4,\alpha,\nu},$$ where $c_{\varepsilon_0}\to0$ as $\varepsilon_0\to0$.
In order to estimate the weighted H\"older norm of $Q(v_1)-Q(v_2)$ we note that the function $|\bar u_{\bar\varepsilon}+v|^{p-1}$ is only $C^{0,p-1}$ for $1<p<2$, which in turn implies that $Q(v_1,v_2)$ is only $C^{0,p-1}$.
This suggests that we need to take the H\"older exponent $\alpha\leq p-1$.
For $0<s<\sigma$ we write \begin{align*} &s^{4-\nu+\alpha}\sup_{x,x'\in N_s\setminus N_\frac s2}\frac{|[Q(v_1)-Q(v_2)](x)-[Q(v_1)-Q(v_2)](x')|} {|x-x'|^{\alpha}} \\ &\leq 4 \|Q(v_1)-Q(v_2)\|_{0,0,\nu-4} \\&\quad +s^{4-\nu+\alpha}\sup_{x,x'\in N_s\setminus N_\frac s2,\, |x-x'|\leq\frac s4 }\frac{|[Q(v_1)-Q(v_2)](x)-[Q(v_1)-Q(v_2)](x')|} {|x-x'|^{\alpha}} . \end{align*} Notice that for $x,x'\in N_s\setminus N_\frac s2$ with $|x-x'|\leq \frac s4$, the line segment $[x,x']$ joining $x$ and $x'$ lies in $N_{2s}\setminus N_\frac s4$. The desired estimate follows on the region $\cup_{i=1}^KB_\tau(x_i)$ by estimating $Q(v_1,v_2)(x)-Q(v_1,v_2)(x')$ using the following gradient bound (we are using that $|\bar u_{\bar\varepsilon}+v|^{p-1}$ is $C^1$ in this region) \begin{align*}\nabla Q(v_1,v_2)&=(p-1)\left[ (\bar u_{\bar\varepsilon}+v_1+t(v_2-v_1))^{p-2}-\bar u_{\bar\varepsilon}^{p-2} \right]\nabla \bar u_{\bar\varepsilon} \\ &\quad + (p-1) (\bar u_{\bar\varepsilon}+v_1+t(v_2-v_1))^{p-2}\nabla[v_1+t(v_2-v_1)] \\ &=O(1)\bar u_{\bar\varepsilon}^{p-3}(|v_1|+|v_2|)|\nabla \bar u_{\bar\varepsilon}|+O(1) \bar u_{\bar\varepsilon}^{p-2}(|\nabla v_1|+|\nabla v_2|).\end{align*} In fact, gradient bounds can also be used for the region $\Omega\setminus\cup_{i=1}^K B_\tau(x_i)$ if $p\geq 2$. For $1<p\leq 2$, one can use the following inequality $$||\phi|^{p-1}(x)-|\phi|^{p-1}(x')|\leq |\phi(x)-\phi(x')|^{p-1}\leq \|\nabla \phi\|^{p-1}_{C^0([x,x'])}|x-x'|^{p-1},$$ with $\phi=\bar u_{\bar\varepsilon}$ and $\phi=\bar u_{\bar\varepsilon}+v_1+t(v_2-v_1)$.
This concludes the proof of the lemma.
\end{proof}
From the above lemma we see that for a suitable choice of $M$ and $M_1$, the map $v\mapsto G_{\bar\varepsilon}[f_{\bar\varepsilon}+Q(v)]$ maps the ball $\mathscr{B}_{\varepsilon_0,M}$ into itself and is a contraction there, for $\varepsilon_0<<1$. Hence, we get a solution to \eqref{41} as desired. \\
When $\Sigma$ is not discrete, one shows in a similar way that the map $v\mapsto G_{\bar\varepsilon}[f_{\bar\varepsilon}+Q(v)]$ is a contraction on the ball
$$\mathscr{B}_{\varepsilon_0,M}=\left\{ v\in C^{4,\alpha}_{\nu} :\|v\|_{4,\alpha,\nu}\leq M\varepsilon_0^q\right\},$$ for some suitable $M>>1$. Here $q=\frac{p-5}{p-1}-\nu$, and the parameter $\nu$ satisfies $-\frac{4}{p-1}<\nu<\min\{\frac{p-5}{p-1},\, \Re{(\gamma_0^{--})}\}$.
\section{Proofs of Theorems \ref{main-theorem} and \ref{main2-theorem}}
This section is devoted to the completion of the proofs of the theorems stated in the introduction. In the previous section, we constructed {\sl for a fixed $\bar \varepsilon$} a solution of \eqref{eq-domain}.
\vspace{0.5cm}
\noindent{{\sl Proof of Theorem \ref{main-theorem}: }} As noticed already in \cite{Mazzeo-Pacard96}, the modifications are very minor. Recall the equation
\begin{align} P_{g_0}u=u^\frac{n+4}{n-4}\quad\text{in }M\setminus \Sigma,\notag \end{align}
where $g_0$ is a fixed metric. Using Fermi coordinates and the rescaled Delaunay-type solutions, one sees that, since the linearized operator is the bilaplacian with {\sl lower order terms}, those terms disappear in the rescaling/blow-up, and one can prove in an exactly parallel way that the linearization is uniformly surjective provided $\varepsilon$ is small enough. The geometric assumptions in the theorem then ensure that the solution constructed via the fixed point argument is positive.
\vspace{0.5cm} \noindent{{\sl Proof of Theorem \ref{main2-theorem}: }} The statement follows from a combination of the solution constructed in the previous section and the application of the implicit function theorem as described in \cite{Mazzeo-Pacard96}. To get the infinite dimensionality of the solution space, we invoke as in \cite{Mazzeo-Pacard96}, the edge calculus in \cite{mazzeoEdge}.
\section{Appendix: Singular radial solutions in $\mathbb{R}^N\setminus\{0\}$}
In this appendix, we collect several results related to the ODE analysis of Delaunay-type solutions for our problem (see \cite{Gazzola,GWZ}). For the sake of completeness, we provide the proofs. Furthermore, since we need rather fine properties of these solutions, we also streamline some of the arguments in the above-mentioned papers.
\begin{lem} Let $u$ be a radial solution to \eqref{singu11} with $\frac{N}{N-4}<p<\frac{N+4}{N-4}$ as given by Theorem \ref{exists1}. Then \begin{align}\notag r^4u^{p-1} (r)\leq \frac{p+1}{2}k(p,N).\end{align} \end{lem}
\begin{proof} Set $$\tilde u(y)=|x|^{N-4}u(x),\quad x=\frac{y}{|y|^2}.$$ Then $\tilde u$ satisfies \begin{align}\label{eq-transformed}\Delta^2\tilde u=|y|^{\alpha}\tilde u^p\quad\text{in }\mathbb{R}^N,\quad \alpha:=(N-4)p-(N+4)\in (-4,0), \end{align} and $\tilde u$ does not have any singularity at the origin. Now we set $$\bar u(t)=r^\frac{4+\alpha}{p-1}\tilde u(r)=r^{-\frac{4}{p-1}}u(\frac1r),\quad t=\log r.$$ One checks that $$\bar u(t)\to0, \quad \bar u'(t)\to0,\quad \bar u''(t)\to0,\quad \bar u'''(t)\to0\quad \text{as }t\to-\infty. $$ Moreover, \begin{align*} \bar u''''(t)+K_3\bar u'''(t)+K_2\bar u'' +K_1\bar u'(t)+K_0\bar u(t)=\bar u^p(t), \end{align*} where (see e.g. \cite{GWZ,Gazzola} ) \begin{align*} K_0&:= \frac{4+\alpha}{(p-1)^4}\left[2(N-2)(N-4)(p-1)^3+(4+\alpha)(N^2-10N+20)(p-1)^2 \right. \\ &\qquad\left. -2(4+\alpha)^2(N-4)(p-1)+(4+\alpha)^3 \right] \\ &=k(p,N) \\ K_1&:=-\frac{2}{(p-1)^3}\left[ (N-2)(N-4)(p-1)^3+(4+\alpha)(N^2-10N+20)(p-1)^2 \right.\\ &\left. \qquad -3(\alpha^2+8\alpha+16)(N-4)(p-1)+2\alpha(\alpha^2+12\alpha+48)+128 \right] \\ &= -\frac{2}{(p-1)^3}\left[ (6N-N^2-8)p^3+(22N-N^2-56)p^2+(5N^2-14N-56)p-3N^2-8-14N \right] ,\\ K_2&:=\frac{1}{(p-1)^2}\left[ (N^2-10N+20)(p-1)^2-6(4+\alpha)(N-4)(p-1)+6\alpha(\alpha+8)+96 \right], \\ K_3&:=\frac{2}{p-1}\left[ (N-4)(p-1)-2(4+\alpha) \right]=\frac{2}{p-1}\left[ N+4-p(N-4)\right].
\end{align*}
It follows that $K_3>0$ for $\frac{N}{N-4}<p<\frac{N+4}{N-4}$, and $K_1$ vanishes at the following points $$p_1:=\frac{N+4}{N-4},\quad p_2^\pm:=\frac{6-N\pm 2\sqrt{N^2-4N+8}}{N-2}.$$ We also have that $p_2^-<0<p_2^+<\frac{N}{N-4}$. In particular, as $K_1(\infty)>0$, we have that $K_1<0$ for $\frac{N}{N-4}<p<\frac{N+4}{N-4}$.
Let us now define the energy
$$E(t):=\frac{1}{p+1}\bar u^{p+1}(t) -\frac{K_0}{2}\bar u(t)^2-\frac{K_2}{2}|\bar u'(t)|^2+\frac12|\bar u''(t)|^2.$$ If $\bar u'(t_1)=0$ for some $t_1\in\mathbb{R}$, then following the proof of \cite[Lemma 6]{Gazzola} we get $$E(t_1)-E(-\infty)=K_1\int_{-\infty}^{t_1}|\bar u'(t)|^2dt-K_3\int_{-\infty}^{t_1}|\bar u''(t)|^2dt\leq0.$$ Thus, $\bar u'(t_1)=0$ implies that $$\bar u^{p-1}(t_1)\leq \frac{p+1}{2}K_0.$$ The proof follows from this, and the asymptotic behavior of $u$ at the origin. \end{proof}
Next we address the uniqueness of solutions to \eqref{eq-transformed}. We start with the following regularity lemma:
\begin{lem} Let $u$ be a non-negative bounded radial solution to \eqref{eq-transformed} on $B_1\setminus\{0\}$. Then $u$ is H\"older continuous for every $\alpha\in (-4,0)$, and it is $C^2$ for $\alpha\in (-2,0)$. Moreover, \begin{align}\label{Delta-u-1} \lim_{r\to0^+} r^{-\alpha-2}\Delta u(r) =c_{N,\alpha}u(0)^p ,\quad \alpha\in (-4,-2)\end{align} and \begin{align}\label{Delta-u-2} \lim_{r\to0^+} \frac{\Delta u(r)}{\log r} =c_{N,\alpha}u(0)^p ,\quad \alpha=-2,\end{align} for some constant (independent of $u$) $c_{N,\alpha}<0$. \end{lem}
\begin{proof} We set $$v(x)=\frac{1}{\gamma_N}\int_{B_1}\frac{1}{|x-y|^{N-4}}|y|^\alpha u^p(y)dy,\quad h(x):=u(x)-v(x),$$ where $\frac{1}{\gamma_N}\frac{1}{|x|^{N-4}}$ is a fundamental solution of $\Delta^2$ in $\mathbb{R}^N$. Since $u$ is bounded, one easily gets that $v$ is H\"older continuous for $\alpha\in (-4,0)$, and differentiating under the integral sign, $v\in C^2$ for $\alpha\in (-2,0)$. Thus, $h$ is a bounded biharmonic function on $B_1\setminus\{0\}$. Therefore, the singularity at zero is removable, and $h$ is smooth in $B_1$. This completes the proof of regularity of $u$.
Now we prove \eqref{Delta-u-1}. We fix $0<\delta<1$ such that $\alpha\delta-\alpha-2>0$. Using that $u$ is continuous, we estimate for $r=|x|\neq 0$ \begin{align*}\Delta v(x)&=c_Nu(0)^p \int_{|y|<r^\delta} \frac{|y|^\alpha}{|x-y|^{N-2}} (1+o(1))dy+O(1) \int_{1>|y|>r^\delta} \frac{|y|^\alpha}{|x-y|^{N-2}}dy \\ &= c_Nu(0)^p \int_{|y|<r^\delta} \frac{|y|^\alpha}{|x-y|^{N-2}} (1+o(1))dy+O(1)r^{\delta\alpha}, \end{align*} where $o(1)\to0$ uniformly in $y\in B_{r^\delta}$ as $r\to0$. Using a change of variable $y\mapsto |x|y$ we obtain \begin{align*} \int_{|y|<r^\delta} \frac{|y|^\alpha}{|x-y|^{N-2}}dy&=r^{2+\alpha} \int_{|y|<r^{\delta-1}} \frac{|y|^\alpha}{|\frac{x}r-y|^{N-2}}dy= r^{2+\alpha} \int_{\mathbb{R}^N} \frac{|y|^\alpha}{|\frac{x}r-y|^{N-2}}dy +o(1)r^{2+\alpha},\end{align*} where the last integral is finite as $-4<\alpha<-2$, and $o(1)\to0$ as $r\to0^+$.
Combining these estimates, and as $h$ is smooth, we get \eqref{Delta-u-1}.
To prove \eqref{Delta-u-2} we fix $0<\varepsilon<<1<<R<\infty$. As before we would get $$\Delta v(x)=c_Nu(0)^p \int_{|y|<\varepsilon} \frac{|y|^{-2}}{|x-y|^{N-2}} (1+o_\varepsilon(1))dy+O_\varepsilon(1),$$ and after a change of variable \begin{align*} \int_{|y|<\varepsilon} \frac{|y|^{-2}}{|x-y|^{N-2}} dy&=\int_{|y|<\frac\varepsilon r} \frac{|y|^{-2}}{|\frac xr-y|^{N-2}}dy=\int_{R<|y|<\frac\varepsilon r} \frac{|y|^{-2}}{|\frac xr-y|^{N-2}} dy+ O_R(1). \end{align*} Since $$\frac{1}{|\frac xr-y|^{N-2}}=\frac{1}{|y|^{N-2}}(1+o_R(1))\quad\text{for }|y|\geq R>>1,$$ we have \begin{align*} \int_{|y|<\varepsilon} \frac{|y|^{-2}}{|x-y|^{N-2}} dy =|S^{N-1}|(1+o_R(1)) \log\frac1r+O_\varepsilon(1)+O_R(1) .\end{align*}
Combining these estimates and first taking $r\to0^+$, and then taking $\varepsilon\to0^+, \, R\to\infty$ we obtain \eqref{Delta-u-2}.
\end{proof}
Next we prove uniqueness of radial solutions to \eqref{eq-transformed}. We shall use the following identity: \begin{align} \label{int-identity}w(r_2)-w(r_1)=\int_{r_1}^{r_2}\frac{1}{|S^{N-1}|t^{N-1}}\int_{B_t}\Delta w(x)dxdt,\quad w \text{ is radial}. \end{align}
\begin{lem} Let $u_1,u_2$ be two non-negative bounded radial solutions to \eqref{eq-transformed} on $\mathbb{R}^N\setminus\{0\}$ with $\alpha\in (-4,0)$. If $u_1(0)=u_2(0)$ then $u_1=u_2$ on $\mathbb{R}^N$.\end{lem}
\begin{proof}
Let us first assume that \begin{align}\label{assump}\lim_{|x|\to0^+}\Delta \bar u(x)=0,\quad \bar u:=u_1-u_2 .\end{align} Then using \eqref{int-identity} we obtain \begin{align} \label{baru}\bar u(x)=o(1)|x|^2\quad\text{as }|x|\to0.\end{align} By \eqref{int-identity}-\eqref{assump} \begin{align*}\Delta \bar u(r)&=\int_0^r\frac{1}{|S^{N-1}|t^{N-1}}\int_{B_t}|x|^\alpha(u_1^p(x)-u_2^p(x))dxdt \\ &=o(1)|\bar u|_r \int_0^r \frac{1}{t^{N-1}}\int_{B_t}|x |^{2+\alpha}\\&=o(1)|\bar u|_rr^{4+\alpha},\end{align*} where we have set $|\bar u|_r:=\sup_{0<t<r}t^{-2}|\bar u(t)|$. This leads to \begin{align*} \bar u(r)=o(1)|\bar u|_r\int_{0}^r\frac{1}{t^{N-1}}\int_{B_t}|x|^{4+\alpha}dxdt =o(1)|\bar u|_r r^{6+\alpha},\end{align*} which gives $$r^{-2}|\bar u(r)|\leq\frac12 \sup_{0<t<r}t^{-2}|\bar u(t)|\quad\text{for every }0<r\leq r_0,$$ for some $r_0>0$ sufficiently small. From this and \eqref{baru} we get that $\bar u\equiv 0$ in a small neighborhood of the origin, and consequently we have $\bar u\equiv 0$ in $\mathbb{R}^N$.
It remains to prove \eqref{assump}, and we do that in a few steps. \\
\noindent\textbf{Step 1} Assume that $\bar u(x)=O(1)|x|^\gamma$ for some $\gamma\geq 0$. Then setting $\tilde\gamma:=\alpha+\gamma+2$ we have \begin{align*}\Delta \bar u(x)=O(1)\left\{ \begin{array}{ll} |x|^{\tilde\gamma } &\quad\text{if }\tilde\gamma<0 \\ \log |x| &\quad\text{if }\tilde\gamma=0 \\ 1 &\quad\text{if } \tilde\gamma>0,\end{array}\right. \quad \bar u(x)=O(1)\left\{ \begin{array}{ll} |x|^{\tilde\gamma+2} &\quad\text{if } \tilde\gamma<0 \\ |x|^2 \log |x| &\quad\text{if } \tilde\gamma=0 \\|x|^2 &\quad\text{if } \tilde\gamma>0.\end{array}\right. \end{align*}
We set $\bar v:=v_1-v_2$, $\bar h:=h_1-h_2$ where $$v_i(x):=\frac{1}{\gamma_N}\int_{B_1}\frac{1}{|x-y|^{N-4}}|y|^\alpha u_i^p(y)dy,\quad h_i:=u_i-v_i,\,i=1,2.$$ Then using that $|u_1^p(x)-u_2^p(x)|\leq C|\bar u(x)|\leq C|x|^\gamma$ \begin{align*} \Delta\bar v(x)&=c_n\int_{B_1}\frac{|y|^\alpha}{|x-y|^{N-2}}(u_1^p(y)-u_2^p(y)) dy \\ &=O(1)\int_{B_1}\frac{|y|^{\alpha+\gamma}}{|x-y|^{N-2}}dy\\&=O(1)\left\{ \begin{array}{ll} |x|^{\alpha+\gamma+2} &\quad\text{if }\alpha+\gamma+2<0 \\ \log |x| &\quad\text{if }\alpha+\gamma+2=0 \\ 1 &\quad\text{if }\alpha+\gamma+2>0,\end{array}\right. \end{align*} thanks to Lemma \ref{lem9.4}. First part of Step 1 follows as $\bar h$ is smooth in $B_1$. The second part follows immediately by the first part and the identity \eqref{int-identity}. \\
\noindent\textbf{Step 2} The function $\bar u$ is $C^2$.
Since $\bar u(x)=O(1)$, we can use Step 1 with $\gamma=0$, and deduce that $\bar u(x)=O(|x|^{4+\alpha})$ (or the other growths at $0$). In fact, we can repeat this process finitely many times to eventually get that $\bar u(x)=O(|x|^2)$. Then, as $\alpha>-4$, from the integral representation of $\bar v$ it is easy to see that $\bar v$ is $C^2$, and consequently $\bar u$ is $C^2$. \\
\noindent\textbf{Step 3} \eqref{assump} holds.
Since $\bar u$ is $C^2$, $a:=\lim_{|x|\to0}\Delta \bar u(x)$ exists. If $a>0$ then a repeated use of \eqref{int-identity} yields that $\Delta\bar u\geq a$ on $\mathbb{R}^N$. In particular, $\bar u(x)-\bar u(0)\geq \frac{a}{2N} |x|^2$ on $\mathbb{R}^N$, a contradiction as $\bar u$ is bounded. The case $a<0$ is similar.
\end{proof}
The proof of the following lemma is straightforward.
\begin{lem} \label{lem9.4}For $q_1,q_2\in (0,N)$ there exists $c=c(N,q_1,q_2)>0$ such that for any $R>0$ we have $$\lim_{|x|\to0^+}|x|^{q_1+q_2-N}\int_{B_R}\frac{dy}{|x-y|^{q_1}|y|^{q_2}}=c\quad\text{ if }q_1+q_2>N, $$ and $$\lim_{|x|\to0^+} -(\log |x|)^{-1}\int_{B_R}\frac{dy}{|x-y|^{q_1}|y|^{q_2}}=c\quad\text{ if }q_1+q_2=N, $$ \end{lem}
\begin{thm} \label{thm-4.6}There exists a positive radial solution $u\in C^0(\mathbb{R}^N)\cap C^4(\mathbb{R}^N\setminus\{0\})$ to \eqref{eq-transformed} such that $u$ is monotone decreasing and $u$ vanishes at infinity. In fact, $u(r)\leq C r^{-\frac{4+\alpha}{p-1}}$ at infinity. \end{thm}
To prove the theorem we consider the auxiliary equation
\begin{align} \label{aux}\left\{\begin{array}{ll} \Delta^2 u=\lambda |x|^\alpha(1+u)^p &\quad\text{in }B_1\\ u={\frac{\partial u}{\partial \nu}}=0 &\quad\text{on }\partial B_1. \end{array}\right. \end{align} Next we prove existence of a positive radial solution to \eqref{aux} for some $\lambda>0$. This will be done by the Schauder fixed point theorem on the space $$X:= C^0_{rad}(\bar B_1),\quad \|u\|:=\|u\|_{C^0(\bar B_1)}.$$ We define $T:X\to X$, $u\mapsto\bar u$ where we have set \begin{align}\label{bar-u} \bar u(x):=\int_{B_1}G(x,y)|y|^\alpha (1+|u(y)|)^pdy,\end{align} where $G=G(x,y)$ is the Green function for $\Delta^2$ on $B_1$ with Dirichlet boundary conditions. It is easy to see that $T$ is well-defined, and in fact, $T$ is compact. Therefore, there exist $0<t_0\leq 1$ and $u_0\in X$ such that $t_0Tu_0=u_0$. Then $u_0$ is positive, monotone decreasing, and it satisfies \eqref{aux} with $\lambda=t_0$.
As $u_0$ is a supersolution to \eqref{aux} for $0<\lambda\leq t_0$, one can prove the existence of a positive, radially symmetric, monotone decreasing, minimal solution $u=u_\lambda$ to \eqref{aux} for every $0<\lambda\leq t_0$.
Next we prove uniqueness of the minimal solutions for $\lambda>0$ small. In order to do that let us recall the following Pohozaev identity from \cite[Theorem 7.27]{GGS}.
\begin{lem}Let $u$ be a solution to \begin{align}\label{26-f}\left\{ \begin{array}{ll} \Delta^2u=f(x,u) &\quad\text{in }\Omega\subset\mathbb{R}^N\\ u=\frac{\partial u}{\partial\nu}=0&\quad\text{on }\partial\Omega. \end{array}\right. \end{align} Then setting $F(x,t)=\int_0^tf(x,s)ds$ we have $$\int_{\Omega}\left[ F(x,u)+\frac1Nx\cdot F_x(x,u)-\frac{N-4}{2N}(\Delta u)^2 \right]dx=\frac{1}{2N}\int_{\partial \Omega}(\Delta u)^2(x\cdot\nu)d\sigma.$$ \end{lem}
Since $\int_{\Omega}uf(x,u)dx=\int_{\Omega}|\Delta u|^2dx$, and $x\cdot\nu=1$ on $\partial\Omega$ for $\Omega=B_1$, the above Pohozaev identity leads to \begin{align}\label{27} \int_{B_1}\left[ F(x,u)+\frac1Nx\cdot F_x(x,u)- \sigma uf(x,u)\right]dx \geq \left( \frac{N-4}{2N}-\sigma\right)\int_{B_1}(\Delta u)^2dx. \end{align} We also need the following Hardy-Sobolev inequality: \begin{align}\label{H-S} \left(\int_{B_1}\frac{|u|^{\frac{2(N-\beta)}{N-4}}}{|x|^\beta}dx\right)^\frac{N-4}{N-\beta}\leq c_0\int_{B_1}|\Delta u|^2dx\quad\text{for }u\in H^2_0(B_1),\end{align} where $B_1\subset\mathbb{R}^N,\,N\geq 5$ and $0<\beta<4$. This can be derived from the Sobolev inequality $\|u\|_{L^{2^*}}\leq C\|\Delta u\|_{L^2}$ ($2^*:=\frac{2N}{N-4}$), Hardy inequality $\|\frac{u}{|x|^2}\|_{L^2}\leq C\|\Delta u\|_{L^2}$ and H\"older inequality.
Using \eqref{27} and \eqref{H-S} one can prove the following lemma, see e.g. \cite[Proposition 2.2]{Ao1}.
\begin{lem} \label{unique-minimal} There exists $\lambda_0>0$ such that for every $\lambda\in(0,\lambda_0]$ the minimal solution $u=u_\lambda$ to \eqref{aux} is the unique solution on the space $C^0(\bar B_1)$. \end{lem}
From \cite[Theorem 6.2]{Rab} we know that the closure of the set of solutions $\{(\lambda, u)\}\subset \mathbb{R}\times X$ to \eqref{aux} is unbounded in $(0,\infty)\times X$. Therefore, there exists a unbounded sequence $(\lambda_k,u_{\lambda_k})\in(0,\infty)\times X$ of solutions to \eqref{aux}. Then necessarily $u_{\lambda_k}(0)=\max u_{\lambda_k}\to\infty$, and by Lemma \ref{unique-minimal}, $\lambda_k\not\to0$. We set $$v_k(x):=\frac{u_{\lambda_k}(r_kx)}{u_{\lambda_k}(0)},\quad r_k^{4+\alpha}\lambda_k u_{\lambda_k}(0)^{p-1}:=1.$$ Then $r_k\to 0$ and $v_k$ satisfies $$\Delta^2v_k=|x|^\alpha (\frac{1}{u_{\lambda_k}(0)}+v_k)^p\quad\text{in } B_\frac{1}{r_k},\quad 0\leq v_k\leq 1,\quad v_k(0)=1,\quad \Delta v_k\leq 0. $$ By elliptic estimates, up to a subsequence, $v_k\to v$ locally uniformly in $\mathbb{R}^N$, where $v$ is a non-trivial bounded positive radial solution to $$\Delta^2 v=|x|^\alpha v^p\quad\text{in }\mathbb{R}^N.$$ Now to prove the decay estimate of $v$ at infinity, we use that $v$ is monotone decreasing, and $\Delta v<0$ on $\mathbb{R}^N$. For $r>0$, by \eqref{int-identity}, we get \begin{align*} \Delta v(2r) &=\Delta v(r)+ \int_r^{2r}\frac{1}{|S^{N-1}|t^{N-1}}\int_{B_t}|x|^\alpha v^p(x)dxdt \\ &\geq \Delta v(r)+ \int_r^{2r}\frac{1}{|S^{N-1}|t^{N-1}}\int_{B_r}|x|^\alpha v^p(x)dxdt \\ &\geq \Delta v(r)+c_1 r^{\alpha+2}v^p(r) ,\end{align*} for some constant $c_1>0$. Thus $$\Delta v(r)+c_1 r^{\alpha+2}v^p(r) <0,$$ which leads to \begin{align*} v(2r) &=v(r)+ \int_r^{2r}\frac{1}{|S^{N-1}|t^{N-1}}\int_{B_t}\Delta v(x)dxdt \\ &\leq v(r)+ \int_r^{2r}\frac{1}{|S^{N-1}|t^{N-1}}\int_{B_r}\Delta v(x)dxdt \\ &\leq v(r)+c_2 \Delta v(r) r^2\\ &\leq v(r)-c_1c_2r^{4+\alpha} v^p(r), \end{align*} for some constant $c_2>0$.
This finishes the proof of Theorem \ref{thm-4.6}.
\bibliographystyle{alpha}
\section{Introduction}\label{S:Intro}
Given the short sinking timescale of elements heavier than helium in the atmospheres of WDs \citep{Koester-2009}, the large fraction of WDs that are polluted with heavy elements \citep{ZuckermanEtAl-2003,ZuckermanEtAl-2010,KoesterEtAl-2014} is readily explained by accretion of planetary material \citep{DebesSigurdsson-2002,Jura-2003,KilicEtAl-2006,Jura-2008}. The current view, based on the inferred composition of both WD atmospheres \citep{WolffEtAl-2002,DufourEtAl-2007,DesharnaisEtAl-2008,KleinEtAl-2010,GansickeEtAl-2012,JuraYoung-2014,HarrisonEtAl-2018,HollandsEtAl-2018,DoyleEtAl-2019, SwanEtAl-2019} and their discs \citep{ReachEtAl-2005,JuraEtAl-2007,ReachEtAl-2009,JuraEtAl-2009,BergforsEtAl-2014,Farihi-2016,ManserEtAl-2016,DennihyEtAl-2018} suggests that the polluting material is terrestrial-like and typically dry.
Orbiting dust is deduced from measurements of infrared excess, while gas is inferred from metal emission lines. The spatial distribution of the gas is typically within the WD tidal disruption radius, and it often orbits the star with some eccentricity \citep{GansickeEtAl-2006,GansickeEtAl-2008,DennihyEtAl-2016,DennihyEtAl-2018,CauleyEtAl-2018}. The origin of material at such close proximity to the WD is clearly not primordial \citep{GrahamEtAl-1990}, since the WD disruption radius is of the order of the progenitor star's main-sequence physical radius \citep{BearSoker-2013}. It is instead thought to originate from planetary bodies which are perturbed by some mechanism \citep{DebesSigurdsson-2002,BonsorEtAl-2011,DebesEtAl-2012,KratterPerets-2012,PeretsKratter-2012,ShapeeThompson-2013,MichaelyPerets-2014,VerasGansicke-2015,StoneEtAl-2015,HamersPortegiesZwart-2016,Veras-2016,PayneEtAl-2016,CaiazzoHeyl-2017,PayneEtAl-2017,PetrovichMunoz-2017,StephanEtAl-2017,SmallwoodEtAl-2018} to highly eccentric orbits with proximity to the WD, and are subsequently tidally disrupted to form a circumstellar disc of planetary debris.
To date, there exist very few detailed simulations of disc formation by tidal disruptions. The study of \cite{VerasEtAl-2014} constitutes the most detailed and relevant work thus far, which investigates the initial formation of white dwarf debris discs, caused by the tidal disruption of kilometer-sized asteroids ($\sim10^{14}$ kg). It follows a similar study by \cite{DebesEtAl-2012} which only considered the first initial tidal disruption of an extremely eccentric asteroid instead of the entire debris disc formation, while both studies used the same modified N-body code (\emph{PKDGRAV}). Under the conditions discussed in the \cite{VerasEtAl-2014} paper, the disrupted asteroid debris fill out a highly eccentric ring of debris, along the original asteroid trajectory. The material in this disc does not immediately accrete onto the white dwarf at this early stage, and instead the disc is required to evolve further, perhaps through various radiation processes \citep{VerasEtAl-2015}, into a more compact state.
Being conceptually similar, the study of \cite{WeissmanEtAl-2012} (based on the N-body code developed by \cite{MovshovitzEtAl-2012}) investigates the tidal disruption of the Sun-grazing, Kreutz-family progenitor. Their results show that the disruption around the Sun breaks the object into multiple clumps, depending on the exact density and perihelion distance assumed. They suggest that the observed size distribution of the Kreutz group can perhaps be produced; however, multiple returns are needed by the parent object and its initial ensuing fragments in order to provide the observed temporal separation of major fragments. The \cite{WeissmanEtAl-2012} and \cite{DebesEtAl-2012} studies underline the difficulties of modelling such tidal disruptions: since these objects are often highly eccentric (with $e$ approaching 1), time-step limitations make tracking of the entire orbit for multiple returns computationally infeasible. One can substantially reduce either the resolution or the eccentricity (or both) in order to circumvent this problem, which was precisely the solution adopted by \cite{VerasEtAl-2014} in order to enable multiple returns (orbits) in their simulations.
At the opposite end of the planetary size distribution, some studies consider the tidal disruption of gas giants, demonstrating the exact same problem. \cite{FaberEtAl-2005,LiuEtAl-2013} use an SPH code in order to simulate a close gas giant flyby around a star (i.e. a single tidal encounter) whereas \cite{GuillochonEtAl-2013} use a grid-based code and considered both single as well as multiple passage encounters. As in the \cite{VerasEtAl-2014} study, multiple returns were accomplished only by considerably lowering the assumed eccentricity of the planets.
Using a very simple analytical model, we demonstrate in Section \ref{S:Analytical} that one cannot simply change important characteristics like the eccentricity or size of the disrupted progenitor without directly (and substantially) affecting the properties of the debris that are produced by the tidal disruption. We therefore emphasize the main shortcomings of all previous studies:
\begin{itemize}
\item no previous study has investigated the detailed disruption of terrestrial-sized or dwarf-sized planets, despite their potential importance in terms of the typically inferred composition of the pollutants, the effect of having larger-than-asteroid size on the outcome of the disruption, the recent determination of oxygen fugacities which suggests that polluting rocky materials are geophysically and geochemically similar to Earth \citep{DoyleEtAl-2019}, and the implications of what could be a stripped core from a larger original object \citep{ManserEtAl-2019}.
\item the resolution in previous studies is orders of magnitude lower (a few $10^3$ particles) than the standard resolution currently used in modern SPH or N-body applications ($\sim 10^5-10^6$), due to the aforementioned time-step limitation.
\item in order to enable multiple returns, the orbital parameters of the disrupted parent bodies are contrived. Reducing the semi-major axis and eccentricity changes the outcome of tidal disruptions and in turn the ensuing debris discs.
\item studies that alternatively did consider realistic orbits were instead limited to calculating only the first tidal encounter, whereas the full disc formation typically requires multiple returns.
\end{itemize}
The goal of this study is therefore to resolve such difficulties by utilizing a new, hybrid approach to modelling tidal disruptions. Our approach is to omit unnecessary calculations far from the vicinity of the star, by fully following the disruption and coagulation of particles into fragments with SPH, \emph{only} when they are within the star's immediate environment. For the remainder of their orbits, fragment trajectories are calculated and tracked analytically assuming keplerian orbits. That is, we assume (for simplicity) that the disc of debris is largely collisionless, as well as dynamically unaffected by radiation or other processes, and then instantaneously transfer the fragments back to the tidal sphere for their next flyby, in an iterative process. Our assumptions are discussed and quantified.
With each disruption, the semi-major-axis dispersion of newly formed fragments depends on the exact size and orbit of their progenitor. The hybrid code handles the synchronization, timing and dissemination of SPH jobs. The disc formation is complete only upon reaching one of two outcomes: either all fragments have ceased disrupting given their exact size, composition and orbit; or fragment disruption is inhibited when reaching the numerical minimum size, that of a single SPH particle.
The hybrid approach enables studying tidal disruptions for any progenitor orbit (even objects originating from tens or hundreds of AU) with the same efficiency. The code easily handles partial disruptions (i.e., those resulting in little mass shedding when the pericentre distance is sufficiently large), since now the number of iterations is not limited by a large semi-major axis.
The layout of the paper is structured as follows: in Section \ref{S:Analytical} we first outline an analytical model of tidal disruptions. This model provides invaluable insights about the outcome of single tidal disruptions. While discussing its limitations, we develop a deeper understanding of what may be expected in performing full numerical tidal disruption simulations involving terrestrial-sized or dwarf-sized planets; In Section \ref{S:FullSPH} we then perform full-scale tidal disruption simulations using SPH. We discuss the code details and setup, show the various disruption outcomes which depend on our choice of pericentre distance, track the formation of the disc and examine the effect of applying initial rotation to the disrupted planet; In Section \ref{S:Hybrid} we introduce our hybrid model, describing its principles and the validity of its assumptions. We then verify and corroborate our hybrid model results against full SPH simulations, showing that the two methods are in agreement, while discussing how the hybrid method outperforms the former. We show that unlike previous tidal disruption studies of small asteroids which form ring-like structures on the original orbit, larger bodies form dispersed structures of interlaced elliptic eccentric annuli on tighter orbits. In Section \ref{S:Future} we discuss various different applications and future improvements for our hybrid model. Finally, in Section \ref{S:Summary} we summarize the paper's main achievements. In an accompanying paper (Malamud and Perets 2019; hereafter Paper II) we utilize the new code as to consider a suite of simulations of the tidal disruptions of rocky bodies by WDs, spanning a large range of masses, semi-major axes and pericentre distances, analyze them and discuss the results.
\section{Analytical impulsive disruption approximation}
\label{S:Analytical}
The tidal disruption of a planetesimal can be approximated analytically via an impulsive disruption. It entails the assumptions that (1) a spherically symmetric planetesimal remains undisturbed until it reaches the distance of closest tidal approach; (2) it then instantaneously breaks into its constituent particles; (3) the latter retain their previous centre-of-mass velocity, albeit now occupying a range of spatial coordinates; and (4) the constituent particles evolve independently of each other immediately after the breakup, tracing out ballistic trajectories in the star's gravitational potential.
The impulsive disruption approximation provides a rather simple analytical framework for gaining insight and intuition regarding the fundamental disruption properties, however, with the caveat that tidal breakup is never strictly and completely impulsive. As we will show, our assumptions break down considerably, depending primarily on the distance of close tidal approach.
In what follows, consider a central star of mass $M$, orbited by a planetesimal of mass $m$ which undergoes an impulsive tidal disruption at distance $d$ from the star. At the moment of breakup, the planetesimal has the velocity $v$ and semi-major axis $a$. The velocity $v$ is given by the 'vis-viva' equation, for any keplerian orbit, in the form:
\begin{equation}
v = \sqrt{\mu\left( \frac{2}{d}-\frac{1}{a} \right)}
\label{eq:VisVisa}
\end{equation}
where $\mu=G(M+m)$ is the standard gravitational parameter, and $G$ the gravitational constant. Since we assume that an arbitrary particle's velocity $\acute{v}$ equals the previous centre-of-mass velocity of the whole planetesimal $v$ at the moment of breakup, neglecting the velocity of self-rotation of the planetesimal (typically two orders of magnitude lower than the centre-of-mass velocity even for a planet sized object), the two can be equated such that:
\begin{equation}
G(M+\acute{m})\left( \frac{2}{d+r}-\frac{1}{\acute{a}} \right) = G(M+m)\left( \frac{2}{d}-\frac{1}{a} \right)
\label{eq:EquateVel}
\end{equation}
where $\acute{m}$ is the particle mass, $\acute{a}$ its semi-major axis and $r$ its displacement relative to the planetesimal's centre-of-mass at breakup, such that $\acute{d}=d+r$. We assume a spherically symmetric planetesimal, hence the maximal displacement equals the planetesimal's radius $R$, such that $\lvert r \rvert \leq R$.
Let us assume that $\acute{m}\ll m$ and $m\ll M$. The latter assumption is well justified given that we consider objects of terrestrial-planet mass or less. Hence Equation \ref{eq:EquateVel} can be re-written to extract the particle's semi-major axis as a function of its displacement $r$:
\begin{equation}
\acute{a} = a \left( 1-a\frac{2r}{d(d+r)} \right)^{-1}
\label{eq:SemiMajorAxis}
\end{equation}
When the denominator equals zero, particles assume a parabolic trajectory. The critical displacement $r_\mathrm{crit}$ for which it occurs, equals:
\begin{equation}
r_\mathrm{crit} = \frac{d^2}{2a-d}
\label{eq:Rcrit}
\end{equation}
Particles with $r>r_\mathrm{crit}$, i.e. particles which are sufficiently displaced from the centre of mass in the direction opposite to the WD, will become unbound. Particles with exactly $r=0$ will satisfy $\acute{a}=a$, keeping the original semi-major-axis. Particles with $0<r<r_\mathrm{crit}$ will have semi-major axes larger than $a$, and all particles with $r<0$ (in the direction of the WD with respect to the centre-of-mass) necessarily have $\acute{a}<a$. The disruption 'roadmap' is visually presented in Figure \ref{fig:roadmap}, depicting the different parts of a cross-section of a disrupted spherical planetesimal.
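To make the roadmap concrete, the following minimal sketch (in Python; the planetesimal size, breakup distance and orbit are assumed, Earth-like round numbers chosen purely for illustration rather than values from our simulation suite) evaluates Equations \ref{eq:SemiMajorAxis} and \ref{eq:Rcrit} for a handful of displacements:
\begin{verbatim}
# Illustrative sketch of the two equations above; all numbers are assumed,
# round values (Earth-sized planetesimal, a = 0.1 AU, breakup at 0.5 Rsun).
import numpy as np

AU, Rsun = 1.496e11, 6.96e8                  # metres
a, d, R = 0.1 * AU, 0.5 * Rsun, 6.371e6      # progenitor orbit, breakup distance, radius

def a_new(r):
    """New semi-major axis of a particle displaced by r at breakup."""
    denom = 1.0 - 2.0 * a * r / (d * (d + r))
    return a / denom if denom > 0 else np.inf   # inf flags an unbound particle

r_crit = d**2 / (2.0 * a - d)
print(f"r_crit = {r_crit/1e3:.0f} km, planet radius R = {R/1e3:.0f} km")
for r in (-R, -0.5 * R, 0.0, 0.5 * R, R):
    val = a_new(r)
    print(f"r = {r/R:+.1f} R  ->  " + ("unbound" if np.isinf(val) else f"a' = {val/AU:.3f} AU"))
\end{verbatim}
In this example particles on the star-facing side collapse onto noticeably tighter orbits, while displacements beyond $r_\mathrm{crit}$ on the far side are ejected, as described above.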
\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{Roadmap.eps}
\caption{The disruption roadmap: the semi-major axes of disrupted constituent particles within a spherically-symmetric planetesimal are determined according to their various locations within the object. The exact delineation of $r_\mathrm{crit}$ is pivotal, and determines different disruption outcomes. As indicated by the short arrows it may move in either direction. The right hemisphere (black dashed line) is always bound. If $0<r_\mathrm{crit}<<R$, the left hemisphere is entirely unbound, whereas if $r_\mathrm{crit}>R$ both hemispheres are bound, corresponding to the regimes outlined in Table \ref{tab:regimes}}
\label{fig:roadmap}
\end{center}
\end{figure}
Let us further examine the critical displacement $r_\mathrm{crit}$. Since it is proportional to the square of the planetesimal breakup distance $d$, it can vary by several orders of magnitude. Hence, in Figure \ref{fig:Rcrit} we show $r_\mathrm{crit}$ as a function of $d$ in logarithmic scale, the latter ranging from the typical WD radius ($\sim 10^{-2}R_\odot$) to the typical WD Roche radius ($\sim R_\odot$). The plot features six lines with varying semi-major axes $a$ of the original planetesimal before breakup, spanning three orders of magnitude (between 0.1 AU and 150 AU), and corresponding to all the $a$ values simulated throughout this paper and Paper II. Figure \ref{fig:Rcrit} is essential to determining the outcome of single tidal disruptions. It emphasizes the importance of the perturbing mechanisms that inject planetesimals onto tidal crossing orbits, since both the breakup distance and the semi-major axis are the decisive dynamical factors that shape the outcomes of tidal disruptions.
\begin{figure}
\begin{center}
\includegraphics[scale=0.558]{Rcrit.eps}
\caption{The critical displacement for generating unbound particles, shown in logarithmic scale as a function of the breakup distance from the star. Six lines denoted by different colours and widths correspond to various $a$ (semi-major axes) of the original planetesimal. Large semi-major axes are more likely to result in unbound disrupted material.}
\label{fig:Rcrit}
\end{center}
\end{figure}
Clearly there exist three distinct disruption regimes. If the parameters of impulsive disruptions are such that $r_\mathrm{crit}\ll R$, roughly half of the debris will be unbound from the system, while the other half will be placed onto much tighter orbits compared to the original $a$. The latter is easily seen, since $r_\mathrm{crit}\ll R$ can be re-written as:
\begin{equation}
\frac{2Ra}{d(d+R)} \gg 1
\label{eq:FirstRegieme}
\end{equation}
Using the condition from Equation \ref{eq:FirstRegieme} and applying it to Equation \ref{eq:SemiMajorAxis}, $\acute{a}$ assumes large negative values (hyperbolic trajectories) for positive displacement ($r>0$). For negative $r$ the original semi-major axis $a$ is divided by a large denominator ($\gg 1$), hence the tight orbits.
Moreover, note that Equation \ref{eq:SemiMajorAxis} can be re-written as $\acute{a}=\mp d(d+r)/2r$, such that $\acute{a}$ is independent of $a$, and particles converge onto a minimum semi-major axis value of $\acute{a}=\mp d^2/2R$ (since typically $R\ll d$). Such extremely \emph{bi-modal disruption} regimes are often formulated in many studies from an energy dispersion point of view (e.g. see \cite{MetzgerEtAl-2017}), where the particle's energy spread is 'frozen-in' at the moment of breakup. We note that the freezing point, or breakup distance $d$, is not necessarily interchangeable with the planetesimal's pericentre distance $q$, since the breakup does not necessarily occur at $q$ (although often studies indeed make that assumption). Several authors have previously demonstrated the importance of this point for stellar disruptions \citep{StoneEtAl-2013,GuillochonRamirez-Ruiz-2013,SteinbergEtAl-2019}. We likewise show in Paper II that $d$ may be different from $q$ for the kind of planetary disruptions considered in this study. Nevertheless, henceforward we sometimes do use $q$ and $d$ interchangeably, but only when referring to literature which does not make this distinction specifically.
The other extreme regime, which satisfies $r_\mathrm{crit}\gg R$, necessarily places all the constituent particles in bound orbits. Furthermore, the dispersion in $\acute{a}$ is negligible by the same argument since the denominator $\simeq 1$. The \emph{non-dispersive disruption} therefore results in the formation of an eccentric ring on the original orbit $a$, filled up by debris. See e.g. the \cite{VerasEtAl-2014} study, where the disrupted object is a very small asteroid ($R \simeq 3$ km) with a semi-major axis of 0.2 AU and $q$ between 0.135$R_\odot$ and 0.27$R_\odot$, which according to Figure \ref{fig:Rcrit} (and assuming $d=q$) leads to $r_\mathrm{crit}$ of a few $10^2$ km, some two orders of magnitude larger than $R$. We note however that their semi-major axis of 0.2 AU is intentionally small, due to the computational limitations that our paper attempts to circumvent by using the hybrid model. Figure \ref{fig:Rcrit} demonstrates that the same asteroid, at a larger, more realistic semi-major axis, would no longer be in the \emph{non-dispersive disruption} regime and therefore would not form an eccentric ring, but rather a more dispersed disc. For example, at $a=4.77$ AU and $q=0.24-0.32~R_{\odot}$, the outcome in the \cite{DebesEtAl-2012} study would have been an annulus instead of a ring (were it computationally possible to carry out the formation to its completion), since in this case $r_\mathrm{crit}$ is larger than but similar to the asteroid's $R$. If instead the \cite{VerasEtAl-2014} asteroid originated from an analogue Kuiper belt, we might even get a \emph{bi-modal disruption}.
The third \emph{intermediate disruption} regime entails $r_\mathrm{crit}\simeq R$, which results in some dispersion of the original semi-major axis, depending on the exact parameters of the problem. The ensuing disc is therefore the least straightforward to characterize. For a summary of the different disruption regimes see Table \ref{tab:regimes}.
\begin{table}
\caption{Tidal disruption regimes.}
\begin{tabular}{*{3}{l|}}
\hline
{Regime} & {Condition} & {Outcome} \\
\hline
\emph{bi-modal disruption} & $r_\mathrm{crit}\ll R$ & $\acute{a}(r\to R) = \pm d^2/2R$ \\
\emph{intermediate disruption} & $r_\mathrm{crit}\simeq R$& $\acute{a}$ from Equation \ref{eq:SemiMajorAxis}\\
\emph{non-dispersive disruption}& $r_\mathrm{crit}\gg R$ & $\acute{a}\cong a$\\
\hline
\end{tabular}
\label{tab:regimes}
The disruption regime is determined by the characteristic value of $r_\mathrm{crit}$ from Equation \ref{eq:Rcrit}. The new semi-major axes $\acute{a}$ of disrupted particles range from a distribution with two symmetric peaks to no dispersion at all (i.e., keeping the original semi-major axis $a$).
\end{table}
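As a quick numerical illustration of Table \ref{tab:regimes}, the short sketch below classifies a disruption given $(a,d,R)$; the factor of 10 separating the regimes and the example parameters are arbitrary, assumed choices rather than values used in our analysis:
\begin{verbatim}
# Illustrative regime classifier based on r_crit = d^2/(2a - d);
# the factor of 10 separating the regimes is an arbitrary choice.
AU, Rsun = 1.496e11, 6.96e8

def regime(a, d, R, factor=10.0):
    r_crit = d**2 / (2.0 * a - d)
    if r_crit < R / factor:
        return "bi-modal"
    if r_crit > R * factor:
        return "non-dispersive"
    return "intermediate"

# Earth-sized planet (R ~ 6371 km) from a = 0.1 AU:
print(regime(0.1 * AU, 0.1 * Rsun, 6.371e6))   # deep passage    -> bi-modal
print(regime(0.1 * AU, 1.0 * Rsun, 6.371e6))   # grazing passage -> intermediate
# km-sized asteroid from a = 0.2 AU, d ~ 0.2 Rsun (cf. Veras et al. 2014):
print(regime(0.2 * AU, 0.2 * Rsun, 3.0e3))     #                 -> non-dispersive
\end{verbatim}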
What outcomes, then, might we expect to find in our simulations? In this paper, most of the scenarios investigated can be traced within Figure \ref{fig:Rcrit} to have $r_\mathrm{crit}<R$, where $r_\mathrm{crit}$ and $R$ typically differ by about one order of magnitude. Such disruptions are therefore neither \emph{bi-modal} nor \emph{non-dispersive}, which only emphasizes why detailed simulations are required. Broadly speaking, they are nevertheless closer to the \emph{bi-modal} regime, and thus we might expect the outcome of our disruptions to display, at least in part, some kind of resemblance to a bi-modal semi-major-axis distribution.
There are, however, additional complications. We must remember that unlike in the impulsive disruption approximation, real disruptions do not abide by our set of assumptions. Planetesimal breakup is neither instantaneous nor complete, and the assumption of sphericity is violated. We note that the disruption chiefly depends on the tidal force which breaks the planetesimal apart. For $R\ll d$, the tidal force per unit mass $F_T$ can be approximated by:
\begin{equation}
F_T=\frac{2GMR}{d^3}
\label{eq:TidalForce}
\end{equation}
Since this tidal force greatly depends on the breakup distance $d$, a complete disruption is more likely to occur when the object passes close to the star. Consider for example a very deep tidal disruption with $q=$0.1$R_\odot$ versus a moderate one near the Roche limit with $q=$1$R_\odot$ (see discussion on the Roche distance in Section \ref{SS:SPHOutline}). The former leads to a tidal force 1000 times greater (tentatively assuming $d=q$), while the opposing force of self-gravity remains the same, thus we can expect a huge difference in the outcomes of these two cases (see Section \ref{SS:PericentreDependence} for a quantitative perspective). A common outcome in our simulations, unlike in the impulsive disruption approximation, is a partial, rather than a full disruption.
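To attach numbers to this comparison, the sketch below evaluates Equation \ref{eq:TidalForce} at the planet's surface against its self-gravity; the Earth-like mass and radius and the 0.6$M_\odot$ stellar mass are assumed values used only for illustration:
\begin{verbatim}
# Surface tidal acceleration (the tidal force equation above) vs. self-gravity;
# Earth-like planet mass/radius and a 0.6 Msun star are assumed.
G, Msun, Rsun = 6.674e-11, 1.989e30, 6.96e8
M, m, R = 0.6 * Msun, 5.97e24, 6.371e6

F_T = lambda d: 2.0 * G * M * R / d**3       # tidal force per unit mass
g_self = G * m / R**2                        # opposing self-gravity at the surface

for d in (0.1 * Rsun, 1.0 * Rsun):
    print(f"d = {d/Rsun:.1f} Rsun : F_T / g_self = {F_T(d)/g_self:.2f}")
print("ratio between the two depths:", F_T(0.1 * Rsun) / F_T(1.0 * Rsun))  # = 1000
\end{verbatim}
In this crude comparison the surface tide exceeds self-gravity by more than two orders of magnitude for the deepest passage, but is merely comparable to it for the Roche-grazing one, in line with the qualitatively different outcomes shown in Section \ref{SS:PericentreDependence}.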
Since the disruption proceeds gradually, as the planetesimal's motion carries it deeper into the tidal sphere before reaching its closest approach, the tidal process is not instantaneous by definition, and there is always some measure of tidal elongation prior to breakup. Hence, one might consider a more realistic spatial distribution of the constituent particles, as opposed to the simple spherical view described above. This would alter the actual dispersion of the particles.
An additional complication is that the simple impulsive approximation does not really capture the subtleties and nuances of inhomogeneous planetesimals, as clumps of particles following a disruption can consist of different materials and/or have complex internal structures that vary in density (and strength, but we will omit that discussion for the moment). Different materials thus have various Roche radii.
Also, as previously mentioned, the orbit is usually well defined in tidal disruption problems, so we have good knowledge of $q$. However, we do not have good knowledge of $d$, and the 'instantaneous breakup', such as we have defined it, may actually occur prior to the closest approach. Without any sophisticated treatment, we often equate $d$ with $q$ as a heuristic approach, allowing us to draw simplistic analytical approximations.
Finally, as shown in Section \ref{SS:DiscFormation}, the outcome of real disruptions is modified to some extent by the self-rotation of the planetesimal. This effect is not negligible, especially for rapid self-rotation.
The picture that emerges in real tidal disruptions therefore involves some dispersion of the original planetesimal semi-major axis. Whether the disruption is full or partial, the clumps returning for an additional tidal passage now occupy a range of different sizes, compositions, self-rotation rates and semi-major axes, with only their pericentre distance unchanged. They will therefore potentially follow a different disruption regime during each subsequent flyby, making the problem too complex for any simple approximate analytical model, emphasizing the importance of numerical modelling.
\section{Full SPH disruption simulations}\label{S:FullSPH}
\subsection{Code outline and setup}\label{SS:SPHOutline}
We perform hydrodynamical disruption simulations using an SPH code developed by \citet{SchaferEtAl-2016}. The code is implemented via CUDA, and runs on graphics processing units (GPU), with a substantial improvement in computation time, of the order of $10^{1}-10^{2}$ times faster for a single GPU compared to a single CPU, depending on the precise GPU architecture. The code has already been successfully applied to several studies \citep{DvorakEtAl-2015,MaindlEtAl-2015,HaghighipourEtAl-2016,WandelEtAl-2017,BurgerEtAl-2018,HaghighipourEtAl-2018,MalamudEtAl-2018}.
The code implements a Barnes-Hut tree that allows for treatment of self-gravity, as well as gas, fluid, elastic, and plastic solid bodies, including a failure model for brittle materials and a treatment for small porous bodies. Here we perform our simulations while neglecting solid-body physics, which is more computationally expensive. We do, however, lay out future plans (Section \ref{SS:Strength}) to also perform a dedicated study including material strength, outlining its potential importance. We use the Tillotson equation of state (EOS). The parameters for the EOS are taken from \citet{Melosh-1989} and \citet{BenzAsphaug-1999}, for iron and silicate (basalt) respectively. See \citet{MalamudEtAl-2018} for further details.
Throughout this Section we will perform full hydrodynamical simulations of planetesimals which undergo tidal disruption around a 0.6 M$_{\odot}$ WD. The star mass is chosen to correspond to the peak mass in the observed WD mass distribution. It is a common practice in many WD studies to adopt this fiducial WD mass \citep{TremblayEtAl-2016,Veras-2016,CummingsEtAl-2018}. The disrupted planetesimal mass is treated as a free parameter, however in this section we only simulate the tidal disruption outcome of planets with masses corresponding to that of Mars and Earth, or 0.1 M$_{\oplus}$ and 1 M$_{\oplus}$ respectively. For simplicity we consider all planetesimals to have an Earth-like composition and structure, being differentiated and composed of 30\% iron and 70\% dunite by mass.
As discussed in Section \ref{S:Analytical}, the outcome of a tidal disruption is highly dependent on its depth. That is, when the pericentre distance $q$ is a smaller fraction of the Roche limit, the event is more likely to break the object down to its constituent particles (as in the impulse approximation set of assumptions), while a more grazing passage will result in a partial disruption that breaks only the planetesimal's outer portions. In order to investigate and compare such differences we consider in Section \ref{SS:PericentreDependence} the following $q$ values: 0.1$R_\odot$, 0.5$R_\odot$ and 1$R_\odot$ (or 10\%, 50\% and 100\% of a Roche grazing orbit, respectively). The considerations for the pivotal 1$R_\odot$ grazing orbit are discussed below.
Throughout most of the paper we consider \emph{initially} non-rotating planetesimals (i.e., referring strictly to the initial rotation, but not the later rotation of tidally disrupted fragments). This assumption is however tested and evaluated in Appendix \ref{A:RotationDependence}, where we assume the planet to have a 20 h rotation period in both the prograde and retrograde senses prior to the disruption, and compare the outcomes to that of non-rotating planets.
Each simulation starts with a relaxed planetesimal internal structure, i.e. having hydrostatic density profile and internal energy values from adiabatic compression, following the algorithm provided in appendix A of \citet{BurgerEtAl-2018}. This self-consistent semi-analytical calculation (i.e., using the same constituent physical relations as in the SPH model) equivalently replaces the otherwise necessary and far slower process of simulating each body in isolation for several hours, letting its particles settle into a hydrostatically equilibrated state prior to the collision (as done e.g., in the work of \citet{CanupEtAl-2013} or \citet{SchaferEtAl-2016}). Since the relaxation algorithm does not account for the additional effect of rotation, in Appendix \ref{A:RotationDependence} we place initially rotating planetesimals far from the star, providing them with an extra $\sim$30 h relaxation phase that damps any residual radial oscillations before the planet approaches the star.
For all other simulations the planets are initially positioned at a distance that ensures they are outside, yet near the Roche limit, the latter marking the relevant domain for which to begin using SPH, where the tidal forces should start to dominate over self-gravity. In order to make certain that our initial distance is always sufficiently large and outside the Roche limit, independent of the exact composition and density of every planet, we deliberately adopt an upper limit value in excess of the fiducial Roche values typical of rocky planets. Our selection is based on the analysis from \cite{VerasEtAl-2014} (see the discussion therein). The largest Roche value is given by their Equation 3 as $R_{roche,max}=2.73 R_\odot$. This value is derived from their Equation 2, by taking the upper range for the $C$ coefficient and the minimum permitted density of small asteroids. Were we to select both with average values instead, the Roche distance would have been at least halved. In short, given this choice we make sure that the SPH domain start of influence is always selected to be much larger than what is actually required, by a factor of at least 2.
Since, however, the \emph{actual} Roche limit is less than 50\% smaller, we consider planet pericentre distances of around 1$R_\odot$ as having Roche-grazing orbits, yet well-placed inside the tidal sphere (i.e., at that distance they skim the Roche limit from within).
Throughout this section \emph{only}, we assign a small planet semi-major axis of merely 0.1 AU. The latter value is considered by \cite{VerasEtAl-2014} as the minimal value of $a$ for which the time spent inside the tidal disruption sphere is approximately independent of the choice of $a$. Additionally, this value is sufficiently low that the orbital period of the planet is only 14.9 days, which will allow us to track the formation of the disc for a considerable duration, of the order of 100-200 days, or $\sim10$ orbits. We note that such a small $a$ is physically highly unrealistic since most planetesimals are expected to originate from a semi-major axis of at least several AU (see discussion in Section \ref{SS:SPHOutline}), however this section is not meant to treat realistic scenarios, only maximize the simulation duration and characterize the resulting disc.
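For reference, the quoted orbital period follows directly from Kepler's third law; a short check with standard constants reads:
\begin{verbatim}
# Orbital period of a = 0.1 AU around a 0.6 Msun WD (Kepler's third law).
import math
G, Msun, AU, day = 6.674e-11, 1.989e30, 1.496e11, 86400.0
T = 2.0 * math.pi * math.sqrt((0.1 * AU)**3 / (G * 0.6 * Msun))
print(T / day)   # ~14.9 days, as quoted above
\end{verbatim}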
A typical simulation time of a few months (which translates into several orbital periods of the original planet) is achieved when the resolution is limited to 10K SPH particles, even when utilizing our relatively high-performance GPU architecture. Taking a higher resolution must come at the cost of reducing the duration of the simulation (the fraction or number of orbits for which the disc formation is fully tracked) or lowering the semi-major axis/eccentricity (or both). Given our current choice of resolution, the typical runtime is coincidentally comparable to the simulation time, most simulations running for up to 4 months. The simulations were performed on the 'TAMNUN' GPU cluster, at the Technion Institute in Israel. The GPU model used is NVIDIA Tesla K20. Each simulation ran on a single dedicated GPU.
\subsection{Dependence on pericentre distance}\label{SS:PericentreDependence}
In this Section we present the results of full-scale tidal disruption SPH simulations as a function of their depth, i.e., given the following pericentre values: 0.1$R_\odot$, 0.5$R_\odot$ and 1$R_\odot$. We consider the disruption of Mars-sized (0.1$M_\oplus$) or Earth-sized (1$M_\oplus$) planets. As discussed in the previous section, the planets are assigned a small semi-major axis of only 0.1 AU, for which the orbital period is merely 14.91 days. In turn, this makes the tracking of disc formation computationally feasible when using full-scale SPH simulations.
In Figure \ref{fig:VelRoche} we show the initial stage of tidal disruption. The image captures the debris after they exit the Roche limit, moving away from the WD. Three Earth-mass planets are considered with different $q$ values, from right to left: 0.1$R_\odot$, 0.5$R_\odot$ and 1$R_\odot$. Particle colours denote velocity (magnitude). Resolution is 10K particles. As can be seen, only the WD (bottom-left), represented by a single particle, is stationary. We recall from Section \ref{SS:SPHOutline} that the planets are initially positioned outside but near the Roche limit (by a factor of at least $\sim$2) on the opposite side of the WD. Their approximate trajectories are indicated by the green dashed lines, moving clockwise. The time elapsed between their initial positions and their final positions just outside this extended sphere is around 0.15 days.
\begin{figure*}
\begin{center}
\includegraphics[scale=0.62]{VelocityRoche.eps}
\caption{A top-down snapshot of SPH particle velocities in units of m\,s$^{-1}$, after the initial tidal encounter ($\sim$13000 s). Three Earth-mass planets with $a=0.1$ AU and pericentre distances $q=$1, 0.5 and 0.1 $R_\odot$ (respectively, from left to right) are shown moving clockwise (their trajectory lines are marked in dashed green), after they exit the tidal sphere of a 0.6$M_\odot$ WD (bottom-left, blue particle). The disruption causes the planet to distend, forming outer and inner tidal streams. Deeper disruptions result in more complete tidal stripping. The velocity gradient corresponds to the transition from unbound to bound particles. The image is to scale.}
\label{fig:VelRoche}
\end{center}
\end{figure*}
The disruption causes each planet to distend as tidal energy is transferred into the planet and matter crosses through the Lagrange points. Eventually two mass-shedding streams are evident. The inner stream particles are bound to the star, tracing out elliptical orbits, while particles in the outer stream are often unbound from both the star and the planet (if, in a partial disruption, the latter remains intact), heading away from the star system on hyperbolic orbits. Our simulations show that the streams generally follow a single axis, but the geometry obviously differs from case to case, depending on the distance of close tidal approach. In the $q=1R_\odot$ case, it is clearly evident, even during this early stage, that the mass shedding is partial and the streams emanating from the outer portions of the planet do not conform to a single axis geometry.
The velocities of the disrupted particles help us understand the initial formation of the disc. The general picture is as one might expect from Figure \ref{fig:roadmap}, the unbound particles along the tip of the outer stream having the highest velocities, whereas the particles further inward have increasingly lower velocities. Bound particles will slow until reaching their minimum, apocentre velocities. The slowest particles are positioned along the tip of the inner stream. They accordingly have the closest apocentre distances.
One of the major differences that emerge beyond this point is the degree to which the stream is gravitationally self-confined. Gravitational contraction is (by its definition) impossible while the debris still lie within the WD's Roche limit. However, as the particles continue to move away the gravitational interactions among them can, depending on their exact spatial distribution and velocities, cause them to clump up and form larger fragments. In other words, the stream may fragment under its own self-gravity.
Physical intuition regarding the fragmentation phase may be obtained based on the analysis of \cite{HahnRettig-1998}. We follow their calculations, in which they show that fragmentation may occur when the gravitational free-fall timescale $t_c$ becomes smaller than the stream spreading timescale $t_s$. The latter is calculated by dividing the length $L(t)$ of the stream by its rate of change $dL(t)/dt$. Using the notations from Section \ref{S:Analytical}, and replacing the breakup distance $d$ with the distance of close approach $q$, if the planet is on a highly eccentric orbit and $r_\mathrm{crit}\ll R$, then the most bound particle inside the inner stream has $\acute{a}=q^2/2R$. Manipulating the known relation $q=(1-e)a$, we obtain $\acute{e}=1-2R/q$. When $r_\mathrm{crit}\ll R$, the particle formerly at the planet's centre (i.e., with the orbital elements $a$ and $e$) approximately marks the other tip of the bound stream. Now $t_s$ can be calculated from the stream's length $L$ as a function of $t$ using standard solutions for elliptic Keplerian equations of motion, as follows. We calculate the mean anomaly $MA=2\pi t/T$, where $T=2\pi \sqrt{a^3/GM}$ is the orbital time (assuming $m\ll M$). The eccentric anomaly is obtained from solving Kepler's equation $MA(t)=EA(t)-e \sin{EA(t)}$ numerically. Then the distance as a function of time satisfies $l(t)=a(1-e\cos{EA(t)})$, the true anomaly $\theta$ satisfies $\theta(t)=2\arctan[\tan(EA(t)/2)\cdot\sqrt{(1+e)/(1-e)}]$ and $l(t)_{x,y}$ can be extracted. From $\acute{a}$ and $\acute{e}$ we similarly extract $\acute{l(t)}_{x,y}$. Obtaining $L(t)$ and $dL(t)/dt$ is straightforward.
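As an illustration of this construction, the following minimal sketch (in Python) computes $L(t)$ and the spreading timescale $t_s$; the Earth-like planet, $a=0.1$ AU and $q=0.5R_\odot$ are assumed, round values, and the expression for the most bound tip is taken in the $r_\mathrm{crit}\ll R$ limit described above:
\begin{verbatim}
# Sketch of the stream spreading timescale t_s = L / (dL/dt); assumed values.
import numpy as np

G, Msun, Rsun, AU = 6.674e-11, 1.989e30, 6.96e8, 1.496e11
M = 0.6 * Msun
a, R, q = 0.1 * AU, 6.371e6, 0.5 * Rsun            # progenitor orbit and radius
e = 1.0 - q / a                                    # progenitor eccentricity
a_in, e_in = q**2 / (2.0 * R), 1.0 - 2.0 * R / q   # most bound tip (r_crit << R limit)

def position(t, a_orb, ecc):
    """Perifocal (x, y) of a particle that passed pericentre at t = 0."""
    T = 2.0 * np.pi * np.sqrt(a_orb**3 / (G * M))
    MA = 2.0 * np.pi * (t % T) / T
    EA = MA
    for _ in range(60):                            # Newton iteration, Kepler's equation
        EA -= (EA - ecc * np.sin(EA) - MA) / (1.0 - ecc * np.cos(EA))
    return np.array([a_orb * (np.cos(EA) - ecc),
                     a_orb * np.sqrt(1.0 - ecc**2) * np.sin(EA)])

t = np.linspace(1.0e3, 2.0e5, 400)                 # seconds after pericentre passage
L = np.array([np.linalg.norm(position(ti, a, e) - position(ti, a_in, e_in))
              for ti in t])
t_s = L / np.gradient(L, t)                        # stream spreading timescale
\end{verbatim}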
The former timescale $t_c$ is calculated based on the characteristic time it would take a cloud of mass to collapse under its own gravitational attraction. The free-fall timescale equals $\alpha / \sqrt{G\rho}$, where $\alpha=\sqrt{3\pi/32} \cong 0.5427$ for a stationary cloud of particles. Since our tidal cloud of debris is not stationary, realistic values for $\alpha$ are larger, and must be calibrated from numerical simulations (e.g., in \cite{HahnRettig-1998} $\alpha$ is determined to be $\sim$1) in order to be used in the calculation. Since the debris spread largely along a single axis, the debris density $\rho$ equals the planet density, scaled by the factor $R/L$, such that $\rho \sim (3m/4\pi R^3) R/L$. According to \cite{HahnRettig-1998}, one has to equate $t_c$ and $t_s$, both of which are time dependent, to find the moment $t$ when fragmentation becomes important. By the same analysis, we can also obtain an estimate of the number of fragments formed.
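To make the above recipe concrete, the following is a minimal numerical sketch (in Python; not part of our production pipeline) of the $t_s$ versus $t_c$ comparison in the $r_\mathrm{crit}\ll R$ limit: the two stream tips are propagated with standard Keplerian solutions, the stream length $L(t)$ and its derivative give $t_s$, and the first time at which $t_c<t_s$ is returned as the estimated fragmentation time. The function names, time window and numerical tolerances are illustrative choices only.
\begin{verbatim}
import numpy as np

G = 6.674e-11  # SI units throughout

def kepler_E(MA, e, tol=1e-12):
    # Solve Kepler's equation MA = E - e*sin(E) by Newton iteration
    E = MA if e < 0.8 else np.pi
    for _ in range(100):
        dE = (E - e*np.sin(E) - MA) / (1.0 - e*np.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def position(a, e, t, M_star):
    # Cartesian (x, y) of a bound particle at time t after pericentre
    T = 2*np.pi*np.sqrt(a**3/(G*M_star))
    E = kepler_E((2*np.pi*t/T) % (2*np.pi), e)
    theta = 2*np.arctan(np.sqrt((1+e)/(1-e))*np.tan(E/2))
    r = a*(1 - e*np.cos(E))
    return np.array([r*np.cos(theta), r*np.sin(theta)])

def fragmentation_time(m, R, a, q, M_star, alpha=1.0,
                       times=np.linspace(5e3, 5e5, 2000)):
    # First time at which t_c < t_s (None if the stream never fragments)
    a_in, e_in = q**2/(2*R), 1 - 2*R/q     # most bound particle (inner tip)
    e_pl = 1 - q/a                         # particle formerly at the centre
    L = np.array([np.linalg.norm(position(a, e_pl, t, M_star) -
                                 position(a_in, e_in, t, M_star))
                  for t in times])
    t_s = L/np.abs(np.gradient(L, times))  # spreading timescale L/(dL/dt)
    rho = (3*m/(4*np.pi*R**3))*R/L         # planet density scaled by R/L
    t_c = alpha/np.sqrt(G*rho)             # free-fall (contraction) timescale
    below = np.flatnonzero(t_c < t_s)
    return times[below[0]] if below.size else None
\end{verbatim}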
If $r_\mathrm{crit}\ll R$ is not satisfied, the calculation changes, although it follows the same general lines. One can instead use Equation \ref{eq:SemiMajorAxis} to compute the interior bound orbit; the exterior bound orbit may be evaluated in the same way if $r_\mathrm{crit}>R$, otherwise $l(t)$ can be derived from the parabolic orbit equation. The main difficulty in the entire calculation is the calibration of $\alpha$, which, as we shall soon see, is not trivial since $\alpha$ is actually a function of $q$. An additional caveat is that the method fails for cases with large $q$, when the disruption is partial, since the temporal evolution of $L$ is different in this case and depends on the remaining intact planet.
Thus, considering intermediate or deep disruptions, our initial approach to this problem was to rely directly on our numerical simulations to find the exact time when fragmentation occurs, by visually inspecting our data for several scenarios. Based on our comprehensive analysis of full SPH simulations, we found that for $q=0.1R_\odot$, the SPH particles do not coalesce into fragments at all. For $q=0.5R_\odot$, the typical timescale for fragmentation is of the order of a few $10^{-1}$ days. In Figure \ref{fig:zoomin} we zoom in on the debris after this fragmentation phase has concluded. Taking the aforementioned timescale estimate, the snapshot time is 0.58 days. Pixel colours denote composition: orange - rock; black - iron. Resolution is 10K particles.
\begin{figure}
\begin{center}
\subfigure[$q=1 R_{\odot}$, partial disruption.] {\label{fig:zoomin1}\includegraphics[scale=0.3]{zoomin1.eps}}
\subfigure[$q=0.5 R_{\odot}$, gravitationally self-confined full disruption. Zoom-in on dashed rectangle: Figure \ref{fig:fragMerger}.] {\label{fig:zoomin0_5}\includegraphics[scale=0.3]{zoomin0_5.eps}}
\subfigure[$q=0.1 R_{\odot}$, gravitationally unconfined full disruption.]
{\label{fig:zoomin0_1}\includegraphics[scale=0.3]{zoomin0_1.eps}}
\end{center}
\caption{The zoomed-in debris field (top-down view) of a tidally disrupted Earth-sized planet around a 0.6$M_\odot$ WD after 0.58 days, given $a=0.1$ AU and a $q$ of (a) 1$R_{\odot}$; (b) 0.5$R_{\odot}$ and (c) 0.1$R_{\odot}$ (panel dimensions not identical). Colour denotes composition: orange - rock ; black - iron.}
\label{fig:zoomin}
\end{figure}
Our choice of $q$ highlights three distinct cases. Unlike in Figure \ref{fig:VelRoche}, the particles no longer form a continuous stream. It is visually evident that the amount of mass stripped from the planet increases with decreasing periastron separation. Panel \ref{fig:zoomin1} shows a clear case of a partial disruption, in which a relatively intact planet is accompanied by a small stream of particles from its outer portions. In Panel \ref{fig:zoomin0_5} the planet breaks up entirely into a long and narrow stream, but the debris field is gravitationally self-confined, and the stream coagulates to form a finite number of large fragments, accompanied by some smaller particles. Panel \ref{fig:zoomin0_1} showcases the deepest, most violent type of disruption, wherein the destruction of the planet is almost complete, and the debris are so dispersed that they are unable to fragment under the pull of their own self-gravity.
Let us now examine the fragmentation timescale using the \cite{HahnRettig-1998} approach, comparing our results to theory. After performing the previously mentioned calculations, we present in Figure \ref{fig:FragTimescale} the timescales $t_s$ (solid line) and $t_c$ (dotted lines) in units of the encounter timescale $\tau=\sqrt{q^3/GM}$, on a logarithmic scale. To compare with Figure \ref{fig:zoomin}, here we also perform the calculation for an Earth-mass planet with $a=3$ AU. The x-axis shows the time $t$. The starting time ($\sim 5\times 10^3$ s) corresponds to a position that lies beyond the maximum Roche limit (see e.g. \cite{VerasEtAl-2014}), and the end time equals $t=T/2$, where $T$ is the orbital period of the innermost bound particle in the stream. The latter choice is significant since it is the moment at which particles begin to deviate from the single-axis geometry of the stream. Within this critical time interval, the fragmentation timescale is obtained when the free-fall timescale $t_c$ intersects the spreading timescale $t_s$, and we investigate several values of $\alpha$, spanning one order of magnitude.
\begin{figure}
\begin{center}
\includegraphics[scale=0.558]{FragmentationTimescale.eps}
\caption{The tidal stream spreading timescale $t_s$ (solid line) and free-fall timescale $t_c$ (dotted lines, with various $\alpha$ parameters) in units of the encounter timescale $\tau$, as a function of $t$ (both in logarithmic scale). When $t_c$ is less than $t_s$, gravitational contraction, and consequently fragmentation, is possible.}
\label{fig:FragTimescale}
\end{center}
\end{figure}
It is shown that for $\alpha=1$, gravitational contraction is possible after about 40000 s, or 0.46 days. This is a good match to our numerical simulations and visual inspection of the data for $q=0.5R_{\odot}$ (and also in good agreement with \cite{HahnRettig-1998}). The fact that no fragmentation ever occurs for $q=0.1R_{\odot}$ suggests that $\alpha$ is inversely correlated with $q$. In other words, the theoretical interpretation of our results might suggest that a large $\alpha$ inhibits the onset of fragmentation, and this merits future investigation (see Section \ref{SS:Fragmentation}).
In order to quantify the effects of fragmentation, we analyse the data for the number of fragments (we find physical clumps of spatially connected SPH particles whose distances are less than the simulation smoothing length, using a friends-of-friends algorithm), after the particle fragmentation phase has concluded. Our analysis indicates that for Earth-sized planets, approximately 91.7\% of the particles are single SPH particles and are not associated with any fragment, when $q=0.1R_\odot$. In other words, when the disruption is so deep these planets mostly break down into their smallest constituent particles (here limited by resolution), and are unable to fragment significantly. The other 8.3\% of the particles are distributed among small fragments, many of which include only a few particles. In comparison, for $q=0.5R_\odot$ and $q=1R_\odot$, the fractions of single SPH particles are much smaller, only 1.88\% and 0.54\% respectively.
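For reference, the clump identification described above can be reproduced with a few lines of standard library code. The sketch below is a simplified stand-in (with illustrative names) for the friends-of-friends routine in our analysis pipeline: it links particles whose pairwise separation is below the smoothing length and labels the resulting connected components as fragments; the fraction of single SPH particles quoted above then follows from the label counts.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def friends_of_friends(positions, linking_length):
    # Particles closer than the linking length (the SPH smoothing
    # length) belong to the same fragment (connected component).
    tree = cKDTree(positions)
    pairs = tree.query_pairs(linking_length, output_type='ndarray')
    n = len(positions)
    adjacency = csr_matrix((np.ones(len(pairs)),
                            (pairs[:, 0], pairs[:, 1])), shape=(n, n))
    n_fragments, labels = connected_components(adjacency, directed=False)
    return labels            # labels[i] = fragment index of particle i

# Fraction of 'single' SPH particles not associated with any fragment:
# counts = np.bincount(friends_of_friends(xyz, h_smoothing))
# single_fraction = np.sum(counts == 1) / len(xyz)
\end{verbatim}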
A Mars-sized planet (not shown in Figure \ref{fig:zoomin}) breaks down into fewer single SPH particles compared to an Earth-sized planet. Our analysis shows that the respective fractions of single SPH particles in that case are only 76.5\%, 0.74\% and 0.05\%.
The composition distribution is easily observed in Panel \ref{fig:zoomin0_1}. Here the iron particles from the planet's core experience the same tidal shearing as the rocky particles, and display a similar spatial pattern. Panels \ref{fig:zoomin0_5} and \ref{fig:zoomin1} are, however, too opaque and insufficiently zoomed-in for visual identification. A much closer inspection would nonetheless reveal that in Panel \ref{fig:zoomin0_5} the iron particles are distributed inside the cores of (many of) the larger fragments, as in, e.g., Figure \ref{fig:fragMerger}. In Panel \ref{fig:zoomin1} a closer inspection shows that the iron particles are found \emph{only} inside the core of the original, intact planet, whereas all other small fragments are purely rocky.
It is interesting to note in this context that the rocky interstellar asteroid Oumuamua is sometimes said to be an unbound fragment originating from a tidally disrupted planet \citep{Cuk-2018,RaymondEtAl-2018a,RaymondEtAl-2018b}. \cite{Rafikov-2018} went one step further in postulating that it could originate from a disruption around a WD specifically, since polluted WDs often showcase a characteristic abundance of refractory materials. He introduced a complex model for producing the fragment size distribution required for the small size of Oumuamua, by collisional grinding of fragments which occurs during the planetary passage through the Roche sphere. Our results reinforce the notion that objects like Oumuamua, being so small, must originate via one of two formation pathways. The first option is that they form specifically in gravitationally unconfined streams, otherwise the outgoing stream would coagulate into larger fragments as it exits the Roche limit, regardless of how small the pieces are when they initially form. This generally means that the disruption has to be very deep. The other option, applicable to not-so-deep yet full disruptions in which the stream is gravitationally self-confined, is to form objects like Oumuamua as smaller, second-generation particles. Close inspection of our data reveals that this occurs through collisions among merging fragments.
As a demonstration, let us continue following the gravitationally self-confined stream from Figure \ref{fig:zoomin0_5}. After $t=0.45-0.55$ days, all particles have conjoined to form fragments. However, we then see some adjacent fragments that are gravitationally interacting with one another and eventually merging. Figure \ref{fig:fragMerger} shows an example of such a collision. We note that it is by no means unique, either in this particular simulation or in our entire suite of simulations. We see many such mergers during this (henceforward-termed) collisional phase, and the transfer of angular momentum in such collisions often results in fast rotation and shedding of mass, producing a cloud of smaller debris in orbit around the central, rotationally-oblate fragment. Sizeable satellites also occasionally form, as in Figure \ref{fig:fragMerger6}, however just as often the second-generation debris field is composed of merely small fragments and a lot of single SPH particles. We note that the minimum particle size here is resolution-limited, however there is no physical reason to assume that the secondary debris cloud is not composed of yet smaller particles than our resolution permits, potentially following a power-law size distribution, including tiny satellites, boulders and dust.
\begin{figure*}
\begin{center}
\subfigure[$t=0.58$ days.] {\label{fig:fragMerger1}\includegraphics[scale=0.475]{FragmentMerger48000.eps}}
\subfigure[$t=0.64$ days.] {\label{fig:fragMerger2}\includegraphics[scale=0.475]{FragmentMerger53000.eps}}
\subfigure[$t=0.7$ days.]
{\label{fig:fragMerger3}\includegraphics[scale=0.475]{FragmentMerger58000.eps}}
\subfigure[$t=0.76$ days.] {\label{fig:fragMerger4}\includegraphics[scale=0.475]{FragmentMerger63000.eps}}
\subfigure[$t=0.82$ days.] {\label{fig:fragMerger5}\includegraphics[scale=0.475]{FragmentMerger68000.eps}}
\subfigure[$t=1.16$ days.]
{\label{fig:fragMerger6}\includegraphics[scale=0.475]{FragmentMerger100000.eps}}
\end{center}
\caption{Zoom-in on fragment dynamics inside the gravitationally self-confined stream from Figure \ref{fig:zoomin0_5} (see dashed rectangle therein). Scale in Panel \ref{fig:fragMerger1}. This top-down view shows two fragments colliding and merging, producing a secondary field of smaller debris. The start time (0.58 days) is identical to that of Figure \ref{fig:zoomin}, and the sequence ends at 1.16 days. Colour denotes composition: orange - rock ; black - iron.}
\label{fig:fragMerger}
\end{figure*}
Following the collisional phase, the stream settles into a more stable, collisionless phase, in which the fragments remain mostly unaltered, at least until the next time they enter a strong gravitational potential (like the star, if they are still bound to it). Our inspection of the data suggests that the collisional phase ends at roughly 1.16 days, hence it has a duration similar to that of the fragmentation phase.
\subsection{Disc formation}\label{SS:DiscFormation}
The formation of the disc continues as each returning fragment completes a full orbit (given its new semi-major axis) and re-enters the tidal sphere. Although each fragment has a different size, composition and semi-major axis compared to the planet from which it stems, its pericentre distance does not change. It may therefore tidally disrupt again during close approach, further breaking into smaller and smaller fragments, and so forth. If and when a fragment disrupts, we get a dispersion in the semi-major axes of the resulting sub-fragments. This process is visualized in Figure \ref{fig:EarthDisruption}, where we show the disc formation progress of a tidally disrupted Earth-sized planet with a pericentre of $q=1 R_{\odot}$ (and $a=0.1$ AU, as before). Due to its large pericentre distance, the planet only grazes the Roche limit and thus essentially undergoes only partial disruptions during each pass, typically shedding a smaller fraction of mass compared to deeper disruptions. Such a disc is the slowest to evolve, and we will follow it here in discrete time intervals equal to the original planet orbital period of 14.9 days. Colour denotes composition: orange - rock ; black - iron; green - WD. Resolution is 10K particles.
\begin{figure}
\begin{center}
\subfigure[1st pass, $t=1.16$ days] {\label{fig:1pass}\includegraphics[scale=0.43]{1pass1e5.eps}}
\subfigure[2nd pass, $t=16$ days]
{\label{fig:2pass}\includegraphics[scale=0.43]{2pass1_388e6.eps}}
\subfigure[3rd pass, $t=31$ days] {\label{fig:3pass}\includegraphics[scale=0.43]{3pass2_677e6.eps}}
\subfigure[4th pass, $t=46$ days] {\label{fig:4pass}\includegraphics[scale=0.43]{4pass3_965e6.eps}}
\subfigure[5th pass, $t=61$ days]
{\label{fig:5pass}\includegraphics[scale=0.43]{5pass5_254e6.eps}}
\subfigure[10th pass, $t=119$ days]
{\label{fig:10pass}\includegraphics[scale=0.43]{10pass1_03e7.eps}}
\end{center}
\caption{Top-down view of disc formation through a sequence of partial tidal disruptions of an Earth-sized planet with $a=0.1$ AU and $q=1 R_{\odot}$, around a 0.6$M_\odot$ WD, plotted at different multiples of the original orbit. Colour denotes composition: orange - rock ; black - iron; green - WD. The planet apocentre and scale is indicated in Panel \ref{fig:1pass}.}
\label{fig:EarthDisruption}
\end{figure}
In Panel \ref{fig:1pass} the first pass through the tidal sphere is shown, at $t=10^5$ s ($\simeq 1.16$ days; here we wait twice as long as the fragmentation time, for the tidal streams to distend further, in order to get a better visual effect). The planet remains almost fully intact, however outer and inner tidal streams develop, composed of several small rocky fragments, as in Figure \ref{fig:zoomin1}. The planet continues to move on its original trajectory, its apocentre located towards the right edge of the frame, as indicated by the arrow and markings. The fragments at the tip of the inner tidal stream will be the first to reach their new, smaller apocentres.
Continuing to Panel \ref{fig:2pass}, we follow the progression of the disc as it evolves. The aforementioned rocky fragments located at the tip of the inner stream, which were formed during the first tidal approach in Panel \ref{fig:1pass}, had an apocentre distance only half that of the original planet. By $t=16.07$ days they have already re-entered the tidal sphere, disrupting on their own, each creating once again a less massive tidal stream (with its own, new dispersion in particle semi-major axes). This sequence of events begins to form interlaced elliptic eccentric annuli, each with a different angular size depending on the physical and orbital properties of the original fragment from which they were produced. The smallest fragments form eccentric rings instead.
Focusing again on the planet, it now undergoes its second disruption, which closely resembles the first. However, this disruption sheds much more mass from the planet's rocky exterior, an outcome which we explain by the planet's rapid spin, as follows. Our analysis shows that the planet has obtained a 3.3 h rotation period during the first disruption. Tidal spin-up is a well-known phenomenon, which has been previously seen in simulations involving soft tidal encounters for a wide range of applications, including stars \citep{AlexanderKumar-2001,AlexanderKumar-2002} and asteroids \citep{RichardsonEtAl-1998,WalshRichardson-2006,MakarovVeras-2019}. To the best of our knowledge, however, our paper is the first to report and (in Paper II) statistically analyse the tidal spin-up of terrestrial planet fragments.
We will show that self-rotation of the planet yields a larger stellar Roche limit, effectively making the tidal disruption of the planet deeper, given the same approach distance as before. In order to illustrate this point, consider the simple, classical derivation of the Roche limit. Assuming no rotation, one equates the force of self-gravity $F_{sg}$ exerted by the planet with the tidal force $F_T$ exerted by the star. Using the same notation as in Section \ref{S:Analytical}, this gives the standard $R_{roche}=R\,(2M/m)^{1/3}$, which we can also invert to obtain $R=R_{roche}\,(m/2M)^{1/3}$. Now Equation \ref{eq:TidalForce} can be re-written as a function of $R_{roche}$:
\begin{equation}
F_T=\frac{G(2M)^{2/3}m^{1/3}R_{roche}}{d^3}
\label{eq:TidalForce2}
\end{equation}
By adding the planet's rotation, however, we now have $F_{sg}=F_T \pm F_{rot}$. The negative sign before $F_{rot}$ is applicable to retrograde rotation. Since here the planet's rotation is excited during the initial tidal approach in the same sense as its orbit, we take only the positive sign. We thus have:
\begin{equation}
\frac{Gm}{R^2} = \frac{2GMR}{R_{roche}^3} + \omega^2 R
\label{eq:RocheWithRot1}
\end{equation}
\noindent where $\omega$ is the planet's rotation rate. It is convenient to express $\omega$ in terms of the planet's breakup rotation $\omega_{br}=\sqrt{Gm/R^3}$ (obtained by equating self-gravity with the centrifugal force at the surface), such that $\omega=\lambda \omega_{br}$, $\lambda$ being the breakup velocity fraction. Rearranging Equation \ref{eq:RocheWithRot1} and solving for $R_{roche}$ we get:
\begin{equation}
R_{roche} = R \left( \frac{2M}{m} \right)^{1/3} \left( 1-\lambda^2 \right)^{-1/3}
\label{eq:RocheWithRot2}
\end{equation}
We note that when $\lambda\ll 1$, Equation \ref{eq:RocheWithRot2} recovers the standard Roche limit, while for $\lambda=1$, $R_{roche}$ goes to infinity, as one expects.
Since $R_{roche}$ increases due to self-rotation, it follows that the same close approach distance $d$ is now comparatively deeper than in the non-rotating case. Plugging Equation \ref{eq:RocheWithRot2} into Equation \ref{eq:TidalForce2}, the tidal force effectively increases by a factor of $(1-\lambda^2)^{-1/3}$. Given the Earth's breakup rotation period (approximately 1.4 h), and the 3.3 h rotation period of the returning planet from Panel \ref{fig:2pass}, we have $\lambda \cong 0.424$ and $\lambda^2 \cong 0.18$. Thus the tidal force effectively increases by about 7\%. Note that in this simple analysis, we ignored the planet's significant ellipsoidal shape due to fast rotation, which, as we recall from Equation \ref{eq:TidalForce}, increases the tidal force even more, and at the same time reduces self-gravity on its surface. Hence, the balance between these two forces is offset even more. We nevertheless note that the tidal force magnitude still remains dominated by the distance of close approach $d$. Self-rotation induces a much smaller effect, yet it facilitates more mass shedding, especially when $\lambda$ is large. Even when $\lambda$ is small, self-rotation can modify the energy dispersion in the stream, as we discuss in Appendix \ref{A:RotationDependence}.
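As a worked example of Equation \ref{eq:RocheWithRot2}, the short snippet below (illustrative only; constants are rounded) evaluates the rotating Roche limit and the corresponding tidal-force enhancement factor $(1-\lambda^2)^{-1/3}$ for the returning planet discussed above, reproducing the $\sim$7\% figure.
\begin{verbatim}
def roche_with_rotation(R, m, M, spin_period, breakup_period):
    # Equation RocheWithRot2: Roche limit of a spinning planet and the
    # tidal-force enhancement factor (1 - lambda^2)^(-1/3)
    lam = breakup_period/spin_period        # lambda = omega/omega_br
    boost = (1.0 - lam**2)**(-1.0/3.0)
    return R*(2.0*M/m)**(1.0/3.0)*boost, boost

# Earth-sized planet returning with a 3.3 h spin (breakup period ~1.4 h)
# around a 0.6 solar-mass WD:
_, boost = roche_with_rotation(R=6.371e6, m=5.97e24, M=0.6*1.989e30,
                               spin_period=3.3, breakup_period=1.4)
print(f"tidal force enhanced by {100*(boost - 1):.0f}%")   # ~7%
\end{verbatim}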
Moving to Panels \ref{fig:3pass}-\ref{fig:10pass}, the same qualitative behaviour continues. As mass is shed from the planet with each tidal approach, $\omega_{br}$ increases with decreasing size, but so does $\omega$ (which is spun up during the disruptions). In Panel \ref{fig:3pass} a few small iron fragments break off from the planet, each consisting of several particles. In Panel \ref{fig:4pass} the streams include numerous fragments the size of small asteroids, each with a tiny iron core and outer rocky layer, however the planet is still intact. The main change occurs in Panel \ref{fig:5pass}, where the fast-spinning planet breaks into a chain of multiple large fragments, as in Figure \ref{fig:zoomin0_5}, albeit these fragments are now largely composed of iron, as most of the rocky material has already been removed during the previous tidal approaches. Finally, in Panel \ref{fig:10pass} we obtain the eventual properties of the fully formed disc. We expect very few changes to occur beyond this point. There remain only a few bound fragments that were flung to distances beyond the original semi-major axis, which will return for subsequent passes and disruptions around the star. By number, they represent a negligible fraction and may be ignored.
Interestingly, the inner part of the disc is solely rocky, which seems a recurring feature in many of our simulations. This could be intuitively understood from the fact that in Panel \ref{fig:2pass}, where we have the first major disruption, the planet is larger than in \ref{fig:5pass}, in which the iron finally enters the tidal streams during the last cataclysmic disruption. In both cases the planet has the same $r_\mathrm{crit}$ (same $a$ and $d$), which means that the former disruption is more dispersive than the latter (smaller $\acute{a}$ due to larger $R$), explaining why the inner part is solely rocky.
\section{The hybrid approach}\label{S:Hybrid}
As previously shown, complete disc formation always requires a very long tracking time and typically multiple, repeated disruptions. In Section \ref{S:FullSPH} our approach to handling this computational problem was similar to that of all other previous studies: we deliberately chose unrealistically small orbits and a low resolution for our simulations. Real tidal disruption scenarios are, however, often characterized by extremely large and eccentric orbits, making them computationally inaccessible to any numerical method proposed until now. The hybrid approach described in this section does not suffer from the same limitation. It enables us to track the tidal disruption and disc formation of any planet, regardless of how far away its orbit is, with exactly the same efficiency. This level of performance does come with a price, since it entails certain assumptions. However, in the following section we will show that these are negligible compared to the potential gain.
\subsection{Principles and assumptions}\label{SS:HybridDescription}
As we have shown in Section \ref{SS:PericentreDependence}, extremely deep tidal disruptions give birth to violent and gravitationally unconfined streams. They break up the planet into its constituent particles and prevent the further coagulation of larger fragments. The outcomes of such disruptions can be fairly well characterized analytically, and in some cases can also be tracked fully with SPH within reasonable timescales. We therefore note that the hybrid method was never intended to treat these cases, although it can certainly do so, and even with superior performance.
The hybrid approach is intended for an efficient treatment of partial disruptions, or full disruptions which are not as deep, and which result in gravitationally self-confined tidal streams. The approach makes use of the fact that the primary processes taking place during these tidal disruptions are restricted to a relatively small spatial domain. First, we recall that the differential gravitational force that breaks the object apart is relevant only within the Roche limit of the star. The second important phase is fragmentation. It is during this phase that small particles may collapse under the stream's own self-gravity, to form larger fragments. The relevant spatial domain here, as discussed in Section \ref{SS:PericentreDependence}, is also rather small, exceeding the tidal sphere only by an order of magnitude or so. Overall, the breakup and fragmentation phases constitute only a tiny fraction of the total spatial domain (of the original orbit), and are confined to the immediate environment of the star.
Our approach is therefore to restrict the SPH computations only to this relatively small domain, and to omit unnecessary calculations outside of it. Following the fragmentation phase, we identify the emerging fragments (whose constituent particles form spatially connected clumps of material), and for the remainder of their orbits, their trajectories are calculated and tracked analytically, assuming Keplerian orbits. Our hybrid program simply places each fragment once again near the star's Roche limit, based on its return orbital elements. This instantaneous 'transport' comes with the price of ignoring the possible interactions or collisions of this fragment with other fragments or pre-existing material orbiting the star (see discussion in Appendix \ref{A:Collisionless}), in addition to other processes like radiation effects from the star (Appendix \ref{A:Radiation}). However, it saves a lot of computational time, since only a small fraction of the orbit (e.g. $\sim 10^{-2}$ even in contrived, low-eccentricity test cases such as in Figure \ref{fig:SPHVsHybrid}) is simulated in full. The returning fragment immediately undergoes an additional tidal disruption, potentially splitting into a new set of fragments, with its own unique dispersion in orbital parameters, and so on. The discussion in Appendices \ref{A:Collisionless} and \ref{A:Radiation} shows that this partitioning of the tidal disruption problem is judicious in this case.
The hybrid code's main task is to identify the fragments, accurately calculate their orbits and, especially, handle the synchronization and timing of the subsequent disruptions. Apart from this, its other procedural task is handling the SPH job dissemination. The hybrid code terminates when reaching one of two outcomes: either all fragments have ceased disrupting given their exact size, composition and orbit; or fragment disruption is inhibited when fragments reach the minimum size - that of a single SPH particle.
\subsection{Code implementation}\label{SS:HybridImplementation}
Our code performs the following sequence of steps:
\noindent\textbf{(a)} Based on the input orbital parameters of the disrupting planet, and the mass of the star, a position outside twice the Roche limit and the planet orbital period are both calculated.
\noindent\textbf{(b)} Based on the initial radial distance and orbital period, the initial true anomaly, position and velocity of the planet are calculated.
\noindent\textbf{(c)} The required SPH duration to achieve fragmentation is calculated (the collisional phase is neglected to get a factor of 2 improvement in computation speed).
\noindent\textbf{(d)} The needed input files for the \emph{miluphCUDA} SPH code are generated, including a pre-processed relaxed planet for initializing the SPH disruption simulation.
\noindent\textbf{(e)} Execute the SPH job on the GPU.
\noindent\textbf{(f)} Analyse result, generate new SPH input files and repeat step (e) until no further disruptions occur or 100\% of the fragments have been reduced to single SPH particles.
\noindent\textbf{(g)} Finalize output/visualization files. \\
Steps (a)-(g) are scripted with BASH. Jobs are disseminated via a Linux portable batch system (PBS). The main body of code (approximately 1500 lines) is carried out in step (f) via a separate C program, performing these steps:
\noindent\textbf{(f1)} Find physical fragments (clumps) of spatially connected SPH particles using a friends-of-friends algorithm.
\noindent\textbf{(f2)} Compute fragment properties, in addition to their centre of mass (COM) positions and velocities.
\noindent\textbf{(f3)} Compute fragment orbital elements and subsequent disruption times (returning fragments are assumed to be transported to exactly the same distance from the star, regardless of any change in their composition. That is why we calculate a position outside twice the estimated rocky Roche limit).
\noindent\textbf{(f4)} Sort fragments (and their inherent particles, i.e. their relative positions and velocities with respect to the fragment COM) by their subsequent disruption times.
\noindent\textbf{(f5)} Generate the input files to start the subsequent SPH job (i.e., the disruption of the next fragment, after performing the appropriate synchronization procedures).
\noindent\textbf{(f6)} Generate visualization output files, between the current time and the next tidal disruption time (or the simulation time limit, whichever is smaller).
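To make the control flow of steps (f3)-(f5) concrete, the following schematic sketch (written in Python for readability; our actual implementation is the BASH/C pipeline described above, and the SPH call is a placeholder) shows the fragment bookkeeping: orbital elements are derived from each fragment's COM state vector, the next pericentre passage is computed analytically, and the fragment is queued for re-injection just outside the Roche limit.
\begin{verbatim}
import heapq
import numpy as np

G = 6.674e-11

def orbital_elements(r_vec, v_vec, M_star):
    # Semi-major axis and eccentricity from a COM state vector (step f3)
    r, mu = np.linalg.norm(r_vec), G*M_star
    a = 1.0/(2.0/r - np.dot(v_vec, v_vec)/mu)           # vis-viva
    e_vec = np.cross(v_vec, np.cross(r_vec, v_vec))/mu - r_vec/r
    return a, np.linalg.norm(e_vec)

def hybrid_loop(fragments, M_star, r_inject, t_end, run_sph_disruption):
    # Schematic event loop: disrupt the next returning fragment with SPH,
    # then queue its bound sub-fragments by their analytic return times.
    queue = [(f['t_return'], id(f), f) for f in fragments]
    heapq.heapify(queue)
    while queue:
        t, _, frag = heapq.heappop(queue)
        if t > t_end or frag['n_particles'] <= 1:
            continue                        # single SPH particle: stop here
        for child in run_sph_disruption(frag, r_inject):  # placeholder GPU job
            a, e = orbital_elements(child['r_com'], child['v_com'], M_star)
            if a < 0 or e >= 1:
                continue                    # unbound: leaves the system
            child['t_return'] = t + 2*np.pi*np.sqrt(a**3/(G*M_star))
            heapq.heappush(queue, (child['t_return'], id(child), child))
\end{verbatim}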
\subsection{Performance and validation of the hybrid model}\label{SS:Hybrid_Performance_Validation}
Our goal in this section is to test the hybrid SPH-Analytical model against similar models that were carried out in full, using only SPH. If successful, the hybrid simulation should produce an identical disc of debris, but it will achieve this goal in significantly less time.
Figure \ref{fig:SPHVsHybrid} shows that the hybrid model has fulfilled its goal. In this example we compare the formation of a debris disc for a tidally disrupted Earth-sized planet around a 0.6$M_\odot$ WD. The planet's parameters are identical to those presented in Figure \ref{fig:zoomin0_5}, with a pericentre distance of $q=0.5R_{\odot}$ and semi-major axis $a=0.1$ AU - an unrealistic and arbitrary choice, however one which makes the full SPH simulation runtime feasible (approximately 100 days). The left and right columns show the progress of the full SPH and hybrid simulations, respectively. The colour scheme denotes composition: orange - rock ; black - iron; green - WD. The resolution is 10K particles. The formation progress is given in units of the original planet's orbital period, 14.9 days, and plotted for 1, 2 and 9 orbit times.
Note that the fragments (and star) in the hybrid simulation are magnified by a factor of 100, which is sufficient for observing the angular size of the biggest among them. E.g., at the bottom of Panel \ref{fig:Hybrid1orbit}, we can see several fragments returning (clockwise) for a second disruption. Due to the magnification, these fragments are noticeably larger, being major, multiple-particle chunks from the original disrupted planet.
\begin{figure*}
\begin{center}
\subfigure[1 orbit time, full SPH.] {\label{fig:SPH1orbit}\includegraphics[scale=0.4]{SPH1orbit.eps}}
\subfigure[1 orbit time, Hybrid.] {\label{fig:Hybrid1orbit}\includegraphics[scale=0.4]{Hybrid1orbit.eps}}
\subfigure[2 orbit times, full SPH.]
{\label{fig:SPH2orbit}\includegraphics[scale=0.4]{SPH2orbit.eps}}
\subfigure[2 orbit times, Hybrid.] {\label{fig:Hybrid2orbit}\includegraphics[scale=0.4]{Hybrid2orbit.eps}}
\subfigure[9 orbit times, full SPH.] {\label{fig:SPH9orbit}\includegraphics[scale=0.4]{SPH9orbit.eps}}
\subfigure[9 orbit times, Hybrid.]
{\label{fig:Hybrid9orbit}\includegraphics[scale=0.4]{Hybrid9orbit.eps}}
\end{center}
\caption{Top-down view of full SPH (left panels, dark tones) vs. hybrid method (right panels) discs. We compare the disc formation of a tidally disrupted Earth-sized planet ($a=0.1$ AU and $q=0.5R_{\odot}$) around a 0.6$M_\odot$ WD. Colour denotes composition: orange - rock ; black - iron; green - WD. The figure shows the disc formation progress in units of the original planet's orbital period (14.9 days). The hybrid model is generally indistinguishable from the full SPH simulation, but has a runtime of merely 2 days, instead of 100 days.}
\label{fig:SPHVsHybrid}
\end{figure*}
The final hybrid disc is almost indistinguishable from the full SPH simulation, yet it has a runtime of merely 2 days, instead of 100 days. In this example we obtained a factor of 50 improvement in performance. However, by increasing either the resolution or the semi-major axis, the hybrid model would outperform the pure SPH simulation by a much higher factor.
Consider first the semi-major axis in this example. If we increase it by a factor of 100 to a more realistic value of 10 AU, the full SPH simulation runtime would (to first-order approximation) scale with the orbital period, thus taking about $100^{3/2}=1000$ times longer. On the other hand, raising it to 10 AU would make essentially no difference whatsoever in the hybrid model (the only departure from complete identity arises from changing the disruption regime, as suggested in Table \ref{tab:regimes}). The computation thus becomes plausible only with the hybrid model.
Now consider the resolution. Broadly speaking, SPH runtime scales like the resolution squared, so increasing the simulation from 10K particles to 500K particles would lengthen the runtime by a factor of 2500. In the hybrid model, the actual disruptions are also modelled with SPH (which means that the same rule applies), however the resolution is not constant. The runtime during the first flyby is equivalent to full SPH, since the planet initially contains all the SPH particles, however subsequent fragments become smaller and smaller, until eventually they may stop disrupting entirely. Hence, the simulation progress becomes exponentially faster. When $q$ is large, we have fewer yet larger (with more SPH particles) fragments, whereas deeper disruptions result in a multitude of smaller fragments (e.g., see Section \ref{SS:PericentreDependence} wherein extremely deep disruptions break the planet almost entirely into its constituent, SPH particles). In the latter case, the hybrid model will have substantially more fragments to iterate on, however they will cease disrupting quickly, typically by promptly reaching the minimum, single SPH particle size. In the former case, the hybrid model will have much fewer fragments to iterate on, however they may require several orbits until they all cease disrupting.
We generally observe that the hybrid model runtime is the shortest for $q=0.1$. Simulations with higher $q$ values have a longer runtime by a factor of 2-4, however we see no clear proportion or relation between the runtime and $q$, since we suspect a more complex dependency, affected by more parameters than merely $q$. First, the runtime anti-correlates with the semi-major axis, since more eccentric disruptions decrease $r_\mathrm{crit}$ and produce less bound debris. Second, the runtime can either correlate or anti-correlate with the mass. On the one hand, increasing the mass gives a smaller relative fraction of bound material, and also increases the minimum fragment size (that of a single SPH particle), hence the simulation might be (depending on $q$) discontinued earlier for numerical limitations rather than for any physical reasoning. On the other hand, increasing the mass enlarges the dispersion in $\acute{a}$ (since $r_\mathrm{crit}$ moves in the opposite direction of the \emph{non-disruptive regime}, see Table \ref{tab:regimes}), so it can prolong the simulation. Overall, we emphasize that this factor is, at most, of the order of unity, hence the hybrid model performs well under all circumstances and for any combination of parameters.
In stark contrast, full-scale high resolution SPH simulations can be tracked within reasonable runtimes \emph{only} when the disruptions are extremely deep (lowest $q$). The latter can lead to complete breakup (to the single SPH particle level) and a gravitationally unconfined stream; most of the bound mass then falls back shortly after the first tidal disruption of the planet, and the complete absence of large fragments does not necessitate repeated disruptions. The implications are immediately evident - high resolution full SPH simulations will always remain computationally restricted to a very narrow portion of the disruption phase space.
We conclude that the hybrid model has fulfilled its aim. It produces similar results, yet enables an unlimited increase in the semi-major axis, nullifying the constraints that have limited past simulations, and it also enables a huge increase in resolution, particularly for large pericentre distances. To demonstrate its power in a more realistic scenario, we refer to Paper II, where we use the hybrid model to simulate a tidal disruption (originating at $a=150$ AU) at a resolution of 500K particles. Had we attempted to simulate the same scenario only with full SPH, the runtime could be estimated from the aforementioned arguments as 100 days (see the simulation in Section \ref{SS:DiscFormation} which has the same $q$ and planet mass), times the increase in $a$ ($\times 1500^{1.5}$), times the increase in resolution ($\times 50^2$), or $\sim 1.45 \times 10^{10}$ days in total. Our hybrid simulation accomplished the same task in merely 40 days. We also note that the full SPH simulation from Section \ref{SS:DiscFormation} ran for 10 orbit times (of the original planet), whereas the hybrid model ran for 953 orbit times, at which point the last fragment ceased disrupting. Hence, not only is it more efficient, but also more complete, noting however that full completion is not necessarily a very significant criterion, since over 99\% of the disruptions occurred within the first few orbits anyway (e.g. see Section \ref{SS:DiscFormation}).
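The runtime estimate above can be checked with trivial arithmetic, assuming (as stated) that runtime scales with the orbital period and with resolution squared:
\begin{verbatim}
base_runtime_days = 100            # full SPH: a = 0.1 AU, 10K particles
a_factor = (150/0.1)**1.5          # period scaling for 0.1 AU -> 150 AU
res_factor = (500_000/10_000)**2   # runtime scaling for 10K -> 500K particles
print(base_runtime_days*a_factor*res_factor)   # ~1.45e10 days
\end{verbatim}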
We further validate the hybrid model against the only previous work which studies disc formation through tidal disruption around WDs, performed by \cite{VerasEtAl-2014}. This work considers a $\sim$3 km asteroid, and so we generate a similar setup. We use the same orbit ($a=0.2$ AU), a similar pericentre distance ($q=0.1 R_{\odot}$) and the same resolution (5K particles). The results are in very good qualitative agreement, producing a similar outcome to their Figure 10. We indeed form a ring of debris, as expected from the \cite{VerasEtAl-2014} study, in addition to our theoretical predictions in Section \ref{S:Analytical}.
\section{Future modifications and applications}\label{S:Future}
The new hybrid approach enables us to study a wide variety of problems, which are difficult to study using existing approaches. In Paper II we utilize the current code to perform a suite of hybrid simulations considering disc formation for a wide range of dwarf and terrestrial planets disruptions, pericentre distances and semi-major axes between 3 AU and 150 AU. However, we suggest several directions in which the current code can be further improved, and used for other general applications, as follows.
\subsection{Disc formation by water-bearing objects}\label{SS:Water-bearing}
In this study we consider solely objects whose compositions are terrestrial-like, consisting of a rocky envelope and an iron core. However, recent observations \citep{FarihiEtAl-2013,RaddiEtAl-2015,XuEtAl-2017}, in addition to our theoretical studies on water-bearing minor and dwarf planets around WDs \citep{MalamudPerets-2016,MalamudPerets-2017a,MalamudPerets-2017b}, strongly reaffirm previous theoretical works \citep{SternEtAl-1990,JuraXu-2010}, and suggest that if such objects are common around main sequence stars, they should also be common around WDs. Our detailed thermo-physical simulations showed that much of their internal water content can be retained while their host stars evolve through the main-sequence, RGB and AGB high luminosity phases. This retention depends on an intricate set of parameters, including the host star's mass, in addition to their own size, composition, orbit and radionuclide abundances.
Water-bearing objects are generally both smaller and less dense, and therefore can disrupt within an even larger tidal sphere. What then might we expect from such tidal disruptions, and in what way would they differ from the terrestrial-like objects studied here?
As a general rule, the disruption modes and resulting discs (e.g. the semi-major-axes and size distribution) could be very similar to those of dry planetesimals, however the debris discs might form and evolve through a completely different route, according to the following arguments. Since irradiation falls off as the square of the orbital distance, disrupted water-bearing fragments (whose pericentre distances are between $\sim 0.1-1 R_{\odot}$) receive $\sim 10^4-10^6$ times the intrinsic luminosity of the WD during close approach, when compared to typical Solar system comets at 1 AU. Depending on its cooling age, WD luminosity ranges between $10^3-10^{-5} L_\odot$, hence we might typically expect $10^{-1}-10^9$ times the amount of insolation, compared to Solar system comets (see \cite{MalamudPerets-2016} for discussion). However, we recall that in tidal disruptions the characteristic time a fragment spends near pericentre is only $\sim 10^{-1}$ days. Depending on the precise size of the fragment (which in turn might depend on exactly how deep the disruption is), we might expect various degrees of water sublimation rates.
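The order-of-magnitude insolation range quoted above follows from a simple flux scaling; a rough check (with rounded constants, for illustration only) is:
\begin{verbatim}
R_sun_in_au = 1.0/215.0                       # ~1 solar radius in AU
for q_rsun in (0.1, 1.0):                     # pericentre in solar radii
    geometric = (1.0/(q_rsun*R_sun_in_au))**2 # flux boost vs a comet at 1 AU
    for L_wd in (1e3, 1e-5):                  # WD luminosity in L_sun
        print(q_rsun, L_wd, geometric*L_wd)   # roughly spans 1e-1 to 1e9
\end{verbatim}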
It should also matter whether the original object is homogeneously mixed or differentiated into a rocky core and icy mantle. For a differentiated object, fragments are expected to be composed of either one material or the other, and might have some cohesive strength. While the icy fragments will experience sublimation, which might decrease their size between disruptions, the rocky fragments would evolve in much the same way as described in this study, since refractory materials have much higher sublimation temperatures \citep{RafikovGarmilla-2012,XuEtAl-2018}. For a homogeneous object, tidally disrupted fragments are rather expected to remain homogeneous, and since the original object is likely small (or else it would differentiate), the fragments would be even smaller - most likely rubble piles dominated by gravity alone. Outgassing volatiles might carry with them dust or pebble-like silicate grains, and then the two distinct compositions might evolve on very different timescales. For virtually any WD (with $L<0.1L_\odot$) the radiation forces are too feeble compared to the gravitational forces and are thus unable to disperse the gas \citep{BonsorWyatt-2010,DongEtAl-2010,VerasEtAl-2015}, which most likely would accrete onto the WD by experiencing ionization and then being subject to the magneto-rotational instability \citep{KingEtAl-2007,Farihi-2016}, whereas the silicate grains are slower to evolve and might be subject to PR drag \citep{VerasEtAl-2015} and collisions \citep{KenyonBromley-2017,KenyonBromley-2018}.
These, as well as other considerations, suggest a level of complexity that needs to be addressed in future dedicated studies. In particular, it might require an approach which utilizes the hybrid technique in combination with other numerical methods or analytical calculations. E.g., the sublimation evolution of fragments may be studied either through complex numerical simulations like the ones used by \cite{MalamudPerets-2016,MalamudPerets-2017a,MalamudPerets-2017b} or via detailed analytical treatment \citep{BrownEtAl-2017}.
\subsection{Inclusion of strength/porosity models}\label{SS:Strength}
The original developers of SPH considered the dynamics of fluid flow, governed by a set of conservation equations (mass, momentum and energy). An equation of state completes the scheme by relating the various thermo-dynamical variables. As the SPH technique evolved, more advanced models have emerged which incorporated representations for elastic solids, elasto-plastic solids, fracture/damage in solids and inclusion of sub-resolution porosity in small objects. All of the aforementioned models are in fact implemented in the SPH code \emph{miluphCUDA}, yet they are not used in this study. In this study we perform only fully hydrodynamical SPH simulations, neglecting material strength.
For impact collision modelling, it is a well-established fact that material strength can greatly affect simulation outcomes. See e.g. some recent papers by \cite{BurgerSchafer-2017,GolabekEtAl-2018} which are relevant to the size range investigated in our study. For disc formation by tidal disruptions, however, no previous work has ever been performed, to the best of our knowledge, that methodically investigated SPH simulation outcomes when comparing various strength models. It is a well known fact that for very large objects (in the range of hundreds to thousands of km, depending on the composition), self-gravity dominates in determining the size of the tidal sphere, as it opposes the tidal force (see Section \ref{SS:DiscFormation}). However, for smaller bodies, the internal material strength takes over as the dominant force \citep{BrownEtAl-2017}. Therefore, there is a size range in which material strength can be rather important for tidal disruptions and their outcomes.
It is, however, incorrect that material strength always becomes more important with diminishing size. There exists a class of small objects, in the range of hundreds of metres to a few km, for which the effective strength to resist global deformation is once again low, when it is controlled by fractures or flaws \citep{Jutzi-2015}. These so-called rubble piles are dominated by gravity, and they may easily break apart by failure along their low-strength internal fault surfaces. Past models \citep{BenzAsphaug-1994} have often treated such fractured material as completely strength-less, with a pressure-independent yield criterion, whereas some newer constitutive models consider internal friction and a pressure-dependent yield criterion \citep{CollinsEtAl-2004,Jutzi-2015}, since it is known that the shear strength of rocks is pressure-dependent. According to \cite{BrownEtAl-2017}, the fragments that emerge from tidal disruptions of fractured rubble-pile structures may potentially be considered as monolithic objects if at some smaller scale they once again start to have internal cohesion, although what this scale might be is not yet certain. If we rely on the cohesionless asteroid spin-barrier as evidence, it might be around 150-300 m \citep{PravecEtAl-2002}.
We should consider the possibility that initially cohesive bodies may lose some of their strength when undergoing partial disruptions, or perhaps during full disruptions which produce recycled, fully-damaged, rubble-pile fragments. Tidally disrupted rubble piles might then, in turn, produce smaller fragments with some internal cohesion. All of these aspects require detailed research and merit future work in these directions.
\subsection{Stream gravitational confinement: fragmentation and intra-collisions}\label{SS:Fragmentation}
In Section \ref{SS:PericentreDependence} we showed that tidal streams may or may not be gravitationally self-confined, and fragment under their own self-gravity. A preliminary analysis was performed, equating theoretical predictions and calculations with numerical simulations. We have shown that in some cases the stream undergoes fragmentation, with very good agreement between simulations and theory, while extremely deep disruptions seem to inhibit fragmentation.
This result was interpreted as being linked to the stream's free-fall timescale through the $\alpha$ parameter, which appeared to be strongly dependent on the breakup distance. It is an important behaviour which necessitates further investigation, and must rely on a much more extensive grid than was possible in this paper, which had a broader goal and focus. For future applications of the hybrid model, we should numerically determine $\alpha$ by exploring a wider parameter space of breakup distance and planet size. We would then be able to semi-analytically determine the precise SPH duration required in each application.
Additionally, we have shown that fragments, shortly after being formed, collide among themselves, prior to reaching a more stable, longer-term collisionless state. In our hybrid simulations we have largely neglected the importance of this stage. However, a complete understanding of the tidal disruption and disc formation phenomena entails some further development of the theoretical framework governing this process, including a detailed model and an investigation of the collision outcomes and the production of second-generation fragments. In particular, the size distribution and abundance of collision-induced small particles is an important question, since swarms of small particles and dust can manifest as strong transit events, while larger fragments cannot. For this purpose, we could further examine such collisions in high resolution, using typical impact parameters from our existing simulations.
\subsection{Disc circularization and evolution}\label{SS:CircularizationEvolution}
Our paper and the \cite{VerasEtAl-2014} study deal with the initial formation phase of a disc, triggered by a tidal disruption following a close approach to a WD. While the two studies result in completely different debris discs, the outcomes nevertheless share a common morphological characteristic -- eccentricity. The \cite{VerasEtAl-2014} study considers an asteroid, which, after disruption, forms a narrow eccentric ring of particles following the original asteroid trajectory. Typically, the asteroid must originate from a distance of at least a few AU. As such, the asteroid and resulting ring have an eccentricity approaching 1. Our study considers much bigger objects, up to terrestrial planet sized, originating from various potential regions of a planetary system, up to hundreds of AU. When disrupted, they form dispersed discs of interlaced elliptic eccentric annuli, extending from as little as $\sim$0.05 AU (in the most extreme case) to well beyond the original planet orbit (see Figure 1 in Paper II). The corresponding eccentricities of fragments in such a debris disc are therefore at least 0.9, and typically much higher.
Taking a leap forward in time, the studies of \cite{KenyonBromley-2017,KenyonBromley-2018} focus rather on the final formation sequence of the disc, when it reaches a much more compact state (eccentricities of order $\sim 0.01$). Here, collisional grinding of large particles rapidly pulverizes them to mere dust and gas.
We are \emph{missing} an important link between those stages. \cite{VerasEtAl-2015} consider disc shrinkage through the drifting of small micron-to-cm sized particles by PR drag, however it is unclear whether this particle size range constitutes a significant mass fraction of the disc. Rather, various arguments throughout this paper (and in particular Appendix \ref{A:Radiation}) emphasize the potential importance of the Yarkovsky effect, which applies to more sizeable fragments up to hundreds of metres. The Yarkovsky effect could be an important agent for circularizing the disc, however it remains to be properly explored in this context \citep{VerasEtAl-2015}. Rotational statistics (presented in Paper II) could provide a key input for such models. Alternatively, a collisional cascade might break up the fragments over longer evolutionary timescales, such that a more significant fraction of the disc could evolve through PR drag \citep{WyattEtAl-2011}. This possibility requires further consideration in the context of our study.
Another direction that may facilitate disc shrinkage has been recently proposed, from a slightly different angle. \cite{GrishinVeras-2019} suggest that small exo-Kuiper or exo-Oort like objects in the 0.1-10 km size range, can be captured by a compact gaseous disc in the vicinity of a WD. If so, the same analytical formalism may be used in order to study the fate of similar-sized fragments that form in our study, and which may likewise experience some gradual dissipation as they interact with such a gaseous component. Once again, the starting point for the calculation could be the disc layouts that emerge from our hybrid models.
We advocate that this missing link should attract a much greater focus than it has thus far. For example, consider the disintegrating object around WD 1145+017 \citep{VanderburgEtAl-2015}. In \cite{VerasEtAl-2017}, a differentiated body initially placed on a nearly circular orbit might steadily disrupt its mantle to produce a signal compatible with the observed hour-scale periodicity, but how does it assume such an orbit in the first place? The ideas discussed in Paper II suggest that it might simply be in a more advanced evolutionary state. That evolution is our missing link.
\subsection{Ring formation around giant planets}
\label{SS:RingFormation}
Ring formation around giant planets has been previously associated with tidal disruptions, either by a primordial satellite after inward migration towards the planet \citep{Canup-2010} or by an occasional passing object \citep{Dones-1991,HyodoEtAl-2017}. These various scenarios were modelled via (fluid) SPH simulations.
In the latter case, the captured fragments from the disruption typically move on very eccentric orbits. Hence, \cite{HyodoEtAl-2017} stop their SPH simulations (using a resolution of $10^5$ particles) after the initial flyby, and in any case prior to the fragments reaching their apocentres. They use follow-up N-body simulations for the longer-term evolution of fragments, including the effect of the planet's oblate potential. However, the streams that emerge in their SPH simulations are gravitationally self-confined, and large fragments are bound to return for subsequent disruptions; these should be further modelled with SPH to obtain accurate particle size and orbital distributions.
We propose that the hybrid model is suited to this task. It can follow the tidal disruption of passing objects at superior resolution and track further disruptions of returning fragments. Subsequent N-body simulations can be employed in a similar manner to \cite{HyodoEtAl-2017} or Appendix \ref{A:Collisionless}, however only at later times. Furthermore, internal strength treatment, as suggested in Section \ref{SS:Strength}, might prove to be very important. Likewise, modelling small passing objects as porous might be important, since their sound speed is typically one or two orders of magnitude lower compared to consolidated material, thus fractures propagate throughout the body much more slowly, on a timescale comparable to the encounter timescale (in contrast, see \cite{Dones-1991}).
\subsection{Formation of families of Sun-grazing comets}\label{SS:sun-grazing}
The tidal disruption of minor planets that approach close enough to the Sun to be torn apart by tidal forces is a relatively little-studied aspect of their fate, at least in terms of detailed numerical approaches. As mentioned in Section \ref{S:Intro}, the only previous numerical study was performed by \cite{WeissmanEtAl-2012}, based on the n-body code developed by \cite{MovshovitzEtAl-2012}.
Our hybrid model provides a way of studying such tidal disruptions for any progenitor orbit. We can track the repeated passages of Sun-grazing comets around the Sun with unprecedented resolution, and try to explain their enigmatic observational attributes \citep{GranvikEtAl-2016}. From such a study we can potentially infer their characteristic properties.
\section{Summary}\label{S:Summary}
Our study introduces a new method for performing high-resolution, tidal disruption simulations, at arbitrary orbits. We call the technique the hybrid method, since it combines full SPH simulations in the relevant tidal disruption spatial domain, but also treats the orbit of tidally disrupted fragments analytically by instantaneously transferring them back to the tidal sphere for subsequent disruptions. Hence, the hybrid approach saves a tremendous amount of computational power, opening new possibilities for studying the long-term formation sequence of tidal disruption debris discs, which have not been possible with any existing model.
Prior to introducing our hybrid technique, however, we performed the following steps. First, we outlined a simple analytical impulse approximation model for treating single tidal disruptions. Using this model, we showed that tidal disruptions generally depend on a number of parameters. Nevertheless, given characteristic Solar system distances, small asteroids usually form a narrow ring in which all the asteroid material is bound to the star and remains on semi-major axes close to that of the original progenitor. Such disruptions were therefore termed the 'non-dispersive regime'. In contrast, larger dwarf or terrestrial planets usually form a completely different debris disc, in which just over half of the material becomes tightly bound to the star (compared to the progenitor orbit) and the other half becomes unbound, assuming hyperbolic trajectories. Such disruptions were termed the 'bi-modal regime'.
While providing invaluable insights into the process of single tidal disruptions, the analytical model is, however, inexact, since its assumptions are merely approximations. In reality: the disruptions are not instantaneous but gradual; the object is not spherical at breakup but elongated; the dissociation among the constituent particles is not complete unless the disruption is very deep; self-rotation prior to the disruption affects the disruption outcomes; and the model does not account for inhomogeneous objects consisting of various materials/densities/internal strengths etc. The most important point, however, is that unless tidal disruptions are extremely deep, they do not finish after merely a single passage through the tidal sphere. Large fragments can repeatedly return for subsequent disruptions, and given their new sizes and orbits they can disrupt in entirely different regimes compared to their original progenitor. All of these points merit the use of hydrodynamical simulations for a more realistic treatment of both single and multiple repeated disruptions, which are required in order to form a disc of debris.
We begin our numerical study by performing full hydrodynamic SPH simulations of various tidal disruptions. For the simulated bodies, low resolution and a small semi-major axis are required in order to track the formation of the debris disc; otherwise the problem becomes computationally intractable. We perform the first ever tidal disruption simulations of terrestrial-sized planets inside the tidal sphere of a WD, at various pericentre distances. For a large pericentre near the Roche limit, we show that the disruption is partial, shedding only a small fraction of the planet's mass, emanating from its outer portions. If the pericentre is halved, the disruption is full. It forms a classic narrow stream of debris; however, the stream is gravitationally self-confined, and it collapses to form multiple fragments, which later also collide and merge among themselves, forming smaller second-generation particles. An analytic fragmentation timescale model is introduced, and we show that it compares well with the measured fragmentation timescale. If the pericentre is a small fraction of the Roche limit, the disruption is both full and violent, such that the stream is gravitationally unconfined and the planet becomes almost entirely disassociated into its constituent particles. We discuss the possibility that the interstellar asteroid `Oumuamua is a hyperbolic fragment from a tidal disruption around a WD, formed by either one of the two \emph{full} disruption modes.
We then show a full SPH simulation of debris disc formation from start to finish, for a disrupted planet passing within, but near, the Roche limit. Even for this limiting case, the disc evolves rather quickly. After a few orbital times of the original planet, all fragments finish their repetitive sequence of tidal disruptions. The dispersed structure of the disc is made of a superposition of interlaced eccentric elliptical annuli that form consecutively as time passes. An emerging feature in this evolution is the tidal spin-up of the original planet and other fragments. Their excited self-rotation rates are rapid, typically corresponding to a large fraction of the breakup velocity. This fast rotation in the prograde sense significantly expedites the disruption processes in all but the first flyby, since the original planet typically rotates very slowly compared to its breakup velocity. We nevertheless also quantify the effect of an initially rotating planet, and find it to be neither significant nor entirely negligible.
Afterwards, we discuss the new hybrid concept. Each fragment disruption is modelled individually around the star. The relevant spatial domain is the Roche limit, where tidal forces dominate, in addition to the fragmentation domain. Subsequently, each fragment orbit is analysed and its next position for the successive tidal disruption is calculated. Our hybrid method entails a couple of assumptions, namely, that the fragments are largely unaffected by radiation effects and that the disc is collisionless. We review the literature to show that radiation is not very important during the typical disc formation timescales. We come to a similar conclusion about collisionality, by handing over fully developed SPH discs to a newly modified N-body code, and quantifying the amount of collisions. Not only are collisions scarce (at the 1\% level) on the timescale relevant to formation, but they are also restricted to a spatial domain inside or near the Roche limit. The same conclusion is supported for higher resolutions by analytic arguments.
We then introduce our code specifics, testing the outcomes of the hybrid model against identical full-scale SPH simulations. We conclude that the hybrid model produces essentially identical results, while computationally outperforming the full SPH simulations by many orders of magnitude. It enables an unlimited increase in the semi-major axis, removing the constraints that have limited past simulations, while at the same time also permitting a large increase in resolution.
We conclude the paper in Section \ref{S:Future} by listing a number of important future directions and applications which could further improve the hybrid model. To name the most important ones, the SPH code could also utilize more advanced internal strength, porosity and damage models, for much more realistic tidal disruption outcomes. We could also extend the model to be used in modelling water-bearing bodies in addition to dry ones. We suggest a number of emerging applications which could directly benefit from the hybrid model, and which cannot be numerically modelled by other methods due to computational limitations.
\section{Acknowledgements}\label{S:Acknowledgment}
We wish to thank the anonymous reviewer for excellent suggestions and comments that have greatly improved this manuscript. UM and HBP acknowledge support from the Minerva Center for Life under Extreme Planetary Conditions, the Israeli Science and Technology Ministry Ilan Ramon grant and the ISF I-CORE grant 1829/12. HBP is supported by the Kingsley distinguished visitor program at Caltech.
\newpage
\bibliographystyle{mnras}
\section{Introduction}\label{s:introduction}
Progress in observational and theoretical astrophysics over the last few decades has solidified the concept that galaxies form in the gravitational potential wells of dark matter haloes \citep{White1978}. In the standard Cold Dark Matter (CDM) model, these haloes form bottom-up \citep{Peebles1965} through a hierarchical coalescence of \textit{progenitors} into more massive \textit{descendants}, as well as through the accretion of diffuse matter. These processes, known as \textit{mergers} and \textit{smooth accretion}, respectively, are responsible for roughly 2/3 and 1/3, respectively, of the mass growth at any halo mass \citep{Genel2010,Wang2011}.
In the CDM framework of structure formation, the stellar content of galaxies grows via two processes: \textit{internally} by gas cooling and star formation in the host halo; and \textit{externally} by accretion of other galaxies \citep{Guo2008,Zolotov2009,Oser2010,Pillepich2015}. The balance between these two processes affects a number of key observables, such as the morphology \citep{Kormendy2009}, the amount of kinematic support \citep{Lagos2018a} and the structure of the stellar halo \citep{Helmi2018}. Since the coalescence of galaxies is normally triggered by an earlier merger of their haloes, the hierarchical assembly of haloes is pivotal to understanding the evolution of galaxies \citep{Toomre1972,White1991,Conselice2014}.
The rich assembly history of haloes not only dictates the hierarchical growth of galaxies, but it also largely determines the internal structure of the haloes themselves. It is widely accepted that the rich quasi-fractal substructure seen in simulated CDM haloes \citep{Springel2008b} is a biased sub-sample of the progenitor hierarchy \citep[e.g.,\xspace][]{Moore1999,Klypin1999b}. Further, there is good evidence that the approximately universal density profile of haloes, the `NFW profile' \citep{Navarro1996}, is itself a consequence of their universal mass accretion histories (\citealp{Ludlow2013}; but see \citealp{Pontzen2013} for a different view). At the very least, there is growing evidence that the internal structure of haloes is largely determined by their assembly history \citep[e.g.,\xspace][]{Navarro1997,Bullock2001,Wechsler2002,Zhao2009,Correa2015a} and retains a memory of the primordial fragments \citep{Gao2008b,Ludlow2016}.
In studying the bottom-up mass assembly of haloes, it is useful to represent this history by abstract trees, known as \textit{merger trees}, where the branches denote haloes that join hierarchically \citep{Roukema1993}. In mathematics, trees are defined as graphs in which any two vertices are connected by exactly one path \citep{Mesbahi2010}. Halo merger trees are a subclass of such trees, called \textit{rooted directional in-trees}, where all edges (branches) are directed towards a single \textit{root} vertex. The direction represents the direction of time and the root represents the final halo. Following this definition, halo histories approximated by merger trees cannot contain disjoint branches or haloes that split up as time progresses (but generalisations exist).
It is useful to ascribe a \textit{mass} to each branch in a merger tree, reflecting the physical mass of the halo this branch represents. Graphically, the mass can be visualised by the thickness of the branches \citep[e.g.,\xspace Fig. 6 in][]{Lacey1993}. Halo merger trees then look like real trees, in that they follow Leonardo da Vinci's rule \citep{Richter1970} whereby the mass (or cross-sectional area of real trees) remains conserved at the vertices. Smooth accretion (or merger events that lie below the resolution limit of a simulation) can be accommodated in this representation by allowing the branches to continually increase their mass between mergers. Similarly, it is also possible to account for the potential stripping/evaporation of mass (negative accretion) by decreasing the mass along individual branches.
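For readers who prefer a concrete representation, a branch of such a tree can be encoded by a minimal data structure that stores its mass and the links to its progenitors. The following Python sketch (with illustrative, non-standard names) is one possible realisation:
\begin{verbatim}
from dataclasses import dataclass, field
from typing import List

@dataclass
class Branch:
    """One branch (halo) of a rooted, directed merger tree."""
    mass: float                 # halo mass at the end of the branch
    progenitors: List["Branch"] = field(default_factory=list)

    def is_leaf(self) -> bool:
        # a leaf has no resolved progenitors (smoothly formed halo)
        return len(self.progenitors) == 0

# a binary merger of two progenitors into a root halo of mass 2+1=3
root = Branch(mass=3.0, progenitors=[Branch(2.0), Branch(1.0)])
\end{verbatim}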
Despite their importance for galactic evolution, only a few attempts have been made to quantify merger histories (e.g.,\xspace see discussions of the `tree contours', \citealp{ForeroRomero2009}; `bushiness', \citealp{Wang2016b}; and `effective number of streams', \citealp{Wang2017}). It is common to resort to, e.g.,\xspace the progenitor mass function (PMF; \citealp{Bond1991}) and the main branch mass assembly history (MAH; e.g.,\xspace \citealp{Li2007,Correa2015b}). However, these measures discard the chronological ordering of mergers (in the case of the PMF) and all except the main branch in a tree (in the case of the MAH). Moreover, these measures are themselves complicated mathematical functions, whose statistical comparison is not straightforward.
The current struggle to quantify merger trees contrasts with the widespread use of classification schemes for single merger events. It is common to label mergers by the number $n$ of coalescing haloes, distinguishing between \textit{binary mergers} ($n=2$) and \textit{multi-mergers} ($n>2$). For binary mergers, the mass ratio $r\equiv m/M\in(0,1]$ is often used to measure the significance of the merger \citep{Fakhouri2010} and divide between \textit{minor} (normally $r<1/3$) and \textit{major} ($r\geq1/3$) mergers; sometimes the dividing threshold is set to 1/10 \citep{Genel2010}. In the case of galaxy-galaxy mergers, it is further common to specify the types \citep[e.g.,\xspace][]{Dubinski1996,Springel2005b}, structure and gas fraction (e.g.,\xspace \textit{wet} versus \textit{dry} mergers) of the objects involved.
\begin{figure}
\includegraphics[width=0.992\columnwidth]{fig_australian_trees.jpg}
\caption{Natural trees that are structurally similar to two extreme types of merger trees. The Norfolk pine (Araucaria heterophylla, left) resembles `minimal' merger trees with no significant mergers in their history; the White Frangipani (Plumeria obtusa, right) is similar to `maximal' merger trees made exclusively of equal-mass binary mergers. By definition, these trees have minimal ($s=0$) and maximal ($s=1$) tree entropy, respectively.}
\label{fig:australian_trees}
\end{figure}
The goal of this paper is to propose and analyse a parameter $s\in[0,1]$, the \textit{tree entropy}, that extends the mass ratio $r$ of single binary mergers to entire trees (and subtrees thereof), also allowing for multi-mergers and smooth mass changes between merger events. The $r$-value of single binary mergers has two extrema: $r\rightarrow0$ is the limit of a minor merger with a vanishing minor mass ($m\rightarrow0$), whereas $r=1$ denotes a major merger of identical masses ($m=M$). By analogy, we will define the $s$-value such that $s=0$ corresponds to trees without mergers, grown exclusively by smooth accretion, whereas $s=1$ denotes trees grown exclusively by an infinite hierarchy of equal-mass binary mergers. These two types of trees will thus be called \emph{minimal} and \emph{maximal} merger trees, respectively. Structurally, these two types of trees are analogous to the real trees depicted in \fig{australian_trees}. Another biological analogy of minimal versus maximal merger trees is the inheritance of mitochondrial versus nuclear DNA. The former is passed on exclusively by the mothers, whereas the latter is subject to Mendelian inheritance (50~percent\xspace maternal, 50~percent\xspace paternal)\footnote{Anecdote: An impetus for publishing this work came from the question, whether the histories of galaxies resemble rather mitochondrial or nuclear DNA, asked by Paul Schechter at the conference on `The Physics of Galaxy Scaling Relations and the Nature of Dark Matter' (Queen's University, July 2018).}.
As we shall see, the convention of $s$ for minimal and maximal trees, combined with a list of physical requirements, allows us to propose a heuristic definition for $s$, inspired by Shannon's information entropy. We will show that this definition of $s$ carries new information about the haloes, not yet contained in other standard parameters. Furthermore, $s$ also holds a significant amount of information on the physical properties of the galaxies in the haloes, at least within the physics hardwired in semi-analytic models.
\s{derivation} motivates and formally introduces the tree entropy $s$. The most important properties of $s$ are discussed and visualised using mock trees in \s{properties}. In \s{cdmsim}, we determine the tree entropies in a pure CDM universe, using the SURFS\xspace $N$-body simulation suite. Using these simulated data, we analyse the statistics of $s$, its cosmic evolution and its relation to other halo properties. \s{galaxies} focuses on the question of how the global properties of galaxies depend on the tree entropy of their host haloes. This discussion includes an information analysis that compares the information content of $s$ and other halo properties relative to the Hubble morphology. \s{conclusion} summarises the paper with an outlook to future applications.
\section{The `entropy' of merger trees}\label{s:derivation}
The aim of this section is to introduce a parameter $s$ quantifying the complexity of rooted directional trees, such as halo merger trees. We first clarify the definition of such merger trees (\ss{mergertrees}) and then explain the guiding principles for constructing the parameter $s$ (\ss{requirements}), before formally defining this parameter (\ss{definition}). The result-focused reader may skip Sections \ref{ss:mergertrees} and \ref{ss:requirements}.
\subsection{Types of tree representations}\label{ss:mergertrees}
It is important to distinguish between `haloes' and `subhaloes' and their associated `halo merger trees' and `subhalo merger trees', illustrated in \fig{tree_approximation}.
\textit{Halo} merger trees represent the assembly history of first generation haloes, i.e. gravitationally bound structures, which are \emph{not} themselves substructures of larger bound systems. All the self-bound substructure that exists within such haloes is considered a part of the haloes. When two haloes become gravitationally bound to each other they are merged into a single new halo, irrespective of whether one of them temporarily becomes a self-bound substructure within the other. In numerical simulations, haloes are normally identified using a basic friends-of-friends (FOF) algorithm \citep{Davis1985} or derivatives thereof.
By contrast, \textit{subhaloes} refer to self-bound lumps of matter, irrespective of whether they are a sub-system of a larger bound system. Self-bound substructures within a subhalo are considered different subhaloes. There can be multiple generations of subhaloes within subhaloes. In this terminology, the `haloes' (in the above sense) \emph{without} their substructure are called `central' subhaloes (also known as `main', `first generation' and `background' subhaloes), whereas their self-bound substructures are called `satellite' subhaloes (see \citealp{Han2017} for a more in-depth discussion). A plethora of algorithms exist to identify subhaloes in numerical simulations (see overview by \citealp{Onions2012}). They usually combine three-dimensional position-based FOF \citep[e.g.][]{Springel2008} or six-dimensional phase space-based FOF \citep[e.g.][]{Elahi2019} and unbinding techniques with a hierarchical algorithm to identify substructure.
Dark matter simulations can be used to construct merger trees by linking the haloes and subhaloes identified at different output times (`snapshots'). Tree-building algorithms \citep[see overview in][]{Srisawat2013} normally enforce a strict hierarchy with exactly one descendant for each halo/subhalo. In subhalo representations, satellites are normally (but not always, e.g.,\xspace \citealp{Jiang2014}) forced to remain within a fixed central. Enforcing a strict tree and subtree structure is inherently an approximation, since real haloes/subhaloes occasionally split up and/or transition to another tree/branch (e.g.,\xspace the black satellite depicted in \fig{tree_approximation}). However, in reality, only a small fraction ($\sim5$~percent\xspace for the SURFS simulation discussed in \s{cdmsim}) of the haloes and subhaloes show such anomalous behaviour. Most tree-builders also demand that each halo/subhalo has a descendant in the immediately following snapshot, which sometimes requires interpolating across a few snapshots, e.g.,\xspace when a satellite is hard to distinguish from its central subhalo (e.g.,\xspace black satellite in the second snapshot of \fig{tree_approximation}).
\begin{figure}
\includegraphics[width=0.992\columnwidth]{fig_tree_approximation}\vspace{-2mm}
\caption{Different representations of the cosmic evolution of three haloes over eight time-snapshots (top to bottom). As shown on the left, the smallest halo (black) passes through the largest one (blue), then briefly escapes into the nearby intermediate halo (red), before fully merging with the largest one. By construction, the halo tree representation (right) merges structures at their first encounter and does not allow the small halo to escape again. The subhalo representation keeps track of substructure (dashed circles), but most subhalo tree algorithms force substructure to remain linked to a fixed central (horizontal dashed lines).}
\label{fig:tree_approximation}
\end{figure}
Halo merger trees are frequently used in studying the assembly hierarchy of haloes and the evolution of the halo mass function. Examples include the seminal extended Press-Schechter (EPS) formalism \citep{Bond1991}, as well as halo growth studies using statistically generated `Monte Carlo' merger trees and $N$-body simulation-based merger trees (see \citealp{Jiang2014b} for an overview).
Subhalo trees are less adequate to study the hierarchical assembly of haloes, because two coalescing lumps normally first become a central and a satellite. By the time these two subhaloes actually merge into a single subhalo, the satellite may have mostly evaporated into the central and therefore the mass ratio of the merger is strongly dependent on how long satellites are tracked before they merge. In turn, subhalo trees are better suited than halo trees for galaxy evolution studies, in particular for semi-analytic models \citep[e.g.,\xspace][]{Benson2012,Lacey2016,Croton2016,Lagos2018b}, which benefit from tracking the haloes of galaxies that fall into a larger halo (e.g.,\xspace a group halo).
Since this work focuses on mass assembly, we will adopt halo trees rather than subhalo trees. In the halo tree representation, satellite subhaloes (dashed circles in \fig{tree_approximation}) are lost. In order to study the mass assembly history of satellites themselves, it is possible to construct `halo trees' at the generation of the satellite. For example, the `halo tree' of a second generation subhalo (i.e.,\xspace a satellite, whose immediate host is a central), is a progenitor tree made only of second generation subhaloes, where all higher generation substructure is considered part of the second generation subhaloes. This definition is analogous to the standard definition of `halo trees' for first generation haloes shown in \fig{tree_approximation}.
\subsection{Search for a new parameter}\label{ss:requirements}
Let us now turn to the task of introducing a parameter $s(t)$ quantifying the complexity of the mass assembly tree of a halo at a given time $t$. By choice, this parameter should only depend on the geometry of the tree, i.e.,\xspace the descendant-links and progenitor masses, akin to structural parameters in graph theory \citep{Balakrishnan2012}. We deliberately discard any other information, such as time scales, impact factors, angular momentum, tidal dynamics, etc. This will allow us to compare the information contained in the tree geometry to that of more common quantities, such as the mass, spin and concentration (\s{cdmsim}).
In the broadest sense, we would like $s$ to quantify the complexity of the mass assembly hierarchy in a physically `meaningful' sense. By this we understand that $s$ should carry statistical information on the halo's history and on the galaxy/galaxies that reside inside the halo. For instance, we can think of global morpho-kinematic galaxy properties (e.g.,\xspace the bulge-to-disk stellar mass ratio and the related random-to-ordered stellar velocity ratio), which are already known to depend significantly on merger history (see refs. in \s{introduction}). Expressing this idea in explicit terms is, of course, a challenging affair, which requires a mixture of heuristics and prior knowledge of the galaxy-halo connection.
In view of the approximate scale-invariance of cosmic structure \citep{Blumenthal1984,Davis1985}, it makes sense to define $s$ independently of the overall mass scale. In other words, we require $s$ to be invariant under a uniform rescaling of all the progenitor masses of a halo. Since $s$ then only depends on the dimensionless descendant-links and relative progenitor masses, $s$ must itself be a \emph{dimensionless} parameter. Without loss of generality, we can restrict $s$ to the interval $[0,1]$ and request that the two extrema should correspond to haloes that have been the least ($s=0$) and most ($s=1$) strongly reshaped by mergers, respectively. In this way, $s$ will become a natural extension of the mass ratio $r=m/M\in[0,1]$ of a single binary halo-halo merger.
\subsubsection{Self-similar trees}\label{sss:selfsimilar}
In formalising a definition of the parameter $s(t)$, self-similar merger trees without smooth accretion turn out to be a fruitful starting point. Self-similar trees are hypothetical trees, in which every halo has the same assembly history up to an overall mass scaling (\fig{self_similar_tree}). Our requirement that $s$ should not depend on the overall mass scale then implies that each halo must have the same value of $s$. Thus, self-similar trees reduce the problem of defining a time-dependent tree parameter $s(t)$ to the simpler task of defining a single \emph{time-independent} parameter $s$ for the entire tree. This parameter can only depend on the mass fractions $x_i\equiv(m_i/\sum_{i=1}^n m_i)$ of the $n$ progenitors joining at each node; and a physically meaningful definition should be independent of the mass ordering.
\begin{figure}
\includegraphics[width=0.992\columnwidth]{fig_self_similar_tree.jpg}
\caption{Illustration of a self-similar tree made of 4th order multi-mergers of fixed mass ratio 5:1:3:2. By scale-invariance, the tree parameter $s$ must take the same value anywhere in this tree.}
\label{fig:self_similar_tree}
\end{figure}
The problem of quantifying the geometry of a self-similar tree thus reduces to assigning a number $s$ to the unordered set $\{x_i\}$, where $\sum x_i=1$. To address this problem, we use the prior knowledge that major mergers (i.e. with comparable values of $x_i$) have much stronger ramifications than minor ones in terms of restructuring the dark matter \citep{Ludlow2012,Klypin2016} and transforming the dynamics and kinematics of galaxies \cite[e.g.][]{Conselice2014,Lagos2018a}. Translated to our problem, this means that more similar values $\{x_i\}$ should lead to higher values of $s$ than more diverse sets. In this qualitative sense, the problem of defining $s$ is reminiscent of quantifying the \textit{information} of a probability set $\{x_i\}$. This information (i.e.,\xspace the number of bits required to encode events that occur with probabilities $\{x_i\}$) is given by Shannon's information entropy $\mathcal{H}=-\sum x_i\ln x_i$ (with $0\ln0=0$; \citealp{Shannon1948}).
The information entropy $\mathcal{H}$ is maximal if all values $x_i$ are identical and minimal ($\mathcal{H}=0$), if all values of $x_i$ except one vanish (which corresponds to a halo `merging' with zero-mass haloes, i.e.,\xspace no merger at all). While these are desirable properties for our complexity measure, $\mathcal{H}$ has the unfavourable property of diverging as $n\rightarrow\infty$. This not only violates our requirement for $s$ to remain bound to $[0,1]$, but also works against the physical insight that a halo grown from an instantaneous merger of $n\rightarrow\infty$ progenitors is simply a smoothly collapsed halo, which should thus have as low a complexity as a minimal tree ($n=1$). This requirement could be met by using the normalised information entropy $\mathcal{H}/n$ \citep{Katok2007}, which vanishes both for $n=1$ and $n\rightarrow\infty$. However, in the present context, such a normalisation is not useful, since adding branches with vanishing mass ($x_i\approx0$) to a merger would increase $n$, but would have no physical effect on the halo. Thus $s$ must not explicitly depend on $n$ in this case. A clever way of bypassing the use of $n$ is to raise the normalised masses $x_i$ to the power of a constant $\alpha>1$ in the definition of $\mathcal{H}$. This gives rise to a \textit{generalised information entropy},
\begin{equation}\label{eq:generalizedshannon}
H = -f\sum_{i=1}^n x_i^\alpha\ln x_i,
\end{equation}
where the normalisation factor $f=(\alpha-1)e$ (with Euler's number $e$) ensures that $H$ spans the interval $[0,1]$. For any finite $\alpha>1$, $H$ vanishes for mergers of $n=1$ and $n\rightarrow\infty$ haloes (with non-zero masses); and adding empty branches ($x_i=0$) has no effect on $H$. Similar generalisations of the Shannon entropy \citep[e.g.,\xspace][]{Mathai2007} have been proposed and successfully applied in other fields, for instance in defining income equality metrics in econometrics \citep{Cowell1980,Shorrocks1980}.
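For illustration, the generalised information entropy of \eq{generalizedshannon} can be evaluated with a few lines of Python (a minimal sketch with our own variable names; the value of $\alpha$ anticipates the default choice made below):
\begin{verbatim}
import numpy as np

ALPHA = 1.0 + 1.0/np.log(2.0)   # ~2.442695 (chosen below such that n_c = 2)
F = np.e*(ALPHA - 1.0)          # ~3.921652, normalises H to the interval [0,1]

def generalised_entropy(x):
    """Generalised information entropy H of normalised masses x (sum(x) = 1)."""
    x = np.asarray(x, dtype=float)
    x = x[x > 0]                # convention 0*ln(0) = 0
    return -F*np.sum(x**ALPHA*np.log(x))

print(generalised_entropy([0.5, 0.5]))       # 1.0 : equal-mass binary merger
print(generalised_entropy([1/3, 1/3, 1/3]))  # ~0.88: equal-mass triple merger
print(generalised_entropy([0.9, 0.1]))       # ~0.35: minor binary merger
\end{verbatim}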
Since $H$ has the properties that we would expect from an astrophysically motivated complexity measure of self-similar merger trees, we heuristically define that the parameter $s$ of a self-similar tree is identical to its generalised information entropy $H$. By virtue of this definition, $s$ is called the \textit{tree entropy} for the remainder of this paper.
\eq{generalizedshannon} requires a choice for $\alpha>1$. To understand the role of $\alpha$, it helps to consider a self-similar tree of equal-mass mergers, i.e.,\xspace the $n$ merging branches each have identical mass fractions $x_i=n^{-1}$. It is straightforward to show (\aa{equalmass}) that $H(n)$ vanishes for $n=1$ and $n=\infty$ and presents a single maximum ($H=1$) at $n_{\rm c}=e^{1/(\alpha-1)}$.
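For completeness, a one-line sketch of this result (treating $n$ as a continuous variable): for $x_i\equiv n^{-1}$, \eq{generalizedshannon} reduces to
\[
H(n) = f\,n^{1-\alpha}\ln n, \qquad \frac{\d H}{\d n} = f\,n^{-\alpha}\left[(1-\alpha)\ln n+1\right],
\]
so the derivative vanishes at $\ln n_{\rm c}=1/(\alpha-1)$, where $H(n_{\rm c})=f\,e^{-1}/(\alpha-1)=1$ by virtue of $f=(\alpha-1)e$.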
In this work, we choose $n_{\rm c}=2$, giving $\alpha=1+1/\ln2\approx2.442695$ and $f=e/\ln2\approx3.921652$. This choice follows from our original guideline (\s{introduction}) that maximal merger trees, made of an infinite regression of equal-mass \emph{binary} ($n=2$) mergers, have maximum tree entropy ($s=1$). Physically, this choice can be motivated by the fact that multi-mergers ($n>2$) are likely to arise from a coherent assembly, i.e.,\xspace a correlated inflow of haloes on a time scale smaller than a single merger event. Such coherent assembly is expected to result in more ordered structure (e.g.,\xspace in terms of halo and galaxy spin) than a major binary merger. For instance, simulations of a merger of six equal-mass spiral galaxies in a group \citep{Weil1996} found that such mergers tend to produce remnants with more rotation than typically seen in dry binary mergers \citep[e.g.,\xspace][]{Cox2006}. Hence, with respect to the galaxy kinematics, equal-mass binary mergers are more transformational than multi-mergers.
The optimal choice of $n_{\rm c}$, in the sense of maximising the information pertaining to a particular halo or galaxy property, may depend on the context.
Choosing $n_{\rm c}=2$ is a pragmatic way to fix the ideas, but alternatives will be discussed in \ss{optimisation}. Incidentally, we note that multi-mergers are subdominant at any redshift in the $\Lambda$CDM\xspace simulation analysed in \s{cdmsim}; and observations of galaxy-galaxy interactions find that the incidence of multi-mergers relative to binaries lies at the percent level \citep{Darg2011}.
\subsubsection{Extension to arbitrary trees}\label{sss:nonselfsimilar}
Next, we need to extend the time-independent global definition $s=H$ for the entropy of self-similar trees (\sss{selfsimilar}) to a \emph{time-dependent} definition of $s(t)$ in arbitrary merger trees. This definition requires a local equation governing the evolution of $s$. Let us first establish a list of physically motivated mathematical conditions for this equation and then develop an Ansatz satisfying these conditions.
Being a measure of the mass assembly hierarchy, $s(t)$ should only change if the mass of a halo changes (condition~1), be it via discrete mergers or smooth accretion. In the case of mergers, the transition from the entropies $\{s_i\}$ of $n$ merging branches to the new entropy $s_{\rm new}$ of their descendant can only depend on $\{s_i\}$ and on the mass fractions $\{x_i\}$. Following the reasoning of \sss{selfsimilar}, $s_{\rm new}$ must be a continuous and symmetric function of $\{(x_i,s_i)\}$ (condition~2), bound to $[0,1]$ (condition~3). For consistency with self-similar trees, $s_{\rm new}$ must asymptote to $H$ for a succession of self-similar mergers (i.e.,\xspace mergers with identical mass fractions $\{x_i\}$, where each merger takes the output entropy of the previous merger as its input). In particular, if all $n$ input entropies $s_i$ are identical ($s_i\equiv s$), the output entropy $s_{\rm new}$ should be contained in the interval $[s,H]$ (condition~4).
The rule for evolving $s(t)$ under smooth accretion can be determined as the limiting case where diffuse material of mass $\Delta m$ is added via $p\rightarrow\infty$ minor $n$-mergers ($n\geq2$), each adding $(n-1)$ infinitesimal masses $\delta m=\Delta m/p/(n-1)$ to the main branch. In turn, physical considerations for the evolution of $s(t)$ under smooth accretion will constrain the definition of the merger equation. One such consideration is that the evolution of $s(t)$ under smooth accretion cannot depend on the value of $n$, since it is physically impossible to distinguish binary from multi-mergers in the case of smooth accretion. Similarly, $s(t)$ must \textit{not} depend on the tree entropy $s_{\rm d}$ of the smoothly accreted diffuse material. This is because the macroscopic state of a halo/galaxy cannot depend on whether each smoothly accreted infinitesimal part was itself formed by mergers or not. The only sensible choice is that the smoothly evolving $s(t)$ is independent of $n$ and $s_{\rm d}$ (condition~5) and asymptotically tends to zero as $\Delta m\rightarrow\infty$ (condition~6), consistent with the vanishing entropy of minimal trees. This argument extends to the case where a halo is formed entirely by a smooth collapse. In particular, if a halo of mass $m$ is formed through the simultaneous coalescence of $n$ progenitors of mass $m/n$, the final tree entropy must be independent of the progenitor entropies and tend to zero as $n\rightarrow\infty$ (condition~7).
All seven conditions above are satisfied by the \textit{Ansatz},
\begin{equation}\label{eq:weightansatz}
s_{\rm new} = H+w\sum_{i=1}^n x_i^2(s_i-H),
\end{equation}
where $H$ depends on $\{x_i\}$ via \eq{generalizedshannon} and $w$ is a function, temporarily taken to be unity. \eq{weightansatz} manifestly satisfies the conditions 1--4. In particular, the difference terms $(s_i-H)$ ensure that $s_{\rm new}$ is always contained in $[s,H]$ if $s_i\equiv s$ (condition~4). It is necessary to weight these difference terms by $x_i^q$ with some $q>0$ to ensure that adding haloes with zero mass does not change the result. Condition~7 requires that the sum in \eq{weightansatz} vanishes as $n\rightarrow\infty$, which is satisfied iff $q>1$, making $q=2$ the simplest choice. As shown in \aa{smooth}, this choice (but not $q=1$) also satisfies conditions~5 and~6.
The weight $w$ in \eq{weightansatz} specifies how well $s_{\rm new}$ `remembers' the input entropies $s_i$. A careful choice of $w$ as a function of $\{x_i\}$ allows us to preserve the conditions~1--7, while further specifying two crucial quantities: a rate parameter $\beta\in[0,1]$ determining how fast the most destructive mergers (at $H\approx1$) build up tree entropy and a rate parameter $\gamma\in[0,1]$ determining how fast smooth accretion (at $H\approx0$) removes tree entropy. Explicitly, we demand that a single $H=1$ merger (that is an equal-mass binary merger for our choice of $\alpha$) with zero progenitor entropy results in a tree entropy $\beta$ (condition~8); and that accreting an infinitesimal mass $\d m$ to a halo of mass $m$ and tree entropy $s$ results in an entropy change $(\d s/s)=-\gamma\,(\d m/m)$ (condition~9). In the `no-merger limit', where all but one of the progenitor haloes have vanishing mass ($x_i=1$, $x_{j\neq i}=0$; thus $H=0$), we require that $s_{\rm new}=s_i$. Hence, $w$ must be unity in this case. A straightforward \textit{Ansatz} that satisfies this requirement, while offering two free parameters to accommodate conditions~8 and~9, is the second-order polynomial
\begin{equation}\label{eq:wansatz}
w = 1+aH+bH^2
\end{equation}
with real constants $a$ and $b$. A short derivation (\aa{smooth}) shows that conditions~8 and~9 translate to $a=(2-\gamma)/f$ and $b=n_{\rm c}(1-\beta)-1-a$. The value of $s_{\rm new}$ remains bound to $[0,1]$ as long as $\gamma\geq0$.
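To sketch the first of these relations explicitly: for an equal-mass binary merger of two zero-entropy progenitors ($x_1=x_2=1/2$, hence $H=1$ for our choice of $\alpha$), \eq{weightansatz} gives
\[
s_{\rm new} = 1-\frac{w(1)}{2} = 1-\frac{1+a+b}{2},
\]
and demanding $s_{\rm new}=\beta$ (condition~8) yields $b=1-2\beta-a=n_{\rm c}(1-\beta)-1-a$ for $n_{\rm c}=2$; the relation $a=(2-\gamma)/f$ follows from the analogous treatment of condition~9 in \aa{smooth}.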
\eq{weightansatz} guarantees that an infinite regression of self-similar mergers asymptotes to $s_{\rm new}\rightarrow H$, which is maximal ($H=1$) if the mergers are equal-mass mergers of $n=n_{\rm c}$ haloes. However, \eq{weightansatz} does not guarantee that a \emph{single} merger of $n$ identical haloes ($x_i\equiv n^{-1}$, $s_i\equiv s$) maximises $s_{\rm new}$ if $n=n_{\rm c}$. For this to be true, $\beta$ must lie above a threshold that depends on $\alpha$ and $\gamma$. A lower bound, which works for all $\gamma\in[0,1]$, reads $\beta\geq\tanh(1.165/(\alpha-1))$. In our case ($n_{\rm c}=2$), the exact limit is about $\beta>0.65$.
Given our choice $n_{\rm c}=2$, a value of $\beta=1$ would mean that equal-mass binary mergers always produce the maximum entropy ($s_{\rm new}=H=1$), irrespective of their input entropies. Thus, such mergers would completely erase the memory of all past mergers. This choice could be justified on the basis that equal-mass binary mergers dramatically redistribute the particle orbits in a `violent relaxation' process \citep{Lynden-Bell1967} and largely reshape disk galaxies at the halo centres, typically resulting in dispersion-rich featureless spheroids \citep{Toomre1977}. However, detailed simulation-based studies of merging identical disk galaxies (triggered by equal-mass halo-halo mergers) show that the morphological and kinematic memory is not entirely lost. For instance, dissipation-free mergers of two pure stellar disks result in spheroidal remnants with slightly less central density than analogous binary mergers of stellar disks with bulges \citep{Hernquist1993}, and multiple mergers might be needed to fine-tune the fundamental plane \citep{Taranu2015}. Likewise, single dissipational mergers of gas-rich disks might only partially remove the rotation structure \citep{Cox2006} and multiple such mergers seem necessary to explain some of the slowly rotating massive elliptical galaxies \citep{Dubinski1998}. To allow for some memory in equal-mass binary mergers, we adopt $\beta=3/4$ in the definition of the tree entropy.
To choose a value for $\gamma$, we recall that early-type galaxies, likely associated with high-entropy trees (\ss{morphology}), are rarely seen to transform back into late-type ones \citep{Emsellem2007}, despite the predicted continued growth of their haloes (by a factor $\sim3$ since redshift $z=1$; e.g.,\xspace \citealp{Elahi2018}). This can be attributed to feedback from active galactic nuclei, as well as to the fact that the specific angular momentum of haloes increases with cosmic time \citep{Wang2018} and thus late accretion settles less efficiently onto a galaxy in a `biased collapse' scenario \citep{Dutton2012}. Thus, it makes sense to demand that the relative decrease in tree entropy is significantly smaller than the relative increase in mass during smooth accretion, i.e.,\xspace $\gamma\ll1$. Here we choose $\gamma=1/3$, such that \textit{halving} the tree entropy requires a smooth mass increase by a factor $2^3$, which corresponds to \textit{doubling} the halo radius.
Our choices for $\alpha$, $\beta$ and $\gamma$ are, of course, subjective and alternatives will be discussed in \ss{optimisation}.
\subsection{Concise definition of the tree entropy}\label{ss:definition}
The developments of \ss{requirements} have led to an astrophysically motivated definition of a dimensionless parameter $s\in[0,1]$ -- the \emph{tree entropy} -- which quantifies the complexity of hierarchical mass assembly. The minimum entropy ($s=0$) stands for minimal trees without mergers, whereas the maximum entropy ($s=1$) represents maximal trees with a fractal history of equal-mass binary mergers.
We choose the initial entropy of a halo to be $s_{\rm init}=0$. This is a natural choice, as it is equivalent to saying that the earliest haloes were assembled from diffuse matter, which is true for many potential CDM and warm DM particles (e.g.,\xspace neutralinos; \citealp{Diemand2005}). In practice, the smallest haloes at the free-streaming limit of about an Earth mass \citep{Angulo2010} are rarely resolved in cosmological simulations (but see \citealp{Wang2019b}). Hence, the first haloes to appear in such simulations should arguably have non-zero tree entropy. However, as we will show in \ss{binary} and \fig{pythagoras}, the choice of $s_{\rm init}$ is not critical for the results, since the final entropy of a tree does not depend significantly on the initial values for well-resolved merger histories ($\gtrsim$ 10 leaves).
Given an initially vanishing tree entropy, the tree entropy at all later times is calculated successively from previous time steps, using two mutually consistent rules, one governing merger events and one governing the time between successive mergers. In either rule, the tree entropy only changes if the halo mass changes.
\subsubsection{Tree entropy change during mergers}
If $n$ haloes of mass $m_i\geq0$ and tree entropy $s_i\in[0,1]$ ($i=1,...,n$) merge simultaneously into a single halo, the tree entropy $s_{\rm new}\in[0,1]$ of this new halo is calculated by successively evaluating three equations:
\begin{subequations}
\label{eq:definition}
\begin{align}
&x_i = m_i/\sum_{i=1}^n m_i~~\text{(normalised masses)},\label{eq:defxi}\\
&H = -f\sum_{i=1}^n x_i^\alpha\ln x_i~~\text{(generalised information)},\label{eq:defH}\\
&s_{\rm new} = H+(1+aH+bH^2)\sum_{i=1}^n x_i^2(s_i-H)\label{eq:defsn},
\end{align}
\end{subequations}
with real constants $f=e\cdot(\alpha-1)$, $a=(2-\gamma)/f$, $b=n_{\rm c}(1-\beta)-1-a$. The default values in this paper are $f=3.921652$, $a=0.424991$, $b=-0.924991$. These constants depend on the three parameters $\alpha$, $\beta$ and $\gamma$, which regulate the dependence of the tree entropy on the type of mergers as detailed in \ss{requirements}. Briefly,
\begin{itemize}
\item $\alpha>1$ (here 2.442695) specifies the relative importance of different orders of mergers (e.g. binary versus triple mergers). The lower the value of $\alpha$, the more entropy is produced by higher-order mergers relative to lower-order ones. Our default choice $\alpha=1+1/\ln2\approx2.442695$ is such that binary mergers produce the most entropy and that maximal merger trees have maximum entropy ($s=1$).
\item $\beta\in[0,1]$ (here 3/4) regulates the rate of entropy change in the most destructive single merger (here an equal-mass binary merger). The higher $\beta$, the closer the output entropy of a single major merger lies to the final entropy $H$ of an infinite cascade of self-similar mergers.
\item $\gamma\in[0,1]$ (here 1/3) regulates the rate of tree entropy loss in the smooth accretion limit: the higher $\gamma$, the faster the entropy reduction under smooth accretion.
\end{itemize}
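For reference, the following self-contained Python sketch (our own naming conventions) implements Equations~(4)\xspace with the default constants and reproduces, e.g.,\xspace the value $s_{\rm new}=\beta=3/4$ for an equal-mass binary merger of two zero-entropy haloes:
\begin{verbatim}
import numpy as np

ALPHA = 1.0 + 1.0/np.log(2.0)     # n_c = 2
BETA, GAMMA = 0.75, 1.0/3.0
F = np.e*(ALPHA - 1.0)            # ~3.921652
A = (2.0 - GAMMA)/F               # ~0.424991
B = 2.0*(1.0 - BETA) - 1.0 - A    # ~-0.924991

def merge(masses, entropies):
    """Tree entropy of the descendant of n simultaneously merging haloes."""
    m = np.asarray(masses, dtype=float)
    s = np.asarray(entropies, dtype=float)
    x = m/m.sum()                             # normalised masses
    xp = x[x > 0]                             # convention 0*ln(0) = 0
    H = -F*np.sum(xp**ALPHA*np.log(xp))       # generalised information entropy
    w = 1.0 + A*H + B*H*H                     # weight function
    return H + w*np.sum(x**2*(s - H))         # new tree entropy

print(merge([1, 1], [0, 0]))   # 0.75 = beta (equal-mass binary merger)
print(merge([3, 1], [0, 0]))   # ~0.369 (mass ratio r = 1/3)
print(merge([3, 1], [1, 1]))   # ~0.872 (r = 1/3, unit input entropies)
\end{verbatim}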
\subsubsection{Tree entropy evolution between mergers}\label{sss:smoothevolution}
The rule for evolving the tree entropy under smooth accretion of a mass $\Delta m$ cannot be chosen freely, but must be derived from these equations. Let us subdivide the smooth accretion into a very large number $p$ of successive mergers, each adding an equal amount of mass of entropy $s_{\rm a}$. If each merger is thought of as a binary merger ($n=2$), then each joining halo has a small mass $\delta m=\Delta m/p$. For mergers of arbitrary order ($n\geq2$), this mass is $\delta m=\Delta m/p/(n-1)$. Through a short calculation it can be shown that in the limit of $p\rightarrow\infty$ and irrespective of $n$, a $p$-fold repetition of Equations~(4)\xspace (while increasing the main mass by $\delta m$ at each iteration) leads to an entropy
\begin{equation}\label{eq:smooth}
s_{\rm new} = s\left(\frac{m}{m+\Delta m}\right)^\gamma,
\end{equation}
where $s$ is the tree entropy of the halo of mass $m$ before the smooth accretion. The independence of \eq{smooth} on $s_{\rm a}$ and $n$ is a crucial requirement for the entropy to be a meaningful measure, since it is conceptually pointless to assign a tree entropy to infinitesimal mass elements and/or to distinguish between binary or higher-order mergers for such elements.
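Given condition~9, the result can also be read off directly: writing $\d s/s=-\gamma\,\d m/m$ for each infinitesimal accretion step and integrating from $m$ to $m+\Delta m$ gives
\[
\ln\frac{s_{\rm new}}{s} = -\gamma\,\ln\frac{m+\Delta m}{m}
\quad\Longrightarrow\quad
s_{\rm new} = s\left(\frac{m}{m+\Delta m}\right)^{\gamma},
\]
in agreement with \eq{smooth}.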
\section{Illustration of tree entropy}\label{s:properties}
The purpose of this section is to illustrate the evolution of the tree entropy $s$, using idealised examples.
\subsection{Tree entropy of single mergers}\label{ss:basic}
Equations~(4)\xspace governing the tree entropy change during mergers satisfy a list of meta-properties that are necessary in order for $s$ to be a physically meaningful parameter:
\begin{itemize}
\item \textit{Markovianity:} The tree entropy of a halo only depends on the preceding time step. Specifically, the new $s$ of a halo immediately after a merger only depends on the masses $m_i$ and tree entropies $s_i$ ($i=1,...,n$) of its $n$ progenitors just before the merger.
\item \textit{Scale invariance:} Only the ratios, not the absolute values, of the progenitor masses $m_i$ are relevant for the entropy of the descendant. Thus only normalised progenitor masses $x_i\equiv(m_i/\sum m_i)$ appear in Equations~(4)\xspace.
\item \textit{Permutation symmetry:} The tree entropy of a descendant halo does not depend on the ordering of its simultaneously merging progenitors.
\item \textit{No-merger limit:} A merger of $n+k$ haloes with $n$ non-vanishing masses ($m_i>0$, $\forall\,i\leq n$) and $k$ vanishing masses ($m_i=0$, $\forall\,i>n$) results in the same entropy as a merger of the $n$ non-vanishing masses. In particular, if a halo of mass $m_1>0$ and entropy $s_1$ `merges' with some vanishing masses $m_i=0$ ($\forall\,i>1$), the entropy of the resulting halo is $s_1$.
\item \textit{Continuity:} Vanishing changes in the progenitor masses $\{m_i\}$ and entropies $\{s_i\}$ imply asymptotically vanishing changes in the entropies of the descendant.
\end{itemize}
\begin{figure}
\includegraphics[width=0.992\columnwidth]{fig_overview.pdf}
\caption{Evolution of the tree entropy $s$ in idealised mergers. In each row, one property of the merger is varied: the binary mass ratio $r$ (top); the number $n$ of equal-mass haloes (middle); the input tree entropy $s$ (bottom) -- see explanations in \ss{basic}. The bottom row uses a 4-merger with a fixed mass ratio of 1:3:2:6.}
\label{fig:overview}
\end{figure}
To illustrate the qualitative evolution of $s$ during a merger event, \fig{overview} shows some contrived cases. In each row, only one aspect of the merger changes, thus illustrating three important properties:
\begin{itemize}
\item \textit{Mass ratio scaling:} A merger of two haloes of zero tree entropy generates an entropy that is a monotonically increasing function of the mass ratio $r=m/M$. Equal-mass binary mergers generate the largest entropy ($s_{\rm new}=\beta=3/4$) of any single merger event.
\item \textit{Multi-merger scaling:} If $n\geq2$ haloes of identical mass and tree entropy merge, the outcome entropy $s_{\rm new}$ is a decreasing function of $n$ and asymptotes to zero as $n\rightarrow\infty$ (see \ss{smooth} for a visualisation of this limit).
\item \textit{Entropy linearity:} If $n$ haloes of identical entropies $s$ merge, the outcome entropy $s_{\rm new}$ is a monotonically, in fact linearly, increasing function of $s$.
\end{itemize}
A key addition to the last point is that $s$ and $s_{\rm new}$ are not proportional to one another and will generally cross over at some point. Whether $s_{\rm new}$ is smaller or larger than $s$ depends on $s$ and the generalised information entropy $H(\{x_i\})$ associated with the mass ratios. If $n$ haloes of identical tree entropy $s$ merge, the entropy
\begin{itemize}
\item increases towards $H$, if $s<H$;
\item remains unchanged, if $s=H$;
\item decreases towards $H$, if $s>H$.
\end{itemize}
Thus, every merger moves the tree entropy towards $H$ (=0.79 in the bottom row of \fig{overview}). As a consequence, a cascade of self-similar mergers asymptotically approaches $s=H$. Therefore, the tree entropy gradually `forgets' its past and hence the choice of the initial entropy $s_{\rm init}$ becomes irrelevant for sufficiently resolved trees (see illustration in \ss{binary}).
\subsection{Smoothly grown haloes}\label{ss:smooth}
By definition, haloes grown smoothly without mergers must have vanishing tree entropy, irrespective of whether the halo was grown through smooth accretion over some time (`minimal trees', see \s{introduction}) or through an instantaneous collapse of diffuse material. Functional continuity then implies that haloes grown almost smoothly, i.e.,\xspace via a large number $n$ of minor accretion events, must have asymptotically vanishing entropy as $n\rightarrow\infty$. The case of $n=50$ is illustrated in \fig{smooth}. The nearly smooth accretion (left) and nearly smooth collapse (right) result in asymptotically vanishing tree entropies (as $n\rightarrow\infty$), which also become asymptotically independent of the progenitor entropy, consistent with the impossibility to assign a tree entropy to diffuse material.
\begin{figure}
\includegraphics[width=0.992\columnwidth]{fig_smooth.jpg}
\caption{Nearly smooth growth scenarios: the trees on the left grew from $n=50$ very minor accretion events, the ones on the right from a single $n$-merger. In both cases the resulting tree entropy $s_{\rm new}$ becomes independent of the input entropy $s$ and tends to zero as $n\rightarrow\infty$, as required for consistency with smooth accretion and smooth collapse. (Colour scale as in \fig{overview}.)}
\label{fig:smooth}
\end{figure}
\subsection{Detailed analysis of binary mergers}\label{ss:binary}
Most mergers in halo merger trees are binary mergers. In fact, in some EPS-based tree-generating algorithms all mergers are binary \citep[e.g.,\xspace][]{Lacey1993,Moreno2008}. In this case, the mass fractions $\{x_i\}$ of the merging haloes can be reduced to a single mass fraction $x$, such that $x_1=x$ and $x_2=1-x$, related to the mass ratio, $r=m/M$, via $x=1/(1+r)$. If the merging haloes have equal tree entropy $s$, the outcome $s_{\rm new}$ only depends on $x$ and $s$.
\begin{figure}
\includegraphics[width=0.992\columnwidth]{fig_3d.png}
\caption{Outcome entropy $s_{\rm new}$ for a binary merger of two haloes with normalised masses $x$ and $(1-x)$ and identical entropy $s$. Red shows the region where the merger increases the entropy ($s_{\rm new}>s$); blue shows the region where the entropy decreases ($s_{\rm new}<s$). The thick solid line is the critical line ($s_{\rm new}=s=H$). This line gives the values of $s$ towards which self-similar binary merger trees (\fig{pythagoras}) asymptote. The dashed vertical line reaches up to $s_{\rm new}=\beta=3/4$, the output tree entropy of an equal-mass binary merger with zero initial entropy.}
\label{fig:3d}
\end{figure}
\begin{table}
\centering
\begin{tabularx}{\columnwidth}{@{\extracolsep{\fill}}ccccc}
\hline \\ [-2ex]
~~$r$ & $x$ & $1-x$ & $s_{\rm new}$ & $H$ \\ [0.5ex]
\hline \\ [-2ex]
~~ 0.000 & 1.000 & 0.000 & $0.0000+1.0000s$ & 0.000 \\
~~ 0.050 & 0.952 & 0.048 & $0.0086+0.9513s$ & 0.177 \\
~~ 0.100 & 0.909 & 0.091 & $0.0424+0.8687s$ & 0.323 \\
~~ 0.150 & 0.870 & 0.130 & $0.0988+0.7778s$ & 0.445 \\
~~ 0.176 & 0.850 & 0.150 & $0.1343+0.7314s$ & 0.500 \\
~~ 0.200 & 0.833 & 0.167 & $0.1691+0.6905s$ & 0.546 \\
~~ 0.250 & 0.800 & 0.200 & $0.2450+0.6118s$ & 0.631 \\
~~ 0.300 & 0.769 & 0.231 & $0.3206+0.5434s$ & 0.702 \\
~~ 0.333 & 0.750 & 0.250 & $0.3688+0.5034s$ & 0.743 \\
~~ 0.400 & 0.714 & 0.286 & $0.4569+0.4361s$ & 0.810 \\
~~ 0.500 & 0.667 & 0.333 & $0.5645+0.3621s$ & 0.885 \\
~~ 0.600 & 0.625 & 0.375 & $0.6428+0.3127s$ & 0.935 \\
~~ 0.800 & 0.556 & 0.444 & $0.7283+0.2623s$ & 0.987 \\
~~ 1.000 & 0.500 & 0.500 & $0.7500+0.2500s$ & 1.000 \\
[0.5ex] \hline
\end{tabularx}
\caption{Tabulated outcome entropy $s_{\rm new}$ of a single binary merger event of two haloes of identical entropies $s$, as a function of their mass ratio $r=(1-x)/x$. The generalised information entropy $H$ is the stationary value of $s$, where $s_{\rm new}=s$. This is also the asymptotic entropy attained in an infinite regression of self-similar binary mergers with fixed mass ratios $r$.}
\label{tab:numerical}
\end{table}
The function $s_{\rm new}(x,s)$ is visualised in \fig{3d}. Its symmetry about $x=0.5$ reflects the permutation symmetry $x_1\leftrightarrow x_2$. At any fixed $x$, $s_{\rm new}$ is a linear function of $s$. Thus, the sheet in \fig{3d} is made of straight lines, which connect the two extremes $s_{\rm new}(x,s=0)$ and $s_{\rm new}(x,s=1)$ (thin solid curves) at fixed $x$. Some values of $s_{\rm new}(x,s)$ and $H(x)$ have been listed in \tab{numerical}. This table shows, for instance, that $H=1/2$ if $r\approx0.176$. Hence any halo with entropy $s>1/2$ must have had at least one merger with $r>0.176$ in its assembly history. Likewise, haloes with entropy $s>0.743$ must have experienced at least one major merger with $r>1/3$. (The converse is not true.)
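These thresholds follow directly from evaluating $H$ at the quoted mass ratios, as in the following minimal Python sketch (repeating the entropy of \eq{defH} for a binary merger for convenience):
\begin{verbatim}
import numpy as np

ALPHA, F = 1.0 + 1.0/np.log(2.0), np.e/np.log(2.0)

def H_binary(r):
    """Generalised information entropy of a binary merger with mass ratio r."""
    x = 1.0/(1.0 + r)
    return -F*(x**ALPHA*np.log(x) + (1 - x)**ALPHA*np.log(1 - x))

print(H_binary(0.176))   # ~0.500: s > 1/2 requires a past merger with r > 0.176
print(H_binary(1/3))     # ~0.743: s > 0.743 requires a past major merger
\end{verbatim}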
\begin{figure}
\includegraphics[width=0.475\columnwidth]{fig_pythagoras_tree_1_0}\hspace{2.5mm}
\includegraphics[width=0.475\columnwidth]{fig_pythagoras_tree_3_0}\\
\includegraphics[width=0.475\columnwidth]{fig_pythagoras_tree_10_0}\hspace{2.5mm}
\includegraphics[width=0.475\columnwidth]{fig_pythagoras_tree_3_1}
\caption{Tree entropy evolution in quasi-fractal binary merger trees, represented as Pythagoras trees where masses are proportional to square areas \citep{Bosman1957}. The trees have different mass ratios $r=m/M$, ranging from 1/1 (top left, a `maximal tree'), to 1/3 (right) and 1/10 (bottom left). The outcome entropy $s$ tends towards the generalised information entropy $H$, which only depends on $r$ (\eq{defH}, values in \tab{numerical}). The final entropy is independent of the initial entropies at the leaves of the tree, as seen in the two trees on the right where $s_{\rm init}=0$ (top) and $s_{\rm init}=1$ (bottom). (Colour scale as in \fig{overview}.)}
\label{fig:pythagoras}
\end{figure}
\fig{3d} distinguishes between the regions $s_{\rm new}>s$ (red) and $s_{\rm new}<s$ (blue). The dividing line between these regions (thick solid curve) is the geometric location where $s_{\rm new}(x,s)=s$. According to \eq{defsn}, this line corresponds to $s=H(x)=-f[x^\alpha\ln(x)+(1-x)^\alpha\ln(1-x)]$. The only other case where $s_{\rm new}(x,s)=s$ is the no-merger limit, where $x=0$ or $x=1$. Hence, the function $s_{\rm new}(x,s)$ is equal to $s$ if and only if $x=0$, $x=1$, or $s=H(x)$. This explains the shape of $s_{\rm new}(x,s)$ with one oscillation as a function of $x$ if $s=0$ and two oscillations if $s=1$.
As with any $n$-merger, a sequence of binary mergers with identical mass ratios gradually evolves the tree entropy towards $H(x)$. \fig{pythagoras} shows this asymptotic convergence in the case of quasi-fractal binary trees, i.e.,\xspace trees that are self-similar down to progenitors of a minimal mass (here $10^{-2}$ times the final mass). The two trees on the right show explicitly that the final tree entropy does \textit{not} depend on the initial entropy at the leaves.
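This convergence can be made explicit. For a binary merger of two haloes with equal entropy $s_k$ and fixed mass fractions $x$ and $1-x$, \eq{defsn} reduces to the affine map
\[
s_{k+1} = H + c\,(s_k-H), \qquad c = w(H)\left[x^2+(1-x)^2\right],
\]
whose unique fixed point is $H$. The contraction factor $c$ is the coefficient of $s$ listed in \tab{numerical} and is smaller than unity for any $r>0$, so the memory of the initial entropy decays geometrically with the number of self-similar mergers.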
\section{Tree entropies in $\Lambda$CDM\xspace}\label{s:cdmsim}
Let us now investigate the tree entropy of realistic halo merger trees in the concordance $\Lambda$ Cold Dark Matter ($\Lambda$CDM\xspace) cosmology. We will first introduce the $N$-body simulation and post-processing techniques used to build the merger trees and their tree entropies (Sections \ref{ss:surfs} and \ref{ss:computeentropy}) and then discuss the statistics and evolution of the entropy (following subsections). A pure CDM universe without baryonic physics is assumed in this section. Baryons and galaxies will be discussed in \s{galaxies}.
\subsection{Simulated merger trees}\label{ss:surfs}
Our analysis focuses on a run from the SURFS\xspace $N$-body simulation suite presented by \cite{Elahi2018}, which uses an up-to-date Planck cosmology \citep[4th column in Table 4 of][]{PlanckCollaboration2016} with characteristic densities $\Omega_{\Lambda}=0.6879$ (vacuum energy), $\Omega_{\rm m}=0.3121$ (all matter), $\Omega_{\rm b}=0.0491$ (baryonic matter, derived), scalar spectral index $n_s=0.9653$, power spectrum normalisation $\sigma_8=0.8150$ and Hubble parameter $H_0=h\cdot100~\rm{km\,s^{-1}Mpc^{-1}}$ with dimensionless Hubble parameter $h=0.6751$.
The SURFS\xspace run considered here is ``L210N1536'' (see Table~1 in \citealp{Elahi2018}), a purely gravitational $N$-body simulation of 1536$^3$ particles in a cubic box of side-length $210\, h^{-1}{\rm Mpc}$ (comoving) with periodic boundary conditions. The implied particle mass is $2.21\cdot10^8h^{-1}{\rm M}_{\odot}$. The initial conditions at redshift $z=99$ were generated using the second order Lagrangian perturbation theory scheme (2LTP; \citealp{Crocce2012}) with a transfer function generated by CAMB \citep{Lewis2000}. Subsequently the particles were evolved to $z=0$ using a memory lean version of GADGET-2 \citep{Springel2001,Springel2005c}.
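As a quick consistency check (not part of the SURFS\xspace pipeline), the quoted particle mass follows directly from the box size, particle number and matter density; the sketch below assumes the standard critical density of $2.775\times10^{11}\,h^2\,{\rm M}_{\odot}\,{\rm Mpc}^{-3}$.
\begin{verbatim}
rho_crit = 2.775e11      # critical density [h^2 Msun / Mpc^3], standard value
Omega_m  = 0.3121
L_box    = 210.0         # comoving box side [Mpc/h]
N        = 1536

m_p = Omega_m*rho_crit*L_box**3/N**3                   # ~2.21e8 Msun/h
print("particle mass   : %.3g Msun/h" % m_p)
print("20-particle cut : %.3g Msun/h" % (20*m_p))      # ~4.42e9 Msun/h
print("1e11 Msun/h halo: %.0f particles" % (1e11/m_p)) # ~452
\end{verbatim}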
Particle positions and velocities were stored at 200 discrete time steps (snapshots), between $z=24$ and $z=0$ in evenly spaced intervals of logarithmic growth factor. This high cadence ensures that adjacent snapshots are separated by less than the free-fall time of virialised overdensities, which is necessary to generate merger trees that accurately capture the evolution of dark matter haloes. Central and satellite subhaloes were identified using VELOCIraptor \citep{Elahi2019}, which first identifies haloes using a 3D FOF algorithm in configuration space \citep{Davis1985} and subsequently identifies substructures using a 6D phase-space FOF algorithm with an unbinding algorithm. Subhaloes have a minimum of 20 particles, which implies a minimum mass of $M_{\rm min}=4.42\cdot10^9h^{-1}{\rm M}_{\odot}$.
The subhaloes are then linked across snapshots using the particle correlator code TreeFrog \citep{Elahi2019b}, which can interpolate across several snapshots if necessary. In this work, we used a customised version of the TreeFrog outputs, in which each subhalo has exactly one descendant and satellite subhaloes stay associated with their first central. Since a characterisation of a halo's mass assembly history requires `halo merger trees' rather than `subhalo merger trees' (see \fig{tree_approximation}), we convert the subhalo trees into halo trees by assuming that a halo merges into another halo as soon as it becomes a satellite in the subhalo tree representation. As soon as this happens the two masses are added up to form a single halo and potential `back splashes', where a satellite exits and reenters its central subhalo, are ignored. The mass of haloes is defined as the total mass of their gravitationally bound particles, including bound and associated substructure. To compute the entropies of satellite subhaloes, we construct their effective `halo trees' at the level of satellites, as explained at the end of \ss{mergertrees}.
\subsection{Tree entropy calculation in simulated trees}\label{ss:computeentropy}
All new haloes (i.e.,\xspace without progenitors) in the simulated merger trees are initialised with tree entropy $s_{\rm init}=0$. Later tree entropies are then computed snapshot-by-snapshot by applying two computations to each halo. First, if the halo has more than one progenitor, we apply Equations~(4)\xspace to compute the entropy change caused by the merger. Second, we compute the entropy change of the halo due to smooth accretion. The latter is accomplished using \eq{smooth} with a small modification that we discuss now.
In general, the mass $m_0$ of a halo differs from the total mass of its $n\geq1$ progenitors by a non-zero amount $\Delta m=m_0-\sum_{i=1}^n m_i$. There can be several reasons for $\Delta m$ to be non-zero. First, haloes can accrete diffuse material that is not resolved into progenitor haloes. Some of this material might be truly diffuse \citep{Genel2010}, whereas some might be bound in small haloes that lie below the resolution limit of the simulation (less than 20 particles). In fact, the smallest (and first) haloes to form in $\Lambda$CDM\xspace have masses of around an Earth mass, corresponding to the small-scale thermal cut-off in the CDM power spectrum \citep{Green2004,Angulo2010}. Since \eq{smooth} for handling smooth accretion also corresponds to the limit of many small mergers, this equation is suitable in either case.
Second, $\Delta m$ can be non-zero due to \textit{numerical} reasons \citep{Contreras2017}: any (sub)halo-finder, including VELOCIraptor, is subject to pseudo-random oscillations in the number of particles that are assigned to a halo at each time step. The continuously changing positions, velocities and energies make it virtually impossible to avoid such oscillations. Such oscillations can be smoothed out by applying \eq{smooth} irrespective of the sign of $\Delta m$. In this way, a spurious numerical mass gain between snapshot $1$ and $2$ of $\Delta m_{1\rightarrow2}>0$, followed by an equal mass loss $\Delta m_{2\rightarrow3}=-\Delta m_{1\rightarrow2}$ results in no net entropy change.
Third, haloes can be stripped of their material, implying a physical mass loss $\Delta m<0$. If blindly applying \eq{smooth} in this case, the tree entropy can increase indefinitely to unphysical values $s>1$. This is particularly problematic for satellite subhaloes, which can be stripped entirely, meaning that all these subhaloes would eventually reach $s>1$, even if the galaxies at their centres remain regular.
A practical way for correctly handling $\Delta m\neq0$ in all the above scenarios is to apply \eq{smooth} for all haloes, irrespective of the sign of $\Delta m$, except for satellites whose mass has dropped below the mass they had when they first became a satellite of another halo. To avoid $s>1$ in some very rare ($\ll 0.1$ percent) cases of massively stripped field haloes, we artificially impose $s_{\rm new}\leq1$ in \eq{smooth}.
\ap{code} shows how the tree entropy computation is implemented programmatically using pseudo-code.
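In brief, the decision logic of this subsection can be summarised by the following Python-style sketch. The callables \texttt{merger\_entropy} and \texttt{smooth\_entropy} are placeholders for Equations~(4)\xspace and \eq{smooth} (not repeated here), and the halo attributes (\texttt{progenitors}, \texttt{m}, \texttt{s}, \texttt{is\_satellite}, \texttt{m\_at\_infall}) are illustrative assumptions rather than the actual data structures of our code.
\begin{verbatim}
def update_tree_entropy(halo, merger_entropy, smooth_entropy):
    # merger_entropy(masses, entropies): placeholder for Equations (4)
    # smooth_entropy(s, m_prog, m_now) : placeholder for the smooth-accretion rule
    progs = halo.progenitors
    if not progs:                        # new halo (leaf): s_init = 0
        return 0.0

    # (1) merger step, only if there is more than one progenitor
    s = progs[0].s
    if len(progs) > 1:
        s = merger_entropy([p.m for p in progs], [p.s for p in progs])

    # (2) smooth accretion / stripping step, applied irrespective of the
    #     sign of dm, except for satellites below their infall mass
    m_prog = sum(p.m for p in progs)
    stripped_satellite = halo.is_satellite and halo.m < halo.m_at_infall
    if not stripped_satellite:
        s = smooth_entropy(s, m_prog, halo.m)

    return min(s, 1.0)                   # guard against rare cases with s > 1
\end{verbatim}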
A key question that needs addressing pertains to numerical convergence: how far from the mass resolution limit of a simulation must a halo be for its tree entropy to be numerically converged? Or, conversely, in which haloes can we trust the entropy value given the resolution of the simulation? This question is addressed in detail in \aa{massconvergence}. Irrespective of the way smooth accretion is handled (even if ignored), it turns out that the entropy values are sufficiently converged (within $\lesssim0.02$, that is $\lesssim10$~percent\xspace) in haloes of mass $M\approx10^{11}h^{-1}{\rm M}_{\odot}$. At this mass, the smallest resolved (20 particles) progenitors are about 20-times less massive. For the whole discussion that follows we therefore only consider haloes with $M\geq10^{11}h^{-1}{\rm M}_{\odot}$, corresponding to $>452$ particles. This cut is similar to that of \cite{Correa2015b}, who found that a minimum of $\sim300$ particles is needed for merger trees to be sufficiently resolved for analysing the main branch MAH.
The time-steps between the snapshots in SURFS\xspace were deliberately chosen to be below the free-fall time of the haloes to allow for a robust identification of merger trees. The convergence study of \cite{Benson2012b} found that this cadence is indeed sufficient for merger trees to capture the mass growth of haloes and galaxies with a mass error below 5~percent\xspace; in fact, they found this criterion satisfied at 128 snapshots between $z=20$ and $z=0$, whereas SURFS\xspace has an even higher cadence of 200 snapshots between $z=24$ and $z=0$. Throughout this work, we can therefore expect $s$ to be well-converged relative to the time resolution. In \aa{timeconvergence}, we explicitly confirm this assumption by showing that using only every second snapshot changes the tree entropy values by a marginal amount.
\begin{figure}
\includegraphics[width=0.992\columnwidth]{fig_entropy_stats}
\caption{Distribution of the tree entropies of the simulated haloes in SURFS\xspace at $z=0$ for haloes with $M\geq10^{11}h^{-1}{\rm M}_{\odot}$. Dotted vertical lines denote the entropies of self-similar binary trees of mass ratio 1/10 ($s\approx0.323$) and 1/3 ($s\approx0.743$).}
\label{fig:entropy_stats}
\end{figure}
\begin{figure*}
\includegraphics[width=1.005\columnwidth]{fig_entropy_nbranches}\hspace{5mm}
\includegraphics[width=1.005\columnwidth]{fig_surfs_example.jpg}
\caption{Left: Number density of haloes (at $z=0$) as a function of tree entropy $s$ and number of leaves $n_l$. Densities (per units of $s$ and $\log n_l$) are shown by shades of grey on a logarithmic scale from white (lowest) to black (highest). The curve indicates the mean $\bar s$ as a function of $n_l$. Right: Illustration of three halo merger trees extracted from SURFS\xspace, colour-coded as in \fig{overview}. At each merger node, the progenitors are horizontally distributed according to their mass to highlight that major mergers (e.g.,\xspace the last merger in the bottom right tree) have no obvious main progenitor. Curved branches have been chosen in the bottom trees for purely graphical reasons. Coloured circles with numbers indicate the location of these trees in the left panel.}
\label{fig:entropy_nbranches}
\end{figure*}
\subsection{Global entropy statistics in $\Lambda$CDM\xspace}\label{ss:basicstats}
The tree entropies of haloes ($M\geq10^{11}h^{-1}{\rm M}_{\odot}$) at $z=0$ span the full range from $s=0$ (minimal merger trees) to $s=1$ (maximal trees). The full distribution of entropies shown in \fig{entropy_stats} is approximated by a $\beta$-distribution, a typical distribution of bounded variables resulting from a cascade of random processes (halo mergers in this case).
The distribution of tree entropies peaks near $s=0.4$ (mode $0.35$, median $0.41$, mean $0.43$), approximately matching self-similar trees made of binary minor mergers of mass ratio 1/7 (\tab{numerical}). In this rough sense, we thus expect that median galaxy properties can be modelled using self-similar merger trees with 1/7 ratios. By contrast, only 3.9~percent\xspace of the trees fall into the regime ($s\geq0.743$) corresponding to self-similar binary trees made of major mergers ($r>1/3$).
The small excess of haloes with $s\approx0$ (\fig{entropy_stats}) is a numerical artefact, caused by about $0.7$~percent\xspace of haloes whose main branch was improperly assigned to a neighbouring halo close to $z=0$. This can happen, for instance, when satellite subhaloes escape from their parent entirely to restart a new life as an independent halo \citep[see, e.g.,\xspace][]{Ludlow2009}. In the halo tree representation (\fig{tree_approximation}), the assembly history of these haloes is lost and they seem to just `pop up' shortly before or at $z=0$. Typically, these haloes only have one or two leaves, a low entropy ($s<0.05$) and a young age. Given the low incidence of these odd cases, they do not impinge on the statistical results of this paper.
For completeness, \fig{entropy_stats} also shows the tree entropy distribution of satellite subhaloes at $z=0$, accounting for 8.4~percent\xspace of the subhaloes (the remainder being centrals). The distribution is similar to that of the centrals, as expected from self-similarity arguments. On average, the tree entropy of satellites is about 1.1-times higher, likely due to the fact that their late-time growth has been halted/reduced around the time they became a satellite (typically $z\sim1$). This is in quantitative agreement with the cosmic evolution of entropies of (1st generation) haloes (see \ss{evolution}). It is also possible that numerical resolution leads to increased entropies for satellites: more massive satellites are more easily tracked inside their parent than less massive ones, hence skewing the mass ratio of satellite-satellite mergers towards major mergers. In this work, we will not further discuss the tree entropies of satellite subhaloes.
\fig{entropy_nbranches} (left) expands on the relation between tree entropy and leaf number. Trees with only one leaf (minimal trees) have, by construction, zero entropy. Trees with two leaves have had exactly one merger, which means that their entropy can be as high as $\beta=3/4$, bar some rare cases where smooth mass losses lead to slightly higher values. The distribution in \fig{entropy_nbranches} (left) reveals that the entropy distribution remains roughly constant with a mean around 0.43--0.45 (green line), for all trees with more than $\sim10$ leaves. The slight increase in $\bar s$ with number of leaves is a consequence of the weak $s$--mass relation explained in \ss{haloproperties}.
\fig{entropy_nbranches} (right) shows three merger trees extracted from the SURFS\xspace simulation. The typical scenario (example 1 in \fig{entropy_nbranches}; green circle) consists of trees in which the main branch experienced a few (1--3) major mergers, but the total mass contribution of these mergers lies below that of minor mergers and smooth accretion. Such trees typically settle at entropies near $s=0.4$. Trees with exceptionally high entropies (example 2; red circle) normally consist of two or more similarly massive sub-trees, which coalesce in a series of major mergers near $z=0$. In turn, trees with very low entropy (example 3; blue circle) resemble a pine tree, characterised by a strong main branch without significant major mergers in its recent history.
\subsection{Cosmic evolution of tree entropies}\label{ss:evolution}
\fig{entropy_evolution} shows the cosmic evolution of the entropy distribution of \fig{entropy_stats}. The observed increase of the mode of $s$ with redshift $z$ is numerically robust: it persists upon selecting subsamples of massive haloes (e.g.,\xspace $\geq10^{12}h^{-1}{\rm M}_{\odot}$) and/or with many leaves ($>10$), for which the entropies are expected to be very well converged (\ap{convergence}).
At first sight, it might seem natural that the entropy distribution evolves with redshift in view of the well-established fact that merger rates (at fixed mass, per unit of time) increase strongly with redshift, approximately as $(1+z)^2$ \citep{Stewart2009}. However, the entropy parameter is independent of the overall merger rate, as it only depends on the geometric structure, \textit{not} on the time scale of a merger tree. As long as the relative masses and the chronological ordering of the merging branches remain unchanged, the entropy of a tree remains unchanged, too.
The cosmic evolution towards higher $s$ at higher $z$ can only be explained by an increase in major mergers relative to minor ones. More precisely, the evolution of $s$ shows that merger rates must increase more significantly with redshift for mergers with higher values of $r=m/M$. In fact, in our simulation, the fraction $f_{r>1/3}$ of major binary mergers ($r>1/3$), relative to all binary mergers, varies from 2~percent\xspace at $z=0$ to 7~percent\xspace at $z=5$, approximately as $f_{r>1/3}\approx 0.02(1+z)^{0.7}$. If major mergers are defined as $r>1/10$, these numbers change to 6~percent\xspace at $z=0$ and 25~percent\xspace at $z=5$, following $f_{r>1/10}\approx 0.06(1+z)^{0.8}$. This evolution is in qualitative agreement with earlier analyses of $N$-body simulations finding that the strong overall evolution of merger rates shows a weak secondary dependence on $r$, such that major mergers become relatively more prevalent with increasing $z$ \citep{Genel2009,Fakhouri2010}.
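For convenience, these two fitted fractions can be evaluated as follows (a trivial sketch reproducing the numbers quoted above):
\begin{verbatim}
import numpy as np
z = np.array([0.0, 5.0])
print(0.02*(1+z)**0.7)  # fraction of binary mergers with r > 1/3:  ~[0.02, 0.07]
print(0.06*(1+z)**0.8)  # fraction of binary mergers with r > 1/10: ~[0.06, 0.25]
\end{verbatim}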
The cosmic evolution of $s$ (or $r$) can be understood in terms of the evolving characteristic mass $M_*$. This mass, which is related to the break in the halo mass function, is defined as the mass at which the rms $\sigma(M,z)$ of the smoothed density perturbation field $\delta(\mathbf{x},t)$ matches the critical overdensity $\delta_{\rm c}\approx1.69$ for spherical collapse, i.e.,\xspace $\sigma(M_*,z)\equiv\delta_{\rm c}$. The characteristic mass defined in this way is directly computable from the matter power spectrum. For masses below $M_*$, haloes form efficiently following power-law statistics, whereas for masses above $M_*$, the number density falls off exponentially \citep{Schechter1976}. In a scale-free Einstein-de Sitter universe, $M_*$ is the only driver of scale-dependence \citep{Smith2003,Angulo2017}, apart from the \textit{much} smaller free-streaming scale \citep{Angulo2010}, and hence the evolution of merger tree structures should be explainable by the evolution of $M_*(z)$. Explicitly, we expect the evolution of $s$ to depend only upon the so-called peak amplitude $\nu\equiv\delta_{\rm c}/\sigma(M,z)$. Equivalently, $s$ expressed as a function of $M/M_*(z)$ should be independent of $z$. \fig{nu} demonstrates that this is indeed the case.
\begin{figure}
\includegraphics[width=0.992\columnwidth]{fig_evolution}
\caption{Cosmic evolution of the tree entropy distribution of all simulated haloes ($M\geq10^{11}h^{-1}{\rm M}_{\odot}$). The shift of the mode is qualitatively consistent with the established weak cosmic decline in the major-to-minor merger ratio in $\Lambda$CDM\xspace.}
\label{fig:entropy_evolution}
\end{figure}
Interestingly, the mean of $s$ changes rapidly as a function of halo mass near $M=M_*$, but tends to become roughly mass independent as $M\ll M_*$ and $M\gg M_*$. This behaviour can be approximated by the analytic fit
\begin{equation}\label{eq:nufit}
\langle s\rangle=0.45+0.025\,{\rm erf}(x/0.6)
\end{equation}
where $x=\log_{10}(M/M_*)$. This function is shown as the black curve in \fig{nu}. The standard deviation of the $s$-distribution is about $0.16$ with no significant dependence on $x$ (dashed lines in \fig{nu}). On an intuitive level, the increase of $\langle s\rangle$ (and $\langle r\rangle$) with $x$ can be seen as a consequence of the known relation between $x$ and clustering bias, predicted by the theory of peaks in Gaussian random fields \citep{Bardeen1986} and well-confirmed observationally. We reserve a more quantitative exploration of the connection between $s$ and clustering for future work.
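For illustration, the fit of \eq{nufit} can be evaluated as in the sketch below; the $0.6$\,dex transition scale in the error-function argument and the value of $M_*$ used here should both be treated as illustrative assumptions.
\begin{verbatim}
import numpy as np
from scipy.special import erf

def mean_s(M, M_star):
    x = np.log10(M/M_star)
    return 0.45 + 0.025*erf(x/0.6)   # transition scale of 0.6 dex assumed

M_star = 1e13                        # illustrative value; M_* evolves with z
for M in (1e11, 1e13, 1e15):
    print("M = %.0e  <s> = %.3f" % (M, mean_s(M, M_star)))
\end{verbatim}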
The fact that the tree entropy responds to the evolution in the characteristic mass, but not to the strong evolution in the overall merger rate (about an order of magnitude from $z=0$ to $z=2$), is an interesting feature of purely structural estimators such as $s$.
\begin{figure}
\includegraphics[width=0.992\columnwidth]{fig_nu}
\caption{Mean tree entropy plotted as a function of normalised halo mass at different redshifts. Vertical lines show the standard deviation of the $s$-distribution in each bin. The statistical uncertainty on the mean is shown by the thicker vertical lines. The black solid curve is the analytic fit of \eq{nufit}.}
\label{fig:nu}
\end{figure}
\begin{figure*}
\includegraphics[width=0.99\textwidth]{fig_parameter_overview}
\caption{Distributions of five parameters for SURFS\xspace haloes with $M\geq10^{11}h^{-1}{\rm M}_{\odot}$. The diagonal panels show the one-dimensional distributions, fitted with analytic models (black lines, see \ss{haloproperties}). The lower panels (grey) show the two-dimensional \textit{number} density of haloes; whereas the upper panels (blue) show the \textit{mass} density. Contours contain 25/50/75~percent\xspace of the counts or mass, respectively. The redshift axis is linear in $\log(1+z_{\rm form})$. The $r_s$ values are standard (bottom) and mass-weighted (top) Spearman rank coefficients.}
\label{fig:parameter_overview}
\end{figure*}
\subsection{Tree entropy relative to other halo properties}\label{ss:haloproperties}
For the new parameter $s$ to be a useful measure, it needs to add some `information' not already captured by simpler halo parameters. It is therefore interesting to ask whether/how $s$ relates to established quantities that are commonly used to characterise haloes. We chose to limit this discussion to four key quantities: the virial mass $M$, the spin parameter $\lambda$ (as defined by \citealp{Bullock2001}), the NFW concentration parameter $c$ (defined by \citealp{Navarro1996}; estimated by VELOCIraptor from the maximum circular velocity) and the formation redshift $z_{\rm form}$. The latter is defined as the mass-weighted redshift at which material joins the main branch of the tree, accounting for both mergers and smooth accretion.
The probability distributions and covariances of these quantities and the tree entropy $s$ at $z=0$ are shown in \fig{parameter_overview}. The black solid lines plotted on top of the red histograms show analytical fits to the one-dimensional distributions. The $r_{\rm s}$ values in the covariance plots are the Spearman rank correlation coefficients. We deliberately chose the Spearman rank correlation over the standard Pearson coefficient in order to quantify the correlations independently of the scales; e.g.,\xspace the Spearman rank correlation is invariant to logging an axis, such as $z_{\rm form}\mapsto\log(1+z_{\rm form})$.
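This invariance under monotonic re-mappings is easily verified numerically; the following sketch uses toy data (not our simulation outputs) and \texttt{scipy}:
\begin{verbatim}
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
z_form = rng.lognormal(mean=0.3, sigma=0.3, size=10000)  # toy formation redshifts
s = np.clip(0.6 - 0.2*np.log10(1 + z_form)
            + 0.1*rng.normal(size=z_form.size), 0, 1)    # toy tree entropies

r1, _ = spearmanr(s, z_form)
r2, _ = spearmanr(s, np.log10(1 + z_form))  # monotonic re-mapping of the axis
print(round(r1, 6), round(r2, 6))           # identical, as ranks are unchanged
\end{verbatim}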
The global statistics of $M$, $\lambda$, $c$ and $z_{\rm form}$ agree with well-known behaviour in $\Lambda$CDM\xspace:
\begin{itemize}
\item The mass distribution follows a power-law \citep[e.g.,\xspace][]{Murray2013}. The typical exponential truncation around a cut-off mass of $10^{14}{\rm M}_{\odot}$ is present in our data \citep[Fig.~4 of][]{Elahi2018}, but not visible on a linear density scale.
\item The spin parameter distribution is well-approximated by a normal distribution in $\log(\lambda)$ with no significant correlation with $M$, as has been established in various dark matter-only simulations \citep[e.g.,\xspace][]{Knebe2008}. Note that for spherical haloes, $\lambda$ cannot exceed $1/(4\sqrt{2})\approx0.177$; hence the small excess of high-spin objects comes from deformed (and often unrelaxed, interacting) systems.
\item The formation redshift of most haloes falls inside $z_{\rm form}=1$--$2$ (look-back time $\approx8$--$11\rm~Gyr$). The values are consistent with those quoted by \cite{Elahi2018}, upon accounting for the fact that Elahi's $z_{\rm form}$ are fixed when the haloes have just 25\% of their final mass, leading to slightly higher values. In line with other pure dark matter simulations \citep[e.g.,\xspace][]{Wechsler2002,Li2008,Zehavi2018}, our formation times closely follow a normal distribution in $\log(1+z)$. Further, $z_{\rm form}$ shows a negative correlation with $M$, i.e.,\xspace more massive haloes assemble later, as expected in a hierarchical formation scenario (but see \citeauthor{Li2008} for how this depends on the definition of $z_{\rm form}$).
\item The concentration parameter $c$ is approximately normally distributed, with an excess at $c\lesssim4$ attributed to unrelaxed, predominantly massive haloes, whose three-dimensional density is poorly described by a spherical NFW profile. As first suggested by \cite{Navarro1997}, most of the scatter in $c$ is explained by differences in collapse times, which are inherited through the assembly history. The earlier a halo is assembled, the higher its concentration, and vice versa; hence the strong positive correlation between $c$ and $z_{\rm form}$. The $M$--$z_{\rm form}$ relation then explains the negative correlation between $c$ and $M$, whose mode agrees with the detailed studies by \cite{Dutton2014} and \cite{Ludlow2016}. In fact, the full $z_{\rm form}$--$c$ relation can be explained in one stroke upon realising that the halo density profile reflects the evolving density of the universe weighted by an appropriately defined formation time \citep[e.g.,\xspace][]{Zhao2009,Correa2015a,Ludlow2014,Ludlow2016}.
\end{itemize}
Overall, the tree entropy $s$ only correlates significantly with $z_{\rm form}$, with a Spearman rank coefficient of $r_{\rm s}=-0.40$ (or $-0.36$ if mass-weighted). This negative correlation naturally arises from the fact that a high tree entropy $s$ normally means that the halo has had a major merger in its recent past. This implies that the main branch grew significantly at a relatively low $z$, hence making the mass-weighted main branch redshift $z_{\rm form}$ relatively small. Therefore, high values of $s$ often correspond to low values of $z_{\rm form}$ and vice versa.
We performed a principal component analysis in the space of $(M,\lambda,c,z_{\rm form},s)$ to check if $s$ exhibits any significant correlations with $M$, $\lambda$ and/or $c$ beyond those already explained via the $s$--$z_{\rm form}$ relation. However, adding other parameters to $z_{\rm form}$ only improves the constraints on $s$ by an insignificant amount ($\sim10$~percent\xspace).
\section{Connection to galaxies}\label{s:galaxies}
The hierarchical assembly of dark matter haloes naturally drives the mergers of galaxies in the haloes. Theoretical predictions of merger rates, e.g.,\xspace in the ILLUSTRIS and EAGLE simulations \citep{RodriguezGomez2017,Lagos2018a} and semi-analytic modelling \citep{Henriques2015}, are indeed in good agreement with `observed' merger rates deduced from galaxy pair counts \citep{Mundy2017}. This agreement motivates a deeper exploration of the connection between the merger trees and observable galaxy properties.
A full exploration of the galaxy-halo connection lies beyond the scope of this work. However, to illustrate the potential use of $s$ this section discusses a specific example of the information that $s$ carries on galaxies. We limit the discussion to mock galaxies generated by a semi-analytic model, run in post-processing of the CDM trees analysed in \s{cdmsim}; and we restrict the analysis to the stellar bulge-to-total mass ratio at $z=0$ of bulges grown from mergers, reserving a more detailed analysis to future research.
Throughout this section $s$ characterises the merger trees of the CDM haloes, \emph{not} of the galaxies, which merge at different times and with different mass ratios. This is a deliberate choice to investigate the galaxy-halo connection.
\subsection{Galaxy evolution model}\label{ss:sam}
Semi-analytic models (SAMs) of galaxy evolution rely on the assumption that dark matter-dominated structure formation is strictly separable from baryonic processes \citep{Roukema1997,Kauffmann1999}. Exploiting this assumption, halo merger trees are generated first, using purely gravitational modelling or simulations. Mock galaxies are then added in post-processing, using analytic prescriptions for evolving galaxies along the tree branches and nodes. The galaxies are normally approximated as simple axially symmetric systems made of a small number of baryonic components (e.g.,\xspace hot gas, cold gas, stars, black holes). The details of the galaxy models and the physics used to evolve them over time depend on the particular SAM employed.
Here, we use mock galaxies constructed using SHARK\xspace \citep{Lagos2018b}, a free and flexible software\footnote{Source code available at: https://github.com/ICRAR/shark} framework for running SAMs of nearly any flavour (i.e.,\xspace custom galaxy models and physics). We use the default\footnote{SHARK\xspace version 1.2.1, timestamp 2019-04-08T10:46:14.} SAM implementation in SHARK\xspace as detailed by \citeauthor{Lagos2018b}. In this model, galaxies are composed of sub-systems, characterised by their mass, angular momentum and metallicity. These components are split into discs (stellar/atomic/molecular) and spherical systems: hot/cold halo gas, stellar/atomic/molecular bulge, central supermassive black hole (SMBH), gas ejected from the halo. The galaxies live in the CDM haloes that form and merge as dictated by the fixed input merger trees. The main baryonic processes that govern the formation/evolution of the galaxies are (1) the accretion of gas onto haloes, modelled via the DM accretion rate; (2) the shock heating and radiative cooling of gas inside haloes onto galactic discs under conservation of specific angular momentum; (3) the formation of molecules and stars in galaxy discs; (4) the suppression of gas cooling by UV background radiation; (5) the chemical enrichment of stars and gas; (6) feedback from stellar winds and supernovae; (7) the growth of SMBHs via accretion of gas and other SMBHs; (8) heating by feedback from SMBHs (`AGN feedback'); (9) galaxy-galaxy mergers driven by dynamical friction inside common DM haloes which can trigger starbursts and the formation and growth of bulges; and (10) the collapse of globally unstable discs that also leads to starbursts and bulges.
SHARK\xspace was run on the subhalo merger trees of the SURFS\xspace L210N1536 simulation (details in \ss{surfs}). Like most SAMs, SHARK\xspace produces galaxies in three types of CDM environments: \textit{central galaxies} sit at the centre of central subhaloes; \textit{satellite galaxies} sit at the centre of satellite subhaloes, and \textit{orphan galaxies} have lost their halo, for instance through the disruption of a satellite subhalo. The following analysis is performed for the 289,937 central galaxies with subhalo masses $M\geq10^{11}h^{-1}{\rm M}_{\odot}$. They represent 86~percent\xspace of all galaxies in this mass range. Note that including the satellites (6~percent\xspace) only changes the numerical results at the percent level.
\begin{figure}
\includegraphics[width=0.992\columnwidth]{fig_entropy_bt_bimodality}
\caption{Distribution of galaxies by tree entropy $s$ and stellar $B/T$ mass ratio. The density has been weighted by halo mass in order to make the rarer massive objects with high $B/T$ values visible. Contours contain 25 (thickest), 50 and 75~percent\xspace (thinnest) of the total halo mass. To emphasise the association of the bimodal distribution with the blue/red galaxy bimodality, different colours have been used for upper and lower contours.}
\label{fig:entropy_bimodalidy}
\end{figure}
\subsection{Tree entropy and morphology}\label{ss:morphology}
Let us now see how the tree entropy $s$ impinges on the mock galaxies. We limit this discussion to evolved galaxies at $z=0$, specifically to their bulge-to-total ($B/T$) mass ratio, already known to depend on the assembly history \citep[e.g.,\xspace][]{Brook2016}. Bulge formation is a complex affair with many facets: dynamic scaling relations and kinematics support a distinction between dispersion-supported `classical bulges' and rotationally supported `pseudo bulges' \citep{Kormendy2004}, which can sometimes coexist \citep{Erwin2014}. Early views that the former are produced by mergers and the latter in-situ evolved into a much more nuanced picture, where classical bulges can also form in-situ \citep{Perez2013}, mergers are an essential driver of pseudo bulges \citep{Guedes2013,Okamoto2013,Gargiulo2019} and mergers transform classical to pseudo bulges \citep{Saha2015}.
SHARK\xspace distinguishes between bulge stars formed in mergers and bulge stars formed in-situ via global disk instabilities. The instabilities are triggered if the rotational support falls below a classical \citep{Ostriker1973} stability criterion, but currently SHARK\xspace does not account for tidal instabilities known to precede mergers and which can also form bulges \citep{Gargiulo2019}. Therefore, instability driven bulges probably do not depend as much on the assembly history as they do in reality. To circumvent this discussion, our $B/T$ ratios only include stars formed in mergers and in mergers of progenitor galaxies. These merger-driven bulges are likely a combination of classical and pseudo-bulges, but this distinction is irrelevant here.
\fig{entropy_bimodalidy} shows the halo mass distribution of the mock galaxies in the $(s,B/T)$-plane. The reason for weighting the galaxies by their halo mass is purely graphical: it allows us to visualise low $B/T$ values (found in many low mass galaxies) and high $B/T$ values (found in fewer high mass galaxies) with roughly equal weight. In this scaling the classical bimodality between late-type galaxies (LTG, low $B/T$) and early-type galaxies (ETG, high $B/T$) is very apparent.
There is a clear positive correlation between $s$ and $B/T$. In fact, most (60~percent\xspace) of the clear LTGs ($B/T<0.2$) have low entropy ($s<0.4$), while most (74~percent\xspace) of the clear ETGs ($B/T>0.8$) have high entropy ($s>1/3$). This statement remains true even if instability driven bulge mass were included (64~percent\xspace and 60~percent\xspace, respectively). Clearly, the tree entropy encodes important information on the properties of the mock galaxies.
\subsection{Information analysis}\label{ss:information}
We now rigorously quantify the information of $s$ about $B/T$ using the normalised \emph{mutual information} \citep{Strehl2003}. This information measure, bounded between 0 and 1, quantifies the information a variable $X$ carries about a variable $Y$ (and vice versa). It is formally defined as
\begin{equation}\label{eq:mi}
\mathcal{I}_{XY} = \frac{1}{\sqrt{\mathcal{H}_X\mathcal{H}_Y}}\iint\!\!\rho_{XY}(x,y)\ln\!\left[\frac{\rho_{XY}(x,y)}{\rho_X(x)\rho_Y(y)} \right]\!\d x \,\d y,
\end{equation}
where $\rho_{XY}(x,y)$ is the joint 2D probability density, $\rho_X(x)$ and $\rho_Y(y)$ are the individual 1D probability densities, and $\mathcal{H}_X$ and $\mathcal{H}_Y$ are the standard information entropies,
\begin{equation}
\mathcal{H}_X = -\int \rho_X(x)\ln\rho_X(x)\,\d x.
\end{equation}
Importantly, the mutual information is invariant under non-linear bijective transformations of the variables. For instance, substituting $B/T$ for the so-called numerical Hubble stage $\mathcal{T}$ \citep{deVaucouleurs1977} using the approximation $\mathcal{T}=10-16\sqrt{B/T}$ \citep[Eq. (18) in][]{Obreschkow2009b} or substituting $z$ for $\log_{10}(1+z)$ would have no effect on $\mathcal{I}$.
To put the information $\mathcal{I}_{s,B/T}$ into perspective, we compute $\mathcal{I}_{X,B/T}$ for all five halo properties discussed in \fig{parameter_overview}: the mass $M$, the spin parameter $\lambda$ (as used by SHARK\xspace), the concentration $c$ (as output by VELOCIraptor), the formation redshift $z_{\rm form}$ (defined in \ss{haloproperties}) and the tree entropy $s$. Of these properties, $M$ has the highest mutual information with $B/T$, as one might expect from the well-known mass-morphology relation of galaxies \citep{Calvi2012,Kelvin2014}. For the other four properties, it therefore makes sense to only evaluate the \emph{conditional} mutual information $\tilde{\mathcal{I}}_{X,B/T|M}$ that is not already explained through correlations with $M$. This conditional information is given by $\tilde{\mathcal{I}}_{XY|Z}=\int\mathcal{I}_{XY|Z=z}\,\rho_Z(z)\d z$, where $\mathcal{I}_{XY|Z}$ is the mutual information $\mathcal{I}_{XY}$ at a fixed value of a third property $Z$ (here taken as $Z=M$ or rather $Z=\log M$ for ease of computation). The mutual information values are listed in \tab{information}.
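For reference, a simple histogram-based estimator of the normalised mutual information in \eq{mi} can be written as below; this minimal sketch ignores the binning choices and the bootstrap error estimates used for \tab{information} and \fig{mutual_information}.
\begin{verbatim}
import numpy as np

def normalised_mutual_information(x, y, bins=30):
    # histogram estimate of I_XY = MI / sqrt(H_X * H_Y)
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    outer = np.outer(px, py)
    m = pxy > 0
    mi = np.sum(pxy[m] * np.log(pxy[m] / outer[m]))
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return mi / np.sqrt(hx * hy)
\end{verbatim}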
\begin{table}
\centering
\begin{tabularx}{\columnwidth}{@{\extracolsep{\fill}}lc}
\hline \\ [-2ex]
Halo property & Information [percent] \\ [0.5ex]
\hline \\ [-2ex]
Halo mass $M$ & $7.04\pm0.03$ \\
Tree entropy $s$ & $6.08\pm0.11$ \\
Formation redshift $z_{\rm form}$ & $3.93\pm0.11$ \\
Spin parameter $\lambda$ & $1.31\pm0.11$ \\
Concentration $c$ & $1.01\pm0.11$ \\
[0.5ex]
\hline
\end{tabularx}
\caption{Mutual information between different halo properties and the stellar $B/T$ ratio at $z=0$. For all properties other than $M$, the value is the conditional mutual information that excludes the contribution already explained by correlations with $M$. Properties have been ordered by decreasing information.}
\label{tab:information}
\end{table}
\begin{figure}
\includegraphics[width=0.992\columnwidth]{fig_mutual_information}
\caption{Mutual information, at fixed halo mass $M$, between four halo properties and the stellar $B/T$ ratio at $z=0$. The information measure was evaluated in mass bins of 0.1\,dex and smoothed with a 0.5\,dex top-hat filter. Shaded bands show symmetric 1$\sigma$ confidence intervals determined via bootstrapping.}
\label{fig:mutual_information}
\end{figure}
Interestingly, the tree entropy $s$ provides the highest amount of independent information on $B/T$, in addition to the information contained in $M$. This statement applies at all values of $M$, as shown in \fig{mutual_information}.
It is hard to find a halo parameter, assembly related or not, that carries more information on $B/T$ at fixed $M$ than the tree entropy. In fact, in the exhaustive list of halo parameters output by VELOCIraptor \citep[see Table~4 of][]{Elahi2019}, none performs better than $s$. Furthermore, we explicitly searched for other parameters quantifying \emph{single} accretion events: the mass ratio of the last merger, the mass ratio of the last merger above a threshold mass ratio (0.01/0.03/0.1/0.3), the time since the last major merger (defined as $r>1/10$ and $r>1/3$) and the total mass accreted since the last major merger. None of these quantities holds as much information on $B/T$ as $s$.
In summary, the tree entropy $s$ contains a significant amount of information on the $B/T$ ratio. Of course, $B/T$ was chosen deliberately to make this point, since $B/T$ depends on the merger history by construction in SHARK\xspace. However, the amount of information in $s$ compared to other halo parameters is nonetheless significant. This result demonstrates that galaxies depend on the full structure of the merger trees, not just on isolated merger/accretion events, such as the most recent merger. The exception is the case where the most recent merger was a nearly equal-mass binary merger, since choosing $\beta=1$ (which overwrites all past tree entropies in the case of equal-mass binary mergers) preserves the high information of $s$ on $B/T$ as shown in the following section.
\subsection{Optimisation of free parameters}\label{ss:optimisation}
The tree entropy $s$, as defined in \ss{definition}, is but one way of quantifying merger trees. The question remains whether other definitions satisfying the physical requirements of \ss{requirements} would yield more information, e.g.,\xspace on $B/T$. An exhaustive answer to this question is elusive, but we can at least test other values for the free parameters $\alpha$, $\beta$ and $\gamma$ in the definition of $s$ (Equations~\ref{eq:definition}). As a reminder, $\alpha$ regulates the relative importance of different multi-mergers (e.g.,\xspace binary versus triple mergers), whereas $\beta$ and $\gamma$ regulate the importance of major and very minor mergers (smooth accretion), respectively (see Section~\ref{ss:requirements}).
\fig{optimization} shows the mutual information $\mathcal{I}_{s,B/T}$ as a function of $\alpha$, $\beta$ and $\gamma$ in two projections. Darker shades denote higher information. The contours enclose the domain, in which $\mathcal{I}_{s,B/T}$ is consistent with its maximum within the statistical noise. Conveniently, our default choice for $\alpha$, $\beta$ and $\gamma$ (orange crosses) approximately maximises the information, justifying this choice with hindsight -- at least for the particular case of $B/T$ at $z=0$ in SHARK\xspace.
Note that $\alpha$ corresponds to the number of branches $n_{\rm c}=e^{1/(\alpha-1)}$ ($\Leftrightarrow \alpha=1+1/\ln n_{\rm c}$) of the equal-mass merger that generates the highest entropy. Thus, the fact that the default value $\alpha=1+1/\ln 2\approx2.442695$ is a good fit implies that equal-mass binary mergers ($n=2$) are indeed the `worst' mergers in SHARK\xspace, in the sense that they cause more massive bulges than other equal-mass multi-mergers.
\begin{figure}
\includegraphics[width=0.992\columnwidth]{fig_optimization}
\caption{Mutual information $\mathcal{I}_{s,B/T}$ at $z=0$ as a function of $\alpha$, $\beta$ and $\gamma$ in two slices at fixed $\gamma=1/3$ (left) and $\beta=3/4$ (right). White regions denote zero mutual information (i.e.,\xspace $\mathcal{I}$ is at its noise level), while black corresponds to the maximum of $\mathcal{I}$. The contours enclose the region where $\mathcal{I}$ is less than 5~percent\xspace from its maximum. Crosses mark the default parameters.}
\label{fig:optimization}
\end{figure}
\section{Conclusions}\label{s:conclusion}
The main purpose of this paper was to define and present a dimensionless parameter to quantify the structure of halo merger trees in an astrophysically meaningful manner. This parameter, the \emph{tree entropy} $s$, extends the mass ratio of binary halo-halo mergers both horizontally, to multi-mergers, and vertically, to trees of sequential mergers.
By construction, minimal trees, grown without mergers, have minimal entropy ($s=0$); and maximal merger trees, assembled exclusively from a hierarchy of equal-mass binary mergers, have maximal entropy ($s=1$). All other trees have intermediate entropies (\fig{overview}).
Consistent with this definition, the leaves (new haloes) and smoothly accreted material are initialised with zero entropy and hence smooth accretion asymptotically decreases the entropy, $s\rightarrow0$. For a single halo, $s$ is an evolving property, whose value only changes if the halo accretes material (smoothly or discretely).
In $\Lambda$CDM\xspace, merger trees exhibit a distribution of tree entropies (\fig{entropy_evolution}), well modelled by a $\beta$-distribution peaked around $s\approx0.4$. The median of $s$ corresponds to that of self-similar merger trees made exclusively of binary mergers of a mass ratio near 1/7. As expected, this demonstrates that typical merger trees fall in between minimal and maximal trees. This result is a manifestation of the fact that major mergers are, in fact, quite rare \citep[e.g.,\xspace][]{Fakhouri2010}.
We showed that $s$ is not reducible to other standard halo parameters (\fig{parameter_overview}) and therefore offers a useful addition to describing the history of haloes. Looking at a single galaxy property, the stellar bulge-to-total ($B/T$) mass ratio, we found that $s$ holds a significant amount of statistical information. In fact, for haloes of fixed mass, $s$ holds more information on this morphology estimator than any other halo parameter we considered (some of which are shown in \fig{mutual_information}). This shows that most galaxies `care' about a large part of their merger history rather than just a single (e.g.,\xspace the last significant) merger/accretion event.
Having introduced a new framework for studying merger trees, this work lends itself to various extensions:
\begin{itemize}
\item A natural idea is to apply $s$ to a deeper exploration of the galaxy-halo connection, firstly by considering a larger range of galaxy properties and secondly by considering other galaxy evolution models. It would be particularly interesting to include modern hydrodynamic simulations, such as EAGLE \citep{Schaye2015} or IllustrisTNG \citep{Nelson2019}, since galactic transformations are not directly hardwired to mergers in such simulations.
\item Likewise, $s$ might help interpret variations in the inner structure of haloes. We have shown that the concentration of the density profile does not significantly correlate with $s$ beyond the established mass--concentration relation, likely because $s$ does not depend on time scales. However, it would be natural to expect that the substructure of haloes depends on the statistics of merger ratios and hence on $s$.
\item The statistics of $s$ offer a new way to compare different combinations of halo finders and tree builders \citep{Onions2012,Srisawat2013}.
\item Similarly, the statistics of $s$ extracted from CDM simulations provide a new benchmark for analytically constructed merger trees \citep[e.g.,\xspace][]{Kauffmann1993,Sheth1999b,Somerville1999,Zhang2008,Parkinson2008}. By applying this benchmark as a tuning factor, Monte-Carlo algorithms for generating merger trees could be tested and improved.
\item There is a lot of room for sharpening our understanding of the statistics of $s$ and its dependence on cosmology. It is likely that this can be achieved analytically using the transformation equations of \cite{Neistein2010}. New insight might also arise from studying the dependence of $s$ on the power spectrum of explicitly scale-free $\Lambda$CDM\xspace simulations.
\item The tree entropy $s$ could also be used to study the direct implications of galaxy-galaxy mergers on galaxy evolution, rather than the indirect implications of halo-halo mergers studied in this work. To do so, $s$ could be computed using the stellar or stellar+cold gas mass assembly trees.
\end{itemize}
Time will tell if the tree entropy becomes a lasting concept. Maybe modified definitions will turn out to be more fruitful and/or conceptual improvements can be made. It seems, however, likely that the considerations for a physically motivated tree parameter outlined in \ss{requirements} can offer a foundation for forthcoming research.
\section*{Acknowledgements}
We thank the anonymous referee for a very kind and useful report, as well as Paul Schechter and Eric Emsellem for inspiring discussions. We also thank Chris Power, Aaron Robotham and Rodrigo Tobar for their contribution to SURFS\xspace, SHARK\xspace and TreeFrog. Parts of this research were conducted by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO3D), through project number CE170100013. PE and CL are directly funded by ASTRO 3D. DO and AL are recipients of Australian Research Council Future Fellowships (FT190100083, FT160100250, respectively) funded by the Australian Government. This work was supported by resources provided by the Pawsey Supercomputing Centre with funding from the Australian Government and the Government of Western Australia.
\section{Introduction}
\label{sec:intro}
Vortex-core lines (filaments) are essential in turbulence~\cite{mccomb1990physics}; they can exhibit complex geometric and topological structures over space and time.
Due to the typically large amount of data, especially for high-resolution simulations, researchers in the field of scientific visualization have made great efforts over the past years to extract vortex-core line structures~\cite{gunther2018state}, which enable efficient and high-quality visualizations, as well as both qualitative and quantitative analysis of structures in turbulence datasets.
While many techniques have been proposed to extract these vortex-core line structures, they all target classical turbulence (without any quantum mechanical effects).
Recently, quantum fluids with turbulence have received increasing attention in physics~\cite{barenghi2014introduction,nemirovskii2013quantum}.
It is thus desirable to also extract and visualize vortex-core line structures in quantum turbulence datasets to aid the advancement of this emerging field, where experimental visualizations are very expensive and difficult to obtain~\cite{guo2014visualization}.
Unfortunately, existing vortex-core line extraction methods cannot serve for the purpose, since the model equations and the definitions of vortices are quite different; in addition, current visualizations for quantum turbulence datasets rely on the isosurface of either density~\cite{Yepez09,Clark17} or circulation~\cite{GYL17} fields, which are not sufficiently accurate.
It is also difficult for these methods to achieve real-time performance when visualizing vortex-core line structures in high-resolution simulation datasets.
In this paper, we are interested in extracting vortex-core line structures in quantum turbulence simulation datasets, and representing them analytically by continuous curves, which is called \textit{vortex-core line vectorization}, for efficient and high-quality visualizations, preferably in real time, such that it can provide a convenient visual analysis tool for the scientific study of quantum turbulence structures.
Note that quantum turbulence is an open and important research area in physics, where little is currently known.
From the perspective of domain scientists, manipulating the vortex-core line structures in real time can enable more intuitive understanding and more precise analysis, which could assist future research on quantum turbulence.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{teaser.jpg}
\caption{Visualizations of vortex-core line structures in steady homogeneous quantum turbulence by progressively zooming in the camera views (from left to right). Such a structure is excited by continuous random potential input, which can compensate for the fast vortex decay during the massive reconnections and produce steady quantum turbulence. With our visualization technique, the quantum turbulence structure can be explored interactively with real-time performance. Note that the unit of the labeled length scale is with respect to the domain size ($l=32.0$) we use in the simulation.
}
\label{fig:teaser}
\vspace*{-1.5mm}
\end{figure*}
Given the simulation datasets of quantum turbulence, obtained by solving either the nonlinear Schr\"{o}dinger equation (NLSE)~\cite{Tsubota17} or the nonlinear Klein-Gordon equation (NLKG)~\cite{Xiong14} (both are complex-valued equations), our visualization starts by extracting vortex nodes with a circulation-based method~\cite{GYL17}, which immediately results in a vortex-node graph, where vortices belonging to the same vortex-core line can be easily extracted.
Then, we iteratively reduce the graph by local averaging to generate samples on vortex-core lines, together with a density-guided local optimization to adjust positions of these samples for higher accuracy.
Finally, we interpolate these samples by a proper ordering for each vortex-core line with a continuous curve for the vectorized representation.
After vectorization, the memory consumption is reduced by orders of magnitude, which enables high-quality real-time visualizations, even on a normal laptop PC.
Note that the graph representation we propose naturally preserves complex topologies in vortex-core line structures during reconnection without any extra computation, which allows us to complete the vectorization within a relatively short period of time.
We demonstrate our real-time visualization results with a dataset we obtained by solving the NLKG equation at a resolution of $2048^3$.
\textit{Note that no previous work has successfully achieved real-time visualization of vortex-core line structures from a quantum turbulence dataset at such a high resolution}.
With our new technique, we have generated different types of visualizations for either qualitative or quantitative analysis (a physicist in the quantum fluids domain, the second author of this paper, has been intensively involved in such analysis), and Fig.~\ref{fig:teaser} shows one of these visualizations zooming in on the quantum turbulence structure.
Comparisons with existing vortex-core line visualization methods~\cite{banks1995predictor,GYL17}, both qualitatively and quantitatively with a metric, are conducted to demonstrate the higher vortex-core line extraction accuracy of our method and the resulting visualization quality.
\section{Background}
\label{sec:related_work}
Before we turn our focus to vortex-core line vectorization in quantum turbulence datasets, we first give some introduction to background knowledge in the field of quantum fluids, as well as the related work on both classical and quantum turbulence visualizations.
\subsection{Quantum fluids and turbulence}
\label{sec:quantum_fluids_turbulence}
Quantum fluids~\cite{Tilley74,leggett2006quantum}, such as superfluid liquid helium (Helium-II) with temperature below 2.17K~\cite{Kapitza38}, atomic Bose-Einstein condensate (BEC) at the order of $\mu$K, and the (hypothetical) stellar superfluids in a neutron star with surface temperature around 6$\times$$10^5$K~\cite{sauls1989superfluidity,page2011rapid}, are a special kind of fluid exhibiting macroscopic quantum mechanical effects~\cite{Barenghi16}.
Since the discovery of quantum fluids, many interesting properties have been observed and studied, some of which are completely different from classical fluids.
For example, as a kind of quantum fluids, superfluids have zero viscosity when flowing through narrow capillaries; ideally, they have infinitely large thermal conductance, and hence can be used as moderators in cryogenics.
Quantum fluidity, as an important physical property of various exotic states of matter, finds applications in fields such as condensed matter physics~\cite{Tilley74,leggett2006quantum}, astrophysics~\cite{sauls1989superfluidity} \cite{page2011rapid}, as well as particle physics and cosmology~\cite{vilenkin1995cosmic,volovik2001superfluid,Xiong14,makinen2019half}.
\subsubsection{Quantum fluid model}
There have been several mathematical models to describe the dynamic behavior of quantum fluids~\cite{barenghi2014introduction,nemirovskii2013quantum}.
One of the recent models is the nonlinear Klein-Gordon equation~\cite{debbasch1995relativistic,Xiong14}, which is written in a dimensionless form as:
\begin{equation}
-\frac{\partial^2 \Phi}{\partial t^2}+\nabla^2\Phi = (|\Phi|^2-1)\Phi + \lambda\Phi,
\label{eq:nlkg}
\end{equation}
where $\Phi$ is a complex-valued scalar field defined as: $\Phi =|\Phi|e^{i\sigma}$, with $|\Phi|$ and $\sigma$ the magnitude and phase of $\Phi$; $\lambda$ is a free parameter controlling the interaction among quantum fluid particles that could influence the core radius of quantum vortices, which we set as $\lambda = 1$ to allow resolving the healing length by simulation.
The hydrodynamic density $\rho$ and velocity $\mathbf{u}$ can be obtained as:
\begin{equation}
\rho =-\frac{\dot{\sigma}|\Phi|^2}{\sqrt{1-u^2}},\;\;\;\;\;\mathbf{u} = -\frac{\nabla\sigma}{\dot{\sigma}},
\end{equation}
where $\dot{\sigma}=\partial \sigma / \partial t$.
For low-speed (speeds far smaller than the speed of light) quantum fluids, such as BEC, the above formulation can be further reduced to:
\begin{equation}
\rho=|\Phi|^2,\;\;\;\;\;\mathbf{u}=\nabla\sigma,
\end{equation}
which has been widely used in BEC simulations~\cite{Kobayashi05,rorai2016approach}.
It should be mentioned that the vortex cores are encoded by the \textit{singularities} of the phase $\sigma$, which are mostly located at sub-grid scales.
Thus, the vorticity $\boldsymbol{\omega}=\nabla\times\mathbf{u}$ cannot be directly computed at the vortex cores for visualization purposes.
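For illustration, the low-speed reduction can be evaluated on a discrete grid as in the following Python sketch (not part of our solver); note that the finite-difference phase gradient is only meaningful away from the vortex cores, where the phase is singular, which is precisely why the vorticity cannot be evaluated there directly.
\begin{verbatim}
import numpy as np

def density_and_velocity(Phi, dx=1.0):
    # low-speed reduction: rho = |Phi|^2, u = grad(sigma), sigma = arg(Phi)
    rho = np.abs(Phi)**2
    sigma = np.angle(Phi)
    u = []
    for axis in range(Phi.ndim):
        # wrapped phase difference of neighbouring grid points, mapped to (-pi, pi]
        dphase = np.angle(np.exp(1j*(np.roll(sigma, -1, axis) - sigma)))
        u.append(dphase/dx)              # forward difference on a periodic grid
    return rho, np.stack(u)
\end{verbatim}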
\subsubsection{Quantum turbulence generation}
Among the many forms in which quantum fluids exist, quantum turbulence \cite{Nemirovskii13} is of particular interest, since it consists of tangled quantized vortices.
Like classical turbulence, quantum turbulence exhibits complex chaotic structures over both space and time.
Unlike classical turbulence, quantum vortices have a fixed core radius (healing length) and will never dissipate until reconnection happens, which transforms part of the vortex energy into waves.
The study of quantum turbulence is an emerging research area, which will not only shed light on the general structure of turbulence, but can also be applied directly to liquid-helium-related cryogenic technology and to other fields such as cosmic strings, linear defects in solids, and dark lines in nonlinear optics \cite{Nemirovskii13}.
Quantum turbulence can be generated with Eq.~\ref{eq:nlkg} if sufficiently dense initial vortex-core lines are given.
However, as discussed in~\cite{GYL17}, quantum vortices will decay when they reconnect.
In the case of quantum turbulence, massive vortex-core line reconnections will occur, resulting in very fast vortex decay.
In \cite{GYL17}, random vortex rings are periodically injected from the domain boundary to counteract the decay and maintain vortex-core line reconnections.
However, such a method has difficulty producing homogeneous vortex-core line distributions, which are desirable for the scientific study of quantum turbulence.
To overcome this deficiency, we can employ ``random potential'' as input excitations, similar to that used in~\cite{Kobayashi05}, which acts as an extra energy source added to Eq.~\ref{eq:nlkg} to compensate for vortex decay:
\begin{equation}
-\frac{\partial^2 \Phi}{\partial t^2}+\nabla^2\Phi = (|\Phi|^2-1)\Phi + [\lambda + P(\mathbf{x},t)]\Phi,
\label{eq:random-potential}
\end{equation}
where $P(\mathbf{x},t)$ denotes the random potential input whose magnitude is a random number between 0 and $V_0$.
To maintain correlations, we first construct the potential values by random numbers at sampled grid nodes every some spatial distance $X_0$ and temporal distance $T_0$.
Then, for other grid nodes in between these distances, we can employ cosine interpolation~\cite{Smed17} to construct the whole random potential field $P(\mathbf{x},t)$, which varies in both space and time.
Specifically, we can set $X_0 = 2$, $V_0 = 55$, and $T_0 = 0.16$ to balance between vortex production and decay.
Note that random potential has some physical interpretations; it can mimic the random forces in porous media \cite{huang1992hard} or an optical speckle potential \cite{lye2005bose} in real experiments.
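As a hedged illustration, the cosine-interpolated random potential can be sketched as below; only one spatial dimension plus time is shown for brevity (the full field applies the same 1D kernel separably along $x$, $y$, $z$ and $t$), and the fixed seed and the in-range assumption on $(x,t)$ are illustrative choices rather than those of the actual solver:
\begin{verbatim}
#include <cmath>
#include <random>
#include <vector>

double cosine_interp(double a, double b, double t) {   // t in [0,1]
  double w = (1.0 - std::cos(M_PI * t)) * 0.5;         // smooth weight
  return a * (1.0 - w) + b * w;
}

struct RandomPotential1D {
  double V0, X0, T0;
  std::vector<std::vector<double>> ctrl;               // ctrl[timeSlice][node]
  RandomPotential1D(int nx, int nt, double V0_, double X0_, double T0_)
      : V0(V0_), X0(X0_), T0(T0_), ctrl(nt, std::vector<double>(nx)) {
    std::mt19937 rng(12345);                           // illustrative fixed seed
    std::uniform_real_distribution<double> uni(0.0, V0);
    for (auto& slice : ctrl)
      for (auto& v : slice) v = uni(rng);              // magnitudes in [0, V0]
  }
  double eval(double x, double t) const {              // P(x,t), (x,t) in range
    int ix = (int)(x / X0), it = (int)(t / T0);
    double fx = x / X0 - ix, ft = t / T0 - it;
    double p0 = cosine_interp(ctrl[it][ix],     ctrl[it][ix + 1],     fx);
    double p1 = cosine_interp(ctrl[it + 1][ix], ctrl[it + 1][ix + 1], fx);
    return cosine_interp(p0, p1, ft);                  // interpolate in time
  }
};
\end{verbatim}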
\subsubsection{Preparing quantum turbulence datasets}
To obtain the dataset for visualization, we can solve Eq.~\ref{eq:random-potential} with numerical discretization scheme described in \cite{GYL17} to simulate the quantum turbulence dynamics.
For more accurate analysis, such a simulation should be done at high resolution, and in this paper, the resolution is fixed to $2048^3$.
Due to large data size, the simulation is conducted on a cluster system with implementation by message-passing interface (MPI)~\cite{gabriel04:_open_mpi}.
It should be noted that the quantum turbulence datasets can also be obtained by solving other field equations, e.g., nonlinear Schr\"{o}dinger equation.
In the following, we assume that the dataset is ready, and focus on visualizing vortex-core line structures inside the dataset.
\subsubsection{Quantum vortex-core lines}
The structure of quantum fluids differs from that of classical fluids in that it is dominated by vortex-core lines with a very small fixed core radius; apart from these thin vortex regions, the flow is a smooth potential flow (zero vorticity) without visually obvious structures.
Thus, visualizing the spatial distribution of vortex-core lines is particularly crucial for quantum turbulence.
However, unlike vortices in classical fluids which are defined more subjectively, vortex cores in quantum fluids can be precisely defined as the \textit{phase singularity} of the model equation~\cite{Tilley74,leggett2006quantum,barenghi2016regimes}.
This can be considered as an easier-to-solve problem since we can establish an objective metric to identify those vortex cores.
On the other hand, it is due to such a mathematical definition that another difficulty arises as compared to the traditional approaches where vorticity can be easily computed.
\subsection{Related work}
There have been different kinds of techniques proposed in the literature for both classical and quantum turbulence visualizations, and we discuss them accordingly below.
\subsubsection{Classical turbulence visualization}
Early turbulence visualization methods are based on identifying vortex cores with some local criteria.
For example, Hunt et al.~\cite{Hunt_88} proposed the Q-criterion, Jeong and Hussain~\cite{Jeong_95} proposed the $\lambda_2$-criterion, and Chakraborty~\cite{chakraborty2005relationships} proposed the $\lambda_{ci}$-criterion.
Jiang et al.~\cite{Jiang_05} provided a taxonomy of many of these identification methods.
There are also some other new methods recently proposed, such as $\Omega$-method \cite{liu2016new} and Rortex method~\cite{gao2018rortex}.
In addition to the above local methods, some other works focus on visualizing the global structure of vortex-core distributions.
Jankun-Kelly et al.~\cite{Jankun-Kelly_TVCG_06} proposed to detect and visualize vortices in engineering environments.
Weinkauf et al.~\cite{Weinkauf_07} extracted vortex cores from swirling particle motion in unsteady flows.
Schneider et al.~\cite{Schneider_08} used $\lambda_2$-criterion with largest contours to extract iso-surfaces.
Schafhitzel et al.~\cite{Schafhitzel11} visualized hairpin vortices with iso-surface.
Kasten et al.~\cite{Kasten11} proposed to identify two-dimensional time-dependent vortex regions based on an acceleration magnitude, and Wei\ss{}mann et al. \cite{WeiBmann_14} formulated a global method to identify vortex-core lines over a vector field based on quantum mechanical analogy.
Recently, Chern et al.~\cite{chern2017inside} proposed an algorithm to find spherical Clebsh maps from a given velocity field to obtain vortex-core lines.
On the other hand, an objective optimization is proposed for vortex-core line extraction in classical turbulence~\cite{gunther2017generic}, which was further extended to the detection of vortex-core lines of inertial particles~\cite{gunther2019objective}.
There are also many other approaches for visualizing classical turbulence in different aspects.
Laney et al.~\cite{Laney_06} used Morse-Smale complex to study turbulent mixing in Rayleigh-Taylor instabilities.
Wiebel et al.~\cite{Wiebel_07} embedded streamlines in a line integral convolution (LIC) texture to explore boundary-induced vortices.
Wei et al.~\cite{Wei11} introduced a dual-space method to analyze particle data from combustion simulations using model-based clustering.
Treib et al.~\cite{Treib12} presented a GPU-based system for flow fields with a compressed representation to visualize tera-scale turbulence datasets on desktop PCs, and Shafii et al.~\cite{Shafii13} extracted vortices from wind farm data, and visualized the interplay between vortices and forces on wind turbine blades.
More recently, Kern et al. \cite{kern2018robust} proposed a robust detection and visualization method of jet-stream core lines in atmospheric flows.
Tao et al. \cite{tao2018semantic} presented a semantic flow graph to visualize the object relationships in flow fields, and Wilde et al. \cite{wilde2019recirculation} proposed an approach to the visual analysis of re-circulation in flows by introducing re-circulation surface for 3D unsteady flow fields.
\subsubsection{Quantum fluid visualization}
As described in Section~\ref{sec:quantum_fluids_turbulence}, it is usually difficult to visualize vortex-core line structures in quantum fluids by vorticity.
Thus, the above methods for classical turbulence visualization are difficult to apply, and iso-surface rendering of the density field (e.g., in \cite{Zuccher12}) is usually used for visualization, exploiting the fact that the density drops to zero at vortex cores.
However, as argued and demonstrated in~\cite{GYL17}, such a visualization may produce undesirable non-vortex-core structures during vortex reconnection, which motivated the circulation-based method proposed therein.
It is worth mentioning that Guo et al.~\cite{Guo_16} adopted a vortex line extraction and tracking method, also based on circulation, to visualize superconductor simulation datasets, which is however not suitable for visualizing vortex-core lines in quantum fluid simulation datasets.
\begin{figure}[t]
\centering
\includegraphics[width=0.85\columnwidth]{identify.pdf}
\vspace*{-1mm}
\caption{Circulation-based vortex identification method: (a) Circulation at a grid node is calculated on all three coordinate-aligned orthogonal planes in order to detect whether a vortex core exists inside the region enclosed by a circulation path; (b) Circulation is done based on a ring-like path for each plane in (a) to reduce numerical error.}
\label{fig:vortex-node-identification}
\vspace*{-1.5mm}
\end{figure}
\vspace{0.1cm}
\textit{Circulation-based method.}
A recent method for more precisely visualizing vortex-core line structures in quantum fluids is to employ the circulation-based approach proposed in~\cite{GYL17}, where a circulation field $C$ is computed:
\begin{equation}
C = \oint_L \nabla \sigma \cdot d\mathbf{l}.
\label{eq:circulation1}
\end{equation}
Here, $L$ denotes the enclosed path in the nearest proximity of a grid node.
Ideally, $|C|=2\pi$ when the path includes a vortex core; otherwise, it is zero.
Thus, we can use a threshold $\epsilon = \pi$ to differentiate these two cases.
Note that $C$ can be either positive or negative, indicating the direction of vorticity, which could be used to define the direction of the vortex-core line at a given point.
To numerically compute $C$, we use three orthogonal planes parallel to coordinate planes (see Fig.~\ref{fig:vortex-node-identification} (a)) to compute three circulation values for each grid node based on a ring-like path on that plane (Fig.~\ref{fig:vortex-node-identification} (b)), which improves reliability for numerical integration.
The maximum of the three absolute circulation values is finally selected as the circulation value for that node.
More details on implementation can be found in \cite{GYL17}.
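The core of the circulation test can be sketched as follows; a simple square (plaquette) loop on one coordinate plane replaces the ring-like path of Fig.~\ref{fig:vortex-node-identification} (b) purely to keep the example short, and the periodic indexing helper is the same illustrative one as in the earlier sketch:
\begin{verbatim}
#include <cmath>
#include <complex>
#include <vector>

inline int idx(int i, int j, int k, int N) {            // periodic flattening
  auto w = [N](int a) { return (a % N + N) % N; };
  return (w(i) * N + w(j)) * N + w(k);
}

// Phase difference sigma(to) - sigma(from), wrapped into (-pi, pi].
double wrapped_dphi(const std::complex<double>& from,
                    const std::complex<double>& to) {
  return std::arg(to * std::conj(from));
}

// Circulation along the unit plaquette in the xy-plane around node (i, j, k),
// traversed counter-clockwise; ~ +-2*pi if a vortex core is enclosed.
double circulation_xy(const std::vector<std::complex<double>>& Phi,
                      int i, int j, int k, int N) {
  int c[5][2] = {{i, j}, {i + 1, j}, {i + 1, j + 1}, {i, j + 1}, {i, j}};
  double C = 0.0;
  for (int s = 0; s < 4; ++s)
    C += wrapped_dphi(Phi[idx(c[s][0],     c[s][1],     k, N)],
                      Phi[idx(c[s + 1][0], c[s + 1][1], k, N)]);
  return C;
}
// A node is flagged as a vortex node when the largest of the three per-plane
// circulation magnitudes exceeds the threshold epsilon = pi, e.g.
// std::fabs(circulation_xy(...)) > M_PI (analogous loops for the yz/zx planes).
\end{verbatim}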
\vspace{0.1cm}
\textit{Our contributions.}
While the above circulation-based approach seems directly applicable to quantum turbulence visualization, it may produce unclear results (as will be demonstrated later), since vortex-core lines are very dense in quantum turbulence.
Because circulation only locates vortex cores imprecisely, quantitative analysis, such as measuring the length, curvature, looping or reconnection of the vortex-core lines, is also difficult.
In addition, since the circulation-based approach relies on extracting isosurface around the vortex-core lines for visualization, the memory consumption is still very large for high-resolution datasets.
Thus, it is very difficult to achieve real-time visualization to support interaction.
In this work, we address these problems by vectorizing the massive vortex-core lines inside quantum turbulence datasets, which is achieved efficiently by a novel graph-based algorithm with local optimization.
This enables our visualization to reach $60$ to $110$ frames per second on a normal desktop PC for data resolutions as high as $2048^3$, supporting many interactive visual analyses.
\section{Quantum Turbulence Data Preparation}
\label{eq:random-potential-discrete}
\begin{figure*}[t]
\centering
\begin{minipage}{0.24\textwidth}
\includegraphics[width=1.0\textwidth]{process_illustration-SV-0.jpg}
\subcaption{global graph construction}
\end{minipage}
\begin{minipage}{0.24\textwidth}
\includegraphics[width=1.0\textwidth]{process_illustration-SV-1.jpg}
\subcaption{vortex nodes extraction}
\end{minipage}
\begin{minipage}{0.24\textwidth}
\includegraphics[width=1.0\textwidth]{process_illustration-SV-3.png}
\subcaption{zoom-in view}
\end{minipage}
\begin{minipage}{0.24\textwidth}
\includegraphics[width=1.0\textwidth]{process_illustration-SV-4.png}
\subcaption{sample point generation}
\end{minipage}
\begin{minipage}{0.24\textwidth}
\includegraphics[width=1.0\textwidth]{process_illustration-SV-5.png}
\subcaption{one iteration result}
\end{minipage}
\begin{minipage}{0.24\textwidth}
\includegraphics[width=1.0\textwidth]{process_illustration-SV-6.png}
\subcaption{converged iteration result}
\end{minipage}
\begin{minipage}{0.24\textwidth}
\includegraphics[width=1.0\textwidth]{process_illustration-SV-7.png}
\subcaption{vectorized vortex-core line}
\end{minipage}
\begin{minipage}{0.24\textwidth}
\includegraphics[width=1.0\textwidth]{process_illustration-SV-8.png}
\subcaption{vectorization consistency}
\end{minipage}
\vspace*{-1mm}
\caption{Overview of our vortex-core line vectorization algorithm: (a) global graph constructed from the vortex nodes identified by the circulation-based method; (b) extracted vortex nodes on one independent vortex-core line, as highlighted by the red points; (c) zoom-in view of the blue box region in (b), ignoring non-vortex nodes; (d) an illustration for calculating one sample point (the blue one) based on a 3D box around a particular point (the black one), where iterative density-guided local optimization is performed, with initial estimate by mean position; (e) reduced graph after one iteration of local estimation; (f) converged result of iterative graph reduction; (g) vectorization result of a vortex-core line from the surrounding vortex nodes; (h) comparison of the vortex nodes around the line and their corresponding vectorization result; note that the vectorized vortex-core line is always enclosed by the corresponding vortex nodes.}
\label{fig:pipeline}
\vspace*{-1.5mm}
\end{figure*}
\section{Quantum Vortex-Core Line Vectorization}
\label{sec:qt_vis}
By definition, vortex-core lines are phase singularities of the $\Phi$ field, which are infinitesimally small and form piecewise continuous curves in space after a series of reconnections due to branching; see Fig.~\ref{fig:teaser} for an example, where topologically complex structures with branches or enclosed rings can be formed.
We call a curve a ``\textit{simple vortex-core line}'' if it does not contain branches.
The whole vortex-core line structure can thus be constructed by connecting all simple vortex-core lines together.
With the grid nodes that contain vortex cores nearby identified by the circulation-based method (vortex nodes), simple vortex-core lines can be extracted from these nodes and then represented by continuous curves, which enables us to represent the whole vortex-core line structure piecewise continuously.
We call such a process ``\textit{vortex-core line vectorization}''.
In our quantum vortex-core line vectorization, the main difficulty rests with the treatment of branches, where a simple and efficient method is desirable.
In this paper, we propose a novel \textit{graph-based model} to deal with such an issue, with iterative graph reduction, to form a simplified graph of sample points on the desired vortex-core lines.
These sample points are further relocated by a \textit{density-guided local optimization} to increase accuracy.
By graph reduction, it is thus easy to identify branches by examining the connectivity of each graph node, which can separate the simplified graph into multiple graphs without any branches.
The sample points on these separated graphs are finally used to vectorize each simple vortex-core line, so that the whole vortex-core lines can be vectorized.
Fig.~\ref{fig:pipeline} gives an overview of the whole vectorization process, as summarized below:
\begin{itemize}
\item
\textbf{[Step 1]:} With vortex nodes identified by the circulation-based method, a graph based on the local connectivity of the vortex nodes is constructed for the entire field (Fig.~\ref{fig:pipeline} (a)).
\item
\textbf{[Step 2]:} Vortex nodes around the same connected vortex-core lines are extracted as sub-graphs of the whole graph by depth-first traversal (Fig.~\ref{fig:pipeline} (b) \& (c)).
\item
\textbf{[Step 3]:} For vortex nodes of each simple vortex-core line, they are used to calculate the sample points of the continuous curve by an iterative local estimation and graph reduction algorithm (Fig.~\ref{fig:pipeline} (d) to (f)).
\item
\textbf{[Step 4]:} With these sample points, continuous vortex-core lines are formed by first re-ordering these sample points along the curves, followed by spline interpolation, which are finally used for interactive visualization (Fig.~\ref{fig:pipeline} (g) \& (h)).
\end{itemize}
In the remainder of this section, we detail the above steps of the vortex-core line vectorization process.
\subsection{Global graph construction}
As described above, after identifying vortex nodes, we can represent these nodes with a \textit{graph structure}, by first removing non-vortex grid nodes.
Then, for each vortex node, we associate it with the surrounding vortex nodes that are directly linked by grid lines to maintain local topology, leading to a sparse graph, see Fig.~\ref{fig:pipeline} (a).
Such a graph is created by first linearizing vortex nodes with increasing indices in a scan order from x- to z-coordinate directions; then, for each vortex node, we examine its directly connected vortex nodes and store their indices into an adjacency array of that node.
Such a linearization and connectivity representation not only reduce the memory usage, but also provide an efficient data structure for extracting vortex nodes that enclose vortex-core lines.
Note that in preparing the datasets, our simulation is distributed over different CPU cores, and grid nodes (including the identified vortex nodes) are distributed over different memories.
Thus, the above graph construction process should also be distributed; otherwise, there would not be sufficient memory available.
We employ a block-based distribution, where the whole dataset is divided into 4$\times$4$\times$4 blocks, and each CPU core independently constructs a local graph with local indices for each block.
After that, all local graphs are merged into one global graph, with local indices modified into global ones.
Since the size of the graph is much reduced compared to the whole dataset, the global graph can be constructed on one CPU core instead.
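A minimal single-node sketch of this construction is given below; the container choices are illustrative, the distributed (block-wise) construction is omitted, and the dense grid-to-node table would not fit in memory at $2048^3$, which is precisely why the actual pipeline distributes this step:
\begin{verbatim}
#include <cstdint>
#include <vector>

struct VortexGraph {
  std::vector<int64_t> nodeToGrid;                     // compact index -> grid index
  std::vector<std::vector<int>> adj;                   // adjacency lists
};

VortexGraph build_graph(const std::vector<uint8_t>& isVortex, int N) {
  VortexGraph g;
  std::vector<int> gridToNode(isVortex.size(), -1);
  for (int64_t p = 0; p < (int64_t)isVortex.size(); ++p)   // scan-order pass
    if (isVortex[p]) {
      gridToNode[p] = (int)g.nodeToGrid.size();
      g.nodeToGrid.push_back(p);
    }
  g.adj.resize(g.nodeToGrid.size());
  const int d[6][3] = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
  for (size_t n = 0; n < g.nodeToGrid.size(); ++n) {
    int64_t p = g.nodeToGrid[n];
    int i = p / (N * N), j = (p / N) % N, k = p % N;
    for (auto& v : d) {                                // 6 grid-line neighbours
      int ii = i + v[0], jj = j + v[1], kk = k + v[2];
      if (ii < 0 || jj < 0 || kk < 0 || ii >= N || jj >= N || kk >= N) continue;
      int m = gridToNode[((int64_t)ii * N + jj) * N + kk];
      if (m >= 0) g.adj[n].push_back(m);               // link vortex neighbours
    }
  }
  return g;
}
\end{verbatim}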
\subsection{Node extraction for quantum vortex-core lines}
With the global graph constructed, we can consider vectorizing the piecewise continuous vortex-core lines.
However, the whole vortex-core lines are not completely connected and may be composed of different independent and isolated ones.
Thus, to vectorize the entire vortex-core lines, we first need to identify independent ones, which can be easily achieved by traversing the global graph and extracting sub-graphs that enclose those vortex-core lines.
To do the sub-graph extraction, all the nodes on the global graph are initially labeled as un-visited nodes.
Then, starting from an arbitrary node, we propagate throughout the global graph by depth-first traversal.
If the connected node is un-visited, we put the index of this node into the corresponding set of nodes that enclose a vortex-core line, label it as visited, and then proceed to the next connected but un-visited node until no such node is left.
Once the propagation stops, we collect all the propagated nodes, which are distributed around one particular vortex-core line, and repeat the same process by picking another un-visited node in the remaining graph.
The whole process stops when no other un-visited nodes remain in the whole graph, which produces multiple sets of nodes enclosing independent vortex-core lines.
As an example, Fig.~\ref{fig:pipeline} (b) illustrates the extracted nodes for one vortex-core line, and Fig.~\ref{fig:pipeline} (c) shows its structure in an enlarged view.
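The extraction itself is a standard connected-component traversal; a minimal iterative sketch (avoiding deep recursion on long lines) is given below, assuming the adjacency structure of the global-graph sketch above:
\begin{verbatim}
#include <stack>
#include <vector>

// One node set is returned per independent vortex-core line.
std::vector<std::vector<int>>
extract_lines(const std::vector<std::vector<int>>& adj) {
  std::vector<std::vector<int>> components;
  std::vector<char> visited(adj.size(), 0);
  for (int start = 0; start < (int)adj.size(); ++start) {
    if (visited[start]) continue;
    components.emplace_back();
    std::stack<int> st;
    st.push(start); visited[start] = 1;
    while (!st.empty()) {                              // iterative DFS
      int n = st.top(); st.pop();
      components.back().push_back(n);
      for (int m : adj[n])
        if (!visited[m]) { visited[m] = 1; st.push(m); }
    }
  }
  return components;
}
\end{verbatim}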
\subsection{Sample points generation}
Once we have extracted the surrounding vortex nodes for each independent vortex-core line, it is then desirable to generate the sample points on the vortex-core line.
Since the extracted vortex nodes are always distributed around the vortex-core lines, it is possible to estimate the sample points using only the neighboring vortex nodes.
There are several requirements for the generated sample points.
First, they should be as close as possible to the true analytical phase singularity, with a sufficient sampling rate, such that more accurate vortex-core lines can be obtained.
Second, they should be distributed almost uniformly along the vortex-core line, which can reduce unnecessary oscillations during vectorization, leading to more accurate visualization.
Third, the computed sample points should also be represented by a new reduced graph such that re-ordering can be easily done for vectorization.
\subsubsection{Candidate sample point estimation}
We first discuss how candidate sample points can be estimated locally, which are then used to generate the final sample points of vortex-core lines.
Given a vortex node, we can associate a 3D box with an equal length $d=k\Delta x$ ($k$ is an integer larger than 1, and we choose $k\in[3,8]$ in our implementation), where we can extract a sub-graph using depth-first traversal again, see Fig.~\ref{fig:pipeline} (d) which shows the box around a vortex node (colored in black); the edges and nodes of the sub-graph are colored in green and orange, respectively.
Note that only nodes on such a sub-graph are used to estimate one candidate sample point of the vortex-core line, and we denote $\mathbf{p}_i^j$ as the spatial location of the $j$-th node on the sub-graph of the $i$-th vortex node selected from the node set of a vortex-core line.
A good estimator of the candidate sample point is the mean location of all the nodes on the sub-graph since they are almost uniformly distributed around the vortex-core line inside the 3D box: $\bar{\mathbf{p}}_i=\left(\sum_j \mathbf{p}_i^j\right)/N$, where $N$ is the number of vortex nodes on the sub-graph.
Such an estimator has an uncertainty of only one grid cell (as compared to three-grid-cell uncertainty in~\cite{GYL17}).
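A minimal sketch of this box-limited estimator is given below; the position array, the \texttt{consumed} bookkeeping and the box test are illustrative assumptions used only to make the idea concrete:
\begin{verbatim}
#include <array>
#include <cmath>
#include <stack>
#include <vector>

using Vec3 = std::array<double, 3>;

// Mean position of the sub-graph reachable from 'seed' inside the 3D box of
// half-size 'halfBox' (= k*dx/2) centred at the seed vortex node.
Vec3 candidate_sample_point(int seed,
                            const std::vector<std::vector<int>>& adj,
                            const std::vector<Vec3>& pos,
                            double halfBox,
                            std::vector<char>& consumed) {
  const Vec3 c = pos[seed];
  auto inBox = [&](const Vec3& p) {
    return std::fabs(p[0] - c[0]) <= halfBox &&
           std::fabs(p[1] - c[1]) <= halfBox &&
           std::fabs(p[2] - c[2]) <= halfBox;
  };
  Vec3 mean{0.0, 0.0, 0.0};
  int count = 0;
  std::stack<int> st;
  st.push(seed); consumed[seed] = 1;
  while (!st.empty()) {
    int n = st.top(); st.pop();
    for (int a = 0; a < 3; ++a) mean[a] += pos[n][a];
    ++count;
    for (int m : adj[n])
      if (!consumed[m] && inBox(pos[m])) { consumed[m] = 1; st.push(m); }
  }
  for (int a = 0; a < 3; ++a) mean[a] /= count;        // one-cell-level estimate
  return mean;
}
\end{verbatim}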
\begin{figure}[t]
\centering
\includegraphics[width=0.75\columnwidth]{uncertainty.pdf}
\vspace*{-1mm}
\caption{Illustration of mean location as sample point estimator. In (a), four different regions around the grid points are enclosed by their circulation paths on a 3D plane, which all include the same vortex point. In (b), the mean location of the vortex nodes always locates in the overlapped region of the surrounding circulation paths, which can be a good estimator for sub-grid vortex core.}
\label{fig:uncertainty}
\end{figure}
The reason to use the mean location as the estimator for candidate sample points on the vortex-core line is illustrated in Fig.~\ref{fig:uncertainty}, where a vortex-core line passes through a 3D plane (note that for better illustration, we only draw 2D planes, where the intersection points between the vortex-core line and the planes are marked by the circled crosses).
If a grid node is identified as a vortex node, it means that a vortex-core line passes through (intersect with) the region enclosed by its circulation path, with the intersection point located inside the enclosed region, see Fig.~\ref{fig:uncertainty} (a) for an example.
In this case, the sample point of the vortex-core line has three-cell uncertainty.
However, the vortex nodes are always clustered around the vortex-core line, and a sample point on the vortex-core line is usually surrounded by four vortex nodes on that plane, as Fig.~\ref{fig:uncertainty} (a) shows.
Thus, the vortex-core line must pass through the overlapping of all the regions enclosed by the four circulation paths on the plane, as indicated by the rhombic area with dotted yellow boundary in Fig.~\ref{fig:uncertainty} (b).
If we take the mean position of all the vortex nodes on the sub-graph as the candidate sample point estimator, it is always located inside this area, which has only one-cell uncertainty, and is thus more accurate than \cite{GYL17} in locating the nearby vortex cores.
Note that this is reliable for a 3D box of relatively small size.
\subsubsection{Sample points generation and graph reduction}
Once we can compute one candidate sample point inside the 3D box around a vortex node, all sample points on a vortex-core line can be automatically estimated by an iterative procedure.
Starting from an arbitrary vortex node, we can calculate a candidate sample point inside the surrounding 3D box using the proposed mean-position estimator.
Then, all the vortex nodes belonging to the sub-graph inside the box are reduced to one node containing only the candidate sample point, and all the edges of the sub-graph are collapsed.
After that, we can proceed to generate another candidate sample point, and any vortex node connected to the node of the previously generated candidate sample point is selected.
Similarly, we can compute a new candidate sample point again, where the vortex nodes belonging to the sub-graph of the new surrounding box are reduced again to one node containing the new estimated candidate sample point, and all the edges of the new extracted sub-graph are also collapsed again.
Such a process repeats until there is no unprocessed vortex node in the surrounding node set of a vortex-core line.
By going through such a process, we can generate all candidate sample points which are approximations of the true sample points on vortex-core lines, with the graph much reduced from the initial one, see Fig.~\ref{fig:pipeline} (e).
However, there are still unwanted nodes and edges which form branches or loops in the graph (see the orange box in Fig.~\ref{fig:pipeline} (e)), which are not suitable for vectorization, and require further graph reduction to remove them.
Thus, we repeat the whole graph reduction process using mean-location estimator for each node with the same size for the 3D box until the positions of all sample points converge, see the result in Fig.~\ref{fig:pipeline} (f).
Usually, only a few iterations (e.g., 2 to 5 iterations) are required for reduction, and the size of the 3D box approximately defines the average distance of the final sample points.
It is important to note that such an iterative graph reduction process lets the generated sample points inherit the connectivity of the original graph.
This is beneficial for sample point re-ordering based on vortex-core line classification, and eventually facilitates the underlying vectorization, which is also the key to ensure topological consistency.
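One pass of this reduction can be sketched as follows, reusing the \texttt{Vec3} type and the \texttt{candidate\_sample\_point} estimator from the previous sketch; the traversal order and the snapshot-based bookkeeping of consumed nodes are simplifications for clarity, not the actual implementation:
\begin{verbatim}
#include <algorithm>
#include <set>
#include <utility>
#include <vector>

struct ReducedGraph {
  std::vector<Vec3> samples;                           // one sample point per node
  std::vector<std::vector<int>> adj;
};

ReducedGraph reduce_once(const std::vector<std::vector<int>>& adj,
                         const std::vector<Vec3>& pos, double halfBox) {
  std::vector<char> consumed(adj.size(), 0);
  std::vector<int> owner(adj.size(), -1);              // old node -> reduced node
  ReducedGraph r;
  for (int n = 0; n < (int)adj.size(); ++n) {
    if (consumed[n]) continue;
    std::vector<char> before = consumed;               // snapshot (simple, O(n^2))
    Vec3 s = candidate_sample_point(n, adj, pos, halfBox, consumed);
    int id = (int)r.samples.size();
    r.samples.push_back(s);
    for (int u = 0; u < (int)adj.size(); ++u)
      if (consumed[u] && !before[u]) owner[u] = id;    // nodes collapsed into 'id'
  }
  r.adj.assign(r.samples.size(), {});
  std::set<std::pair<int, int>> seen;
  for (int u = 0; u < (int)adj.size(); ++u)
    for (int v : adj[u]) {
      int a = owner[u], b = owner[v];
      if (a != b && seen.insert({std::min(a, b), std::max(a, b)}).second) {
        r.adj[a].push_back(b);                         // inherited (collapsed) edge
        r.adj[b].push_back(a);
      }
    }
  return r;
}
\end{verbatim}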
\subsubsection{Locating more accurate sample points}
The previously generated sample points, which have a one-grid-cell uncertainty, are still not accurate enough.
To further improve visualization accuracy, the locations of these sample points should be adjusted towards the true singularities, based on the basic mathematical observation that when a point $\mathbf{x}$ is a vortex core, its density is zero: $\rho(\mathbf{x})=0$.
Thus, we need to locally search for $\mathbf{x}$ such that this condition is satisfied.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\columnwidth]{newton-raphson.pdf}
\vspace*{-1mm}
\caption{Illustration for locating more accurate sample point. By estimating the tangent $\mathbf{t}$ of the initial sample point $\mathbf{x}_0$, a gradient-descent minimization algorithm is applied within the plane passing through $\mathbf{x}_0$ and perpendicular to $\mathbf{t}$, moving $\mathbf{x}_0$ to $\mathbf{x}^*$, which is closer to the analytical result.}
\label{fig:newton-raphson}
\end{figure}
\begin{figure}[t]
\centering
\begin{minipage}{0.46\columnwidth}
\includegraphics[width=1.0\textwidth]{optimization-1.png}
\subcaption{}
\end{minipage}
\begin{minipage}{0.46\columnwidth}
\includegraphics[width=1.0\textwidth]{optimization-0.png}
\subcaption{}
\end{minipage}
\caption{Comparison of vortex-core lines without (a) and with (b) minimization in a local 3D box. Note the differences especially within the red box regions.}
\label{fig:compare_optimization}
\end{figure}
However, since we solve the quantum turbulence at discrete grid nodes and interpolate the field among them to form a continuous field, $\rho(\mathbf{x})=0$ cannot always be strictly attained locally around a vortex core.
Thus, we convert the above problem into a local minimization problem:
\begin{equation}
\mathbf{x}^*=\text{argmin}_{\mathbf{x}}\;\;\;\rho(\mathbf{x}).
\end{equation}
There can be multiple solutions for $\mathbf{x}^*$ within a local 3D box.
In order to obtain a unique solution, additional constraints should be imposed.
Given the mean-position estimator $\bar{\mathbf{p}}$ as the initial value $\mathbf{x}_0$ for $\mathbf{x}$, we can apply the pseudo-vorticity method \cite{rorai2016approach} to determine the local direction $\mathbf{t}$ of the vortex-core line at $\mathbf{x}_0$.
Then, a plane passing through $\mathbf{x}_0$ and perpendicular to $\mathbf{t}$ can be determined, see Fig.~\ref{fig:newton-raphson}.
We restrict the minimization within such a plane, and employ a gradient-descent algorithm to search for $\mathbf{x}^*$, which is an approximation to find the nearest point on the vortex-core line from $\mathbf{x}_0$.
Note that such a minimization should be applied to all sample points to adjust their coordinates.
Fig.~\ref{fig:compare_optimization} shows a comparison for vortex-core lines without and with such a local minimization in an enlarged view.
Note the red boxes for the slight changes of the vortex-core line with such a minimization algorithm.
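A minimal sketch of this refinement is given below; \texttt{grad\_rho} stands for (tri-linearly) interpolated density gradients from the simulation grid, and the step size and iteration count are illustrative assumptions:
\begin{verbatim}
#include <array>
#include <cmath>
#include <functional>

using Vec3d = std::array<double, 3>;

// Projected gradient descent: starting from x0 with local tangent t (e.g. from
// the pseudo-vorticity method), move within the plane perpendicular to t
// towards the local density minimum (the vortex core).
Vec3d refine_sample_point(Vec3d x, Vec3d t,
                          const std::function<Vec3d(const Vec3d&)>& grad_rho,
                          double step = 0.05, int iters = 50) {
  double tn = std::sqrt(t[0] * t[0] + t[1] * t[1] + t[2] * t[2]);
  for (auto& c : t) c /= tn;                           // normalise the tangent
  for (int it = 0; it < iters; ++it) {
    Vec3d g = grad_rho(x);
    double gdt = g[0] * t[0] + g[1] * t[1] + g[2] * t[2];
    for (int a = 0; a < 3; ++a)
      x[a] -= step * (g[a] - gdt * t[a]);              // descend in the normal plane
  }
  return x;
}
\end{verbatim}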
\subsection{Re-ordering and vectorization}
After we obtain the sample points for each vortex-core line with a reduced graph containing their connectivity, we are ready for vectorization.
However, there are still two problems left.
First, the sample points on one independent vortex-core line may not form a simple vortex-core line, which makes vectorization difficult.
Second, the sample points may not be stored in the right order along the vortex-core line after sample point generation; thus, we should determine their order along the vortex-core line as well as the starting point for vectorization.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\columnwidth]{reordering.pdf}
\vspace*{-1mm}
\caption{Classification of different types of vortex-core lines for re-ordering: (a) Type-I simple open vortex-core line; (b) Type-II simple closed vortex-core line; (c) Type-III complex vortex-core line. Note that S indicates the starting point and E the ending point, while C for complex branch point.}
\label{fig:reordering}
\vspace*{-1.5mm}
\end{figure}
\vspace{0.1cm}
\textit{Vortex-core line classification.}
Given a vortex-core line which is now represented by a reduced graph, we can classify it into the following types of vortex-core lines based on the topological structure of the graph:
\begin{itemize}
\item
{
Type-I (simple open vortex-core line): This type of vortex-core line contains points with a maximal connectivity of 2, and only the two end points have a connectivity of 1, see Fig.~\ref{fig:reordering} (a).
}
\item
{
Type-II (simple closed vortex-core line): This type of vortex-core line contains points whose connectivity is always 2, which is the case when the two end points of a Type-I line close up to form a loop, see Fig.~\ref{fig:reordering} (b).
}
\item
{
Type-III (complex vortex-core line): This type of vortex-core line contains points with arbitrary connectivity, especially branch points (connectivity larger than 2), see Fig.~\ref{fig:reordering} (c), where the branch points are labeled by C; it can be split into sub-vortex-core lines of either Type-I or Type-II.
Note that the end points of the split lines are usually branch points.
}
\end{itemize}
With the above definition, it is easy to implement the vortex-core line classification by traversing the reduced graph of an independent vortex-core line and examining its connectivity.
Note that only Type-I and Type-II vortex-core lines can be easily used for vectorization.
Hence, Type-III vortex-core lines should be split, which is achieved by checking the branch points and splitting the lines at them.
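Since this classification only inspects node degrees of the reduced graph, it can be sketched compactly; the enumeration names are illustrative:
\begin{verbatim}
#include <vector>

enum class LineType { SimpleOpen, SimpleClosed, Complex };

// Degree-1 nodes are end points; degree > 2 nodes are branch points at which
// Type-III lines are split into Type-I/Type-II sub-lines.
LineType classify(const std::vector<std::vector<int>>& adj,
                  std::vector<int>& branchPoints) {
  int endPoints = 0;
  branchPoints.clear();
  for (int n = 0; n < (int)adj.size(); ++n) {
    if (adj[n].size() == 1) ++endPoints;
    else if (adj[n].size() > 2) branchPoints.push_back(n);
  }
  if (!branchPoints.empty()) return LineType::Complex; // Type-III
  return endPoints == 2 ? LineType::SimpleOpen         // Type-I
                        : LineType::SimpleClosed;      // Type-II (all degree 2)
}
\end{verbatim}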
\vspace{0.1cm}
\textit{Starting point selection and re-ordering.}
Once we have classified the different types of vortex-core lines, we can determine the starting point for re-ordering.
If the vortex-core line is of Type-I, the starting point could be any of the two points whose connectivity is 1, e.g., S or E in Fig.~\ref{fig:reordering} (a); if the vortex-core line is of Type-II, any point in the graph can be the starting point; and if the vortex-core line is of Type-III, the branch points are selected as the starting points, which split the whole vortex-core line into multiple sub-vortex-core lines of either Type-I or Type-II.
By determining the starting points, we can propagate the graph with a unique route to re-order sample points along each vortex-core line for later vectorization.
\begin{figure}[t]
\centering
\begin{minipage}{0.48\columnwidth}
\includegraphics[width=1.0\columnwidth]{comparison3-11.jpg}
\subcaption{}
\end{minipage}
\begin{minipage}{0.48\columnwidth}
\includegraphics[width=1.0\columnwidth]{comparison3-12.jpg}
\subcaption{}
\end{minipage}
\caption{Comparison between vortex-core line visualizations (a) without spline interpolation (using only sample points) and (b) with spline interpolation.}
\label{fig:spline}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.93\columnwidth]{vortex-direction.jpg}
\caption{Determining the direction of a point on a vortex-core line. (a) By using the right-hand rule according to the circulation path, we can easily determine the positive or negative direction of a vortex. (b) Such a rule can be used to determine the (positive) direction of a vortex-core line.}
\label{fig:dir}
\end{figure}
\vspace{0.1cm}
\textit{Vectorization.}
To do the vectorization, we interpolate the ordered sample points for each vortex-core line.
The simplest method is by piecewise line segments, see Fig.~\ref{fig:spline} (a), but since the samples may not be dense enough, the vectorized vortex-core line may not be smooth.
Instead, we apply Catmull-Rom spline interpolation \cite{engel2006real} to obtain the final vectorization results, assuming sufficient smoothness among sample points, see Fig.~\ref{fig:spline} (b) and Fig.~\ref{fig:pipeline} (g).
It should be noted that since all the sample points are estimated from the surrounding local vortex nodes, the vectorized vortex-core lines are always enclosed by the vortex nodes, as illustrated in Fig.~\ref{fig:pipeline} (h).
The direction of the vortex-core line can also be easily obtained by the right-hand rule according to the circulation path of a nearby vortex node, see Fig.~\ref{fig:dir} (a).
This can be applied to every point on the vortex-core line to determine the positive or negative direction, see Fig.~\ref{fig:dir} (b) for an example of the positive direction of a vortex-core line.
It should be noted that our graph-based algorithm is specifically designed to facilitate the vectorization of quantum vortex-core lines with complex geometrical and topological structures, and, to our knowledge, no similar approach has been proposed among existing vortex-core line extraction methods for either classical or quantum fluids.
In addition, the interpolation used in vectorization is not restricted to the one we use and can have multiple choices: B\'ezier curves~\cite{kipfer2003local} and other types of spline curves such as B-spline~\cite{prautzsch2013bezier} could also be used.
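For completeness, a minimal sketch of uniform Catmull-Rom interpolation of an ordered, open sample-point list is given below; the re-sampling density is an illustrative parameter, and closed (Type-II) lines would wrap the end-point indices instead of clamping them:
\begin{verbatim}
#include <array>
#include <vector>

using P3 = std::array<double, 3>;

// Uniform Catmull-Rom segment between p1 and p2 (p0, p3 are the outer controls).
P3 catmull_rom(const P3& p0, const P3& p1, const P3& p2, const P3& p3, double t) {
  P3 out{};
  for (int a = 0; a < 3; ++a)
    out[a] = 0.5 * ((2.0 * p1[a]) + (-p0[a] + p2[a]) * t +
                    (2.0 * p0[a] - 5.0 * p1[a] + 4.0 * p2[a] - p3[a]) * t * t +
                    (-p0[a] + 3.0 * p1[a] - 3.0 * p2[a] + p3[a]) * t * t * t);
  return out;
}

std::vector<P3> vectorize_open_line(const std::vector<P3>& s,
                                    int samplesPerSegment = 8) {
  if (s.size() < 2) return s;
  std::vector<P3> curve;
  for (size_t i = 0; i + 1 < s.size(); ++i) {
    const P3& p0 = s[i > 0 ? i - 1 : i];               // clamp at open end points
    const P3& p3 = s[i + 2 < s.size() ? i + 2 : i + 1];
    for (int k = 0; k < samplesPerSegment; ++k)
      curve.push_back(catmull_rom(p0, s[i], s[i + 1], p3,
                                  k / (double)samplesPerSegment));
  }
  curve.push_back(s.back());                           // pass through the last sample
  return curve;
}
\end{verbatim}
The curve passes through every sample point, so the vectorized line stays enclosed by the surrounding vortex nodes, as illustrated in Fig.~\ref{fig:pipeline} (h).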
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{timing-report.pdf}
\caption{The plot of vectorization performance over time steps and across resolutions.}
\label{fig:timing-report}
\vspace*{-1.5mm}
\end{figure}
\begin{figure*}[t]
\centering
\begin{minipage}{0.24\textwidth}
\includegraphics[width=1.0\textwidth]{diving_camera-0.jpg}
\end{minipage}
\begin{minipage}{0.24\textwidth}
\includegraphics[width=1.0\textwidth]{diving_camera-1.jpg}
\end{minipage}
\begin{minipage}{0.24\textwidth}
\includegraphics[width=1.0\textwidth]{diving_camera-2.jpg}
\end{minipage}
\begin{minipage}{0.24\textwidth}
\includegraphics[width=1.0\textwidth]{diving_camera-3.jpg}
\end{minipage}
\vspace*{-1mm}
\caption{Interactive exploration of quantum turbulence structures. The camera is controlled by keyboard and mouse, and follows the path shown at the bottom-left corner of each snapshot image, which displays the visualization results of different views inside the vectorized quantum turbulence datasets. The rendering is very efficient for such interactive exploration, with the instant frame rates shown at the top-left corner of each snapshot image.
}
\label{fig:camera_diving}
\vspace*{-1.5mm}
\end{figure*}
\begin{figure*}[t]
\centering
\begin{minipage}{0.24\textwidth}
\includegraphics[width=1.0\textwidth]{multi-scale-1.png}
\subcaption{$\text{lengths} \in [3.0,6.0)$}
\end{minipage}
\begin{minipage}{0.24\textwidth}
\includegraphics[width=1.0\textwidth]{multi-scale-2.png}
\subcaption{$\text{lengths} \in [1.5, 3.0)$}
\end{minipage}
\begin{minipage}{0.24\textwidth}
\includegraphics[width=1.0\textwidth]{multi-scale-3.png}
\subcaption{$\text{lengths} \in [0.75, 1.5)$}
\end{minipage}
\begin{minipage}{0.24\textwidth}
\includegraphics[width=1.0\textwidth]{multi-scale-4.png}
\subcaption{$\text{lengths} \in [0.375, 0.75)$}
\end{minipage}
\vspace*{-1mm}
\caption{Multi-scale visualization of quantum turbulence, where vortex-core lines with their lengths within a certain range are selected for visualization. These ranges of lengths are measured with respect to the size of the simulation domain ($l=32$). Here, (a) to (d) show different ranges at small scales, where similar vortex-core line patterns can be found, which indicate the self-similarity property of steady homogeneous quantum turbulence.
}
\label{fig:multi-scale-vis}
\vspace*{-1.5mm}
\end{figure*}
\section{Results}
We implement the simulation of the NLKG equation and the vortex-core line vectorization in C++, parallelized with MPI and OpenMP~\cite{Dagum:1998:OIA:615255.615542}, on a Linux cluster system to prepare the high-resolution quantum turbulence datasets, where 8 computational nodes with 64 CPU cores are used.
Each CPU is an Intel Xeon E7-4850 v4 CPU (2.1 GHz), and the system memory is 2 TB in total.
Our vectorization compresses the original $2048^3$ dataset of 70 GB at each time step, which cannot be loaded into a normal GPU memory, to 20 MB by only storing the sample points before spline interpolation.
For online computational efficiency, we re-sample the spline curves into more points, occupying approximately 260 MB at steady state, which is still small enough even for a normal laptop PC to achieve real-time visualization performance.
The vectorization takes about 3 minutes on average for one static dataset, but can vary for simulation datasets at different time steps; it also varies across different resolutions. Fig.~\ref{fig:timing-report} shows a plot of vectorization performance over time steps and across data resolutions.
Note that our method could scale well for the number of vortex-core lines, since they are processed independently in parallel.
However, not all vortex-core lines have the same length, and longer vortex-core lines require more processing time, which affects the final performance.
The online real-time visualization is implemented using OpenGL shaders based on \cite{stoll2005visualization} to render line tubes by specifying an arbitrarily tunable radius, with the difference that each vertex of the line tube is color-coded by its global position as well as the tangent direction of the associated vortex-core line, showing the variation of locations and orientations.
Note that more recent rendering method by Kanzler et al.~\cite{kanzler2018voxel} could also be used for even better performance.
Such a visualization is run on a Linux workstation installed with an Intel Xeon E5-2650 v4 CPU (2.2 GHz with 24 cores), a 128 GB system memory and an NVIDIA GTX TITAN X GPU, where $60$ to $110$ frames per second is measured for visual interaction.
\begin{figure*}[t]
\centering
\begin{minipage}{0.24\textwidth}
\includegraphics[width=1.0\textwidth]{turbulence_formation-0.jpg}
\subcaption{time step=1000}
\end{minipage}
\begin{minipage}{0.24\textwidth}
\includegraphics[width=1.0\textwidth]{turbulence_formation-1.jpg}
\subcaption{time step=1500}
\end{minipage}
\begin{minipage}{0.24\textwidth}
\includegraphics[width=1.0\textwidth]{turbulence_formation-2.jpg}
\subcaption{time step=3000}
\end{minipage}
\begin{minipage}{0.24\textwidth}
\includegraphics[width=1.0\textwidth]{turbulence_formation-3.jpg}
\subcaption{time step=10000}
\end{minipage}
\vspace*{-1mm}
\caption{Visualization of the formation process over time for steady homogeneous quantum turbulence. With continuous random potential input as external excitations, vortex-core lines will be generated from a calm initial field ($\Phi(\mathbf{x},0)=1$). As time proceeds, they will undergo massive reconnections to form steady homogeneous quantum turbulence.
}
\label{fig:qt_formation}
\vspace*{-1.5mm}
\end{figure*}
\vspace{-0.1cm}
\subsection{Interactive visualization}
With real-time visualization, we can then design different interactions to support structure exploration, multi-scale and formation analyses, as well as visualization of individual vortex-core lines.
The following subsections discuss these interactions in detail.
\textit{Readers can refer to the supplementary video for the corresponding animations}.
\vspace{-0.1cm}
\subsubsection{Interactive exploration of quantum turbulence}
Given any vectorized frame of vortex-core lines, we can load and render them based on a specified radius.
Since our visualization is real-time, we can interactively explore the turbulence structures by changing the view positions and directions instantly with mouse and keyboard, which enables domain scientists to see interesting structures easily, like roaming in a virtual Universe.
Fig.~\ref{fig:camera_diving} shows such an example, where the small image at the bottom-left corner of each snapshot indicates the path as well as the location and direction of the view, while the top-left corner of each snapshot shows the corresponding instant frame rate.
\vspace{-0.1cm}
\subsubsection{Multi-scale visualization of quantum turbulence}
An interesting property of turbulence, including quantum turbulence, is the multi-scale nature, where fractal-like structures are usually observed in some ranges of scales in homogeneous fully developed steady-state turbulence datasets.
Thanks to our vectorization, it becomes very easy to measure the lengths of the vortex-core lines.
Thus, we can filter the vortex-core lines by specifying a specific range of lengths.
Fig.~\ref{fig:multi-scale-vis} shows an example of such a multi-scale visualization, where similar structures of different sizes can be found at small length scale, which has similar property as classical turbulence.
This may also help to further explain the fractal nature of quantum turbulence.
\begin{figure*}[t]
\centering
\begin{minipage}{0.24\textwidth}
\includegraphics[width=1.0\textwidth]{vortex_picking-SV-0.jpg}
\subcaption{}
\end{minipage}
\begin{minipage}{0.24\textwidth}
\includegraphics[width=1.0\textwidth]{vortex_picking-SV-1.jpg}
\subcaption{}
\end{minipage}
\begin{minipage}{0.24\textwidth}
\includegraphics[width=1.0\textwidth]{vortex_picking-SV-2.jpg}
\subcaption{}
\end{minipage}
\begin{minipage}{0.24\textwidth}
\includegraphics[width=1.0\textwidth]{vortex_picking-SV-5.jpg}
\subcaption{}
\end{minipage}
\vspace*{-1mm}
\caption{Vortex-core line selection and visualization. Given vortex-core lines vectorized from the simulation datasets, see (a), we can use mouse to select different individual vortex-core lines interactively, and highlight them for visualization. (b) to (d) are three individual vortex-core lines selected for visualization, which are rendered with different colors.}
\label{fig:vortex_line_picking}
\vspace*{-1.5mm}
\end{figure*}
\vspace{-0.1cm}
\subsubsection{Formation of steady quantum turbulence}
While the previous visualizations explore the dataset at one particular frame, we can gather all vectorized frames together and produce dynamic visualization results. This is achieved by designing a time slider, where users can slide over the time line and stop at a particular time; the system then loads the vectorized data from the hard disk into memory and renders the vortex-core lines immediately.
To maximize loading efficiency, we can maintain a memory cache and load multiple frames of data from the hard disk into the cache, like the paging scheme in an operating system, and render directly from the cache.
Fig.~\ref{fig:qt_formation} shows such a time-varying dynamic visualization to discover the quantum turbulence formation process.
From the visualization, it is interesting to see that under excitation of continuous random potential, small enclosed vortex-core lines will first be generated, which scatter over space, see Fig.~\ref{fig:qt_formation} (a).
As time proceeds, they will accumulate and reconnect with each other to form complex vortex-core lines, see Fig.~\ref{fig:qt_formation} (b).
Such a reconnection process becomes massive when a large amount of vortex-core lines are produced and the whole system starts to enter quantum turbulence state, see Fig.~\ref{fig:qt_formation} (c).
Since the random potential keeps producing small vortex-core lines that will merge into existing ones, it will not damage the whole turbulence distribution while compensating the vortex decay, see Fig.~\ref{fig:qt_formation} (d).
This formation process, as observed by our visualization, is different from classical turbulence.
Another benefit from our vectorization over time steps is that the change of overall length of vortex-core lines can be plotted, see the blue curve in Fig.~\ref{fig:reconnection-and-vortex-length}, which is useful for model verification by Vinen's equation~\cite{vinen1957mutual,schwarz1988three}.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\columnwidth]{reconnection-and-vortex-length.pdf}
\caption{The plot of total length of vortex-core lines and the reconnection event statistics (overall number) against the number of time-steps, as generated by automatically measuring the lengths and counting the number of reconnection events with our vortex-core line vectorization.}
\label{fig:reconnection-and-vortex-length}
\vspace*{-1.5mm}
\end{figure}
\begin{figure*}[t]
\centering
\begin{minipage}{0.33\textwidth}
\includegraphics[width=1.0\textwidth]{reconnection_point-SV-0.jpg}
\subcaption{}
\end{minipage}
\begin{minipage}{0.64\textwidth}
\includegraphics[width=1.0\textwidth]{reconnection_point-SV-1.jpg}
\subcaption{}
\end{minipage}
\vspace*{-1mm}
\caption{Reconnection event visualization. By identifying reconnection events from the reduced graph topology, we can highlight the reconnection events (with red spheres) to see their global spatial distributions, see (a). We can also interactively zoom in the view to see the local distribution of reconnection events (the orange box in (a)), as shown in (b).}
\label{fig:reconnnection}
\vspace*{-1.5mm}
\end{figure*}
\vspace{-0.1cm}
\subsubsection{Vortex-core line selection and visualization}
The previous visualization focuses on the global structure of quantum turbulence vortex-core lines.
However, it is still difficult to observe the geometric and topological structures of each independent vortex-core line, which are of interest to domain scientists for more detailed analysis, e.g., length, curvature, loops, knots, as well as fractal geometry.
These properties are still far from fully understood and require further investigation.
Thus, visualizing and highlighting independent vortex-core lines is very attractive.
To achieve this goal, we employ two passes.
In the first pass, we assign the vertices on each independent vortex-core line a gray color, determined by normalizing their line indices into the range $[0,1]$, and perform off-screen rendering into a texture without lighting.
When selecting, the gray value at the clicked pixel can be converted back to the index of the vortex-core line nearest to the view.
In the second pass, we highlight the selected vortex-core line, rendering it with higher opacity and a different color.
Fig.~\ref{fig:vortex_line_picking} shows such an example of interactively selecting and visualizing different independent vortex-core lines.
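Only the index--gray conversion is essential to this two-pass scheme and is sketched below; the off-screen framebuffer setup and pixel read-back are omitted, and with an 8-bit render target this simple encoding distinguishes at most 256 lines, so a higher-precision target or an RGB encoding would be needed in practice (assumptions about the setup, not a description of our renderer):
\begin{verbatim}
#include <cmath>

// First pass: shade each line with index_to_gray(i, numLines); selection pass:
// read the pixel under the mouse and recover the line with gray_to_index().
float index_to_gray(int lineIndex, int numLines) {
  return (lineIndex + 0.5f) / (float)numLines;         // centre of the index bin
}

int gray_to_index(float gray, int numLines) {
  int i = (int)std::floor(gray * numLines);
  return i < 0 ? 0 : (i >= numLines ? numLines - 1 : i);
}
\end{verbatim}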
\vspace{-0.1cm}
\subsection{Reconnection event visualization}
Finally, we turn our focus to the massive vortex-core line reconnection events (where reconnection happens) in quantum turbulence datasets, which are important and expected to further explain some physical behaviors of quantum turbulence.
To visualize the reconnection events, we first identify them by checking the connectivity of each point on the final reduced graph, where a point with connectivity larger than 2 (branch point) is identified as one reconnection event.
After all reconnection events are identified, we can render a colored sphere over these points to highlight reconnection events, see Fig.~\ref{fig:reconnnection}, where we first visualize the reconnection events globally to see the overall distributions, and then zoom-in interactively to see their local relations.
Such a capability also produces a plot of the statistics (overall number) of vortex-core line reconnection events during quantum turbulence evolution, which is shown by the red curve in Fig.~\ref{fig:reconnection-and-vortex-length}.
Note that both overall lengths of vortex-core lines and their reconnection event statistics have similar behavior over time and they remain stable after a certain time period of evolution.
\section{Discussions}
\label{sec:dis}
\section{Conclusion}
In this paper, we propose a quantum turbulence vortex-core line vectorization algorithm, with iterative graph reduction and density-guided local optimization, for real-time visualization of high-resolution ($2048^3$) quantum turbulence datasets, where different types of interactions are designed to aid the visual analysis of quantum turbulence vortices by domain scientists.
This work is the first in the literature to achieve real-time visualization performance at such a high resolution to facilitate explorative visual study.
With our real-time visualization, some visual analysis has been conducted.
We believe that the pictures and animations we produced from our visualization technique as well as the software tools we provided may give some reference and guidance for both theoretical and experimental research on quantum turbulence in the future.
\section{Introduction}
\label{sec01}
In binary systems, by accreting mass from their companion stars, highly magnetized ($B\gtrsim$ {\DP{12}} G), rotating neutron stars (NSs) are observed as X-ray pulsars. The interaction between the accretion flow and the magnetic field is one of the key issues in X-ray astronomy. On one hand, the accretion flow tends to be channeled onto the polar cap of the NS along the field lines, and a hotspot or thermal mound is formed, which contributes a large fraction of the X-ray emission \citep[][]{BeckerW07}. On the other hand, due to the strong field, the energies of electrons are quantized into Landau levels \citep[][]{Meszaros92}
\begin{eqnarray}
E_{n}=n\frac{e{B}\hbar}{m_{\rm e}c} \approx 11.6{\,\rm keV}\cdot{n}B_{\rm 12},\label{eqn01}
\end{eqnarray}
where $m_{\rm e}$ is the mass of an electron, $n=$1, 2, 3 $\ldots$ and $B_{\rm 12}=B/10^{12} {\,\rm G}$. The cyclotron resonant scattering feature (CRSF, or commonly referred to as `cyclotron line') of the outgoing photons with these quantized electrons happens in the line-formation region, which naturally explains the absorption features observed in the energy spectra of $\sim$ forty X-ray pulsars \citep[e.g., ][]{RevnivtsevM15, StaubertTK19, TruemperPR78, WalterLB15, WheatonDP79}. If the gravitational redshift is considered, the observed energy of the cyclotron line should be $E_{\rm cyc}=E_{n}/(1+z)$, where $z$ is the gravitational redshift around the NS. At present, it is the only way to directly measure the magnetic field of NSs. However, the exact location of the line-formation region is still under discussion, and the cyclotron line is occasionally absent in some pulsars. These issues inhibit us from revealing the evolution of $E_{\rm cyc}$ with the X-ray luminosity \citep[e.g., ][]{DoroshenkoTME17, StaubertKE17, StaubertTK19, TsygankovLCS07}.
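As a simple illustration of how Eq.~(\ref{eqn01}) is used (the redshift adopted below is only a representative value, not a measurement), an observed fundamental line at $E_{\rm cyc}\approx 11$ keV together with a typical NS gravitational redshift $z\approx 0.3$ implies
\begin{eqnarray}
B_{12}\approx\frac{E_{\rm cyc}(1+z)}{11.6\,{\rm keV}}\approx\frac{11\times 1.3}{11.6}\approx 1.2,
\end{eqnarray}
i.e., a field of the order of {\DP{12}} G, while higher harmonics are expected near integer multiples of the fundamental energy.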
Furthermore, besides the fundamental line, multiple harmonics were exhibited in the spectra of several X-ray pulsars, e.g., 4U 0115+63 \citep{SantangeloSG99}, V 0332+53 \citep{TsygankovLCS06}, and GRO J2058+42 \citep{MolkovLT19}. From these harmonics, we can infer the fundamental line by calculating the minimal energy gap between them, which is of vital importance to the measurements of the magnetic fields and further studies on X-ray pulsars. In our work, we mainly study the multiple absorption features of 4U\,0115+63.
4U\,0115+63, located at $\sim 7$ kpc \citep{NegueruelaO01,BailerJRFE18}, is a Be/X-ray binary \citep[][]{JohnsKCM78}, and its orbital parameters have been measured \citep[orbital period $P_{\rm orb}\sim$ 24.3 d, eccentricity $e\sim$ 0.34,][]{RappaportCCL78}. It consists of a $\sim$ 3.61 s accreting pulsar \citep[][]{CominskyCLM78} and a Be star. Like other transient Be/X-ray binaries, 4U\,0115+63 has undergone several outbursts, during which the fundamental line and several harmonics were detected \citep[e.g., ][]{BoldinTL13, FerrignoBSM09, LiWZ12, SantangeloSG99, TsygankovLCS07}. In particular, a peculiar cyclotron line near 15 keV was, for the first time, detected simultaneously with those near 11, 20 and 33 keV, and was considered to be another fundamental line \citep[][]{IyerMD15}. If two different fundamental lines are detected simultaneously, it is unclear where they are produced, and from which one the magnetic field can be measured. However, the robustness of the peculiar absorption feature needs further confirmation. On one hand, the signal-to-noise ratio (S/N) of the observation data, obtained from the joint observations of {\it Suzaku} and {\it RXTE}, is somewhat low, since different systematic errors introduce extra uncertainties. On the other hand, the 11 keV line is detected at a low level of significance (3.5$\sigma$), which makes the coexistence of the two line sets doubtful. That is, observations of both line sets with a higher S/N from the same telescope are needed.
In this work, by analyzing two pointing observations of 4U 0115+63 during the 2015 outburst, obtained by the Nuclear Spectroscopic Telescope Array ({\it NuSTAR}), we try to study the complicated cyclotron lines. {\it NuSTAR} performs X-ray observations in 3$-$79 keV, with an energy resolution of 400 eV (full width at half maximum) at 10 keV and 900 eV at 68 keV. Its inherently low background allows the telescope to detect hard X-ray sources with a sensitivity about 100 times higher than that of other instruments \citep[][]{HarrisonCE13}. Its time resolution (2 $\mu$s) and dead time (2.5 ms) further allow us to obtain observation data with high quality and to analyze pulse phase-resolved energy spectra conveniently. Thus, these two {\it NuSTAR} observations are helpful to confirm the robustness of the peculiar CRSF and reveal its nature. From phase-dependent equivalent widths of the measured CRSFs and the physical model of the accretion column \citep{BeckerW07, BeckerKSE12}, we can infer the formation processes of these lines. In Section \ref{sec02}, we describe the observation and data reduction. In Section \ref{sec03}, we present details of the spectral fitting, and test the robustness of the peculiar cyclotron line. In Section \ref{sec04}, we try to explain the nature of these confusing absorption lines, and summarize our work.
\section{Observation and Data Reduction}
\label{sec02}
During the 2015 outburst of 4U 0115+63, {\it NuSTAR} performed Target of Opportunity (ToO) observations of 4U\,0115+63 on 2015 October 22 and 30 (ObsIDs 90102016002 and 90102016004, hereafter abbreviated as ObsIDs 002 and 004), respectively. The data are obtained with the two focal plane module telescopes (FPMA and FPMB). The net exposure times of the two observations are $\sim$ 8.6 and 14.6 ks, respectively. Figure \ref{fig01} displays the timing of these ToO observations within the 2015 outburst (as monitored by {\it Swift}/BAT): the two {\it NuSTAR} observations were performed near the peak and during the decay of the outburst, respectively.
\clearpage
Following the analysis guide of {\it NuSTAR} data, we first employ the {\tt nupipeline} task (v 0.4.6) of the software {\tt NUSTARDAS} (v 1.8.0, packaged in {\tt HEASOFT} v 6.23) to filter and calibrate the event data, using {\it NuSTAR} Calibration data base (CALDB; released on Oct. 22 2018)\footnote{Here we execute the tasks of {\tt nupipeline} and {\tt nuproducts} two times. In the repeated procedure, the keyword {\tt statusexpr} of ``STATUS==b0000xxx00xxxx000" is considered since in the preliminary processing the incident rate of the light curves with binsize=1 s exceeds 100 counts per second (see \url{https://heasarc.gsfc.nasa.gov/docs/nustar/nustar_faq.html}).}. Secondly, we utilize the {\tt nuproducts} task to extract source (and background) light curves and spectra from the cleaned FPMA and FPMB data, and response files. We obtain the source products using a circular region of 160 arcsec around the source position, and the background products using a circular region of 130 arcsec around a position away from the source region. Thirdly, we group the FPMA and FPMB energy spectra via the {\tt grppha} task with $\geq$ 50 counts per channel bin.
\section{Spectral Analysis}
\label{sec03}
\subsection{Spectral Fitting}
\label{sec03.1}
To study the cyclotron absorption of the Be/X-ray binary 4U 0115+63 in the giant outburst, we analyze the phase-averaged spectra in 3$-$79 keV. For each of ObsIDs 002 and 004, the FPMA and FPMB energy spectra are fitted together, and cross-normalization factors are adopted, i.e., the factor for FPMA is fixed at 1.0, and that for FPMB is free \citep[with uncertainty of 1$-$2\%, in good agreement with][]{MadsenHM15}. All other parameters are tied between the two spectra.
The broad-band spectra are first fitted with commonly used models. To account for the interstellar absorption, we adopt the {\tt TBabs} model in {\tt XSPEC} (v 12.10.0), setting the abundances and cross-sections in accordance with \citet{WilmsAM00} and \citet{VernerY95}, respectively. Since calibrated spectra below 3 keV are not available here, we fix the absorption column ($N_{\rm H}$) at {1.2\TDP{22}} cm$^{-2}$ in all fits unless specified otherwise \citep[$\sim (0.5-2)$ {\TDP{22}} cm$^{-2}$, see][]{FerrignoBSM09, IyerMD15, TsygankovLD16}. The nearly simultaneous {\it Swift}/XRT observations (on October 21, 23 and 30) are not considered in our work, because $N_{\rm H}$ cannot be well determined from a joint spectral fit: the calibrations of the two telescopes are not fully consistent, and their different scattering halos produce different values of $N_{\rm H}$\footnote{See \url{http://iachec.scripts.mit.edu/meetings/2019/presentations/WGI_Madsen.pdf}}. To fit an absorption feature, one can use a multiplicative gaussian \citep[in {\tt XSPEC} we define a multiplicative model {\tt mgabs} using the function \ensuremath{1-\tau\exp[-\frac{(E-E_{\rm cyc})^2}{2\sigma^2}]}; see][]{MiharaMOS90, FerrignoBSM09, DoroshenkoTME17, StaubertTK19}, a pseudo-Lorentzian profile ({\tt cyclabs}), or a Gaussian optical-depth profile ({\tt gabs}) \citep[e.g., ][]{NakajimaMM10, MullerFK13, StaubertTK19}. Here we adopt only the {\tt mgabs} model, since the other two may shift the measured line centroid or leave residuals around the fitted line \citep[see discussions in][]{DoroshenkoTME17}.
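For illustration only, the multiplicative absorption profile adopted for each CRSF can be sketched numerically as follows (a minimal Python sketch with placeholder parameter values, not the fitted ones; the actual fits are performed in {\tt XSPEC}):
\begin{verbatim}
# Minimal sketch of the `mgabs'-type multiplicative absorption factor:
# the continuum is multiplied by 1 - tau*exp(-(E - E_cyc)^2 / (2 sigma^2)).
# All parameter values below are placeholders, not measured quantities.
import numpy as np

def mgabs(E, tau, ecyc, sigma):
    """Dimensionless multiplicative Gaussian absorption factor."""
    return 1.0 - tau * np.exp(-(E - ecyc) ** 2 / (2.0 * sigma ** 2))

E = np.linspace(3.0, 79.0, 500)          # keV, the NuSTAR band
absorbed = np.ones_like(E)
# Several CRSFs are applied as a product of factors, one per line:
for ecyc in (12.0, 16.0, 23.0, 35.0, 48.0):
    absorbed *= mgabs(E, tau=0.1, ecyc=ecyc, sigma=2.5)
\end{verbatim}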
First, we fit the spectra of ObsID 002 with a {\tt compTT} model. As shown in the left panel `a' of Fig. \ref{fig02}, a significant soft excess is always detected, even when different values of $N_{\rm H}$ ($\sim (0.5-2)${\TDP{22}} cm$^{-2}$) are considered. Adding a low-temperature {\tt bb} component significantly improves the fit ($\Delta \chi^{2} \gtrsim 11000$). An emission line near 6.5\,keV, three narrow absorption lines ($\sim$ 23, 35 and 48\,keV) and a broad absorption feature ($\sim 10-20$\,keV) then become apparent in the residuals (see the left panel `b'). We therefore add a {\tt gaussian} and four {\tt mgabs} components to fit these features, which leads to a substantial decrease in $\chi^{2}$ (see the left panel `c'). Upon closer inspection, we find two small dips in the residuals near 12\,keV and 16\,keV, which are likely related to the poor modelling of the broad absorption feature in $10-20$\,keV; we suspect that there are two narrow absorption lines there rather than a single broad one. We therefore add another line to the fit and finally obtain a very good fit (see the left panel `d' and the right panel of Fig. \ref{fig02}), with $\chi^2/{\rm dof}=1828/1731$ (dof: degrees of freedom) and $\Delta\chi^2 \sim 100$. In summary, in ObsID 002 we detect four harmonically related lines ($\sim$ 12, 23, 35 and 48 keV) and a peculiar one \citep[$\sim$ 16 keV, not a harmonic component of the 12 keV line; see also][]{RoyAIB19}, even when different values of $N_{\rm H}$ ($\sim(0.5-2)$ {\TDP{22}} cm$^{-2}$) are adopted. In particular, our further analysis rules out the possibility that the residual near 12 keV is caused by the {\it NuSTAR} calibration issue \citep[for more details see][]{DoroshenkoPDS20}. If each CRSF is instead fitted with the {\tt cyclabs} or {\tt gabs} model, the absorption feature near 16 keV is still identified.
We then repeat the above procedure using several power-law based (phenomenological) models \citep[see][]{MullerFK13, StaubertTK19}, e.g., a simple cutoff power law ({\tt cutoffpl}), a power law with a high-energy cutoff (HEcut), one with a `Fermi-Dirac' cutoff (FDcut), and a sum of a negative and a positive power law multiplied by an exponential cutoff (NPEX). The anomalous CRSF near 16 keV is still detected with any of these power-law based models (see Fig. \ref{fig03}), even when the {\tt cyclabs} or {\tt gabs} model is applied. If the residual near 16 keV is additionally fitted as a CRSF, the fit improves markedly compared with that in Fig. \ref{fig03}: the $\chi^2$ of the {\tt cutoffpl}, NPEX, HEcut and FDcut based fits is reduced by $\sim$ 109, 131, 53 and 357, respectively.
Similarly, in ObsID 004, aside from the harmonic CRSFs ($\sim$ 12, 22 and 33 keV), a peculiar line near 16 keV is again detected (see Tbl. \ref{tab01}) with each of the models used for ObsID 002.
However, the absorption feature at $\sim$ 16 keV might instead be related to the so-called ``10 keV feature'', which is usually described by a wide-{\tt gaussian} profile \citep[e.g., ][]{Coburn01, FerrignoBSM09, MullerFK13, FarinelliFBB16, StaubertTK19, neKuhnelKFE20} or a {\tt compTT} component \citep[e.g.,][the latter is not applicable to 4U 0115+63, since the ``10 keV feature'' still exists in our tentative fits]{TsygankovDM19, TsygankovES19}. Indeed, if each of the above fits includes an additive wide-{\tt gaussian} component (see Tbl. \ref{tab01}), the peculiar absorption line at $\sim$ 16 keV is no longer detected (see also Fig. \ref{fig04}), and the detected absorption lines are then all harmonically related, i.e., $\sim$ 12, 23 and 33/35 keV. Although the fit including a wide-{\tt gaussian} component yields a slightly smaller $\chi^2$/dof, the physical origin of this component is unknown, apart from possibly reflecting the distribution of the thermal atoms \citep[e.g., ][]{StaubertTK19}. In the following, we therefore treat the residual near 16 keV as a CRSF, unless future observations provide astrophysical evidence supporting the wide-{\tt gaussian} interpretation.
\subsection{Robustness of the 16 keV line}
\label{sec03.2}
In the analyses above we have described the detection of multiple cyclotron lines in 4U 0115+63; in particular, an anomalous CRSF ($\sim$ 16 keV) is detected in both {\it NuSTAR} observations. Similar multiple CRSFs (especially the one near 15 keV) were reported for the 2011 outburst by \citet{IyerMD15}, but thanks to the higher performance of {\it NuSTAR} the CRSFs in our work are measured with a higher S/N. Clearly, the 16 keV line cannot be treated as a harmonic of the 12 keV line or as the fundamental of the 23 keV line. The cyclotron line near 16 keV therefore appears to be a distinct fundamental line, unless an absorption line near 5 keV exists\footnote{Some weak absorption-like residual near 5 keV appears in the spectrum of ObsID 002, regardless of the fitting model (see Figs. \ref{fig02}$-$\ref{fig04}). According to our further spectral analysis, we cannot treat this residual as an absorption feature, since including it yields no significant improvement ($\Delta\chi^2 \sim 20$, see left panels `d' and `e' of Fig. \ref{fig02}) and the width of the 4.6 keV line is $\lesssim$ 0.1 keV. In addition, the small residual may be related to calibration uncertainties \citep[][]{MadsenHM15, MadsenFE17}.}, or the 16 keV line is a minor by-product of the 12 keV line. The nature of these two sets of cyclotron lines should therefore be studied, e.g., whether or not they are formed in the same region. Moreover, the cyclotron line near 35 keV (48.5 keV) may be the first (second) harmonic of the 16 keV line as well as the second (third) harmonic of the 12 keV line. In Section \ref{sec04.1}, by analyzing the phase-dependent equivalent widths of these cyclotron lines together with the physical model of the accretion column, we address these questions.
Before modelling the 12 and 16 keV lines, we check the robustness of their detection. By fitting only four (three) CRSFs in ObsID 002 (004), we can roughly estimate their significance from the observational data. The fit prefers CRSFs near 12 and 16 keV over a purely harmonic set (i.e., $\sim$ 12 and 23 keV), since the former scheme yields a smaller $\chi^2$/dof or a more satisfactory set of parameters. For example, with the {\tt compTT}-based model for ObsID 002, we obtain $\chi^2$/dof of 1851.3/1734 when fitting lines at 11.9, 16.2, 23.2 and 34.8 keV, versus 1924.78/1734 when fitting lines at 13.3, 20.8, 35.1 and 47.5 keV. The absorption features near 12 and 16 keV therefore should not be ignored (see also the right panel of Fig. \ref{fig02}).
To further confirm the robustness of the CRSF near 16 keV, we perform the following simulations and fitting statistics. In brief, the 16 keV line is excluded from the model used to produce the simulated spectra, and we then test how often statistical fluctuations alone lead to its spurious detection. The details are as follows. (i) Based on the physical model {\tt compTT+gauss+bb} (modified by {\tt TBabs} and four {\tt mgabs} components), we simulate {\DP{5}} spectra with the {\tt fakeit} command in the {\tt XSPEC} package. We use the response matrix files and ancillary response files (for FPMA and FPMB) of ObsID 002 and the best-fitting parameters of the spectrum in Tbl. \ref{tab01} (except those of the 16 keV line). The exposure times of ObsID 002 (8.58 and 8.88 ks for FPMA and FPMB, respectively) and statistical (Poisson) noise are used to produce the simulated source/background spectra. (ii) Applying the model {\tt compTT+gauss+bb} absorbed by {\tt TBabs} and four {\tt mgabs} components, we fit the simulated spectra and record their $\chi^2$. We then refit the spectra after adding another {\tt mgabs} component with an initial $E_{\rm cyc}=16.20$ keV (allowed to vary between 15 and 17 keV) and a line width fixed at 3.17 keV, and calculate $\Delta\chi^2$ with respect to the former fit. The number distribution of these $\Delta\chi^2$ values is shown in panel `a' of Fig. \ref{fig05}. As illustrated there, none of the simulated spectra yields a $\Delta\chi^2$ close to the observed value ($\sim -103.8$). That is, given the detection of the four harmonically related lines ($\sim$ 12, 23, 35 and 48 keV) in the observed data, the probability that the 16 keV line is spurious is lower than {\DP{-5}}.
We then repeat the above simulation to verify the robustness of the 12 keV line, and obtain a similar $\Delta\chi^2$ distribution (panel `b' of Fig. \ref{fig05}). In the 11$-$13 keV band of the simulated spectra, no trial 12 keV line is detected with a $\Delta\chi^2$ close to the observed value ($\sim -173.0$). Therefore, the probability that the 12 keV line is detected simultaneously with the other four cyclotron lines ($\sim$ 16, 23, 35 and 48 keV) in the observed data is greater than 99.999\%.
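The logic of this test can be conveyed by the following highly simplified, self-contained sketch (our own toy version in Python, with a generic cutoff power-law continuum, no instrument response, and far fewer realizations; the actual test uses {\tt fakeit} with the full best-fitting model, the {\it NuSTAR} response files and exposures, and {\DP{5}} simulated spectra):
\begin{verbatim}
# Toy Monte Carlo: simulate spectra WITHOUT a 16 keV line, refit them with
# and without an extra absorption line, and record Delta chi^2.  Everything
# here (continuum shape, parameter values, grid) is a placeholder.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
E = np.linspace(3.0, 79.0, 400)                        # keV

def continuum(E, norm, gamma, efold):
    # Cutoff power law standing in for the best-fitting continuum.
    return norm * E ** (-gamma) * np.exp(-E / efold)

def with_line(E, norm, gamma, efold, tau, ecyc, width=3.17):
    # Continuum times an mgabs-like line with the width fixed at 3.17 keV.
    gauss = np.exp(-(E - ecyc) ** 2 / (2.0 * width ** 2))
    return continuum(E, norm, gamma, efold) * (1.0 - tau * gauss)

true_pars = (2.0e3, 0.6, 15.0)                         # no 16 keV line injected
model_counts = continuum(E, *true_pars)

delta_chi2 = []
for _ in range(500):                                   # 1e5 in the real test
    counts = rng.poisson(model_counts).astype(float)   # Poisson noise
    err = np.sqrt(np.maximum(counts, 1.0))
    try:
        p_c, _ = curve_fit(continuum, E, counts, p0=true_pars, sigma=err)
        p_l, _ = curve_fit(with_line, E, counts, sigma=err,
                           p0=(*p_c, 0.02, 16.2),
                           bounds=([0.0, 0.0, 1.0, 0.0, 15.0],
                                   [np.inf, 3.0, 100.0, 1.0, 17.0]))
    except (RuntimeError, ValueError):
        continue                                       # skip rare failed fits
    chi2_c = np.sum(((counts - continuum(E, *p_c)) / err) ** 2)
    chi2_l = np.sum(((counts - with_line(E, *p_l)) / err) ** 2)
    delta_chi2.append(chi2_l - chi2_c)

# If no simulated Delta chi^2 approaches the observed value (about -104),
# the chance that noise alone mimics the 16 keV feature is < 1/N_sim.
print(min(delta_chi2))
\end{verbatim}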
\section{Discussion and conclusion}
\label{sec04}
In 4U 0115+63, two cyclotron lines near 12 keV and 20 keV were first detected by {\it HEAO} in the 1978 outburst \citep{WheatonDP79, WhiteSH83} and were detected again in the 1990 outburst \citep[by {\it GINGA},][]{MiharaMN98}, while a single line around 17 keV appeared in the 1991 outburst. In different epochs of the 1999 outburst, the second ($\sim$ 33 keV), third ($\sim$ 49 keV) and fourth ($\sim$ 57 keV) harmonics were detected by the {\it Rossi X-Ray Timing Explorer} \citep{HeindlCGE99}, {\it BeppoSAX} \citep{SantangeloSG99} and {\it BeppoSAX} \citep{FerrignoBSM09}, respectively. These absorption lines were further confirmed in additional outbursts observed with other detectors \citep[e.g.,][]{BoldinTL13, LiWZ12}. In the 2011 outburst, among the detected CRSFs ($\sim$ 11, 15, 20 and 33 keV), the 15 keV line is not a harmonic of the 11 keV line \citep{IyerMD15}, and a similar situation occurred again in the 2015 outburst \citep[see this work and][]{RoyAIB19}.
To reveal the nature of the complicated cyclotron lines observed in the 2015 outburst, we study the pulse phase-resolved spectra. Based on the {\it NuSTAR} observations, we extract phase-resolved spectra in 10 phase bins of equal width and fit them with the {\tt bb+gauss+compTT} based model. During the fitting, we assume that all CRSFs are present in each bin and fix their widths to the values obtained from the phase-averaged spectra, and we also fix $\sigma_{\rm Fe}$ accordingly (see Tbl. \ref{tab01}). In a few cases we also fix the line energy when its upper and lower limits cannot be constrained simultaneously (see the data point with a downward arrow in Fig. \ref{fig06}). We then calculate the phase-dependent equivalent widths (EWs) of the CRSFs (see Fig. \ref{fig06}; since the line near 48 keV is not detected in ObsID 004, we only study the other CRSFs) using the following equation (based on the {\tt mgabs} model),
\begin{eqnarray}
{\text{EW (keV)}}= \int^{E_{\rm 2}}_{E_{\rm 1}} \tau\exp[-\frac{(E-E_{\rm cyc})^2}{2\sigma^2}]{\rm d}E,\label{eqn02}
\end{eqnarray}
where $\tau$ and $\sigma$ are the line depth and width, respectively, and $E_{\rm 1}$ and $E_{\rm 2}$ are the integration limits.
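A minimal numerical sketch of this integral (with placeholder parameter values, not the measured ones) is given below; for a line that is narrow compared with the integration band, the EW approaches $\tau\sigma\sqrt{2\pi}$.
\begin{verbatim}
# Numerical sketch of the EW integral above; parameter values are placeholders.
import numpy as np
from scipy.integrate import quad

def equivalent_width(tau, ecyc, sigma, e1=3.0, e2=79.0):
    """EW (keV) = integral over [e1, e2] of tau*exp(-(E-ecyc)^2/(2 sigma^2))."""
    integrand = lambda E: tau * np.exp(-(E - ecyc) ** 2 / (2.0 * sigma ** 2))
    ew, _ = quad(integrand, e1, e2)
    return ew

print(equivalent_width(tau=0.2, ecyc=16.0, sigma=3.0))   # numerical value
print(0.2 * 3.0 * np.sqrt(2.0 * np.pi))                  # narrow-line limit
\end{verbatim}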
For further discussion, we select two energy-dependent pulse profiles (8$-$14 and 14$-$19 keV), each of which is affected by cyclotron absorption (the profiles without and with the cyclotron absorption are shown in panels `N' and `a' of Fig. \ref{fig06}, respectively). In each pulse phase, using the {\tt cflux} convolution model in {\tt XSPEC}, we first calculate the flux ($F_{\rm 1}$, e.g., in 8$-$14 keV) without the cyclotron absorption (e.g., the 12 keV line) and the flux ($F_{\rm 2}$) with the absorption. Multiplying the count rate in each phase by the factor $F_{\rm 1}/F_{\rm 2}$ then yields the 8$-$14 keV pulse profile without the cyclotron absorption; the absorption-free profile in 14$-$19 keV is obtained in the same way. We also plot the phase-dependent $E_{\rm cyc}$ of the 12 keV and 16 keV lines in panel `d'.
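This per-phase rescaling can be summarized by the short sketch below (the numbers are placeholders; the actual $F_{\rm 1}$ and $F_{\rm 2}$ values come from the phase-resolved {\tt cflux} calculations described above):
\begin{verbatim}
# Per-phase absorption correction of a pulse profile (placeholder numbers).
import numpy as np

rate_obs = np.array([1.0, 1.2, 1.5, 1.9, 2.1, 2.0, 1.6, 1.3, 1.1, 1.0])      # cts/s
F1 = np.array([1.10, 1.12, 1.16, 1.21, 1.24, 1.22, 1.18, 1.14, 1.11, 1.10])  # no line
F2 = np.array([1.00, 1.01, 1.03, 1.06, 1.08, 1.07, 1.04, 1.02, 1.01, 1.00])  # with line

rate_corrected = rate_obs * F1 / F2   # profile with the cyclotron absorption removed
\end{verbatim}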
As depicted in Fig. \ref{fig06}, both $E_{\rm cyc}$ (which varies with phase by up to 30\%) and the EWs show significant pulse-phase dependence. Every CRSF is detected at most pulse phases. The phase-dependent $E_{\rm cyc}$ and the corresponding pulse profile peak at almost the same phase, similar to the behaviour of Her X-1 \citep[][]{StaubertKE14}. The phase-dependent EWs are summarized as follows.
\begin{enumerate}[1)]
\item{The line-formation region of the 12 keV (16 keV) line is primarily observed at $\phi \sim$ 0.6$-$0.9 (0.2$-$0.5), where the pulse profile in the 8$-$14 keV (14$-$19 keV) band has a hump and displays significant cyclotron absorption (see panels `N' and `a' of Fig. \ref{fig06}). At $\phi \sim$ 0.9$-$1.1, where no obvious pulse is detected, the 16 keV line shows another EW peak (e.g., in ObsID 004). Additionally, the EW of the 16 keV line is larger than that of the 12 keV line.}
\item{The 22 keV line, as the first harmonic of the 12 keV line, reaches its peak EW at $\phi \sim$ 0.2$-$0.5 and 0.6$-$0.9. Notably, at $\phi \sim$ 0.2$-$0.5 the significant absorption near 22 keV contrasts with the weakness of the 12 keV line (see ObsID 004), unlike the situation at $\phi \sim$ 0.6$-$0.9.}
\item{The line near 33/35 keV, as the second (first) harmonic of the 12 keV (16 keV) line, displays very strong absorption at $\phi \sim$ 0.1$-$0.2, 0.5$-$0.6 and 0.8$-$0.9, and very weak absorption at $\phi \sim$ 0.2$-$0.5. At the phases where the strong absorption near 33/35 keV appears, no component of the 12 keV or 16 keV line set reaches its peak EW.}
\end{enumerate}
Therefore, we conclude that two sets of cyclotron lines (with fundamentals at $\sim$ 12 keV and 16 keV, respectively) coexist in the 2015 outburst. First, the formation of the 12 keV and 16 keV lines is not affected by each other: to date there is no evidence for an absorption feature near 5 keV (i.e., the two lines are not harmonically related), nor that the 16 keV line is a by-product of the 12 keV line (see the discussion in Sec. \ref{sec03.2}). Secondly, the 12 keV and 16 keV lines should be formed in different regions, since their EWs show different pulse-phase dependences (see Fig. \ref{fig06}).
We also estimate the systematic uncertainty related to fixing the cyclotron line widths. In the spectral fitting, we first allow each line width to vary between 0.9 and 1.1 times the value obtained from the phase-averaged spectrum; after obtaining the best-fitting width, we fix it and calculate the errors of the other CRSF parameters. The resulting EWs change by $\lesssim$ 30\% in most phases, and the main conclusions above are hardly affected.
In the following, we further discuss the formation of these different line sets using the physical model of the accretion column.
\subsection{About the absorption feature near 16 keV}
\label{sec04.1}
In the previous work analyzing the 2011 outburst \citep{IyerMD15}, the complicated cyclotron lines were classified into two different line sets believed to be formed in different regions, which is consistent with our analysis. Those authors proposed two possibilities to explain the detected lines.
The first scenario relies on the dipolar structure of the NS magnetic field, with the 11 and 15 keV line sets formed in the pencil beam (at the top of the shock) and in the fan beam (at the base of the hot mound), respectively. Such a model can describe the formation of the two possible sets of cyclotron lines in GX 301$-$2 \citep[][]{FurstFME18}. However, the model is inconsistent with the bright outburst considered here \citep[see Eq. (32) in][]{BeckerKSE12}\footnote{For ObsIDs 002 and 004, the 0.1$-$100 keV luminosities are, respectively, {1.08\TDP{38}\,{\ensuremath{\rm erg\,{s}^{-1}}}}$(d/\text{7 kpc})^2$ and {7.28\TDP{37}\,{\ensuremath{\rm erg\,{s}^{-1}}}}$(d/\text{7 kpc})^2$. In the 2011 outburst, the 3$-$50 keV luminosities were $\gtrsim$ {2\TDP{37}\,{\ensuremath{\rm erg\,{s}^{-1}}}}.}. First, a pencil beam cannot form at the top of the accretion column, where upward-going photons are trapped by the advection of the accretion flow. Moreover, the column top \citep[$\sim$ 10 km, see Eq. (16) in][]{BeckerKSE12} lies higher than the location ($\sim$ 1.0 km, assuming $B \propto R^{-3}$ and $E_{\rm cyc}\simeq$ 16 keV on the NS surface, where $R$ is the distance to the NS center) at which the 12 keV line set is formed. Therefore, in the bright state it is not reasonable to have two different line-formation regions on the same pole.
In the second scenario, \citet{IyerMD15} supposed that the two sets of CRSFs are formed at the two poles of an NS with a nondipolar magnetic field, i.e., a two-pole CRSF model (TPCM). The viability of TPCM depends on two key points. First, the magnetic fields of the two poles should be different, i.e., the field should be nondipolar. By decomposing the energy-dependent pulse profiles of 4U 0115+63 at different luminosity levels, \citet{SasakiMKF12} found that the magnetic axes of the two poles are misaligned (offset by $\sim$ 60$^{\rm o}$). In such a distorted configuration, it is reasonable to assume that the local magnetic fields of the two poles differ. Secondly, the two groups of cyclotron absorption features should be produced at the two poles, respectively. Given that each energy-dependent pulse profile ($\lesssim$ 50 keV) has two peaks (a main and a minor one), \citet{IyerMD15} analyzed the energy-dependent phase lags of each peak with respect to a reference pulse profile \citep[see also][]{FerrignoFBB11}. Significant negative phase shifts of the main (minor) peak are detected at energies of $\sim$ 11, 23 and 39 keV (16 and 30 keV), indicating the energies at which the corresponding cyclotron absorption occurs. Assuming that each peak in the pulse profile of 4U 0115+63 mainly corresponds to the emission from a single pole, they concluded that the two sets of CRSFs are formed at the two poles, respectively. However, their method of testing the second point of TPCM is inconsistent with some observations: e.g., some energy-dependent pulse profiles display more than two peaks (see panel `002-a' of Fig. \ref{fig06}), which makes it ambiguous to infer the phases of the two poles from the profile. In addition, the two line sets might instead be produced in the fan and pencil beams, respectively, if these radiation regions contribute to different humps in the pulse profile \citep[e.g.,][]{SasakiMKF12, IwakiriPE19}. In particular, in the bright state ($\gtrsim$ {\DP{37}\,{\ensuremath{\rm erg\,{s}^{-1}}}}), neither of the two peaks in the profile corresponds to the emission from a single pole \citep{KrausBSE96}.
Thus, in the following we use a different approach to test the second point of TPCM. In our work, not only does the higher-S/N measurement confirm the detection of the peculiar cyclotron line near 16 keV, but the high quality of the {\it NuSTAR} data also allows phase-resolved spectroscopy, which supplies more details about these CRSFs. That is, based on the phase-dependent EWs of these CRSFs (i.e., the strength of the cyclotron absorption at different phases, see Fig. \ref{fig06}), we can better constrain their line-formation regions, and then check whether these two regions can be identified with the two magnetic poles. Our analysis concentrates on the dependence of the EW on the pulse profiles in the low-energy bands ($8-14$ keV and $14-19$ keV), irrespective of the appearance of more than two peaks in the profiles or the contribution of different radiation regions to the pulses.
As discussed below, the observations in Fig. \ref{fig06} are consistent with the second point of TPCM. As depicted in panels `N' and `a' of Fig. \ref{fig06}, the 8$-$14 keV (14$-$19 keV) continuum undergoes significant cyclotron absorption at $\phi \sim$ 0.6$-$0.9 (0.2$-$0.5), which marks the line-formation region of the 12 keV (16 keV, see panel `b') line. In TPCM, these regions can be identified with two different magnetic poles (see the shaded region in panel `a'). Moreover, at the phases where the 12 keV (16 keV) line is formed, we also see slightly weaker absorption near 16 keV (12 keV). According to \citet[][]{KrausBSE96}, in the high-luminosity state ($L_{\rm X} \gtrsim$ {\DP{37}\,{\ensuremath{\rm erg\,{s}^{-1}}}}), the emission from both magnetic poles contributes to each pulse; therefore both sets of cyclotron lines appear in the same pulse (see panels `a' and `b').
We can further determine which line set is formed on pole 1 and which on pole 2. The electron scattering cross-section governs the interaction of electrons with photons and hence the propagation of photons. In a strong magnetic field ($\gtrsim$ {\DP{12}} G), the cross-section for low-energy photons ($E <E_{\rm cyc}$) depends on the field strength, the propagation angle with respect to the field, and the photon energy \citep[][]{AronsKL87, BeckerW07, BeckerKSE12}. The cross-sections for propagation parallel ($\sigma_{\rm \|}$) and perpendicular ($\sigma_{\rm \bot}$) to the magnetic field can be approximated, respectively, by \citep[see discussions in][]{BeckerW07}
\begin{eqnarray}
&&\sigma_{\rm \|} \approx (\frac{E}{E_{\rm cyc}})^2\sigma_{\rm T},\nonumber\\
&&\sigma_{\rm \bot} \approx \sigma_{\rm T}, \label{eqn03}
\end{eqnarray}
where $\sigma_{\rm T}$ is the Thomson cross-section. Thus, low-energy photons ($E<E_{\rm cyc}$, scattered inside the column) from the thermal mound tend to diffuse along the field lines because $\sigma_{\rm \|} < \sigma_{\rm \bot}$, forming a ``pencil-like'' beam. Along the column axis, a portion of the pencil-like beam is easily reprocessed by the accretion flow (see the pencil-like beam `B' in Fig. \ref{fig07}). Owing to the advection of the accretion flow and gravitational light deflection, the reprocessed pencil-like beam can produce a narrow ``anti-pencil'' beam \citep{SasakiKKC10, SasakiMKF12}, which preserves the absorption feature imprinted on the pencil-like beam. According to the discussion in \citet{SasakiKKC10}, the anti-pencil beam should originate from pole 2 and can be observed on the back side of the accretion column. Since the 16 keV line appears in the anti-pencil beam ($\phi \sim$ 0.9$-$1.1; see especially panel `004-b' of Fig. \ref{fig06}), the 16 keV line is formed on pole 2, and the 12 keV line on the other pole (see Fig. \ref{fig07}).
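As a rough numerical illustration of Eq. \ref{eqn03} (the numbers here are purely illustrative, not fitted quantities): for continuum photons with $E \simeq 8$ keV in a region where $E_{\rm cyc} \simeq 16$ keV,
\begin{equation*}
\frac{\sigma_{\rm \|}}{\sigma_{\rm \bot}} \approx \left(\frac{E}{E_{\rm cyc}}\right)^2 = \left(\frac{8}{16}\right)^2 = 0.25,
\end{equation*}
so the scattering mean free path along the field is roughly four times longer than that across it, which is why such photons preferentially escape along the column axis as a pencil-like beam.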
Within TPCM we can then further explain the observed behaviour of the other harmonic components, as follows.
\begin{enumerate}[1)]
\item{At $\phi\sim$ 0.6$-$0.9, the EWs of the 12 keV line and its first harmonic ($\sim$ 22 keV) both reach their peaks (see panels `b' and `c' in Fig. \ref{fig06}). However, at $\phi\sim$ 0.2$-$0.5 the weakness of the 12 keV line appears to conflict with the significant absorption near 22 keV. This discrepancy is consistent with previous studies \citep{ArayaH00, SchonherrWE07}: the de-excitation of thermal electrons and the filling-in of photons near the energy of the fundamental line weaken the fundamental. Moreover, at $\phi\sim$ 0.2$-$0.5 the count rate in the 8$-$14 keV band is $\sim$ 15 times that in the 19$-$27 keV band, in accordance with the property of the pencil-like beam (see the discussion following Eq. \ref{eqn03}). Thus the fraction of 8$-$14 keV photons that are absorbed is very small compared with that of 19$-$27 keV photons, which results in a weaker absorption feature near 12 keV.}
\item{The phase-dependent EW of the 33/35 keV line is more complicated, since the line can be either the first harmonic of the 16 keV line or the second harmonic of the 12 keV line. At $\phi \sim$ 0.8$-$0.9, the 12 keV line set is the main contributor to the absorption near 33/35 keV, since the 16 keV line is very weak. At $\phi \sim$ 0.1$-$0.2 and 0.5$-$0.6, the 16 keV and 12 keV line sets both contribute to the formation of the 33/35 keV line, and because a higher fraction of photons is absorbed in the 30$-$40 keV band, the EW of the 33/35 keV line is larger than that of the 12 keV or 16 keV line. Additionally, at $\phi \sim$ 0.2$-$0.5, owing to a lack of high-energy photons (and/or high-energy electrons) along the direction undergoing the cyclotron absorption, the absorption near 33/35 keV is very weak.}
\end{enumerate}
Further observations are also consistent with the model: (i) each cyclotron line is created in a region close to the continuum-emitting region \citep[the height of the line-formation region is $\sim$ 0.2 km, and that of the thermal mound is $\lesssim$ 0.1 km; see][]{BeckerW07,BeckerKSE12}, so that the cyclotron line can be detected at most pulse phases (Fig. \ref{fig06}); (ii) the field gradient $\frac{\Delta{B}}{\Delta{z}}$ in each line-formation region should be small, otherwise the cyclotron line would be much wider than observed, or even hardly observable; (iii) the configuration of the two magnetic poles in Fig. \ref{fig07}, deduced from Fig. \ref{fig06}, is consistent with that in \citet{SasakiMKF12}.
However, the strong pulse-phase dependence of $E_{\rm cyc}$ is not explicitly addressed by TPCM. This phase dependence is thought to arise from the phase-dependent height of the line-formation region \citep{StaubertTK19}. Working out the dependence in detail requires additional ingredients (e.g., light bending around the NS), which is beyond the scope of TPCM.
Therefore, we propose that the two line sets (with different fundamental lines) are formed at different magnetic poles (see Figs. \ref{fig06} and \ref{fig07}). Since the altitude ($z$, measured from the NS surface) of the line-formation region is determined by the luminosity rather than the magnetic field \citep[see Eq. (40) in][]{BeckerKSE12}, the two lines should be produced at the same height above the two different poles. For ObsID 002 (004), the emission from the thermal mound undergoes cyclotron absorption at $z\sim$ 0.23 km (0.16 km). From the centroid energies of the two fundamental lines, we obtain the magnetic fields of the two poles, $\sim$ {1.4\TDP{12}} and {1.1\TDP{12}} G, respectively.
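As an order-of-magnitude cross-check (our own estimate, neglecting the gravitational redshift and the small altitude correction), the standard relation between the fundamental line energy and the field strength,
\begin{equation*}
E_{\rm cyc} \simeq 11.6\ {\rm keV}\ \frac{B}{10^{12}\ {\rm G}},
\end{equation*}
gives $B \approx$ {1.0\TDP{12}} G for the 12 keV fundamental and $B \approx$ {1.4\TDP{12}} G for the 16 keV fundamental, broadly consistent with the values quoted above.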
Although the centroid energies of these two fundamental lines both seem to decrease as the outburst decays (see Tbl. \ref{tab01}), no firm conclusion can be drawn. Within TPCM we can nevertheless make predictions about their luminosity-dependent $E_{\rm cyc}$. As summarized in previous studies \citep[e.g.,][]{BeckerKSE12, DoroshenkoTME17, StaubertTK19}, two types of correlation between the luminosity and the CRSF energy are observed: a positive correlation in the subcritical state and an anti-correlation in the supercritical state. We expect the luminosity-dependent centroid energies of the 12 keV and 16 keV lines to follow these correlations at the corresponding luminosity levels.
Future studies could proceed as follows. (1) Once the physical nature of the ``10 keV feature'' (described by a wide-{\tt gaussian} profile or a {\tt compTT} component) is better understood, methods should be developed to distinguish it from a cyclotron line in the energy spectrum; e.g., a negative (positive) dependence of the assumed CRSF energy on luminosity in the supercritical (subcritical) state would support a cyclotron-absorption origin \citep{ReigM16}. (2) Further theoretical calculations and observations are needed, e.g., to reveal the physical properties of the different line-formation regions, to measure $E_{\rm cyc}$ of the two line sets at different luminosity levels, and to disentangle the contributions of the emission from the two magnetic poles to the EWs of these lines. We might then understand why two line-formation regions appear in 4U 0115+63 and predict the same situation in other accreting pulsars. Our work is also helpful for understanding broader issues of cyclotron lines and the distribution of the magnetic field, e.g., the luminosity dependence of the cyclotron line energy at different luminosity levels \citep[e.g., ][]{DoroshenkoTME17, StaubertKE17, StaubertTK19, TsygankovLCS07}. In particular, for different line sets the luminosity-dependent line energies should follow different relations \citep[e.g., ][]{TsygankovLCS07, BoldinTL13}.
\subsection{Conclusion}
\label{sec04.2}
In our work, we have studied two pointed observations of 4U 0115+63 in the 2015 outburst obtained by {\it NuSTAR}. In both observations, we have detected several harmonically related CRSFs ($\sim$ 12, 23 and 33/35 keV) and a peculiar one ($\sim$ 16 keV), similar to those jointly observed in the 2011 outburst by several X-ray detectors \citep[$\sim$ 15 keV,][]{IyerMD15}. It is clear that the 16 keV line is not a harmonic component of the 12 keV line. We suppose that the fitting residual around 16 keV is not a so-called ``10 keV feature'', whose physical nature is still an open issue \citep[e.g.,][]{StaubertTK19}. The robustness of the 16 keV line is then confirmed as follows. First, owing to the high performance of {\it NuSTAR}, the complicated cyclotron lines are detected with a higher S/N and smaller uncertainties than in the previous observations of \citet{IyerMD15}. Secondly, in the fits using either physical or phenomenological models, the absorption features near 12 keV and 16 keV are very significant and should be fitted preferentially. Thirdly, our simulations indicate that the probability that the 12 keV or the 16 keV line is spurious (i.e., not detected together with the other lines) is lower than {\DP{-5}}.
From the pulse phase-dependent equivalent widths of these cyclotron lines (see Fig. \ref{fig06}), we infer that the 12 keV and 16 keV lines are two distinct fundamental lines. In the two-pole CRSF model, the two line sets are produced at the same altitude ($\sim$ 0.2 km above the NS surface) at different magnetic poles (see Fig. \ref{fig07}). The magnetic fields of the two poles should then be $\sim$ {1.1\TDP{12}} and {1.4\TDP{12}} G, respectively. We expect the centroid energies of the 12 keV and 16 keV lines to correlate positively (negatively) with the luminosity in the subcritical (supercritical) state, as summarized in previous work \citep[e.g.,][]{BeckerKSE12, DoroshenkoTME17, StaubertTK19}.
\begin{acknowledgements}
We are grateful to an anonymous referee, Prof. Fang-Jun Lu, P. A. Becker, G. K. Jaisawal and M. Yukita for clarifying and helpful comments. This work is supported by Project U1838201, the National Key Research and Development Program of China (2016YFA0400803), and the Natural Science Foundation of China under grant Nos. 11733009, Y71131005C, 11673023, 11333004 and 11773015 (NSFC and CAS). This research has made use of the {\it NuSTAR} Data Analysis Software ({\tt NuSTARDAS}) jointly developed by the ASI Science Data Center (ASDC, Italy) and the California Institute of Technology (Caltech, USA), and of software provided by the High Energy Astrophysics Science Archive Research Center (HEASARC).
\end{acknowledgements}
\bibliographystyle{apj}
\section{Introduction}
In this paper, we consider $C^1$ dynamical systems $\Phi\colon Q\times \mathbb{T} \to Q$ having a globally attracting hyperbolic fixed point or periodic orbit.
Here $Q$ is a smooth manifold and either $\mathbb{T} = \mathbb{Z}$ or $\mathbb{T} = \mathbb{R}$; when $\mathbb{T} = \mathbb{R}$, a common example is that of $t\mapsto \Phi^t(x_0)$ being the solution to the initial value problem $$\frac{d}{dt}x(t) = f(x(t)),\qquad x(0) = x_0$$ determined by a complete $C^1$ vector field $f$ on $Q$.
Our main contributions are existence and uniqueness results regarding globally defined $C_{\textnormal{loc}}^{k,\alpha}$ linearizing semiconjugacies $\psi\colon Q\to \mathbb{C}^m$ which, by definition, make the diagram
\begin{equation}\label{eq:cd-gen-efunc}
\begin{tikzcd}
&Q \arrow{r}{\Phi^t}\arrow{d}{\psi}&Q\arrow{d}{\psi}\\
&\mathbb{C}^m\arrow{r}{e^{tA}}&\mathbb{C}^m
\end{tikzcd}
\end{equation}
commute for some $A\in \mathbb{C}^{m\times m}$ and all $t\in \mathbb{T}$.
By $C_{\textnormal{loc}}^{k,\alpha}$ with $k\in \mathbb{N}_{\geq 1}$ and $0\leq \alpha \leq 1$ we mean that $\psi\in C^k(Q,\mathbb{C}^m)$ and that all $k$-th partial derivatives of $\psi$ are locally $\alpha$-H\"{o}lder continuous in local coordinates.
Linearizing semiconjugacies are also known as linearizing \concept{factors} or \concept{factor maps} in the literature and can be viewed as a further generalization of the \concept{generalized Koopman eigenfunctions} of \cite{mezic2019spectrum,korda2019optimal}.
We note that such semiconjugacies are distinct from those in the diagram
\begin{equation}\label{eq:cd-param-method}
\begin{tikzcd}
&Q \arrow{r}{\Phi^t}&Q\\
&\mathbb{C}^m\arrow{r}{e^{tA}}\arrow{u}{K}&\mathbb{C}^m\arrow{u}{K}
\end{tikzcd}
\end{equation}
obtained from \eqref{eq:cd-gen-efunc} by flipping the vertical arrows.
Existence results for semiconjugacies of the type in \eqref{eq:cd-param-method} were obtained by \cite{cabre2003parameterization1,cabre2003parameterization2,cabre2005parameterization3} in the context of proving invariant manifold results using the parameterization method.
Our main result for the case of a globally attracting hyperbolic fixed point both generalizes and sharpens Sternberg's linearization theorem \cite[Thms~2,3,4]{sternberg1957local} which provides conditions ensuring the existence of a linearizing local $C^k$ diffeomorphism defined on a neighborhood of the fixed point; the results of \cite{lan2013linearization,kvalheim2018global} show that this local diffeomorphism can be extended to a global $C^k$ diffeomorphism $\psi \colon Q\to \mathbb{R}^n\subset \mathbb{C}^n$ making \eqref{eq:cd-gen-efunc} commute.
Under Sternberg's conditions, a corollary of our main result is that this global linearizing diffeomorphism is uniquely determined by its derivative at the fixed point.
Additionally, we sharpen Sternberg's result from $C^k$ to $C_{\textnormal{loc}}^{k,\alpha}$ linearizations.
For the case of a globally attracting hyperbolic periodic orbit of a flow, our main result also yields a similar existence and uniqueness corollary for Floquet normal forms.
Our main results also imply several existence and uniqueness corollaries relevant to the ``applied Koopmanism'' literature, which has experienced a surge of interest initiated by \cite{dellnitz1999approximation,mezic2004comparison,mezic2005spectral}---motivated largely by data-driven applications--- more than 70 years after Koopman's seminal work \cite{koopman1931hamiltonian}.\footnote{See, e.g., \cite{budivsic2012applied,mauroy2012use,mauroy2013isostables,lan2013linearization,mohr2014construction,giannakis2015spatiotemporal,mezic2015applications,williams2015data,mauroy2016global,mohr2016koopman,brunton2016koopman,surana2016koopman, surana2016linear,arbabi2017ergodic, arbabi2017study,kaiser2017data,mezic2019spectrum,proctor2018generalizing,korda2018convergence,korda2018data,korda2019optimal,das2019delay,bruder2019nonlinear,dietrich2019koopman,arbabi2019data}.}
Existence and uniqueness results are desirable since, in analyzing the theoretical properties of any algorithm for computing some quantity, it is desirable to know whether the computation is well-posed \cite{hadamard1902problemes}, and in particular whether the quantity in question \emph{exists} and is \emph{uniquely determined}.
Our results yield precise conditions under which various quantities in this literature---including targets of numerical algorithms---exist and are unique, and are especially relevant to work on \concept{principal eigenfunctions} and \concept{isostables} for point attractors \cite{mohr2016koopman,mauroy2013isostables} and to work on \concept{isostable coordinates} for periodic orbit attractors in \cite{wilson2016isostable,shirasaka2017phase,wilson2018greater,monga2019phase}.
Isostables and isostable coordinates are useful tools for nonlinear model reduction, and it has been proposed that these objects could prove useful in real-world applications such as treatment design for Parkinson's disease, migraines, cardiac arrhythmias \cite{wilson2016isostable-pde}, and jet lag \cite{wilson2014energy}.
We give an intrinsic definition of principal eigenfunctions for nonlinear dynamical systems which generalizes the definition for linear systems in \cite{mohr2016koopman}.
We provide existence and uniqueness results for $C_{\textnormal{loc}}^{k,\alpha}$ principal eigenfunctions, and we also show that the (a priori non-unique) ``pullback algebra'' defined in \cite{mohr2016koopman} is unique under certain conditions.
For the case of periodic orbit attractors, principal eigenfunctions essentially coincide with the notion of isostable coordinates defined in \cite{wilson2018greater, monga2019phase}, except that the definition in these references involves a limit which might not exist except for the ``slowest'' isostable coordinate.
Our techniques shed light on this issue, and our results imply that this limit does in fact always exist for the ``slowest'' isostable coordinate if the dynamical system is at least $C^{1,\alpha}_{\textnormal{loc}}$ for some $\alpha > 0$.
In fact our results imply---assuming that there is a unique and algebraically simple ``slowest'' real Floquet multiplier---that any corresponding ``slowest'' $C^1$ isostable coordinate is always unique modulo scalar multiplication for a $C^1$ dynamical system, without the need for any nonresonance or spectral spread assumptions; furthermore, such a unique isostable coordinate always exists if the dynamics are at least $C^{1,\alpha}_{\textnormal{loc}}$ for some $\alpha > 0$.
Similarly, if instead there is a unique and algebraically simple ``slowest'' complex conjugate pair of Floquet multipliers, then any corresponding ``slowest'' complex conjugate pair of isostable coordinates are always unique modulo scalar multiplication for a $C^1$ dynamical system, and such a unique pair always exists if the dynamics are $C^{1,\alpha}_{\textnormal{loc}}$ with $\alpha > 0$.
As a final application of our main results, we give a complete classification of $C^\infty$ eigenfunctions for a $C^\infty$ dynamical system with semisimple (diagonalizable over $\mathbb{C}$) and nonresonant linearization, generalizing known results for analytic dynamics and analytic eigenfunctions \cite{mauroy2013isostables,mezic2019spectrum}.
The remainder of the paper is organized as follows.
In \S \ref{sec:main-results} we state Theorems \ref{th:main-thm} and \ref{th:main-thm-per}, our two main results, without proof.
We also state a proposition on the uniqueness of linearizing factors which does not assume any nonresonance conditions.
As applications we derive in \S \ref{sec:applications} several results which are essentially corollaries of this proposition and the two main theorems.
\S \ref{sec:app:stern-floq} contains existence and uniqueness theorems for global Sternberg linearizations and Floquet normal forms.
In \S \ref{sec:app-p-eigs} we define principal Koopman eigenfunctions and isostable coordinates for nonlinear systems and show how Theorems \ref{th:main-thm} and \ref{th:main-thm-per} yield corresponding existence and uniqueness results.
We then discuss the relationship between various notions defined in \cite{mohr2016koopman} and our definitions, and we also discuss the convergence of the isostable coordinate limits in \cite{wilson2018greater,monga2019phase}.
\S \ref{sec:app-classify} contains our classification theorem for $C^\infty$ eigenfunctions of $C^\infty$ dynamical systems with a globally attracting hyperbolic fixed point or periodic orbit.
Finally, \S \ref{sec:proofs-main-results} contains the proofs of Theorems \ref{th:main-thm} and \ref{th:main-thm-per}.
\subsection*{Acknowledgements}
The majority of this work was performed while Kvalheim was a postdoctoral researcher at the University of Michigan.
Both authors were supported by ARO grants W911NF-14-1-0573 and W911NF-17-1-0306 to Revzen.
We thank George Haller, David Hong, Igor Mezi\'{c}, Jeff Moehlis, Corbinian Schlosser, and Dan Wilson for helpful discussions and comments.
We owe special gratitude to Alexandre Mauroy and Ryan Mohr for their careful reading of the manuscript and valuable feedback; in particular, one of Mauroy's comments led to a sharpening of the uniqueness statements of Theorems \ref{th:main-thm} and \ref{th:main-thm-per} and Propositions \ref{prop:koopman-cka-fix} and \ref{prop:koopman-cka-per}, and Mohr found an error in Definition \ref{def:nonres}.
\section{Main results}\label{sec:main-results}
Before stating our main results, we give two definitions which are essentially asymmetric versions of some appearing in \cite{sternberg1957local,sell1985smooth}.
When discussing eigenvalues and eigenvectors of a linear map or matrix in the remainder of the paper, we are always discussing eigenvalues and eigenvectors of its complexification, although we do not always make this explicit.
\begin{Def}[$(X,Y)$ $k$-nonresonance]\label{def:nonres}Let $X \in \mathbb{C}^{d\times d}$ and $Y \in \mathbb{C}^{n\times n}$ be matrices with eigenvalues $\mu_1,\ldots,\mu_d$ and $\lambda_1,\ldots,\lambda_n$, respectively, repeated with multiplicities.
For any $k \in \mathbb{N}_{\geq 1}$, we say that $(X,Y)$ is \concept{$k$-nonresonant} if, for any $i\in \{1,\ldots, d\}$ and any $m=(m_1,\ldots, m_n) \in \mathbb{N}^n_{\geq 0}$ satisfying $2\leq m_1 + \cdots + m_n \leq k$, \begin{equation}\label{eq:nonres-lem}
\mu_i \neq \lambda_1^{m_1}\cdots \lambda_n^{m_n}.
\end{equation}
(Note this condition vacuously holds if $k = 1$.) We say $(X,Y)$ is \concept{$\infty$-nonresonant} if $(X,Y)$ is $k$-nonresonant for every $k\in \mathbb{N}_{\geq 1}$.
\end{Def}
For the definition below, recall that the spectral radius $\rho(X)$ of a matrix is defined to be the largest modulus (absolute value) of the eigenvalues of (the complexification of) $X$.
\begin{Def}[$(X,Y)$ spectral spread]
\label{def:nonres-spread}
Let $X\in \mathsf{GL}(m,\mathbb{C})$ and $Y\in \mathsf{GL}(n,\mathbb{C})$ be invertible matrices with the spectral radius $\rho(Y)$ satisfying $\rho(Y) < 1$.
We define the spectral spread $\nu(X,Y)$ to be
\begin{equation}\label{eq:spread-def}
\nu(X,Y)\coloneqq \max_{\substack{\mu \in \textnormal{spec}(X)\\ \lambda \in \textnormal{spec}(Y)}}\frac{\ln(|\mu|)}{\ln(|\lambda|)}.
\end{equation}
\end{Def}
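Both conditions can be checked by brute force once the eigenvalues of $X$ and $Y$ are known. The following minimal sketch (with placeholder eigenvalues; in practice exact or interval arithmetic is preferable to floating-point equality tests) illustrates Definitions \ref{def:nonres} and \ref{def:nonres-spread}:
\begin{verbatim}
# Brute-force checks of k-nonresonance and of the spectral spread nu(X, Y),
# given the eigenvalues of X (mu) and of Y (lam).  Placeholder values below.
import itertools
import numpy as np

def is_k_nonresonant(mu, lam, k):
    """True iff mu_i != lam_1^m_1 * ... * lam_n^m_n for all i and all
    multi-indices m with 2 <= m_1 + ... + m_n <= k."""
    n = len(lam)
    for total in range(2, k + 1):
        for m in itertools.product(range(total + 1), repeat=n):
            if sum(m) != total:
                continue
            prod = np.prod([l ** p for l, p in zip(lam, m)])
            if any(np.isclose(mu_i, prod) for mu_i in mu):
                return False
    return True

def spectral_spread(mu, lam):
    """nu(X, Y) = max over eigenvalue pairs of ln|mu| / ln|lam|  (rho(Y) < 1)."""
    return max(np.log(abs(m)) / np.log(abs(l)) for m in mu for l in lam)

mu  = [0.5]                  # eigenvalues of X (placeholders)
lam = [0.5, 0.25 + 0.10j]    # eigenvalues of Y (placeholders), inside the unit disk
print(is_k_nonresonant(mu, lam, k=3), spectral_spread(mu, lam))
\end{verbatim}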
Finally, here we recall the definition of $C_{\textnormal{loc}}^{k,\alpha}$ functions.
\begin{Def}[$C_{\textnormal{loc}}^{k,\alpha}$ functions]\label{def:ckal-funcs}
Let $M,N$ be smooth manifolds of dimensions $m$ and $n$, respectively, let $\psi\in C^k(M,N)$ be a $C^k$ map $\psi\colon M\to N$ with $k\in \mathbb{N}_{\geq 0}$, and let $0\leq \alpha \leq 1$.
We will say that $\psi \in C_{\textnormal{loc}}^{k,\alpha}(M,N)$ if for every $x\in M$ there exist charts $(U_1,\varphi_1)$ and $(U_2,\varphi_2)$ containing $x$ and $\psi(x)$ such that all $k$-th partial derivatives of $\varphi_2 \circ \psi \circ \varphi_1^{-1}$
are H\"{o}lder continuous with exponent $\alpha$.
If the domain and codomain $M$ and $N$ are clear from context, we will write $C_{\textnormal{loc}}^{k,\alpha}$ instead of $C_{\textnormal{loc}}^{k,\alpha}(M,N)$.
\end{Def}
\begin{Rem}
Using the chain rule and the fact that compositions and products of locally $\alpha$-H\"{o}lder continuous functions are again locally $\alpha$-H\"{o}lder, it follows that the property of being $C_{\textnormal{loc}}^{k,\alpha}$ on a manifold does not depend on the choice of charts in Definition \ref{def:ckal-funcs}.
\end{Rem}
\begin{Not}
Given a differentiable map $F\colon M\to N$ between smooth manifolds, in the remainder of this paper we use the notation $\mathsf{D}_x F$ for the derivative of $F$ at the point $x\in M$.
(Recall that the derivative $\mathsf{D}_x F\colon \mathsf{T}_x M \to \mathsf{T}_{F(x)}N$ is a linear map between tangent spaces \cite{lee2013smooth}, which can be identified with the Jacobian of $F$ evaluated at $x$ in local coordinates.)
In particular, given a dynamical system $\Phi\colon Q\times \mathbb{T} \to Q$ and fixed $t \in \mathbb{T}$, we write $\mathsf{D}_{x}\Phi^t\colon \mathsf{T}_{x}Q\to \mathsf{T}_{\Phi^t(x)}Q$ for the derivative of the time-$t$ map $\Phi^t\colon Q\to Q$ at the point $x\in Q$.
Given $i \geq 2$, we similarly use the notation $\mathsf{D}_{x}^iF$ for the $i$-th derivative of $F$, which can be identified with the linear map $\mathsf{D}_{x}^i F\colon (\mathsf{T}_{x}M)^{\otimes i} \to \mathsf{T}_{F(x)}N$ from the $i$-th tensor power $(\mathsf{T}_{x}M)^{\otimes i}$ to $\mathsf{T}_{F(x)}N$ represented in local coordinates by the $(1+i)$-dimensional array of $i$-th partial derivatives of $F$ evaluated at $x$.
\end{Not}
We now state our main results, Theorems \ref{th:main-thm} and \ref{th:main-thm-per}, as well as Proposition \ref{prop:uniqueness-without-nonresonance}.
Figure \ref{fig:spread} illustrates the condition $\nu(e^{A},\mathsf{D}_{x_0} \Phi^1) < k + \alpha$ in Theorem \ref{th:main-thm} below.
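For instance (our own illustrative numbers, using the disk characterization described in the caption of Figure \ref{fig:spread}): if the smallest modulus among the eigenvalues of $e^{A}$ is $1/2$ and $k+\alpha = 5/2$, then the condition $\nu(e^{A},\mathsf{D}_{x_0} \Phi^1) < k + \alpha$ requires every eigenvalue of $\mathsf{D}_{x_0}\Phi^1$ to have modulus less than $(1/2)^{2/5} \approx 0.76$.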
\begin{figure}
\centering
\input{spread.pdf_tex}
\caption{An illustration of the condition $\nu(e^{A},\mathsf{D}_0 \Phi^1) < k + \alpha$ of Theorem \ref{th:main-thm}. Unwinding Definition \ref{def:nonres-spread}, it follows that this condition is equivalent to every eigenvalue of $\mathsf{D}_0 \Phi^1$ (represented by an ``$\times$'' above) belonging to the open disk with radius given by raising the smallest modulus of the eigenvalues of $e^A$ to the power $\frac{1}{k+\alpha}$. }\label{fig:spread}
\end{figure}
\begin{restatable}[Existence and uniqueness of $C_{\textnormal{loc}}^{k,\alpha}$ global linearizing factors for a point attractor]{Th}{ThmMain}\label{th:main-thm}
Let $\Phi\colon Q \times \mathbb{T} \to Q$ be a $C^{1}$ dynamical system having a globally attracting hyperbolic fixed point $x_0\in Q$, where $Q$ is a smooth manifold and either $\mathbb{T} = \mathbb{Z}$ or $\mathbb{T} = \mathbb{R}$.
Let $m\in \mathbb{N}_{\geq 1}$ and $e^{A}\in \mathsf{GL}(m,
\mathbb{C})$ have spectral radius $\rho(e^{A})<1$, and let the linear map $B\colon \mathsf{T}_{x_0}Q\to \mathbb{C}^m$ satisfy
\begin{equation}\label{eq:main-th-1}
\forall t \in \mathbb{T}\colon B \mathsf{D}_{x_0}\Phi^t = e^{tA} B.
\end{equation}
\emph{\textbf{Uniqueness.}} Fix $k\in \mathbb{N}_{\geq 1}\cup \{\infty\}$ and $0 \leq \alpha \leq 1$, assume that $(e^{A},\mathsf{D}_{x_0} \Phi^1)$ is $k$-nonresonant, and assume that either $\nu(e^{A},\mathsf{D}_{x_0} \Phi^1) < k + \alpha$ or $\nu(e^{A},\mathsf{D}_{x_0} \Phi^1) \leq k$.
Then any $\psi \in C_{\textnormal{loc}}^{k,\alpha}(Q,\mathbb{C}^m)$ satisfying
\begin{equation*}
\psi \circ \Phi^1 = e^{A} \psi, \qquad \mathsf{D}_{x_0} \psi = B
\end{equation*}
is unique, and if $B\colon \mathsf{T}_{x_0}Q\to \mathbb{R}^m$ and $A\in \mathbb{R}^{m\times m}$ are real, then $\psi\colon Q\to \mathbb{R}^m \subset \mathbb{C}^m$ is real.
\emph{\textbf{Existence}.} If furthermore $\Phi\in C_{\textnormal{loc}}^{k,\alpha}$ and $\nu(e^{A},\mathsf{D}_{x_0} \Phi^1) < k + \alpha$, then such a $\psi$ exists and additionally satisfies
\begin{equation}\label{eq:main-th-3}
\forall t \in \mathbb{T}\colon \psi \circ \Phi^t = e^{tA} \psi.
\end{equation}
In fact, if $P$ is any ``approximate linearizing factor'' satisfying $\mathsf{D}_{x_0}P = B$ and
\begin{equation}\label{eq:main-th-4}
P \circ \Phi^1 = e^A P + R
\end{equation}
with $\mathsf{D}_{x_0}^i R = 0$ for all integers $0\leq i < k+\alpha$, then
\begin{equation}\label{eq:main-th-5}
\psi = \lim_{t\to \infty} e^{-tA} P \circ \Phi^t,
\end{equation}
in the topology of $C^{k,\alpha}$-uniform convergence on compact subsets of $Q$.
\end{restatable}
\begin{Rem}
Definitions \ref{def:nonres} and \ref{def:nonres-spread} are not independent.
In particular, if $(X,Y)$ is $(\ell-1)$-nonresonant and $\nu(X,Y) < \ell$, then it follows that $(X,Y)$ is $\infty$-nonresonant.
Hence an equivalent statement of Theorem \ref{th:main-thm} could be obtained by replacing $k$-nonresonance with $\infty$-nonresonance everywhere (alternatively, for the existence statement only $(k-1)$-nonresonance need be assumed in the case $\alpha = 0$).
We prefer to use the stronger-sounding statement of the theorem above since it makes apparent that the set of matrix pairs $(e^A, \mathsf{D}_{x_0}\Phi^1)$ satisfying its hypotheses are \emph{open} in the space of all matrix pairs.
Openness for $k < \infty$ is immediate, and openness for $k = \infty$ follows from the fact that $\nu(e^A,\mathsf{D}_{x_0}\Phi^1)$ is always finite.
\end{Rem}
\begin{Rem}\label{rem:no-nonres-needed}
The statement regarding the above limit actually holds without any nonresonance assumptions if such an approximate linearizing factor $P$ exists; see Lemma \ref{lem-make-approx-exact} in \S \ref{sec:main-proof-exist}.
\end{Rem}
\begin{Rem}
In the uniqueness portion of the above theorem and also later in this paper, the point of the condition that either $$\nu(e^{A},\mathsf{D}_{x_0} \Phi^1) < k + \alpha \qquad \textnormal{ or } \qquad \nu(e^{A},\mathsf{D}_{x_0} \Phi^1) \leq k$$ is to require that the inequality $\nu(e^{A},\mathsf{D}_{x_0} \Phi^1) < k + \alpha$ hold strictly except when $\alpha = 0$, in which case the non-strict inequality $\nu(e^{A},\mathsf{D}_{x_0} \Phi^1) \leq k$ is allowed to hold.
This is relevant for, e.g., the case that $k= 1$ and $\alpha = 0$.
Of course the ``or'' above is inclusive, i.e., we allow both inequalities to hold in the hypotheses of the above theorem and later results.
\end{Rem}
\begin{Rem}[the $C^\infty$ case]\label{rem:point-cinf-case}
In the case that $k = \infty$, the hypothesis $\nu(e^A,\mathsf{D}_{x_0}\Phi^1) < k + \alpha$ becomes $\nu(e^A,\mathsf{D}_{x_0}\Phi^1) < \infty$ which is automatically satisfied since $\nu(e^A,\mathsf{D}_{x_0}\Phi^1)$ is always finite.
Hence for the case $k = \infty$, no assumption is needed on the spectral spread in Theorem \ref{th:main-thm}; we need only assume that $(e^A, \mathsf{D}_{x_0}\Phi^1$) is $\infty$-nonresonant.
Similar remarks hold for all of the following results which include a condition of the form $\nu(\slot,\slot) < k + \alpha$.
\end{Rem}
\begin{Rem}[sketch of the proof of the existence portion of Theorem \ref{th:main-thm}]\label{rem:main-thm-1-outline}
Here we sketch the proof of the existence statement of Theorem \ref{th:main-thm}, which is somewhat more involved than the uniqueness proof.
(The existence proof also yields uniqueness, but under stronger assumptions than the uniqueness statement in Theorem \ref{th:main-thm}.)
Since global asymptotic stability of $x_0$ implies that $Q$ is diffeomorphic to $\mathbb{R}^n$ \cite[Lem~2.1]{wilson1967structure}, we may assume that $Q = \mathbb{R}^n$ and $x_0 = 0$.
For now we consider the case $k < \infty$.
First, the $k$-nonresonance assumption implies that we can uniquely solve \eqref{eq:main-th-4} order by order (in the sense of Taylor polynomials) for $P$ up to order $k$.
Once we obtain a polynomial $P$ of sufficiently high order, we derive a fixed point equation for the high-order remainder term $\varphi$, where $\psi = P + \varphi$ is the desired linearizing factor.
Given a sufficiently small, positively invariant, compact neighborhood $B$ containing the fixed point, the proof of Lemma \ref{lem-make-approx-exact} shows that the spectral spread condition $\nu(e^A,\mathsf{D}_{x_0}\Phi^1)< k+\alpha$ implies that the restriction $\varphi|_B$ of the desired high-order term is the fixed point of a map $S\colon C^{k,\alpha}(B,\mathbb{C}^m)\to C^{k,\alpha}(B,\mathbb{C}^m)$ which is a contraction with respect to the standard $C^{k,\alpha}$ norm $\norm{\slot}_{k,\alpha}$ making $C^{k,\alpha}(B,\mathbb{C}^m)$ a Banach space (note, however, that $\norm{\slot}_{k,\alpha}$ must be induced by an appropriate underlying \concept{adapted norm} \cite[Sec.~A.1]{cabre2003parameterization1} on $\mathbb{R}^n$ to ensure that $S$ is a contraction).
In fact, $S$ is the affine map defined by
\begin{equation}\label{eq:contraction}
S(\varphi|_B)\coloneqq -P|_B + e^{-A}\left(P|_B + \varphi|_B\right)\circ \Phi^1.
\end{equation}
Hence we can obtain $\varphi|_B$ by the standard contraction mapping theorem, thereby obtaining a function $\psi|_B\in C^{k,\alpha}(B,\mathbb{C}^m)$ satisfying $\psi|_B \circ \Phi^1|_B = e^A \psi|_B$.
We then extend the domain of $\psi|_B$ using the globalization techniques of \cite{lan2013linearization,kvalheim2018global} to obtain a globally defined function $\psi \in C_{\textnormal{loc}}^{k,\alpha}(Q,\mathbb{C}^m)$ satisfying $\psi \circ \Phi^1 = e^A \psi$.
Using an argument of Sternberg \cite[Lem.~4]{sternberg1957local} in combination with the uniqueness statement of Theorem \ref{th:main-thm}, we show that the function $\psi$ satisfies \eqref{eq:main-th-3}, i.e., $\psi$ is actually a linearizing factor for $\Phi^t$ for \emph{all} $t\in \mathbb{R}$.
We extend the result to the case that $k = \infty$ using a bootstrapping argument.
\begin{Rem}[a numerical consideration]
Our proof of the existence portion of Theorem \ref{th:main-thm}, outlined above, was inspired by Sternberg's proof of his linearization theorem \cite[Thms~2,~3,~4]{sternberg1957local} and also has strong similarities with the techniques used to prove the existence of semiconjugacies of the type \eqref{eq:cd-param-method} using the parameterization method \cite{cabre2003parameterization1,cabre2003parameterization2,cabre2005parameterization3}.
We repeat here an observation of \cite[Sec.~3]{cabre2003parameterization1} and \cite[Rem.~5.5]{cabre2005parameterization3} which is also relevant for numerical computations of linearizing semiconjugacies of the type \eqref{eq:cd-gen-efunc} (such as Koopman eigenfunctions) based on our proof of Theorem \ref{th:main-thm}.
Consider $P$ satisfying \eqref{eq:main-th-4}, $B$ and $S$ as in Remark \ref{rem:main-thm-1-outline}, and an initial guess $\psi_0|_B = P|_B + \varphi_0|_B$ for a local linearizing factor.
If $$\Lip{S}\leq \kappa < 1\qquad \textnormal{and} \qquad \norm{S(\varphi_0|_B)-\varphi_0|_B}\leq \delta,$$
then the standard proof of the contraction mapping theorem implies the estimate
\begin{equation}\label{eq:a-posteriori}
\norm{\varphi|_B - \varphi_0|_B} \leq \delta/(1-\kappa),
\end{equation}
where $\varphi|_B$ is such that $\psi|_B = P|_B + \varphi|_B$ is the unique actual local linearizing factor.
Thus equation \eqref{eq:a-posteriori} furnishes an upper bound on the distance between the initial guess $\varphi_0|_B$ and the true solution $\varphi|_B$, and can be used for \concept{a posteriori estimates} in numerical analysis.
\end{Rem}
\end{Rem}
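As a toy numerical illustration of the limit \eqref{eq:main-th-5} (our own one-dimensional example, not drawn from the references above), take $\Phi^1(x) = \lambda x + c x^2$ with $0<\lambda<1$, $e^A = \lambda$, and $P(x) = x$ (so $B = 1$); the remainder $R(x) = c x^2$ vanishes to first order at the fixed point $0$, and the iterates $\lambda^{-n}P(\Phi^n(x))$ converge to a linearizing factor $\psi$:
\begin{verbatim}
# Toy computation of psi = lim_{n->inf} lam^(-n) * P(Phi^n(x)) for the 1-D map
# Phi(x) = lam*x + c*x^2 with attracting fixed point 0 and P(x) = x.
import numpy as np

lam, c = 0.5, 0.1          # illustrative values; the basin of 0 contains [-1, 1]

def Phi(x):
    return lam * x + c * x ** 2

def psi(x, n=60):
    """Approximate linearizing factor from n steps of the limit formula."""
    y = np.asarray(x, dtype=float)
    for _ in range(n):
        y = Phi(y)
    return y / lam ** n

x = np.linspace(-1.0, 1.0, 11)
# Check the semiconjugacy psi(Phi(x)) = lam * psi(x) up to numerical error:
print(np.max(np.abs(psi(Phi(x)) - lam * psi(x))))
\end{verbatim}
In the notation of Remark \ref{rem:main-thm-1-outline}, the same $\psi$ could instead be obtained on a compact positively invariant neighborhood of $0$ by iterating the contraction \eqref{eq:contraction}.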
Theorem \ref{th:main-thm} gave conditions ensuring existence and uniqueness of linearizing factors under spectral spread and nonresonance conditions.
Before stating Theorem \ref{th:main-thm-per}, we state a result on the uniqueness of linearizing factors which does not assume any nonresonance conditions.
Proposition \ref{prop:uniqueness-without-nonresonance} follows immediately from Lemma \ref{lem:psi-identically-0} (used to prove the uniqueness statement of Theorem \ref{th:main-thm}) and the fact that $Q$ is diffeomorphic to $\mathbb{R}^{\dim(Q)}$ as mentioned above.
\begin{Prop}\label{prop:uniqueness-without-nonresonance}
Fix $k\in \mathbb{N}_{\geq 1}\cup \{\infty\}$ and $0 \leq \alpha \leq 1$, and let $\Phi\colon Q \times \mathbb{T} \to Q$ be a $C_{\textnormal{loc}}^{k,\alpha}$ dynamical system having a globally attracting hyperbolic fixed point $x_0\in Q$, where $Q$ is a smooth manifold and either $\mathbb{T} = \mathbb{Z}$ or $\mathbb{T} = \mathbb{R}$.
Let $m\in \mathbb{N}_{\geq 1}$ and $e^{A}\in \mathsf{GL}(m,\mathbb{C})$ have spectral radius $\rho(e^{A})<1$ and satisfy either $\nu(e^A,\mathsf{D}_{x_0} \Phi^1) < k + \alpha$ or $\nu(e^A,\mathsf{D}_{x_0} \Phi^1) \leq k$.
Let $\varphi\in C_{\textnormal{loc}}^{k,\alpha}(Q,\mathbb{C}^m)$ satisfy $\mathsf{D}_{x_0}^i \varphi = 0$ for all $0\leq i \leq k$ and $$\varphi \circ \Phi^1 = e^{A} \varphi.$$
Then it follows that $\varphi \equiv 0$.
In particular, if $\varphi = \psi_1 - \psi_2$, then $$\psi_1 = \psi_2.$$
\end{Prop}
\begin{restatable}[Existence and uniqueness of $C_{\textnormal{loc}}^{k,\alpha}$ global linearizing factors for a limit cycle attractor]{Th}{ThmMainPer}\label{th:main-thm-per}
Fix $k\in \mathbb{N}_{\geq 1}\cup \{\infty\}$ and $0 \leq \alpha \leq 1$, and let $\Phi\colon Q \times \mathbb{R} \to Q$ be a $C_{\textnormal{loc}}^{k,\alpha}$ flow having a globally attracting hyperbolic $\tau$-periodic orbit with image $\Gamma\subset Q$, where $Q$ is a smooth manifold.
Fix $x_0\in \Gamma$ and let $E^s_{x_0}$ denote the unique $\mathsf{D}_{x_0}\Phi^\tau$-invariant subspace complementary to $\mathsf{T}_{x_0} \Gamma$.
Let $m\in \mathbb{N}_{\geq 1}$ and $e^{\tau A}\in \mathsf{GL}(m,
\mathbb{C})$ have spectral radius $\rho(e^{\tau A})<1$, and let the linear map $B\colon E^s_{x_0} \to \mathbb{C}^m$ satisfy
\begin{equation}\label{eq:main-th-per-1}
B \mathsf{D}_{x_0}\Phi^\tau|_{E^s_{x_0}} = e^{\tau A} B.
\end{equation}
\emph{\textbf{Uniqueness.}} Assume that $(e^{\tau A},\mathsf{D}_{x_0} \Phi^\tau|_{E^s_{x_0}})$ is $k$-nonresonant, and assume that either $\nu(e^{\tau A},\mathsf{D}_{x_0} \Phi^\tau|_{E^s_{x_0}}) < k + \alpha$ or $\nu(e^{\tau A},\mathsf{D}_{x_0} \Phi^\tau|_{E^s_{x_0}}) \leq k$.
Then any $\psi \in C_{\textnormal{loc}}^{k,\alpha}(Q,\mathbb{C}^m)$ satisfying
\begin{equation}\label{eq:main-th-per-2}
\forall t\in \mathbb{R}\colon \psi \circ \Phi^t = e^{t A} \psi, \qquad \mathsf{D}_{x_0} \psi|_{E^s_{x_0}} = B
\end{equation}
is unique, and if $B\colon E^s_{x_0}\to \mathbb{R}^m$ and $A\in \mathbb{R}^{m\times m}$ are real, then $\psi\colon Q\to \mathbb{R}^m \subset \mathbb{C}^m$ is real.
\emph{\textbf{Existence.}} If furthermore $\nu(e^{\tau A},\mathsf{D}_{x_0} \Phi^\tau|_{E^s_{x_0}}) < k + \alpha$, then such a unique $\psi$ exists.
\end{restatable}
\section{Applications}\label{sec:applications}
In this section, we give some applications of Theorems \ref{th:main-thm} and \ref{th:main-thm-per} and Proposition \ref{prop:uniqueness-without-nonresonance}.
\S \ref{sec:app:stern-floq} contains results on Sternberg linearizations and Floquet normal forms.
\S \ref{sec:app-p-eigs} gives applications to principal Koopman eigenfunctions and isostable coordinates.
\S \ref{sec:app-classify} contains the classification theorems for $C^\infty$ eigenfunctions of a $C^\infty$ dynamical system.
\subsection{Sternberg linearizations and Floquet normal forms}\label{sec:app:stern-floq}
The following result is an improved statement of Sternberg's linearization theorem for hyperbolic sinks \cite[Thms~2,3,4]{sternberg1957local}: it includes uniqueness of the linearizing conjugacy, it sharpens Sternberg's $C^k$ linearization result to a $C_{\textnormal{loc}}^{k,\alpha}$ linearization result, and the linearization it produces is globally defined on all of $Q$ rather than only on some neighborhood of $x_0$.
Our technique for globalizing the domain of the linearization is essentially the same as those used in \cite{lan2013linearization,kvalheim2018global}.
\begin{Prop}[Existence and uniqueness of global $C_{\textnormal{loc}}^{k,\alpha}$ Sternberg linearizations]\label{prop:sternberg}
Fix $k\in \mathbb{N}_{\geq 1}\cup \{\infty\}$ and $0 \leq \alpha \leq 1$.
Let $\Phi\colon Q \times \mathbb{T} \to Q$ be a $C_{\textnormal{loc}}^{k,\alpha}$ dynamical system having a globally attracting hyperbolic fixed point $x_0\in Q$, where $Q$ is a smooth manifold and either $\mathbb{T} = \mathbb{Z}$ or $\mathbb{T} = \mathbb{R}$.
Assume that $\nu(\mathsf{D}_{x_0} \Phi^1,\mathsf{D}_{x_0} \Phi^1) < k + \alpha$, and assume that $(\mathsf{D}_{x_0} \Phi^1,\mathsf{D}_{x_0} \Phi^1)$ is $k$-nonresonant.
Then there exists a unique diffeomorphism $\psi\in C_{\textnormal{loc}}^{k,\alpha}(Q,\mathsf{T}_{x_0}Q)$ satisfying
\begin{equation}\label{eq:sternberg-lin}
\forall t \in \mathbb{T}\colon \psi \circ \Phi^t = \mathsf{D}_{x_0} \Phi^t \psi, \qquad \mathsf{D}_{x_0}\psi = \textnormal{id}_{\mathsf{T}_{x_0}Q}.
\end{equation}
(In writing $\mathsf{D}_{x_0}\psi = \textnormal{id}_{\mathsf{T}_{x_0}Q}$, we are making the standard and canonical identification $\mathsf{T}_{0}(\mathsf{T}_{x_0}Q)\cong \mathsf{T}_{x_0} Q$.)
\end{Prop}
\begin{proof}
Identifying $\mathsf{T}_{x_0} Q$ with $\mathbb{R}^n$ by choosing a basis, we apply Theorem \ref{th:main-thm} with $e^{tA} = \mathsf{D}_{x_0} \Phi^t$ and $B = \textnormal{id}_{\mathsf{T}_{x_0} Q}$ to obtain a unique $\psi\in C_{\textnormal{loc}}^{k,\alpha}(Q,\mathsf{T}_{x_0} Q)$ satisfying \eqref{eq:sternberg-lin} and $\mathsf{D}_{x_0} \psi = \textnormal{id}_{\mathsf{T}_{x_0}Q}$.
It remains only to show that $\psi$ is a diffeomorphism.
To do this, we separately show that $\psi$ is injective, surjective, and a local diffeomorphism.
By continuity, $\mathsf{D}_{x_0} \psi = \textnormal{id}_{\mathsf{T}_{x_0}Q}$ implies that $\mathsf{D}_x \psi$ is invertible for all $x$ in some neighborhood $U \ni x_0$.
Since $Q = \bigcup_{t\geq 0}\Phi^{-t}(U)$ by asymptotic stability of $x_0$, \eqref{eq:sternberg-lin} and the chain rule imply that $\mathsf{D}_{x}\psi$ is invertible for all $x\in Q$.
Hence $\psi$ is a local diffeomorphism.
To see that $\psi$ is injective, let $U$ be a neighborhood of $x_0$ such that $\psi|_U\colon U\to \psi(U)$ is a diffeomorphism, and let $x,y \in Q$ be such that $\psi(x) = \psi(y)$.
By asymptotic stability of $x_0$, there is $T > 0$ such that $\Phi^T(x),\Phi^T(y)\in U$, and \eqref{eq:sternberg-lin} implies that $\psi\circ \Phi^T(x) = \psi \circ \Phi^T(y)$.
Injectivity of $\psi|_U$ then implies that $\Phi^T(x) = \Phi^T(y)$, and injectivity of $\Phi^T$ then implies that $x = y$.
Hence $\psi$ is injective.
To see that $\psi$ is surjective, fix any $y\in \mathsf{T}_{x_0}Q$ and let the neighborhood $U$ be as in the last paragraph.
Asymptotic stability of $0$ for $\mathsf{D}_{x_0}\Phi \colon \mathsf{T}_{x_0}Q\times \mathbb{T}\to \mathsf{T}_{x_0}Q$ implies that there is $T > 0$ such that $\mathsf{D}_{x_0} \Phi^T y \in \psi(U)$, so there exists $x\in U$ with $\mathsf{D}_{x_0}\Phi^T y = \psi(x)$.
Hence $y = \mathsf{D}_{x_0} \Phi^{-T}\psi(x) = \psi \circ \Phi^{-T}(x)$, where we have used \eqref{eq:sternberg-lin}.
It follows that $\psi$ is surjective.
This completes the proof.
\end{proof}
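As a simple illustration (a standard scalar example included here only for concreteness), consider the flow $\Phi$ of $\dot{x} = -x + x^3$ on $Q = (-1,1)$.
This flow is complete on $Q$, has the origin as a globally attracting hyperbolic fixed point, and is analytic, so Proposition \ref{prop:sternberg} applies (e.g., with $k = \infty$).
Solving $\psi'(x)\,(-x+x^3) = -\psi(x)$ with $\psi'(0) = 1$ gives
$$\psi(x) = \frac{x}{\sqrt{1-x^2}},$$
which satisfies $\psi \circ \Phi^t = e^{-t}\psi = \mathsf{D}_0\Phi^t\,\psi$ and is a diffeomorphism from $(-1,1)$ onto $\mathbb{R}\cong \mathsf{T}_0 Q$; by the uniqueness statement of Proposition \ref{prop:sternberg}, it is the only such linearization with $\mathsf{D}_0\psi = \textnormal{id}$.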
The following is an existence and uniqueness result for the $C_{\textnormal{loc}}^{k,\alpha}$ Floquet normal form of a stable hyperbolic periodic orbit of a flow.
The result is proved using a combination of Proposition \ref{prop:sternberg} and stable manifold theory \cite{fenichel1974asymptotic,fenichel1977asymptotic,hirsch1977,de1995irwin} specialized to the theory of isochrons \cite{isochrons}.
\begin{Prop}[Existence and uniqueness of $C_{\textnormal{loc}}^{k,\alpha}$ global Floquet normal forms]\label{prop:floq-norm-form}
Fix $k\in \mathbb{N}_{\geq 1}\cup \{\infty\}$ and $0 \leq \alpha \leq 1$.
Let $\Phi\colon Q \times \mathbb{R} \to Q$ be a $C_{\textnormal{loc}}^{k,\alpha}$ flow having a globally attracting hyperbolic $\tau$-periodic orbit with image $\Gamma \subset Q$, where $Q$ is a smooth manifold.
Fix $x_0 \in \Gamma$ and let $E^s_{x_0}\subset \mathsf{T}_{x_0}Q$ denote the unique $\mathsf{D}_{x_0}\Phi^\tau$-invariant subspace complementary to $\mathsf{T}_{x_0}\Gamma$.
Assume that $\nu(\mathsf{D}_{x_0} \Phi^\tau|_{E^s_{x_0}},\mathsf{D}_{x_0} \Phi^\tau|_{E^s_{x_0}}) < k + \alpha$, and assume that $(\mathsf{D}_{x_0} \Phi^\tau|_{E^s_{x_0}},\mathsf{D}_{x_0} \Phi^\tau|_{E^s_{x_0}})$ is $k$-nonresonant.
Then if we write $\mathsf{D}_{x_0}\Phi^\tau|_{E^s_{x_0}}=e^{\tau A}$ for some complex linear $A\colon (E^s_{x_0})_\mathbb{C} \to (E^s_{x_0})_\mathbb{C}$, there exists a unique proper $C_{\textnormal{loc}}^{k,\alpha}$ embedding $\psi = (\psi_\theta, \psi_z) \colon Q\to S^1 \times (E^s_{x_0})_\mathbb{C}$ such that $\psi_\theta(x_0) = 1$, $(\mathsf{D}_{x_0}\psi_z)|_{E^s_{x_0}} = (E^s_{x_0} \hookrightarrow (E^s_{x_0})_\mathbb{C})$, and
\begin{equation}\label{eq:floquet}
\forall t \in \mathbb{R}\colon \psi_\theta \circ \Phi^t(x) = e^{2\pi i \frac{t}{\tau}}\psi_\theta(x), \qquad \psi_z\circ \Phi^t(x) = e^{tA} \psi_z(x),
\end{equation}
where $S^1\subset \mathbb{C}$ is the unit circle.
If $A\colon E^s_{x_0}\to E^s_{x_0}$ is real, then $\psi_z\in C_{\textnormal{loc}}^{k,\alpha}(Q, E^s_{x_0})$ is real, and the codomain-restricted map $\psi \colon Q \to S^1 \times E^s_{x_0}\subset S^1\times (E^s_{x_0})_\mathbb{C}$ is a diffeomorphism.
\end{Prop}
\begin{proof}
Theorem \ref{th:main-thm-per} implies that a map $\psi_z\in C_{\textnormal{loc}}^{k,\alpha}(Q,(E^s_{x_0})_\mathbb{C})$ satisfying the conclusions above exists.
Letting $W^s_{x_0}$ denote the global strong stable manifold (isochron) through $x_0$, we have $\mathsf{T}_{x_0}W^s_{x_0}= E^s_{x_0}$ and that $\Phi^\tau(W^s_{x_0}) = W^s_{x_0}$.
Additionally, $W^s_{x_0}$ is a $C_{\textnormal{loc}}^{k,\alpha}$ manifold since it is the stable manifold for the fixed point $x_0$ of the $C_{\textnormal{loc}}^{k,\alpha}$ diffeomorphism $\Phi^\tau$ \cite[Thm~2.1]{de1995irwin}.
Proposition \ref{prop:sternberg} then implies that $\psi_z|_{W^s_{x_0}}\colon W^s_{x_0}\to E^s_{x_0} \subset (E^s_{x_0})_\mathbb{C}$ is a diffeomorphism.\footnote{Strictly speaking, Proposition \ref{prop:sternberg} was stated for smooth manifolds.
Hence in order to apply Proposition \ref{prop:sternberg} here (and also in the proofs of Theorems \ref{th:main-thm-per} and \ref{th:classify-per}) we must first give $W^s_{x_0}$ a compatible $C^\infty$ structure, but this can always be done \cite[Thm~2.2.9]{hirsch1976differential}, so we will not mention this anymore.}
Since the vector field generating $\Phi$ intersects $W^s_{x_0}$ transversely, a standard argument \cite[p.~243]{hirsch1974differential} together with the $C_{\textnormal{loc}}^{k,\alpha}$ implicit function theorem \cite[Cor.~A.4]{eldering2013normally} imply that a real-valued $C_{\textnormal{loc}}^{k,\alpha}$ ``time-to-impact $W^s_{x_0}$'' function can be defined on a neighborhood of any point.
Using these facts, one can show that the function $\psi_\theta\colon Q\to S^1$ defined via $\psi_\theta(W^s_{x_0})\equiv 1$ and $\psi_\theta(\Phi^t(W^s_{x_0})) \equiv e^{2\pi i \frac{t}{\tau}}$ is a $C_{\textnormal{loc}}^{k,\alpha}$ function.
This function $\psi_\theta$ clearly satisfies $\psi_\theta(x_0) = 1$ and \eqref{eq:floquet}.
$\psi_\theta$ is unique among all continuous functions satisfying these equalities, since if $\tilde{\psi}_\theta$ is any other such function, then asymptotic stability of $\Gamma$ implies that the quotient $(\psi_\theta/\tilde{\psi}_\theta)$ is constant on $Q$, and since $(\psi_\theta/\tilde{\psi}_\theta)(x_0) = 1$ it follows that $\tilde{\psi}_\theta \equiv \psi_\theta$.
Since the kernels $\ker (\mathsf{D}_{x_0}\psi_z) = \mathsf{T}_{x_0}\Gamma$ and $\ker(\mathsf{D}_{x_0}\psi_\theta) = E^s_{x_0}$ are transverse, $\psi$ is an immersion, and $\psi$ is injective since the restriction of $\psi_z$ to any level set $W^s_{\Phi^t(x_0)}$ of $\psi_{\theta}$ is the composition of injective maps $e^{tA} \circ \psi_z|_{W^s_{x_0}} \circ \Phi^{-t}|_{W^s_{\Phi^t(x_0)}}$.
$\psi$ is a $C_{\textnormal{loc}}^{k,\alpha}$ embedding since, if $(\psi_z|_{W^s_{x_0}})^{-1}\colon E^s_{x_0}\to W^s_{x_0}$ is the inverse of $\psi_z|_{W^s_{x_0}}$, then the map $\psi^{-1}\colon \psi(Q)\to Q$ defined by
\begin{equation}\label{eq:psi-inv-def}
\psi^{-1}(\theta,z)\coloneqq \Phi^{\frac{\arg(\theta)}{2\pi}\tau}\circ (\psi_z|_{W^s_{x_0}})^{-1}( e^{-\frac{\arg(\theta)}{2\pi}\tau A}z)
\end{equation}
is the $C_{\textnormal{loc}}^{k,\alpha}$ inverse of $\psi$.
Since it is clear from \eqref{eq:psi-inv-def} that $\psi^{-1}$ is the restriction to $\psi(Q)$ of a continuous function $G\colon S^1\times (E^s_{x_0})_\mathbb{C} \to Q$ (given by the same formula), it follows that $\psi$ is a proper map \cite[Prop.~4.93(e)]{lee2011topological}.
That $\psi^{-1}$ is $C_{\textnormal{loc}}^{k,\alpha}$ follows since $(\psi_z|_{W^s_{x_0}})^{-1}$ is, and because the definition of $\psi^{-1}$ is independent of the branch of $\arg(\slot)$ used since $\Phi^\tau \circ (\psi_z|_{W^s_{x_0}})^{-1} \circ e^{- \tau A} = (\psi_z|_{W^s_{x_0}})^{-1}$.
Hence $\psi$ is a proper $C_{\textnormal{loc}}^{k,\alpha}$ embedding, and the same argument shows that $\psi$ is a diffeomorphism onto $S^1\times E^s_{x_0}$ if $A$ is real.
This completes the proof. \end{proof}
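As a further illustration (again a standard example, included only for concreteness), consider the flow on $Q = \mathbb{R}^2\setminus\{0\}$ given in polar coordinates by $\dot{\theta} = \frac{2\pi}{\tau}$ and $\dot{r} = \frac{r(1-r^2)}{1+r^2}$.
This flow is complete on $Q$, and the unit circle $\Gamma = \{r = 1\}$ is a globally attracting hyperbolic $\tau$-periodic orbit whose nontrivial Floquet exponent is $-1$.
Taking $x_0 = (r,\theta) = (1,0)$, a direct computation shows that $\psi_\theta(r,\theta) = e^{i\theta}$ and $\psi_z(r,\theta) = \frac{r^2-1}{2r}$ satisfy $\frac{d}{dt}\left(\psi_z\circ\Phi^t\right) = -\psi_z\circ\Phi^t$, so $\psi = (\psi_\theta,\psi_z)$ satisfies \eqref{eq:floquet} with $e^{tA} = e^{-t}$, satisfies the normalizations of Proposition \ref{prop:floq-norm-form}, and maps $Q$ diffeomorphically onto $S^1\times\mathbb{R}$.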
\subsection{Principal Koopman eigenfunctions, isostables, and isostable coordinates}\label{sec:app-p-eigs}
Given a $C^1$ dynamical system $\Phi\colon Q\times \mathbb{T} \to Q$, where $Q$ is a smooth manifold and either $\mathbb{T} = \mathbb{Z}$ or $\mathbb{T} = \mathbb{R}$, we say that $\psi \colon Q\to \mathbb{C}$ is a Koopman \concept{eigenfunction} if $\psi$ is not identically zero and satisfies
\begin{equation}\label{eq:koopman-efunc}
\forall t \in \mathbb{T}\colon \psi \circ \Phi^t = e^{\mu t} \psi
\end{equation}
for some $\mu \in \mathbb{C}$.
The following are intrinsic definitions of principal eigenfunctions and the principal algebra which extend the definitions for linear systems given in \cite[Def.~2.2--2.3]{mohr2016koopman}; a more detailed comparison is given later in Remark \ref{rem:mohr-mezic}.
The condition $\psi|_M \equiv 0$ was motivated in part by the definition of a certain space $\mathcal{F}_{A_c}$ of functions in \cite[p.~3358]{mauroy2016global}.
\begin{Def}\label{def:principal-eigenfunction}
Suppose that $\Phi$ has a distinguished, closed, invariant subset $M\subset Q$.
We say that an eigenfunction $\psi \in C^1(Q)$ is a \concept{principal} eigenfunction if
\begin{equation}\label{eq:p-eig-def}
\psi|_M \equiv 0 \qquad \text{and} \qquad \forall x\in M\colon \mathsf{D}_x\psi \neq 0.
\end{equation}
We define the $C_{\textnormal{loc}}^{k,\alpha}$ principal algebra $\mathcal{A}^{k,\alpha}_{\Phi}$ to be the complex subalgebra of $C_{\textnormal{loc}}^{k,\alpha}(Q,\mathbb{C})$ generated by all $C_{\textnormal{loc}}^{k,\alpha}$ principal eigenfunctions.
\end{Def}
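For example (a toy illustration), consider the flow $\Phi^t(x) = e^{-t}x$ on $Q = \mathbb{R}$ with $M = \{0\}$.
The function $\psi(x) = x$ is a principal eigenfunction (it satisfies \eqref{eq:koopman-efunc} with $e^{\mu t} = e^{-t}$, vanishes on $M$, and has nonvanishing derivative there), whereas $\psi(x) = x^2$ is an eigenfunction satisfying \eqref{eq:koopman-efunc} with $e^{\mu t} = e^{-2t}$ which vanishes on $M$ but is not principal, since its derivative vanishes at $0$.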
\begin{Rem}
In the case that $\Phi|_{M\times \mathbb{T}}$ is minimal (has no proper, closed, nonempty invariant subsets), \eqref{eq:p-eig-def} can be replaced by the weaker condition
\begin{equation}\label{eq:p-eig-def-weaker}
\exists x\in M\colon \psi(x) =0 \qquad \text{and} \qquad \exists y\in M\colon \mathsf{D}_y\psi \neq 0.
\end{equation}
This will be the case in the sequel, in which we will consider only the cases that $M$ is either a fixed point or periodic orbit.
\end{Rem}
Differentiating \eqref{eq:koopman-efunc} and using the chain rule immediately yields Propositions \ref{prop:p-eig-evec-point} and \ref{prop:p-eig-evec-cycle}, which have appeared in the literature (see e.g. the proof of \cite[Prop.~2]{mauroy2016global}). In these results, $d$ denotes the exterior derivative and $\mathsf{T}_{x_0}^* Q$ denotes the cotangent space to $x_0$; $d\psi(x_0)$ corresponds to $\mathsf{D}_{x_0}\psi$ after making the canonical identification $\mathbb{C} \cong \mathsf{T}_{\psi(x_0)}\mathbb{C}$.
\begin{Prop}\label{prop:p-eig-evec-point}
Let $x_0$ be a fixed point of the $C^1$ dynamical system $\Phi\colon Q\times \mathbb{T} \to Q$. If $\psi$ is a principal Koopman eigenfunction for $\Phi$ satisfying \eqref{eq:koopman-efunc} with $\mu\in \mathbb{C}$, then for any $t\in \mathbb{T}$, it follows that $d \psi(x_0)\in (\mathsf{T}_{x_0}^* Q)_\mathbb{C}$ is an eigenvector of the (complexified) adjoint $(\mathsf{D}_{x_0} \Phi^t)^*$ with eigenvalue $e^{\mu t}$.
\end{Prop}
\begin{Prop}\label{prop:p-eig-evec-cycle}
Let $\Gamma$ be the image of a $\tau$-periodic orbit of the $C^1$ dynamical system $\Phi\colon Q\times \mathbb{R} \to Q$.
If $\psi$ is a principal Koopman eigenfunction for $\Phi$ satisfying \eqref{eq:koopman-efunc} with $\mu\in \mathbb{C}$, then for any $x_0 \in \Gamma$, it follows that $d\psi(x_0)\in(\mathsf{T}_{x_0}^* Q)_\mathbb{C}$ is an eigenvector of the (complexified) adjoint $(\mathsf{D}_{x_0}\Phi^\tau)^*$ with eigenvalue $e^{\mu \tau}$; in particular, $e^{\mu \tau}$ is a Floquet multiplier for $\Gamma$.
\end{Prop}
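To spell out the brief argument behind these statements: if $\Phi^t(x_0) = x_0$, then differentiating \eqref{eq:koopman-efunc} at $x_0$ via the chain rule gives
$$\mathsf{D}_{x_0}\psi \circ \mathsf{D}_{x_0}\Phi^t = e^{\mu t}\,\mathsf{D}_{x_0}\psi, \qquad \text{equivalently} \qquad (\mathsf{D}_{x_0}\Phi^t)^*\, d\psi(x_0) = e^{\mu t}\, d\psi(x_0),$$
and $d\psi(x_0) \neq 0$ because $\psi$ is principal; taking $t = \tau$ for $x_0\in\Gamma$ yields Proposition \ref{prop:p-eig-evec-cycle}.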
Notice that, for a dynamical system having a globally attracting compact invariant set $M$, any continuous eigenfunction satisfying \eqref{eq:koopman-efunc} with $\mu \in \mathbb{C}$ must have $|e^{\mu}|\leq 1$.
If this attracting set $M$ is furthermore a hyperbolic fixed point, then there is the stronger conclusion that either $\mu = 0$ or $|e^{\mu}|< 1$.
With this observation, Proposition \ref{prop:koopman-cka-fix} below is now immediate from Theorem \ref{th:main-thm} and Proposition \ref{prop:p-eig-evec-point}.
We remind the reader of Remark \ref{rem:point-cinf-case} which points out that, in the case $k = \infty$, no spectral spread conditions are needed because all inequalities of the form $\nu(\slot,\slot) < \infty$ trivially hold.
\begin{Prop}[Existence and uniqueness of $C_{\textnormal{loc}}^{k,\alpha}$ Koopman eigenvalues and principal eigenfunctions for a point attractor]\label{prop:koopman-cka-fix}
Let $\Phi\colon Q \times \mathbb{T} \to Q$ be a $C^{1}$ dynamical system having a globally attracting hyperbolic fixed point $x_0\in Q$, where either $\mathbb{T} = \mathbb{Z}$ or $\mathbb{T} = \mathbb{R}$.
Fix $k \in \mathbb{N}_{\geq 1}\cup\{+\infty\}$ and $0 \leq \alpha \leq 1$, fix $\mu \in \mathbb{C}$, and let $\psi_1\in C_{\textnormal{loc}}^{k,\alpha}(Q,\mathbb{C})$ be any Koopman eigenfunction satisfying \eqref{eq:koopman-efunc} with this $\mu$.
\emph{\textbf{Uniqueness of Koopman eigenvalues and principal eigenfunctions.}}
Assume that either \\$\nu(e^{\mu},\mathsf{D}_{x_0}\Phi^1) < k + \alpha$ or $\nu(e^{\mu},\mathsf{D}_{x_0}\Phi^1) \leq k$.
\begin{enumerate}
\item\label{item:uniq-thm-1} Then there exists $m = (m_1,\ldots,m_n)\in \mathbb{N}_{\geq 0}^n$ such that $$e^{\mu} = e^{m\cdot \lambda},$$ where $e^{\lambda_1},\ldots,e^{\lambda_n}$ are the eigenvalues of $\mathsf{D}_{x_0} \Phi^1$ repeated with multiplicities and $\lambda\coloneqq (\lambda_1,\ldots,\lambda_n)$.
\item\label{item:uniq-thm-2} Additionally assume that $\psi_1$ is a principal eigenfunction so that $e^{\mu} \in \textnormal{spec}(\mathsf{D}_{x_0}\Phi^1)$, and assume that $(e^\mu,\mathsf{D}_{x_0}\Phi^1)$ is $k$-nonresonant.
Then $\psi_1$ is uniquely determined by $d \psi_1(x_0)$, and if $\mu$ and $d\psi_1(x_0)$ are real, then $\psi_1\colon Q\to \mathbb{R}\subset \mathbb{C}$ is real.
In particular, if $e^{\mu}$ is an algebraically simple eigenvalue of (the complexification of) $\mathsf{D}_{x_0}\Phi^1$ and if $\psi_2$ is any other principal eigenfunction satisfying \eqref{eq:koopman-efunc} with the same $\mu$, then there exists $c\in \mathbb{C}\setminus \{0\}$ such that $$\psi_1 = c\psi_2.$$
\end{enumerate}
\emph{\textbf{Existence of principal eigenfunctions.}} Assume that $\Phi \in C_{\textnormal{loc}}^{k,\alpha}$, that $e^{\mu}\in \textnormal{spec}(\mathsf{D}_{x_0}\Phi^1)$, that $(e^\mu,\mathsf{D}_{x_0}\Phi^1)$ is $k$-nonresonant, and that $\nu(e^\mu,\mathsf{D}_{x_0}\Phi^1)<k+\alpha$.
Let $w\in (\mathsf{T}_{x_0}^* Q)_\mathbb{C}$ be any eigenvector of the (complexified) adjoint $(\mathsf{D}_{x_0}\Phi^1)^*$ with eigenvalue $e^\mu$.
\begin{enumerate}
\item Then there exists a unique principal eigenfunction $\psi \in C_{\textnormal{loc}}^{k,\alpha}(Q,\mathbb{C})$ satisfying \eqref{eq:koopman-efunc} with $\mu$ and satisfying $d\psi(x_0) = w$.
\item In fact, if $P$ is any ``approximate eigenfunction'' satisfying $\mathsf{D}_{x_0}P = w$ and
\begin{equation}\label{eq:Koop-main-approx}
P \circ \Phi^1 = e^\mu P + R
\end{equation}
with $\mathsf{D}_{x_0}^i R = 0$ for all integers $0\leq i < k+\alpha$, then
\begin{equation}\label{eq:Koop-main-converge}
\psi = \lim_{t\to \infty} e^{-\mu t} P \circ \Phi^t,
\end{equation}
in the topology of $C^{k,\alpha}$-uniform convergence on compact subsets of $Q$.
\end{enumerate}
\end{Prop}
\begin{Rem}[Laplace averages]
Given $P\colon Q\to \mathbb{C}$, in the Koopman literature the \concept{Laplace average} $$\psi\coloneqq \lim_{T\to\infty} \frac{1}{T}\int_0^T e^{-\mu t} P\circ\Phi^t\, dt$$ is used to produce a Koopman eigenfunction satisfying \eqref{eq:koopman-efunc} with $\mu$ as long as the limit exists \cite{mauroy2013isostables,mohr2014construction}.
Since convergence of the limit \eqref{eq:Koop-main-converge} clearly implies convergence of the Laplace average to the same limiting function, Proposition \ref{prop:koopman-cka-fix} gives sufficient conditions under which the Laplace average of $P$ exists and is equal to a unique $C_{\textnormal{loc}}^{k,\alpha}$ principal eigenfunction satisfying $\mathsf{D}_{x_0}P = w$.
\end{Rem}
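The following is a minimal numerical sketch of both constructions on a hypothetical planar flow (the system, observable, and initial condition below are illustrative choices, not taken from the references): it approximates the limit \eqref{eq:Koop-main-converge} and the Laplace average for $\dot{x} = -x + y^2$, $\dot{y} = -\tfrac{5}{2}y$, whose principal eigenfunction for $\mu = -1$ is $\psi(x,y) = x + y^2/4$ in closed form.
\begin{verbatim}
import numpy as np

# Hypothetical planar flow with a globally attracting hyperbolic fixed point at 0:
#   xdot = -x + y^2,  ydot = -2.5 y.
# For mu = -1 and P(x, y) = x (dP(0) = dx is an adjoint eigenvector),
# the principal eigenfunction is psi(x, y) = x + y^2/4 exactly.
def rhs(z):
    x, y = z
    return np.array([-x + y**2, -2.5 * y])

mu, dt, T = -1.0, 1e-3, 10.0
ts = np.arange(0.0, T + dt, dt)
traj = np.empty((len(ts), 2))
traj[0] = (0.3, 0.8)
for i in range(len(ts) - 1):           # classical RK4 with a fixed step
    z = traj[i]
    k1 = rhs(z); k2 = rhs(z + 0.5*dt*k1); k3 = rhs(z + 0.5*dt*k2); k4 = rhs(z + dt*k3)
    traj[i + 1] = z + (dt / 6.0) * (k1 + 2*k2 + 2*k3 + k4)

vals = np.exp(-mu * ts) * traj[:, 0]   # e^{-mu t} P(Phi^t(z0)) with P(x, y) = x
laplace_avg = np.sum(0.5 * (vals[1:] + vals[:-1]) * dt) / T   # trapezoid rule
print("direct limit, value at t = T:", vals[-1])
print("Laplace average over [0, T] :", laplace_avg)
print("closed-form psi(z0)         :", traj[0, 0] + traj[0, 1]**2 / 4.0)
\end{verbatim}
Both printed quantities approach $\psi(z_0)$ as $T$ grows; in this example the direct limit converges exponentially fast, while the Laplace average converges only at rate $1/T$ because of the transient.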
\begin{Rem}[Isostables and isostable coordinates]\label{rem:isostable}
It follows from the discussion after \cite[Def.~2]{mauroy2013isostables} that the definition of \concept{isostables} given in that paper---for $\Phi$ having a globally attracting hyperbolic fixed point $x_0$ with $\mathsf{D}_{x_0}\Phi^1$ having a unique eigenvalue $e^{\mu_1}$ (or complex conjugate pair of eigenvalues) of largest modulus---is equivalent to the following.
Isostables as defined in \cite{mauroy2013isostables} are the level sets of the modulus $|\psi_1|$ of a principal eigenfunction $\psi_1$ satisfying \eqref{eq:koopman-efunc} with $\mu = \mu_1$.
Because $e^{\mu_1}$ is the ``slowest'' eigenvalue of $\mathsf{D}_{x_0}\Phi^1$, Proposition \ref{prop:koopman-cka-fix} implies that any such $\psi_1\in C^1(Q,\mathbb{C})$ is unique modulo scalar multiplication for a $C^1$ dynamical system without any further assumptions (since $\nu(e^\mu,\mathsf{D}_{x_0}\Phi^1) = 1$), and such a unique eigenfunction exists if $\Phi\in C^{1,\alpha}_{\textnormal{loc}}$ for some $\alpha > 0$ (since $\nu(e^\mu,\mathsf{D}_{x_0}\Phi^1) = 1 < 1 + \alpha$ for any $\alpha > 0$).
Since the complex conjugate $\bar{\psi}_1$ is a principal eigenfunction satisfying \eqref{eq:koopman-efunc} with $\mu = \bar{\mu}_1$, it follows that the isostables as defined in \cite{mauroy2013isostables} are unique even if $\mu_1 \in \mathbb{C} \setminus \mathbb{R}$.
A uniqueness proof for analytic isostables under the additional assumptions of $(\mathsf{D}_{x_0}\Phi^1, \mathsf{D}_{x_0}\Phi^1)$ $\infty$-nonresonance and an analytic vector field was given in \cite{mauroy2013isostables}.
For the special case that the eigenvalue of largest modulus is real, unique, and algebraically simple, in \cite[p.~23]{mauroy2013isostables} these authors do point out that uniqueness of $C^1$ isostables follows from the fact that the isostables are the unique $C^1$ global strong stable manifolds---leaves of the unique global strong stable foliation---over an attracting, normally hyperbolic, $1$-dimensional, inflowing invariant ``slow manifold'', and this argument works even if the dynamical system is only $C^1$ (see \cite{kvalheim2018global} for detailed information on the global stable foliation of an inflowing invariant manifold).
The slow manifold is itself generally non-unique without further assumptions, but this does not affect the isostable uniqueness argument.
However, as pointed out in \cite[p.~23]{mauroy2013isostables}, this argument does not work when the eigenvalue of largest modulus is not real, because in this case the isostables can no longer be interpreted as strong stable manifolds (e.g., the relevant slow manifold is now $2$-dimensional, so the dimension of the codimension-1 isostables is too large by $1$).
For the case that $\mathbb{T} = \mathbb{R}$ and $\Phi$ has an attracting hyperbolic periodic orbit, several authors have investigated various versions of \concept{isostable coordinates} without restricting attention to the ``slowest'' isostable coordinate.
The authors in \cite[Eq.~5]{wilson2016isostable} defined a ``finite-time'' approximate version of \concept{isostable coordinates} which provide an approximation of our principal eigenfunctions.
Subsequently, \cite[Sec.~2]{shirasaka2017phase} defined a version of ``exact'' isostable coordinates (termed \concept{amplitudes} and \concept{phases}) directly in terms of Koopman eigenfunctions, and in particular our Proposition \ref{prop:koopman-cka-per} and Theorem \ref{th:classify-per} can be used to directly infer existence, uniqueness and regularity properties of these coordinates under relatively weak assumptions.
It appears that \cite{wilson2018greater,monga2019phase} intended to define a different version of ``exact'' isostable coordinates close in spirit to the approximate version in \cite{wilson2016isostable}.
However, these definitions \cite[Eq.~24,~Eq.~58]{wilson2018greater, monga2019phase} are given in terms of a limit which might not exist for principal eigenfunctions other than the ``slowest'', as we show in Example \ref{ex:koop-converge} below.
In any case, it appears that principal Koopman eigenfunctions provide a means for defining all of the isostable coordinates for a periodic orbit attractor which does not require such limits.
\end{Rem}
\begin{Rem}[Relationship to the principal eigenfunctions, principal algebras, and pullback algebras of \cite{mohr2016koopman}]\label{rem:mohr-mezic}
Given a nonlinear dynamical system $\Phi\colon Q\times \mathbb{T} \to Q$ with a globally attracting hyperbolic fixed point $x_0$, Mohr and Mezi\'{c} defined the principal eigenfunctions for the associated linearization $\mathsf{D}_{x_0}\Phi\colon \mathsf{T}_{x_0}Q\times \mathbb{T}\to \mathsf{T}_{x_0}Q$ to be those of the form $v\mapsto w(v)$, where $w\in (\mathsf{T}_{x_0}^* Q)_\mathbb{C}$ is an eigenvector of the complexified adjoint $(\mathsf{D}_{x_0}\Phi^1)^*\colon (\mathsf{T}_{x_0}^*Q)_\mathbb{C} \to (\mathsf{T}_{x_0}^*Q)_\mathbb{C}$ \cite[Def.~2.2]{mohr2016koopman}. They then defined the principal algebra $\mathcal{A}_{\mathsf{D}_{x_0}\Phi^1}$ to be the subalgebra of $C^0(\mathsf{T}_{x_0}Q,\mathbb{C})$ generated by the principal eigenfunctions \cite[Def.~2.3]{mohr2016koopman}.
Mohr and Mezi\'{c} do not define principal eigenfunctions or the principal algebra for the nonlinear system itself but, given a topological conjugacy $\tau\colon \mathsf{T}_{x_0}Q \to Q$ between $\Phi$ and $\mathsf{D}_{x_0}\Phi$, they define the \concept{pullback algebra}
\begin{equation}\label{eq:pullback-algebra}
\left(\mathcal{A}_{\mathsf{D}_{x_0}\Phi^1}\right) \circ \tau^{-1} \coloneqq \{\varphi \circ \tau^{-1}\colon \varphi \in \mathcal{A}_{\mathsf{D}_{x_0}\Phi^1}\}.
\end{equation}
Assuming that $\Phi\in C_{\textnormal{loc}}^{k,\alpha}$, Proposition \ref{prop:koopman-cka-fix} implies that the relationship between the concepts in our Definition \ref{def:principal-eigenfunction} and those of \cite{mohr2016koopman} is as follows.
If $(\mathsf{D}_{x_0}\Phi^1,\mathsf{D}_{x_0}\Phi^1)$ is $k$-nonresonant and either $\nu(\mathsf{D}_{x_0}\Phi^1,\mathsf{D}_{x_0}\Phi^1)< k + \alpha$ or $\nu(\mathsf{D}_{x_0}\Phi^1,\mathsf{D}_{x_0}\Phi^1)\leq k$, our principal eigenfunctions for $\mathsf{D}_{x_0}\Phi^1$ coincide precisely with their principal eigenfunctions $v\mapsto w(v)$, $w\in (\mathsf{T}_{x_0}^*Q)_\mathbb{C}$.
This implies that our principal algebra $\mathcal{A}^{k,\alpha}_{\mathsf{D}_{x_0}\Phi^1}$ coincides with their $\mathcal{A}_{\mathsf{D}_{x_0}\Phi^1}$.
Next, notice that the pullback algebra \eqref{eq:pullback-algebra} is generated by the functions $w \circ \tau^{-1}$ where $w\in (\mathsf{T}_{x_0}^*Q)_\mathbb{C}$ is a principal eigenfunction of the linearization.
If we further assume that the conjugacy $\tau$ is a $C_{\textnormal{loc}}^{k,\alpha}$ diffeomorphism, then the chain rule implies that each $w \circ \tau^{-1}$ is a $C_{\textnormal{loc}}^{k,\alpha}$ principal eigenfunction for $\Phi$, and therefore $\left(\mathcal{A}_{\mathsf{D}_{x_0}\Phi^1}\right)\circ \tau^{-1} = \mathcal{A}^{k,\alpha}_{\Phi}$.
In particular, under the above hypotheses it follows that $\left(\mathcal{A}_{\mathsf{D}_{x_0}\Phi^1}\right)\circ \tau^{-1}$ is independent of $\tau$ and generated by at most $n$ $C_{\textnormal{loc}}^{k,\alpha}$ principal eigenfunctions for $\Phi$.
This is perhaps surprising since \eqref{eq:pullback-algebra} depends on the a priori non-unique conjugacy $\tau$; here the assumption that $\tau$ is a $C_{\textnormal{loc}}^{k,\alpha}$ diffeomorphism is essential.
\end{Rem}
For a globally attracting hyperbolic $\tau$-periodic orbit of a $C_{\textnormal{loc}}^{k,\alpha}$ flow with image $\Gamma$, stable manifold theory \cite{fenichel1974asymptotic,fenichel1977asymptotic,hirsch1977,isochrons,de1995irwin} can be used to show that the global strong stable manifold (isochron) $W^s_{x_0}$ through $x_0\in \Gamma$ is a $C_{\textnormal{loc}}^{k,\alpha}$ submanifold of $Q$, and $\Phi^\tau(W^s_{x_0}) = W^s_{x_0}$.
Furthermore, any eigenfunction $\varphi\in C_{\textnormal{loc}}^{k,\alpha}(W^s_{x_0},\mathbb{C})$ of $\Phi^\tau|_{W^s_{x_0}}$ satisfying $\varphi \circ \Phi^\tau|_{W^s_{x_0}} = e^{\mu \tau}\varphi$ admits the unique extension to an eigenfunction $\psi\in C_{\textnormal{loc}}^{k,\alpha}(Q,\mathbb{C})$ given by $$\psi|_{W^s_{\Phi^{t}(x_0)}}\coloneqq e^{\mu t} \varphi \circ \Phi^{-t}|_{W^s_{\Phi^t(x_0)}}$$
for all $t\in \mathbb{R}$.
This observation combined with Propositions \ref{prop:p-eig-evec-cycle} and \ref{prop:koopman-cka-fix} yields the following result.
\begin{Prop}[Existence and uniqueness of $C_{\textnormal{loc}}^{k,\alpha}$ Koopman eigenvalues and principal eigenfunctions for a limit cycle attractor]\label{prop:koopman-cka-per}
Fix $k\in \mathbb{N}_{\geq 1}\cup \{\infty\}$ and $0 \leq \alpha \leq 1$, and let $\Phi\colon Q \times \mathbb{R} \to Q$ be a $C_{\textnormal{loc}}^{k,\alpha}$ flow having a globally attracting hyperbolic $\tau$-periodic orbit with image $\Gamma \subset Q$.
Fix $x_0\in \Gamma$ and let $E^s_{x_0}$ denote the unique $\mathsf{D}_{x_0}\Phi^\tau$-invariant subspace complementary to $\mathsf{T}_{x_0} \Gamma$.
Let $\psi_1\in C_{\textnormal{loc}}^{k,\alpha}(Q,\mathbb{C})$ be any Koopman eigenfunction satisfying \eqref{eq:koopman-efunc} with $\mu \in \mathbb{C}$ and $\mathbb{T} = \mathbb{R}$.
\emph{\textbf{Uniqueness of Koopman eigenvalues.}}
Assume that either $\nu(e^{\mu\tau },\mathsf{D}_{x_0}\Phi^\tau|_{E^s_{x_0}}) < k + \alpha$ or\\ $\nu(e^{\mu\tau },\mathsf{D}_{x_0}\Phi^\tau|_{E^s_{x_0}}) \leq k$.
\begin{enumerate}
\item Then there exists $m = (m_1,\ldots,m_n)\in \mathbb{N}_{\geq 0}^n$ such that
$$\mu \in m\cdot \lambda + i\frac{2\pi}{\tau}\mathbb{Z},$$ where $e^{\lambda_1\tau},\ldots,e^{\lambda_n\tau}$ are the eigenvalues of $\mathsf{D}_{x_0} \Phi^\tau|_{E^s_{x_0}}$ repeated with multiplicities and $\lambda \coloneqq (\lambda_1,\ldots, \lambda_n)$.
\item Additionally assume that $\psi_1$ is a principal eigenfunction so that $e^{\mu \tau} \in \textnormal{spec}(\mathsf{D}_{x_0}\Phi^\tau|_{E^s_{x_0}})$, and assume that $(e^{\mu\tau}, \mathsf{D}_{x_0}\Phi^\tau|_{E^s_{x_0}})$ is $k$-nonresonant.
Then $\psi_1$ is uniquely determined by $d \psi_1(x_0)$, and if $\mu$ and $d\psi_1(x_0)$ are real, then $\psi_1\colon Q\to \mathbb{R}\subset \mathbb{C}$ is real.
In particular, if $e^{\mu\tau}$ is an algebraically simple eigenvalue of (the complexification of) $\mathsf{D}_{x_0}\Phi^\tau|_{E^s_{x_0}}$ and if $\psi_2$ is any other principal eigenfunction satisfying \eqref{eq:koopman-efunc} with the same $\mu$, then there exists $c\in \mathbb{C}\setminus \{0\}$ such that $$\psi_1 = c\psi_2.$$
\end{enumerate}
\emph{\textbf{Existence of principal eigenfunctions.}} Assume that $e^{\mu \tau} \in \textnormal{spec}(\mathsf{D}_{x_0}\Phi^\tau|_{E^s_{x_0}})$, that $(e^{\mu\tau}, \mathsf{D}_{x_0}\Phi^\tau|_{E^s_{x_0}})$ is $k$-nonresonant, and that $\nu(e^{\mu\tau },\mathsf{D}_{x_0}\Phi^\tau|_{E^s_{x_0}}) < k + \alpha$.
Let $w\in (E^s_{x_0})^*_\mathbb{C}$ be any eigenvector of the (complexified) adjoint $(\mathsf{D}_{x_0}\Phi^\tau|_{E^s_{x_0}})^*$ with eigenvalue $e^{\mu \tau}$.
Then there exists a unique principal eigenfunction $\psi_1\in C_{\textnormal{loc}}^{k,\alpha}(Q,\mathbb{C})$ for $\Phi$ satisfying \eqref{eq:koopman-efunc} with $\mu$ and $\mathbb{T} = \mathbb{R}$ and satisfying $d\psi_1(x_0)|_{E^s_{x_0}}= w$.
\end{Prop}
A well-known example of Sternberg shows that, even for an analytic diffeomorphism $\Phi^1$ of the plane having the globally attracting fixed point $0$, there need not exist a $C^2$ principal eigenfunction corresponding to $e^{\mu}\in \textnormal{spec}(\mathsf{D}_0\Phi^1)$ if $(e^{\mu},\mathsf{D}_0\Phi^1)$ is not $2$-nonresonant \cite[p.~812]{sternberg1957local}.
Concentrating now on the issue of uniqueness of principal eigenfunctions, the following example shows that our nonresonance and spectral spread conditions are both necessary for the uniqueness statements of Propositions \ref{prop:koopman-cka-fix} and \ref{prop:koopman-cka-per}.
\begin{Ex}[Uniqueness of principal eigenfunctions]\label{ex:thm-sharp}
Consider $\Phi = (\Phi_1,\Phi_2)\colon \mathbb{R}^2\times \mathbb{T} \to \mathbb{R}^2$ defined by
\begin{equation}\label{eq:example-theorem-sharp-maps}
\begin{split}
\Phi^t_1(x,y) &= e^{-t} x\\
\Phi^t_2(x,y) &= e^{-(k+\alpha)t}y
\end{split}
\end{equation}
where $k \in \mathbb{N}_{\geq 1}$, $0\leq \alpha \leq 1$, and either $\mathbb{T} = \mathbb{Z}$ or $\mathbb{T} = \mathbb{R}$.
$\Phi$ is a linear dynamical system, so clearly the eigenvalues of $\mathsf{D}_0 \Phi^1$ are $e^{-1}$ and $e^{-(k+\alpha)}$.
Furthermore, for any irrational $\alpha\in [0,1]$, $(e^{-(k+\alpha)},\mathsf{D}_0\Phi^1)$ is $\infty$-nonresonant.
However, if we define $\sigma_\alpha(x)\coloneqq |x|$ for $\alpha > 0$ and $\sigma_{\alpha}(x)\coloneqq x$ for $\alpha = 0$, then for any $k\in\mathbb{N}_{\geq 1}$ and $\alpha \in [0,1]$ both
\begin{equation}\label{eq:ex-h1}
h_1(x,y)\coloneqq y
\end{equation}
and
\begin{equation}\label{eq:ex-h2}
h_2(x,y)\coloneqq y + \sigma_{\alpha}(x)^{k+\alpha}
\end{equation}
are $C^{k,\alpha}$ principal eigenfunctions satisfying \eqref{eq:koopman-efunc} with the same value $$\mu = -(k+\alpha).$$
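Indeed, in either case $\sigma_\alpha(e^{-t}x)^{k+\alpha} = e^{-(k+\alpha)t}\,\sigma_\alpha(x)^{k+\alpha}$, so that $h_2\circ\Phi^t = e^{-(k+\alpha)t}\left(y + \sigma_\alpha(x)^{k+\alpha}\right) = e^{-(k+\alpha)t}h_2$, and likewise $h_1\circ\Phi^t = e^{-(k+\alpha)t}h_1$.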
In particular this shows that $C_{\textnormal{loc}}^{k,\alpha}$ principal Koopman eigenfunctions are not unique (modulo scalar multiplication) even if the $\infty$-nonresonance condition is satisfied.
Since here $h_2 \in C_{\textnormal{loc}}^{k,\alpha}(\mathbb{R}^2,\mathbb{C})$ and $\nu(e^{-(k+\alpha)},\mathsf{D}_0\Phi^1) = k + \alpha$, this shows that the spectral spread condition $\nu(e^{-(k+\alpha)},\mathsf{D}_0\Phi^1) < k + \alpha$ is both necessary and sharp for the principal eigenfunction uniqueness statement of Proposition \ref{prop:koopman-cka-fix} to hold in the case that $\alpha > 0$.
(Note that Proposition \ref{prop:koopman-cka-fix} \emph{does} imply that $C_{\textnormal{loc}}^{k',\alpha'}$ principal eigenfunctions are unique for any $k'+\alpha' > k+\alpha$.)
If instead $k = 1$ and $\alpha = 0$, then $h_1$ and $h_2$ are both $C^1$ eigenfunctions satisfying \eqref{eq:koopman-efunc} with the same value $\mu = -(k+\alpha),$ but now these eigenfunctions are distinguished by their derivatives at the origin, which is consistent with the case that $\nu(e^{-1},\mathsf{D}_0 \Phi^1) = 1$ in the uniqueness statement of Proposition \ref{prop:koopman-cka-fix}.
On the other hand, if $k = 2$ and $\alpha = 0$ so that $(e^{-2},\mathsf{D}_0\Phi^1)$ is not $2$-nonresonant, \eqref{eq:ex-h1} and \eqref{eq:ex-h2} show that analytic eigenfunctions are not unique despite the fact that the spectral spread certainly satisfies $\nu(e^{-2},\mathsf{D}_0\Phi^1) < \infty$.
Hence the nonresonance condition is also necessary for the principal eigenfunction uniqueness statement of Proposition \ref{prop:koopman-cka-fix} to hold.
Finally, taking $\mathbb{T} = \mathbb{R}$, changing the state space $\mathbb{R}^2$ above to $\mathbb{R}^2 \times S^1$, and prescribing $S^1$ with the decoupled dynamics $\Phi_3^t(x,y,\theta)\coloneqq \theta + t \mod 2\pi$ yields an example showing that the spectral spread and nonresonance conditions are both necessary for the uniqueness statement in Proposition \ref{prop:koopman-cka-per} to hold as well.
\end{Ex}
\begin{Ex}[Existence of the limit \eqref{eq:Koop-main-converge} and isostable coordinates]\label{ex:koop-converge}
Existence of the limit in \eqref{eq:Koop-main-converge} is not automatic if the ``approximate eigenfunction'' $P$ is not an approximation to sufficiently high order.
To see this, fix $k \in \mathbb{N}_{\geq 1}$, $\alpha \in [0,1]$, and $r,\epsilon \in \mathbb{R}_{>0}$.
Define $\sigma_\alpha(x)\coloneqq |x|$ for $\alpha > 0$ and $\sigma_{\alpha}(x)\coloneqq x$ for $\alpha = 0$, and consider the $C_{\textnormal{loc}}^{k,\alpha}$ dynamical system $\Phi\colon \mathbb{R}^2\times \mathbb{T} \to \mathbb{R}^2$ defined by
\begin{equation}
\begin{split}
\Phi_1^t(x,y)&= e^{-t}x\\
\Phi_2^t(x,y)&= e^{-rt}(y-\epsilon \sigma_{\alpha}(x)^{k + \alpha}) + \epsilon e^{-(k + \alpha)t} \sigma_{\alpha}(x)^{k+\alpha},
\end{split}
\end{equation}
where either $\mathbb{T} = \mathbb{Z}$ or $\mathbb{T} = \mathbb{R}$.
To see that $\Phi$ is indeed a dynamical system (i.e., that $\Phi$ satisfies the group property $\Phi^{t+s}=\Phi^t\circ \Phi^s$), define the diagonal linear system $\widetilde{\Phi}^t(x,y) = (e^{-t} x, e^{-rt}y)$ and the $C^{k,\alpha}$ diffeomorphism $H\colon \mathbb{R}^2\to \mathbb{R}^2$ via $H(x,y) \coloneqq (x, y + \epsilon \sigma_{\alpha}(x)^{k + \alpha})$, and note that $\Phi^t = H \circ \widetilde{\Phi}^t \circ H^{-1}$.
In other words, $\Phi$ is obtained from a diagonal linear dynamical system via a $C^{k,\alpha}$ change of coordinates; note also that this change of coordinates can be made arbitrarily close to the identity by taking $\epsilon$ arbitrarily small.
We note that $\nu(e^{-r},\mathsf{D}_0 \Phi^1) = r$ and that, for any choice of $\epsilon$, the analytic function $P(x,y)\coloneqq y$ satisfies $$P \circ \Phi^1 = e^{-r} P + R$$
where $\mathsf{D}^j_{(0,0)}R = 0$ for all integers $0\leq j < k+\alpha$.
However,
\begin{equation}
\label{eq:ex-diverge}
\begin{split}
\lim_{t\to\infty}e^{rt} P \circ \Phi^t(x,y) &= y- \epsilon \sigma_{\alpha}(x)^{k + \alpha} + \epsilon \sigma_{\alpha}(x)^{k + \alpha} \lim_{t\to\infty} e^{(r-(k + \alpha))t}\\
&=
\begin{cases}
y-\epsilon \sigma_{\alpha}(x)^{k + \alpha} & 0 < r < k + \alpha \\
y & r = k + \alpha\\
+\infty & r > k + \alpha
\end{cases}
\end{split}
\end{equation}
for any $x \neq 0$ and $\epsilon > 0$.
We see that the limit \eqref{eq:ex-diverge} diverges whenever $\nu(e^{-r},\mathsf{D}_0 \Phi^1) = r > k+\alpha$, but the limit converges when $r\leq k + \alpha$.
For the case that $r$ is not an integer, this is consistent with Proposition \ref{prop:koopman-cka-fix} which guarantees that the limit converges if $\Phi\in C_{\textnormal{loc}}^{k,\alpha}$, if $\nu(e^{-r},\mathsf{D}_0 \Phi^1) < k+\alpha$, and if $(e^{-r},\mathsf{D}_0 \Phi^1)$ is $k$-nonresonant.
When $r$ is not an integer and $r = k + \alpha$, convergence is also guaranteed by Proposition \ref{prop:koopman-cka-fix} for this specific example, because then (i) $(e^{-r},\mathsf{D}_0 \Phi^1)$ is $\infty$-nonresonant, (ii) $\Phi$ is linear and hence $C^\infty$, and (iii) Proposition \ref{prop:koopman-cka-fix} guarantees that this limit always exists if $\Phi\in C^\infty$ and $(e^{-r},\mathsf{D}_0 \Phi^1)$ is $\infty$-nonresonant because the spectral spread condition $\nu(e^{-r},\mathsf{D}_0 \Phi^1) < \infty$ always holds.
As alluded to in Remark \ref{rem:no-nonres-needed}, the preceding reasoning can actually be applied even without the assumption that $r$ is not an integer if Lemma \ref{lem-make-approx-exact} is used instead of Proposition \ref{prop:koopman-cka-fix} as the tool of inference.
We emphasize that the divergence in \eqref{eq:ex-diverge} is associated purely with the spectral spread condition since, e.g., we can choose $r > 0$ so that $(e^{-r},\mathsf{D}_0 \Phi^1)$ is $\infty$-nonresonant and take $\alpha = 0$ so that $\Phi$ is analytic.
Note that taking $\mathbb{T} = \mathbb{R}$, changing the state space $\mathbb{R}^2$ to $\mathbb{R}^2 \times S^1$, and prescribing $S^1$ with the decoupled dynamics $\Phi_3^t(x,y,\theta)\coloneqq \theta + t \mod 2\pi$ yields a corresponding example with a globally attracting hyperbolic periodic orbit $\{(0,0)\}\times S^1$.
For this example, the definitions \cite[Eq.~24,~Eq.~58]{wilson2018greater, monga2019phase} would attempt to \emph{define} the isostable coordinate (principal eigenfunction in our terminology) $\psi_2$ satisfying \eqref{eq:koopman-efunc} with $\mu_2 \coloneqq -r$ via the limit \eqref{eq:ex-diverge}, but \eqref{eq:ex-diverge} shows that this limit does not exist if $r > k + \alpha$.
This phenomenon should be compared with the explanation in the preceding paragraph based on our general results.
\end{Ex}
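For readers who wish to see the dichotomy in \eqref{eq:ex-diverge} numerically, the following minimal sketch (with illustrative parameter choices) evaluates $e^{rt}P\circ\Phi^t(x,y)$ using the closed-form maps above for one value of $r$ below $k+\alpha$ and one above.
\begin{verbatim}
import numpy as np

# Illustrative parameters: k = 2, alpha = 0.5, epsilon = 1, observable P(x, y) = y.
k, alpha, eps = 2, 0.5, 1.0
sigma = lambda x: np.abs(x)                     # sigma_alpha for alpha > 0

def Phi2(t, x, y, r):
    """Second component of the closed-form flow Phi^t from the example above."""
    return (np.exp(-r * t) * (y - eps * sigma(x)**(k + alpha))
            + eps * np.exp(-(k + alpha) * t) * sigma(x)**(k + alpha))

x0, y0 = 0.7, -0.4
for r in (1.5, 3.5):                            # r < k+alpha vs. r > k+alpha
    print("r =", r)
    for t in (5.0, 10.0, 20.0):
        print("  t =", t, " e^{rt} P(Phi^t) =", np.exp(r * t) * Phi2(t, x0, y0, r))
\end{verbatim}
With $r = 1.5 < k+\alpha$ the printed values settle to $y_0 - \epsilon|x_0|^{k+\alpha}$, while with $r = 3.5 > k+\alpha$ they grow without bound, in agreement with \eqref{eq:ex-diverge}.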
\iffalse
\begin{figure}
\centering
\includegraphics[width=0.33\columnwidth]{figs/h1}
\includegraphics[width=0.33\columnwidth]{figs/h2}
\includegraphics[width=0.33\columnwidth]{figs/semilogy-plots}
\caption{The top two plots show an orbit of \eqref{eq:example-theorem-sharp-maps} superimposed on contours of $h_1$ and $h_2$, with $\mathbb{T} = \mathbb{R}$, $k = 2$, and $\alpha =\pi-3$. The bottom semilogy plot shows $h_1$ and $h_2$ evaluated along the orbit as a function of a $t\in \mathbb{R}$.
Both curves are lines with the same slope, illustrating that $h_1$ and $h_2$ decrease at the same exponential rate along trajectories.}
\label{fig:ck-sharpness}
\end{figure}
\fi
\subsection{Classification of all $C^\infty$ Koopman eigenfunctions}\label{sec:app-classify}
\begin{Not}
To improve the readability of Theorems \ref{th:classify-point} and \ref{th:classify-per} below, we introduce the following multi-index notation.
We define an $n$-dimensional multi-index to be an $n$-tuple $i = (i_1,\ldots, i_n)\in \mathbb{N}^n_{\geq 0}$ of nonnegative integers, and define its sum to be $|i|\coloneqq i_1+\cdots + i_n.$
For a multi-index $i\in \mathbb{N}^n_{\geq 0}$ and $z = (z_1,\ldots, z_n)\in \mathbb{C}^n$, we define $z^{[i]}\coloneqq z_1^{i_1}\cdots z_n^{i_n}.$
Given a $\mathbb{C}^n$-valued function $\psi = (\psi_1,\ldots, \psi_n)\colon Q\to \mathbb{C}^n$, we define $\psi^{[i]}\colon Q\to \mathbb{C}$ via $\psi^{[i]}(x)\coloneqq (\psi(x))^{[i]}$ for all $x\in Q$.
We also define the complex conjugate of $\psi = (\psi_1,\ldots, \psi_n)$ element-wise: $\bar{\psi}\coloneqq (\bar{\psi}_1,\ldots, \bar{\psi}_n).$
\end{Not}
\begin{Th}[Classification of all $C^\infty$ eigenfunctions for a point attractor]\label{th:classify-point}
Let $\Phi\colon Q \times \mathbb{T} \to Q$ be a $C^{\infty}$ dynamical system having a globally attracting hyperbolic fixed point $x_0\in Q$, where either $\mathbb{T} = \mathbb{Z}$ or $\mathbb{T} = \mathbb{R}$.
Assume that $\mathsf{D}_{x_0}\Phi^1$ is semisimple and that $(\mathsf{D}_{x_0}\Phi^1,\mathsf{D}_{x_0}\Phi^1)$ is $\infty$-nonresonant.
Letting $n = \dim(Q)$, it follows that there exists an $n$-tuple $$\psi = (\psi_1,\ldots,\psi_n)$$ of $C^\infty$ principal eigenfunctions such that every $C^\infty$ Koopman eigenfunction $\varphi$ is a (finite) sum of scalar multiples of products of the $\psi_i$ and their complex conjugates $\bar{\psi}_i$:
\begin{equation}\label{eq:th-classify-expansion}
\varphi = \sum_{|i|+|\ell| \leq k}c_{i,\ell}\psi^{[i]}\bar{\psi}^{[\ell]}
\end{equation}
for some $k \in \mathbb{N}_{\geq 1}$ and some coefficients $c_{i,\ell}\in \mathbb{C}$.
\end{Th}
\begin{proof}
By Proposition \ref{prop:sternberg}, there exists a proper $C^\infty$ embedding $Q\hookrightarrow \mathbb{C}^n$ which maps $Q$ diffeomorphically onto an $\mathbb{R}$-linear subspace, maps $x_0$ to $0$, and semiconjugates $\Phi$ to the diagonal linear flow $\Theta^t(z_1,\ldots,z_n)=(e^{\lambda_1 t} z_1,\ldots, e^{\lambda_n t} z_n)$.
Identifying $Q$ with its image under the embedding, we may view $Q$ as a $\Theta$-invariant submanifold of $\mathbb{C}^n$ and $\Phi = \Theta|_{Q\times \mathbb{T}}$.
Let $\varphi \in C^\infty(Q,\mathbb{C})$ be any $C^\infty$ Koopman eigenfunction satisfying \eqref{eq:koopman-efunc} with $\mu\in \mathbb{C}$.
Write $z = (z_1,\ldots,z_n)\in \mathbb{C}^n$.
For any $k\in \mathbb{N}_{\geq 1}$, by Taylor's theorem we may write
\begin{equation}\label{eq:classify-point-expansion}
\begin{split}
\varphi(z) &= \sum_{|i|+|\ell|\leq k} c_{i,\ell}z^{[i]}\bar{z}^{[\ell]}+ R_k(z)
\eqqcolon P_k(z) + R_k(z)
\end{split}
\end{equation}
where $R_k(0) = \mathsf{D}_0 R_k = \cdots = \mathsf{D}_0^k R_k = 0$ and the coefficients $c_{i,\ell}$ are independent of $k$.
Defining $\lambda\coloneqq (\lambda_1,\ldots, \lambda_n)$ and writing the eigenfunction equation $\varphi \circ \Phi^1 = e^{\mu} \varphi$ in terms of the expansion \eqref{eq:classify-point-expansion} yields
\begin{align*}
\sum_{|i| + |\ell| \leq k} e^{i\cdot \lambda + \ell\cdot \bar{\lambda}}c_{i,\ell}z^{[i]}\bar{z}^{[\ell]} + R_k\circ \Phi^1(z)
= \sum_{|i| + |\ell| \leq k} e^{\mu} c_{i,\ell}z^{[i]}\bar{z}^{[\ell]} + R_k(z).
\end{align*}
Equating coefficients of $z^{[i]}\bar{z}^{[\ell]}$ implies that we must have $c_{i,\ell} = 0$ whenever
$e^{\mu} \neq e^{i\cdot \lambda + \ell\cdot \bar{\lambda}},$
which implies that $P_k$ is equal to a sum of products of the principal eigenfunctions of the form $\psi_j(z)\coloneqq z_j$, $\bar{\psi}_j(z) = \bar{z}_j$ such that each such product $z^{[i]}\bar{z}^{[\ell]}$ is itself an eigenfunction satisfying \eqref{eq:koopman-efunc} with $\mu$, and therefore $P_k$ is also an eigenfunction satisfying \eqref{eq:koopman-efunc} with $\mu$.
It follows that the same is true of $R_k = \varphi - P_k$.
If we choose $k \in \mathbb{N}_{\geq 1}$ sufficiently large so that $k > \nu(e^{\mu},\mathsf{D}_0 \Phi^1)$, Proposition \ref{prop:uniqueness-without-nonresonance} implies that $R_k\equiv 0$, so it follows that $\varphi = P_k$.
This completes the proof.
\end{proof}
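For instance (an illustrative example), consider the linear flow $\Phi^t(x,y) = (e^{-t}x,\, e^{-\sqrt{2}\,t}y)$ on $\mathbb{R}^2$.
Here $\mathsf{D}_0\Phi^1$ is semisimple with eigenvalues $e^{-1}$ and $e^{-\sqrt{2}}$, and $(\mathsf{D}_0\Phi^1,\mathsf{D}_0\Phi^1)$ is $\infty$-nonresonant by irrationality of $\sqrt{2}$, so Theorem \ref{th:classify-point} applies with $\psi = (x,y)$.
Since $i + \sqrt{2}\,\ell$ determines the pair $(i,\ell)\in\mathbb{N}^2_{\geq 0}$ uniquely, distinct monomials $x^i y^\ell$ have distinct eigenvalues $e^{-(i+\sqrt{2}\ell)}$, and therefore every $C^\infty$ Koopman eigenfunction of this flow is a scalar multiple of a single monomial $x^i y^\ell$.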
For a globally attracting hyperbolic $\tau$-periodic orbit of a $C_{\textnormal{loc}}^{k,\alpha}$ flow with image $\Gamma$, let $W^s_{x_0}$ be the global strong stable manifold (isochron) through the point $x_0\in \Gamma$.
As discussed in the proof of Proposition \ref{prop:floq-norm-form}, there is a unique (modulo scalar multiplication) continuous eigenfunction satisfying \eqref{eq:koopman-efunc} with $\mu = i\frac{2\pi}{\tau}$ and $\mathbb{T} = \mathbb{R}$, and this eigenfunction is in fact $C^\infty$ for a $C^\infty$ flow.
In the theorem below, let $\psi_\theta$ be the unique such eigenfunction satisfying $\psi_\theta|_{W^s_{x_0}}\equiv 1$, where $W^s_{x_0}$ is the global strong stable manifold (isochron) through the point $x_0$ in the theorem statement.
Explicitly, $\psi_\theta$ is given by
$$\psi_\theta|_{W^s_{\Phi^t(x_0)}} = e^{i\frac{2\pi}{\tau} t}$$
for all $t\in \mathbb{R}$.
This defines $\psi_\theta$ on all of $Q$ since $Q = \bigcup_{t\in \mathbb{R}}W^s_{\Phi^t(x_0)}$, and the definition makes sense since $W^s_{\Phi^{j\tau}(x_0)} = W^s_{x_0}$ for all $j\in \mathbb{Z}$.
\begin{Th}[Classification of all $C^\infty$ eigenfunctions for a limit cycle attractor]\label{th:classify-per}
Let $\Phi\colon Q \times \mathbb{R} \to Q$ be a $C^{\infty}$ dynamical system having a globally attracting hyperbolic $\tau$-periodic orbit with image $\Gamma\subset Q$.
Fix $x_0 \in \Gamma$ and denote by $E^s_{x_0}$ the unique $\mathsf{D}_{x_0}\Phi^\tau$-invariant subspace complementary to $\mathsf{T}_{x_0}\Gamma$.
Assume that $\mathsf{D}_{x_0}\Phi^\tau|_{E^s_{x_0}}$ is semisimple and that $(\mathsf{D}_{x_0}\Phi^\tau|_{E^s_{x_0}},\mathsf{D}_{x_0}\Phi^\tau|_{E^s_{x_0}})$ is $\infty$-nonresonant.
Letting $n + 1= \dim(Q)$, it follows that there exists an $n$-tuple $$\psi = (\psi_1,\ldots,\psi_n)$$ of $C^\infty$ principal eigenfunctions such that every $C^\infty$ Koopman eigenfunction $\varphi$ is a (finite) sum of scalar multiples of products of integer powers of $\psi_\theta$ with products of the $\psi_i$ and their complex conjugates $\bar{\psi}_i$:
\begin{equation}
\varphi = \sum_{|\ell|+|m|\leq k}c_{\ell,m}\psi^{[\ell]}\bar{\psi}^{[m]} \psi_\theta^{j_{\ell,m}}
\end{equation}
for some $k\in \mathbb{N}_{\geq 1}$, some coefficients $c_{\ell,m} \in \mathbb{C}$, and $j_{\ell,m} \in \mathbb{Z}$.
\end{Th}
\begin{proof}
Let $W^s_{x_0}$ be the $C^\infty$ global strong stable manifold through $x_0$.
We remind the reader of the facts $Q = \bigcup_{t\in \mathbb{R}}W^s_{\Phi^t(x_0)}$, $W^s_{\Phi^t(x_0)} = \Phi^t(W^s_{x_0})$ which are implicitly used in the remainder of the proof.
First, we note that every eigenfunction $\chi\in C^\infty(W^s_{x_0},\mathbb{C})$ of $F^j(x)\coloneqq \Phi^{j\tau}|_{W^s_{x_0}}(x)$ satisfying \eqref{eq:koopman-efunc} with $\mu\in \mathbb{C}$ and $\mathbb{T} = \mathbb{Z}$ admits a unique extension to an eigenfunction $\tilde{\chi}\in C^\infty(Q,\mathbb{C})$ of $\Phi$ satisfying \eqref{eq:koopman-efunc} with $\mu$ and $\mathbb{T} = \mathbb{R}$; this unique extension $\tilde{\chi}$ is defined via
\begin{align}\label{eq:th-classify-per-extension-formula}
\tilde{\chi}|_{W^s_{\Phi^{-t}(x_0)}} = e^{-\mu t} \chi \circ \Phi^t|_{W^s_{\Phi^{-t}(x_0)}} \end{align}
for all $t\in \mathbb{R}$.
$\chi$ is a principal eigenfunction if and only if its extension $\tilde{\chi}$ is.
Next, let $\varphi\in C^\infty(Q,\mathbb{C})$ be an eigenfunction satisfying \eqref{eq:koopman-efunc} with $\mu$ and $\mathbb{T} = \mathbb{R}$.
Theorem \ref{th:classify-point} implies that $\varphi|_{W^s_{x_0}}$ is equal to a sum of products of principal eigenfunctions $\chi_1,\ldots,\chi_n,\bar{\chi}_1,\ldots, \bar{\chi}_n$ of $\Phi^\tau|_{W^s_{x_0}}$ of the form:
\begin{equation}\label{eq:classify-per-expand-1}
\varphi|_{W^s_{x_0}} = \sum_{|\ell|+|m| \leq k}c_{\ell,m}\chi^{[\ell]}\bar{\chi}^{[m]}
\end{equation}
for some $k\in \mathbb{N}_{\geq 1}$, where $\chi = (\chi_1,\ldots,\chi_n)$.
Let $\lambda = (\lambda_1, \ldots, \lambda_n) \in \mathbb{C}^n$ be such that each $\chi_j$ satisfies $\chi_j \circ \Phi^\tau|_{W^s_{x_0}} = e^{\lambda_j \tau}\chi_j$.
The proof of Theorem \ref{th:classify-point} showed that
\begin{equation}\label{eq:th-classify-per-exp-equal}
e^{\mu \tau} = e^{(\ell\cdot \lambda + m\cdot \bar{\lambda})\tau}
\end{equation}
for all $\ell,m\in \mathbb{N}^{n}_{\geq 0}$ such that $c_{\ell,m} \neq 0$, so for such $\ell,m$ we have
\begin{equation}\label{eq:th-classify-per-log-equal}
\mu = \ell\cdot \lambda + m\cdot \bar{\lambda}+ i \frac{2\pi}{\tau} j_{\ell,m}
\end{equation}
for some $j_{\ell,m} \in \mathbb{Z}$.
By the previous paragraph, we may uniquely write $\chi = \psi|_{W^s_{x_0}} = (\psi_1|_{W^s_{x_0}},\ldots,\psi_n|_{W^s_{x_0}})$ for principal eigenfunctions $\psi_i$ of $\Phi$ satisfying \eqref{eq:koopman-efunc} with $\lambda_i$ and $\mathbb{T} = \mathbb{R}$.
Using \eqref{eq:classify-per-expand-1},\eqref{eq:th-classify-per-log-equal}, and the extension formula \eqref{eq:th-classify-per-extension-formula}, we obtain
\begin{align*}
\varphi|_{W^s_{\Phi^{-t}(x_0)}}&= \sum_{|\ell|+|m| \leq k} c_{\ell,m}e^{-\mu t}\cdot \left(\chi^{[\ell]}\bar{\chi}^{[m]}\right) \circ \Phi^t|_{W^s_{\Phi^{-t}(x_0)}}\\
&= \sum_{|\ell|+ |m|\leq k} c_{\ell,m}e^{-\mu t}\cdot \left(\psi^{[\ell]}\bar{\psi}^{[m]}\right)|_{W^s_{x_0}} \circ \Phi^t|_{W^s_{\Phi^{-t}(x_0)}}\\
&= \sum_{|\ell|+ |m|\leq k} c_{\ell,m} e^{-(i\frac{2\pi}{\tau}j_{\ell,m})t} \left(\psi^{[\ell]}\bar{\psi}^{[m]}\right)|_{W^s_{\Phi^{-t}(x_0)}}\\
&= \sum_{|\ell|+|m|\leq k} c_{\ell,m} \left(\psi^{[\ell]}\bar{\psi}^{[m]}\psi_\theta^{j_{\ell,m}}\right)|_{W^s_{\Phi^{-t}(x_0)}}
\end{align*}
for all $t\in \mathbb{R}$ as desired.
To obtain the last equality we used the fact that $\psi_\theta|_{W^s_{x_0}}\equiv 1$, so the extension formula \eqref{eq:th-classify-per-extension-formula} implies that $\psi_\theta|_{W^s_{\Phi^{-t}(x_0)}}\equiv e^{-i\frac{2\pi}{\tau}t}$ and hence also $\left(\psi_\theta^{j_{\ell,m}}\right)|_{W^s_{\Phi^{-t}(x_0)}}\equiv e^{-(i\frac{2\pi}{\tau}j_{\ell,m})t}.$
This completes the proof.
\end{proof}
\section{Proofs of the main results}\label{sec:proofs-main-results}
\subsection{Proof of Theorem \ref{th:main-thm}}
In this section we prove Theorem \ref{th:main-thm}, which we repeat here for convenience.
\ThmMain*
We prove the uniqueness and existence portions of Theorem \ref{th:main-thm} in the following \S \ref{sec:main-proof-uniq} and \S \ref{sec:main-proof-exist}, respectively.
\subsubsection{Proof of uniqueness}\label{sec:main-proof-uniq}
In this section, we prove the uniqueness portion of Theorem \ref{th:main-thm}.
The proof of uniqueness consists of an algebraic part and an analytic part.
The algebraic portion is carried out in Lemmas \ref{lem:nonresonance-implies-invertible} and \ref{lem:jets-zero-maps}, and the analytic portion is carried out in Lemma \ref{lem:psi-identically-0}.
\begin{Lem}\label{lem:nonresonance-implies-invertible}
Let $k \in \mathbb{N}_{\geq 1}\cup \{\infty\}$, $X\in \mathbb{C}^{m\times m}$, and $Y\in \mathbb{R}^{n \times n}$ be such that $(X,Y)$ is $k$-nonresonant.
For all $1 < i \leq k$, let $\mathcal{L}((\mathbb{R}^n)^{\otimes i},\mathbb{C}^m)$ denote the space of linear maps from the $i$-fold tensor product $(\mathbb{R}^n)^{\otimes i}$ to $\mathbb{C}^m$, and define the linear operator
\begin{equation}
T_i \colon \mathcal{L}((\mathbb{R}^n)^{\otimes i},\mathbb{C}^m) \to \mathcal{L}((\mathbb{R}^n)^{\otimes i},\mathbb{C}^m), \qquad T_i(P)\coloneqq P Y^{\otimes i} - X P.
\end{equation}
(By this formula we mean that $T_i(P)$ acts on tensors $\tau \in (\mathbb{R}^n)^{\otimes i}$ via $\tau \mapsto P(Y^{\otimes i}(\tau)) - XP(\tau)$.)
Then for all $1 < i \leq k$, $T_i$ is a linear isomorphism.
(The conclusion holds vacuously if $k = 1$.)
\end{Lem}
\begin{proof}
Let $\lambda_1,\ldots, \lambda_n$ and $\mu_1,\ldots, \mu_m$ respectively be the eigenvalues of $Y$ and $X$ repeated with multiplicity.
First assume that $Y$ and $X$ are both semisimple, i.e., diagonalizable over $\mathbb{C}$.
Identifying $Y$ with its complexification, let $e_1,\ldots, e_n \in \mathbb{C}^n$ be a basis of eigenvectors for $Y$ and let $e^1, \ldots, e^n \in (\mathbb{C}^n)^*$ be the associated dual basis.
Let $f_1,\ldots, f_m \in \mathbb{C}^m$ be a basis of eigenvectors for $X$.
Fix any integer $i$ with $1< i \leq k$, any $p\in \{1,\ldots, m\}$, and any multi-indices $\ell,j\in \{1,\ldots,n\}^i$; defining $e^{\otimes[\ell]}\coloneqq e^{\ell_1}\otimes \cdots \otimes e^{\ell_i}$ and similarly for $e_{\otimes [j]}$, we compute
\begin{align*}
T_i\left(f_p \otimes e^{\otimes [\ell]}\right)\cdot e_{\otimes [j]} &= \lambda_{j_1}\cdots \lambda_{j_i}\cdot (e^{\otimes[\ell]}\cdot e_{\otimes[j]})f_p
- \mu_p\cdot (e^{\otimes[\ell]}\cdot e_{\otimes[j]}) f_p
\\&= \delta^\ell_j\cdot \left(\lambda_{j_1}\cdots \lambda_{j_i} - \mu_p\right)f_p
\end{align*}
(no summation implied), where the multi-index Kronecker delta $\delta^{\ell}_\ell = 1$ and $\delta^\ell_j = 0$ if $\ell \neq j$.
Hence the $f_p \otimes e^{\otimes[\ell]}$ are eigenvectors of $T_i$ with eigenvalues $\left(\lambda_{\ell_1}\cdots \lambda_{\ell_i} - \mu_p\right)$, and dimension counting implies that these are all of the eigenvector/eigenvalue pairs.
The $k$-nonresonance assumption implies that none of these eigenvalues are zero, so $T_i$ is invertible.
Since the operator $T_i$ depends continuously on the matrices $X$ and $Y$, since eigenvalues of a matrix depend continuously on the matrix, and since semisimple matrices are dense, it follows by continuity that the eigenvalues of $T_i$ are all of the form $\left(\lambda_{j_1}\cdots \lambda_{j_i} - \mu_p\right) \neq 0$ even if one or both of $X$ and $Y$ are not semisimple (c.f. \cite[p.~37]{nelson1970topics}).
Hence $T_i$ is still invertible in the case of general $X$ and $Y$.
\end{proof}
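For orientation, consider the scalar case $n = m = 1$: then $\mathcal{L}(\mathbb{R}^{\otimes i},\mathbb{C})\cong \mathbb{C}$, the operator reduces to $T_i(P) = (Y^i - X)P$, and invertibility of $T_i$ for all $1 < i \leq k$ is exactly the $k$-nonresonance condition $X \neq Y^i$ for these $i$.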
\begin{Lem}\label{lem:jets-zero-maps}
Let $F\in C^1(\mathbb{R}^n,\mathbb{R}^n)$ have the origin as a fixed point.
Let $k \in \mathbb{N}_{\geq 1}\cup \{\infty\}$ and $X\in \mathbb{C}^{m\times m}$ be such that $(X,\mathsf{D}_0 F)$ is $k$-nonresonant.
Assume that $\psi\in C^k(\mathbb{R}^n,\mathbb{C}^m)$ satisfies $\mathsf{D}_0 \psi = 0$ and
\begin{equation}\label{eq:lem-uniq-psi-conj}
\psi \circ F = X \psi.
\end{equation}
Then it follows that
\begin{equation*}
\mathsf{D}^i_0 \psi= 0
\end{equation*}
for all $1 < i \leq k$. (The conclusion holds vacuously if $k = 1$.)
\end{Lem}
\begin{Rem}
We can restate the conclusion of Lemma \ref{lem:jets-zero-maps} in the language of jets \cite{hirsch1976differential,golubitsky1985singularities,smoothInvariant}.
If $\psi$ is a linearizing factor such that the $1$-jet $j^1_0(\psi - \psi(0)) = 0$, then automatically the $k$-jet $j^k_0(\psi - \psi(0)) = 0$.
\end{Rem}
\begin{proof}
We will prove the lemma by induction on $i$.
The base case, $\mathsf{D}_0^1 \psi = \mathsf{D}_0 \psi = 0$, is one of the hypotheses of the lemma.
For the inductive step, assume that $\mathsf{D}_0 \psi = \cdots = \mathsf{D}_0^{i}\psi = 0$ for an integer $i$ satisfying $1 \leq i \leq k-1$.
Differentiating \eqref{eq:lem-uniq-psi-conj} $(i+1)$ times using the chain rule and the inductive hypothesis, we obtain\footnote{The ``higher-order chain rule,'' also known as Fa\`{a} di Bruno's formula, gives a general expression for higher-order derivatives of the composition of two functions (see \cite{jacobs2014stare} for an exposition).
Our inductive hypothesis implies that every term in Fa\`{a} di Bruno's formula is zero except for those appearing in \eqref{eq:lem-chain-rule}; however, it is easy to deduce \eqref{eq:lem-chain-rule} directly without using the full strength of this formula.}
\begin{equation}\label{eq:lem-chain-rule}
\left(\mathsf{D}_0^{(i+1)} \psi\right) \left(\mathsf{D}_0 F\right)^{\otimes (i+1)} - X\mathsf{D}_0^{(i+1)} \psi = T_{(i+1)}(\mathsf{D}_0^{(i+1)}\psi) = 0,
\end{equation}
where the linear operator $T_{(i+1)} \colon \mathcal{L}((\mathbb{R}^n)^{\otimes (i+1)},\mathbb{C}^m)\to \mathcal{L}((\mathbb{R}^n)^{\otimes (i+1)},\mathbb{C}^m)$ is as defined in Lemma \ref{lem:nonresonance-implies-invertible} (taking $Y \coloneqq \mathsf{D}_0 F$).\footnote{Note that here $ \left(\mathsf{D}_0 F\right)^{\otimes (i+1)}$ denotes the tensor product of $\mathsf{D}_0 F$ with itself $i+1$ times, and is distinct from the multi-index notation $(\slot)^{\otimes[\ell]}$ used in Lemma \ref{lem:nonresonance-implies-invertible}.}
In deriving \eqref{eq:lem-chain-rule} we have used the fact that symmetric tensors are completely determined by their action on tensors of the form $v^{\otimes i}$ \cite[Thm~1]{thomas2014polarization}.
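For concreteness (spelling out only the first instance of the general step), when $i = 1$ the inductive hypothesis is just $\mathsf{D}_0\psi = 0$, and differentiating \eqref{eq:lem-uniq-psi-conj} twice at the origin gives
\begin{equation*}
\mathsf{D}^2_0\psi\left(\mathsf{D}_0 F\right)^{\otimes 2} + \mathsf{D}_0\psi\cdot \mathsf{D}^2_0 F = X\mathsf{D}^2_0\psi;
\end{equation*}
since the middle term vanishes, this is exactly \eqref{eq:lem-chain-rule} with $i+1 = 2$.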
Lemma \ref{lem:nonresonance-implies-invertible} implies that $T_{(i+1)}$ is invertible, so \eqref{eq:lem-chain-rule} implies that $\mathsf{D}_0^{(i+1)}\psi = 0$.
This completes the inductive step and the proof.
\end{proof}
\begin{Lem}\label{lem:psi-identically-0}
Let $F\in C^1(\mathbb{R}^n,\mathbb{R}^n)$ be a diffeomorphism such that the origin is a globally attracting hyperbolic fixed point for the dynamical system defined by iterating $F$.
Fix $k\in \mathbb{N}_{\geq 1}\cup \{\infty\}$ and $0\leq \alpha \leq 1$.
Let $e^A\in \mathsf{GL}(m,\mathbb{C})$ have spectral radius $\rho(e^A) < 1$ and satisfy either $\nu(e^A,\mathsf{D}_0 F) < k + \alpha$ or $\nu(e^A,\mathsf{D}_0 F) \leq k$.
Assume $\psi\in C_{\textnormal{loc}}^{k,\alpha}(\mathbb{R}^n,\mathbb{R}^m)$ satisfies
\begin{equation}\label{eq:lem-uniq-remainder-conj}
\psi \circ F = e^{A} \psi
\end{equation}
and
\begin{equation}\label{eq:psi-ders-vanish}
\mathsf{D}^i_0 \psi= 0
\end{equation}
for all $1 \leq i \leq k$.
Then $\psi \equiv 0$.
\end{Lem}
\begin{proof}
We first observe that since (i) $0$ is asymptotically stable for the iterated dynamical system defined by $F$, (ii) $\psi$ is continuous, and (iii) $\rho(e^{A})<1$, it follows that $\psi(0) = 0$ since
\begin{equation}\label{eq:psi-zero-zero}
0 = \lim_{n\to \infty}e^{nA}\psi(x_0) = \lim_{n\to\infty}\psi(F^n(x_0)) = \psi(0)
\end{equation}
for any $x_0 \in \mathbb{R}^n \setminus \{0\}$.
The second equality follows from \eqref{eq:lem-uniq-remainder-conj}.
For the remainder of the proof, define $x_j \coloneqq F^j(x_0)$ for $j\in \mathbb{N}$, and choose $r\in \mathbb{R}$ as follows: (i) if $\alpha = 0$ define $r\coloneqq k$, and (ii) if $\alpha > 0$ define $r\in \mathbb{R}$ to be any number satisfying $\nu(e^A,\mathsf{D}_0 F)<r< k+\alpha$.
Taylor's theorem for $C_{\textnormal{loc}}^{k,\alpha}$ functions \cite[p.~162]{de1999regularity} says that $$\psi(x) = \sum_{i=0}^k \mathsf{D}_0^i \psi\cdot x^{\otimes i} + R(x),$$
where $\lim_{x\to 0}\frac{R(x)}{\norm{x}^{r}} = 0$.
Equations \eqref{eq:psi-ders-vanish} and \eqref{eq:psi-zero-zero} imply that all of the terms in the sum above vanish, so we obtain $\psi= R$.
Using \eqref{eq:lem-uniq-remainder-conj} it follows that $e^{j A}\psi = \psi \circ F^j = R\circ F^j $, and since $x_j = F^j(x_0)$ we obtain
\begin{equation}\label{eq:main-thm-1st-eqn}
e^{jA}\psi(x_0) = R(x_j), \qquad \lim_{x\to 0}\frac{R(x)}{\norm{x}^{r}} = 0.
\end{equation}
Noting that, as $j\to \infty$, $x_j$ approaches the origin tangent to the generalized eigenspace $E_\lambda$ corresponding to some eigenvalue $\lambda$ of $\mathsf{D}_0 F$, it follows that $\frac{|\lambda|^j}{\norm{x_j}} \to C \neq 0$.\footnote{If $F \in C^2$ this follows from Hartman's $C^1$ linearization theorem; for the general case that $F\in C^1$, this follows from the pseudohyperbolic versions of the (un)stable and center-(un)stable manifold theorems \cite[Ch.~5]{hirsch1977}.}
Dividing both sides of \eqref{eq:main-thm-1st-eqn} by $\norm{x_j}^{r}$, multiplying by $1=\frac{|\lambda|^{jr}}{|\lambda|^{jr}}$ and taking the limit as $j\to \infty$ therefore yields
\begin{equation}\label{eq:main-thm-2nd-eqn}
\lim_{j\to\infty} e^{jA} \frac{\psi(x_0)}{\norm{x_j}^{r}} = \lim_{j\to \infty} \left(\frac{|\lambda|}{\norm{x_j}}\right)^r \left(\frac{e^{A}}{|\lambda|^{r}}\right)^j\psi(x_0)= C^{r} \lim_{j\to \infty} \left(\frac{e^{A}}{|\lambda|^{r}}\right)^j\psi(x_0) = 0.
\end{equation}
But $$r \geq \nu(e^{A},\mathsf{D}_0 F) \coloneqq \max_{\substack{\alpha \in \textnormal{spec}(e^A)\\ \beta \in \textnormal{spec}(\mathsf{D}_0 F)}}\frac{\ln(|\alpha|)}{\ln(|\beta|)}$$ implies that all eigenvalues $\alpha$ of $e^{A}$ satisfy $\ln(|\alpha|) \geq r \ln(|\lambda|)$, with the inequality flipping since all eigenvalues of $\mathsf{D}_0 F$ have modulus smaller than one (note that the inequality is actually strict in the case $\alpha > 0$).
Exponentiating, this implies that all eigenvalues $\alpha$ of $e^A$ satisfy $|\alpha| \geq |\lambda|^r$, and therefore all eigenvalues of $\frac{e^A}{|\lambda|^r}$ have modulus greater than or equal to $1$.\footnote{The desire for this conclusion was part of what motivated our definition of the spectral spread $\nu(\slot,\slot)$.}
Hence the diagonal entries in the (upper triangular) Jordan normal form of $(\frac{e^A}{|\lambda|^r})^j$ are bounded below by $1$, so if $\psi(x_0)\neq 0$ then at least one component of $(\frac{e^A}{|\lambda|^r})^j\psi(x_0)$ with respect to the Jordan basis is bounded below uniformly in $j$. It follows that \eqref{eq:main-thm-2nd-eqn} holds if and only if $\psi(x_0) = 0$.
Since $x_0 \in \mathbb{R}^n\setminus\{0\}$ was arbitrary and since we already obtained $\psi(0) = 0$ in \eqref{eq:psi-zero-zero}, it follows that $\psi \equiv 0$ on $\mathbb{R}^n$.
This completes the proof.
\end{proof}
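\begin{Rem}
The following simple example (included only for illustration) shows that some hypothesis relating $\nu(e^{A},\mathsf{D}_0 F)$ to $k+\alpha$ is genuinely needed in Lemma \ref{lem:psi-identically-0}.
Take $n = m = 1$, $F(x) = \lambda x$ with $0 < \lambda < 1$, and $e^{A} = \mu$ with $0 < \mu < 1$, and suppose $s \coloneqq \frac{\ln \mu}{\ln \lambda} > k + \alpha$.
Then $\psi(x)\coloneqq |x|^{s}$ satisfies $\psi \circ F = \lambda^{s}\psi = \mu\psi$, belongs to $C_{\textnormal{loc}}^{k,\alpha}(\mathbb{R},\mathbb{R})$, and has $\mathsf{D}^i_0\psi = 0$ for all $0\leq i \leq k$, yet $\psi \not\equiv 0$.
Here $\nu(e^{A},\mathsf{D}_0 F) = s$ violates both alternatives in the hypotheses of the lemma.
\end{Rem}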
Using Lemmas \ref{lem:jets-zero-maps} and \ref{lem:psi-identically-0}, we now prove the uniqueness portion of Theorem \ref{th:main-thm}.
\begin{proof}[Proof of the uniqueness portion of Theorem \ref{th:main-thm}]
Since $x_0$ is globally asymptotically stable, the Brown-Stallings theorem \cite[Lem~2.1]{wilson1967structure} implies that there is a diffeomorphism $Q\to \mathbb{R}^n$ sending $x_0$ to $0$, where $n = \dim(Q)$, so we may assume that $Q = \mathbb{R}^n$ and $x_0 = 0$.\footnote{Wilson states this result in \cite[Thm~2.2]{wilson1967structure} for the special case of a flow generated by a $C^1$ vector field, but his argument works equally well for any $C^1$ flow or diffeomorphism having a globally asymptotically stable fixed point.}
Define the diffeomorphism $F\coloneqq \Phi^1$ to be the time-$1$ map.
Let $\psi_1$ and $\psi_2$ be two functions satisfying $\mathsf{D}_0 \psi_i = B$ and $\psi_i \circ F = e^{A} \psi_i$ for $i = 1,2$.
Then $\psi\coloneqq \psi_1-\psi_2$ satisfies $\mathsf{D}_0 \psi = 0$ and $\psi \circ F = e^{A} \psi$.
Lemma \ref{lem:jets-zero-maps} implies that $\mathsf{D}_0^i \psi = 0$ for all $1\leq i \leq k$, and Lemma \ref{lem:psi-identically-0} then implies that $\psi_1 - \psi_2 = \psi \equiv 0$.
If $A$ and $B$ are real, then we can define $\psi_2\coloneqq \bar{\psi}_1$ to be the complex conjugate of $\psi_1$, and so the preceding implies that $\psi_1 = \bar{\psi}_1$; hence $\psi_1$ is real if $A$ and $B$ are real.
This completes the proof of the uniqueness statement of Theorem \ref{th:main-thm}.
\end{proof}
\subsubsection{Proof of existence}\label{sec:main-proof-exist}
In this section, we prove the existence portion of Theorem \ref{th:main-thm}.
As with the proof of the uniqueness portion, the proof consists of an algebraic part and an analytic part.
The techniques we use in the existence proof are similar to those used in \cite{sternberg1957local,cabre2003parameterization1}.
The algebraic portion of our proof is carried out in Lemma \ref{lem:existence-approx-conj}, and the analytic portion is carried out in Lemma \ref{lem-make-approx-exact}.
\begin{Lem}[Existence and uniqueness of approximate polynomial linearizing factors for diffeomorphisms]\label{lem:existence-approx-conj}
Fix $k \in \mathbb{N}_{\geq 1}$ and let $F\in C^k(\mathbb{R}^n,\mathbb{R}^n)$ have the origin as a fixed point.
Let $X \in \mathbb{C}^{m\times m}$ be such that $(X,\mathsf{D}_0 F)$ is $k$-nonresonant, and assume $B\in \mathbb{C}^{m\times n}$ satisfies $$B \mathsf{D}_0 F = X B.$$
Then there exists a unique degree-$k$ symmetric polynomial $P\colon \mathbb{R}^n\to \mathbb{C}^m$ vanishing at $0$ such that $\mathsf{D}_0 P = B$ and such that
\begin{equation}\label{eq:poly-conjugacy}
P \circ F = X P + R,
\end{equation}
where $R$ satisfies $\mathsf{D}_0^i R = 0$ for all $0\leq i \leq k$.
Furthermore, if $X\in \mathbb{R}^{m\times m}$ and $B\in \mathbb{R}^{m\times n}$ are real, then this unique polynomial $P\colon \mathbb{R}^n \to \mathbb{R}^m\subset \mathbb{C}^m$ is real.
\end{Lem}
\begin{Rem}
We prove Lemma \ref{lem:existence-approx-conj} for the case of finite $k$ only and rely on a bootstrapping method to prove the existence portion for the case $k = \infty$ at the end of this section.
We could have proved a $C^\infty$ version of Lemma \ref{lem:existence-approx-conj} (at least the existence part) using the fact that every formal power series arises as the collection of derivatives at a point of some $C^\infty$ function \cite[p.~34]{nelson1970topics}, but choose not to do so.
\end{Rem}
\begin{proof}
By Lemma \ref{lem:nonresonance-implies-invertible}, the linear operator
\begin{equation}
T_i \colon \mathcal{L}((\mathbb{R}^n)^{\otimes i},\mathbb{C}^m) \to \mathcal{L}((\mathbb{R}^n)^{\otimes i},\mathbb{C}^m), \qquad T_i(P_i)\coloneqq P_i (\mathsf{D}_0 F)^{\otimes i} - X P_i
\end{equation}
is invertible for all $1 < i \leq k$.
Denoting by $\textnormal{Sym}^i(\mathbb{R}^n)\subset (\mathbb{R}^n)^{\otimes i}$ the linear subspace of fully symmetric $i$-tensors (the $i$-th symmetric power), symmetry of the tensor $(\mathsf{D}_0 F)^{\otimes i}$ also implies that $T_i$ restricts to a well-defined automorphism of $\mathcal{L}(\textnormal{Sym}^i(\mathbb{R}^n),\mathbb{C}^m)$.
By Taylor's theorem we may write $F$ as a degree-$k$ polynomial plus remainder: $F(x) = \sum_{i=1}^k F_i\cdot x^{\otimes i} + R_1$, where $F_1 = \mathsf{D}_0 F$ and $\lim_{x\to 0} \frac{R_1(x)}{\norm{x}^k}=0$.
Defining $F_{\otimes [j]}\coloneqq F_{j_1}\otimes \cdots \otimes F_{j_\ell}$ for any multi-index $j\in \mathbb{N}_{\geq 1}^\ell$, we may therefore write \eqref{eq:poly-conjugacy} as
\begin{equation}\label{eq:poly-replace-with-evaluation}
\sum_{\ell=1}^k\sum_{\substack{j \in \mathbb{N}_{\geq 1}^\ell\\|j|\leq k}} P_\ell \cdot F_{\otimes[j]} \cdot x^{\otimes |j|} = X \sum_{\ell=1}^k P_\ell \cdot x^{\otimes \ell} + R_2(x),
\end{equation}
where $|j|=|(j_1,\ldots,j_\ell)| = \sum_{i=1}^\ell j_i$, $P(x) = \sum_{\ell=1}^k P_\ell \cdot x^{\otimes \ell}$, $P_1 = B$, and $\lim_{x\to 0}\frac{R_2(x)}{\norm{x}^k}=0$.
If we require that all tensors $P_\ell$ are symmetric then all tensors appearing in \eqref{eq:poly-replace-with-evaluation} are symmetric, and since symmetric tensors are completely determined by their values on all vectors of the form $x^{\otimes i}$ \cite[Thm~1]{thomas2014polarization}, this implies that
\begin{equation}\label{eq:poly-replace}
\sum_{\ell=1}^k\sum_{\substack{j \in \mathbb{N}^\ell_{\geq 1}\\|j|\leq k}} P_\ell \cdot F_{\otimes[j]} = X \sum_{\ell=1}^k P_\ell + R_2.
\end{equation}
Since $B\mathsf{D}_0 F = X B$ and $P_1 = \mathsf{D}_0 P = B$ by assumption, an inductive argument implies that equation \eqref{eq:poly-replace} holds for some suitable $R_2$ if and only if
\begin{equation}\label{eq:poly-induct}
\sum_{\ell=1}^{i-1}\sum_{\substack{j \in \mathbb{N}^\ell_{\geq 1}\\|j|= i}} P_\ell \cdot F_{\otimes[j]} = \underbrace{X P_i - P_i (\mathsf{D}_0 F)^{\otimes i}}_{-T_i(P_i)}
\end{equation}
for all $1 \leq i \leq k$ (the induction is on $i$, and the base case $i = 1$ reads $0 = XB - B\mathsf{D}_0 F$, which is one of our hypotheses).
Since the left side of \eqref{eq:poly-induct} belongs to $\mathcal{L}(\textnormal{Sym}^i(\mathbb{R}^n),\mathbb{C}^m)$ and involves only $P_\ell$ for $\ell < i$, and since $T_i$ is invertible, we can inductively solve for $P_i$ using \eqref{eq:poly-induct}.
Since additionally $T_i|_{\mathcal{L}(\textnormal{Sym}^i(\mathbb{R}^n),\mathbb{C}^m)}$ is a well-defined automorphism of $\mathcal{L}(\textnormal{Sym}^i(\mathbb{R}^n),\mathbb{C}^m)$ as discussed above, it follows that each $P_i$ belongs to $\mathcal{L}(\textnormal{Sym}^i(\mathbb{R}^n),\mathbb{C}^m)$, which is compatible with our earlier stipulation that each $P_i$ be symmetric (which we used in obtaining equation \eqref{eq:poly-replace} from \eqref{eq:poly-replace-with-evaluation}).
Finally, assume that $X\in \mathbb{R}^{m\times m}$ is real, and assume by induction that $B = P_1, P_2,\ldots, P_{i-1}$ are real.
Taking the complex conjugate of \eqref{eq:poly-induct}, we find that $P_i$ solves \eqref{eq:poly-induct} if and only if its complex conjugate $\bar{P}_i$ solves \eqref{eq:poly-induct}.
Invertibility of $T_i$ thus implies that $P_i = \bar{P}_i$, so $P_i$ is real and hence also $P(x) = \sum_{i=1}^k P_i\cdot x^{\otimes i}$ is real.
This completes the proof.
\end{proof}
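\begin{Rem}
As a minimal illustration of the inductive step (this computation is not used elsewhere), take $n = m = 1$, $F(x) = \lambda x + f_2 x^2 + \mathcal{O}(x^3)$ with $0 < |\lambda| < 1$, $X = \mu$, and $B = b$ with $b\lambda = \mu b$.
For $i = 2$, equation \eqref{eq:poly-induct} reads $b f_2 = \mu P_2 - \lambda^2 P_2$, so
\begin{equation*}
P_2 = \frac{b f_2}{\mu - \lambda^2},
\end{equation*}
which is well defined precisely because $2$-nonresonance rules out $\mu = \lambda^2$; the higher coefficients $P_3,\ldots,P_k$ are obtained in the same way, one degree at a time.
\end{Rem}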
\begin{Lem}[Making approximate linearizing factors exact]\label{lem-make-approx-exact}
Fix $k \in \mathbb{N}_{\geq 1} \cup \{\infty\}$, $0 \leq \alpha \leq 1$, and let $F\colon \mathbb{R}^n \to \mathbb{R}^n$ be a $C_{\textnormal{loc}}^{k,\alpha}$ diffeomorphism such that the origin is a globally attracting hyperbolic fixed point for the dynamical system defined by iterating $F$.
Let $e^{A} \in \mathsf{GL}(m,\mathbb{C})$ satisfy $\nu(e^{A},\mathsf{D}_0 F) < k + \alpha$,
and assume that there exists $P\in C_{\textnormal{loc}}^{k,\alpha}(\mathbb{R}^n,\mathbb{C}^m)$ such that $$P\circ F = e^{A} P + R,$$
where $R\in C_{\textnormal{loc}}^{k,\alpha}(\mathbb{R}^n,\mathbb{C}^m)$ satisfies $\mathsf{D}_0^i R = 0$ for all integers $0\leq i < k+\alpha$ (note the case $\alpha = 0$).
Then there exists a unique $\varphi \in C_{\textnormal{loc}}^{k,\alpha}(\mathbb{R}^n,\mathbb{C}^m)$ such that $\mathsf{D}_0^i \varphi=0$ for all integers $0 \leq i < k + \alpha$ and such that $\psi\coloneqq P + \varphi$ satisfies $$\psi \circ F = e^{A} \psi.$$
In fact, $$\psi = \lim_{j\to \infty}e^{-jA}P\circ F^j$$ in the topology of $C^{k,\alpha}$-uniform convergence on compact subsets of $\mathbb{R}^n$.
Furthermore, if $e^{A}\in \mathsf{GL}(m,\mathbb{R})$ is real and $P\in C_{\textnormal{loc}}^{k,\alpha}(\mathbb{R}^n,\mathbb{R}^m)$ is real, then $\varphi,\psi \colon \mathbb{R}^n \to \mathbb{R}^m \subset \mathbb{C}^m$ are real.
\end{Lem}
\begin{Rem}\label{rem:uniqueness-two-ways}
The uniqueness statement in Lemma \ref{lem-make-approx-exact} follows from the proof of the uniqueness statement of Theorem \ref{th:main-thm} proved in \S \ref{sec:main-proof-uniq}, but we include a self-contained proof below because the methods used to prove existence naturally yield the uniqueness statement of Lemma \ref{lem-make-approx-exact}.
However, the hypotheses $F\in C_{\textnormal{loc}}^{k,\alpha}$ and $\nu(e^{A},\mathsf{D}_0 F) < k + \alpha$ assumed here are stronger than needed for the uniqueness statement of Theorem \ref{th:main-thm}, with the latter condition being stronger if $\alpha = 0$.
\end{Rem}
\begin{proof}
We first assume that $k$ is finite, and delay consideration of the case $k = \infty$ until the end of the proof.
\textit{Adapted norms.}
Later in the proof we will require that the following bound on operator norms holds (needed following \eqref{eq:contract-holder-coeff}):
\begin{equation}\label{eq:existence-spread-rewritten}
\norm{e^{-A}}\norm{\mathsf{D}_0 F}^{k+\alpha}< 1.
\end{equation}
Due to our assumption that $\nu(e^A,\mathsf{D}_0 F) < k +\alpha$, this bound can always be made to hold by using an appropriate choice of ``adapted'' norms (which induce the operator norms) on the underlying vector spaces $\mathbb{R}^n$ and $\mathbb{C}^m$, and so we may (and do) assume that \eqref{eq:existence-spread-rewritten} holds in the remainder of the proof.
But first we argue that such norms can indeed be chosen.
Let $\lambda \in \textnormal{spec}(\mathsf{D}_0 F)$ and $\mu\in \textnormal{spec}(e^{A})$ be the eigenvalues of $\mathsf{D}_0 F$ and $e^{A}$ with largest and smallest modulus, respectively.
For any $\kappa > 0$, there exist adapted norms (both denoted by $\norm{\cdot}$) on $\mathbb{R}^n$ and $\mathbb{C}^m$ having the property that the induced operator norms $\norm{e^{A}}$ and $\norm{\mathsf{D}_0 F}$ satisfy \cite[p.~279--280]{hirsch1974differential}, \cite[Sec.~A.1]{cabre2003parameterization1}:
\begin{equation}\label{eq:adapted-norm}
|\norm{\mathsf{D}_0 F} - |\lambda|| \leq \kappa, \qquad \left|\norm{e^{-A}} - \frac{1}{|\mu|}\right| \leq \kappa.
\end{equation}
Now since $\frac{\ln(|\mu|)}{\ln(|\lambda|)} = \nu(e^{A},\mathsf{D}_0 F) < k+\alpha$ and since $|\lambda|<1$, it follows that $\frac{|\lambda|^{k+\alpha}}{|\mu|}<1$.
The inequalities \eqref{eq:adapted-norm} imply that $\norm{e^{-A}}\norm{\mathsf{D}_0 F}^{k+\alpha} \approx \frac{|\lambda|^{k+\alpha}}{|\mu|}$ if $\kappa$ is small, so choosing $\kappa$ sufficiently small yields \eqref{eq:existence-spread-rewritten} as claimed.
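For instance (a purely numerical sanity check), if $|\lambda| = 1/2$ and $|\mu| = 1/4$ then $\nu(e^{A},\mathsf{D}_0 F) = 2$ and $\frac{|\lambda|^{k+\alpha}}{|\mu|} = 4\cdot 2^{-(k+\alpha)}$, which is smaller than $1$ exactly when $k+\alpha > 2$; the adapted norms then transfer this strict inequality to the operator norms in \eqref{eq:existence-spread-rewritten}.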
For later use we also note that \eqref{eq:adapted-norm} implies that $\mathsf{D}_0 F$ is a strict contraction if $\kappa$ is small enough since $|\lambda| < 1$ for all $\lambda \in \textnormal{spec}(\mathsf{D}_0 F)$, which in turn implies that
\begin{equation}\label{eq:B-pos-invariant}
F(B)\subset B
\end{equation}
if $B\subset \mathbb{R}^n$ is a sufficiently small ball centered at the origin \cite[p.~281]{hirsch1974differential}.
\textit{Definition of function spaces.}
Let $B \subset \mathbb{R}^n$ be a closed ball centered at the origin.
Given any Banach space $X$, let $C^k(B,X)$ be the space of $C^k$ functions $G\colon B\to X$ equipped with the standard norm
\begin{equation*}
\norm{G}_k\coloneqq \sum_{i=0}^k \sup_{x\in B}\norm{\mathsf{D}_x^i G}
\end{equation*}
making $C^k(B,X)$ into a Banach space \cite{de1999regularity}. Similarly, for a Banach space $Y$, we define the $\alpha$-H\"{o}lder constant $[H]_\alpha$ of a map $H\colon B\to Y$ via
\begin{equation*}
[H]_\alpha \coloneqq \sup_{\substack{x,y\in B\\x \neq y}}\frac{\norm{H(x) - H(y)}}{\norm{x-y}^\alpha},
\end{equation*}
and for $\alpha > 0$ we let
$C^{k,\alpha}(B,X)$ be the space of $C^k$ functions $G\colon B\to X$ whose $k$-th derivative is uniformly $\alpha$-H\"older continuous, equipped with the standard norm
\begin{equation*}
\norm{G}_{k,\alpha}\coloneqq \norm{G}_k + [\mathsf{D}^k G]_\alpha
\end{equation*}
making $C^{k,\alpha}(B,X)$ into a Banach space \cite{de1999regularity}.\footnote{Different $C^k$ and $C^{k,\alpha}$ norms are actually used in \cite{de1999regularity}, namely $\sup_{0\leq i \leq k} \sup_{x\in B}\norm{\mathsf{D}_x^i G}$ and $\sup(\sup_{0\leq i \leq k} \sup_{x\in B}\norm{\mathsf{D}_x^i G}, [\mathsf{D}^k G]_\alpha)$, but these two norms are equivalent to the corresponding norms we have chosen.}
For $\alpha = 0$, we identify $C^{k,\alpha}(B,X)$ with $C^k(B,X)$ and make the special definition $$\norm{\slot}_{k,0}\coloneqq \norm{\slot}_k.$$
Let $\mathcal{F}\subset C^{k,\alpha}(B,\mathbb{C}^m)$ denote the subspace of functions $\varphi$ such that $\mathsf{D}_0^i\varphi = 0$ for all integers $0\leq i < k+\alpha$;
$\mathcal{F}$ is a closed linear subspace of $C^{k,\alpha}(B,\mathbb{C}^m)$, hence also a Banach space.
\textit{Preliminary estimates.}
By the definition of $\mathcal{F}$ and the mean value theorem it follows that, for any $\epsilon > 0$, if the radius of $B$ is sufficiently small then for any $\varphi \in \mathcal{F}$:
\begin{equation}\label{eq:norm-k-minus-one-bound}
\begin{split}
\norm{\varphi}_{k-1} &\leq \epsilon\norm{\mathsf{D}^k\varphi}_0\\
\norm{\varphi}_k &\leq (1+\epsilon)\norm{\mathsf{D}^k\varphi}_0
\end{split}
\end{equation}
and, if $\alpha > 0$,
\begin{equation}\label{eq:norm-k-bound}
\begin{split}
\norm{\varphi}_{k-1,\alpha} + \norm{\mathsf{D}^k \varphi}_{0} &\leq \epsilon [\mathsf{D}^k \varphi]_{\alpha}\\
\norm{\varphi}_{k,\alpha}&\leq (1+\epsilon) [\mathsf{D}^k \varphi]_{\alpha}.
\end{split}
\end{equation}
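(To spell out where these bounds come from: for $\varphi\in\mathcal{F}$, $0\leq i < k$, and $x\in B$, the mean value theorem gives $\norm{\mathsf{D}^i_x\varphi} = \norm{\mathsf{D}^i_x\varphi - \mathsf{D}^i_0\varphi}\leq \textnormal{diam}(B)\sup_{y\in B}\norm{\mathsf{D}^{i+1}_y\varphi}$, and when $\alpha > 0$ additionally $\norm{\mathsf{D}^k_x\varphi}\leq \norm{x}^\alpha[\mathsf{D}^k\varphi]_\alpha$ since $\mathsf{D}^k_0\varphi = 0$; iterating these estimates yields \eqref{eq:norm-k-minus-one-bound} and \eqref{eq:norm-k-bound} with $\epsilon$ tending to $0$ as the radius of $B$ shrinks.)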
\iffalse
\footnote{By a $C^\infty$ Lyapunov function for $F$, we mean a $C^\infty$ function $V\colon \mathbb{R}^n \to [0,\infty)$ satisfying $V^{-1}(0) = \{0\}$ and $V\circ F < V$. The existence of a Lyapunov function for the diffeomorphism $F$ follows from the existence of Lyapunov functions for flows. In more detail: letting $\Phi\colon \widetilde{\mathbb{R}^n} \times \mathbb{R} \to \widetilde{\mathbb{R}^n}$ denote the suspension flow of $F$ (see e.g. \cite[p.~797]{smale1967differentiable} or \cite[p.~173]{robinson1999ds}), the fixed point of $F$ corresponds to a globally asymptotically stable periodic orbit with image $\Gamma \subset \widetilde{\mathbb{R}^n}$ for $\Phi$.
We may give the $C^k$ manifold $\widetilde{\mathbb{R}^n}$ a compatible $C^\infty$ structure \cite[Ch.~2]{hirsch1976differential}.
Then by \cite[Thm~3.2]{wilson1969smooth} there exists a $C^k$ function $\tilde{V}\colon \widetilde{\mathbb{R}^n} \to [0,\infty)$ satisfying $V^{-1}(0) = \Gamma$ and $V\circ \Phi^t(x) < V(x)$ for all $x\not \in \Gamma$ and $t > 0$.
Defining $V_0\colon \mathbb{R}^n \to [0,\infty)$ to be the restriction of $\tilde{V}$ to any of the canonically $C^k$ embedded copies of $\mathbb{R}^n\subset \widetilde{\mathbb{R}^n}$ then yields a $C^k$ Lyapunov function $V_0$ for $F$. Since the set of $C^k$ functions $V$ satisfying $V \circ F < V$ defines an open set in the $C^k$ (strong) Whitney topology, we may approximate $V_0$ by a $C^\infty$ function $V$ which is a Lyapunov function for $F$ \cite[Ch.~2]{hirsch1976differential}.}
\fi
\textit{Defining a linear contraction mapping on $\mathcal{F}$.}
Recall that $F\colon \mathbb{R}^n\to \mathbb{R}^n$ is the diffeomorphism from the statement of the lemma.
By \eqref{eq:B-pos-invariant}, all sufficiently small closed balls $B\subset \mathbb{R}^n$ centered at the origin satisfy $F(B)\subset B$.
Additionally, since $F\in C_{\textnormal{loc}}^{k,\alpha}(\mathbb{R}^n,\mathbb{R}^n)$ and $B$ is compact, $F|_B \in C^{k,\alpha}(B,\mathbb{R}^n)$.
It follows that there is a well-defined linear operator $T\colon C^{k,\alpha}(B,\mathbb{C}^m)\to C^{k,\alpha}(B,\mathbb{C}^m)$ given by\footnote{That $\mathsf{D}^k T(\varphi)$ is $\alpha$-H\"older follows from the chain rule, the fact that $F$ and the first $k-1$ derivatives of $\varphi$ are $C^1$ and hence Lipschitz, the fact that the composition of a bounded $\alpha$-H\"older function with a bounded Lipschitz function is again $\alpha$-H\"older, and the fact that the product of bounded $\alpha$-H\"older functions is again $\alpha$-H\"{o}lder (see, e.g., \cite[Lem~1.19]{eldering2013normally}).}
\begin{equation}
T(\varphi)\coloneqq e^{-A}\varphi\circ F.
\end{equation}
Note that $T(\mathcal{F})\subset \mathcal{F}$, so that $\mathcal{F}$ is an invariant subspace for $T$.
We claim that there is a choice of $B$ so that $T|_\mathcal{F}\colon \mathcal{F} \to \mathcal{F}$ is a contraction with constant $\beta < 1$:
\begin{equation}\label{eq:T-contract}
\norm{T(\varphi)}_{k,\alpha}\leq \beta \norm{\varphi}_{k,\alpha}.
\end{equation}
To see this, we give an argument essentially due to Sternberg, but which generalizes the proof of \cite[Thm~2]{sternberg1957local} to the case of linearizing semiconjugacies and to the $C^{k,\alpha}$ setting.
Using the notation $\mathsf{D}^{\otimes[j]}_x F\coloneqq \mathsf{D}^{j_1}_x F \otimes \cdots \otimes \mathsf{D}^{j_i}_x F$ for a multi-index $j\in \mathbb{N}^i_{\geq 1}$, we compute
\begin{equation}\label{eq:existence-k-der-estimate}
\mathsf{D}_x^k (T(\varphi)) = e^{-A} \mathsf{D}^k_{F(x)} \varphi \cdot (\mathsf{D}_x F)^{\otimes k} + e^{-A} \sum_{i=1}^{k-1}\sum_{\substack{j \in \mathbb{N}^i_{\geq 1}\\|j|\leq k}} C_{i,j} \mathsf{D}^i_{F(x)}\varphi \cdot \mathsf{D}^{\otimes[j]}_x F,
\end{equation}
where the integer coefficients $C_{i,j}\in \mathbb{N}_{\geq 1}$ are combinatorially determined by Fa\`{a} di Bruno's formula for the ``higher-order chain rule'' \cite{jacobs2014stare} and are therefore independent of $B$.
We choose $B$ sufficiently small that its diameter is less than one, and we note that there exists a constant $N_0$ such that
\begin{equation}\label{eq:fa-di-sum-bound}
\sup_{\norm{x} \leq 1}\sum_{i=1}^{k-1}\sum_{\substack{j \in \mathbb{N}^i_{\geq 1}\\|j|\leq k}} C_{i,j} \norm{\mathsf{D}^{\otimes[j]}_x F} < N_0.
\end{equation}
Using \eqref{eq:norm-k-minus-one-bound} and \eqref{eq:fa-di-sum-bound} to bound the sum in \eqref{eq:existence-k-der-estimate}, it follows that
\begin{equation}\label{eq:contract-k-deriv}
\norm{\mathsf{D}^k T(\varphi)}_0 \leq \norm{e^{-A}}(\norm{\mathsf{D} F}^k_0 + \epsilon N_0)\norm{\mathsf{D}^k \varphi}_0.
\end{equation}
For the case that $\alpha > 0$, we will now use \eqref{eq:norm-k-bound} to obtain a bound on $[\mathsf{D}_x^k T(\varphi)]_\alpha$ analogous to \eqref{eq:contract-k-deriv}.
In order to do this, we use the estimate $[x\mapsto \mathsf{D}^k_{F(x)}\varphi]_\alpha\leq [\mathsf{D}^k \varphi]_\alpha\norm{\mathsf{D} F}_0^\alpha$ and the product rule $[fg]_\alpha \leq \norm{f}_0 [g]_\alpha + [f]_\alpha \norm{g}_0$ for H\"older constants (see, e.g., \cite[Lem~1.19]{eldering2013normally}) to bound the first term of \eqref{eq:existence-k-der-estimate} by $\norm{e^{-A}}\left([\mathsf{D}^k \varphi]_\alpha\norm{\mathsf{D} F}_0^{k+\alpha} + \norm{\mathsf{D}^k \varphi}_0 \cdot k [\mathsf{D} F]_\alpha\norm{\mathsf{D} F}_0^{k-1}\right) \leq \norm{e^{-A}}\left(\norm{\mathsf{D} F}_0^{k+\alpha} + \epsilon k [\mathsf{D} F]_\alpha\norm{\mathsf{D} F}_0^{k-1}\right)[\mathsf{D}^k \varphi]_\alpha$, where we have also used \eqref{eq:norm-k-bound} to bound the second term in parentheses.
Next, we use \eqref{eq:fa-di-sum-bound}, the product rule for H\"older constants again, and for $1\leq i \leq k-1$ the estimates $[x\mapsto \mathsf{D}^i_{F(x)}\varphi]_\alpha\leq [\mathsf{D}^i \varphi]_\alpha\norm{\mathsf{D} F}_0^\alpha \leq \epsilon [\mathsf{D}^k \varphi]_\alpha\norm{\mathsf{D} F}_0^\alpha$ to bound the second term of \eqref{eq:existence-k-der-estimate} by $\epsilon N_0 \norm{e^{-A}}[\mathsf{D}^k \varphi]_\alpha$.
The last estimate follows from \eqref{eq:norm-k-bound} and the fact that we are requiring $B$ to have diameter less than one, so that $[\mathsf{D}^i \varphi]_\alpha \leq \norm{\mathsf{D}^{i+1}\varphi}_0\leq \epsilon [\mathsf{D}^k \varphi]_\alpha$.
We finally obtain
\begin{align}\label{eq:contract-holder-coeff}
\left[\mathsf{D}_x^k T(\varphi)\right]_\alpha
&\leq \norm{e^{-A}}\left(\norm{\mathsf{D} F}_0^{k+\alpha} + \epsilon k [\mathsf{D} F]_\alpha \norm{\mathsf{D} F}_0^{k-1} + \epsilon N_0\right)[\mathsf{D}^k\varphi]_\alpha.
\end{align}
The estimate \eqref{eq:existence-spread-rewritten} and continuity imply that $\norm{e^{-A}}\norm{\mathsf{D} F}_0^{k+\alpha} < 1$ if $B$ is sufficiently small.
Hence if $\epsilon$ is sufficiently small, the quantities respectively multiplying $\norm{\mathsf{D}^k\varphi}_0$ and $[\mathsf{D}^k \varphi]_\alpha$ in \eqref{eq:contract-k-deriv} and \eqref{eq:contract-holder-coeff} will be bounded above by some positive constant $\beta' < 1$.
The discussion preceding \eqref{eq:norm-k-minus-one-bound} and \eqref{eq:norm-k-bound} implies that we can indeed take $\epsilon$ this small after possibly further shrinking $B$, so it follows that $\norm{\mathsf{D}^k T(\varphi)}_0 < \beta' \norm{\mathsf{D}^k \varphi}_0$ and, if $\alpha > 0$, also $[\mathsf{D}^k T(\varphi)]_\alpha \leq \beta' [\mathsf{D}^k \varphi]_\alpha$.
We therefore obtain a contraction estimate on the highest derivative and H\"{o}lder constant \emph{only}.
However, we can combine this observation with the second inequalities from each of the two displays \eqref{eq:norm-k-minus-one-bound} and \eqref{eq:norm-k-bound} to obtain in both cases $\alpha = 0$ and $\alpha > 0$ contractions on \emph{all} of the derivatives:
\begin{equation}
\norm{T(\varphi)}_{k,\alpha} \leq (1+\epsilon)\beta' \norm{\varphi}_{k,\alpha}.
\end{equation}
(This technique for the case $\alpha = 0$ was also used in the proof of \cite[Thm 2]{sternberg1957local}.)
Define $\beta \coloneqq (1+\epsilon)\beta'$.
Since $\beta' < 1$, if necessary we may shrink $B$ further to ensure that $\epsilon$ is sufficiently small that $\beta < 1$.
This shows that $T|_\mathcal{F}$ is a contraction and completes the proof of \eqref{eq:T-contract}.
\textit{Existence and uniqueness of a linearizing factor defined on $B$.}
We will now find a locally-defined linearizing factor $\tilde{\psi}\in C^{k,\alpha}(B,\mathbb{C}^m)$ of the form $\tilde{\psi} = P|_B + \tilde{\varphi}$, where $\tilde{\varphi}\in \mathcal{F}$ and $P\colon \mathbb{R}^n \to \mathbb{C}^m$ is as in the statement of the lemma.
By definition, $\tilde{\psi}$ is linearizing if and only if $\tilde{\psi} = e^{-A} \tilde{\psi}\circ F \eqqcolon T(\tilde{\psi})$, so we need to solve the equation $
P|_B + \tilde{\varphi} = T(P|_B + \tilde{\varphi})$ for $\tilde \varphi$.
(We are writing $P|_B$ rather than $P$ because $\tilde{\varphi}$ is a function with domain $B$ rather than $\mathbb{R}^n$, and also because $T$ is a linear operator defined on functions with domain $B$.)
Since $T$ is linear, after rearranging we see that this amounts to solving
\begin{equation}\label{eq:tilde-phi-1}
\left(\textnormal{id}_{C^{k,\alpha}(B,\mathbb{C}^m)} - T\right)\tilde{\varphi} = T(P|_B) -P|_B.
\end{equation}
One of the assumptions of the lemma is that $\left(P\circ F - e^A P\right)|_B \in \mathcal{F}$, and this implies that the right hand side of \eqref{eq:tilde-phi-1} belongs to $\mathcal{F}$ since $e^{-A} \cdot \mathcal{F} \subset \mathcal{F}$.
Since $T(\mathcal{F})\subset \mathcal{F}$, it follows that we may rewrite \eqref{eq:tilde-phi-1} as
\begin{equation}\label{eq:tilde-phi-2}
\left(\textnormal{id}_{\mathcal{F}} - T|_\mathcal{F}\right)\tilde{\varphi} = T(P|_B) -P|_B.
\end{equation}
We showed earlier that $T|_\mathcal{F}$ is a strict contraction, i.e., its operator norm satisfies $\norm{T|_\mathcal{F}}_{k,\alpha} < 1$.
It follows that $(\textnormal{id}_\mathcal{F} - T|_\mathcal{F})$ has a bounded inverse given by the corresponding Neumann series, so that \eqref{eq:tilde-phi-2} has a unique solution $\tilde{\varphi}$ given by
\begin{equation}\label{eq:tilde-phi-neumann}
\tilde{\varphi} = \left(\textnormal{id}_{\mathcal{F}} - T|_\mathcal{F}\right)^{-1} \cdot \left(T(P|_B) -P|_B\right) = \sum_{n=0}^\infty (T|_\mathcal{F})^n \cdot \left(T(P|_B) -P|_B\right).
\end{equation}
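In particular, the geometric series bound $\norm{\left(\textnormal{id}_{\mathcal{F}} - T|_\mathcal{F}\right)^{-1}}\leq \sum_{n=0}^\infty \beta^n = (1-\beta)^{-1}$ holds for the operator norm induced by $\norm{\slot}_{k,\alpha}$, although only the convergence of the series (and not this quantitative bound) is used below.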
\textit{Extension to a unique global linearizing factor.}
Since the origin is globally asymptotically stable and since $B$ is positively invariant, for every $x\in \mathbb{R}^n$ there exists $j(x)\in \mathbb{N}_{\geq 0}$ such that, for all $j > j(x)$, $F^j(x)\in \textnormal{int}(B)$.
If $j$ is large enough that $F^j(x)\in \textnormal{int}(B)$ and $\ell > j$, then $$e^{-\ell A}\tilde{\psi} \circ F^\ell(x) = e^{-jA}\left( e^{-(\ell-j)A} \tilde{\psi} \circ F^{(\ell-j)}\right)|_B \circ F^{j}(x) = e^{-jA} \tilde{\psi} \circ F^j(x),$$ so there is a well-defined map $\psi\colon \mathbb{R}^n\to \mathbb{C}^m$ given by
\begin{equation}\label{eq:psi-const}
\psi(x)\coloneqq e^{-j A}\tilde{\psi}\circ F^{j}(x),
\end{equation}
where $j\in \mathbb{N}_{\geq 0}$ is any nonnegative integer sufficiently large that $F^j(x)\in \textnormal{int}(B)$.
Clearly $\psi \circ F = e^{A} \psi$.
If $x \in \mathbb{R}^n$ and $F^j(x)\in \textnormal{int}(B)$, then $x$ has a neighborhood $U$ with $F^j(U)\subset \textnormal{int}(B)$ by continuity, so $\psi|_U$ is given by \eqref{eq:psi-const} with $j$ constant on $U$.
By the chain rule, this shows that $\psi \in C_{\textnormal{loc}}^{k,\alpha}(\mathbb{R}^n)$.
Clearly $\psi$ and $\varphi\coloneqq \psi - P$ are uniquely determined by $\psi|_B = P|_B + \varphi|_B = P|_B + \tilde{\varphi}$ which is in turn uniquely determined by $\tilde{\varphi}$, and since $\tilde{\varphi} = \varphi|_B$ is unique it follows that $\varphi$ and $\psi$ are also unique.
If $A\in \mathbb{R}^{m\times m}$ and $P\in C_{\textnormal{loc}}^{k,\alpha}(\mathbb{R}^n,\mathbb{R}^m)$ are real, then the complex conjugate $\bar{\psi} = P + \bar{\varphi}$ also satisfies $\bar{\psi} \circ F = e^{A} \bar{\psi}$, so uniqueness implies that $\bar{\psi} = \psi$ and hence $\psi\colon \mathbb{R}^n\to \mathbb{R}^m$ is real.
\textit{Convergence to the global linearizing conjugacy.}
We now complete the proof of the lemma by proving the sole remaining claim that $e^{-jA}P\circ F^j \to \psi$ with $C^{k,\alpha}$-uniform convergence on compact subsets of $\mathbb{R}^n$.
To do this, we first inspect the finite truncations of the infinite series in \eqref{eq:tilde-phi-neumann}.
We see that, since $$\sum_{n=0}^j (T|_\mathcal{F})^n \cdot \left(T(P|_B) -P|_B\right) = \sum_{n=0}^j T^{n+1}(P|_B) - T^n(P|_B) = T^{j+1}(P|_B) - P|_B$$ for each $j \in \mathbb{N}_{\geq 1}$, taking the limit $j\to\infty$ shows that the series in \eqref{eq:tilde-phi-neumann} is equal to $-P|_B + \lim_{j\to\infty} T^j(P|_B)$.
In other words,
\begin{equation}\label{eq:tilde-psi-converge}
\tilde{\psi} = \lim_{j\to\infty} e^{-j A} P \circ F^j|_B
\end{equation}
with convergence in $C^{k,\alpha}(B,\mathbb{C}^m)$.
Next, let $K\subset \mathbb{R}^n$ be any positively invariant compact subset.
Since $0$ is globally asymptotically stable and since $B$ contains a neighborhood of $0$, there exists $j_0 > 0$ such that $F^j(K)\subset B$ for all $j > j_0$.
We compute
\begin{align*}
\lim_{j\to\infty}e^{-j A}P\circ F^j|_K &= \lim_{j\to \infty}e^{-j_0 A} \left(e^{-j A} P\circ F^j|_B \right)\circ F^{j_0}|_K\\
&= e^{-j_0 A} \left(\lim_{j\to \infty} e^{-j A} P\circ F^j|_B \right)\circ F^{j_0}|_K\\
&= e^{-j_0 A} \psi|_B \circ F^{j_0}|_K\\
&= \psi|_K,
\end{align*}
with convergence in $C^{k,\alpha}(K,\mathbb{C}^m)$.
Since we are considering convergence in $C^{k,\alpha}(K,\mathbb{C}^m)$ --- rather than, e.g., merely pointwise convergence --- it is not obvious that we can move the limit inside the parentheses to obtain the second equality.
The reason this is valid is that composition maps of the form $g\mapsto f\circ g \circ h$ ($f, h$ fixed, all maps $C^{k,\alpha}$) are continuous with respect to the $C^{k,\alpha}$ normed topologies \cite[Prop.~6.1,~Prop.~6.2~(iii)]{de1999regularity}.
Since every compact subset of $\mathbb{R}^n$ is contained in some positively invariant compact subset $K$ (e.g., a sublevel set of a Lyapunov function), this completes the proof for the case $k < \infty$.
\textit{Consideration of the case $k = \infty$.}
For the case $k = \infty$, repeating the proof above for any $k' < \infty$ such that $\nu(e^{A},\mathsf{D}_0 F) < k'$ yields unique $C^{k'}$ functions $\varphi\colon \mathbb{R}^n\to \mathbb{C}^m$ and $\psi\coloneqq P + \varphi$ such that $\mathsf{D}_0^i \varphi = 0$ for all integers $0\leq i < k'$ and $\psi \circ F = e^{A}\psi$.
By uniqueness, these functions $\varphi, \psi$ are independent of $k' > \nu(e^{A},\mathsf{D}_0 F)$, and since $k'$ is arbitrary it follows that $\varphi,\psi \in C^\infty(\mathbb{R}^n,\mathbb{C}^m)$.
Additionally, for any positively invariant compact $K$ we have shown that $e^{-j A} P \circ F^j|_K \to \psi|_K$ in $C^{k'}(K,\mathbb{C}^m)$ for every $k'\in \mathbb{N}_{\geq 1}$, and hence also $e^{-j A} P \circ F^j|_K \to \psi|_K$ in the space $C^\infty(K,\mathbb{C}^m)$ whose topology is defined, e.g., by the complete metric $$d(f,g) = \sum _{j=0}^\infty 2^{-j}\frac{\norm{f-g}_{j}}{1+\norm{f-g}_j}$$ making $C^\infty(K,\mathbb{C}^m)$ into a Fr\'echet space.
Since again every compact subset of $\mathbb{R}^n$ is contained in some positively invariant compact subset $K$, this completes the proof.
\end{proof}
Using Lemmas \ref{lem:existence-approx-conj} and \ref{lem-make-approx-exact}, we now complete the proof of Theorem \ref{th:main-thm} by proving the existence portion of its statement.
\begin{proof}[Proof of the existence portion of Theorem \ref{th:main-thm}]
As in the proof of the uniqueness portion of Theorem \ref{th:main-thm} at the end of \S \ref{sec:main-proof-uniq}, we may assume that $Q = \mathbb{R}^n$ and $x_0 = 0$.
We first consider the case that $\mathbb{T} = \mathbb{Z}$, and define the time-$1$ map $F\coloneqq \Phi^1$.
First suppose that $k < \infty$.
Lemma \ref{lem:existence-approx-conj} implies that there exists a polynomial $P$ such that $\mathsf{D}_0 P = B$ and $P\circ F = e^{A} P + R$, where $R\in C_{\textnormal{loc}}^{k,\alpha}(\mathbb{R}^n,\mathbb{C}^m)$ satisfies $\mathsf{D}_0^i R = 0$ for all integers $0\leq i < k + \alpha$.\footnote{Actually we can find $P$ such that $\mathsf{D}^i_0 R = 0$ for all integers $0\leq i \leq k$, with the only difference arising when $\alpha = 0$. However, we do not need this in the following.}
Furthermore, $P$ and $R$ are real if $e^{A}$ and $B$ are real.
Lemma \ref{lem-make-approx-exact} then implies that there exists $\varphi\in C_{\textnormal{loc}}^{k,\alpha}(\mathbb{R}^n,\mathbb{C}^m)$ such that $\psi = P + \varphi\in C_{\textnormal{loc}}^{k,\alpha}(\mathbb{R}^n,\mathbb{C}^m)$ satisfies $\mathsf{D}_0 \psi = B$, $\psi \circ F = e^{A} \psi$, $e^{-jA}\tilde{P} \circ \Phi^j \to \psi$ $C^{k,\alpha}$-uniformly on compact subsets for any $\tilde{P}$ satisfying the hypotheses of Theorem \ref{th:main-thm} (such as $P$), and that $\psi$ is real if $A$ and $B$ are real.
This completes the proof for the case $k < \infty$.
Now suppose that $k = \infty$.
Repeating the proof above for finite $k' > \nu(e^{A},\mathsf{D}_0 F)$ yields $\psi \in C^{k'}(\mathbb{R}^n,\mathbb{C}^m)$ satisfying $\mathsf{D}_0 \psi = B$ and $\psi \circ F = e^{A} \psi$.
The proof of the uniqueness portion of Theorem \ref{th:main-thm} in \S \ref{sec:main-proof-uniq} implies that $\psi$ is independent of $k' > \nu(e^{A},\mathsf{D}_0 F)$, so since $k'$ is arbitrary it follows that $\psi \in C^\infty$.
Additionally, by Lemma \ref{lem-make-approx-exact} we have that $e^{-jA}\tilde{P} \circ \Phi^j|_K \to \psi$ $C^{\infty}$-uniformly on compact subsets for any $\tilde{P}$ satisfying the hypotheses of Theorem \ref{th:main-thm}.
This completes the proof for the case that $\mathbb{T} = \mathbb{Z}$.
It remains only to consider the case that $\mathbb{T} = \mathbb{R}$, i.e., the case that $\Phi$ is a flow.
By the proof of the case $\mathbb{T} = \mathbb{Z}$, there exists $\tilde{\psi}\in C_{\textnormal{loc}}^{k,\alpha}(\mathbb{R}^n,\mathbb{C}^m)$ satisfying $\mathsf{D}_0 \tilde \psi = B$ and $\tilde \psi \circ \Phi^j = e^{jA} \tilde \psi$ for all $j \in \mathbb{Z}$.
By adapting a technique of Sternberg \cite[Lem~4]{sternberg1957local}, from $\tilde{\psi}$ we will construct a map $\psi\in C_{\textnormal{loc}}^{k,\alpha}(\mathbb{R}^n,\mathbb{C}^m)$ satisfying $\mathsf{D}_0 \psi = B$ and $\psi \circ \Phi^t = e^{tA} \psi$ for all $t\in \mathbb{R}$.
In fact define
\begin{equation}
\psi\coloneqq \int_0^1 e^{-sA}\tilde{\psi}\circ \Phi^s\, ds.
\end{equation}
By Leibniz's rule for differentiating under the integral sign and basic estimates, $\psi \in C_{\textnormal{loc}}^{k,\alpha}(\mathbb{R}^n,\mathbb{C}^m)$, and using the assumption \eqref{eq:main-th-1} we have that$$\mathsf{D}_0\psi = \int_0^1 e^{-sA}B \mathsf{D}_0 \Phi^s \, ds = \int_0^1 B\,ds = B.$$
To prove that $\psi \circ \Phi^t = e^{tA} \psi $ for all $t\in \mathbb{R}$, we compute
\begin{align*}
\psi \circ \Phi^t &= \int_0^1 e^{-sA} \tilde{\psi} \circ \Phi^{s+t}\,ds
= \int_t^{1+t} e^{(t-s)A} \tilde{\psi} \circ \Phi^s \, ds\\
&= e^{tA} \int_t^1 e^{-sA} \tilde{\psi} \circ \Phi^s \, ds + e^{tA} \int_1^{1+t} e^{-sA} \tilde{\psi} \circ \Phi^s \, ds\\
&= e^{tA} \int_t^1 e^{-sA} \tilde{\psi} \circ \Phi^s \, ds + e^{tA} \int_1^{1+t} e^{-sA} \left(e^{A}\tilde{\psi} \circ \Phi^{-1} \right) \circ \Phi^s \, ds\\
&= e^{tA} \int_t^1 e^{-sA} \tilde{\psi} \circ \Phi^s \, ds + e^{tA} \int_0^{t} e^{-sA} \tilde{\psi} \circ \Phi^s \, ds\\
&= e^{tA} \psi
\end{align*}
as desired.
Since $\psi$ satisfies $\psi \circ \Phi^1 = e^{A} \psi$, the uniqueness result for the case $\mathbb{T} = \mathbb{Z}$ actually implies that $\psi = \tilde{\psi}$.
Letting $K\subset \mathbb{R}^n$ be any positively invariant compact subset, the map $G\colon [0,1] \times C^{k,\alpha}(K,\mathbb{C}^m)\to C^{k,\alpha}(K,\mathbb{C}^m)$ given by $G(r, f) \coloneqq e^{-r A}f \circ \Phi^r|_K$ is continuous and satisfies $G(r,\psi) = \psi$ for all $r\in [0,1]$, so compactness of $[0,1]$ implies that for every neighborhood $V\subset C^{k,\alpha}(K,\mathbb{C}^m)$ of $\psi$ there is a smaller neighborhood $U \subset V$ of $\psi$ such that $G([0,1]\times U) \subset V$, i.e., $e^{-rA} \varphi \circ \Phi^r|_K \in V$ for every $\varphi \in U$ and $r\in [0,1]$.
Fix any $P\in C_{\textnormal{loc}}^{k,\alpha}(\mathbb{R}^n,\mathbb{C}^m)$ satisfying the hypothesis \eqref{eq:main-th-4} of Theorem \ref{th:main-thm}.
By the proof for the case $\mathbb{T} = \mathbb{Z}$ there exists $N \in \mathbb{N}_{\geq 0}$ such that, for all $j > N$, $e^{-jA}P\circ \Phi^j|_K \in U$.
By the definition of $U$ it follows that $e^{-tA}P\circ \Phi^t|_K \in V$ for all $t > N+1$.
Since the neighborhood $V \ni \psi$ was arbitrary, this implies that
$$\psi|_K = \lim_{t\to \infty} e^{-tA}P\circ \Phi^t|_K$$
in $C^{k,\alpha}(K,\mathbb{C}^m)$.
Since every compact subset of $\mathbb{R}^n$ is contained in some positively invariant compact subset $K$, this proves that $e^{-tA}P\circ \Phi^t|_K \to \psi$ in the topology of $C^{k,\alpha}$-uniform convergence on compact subsets.
This completes the proof of Theorem \ref{th:main-thm}.
\end{proof}
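\begin{Rem}
The following explicitly solvable one-dimensional example (ours, and only local, since the map below is not a diffeomorphism of all of $\mathbb{R}$) illustrates the limit formula in Theorem \ref{th:main-thm}.
For $0 < \lambda < 1$ and $c\in \mathbb{R}$, consider $F(x) = \frac{\lambda x}{1 + cx}$ near $x = 0$, together with $e^{A} = \lambda$ and $B = 1$.
One checks directly that $\psi(x) = \frac{x}{1 + \frac{c}{1-\lambda}x}$ satisfies $\psi \circ F = \lambda\psi$ and $\mathsf{D}_0\psi = 1$.
Moreover, taking $P(x) = x$, an induction gives $F^j(x) = \frac{\lambda^j x}{1 + c\frac{1-\lambda^j}{1-\lambda}x}$, so that
\begin{equation*}
e^{-jA}P\circ F^j(x) = \frac{x}{1 + c\frac{1-\lambda^j}{1-\lambda}x} \longrightarrow \psi(x) \quad \text{as } j \to \infty,
\end{equation*}
in agreement with the convergence statement of Theorem \ref{th:main-thm}.
\end{Rem}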
\subsection{Proof of Theorem \ref{th:main-thm-per}}
In this section we prove Theorem \ref{th:main-thm-per}, which we repeat here for convenience.
This proof invokes Theorem \ref{th:main-thm} and is much shorter because of this.
\ThmMainPer*
\begin{proof}
Let $W^s_{x_0}$ be the global strong stable manifold (isochron) through $x_0$ \cite{kvalheim2018global}.
Since $W^s_{x_0}$ is the stable manifold for the fixed point $x_0$ of the $C_{\textnormal{loc}}^{k,\alpha}$ diffeomorphism $\Phi^\tau$, it follows that $W^s_{x_0}$ is a $C_{\textnormal{loc}}^{k,\alpha}$ manifold \cite[Thm~2.1]{de1995irwin}.
After identifying $E^s_{x_0}$ with $\mathbb{R}^n$, the uniqueness portion of Theorem \ref{th:main-thm} applied to $\psi|_{W^s_{x_0}}$ implies that $\psi|_{W^s_{x_0}}$ is unique for any $\psi$ satisfying the uniqueness hypotheses, and furthermore $\psi|_{W^s_{x_0}}$ is real if $A$ and $B$ are real.
Since $\psi$ is uniquely determined by $\psi|_{W^s_{x_0}}$ and \eqref{eq:main-th-per-2} (which is true because $Q = \bigcup_{t\in \mathbb{R}}\Phi^t(W^s_{x_0})$), this implies that $\psi$ is unique and that $\psi$ is real if $A$ and $B$ are real.
This completes the proof of the uniqueness statement of Theorem \ref{th:main-thm-per}.
Under the existence hypotheses, the existence portion of Theorem \ref{th:main-thm} similarly implies that there exists a unique $\varphi\in C_{\textnormal{loc}}^{k,\alpha}(W^s_{x_0},\mathbb{C}^m)$
satisfying \eqref{eq:main-th-per-1} and
\begin{equation}\label{eq:main-th-per-proof-1}
\forall j \in \mathbb{Z}\colon \varphi \circ \Phi^{j\tau}|_{W^s_{x_0}} = e^{j\tau A} \varphi.
\end{equation}
The unique extension of $\varphi$ to a function $\psi \colon Q\to \mathbb{C}^m$ satisfying \eqref{eq:main-th-per-2} is given by
\begin{equation}\label{eq:main-th-per-proof-2}
\forall t\in \mathbb{R}\colon \psi|_{\Phi^{-t}(W^s_{x_0})}\coloneqq e^{-tA} \varphi \circ \Phi^{t}|_{W^s_{\Phi^{-t}(x_0)}}.
\end{equation}
$\psi$ is well-defined because $\Phi^\tau(W^s_{x_0})= W^s_{x_0}$ and $e^{\tau A} \varphi \circ \Phi^{-\tau}|_{W^s_{x_0}} = \varphi$ by \eqref{eq:main-th-per-proof-1}.
This completes the proof.
\end{proof}
\iffalse
\section{Discussion}\label{sec:discussion}
In this paper, we proved Theorem \ref{th:main-thm} on the uniqueness and existence of $C_{\textnormal{loc}}^{k,\alpha}$ linearizing factors for dynamical systems having a global attractor which is a hyperbolic fixed point or periodic orbit.
Theorem \ref{th:main-thm} both generalizes and sharpens Sternberg's linearization theorem for hyperbolic sinks \cite{sternberg1957local}.
Besides proving a uniqueness result for Sternberg's theorem as a corollary, we also obtain a uniqueness corollary for the Floquet normal form of a periodic orbit.
We also derived several corollaries related to Koopman eigenfunctions for dynamical systems having a globally attracting hyperbolic fixed point or periodic orbit.
We showed that the so-called isostable coordinates considered in the literature \cite{mauroy2013isostables, wilson2016isostable,mezic2019spectrum} always exist, and are unique in a suitable sense.
Additionally, under nonresonance assumptions we completely classified all $C^\infty$ Koopman eigenfunctions if the dynamical system is $C^\infty$.
To do this, we gave an intrinsic definition for nonlinear systems of the principal eigenfunctions defined for linear systems in \cite{mohr2016koopman}, and we proved existence and uniqueness results for $C_{\textnormal{loc}}^{k,\alpha}$ principal eigenfunctions under nonresonance and spectral spread conditions.
Under these conditions, we showed that a $C_{\textnormal{loc}}^{k,\alpha}$ version of the principal algebra defined by \cite{mohr2016koopman} for a linear system can be intrinsically defined for a nonlinear system to be the algebra generated by the $C_{\textnormal{loc}}^{k,\alpha}$ principal eigenfunctions, and this intrinsically defined algebra coincides with the conjugacy-pullback algebra considered in \cite{mohr2016koopman} if the conjugacy used is $C_{\textnormal{loc}}^{k,\alpha}$.
We have restricted attention to the case of global attractors which are hyperbolic fixed points or periodic orbits.
It would be interesting to consider more general attractors.
A natural starting point would be the more general class of normally attracting invariant manifolds \cite{kvalheim2018global}, the special case of normally hyperbolic invariant manifolds which are attracting \cite{fenichel1971persistence,hirsch1977,normallyHypMan,eldering2013normally}, using the Sacker-Sell spectrum \cite{sacker1978spectral,sacker1980spectrum} of the induced linear flow on the tangent bundle to replace the role of eigenvalues in our hypotheses.
One generalization of Sternberg's linearization theorem along these lines was considered by Sell in \cite{sell1983linearization,sell1985smooth}.
\fi
\bibliographystyle{amsalpha}
\section{Submission of papers to NIPS 2018}
NIPS requires electronic submissions. The electronic submission site
is
\begin{center}
\url{https://cmt.research.microsoft.com/NIPS2018/}
\end{center}
Please read the instructions below carefully and follow them faithfully.
\subsection{Style}
Papers to be submitted to NIPS 2018 must be prepared according to the
instructions presented here. Papers may only be up to eight pages
long, including figures. Additional pages \emph{containing only
acknowledgments and/or cited references} are allowed. Papers that
exceed eight pages of content (ignoring references) will not be
reviewed, or in any other way considered for presentation at the
conference.
The margins in 2018 are the same as since 2007, which allow for
$\sim$$15\%$ more words in the paper compared to earlier years.
Authors are required to use the NIPS \LaTeX{} style files obtainable
at the NIPS website as indicated below. Please make sure you use the
current files and not previous versions. Tweaking the style files may
be grounds for rejection.
\subsection{Retrieval of style files}
The style files for NIPS and other conference information are
available on the World Wide Web at
\begin{center}
\url{http://www.nips.cc/}
\end{center}
The file \verb+nips_2018.pdf+ contains these instructions and
illustrates the various formatting requirements your NIPS paper must
satisfy.
The only supported style file for NIPS 2018 is \verb+nips_2018.sty+,
rewritten for \LaTeXe{}. \textbf{Previous style files for \LaTeX{}
2.09, Microsoft Word, and RTF are no longer supported!}
The \LaTeX{} style file contains three optional arguments: \verb+final+,
which creates a camera-ready copy, \verb+preprint+, which creates a
preprint for submission to, e.g., arXiv, and \verb+nonatbib+, which will
not load the \verb+natbib+ package for you in case of package clash.
\paragraph{New preprint option for 2018}
If you wish to post a preprint of your work online, e.g., on arXiv,
using the NIPS style, please use the \verb+preprint+ option. This will
create a nonanonymized version of your work with the text
``Preprint. Work in progress.'' in the footer. This version may be
distributed as you see fit. Please \textbf{do not} use the
\verb+final+ option, which should \textbf{only} be used for papers
accepted to NIPS.
At submission time, please omit the \verb+final+ and \verb+preprint+
options. This will anonymize your submission and add line numbers to aid
review. Please do \emph{not} refer to these line numbers in your paper
as they will be removed during generation of camera-ready copies.
The file \verb+nips_2018.tex+ may be used as a ``shell'' for writing
your paper. All you have to do is replace the author, title, abstract,
and text of the paper with your own.
The formatting instructions contained in these style files are
summarized in Sections \ref{gen_inst}, \ref{headings}, and
\ref{others} below.
\section{General formatting instructions}
\label{gen_inst}
The text must be confined within a rectangle 5.5~inches (33~picas)
wide and 9~inches (54~picas) long. The left margin is 1.5~inch
(9~picas). Use 10~point type with a vertical spacing (leading) of
11~points. Times New Roman is the preferred typeface throughout, and
will be selected for you by default. Paragraphs are separated by
\nicefrac{1}{2}~line space (5.5 points), with no indentation.
The paper title should be 17~point, initial caps/lower case, bold,
centered between two horizontal rules. The top rule should be 4~points
thick and the bottom rule should be 1~point thick. Allow
\nicefrac{1}{4}~inch space above and below the title to rules. All
pages should start at 1~inch (6~picas) from the top of the page.
For the final version, authors' names are set in boldface, and each
name is centered above the corresponding address. The lead author's
name is to be listed first (left-most), and the co-authors' names (if
different address) are set to follow. If there is only one co-author,
list both author and co-author side by side.
Please pay special attention to the instructions in Section \ref{others}
regarding figures, tables, acknowledgments, and references.
\section{Headings: first level}
\label{headings}
All headings should be lower case (except for first word and proper
nouns), flush left, and bold.
First-level headings should be in 12-point type.
\subsection{Headings: second level}
Second-level headings should be in 10-point type.
\subsubsection{Headings: third level}
Third-level headings should be in 10-point type.
\paragraph{Paragraphs}
There is also a \verb+\paragraph+ command available, which sets the
heading in bold, flush left, and inline with the text, with the
heading followed by 1\,em of space.
\section{Citations, figures, tables, references}
\label{others}
These instructions apply to everyone.
\subsection{Citations within the text}
The \verb+natbib+ package will be loaded for you by default.
Citations may be author/year or numeric, as long as you maintain
internal consistency. As to the format of the references themselves,
any style is acceptable as long as it is used consistently.
The documentation for \verb+natbib+ may be found at
\begin{center}
\url{http://mirrors.ctan.org/macros/latex/contrib/natbib/natnotes.pdf}
\end{center}
Of note is the command \verb+\citet+, which produces citations
appropriate for use in inline text. For example,
\begin{verbatim}
\citet{hasselmo} investigated\dots
\end{verbatim}
produces
\begin{quote}
Hasselmo, et al.\ (1995) investigated\dots
\end{quote}
If you wish to load the \verb+natbib+ package with options, you may
add the following before loading the \verb+nips_2018+ package:
\begin{verbatim}
\PassOptionsToPackage{options}{natbib}
\end{verbatim}
If \verb+natbib+ clashes with another package you load, you can add
the optional argument \verb+nonatbib+ when loading the style file:
\begin{verbatim}
\usepackage[nonatbib]{nips_2018}
\end{verbatim}
As submission is double blind, refer to your own published work in the
third person. That is, use ``In the previous work of Jones et
al.\ [4],'' not ``In our previous work [4].'' If you cite your other
papers that are not widely available (e.g., a journal paper under
review), use anonymous author names in the citation, e.g., an author
of the form ``A.\ Anonymous.''
\subsection{Footnotes}
Footnotes should be used sparingly. If you do require a footnote,
indicate footnotes with a number\footnote{Sample of the first
footnote.} in the text. Place the footnotes at the bottom of the
page on which they appear. Precede the footnote with a horizontal
rule of 2~inches (12~picas).
Note that footnotes are properly typeset \emph{after} punctuation
marks.\footnote{As in this example.}
\subsection{Figures}
\begin{figure}
\centering
\fbox{\rule[-.5cm]{0cm}{4cm} \rule[-.5cm]{4cm}{0cm}}
\caption{Sample figure caption.}
\end{figure}
All artwork must be neat, clean, and legible. Lines should be dark
enough for purposes of reproduction. The figure number and caption
always appear after the figure. Place one line space before the figure
caption and one line space after the figure. The figure caption should
be lower case (except for first word and proper nouns); figures are
numbered consecutively.
You may use color figures. However, it is best for the figure
captions and the paper body to be legible if the paper is printed in
either black/white or in color.
\subsection{Tables}
All tables must be centered, neat, clean and legible. The table
number and title always appear before the table. See
Table~\ref{sample-table}.
Place one line space before the table title, one line space after the
table title, and one line space after the table. The table title must
be lower case (except for first word and proper nouns); tables are
numbered consecutively.
Note that publication-quality tables \emph{do not contain vertical
rules.} We strongly suggest the use of the \verb+booktabs+ package,
which allows for typesetting high-quality, professional tables:
\begin{center}
\url{https://www.ctan.org/pkg/booktabs}
\end{center}
This package was used to typeset Table~\ref{sample-table}.
\begin{table}
\caption{Sample table title}
\label{sample-table}
\centering
\begin{tabular}{lll}
\toprule
\multicolumn{2}{c}{Part} \\
\cmidrule(r){1-2}
Name & Description & Size ($\mu$m) \\
\midrule
Dendrite & Input terminal & $\sim$100 \\
Axon & Output terminal & $\sim$10 \\
Soma & Cell body & up to $10^6$ \\
\bottomrule
\end{tabular}
\end{table}
\section{Final instructions}
Do not change any aspects of the formatting parameters in the style
files. In particular, do not modify the width or length of the
rectangle the text should fit into, and do not change font sizes
(except perhaps in the \textbf{References} section; see below). Please
note that pages should be numbered.
\section{Preparing PDF files}
Please prepare submission files with paper size ``US Letter,'' and
not, for example, ``A4.''
Fonts were the main cause of problems in the past years. Your PDF file
must only contain Type 1 or Embedded TrueType fonts. Here are a few
instructions to achieve this.
\begin{itemize}
\item You should directly generate PDF files using \verb+pdflatex+.
\item You can check which fonts a PDF file uses. In Acrobat Reader,
select the menu Files$>$Document Properties$>$Fonts and select Show
All Fonts. You can also use the program \verb+pdffonts+ which comes
with \verb+xpdf+ and is available out-of-the-box on most Linux
machines.
\item The IEEE has recommendations for generating PDF files whose
fonts are also acceptable for NIPS. Please see
\url{http://www.emfield.org/icuwb2010/downloads/IEEE-PDF-SpecV32.pdf}
\item \verb+xfig+ "patterned" shapes are implemented with bitmap
fonts. Use "solid" shapes instead.
\item The \verb+\bbold+ package almost always uses bitmap fonts. You
should use the equivalent AMS Fonts:
\begin{verbatim}
\usepackage{amsfonts}
\end{verbatim}
followed by, e.g., \verb+\mathbb{R}+, \verb+\mathbb{N}+, or
\verb+\mathbb{C}+ for $\mathbb{R}$, $\mathbb{N}$ or $\mathbb{C}$. You
can also use the following workaround for reals, natural and complex:
\begin{verbatim}
\newcommand{\RR}{I\!\!R}
\newcommand{\Nat}{I\!\!N}
\newcommand{\CC}{I\!\!\!\!C}
\end{verbatim}
Note that \verb+amsfonts+ is automatically loaded by the
\verb+amssymb+ package.
\end{itemize}
If your file contains type 3 fonts or non embedded TrueType fonts, we
will ask you to fix it.
\subsection{Margins in \LaTeX{}}
Most of the margin problems come from figures positioned by hand using
\verb+\special+ or other commands. We suggest using the command
\verb+\includegraphics+ from the \verb+graphicx+ package. Always
specify the figure width as a multiple of the line width as in the
example below:
\begin{verbatim}
\usepackage[pdftex]{graphicx} ...
\includegraphics[width=0.8\linewidth]{myfile.pdf}
\end{verbatim}
See Section 4.4 in the graphics bundle documentation
(\url{http://mirrors.ctan.org/macros/latex/required/graphics/grfguide.pdf}).
A number of width problems arise when \LaTeX{} cannot properly
hyphenate a line. Please give \LaTeX{} hyphenation hints using the
\verb+\-+ command when necessary.
\subsubsection*{Acknowledgments}
Use unnumbered third level headings for the acknowledgments. All
acknowledgments go at the end of the paper. Do not include
acknowledgments in the anonymized submission, only in the final paper.
\section{Introduction}
\input{sections/intro.tex}
\section{Related work} \label{sec:literature}
\input{sections/literature.tex}
\section{Methodology} \label{sec:methodology}
\input{sections/methodology.tex}
\section{Experiments} \label{sec:experiment}
\input{sections/experiment.tex}
\section{Discussion} \label{sec:discussion}
\input{sections/discussion.tex}
\subsection*{Acknowledgement}
This work was funded in part by Army Research Office grant number ARO W911NF-18-1-0281 and a Google Cloud Platform Research Credit award.
\newpage
{\small
\bibliographystyle{ieeetr}
\subsubsection{Accuracy on adversarial examples and confidences on clean images}
We first focus on the relation between categorical accuracy on adversarial examples ($h(g(x))$) and categorical confidences on clean images ($x$).
One plausible assumption is that categories with higher confidence on clean images might be harder for adversarial attacks to flip; hence, those categories may recover higher accuracy after defense.
To validate this assumption, we take adversarial examples from two PGD attacks and apply anisotropic diffusion, the bilateral filter, and the mean filter as the respective defenses.
Contrary to the assumption, as categorical accuracy on adversarial examples increases (1st column in Figure \ref{fig:variance-among-categories}), there is no obvious trend in corresponding confidences on clean images (2nd column in Figure \ref{fig:variance-among-categories}).
\begin{figure}
\centering
\subfigure[Accuracy PGD ($\epsilon=0.01$) \label{subfig:categorical-accuracy_pgd-0.01}]{
\includegraphics[width=0.3\textwidth, trim={0 0 0 1.2cm}, clip]{figures/4.3/acc_001-0002.png}}
\subfigure[Confidence PGD ($\epsilon=0.01$)]{
\includegraphics[width=0.3\textwidth, trim={0 0 0 1.2cm}, clip]{figures/4.3/ad-confidence_001-0002.png}}
\subfigure[Ground-truth prob PGD ($\epsilon=0.01$)]{
\includegraphics[width=0.3\textwidth, trim={0 0 0 1.2cm}, clip]{figures/4.3/ad_pgd-001-0002_ground_truth_categorical_prob.png}}
\\
\subfigure[Accuracy PGD ($\epsilon=0.05$)]{
\includegraphics[width=0.3\textwidth, trim={0 0 0 1.2cm}, clip]{figures/4.3/acc_005-001.png}}
\subfigure[Confidence PGD ($\epsilon=0.05$)]{
\includegraphics[width=0.3\textwidth, trim={0 0 0 1.2cm}, clip]{figures/4.3/ad-confidence_005-001.png}}
\subfigure[Ground-truth prob PGD ($\epsilon=0.05$)]{
\includegraphics[width=0.3\textwidth, trim={0 0 0 1.2cm}, clip]{figures/4.3/ad_pgd-005-001_ground_truth_categorical_prob.png}}
\caption{\label{fig:variance-among-categories} Distribution of categorical accuracy on adversarial examples in increasing order (1st column), categorical confidence on clean samples (2nd column), and categorical probability at the ground-truth class (3rd column). (a)-(c): Results on adversarial examples that are generated from PGD ($\epsilon=0.01$), with anisotropic diffusion as defense. (d)-(f): same as (a)-(c) but the adversarial examples are generated from PGD ($\epsilon=0.05$). In each row, the order of categories (horizontal axis in all plots) is synchronized.}
\end{figure}
\subsubsection{Accuracy on adversarial examples and probability assigned to ground-truth category}
Another assumption was made on the relation between accuracy and the initial probability assigned to the ground-truth category. Intuitively, it might be easier to defend an adversarial example with relatively higher probability assigned to the ground-truth category.
However, such an intuition is disproved by the 3rd column in Figure \ref{fig:variance-among-categories}.
\subsection{Performance of various smoothing techniques on defending fixed adversarial attacks} \label{subsec:fixed-attack_various-defenses}
We start with a set of controlled experiments that show the performance of various smoothing techniques on defending a fixed attack.
Two sets of PGD attacks, with maximum perturbation $\epsilon=0.01$ (imperceptible to humans) and $\epsilon = 0.05$ (a scale similar to that in \cite{xie2018feature}), are chosen as baselines because (1) PGD is one of the strongest attacks and (2) adversarial examples from PGD cannot be defended by simple quantization.
Following the default settings, 20 iterations of PGD are performed.
After obtaining the perturbed images, we tune the parameters of each smoothing method to find the values that yield the highest classification accuracy over all perturbed images. If a method has multiple parameters, we tune them one after another, greedily fixing each parameter at the optimal value found in the previous step.
Figure \ref{fig:fixed-attack_various-defenses} shows the classification accuracy on the ImageNet validation set as the strength of the smoothing defenses varies. The strength is measured by the most sensitive parameter, that is, the \emph{number of iterations} for iterative methods such as anisotropic diffusion and modified curvature motion, the \emph{size of the kernel} for mean filters, and the \emph{diameter of the neighborhood} for bilateral filters.
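The sweep itself is straightforward; the following Python sketch illustrates it (the helper name \texttt{defense\_fn} and the Keras-style \texttt{model.predict} interface are illustrative assumptions, not our released code):
\begin{verbatim}
import numpy as np

def sweep_defense_strength(model, adv_images, labels, defense_fn, strengths):
    """Apply defense_fn(image, s) at each strength s and record top-1 accuracy."""
    accuracies = []
    for s in strengths:
        smoothed = np.stack([defense_fn(x, s) for x in adv_images])
        preds = model.predict(smoothed).argmax(axis=-1)  # assumed Keras-like API
        accuracies.append(float((preds == labels).mean()))
    return accuracies

# e.g., strength = number of diffusion iterations, or mean-filter kernel size:
# acc = sweep_defense_strength(model, adv_x, y, anisotropic_diffusion, range(1, 31))
\end{verbatim}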
\begin{figure}
\centering
\subfigure[PGD ($\epsilon=0.01$, 20 iterations)]{
\includegraphics[width=0.45\textwidth]{figures/4.1/pgd-001-4in1.png}}
\subfigure[PGD ($\epsilon=0.05$, 20 iterations)]{
\includegraphics[width=0.45\textwidth]{figures/4.1/pgd-005-4in1.png}}
\caption{\label{fig:fixed-attack_various-defenses} Change of classification accuracy on the ImageNet validation set (vertical) with the strength of the smoothing defense (horizontal). The strength of the smoothing method is measured by the most sensitive parameter: number of iterations for anisotropic diffusion and modified curvature motion, size of the kernel for the mean filter, and diameter of the neighborhood for the bilateral filter.}
\end{figure}
We only present results from the four selected methods because the others lead to much lower classification accuracy (20--30\% less). Henceforth, we focus on these four methods in subsequent sections.
\begin{figure}
\centering
\subfigure[fixed attack $g$, varying defense $h$ \label{subfig:detour_fixed-attack_varying-defense}]{
\includegraphics[width=0.35\textwidth]{figures/4.1/fixed-attack_varying-defense.png}}
\quad
\subfigure[varying attack $g$, fixed defense $h$ \label{subfig:detour_varying-attack_fixed-defense}]{
\includegraphics[width=0.35\textwidth]{figures/4.2/varying-attack_fixed-defense.png}}
\caption{Simplified illustration of the ``detour'' effect between the adversarial attack $g$ and the test-time defense $h$}
\label{fig:detour}
\end{figure}
The curves in Figure \ref{fig:fixed-attack_various-defenses} share a similar concave shape, which might suggest a geometric relation between adversarial attack $g$ and test-time defense $h$.
As illustrated by Figure \ref{subfig:detour_fixed-attack_varying-defense}, the test-time defense should not travel too far along the ``detour.'' In the following subsection, we further study the non-monotonic effect from the attackers' perspective.
\subsection{Performance of a fixed defense under attacks with varying number of iterations} \label{subsec:varying-attack_fixed-defense}
We continue our controlled experiments by setting the parameters of each smoothing defense to the optimal values obtained in section \ref{subsec:fixed-attack_various-defenses} and varying the strength (i.e., number of iterations) of PGD attacks.
Figure \ref{fig:varying-attack_fixed-defenses} presents the classification accuracy on the ImageNet validation set as the number of attack iterations increases from 1 to 100.
Surprisingly, the accuracy first drops but then rebounds as the number of attack iterations keeps increasing. Such behavior might seem to contradict previous work, where more iterations are generally believed to yield stronger attacks, especially against defenses that involve adversarial training. For test-time defenses, however, the convex curves may reflect the actual non-monotonic relation between attacks $g$ and defenses $h$, as illustrated in Figure \ref{subfig:detour_varying-attack_fixed-defense}. In contrast, when no defense is performed, the classification accuracy keeps dropping and stabilizes at a low level.
\begin{figure}
\centering
\subfigure[PGD ($\epsilon=0.01$)]{
\includegraphics[width=0.45\textwidth]{figures/4.2/pgd-001-4in1.png}}
\subfigure[PGD ($\epsilon=0.05$)]{
\includegraphics[width=0.45\textwidth]{figures/4.2/pgd-005-4in1.png}}
\caption{\label{fig:varying-attack_fixed-defenses} Change of classification accuracy on ImageNet validation set (vertical) along with the number of iterations in PGD attack (horizontal). The bump at iteration $=50$ corresponds to a switch from the ImageNet validation set to a subset of 5,000 images to reduce computation time.}
\end{figure}
\subsection{Variance of performance among categories} \label{subsec:variance-among-categories}
During the experiments, we noticed that the variance of classification accuracy across categories was quite large. For illustration purposes, we take the PGD attack and anisotropic diffusion as an example.
Figure \ref{subfig:categorical-accuracy-sorted_pgd-0.01} shows the sorted per-category accuracies on ImageNet. The lowest categorical accuracy stays below 20\% whereas the highest reaches almost 100\%. A similar distribution of categorical accuracy is observed for other attack-defense pairs.
\begin{figure}
\centering
\subfigure[PGD, $\epsilon=0.01$ \label{subfig:categorical-accuracy-sorted_pgd-0.01}]{
\includegraphics[width=0.4\textwidth, trim={0 0 0 1.2cm}, clip]{figures/4.3/acc_001-0002.png}}
\subfigure[PGD, $\epsilon=0.05$]{
\includegraphics[width=0.4\textwidth, trim={0 0 0 1.2cm}, clip]{figures/4.3/acc_005-001.png}}
\caption{\label{fig:categorical-accuracy-sorted} Distribution of categorical accuracy on adversarial examples in increasing order. (a): Results on adversarial examples that are generated from PGD ($\epsilon=0.01$), with anisotropic diffusion as defense. (b): same as (a) but the adversarial examples are generated from PGD ($\epsilon=0.05$).}
\end{figure}
Such an observation leads us to a question: is it possible to select a relatively large subset of test samples on which a designated method works best?
The task turns out to be easy.
For each smoothing technique, we sort the correctly classified test samples by their prediction confidence. Then we can select a relatively large (more than 20,000 samples) ``optimal'' subset with high prediction confidence.
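The selection step can be sketched as follows (array names are illustrative; \texttt{confidences} and \texttt{predictions} stand for the per-sample outputs of the classifier after the given defense):
\begin{verbatim}
import numpy as np

def select_optimal_subset(confidences, predictions, labels, size=20000):
    """Keep correctly classified samples and return the indices of the
    `size` samples with the highest prediction confidence."""
    correct = np.flatnonzero(predictions == labels)
    order = correct[np.argsort(confidences[correct])[::-1]]  # descending
    return order[:size]
\end{verbatim}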
\begin{table}
\caption{\label{tab:optimal-subsets}Classification accuracy on ``optimal'' subsets consisting of adversarial examples generated from PGD attack ($\epsilon=0.01$). Accuracies can be inflated to 100\% on a dataset with $>10^4$ samples.}
\centering
\begin{tabular}{|l|ccc|}
\hline
\diagbox{"Optimal" subset }{Defense} & Anisotropic diffusion & Bilateral & Mean \\
\hline
Anisotropic diffusion & 100.0\% & 88.51\% & 93.79\% \\
Bilateral & 92.79\% & 100.0\% & 93.81\% \\
Mean & 93.62\% & 89.31\% & 100.0\% \\
\hline
\end{tabular}
\end{table}
The performance on these optimal subsets is shown in Table \ref{tab:optimal-subsets}. For example, the subset in the first row is selected based on anisotropic diffusion;
therefore, anisotropic diffusion achieves 100\% accuracy on it, while the bilateral filter achieves less than 90\%.
The optimal subset for each smoothing technique in this work will be released.
\subsection{Variance of required defense for test samples} \label{subsec:variance-among-samples}
Both the non-monotonic relations in section \ref{subsec:fixed-attack_various-defenses} and the large variance in section \ref{subsec:variance-among-categories} suggest an adaptive version of the test-time smoothing defense, which is especially suitable for iterative methods. Specifically, an optimal iteration number or termination criterion is required for each adversarial example.
In order to demonstrate the potential advantage of the adaptive method, we compute the minimum number of iterations required for defending an adversarial example. Figure \ref{fig:min-iteration} shows the histograms of the minimum iterations required with anisotropic diffusion under two sets of PGD attacks. In addition, the upper bound of the minimum iteration number is set to 30. In other words, if an adversarial sample remains misclassified throughout 30 smoothing iterations, we consider it undefendable. We then compute an upper-bound accuracy for the defense by taking into account the results from all iterations. Compared with the result from a fixed iteration number over the whole dataset, our simulation of the ``adaptive method'' enhances the accuracy from 72.2\% to 83.6\% on adversarial examples generated by PGD ($\epsilon=0.01$) and from 55.5\% to 70.1\% on adversarial examples generated by PGD ($\epsilon=0.05$).
Designing and implementing the adaptive algorithm is left for future work.
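The per-sample minimum-iteration computation used in this simulation can be sketched as follows (hypothetical helper names; \texttt{smooth\_step} performs one iteration of, e.g., anisotropic diffusion):
\begin{verbatim}
def min_defense_iterations(model, adv_image, label, smooth_step, max_iters=30):
    """Smallest number of smoothing iterations after which the adversarial
    image is classified correctly; None if it stays misclassified for all
    max_iters iterations (the sample is considered undefendable)."""
    x = adv_image.copy()
    for it in range(1, max_iters + 1):
        x = smooth_step(x)
        if model.predict(x[None]).argmax() == label:  # assumed Keras-like API
            return it
    return None
\end{verbatim}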
\begin{figure}
\centering
\subfigure[PGD ($\epsilon=0.01$) minimum iteration]{
\includegraphics[width=0.45\textwidth]{figures/4.4/pgd-001-miniter-hist-diffusion.png}}
\subfigure[PGD ($\epsilon=0.05$) minimum iteration]{
\includegraphics[width=0.45\textwidth]{figures/4.4/pgd-005-miniter-hist-diffusion.png}}
\caption{Minimum number of iterations required for defending adversarial examples}
\label{fig:min-iteration}
\end{figure}
\subsection{Test-time defense scheme}
Let $f:X\rightarrow Y$ denote a pretrained classifier that maps an image $x \in X$ to its label $y \in Y$. An adversarial attack $g:X\rightarrow X$ then maps a legitimate image $x$ to an adversarial example $\hat{x}$ under certain constraints (e.g., on the $L_p$ distance) such that $f(x) \neq f(\hat{x}) = f(g(x))$. To defend against adversarial attacks at test time, an ideal solution would be to apply the inverse mapping $g^{-1}: \hat{x}\mapsto x$. In reality, however, we have to find a defense $h$ that approximates $g^{-1}$, with the hope that $f(x) = f(h(\hat{x}))$ is satisfied. In addition, a defense $h$ is more desirable for deployment if it brings less distortion to legitimate images $x$, keeping $f(x) = f(h(x))$. To achieve that, we may also introduce a detector (as a part of $h$) that distinguishes adversarial examples from legitimate ones at the first stage of the defense. Once an input is considered legitimate, no further defense is required.
In this work, we apply smoothing techniques as the alternative approximation ($h$) of the inverse mapping ($g^{-1}$) of the attack. Theoretically, the smoothing defense only works when outputs from an adversarial attack ($g$) are ``noisy,'' which implies $g\approx h^{-1}$ from the perspective of $f$.
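In pseudocode, the deployed scheme amounts to the following sketch (the optional \texttt{detector} is a hypothetical component):
\begin{verbatim}
def defended_predict(f, h, x, detector=None):
    """Classify f(h(x)); skip the smoothing h when a detector (if provided)
    judges the input x to be legitimate."""
    if detector is not None and not detector(x):
        return f(x)
    return f(h(x))
\end{verbatim}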
\subsection{Smoothing techniques}
The smoothing techniques involved can be categorized into three groups: common, edge-preserving, and advanced. The common group includes the mean, median, and Gaussian filters, which are the most commonly used in image processing. Edge-preserving smoothing algorithms include anisotropic diffusion and the bilateral filter. More advanced smoothing techniques include non-local means and modified curvature motion. We concisely explain these algorithms in the following paragraphs.
\textbf{Mean, median, and Gaussian filters}: These filters are widely applied in image processing. Despite their simple forms, they do not necessarily perform the worst at defending against adversarial examples.
\textbf{Anisotropic diffusion} \cite{perona1990scale}: Perona--Malik anisotropic diffusion aims at reducing image noise without removing important edges by assigning lower diffusion coefficients to edge pixels (which have a larger gradient norm). At each iteration, the image is updated according to the formula below.
$$I_t = \mathrm{div} \left( c(x,y,t) \nabla I \right)= \nabla c \cdot \nabla I + c(x,y,t) \Delta I$$
in which $\mathrm{div}$ denotes the divergence operator, $\Delta$ denotes the Laplacian, and $\nabla$ denotes the gradient.
The diffusion coefficient is defined either as
$c\left(\|\nabla I\|\right) = e^{-\left(\|\nabla I\| / K\right)^2}$ or
$c\left(\| \nabla I\| \right) = \frac{1}{1 + \left(\|\nabla I\|/ K\right)^2}$.
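A minimal per-channel implementation of this update (using the exponential conductance and, for brevity, periodic boundary handling; the parameter values below are illustrative, not the tuned ones) looks as follows:
\begin{verbatim}
import numpy as np

def anisotropic_diffusion(img, n_iters=10, kappa=30.0, gamma=0.2):
    """Perona-Malik diffusion with c(|grad I|) = exp(-(|grad I|/K)^2)."""
    out = img.astype(np.float64).copy()
    for _ in range(n_iters):
        # differences to the four nearest neighbours (np.roll wraps around)
        dn = np.roll(out, -1, axis=0) - out
        ds = np.roll(out,  1, axis=0) - out
        de = np.roll(out, -1, axis=1) - out
        dw = np.roll(out,  1, axis=1) - out
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        out += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return out
\end{verbatim}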
\textbf{Bilateral filter} \cite{tomasi1998bilateral}: A bilateral filter is a non-linear edge-preserving filter that computes the filtered value of a pixel $p$ as a weighted average over its neighborhood $S$. The filter can be expressed with the following formula \cite{paris2007gentle}.
$$
\text{BF}[I](p) = \frac{1}{W(p)} \sum_{q\in S} G_{\sigma_s}(\|p - q\|) \, G_{\sigma_r}(|I(p) - I(q)|) \, I(q)
$$
in which $W(p) = \sum_{q\in S} G_{\sigma_s}(\|p - q\|) \, G_{\sigma_r}(|I(p) - I(q)|)$ is the normalization term. $G_{\sigma_s}$ and $G_{\sigma_r}$ are weight functions (e.g., Gaussian) for space and range, respectively. Edges are preserved because pixels that fall on the other side of an edge receive low range weights.
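In practice the bilateral filter is available off the shelf; a sketch using OpenCV (the parameter values here are illustrative, not the tuned ones):
\begin{verbatim}
import cv2

def bilateral_defense(img_uint8, d=9, sigma_color=75, sigma_space=75):
    """d is the neighbourhood diameter; sigma_color and sigma_space are the
    range and spatial Gaussian widths."""
    return cv2.bilateralFilter(img_uint8, d, sigma_color, sigma_space)
\end{verbatim}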
\textbf{Non-local means} \cite{buades2005non}: The non-local means algorithm takes a more general form in which the output value of a pixel $i$ is computed as an average over all pixels in the image $I$, weighted by a similarity $w(i, j)$ that is a decreasing function of the weighted Euclidean distance between the neighborhoods of the two pixels. For a discrete image $v = \{v(i)\,|\,i \in I\}$, the filtered value for a pixel $i$ is computed as follows. $$
\text{NL}[v](i) = \sum_{j\in I} w(i,j) v(j)
$$
in which the weights
$
w(i,j) = \frac{1}{Z(i)} e^{-\frac{\|v(\mathcal{N}_i) - v(\mathcal{N}_j)\|^2_{2, a}}{h^2}}
$. In the formula, $\mathcal{N}_k$ denotes a square neighborhood of fixed size centered at a pixel $k$, and $a$ is the standard deviation of the Gaussian kernel. The normalizing constant is computed as $Z(i) = \sum_j e^{-\frac{\|v(\mathcal{N}_i) - v(\mathcal{N}_j)\|^2_{2, a}}{h^2}}$.
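Non-local means is likewise available in standard libraries; a sketch using OpenCV's accelerated variant (again with illustrative, untuned parameters):
\begin{verbatim}
import cv2

def nl_means_defense(img_bgr_uint8, h=10, h_color=10,
                     template_window=7, search_window=21):
    return cv2.fastNlMeansDenoisingColored(img_bgr_uint8, None, h, h_color,
                                           template_window, search_window)
\end{verbatim}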
\textbf{Modified curvature motion} \cite{yezzi1998modified}: As most smoothing techniques are originally designed for gray-scale images, generalizing them to multi-channel color images and feature maps might be less natural, and there may exist multiple ways to carry out the generalization.
Instead of splitting a color image into separate channels, we can treat it as a surface $(x, y, R(x,y), G(x,y), B(x,y)) \subset \mathbb{R}^5$. Following the geometric property that smoother surfaces have smaller areas (or volumes), we can then iteratively smooth it with a general curvature motion method:
$$
I_t = \frac{k^{-2}\nabla^2 I + (I_y^2 + I_z^2)I_{xx} + (I_x^2 + I_z^2)I_{yy} + (I_x^2 + I_y^2)I_{zz} - 2(I_x I_y I_{xy} + I_x I_z I_{xz} + I_y I_z I_{yz})}{(k^{-2} + \|\nabla I\|^2)^2}
$$
where $k$ is a scaling factor. As $k$ becomes larger, the algorithm transitions from isotropic diffusion to a more edge-preserving diffusion.
Such a formulation can be easily and naturally extended to feature maps with more channels along the $z$ axis.
\section{Introduction}
Rectangular (or grid) diagrams of links provide a convenient combinatorial framework for
studying Legendrian and transverse links. Namely, there are the following naturally defined bijections,
each respecting the topological type of the link:
$$\begin{aligned}
\mathscr R/\bigl\langle\overrightarrow{\mathrm I},\overleftarrow{\mathrm I}\bigr\rangle&\cong
\bigl\{\xi_+\text{-Legendrian link types}\big\},\\
\mathscr R/\bigl\langle\overrightarrow{\mathrm{II}},\overleftarrow{\mathrm{II}}\bigr\rangle&\cong
\bigl\{\xi_-\text{-Legendrian link types}\big\},\\
\mathscr R/\bigl\langle\overrightarrow{\mathrm I},\overleftarrow{\mathrm I},\overleftarrow{\mathrm{II}}\bigr\rangle\cong
\mathscr R/\bigl\langle\overrightarrow{\mathrm I},\overleftarrow{\mathrm I},\overrightarrow{\mathrm{II}}\bigr\rangle&\cong
\bigl\{\xi_+\text{-transverse link types}\big\},\\
\mathscr R/\bigl\langle\overrightarrow{\mathrm I},\overleftarrow{\mathrm{II}},\overrightarrow{\mathrm{II}}\bigr\rangle\cong
\mathscr R/\bigl\langle\overleftarrow{\mathrm I},\overleftarrow{\mathrm{II}},\overrightarrow{\mathrm{II}}\bigr\rangle&\cong
\bigl\{\xi_-\text{-transverse link types}\big\},\\
\end{aligned}$$
where~$\mathscr R/\langle T_1,\ldots,T_k\rangle$ means `oriented rectangular diagrams viewed up
to exchange moves and (de)sta\-bi\-li\-za\-tions of oriented types~$T_1,\ldots,T_k$' (we use
the notation of~\cite{bypasses} for the oriented types of stabilizations and destabilizations; see also Definition~\ref{moves-def} and Figure~\ref{stab-fig} below),
$\xi_+$ is the standard contact structure of~$\mathbb S^3$, and~$\xi_-$ is the mirror image of~$\xi_+$. A proof of these facts can be found in~\cite{OST}.
With the notation above at hand, the elements of the sets
$$\mathscr R/\bigl\langle\overrightarrow{\mathrm I},\overrightarrow{\mathrm{II}}\bigr\rangle\cong
\mathscr R/\bigl\langle\overleftarrow{\mathrm I},\overleftarrow{\mathrm{II}}\bigr\rangle\cong
\mathscr R/\bigl\langle\overrightarrow{\mathrm I},\overleftarrow{\mathrm{II}}\bigr\rangle\cong
\mathscr R/\bigl\langle\overleftarrow{\mathrm I},\overrightarrow{\mathrm{II}}\bigr\rangle$$
are naturally interpreted
as braids viewed up to conjugacy and Birman--Menasco exchange moves defined in~\cite{bm4}
(these entities are called Birman--Menasco classes in~\cite{bypasses}),
and elements of
$$\mathscr R/\bigl\langle\overrightarrow{\mathrm I},\overleftarrow{\mathrm I},\overrightarrow{\mathrm{II}},
\overleftarrow{\mathrm{II}}\bigr\rangle$$
as topological types of oriented links in~$\mathbb S^3$, see~\cite{cromwell,simplification}.
The elements of~$\mathscr R/\langle\varnothing\rangle$ are so-called exchange classes, that is, rectangular diagrams viewed up
to exchange moves. The number of possible combinatorial types of diagrams in each exchange class
is finite, so the equivalence problem for exchange classes is trivially decidable. This fact and the results of~\cite{distinguishing}
are used in~\cite{dyn-shast}
to solve the equivalence problem for Legendrian knots of topological types having trivial orientation-preserving symmetry group.
It is noted in~\cite{dyn-shast} that the equivalence problem for transverse knots of the same topological types
can be solved in a similar manner, once we are able to solve the equivalence problem
for the elements of~$\mathscr R/\bigl\langle\overrightarrow{\mathrm I}\bigr\rangle$,
$\mathscr R/\bigl\langle\overleftarrow{\mathrm I}\bigr\rangle$, $\mathscr R/\bigl\langle\overrightarrow{\mathrm{II}}\bigr\rangle$,
and $\mathscr R/\bigl\langle\overleftarrow{\mathrm{II}}\bigr\rangle$
(see~\cite[Remark~7.1]{dyn-shast}).
In this note we give a topological interpretation to the elements of these sets
and solve the equivalence problem for them, thus extending the method of~\cite{distinguishing,dyn-shast} to transverse knots.
\section{Notation}
We denote by~$\mathbb T^2$ the two-dimensional torus~$\mathbb S^1\times\mathbb S^1$, and by~$\theta,\varphi$
the angular coordinates on~$\mathbb T^2$, which run through~$\mathbb R/(2\pi\mathbb Z)$. Denote
by~$p_\theta$ and~$p_\varphi$ the projection maps from~$\mathbb T^2$ to the first and the second~$\mathbb S^1$-factors, respectively.
We regard the three-sphere~$\mathbb S^3$ as the join~$\mathbb S^1*\mathbb S^1$ of two circles,
and use the associated coordinate system $\theta,\varphi,\tau$:
$$\mathbb S^3=\mathbb S^1\times\mathbb S^1\times[0;1]/\bigl((\theta',\varphi,0)\sim(\theta'',\varphi,0),
(\theta,\varphi',1)\sim(\theta,\varphi'',1)\ \forall\theta,\theta',\theta'',\varphi,\varphi',\varphi''\in\mathbb S^1\bigr).$$
(Observe that~$\tau$ is set to~$1$ on the first copy of~$\mathbb S^1$, on which the angular coordinate is~$\theta$,
and to~$0$ on the second one, where the angular coordinate is~$\varphi$.)
The map~$p_{\theta,\varphi}:\mathbb S^3\setminus\bigl(\mathbb S^1_{\tau=1}\cup\mathbb S^1_{\tau=0}\bigr)\rightarrow\mathbb T^2$
defined by~$p_{\theta,\varphi}(\theta,\varphi,\tau)=(\theta,\varphi)$ is referred to as the \emph{torus projection}.
For two distinct points~$x_1,x_2\in\mathbb S^1$ we denote by~$[x_1;x_2]$ (respectively, $(x_1;x_2)$) the closed (respectively, open) interval in
$\mathbb S^1$ starting at~$x_1$ and ending at~$x_2$.
\section{Rectangular diagrams of links}
\begin{defi}
By \emph{an oriented rectangular diagram of a link} we mean a non-empty finite subset~$R\subset\mathbb T^2$
with a decomposition~$R=R^+\sqcup R^-$ into disjoint union of two subsets~$R^+$ and~$R^-$ such
that we have~$p_\theta(R^+)=p_\theta(R^-)$, $p_\varphi(R^+)=p_\varphi(R^-)$, and each of~$p_\theta$, $p_\varphi$
restricted to each of~$R^+$, $R^-$ is injective.
The elements of~$R$ (respectively, of~$R^+$ or $R^-$)
are called \emph{vertices} (respectively, \emph{positive vertices} or \emph{negative vertices}) of~$R$.
Pairs~$(u,v)$ of vertices of~$R$ such that~$p_\theta(u)=p_\theta(v)$ (respectively, $p_\varphi(u)=p_\varphi(v)$)
are called \emph{vertical} (respectively, \emph{horizontal}) \emph{edges} of~$R$.
\end{defi}
With every oriented rectangular diagram~$R$ of a link we define \emph{the associated oriented link}~$\widehat R\subset\mathbb S^3$
as the closure of the preimage~$p_{\theta,\varphi}^{-1}(R)$ oriented so that~$\tau$ increases on
the oriented arcs constituting~$p_{\theta,\varphi}^{-1}(R^+)$ and decreases on
the oriented arcs constituting~$p_{\theta,\varphi}^{-1}(R^-)$.
A planar diagram of a link topologically equivalent to~$\widehat R$ can be obtained as follows. Cut the torus~$\mathbb T^2$
along a longitude and a meridian not passing through a vertex of~$R$ to obtain a square. Connect
the vertices in every edge by a vertical or horizontal straight line segment and make all vertical segments overpasses
at all crossings. Orient the obtained diagram so that each vertical edge is directed
from a positive vertex to a negative one, and each horizontal edge from a negative to a positive one.
For an example see Figure~\ref{rd-fig}, where positive vertices are black and negative ones are white.
\begin{figure}[ht]
\begin{tabular}{ccc}
\includegraphics[width=150pt]{rd0.eps}\put(-80,20){$\mathbb T^2$}&\hbox to 2cm{\hss}&\includegraphics[width=150pt]{rd1.eps}\\
$R$&&a knot equivalent to~$\widehat R$
\end{tabular}
\caption{A rectangular diagram of a link and a planar diagram of the corresponding link}\label{rd-fig}
\end{figure}
In this paper, all links and their diagrams are assumed to be oriented, so we omit `oriented' in the sequel.
\begin{defi}\label{moves-def}
Let~$R_1$ and~$R_2$ be rectangular diagrams of a link such that,
for some~$\theta_1,\theta_2,\varphi_1,\varphi_2\in\mathbb S^1$, the following holds:
\begin{enumerate}
\item
$\theta_1\ne\theta_2$, $\varphi_1\ne\varphi_2$;
\item
the symmetric difference~$R_1\triangle R_2$ is~$\{\theta_1,\theta_2\}\times\{\varphi_1,\varphi_2\}$;
\item
the intersection of the rectangle~$[\theta_1;\theta_2]\times[\varphi_1;\varphi_2]$
with~$R_1\cup R_2$ coincides with~$R_1\triangle R_2$;
\item
one, two, or three consecutive corners of the rectangle~$[\theta_1;\theta_2]\times[\varphi_1;\varphi_2]$
belong to~$R_1$, and the other(s) to~$R_2$;
\item
the orientations of~$R_1$ and~$R_2$ agree on~$R_1\cap R_2$, which
means~$R_1^+\cap R_2=R_1\cap R_2^+$ (equivalently, $R_1^-\cap R_2=R_1\cap R_2^-$).
\end{enumerate}
Then we say that the passage~$R_1\mapsto R_2$ is \emph{an elementary move}.
An elementary move~$R_1\mapsto R_2$ is called:
\begin{itemize}
\item
\emph{an exchange move} if~$|R_1|=|R_2|$,
\item
\emph{a stabilization move} if~$|R_2|=|R_1|+2$, and
\item
\emph{a destabilization move} if~$|R_2|=|R_1|-2$,
\end{itemize}
where~$|R|$ denotes the number of vertices of~$R$.
\end{defi}
We distinguish two \emph{types} and four \emph{oriented types} of stabilizations and destabilizations as follows.
\begin{defi}
Let~$R_1\mapsto R_2$ be a stabilization, and let~$\theta_1,\theta_2,\varphi_1,\varphi_2$ be as in Definition~\ref{moves-def}.
Denote by~$v$ the unique element of~$R_1\cap([\theta_1;\theta_2]\times[\varphi_1;\varphi_2])$.
We say that the stabilization~$R_1\mapsto R_2$ and the destabilization~$R_2\mapsto R_1$
are of \emph{type~\rm I} (respectively, of \emph{type~\rm II}) if
$v\in\{(\theta_1,\varphi_1),(\theta_2,\varphi_2)\}$
(respectively, $v\in\{(\theta_1,\varphi_2),(\theta_2,\varphi_1)\}$).
Let~$\varphi_0\in\{\varphi_1,\varphi_2\}$ be such that~$\{\theta_1,\theta_2\}\times\{\varphi_0\}\subset R_2$.
The stabilization~$R_1\mapsto R_2$ and the destabilization~$R_2\mapsto R_1$
are of \emph{oriented type~$\overrightarrow{\mathrm I}$}
(respectively, of \emph{oriented type~$\overrightarrow{\mathrm{II}}$}) if they are of type~I (respectively, of type~II),
and~$(\theta_2,\varphi_0)$ is a positive vertex of~$R_2$.
The stabilization~$R_1\mapsto R_2$ and the destabilization~$R_2\mapsto R_1$
are of \emph{oriented type~$\overleftarrow{\mathrm I}$}
(respectively, of \emph{oriented type~$\overleftarrow{\mathrm{II}}$}) if they are of type~I (respectively, of type~II)
and~$(\theta_2,\varphi_0)$ is a negative vertex of~$R_2$.
\end{defi}
Elementary moves are illustrated in Figures~\ref{exch-fig} and~\ref{stab-fig}, where the shaded rectangle is supposed
to contain no vertices of the diagrams except the indicated ones.
\begin{figure}[ht]
\begin{tabular}{ccccccc}
\includegraphics[height=60pt]{nnwb.eps}&\raisebox{27pt}{$\leftrightarrow$}&
\includegraphics[height=60pt]{wbnn.eps}&\hbox to 40 pt{\hss}&
\includegraphics[height=60pt]{nnbw.eps}&\raisebox{27pt}{$\leftrightarrow$}&
\includegraphics[height=60pt]{bwnn.eps}\\[20pt]
\includegraphics[height=60pt]{wnbn.eps}&\raisebox{27pt}{$\leftrightarrow$}&
\includegraphics[height=60pt]{nwnb.eps}&\hbox to 40 pt{\hss}&
\includegraphics[height=60pt]{bnwn.eps}&\raisebox{27pt}{$\leftrightarrow$}&
\includegraphics[height=60pt]{nbnw.eps}\\
\end{tabular}
\caption{Exchange moves}\label{exch-fig}
\end{figure}
\begin{figure}[ht]
\begin{tabular}{ccccccc}
\includegraphics[height=60pt]{nnwn.eps}&\raisebox{27pt}{$\leftrightarrow$}&
\includegraphics[height=60pt]{wbnw.eps}&\hbox to 40 pt{\hss}&
\includegraphics[height=60pt]{nbnn.eps}&\raisebox{27pt}{$\leftrightarrow$}&
\includegraphics[height=60pt]{bnwb.eps}\\
&\hbox to 0 pt{\hss type $\overrightarrow{\mathrm I}$\hss}&
&&&\hbox to 0 pt{\hss type $\overrightarrow{\mathrm I}$\hss}\\[20pt]
\includegraphics[height=60pt]{nnbn.eps}&\raisebox{27pt}{$\leftrightarrow$}&
\includegraphics[height=60pt]{bwnb.eps}&\hbox to 40 pt{\hss}&
\includegraphics[height=60pt]{nwnn.eps}&\raisebox{27pt}{$\leftrightarrow$}&
\includegraphics[height=60pt]{wnbw.eps}\\
&\hbox to 0 pt{\hss type $\overleftarrow{\mathrm I}$\hss}&
&&&\hbox to 0 pt{\hss type $\overleftarrow{\mathrm I}$\hss}\\[20pt]
\includegraphics[height=60pt]{wnnn.eps}&\raisebox{27pt}{$\leftrightarrow$}&
\includegraphics[height=60pt]{nwwb.eps}&\hbox to 40 pt{\hss}&
\includegraphics[height=60pt]{nnnb.eps}&\raisebox{27pt}{$\leftrightarrow$}&
\includegraphics[height=60pt]{wbbn.eps}\\
&\hbox to 0 pt{\hss type $\overrightarrow{\mathrm{II}}$\hss}&
&&&\hbox to 0 pt{\hss type $\overrightarrow{\mathrm{II}}$\hss}\\[20pt]
\includegraphics[height=60pt]{bnnn.eps}&\raisebox{27pt}{$\leftrightarrow$}&
\includegraphics[height=60pt]{nbbw.eps}&\hbox to 40 pt{\hss}&
\includegraphics[height=60pt]{nnnw.eps}&\raisebox{27pt}{$\leftrightarrow$}&
\includegraphics[height=60pt]{bwwn.eps}\\
&\hbox to 0 pt{\hss type $\overleftarrow{\mathrm{II}}$\hss}&
&&&\hbox to 0 pt{\hss type $\overleftarrow{\mathrm{II}}$\hss}
\end{tabular}
\caption{Stabilization and destabilization moves}\label{stab-fig}
\end{figure}
The set of all rectangular diagrams of links will be denoted by~$\mathscr R$. For any subset~$\{T_1,\ldots,T_k\}$
of~$\{\overrightarrow{\mathrm I},\overleftarrow{\mathrm I},\overrightarrow{\mathrm{II}},\overleftarrow{\mathrm{II}}\}$,
we denote by~$\langle T_1,\ldots,T_k\rangle$ the equivalence relation on~$\mathscr R$ generated
by all stabilizations and destabilizations of oriented types~$T_1,\ldots,T_k$ and exchange moves.
For a rectangular diagram of a link~$R\in\mathscr R$, we denote by~$[R]_{T_1,\ldots,T_k}$
the equivalence class of~$R$ in~$\mathscr R/\langle T_1,\ldots,T_k\rangle$.
The following statement is nearly a reformulation of~\cite[Proposition on page~42 + Theorem on page~45]{cromwell}
and~\cite[Proposition~4]{simplification} (the three versions use slightly different
settings and sets of moves, but their equivalence is easily seen).
\begin{theo}
The map
$$R\mapsto\text{the topological type of }\widehat R$$
establishes a one-to-one correspondence between $\mathscr R/\langle
\overrightarrow{\mathrm I},\overleftarrow{\mathrm I},\overrightarrow{\mathrm{II}},\overleftarrow{\mathrm{II}}\rangle$
and the set of all link types.
\end{theo}
\section{Decidability for the equivalence of transverse knots}
Here is the main technical result of the present paper:
\begin{theo}\label{main-tech-theo}
For any~$T\in\{\overrightarrow{\mathrm I},\overleftarrow{\mathrm I},\overrightarrow{\mathrm{II}},\overleftarrow{\mathrm{II}}\}$
there is an algorithm for deciding, given two rectangular diagrams of a link~$R_1$, $R_2$, whether or not~$[R_1]_T=[R_2]_T$.
\end{theo}
To prove Theorem~\ref{main-tech-theo} we need some preparations.
For a rectangular diagram of a link~$R$, denote by~$\Gamma_{\overrightarrow{\mathrm{II}}}(R)$
the following union of closed immersed staircase-like curves in~$\mathbb T^2$:
$$\Gamma_{\overrightarrow{\mathrm{II}}}(R)=\left(\bigcup_{(\theta_0,\varphi_1)\in R^+,\
(\theta_0,\varphi_2)\in R^-}\{\theta_0\}\times[\varphi_1;\varphi_2]\right)\cup
\left(\bigcup_{(\theta_1,\varphi_0)\in R^-,\
(\theta_2,\varphi_0)\in R^+}[\theta_1;\theta_2]\times\{\varphi_0\}\right)$$
oriented by demanding that~$\theta+\varphi$ locally increase on every straight line segment
in this union. These straight line segments will be referred to as \emph{the edges} of~$\Gamma_{\overrightarrow{\mathrm{II}}}(R)$.
Thus, the pair of endpoints of an edge of~$\Gamma_{\overrightarrow{\mathrm{II}}}(R)$ is an edge of~$R$, and vice versa.
An example is shown in Figure~\ref{gamma-fig}.
\begin{figure}[ht]
\begin{tabular}{ccc}
\includegraphics[width=150pt]{rd0.eps}\put(-80,20){$\mathbb T^2$}&\hbox to 2cm{\hss}&\includegraphics[width=150pt]{tl-example.eps}\\
$R$&&$\Gamma_{\overrightarrow{\mathrm{II}}}(R)$
\end{tabular}
\caption{A rectangular diagram of a link~$R$ and the curve~$\Gamma_{\protect\overrightarrow{\mathrm{II}}}(R)$}\label{gamma-fig}
\end{figure}
The union of curves~$\Gamma_{\overrightarrow{\mathrm{II}}}(R)$ can also be described as the torus projection
of the link~$\widehat R_\varepsilon$ obtained from~$\widehat R$ by replacing each arc in the domain~$\tau\in[0;\varepsilon]$
(respectively, $\tau\in[1-\varepsilon;1]$)
by an arc on which the coordinates~$\varphi$ and~$\tau$ are constant, and~$\theta$ is increasing
(respectively, $\theta$ and~$\tau$ are constant, and~$\varphi$ is increasing), where~$\varepsilon\in(0;1/2)$.
With every rectangular diagram of a link~$R$ we associate a triple of
numbers~$\omega_{\overrightarrow{\mathrm{II}}}(R)\in\mathbb N\times\mathbb N\times(\mathbb N\cup\{0\})$
as follows: $\omega_{\overrightarrow{\mathrm{II}}}(R)=(k,l,m)$, where~$m$ is the number of double points in~$\Gamma_{\overrightarrow{\mathrm{II}}}(R)$,
and~$(k,l)\in\mathbb Z^2=H_1(\mathbb T^2;\mathbb Z)$ is the homology class of~$\Gamma_{\overrightarrow{\mathrm{II}}}(R)$, that is,
$$k=\frac1{2\pi}\int\limits_{\Gamma_{\overrightarrow{\mathrm{II}}}(R)}d\theta,\qquad l=\frac1{2\pi}\int\limits_{\Gamma_{\overrightarrow{\mathrm{II}}}(R)}d\varphi.$$
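The invariant~$\omega_{\overrightarrow{\mathrm{II}}}$ is straightforward to compute from the vertex data. The following Python sketch (not part of the paper's formal content) assumes the diagram is given as two lists of $(\theta,\varphi)$ pairs for the positive and negative vertices, with angles in $[0,2\pi)$:
\begin{verbatim}
import numpy as np

TWO_PI = 2.0 * np.pi

def arc_len(a, b):
    # length of the arc [a; b] traversed in the increasing direction
    return (b - a) % TWO_PI

def in_open_arc(x, a, b):
    # True iff x lies strictly inside the open arc (a; b)
    return 0.0 < (x - a) % TWO_PI < arc_len(a, b)

def omega_II(R_plus, R_minus):
    """R_plus, R_minus: lists of (theta, phi) vertices; returns (k, l, m)."""
    # vertical edges {theta0} x [phi_+; phi_-]: the two vertices of a column
    vertical = [(t, p, next(pn for (tn, pn) in R_minus if np.isclose(tn, t)))
                for (t, p) in R_plus]
    # horizontal edges [theta_-; theta_+] x {phi0}: the two vertices of a row
    horizontal = [(p, t, next(tp for (tp, pp) in R_plus if np.isclose(pp, p)))
                  for (t, p) in R_minus]
    k = round(sum(arc_len(t1, t2) for (_, t1, t2) in horizontal) / TWO_PI)
    l = round(sum(arc_len(p1, p2) for (_, p1, p2) in vertical) / TWO_PI)
    # double points: transverse crossings of a vertical and a horizontal edge
    m = sum(1 for (t0, p1, p2) in vertical for (p0, t1, t2) in horizontal
            if in_open_arc(t0, t1, t2) and in_open_arc(p0, p1, p2))
    return k, l, m
\end{verbatim}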
\begin{lemm}\label{omega-inv-lem}
If~$[R]_{\overrightarrow{\mathrm{II}}}=[R']_{\overrightarrow{\mathrm{II}}}$, then~$\omega_{\overrightarrow{\mathrm{II}}}(R)=
\omega_{\overrightarrow{\mathrm{II}}}(R')$.
\end{lemm}
\begin{proof}
To simplify the notation we put~$\Gamma=\Gamma_{\overrightarrow{\mathrm{II}}}(R)$ and~$\Gamma'=\Gamma_{\overrightarrow{\mathrm{II}}}(R')$.
It suffices to consider the case when~$R\mapsto R'$ is an exchange move or a type~$\overrightarrow{\mathrm{II}}$ stabilization.
One can check that, for any of these moves,
the closure of the symmetric difference~$\Gamma\triangle\Gamma'$
is the boundary of a rectangle~$r$ (which is not necessarily the one mentioned in Definition~\ref{moves-def}),
with the bottom and right sides of~$r$ belonging to one of~$\Gamma$, $\Gamma'$, and the top and left sides
to the other. Moreover, if~$\Gamma$ and~$\Gamma'$ are viewed as $1$-chains, then~$\Gamma-\Gamma'=\partial r$
for some orientation of~$r$. The cases are sketched in Figure~\ref{allowed-fig}.
\begin{figure}[ht]
\begin{tabular}{ccccccc}
\includegraphics[height=50pt]{box2.eps}\put(-31,22){$r$}&\raisebox{23pt}{$\leftrightarrow$}&
\includegraphics[height=50pt]{box1.eps}\put(-31,22){$r$}&\hbox to 40 pt{\hss}&
\includegraphics[height=50pt]{box3.eps}\put(-31,22){$r$}&\raisebox{23pt}{$\leftrightarrow$}&
\includegraphics[height=50pt]{box4.eps}\put(-31,22){$r$}\\
\includegraphics[height=50pt]{box5.eps}\put(-31,22){$r$}&\raisebox{23pt}{$\leftrightarrow$}&
\includegraphics[height=50pt]{box8.eps}\put(-31,22){$r$}&\hbox to 40 pt{\hss}&
\includegraphics[height=50pt]{box7.eps}\put(-31,22){$r$}&\raisebox{23pt}{$\leftrightarrow$}&
\includegraphics[height=50pt]{box6.eps}\put(-31,22){$r$}\\
\end{tabular}
\caption{The change of~$\Gamma_{\protect\overrightarrow{\mathrm{II}}}(R)$ under allowed elementary moves on~$R$}\label{allowed-fig}
\end{figure}
Thus, the homology class of~$\Gamma$ in~$H_1(\mathbb T^2)$
is the same as that of~$\Gamma'$.
Let~$(k,l)\in H_1(\mathbb T^2;\mathbb Z)$ be this class.
Since both multi-valued functions~$\theta$ and~$\varphi$ are locally non-decreasing on every edge of~$\Gamma$ and~$\Gamma'$,
each meridian~$\{\theta\}\times\mathbb S^1$ (respectively, longitude~$\mathbb S^1\times\{\varphi\}$) not passing through a vertex of~$R$
intersects each of~$\Gamma$ and~$\Gamma'$ exactly~$k$ (respectively, $l$) times.
Let~$\theta_1,\theta_2,\varphi_1,\varphi_2$ be as in Definition~\ref{moves-def}. Denote the rectangle~$[\theta_1;\theta_2]\times[\varphi_1;\varphi_2]$
by~$r_0$. Three cases are possible:
\begin{itemize}
\item $r=r_0$,
\item $r=[\theta_2;\theta_1]\times[\varphi_1;\varphi_2]$, or
\item
$r=[\theta_1;\theta_2]\times[\varphi_2;\varphi_1]$.
\end{itemize}
By the assumption of Definition~\ref{moves-def}, the intersection of~$r_0$ with~$R$ is a subset of the set
of vertices of~$r_0$.
Therefore, any vertical edge of~$\Gamma$ that intersects~$(\theta_1;\theta_2)\times\{\varphi_1\}$
intersects also~$(\theta_1;\theta_2)\times\{\varphi_2\}$, and vice versa. Let~$k_0$ be the number
of such edges. These edges are the same in~$\Gamma'$.
Similarly, let~$l_0$ be the number of horizontal edges of~$\Gamma$ (equivalently, of~$\Gamma'$)
that intersect~$\{\theta_1\}\times(\varphi_1;\varphi_2)$ (equivalently, $\{\theta_2\}\times(\varphi_1;\varphi_2)$).
$\Gamma$ and~$\Gamma'$ have exactly the same set of double points outside~$\partial r$. From the arguments above
it follows that the number
of double points of~$\Gamma$ and~$\Gamma'$ at~$\partial r$ is also the same and is equal to
\begin{itemize}
\item
$k_0+l_0$ if~$r=r_0$,
\item
$k-k_0-1+l_0$ if~$r=[\theta_2;\theta_1]\times[\varphi_1;\varphi_2]$,
\item
$k_0+l-l_0-1$ if~$r=[\theta_1;\theta_2]\times[\varphi_2;\varphi_1]$.
\end{itemize}
The claim follows.
\end{proof}
\begin{lemm}\label{step-lem}
Let~$R$ and~$R'$ be rectangular diagrams of a link such that the closure of the symmetric
difference~$\Gamma_{\protect\overrightarrow{\mathrm{II}}}(R)\triangle\Gamma_{\protect\overrightarrow{\mathrm{II}}}(R')$
has the form of the boundary of an embedded disk or an annulus~$F\subset\mathbb T^2$ such that
the interior of~$F$ is disjoint from~$R\cup R'$, and~$\partial F$ is disjoint from the set of double points of~$\Gamma_{\protect\overrightarrow{\mathrm{II}}}(R)$
and~$\Gamma_{\protect\overrightarrow{\mathrm{II}}}(R')$.
Then~$[R]_{\overrightarrow{\mathrm{II}}}=[R']_{\overrightarrow{\mathrm{II}}}$.
\end{lemm}
\begin{proof}
We again put~$\Gamma=\Gamma_{\overrightarrow{\mathrm{II}}}(R)$ and~$\Gamma'=\Gamma_{\overrightarrow{\mathrm{II}}}(R')$.
Suppose that~$F$ is a disc. It follows from the hypothesis of the lemma that:
\begin{enumerate}
\item
$F$ is co-bounded by two staircase arcs~$\alpha$ and~$\beta$ such that~$\alpha\subset\Gamma$
and~$\beta\subset\Gamma'$ (on which the functions~$\theta$ and~$\varphi$
are locally non-decreasing);
\item
the set of corners of~$F$ coincides with~$R\triangle R'$.
\end{enumerate}
See Figure~\ref{disc-d-fig}(a) for an illustration.
\begin{figure}[ht]
\begin{tabular}{ccc}
\includegraphics{discd.eps}\put(-100,70){$\alpha$}\put(-47,40){$\beta$}\put(-72,55){$F$}&&
\includegraphics{discd12.eps}\put(-72,52){$\gamma$}\put(-100,30){$F_2$}\put(-50,78){$F_1$}\\
(a)&\hbox to 2cm{\hss}&(b)
\end{tabular}
\caption{Induction step when~$F$ is a disc}\label{disc-d-fig}
\end{figure}
The proof is by induction
in the number~$m$ of corners of the polygon~$F$.
The smallest possible number is~$m=4$. In this case, one easily finds
that~$R\mapsto R'$ is an exchange move or a type~$\overrightarrow{\mathrm{II}}$ stabilization or destabilization.
Suppose that~$m>4$ and the claim is proved in the case when~$F$ has fewer corners than~$m$.
Small perturbations of a rectangular diagram of a link are achievable by means of exchange moves, so, without loss of generality, we may assume
that no meridian or longitude of the torus~$\mathbb T^2$ contains four points of~$R\cup R'$, for this can
be resolved by a small perturbation of~$R$ or~$R'$.
There is an arc~$\gamma$ of the form~$[\theta_1;\theta_2]\times\{\varphi_0\}$ such that:
\begin{enumerate}
\item
$\gamma\subset F$;
\item
$\gamma\cap\partial F=\partial\gamma$;
\item
one of the endpoints of~$\gamma$ belongs to~$R\cup R'$.
\end{enumerate}
Without loss of generality we may assume that~$(\theta_1,\varphi_0)\in\alpha$ and~$(\theta_2,\varphi_0)\in\beta$, as this is
only a question of exchanging the roles of~$R$ and~$R'$. The arc~$\gamma$ cuts~$F$ into two discs, which we denote by~$F_1$ and~$F_2$.
We number them so that~$F_1$ is above~$\gamma$ and~$F_2$ is below~$\gamma$, see Figure~\ref{disc-d-fig}(b).
Let~$C_1$ (respectively, $C_2$) be the set of corners of~$F_1$ (respectively, $F_2$). One can see that there is
a rectangular diagram of a link~$R''$ such that~$R\triangle R''=C_1$ (which is equivalent to~$R'\triangle R''=C_2$)
whose orientation agrees with that of~$R$ on~$R\cap R''$. We then have
\begin{equation}\label{f1f2-eq}\overline{\Gamma\triangle\Gamma_{\protect\overrightarrow{\mathrm{II}}}(R'')}=\partial F_1,\quad
\overline{\Gamma'\triangle\Gamma_{\protect\overrightarrow{\mathrm{II}}}(R'')}=\partial F_2.
\end{equation}
Each of~$F_1$ and~$F_2$ has fewer corners than~$m$, hence, by the induction hypothesis, we have
$[R]_{\overrightarrow{\mathrm{II}}}=[R'']_{\overrightarrow{\mathrm{II}}}$ and~$[R']_{\overrightarrow{\mathrm{II}}}=[R'']_{\overrightarrow{\mathrm{II}}}$.
The induction step follows.
Now suppose that~$F$ is an annulus. Then it can be cut by two straight line segments, one horizontal and one vertical,
into two discs so that condition~\eqref{f1f2-eq} will hold (possibly after exchanging~$R$ and~$R'$),
which again will imply $[R]_{\overrightarrow{\mathrm{II}}}=[R'']_{\overrightarrow{\mathrm{II}}}$
and~$[R']_{\overrightarrow{\mathrm{II}}}=[R'']_{\overrightarrow{\mathrm{II}}}$
by the proven case of the lemma. The idea is illustrated in Figure~\ref{annf-fig}. We skip the easy details.
\begin{figure}[ht]
\centerline{\includegraphics{annF.eps}\put(-75,53){$F_1$}\put(-130,110){$F_2$}}
\caption{Cutting the annulus~$F$ into two discs~$F_1$ and~$F_2$}\label{annf-fig}
\end{figure}
\end{proof}
\begin{lemm}\label{r=r-lem}
Let~$R$ and~$R'$ be rectangular diagrams of a link such that:
\begin{enumerate}
\item
$\Gamma_{\overrightarrow{\mathrm{II}}}(R)$ and~$\Gamma_{\overrightarrow{\mathrm{II}}}(R')$
have the same set of double points, which we denote by~$X$;
\item
there is an isotopy from~$\Gamma_{\overrightarrow{\mathrm{II}}}(R)$ to~$\Gamma_{\overrightarrow{\mathrm{II}}}(R')$ in~$\mathbb T^2$
fixed on an open neighborhood of~$X$.
\end{enumerate}
Then~$[R]_{\overrightarrow{\mathrm{II}}}=[R']_{\overrightarrow{\mathrm{II}}}$.
\end{lemm}
\begin{proof}
As before, we put~$\Gamma=\Gamma_{\overrightarrow{\mathrm{II}}}(R)$ and~$\Gamma'=\Gamma_{\overrightarrow{\mathrm{II}}}(R')$
and assume that no longitude or meridian contains four points of~$R\cup R'$.
The closure~$\alpha$
of a connected component of~$\Gamma\setminus X$ (respectively, $\Gamma'\setminus X$) will be called \emph{an arc of~$\Gamma$} (respectively, $\Gamma'$) if~$\alpha\cap X\ne\varnothing$.
There is a natural one-to-one correspondence between the arcs of~$\Gamma$ and those of~$\Gamma'$, defined
by demanding that arcs~$\alpha\subset\Gamma$ and~$\alpha'\subset\Gamma'$ corresponding to each other
have the same starting (equivalently, terminal) portion.
Let~$\alpha_1,\alpha_2,\ldots,\alpha_N$ be all the arcs of~$\Gamma$, and let~$\alpha_1',\alpha_2',\ldots,\alpha_N'$ be the
respective arcs of~$\Gamma'$.
Some connected components of~$\Gamma$ (and hence of~$\Gamma'$) may be disjoint from~$X$, and thus be simple
closed staircase-like curves, which are pairwise disjoint and homologous to each other.
Let~$\gamma_1,\gamma_2,\ldots,\gamma_K$ be all these components numbered using the following recipe.
Choose a point~$x_0\in X$ if $X$ is non-empty, and~$x_0\in\mathbb T^2\setminus(\Gamma\cup\Gamma')$ otherwise.
Choose also an oriented loop~$\beta\subset\mathbb T^2$ based at~$x_0$ that intersects each~$\gamma_i$ exactly once.
The numbering of~$\gamma_i$'s is chosen according to the order in which the points~$\gamma_i\cap\beta$ follow on~$\beta$.
Let~$\gamma_1',\gamma_2',\ldots,\gamma_K'$ be the closed components of~$\Gamma'\setminus X$
numbered so that an isotopy bringing~$\Gamma$ to~$\Gamma'$ and fixed on~$X\cup\{x_0\}$ brings~$\gamma_i$ to~$\gamma_i'$, $i=1,\ldots,K$.
We proceed by induction in
\begin{equation}\label{cRR-eq}
c(R,R')=(K+1)\Bigl(\sum_{\beta,\beta'}\chi(\beta\cap\beta')-N\Bigr)+\bigl|\{i=1,\ldots,K:\gamma_i\ne\gamma_i'\}\bigr|,
\end{equation}
where~$\chi$ denotes the Euler characteristic, and the sum is taken over all connected components~$\beta$ of~$\Gamma\setminus X$
and all connected components~$\beta'$ of~$\Gamma'\setminus X$.
The equality~$c(R,R')=0$ means that~$R$ and~$R'$ coincide. This is the induction base.
Suppose that~$c(R,R')>0$ and~$\sum_{\beta,\beta'}\chi(\beta\cap\beta')=N$. This means that all the arcs of~$\Gamma'$
coincide with the respective arcs of~$\Gamma$, and, for any~$i,j\in\{1,\ldots,K\}$, the curves~$\gamma_i$ and~$\gamma_j'$
are either coincident or disjoint.
Let~$k$ be the minimal index such that~$\gamma_k\ne\gamma_k'$. Then~$\gamma_k$ and~$\gamma_k'$ cut
the torus~$\mathbb T^2$ into two annuli. Let~$A$ be the one of these annuli that does not contain the point~$x_0$.
The interior of~$A$ is disjoint either from~$\Gamma$ or from~$\Gamma'$. Without loss of generality
we may assume the former. There is a rectangular diagram of a link~$R''$ such that~$\Gamma_{\overrightarrow{\mathrm{II}}}(R'')=
(\Gamma\setminus\gamma_k)\cup\gamma_k'$. We have~$[R]_{\overrightarrow{\mathrm{II}}}=[R'']_{\overrightarrow{\mathrm{II}}}$
by Lemma~\ref{step-lem} and~$c(R'',R')<c(R,R')$, which gives the induction step.
Now suppose that~$\sum_{\beta,\beta'}\chi(\beta\cap\beta')>N$. This means that, for some~$i\in\{1,\ldots,N\}$,
we have~$\alpha_i\ne\alpha_i'$ or, for some~$i,j\in\{1,\ldots,K\}$, we have~$\gamma_i\cap\gamma_j'\ne\varnothing$,
$\gamma_i\ne\gamma_j'$. In both cases, we claim that either~$c(R,R')$ can be reduced by
a small perturbation of~$R$ or~$R'$ keeping the set of double points of~$\Gamma$ fixed, or
there is a disc~$d\subset\mathbb T^2$ co-bounded by two staircase arcs~$\beta,\beta'$ such
that
$$d\cap\Gamma=\beta\subset\Gamma\setminus X,\qquad d\cap\Gamma'=\beta'\subset\Gamma'\setminus X.$$
Take this claim for granted for the moment. In the former case, the induction step is obvious. In the latter case,
there is a rectangular diagram of a link~$R''$ such that~$\Gamma_{\overrightarrow{\mathrm{II}}}(R'')=(\Gamma\setminus\beta)\cup
\beta'$. By Lemma~\ref{step-lem}, for such a diagram, we again have~$[R]_{\overrightarrow{\mathrm{II}}}=[R'']_{\overrightarrow{\mathrm{II}}}$.
The first summand in~\eqref{cRR-eq} decreases by~$K+1$ when~$R$ is replaced by~$R''$,
whereas the second summand may increase by at most~$K$
(as a result of possible renumbering of~$\gamma_i$'s). Hence~$c(R'',R')<c(R,R')$, and the induction step follows.
Now we prove the claim.
A disc in~$\mathbb T^2$ disjoint from~$X$ and
co-bounded by a subarc of~$\Gamma\setminus X$ and a subarc of~$\Gamma'\setminus X$ will be referred to as \emph{a bigon of~$\Gamma$
and~$\Gamma'$}. If these subarcs are the \emph{only} intersections of the bigon with~$\Gamma$ and~$\Gamma'$, then the bigon will be called \emph{clean}. We use a similar terminology for the full preimages~$\widetilde\Gamma$ and~$\widetilde\Gamma'$
of~$\Gamma$ and~$\Gamma'$, respectively,
under the projection map~$\mathbb R^2\rightarrow\mathbb T^2=\mathbb R^2/(2\pi\mathbb Z^2)$.
The set of double points of~$\widetilde\Gamma$ (equivalently, of~$\widetilde\Gamma'$) is denoted by~$\widetilde X$.
Suppose that~$\alpha_i\ne\alpha'_i$ for some~$i\in\{1,\ldots,N\}$. Choose preimages~$\widetilde\alpha_i$ and~$\widetilde\alpha_i'$
of these arcs
in~$\mathbb R^2$ so that~$\partial\widetilde\alpha_i=\partial\widetilde\alpha_i'$.
(The arcs~$\alpha_i$ and~$\alpha_i'$ may form closed loops based at a point from~$X$,
in which case~$\widetilde\alpha_i$ and~$\widetilde\alpha_i'$ are defined as the closures
of preimages of~$\alpha_i\setminus X$ and~$\alpha_i'\setminus X$, such that~$\partial\widetilde\alpha_i=\partial\widetilde\alpha_i'$.)
By the hypothesis of the lemma, the staircase
arcs~$\widetilde\alpha_i$ and~$\widetilde\alpha'_i$ are isotopic
relative to~$\widetilde X$ and coincide near~$\partial\widetilde\alpha_i=\partial\widetilde\alpha_i'\subset\widetilde X$.
This implies the existence of a bigon~$\widetilde d$ of~$\widetilde\Gamma$ and~$\widetilde\Gamma'$
with~$\partial\widetilde d\subset\widetilde\alpha_i\cup\widetilde\alpha_i'$.
However, this bigon is not necessarily clean. If the interior of~$\widetilde d$
has a non-empty intersection with~$\widetilde\Gamma$ or~$\widetilde\Gamma'$,
then a subarc of~$\widetilde\Gamma\setminus\widetilde X$ or~$\widetilde\Gamma'\setminus\widetilde X$
cuts off a smaller bigon from~$\widetilde d$. Let~$\widetilde d_0$ be a minimal bigon of~$\widetilde\Gamma$ and~$\widetilde\Gamma'$
contained in~$\widetilde d$, that is, such that there is no smaller bigon contained in~$\widetilde d_0$.
Let~$\widetilde\beta\subset\widetilde\Gamma\setminus\widetilde X$ and~$\widetilde\beta'\subset\widetilde\Gamma'\setminus\widetilde X$
be the arcs co-bounding~$\widetilde d_0$. By construction, the interior of~$\widetilde d_0$ is disjoint from~$\widetilde\Gamma$
and~$\widetilde\Gamma'$.
If~$\widetilde\beta$ has a non-empty intersection with~$\widetilde\Gamma'$, or~$\widetilde\beta'$ has a non-empty
intersection with~$\widetilde\Gamma$,
then this intersection can be resolved by a small perturbation of~$R$ or~$R'$, which results in a decrease of~$c(R,R')$.
If~$\widetilde\beta\cap\widetilde\Gamma'=\varnothing=\widetilde\beta'\cap\widetilde\Gamma$, then the bigon~$\widetilde d_0$ is clean, and so is its image~$d_0$
in~$\mathbb T^2$. The claim follows.
Now suppose that~$\alpha_i=\alpha_i'$ for all~$i=1,\ldots,N$.
Let $i,j\in\{1,\ldots,K\}$ be such that~$\gamma_i\cap\gamma_j'\ne\varnothing$, $\gamma_i\ne\gamma_j'$.
If~$\gamma_i\cap\gamma_j'$ is a single point, this intersection can be resolved by a small perturbation of~$R$ or~$R'$,
which results in a decrease of~$c(R,R')$.
Otherwise we find a bigon~$\widetilde d$
of~$\widetilde\Gamma$ and~$\widetilde\Gamma'$ co-bounded by some~$\beta\subset\widetilde\gamma_i$
and~$\beta'\subset\widetilde\gamma_j'$, where~$\widetilde\gamma_i$ and~$\widetilde\gamma_j'$ are preimages of~$\gamma_i$
and~$\gamma_j'$, respectively, in~$\mathbb R^2$, and proceed as above.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{main-tech-theo}]
Due to symmetry it suffices to prove the assertion for any of the four types of stabilizations. We choose~$T=\overrightarrow{\mathrm{II}}$.
Two rectangular diagrams of a link~$R$ and~$R'$ (or, more generally, any two pairs~$(R^+,R^-)$, $({R'}^+,{R'}^-)$
of finite subsets of~$\mathbb T^2$)
are called \emph{combinatorially equivalent} if
there are orientation-preserving self-homeomorphisms~$f,g$ of~$\mathbb S^1$ such that~$(f\times g)(R^\pm)={R'}^\pm$.
Two rectangular diagrams of a link~$R$ and~$R'$
are said to be of the same \emph{$\overrightarrow{\mathrm{II}}$-homology type} if there is a rectangular diagram of a link~$R''$
such that~$R'$ and~$R''$ are combinatorially equivalent, and~$\Gamma=\Gamma_{\overrightarrow{\mathrm{II}}}(R)$
is isotopic to~$\Gamma_{\overrightarrow{\mathrm{II}}}(R'')$ relative to the set of double points of~$\Gamma$.
This is clearly an equivalence relation.
It follows from Lemma~\ref{r=r-lem} that the coincidence of the $\overrightarrow{\mathrm{II}}$-homology types
of~$R$ and~$R'$ implies~$[R]_{\overrightarrow{\mathrm{II}}}=[R']_{\overrightarrow{\mathrm{II}}}$.
\begin{rema}
The term `homology type' is justified by the fact that the $\overrightarrow{\mathrm{II}}$-homology type of a diagram~$R$ is
determined by the homological information about~$\Gamma_{\overrightarrow{\mathrm{II}}}(R)$, which
can be encoded by the function~$\psi:H_1(\Gamma,X;\mathbb Z)\rightarrow\mathbb Z$ defined by
$$\psi(z)=\bigl|\{\alpha\subset\Gamma\setminus X:[\overline\alpha]=z\}\bigr|,$$
where~$\Gamma=\Gamma_{\overrightarrow{\mathrm{II}}}(R)$, and~$X$ is the set of double points of~$\Gamma$.
\end{rema}
Now we claim that, for any~$k,l\in\mathbb N$ and~$m\in\mathbb N\cup\{0\}$, there are only finitely many pairwise
distinct $\overrightarrow{\mathrm{II}}$-homology types of diagrams~$R$ such that~$\omega_{\overrightarrow{\mathrm{II}}}(R)=(k,l,m)$.
Indeed, if~$\omega_{\overrightarrow{\mathrm{II}}}(R)=(k,l,0)$, then~$\Gamma_{\overrightarrow{\mathrm{II}}}(R)$ is a union
of~$\gcd(k,l)$ simple closed curves in~$\mathbb T^2$ having homology class~$(k,l)/\gcd(k,l)$.
This means that the $\overrightarrow{\mathrm{II}}$-homology type is completely determined by~$k,l$.
Suppose $\omega_{\overrightarrow{\mathrm{II}}}(R)=(k,l,m)$ with~$m>0$. Denote by~$X$ the set of double points
of~$\Gamma=\Gamma_{\overrightarrow{\mathrm{II}}}(R)$. Let~$\theta_1,\theta_2,\ldots,\theta_K\in\mathbb S^1$
(respectively, $\varphi_1,\varphi_2,\ldots,\varphi_L\in\mathbb S^1$) be all the points in the projection~$p_\theta(X)$ (respectively,
$p_\varphi(X)$) numbered according to their cyclic order in~$\mathbb S^1$. We have~$K,L\leqslant m$.
Pick an~$\varepsilon>0$ smaller than one half of the length of the shortest interval among those into which~$\mathbb S^1$
is cut by~$p_\theta(R)\cup p_\varphi(R)$. Then whenever~$(\theta_i,\varphi_j)\in X$, we will have
\begin{equation}\label{cross-eq}
\Gamma\cap[\theta_i-\varepsilon;\theta_i+\varepsilon]\times[\varphi_j-\varepsilon;\varphi_j+\varepsilon]=
\bigl([\theta_i-\varepsilon;\theta_i+\varepsilon]\times\{\varphi_j\}\bigr)\cup
\bigl(\{\theta_i\}\times[\varphi_j-\varepsilon;\varphi_j+\varepsilon]\bigr).
\end{equation}
Denote the set~$\bigcup_{i=1}^K\{\theta_i-\varepsilon,\theta_i+\varepsilon\}\subset\mathbb S^1$ by~$\Theta$
and~$\bigcup_{j=1}^L\{\varphi_j-\varepsilon,\varphi_j+\varepsilon\}\subset\mathbb S^1$ by~$\Phi$.
Due to the choice of~$\varepsilon$, we have~$\Theta\cap p_\theta(R)=\varnothing=\Phi\cap p_\varphi(R)$
since~$p_\theta(X)\subset p_\theta(R)$ and~$p_\varphi(X)\subset p_\varphi(R)$.
Therefore, whenever~$\theta_0\in\Theta$ (respectively, $\varphi_0\in\Phi$), the meridian~$\{\theta_0\}\times\mathbb S^1$
(respectively, the longitude~$\mathbb S^1\times\{\varphi_0\}$) intersects~$\Gamma$ in exactly~$k$ (respectively,~$l$)
points.
Denote by~$Y$ the set of all such intersection points:
$$Y=\bigl((\Theta\times\mathbb S^1)\cup(\mathbb S^1\times\Phi)\bigr)\cap\Gamma.$$
We claim that the homology type of~$R$ can be recovered from~$X$ and~$Y$. Indeed, we can recover
the subsets~$\Theta$ and~$\Phi$ as they are the projections~$p_\theta(Y)$ and~$p_\varphi(Y)$.
Now let~$r$ be the closure of a connected component of~$\mathbb T^2\setminus\bigl((\Theta\times\mathbb S^1)\cup(\mathbb S^1\times\Phi)\bigr)$.
By construction, $r$ is a rectangle which is either disjoint from~$X$ or contains exactly one point from~$X$.
In the former case, we can recover~$\Gamma\cap r$ up to isotopy relative to~$\partial r$
since~$\partial r\cap\Gamma\subset Y$
and~$\Gamma\cap r$ is a union of pairwise disjoint staircase arcs on which the functions~$\theta$, $\varphi$
are non-decreasing (some of these arcs may be degenerate to a single point). In the latter case,
the intersection~$\Gamma\cap r$ is completely known due to~\eqref{cross-eq}.
The number of points in~$Y$ is bounded from above by a function of~$k,l,m$:
$$|Y|=2Kk+2Ll\leqslant2m(k+l).$$
Therefore, for any fixed triple~$(k,l,m)$ there are only finitely many combinatorial types of pairs~$(X,Y)$ that
can arise in this construction, and hence, the number of homology types of rectangular diagrams~$R$
with~$\omega_{\overrightarrow{\mathrm{II}}}(R)=(k,l,m)$ is also finite.
In a similar fashion one can show that, for any fixed triple~$(k,l,m)$,
the number of pairs of homology types~$(Z,Z')$ of rectangular diagrams
such that, for some~$R\in Z$,
$R'\in Z'$, we have~$\omega_{\overrightarrow{\mathrm{II}}}(R)=\omega_{\overrightarrow{\mathrm{II}}}(R')=(k,l,m)$
and~$R\mapsto R'$
is either an exchange move or a type~$\overrightarrow{\mathrm{II}}$ stabilization, is also finite.
Thus, an algorithm to decide whether~$[R]_{\overrightarrow{\mathrm{II}}}=[R']_{\overrightarrow{\mathrm{II}}}$ is constructed as follows.
First, compute~$\omega_{\overrightarrow{\mathrm{II}}}(R)$ and~$\omega_{\overrightarrow{\mathrm{II}}}(R')$.
If~$\omega_{\overrightarrow{\mathrm{II}}}(R)\ne\omega_{\overrightarrow{\mathrm{II}}}(R')$,
then~$[R]_{\overrightarrow{\mathrm{II}}}\ne[R']_{\overrightarrow{\mathrm{II}}}$ by Lemma~\ref{omega-inv-lem}.
If~$\omega_{\overrightarrow{\mathrm{II}}}(R)=\omega_{\overrightarrow{\mathrm{II}}}(R')=(k,l,m)$ we construct a graph~$G$
whose vertices are homology types of all rectangular diagrams of links~$R''$ with~$\omega_{\overrightarrow{\mathrm{II}}}(R'')=(k,l,m)$,
and the edges are all pairs~$(Z_1,Z_2)$ of vertices such that there exists an exchange move or a type~$\overrightarrow{\mathrm{II}}$
stabilization~$R_1\mapsto R_2$ with~$R_1\in Z_1$, $R_2\in Z_2$. As we have seen above, this graph is finite.
It is also clear that a procedure to construct this graph as well as to find its vertices~$Z$, $Z'$
with~$Z\ni R$ and~$Z'\ni R'$ can be described in a purely combinatorial way.
Now the equality~$[R]_{\overrightarrow{\mathrm{II}}}=[R']_{\overrightarrow{\mathrm{II}}}$
holds if and only if the vertices~$Z$ and~$Z'$ belong to the same
connected component of~$G$, which is easily checkable.
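Purely as an illustration of this last step (and not as part of the proof), the connectivity check can be sketched in a few lines of Python; here the finite graph~$G$ is assumed to be already encoded as an adjacency list over homology types, and all names are hypothetical.
\begin{verbatim}
from collections import deque

def same_component(G, Z, Zp):
    # G  : dict mapping a homology type to the list of types reachable by one
    #      exchange move or type-II stabilization (hypothetical encoding)
    # Z, Zp : the homology types containing R and R'
    seen, queue = {Z}, deque([Z])
    while queue:
        v = queue.popleft()
        if v == Zp:
            return True
        for w in G.get(v, []):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return False
\end{verbatim}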
\end{proof}
\begin{coro}\label{main-coro}
The equivalence problem for transverse links of a topological type that has trivial orientation-preserving
symmetry group is decidable.
\end{coro}
\begin{proof}
Recall that equivalence classes of positively $\xi_+$-transverse links can be viewed as $\xi_+$-Legendrian links modulo
Legendrian isotopy and negative stabilizations, and also as elements of~$\mathscr R/\bigl\langle\overrightarrow{\mathrm I},\overleftarrow{\mathrm I},\overrightarrow{\mathrm{II}}\bigr\rangle$, whereas equivalence classes of $\xi_+$-Legendrian (respectively, $\xi_-$-Legendrian) links are identified
with elements of~$\mathscr R/\langle\overrightarrow{\mathrm I},\overleftarrow{\mathrm I}\rangle$ (respectively,
$\mathscr R/\langle\overrightarrow{\mathrm{II}},\overleftarrow{\mathrm{II}}\rangle$) (see~\cite{dyn-shast,OST}).
The proof of the corollary is parallel to that of~\cite[Theorem~7.1]{dyn-shast}. Namely, for any two topologically equivalent
positively~$\xi_+$-transverse links we can find their presentations by rectangular diagrams~$R_1$, $R_2$
such that~$[R_1]_{\overrightarrow{\mathrm{II}},\overleftarrow{\mathrm{II}}}=[R_2]_{\overrightarrow{\mathrm{II}},\overleftarrow{\mathrm{II}}}$.
If the links~$\widehat R_1$ and~$\widehat R_2$ have trivial orientation-preserving symmetry group,
then by~\cite[Theorem~4.2]{dyn-shast}, we have
$$[R_1]_{\overrightarrow{\mathrm{II}}}=[R_2]_{\overrightarrow{\mathrm{II}}}\quad\Leftrightarrow\quad
[R_1]_{\overrightarrow{\mathrm I},\overleftarrow{\mathrm I},\overrightarrow{\mathrm{II}}}=
[R_2]_{\overrightarrow{\mathrm I},\overleftarrow{\mathrm I},\overrightarrow{\mathrm{II}}}.$$
An application of Theorem~\ref{main-tech-theo} completes the proof.
\end{proof}
\section{Transverse--Legendrian links}
For an introduction to contact topology and the theory of Legendrian and transverse knots
the reader is referred to~\cite{etnyre05} and~\cite{Ge}. Here we consider links which
are Legendrian with respect to one contact structure and transverse to another,
simultaneously.
Namely, let~$\xi_+$ and~$\xi_-$ be the cooriented contact structures on~$\mathbb S^3$ defined
by the $1$-forms
$$\alpha_+=\sin^2\Bigl(\frac{\pi\tau}2\Bigr)\,d\theta+\cos^2\Bigl(\frac{\pi\tau}2\Bigr)\,d\varphi\qquad\text{and}\qquad
\alpha_-=\sin^2\Bigl(\frac{\pi\tau}2\Bigr)\,d\theta-\cos^2\Bigl(\frac{\pi\tau}2\Bigr)\,d\varphi,$$
respectively, that is~$\xi_\pm=\ker\alpha_\pm$. One can see that~$\xi_+$ is nothing else but
the standard contact structure, and~$\xi_-$ is a mirror image of~$\xi_+$.
\begin{defi}
A smooth link in~$\mathbb S^3$ is called \emph{transverse-Legendrian of type~$\overrightarrow{\mathrm{II}}$}
(or simply \emph{transverse-Legendrian})
if it is positively transverse with respect to~$\xi_+$ and Legendrian with respect to~$\xi_-$.
Two transverse-Legendrian links are \emph{equivalent} if they are isotopic within the class of transverse-Legendrian links.
\end{defi}
Let~$L$ be a transverse-Legendrian link.
The contact structures~$\xi_+$ and~$\xi_-$ agree, if their coorientations are ignored, at the union~$\mathbb S^1_{\tau=0}\cup\mathbb
S^1_{\tau=1}$, since we have~$\alpha_+=\pm\alpha_-$ on this subset.
Therefore, $L$ misses the circles~$\mathbb S^1_{\tau=0}$ and~$\mathbb
S^1_{\tau=1}$, and the torus projection~$p_{\theta,\varphi}$ is well defined on the whole of~$L$.
One can also see that the restrictions of both forms~$d\theta$ and~$d\varphi$ to~$L$ are non-degenerate and,
moreover, positive with respect to the orientation of~$L$.
Any transverse-Legendrian link~$L$ can be uniquely recovered from its torus projection similarly to the way in which
a Legendrian link is recovered from its front projection. Indeed, since~$\alpha_-|_L=0$,
we have~$\sin^2\bigl(\frac{\pi\tau}2\bigr)\,d\theta=\cos^2\bigl(\frac{\pi\tau}2\bigr)\,d\varphi$ on~$L$, that is, $\tan^2\bigl(\frac{\pi\tau}2\bigr)=\frac{d\varphi}{d\theta}$, and
the following equality holds for the restrictions of the coordinates~$\theta,\varphi,\tau$ on~$L$:
\begin{equation}\label{tau-eq}
\tau=\frac2\pi\arctan\sqrt{\frac{d\varphi}{d\theta}}.\end{equation}
This means that we can describe the set of transverse-Legendrian links completely in terms of torus projections.
Namely, the following statement holds:
\begin{prop}
The torus projection map gives rise to a one-to-one correspondence between transverse-Legendrian links
and subsets~$\Gamma\subset\mathbb T^2$ such that the following holds:
\begin{enumerate}
\item
$\Gamma$ is the image of a smooth immersion~$\mathbb S^1\sqcup\mathbb S^1\sqcup\ldots\sqcup\mathbb S^1\rightarrow\mathbb T^2$,
\item
the slope of~$\Gamma$ is everywhere positive, and
\item
$\Gamma$ has no self-tangencies.
\end{enumerate}
\end{prop}
A subset satisfying Conditions~1--3 of this proposition will be referred to as \emph{a (positive) torus front}.
A torus front is said to be \emph{almost generic} if it has no self-intersections
of multiplicity higher than two, and \emph{generic} if, additionally, no meridian or longitude of~$\mathbb T^2$
contains more than one self-intersection point of the front. An example of a generic torus front is shown in Figure~\ref{t-front-ex-fig}.
\begin{figure}[ht]
\includegraphics[scale=.3]{t-front.eps}
\caption{A generic positive torus front}\label{t-front-ex-fig}
\end{figure}
We use the convention that, at every crossing point, the arc with the larger slope
is shown as overcrossing. Due to~\eqref{tau-eq},
this agrees with the position of the corresponding transverse-Legendrian link in~$\mathbb S^3$.
We also indicate the orientation of the corresponding transverse-Legendrian link.
\begin{prop}\label{R3-prop}
Two almost generic torus fronts define equivalent transverse-Legendrian links if and only if they are
obtained from one another by a sequence of continuous deformations in the class of almost generic torus fronts,
and type~III Reidemeister moves.
\end{prop}
\begin{proof}
This follows from the obvious fact that the main, codimension-one stratum of
the set of non-almost generic torus fronts consists of torus fronts having a triple self-intersection point.
\end{proof}
With every rectangular diagram of a link~$R$ we associate an equivalence class~$\mathrm{TL}_{\overrightarrow{\mathrm{II}}}(R)$
of transverse-Legendrian links
by demanding that a torus front representing an element of~$\mathrm{TL}_{\overrightarrow{\mathrm{II}}}(R)$
can be obtained by an arbitrarily small (in the $C^0$ sense) perturbation of~$\Gamma_{\overrightarrow{\mathrm{II}}}(R)$.
\begin{figure}[ht]
\begin{tabular}{ccc}
\includegraphics[width=150pt]{tl-example.eps}
&\hbox to 1cm{\hss}&
\includegraphics[width=150pt]{tl1.eps}\\
$\Gamma_{\overrightarrow{\mathrm{II}}}(R)$&&
torus projection of~$\mathrm{TL}_{\overrightarrow{\mathrm{II}}}(R)$
\end{tabular}
\caption{Producing a transverse-Legendrian link from a rectangular diagram of a link}\label{tl-convertion-fig}
\end{figure}
See Figure~\ref{tl-convertion-fig} for an example.
\begin{prop}\label{i+ii-prop}
{\rm(i)}
Every equivalence class of type~$\overrightarrow{\mathrm{II}}$ transverse-Legendrian links has the form~$\mathrm{TL}_{\overrightarrow{\mathrm{II}}}(R)$
for some rectangular diagram of a link~$R$.
{\rm(ii)}
Let~$R$ and~$R'$ be rectangular diagrams of a link. Then~$\mathrm{TL}_{\overrightarrow{\mathrm{II}}}(R)=
\mathrm{TL}_{\overrightarrow{\mathrm{II}}}(R')$ implies~$[R]_{\overrightarrow{\mathrm{II}}}=[R']_{\overrightarrow{\mathrm{II}}}$.
\end{prop}
\begin{proof}
Statement~(i) follows from the approximation argument: any generic positive torus front~$\Gamma$ can be
approximated by a union of immersed staircase-like closed curves of the form~$\Gamma_{\overrightarrow{\mathrm{II}}}(R)$,
where~$R$ is a rectangular diagram of a link, so that~$\Gamma_{\overrightarrow{\mathrm{II}}}(R)$ and~$\Gamma$
have the same set of double points.
To prove statement~(ii),
let~$\Gamma$ and~$\Gamma'$ be generic torus fronts obtained from~$\Gamma_{\overrightarrow{\mathrm{II}}}(R)$
and~$\Gamma_{\overrightarrow{\mathrm{II}}}(R')$, respectively, by a $C^0$-small perturbation.
The equality $\mathrm{TL}_{\overrightarrow{\mathrm{II}}}(R)=\mathrm{TL}_{\overrightarrow{\mathrm{II}}}(R')$
means that there is a continuous $1$-parametric family~$\Gamma_t$, $t\in[0;1]$ of torus fronts such that~$\Gamma_0=\Gamma$,
$\Gamma_1=\Gamma'$. Such a family can be chosen so that there are only finitely many moments~$t=t_1,t_2,\ldots,t_m$ at which the torus front~$\Gamma_t$
is not generic, and at these moments the genericity of~$\Gamma_t$ is unavoidably broken in one of the two simplest ways:
either there are two double points of~$\Gamma_t$ at the same meridian or longitude, or~$\Gamma_t$ has
a triple self-intersection. We may assume that~$t_1<t_2<\ldots<t_m$. We also set~$t_0=0$, $t_{m+1}=1$.
For each~$t\in[0;1]\setminus\{t_1,\ldots,t_m\}$ let~$R_t$ be a rectangular diagram of a link such that~$\Gamma_{\overrightarrow{\mathrm{II}}}(R_t)$
is isotopic to~$\Gamma_t$ relative to the set of self-intersections of~$\Gamma_t$. By construction,
the homology type of~$R_t$ is constant
on each of the intervals~$[0;t_1)$, $(t_1;t_2),\ldots$, $(t_{m-1};t_m)$, $(t_m;1]$, and thus, by Lemma~\ref{r=r-lem}, so is~$[R_t]_{\overrightarrow{\mathrm{II}}}$.
At any critical moment~$t_i$ the torus front~$\Gamma_{t_i}$ can be approximated in two different ways
by unions of staircase curves~$\Gamma_{\overrightarrow{\mathrm{II}}}(R_{t_i}')$
and~$\Gamma_{\overrightarrow{\mathrm{II}}}(R_{t_i}'')$, where~$R_{t_i}'$ and~$R_{t_i}''$ are rectangular diagrams of a link such that:
\begin{enumerate}
\item
$R_{t_i}'\mapsto R_{t_i}''$ is an exchange move;
\item
the homology type of~$R_{t_i}'$ (respectively, $R_{t_i}''$)
coincides with that of~$R_t$ for~$t\in(t_{i-1};t_i)$ (respectively,
$t\in(t_i;t_{i+1})$).
\end{enumerate}
This is illustrated in Figure~\ref{R-+}.
\begin{figure}[ht]
\begin{tabular}{cccc}
\raisebox{4mm}{(a)}&
\includegraphics[scale=.65]{nongen1.eps}&
\includegraphics[scale=.65]{nongen2.eps}&
\includegraphics[scale=.65]{nongen3.eps}\\
&$\Gamma_{t_i}$&$\Gamma_{\overrightarrow{\mathrm{II}}}(R_{t_i}')$&$\Gamma_{\overrightarrow{\mathrm{II}}}(R_{t_i}'')$\\[5mm]
\raisebox{4mm}{(b)}&
\includegraphics[scale=.65]{nongen4.eps}&
\includegraphics[scale=.65]{nongen5.eps}&
\includegraphics[scale=.65]{nongen6.eps}\\
&$\Gamma_{t_i}$&$\Gamma_{\overrightarrow{\mathrm{II}}}(R_{t_i}')$&$\Gamma_{\overrightarrow{\mathrm{II}}}(R_{t_i}'')$
\end{tabular}
\caption{Approximating a non-generic torus front: (a) when two double points appear on the same longitude; (b) when
a triple self-intersection occurs}\label{R-+}
\end{figure}
The claim follows.
\end{proof}
The converse to the assertion~(ii) of Proposition~\ref{i+ii-prop} does not hold in general. Namely,
the equality~$[R]_{\overrightarrow{\mathrm{II}}}=[R']_{\overrightarrow{\mathrm{II}}}$
does not always imply
$\mathrm{TL}_{\overrightarrow{\mathrm{II}}}(R)=
\mathrm{TL}_{\overrightarrow{\mathrm{II}}}(R')$.
However, the elements of~$\mathscr R/\bigl\langle\overrightarrow{\mathrm{II}}\bigr\rangle$
can be classified in terms of transverse-Legendrian links and exchange moves, which we now define.
\begin{defi}
Let~$\Gamma$ and~$\Gamma'$ be two generic positive torus fronts such that there are three
smooth arcs~$\alpha\subset\Gamma$, $\alpha'\subset\Gamma'$, and~$\beta\subset\Gamma\cap\Gamma'$
satisfying the following conditions (see Figure~\ref{3-arcs-fig}):
\begin{enumerate}
\item
the closure of the symmetric difference~$\Gamma\triangle\Gamma'$ is~$\alpha\cup\alpha'$;
\item
there is an embedded closed $2$-disc~$d\subset\mathbb T^2$ such that~$\partial d=\alpha\cup\alpha'$;
\item
$\partial\beta\subset\partial d$;
\item $\int_\beta d\theta<2\pi$, $\int_\beta d\varphi<2\pi$;
\item
$\beta$ is homologous, relative to~$d$, either to a meridian or to a longitude of~$\mathbb T^2$;
\item
the intersection~$\beta\cap d$ consists of two arcs~$\gamma$ and~$\gamma'$ such
that~$\partial\gamma\subset\alpha$, $\partial\gamma'\subset\alpha'$;
\item
the interior of~$d$ intersects~$\Gamma\setminus\beta$ (equivalently, $\Gamma'\setminus\beta$) in a union of
pairwise disjoint open arcs each of which separates~$\gamma\setminus\partial\gamma$ from~$\gamma'\setminus\partial\gamma'$;
\item
if $\beta$ is homologous to a meridian (respectively, longitude) relative to~$d$, then at each self-intersection point of~$\Gamma$
or~$\Gamma'$ located at $\partial d\setminus\beta$
the overpassing (respectively, underpassing) arc is a part of~$\partial d$.
\end{enumerate}
Then the passage from~$\Gamma$ to~$\Gamma'$ (or between the respective transverse-Legendrian links)
is called \emph{an exchange move}.
\begin{figure}[ht]
\centerline{\includegraphics[scale=.5]{bigon.eps}\put(-160,20){$d$}\put(-185,25){$\alpha'$}\put(-155,2){$\alpha$}
\put(-225,30){$\beta$}\put(-145,40){$\gamma'$}\put(-95,35){$\gamma$}}
\caption{The disc~$d$ and the arcs~$\alpha,\beta$ in the definition of an exchange move of
transverse-Legendrian links}\label{3-arcs-fig}
\end{figure}
\end{defi}
Exchange moves of transverse-Legendrian links are illustrated in Figure~\ref{big-mov-fig}.
\begin{figure}[ht]
\centerline{\includegraphics[scale=.3]{bigon1.eps}
\hskip1cm\raisebox{65pt}{$\longleftrightarrow$}\hskip1cm
\includegraphics[scale=.3]{bigon2.eps}}
\vskip1cm
\centerline{\includegraphics[scale=.3]{bigon3.eps}
\hskip1cm\raisebox{65pt}{$\longleftrightarrow$}\hskip1cm
\includegraphics[scale=.3]{bigon4.eps}}
\caption{Exchange moves of transverse-Legendrian links}\label{big-mov-fig}
\end{figure}
\begin{prop}\label{class-prop}
Let~$R$ and~$R'$ be rectangular diagrams of a link. Then we have~$[R]_{\overrightarrow{\mathrm{II}}}=[R']_{\overrightarrow{\mathrm{II}}}$
if and only if the type~$\overrightarrow{\mathrm{II}}$ transverse-Legendrian links associated with~$R$ and~$R'$
can be obtained from one another by a finite sequence of isotopies in the class of transverse-Legendrian links,
and exchange moves.
\end{prop}
\begin{proof}
Due to Proposition~\ref{i+ii-prop}, proving the `if' part amounts to checking that exchange moves
of transverse-Legendrian links can be realized by means of elementary moves of the respective rectangular diagrams.
We leave this to the reader, as it is not used in the sequel.
To prove the `only if' part, first, note that every elementary move of rectangular diagrams can be
decomposed into a sequence of `even more elementary' ones, namely, such that each of the annuli~$(\theta_1;\theta_2)\times\mathbb S^1$
and~$\mathbb S^1\times(\varphi_1;\varphi_2)$ (we use the notation from Definition~\ref{moves-def})
contains at most one edge of the diagram being transformed. This follows from the fact that a single elementary move
associated with the rectangle~$[\theta_1;\theta_2]\times[\varphi_1;\varphi_2]$ can be decomposed
into two moves associated with rectangles~$[\theta_1;\theta_3]\times[\varphi_1;\varphi_2]$
and~$[\theta_3;\theta_2]\times[\varphi_1;\varphi_2]$ (respectively, $[\theta_1;\theta_2]\times[\varphi_1;\varphi_3]$ and
$[\theta_1;\theta_2]\times[\varphi_3;\varphi_2]$) for any~$\theta_3\in(\theta_1;\theta_2)$
(respectively, $\varphi_3\in(\varphi_1;\varphi_2)$)
such that the meridian~$\{\theta_3\}\times\mathbb S^1$ (respectively, the longitude~$\mathbb S^1\times\{\varphi_3\}$)
contains no vertices of the diagram.
In each case of an `even more elementary' move (there are now only finitely many to consider),
it is a direct check that the corresponding transverse-Legendrian
link undergoes an isotopy in the class of transverse-Legendrian links, possibly composed with an exchange move.
\end{proof}
\section{Applications}
Corollary~\ref{main-coro} gives a theoretical solution of the equivalence problem for
transverse links having trivial orientation-preserving symmetry group, but
a direct implementation of the algorithm is very time-consuming in general.
However, the results of~\cite{distinguishing,dyn-shast} supplemented by Propositions~\ref{R3-prop} and~\ref{class-prop} above
allow us, in some cases, to distinguish transverse knots having trivial orientation-preserving symmetry group
with very little effort. To illustrate this, we consider the knots~$10_{128}$ and~$10_{160}$.
It is conjectured in~\cite{chong2013} that the $\xi_+$-Legendrian knots
associated with the rectangular diagrams~$10_{128}^{1\mathrm R}$ and~$-\mu(10_{128}^{1\mathrm R})$
shown in Figure~\ref{128-r-fig} (we use the notation of~\cite{dyn-shast} for these diagrams)
\begin{figure}[ht]
\begin{tabular}{ccc}
\includegraphics[scale=.25]{39822420771501401.eps}
&\hbox to 2cm{\hss}&
\includegraphics[scale=.25]{39822420771501401m-.eps}\\
$10_{128}^{1\mathrm R}$&&$-\mu(10_{128}^{1\mathrm R})$
\end{tabular}
\caption{The diagrams~$10_{128}^{1\mathrm R}$ and~$-\mu(10_{128}^{1\mathrm R})$}\label{128-r-fig}
\end{figure}
are not Legendrian isotopic, and, moreover, remain such after
any number of negative stabilizations. This is equivalent to saying that the positively $\xi_+$-transverse
knots associated with these diagrams are not transversely isotopic.
In the notation introduced in the beginning of this paper,
this inequality can also be written as
\begin{equation}\label{128-transverse-neq}
[10_{128}^{1\mathrm R}]_{\overrightarrow{\mathrm I},\overleftarrow{\mathrm I},\overrightarrow{\mathrm{II}}}
\ne[-\mu(10_{128}^{1\mathrm R})]_{\overrightarrow{\mathrm I},\overleftarrow{\mathrm I},\overrightarrow{\mathrm{II}}}.
\end{equation}
This conjecture was partially confirmed in~\cite[Proposition~7.5]{dyn-shast},
namely, it was shown that the Legendrian knots in question are, indeed,
not equivalent, and remain such after up to four negative stabilizations. Extending
this to any number of negative stabilizations now amounts to showing that
\begin{equation}\label{128-ne-eq}
[10_{128}^{1\mathrm R}]_{\overrightarrow{\mathrm{II}}}\ne
[-\mu(10_{128}^{1\mathrm R})]_{\overrightarrow{\mathrm{II}}}.
\end{equation}
The type~$\overrightarrow{\mathrm{II}}$ transverse-Legendrian knots associated with
the diagrams~$10_{128}^{1\mathrm R}$ and~$-\mu(10_{128}^{1\mathrm R})$ are shown in Figure~\ref{torus-128-fig}.
\begin{figure}[ht]
\begin{tabular}{ccc}
\includegraphics[scale=.25]{39822420771501401a.eps}
&\hbox to 2cm{\hss}&
\includegraphics[scale=.25]{39822420771501401m-a.eps}\\
$\mathrm{TL}_{\overrightarrow{\mathrm{II}}}(10_{128}^{1\mathrm R})$&&$\mathrm{TL}_{\overrightarrow{\mathrm{II}}}(-\mu(10_{128}^{1\mathrm R}))$
\end{tabular}
\caption{Torus projections of knots from $\mathrm{TL}_{\protect\overrightarrow{\mathrm{II}}}(10_{128}^{1\mathrm R})$
and~$\mathrm{TL}_{\protect\overrightarrow{\mathrm{II}}}(-\mu(10_{128}^{1\mathrm R}))$}\label{torus-128-fig}
\end{figure}
There are no `triangles' in the complement of any of these torus fronts, hence no Reidemeister-III move
can be applied to them. It is also not hard to see that these torus fronts admit no exchange moves,
even after any isotopy in the class of positive torus fronts, and that they are not isotopic.
By Propositions~\ref{R3-prop} and~\ref{class-prop}, this implies~\eqref{128-ne-eq}, and then~\eqref{128-transverse-neq} by~\cite[Theorem~4.2 and Figure~20]{dyn-shast}.
Thus, we have the following.
\begin{prop}
The positively $\xi_+$-transverse
knots associated with the diagrams~$10_{128}^{1\mathrm R}$ and~$-\mu(10_{128}^{1\mathrm R})$
are not transversely isotopic.
\end{prop}
In a completely similar fashion the following statement, which also confirms a conjecture of~\cite{chong2013}, is established.
\begin{prop}
The positively $\xi_+$-transverse
knots associated with the diagrams~$-10_{160}^{2\mathrm R}$ and~$10_{160}^{3\mathrm R}$
shown in Figure~\ref{160-r-fig}
are not transversely isotopic.
\end{prop}
\begin{figure}[ht]
\begin{tabular}{ccc}
\includegraphics[scale=.25]{71369834809351081-.eps}
&\hbox to 2cm{\hss}&
\includegraphics[scale=.25]{40074357544392929.eps}\\
$-10_{160}^{2\mathrm R}$&&$10_{160}^{3\mathrm R}$
\end{tabular}
\caption{The diagrams~$-10_{160}^{2\mathrm R}$ and~$10_{160}^{3\mathrm R}$}\label{160-r-fig}
\end{figure}
The proof is obtained by analyzing the torus projections in Figure~\ref{torus-160-fig} (see \cite[Figure~22]{dyn-shast}
for the notation and a description of the relation between these diagrams and those in~\cite{chong2013}).
\begin{figure}
\begin{tabular}{ccc}
\includegraphics[scale=.25]{71369834809351081-a.eps}
&\hbox to 2cm{\hss}&
\includegraphics[scale=.25]{40074357544392929a.eps}\\
$\mathrm{TL}_{\protect\overrightarrow{\mathrm{II}}}(-10_{160}^{2\mathrm R})$&&$\mathrm{TL}_{\protect\overrightarrow{\mathrm{II}}}(10_{160}^{3\mathrm R})$
\end{tabular}
\caption{Torus projections of $\mathrm{TL}_{\protect\overrightarrow{\mathrm{II}}}(-10_{160}^{2\mathrm R})$
and~$\mathrm{TL}_{\protect\overrightarrow{\mathrm{II}}}(10_{160}^{3\mathrm R})$}\label{torus-160-fig}
\end{figure}
\section{Concluding remarks}
Four oriented types of stabilizations and destabilizations of rectangular diagrams of links
are symmetric to each other and play equal roles in knot theory. This means that with every rectangular diagram~$R$ of a link
one can associate four different objects having the nature of a transverse-Legendrian link type:
\begin{itemize}
\item
a positively $\xi_+$-transverse and $\xi_-$-Legendrian link type, which is identified with~$[R]_{\overrightarrow{\mathrm{II}}}$,
\item
a negatively $\xi_+$-transverse and $\xi_-$-Legendrian link type, which is identified with~$[R]_{\overleftarrow{\mathrm{II}}}$,
\item
a positively $\xi_-$-transverse and $\xi_+$-Legendrian link type, which is identified with~$[R]_{\overrightarrow{\mathrm I}}$, and
\item
a negatively $\xi_-$-transverse and $\xi_+$-Legendrian link type, which is identified with~$[R]_{\overleftarrow{\mathrm I}}$.
\end{itemize}
This is illustrated in Figure~\ref{four-tl-fig}, where torus projections of all four transverse-Legendrian links are shown.
\begin{figure}[ht]
\centerline{\includegraphics[scale=.2]{tl1a.eps}\smash{\put(-63,-15){$\mathrm{TL}_{\overrightarrow{\mathrm{II}}}(R)$}}\hskip6cm
\includegraphics[scale=.2]{tl2.eps}\smash{\put(-63,-15){$\mathrm{TL}_{\overleftarrow{\mathrm{II}}}(R)$}}}
\vskip-.5cm
\centerline{\includegraphics[scale=.2]{rd1.eps}\smash{\put(-50,-15){$R$}}}
\vskip-.5cm
\centerline{\includegraphics[scale=.2]{tl3.eps}\smash{\put(-63,-15){$\mathrm{TL}_{\overrightarrow{\mathrm I}}(R)$}}\hskip6cm
\includegraphics[scale=.2]{tl4.eps}\smash{\put(-63,-15){$\mathrm{TL}_{\overleftarrow{\mathrm I}}(R)$}}}
\vskip5mm
\caption{Four transverse-Legendrian links associated with a single rectangular diagram}\label{four-tl-fig}
\end{figure}
One may naturally ask if there is a relation between rectangular diagrams and links which are Legendrian
with respect to both contact structures~$\xi_+$ and~$\xi_-$, or transverse with respect to both of them.
The answer in both cases is pretty simple. Links that are $\xi_+$-Legendrian and~$\xi_-$-Legendrian simultaneously
are exactly the links of the form~$\widehat R$, where~$R$ is a rectangular diagram of a link (one should
extend the definition of a Legendrian link to piecewise smooth curves, since the links of the form~$\widehat R$ are
typically non-smooth). So, equivalence classes of such links are in one-to-one correspondence with
combinatorial types of rectangular diagrams.
Links which are positively $\xi_+$-transverse and positively $\xi_-$-transverse are nothing else but closed
braids with~$\mathbb S^1_{\tau=0}$ as the axis. The isotopy classes of such links are the same
thing as conjugacy classes of braids. As noted in the beginning of the paper,
rectangular diagrams allow one to classify braids modulo conjugacy and
Birman--Menasco exchange moves (as the elements of~$\mathscr R/\langle\overrightarrow{\mathrm I},\overrightarrow{\mathrm{II}}\rangle$).
The situation with transverse-Legendrian links reflected in Proposition~\ref{class-prop}
is completely analogous.
\section{Weakly-trained Autoencoders}
\label{appdx:weakly_train}
\subsection{Concentration of $K(0)$ -- Proof of Lemma \ref{lmMinEigenK0_Case1}}
\proof Recall that $K(0) = \frac{1}{m}\sum_{r=1}^m \widetilde{X}_r(0)^\top \widetilde{X}_r(0) \otimes a_ra^\top_r$.
In this proof, we omit the argument $t=0$ in $\widetilde{X}_r(0)$, and simply write $\widetilde{X}_r$ for clarity.
Consider the random matrix $Z_r = \widetilde{X}_r^\top \widetilde{X}_r \otimes a_ra^\top_r$ and $\bar{Z}_r = \mathbb{E}_{w_r}[\widetilde{X}^\top \widetilde{X}] \otimes I$. Note that $Z_r$ is positive semi-definite.
One can easily show two facts:
\[
\norm{Z_r}=\norm{\widetilde{X}_r^\top \widetilde{X}_r \otimes a_ra^\top_r} = \norm{\widetilde{X}_r^\top \widetilde{X}_r}\norm{ a_ra^\top_r} = \norm{a_r}^2\norm{\widetilde{X}_r^\top \widetilde{X}_r} \leq d\lambda_n,
\]
in which we use $\norm{a_r}^2=d$; and
\begin{equation}
\norm{\widetilde{X}_r^\top\widetilde{X}_r} = \sup_{\norm{b} = 1}\norm{\widetilde{X}_r b}^2 \leq \sup_{\|b\|=1} \|\sum_{i} b_i x_i\|^2 = \norm{X^\top X} = \lambda_n.
\label{eqn:tXnormbound}
\end{equation}
Similarly, $\norm{\bar{Z}_r}\le \lambda_n$, and hence $\norm{Z_r-\bar{Z}_r}\le (d+1)\lambda_n$. Moreover,
\begin{align*}
\mathbb{E}_{w_r, a_r}[(Z_r-\bar{Z}_r)^2]
&=\mathbb{E}_{w_r, a_r}[(Z_r-\bar{Z}_r)(Z_r - \bar{Z}_r)^\top]\\
&= \mathbb{E}_{w_r,a_r}[Z_rZ_r^\top] - \bar{Z}_r^2 \\
&= \mathbb{E}_{w_r, a_r}[(\widetilde{X}_r^\top \widetilde{X}_r)^2 \otimes \norm{a_r}^2a_ra^\top_r] - (\mathbb{E}_w[\widetilde{X}^\top \widetilde{X}])^2 \otimes I \\
&\preceq d\mathbb{E}_{w_r}[(\widetilde{X}_r^\top \widetilde{X}_r)^2] \otimes I.
\end{align*}
By the above argument, $\norm{\mathbb{E}_{w_r}[(\widetilde{X}_r^\top \widetilde{X}_r)^2]}\le \lambda_n^2$,
so $\|\sum_r \mathbb{E}((Z_r-\bar{Z}_r)^2)\|\le md\lambda_n^2$.
From the matrix Bernstein inequality \citep[Theorem~1.4]{Tropp12},
\begin{align*}
\mathbb{P}\left[ \norm{mK(0) - mK^\infty} \geq \epsilon \right] \leq nd \exp\left(- \frac{\epsilon^2/2}{(d+1)\lambda_n \epsilon/3 + md\lambda_n^2} \right).
\end{align*}
Pick $\epsilon = m\lambda_0/4$. Since the second term in the denominator of the exponent dominates ($\lambda_0 \leq \lambda_n$), the right-hand side is at most $\delta$ once
\[
m \ge C
\frac{\lambda^2_nd\log(nd/\delta)}{\lambda^2_0}
\]
for a large enough constant $C$. Therefore,
\[
\norm{K(0) - K^\infty} \leq {\lambda_0}/{4}
\]
with probability at least $1 -\delta$ for any $\delta \in (0, 1)$. By Weyl's inequality, we have with the same probability:
\[
\lambda_{\min}(K(0)) \geq 3\lambda_0/4.
\]
\qedhere
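As a purely illustrative aside (not used in the proofs), the concentration statement can be checked numerically on small instances. The following NumPy sketch assumes unit-norm data columns, Gaussian $w_r$, Rademacher $a_r$ (so that $\norm{a_r}^2=d$), and hypothetical dimensions; it forms $K(0)=\frac1m\sum_r\widetilde X_r(0)^\top\widetilde X_r(0)\otimes a_ra_r^\top$ and compares it with a Monte Carlo estimate of $K^\infty$.
\begin{verbatim}
import numpy as np

def K_matrix(X, W, A):
    # K = (1/m) * sum_r  Xtil_r^T Xtil_r  (kron)  a_r a_r^T,  of size (n*d, n*d)
    d, n = X.shape
    m = W.shape[1]
    K = np.zeros((n * d, n * d))
    for r in range(m):
        mask = (W[:, r] @ X >= 0).astype(float)   # ReLU activation pattern of w_r
        Xtil = X * mask                            # zero out inactive columns
        K += np.kron(Xtil.T @ Xtil, np.outer(A[:, r], A[:, r]))
    return K / m

rng = np.random.default_rng(0)
d, n = 3, 4
X = rng.normal(size=(d, n))
X /= np.linalg.norm(X, axis=0)                     # unit-norm data columns

M = 50000                                          # Monte Carlo proxy for K^infty
Kinf = K_matrix(X, rng.normal(size=(d, M)), rng.choice([-1.0, 1.0], size=(d, M)))
for m in (10, 100, 1000, 10000):
    K0 = K_matrix(X, rng.normal(size=(d, m)), rng.choice([-1.0, 1.0], size=(d, m)))
    print(m, np.linalg.norm(K0 - Kinf, 2))         # spectral deviation shrinks with m
\end{verbatim}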
\subsection{Proof of supporting claims}
To prove the bounds in Claims \ref{clmC1_Case1}, \ref{clmC2_Case1}, \ref{clmC3_Case1}, and \ref{clmC4_Case1}, we use the bound $\norm{w_r(k+1) - w_r(0)} \leq R'$ for all $r \in [m]$ from Lemma \ref{lmMovementInstep_Case1}. In what follows, we assume $R' < R$, where $R$ is the weight movement allowed in Lemma \ref{lmBoundKDiff_Case1}. This assumption holds with high probability as long as $m$ is large enough.
\begin{Claim}
\label{clmC1_Case1}
Let $C_1 = -\frac{2\eta}{d} \mathrm{vec}( X - U(k) )^\top K(k) \mathrm{vec}( X - U (k) )$. Then we have
\begin{align*}
C_1 \leq -\frac{\eta\lambda_0}{d} \norm{X - U(k)}_F^2.
\end{align*}
with probability at least $1-\delta$.
\end{Claim}
\proof
Using Lemma \ref{lmMovementInstep_Case1}, we have $\norm{w_r(k) - w_r(0)} \leq R' < R$ for all $r \in [m]$. By Lemma \ref{lmBoundKDiff_Case1},
we have
\[
\|K(k)-K(0)\| < \frac {\lambda_0}{4}.
\]
Therefore, $\lambda_{\min}(K(k)) \ge \lambda_0 / 2$ with probability at least $1-\delta$. As a result,
\begin{align*}
\mathrm{vec}(X - U(k))^\top K(k) \mathrm{vec}( X - U(k) ) \geq \frac{\lambda_0}{2} \| X - U(k) \|^2= \frac{\lambda_0}{2} \norm{X - U(k) }_F^2,
\end{align*}
and $C_1 \leq -\frac{\eta\lambda_0}{d} \norm{X - U(k)}_F^2$ with probability at least $1-\delta$.
\qedhere
\begin{Claim}
\label{clmC2_Case1}
Let $C_2 = \frac{2\eta}{d} \mathrm{vec}( X - U(k) )^\top K(k)^{\bot} \mathrm{vec}( X - U(k) )$. We have
\begin{align*}
C_2 \leq 8\eta nR \norm{X - U(k)}_F^2.
\end{align*}
with probability at least $1-n\exp(-mR)$.
\end{Claim}
\proof
All we need is to bound $K(k)^{\bot}$. A simple upper bound is
\begin{align*}
\norm{K(k)^{\bot}}^2 &\leq \sum_{i, j=1}^n \norm{K(k)^{\bot}_{i, j}}^2_F \\
&\leq \sum_{i, j=1}^n \norm[\Big]{ \frac{1}{m} \sum_{r\in S^{\bot}_i} x_i^\top x_j \mathbbm{1}[ w_r(k)^\top x_i \geq 0, w_r(k)^\top x_j \geq 0]a_ra^\top_r }^2_F \\
&\leq d^2\sum_{i, j=1}^n \left( \frac{1}{m} \sum_{r\in S^{\bot}_i} \mathbbm{1}[ w_r(k)^\top x_i \geq 0, w_r(k)^\top x_j \geq 0] \right)^2 \\
&\leq d^2\sum_{i, j=1}^n \left( \frac{1}{m} \sum_{r=1}^m \mathbbm{1}[r\in S^{\bot}_i] \right)^2 \\
&\leq 16n^2d^2R^2
\end{align*}
with probability $1-n\exp(-mR)$ where the last step follows from Lemma \ref{lmSongClaim4.10}. Then, with that same probability $\norm{K(k)^{\bot}} \leq \norm{K(k)^{\bot}}_F \leq 4ndR$, and
\begin{align*}
C_2 &= \frac{2\eta}{d} \mathrm{vec}( X - U(k) )^\top K(k)^{\bot} \mathrm{vec}( X - U(k) ) \\
&\leq \frac{2\eta}{d} \norm{K(k)^{\bot}} \norm{X - U(k)}_F^2 \\
&\leq 8\eta nR \norm{X - U(k)}_F^2.
\end{align*}
\qedhere
\begin{Claim}
\label{clmC3_Case1}
Let $C_3 = -2\mathrm{vec}( X - U(k) )^\top v_2$, then with probability at least $1 - n\exp(-mR)$
\begin{equation*}
C_3 \leq 8\eta n R \norm{X - U(k)}_F^2.
\end{equation*}
\end{Claim}
\proof
We have $C_3 \leq 2\norm{ X - U(k) }_F \norm{v_2}$. Using the Lipschitz property of $\phi$, we have
\begin{align*}
\norm{v_2}^2 &= \sum_{i=1}^n \norm{v_{2, i}}^2 \\
&\leq \sum_{i=1}^n \norm[\bigg]{\frac{1}{ \sqrt{md} } \sum_{r \in S^{\bot}_i} a_r \left( \phi \left( w_r(k+1)^\top x_i \right) - \phi( w_r(k)^\top x_i ) \right) }^2 \\
&\leq \frac{\eta^2}{ m } \sum_{i=1}^n \left( \sum_{r \in S^{\bot}_i} \left|\Big( \nabla_{w_r}L(W(k)) \Big)^\top x_i\right| \right)^2 \\
&\leq \frac{\eta^2}{ m } \sum_{i=1}^n \left( \sum_{r =1}^m \mathbbm{1}[r \in S^{\bot}_i] \left|\Big( \nabla_{w_r}L(W(k)) \Big)^\top x_i\right| \right)^2 \\
&\leq \frac{\eta^2}{ m }\max_r \norm[\Big]{ \nabla_{w_r}L(W(k)) }^2 \sum_{i=1}^n\left( \sum_{r=1}^m \mathbbm{1}[r \in S^{\bot}_i] \right)^2 \\
&\leq \frac{\eta^2\lambda_n}{ m^2 }\norm{X - U(k)}_F^2 \sum_{i=1}^n\left( \sum_{r=1}^m \mathbbm{1}[r \in S^{\bot}_i] \right)^2 \\
&\leq \frac{\eta^2\lambda_n}{ m^2 }\norm{X - U(k)}_F^2 \sum_{i=1}^n (4mR)^2 \\
&\leq 16n\lambda_nR^2\eta^2 \norm{X - U(k)}_F^2 \\
&\leq 16n^2R^2\eta^2 \norm{X - U(k)}_F^2.
\end{align*}
with probability $1 - n\exp(-mR)$. In the sixth step we use
\begin{align*}
\norm{\nabla_{w_r}L(W(k))} &= \norm[\Big]{\frac{1}{\sqrt{md}}\widetilde{X}_r(k)(X - U(k))^\top a_r} \\
&\leq \frac{\sqrt{\lambda_n}}{\sqrt{m}}\norm{ X - U(k) }_F,
\end{align*}
and the last step follows from Lemma \ref{lmSongClaim4.10} that $\sum_{r=1}^m \mathbbm{1}[r \in S^{\bot}_i] \leq 4mR$ with probability at least $1 - n\exp(-mR)$. Substituting this bound into $C_3$ finishes the proof.
\qedhere
\begin{Claim}
\label{clmC4_Case1}
Let $C_4 = \norm{ U (k+1) - U(k) }_F^2$. Then we have
\begin{align*}
C_4 \leq \eta^2 n\lambda_n \| X - U(k) \|_F^2.
\end{align*}
\end{Claim}
\proof
Previously in Lemma \ref{lmMovementInstep_Case1}, we proved that
\begin{align*}
\norm{\nabla_{w_r}L(W(k))} &= \norm[\Big]{\frac{1}{\sqrt{md}}\widetilde{X}_r(k)(X - U(k))^\top a_r} \\
&\leq \frac{\sqrt{\lambda_n}}{\sqrt{m}}\norm{ X - U(k) }_F.
\end{align*}
Expand the form of $U(k+1) - U(k)$ and use the Lipschitz property of $\mathrm{ReLU}$ to get
\begin{align*}
C_4 &= \sum_{i=1}^n \norm{ u_i(k+1) - u_i(k) }^2 \\
&= \frac{1}{md} \sum_{i=1}^n \norm[\Big]{ \sum_{r=1}^m a_r \left( \phi( w_r(k+1)^\top x_i ) - \phi(w_r(k)^\top x_i ) \right)}^2 \\
&\leq \eta^2 \sum_{i=1}^n \frac{1}{m} \left( \sum_{r=1}^m \norm[\Big]{\nabla_{w_r}L( W(k) ) } \right)^2 \\
&\leq \eta^2 \sum_{i=1}^n \frac{1}{m} \left( \sum_{r=1}^m \frac{\sqrt{\lambda_n}}{\sqrt{m}}\norm{ X - U(k) }_F \right)^2 \\
&= \eta^2 n\lambda_n \norm{ X - U(k) }_F^ 2 \\
&\leq n^2\eta^2 \norm{ X - U(k) }_F^2.
\end{align*}
Therefore, we finish the proof.
\qedhere
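For concreteness, here is a minimal NumPy sketch of the forward map $u_i=\frac1{\sqrt{md}}\sum_{r=1}^m a_r\phi(w_r^\top x_i)$ and of the gradient $\nabla_{w_r}L$ used repeatedly above (only $W$ is trained in this weakly-trained setting; the dimensions and the step size below are hypothetical).
\begin{verbatim}
import numpy as np

def forward(X, W, A):
    # U = (1/sqrt(m*d)) * A @ relu(W^T X); column u_i is the reconstruction of x_i
    m, d = W.shape[1], A.shape[0]
    return A @ np.maximum(W.T @ X, 0.0) / np.sqrt(m * d)

def grad_W(X, W, A):
    # grad_{w_r} L = -(1/sqrt(m*d)) * Xtil_r @ (X - U)^T @ a_r  for
    # L = (1/2)||X - U||_F^2; the appendix only uses its norm, hence no sign there
    d, n = X.shape
    m = W.shape[1]
    U = forward(X, W, A)
    G = np.zeros_like(W)
    for r in range(m):
        Xtil = X * (W[:, r] @ X >= 0).astype(float)
        G[:, r] = -(Xtil @ (X - U).T @ A[:, r]) / np.sqrt(m * d)
    return G

rng = np.random.default_rng(0)
d, n, m, eta = 3, 5, 2000, 0.1
X = rng.normal(size=(d, n)); X /= np.linalg.norm(X, axis=0)
W = rng.normal(size=(d, m))
A = rng.choice([-1.0, 1.0], size=(d, m))        # fixed output layer, ||a_r||^2 = d
for _ in range(500):                            # plain gradient descent on W only
    W -= eta * grad_W(X, W, A)
print(0.5 * np.linalg.norm(X - forward(X, W, A))**2)   # training loss after training
\end{verbatim}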
\section{Jointly-trained Autoencoders}
\label{appdx:jointly_train}
\subsection{Concentration of $H(0)$}
We re-state and prove the concentration of $H(0)$ in Lemma \ref{lmMinEigenH0_joint}.
\begin{Lemma}
For any $\delta \in (0, \frac{\lambda_0}{12d\lambda_n^2})$, if $m \ge C\frac{\max(n, d) \lambda_n\log^2(nd/\delta)}{\lambda_0^2}$ for some large enough constant $C$, then with probability at least $1 - 1/(2nd)^{2\log nd} - m\delta$,
one obtains $\norm{H(0) - H^\infty} \leq {\lambda_0}/{4}$ and $\lambda_{\min}(H(0)) \geq 3\lambda_0/4$ .
\end{Lemma}
\proof
Recall that
\begin{align}
H(0) = \frac{1}{m}\sum_{r=1}^m\phi(X^\top w_r(0))\phi(w_r(0)^\top X) \otimes I,
\end{align}
and $H^\infty = \mathbb{E}_{W(0), A(0)}[H(0)]$. Our goal is to show the concentration of $\sum_{r=1}^m\phi(X^\top w_r(0))\phi(w_r(0)^\top X)$. Let us use $w_r$ to mean $w_r(0)$ and denote the $r^{\textrm{th}}$-random matrix as
\[
Z_r = \phi(X^\top w_r)\phi(w_r^\top X).
\]
We use Lemma B.7 of \citet{zhong2017recovery} and
verify the required conditions in the claims below.
\begin{Claim}[Condition I for H(0)] The following are true:
\begin{enumerate}[label=(\roman*).,leftmargin=*]
\item $\norm{Z_r} \leq \lambda_n\norm{w_r}^2.$
\item $\mathbb{P}_{w_r}[\norm{w_r}^2 \leq 4d\sqrt{\log(2/\delta)}] \geq 1 - \delta$ for any $\delta \in (0, 1) $.
\end{enumerate}
\label{clCond1_H0_joint}
\end{Claim}
\proof
(i) We have
\[
\norm{Z_r} = \norm{\phi(X^\top w_r)\phi(w^\top_r X)} = \norm{\phi(X^\top w_r)}^2 \leq \norm{X^\top X} \norm{w_r}^2 \leq \lambda_n\norm{w_r}^2,
\]
which gives the first part (i). For the second part, we use the fact that $\norm{w_r}^2$ is a chi-squared random variable with $d$ degrees of freedom, and sub-exponential with sub-exponential norm $2\sqrt{d}$, meaning that:
\[
\mathbb{P}_{w_r}[|\norm{w_r}^2 - d| \geq \epsilon] \leq 2\exp\bigl(-\frac{\epsilon^2}{8d}\bigr)
\]
For any $\delta \in (0, 1)$ and $\epsilon^2 = 9d\log(2/\delta)\geq 8d\log(2/\delta)$, we have
\[
|\norm{w_r}^2 - d| \leq 3\sqrt{d\log(2/\delta)}
\]
with probability at least $1-\delta$. Then, $\norm{w_r}^2 \leq d + 3\sqrt{d\log(2/\delta)} \leq 4d\sqrt{\log(2/\delta)}$ with probability at least $1-\delta$.
\qedhere
\begin{Claim}[Condition II for H(0)]
$\norm{\mathbb{E}[Z_rZ_r^\top]} \leq nd\lambda_n.$
\label{clCond2_H0_joint}
\end{Claim}
\proof We have
\begin{align*}
Z_rZ_r^\top &= \phi(X^\top w_r)\phi(w^\top_r X)\phi(X^\top w_r)\phi(w^\top_r X)\\
&\preceq \norm{\phi(X^Tw_r)}^2 \norm{w_r}^2 X^\top X \preceq \sum_{l=1}^n \phi(w^\top x_l)^2\norm{w_r}^2 X^\top X.
\end{align*}
We only need to compute $\mathbb{E}_{w_r}[\phi(w^\top_r x_l)^2\norm{w_r}^2]$, which is already done in \eqref{eqn:fact2}, Section \ref{sec:scaling}. To be precise, we have
\begin{align*}
\mathbb{E}[\phi(w^\top_r x_l)^2\norm{w_r}^2] = \frac{d+2}{2} \le d.
\end{align*}
Then, we can write
\[
\norm{\mathbb{E}_{w_r}[Z_rZ_r^\top]} \leq nd\lambda_n.
\]
\qedhere
\begin{Claim}[Condition IV for H(0)]
$\sup_{\{b:\norm{b}=1\}}(\mathbb{E}[(b^\top Z_r b)^2])^{1/2} \leq \sqrt{3}d\lambda_n$.
\label{clCond3_H0_joint}
\end{Claim}
\proof Recall $Z_r = \phi(X^\top w_r)\phi(w_r^\top X)$, and for any unit-norm vector $b \in \mathbb{R}^n$
\[
(b^\top Z_r b)^2 = \norm{b^\top \phi(X^\top w_r)}^4 \leq \norm{\phi(X^\top w_r)}^4 \leq \lambda_n^2 \norm{w_r}^4.
\]
Moreover, $\norm{w_r}^2$ is a chi-squared random variable with $d$ degrees of freedom, so
\[
\mathbb{E}[\norm{w_r}^4] = d(d+2) \leq 3d^2.
\]
Therefore, $\sup_{\{b:\norm{b}=1\}}(\mathbb{E}[(b^\top Z_r^\top b)^2])^{1/2} \leq \sqrt{3}d\lambda_n$.
\qedhere
\proof[Proof of Lemma \ref{lmMinEigenH0_joint}.] With the conditions fulfilled in Claims \ref{clCond1_H0_joint}, \ref{clCond2_H0_joint} and \ref{clCond3_H0_joint}, we can now apply \cite[Lemma B.7]{zhong2017recovery} to show the concentration of $H(0)$:
\[
\norm[\big]{\frac{1}{m}\sum_{r=1}^mZ_r - \mathbb{E}[Z_r]} \leq \epsilon \norm{\mathbb{E}[Z_r]}
\]
with probability $1-1/n^{2t}-n\delta$ for any $t \geq 1$, $\epsilon \in (0, 1)$ and $\delta < \epsilon \norm{\mathbb{E}[Z_r]}/(2\sqrt{3}d\lambda_n)^2$.
For the target bound, we choose $\epsilon \norm{\mathbb{E}[Z_r]} = \lambda_0/4$, $t = \log (2nd)$ and note that $\lambda_0 \leq \norm{\mathbb{E}[Z_r]} = \norm{H^\infty} \leq \lambda_n$. Therefore, with probability at least $1- 1/(2nd)^{2\log nd} - m\delta$ for any $\delta \in (0, \frac{\lambda_0}{12d^2\lambda_n^2})$, we have
\[
\norm{H(0) - H^{\infty}} \leq \frac{\lambda_0}{4}
\]
if $m$ satisfies
\[
m \geq 18\log^2 (2nd)\frac{nd\lambda_n + \lambda_n^2 + (4d\sqrt{\log(2/\delta)})\lambda_n\lambda_0/4}{\lambda_0^2},
\]
which holds when $m \geq C\frac{nd \lambda_n \log^2(nd)\log(1/\delta)}{\lambda_0^2}$ for a large enough constant $C$.
\qedhere
\subsection{Proof of supporting claims}
In the proof of the next claims, we assume that $\norm{W(0)} \geq R'_w > R_w$ and $\norm{A(0)} \ge R'_a > R_a$. Also, assume $d \ll m$. These conditions will hold with high probability when $m$ is large enough.
\begin{Claim}
\label{clmC1_joint}
Let $C_1 = -\frac{2\eta}{d} \mathrm{vec}( X - U(k) )^\top K(k) \mathrm{vec}( X - U (k) )$ . We have
\begin{align*}
C_1 \leq -\frac{\eta\lambda_0}{d} \norm{X - U(k)}_F^2.
\end{align*}
\end{Claim}
\proof
Since we have proved that $\| W(k) - W(0) \|_F \leq R_w'$, using Lemma \ref{lmBoundHDiff_joint} with the choice of $R_w < R_w'$,
we have
\[
\|H(k)-H(0)\| \leq \frac {\lambda_0}{4}.
\]
Moreover, $G(k)$ is p.s.d, therefore $\lambda_{\min}(K(k)) \geq \lambda_{\min}(H(k)) \geq \lambda_0 / 2$, and as a result,
\begin{align*}
\mathrm{vec}(X - U(k))^\top K(k) \mathrm{vec}( X - U(k) ) \geq \frac{\lambda_0}{2} \| X - U(k) \|^2= \frac{\lambda_0}{2} \norm{X - U(k) }_F^2.
\end{align*}
and $ C_1 \leq -\frac{\eta\lambda_0}{d} \norm{X - U(k)}_F^2$.
\qedhere
\begin{Claim}
\label{clmC2_joint}
Let $C_2 = \frac{2\eta}{d} \mathrm{vec}( X - U(k) )^\top K(k)^\bot \mathrm{vec}( X - U (k) ) $. We have
\[
C_2 \leq \frac{8\eta \lambda_n}{d} \norm{X - U(k)}_F^2
\]
with probability at least $1-2\exp(-m)$.
\end{Claim}
\proof
We need to bound the spectral norm of $K(k)^\bot$, defined as $K(k)^\bot = G(k)^\bot + H(k)^\bot$. We will bound their spectral norms. We have
\begin{align*}
\norm{G(k)^\bot} &= \norm[\Big]{\frac{1}{m}\sum_{r=1}^m \mathrm{diag}(\mathbbm{1}[r\in S^{\bot}_i]) \widetilde{X}_r^\top \widetilde{X}_r \otimes a_r(k)a_r(k)^\top } \\
&\leq \frac{1}{m}\norm{X}^2\norm{\sum_{r=1}^m a_r(k)a_r(k)^\top \mathbbm{1}[r \in S^{\bot}_i]} \\
&\leq \frac{\lambda_n}{m} \norm{A(k)}^2 \\
&\leq \frac{4\lambda_n \norm{A(0)}^2}{m},
\end{align*}
where we use the assumption $R_a \leq \norm{A(0)}$. Similarly, using $R_w \le \norm{W(0)} $ we have
\begin{align*}
\norm{H(k)^\bot} &= \norm[\Big]{\frac{1}{m}\sum_{r=1}^m \mathrm{diag}(\mathbbm{1}[r\in S^{\bot}_i]) \phi(X^\top w_r(k)) \phi(w_r(k)^\top X) \otimes I} \\
&\leq \frac{1}{m}\norm{X}^2 \norm{W(k)}^2 \\
&\leq \frac{4\lambda_n\norm{W(0)}^2}{m}.
\end{align*}
Moreover, using a standard bound on the spectral norm of sub-Gaussian matrices, we have $\norm{W(0)} \leq 2\sqrt{m} + \sqrt{d}$ and $\norm{A(0)} \leq 2\sqrt{m} + \sqrt{d}$ with probability at least $1 - 2\exp(-m)$.
Then,
\begin{align*}
C_2 &= \frac{2\eta}{d} \mathrm{vec}( X - U(k) )^\top K(k)^\bot \mathrm{vec}( X - U (k) ) \\
&\leq \frac{8 \eta\lambda_n}{d} \norm{X - U(k)}_F^2
\end{align*}
with probability at least $1-2\exp(-m)$.
\qedhere
We re-state the results in the proof in Lemma \ref{lmMovementInTimeWr_joint} and Lemma \ref{lmMovementInTimeAr_joint} to bound the remaining terms $C_3, C_4, C_5$:
\begin{equation}
\label{eqnBoundGradWr_joint}
\norm{\nabla_{W}L( W(k), A(k) )}_F \leq \frac{\sqrt{\lambda_n}} {\sqrt{md}} \norm{X - U(k)}_F\norm{A(k)}.
\end{equation}
\begin{equation}
\label{eqnBoundGradAr_joint}
\norm{\nabla_{A}L( W(k), A(k) )}_F \leq \frac{\sqrt{\lambda_n} } {\sqrt{md}} \norm{X - U(k)}_F\norm{W(k)}.
\end{equation}
\begin{Claim}
\label{clmC3_joint}
Let $C_3 = -\frac{2\eta}{d} \mathrm{vec}( X - U(k) )^\top v_2$. We have
$$C_3 \leq \frac{16\eta^2}{d} \sqrt{n\lambda_n} \norm{X - U(k)}_F^2$$ with probability at least $1-3\exp(-m)$.
\end{Claim}
\proof
We have $\mathrm{vec}( X - U(k) )^\top v_2 \leq \norm{v_2}\norm{X - U(k)}_F$, so we need to bound $\norm{v_2}$. Let $D_i = \mathrm{diag}(\mathbbm{1}[1 \in S^{\bot}_i], \dots, \mathbbm{1}[m \in S^{\bot}_i])$, then:
\begin{align*}
\norm{v_2}^2 &= \sum_{i=1}^n \norm{v_{2, i}}^2 \\
&= \sum_{i=1}^n \norm[\Big]{\frac{1}{ \sqrt{md} } \sum_{r \in S^{\bot}_i} \left( a_r(k+1) \phi( w_r(k+1)^\top x_i ) - a_r(k)\phi(w_r(k)^\top x_i ) \right)}^2 \\
&= \frac{1}{ md} \sum_{i=1}^n \norm[\Big]{A(k+1)D_i \phi( W(k+1)^\top x_i ) - A(k)D_i\phi(W(k)^\top x_i )}^2 \\
&\leq \frac{2\eta^2}{ md } \sum_{i=1}^n \left( \norm[\Big]{(\nabla_AL) D_i \phi( W(k+1)^\top x_i )}^2 + \norm[\Big]{A(k)D_i(\nabla_WL)^\top x_i}^2 \right) \\
&\leq \frac{2n\eta^2}{md } \left( \norm{\nabla_AL}_F^2 \norm{W(k+1)}^2 + \norm{A(k)}^2 \norm{\nabla_WL}_F^2 \right) \\
&\leq \frac{2n\lambda_n\eta^2}{m^2d^2 } \norm{X - U(k)}_F^2 \left( \norm{W(k+1)}^2\norm{W(k)}^2 + \norm{A(k)}^4 \right) \\
&\leq \frac{64n\lambda_n\eta^2}{d^2 } \norm{X - U(k)}_F^2
\end{align*}
with probability at least $1 - 3\exp(-m)$, since $\norm{W(k+1)} \leq 2\norm{W(0)}$ and $\norm{A(k+1)} \leq 2\norm{A(0)}$. Therefore, with the same probability,
\[
C_3 \leq \frac{16\eta^2}{d} \sqrt{n\lambda_n}\norm{X - U(k)}_F^2.
\]
\qedhere
\begin{Claim}
\label{clmC4_joint}
Let $C_4 = -\frac{2\eta}{d} \mathrm{vec}( X - U(k) )^\top v_3$. We have
$$C_4 \leq \frac{8\eta^2}{d} \sqrt{n\lambda_n^2} \norm{X - U(k)}^2_F \norm{X - U(0)}_F$$ with probability at least $1-2\exp(-m)$.
\end{Claim}
\proof
We have $\mathrm{vec}( X - U(k) )^\top v_3 \leq \norm{v_3}\norm{X - U(k)}_F$. We want to bound $\norm{v_3}$. Let $D'_i = \mathrm{diag}(\mathbbm{1}[ w_1(k)^\top x_i \geq 0], \dots, \mathbbm{1}[ w_m(k)^\top x_i \geq 0])$
\begin{align*}
\norm{v_3}^2 &= \sum_{i=1}^n \norm{v_{3, i}}^2 \\
&= \sum_{i=1}^n \norm[\Big]{\frac{\eta^2}{\sqrt{md}} \sum_{r \in S_i} (\nabla_{a_r} L) (\nabla_{w_r} L)^\top x_i\mathbbm{1}[ w_r(k)^\top x_i \geq 0]}^2 \\
&\leq \frac{\eta^4}{md} \norm[\Big]{(\nabla_{A} L) ((\nabla_{W} L) D'_i)^\top x_i}^2 \\
&= \frac{\eta^4n}{md} \norm{\nabla_{A} L}_F^2 \norm{ \nabla_{W} L }_F^2 \\
&\leq \frac{\eta^4n\lambda_n^2}{m^2d^2} \norm{X - U(k)}_F^4 \norm{A(k)}^2\norm{W(k)}^2 \\
&\leq \frac{16\eta^4 n\lambda_n^2 }{d^2} \norm{X - U(k)}_F ^4
\end{align*}
with probability at least $1-2\exp(-m)$. Therefore,
\begin{align*}
C_4 \leq \frac{8\eta^2 \sqrt{n\lambda_n^2} }{d} \norm{X - U(k)}_F ^3 \leq \frac{8\eta^2 \sqrt{n\lambda_n^2} }{d} \norm{X - U(k)}_F^2 \norm{X - U(0)}_F
\end{align*}
since $\norm{X - U(k)}_F \le \norm{X - U(0)}_F$ by the induction hypothesis.
\qedhere
\begin{Claim}
\label{clmC5_joint}
Let $C_5 = \norm{ U (k+1) - U(k) }_F^2$. Then we have
\begin{align*}
C_5 \leq \frac{64 \eta^2 \lambda^2_n}{md}\norm{X - U(k)}_F^2.
\end{align*}
with probability at least $1 - 3\exp(-m)$.
\end{Claim}
\proof
We bound this by re-iterating the proof of Claim \ref{clmC3_joint}:
\begin{align*}
\norm{ U(k+1) - U(k) }^2_F &= \frac{1}{m^2}\norm{ A(k+1)\phi(W(k+1)^\top X) - A(k) \phi(W(k)^\top X) }^2_F \\
&\leq \frac{2\eta^2}{ m^2 } \left( \norm[\Big]{(\nabla_AL) \phi( W(k+1)^\top X)}^2 + \norm{A(k)}^2\norm[\Big]{(\nabla_WL)^\top X}^2 \right) \\
&\leq \frac{2\eta^2 \norm{X}^2}{m^2} \left( \norm{\nabla_AL}_F^2 \norm{W(k+1)}^2 + \norm{A(k)}^2 \norm{\nabla_WL}_F^2 \right) \\
&\leq \frac{2\eta^2 \lambda^2_n}{m^3d} \norm{X - U(k)}_F^2 \left( \norm{W(k+1)}^2\norm{W(k)}^2 + \norm{A(k)}^4 \right) \\
&\leq \frac{64\eta^2 \lambda_n^2}{md } \norm{X - U(k)}_F^2
\end{align*}
with probability at least $1-3\exp(-m)$.
\qedhere
\subsection{Linearized Autoencoders}
\label{sec:asymptotic}
While the NTK allows us to analyze the gradient dynamics of autoencoders, it does not provide a straightforward characterization of the reconstruction given any new input. This makes it difficult to reason about the inductive bias of the over-parameterization and gradient descent for autoencoders, which were empirically studied in \citep{zhang2019identity, memorization_ae}. Here, we theoretically justify these results by using linearization and an infinite-width approximation based on the result of \citet{lee2019wide}.
For the autoencoder $f(\theta, x)$, we denote by $\theta(t)$ the parameter vector at time $t$ and by $\theta(0)$ its initial value. Let us simplify the notation by denoting $f_t(x) = f(\theta(t), x)$ and $f_t(X) =[f_t(x_1), \dots, f_t(x_n)]$. Recall the training objective:
\[
L(\theta) = \frac{1}{2}\sum_{i=1}^n\norm{x_i -
f(\theta, x_i)}^2,
\]
and the gradient flow characterization of the training dynamics:
\begin{equation}
\frac{\mathrm{d}\theta(t)}{\mathrm{d}t} = - \nabla_{\theta}L(\theta(t))
\end{equation}
Consider the following linearized autoencoder via the first order Taylor expansion of $f_t(x)$ around $\theta(0)$:
\begin{align*}
f_t^{\mathrm{lin}}(x) \triangleq f_0(x) + \frac{\partial f_0(x)}{\partial \theta} \cdot \omega(t).
\end{align*}
Here, $\omega(t) = \theta(t) - \theta(0)$ is the parameter movement from its initialization. The first term $f_0(x)$, the initial reconstruction of $x$, remains unchanged during training, whereas the second term captures the dependence on the parameters, whose dynamics are governed by:
\begin{align*}
\dot{\omega}(t) &= \sum_{i=1}^n \left( \frac{\partial f_0(x_i)}{\partial \theta} \right)^\top (x_i - f_t^{\mathrm{lin}}(x_i)), \numberthis \label{eqnLinearizedodODE1} \\
\dot{f}_t^{\mathrm{lin}}(x) &= \sum_{i=1}^n \frac{\partial f_0(x)}{\partial \theta} \left( \frac{\partial f_0(x_i)}{\partial \theta} \right)^\top (x_i - f_t^{\mathrm{lin}}(x_i))
\\ &= K_0(x, X) \mathrm{vec}(X - f_t^{\mathrm{lin}}(X)). \numberthis \label{eqnLinearizedodODE2}
\end{align*}
where we denote
\begin{align*}
\nabla_\theta f_0(X)^\top &\triangleq \left[\left( \frac{\partial f_0(x_1)}{\partial \theta} \right)^\top, \dots, \left( \frac{\partial f_0(x_n)}{\partial \theta} \right)^\top \right], \\
K_0(x, X) &\triangleq \frac{\partial f_0(x)}{\partial \theta} \nabla_\theta f_0(X)^\top \in \mathbb{R}^{d\times nd}, \\
\mathcal{K}_0 &\triangleq \nabla_\theta f_0(X) \nabla_\theta f_0(X)^\top \in \mathbb{R}^{nd\times nd}.
\end{align*}
The last quantity is known as the neural tangent kernel matrix evaluated at $\theta(0)$, which is presented in the earlier section. Following from \citet{lee2019wide}, we have the closed form solutions for the ODEs in \eqref{eqnLinearizedodODE1} and \eqref{eqnLinearizedodODE2} as follows:
\begin{align}
\label{eqnLinearizedODESol}
\omega(t) &= \nabla_\theta f_0(X)^\top \mathcal{K}_0^{-1}(I-e^{- \mathcal{K}_0 t}) \mathrm{vec}(X- f_0^{\mathrm{lin}}(X)), \\
\mathrm{vec}(f_t^{\mathrm{lin}}(X)) &= (I-e^{-\mathcal{K}_0 t}) \mathrm{vec}(X) + e^{- \mathcal{K}_0 t} \mathrm{vec}(f_0(X)).
\end{align}
Moreover, given any new input $x$, the linearized output is $f_t^{\mathrm{lin}}(x) = \mu_t(x) + \gamma_t(x)$ where the signal and noise terms are given by
\begin{align}
\mu_t(x) &= K_0(x, X)\mathcal{K}_0^{-1}(I-e^{- \mathcal{K}_0 t}) \mathrm{vec}(X), \label{eqnLinearizePredSig} \\
\gamma_t(x) &= f_0(x) - K_0(x, X)\mathcal{K}_0^{-1}(I-e^{- \mathcal{K}_0 t}) \mathrm{vec}(f_0(X)) \label{eqnLinearizePredNoise}.
\end{align}
These equations characterize the dynamics of reconstruction (up to scaling) for the linearized network. Now, we establish the connection between the infinitely wide autoencoder and its linearized version, and prove Theorem \ref{thmInformalBias}.
\proof[Proof of Theorem \ref{thmInformalBias}] We simply invoke Theorem 2.1 in \citep{lee2019wide} for the autoencoder case. Denote by $K^{\infty} = \mathbb{E}_{W(0), A(0)}[K(0)]$ the neural tangent kernel of the two-layer autoencoder. Assume $\lambda_{\min}(K^{\infty}) >0$ and let $\eta_{\mathrm{critical}} \triangleq 2(\lambda_{\max}(K^{\infty}) + \lambda_{\min}(K^{\infty}))^{-1}$. \citet{lee2019wide} shows that under gradient descent with learning rate $\eta < \eta_{\mathrm{critical}}$, for every $x \in \mathbb{R}^d$ such that $\norm{x} \le 1$, as the width $m \rightarrow \infty$, the autoencoder $f_t(x)$ converges to $f_t^{\mathrm{lin}}(x)$ given by Equation \eqref{eqnLinearizePredSig} and Equation \eqref{eqnLinearizePredNoise}.
\qedhere
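As an illustration only, once the Jacobian of $f_0$ is available, the closed-form expressions \eqref{eqnLinearizePredSig} and \eqref{eqnLinearizePredNoise} can be evaluated directly. The following NumPy/SciPy sketch (with hypothetical array shapes) does exactly that.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def linearized_prediction(jac_X, jac_x, f0_X, f0_x, X_vec, t):
    # jac_X : (n*d, p) Jacobian of f_0 at the training inputs (rows stacked)
    # jac_x : (d, p)   Jacobian of f_0 at the test input x
    # f0_X  : (n*d,)   vec of the initial reconstructions of the training data
    # f0_x  : (d,)     initial reconstruction of x
    # X_vec : (n*d,)   vec of the training data itself
    K0 = jac_X @ jac_X.T                   # NTK matrix at t = 0, assumed invertible
    K0_xX = jac_x @ jac_X.T                # kernel between x and the training data
    decay = np.eye(K0.shape[0]) - expm(-K0 * t)
    mu = K0_xX @ np.linalg.solve(K0, decay @ X_vec)            # signal term mu_t(x)
    gamma = f0_x - K0_xX @ np.linalg.solve(K0, decay @ f0_X)   # noise term gamma_t(x)
    return mu + gamma
\end{verbatim}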
\section{Inductive Biases of Over-parameterized Autoencoders}
In principle, the training dynamics of over-parameterized autoencoders are similar to those of supervised networks. However, the generalization properties or inductive biases of the over-parameterization are different and underexplored. In this section, we rigorously analyze the observations in \citep{zhang2019identity, memorization_ae} using the results we have developed.
\subsection{One-sample training}
This training setting was exclusively studied in \cite{zhang2019identity} with interesting insights on the memorization phenomenon and the role of the depth and width. They were able to give some theoretical evidence for their observation in a simple one-layer linear case. Using linearization, we generalize this result to non-linear networks. We particularly focus on the two-layer architecture, but the results can be extended to networks of any depth. Although our result is asymptotic, \citet{lee2019wide} showed that networks with finite, large width exhibit the same inductive bias.
Suppose we have access to only one sample $x$ or the training data $X = x$. For a test input $x'$, $f_t^{\mathrm{lin}}(x') = \mu_t(x') + \gamma_t(x')$ where
\begin{align}
\mu_t(x') &= K_0(x', x)\mathcal{K}_0^{-1}(I-e^{-\mathcal{K}_0 t}) x, \\
\gamma_t(x') &= f_0(x') - K_0(x', x)\mathcal{K}_0^{-1}(I-e^{- \mathcal{K}_0 t}) f_0(x).
\end{align}
As long as the learning rate for gradient descent is sufficiently small, the autoencoder output $f_t(x') \rightarrow f_t^{\mathrm{lin}}(x')$ as $m \rightarrow \infty$. In the infinite-width limit, the neural tangent kernel at $t=0$ converges to:
\begin{align*}
\mathcal{K}_0 &= \frac{1}{md} \sum_{r=1}^m \mathbbm{1}[w_r(0)^\top x \ge 0] a_r(0)a_r(0)^\top + \phi(w_r(0)^\top x)^2 I \\
&\rightarrow \frac{1}{d}\mathbb{E}[\mathbbm{1}[w_r(0)^\top x \ge 0] a_r(0)a_r(0)^\top + \phi(w_r(0)^\top x)^2 I] = \frac{I}{d},
\end{align*}
since $w_r(0), a_r(0)$ are independent and $w_r(0)^\top x \sim \mathcal{N}(0, 1)$. Therefore,
the reconstruction is governed by the similarity between $x'$ and $x$ via the kernel function. Specifically,
\begin{align*}
\mu_t(x') &\rightarrow d \cdot K_0(x', x)(1- e^{-t/d}) x, \\
\gamma_t(x') &\rightarrow f_0(x') - d \cdot K_0(x', x) (1- e^{- t/d}) f_0(x).
\end{align*}
Moreover, as $m \rightarrow \infty$,
\begin{align*}
K_0(x', x) &= \frac{\partial f_0(x')}{\partial \theta} \left( \frac{\partial f_0(x)}{\partial \theta} \right)^\top \\
&= \frac{1}{md}\sum_{r=1}^m 1[w_r(0)^\top x' \ge 0, w_r(0)^\top x \ge 0] x'^\top x a_r(0)a_r(0)^\top + \phi(w_r(0)^\top x')\phi(w_r(0)^\top x) I \\
&\rightarrow \frac{1}{d}\mathbb{E}_w[\mathbbm{1}[w^\top x' \ge 0, w^\top x \ge 0]]\, x'^\top x \cdot I + \frac{1}{d}\mathbb{E}_w[\phi(w^\top x')\phi(w^\top x)]\, I \\
&= \langle x', x\rangle \frac{\pi - \arccos(\langle x', x\rangle)}{\pi d}\cdot I + \frac{1}{2\pi d}\sqrt{1 - \langle x', x\rangle^2} I.
\end{align*}
When $x'$ is close to $x$, we have $K_0(x', x) \approx I/d$, so the signal term $\mu_t(x') \approx d \cdot K_0(x', x)\, x \approx x$ dominates the zero-mean noise term $\gamma_t(x') \approx f_0(x') - f_0(x)$, and the reconstruction is close to $x$; this explains the memorization. When $x'$ is far from $x$, $\mu_t(x') \approx 0$ while $\gamma_t(x')$ is random, so the reconstruction is governed by random noise. See \citep{zhang2019identity} for more details on the empirical evidence.
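For reference, the limiting one-sample reconstruction above can be written out in a few lines (assuming unit-norm $x$ and $x'$, and treating the initial map $f_0$ as a user-supplied function; all names below are hypothetical).
\begin{verbatim}
import numpy as np

def limit_kernel(xp, x, d):
    # scalar c with K_0(x', x) -> c * I as m -> infinity, for unit-norm inputs
    rho = float(np.clip(xp @ x, -1.0, 1.0))
    return rho * (np.pi - np.arccos(rho)) / (np.pi * d) \
           + np.sqrt(1.0 - rho ** 2) / (2.0 * np.pi * d)

def one_sample_reconstruction(xp, x, t, f0):
    # signal/noise split of f_t^lin(x') when training on the single sample x
    d = x.shape[0]
    scale = d * limit_kernel(xp, x, d) * (1.0 - np.exp(-t / d))
    mu = scale * x                     # signal: pulled towards the training point
    gamma = f0(xp) - scale * f0(x)     # noise: depends on the random initialization
    return mu + gamma
\end{verbatim}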
\subsection{Multiple-sample training}
For the training with many samples, \citet{memorization_ae} showed that overparameterized autoencoders exhibit memorization by learning functions that
concentrate near the training examples. They proved that single-layer autoencoders project data onto the span of the training examples. We provide another intuition based on the reconstruction of the linearized networks. For an arbitrary input $x'$,
\begin{align*}
\mu_t(x') &= K_0(x', X)\mathcal{K}_0^{-1}(I-e^{- \mathcal{K}_0 t}) \mathrm{vec}(X),\\
\gamma_t(x') &= f_0(x') - K_0(x', X)\mathcal{K}_0^{-1}(I-e^{- \mathcal{K}_0 t}) \mathrm{vec}(f_0(X)).
\end{align*}
The signal part of the reconstruction is a linear combination of training samples weighted by the kernel $K_0(x', x_i)$ and the eigenvalues of the kernel matrix $\mathcal{K}_0$.
Therefore, as $m \rightarrow \infty$ and $t$ is sufficiently large,
\begin{align*}
\mu_t(x') &\rightarrow \sum_{i=1}^n d\cdot K_0(x', x_i) x_i,\\
\gamma_t(x') &\rightarrow f_0(x') - \sum_{i=1}^n d\cdot K_0(x', x_i) f_0(x_i).
\end{align*}
The closer the new test input $x'$ is to the span of training data $X$, the more its reconstruction concentrates around these seen points. This coincides with the observation about ``memorization'' by \citet{memorization_ae}.
\section{Introduction}
\label{sec:intro}
Deep neural networks have achieved great success in a variety of applications such as image and speech recognition, natural language processing, and gaming AI. Remarkably, neural networks that achieve the state-of-the-art performance
in each of these tasks are all massively over-parameterized, with far more weight parameters than the sample size of training data or the input dimension. Such networks can gain impressive performance in terms of both (near) zero training error and high generalization capacity,
which seemingly contradicts the conventional wisdom of
bias-variance tradeoffs.
Even more surprising is the fact that (stochastic) gradient descent or its variants can effectively find global and generalizable solutions.
Explaining this phenomenon has arguably become one of the fundamental tasks for demystifying deep learning.
As a consequence, there has been growing interest in
understanding the power of the gradient descent for over-parameterized networks. Over the past year, a specific line of research \citep{li2018learning, allen2018convergence, zou2018stochastic, du2018_gradient, oymak2019moderate, arora2019fine, zou2019improved} has led to exciting theoretical progress. In particular,
the seminal work of \citet{du2018_gradient}
shows that gradient descent on two-layer neural networks with $\mathrm{ReLU}$ activation provably converges to some global minimum at a geometric rate, provided a sufficiently large number of neurons that is of polynomial order in the sample size.
The key idea that leads to this result is the following: once the network is sufficiently wide, gradient descent does not change the individual weights much, but results in a non-negligible change in the network output that exponentially reduces the training loss with iteration count. This line of thinking has been subsequently refined and linked to the stability of a special kernel, called the \emph{neural tangent kernel} (NTK) \citep{jacot2018ntk}. \citet{arora2019fine} showed that the minimum eigenvalue of the limiting kernel governs both the algorithmic convergence and the generalization performance.
Despite these exciting results, the majority of existing work has focused on
\emph{supervised} settings and hence is limited to tasks such as classification
and regression. In contrast, the role of over-parameterization in the
\emph{unsupervised} setting (for tasks such as reconstruction, denoising, and
visualization) has gained much less attention. An early related example in unsupervised learning can be traced back to learning over-complete dictionaries with sparse codes \citep{olshausen97_sc}. Another example is the problem of learning mixtures of $k$ well-separated spherical Gaussians, where \cite{dasgupta2007_mg} showed that initializing with $O(k\log k)$ centers enables expectation-maximization to correctly recover the $k$ components.
Interesting (but limited) progress has been made towards understanding over-parameterization for autoencoders, a popular class of unsupervised models
based on neural networks.
\cite{zhang2019identity} provided an extensive study of training highly over-parameterized autoencoders using a \emph{single} sample. They empirically showed that when learned by gradient descent, autoencoders with different architectures can exhibit two inductive biases: memorization (i.e., learning the constant function) and generalization (i.e., learning the identity mapping) depending on the non-linearity and the network depth. \citet{memorization_ae} showed that over-parameterized autoencoder learning is empirically biased towards functions that concentrate around the training samples and hence exhibits memorization. \citet{buhai2019autoenc} empirically showed that over-parameterization benefits learning in recovering generative models with single-layer latent variables (including the sparse coding model).
However, there has been a lack of theoretical evidence that supports these observations. \citet{zhang2019identity} were able to prove a result for a simple one-layer linear case while \citet{memorization_ae} also proved the concentration of outputs near the training examples for a single-layer network under a data-restrictive setting. Moreover, none of the above papers have rigorously studied the training \emph{dynamics} of autoencoder models. The loss surface of autoencoder training was first characterized in \citep{tran17}. Subsequently, \citet{nguyen2019_ae} proved that under-parameterized (and suitably initialized) autoencoders performed (approximate) proper parameter learning in the regime of asymptotically many samples, building upon techniques in provable dictionary learning; cf.~\citep{arora15_neural,nguyen18_double}.
\paragraph{Our contributions.} In this paper, we provide the first rigorous analysis of the inductive bias of gradient descent and the gradient dynamics of over-parameterized, shallow (two-layer) autoencoders. To examine the inductive bias, we use an infinite-width approximation to derive the output reconstruction in terms of its input. For the gradient dynamics, we study different training schemes and establish upper bounds on the level of over-parameterization under which (standard) gradient descent, starting from randomly initialized weights, can linearly converge to global optima provided the training dataset obeys some mild assumptions. Our specific contributions are as follows:
\begin{enumerate}
\item First, we build upon the results by \citet{lee2019wide} to characterize the evolution of autoencoder output via linearization and infinite-width approximation. Then, we establish the inductive bias of infinite-width autoencoders trained with gradient descent and provide insights into the memorization phenomena. While our analysis is asymptotic with respect to the network width, empirical results in \citep{lee2019wide, zhang2019identity} strongly suggest that similar phenomena are exhibited at finite widths as well.
\item Next, we extend the results by \citet{du2018_gradient} to the setting of over-parameterized two-layer autoencoders. This involves developing a version of the NTK for multiple outputs, which can be done in a straightforward manner by lifting the kernel matrix of a single output into a higher-dimensional space via Kronecker products.
\item Next, we study the gradient dynamics of the \emph{weakly-trained}\footnote{This distinction of weak- vs.\ joint-training has been introduced in earlier work such as \cite{arora2019cntk}.} case where the training is done only over the weights in the encoder layer. We obtain a bound on the number of hidden neurons (i.e., level of over-parameterization) required to achieve linear convergence of gradient descent, starting from random initialization, to global optimality.
\item Next, we study the gradient dynamics of the \emph{jointly-trained} case where
both the encoder and decoder are trained with gradient descent. We obtain a bound analogous to the weakly-trained case for the level of over-parameterization required for global convergence.
Interestingly, our bound for over-parameterization in the jointly trained case is significantly better compared with the {weakly-trained} case.
\item Finally, we study a special family of autoencoders for which
the encoder and decoder are \emph{weight-tied}, i.e., the two layers share the same weights (this is a common architectural choice in practical applications).
For the weight-tied case, we show that even without any training, $O(d/\epsilon)$ hidden units are able to achieve $\epsilon$-test error where $d$ is the input dimension.
Indeed, as the number of hidden units increases, the autoencoder approximately recovers an identity map.
Since the identity map is not particularly useful in representation learning, we speculate that training of weight-tied autoencoders under over-parameterization may lead to unexpected degeneracies.
\end{enumerate}
\paragraph{Techniques.} Our analysis extends the techniques of \citet{lee2019wide} and \citet{du2018_gradient} for analyzing the global convergence of gradient descent in overparameterized neural networks using the neural tangent kernel. The special case of autoencoder networks is somewhat more complicated since we now have to deal with multiple outputs, but the use of Kronecker products enables us to derive concise NTK's for our setting.
The work of \citet{du2018_gradient} and subsequent papers study the weakly-trained case for the supervised setting where the second layer is fixed. We derive analogous bounds for the autoencoder setting. Moreover, we derive a new result for the jointly-trained case and obtain a significantly improved bound on the requisite level of over-parameterization. Our result is based on three key insights:
\begin{enumerate}[label=(\roman*)]
\item the linearization enables us to derive the autoencoder's reconstruction for a given input as a linear combination of the training samples weighted by kernel scores;
\item thanks to the linear decoder, the corresponding kernel is \emph{smooth}, and the improved smoothness allows gradient descent to move a greater amount from the initial point; and
\item with this improved smoothness, we can derive a sharper characterization of the descent trajectory length in Frobenius norm instead of column-wise Euclidean norm.
\end{enumerate}
\section{Jointly-trained Autoencoders}
In the previous section, we analyzed the gradient dynamics of a two-layer autoencoder under the {weakly-trained} regime. We now analyze the jointly-trained regime where the loss is optimized over both sets of layer weights. For consistency of our presentation, we reuse some key notations in this section; for example, $K(t), U(t)$ have the same interpretation as before but possess a different closed form.
\subsection{Gradient flow}
\label{sec:gradflow_joint}
The loss function we consider for this \emph{jointly-trained} regime is the same:
\begin{equation}
\label{eqnEmpLoss_joint}
L(W, A) = \frac{1}{2}\sum_{i=1}^n\norm{x_i - \frac{1}{\sqrt{md}}A\phi(W^\top x_i)}^2.
\end{equation}
The difference is that the optimization is now taken over \emph{both} weights $W$ and $A$. To make the comparison easier, the matrices $W$ and $A$ are randomly initialized in the same way such that
\[
w_{ij}(0) \sim \mathcal{N}(0, 1), ~ a_{ij}(0) \sim \mathrm{Unif}(\{- 1, 1\})
\]
are drawn independently for each pair $(i, j)$. $W$ and $A$ are then updated using gradient descent with step size $\eta$:
\begin{align}
W(k+1) &= W(k) - \eta \nabla_W L(W(k), A(k)), ~k=0, 1, \dots \label{eqnUpdateWJoint} \\
A(k+1) &= A(k) - \eta \nabla_A L(W(k), A(k)), ~k=0, 1, \dots \label{eqnUpdateAJoint}
\end{align}
Similar to the previous case, we derive the gradients of $L(W, A)$ with respect to the columns $w_r$ of $W$ and $a_r$ of $A$. The gradient $\nabla_{w_r}L(W, A)$ is the same as in \eqref{eqnGradLossOverWr_Case1} in Section \ref{sec:gradflow_case1}, whereas $\nabla_{a_r}L(W, A)$ is standard:
\begin{align}
\nabla_{w_r}L(W, A)
&= - \frac{1}{\sqrt{md}} \sum_{i=1}^n \mathbbm{1}[w_r^Tx_i \ge 0]x_ia^\top_r (x_{i} - u_{i}), \numberthis
\label{eqnGradLossOverWr_joint} \\
\nabla_{a_r}L(W, A) &= -\frac{1}{\sqrt{md}}\sum_{i=1}^n\phi(w^\top_rx_i) (x_{i} - u_{i}). \numberthis
\label{eqnGradLossOverAr_joint}
\end{align}
Consider two ODEs, one for each weight vector over the continuous time $t$:
\begin{align}
\frac{\mathrm{d} w_r(t)}{\mathrm{d}t} &= -\nabla_{w_r}L( W(t), A(t) ), \label{eqnGradODEWr_joint} \\
\frac{\mathrm{d} a_r(t)}{\mathrm{d}t} &= -\nabla_{a_r}L( W(t), A(t) ). \label{eqnGradODEAr_joint}
\end{align}
Using~\eqref{eqnGradLossOverWr_joint}, \eqref{eqnGradLossOverAr_joint}, \eqref{eqnGradODEWr_joint} and \eqref{eqnGradODEAr_joint}, the continuous-time dynamics of the predicted output, $u_i(t)$, for sample $x_i$ is given by:
\begin{align*}
\frac{\mathrm{d} u_{i}(t)}{\mathrm{d}t} &= \frac{\mathrm{d}}{\mathrm{d}t} \left( \frac{1}{\sqrt{md}}\sum_{r=1}^m a_r\phi(w^\top_rx_{i}) \right) \\
&= \frac{1}{\sqrt{md}} \sum_{r=1}^m \left( J_{w_r} (a_r\phi(w^\top_rx_{i}))\frac{dw_r}{dt} + J_{a_r} (a_r\phi(w^\top_rx_{i}))\frac{da_r}{dt} \right) \\
&= \frac{1}{\sqrt{md}} \sum_{r=1}^m \left( \mathbbm{1}[w_r^\top x_i \ge 0]\, a_r x^\top_i(-\nabla_{w_r}L(W, A) ) + \phi(w^\top_rx_i) (-\nabla_{a_r}L(W, A) ) \right) \\
&= \frac{1}{md} \sum_{j=1}^n \sum_{r=1}^m \left( \mathbbm{1}[w_r^\top x_i \ge 0, w_r^\top x_j \ge 0]\, x^\top_ix_j\, a_ra^\top_r + \phi(w^\top_rx_i)\phi(w^\top_rx_j)\, I \right) (x_{j} - u_{j}).
\end{align*}
In these expressions, we suppress the dependence of the weight vectors on time $t$ and simply write them as $w_r$ and $a_r$. Vectorizing $\frac{dU(t)}{dt}$, we arrive at the key equation that characterizes the dynamics of $U(t)$:
\begin{align*}
\label{eqnPredDynamics_joint}
\frac{\mathrm{d} \mathrm{vec}(U(t))}{\mathrm{d}t}
&= \frac{1}{d} \Bigl( G(t) + H(t) \Bigr) \mathrm{vec}(X - U(t)). \numberthis
\end{align*}
In the above equation, $G(t)$ is a size-$nd\times nd$ matrix of the form:
\begin{align}
G(t) = \frac{1}{m}\sum_{r=1}^m \widetilde{X}_r(t)^\top \widetilde{X}_r(t) \otimes a_r(t)a_r(t)^\top,
\label{eqnGt_joint}
\end{align}
where
$
\widetilde{X}_r(t) = \Bigl[\mathbbm{1}[w_r(t)^\top x_{1} \ge 0]x_{1}, \dots, \mathbbm{1}[w_r(t)^\top x_{n} \ge 0]x_{n}\Bigr],
$
while $H(t)$ is a size-$nd\times nd$ matrix:
\begin{align}
H(t) = \frac{1}{m}\sum_{r=1}^m\phi(X^\top w_r(t))\phi(w_r(t)^\top X) \otimes I.
\label{eqnHt_joint}
\end{align}
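As a sanity check on these definitions, the following sketch (ours, intended only for small $n$ and $d$ since the matrices have size $nd \times nd$) assembles $G(t)$ and $H(t)$ directly from \eqref{eqnGt_joint} and \eqref{eqnHt_joint} using Kronecker products:
\begin{verbatim}
import numpy as np

def kernels_G_H(W, A, X):
    """Return G(t), H(t) of size (nd, nd) as in eqnGt_joint / eqnHt_joint."""
    d, m = W.shape
    n = X.shape[1]
    G = np.zeros((n * d, n * d))
    H = np.zeros((n * d, n * d))
    I_d = np.eye(d)
    for r in range(m):
        pre = X.T @ W[:, r]                 # w_r^T x_i for all i
        Xr = X * (pre >= 0)                 # columns of X zeroed when inactive
        act = np.maximum(pre, 0.0)          # phi(w_r^T x_i)
        G += np.kron(Xr.T @ Xr, np.outer(A[:, r], A[:, r]))
        H += np.kron(np.outer(act, act), I_d)
    return G / m, H / m
\end{verbatim}
For small examples this makes it easy to verify numerically that $\lambda_{\min}(H(0))$ stays bounded away from zero as $m$ grows, in line with Lemma \ref{lmMinEigenH0_joint} below.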
Let us emphasize again that $G(t)$ is precisely the kernel that governs the dynamics for the {weakly-trained} case. On the other hand, $H(t)$ is a Kronecker form of the Hessian of the loss function derived with respect to $A$, using the features produced at the output of the $\mathrm{ReLU}$ activations.
As shown in Section \ref{sec:kernels}, assuming randomness and independence of $W(0)$ and $A(0)$, we can prove that as $m \rightarrow \infty$, $H(0)$ and $G(0)$ converge to the corresponding NTKs whose minimum eigenvalues are assumed to be positive. More specifically, we have
\begin{align*}
G^\infty &= \mathbb{E}_{W(0), A(0)}[G(0)] \\ &= \mathbb{E}_{w(0), a(0)}[\widetilde{X}(0)^\top \widetilde{X}(0) \otimes a(0) a(0)^\top] \\ &= \mathbb{E}_{w}[\widetilde{X}^\top \widetilde{X}] \otimes I. \numberthis
\label{eqnGinfty_joint}
\end{align*}
and
\begin{align*}
H^\infty &= \mathbb{E}_{W(0), A(0)}[H(0)] \\ &= \mathbb{E}_{w(0), a(0)}[\phi(X^\top w(0))\phi(w(0)^\top X)] \otimes I
\numberthis
\label{eqnHinfty_joint}
\end{align*}
Denote the time-dependent kernel $ K(t) = G(t) + H(t) $. Since both $G(t)$ and $H(t)$ are positive semi-definite, we only focus on $H(t)$ for reasons that will become clear shortly.
Since $G(t)$ is also positive definite with high probability (Section \ref{sec:gradflow_case1}), the flow convergence can also be boosted by the positive definiteness of $G^\infty$. By Assumption~\ref{astMinEigen},
\[
\lambda_{\min}(K^\infty) \ge \lambda_{\min}(H^\infty) \ge \lambda_0 >0.
\]
Since $G(0)$ is positive semi-definite, in order to bound the minimum eigenvalue of $K(0)$, all we need is to bound that of $H(0)$. Importantly, we observe that the smoothness of the kernel $H(t)$, viewed as a function of the deviation of the weights from the initialization, is much better. This allows the weights to change by a larger amount than when using $G(t)$ alone, and enables us to significantly reduce the number of parameters required for gradient descent to reach a global optimum.
Our main result for gradient flow of the {jointly-trained} autoencoder is given by:
\begin{Theorem}[Linear convergence of gradient flow, jointly-trained regime]
\label{thmGradFlow_joint}
Suppose Assumptions~\ref{astUnitNorm} and \ref{astMinEigen} hold. The initial weights are independently drawn such that $w_r \sim \mathcal{N}(0, I)$ and $a_r \sim \mathrm{Unif}(\{-1,1 \}^d)$ for all $r \in [m]$. If $m \ge C\frac{n\lambda_n^3d}{\lambda_0^4\delta^2}$ for some large enough constant $C$, then with probability at least $1-\delta$,
\[
\norm{X - U(t)}_F^2 \leq \exp\bigl(-\frac{\lambda_0 t}{d}\bigr)\norm{X - U(0)}_F^2.
\]
\end{Theorem}
\begin{Remark}
\normalfont We initialize the second-layer weights $A$ with independent Rademacher entries. This is for convenience of analysis because such $A$ has constant-norm columns. However, similar results should easily follow for initialization with more practical schemes (for example, i.i.d.~Gaussians).
\end{Remark}
We will first state and prove a few auxiliary results in Lemmas \ref{lmMinEigenH0_joint}, \ref{lmBoundHDiff_joint}, and \ref{lmMinEigennMovementAnyTime_joint} and then use them to prove Theorem~\ref{thmGradFlow_joint}.
\begin{Lemma}
\label{lmMinEigenH0_joint}
For any $\delta \in (0, \frac{\lambda_0}{12d\lambda_n^2})$, if $m \ge C\frac{nd \lambda_n\log^2(nd/\delta)}{\lambda_0^2}$ for some large enough constant $C$, then with probability at least $1 - 1/(2nd)^{2\log nd} - m\delta$,
one obtains $\norm{H(0) - H^\infty} \leq {\lambda_0}/{4}$ and $\lambda_{\min}(H(0)) \geq 3\lambda_0/4$ .
\end{Lemma}
The proof of this Lemma is deferred to Appendix \ref{appdx:jointly_train}.
\begin{Lemma}
\label{lmBoundHDiff_joint}
Suppose $\norm{W(t) - W(0)}_F \leq R_w$.
Then,
\[
\norm{H(t) - H(0)} \leq \frac{\lambda_n}{m} (2\norm{W(0)} + R_w)R_w.
\]
Particularly,
if $\frac{\lambda_n}{m} (2\norm{W(0)} + R_w)R_w \le \frac{\lambda_0}{4}$, then $\norm{H(t) - H(0)} \leq \frac{\lambda_0}{4}$. Therefore, $\lambda_{\min} (K(t)) > \frac{\lambda_0}{2}$ if $\lambda_{\min}(H(0)) \ge \frac{3\lambda_0}{4}$.
\end{Lemma}
\begin{Remark}
\normalfont Let us compare with Lemma \ref{lmBoundKDiff_Case1}. Compared with the bound of order $O(n^2dR_w)$ on the kernel perturbation there, the spectral-norm bound on $H(t) - H(0)$ here is significantly better in two ways:
\begin{enumerate}[label=(\roman*)]
\item the bound scales with $1/\sqrt{m}$, which later determines the over-parameterization and
\item the movement is now characterized by the total $\norm{W(t) - W(0)}_F$. This is possible due to the smoothness of the $\mathrm{ReLU}$ activation, which is the reason why we focus on $H(t)$ instead of $G(t)$.
\end{enumerate}
\end{Remark}
\proof We apply the triangle inequality and use the Lipschitz property of the rectified linear unit to bound the difference. Recall that, up to the Kronecker factor $\otimes I$ (which does not affect spectral norms),
\[
H(t) = \frac{1}{m} \sum_{r=1}^m\phi(X^\top w_r(t))\phi(w_r(t)^\top X) = \frac{1}{m} \phi(X^\top W(t))\phi(W(t)^\top X).
\]
Then, we can upper bound the perturbation as follows:
\begin{align*}
\norm{H(t) - H(0)} &\leq \frac{1}{m}\norm[\big]{\phi(X^\top W(t))\phi(W(t)^\top X) - \phi(X^\top W(0))\phi({W(0)}^{\top} X)} \\
&\leq \frac{1}{m}\norm{\phi(X^\top W(t))}\norm{\phi(W(t)^\top X) - \phi(W(0)^\top X)} \\ &~~+ \frac{1}{m} \norm{\phi(X^\top W(t)) - \phi(X^\top W(0))}\norm{\phi({W(0)}^{\top} X)} \\
&\leq \frac{1}{m}\norm{X}^2\left(\norm{W(t)} + \norm{W(0)} \right)\norm{W(t) - W(0)}_F \\
&\leq \frac{\lambda_n}{m}\left(2\norm{W(0)} + \norm{W(t) - W(0)}\right)\norm{W(t) - W(0)}_F \\
&\leq \frac{\lambda_n}{m} (2\norm{W(0)} + R_w)R_w.
\end{align*}
In the third step, we use the fact that the $\mathrm{ReLU}$ function is 1-Lipschitz and $\norm{\phi(X^\top W(t))} \leq \norm{X}\norm{W(t)}$. The last step follows from $\norm{W(t) - W(0)}_F \le R_w$.
Using the condition and Weyl's inequality, one can easily show that $\lambda_{\min}(K(t)) \geq \lambda_{\min}(H(t)) > \lambda_0/2$.
\qedhere
We have proved that as long as the weight matrix $W(t)$ does not change much over $t$, the minimum eigenvalue of $K(t)$ stays positive. Next, we show that this implies exponential decay of the loss over time, and give a condition under which the weights do not change much.
\begin{Lemma}
Fix $t >0$. Suppose $\lambda_{\min}(K(s)) \ge \frac{\lambda_0}{2}$
for all $0 \le s < t$.
Then, for all $0 \le s < t$,
\[
\norm{X - U(s)}_F^2 \le \exp\Bigl(-\frac{\lambda_0s}{d}\Bigr) \norm{X - U(0)}_F^2.
\]
\label{lmConvergenceFlow_joint}
\end{Lemma}
\proof
Since $\lambda_{\min}(K(s)) \geq \frac{\lambda_0}{2}$ for all $0 \le s < t$, we have
\begin{align*}
\frac{\mathrm{d}}{\mathrm{d} s} \left( \norm{\mathrm{vec}(X - U(s))}_2^2 \right)
&= -2\mathrm{vec}(X - U(s))^\top \frac{K(s)}{d}\mathrm{vec}(X - U(s)) \nonumber \\
&\le -\frac{2}{d}\lambda_{\min}(K(s))\norm{\mathrm{vec}(X - U(s))}_2^2 \\
&\le -\frac{\lambda_0}{d}\norm{X - U(s)}_F^2, \numberthis
\end{align*}
since $\lambda_{\min}(K(s)) \ge \frac{\lambda_0}{2}$. Therefore,
\begin{align*}
\norm{X - U(s)}_F^2 = \norm{\mathrm{vec}(X - U(s))}^2 &\le \exp\Bigl(-\frac{\lambda_0s}{d} \Bigr) \norm{\mathrm{vec}(X - U(0))}^2 \\
&\le \exp\Bigl(-\frac{\lambda_0s}{d} \Bigr) \norm{X - U(0)}_F^2.
\end{align*}
\qedhere
\begin{Lemma}
Fix $t >0$. Suppose $\lambda_{\min}(K(s)) \ge \frac{\lambda_0}{2}$ and $\norm{A(s) - A(0)}_F \leq R_a$ for all $0 \le s < t$.
Then we have
\[
\norm{W(t) - W(0)}_F \le \frac{2\sqrt{d\lambda_n} (\norm{A(0)} + R_a)\norm{X - U(0)}_F }{\sqrt{m}\lambda_0} \triangleq R'_w.
\]
\label{lmMovementInTimeWr_joint}
\end{Lemma}
\proof
For $s \in [0, t)$, we have
\begin{align*}
\frac{\mathrm{d}}{\mathrm{d} s}w_r(s) &= -\nabla_{w_r}L(W(s), A(s) ) \\
&= \frac{1}{\sqrt{md}} \sum_{i=1}^n \mathbbm{1}[w_r^Tx_i \ge 0]x_ia_r(s)^\top (x_{i} - u_{i} (s) ) \\
&= \frac{1}{\sqrt{md}}\widetilde{X}_r(s) (X - U(s))^\top a_r(s).
\end{align*}
Then, one can bound the entire weight matrix as follows:
\begin{align*}
\norm[\bigg]{\frac{\mathrm{d}}{\mathrm{d} s}W(s)}_F
&\leq \frac{\norm{X}}{\sqrt{md}} \norm[\big]{ (X - U(s))^\top A(s) }_F \\
&\le \frac{\sqrt{\lambda_n}}{\sqrt{md}} \norm{X - U(s)}_F \norm{A(s)}\\
&\le \frac{\sqrt{\lambda_n} (\norm{A(0)} + R_a)}{\sqrt{md}} \exp\Bigl(-\frac{\lambda_0s}{2d} \Bigr)\norm{X - U(0)}_F.
\end{align*}
In the second step, we use the fact $\norm{CD}_F \leq \norm{C}_F\norm{D}$ for any matrices $C, D$, and $\norm{X}^2 = \lambda_n$. The last step follows from $\norm{A(s)} \leq \norm{A(0)} + R_a$ and Lemma \ref{lmConvergenceFlow_joint}. Integrating and using continuity, we have
\begin{align*}
\norm{W(t) - W(0)}_F &\le
\lim_{t' \rightarrow t} \int_{0}^{t'} \norm[\bigg]{\frac{\mathrm{d}}{\mathrm{d} s}W(s)}_F \, ds \\
&\le \lim_{t' \rightarrow t}\int_{0}^{t'} \frac{\sqrt{\lambda_n} (\norm{A(0)} + R_a) \exp\Bigl(-\frac{\lambda_0s}{2d} \Bigr)\norm{X - U(0)}_F}{\sqrt{md}}ds \\
&\le \frac{\sqrt{\lambda_n} (\norm{A(0)} + R_a) \norm{X - U(0)}_F}{\sqrt{md}} \lim_{t' \rightarrow t}\int_{0}^{t'} \exp\Bigl(-\frac{\lambda_0s}{2d} \Bigr)ds \\
&\le \frac{2\sqrt{d\lambda_n} (\norm{A(0)} + R_a) \norm{X - U(0)}_F}{\sqrt{m}\lambda_0} \triangleq R_w'.
\end{align*}
\qedhere
\begin{Lemma}
Fix $t >0$. Suppose $\lambda_{\min}(K(s)) \ge \frac{\lambda_0}{2}$ and $\norm{W(s) - W(0) }_F \leq R_w$ for all $0 \le s < t$,
then
\[
\norm{A(t) - A(0)}_F \le \frac{2\sqrt{d\lambda_n} (\norm{W(0)} + R_w) \norm{X - U(0)}_F}{\sqrt{m}\lambda_0} \triangleq R'_a.
\]
\label{lmMovementInTimeAr_joint}
\end{Lemma}
\proof
For $s \in [0, t)$, we use the gradient derived in \eqref{eqnGradLossOverAr_joint} and \eqref{eqnGradODEAr_joint} to obtain:
\begin{align*}
\frac{\mathrm{d}}{\mathrm{d} s}a_r(s) &= -\nabla_{a_r}L( W(s), A(s) ) \\
&= \frac{1}{\sqrt{md}}\sum_{i=1}^n\phi(w^\top_rx_i) (x_{i} - u_{i}(s)) \\
&= \frac{1}{\sqrt{md}} (X - U(s)) \phi(X^\top w_r(s))
\end{align*}
Then, one can write
\begin{align*}
\norm[\bigg]{\frac{\mathrm{d}}{\mathrm{d} s}A(s)}_F
&= \norm[\bigg]{\frac{1}{\sqrt{md}} (X - U(s)) \phi(X^\top W(s))}_F \\
&\le \frac{\sqrt{\lambda_n}}{\sqrt{md}} \norm{X - U(s)}_F\norm{W(s)}\\
&\le \frac{\sqrt{\lambda_n}(\norm{W(0)} + R_w) }{\sqrt{md}} \exp\Bigl(-\frac{\lambda_0s}{2d} \Bigr) \norm{X - U(0)}_F,
\end{align*}
where we use $\norm{X} \le \sqrt{\lambda_n}$, $\norm{W(s)} \leq \norm{W(0)} + R_w$. The
last step follows from Lemma \ref{lmConvergenceFlow_joint}. Now, we integrate out $s$:
\begin{align*}
\norm{A(t) - A(0)}_F \le \int_{0}^t \norm[\bigg]{\frac{\mathrm{d}}{\mathrm{d} s}A(s)}_F \, ds &\le \int_{0}^t \frac{\sqrt{\lambda_n}(\norm{W(0)} + R_w)\exp\Bigl(-\frac{\lambda_0s}{2d} \Bigr)\norm{X - U(0)}_F}{\sqrt{md}}ds \\
&\le \frac{2\sqrt{d\lambda_n}(\norm{W(0)} + R_w)\norm{X - U(0)}_F}{\sqrt{m} \lambda_0} = R'_a,
\end{align*}
which is what we need.
\qedhere
\begin{Lemma}
If $R'_w < R_w$ and $R'_a < R_a$, then for all $t \ge 0$, we have
\begin{itemize}
\item[(i)] $\lambda_{\min}(K(t)) \ge \frac{\lambda_0}{2}$; and for all $r \in [m]$, $\norm{W(t) - W(0) }_F \leq R'_w$, $\norm{A(t) - A(0) }_F \leq R'_a$;
\item[(ii)] If (i) holds, then $\norm{X - U(t)}_F^2 \le \exp\Bigl(-\frac{\lambda_0 t}{d}\Bigr) \norm{X - U(0)}_F^2.$
\end{itemize}
\label{lmMinEigennMovementAnyTime_joint}
\end{Lemma}
\proof
Suppose, for the sake of contradiction, that the set
\begin{equation}
\label{eqEventT}
\mathcal{T} = \left\{ t \ge 0: \lambda_{\min}(K(t)) \le \frac{\lambda_0}{2} \text{ {or} } \norm{W(t) - W(0) }_F > R'_w \text{ {or} } \norm{A(t) - A(0) }_F > R'_a \right\}.
\end{equation}
is non-empty. Therefore, $t_0\triangleq\inf\mathcal{T}$ exists. Using the same continuity argument as in Lemma \ref{lmMovementInTime_Case1}, one can verify that $t_0 > 0$. First, if $\lambda_{\min}(K(t_0)) \leq \frac{\lambda_0}{2}$, then by Lemma \ref{lmBoundHDiff_joint}, $\| W(t_0) - W(0) \|_F > R_w > R_w'$, which is a contradiction because it violates the minimality of $t_0$.
The other two cases are similar, so we prove only one of them. Suppose that
\[
\norm{W(t_0) - W(0) }_F > R'_w.
\]
The definitions of $t_0$ and $\mathcal{T}$ imply that for any $s \in [0, t_0)$,
$\lambda_{\min}(K(s)) \ge \frac{\lambda_0}{2}$ and $\norm{A(s) - A(0) }_F \leq R'_a$. Then, Lemma \ref{lmMovementInTimeWr_joint} leads to:
\[
\norm{W(t_0) - W(0) }_F \le R'_w,
\]
which is a contradiction. This completes the proof.
\qedhere
\proof[Proof of Theorem \ref{thmGradFlow_joint}] With these results in hand, we can prove the theorem. From Lemma \ref{lmMinEigennMovementAnyTime_joint}, if $R'_w \leq R_w$ and $R'_a \le R_a$, then
\[
\norm{X - U(t)}_F^2 \le \exp\Bigl(-\frac{\lambda_0 t}{d}\Bigr) \norm{X - U(0)}_F^2.
\]
We only need to choose $R_w = R_a$ such that $R'_w \le R_w$ and $R'_a \le R_a$. The conditions are
\begin{align*}
&\frac{\lambda_n}{m} (2\norm{W(0)} + R_w)R_w \leq \frac{\lambda_0}{4}; \\
\text{ and } &R_w \ge R'_w = \frac{2\sqrt{d\lambda_n} (\norm{A(0)} + R_a)\norm{X - U(0)}_F }{\sqrt{m}\lambda_0}.
\end{align*}
Note that $\norm{X - U(0)}_F^2 \leq \frac{3n}{2\delta}$ with probability at least $1-\delta$. Also, using a standard bound on sub-Gaussian matrices, we have $\norm{W(0)} \leq 2\sqrt{m} + \sqrt{d}$ and $\norm{A(0)} \leq 2\sqrt{m} + \sqrt{d}$ with probability at least $1 - 2\exp(-m)$. Plugging these estimates into the two conditions above, it suffices to take $m \ge \Omega\left( \frac{nd\lambda_n^3}{\lambda_0^4\delta^2}\right)$. This completes the proof of the gradient flow theorem.
\qedhere
\subsection{Gradient descent}
\label{sec:descent2}
As above, we will now appropriately discretize the gradient flow to obtain a convergence result for gradient descent with finite step size for the jointly-trained regime.
\begin{Theorem}
\label{thmGradDescent_joint}
Suppose Assumptions~\ref{astUnitNorm} and \ref{astMinEigen} hold. At initialization, suppose the weights are independently drawn from $w_r \sim \mathcal{N}(0, I)$ and $a_r \sim \mathrm{Unif}(\{-1, 1\}^d)$ for all $r \in [m]$. If $m \ge C\frac{n\lambda_n^3d}{\lambda_0^4\delta^2}$ for some large enough constant $C$, then with probability at least $1-\delta$, gradient descent on $W$ and $A$ with step size $\eta = \Theta(\frac{\lambda_0}{n\lambda_n})$ satisfies
\begin{equation}
\label{eqnGDConvergence_joint}
\norm{X - U(k)}_F^2 \leq (1-\frac{\eta \lambda_0}{2d})^k \norm{X - U(0)}_F^2.
\end{equation}
\end{Theorem}
We will prove Theorem \ref{thmGradDescent_joint} by induction. The base case $k = 0$ is trivially true. Assume that \eqref{eqnGDConvergence_joint} holds for $k'=0, 1, \dots, k$; we want to show that it also holds for $k' = k+1$. First, we prove that $\norm{W(k+1) - W(0)}_F$ and $\norm{A(k+1) - A(0)}_F$ are small enough, and we then use that to bound $\norm{X - U(k+1)}_F^2$.
In this section, we define and assume that
\[
R_w < \frac{4\sqrt{d\lambda_n} (\norm{A(0)} + R_a) \norm{X - U(0)}_F}{\sqrt{m}\lambda_0} \triangleq R'_w \quad\mbox{and}\quad
R_a < \frac{4\sqrt{d\lambda_n} (\norm{W(0)} + R_w) \norm{X - U(0)}_F}{\sqrt{m}\lambda_0} \triangleq R'_a.
\]
First, we show the following auxiliary lemma.
\begin{Lemma}
\label{lmMovementInstep_joint}
If the condition \eqref{eqnGDConvergence_joint} holds for $k'=0, 1, \dots k$, then we have
\begin{align*}
\norm{W(k+1) - W(0)}_F \leq R'_w, \text{ and } \norm{A(k+1) - A(0)}_F \leq R'_a
\end{align*}
with probability at least $1-\delta$ for any $\delta \in (0, 1)$.
\end{Lemma}
\proof We prove this by induction. Clearly, both bounds hold when $k'=0$. Assume that both hold for $k' \leq k$; we will prove that both hold for $k'=k+1$.
We use the expression of the gradients over $w_r$ and $a_r$ in \eqref{eqnGradLossOverWr_joint} and \eqref{eqnGradLossOverAr_joint}:
\begin{align*}
\nabla_{w_r}L(W(k), A(k) ) &= -\sum_{i=1}^n \frac{1}{\sqrt{md}}\mathbbm{1}[w_r(k)^Tx_i \ge 0]x_ia_r(k)^\top (x_{i} - u_{i}(k)) \\
&= - \frac{1}{\sqrt{md}}\widetilde{X}_r(k)(X - U(k))^\top a_r(k), \\
\nabla_{a_r}L(W(k), A(k) ) &= -\sum_{i=1}^n \frac{1}{\sqrt{md}}\phi(w_r(k)^\top x_i) (x_{i} - u_{i}(k)) \\
&= - \frac{1}{\sqrt{md}}(X - U(k)) \phi(X^\top w_r(k)).
\end{align*}
Then, the difference of the weight matrix $W$ is:
\begin{align*}
\norm{W(k+1) - W(0)}_F
&\le \eta \frac{\norm{X}}{\sqrt{md}}\sum_{k'=0}^k\norm{\mathrm{vec}(X - U(k'))}_F(\norm{A(0)} + R_a) \\
&\le \eta \frac{\sqrt{\lambda_n} (\norm{A(0)} + R_a) }{\sqrt{md}}\sum_{k'=0}^k\Bigl(1 - \frac{\eta \lambda_0}{2d} \Bigr)^{k'/2}\norm{X - U(0)}_F \\
&\le \eta \frac{\sqrt{\lambda_n} (\norm{A(0)} + R_a)}{\sqrt{md}}\norm{X - U(0)}_F\sum_{k'=0}^\infty\Bigl(1 - \frac{\eta \lambda_0}{2d} \Bigr)^{k'/2} \\
&\le \eta \frac{\sqrt{\lambda_n} (\norm{A(0)} + R_a)}{\sqrt{md}}\norm{X - U(0)}_F \frac{4d}{\eta \lambda_0} \\
&= \frac{4\sqrt{d\lambda_n} (\norm{A(0)} + R_a) \norm{X - U(0)}_F}{\sqrt{m}\lambda_0} = R_w',
\end{align*}
where the third step and fourth step follow from $\norm{\widetilde{X}_r(k')} \leq \norm{X} = \sqrt{\lambda_n}$ and the induction hypothesis $\norm{A(k')} \leq \norm{A(0)} + R_a$. The last step follows from
\begin{align*}
\sum_{i=0}^\infty (1-\eta\lambda_0/(2d))^{i/2}
\le \frac{4d}{\eta \lambda_0}.
\end{align*}
Similarly, we bound the difference of the weight matrix $A$ between time $k+1$ and $0$:
\begin{align*}
\norm{A(k+1) - A(0)}_F
&= \eta \norm[\bigg]{\sum_{k'=0}^k\frac{1}{\sqrt{md}}(X - U(k') ) \phi(X^\top W(k') )}_F \\
&\le \eta \frac{\norm{X}}{\sqrt{md}}\sum_{k'=0}^k\norm{X - U(k') }_F\norm{W(k')} \\
&\le \eta \frac{\sqrt{\lambda_n}}{\sqrt{md}}\sum_{k'=0}^k\Bigl(1 - \frac{\eta \lambda_0}{2d} \Bigr)^{k'/2}\norm{X - U(0) }_F (\norm{W(0)} + R_w) \\
&\le \eta \frac{\sqrt{\lambda_n} (\norm{W(0)} + R_w) \norm{X - U(0)}_F}{\sqrt{md}} \frac{4d}{\eta \lambda_0}\\
&= \frac{4\sqrt{d\lambda_n} (\norm{W(0)} + R_w) \norm{X - U(0)}_F}{\sqrt{m} \lambda_0} = R'_a
\end{align*}
where the third step and fourth step follow from the facts that $\norm{\phi(X^\top W(k'))} \leq \norm{X}\norm{W(k')}$ and $\norm{X} = \sqrt{\lambda_n}$, and $\norm{W(k')} \le \norm{W(0)} + R_w$.
We have therefore shown that $\norm{W(k') - W(0)}_F \leq R_w'$ and $\norm{A(k') - A(0)}_F \leq R_a'$ for $k' = k+1$.
\qedhere
Now, we expand $\norm{ X - U(k+1)}_F^2$ in terms of the step $k$. Recall the update rule in \eqref{eqnUpdateWJoint} and \eqref{eqnUpdateAJoint} that
\begin{align*}
W(k+1) &= W(k) - \eta \nabla_W L(W(k), A(k)), ~k=0, 1, \dots \\
A(k+1) &= A(k) - \eta \nabla_A L(W(k), A(k)), ~k=0, 1, \dots
\end{align*}
where the gradients are given above. Next, we compute the difference of the prediction between two consecutive steps, a discrete version of $\frac{du_i(t)}{dt}$. For each $i \in [n]$, we have
\begin{align*}
\label{eqnDiffui_Case1}
u_i(k+1) &- u_i(k) = \frac{1}{\sqrt{md}} \sum_{r=1}^m \left( a_r(k+1) \phi( w_r(k+1)^\top x_i ) - a_r(k)\phi(w_r(k)^\top x_i ) \right) \\
&= \frac{1}{\sqrt{md}} \sum_{r=1}^m \left( \bigl ( a_r(k) - \eta \nabla_{a_r} L \bigr) \phi\left( \bigl(w_r(k) - \eta \nabla_{w_r} L \bigl)^\top x_i \right) - a_r(k)\phi(w_r(k)^\top x_i ) \right).
\numberthis
\end{align*}
For a particular $r$, if the activation pattern does not change, we can write the inside term as:
\begin{align*}
&\bigl ( a_r(k) - \eta \nabla_{a_r} L \bigr) \phi\left( \bigl(w_r(k) - \eta \nabla_{w_r} L \bigl)^\top x_i \right) - a_r(k)\phi(w_r(k)^\top x_i ) \\
&= \left( -\eta a_r(k) \left( \nabla_{w_r} L \right)^\top - \eta ( \nabla_{a_r} L ) w_r(k)^\top + \eta^2 (\nabla_{a_r} L) (\nabla_{w_r} L)^\top \right) x_i\mathbbm{1}[ w_r(k)^\top x_i \geq 0],
\end{align*}
where the first part corresponds to the kernel $G(t)$ and the second part corresponds to the kernel $H(t)$ that appeared in the gradient flow analysis. With this intuition, we split the right-hand side into two parts: $v_{1,i}$ collects the terms for which the activation pattern does not change, and $v_{2,i}$ collects the remaining terms for which the pattern may change.
For each $i \in [n]$, we define $S_i = \{r \in [m] : \mathbbm{1}[ w_r(k+1)^\top x_i \geq 0] = \mathbbm{1}[ w_r(k)^\top x_i \geq 0 ]\}$, and $S^{\bot}_i = [m] \backslash S_i$.
Then, we write $v_{1,i}$ and $v_{2,i}$ as follows:
\begin{align*}
&v_{1,i} \triangleq \frac{1}{ \sqrt{md} } \sum_{r \in S_i} \left( a_r(k+1) \phi( w_r(k+1)^\top x_i ) - a_r(k)\phi(w_r(k)^\top x_i ) \right), \\
&v_{2,i} \triangleq \frac{1}{ \sqrt{md} } \sum_{r \in S^{\bot}_i} \left( a_r(k+1) \phi( w_r(k+1)^\top x_i ) - a_r(k)\phi(w_r(k)^\top x_i ) \right) .
\end{align*}
We further write $v_1 = (v_{1,1}^\top, v_{1,2}^\top, \dots, v_{1,n}^\top)^\top$ and do the same for $v_2$. Hence, we write
\begin{align*}
\mathrm{vec}(U(k+1) - U(k)) = v_1 + v_2 .
\end{align*}
In order to analyze $v_1 \in \mathbb{R}^{nd}$, we define $G, G^{\bot} \in \mathbb{R}^{nd \times nd}$ and $H, H^{\bot} \in \mathbb{R}^{nd \times nd}$ by
\begin{align*}
G(k)_{i,j} = & ~ \frac{1}{m} \sum_{r=1}^m x_i^\top x_j \mathbbm{1}[{ w_r(k)^\top x_i \geq 0, w_r(k)^\top x_j \geq 0 }] a_r(k)a_r(k)^\top, \\
G(k)^{\bot}_{i,j} = & ~ \frac{1}{m} \sum_{r\in S^{\bot}_i} x_i^\top x_j \mathbbm{1}[ w_r(k)^\top x_i \geq 0, w_r(k)^\top x_j \geq 0]a_r(k)a_r(k)^\top, \\
H(k)_{i,j} = & ~ \frac{1}{m} \sum_{r=1}^m \phi(w_r(k)^\top x_i) \phi(w_r(k)^\top x_j)I, \\
H(k)^{\bot}_{i,j} = & ~ \frac{1}{m} \sum_{r\in S^{\bot}_i} \phi(w_r(k)^\top x_i) \phi(w_r(k)^\top x_j)I.
\end{align*}
Using the fact that $\phi(z) = z\mathbbm{1}[z \geq 0]$ and the definition of $S_i$, we expand the gradients in \eqref{eqnGradLossOverWr_joint} and \eqref{eqnGradLossOverAr_joint} and get:
\begin{align*}
v_{1,i}
= &~- \frac{1}{\sqrt{md}} \sum_{r \in S_i} \eta a_r(k) \left( \nabla_{w_r} L \right)^\top x_i\mathbbm{1}[ w_r(k)^\top x_i \geq 0] \\
&~- \frac{1}{\sqrt{md}} \sum_{r \in S_i} \eta ( \nabla_{a_r} L ) w_r(k)^\top x_i\mathbbm{1}[ w_r(k)^\top x_i \geq 0] \\
&~+ \frac{1}{\sqrt{md}} \sum_{r \in S_i} \eta^2 (\nabla_{a_r} L) (\nabla_{w_r} L)^\top x_i\mathbbm{1}[ w_r(k)^\top x_i \geq 0] \\
= &~ \frac{\eta}{d} \sum_{j=1}^n \left( G(k)_{i, j} - G(k)_{i, j}^\bot + H(k)_{i, j} - H(k)_{i, j}^\bot \right) (x_j - u_j) + v_{3, i},
\end{align*}
where $v_3$ will be treated as a perturbation:
\[
v_{3, i} = \frac{\eta^2}{\sqrt{md}} \sum_{r \in S_i} (\nabla_{a_r} L) (\nabla_{w_r} L)^\top x_i\mathbbm{1}[ w_r(k)^\top x_i \geq 0].
\]
Then, we can write $v_1$ as:
\begin{align}
\label{eqnV1compact}
v_1 = \frac{\eta}{d} (K(k) - K^{\bot}(k)) \mathrm{vec}(X - U(k)) + v_3,
\end{align}
in which $K(k) = G(k) + H(k)$ --- the discrete NTK kernel and $K^\bot (k) = H^\bot(k) + G^\bot(k)$. Lastly, we come to the main prediction dynamics in discrete time for $\mathrm{vec}(U(k+1) - U(k))$ as:
\begin{align*}
\mathrm{vec}(U(k+1) - U(k)) = \frac{\eta}{d} (K(k) - K^{\bot}(k)) \mathrm{vec}(X - U(k)) + v_2 + v_3.
\end{align*}
Using this equation, we can rewrite $\norm{ X - U(k+1)}_F^2$ in terms of $X - U(k)$ as follows:
\begin{align*}
\norm{ X - U(k+1)}_F^2
&= \norm{ \mathrm{vec}(X - U(k+1))}^2 \\
&= \norm{ \mathrm{vec}(X - U(k)) - \mathrm{vec}(U(k+1) - U(k)) }_F^2 \\
&= \norm{X - U(k)}_F^2 - 2\mathrm{vec}(X - U(k))^\top \mathrm{vec}(U(k+1) - U(k)) \\
&~~~+ \norm{U(k+1) - U(k)}_F^2 \\
&= \norm{X - U(k)}_F^2 - \frac{2\eta}{d}\mathrm{vec}(X - U(k))^\top K(k) \mathrm{vec}(X - U(k)) \\
&~~~+ \frac{2\eta}{d}\mathrm{vec}(X - U(k))^\top K(k)^\bot \mathrm{vec}(X - U(k)) \\
&~~~- \frac{2\eta}{d}\mathrm{vec}(X - U(k))^\top (v_2 + v_3) \\
&~~~+ \norm{U(k+1) - U(k)}_F^2.
\end{align*}
We define and upper bound each of the following terms
\begin{align*}
C_1 &= -\frac{2\eta}{d} \mathrm{vec}( X - U(k) )^\top K(k) \mathrm{vec}( X - U (k) ) , \\
C_2 &= \frac{2\eta}{d} \mathrm{vec}( X - U(k) )^\top K(k)^\bot \mathrm{vec}( X - U (k) ) , \\
C_3 &= -\frac{2\eta}{d} \mathrm{vec}( X - U(k) )^\top v_2 , \\
C_4 &= -\frac{2\eta}{d} \mathrm{vec}( X - U(k) )^\top v_3 , \\
C_5 &= \norm{U(k+1) - U(k)}_F^2 .
\end{align*}
Notice that $C_1$ can be upper bounded in terms of $\lambda_{\min}(K(k)) \ge \lambda_{\min}(H(k))$, which is ensured as long as the movement in the weights is sufficiently small (shown in Lemma \ref{lmMovementInstep_joint}). $C_2$ can also be upper bounded using the kernel, via a bound on its spectral norm.
\proof[Proof of Theorem \ref{thmGradDescent_joint}]
We will prove Theorem \ref{thmGradDescent_joint} by induction. The base case when $k = 0$ is trivially true. Assume that the claim holds for $k'=0, 1, \ldots, k$ and we want to show that \eqref{eqnGDConvergence_joint} also holds for $k' = k+1$. For $k' = k+1$, we have
\begin{align*}
\norm{ X - U(k+1)}_F^2
&= \norm{X - U(k)}_F^2 + C_1 + C_2 + C_3 + C_4 + C_5
\end{align*}
Now, we invoke the bound for each of these terms from Claims~\ref{clmC1_joint}, \ref{clmC2_joint}, \ref{clmC3_joint}, \ref{clmC4_joint} and \ref{clmC5_joint} in Appendix \ref{appdx:jointly_train} and Lemma~\ref{lmMovementInstep_joint}. Then, we want to choose $\eta$ and $R_w$ such that
\begin{align}\label{eq:choice_of_eta_R}
\left( 1 - \frac{\eta \lambda_0}{d} + \frac{8\eta \lambda_n}{d} + \frac{16\eta^2 \sqrt{n\lambda_n}}{d} + \frac{8\eta^2 \sqrt{n\lambda_n^2}}{d} \norm{X - U(0)}_F + \frac{64 \eta^2 \lambda_n^2}{md} \right) \leq \left(1-\frac{\eta\lambda_0}{2d} \right) .
\end{align}
If we set $\eta = \frac{\lambda_0 }{64n\lambda_n}$ and use $\norm{X - U(0)}_F \leq C\sqrt{n}$, then the two dominating terms satisfy
\begin{align*}
\frac{8\eta\lambda_n}{d} \leq \frac{\eta \lambda_0 }{8d} ,
\mathrm{~~~and~~~} \frac{8\eta^2 \sqrt{n\lambda_n^2}\norm{X - U(0)}_F}{d} \leq \frac{\eta \lambda_0}{8d}.
\end{align*}
This implies that
\begin{align*}
\norm{ X - U(k+1) }_F^2 \leq ( 1 - \frac{\eta \lambda_0}{2d} ) \norm{ X - U(k) }_F^2.
\end{align*}
\paragraph{Lower bound on $m$.}
We require for any $\delta \in (0, 1)$ that
\begin{align*}
\frac{\lambda_n}{m}(4\norm{W(0)} + R_w)R_w \leq \frac{\lambda_0}{4},
\end{align*}
\begin{align*}
R_w \leq \frac{4\sqrt{d\lambda_n}(\norm{A(0)} + R_a) \norm{X - U(0)}_F}{\sqrt{m}\lambda_0}
\end{align*}
and
\[
2\exp(-m) \leq \delta
\]
where the first bound on $R_w$ comes from the result on gradient descent and the condition in Lemma \ref{lmBoundHDiff_joint}, whereas the second bound is required by the above Claims. By Claim \ref{clInitialLoss_Case1}, $\norm{X - U(0)}_F \leq \sqrt{\frac{2n}{\delta}}$ for arbitrary $\delta \in (0, 1)$; hence we require
\[
m \geq C\frac{nd\lambda_n^3}{\lambda^4_0\delta^2}
\]
for a sufficiently large constant $C>0$ so that the claim holds with probability $1-\delta$.
\qedhere
\section{The Neural Tangent Kernel and Linearized Autoencoders}
\label{sec:kernels}
\subsection{NTK for general autoencoders}
Let us first derive the neural tangent kernels for general autoencoders (possibly deep and with more than 2 layers) with multiple outputs in a compact form. Given $n$ i.i.d samples $X = [x_1, x_2, \dots, x_n]$ and the autoencoder $f(\theta, x)$, we consider minimizing the squared-error reconstruction loss:
\begin{equation*}
L(\theta) = \frac{1}{2}\sum_{i=1}^n\norm{x_i -
f(\theta, x_i)}^2 = \frac{1}{2}\sum_{i=1}^n\norm{x_i - u_i}^2,
\end{equation*}
where $\theta$ is a vector that stacks all the network parameters (e.g. $W$ and $A$) and $u_i = f(\theta, x_i) \in \mathbb{R}^d$ denotes the corresponding output for every $i = 1, 2, \ldots, n$. The evolution of gradient descent on $L(\theta)$ with an infinitesimally small learning rate is represented by the following ordinary differential equation (ODE):
\begin{equation}
\label{eqnGradODE}
\frac{\mathrm{d}\theta(t)}{\mathrm{d}t} = -\nabla_{\theta}L(\theta(t)).
\end{equation}
The {time-dependent} NTK for autoencoders can be characterized as follows:
\begin{Lemma}
\label{lmNTK_general}
Denote by $U(t) = [u_{1}(t), u_{2}(t), \ldots, u_{n}(t)] \in \mathbb{R}^{d\times n}$ the corresponding outputs of all the samples in $X$, i.e., $u_i(t) = f(\theta(t), x_i)$. The dynamics of $U(t)$ is given by the ODE:
\begin{align*}
\frac{\mathrm{d}\mathrm{vec}(U(t))}{\mathrm{d}t} &=
K(t)\mathrm{vec}(X - U(t)),
\end{align*}
where $K(t)$ is an $nd\times nd$ positive semi-definite kernel matrix whose $(i, j)$-th block of size $d\times d$ is:
\[
\left( \frac{\partial}{\partial \theta}f(\theta, x_i) \right) \cdot \left( \frac{\partial}{\partial \theta}f(\theta, x_j) \right)^\top.
\]
\end{Lemma}
\proof
Note that in the supervised learning setting with a single output, the $(i,j)$-th block is a single scalar equal to the inner product of two gradients. We prove this using simple calculus. The gradient of the loss over the parameters $\theta$ is
\begin{align*}
\nabla_{\theta}L(\theta) &= -\sum_{i=1}^n \frac{\partial u_i^\top}{\partial \theta} (x_{i} - u_{i}),
\end{align*}
where $\partial u_i/\partial \theta$ denotes the Jacobian matrix of the output vector $u_i$ with respect to $\theta$. Combining with \eqref{eqnGradODE}, the continuous-time dynamics of the prediction for each sample $i \in [n]$ is specified as
\begin{align*}
\frac{\mathrm{d} u_{i}}{\mathrm{d}t} &=
\frac{\partial u_i}{\partial \theta} (-\nabla_{\theta}L(\theta)) \\
&=
\sum_{j=1}^n \frac{\partial u_i}{\partial \theta} \frac{\partial u_j^\top}{\partial \theta} (x_{j} - u_{j}).
\end{align*}
Vectorizing $\frac{dU(t)}{dt}$, we get
\begin{align*}
\frac{\mathrm{d}\mathrm{vec}(U(t))}{\mathrm{d}t} &=
K(t)\mathrm{vec}(X - U(t)),
\end{align*}
where $K(t)$ (or $K$) is an $nd\times nd$ matrix whose $(i, j)$-block is of size $d\times d$:
\[
K_{i, j} = \frac{\partial u_i}{\partial \theta} \frac{\partial u_j^\top}{\partial \theta} =
\left( \frac{\partial}{\partial \theta}f(\theta, x_i) \right) \cdot \left( \frac{\partial}{\partial \theta}f(\theta, x_j) \right)^\top.
\]
One can easily verify that $K(t)$ is positive semi-definite.
\qedhere
If the parameters $\theta(0)$ are assumed to be stochastic, then the (deterministic) neural tangent kernel (NTK) is defined as:
\begin{equation}
(K^\infty)_{i, j} =
\mathbb{E}_{\theta(0)}\left [\Bigl(\left. \frac{\partial}{\partial \theta}f(\theta, x_i)\right|_{\theta=\theta(0)} \Bigr) \cdot \Bigl(\left. \frac{\partial}{\partial \theta}f(\theta, x_j)\right|_{\theta=\theta(0)} \Bigr)^\top \right].
\end{equation}
Note that $K^\infty$ is time-independent.
If the network is randomly initialized and its width is allowed to grow infinitely large, $K(t)$ converges to $K^\infty$, and remains constant during training.
Our goal is to show that if the width is sufficiently large (not necessarily infinite), then $K(t) \approx K(0) \approx K^\infty$, and the gradient dynamics are governed by the spectrum of $K^\infty$.
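As an illustration only (not the authors' implementation), the time-dependent kernel of Lemma \ref{lmNTK_general} can be approximated for a small network by forming each per-sample Jacobian with central finite differences; the routine below assumes a generic reconstruction map \texttt{f(theta, x)} taking a flat parameter vector, and is far too slow for anything but toy examples.
\begin{verbatim}
import numpy as np

def empirical_ntk(f, theta, X, eps=1e-5):
    """K whose (i, j)-th d x d block is (df/dtheta at x_i)(df/dtheta at x_j)^T."""
    d = f(theta, X[:, 0]).shape[0]
    n = X.shape[1]
    p = theta.size
    J = np.zeros((n, d, p))                 # per-sample Jacobians
    for k in range(p):
        e = np.zeros(p); e[k] = eps
        for i in range(n):
            J[i, :, k] = (f(theta + e, X[:, i]) - f(theta - e, X[:, i])) / (2 * eps)
    K = np.zeros((n * d, n * d))
    for i in range(n):
        for j in range(n):
            K[i * d:(i + 1) * d, j * d:(j + 1) * d] = J[i] @ J[j].T
    return K
\end{verbatim}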
\input{asymptotic}
\subsection{NTK for two-layer autoencoders}
Let us now specialize to the case of two-layer autoencoders with $\mathrm{ReLU}$ activation. Since we consider two training regimes, namely the \emph{weakly-trained} and the \emph{jointly-trained} regimes,
we first give the expression of a few \emph{base} kernels whose appropriate compositions produce the final kernel for each individual case. The precise derivation of each regime is given in the next few sections.
Again, we consider the reconstruction loss:
\begin{equation*}
L(W, A) = \frac{1}{2}\sum_{i=1}^n\norm{x_i - \frac{1}{\sqrt{md}}A\phi(W^\top x_i)}^2 = \frac{1}{2}\sum_{i=1}^n\norm{x_i - u_i}^2,
\end{equation*}
where the weights are independently initialized such that:
\[
w_{r}(0) \sim \mathcal{N}(0, I), ~~ a_{r}(0) \sim \mathrm{Unif}\{-1, 1\}^d,~~r = 1,\ldots, m .
\]
Here the minimization can be either over the encoder weights $W$, or the decoder weights $A$, or both $W$ and $A$. Let us denote
\[
\widetilde{X}_r(t) = \Bigl[\mathbbm{1}[w_r(t)^\top x_{1} \ge 0]x_{1}, \ldots, \mathbbm{1}[w_r(t)^\top x_{n} \ge 0]x_{n}\Bigr].
\]
If we fix $A$ and optimize the loss $L(W, A)$ over $W$, we get
\[
G(t) = \frac{1}{md} \sum_{r=1}^m\widetilde{X}_r(t)^\top\widetilde{X}_r(t) \otimes a_ra^\top_r .
\]
If we fix $W$ and optimize the loss $L(W, A)$ over $A$, we get
\[
H(t) = \frac{1}{md}\sum_{r=1}^m \phi(X^\top w_r(t))\phi(w_r(t)^\top X) \otimes I.
\]
Writing these kernels in Kronecker product form allows us to clearly visualize the connection to the supervised learning case, and enables characterization of their spectrum. Intuitively, in the {jointly-trained} case, since both $W$ and $A$ depend on $t$, an invocation of the chain rule leads to the sum $G(t) + H(t)$ being the ``effective'' kernel that governs the dynamics.
In the infinite-width limit where $m \rightarrow \infty$, the NTKs in the corresponding training regimes reduce to compositions of the following fixed deterministic kernels:
\begin{align*}
G^{\infty} &= \mathbb{E}_{w(0), a(0)}\bigl[\widetilde{X}(0)^\top\widetilde{X}(0) \otimes a(0)a(0)^\top \bigr] = \mathbb{E}_{w(0)}[\widetilde{X}(0)^\top\widetilde{X}(0)] \otimes I_d, \\
H^{\infty} &= \mathbb{E}_{w(0)}\biggl[ \phi(X^\top w(0))\phi(w(0)^\top X) \biggr] \otimes I.
\end{align*}
Somewhat curiously, we will show that the crucial component of the \emph{time-dependent} kernel in the jointly-trained regime, $H(t)$ (within $H(t)+G(t)$), is better-behaved than the corresponding kernel in the weakly-trained regime, $G(t)$, thanks to its better Lipschitz smoothness, even though the respective \emph{limiting} kernels are the same.
This improved smoothness allows us to derive a much better bound on kernel perturbations with respect to changing weights, and this results in a significant improvement in the level of over-parameterization (Theorem \ref{thmInformalJoint}).
\section{Overview of main results}
\textbf{Notation.} We use uppercase letters to denote matrices,
and lowercase for vectors or scalars.
An exception is the notation $C$, which represents a generic scalar constant whose value can change from line to line.
A vector is interpreted as a column vector by default. We denote by $x_i \in \mathbb{R}^d$ the $i^{\textrm{th}}$-column (or sample) of the data matrix $X$, and $W = [w_1, \dots, w_m] \in \mathbb{R}^{d\times m}$ denotes a weight matrix.
Whenever necessary, we distinguish between the weight vector $w_r$ at different algorithmic steps using an explicit $w_r(t)$ indexed by the step $t$. For a matrix $A = [a_1, \dots, a_m] \in \mathbb{R}^{d\times m}$, $\mathrm{vec}(A) = [a_{11}, \dots, a_{d1}, \dots, a_{1m}, \dots a_{dm}]^\top$ vectorizes the matrix $A$ by stacking its columns. The symbol $\otimes$ denotes the Kronecker product.
We use $\mathcal{N}(\cdot)$ and $\mathrm{Unif}(\cdot)$ to denote the Gaussian and uniform distributions respectively. We simply write $\mathbb{E}_{w}$ instead of $\mathbb{E}_{w \sim \mathcal{N}(0, I)}$ for brevity. Throughout the paper, we refer to an arbitrary $\delta \in (0, 1)$ as the failure probability of some event under consideration.
\subsection{Two-layer autoencoders}
Our goal is to understand the inductive bias and the learning dynamics of learning two-layer autoencoders with gradient descent. We focus on the two-layer autoencoder architecture with the rectified linear unit ($\mathrm{ReLU}$), defined by $\phi(z)=\max(z,0)$ for any $z\in\mathbb{R}$.
In what follows, when $\phi$ is applied to a vector or a matrix, the $\mathrm{ReLU}$ function is applied elementwise. Given an input sample $x \in \mathbb{R}^d$, the autoencoder
returns a reconstruction $u\in\mathbb{R}^d$ of $x$, given by
\[
u = \frac{1}{\sqrt{md}}A\phi(W^\top x) = \frac{1}{\sqrt{md}}\sum_{r=1}^m a_r\phi(w^\top_r x) ,
\]
where
$W = [w_1, \dots, w_m]$ and $A= [a_1, \dots, a_m]$ are weight matrices of the first (encoder) and second (decoder) layers respectively. We do not consider bias terms in this work. However, in principle, the bias vector for the hidden layer can be regarded as the last column of $W$ with the last dimension of $x$ always being 1.
\begin{Remark}[Choice of scaling factor]
\label{rmkScalingFactor1}
\normalfont Notice that we have scaled the output with $1/\sqrt{md}$, where $1/\sqrt{m}$ is the factor for the first layer and $1/\sqrt{d}$ for the second layer. Such scaling has been utilized in mathematical analyses of supervised networks \citep{jacot2018ntk} as well as of autoencoders \citep{li2018randomae}. Since the $\mathrm{ReLU}$ is positively homogeneous, such factors can technically be absorbed into the corresponding weight matrices $W$ and $A$,
but we find that keeping such factors explicit is crucial to understand the asymptotic behavior of neural network training as the network widths (i.e., $m$ in this case) go to infinity.
\end{Remark}
Let us now set up the problem. Suppose that we are given $n$ training samples $X = [x_1, x_2, \dots, x_n]$. We assume that each weight is randomly and independently initialized. Then, we train the autoencoder via gradient descent over the usual squared-error reconstruction loss:
\begin{equation}
\label{eqnGenEmpLoss}
L(W, A) = \frac{1}{2}\sum_{i=1}^n\norm{x_i - \frac{1}{\sqrt{md}}A\phi(W^\top x_i)}^2 = \frac{1}{2}\sum_{i=1}^n\norm{x_i - u_i}^2.
\end{equation}
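As a small self-contained sketch (our illustration; the dimensions below are arbitrary), the reconstruction and the loss \eqref{eqnGenEmpLoss} can be written as follows, with the random initialization described in the text and the explicit $1/\sqrt{md}$ factor of Remark \ref{rmkScalingFactor1}:
\begin{verbatim}
import numpy as np

def reconstruction_loss(W, A, X):
    """L(W, A) = (1/2) sum_i ||x_i - (1/sqrt(md)) A phi(W^T x_i)||^2."""
    d, m = W.shape
    U = A @ np.maximum(W.T @ X, 0.0) / np.sqrt(m * d)   # reconstructions
    return 0.5 * np.sum((X - U) ** 2)

# Example with random initialization (sizes chosen arbitrarily).
rng = np.random.default_rng(0)
d, m, n = 8, 256, 16
X = rng.standard_normal((d, n))
X /= np.linalg.norm(X, axis=0)              # normalize columns to unit norm
W = rng.standard_normal((d, m))             # w_r ~ N(0, I)
A = rng.choice([-1.0, 1.0], size=(d, m))    # a_r ~ Unif({-1, 1}^d)
print(reconstruction_loss(W, A, X))
\end{verbatim}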
Throughout the paper, unless otherwise specified, we make the following assumptions:
\begin{Assumption}
\label{astUnitNorm}
All training samples are normalized, i.e., $\|x_i\|=1$ for $i=1,\dots, n$.
\end{Assumption}
We gather the training samples into the data matrix $X = [x_1, x_2, \dots, x_n]$ and define $\lambda_n \triangleq \|X^\top X\|$.
Assumption~\ref{astUnitNorm} implies that $\norm{X}_F = \sqrt{n}$ and hence $1 \leq \lambda_n \leq n$. We regard $\lambda_n$ as a parameter that depends on the data geometry. For certain families of matrices (e.g., those with independent Gaussian entries), $\lambda_n \sim O(\max(n/d, 1))$, which can be $o(n)$ depending on how large $n$ is in terms of $d$.
We note that throughout our analysis, $X$ is regarded as fixed,
and we will focus on the randomness in the weights.
\begin{Assumption}
\label{astMinEigen}
Consider a random vector $w \sim \mathcal{N}(0, I)$ and define $\widetilde{x}_i = \mathbbm{1}[w^\top x_i \ge 0]x_i$ for each $i \in [n]$. Let $\widetilde{X} = \bigl[\widetilde{x}_1, \dots, \widetilde{x}_n\bigr]$. Assume $\min(\lambda_{\min}(\mathbb{E}_{w}[\widetilde{X}^\top\widetilde{X}]), \lambda_{\min}(\mathbb{E}_{w}[\phi(X^\top w)\phi(w^\top X)])) = \lambda_0 > 0$.
\end{Assumption}
The matrix $\mathbb{E}_{w}[\widetilde{X}^\top\widetilde{X}]$ is the so-called Gram matrix from the kernel induced by the $\mathrm{ReLU}$ transformation and has been extensively studied in \citep{xie2016diverse, tsuchida2017invariance, du2018_gradient, arora2019fine}.
Although this condition is difficult to interpret, one sufficient condition established in \citep{oymak2019moderate} (Lemma H.1 and Lemma H.2) is that Assumption \ref{astMinEigen} holds as long as the squared minimum singular value satisfies $\sigma^2_{\min}(X \star X) > 0$, where $\star$ denotes the Khatri-Rao product. In this sense, our assumption is similar to that of \citet{oymak2019moderate} and slightly stronger than that of \citet{du2018_gradient}, which only requires $\lambda_{\min}(\mathbb{E}_{w}[\widetilde{X}^\top\widetilde{X}]) > 0$.
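Assumption \ref{astMinEigen} can also be checked numerically for a given dataset. Below is a minimal Monte Carlo sketch (our own; the number of draws is arbitrary) that estimates both Gram matrices in the assumption and returns the smaller of their minimum eigenvalues:
\begin{verbatim}
import numpy as np

def estimate_lambda0(X, num_draws=20000, seed=0):
    """Monte Carlo estimate of lambda_0 (minimum-eigenvalue assumption)."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    gram_ind = np.zeros((n, n))    # estimates E_w[ Xtilde^T Xtilde ]
    gram_relu = np.zeros((n, n))   # estimates E_w[ phi(X^T w) phi(w^T X) ]
    for _ in range(num_draws):
        w = rng.standard_normal(d)
        pre = X.T @ w
        Xt = X * (pre >= 0)
        gram_ind += Xt.T @ Xt
        act = np.maximum(pre, 0.0)
        gram_relu += np.outer(act, act)
    gram_ind /= num_draws
    gram_relu /= num_draws
    return min(np.linalg.eigvalsh(gram_ind)[0], np.linalg.eigvalsh(gram_relu)[0])
\end{verbatim}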
The above assumptions about the data are relatively mild, in sharp contrast with assuming a specific generative model for the data (e.g., dictionary models or mixtures of Gaussians \citep{nguyen2019_ae, buhai2019autoenc}) that has so far been employed to analyze autoencoder gradient dynamics.
\subsection{Learning dynamics}
Depending on which weight variables are being optimized, we consider three training regimes:
\begin{itemize}
\item \textbf{Weakly-trained case:}
This corresponds to the regime where the loss function \eqref{eqnGenEmpLoss} is optimized over the weights $W$ while keeping $A$ fixed. A different form of weak training is to fix
the encoder and optimize \eqref{eqnGenEmpLoss} over $A$. Indeed, this practice is perhaps folklore: it corresponds to standard kernel regression, where global convergence depends on the Hessian associated with random $\mathrm{ReLU}$ features. We do not pursue this case any further since kernel methods are well understood, but note in passing that this Hessian will eventually show up in our analysis.
\item \textbf{Jointly-trained case:} This corresponds to the regime that \eqref{eqnGenEmpLoss} is optimized over both $W$ and $A$. This case matches practical neural network training, and performs better than the weakly trained case. We will show that the contrast between {weakly-trained} and {jointly-trained} cases arises due to the nature of the different NTK's and our analysis may pave the way to better understanding of autoencoder training.
\item \textbf{Weight-tied case:} Weight-tying is another common practice in training autoencoders. Here, one sets the encoder and decoder weights to be the same, i.e., $A = W$, and optimizes \eqref{eqnGenEmpLoss} over the common variables $W$. We study this problem from the perspective of over-parameterization and show that this case leads to somewhat unexpected degeneracies.
\end{itemize}
We adopt the framework introduced in \citep{du2018_gradient}. Our proofs proceed generally as follows:
\begin{enumerate}
\item[(i)] We will consider the continuous flow of the autoencoder outputs $U(t) = [u_{1}(t), u_{2}(t), \dots, u_{n}(t)] \in \mathbb{R}^{d\times n}$ corresponding to the samples in $X$ at time $t$. This continuous flow can be morally viewed as the execution of gradient descent with infinitesimal learning rate. This enables us to write:
\begin{align*}
\frac{\mathrm{d}\mathrm{vec}(U(t))}{\mathrm{d}t} &=
K(t)\mathrm{vec}(X - U(t)),
\end{align*}
where $K(t)$ is a kernel matrix.
\item[(ii)] From this characterization, we can infer that the spectrum of $K(t)$ governs the dynamics of the outputs. To derive explicit convergence bounds, we will first prove that $K(0)$ has positive minimum eigenvalue with high probability. This is achieved via using concentration arguments over the random initialization. Then, we will upper-bound the movement of each individual weight vector from the initial guess and hence bound the deviation of $K(t)$ from $K(0)$ in terms of spectral norm.
\item[(iii)] By discretizing the continuous-time analysis, we will obtain analogous bounds for gradient descent with a properly chosen step size and show that gradient descent linearly converges to a global solution.
\end{enumerate}
Our convergence results are informally stated in the following theorems:
\begin{Theorem}[Informal version of Theorems \ref{thmGradFlow_Case1} and \ref{thmGradDescent_Case1}]
\label{thmInformalWeak}
Consider an autoencoder that computes output $u = \frac{1}{\sqrt{md}}A\phi(W^\top x)$ where the weight vectors are initialized with independent vectors $w_{r} \sim \mathcal{N}(0, I)$ and $a_{r} \sim \mathrm{Unif}(\{-1, 1\}^d)$ for all $r \in [m]$. For any $
\delta \in (0, 1)$ and $m \ge C \frac{n^5d^4\lambda_n}{\lambda_0^4\delta^3}$ for some large enough constant $C$, the gradient descent over $W$ linearly converges to a global minimizer with probability at least $1-\delta$ over the randomness in the initialization.
\end{Theorem}
\begin{Theorem}[Informal version of Theorems \ref{thmGradFlow_joint} and \ref{thmGradDescent_joint}]
\label{thmInformalJoint}
Consider an autoencoder that computes output $u = \frac{1}{\sqrt{md}}A\phi(W^\top x)$ where the weight vectors are initialized with independent vectors $w_{r} \sim \mathcal{N}(0, I)$ and $a_{r} \sim \mathrm{Unif}(\{- 1, 1\}^d)$ for all $r \in [m]$. For any $
\delta \in (0, 1)$ and $m \ge C\frac{nd\lambda^3_n}{\lambda_0^4\delta^2} $ for some large enough constant $C$, the gradient descent jointly over $W$ and $A$ linearly converges to a global minimizer with probability at least $1-\delta$ over the randomness in the initialization.
\end{Theorem}
\paragraph{Comparisons with existing work.} We summarize the quantitative implications of our results in Table \ref{tblResults}. In this table, we compare with \citet{du2018_gradient, oymak2019moderate,zou2019improved} that achieve the best known bounds to our knowledge.
We emphasize that the factor $d$ in our bounds arises due to the fact that our network produces high-dimensional outputs (dimension $d$ in the case of autoencoders) while the previous works have focused on scalar outputs. Note also that the input dimension $d$ is implicitly hidden in $\lambda_0$ and $\lambda_n$.
For {weakly-trained} networks with a single output, we (slightly) improve the order of over-parameterization: $m = \Omega\left(\frac{n^5\lambda_n}{\lambda_0^4\delta^3} \right)$ over the previous bound $\Omega\left(\frac{n^6}{\lambda_0^4\delta^3} \right)$ in \citet[Theorem 3.2]{du2018_gradient} by explicitly exposing the role of the spectral norm $\lambda_n$ of the data.
For the {jointly-trained} regime, we obtain a significantly improved bound over \citet[Theorem 3.3]{du2018_gradient}. Our result is consistent with \citet[Theorem 6.3]{oymak2019moderate}, but we have both layers jointly trained; the proof technique in \citet[Theorem 6.3]{oymak2019moderate} is different from ours (bounding Jacobian perturbations),
and does not seem to be easily extended to the jointly trained case.
Let us better understand the intuition behind the bounds in Table \ref{tblResults} in terms of the dimension $d$ and the sample size $n$. We emphasize that in the fairly typical regime of machine learning where $n \ge d$ and $\lambda_n \sim n/d$, the level of over-parameterization for the single output is moderate (of order $n^4/d^3$). Since autoencoders have an output dimension $d$, the factor-$d$ in the bounds is natural in the jointly-trained case, which characterizes the trajectory length in Frobenius norm. This is consistent with the result in \cite{zou2019improved}. Our bound is different from that in \citet{zou2019improved} in that we make an assumption on the minimum eigenvalue $\lambda_0$ while they assume a lower bound on the sample separation $\Delta$. A direct universal comparison between the two bounds is difficult; however, \citet{oymak2019moderate} shows a lower bound $\lambda_0 \ge \Delta/100n^2$. Finally, we note that initializing $A$ with i.i.d. Rademacher entries keeps our analysis in line with previous work, and an extension to Gaussian random initialization of $A$ should be straightforward.
\begin{table}[h!]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Regime & Reference & Single output & Multiple output \\
\hline
\hline
\multirow{3}{*}{Weakly-trained} & \cite{du2018_gradient} & $C\frac{n^6}{\lambda_0^4\delta^3}$ & \ding{55} \\
\hhline{~---}
& This work & $C\frac{n^5\lambda_n}{\lambda_0^4\delta^3}$ & $C\frac{n^5d^4\lambda_n}{\lambda_0^4\delta^3}$ \\
\hhline{~---}
& \cite{oymak2019moderate} & $C\frac{n\lambda_n^3}{\lambda_0^4}$ & \ding{55} \\
\hline
\multirow{3}{*}{Joint-trained} & \cite{du2018_gradient} & $C\frac{n^6\log(m/\delta)}{\lambda_0^4\delta^3}$ & \ding{55} \\
\hhline{~---}
& \cite{zou2019improved} & $C\frac{n^8}{\Delta^4}$ & $C\frac{n^8d}{\Delta^4}$ \\
\hhline{~---}
& This work & $C\frac{n\lambda_n^3}{\lambda_0^4\delta^2}$ & $C\frac{nd\lambda^3_n}{\lambda_0^4\delta^2}$ \\
\hline
\end{tabular}
\vskip .1in
\caption{\small \sl Comparison of our over-parameterization bounds with the known results in \citep[Theorem 3.2 and Theorem 3.3]{du2018_gradient}, \citep[Theorem 6.3]{oymak2019moderate} and \citep[Table 1]{zou2019improved}. Here, $d$ is the input dimension, $n$ is the training size, $\lambda_0$ is the smallest eigenvalue of the Gram matrix, $\lambda_n$ is the maximum eigenvalue of the covariance matrix and $C$ is some sufficiently large constant. $\Delta$ is the smallest distance between any pair of distinct training points.
}
\label{tblResults}
\end{table}
\subsection{Inductive bias}
The following theorem establishes a result on the inductive bias of
the infinitely wide autoencoders trained with gradient descent.
\begin{Theorem}
\label{thmInformalBias}
Let $K^{\infty} = \mathbb{E}_{W(0), A(0)}[K(0)]$. Assume $\lambda_{\min}(K^{\infty}) > 0$ and let $\eta_{\mathrm{critical}} = 2(\lambda_{\max}(K^{\infty}) + \lambda_{\min}(K^{\infty}))^{-1}$. Under gradient descent with learning rate $\eta < \eta_{\mathrm{critical}}$, for every normalized $x \in \mathbb{R}^d$ as the width $m \rightarrow \infty$, the autoencoder output $f_t(x)$ at step $t$ converges to $\mu_t(x) + \gamma_t(x)$, with:
\begin{align*}
\mu_t(x) &\rightarrow \sum_{i=1}^n \Lambda_i x_i , \\
\gamma_t(x) &\rightarrow f_0(x) - \sum_{i=1}^n \Lambda_i f_0(x_i)
\end{align*}
where each $\Lambda_i \in \mathbb{R}^{d\times d}$ depends on the kernel score, under $K^{\infty}$, between the input $x$ and the training sample $x_i$, and $f_0(x)$ is the autoencoder reconstruction of $x$ at initialization.
\end{Theorem}
We prove this result in Section \ref{sec:asymptotic}. Essentially, Theorem \ref{thmInformalBias} generalizes the simple result in \citet[][Theorem 1]{zhang2019identity} to non-linear autoencoders and
multiple-sample training despite its asymptotic nature. The closer the new test input $x$ is to the span of training data $X$, the more its reconstruction concentrates around these seen points. This coincides with the observation about ``memorization'' by \citet{memorization_ae}.
\section{Useful Facts}
\label{apdx:facts}
\begin{Lemma}[Stein's Lemma]
\label{lm:stein}
For a random vector $w \in \mathbb{R}^{d}$ such that $w \sim \mathcal{N}(0, I)$ and a function $h(w) : \mathbb{R}^d \rightarrow \mathbb{R}^k$ that is weakly differentiable with Jacobian $D_w h$, we have
\[
\mathbb{E}_{w \sim \mathcal{N}(0, I)}\bigl[wh(w)^\top\bigr] = \mathbb{E}_{w \sim \mathcal{N}(0, I)}\bigl[(D_w h)^\top\bigr].
\]
\end{Lemma}
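As a quick numerical sanity check of Stein's Lemma (not used in any proof), the following minimal NumPy sketch estimates both sides of the identity for the illustrative choice $h(w) = \phi(w)$ applied elementwise, for which $D_w h = \mathrm{diag}(\mathbbm{1}[w \ge 0])$; the dimension, sample count, and random seed are arbitrary assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, n_samples = 4, 200_000
W = rng.standard_normal((n_samples, d))   # rows are i.i.d. draws of w ~ N(0, I)

H = np.maximum(W, 0.0)                    # h(w) = ReLU applied elementwise
lhs = (W[:, :, None] * H[:, None, :]).mean(axis=0)         # estimate of E[w h(w)^T]
# For elementwise ReLU, D_w h = diag(1[w >= 0]), so (D_w h)^T has the same form.
rhs = np.zeros((d, d))
rhs[np.arange(d), np.arange(d)] = (W >= 0).mean(axis=0)    # estimate of E[(D_w h)^T]
print(np.round(lhs, 2))                   # both matrices should be close to 0.5 * I
print(np.round(rhs, 2))
\end{verbatim}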
\begin{Lemma}
\label{lmSongClaim4.10}
Denote $S_i = \{r \in [m] : \mathbbm{1}[ w_r(k+1)^\top x_i \geq 0] = \mathbbm{1}[ w_r(k)^\top x_i \geq 0 ]\}$, and $S^{\bot}_i = [m] \backslash S_i$. If $\norm{w_r(k) - w_r(0)} \leq R$, then
\[
\sum_{r=1}^m \mathbbm{1}[r \in S^{\bot}_i] \leq 4mR
\]
with probability at least $1 - n\exp(-mR)$.
\end{Lemma}
This result is borrowed from the proof of \cite[Claim 4.10]{song2019quadratic}.
\section{Weakly-trained Autoencoders}
We now analyze various training regimes; these will follow from different compositions of the above NTKs. In each of the analyses, we will first set up the corresponding NTK, study the gradient dynamics with infinitesimal step size (gradient flow), and then appropriately discretize the flow to get our final results.
\subsection{Gradient flow}
\label{sec:gradflow_case1}
Consider the {weakly-trained} regime with the objective function:
\begin{equation}
\label{eqnEmpLoss_Case1}
L(W) = \frac{1}{2}\sum_{i=1}^n\norm{x_i - \frac{1}{\sqrt{md}}A\phi(W^\top x_i)}^2
\end{equation}
where the corresponding minimization is \emph{only} performed over $W$. Suppose that the weight matrices $W$ and $A$ are randomly initialized such that
\[
w_{ij}(0) \sim \mathcal{N}(0, 1), ~ a_{ij}(0) \sim \mathrm{Unif}(\{\pm 1\})
\]
are drawn independently for all $(i, j)$. After the initialization, we keep $A$ fixed throughout and apply gradient descent learning over $W$ with step size $\eta$:
\[
W(k+1) = W(k) - \eta \nabla_W L(W(k)),~k=0, 1, 2, \ldots .
\]
Let us derive the neural tangent kernel for this training regime. We first calculate the gradient of $L(W)$ with respect to $W$. Since $A\phi(W^\top x) = \sum_{r=1}^ma_r\phi(w_r^\top x)$ for any $x\in\mathbb{R}^d$, it is convenient to compute the gradient with respect to each column $w_r$. The gradient $\nabla_{w_r}L(W)$ of the loss in~\eqref{eqnEmpLoss_Case1} over $w_r$ is given by:
\begin{align*}
\nabla_{w_r}L &= - \sum_{i=1}^n J_{r}(u_{i}) ^\top (x_{i} - u_{i})
= -\frac{1}{\sqrt{md}} \sum_{i=1}^n \mathbbm{1}[w_r^\top x_i \ge 0]\,x_i a^\top_r (x_{i} - u_{i}), \numberthis
\label{eqnGradLossOverWr_Case1}
\end{align*}
where $J_{r}(u_i)$\footnote{Note that $\phi(z)$ is differentiable everywhere except at $z =0$, at which the derivative will be considered as 0.} denotes the Jacobian matrix of the output vector $u_i$ with respect to $w_r$:
\begin{align*}
J_{r}(u_i) &= \frac{1}{\sqrt{md}}a_rx^\top_i\phi'(w^\top_rx_i) = \frac{1}{\sqrt{md}}\mathbbm{1}[w_r^\top x_i \ge 0]a_rx^\top_i. \numberthis
\label{eqnJacobianUtowr_Case1}
\end{align*}
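For concreteness, the following minimal NumPy sketch implements the weakly-trained update rule above, using the closed-form gradient in \eqref{eqnGradLossOverWr_Case1} with $A$ held fixed; the toy sizes, step size, and random seed are illustrative assumptions and are far from the over-parameterization regime required by the theory.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, n, m = 8, 5, 512                       # toy sizes (hypothetical)
X = rng.standard_normal((d, n))
X /= np.linalg.norm(X, axis=0)            # unit-norm samples
W = rng.standard_normal((d, m))           # w_r ~ N(0, I)
A = rng.choice([-1.0, 1.0], size=(d, m))  # a_r ~ Unif({-1,1}^d), kept fixed

def forward(W):
    return (1.0 / np.sqrt(m * d)) * A @ np.maximum(W.T @ X, 0.0)   # U = [u_1..u_n]

def grad_W(W):
    U = forward(W)
    G = np.zeros_like(W)
    for r in range(m):
        act = (W[:, r] @ X >= 0).astype(float)     # 1[w_r^T x_i >= 0]
        coeff = act * (A[:, r] @ (X - U))          # a_r^T (x_i - u_i) per sample
        G[:, r] = -(1.0 / np.sqrt(m * d)) * X @ coeff
    return G

eta = 0.1
for k in range(301):
    if k % 100 == 0:
        print(k, 0.5 * np.linalg.norm(X - forward(W), 'fro') ** 2)  # loss decreases
    W -= eta * grad_W(W)
\end{verbatim}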
Let us consider the gradient flow for the weight vector $w_r(t)$ via the following ODE:
\begin{equation}
\label{eqnGradODE_Case1}
\frac{\mathrm{d} w_r(t)}{\mathrm{d}t} = -\nabla_{w_r}L(W(t)).
\end{equation}
Using~\eqref{eqnGradLossOverWr_Case1} and~\eqref{eqnGradODE_Case1}, the continuous-time dynamics of the prediction for each sample $i \in [n]$ is:
\begin{align*}
\frac{\mathrm{d} u_{i}}{\mathrm{d}t} &= \sum_{j=1}^n \left( \sum_{r=1}^m J_{r} (u_{i}) J_{r}^\top(u_{j})\right) (x_{j} - u_{j}).
\end{align*}
Vectorizing $\frac{dU(t)}{dt}$, we get the equation that characterizes the dynamics of $U(t)$:
\begin{align*}
\label{eqnPredDynamics_Case1}
\frac{\mathrm{d} \mathrm{vec}(U(t))}{\mathrm{d}t}
&= \frac{1}{d}K(t)\mathrm{vec}(X - U(t)), \numberthis
\end{align*}
where $K(t)$ is the $nd\times nd$ matrix whose $(i, j)$-block is of size $d\times d$ and defined as
\[
K(t)_{i, j} = d \sum_{r=1}^m J_{r} (u_{i}) J_{r}^\top(u_{j}) = \frac{1}{m}\sum_{r=1}^m\mathbbm{1}[w_r(t)^Tx_i \ge 0, w_r(t)^Tx_j \ge 0]x_{i}^\top x_{j}a_ra_r^\top.
\]
If we denote
\[
\widetilde{X}_r(t) = \Bigl[\mathbbm{1}[w_r(t)^\top x_{1} \ge 0]x_{1}, \dots, \mathbbm{1}[w_r(t)^\top x_{n} \ge 0]x_{n}\Bigr],
\]
then we can write $K(t)$ in Kronecker form:
\[
K(t) = \frac{1}{m}\sum_{r=1}^m \widetilde{X}_r(t)^\top \widetilde{X}_r(t) \otimes a_ra^\top_r.
\]
Since $W(0)$ and $A(0)$ are randomly initialized, in the limit as $m \rightarrow \infty$, $K(0)$ converges to the NTK:
\begin{align*}
K^\infty &= \mathbb{E}_{W(0), A(0)}[K(0)] \\&= \mathbb{E}_{w(0), a(0)}[\widetilde{X}(0)^\top \widetilde{X}(0) \otimes a(0) a(0)^\top] \\&= \mathbb{E}_w[\widetilde{X}^\top \widetilde{X}] \otimes I,
\end{align*}
where the last step follows from the independence of $w(0)$ and $a(0)$.
By Assumption~\ref{astMinEigen}, $\lambda_{\min}(K^\infty) = \lambda_{\min}(\mathbb{E}_w[\widetilde{X}^\top \widetilde{X}]) = \lambda_0 >0$. In other words, the NTK kernel is strictly positive definite. We want to bound the minimum eigenvalue of $K(0)$ at the initialization $W(0)$ and prove $K(t) \approx K(0) \approx K^\infty$ when $m$ is large enough.
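The kernel and its infinite-width limit can be examined numerically. The sketch below builds $K(0)$ from the Kronecker form above and compares it with a Monte-Carlo estimate of $K^\infty = \mathbb{E}_w[\widetilde{X}^\top \widetilde{X}] \otimes I$; all sizes and sample counts are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d, n, m = 6, 4, 10_000
X = rng.standard_normal((d, n)); X /= np.linalg.norm(X, axis=0)
W = rng.standard_normal((d, m))
A = rng.choice([-1.0, 1.0], size=(d, m))

K0 = np.zeros((n * d, n * d))
for r in range(m):
    mask = (W[:, r] @ X >= 0).astype(float)   # indicators 1[w_r^T x_i >= 0]
    Xt = X * mask                              # \tilde{X}_r, columns zeroed out
    K0 += np.kron(Xt.T @ Xt, np.outer(A[:, r], A[:, r]))
K0 /= m

# Monte-Carlo estimate of E_w[\tilde{X}^T \tilde{X}] with fresh Gaussian vectors
E, n_mc = np.zeros((n, n)), 50_000
for w in rng.standard_normal((n_mc, d)):
    mask = (w @ X >= 0).astype(float)
    Xt = X * mask
    E += Xt.T @ Xt
K_inf = np.kron(E / n_mc, np.eye(d))

print(np.linalg.norm(K0 - K_inf, 2))      # shrinks as m grows
print(np.linalg.eigvalsh(K_inf).min())    # numerical proxy for lambda_0
\end{verbatim}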
Now, we state the main theorem for the convergence of the gradient flow:
\begin{Theorem}[Linear convergence of gradient flow, weakly-trained regime]
\label{thmGradFlow_Case1}
Suppose Assumptions \ref{astUnitNorm} and \ref{astMinEigen} hold. Suppose at initialization the weights are independently drawn such that $w_r \sim \mathcal{N}(0, I)$ and $a_r \sim \mathrm{Unif}(\{\pm 1\}^d)$ for all $r \in [m]$. If $m \ge C\frac{n^5d^4\lambda_n}{\lambda_0^4\delta^3}$ for some large enough constant $C > 0$, then with probability at least $1-\delta$, for all $t \ge 0$,
\[
\norm{X - U(t)}_F^2 \leq \exp\Bigl(-\frac{\lambda_0 t}{d}\Bigr)\norm{X - U(0)}_F^2.
\]
\end{Theorem}
To prove this theorem, we use the auxiliary results from Lemmas \ref{lmMinEigenK0_Case1}, \ref{lmBoundKDiff_Case1} and \ref{lmMovementInTime_Case1}.
\begin{Lemma}
\label{lmMinEigenK0_Case1}
For any $\delta \in (0, 1)$, if $m \ge C\frac{\lambda_n^2d\log(nd/\delta)}{\lambda^2_0}$ for some large enough constant $C$,
then with probability at least $1 -\delta$,
one obtains $\norm{K(0) - K^\infty} \leq {\lambda_0}/{4}$ and $\lambda_{\min}(K(0)) \geq 3\lambda_0/4$ .
\end{Lemma}
The proof of this Lemma is given in Appendix \ref{appdx:weakly_train}.
\begin{Remark}
\normalfont
Compared with the results in~\citep{du2018_gradient, song2019quadratic}, our bound exposes the dependence on the data $X$ through the spectral norm of $X$ and the dimension $d$. When $\lambda_n$ is much smaller than $n$, our bound improves over these aforementioned results. For example, if the training samples are drawn from certain distributions (e.g., Gaussians, or from sparsely used dictionary models), the bound can be as low as $m \sim \widetilde{O}(d)$.
\end{Remark}
The next step in our analysis is to upper bound the spectral norm of the kernel perturbation, $\norm{K(t) - K(0)}$, with high probability.
\begin{Lemma}
Suppose $w_r \sim \mathcal{N}(0, I)$ and $a_r \sim \mathrm{Unif}(\{\pm 1\}^d)$ are drawn independently for all $r \in [m]$. For any $\delta \in (0, 1)$ and any $R >0$, with probability at least $1-\delta$:
\begin{equation}
\label{eqn:suptwK}
\sup_{\substack{\{\widetilde{w}=(\widetilde{w}_1,\dots,\widetilde{w}_m):\norm{\widetilde{w}_r - w_r} \leq R \\ ~\forall r \in [m]\}}} \norm{K(\widetilde{w}) - K(w)} < \frac{2n^2dR}{\delta},
\end{equation}
where $K(w) = \frac{1}{m}\sum_{r=1}^m\widetilde{X}(w_r)^\top \widetilde{X}(w_r) \otimes a_ra_r^\top$.
\label{lmBoundKDiff_Case1}
\end{Lemma}
\begin{Remark}
\normalfont
One may ask why we do not directly bound $K(t) - K(0)$ for each time $t$, and instead take the supremum over a ball around each $w_r$. The reason is that $w_r(t)$ depends on $W(0)$ and $A(0)$, so directly working with $K(t)-K(0)$ is difficult. The uniform bound \eqref{eqn:suptwK} allows us to overcome this dependence when applied to $K(t)-K(0)$.
\end{Remark}
Note that in this lemma we use $K(w)$ to indicate that the kernel $K$ is evaluated at the weight vectors $w_r$, ignoring the time index $t$; likewise, $\widetilde{X}(w_r)$ denotes $\widetilde{X}_r$ evaluated at $w_r$.
\proof For simplicity of notation, we use $\sup_{\widetilde{w}}$ to represent the supremum in \eqref{eqn:suptwK}, and $\sup_{\widetilde{w}_r}$ to represent
$\sup_{\{\widetilde{w}_r:\|\tilde{w}_r -w_r\|\le R\}}$. To prove this lemma, we work on the Frobenius norm instead of the spectral norm. Let us first write
\[
z_{ijr} = \mathbbm{1}[\widetilde{w}_r^\top x_i \geq 0, \widetilde{w}_r^\top x_j \geq 0] -\mathbbm{1}[w^\top_r x_i \geq 0, w^\top_r x_j \geq 0].
\]
Next,
\begin{align*}
\norm{K(\widetilde{w}) - K(w)}^2 \leq \norm{K(\widetilde{w}) - K(w)}_F^2
&= \frac{1}{m^2} \sum_{i,j=1}^n \norm{x^\top_ix_j \sum_{r=1}^m z_{ijr} a_ra^\top_r}_F^2 \\
&\leq \frac{1}{m^2} \left(\sum_{i, j=1}^n \norm[\Big]{\sum_{r=1}^m z_{ijr} a_ra^\top_r}_F \right)^2.
\end{align*}
The last step follows from the fact that $|x^\top_ix_j| \leq 1$ by the Cauchy--Schwarz inequality and the unit norm of the samples. Therefore,
\begin{align*}
\sup_{\widetilde{w}} \norm{K(\widetilde{w}) - K(w)} &\leq \frac{1}{m} \sup_{\widetilde{w}} \sum_{i, j=1}^n \norm[\Big]{\sum_{r=1}^m z_{ijr} a_ra^\top_r}_F \\
&\leq \frac{1}{m} \sup_{\widetilde{w}} \sum_{i, j=1}^n \sum_{r=1}^m |z_{ijr}| \norm{a_ra^\top_r}_F \\
&\leq \frac{d}{m} \sum_{i, j=1}^n \sum_{r=1}^m \sup_{\widetilde{w}_r}|z_{ijr}|,
\end{align*}
since $\norm{a_ra^\top_r}_F = \norm{a_r}^2 = d$. Now we take the expectation over the random vectors $w_r$ on both sides:
\begin{align*}
\mathbb{E}_{w}[\sup_{\widetilde{w}} \norm{K(\widetilde{w}) - K(w)}] &\leq \frac{d}{m} \sum_{i, j=1}^n \sum_{r=1}^m \mathbb{E}_{w}[\sup_{\widetilde{w}_r} |z_{ijr}|].
\end{align*}
Next, we bound $\mathbb{E}_{w}[\sup_{\widetilde{w}_r} |z_{ijr}|]$. By definition of $z_{ijr}$,
\begin{align*}
\label{eqnBoundzijr}
|z_{ijr}| &= |\mathbbm{1}[w^\top_r x_i \geq 0, w^\top_r x_j \geq 0] - \mathbbm{1}[\widetilde{w}_r^\top x_i \geq 0, \widetilde{w}_r^\top x_j \geq 0]| \\
&\leq |\mathbbm{1}[w^\top_r x_i \geq 0] - \mathbbm{1}[\widetilde{w}_r^\top x_i \geq 0]| + |\mathbbm{1}[w^\top_r x_j \geq 0] - \mathbbm{1}[\widetilde{w}_r^\top x_j \geq 0]| \\
&\leq \mathbbm{1}[|w^\top_r x_i| \leq R] + \mathbbm{1}[|w^\top_r x_j| \leq R]. \numberthis
\end{align*}
The last step follows from the results in \citep[Lemma 3.2]{du2018_gradient}. So we get
\begin{align*}
\mathbb{E}_{w}[\sup_{\widetilde{w}_r} |z_{ijr}|] &\leq \mathbb{E}_{w}[\mathbbm{1}[|w^\top_r x_i| \leq R] + \mathbbm{1}[|w^\top_r x_j| \leq R]] \\
&= 2\mathbb{P}_{z \sim \mathcal{N}(0, 1)}[|z| < R] \leq \frac{4R}{\sqrt{2\pi}} < 2R.
\end{align*}
Therefore,
\begin{equation*}
\mathbb{E}_{w}[\sup_{\widetilde{w}} \norm{K(\widetilde{w}) - K(w)}] < 2n^2dR.
\end{equation*}
Finally, by Markov's inequality, with probability at least $1-\delta$:
\begin{equation*}
\sup_{\widetilde{w}} \norm{K(\widetilde{w}) - K(w)} < \frac{2n^2dR}{\delta}.
\end{equation*}
\qedhere
\begin{Corollary}
\label{corMinEigenKt_Case1}
Suppose $\norm{w_r(t) - w_r(0)} \leq R \triangleq \frac{\lambda_0\delta}{8n^2d }$ for all $r \in [m] $ and $t \ge 0$ with probability at least $1-\delta$. We have
\[
\lambda_{\min}(K(t)) > \frac{\lambda_0}{2}
\]
with probability at least $1-3\delta$
if $m \ge C\frac{\lambda_n^2d\log(nd/\delta)}{\lambda^2_0}$.
\end{Corollary}
\proof
This is the direct consequence of Lemma \ref{lmMinEigenK0_Case1} and Lemma \ref{lmBoundKDiff_Case1}.
Since $\norm{w_r(t) - w_r(0)} \leq R = \frac{\lambda_0\delta}{8n^2d }$ with probability at least $1-\delta$ for all $t \geq 0$, then
\[
\norm{ K(t) - K(0)}
< \frac{2n^2dR}{\delta} = \frac{\lambda_0}{4}
\]
with probability at least $1-2\delta$. Using Weyl's inequality, we can bound:
\[
\lambda_{\min}(K(t)) \geq \lambda_{\min}(K(0)) - \norm{K(t) - K(0)} > \lambda_0/2
\]
with probability at least $1-3\delta$
if $m \ge C\frac{\lambda_n^2d\log(nd/\delta)}{\lambda^2_0}$
as stated in Lemma \ref{lmMinEigenK0_Case1}.
\qedhere
In what follows, we show that $\norm{w_r(t) - w_r(0)} \leq R$ with high probability if $m$ is sufficiently large.
\begin{Lemma}
Fix $t > 0$. Suppose $\lambda_{\min}(K(s)) \geq \lambda_0/2$ for all $0 \le s < t$. Then, for all $0 \le s < t$,
\[
\norm{X - U(s)}_F^2 \le \exp\left(-\frac{\lambda_0 s}{d}\right) \norm{X - U(0)}_F^2.
\]
Also, for each $r = 1, 2, \dots, m$:
\[
\norm{w_r(t) - w_r(0)} \le \frac{d\sqrt{\lambda_n}\norm{X - U(0)}_F}{\sqrt{m}\lambda_0} \triangleq R'.
\]
\label{lmMovementInTime_Case1}
\end{Lemma}
\proof
For all $s \in [0, t)$, we have
\begin{align*}
\frac{d}{ds} \norm{\mathrm{vec}(X - U(s))}_2^2 &= -2\mathrm{vec}(X - U(s))^\top \frac{1}{d} K(s)\mathrm{vec}(X - U(s)) \nonumber \\
&\le -\frac{2}{d}\lambda_{\min}(K(s))\norm{\mathrm{vec}(X - U(s))}^2 \\
&\le -\frac{\lambda_0}{d}\norm{X - U(s)}_F^2
\end{align*}
by the assumption $\lambda_{\min}(K(s)) \ge \lambda_0/2$. Therefore, the loss at time $s$ is upper-bounded by
\begin{align*}
\label{eqnLossDecayWeakly}
\norm{X - U(s)}_F^2 &= \norm{\mathrm{vec}(X - U(s))}^2 \\ &\le \exp \Bigl(-\frac{\lambda_0s}{d} \Bigr) \norm{\mathrm{vec}(X - U(0))}^2 \\
&\le \exp \Bigl (-\frac{\lambda_0s}{d} \Bigr) \norm{X - U(0)}_F^2, \numberthis
\end{align*}
which decays exponentially with time $s$ at rate $\lambda_0/d$.
To upper bound the movement of the weights $\norm{w_r(t) - w_r(0)}$, we use the above result while expanding the derivative of $w_r(s)$ over time $0 \le s < t$:
\begin{align*}
\norm[\bigg]{\frac{\mathrm{d}}{\mathrm{d} s}w_r(s)} &= \norm[\bigg]{-\nabla_{w_r}L(W(s))} \\
&= \norm[\bigg]{\frac{1}{\sqrt{md}}\sum_{i=1}^n\mathbbm{1}[w^\top_rx_i \ge 0]x_ia^\top_r(x_i - u_i(s))} \\
&= \norm[\bigg]{\frac{1}{\sqrt{md}}\widetilde{X}_r (X - U(s))^\top a_r} \\
&\le \frac{\norm{X}\norm{a_r}}{\sqrt{md}} \norm{X - U(s)}_F \\
&\leq \sqrt{\frac{\lambda_n}{m}} \exp\Bigl(-\lambda_0s/d \Bigr) \norm{X - U(0)}_F,
\end{align*}
where the last step follows from $\norm{a_r}^2 = d, \norm{X}^2 = \lambda_n$ and Eq. \eqref{eqnLossDecayWeakly}.
From the differential equation, $w_r(s)$ is continuous for all $s \in [0, t)$, and so is $\norm{w_r(s) - w_r(0)}$. Consequently, we can take the limit for $t' \rightarrow t$:
\begin{align*}
\norm{w_r(t) - w_r(0)}_2 &= \lim_{t' \rightarrow t} \norm{w_r(t') - w_r(0)}_2 \le \lim_{t' \rightarrow t} \int_{0}^{t'} \norm[\bigg]{\frac{\mathrm{d}}{\mathrm{d} s}w_r(s)}ds \\ &\le \lim_{t' \rightarrow t} \int_{0}^{t'} \frac{\sqrt{\lambda_n}\exp\Bigl(-\lambda_0s/d \Bigr)\norm{X - U(0)}_F}{\sqrt{m}}ds \\
&\le \frac{d\sqrt{\lambda_n}\norm{X - U(0)}_F}{\sqrt{m}\lambda_0} \triangleq R',
\end{align*}
since $\exp\bigl(-\lambda_0s/d \bigr)$ is continuous at $s=t$. This completes the proof.
\qedhere
\begin{Lemma}
If $R' < R$, then $\lambda_{\min}(K(t)) \geq \frac{1}{2}\lambda_0$
for all $t \geq 0$. Moreover,
$
\norm{w_r(t) - w_r(0)} \leq R'
$
and
$
\norm{X - U(t)}_F^2 \le \exp(-\frac{\lambda_0 t}{d}) \norm{X - U(0)}_F^2
$
for all $r \in [m]$.
\label{lmConvergence_Case1}
\end{Lemma}
\proof
We will prove this by contradiction. Assume the conclusion does not hold, meaning there exists $t_0$ such that:
\begin{equation*}
t_0 = \inf \left \{ t > 0: \lambda_{\min}(K (t) ) \leq \lambda_0/2 \right \}.
\end{equation*}
We first argue that $t_0 > 0$ by continuity. Since $w_r(t)$ is continuous in $t$, $K(t)$ and $\lambda_{\min}(K(t))$ are also continuous. Therefore, for any $0 < \epsilon < \lambda_0/4$, there exists $t' > 0$ such that
\[
\lambda_{\min}(K(t')) > \lambda_{\min}(K(0)) - \epsilon > \lambda_0/2.
\]
Since $t_0 > 0$, then for any $0 \le s < t_0$, $\lambda_{\min}(K (s) ) \ge \lambda_0/2$. By Lemma \ref{lmMovementInTime_Case1}, we have for all $r \in [m]$:
\[
\norm{w_r(t_0) - w_r(0)} \leq R' < R.
\]
Corollary \ref{corMinEigenKt_Case1} implies that $\lambda_{\min}(K(t_0)) > \lambda_0/2$, which is a contradiction.
Therefore, we have proved the first part. For the second part, we have for all $t \geq 0$, $\lambda_{\min}(K(t)) \geq \frac{1}{2}\lambda_0$ and it follows from Lemma \ref{lmMovementInTime_Case1} that:
$
\norm{w_r(t) - w_r(0)} \leq R'
$
for all $r \in [m]$ and
$
\norm{X - U(t)}_F^2 \le \exp(-\frac{\lambda_0 t}{d}) \norm{X - U(0)}_F^2.
$
\qedhere
Now, we bound $\norm{X - U(0)}_F$ to upper bound $R'$.
\begin{Claim}
For any $\delta \in (0, 1)$, $\norm{X - U(0)}_F^2 \leq \frac{2n}{\delta}$ with probability at least $1-\delta$.
\label{clInitialLoss_Case1}
\end{Claim}
\proof We prove this using Markov's inequality. We use the independence between $A(0)$ and $W(0)$ to derive expressions for the expectation. In this proof, the expectations are evaluated over $W(0)$ and $A(0)$.
\begin{align*}
\mathbb{E}[ \norm{X - U(0)}_F^2] &= \norm{X}_F^2 + \frac{1}{md}\mathbb{E}[\norm{A(0)\phi(W(0)^\top X)}_F^2] \\
&= n + \frac{1}{md}\mathbb{E}[\textrm{trace}(\phi(X^\top W(0))A(0)^\top A(0)\phi(W(0)^\top X))] \\
&= n + \frac{1}{md}\textrm{trace}(\mathbb{E}[\phi(X^\top W(0))A(0)^\top A(0) \phi(W(0)^\top X)]) \\
&= n + \frac{1}{m}\textrm{trace}(\mathbb{E}[\phi(X^\top W(0))\phi(W(0)^\top X)]) \\
&= n + \sum_{i=1}^n\mathbb{E}_w[\phi(w^\top x_i)^2] \\
&= n + n\mathbb{E}_{z \sim \mathcal{N}(0, 1)}[z^2\mathbbm{1}[z \geq 0]]= \frac{3n}{2},
\end{align*}
where the first step uses $\mathbb{E}[A(0)] = 0$ together with the independence of $A(0)$ and $W(0)$ (so the cross term vanishes), the fourth step uses $\mathbb{E}[A(0)^\top A(0)] = dI$, the fifth step uses that the columns of $W(0)$ are i.i.d., and the last step uses $w^\top x_i \sim \mathcal{N}(0, 1)$ for unit-norm $x_i$. Using Markov's inequality, we get:
\[
\norm{X - U(0)}_F^2 \leq \frac{2n}{\delta}
\]
with probability at least $1-\delta$.
\qedhere
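As a side check (not used in the argument), the exact value $\mathbb{E}[\norm{X - U(0)}_F^2] = 3n/2$ computed above can be verified by simulation; the following NumPy sketch uses arbitrary toy sizes.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
d, n, m, trials = 6, 5, 256, 2000
X = rng.standard_normal((d, n)); X /= np.linalg.norm(X, axis=0)

vals = []
for _ in range(trials):
    W = rng.standard_normal((d, m))                   # W(0)
    A = rng.choice([-1.0, 1.0], size=(d, m))          # A(0)
    U0 = (1.0 / np.sqrt(m * d)) * A @ np.maximum(W.T @ X, 0.0)
    vals.append(np.linalg.norm(X - U0, 'fro') ** 2)
print(np.mean(vals), 1.5 * n)   # empirical mean should be close to 3n/2
\end{verbatim}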
\proof[Proof of Theorem~\ref{thmGradFlow_Case1}] If the following condition holds
\[
R' = \frac{d\sqrt{\lambda_n}\norm{X - U(0)}_F}{\sqrt{m}\lambda_0} \leq R = \frac{\delta\lambda_0}{8n^2d},
\]
then the conclusion of Lemma \ref{lmConvergence_Case1} holds. Using this condition with the bound $\norm{X - U(0)}_F \leq \sqrt{2n/\delta}$ in Claim \ref{clInitialLoss_Case1}, we obtain $m = \Omega\left( \frac{n^5d^4\lambda_n}{\lambda_0^4\delta^3}\right)$. This bound dominates the order of $m$ required for the concentration of $K(0)$ in Corollary \ref{corMinEigenKt_Case1}, and therefore Theorem~\ref{thmGradFlow_Case1} follows.
\qedhere
\subsection{Gradient descent }
\label{sec:gradient-descent}
The above result for gradient flow can be viewed as a convergence rate for gradient descent in the weakly-trained regime with infinitesimally small step size. We now derive a convergence rate for gradient descent with finite step sizes.
\begin{Theorem}
\label{thmGradDescent_Case1}
Suppose Assumptions~\ref{astUnitNorm} and \ref{astMinEigen} hold. The initial weights are independently drawn such that $w_r \sim \mathcal{N}(0, I)$ and $a_r \sim \mathrm{Unif}(\{\pm 1\}^d)$ for all $r \in [m]$. If $m \ge C\frac{n^5d^4\lambda_n}{\lambda_0^4\delta^3}$ for some large enough constant $C$, then with probability at least $1-\delta$, gradient descent on $W$ with step size $\eta = \Theta(\frac{\lambda_0}{nd\lambda_n})$ satisfies
\begin{equation}
\label{eqnGDConvergence_Case1}
\norm{X - U(k)}_F^2 \leq \left(1-\frac{\eta \lambda_0}{2d}\right)^k \norm{X - U(0)}_F^2
\end{equation}
for $k=0, 1, \dots$
\end{Theorem}
We will prove Theorem \ref{thmGradDescent_Case1} by induction. The base case when $k = 0$ is trivially true. Assume Eq. \eqref{eqnGDConvergence_Case1} holds for $k'=0, 1, \dots, k$, then we show it holds for $k' = k+1$. To this end, we first prove $\norm{w_r(k+1) - w_r(0)}$ is small enough; then we use that property to bound $\norm{X - U(k+1)}_F^2$.
\begin{Lemma}
\label{lmMovementInstep_Case1}
If \eqref{eqnGDConvergence_Case1} holds for $k'=0, 1, \dots, k$, then we have for all $r \in [m]$,
\begin{align*}
\norm{w_r(k+1) - w_r(0)} \leq \frac{4d\sqrt{\lambda_n}\norm{X - U(0)}_F}{\sqrt{m} \lambda_0} \triangleq R'.
\end{align*}
\end{Lemma}
\proof We use the expression of the gradient in \eqref{eqnGradLossOverWr_Case1}, which is:
\begin{align*}
\nabla_{w_r}L(W(k)) & = -\sum_{i=1}^n \frac{1}{\sqrt{md}}\mathbbm{1}[w_r(k)^Tx_i \ge 0]x_ia^\top_r (x_{i} - u_{i}(k)) \\ & = - \frac{1}{\sqrt{md}}\widetilde{X}_r(k)(X - U(k))^\top a_r.
\end{align*}
Then, the difference of the weight vector $w_r$ is:
\begin{align*}
\norm{w_r(k+1) - w_r(0)} &= \eta \norm[\Big]{\sum_{k'=0}^k\nabla_{w_r}L(W(k'))} \\
&= \eta \norm[\bigg]{\sum_{k'=0}^k\frac{1}{\sqrt{md}}\widetilde{X}_r(k')(X - U(k'))^\top a_r} \\
&\le \eta \frac{\norm{X}}{\sqrt{md}}\sum_{k'=0}^k\norm{X - U(k')}_F\norm{a_r} \\
&\le \eta \frac{\sqrt{\lambda_n}}{\sqrt{m}}\sum_{k'=0}^k\Bigl(1 - \frac{\eta \lambda_0}{2d} \Bigr)^{k'/2}\norm{X - U(0)}_F \\
&\le \eta \frac{\sqrt{\lambda_n}}{\sqrt{m}}\norm{X - U(0)}_F\sum_{k'=0}^\infty\Bigl(1 - \frac{\eta \lambda_0}{2d} \Bigr)^{k'/2} \\
&= \eta \frac{\sqrt{\lambda_n}}{\sqrt{m}}\norm{X - U(0)}_F \frac{1}{\eta \lambda_0/(4d)} \\
&= \frac{4d\sqrt{\lambda_n}\norm{X - U(0)}_F}{\sqrt{m}\lambda_0},
\end{align*}
where the third step and the fourth step follow from the facts that $\norm{\widetilde{X}_r(k')} \leq \norm{X} = \sqrt{\lambda_n}$ and $\norm{a_r} = \sqrt{d}$. The last step follows because $\sum_{i=0}^\infty (1-\eta\lambda_0/(2d))^{i/2} \le \frac{4d}{\eta \lambda_0}$.
\qedhere
Now, let us derive the form of $X - U(k+1)$. First, we compute the difference of the prediction between two consecutive steps, similar to deriving $\frac{du_i(t)}{dt}$. For each $i \in [n]$, we have
\begin{align*}
\label{eqnDiffui_Case1}
&u_i(k+1) - u_i(k) = \frac{1}{\sqrt{md}} \sum_{r=1}^m a_r \left( \phi( w_r(k+1)^\top x_i ) - \phi(w_r(k)^\top x_i ) \right) \\
&= \frac{1}{\sqrt{md}} \sum_{r=1}^m a_r \left( \phi \left( \Big( w_r(k) - \eta \nabla_{w_r}L(W(k)) \Big)^\top x_i \right) - \phi ( w_r(k)^\top x_i ) \right). \numberthis
\end{align*}
We split the right hand side into two parts: $v_{1,i}$ collects the terms whose activation pattern does not change, and $v_{2,i}$ collects the remaining terms whose pattern may change. Formally speaking, for each $i \in [n]$, we define
\[
S_i = \{r \in [m] : \mathbbm{1}[ w_r(k+1)^\top x_i \geq 0] = \mathbbm{1}[ w_r(k)^\top x_i \geq 0 ] \}, ~ \text{and } S^{\bot}_i = [m] \backslash S_i.
\]
Then, we can formally define $v_{1,i}$ and $v_{2,i}$ as follows:
\begin{align*}
&v_{1,i} \triangleq \frac{1}{ \sqrt{md} } \sum_{r \in S_i} a_r \left( \phi \left( \Big( w_r(k) - \eta \nabla_{w_r}L(W(k)) \Big)^\top x_i \right) - \phi( w_r(k)^\top x_i ) \right), \\
&v_{2,i} \triangleq \frac{1}{ \sqrt{md} } \sum_{r \in S^{\bot}_i} a_r \left( \phi \left( \Big( w_r(k) - \eta \nabla_{w_r}L(W(k)) \Big)^\top x_i \right) - \phi( w_r(k)^\top x_i ) \right) .
\end{align*}
We write $v_1 = (v_{1,1}^\top, v_{1,2}^\top, \dots, v_{1,n}^\top)^\top$ and do the same for $v_2$, so
\begin{align*}
\mathrm{vec}(U(k+1) - U(k)) = v_1 + v_2 .
\end{align*}
In order to analyze $v_1 \in \mathbb{R}^{nd}$, we define $K$ and $K^{\bot} \in \mathbb{R}^{nd \times nd}$ as follows:
\begin{align*}
K(k)_{i,j} = & ~ \frac{1}{m} \sum_{r=1}^m x_i^\top x_j \mathbbm{1}[{ w_r(k)^\top x_i \geq 0, w_r(k)^\top x_j \geq 0 }] a_ra^\top_r, \\
K(k)^{\bot}_{i,j} = & ~ \frac{1}{m} \sum_{r\in S^{\bot}_i} x_i^\top x_j \mathbbm{1}[ w_r(k)^\top x_i \geq 0, w_r(k)^\top x_j \geq 0]a_ra^\top_r.
\end{align*}
Next, we write $\phi(z)= z\mathbbm{1}[z \geq 0]$ to make use of the definition of $S_i$ and expand the form of $\nabla_{w_r}L(W(k))$:
\begin{align*}
v_{1,i}
&= \frac{1}{\sqrt{md}} \sum_{r \in S_i} a_r \Big(- \eta \nabla_{w_r}L(W(k)) \Big)^\top x_i \mathbbm{1}[ w_r(k)^\top x_i \geq 0] \\
&= \frac{\eta}{md} \sum_{j=1}^n x^\top_ix_j \sum_{r \in S_i} \mathbbm{1}[{ w_r(k)^\top x_i \geq 0 , w_r(k)^\top x_j \geq 0 }]a_r a^\top_r (x_j - u_j) \\
&= \frac{\eta}{d} \sum_{j=1}^n ( K_{i,j}(k) - K_{i,j}^{\bot}(k) ) (x_j - u_j).
\end{align*}
Then, we can write $v_1$ as:
\begin{align}
\label{eqnV1compact}
v_1 = \frac{\eta}{d} (K(k) - K^{\bot}(k)) \mathrm{vec}(X - U(k)),
\end{align}
and expand $\| X - U(k+1) \|_F^2$:
\begin{align*}
\norm{ X - U(k+1)}_F^2
&= \norm{ \mathrm{vec}(X - U(k+1))}^2 \\
&= \norm{ \mathrm{vec}(X - U(k)) - \mathrm{vec}(U(k+1) - U(k)) }_F^2 \\
&= \norm{X - U(k)}_F^2 - 2\mathrm{vec}(X - U(k))^\top \mathrm{vec}(U(k+1) - U(k)) \\
&~~+ \norm{U(k+1) - U(k)}_F^2 .
\end{align*}
We can further expand the second term above using \eqref{eqnV1compact} as below:
\begin{align*}
\mathrm{vec}(X - U(k))^\top &\mathrm{vec}(U(k+1) - U(k)) \\
=&~ \mathrm{vec}( X - U(k) )^\top ( v_1 + v_2 ) \\
=&~ \mathrm{vec}( X - U(k) )^\top v_1 + \mathrm{vec}( X - U(k) )^\top v_2 \\
=&~ \frac{\eta}{d} \mathrm{vec}( X - U(k) )^\top K(k) \mathrm{vec}( X - U (k) ) - \frac{\eta}{d} \mathrm{vec}( X - U(k) )^\top K(k)^{\bot} \mathrm{vec}( X - U(k) ) \\ &~+ \mathrm{vec}( X - U(k) )^\top v_2.
\end{align*}
We define the following quantities and bound them in Claims~\ref{clmC1_Case1}, \ref{clmC2_Case1}, \ref{clmC3_Case1} and \ref{clmC4_Case1}.
\begin{align*}
C_1 = & ~ -\frac{2\eta}{d} \mathrm{vec}( X - U(k) )^\top K(k) \mathrm{vec}( X - U (k) ) , \\
C_2 = & ~ \frac{2\eta}{d} \mathrm{vec}( X - U(k) )^\top K(k)^{\bot} \mathrm{vec}( X - U(k) ) , \\
C_3 = & ~ -2\mathrm{vec}( X - U(k) )^\top v_2 , \\
C_4 = & ~ \norm{U(k+1) - U(k)}_F^2 .
\end{align*}
\proof[Proof of Theorem \ref{thmGradDescent_Case1}] We are now ready to prove the induction hypothesis. We need to show that
\[
\norm{X - U(k')}_F^2 \leq (1-\frac{\eta \lambda_0}{2d})^{k'} \norm{X - U(0)}_F^2
\]
holds for $k'=k+1$ with probability at least $1-\delta$. In fact,
\begin{align*}
\norm{ X - U(k+1) }_F^2
&= \norm{ X - U(k) }_F^2 + C_1 + C_2 + C_3 + C_4 \\
&\leq \norm{ X - U(k) }_F^2 \left( 1 - \frac{\eta \lambda_0}{d} + 8 \eta n R + 8 \eta n R + \eta^2 n\lambda_n \right) ,
\end{align*}
with probability at least $1-\delta$ where the last step follows from Claim~\ref{clmC1_Case1}, \ref{clmC2_Case1}, \ref{clmC3_Case1}, and \ref{clmC4_Case1}.
\paragraph{Choice of $\eta$ and $R$.}
We need to choose $\eta$ and $R$ such that
\begin{align}\label{eq:choice_of_eta_R}
( 1 - \frac{\eta \lambda_0}{d} + 8 \eta n R + 8 \eta n R + \eta^2 n\lambda_n ) \leq 1-\frac{\eta\lambda_0}{2d} .
\end{align}
If we set $\eta=\frac{\lambda_0 }{4nd\lambda_n}$ and $R=\frac{\lambda_0}{64nd}$, we have
\begin{align*}
8 \eta n R + 8 \eta n R =16\eta nR \leq \frac{\eta \lambda_0} {4d} ,
\mathrm{~~~and~~~} \eta^2 n\lambda_n \leq \frac{\eta \lambda_0}{4d}.
\end{align*}
Finally,
\begin{align*}
\norm{ X - U(k+1) }_F^2 \leq \left( 1 - \frac{\eta \lambda_0}{2d} \right) \norm{ X - U(k) }_F^2
\end{align*}
holds with probability at least $1-\delta$ if $2n\exp(-mR) \leq \delta/3$.
\paragraph{Lower bound on the level of over-parameterization $m$.}
We require for any $\delta \in (0, 1)$ that
\begin{align*}
R'= \frac{4d\sqrt{\lambda_n}\norm{ X - U(0) }_F}{\sqrt{m}\lambda_0} < R = \min\left\{ \frac{\lambda_0}{64nd}, \frac{\lambda_0 \delta}{2n^2d} \right\},
\end{align*}
where the first bound on $R$ comes from the gradient descent analysis whereas the second is required in Lemma \ref{lmBoundKDiff_Case1}. By Claim \ref{clInitialLoss_Case1}, $\norm{X - U(0)}_F \leq \sqrt{\frac{2n}{\delta}}$ with probability at least $1-\delta$; hence we require
\[
m \geq C\frac{n^5\lambda_nd^4}{\lambda^4_0\delta^3} ,
\]
for a sufficiently large constant $C>0$ so that the descent holds with probability $1-\delta$.
\qedhere
We give proofs for Claims \ref{clmC1_Case1}, \ref{clmC2_Case1}, \ref{clmC3_Case1}, and \ref{clmC4_Case1} in Appendix \ref{appdx:weakly_train}.
\section{Weight-tied Autoencoders}
\label{sec:scaling}
We conclude with the case of training two-layer autoencoders whose weights are shared (i.e., $A = W$). This is a common architectural choice in practice, and indeed previous theoretical analyses of autoencoders~\citep{tran17,nguyen2019_ae, li2018_randomAE} have focused on this setting. We will show that, somewhat surprisingly, allowing the network to be over-parameterized in this setting leads to certain degeneracies. First, we prove:
\begin{Lemma}
\label{lmExpectedInitLoss_shared}
Let $x$ be any fixed sample, and suppose the weights $W$ are randomly initialized such that $w_r \sim \mathcal{N}(0, \sigma^2I)$ independently for $r=1, 2, \dots, m$. Then
\[
\mathbb{E}_{w_r \sim \mathcal{N}(0, \sigma^2I), \forall r}[\norm{x - \frac{1}{m}W\phi(W^\top x)}^2] = \left(\frac{\sigma^2}{2} - 1\right)^2 \norm{x}^2 + \frac{(2d+3)\norm{\sigma^2 x}^2}{4m}.
\]
Particularly, when $\norm{x}=1$, $\sigma^2=2$, then
\[
\mathbb{E}_{w_r \sim \mathcal{N}(0, \sigma^2I), \forall r}[\norm{x - \frac{1}{m}W\phi(W^\top x)}^2] = \frac{2d+3}{m}.
\]
For an arbitrarily small $\epsilon > 0$, the expected reconstruction loss is at most $\epsilon$ if $m = \Omega(d/\epsilon)$.
\end{Lemma}
\begin{Remark}
\normalfont This Lemma has a few interesting implications. First, when $\sigma^2 = 2$, then
\[
\mathbb{E}_{w_r \sim \mathcal{N}(0, 2I), \forall r}[\norm{x - u}^2] = \frac{(2d+3)\norm{x}^2}{m},
\]
which does not exceed $\epsilon$ if $m \ge 3d/\epsilon$ for $\epsilon > 0$. Provided that the data samples are normalized, if $m$ is sufficiently large, even with random initialization the reconstruction loss is very close to zero \emph{without any need for training}. Therefore, mere over-parameterization already gets us to near-zero loss: the autoencoder mapping satisfies $\frac{1}{m} W\phi(W^\top x) \approx x$ for any unit-norm $x$. It suggests that training of weight-tied autoencoders under high levels of over-parameterization may be degenerate.
\end{Remark}
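Before turning to the proof, the closed form in Lemma \ref{lmExpectedInitLoss_shared} can be checked by simulation. The NumPy sketch below compares the empirical reconstruction loss at random initialization against the formula; the sizes $d$, $m$ and the number of trials are arbitrary illustrative assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
d, m, sigma2, trials = 10, 200, 2.0, 20_000
x = rng.standard_normal(d); x /= np.linalg.norm(x)     # unit-norm sample

losses = []
for _ in range(trials):
    W = np.sqrt(sigma2) * rng.standard_normal((d, m))  # w_r ~ N(0, sigma^2 I)
    u = (1.0 / m) * W @ np.maximum(W.T @ x, 0.0)       # weight-tied reconstruction
    losses.append(np.sum((x - u) ** 2))

pred = (sigma2 / 2 - 1) ** 2 + (2 * d + 3) * sigma2 ** 2 / (4 * m)
print(np.mean(losses), pred)   # with sigma^2 = 2 this reduces to (2d+3)/m
\end{verbatim}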
\proof We will use $\mathbb{E}_{w_r}$ as a shorthand for $\mathbb{E}_{w_r \sim \mathcal{N}(0, \sigma^2I)}$. We expand the reconstruction loss:
\begin{align}
\label{eqn:loss_at_W0}
\norm{x - \frac{1}{m}W\phi(W^\top x)}^2 &= \norm{x - \frac{1}{m}\sum_{r=1}^m \phi(w_r^\top x)w_r}^2 \nonumber \\
&= \norm{x}^2 - \frac{2}{m}\sum_{r=1}^m x^T\phi(w_r^\top x)w_r + \frac{1}{m^2}\sum_{r, s \in [m]} \phi(w_r^\top x)w_r^\top \phi(w_s^\top x)w_s.
\end{align}
Because $\phi$ is ReLU and the distribution of $w_r$ is symmetric, we have:
\begin{align*}
\mathbb{E}_{w}[\phi(w^\top x)w] = \frac{1}{2} \mathbb{E}_w[ww^\top]x = \frac{\sigma^2x}{2},
\end{align*}
since $\mathbb{E}_w[ww^\top]= \sigma^2 I$. Then, by the independence of the columns in $W$,
\begin{align}
\label{eqn:expected_loss1}
\mathbb{E}_{w_r}\bigl[ \norm{x - u}^2 \bigr] &= \norm{x}^2 - \frac{2}{m}\sum_{r=1}^m \frac{\sigma^2\norm{x}^2}{2} + \frac{1}{m^2}\Bigl(\sum_{r, s\in [m], r \neq s} \frac{\norm{\sigma^2 x}^2}{4} + m\,\mathbb{E}_w[\phi(w^\top x)^2\norm{w}^2]\Bigr) \nonumber \\
&= (1-\sigma^2)\norm{x}^2 + \frac{m-1}{4m}\norm{\sigma^2 x}^2 + \frac{1}{m}\mathbb{E}_w[\phi(w^Tx)^2\norm{w}^2].
\end{align}
Now, we compute the last term:
\begin{align}
\mathbb{E}_w[\phi(w^Tx)^2\norm{w}^2] = \frac{d+2}{2}\norm{\sigma^2 x}^2.
\label{eqn:fact2}
\end{align}
Since $\|x\|=1$, for $w \sim \mathcal{N}(0, I)$ we can write $w = ux + v$ with $x^\top v = 0$, where $u = \inprod{w}{x} \sim \mathcal{N}(0, 1)$ and $v \sim \mathcal{N}(0, I-xx^\top)$ are independent.
Using Stein's Lemma (or a direct computation), we obtain the exact values $\alpha = \mathbb{E}[u\mathbbm{1}(u \ge 0)] = \frac{1}{\sqrt{2\pi}}$, $\beta = \mathbb{E}[u^2\mathbbm{1}(u \ge 0)] = \frac{1}{2}$ and $\gamma = \mathbb{E}[u^4\mathbbm{1}(u \ge 0)] = \frac{3}{2}$, which are all positive.
Write $\phi(z) = \max(0, z) = \mathbbm{1}(z \ge 0)z$, and
\begin{align}
\label{eq_square_term}
\mathbb{E}_{w \sim \mathcal{N}(0, I)}[\phi(w^Tx)^2\norm{w}^2]
&=\mathbb{E}_w[\mathbbm{1}(\inprod{w}{x} \ge 0)\inprod{w}{x}^2\norm{w}^2] \nonumber \\
&= \mathbb{E}_w[\mathbbm{1}(u \ge 0) u^2 (u^2 + \norm{v}^2)] ~~(\norm{x} = 1) \nonumber \\
&= \mathbb{E}_w[\mathbbm{1}(u \ge 0) u^4] + \mathbb{E}_w[\mathbbm{1}(u \ge 0)u^2] \mathbb{E}_w[\norm{v}^2] ~~\text{(independence of $u$ and $v$)} \nonumber \\
&= \frac{d + 2}{2}.
\end{align}
Changing variables by scaling the variance:
\begin{align*}
\mathbb{E}_{w \sim \mathcal{N}(0, \sigma^2 I)}[\phi(w^\top x)^2\norm{w}^2] &= \sigma^4\mathbb{E}_{w \sim \mathcal{N}(0, I)}[\phi(w^\top x)^2\norm{w}^2] = \frac{d + 2}{2}\norm{\sigma^2 x}^2 .
\end{align*}
Combining with~\eqref{eqn:expected_loss1}:
\begin{align*}
\label{eqn:expected_loss2}
\mathbb{E}_{w_r \sim \mathcal{N}(0, \sigma^2I)}\bigl[ \norm{x - u}^2 \bigr] &= \left(\frac{\sigma^2}{2} - 1\right)^2 \norm{x}^2 + \frac{(2d+3)\norm{\sigma^2 x}^2}{4m} .
\numberthis
\end{align*}
The second result follows directly by plugging the specific values of $\norm{x}$ and $\sigma$ into the first.
\qedhere
\section*{Acknowledgments}
\input{acknowledgments.tex}
\section*{References}
\section{Introduction}
Relation extraction (RE) is one of the most fundamental tasks in natural language processing, and its goal is to identify the relationship between a given pair of entities in a sentence.
Typically, a large-scale training dataset with clean labels is required to train a reliable relation extraction model. However, it is time-consuming and labor-intensive to annotate such data by crowdsourcing.
To overcome the lack of labeled training data, \citeauthor{mintz2009distant}~\shortcite{mintz2009distant} presents a distant supervision approach that automatically generates a large-scale, labeled training set by aligning entities in knowledge graph (e.g. Freebase \cite{bollacker2008freebase}) to corresponding entity mentions in natural language sentences.
This approach is based on a \emph{strong assumption} that, any sentence containing two entities should be labeled according to the relationship of the two entities on the given knowledge graph. However, this assumption does not always hold.
Sometimes the same two entities, appearing in different sentences with various contexts, do not express a consistent relationship as described in the knowledge graph, which inevitably results in wrongly labeled instances.
To alleviate the aforementioned problem, \citeauthor{riedel2010modeling}~\shortcite{riedel2010modeling} proposes a multi-instance learning framework, which relaxes the strong assumption to the \emph{expressed-at-least-once} assumption.
In plainer terms, this means that any possible relation between two entities holds true in at least one distantly-labeled sentence, rather than in all of the sentences that contain those two entities.
In particular, instead of generating a sentence-level label, this framework assigns a label to a \emph{bag} of sentences containing a common entity pair, and the label is a relationship of the entity pair on knowledge graph.
Recently, based on the labeled data at bag level, a line of works \cite{zeng2015distant,du2018multi,lin2016neural,han2018hierarchical,ye2019distant} under the selective attention framework \cite{lin2016neural} lets the model implicitly focus on the correctly labeled sentence(s) via an attention mechanism and thus learns a stable and robust model from the noisy data.
\begin{table}[t] \small
\centering
\begin{tabular}{p{4.5cm}p{1.5cm}p{1cm}}
\toprule
\textbf{Bag consisting of one sentence}&\textbf{Label}&\textbf{Correct} \\
\midrule
After moving back to \emph{New York}, \emph{Miriam} was the victim of a seemingly racially motivated attack ... & place\_lived& True\\ \midrule
... he faced, walking \emph{Bill Mueller} and giving up singles to Mark Bellhorn and \emph{Johnny Damon}. & place\_lived& False\\
\bottomrule
\end{tabular}
\caption{\small Two examples of one-sentence bag, which are correctly and wrongly labeled by distant supervision respectively.}
\label{tab:nosiylabel}
\end{table}
However, such a selective attention framework is vulnerable to situations where a bag is comprised of only one single labeled sentence;
and what is worse, that only sentence may express relation information inconsistent with the bag-level label.
This scenario is not uncommon. For a popular distantly supervised relation extraction benchmark, e.g., the NYT dataset \cite{riedel2010modeling}, up to $80\%$ of its training examples (i.e., bags) are one-sentence bags. From our data inspection, we randomly sample 100 one-sentence bags and find that $35\%$ of them are incorrectly labeled. Two examples of one-sentence bags are shown in Table \ref{tab:nosiylabel}.
These results indicate that, in the training phase, the selective attention module is forced to output a single-valued scalar for $80\%$ of the examples, leading to an ill-trained attention module and thus hurting the performance.
Motivated by aforementioned observations, in this paper, we propose a novel \textbf{Se}lective \textbf{G}ate (SeG) framework for distantly supervised relation extraction.
In the proposed framework,
1) we employ both the entity embeddings and relative position embeddings \cite{zeng2014relation} for relation extraction, and an entity-aware embedding approach is proposed to dynamically integrate entity information into each word embedding, yielding more expressively-powerful representations for downstream modules;
2) to strengthen the capability of the widely-used piecewise CNN (PCNN) \cite{zeng2015distant} in capturing long-term dependencies \cite{yu2018qanet}, we develop a light-weight self-attention \cite{lin2017structured,shen2017disan} mechanism to capture rich dependency information and consequently enhance the capability of the neural network by producing a representation complementary to PCNN;
and 3) based on preceding versatile features, we design a selective gate to aggregate sentence-level representations into bag-level one and alleviate intrinsic issues appearing in selective attention.
Compared to the baseline framework (i.e., selective attention for multi-instance learning), SeG is able to produce entity-aware embeddings and rich-contextual representations to facilitate downstream aggregation modules that stably learn from noisy training data.
Moreover, SeG uses a gate mechanism with pooling to overcome the problem in selective attention caused by one-sentence bags. In addition, it retains a light-weight structure to ensure the scalability of the model.
The experiments and extensive ablation studies on the New York Times dataset \cite{riedel2010modeling} show that our proposed framework achieves a new state-of-the-art performance regarding both AUC and top-N precision metrics for the distantly supervised relation extraction task, and also verify the significance of each proposed module. Particularly, the proposed framework achieves an AUC of 0.51, which outperforms the selective attention baseline by 0.14 and improves over the previous state-of-the-art approach by 0.09.
\section{Proposed Approach}
\begin{figure*}[t]
\centering
\includegraphics[width=0.75\textwidth]{./sentencerepre.pdf}
\caption{\small The framework of our approach (i.e. SeG), consisting of three components: 1) entity-aware embedding, 2) self-attention enhanced neural network, and 3) a selective gate. Note, tokens $e_h$ and $e_t$ with gray background denote the head entity and tail entity of this sentence.}
\label{fig:model}
\end{figure*}
As illustrated in Figure \ref{fig:model}, we propose a novel neural network, i.e., SeG, for distantly supervised relation extraction, which is composed of following neural components.
\subsection{Entity-Aware Embedding} \label{sec:ent_aware}
Given a bag of sentences\footnote{``sentence'' and ``instance'' are interchangeable in this paper.} $B^k = \{s^k_1, \dots, s^k_{m^k}\}$ where each sentence contains a common entity pair (i.e., head entity $e^k_h$ and tail entity $e^k_t$), the target of relation extraction is to predict the relation $y^k$ between the two entities. For a clear demonstration, we omit the indices of example and sentence in the remainder if no confusion is caused. Each sentence is a sequence of tokens, i.e., $s = [w_1, \dots, w_n]$, where $n$ is the length of the sentence. In addition, each token has a low-dimensional dense-vector representation, i.e., $[\bm{v}_1, \cdots, \bm{v}_n] \in\mathbb R^{d_w \times n}$, where $d_w$ denotes the dimension of word embedding.
In addition to the typical word embedding, relative position is a crucial feature for relation extraction, which can provide downstream neural models with rich positional information \cite{zeng2014relation,zeng2015distant}. Relative positions explicitly describe the relative distances between each word $w_i$ and the two targeted entities $e_h$ and $e_t$. For the $i$-th word, a randomly initialized weight matrix projects the relative position features into two dense-vector representations w.r.t.\ the head and tail entities, i.e., $\bm{r}^{e_h}_i$ and $\bm{r}^{e_t}_i\in\mathbb R^{d_r}$ respectively. The final low-level representations for all tokens are a concatenation of the aforementioned embeddings, i.e., $\bm{X}^{(p)} = [\bm{x}^{(p)}_1, \cdots, \bm{x}^{(p)}_n] \in\mathbb R^{d_p \times n}$ in which $\bm{x}^{(p)}_i = [\bm{v_i}; \bm{r}^{e_h}_i; \bm{r}^{e_t}_i]$ and $d_p = d_w + 2\times d_r$.
However, aside from the relative position features, we argue that the embeddings of both the head entity $e_h$ and tail entity $e_t$ are also vitally significant for relation extraction task, since the ultimate goal of this task is to predict the relationship between these two entities.
This hypothesis is further verified by our quantitative and qualitative analyses in later experiments (Section \ref{sec:ablation_study} and \ref{sec:casestudy}).
The empirical results show that our proposed embedding can outperform the widely-used way in prior works \cite{ji2017distant}.
In particular, we propose a novel entity-aware word embedding approach to enrich the traditional word embeddings with features of the head and tail entities. To this end, a position-wise gate mechanism is naturally leveraged to dynamically select features between relative position embedding and entity embeddings. Formally, the embeddings of head and tail entities are denoted as $\bm{v}^{(h)}$ and $\bm{v}^{(t)}$ respectively. The position-wise gating procedure is formulated as
\begin{align}
\bm{\alpha} &= \sigmoid(\lambda \cdot (\bm{W}^{(g1)} \bm{X}^{(e)} + \bm{b}^{(g1)})), \\ \label{eq:hyperparameter}
\tilde{\bm{X}}^{(p)} &= \tanh(\bm{W}^{(g2)}\bm{X}^{(p)} + \bm{b}^{(g2)}), \\
\bm{X} &= \bm{\alpha} \cdot \bm{X}^{(e)} + (1-\bm{\alpha}) \cdot \tilde{\bm{X}}^{(p)},\\
\text{where,}~\bm{X}^{(e)} &= [\bm{x}^{(e)}_i]_{i=1}^{n},~\forall \bm{x}^{(e)}_i = [\bm{v}_i; \bm{v}^{(h)}; \bm{v}^{(t)}],
\end{align}
in which $\bm{W}^{(g1)}\in\mathbb{R}^{d_h \times 3d_w}$ and $\bm{W}^{(g2)}\in\mathbb{R}^{d_h \times d_p}$ are learnable parameters, $\lambda$ is a hyper-parameter to control smoothness, and $\bm{X} = [\bm{x}_1, \dots, \bm{x}_n] \in\mathbb R^{d_h \times n}$ containing the entity-aware embeddings of all tokens from the sentence.
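To make the gating concrete, the following minimal NumPy sketch computes the entity-aware embeddings from randomly initialized parameters; it assumes $d_h = 3d_w$ (as in our hyper-parameter setting $d_h=150$, $d_w=50$) so that the element-wise combination is well-defined, and the specific sizes, the value of $\lambda$, and the random values are illustrative assumptions.
\begin{verbatim}
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
d_w, d_r, n = 50, 5, 6
d_p, d_h = d_w + 2 * d_r, 3 * d_w

V = rng.standard_normal((d_w, n))                  # word embeddings v_1..v_n
R_h = rng.standard_normal((d_r, n))                # relative positions w.r.t. head
R_t = rng.standard_normal((d_r, n))                # relative positions w.r.t. tail
v_head, v_tail = rng.standard_normal(d_w), rng.standard_normal(d_w)

X_p = np.concatenate([V, R_h, R_t], axis=0)        # x_i^(p) = [v_i; r_i^eh; r_i^et]
X_e = np.concatenate([V, np.tile(v_head[:, None], (1, n)),
                         np.tile(v_tail[:, None], (1, n))], axis=0)  # [v_i; v^h; v^t]

W_g1, b_g1 = 0.05 * rng.standard_normal((d_h, 3 * d_w)), np.zeros((d_h, 1))
W_g2, b_g2 = 0.05 * rng.standard_normal((d_h, d_p)), np.zeros((d_h, 1))
lam = 1.0                                          # smoothness hyper-parameter

alpha = sigmoid(lam * (W_g1 @ X_e + b_g1))
X_tilde_p = np.tanh(W_g2 @ X_p + b_g2)
X = alpha * X_e + (1.0 - alpha) * X_tilde_p        # entity-aware embeddings
print(X.shape)                                     # (d_h, n)
\end{verbatim}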
\subsection{Self-Attention Enhanced Neural Network} \label{sec:self_attn_enhanced}
Previous works on relation extraction mainly employ a piecewise convolutional neural network (PCNN) \cite{zeng2015distant} to obtain contextual representations of sentences due to its capability of capturing local features, low computation cost and light-weight structure. However, some previous works \cite{vaswani2017attention} find that CNNs cannot reach state-of-the-art performance on a majority of natural language processing benchmarks due to their limited ability to capture long-term dependencies, even when stacking multiple layers. This motivates us to enhance the PCNN with another neural module, which is capable of capturing long-term or global dependencies, to produce a complementary and more powerful sentence representation.
Hence, we employ a self-attention mechanism in our model due to its parallelizable computation and state-of-the-art performance.
Unlike existing approaches that sequentially stack self-attention and CNN layers in a cascade form \cite{yu2018qanet,wu2019pay}, we arrange these two modules in parallel so they can generate features describing both local and long-term relations for the same input sequence. Since each bag may contain many sentences (up to 20), a light-weight network that can efficiently process these sentences simultaneously is preferable, such as PCNN, which is the most popular module for relation extraction.
For this reason, there is only one light-weight self-attention layer in our model. This is in contrast to \citeauthor{yu2018qanet}~\shortcite{yu2018qanet} and \citeauthor{wu2019pay}~\shortcite{wu2019pay} who stack both modules repeatedly. Our experiments show that the two modules arranged in a parallel manner consistently outperform stacking architectures, even those equipped with additional residual connections \cite{he2016deep}. The comparative experiments will be elaborated in Sections \ref{sec:main_res} and \ref{sec:ablation_study}.
\paragraph{Piecewise Convolutional Neural Network} This section provides a brief introduction to PCNN as a background for further integration with our model, and we refer readers to \citeauthor{zeng2015distant}~\shortcite{zeng2015distant} for more details. Each sentence is divided into three segments w.r.t. the head and tail entities. Compared to the typical 1D-CNN with max-pooling \cite{zeng2014relation}, piecewise pooling has the capability to capture the structure information between two entities.
Therefore, instead of using word embeddings with relative position features $\bm{X}^{(p)}$ as the input, we here employ our entity-aware embedding $\bm{X}$ as described in Section \ref{sec:ent_aware} to enrich the input features. First, 1D-CNN is invoked over the input, which can be formally represented as
\begin{equation}
\bm{H} = \onedcnn(\bm{X}; \bm{W}^{(c)}, \bm{b}^{(c)}) \in\mathbb{R}^{d_c \times n},
\end{equation}
where $\bm{W}^{(c)} \in\mathbb{R}^{d_c \times m \times d_h}$ is the convolution kernel with window size $m$ (i.e., $m$-gram). Then, to obtain a sentence-level representation, a piecewise pooling is performed over the output sequence $\bm{H} = [\bm{h}_1, \dots, \bm{h}_n]$, which is formulated as
\begin{equation} \label{eq:piecewise_pool}
\bm{s} = \tanh([\pool(\bm{H}^{(1)}); \pool(\bm{H}^{(2)}); \pool(\bm{H}^{(3)})]).
\end{equation}
In particular, $\bm{H}^{(1)}$, $\bm{H}^{(2)}$ and $\bm{H}^{(3)}$ are three consecutive parts of $\bm{H}$, obtained by dividing $\bm{H}$ according to the positions of head and tail entities. Consequently, $\bm{s} \in\mathbb R^{3d_c}$ is the resulting sentence vector representation.
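A minimal NumPy sketch of the piecewise pooling in Eq.~(\ref{eq:piecewise_pool}) is given below; the segmentation convention (each entity token closes its left segment) and the toy sizes are illustrative assumptions.
\begin{verbatim}
import numpy as np

def piecewise_pool(H, head_pos, tail_pos):
    # H: (d_c, n) feature map from the 1D-CNN; head_pos < tail_pos are token indices
    segs = [H[:, :head_pos + 1], H[:, head_pos + 1:tail_pos + 1], H[:, tail_pos + 1:]]
    pooled = [seg.max(axis=1) if seg.size else np.zeros(H.shape[0]) for seg in segs]
    return np.tanh(np.concatenate(pooled))         # s in R^{3 d_c}

rng = np.random.default_rng(0)
H = rng.standard_normal((230, 12))                 # d_c = 230, sentence length 12
s = piecewise_pool(H, head_pos=2, tail_pos=8)
print(s.shape)                                     # (690,)
\end{verbatim}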
\paragraph{Self-Attention Mechanism} To maintain the efficiency of the proposed approach, we adopt the recently-promoted self-attention mechanism \cite{liu2016learning,lin2017structured,shen2019tensorized,li2018hierarchical,liu2019GPN} for compressing a sequence of token representations into a sentence-level vector representation by exploiting global dependency, rather than the computationally expensive pairwise dependencies \cite{vaswani2017attention}. It is used to measure the contribution or importance of each token to the relation extraction task w.r.t.\ the global dependency. Formally, given the entity-aware embedding $\bm{X}$, we first calculate attention probabilities by a parameterized compatibility function, i.e.,
\begin{align}
&\bm{A} = \bm{W}^{(a2)} \sigma(\bm{W}^{(a1)}\bm{X} + \bm{b}^{(a1)} ) + \bm{b}^{(a2)}, \\
&\bm{P}^{(A)} = \softmax(\bm{A}),
\end{align}
where, $\bm{W}^{(a1)}, \bm{W}^{(a2)} \in\mathbb R^{d_h \times d_h}$ are learnable parameters, $\softmax(\cdot)$ is invoked over sequence, and $\bm{P}^{(A)}$ is resulting attention probability matrix. Then, the result of self-attention mechanism can be calculated as
\begin{equation}
\bm{u} = \sum \bm{P}^{(A)} \odot \bm{X},
\end{equation}
in which, $\sum$ is performed along sequential dimension and $\odot$ stands for element-wise multiplication. And, $\bm{u} \in\mathbb R^{d_h}$ is also a sentence-level vector representation which is a complement to PCNN-resulting one, i.e., $\bm{s}$ from Eq.(\ref{eq:piecewise_pool}).
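The self-attention pooling can be sketched as follows; the $\tanh$ choice for the activation $\sigma$, the small random parameters, and the toy sizes are illustrative assumptions.
\begin{verbatim}
import numpy as np

def softmax_over_sequence(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
d_h, n = 150, 12
X = rng.standard_normal((d_h, n))                  # entity-aware token embeddings

W_a1, b_a1 = 0.05 * rng.standard_normal((d_h, d_h)), np.zeros((d_h, 1))
W_a2, b_a2 = 0.05 * rng.standard_normal((d_h, d_h)), np.zeros((d_h, 1))

A = W_a2 @ np.tanh(W_a1 @ X + b_a1) + b_a2         # compatibility scores, (d_h, n)
P = softmax_over_sequence(A)                       # attention probabilities
u = (P * X).sum(axis=1)                            # sentence-level vector in R^{d_h}
print(u.shape)
\end{verbatim}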
\subsection{Selective Gate} \label{sec:selective_gate}
Given a sentence bag $B = [s_1, \dots, s_m]$ with common entity pair, where $m$ is the number of sentences. As elaborated in Section \ref{sec:self_attn_enhanced}, we can obtain $\bm{S} = [\bm{s}_1, \dots, \bm{s}_m]$ and $\bm{U} = [\bm{u}_1, \dots, \bm{u}_m]$ for each sentence in the bag, which are derived from PCNN and self-attention respectively.
Unlike previous works under the multi-instance framework that frequently use a selective attention module to aggregate sentence-level representations into a bag-level one, we propose an innovative selective gate mechanism to perform this aggregation. The selective gate can mitigate problems existing in distantly supervised relation extraction and achieve a satisfactory empirical effectiveness. Specifically, when handling the noisy-instance problem, selective attention tries to produce a distribution over all sentences in a bag; but if there is only one sentence in the bag, and even if that only sentence is wrongly labeled, the selective attention mechanism will be ineffective or even completely useless.
Note that almost $80\%$ of bags from popular relation extraction benchmark consist of only one sentence, and many of them suffer from the wrong label problem.
In contrast, our proposed gate mechanism is competent to tackle such case by directly and dynamically aligning low gating value to the wrongly labeled instances and thus preventing noise representation being propagated.
Particularly, a two-layer feed forward network is applied to each $\bm{u}_j$ to sentence-wisely produce gating value, which is formally denoted as
\begin{align}
g_j = \sigmoid(\bm{W}^{(g1)} \sigma(\bm{W}^{(g2)} \bm{u}_j &+ \bm{b}^{(g2)} ) + \bm{b}^{(g1)}), \\
\notag &~\forall j = 1, \dots, m,
\end{align}
where, $\bm{W}^{(g1)} \in\mathbb R^{3d_c \times d_h}$, $\bm{W}^{(g2)} \in\mathbb R^{d_h \times d_h}$, $\sigma(\cdot)$ denotes an activation function, and each entry of $g_j$ lies in $(0, 1)$. Then, given the calculated gating value, a mean aggregation is performed over the sentence embeddings $[\bm{s}_j]_{j=1}^m$ in the bag, and thus produces the bag-level vector representation for further relation classification. This procedure is formalized as
\begin{align} \label{eq:aggreation}
\bm{c} = \dfrac{1}{m} \sum_{j=1}^{m} g_j \cdot \bm{s}_j
\end{align}
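The following NumPy sketch puts the gate and the mean aggregation together for one bag. Given the stated shapes of $\bm{W}^{(g1)}$ and $\bm{W}^{(g2)}$, the gate $g_j$ is a vector in $\mathbb{R}^{3d_c}$ applied element-wise to $\bm{s}_j$; the $\tanh$ choice for $\sigma$, the random inputs, and the toy bag size are illustrative assumptions.
\begin{verbatim}
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
d_h, d_c, m_bag = 150, 230, 3
S = rng.standard_normal((m_bag, 3 * d_c))      # PCNN sentence vectors s_j
U = rng.standard_normal((m_bag, d_h))          # self-attention sentence vectors u_j

W_g2, b_g2 = 0.05 * rng.standard_normal((d_h, d_h)), np.zeros(d_h)
W_g1, b_g1 = 0.05 * rng.standard_normal((3 * d_c, d_h)), np.zeros(3 * d_c)

gates = [sigmoid(W_g1 @ np.tanh(W_g2 @ u_j + b_g2) + b_g1) for u_j in U]
c = np.mean([g_j * s_j for g_j, s_j in zip(gates, S)], axis=0)   # bag-level vector
print(c.shape)                                 # (3 * d_c,)
\end{verbatim}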
Finally, $\bm{c}$ is fed into a multi-layer perceptron followed by a $|C|$-way $\softmax$ function (i.e., an $\mlp$ classifier) to judge the relation between the head and tail entities, where $|C|$ is the number of distinctive relation categories. This can be regarded as a classification task \cite{DBLP:conf/cikm/LongCZZ12}. Formally,
\begin{equation} \label{eq:predicted_distribution}
\bm{p} = \softmax(\mlp(\bm{c})) \in\mathbb{R}^{|C|}.
\end{equation}
\subsection{Model Learning}
We minimize negative log-likelihood loss plus $L_2$ regularization penalty to train the model, which is written as
\begin{equation} \label{loss}
L_{NLL} = - \dfrac{1}{|\mathcal{D}|} \sum\nolimits_{k=1}^{|\mathcal{D}|} \log \bm{p}^k_{(i=y^k)}+\beta||\theta||^2_2
\end{equation}
where $\bm{p}^k$ is the predicted distribution from Eq.(\ref{eq:predicted_distribution}) for the $k$-th example in dataset $\mathcal{D}$ and $y^k$ is its corresponding distant supervision label.
\section{Experiments}
\begin{table*}[t]\small
\centering
\begin{tabular}{lcccccccccccc}
\toprule
\multicolumn{1}{l}{\bf Approach }&\multicolumn{4}{c}{\bf One}&\multicolumn{4}{c}{\bf Two}&\multicolumn{4}{c}{\bf All}\\
\midrule
\textbf{P@N (\%)} &100&200&300&Mean&100&200&300&Mean&100&200&300&Mean\\
\midrule
\multicolumn{13}{l}{\textit{Comparative Approaches} } \\
\midrule
CNN+ATT \cite{lin2016neural} &72.0&67.0&59.5&66.2&75.5&69.0&63.3&69.3&74.3&71.5&64.5&70.1\\
PCNN+ATT \cite{lin2016neural} &73.3&69.2&60.8&67.8&77.2&71.6&66.1&71.6&76.2&73.1&67.4&72.2\\
PCNN+ATT+SL \cite{liu2017soft} &84.0&75.5&68.3&75.9&86.0&77.0&73.3&78.8&87.0&84.5&77.0&82.8\\
PCNN+HATT \cite{han2018hierarchical} &84.0&76.0&69.7&76.6&85.0&76.0&72.7&77.9&88.0&79.5&75.3&80.9\\
PCNN+BAG-ATT \cite{ye2019distant} &86.8&77.6&73.9&79.4&91.2&79.2&75.4&81.9&91.8&84.0&78.7&84.8\\
\midrule
\textbf{SeG} (\textit{ours}) &\textbf{94.0}&\textbf{89.0}&\textbf{85.0}&\textbf{89.3}&\textbf{91.0}&\textbf{89.0}&\textbf{87.0}&\textbf{89.0}&\textbf{93.0}&\textbf{90.0}&\textbf{86.0}&\textbf{89.3}\\
\midrule [0.2ex]
\multicolumn{13}{l}{\textit{Ablations} } \\
\midrule
SeG w/o Ent &85.0&75.0&67.0&75.6&87.0&79.0&70.0&78.6&85.0&80.0&72.0&79.0\\
SeG w/o Gate&87.0&85.5&82.7&85.1&89.0&87.0&84.0&86.7&90.0&88.0&85.3&87.7\\
SeG w/o Gate w/o Self-Attn&86.0&85.0&82.0&84.3&88.0&86.0&83.0&85.7&90.0&86.5&86.0&87.5\\
SeG w/o ALL &81.0&73.5&67.3&74.0&82.0&75.0&72.3&76.4&81.0&75.0&72.0&76.0\\
\midrule[0.01ex]
SeG+ATT w/o Gate&89.0&83.5&75.7&82.7&90.0&83.5&77.0&83.5&92.0&82.0&76.7&83.6\\
SeG+ATT&88.0&81.0&75.0&81.3&87.0&82.5&77.0&82.2&90.0&86.5&81.0&85.8\\
SeG w/ stack&91.0&88.0&85.0&88.0&91.0&87.0&85.0&87.7&92.0&89.5&86.0&89.1\\
\bottomrule
\end{tabular}
\caption{\small Precision values for the top-100, -200 and -300 relation instances that are randomly selected in terms of one/two/all sentence(s). }
\label{tab:topnprecision}
\end{table*}
To evaluate our proposed framework, and to compare the framework with baselines and competitive approaches, we conduct experiments on a popular benchmark dataset for distantly supervised relation extraction.
We also conduct an ablation study to separately verify the effectiveness of each proposed component, and last, case study and error analysis are provided for an insight into our model.
\paragraph{Dataset}
In order to accurately compare the performance of our model, we adopt the New York Times (NYT) dataset \cite{riedel2010modeling}, a widely-used standard benchmark for distantly supervised relation extraction in most previous works \cite{lin2016neural,zeng2015distant,han2018hierarchical,du2018multi}.
This dataset is generated by automatically aligning Freebase with the New York Times corpus. It contains 53 distinct relations, including a null class \textit{NA} denoting that the relation of an entity pair is unavailable.
There are 570K and 172K sentences in the training and test sets, respectively.
\paragraph{Metrics}
Following previous works \cite{zeng2015distant,lin2016neural,han2018hierarchical,du2018multi}, we use precision-recall (PR) curves, area under curve (AUC) and top-N precision (P@N) as metrics in our experiments on the held-out test set from the NYT dataset. To directly show the performance on one-sentence bags, we also calculate the accuracy of classification (Acc.) on non-NA sentences.
\paragraph{Training Setup}
For a fair comparison with baselines and competitive approaches, we set most of the hyper-parameters following prior works \cite{lin2017structured,han2018hierarchical}, and use the 50D word embeddings and 5D position embeddings released by \cite{lin2016neural,han2018hierarchical} for initialization; the hidden dimension $d_h$ is 150. The number of CNN filters $d_c$ is 230 and the kernel size $m$ is $3$. In the output layer, we employ dropout \cite{srivastava2014dropout} for regularization with a drop probability of $0.5$. To minimize the loss function defined in Eq.(\ref{loss}), we use stochastic gradient descent with an initial learning rate of $0.1$, decayed by a factor of ten every 100K steps.
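The optimization recipe above maps onto a standard PyTorch training loop as sketched below; the network body is only a stand-in (the real model uses the PCNN and selective gate described earlier), and the loop exists solely to make the schedule explicit: SGD at an initial rate of 0.1, decayed by a factor of ten every 100K steps, with dropout of 0.5 in the output layer.
\begin{verbatim}
import torch
from torch import nn, optim

# Stand-in for the actual SeG network; only the schedule matters here.
model = nn.Sequential(nn.Linear(150, 150), nn.ReLU(),
                      nn.Dropout(p=0.5),    # output-layer dropout, p = 0.5
                      nn.Linear(150, 53))   # 53 relation classes incl. NA
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)   # initial learning rate 0.1
# Decay the learning rate to one tenth every 100K training steps.
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=100_000, gamma=0.1)

for step in range(1000):                    # stand-in training loop
    features = torch.randn(32, 150)         # placeholder bag representations
    labels = torch.randint(0, 53, (32,))    # placeholder distant-supervision labels
    optimizer.zero_grad()
    loss = criterion(model(features), labels)
    loss.backward()
    optimizer.step()
    scheduler.step()                        # per-step decay, matching the schedule
\end{verbatim}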
\paragraph{Baselines and Competitive Approaches}
We compare our proposed approach with a wide range of previous methods, including feature-based, competitive, and state-of-the-art approaches, which are briefly summarized below.
\begin{itemize}
\item \textbf{Mintz} \cite{mintz2009distant} is the original approach that applies distant supervision to relation extraction.
\item \textbf{MultiR} \cite{hoffmann2011knowledge} is a graphical model within a multi-instance learning framework that is able to handle problems with overlapping relations.
\item \textbf{MIML} \cite{surdeanu2012multi} is a multi-instance, multi-label learning framework that jointly models both multiple instances and multiple relations.
\item \textbf{PCNN+ATT} \cite{lin2016neural} employs selective attention over multiple instances to alleviate the wrong-labeling problem; it is the principal baseline of our work.
\item \textbf{PCNN+ATT+SL} \cite{liu2017soft} introduces an entity-pair-level denoising method, namely a soft label, to alleviate the impact of the wrong-labeling problem.
\item \textbf{PCNN+HATT} \cite{han2018hierarchical} employs hierarchical attention to exploit correlations among relations.
\item \textbf{PCNN+BAG-ATT} \cite{ye2019distant} uses intra-bag attention to deal with noise at the sentence level and inter-bag attention to deal with noise at the bag level.
\end{itemize}
\begin{figure}[t]
\centering
\includegraphics[scale=0.38]{./auc_my.png}
\caption{Performance comparison for proposed model and previous baselines in terms of precision-recall curves}
\label{fig:prcurves}
\end{figure}
\begin{table}[t] \small
\centering
\begin{tabular}{lc}
\toprule
\textbf{Approach}&\textbf{AUC}\\
\midrule
PCNN+HATT& 0.42\\
PCNN+ATT-RA+BAG-ATT& 0.42\\
\midrule
\textbf{SeG} (ours)& 0.51\\
\bottomrule
\end{tabular}
\caption{\small Model comparison regarding the AUC value. The comparative results are reported by \citeauthor{han2018hierarchical}~\shortcite{han2018hierarchical} and \citeauthor{ye2019distant}~\shortcite{ye2019distant} respectively. }
\label{tab:aucscores}
\end{table}
\begin{table}[t] \small
\centering
\begin{tabular}{lcc}
\toprule
\textbf{Approach}&\textbf{AUC}&\textbf{Acc.}\\
\midrule
PCNN&0.36&83\% \\
PCNN+ATT&0.35&78\% \\
SeG(ours)&0.48&90\% \\
\bottomrule
\end{tabular}
\caption{\small Comparison of models trained and tested on one-sentence bags extracted from the NYT dataset, in terms of AUC and Acc., where Acc. is the accuracy on non-NA sentences.}
\label{tab:one-sentenceexp}
\end{table}
\subsection{Relation Extraction Performance} \label{sec:main_res}
We first compare our proposed SeG with the aforementioned approaches in terms of top-N precision (i.e., P@N) in Table \ref{tab:topnprecision}. As shown in the top panel of the table, SeG consistently and significantly outperforms the baseline (i.e., PCNN+ATT) and all recent competing approaches on every P@N metric.
Compared to PCNN with selective attention (i.e., PCNN+ATT), SeG improves performance by 23.6\% in terms of P@N mean for all sentences; even when a soft-label technique is applied to alleviate the wrong-labeling problem (i.e., PCNN+ATT+SL), our improvement remains substantial, i.e., 7.8\%.
Compared to the previous state-of-the-art approaches (i.e., PCNN+HATT and PCNN+BAG-ATT), the proposed model also outperforms them by large margins of 10.3\% and 5.3\%, respectively, even though they employ sophisticated techniques to handle noisy training data. These results verify the effectiveness of our approach in addressing the wrong-labeling problem that frequently arises in distantly supervised relation extraction.
Moreover, for the proposed approach and the comparative ones, we show precision-recall curves and the available AUC values in Figure \ref{fig:prcurves} and Table \ref{tab:aucscores}, respectively.
The AUC results are consistent with those for P@N, showing that our approach significantly improves over previous methods and reaches a new state of the art by handling the wrong-labeling problem with a context-aware selective gate mechanism. Specifically, our approach improves over both PCNN+HATT and PCNN+BAG-ATT by 21.4\% in terms of precision-recall AUC.
\subsection{Ablation Study} \label{sec:ablation_study}
\begin{table}[b] \small
\centering
\begin{tabular}{lc}
\toprule
\textbf{Approach}&\textbf{AUC}\\
\midrule
\textbf{SeG} (ours)& 0.51\\
SeG w/o Ent& 0.40\\
SeG w/o Gate& 0.48\\
SeG w/o Gate w/o Self-Attn& 0.47\\
SeG w/o ALL& 0.40\\ \midrule
SeG + ATT w/o Gate& 0.47\\
SeG + ATT&0.47\\
SeG w/ stack&0.48\\
\bottomrule
\end{tabular}
\caption{\small Ablation study regarding precision-recall AUC value.}
\label{tab:abl_auc_scores}
\end{table}
To further verify the effectiveness of each module in the proposed framework, we conduct an extensive ablation study in this section.
In particular, \textit{SeG w/o Ent} denotes removing the entity-aware embedding, \textit{SeG w/o Gate} denotes removing the selective gate and concatenating the two representations from the PCNN and self-attention, and \textit{SeG w/o Gate w/o Self-Attn} denotes removing the entire self-attention enhanced selective gate. In addition, we replace some parts of the proposed framework with the corresponding baseline modules for an in-depth comparison: \textit{SeG+ATT} denotes replacing mean-pooling with selective attention, and \textit{SeG w/ stack} denotes stacking the PCNN and self-attention rather than combining them in parallel.
\begin{figure}[t]
\centering
\includegraphics[scale=0.38]{./ablation_pr.png}
\caption{\small Performance comparison for ablation study under precision-recall curves}
\label{fig:ablationpr}
\end{figure}
\begin{table*}[htbp] \small
\centering
\begin{tabular}{p{13pt}|p{170pt}|p{95pt}|p{50pt}|p{50pt}|p{57pt}}
\hline
Bag&Sentence&Relation&SeG (Ours) &SeG w/o Ent&SeG w/o GSA\\
\hline
B1& \textbf{Yul Kwon}, 32, of \textbf{San Mateo}, Calif., winner of last year's television contest “Survivor” and ... &\textit{/people/person/place\_lived}&Correct&Wrong&Wrong\\
\hline
B2&Other winners were Alain Mabanckou from Congo, \textbf{Nancy Huston} from \textbf{Canada} and Léonora Miano from Cameroon.&\textit{/people/person/nationality}&Correct&Correct&Wrong\\
\hline
\hline
B3&... production moved to \textbf{Connecticut} to film interiors in places like Stamford, Bridgeport, Shelton, \textbf{Ridgefield} and Greenwich.&\textit{/location/location/contains}&Correct&Wrong&Correct\\
\hline
B4&... missionary \textbf{George Whitefield}, according to The Encyclopedia of \textbf{New York City}.&\textit{NA}&Correct&Wrong&Correct\\
\hline
\end{tabular}
\caption{A case study where each bag contains one sentence. \textit{SeG w/o GSA} is an abbreviation of \textit{SeG w/o Gate w/o Self-Attn}.}
\label{tab:casestudy}
\end{table*}
The P@N results are listed in the bottom panel of Table \ref{tab:topnprecision}, and corresponding AUC results are shown in Table \ref{tab:abl_auc_scores} and Figure \ref{fig:ablationpr}.
According to the results, we find that our proposed modules perform substantially better than those of the baseline in terms of both metrics.
In particular, removing the entity-aware embedding (i.e., SeG w/o Ent) or the self-attention enhanced selective gate (i.e., SeG w/o Gate w/o Self-Attn) leads to decreases of 11.5\% and 1.8\%, respectively, in terms of P@N mean for all sentences. Note that, when both modules are dropped (i.e., SeG w/o ALL), the framework degenerates to the selective attention baseline \cite{lin2016neural}; our full framework outperforms this baseline by 15\% in terms of P@N mean for all sentences.
To verify the contribution of the selective gate module in handling the wrong-labeling problem, we replace the selective gate introduced in Eq.(\ref{eq:aggreation}) with a selective attention module, namely SeG+ATT w/o Gate; we also couple the selective gate with selective attention, in place of the mean-pooling in Eq.(\ref{eq:aggreation}), to perform the aggregation, namely SeG+ATT.
Across the board, the proposed SeG still delivers the best results in terms of both metrics, even when an extra selective attention module is applied.
Lastly, to explore the influence of the way PCNN is combined with the self-attention mechanism, we stack them following previous work \cite{yu2018qanet}, i.e., SeG w/ stack. As shown in Table \ref{tab:abl_auc_scores}, we observe a notable performance drop after stacking PCNN and self-attention, which verifies that combining self-attention and PCNN in parallel, as in our model, achieves better results.
To further empirically evaluate the performance of our method on the one-sentence bag problem, we extract only the one-sentence bags from the NYT training and test sets, which account for approximately 80\% of the original dataset.
The evaluation and comparison results in Table \ref{tab:one-sentenceexp} show that the AUC improvement of our model over PCNN+ATT on one-sentence bags (+0.13) is larger than the corresponding improvement on the full NYT dataset, which verifies SeG's effectiveness on one-sentence bags.
In addition, PCNN+ATT shows a slight decrease compared with PCNN, which further supports the claim that selective attention is vulnerable to one-sentence bags.
\subsection{Case Study} \label{sec:casestudy}
In this section, we conduct a case study to qualitatively analyze the effects of entity-aware embedding and self-attention enhanced selective gate. The case study of four examples is shown in Table \ref{tab:casestudy}.
First, comparing Bags 1 and 2, we find that, without the support of the self-attention enhanced selective gate, the model misclassifies both bags as \textit{NA}, degrading performance. Further, as shown in Bag 2, even when the entity-aware embedding module is absent, the framework can still make a correct prediction by relying on the selective gate alone. This finding warrants a closer look at the limits of the self-attention enhanced selective gate; hence, two error cases are shown in Bags 3 and 4.
To further examine the necessity of the entity-aware embedding, Bags 3 and 4 show two error cases for SeG w/o Ent, whose ground-truth labels are \textit{/location/location/contains} and \textit{NA}, respectively. One possible reason for the misclassification in both cases is that, without the entity-aware embedding, the remaining position features cannot provide strong enough information to distinguish complex contexts that share a similar positional pattern w.r.t.\ the two entities.
\subsection{Error Analysis}
To investigate the possible reasons for misclassification, we randomly sample 50 error examples from the test set and manually analyze them. We find that the errors can be roughly categorized into the following two classes.
\paragraph{Lack of background} We observe that our approach tends to mistakenly classify almost every sentence containing two place entities as \textit{/location/location/contains}, even when the correct relation is \textit{/location/country/capital} or \textit{/location/country/administrative\_divisions}. This suggests that incorporating external knowledge could alleviate this problem, which is likely caused by a lack of background knowledge.
\paragraph{Isolated Sentence in Bag} Each sentence in a bag is treated as an independent individual with no relationship to the other sentences in the bag, which may lead to information loss across the multiple sentences when performing bag-level classification.
\section{Conclusion}
In this paper, we propose a novel framework for distantly supervised relation extraction, the selective gate (SeG) framework, as an alternative to previous approaches. It incorporates an entity-aware embedding module and a self-attention enhanced selective gate mechanism to integrate task-specific entity information into the word embeddings and to generate a complementary, context-enriched representation for the PCNN.
The proposed framework has clear merits over the previously prevalent selective attention when handling wrongly labeled data, especially in the common case where most bags contain only one sentence.
Experiments conducted on the popular NYT dataset show that our model SeG consistently achieves new state-of-the-art performance in terms of all P@N metrics and precision-recall AUC.
Further ablation and case studies also demonstrate the importance of the proposed modules in handling wrongly labeled data.
In the future, we plan to incorporate an external knowledge base into our framework, which may further boost the prediction quality by overcoming the problems with a lack of background information as discussed in our error analysis.
\section*{Acknowledgements}
This research was funded by the Australian Government through the Australian Research Council (ARC) under grant LP180100654 in partnership with KS computer. We also acknowledge the support of NVIDIA Corporation and Google Cloud with the donation of GPUs and computation credits, respectively.
\section{Introduction}
Superconducting nanowire single photon detectors (SNSPDs) have gained widespread use due to their near saturated quantum efficiency\cite{MarsiliF,kahl2015waveguide,verma2015high}, high count rates\cite{rosenberg2013high,Robinsonoptics}, exceptional timing jitter\cite{14p8Jitter,zadeh2018single,caloz2018high,JPL}, and low dark count rates\cite{schuck2013waveguide,wollman2017uv}. The performance of an SNSPD is impacted by its bias and readout circuitry, and several research groups have worked on the modeling and optimization of this interface. An equivalent circuit describing the electrical interaction between an SNSPD and a passive termination was first described by Kerman and it was shown that the recovery time of a passively quenched SNSPD is limited by an electrical time constant, $\tau_\text{e}=L_\text{K}/R_\text{L}$, where $L_\text{K}$ is the kinetic inductance of the detector and $R_\text{L}$ is the load resistance\cite{kerman2006kinetic}. Shortly thereafter, finite-difference time-domain simulations of the electro-thermal response of a passively terminated SNSPD were leveraged to study the extent to which the recovery time of a SNSPD with fixed kinetic inductance can be reduced by increasing $R_\text{L}$. It was found that only limited improvement was possible due to latching, which describes the condition in which a SNSPD becomes stuck in the resistive state~\cite{Latching}. This result was verified experimentally and it was shown that latching occurs when electro-thermal feedback is able to stabilize the normal domain, which happens when the ratio of the electrical to thermal time constant is too small\cite{KERMANFB}.
While short detectors are desired to minimize the recovery time, a large area detector is typically employed to maximize system detection efficiency. As such, various approaches have been developed to minimize the reset time while preserving other performance characteristics. These techniques include biasing the detector through a resistor while employing a dc-coupled amplifier for readout~\cite{kerman2013readout}, leveraging a snubber element (consisting of a short circuited transmission line) to limit the charging and discharging of an ac-coupling capacitor\cite{snubber,improvedreadout}, and gating the detector using a microwave bias\cite{akhlaghi2012gated}. However, in each of these cases, the output voltage is limited to the product of the load resistance (typically 50$\,\Omega$) and the bias current ($I_\text{B}$). As the peak resistance that a normal domain within a passively quenched detector typically reaches during a detection event is on the order of twenty times that of the load resistance, a significant gain in pulse amplitude could be obtained if the detector were configured to operate with a high-impedance load. Increasing the pulse amplitude has the potential to reduce timing jitter due to the associated improvement in signal to noise ratio.
In this paper, we explore the use of an active quenching architecture for the bias and readout of SNSPDs. We begin with the details of operation and the expected performance. We then describe a prototype active quenching system and present experimental results that demonstrate the advantages of this approach over conventional readout techniques. These results confirm that the count rate, dark count rate, and timing jitter of an SNSPD can be simultaneously enhanced through the use of active quenching.
\section{Proposed approach}
A block diagram of an actively quenched SNSPD appears in Fig.~\ref{concept}. The detector is biased using a constant current source, which is disabled whenever the voltage across the device exceeds the threshold of a comparator. Thus, once a normal domain forms due to a detection event or a dark count, it will grow until the aggregate resistance ($R_\text{N}$) is large enough that the comparator is triggered. At this point, the bias current will be disabled, allowing the normal region of the device to transition back to the superconducting state. After a delay, the bias current is again enabled, returning the SNSPD to the photosensitive state.
\begin{figure}[bt!]
\centering
\includegraphics[width=0.75\columnwidth]{Figures/Concept.pdf}
\caption{Conceptual block diagram of active quenching architecture and key waveforms describing its operation.}
\label{concept}
\end{figure}
In principle, this approach has two significant advantages over standard SNSPD readout techniques. First, both the amplitude of the voltage across the detector and the slew rate associated with the rising edge of this voltage will be greatly enhanced, since the SNSPD is no longer shunted by a small resistance. As the jitter added by the readout circuit is inversely proportional to slew rate, it is expected that the system timing jitter will be improved. Secondly, by actively quenching the normal domain, we decouple the reset and rebias operations, allowing for reduced dead times and, correspondingly, higher count rates.
\section{Theoretical performance}
The rising-edge performance of the proposed architecture can be quantified by considering the dynamics of normal domain growth within a capacitively loaded SNSPD%
\footnote{For the purpose of the analysis described in this work, we consider an SNSPD as a lumped device and neglect the non-linearity of the kinetic inductance~\cite{clem2012kinetic}. While this is valid for short SNSPDs, a distributed model should technically be used when studying devices with large kinetic inductance\cite{santavicca2016microwave}. However, based on measurement results, we believe our simplified models are sufficient to predict performance, even when considering larger devices.}, as shown in Fig.~\ref{SCH}(a). For this analysis, we assume that the bias current is kept constant ($I_\text{B}\left(t\right)=I_\text{B0}$) and that $R_\text{DQ}$ is infinite. Our goal is to estimate the improvement in slew rate ($\mathrm{d}v_\text{DET}/\mathrm{d}t$) and pulse amplitude that is achieved through the active quenching circuit, relative to the case in which the detector is biased from a constant current source and shunted by a load resistance ($R_\text{L}$), as shown in Fig.~\ref{SCH}(b).
To begin, we note that, when subjected to a current greater than that required to sustain a self-heating normal domain ($I_\text{D}>I_\text{SS}$), the normal domain can be described by a non-linear capacitance of the form\cite{SUSTKB,KERMANFB} $C_\text{eff}\left\{I_\text{D}\right\}\approx{}wd\sqrt{\psi{}I_\text{D}^2/I_\text{SW}^2-1}/\left[\left(2\rho_\text{n}v_\text{0}\right)\left(\psi{}I_\text{D}^2/I_\text{SW}^2-2\right)\right]$, where $I_\text{SW}\le{I_\text{C}}$ is the switching current, $v_\text{0}\equiv\sqrt{\left(h_\text{c}\kappa\right)/\left(c^2d\right)}$ is the characteristic normal domain velocity, $\psi\equiv\rho_\text{n}I_\text{SW}^2/\left[\left(h_cw^2d\right)\left(T_\text{C}-T_\text{SUB}\right)\right]$ is the Stekly parameter~{\cite{stekly}}, $T_\text{SUB}$ is the bath temperature, $h_\text{c}$ is the thermal boundary conductance between the superconducting film and the substrate, $w$ is the nanowire width, and $d$, $\rho_\text{n}$, $\kappa$, $c$, and $T_\text{C}$ are the thickness, normal resistivity, thermal conductivity, specific heat per unit volume, and critical temperature of the superconducting film, respectively.
Next, we assume that a normal domain of infinitesimal length bridges the nanowire at $t=0$. All of the bias current will still be flowing through the nanowire ($I_\text{D}\left(0\right)=I_\text{B0}$) and the voltage across the normal domain will begin to slew as $\mathrm{d}v_\text{n}/\mathrm{d}t=I_\text{B0}/C_\text{eff}\left\{I_\text{B0}\right\}$. As this voltage grows, the current through the nanowire will decrease as part of the bias current is diverted to charge the load capacitance. Eventually, at time $t_\text{D}$, the currents charging the normal domain and the load capacitance will equalize such that the voltages across the normal domain and the load capacitance slew at the same rate, i.e., when
$I_\text{SLEW}C_\text{L}=\left(I_\text{B0}-I_\text{SLEW}\right)C_\text{eff}\left\{I_\text{SLEW}\right\},$
where $I_\text{SLEW}$ is the steady-state current flowing through the nanowire. So, between $t=0$ and $t=t_\text{D}$, the kinetic inductance will have discharged by $\Delta{E_\text{L}}=\left(I_\text{B0}-I_\text{SLEW}\right)^2L_\text{K}/2$. As this discharge aids normal domain growth, the slew rate will be maximum during this time interval. Thus, we consider the time interval during which the inductor discharges as that of the highest performance and constrain the circuit operation such that a reset is triggered prior to $t_\text{D}$.
\begin{figure}[bt!]\centering
\includegraphics[width=\columnwidth]{Figures/SCH3.pdf}
\caption{Equivalent circuit of an (a) actively quenched and (b) passive quenched SNSPD.}
\label{SCH}
\end{figure}
By approximating $C_\text{eff}$ as a linear function of time, it can be shown that the relative improvements in slew rate and signal amplitude achieved by active quenching are approximated as \hrd{(a derivation is given in Appendix A)}
\begin{equation}
\frac{SR_\text{AQ}}{SR_\text{PQ}}\approx\frac{2\sqrt{2L_\text{K}C_\text{eff}\left\{I_\text{B0}\right\}}}{R_\text{L}\left(\left<C_\text{eff}\right>+C_\text{L}\right)}
\label{SRcomp}
\end{equation}
and
\begin{equation}
\frac{v_\text{PK,AQ}}{v_\text{PK,PQ}}\approx\sqrt{2.2\frac{I_\text{SLEW}}{I_\text{B0}}\frac{L_\text{K}/R_\text{L}}{R_\text{L}\left(\left<C_\text{eff}\right>+C_\text{L}\right)}},
\label{voltcomp}
\end{equation}
where $\left<C_\text{eff}\right>=\left(C_\text{eff}\left\{I_\text{B0}\right\}+C_\text{eff}\left\{I_\text{SLEW}\right\}\right)/2$. For parameter values consistent with an NbN detector on a sapphire substrate
that is biased at 0.93\,$I_\text{SW}$, Equations~(\ref{SRcomp}) and (\ref{voltcomp}) evaluate to unity for kinetic inductances of approximately 200 and 100\,pH, respectively\footnote{Here we have assumed a load capacitance of 50\,fF, corresponding to $I_\text{SLEW}\approx{}I_\text{SW}/2$.}. For larger detectors, the improvement scales proportional to the square root of the kinetic inductance.
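For readers who wish to evaluate Equations~(\ref{SRcomp}) and (\ref{voltcomp}) for other device parameters, the Python sketch below codes up $C_\text{eff}$, $v_0$, $\psi$, and the two ratios using the film and substrate values listed later in Table~\ref{Table1}. The switching current is not specified there, so the 20\,$\mu$A used below is an assumed placeholder, as are the 50\,fF load capacitance and $I_\text{SLEW}\approx I_\text{SW}/2$ taken from the footnote; the printed numbers are therefore illustrative rather than a reproduction of the crossover inductances quoted above.
\begin{verbatim}
import numpy as np

# Film/substrate parameters from Table 1 (NbN on sapphire).
h_c, kappa, c = 50e3, 0.108, 4400.0        # W/m^2K, W/mK, J/m^3K
rho_n, T_c, T_sub = 3e-6, 10.5, 2.5        # ohm*m, K, K
w, d = 80e-9, 4e-9                         # nanowire width and thickness, m

I_sw = 20e-6          # ASSUMED switching current (not given in Table 1)
I_b0 = 0.93 * I_sw    # bias point used in the text
I_slew = 0.5 * I_sw   # footnote: C_L = 50 fF corresponds to I_SLEW ~ I_SW/2
C_L, R_L = 50e-15, 50.0

v0 = np.sqrt(h_c * kappa / (c**2 * d))                    # domain velocity
psi = rho_n * I_sw**2 / (h_c * w**2 * d * (T_c - T_sub))  # Stekly parameter

def C_eff(I):
    """Effective normal-domain capacitance at nanowire current I."""
    x = psi * I**2 / I_sw**2
    return w * d * np.sqrt(x - 1) / (2 * rho_n * v0 * (x - 2))

C_avg = 0.5 * (C_eff(I_b0) + C_eff(I_slew))

def ratios(L_k):
    """Slew-rate ratio (Eq. 1) and peak-voltage ratio (Eq. 2)."""
    sr = 2 * np.sqrt(2 * L_k * C_eff(I_b0)) / (R_L * (C_avg + C_L))
    vp = np.sqrt(2.2 * (I_slew / I_b0) * (L_k / R_L) / (R_L * (C_avg + C_L)))
    return sr, vp

for L_k in (100e-12, 250e-9, 1e-6):
    sr, vp = ratios(L_k)
    print(f"L_K = {L_k:.2e} H: SR ratio = {sr:.2f}, V ratio = {vp:.2f}")
\end{verbatim}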
It is also important to understand the requirements that must be met for the detector to recover and be re-biased. First, we consider the operation of resetting the detector to the superconducting state. For the detector to recover, $I_\text{D}$ must drop below $I_\text{SS}$ so that phonon cooling can overcome Joule heating.
However, since recovery occurs at a non-zero value of $I_\text{D}$, the kinetic inductance is still energized ($E_\text{L}\approx{I_\text{SS}^2L_\text{k}/2}$). Referring to Fig.~\ref{SCH}(a), when the detector is in the superconducting state, the circuit behaves as a parallel LC resonator with resonant frequency $\omega_0=1/\sqrt{L_\text{K}C_\text{L}}$ and quality factor $Q=R_\text{DQ}\sqrt{C_\text{L}/L_\text{K}}$. To prevent ringing of the voltage at the output of the detector, it is desirable for $R_\text{DQ}$ to be smaller than $\sqrt{L_\text{K}/C_\text{L}}/2$ during the reset operation. \hrd{Similarly, when the detector is rebiased, it is essential that the circuit is sufficiently damped to prevent ringing, which could cause the instantaneous bias current through the detector to exceed $I_\text{SW}$, leading to oscillation. However, in this case,} it is feasible to employ a somewhat larger value of $R_\text{DQ}$ if the slew rate of the bias current waveform is limited (so as to control the energy at frequencies in the vicinity of $\omega_\text{0}$). \hrd{Thus,} one may choose to tolerate some ringing after the quench operation so that a single value of $R_\text{DQ}$ may be used while the device is quenched and rebiased.
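The damping condition above is straightforward to evaluate numerically. The sketch below computes the resonant frequency, the critical resistance $\sqrt{L_\text{K}/C_\text{L}}/2$, and the quality factor for the two kinetic inductances used later in this work, assuming (purely for illustration) a 50\,fF load capacitance.
\begin{verbatim}
import numpy as np

C_L = 50e-15  # assumed load capacitance (illustrative value only)

for L_k in (250e-9, 1e-6):     # kinetic inductances of the devices used later
    f0 = 1.0 / (2 * np.pi * np.sqrt(L_k * C_L))   # resonant frequency
    r_crit = 0.5 * np.sqrt(L_k / C_L)             # R_DQ below this avoids ringing
    q = 3.8e3 * np.sqrt(C_L / L_k)                # Q for the prototype's 3.8 kOhm
    print(f"L_K = {L_k*1e9:.0f} nH: f0 = {f0/1e9:.2f} GHz, "
          f"critical R_DQ = {r_crit:.0f} Ohm, Q at 3.8 kOhm = {q:.2f}")
\end{verbatim}
With these assumed values, the $R_\text{DQ}=3.8$\,k$\Omega$ setting used later for the prototype lies above the critical resistance, which would be consistent with the choice, noted above, to tolerate some ringing rather than switch $R_\text{DQ}$ between phases.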
From the discussion above, it is clear that the performance of an actively quenched SNSPD is expected to be optimum when the capacitance loading the detector is minimized. Specifically, it is desirable that the total capacitance seen at the output node of the detector is at most of the same order of magnitude as that of $\left<C_\text{eff}\right>$. For a typical NbN detector on a sapphire substrate, $\left<C_\text{eff}\right>$ is on the order of 35\,fF. As the bondpads of a typical CMOS integrated circuit present a capacitance on the order of 30\,fF, it is essential that other capacitances presented by the readout and interface circuitry be minimized. It is therefore important to tightly integrate the SNSPD with the active quenching circuit so as to minimize parasitic capacitances.
\section{Results}
A proof-of-concept bias and control circuit has been implemented using a commercial \hrd{silicon germanium (SiGe)} BiCMOS integrated circuit process\cite{orner20030}. A block diagram of the chip connected to an SNSPD appears in Fig.~\ref{newSCH}(a).
The detector is biased from a resistive current source, which can be disabled using a CMOS switch. The bias current can be controlled using a pair of digitally programmable resistors, each covering the range of 1.2--50\,k$\Omega$.
The state of the detector is sensed using a comparator whose threshold is set by an off-chip reference voltage. The enable port of the current source is controlled through a delayed version of the comparator output. The delay block was implemented to be asymmetric, such that it mainly affects the falling edge (\hrd{which enables} the bias current \hrd{after quenching has occurred}). The delay associated with the falling edge is programmable over the range of \hrd{0--20}\,ns, \hrd{not including the propagation delay associated with the other components in the feedback loop}. An additional programmable resistor ($R_\text{DQ}$) covering the range of 2--25\,k$\Omega$ was employed to reduce ringing during the quenching and re-biasing phases of operation. For the purpose of this proof-of-concept demonstration, this resistor was not open-circuited during the detection phase. Finally, a digital output is provided via an inverter that is terminated in a \hrd{41:1} voltage attenuator. This \hrd{attenuator} serves to reduce the power required to drive a 50\,$\Omega$ line. A die micrograph of the fabricated IC appears in Fig.~\ref{newSCH}(b).
\begin{figure}[bt!]\centering
\includegraphics[width=\columnwidth]{Figures/NewSCHv3.pdf}
\caption{Device developed for demonstration of active quenching. (a) Simplified block diagram of SiGe IC. (b) Die photograph of fabricated SiGe IC. {The chip dimensions are $1\,\mathrm{mm}\times{}0.46$\,mm.} (c) False color SEM photograph of an example NbTiN detector with active area diameter of \hrd{15\,$\mu$m}. The nanowire width is 50\,nm and the fill factor is 33\%. (d) Photograph of hybrid assembly. The terminals of the SNSPD were directly bonded to the integrated circuit. \hrd{Scale bars in (b)--(d) are approximate.}}
\label{newSCH}
\end{figure}
A module was designed to allow for evaluation of the active quenching scheme using NbTiN detectors \hrd{with diameters of 5\,$\mu$m and 10\,$\mu$m}, corresponding to kinetic inductances of 250\,nH and 1\,$\mu$H, respectively. These devices were fabricated on top of a 330\,nm Si$_3$N$_4$ film, which was deposited on an oxidized silicon wafer. The nanowire width was approximately 50\,nm and the NbTiN film thickness, as determined through TEM imaging, was approximately 6.5\,nm. An SEM photograph of a representative device appears in Fig.~\ref{newSCH}(c). Details regarding the nanowire fabrication have been reported previously~\cite{Risheng}. \hrd{All SNSPDs employed here were taken from the same wafer, which was found to have near uniform performance.}
From simulation, we estimate the input capacitance of the active quenching IC to be approximately 70\,fF. To avoid the introduction of significant additional stray capacitances, we mounted the detectors in close proximity to the \hrd{BiCMOS integrated circuit} and made direct bondwire connections between the two chips (see Fig.~\ref{newSCH}(d)).
Light was coupled to the detectors using flood illumination via SMF-28 fibers that were terminated in open-ended FC/PC connectors. The FC/PC connectors were seated in mating sleeves that were coarsely aligned to the detectors under a microscope. The estimated working distance between the tip of the FC/PC connector and the surface of the detector chip was 15\,mm, corresponding to a spot with a diameter on the order of 2\,mm. \hrd{As such, we estimate geometric coupling losses of 58\,dB and 52\,dB when using 250\,nH (5\,$\mu$m diameter) and 1\,$\mu$H (10\,$\mu$m diameter) detectors, respectively. This large coupling loss was accepted for this proof-of-concept work to ease the challenge of aligning a fiber to the SNSPD.}
\hrd{Testing was carried out in a commercial closed-cycle cryostat. The detectors were illuminated} at 1550\,nm using both CW (Ando AQ8204\hrd{, {10}\, mW}) and femtosecond lasers (Calmar FPL-02CFF: \hrd{2\,mW average power, <500\,fs duration, and 30\,MHz repetition rate), attenuated to single photon levels\footnote{\hrd{As an example, to achieve count rates of 200 kcps using the CW and fs laser, we employed 50.5 and 44\,dB of external attenuation, respectively, when the 1\,$\mu$H detector was biased at 95\% of its switching current. Including the estimated 52\,dB of coupling losses, we find a photon flux of about $4\times10^6$ photons/second was incident upon the detector in each case. In the case of the CW laser, we can estimate the probability of a two-photon event by assuming a Poissonian process and a conservative detection window of 1\,ns, we find that there is a 0.4\% probability of a two photon event. Similarly, applying coherent statistics for the pulsed case, we find a 0.8\% chance of more than one photon being incident upon the detector during any given pulse. As these probabilities are both considerably smaller than the estimated system detection efficiency of just over 4\%, we believe that these power levels are well within the single photon regime.}}}. An additional set of baseline measurements were carried out in which the electrical connections between each detector and the active quenching circuit were broken and the detectors were directly wirebonded to the output transmission lines \hrd{of the printed circuit board}. The performance of each device was then measured using a typical 50\,$\Omega$ readout chain. \hrd{This procedure permitted a direct comparison of active and passive quenching with the same detector employed for both sets of measurements}. Detailed block diagrams of the test setups employed for characterization of the detectors in the active and passive quenching configurations appear in Figs.~\ref{setup}(a) and \ref{setup}(b), respectively.
\begin{figure}
\includegraphics[width=\columnwidth]{Figures/Setup_EDIT.pdf}
\caption{Test setup for characterization of the detectors in the (a) active and (b) passive quenching configurations.}\label{setup}
\end{figure}
All measurements were taken at a physical temperature of 2.8\,K, as measured using a silicon diode temperature sensor that was directly mounted on the lid of the modules. Moreover, \hrd{unless otherwise stated}, the active quenching circuit was configured with {$R_\text{DQ}=3.8\,\mathrm{k}\Omega$, $R_\text{BIAS}=10\,\mathrm{k}\Omega$,} and with a reset delay in the range of {4--10\,ns}. The quiescent SNSPD bias current was adjusted using a laboratory voltage source and the active quenching circuit was biased at a power consumption of approximately 100\,$\mu$W, \hrd{excluding} the dissipation of the output driver.
The latching ($I_\text{LATCH}$) and hot-spot currents ($I_\text{SS}$) were measured for \hrd{two detectors of each kinetic inductance.}
\hrd{The latching currents were determined with all fiber ports of the cryostat terminated with metal caps and correspond to the minimum bias currents that caused the detectors to latch due to a dark count.}
When biased through the active quenching circuit, we measured \hrd{$I_\text{LATCH}=13.1$ and 12.9\,$\mu$A for the 250\,nH detectors and 11.5\,$\mu$A, for both of the 1\,$\mu$H detectors}. On the other hand, when using the passive quenching configuration, we found that the latching currents of the same \hrd{pair of 250\,nH detectors dropped to 12.3\,$\mu$A and {12.1}\,$\mu$A, respectively}. \hrd{Similarly, we found that the latching currents for the pair of 1\,$\mu$H detectors dropped to 10.2 and 11.3\,$\mu$A, respectively.} Interestingly, we found the values of $I_\text{SS}$ to be consistent between the active and passive quenching configurations, \hrd{with values of 1.86\,$\mu$A and 1.84\,$\mu$A recorded for the 250\,nH detectors and 1.68 and 1.78\,$\mu$A recorded for the 1\,$\mu$H detectors.} This indicates that the change in latching current was not due to a difference in bath temperature\hrd{, but rather the quenching mechanism.}
\begin{figure}\centering
\includegraphics[width=\columnwidth]{Figures/NEW_FIG1b.pdf}
\caption{\hrd{Results: (a) Time domain waveforms for the 1\,$\mu$H
detector. (b) Count rate for actively quenched 250\,nH SNSPD at bias currents of 8.0, 8.5, 9.0, 9.5, 10.0, 11.0, and 11.4\,$\mu$A. The count rate was found to increase monotonically with bias current. The dashed lines are guide lines with a slope of one.
Count rates for (c) 250\,nH and (d) 1\,$\mu$H
detectors in the active and passive quenching configurations at four different signal intensities (3\,dB increment). Dark count rates are also shown as dashed lines for each configuration.}}
\label{EXP}
\end{figure}
Next, we measured the detector performance under CW illumination. Example time domain waveforms measured using each quenching scheme appear in Fig.~\ref{EXP}(a). The output of the active quenching circuit is an inverted pulse with a duration of approximately \hrd{{2.5}}\,ns, which is significantly shorter than the pulse duration observed using the passive quenching scheme. \hrd{This duration is determined by the latency around the feedback loop and could be further reduced by moving to a more advanced CMOS technology node or dissipating more power}. From the measured passive quenching waveforms, we estimated recovery time constants ($\tau_\text{e}=L_\text{K}/R_\text{L}$) of 5\,ns and 20\,ns for the 250\,nH and 1\,$\mu$H devices, respectively. These numbers are consistent with expectation.
The count rates were also measured under CW illumination. Curves were collected as a function of both detector bias and light intensity. Example results appear in Figs.~\ref{EXP}(b)--(d). In the active quenching configuration, we observed a linear relationship between the intensity and count rates over a wide range of bias currents and count rates. We also found that count rates greater than 95\,Mcps and 12\,Mcps were achievable for 250\,nH and 1\,$\mu$H devices, respectively. In each case, the device eventually latched as the light intensity was increased. We believe that this effect could be avoided by open-circuiting $R_\text{DQ}$ during the detection phase of operation.
Count rates were also measured with each SNSPD configured in the passive quenching configuration (see Figs.~\ref{EXP}(c) and \ref{EXP}(d)). The results were found to be consistent with the active quenching measurements under low-illumination. However, the maximum count rates that could be achieved before the devices latched were found to be 25\,Mcps and \hrd{7}\,Mcps for 250\,nH and 1\,$\mu$H devices, respectively.
We believe that the active quenching approach was able to reach higher count rates due to the reduced dead time and dc coupling.
\hrd{
The dead time was studied by performing inter-photon arrival experiments. For these measurements, the devices were excited by a 1550\,nm CW source, attenuated to produce average count rates of approximately 200\,kcps. Interphoton arrival statistics were acquired over a wide range of detector bias currents and in excess of 500,000 statistics were acquired for each measurement. Unless otherwise stated, the rebias delays were set to {4}\,ns and {6}\,ns for 250\,nH and 1\,$\mu$H, respectively. In all cases, we observed exponential inter-photon arrival histograms whose distributions were consistent with expectation given the observed count rates.
Example inter-photon arrival histograms for 250\,nH and 1\,$\mu$H detectors at a bias current of 11\,$\mu$A are shown in Figs.~\ref{interphoton}(a) and \ref{interphoton}(b), respectively. These data have been normalized by the expected value at zero delay ($2N\sinh\left\{R\Delta\tau/2\right\}$, where $N$ is the total number of counts, $\Delta{\tau}$ is the bin size, and $R$ is the count rate). The active quenching scheme was found to reduce the recovery time significantly.}
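For reference, the normalization used here follows directly from Poissonian arrival statistics: for a rate $R$ and bin width $\Delta\tau$, the expected number of interphoton delays in a bin centred at delay $t$ is $N e^{-Rt}\,2\sinh\left\{R\Delta\tau/2\right\}$, which reduces to $2N\sinh\left\{R\Delta\tau/2\right\}$ at zero delay. The sketch below, with arbitrary illustrative values of $R$, $\Delta\tau$, and $N$, builds this expected histogram and applies the same normalization.
\begin{verbatim}
import numpy as np

R = 200e3        # average count rate, counts per second (illustrative)
dt = 5e-9        # histogram bin width in seconds (illustrative)
N = 500_000      # total number of recorded delays (illustrative)

centers = dt * (np.arange(2000) + 0.5)                  # bin centres
expected = N * np.exp(-R * centers) * 2 * np.sinh(R * dt / 2)
normalized = expected / (2 * N * np.sinh(R * dt / 2))   # zero-delay normalization
# After normalization the ideal Poissonian curve is simply exp(-R * t).
assert np.allclose(normalized, np.exp(-R * centers))
\end{verbatim}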
\hrd{To further understand the recovery operation, we quantified the dead time, which we define here as the time after a detection at which the count statistics returned to 90\% of the expected value (assuming Poissonian arrival statistics). The dead time as a function of detector bias current is shown for 250\,nH and 1\,$\mu$H devices in Figs.~\ref{interphoton}(c) and (d), respectively. In comparison to passive quenching, the active quenching scheme achieved significantly shorter dead times across the full range of bias currents for which we performed measurements.}
\begin{figure}[bt!]
\includegraphics[width=\columnwidth]{Figures/Interphoton_V3.pdf}
\caption{\hrd{Interphoton arrival statistics. Example interphoton arrival statistics for (a) 250\,nH and (b) 1\,$\mu$H detectors at a bias current of 11\,$\mu$A. Dead time as a function of bias current for (c) 250\,nH and (d) 1\,$\mu$H detectors. The solid red and dashed blue lines correspond to data acquired using active and passive quenching, respectively. Curves (a)--(d) were acquired with rebias delays of 4\,ns and 6\,ns for the 250\,nH and 1\,$\mu$H detectors, respectively. Dead time as a function of rebias delay at a bias of 11\,$\mu$A for (e) 250\,nH and (f) 1\,$\mu$H detectors. The solid red and dashed black lines correspond to experimental data and the simple model of the dependence of dead time on rebias delay, as described in the text.}}\label{interphoton}
\end{figure}
\hrd{We also studied the dependence of the recovery time on the rebias delay (see Figs.~\ref{interphoton}(e) and (f)). It was possible to acquire data for rebias delays as short as 4\,ns and 6\,ns for 250\,nH and 1\,$\mu$H devices, respectively. At shorter rebias delays, the devices displayed afterpulsing and latching. Also shown in Figs.~\ref{interphoton}(e) and (f) are trendlines that were generated by assuming that the recovery time was equal to the programmed delay plus a constant offset corresponding to the average difference between the measured recovery time and the programmed delay. Referring to Figs.~\ref{interphoton}(e) and (f), the measured dead times are in fact consistent with the trend lines, validating this simple model. Extrapolation to zero rebias delay indicates recovery time limits of 2.8\,ns and 5.5\,ns for 250\,nH and 1\,$\mu$H detectors, respectively.}
\hrd{As described earlier, after-pulsing is a potential issue with active quenching, since the dynamic current can ring and exceed the critical current if the nanowire is rebiased too quickly. To check for after-pulsing, we repeated the inter-photon arrival experiment over the range of bias currents reported in Fig.~\ref{interphoton}, using the femtosecond laser as the excitation source. For these measurements, the optical signal was attenuated to provide a count rate on the order of 200\,kHz when each device was biased at 11\,$\mu$A}. \hrd{In each case, we found that all significant peaks in the interphoton arrival histogram were constrained to time bins that were an integer multiple of the laser period, implying that no significant afterpulsing occurred.}
Next, we measured the dark count rates as a function of bias current. For these measurements, all fiber ports were blocked using metal caps. The results appear in Figs.~\ref{EXP}(c) and \ref{EXP}(d). In each case, we observed significantly lower dark count rates when using active quenching as opposed to passive quenching. Interestingly, this was due to the fact that we observed after-pulsing for each dark count when passive quenching was employed (see Appendix B). No such after-pulsing was observed for photon-induced counts.
While further work is required to understand this result, one possible explanation could be that the active quenching scheme provides a more effective reset of the detector by forcing the detector bias current to drop below $I_\text{SS}$ for a controlled period of time.
\begin{figure}[bt!]
\centering
\includegraphics[width=\columnwidth]{Figures/NEW_FIG2.pdf}
\caption{Timing jitter as a function of bias current for the (a) 250\,nH and (b) 1\,$\mu$H devices. Improved timing jitter was observed when active quenching was employed.}\label{jitter}
\end{figure}
Finally, we measured the timing jitter of the SNSPDs in each configuration using the 1550\,nm femtosecond laser. The illumination employed for these measurements was set such that count rates of approximately 200\,kHz were achieved when the detectors were biased near their latching currents. \hrd{This count rate is $150\times$ smaller than the repetition rate of the laser and was chosen to ensure that the illumination was in the single photon regime.} The results appear in \hrd{Figs.~\ref{jitter}(a) and \ref{jitter}(b)}. At all bias points, we found that the proposed approach achieved superior jitter performance in comparison to the standard approach. \hrd{As explained in Appendix~C, the difference in measured jitter is not explained by room temperature amplifier noise}. As the comparator used in prototype circuit was not designed with noise performance in mind, there is likely room for further improvement in the jitter by \hrd{further engineering of this block}.
\section{Conclusion}
Active quenching appears to be a promising approach towards improving the performance of SNSPDs. \hrd{In this article,} we have developed a basic understanding of the performance expected when using active quenching for the bias and readout of an SNSPD and \hrd{have} demonstrated a compact and low-power actively quenched SNSPD circuit. Moreover, we have found experimentally that this bias and readout technique is able to simultaneously improve the count rates, dark count rates, and timing jitter of SNSPDs.
\hrd{While the results so far are encouraging, further work is required to realize the full potential of actively quenched SNSPDs. For instance, while the jitter of the actively quenched SNSPDs was found to be lower than that which was achieved when the same detectors were passively quenched, the observed values of jitter were still considerably higher than the current state of the art, which was achieved using passive quenching and cryogenic low noise amplifiers~\cite{JPL}. Further work is required in order to explore the fundamental limits to the jitter of actively quenched SNSPDs. This could, for instance, include the implementation of an active quenching circuit with an ancillary linear buffer amplifier to permit direct monitoring of the voltage across the detector.}
\hrd{Another area in which there is room for improvement is in reducing parasitic capacitances, which slow the achievable slew rate of an actively quenched SNSPD. For this preliminary work, we used relatively large bondpads on both the superconductive and BiCMOS ICs and connected the two using wire bonds, resulting in a total parasitic capacitance that is on the order of the effective normal domain capacitance. A much more optimum approach would be to either fabricate the SNSPD directly on top of the silicon IC, removing the need for large bondpads, or to use membrane SNSPDs~\cite{Faraz} to allow for co-integration.}
\hrd{Finally, for this proof-of-concept work, we have made no attempt to couple light efficiently to the SNSPD. However, if active quenching is to be used in a practical application, it is essential that light be efficiently coupled to the detector. While it is important that the control electronics and the detector are in close proximity to minimize parasitic capacitance, we do not believe this precludes efficient fiber coupling to the detector. Specifically, we believe that it should be feasible to employ either the self-aligned technique described in~\cite{SaeWooSelfAlign} or waveguide coupling~\cite{Hong}. We note that, to make the former approach work assuming separate superconductor and semiconductor chips, it may be necessary to space the detector approximately 1.5\,mm from the BiCMOS IC. However, it should still be feasible to make an interconnect between the two devices using superconductive high impedance (inductive) transmission lines.
In summary, we have shown via theoretical analysis and proof-of-principle demonstration that active quenching of SNSPDs provides performance advantages in terms of dead time and jitter. Future work in this area should focus on improved performance, with potential research directions including lower noise integrated electronics, tighter semiconductor/superconductor integration, and optimized fiber coupling.}
\section*{Appendix A: Derivation and validation of Equations (1) and (2)}
Here, we derive equations for the ratio of rising-edge slew rate and peak voltage for the active quenching topology with respect to a passive quenching approach. \hrd{In doing so, we neglect the current-dependence of the kinetic inductance~\cite{clem2012kinetic} as well as distributed effects~\cite{santavicca2016microwave}}. We first derive expressions for the slew rate, rise time, and peak voltage for the active quenching architecture. Next, we repeat this exercise for a passively quenched SNSPD and take ratios to arrive at Equations~(1) and (2) from the main text. Finally, the expressions are validated through circuit simulations.
\subsection*{Rising-edge dynamics of an actively quenched SNSPD}
The rising-edge dynamics of an actively quenched SNSPD can be studied by considering the case of a capacitively terminated SNSPD, biased with a constant current source. As described in the main text, the normal domain dynamics of the detector embedded in this \hrd{circuit} can be modelled using a non-linear capacitance, $C_\text{eff}\left(i_\text{D}\right)$, where $i_\text{D}$ is the instantaneous current flowing through the normal domain (see the main text for the functional form of $C_\text{eff}\left(i_\text{D}\right)$).
We begin by estimating the average output slew rate of the voltage across the detector. We note that the currents through the normal domain at $t=0$ and $t=t_\text{D}$ are $I_\text{B0}$ and $I_\text{SLEW}$, respectively. Using this information, we take a linear approximation for the non-linear capacitance such that its average value between $t=0$ and $t=t_\text{D}$ is $\left<C_\text{eff}\right>\approx{\left(C_\text{eff}\left\{I_\text{B0}\right\}+C_\text{eff}\left\{I_\text{SLEW}\right\}\right)}/{2}$. The average output slew rate is thus approximated as
\begin{equation}\left<SR_\text{AQ}\right>\approx{I_\text{B0}/\left(\left<C_\text{eff}\right>+C_\text{L}\right)}.\label{SR}\end{equation}
Next, we write the output voltage at the point in time at which the inductor has fully discharged ($t=t_\text{D}$) as
\begin{equation}
v_\text{PK,AQ}\approx{}R_\text{NPK,AQ}I_\text{SLEW},
\end{equation}
where $R_\text{NPK,AQ}$ is the normal domain resistance at $t=t_\text{D}$. Since we also know that $v_\text{PK,AQ}\approx{}\left<SR_\text{AQ}\right>t_\text{rise,AQ}$, we can write an expression for the peak normal domain resistance in terms of the rise time
\begin{equation}
R_\text{NPK,AQ}\approx\frac{\left<SR_\text{AQ}\right>t_\text{rise,AQ}}{I_\text{SLEW}}.
\end{equation}
Now, assuming that the dominant impedances in the circuit during the discharge period are the kinetic inductance and the normal domain resistance, the discharge time can be estimated as $t_\text{rise,AQ}\approx{}2.2L_\text{K}/\left<R_\text{N,AQ}\right>$. Finally, taking $\left<R_\text{N,AQ}\right>\approx{}R_\text{NPK,AQ}/2$, we arrive at expressions for the discharge time and peak output voltage,
\begin{equation}
t_\text{rise,AQ}\approx{}\sqrt{2.2\frac{I_\text{SLEW}}{I_\text{B0}}L_\text{K}\left(\left<C_\text{eff}\right>+C_\text{L}\right)}
\end{equation}
and
\begin{equation}
v_\text{PK,AQ}\approx{}\sqrt{\frac{2.2L_\text{K}I_\text{B0}I_\text{SLEW}}{\left<C_\text{eff}\right>+C_\text{L}}}.
\end{equation}
\subsection*{Rising-edge dynamics of a resistively shunted SNSPD}
Next, we approximate the performance characteristics of a resistively shunted SNSPD so that we can make a comparison to the proposed technique. To begin, we note that the current through a resistively shunted SNSPD drops from $I_\text{B0}$ to approximately zero during the time period where the output is transitioning. During this time, a normal domain forms and a voltage develops across this normal domain. Rather than approximating the resistance and current with linear functions, as we did for the case of a capacitively shunted SNSPD, we use exponential functions since the dynamics in this case are more strongly dominated by the kinetic inductance. Specifically, we model the normal domain current and resistance as
\begin{equation}
I_\text{D,PQ}\approx{}I_\text{B0}\exp\left\{\frac{-2.2t}{t_\text{rise,PQ}}\right\}
\label{pcurrent}
\end{equation}
and
\begin{equation}
R_\text{N,PQ}\approx{}R_\text{NPK,PQ}\left(1-\exp\left\{\frac{-2.2t}{t_\text{rise,PQ}}\right\}\right),
\label{pres}
\end{equation}
\hrd{where $I_\text{D,PQ}$ is the dynamic nanowire current, $t_\text{rise,PQ}$ is the rise time of the passively quenched detector, and $R_\text{N,PQ}$ and $R_\text{NPK,PQ}$ are the dynamic and peak normal domain resistances, respectively}. Taking the derivative of the product of Equations~(\ref{pcurrent}) and (\ref{pres}) with respect to time, we arrive at an expression for the rate at which the voltage across the normal domain grows.
\begin{equation}
SR_\text{N,PQ}\left(t\right)\approx\frac{2.2R_\text{NPK,PQ}I_\text{B0}}{t_\text{rise,PQ}}\left(2\exp\left\{\frac{-4.4t}{t_\text{rise,PQ}}\right\}-\exp\left\{\frac{-2.2t}{t_\text{rise,PQ}}\right\}\right).
\label{SRN}
\end{equation}
At $t=0$, we know that the voltage across the normal domain will slew as $SR_\text{N,PQ}\left(0\right)\approx{}I_\text{B0}/C_\text{eff}\left\{I_\text{B0}\right\}$. Thus,
\begin{equation}
t_\text{rise,PQ}\approx{}2.2R_\text{NPK,PQ}C_\text{eff}\left\{I_\text{B0}\right\}.
\end{equation}
We can also approximate the rise time in terms of the kinetic inductance and average normal domain resistance as $t_\text{rise,PQ}\approx{}2.2L_\text{K}/\left<R_\text{N,PQ}\left(t\right)\right>$, where
\begin{equation}
\left<R_\text{N,PQ}\left(t\right)\right>\approx\frac{R_\text{NPK,PQ}}{t_\text{rise,PQ}}\int_0^{t_\text{rise,PQ}}\left(1-\exp\left\{-2.2t/t_\text{rise,PQ}\right\}\right)\mathrm{dt}.
\end{equation}
Evaluating this integral, we find $\left<R_\text{N,PQ}\right>\approx{}3R_\text{NPK,PQ}/5$. Thus, the peak normal domain resistance, the rise time, the peak output voltage, and the average slew rate can be approximated as
\begin{equation}
R_\text{NPK,PQ}\approx\sqrt{\frac{5}{3}\frac{L_\text{K}}{C_\text{eff}\left\{I_\text{B0}\right\}}},
\end{equation}
\begin{equation}
t_\text{rise,PQ}\approx{}2\sqrt{2L_\text{K}C_\text{eff}\left\{I_\text{B0}\right\}},
\end{equation}
\begin{equation}
v_\text{PK,PQ}\approx{}I_\text{B0}R_\text{L},
\end{equation}
and
\begin{equation}
\left<SR_\text{PQ}\right>\approx\frac{I_\text{B0}R_\text{L}}{2\sqrt{2L_\text{K}C_\text{eff}\left\{I_\text{B0}\right\}}}.
\end{equation}
\subsection*{Comparison of rising-edge dynamics for active and passive quenching}
Finally, we take the ratios of the active quenching metrics to those of the passive quenching approach in order to quantify the improvement in rise time characteristics.
\begin{equation}
\frac{\left<SR_\text{AQ}\right>}{\left<SR_\text{PQ}\right>}\approx{}\frac{2\sqrt{2L_\text{K}C_\text{eff}\left\{I_\text{B0}\right\}}}{R_\text{L}\left(\left<C_\text{eff}\right>+C_\text{L}\right)},
\label{eq1}
\end{equation}
\begin{equation}
\frac{t_\text{rise,AQ}}{t_\text{rise,PQ}}\approx\frac{1}{2}\sqrt{1.1\frac{I_\text{SLEW}}{I_\text{B0}}\frac{\left<C_\text{eff}\right>+C_\text{L}}{C_\text{eff}\left\{I_\text{B0}\right\}}},
\end{equation}
and
\begin{equation}
\frac{v_\text{pk,AQ}}{v_\text{pk,PQ}}\approx{}\sqrt{2.2\frac{I_\text{SLEW}}{I_\text{BO}}\frac{L_\text{K}/R_\text{L}}{R_\text{L}\left(\left<C_\text{eff}\right>+C_\text{L}\right)}}
\label{eq2}
\end{equation}
\begin{table}[bt!]
\centering
\caption{Parameters used for the verification of Equations~(\ref{SRcomp}) and (\ref{voltcomp}), from \cite{KERMANFB,duan2010sub}.}
\begin{tabular}{llllllllll}
\hline
&$L_\text{S}$&$h_\text{c}$&$\kappa$&$c$&$\rho_\text{n}$&$T_\text{C}$&$T_\text{SUB}$&$w$&$t$\\
\hline
Units&pH/$\Box$&W$\cdot$m$^{-2}\cdot$K$^{-1}$&\,W$\cdot$m$^{-1}\cdot$K$^{-1}$&J$\cdot$m$^{-3}\cdot$K$^{-1}$&$\Omega\cdot$m&K&K&nm&nm\\
\hline
Value&80&50,000&0.108&4,400&3$\times$10$^{-6}$&10.5&2.5&80&4\\
\hline
\label{Table1}
\end{tabular}
\end{table}
\subsection*{Numerical validation of Equations (\ref{SRcomp}) and (\ref{voltcomp})}
A series of simulations were conducted to validate and interpret Equations~(\ref{SRcomp}) and (\ref{voltcomp}). These simulations were carried out in a SPICE environment using the model topology reported in \cite{SUSTKB}. The topologies considered are those from Fig.~\ref{SCH} of the main text, with $R_\text{DQ}$ open circuited for the active quenching case. A list of model parameters used for the SNSPD appears in Table~\ref{Table1}.
\begin{figure}
\includegraphics[width=1\columnwidth]{Figures/NEW_SUP_FIG.pdf}
\caption{\hrd{Verification of equations (\ref{SRcomp}) and (\ref{voltcomp}). Ratio of slew rate for active quenching to that of passive quenching as a function of (a) kinetic inductance, (b) load capacitance, and (c) bias current. Ratio of peak voltage achieved with active quenching to that of passive quenching as a function of (d) kinetic inductance, (e) load capacitance, and (f) bias current. The simulations were carried out using the model described in~\cite{SUSTKB}. }}
\label{Fig2}
\end{figure}
Simulations were carried out for both the actively quenched and resistively shunted SNSPDs over a \hrd{wide} range of kinetic inductances, load capacitances, and bias currents. The ratios of slew rate and peak voltage for the actively quenched SNSPD to those of the resistively shunted device were calculated for each case, and the results are shown in Figs.~\ref{Fig2}(a)--\ref{Fig2}(f). While the simulations predict that the active quenching architecture will achieve a slightly lower improvement in slew rate and a slightly higher improvement in peak voltage than expected from Equations~(\ref{SRcomp}) and (\ref{voltcomp}) of the main text, the trends are consistent with expectation.
Of particular importance is the dependence of the relative improvements on the load capacitance for the actively quenched detector. \hrd{As the load capacitance increases, the improvement which can be achieved using the active quenching architecture decreases.} Thus, it is critical to minimize this capacitance. Since a high-impedance comparator has a capacitive input, there is a practical limit to this value\hrd{; we assume this minimum }to be on the order of 30\,fF, which is slightly higher than a typical bondpad capacitance. However, even with a capacitance of this size, the improvement in performance is still significant.
\section*{Appendix B: Dark count after-pulsing}
The dark count measurements displayed after-pulsing when the passive quenching configuration was employed. Such after-pulsing was not observed for regular counts or when using the active quenching configuration. An example transient waveform demonstrating this phenomenon appears in Fig.~\ref{BURST}(a). Here, a single dark count produces a string of pulses over the course of approximately 0.2\,$\mu$s. This waveform was obtained using a 250-nH detector; however, similar behavior was also observed for the 1-$\mu$H device. As a result of the dark-count-induced after-pulsing, the recorded dark count rates for the passively quenched devices were significantly greater than those of the actively quenched devices.
\begin{figure}
\includegraphics[width=\columnwidth]{Figures/DCR_BURST2.pdf}
\caption{Dark count after-pulsing observed for the passively quenched devices. (a) Time-domain waveform for an example dark count and (b) comparison of dark counts for the same detector in the passive and active quenching configurations.}
\label{BURST}
\end{figure}
\section*{Appendix C: Jitter contribution due to room temperature amplifiers}
\label{JitterAPP}
Here, we estimate the jitter contribution from the room temperature amplifiers employed in the passive quenching configuration in order to rule this out as the reason for the improvement in jitter associated with active quenching. The RMS jitter due to these amplifiers can be estimated as the ratio of the RMS noise voltage to the rising-edge slew rate, both measured at the output of the amplifier. The measured RMS noise voltages and slew rates for the 250\,nH (1\,$\mu$H) detector were found to be 5.3 (4.7)\,mV and 580 (510)\,MV/s, respectively. Thus, the jitter due to amplifier noise was estimated to be 9\,ps RMS, corresponding to 21\,ps FWHM. Since the amplifier jitter contribution should be independent of other sources of jitter, we can de-embed its effect: $T_\text{J,int}=\sqrt{T_\text{J,meas}^2-T_\text{J,amp}^2},$ where $T_\text{J,int}$ and $T_\text{J,meas}$ are the intrinsic and measured jitter, and $T_\text{J,amp}$ is the amplifier jitter contribution. Applying this correction to the data shown in Fig.~\ref{EXP}, we find that it changes the extracted jitter by at most 4\,ps and that the jitter performance of the actively quenched detector remains superior in all cases.
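The estimate and the de-embedding step can be reproduced with the short sketch below. The quoted noise voltage and slew rate for the 250\,nH device are taken from the text; the example measured jitter value is an arbitrary placeholder used only to illustrate the size of the correction.
\begin{verbatim}
import numpy as np

v_noise_rms = 5.3e-3   # RMS noise voltage at the amplifier output [V]
slew_rate   = 580e6    # rising-edge slew rate at the amplifier output [V/s]

T_amp_rms  = v_noise_rms / slew_rate        # ~9 ps RMS
T_amp_fwhm = 2.355 * T_amp_rms              # ~21 ps FWHM (Gaussian)

def deembed(T_meas_fwhm, T_amp=T_amp_fwhm):
    """Intrinsic jitter, assuming the amplifier contribution is independent."""
    return np.sqrt(T_meas_fwhm**2 - T_amp**2)

# Placeholder measured value (60 ps FWHM) -> correction of roughly 4 ps.
print(T_amp_rms, T_amp_fwhm, deembed(60e-12))
\end{verbatim}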
\section*{Acknowledgement}
This work was supported by the Office of Naval Research (ONR) (\#N00014-15-1-2417), the Defense Advanced Research Project Agency (DARPA) (\#W911NF-16-2-0151), the National Science Foundation (\#CCCS-1351744), and Google LLC.
\\
\bibliographystyle{IEEEtran}
\section{Attacks on Deployed Neural Networks}
We present a short survey of published attacks on neural network accelerators. We focus primarily on test-time attacks (attacks on already trained models), as we assume that training-time attacks such as data poisoning~\cite{Biggio2012PoisoningAA} or Neural Trojans~\cite{Trojannn} must happen before the model is deployed.
\noindent \textbf{API attacks:}
API attacks interact with the victim device only through the sensors, the interface, or the network.
Here we assume that the attack is independent of the hardware platform running the neural network and does not rely on any side-channel information.
The majority of the API attacks present in the literature either attempt to (1) exfiltrate the model or the model metaparameters, (2) find adversarial examples, or (3) infer some property of the model's training data.
In~\cite{stealing_nns}, the authors show how machine learning models hosted behind APIs can be exfiltrated. Here the attacker sends crafted inputs and collects outputs from the model until the attacker is able to reconstruct the model behind the API. In the case of simpler ML models such as decision trees, the models can be perfectly reconstructed. However, for more complex models such as neural networks, the attacker cannot simply solve nonlinear equations to arrive at model weights, but must instead train a `student' network on input-output pairs collected from the API~\cite{2015arXiv150302531H}. A similar work~\cite{2017arXiv171101768O} shows the simplicity of reverse-engineering black-box neural network weights, architecture, optimization method and the training/data split. In~\cite{Orekondy}, the authors reframe the goal from model theft to arriving at a `knockoff' model exhibiting the same functionality. In~\cite{metaparameter_theft}, the authors ignore model parameters and instead attempt to steal the hyperparameters of a network. Good hyperparameters, while far smaller than models, can be more difficult to arrive at, as they require many experiments and human effort to tune.
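As an illustration of the `student'-network strategy described above, the following minimal sketch queries a stand-in black-box classifier and fits a knockoff model to the collected input--output pairs. The victim here is a toy placeholder function, not any real API, and the query budget and model sizes are arbitrary choices.
\begin{verbatim}
import numpy as np
from sklearn.neural_network import MLPClassifier

def query_victim(x):
    # Placeholder for the black-box API; a real attack would issue remote queries.
    p = 1.0 / (1.0 + np.exp(-x[:, 0]))
    return np.stack([p, 1.0 - p], axis=1)

rng = np.random.default_rng(0)
X_query = rng.normal(size=(5000, 4))            # probe inputs sent to the API
y_hard  = query_victim(X_query).argmax(axis=1)  # labels returned by the victim

student = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500)
student.fit(X_query, y_hard)                    # train the knockoff

X_test = rng.normal(size=(1000, 4))
agreement = (student.predict(X_test) == query_victim(X_test).argmax(axis=1)).mean()
print("agreement with victim on held-out probes:", agreement)
\end{verbatim}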
In order to violate the integrity of a machine learning model, attackers may attempt to find adversarial examples~\cite{2014arXiv1412.6572G}. While most attacks rely on having access to the white-box model or the output gradient, several works have shown that even black-box networks~\cite{2017arXiv171101768O} and networks with obfuscated gradients~\cite{DBLP:journals/corr/abs-1802-00420} are not resistant to determined attackers.
Lastly, attackers may attempt to infer some information about the data the neural network was trained on. Attacks which determine whether a specific input was used in training a model are called \textit{membership inference} attacks. Though DNN models are typically smaller than the training dataset, they can nonetheless memorize potentially secret information~\cite{DBLP:journals/corr/abs-1802-08232}, as in the example of predictive keyboards memorizing PIN codes or passwords. In~\cite{Salem2018}, authors show that even when the model is behind a black-box API, and the adversary has no knowledge of the victim's training dataset, membership inference attacks are still successful.
\noindent \textbf{Software side-channel attacks:}
API and software side-channel (SC) attacks target a similar attack surface, but software side-channel attacks can additionally gain information through side-effects such as timing or cache side-channels. Here, the attacker abuses information about the physical device processing the attacker's request to gain insight into the internal state of the device.
Both timing and cache side-channels typically cannot reveal anything about the data being processed on the device: timing SC reveal information about the compute intensity of a certain task, and cache SC reveal information about recently accessed addresses in the caches. As such, they are commonly employed to extract coarse-grained information such as the neural network architecture running on a device. For example, in Cache Telepathy~\cite{Yan2018}, attackers use the Flush+Reload~\cite{184415} and Prime+Probe~\cite{Liu:2015:LCS:2867539.2867673} cache SC attacks to measure the size of general matrix multiply (GEMM) operations, first counting the number of parameters in the model, and then narrowing down the model architecture. While this attack is restricted to CPUs, GPUs are no less vulnerable to cache side-channels~\cite{Naghibijouybari2018}. A similar work~\cite{DBLP:journals/corr/abs-1810-03487} is applicable to CPUs, GPUs, and DNN accelerators, and can fingerprint a network after only a single inference operation. It leverages a priori knowledge of major DNN libraries to prime the instruction cache and learn which functions are called during inference.
Timing attacks are also used to reveal model architecture: in~\cite{nn_timing_attack}, the authors assume that the attacker knows the victim's hardware, and is able to buy the same device in order to build timing profiles of different networks. By only knowing the accuracy and the latency of the victim network, the attacker trains many candidate architectures searching for one that has the same signature. This, however, requires the attacker to first steal a part of the training dataset using a membership inference attack~\cite{Salem2018}, which negates much of the need for stealing a model architecture.
Software SC attacks may be less successful in the edge domain compared to the cloud, as edge devices typically serve a single user, while SC are typically used for compromising secure multi-user systems~\cite{cross_vm}. However, as more networks are pushed to the edge, we can expect multi-network systems with different privileges, goals, and timescales to become increasingly common. An example of this may be predictive keyboards, which perform both inference (text prediction) and NN training on the same device~\cite{DBLP:journals/corr/McMahanMRA16}.
Another potential vulnerability may be introduced with the adoption of data-dependent inference latency. For example, DARPA's N-ZERO program~\cite{darpa} seeks low-power edge devices that may need to stay dormant for years and have several levels of neural networks, each activating the next one once a certain pattern is sensed. These types of networks are inherently vulnerable to timing attacks, as conventional methods for defending against timing attacks, such as constant-time functions, negate all the benefits of variable-latency inference.
\noindent \textbf{Physical side-channel attacks:}
Physical side-channels typically measure some physical quantity, such as power, electromagnetic radiation, vibration, etc.
Several works have explored using physical side-channels to extract the neural network architecture, weights, or user inputs to an edge device.
Memory access patterns can trivially reveal model architecture. In~\cite{Hua2018}, authors attempt to steal a model and model architecture running on a secure enclave such as Intel SGX~\cite{Costan2016IntelSE}, by observing memory access traces. While traces allow attackers to learn the architecture, the model can only be stolen if the accelerator exploits data-dependent model properties, such as the sparsity of hidden neuron activations~\cite{cnvlutin}. Power and electromagnetic (EM) side-channel attacks are explored in~\cite{Batina}, where the authors use EM SCs to learn the activation function, simple power analysis to learn model architecture, and differential power analysis to learn network weights. While simple power analysis does not require invasive measures, differential power analysis may require chip decapsulation, and would need to be classified as an invasive attack. Finally, the authors show how user's private inputs may be extracted using power analysis.
A similar attack is explored in~\cite{DBLP:journals/corr/abs-1803-05847}, where the authors use a power side-channel to observe the processing of the first layer of a convolutional network and extract user's inputs. The authors explore both active and passive attackers, i.e., attackers that can actively input their own images to the accelerator and attackers that can only observe user inputs.
Another line of attacks attempts to induce faults in order to cause misclassifications~\cite{Liu, Rakin} and relies on microarchitectural or device-level attacks, such as RowHammer~\cite{Gruss2017}.
\noindent \textbf{Probing attacks:}
Probing attacks assume that an attacker is able to access the individual components of the device, e.g., the CPU/GPU/ASIC, the RAM memory, non-volatile storage, or busses, but is not able to perform invasive attacks that access the internals of the chips. The attacker has full access to measure signals on any exposed wires or even drive wires themselves. This opens up a variety of denial-of-service, integrity, and privacy attacks. Additionally, probing attacks assume that no tamper evidence is left after the attack, unlike invasive attacks.
A simple attack the attacker can carry out is model theft - here the attacker probes the memory bus and runs an inference operation while recording the model being loaded onto the chip. This can be prevented by storing only the encrypted model in RAM and NVM, and decrypting the model on-the-fly, if power requirements permit~\cite{tie}.
However, even if the model is encrypted, just knowing the memory access pattern is enough to reveal the model architecture. Each layer and activation will have a different memory bandwidth, and the attacker can monitor these changes along with memory addresses to learn where layers start and end in memory. While oblivious RAM~\cite{oram} can hide memory addresses, memory access timings are still sufficient to reveal the topology of the model. This forces the defender to either prefetch weights or create fake accesses in order to obfuscate memory access timings~\cite{tie}.
Similarly, network activations may be larger than the available on-chip memory and may be stored in RAM. These activations also need to be encrypted, because even in cases when the device manufacturer is not concerned about privacy, these activations can be used in order to infer the model weights~\cite{Milli}.
The attacker may also attempt to overwrite parts of RAM or feed their own inputs to the chip in order to subvert any software guards, for example in order to generate more input-output pairs used for API model theft~\cite{stealing_nns}. Encrypted RAM may defend against this type of attack, but the device is still susceptible to DoS attacks, where fake accesses are inserted on busses.
\noindent \textbf{Invasive attacks:}
Invasive attacks assume that the attacker has full control over the chip and is able to bypass any tamper-proof packaging.
These attacks include freezing the device in order to extract volatile memory, probing the internals of the chip, ionizing parts of the chip in order to induce faults, feeding non-legitimate voltages and clock frequencies to the chip, etc. Mounting these attacks is typically cost-prohibitive and requires substantial expertise and equipment to execute.
Several works have explored invasive attacks on DNN accelerators, and many of the conventional (non-DNN specific) invasive attacks are still applicable to them. In DeepLaser~\cite{Breier}, the authors decapsulate a chip and are able to induce faults by shining a laser on the chip, causing misclassifications by the neural network. This is done by causing bit-flips in the last layer's activation function, where flipping high-order bits of an output neuron's activation will cause the associated category or value to be dominant. Choosing the minimal number of bit-flips to achieve a desired output has been studied in two works: \cite{Liu} and \cite{Rakin}. Both these works show that, despite the robustness of neural networks to random perturbations, networks are highly susceptible to targeted bit-flips, in a manner similar to non-targeted adversarial attacks~\cite{Yuan}.
While we have not been able to find any examples of this, we expect neural network accelerators to be vulnerable to cold boot attacks~\cite{Halderman:2009:LWR:1506409.1506429}, which may be able to steal unencrypted models stored in volatile memory, or microprobing~\cite{Tria2011}, which may be able to bypass model or user data decryption.
\section{Conclusion}
In this work, we have presented a survey of attacks on and defenses of neural networks. We have created a taxonomy of attacks and defenses with regard to the attacker's level of access to the hardware and the attacker's agenda. We have described different types of attacks on neural networks, ranging from API-based attacks to invasive attacks such as decapsulation and microprobing. Finally, we gave an overview of the types of defenses of neural networks, with the goal of protecting the privacy of user data, the privacy of deployed neural networks, or the integrity of neural network inference.
\section{Defending Edge Devices Running Neural Networks}
We briefly cover proposed defenses for edge devices running neural networks.
\noindent \textbf{API defenses:}
The majority of API attacks we have mentioned attempt to steal the model or the model architecture, learn which inputs have been used to train the model, or find adversarial examples for the model running on the device. As finding adversarial examples typically involves first stealing the model~\cite{2017arXiv171101768O}, we focus only on defenses against model exfiltration and membership inference attacks.
In a recent work called Prada~\cite{Juuti}, the authors succeed in detecting API model-stealing attacks with a 100\% detection rate and no false positives. Here, the authors do not attempt to detect if a single query is malicious (as in the case of adversarial attacks), but whether some consecutive set of them is actively trying to steal the model. The authors detect model-stealing queries as they are specifically crafted to extract the maximum amount of information out of the model. However, the authors note that attackers may introduce dummy queries to maintain a benign query distribution, resulting in slower but more covert model-stealing attacks.
Watermarking is a method for embedding secret information into some system in order to verify the origin of that system at a later date. Watermarking has been proposed as a method of establishing ownership of neural networks~\cite{Adi2018, Uchida, DBLP:journals/corr/abs-1804-00750}. Here, a watermark is applied to a neural network in such a way that it does not impact the network's accuracy, but can be used to confirm ownership from network outputs. Even if the party responsible for the theft attempts to prune or finetune the network, watermarks can be retained~\cite{Uchida}.
Defending against membership inference attacks has been explored in several works. In~\cite{Shokri2017}, the authors claim that overfitting is the reason why models are vulnerable to membership inference attacks and suggest that differential privacy~\cite{Abadi2016} used during training can protect against these types of attacks. They propose several defenses, similar to those used in defending against adversarial attacks: (1) reducing the number of predicted classes (in the case of classification problems), (2) reducing the amount of information per class by rounding prediction probabilities, (3) increasing entropy of the prediction values and (4) using stronger regularization during training.
Similarly, in~\cite{Salem2018}, the authors propose two defenses: dropout~\cite{JMLR:v15:srivastava14a}, where authors show that randomly zeroing out neurons during training partially prevents the attackers from inferring membership, and model stacking, where multiple models are used in an ensemble to make a prediction.
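A minimal sketch of the output-perturbation defenses listed above (restricting predictions to the top-$k$ classes and rounding the reported probabilities) is given below; the truncation parameters are arbitrary choices made only for illustration.
\begin{verbatim}
import numpy as np

def harden_prediction(probs, top_k=3, decimals=2):
    """Return only the top-k classes with coarsely rounded scores."""
    probs = np.asarray(probs, dtype=float)
    out = np.zeros_like(probs)
    top = np.argsort(probs)[::-1][:top_k]
    out[top] = np.round(probs[top], decimals)
    s = out.sum()
    return out / s if s > 0 else out   # renormalise after truncation

print(harden_prediction([0.62, 0.21, 0.09, 0.05, 0.03]))
\end{verbatim}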
\noindent \textbf{Side-channel defenses:}
Due to the data-independent behavior of non-recurrent DNNs, all of the software side-channel attacks we have listed attempt to steal the network architecture. We have not been able to find any attacks that succeed at violating privacy of the inputs or the model parameters through software side-channels.
In DeepRecon~\cite{DBLP:journals/corr/abs-1810-03487}, where attackers prime the instruction cache in order to learn function invocations, the authors propose a defense where the defender simultaneously creates decoy function calls to similar neural network layers. These decoy layers should be small enough not to incur a performance penalty. However, this defense does not stop the attacker from using data cache-based side-channels or timing side-channels.
Cache Telepathy~\cite{Yan2018} suggests less aggressive compiler optimizations, cache partitioning~\cite{7446082} or disallowing resource sharing as defenses against cache-based SC. However, these may not be viable solutions without hardware support for secure caches.
While cache-based defenses may help hide some of the accesses, and the defender may go so far as to remove the possibility of an attacker executing code on the same shared resources as the victim, a determined attacker may attempt to probe the memory bus. As neural networks are typically larger than the last-level cache of modern processors, caches will suffer from capacity misses and the network architecture may be exposed to memory probing attacks.
In the Trusted Inference Engine (TIE)~\cite{tie}, the device can either create fake memory accesses in times of reduced memory bandwidth or prefetch data, given available on-chip storage. As TIE targets networks with data-independent profiles (i.e., not recurrent neural networks), the timing of fake or prefetched accesses can be calculated at compile time.
Similar techniques can be applied to counter power and timing side-channels. As long as networks have data-independent behavior, i.e., the accelerator does not attempt to take advantage of zero values~\cite{cnvlutin}, or the network computation graph is static~\cite{DBLP:journals/corr/SutskeverVL14, DBLP:journals/corr/abs-1709-01686}, power and timing side-channel attacks should not be able to learn information about the network.
\noindent \textbf{Defenses against invasive and semi-invasive attacks:}
There are two common approaches used when an organization needs to deploy software with privacy or integrity requirements. One option is to not trust the edge hardware, and assume that the hardware can be actively malicious, as in the case of untrusted CPUs/GPUs, possible hardware Trojans, broken hardware defenses~\cite{206170}, etc.
There exist several algorithms that allow processing of private data. Homomorphic encryption~\cite{Rivest1978} (HE) for neural networks has been explored in CryptoNets~\cite{cryptonets}, where the authors use HE to run neural networks on encrypted data, without decrypting it at any time during the process. One of the issues with using HE is the performance reduction: inference using HE can be 100-1000 times slower than without HE. Several works have, however, been able to accelerate HE for neural networks. In Gazelle~\cite{Juvekar}, the authors leverage HE for linear layers and Yao's Garbled Circuits~\cite{Goldreich:2003:CCP:966037.966044} to offload the calculation of nonlinearities to the owner of the private data, as well as an efficient SIMD implementation and a set of homomorphic linear algebra kernels.
While HE is very efficient for linear layers of a network, DNNs typically use nonlinear activations between the layers, requiring many rounds of computationally expensive calculations. An alternative venue for private inference is based on Yao's Garbled Circuits~\cite{Goldreich:2003:CCP:966037.966044} (GC). Here, two parties want to compute the output of a function (a neural network in this case), where one party supplies the network, and the other the inputs to the network. The party that supplies the network typically creates a garbled circuit and uses a procedure such as oblivious transfer~\cite{Even:1985:RPS:3812.3818} (OT) to acquire the second party's inputs without learning those inputs. A naive implementation of neural networks on GC is very inefficient, and several works have presented domain-specific optimizations to them. In DeepSecure~\cite{Darvish2018}, authors first prune the network~\cite{Han2016}, and then convert the network to Verilog for which they can apply logic minimization. In~\cite{Ball2019}, authors present a modified GC that supports free addition and constant-multiplication on a limited integer range, and a significantly cheaper activation function.
As a third take on efficient DNNs using GC, XONN~\cite{xonn} attempts to accelerate XNOR-based networks~\cite{DBLP:journals/corr/RastegariORF16} (networks where activations have only values of -1 or 1), as XNOR operations can be processed for free in GC~\cite{10.1007/978-3-540-70583-3_40}. While GC requires a linear number of rounds w.r.t. the number of network layers, both~\cite{Darvish2018} and~\cite{xonn} are able to perform inference in a fixed amount of rounds.
The question that arises is whether it makes sense to run any of these algorithms on edge devices. In the case of inference, where both the model and user inputs should be kept private, the defender has the choice of sending encrypted inputs to the cloud or sending the encrypted model to the edge. Since HE is computationally expensive, edge devices may not receive any latency benefits by running the models locally (unless they are not connected to the network at all).
Another option for private edge inference is \textbf{\textit{hardware root-of-trust}}~\cite{Tehranipoor:2011:IHS:2051742}. Here, the defender trusts some type of hardware device, which is built with certain security measures, as in the case of secure enclaves~\cite{Costan2016IntelSE, Suh:2003:AAT:2591635.2667184, keystone} or secure accelerators~\cite{tie}. These devices are typically built to work in adversarial environments, where the threat model assumes that attacker can tamper with the device, but cannot probe chip internals.
For example, using secure enclaves, such as Intel SGX~\cite{Costan2016IntelSE}, to perform inference can provide privacy and integrity to the user and neural network deployer, but may be very inefficient. In MLCapsule~\cite{MLCapsule}, the authors develop a machine learning as a service (MLaaS) platform above Trusted Execution Environments (TEE) such as Intel SGX, and formally prove its security. In~\cite{Tramer2018}, the authors propose to use Intel SGX as a hardware root-of-trust, but leverage other hardware such as more powerful (but untrusted) CPU cores and GPUs to perform inference. The authors are able to guarantee both the privacy of the data sent to untrusted devices and the integrity of the results received.
An alternative venue explores building custom secure neural network accelerators~\cite{tie}. Here, the design stores obfuscated or encrypted models in off-chip memory, and performs efficient decryption / deobfuscation on the device. The design leverages secure pseudo-random number generators using physical unclonable functions~\cite{4261134} (PUF) as a source of randomness as an alternative to the power-hungry but more secure encryption. The design also provides security against timing attacks by prefetching data or creating fake accesses to RAM memory.
Since the attacker can still probe peripherals, the device must encrypt data in RAM. However, by timing the memory accesses, the attacker can learn the model architecture. Using oblivious RAM (ORAM) does not help, as ORAM only protects the address values and not access times. Additionally, neural network weights are typically stored in ascending order, so knowing the addresses (but not timings) reveals only the complete model size. To prevent the attacker from timing the RAM, the defender, then, must either have a prefetcher and load weights in advance while maintaining a constant bandwidth, or create fake accesses in times when the bandwidth is unused~\cite{Fletcher:2012:SPA:2382536.2382540, tie}.
\section{Introduction}
Since the rise of deep learning in the last decade, many different libraries and frameworks for running and training deep neural networks
(DNN) have been published and open-sourced. In that time, the landscape of software tools for training neural networks has moved from
difficult-to-install libraries~\cite{caffe}, and support for static graphs only~\cite{theano}, to industry-ready, easy-to-deploy
frameworks~\cite{tensorflow}, high-development efficiency~\cite{keras}, and support for dynamic graphs and \textit{just-in-time} compilation\cite{pytorch}.
Recently, as tools have gained maturity, more businesses have started using neural networks in production and exposing services
based on neural networks~\cite{google_nmt}. Deploying neural networks is non-trivial, and the research frameworks proved insufficient
to handle high-bandwidth, low-latency inference, leading to the development of production-ready frameworks such as Tensorflow
Serving~\cite{serving} and the standardization of neural network formats with ONNX~\cite{onnx}. With an increasing number of
mobile devices and PCs possessing GPUs and custom ASICs, networks have been pushed to smartphones~\cite{coreml} and even
GPU-enabled JavaScript~\cite{tensorflow_js}. With the rise of voice assistants, wearables, and smart cameras,
the need for low-power inference has led to the development of many custom DNN acceleration ASICs~\cite{edge_tpu, jetson_nano}.
There are many reasons why businesses or users may want to run neural networks on edge devices,
as an alternative to sending the data to datacenters for processing, including:
\noindent \textbf{Privacy:} users may not want or may not be able to send the data to the cloud for privacy or legal reasons.
For example, a hospital may want to process patient data on servers at a different location, but is not willing to risk patient
privacy. Even if the patient data is encrypted, if the server is malicious, the patient data may be at risk.
\noindent \textbf{Power}: sending data directly to the cloud may not be the most power-efficient approach to run neural networks.
For example, in~\cite{7979979}, the authors show that in convolutional neural networks (CNN), processing a few of the first
convolutional layers before sending the data to the cloud achieves higher power savings compared to processing the whole
network on the device or sending the input data to the cloud. As more low-power accelerators using approaches such as
quantization~\cite{NIPS2015_5647}, stochastic computing~\cite{DBLP:journals/corr/RenLLDQQYW16}, or
sparsity~\cite{DBLP:journals/corr/HanKMHLLXLYWYD16, closnets} are released, we expect the ratio between the cost of processing
networks and the cost of transmitting input data to become more significant.
\noindent \textbf{Latency:} many applications have hard latency requirements and must process a network within a certain time limit.
Furthermore, for certain mission-critical applications with hard availability guarantees, as in the case of
autonomous drones or self-driving cars, being able to process data on the device is mandatory.
While datacenters are able to provide virtually unlimited computing power, possibly making inference time negligible, the transmit
time of inputs over the network often cannot be ignored. Hence, a device must possess the required compute power to process the inputs within the time budget.
\noindent \textbf{Throughput:} several industries dealing with high bandwidth data are faced with the question of whether to store data for offline processing, allowing thorough analysis at the cost of large amounts of storage, or to process the data in-flight
potentially sacrificing some information, but saving on storage. Take an extreme case: the CERN Large Hadron Collider (LHC) can
generate upwards of hundreds of terabytes of data per second. Storing that data is difficult, so authors of~\cite{lhc} propose
to process the data in-flight using extremely low-latency FPGA designs.
\section{Taxonomy of Attacks on and Defenses of Deployed Neural Networks}
Since DNN accelerators have only recently been deployed in commercial products, the field of attacking and defending these devices is in its infancy.
In this section we aim to provide (1) a taxonomy of DNN accelerator attacks and defenses, and (2) a list of plausible attack surfaces and attacker motivations for targeting edge devices running DNNs.
\subsection{Taxonomy of DNN Accelerator Attacks and Defenses}
\begin{figure*}[t]
\centering
\includegraphics[width=0.92\textwidth]{./figs/taxonomy_with_references.pdf}
\vspace{-0.3cm}
\caption{Taxonomy of attacks, defenses, and potential vulnerabilities of edge devices running ML inference or on-line training.}
\label{fig:taxonomy}
\vspace{-0.4cm}
\end{figure*}
Of the many possible dimensions over which we could characterize attacks and defenses of edge devices, we believe that the \textit{attacker agenda} and \textit{level of access} to the edge device provide a useful classification. The attacker agenda represents the goal of the attacker, and ranges from local denial-of-service (DoS) to gaining full access to a network of edge devices. Level of access is a set of attack surfaces the attacker has access to and ranges from simple API accesses to probing buses or even chip internals. In Figure~\ref{fig:taxonomy}, we present an overview of attacks, defenses, and potential vulnerabilities present in the literature.
\subsection{Attacker Agenda}
The $y$-axis in Figure~\ref{fig:taxonomy} represents the attacker's motivation for attacking an edge-deployed neural network.
We classify attacker motivations into four categories:
\noindent \textbf{Denial of Service: } attackers may want to prevent a device running a neural network from properly functioning. For example, attackers may want to prevent smart cameras from properly classifying recordings in order not to raise alarms. Denial of Service (DoS) attacks prevent a device from maintaining availability and completing its function. As feedforward neural networks are data-independent and have fixed latencies, DoS attacks targeting DNNs are only applicable to accelerators running data-dependent models, e.g., recurrent neural networks~\cite{LSTM} or neural networks with early exits like BranchyNets~\cite{DBLP:journals/corr/abs-1709-01686} or Tree LSTMs~\cite{tree_lstm}.
\noindent \textbf{User Privacy Violation: } smart devices are increasingly trusted with private user data such as shopping history, voice commands, or medical recordings~\cite{DBLP:journals/corr/abs-1711-05225}. This data is valuable for advertising, monitoring, or polling purposes. User privacy violations are cases where the attacker is able to access measured or stored sensor data from the device, or user data the device receives from the network. For example, attacks on voice assistants where the attacker can access previous voice commands constitute a local privacy violation.
\noindent \textbf{Model Privacy Violation: } the attacker may attempt to exfiltrate a neural network model for a number of reasons: (1) models require significant investment to develop, and, as such, may be stolen and sold, or used in ensembles as a black box~\cite{tie}, (2) finding adversarial examples is significantly easier if the attacker has access to a model (i.e., the white-box scenario), compared to only having access to model inputs and outputs (i.e., the black-box scenario)~\cite{DBLP:journals/corr/PapernotMG16}, or (3) the attacker may attempt to learn data from the dataset the model was trained on~\cite{DBLP:journals/corr/abs-1802-08232}.
\noindent \textbf{Integrity Violation: } the attackers may not want to outright prevent the device from functioning, but may want to force the neural network to perform in an unacceptable way. For example, malware may craft adversarial packets in an attempt to fool a network intrusion detection system (IDS) that uses DNNs to identify packets. Local integrity violations are cases where the attacker is able to affect the correctness of a device's neural network inference.
In Figure~\ref{fig:taxonomy}, we list several attacker agendas, sorted by severity. We add two additional categories of general user and integrity violations, which consist of cases that affect not only a single device, but multiple devices, some of which are not under the attacker's physical control. An example of this is data poisoning attacks on federated learning systems~\cite{Bagdasaryan2018HowTB}, where attackers controlling one device can insert backdoors into all devices in the network.
\subsection{Attacker's Level of Access}
The $x$-axis in Figure~\ref{fig:taxonomy} represents the attacker's level of access to an edge device. The five access categories vary by invasiveness from purely software, API-based attacks and defenses, all the way to invasive attacks, such as decapsulation and microprobing.
The API attacks assume that the attacker only has access to the device through conventional channels, e.g. through the network (as in the case of machine translation systems) or the device's sensors (as in the case of voice assistants). Software side-channels additionally assume that the attacker has some ability to measure side-channels through the device's legitimate outputs, e.g., measure the latency of network responses or the amount of traffic the device is sending to the cloud. For both API and software side-channel attacks, the attacker does not need physical contact with the device. Additionally, these attacks are typically simple to automate, unlike the attacks based on physical properties of the device. In the case of physical side-channels, the attacker needs physical proximity to the device, as in the case of power or electromagnetic (EM) analysis. However, physical side-channel attacks do not require invasive sensors or access to the printed circuit board (PCB). PCB probing attacks include attacks that may measure data or timing information of any bus exposed on the PCB, but not the internals of any chip on the device. These attacks include probing RAM or non-volatile memory (NVM), as well as cold-boot attacks, etc.
Finally, invasive attacks access the internals of a chip. These include approaches such as decapsulation (a procedure where the chip packaging is removed), microprobing (where the attacker can probe the internals of a chip), chemical attacks that can reveal information stored in read-only memory, and scanning electron microscope (SEM) attacks (which are able to read RAM memory)~\cite{Tria2011}. These attacks typically require specialized labs and expensive equipment. They are often destructive and may require multiple devices before a successful attack is implemented. For completeness, we also include training-time attacks and defenses that happen before devices are deployed, or during on-line training.
Attacks that take place strictly before deployment are beyond the scope of this study.
\section{Introduction}
Most often the dynamics of local cosmic strings formed in a phase transition in the early universe (see \cite{ViSh,Us,TomMark} for reviews) is described by the Nambu-Goto (NG) action. This approximation is valid when the microscopic width of the string
\begin{equation}
w\sim \mu^{-1/2} \sim 1/\eta
\label{wdef}
\end{equation}
(with $\mu$ the string tension and $\eta$ the energy scale of the phase transition), is very small relative to its characteristic macroscopic size $\ell$ --- a situation which is well satisfied in the early universe.
Closed loops of NG strings lose energy slowly by radiating gravitational waves, and as a result NG string networks contain numerous loops whose decay generates a stochastic gravitational wave background (SGWB) spanning a wide range of frequencies \cite{ViSh}. Depending on the details of the particular cosmic string model, the corresponding constraints on the dimensionless string tension $G \mu$ from the SGWB are $G\mu \lesssim 10^{-7}$ at LIGO-Virgo frequencies \cite{LigoStrings}, $G\mu \lesssim 10^{-11} $ at pulsar frequencies \cite{Blanco-Pillado:2017rnf}, whereas at LISA frequencies one expects to reach $G\mu \lesssim 10^{-17}$ \cite{LisaStrings}.
On the other hand, at a more fundamental level, cosmic strings are topological solutions of field theories. Their dynamics can therefore also be studied by solving the field theory equations of motion. In studies of large-scale field theory string networks \cite{Vincent:1997cx,Hindmarsh:2008dw,Lizarraga:2016onn,Hindmarsh:2017qff}, loops are observed to decay directly into particles and gauge boson radiation on a short time scale of order the loop length. Hence, field theory string network simulations predict very different observational consequences --- in particular no SGWB from loops.
Since field theory and Nambu-Goto strings should, in principle, describe the same physics and hence lead to the same observational consequences, this is an unsatisfactory situation.
Based on high resolution field theory simulations, a possible answer to this long-standing conundrum was proposed in \cite{TVrecent}. In particular, for a loop of length $\ell$ containing {\it kinks}, a new characteristic length scale $\ell_0=\ell_{\rm k}$ was identified, and it was shown that if $\ell \gtrsim \ell_{\rm k}$ gravitational wave emission is the dominant decay mode, whereas for smaller loops $\ell \lesssim \ell_{\rm k}$ particle radiation is the primary channel for energy loss. That is,
\begin{equation}
\frac{d\ell}{dt} = \begin{cases}
-\gamma_\ud , & \ell \gg \ell_{\rm k} \\
-\gamma_\ud \frac{\ell_{\rm k}}{ \ell} , & \ell \ll \ell_{\rm k},
\end{cases}
\label{kink}
\end{equation}
where
\begin{equation}
\gamma_\ud \equiv \Gamma G \mu
\nonumber
\end{equation}
with
$\Gamma \sim 50$ the standard constant describing gravitational radiation from cosmic string loops \cite{Vachaspati:1984gt,Burden:1985md,Garfinkle:1987yw,Blanco-Pillado:2017oxo}. Notice that Nambu-Goto strings correspond to $\ell_{\rm k} \rightarrow 0$, while if particle radiation is dominant for all loops,
$\ell_{\rm k} \rightarrow \infty$.
In practice $\ell_{\rm k}$ takes neither of these two limiting values, and in \cite{TVrecent} it was estimated (for a given class of loops with kinks) to be given by
\begin{equation}
\ell_{\rm k} \sim \beta_{\mathrm{k}} \frac{w}{\Gamma G\mu}
\label{Est}
\end{equation}
where $w$ is the width of the string, Eq.~(\ref{wdef}), and the constant $\beta_{\mathrm{k}}\sim {\cal{O}}(1)$.
If a loop contains {\it cusps}, then one expects the above to be modified to \cite{BlancoPillado:1998bv,Olum:1998ag}
\begin{equation}
\frac{d\ell}{dt} = \begin{cases}
-\gamma_\ud , & \ell \gg \ell_{\rm c} \\
-\gamma_\ud \sqrt{\frac{\ell_{\rm c} }{ \ell} }, & \ell \ll \ell_{\rm c}
\end{cases}
\label{cusp}
\end{equation}
where
\begin{equation}
\ell_{\rm c}
\sim \beta_{\mathrm{c}} \frac{w}{(\Gamma G\mu)^{2}}
\label{lcdef}
\end{equation}
with $\beta_{\mathrm{c}}\sim {\cal{O}}(1)$.
The aim of this paper is to determine the observational effects --- and corresponding constraints on $G\mu$ --- of a finite, fixed, value of $\ell_{\rm k}$ or $\ell_{\rm c}$. A first immediate consequence of the presence of the fixed scale is that the distribution of loops $n(\ell,t)$, with $n(\ell,t)d\ell$ the number density of loops with length between $\ell$ and $\ell+d\ell$ at time $t$, will no longer be scaling. That is, contrary to the situation for NG strings, the loop distribution will depend explicitly on $t$ as well as the dimensionless variable $\gamma=\ell/t$.
We determine this non-scaling loop distribution $n(\gamma,t)$ in section \ref{sec:nlt}, taking into account exactly (and for the first time) the backreaction of particle emission on the loop distribution.
We then study the consequence of the non-scaling distribution of non-self-intersecting loops on the stochastic GW background, determining the fraction of the critical density in GWs per logarithmic interval of frequency,
\begin{equation}\label{eqn:theone}
\mathrm{\Omega}_{\rm gw}(t_0,f) = \frac{8 \pi G}{3\mathrm{H_{0}}^2 }~f ~ \frac{\mathrm{d}\rho_{\rm gw}}{\mathrm{d} f} (t_0, f)\,,
\end{equation}
where $\mathrm{H_{0}}$ is the Hubble parameter, and
the $\mathrm{d}\rho_{\rm gw}/\mathrm{d} f$ factor
is the energy density in gravitational waves per unit frequency $f$ observed today (at $t=t_0$). A scaling distribution of NG loops gives a spectrum which is flat at high frequencies \cite{ViSh}; we
will show below that a consequence of
the non-scaling of the loop distribution is the introduction of a characteristic frequency $f_*$, with $\Omega(f>f_*)\rightarrow 0$.
The precise value of $f_*$ depends on $\ell_{\rm k}$ or $\ell_{\rm c}$,
as well as $G\mu$. For cusps and kinks with $\ell_{\rm c}$ and $\ell_{\rm k}$ given respectively by Eqs.~(\ref{lcdef}) and (\ref{Est}), the characteristic frequency $f_*$ is outside the LIGO and LISA bands provided $G\mu \gtrsim 10^{-17}$; hence the new cutoff is only relevant for very light strings, for which, however, the amplitude of the signal is below the observational thresholds of planned gravitational wave detectors.
In section \ref{sec:particle} we turn to particle physics signatures. At lower string tensions $G\mu$, the gravitational signatures of strings weaken, while the particle physics ones are expected to increase. Following \cite{Sigl}, we focus on so-called ``top down'' models for production of ultra-high energy cosmic rays
in which heavy particles, namely the quanta of the massive gauge and Higgs fields of the underlying (local) field theory trapped inside the string, decay to give ultra-high energy protons and gamma rays. We focus on the diffuse gamma ray flux, which at GeV scales is constrained by Fermi-LAT \cite{FL}.
However, taking into account backreaction of the emitted particles on the loop distribution
we find that current gamma ray observations do not lead to significant constraints.
(Early studies on the production of cosmic rays assumed NG strings and particle
emission rates that were based on dynamics without taking backreaction into account.
See Refs.~\cite{Bhattacharjee:1989vu,MacGibbon:1989kk,MacGibbon:1992ug,Brandenberger:1993hw,Cui:2008bd}
and~\cite{Sigl} for a review.
Other work has focused on strings with condensates, e.g.~\cite{Mota:2014uka,Vachaspati:2009kq,Peter:2013jj},
or strings coupled to other fields such as Kaluza-Klein or dilaton fields \cite{Dufaux,Damour:1996pv}.)
This paper is organised as follows. In section \ref{sec:nlt} we determine the effect of an $\ell$-dependent energy loss
\begin{equation}
\dfrac{\mathrm{d} \ell}{\mathrm{d} t} = - \gamma_\ud \mathcal{J}(\ell),
\label{new}
\end{equation}
on the loop distribution $n(\ell,t)$. The function $\mathcal{J}(\ell)$
will initially be left arbitrary. Specific cases corresponding to (i) NG loops with $\mathcal{J}=1$; (ii) loops with kinks, see Eq.~(\ref{kink}), and (iii) loops with cusps, see Eq.~(\ref{cusp}) are studied in subsections \ref{ss:NG}-\ref{ss:c}. Given the loop distribution, we then use it to calculate the SGWB in section \ref{sec:SGWB}, and the predicted diffuse gamma ray flux in \ref{sec:particle}. We conclude in section \ref{sec:conc} by discussing the resulting experimental constraints on $G\mu$.
\section{The loop distribution}
\label{sec:nlt}
All observational consequences of string loops depend on $n(t,\ell) \mathrm{d} \ell$, the number density of non self-intersecting loops with length between $\ell$ and $\ell+\mathrm{d}\ell$ at time $t$. In this section we calculate $n(t,\ell)$ given (\ref{new}), that is we take into account the backreaction of the emitted particles on the loop distribution. As noted in the introduction, the existence of the fixed scale $\ell_{\rm k}$ or $\ell_{\rm c}$ means that the loop distribution will no longer scale, that it will no longer be a function
of the dimensionless variable $\gamma \equiv \ell/t$.
\subsection{Boltzmann equation and general solution}
The loop distribution satisfies a Boltzmann equation which, taking into account the $\ell$-dependence of $\dot{\ell}$ (that is the flux of loops in $\ell$-space), is given by \cite{Copeland:1998na}
\begin{equation}
\pd{t}{\ell} \left(a^3 n(t,\ell)\right) + \pd{\ell}{t}\left(\dfrac{\mathrm{d} \ell}{\mathrm{d} t}a^3 n(t,\ell)\right) = a^3 \mathcal{P}
\label{Boltzmann}
\end{equation}
where $a(t)$ is the cosmic scale-factor, and the
loop production function (LPF) $\mathcal{P}(t,\ell)$ is the rate at which loops of length $\ell$ are formed at time $t$ by being chopped off the infinite string network.
On substituting (\ref{new}) into Eq.~(\ref{Boltzmann}) and multiplying each side of the equation
by $\mathcal{J}(\ell)$, one obtains
\begin{equation}
\dfrac{1}{\gamma_\ud}\pd{t}{\ell} g(t,\ell) - \mathcal{J}(\ell) \pd{\ell}{t}g(t,\ell) = a^3 \mathcal{J}(\ell) \mathcal{P}(t,\ell),
\label{eq:this}
\end{equation}
where
\begin{equation}
g(t,\ell) \equiv \gamma_\ud \mathcal{J}(\ell) a^3(t) n(t,\ell).
\label{eq:gdef}
\end{equation}
In order to solve (\ref{eq:this}), we first change variables from $(t,\ell)$ to
\begin{equation}
\tau \equiv \gamma_\ud t ~,~ \qquad \xi \equiv \int \dfrac{\mathrm{d} \ell}{\mathcal{J}(\ell)}.
\label{change1}
\end{equation}
Notice from (\ref{new}) and (\ref{change1}) that for a loop formed at time $t_i$ with length $\ell_i$, its length at time $t$ satisfies
\begin{equation}
\xi(\ell) + \gamma_\ud t = \xi(\ell_i) + \gamma_\ud t_i.
\label{physloop}
\end{equation}
In terms of these variables Eq.~(\ref{eq:this}) reduces to a wave equation with a source term
\begin{equation}
\pd{\tau}{\xi} g(\tau,\xi) - \pd{\xi}{\tau}g(\tau,\xi) = \mathcal{S}(\tau,\xi),
\end{equation}
where
\begin{equation}
\mathcal{S}(\tau,\xi) = a^3(\tau)\mathcal{J}(\xi) \mathcal{P}(\tau,\xi).
\nonumber
\end{equation}
We now introduce the lightcone variables
\begin{equation}
2 u \equiv \tau - \xi ~,~ \qquad 2v \equiv \tau + \xi,
\label{change2}
\end{equation}
so that the evolution equation simply becomes
\begin{equation}
\pd{u}{v} g(u,v) = \mathcal{S}(u,v),
\label{eq:m}
\end{equation}
which is straightforward to integrate. In the following we neglect any initial loop distribution at initial time $t_\uini$ (since this is rapidly diluted by the expansion of the universe), so that the general solution of (\ref{eq:m}), and hence the original Boltzmann equation Eq.~(\ref{Boltzmann}), is
\begin{equation}
g(u,v) = \int_{-v}^{u} \mathrm{d} u'\, \mathcal{S}(u',v).
\label{eq:solution}
\end{equation}
Finally one can convert back to the original variables $n(\ell,t)$ using (\ref{eq:gdef}) to find
\begin{equation}
n(t,\ell) = \frac{1}{\gamma_\ud \mathcal{J}(\ell)
a^3(t)}\int_{-v(t,\ell)}^{u(t,\ell)} \mathrm{d} u' \ a^3\big(u',v(t,\ell)\big) \mathcal{J}(u',v(t,\ell)) \mathcal{P}(u',v(t,\ell))
\end{equation}
where $v(t,\ell)$ is obtained from Eqs.~(\ref{change1}) and (\ref{change2}).
Notice that $\mathcal{J}$ appears in two places: as an overall factor in the denominator, as well as in the integrand.
\subsection{Solution for a $\delta$-function loop production function}
\label{subsec:delta}
We now assume that all loops are chopped off the infinite string network with
length $\alpha t$ at time $t$. This assumption, which has often been used in the literature, will lead to analytic expressions. The value $\alpha \sim 0.1$ is suggested by the NG simulations of \cite{BlancoPillado:2011dq,Blanco-Pillado:2013qja},
particularly in the radiation era. However, one should note that other simulations \cite{Ringeval:2005kr} are consistent with power-law loop productions functions~\cite{Auclair:2019zoz,Lorenz:2010sm}, which have also been predicted analytically \cite{Polchinski:2006ee,Polchinski:2007rg,Dubath:2007mf}. These will be
considered elsewhere. Since $\alpha t \gg (\ell_{\rm k},\ell_{\rm c})$ for $\alpha\sim0.1$, we expect that particle radiation from infinite strings will not affect the (horizon-size) production of loops from the scaling infinite string network, and hence we consider a loop production function of the form
\begin{equation}
\mathcal{P}(t,\ell) = \mathrm{C} t^{-5} \delta\left(\dfrac{\ell}{t} - \alpha\right)
\end{equation}
where the constant $\mathrm{C}$, which takes different values in the radiation and matter eras, will be specified below.
Substituting into \eqref{eq:solution}, assuming $a\propto t^{\nu}$, (with $\nu=1/2$ in the radiation era, and $\nu=2/3$ in the matter era) gives
\begin{equation}
g(u,v) =\mathrm{C} \int^u_{-v} \mathrm{d} u' ~ \mathcal{J}[\ell(u',v)] t(u',v)^{-5} a[t(u',v)]^3
\delta\left[ \dfrac{\ell(u',v)}{t(u',v)}-\alpha\right].
\label{eq:mess}
\end{equation}
In order to evaluate this integral, in which $v=v(t,\ell)$ is {\it fixed}, let us denote the argument of the $\delta$-function by
\begin{equation}
y \equiv \frac{\ell(u',v)}{t(u',v)}-\alpha .
\nonumber
\end{equation}
For the given $v$, the argument vanishes ($y=0$) for some $u'(v)$, that we will denote $u_\star(v)$ and which therefore satisfies
\begin{equation}
\ell(u_\star,v) = \alpha t(u_\star,v).
\label{tosolve}
\end{equation}
Let us rewrite this more simply as $\ell_\star = \alpha t_\star$ where $\ell_\star \equiv \ell(u_\star,v)=\ell_\star(v)$ and $t_\star \equiv t(u_\star,v) = t_\star(v)$.
Now, from the $v$ equation in \eqref{change2}, one has $2 v = \gamma_\ud t_\star(v) + \xi (\ell_\star(v))$. Furthermore --- since our final goal is to write the loop distribution in terms of $(t,\ell)$ (rather than $v$) --- we note from the same equation that $v$ is related to $(t,\ell)$ by $2v = \gamma_\ud t + \xi(\ell)$. Thus $t_\star(t,\ell)$, which will be required below, is the solution of
\begin{equation}
\gamma_\ud t_\star + \xi(\alpha t_\star) = \gamma_\ud t + \xi(\ell),
\label{this}
\end{equation}
which physically is simply relating the length of the loop $\alpha t_\star$ at its formation time $t_\star$, with its length $\ell$ at time $t$, see Eq.~(\ref{physloop}).
The final step needed to evaluate the integral in Eq.~(\ref{eq:mess}) is
the Jacobian of the transformation from $u'$ to $y$ which, on using (\ref{change2}), is given by
\begin{equation*}
\pd{u'}{v}\left(y(u',v)\right) = - \dfrac{\gamma_\ud \mathcal{J}(\ell(u',v)) t(u',v) + \ell(u',v)}{\gamma_\ud t(u',v)^2}.
\end{equation*}
Evaluating this at $u'=u_\star$ and using $\ell_\star=\alpha t_\star$ gives
\begin{equation*}
\pd{u}{v}\left(y(u_\star,v)\right) = - \dfrac{\gamma_\ud \mathcal{J}[\alpha t_\star(t,\ell)] + \alpha }{\gamma_\ud t_\star(t,\ell)}.
\end{equation*}
Having now expressed all the relevant quantities in terms of $(t,\ell)$, one can combine the above results and use the definition of $g$ in terms of $n(t,\ell)$ in Eq.~(\ref{eq:gdef}) to find
\begin{equation}
t^4 n(t,\ell) = \mathrm{C} \dfrac{1}{\mathcal{J}(\ell)} \dfrac{\mathcal{J}(\alpha t_\star)}{\alpha +\gamma_\ud\mathcal{J}(\alpha t_\star)} \left( \frac{t_\star}{t}\right)^{-4} \left(\frac{a(t_\star)}{a(t)}\right)^3.
\label{eq:general}
\end{equation}
This equation, which is exact, is the central result of this section and gives the loop distribution for any form of energy loss
$\mathrm{d} \ell / \mathrm{d} t = - \gamma_\ud \mathcal{J}(\ell)$,
provided the loop production function is a $\delta$-function. It generalises and extends other approximate results which may be found in the literature.
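For an arbitrary $\mathcal{J}(\ell)$, the quantities $\xi(\ell)$ and $t_\star(t,\ell)$ are easily obtained numerically, as in the following sketch specialised to a single era with $a\propto t^\nu$. The choice of $\mathcal{J}$, the value of $\ell_{\rm k}$ in units of $t$, and the normalisation $\mathrm{C}=1$ are illustrative assumptions; the physically normalised values of $\mathrm{C}$ are discussed below.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

alpha, Gmu, Gamma = 0.1, 1e-11, 50.0
gamma_d = Gamma * Gmu
l_k = 1e-9                                    # kink scale in units of t (assumed)
J = lambda l: np.sqrt(1.0 + (l_k / l) ** 2)   # any positive J(l) can be used here

def xi(l):
    # xi(l) = int dl / J(l); the integration constant drops out of Eq. (this).
    return quad(lambda x: 1.0 / J(x), 0.0, l, limit=200)[0]

def t_star(t, l):
    # Formation time of a loop of length l at time t, from Eq. (this).
    lhs = gamma_d * t + xi(l)
    return brentq(lambda ts: gamma_d * ts + xi(alpha * ts) - lhs, 1e-12 * t, t)

def t4n(t, l, C=1.0, nu=0.5):
    # Dimensionless loop distribution t^4 n(t,l), for a ~ t^nu within one era.
    ts = t_star(t, l)
    return (C * (J(alpha * ts) / J(l)) / (alpha + gamma_d * J(alpha * ts))
            * (ts / t) ** (3 * nu - 4))

for gamma in (1e-12, 1e-10, 1e-8, 1e-6):
    print(gamma, t4n(1.0, gamma))
\end{verbatim}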
For loops that are formed in a given era (either radiation or matter domination) and decay in the {\it same} era, the above solution reduces to
\begin{eqnarray}
t^4 n(t,\ell) &=& \mathrm{C} \dfrac{1}{\mathcal{J}(\ell)} \dfrac{\mathcal{J}(\alpha t_\star)}{\alpha +\gamma_\ud\mathcal{J}(\alpha t_\star)} \left( \frac{t_\star}{t}\right)^{3\nu-4} .
\label{eq:same}
\end{eqnarray}
In the matter era, however, there also exists a population of loops which were {\it formed} in the radiation era, where $\mathrm{C}=\mathrm{C_{\rm R}}$, and decay in the matter era. Indeed, this population generally dominates over loops formed in the matter era. From (\ref{eq:general}) one can find a general expression for the distribution at any redshift $z$, provided the loops were formed in the radiation era ($\nu=1/2$): it is given by
\begin{equation}
t^4 n(t,\ell) =
\mathrm{C_{\rm R}} \dfrac{1}{\mathcal{J}(\ell)} \dfrac{\mathcal{J}(\alpha t_\star)}{\alpha +\gamma_\ud\mathcal{J}(\alpha t_\star)} \left( \frac{t_\star}{t}\right)^{-5/2} (1+z(t))^3 \left(2\sqrt{\Omega_{\rm R}} \mathrm{H_{0}} t\right)^{3/2}
\label{looprm}
\end{equation}
This reduces to Eq.~(\ref{eq:same}) with $\nu=1/2$ in the radiation era, and has the correct scaling in the matter era.
In the following we use standard Planck cosmology with Hubble constant $\mathrm{H_{0}}=100 h {\rm km/s/Mpc}$, $h=0.678$, $\Omega_M=0.308$, $\Omega_{\rm R}=9.1476\times 10^{-5}$ and $\Omega_{\Lambda} = 1-\Omega_M-\Omega_{\rm R}$ \cite{Aghanim:2018eyx}. We model the varying number of effective degrees of freedom in the radiation era through $H(z) = \mathrm{H_{0}} {\cal H}(z)$ with
${\cal H}(z) = \sqrt{\Omega_\Lambda + \Omega_M(1+z)^3 + \Omega_R {\cal{G}}(z)(1+z)^4 }$
where ${\cal G}(z)$ is directly related to the effective number of degrees of freedom $g_*(z)$ and the effective number of entropic degrees of freedom $g_S(z)$ by \cite{Binetruy:2012ze}
\begin{equation}
{\cal G}(z) = \frac{g_*(z) g_{S}^{4/3}(0)}{g_*(0) g_{S}^{4/3}(z)}.
\end{equation}
We model this by a piecewise constant function whose value changes at the QCD phase transition ($T=200$MeV), and at electron-positron annihilation ($T=200$keV):
\begin{equation}
{\cal G}(z) = \left\{\begin{array}{ll} \displaystyle
1 & {\rm for}\ z<10^9,\\
0.83 & {\rm for}\
10^9<z<2 \times10^{12},\\
0.39 & {\rm for} \ z>2 \times10^{12}.
\end{array}\right.
\label{dof}
\end{equation}
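For convenience, the piecewise form of ${\cal G}(z)$ and the corresponding Hubble rate can be evaluated numerically as in the following minimal sketch (ours, not part of the original analysis; variable names are our own and the parameters are those quoted above):
\begin{verbatim}
import numpy as np

# Sketch: piecewise G(z) of Eq. (dof) and the Hubble rate
# H(z) = H0 * sqrt(OL + OM (1+z)^3 + OR G(z) (1+z)^4).
h = 0.678
H0 = 100.0 * h                       # km/s/Mpc
OM, OR = 0.308, 9.1476e-5
OL = 1.0 - OM - OR

def G(z):
    z = np.asarray(z, dtype=float)
    return np.where(z < 1e9, 1.0, np.where(z < 2e12, 0.83, 0.39))

def H(z):
    z = np.asarray(z, dtype=float)
    return H0 * np.sqrt(OL + OM * (1 + z)**3 + OR * G(z) * (1 + z)**4)

print(H(0.0))     # ~67.8 km/s/Mpc
print(H(1e10))    # deep in the radiation era
\end{verbatim}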
\section{Loop distributions for particle radiation from cusps and kinks}
\label{sec:LD}
Given a specific form of $\mathcal{J}(\ell)$, the loop distribution $n(\ell,t)$ is given by (\ref{eq:general}), where $t_\star(t,\ell)$ is obtained by solving (\ref{this}). The existence or not of an {\it analytical} solution depends on the form of $\mathcal{J}(\ell)$. In this section we consider three cases:
\begin{enumerate}
\item {\it Nambu-Goto loops}: here $\dot{\ell}=-\gamma_\ud$ so that $\mathcal{J}=1$;
\item {\it Loops with kinks}: The asymptotic behaviour of $\mathcal{J}(\ell)$ is given in Eq.~(\ref{kink}). This can be captured, for instance, by
$\mathcal{J}_1=1+\ell_{\rm k}/\ell$
or alternatively by
\begin{align}
\mathcal{J}_{\mathrm{k}} &= \sqrt{1+\left(\dfrac{\ell_{\rm k}}{\ell}\right)^2}.
\label{Jk}
\end{align}
This second form gives a simpler analytic expression for $t_\star$, and we work with it below. (We have checked that the differences in predictions arising from the choice of $\mathcal{J}_1$ or $\mathcal{J}_{\mathrm{k}}$ are negligible.)
\item {\it Loops with cusps}: Following Eq.~(\ref{cusp}), we take
\begin{equation}
\mathcal{J}_c = \left[1+\left(\dfrac{\ell_{\rm c}}{\ell}\right)^{3/2}\right]^{1/3},
\label{Jc}
\end{equation}
which has the correct asymptotic behaviour and also leads to analytical expressions.
An alternative, and seemingly simpler, form $\mathcal{J} = 1+\sqrt{{\ell_{\rm c}}/{\ell}}$ does not give
analytical expressions for $n(t,\ell)$.
\end{enumerate}
We now determine the corresponding loop distribution in scaling units, namely in terms of the variables
\begin{equation}
\gamma \equiv \frac{\ell}{t}, \qquad \gamma_{\mathrm{k}}(t) \equiv \frac{\ell_{\rm k}}{t}, \qquad \gamma_{\mathrm{c}}(t)\equiv\frac{\ell_{\rm c}}{t},
\label{thegammasdef}
\end{equation}
and determine
\begin{equation}
{\mathcal{N}}(t,\gamma)\equiv t^4 n(t,\gamma).
\label{calNdef}
\end{equation}
\subsection{NG strings}
\label{ss:NG}
A first check is that the above formalism yields the well known, standard, loop distribution for NG strings ($\mathcal{J}=1$). Eq.~(\ref{change1}) yields $\xi=\ell$, and from Eq.~(\ref{this}) it follows that
\begin{equation}
\frac{t_\star}{t} = \frac{\gamma+\gamma_\ud}{\alpha+\gamma_\ud}.
\nonumber
\end{equation}
Hence from Eq.~(\ref{eq:same})
\begin{equation}
{\mathcal{N}}_{NG}(t,\gamma) = \mathrm{C} \frac{(\alpha+\gamma_\ud)^{3(1-\nu)}}{(\gamma+\gamma_\ud)^{4-3\nu}},
\label{eq:NG}
\end{equation}
which is the standard scaling NG loop distribution for a delta-function loop production function \cite{ViSh}. In the radiation/matter eras, and on the scales $\alpha \gg \gamma_\ud$ observed in simulations, comparison with the numerical results of \cite{BlancoPillado:2011dq,Blanco-Pillado:2013qja,Ringeval:2005kr} sets the value of $\mathrm{C}$ to respectively
\begin{eqnarray}
\mathrm{C_{\rm R}} \alpha^{3/2} &\simeq& 0.18 \qquad (\text{radiation era})
\nonumber
\\
\mathrm{C_{\rm M}} \alpha &\simeq& 0.27\qquad (\text{matter era})
\nonumber
\end{eqnarray}
The scaling distribution Eq.~(\ref{eq:NG}) is shown in the black (solid) curve in figure \ref{fig:kink}, where we have taken $\alpha=0.1$, $\gamma_\ud=10^{-6}$ and $\nu=1/2$ (radiation era).
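For reference, Eq.~(\ref{eq:NG}) is simple to evaluate; the sketch below (our own illustration) uses the normalisations quoted above:
\begin{verbatim}
import numpy as np

# Sketch: scaling NG loop distribution, Eq. (eq:NG), with C fixed by
# C_R alpha^{3/2} ~ 0.18 (radiation era) or C_M alpha ~ 0.27 (matter era).
def N_NG(gamma, alpha=0.1, gamma_d=1e-6, nu=0.5, era="radiation"):
    C = 0.18 / alpha**1.5 if era == "radiation" else 0.27 / alpha
    return C * (alpha + gamma_d)**(3 * (1 - nu)) \
             / (gamma + gamma_d)**(4 - 3 * nu)

gamma = np.logspace(-12, -1, 200)      # gamma = l/t
curve = N_NG(gamma)                    # black (solid) curve of the figure
\end{verbatim}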
\subsection{Loops with kinks}
\label{ss:kink}
From Eq.~(\ref{change1}), with $\mathcal{J}_{\mathrm{k}}$ given in Eq.~(\ref{Jk}), we now have $\xi(\ell) = \sqrt{\ell^2+\ell_{\rm k}^2}$. Thus from Eq.~(\ref{this}), $t_\star$ satisfies a quadratic equation with solution
\begin{equation}
\frac{t_\star}{t} =\frac{ -\bar{\gamma}\left(\frac{\gamma_\ud}{\alpha}\right) + \sqrt{\bar{\gamma}^2-\gamma_{\mathrm{k}}^2\left(1-\left(\frac{\gamma_\ud}{\alpha}\right)^2\right)}}{\alpha\left(1-\left(\frac{\gamma_\ud}{\alpha}\right)\right)}
\end{equation}
where $\gamma_{\mathrm{k}}(t)$ is given in (\ref{thegammasdef}) and
\begin{equation}
\bar{\gamma}(t,\gamma) \equiv \gamma_\ud + \sqrt{\gamma_{\mathrm{k}}^2(t) +\gamma^2}
\end{equation}
Since $\alpha \sim 0.1$ and $\gamma_\ud \equiv \Gamma G\mu \lesssim 10^{-6}$
(from cosmic microwave background constraints on cosmic strings \cite{Ade:2013xla})
in our analytical expressions below we ignore terms in $\gamma_\ud/\alpha$ so that
$\left(\alpha t_\star/t\right)^2=\bar{\gamma}^2-\gamma_{\mathrm{k}}^2(t)$.
(This approximation was not used in our numerical calculations.)
Thus from Eq.~(\ref{eq:general}) we find, assuming $\alpha \gg \gamma_\ud$,
\begin{equation}
{\mathcal{N}}(t,\gamma)= \mathrm{C} \alpha^{3(1-\nu)}\left(\frac{\bar{\gamma}^2(t,\gamma)}{1+\gamma_{\mathrm{k}}^2(t)/\gamma^2}\right)^{1/2} \left(\bar{\gamma}^2(t,\gamma) - \gamma_{\mathrm{k}}^2(t)\right)^\frac{3\nu-5}{2} \qquad \qquad {\text{where}} \; \gamma \leq \alpha.
\label{answerbis}
\end{equation}
This distribution, in the radiation era, is plotted in
Fig.~\ref{fig:kink} for illustrative values of $\gamma_{\mathrm{k}}(t)$, with $\gamma_\ud=10^{-6}$, $\alpha=0.1$.
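The curves can be reproduced directly from Eq.~(\ref{answerbis}); a minimal numerical sketch (ours, using the same radiation-era normalisation as in the NG sketch above) is:
\begin{verbatim}
import numpy as np

# Sketch of Eq. (answerbis) for loops with kinks, radiation era,
# in the approximation alpha >> gamma_d used in the text.
def N_kinks(gamma, gamma_k, alpha=0.1, gamma_d=1e-6, nu=0.5):
    C = 0.18 / alpha**1.5                     # radiation-era normalisation
    bar = gamma_d + np.sqrt(gamma_k**2 + gamma**2)
    return (C * alpha**(3 * (1 - nu))
            * np.sqrt(bar**2 / (1 + (gamma_k / gamma)**2))
            * (bar**2 - gamma_k**2)**((3 * nu - 5) / 2))

gamma = np.logspace(-14, -1, 400)
for gk in (1e-11, 1e-6, 1e-2):   # gamma_k = 1e-5*gamma_d, gamma_d, 1e4*gamma_d
    curve = N_kinks(gamma, gk)
\end{verbatim}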
\begin{figure*}
\includegraphics[width=0.75\textwidth]{kinks-again}
\caption{
Loop distribution for kinks in the radiation era, with $\alpha=0.1$ and $\gamma_\ud = 10^{-6}$, and at several different epochs. Black solid line: $\gamma_{\mathrm{k}}=0$ ($t\rightarrow \infty$), the NG loop distribution. Red dashed line: $\gamma_{\mathrm{k}}(t)=10^{-5}\gamma_\ud$ (corresponding to $t=10^5 t_k$). Blue dot-dashed line: $\gamma_{\mathrm{k}}(t)=\gamma_\ud$ (corresponding to $t=t_k$). Green dotted line: $\gamma_{\mathrm{k}}(t)=10^4\gamma_\ud$ (corresponding to $t=10^{-4}t_k$). }
\label{fig:kink}
\end{figure*}
The important qualitative and quantitative features to notice are the following:
\begin{itemize}
\item The existence of the fixed scale $\ell_{\rm k}$ gives rise to a non-scaling distribution: $\mathcal{N}$ is explicitly $t$-dependent.
\item When $\gamma_{\mathrm{k}}\rightarrow 0$, namely when $t\rightarrow \infty$, Eq.~(\ref{answerbis}) reduces to the standard scaling NG loop distribution given in Eq.~(\ref{eq:NG}) (in the limit $\alpha \gg \gamma_\ud$).
\item For $\gamma \gg \gamma_{\mathrm{k}}(t)$, the loop distribution is scaling since $\bar{\gamma}\sim \gamma + \gamma_\ud$, so that
\begin{equation}
{\mathcal{N}}(t,\gamma) \simeq \mathrm{C} \alpha^{3(1-\nu)} (\gamma + \gamma_\ud)^{3\nu-4}.
\label{answersbig}
\end{equation}
This behaviour is clear in Fig.~\ref{fig:kink} where for $\gamma \gg \gamma_{\mathrm{k}}(t)$ the various curves coincide
with the NG curve. Hence for loops of these lengths, gravitational radiation is important but particle radiation plays no role. Furthermore
\begin{itemize}
\item when $\gamma_\ud \gg \gamma \gg \gamma_{\mathrm{k}}$, the distribution is {\it flat}; see the red dashed curve in Fig.~\ref{fig:kink}.
\item when $\gamma \gg (\gamma_\ud,\gamma_{\mathrm{k}})$, $\mathcal{N}$ drops off as $\gamma^{3\nu-4}$, as for NG loops, a dependence which is simply due to the expansion of the universe.
\end{itemize}
\item For $\gamma \ll \gamma_{\mathrm{k}}(t)$, the distribution no longer scales
because of particle radiation. Indeed $\bar{\gamma}\sim \gamma_{\mathrm{k}}(t) + \gamma_\ud$ so that
\begin{equation}
{\mathcal{N}} \simeq \mathrm{C} \alpha^{3(1-\nu)} \gamma_\ud^\frac{3\nu-5}{2}
\left( \frac{\gamma}{\gamma_{\mathrm{k}}(t)}\right)
(2\gamma_{\mathrm{k}}(t)+\gamma_\ud)^\frac{3\nu-5}{2} (\gamma_{\mathrm{k}}(t)+\gamma_\ud).
\label{linear}
\end{equation}
This linear dependence on $\gamma$ for $\gamma \ll \gamma_k$ is visible in Fig.~\ref{fig:kink}. Notice that
\begin{itemize}
\item when $\gamma_\ud \ll \gamma_{\mathrm{k}}$, there is no plateau in the distribution, which goes from the linear behaviour Eq.(\ref{linear}) to the scaling behaviour Eq.~(\ref{answersbig}), at a value of $\gamma$ obtained by equating these two equations, namely
\begin{equation}
\gamma_{\mathrm{k}}^*(t)\simeq \sqrt{2\gamma_{\mathrm{k}}\gamma_\ud}.
\nonumber
\end{equation}
This is clearly visible in the green-dotted curve in Fig.~\ref{fig:kink}.
\end{itemize}
\end{itemize}
When $\gamma_{\mathrm{k}}(t)\ll \gamma_\ud$, an excellent approximation to the distribution is
\begin{equation}
\mathcal{N}(\gamma,t) \simeq \mathrm{C} \alpha^{3(1-\nu)} \dfrac{1}{\mathcal{J}(\gamma,t)} (\gamma + \gamma_\ud)^{3\nu-4},
\label{eq:approx}
\end{equation}
where, for the kinks considered here,
\begin{equation}
\mathcal{J}(\gamma,t)=\sqrt{1+\left(\frac{\gamma_{\mathrm{k}}(t)}{\gamma}\right)^2}.
\nonumber
\end{equation}
On the other hand, when $\gamma_{\mathrm{k}}(t)\geq \gamma_\ud$ the distribution changes behaviour, and for
$\gamma_{\mathrm{k}}(t)\gg \gamma_\ud$
{\it its amplitude is significantly suppressed due to particle emission}. Indeed when $\gamma=\gamma_{\mathrm{k}}^*(t)$, which is at the maximum of $\mathcal{N}$ (see green curve, figure \ref{fig:kink}), ${\mathcal{N}}$ scales as $\gamma_{\mathrm{k}}^{-(4-3\nu)/2}$, which decreases with increasing $\gamma_{\mathrm{k}}$. The equality $\gamma_\ud = \gamma_{\mathrm{k}}(t) $ defines a {\it characteristic time}
$t_k$ by
\begin{equation}
t_k \equiv \frac{\ell_{\rm k}}{\gamma_\ud}.
\label{tkdef}
\end{equation}
For $t\ll t_k$, particle emission is dominant, $\gamma_{\mathrm{k}}(t)\geq \gamma_\ud$, and the distribution is suppressed.
Using $\ell_{\rm k}$ given by Eq.~(\ref{Est}),
\begin{equation}
t_k
=\beta_k \frac{t_{pl}}{\Gamma^2 (G\mu)^{5/2}} \simeq \beta_k t_{eq} \left(\frac{2.5\times 10^{-24}}{G\mu}\right)^{5/2}
\nonumber
\end{equation}
or in terms of redshift
\begin{equation}
z_k \simeq z_{eq} \left(\frac{G\mu}{2.5\times 10^{-24}}\right)^{5/4} \frac{1}{\sqrt{\beta_k}}
\label{zkdef}
\end{equation}
where
$z_{eq} \simeq \Omega_M/\Omega_{\rm R} \sim 3367$.
The LH panel of Fig.~\ref{fig:kc} shows the loop distribution for different redshifts for $\ell_{\rm k}$ given in Eq.~(\ref{Est}) and $\beta_k=1$. The effect of the suppression of the loop distribution at $z\gg z_k$ on the SGWB will be discussed in Sec.~\ref{sec:SGWB}.
\subsection{Loops with cusps}
\label{ss:c}
For loops with cusps, where $\mathcal{J}=\mathcal{J}_{\mathrm{c}}$ given in Eq.~(\ref{Jc}), the analysis is very similar. We only give the salient features. As for kinks (see Eq.~\ref{tkdef}), one can define a characteristic time through $\gamma_\ud = \gamma_{\mathrm{c}}(t)$, namely
\begin{equation}
t_c \equiv \frac{\ell_{\rm c}}{\gamma_\ud},
\label{tcdef}
\end{equation}
and again, as for kinks, when $t \ll t_c$ the effects of particle radiation are more important and the loop distribution is suppressed. For $\ell_{\rm c}$ given in Eq.~(\ref{lcdef}), we have
\begin{equation}
t_c
=\beta_c \frac{t_{pl}}{\Gamma^3 (G\mu)^{7/2}} \simeq \beta_c t_{eq} \left(\frac{4.6 \times 10^{-18}}{G\mu}\right)^{7/2}
\label{tcdefbis}
\end{equation}
or in terms of redshift
\begin{equation}
z_c \simeq z_{eq} \left(\frac{G\mu}{4.6 \times10^{-18}}\right)^{7/4} \frac{1}{\sqrt{\beta_c}}.
\label{zcdef}
\end{equation}
For the relevant range, namely $G\mu < 10^{-6}$, we have $z_c < z_k$ and hence the observational consequences of cusps, both on the SGWB and the diffuse Gamma-ray background, are expected to be more significant than those of kinks --- since, as discussed above, the loop distribution is suppressed when $z<(z_c,z_k)$, see Fig.~\ref{fig:kc}.
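As a quick numerical check of Eqs.~(\ref{zkdef}) and (\ref{zcdef}), and of the values quoted in the caption of Fig.~\ref{fig:kc}, one may evaluate (taking $\beta_{\mathrm{k}}=\beta_{\mathrm{c}}=1$; this sketch is ours):
\begin{verbatim}
# Sketch: characteristic redshifts below which particle emission no longer
# suppresses the loop distribution, Eqs. (zkdef) and (zcdef).
z_eq = 3367.0

def z_k(Gmu, beta_k=1.0):
    return z_eq * (Gmu / 2.5e-24)**1.25 / beta_k**0.5

def z_c(Gmu, beta_c=1.0):
    return z_eq * (Gmu / 4.6e-18)**1.75 / beta_c**0.5

for Gmu in (1e-17, 1e-11, 1e-7):
    print(f"Gmu={Gmu:.0e}: z_k={z_k(Gmu):.1e}, z_c={z_c(Gmu):.1e}")
# For Gmu = 1e-17 this gives z_k ~ 1e12 and z_c ~ 1e4, as in Fig. (fig:kc).
\end{verbatim}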
The explicit $\gamma$-dependence of the distribution is the following.
First, substituting $\mathcal{J}_{\mathrm{c}}$ in the definitions of $\xi(\ell)$ and $t_\star$,
Eqs.~(\ref{change1}) and (\ref{this}) respectively, we find
\begin{align*}
\xi(\ell) &= \left(\ell^{3/2}+\ell_{\rm c}^{3/2}\right)^{2/3}, \\
\left(\frac{\alpha t_\star}{t}\right)^{3/2} &= \left[\gamma_\ud + \left(\gamma^{3/2}+\gamma_{\mathrm{c}}^{3/2}\right)^{2/3}\right]^{3/2} -\gamma_{\mathrm{c}}^{3/2} \qquad \text{for} \; \alpha \gg \gamma_\ud.
\end{align*}
It then follows from Eq.~(\ref{eq:same}) that the resulting distribution again scales for $\gamma \gg \gamma_{\mathrm{c}}$ where it is given by Eq.~(\ref{answersbig}); and for $\gamma \ll \gamma_\ud$, ${\cal{N}} \propto \sqrt{\gamma}$. When $\gamma_{\mathrm{c}}\gg \gamma_\ud$, we find
\begin{equation*}
\mathcal{N} \propto \begin{cases}
\gamma^{3\nu-4} \qquad \qquad (\gamma \gg \gamma_{\mathrm{c}}^*) \\
\sqrt{\gamma}\qquad \quad \qquad (\gamma \ll \gamma_{\mathrm{c}}^*)
\end{cases}
\end{equation*}
where
\begin{equation}
\gamma_{\mathrm{c}}^* \simeq \left({\gamma_\ud \sqrt{\gamma_{\mathrm{c}}}}\right)^{2/3}.
\nonumber
\end{equation}
\begin{figure*}
{\includegraphics{evolution-kinks.pdf}}
{\includegraphics{evolution-cusps.pdf}}
\caption{
Loop number density $\mathcal{N}=t^4 n$ for kinks [LH panel] and cusps [RH panel], for $G\mu=10^{-17}$.
Thus $z_k\sim 10^{12}$ and $z_c \sim 10^{4}$. From bottom to top, the curves show
snapshots of the loop distribution at redshifts $z=10^{13}, 10^{11}, 10^{9}, 10^{7}, 10^{5}$,
and the black curve is the scaling loop distribution at $z\rightarrow 0$. The loop distributions are suppressed for $z\gg z_k$ or $z\gg z_c$.
}
\label{fig:kc}
\end{figure*}
\section{The Stochastic Gravitational Wave Background}
\label{sec:SGWB}
The stochastic GW background $\mathrm{\Omega}_{\rm gw}(t_0,f)$ given in (\ref{eqn:theone}) is obtained by adding up the GW emission from all the loops throughout the whole history of the Universe which have contributed to frequency $f$.
Following the approach developed in \cite{Caldwell:1991jj,ViSh,Blanco-Pillado:2017oxo}
\begin{equation}
\mathrm{\Omega}_{\rm gw}(\ln f) = \frac{8\pi G^2\mu^2 f}{3 \mathrm{H_{0}}^2}\sum_{j=1}^\infty C_j(f) P_j\,,
\label{eqn:omega-method-1}
\end{equation}
where
\begin{equation}\label{eqn:Cn}
C_j(f) = \frac{2j}{f^2 }\int_0^{z_{\mathrm{friction}}} \frac{dz}{H(z) (1+z)^6}~n\left(\frac{2j} {(1+z)f},t(z)\right)\,,
\end{equation}
and $z_{\mathrm{friction}}$ is the redshift below which friction effects on the string dynamics become negligible \cite{ViSh}
\begin{equation}
z_{\mathrm{friction}} \simeq
{z_{\rm eq}}\, ( 4.4\times 10^{16}) \left(\frac{G\mu}{10^{-11}}\right) .
\label{zfr}
\end{equation}
The $C_j$ depend on the loop distribution $n(\ell,t)$ through $n\left({2j} /{((1+z)f)},t(z)\right)$, whilst the $P_j$ are the ``average loop gravitational wave power-spectrum'', namely the power emitted in gravitational waves in the $j$th harmonic of the loop. By definition of $\Gamma$, these must be normalised to
$$
\Gamma = \sum_{j=1}^\infty P_j.
$$
For loops with kinks, $P_j \propto j^{-5/3}$, whereas for loops with cusps $P_j \propto j^{-4/3}$ ~\cite{Vachaspati:1984gt,Binetruy:2009vt,ViSh}.
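Schematically, the sum in Eq.~(\ref{eqn:omega-method-1}) can be assembled as in the sketch below (ours; the loop density $n$, the cosmology $H(z)$, $t(z)$, the redshift grid and consistent units are all assumed to be supplied by the user, and $\Gamma\sim 50$ is taken as a representative value):
\begin{verbatim}
import numpy as np

# Sketch: P_j normalised to sum_j P_j = Gamma, and Omega_gw(ln f)
# from Eqs. (eqn:omega-method-1)-(eqn:Cn) by simple quadrature in z.
Gamma, q = 50.0, 4.0 / 3.0       # Gamma ~ 50 typical; q = 4/3 cusps, 5/3 kinks
j = np.arange(1, 1001)
P_j = j**(-q)
P_j *= Gamma / P_j.sum()         # enforce sum_j P_j = Gamma

def C_j(jj, f, n, H, t, z):
    integrand = n(2 * jj / ((1 + z) * f), t(z)) / (H(z) * (1 + z)**6)
    return 2 * jj / f**2 * np.trapz(integrand, z)

def Omega_gw(f, Gmu, H0, n, H, t, z):
    s = sum(P_j[jj - 1] * C_j(jj, f, n, H, t, z) for jj in j)
    return 8 * np.pi * Gmu**2 * f / (3 * H0**2) * s
\end{verbatim}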
As explained above, the effect of $\gamma_{\mathrm{k}}$ and $\gamma_{\mathrm{c}}$ on the loop distribution is particularly important at large redshifts $z>(z_c,z_k)$, and hence in the radiation era. Therefore we expect the effect of particle radiation to be visible in the high-frequency part of the spectrum.
This is indeed observed in Fig.~\ref{fig:one-scale-stochastic}, where the LH panel is for kinks with $\ell_{\rm k}$ given in Eq.~(\ref{Est}) and $P_j \propto j^{-5/3}$; whereas the RH panel is for cusps with $\ell_{\rm c}$ given in Eq.~(\ref{lcdef}) and $P_j \propto j^{-4/3}$. As a result of the non-scaling loop distribution, the spectrum is no longer flat at high frequencies and, as expected, the effect is more significant for cusps than for kinks since $z_c < z_k$.
\begin{figure*}
\includegraphics{frequency-dependence-kinks.pdf}
\includegraphics{frequency-dependence-cusps.pdf}
\caption{SGWB including the backreaction of particle emission on the loop distribution. LH panel: kinks on loops, RH panel: cusps on loops. The spectra are cut off at high frequency, as indicated by the black vertical lines. $G\mu$ ranges from $10^{-17}$ (lower curve), through $10^{-15}$, $10^{-13}$, $10^{-11}$, $10^{-9}$ and $10^{-7}$ (upper curve). Also plotted are the power-law integrated sensitivity curves from SKA (pink dashed) \cite{Janssen:2014dka}, LISA (yellow dashed) \cite{Caprini:2019pxz}, adv-LIGO (grey dashed) \cite{TheLIGOScientific:2016dpb} and Einstein Telescope (blue dashed) \cite{Punturo:2010zz,Hild:2010id}.}
\label{fig:one-scale-stochastic}
\end{figure*}
We can estimate the frequency above which the spectrum decays as follows. In the radiation era
\begin{align}
H(z) &= (1+z)^2 \sqrt{\Omega_{\rm R}} \mathrm{H_{0}} \label{Hrad} \\
t(z) &= \dfrac{1}{2(1+z)^2}\frac{1}{\sqrt{\Omega_{\rm R}} \mathrm{H_{0}}} \label{trad}
\end{align}
At high frequency, the lowest harmonic $j=1$ is expected to dominate \cite{ViSh}, so we set $P_j = \Gamma \delta_{j,1}$. Then using (\ref{Hrad}) and (\ref{trad}), Eq.~(\ref{eqn:omega-method-1}) simplifies to
\begin{eqnarray}
\mathrm{\Omega}_{\rm gw}(\ln f) &=& 2^4 \frac{16\pi (\Gamma G\mu)^2 }{3 \Gamma} \frac{\mathrm{H_{0}}}{ f } \Omega_{\rm R}^{3/2} \int_{{z_{\rm eq}}}^{z_{\mathrm{friction}}} {\mathrm{d} z}~\mathcal{N}\left(\frac{2} {(1+z)f},t(z)\right)\,
\nonumber
\\
&\propto& \frac{\mathrm{H_{0}}}{ f }
\left[ \int_{{z_{\rm eq}}}^{z_{c,k}} {\mathrm{d} z}~\mathcal{N}\left(\frac{2} {(1+z)f},t(z)\right) + \int_{z_{c,k}}^{z_{\mathrm{friction}}} {\mathrm{d} z}~\mathcal{N}\left(\frac{2} {(1+z)f},t(z)\right) \right]\,.
\nonumber
\\
&\simeq &\frac{\mathrm{H_{0}}}{ f } \int_{{z_{\rm eq}}}^{z_{c,k}} {\mathrm{d} z}~\mathcal{N}\left(\frac{2} {(1+z)f},t(z)\right) \,.
\label{blimey}
\end{eqnarray}
Here, in going from the second to the third equality, we have used the fact that (i) for $G\mu \gtrsim 10^{-18}$,
which is the relevant range for current and future GW detectors, ${z_{\rm eq}} < (z_c,z_k) \ll z_{\mathrm{friction}}$ (see Eqs.~(\ref{zkdef}), (\ref{zcdef}) and (\ref{zfr})), and (ii) that the loop distribution above $z_{(c,k)}$ is subdominant, see e.g.~discussion above equation (\ref{tkdef}) in section \ref{ss:kink}. Using Eq.~(\ref{trad}) as well as the approximation for the loop distribution for $z<z_k$ given in Eq.~(\ref{eq:approx}), it follows that for kinks
\begin{eqnarray}
[ \Omega_{\rm gw}(\ln f)]_k
&\propto& \int_{x_{\rm{eq}} }^{x_k} \left[1+\left(\dfrac{\ell_{\rm k} x f^2}{8 \mathrm{H_{0}} \sqrt{\Omega_{\rm R}}}\right)^2\right]^{-1/2} \left(\gamma_\ud+x\right)^{-5/2} \mathrm{d} x
\label{rubbish}
\end{eqnarray}
where we have changed variable from $z$ to
\begin{equation}
x = \frac{4}{f} (1+z) \mathrm{H_{0}}\sqrt{\Omega_{\rm R}}
\nonumber
\end{equation}
so that
\begin{equation}
x_{\rm{eq}} = \frac{4}{f} (1+{z_{\rm eq}}) \mathrm{H_{0}}\sqrt{\Omega_{\rm R}}\, , \qquad x_k = \frac{4}{f} (1+z_k) \mathrm{H_{0}}\sqrt{\Omega_{\rm R}}\, .
\nonumber
\end{equation}
In order to understand the frequency dependence of $\Omega_{\rm gw}$, let us initially
focus on the standard NG case, namely $\ell_{\rm k}=0$. (Here, the same change of
variable starting from the first line of Eq.~(\ref{blimey}) again yields Eq.~(\ref{rubbish})
but with the upper bound replaced by $x_{\rm friction}={4} (1+z_{\mathrm{friction}}) \mathrm{H_{0}}\sqrt{\Omega_{\rm R}}/f$).
Then Eq.~(\ref{rubbish}) gives
\begin{equation}
[ \Omega_{\rm gw}(\ln f)]_{NG}\propto \frac{1}{\left(\frac{f_{\mathrm{eq}}}{f}+1\right)^{3/2}} -\frac{1}{\left(\frac{f_{\mathrm{friction}}}{f}+1\right)^{3/2}} ,
\nonumber
\end{equation}
where
\begin{equation}
f_{\mathrm{eq}} = \frac{4 \mathrm{H_{0}} \sqrt{\Omega_{\rm R}}(1+ {z_{\rm eq}})}{\gamma_\ud} \sim \frac{10^{-18}}{G\mu} {\rm s}^{-1}\, , \qquad f_{\mathrm{friction}} = \frac{4 \mathrm{H_{0}} \sqrt{\Omega_{\rm R}} (1+z_{\mathrm{friction}})}{\gamma_\ud} \sim 10^{10} {\rm s}^{-1},
\nonumber
\end{equation}
and where in the last equality we have used Eq.~(\ref{zfr}). At frequencies $f$ for which $f_{\mathrm{friction}} \gg f \gg f_{\mathrm{eq}}$ it follows that $ [ \Omega_{\rm gw}(\ln f)]_{NG} \rightarrow$ constant,
meaning that the spectrum is flat, which is the well-known result for NG strings \cite{ViSh}.
For $\ell_{\rm k} \neq 0$, the argument is altered because of the frequency dependence of the term in square brackets in Eq.~(\ref{rubbish}). A further characteristic frequency now enters: this can be obtained by combining the typical scales of the two terms in Eq.~(\ref{rubbish}). Namely, on one hand, from the first term (in square brackets) we have $\ell_{\rm k} f^2\sim 8 \mathrm{H_{0}} \sqrt{\Omega_{\rm R}}x^{-1}$; and on the other hand from the second (standard NG) term we have $x\sim \gamma_\ud$. Combining these yields the
characteristic frequency
\begin{equation}
f_k \sim \left( {\dfrac{8 \mathrm{H_{0}} \sqrt{\Omega_{\rm R}}}{\ell_{\rm k} \gamma_\ud}}\right)^{1/2} .
\label{eq:dirac-cutoff}
\end{equation}
For $f_k > f >f_{eq}$ the spectrum is still flat, as in the NG case. However, for $f>f_k$ it decays since the first term in square brackets in Eq.~(\ref{rubbish}) dominates. With $\ell_{\rm k}$ given in Eq.~(\ref{Est}), $f_k \propto (G\mu)^{1/4}\beta_{\mathrm{k}}^{-1/2}$, and this behaviour is clearly shown in Fig.~\ref{fig:one-scale-stochastic} where $f_k$ is shown with a vertical black line for each value of $G\mu$ and we have assumed $\beta_{\mathrm{k}}=1$.
For cusps the analysis proceeds identically with
\begin{equation}
f_c = \left( {\dfrac{8 \mathrm{H_{0}} \sqrt{\Omega_{\rm R}}}{\ell_{\rm c} \gamma_\ud}}\right)^{1/2} .
\label{eq:dirac-cutoffc}
\end{equation}
Now, on using $\ell_{\rm c}$ defined in Eq.~(\ref{lcdef}), we have $f_c \propto (G\mu)^{3/4}\beta_{\mathrm{c}}^{-1/2}$. The spectrum of SGWB in this case is shown in the RH panel of Fig.~\ref{fig:one-scale-stochastic} where $f_c$ is shown with a vertical black line for each value of $G\mu$ and we have taken $\beta_{\mathrm{c}}=1$.
As the figure shows, with $\beta_{\mathrm{c}}=1$ and in the range of $G\mu$ of interest for GW detectors,
the decay of $\Omega_{\rm{GW}}$ for $f>f_c$ is {\it outside} the observational window of the
LIGO, LISA (and future ET) detectors. In order to have $f_c \sim f_{\rm {LIGO}}$, one would require large
values of $\beta_{\mathrm{c}}$ which are not expected.
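These cutoff frequencies are straightforward to evaluate numerically; a minimal sketch (ours) of Eqs.~(\ref{eq:dirac-cutoff}) and (\ref{eq:dirac-cutoffc}), taking the kink or cusp length (in seconds) and $\gamma_\ud=\Gamma G\mu$ as inputs, is:
\begin{verbatim}
import numpy as np

# Sketch: high-frequency cutoff f_k (or f_c), Eqs. (eq:dirac-cutoff)
# and (eq:dirac-cutoffc).  H0 is converted to s^-1 so that, with the
# kink/cusp length in seconds, the result comes out in Hz.
H0_s = 100.0 * 0.678 * 1.0e3 / 3.086e22       # km/s/Mpc -> s^-1
Omega_R = 9.1476e-5

def f_cut(l_x, gamma_d):
    """l_x = l_k or l_c in seconds; gamma_d = Gamma*G*mu."""
    return np.sqrt(8 * H0_s * np.sqrt(Omega_R) / (l_x * gamma_d))
\end{verbatim}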
\section{Emission of particles}
\label{sec:particle}
The loops we consider radiate not only GW but also particles. Indeed, for loops with kinks,
from Eq.~(\ref{kink})
\begin{equation}
\left. \dot{\ell} \right|_{\rm particle} = -\gamma_\ud \frac{\ell_{\rm k}}{\ell}
\label{rp}
\end{equation}
The emitted particles are heavy and belong to the dark particle physics sector corresponding to the
fields that make up the string. We assume that there is some interaction of the dark sector
with the standard model sector. Then the emitted particle radiation will eventually decay,
and a significant fraction of the energy $f_{{\rm eff}} \sim 1$ will cascade down into $\gamma$-rays.
Hence the string network will be constrained by the Diffuse Gamma-Ray bound measured at GeV scales by Fermi-LAT \cite{FL}. This bound is
\begin{equation}
\omega_{\rm DGRB}^{\rm obs} \lesssim 5.8 \times 10^{-7} \; {\rm eV}{\rm cm}^{-3},
\end{equation}
where $\omega_{\rm DGRB}$ is the total electromagnetic energy injected since the universe became
transparent to GeV $\gamma$ rays at $t_{\gamma} \simeq 10^{15}$s, see e.g.~\cite{Mota:2014uka}.
The rate per unit volume at which string loops lose energy into particles can be obtained by integrating (\ref{rp}) over the loop distribution $n(\ell,t)=t^{-4}\mathcal{N}(\gamma,t)$, namely
\begin{equation}
\Phi_{\rm H}(t) = \mu \gamma_\ud {\ell_{\rm k}} \int_0^{\alpha t} n(\ell,t)\frac{\mathrm{d}\ell}{\ell} =
\mu t^{-3} {\gamma_\ud \gamma_{\mathrm{k}}} \int_0^{\alpha} \ \frac{{\mathcal{N}}(\gamma',t)}{\gamma'} \mathrm{d} \gamma'
\end{equation}
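Numerically, this rate can be evaluated by quadrature; a minimal sketch (ours, for kinks, reusing the distribution $\mathcal{N}$ sketched earlier, with $\mu$ the string tension in the desired units and the lower limit regularised by a small $\epsilon$) is:
\begin{verbatim}
import numpy as np

# Sketch: rate of energy loss into particles per unit volume for kinks,
# Phi_H(t) = mu * gamma_d * gamma_k(t) * t^-3 * int_0^alpha N(g,t)/g dg.
def Phi_H(t, mu, gamma_d, ell_k, N, alpha=0.1, eps=1e-16, m=2000):
    gamma_k = ell_k / t
    g = np.logspace(np.log10(eps), np.log10(alpha), m)
    return mu * gamma_d * gamma_k / t**3 * np.trapz(N(g, gamma_k) / g, g)
\end{verbatim}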
The Diffuse Gamma Ray Background (DGRB) contribution is then given by (see e.g.~\cite{Mota:2014uka})
\begin{eqnarray}
\omega_{\rm DGRB} &=& f_{\rm eff }\int_{t_\gamma}^{t_0} \dfrac{{\rm \Phi_H}(t)}{(1+z)^4} \mathrm{d} t
\nonumber
\\
&=& f_{\rm eff } \mu {\gamma_\ud } \int_{t_\gamma}^{t_0} \dfrac{\gamma_{\mathrm{k}}(t)}{t^3 (1+z(t))^4} \left[ \int_0^{\alpha} \ \frac{{\mathcal{N}}(\gamma',t)}{\gamma'} \mathrm{d} \gamma' \right] \mathrm{d} t
\nonumber
\\
&=&
\Gamma(8.4\times10^{39}) f_{\rm eff } \left(\frac{G\mu}{c^4}\right)^2
\int_{t_\gamma}^{t_0} \dfrac{\gamma_{\mathrm{k}}(t)}{t^3 (1+z(t))^4} \left[ \int_0^{\alpha} \ \frac{{\mathcal{N}}(\gamma',t)}{\gamma'} \mathrm{d} \gamma' \right] \mathrm{d} t
\qquad {\rm eV}{\rm cm}^{-3}
\label{omgk}
\end{eqnarray}
where in the last line we have explicitly put in factors of $c$ and converted to physical units of ${\rm eV}/{\rm cm}^3$.
For cusps, one finds
\begin{eqnarray}
\omega_{\rm DGRB} &=& \Gamma(8.4\times10^{39}) f_{\rm eff } \left(\frac{G\mu}{c^4}\right)^2 \int_{t_\gamma}^{t_0} \dfrac{\sqrt{\gamma_{\mathrm{c}}(t)}}{t^3 (1+z(t))^4} \left[ \int_0^{\alpha} \ \frac{{\mathcal{N}}(\gamma',t)}{\sqrt{\gamma'}} \mathrm{d} \gamma' \right] \mathrm{d} t \qquad {\rm eV}{\rm cm}^{-3}
\label{omgc}
\end{eqnarray}
In the matter dominated era, the loop distribution is dominated by those loops produced in
the radiation era but decaying in the matter era: its general expression is given in Eq.~(\ref{looprm}),
and can be deduced straightforwardly from the results of subsections \ref{ss:kink} and \ref{ss:c}
for kinks and cusps respectively. We have calculated (\ref{omgk}) and (\ref{omgc}) numerically,
and the results are shown in
Fig.~\ref{fig:omg} for kinks [LH panel] and cusps [RH panel], together with the Fermi-LAT bound.
\begin{figure*}
{\includegraphics[width=0.45\textwidth]{omega_cas-kinks.pdf}}
{\includegraphics[width=0.45\textwidth]{omega_cas-cusps.pdf}}
\caption{
Contribution of cosmic strings to the Diffuse Gamma-Ray Background. The (blue) horizontal line is the experimental constraint from Fermi-LAT, while the (orange) line is the exact numerical calculation for kinks (LH panel) and cusps (RH panel). On either side of the maxima, the slope and amplitude can be estimated using the results of previous sections. Kinks: for low $G\mu$ the slope is $9/8$ (dashed-green line), and for high $G\mu$ it depends on $\mu^{-2}\log(\mu)$ (dashed-red line).
Cusps: For low $G\mu$ the slope is $13/12$ (dashed-green line), and for high $G\mu$ it is $-5/4$ (dashed-red line). The slightly different amplitude between the numerical calculation and the analytical one is because the latter assumes a matter dominated universe, and hence neglects effects of late time acceleration.}
\label{fig:omg}
\end{figure*}
It is clear from this figure that particle radiation from loops containing
kinks and/or cusps, with $\ell_{\rm k}$ and $\ell_{\rm c}$ given in (\ref{Est}) and \eqref{lcdef},
is not constrained by the Fermi-LAT data.
The general shape of the spectra in Fig.~\ref{fig:omg} can be understood from the
results of section \ref{sec:nlt}. Let us focus on the case of cusps (for kinks the analysis is similar).
First, we can determine the range of $G\mu$ for which the characteristic time $t_c$ defined in
Eq.~(\ref{tcdef}) falls within the range of integration of (\ref{omgc}), namely
\begin{equation}
t_\gamma \leq t_c \leq t_0 \, \qquad \Longleftrightarrow \qquad 10^{-19} \lesssim G\mu \lesssim 10^{-18}
\nonumber
\end{equation}
(we have assumed $\beta_{\mathrm{c}}=1$ and, from Eq.~(\ref{tcdefbis}), $t=t_c$ implies
$G\mu \sim 4.6\times 10^{-18} ( {t_{\rm eq}}/{t})^{2/7}$). This range of $G\mu$ defines the position
of the maximum of the DGRB in the RH panel of Fig.~\ref{fig:omg}. For lower $G\mu$, all times in
the integration range are {\it smaller} than $t_c$. As we have discussed in Sec.~\ref{ss:c}, in this
case the loop distributions are {\it suppressed} due to particle radiation: there are fewer loops, and
hence fewer particles are emitted leading to a decrease in the DGRB. This is shown in
Fig.~\ref{fig:omg}, and using the results of Sec.~\ref{ss:c}, one can show that for
$G\mu \ll 10^{-19}$, $\Phi_{\rm H}(t) \propto \mu^{2/3} \ell_{\rm c}^{-1/6} (1+z)^3 t^{-4/3}$ leading to
\begin{equation}
\omega_{\rm DGRB} \propto \mu^{2/3} \ell_{\rm c}^{-1/6} \propto (G\mu)^{13/12} \qquad (G\mu \ll 10^{-19}).
\nonumber
\end{equation}
On the other hand, for $G\mu \gg 10^{-18}$, all times in the integration range are {\it larger} than $t_c$. There is no suppression of the loop distribution, since gravitational radiation dominates over particle emission (see Sec.~\ref{sec:nlt}). But precisely because gravitational radiation dominates, fewer particles are emitted, and hence we also have a decrease in the DGRB. We now find that $\Phi_{\rm H}(t)\propto \gamma_\ud^{-1} \mu \sqrt{\ell_{\rm c}} (1+z)^3 t^{-2}$ so that
\begin{equation}
\omega_{\rm DGRB} \propto \sqrt{\ell_{\rm c}} \propto (G\mu)^{-5/4}
\nonumber
\end{equation}
which is the slope seen in Fig.~\ref{fig:omg}. For kinks the discussion is very similar, and the slopes are given in the caption of the figure. However, each kink event emits fewer particles, leading to a lower overall DGRB.
\section{Conclusion}
\label{sec:conc}
Cosmic string loops emit both particle and gravitational radiation. Particle emission is more important
for small loops, while gravitational emission dominates for large loops. In this work, we have
accounted for both types of radiation in the number density of loops and calculated the expected stochastic gravitational wave background and the diffuse gamma ray background from strings.
Our results show that the
number density of loops gets cutoff at small lengths due to particle radiation. The strength of
the cutoff depends on the detailed particle emission mechanism from strings -- if only kinks
are prevalent on strings, small loops are suppressed but not as much as in the case when
cusps are prevalent (see Fig.~\ref{fig:kc}). The cutoff in loop sizes implies that the stochastic
gravitational wave background will get cut off at high frequencies (see Fig.~\ref{fig:one-scale-stochastic}).
The high frequency cutoff does not affect current gravitational wave detection efforts but may
become important for future experiments.
Particle emission from strings can provide an important alternate observational signature
in the form of cosmic rays. Assuming that the particles emitted from strings
decay into standard model Higgs particles that then eventually cascade into gamma rays,
we can calculate the gamma ray background from strings. This background is below
current constraints in the case of both kinks and cusps.
It is important to evaluate more carefully the prevalence of kinks versus cusps on
cosmological string loops. In \cite{TVrecent}, particle radiation from a loop of a specific shape
was studied where the shape was dictated by general expectations for the behavior
of the cosmological string network. That particular loop only contained kinks. It would be
of interest to study other loop shapes that are likely to be produced from the network
and that contain cusps and to assess if the $1/\sqrt{\ell}$ dependence in \eqref{cusp} is
an accurate characterization of such loops over their lifetimes. It would also be interesting to study other loop production functions, particularly those of \cite{Polchinski:2006ee,Polchinski:2007rg,Dubath:2007mf} which predict a larger number of small loops relative to the situation studied in section \ref{subsec:delta}; hence one might expect a larger gamma ray background from strings in this case\footnote{Work in progress}.
\acknowledgements
We would like to thank Dimitri Semikoz and Ed Porter for useful discussions, and Mark Hindmarsh, Christophe Ringeval and G\'eraldine Servant for useful comments and questions on the first draft of this paper. PA thanks Nordita for hospitality whilst this work was in progress. DAS thanks Marc Ar\`ene, Simone Mastrogiovanni and Antoine Petiteau. TV thanks APC (Universit\'e Paris Diderot)
for hospitality through a visiting Professorship while this work was being done.
TV is supported by the U.S. Department of Energy, Office of High Energy Physics, under
Award No.~DE-SC0019470 at Arizona State University.
\section{Introduction}
\label{sec:1}
The packing of granular materials has a well-established history of inquiry, bolstered by theoretical, experimental, and numerical works aimed at understanding these far-from equilibrium systems~\cite{ellenbroek2009,majmudar2007,desmond2009,dagois2012,goodrich2014,coulais2014,morse2017,chaudhuri2010,bitzek2006,morse2014,tighe2014,tordesillas2011}. While individual particles/grains are considered discrete solids, granular amalgams can display a broader range of behavior, transitioning between liquid, glass, and solid-like states~\cite{liu2010,vaagberg2013,mari2009}. This diverse behavior provides direct contrast for situations in which granular matter interacts with a continuum, such as a thin elastic structure. Though in general these coupled interactions are less well understood, they provide a useful connection to many real-world systems. Previous work on root growth has shed light on granular force-chain development and propagation~\cite{kolb2017,kolb2012,whiteley1982,oliva2007,bengough2005}. The study of burrowing bivalves \& crustaceans and of the locomotive strategies in desert dwelling reptiles~\cite{dorgan2015,atkinson2015,maladen2009,young2003} highlights some of the impediments to motion that are specific to moving in and around granular materials. Recent inquiries have aimed to create a general physical framework for these elastogranular phenomena through the analysis of the large elastic deformations of thin rods embedded in both horizontal and vertically oriented granular systems~\cite{kolb2013,mojdehi2016,schunter2018,algarra2018}. With previous investigations either neglecting the role of grain size distribution or limited to monodisperse arrays, questions regarding bidispersity and disruptions to crystalline order, remain open.
\section{Elastogranular Systems}
\label{sec:2}
In this Letter, we consider the buckling and packing of an elastica within a nearly frictionless, bidisperse granular bed. Experimentally, an unbent elastica of initial arclength $L_0$ (equal to the distance separating the clamped/roller boundary conditions that ensure strictly planar deformations and permit additional arclength to enter the system), is confined to deform within 2D arrays of soft hydrogel grains ($\text{M}^2$ Polymer \& MagicWaterBeads). Binary (50:50) mixtures of large $(r_1)$ and small $(r_2)$ radii grains, with diameter ratio $\eta=r_1/r_2$, are randomly placed at equal initial packing fractions $\phi_0$ within the areas $\{B_1,B_2\}$ on both sides of the slender structure [see Fig.~\ref{fig1}]. Here we consider three diameter ratios, with $\eta \in [1.0, 1.2, 1.9]$, prepared over a range of initial packing fractions. At the start of an experiment (for a granular array with a particular $\eta$ and $\phi_0$), we begin increasing the arclength quasi-statically in small increments $\Delta$ $(\sim 0.2\,mm)$, such that the new, current arclength is: $L = L_0 + \Delta$ [Fig.~\ref{fig1} and video S1]. This allows for the observation of both the onset of buckling and characteristic postbuckling morphologies [Figs. 1(i)-1(iii)]~\cite{bigoni2015}.
Using bidispersity as a small perturbation~\cite{kurita2010} to the fragile, hexagonally-packed states that arise in monodisperse arrays ($\eta=1.0$;~\cite{schunter2018}), our aim is to gradually frustrate this global crystalline structure to better understand and characterize elastogranular behaviors in systems where the granular medium acts more like an amorphous solid~\cite{mermin1968}.
\begin{figure}
\resizebox{1.0\columnwidth}{!}{\includegraphics{schematic2.pdf}}
\caption{View of experimental set-up. The arclength of a planar elastica is quasi-statically increased by an amount $\Delta$ within granular monolayers at varying diameter ratio $\eta$ (here $\eta=1.9$) and initial prepared packing fraction $\phi_0$. The frames shown above are at the same injected arclength value $\Delta/L_0=0.43$.}
\label{fig1}
\end{figure}
\section{The Elastogranular Length Scale}
\label{sec:3}
To better understand the role of bidispersity within elastogranular phenomena, we begin by comparing systems representing two extremes: monodisperse arrays (where $\eta=1.0$) and arrays with moderate to large bisdispersity ($\eta > 1.4$; here $\eta=1.9$)~\cite{kurita2010}. A wide range of experimental packing fractions $\left(0\leq\phi_0\lesssim0.89\right)$ were prepared from which to sample. In general, when the initial packing fraction of an array is below the jamming threshold $\left(\phi_0<\phi_j\right)$, the dominant system effects originate with the deforming elastica.
In the present experiments, the evolution of the granular contact network, (namely, the reconfigurations taking place due to the lengthening elastica) will not be completely random, as the preparation history of individual packings can allow for small locally crystalline regions to form~\cite{schreck2011,estrada2016,o'hern2003,hamanaka2007,vanhecke2009}. However, given the lack of thermal excitations and the quasi-static nature of arclength injection, we observe no preferential migrations ({\em i.e.} phase separations) of grains towards specific areas of the system~\cite{vaagberg2013,schnautz2005,hamanaka2006} or dominating behavior of one grain size over another within the experimental packings~\cite{kurita2010}.
In the nascent stages of an experiment when $\Delta$ just begins to increase, the thin structure will buckle into a single side of the enclosure (either area $B_1$ or $B_2$), eventually adopting a mode one postbuckling configuration defined by a primary amplitude $A_0$ and the critical, average half-wavelength $\lambda_c$ measured at low $\Delta/L_0$~\cite{schunter2018}. In pre-jamming arrays $\left(\phi_0<\phi_j\right)$, the elastica will displace grains as additional arclength enters, changing the underlying area available to the granular medium and causing a gradual increase in packing fraction on the side in which $A_0$ grows [Figs. 1(i)-1(iii)]. This side eventually reaches a jammed state at a critical packing fraction $\phi_j$. Critical packing fractions are determined in separate experiments for each value of $\eta$ by placing grains within a rectangular enclosure (as in Fig.~\ref{fig1}) with an adjustable internal area, made possible by a single, rigid actuating wall of length $2W_0$. Taking force measurements with a load-cell (Interface) mounted to the fixed wall opposite the actuating boundary, we determine $\phi_j$ as the point at which the reaction force within the granular array is observed to increase rapidly under continuous quasi-static compression. (See videos S2, S3 and the Supplemental Material in Ref.~\cite{schunter2018} for movies of these compression tests). From these experiments, we find $\phi_j=\{0.8305\pm0.0135,0.8277\pm0.0134,0.7950\pm0.0110\}$ for $\eta=[1.0,1.2,1.9]$, respectively. At comparable initial packing fractions ($\phi_0<\phi_j$), Fig.~\ref{fig2}(a) suggests that even moderate to large ($\eta > 1.4$;~\cite{kurita2010}) bidispersity has little effect on a system's behavior below and on approach to $\phi_j$. Nearly equivalent behaviors are observed between experiments where $\eta=1.9$ (light blue circles) and $\eta=1.0$ (yellow diamonds;~\cite{schunter2018}).
To induce jamming in monodisperse packings, we showed that the critical injection length $\Delta_c$, or the {\em elastogranular} length of this system, can be determined by approximating the area removed from one side of the array as being triangular in shape [inset, Fig.~\ref{fig2}(b)]~\cite{schunter2018}. The primary amplitude is connected to the wavelength by the so--called {\em slaving} condition~\cite{davidovitch2012,paulsen2018}, {\em i.e.} $A_0/\lambda \sim (\Delta/\pi^2L)^{1/2}$, which provides a convenient way to approximate the area consumed by the elastica's deformation as a function of injected arclength. For a fixed number of grains, the critical injection length is found by comparing the area consumed by the elastica with the area that needs to be removed to induce jamming, yielding the relation~\cite{schunter2018}:
\begin{equation}
\label{1}
\frac{\Delta_c}{L_0}\sim \left(\frac{L_0}{\lambda_c}\right)^4\left(1-\frac{\phi_0}{\phi_j}\right)^2\,.
\end{equation}
We plot this equation in Fig.~\ref{fig2}(b) using values of $\Delta_c$ experimentally measured in arrays where $\eta=1.9$ (light blue circles), along with data from~\cite{schunter2018} (yellow diamonds) for arrays where $\eta=1.0$. Below jamming, it seems that individual packings of bidisperse grains follow the same scaling law as for monodisperse grains. These results, along with the evolution of $\phi$ as a function of $\Delta/L_0$ in Fig.~\ref{fig2}(a), indicate that below jamming the elastica is sensitive to the initial packing fraction~\cite{schunter2018}, but insensitive to variability in the grain size ratio.
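For completeness, Eq.~(\ref{1}) is trivial to evaluate numerically; the sketch below (ours, with the undetermined prefactor set to unity and an illustrative value of $L_0/\lambda_c$) shows the expected sweep with $\phi_0$:
\begin{verbatim}
import numpy as np

# Sketch (prefactor assumed = 1): critical injection length
# Delta_c/L_0 ~ (L_0/lambda_c)^4 * (1 - phi_0/phi_j)^2, Eq. (1).
def delta_c_over_L0(phi_0, phi_j, L0_over_lambda_c, prefactor=1.0):
    return prefactor * L0_over_lambda_c**4 * (1.0 - phi_0 / phi_j)**2

phi_0 = np.linspace(0.5, 0.82, 9)      # packing fractions below jamming
print(delta_c_over_L0(phi_0, 0.8305, 3.0))
\end{verbatim}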
\begin{figure}
\resizebox{1.0\columnwidth}{!}{\includegraphics{polyDispOverTime_2.pdf}}
\caption{(a) Comparison of pre-jamming behavior for both $\eta=1.9$ (light blue circles) and $\eta=1.0$ (yellow diamonds;~\cite{schunter2018}) arrays. The dashed-dotted (dashed) lines correspond to the critical jamming packing fractions $\phi_j$ for $\eta=1.9$ ($\eta=1.0$) arrays, respectively. (b) The elastogranular length scale $\Delta_c$ is observed to hold in bidisperse arrays ($\eta=1.9$, light blue circles) for $\phi_0<\phi_j$. The dashed line is Eq. $(\textbf{\ref{1}})$, plotted with a slope of 1/2 and the yellow diamonds correspond to experiments with $\eta=1.0$ where the elastogranular length scale was observed, originally discussed in Ref.~\cite{schunter2018}.}
\label{fig2}
\end{figure}
\section{Elastogranular Frustration}
\label{sec:4}
In contrast to the situation below jamming, the behavior of packings prepared above $\phi_j$ tends to be dictated by the highly dense granular array~\cite{schunter2018}. In the monodisperse case, the elastica is seen to localize deformations within a diamond-shaped ``lozange" region, the boundary contour of which is set by the tendency of the grains towards hexagonal packing, and the length of which is governed by a characteristic granular length scale $\lambda_c$, reflecting the extent to which forces originating with the thin structure may diffuse out into the medium~\cite{schunter2018}. In these configurations the elastica is kinematically frustrated. With the introduction of bidispersity, where $\eta>1.0$, we begin to observe a qualitative change in the way curvature $\kappa(s)$ localizes along the curvilinear coordinate $s$ of the arclength of the lengthening elastica.
\begin{figure*}
\centering
\begin{minipage}[b]{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{UbOverTime.pdf}
\end{minipage}
\begin{minipage}[b]{0.45\linewidth}
\centering
\includegraphics[width=6cm,height=3.25cm]{UbU0.pdf}
\vspace{\baselineskip}
\vspace{.15cm}
\includegraphics[width=5.5cm,height=2.25cm]{angles.pdf}
\end{minipage}
\caption{(a) Comparison of bending energy values. The evolution of the bending energy $U_{\text{b}}$ as a function of injected arclength is shown for the free antisymmetric elastica (red squares), $\eta=1.0$ (yellow diamonds), $\eta=1.2$ (dark blue triangles), and $\eta=1.9$ (light blue circles), where $\phi_0>\phi_j$. The solid line is the numerical solution for the ideal case of a doubly-clamped, antisymmetric elastica when rescaled by the appropriate beam material and geometric properties. (b) For a fixed injection length $L/L_0=1.25$ [dashed vertical line, (a)] in arrays with equal $\phi_0$, the dimensionless bending energy, normalized by the free-case bending energy $U_0$ [red squares, (a)], is observed to decrease as the diameter ratio $\eta$ becomes larger. (c) Defining the inclination angles $\{\theta,\beta\}$, inflection point $(p)$, and clamped-roller boundary conditions, along with the numerically calculated elastica profile (at fixed injection length $L/L_0=1.25$).}
\label{fig3}
\end{figure*}
We quantify this gradual change from over confinement by looking at the elastica's bending energy~\cite{landau1986}: $U_{\text{b}}=\frac{\text{B}}{2}\int_{0}^{L}\kappa(s)^2 \ \text{d}s$, the energy required to bend a structure characterized by a bending rigidity $B=EI$, where $E$ is the material's elastic modulus, and $I$ is the second moment of area (given by $I=h^3b/12$ for a beam of thickness $h$ and width $b$ as measured out of the plane in Fig.~\ref{fig1}). Above the jamming threshold, the dense granular arrays act as an ``effective" elastic medium, confining the elastica with approximately equal pressure contributions from side $B_1$ and $B_2$. Due to this assumed average force balance within the granular bed, we expect the buckling geometry that minimizes bending energy in the beam to be equivalent to the bending energy of an antisymmetric, doubly-clamped elastica. The governing differential equation for this problem is given by:
\begin{equation}
\label{elastica}
\psi''(s)+\gamma^2\sin \psi(s) = 0 \ \ \ \ \forall \ s \in [0, L/2],
\end{equation}
where $\gamma^2=(P^2+R^2)^{1/2}/B$, where $P$ and $R$ are the axial and transverse reaction forces, respectively, at the clamped ends~\cite{bigoni2015}. The angle $\psi(s)=\theta(s) + \beta$ defines the tangent at $s$ relative to $\beta$, which is the inclination of $P$ and $R$ with respect to the axial direction [Fig.~\ref{fig3}(c)]~\cite{bigoni2015}. Equation~\ref{elastica} is valid from $0\leq s \leq L/2$, as its symmetry about the inflection point $(p)$ at $L/2$ reduces the problem to two equivalent clamped--pinned beams.
The boundary conditions for this reduced problem become $\psi(0)=\beta$ and $\psi(L/2)=0$, and an additional (global) kinematic constraint: $\int_0^{L/2} \sin(\psi(s)-\beta) ~\text{d}s$, ensures the vanishing of transverse displacements at the clamped end and the inflection point. Solutions to this system of equations are highly non--trivial (see~\cite{bigoni2015} for a clear and detailed explanation), and result in parametric equations for the in--plane displacements, $x(s)$ and $y(s)$. For an elastica that is elongating between two fixed ends, we can multiply the parametric equations by a scalar $\Gamma$ that represents an increment in ``growth'' of the curve~\cite{schunter2018}, such that
\begin{subequations}
\begin{align}
\label{x}
x_g(s) &= \Gamma x(s)\,,\\
\label{y}
y_g(s) &= \Gamma y(s)\,.
\end{align}
\end{subequations}
Unlike the symmetric elastica deformation described in~\cite{schunter2018}, the antisymmetric case depends on two related angles $\psi$ and $\beta$. We used Newton's method for numerical root finding in the commercial software \textsc{Mathematica} to determine $\beta = f(\psi)$. The injected length $\Delta$ is found by numerically integrating the parametric equations~\ref{x} and~\ref{y} for a range of $\Gamma$-values. Finally, the bending energy of these curves is found by numerically integrating the square of the arc curvature [solid line, Fig.~\ref{fig3}(a)]. By measuring the experimentally observed bending energy in the elastica for representative runs at each value of $\eta$ investigated, we can utilize these numerical results to determine the extent to which variations in $\eta$ may drive the elastica towards this assumed minimal energy configuration [Fig.~\ref{fig3}].
To determine the bending energy $U_{\text{b}}$ for experimental runs, the elastica's deformation profile is extracted from each frame of an image sequence and subsequently discretized using custom image processing code written in MATLAB. Fitting polygons to these discrete points, we can obtain a measurement of the analytic curvature at each point, quantities which are then squared and summed over the elastica's arclength. Indeed, in Fig.~\ref{fig3}(a) we observe a gradual decrease in $U_{\text{b}}$ as $\eta$ is made larger, with $U_{\text{b}}[\eta=1.9]$ (light blue circles) less than $U_{\text{b}}[\eta=1.2]$ (dark blue triangles), which in turn is less than $U_{\text{b}}[\eta=1.0]$ (yellow diamonds).
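The curvature-to-energy step can be summarised by the following sketch (ours; the experimental analysis used custom MATLAB image-processing code as described above): given the discretized profile $(x_i, y_i)$, the signed curvature is computed by finite differences and the bending energy by quadrature.
\begin{verbatim}
import numpy as np

# Sketch: bending energy U_b = (B/2) * int kappa(s)^2 ds from a
# discretized elastica profile (x, y), using finite-difference curvature
# kappa = (x' y'' - y' x'') / (x'^2 + y'^2)^(3/2).
def bending_energy(x, y, B):
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    kappa = (dx * ddy - dy * ddx) / (dx**2 + dy**2)**1.5
    ds = np.sqrt(dx**2 + dy**2)          # local arclength per sample
    return 0.5 * B * np.sum(kappa**2 * ds)
\end{verbatim}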
\begin{figure*}
\resizebox{1.0\textwidth}{!}{\includegraphics[height=7cm]{dispFields.pdf}}
\caption{Granular displacement fields at equivalent initial packing fraction ($\phi_0=0.89$) and low injected arclength ($\Delta/L_0\approx0.014$). As the diameter ratio $\eta=1.0$ (a) increases to $\eta=1.2$ (b), $\eta=1.9$ (c), the mobility of grains within the monolayer begins to increase. The crystalline ordering characteristic of $\eta=1.0$ packings (I), which restricts granular motion to small areas of the array, is disrupted by the introduction of bidispersity $[\eta=1.2; (II)]$. At the largest experimentally tested value of $\eta=1.9$, the granular displacement field, no longer confined, is observed along the entire length of the elastica (III). We have outlined one respective size of grain radii used in preparing a specific $\eta$-valued array in (b),(c) to aid visualization.}
\label{fig4}
\end{figure*}
To experimentally verify the model, we also performed experiments for the idealized case of a free antisymmetric elastica (with bending energy $U_0$) by artificially pinning the midpoint at a given current length $L$, imaging, and analyzing as in the previous experiments [red squares, Fig.~\ref{fig3}(a)]. The numerical and experimental values of the antisymmetric elastica, which serves as our point of comparison by defining an effective continuum limit (where $\eta\gg1$) for more ``fluid-like'' arrays, are seen to be in excellent agreement. We expect that this continuum limit would also be reached at fixed $\eta$-values if the grain sizes were decreased relative to the elastica thickness. This question could be addressed in a subsequent study.\\
\indent The bidisperse arrays used here lack the global crystalline order found in the $\eta=1.0$ case, allowing for highly localized regions of curvature in the elastica to relax within the granular medium as opposed to being confined within a characteristic region. At comparable injected arclength $\Delta$ and $\phi_0$, along with decreases in bending energy [Fig.~\ref{fig3}(b)], this relaxation manifests as a difference in the mobility of the surrounding granular array. We quantify this by tracking the motion of individual grains within arrays at each $\eta$-value tested [Figs. 4(I)-4(III)]. The elastica must effectively ``fracture'' the more solid-like arrays prepared at $\eta \in [1.0, 1.2]$ in order for additional arclength to enter [Figs. 4(I)-4(II)], resulting in a highly localized granular displacement field restricted to a small area of the system. In monodisperse granular arrays, these high-mobility regions have been observed to occur in close proximity to any disruptions of hexagonal ordering~\cite{schunter2018,sausset2008}. This behavior contrasts with what we observe at the largest experimentally tested value of $\eta=1.9$: the granular displacement field is no longer confined to a small, characteristic region and grain motion is observed along the entire extent of the elastica's arclength [Fig.~\ref{fig4}(III)].
\section{Discussion}
\label{sec:5}
It is interesting to note that by introducing geometric frustration (via bidispersity) into the granular medium, we were able to alleviate some of the kinematic frustration present in the confined elastica (observed to adopt a lower energy configuration in Fig.~\ref{fig3}). We speculate the existence of an intermediate range of $\eta$-values and elastica bending rigidity $B$, such that the relative effect each element has on the system balances the other. In this intermediate range, modifications to the rigidity of the thin elastic structure will hypothetically have the same effect as an adjustment to $\eta$. In practice, there are certain physics and engineering scenarios where it may be easier to change the characteristic dimensions of either the elastic structure or the granular medium. Additionally, simple experimental models such as these of coupled, frustrated systems may provide a novel means of investigating equipartition in nonlinear systems~\cite{davidovitch2019}. A thorough understanding of this energy balancing and how elastogranular systems can be ``tuned'' will have direct applications to the fields of soil mechanics and civil engineering, and in the development of burrowing robots or steerable needles.
\begin{acknowledgements}
We are grateful for financial support from the National Science Foundation CMMI--CAREER through Mechanics of Materials and Structures (No. 1454153).
\end{acknowledgements}
\section*{Compliance with ethical standards}
\small{\textbf{Conflict of interest} The authors declare that they have no conflict of interest.}
\bibliographystyle{spphys}
\section{Introduction}
Digital artists have increasingly adopted techniques that use deep learning as part of their repertoire. One such technique is DeepDream \citep{DeepDream}, which uses a pre-trained convolutional architecture and updates the image to force the network to "hallucinate" objects. While this technique has been perfected for images, a popular way of applying DeepDream to videos is to directly apply it frame-by-frame. A significant drawback of this approach is that it causes a flickering effect due to the lack of temporal consistency between frames, often detracting from the overall aesthetic appeal of the end result. Another drawback of DeepDream is that controlling the objects hallucinated in the input image is often done by trial-and-error, and is thus not straightforward.
In this work, we describe two simple modifications to the traditional DeepDream formulation to improve its controllability and applicability to videos. The first enhances the degree of control when applying DeepDream to images by improving the ability to hallucinate specific classes by updating the image to maximize the network's final classification layer's logits, as opposed to the intermediate convolutional layers. The second improves DeepDream's applicability to videos, resolving the flickering issue by drawing inspiration from recent work in style transfer \citep{styleTransfer,styleTransferVideo} and leveraging temporal consistency loss terms.
\begin{figure*}[!htb]
\centering
\includegraphics[width=.32\textwidth,height=\textheight,keepaspectratio]{images/george/frame_0001.jpg}
\includegraphics[width=.32\textwidth,height=\textheight,keepaspectratio]{images/george/frame_0025.jpg}
\includegraphics[width=.32\textwidth,height=\textheight,keepaspectratio]{images/george/frame_0050.jpg}
\includegraphics[width=.32\textwidth,height=\textheight,keepaspectratio]{images/george/frame_0075.jpg}
\includegraphics[width=.32\textwidth,height=\textheight,keepaspectratio]{images/george/frame_0100.jpg}
\includegraphics[width=.32\textwidth,height=\textheight,keepaspectratio]{images/george/frame_0120.jpg}
\caption{Controlled DeepDream with temporal consistency and scene change detection applied on frames (from top-left) 1, 25, 50, 75, 100 and 120 in a 120 frame clip of size $480\times360$ with 3 continuous takes. The \texttt{cockroach} ImageNet class hallucinated. Best viewed zoomed in and in colour. The complete video is available \link{\href{https://drive.google.com/open?id=1SkYCJK4tmkZW7GzrMTQNt2PTkhCSz8T7}{here}}.}
\label{fig:george}
\end{figure*}
\section{Method}
\textbf{DeepDream} The original DeepDream framework works with a model of the Inception \citep{Inception} architecture fully trained on ImageNet \citep{imagenet}. The loss is defined as $\mathcal{L}_{l, a} = -\norm{\mathcal{F}_{l,a}(\mathcal{I})}^2_F$, where the function $\mathcal{F}_{l, a}(\mathcal{I})$ yields the output of the $l^{th}$ layer's $a^{th}$ feature map after applying the network to the input image $\mathcal{I}$, and $\norm{.}^2_F$ represents the squared Frobenius norm. The original image is then updated to minimize this loss, for example, using gradient descent.
\textbf{Controlled DeepDream} To control DeepDream, we propose maximizing square of the pre-softmax activations (i.e., the logits) $\mathcal{F}_{c}(\mathcal{I})$ of the class $c$ to be hallucinated, as opposed to using an intermediate feature map. Correspondingly, the loss to be minimized becomes $\mathcal{L}_{c} = - \mathcal{F}_{c}(\mathcal{I})^2$.
A drawback of this formulation is that the loss is no longer applied to a fully convolutional network: the input image must be of size $224\times224$. To overcome this, we apply controlled DeepDream on uniformly randomly sampled tiles of size $224\times224$: we describe this tiling in detail in Section \ref{sec:tile}.
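A minimal sketch of the controlled update on a random tile is given below (PyTorch-style code of our own, not the exact implementation; \texttt{model} is assumed to map a $224\times224$ batch to pre-softmax logits, and the learning rate is illustrative):
\begin{verbatim}
import torch

# Sketch: one controlled-DeepDream step on a randomly sampled 224x224
# tile, minimizing L_c = -logit_c(tile)^2 by gradient descent on the pixels.
def controlled_dream_step(image, model, target_class, lr=0.05, tile=224):
    _, _, h, w = image.shape
    top = torch.randint(0, h - tile + 1, (1,)).item()
    left = torch.randint(0, w - tile + 1, (1,)).item()
    patch = image[:, :, top:top + tile, left:left + tile].clone()
    patch.requires_grad_(True)
    loss = -model(patch)[:, target_class].pow(2).sum()
    loss.backward()
    with torch.no_grad():
        image[:, :, top:top + tile, left:left + tile] -= lr * patch.grad
    return image
\end{verbatim}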
\textbf{Temporal Consistency in DeepDream}
Applying DeepDream individually to each frame in a video causes a flickering effect. This occurs because of the lack of a constraint forcing adjacent frames to be consistent with each other when patterns and objects are hallucinated. Inspired by recent work on applying neural style transfer to videos, we propose the application of similar short-term and long-term temporal consistency losses. As part of this process, we find that certain hyperparameter sets yield artistically interesting and visually distinct results: we describe these in Section \ref{sec:altvisual}. The losses are similar to those used when applying style transfer to videos \citep{styleTransferVideo}, and are described below.
Given the $i^{th}$ frame and the $(i-j)^{th}$ frame, let $\mathbf{x^{(i)}}$ be the output image, and let $\mathbf{w^{(i-j, i)}}$ be $\mathbf{x^{(i-j)}}$ mapped to $\mathbf{x^{(i)}}$ using the optical flow between the two input frames. Further, let $\mathbf{c^{(i-j, i)}}$ represent the temporal consistency between the two frames (in the form of a boolean mask), indicating the presence of de-occlusions and motion boundaries, as described in \citep{styleTransferVideo,sundaram2010dense}.
The \textbf{long-term temporal consistency loss} is:
\begin{align*}
\mathcal{L}_{lt} &= \frac{1}{D} \sum_{j\in J : i-j\geq1} \sum_{k=1}^{D} c_{l}^{(i-j, i)}[k] \cdot (x^{(i)}[k] - w^{(i-j, i)}[k])^2 \\
\mathbf{c}_{l}^{(i-j, i)} &= \max \left( \mathbf{c}^{(i-j, i)} - \sum_{k\in J : i-k > i-j} \mathbf{c}^{(i-k, i)}, \mathbf{0} \right)
\end{align*}
where $D$ is the dimensionality of the input image and $J$ is the set of indices to be subtracted from the current frame index $i$, i.e., the set of previous frames to be taken into account. The \textbf{short-term temporal consistency loss} $\mathcal{L}_{st}$ is obtained by setting $J=\{1\}$.
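A sketch of the long-term loss is shown below; the warped previous outputs and consistency masks are assumed to be precomputed from optical flow as in \citep{styleTransferVideo}, and the helper name is illustrative.
\begin{verbatim}
import torch

def long_term_loss(x_i, warped_prev, masks):
    # warped_prev[m] is x^(i-j) warped to frame i and masks[m] the boolean
    # consistency mask c^(i-j,i); both are ordered by increasing offset j in J.
    D = x_i.numel()
    loss = x_i.new_zeros(())
    used = torch.zeros_like(x_i)
    for w, c in zip(warped_prev, masks):
        c_l = torch.clamp(c.float() - used, min=0.0)  # subtract masks of more recent frames
        loss = loss + (c_l * (x_i - w) ** 2).sum() / D
        used = used + c.float()
    return loss
\end{verbatim}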
\textbf{LucidDream} The final LucidDream update, which yields temporally consistent videos and improves controllability of the hallucinated class, is then given by $\mathcal{L} = \alpha \mathcal{L}_c + \beta \mathcal{L}_{st} + \gamma \mathcal{L}_{lt}$, where $\alpha$, $\beta$ and $\gamma$ are hyperparameters. Each frame in the video is then updated to minimize this loss term.
\section{A Simple Setting for Studying Bias}
\label{sec:cifar}
We begin by constructing a novel benchmark for studying bias mitigation in visual recognition models. This setting makes it possible to demonstrate that the presence of spurious correlations in training data severely degrades the performance of current models, even if learning such spurious correlations is sub-optimal for the target task.
\smallsec{CIFAR-10S Setup} To do so, we design a benchmark that erroneously correlates target classification decisions (what object category is depicted in the image) with an auxiliary attribute (whether the image is color or grayscale).
We introduce CIFAR-10 Skewed (CIFAR-10S), based on CIFAR-10 \cite{krizhevsky_learning_2009}, a dataset with 50,000 $32 \times 32$ training images evenly distributed between 10 object classes. In CIFAR-10S, each of the 10 original classes is subdivided into two new domain subclasses, corresponding to color and grayscale domains within that class. Per class, the 5,000 training images are split 95\% to 5\% between the two domains; five classes are 95\% color and five classes are 95\% grayscale. The total number of images allocated to each domain is thus balanced. For testing, we create two copies of the standard CIFAR-10 test set: one in color ({\sc Color}) and one in grayscale ({\sc Gray}). These two datasets are considered separately, and only the 10-way classification decision boundary is relevant.
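The skewing procedure can be summarized by the following sketch, assuming NumPy arrays of training images and labels; the choice of which five classes are predominantly color is arbitrary here, and the function name is illustrative.
\begin{verbatim}
import numpy as np

def make_cifar10s(images, labels, color_classes=(0, 1, 2, 3, 4),
                  skew=0.95, seed=0):
    # images: (N, 32, 32, 3) uint8 array; labels: (N,) array of class indices.
    rng = np.random.RandomState(seed)
    out = images.astype(np.float32)
    domain = np.zeros(len(labels), dtype=int)          # 0 = color, 1 = grayscale
    for c in range(10):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        frac = skew if c in color_classes else 1.0 - skew
        gray_idx = idx[int(frac * len(idx)):]          # images converted to grayscale
        lum = out[gray_idx] @ np.array([0.299, 0.587, 0.114])
        out[gray_idx] = lum[..., None].repeat(3, axis=-1)
        domain[gray_idx] = 1
    return out, domain
\end{verbatim}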
\smallsec{Discussion} We point out upfront that the analogy between color/grayscale and gender domains here wears thin: (1) we consider the two color/grayscale domains as purely binary and disjoint whereas the concept of gender is more fluid; (2) a color/grayscale domain classifier is significantly simpler to construct than a gender recognition model; (3) the transformation between color and grayscale images is linear whereas the manifestation of gender is much more complex.
Nevertheless, we adopt this simple framework to distill down the core algorithmic exploration before diving into the more complex setups in Sec.~\ref{sec:real-world}. This formulation has several compelling properties: (1) we can control the correlation synthetically by changing images from color to grayscale, maintaining control over the distribution, (2) we can guarantee that color images contain strictly more information than grayscale images, maintaining control over the discriminative cues in the images, and (3) unlike other datasets, there is no fairness/accuracy trade-off since the two are complementary. Furthermore, despite its simplicity, this setup still allows us to study the behavior of modern CNN architectures.
\smallsec{Key Issue} We ground the discussion by presenting one key result that is counter-intuitive and illustrates why this very simple setting is reflective of a much deeper problem. We train a standard ResNet-18~\cite{he_deep_2015} architecture with a softmax and cross-entropy loss for 10-way object classification. Training on the skewed CIFAR-10S dataset and testing on {\sc Color} images yields $89.0\pm 0.5\%$ accuracy.\footnote{We report the mean across 5 training runs (except for CelebA in Sec.~\ref{sec:real-world-celeba}). Error bars are 2 standard deviations (95\% confidence interval).} This may seem like a reasonable result until we note that a model trained on an all-grayscale training set (so never having seen a single color image!) yields a significantly higher $93.0\%$ accuracy when tested out-of-domain on {\sc Color} images.
This disparity occurs because the model trained on CIFAR-10S learned to correlate the presence of color and the object classes. When faced with an all-color test set, it infers that it is likely that these images come from one of the five classes that were predominantly colored during training (Fig.~\ref{fig:confusion}). In a real world bias setting where the two domains correspond to gender and the classification targets correspond to activities, this may manifest itself as the model making overly confident predictions of activities traditionally associated with female roles on images of women~\cite{zhao_men_2017}.
\section{Conclusions}
\vspace{-0.02in}
We provide a benchmark and a thorough analysis of bias mitigation techniques in visual recognition models. We draw several important algorithmic conclusions, while also acknowledging that this work does not attempt to tackle many of the underlying ethical fairness questions. What happens if the domain (gender in this case) is non-discrete? What happens if the imbalanced domain distribution is not known at training time -- for example, if the researchers failed to identify the undesired correlation with gender? What happens in downstream tasks where these models may be used to make prediction decisions? We leave these and many other questions to future work.
\smallsec{Acknowledgements} This work is partially supported by the National Science Foundation under Grant No. 1763642, by Google Cloud, and by the Princeton SEAS Yang Family Innovation award. Thank you to Arvind Narayanan and to members of Princeton's Fairness in AI reading group for great discussions.
\section{Introduction}
Computer vision models learn to perform a task by capturing relevant statistics from training data. These statistics range from low-level information about color or composition (zebras are black-and-white, chairs have legs) to contextual or societal cues (basketball players often wear jerseys, programmers are often male). Capturing these statistical correlations is helpful for the task at hand: chairs without legs are rare and programmers who are not male are rare, so capturing these dominant features will yield high accuracy on the target task of recognizing chairs or programmers. However, as computer vision systems are deployed at scale and in a variety of settings, especially where the initial training data and the final end task may be mismatched, it becomes increasingly important to both \emph{identify} and develop strategies for \emph{manipulating} the information learned by the model.
\smallsec{Societal Context} To motivate the work of this paper, consider one such example of social bias propagation: AI models that have learned to correlate activities with gender~\cite{bolukbasi2016man,Caliskan2017science,zhao_men_2017,anne2018women}. Some real-world activities are more commonly performed by women and others by men. This real-world gender distribution skew becomes part of the data that trains models to recognize or reason about these activities.\footnote{Buolamwini and Gebru~\cite{gender-shades-2018} note that collecting a more representative training dataset should be the first step of the solution. That is true in the cases they consider (where people with darker skin tones are dramatically and unreasonably undersampled in datasets) but may not be a viable approach to cases where the datasets accurately reflect the real-world skew.} Naturally, these models then learn discriminative cues which include the gender of the actors. In fact, the gender correlation may even become \emph{amplified} in the model, as Zhao et al.~\cite{zhao_men_2017} demonstrate. We refer the reader to e.g., \cite{algorithms-of-oppression} for a deeper look at these issues and their impact.
\smallsec{Study Objectives and Contributions} In this work, we set out to provide an in-depth look at this problem of training visual classifiers in the presence of spurious correlations. We are inspired by prior work on machine learning fairness~\cite{zhang2018mitigating,zhao_men_2017,ryu2017improving,alvi2018turning} and aim to build a unified understanding of the proposed techniques. Code is available at \url{https://github.com/princetonvisualai/DomainBiasMitigation}.
We begin by proposing a simple but surprisingly effective benchmark for studying the effect of data bias on visual recognition tasks. Classical literature on mitigating bias generally operates on simpler (often linear) models~\cite{dwork2012fairness,zemel2013learning, leino2018feature}, which are easier to understand and control; only recently have researchers begun looking at mitigating bias in end-to-end trained deep learning models~\cite{ganin2015unsupervised,anne2018women,ryu2018inclusivefacenet, grover2019bias, wang2019balanced, kim2019learning, li2019repair, quadrianto2019discovering, wang2019racial, grover2019fair}. Our work helps bridge the gap, proposing an avenue for exploring mitigating bias in Convolutional Neural Network (CNN) models within a simpler and easier-to-analyze setting than with a fully-fledged black-box system. By utilizing dataset augmentation to introduce controlled biases, we provide simple and precise targets for model evaluation (Sec.~\ref{sec:cifar}).
Using this benchmark, we demonstrate that the presence of spurious bias in the training data severely degrades the accuracy of current models, even when the biased dataset contains strictly more information than an unbiased dataset. We then provide a thorough comparison of existing methods for bias mitigation, including domain adversarial training~\cite{tzeng2015simultaneous,ryu2017improving,alvi2018turning}, Reducing Bias Amplification~\cite{zhao_men_2017}, and domain conditional training similar to~\cite{ryu2018inclusivefacenet}. To the best of our knowledge, no such comparison exists currently as these methods have been evaluated on different benchmarks under varying conditions and have not been compared directly. We conclude that a domain-independent approach inspired by~\cite{dwork2012fairness} outperforms more complex competitors (Sec.~\ref{sec:cifar-baselines}).
Finally, we validate our findings in more realistic settings. We evaluate on the CelebA~\cite{liu2015faceattributes} benchmark for attribute recognition in the presence of gender bias (Sec.~\ref{sec:real-world}). We demonstrate that our domain-independent training model successfully mitigates real-world gender bias.
\section{Benchmarking Bias Mitigation Methods}
\label{sec:cifar-baselines}
Grounded in the task at hand (training recognition models in the presence of spurious correlations), we perform a thorough benchmark evaluation of bias mitigation methods. Many of these techniques have been proposed in the literature for this task; notable exceptions include prior shift inference for bias mitigation (Sec.~\ref{sec:cifar-baselines-discriminative}), the distinction between discriminative and conditional training in this context (Sec.~\ref{sec:cifar-baselines-independent}), and the different inference methods for conditional training from biased data (Sec.~\ref{sec:cifar-baselines-independent}). Our findings are summarized in Table~\ref{table:cifar}. In Sec.~\ref{sec:real-world} we demonstrate how our findings on CIFAR-10S generalize to real-world settings.
\begin{figure}
\centering
\includegraphics[width=.6\linewidth]{pics/confusion_matrix_new.png}
\caption{Confusion matrix of a ResNet-18~\cite{he_deep_2015} classifier trained on the skewed CIFAR-10S dataset. The model has learned to correlate the presence of color with the five object classes (in bold) and predominantly predicts those classes on the all-color test set.}
\label{fig:confusion}
\end{figure}
\input{table_cifar.tex}
\smallsec{Setup} To perform this analysis, we utilize the CIFAR-10S domain correlation benchmark of Sec.~\ref{sec:cifar}. We assume that at training time the domain labels are available (e.g., we know which images are color and which are grayscale in CIFAR-10S, or which images correspond to pictures of men or women in the real-world setting). All experiments in this section build on the ResNet-18~\cite{he_deep_2015} architecture trained on the CIFAR-10S dataset, with $N=10$ object classes and $D=\{\mathrm{color}, \> \mathrm{grayscale}\}$. The models are trained from scratch on the target data, removing any potential effects from pretraining. Unless otherwise noted, the models are trained for \( 200 \) epochs, with SGD at a learning rate of \( 10^{-1}\) with a factor of \( 10 \) drop-off every \( 50 \) epochs, a weight decay of \( 5 \mathrm{e}{-4} \), and a momentum of 0.9. During training, the image is padded with 4 pixels on each side and then a $32\times32$ crop is randomly sampled from the image or its horizontal flip.
\smallsec{Evaluation} We consider two metrics: mean per-class per-domain accuracy (primary) and bias amplification of~\cite{zhao_men_2017}. The test set is fully balanced across domains, so mean accuracy directly correlates with the model's ability to avoid learning the domain correlation during training. We include the mean bias metric for comparability with the literature, defined as
\begin{equation}
\label{eq:bias}
\frac{1}{|C|}\sum_{c \in C} \frac{\max(\mathrm{Gr}_c, \mathrm{Col}_c)}{\mathrm{Gr}_c + \mathrm{Col}_c} - 0.5,
\end{equation}
where $\mathrm{Gr}_c$ is the number of grayscale test set examples predicted to be of class $c$, while $\mathrm{Col}_c$ is the same for color.
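For reference, the metric can be computed as in the following sketch, where \texttt{preds} are predicted class indices on the domain-balanced test set and \texttt{domains} indicates color (0) or grayscale (1); the helper name is illustrative.
\begin{verbatim}
import numpy as np

def mean_bias(preds, domains, num_classes=10):
    # Mean over classes of max(Gr_c, Col_c) / (Gr_c + Col_c) - 0.5.
    biases = []
    for c in range(num_classes):
        col = np.sum((preds == c) & (domains == 0))
        gr = np.sum((preds == c) & (domains == 1))
        if gr + col > 0:
            biases.append(max(gr, col) / (gr + col) - 0.5)
    return float(np.mean(biases))
\end{verbatim}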
\subsection{Strategic Sampling}
\label{sec:cifar-baselines-sampling}
The simplest approach is to strategically sample with replacement to make the training data `look' balanced with respect to the class-domain frequencies. That is, we sample rare examples more often during training, or, equivalently, utilize non-uniform misclassification cost~\cite{Elkan01thefoundations,BickelJMLR09}. However, as detailed in~\cite{weiss2007cost}, there are significant drawbacks to oversampling: (1) seeing exact copies of the same example during training makes overfitting likely, (2) oversampling increases the number of training examples without increasing the amount of information, which increases learning time.
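A minimal sketch of the corresponding sampling weights is given below; these per-example weights would be fed to a sampler that draws with replacement, and the function name is illustrative.
\begin{verbatim}
import numpy as np

def balanced_sample_weights(labels, domains):
    # Per-example weights that make (class, domain) cells look balanced
    # when sampling with replacement.
    labels = np.asarray(labels)
    domains = np.asarray(domains)
    weights = np.zeros(len(labels), dtype=np.float64)
    for c in np.unique(labels):
        for d in np.unique(domains):
            cell = (labels == c) & (domains == d)
            if cell.sum() > 0:
                weights[cell] = 1.0 / cell.sum()   # rare cells get larger weight
    return weights / weights.sum()
\end{verbatim}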
\smallsec{Experimental Evaluation} The baseline model first presented in Sec.~\ref{sec:cifar} is a ResNet-18 CNN with a softmax classification layer, which achieves $88.5\pm0.3\%$ accuracy. The same model with oversampling improves to $89.1\pm0.4\%$ accuracy. Both models drive the training loss to zero. Note that data augmentation is critical for this result: without data augmentation the oversampling model achieves only $79.2\pm0.8\%$ accuracy, overfitting to the data.
\subsection{Adversarial Training}
\label{sec:cifar-baselines-adversarial}
Another approach to bias mitigation commonly suggested in the literature is \textit{fairness through blindness}. That is, if a model does not look at, or specifically encode, information about a protected variable, then it cannot be biased. To this end, adversarial training is set up through the minimax objective: maximize the classifier's ability to predict the class, while minimizing the adversary's ability to predict the protected variable based on the underlying learned features.
This intuitive approach, however, has a major drawback. Suppose we aim to have equivalent feature representations across domains. Even if a particular protected attribute does not exist in the feature representation of a classifier, combinations of other attributes can be used as a proxy. This phenomenon is termed \emph{redundant encoding} in the literature \cite{hardt2016equality, dwork2012fairness}. For an illustrative example, consider a real-world task of a bank evaluating a loan application, irrespective of the applicant's gender. Suppose that the applicant's employment history lists `nurse'. It can thus, by proxy, be inferred with high probability that the applicant is also a woman. However, employment history is crucial to the evaluation of a loan application, and thus the removal of this redundant encoding will degrade its ability to perform the evaluation.
\smallsec{Experimental Evaluation} We apply adversarial learning to de-bias the object classifier. We consider both the uniform confusion loss \(-(1/|D|) \sum_{d} \log q_d \) of~\cite{alvi2018turning} (inspired by~\cite{tzeng2015simultaneous}), and the loss reversal \( \sum_{d} \mathds{1}[\widehat{d} = d] \log q_d \) with gradient projection of~\cite{zhang2018mitigating}.\footnote{We apply the adversarial classifiers on the penultimate layer for~\cite{alvi2018turning,tzeng2015simultaneous} model, and on the final classification layer for~\cite{zhang2018mitigating} as recommended by the authors. We experimented with other combinations of layers and losses, including applying the projection method of~\cite{zhang2018mitigating} onto the confusion loss of~\cite{alvi2018turning,tzeng2015simultaneous}, and achieved similar results.
The models are trained for \( 500 \) epochs using Adam with a learning rate of 3e-4 and a weight decay of 1e-4. We hold out 10,000 images to tune the hyperparameters before retraining the network on the entire training set. To verify training efficacy, we train SVM domain classifiers on the learned features: the domain classification accuracy is \( 99.0\% \) before and \( 78.2\% \) after adversarial training.} These methods achieve only $83.4\%$ and $84.1\%$ accuracy, respectively. As Fig.~\ref{fig:adversarial} visually demonstrates, although the adversarial classifier enforces domain confusion, it additionally creates undesirable class confusion.
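As a rough sketch, the uniform confusion variant of the main-network objective can be written as follows; the adversary itself is trained separately with a standard cross-entropy domain loss, and the names and the weighting of the two terms are illustrative.
\begin{verbatim}
import torch
import torch.nn.functional as F

def main_network_loss(class_logits, domain_logits, labels, lam=1.0):
    # Task loss plus the uniform confusion term -(1/|D|) sum_d log q_d,
    # which is minimized when the adversary's domain posterior q is uniform.
    task_loss = F.cross_entropy(class_logits, labels)
    q = F.softmax(domain_logits, dim=1)
    confusion = -(torch.log(q + 1e-8).mean(dim=1)).mean()
    return task_loss + lam * confusion
\end{verbatim}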
We run one additional experiment to validate the findings. We test whether models encode the domain (color/grayscale) information even when \emph{not} exposed to a biased training distribution; if so, this would help explain why minimizing this adversarial objective would lead to a worse underlying feature representation and thus reduced classification accuracy. We take the feature representation of a 10-way classifier trained on \emph{all color} images (so not exposed to color/grayscale skew) and train a linear SVM adversary on this feature representation to predict the color/grayscale domain of a new image. This yields an impressive 82\% accuracy; since the ability to discriminate between the two domains emerges naturally even without biased training, it would make sense that requiring that the model not be able to distinguish between the two domains would harm its overall classification ability.
\subsection{Domain Discriminative Training}
\label{sec:cifar-baselines-discriminative}
The alternative to fairness through blindness is \emph{fairness through awareness}~\cite{dwork2012fairness}, where the domain information is first explicitly encoded and then explicitly mitigated. The simplest approach is training an $ND$-way discriminative classifier, where $N$ is the number of target classes and $D$ is the number of domains. The correlation between domains and classes can then be removed during inference in one of several ways.
\begin{figure}
\centering
\begin{tabular}{c@{}c@{}|@{}c@{}c}
\multicolumn{2}{c}{\small w/o adversary} & \multicolumn{2}{c}{\small w/ adversary} \\
{\small domains} & {\small classes} & {\small domains} & {\small classes} \\
\includegraphics[width=0.24\linewidth]{pics/baseline_domain.png} &
\includegraphics[width=0.24\linewidth]{pics/baseline_class.png} &
\includegraphics[width=0.24\linewidth]{pics/adv_domain.png} &
\includegraphics[width=0.24\linewidth]{pics/adv_class.png} \\
\end{tabular}
\caption{Adversarial training \cite{zhang2018mitigating} enforces domain confusion but also introduces unwanted class boundary confusion (t-SNE plots).}
\vspace{-0.1in}
\label{fig:adversarial}
\end{figure}
\subsubsection{Prior Shift Inference}
If the outputs of the $ND$-way classifier can be interpreted as probabilities, a test-time solution for removing the class-domain correlation was introduced in~\cite{saerens2002adjusting} and applied to visual recognition in~\cite{royer2015classifier}. Let the classifier output a joint probability $\mathrm{P}(y,d|x)$ for target class $y$, domain $d$ and image $x$. We can assume that $\mathrm{P}_{\mathrm{tr}}(x|y,d)=\mathrm{P}_{\mathrm{te}}(x|y,d)$, i.e., the distribution of image appearance within a particular class and domain is the same between training and test time. However, $\mathrm{P}_{\mathrm{tr}}(d,y)\neq\mathrm{P}_{\mathrm{te}}(d,y)$, i.e., the correlation between target classes and domains may have changed. This suggests that the test-time probability $\mathrm{P}_{\mathrm{te}}(y,d|x)$ should be computed as:
\begin{align}
\mathrm{P}_{\mathrm{te}}(y,d | x) &\propto \mathrm{P}_{\mathrm{te}}(x | y,d)\mathrm{P}_{\mathrm{te}}(y,d) \label{eq:ps:1}\\
&=\mathrm{P}_{\mathrm{tr}}(x|y,d)\mathrm{P}_{\mathrm{te}}(y,d) \label{eq:ps:2}\\
&\propto \mathrm{P}_{\mathrm{tr}}(y,d|x)\frac{\mathrm{P}_{\mathrm{te}}(y,d)}{\mathrm{P}_{\mathrm{tr}}(y,d)} \label{eq:ps:3}
\end{align}
In theory, this requires access to the test label distribution $\mathrm{P}_{\mathrm{te}}(y,d)$; however, assuming uncorrelated $d$ and $y$ at test time (unbiased $\mathrm{P}_{\mathrm{te}}(d|y)$) and mean per-class accuracy evaluation (uniform $\mathrm{P}_{\mathrm{te}}(y)$), $\mathrm{P}_{\mathrm{te}}(y,d)=\mathrm{P}_{\mathrm{te}}(d|y)\mathrm{P}_{\mathrm{te}}(y)\propto1$.
Eqn.~\ref{eq:ps:3} then simplifies to $\mathrm{P}_{\mathrm{tr}}(y,d|x)/\mathrm{P}_{\mathrm{tr}}(y,d)$, removing the test distribution requirement. With this assumption, the target class predictions can be computed directly as
\begin{equation}
\hat{y} = \argmax_y \max_d \mathrm{P}_{\mathrm{te}}(y,d|x)
\label{eq:discrmax}
\end{equation}
or, using the Law of Total Probability,
\begin{equation}
\label{eq:discrsum}
\hat{y} = \argmax_y \mathrm{P}_{\mathrm{te}}(y|x)=\argmax_y \sum_d\mathrm{P}_{\mathrm{te}}(y,d|x).
\end{equation}
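In code, this inference amounts to dividing the model's joint outputs by the empirical training prior and summing over domains, as in the sketch below (array shapes and names are illustrative).
\begin{verbatim}
import numpy as np

def prior_shift_predict(joint_probs, train_prior):
    # joint_probs: (batch, N, D) array of P_tr(y, d | x);
    # train_prior: (N, D) array of the empirical P_tr(y, d).
    shifted = joint_probs / train_prior[None, :, :]  # prop. to P_te(y, d | x), Eqn. (ps:3)
    # Eqn. (discrsum): sum over domains, then argmax over classes.
    return shifted.sum(axis=2).argmax(axis=1)
\end{verbatim}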
\smallsec{Experimental Evaluation} We train an $ND$-way classifier (20-way softmax in our setting) to discriminate between (class, domain) pairs. This discriminative model with inference prior shift towards a uniform test distribution (Eqn.~\ref{eq:ps:3}) followed by sum of outputs (Eqn.~\ref{eq:discrsum}) achieves $90.3\%$ accuracy, significantly outperforming the $88.5\pm0.3\%$ accuracy of the $N$-way softmax baseline. To quantify the effects of the two steps of inference: taking the highest output predictor rather than summing across domains (Eqn.~\ref{eq:discrmax}) has no effect on accuracy because the two domains are easily distinguishable in this case; however, summing the outputs without first applying prior shift drops accuracy from $90.3\%$ to $87.3\%$.
Finally, we verify that the increase in accuracy is not just the result of the increased number of parameters in the classifier layer. We train an ensemble of baseline models, averaging their softmax predictions: one baseline achieves $88.5\%$ accuracy, two models achieve $89.6\%$, and only an ensemble of \emph{five} baseline models (with 55.9M trainable parameters) achieves $90.0\%$ accuracy, on par with the $90.3\%$ accuracy of the discriminative model (with 11.2M parameters).
\vspace{-0.1in}
\subsubsection{Reducing Bias Amplification}
An alternative inference approach is Reducing Bias Amplification (``RBA'') of Zhao et al.~\cite{zhao_men_2017}. RBA uses corpus-level constraints to ensure inference predictions follow a particular distribution. They propose a Lagrangian relaxation iterative solver since the combinatorial optimization problem is challenging to solve exactly at large scale. This method effectively matches the desired inference distribution and reduces bias; however, the expensive optimization must be run on all test samples before a single inference is possible.
\smallsec{Experimental Evaluation}
In the original setting of \cite{zhao_men_2017}, training and test time biases are equal. However, RBA is flexible enough to optimize for any target distribution. On CIFAR-10S, we thus set the optimization target bias to 0 and the constraint epsilon to \(5\%\). To make the optimization as effective as possible, we substitute in the known test-time domain (because it can be perfectly predicted) so that the optimization only updates the class predictions.
Applying RBA on the $\sum_d \mathrm{P}_{\mathrm{tr}}(y, d |x )$ scores results in $88.6\%$ accuracy, a $1.3\%$ improvement over the simpler $\argmax_y \sum_d \mathrm{P}_{\mathrm{tr}}(y, d |x )$ inference but an insignificant improvement over $88.5\%$ of the {\sc Baseline} model. Interestingly, we also observe that the benefits of RBA optimization are significantly lessened when prior shift is applied beforehand. For example, when using the $\sum_d \mathrm{P}_{\mathrm{te}}(y, d |x )$ post-prior shift scores, accuracy only improves negligibly from $90.3\%$ using $\argmax$ inference to $90.4\%$ using RBA. Therefore, we conclude that applying RBA after prior shift is extraneous. However, the converse is not true as the best accuracy achieved by RBA without prior shift is significantly lower than the accuracy achieved with prior shift inference.
\subsection{Domain Independent Training}
\label{sec:cifar-baselines-independent}
One concern with the discriminative model is that it learns to distinguish between the $ND$ class-domain case; in particular, it explicitly learns the boundary between the same class across different domains (e.g., cat in grayscale versus cat in color, or a woman programming versus a man programming). This may be wasteful, as the $N$-way class decision boundaries may in fact be similar across domains and the additional distinction between the same class in different domains may not be necessary. Furthermore, the model is necessarily penalized in cases where the domain prediction is challenging but the target class prediction is unambiguous.
This suggests training separate classifiers per domain. Doing this naively as an ensemble of fully separate models, however, will yield poor performance, as each model will only see a fraction of the data. We thus consider a shared feature representation with an ensemble of classifiers. This alleviates the data reduction problem for the representation, though not for the classifiers.
Given the predictions $\mathrm{P}(y|d,x)$, multiple inference methods are possible. If the domain $d^*$ is known at test time, $\hat{y} = \argmax_y \mathrm{P}(y|d^*,x)$ is reasonable yet entirely ignores the learned class boundaries in the other domains $d \neq d^*$, and may suffer if some classes $y$ were poorly represented within $d^*$ during training. If a probabilistic interpretation is possible, then two inference methods are reasonable:
\begin{align}
\hat{y} &= \argmax_y \max_d \mathrm{P}(y|d,x), \text{ or } \label{eq:condinf:1}\\
\hat{y} &= \argmax_y \sum_d \mathrm{P}(y|d,x)\mathrm{P}(d|x) \label{eq:condinf:2}
\end{align}
However, Eqn.~\ref{eq:condinf:1} again ignores the learned class boundaries across domains, and Eqn.~\ref{eq:condinf:2} requires inferring $\mathrm{P}(d|x)$ (which may either be trivial, as in CIFAR-10S, reducing to a single-domain model, or complicated to learn and implicitly encoding the correlations between $y$ and $d$ that we are trying to avoid). Further, in practice, while the probabilistic interpretation of a single model may be a reasonable approximation, the probabilistic outputs of the multiple independent models are frequently miscalibrated with respect to each other.
A natural option is to instead reason directly on class boundaries of the $D$ domains, and perform inference as\footnote{Interestingly, under a softmax probabilistic model this inference corresponds to the geometric mean between $\{\mathrm{P}(y|d,x)\}_d$, which is a stable method for combining independent models with different output ranges. }
\begin{equation}
\label{eq:condinf:3}
\hat{y} = \argmax_y \sum_d \mathrm{s}(y,d,x),
\end{equation}
where \( s(y, d, x) \) are the network activations at the classifier layer. For linear classifiers with a shared feature representation this corresponds to averaging the class decision boundaries. We demonstrate that this technique works well in practice across both single and multi-label target classification tasks at removing class-domain correlations.
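Concretely, with the per-domain classifier activations stacked into an array, the inference of Eqn.~\ref{eq:condinf:3} reduces to the following one-liner (shapes and names are illustrative).
\begin{verbatim}
import numpy as np

def domain_independent_predict(scores):
    # scores: (batch, N, D) array of classifier-layer activations s(y, d, x)
    # from the D domain-specific heads on the shared representation.
    return scores.sum(axis=2).argmax(axis=1)  # sum class boundaries over domains
\end{verbatim}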
\smallsec{Experimental Evaluation} We train a model for performing object classification on the two domains independently. This is implemented as two 10-way independent softmax classifiers sharing the same underlying network. At training time we use knowledge of the image domain to only update one of the classifiers. At test time we apply prior shift to adjust the output probabilities of both classifiers towards a uniform distribution, and consider two inference methods. First, we use only the classifier corresponding to the test domain, yielding a low $88.9\%$ accuracy as expected because it is not able to integrate information across the two domains (despite requiring specialized knowledge of the image domain). Instead, we combine the decision boundaries following Eqn.~\ref{eq:condinf:3} and achieve $92.0\%$ accuracy, significantly outperforming the baseline of $88.5 \pm 0.3\%$.
\vspace{-0.01in}
\subsection{Summary of Findings}
\vspace{-0.01in}
So far we illustrated that the CIFAR-10S setup is an effective benchmark for studying bias mitigation, and provided a thorough evaluation of multiple techniques. We demonstrated the shortcomings of strategic resampling and of adversarial approaches for bias mitigation. We showed that the prior shift inference adjustment of output probabilities is a simpler, more efficient, and more effective alternative to the RBA technique~\cite{zhao_men_2017}. Finally, we concluded that the domain-conditional model with explicit combination of per-domain class predictions significantly outperforms all other techniques. Table~\ref{table:cifar} lays out the findings.
Recall our original goal of Sec.~\ref{sec:cifar} to train a model that mitigates the domain correlation bias in CIFAR-10S enough to classify color images of objects as well as a model trained on only grayscale images would. We have partially achieved that goal. The {\sc DomainIndependent} model trained on CIFAR-10S achieves $92.4\%$ accuracy on color images, significantly better than $89.0\pm0.5\%$ of {\sc Baseline} and approaching $93.0\pm0.2\%$ of the model trained entirely on grayscale images. However, much still remains to be done. We would expect that a model trained on CIFAR-10S would take advantage of the available color cues and perform even better than $93.0\%$, ideally approaching $95.1\%$ accuracy of a model trained on all color images. The correlation bias is a much deeper problem for visual classifiers and much more difficult to mitigate than it appears at first glance.
\section{Real World Experiments}
\vspace{-0.01in}
\label{sec:real-world}
While CIFAR-10S proves to be a useful landscape for bias isolation studies, there remains the implicit assumption throughout that such findings will generalize to other settings. Indeed, it is possible that they may not due to the synthetic nature of the proposed bias generation. We thus investigate our findings in three alternative scenarios. First, in Sec.~\ref{sec:real-world-cifar} we consider two modifications to CIFAR-10S: varying the level of skew beyond the 95\%-5\% studied in Sec.~\ref{sec:cifar-baselines}, and replacing the color/grayscale domains with more realistic non-linear transformations. After verifying all our findings still hold, in Sec.~\ref{sec:real-world-celeba} we consider face attribute recognition on the CelebA dataset~\cite{liu2015faceattributes} where the presence of attributes, e.g., ``smiling'' is correlated with gender.
\subsection{CIFAR Extensions}
\label{sec:real-world-cifar}
There are two key distinctions between the CIFAR-10S dataset studied in Sec.~\ref{sec:cifar-baselines} and the real world scenarios where gender or race are correlated with the target outputs.
\smallsec{Varying Degrees of Domain Distribution} The first distinction is in the \emph{level} of skew, where domain balance may be more subtle than the 95\%-5\% breakdown studied above. To simulate this setting, we evaluate on CIFAR-10S with different levels of color/grayscale skew, using the setup of Sec.~\ref{sec:cifar-baselines}; results appear in Fig.~\ref{fig:real-world-cifar} (\emph{Left}). The {\sc DomainIndep} model consistently outperforms the {\sc Baseline}, although the effect is significantly more pronounced at higher skew levels. For reference, the average gender skew on the CelebA dataset~\cite{liu2015faceattributes} for face attribute recognition described in Sec.~\ref{sec:real-world-celeba} is $80.0\%$\footnote{In this multi-label setting the gender skew is computed on the dev set as the mean across 39 attributes of $\frac{\max(|attr=1,woman|,|attr=1,man|)}{|attr=1|}$.}.
\begin{figure}[t]
\centering
\begin{tabularx}{\linewidth}{m{0.6\linewidth}@{\hskip 0.05in}m{0.3\linewidth}}
\includegraphics[height=1.2in]{pics/DistributionPlot.png}&
\includegraphics[height=1.2in]{pics/imagenet_cifar.png}
\end{tabularx}
\vspace{-0.07in}
\caption{\emph{(Left)} The {\sc DomainIndep} model outperforms the {\sc Baseline} on CIFAR-10S for varying levels of skew. \emph{(Right)} To investigate more real-world domains instead of color-grayscale, we consider the subtle shift between CIFAR and $32\times32$ ImageNet~\cite{CINIC,ILSVRC15}. }
\label{fig:real-world-cifar}
\end{figure}
\smallsec{Other Non-Linear Transformations} The second distinction is that real-world protected attributes differ from each other in more than just a linear color-grayscale transformation (e.g., men and women performing the same task look significantly more different than the same image in color or grayscale). To approximate this in a simple setting, we followed the CIFAR protocol of Sec.~\ref{sec:cifar-baselines}, but instead of converting images to grayscale, we consider alternative domain options in Table~\ref{table:nonlinear}. Arguably the most interesting shift corresponds to taking images of similar classes from ImageNet~\cite{ILSVRC15,CINIC}, and we focus our discussion on that one.
The domain shift here is subtle (shown in Fig.~\ref{fig:real-world-cifar} \emph{Right}) but the conclusions hold: mean per-class per-domain accuracy is {\sc Baseline} $79.4\pm0.4\%$, {\sc Adversarial} $74.1\pm0.6$\%~\cite{alvi2018turning,tzeng2015simultaneous} and $73.1\pm3.0$\%~\cite{zhang2018mitigating} (not shown in Table~\ref{table:nonlinear}), {\sc DomainDiscriminative} $81.5\pm 0.7\%$, and our {\sc DomainIndependent} model $83.5\pm0.3\%$.
One interesting change is that {\sc Oversampling} yields $78.6\pm0.4\%$, significantly lower than the baseline of $79.4\%$, so we investigate further. The drop can be explained by the five classes which were heavily skewed towards CIFAR images at training time: the model overfit to the small handful of ImageNet images which got oversampled, highlighting the concerns with oversampling particularly in situations where the two domains are different from each other and the level of imbalance is high. We observe similar results in the high-to-low-resolution domain shift (third and fourth columns of Table~\ref{table:nonlinear}), where the two domains are again very different from each other. To counteract this effect, we instead applied the class-balanced loss method of Cui et al.~\cite{Cui_2019_CVPR}, cross-validating the hyperparameter on a validation set to $\beta = 0.9$, and achieved a more reasonable result of $79.2\%$, on par with $79.4\pm0.4\%$ of {\sc Baseline} but still behind $83.5\pm0.3\%$ of {\sc DomainIndependent}.
\input{table_nonlinear}
\vspace{-0.02in}
\subsection{CelebA Attribute Recognition}
\vspace{-0.01in}
\label{sec:real-world-celeba}
Finally, we verified our findings on the real-world CelebA dataset~\cite{liu2015faceattributes}, used in~\cite{ryu2017improving} to study face attribute recognition when the presence of attributes, e.g., ``smiling,'' is correlated with gender. We trained models to recognize the 39 attributes (all except the ``Male'' attribute). Out of the 39 attributes, 21 occur more frequently with women and 18 with men, with an average gender skew of 80.0\% when an attribute is present. During evaluation we consider the 34 attributes that have sufficient validation and test images.\footnote{The removed attributes did not contain at least 1 positive male, positive female, negative male, and negative female image. They are: 5 o'clock shadow, goatee, mustache, sideburns and wearing necktie. }
\smallsec{Task and Metric} The target task is multi-label classification, evaluated using mean average precision (mAP) across attributes. We remove the gender bias in the test set by using a weighted mAP metric: for an attribute that appears with $N_m$ men and $N_w$ women images, we weight every positive man image by $(N_m+N_w)/(2N_m)$ and every positive woman image by $(N_m+N_w)/(2N_w)$ when computing the true positive predictions. This simulates the setting where the total weight of positive examples within the class remains constant but is now equally distributed between the genders.
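One plausible implementation of the weighted average precision for a single attribute is sketched below; only positive examples are re-weighted, and the exact interpolation conventions may differ from the evaluation code actually used.
\begin{verbatim}
import numpy as np

def weighted_ap(scores, targets, genders):
    # targets: 0/1 attribute labels; genders: 0 (man) / 1 (woman).
    pos = targets == 1
    n_m = np.sum(pos & (genders == 0))
    n_w = np.sum(pos & (genders == 1))
    w = np.ones(len(targets))
    w[pos & (genders == 0)] = (n_m + n_w) / (2.0 * n_m)  # re-weight positive men
    w[pos & (genders == 1)] = (n_m + n_w) / (2.0 * n_w)  # re-weight positive women
    order = np.argsort(-scores)
    tp = np.cumsum(w[order] * pos[order])      # weighted true positives
    fp = np.cumsum(1.0 - pos[order])           # negatives keep unit weight
    precision = tp / (tp + fp)
    # AP as the weighted mean of precision values at the positive examples.
    return float(np.sum(precision[pos[order]] * w[order][pos[order]]) / tp[-1])
\end{verbatim}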
We also evaluate the bias amplification (BA) of each attribute~\cite{zhao_men_2017}. For an attribute that appears more frequently with women, this is $P_w/(P_m+P_w)-N_w/(N_m+N_w)$ where $P_w,P_m$ are the number of women and men images respectively classified as positive for this attribute. For attributes that appear more frequently with men, the numerators are $P_m$ and $N_m$. To determine the binary classifier decision we compute a score threshold for each attribute which maximizes the classifier's F-score on the validation set. Since our methods aim to de-correlate gender with the attribute we expect that bias amplification will be \emph{negative} as the predictions approach a uniform distribution across genders.
\smallsec{Training Setup} The images are the Aligned\&Cropped subset of CelebA~\cite{liu2015faceattributes}. We use a ResNet-50~\cite{he_deep_2015} base architecture pre-trained on ImageNet~\cite{ILSVRC15}. The FC layer of the ResNet model is replaced with two consecutive fully connected layers. Dropout and ReLU are applied to the intermediate output between the two fully connected layers, which has size 2048. The model is trained with a binary cross-entropy loss with logits using a batch size of 32, for 50 epochs with the Adam optimizer~\cite{adam} (learning rate 1e-4). The best model over all epochs is selected per inference method on the validation set. For adversarial training, we run an extensive hyperparameter search over the relative weights of the losses and the number of epochs of the adversary. We select the model with the highest weighted mAP on the validation set among all models that successfully train a de-biased representation (accuracy of the gender classifier drops by at least 1\%; otherwise it is essentially the {\sc Baseline} model with the same mAP). The models are evaluated on the test set.
\begin{table}[t]
\centering
\footnotesize
\begin{tabular}{llcc}
\toprule
{\sc Model} & {\sc Model} & {\sc mAP} & {\sc BA} \\
\hline
{\sc Base} & N sigmoids & 74.7 & 0.010\\
{\sc Adver}
& w/uniform conf.~\cite{alvi2018turning,tzeng2015simultaneous}
& 71.9 & 0.019\\
\multirow{1}{*}{{\sc DomDis}} & 2N sigm, $\sum_d \mathrm{P_{tr}}(y,d|x)$ & 73.8 & 0.007\\
\midrule
\multirow{4}{*}{{\sc DomInd}} & 2N sigmoids, $\mathrm{P_{tr}}(y|d^*,x)$ & 73.8 & 0.009\\
& 2N sigm, $\max_d \mathrm{P_{tr}}(y|d,x)$ & 75.4 & {\bf -0.039}\\
& 2N sigm, $\sum_d \mathrm{P_{tr}}(y|d,x)$ & 76.0 & -0.037\\
& 2N sigmoids, $\sum_d s(y,d,x)$ & {\bf 76.3} & -0.035\\
\bottomrule
\end{tabular}
\vspace{0.1in}
\caption{Attribute classification accuracy evaluated using mAP (in \%, $\uparrow$) weighted to ensure an equal distribution of men and women appearing with each attribute, and Bias Amplification ($\downarrow$). Evaluation is on the CelebA test set, across 34 attributes that have sufficient validation data; details in Sec.~\ref{sec:real-world-celeba}. }
\vspace{-0.1in}
\label{table:celeba}
\end{table}
\smallsec{Results} Table~\ref{table:celeba} summarizes the results. The overall conclusions from Sec.~\ref{sec:cifar-baselines} hold despite the transition to the multi-label setting and to real-world gender bias. {\sc Adversarial} training, as before, de-biases the representation but also harms the mAP (71.9\% compared to 74.7\% for {\sc Baseline}). In this multi-label setting we do not consider a probabilistic interpretation of the output as the classifier models are trained independently instead of jointly in a softmax. Without this interpretation and prior shift, the {\sc DomainDiscriminative} model achieves less competitive results than the baseline at $73.8\%$. RBA inference of~\cite{zhao_men_2017} towards a uniform distribution performs similarly at $73.6\%$. The {\sc DomainIndependent} model successfully mitigates gender bias and outperforms the domain-unaware {\sc Baseline} on this task, increasing the weighted mAP from $74.7\%$ to $76.3\%$. Alternative inference methods, such as selecting the known domain, computing the max output over the domains, or summing the output probabilities directly, achieve similar bias amplification results but perform between $0.3\%$ and $2.5\%$ worse in mAP.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{pics/celeba_skew_high_res.png}
\vspace*{-0.05in}
\caption{Per-attribute improvement of the {\sc DomainIndependent} model over the {\sc Baseline} model on the CelebA validation set, as a function of the level of gender imbalance in the attribute. Attributes with high skew (such as ``bald'') benefit most significantly. }
\vspace*{-0.1in}
\label{fig:celeba_imbalance}
\end{figure}
\smallsec{Analysis} We take a deeper look at the per-class results on the validation set to understand the factors that contribute to the improvement. Overall the {\sc DomainIndependent} model improves over {\sc Baseline} on 24 of the 34 attributes. Fig.~\ref{fig:celeba_imbalance} demonstrates that the level of gender skew in the attribute is highly correlated with the amount of improvement ($\rho=0.709$). Attributes that have skew greater than $80\%$ (out of the positive training images for this attribute at least $80\%$ belong to one of the genders) always benefit from the {\sc DomainIndependent} model. This is consistent with the findings from CIFAR-10S in Fig.~\ref{fig:real-world-cifar}\emph{(Left)}. When the level of skew is insufficiently high the harm from using fewer examples when training the {\sc DomainIndependent} model outweighs the benefit of decomposing the representation.
\smallsec{Oversampling} Finally, we note that the {\sc Oversampling} model in this case achieves a high mAP of 77.6\% and bias amplification of -0.061, outperforming the other techniques. This is expected as we know from prior experiments in Sec.~\ref{sec:cifar-baselines} and \ref{sec:real-world-cifar} that oversampling performs better in settings where the two domains are more similar (color/grayscale, $28\times28$ vs.\ $32\times32$ crop) and where the skew is low while the dataset size is large, so it does not suffer from overfitting.
\section{Related Work}
\smallsec{Mitigating Spurious Correlation} Recent work on the effects of human bias on machine learning models investigates two challenging problems: identifying and quantifying bias in datasets, and mitigating its harmful effects. In relation to the former, \cite{buda_systematic_2017, liu2009exploratory} study the effect of class-imbalance on learning, while \cite{zhao_men_2017} reveal the surprising phenomenon of bias amplification. Additionally, recent works have shown that ML models possess bias towards legally protected classes \cite{beautycontest,gender-shades-2018,bolukbasi2016man,Caliskan2017science,madras2018learning,creager2019flexibly}. Our work complements these by presenting a dataset that allows us to isolate and control bias precisely, alleviating the usual difficulties of quantifying bias.
On the bias mitigation side, early works investigate techniques for simpler linear models \cite{khosla2012undoing, zemel2013learning}. Our constructed dataset allows us to isolate bias while not simplifying our architecture. More recently, works have begun looking at more sophisticated models. For example, \cite{zhao_men_2017} propose an inference update scheme to match a target distribution, which can remove bias. \cite{ryu2018inclusivefacenet} introduce InclusiveFaceNet for improved attribute detection across gender and race subgroups; our discriminative architecture is inspired by this work. Conversely, \cite{dwork2018decoupled} propose a scheme for decoupling classifiers, which we use to create our domain independent architecture. The last relevant approach to bias mitigation for us is adversarial mitigation \cite{alvi2018turning,zhang2018mitigating,edwards2016censoring,ganin2015unsupervised}. Our work uses our novel dataset to explicitly highlight the drawbacks, and offers a comparison between these mitigation strategies that would be impossible without access to a bias-controlled environment.
\smallsec{Fairness Criterion}
Pinning down an exact and generally applicable notion of fairness is an inherently difficult and important task. Various fairness criteria have been introduced and analyzed, including demographic parity \cite{kilbertus2017avoiding,zhang2018mitigating}, predictive parity \cite{gajane2017formalizing}, error-rate balance \cite{hardt2016equality}, equality-of-odds and equality-of-opportunity \cite{hardt2016equality}, and fairness-through-unawareness \cite{Pedreshi:2008:DDM:1401890.1401959} to try to quantify bias. Recent work has shown that such criteria must be selected carefully; \cite{hardt2016equality} prove minimizing error disparity across populations, even under relaxed assumptions, is equivalent to randomized predictions; \cite{hardt2016equality} introduce and explain the limitations of an `oblivious' discrimination criterion through a non-identifiability result; \cite{Pedreshi:2008:DDM:1401890.1401959} demonstrate that ignoring protected attributes is ineffective due to redundant encoding; \cite{dwork2012fairness} show that demographic parity does not ensure fairness. We define our tasks such that test accuracy directly represents model bias.
\smallsec{Surveying Evaluations}
We are inspired by previous works which aggregate ideas, methods and findings to provide a unified survey of a subfield of computer vision~\cite{imagenetTransferLearning,russakovskyiccv13,Sigurdsson_2017,hoiemODerrors}. For example, \cite{torralba_unbiased_2011} surveys relative dataset biases present in computer vision datasets, including selection bias (datasets favoring certain types of images), capture bias (photographers take similar photos), category bias (inconsistent or imprecise category definitions), and negative set bias (unrepresentative or unbalanced negative instances). We continue this line of work for bias mitigation methods for modern visual recognition systems, introducing a benchmark for evaluation which isolates bias, and showing that our analysis generalizes to other, more complex, biased datasets.
\section{Introduction} \label{intro}
\subsection{The countable Moore-Schmidt theorem}
Suppose that $X = (X,\X,\mu)$ is a probability space. If $Y = (Y,\Y)$ is a measurable space and $f \colon X \to Y$ is a measurable map, we define the \emph{pullback map} $f^*: \Y \to \X$ by
$$ f^* E \coloneqq f^{-1}(E)$$
for $E \in \Y$, and then define the \emph{pushforward measure} $f_* \mu$ on $Y$ by the usual formula
$$ f_*\mu(E) \coloneqq \mu(f^* E).$$
For reasons that will become clearer later, we will refer to measurable spaces and measurable maps as \emph{concrete measurable spaces} and \emph{concrete measurable maps} respectively.
We define $\Aut(X,\X,\mu)$ to be the space of all concrete invertible bimeasurable maps $T: X \to X$ such that $T_* \mu = \mu$; this is a group.
If $\Gamma = (\Gamma,\cdot)$ is a discrete group, we define a \emph{(concrete) measure-preserving action} of $\Gamma$ on $X$ to be a group homomorphism $\gamma \mapsto T^\gamma$ from $\Gamma$ to $\Aut(X,\X,\mu)$. If $K = (K,+)$ is a compact Hausdorff\footnote{It is likely that the arguments here extend to non-Hausdorff compact groups by quotienting out the closure of the identity element, but the Hausdorff case already captures all of our intended applications and so we make this hypothesis to avoid some minor technical issues.} abelian group, which we endow with the Borel $\sigma$-algebra $\B(K)$, we define a \emph{$K$-valued (concrete measurable) cocycle} for this action to be a family $\rho = (\rho_\gamma)_{\gamma \in \Gamma}$ of concrete measurable maps $\rho_\gamma: X \to K$ such that for any $\gamma_1, \gamma_2 \in \Gamma$, the cocycle equation
\begin{equation}\label{cocy}
\rho_{\gamma_1\gamma_2} = \rho_{\gamma_1} \circ T^{\gamma_2} + \rho_{\gamma_2}
\end{equation}
holds $\mu$-almost everywhere. A cocycle $\rho$ is said to be a \emph{(concrete measurable) coboundary} if there exists a concrete measurable map $F: X \to K$ such that for each $\gamma \in \Gamma$, one has
\begin{equation}\label{cobound}
\rho_\gamma = F \circ T^\gamma - F
\end{equation}
$\mu$-almost everywhere. Note that \eqref{cobound} automatically implies \eqref{cocy}.
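Indeed, since $\gamma \mapsto T^\gamma$ is a homomorphism we have $T^{\gamma_1\gamma_2} = T^{\gamma_1} \circ T^{\gamma_2}$, so that for a coboundary
$$ \rho_{\gamma_1\gamma_2} = F \circ T^{\gamma_1} \circ T^{\gamma_2} - F = (F \circ T^{\gamma_1} - F) \circ T^{\gamma_2} + (F \circ T^{\gamma_2} - F) = \rho_{\gamma_1} \circ T^{\gamma_2} + \rho_{\gamma_2}$$
$\mu$-almost everywhere, which is exactly the cocycle equation \eqref{cocy}.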
It is of interest to determine the space of all $K$-valued concrete measurable coboundaries. The following remarkable result of Moore and Schmidt \cite[Theorem 4.3]{moore1980coboundaries} reduces this problem to the case of coboundaries taking values in the unit circle $\T=\R/\Z$, at least under certain regularity hypotheses on the data $\Gamma,X,K$. More precisely, let $\hat K$ denote the Pontryagin dual of the compact Hausdorff abelian group $K$, that is to say the space of all continuous homomorphisms $\hat k: k \mapsto \langle \hat k, k \rangle$ from $K$ to $\T$.
\begin{theorem}[(Countable) Moore-Schmidt theorem]\label{mst}
Let $\Gamma$ be a discrete group acting (concretely) on a probability space $X = (X,\X,\mu)$, and let $K$ be a compact Hausdorff abelian group. Assume furthermore:
\begin{itemize}
\item[(a)] $\Gamma$ is at most countable.
\item[(b)] $X = (X,\X,\mu)$ is a standard Borel space.
\item[(c)] $K$ is metrisable.
\end{itemize}
Then a $K$-valued concrete measurable cocycle $\rho = (\rho_\gamma)_{\gamma \in \Gamma}$ on $X$ is a coboundary if and only if the $\T$-valued cocycles $\langle \hat k, \rho\rangle \coloneqq (\langle \hat{k}, \rho_\gamma\rangle)_{\gamma \in \Gamma}$ are coboundaries for all $\hat{k} \in \hat K$.
\end{theorem}
In fact, the results in \cite{moore1980coboundaries} extend to the case when $\Gamma$ and $K$ are locally compact groups (which are now assumed to be second countable instead of countable), and $(\langle \hat{k}, \rho_\gamma\rangle)_{\gamma \in \Gamma}$ is only assumed to be a coboundary for almost all $\hat{k} \in \hat K$ with respect to some ``full'' measure. We will not discuss such extensions of this theorem here, but mention that the original proof by Moore and Schmidt at this level of generality crucially relies on measurable selection theorems.
The Moore-Schmidt theorem is a beautiful classification result which serves as a relevant technical tool in ergodic theory and probability.
It formulates a condition for the triviality of the first cohomology class of cocycles, an important invariant of measure-theoretic actions of groups, by describing the size of the set of characters necessary and sufficient to test triviality.
It is particularly helpful for understanding the structure of cocycles. See e.g., \cite{host2005nonconventional,bergelson2010inverse,austin2015pleasant1} for applications in the structure theory of nonconventional ergodic averages of multiple recurrence type, \cite{aaronson2001local,gouezel2005berry} for applications to limit theorems in probability, and \cite{schmidt1980asymptotically,moore1979groups,bergelson2014rigidity,helson1986cocycles} for some applications in other classification and asymptotic results in ergodic theory.
We briefly sketch here a proof of Theorem \ref{mst}. Using the ergodic decomposition \cite{furstenberg2014recurrence} (which takes advantage of the hypotheses (a), (b)) we may assume without loss of generality that the action is ergodic. By definition, for each $\hat k \in \hat K$ there exists an element $\alpha_{\hat k}$ of the group $L^0(X \to \T)$ of concrete measurable functions from $X$ to $\T$ (modulo $\mu$-almost everywhere equivalence) such that
\begin{equation}\label{hatk}
\langle \hat{k}, \rho_\gamma \rangle = \alpha_{\hat k} \circ T^{\gamma} - \alpha_{\hat k}
\end{equation}
$\mu$-almost everywhere. For any $\hat k_1, \hat k_2 \in \hat K$, one sees from comparing \eqref{hatk} for $\hat k_1, \hat k_2, \hat k_1 + \hat k_2$ that the function $\alpha_{\hat k_1 + \hat k_2} - \alpha_{\hat k_1} - \alpha_{\hat k_2}$ is $\Gamma$-invariant up to $\mu$-almost sure equivalence, and hence equal in $L^0(X \to \T)$ to a constant $c(\hat k_1, \hat k_2) \in \T$, by the ergodicity hypothesis. Viewing $\T$ as a divisible\footnote{That is, for any $x \in \T$ and $n \in \N$, there exists $y \in \T$ such that $ny=x$.} subgroup of the abelian group $L^0(X \to \T)$, a routine application of Zorn's lemma\footnote{We freely assume the axiom of choice in this paper.} (see e.g., \cite[p.~46--47]{halmos2013lectures}) then lets us obtain a retract homomorphism $w: L^0(X \to \T) \to \T$. If we define the modified function $\tilde \alpha_{\hat k} \coloneqq \alpha_{\hat k} - w(\alpha_{\hat k})$, then we have $\tilde \alpha_{\hat k_1 + \hat k_2} = \tilde \alpha_{\hat k_1} + \tilde \alpha_{\hat k_2}$ $\mu$-almost everywhere for each $\hat k_1, \hat k_2 \in \hat K$. By hypothesis (c), $\hat K$ is at most countable, hence for $\mu$-almost every point $x \in X$, the map $\hat k \mapsto \tilde \alpha_{\hat k}(x)$ is a homomorphism from $\hat K$ to $\T$, and hence by Pontryagin duality takes the form $\tilde \alpha_{\hat k}(x) = \langle \hat k, F(x) \rangle$ for some $\mu$-almost everywhere defined map $F: X \to K$, which one can verify to be measurable. One can then check that
$$ \rho_\gamma = F \circ T^\gamma - F $$
$\mu$-almost everywhere, giving the claim.
\subsection{The uncountable Moore-Schmidt theorem}
The hypotheses (a), (b), (c) were used in the above proof, but one can ask if they are truly necessary for Theorem \ref{mst}. Thus, we can ask whether the Moore-Schmidt theorem holds for actions of uncountable discrete groups $\Gamma$ on spaces $X$ that are not standard Borel, with cocycles taking values in groups $K$ that are compact Hausdorff abelian, but not necessarily metrizable. We refer to this setting as the ``uncountable'' setting for short, in contrast to the ``countable'' setting in which hypotheses such as (a), (b), (c) are imposed. Our motivation for this is to remove similar regularity hypotheses from other results in ergodic theory, such as the Host-Kra structure theorem \cite{host2005nonconventional}, which rely at one point on the Moore-Schmidt theorem. This in turn is motivated by the desire to apply such structure theory to such situations as actions of hyperfinite groups on spaces equipped with Loeb measure, which (as has been seen in such work as \cite{szegedy2012higher}, \cite{gtz}) is connected with the inverse conjecture for the Gowers norms in additive combinatorics. We plan to address these applications in future work.
Unfortunately, a naive attempt to remove the hypotheses from Theorem \ref{mst} leads to counterexamples. The main difficulty is the \emph{Nedoma pathology}: Once the compact Hausdorff abelian group $K$ is no longer assumed to be metrizable, the product Borel $\sigma$-algebra $\B(K) \otimes \B(K)$ can be strictly smaller than the Borel $\sigma$-algebra $\B(K \times K)$, and the group operation $+: K \times K \to K$, while still continuous, can fail to be measurable when $K \times K$ is equipped with the product $\sigma$-algebra $\B(K) \otimes \B(K)$: see Remark \ref{red-counter}. As a consequence, one cannot even guarantee that the sum $f+g$ of two measurable functions $f,g: X \to K$ remains measurable, and so even the very definition of a $K$-valued measurable cocycle or coboundary becomes problematic if one insists on endowing $K$ with the Borel $\sigma$-algebra $\B(K)$.
Two further difficulties, of a more technical nature, also arise. One is that if $X$ is no longer assumed to be standard Borel, then tools such as disintegration may no longer be available; one similarly may lose access to measurable selection theorems when $K$ is not metrizable. The other is that if $\Gamma$ is allowed to be uncountable or $K$ is allowed to be non-metrizable, then one may have to manipulate an uncountable number of assertions that each individually hold $\mu$-almost everywhere, but for which one cannot ensure that they \emph{simultaneously} hold $\mu$-almost everywhere, because the uncountable union of null sets need not be null.
To avoid these difficulties, we will make the following modifications to the setup of the Moore-Schmidt theorem, which turn out to be natural changes to make in the uncountable setting.
The most important change, which is needed to avoid the Nedoma pathology, is to coarsen the $\sigma$-algebra on the compact group $K$, from the Borel $\sigma$-algebra to the Baire $\sigma$-algebra (see e.g.~\cite[Volume 2]{bogachev2006measure} for a reference):
\begin{definition}[Baire $\sigma$-algebra]\label{reduced-def} If $K$ is a compact space, we define the \emph{Baire $\sigma$-algebra} $\Baire(K)$ to be the $\sigma$-algebra generated by all the continuous maps $f: K \to \R$. We use $K_\Baire$ to denote the concrete measurable space $K_\Baire = (K, \Baire(K))$.
\end{definition}
Since every closed subset $F$ of a compact metric space $S$ is the zero set of a real-valued continuous function $x \mapsto \mathrm{dist}(x,F)$, we see that the Baire $\sigma$-algebra $\Baire(K)$ of a compact space $K$ can equivalently be defined as the $\sigma$-algebra generated by all the continuous maps into compact metric spaces. Clearly $\Baire(K)$ is a subalgebra of $\B(K)$ which is equal to $\B(K)$ when $K$ is metrizable. However, it can be strictly smaller; see Remark \ref{red-counter}. In Proposition \ref{group-mes} we will show that if $K$ is a compact Hausdorff group, then the group operations on $K$ are measurable on $K_\Baire$, even if they need not be on $K$. For this and other reasons, we view $K_\Baire$ as the ``correct'' measurable space structure to place on $K$ when $K$ is not assumed to be metrizable.
To avoid the need to rely on disintegration and measurable selection, and to avoid situations where we take uncountable unions of null sets, we shall adopt a ``point-less'' or ``abstract'' approach to measure theory, by replacing concrete measurable spaces $(X,{\mathcal X})$ with their abstract counterparts. Namely:
\begin{definition}[Abstract measurable spaces] The category of abstract measurable spaces is the opposite\footnote{This is analogous to how the category of Stone spaces is equivalent to the opposite category of Boolean algebras, or how the category of affine schemes is equivalent to the opposite category of the category of commutative rings. One could also adopt a noncommutative probability viewpoint, and interpret the category of abstract probability spaces as the opposite category to the category of tracial commutative von Neumann algebras, but we will not need to do so in this paper.} category of the category of $\sigma$-complete boolean algebras (or \emph{abstract $\sigma$-algebras}). That is to say, an abstract measurable space is of the form $\X^\op$, where $\X = (\X, 0, 1, \wedge, \vee, \overline{\cdot})$ is a Boolean algebra that is $\sigma$-complete (all countable families have meets and joins), and an abstract measurable map $f: \X^\op \to \Y^\op$ from one abstract measurable space $\X^\op$ to another $\Y^\op$ is of the form $f = (f^*)^{\mathrm{op}}$, where $f^*: \Y \to \X$ is a $\sigma$-complete homomorphism, that is to say a Boolean algebra homomorphism that also preserves countable joins: $f^* \bigvee_{n=1}^\infty E_n = \bigvee_{n=1}^\infty f^* E_n$ for all $E_n \in \Y$. We refer to $f^*$ as the \emph{pullback map} associated to $f$. Here $\op$ is a formal symbol to indicate use of the opposite category. The space of all abstract measurable maps from $\X^\op$ to $\Y^\op$ will be denoted $\Hom( \X^\op \to \Y^\op)$. If $f \in \Hom( \X^\op \to \Y^\op)$ and $g \in \Hom( \Y^\op \to {\mathcal Z}^\op)$ are abstract measurable maps, the composition $g \circ f \in \Hom( \X^\op \to {\mathcal Z}^\op)$ is defined by the formula $g \circ f \coloneqq (f^* \circ g^*)^\op$ (or equivalently $(g \circ f)^* = f^* \circ g^*$). Elements of $\X$ will be referred to as \emph{abstract measurable subsets} of $\X^\op$.
We will also occasionally need the notion of a \emph{weak abstract measurable map} $f: \X^\op \to \Y^\op$, which is an object of the form $f = (f^*)^{\mathrm{op}}$ where $f^*: \Y \to \X$ is a Boolean algebra homomorphism that is not required to preserve countable joins. We let $\Hom_w( \X^\op \to \Y^\op)$ denote the space of weak abstract measurable maps, thus $\Hom(\X^\op \to \Y^\op) \subset \Hom_w(\X^\op \to \Y^\op)$.
\end{definition}
Note that any (concrete) measurable space $(X,\X)$ can be viewed as an abstract measurable space by viewing the $\sigma$-algebra $\X$ as a $\sigma$-complete Boolean algebra in the obvious manner (replacing set-theoretic symbols such as $\emptyset, X, \cup, \cap$ with their Boolean algebra counterparts $0, 1, \vee, \wedge$) and identifying $(X,\X)$ with $\X^\op$, and similarly any (concrete) measurable map $f \colon X \to Y$ between two measurable spaces $(X,\X), (Y,\Y)$ can be viewed as an abstract measurable map by identifying $f$ with $(f^*)^\op$, where $f^*: \Y \to \X$ is the pullback map. By abuse of notation, we shall frequently use these identifications in the sequel without further comment. One can then easily check that the category of concrete measurable spaces is a subcategory of the category of abstract measurable spaces (in particular, the composition law for concrete measurable maps is consistent with that for abstract measurable maps).
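For instance, if $f \colon X \to Y$ and $g \colon Y \to Z$ are concrete measurable maps and $E$ is a measurable subset of $Z$, then
$$ (g \circ f)^* E = (g \circ f)^{-1}(E) = f^{-1}\big(g^{-1}(E)\big) = f^*(g^* E),$$
which is exactly the relation $(g \circ f)^* = f^* \circ g^*$ demanded by the abstract composition law.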
\begin{example}\label{ultra} Let $\mathrm{pt}$ be a point (with the discrete $\sigma$-algebra). Then $\Hom(\mathrm{pt} \to \N)$ can be identified with $\N$ (with every natural number $n$ giving an abstract measurable map $n: \mathrm{pt} \to \N$ defined by $n^* E = 1_{n \in E}$ for $E \subset \N$) while $\Hom_w(\mathrm{pt} \to \N)$ (the Stone dual of the discrete $\sigma$-algebra on $\N$) can be identified with the larger space $\beta \N$ of ultrafilters $p$ on $\N$ (each such ultrafilter $p$ being identified with the weak abstract measurable map $p: \mathrm{pt} \to \N$ defined by $p^* E = 1_{E \in p}$ for $E \subset \N$).
\end{example}
An important further example for us of an abstract measurable space (that is not, in general, represented by a concrete measurable space) will be as follows. If $(X,\X,\mu)$ is a measure space, we define the \emph{opposite measure algebra} $X_\mu$ to be the abstract measurable space $(\X_\mu)^\op$, where the measure algebra $\X_\mu \coloneqq \X/\mathcal{N}_\mu$ is the $\sigma$-complete Boolean algebra $\X$ quotiented out by the ideal $\mathcal{N}_\mu \coloneqq \{ A \in \X: \mu(A)=0\}$ of $\mu$-null sets, thus $\X_\mu \coloneqq \{ [A]: A \in \X \}$ where $[A] \coloneqq \{ A' \in \X: A \Delta A' \in \mathcal{N}_\mu \}$ for each $A \in \X$. We call $[A]$ the \emph{abstraction} of $A$ and $A$ a \emph{representative} of $[A]$.
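For instance, if $X = [0,1]$ with the Borel $\sigma$-algebra and Lebesgue measure $\mu$, then $[\{0\}] = [\emptyset] = 0$ and $[\,[0,1/2]\,] = [\,[0,1/2)\,]$ in $\X_\mu$: two representatives give the same abstraction precisely when they differ by a $\mu$-null set.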
Informally, the opposite measure algebra $X_\mu$ is formed from $X$ by ``removing the null sets'' (without losing any sets of positive measure); this is an operation that does not make sense on the level of concrete measurable spaces, but is perfectly well defined in the category of abstract measurable spaces. The measure $\mu$ can be viewed as a countably additive map from the measure algebra $\X_\mu$ to $[0,+\infty]$. There is an obvious ``inclusion map'' $\iota \colon X_\mu \to X$, which is the abstract measurable map defined by setting $\iota^* A \coloneqq [A]$ for all $A \in \X$; this is a monomorphism in the category of abstract measurable spaces.
If $f \colon X \to Y$ is a concrete measurable map, we refer to $[f] \coloneqq \iota \circ f \in \Hom(X_\mu \to Y)$ as the \emph{abstraction} of $f$, and $f$ as a \emph{realization} of $[f]$; chasing all the definitions, we see that $[f]^* E = [f^* E]$ for all measurable subsets $E$ of $Y$. Note that if $f \colon X \to Y$, $g \colon X \to Y$ are concrete measurable maps that agree $\mu$-almost everywhere, then $[f] = [g]$. The converse is only true in certain cases: see Section \ref{condrep-sec}. Furthermore, there exist abstract measurable maps in $\Hom(X_\mu \to Y)$ that have no realizations as concrete measurable maps from $X$ to $Y$; again, see Section \ref{condrep-sec}. As such, $\Hom(X_\mu \to Y)$ is not equivalent in general to the space $L^0(X \to Y)$ of concrete measurable maps from $X$ to $Y$ up to almost everywhere equivalence, although the two spaces are still analogous in many ways. Our philosophy is that $\Hom(X_\mu \to Y)$ is a superior replacement for $L^0(X \to Y)$ in uncountable settings, as it exhibits fewer pathologies; for instance it behaves well with respect to arbitrary products, as seen in Proposition \ref{product}, whereas $L^0(X \to Y)$ does not (see Example \ref{bad-ex}). The main drawback of working with $X_\mu$ is the inability to use ``pointwise'' arguments; however, it turns out that most of the tools we really need for our applications can be formulated without reference to points. (Here we follow the philosophy of ``conditional set theory'' as laid out in \cite{drapeau2016algebra}.)
\begin{example}\label{pointless} Let $X$ be the unit interval $[0,1]$ with the Borel $\sigma$-algebra and Lebesgue measure $\mu$. Then $\Hom_w(\mathrm{pt} \to X_\mu)$ is quite large (it is the Stone space of $\X_\mu$, so for instance $\X_\mu$ can be identified with the clopen subsets of this space), but $\Hom(\mathrm{pt} \to X_\mu)$ can be verified to be empty. Thus $X_\mu$ contains no ``points'' (although it does contain a large number of ``weak points''), which explains why one cannot use ``pointwise'' arguments when working with $X_\mu$ as a base space. Note that this argument also shows that $X_\mu$ is not isomorphic to a concrete measurable space.
\end{example}
Define $\Aut(X_\mu)$ to be the group of invertible elements $T = (T^*)^\op$ of $\Hom(X_\mu \to X_\mu)$. Any element of $\Aut(X,\X,\mu)$ can be abstracted to an element of $\Aut(X_\mu)$; in fact the abstraction lies in the subgroup $\Aut(X_\mu,\mu)$ of $\Aut(X_\mu)$ consisting of maps $T$ that also preserve the measure, $T_* \mu = \mu$, but we will not need this measure-preservation property in our formulation of the Moore-Schmidt theorem. We also remark that there can exist elements of $\Aut(X_\mu,\mu)$ that are not realized\footnote{For a simple example, let $X = \{1,2,3\}$, let $\X$ be the $\sigma$-algebra generated by $\{1\}, \{2,3\}$, and let $\mu$ assign an equal measure of $1/2$ to $\{1\}$ and $\{2,3\}$. Then there is an element of $\Aut(X_\mu,\mu)$ that interchanges the equivalence classes of $\{1\}$ and $\{2,3\}$, but it does not arise from any element of $\Aut(X,\X,\mu)$. For a more sophisticated counterexample in which $\X$ separates points, see \cite{gtw}.} by a concrete element of $\Aut(X,\X,\mu)$. We believe that $\Aut(X_\mu)$ (or $\Aut(X_\mu,\mu)$) is a more natural replacement for $\Aut(X,\X,\mu)$ in the case when $X$ is not required to be standard Borel. An \emph{abstract action} of a discrete (and possibly uncountable) group $\Gamma$ on $X_\mu$ is defined to be a group homomorphism $\gamma \mapsto T^\gamma$ from $\Gamma$ to $\Aut(X_\mu)$. Clearly any concrete measure-preserving action of $\Gamma$ on $X$ also gives rise to an abstract measure-preserving action on $X_\mu$, but there are abstract actions that are not represented by any concrete one.
If $(X,\X,\mu)$ is a probability space (not necessarily standard Borel) and $K$ is a compact abelian group (not necessarily metrizable), then the measurable nature of the group operations on $K_\Baire$ makes the space $\Hom( X_\mu \to K_\Baire )$ an abelian group: see Section \ref{pont-sec}. If $\Gamma$ is a (possibly uncountable) discrete group acting abstractly on $X_\mu$, we define an \emph{abstract $K$-valued cocycle} to be a collection $\rho = (\rho_\gamma)_{\gamma \in \Gamma}$ of abstract measurable maps $\rho_\gamma \in \Hom( X_\mu \to K_\Baire )$ such that
$$ \rho_{\gamma_1\gamma_2} = \rho_{\gamma_1} \circ T^{\gamma_2} + \rho_{\gamma_2}$$
for all $\gamma_1,\gamma_2 \in \Gamma$. Note in comparison to \eqref{cocy} that we no longer need to introduce the caveat ``$\mu$-almost everywhere''. We say that an abstract $K$-valued cocycle is an \emph{abstract coboundary} if there is an abstract measurable map $F \in \Hom( X_\mu \to K_\Baire )$ such that
$$ \rho_\gamma = F \circ T^\gamma - F $$
for all $\gamma \in \Gamma$.
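As in the countable setting, an abstract coboundary is automatically an abstract cocycle: using that precomposition with $T^{\gamma_2}$ respects the group structure on $\Hom(X_\mu \to K_\Baire)$, one computes
$$ \rho_{\gamma_1} \circ T^{\gamma_2} + \rho_{\gamma_2} = (F \circ T^{\gamma_1} - F) \circ T^{\gamma_2} + (F \circ T^{\gamma_2} - F) = F \circ T^{\gamma_1 \gamma_2} - F = \rho_{\gamma_1 \gamma_2}$$
for all $\gamma_1, \gamma_2 \in \Gamma$.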
With these preliminaries, we are finally able to state the uncountable analogue of the Moore-Schmidt theorem. As a minor generalization, we can also allow $(X,\X,\mu)$ to be an arbitrary measure space rather than a probability space; in particular, $(X,\X,\mu)$ is no longer required to be $\sigma$-finite, again in the spirit of moving away from ``countably complicated'' settings.
\begin{theorem}[Uncountable Moore-Schmidt theorem]\label{mstu}
Let $\Gamma$ be a discrete group acting abstractly on the opposite measure algebra $X_\mu$ of a measure space $X = (X,\X,\mu)$, and let $K$ be a compact Hausdorff abelian group. Then an abstract $K$-valued cocycle $\rho = (\rho_\gamma)_{\gamma \in \Gamma}$ on $X_\mu$ is an abstract coboundary if and only if the $\T$-valued abstract cocycles $\hat k \circ \rho \coloneqq (\hat{k}\circ \rho_\gamma)_{\gamma \in \Gamma}$ are abstract coboundaries for all $\hat{k} \in \hat K$.
\end{theorem}
We prove this result in Section \ref{uncountable-sec}; the key tool is a ``conditional'' version of the Pontryagin duality relationship between $K$ and $\hat K$, which we formalize as Theorem \ref{cond-pont}. Once this result is available, the proof mimics the proof of the countable Moore-Schmidt theorem, translated to the abstract setting. We avoid the use of the ergodic decomposition by replacing the role of the scalars $\T$ by the invariant factor $\Hom(X_\mu \to \T)^\Gamma$.
While we believe that the formalism of abstract measurable spaces is the most natural one for this theorem, one can still ask to what extent Theorem \ref{mstu} continues to hold if one works with concrete actions, cocycles, and coboundaries instead of abstract ones. We do not have a complete answer to this question, but we give some partial results in Sections \ref{condrep-sec}, \ref{concrete-sec}; in particular we recover Theorem \ref{mst} as a corollary of Theorem \ref{mstu}. Interestingly, once one insists on concrete realizability of various maps, the truth of various natural statements becomes sensitive to axioms of set theory that are independent of ZFC. This issue does not seem to arise when one restricts attention to abstract measurable spaces and abstract measurable maps, which reinforces our belief that the latter formalism is the ``correct'' one to adopt in uncountable settings.
\begin{remark} If $\Scal^\op$ is an arbitrary abstract measurable space, then by the Loomis-Sikorski theorem \cite{loomis1947}, \cite{sikorski2013boolean} $\Scal$ is isomorphic to $\X/{\mathcal N}$ for some concrete measurable space $(X,\X)$ and some null ideal ${\mathcal N}$ of $\X$. In particular $\Scal^\op$ is isomorphic to $X_\mu$, where $\mu$ is the (non-$\sigma$-finite) measure on $X$ that assigns $0$ to elements of ${\mathcal N}$ and $+\infty$ to all other elements. Thus in Theorem \ref{mstu} one can replace the opposite measure algebra $X_\mu$ by an arbitrary abstract measurable space.
\end{remark}
\subsection{Notation}
For any unexplained definition or result in the theory of measure algebras, we refer the interested reader to \cite{fremlin1989measure}, and for any unexplained definition or result in the general theory of Boolean algebras to \cite[Part 1]{monk1989handbook}.
If $S$ is a statement, we use $1_S$ to denote its indicator, equal to $1$ when $S$ is true and $0$ when $S$ is false. (In some cases, $1$ and $0$ will be interpreted as elements of a Boolean algebra, rather than as numbers.)
\subsection{Acknowledgments}
AJ was supported by DFG-research fellowship JA 2512/3-1.
TT was supported by a Simons Investigator grant, the James and Carol Collins Chair, the Mathematical Analysis \& Application Research Fund Endowment, and by NSF grant DMS-1764034.
\section{The Baire $\sigma$-algebra}\label{reduced-sec}
In this section we explore some properties of the measurable spaces $K_\Baire = (K, \Baire(K))$ defined in Definition \ref{reduced-def}. We have already observed that $\Baire(K) = \B(K)$ when $K$ is a metric space. We now generalize this observation:
\begin{lemma}[Description of Baire $\sigma$-algebra]\label{subspace-red} Let $K$ be a closed subspace of a product $S_A \coloneqq \prod_{\alpha \in A} S_\alpha$ of compact metric spaces $S_\alpha$. Then $\Baire(K)$ is the restriction of the product $\sigma$-algebra $\B_A \coloneqq \bigotimes_{\alpha \in A} \B(S_\alpha)$ to $K$:
$$ \Baire(K) = \{ E \cap K: E \in \B_A \}.$$
Equivalently, $\Baire(K)$ is the $\sigma$-algebra generated by the coordinate projections $\pi_\alpha \colon K \to S_\alpha$, $\alpha \in A$.
\end{lemma}
We caution that this lemma does \emph{not} assert that $K$ itself lies in $\B_A$; see Remark \ref{red-counter} below for an explicit counterexample.
\begin{proof} The coordinate projections from $K$ to each of the compact metric spaces $S_\alpha$ are all continuous. Because of this, we see that all the generating sets of $\B_A$ lie in $\Baire(K)$ when restricted to $K$, and hence $\Baire(K)$ contains the restriction of $\B_A$ to $K$. To reverse the inclusion, it suffices to show that the preimage $f^* \overline{B_Y(y,r)}$ of a closed ball $\overline{B_Y(y,r)}$ under any continuous map $f: K \to Y$ into a compact metric space $(Y,d_Y)$ is the restriction of a $\B_A$-measurable set to $K$. For each $n$, the preimage $f^* B_Y(y,r+\frac{1}{n})$ is open in $K$ by continuity, and so is the intersection with $K$ of an open set $O$ in $S_A$.
We can write $O$ as a union of basic open sets in $S_A$, each of which depends on only finitely many coordinates and is in particular $\B_A$-measurable.
By compactness of $f^* \overline{B_Y(y,r)}$, finitely many of these basic open sets already cover $f^* \overline{B_Y(y,r)}$; their union is thus a $\B_A$-measurable set whose restriction to $K$ contains $f^* \overline{B_Y(y,r)}$ and is contained in $f^* B_Y(y,r+\frac{1}{n})$. Taking intersections in $n$, we conclude that $f^* \overline{B_Y(y,r)}$ is itself the restriction of a $\B_A$-measurable set, and the claim follows.
\end{proof}
Lemma \ref{subspace-red} combines well with the following theorem of Weil \cite{weil1937espaces}:
\begin{theorem}[Weil's theorem]\label{weil-thm} Every compact Hausdorff space is homeomorphic to a closed subset of a product of compact metric spaces.
\end{theorem}
Lemma \ref{subspace-red} also combines well with the following topological lemma:
\begin{lemma}\label{embed} Let $K$ be a compact Hausdorff space, and let $\rho = (\rho_\alpha)_{\alpha \in A}$ be a family of continuous maps $\rho_\alpha \colon K \to S_\alpha$ from $K$ to compact Hausdorff spaces $S_\alpha$. Suppose that the $\rho_\alpha$ separate points, thus for every distinct $k,k' \in K$ there exists $\alpha \in A$ such that $\rho_\alpha(k) \neq \rho_\alpha(k')$. We view $\rho \colon K \to S_A$ as a map from $K$ to $S_A$ by setting $\rho(k) \coloneqq (\rho_\alpha(k))_{\alpha \in A}$. Then $\rho(K)$ is a closed subset of $S_A$, and $\rho$ is a homeomorphism between $K$ and $\rho(K)$ (where we give the latter the topology induced from the product topology on $S_A$).
\end{lemma}
\begin{proof} Clearly $\rho$ is continuous and injective (since the $\rho_\alpha$ separate points), so $\rho(K)$ is compact and hence closed in the Hausdorff space $S_A$. Thus $\rho \colon K \to \rho(K)$ is a continuous bijection between compact Hausdorff spaces; it therefore maps closed (hence compact) sets to compact (hence closed) sets, so it is a closed map, and is thus a homeomorphism as required.
\end{proof}
In the case when $K$ is a group, we can give a more explicit description of an embedding $\rho$ of the form described in Lemma \ref{embed}:
\begin{corollary}[Description of compact Hausdorff groups]\label{group-desc} Let $K$ be a compact Hausdorff group.
\begin{itemize}
\item[(i)] There exists a family $\rho = (\rho_\alpha)_{\alpha \in A}$ of continuous unitary representations $\rho_\alpha \colon K \to S_\alpha$, $\alpha \in A$, of $K$ (thus each $S_\alpha$ is a unitary group and $\rho_\alpha$ is a continuous homomorphism) such that $\rho(K)$ is a closed subgroup of $S_A$, and $\rho \colon K \to \rho(K)$ is an isomorphism of topological groups. The $\sigma$-algebra $\Baire(K)$ is generated by the representations $\rho_\alpha$.
\item[(ii)] If $K = (K,+)$ is abelian, and one defines the map $\iota \colon K \to \T^{\hat K}$ by $\iota(k) \coloneqq (\langle \hat k, k \rangle)_{\hat k \in \hat K}$, then $\iota(K)$ is a closed subgroup of $\T^{\hat K}$, and $\iota \colon K \to \iota(K)$ is an isomorphism of topological groups. The $\sigma$-algebra $\Baire(K)$ is generated by the characters $\hat k \in \hat K$. Furthermore, one can describe $\iota(K)$ explicitly as
\begin{equation}\label{iotak} \iota(K) = \{ (\theta_{\hat k})_{\hat k \in \hat K} \in \T^{\hat K}: \theta_{\hat k_1 + \hat k_2} = \theta_{\hat k_1} + \theta_{\hat k_2} \forall \hat k_1, \hat k_2 \in \hat K \}.
\end{equation}
\end{itemize}
\end{corollary}
\begin{proof} For part (i), we observe from the Peter-Weyl theorem that there are enough continuous unitary representations of $K$ to separate points, and the claim now follows from Lemma \ref{embed} and Lemma \ref{subspace-red}.
For part (ii), we observe from the Peter--Weyl theorem that the characters $\hat k \colon K \to \T$ for $\hat k \in \hat K$ separate points, so by Lemma \ref{embed} we see that $\iota(K)$ is a closed subgroup of $\T^{\hat K}$ and that $\iota \colon K \to \iota(K)$ is an isomorphism of topological groups, and from Lemma \ref{subspace-red} we see that $\Baire(K)$ is generated by the characters $\hat k \in \hat K$. As $K$ is compact, the Pontryagin dual $\hat K$ is discrete, and by Pontryagin duality, $K$ can be identified with the space of homomorphisms $\hat k \mapsto \theta_{\hat k}$ from $\hat K$ to $\T$. This gives the description \eqref{iotak}.
\end{proof}
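To illustrate \eqref{iotak} in the simplest case $K = \T$: the dual group $\hat \T$ can be identified with $\Z$ via the pairing $\langle n, k \rangle = nk$, and a homomorphism from $\Z$ to $\T$ is determined by its value $\theta$ at $1$, so \eqref{iotak} identifies $\iota(\T)$ with the closed subgroup $\{ (n\theta)_{n \in \Z}: \theta \in \T \}$ of $\T^{\Z}$. (Of course, in this metrizable case one also has $\Baire(\T) = \B(\T)$.)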
As a consequence of Corollary \ref{group-desc}, we have
\begin{proposition}[Group operations measurable in Baire $\sigma$-algebra]\label{group-mes} Let $K = (K,\cdot)$ be a compact Hausdorff group. Then the group operations $\cdot: K_\Baire \times K_\Baire \to K_\Baire$ and $()^{-1}: K_\Baire \to K_\Baire$ are measurable. In particular, if $K = (K,+)$ is a compact Hausdorff abelian group, then the group operations $+: K_\Baire \times K_\Baire \to K_\Baire$ and $-: K_\Baire \to K_\Baire$ are measurable.
\end{proposition}
\begin{proof} By Corollary \ref{group-desc}(i) and Lemma \ref{subspace-red}, we may view $K_\Baire$ as a closed subgroup of a product $S_A$ of unitary groups, with $\Baire(K)$ generated by the coordinate projections. The group operations on $S_A$ act coordinate-wise; since each unitary group $S_\alpha$ is a compact metric group, its operations are continuous and hence measurable, and as the product $\sigma$-algebra $\B_A = \bigotimes_{\alpha \in A} \B(S_\alpha)$ is generated by the coordinate projections we conclude that the group operations on $S_A$ are measurable. Restricting to the closed subgroup $K$ then gives the claim.
\end{proof}
\begin{remark}[Nedoma pathology]\label{red-counter} Let $K$ be the non-metrizable compact Hausdorff abelian group $K = \T^\R$, and let $K^\Delta \subset K \times K$ be the diagonal closed subgroup $K^\Delta = \{ (k,k): k \in K \}$. By \emph{Nedoma's pathology} \cite{nedoma1957note}, $K^\Delta$ is not measurable in $\B(K) \otimes \B(K)$. Indeed, $\B(K) \otimes \B(K)$ consists of the union of $\B_1 \otimes \B_2$ as $\B_1, \B_2$ range over countably generated sub-$\sigma$-algebras of $\B(K)$. If $K^\Delta$ were in $\B(K) \otimes \B(K)$, we would conclude on taking slices that all the singletons $\{k\}$, $k \in K$, lie in a single countably generated sub-$\sigma$-algebra of $\B(K)$; but the latter has cardinality at most $2^{\aleph_0}$, while $K$ has cardinality $2^{2^{\aleph_0}}$, leading to a contradiction. This shows that $\B(K) \otimes \B(K) \neq \B(K \times K)$, and also shows that in Lemma \ref{subspace-red} $K$ need not be measurable in $S_A$. Also, by comparing this situation with Proposition \ref{group-mes}, we conclude that $\B(K) \neq \Baire(K)$ in this case. This can also be seen directly: $\Baire(K)$ is the product $\sigma$-algebra on $\T^\R$, which is also equal to the union of the pullbacks of the $\sigma$-algebras of $\T^I$ for all countable subsets $I$ of $\R$. In particular a single point in $K$ will not be measurable in $\Baire(K)$, even though it is clearly measurable in $\B(K)$.
\end{remark}
\section{A conditional Pontryagin duality theorem}\label{pont-sec}
Throughout this section, $X = (X,\X,\mu)$ denotes a measure space; to avoid some degeneracies we will assume in this section that $X$ has positive measure. We will use the abstract measurable space $X_\mu$ as a base space for the formalism of conditional set theory and conditional analysis, as laid out in \cite{drapeau2016algebra} (although as it turns out we will not need to draw upon the full power\footnote{For instance, we will not utilize the (measurable) topos-theoretic ability, which is powered by the completeness of the Boolean algebra $\X_\mu$ (which is equivalent to $X_\mu$ being decomposable (e.g., if $(X,\X,\mu)$ is $\sigma$-finite), an assumption we will not need in our analysis), to glue together different conditional objects along a partition of the base space $X_\mu$, which allows one to develop in particular a theory of conditional metric spaces and conditional topology.} of this theory in this paper). In this formalism, many familiar objects such as numbers, sets, and functions will have ``conditional'' analogues which vary ``measurably'' with the base space $X_\mu$; to avoid confusion, we will then use the term ``classical'' to refer to the original versions of these concepts. Thus for instance we will have classical real numbers and conditional real numbers, classical functions and conditional functions, and so forth. The adjectives ``classical'' and ``conditional'' in this formalism are analogous to the adjectives ``deterministic'' and ``random'' in probability theory (for instance the latter theory deals with both deterministic real numbers and random real variables). Our ultimate objective of this section is to obtain a conditional analogue of the Pontryagin duality identity \eqref{iotak}.
We begin with some basic definitions.
\begin{definition}[Conditional spaces] If $Y = (Y,\Y)$ is any concrete measurable space, we define the \emph{conditional analogue}
$\Cond(Y) = \Cond_{X_\mu}(Y)$ of $Y$ to be the space $\Cond(Y) \coloneqq \Hom(X_\mu \to Y)$. Elements of $\Cond(Y)$ will be referred to as \emph{conditional elements} of $Y$. Thus for instance elements of $\Cond(\R) = \Hom(X_\mu \to \R)$ are conditional reals, and elements of $\Cond(\N) = \Hom(X_\mu \to \N)$ are conditional natural numbers. Every (classical) element $y \in Y$ gives rise to a constant abstract measurable map $\Cond(y) \in \Cond(Y)$, defined by setting $\Cond(y)^* A = 1_{y \in A}$ for $A \in \Y$ (where the indicator $1_{y \in A}$ is interpreted as taking values in the $\sigma$-complete Boolean algebra $\X_\mu$). We will usually abuse notation by referring\footnote{This is analogous to how a constant function $x \mapsto c$ that takes a fixed value $c \in Y$ for all inputs $x \in X$ is often referred to (by abuse of notation) as $c$. Strictly speaking, in order for the identification of $y$ with $\Cond(y)$ to be injective, $\Y$ needs to separate points (i.e., for any distinct $y,y'$ in $Y$ there exists $A \in \Y$ that contains $y$ but not $y'$), but we will ignore this subtlety when abusing notation in this manner.} to $\Cond(y)$ simply as $y$.
We also define $\Cond_w(Y) \coloneqq \Hom_w(X_\mu \to Y)$, and refer to elements of $\Cond_w(Y)$ as \emph{weak conditional elements} of $Y$. Thus we have $Y \subset \Cond(Y) \subset \Cond_w(Y)$.
\end{definition}
Thus for instance if $\rho = (\rho_\gamma)_{\gamma \in \Gamma}$ is an abstract $K$-valued cocycle, then each $\rho_\gamma$ is a conditional element of $K_\Baire$.
As discussed in the introduction, every concrete measurable map $f: X \to Y$ into a concrete measurable space $Y$ gives rise to a conditional element $[f] \in \Cond(Y)$. In the case that $Y$ is a Polish space, this is an equivalence:
\begin{proposition}[Conditional elements of compact metric or Polish spaces]\label{metr} Let $K$ be a Polish space. Then every conditional element $k \in \Cond(K)$ has a realization by a concrete measurable map $f: X \to K$, unique up to $\mu$-almost everywhere equivalence.
If furthermore $K$ is a compact metric space, then one can also replace ``conditional element $k \in \Cond(K)$'' by ``weak conditional element $k \in \Cond_w(K)$'' in the above claim. In particular we have $\Cond_w(K) = \Cond(K)$ in this case.
\end{proposition}
Note that Example \ref{ultra} shows that the hypothesis of compactness cannot be omitted from the second part of the proposition.
\begin{proof} Since $X$ has positive measure, $X_\mu$ is non-trivial, and hence we may assume $K$ is non-empty (since otherwise there are no conditional elements of $K$).
First suppose that $K$ is Polish. We may endow $K$ with a complete metric $d$. The space $K$ is separable, and hence for every $n \in \N$ there exists a measurable ``rounding map'' $f_n: K \to S_n$ to an at most countable subset $S_n$ of $K$ with the property that $d(k', f_n(k')) \leq \frac{1}{n}$ for all $k' \in K$. If $k \in \Cond(K)$, then $f_n(k) \in \Cond(S_n)$. By taking representatives of the preimages $(f_n(k))^* \{s\}$ for each $s \in S_n$, we can find a realization $F_n: X \to S_n$ of $f_n(k)$. Since $d(f_n(k'),f_m(k')) \leq \frac{1}{n}+\frac{1}{m}$ for all $n,m \in \N$ and $k' \in K$, we have $d(F_n(x), F_m(x)) \leq \frac{1}{n} + \frac{1}{m}$ for each $n,m \in \N$ and $\mu$-almost every $x \in X$. Thus the sequence of measurable functions $F_n: X \to K$ is almost everywhere Cauchy, and thus (see e.g., \cite[Lemmas 1.10, 4.6]{kallenberg2002}) converges $\mu$-almost everywhere to a measurable limit $F: X \to K$. From this it is a routine matter to verify that $[F]=k$, giving existence. For uniqueness, suppose that $F,G: X \to K$ are two measurable maps with $[F]=[G]$, thus $F^* E$ differs by a null set from $G^* E$ for every measurable subset $E$ of $K$. If $F$ is not equal almost everywhere to $G$, then $d(F,G) > 0$ on a set of positive measure, and then by the second countable nature of $K$ we may find a ball $B$ for which $F^* B$ and $G^* B$ differ by a set of positive measure, a contradiction. Thus $F$ is equal to $G$ $\mu$-almost everywhere as claimed.
Now suppose furthermore that $K$ is a compact metric space. Then $K$ is now totally bounded, and we can make the sets $S_n$ in the above argument finite rather than countable. If $k \in \Cond_w(K)$, then $f_n(k) \in \Cond_w(S_n)$, but as $S_n$ is now finite, $\Cond_w(S_n) = \Cond(S_n)$. The arguments now proceed as before.
\end{proof}
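In particular, for any Polish space $K$ the map $f \mapsto [f]$ identifies $L^0(X \to K)$ with $\Cond(K)$; thus, for instance, a conditional real number is nothing more than a classical measurable function from $X$ to $\R$, considered up to $\mu$-almost everywhere equivalence.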
Now we look at conditional elements of arbitrary products $\prod_{\alpha \in A} S_\alpha = (\prod_{\alpha \in A} S_\alpha, \bigotimes_{\alpha \in A} \Scal_\alpha)$ of Polish spaces $S_\alpha = (S_\alpha, \Scal_\alpha)$. Here, as is usual, $\prod_{\alpha \in A} S_\alpha$ is the Cartesian product, and the product $\sigma$-algebra $\bigotimes_{\alpha \in A} \Scal_\alpha$ is the minimal $\sigma$-algebra that makes all the projection maps $\pi_\beta \colon \prod_{\alpha \in A} S_\alpha \to S_\beta$ measurable for $\beta \in A$. We have the following fundamentally important identity:
\begin{proposition}[Conditional elements of product spaces]\label{product} Let $(S_\alpha)_{\alpha \in A}$ be a family of Polish spaces $S_\alpha = (S_\alpha,\Scal_\alpha)$. Then one has the equality
$$ \Cond\left(\prod_{\alpha \in A} S_\alpha\right) = \prod_{\alpha \in A} \Cond(S_\alpha)$$
formed by identifying each conditional element $f$ of $\prod_{\alpha \in A} S_\alpha$ with the tuple $(\pi_\alpha \circ f)_{\alpha \in A}$.
\end{proposition}
\begin{proof} It is clear that if $f \in \Cond(\prod_{\alpha \in A} S_\alpha)$ then $(\pi_\alpha \circ f)_{\alpha \in A}$ lies in $\prod_{\alpha \in A} \Cond(S_\alpha)$. Now suppose that $(f_\alpha)_{\alpha \in A}$ is an element of $\prod_{\alpha \in A} \Cond(S_\alpha)$. By Proposition \ref{metr}, for each $\alpha \in A$ we can find a concrete measurable map $\tilde f_\alpha: X \to S_\alpha$ such that $f_\alpha = [\tilde f_\alpha]$. Let $\tilde f: X \to \prod_{\alpha \in A} S_\alpha$ be the map
$$ \tilde f(x) \coloneqq (\tilde f_\alpha(x))_{\alpha \in A},$$
then $\tilde f$ is a concrete measurable map. Set $f \coloneqq [\tilde f]$, then $f \in \Cond\left(\prod_{\alpha \in A} S_\alpha\right)$. By chasing all the definitions we see that $(\pi_\alpha \circ f)^* E = f_\alpha^* E$ for any $E \in \Scal_\alpha$, hence $(f_\alpha)_{\alpha \in A} = (\pi_\alpha \circ f)_{\alpha \in A}$.
It remains to show that each tuple $(f_\alpha)_{\alpha \in A}$ is associated to at most one $f \in \Cond(\prod_{\alpha \in A} S_\alpha)$. Suppose that $f,g \in \Cond(\prod_{\alpha \in A} S_\alpha)$ are such that $\pi_\alpha \circ f = \pi_\alpha \circ g$ for all $\alpha \in A$. Then we have $f^* E = g^* E$ for all generating elements $E$ of the product $\sigma$-algebra $\bigotimes_{\alpha \in A} \Scal_\alpha$. As $f^*, g^*$ are both $\sigma$-complete Boolean homomorphisms, the collection of sets $E$ for which $f^* E = g^* E$ is a $\sigma$-algebra containing these generating elements, and is therefore all of $\bigotimes_{\alpha \in A} \Scal_\alpha$; thus $f^* = g^*$ and hence $f=g$, giving the claim.
\end{proof}
The hypothesis that $S_\alpha$ are Polish cannot be relaxed to arbitrary concrete measurable spaces, even when considering products of just two spaces; see Proposition \ref{counter-prop}.
If $f \colon Y \to Z$ is a (classical) concrete measurable map between two concrete measurable spaces $Y,Z$, then we can define the conditional analogue $\Cond(f) \colon \Cond(Y) \to \Cond(Z)$ of this function by the formula
$$ \Cond(f)(y) \coloneqq f \circ y$$
for $y \in \Cond(Y)$; we can similarly define $\Cond_w(f) \colon \Cond_w(Y) \to \Cond_w(Z)$ by the formula
$$ \Cond_w(f)(y) \coloneqq f \circ y$$
for $y \in \Cond_w(Y)$. By chasing the definitions, we also observe the functoriality property
\begin{equation}\label{functor}
\Cond(g \circ f) = \Cond(g) \circ \Cond(f)
\end{equation}
whenever $f \colon Y \to Z$, $g \colon Z \to W$ are classical concrete measurable maps between concrete measurable spaces $Y,Z,W$; using the identification from Proposition \ref{product} we also have the identity
\begin{equation}\label{condpair}
(\Cond(f_1), \Cond(f_2)) = \Cond((f_1,f_2))
\end{equation}
for any classical concrete measurable maps $f_1 \colon K \to S_1$, $f_2 \colon K \to S_2$ from a measurable space $K$ to Polish spaces $S_1, S_2$, and more generally
\begin{equation}\label{cond-tuple}
(\Cond(f_\alpha))_{\alpha \in A} = \Cond((f_\alpha)_{\alpha \in A})
\end{equation}
whenever $f_\alpha \colon K \to S_\alpha$, $\alpha \in A$, are classical concrete measurable maps from a measurable space $K$ to Polish spaces $S_\alpha$.
If $S$ is a concrete measurable space and $K$ is a (possibly non-measurable) subset of $S$, then the measurable space structure on $S$ induces one on $K$ by restricting all the measurable sets of $S$ to $K$. The inclusion map $\iota: K \to S$ is then measurable, and thus $\Cond(\iota)$ is a conditional map from $\Cond(K)$ to $\Cond(S)$, which is easily seen to be injective; thus (by abuse of notation) we can view $\Cond(K)$ as a subset of $\Cond(S)$. One can then ask for a description of this subset. We can answer this in two cases:
\begin{proposition}[Description of $\Cond(K)$]\label{condk-desc} Let $S = (S, \mathcal{S})$ be a concrete measurable space, let $K$ be a subset of $S$ with the induced measurable space structure $(K,\mathcal{K})$, and view $\Cond(K)$ as a subset of $\Cond(S)$ as indicated above.
\begin{itemize}
\item[(i)] If $K$ is measurable in $S$, then $\Cond(K)$ consists of those conditional elements $s \in \Cond(S)$ of $S$ such that $s^* K=1$.
\item[(ii)] If $S = S_A = \prod_{\alpha \in A} S_\alpha$ is the product of compact metric spaces $S_\alpha$ with the product $\sigma$-algebra, and $K$ is a closed (but not necessarily measurable) subset of $S_A$, then $\Cond(K)$ consists of those conditional elements $s_A \in \Cond(S_A)$ of $S_A$ such that $s_A^* \pi_I^{-1}(\pi_I(K)) = 1$ for all at most countable $I \subset A$, where $\pi_I: S_A \to S_I$ is the projection to the product $S_I \coloneqq \prod_{i \in I} S_i$.
\end{itemize}
\end{proposition}
\begin{proof}
For part (i), it is clear that if $k \in \Cond(K)$ then $k^* K=1$. Conversely, if $s^* K=1$, then $s^* K^c=0$, and hence $s^* E = s^* F$ whenever $E,F$ are measurable subsets of $S$ that agree on $K$ (since $s^*(E \cap K^c) = s^*(F \cap K^c) = 0$). Thus the $\sigma$-complete Boolean homomorphism $s^*: \mathcal{S} \to \X_\mu$ descends to a $\sigma$-complete Boolean homomorphism on $\mathcal{K}$, so that $s \in \Cond(K)$ as claimed.
Now we prove part (ii). If $k \in \Cond(K)$ and $I \subset A$ is at most countable, then the image $\pi_I(K)$ is a compact subset of the metrizable space $S_I$, and is hence measurable in $S_I$; this also implies that $\pi_I^{-1}(\pi_I(K))$ is measurable in $S_A$. Observe that $\Cond(\pi_I)(k)$ is an element of $\Cond(\pi_I(K))$, hence by (i) we have $\Cond(\pi_I)(k)^* \pi_I(K) = 1$, and hence $k^*( \pi_I^{-1}(\pi_I(K))) = 1$.
Conversely, assume that $s_A \in \Cond(S_A)$ is such that $s_A^* \pi_I^{-1}(\pi_I(K)) = 1$ for all at most countable $I \subset A$. Let $E$ be a measurable subset of $S_A$ that is disjoint from $K$. The product $\sigma$-algebra $\bigotimes_{\alpha \in A} \B(S_\alpha)$ is equal to the union of the pullbacks $\pi_I^*(\bigotimes_{i \in I} \B(S_i))$ as $I$ ranges over countable subsets of $A$ (since the latter is a $\sigma$-algebra contained in the former that contains all the generating sets). Thus there exists an at most countable $I$ such that $E = \pi_I^{-1}(E_I)$ for some measurable subset $E_I$ of $S_I$. Since $E$ is disjoint from $K$, $E_I$ is disjoint from $\pi_I(K)$, hence $E$ is disjoint from $\pi_I^{-1}(\pi_I(K))$. Since $s_A^* \pi_I^{-1}(\pi_I(K))=1$, we conclude that $s_A^* E=0$ for all measurable $E$ disjoint from $K$. Thus $s_A^* E = s_A^* F$
whenever $E,F$ are measurable subsets of $S_A$ that agree on $K$, and by arguing as in (i) we conclude that $s_A \in \Cond(K)$, giving (ii).
\end{proof}
As a corollary we have the following variant of Proposition \ref{product}:
\begin{corollary}[Conditional elements of product spaces, II]\label{square} Let $K, K'$ be compact Hausdorff spaces. Then $\Cond(K_\Baire \times K'_\Baire) = \Cond(K_\Baire) \times \Cond(K'_\Baire)$.
\end{corollary}
The proof given below extends (with only minor notational changes) to arbitrary products of compact Hausdorff spaces, not just to products of two spaces, but the latter case is the only one we need in this paper. We also give a generalization of Corollary \ref{square} in Proposition \ref{half-square}, in the case that $X$ is a probability space.
\begin{proof} By Theorem \ref{weil-thm} and Lemma \ref{subspace-red}, we may assume $K_\Baire$ is a subspace of a product $S_A = \prod_{\alpha \in A} S_\alpha$ of compact metric spaces $S_\alpha$, with the $\sigma$-algebra induced from the product $\sigma$-algebra, and similarly that $K'_\Baire$ is a subspace of $S'_{A'} = \prod_{\alpha \in A'} S'_\alpha$. From Proposition \ref{condk-desc}(ii), $\Cond(K_\Baire)$ consists of those elements $s_A \in \Cond(S_A)$ such that $s_A^* \pi_I^{-1}(\pi_I(K)) = 1$ for all at most countable $I \subset A$. Similarly for $\Cond(K'_\Baire)$. From Lemma \ref{subspace-red} we have $K_\Baire \times K'_\Baire = (K \times K')_\Baire$, and from Proposition \ref{product} we have $\Cond(S_A \times S'_{A'}) = \Cond(S_A) \times \Cond(S'_{A'})$, so by a second application of Proposition \ref{condk-desc} we see that $\Cond(K_\Baire \times K'_\Baire)$ consists of those elements $(s_A, s'_{A'}) \in \Cond(S_A) \times \Cond(S'_{A'})$ such that
$$ (s_A, s'_{A'})^* ( \pi_I^{-1}(\pi_I(K)) \times \pi_{I'}^{-1}(\pi_{I'}(K')) ) = s_A^* \pi_I^{-1}(\pi_I(K)) \wedge (s'_{A'})^* \pi_{I'}^{-1}(\pi_{I'}(K')) = 1$$
for all at most countable $I \subset A, I' \subset A'$. The claim follows.
\end{proof}
We can use conditional analogues of classical functions to generate various operations on conditional elements of concrete measurable spaces. For instance, suppose we have two conditional real numbers $x,y \in \Cond(\R)$. Then we can define their sum $x+y \in \Cond(\R)$ by the formula
\begin{equation}\label{plus-def}
x+y = \Cond(+)(x,y)
\end{equation}
where we use Proposition \ref{product} to view $(x,y)$ as an element of $\Cond(\R^2)$, and $+: \Cond(\R^2) \to \Cond(\R)$ is the conditional analogue of the classical addition map $+: \R^2 \to \R$. Similarly for the other arithmetic operations; one then easily verifies using \eqref{functor}, \eqref{condpair} that the space $\Cond(\R)$ of conditional real numbers has the structure of a real unital commutative algebra. This is analogous to the more familiar fact that $L^0(X \to \R)$ is also a real unital commutative algebra. A similar argument (using Proposition \ref{group-mes} and Corollary \ref{square}) shows that if $K$ is a compact Hausdorff group then $\Cond(K_\Baire)$ is also a group, which will be abelian if $K$ is abelian, and the group operations are conditional functions in the sense given in \cite{drapeau2016algebra}.
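For instance, the commutativity of conditional addition on $\Cond(\R)$ can be deduced from its classical counterpart as follows. Let $\sigma \colon \R^2 \to \R^2$ denote the coordinate swap $\sigma(a,b) \coloneqq (b,a)$, so that $+ \circ \sigma = +$ as classical maps. For conditional reals $x, y \in \Cond(\R)$ one has $\Cond(\sigma)(x,y) = (y,x)$ (as can be checked by composing with the coordinate projections and using Proposition \ref{product}), and hence by \eqref{functor}
$$ y + x = \Cond(+)(y,x) = \Cond(+)\left( \Cond(\sigma)(x,y) \right) = \Cond(+ \circ \sigma)(x,y) = \Cond(+)(x,y) = x+y.$$
The remaining algebra axioms can be verified in a similar fashion.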
Now we can give a conditional analogue of the Pontryagin duality relationship \eqref{iotak}.
\begin{theorem}[Conditional Pontryagin duality]\label{cond-pont} Let $K$ be a compact Hausdorff abelian group, and let $\iota \colon K_\Baire \to \T^{\hat K}$ be the map
$$ \iota(k) \coloneqq ( \langle \hat k, k \rangle )_{\hat k \in \hat K}.$$
Then
\begin{equation}\label{iotak-cond} \Cond(\iota)(\Cond(K_\Baire)) = \{ (\theta_{\hat k})_{\hat k \in \hat K} \in \Cond(\T)^{\hat K}: \theta_{\hat k_1 + \hat k_2} = \theta_{\hat k_1} + \theta_{\hat k_2} \forall \hat k_1, \hat k_2 \in \hat K \}
\end{equation}
where we use Proposition \ref{product} to identify $\Cond(\T^{\hat K})$ with $\Cond(\T)^{\hat K}$. Also, $\Cond(\iota): \Cond(K_\Baire) \to \Cond(\T^{\hat K})$ is injective.
\end{theorem}
\begin{proof} For all $\hat k_1, \hat k_2 \in \hat K$, we have from the definition of the group structure on $\hat K$ that
$$ \langle \hat k_1 + \hat k_2, k \rangle = \langle \hat k_1, k \rangle + \langle \hat k_2, k \rangle$$
for all classical elements $k \in K_\Baire$. All expressions here are measurable in $k$, so the identity also holds for conditional elements $k \in \Cond(K_\Baire)$ (where by abuse of notation we write $\Cond(\langle \hat k, \cdot \rangle)$ simply as $\langle \hat k, \cdot \rangle$ for any $\hat k \in \hat K$). From this we see that if $k \in \Cond(K_\Baire)$ then $\Cond(\iota)(k)$ lies in the set in the right-hand side of \eqref{iotak-cond}.
Now we establish the converse inclusion. By Corollary \ref{group-desc}(ii), $\iota$ is a measurable space isomorphism between $K_\Baire$ and $\iota(K)$ (where the latter is given the measurable space structure induced from $\T^{\hat K}$). Thus $\Cond(\iota)$ is injective and $\Cond(\iota)(\Cond(K_\Baire)) = \Cond(\iota(K))$. Let $\theta = (\theta_{\hat k})_{\hat k \in \hat K}$ be an element of the right-hand side of \eqref{iotak-cond}; we need to show that $\theta \in \Cond(\iota(K))$. By Proposition \ref{condk-desc}(ii), it suffices to show that $\theta^* \pi_I^{-1}(\pi_I(\iota(K))) = 1$ for all at most countable $I \subset \hat K$. By replacing $I$ with the group generated by $I$, which is still at most countable, it suffices to do so in the case when $I$ is an at most countable subgroup of $\hat K$.
Let $K_I \subset \T^I$ denote the group of homomorphisms from $I$ to $\T$, thus
$$ K_I = \{ (\xi_i)_{i \in I} \in \T^I: \xi_{i+j} = \xi_i + \xi_j \forall i,j \in I \}.$$
This is a closed subgroup of $\T^I$. Because $\T$ is a divisible abelian group, we see from Zorn's lemma that every homomorphism from $I$ to $\T$ can be extended to a homomorphism from $\hat K$ to $\T$, thus $K_I = \pi_I(\iota(K))$. Since $K_I$ is the intersection of the countably many measurable sets $\{ (\xi_i)_{i \in I} \in \T^I: \xi_{i+j} = \xi_i + \xi_j \}$, $i,j \in I$, the hypotheses on $\theta$ and the $\sigma$-completeness of the pullback map $(\theta_i)_{i \in I}^*$ give $(\theta_i)_{i \in I}^* K_I=1$; in particular, by Proposition \ref{condk-desc}(i), $(\theta_i)_{i \in I}$ is a conditional element of $K_I$, and hence
$$ \theta^* \pi_I^{-1}(\pi_I(\iota(K))) = \theta^* \pi_I^{-1}(K_I) = (\theta_i)_{i \in I}^* K_I = 1$$
giving the claim.
\end{proof}
\section{Proof of the uncountable Moore-Schmidt theorem}\label{uncountable-sec}
We now have enough tools to prove Theorem \ref{mstu}, by modifying the argument sketched in the introduction to prove Theorem \ref{mst}. We may assume that the space $X$ has positive measure, since if $X$ has zero measure then every abstract cocycle is trivially an abstract coboundary.
Let $\Gamma$ be a discrete group acting abstractly on the opposite measure algebra $X_\mu$ of an arbitrary measure space, and let $K$ be a compact Hausdorff abelian group. If $\rho = (\rho_\gamma)_{\gamma \in \Gamma}$ is an abstract $K$-valued coboundary, then by definition there exists $F \in \Cond( K_\Baire )$ such that
$$ \rho_\gamma = F \circ T^\gamma - F $$
for all $\gamma \in \Gamma$, hence for each $\hat k \in \hat K$ we have
$$ \langle \hat k, \rho_\gamma\rangle = \langle \hat k, F\rangle \circ T^\gamma - \langle \hat k, F\rangle$$
for all $\gamma \in \Gamma$. Thus each $\langle \hat k, \rho \rangle$ is an abstract $\T$-valued coboundary.
Conversely, suppose that for each $\hat k \in \hat K$, $\langle \hat k, \rho \rangle$ is an abstract $\T$-valued coboundary; thus we may find $\alpha_{\hat k} \in \Cond(\T)$ such that
\begin{equation}\label{hkr}
\langle \hat k, \rho_\gamma\rangle = \alpha_{\hat k} \circ T^\gamma - \alpha_{\hat k}
\end{equation}
for all $\hat k \in \hat K$ and $\gamma \in \Gamma$. If $\hat k_1, \hat k_2 \in \hat K$, then we have
$$ \langle \hat k_1 + \hat k_2, \rho_\gamma \rangle = \langle \hat k_1, \rho_\gamma \rangle + \langle \hat k_2, \rho_\gamma \rangle$$
which when combined with \eqref{hkr} and rearranging gives
$$ c(\hat k_1, \hat k_2) \circ T^\gamma = c(\hat k_1, \hat k_2)$$
where $c(\hat k_1, \hat k_2) \in \Cond(\T)$ is the conditional torus element
\begin{equation}\label{ckk}
c(\hat k_1, \hat k_2) \coloneqq \alpha_{\hat k_1 + \hat k_2} - \alpha_{\hat k_1} - \alpha_{\hat k_2}.
\end{equation}
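Explicitly, since precomposition with $T^\gamma$ respects the group operations on $\Cond(\T)$, the identity \eqref{hkr} together with the additivity of the characters gives
$$ c(\hat k_1, \hat k_2) \circ T^\gamma - c(\hat k_1, \hat k_2) = \langle \hat k_1 + \hat k_2, \rho_\gamma \rangle - \langle \hat k_1, \rho_\gamma \rangle - \langle \hat k_2, \rho_\gamma \rangle = 0.$$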
Thus, if we define the invariant subgroup
$$ \Cond(\T)^\Gamma \coloneqq \{ \theta \in \Cond(\T): \theta \circ T^\gamma = \theta \; \forall \gamma \in \Gamma \}$$
of $\Cond(\T)$, then we have $c(\hat k_1, \hat k_2) \in \Cond(\T)^\Gamma$ for all $\hat k_1, \hat k_2 \in \hat K$.
We now claim that $\Cond(\T)^\Gamma$ is a divisible abelian group; thus for any $\theta \in \Cond(\T)^\Gamma$ and $n \in \N$, we claim that there exists $\beta \in \Cond(\T)^\Gamma$ such that $n\beta = \theta$. But one can easily construct a concrete measurable map $g_n \colon \T \to \T$ such that $n g_n(\theta) = \theta$ for all $\theta \in \T$ (for instance, one can set $g_n(x \mod \Z) \coloneqq \frac{x}{n} \mod \Z$ for $0 \leq x < 1$), and the claim then follows by setting $\beta \coloneqq \Cond(g_n)(\theta)$, noting that $\beta$ is again $\Gamma$-invariant since $\beta \circ T^\gamma = (g_n \circ \theta) \circ T^\gamma = g_n \circ (\theta \circ T^\gamma) = \beta$ for all $\gamma \in \Gamma$.
Since $\Cond(\T)^\Gamma$ is a divisible abelian subgroup of $\Cond(\T)$, we see from Zorn's lemma that there exists a retract homomorphism $w: \Cond(\T) \to \Cond(\T)^\Gamma$ (a homomorphism that is the identity on $\Cond(\T)^\Gamma$); see e.g. \cite[p.~46--47]{halmos2013lectures}. For each $\hat k \in \hat K$,
let $\tilde \alpha_{\hat k} \in \Cond(\T)$ denote the conditional torus element
\begin{equation}\label{tdiff}
\tilde \alpha_{\hat k} \coloneqq \alpha_{\hat k} - w(\alpha_{\hat k}).
\end{equation}
Applying $w$ to both sides of \eqref{ckk} and subtracting, we conclude that
\begin{equation}\label{hka}
0 = \tilde \alpha_{\hat k_1 + \hat k_2} - \tilde \alpha_{\hat k_1} - \tilde \alpha_{\hat k_2}
\end{equation}
for all $\hat k_1, \hat k_2 \in \hat K$. By Theorem \ref{cond-pont}, we conclude that $(\tilde \alpha_{\hat k})_{\hat k \in \hat K}$ lies in $\Cond(\iota)(\Cond(K_\Baire))$, that is to say there exists $F \in \Cond(K_\Baire)$ such that
$$ \tilde \alpha_{\hat k} = \langle \hat k, F \rangle$$
for all $\hat k \in \hat K$. On the other hand, from \eqref{hkr}, \eqref{tdiff}, and the $\Gamma$-invariance of $w(\alpha_{\hat k})$, we have
$$ \langle \hat k, \rho_\gamma\rangle = \tilde \alpha_{\hat k} \circ T^\gamma - \tilde \alpha_{\hat k}
$$
for all $\hat k \in \hat K$ and $\gamma \in \Gamma$, and hence
\begin{equation}\label{hkb}
\langle \hat k, \rho_\gamma - (F \circ T^\gamma - F) \rangle = 0
\end{equation}
for all $\hat k \in \hat K$ and $\gamma \in \Gamma$. Applying the injectivity claim of Theorem \ref{cond-pont}, we conclude that
$$ \rho_\gamma - (F \circ T^\gamma - F) = 0$$
for all $\gamma \in \Gamma$, and so $\rho$ is an abstract $K$-valued coboundary as required.
\section{Representing conditional elements of a space}\label{condrep-sec}
Throughout this section $X$ is assumed to be a measure space of positive measure.
If $Y = (Y,\Y)$ is a concrete measurable space, and $f: X \to Y$ is a concrete measurable map, then the abstraction $[f] \in \Hom(X_\mu \to Y)= \Cond(Y)$ defined in the introduction is a conditional element of $Y$, and can be defined explicitly as
$$ [f]^* E = [f^* E]$$
for $E \in \Y$, where $[f^* E] \in \X_\mu$ is the abstraction of $f^* E \in \X$ in $\X_\mu$. Thus for instance $\Cond(c)$ is the abstraction of the constant function $x \mapsto c$ for all $c \in Y$. It is clear that if $f,g: X \to Y$ are concrete measurable maps that agree $\mu$-almost everywhere, then $[f]=[g]$. However, the converse is not true. One trivial example occurs when $\Y$ fails to separate points:
\begin{example}[Non-uniqueness of realizations, I] Let $Y = \{1,2\}$ with the trivial $\sigma$-algebra $\Y = \{ \emptyset, Y \}$. Then the constant concrete measurable maps $1$ and $2$ from $X$ to $Y$ are such that $[1]=[2]$, but $1$ is not equal to $2$ almost everywhere (since $X$ has positive measure).
\end{example}
However, there are also counterexamples when $\Y$ does separate points, as the following example shows:
\begin{example}[Non-uniqueness of realizations, II]\label{bad-ex} Let $X = [0,1]$ with Lebesgue measure $\mu$, and let $Y \coloneqq \{0,1\}^{[0,1]}$ with the product $\sigma$-algebra. Set $f: X \to Y$ to be the function
$$ f(x) \coloneqq ( 1_{x=y} )_{y \in [0,1]}$$
for all $x \in [0,1]$, where the indicator $1_{x=y}$ equals $1$ when $x=y$ and zero otherwise, and let $g: X \to Y$ be the zero function $g(x) \coloneqq 0$. Observe that $f(x) \neq g(x)$ for all $x \in [0,1]$, so $f$ and $g$ are certainly not equal almost everywhere. However, the product $\sigma$-algebra in $Y = \{0,1\}^{[0,1]}$ is the union of the pullbacks of the $\sigma$-algebras on $\{0,1\}^I$ as $I$ ranges over at most countable subsets of $[0,1]$. Thus if $E$ is measurable in $Y$, then $E = \pi_I^{-1}(E_I)$ for some measurable subset $E_I$ of $\{0,1\}^I$, where $\pi_I: \{0,1\}^{[0,1]} \to \{0,1\}^I$ is the projection map. The function $\pi_I \circ f: X \to \{0,1\}^I$ is equal to $\pi_I \circ g = 0$ almost everywhere, thus $f^* E = (\pi_I \circ f)^*(E_I)$ is equal modulo null sets to $g^* E = (\pi_I \circ g)^* E_I$. We conclude that $[f]=[g]$, despite the fact that $f,g$ are not equal almost everywhere.
\end{example}
Note in the above example while $f$ and $g$ do not agree almost everywhere, each component of $f$ agrees with the corresponding component of $g$ almost everywhere, and it is the latter that allows us to conclude that $[f]=[g]$; this can also be derived from Proposition \ref{product}. In particular, this example shows that the analogue of Proposition \ref{product} for the space $L^0(X \to Y)$ of concrete measurable functions modulo almost everywhere equivalence fails.
For certain choices of $Y$, there exist conditional elements $y \in \Cond(Y)$ of $Y$ that are not represented by any concrete measurable map:
\begin{example}[Non-realizability]\label{nonrep} Let $X = \mathrm{pt}$ be a point (with counting measure $\mu$), and let $Y \coloneqq \{0,1\}^{[0,1]} \backslash \{0\}^{[0,1]}$ be the product space $\{0,1\}^{[0,1]}$ with the point $0^{[0,1]}$ removed, endowed with the measurable structure induced from the product $\sigma$-algebra. Observe that the singleton $\{0\}^{[0,1]} = \{0^{[0,1]}\}$ is not measurable in $\{0,1\}^{[0,1]}$ (all the measurable sets in this space are pullbacks of a measurable subset of $\{0,1\}^I$ for some countable $I \subset [0,1]$, and $\{0\}^{[0,1]}$ is not of this form). Hence every measurable subset $E$ of $\{0,1\}^{[0,1]} \backslash \{0\}^{[0,1]}$ has a unique measurable extension $\tilde E$ to $\{0,1\}^{[0,1]}$. Now let $y \in \Cond(Y)$ be the conditional element of $Y$ defined by
$$ y^* E = 1_{0^{[0,1]} \in \tilde E};$$
this is easily seen to be an element of $\Cond(Y)$. However, it does not have any concrete realization $f: X \to Y$. For if we had $y = [f]$, then we must have $1_{0^{[0,1]} \in \tilde E} = 1_{f(0) \in E}$ for every measurable subset $E$ of $\{0,1\}^{[0,1]}$. But $f(0) \in Y$ must have at least one coefficient equal to $1$, and is thus contained in a cylinder set $E$ whose extension $\tilde E$ does not contain $0^{[0,1]}$, a contradiction.
\end{example}
Nevertheless, we are able to locate a number of situations in which conditional elements of $Y$ are represented by concrete measurable maps. From Proposition \ref{metr} we already can do this whenever $Y$ is a Polish space. We can also recover a concrete realization of a (weakly) conditional element of $K_\Baire$ in the case that $K$ is a compact Hausdorff abelian group.
\begin{proposition}[Conditional elements of compact abelian groups]\label{alt} Let $K$ be a compact Hausdorff abelian group. Then every weak conditional element $k \in \Cond_w(K_\Baire)$ has a realization by a concrete measurable map $f: X \to K_\Baire$. In particular $\Cond_w(K_\Baire)=\Cond(K_\Baire)$ in this case.
\end{proposition}
\begin{proof} Fix $K,k$. Then $\langle \hat k, k \rangle \in \Cond_w(\T)$ for each $\hat k \in \hat K$ (where by abuse of notation we identify $\langle \hat k, \cdot \rangle$ with $\Cond_w(\langle \hat k, \cdot \rangle))$. We will apply Zorn's lemma (in the spirit of the standard proof of the Hahn-Banach theorem) to the following setup. Define a \emph{partial solution} to be a tuple $(G, (f_g)_{g \in G})$, where
\begin{itemize}
\item $G$ is a subgroup of $\hat K$.
\item For each $g \in G$, $f_g \colon X \to \T$ is a concrete measurable map with $[f_g] = \langle g, k \rangle$.
\item For each $g_1,g_2 \in G$, one has $f_{g_1+g_2}(x) = f_{g_1}(x) + f_{g_2}(x)$ for \emph{every} $x \in X$ (not just $\mu$-almost every $x$).
\end{itemize}
We place a partial order on partial solutions by setting $(G, (f_g)_{g \in G}) \leq (G', (f'_{g'})_{g' \in G'})$ if $G \leq G'$ and $f_g = f'_g$ for all $g \in G$. Since $(\{0\}, (0))$ is a partial solution, and every chain of partial solutions has an upper bound, we see from Zorn's lemma that there exists a maximal partial solution $(G, (f_g)_{g \in G})$. We claim that $G$ is all of $\hat K$. Suppose this is not the case, then we can find an element $\hat k$ of $\hat K$ that lies outside of $G$. There are two cases, depending on whether $n \hat k \in G$ for some natural number $n$.
First suppose that $n \hat k \not \in G$ for all $n \in \N$. By Proposition \ref{metr}, we can find a concrete measurable map $f_{\hat k} \colon X \to \T$ such that $[f_{\hat k}] = \langle \hat k, k \rangle$. We then define $f_{n \hat k + g} \colon X \to \T$ for all $n \in \Z \backslash \{0\}$ and $g \in G$ by the formula
\begin{equation}\label{nkform}
f_{n \hat k + g}(x) \coloneqq n f_{\hat k}(x) + f_g(x).
\end{equation}
If we set
\begin{equation}\label{gp-def}
G' = \{ n \hat k + g: n \in \Z, g \in G \}
\end{equation}
to be the group generated by $\hat k$ and $G$, we can easily check that $(G', (f_{g'})_{g' \in G'})$ is a partial solution that is strictly larger than $(G, (f_g)_{g \in G})$, contradicting maximality.
Now suppose that there is a least natural number $n_0$ such that $n_0 \hat k \in G$. We can find a concrete measurable map $\tilde f_{\hat k} \colon X \to \T$ such that $[\tilde f_{\hat k}] = \langle \hat k, k \rangle$. This map cannot immediately be used as our candidate for $f_{\hat k}$ because it does not necessarily obey the consistency condition $n_0 \tilde f_{\hat k}(x) = f_{n_0 \hat k}(x)$ for all $x \in X$. However, this identity is obeyed for \emph{almost all} $x \in X$. Let $N$ be the null set on which the identity fails. We then set $f_{\hat k}(x)$ to equal $\tilde f_{\hat k}(x)$ when $x \not \in N$ and equal to $g_{n_0}( f_{n_0 \hat k}(x) )$ when $x \in N$, where (as in the previous section) $g_{n_0}: \T \to \T$ is a measurable map for which $n_0 g_{n_0}(\theta) = \theta$ for all $\theta \in \T$. Then $[f_{\hat k}] = [\tilde f_{\hat k}] = \langle \hat k, k \rangle$. If one then defines $f_{n \hat k + g}$ for all $n \in \Z$ and $g \in G$ by the same formula as before, we see that this is a well-defined formula for $f_{g'}$ for all $g'$ in the group \eqref{gp-def}, and that $(G', (f_{g'})_{g' \in G'})$ is a partial solution that is strictly larger than $(G, (f_g)_{g \in G})$, again contradicting maximality. This completes the proof that $G = \hat K$.
By Pontryagin duality \eqref{iotak}, for each $x \in X$ there is a unique element $f(x) \in K$ such that $f_{\hat k}(x) = \langle \hat k, f(x) \rangle$ for all $\hat k \in \hat K$. This gives a map $f: X \to K_\Baire$; as all the maps $\langle \hat k, f \rangle = f_{\hat k}$ are measurable, we see that $f$ is also measurable as the $\sigma$-algebra of $K_\Baire$ is generated by the characters $\hat k$. From Theorem \ref{cond-pont} we see that $[f]=k$, and the claim follows.
\end{proof}
One can ask if the proposition holds for all compact Hausdorff spaces, not just the compact Hausdorff abelian groups. This depends on the properties of the base space $X$. We first treat the case when the base space $X$ is a point:
\begin{lemma}[The case of a point base space]\label{simp} Let $K$ be a compact Hausdorff space and suppose that $X=
\text{pt}$ is a point (equipped with counting measure). Then we have the identity
$$ K = \Cond(K_\Baire) = \Cond_w(K_\Baire),$$
where we identify each element $k \in K$ with its conditional analogue $\Cond(k)$.
\end{lemma}
Note that Example \ref{nonrep} shows that the requirement that $K$ be compact cannot be completely omitted in this lemma.
\begin{proof} From Theorem \ref{weil-thm} we see that any two distinct points $k,k' \in K$ are separated by preimages of disjoint balls with respect to a continuous map $\pi: K \to S$ into a metric space, and hence are also distinct as elements of $\Cond(K_\Baire)$ (or $\Cond_w(K_\Baire)$) as such preimages are measurable. It remains to show that every weak conditional element $k \in \Cond_w(K_\Baire)$ of $K_\Baire$ arises from an element of $K$. By Theorem \ref{weil-thm}, we may assume that $K_\Baire$ is a closed subset of $S_A = \prod_{\alpha \in A} S_\alpha$ for some metric spaces $S_\alpha$, with the product $\sigma$-algebra. For each $\alpha \in A$, let $\pi_\alpha \colon K_\Baire \to S_\alpha$ be the coordinate map, then $\pi_\alpha(k) \in \Cond_w(S_\alpha)$. By Proposition \ref{metr} there is a unique element $s_\alpha \in S_\alpha$ such that $\pi_\alpha(k) = [s_\alpha]$, so in particular $\pi_\alpha(k) \in \Cond(S_\alpha)$. If we set $s \in S_A$ to be the tuple $s \coloneqq (s_\alpha)_{\alpha \in A}$, and identify $s$ with a concrete measurable map from $X$ to $S_A$, then by Proposition \ref{product} we have $k = [s]$, so $k \in \Cond(K_\Baire)$. By Proposition \ref{condk-desc}, this implies that $\pi_I(s) \in \pi_I( K )$ for all countable $I \subset A$, and hence by the closed nature of $K$ we have $s \in K$. Thus $k$ arises from an element of $K$ as required.
\end{proof}
Now we can characterize the spaces $X$ for which concrete realizations always exist:
\begin{proposition}[Characterization of concrete realizability]\label{equiv} For any measure space $X = (X,\X,\mu)$, the following statements are equivalent:
\begin{itemize}
\item[(i)] Every conditional element $k \in \Cond(K_\Baire)$ of a compact Hausdorff space $K$ has a realization by a concrete measurable map $f \colon X \to K$.
\item[(ii)] Every weak conditional element $k \in \Cond_w(K_\Baire)$ of a compact Hausdorff space $K$ has a realization by a concrete measurable map $f \colon X \to K$. (In particular, $\Cond(K_\Baire)=\Cond_w(K_\Baire)$.)
\item[(iii)] There exists a weak retraction map $\pi \in \Hom_w(X \to X_\mu)$ such that $\pi \circ \iota \in \Hom_w(X_\mu \to X_\mu)$ is the (abstract) identity map on $X_\mu$.
\end{itemize}
\end{proposition}
We remark that a strong retraction map $\pi \in \Hom(X \to X_\mu)$ is not expected to exist in general, since this would give a map from the points $\Hom(\mathrm{pt} \to X)$ in $X$ to the points $\Hom(\mathrm{pt} \to X_\mu)$ in $X_\mu$, and the latter set can be empty (see Example \ref{pointless}).
\begin{proof} First suppose that (iii) holds, and let $k \in \Cond_w(K_\Baire)$ be a weak conditional element of a compact Hausdorff space $K$. Then $k \circ \pi \in \Hom_w(X \to K_\Baire)$, and for each point $x \in X$, we have $k \circ \pi \circ x \in \Hom_w(\mathrm{pt} \to K_\Baire)$, where we identify $x$ with the abstract measurable map from a point $\mathrm{pt}$ to $X$ such that $x^* A = 1_{x \in A}$ for $A \in \X$. By Lemma \ref{simp}, each $k \circ \pi \circ x$ can then be uniquely identified with an element $f(x)$ of $K$, giving rise to a map $f \colon X \to K$. By chasing the definitions we see that $f$ is a concrete measurable map with $f^* = \pi^* \circ k^*$, and thus $k = [f]$, giving (ii).
Trivially, (ii) implies (i). Finally, suppose that (i) holds. Let $K \subset \{0,1\}^{\X}$ denote the set of all Boolean algebra homomorphisms $\phi: \X \to \{0,1\}$ to the Boolean algebra $\{0,1\}$ that respect almost everywhere equivalence (that is to say, $\phi(A)=\phi(B)$ whenever $A,B \in \X$ differ by a set of measure zero). This is easily seen to be a closed set (note that the property of being a Boolean algebra homomorphism and respecting almost everywhere equivalence can be written as a collection of assertions that each involve only finitely many values of $\phi$). Let $\Phi: X \to \{0,1\}^\X$ denote the concrete measurable map $\Phi( x ) = ( 1_{x \in A} )_{A \in \X}$. We can check that $[\Phi] \in \Cond(K_\Baire)$. Indeed, by Proposition \ref{condk-desc} it suffices to show that $[\pi_I(\Phi)] \in \Cond(\pi_I(K))$ for any countable $I \subset \X$, where $\pi_I: \{0,1\}^\X \to \{0,1\}^I$ is the projection map. By enlarging $I$ we may assume that $I$ is a Boolean subalgebra of $\X$. From Zorn's lemma we see that any Boolean homomorphism $\phi:I \to \{0,1\}$ that respects almost everywhere equivalence lifts to an element of $K$, and one can check that for almost every $x$, $\pi_I(\Phi(x)): I \to \{0,1\}$ is a Boolean algebra homomorphism that respects almost everywhere equivalence, giving the claim.
By (i), $[\Phi] = [\phi]$ for some concrete measurable map $\phi: X \to K$. Writing $\phi(x) = (\phi(x)(\alpha))_{\alpha \in \X}$, we have for each $x \in X$ that $\alpha \mapsto \phi(x)(\alpha)$ is a Boolean algebra homomorphism from $\X$ to $\{0,1\}$ that respects almost everywhere equivalence; as $[\Phi] = [\phi]$, we also see that for each $\alpha \in \X$, $\phi(x)(\alpha) = 1_{x \in \alpha}$ for almost every $x$. Thus, if we set $\psi(\alpha) \coloneqq \{ x \in X: \phi(x)(\alpha)=1\}$, we see that $\alpha \mapsto \psi(\alpha)$ is a Boolean algebra homomorphism from $\X$ to $\X$ such that $\psi(\alpha)=\psi(\beta)$ whenever $\alpha,\beta$ are almost surely equal, and such that $\psi(\alpha)$ is almost surely equal to $\alpha$ for each $\alpha \in \X$. Hence $\psi$ descends to a Boolean algebra homomorphism $\pi^* \colon \X_\mu \to \X$ with $\iota^* \circ \pi^* \colon \X_\mu \to \X_\mu$ the identity. Then the weakly abstract measurable map $\pi = (\pi^*)^\op \colon X \to X_\mu$ is a retract of $X$ to $X_\mu$, giving (iii).
\end{proof}
\begin{remark} One can interpret the proof of Proposition \ref{equiv} in the category of abstract measurable spaces as follows. The proof that (iii) implies (ii) shows that for any compact Hausdorff $K$, the space $\Hom_w(X \to K)$ of weakly abstract measurable maps from $X$ to $K$ is in one-to-one correspondence with the concrete measurable maps from $X$ to $K$ (and is therefore also equal to $\Hom(X \to K)$); hence if one has a weak retract $\pi \in \Hom_w(X \to X_\mu)$, then any weakly abstract measurable map $f \in \Hom(X_\mu \to K)$ can be made into a concrete map by composing on the right with $\pi$ and using the above correspondence. In the proof that (i) implies (iii), the compact space $K$ constructed there can be interpreted as the space $\Hom_w(\mathrm{pt} \to X_\mu)$ (the Stone space of $X_\mu$), and the arguments there show that the weakly abstract measurable map $\pi: \Hom(\mathrm{pt} \to X_\mu)_\Baire \to X_\mu$ defined by $\pi^* a \coloneqq \{ \psi \in \Hom(\mathrm{pt} \to X_\mu): \psi^* a = 1\}$ has a left inverse $\iota: X_\mu \to \Hom(\mathrm{pt} \to X_\mu)_\Baire$ that is abstractly measurable. Thus if $\iota: X_\mu \to \Hom(\mathrm{pt} \to X_\mu)_\Baire$ has a concrete realization $f\colon X \to \Hom(\mathrm{pt} \to X_\mu)_\Baire$, the weakly measurable map $\pi \circ f \colon X \to X_\mu$ gives the required retract.
\end{remark}
By chasing all the definitions, we see that the existence of a weakly measurable retract from $X$ to $X_\mu$ is equivalent to that of a Boolean algebra homomorphism $\pi^*: \X_\mu \to \X$ such that $[\pi^*(a)] = a$ for all $a \in \X_\mu$. Such homomorphisms are referred\footnote{We are indebted to Will Brian for suggesting these references.} to in \cite{shelah} as a \emph{lifting} (or \emph{splitting}) of the Boolean algebra $\X_\mu$. If $X$ is the unit interval $[0,1]$ equipped with the Borel $\sigma$-algebra and Lebesgue measure, then it is a classical result of von Neumann \cite{vn, vns} that such a lifting exists under the Continuum Hypothesis, while in \cite{shelah} it is shown that it is also consistent with ZFC that no such lifting exists. Thus, for this choice of $X$, the question of whether all abstract maps into compact Hausdorff spaces are realizable is independent of ZFC! If on the other hand the measure space $(X,\X,\mu)$ is assumed to be $\sigma$-finite and complete (for instance, if $X$ is a standard probability space), then it was shown in \cite{maharam} (with the result attributed to von Neumann) that such a lifting always exists. From this and Proposition \ref{equiv} we conclude
\begin{corollary}[Conditional elements of compact spaces in the complete case]\label{large} Let $K$ be a compact Hausdorff space, and suppose also that the measure space $(X,\X,\mu)$ is $\sigma$-finite and complete. Then every weak conditional element $k \in \Cond_w(K_\Baire)$ has a realization by a concrete measurable map $f: X \to K$. In particular $\Cond_w(K_\Baire) = \Cond(K_\Baire)$.
\end{corollary}
For surveys of the lifting problem, see \cite[\S 4]{fremlin1989measure}, \cite{burke}.
\section{Towards a concrete version of the uncountable Moore-Schmidt theorem}\label{concrete-sec}
One can raise the conjecture of whether Theorem \ref{mstu} continues to hold if we use concrete actions, coboundaries, and cocycles:
\begin{conjecture}[Concrete uncountable Moore-Schmidt conjecture]\label{mstu-concrete}
Let $\Gamma$ be a discrete group acting concretely on a measure space $X = (X,\X,\mu)$, and let $K$ be a compact Hausdorff abelian group. Then a concrete $K_\Baire$-valued cocycle $\rho = (\rho_\gamma)_{\gamma \in \Gamma}$ on $X$ is a concrete coboundary if and only if the $\T$-valued concrete cocycles $\hat k \circ \rho \coloneqq (\hat{k}\circ \rho_\gamma)_{\gamma \in \Gamma}$ are concrete coboundaries for all $\hat{k} \in \hat K$.
\end{conjecture}
The ``only if'' part of the conjecture is easy; the difficulty is the ``if'' direction. If $\rho = (\rho_\gamma)_{\gamma \in \Gamma}$ is a concrete cocycle with the property that $\hat k \circ \rho$ is a concrete coboundary for all $\hat k \in \hat K$, then the abstraction $[\rho] \coloneqq ([\rho_\gamma])_{\gamma \in \Gamma}$ is clearly an abstract cocycle with $\hat k \circ [\rho] = [\hat k \circ \rho]$ an abstract coboundary for all $\hat k \in \hat K$. Applying Theorem \ref{mstu}, we conclude that $[\rho]$ is an abstract coboundary, thus there exists an abstract measurable map $F \in \Hom(X_\mu \to K_\Baire)$ such that
$$ [\rho_\gamma] = F \circ T^\gamma - F$$
for all $\gamma \in \Gamma$. By Proposition \ref{alt}, we may then find a concrete measurable map $\tilde F \colon X \to K_\Baire$ such that $[\tilde F] = F$. If we then introduce the concrete coboundary
$$ \tilde \rho \coloneqq ( \tilde F \circ T^\gamma - \tilde F )_{\gamma \in \Gamma}$$
then we see that $[\rho] = [\tilde \rho]$. If we could conclude that $\rho = \tilde \rho$, we could establish Conjecture \ref{mstu-concrete}. We are unable to do this, but by subtracting $\tilde \rho$ from $\rho$ we see that to prove the above conjecture it suffices to do so in the case $\tilde \rho=0$, which implies that $[\langle \hat k, \rho_\gamma \rangle] = 0$, or equivalently (by Proposition \ref{metr}) that $\langle \hat k, \rho_\gamma \rangle$ vanishes almost everywhere for each $\hat k, \gamma$. Thus Conjecture \ref{mstu-concrete} can be equivalently formulated as
\begin{conjecture}[Concrete uncountable Moore-Schmidt conjecture, reduced version]\label{mstu-concrete-red}
Let $\Gamma$ be a discrete group acting concretely on a measure space $X = (X,\X,\mu)$, and let $K$ be a compact Hausdorff abelian group. Let $\rho = (\rho_\gamma)_{\gamma \in \Gamma}$ be a concrete $K_\Baire$-valued cocycle on $X$ with the property that $\langle \hat k, \rho_\gamma \rangle$ vanishes $\mu$-almost everywhere for each $\hat k \in \hat K$ and $\gamma \in \Gamma$. Then $\rho$ is a concrete coboundary.
\end{conjecture}
One easily verified case of this conjecture is when $K$ is metrizable. Then $\hat K$ is countable, so for each $\gamma \in \Gamma$ we see that for almost every $x \in X$, $\langle \hat k, \rho_\gamma(x) \rangle=0$ for all $\hat k \in \hat K$ simultaneously, and so $\rho_\gamma(x)=0$ for almost every $x$, which of course implies that $\rho$ is a coboundary. Note that this allows us to recover Theorem \ref{mst} from Theorem \ref{mstu}.
Another easy case is when $\Gamma$ is countable, $(X,\X,\mu)$ is complete, and $K$ is a torus $K = \T^A$ for some (possibly uncountable) $A$. By hypothesis, the cocycle equation
\begin{equation}\label{cocycle} \rho_{\gamma_1\gamma_2}(x) = \rho_{\gamma_1} \circ T^{\gamma_2}(x) + \rho_{\gamma_2}(x)
\end{equation}
holds for each $\gamma_1,\gamma_2 \in \Gamma$ for $x$ outside of a null set. Since $\Gamma$ is countable, we may make this null set independent of $\gamma_1,\gamma_2$, and can also make it $\Gamma$-invariant. We may then delete this set from $X$ and assume without loss of generality that \eqref{cocycle} holds for \emph{all} $x \in X$. Now we write $\rho$ in coordinates as $\rho_\gamma(x) = (\rho_{\gamma,\alpha}(x))_{\alpha \in A}$. Then for each $\alpha \in A$, $\rho_{\gamma,\alpha}(x)$ vanishes for $x$ outside of a null set $N_\alpha$, which as before we can assume to be independent of $\gamma$ and $\Gamma$-invariant. By the axiom of choice, we may partition $N_\alpha$ into disjoint orbits of $\Gamma$:
$$ N_\alpha = \bigcup_{x \in M_\alpha} \{ T^\gamma x: \gamma \in \Gamma \}$$
where $M_\alpha$ is a subset of $N_\alpha$. If we then define the map $F_\alpha: X \to \T$ by setting
$$ F_\alpha( T^\gamma x ) \coloneqq \rho_{\gamma,\alpha}(x)$$
for $x \in M_\alpha$ and $\gamma \in \Gamma$, and $F_\alpha(x) = 0$ for $x \not \in N_\alpha$, then by the completeness of $(X,\X,\mu)$ we see that $F_\alpha$ is measurable (being zero almost everywhere) and from the cocycle equation we see that
$$ \rho_{\gamma,\alpha}(x) = F_\alpha(T^\gamma x) - F_\alpha(x)$$
for all $x \in X$, $\gamma \in \Gamma$, $\alpha \in A$. Setting $F: X \to K_\Baire$ to be the map $F(x) \coloneqq (F_\alpha(x))_{\alpha \in A}$, we conclude that $\rho_\gamma(x) = F(T^\gamma(x)) - F(x)$ for all $\gamma \in \Gamma$ and $x \in X$, so that $\rho$ is a concrete coboundary as claimed in this case.
In view of the results in the previous section it is conceivable that the truth of this conjecture is sensitive to undecidable axioms in set theory.
\section{Introduction}
\label{intro}
Dynamical matching is an interesting mechanism originally proposed in \cite{carpenter1985,carpenter1995} that arises in Caldera-type potential energy surfaces (PES). These potentials are relevant in chemistry since they provide good approximations for the description of many organic chemical reactions, such as those that occur in the vinylcyclopropane-cyclopentene rearrangement \cite{baldwin2003,gold1988}, the stereomutation of cyclopropane \cite{doubleday1997}, the degenerate rearrangement of bicyclo[3.1.0]hex-2-ene \cite{doubleday1999,doubleday2006} or that of 5-methylenebicyclo[2.1.0]pentane \cite{reyes2002}. The potential energy surface of a Caldera is similar to that of a collapsed region of an erupted volcano. It is characterized by a shallow potential well region (a central minimum) surrounded by four entrance/exit channels mediated by index-1 saddles. Two of these saddles have low energy values and correspond to the formation of chemical products, while the other two are higher in energy and represent reactants.
Broadly speaking, trajectories in Caldera-type PESs exhibit two distinct types of dynamical behavior. The first kind is the trapping of trajectories in the central minimum area of the Caldera, and the other type is dynamical matching. Examples of the behavior of these types of trajectories for the type of Caldera PES studied in this paper were described in \cite{collins2014}. In the first case, trajectories that have initial conditions on the dividing surfaces of the unstable periodic orbits (UPOs) of the upper index-1 saddles enter the central area of the Caldera and become temporarily trapped as a result of the interaction of the invariant manifolds of the UPOs that exist in the central area of the Caldera with those of the unstable periodic orbits of the index-1 saddles. This is studied in \cite{katsanikas2018}. Eventually, these trajectories will exit the Caldera through any of the channels corresponding to the four index-1 saddles surrounding the central area. As we will show in this work, trapping of trajectories, i.e. non-existence of dynamical matching, is a consequence of heteroclinic connections between the stable manifolds of the family of UPOs in the central minimum of the Caldera and the unstable manifolds of the UPOs of the upper index-1 saddles.
The second type of trajectory behavior is dynamical matching, for which trajectories with initial conditions on the dividing surfaces of the UPOs of the upper index-1 saddles go straight across the Caldera and exit via the opposite lower index-1 saddles. This was considered in \cite{katsanikas2018}. The understanding of this mechanism is very important for Caldera PESs with reflectional symmetry about the $y$-axis (which is what we consider in this paper) since for such PESs statistical theories would predict that reactive trajectories exit with equal probability through the two channels of the lower index-1 saddles. However, chemical systems whose PES possesses caldera-like intermediate regions almost never exhibit the expected symmetry in the product formation ratio. For this reason this mechanism must be understood from a phase space perspective.
Dynamical matching can be viewed as an expression of momentum conservation and Newton’s first law of motion. It is manifested by a trajectory entering the Caldera from a channel corresponding to a high energy index-1 saddle (reactant). In the relatively flat region of the caldera it experiences little force, and it exits through the diametrically opposing low energy index-1 saddle (product). As a result, this mechanism plays an important role in determining the outcome of the chemical reaction. However, not all trajectories entering the caldera behave in this fashion. Some trajectories may interact with the shallow potential well region and become temporarily trapped. This can play a significant role in how they exit from the well.
In our previous study of dynamical matching for Caldera PES described in \cite{katsanikas2018} we used the method of Poincar{\'e} sections to understand that dynamical matching is a consequence of the
non-existence of interaction between the unstable invariant manifolds of the UPOs associated with the upper index-1 saddles and the manifolds from the central minimum of the Caldera. We also investigated in \cite{katsanikas2019} the conditions for the non-existence of dynamical matching in cases where we stretched the PES in the $x$-direction. In this case, the distance in the $x$-direction between the saddles and the central minimum increases as we decrease the stretching parameter. We found that there existed a critical value of the stretching parameter for which the system does not exhibit dynamical matching. At this critical value, the invariant manifolds of the UPOs associated with the upper index-1 saddles begin to interact with the central area of the Caldera, and trajectories become temporarily trapped. We showed that this results from the decrease of the H{\'e}non stability parameter of the UPOs of the upper index-1 saddles that is responsible for the focusing of the unstable manifolds of the UPOs towards the central area of the Caldera \cite{katsanikas2019}.
\cite{katsanikas2018,katsanikas2019} used the following methods to reveal and analyze the phase space structure:
\begin{enumerate}
\item Computation of periodic orbits using classical methods. In particular, it was noted that in Caldera-type Hamiltonian systems it is very difficult to compute the Lyapunov families of UPOs of the index-1 upper saddles, since the system has distinct escape routes leading to non-convergence of the methods in a reasonable computational time.
\item Computation of periodic orbit dividing surfaces associated with relevant UPOs.
\item Computation of selected Poincar{\'e} sections.
\item Computation of the invariant manifolds of the UPOs on Poincar{\'e} sections.
\end{enumerate}
\noindent
In this paper we show how the method of Lagrangian descriptors can be used to achieve each of these steps with significant computational efficiency, both in implementation and time.
The outline of this paper is as follows. In section \ref{sec.1} we briefly describe the Caldera Hamiltonian system for which we analyze the dynamical matching mechanism. Section \ref{sec.1a} is devoted to introducing the method of Lagrangian descriptors and how it can be applied to reveal the geometrical template of invariant manifolds in the high-dimensional phase space of Hamiltonian systems. In section \ref{sec.2} we present the results of this work on how to detect the dynamical matching phenomenon using Lagrangian descriptors. Finally, in the last section we discuss the conclusions.
\section{The Hamiltonian Model}
\label{sec.1}
In this section we present the Caldera PES that we have used in order to analyze the phase space structures responsible for the dynamical matching mechanism. The model PES that we consider, which has been addressed in previous works, see e.g. \cite{collins2014,katsanikas2018,katsanikas2019}, has a central minimum and four index-1 saddles around it. Two of these saddles have high energy values and the other two are lower in energy. Therefore, the regions about the index-1 saddles allow entrance and exit to and from the central area of the Caldera. In particular, we study a stretched version of the Caldera potential in the $x$ degree of freedom, in the form:
\begin{equation}
V(x,y) = c_1 \left(y^2 + (\lambda x)^2\right) + c_2 \, y - c_3 \left((\lambda x)^4 + y^4 - 6 \, (\lambda x)^2 y^2\right)
\label{eq1}
\end{equation}
\noindent where the model parameters used in this work are $c_1 = 5$, $c_2 = 3$, $c_3 = -3/10$ and $0 < \lambda \leq 1$ (the stretching parameter). The classical symmetric caldera PES \cite{collins2014,katsanikas2018} corresponds to $\lambda = 1$ and is shown in Fig. \ref{caldera_pes}. We depict in Fig. \ref{equi} the contours and the equilibrium points of the potential for different values of $\lambda$, for example $\lambda=1$, $\lambda=0.8$, $\lambda=0.6$ and $\lambda=0.2$. We also compile in Table \ref{tab:ta08} the positions and energies of the upper index-1 saddles for different values of $\lambda$. We observe that the positions of the index-1 saddles move away from the center of the Caldera as we decrease the parameter $\lambda$. The position of the central minimum is $(x,y) = (0,-0.297)$ with energy $E = -0.448$ for all values of the stretching parameter $\lambda$.
The Hamiltonian with two degrees of freedom is defined as the sum of kinetic plus potential energy:
\begin{equation}
H(x,y,p_x,p_y) = \frac{p_x^2}{2m_x} + \frac{p_y^2}{2m_y} + V(x,y)
\label{eq2}
\end{equation}
where $V(x,y)$ is the Caldera PES in Eq. \eqref{eq1}, and $m_x$, $m_y$ are the masses of the $x$ and $y$ DoF respectively. We denote the numerical value of the Hamiltonian as energy $E$. In this work we take $m_x = m_y =1$, and Hamilton's equations of motion are given by:
\begin{equation}
\begin{cases}
\dot x = \dfrac{\partial H} {\partial p_x} = \dfrac{p_x}{m_x} \\[.4cm]
\dot y = \dfrac{\partial H} {\partial p_y} = \dfrac{p_y}{m_y} \\[.4cm]
\dot p_x = -\dfrac{\partial H} {\partial x} = 2 \lambda \, (\lambda x) \left[2c_3 \left((\lambda x)^2 - 3 y^2 \right) - c_1 \right] \\[.4cm]
\dot p_y = -\dfrac {\partial H} {\partial y} = 2 y \left[ 2 c_3 \left(y^2 - 3 (\lambda x)^2\right) - c_1 \right] - c_2
\end{cases}
\label{eq3}
\end{equation}
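For reference, a minimal Python sketch of the potential of Eq. \eqref{eq1} and of the vector field of Eq. \eqref{eq3} is given below; it is only an illustration of the equations above, and the function names and the use of NumPy are our own choices rather than part of any specific software used for the computations reported in this work.
\begin{verbatim}
import numpy as np

# Model parameters used in this work; lam denotes the stretching parameter lambda
c1, c2, c3 = 5.0, 3.0, -0.3

def potential(x, y, lam=1.0):
    # Caldera PES defined above
    X = lam * x
    return c1*(y**2 + X**2) + c2*y - c3*(X**4 + y**4 - 6.0*X**2*y**2)

def vector_field(t, state, lam=1.0):
    # Hamilton's equations above with m_x = m_y = 1
    x, y, px, py = state
    X = lam * x
    dx  = px
    dy  = py
    dpx = 2.0*lam*X*(2.0*c3*(X**2 - 3.0*y**2) - c1)
    dpy = 2.0*y*(2.0*c3*(y**2 - 3.0*X**2) - c1) - c2
    return np.array([dx, dy, dpx, dpy])
\end{verbatim}
This vector field can be passed to any standard initial value problem solver in order to generate the trajectories discussed in the remainder of the paper.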
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.26]{caldera_pes_lambda_1.png}
\end{center}
\caption{Caldera potential energy surface given in Eq. (\ref{eq1}) for the model parameters $c_1 = 5$, $c_2 = 3$, $c_3 = -3/10$ and $\lambda = 1$.}
\label{caldera_pes}
\end{figure}
\begin{figure}[htbp]
\begin{center}
A)\includegraphics[scale=0.48]{equili1a.png}
B)\includegraphics[scale=0.48]{equili08a.png}
C)\includegraphics[scale=0.48]{equili06a.png}
D)\includegraphics[scale=0.48]{equili02a.png}
\end{center}
\caption{The stable stationary point in the center area (depicted by a black point), the upper saddles (depicted by red points), the lower saddles (depicted by blue points) and the equipotential contours for the stretching parameter: A) $\lambda = 1$; B) $\lambda = 0.8$; C) $\lambda = 0.6$ and D) $\lambda = 0.2$.}
\label{equi}
\end{figure}
\begin{table}[htbp]
\tbl{The upper index-1 saddles of the potential given in Eq. \eqref{eq1} ("RH" and "LH" are the abbreviations for right hand and left hand respectively) for different values of $\lambda$. The energy for all the cases is $E = 27.0123$.} {
\begin{tabular}{| l c c c |}
\hline
Critical point & x & y & $\lambda$ \\
\hline
Upper LH index-1 saddle &-2.149 & 2.0778 & 1 \\
Upper RH index-1 saddle &2.149 & 2.0778 & 1\\
Upper LH index-1 saddle &-2.6862 & 2.0778 & 0.8 \\
Upper RH index-1 saddle &2.6862 & 2.0778 & 0.8\\
Upper LH index-1 saddle &-3.5815 & 2.0778 & 0.6 \\
Upper RH index-1 saddle &3.5815 & 2.0778 & 0.6 \\
Upper LH index-1 saddle &-10.7446 & 2.0778 & 0.2 \\
Upper RH index-1 saddle &10.7446 & 2.0778 & 0.2 \\
\hline
\end{tabular} \label{tab:ta08} }
\end{table}
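As a simple consistency check, the equilibrium data of Table \ref{tab:ta08} can be reproduced numerically by a root search on the gradient of the potential. The sketch below builds on the previous snippet and assumes SciPy's \texttt{fsolve}; the initial guesses are our own illustrative choices.
\begin{verbatim}
from scipy.optimize import fsolve

def grad_V(q, lam):
    # Gradient of the Caldera potential defined in the previous snippet
    x, y = q
    X = lam * x
    dVdx = 2.0*c1*lam*X - c3*lam*(4.0*X**3 - 12.0*X*y**2)
    dVdy = 2.0*c1*y + c2 - c3*(4.0*y**3 - 12.0*X**2*y)
    return [dVdx, dVdy]

lam = 0.8
# Illustrative initial guesses near the upper right saddle and the central minimum
saddle  = fsolve(grad_V, [2.5, 2.0], args=(lam,))
minimum = fsolve(grad_V, [0.0, -0.3], args=(lam,))
print(saddle,  potential(*saddle,  lam))   # approx (2.686, 2.078), energy approx 27.01
print(minimum, potential(*minimum, lam))   # approx (0.000, -0.297), energy approx -0.448
\end{verbatim}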
\section{Lagrangian Descriptors}
\label{sec.1a}
The method of Lagrangian descriptors (LDs) is a trajectory-based scalar diagnostic that has been developed in the nonlinear dynamics literature to explore the geometrical template of phase space structures that characterizes qualitatively distinct dynamical behavior. This technique was originally introduced a decade ago in \cite{madrid2009} for the location of \textit{Distinguished Hyperbolic Trajectories}, and was defined by means of computing the arclength of particle trajectories as they evolve forward and backward in time \cite{mancho2013lagrangian}. The method was originally applied to study transport and mixing mechanisms in geophysical flows \cite{mendoza2010}. Recently, the technique has received recognition in the field of Chemistry, in particular in the area of Transition State Theory (see e.g. \cite{craven2015lagrangian,craven2016deconstructing,craven2017lagrangian}), where the computation of chemical reaction rates relies on the knowledge of the phase space structures that separate reactants from products. Therefore, the use of mathematical techniques that have the capability of detecting high-dimensional phase space structures that occur in Hamiltonian systems, such as normally hyperbolic invariant manifolds (NHIMs) and their stable and unstable manifolds, is of great interest and utility. One of the biggest challenges when exploring the high-dimensional phase space of a dynamical system is to interpret the dynamical behavior of ensembles of initial conditions, and to recover from the evolution of their trajectories the underlying geometrical phase space structures that govern the dynamics. The problem that arises is that classical techniques rely on following the location of the trajectories of initial conditions that start nearby, and in a high-dimensional phase space, trajectories might get ``lost'' with respect to each other very quickly. The method of Lagrangian descriptors provides a radically different approach that resolves this issue, as it focuses on integrating a positive scalar function along the trajectory of any initial condition of the system instead of tracking its phase space location. This is probably one of the key ideas behind the success of this technique, as the phase space geometry is encoded in the initial conditions themselves.
In the framework of Hamiltonian systems it has been mathematically proven that LDs detect the geometrical phase space structures responsible for transition dynamics through index-1 saddles \cite{naik2019a}, and numerical studies have been carried out to analyze escaping dynamics on open PESs \cite{demian2017,naik2019b,GG2019}. The methodology offered by LDs has been shown to have many advantages with respect to other nonlinear dynamics tools. For instance, it is straightforward to implement and computationally inexpensive when applied to systems with two or three DoF. But probably the most important feature of this tool is that it allows one to produce a complete and detailed geometrical \textit{phase space tomography} in high dimensions by means of using low-dimensional phase space probes to extract the intersections of the phase space invariant manifolds with these slices \cite{demian2017,naik2019a,naik2019b,GG2019}.
Consider a dynamical system with general time-dependence in the form:
\begin{equation}
\dfrac{d\mathbf{x}}{dt} = \mathbf{v}(\mathbf{x},t) \;,\quad \mathbf{x} \in \mathbb{R}^{n} \;,\; t \in \mathbb{R} \;,
\label{gtp_dynSys}
\end{equation}
\noindent
where the vector field $\mathbf{v}(\mathbf{x},t)$ is $C^{r}$ ($r \geq 1$) in $\mathbf{x}$ and continuous in time. In this work, this system is given by Hamilton's equations for the Caldera PES, see Eq. \eqref{eq3}. In order to explore the phase space structures of this dynamical system we have used a modified version of the $p$-norm definition of Lagrangian descriptors that relies on variable time integration. The reason for doing so is that, since the Caldera PES is an open potential, trajectories can escape to infinity at an increasing rate, and this issue may cause problems when computing LDs. Given an initial condition $\mathbf{x}_0 = \mathbf{x}(t_0)$ and a fixed integration time $\tau > 0$, the $p$-norm LD introduced in \cite{lopesino2017} is defined as follows:
\begin{equation}
M_p(\mathbf{x}_{0},t_0,\tau) = \int^{t_0+\tau}_{t_0-\tau} \, \sum_{i=1}^{n} |v_{i}(\mathbf{x}(t;\mathbf{x}_0),t)|^p \; dt = M_p^{(b)}(\mathbf{x}_{0},t_0,\tau) + M_p^{(f)}(\mathbf{x}_{0},t_0,\tau) \;,\quad p \in (0,1] \; .
\label{Mp_function}
\end{equation}
\noindent
where $M_p^{(b)}$ and $M_p^{(f)}$ represent, respectively, backward and forward integration of initial conditions starting at time $t_0$, that is:
\begin{equation}
M_p^{(b)}(\mathbf{x}_{0},t_0,\tau) = \int^{t_0}_{t_0-\tau} \sum_{i=1}^{n} |v_{i}(\mathbf{x}(t;\mathbf{x}_0),t)|^p \; dt \quad,\quad M_p^{(f)}(\mathbf{x}_{0},t_0,\tau) = \int^{t_0+\tau}_{t_0} \sum_{i=1}^{n} |v_{i}(\mathbf{x}(t;\mathbf{x}_0),t)|^p \; dt
\end{equation}
\noindent
In particular, we have chosen for this work $p = 1/2$. At this point, it is important to highlight that with this definition of LDs one can mathematically prove that NHIMs and their stable and unstable manifolds are detected as singularities of the $M_p$ scalar field, that is, points at which the function is non-differentiable and thus its gradient takes very large values \cite{lopesino2017,demian2017,naik2019a}. Moreover, it has been shown that,
\begin{equation}
\mathcal{W}^u(\mathbf{x}_{0},t_0) = \textrm{argmin } M_p^{(b)}(\mathbf{x}_{0},t_0,\tau) \quad,\quad \mathcal{W}^s(\mathbf{x}_{0},t_0) = \textrm{argmin } M_p^{(f)}(\mathbf{x}_{0},t_0,\tau)
\label{min_LD_manifolds}
\end{equation}
\noindent where $\mathcal{W}^u$ and $\mathcal{W}^s$ are, respectively, the unstable and stable manifolds calculated at time $t_0$ and $\textrm{argmin}$ denotes the phase space coordinates $\mathbf{x}_0$ that minimize the function $M_p$. In addition, NHIMs at time $t_0$ can be calculated as the intersection of the stable and unstable manifolds:
\begin{equation}
\mathcal{N}(\mathbf{x}_{0},t_0) = \mathcal{W}^u(\mathbf{x}_{0},t_0) \cap \mathcal{W}^s(\mathbf{x}_{0},t_0) = \textrm{argmin } M_p(\mathbf{x}_{0},t_0,\tau)
\label{min_NHIM_LD}
\end{equation}
\noindent
It is important to point out here that the phase space location of the stable and unstable manifolds can be thus obtained in two ways. Firstly, we can extract them as ridges of the scalar function $|| \nabla M_p ||$ since manifolds are located at points where the function $M_p$ is non-differentiable. Once the manifolds are known, one can compute the NHIM at their intersection by means of a root search algorithm. The second method to recover the manifolds and their associated NHIM is by minimizing the function $M_p$ using a search optimization algorithm. This second procedure and some interesting variations are described in \cite{feldmaier2019}.
Notice that the LD definition given in Eq. (\ref{Mp_function}) implies that all initial conditions are integrated for the same time $\tau$. Recent studies have revealed, see e.g. \cite{junginger2017chemical,naik2019b,GG2019}, that computing fixed-time LDs, that is, integrating all initial conditions chosen on a phase space surface for the same integration time $\tau$, could give rise to issues related to the fact that some of the trajectories that escape the PES can go to infinity in finite time or at an increasing rate. The trajectories that show this behavior will give NaN values in the LD scalar field, hiding some regions of the phase space, and therefore obscuring the detection of invariant manifolds. In order to circumvent this problem we will apply in this work the approach that has been recently adopted in the literature \cite{junginger2017chemical,naik2019b,GG2019} known as variable integration time Lagrangian descriptors. In this methodology, LDs are calculated, at any initial condition, for the initial fixed integration time or until the trajectory of that initial condition leaves a certain phase space region $\mathcal{R}$ that we call the {\em interaction region}. Therefore, the total integration time in this strategy depends on the initial conditions themselves, that is $\tau(\mathbf{x}_0)$. In this variable-time formulation, the $p$-norm definition of LDs has the form:
\begin{equation}
M_p(\mathbf{x}_{0},t_0,\tau) = \int^{t_0 + \tau^{+}_{\mathbf{x}_0}}_{t_0 - \tau^{-}_{\mathbf{x}_0}} \sum_{i=1}^{n} |v_{i}(\mathbf{x}(t;\mathbf{x}_0),t)|^p \; dt \;,\quad p \in (0,1] \;.
\label{Mp_vt}
\end{equation}
\noindent
and, for a fixed integration time $\tau_0 > 0$, the total integration time is defined as:
\begin{equation}
\tau^{\pm}_{\mathbf{x}_{0}} = \min \left\lbrace \tau_0 \, , \, |t^{\pm}|_{\big| \mathbf{x}\left(t^{\pm}; \, \mathbf{x}_{0}\right) \notin \mathcal{R}} \right\rbrace \; ,
\end{equation}
\noindent
where $t^{+}$ and $t^{-}$ are the times for which the trajectory leaves the interaction region $\mathcal{R}$ in forward and backward time, respectively. For the analysis of the Caldera-type Hamiltonian in this work we have chosen:
\begin{equation}
\mathcal{R} = \left\lbrace \mathbf{x} = (x,y,p_x,p_y) \in \mathbb{R}^4 \; \big| \; |y| \leq 6 \right\rbrace \;.
\label{inter_reg}
\end{equation}
\noindent
We conclude the description of the method by highlighting the fact that if the selected interaction region is large enough, the variable integration time LD definition given above in Eq. \eqref{Mp_vt} will approach the fixed-time LD definition in Eq. \eqref{Mp_function}. Thus, NHIMs and their stable and unstable manifolds will be captured by the phase space points for which the LD is non-differentiable and local minimum behavior given in Eqs. \eqref{min_LD_manifolds} and \eqref{min_NHIM_LD} is recovered. Consequently, the variable integration time LD provides us with a suitable methodology to study the phase space geometrical structures that characterize the dynamics in open potentials, since it avoids the issue of trajectories escaping to infinity very fast.
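To make this procedure concrete, the following sketch (building on the previous snippets and assuming SciPy's \texttt{solve\_ivp}) evaluates the variable integration time LD of Eq. \eqref{Mp_vt} with $p = 1/2$ for a single initial condition, stopping each branch of the integration when the trajectory leaves the interaction region of Eq. \eqref{inter_reg}; the integration tolerances are illustrative choices.
\begin{verbatim}
from scipy.integrate import solve_ivp

def ld_integrand(t, aug, lam, p):
    # Augmented system: the four phase space variables plus the LD accumulator
    v = vector_field(t, aug[:4], lam)
    return np.append(v, np.sum(np.abs(v)**p))

def escape_event(t, aug, lam, p):
    # Stop the integration once the trajectory leaves the region |y| <= 6
    return np.abs(aug[1]) - 6.0
escape_event.terminal = True

def lagrangian_descriptor(x0, tau=4.0, lam=1.0, p=0.5):
    ld = 0.0
    for sign in (+1.0, -1.0):                      # forward and backward branches
        sol = solve_ivp(ld_integrand, (0.0, sign*tau), np.append(x0, 0.0),
                        args=(lam, p), events=escape_event,
                        rtol=1e-10, atol=1e-10)
        ld += np.abs(sol.y[4, -1])                 # accumulated integral of sum_i |v_i|^p
    return ld
\end{verbatim}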
To finish this section we will illustrate how variable integration time LDs can be used to detect the geometrical phase space structures, that is, the NHIMs and their stable and unstable invariant manifolds that characterize the dynamical matching phenomenon in the Caldera Hamiltonian system. In particular, we will focus on the extraction of the phase space structures for the dynamical system given in Eq. \eqref{eq3} using the model parameters described in Section \ref{sec.1}, and considering the unstretched ($\lambda = 1$) Caldera potential.
To compare the results obtained using LDs with those found in \cite{katsanikas2018} by means of other nonlinear dynamics techniques, we will analyze the phase space structures in the following Poincar\'e surfaces of section (SOSs):
\begin{eqnarray}
\mathcal{U}^{+}_{x,p_x} &=& \lbrace (x,y,p_x,p_y) \in \mathbb{R}^4 \;|\; y = 1.88409 \; ,\; p_y > 0 \;,\; E = 29 \rbrace \\
\mathcal{V}^{+}_{x,p_x} &=& \lbrace (x,y,p_x,p_y) \in \mathbb{R}^4 \;|\; y = 0 \; ,\; p_y > 0 \;,\; E = 30 \rbrace
\label{psos_defs}
\end{eqnarray}
\noindent
We begin our analysis with the SOS $\mathcal{U}^{+}_{x,p_x}$, and we choose a small integration time $\tau = 4$. Once we have fixed the phase space slice where we want to compute LDs, we select a grid of initial conditions and, after discarding those that are energetically unfeasible, we integrate the remaining conditions both forward and backward in time, and compute LDs using the definition in Eq. \eqref{Mp_vt} with $p = 1/2$ along the trajectory for the whole fixed integration time or until the initial condition leaves the interaction region $\mathcal{R}$ in Eq. \eqref{inter_reg}, whichever happens first. The result is that if we plot the LD values obtained from the forward/backward integration, the scalar field will reveal the stable/unstable manifolds in the SOS under consideration. Moreover, if we plot the combined sum of forward and backward integration, the method highlights both stable and unstable manifolds simultaneously. This is shown in Fig. \ref{fig:LD_tau4}, where the values of LDs for forward/backward integration are displayed in panel A)/B) and the combination of both is depicted in C). We can clearly see that the manifolds are detected at points where the LD scalar function is non-differentiable. To demonstrate this mathematical property, we represent in Fig. \ref{fig:LD_tau4_maniDetect} the values taken by the LD function calculated on $\mathcal{U}^{+}_{x,p_x}$ along the line $p_x = 1$. Notice the jumps in the values of the function, which indicate non-differentiability by means of very large gradient values. Therefore, we can directly extract the invariant stable and unstable manifolds in the SOS from the gradient, that is, using $||\nabla M_p||$. This is illustrated in Fig. \ref{fig:LD_mani_extract} for the SOS $\mathcal{U}^{+}_{x,p_x}$ where two different values for the integration time have been used to compute LDs, in particular $\tau = 4$ and $\tau = 8$. It is important to note here the crucial role that the integration time $\tau$ plays when it comes to revealing the invariant manifolds in phase space. As shown in Fig. \ref{fig:LD_mani_extract}, when we increase the value for the integration time, richer and more complex details of the underlying geometrical template of phase space structures are unveiled. This behavior is expected, since an increase of the integration time would imply incorporating more information about the past and future dynamical history of particle trajectories in the computation of LDs. This means that $\tau$ is intimately related to the time scales of the dynamical phenomena that take place in the model under consideration and thus it is a parameter that is problem-dependent. Consequently, there is no general ``golden'' rule for selecting its value for exploring phase space, and thus it is usually selected from the information obtained by performing several numerical experiments. One always needs to bear in mind that there is a compromise between the complexity of the structures that one would like to reveal to explain a certain dynamical mechanism, and the interpretation of the intricate manifolds displayed in the LD scalar output.
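As an illustration of the workflow just described, a minimal sketch of the grid construction on $\mathcal{U}^{+}_{x,p_x}$ and of the extraction of the manifolds from the gradient of the LD field is given below; it builds on the previous snippets, and the grid ranges and resolution are illustrative choices only.
\begin{verbatim}
# Grid of initial conditions on the surface of section y = 1.88409, p_y > 0, E = 29.
# Energetically unfeasible points are discarded (left as NaN).
E, y0, lam = 29.0, 1.88409, 1.0
x_vals  = np.linspace(-3.0, 3.0, 300)
px_vals = np.linspace(-5.0, 5.0, 300)

LD = np.full((px_vals.size, x_vals.size), np.nan)
for i, px in enumerate(px_vals):
    for j, x in enumerate(x_vals):
        py_sq = 2.0*(E - potential(x, y0, lam)) - px**2   # from H = E with m_x = m_y = 1
        if py_sq > 0.0:
            LD[i, j] = lagrangian_descriptor(np.array([x, y0, px, np.sqrt(py_sq)]),
                                             tau=4.0, lam=lam)

# The invariant manifolds show up as ridges of the gradient norm of the LD field
grad_px, grad_x = np.gradient(LD, px_vals, x_vals)
manifold_indicator = np.sqrt(grad_x**2 + grad_px**2)
\end{verbatim}
The field \texttt{manifold\_indicator} can then be visualized with a standard color map and compared with panels such as those of Fig. \ref{fig:LD_mani_extract}.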
As a final remark to complete the analysis of this example on how the method of Lagrangian descriptors is applied to extract the geometrical template of invariant manifolds in a high-dimensional phase space by means of looking at low-dimensional slices, there is a key point that needs to be highlighted and that demonstrates the real potential of LDs with respect to other classical techniques from nonlinear dynamics. In Figs. \ref{fig:LD_mani_extract} and \ref{fig:LD_PSec_comp} we have extracted from the gradient of the $M_p$ function the stable and unstable manifolds on the Poincar\'e sections $\mathcal{U}^{+}_{x,p_x}$ and $\mathcal{V}^{+}_{x,p_x}$ respectively. Using LDs we can obtain \textit{all} the manifolds coming from \textit{any} NHIM in phase space \textit{simultaneously}. This is of course a tremendous advantage in comparison to the classical approach used to compute stable and unstable manifolds, which relies on individually locating the NHIMs in phase space and, for every NHIM, globalizing the manifolds separately, taking into account the crucial information provided by the eigendirections. Consequently, the application of LDs offers the capability of recovering \textit{all} the relevant phase space structures in one \textit{shot} without having to study the local dynamics about equilibrium points of the dynamical system. To validate that the structures extracted from the gradient of LDs correspond to the stable and unstable manifolds present in the phase space of the Caldera Hamiltonian, we have compared them in Fig. \ref{fig:LD_PSec_comp} with the invariant manifolds obtained by means of classical nonlinear dynamics techniques to calculate periodic orbits, see \cite{katsanikas2018} for more details.
\begin{figure}[htbp]
\centering
A)\includegraphics[scale=0.26]{LDfw_lambda_1_y_188409_E_29_tau_4.png}
B)\includegraphics[scale=0.26]{LDbw_lambda_1_y_188409_E_29_tau_4.png}
C)\includegraphics[scale=0.26]{LD_lambda_1_y_188409_E_29_tau_4.png}
\caption{Computation of variable-time LDs in the Poincar\'e SOS $\mathcal{U}^{+}_{x,p_x}$ using $\tau = 4$ and $p = 1/2$. A) Forward integration LDs; B) Backward integration LDs; C) The sum of forward and backward LDs. The energy boundary is depicted in magenta.}
\label{fig:LD_tau4}
\end{figure}
\begin{figure}[htbp]
\centering
A)\includegraphics[scale=0.38]{LD_maniDetect_lambda_1_y_188409_E_29_tau_4.png}
B)\includegraphics[scale=0.38]{maniDetect_lambda_1_y_188409_E_29_tau_4.png}
\caption{Detection of stable and unstable manifolds at phase space points where the LD scalar function is non-differentiable. A) Variable-time LDs calculated on the Poincar\'e SOS $\mathcal{U}^{+}_{x,p_x}$ using $\tau = 4$ and $p = 1/2$; B) Value of LDs along the line $p_x = 1$.}
\label{fig:LD_tau4_maniDetect}
\end{figure}
\begin{figure}[htbp]
\centering
A)\includegraphics[scale=0.4]{LD_lambda_1_y_188409_E_29_tau_4.png}
B)\includegraphics[scale=0.4]{manifolds_lambda_1_y_188409_E_29_tau_4.png}
C)\includegraphics[scale=0.4]{LD_lambda_1_y_188409_E_29_tau_8.png}
D)\includegraphics[scale=0.4]{manifolds_lambda_1_y_188409_E_29_tau_8.png}
\caption{On the left column, LDs calculated on the SOS $\mathcal{U}^{+}_{x,p_x}$ using: A) $\tau = 4$; C) $\tau = 8$. On the right column, the invariant stable (blue) and unstable (red) manifolds extracted from the gradient of the scalar function $M_p$. We have also marked with yellow dots the location of the unstable periodic orbits of the upper index-1 saddles and the magenta curve represents the energy boundary.}
\label{fig:LD_mani_extract}
\end{figure}
\begin{figure}[htbp]
\centering
A)\includegraphics[scale=0.39]{LD_lambda_1_y_188409_E_29_tau_8.png}
B)\includegraphics[scale=0.4]{LD_lambda_1_y_0_E_30_tau_6.png}
C)\includegraphics[scale=0.4]{manifolds_lambda_1_y_188409_E_29_v2.png}\hspace{.3cm}
D)\includegraphics[scale=0.4]{manifolds_lambda_1_y_0_E_30_tau_6.png}
E)\includegraphics[scale=0.57]{PSec1_lambda_1_matthaios_y_188409_E_29.png}\hspace{.3cm}
F)\includegraphics[scale=0.57]{PSec2_lambda_1_matthaios_y_0_E_30.png}
\caption{On the left column: A) LDs calculated on the SOS $\mathcal{U}^{+}_{x,p_x}$ using $\tau = 8$; C) invariant stable (blue) and unstable (red) manifolds extracted from the gradient of the scalar function $M_p$; E) Unstable (cyan) and stable
(orange) invariant manifolds of the periodic orbits of the
two upper saddles, that are also represented by two black points. We also depict the invariant unstable (violet) and stable (green) manifolds of the family of periodic orbits of the central minimum. On the right column we perform the same analysis but for the Poincar\'e SOS $\mathcal{V}^{+}_{x,p_x}$, where LDs have been calculated using an integration time $\tau = 6$. It is important to remark that the invariant manifolds shown in E) and F) have been computed by means of classical nonlinear techniques to calculate periodic orbits, see \cite{katsanikas2018}.}
\label{fig:LD_PSec_comp}
\end{figure}
\section{Numerical Results}
\label{sec.2}
In this section we compute Lagrangian descriptors with $\tau = 4$ in order to study the phase space structures close to the UPOs associated with the upper index-1 saddles. For this purpose we use the Poincar\'e surfaces of section defined in Eq. \eqref{psos_defs}, which were also used in \cite{katsanikas2018}. This analysis is carried out for different values of $\lambda$. Our goal is to understand how LDs are capable of detecting the dynamical matching mechanism. This section is divided into two subsections. In the first part we describe how the method of LDs succeeds in the detection of dynamical matching, and the second subsection presents the properties and advantages of this methodology.
\subsection{The detection of Dynamical Matching}
The phenomenon of dynamical matching refers to the lack of a mechanism that would enable transport of trajectories from the region of the upper saddles to the central area of the Caldera. As we know, trajectories with initial conditions on the invariant manifolds of unstable periodic orbits move away from the periodic orbit (unstable manifold) or approach the periodic orbit (stable manifold). A mechanism that could be responsible for the transport of trajectories from the region of the upper saddles to the central area of the Caldera would be heteroclinic intersections of the unstable invariant manifolds of the unstable periodic orbits of the upper saddles with the stable manifolds of the unstable periodic orbits that exist in the central area. We will show that the non-existence or existence of this mechanism determines whether or not we have dynamical matching. For this reason, we compute the invariant manifolds for different values of $\lambda$, starting from $\lambda=1$ and decreasing towards zero, in order to find the values of $\lambda$ that correspond to dynamical matching and trapping.
\begin{enumerate}
\item \underline{\textbf{Dynamical matching:}} The gap in Fig.\ref{fig1} (for $\lambda=0.8$) indicates that we have no interaction (heteroclinic intersections) of the unstable invariant manifolds of the periodic orbits associated with the upper saddles with the central area, and this means that we have no mechanism of transport of trajectories from one region to the other. Consequently, we have in this case the phenomenon of dynamical matching: trajectories that have initial conditions on the dividing surfaces of the periodic orbits of the upper saddles go straight across the Caldera and exit via the opposite lower saddle, as we know from previous papers (\cite{katsanikas2018}, \cite{katsanikas2019}). An example of this is given in Fig. \ref{fig1} for $\lambda=0.8$. As we can see in this figure, we choose an initial condition (circle) inside the region of the unstable invariant manifold of the unstable periodic orbits of the upper saddles. If we integrate backward the initial condition that corresponds to the circle, the resulting trajectory exits via the region of the upper saddle. If we integrate it forward, the resulting trajectory goes straight across the Caldera and exits via the opposite lower saddle. This means that the trajectory comes from the region of the upper saddle and it exhibits the phenomenon of dynamical matching. This gap decreases in size as we decrease the stretching parameter $\lambda$ until we reach a critical value of $\lambda$. \\
\item \underline{\textbf{The critical value}:} In Fig. \ref{fig1} we observe that for $\lambda=0.778$ (middle row of figures) the unstable manifolds of the periodic orbits of the upper saddles start to interact with the stable manifolds of the unstable periodic orbits of the central area, resulting in heteroclinic connections and forming lobes between them. These lobes are very narrow and cannot be distinguished initially, as we can see in Fig.\ref{fig1}. In order to observe these lobes we magnify the region of the upper saddles, for example the region of the upper right saddle in Fig.\ref{fig1}. When we magnify these regions, we see the heteroclinic connections and the lobes between the unstable invariant manifolds of the unstable periodic orbits of the upper saddles and the stable manifolds of the unstable periodic orbits that exist in the central area. These lobes are responsible for the trapping of the trajectories that come from the region of the upper saddles to the central area. This can be checked very easily. We depict two initial conditions in Fig.\ref{fig1} for $\lambda=0.778$, one inside the lobe (the diamond) and the other outside the lobe (the circle) but inside the region of the unstable manifold of the unstable periodic orbit of the upper saddle. If we integrate backward the two initial conditions, we see that the corresponding trajectories come from the region of the right upper saddle because, in backward time, they exit via the region of the right upper saddle. But if we integrate forward the initial condition that is inside the lobe, the corresponding trajectory is trapped and after a long time exits through the region of the opposite lower saddle. On the contrary, the trajectory that corresponds to the other initial condition is not trapped and goes straight across the Caldera to the exit. This means that the initial conditions in the lobes between the unstable invariant manifolds of the unstable periodic orbits associated with the upper saddles and the stable invariant manifolds of the unstable periodic orbits of the central area are responsible for the trapping of the trajectories that come from the region of the upper saddles. This is the first value of $\lambda$ for which we find interaction of the unstable invariant manifolds of the unstable periodic orbits associated with the upper saddles with the central area. This means that this is a critical value of the stretching parameter for the non-existence of dynamical matching, as we have observed in a previous paper \cite{katsanikas2019}. \\
\item \underline{\textbf{Trapping}:} Now if we decrease the value of $\lambda$, starting from the critical value, we have again interaction of the unstable invariant manifolds of the unstable periodic orbits of the upper saddles with the central area. We have again lobes between the unstable invariant manifolds of the unstable periodic orbits and the stable invariant manifolds of the unstable periodic orbits that exist in the central region, as we can see for example for $\lambda=0.7$ in Fig.\ref{fig1}. This means that we have again trapping for values of $\lambda$ lower than the critical value.
\end{enumerate}
\subsection{Properties and advantages of the method of Lagrangian Descriptors.}
In this subsection we describe three different properties and advantages of the method of Lagrangian descriptors for the detection of dynamical matching:
\begin{enumerate}
\item \underline{\textbf{Accuracy:}} An important advantage of Lagrangian descriptors is that they provide a more accurate approximation of the critical value of a system parameter for the transition from dynamical matching to the non-existence of dynamical matching than the approximations that are obtained from other methods, such as dividing surfaces. For example, in this paper the critical value $\lambda=0.778$, which we computed using Lagrangian descriptors, is a little larger than the critical value $\lambda=0.72$ that is computed using dividing surfaces (see \cite{katsanikas2019}). The trapping of the trajectories for the case of the critical value of the stretching parameter and below is due to a narrow lobe (that we observed in Fig. \ref{fig1}) between the unstable invariant manifolds of the unstable periodic orbits of the upper saddles and the stable manifolds of the unstable periodic orbits that exist in the central area, as we explained earlier. This narrow lobe can be very easily identified using Lagrangian descriptors because we can see directly which part of the phase space can be responsible for the trapping and transport of the trajectories from the region of the upper saddles to the central area of the Caldera. But if we use the dividing surfaces we are constrained to identify the phenomenon of dynamical matching in the configuration space, without knowing the structure of the phase space and whether there is a region of the phase space that is responsible for the trapping of the trajectories in the central area of the Caldera. This means that it depends on the sampling of the dividing surface whether or not we will detect the phenomenon of dynamical matching. For the case of the critical value we have only very few trajectories that are trapped inside a narrow lobe, and this makes it very difficult for these trajectories to be included in the sampling of the dividing surfaces. For this reason, we can identify the critical value with more accuracy using Lagrangian descriptors. \\[.1cm]
\item \underline{\textbf{The integration time $\tau$}:} A crucial quantity for the detection of dynamical matching is the time $\tau$ of the computation of the Lagrangian descriptors. In all cases we used $\tau=4$ because we could see all the appropriate geometrical structures, and specifically the invariant manifolds of the unstable periodic orbits of the upper saddles and the invariant manifolds of the unstable periodic orbits of the central area. This allows us to see directly whether we have a gap or a lobe (dynamical matching or trapping) between the unstable invariant manifolds of the unstable periodic orbits associated with the upper saddles and the stable manifolds of the unstable periodic orbits that exist in the central area. For values of $\tau$ smaller than $4$ we could not see, in many cases, the invariant manifolds from the central area of the Caldera. On the contrary, for larger values of $\tau$ we could see more structures but it was very difficult to detect the appropriate lobes that were responsible for the non-existence of dynamical matching. For example, for $\lambda=0.7$ and $\tau=4$ (see Fig. \ref{fig1}) we identify very easily the non-existence of dynamical matching because of the lobe between the unstable invariant manifolds of the unstable periodic orbits associated with the upper saddles and the stable manifolds of the unstable periodic orbits that exist in the central area. But if we use large values for $\tau$, as for example $\tau=15$ (Fig.\ref{fig3b}), we have many returns of the invariant manifolds and it is not obvious which lobe is responsible for the trapping of the trajectories that come from the region of the upper saddles. This means that by increasing the time $\tau$ we increase the complexity of the figures and it is very difficult to detect the non-existence of dynamical matching. If we decrease the time $\tau$ below $4$ we also cannot identify whether or not dynamical matching exists, because some of the geometrical structures from the central area do not appear in the figures. There is a critical value for $\tau$ that is sufficient to see the appropriate geometrical structures (invariant manifolds from the region of the upper saddles and central area) and to detect lobes and gaps between them, but is not so large as to increase the complexity of the figures. In our paper this value is $\tau=4$. \\[.1cm]
\item \underline{\textbf{The increase of trapping:}} Using the method of Lagrangian descriptors we can predict the increase of trapping as the stretching parameter decreases. As we decrease $\lambda$ we approach the integrable limit of our system, which corresponds to $\lambda=0$: in this case the $x$ coordinate no longer appears in the expression for the Caldera PES, the system has only one degree of freedom, and it is therefore integrable. This is why, as we see in Fig.~\ref{fig2}, the ordered region around the central stable periodic orbit grows as $\lambda$ decreases, reducing the fraction of the permitted area (indicated by the pink color in Fig. \ref{fig2}) that is available to the invariant manifolds of the unstable periodic orbits. Consequently, the stable invariant manifolds of the unstable periodic orbits of the central area open up more and more towards the edge of the permitted region, forming larger lobes with the unstable invariant manifolds of the unstable periodic orbits associated with the upper saddles. This can be seen, for example, by comparing the lobes for $\lambda=0.778$ and $\lambda=0.7$ (in Fig. \ref{fig1}). The increasing size of the lobes explains the increase of trapping of trajectories in the central area as $\lambda$ decreases, which was also observed in a previous paper \cite{katsanikas2019}.
\end{enumerate}
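To make the role of $\tau$ concrete, the following is a minimal, schematic sketch (in Python) of how a Lagrangian descriptor field of the kind used above can be computed on a Poincar\'e section: trajectories are integrated forwards and backwards for time $\tau$ from a grid of initial conditions, and a $p$-norm of the momenta is accumulated along each trajectory. The energy $E=29$, the section $y=1.884090$ with $p_y>0$ and $\tau=4$ are taken from the text, whereas the potential \texttt{V} below is a generic placeholder (it is \emph{not} the stretched Caldera PES) and the grid ranges are illustrative assumptions.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder two-degree-of-freedom potential; NOT the stretched Caldera PES of the paper.
def V(x, y):
    return 0.5 * (x**2 + y**2) + 0.25 * (x**4 + y**4)

def hamiltons_eqs(t, z):
    # Hamilton's equations for H = (p_x^2 + p_y^2)/2 + V(x, y)
    x, y, px, py = z
    return [px, py, -(x + x**3), -(y + y**3)]

def lagrangian_descriptor(z0, tau, p=0.5, n_steps=400):
    """Accumulate |p_x|^p + |p_y|^p along the trajectory, forwards and backwards for time tau."""
    ld = 0.0
    for sign in (+1.0, -1.0):
        t_eval = np.linspace(0.0, sign * tau, n_steps)
        sol = solve_ivp(hamiltons_eqs, (0.0, sign * tau), z0,
                        t_eval=t_eval, rtol=1e-8, atol=1e-10)
        ld += np.sum(np.abs(sol.y[2:, :]) ** p) * (tau / (n_steps - 1))
    return ld

# Grid of initial conditions on the section y = 1.884090 with p_y > 0 at energy E = 29.
E, y0, tau = 29.0, 1.884090, 4.0
xs, pxs = np.linspace(-1.0, 1.0, 40), np.linspace(-2.0, 2.0, 40)
field = np.full((len(pxs), len(xs)), np.nan)
for i, px in enumerate(pxs):
    for j, x in enumerate(xs):
        py_sq = 2.0 * (E - V(x, y0)) - px**2
        if py_sq > 0.0:                      # keep only energetically allowed points
            field[i, j] = lagrangian_descriptor([x, y0, px, np.sqrt(py_sq)], tau)
# Abrupt changes of `field` across the grid reveal the stable/unstable manifolds;
# larger tau adds more manifold returns and makes the picture harder to read.
\end{verbatim}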
\begin{figure}[htbp]
\centering
A)\includegraphics[scale=0.25]{manifolds_lambda_08_tau_4_v2.png}
B)\includegraphics[scale=0.25]{ICSmanifolds_lambda_08_tau_4_zoom.png}
C)\includegraphics[scale=0.25]{TrajEvol_lambda_08_v2.png}\\
D)\includegraphics[scale=0.25]{manifolds_lambda_0778_tau_4_v2.png}
E)\includegraphics[scale=0.25]{ICSmanifolds_lambda_0778_tau_4_zoom.png}
F)\includegraphics[scale=0.25]{TrajEvol_lambda_0778.png}\\
G)\includegraphics[scale=0.25]{manifolds_lambda_07_tau_4_v2.png}
H)\includegraphics[scale=0.25]{ICSmanifolds_lambda_07_tau_4_zoom.png}
I)\includegraphics[scale=0.25]{TrajEvol_lambda_07.png}\\
\caption{The phase space close to the unstable periodic orbits associated with the upper saddles (first column) and the enlargement of the region of phase space indicated by a rectangle in the figures of the first column (second column), computed using Lagrangian Descriptors (with $\tau=4$). The figures in the third column depict the trajectories in configuration space that correspond to the circle and diamond in the figures of the second column. In the first row, the green line indicates the part of the trajectory under backward integration that corresponds to the circle. In the second and third rows, the red lines indicate the parts of the trajectories under backward integration that correspond to both the circle and the diamond. In addition, the black and blue lines indicate the parts of the trajectories under forward integration that correspond to the circle and the diamond, respectively (in all rows). A) B) C) are for $\lambda = 0.8$, D) E) F) are for $\lambda = 0.778$ and G) H) I) are for $\lambda = 0.7$.}
\label{fig1}
\end{figure}
\begin{figure}[htbp]
\centering
A)\includegraphics[angle=0,width=8cm]{PS_lambda_08.png}
B)\includegraphics[angle=0,width=8cm]{PS_lambda_06.png}
C)\includegraphics[angle=0,width=8cm]{PS_lambda_02.png}
\caption{Phase space close to the unstable periodic orbits associated with the upper saddles using the Poincar{\'e} surface of section $y=1.884090$ with $p_y>0$ at energy $E=29$ for the stretching parameter: A) $\lambda=0.8$; B) $\lambda=0.6$; C) $\lambda=0.2$.}
\label{fig2}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.45]{manifolds_lambda_07_tau_15_zoom.png}
\caption{Phase space stable (blue) and unstable (red) manifolds extracted from Lagrangian descriptors close to the unstable periodic orbits associated with the upper index-1 saddles. The computation has been carried out using $\tau = 15$ for the Hamiltonian system with energy $E = 29$ and stretching parameter $\lambda = 0.7$ in the Poincar{\'e} section $y = 1.884090$ with $p_y > 0$.}
\label{fig3b}
\end{figure}
\section{Conclusions}
In this work we have used the method of Lagrangian descriptors to detect the dynamical matching mechanism in a Caldera-type Hamiltonian system stretched in the $x$-direction, and our analysis has helped us to develop a deeper understanding of the dynamical origin of this phenomenon in phase space. The results we have found in this study are:
\begin{enumerate}
\item Lagrangian descriptors can easily detect the gap between the unstable invariant manifolds of the upper index-1 saddles and the stable manifolds of the unstable periodic orbits that exist in the central area. This gap corresponds to dynamical matching and is a consequence of the non-existence of a heteroclinic connection in phase space. \\
\item The detection of dynamical matching can be carried out solely by means of the computation of LDs, allowing us to avoid the use of dividing surfaces, classical methods for finding periodic orbits, Poincar{\'e} sections and the separate computation of the invariant manifolds on Poincar{\'e} sections. This means that the method is faster and can be applied in all cases, even in systems with many escapes, for which the computation of periodic orbits by classical methods and the use of dividing surfaces are very difficult. \\
\item Lagrangian descriptors can detect not only the non-existence of dynamical matching but also the specific regions of phase space that are responsible for this type of behavior. Using Lagrangian descriptors we can easily see the interaction of the unstable manifolds of the unstable periodic orbits of the upper saddles with the stable manifolds of the unstable periodic orbits of the central area. We can then identify which lobes between the unstable manifolds of the unstable periodic orbits of the upper saddles and the stable manifolds of the unstable periodic orbits of the central area are responsible for the trapping of trajectories. From the size of these lobes we can also predict whether the intensity of trapping in the central area of the Caldera will be small or large. This gives us a deeper understanding of the origin of this phenomenon. \\
\item For the detection of dynamical matching the method of Lagrangian descriptors is more accurate than that of sampling dividing surfaces. This is because the mechanism may involve only a few special trajectories that could easily be missed in a sampling procedure. In particular, these trajectories come from the region of the upper saddles and are trapped in the central area of the Caldera. Narrow lobes between the unstable manifolds of the unstable periodic orbits of the upper saddles and the stable manifolds of the unstable periodic orbits of the central area are responsible for this trajectory behaviour. \\[.1cm]
\item The detection of dynamical matching by means of Lagrangian descriptors is very sensitive to the value chosen for the integration time $\tau$ to compute LDs. By numerical experiments and inspection one can easily find a suitable value so that the method clearly reveals the relevant invariant manifolds in the region of the upper index-1 saddles and the central area of the Caldera, allowing for the detection of lobes and gaps between manifolds. As we have pointed out, the selection of $\tau$ is a relevant step in the process, since for large integration time values, the complexity of the phase space structures recovered by this technique would make the interpretation of figures a difficult task. This phenomenon is illustrated in Fig. \ref{fig3b}.
\end{enumerate}
\nonumsection{Acknowledgments:}
The authors acknowledge the support of EPSRC Grant no. EP/P021123/1 and Office of Naval Research Grant No. N00014-01-1-0769.
\bibliographystyle{ws-ijbc}
\section*{Conflicts of interest}
There are no conflicts to declare.
\section*{Acknowledgements}
The authors wish to thank Rinik Kumar for performing preliminary experiments on the free-boundary elastica. We are grateful for financial support from the National Science Foundation CMMI--CAREER through Mechanics of Materials and Structures (No. 1454153).
\section{Introduction}
\label{sec:Introduction}
This paper studies a functional equation arising from an extension of the celebrated Condorcet Jury
Theorem. In Condorcet's model, an odd-sized jury must decide whether Nature is in one of two
equiprobable states of Nature, $ A $ or $B$. Each juror receives an independent binary signal (for
$A$ or for $B $) which is correct with the same probability $p>1/2$. \citet{Con} [or see
\citet[][Ch.~XVII]{Tod} for a textbook discussion] showed that when jurors vote simultaneously
according to their signal, the probability of a correct verdict tends to $1$ as the number of
jurors tends to infinity. Recently, \citet{AlpC2,AlpC1} considered a related sequential voting
model, where jurors receive signals $S$ in the interval $[-1, +1] $ rather than binary signals. Low
signals indicate $B$ and high signals indicate $A$. The strength of this ``indication'' depends on
the ``ability'' of the juror, a number $a$ between $0$ and $1$, which is a proxy for Condorcet's
$p$ that tends from $1/2$ to $1$. When deciding how to vote, each juror notes the previous voting,
the abilities of the previous jurors, his own signal $S$ and his own ability. This is sufficient to
determine which alternative he views as being more likely. The mechanism that underlies this
determination is the common knowledge of the distributions by which a signal is given as private
information to each juror, depending on his ability and the state of Nature. It is not relevant to
the discussions of this paper, but we mention that one of the main results of Alpern and Chen is
that, given three jurors of fixed abilities, their majority verdict is most likely to be correct
when they vote in the following order: middle-ability juror first, highest-ability juror next, and
finally the lowest-ability juror.
The cumulative distribution formula of signals $S$ on $\left[ -1,+1\right] $ that a juror of
ability $a$ receives in the Alpern-Chen jury model is given by
\begin{align}
F_{a}(t) := \mathbb{P}_{a}[S\leq t \,|\, A] =(t+1)(at-a+2)/4, & \text{ if Nature is $A$;}\label{Fa} \\
G_{a}(t) := \mathbb{P}_{a}[S\leq t \,|\, B]= ( t+1)( a-at+2)/4, &\text{ if Nature is $B$.} \label{Ga}
\end{align}
These were selected as the simplest family of distributions arising from linear densities in which
steepness indicates ability: for the signal distributions following $A$ or $B$, a high signal in
$[-1,+1]$ was to indicate $A$ as more likely, and a low signal to indicate $B$ as more likely. For
this reason an increasing density function was selected to follow $A$ and a decreasing one to
follow $B$. The simplest increasing and decreasing densities on $[-1,+1]$ are the linear functions
(taken to mean ``affine'') that go through $(0,1/2)$. This gave the signal distribution of
Alpern-Chen model. It emerges from results below that these are uniquely determined from requiring
the tail-balance to be linear.
The relation between the two distributions is based on the following assumption of signal symmetry:
\[
\mathbb{P}_{a}[S\leq t\,|\, A]=\mathbb{P}_{a}[-S\leq t\,|\, B].
\]
For a continuous distribution it follows that
\begin{align*}
G_{a}(t) &=\mathbb{P}_{a}[S\leq t\,|\, B]=\mathbb{P}_{a}[-S\geq -t \,|\, B]
=1-\mathbb{P}_{a}[-S\leq -t \,|\, B] \\
&=1-\mathbb{P}_{a}[S\leq -t \,|\, A]=1-F_{a}(-t),
\end{align*}
and so
\[
G_{a}(t) =1-F_{a}(-t).
\]
This confirms that, for a juror of any ability $a$, the probability of receiving a signal less than
$t$ when Nature is $B$ is the same as that of receiving a signal larger than $-t$ when Nature is $A$.
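As a quick, machine-checkable sanity check (not part of the argument), the symmetry relation and the fact that (\ref{Fa}) and (\ref{Ga}) are genuine distribution functions on $[-1,+1]$ for every $0\leq a\leq 1$ can be verified symbolically, for instance with the following sketch.
\begin{verbatim}
import sympy as sp

t, a = sp.symbols('t a', real=True)
F = (t + 1) * (a * t - a + 2) / 4      # signal c.d.f. when Nature is A
G = (t + 1) * (a - a * t + 2) / 4      # signal c.d.f. when Nature is B

# Symmetry relation G_a(t) = 1 - F_a(-t)
assert sp.simplify(G - (1 - F.subs(t, -t))) == 0

# Boundary values F_a(-1) = 0, F_a(1) = 1, and the density is non-negative on [-1, 1]
assert sp.simplify(F.subs(t, -1)) == 0 and sp.simplify(F.subs(t, 1)) == 1
density = sp.simplify(sp.diff(F, t))    # equals (a*t + 1)/2, which is >= 0 for 0 <= a <= 1
print(density)
\end{verbatim}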
The jurors use their private information (signal) $S$ in $[-1,+1]$ to calculate the conditional
probabilities of $A$ and $B$ by considering the relative likelihood that their signal came from the
distribution $F$ or the distribution $G$.
We now wish to relate this continuous signal model to the binary model of Condorcet. We want our
notion of ability $a$ to be a proxy for Condorcet's probability $p$. Since his $p$ runs from
$p_0=1/2$ (a useless signal) to $p_1=1$ (a certain signal) and our ability $a$ runs from $0$ (no
ability, useless signals) to $1$ (highest ability), for fixed $t$ the conditional probability is a
linear transformation, and so for some coefficient $b=b(t)$ we wish to have
\[
\mathbb{P}_{a}[A\,|\, S\geq t] =\frac{1}{2}+b(t)\, a,
\]
as the left-hand side is his $p$ if his signals are restricted to $-1$ (for $B$) and $+1$ (for
$A$), taking any $t$ in $( 0,1)$. We want this conditional probability to be $1$ when $t=+1$
(highest signal) and $a=1$ (highest ability), so this gives $b(1) =1/2$. When $t=-1$ the condition
$S\geq t$ gives no new information for any $a$, so the left-hand side should be the \emph{a priori}
probability of $A$, $\mathbb{P}_a[A]$, which is $1/2$. Hence $b(-1) =0$ and, taking $b$ to be
linear, we get $b(t) =(t+1)/4$, or
\[
\mathbb{P}_{a}[A\,|\, S\geq t] =\frac{1}{2}+\frac{t+1}{4}\,a.
\]
Putting $H_{a}( t) :=\mathbb{P}_{a}[ S\leq t\,|\, A] $, so that $\mathbb{P}_{a}[S\leq t\,|\,
B]=1-H_{a}(-t)$ as above, by Bayes Law and since $\mathbb{P}[B] =\mathbb{P}[A] $, we have
\begin{eqnarray*}
\mathbb{P}_{a}[ A\,|\, S\geq t]
&=&\frac{\mathbb{P}_{a}[ S\geq t\,|\, A] \mathbb{P}[ A] }{\mathbb{P}_{a}[ S\geq t\,|\, A]
\mathbb{P} [A] +\mathbb{P}_{a}[ S\geq t\,|\, B] \mathbb{P}[ B] } \\
&=&\frac{1-H_{a}( t) }{( 1-H_{a}( t) )+( 1-( 1-H_{a}( -t) ) ) } \\
&=&\frac{1-H_{a}( t) }{1-H_{a}( t) +H_{a}(-t) }.
\end{eqnarray*}
The last term, comparing the right tail against the tail sum, is known as the \textit{tail-balance
ratio} \citep[\S8.3]{BinGT}. Its asymptotic behaviour and the regular variation of the tail-sum are
particularly relevant to the Domains of Attraction Theorem of probability theory
\citep[Theorem~8.3.1]{BinGT}. The theorem is concerned with stable laws (stable under addition: the
sum of two independent random variables with that law has, to within scale and centering, the same
law), and identifies those that arise as limits in distribution of appropriately scaled and
centered random walks.
In summary we seek a family, indexed by the ability $a$, of signal distributions $H_{a}\left(
t\right) $ on $\left[ -1,+1\right] $, which correspond to state of Nature $A$ while $1-H_{a}\left(
-t\right) $ correspond to state of Nature $B$, such that by Bayes Law
\[
\mathbb{P}_{a}[ A\,|\, S\geq t] =\frac{1-H_{a}(t)}{1-H_{a}\left( t\right)
+H_{a}(-t) }=\frac{1}{2}+\frac{t+1}{4}~a.
\]
The main technical result of the paper is the following.
\begin{lemma}\label{lem:main}
The unique solution for the c.d.f.\ $H_{a}( t)$ on $[ -1,+1 ] $ to the following functional
equations for $0\leq a\leq 1$:
\[
\frac{1-H_{a}( t) }{1-H_{a}( t) +H_{a}( -t)}=\frac{1}{2}+\frac{t+1}{4}\,a,\ -1\leq t\leq +1
\]
is given by
\[
H_{a}\left( t\right) =F_{a}( t) =( t+1) (at-a+2) /4.
\]
\end{lemma}
The standard text-book treatment of functional equations is \citet{AczD}, but it is often the case
that particular functional equations arising in applications require individual treatment ---
recent such examples are \citet{ElsBFN} and \citet{KahM}; for applications in probability, see
\citet{Ost}.
We will prove this result in Section~\ref{sec:2}, which thus gives the following consequence for
the signal distribution in the jury problem.
\begin{theorem}
The only c.d.f.s on the signal space $[-1,+1]$ that make the conditional probability
$\mathbb{P}_{a}[A\,|\, S\geq t] $ a linear function of the juror's ability $a$ with a slope linear
in $t$ are the Alpern-Chen functions $F_{a}(t)$ and $G_{a}(t)$ given in (\ref{Fa}) and (\ref{Ga}).
\end{theorem}
\section{Tail-balance equation and proof of Lemma~\ref{lem:main}}
\label{sec:2}
The proof of Lemma~\ref{lem:main} in this section is deduced from a more general result concerning
a functional equation of the following type:
\begin{equation}\label{eqn:tail-equation}
\frac{1-H(t)}{1-H(t)+H(-t)}=\alpha (t),\ t\in [-1,+1),
\end{equation}
with $\alpha (t)$ a strictly monotone function, interpreting the left-hand side to be $1$ for
$t=1$. Of interest here are non-negative increasing functions $H$ with $H(-1)=0$ and $H(1)=1$
representing probability distribution functions, hence the adoption of the name ``tail balance''
(as above). Indeed, with these boundary conditions,
\[
\alpha(-1)=\frac{1}{2},
\]
implying that the left and right tails of $H$ are exactly balanced; furthermore,
\[
\frac{1}{2}<\alpha (t)<1,\ t\in (-1,+1),
\]
since
\[
\frac{1-H(t)}{1-H(t)+H(-t)}=1-\frac{H(-t)}{1-H(t)+H(-t)}.
\]
Note that $H(0)=1-\alpha (0)$. The linear case of the last section is thus
\[
\alpha(t)=\frac{1}{2}+\frac{t+1}{4}\,a,\ 0\le a\le 1.
\]
Turning to a general increasing $\alpha $, we may write
\[
\beta (t):=\frac{\alpha (t)}{1-\alpha (t)},\ t\in [-1,+1).
\]
This is again an increasing function with $\beta (t)>1$ for $t\in (-1,+1)$ and $\lim_{t\uparrow
1}\beta(t)=+\infty$ if $\alpha(1)=1$. Thus in the formula below $ \beta(t)\beta(-t)-1>0$ and
$\lim_{t\uparrow 1}(\beta (t)-1)/(\beta (t)\beta (-t)-1)=1$, since $\beta (-1)=1$.
\begin{theorem}\label{thm:tail-solution}
The tail-balance functional equation \eqref{eqn:tail-equation} has the following unique
non-negative solution:
\begin{equation}
H(t)=\frac{\beta (t)-1}{\beta (t)\beta (-t)-1}=\frac{(2\alpha
(t)-1)(1-\alpha (-t))}{\alpha (t)+\alpha (-t)-1}<1. \label{Soln Tail}
\end{equation}
In particular, for $\alpha $ linear as in Lemma~\ref{lem:main}, we have
\begin{equation}
H(t)=\frac{( t+1) ( at-a+2)}{4}. \label{Soln Lin}
\end{equation}
\end{theorem}
\begin{proof}
After some re-arrangement of \eqref{eqn:tail-equation} we have
\begin{equation*}
H(-t)=\frac{1-H(t)}{\beta (t)}.
\end{equation*}
So
\begin{equation*}
\beta (-t)H(t)=1-H(-t)=1-\frac{1-H(t)}{\beta (t)}.
\end{equation*}
Hence
\begin{equation*}
(\beta(t)\beta (-t)-1)H(t)=\beta (t)-1,
\end{equation*}
yielding the asserted formula. As for the inequality, we note the equivalence:
\begin{equation*}
\frac{\beta (t)-1}{\beta (t)\beta (-t)-1}<1\Longleftrightarrow \beta
(t)<\beta (t)\beta (-t).
\end{equation*}
The calculation of the linear case of $\alpha $ is straightforward, and relies on
\begin{equation*}
\alpha (t)+\alpha (-t)-1=a/2,\quad \text{and}\quad 1-\alpha (-t)=(at-a+2)/4.
\end{equation*}
Our theorem follows.
\end{proof}
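The algebra in this proof can also be checked symbolically. The following illustrative sketch substitutes the closed form (\ref{Soln Tail}) back into the tail-balance equation \eqref{eqn:tail-equation}, writing $p=\alpha(t)$ and $q=\alpha(-t)$, and then specialises to the linear $\alpha$ of Lemma~\ref{lem:main} to recover (\ref{Soln Lin}).
\begin{verbatim}
import sympy as sp

t, a, p, q = sp.symbols('t a p q', real=True)   # p = alpha(t), q = alpha(-t)

# Closed-form solution (Soln Tail) in terms of alpha(t) = p and alpha(-t) = q
H_t  = (2*p - 1)*(1 - q) / (p + q - 1)
H_mt = (2*q - 1)*(1 - p) / (p + q - 1)

# Substituting back into the tail-balance equation recovers alpha(t) = p identically
lhs = (1 - H_t) / (1 - H_t + H_mt)
assert sp.simplify(lhs - p) == 0

# Linear case alpha(t) = 1/2 + (t+1)a/4 recovers H(t) = (t+1)(a t - a + 2)/4
alpha_lin = sp.Rational(1, 2) + (t + 1)*a/4
H_lin = H_t.subs({p: alpha_lin, q: alpha_lin.subs(t, -t)})
assert sp.simplify(H_lin - (t + 1)*(a*t - a + 2)/4) == 0
\end{verbatim}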
\begin{remark}
More generally, with $\alpha (t)$ monotone as before and $\mathbb{P}_a[A] =\theta $ with $0<\theta
<1$, so that $\mathbb{P}_a[ B] =1-\theta $, writing the odds $(1-\theta )/\theta $ as $\lambda ,$
the earlier application of Bayes Rule gives for $t\in [-1,+1]$
\begin{equation}\label{Odds}
\mathbb{P}_{a}[ A\,|\, S\geq t] =\frac{1-H_{a}\left( t\right) }{1-H_{a}\left( t\right)
+\lambda H_{a}\left( -t\right) }=\alpha (t).
\end{equation}
Here, as $1+\lambda =1+(1-\theta )/\theta =1/\theta$,
\begin{equation*}
\alpha (-1)=\frac{1}{1+\lambda }=\theta =\mathbb{P}_{a}[ A]
=\mathbb{P}_{a}[ A\,|\, S\geq -1] .
\end{equation*}
\end{remark}
To solve (\ref{Odds}) we apply a similar procedure as in Theorem~\ref{thm:tail-solution} by first
showing the following variant.
\begin{theorem}
The equation for $t\in [-1,+1]$
\begin{equation}
H(-t)=\gamma (t)+\delta (t)H(t) \label{General}
\end{equation}
has solution
\begin{equation}
H(t)=\frac{\gamma (t)\delta (-t)+\gamma (-t)}{1-\delta (t)\delta (-t)},
\label{Soln Gen}
\end{equation}
provided $\delta (t)\delta (-t)\neq 1$.
\end{theorem}
\begin{proof}
Equation (\ref{General}) may be solved by writing
\begin{eqnarray*}
H(t) &=&\gamma (-t)+\delta (-t)H(-t) \\
&=&\gamma (-t)+\delta (-t)(\gamma (t)+\delta (t)H(t)), \\
H(t)(1-\delta (t)\delta (-t)) &=&\gamma (t)\delta (-t)+\gamma (-t),
\end{eqnarray*}
yielding the claim.
\end{proof}
\begin{corollary}\label{cor:solution}
Equation (\ref{Odds}) has solution for $t\in [-1,+1]$
\begin{equation*}
H(t)=\frac{((\lambda +1)\alpha (t)-1)(1-\alpha (-t))}{\alpha (t)+\alpha
(-t)+(\lambda ^{2}-1)\alpha (t)\alpha (-t)-1}.
\end{equation*}
\end{corollary}
\begin{proof}
Equation (\ref{Odds}) may be rewritten as
\[
(1-\alpha (t))=(1-\alpha (t))H(t)+\lambda \alpha (t)H(-t),\ t\in [-1,+1],
\]
so is of the more general form above with
\begin{equation}
\gamma (t)=\frac{1-\alpha (t)}{\lambda \alpha (t)},\quad \delta (t)=-\gamma
(t). \label{gamma}
\end{equation}
For $\gamma (t)=-\delta (t)$, equation (\ref{Soln Gen}) becomes
\begin{equation}
H(t)=\frac{-\gamma (t)\gamma (-t)+\gamma (-t)}{1-\gamma (t)\gamma (-t)}
=\frac{\gamma (-t)(1-\gamma (t))}{1-\gamma (t)\gamma (-t)}. \label{Soln Spec}
\end{equation}
Substitution in (\ref{Soln Spec}) for $\gamma (t)$ from (\ref{gamma}) gives
\begin{align*}
H(t) &=\frac{1-\alpha (-t)}{\lambda \alpha (-t)}\frac{1-\frac{1-\alpha (t)}{\lambda \alpha (t)}}
{1-\frac{1-\alpha (t)}{\lambda \alpha (t)}\frac{1-\alpha(-t)}{\lambda \alpha (-t)}} \\
&=\frac{1-\alpha (-t)}{\alpha (-t)}\cdot \frac{\lambda \alpha (t)\alpha(-t)-(1-\alpha (t))
\alpha (-t)}{(\lambda ^{2}\alpha (t)\alpha(-t))-(1-\alpha (t))(1-\alpha (-t))} \\
&=\frac{((\lambda +1)\alpha (t)-1)(1-\alpha (-t))}{(\lambda ^{2}\alpha(t)\alpha (-t))
-(1-\alpha (t))(1-\alpha (-t))} \\
&=\frac{((\lambda +1)\alpha (t)-1)(1-\alpha (-t))}{\alpha (t)+\alpha(-t)
+(\lambda ^{2}-1)\alpha (t)\alpha (-t)-1}.
\end{align*}
Note that, as $\alpha $ is increasing, $\alpha (t)>\theta $ for $t\in (-1,+1] $ and then $(1-\alpha
(t))/\alpha (t)<(1-\theta )/\theta =\lambda$. So, for $t\in \lbrack -1,+1]$, the denominator in
(\ref{Soln Tail}) is non-zero, as
\begin{equation*}
\frac{1-\alpha (t)}{\alpha (t)}\cdot \frac{1-\alpha (-t)}{\alpha (-t)}<\lambda
^{2}.
\end{equation*}
\end{proof}
\begin{remark}
When $\lambda =1$ the above corollary yields (\ref{Soln Tail}) of Theorem~\ref{thm:tail-solution}.
\end{remark}
\begin{theorem}
The linear case of the general odds tail-balance equation (\ref{Odds}), i.e., with
\[
\alpha (t)=\theta +(t+1)(1-\theta )a/2,
\]
has solution
\begin{align}
H(t) &=\frac{(1+t)(at-a+2)a/4}{-a^{2}/4+a^{2}t^{2}/4+(a+\lambda a^{2}/4
-\lambda a^{2}t^{2}/4)} \label{Soln Odds} \\
& =\left\{\begin{array}{ll}
(1+t)(at-a+2)/(4-a+at^{2}), & \textrm{as }\lambda \rightarrow 0, \\
(1+t)(at-a+2)a/4, & \textrm{if } \lambda =1, \\
(at-a+2)/(\lambda a(1-t))=2/(\lambda a(1-t))-1/\lambda , & \textrm{as }
\lambda \rightarrow \infty.
\end{array}
\right. \nonumber
\end{align}
\end{theorem}
\begin{proof}
In the general state of Nature case (\ref{Odds}), specializing to the linear case and repeating the
argument in Section~\ref{sec:Introduction} does indeed give
\[
\alpha (t)=\theta +(t+1)(1-\theta )a/2,
\]
as then $\alpha (-1)=\theta $ and, for $a=1$, $\alpha (1)=\theta +(1-\theta )=1$. Noting that
\[
1-\alpha (-t) =1-\theta -\frac{1}{2}(1-t)(1-\theta )a=\frac{1}{2}(1-\theta)(at-a+2)
\]
and
\[
\alpha (t)+\alpha (-t)-1 = 2\theta +(1-\theta )a-1=\theta +(1-\theta )(a-1),
\]
and, writing $\alpha (t)=B+At$ for convenience, from Corollary~\ref{cor:solution} we derive
\begin{eqnarray*}
H(t) &=&\frac{((\lambda +1)(B+At)-1)(1-\theta )(at-a+2)/2}{\theta +(1-\theta
)(a-1)+(\lambda ^{2}-1)(B^{2}-A^{2}t^{2})} \\
&=&\frac{((B+At)/\theta -1)\lambda (at-a+2)/2}{1+\lambda (a-1)+(\lambda
^{2}-1)(B^{2}-A^{2}t^{2})/\theta } \\
&=&\frac{\lambda ^{2}(1+t)(at-a+2)a/4}{1+\lambda (a-1)+(\lambda
-1)(B^{2}-A^{2}t^{2})/\theta ^{2}},
\end{eqnarray*}
as $1+\lambda =1/\theta $ (as above), $B/\theta =1+\lambda a/2$ and $A/\theta =\lambda a/2$.
Finally, writing $\mu =1/\lambda$, we obtain
\begin{equation*}
H(t)=\frac{(1+t)(at-a+2)a/4}{\mu ^{2}+\mu (a-1)+(1/\mu -1)((\mu
+a/2)^{2}-a^{2}t^{2}/4)},
\end{equation*}
and here the denominator is
\begin{equation*}
D(t):=-a^{2}/4+a^{2}t^{2}/4+(a+\lambda a^{2}/4-\lambda a^{2}t^{2}/4),
\end{equation*}
so that $D(t)\sim \lambda a^{2}(1-t^{2})/4$ as $\lambda \rightarrow \infty$ and
\[
\lim_{\lambda \rightarrow 0}D(t) =(4a-a^{2}+a^{2}t^{2})/4.
\]
\end{proof}
\begin{remark}
When $\lambda =1$ we retrieve from (\ref{Soln Odds}) the formula $(1+t)(at-a+2)/4$ as in (\ref{Soln
Lin}).
\end{remark}
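As an illustrative numerical check (not part of the derivation), one can verify that the closed form (\ref{Soln Odds}) satisfies the general odds tail-balance equation (\ref{Odds}) with the linear $\alpha(t)=\theta +(t+1)(1-\theta )a/2$, and that it reduces to (\ref{Soln Lin}) when $\lambda =1$; a short sketch follows.
\begin{verbatim}
import numpy as np

def H(t, a, lam):
    """Closed form (Soln Odds)."""
    num = (1 + t) * (a * t - a + 2) * a / 4
    den = -a**2/4 + a**2 * t**2/4 + (a + lam * a**2/4 - lam * a**2 * t**2/4)
    return num / den

def alpha(t, a, theta):
    return theta + (t + 1) * (1 - theta) * a / 2

rng = np.random.default_rng(0)
for _ in range(1000):
    a = rng.uniform(0.05, 1.0)
    theta = rng.uniform(0.05, 0.95)
    lam = (1 - theta) / theta                 # odds lambda = P[B]/P[A]
    t = rng.uniform(-0.99, 0.99)
    lhs = (1 - H(t, a, lam)) / (1 - H(t, a, lam) + lam * H(-t, a, lam))
    assert np.isclose(lhs, alpha(t, a, theta))        # equation (Odds) holds

# lambda = 1 (theta = 1/2) reduces to the Alpern-Chen form (Soln Lin)
for t in np.linspace(-1.0, 1.0, 21):
    assert np.isclose(H(t, 0.7, 1.0), (1 + t) * (0.7 * t - 0.7 + 2) / 4)
\end{verbatim}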
\section{An alternative proof of Lemma~\ref{lem:main}}
In this section we give, as an alternative to Theorem~\ref{thm:tail-solution}, a direct proof of
Lemma~\ref{lem:main}, as it is of independent interest.
Consider the following equation:
\begin{equation}
\frac{1-H_{a}(t)}{1-H_{a}(t)+H_{a}(-t)}=\frac{2+(1+t)a}{4}.
\label{eqn:functional_equation}
\end{equation}
We are to show that $H_{a}(t)=(1+t)(2+at-a)/4$. Introduce two functions as follows:
\begin{align*}
f_{a}(t)& :=H_{a}(t)-H_{a}(-t), \\
g_{a}(t)& :=H_{a}(t)+H_{a}(-t).
\end{align*}
Then \eqref{eqn:functional_equation} can be re-written as
\begin{equation}
H_{a}(t)=1-\frac{2+(1+t)a}{4}\left( 1-f_{a}(t)\right) .
\label{eqn:functional_equation1}
\end{equation}
Similarly, replacing $t$ by $-t$ in \eqref{eqn:functional_equation} leads to
\begin{equation}
H_{a}(-t)=1-\frac{2+(1-t)a}{4}\left( 1+f_{a}(t)\right) .
\label{eqn:functional_equation2}
\end{equation}
Subtraction of \eqref{eqn:functional_equation2} from
\eqref{eqn:functional_equation1} gives
\begin{equation*}
f_{a}(t)=\frac{2+(1-t)a}{4}\left( 1+f_{a}(t)\right) -\frac{2+(1+t)a}{4}
\left( 1-f_{a}(t)\right) =\frac{2+a}{2}f_{a}(t)-\frac{a}{2}t,
\end{equation*}
that is $f_{a}(t)=t$. On the other hand, summation of
\eqref{eqn:functional_equation1} and \eqref{eqn:functional_equation2} leads
to
\begin{equation*}
g_{a}(t)=\frac{at}{2}f_{a}(t)+\frac{2-a}{2}=\frac{2-a+at^{2}}{2}.
\end{equation*}
Therefore, we obtain
\begin{equation*}
H_{a}(t)=\frac{f_{a}(t)+g_{a}(t)}{2}=\frac{(1+t)(2+at-a)}{4},
\end{equation*}
as desired. $\square $
\section{Model and Assumptions}
\subsection{Real-time Tasks and Scheduling Model} \label{sec:rt_model}
Consider a set of $N_R$ RT tasks $\Gamma_R = \lbrace \tau_1, \tau_2, \cdots, \tau_{N_R} \rbrace$, scheduled on a multicore platform with $M$ identical cores $\mathcal{M} = \lbrace \pi_1, \pi_2, \cdots, \pi_{M} \rbrace$. Each RT task $\tau_r$ releases an infinite sequence of task instances, also called \textit{jobs}, and is represented by the tuple $(C_r, T_r, D_r)$ where $C_r$ is the worst-case execution time (WCET), $T_r$ is the minimum inter-arrival time ({\it e.g.,}\xspace period) and $D_r$ is the relative deadline.
The utilization of each task is denoted by $U_r = \frac{C_r}{T_r}$.
We assume constrained deadlines for RT tasks ({\it e.g.,}\xspace $D_r \leq T_r$) and that the task priorities are assigned according to rate-monotonic (RM)~\cite{Liu_n_Layland1973} order ({\it e.g.,}\xspace shorter period implies higher priority).
All events in the system happen with the precision of integer clock ticks ({\it i.e.,}\xspace processor clock cycles), that is, any time $t$ involved in scheduling is a non-negative integer. In this paper we consider RT tasks that are scheduled using a partitioned fixed-priority preemptive scheme~\cite{mutiprocessor_survey} and assigned to the cores using a standard task partitioning algorithm~\cite{parti_see, mutiprocessor_survey}. We further assume that the RT tasks are \textit{schedulable}, {\it viz.,}\xspace the worst-case response time (WCRT), denoted as $\mathcal{R}_r$, does not exceed the deadline ({\it e.g.,}\xspace $\mathcal{R}_r \leq D_r, \forall \tau_r$) and the following necessary and sufficient schedulability condition holds for each RT task $\tau_r$ assigned to any given core $\pi_m$~\cite{parti_see}:
\begin{equation}
\exists t : 0 < t \leq D_r \text{~and~} C_r + \hspace*{-1.5em} \sum\limits_{\tau_i \in hp(\tau_r, \pi_m)} \hspace*{-0.2em} \left\lceil \frac{t}{T_i} \right\rceil C_i \leq t,
\end{equation}
where $hp(\tau_r, \pi_m)$ denotes the set of RT tasks with higher priority than $\tau_r$ assigned to core $\pi_m$.
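The condition above is the classical time-demand (response-time) test applied per core. For illustration, the following is a minimal sketch of this test; the task parameters and the two-core partition in the example are hypothetical (the core-0 numbers loosely mirror the RT tasks used later in the evaluation) and are not part of the analysis.
\begin{verbatim}
from math import ceil

def core_schedulable(tasks):
    """Time-demand analysis for one core.
    tasks: list of (C, T, D) tuples, sorted by rate-monotonic priority (smallest T first)."""
    for i, (C, T, D) in enumerate(tasks):
        hp = tasks[:i]                        # higher-priority tasks on the same core
        t = C + sum(c for c, _, _ in hp)      # smallest possible demand, used as initial guess
        while t <= D:
            demand = C + sum(ceil(t / Tj) * Cj for Cj, Tj, _ in hp)
            if demand == t:                   # fixed point found: R_i = t <= D_i
                break
            t = demand
        else:
            return False                      # no fixed point within the deadline
    return True

# Hypothetical partitioned task set on two cores, (C, T, D) in ms, implicit deadlines.
core0 = [(240, 500, 500), (1120, 5000, 5000)]
core1 = [(100, 400, 400), (300, 1000, 1000)]
print(all(core_schedulable(sorted(core, key=lambda x: x[1])) for core in (core0, core1)))
\end{verbatim}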
\subsection{Security Model}
Our focus is on integrating given security mechanisms abstracted as security tasks into a legacy multicore RTS without impacting the RT functionality of the RTS. While we use specific intrusion detection mechanisms ({\it e.g.,}\xspace Tripwire) to demonstrate our approach, our approach is somewhat agnostic to the security mechanisms. The security model used and the design of security tasks are orthogonal problems. Since we aim to maximize the frequency of execution of security tasks, security mechanisms whose performance improves with frequency of execution ({\it e.g.,}\xspace intrusion monitoring and detection tasks) benefit from our framework.
\section{Security Integration Framework} \label{sec:se_int_frmakework}
We propose to improve the security posture of multicore-based RT systems by integrating additional \textit{periodic security tasks} ({\it e.g.,}\xspace tasks that are specifically designed for intrusion detection purposes). We highlight that ZORE\xspace abstracts security tasks and allows designers to execute \textit{any given technique}. Our focus here is on the integration of a given set of security tasks ({\it e.g.,}\xspace intrusion detection mechanisms) into an existing multicore RTS without impacting the RT task parameters ({\it e.g.,}\xspace WCET, periods, {\it etc.}) or their task execution order. In general, the addition of security mechanisms may increase the execution time of existing tasks~\cite{lin2009static,xie2007improving}
or reduce schedulability~\cite{sibin_RT_security_journal}.
As we mentioned earlier, our focus is on legacy multicore systems where designers
may not have enough flexibility to modify system parameters
to integrate security mechanisms.
We address this problem by allowing security tasks to execute with a priority lower than all the RT tasks, {\it i.e.,}\xspace leverage \emph{opportunistic execution}~\cite{mhasan_rtss16, mhasan_date18}. This way, security tasks will only execute during the slack time ({\it e.g.,}\xspace when a core is idle) and the timing requirements of the RT tasks will not be perturbed. However, in contrast to prior work (HYDRA\xspace)~\cite{mhasan_date18} where the security tasks are statically bound to their respective cores, in this paper we allow security tasks to continuously migrate at runtime ({\it i.e.,}\xspace the combined taskset with RT and security tasks follows a semi-partitioned scheduling policy) whenever any core is available ({\it e.g.,}\xspace when other RT or higher-priority security tasks are not running). An illustration of ZORE\xspace is presented in Fig.~\ref{fig:se_int_fig} where two RT tasks (represented by blue and green rectangles) are partitioned into two cores and a newly added security task (red rectangle) can move across cores.
\begin{figure}
\centering
\includegraphics[scale=0.20]{se_int_fig}
\caption{Illustration of our security integration framework for a dual-core platform: two RT tasks (blue and green) are statically assigned to two cores (core 0 and core 1, respectively). We propose to integrate a security task (red) that will execute with the lowest priority and can be migrated to either core (whichever is idle) at runtime.}
\label{fig:se_int_fig}
\end{figure}
As we shall see in Section \ref{sec:evaluation}, allowing security tasks to execute on any available core gives us the opportunity to execute security tasks more frequently ({\it e.g.,}\xspace with shorter periods), and that leads to better responsiveness (faster intrusion detection). One fundamental question with our security integration approach is how often to execute the security tasks so that the system remains schedulable ({\it e.g.,}\xspace the WCRT is less than the period) while the security tasks still execute within a designer-provided frequency bound (so that the security checking remains effective). This is different from scheduling traditional RT tasks, since the RT task parameters ({\it e.g.,}\xspace periods) are often derived from physical system properties and cannot be adjusted due to control/application requirements. We now formally define security tasks.
\subsubsection*{Security Tasks}
Let us include a set of $N_S$ security tasks $\Gamma_S = \lbrace\tau_1, \tau_2, \cdots, \tau_{N_S} \rbrace$ in the system. We adopt the periodic security task model \cite{mhasan_rtss16} and represent each security task by the tuple $(C_s, T_s, T_s^{max})$ where $C_s$ is the WCET, $T_s$ is the (unknown) period ({\it e.g.,}\xspace $\tfrac{1}{T_s}$ is the monitoring frequency) and $T_s^{max}$ is a designer provided upper bound of the period -- if the period of the security task is higher than $T_s^{max}$ then the responsiveness is too low and security checking may not be effective.
We assume that priority of the security tasks are distinct and specified by the designers ({\it e.g.,}\xspace derived from specific security requirements). Security tasks have implicit deadlines, {\it i.e.,}\xspace they need to finish execution before the next invocation. We also assume that task migration and context switch overhead is negligible compared to WCET of the task. Our goal here is to find a minimum period $T_s \leq T_s^{max}$ (so that the security tasks can execute more frequently) such that the taskset remains schedulable ({\it e.g.,}\xspace $\forall \tau_s \in \Gamma_S$: $\mathcal{R}_s \leq T_s$ where $\mathcal{R}_s$ is the WCRT\footnote{The calculation of WCRT is presented in Section \ref{ref:se_wcrt_cal}.} of $\tau_s$).
\section{Period Selection} \label{sec:period_selection}
The actual periods of the security tasks are not known -- we need to find periods that ensure schedulability and give us a better monitoring frequency. Mathematically this can be expressed as the following optimization problem: $\underset{T_s, \forall \tau_s \in \Gamma_S}{\operatorname{minimize}} \sum\limits_{\tau_s \in \Gamma_S} T_s$, subject to $\mathcal{R}_s \leq T_s \leq T_s^{max}, \forall \tau_s \in \Gamma_S$. This is a non-trivial problem since the period of $\tau_s$ can be anything in $[\mathcal{R}_s, T_s^{max}]$ and the response time $\mathcal{R}_s$ is variable, as it depends on the periods of the other higher-priority security tasks. We first derive the WCRT of the security tasks and use it as a (lower) bound to find the periods. Our WCRT calculation for security tasks is based on the existing iterative analysis for global multicore scheduling~\cite{guan2009new_wcrt_bound,sun2014improving_wcrt2,global_rta_sanjay} and we modify it to account for the fact that RT tasks are partitioned.
\input{Sections/analysis_new.tex}
\begin{comment}
\subsection{Preliminaries} \label{sec:background}
Since security tasks are allowed to migrate, our WCRT analysis follows prior research~\cite{guan2009new_wcrt_bound,sun2014improving_wcrt2,global_rta_sanjay} on multicore global schedulability analysis for fixed-priority tasks. We start by briefly reviewing the relevant terminology and parameters. We are interested in determining the response time of a job $\tau_s^k$ of task $\tau_s$ ({\it e.g.,}\xspace job under analysis) and each iteration of the response time is denoted by $x$.
\begin{definition}[Busy Period]
The \textit{busy period} of $\tau_s^k$ is the maximal continuous
time intervals $[t_1, t_2)$ (until $\tau_s^k$ finishes) when all the cores are executing either higher priority tasks or $\tau_s^k$ itself.
\end{definition}
\begin{definition}[Interference]
Given task $\tau_i$, the interference $I_{\tau_s \leftarrow \tau_i}$ caused by $\tau_i$ on $\tau_s^k$ is the number of time units in the busy period when $\tau_i$ executes while $\tau_s^k$ does not.
\end{definition}
Note that the job under analysis $\tau_s^k$ does not execute if all cores are busy with higher priority tasks; hence, the length of the busy period is at most $\left\lfloor \tfrac{\Omega_s}{M} \right\rfloor +C_s$ by definition, where $\Omega_s$ is the sum of the interference caused by all higher priority tasks on $\tau_s^k$. To compute the value of $I_{\tau_s \leftarrow \tau_i}$, we rely on the concept of \textit{workload}.
\begin{definition}[Workload]
The \textit{workload} $W_i(x)$ of a task $\tau_i$ in a window of length $x$ represents the accumulated execution time of $\tau_i$ within this time interval.
\end{definition}
For a busy period of length $x$, the interference caused by $\tau_i$ on $\tau_s^k$ is then given by:
\begin{equation} \label{eq:intf_ci_nc}
I_{\tau_s \leftarrow \tau_i}(x, W_i(x)) = \min \left( W_i(x), x - C_s + 1 \right).
\end{equation}
\begin{figure}
\centering
\captionsetup[subfigure]{oneside,margin={3.2cm,0cm}}
\begin{subfigure}[]{0.5\linewidth}
\centering \includegraphics[scale=0.28]{ci_workload}
\caption{\label{fig:ci_workload}}
\end{subfigure}%
\begin{subfigure}[]{0.5\linewidth}
\centering \includegraphics[scale=0.28]{nc_workload}
\caption{\label{fig:nc_workload}}
\end{subfigure}%
\caption{Illustration of: \textit{(a)} carry-in task and \textit{(b)} non-carry-in task for a window of size $x$. The arrival time of the task $\tau_{i}$ is denoted by $a_{i}$.}
\label{fig:ci_nc_worload}
\end{figure}
It remains to compute the workload of each higher priority task $\tau_i$. The workload computation depends on the arrival time of the task relative to the beginning of the busy period, as specified in the following definition.
\begin{definition}[Carry-in]
A task $\tau_i$ is called a \textit{carry-in} task if
there exists one job of $\tau_i$ that has been released before the beginning
of a given time window and executes within the window (see Fig~\ref{fig:ci_workload}). If no such
job exists, $\tau_i$ is referred to as a \textit{non-carry-in} task (see Fig.~\ref{fig:nc_workload}).
\end{definition}
\hl{The following section updated -- MH}
\subsection{Workload Calculation for RT Tasks} \label{sec:intf_cal}
In the following we first derive the workload for RT tasks and use this results to analyze response time for the security tasks.
Generally (but not always), the workload of a task $\tau_i$ in the busy period is higher if $\tau_i$ is a carry-in task than a non-carry-in task. Hence, it is important to limit the number of higher priority carry-in tasks.
We calculate the workload of RT tasks by showing that RT tasks with carry-in does not represent the worst-case scenario (see Lemma~\ref{lemma:rt_workload} for the formal proof). In particular, we follow an approach similar to prior research~\cite{guan2009new_wcrt_bound,global_rta_sanjay} and extend the busy period of $\tau_s^k$ from its arrival time (denoted by $a_s$) to an earlier time instance $t_0$ (see Fig.~\ref{fig:busy_period_extension}) such that during any time instance $t \in [t_0, a_s)$ all cores are busy executing tasks with higher priority than $\tau_s$. Note that by definition, this implies that there was at least one free core ({\it i.e.,}\xspace not executing higher priority tasks) at time $t_0 - 1$.
\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{extended_busy_period}
\caption{Extension of busy period for bounding the number of carry-in higher priority security tasks.}
\label{fig:busy_period_extension}
\end{figure}
\begin{figure}
\centering
\captionsetup[subfigure]{oneside,margin={2.0cm,0cm}}
\begin{subfigure}[]{0.33\linewidth}
\hspace*{-3.5em}
\centering \includegraphics[scale=0.18]{rt_ci_1}
\caption{\label{fig:rt_ci_1}}
\end{subfigure}%
\begin{subfigure}[]{0.33\linewidth}
\hspace*{-2.5em}
\centering\includegraphics[scale=0.18]{rt_ci_2}
\caption{\label{fig:rt_ci_2}}
\end{subfigure}%
\begin{subfigure}[]{0.33\linewidth}
\centering
\vspace*{-1.1em} \includegraphics[scale=0.18]{rt_ci_3}
\caption{\label{fig:rt_ci_3}}
\end{subfigure}%
\caption{Illustration of critical instance for higher priority RT tasks for a window of size $x$. }
\end{figure}
\setcounter{theorem}{0}
\begin{lemma} \label{lemma:rt_workload}
The workload of the higher priority RT tasks will be maximized if all RT tasks are released synchronously at $t_0$ ({\it i.e.,}\xspace beginning of the extended busy period).
\end{lemma}
\begin{proof}
Consider the RT tasks running on a given core $\pi_m$. Note that any number of such task can have carry-in (since they are partitioned to the core). In what follows we prove that RT tasks with carry-in will not result in the worst-case scenario.
\textit{First}, consider the case when no RT task on $\pi_m$ has carry-in
(see Fig.~\ref{fig:rt_ci_1}).
In that case the maximum workload in the busy period of length $x$ trivially corresponds to the worst-case scenario in the single core fixed-priority scheduling case~\cite{Liu_n_Layland1973}, {\it i.e.,}\xspace all RT tasks running on $\pi_m$ activated together with $\tau_s^k$ at $t_0$ and the subsequent jobs of RT tasks are then released as soon as possible. \textit{Second}, assume that at least one RT task has carry-in for the busy period of length $x$ starting at $t_0$. Figure~\ref{fig:rt_ci_2} illustrates an example of such a scenario where $\tau_{1}$ and $\tau_{3}$ have carry-in.
Define $t^\prime = t_0 - \Delta$ as the earliest time such that no RT task runs at $t_0 - 1$ on $\pi_m$.
Note that if we shift the time instance $t^\prime$ to the right by $\Delta$ ({\it i.e.,}\xspace shift the arrival time of all RT jobs that arrive at or after $t_0 - \Delta$ to the right by $\Delta$),
it will not decrease the workload of the busy period. Also note that no RT task now runs at $t_0-1$ ({\it i.e.,}\xspace no RT task has carry-in); hence, we can again maximize the workload by releasing all jobs as soon as possible at or after $t_0$.
Therefore RT tasks having carry-in does not represent the worst-case ({\it i.e.,}\xspace they must be released at $t_0$, see Fig.~\ref{fig:rt_ci_3}) and the proof follows.
\end{proof}
Since the RT task $\tau_r$ does not have carry-in for the worst-case situation, the workload bound for the interval of length $x$ can be calculated assuming that $\tau_r$ is released at the beginning of the interval (see Fig.~\ref{fig:nc_workload}):
\begin{equation} \label{eq:nc_workload}
W_r^{NC}(x)=\left\lfloor \frac{x}{T_r} \right\rfloor C_r + \min (x~\mathsf{mod}~T_r, C_r).
\end{equation}
\hl{the following section is also revised -- MH}
\subsection{Response Time Analysis for Security Tasks} \label{sec:wcrt_calculation}
In Section \ref{sec:intf_cal} we show that RT tasks with carry-in do not represent the worst-case scenario for the busy period starting at $t_0$.
We now argue that there are at most $M-1$ higher priority security tasks that can have carry-in ({\it i.e.,}\xspace the job under analysis $\tau_s^k$ will experience worst-case carry-in interference only from other $M-1$ high priority security tasks). We prove the claim by using the following lemma.
\setcounter{theorem}{1}
\begin{lemma}
\label{lemma_ci_se}
At most $M-1$ security tasks can have carry-in.
\end{lemma}
\begin{proof}
The maximum number of higher priority security tasks that can have carry-in at $t_0$ is $M-1$ since by definition there have to be strictly less than $M$ higher priority security tasks active at time $t_0 -1$ (otherwise they will occupy all the cores).
\end{proof}
Notice that if a security task $\tau_s$ does not have carry-in, we can calculate the workload bound $W_s^{NC}(x)$ for the interval $x$ using Eq.~(\ref{eq:nc_workload}). Likewise, the workload bound for a carry-in security task $\tau_s$ in an interval of length $x$ starting at $t_0$ is given by (see Fig.~\ref{fig:ci_workload}):
\begin{equation} \label{eq:ci_workload}
W_s^{CI}(x) = W_s^{NC}\left( \max(x - \bar{x}_s, 0) \right) + \min(x, C_s - 1)
\end{equation}
where
$\bar{x}_s = C_s - 1 + T_s - \mathcal{R}_s$. Note that we can bound the workload of the first carry-in job to $C_s - 1$ because the job must have started executing at the latest at $t_0 - 1$ (given that not all cores are busy).
Now recall that security tasks execute with a priority lower than all the RT tasks; hence they experience interference from all the RT and higher priority security tasks.
The interference experienced by $\tau_s^k$ from RT task $\tau_r$ can be calculated by Eq.~(\ref{eq:intf_ci_nc}) and replacing $W_r(x) = W_r^{NC}(x)$.
We can also calculate the interference $I_{\tau_s \leftarrow \tau_i}(x, W_i(x))$ from a higher priority security task $\tau_i$ using
Eq.~(\ref{eq:intf_ci_nc}), where $W_i(x) = W_i^{CI}(x)$, if $\tau_i$ has carry-in, otherwise $W_i(x) = W_i^{NC}(x)$.
Note that the WCRT and periods of security task in the carry-in workload function
(see Eq.~(\ref{eq:ci_workload}))
is actually an unknown parameter. However, we follow an iterative scheme that allows us to calculate the period and WCRT of all higher priority security tasks before we calculate the interference for task $\tau_s$ (Section \ref{sec:alg}).
Let $hp_S(\tau_s$) denote the set of security tasks with a higher priority than $\tau_s$ and $hp(\tau_s) = \Gamma_R \cup hp_S(\tau_s)$ represents set of all ({\it e.g.,}\xspace RT and security) higher priority tasks. Note that we do not know which $M-1$ security tasks in $hp_S(\tau_s)$ have carry-in. In order to derive the WCRT of $\tau_s$, let us define $\mathcal{Z}_{\tau_s} \subset \Gamma \times \Gamma$ as the set of all partitions of $hp(\tau_s)$ into two subsets $\Gamma_s^{NC}$ and $\Gamma_s^{CI}$ ({\it e.g.,}\xspace the non overlapping set of carry-in and non-carry-in tasks) such that:
\begin{equation*}
\Gamma_s^{NC} \cap \Gamma_s^{CI} = \emptyset,
\Gamma_s^{NC} \cup \Gamma_s^{CI} = hp(\tau_s),
\Gamma_s^{CI} \cap \Gamma_R = \emptyset
\text{~and~}
|\Gamma_s^{CI}| \leq M-1,
\end{equation*}
{\it e.g.,}\xspace there are at most $M-1$ carry-in tasks and RT tasks do not belong in the carry-in set.
For a given carry-in and non-carry-in set ({\it e.g.,}\xspace $\Gamma_s^{NC}$ and $\Gamma_s^{CI}$), we can calculate the total interference experienced by $\tau_s$ as follows:
\begin{equation}
\Omega_s(x, \Gamma_s^{NC}, \Gamma_s^{CI}) = \hspace*{-1em} \sum_{\tau_i \in \Gamma_s^{NC}} \hspace*{-0.5em} I_{\tau_s \leftarrow \tau_i}(x, W_i^{NC}(x))~+\hspace*{-0.5em} \sum_{\tau_i \in \Gamma_s^{CI}} \hspace*{-0.5em} I_{\tau_s \leftarrow \tau_i}(x, W_i^{CI}(x)).
\end{equation}
Recall that $\Gamma_s^{NC}$ can contain both RT and security tasks while $\Gamma_s^{CI}$ contains only security tasks. For a given $\Gamma_s^{NC}, \Gamma_s^{CI}$ sets response time $\mathcal{R}_{s | {(\Gamma_s^{NC}, \Gamma_s^{CI}})}$ will be the minimal solution of the following iteration\footnote{Note that the worst-case is when the job arrives at $t_0$ ({\it i.e.,}\xspace $a_s = t_0$).}~\cite{guan2009new_wcrt_bound}:
\begin{equation}
x = \left\lfloor \frac{\Omega_s(x, \Gamma_s^{NC}, \Gamma_s^{CI})}{M}\right\rfloor + C_s.
\end{equation}
We can solve this using an iterative fixed-point search with the initial condition $x=C_s$. The iteration terminates when $x > T_s^{max}$ since $\tau_s$ becomes trivially unschedulable for WCRT greater than $T_s^{max}$. Finally we can calculate the WCRT of $\tau_s$ as follows:
\begin{equation}
\mathcal{R}_s = \max\limits_{\left( \Gamma_s^{NC}, \Gamma_s^{CI} \right) \in \mathcal{Z}_{\tau_s} } \mathcal{R}_{s | {(\Gamma_s^{NC}, \Gamma_s^{CI}})} .
\end{equation}
\end{comment}
\subsection{Algorithm} \label{sec:alg}
The security task $\tau_s$ remains schedulable with any period $T_s \in [\mathcal{R}_s, T_s^{max}]$. However as mentioned earlier, the calculation of $\mathcal{R}_s$ requires us to know the period and response time of other high priority tasks $\tau_h \in hp_S(\tau_s)$. Also if we arbitrarily set $T_s = \mathcal{R}_s$ (since this allows us to execute security tasks more frequently) it may negatively affect the schedulability of other tasks that are at a lower priority than $\tau_s$ because of a high degree of interference from $\tau_s$. Hence, we developed an iterative algorithm that gives us a trade-off between schedulability and monitoring frequency.
\input{Sections/algorithm}
Our proposed solution (refer to Algorithm \ref{alg:mc_period_selection} for a formal description) works as follows. We first fix the period of each security task to $T_s^{max}$ and calculate the response time $\mathcal{R}_s$ using the approach presented in Section \ref{sec:wcrt_calculation} (Line 1). If there exists a task $\tau_j$ such that $\mathcal{R}_j > T_j^{max}$ we report the taskset as unschedulable (Line 2), since it is not possible to find periods for the security tasks within the designer-provided bounds -- this unschedulability result helps the designer modify the requirements (and perhaps the RT tasks' parameters, if possible) accordingly in order to integrate security tasks into the target system. If the taskset is schedulable with $T_s^{max}$, we then iteratively optimize the periods from higher to lower priority order (Lines 5-9) and return the periods (Line 10). To be specific, for each task $\tau_s \in \Gamma_S$ we perform a logarithmic search \cite[Ch. 6]{knuth1997art} (see Algorithm \ref{alg:mc_log_search} for the pseudocode) and find the minimum period $T_s^*$ within the range $[\mathcal{R}_s, T_s^{max}]$ such that all lower-priority tasks (denoted as $lp(\tau_s)$) remain schedulable, {\it e.g.,}\xspace $\forall \tau_j \in lp(\tau_s): \mathcal{R}_j \leq T_j^{max}$ (Line 7). Note that since we perform these steps from higher to lower priority order, the WCRT and period of all higher-priority tasks ({\it e.g.,}\xspace $\forall \tau_h \in hp(\tau_s)$) are already known. We then update the response times of all lower-priority tasks $\tau_j \in lp(\tau_s)$ considering the interference induced by the newly calculated period $T_s^*$ (Line 8) and repeat the search for the next security task.
\input{Sections/algo_logsearch.tex}
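To make the procedure concrete, the following is a simplified sketch of the period-selection loop (Algorithm \ref{alg:mc_period_selection}) together with the logarithmic search (Algorithm \ref{alg:mc_log_search}). The sketch abstracts the WCRT analysis of Section \ref{sec:wcrt_calculation} behind a callback \texttt{wcrt}; the callback, the integer-granularity search and the re-checking of \emph{all} security tasks (rather than only the lower-priority ones) are simplifying assumptions for illustration, not the exact implementation.
\begin{verbatim}
def select_periods(security_tasks, wcrt, t_max):
    """Simplified sketch of the period-selection loop.

    security_tasks: task ids in decreasing priority order.
    wcrt(s, periods): assumed callback returning the WCRT of security task s
                      under the current period assignment of all security tasks.
    t_max[s]: designer-provided upper bound on the period of task s.
    """
    # Step 1: start from the maximum periods and check feasibility.
    periods = dict(t_max)
    for s in security_tasks:
        if wcrt(s, periods) > t_max[s]:
            return None                      # unschedulable within the given bounds

    # Step 2: shrink periods from higher to lower priority.
    for s in security_tasks:
        lo, hi = wcrt(s, periods), periods[s]
        best = hi
        while lo <= hi:                      # logarithmic (binary) search over integer periods
            mid = (lo + hi) // 2
            periods[s] = mid
            if all(wcrt(j, periods) <= periods[j] for j in security_tasks):
                best, hi = mid, mid - 1      # feasible: try an even smaller period
            else:
                lo = mid + 1                 # infeasible: back off
        periods[s] = best
    return periods
\end{verbatim}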
\section{Discussion}
In this paper we do not design any specific security tasks (the IDS mechanisms used are meant for demonstration purposes only) and allow designers to integrate their preferred techniques. Depending on the actual implementation of the security tasks, some attacks may not be detectable. For instance, the system may be vulnerable to zero-day attacks if the security tasks are not designed to detect unforeseen exploits or anomalous behaviors. There exist cases where security tasks may require some system modifications and/or porting effort -- say, a timing-behavior-based security check~\cite{hamad2018prediction,securecore,dragonbeam} may require the insertion of probing mechanisms inside the RT application tasks (or additional hardware) so that security tasks can validate their expected execution profiles.
ZORE\xspace abstracts security tasks (and the underlying monitoring events) and works in a proactive manner. However, designers may want to integrate security tasks that \textit{react} to anomalous behavior. For instance, suppose that at time $t$ the $j$-th job of task $\tau_s$ ({\it e.g.,}\xspace $\tau_s^j$) performs action $\mathtt{a}_0$ ({\it e.g.,}\xspace checking the runtime of the real-time tasks). Because of an intrusion (or perhaps other system artifacts) during the interval $[t, t+T_s]$ ($T_s$ is the period of $\tau_s$), job $\tau_s^{j+1}$ finds that the behavior checked by $\mathtt{a}_0$ is not as expected. Therefore $\tau_s^{j+1}$ may perform both actions, $\mathtt{a}_0$ and $\mathtt{a}_1$ (say, an action that checks the list of system calls to see if any undesired calls were executed). One way to support such a feature is to consider the dependency ({\it i.e.,}\xspace $\mathtt{a}_1$ depends on $\mathtt{a}_0$ in this case) between security checks ({\it e.g.,}\xspace sub-tasks). We intend to extend our framework considering dependencies between security tasks.
\section{Evaluation} \label{sec:evaluation}
We evaluate ZORE\xspace on two fronts: {\it (i) } a proof-of-concept implementation on an ARM-based rover platform with security applications -- to demonstrate the viability of our scheme in a realistic setup (Section \ref{sec:exp_rover}); and {\it (ii) } with synthetically generated workloads for broader design-space exploration (Section \ref{sec:exp_synthetic}). Our implementation code will be made available in a public, open-sourced repository~\cite{mhasan_conex_implementation}.
\subsection{Experiment with an Embedded Platform and Security Applications} \label{sec:exp_rover}
\subsubsection{Platform Overview}
We implemented our ideas on a rover platform
manufactured by Waveshare~\cite{waveshare}.
The rover hardware/peripherals ({\it e.g.,}\xspace wheel, motor, servo, sensor, {\it etc.}) are controlled by a Raspberry Pi 3 (RPi3) Model B~\cite{rpi3} SBC (single board computer). The RPi3 is equipped with a 1.2 GHz 64-bit quad-core ARM Cortex-A53 CPU on top of a Broadcom BCM2837 SoC (system-on-chip). In our experiments we focus on a dual-core setup ({\it e.g.,}\xspace we activated only \texttt{core0} and \texttt{core1} and disabled the other two cores) -- this was done by modifying the boot command file \texttt{/boot/cmdline.txt} and setting the flag \texttt{maxcpus}=2. The base hardware unit of the rover is connected to the RPi3 using a 40-pin GPIO (general-purpose input/output) header. The rover supports omni-directional movement and can move avoiding obstacles using an infrared sensor ({\it e.g.,}\xspace ST188~\cite{ST188}). We also attached a camera (RPi3 camera module) that can capture static images (3280 $\times$ 2464 pixel resolution). The detailed specifications of the rover hardware ({\it e.g.,}\xspace base chassis, adapter, {\it etc.}) are available on the vendor website~\cite{waveshare}.
\subsubsection{Experiment Setup and Implementation}
We implemented our security integration scheme in Linux kernel 4.9 and enabled real-time capabilities by applying the PREEMPT\_RT patch~\cite{rt_patch} (version 4.9.80-rt62-v7+).
In our experiments the rover moved around autonomously and periodically captured images (and stored them in the internal storage). We assumed implicit deadlines for RT tasks and considered two RT tasks: {\it (a) } a navigation task -- that avoids obstacles (by reading measurements from infrared sensor) and navigates the rover and {\it (b) } a camera task that captures and stores still images. We do not make any modifications to the vendor provided control codes ({\it e.g.,}\xspace navigation task). In our experiments we used the following parameters $(C_{r}, T_{r})$: $(240, 500)$ ms and $(1120, 5000)$ ms, for navigation and camera tasks, respectively ({\it i.e.,}\xspace total RT task utilization was $0.7040)$. We calculated the WCET values using ARM cycle counter registers (CCNT) and set periods in a way that the rover can navigate and capture images without overloading the RPi3 CPU. Since CCNT is not accessible by default, we developed a Linux loadable kernel module and activated the registers so that our measurement scripts can access counter values.
To integrate security into this rover platform, we included two additional security tasks: {\it (a) } an open-source security application, Tripwire~\cite{tripwire}, that checks for intrusions in the image data-store and {\it (b) } our custom security task that checks the currently loaded kernel modules (as a preventive measure to detect rootkits) and compares them with an expected profile of modules. The WCETs of the security tasks were $5342$ ms and $223$ ms, respectively, and the maximum periods of the security tasks were assumed to be $10000$ ms ({\it e.g.,}\xspace total system utilization is at least $0.7040 + 0.5565 = 1.2605$) -- we picked this maximum period value by trial and error so that the taskset became schedulable for demonstration purposes. We used the Linux \texttt{taskset} utility~\cite{linux_taskset} for partitioning tasks to the cores and the tasks were scheduled using the Linux native \texttt{sched\_setscheduler()} function. For accuracy of our measurements we disabled all CPU frequency scaling features in the kernel and executed the RPi3 at a constant frequency ({\it e.g.,}\xspace 700 MHz -- the default value). The system configurations and tools used in our experiments are summarized in Table \ref{tab:rov_param}.
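For illustration, the following minimal sketch shows how a task process can be pinned to the two active cores and given a fixed real-time priority from user space on Linux; the priority value and the core set are examples only and do not reproduce the exact configuration of our experiments (which used the \texttt{taskset} utility and the native \texttt{sched\_setscheduler()} interface).
\begin{verbatim}
import os

def configure_rt_task(pid=0, cores=(0, 1), priority=80):
    """Pin the calling process (pid=0) to the given cores and place it in the
    SCHED_FIFO fixed-priority class. Requires root (or CAP_SYS_NICE) on a
    PREEMPT_RT-patched kernel."""
    os.sched_setaffinity(pid, cores)                   # restrict to core0/core1
    os.sched_setscheduler(pid, os.SCHED_FIFO, os.sched_param(priority))

if __name__ == "__main__":
    configure_rt_task(priority=80)   # an RT task; a security task would use a lower priority
    # ... the task body (navigation, camera or security check) would run here ...
\end{verbatim}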
\begin{table}
\caption{Summary of the Evaluation Platform}
\label{tab:rov_param}
\centering
\begin{tabular}{P{2.7cm}||P{4.9cm}}
\hline
\bfseries Artifact & \bfseries Configuration/Tools\\
\hline\hline
Platform & 1.2 GHz 64-bit Broadcom BCM2837 \\
CPU & ARM Cortex-A53 \\
Memory & 1 Gigabyte \\
Operating System & Debian Linux (Raspbian Stretch Lite) \\
Kernel version & Linux Kernel 4.9 \\
Real-time patch & PREEMPT\_RT 4.9.80-rt62-v7+ \\
Kernel flags & $\mathtt{CONFIG\_PREEMPT\_RT\_FULL}$ enabled\\
Boot parameters & $\mathtt{maxcpus}$=2, $\mathtt{force\_turbo}$=1,
$\mathtt{arm\_freq}$=700,
$\mathtt{arm\_freq\_min}$=700 \\
WCET measurement & ARM cycle counter registers \\
Task partition & Linux \texttt{taskset} \\
\hline
\end{tabular}
\end{table}
We compared the performance of our scheme with prior work, HYDRA\xspace~\cite{mhasan_date18}. In that work, researchers proposed to statically partition the security tasks among the multiple cores -- to our knowledge HYDRA\xspace is the state-of-the-art mechanism for integrating security in legacy multicore-based RT platforms. The key idea in HYDRA\xspace was to allocate security tasks using a greedy best-fit strategy: for each task, allocate it to a core that gives maximum monitoring frequency ({\it i.e.,}\xspace shorter period) without violating schedulability constraints of already allocated tasks.
\subsubsection{Experience and Evaluation}
\begin{figure}
\centering
\begin{subfigure}[b]{0.5\linewidth}
\hspace*{-1.5em}
\centering \includegraphics[scale=0.28]{rover_id_time}
\caption{\label{fig:rov_id_time}}
\end{subfigure}%
\begin{subfigure}[b]{0.5\linewidth}
\hspace*{-0.6em}
\centering\includegraphics[scale=0.28]{rover_cs_count}
\caption{\label{fig:rov_cs_count}}
\end{subfigure}%
\caption{Experiments with rover platform:~\textit{(a)}~time (cycle counts) to detect intrusions;~\textit{(b)}~average number of context switches. On average our scheme can detect the intrusions faster without impacting the performance of RT tasks. }
\end{figure}
We observed the performance of ZORE\xspace by \textit{analyzing how quickly an intrusion can be detected}. We considered the following two realistic attacks\footnote{\underline{Note}: our focus here is on the integration of any given security mechanisms rather than the detection of any particular class of intrusions. Hence we assumed that there were no zero-day attacks and that the security tasks were able to detect the corresponding attacks correctly ({\it i.e.,}\xspace there were no false-positive/negative errors) -- although the generic framework proposed in this paper allows designers to accommodate any desired security ({\it e.g.,}\xspace intrusion detection/prevention) technique.}: {\it (i) } an ARM shellcode~\cite{arm_shellcode} that allows the attacker to modify the contents of the image data-store -- this attack can be detected by Tripwire; {\it (ii) } a rootkit~\cite{simple_rootkit} that intercepts all the \texttt{read()} system calls -- our custom security task can detect the presence of the malicious kernel module that is used to inject the rootkit. For each of our experimental trials we launched attacks at random points during program execution ({\it i.e.,}\xspace from the RT tasks) and used ARM cycle counters to measure the detection time. In Fig.~\ref{fig:rov_id_time} we show the average time to detect both intrusions (in terms of cycle counts, collected from $35$ trials) for the ZORE\xspace and HYDRA\xspace schemes. From our experiments we found that, on average, our scheme can detect intrusions $19.05\%$ faster compared to the HYDRA\xspace approach (Fig.~\ref{fig:rov_id_time}). Since our scheme allows security tasks to migrate across cores, it provides smaller response times ({\it i.e.,}\xspace shorter periods) in general and that leads to faster detection.
We next measured the overhead of our security integration approach in terms of the number of context switches (CS). For each trial we observed the schedule of the RT and security tasks for $45$ seconds and counted the number of CS using the Linux \texttt{perf} tool~\cite{linux_perf}. In Fig.~\ref{fig:rov_cs_count} we show the number of CS (y-axis in the figure) for the ZORE\xspace and HYDRA\xspace schemes (for $35$ trials). As shown in the figure, our approach increases the number of CS (since we permit migration across cores) compared to the other scheme, which statically partitions security tasks. From our experiments we found that, on average, our scheme increases the number of CS by a factor of $1.75$.
However, this increased CS overhead \textit{does not impact the deadlines of RT tasks} (since the security tasks always execute with a priority lower than the RT tasks) and thus may be acceptable for many RT applications.
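For illustration, the measurement described above can be scripted as follows; the PIDs are placeholders and we assume \texttt{perf} is available and the caller has permission to trace the target processes.
\begin{verbatim}
import subprocess

def context_switches_report(pids, seconds=45):
    """Collect context-switch counts for the given PIDs over a fixed window
    using `perf stat`; perf writes its statistics to stderr."""
    cmd = ["perf", "stat", "-e", "context-switches",
           "-p", ",".join(str(p) for p in pids), "--", "sleep", str(seconds)]
    return subprocess.run(cmd, capture_output=True, text=True).stderr

print(context_switches_report([1234, 1235, 1236]))   # placeholder PIDs
\end{verbatim}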
\subsection{Experiment with Synthetic Tasksets} \label{sec:exp_synthetic}
We also conducted experiments with (randomly generated) synthetic workloads for broader design-space exploration.
\begin{table}
\caption{Simulation Parameters}
\label{tab:mc_ex_param}
\centering
\begin{tabular}{P{5.5cm}||P{2.2cm}}
\hline
\bfseries Parameter & \bfseries Values\\
\hline\hline
Process cores, $M$ & $\lbrace 2, 4\rbrace$ \\
Number of real-time tasks, $N_R$ & $[3\times M, 10 \times M]$ \\
Number of security tasks, $N_S$ & $[2 \times M, 5 \times M]$ \\
Period distribution (RT and security tasks) & Log-uniform \\
RT task allocation & Best-fit \\
RT task period, $T_r$ & $[10, 1000]$ ms \\
Maximum period for security tasks, $T_s^{max}$ & $[1500, 3000]$ ms\\
Minimum utilization of security tasks & At least $30\%$ of RT tasks \\
Base utilization groups & $10$ \\
Number of tasksets in each configuration & $250$\\
\hline
\end{tabular}
\end{table}
\subsubsection{Taskset Generation and Parameters} \label{sec:tc_generation}
In our experiments we used parameters similar to those in related work~\cite{mhasan_date18,sibin_RT_security_journal,mhasan_rtss16,sun2014improving_wcrt2,davis2015global,mn_gp} (see Table \ref{tab:mc_ex_param}). We considered $M \in \lbrace2,4\rbrace$ cores and each taskset instance contained $[3 \times M, 10 \times M]$ RT and $[2 \times M, 5 \times M]$ security tasks. To generate tasksets with an even distribution of tasks, we grouped the real-time and security tasksets by base-utilization from $[(0.01+0.1i)M, (0.1+0.1i)M]$ where $i \in \mathbb{Z}, 0 \leq i \leq 9$. Each utilization group contained $250$ tasksets ({\it i.e.,}\xspace a total of $10 \times 250 = 2500$ tasksets were tested for each core configuration). We only considered the schedulable tasksets ({\it i.e.,}\xspace those for which the condition in Section \ref{sec:rt_model} was satisfied for all RT tasks) -- since tasksets that fail to meet this condition are trivially unschedulable. Task periods were generated according to a log-uniform distribution. Each RT task had a period in $[10, 1000]$ ms and the maximum periods for security tasks were selected from $[1500, 3000]$ ms. We assumed that RT tasks were partitioned using a best-fit~\cite{parti_see} strategy and that the total utilization of the security tasks was at least $30\%$ of the system utilization. For a given number of tasks and total system utilization, the utilizations of individual tasks were generated using the RandFixedSum algorithm~\cite{randfixedsum}.
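A minimal sketch of this generation procedure is shown below; the \texttt{randfixedsum()} helper (which splits a total utilization among $n$ tasks) is assumed to be available and is not reimplemented here.
\begin{verbatim}
import math, random

def log_uniform(lo, hi):
    """Sample from a log-uniform distribution over [lo, hi]."""
    return math.exp(random.uniform(math.log(lo), math.log(hi)))

def generate_taskset(n_rt, n_sec, u_rt, u_sec, randfixedsum):
    """Build one synthetic taskset. `randfixedsum(n, u)` is an assumed helper
    that returns n individual utilizations summing to u."""
    rt_tasks, sec_tasks = [], []
    for u in randfixedsum(n_rt, u_rt):
        T = log_uniform(10, 1000)                  # RT period in ms
        rt_tasks.append({"T": T, "C": u * T})      # WCET derived from utilization
    for u in randfixedsum(n_sec, u_sec):
        T_max = log_uniform(1500, 3000)            # maximum security-task period in ms
        sec_tasks.append({"T_max": T_max, "C": u * T_max})
    return rt_tasks, sec_tasks
\end{verbatim}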
\begin{figure}[!t]
\centering
\includegraphics[scale=0.45]{ec_dist_24}
\caption{Euclidean distance between achievable period and maximum period vectors for different utilizations.
Larger distance (y-axis in the figure) implies security tasks execute more frequently.}
\label{fig:ecdist_count}
\end{figure}
\begin{figure*}
\centering
\captionsetup[subfigure]{oneside,margin={3.2cm,0cm}}
\begin{subfigure}[]{0.5\linewidth}
\centering \includegraphics[scale=0.45]{Figures/sched_count_24_all_5}
\caption{}
\label{fig:sched_count}
\end{subfigure}%
\begin{subfigure}[]{0.5\linewidth}
\centering \includegraphics[scale=0.45]{dist_diff_24}
\caption{}
\label{fig:dist_diff_count}
\end{subfigure}%
\caption{Impact on schedulability and security. \textit{(a)} The acceptance ratio vs taskset utilizations for 2 and 4 core platforms: our scheme outperforms HYDRA\xspace and GLOBAL-TMax\xspace approaches for higher utilizations. \textit{(b)} Difference in period vectors for our approach and reference schemes ({\it e.g.,}\xspace HYDRA\xspace, GLOBAL-TMax\xspace, HYDRA-TMax\xspace): the non-negative distance (y-axis in the figure) implies that ZORE\xspace finds shorter periods than other schemes.}
\label{fig:exp_sec_sched_tradeoff}
\end{figure*}
\subsubsection{Impact on Inter-Monitoring Interval}
We first observe how frequently we can execute (schedule) security tasks compared to the designer-specified bound (Fig.~\ref{fig:ecdist_count}). The x-axis of Fig.~\ref{fig:ecdist_count} shows the normalized utilization $\tfrac{U}{M}$ where $U$ is the minimum utilization requirement and is given as follows:
$
U = \hspace*{-0.0em}\sum\limits_{\tau_r \in \Gamma_R} \hspace*{-0.0em} \frac{C_r}{T_r} + \hspace*{-0.0em}\sum\limits_{\tau_s \in \Gamma_S} \hspace*{-0.0em} \frac{C_s}{T_s^{max}}.
$
The y-axis represents the Euclidean distance between the calculated period vector $\mathbf{T}^{\boldsymbol{*}} = [T_s^*]_{\forall \tau_s \in \Gamma_S}$ and the maximum period vector $\mathbf{T}^{\boldsymbol{\rm max}} = [T_s^{max}]_{\forall \tau_s \in \Gamma_S}$ (normalized to $1$). A higher distance implies that tasks can run more frequently. As we can see from the figure, for higher utilizations the distance decreases ({\it i.e.,}\xspace periods are closer to their maximum values) -- this is mainly due to the interference from higher priority (RT and security) tasks. The results in this figure suggest that we can execute security tasks more frequently for low to medium utilization cases.
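For illustration, the metric plotted in Fig.~\ref{fig:ecdist_count} can be computed as in the short sketch below; the period vectors are placeholders, and normalizing by the norm of the maximum-period vector is one plausible choice.
\begin{verbatim}
import math

def normalized_period_distance(T_star, T_max):
    """Euclidean distance between the achievable and maximum period vectors,
    normalized here by the norm of the maximum-period vector."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(T_star, T_max)))
    return d / math.sqrt(sum(b ** 2 for b in T_max))

print(normalized_period_distance([900, 1200, 2400], [1500, 2000, 3000]))  # placeholder values
\end{verbatim}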
\subsubsection{Impact on Schedulability and Security Trade-off}
While in this work we consider a legacy RT system ({\it i.e.,}\xspace one in which RT tasks are partitioned to their respective cores), for comparison purposes we also considered the following two schemes (in addition to the related work, HYDRA\xspace, introduced in Section \ref{sec:exp_rover}), neither of which performs period adaptation for security tasks.
\begin{itemize}
\item GLOBAL-TMax\xspace: In this scheme all the RT and security tasks are scheduled using a global fixed-priority multicore scheduling scheme~\cite{mutiprocessor_survey}.
Since our focus here is on schedulability we set $T_s = T_s^{max}, ~\forall \tau_s \in \Gamma_S$ (recall that a taskset can be considered schedulable if the following conditions hold: $\mathcal{R}_r \leq D_r, \forall \tau_r \in \Gamma_R$ and $\mathcal{R}_s \leq T_s^{max}, \forall \tau_s \in \Gamma_S$). This scheme allows us to observe the performance impacts of binding RT tasks to the cores (due to legacy compatibility).
\item HYDRA-TMax\xspace: This is similar to the HYDRA\xspace approach introduced in Section \ref{sec:exp_rover} ({\it i.e.,}\xspace security tasks were partitioned using
best-fit allocation as before) but instead of minimizing periods here we set $T_s = T_s^{max}, \forall \tau_s$. This allows us to observe the trade-offs between schedulability and security in a fully-partitioned system.
\end{itemize}
In Fig.~\ref{fig:sched_count} we compare the performance of ZORE\xspace with the HYDRA\xspace, GLOBAL-TMax\xspace and HYDRA-TMax\xspace strategies in terms of the \textit{acceptance ratio} (y-axis in the figure), defined as the number of schedulable tasksets ({\it i.e.,}\xspace those with $\mathcal{R}_s \leq T_s^{max}, \forall \tau_s$) over the total number of generated tasksets; the x-axis shows the normalized utilization $\tfrac{U}{M}$.
As we can see from the figure, ZORE\xspace outperforms HYDRA\xspace when the utilization increases ({\it i.e.,}\xspace $\frac{U}{M} > 0.2$). This is because our scheme allows security tasks to execute in parallel across cores and also allocates periods considering the schedulability constraints of all lower priority tasks -- this results in smaller response times and finds more tasksets that satisfy the designer-specified bound. In contrast, HYDRA\xspace uses a greedy approach that minimizes the periods of higher priority tasks first without considering the global state. Also, HYDRA\xspace statically binds each security task to a core, so the task suffers interference from the higher priority tasks assigned to that core -- this leads to lower acceptance ratios.
For higher utilizations ({\it i.e.,}\xspace $\tfrac{U}{M} \geq 0.7$) ZORE\xspace can find schedulable tasksets that cannot easily be partitioned using the HYDRA-TMax\xspace scheme. The acceptance ratios of our method and the HYDRA-TMax\xspace scheme are equal when $\tfrac{U}{M} < 0.7$. This is because, for lower utilizations, some lower priority security tasks experience less interference due to longer periods and specific core assignment (recall that we set $T_s = T_s^{max}$ for all security tasks). While we bind the RT tasks to cores (due to legacy compatibility), this does not affect schedulability ({\it i.e.,}\xspace the acceptance ratio of ZORE\xspace is higher when compared to the GLOBAL-TMax\xspace scheme). This is because RT tasks are already schedulable when partitioned ({\it i.e.,}\xspace by assumption on taskset generation, see Section~\ref{sec:tc_generation}) and our analysis reduces the interference that RT tasks have on security ones.
For higher utilizations, the acceptance ratio drops for all the schemes since it is not possible to satisfy all the constraints due to the high interference from RT and security tasks. We also highlight that while our approach results in better schedulability, the ZORE\xspace/HYDRA-TMax\xspace schemes ({\it i.e.,}\xspace where legacy RT tasks are partitioned to the cores) and the GLOBAL-TMax\xspace scheme ({\it i.e.,}\xspace where all tasks can migrate) are incomparable in general ({\it i.e.,}\xspace there exist tasksets that are schedulable under task partitioning but not under a global scheme where migration is allowed, and vice-versa)
-- we allow security tasks to migrate due to security requirements ({\it e.g.,}\xspace to achieve faster intrusion detection -- as we explain in the next experiments, see Fig.~\ref{fig:dist_diff_count}).
In the final set of experiments (Fig.~\ref{fig:dist_diff_count}) we compare the achievable periods (in terms of Euclidean distance) for our approach and the other schemes. The x-axis in Fig.~\ref{fig:dist_diff_count} shows the normalized utilizations and the y-axis represents the average difference between the following period vectors: {\it (a) } ZORE\xspace and HYDRA\xspace (dashed line); {\it (b) } ZORE\xspace and the strategies that do not consider period minimization ({\it i.e.,}\xspace GLOBAL-TMax\xspace and HYDRA-TMax\xspace, dotted marker) for the dual and quad core setups. Higher distance values imply that the periods calculated by ZORE\xspace are smaller ({\it i.e.,}\xspace lead to faster detection times) and that our approach outperforms the other schemes. For low to medium utilizations ({\it i.e.,}\xspace $0.2 \leq U \leq 0.5$) ZORE\xspace performs better when compared to HYDRA\xspace. At higher utilizations, the reduced availability of slack time results in ZORE\xspace and HYDRA\xspace performing similarly. Also, for higher utilizations HYDRA\xspace is unable to find schedulable tasksets and hence there are fewer data points.
Our experiments also show that compared to GLOBAL-TMax\xspace and HYDRA-TMax\xspace our approach finds smaller periods in most cases (Fig.~\ref{fig:dist_diff_count}). This is expected since there is no period adaptation ({\it i.e.,}\xspace we set $T_s = T_s^{max}$ for those schemes). However it is important to note that ZORE\xspace achieves better execution frequency ({\it i.e.,}\xspace smaller periods) without sacrificing schedulability as seen in Fig.~\ref{fig:sched_count}. That is, our semi-partitioned approach achieves better continuous monitoring when compared with both a fully-partitioned approach (HYDRA\xspace, HYDRA-TMax\xspace) and a global scheduling approach (GLOBAL-TMax\xspace) while providing the same or better schedulability.
\section{Conclusion}
\label{sec:conclusion}
Threats to safety-critical RTS are growing and there is a need for developing layered defense mechanisms to secure such critical systems. We present algorithms to integrate continuous security monitoring for legacy multicore-based RTS. By using our framework, systems engineers can improve the security posture of RTS. This additional security guarantee also enhances safety -- which is the main goal for such systems.
\section{Introduction}
\label{sec:intro}
Limited resources in terms of processing power, memory, energy, {\it etc.}~coupled with the fact that security was not considered a design priority have led to the deployment of a large number of real-time systems (RTS) that include little to no security mechanisms. Hence, retrofitting such legacy RTS with general-purpose security solutions is a challenging problem since any perturbation of the real-time (RT) constraints (runtimes, periods, task execution orders, deadlines, {\it etc.}) could be detrimental to the correct and safe operation of RTS.
Besides, security mechanisms need to be designed in such a way that an adversary cannot easily evade them. Successful attacks/intrusions into RTS are often aimed at impacting the safety guarantees of such systems, as evidenced by recent intrusions ({\it e.g.,}\xspace attacks on control systems~\cite{stuxnet, Ukraine16}, automobiles~\cite{ris_rts_1, checkoway2011comprehensive}, medical devices~\cite{security_medical}, {\it etc.}~to name but a few).
Systems with RT properties pose unique security challenges -- these systems are required to meet stringent timing requirements along with strong safety requirements.
Limited resources ({\it i.e.,}\xspace
computational power, storage, energy, {\it etc.}) prevent security mechanisms that have been primarily developed for general purpose
systems from being effective for RTS.
Opportunistic execution~\cite{mhasan_rtss16,mhasan_date18} has been proposed as a potential way to integrate security mechanisms into legacy RTS -- this allows the execution of security mechanisms as background services without impacting the timing constraints of the RT tasks. Other approaches have built on this technique for integrating tasks into both legacy and non-legacy systems~\cite{mhasan_ecrts17,xie2007improving,lin2009static,lesi2017network,lesi2017security,securecore,hamad2018prediction}. However, most of that work focused on single core RTS (which constitute a majority of such systems in use today). Meanwhile, \textit{multicore} processors have found increased use in the design of RTS to improve overall performance and energy efficiency~\cite{rt_multicore_lui,mutiprocessor_survey}.
While the use of such processors \textit{increases} the security problems in RTS ({\it e.g.,}\xspace due to parallel execution of critical tasks)~\cite{mhasan_rtiot_sensors19}, very few security solutions have been proposed in the literature.
In early work (called HYDRA\xspace)~\cite{mhasan_date18}, researchers developed mechanisms for integrating security into multicore RTS -- however, that work does not allow runtime migration of security tasks across cores and, as we show, this may result in delayed detection of intrusions\footnote{We discuss this issue further in Section~\ref{sec:evaluation}.}.
As a step towards the design of secure RT platforms, we propose
to improve the security posture of a multicore-based RTS through the integration of security tasks. Our main goal in this paper is \textit{to raise the responsiveness of such security tasks by increasing their frequency of execution}. For instance, consider an intrusion detection system (IDS) -- say one that checks the integrity of file systems. If such a system is interrupted (before it can complete checking the entire system), then an adversary could use that opportunity to intrude into the system and, perhaps, stay resident in the part of the filesystem that has already been checked (assuming that the IDS is carrying out the check in parts). If, on the other hand, the IDS task is able to execute with as few interruptions as possible (\textit{e.g.}, by moving immediately to an empty core when it is interrupted), then it has a much higher chance of success and, correspondingly, there is a much lower chance of a successful adversarial action.
Note that the addition of any security mechanisms (such as IDS, encryption/authentication, behavior-based monitoring, {\it etc.}) may require modification of the system or the RT task parameters as was the case in prior work~\cite{xie2007improving,lin2009static,lesi2017network,lesi2017security, securecore,securecore_memory,securecore_syscal,xie2005dynamic,sibin_RT_security_journal}.
Our focus is on
ensuring that the existing RT tasks are not affected by such integration. The security tasks could be carrying out any one of protection, detection or response-based operations, depending on the system requirements. For instance, a sensor measurement correlation task may be added for detecting sensor manipulation or a change detection task (or other intrusion detection programs) may be added to detect changes/intrusions into the system.
{\bf Note: }
We do not target our framework towards any specific security mechanism -- our focus is to integrate any designer-provided security solution into a multicore-based RTS.
In Table \ref{tab:mc_se_ex_table} we present some examples of security tasks that can be integrated into legacy systems (again, this is by no stretch meant to be an exhaustive list). In our experiments we used Tripwire~\cite{tripwire} (a data integrity checking tool) as well as our \textit{in-house custom-developed malicious kernel module checker} to demonstrate the feasibility of our approach -- the ideas proposed in this paper are more broadly applicable to other security mechanisms.
\begin{table}[!thb]
\caption{Example of Security Tasks}
\label{tab:mc_se_ex_table}
\centering
\hspace*{-0.5em}
\begin{tabular}{P{3.65cm}||P{4.2cm}}
\hline
\bfseries Security Task & \bfseries Approach/Tools\\
\hline\hline
File-system checking & Tripwire~\cite{tripwire}, AIDE~\cite{aide}, {\it etc.} \\
Network packet monitoring & Bro~\cite{bro}, Snort~\cite{snort}, {\it etc.} \\
Hardware event monitoring & Statistical analysis based checks~\cite{woo2018early} using performance monitors ({\it e.g.,}\xspace \texttt{perf}~\cite{linux_perf}, \texttt{OProfile}~\cite{oprofile}, {\it etc.})\\
Application specific checking & Behavior-based detection (see the related work~\cite{anomaly_detection_survey, securecore,securecore_memory,securecore_syscal}) \\
\hline
\end{tabular}
\end{table}
\paragraph*{Our Contributions}
In this paper, we propose a design-time methodology and a framework named ZORE\xspace for partitioned\footnote{Since this is the commonly used multicore scheduling approach for many commercial and open-source OSs (such as OKL4~\cite{okl4}, QNX~\cite{qnx}, RT-Linux~\cite{rt_patch}, {\it etc.}) -- mainly due to its simplicity and efficiency~\cite{parti_see,mhasan_date18}.} RTS that {\it (i) } allows security tasks to execute \emph{continuously} ({\it i.e.,}\xspace as frequently as possible) across cores ({\it i.e.,}\xspace following a \textit{semi-partitioned}~\cite{kato2009semi} RT scheduling scheme); and {\it (ii) } does not impact the timing constraints of other, existing, RT tasks. Our framework takes advantage of the properties of a multicore platform and allows security tasks to migrate across available cores and execute \textit{opportunistically} ({\it i.e.,}\xspace when the RT tasks are not running). This framework extends existing work~\cite{mhasan_date18} and ensures better security ({\it e.g.,}\xspace faster detection time) and schedulability (see Section \ref{sec:evaluation}). As in previous work~\cite{mhasan_rtss16,mhasan_ecrts17,mhasan_date18} we propose to integrate security mechanisms as independent periodic tasks. To provide the best protection, \textit{security tasks may need to be executed as often as possible}. If the interval between consecutive monitoring events is too large then an attacker may remain undetected and cause harm to the system between two invocations of the security task. In contrast, if the security tasks are executed very frequently, they may impact the schedulability of the RT (and other security) tasks. The challenge is then to determine the right \textit{periods} ({\it i.e.,}\xspace minimum inter-invocation times) for the security tasks~\cite{sibin_deeply}. We therefore develop an iterative algorithm that finds the trade-off between schedulability and the periods of the security tasks (Section \ref{sec:alg}).
A major contribution of this paper is to introduce a security integration framework, ZORE\xspace, for legacy multicore platforms that \textit{allows security tasks to execute continuously} across cores. ZORE\xspace is able to do this without violating timing constraints for either the existing RT tasks or the security ones (Section \ref{sec:se_int_frmakework}). We develop a mathematical model and iterative solution that allows security tasks to execute as frequently as possible while still considering the schedulability constraints of other tasks (Section \ref{sec:period_selection}). In addition, we also present an implementation on a realistic ARM-based multicore rover platform (running a RT variant of Linux system and realistic security applications). We then perform comparisons with the state-of-the-art~\cite{mhasan_date18} (Section
\ref{sec:exp_rover}).
Finally, we carry out a design space exploration using synthetic workloads and study trade-offs for schedulability and security (Section \ref{sec:exp_synthetic}).
\begin{comment}
\begin{table
\caption{Mathematical Notations}
\label{tab:mc_math_not}
\hspace*{-0.60em}
\centering
\begin{tabular}{c || P{5.95cm}}
\hline
\bfseries Notation & \bfseries Interpretation\\
\hline\hline
$\Gamma_R$, $\Gamma_S$ & Set of RT and security tasks, respectively \\
$N_R$, $N_S$ & Number of RT and Security tasks, respectively \\
$\mathcal{M}$, $M$ & Set of processor cores and the number of cores, respectively \\
$\pi_m$ & A given processor core \\
$C_i$, $T_i$, $D_i$, $\mathcal{R}_i$ & Worst-case execution time, period, deadline and WCRT of (RT/security) task $\tau_i$, respectively \\
$T_s^{max}$ & Maximum allowable period of security task $\tau_s$ \\
$W_i(x)$ & Workload bound for a job of the task $\tau_i$ within the window of size $x$ \\
$I_{\tau_j \leftarrow \tau_i}$ & Interference caused by $\tau_i$ on $\tau_j$ \\
$\Omega_s$ & Total interference experienced by security task $\tau_s$ \\
$hp_S(\tau_s)$, $hp(\tau_s)$ & Set of security tasks and set of all ({\it i.e.,}\xspace both RT and security) tasks higher priority than $\tau_s$, respectively \\
$lp(\tau_s)$ & Set of security tasks with a priority lower than $\tau_s$ \\
$\Gamma_s^{CI}$, $\Gamma_s^{NC}$ & Set of carry-in and non-carry-in tasks higher priority than $\tau_s$, respectively \\
$\mathcal{R}_{s | {(\Gamma_s^{NC}, \Gamma_s^{CI}})}$ & Response time of $\tau_s$ for a given carry-in and non-carry-in set \\
\hline
\end{tabular}
\end{table}
\end{comment}
\section{Related Work} \label{sec:rel_work}
\subsubsection*{RT Scheduling and Period Optimization}
Although not in the context of RT security, the scheduling approach presented in this paper can be considered a special case of prior work~\cite{linux_push_pull} where each task can bind to any arbitrary number of available cores. For a given period, that prior analysis~\cite{linux_push_pull} is pessimistic for the model considered by ZORE\xspace ({\it i.e.,}\xspace RT tasks are partitioned and security tasks can migrate to any core) in the sense that it over-approximates carry-in interference from the tasks bound to single cores ({\it e.g.,}\xspace RT tasks) and hence results in lower schedulability ({\it e.g.,}\xspace identical to the GLOBAL-TMax\xspace scheme in Fig.~\ref{fig:sched_count}).
Researchers have also proposed various semi-partitioned scheduling strategies for fixed-priority RTS~\cite{kato2009semi,lakshmanan2009partitioned}. However, that work primarily focuses on improving schedulability ({\it e.g.,}\xspace by allowing the highest priority task to migrate) and is not designed with security requirements in mind ({\it e.g.,}\xspace minimizing periods and executing security tasks with fewer interruptions for faster anomaly detection). There exists other work~\cite{delay_period} in which the authors statically assign the periods for multiple independent control tasks considering control delay as a cost metric. Davare {\it et~al.}\xspace~\cite{davare2007period_can} propose to assign task and message periods as well as satisfy end-to-end latency constraints for distributed automotive systems. While these prior works focus on optimizing the periods of \textit{all} tasks in the system for a single core setup, our goal is to ensure security without violating the timing constraints of the RT tasks on a multicore platform.
\subsubsection*{Security Solutions for RTS}
In recent years researchers have proposed various mechanisms to provide security guarantees for legacy and non-legacy RTS (on both single and multicore platforms) in several directions, {\it viz.,}\xspace integration of security mechanisms~\cite{mhasan_rtss16,mhasan_ecrts17,mhasan_date18}, authenticating/encrypting communication channels~\cite{lesi2017network,lesi2017security,xie2007improving,xie2005dynamic,lin2009static,jiang2013optimization}, side-channel defense techniques~\cite{sg1,sg2,sibin_RT_security_journal,taskshuffler,volp_TT_randomization} as well as hardware/software-based frameworks~\cite{mohan_s3a,securecore,securecore_memory,slack_cornell,securecore_syscal,mhasan_resecure16,mhasan_resecure_iccps}.
Perhaps the closest line of research is HYDRA\xspace~\cite{mhasan_date18}, where the authors proposed to statically partition security tasks to the cores and used an optimization-based solution to obtain the periods. While this approach does not have the overhead of context switches across cores, as we observed from our experiments (Section \ref{sec:evaluation}), that scheme results in a poor acceptance ratio for larger utilizations, and suffers interference from other high priority tasks, leading to slower detection of intrusions ({\it i.e.,}\xspace it is less effective). The problem of integrating security for single core RTS is addressed in prior research~\cite{mhasan_rtss16} where the authors used hierarchical scheduling~\cite{server_ab_uk} and proposed to execute security tasks within a low priority server. This approach is also extended to a multi-mode framework~\cite{mhasan_ecrts17} that allows security tasks to execute in different modes ({\it i.e.,}\xspace passive monitoring with the lowest priority as well as exhaustive checking with higher priority). These server-based approaches, however, may require additional porting efforts for legacy systems.
There exists recent work~\cite{lesi2017network,lesi2017security} on securing cyber-physical systems from man-in-the-middle attacks by enabling authentication mechanisms and timing-aware network resource scheduling. There has also been work~\cite{xie2005dynamic,xie2007improving,lin2009static} in which the authors proposed to add protective security mechanisms into RTS and considered periodic task scheduling where each task requires a security service whose overhead varies according to the required level of service.
The problem of designing secure multi-mode RTS has also been addressed in prior work~\cite{jiang2013optimization} under dynamic-priority scheduling. In contrast, we consider a multicore fixed-priority scheduling mechanism where security tasks are executed periodically, across cores, while meeting real-time requirements. The above-mentioned works are designed for single core platforms and it is not straightforward to retrofit those approaches to multicore legacy systems.
In another direction, the issues related to information leakage through storage timing channels using shared architectural resources ({\it e.g.,}\xspace caches) are studied in prior work~\cite{sg1, sg2, sibin_RT_security_journal}. The key idea is to use a modified fixed-priority scheduling algorithm with a state cleanup mechanism to mitigate information leakage through shared resources. However, this leakage prevention comes at the cost of reduced schedulability.
Researchers have also proposed to limit the inferability of deterministic RT schedulers by randomizing task execution patterns. Yoon {\it et~al.}\xspace~\cite{taskshuffler} proposed a schedule obfuscation method for fixed-priority RM systems. A combined online/offline randomization scheme~\cite{volp_TT_randomization} has also been proposed to reduce determinism for time-triggered (TT) systems where tasks are executed based on a pre-computed, offline, slot-based schedule. We highlight that all the aforementioned work either requires modifications to the scheduler or to the RT task parameters, and is designed for single core systems only.
Unlike our approach that works at the scheduler-level, researchers have also proposed hardware/software-based architectural solutions~\cite{mohan_s3a,onchip_fardin,securecore,securecore_memory,slack_cornell,securecore_syscal,mhasan_resecure16,mhasan_resecure_iccps} to improve the security posture of future RTS. Those solutions require system-level modifications and are not suitable for legacy systems. To our knowledge this is the first work that aims to achieve continuous monitoring for multicore-based legacy RT platforms.
\section{Introduction}
\label{sec:intro}
Limited resources in terms of processing power, memory, energy, {\it etc.}~coupled with the fact that security was not considered a design priority have led to the deployment of a large number of real-time systems (RTS) that include little to no security mechanisms. Hence, retrofitting such legacy RTS with general-purpose security solutions is a challenging problem since any perturbation of the real-time (RT) constraints (runtimes, periods, task execution orders, deadlines, {\it etc.}) could be detrimental to the correct and safe operation of RTS.
Besides, security mechanisms need to be designed in such a way that an adversary cannot easily evade them. Successful attacks/intrusions into RTS are often aimed at impacting the safety guarantees of such systems, as evidenced by recent intrusions ({\it e.g.,}\xspace attacks on control systems~\cite{stuxnet, Ukraine16}, automobiles~\cite{ris_rts_1, checkoway2011comprehensive}, medical devices~\cite{security_medical}, {\it etc.}~to name but a few).
Systems with RT properties pose unique security challenges -- these systems are required to meet stringent timing requirements along with strong safety requirements.
Limited resources ({\it i.e.,}\xspace
computational power, storage, energy, {\it etc.}) prevent security mechanisms that have been primarily developed for general purpose
systems from being effective for safety-critical RTS.
In this paper we aim to improve the security posture of
RTS through integration of security tasks while ensuring that the existing RT tasks are not affected by such integration. The security tasks considered could be carrying out any one of protection, detection or response-based operations, depending on the system requirements. For instance, a sensor measurement correlation task may be added for detecting sensor manipulation or a change detection task (or other intrusion detection programs) may be added to detect changes/intrusions into the system. In Table \ref{tab:mc_se_ex_table} we present some examples of security tasks that can be integrated into legacy systems (this is by no stretch meant to be an exhaustive list). Note that the addition of any security mechanisms (such as IDS, encryption/authentication, behavior-based monitoring, {\it etc.}) may require modification of the system or the RT task parameters as was the case in prior work~\cite{xie2007improving,lin2009static,lesi2017network,lesi2017security, securecore,securecore_memory,securecore_syscal,xie2005dynamic,sibin_RT_security_journal}.
Further, to provide the best protection, \textit{security tasks may need to be executed as often as possible}.
If the interval between consecutive checking events is too large then an attacker may remain undetected and cause harm to the system between two invocations of the security task. In contrast, if the security tasks are executed very frequently, they may impact the schedulability of the RT (and other security) tasks. The challenge is then to determine the right \textit{periods} ({\it i.e.,}\xspace minimum inter-invocation times) for the security tasks~\cite{sibin_deeply}.
\begin{table}
\caption{Example of Security Tasks}
\label{tab:mc_se_ex_table}
\centering
\hspace*{-0.5em}
\begin{tabular}{P{3.65cm}||P{4.2cm}}
\hline
\bfseries Security Task & \bfseries Approach/Tools\\
\hline\hline
File-system checking & Tripwire~\cite{tripwire}, AIDE~\cite{aide}, {\it etc.} \\
Network packet monitoring & Bro~\cite{bro}, Snort~\cite{snort}, {\it etc.} \\
Hardware event monitoring & Statistical analysis based checks~\cite{woo2018early} using performance monitors ({\it e.g.,}\xspace \texttt{perf}~\cite{linux_perf}, \texttt{OProfile}~\cite{oprofile}, {\it etc.})\\
Application specific checking & Behavior-based detection (see the related work~\cite{anomaly_detection_survey, securecore,securecore_memory,securecore_syscal}) \\
\hline
\end{tabular}
\end{table}
As a step towards enabling the design of secure RT platforms, opportunistic execution~\cite{mhasan_rtss16,mhasan_date18} has been proposed as a potential way to integrate security mechanisms into legacy RTS -- this allows the execution of security mechanisms as background services without impacting the timing constraints of the RT tasks. Other approaches have built on this technique for integrating tasks into both legacy and non-legacy systems~\cite{mhasan_ecrts17,xie2007improving,lin2009static,lesi2017network,lesi2017security,securecore,hamad2018prediction}. However, most of that work focused on single core RTS (which constitute a majority of such systems in use today). Meanwhile, \textit{multicore} processors have found increased use in the design of RTS to improve overall performance and energy efficiency~\cite{rt_multicore_lui,mutiprocessor_survey}.
While the use of such processors \textit{increases} the security problems in RTS ({\it e.g.,}\xspace due to parallel execution of critical tasks)~\cite{mhasan_rtiot_sensors19}, to our knowledge very few security solutions have been proposed in the literature~\cite{mhasan_date18}. In prior work (called HYDRA\xspace)~\cite{mhasan_date18} researchers developed a mechanism for integrating security into multicore RTS. However, this work uses a partitioned scheduling approach and does not allow runtime migration of security tasks across cores. We show that this results in delayed detection of intrusions\footnote{We discuss this issue further in Section~\ref{sec:evaluation}.} as the security tasks are not able to execute as frequently.
Our main goal in this paper is \textit{to raise the responsiveness of such security tasks by increasing their frequency of execution}. For instance, consider an intrusion detection system (IDS) -- say one that checks the integrity of file systems. If such a system is interrupted (before it can complete checking the entire system), then an adversary could use that opportunity to intrude into the system and, perhaps, stay resident in the part of the filesystem that has already been checked (assuming that the IDS is carrying out the check in parts). If, on the other hand, the IDS task is able to execute with as few interruptions as possible (\textit{e.g.}, by moving immediately to an empty core when it is interrupted), then it has a much higher chance of success and, correspondingly, there is a much lower chance of a successful adversarial action.
\paragraph*{Our Contributions}
In this paper, we propose a design-time methodology and a framework named ZORE\xspace for partitioned\footnote{Since this is the commonly used multicore scheduling approach for many commercial and open-source OSs (such as OKL4~\cite{okl4}, QNX~\cite{qnx}, RT-Linux~\cite{rt_patch}, {\it etc.}) -- mainly due to its simplicity and efficiency~\cite{parti_see,mhasan_date18}.} RTS that {\it (a) } leverages semi-partitioned scheduling~\cite{kato2009semi} to enable \emph{continuous execution} of security tasks ({\it i.e.,}\xspace execute as frequently as possible) across cores, and {\it (b) } does not impact the timing constraints of other, existing, RT tasks.
ZORE\xspace takes advantage of the properties of a multicore platform and allows security tasks to migrate across available cores and execute \textit{opportunistically} ({\it i.e.,}\xspace when the RT tasks are not running). This framework extends existing work~\cite{mhasan_date18} and ensures better security ({\it e.g.,}\xspace faster detection time) and schedulability (see Section \ref{sec:evaluation}). ZORE\xspace is able to do this without violating timing constraints for either the existing RT tasks or the security ones (Section~\ref{sec:se_int_frmakework}). We develop a mathematical model and iterative solution that allows security tasks to execute as frequently as possible while still considering the schedulability constraints of other tasks (Section~\ref{sec:period_selection}). In addition, we also present an implementation on a realistic ARM-based multicore rover platform (running a RT variant of Linux system and realistic security applications). We then perform comparisons with the state-of-the-art~\cite{mhasan_date18}
(Section~\ref{sec:exp_rover}).
Finally, we carry out a design space exploration using synthetic workloads and study trade-offs for schedulability and security. Our evaluation shows that the proposed semi-partitioned approach can achieve better execution frequency for security tasks and consequently quicker intrusion detection ($19.05\%$ faster on average) when compared with both fully-partitioned and global scheduling approaches, while providing the same or better schedulability (Section~\ref{sec:exp_synthetic}).
{\bf Note: }
We do not target our framework towards any specific security mechanism -- our focus is to integrate any designer-provided security solution into a multicore-based RTS.
In our experiments we used Tripwire~\cite{tripwire} (a data integrity checking tool) as well as our \textit{in-house custom-developed malicious kernel module checker} to demonstrate the feasibility of our approach -- the integration framework proposed in this paper is more broadly applicable to other security mechanisms.
\subsection{Preliminaries} \label{sec:background}
We start by briefly reviewing the relevant terminology and parameters. We are interested in determining the response time of a job $\tau_s^k$ of task $\tau_s$ ({\it i.e.,}\xspace the job under analysis) using an iterative method; the response time estimate in each iteration is denoted by $x$.
\begin{definition}[Busy Period]
The \textit{busy period} of $\tau_s^k$ is the maximal continuous
time interval $[t_1, t_2)$ (until $\tau_s^k$ finishes) where all the cores are executing either higher priority tasks or $\tau_s^k$ itself.
\end{definition}
\begin{definition}[Interference]
Given task $\tau_i$, the interference $I_{\tau_s \leftarrow \tau_i}$ caused by $\tau_i$ on $\tau_s^k$ is the number of time units in the busy period when $\tau_i$ executes while $\tau_s^k$ does not.
\end{definition}
Note that the job under analysis $\tau_s^k$ cannot execute if all cores are busy with higher priority tasks; hence, the length of the busy period is at most $\left\lfloor \tfrac{\Omega_s}{M} \right\rfloor +C_s$ by definition, where $\Omega_s$ is the sum of the interference caused by all higher priority tasks on $\tau_s^k$. To compute the value of $I_{\tau_s \leftarrow \tau_i}$, we rely on the concept of \textit{workload}.
\begin{definition}[Workload]
The \textit{workload} $W_i(x)$ of a task $\tau_i$ in a window of length $x$ represents the accumulated execution time of $\tau_i$ within this time interval.
\end{definition}
It remains to compute the workload and corresponding interference for each higher priority task $\tau_i$. We first show how to do so for RT tasks and then for security tasks with higher priority than $\tau_s$.
\subsection{Interference Calculation for RT Tasks} \label{sec:intf_cal}
Since RT tasks are statically partitioned to cores and they have higher priority than any task that is allowed to migrate between cores, the worst-case workload for RT tasks can be trivially obtained based on the same critical instant used for single core fixed-priority scheduling case~\cite{Liu_n_Layland1973}.
\begin{lemma} \label{lemma:rt_workload}
For a given core $\pi_m$, the maximum workload of RT tasks executed on $\pi_m$ in any possible time interval of length $x$ is obtained when all RT tasks are released synchronously at the beginning of the interval.
\end{lemma}
\begin{proof}
Since RT tasks are partitioned and they have higher priorities than security tasks, the schedule of RT tasks executed on $\pi_m$ does not depend on any other task in the system. Now consider any interval $[t, t+x)$ of length $x$. We show that we can obtain an interval $[t', t'+x)$ where all tasks are released at $t'$, such that the workload of RT tasks on $\pi_m$ is higher in $[t', t'+x)$ compared to $[t, t+x)$.
\textit{First step}: let $t'$ be the earliest time such that $\pi_m$ continuously executes RT tasks in $[t', t)$; if such time does not exist, then let $t' = t$. By definition, $\pi_m$ does not execute RT tasks at time $t'-1$. Also since RT tasks continuously execute in $[t', t)$, the workload of RT tasks in $[t', t'+x)$ cannot be smaller than the workload in $[t, t+x)$.
\textit{Second step}: since $\pi_m$ is idle at $t' - 1$, no job of RT tasks on $\pi_m$ released before $t'$ can contribute to the workload in $[t', t)$. Hence, the workload can be maximized by anticipating the release of each RT task $\tau_r$ so that it corresponds with $t'$. This concludes the proof.
\end{proof}
\begin{figure}
\includegraphics[scale=0.28]{nc_workload}
\caption{Workload of the RT tasks for a window of size $x$. The arrival time of the task $\tau_{i}$ is denoted by $a_{i}$.}
\label{fig:nc_workload}
\end{figure}
Let $\Gamma_R^{\pi_m} \subseteq \Gamma_R$ denote the set of RT tasks partitioned to core $\pi_m$. Based on Lemma~\ref{lemma:rt_workload}, an
upper bound to the workload of RT tasks on $\pi_m$ can be obtained by assuming that each RT task $\tau_r$ is released at the beginning of the interval and each job of $\tau_r$ executes as early as possible after being released, as shown in Fig.~\ref{fig:nc_workload}. We thus obtain the workload for RT task $\tau_r$:
\begin{equation} \label{eq:nc_workload}
W_r^{R}(x)=\left\lfloor \frac{x}{T_r} \right\rfloor C_r + \min (x~\mathsf{mod}~T_r, C_r),
\end{equation}
and summing over all RT tasks on $\pi_m$ yields a total workload $\sum\limits_{\tau_i \in \Gamma_R^{\pi_m}} W_i^{R}(x)$. Finally, we notice that by definition the interference caused by a group of tasks executing on the same core $\pi_m$ on $\tau_s$ cannot be greater than $x - C_s + 1$.
Therefore, the maximum interference caused by RT tasks on $\pi_m$ to $\tau_s$ can be bounded as:
\begin{equation} \label{eq:in_rt}
I_{\tau_s \leftarrow \Gamma_R^{\pi_m}}\Big(x, \hspace*{-0.5em} \sum_{\tau_i \in \Gamma_R^{\pi_m}} \hspace*{-0.5em} W_i^{R}(x)\Big) = \min \left( \sum_{\tau_i \in \Gamma_R^{\pi_m}} \hspace*{-0.2em} W_i^{R}(x), x - C_s + 1 \right).
\end{equation}
The `$+1$' term in the upper bound of the interference ({\it i.e.,}\xspace Eq.~(\ref{eq:in_rt})) ensures the convergence of the iterative search for the response time (recall from Section~\ref{sec:background} that at each iteration the response time is denoted by $x$) to the correct value~\cite{bertogna2007response}. For example, if the iterative search for the response time were started with $x = C_s$ ({\it i.e.,}\xspace $x-C_s = 0$) without this term, the search would stop immediately (and output an incorrect WCRT) since $\min \left( \sum\limits_{\tau_i \in \Gamma_R^{\pi_m}} \hspace*{-0.2em} W_i^{R}(x), x - C_s \right) = 0$.
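The workload bound of Eq.~(\ref{eq:nc_workload}) and the interference bound of Eq.~(\ref{eq:in_rt}) translate directly into code; a minimal sketch (assuming integer time units) is:
\begin{verbatim}
def workload_rt(x, C, T):
    """Workload bound of a partitioned RT task released at the start of a
    window of length x (integer time units assumed)."""
    return (x // T) * C + min(x % T, C)

def interference_rt_core(x, C_s, rt_tasks_on_core):
    """Interference of all RT tasks on one core on the job under analysis,
    capped at x - C_s + 1 as in the interference bound above."""
    total = sum(workload_rt(x, t["C"], t["T"]) for t in rt_tasks_on_core)
    return min(total, x - C_s + 1)
\end{verbatim}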
\subsection{Interference Calculation for Security Tasks} \label{sec:wcrt_calculation}
We next consider the workload of security tasks with higher priority than $\tau_s$. The workload computation depends on the arrival time of the task relative to the beginning of the busy period, as specified in the following definition.
\begin{definition}[Carry-in]
A task $\tau_i$ is called a \textit{carry-in} task if
there exists one job of $\tau_i$ that has been released before the beginning of a given time window of length $x$ and executes within the window.
If no such
job exists, $\tau_i$ is referred to as a \textit{non-carry-in} task.
\end{definition}
\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{extended_busy_period}
\caption{Extension of busy period for bounding the number of carry-in higher priority security tasks.}
\label{fig:busy_period_extension}
\end{figure}
Generally (but not always), the workload of a task $\tau_i$ in the busy period is higher if $\tau_i$ is a carry-in task than if it is a non-carry-in task. Hence, it is important to limit the number of higher priority carry-in tasks. To this end, we follow an approach similar to prior research~\cite{guan2009new_wcrt_bound,global_rta_sanjay} and extend the busy period of $\tau_s^k$ from its arrival time (denoted by $a_s$) to an earlier time instant $t_0$ (see Fig.~\ref{fig:busy_period_extension}) such that at every time instant $t \in [t_0, a_s)$ all cores are busy executing tasks with higher priority than $\tau_s$. Note that by definition, this implies that there was at least one free core ({\it i.e.,}\xspace one not executing higher priority tasks) at time $t_0 - 1$.
\setcounter{theorem}{1}
\begin{lemma}
\label{lemma_ci_se}
At most $M-1$ higher priority tasks can have carry-in at time $t_0$.
\end{lemma}
\begin{proof}
The maximum number of higher priority tasks that can have carry-in at $t_0$ is $M-1$ since by definition there have to be strictly less than $M$ higher priority tasks active at time $t_0 -1$ (otherwise they will occupy all the cores).
\end{proof}
\begin{figure}
\centering
\includegraphics[scale=0.28]{ci_workload}
\caption{Illustration of carry-in task for a window of size $x$.}
\label{fig:ci_workload}
\end{figure}
Since Lemma~\ref{lemma_ci_se} holds for all tasks with higher priority than $\tau_s$, an immediate corollary is that the number of security tasks with carry-in at $t_0$ also cannot be larger than $M-1$. If a security task $\tau_i$ does not have carry-in, its workload is maximized when the task is released at the beginning of the busy interval. Hence, we can calculate the workload bound $W_i^{S_{NC}}(x)$ for the interval $x$ using Eq.~(\ref{eq:nc_workload}), {\it e.g.,}\xspace $W_i^{S_{NC}}(x)=\left\lfloor \frac{x}{T_i} \right\rfloor C_i + \min (x~\mathsf{mod}~T_i, C_i)$. Likewise, the workload bound for a carry-in security task $\tau_i$ in an interval of length $x$ starting at $t_0$ is given by (see Fig.~\ref{fig:ci_workload}):
\begin{equation} \label{eq:ci_workload}
W_i^{S_{CI}}(x) = W_i^{S_{NC}}\left( \max(x - \bar{x}_i, 0) \right) + \min(x, C_i - 1),
\end{equation}
where
$\bar{x}_i = C_i - 1 + T_i - \mathcal{R}_i$. We can bound the workload of the first carry-in job to $C_i - 1$ because the job must have started executing at the latest at $t_0 - 1$ (given that not all cores are busy). Finally, using the same argument as in Section~\ref{sec:intf_cal}, the interference of $\tau_i$ can be bounded as follows:
\begin{equation} \label{eq:intf_ci_nc}
I_{\tau_s \leftarrow \tau_i}(x, W_i(x)) = \min \left( W_i(x), x - C_s + 1 \right),
\end{equation}
where $W_i(x)$ is either $W^{S_{NC}}_i(x)$ or $W^{S_{CI}}_i(x)$. Notice that the WCRT and period of a security task appearing in the carry-in workload function (see Eq.~(\ref{eq:ci_workload})) are actually unknown parameters. However, we follow an iterative scheme that allows us to calculate the periods and WCRTs of all higher priority security tasks before we calculate the interference for task $\tau_s$ (refer to Section \ref{sec:alg} for details).
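Similarly, the non-carry-in and carry-in workload bounds for higher priority security tasks, and the corresponding interference bound of Eq.~(\ref{eq:intf_ci_nc}), can be sketched as follows (integer time units assumed; the WCRT $\mathcal{R}_i$ is taken as already computed, per the iterative scheme mentioned above):
\begin{verbatim}
def workload_sec_nc(x, C, T):
    """Non-carry-in workload bound (same form as for RT tasks)."""
    return (x // T) * C + min(x % T, C)

def workload_sec_ci(x, C, T, R):
    """Carry-in workload bound; R is the task's previously computed WCRT."""
    x_bar = C - 1 + T - R
    return workload_sec_nc(max(x - x_bar, 0), C, T) + min(x, C - 1)

def interference_hp(x, C_s, W):
    """Interference of a single higher-priority task with workload bound W."""
    return min(W, x - C_s + 1)
\end{verbatim}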
\subsection{Response Time Analysis} \label{ref:se_wcrt_cal}
Let $hp_S(\tau_s)$ denote the set of security tasks with a higher priority than $\tau_s$. Note that we do not know which (at most $M-1$) security tasks in $hp_S(\tau_s)$ have carry-in. In order to derive the WCRT of $\tau_s$, let us define $\mathcal{Z}_{\tau_s} \subset \Gamma \times \Gamma$ as the set of all partitions of $hp_S(\tau_s)$ into two subsets $\Gamma_s^{NC}$ and $\Gamma_s^{CI}$ ({\it i.e.,}\xspace the non-overlapping sets of non-carry-in and carry-in tasks) such that:
\begin{equation*}
\Gamma_s^{NC} \cap \Gamma_s^{CI} = \emptyset,
\Gamma_s^{NC} \cup \Gamma_s^{CI} = hp_S(\tau_s),
\text{~and~}
|\Gamma_s^{CI}| \leq M-1,
\end{equation*}
{\it i.e.,}\xspace there are at most $M-1$ carry-in tasks.
For a given pair of carry-in and non-carry-in sets ({\it i.e.,}\xspace $\Gamma_s^{CI}$ and $\Gamma_s^{NC}$), we can calculate the total interference experienced by $\tau_s$ as follows:
\begin{eqnarray}
\Omega_s(x, \Gamma_s^{NC}, \Gamma_s^{CI}) = \sum_{\pi_m \in \mathcal{M}} I_{\tau_s \leftarrow \Gamma_R^{\pi_m}} \Big(x, \sum_{\tau_i \in \Gamma_R^{\pi_m}} W_i^{R}(x) \Big)~+ \nonumber \\
\sum_{\tau_i \in \Gamma_s^{NC}} I_{\tau_s \leftarrow \tau_i}\Big(x, W_i^{S_{NC}}(x) \Big)~ +
\sum_{\tau_i \in \Gamma_s^{CI}} I_{\tau_s \leftarrow \tau_i}\Big(x, W_i^{S_{CI}}(x)\Big).
\end{eqnarray}
For given sets $\Gamma_s^{NC}$ and $\Gamma_s^{CI}$, the response time $\mathcal{R}_{s | {(\Gamma_s^{NC}, \Gamma_s^{CI}})}$ is the minimal solution of the following iteration\footnote{Note that the worst-case is when the job arrives at $t_0$ ({\it i.e.,}\xspace $a_s = t_0$).}~\cite{guan2009new_wcrt_bound}:
\begin{equation}
x = \left\lfloor \frac{\Omega_s(x, \Gamma_s^{NC}, \Gamma_s^{CI})}{M}\right\rfloor + C_s.
\end{equation}
We can solve this using an iterative fixed-point search with the initial condition $x^{(0)}=C_s$. The search terminates if there exists a solution ({\it i.e.,}\xspace $x = x^{(k)} = x^{(k-1)}$ for some iteration $k$) or when $x^{(k)} > T_s^{max}$ for any iteration $k$, since $\tau_s$ becomes trivially unschedulable for a WCRT greater than $T_s^{max}$. Finally, we can calculate the WCRT of $\tau_s$ as follows:
\begin{equation}
\mathcal{R}_s = \max\limits_{\left( \Gamma_s^{NC}, \Gamma_s^{CI} \right) \in \mathcal{Z}_{\tau_s} } \mathcal{R}_{s | {(\Gamma_s^{NC}, \Gamma_s^{CI}})} .
\end{equation}
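A compact sketch of this fixed-point search (for one fixed choice of the carry-in/non-carry-in split) follows; \texttt{total\_interference(x)} stands for $\Omega_s(x, \Gamma_s^{NC}, \Gamma_s^{CI})$ and is assumed to be assembled from the workload functions sketched earlier. The WCRT $\mathcal{R}_s$ is then the maximum of the returned values over all splits in $\mathcal{Z}_{\tau_s}$.
\begin{verbatim}
def response_time_for_split(C_s, T_s_max, M, total_interference):
    """Fixed-point iteration x = floor(Omega_s(x)/M) + C_s, started at x = C_s.
    Returns the converged response time, or None if it exceeds T_s_max
    (task unschedulable for this carry-in / non-carry-in split)."""
    x = C_s
    while True:
        x_next = total_interference(x) // M + C_s
        if x_next == x:
            return x
        if x_next > T_s_max:
            return None
        x = x_next

# R_s = max over all (non-carry-in, carry-in) splits of the returned values.
\end{verbatim}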
"file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0015.json.gz"
} |
\section{Introduction}
A cochlear implant is a Food and Drug Administration (FDA) approved solution for severe-to-profound hearing loss. Manual insertion of cochlear implant electrode arrays (hereafter simply called electrode arrays), however, causes intra-cochlear physical trauma in about one-third of surgeries \cite{Clark}, \cite{Clark2}. This physical trauma not only decreases the residual hearing ability but also reduces the functionality of the cochlear implant \cite{Clark}, \cite{Clark2}. To prevent physical trauma during surgery, researchers have suggested magnetic guidance of the electrode array \cite{Clark}, \cite{Clark2}, \cite{Leon}. In this technique, a magnet attached to the tip of the electrode array is guided in the cochlear turns via an external magnetic field (see Fig.~\ref{fig:1}) \cite{DHANASINGH201793}, \cite{medel}. After surgery, the magnet must be detached from the electrode array and removed from the cochlea to avoid potential medical complications arising when the patient is exposed to a strong magnetic field \cite{majdani}. The detachment process requires heating of the magnet, thus releasing thermal energy in the cochlea that may cause thermal trauma within the ear. Heat transfer in the middle ear, organ of balance, auditory nerves and the skull has been studied for applications such as the caloric test \cite{Kassemi:2005}, \cite{Baertschi1975}, \cite{Cawthorne}, stapedectomy \cite{morshed}, \cite{ricci_mazzoni_1985}, \cite{Kodali}, radio-frequency radiation \cite{McIntosh}, \cite{Bernardi}, \cite{McIntosh2005}, magnetic resonance imaging \cite{majdani}, \cite{Wanger}, \cite{LOEFFLER2007583}, \cite{YUN2005275}, infrared neural stimulation of cochlear implants \cite{THOMPSON201546}, \cite{Shiparo}, \cite{Izzo2007OpticalPV}, \cite{Thompson2013InfraredNS}, \cite{rajguru4-INS}, and therapeutic hypothermia \cite{Rajguru}, \cite{rajguru2-terapeutic}, \cite{rajguru3-therapeutic}. Yet, despite these numerous efforts, heat transfer analysis within the cochlear canals is still lacking and constitutes an important knowledge gap in the establishment of magnetic cochlear implant surgery. In addition, neither the heat source nor the targeted tissue in these aforementioned applications is similar to those involved in the magnet detachment process after magnetic insertion of the cochlear implant. Clearly, a comprehensive thermal analysis in the cochlear canals during magnetic cochlear implant surgery is required. This constitutes the novelty of this work, as heat transfer in cochlear canals has never been studied before. \\
The objective of this paper is to understand the mechanisms responsible for thermal energy dissipation during magnetic guidance of cochlear implants. For that purpose, conduction and natural convection heat transfer are simulated in a three-dimensional (3D) uncoiled model of the cochlea, where the magnet acts as a heat source that results from Joule heating. Specifically, the safe range of input power density to detach the magnet without causing thermal trauma in the ear, and the effectiveness of natural convection with respect to conduction for removing the excess heat during the magnet detachment phase, are analyzed for the first time.\\
The rest of the paper is organized as follows. Section 2 provides a description of the computational model and the associated assumptions. Next, the model is verified for conduction heat transfer by comparison against a one-dimensional (1D) solution for two concentric cylinders, where the inner cylinder represents the magnet generating heat. This is followed by a verification of the model for natural convection heat transfer between two concentric cylinders and validation for two eccentric cylinders. In the fourth section, heat transfer within the uncoiled model of the cochlea where the magnet acts as a heat source is simulated. The impact of natural convection with and without an inserted electrode array are analyzed. The effect of the magnitude and duration of the heating process, and the fluid within the cochlear canals, are determined in the last section. Concluding remarks are then provided. \\
\begin{figure}[t]
\centering\includegraphics[width=\linewidth]{cochleaelectrode2.pdf}
\caption{\label{fig:1} Cutaway view of a cochlea with an inserted electrode array (Photo by MED-EL) \cite{DHANASINGH201793}, \cite{medel}.}
\end{figure}
\section{Description of the 3D uncoiled model of the cochlea}
The cochlea is a long semi-conical, spiral set of three fluid-filled ducts with two and one-half turns (see Fig. \ref{fig:1}). The fluid filling the cochlea is a dilute solution of ions in water called perilymph \cite{Perilymph}. In this paper, a 3D uncoiled model of the cochlea characterized by a length of 32.31 mm and a diameter of 2 mm is considered (see Fig.~\ref{fig:2}). A 31.5-mm-long electrode array is inserted in the cochlear canal through a dissected hole called the round window. The radius of the electrode array decreases linearly from 0.65 mm at the round window ($x$ = 0 mm) to 0.4 mm at $x$ = 6.5 mm, and then from 0.4 mm down to 0.2 mm between $x$ = 6.5 mm and 31.5 mm \cite{DHANASINGH201793}. The electrode array is surrounded by perilymph, and the boundary of the cochlear canal is made of bone. A 1-mm-long, 0.5-mm-diameter cylindrical magnet acting as a heat source is aligned with and attached near the tip of the electrode array. The center of the magnet is located at $x$ = 30 mm and $y$= -0.7 mm.\\
Due to the relatively low maximum temperature involved in robotic cochlear implant surgery (a few degrees higher than the body core temperature of 37$^{\circ}$C), radiation heat transfer is negligible. As such, heat transfer in the 3D uncoiled model of the cochlea is analyzed by considering only conduction and natural convection heat transfer. The energy balance is given by:\\
\begin{equation} \label{eq:1}
{\rho c_p \frac{\partial T}{\partial t}} +{\rho c_p \mathbf{u} \cdot \nabla T} + {\nabla \cdot (-k \nabla T)} = q
\end{equation}\\
where $T$, $\rho$, $c_p$, $k$, $\mathbf{u}$, and $t$ are respectively temperature (K), density (kg/m$^3$), heat capacity (J/kg$\cdot$K), thermal conductivity (W/m$\cdot$K), velocity vector (m/s), and time (s).~The input power density, $q$ (W/m$^3$), is non-zero only in the magnet, while the advection term (i.e., second term on the left-hand side of Eq. (\ref{eq:1})) is non-zero only in the perilymph.~The magnet is heated by Joule heating.~The magnet detachment process starts after insertion of the cochlear implant and after switching off the external magnetic field. During insertion, the external magnetic field strength is on the order of 10 mT \cite{petruska}; thus, heating during this phase of the surgery is negligible. By contrast, a strong magnetic field of 3 T, such as that produced by an MRI machine, only produces a temperature increase in a magnet of less than 0.5 $^{\circ}$C \cite{majdani}. As such, the only heating within the cochlea that is considered occurs during the magnet detachment phase. The velocity field in the perilymph is determined by solving the following mass and momentum balance equations:
\begin{equation} \label{eq:2}
\frac{\partial \rho}{\partial t}+\nabla \cdot (\rho \mathbf{u})=0
\end{equation}
\begin{equation} \label{eq:3}
\rho \frac{\partial \mathbf{u}}{\partial t}+\rho(\mathbf{u}\cdot\nabla)\mathbf{u}=\nabla \cdot \left[-p\,\mathbf{I} + \mu\left(\nabla \mathbf{u}+(\nabla \mathbf{u})^{T}\right)\right] + \rho\mathbf{g}
\end{equation}\\
where $\mathbf{g}$ (m/s$^{2}$) is the gravitational acceleration, $p$ is the pressure (Pa), $\mu$ is the dynamic viscosity (Pa$\cdot$s), and $\mathbf{I}$ is the $3\times3$ identity matrix. The magnetohydrodynamic effect is negligible in this analysis because the external magnetic field is removed after insertion of the magnet.\\
When solving the energy balance equation, the perilymph, electrode array, and magnet are initially at the body core temperature of 37$^{\circ}$C. Except at the round window where the bone is removed during surgery, the cochlear canal boundary is assumed to be insulated. This is justified by the fact that bones are characterized by a low thermal conductivity in the range of $\sim$0.373 to 0.496 W/m$\cdot$K \cite{bone}. At the round window, the perilymph and electrode array are isothermal at the body core temperature. Continuity boundary conditions are applied at the electrode array and magnet walls, which implies that the temperature and heat flux on these boundaries are equal for the adjacent domains.\\
For the mass and momentum balance equations, the perilymph is initially stagnant ($\mathbf{u}$ = 0 m/s), while the pressure inside the cochlear canal is equal to atmospheric pressure. The pressure at the round window is assumed to be constant and equal to atmospheric pressure. The magnet, cochlear canal, and electrode array walls are subjected to no-slip boundary conditions. Gravity acts along the negative $z$-direction, and the reference point ($x=0$, $y=0$, $z=0$) is located at the round window in the center of the cochlear canal.\\
Equations (\ref{eq:1}) to (\ref{eq:3}) are solved using the finite element method as implemented in COMSOL Multiphysics 5.4
\cite{Bergheau}, \cite{reddy}, \cite{comsol} (see the Supplementary Material for computational details). In the calculations, the thermophysical properties of the perilymph are assumed to be the same as those of water \cite{Kassemi:2005},\cite{Perilymph}. The magnet and electrode array thermophysical properties are provided in Table~1. Before analyzing heat transfer in the cochlear canal shown in Fig.~\ref{fig:2}, the COMSOL model is first verified and validated. These are presented in the next section.\\
\begin{table}[h]
\begin{center}
\centering
\begin{threeparttable}
\caption{Thermophysical properties of the magnet and electrode array.}
\label{thermalproperties}
\begin{tabular}{l l l l}
\hline
Domain & $c_{p}$~(J/kg$\cdot$K) & $k$~(W/m $\cdot$ K) & $\rho$~(kg/m$^3$)\\
\hline
Magnet & 430 \tnote{a} & 8.1 \tnote{a} & 7500 \tnote{a} \\
Electrode array & 127.7 \tnote{b} & 2.8 \tnote{b} & 19400 \tnote{b} \\
\hline
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[a] Provided by the manufacturer (SUPERMAGNETMAN).
\item[b] Calculated based on the information provided by MED-EL.
\end{tablenotes}
\end{threeparttable}
\end{center}
\end{table}
\begin{figure}[h]
\centering\includegraphics[width=\linewidth]{actualgeo.pdf}
\caption{\label{fig:2} 3D uncoiled model of the cochlea with inserted electrode array and magnet. The electrode array model is made by MED-EL \cite{DHANASINGH201793}.}
\end{figure}
\begin{figure}[h]
\centering\includegraphics[width=\linewidth]{Figure3.pdf}
\caption{\label{fig:3} The cochlear 3D model is uncoiled, and a cross-section of the uncoiled cochlea with inserted magnet represents the 1D model that is used for verification of conduction.}
\end{figure}
\section{Verification and validation of the model}
\subsection{\textbf{Verification of conduction heat transfer in a cross-section of the cochlea with heat source}}
Conduction heat transfer in the numerical model is verified in a 1D cross-section of the cochlea, where only temperature variations along the radial direction are analyzed (see Fig.~\ref{fig:3}). The outer cylinder delimits the cochlear region, while the inner cylinder represents the magnet generating heat. The numerical results are compared with an analytical solution described hereafter.\\
The energy balances for pure conduction in the magnet (region 1: $r<R_i$ (see Fig. \ref{fig:3})) and in the perilymph (region 2: $R_i<r<R_o$ (see Fig. \ref{fig:3})) are respectively given by:\\
\begin{equation} \label{eq:4}
\alpha_{1}\frac{\partial^2 T_1}{\partial r^2}+\frac{\alpha_1}{k_1}q(r,t)=\frac{\partial T_1}{\partial t}
\end{equation}
\begin{equation} \label{eq:5}
\alpha_{2}\frac{\partial^2 T_2}{\partial r^2}=\frac{\partial T_2}{\partial t}
\end{equation}
where $\alpha$ and $r$ are the thermal diffusivity (m$^2$/s) and the radial coordinate (m). Subscripts 1 and 2 represent region 1 (magnet) and region 2 (perilymph), respectively. $R_i$ and $R_o$ are radii (m) of the inner and outer circles.
At $r=R_i$, continuity boundary conditions are applied:
\begin{equation} \label{eq:6}
T_1(R_i,t)=T_2(R_i,t)
\end{equation}
\begin{equation} \label{eq:7}
k_1\left.\frac{\partial T_1}{\partial r}\right|_{r=R_i}= \left.k_2\frac{\partial T_2}{\partial r}\right|_{r=R_i}
\end{equation}
It is assumed that the entire domain is initially at the body core temperature ($T_{\textnormal{bc}}$ = 37$^{\circ}$C), while the input power density, $q$, is fixed at $3.3\times10^6$ W/m$^3$. The analytical solution for conduction heat transfer between two infinite, concentric cylinders is provided in Ref. \cite{ozisik}. Equations (\ref{eq:4}) and (\ref{eq:5}) are solved simultaneously using Green's functions.~The final result is a combination of Bessel functions of the first and second kinds that require computation of eigenvalues.~It is challenging, however, to calculate the first eigenvalue for an adiabatic cochlear boundary. Jain et al.~\cite{Jain} avoided using an insulation boundary condition, but no explanation was given. As such, a conductive boundary condition at $r=R_o$ is modeled with a small overall heat transfer coefficient, $U$, of 0.003 W/m$^2\cdot$K to mimic an adiabatic condition:
\begin{equation} \label{eq:8}
{-k_2\left.\frac{\partial T_2}{\partial r}\right|_{r=R_o}}+U(T_2(R_o) -T_{\textnormal{bc}})=0
\end{equation}
The convergence of the numerical solution has been studied by refining the element size as well as the time step. The numerical results converged using 8 elements and a time step of 0.1 s. The converged numerical results are verified against the analytical results in Fig.~\ref{fig:4}. The maximum Normalized Root Mean Square Error (NRMSE) does not exceed 0.5\%. This difference is mostly due to truncation errors.\\
Natural convection is the other heat transfer mode that may play a role in thermal energy dissipation during magnetic guidance of cochlear implants. Therefore, natural convection in the numerical model is verified and validated next using numerical and experimental data from the literature.\\
\begin{figure}[t]
\centering\includegraphics[width=4in]{verification.pdf}
\caption{\label{fig:4} Comparison of temperature profiles at selected times (0, 10, 40, 80, 114 s) from the analytical solution and the numerical model. The region between $r$=0 mm and 0.25 mm represents the magnet, while the rest of the domain is filled with perilymph.}
\end{figure}
\subsection{\textbf{Verification and validation of natural convection in a cross-section of the cochlea with inserted magnet}}
Natural convection in the numerical model is first verified with numerical data for two concentric, isothermal cylinders without heat generation. The energy, mass, and momentum equations given as Eqs. (\ref{eq:1}) to (\ref{eq:3}), respectively, are solved for an input power density ($q$) equal to zero. Radial temperature distributions from the numerical model are compared in Fig.~\ref{fig:5} against numerical results \cite{cho} for selected azimuthal angles $\varphi$. The dimensional equations are solved and the results are nondimensionalized during the post-processing phase for easy comparison to temperature data presented in \cite{cho}. Here, $\theta$ is the dimensionless temperature $\frac{T-T_o}{T_i-T_o}$, where $T_o$ is the temperature of the outer cylinder of radius $R_o$, while $T_i$ is the temperature of the inner cylinder of radius $R_i$. The dimensionless radial distance $R^*$ is defined as $\frac{r-R_i}{R_o-R_i}$. For the case depicted in Fig. \ref{fig:5}, the temperature difference between the inner and outer cylinders is 175 $^{\circ}$C, the Rayleigh number $Ra = \frac{g \beta (T_i -T_o) (R_o-R_i)^3}{\nu \alpha}$ is $10^4$, which is within the range of natural convection, the Prandtl number $Pr$ is 0.71, and the ratio of the outer cylinder radius to the inner cylinder radius, $R_o/R_i$, is 5. \\
A convergence analysis has been performed to determine the minimum number of elements leading to a stable solution. Initially,~142,961 elements were used to solve the problem; the number of elements was subsequently increased to~2,977,920. The maximum difference between these simulations was less than $1\%$. A time step of 0.1 s is used for the simulation. Refining the time step does not significantly affect the results. The maximum difference between the numerical results and those from Cho et al. \cite{cho} is less than 4$\%$. This difference may be due to using a digitizer tool (OriginPro 2019b) to extract the data from Ref. \cite{cho}, or the slight difference between the input parameters (e.g., material properties) used in our simulation and those from Cho et al \cite{cho}. \\
Natural convection is next validated against experimental data for two eccentric cylinders \cite{kuehn}, which is representative of the actual problem where the magnet attached to the electrode array is not centered in the cochlear canal (see Fig.~\ref{fig:6}). For this problem, $r'$ represents the radial distance from the inner cylinder center, while $R'$ is the radial distance between the two cylinder walls. The ratio of the inner cylinder eccentricity, $\varepsilon$, to the gap distance between the two cylinders in a concentric arrangement, $(L = R_o-R_i)$, is 0.652. In addition, the Rayleigh number $Ra$ is $4.8 \times 10^4$, the Prandtl number $Pr$ is 0.706, while $R_i$ and $R_o$ are respectively equal to 1.78 cm and 4.625 cm. The temperature difference, $\Delta T$, is 26.3 $^{\circ}$C, and $T_i+T_o$~=~70.46 $^{\circ}$C.\\
\begin{figure}[tb]
\centering\includegraphics[width=4in]{validationwithcho.pdf}
\caption{\label{fig:5} Verification of natural convection in the numerical model with the results reported by Cho et al. \cite{cho} for two concentric cylinders. The vertical and horizontal axes represent the dimensionless temperature ($\theta$) and dimensionless radial distance ($R^{ \ast}$), respectively. }
\end{figure}
A convergence analysis of the numerical model has been performed in the same manner as for two concentric cylinders.~The number of elements leading to converged results is 205,021.~At time steps shorter than 0.1 s, the numerical results do not change by more than 1$\%$.~The dimensionless temperature ($\theta$) is plotted as a function of the dimensionless radial distance $R^{*'}=\frac{r'-R_i}{R'-R_i}$ for selected values of $\varphi$ in Fig.~\ref{fig:6}. The maximum NRMSE is 6$\%$, which is equal to 1.7 $^{\circ}$C. The error in digitizing the data from Ref. \cite{kuehn} and the measurement error (not specified explicitly) are the most plausible sources of the differences.\\
To conclude this section, the numerical model provides accurate results for both conduction and natural convection heat transfer. As such, the numerical model can be applied with confidence to the thermal analysis of the uncoiled cochlea shown in Fig.~\ref{fig:2}.
\begin{figure}[tb]
\centering\includegraphics[width=4in]{validationwithkuehn.pdf}
\caption{\label{fig:6} Validation of natural convection in the numerical model with the results reported by Kuehn and Goldstein \cite{kuehn} for two eccentric cylinders. The vertical and horizontal axes represent the dimensionless temperature ($\theta$) and the dimensionless radial distance ($R^{*'}$), respectively. }
\end{figure}
\section{Heat transfer analysis in the 3D uncoiled model of the cochlea with inserted electrode array and magnet}
The verified and validated numerical model is used to simulate heat transfer in the 3D uncoiled model of the cochlea with an inserted electrode array and magnet, as described in section 2 and shown in Fig.~\ref{fig:2}. In all simulations, 3,565,495 elements and time steps of 0.1 s were sufficient to obtain converged results. Note that the number of elements is reduced to 808,937 when natural convection is neglected. Prior to discussing the heat transfer analysis, the computational details are presented in subsections 4.1 and 4.2 (see Supplementary Material for details on element distribution, and solver settings).
\subsection{\textbf{Grid independence}}
Considering conduction only, the number of elements has been increased in four steps to determine the minimum number of elements necessary such that the solution does not change with the mesh size. The coarse, medium, fine, and extra fine meshes include 18,067, 110,087, 304,362, and 808,937 (808,937 Tetrahedra, 51,380 Triangles, 3,028 Edge elements, and 32 Vertex elements) elements, respectively. The temperatures at $t$ = 114 s in three locations in the cochlea are selected to assess grid independence (see Fig. \ref{gridindependence} (a)). Point 1 (30.5 mm, -0.96 mm, 0.0 mm) is located on the left side of the magnet, point 2 (30.5 mm, -0.4 mm, 0.0 mm) on the right side of the magnet, and point 3 (15 mm, -0.6 mm, 0.0 mm) is located in the middle of the cochlea on the left side of the electrode array. The maximum difference between the temperature data is 0.9$\%$.
\begin{figure}[h]
\centering\includegraphics[width=\linewidth]{convergence.pdf}
\caption{\label{gridindependence} Grid independence results in three points inside the cochlea for four different grid sizes ($q$ = 1.62 $\times$ 10$^7$ W/m$^3$, $t$ = 114 s): (a) conduction without convection, (b) conduction with convection.}
\end{figure}
In the case of conduction with convection, the coarse, medium, fine, and extra fine meshes include 224,191, 429,652, 1,093,677, and 3,565,495 (3,214,283 Tetrahedra, 592 Pyramids, 350,620 Prisms, 177,936 Triangles, 344 Quads, 4,816 Edge elements, and 32 Vertex elements) elements, respectively. Once again, the temperatures at $t$ = 114 s in the same three locations in the cochlea are selected to determine grid independence (see Fig. \ref{gridindependence} (b)). The maximum difference between the temperature data is 0.09$\%$. Note that $y^+$ is less than 1, indicating that there is sufficient mesh resolution near the wall. The Rayleigh number $Ra$ is less than $10^3$; thus, the flow is laminar. The grid independence presentation in this document is based on Refs. \cite{reviewer-paper1}, \cite{reviewer-paper2}, \cite{reviewer-paper3}.
\subsection{\textbf{Time step analysis}}
The finest meshes with 808,937 elements for conduction and 3,916,115 elements for conduction and convection are used to determine the impact of the time step on the final results. The three values of time step selected for this study are: 0.05 s, 0.1 s, and 0.5 s. Temperature data at the same three locations used to study grid independence are employed here. The results are provided in Tables \ref{timedeanalysis1} and \ref{timedeanalysis2}. The time step does not have any impact on the conduction results. In the case of conduction with convection, the data change by less than 0.1 $\%$.
\begin{table}[h]
\caption{Calculated temperature ($q$ = 1.62 $\times$ 10$^7$ W/m$^{3}$, $t$ = 114 s ) at three locations for simulations of conduction without convection as a function of time step.}
\begin{center}
\centering
\small
\label{timedeanalysis1}
\begin{tabular}{l l l l}
\hline
Time Step (s) & $T_{\rm{point1}}$~($^{\circ}$C) & $T_{\rm{point2}}$~($^{\circ}$C) & $T_{\rm{point3}}$~($^{\circ}$C)\\
\hline
0.05 & 40.584 & 41.226 & 37.306\\
0.1 & 40.584 & 41.226 & 37.306\\
0.5 & 40.584 & 41.226 & 37.306\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[h]
\caption{Calculated temperature ($q$ = 1.62 $\times$ 10$^7$ W/m$^3$, $t$ = 114 s ) at three locations for simulations of conduction with convection as a function of time step.}
\begin{center}
\centering
\small
\label{timedeanalysis2}
\begin{tabular}{l l l l}
\hline
Time Step (s) & $T_{\rm{point1}}$~($^{\circ}$C) & $T_{\rm{point2}}$~($^{\circ}$C) & $T_{\rm{point3}}$~($^{\circ}$C)\\
\hline
0.05 & 40.561 & 41.201 & 37.308\\
0.1 & 40.563 & 41.200 & 37.306\\
0.5 & 40.562 & 41.201 & 37.308\\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{\textbf{Impact of natural convection on heat transfer within the cochlea}}
The thermal damage threshold of tissues in the cochlea is required for calculating the maximum safe input power density to detach the magnet \cite{thermaldose1}. This thermal damage threshold of in-vivo tissues depends on the temperature, the type of tissues, and the length of exposure to the heat source. The Cumulative Equivalent Minutes at a fixed temperature ($CEM_{T}$) is a parameter that combines both the effects of temperature and length of exposure \cite{thermaldose1}, \cite{Yoshida10116}. As such, $CEM_{T}$ is used to determine the maximum safe input power density for detaching the magnet from the electrode array. Yoshida et al. \cite{Yoshida10116} reported that exposing mouse ear tissues to a temperature of 43$^{\circ}$C for 1.9 minutes (114 s) does not affect ear functionality. Van Rhoon et al. \cite{Rhoon} pointed out that a $CEM_{43}$ less than 2 minutes is safe for any type of tissue under supervision of an expert capable of managing a sudden physiological response to a thermal stress. Here, a $CEM_{43}$ of 114 s is used to calculate the maximum safe input power density.\\
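For illustration, a thermal dose history can be accumulated with the standard Sapareto--Dewey form of $CEM_{43}$; note that this explicit formula is not stated in the text and is quoted here only as the commonly used definition, so the sketch below should be read as an assumption rather than as the procedure used in the paper.
\begin{verbatim}
def cem43(times_s, temps_c):
    """Cumulative equivalent minutes at 43 C from a sampled T(t) history.

    Standard Sapareto-Dewey form: CEM43 = sum_i dt_i * R**(43 - T_i),
    with R = 0.5 for T >= 43 C and R = 0.25 below; dt in minutes.
    """
    cem = 0.0
    for i in range(1, len(times_s)):
        dt_min = (times_s[i] - times_s[i - 1]) / 60.0
        T = 0.5 * (temps_c[i] + temps_c[i - 1])
        R = 0.5 if T >= 43.0 else 0.25
        cem += dt_min * R ** (43.0 - T)
    return cem
\end{verbatim}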
The maximum safe input power density for 114 s of heating is first estimated for the limiting case of pure conduction within a cochlea containing a solitary magnet. This is the worst-case scenario, as natural convection and conduction through the electrode array facilitate heat dissipation in the cochlea. By applying the Parametric Sweep Study Module in COMSOL, it is found that the maximum safe input power density for this limiting case is $1.62\times10^7$ W/m$^3$ based on a $CEM_{43}$ of 114 s. This input power density is used hereafter to study the impacts of natural convection and conduction through the electrode array on the thermal management of the cochlea. \\
The maximum temperature in the cochlea is provided in Table~\ref{table4} for four different scenarios, namely for pure conduction with and without an electrode array, and for conduction and natural convection with and without an electrode array. In the absence of the electrode array, natural convection reduces the maximum temperature in the cochlea by approximately 1$^\circ$C. This effect is significant considering the fact that an increase of temperature in excess of 6$^\circ$C above the body core temperature causes damage to tissues. Conversely, in the cochlea with electrode array, the maximum $Ra$ is less than 400, so the flow is laminar and conduction is dominant \cite{incropera}. Also, the maximum P\'{e}clet number ($Pe$) is 0.33, which means that the diffusion transport rate dominates the advection transport rate. Consequently, the impact of natural convection on the maximum temperature in the cochlea with the electrode array is negligible. Thus, it can be concluded that inserting the electrode array, as done in the actual surgery, reduces the relative contribution of natural convection to heat transfer in the cochlea.~This conclusion is also confirmed in Fig.~\ref{heatremoval}, where the heat removal rate from the magnet is shown as a function of time.~When the electrode array is not modeled, the heat removal rate from the magnet increases in a non-negligible manner due to natural convection during the first 60 s of the transient process.~Yet, the additional heat removal from the magnet due to natural convection is clearly negligible in comparison to the heat removed by conduction in the presence of the electrode array. These results can be explained by the fact that the electrode array drives some perilymph out of the cochlea. The remaining fluid in the small annular region does not flow easily due to the internal no-slip boundary condition around the electrode array. As such, heat is mostly transferred axially via conduction in the electrode array. Fig.~\ref{tempprofile} provides the temperature distribution within the uncoiled cochlea with and without the electrode array after heating the magnet for 114 s with an input power density of $1.62\times10^7$ W/m$^3$. Perilymph temperature is maximum near the magnet and decreases to the body core temperature when approaching the round window. \\
The negligible impact of natural convection in the presence of the electrode array is further supported by the correlation developed by Raithby and Hollands \cite{RAITHBY} for two concentric cylinders separated by a fluid gap assumed to be much smaller than the length of the cylinders.~In this correlation, an effective thermal conductivity $k_{\textnormal{eff}}$ (W/m$\cdot$K) due to conduction and natural convection within the gap between two isothermal cylinders is calculated as follows:
\begin{equation} \label{eq:9}
\frac{k_{\textnormal{eff}}}{k}= 0.386\left(\frac{Pr}{0.861+Pr}\right)^{\frac{1}{4}}Ra_{cc}^{\frac{1}{4}}
\end{equation}\\
where $k$ is the thermal conductivity of the fluid in the gap. The correlation assumes that the temperature of the inner cylinder is greater than the outer cylinder temperature. In Eq.~(\ref{eq:9}), the modified Rayleigh number for two concentric cylinders, $Ra_{cc}$, is defined as \cite{RAITHBY}:
\begin{equation} \label{eq:10} Ra_{cc}=\frac{\left[\ln\left(\frac{D_o}{D_i}\right)\right]^{4}}{b^3\left(D_i^{-3/5}+D_o^{-3/5}\right)^{5}}Ra_b
\end{equation}\\
where $b=D_o-D_i$, and $D_o$ and $D_i$ are respectively the outer and inner cylinder diameters. Equations (\ref{eq:9}) and (\ref{eq:10}) are applicable for fluids characterized by $0.7\leq Pr \leq 6000$ and $Ra \leq 10^7$. A ratio $\frac{k_\textnormal{eff}}{k}$ larger than 1 indicates that natural convection contributes to heat transfer in a non-negligible manner. Otherwise, natural convection is negligible and heat transfer can solely be modeled via conduction. Substituting the cochlear dimensions into Eqs. (\ref{eq:9}) and (\ref{eq:10}), and using the temperature difference $\Delta{T}$ = 43$^{\circ}$C - 37$^{\circ}$C to calculate $Ra_b$, the maximum $\frac{k_{\textnormal{eff}}}{k}$ ratio is 0.7. Therefore, both the numerical model and correlation demonstrate that natural convection is negligible in the thermal analysis of the magnet detachment process. The maximum safe input power for a range of heating time intervals is discussed in the next section.
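The correlation of Eqs.~(\ref{eq:9}) and (\ref{eq:10}) is straightforward to evaluate; a minimal sketch (with the cylinder diameters, $Pr$ and $Ra_b$ supplied by the user) is:
\begin{verbatim}
import math

def ra_concentric(D_i, D_o, Ra_b):
    """Modified Rayleigh number Ra_cc of Eq. (10), with b = D_o - D_i."""
    b = D_o - D_i
    return (math.log(D_o / D_i) ** 4
            / (b ** 3 * (D_i ** (-0.6) + D_o ** (-0.6)) ** 5)) * Ra_b

def keff_over_k(Pr, Ra_cc):
    """Effective conductivity ratio k_eff/k of Eq. (9)."""
    return 0.386 * (Pr / (0.861 + Pr)) ** 0.25 * Ra_cc ** 0.25
\end{verbatim}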
\begin{table}[h]
\caption{Maximum temperature in a cochlear canal.}
\begin{center}
\centering
\begin{adjustbox}{width=1\textwidth}
\small
\label{table4}
\begin{tabular}{l l}
\hline
& $T_{max}$~($^{\circ}$C) \\
\hline
Cochlea with magnet (conduction) & 42.87 \\
Cochlea with magnet (conduction and convection) & 42.08 \\
Cochlea with electrode array and magnet (conduction) & 41.23 \\
Cochlea with electrode array and magnet (conduction and convection) & 41.20 \\
\hline
\end{tabular}
\end{adjustbox}
\end{center}
\end{table}
\begin{figure}[h]
\centering\includegraphics[width=\linewidth]{heatrate.pdf}
\caption{\label{heatremoval} Heat removal rate from the magnet as a function of time.}
\end{figure}
\begin{figure}[h]
\centering\includegraphics[width=\linewidth]{tempprofile.pdf}
\caption{\label{tempprofile} Temperature distribution at $t = 114$ s in the uncoiled cochlea due to heat transfer by conduction: (a) With a solitary magnet. (b) With electrode array and magnet.}
\end{figure}
\section{Determination of the safe input power density range}
The range of safe input power is determined by limiting the maximum allowable temperature within the entire cochlea to 43$^{\circ}$C. Thus, the safe input power density depends on the length of the heating period, i.e., faster detachment requires higher input power density. A parametric study is performed to determine the maximum safe input power density at discrete time intervals (see Fig. \ref{maxpower}). By fitting the data to a power law function, the maximum input power density is shown to be inversely proportional to approximately the square root of the heating period. The total energy transferred to the magnet decreases when the input power density is increased and the heating period is concurrently reduced. In a faster heating process, the rate of energy input exceeds the rate of heat removal by the perilymph by a larger margin than in the slower heating case. Consequently, the temperature of the magnet and the immediately adjacent region rises quickly; as a result, less energy can be transferred to the magnet without causing thermal trauma.\\
One possible means to attach the magnet to the tip of the electrode array is by a paraffin wax structure. To determine the energy required to melt the paraffin, it is assumed that the paraffin is a lumped capacitance, all the energy inside the magnet is transferred to the paraffin, and the paraffin is not exchanging heat with the surrounding material. With these assumptions, the combined sensible heat and the latent heat required to melt a 0.5 mm$^{3}$ paraffin bulk with a melting point of 43$^{\circ}$C is approximately 0.1 J~(1.8 $\times$ 10$^{6}$~{W/m$^{3}$} for a heating duration of 114 s), which is about one order of magnitude lower than the maximum safe input power density. Even the lowest of the maximum safe input power densities is sufficient to melt the paraffin around the magnet and release it. Therefore, a paraffin structure can be used to attach the magnet, and after the insertion, the paraffin can be melted without the risk of hyperthermia.
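A back-of-the-envelope version of this estimate is sketched below. The paraffin property values are generic wax figures assumed purely for illustration (they are not taken from the paper), while the volume, temperature rise and heating duration follow the text.
\begin{verbatim}
# Lumped, adiabatic estimate of the energy needed to melt the paraffin bond.
rho_wax = 900.0       # kg/m^3, assumed generic paraffin value
cp_wax  = 2.5e3       # J/(kg K), assumed generic paraffin value
latent  = 2.0e5       # J/kg, assumed generic paraffin value
volume  = 0.5e-9      # m^3 (0.5 mm^3, from the text)
dT      = 43.0 - 37.0 # K, body core temperature to melting point
t_heat  = 114.0       # s, heating duration considered in the text

mass = rho_wax * volume
energy = mass * (cp_wax * dT + latent)   # sensible + latent heat, ~0.1 J
q_melt = energy / (volume * t_heat)      # ~1.7e6 W/m^3, close to the 1.8e6 quoted
\end{verbatim}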
\begin{figure}[h]
\centering\includegraphics[width=\linewidth]{maxpower.pdf}
\caption{\label{maxpower} Maximum safe input power density as a function of heating time interval.}
\end{figure}
\section{Impact of the cochlear fluid on transient temperature change}
The fluid within the cochlea is an important parameter to determine the safe input power density for magnet detachment. During cochlear implant electrode array insertion, some amount of perilymph is forced from the cochlea and is replaced by air. This represents a worst-case scenario in terms of possible exposure to hyperthermia. Thus, the impact of replacing the perilymph with air should be studied. Research also suggests that lubricants can reduce intracochlear force during insertion \cite{lubricant}. Employing a lubricant during magnetic insertion also has potential for decreasing the intracochlear physical trauma. Thus, assessing the effect of the various fluids on the magnet detachment process is critical. Four fluids, including perilymph, air, Glycerol, and a soap solution (10$\%$ wt soap, 90$\%$ wt distilled water \cite{lubricant}) are studied to determine their impact on temperature increase within the cochlea during detachment (see Fig. \ref{materialimpact}). Thermophysical properties of air, perilymph, Glycerol, and the soap solution are provided in Table~5. Air has the largest thermal diffusivity among these four substances, and consequently the rate of temperature change for air exceeds that of the other three fluids. The data from this study indicate that the cochlea should be filled with either perilymph or the soap solution; otherwise, the magnet (and surrounding tissues) will heat too quickly for safe control unless the input power density is significantly reduced.~The soap solution decreases the insertion force and is thermally compatible with the magnetic insertion of the electrode array since the primary component is distilled water. As a result, conducting magnetic insertion with the soap solution has potential for future use.
\begin{table}[h]
\begin{center}
\centering
\begin{threeparttable}
\caption{Thermophysical properties of four possible cochlear fluids.}
\label{table5}
\begin{tabular}{l l l l l}
\hline
Fluid & $c_{p}$~(J/kg$\cdot$K) & $k$~(W/m$\cdot$K) & $\rho$~(kg/m$^3$)& $\alpha$(m$^2$/s)\\
\hline
Air & 1006.4 \cite{comsol} & 0.027 \cite{comsol} & 1.13 \cite{comsol} & 2.37$\times 10^{-5}$\\
Perilymph & 4176.6 \cite{comsol} & 0.625 \cite{comsol} & 992.20 \cite{comsol} & 1.51$\times 10^{-7}$\\
Glycerol & 2240.0 \cite{glycerol} & 0.285 \cite{glycerol} & 1260.00 \cite{glycerolcp} & 1.01$\times 10^{-7}$\\
Soap solution & 4000.0 \cite{soap} & 0.600 \cite{soap1} & 999.00 \cite{soap} & 1.50$\times 10^{-7}$\\
\hline
\end{tabular}
\footnotesize
\end{threeparttable}
\end{center}
\end{table}
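The diffusivities in the last column of Table~5 follow directly from the tabulated properties, $\alpha = k/(\rho c_p)$; a trivial check using the air entry is:
\begin{verbatim}
def thermal_diffusivity(k, rho, cp):
    """alpha = k / (rho * c_p), in m^2/s."""
    return k / (rho * cp)

alpha_air = thermal_diffusivity(0.027, 1.13, 1006.4)   # ~2.37e-5 m^2/s
\end{verbatim}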
\begin{figure}[h]
\centering\includegraphics[width=\linewidth]{materialimpact.pdf}
\caption{\label{materialimpact} Transient maximum temperature within the uncoiled cochlea with inserted magnet and electrode array filled with perilymph, air, Glycerol, and soap solution ($q$ = 2.265$\times$10$^7$ W/m$^3$ for 114 s).}
\end{figure}
\section{Conclusions}
A 3D uncoiled model of the cochlea was, for the first time, developed to analyze heat transfer within cochlear canals. Specifically, the range of safe input power density to detach the magnet from the electrode array during robotic cochlear implant surgery was studied using a 3D uncoiled model of the cochlea. The conservation equations for mass, momentum, and energy in the cochlea were solved using the finite element method as implemented in COMSOL Multiphysics 5.4. The numerical model was verified and validated for conduction and natural convection heat transfer. It was found that natural convection has a negligible impact on dissipating the heat generated during the magnet detachment process as most of the heat is transferred axially by conduction through the electrode array. Solving the equations for conservation of mass, momentum, and energy simultaneously, which is required to calculate natural convection, is computationally expensive. Thus, the fact that natural convection is negligible is critical for modeling heat transfer in the actual cochlea geometry as it reduces the computational time drastically from about 32 hrs for conduction and natural convection to 1 hr for conduction only on a desktop computer with an Intel Core i9-9900K processor and 64.0 GB RAM. Finally, the maximum safe input power density to detach the magnet after magnetic guidance of the cochlear implant ranges from 2.265$\times$10$^7$ W/m$^3$ for 114 s to 6.6$\times$10$^7$ W/m$^3$ for 9 s. A soap solution is a suitable lubricant that decreases insertion forces and is safe to use during the magnet detachment process. This work will accelerate the implementation of robotic cochlear implant surgery and is critical for avoiding thermal damage of cochlear tissues. In addition, this research provides a foundation for heat transfer studies of a coiled cochlea. \\
\section*{Acknowledgements}
Research reported in this publication was supported by the National Institute on Deafness and Other Communication Disorders of the National Institutes of Health under Award Number R01DC013168. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. We acknowledge the Center for High Performance Computing at the University of Utah for their support and resources. We would also like to show our gratitude to the MED-EL company for sharing the information about their electrode arrays.
\section{Introduction}
Neural networks and ensembles of models are currently used across many domains. For these complex models, explanations accounting for how features relate to predictions are often desirable and at times mandatory \cite{goodman2017european}. In medicine, explainable AI (XAI) is important for scientific discovery, transparency, and much more \cite{holzinger2017we}. One popular class of XAI methods is per-sample feature attributions (i.e., values for each feature for a given prediction).
In this paper, we focus on SHAP values \cite{lundberg2017unified} -- Shapley values \cite{shapley1953value} with a conditional expectation of the model prediction as the set function. Shapley values are the only additive feature attribution method that satisfies the desirable properties of local accuracy, missingness, and consistency. In order to approximate SHAP values for neural networks, we fix a problem in the original formulation of DeepSHAP \cite{lundberg2017unified} where previously it used $E[x]$ as the reference and theoretically justify a new method to create explanations relative to background distributions. Furthermore, we extend it to explain stacks of mixed model types as well as loss functions rather than margin outputs.
Popular model-agnostic explanation methods that also aim to obtain SHAP values are KernelSHAP \cite{lundberg2017unified} and IME \cite{vstrumbelj2014explaining}. The downside of most model-agnostic methods is that they are sampling based and consequently either high variance or slow.
Alternatively, local feature attributions targeted to deep networks have been addressed in numerous works: Occlusion \cite{zeiler2014visualizing}, Saliency Maps \cite{simonyan2013deep}, Layer-Wise Relevance Propagation \cite{bach2015pixel}, DeepLIFT \cite{deeplift}, Integrated Gradients (IG) \cite{sundararajan2017axiomatic}, and Generalized Integrated Gradients (GIG) \cite{gig}.
Of these methods, the ones that have connections to the Shapley values are IG and GIG. IG integrates gradients along a path between a baseline and the sample being explained. This explanation approaches the Aumann-Shapley value. GIG is a generalization of IG to explain losses and mixed model types -- a feature DeepSHAP also aims to provide. IG and GIG have two downsides: 1.) integrating along a path can be expensive or imprecise and 2.) the Aumann-Shapley values fundamentally differ from the SHAP values we aim to approximate. Finally, DASP \cite{ancona2019explaining} is an approach that approximates SHAP values for deep networks. This approach works by replacing point activations at all layers by probability distributions and requires many more model evaluations than DeepSHAP. Because DASP aims to obtain the same SHAP values as DeepSHAP, it is possible to use DASP as a part of the DeepSHAP framework.
\section{Approach}
\subsection{Propagating SHAP values}
\begin{figure}[!ht]
\centering
\includegraphics[width=.8\linewidth]{fig/gradient_based_interpretation.pdf}
\vspace{-.2cm}
\caption{\textit{Visualization of models for understanding DeepLIFT's connection to SHAP values}. In the figure $g$ is a non-linear function and $T$ is a non-differentiable tree model.
}
\vspace{-.5cm}
\label{fig:understanding}
\end{figure}
DeepSHAP builds upon DeepLIFT; in this section we aim to better understand how DeepLIFT's rules connect to SHAP values. This has been briefly touched upon in \cite{deeplift} and \cite{lundberg2017unified}, but here we explicitly define the relationship.
DeepSHAP is a method that explains a sample (foreground sample), by setting features to be ``missing''. Missing features are set to corresponding values in a baseline sample (background sample). Note that DeepSHAP generally uses a background distribution, however focusing on a single background sample is sufficient because we can rewrite the SHAP values as an average over attributions with respect to a single background sample at a time (see next section for more details).
In this section, we define a foreground sample to have feature values $f_{x_i}$ and neuron values $f_{h}$ (obtained by a forward pass), and a background sample to have $b_{x_i}$ or $b_{h}$. Finally, we define $\phi(\cdot)$ to be attribution values.
If our model is \textbf{fully linear} as in Figure \ref{fig:understanding}a, we can get the exact SHAP values for an input $x_i$ by summing the attributions along all possible paths between that input $x_i$ and the model's output $y$. Therefore, we can focus on a particular path (in blue). Furthermore, the path's contribution to $\phi(x_i)$ is exactly the product of the weights along the path and the difference in $x_1$: $w^{(2)}_{2} w^{(1)}_{1,2} (f_{x_1}-b_{x_1})$, because we can rewrite the layers of linear equations in \ref{fig:understanding}a as a single linear equation. Note that we can derive the attribution for $x_1$ in terms of the attribution of intermediary nodes (as in the chain rule):
\begin{align}
\phi(h_1^2)&=w_2^{(2)}(f_{h_1^2}{-}b_{h_1^2})\nonumber\\
\phi(x_1)&=\frac{\phi(h_1^2)}{f_{h_1^2}{-}b_{h_1^2}}w_{1,2}^{(1)}(f_{x_1}{-}b_{x_1})\label{eq:phi_lin_1}
\end{align}
Next, we move on to reinterpreting the two variants of DeepLIFT: the Rescale rule and the RevealCancel rule. First, a gradient based interpretation of the \textbf{Rescale rule} has been discussed in \cite{ancona2018towards}. Here, we explicitly tie this interpretation to the SHAP values we hope to obtain.
For clarity, we consider the example in Figure \ref{fig:understanding}b. First, the attribution value $\phi(h)$ is $g(f_h)-g(b_h)$ because SHAP values maintain local accuracy (sum of attributions equals $f_y-b_y$) and $g$ is a function with a single input. Then, under the Rescale rule, $\phi(x_i){=}\frac{\phi(h)}{f_h-b_h}w_i(f_{x_i}-b_{x_i})$ (note the resemblance to Equation (\ref{eq:phi_lin_1})). Under this formulation it is easy to see that the Rescale rule first computes the exact SHAP value for $h$ and then propagates it back linearly. In other words, the non-linear and linear functions are treated as separate functions. Passing back nonlinear attributions linearly is clearly an approximation, but confers two benefits: 1.) fast computation on the order of a backward pass and 2.) a guarantee of local accuracy.
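A minimal sketch of the Rescale rule for the configuration of Figure~\ref{fig:understanding}b (a single linear layer followed by a nonlinearity $g$) is given below; the names are illustrative only, and the code assumes $f_h \neq b_h$.
\begin{verbatim}
def rescale_attributions(w, f_x, b_x, g):
    """Rescale rule for y = g(h) with h = sum_i w_i * x_i (sketch)."""
    f_h = sum(wi * fi for wi, fi in zip(w, f_x))
    b_h = sum(wi * bi for wi, bi in zip(w, b_x))
    phi_h = g(f_h) - g(b_h)          # exact SHAP value of the nonlinearity
    # propagate phi(h) back linearly, proportionally to w_i * (f_i - b_i)
    return [phi_h / (f_h - b_h) * wi * (fi - bi)
            for wi, fi, bi in zip(w, f_x, b_x)]
\end{verbatim}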
Next, we describe how the \textbf{RevealCancel rule} (originally formulated to bring DeepLIFT closer to SHAP values) connects to SHAP values in the context of Figure \ref{fig:understanding}c. RevealCancel partitions $x_i$ into positive and negative components based on if $w_i(f_{x_i}-b_{x_i})<t$ (where $t{=}0$), in essence forming nodes $h_+$ and $h_-$. This rule computes the exact SHAP attributions for $h_+$ and $h_-$ and then propagates the resultant SHAP values linearly. Specifically:
\begin{align*}
\phi(h_+)&=\tfrac{1}{2}\left(g(f_{h_+}{+}f_{h_-})-g(b_{h_+}{+}f_{h_-})\right)+\nonumber\\&\tfrac{1}{2}\left(g(f_{h_+}{+}b_{h_-})-g(b_{h_+}{+}b_{h_-})\right)\\
\phi(h_-)&=\tfrac{1}{2}\left(g(f_{h_+}{+}f_{h_-})-g(f_{h_+}{+}b_{h_-})\right)+\nonumber\\&\tfrac{1}{2}\left(g(b_{h_+}{+}f_{h_-})-g(b_{h_+}{+}b_{h_-})\right)\\
\phi(x_i)&=
\begin{cases}
\frac{\phi_{h_+}}{f_{h_+}-b_{h_+}}w_i(f_{x_i}-b_{x_i}), & \text{if}\ w_i(f_{x_i}-b_{x_i})>t \\
\frac{\phi_{h_-}}{f_{h_-}-b_{h_-}}w_i(f_{x_i}-b_{x_i}), & \text{otherwise}\\
\end{cases}
\end{align*}
Under this formulation, we can see that, in contrast to the Rescale rule, which explains a linearity followed by a nonlinearity by exactly explaining the nonlinearity and then backpropagating, the RevealCancel rule exactly explains the nonlinearity together with a partition of the inputs to the linearity as a single function prior to backpropagating. The RevealCancel rule incurs a higher computational cost in order to get an estimate of $\phi(x_i)$ that is ideally closer to the SHAP values.
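The same toy configuration can be used to sketch the RevealCancel rule, following the formulas above; setting the threshold \texttt{t} to the mean of the deltas instead of zero gives the RevealCancel$^{\text{Mean}}$ variant of Section~\ref{sec:experiments:results:rescale_revealcancel}. The names are illustrative only.
\begin{verbatim}
import numpy as np

def revealcancel_attributions(w, f_x, b_x, g, t=0.0):
    """RevealCancel rule for y = g(sum_i w_i * x_i) (sketch)."""
    delta = w * (f_x - b_x)
    pos = delta > t
    f_hp, b_hp = np.sum(w[pos] * f_x[pos]), np.sum(w[pos] * b_x[pos])
    f_hm, b_hm = np.sum(w[~pos] * f_x[~pos]), np.sum(w[~pos] * b_x[~pos])
    phi_hp = 0.5 * ((g(f_hp + f_hm) - g(b_hp + f_hm))
                    + (g(f_hp + b_hm) - g(b_hp + b_hm)))
    phi_hm = 0.5 * ((g(f_hp + f_hm) - g(f_hp + b_hm))
                    + (g(b_hp + f_hm) - g(b_hp + b_hm)))
    phi = np.zeros_like(delta, dtype=float)
    if pos.any():                 # propagate group SHAP values linearly
        phi[pos] = phi_hp / (f_hp - b_hp) * delta[pos]
    if (~pos).any():
        phi[~pos] = phi_hm / (f_hm - b_hm) * delta[~pos]
    return phi
\end{verbatim}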
This reframing naturally motivates explanations for \textbf{stacks of mixed model types}. In particular, for Figure \ref{fig:understanding}d, we can take advantage of fast, exact methods for obtaining SHAP values for tree models to obtain $\phi(h_j^2)$ using Independent Tree SHAP \cite{treeshap}. Then, we can propagate these attributions to get $\phi(x_i)$ using either the Rescale or RevealCancel rule. This argument extends to explaining losses rather than output margins as well.
Although we consider specific examples here, the linear propagation described above will generalize to arbitrary networks if SHAP values can be computed or approximated for individual components.
\subsection{SHAP values with a background distribution}
\label{sec:theory_multref}
Note that many methods (Integrated Gradients, Occlusion) recommend the utilization of a single background/reference sample. In fact, DeepSHAP as previously described in \cite{lundberg2017unified} created attributions with respect to a single reference equal to the expected value of the inputs. However, in order to obtain SHAP values for a given background distribution, we prove that the correct approach is as follows: obtain SHAP values for each baseline in your background distribution and average over the resultant attributions. Although similar methodologies have been used heuristically \cite{deeplift,erion2019learning}, we provide a theoretical justification in Theorem 1 in the context of SHAP values.
\begin{theorem}
\label{thm}
The average over single reference SHAP values approaches the true SHAP values for a given distribution.
\end{theorem}
\begin{proof}
Define $D$ to be the data distribution, $N$ to be the set of all features, and $f$ to be the model being explained. Additionally, define $\mathcal{X}(x,x',S)$ to return a sample where the features in $S$ are taken from $x$ and the remaining features from $x'$. Define $C$ to be all combinations of the set $N \setminus \{i\}$ and $P$ to be all permutations of $N \setminus \{i\}$. Starting with the definition of SHAP values for a single feature: $\phi_i(x)$
\begin{align*}
&= \sum_{S\in C } W(|S|,|N|)(\mathbb{E}_{D}[f(X)|x_{S\cup \{i\}}] {-} \mathbb{E}_{D}[f(X)|x_{S}])\\
&=\frac{1}{|P|}\sum_{S\subseteq P} \mathbb{E}_\mathcal{D}[f(x)|\text{do}(x_{S \cup \{i\}})] {-} \mathbb{E}_\mathcal{D}[f(x)|\text{do}(x_{S})]\\
&= \frac{1}{|P|}\sum_{S\subseteq P}\frac{1}{|D|}\sum_{x'\in D} f(\mathcal{X}(x,x',S\cup \{i\})) {-} f(\mathcal{X}(x,x',S))
\\
&= \frac{1}{|D|}\sum_{x'\in D} \underbrace{\frac{1}{|P|}\sum_{S\subseteq P} f(\mathcal{X}(x,x',S\cup \{i\})) {-} f(\mathcal{X}(x,x',S))}_\text{single reference SHAP value}
\end{align*}
where the second step depends on an interventional conditional expectation \cite{janzing2019feature}, which is very close to Random Baseline Shapley in \cite{sundararajan2019many}.
\end{proof}
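In practice, Theorem~\ref{thm} is applied by computing single-reference attributions for each background sample and averaging them; a minimal sketch, with \texttt{single\_ref\_shap} an assumed helper returning the attribution vector of the model at \texttt{x} explained against one baseline, is:
\begin{verbatim}
import numpy as np

def shap_with_background(single_ref_shap, x, background):
    """Average single-reference SHAP values over a background set."""
    return np.mean([single_ref_shap(x, x_ref) for x_ref in background],
                   axis=0)
\end{verbatim}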
\section{Experiments}
\label{sec:experiments}
\subsection{Background distributions avoid bias}
\label{sec:experiments:results:multiple_references}
\begin{figure}[!ht]
\centering
\vspace{-.2cm}
\includegraphics[width=.8\linewidth]{fig/cifar_references.pdf}
\vspace{-.2cm}
\caption{\textit{Using a single baseline leads to bias in explanations.}
}
\label{fig:reference}
\end{figure}
\noindent
In this section, we utilize the popular CIFAR10 dataset \cite{krizhevsky2009learning} to demonstrate that single references lead to bias in explanations. We train a CNN that achieves 75.56\% test accuracy and evaluate it using either a zero baseline as in DeepLIFT or with a random set of 1000 baselines as in DeepSHAP.
In Figure \ref{fig:reference}, we can see that for these images drawn from the CIFAR10 training set, DeepLIFT has a clear bias that results in low attributions for darker regions of the image. For DeepSHAP, having multiple references drawn from a background distribution solves this problem and we see attributions in sensible dark regions of the image.
\subsection{Explaining mortality prediction}
\noindent
In this section, we validate DeepSHAP's explanations for an MLP with 82.56\% test accuracy predicting 15 year mortality. The dataset has 79 features for 14,407 individuals released by \cite{treeshap} based on NHANES I Epidemiologic Followup Study \cite{cox1997plan}.
\begin{figure}[!ht]
\centering
\vspace{-.3cm}
\includegraphics[width=\linewidth]{fig/mortality_nhanes_summary_plot_2.pdf}
\vspace{-.5cm}
\caption{\textit{Summary plot of DeepSHAP attribution values.} Each point is the local feature attribution value, colored by feature value. For brevity, we only show the top 6 features.}
\label{fig:mortality_nhanes}
\end{figure}
In Figure \ref{fig:mortality_nhanes}, we plot a summary of DeepSHAP (with 1000 random background samples) attributions for all NHANES training samples (n=$8023$) and notice a few trends. First, Age is predictably the most important and old age contributes to a positive mortality prediction (positive SHAP values). Second, the Sex feature validates a well-known difference in mortality \cite{gjoncca1999male}. Finally, the trends linking high systolic BP, low serum albumin, high sedimentation rate, and high hematocrit to mortality have been independently discovered \cite{port2000systolic,goldwasser1997serumalbumin,paul2012hematocrit,go2016sedimentation}.
\begin{figure}[!ht]
\centering
\includegraphics[width=\linewidth]{fig/Individual_different_background.pdf}
\vspace{-.5cm}
\caption{\textit{Explaining an individual's mortality prediction for different backgrounds distributions.}}
\label{fig:ind_diff_backgrounds}
\end{figure}
Next, we show the benefits of being able to specify a background distribution. In Figure \ref{fig:ind_diff_backgrounds}a, we see that explaining an individual's mortality prediction with respect to a general population emphasizes that the individual's age and gender are driving a high mortality prediction. However, in practice doctors are unlikely to compare a 67-year old male to a general population that includes much younger individuals. In Figure \ref{fig:ind_diff_backgrounds}b, being able to specify a background distribution allows us to compare our individual against a more relevant distribution of males over 60. In this case, gender and age are naturally no longer important, and the individual actually may not have cause for concern.
\subsection{Interpreting a stack of mixed model types}
\label{sec:experiments:results:mixed_model_types}
\begin{figure}[!ht]
\centering
\vspace{-.25cm}
\includegraphics[width=\linewidth]{fig/interpretation3.pdf}
\caption{\textit{Ablation test for explaining an LSTM feature extractor fed into an XGB model.} All methods used background of 20 samples obtained via kmeans. [a.] Convergence of methods for a single explanation. [b.] Model performance versus \# features kept for DeepSHAP (rescale), IME Explainer (4000 samples), KernelSHAP (2000 samples) and a baseline (Random) (AUC in the legend).
}
\vspace{-.2cm}
\label{fig:ablation}
\end{figure}
\noindent
Stacks, and more generally ensembles, of models are increasingly popular for performant predictions \cite{bao2009stacking,gunecs2017stacked,zhai2018development}. In this section, our aim is to evaluate the efficacy of DeepSHAP for a neural network feature extractor fed into a tree model. For this experiment, we use the Rescale rule for simplicity and Independent TreeSHAP to explain the tree model \cite{treeshap}. The dataset is a simulated one called Corrgroups60. Features $X\in\mathbb{R}^{1000\times60}$ have tight correlation between groups of features ($x_i$ is feature $i$), where $\rho_{x_i,x_i}{=}1$, $\rho_{x_i,x_{i+1}}{=}\rho_{x_i,x_{i+2}}{=}\rho_{x_{i+1},x_{i+2}}{=}.99$ if $(i \bmod 3) {=} 0$, and $\rho_{x_i,x_j}{=}0$ otherwise. The label $y\in \mathbb{R}^{n}$ is generated linearly as $y{=}X\beta {+} \epsilon$ where $\epsilon {\sim} \mathcal{N}_{n}(\mu{=}0,\sigma^2{=}10^{-4})$ and $\beta_i{=}1$ if $(i \bmod 3) {=} 0$ and $\beta_i{=}0$ otherwise.
We evaluate DeepSHAP with an ablation metric called \textit{keep absolute (mask)} \cite{treeshap}. The metric works in the following manner: 1) obtain the feature attributions for all test samples; 2) mask all features (by mean imputation); 3) introduce one feature at a time (unmask) from largest absolute attribution value to smallest for each sample and measure $R^2$. The $R^2$ should initially increase rapidly, because we introduce the ``most important'' features first.
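A sketch of this ablation procedure is given below; \texttt{model} is assumed to map a feature matrix to predictions and \texttt{score} to return $R^2$ against the held-out labels (both are placeholders, not part of any released code).
\begin{verbatim}
import numpy as np

def keep_absolute_mask_curve(model, score, X, attributions):
    """'Keep absolute (mask)': unmask features by decreasing |attribution|."""
    X_masked = np.tile(X.mean(axis=0), (X.shape[0], 1))   # mask all features
    order = np.argsort(-np.abs(attributions), axis=1)     # per-sample order
    rows = np.arange(X.shape[0])
    scores = []
    for k in range(X.shape[1]):          # unmask one feature at a time
        cols = order[:, k]
        X_masked[rows, cols] = X[rows, cols]
        scores.append(score(model(X_masked)))
    return scores
\end{verbatim}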
We compare against two sampling-based methods (a natural alternative for explaining mixed model stacks) that provide SHAP values in expectation: KernelSHAP and IME explainer. In Figure \ref{fig:ablation}b, DeepSHAP (rescale) has no variability and requires a fixed number of model evaluations. IME Explainer and KernelSHAP benefit from having more samples (and therefore more model evaluations). For the final comparison, we check the variability of the tenth largest attribution (absolute value) of the sampling-based methods to determine ``convergence'' across different numbers of samples. Then, we use the number of samples at the point of ``convergence'' for the next figure.
In Figure \ref{fig:ablation}c, we can see that DeepSHAP has a slightly higher performance than model agnostic methods. Promisingly, all methods demonstrate initial steepness in their performance; this indicates that the most important features had higher attribution values. We hypothesize that KernelSHAP and IME Explainer's lower performance is due in part to noise in their estimates. This highlights an important point: model agnostic methods often have sampling variability that makes determining convergence difficult. For a fixed background distribution, DeepSHAP does not suffer from this variability and generally requires fewer model evaluations.
\subsection{Improving the RevealCancel rule}
\label{sec:experiments:results:rescale_revealcancel}
\begin{figure}[!ht]
\centering
\vspace{-.3cm}
\includegraphics[width=\linewidth]{fig/revealcancel.pdf}
\caption{\textit{Comparison of new RevealCancel$^\text{Mean}$ rule for estimating SHAP values on a toy example.} The axes correspond to mean absolute difference from the SHAP values (computed exactly). Green means RevealCancel$^\text{Mean}$ wins and red means it loses.}
\label{fig:revealcancel}
\end{figure}
\noindent
DeepLIFT's RevealCancel rule's connection to the SHAP values is touched upon in \cite{deeplift}. Our SHAP value framework explicitly defines this connection. In this section, we propose a simple improvement to the RevealCancel rule. In DeepLIFT's RevealCancel rule the threshold $t$ is set to $0$ (for splitting $h_-$ and $h_+$). Our proposed rule RevealCancel$^\text{Mean}$ sets the threshold to the mean value of $w_i(f_{x_i}{-}b_{x_i})$ across $i$. Intuitively, splitting by the mean better separates $x_i$ nodes, resulting in a better approximation than splitting by zero.
We experimentally validate RevealCancel$^\text{Mean}$ in Figure \ref{fig:revealcancel}, explaining a simple function: $\text{ReLU}(x_1+x_2+x_3+x_4+100)$. We fix the background to zero: $b_{x_i}{=}0$ and draw 100 foreground samples from a discrete uniform distribution: $f_{x_i}{\sim} U\{-1000,1000\}$.
In Figure \ref{fig:revealcancel}a, we show that RevealCancel$^\text{Mean}$ offers a large improvement for approximating SHAP values over the Rescale rule and a modest one over the original RevealCancel rule (at no additional asymptotic computational cost).
\section{Conclusion}
In this paper, we improve the original DeepSHAP formulation \cite{lundberg2017unified} in several ways: we 1.) provide a new theoretically justified way to provide attributions with a background distribution 2.) extend DeepSHAP to explain stacks of mixed model types 3.) present improvements of the RevealCancel rule.
Future work includes more quantitative validation on different data sets and comparison to more interpretability methods. In addition, we primarily used Rescale rule for many of these evaluations, but more empirical evaluations of RevealCancel are also important.
\section{Introduction}
In this paper we extend the work of \cite{Croci2018} to the quasi and multilevel quasi Monte Carlo case (QMC and MLQMC respectively). We consider the solution of random elliptic partial differential equations (PDEs) in which Mat\'ern fields, sampled via the stochastic PDE (SPDE) approach \cite{Lindgren2011}, appear as coefficients. For instance, a typical problem is: find $\E[P]$, where $P(\omega)=\mathcal{P}(p)$ and $\mathcal{P}$ is a Fr\'echet differentiable functional of the function $p$ that satisfies,
\begin{align}
\label{eq:diffusion_eqn_for_QMC_conv_general}
\begin{array}{rlc}
-\nabla\cdot(F(u(\bm{x},\omega))\nabla p(\bm{x},\omega)) = f(\bm{x}), & \bm{x}\in G\subset\R^d,& \omega\in\Omega,
\end{array}
\end{align}
with suitable boundary conditions. Here we take the function $f$ and the domain $G$ to be suitably smooth and $F \in C^0(\R)$ to be a positive locally Lipschitz function. In this work, we assume that the coefficient $u(\bm{x},\omega)$ is a zero-mean Mat\'ern field approximately sampled by solving the (domain-truncated) Whittle SPDE \cite{Whittle1954, Lindgren2011}:
\begin{align}
\label{eq:white_noise_eqn}
\left(\mathcal{I} - \kappa^{-2}\Delta\right)^{k}u(\bm{x},\omega) = \eta \W,\quad \bm{x}\in D\subset\R^d,\quad \omega\in\Omega,\quad \nu = 2k - d/2 > 0,
\end{align}
where $\W$ is spatial Gaussian white noise in $\R^d$, and $k>d/4$. Here $d\leq 3$ and the equality has to hold almost surely and be interpreted in the sense of distributions. The constant $\eta>0$ is a scaling factor that depends on $\sigma$, $\lambda$ and $\nu$, cf.~\cite{Croci2018}.
In what follows we assume that $G\subset\joinrel\subset D \subset\joinrel\subset\R^d$, where by $G\subset\joinrel\subset D$ we indicate that the closure of $G$ is a compact subset of $D$, and we prescribe homogeneous Dirichlet boundary conditions on $\partial D$. If the distance between $\partial D$ and $\partial G$ is large enough, then the error introduced by truncating $\R^d$ to $D$ is negligible \cite{potsepaev2010application,Khristenko2018}.
A wide range of Gaussian field sampling methods are available in the literature. The simplest of them all involves a Cholesky factorization of the covariance matrix of the Gaussian vector $\bm{u}(\omega)=[u(\bm{x}_1,\omega),\dots,u(\bm{x}_m,\omega)]^T$ containing the values of the field $u(\bm{x},\omega)$ at $m\in \N^+$ discrete locations $\bm{x}_i\in D$. Indicating with $C\in\R^{m\times m}$ the typically dense, positive-definite covariance matrix of $\bm{u}$, a sample of $\bm{u}$ can be obtained by first computing a Cholesky factorization $C=HH^T$ and then setting $\bm{u}=H\bm{z}$ for a given sample of a standard Gaussian vector $\bm{z}\sim\mathcal{N}(0,I)$, $\bm{z}\in\R^m$. The total cost of this sampling strategy is a $O(m^3)$ cost for the factorization and a $O(m^2)$ cost per sample.
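As an illustration, the following Python sketch draws one sample of a 1D Mat\'ern field ($\nu=3/2$, $\sigma=1$, with $\kappa=\sqrt{8\nu}/\lambda$ as in the Mat\'ern covariance below) via a dense Cholesky factorization; the $O(m^3)$ factorization and $O(m^2)$ cost per sample are exactly what the more efficient methods discussed next avoid. The parameter values are illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m, lam, nu = 200, 0.2, 1.5
x = np.linspace(0.0, 1.0, m)                       # sample locations in D = [0, 1]
r = np.abs(x[:, None] - x[None, :])
kappa = np.sqrt(8.0 * nu) / lam
C = (1.0 + kappa * r) * np.exp(-kappa * r)         # Matern covariance, nu = 3/2, sigma = 1
H = np.linalg.cholesky(C + 1e-10 * np.eye(m))      # O(m^3); jitter for numerical stability
u = H @ rng.standard_normal(m)                     # one field sample, O(m^2) per draw
\end{verbatim}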
More efficient methods are available. The most common are: the Karhunen-Lo\`eve expansion of the random field (cf.~section 11.1 in \cite{sullivan2015introduction}); the hierarchical matrix approximation of the covariance matrix \cite{FeischlKuoSloan2018,Khoromskij2009,Hackbush2015HMatrices,DolzHarbrechtSchwab2017}; the circulant embedding method \cite{WoodChan1994,dietrich1997fast,GrahamKuoEtAl2018,BachmayrGraham2019}; the SPDE approach \cite{Whittle1954,Lindgren2011,Croci2018}. Each of these methods has its advantages and disadvantages and is more or less efficient according to the covariance structure of the field. We do not describe these methods further here, but we refer to section 2.4 in \cite{CrociPHD} for a more detailed overview and comparison. In this paper, we only consider the SPDE approach, which consists of sampling a Mat\'ern field by solving equation \eqref{eq:white_noise_eqn} for a given realization of white noise. Equation \eqref{eq:white_noise_eqn} can be solved after discretization in $O(m)$ cost complexity with an optimal solver. This yields an overall $O(m)$ sampling complexity provided that the white noise term can also be sampled in linear cost. In \cite{Croci2018}, the authors showed how white noise realizations can be sampled in $O(m)$ complexity within a finite element (FEM) and non-nested MLMC framework.
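The following sketch illustrates the SPDE route in 1D: it assembles a P1 finite element discretization of \eqref{eq:white_noise_eqn} with $k=1$ on a uniform grid and solves it against a white noise load vector, whose covariance is the FEM mass matrix (cf.~definition \ref{def:white_noise} below). It is illustrative only: the scaling $\eta$ is omitted, the matrices are assembled densely, and the white noise vector is sampled with a Cholesky factor rather than the linear-cost method of \cite{Croci2018}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m, lam = 511, 0.1                        # interior P1 nodes on a uniform grid of [0, 1]
h = 1.0 / (m + 1)
kappa = np.sqrt(8.0 * 1.5) / lam         # nu = 2k - d/2 = 3/2 for k = 1, d = 1

# P1 stiffness and mass matrices (tridiagonal; assembled densely only for brevity).
K = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h
M = h * (4.0 * np.eye(m) + np.eye(m, k=1) + np.eye(m, k=-1)) / 6.0
A = M + kappa**(-2) * K

# White noise load vector b_i = <W, phi_i>, a Gaussian vector with covariance M.
b = np.linalg.cholesky(M) @ rng.standard_normal(m)
u = np.linalg.solve(A, b)                # nodal values of one (unscaled) field sample
\end{verbatim}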
The new (ML)QMC method we present in this paper is based on the efficient sampling of the white noise term in \eqref{eq:white_noise_eqn} with a hybrid quasi/pseudo-random sequence. For QMC applications it is extremely important that the QMC integrand has low effective dimensionality and that its variables are ordered by decaying importance \cite{CaflishMorokoffOwen}. For this reason, a common approach in the existing literature on MLQMC methods for elliptic PDEs is the expansion of the random field coefficients as an infinite series of basis functions of $L^2(D)$ that naturally exposes the leading order dimensions in the integrands \cite{KuoScheichlSchwabEtAl2017,KuoSchwabSloan2015,GrahamKuoNichols2013,DickKuoSloan2013}. If the random field is smooth, the coefficients in the (e.g.~Karhunen-Lo\`eve) expansion quickly decay and a truncated expansion provides both the variable ordering and the low effective dimensionality required by QMC methods.
When using the SPDE approach and equation \eqref{eq:white_noise_eqn}, the only source of randomness is white noise and we therefore must expand $\W$ to achieve the required variable ordering. In this case, the Karhunen-Lo\`eve expansion does not provide a feasible route since white noise is not smooth and the eigenvalues in the expansion do not decay. A good alternative in this case is offered by a wavelet expansion of $\W$.
Wavelets in general form a multi-resolution orthogonal basis of $L^2(D)$ and are commonly employed within QMC algorithms as their hierarchical structure exposes the leading order dimensions in the integrands while allowing fast $O(m)$ or $O(m\log m)$ complexity operations (depending on the wavelet basis \cite{Daubechies1988}). A classical example of the efficacy of wavelet expansions of white noise (in time) within a QMC method is offered by the L\'evy-Ciesielski (or Brownian bridge) construction of Brownian motion. Ubiquitous in mathematical finance, it is commonly used to solve stochastic differential equations with QMC \cite{glasserman2013,GilesWaterhouse2009}. Inspired by this technique, we choose to expand white noise into a Haar wavelet expansion\footnote{Note that the hat functions used in the L\'evy-Ciesielski construction are piecewise linear wavelets, their derivatives are Haar wavelets and white noise in time is the derivative of Brownian motion.}, although the generalization of our approach to higher degree wavelets should be straightforward.
In an MLQMC framework, wavelets are used by Kuo et al.~to sample random fields efficiently, yielding a cost per sample of $O(m \log m)$ using nested grids \cite{KuoSchwabSloan2015}. In \cite{HerrmannSchwab2017}, Herrmann and Schwab use a truncated wavelet expansion of white noise to sample Gaussian fields with the SPDE approach within a nested MLQMC hierarchy. Their work is possibly the closest to ours as they also work with the SPDE approach to Mat\'ern field sampling and use a wavelet expansion of white noise \cite{HerrmannSchwab2017}.
Generally speaking, all the randomized MLQMC methods for elliptic PDEs presented in the above papers are strongly theory-oriented. They use randomly shifted lattice rules and derive MLQMC complexity bounds using a pure QMC approach, truncated expansions and nested hierarchies on simple geometries. Our work is different in spirit and strategy. Firstly, our focus is practice-oriented and we do not derive any MLQMC complexity estimates, but we design our method to work in the general case in which the multilevel hierarchy is non-nested and the domain geometries are non-trivial. Secondly, we handle the expansion differently: we do not just truncate it, but we work with the whole infinite expansion of white noise by adding a correction term to the truncation. The truncation term is finite-dimensional and we sample it with a randomized low-discrepancy sequence; the correction term is infinite-dimensional and a QMC approach is not feasible. However, the covariance of the correction is known and we can sample it using pseudo-random numbers with an extension of the technique presented in \cite{Croci2018}. We can still sample white noise (and consequently the Mat\'ern field) in linear cost complexity (or log-linear, according to the Haar wavelet type) as in \cite{Croci2018}.
We therefore adopt a hybrid MC/QMC approach. The advantage of doing so is that we can sample white noise exactly, independently of the truncation level and the wavelet degree considered (e.g.~while we use Haar wavelets, Herrmann and Schwab in \cite{HerrmannSchwab2017} consider higher degree wavelets), without introducing any additional bias into the MLQMC estimate. In contrast, in the aforementioned MLQMC algorithms the expansion must be truncated after enough terms to make the truncation error negligible. Naturally, this advantage comes at a price: since we are using pseudo-random numbers as well, the asymptotic convergence rate of our method with respect to the number of samples $N$ is still the standard MC rate of $O(N^{-1/2})$. Nevertheless, we show that large computational gains can be recovered in practice in a pre-asymptotic QMC-like regime in which the convergence rate is $O(N^{-\chi})$, $\chi \geq 1/2$, and we derive a partial convergence result (cf.~supplementary material) that explains this behaviour in the QMC case.
Wavelets are used in both \cite{KuoSchwabSloan2015} and \cite{HerrmannSchwab2017}, but no comment is made about how to work with the wavelet basis in practice if this is not nested within the FEM approximation subspace. This happens whenever the mesh on which we solve the Whittle SPDE \eqref{eq:white_noise_eqn} is non-uniform. When working with complex geometries and graded meshes it is desirable for the sampled Mat\'ern field to have the same accuracy as the solution of the PDE of interest (e.g.~\eqref{eq:diffusion_eqn_for_QMC_conv_general}) and the Mat\'ern field should not therefore be sampled on a uniform structured mesh. For this purpose, we adopt the embedded mesh technique by Osborn et al.~\cite{Osborn2017} so that in the MLQMC hierarchy each mesh of $G$ is nested within the corresponding mesh of $D$ and we deal with the non-nestedness of the FEM and wavelet spaces via a supermesh construction.
In the independent white noise realization case we construct a two-way supermesh between the graded FEM mesh of interest and a uniform ``wavelet'' mesh and we sample white noise in a consistent way between the FEM and the wavelet subspaces. In the MLQMC coupled realization case, we construct a three-way supermesh between the two non-nested FEM meshes and the ``wavelet'' mesh. The supermesh constructions can be simplified when the meshes involved are nested and if all meshes are nested no supermesh is required. In any case, the number of supermesh cells is still linear in the number of cells of the parent meshes under mild assumptions \cite{quasi-uniform-supermeshing}. We remark that the same supermeshing strategy can be employed to sample the truncated white noise expansion used in \cite{HerrmannSchwab2017} in the general non-uniform case as our technique easily generalises to higher degree wavelets.
This paper is structured as follows: in section \ref{sec:background} we summarize the mathematical background needed to understand the rest of the paper. In section \ref{sec:Haar_wavelet_expansion_of_white_noise} we introduce the Haar wavelet expansion of white noise and its splitting into a truncated term and a correction term. In section \ref{sec:QMC_sample_white_noise} we introduce our sampling technique for independent white noise realizations. We extend the white noise sampling method to MLQMC in section \ref{sec:coupled_realizations_MLQMC}, where we show how coupled white noise realizations can be sampled efficiently. The algorithms are supported by numerical results, which we present and discuss in section \ref{sec:MLQMC_num_res}. We conclude the paper with a brief summary of the methods and results presented in section \ref{sec:MLQMC_conclusions}.
\section{Notation and background}
\label{sec:background}
\subsection{Notation}
In this paper we denote with $L^2(D)$ the space of square-integrable functions over $D$ and with $(\cdot, \cdot)$ the standard $L^2(D)$ inner product. We furthermore indicate with $W^{k,q}(D)$ the standard Sobolev space of integrability order $q$ and differentiability $k$, with $H^k(D) \equiv W^{k,2}(D)$ and with $H^1_0(D)$ the space of $H^1(D)$ functions that vanish on $\partial D$ in the sense of traces.
Given a sample space $\Omega$, we indicate with $L^2(\Omega, \R)$ the space of real-valued \emph{random variables} with finite second moment.
For a given Banach space $U$ over $D$, we indicate with $L^2(\Omega, U)$ the space of \emph{random fields} $u(\bm{x}, \omega)$, $\bm{x}\in D$, $\omega\in\Omega$ such that $u(\bm{x}, \cdot)\in L^2(\Omega, \R)$ for almost every $\bm{x}\in D$ and $u(\cdot, \omega)\in U$ almost surely (a.s.). If the $u(\bm{x},\cdot)$ are jointly Gaussian for almost every $\bm{x}\in D$, then the random field is a \emph{Gaussian field} and it is uniquely determined by its mean $\mu(\bm{x})$ and covariance $C(\bm{x},\bm{y})$ functions. Throughout this paper we will consider only zero-mean fields for simplicity.
A Gaussian field is also a \emph{Mat\'ern field} if its covariance is of the Mat\'ern class, i.e.
\begin{align}
\label{eq:Matern}
C(\bm{x},\bm{y}) = \dfrac{\sigma^2}{2^{\nu-1}\Gamma(\nu)}(\kappa r)^\nu \mathcal{K}_\nu(\kappa r),\ \ r=\Vert \bm{x}-\bm{y}\Vert _2,\ \ \kappa = \frac{\sqrt{8\nu}}{\lambda},\ \ \bm{x},\bm{y}\in D,
\end{align}
where $\sigma^2$, $\nu$, $\lambda > 0$ are the variance, smoothness parameter and correlation length of the field respectively, $\Gamma(x)$ is the Euler Gamma function and $\mathcal{K}_\nu$ is the modified Bessel function of the second kind.
In this paper we will adopt the following definition of \emph{generalized random field}, first introduced by It\^{o} \cite{Ito1954} and extended by Inaba and Tapley \cite{Inaba1975}. For a given Banach space $U$, we denote with ${\mathcal{L}(U, L^2(\Omega, \R))}$ the space of generalized random fields that are continuous linear mappings from $U$ to $L^2(\Omega,\R)$. For a given $\xi\in \mathcal{L}(U, L^2(\Omega, \R))$ we indicate the action (or pairing) of $\xi$ onto a function $\phi\in U$ with the notation $\xi(\phi) = \langle \xi, \phi \rangle$.
Possibly the most commonly used generalized random field is Gaussian \textit{white noise}. White noise is defined as follows.
\begin{definition}[White noise, see example 1.2 and lemma 1.10 in \cite{Hida1993}]
\label{def:white_noise}
Let $D\subseteq\R^d$ be an open domain. The white noise $\W\in \mathcal{L}(L^2(D), L^2(\Omega, \R))$ is a generalized stochastic field such that for any collection of $L^2(D)$ functions $\{\phi_i\}$, if we let $b_i = \langle \W, \phi_i \rangle$, then $\{b_i\}$ are joint Gaussian random variables with zero mean and covariance given by $\E[b_ib_j]=(\phi_i,\phi_j)$.
\end{definition}
\subsection{Randomized quasi Monte Carlo}
\paragraph{Quasi Monte Carlo}
Quasi Monte Carlo (QMC) methods retain most of the advantages of standard Monte Carlo (MC) while improving the convergence order with respect to the number of samples. At the heart of QMC for estimating expectations is the reinterpretation/approximation of the expected value as an integral with respect to the uniform distribution over the unit hypercube:
\begin{align}
\label{eq:QMC_intro}
\E[P] = \int\limits_\Omega P(\omega) \text{d}\mathbb{P}(\omega) \approx \int\limits_{[0,1]^s}Y(\bm{x})\text{d}\bm{x},
\end{align}
for some suitable function $Y$. Here $\mathbb{P}$ is the probability measure of a suitable probability space $(\Omega, \mathcal{A}, \mathbb{P})$, with $\mathcal{A}$ as its $\sigma$-algebra.
QMC methods are, in fact, nothing but quadrature rules over the unit hypercube with $N$ points and equal weights, approximating the integral on the right-hand side as
\begin{align}
I = \int\limits_{[0,1]^s}Y(\bm{x})\text{d}\bm{x} \approx \frac{1}{N}\sum\limits_{n=1}^NY(\bm{x}_n) = I_N,
\end{align}
where the $\bm{x}_n\in\R^s$ are, unlike in the standard MC case, not chosen at random, but chosen carefully and in a deterministic way so as to minimise a quantity called the \emph{discrepancy} of the point set. Informally, discrepancy is a measure of how well the point sequence covers the unit hypercube (cf.~figure \ref{fig:QMC_sequences_comparison}) and its importance lies in the fact that the QMC quadrature error decays proportionally to the discrepancy as $N$ increases \cite{Morokoff1995, KuoNuyensPracticalGuide2016, KuoScheichlSchwabEtAl2017}.
While random sequences are proven to have discrepancy of $O((\log\log N/N)^{1/2})$ with probability one \cite{Morokoff1995}, there exist deterministic sequences that achieve discrepancies of $O((\log N)^s/N)$ \cite{Morokoff1995}. These sequences are called \emph{low-discrepancy} point sequences and, if used for QMC integration, yield a faster-than-MC asymptotic rate of $O(N^{-1+\epsilon})$, for any $\epsilon>0$, provided that the integrand $Y$ is smooth enough.
Unlike standard MC, QMC methods are not completely dimension-independent: for high-dimensional problems the $(\log N)^s$ term in the discrepancy might dominate for small sample sizes. If this happens, low-discrepancy sequences cease to cover the whole hypercube well and their discrepancy temporarily falls back to a $O(N^{-1/2})$ rate\footnote{Caflisch et al.~also report that QMC integration is almost never worse than standard MC \cite{CaflishMorokoffOwen}.} as in the random case up until $N$ becomes impractically huge \cite{CaflishMorokoffOwen}. However, this is not always the case. Caflisch et al.~in \cite{CaflishMorokoffOwen} investigate this behaviour and introduce the notion of \emph{low effective dimensionality}: if the QMC integrand can be well approximated by a function that only depends on the first $\bar{s}\ll s$ QMC variables, the $(\log N)^s$ in the discrepancy bound can be replaced with $(\log N)^{\bar{s}}$, for which the transition to a $O(N^{-1})$-like regime will already happen for small sample sizes \cite{CaflishMorokoffOwen}. More recent theoretical results on QMC convergence \cite{KuoSchwabSloan2015,GrahamKuoEtAl2018,KuoScheichlSchwabEtAl2017,HerrmannSchwab2017} adopt a slightly different interpretation of low effective dimensionality and work with the underlying assumption that there is \emph{``some varying degree of importance between the variables''} \cite{KuoNuyensPracticalGuide2016}. This yields dimension-independent error bounds.
Overall, for practical applications of high-dimensional QMC integration it is extremely important to order the integration variables in order of decaying importance and/or reduce the dimensionality of the integrand so that higher-than-MC convergence rates can be achieved. This will be a key aspect in the methods we present in this paper.
\paragraph{Randomized quasi Monte Carlo}
\begin{figure}[h!]
\centering
\includegraphics[width=0.85\textwidth]{./sobol-2.eps}
\vspace{-3pt}
\centering
\caption{\textit{Pseudo-random, not randomized and randomized low-discrepancy sequences in comparison. On the left, a sample of $256$ uniform random points. In the middle, the first $256$ points in a $2$-dimensional Sobol' sequence. On the right, the same points after random digital shifting. The low-discrepancy sequence covers the unit square better than the pseudo-random sequence, even after randomization.}}
\label{fig:QMC_sequences_comparison}
\end{figure}
Although theoretically useful, a bound depending on a discrepancy measure cannot be used in practice as the discrepancy value is extremely difficult to estimate. Furthermore, low-discrepancy sequences are deterministic and we cannot rely on the central limit theorem as for standard MC.
Randomized QMC methods combine MC and QMC ideas and fix these issues by randomising the low-discrepancy sequence used, i.e.~given a fixed deterministic low-discrepancy sequence $\{\bm{x}_n\}_{n=1}^N$, randomized QMC produces a set of $M$ independent randomized sequences $\{\hat{\bm{x}}_{n,m}(\omega)\}_{n=1,m=1}^{n=N,m=M}$ in such a way that the discrepancy properties of the parent sequence are preserved (see figure \ref{fig:QMC_sequences_comparison}). See chapter $6$ of \cite{Lemieux2009} for an overview and \cite{Owen2003} for a comparison of various randomization techniques. The randomized sequences are then combined into the randomized QMC estimator,\vspace{-6pt}
\begin{align}
\hat{I}_{M,N}(\omega) = \frac{1}{M}\sum\limits_{m=1}^M I^m_N(\omega) = \frac{1}{M}\sum\limits_{m=1}^M \left(\frac{1}{N}\sum\limits_{n=1}^NY(\hat{\bm{x}}_{n,m}(\omega))\right).
\end{align}
Since $I^m_N(\omega)$ is now random, provided that $M$ is large enough a confidence interval can be estimated and we retain a practical error measure as in the standard MC case.
In this work we use $M=32$ unless otherwise stated. Assuming fixed $M$, a given mean square error (MSE) tolerance $\varepsilon^2$, a $O(\varepsilon^{-q})$ cost per sample and a QMC convergence order of $O(N^{-1+\epsilon})$ for any $\epsilon>0$, the total cost of randomized QMC is $O(\varepsilon^{-q-1/(1-\epsilon)})$, which for small $\epsilon$ is almost $\varepsilon^{-1}$ times better than standard MC.
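A minimal sketch of such a randomized QMC estimator for a smooth test integrand with known integral is given below. It uses scipy's scrambled Sobol' generator as one possible randomization (scrambling rather than plain digital shifting); the integrand and dimensions are illustrative.
\begin{verbatim}
import numpy as np
from scipy.stats import qmc

def Y(x):                                  # smooth test integrand with exact integral 1
    return np.prod(1.0 + 0.5 * (x - 0.5), axis=-1)

s, N, M = 8, 2**10, 32
estimates = [Y(qmc.Sobol(d=s, scramble=True, seed=m).random(N)).mean()
             for m in range(M)]
I_hat = np.mean(estimates)
stderr = np.std(estimates, ddof=1) / np.sqrt(M)   # error estimate from the M replicates
print(f"I_MN = {I_hat:.6f} +/- {stderr:.1e} (exact value 1)")
\end{verbatim}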
\subsection{Multilevel Monte Carlo methods}
The multilevel Monte Carlo method was first introduced by Heinrich in \cite{Heinrich2001} for parametric integration and popularized by Giles for stochastic differential equations in \cite{giles2008}. Multilevel quasi Monte Carlo, originally presented in \cite{GilesWaterhouse2009}, combines QMC and MLMC with the aim of retaining the advantages of both. Assume that it is possible to compute approximations $P_\ell(\omega)$ of $P(\omega)$ for $\ell=1,\dots,L$ of increasing accuracy and computational cost, and that the approximation on the finest level, $P_L$, is sufficiently accurate. Multilevel methods estimate $\E[P_L]$ through the telescopic sum,
\vspace{-6pt}
\begin{align}
\label{eq:telescoping}
\E[P] \approx \E[P_L] = \sum\limits_{\ell = 1}^L\E[P_\ell - P_{\ell-1}],
\end{align}
\vspace{-9pt}\\
where $P_{0} \equiv 0$. For example, if samples of $P$ are obtained by solving \eqref{eq:diffusion_eqn_for_QMC_conv_general} with the FEM, the levels of accuracy can be defined by using a hierarchy of meshes ($h$-refinement) or by increasing the polynomial degree of the finite element bases used ($p$-refinement).
The MLMC and MLQMC estimators are then obtained from \eqref{eq:telescoping} by approximating each term in the sum with standard MC or randomized QMC respectively. For MLMC we have,
\begin{align}
\E[P_\ell - P_{\ell-1}] \approx \frac{1}{N_\ell}\sum\limits_{n=1}^{N_\ell}(P_\ell - P_{\ell - 1})(\omega^n_\ell),
\end{align}
in which each $P_\ell - P_{\ell-1}$ sample is coupled in the sense that the samples of $P_\ell$ and $P_{\ell-1}$ share the same event $\omega_\ell^n$. Ensuring this coupling is respected in practice is essential for any MLMC algorithm since this coupling is the reason behind the increased efficiency of MLMC with respect to standard MC \cite{giles2008}.
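To illustrate the coupling, the toy sketch below estimates $\E[Z^2]=1$ for $Z\sim\mathcal{N}(0,1)$, where the level-$\ell$ approximation rounds $Z$ to a grid of spacing $2^{-\ell}$; on each level the fine and coarse approximations share the same sample of $Z$. The example and the sample allocation are purely illustrative, not the optimal MLMC choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def P(z, l):
    h = 2.0**(-l)
    return (np.round(z / h) * h)**2        # level-l approximation of P = Z^2

L = 6
N = [200000 // 4**l + 100 for l in range(L + 1)]   # fewer samples on finer levels
est = 0.0
for l in range(L + 1):
    z = rng.standard_normal(N[l])          # the same z feeds both P_l and P_{l-1}
    est += P(z, 0).mean() if l == 0 else (P(z, l) - P(z, l - 1)).mean()
print("MLMC estimate:", est, "(exact value 1)")
\end{verbatim}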
Enforcing the same type of coupling is also essential for MLQMC. In the MLQMC case each term in \eqref{eq:telescoping} is approximated with randomized QMC as follows,
\begin{align}
\label{eq:mlqmc_level_estimator}
\E[P_\ell-P_{\ell-1}] = \int\limits_{[0,1]^{s_\ell}}Y_\ell(\bm{x})\text{d}\bm{x}\approx \frac{1}{M}\sum\limits_{m=1}^M \left(\frac{1}{N_\ell}\sum\limits_{n=1}^{N_\ell}Y_\ell(\hat{\bm{x}}_{n,m}^\ell(\omega))\right)=\frac{1}{M}\sum\limits_{m=1}^MI_{N_\ell}^{m,\ell}(\omega),
\end{align}
where the meaning of each variable is the same as in the QMC case. Note that the multilevel coupling is implicit in the fact that $Y_\ell$ now represents the difference $P_\ell - P_{\ell-1}$. We now have a hierarchy of integrands $\{Y_\ell\}_{\ell=1}^L$ and of randomized low-discrepancy sequences $\{\hat{\bm{x}}_{n,m}^\ell\}_{n=1,m=1,\ell=1}^{n=N_\ell,m=M,\ell=L}$ of dimensions $\{s_\ell\}_{\ell=1}^L$. Note that, in the same way as for QMC, MLQMC still requires for good performance either the integration variables on each level to be organized in order of decaying importance or the integrands $Y_\ell$ to have low effective dimensionality.
The theory for MLMC is by now established \cite{giles2008,Cliffe2011,TeckentrupMLMC2013}, yielding formulas for the optimal number of samples $N_\ell$ on each level and for the total MLMC algorithm complexity ($O(\varepsilon^{-2})$ in the best case scenario, see supplementary material \ref{secSM:multilevel_methods}). On the other hand, proving any convergence result for MLQMC is particularly hard, to the extent that convergence proofs are only available for a few specific problems and specific low-discrepancy sequences \cite{KuoScheichlSchwabEtAl2017,HerrmannSchwab2017}. For this reason, setting up an optimal MLQMC hierarchy with the optimal values of the $N_\ell$ is a challenging task. However, in the best possible case where we get a $O(N^{-\chi})$, $1/2 \leq \chi \leq 1$, QMC rate for each term in the telescoping sum, the benefits of MLMC and QMC can accumulate yielding a total MLQMC computational cost of $O(\varepsilon^{-1/\chi})$ for a given MSE tolerance of $\varepsilon^2$ \cite{HerrmannSchwab2017}. In this case MLQMC significantly outperforms all other Monte Carlo methods.
In this paper we employ the original MLQMC algorithm from \cite{GilesWaterhouse2009} as it does not require the convergence rate with respect to $N$ to be known \emph{a priori}. We refer to the supplementary material \ref{secSM:multilevel_methods} for a description of the algorithm.
\subsection{Supermeshes}
We now introduce the concepts of non-nested tessellations/meshes and of a supermesh.
\begin{figure}[h!]
\centering
\includegraphics[trim=0cm 0cm 0cm 0cm, clip=true, width=.8\textwidth]{./supermesh.eps}
\centering
\caption{\textit{An example of a supermesh construction. The first two meshes on the left are the parent meshes and the mesh on the right is one of their supermeshes.}}
\label{fig:supermesh}
\vspace{-12pt}
\end{figure}
Let $T_a$ and $T_b$ be two tessellations of $D$. We say that $T_a$ is \emph{nested} within $T_b$ if $\textnormal{vertices}(T_a)\subseteq \textnormal{vertices}(T_b)$ and if for each element $e\in T_a$ there exists a set of elements $E\subseteq T_b$ such that $e=\bigcup_{\hat{e}_i\in E}\hat{e}_i$. We say that $T_a$ and $T_b$ are \emph{non-nested} if $T_a$ is not nested within $T_b$ and vice-versa.
A crucial ingredient we need to enforce the multilevel (quasi) Monte Carlo coupling across a non-nested mesh hierarchy is a supermesh construction. Supermeshes are commonly used e.g.~within adaptive discretizations or geometric multigrid algorithms, to transfer discrete fields between non-nested meshes \cite{Farrell2009Supermesh}. A supermesh is defined as follows.
\begin{definition}[Supermesh, \cite{Farrell2011Supermesh, Farrell2009Supermesh}]
Let $D\subset\joinrel\subset\R^d$ be an open domain and let $\T_a$, $\T_b$ be two tessellations of $D$. A supermesh $S_h$ of $\T_a$ and $\T_b$ is a common refinement of $\T_a$ and $\T_b$. More specifically, $S_h$ is a triangulation of $D$ such that:
\begin{enumerate}[leftmargin=1cm]
\item $\textnormal{vertices}(\T_a) \cup \textnormal{vertices}(\T_b) \subseteq \textnormal{vertices}(S_h)$,
\item $\textnormal{measure}(e_S \cap e) \in \{0, \textnormal{measure}(e_S)\}$ for all cells $e_S\in S_h$, $e\in (\T_a \cup \T_b)$.
\end{enumerate}
\end{definition}
The first condition means that every parent mesh vertex must also be a vertex of the supermesh, while the second states that every supermesh cell is completely contained within exactly one cell of either parent mesh \cite{Farrell2009Supermesh}. The supermesh construction is not unique \cite{Farrell2009Supermesh}. We show an example of supermesh construction in figure \ref{fig:supermesh}. Efficient algorithms for computing the supermesh are available \cite{libsupermesh-tech-report}.
It was shown in \cite{quasi-uniform-supermeshing} that if the parent tessellations $T_a$ and $T_b$ are quasi-uniform (cf.~definition 4.4.13 in \cite{brenner2007mathematical}), then the number of cells of a supermesh constructed via a local supermeshing algorithm (cf.~\cite{PatrickPHD}) is linear in the number of cells of $T_a$ and $T_b$.
\section{Haar wavelet expansion of spatial white noise}
\label{sec:Haar_wavelet_expansion_of_white_noise}
For good QMC convergence we need to order the dimensions of the QMC integrand in order of decaying importance so that the largest error components are on the leading dimensions \cite{glasserman2013,DickKuoSloan2013}. In what follows we expand white noise into a Haar wavelet series so that the hierarchical structure of Haar wavelets can naturally provide the variable ordering needed for QMC integration.
We start by introducing the Haar wavelet basis. Let $\ind_A(x)$ be the indicator function of a set $A$ and let $\Psi(x)$ for $x\in\R$ be the Haar mother wavelet,
\begin{align}
\Psi(x) = \ind_{[0,1/2)}(x) - \ind_{[1/2,1)}(x) = \begin{dcases}
1,&\quad 0 \leq x < 1/2,\\
-1,&\quad 1/2 \leq x < 1,\\
0,&\quad\text{otherwise.}
\end{dcases}
\end{align}
Let $\bar{\N} = \{-1\} \cup \N$ and let $x^+ = \max(x,0)$. The Haar wavelets $H_{l,n}$ for $l\in\bar{\N}$, $n=0,\dots,2^{l^+}-1$ can be expressed in terms of the mother wavelet through shifting and rescaling as follows.
\begin{align}
\begin{dcases}
H_{-1,0}(x) = \ind_{[0,1)}(x),\quad &l = -1,\ n = 0\\
H_{l,n}(x) = 2^{l/2}\Psi(2^l x - n),\quad &l\in\N,\quad n=0,\dots,2^l - 1.
\end{dcases}
\end{align}
The Haar wavelets have support size $|\supp(H_{l,n})| = 2^{-l^+}$ and form an orthonormal basis of $L^2(0,1)$. The Haar system can be generalized to higher dimensions by taking the tensor product of the 1D Haar basis with itself: for $\bm{l} \in \bar{\N}^d$ and $\bm{n}\in\N^d$ we can define the family of $d$-dimensional Haar wavelets $H_{\bm{l},\bm{n}}(\bm{x})$ for $\bm{x}\in\R^d$ as
\begin{align}
H_{\bm{l},\bm{n}}(\bm{x}) = \bigotimes\limits_{i=1}^d H_{l_i, n_i}(x_i),\quad \text{with}\ n_i \in \{0,\dots,2^{l_i^+}-1\}\ \forall i.
\end{align}
The $d$-dimensional Haar wavelets have support size $|\supp(H_{\bm{l},\bm{n}})|=\prod_{i=1}^{d} 2^{-l_i^+} = 2^{-||\bm{l}^+||_1}$ and they form an orthonormal basis of $L^2((0,1)^d)$. It is also possible to construct an orthonormal basis of $L^2$ for a general boxed (hyper-rectangular) domain by scaling and shifting the components of $\bm{x}$ accordingly.
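As a quick numerical illustration, the sketch below assembles the 2D tensor-product Haar wavelets with $|\bm{l}|\leq 1$ on $[0,1]^2$ and verifies their orthonormality with midpoint quadrature on a fine uniform grid (which is exact for these piecewise constant functions); the truncation level and grid size are illustrative.
\begin{verbatim}
import numpy as np
from itertools import product

def haar_1d(l, n, x):
    if l == -1:
        return np.ones_like(x)
    y = 2.0**l * x - n
    return 2.0**(l / 2.0) * (((0.0 <= y) & (y < 0.5)).astype(float)
                             - ((0.5 <= y) & (y < 1.0)).astype(float))

Lmax, q = 1, 256
x = (np.arange(q) + 0.5) / q                       # midpoint quadrature nodes
X1, X2 = np.meshgrid(x, x, indexing="ij")

basis = []
for l1, l2 in product(range(-1, Lmax + 1), repeat=2):
    for n1, n2 in product(range(2**max(l1, 0)), range(2**max(l2, 0))):
        basis.append(haar_1d(l1, n1, X1) * haar_1d(l2, n2, X2))

G = np.array([[(a * b).mean() for b in basis] for a in basis])   # Gram matrix
print("max deviation from identity:", np.abs(G - np.eye(len(basis))).max())
\end{verbatim}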
\begin{remark}[On the non-standard Haar wavelet basis]
\label{rem:compactly_supported_Haar_basis}
The $d$-dimensional wavelets just introduced are sometimes called the \emph{standard} Haar basis, which leads to log-linear complexity operations (rather than just linear) for $d>1$. Even though this is the basis we use in our numerical experiments, the algorithms we will introduce also work for the \emph{non-standard} Haar basis, which supports linear complexity operations in all dimensions \cite{Daubechies1988,Beylkin1992}.
\end{remark}
Let $z_{\bm{l},\bm{n}}(\omega)$ be i.i.d.~standard normal random variables. Furthermore, let $|\bm{l}| = \max_i(l_i)$. We can express white noise over $[0,1]^d$ as a Haar wavelet expansion,
\begin{align}
\label{eq:Haar_expansion_full}
\W = \sum\limits_{|\bm{l}|=-1}^{|\bm{l}|=\infty}\sum\limits_{\bm{n}=\bm{0}}^{2^{\bm{l}^+}-\bm{1}}z_{\bm{l},\bm{n}}(\omega)H_{\bm{l},\bm{n}}(\bm{x}).
\end{align}
The second summation is to be interpreted as the sum over all $\bm{n}$ with components $n_i$ such that $0\leq n_i \leq 2^{l_i^+} - 1$ for all $i$. Let $\mathscr{L}\in\bar{\N}$. We now divide the series into two terms,
\begin{align}
\label{eq:wavelet_QMC_white_noise}
\W = \W_\mathscr{L} + \W_R = \sum\limits_{|\bm{l}|=-1}^{|\bm{l}|=\mathscr{L}}\sum\limits_{\bm{n}=\bm{0}}^{2^{\bm{l}^+}-\bm{1}}z_{\bm{l},\bm{n}}(\omega)H_{\bm{l},\bm{n}}(\bm{x}) + \sum\limits_{|\bm{l}|=\mathscr{L}+1}^{|\bm{l}|=\infty}\sum\limits_{\bm{n}=\bm{0}}^{2^{\bm{l}^+}-\bm{1}}z_{\bm{l},\bm{n}}(\omega)H_{\bm{l},\bm{n}}(\bm{x}).
\end{align}
The idea is then to sample the Gaussian variables in the expression for $\W_\mathscr{L}$ by using a hybrid QMC/MC combination of quasi-random (e.g.~Sobol) and pseudo-random numbers, and to sample $\W_R$ with pseudo-random numbers only by extending the work in \cite{Croci2018}.
The reasoning behind this splitting is that it is important to keep the dimensionality of the low-discrepancy sequence relatively low: first, as we will see in the next section, the sampling of $\W$ expressed this way requires a supermesh construction and smaller dimensions imply faster $\W$ samples; second, some low-discrepancy sequences cannot readily be sampled in high dimensions\footnote{For example, the state-of-the-art Sobol' sequence generator, Broda, can generate the largest dimensional Sobol' sequences with $65536$ dimensions \cite{Sobol2011}. This might still be too low for an infinite-dimensional PDE setting.} and third, the approximation properties of some quasi-random sequences deteriorate as the dimensionality grows \cite{glasserman2013,DickKuoSloan2013}.
\section{Sampling independent realizations for QMC}
\label{sec:QMC_sample_white_noise}
To sample $u(\bm{x}, \omega)$, we must solve equation \eqref{eq:white_noise_eqn}. In what follows, we set $\eta=1$ and we will only consider the $k=1$ case for simplicity. We refer to \cite{Bolin2017}, \cite{Bolin2017SPDEApproach} for the general $k>d/4$ case. After these simplifications, we obtain
\begin{align}
\label{eq:white_noise_SPDE_reminder}
u - \kappa^{-2}\Delta u = \W,\quad \bm{x}\in D,\\
u = 0,\quad \bm{x}\in\partial D.\notag
\end{align}
From now on we introduce the simplifying assumption that $D=[0,1]^d$. Relaxing this assumption to general boxed domains is straightforward, but considering more general cases is non-trivial. As we are using the SPDE approach, we are free to choose any domain shape for $D$ \cite{Lindgren2011} so this is not really a restriction.
It is useful for what comes next to introduce the concept of a Haar mesh (see figure \ref{fig:haar_mesh}):
\begin{definition}[Haar mesh]
\label{def:Haar_mesh}
Let $D=[0,1]^d$ and let $\mathscr{L}\in\bar{\N}$. The Haar mesh $D_\mathscr{L}$ is the uniform structured mesh of $D$ whose cells are congruent axis-aligned hypercubes of volume $|\square_H| = 2^{-d(\mathscr{L}+1)}$. Note that for a given $\mathscr{L}$ there are exactly as many cells in $D_\mathscr{L}$ as terms in the wavelet expansion \eqref{eq:wavelet_QMC_white_noise} for $\W_\mathscr{L}$, namely $\mathscr{N}_\mathscr{L}=2^{d(\mathscr{L}+1)}$.
\end{definition}
We solve \eqref{eq:white_noise_SPDE_reminder} with the FEM. Let $D_h$ be a mesh of $D$, not necessarily nested within the Haar mesh $D_\mathscr{L}$. Let $V\subseteq H^1_0(D)$ and let $V_h=\spn(\phi_1,\dots,\phi_m)\subseteq V$ be the FEM subspace used to solve equation \eqref{eq:white_noise_SPDE_reminder} on $D_h$.
In what follows we will refer to $D_h$ as the \emph{FEM mesh} and we assume for simplicity that there are always $m_e$ degrees of freedom of $V_h$ on each cell of $D_h$.
A discrete weak form of \eqref{eq:white_noise_SPDE_reminder} then reads: find $u_h\in V_h$ such that
\begin{align}
(u_h, v_h) + \kappa^{-2}(\nabla u_h,\nabla v_h) = \langle \W, v_h \rangle
\quad \text{for all } v_h\in V_h.
\end{align}
The solution ${u_h=\sum_{i=1}^mu_i\phi_i}$, expressed in terms of the basis functions of $V_h$, is given by the following linear system for the $u_i$,
\begin{align}
\label{eq:linear_system}
A\bm{u} = \bm{b},\quad\text{where}\quad A_{ij}=(\phi_i,\phi_j) + \kappa^{-2}(\nabla\phi_i,\nabla\phi_j),\quad b_i = \langle \W, \phi_i \rangle.
\end{align}
Now, since $\W = \W_{\mathscr{L}} + \W_R$, the $b_i$ can also be expressed as
\begin{align}
\label{eq:bL_bR}
b_i = (\bm{b}_\mathscr{L})_i + (\bm{b}_R)_i,\quad\text{with}\quad (\bm{b}_\mathscr{L})_i = (\W_\mathscr{L}, \phi_i),\quad (\bm{b}_R)_i = \langle\W_R, \phi_i\rangle.
\end{align}
Note that we use the $L^2(D)$ inner product notation for $\W_{\mathscr{L}}$ since $\W_{\mathscr{L}}$ is a.s.~in $L^2(D)$ for finite $\mathscr{L}$.
The task of computing a realization of white noise is therefore equivalent to computing a sample of $\W_{\mathscr{L}}$ and $\W_R$, and consequently of the two vectors $\bm{b}_{\mathscr{L}}$ and $\bm{b}_R$. As we will see, the sampling strategies for the two terms are considerably different. Nevertheless, we will explain how both terms can be sampled efficiently in linear or log-linear complexity.
\begin{remark}
\label{rem:split_haar}
From now on we assume without loss of generality that the support of each $\phi_i\in V_h$ is entirely contained in a single Haar mesh cell. The reason why the generality of what follows is not affected is that each basis function $\phi_i\in V_h$ can always be split into the sum of the restrictions of $\phi_i$ to each cell of $D_\mathscr{L}$. Note that splitting the basis functions when $D_h$ is non-nested within the Haar mesh requires a supermesh construction. We will indicate with $S_h$ a given supermesh between $D_h$ and $D_\mathscr{L}$.
\end{remark}
\subsection{Sampling of $\W_\mathscr{L}$}
We first consider the efficient sampling of $\W_\mathscr{L}$.
In order to achieve good convergence with respect to the number of QMC samples we align the terms in the quasi-random sequence according to the $||\cdot||_1$ norm of the vector $\bm{l}$ in the expansion \eqref{eq:wavelet_QMC_white_noise} for $\W_\mathscr{L}$: the first term corresponds to $z_{-\bm{1},\bm{0}}$, the second batch of terms corresponds to the $z_{\bm{l},\bm{n}}$ with $||\bm{l}||_1 = 0$, the third batch to those with $||\bm{l}||_1 = 1$ and so on.
\begin{figure}[!h]
\begin{center}
\begin{subfigure}{0.4\linewidth}
\includegraphics[width=\linewidth]{./haar_mesh.eps}
\caption{}
\label{fig:haar_mesh}
\end{subfigure}\hspace{0.1\linewidth}
\begin{subfigure}{0.4\linewidth}
\includegraphics[width=\linewidth]{./haar_level_set.eps}
\caption{}
\label{fig:haar_level_set}
\end{subfigure}
\vspace{-12pt}
\caption{\textit{On the left, the Haar mesh in the $d=2$, $\mathscr{L}=0$ case. The Haar cells are coloured according to the values of the $H_{\bm{0},\bm{0}}=\Psi(x)\Psi(y)$ wavelet: yellow for $+1$, blue for $-1$. On the right, a schematic of the sampling strategy for the Haar coefficients of $\W$ in 2D. The coefficients in the square are the coefficients for $\W_\mathscr{L}$, while those in the unbounded ``L-shaped'' domain belong to $\W_R$. The green region corresponds to the coefficients with $||\bm{l}||_1\leq \mathscr{L}$ which are sampled with a low-discrepancy sequence and ordered according to $||\bm{l}||_1$. The others, corresponding to the light blue regions, are sampled with independent pseudo-random numbers.}}
\label{fig:haar_stuff}
\end{center}
\vspace{-12pt}
\end{figure}
We adopt a hybrid sampling technique for $\W_\mathscr{L}$: of all the wavelet coefficients $\bm{z}_\mathscr{L} \in \R^{\mathscr{N}_\mathscr{L}}$ corresponding to the Haar levels $|\bm{l}| \leq \mathscr{L}$, we only sample those with $||\bm{l}||_1 \leq \mathscr{L}$ (note the change from max norm to $1$ norm) using a low-discrepancy sequence (in our case Sobol with digital shifting \cite{glasserman2013}) and we sample the rest using independent pseudo-random numbers, i.e. we sample $\bm{z}_\mathscr{L} = [\bm{z}_{\text{QMC}}^T,\bm{z}_{\text{MC}}^T]^T$, where $\bm{z}_{\text{QMC}}$ is obtained by applying the normal inverse CDF to a randomized low-discrepancy sequence point and $\bm{z}_{\text{MC}}$ is sampled with a pseudo-random number generator. Again this is in the interest of keeping the dimensionality of the low-discrepancy sequence low. To get an idea of the numbers, there are $2^{d(\mathscr{L}+1)}$ wavelets satisfying $|\bm{l}| \leq \mathscr{L}$, but only $2^{\mathscr{L}-1}(\mathscr{L}+3)$ and $2^{\mathscr{L}-2}(\mathscr{L}^2+9\mathscr{L}+16)$ satisfying $||\bm{l}||_1 \leq \mathscr{L}$ in 2D and 3D respectively. To fix ideas, we show a schematic of our sampling choices for the coefficients of $\W$ in figure \ref{fig:haar_level_set}.
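In code, assembling one such hybrid coefficient vector is straightforward; the sketch below uses scipy's scrambled Sobol' generator and the normal inverse CDF for the leading coefficients and a pseudo-random generator for the rest (the dimensions shown are illustrative only).
\begin{verbatim}
import numpy as np
from scipy.stats import norm, qmc

s_qmc, s_mc = 32, 96                      # leading QMC dims (||l||_1 <= L) and the rest
z_qmc = norm.ppf(qmc.Sobol(d=s_qmc, scramble=True, seed=0).random(1)[0])
z_mc = np.random.default_rng(0).standard_normal(s_mc)
z_L = np.concatenate([z_qmc, z_mc])       # coefficient vector for W_L
\end{verbatim}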
Let $V_H=\spn(\psi_1,\dots,\psi_{\mathscr{N}_\mathscr{L}})$ be the space of piecewise constant functions over the cells $\square_k$ of the Haar mesh $D_\mathscr{L}$. In a moment, we will prove that $\W_\mathscr{L}\in V_H$ almost surely and that therefore it can be expressed in terms of the basis functions of $V_H$ as $\W_\mathscr{L} = \sum_{k=1}^{\mathscr{N}_\mathscr{L}}w_k\psi_k$, where $w_k$ is the value of $\W_\mathscr{L}$ over the Haar cell $\square_k$. In practice, rather than computing the inner products of each Haar wavelet with the basis functions of $V_h$, it is more straightforward to just compute each $w_k$ from the $\bm{z}_\mathscr{L}$ sample and compute each entry of $\bm{b}_\mathscr{L}$ as $(\bm{b}_\mathscr{L})_i = w_{\kappa(i)}\int_D\phi_i \text{ d}\bm{x}$, where $\kappa(i)$ is the index $k$ of the Haar cell that contains the support of $\phi_i$. Before explaining how this is done in practice, we prove that $\W_\mathscr{L}$ can be interpreted as the projection of white noise onto $V_H$ and therefore $\W_\mathscr{L}$ does indeed belong to $V_H$.
\begin{lemma}
\label{lemma:W_L_is_projection}
Let $V_H=\spn(\psi_1,\dots,\psi_{\mathscr{N}_\mathscr{L}})$ be the space of piecewise constant functions over the cells $\square_i$ of the Haar mesh $D_\mathscr{L}$. Let $P_H$ be the $L^2$ projection onto $V_H$ and define the projected white noise $P_H\W$ as follows,
\begin{align}
(P_H\W, v) := \langle \W, P_Hv\rangle,\quad\forall v\in L^2(D).
\end{align}
We then have that $\W_\mathscr{L} \equiv P_H\W$ in $L^2(\Omega, L^2(D))$.
\end{lemma}
\begin{proof}
We note that all the Haar wavelets in the expansion for $\W_\mathscr{L}$ can be represented as a linear combination of basis functions of $V_H$. Since there are exactly as many wavelets as basis functions of $V_H$ (see definition \ref{def:Haar_mesh}) and since these wavelets are linearly independent, we conclude that the Haar wavelets form a basis of $V_H$. Therefore $\W_\mathscr{L}\in L^2(\Omega,V_H)$ and $\langle \W_R, v\rangle = 0$ for all $v \in V_H$. Furthermore, for all $v\in L^2(D)$,
\begin{align}
(\W_\mathscr{L}, v) &= (\W_\mathscr{L}, P_Hv + v^\perp) = (\W_\mathscr{L}, P_Hv) \\
&= \langle\W - \W_R, P_Hv\rangle = \langle \W, P_Hv \rangle =: (P_H\W, v),
\end{align}
almost surely since $\langle \W_R, P_Hv\rangle = 0$ for all $v\in L^2(D)$. Here we used the fact that all $v\in L^2(D)$ can be split as $v = P_Hv + v^\perp$, where $v^\perp \in V_H^\perp$.
\end{proof}
We now propose the following algorithm for sampling $\W_\mathscr{L}$:\\\vspace{-6pt}
\paragraph{Algorithm for the sampling of $\W_\mathscr{L}$:}
\begin{enumerate}
\item Compute the supermesh between the FEM mesh and the Haar mesh and split the support of the basis functions of $V_h$ to obtain $\{\phi_i\}_{i=1}^m$ each of which with support entirely contained within a single Haar cell. Compute the scalar map $\kappa(i)$ that maps each $i$ to the index $k$ of the Haar cell $\square_k$ that contains the support of $\phi_i$ and compute $\int_D \phi_i \text{ d}\bm{x}$ for all $i=1,\dots,m$. This step can be done offline.
\item Sample the vector $\bm{z}_\mathscr{L} \in \R^{\mathscr{N}_\mathscr{L}}$ of the coefficients in the expression \eqref{eq:wavelet_QMC_white_noise} for $\W_\mathscr{L}$ as $\bm{z}_\mathscr{L} = [\bm{z}_{\text{QMC}}^T,\bm{z}_{\text{MC}}^T]^T$, where $\bm{z}_{\text{QMC}}$ is a randomized low-discrepancy sequence point of dimension equal to the number of coefficients with $||\bm{l}||_1\leq \mathscr{L}$ and $\bm{z}_{\text{MC}}$ is sampled with a pseudo-random number generator.
\item Sample the values $w_k$ of $\W_\mathscr{L}$ over each Haar mesh cell $\square_k$ as follows. Let $J(\bm{l},\bm{n})$ be the index map that given $(\bm{l},\bm{n})$ returns the index $j$ such that ${z_{\bm{l}, \bm{n}} = (\bm{z}_\mathscr{L})_j}$ (the two vectors are the same up to reordering) and define $\bm{m}_k\in\R^{d}$ to be the coordinate vector of the midpoint of $\square_k$. For each $k=1,\dots,\mathscr{N}_\mathscr{L}$ and $\bm{l}$ with $|\bm{l}| \leq \mathscr{L}$, there is only one wavelet with level vector $\bm{l}$ with non-zero support over $\square_k$. For $i=1,\dots,d$, its wavelet number is given by $(\bar{\bm{n}}_k(\bm{l}))_i = \lfloor(\bm{m}_k)_i2^{\bm{l}_i}\rfloor$ and its sign over $\square_k$ by $\bar{s}_k(\bm{l})=\prod_{i=1}^ds_k(\bm{l}_i)$, where the $s_k(\bm{l}_i)$ are the signs of the 1D Haar wavelets in the tensor product for $H_{\bm{l},\bar{\bm{n}}_k(\bm{l})}$, namely
\begin{align}
\label{alg:sign_1D_Haar_wavelet}
s_k(\bm{l}_i) = 1-2(\lfloor(\bm{m}_k)_i2^{\bm{l}_i+1}\rfloor \pmod 2).
\end{align}
This expression comes from the fact that Haar wavelets are positive on even Haar cells and negative on odd cells. We set for all $k=1,\dots,\mathscr{N}_\mathscr{L}$,
\begin{align}
w_k = \sum\limits_{\bm{l}=-\bm{1}}^{|\bm{l}|\leq \mathscr{L}}\bar{s}_k(\bm{l})\, (\bm{z}_\mathscr{L})_{J(\bm{l},\bar{\bm{n}}_k(\bm{l}))}\,2^{||\bm{l}^+||_1/2}.
\end{align}
\item For all $i=1,\dots,m$, set $(\bm{b}_\mathscr{L})_i = w_{\kappa(i)}\int_D\phi_i \text{ d}\bm{x}$.
\end{enumerate}
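The closed-form expressions in step 3 can be checked against a direct evaluation of the truncated expansion \eqref{eq:wavelet_QMC_white_noise} at the Haar cell midpoints ($\W_\mathscr{L}$ is constant on each cell). The sketch below does this in 2D for a small $\mathscr{L}$; for simplicity all coefficients are drawn pseudo-randomly rather than with the hybrid strategy of step 2, and the index map $J$ is replaced by a dictionary keyed on $(\bm{l},\bm{n})$.
\begin{verbatim}
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

def haar_1d(l, n, x):
    if l == -1:
        return 1.0 if 0.0 <= x < 1.0 else 0.0
    y = 2.0**l * x - n
    if 0.0 <= y < 0.5:
        return 2.0**(l / 2.0)
    if 0.5 <= y < 1.0:
        return -2.0**(l / 2.0)
    return 0.0

d, L = 2, 1
levels = range(-1, L + 1)
coeffs = {(lv, nv): rng.standard_normal()
          for lv in product(levels, repeat=d)
          for nv in product(*[range(2**max(l, 0)) for l in lv])}

ncells = 2**(L + 1)
mids = (np.arange(ncells) + 0.5) / ncells
for mid in product(mids, repeat=d):
    # (a) direct evaluation of the truncated expansion at the cell midpoint
    w_direct = sum(z * np.prod([haar_1d(l, n, x) for l, n, x in zip(lv, nv, mid)])
                   for (lv, nv), z in coeffs.items())
    # (b) one active wavelet per level vector, located and signed via the step-3 formulas
    w_formula = 0.0
    for lv in product(levels, repeat=d):
        nbar = tuple(int(np.floor(x * 2.0**l)) for l, x in zip(lv, mid))
        sign = np.prod([1 - 2 * (int(np.floor(x * 2.0**(l + 1))) % 2)
                        for l, x in zip(lv, mid)])
        w_formula += sign * coeffs[(lv, nbar)] * 2.0**(sum(max(l, 0) for l in lv) / 2.0)
    assert abs(w_direct - w_formula) < 1e-12
print("cell values from the closed-form formulas match the direct expansion")
\end{verbatim}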
\begin{remark}
\label{rem:intersections_with_haar_mesh}
In point $1$ and $3$ above we exploit the fact that the Haar mesh is uniform and structured.
For instance, we can readily obtain the Haar mesh cell in which any point $\bm{p}\in D$ lies: it belongs to the ${\lfloor(\bm{p})_i2^{\mathscr{L}+1}\rfloor\text{-th}}$ Haar cell from the origin in the $i$-th coordinate direction. When supermeshing this makes the search for candidate intersections \cite{Farrell2009Supermesh} inexpensive as we always know for a given cell of $D_h$ exactly with which Haar cells it intersects. The expressions for $\bar{\bm{n}}_k(\bm{l})$ and $\bar{s}_k(\bm{l})$ in point $3$ above also derive from the same considerations.
\end{remark}
\begin{remark}[Complexity of the sampling of $\W_\mathscr{L}$]
\label{rem:supermesh_complexity}
Let $m$ be the number of basis functions that span $V_h$, let $\mathscr{N}_\mathscr{L}=2^{d(\mathscr{L}+1)}$ be the number of cells in the Haar mesh and let $N_\mathscr{L}=(\mathscr{L}+2)^d$ be the number of wavelets that are non-zero over a given Haar cell. In general, it is possible to sample $\W_\mathscr{L}$ in $O(m + N_\mathscr{L}\mathscr{N}_\mathscr{L})$ complexity, which reduces to $O(m + \mathscr{N}_\mathscr{L})$ in the case in which we are using non-standard Haar wavelets\footnote{In this case it is possible to use a multi-dimensional generalization of the Brownian bridge construction (of which $\W_\mathscr{L}$ is the derivative) which is well known in the computational finance literature \cite{glasserman2013}.} (cf.~remark \ref{rem:compactly_supported_Haar_basis}). If $D_h$ is not nested within $D_\mathscr{L}$ a supermesh construction is needed (cf.~remark \ref{rem:split_haar}). In this case the cost complexity becomes $O(\mathscr{N}_Sm_e + N_\mathscr{L}\mathscr{N}_\mathscr{L})$, where $\mathscr{N}_S$ is the number of cells in the supermesh between $D_h$ and $D_\mathscr{L}$ and $m_e$ is the number of degrees of freedom on each supermesh cell $e$. Owing to theorem 1.1 in \cite{quasi-uniform-supermeshing}, when $D_h$ is quasi-uniform we have $\mathscr{N}_S = O(\mathscr{N}_h + \mathscr{N}_\mathscr{L})$, where $\mathscr{N}_h$ is the number of cells of $D_h$. This gives a linear cost complexity in the number of cells of $D_h$ and log-linear in the number of cells of $D_\mathscr{L}$ since $N_\mathscr{L} = O((\log_2(\mathscr{N}_\mathscr{L})/d)^d)$. The log-term can be dropped if we use non-standard wavelets.
\end{remark}
\subsection{Sampling of $\W_R$}
We now consider the efficient sampling of $\W_R$. Dealing with an infinite summation is complicated. However, we can circumvent this problem by noting that the covariance of $\W_R$ is known since, as $\W_\mathscr{L}$ is independent from $\W_R$ by construction, for all $u,v\in L^2(D)$ we have
\begin{align}
\label{eq:cov_Wr_splitting}
\E[\langle \W_R, u\rangle\langle \W_R, v\rangle] = \E[\langle \W, u\rangle\langle \W, v\rangle] - \E[(\W_\mathscr{L}, u)(\W_\mathscr{L}, v)],
\end{align}
where the covariance of $\W$ is known by definition \ref{def:white_noise} and the covariance of $\W_\mathscr{L}$ is given by the following lemma.
\begin{lemma}
\label{lemma:covariance_Wr}
Let $\square_i$ for $i=1,\dots,\mathscr{N}_\mathscr{L}$ be the $i$-th cell of $D_\mathscr{L}$ of volume\\ ${|\square_i| = 2^{-d(\mathscr{L}+1)}=|\square_H|}$ for all $i$ (see definition \ref{def:Haar_mesh}). Then, for all $u,v\in L^2(D)$,
\begin{align}
\mathcal{C}_\mathscr{L}(u,v) = \E[(\W_\mathscr{L}, u)(\W_\mathscr{L}, v)] = \sum\limits_{i=1}^{\mathscr{N}_\mathscr{L}}\frac{1}{|\square_i|}\int_{\square_i}u\text{ d}\bm{x}\int_{\square_i}v\text{ d}\bm{x}.
\end{align}
\end{lemma}
\begin{proof}
Let $P_H$ be the $L^2$ projection onto $V_H$, then for all $u\in L^2(D)$ we have that $P_Hu=\sum_{i=1}^{\mathscr{N}_\mathscr{L}}u_i\psi_i$ satisfies
\begin{align}
(P_Hu,v_H) = (u,v_H),\quad \forall v_H\in V_H.
\end{align}
A standard FEM calculation gives that the coefficients $u_i$ are given by
\begin{align}
\label{eq:reproof1}
u_i = \frac{1}{|\square_i|}(u,\psi_i) = \frac{1}{|\square_i|}\int_{\square_i}u \text{ d}\bm{x}.
\end{align}
We conclude by using lemma \ref{lemma:W_L_is_projection} to show that, for all $u,v\in V$ such that $P_Hu = \sum_{i=1}^{\mathscr{N}_\mathscr{L}}u_i\psi_i$ and $P_Hv = \sum_{i=1}^{\mathscr{N}_\mathscr{L}}v_i\psi_i$,
\begin{align}
\E[(\W_\mathscr{L},u)(\W_\mathscr{L},v)] = \E[\langle\W,P_Hu\rangle\langle\W,P_Hv\rangle] = (P_Hu,P_Hv) \notag\\
= \sum\limits_{i,j=1}^{\mathscr{N}_\mathscr{L}}u_iv_j(\psi_i,\psi_j) = \sum\limits_{i=1}^{\mathscr{N}_\mathscr{L}}|\square_i|u_iv_i= \sum\limits_{i=1}^{\mathscr{N}_\mathscr{L}}\frac{1}{|\square_i|}\int_{\square_i}u\text{ d}\bm{x}\int_{\square_i}v\text{ d}\bm{x}.
\end{align}
Note that a similar procedure also yields an expression for the pointwise covariance of $\W_\mathscr{L}$. Let $\W_\mathscr{L} = \sum_{k=1}^{\mathscr{N}_\mathscr{L}}w_k\psi_k$ be the representation of $\W_\mathscr{L}$ in terms of the basis functions of $V_H$, then, for all $\bm{x}\in \square_i$, $\bm{y}\in\square_j$, we have
\begin{align}
\E[\W_\mathscr{L}(\bm{x})\W_\mathscr{L}(\bm{y})] = \delta_{ij}\E[w_iw_j] = \delta_{ij}\frac{1}{|\square_i|^2}\E[(\langle\W,\psi_i\rangle)^2] = \frac{\delta_{ij}}{|\square_i|}.
\end{align}
Here by $\delta_{ij}$ we denote the Kronecker delta and we used equation \eqref{eq:reproof1} in the second step.
\end{proof}
It is then readily shown from lemma \ref{lemma:covariance_Wr} and from \eqref{eq:cov_Wr_splitting} that the covariance of $\W_R$ is
\begin{align}
\label{eq:C_R}
\mathcal{C}_R(u,v) = \E[\langle \W_R, u\rangle\langle \W_R, v\rangle] = (u,v) - \sum\limits_{i=1}^{\mathscr{N}_\mathscr{L}}\frac{1}{|\square_i|}\int_{\square_i}u\text{ d}\bm{x}\int_{\square_i}v\text{ d}\bm{x},
\end{align}
for all $u,v\in L^2(D)$.
From lemma \ref{lemma:covariance_Wr} and from definition \ref{def:white_noise} we deduce that if the supports of $u$ and $v$ never share the same Haar mesh cell, then
\begin{align}
\E[(\W_\mathscr{L}, u)(\W_\mathscr{L}, v)] = \E[\langle\W, u\rangle\langle\W, v\rangle] = \E[\langle\W_R, u\rangle\langle\W_R, v\rangle] = 0,
\end{align}
i.e.~the action of $\W_\mathscr{L}$ is exactly the same as the action of white noise in this case and the correction term $\W_R$ is not needed. This means that the restrictions of $\W_\mathscr{L}$ and $\W_R$ to separate Haar mesh cells are statistically independent from each other.
Thanks to this property, we can consider each Haar cell separately and only account for the correlations among the pairings of $\W_R$ with test functions that belong to the same cell. Since the computations on separate Haar cells are independent, these operations can be performed simultaneously in parallel.
Before proceeding, we show that $\mathcal{C}_R$ is a proper covariance function, i.e.~that it is positive semi-definite.
\begin{lemma}
\label{lemma:covariance_W_R_pos_semidef}
The covariance of $\W_R$, $\mathcal{C}_R$, is positive semi-definite.
\end{lemma}
\begin{proof}
With the same notation as in the proof of lemma \ref{lemma:covariance_Wr}, we have that, for all $u\in L^2(D)$,
\begin{align}
\mathcal{C}_R(u,u) = \E[(\langle\W_R, u\rangle)^2] = \E[(\langle\W - P_H\W, u\rangle)^2] = \E[(\langle\W, u - P_Hu\rangle)^2] = ||u-P_Hu||_{L^2(D)}^2,
\end{align}
since $\W_R = \W - \W_\mathscr{L} = \W - P_H\W$. Hence $\mathcal{C}_R(u,u)$ is always non-negative and it is zero if and only if $u\in V_H$.
\end{proof}
\begin{remark}
\label{rem:QMC_white_noise_L2_projection}
From the proofs of lemmas \ref{lemma:covariance_Wr} and \ref{lemma:covariance_W_R_pos_semidef}, we see that we can interpret $\W_\mathscr{L}$ as the $L^2$-projection of white noise onto $V_H$. In principle, if $D_\mathscr{L}$ is fine enough (or if $V_h\equiv V_H$), the correction $\W_R$ is not needed at all. However, Haar wavelets are only piecewise constant and we might only expect first order convergence of $\W_\mathscr{L}$ to $\W$. If so, large QMC dimensions and an extremely fine Haar mesh would be needed to make the correction term $\W_R$ negligible and this translates into very expensive samples of $\W_\mathscr{L}$.
\end{remark}
The sampling of $\W_R$ can be performed independently on each Haar cell. If we focus our attention only on the basis functions $\phi_1,\dots,\phi_{m_k}\in V_h$ of support entirely contained within a given Haar cell $\square_k$, we note that the expression \eqref{eq:C_R} for $\mathcal{C}_R$ simplifies to
\begin{align}
\mathcal{C}_R(\phi_i,\phi_j) = (\phi_i,\phi_j) - \frac{1}{|\square_k|}\int_{\square_k}\phi_i\text{ d}\bm{x}\int_{\square_k}\phi_j\text{ d}\bm{x},\quad\text{for all }i,j\in\{1,\dots,m_k\}.
\end{align}
Similarly as in \cite{Croci2018}, the sampling of $\W_R$ over $\square_k$ boils down to sampling a zero-mean Gaussian vector $\bm{b}_R^k$ with entries $(\bm{b}_R^k)_i = \langle\W_R,\phi_i\rangle$ and covariance matrix $C_R^k$ of entries $(C_R^k)_{ij}$ given by
\begin{align}
\bm{b}_R^k\sim\mathcal{N}(0, C_R^k),\quad (C_R^k)_{ij} = \mathcal{C}_R(\phi_i,\phi_j).
\end{align}
If we let $M_k$ be the local mass matrix over the space spanned by the $\{\phi_i\}_{i=1}^{m_k}$, with entries $(M_k)_{ij}=(\phi_i,\phi_j)$ and if we let the vector $\bm{I}^k\in\R^{m_k}$ be given by
\begin{align}
\label{eq:Ik}
\bm{I}^k = \left[\int_{\square_k}\phi_1\text{ d}\bm{x},\dots,\int_{\square_k}\phi_{m_k}\text{ d}\bm{x}\right]^T,
\end{align}
we can write $C_R^k$ as \vspace{-9pt}
\begin{align}
\label{eq:C_R^k}
C_R^k = M_k - \frac{1}{|\square_k|}\bm{I}^k(\bm{I}^k)^T.
\end{align}
Note that a consequence of lemma \ref{lemma:covariance_W_R_pos_semidef} is that $C_R^k$ is positive semi-definite with null-space spanned by the vector $\bm{1}\in\R^{m_k}$, the length $m_k$ vector of all ones (piecewise constant functions over $D_\mathscr{L}$ are in the null-space of the covariance). The sampling of a Gaussian vector with this covariance through factorization is expensive as direct factorization of $C_R^k$ (e.g.~Cholesky) has an $O(m_k^3)$ and $O(m_k^2)$ cost and memory complexity respectively and it is therefore to be avoided.
We now show how $\bm{b}_R^k$ can be sampled efficiently by extending the techniques presented in \cite{Croci2018}. The main idea is to first sample a Gaussian vector with covariance $M_k$ in linear complexity and then perform an efficient update to obtain a sample of $\bm{b}_R^k$. We can write the action of $\W_R$ against each $\phi_i$ as
\begin{align}
\label{eq:W_R_against_phi_i}
\langle\W_R,\phi_i\rangle = \langle\W-\W_\mathscr{L},\phi_i\rangle = \langle\W,\phi_i\rangle - \langle\W_\mathscr{L},\phi_i\rangle = (\bm{b}_M^k)_i - w_k(\bm{I}^k)_i,
\end{align}
where $\bm{I}^k$ is given by \eqref{eq:Ik}, $w_k$ by
\begin{align}
\label{eq:def_of_wk}
w_k = \frac{1}{|\square_k|}\langle\W,\ind_{\square_k}\rangle,\quad w_k\sim\mathcal{N}\left(0,\frac{1}{|\square_k|}\right),
\end{align}
and the vector $\bm{b}_M^k\in\R^{m_k}$ is given entrywise by
\begin{align}
(\bm{b}^k_M)_i = \langle\W,\phi_i\rangle,\quad i=1,\dots,m_k.
\end{align}
The variables $w_k$ and $\bm{b}_M^k$ are by definition \ref{def:white_noise} all zero-mean joint Gaussian variables with covariance
\begin{align}
\E[w_kw_k] = \frac{1}{|\square_k|},\quad \E[\bm{b}^k_Mw_k] = \frac{\bm{I}^k}{|\square_k|},\quad \E[\bm{b}^k_M(\bm{b}^k_M)^T]=M_k.
\end{align}
Thanks to these relations and to \eqref{eq:W_R_against_phi_i}, if we set
\begin{align}
\bm{b}_R^k = \bm{b}_M^k - w_k\bm{I}^k,
\end{align}
then the covariance of $\bm{b}_R^k$ is correct (cf. equation \eqref{eq:C_R^k}) since
\begin{align}
\E[\bm{b}_R^k(\bm{b}_R^k)^T] &= \E[(\bm{b}_M^k - w_k\bm{I}^k)(\bm{b}_M^k - w_k\bm{I}^k)^T] \notag\\
&=\E[\bm{b}^k_M(\bm{b}^k_M)^T] - \E[\bm{b}^k_Mw_k](\bm{I}^k)^T - \bm{I}^k\E[\bm{b}^k_Mw_k]^T + \E[w_kw_k]\bm{I}^k(\bm{I}^k)^T \notag\\
&= M_k - \frac{1}{|\square_k|}\bm{I}^k(\bm{I}^k)^T.
\end{align}
In what follows, we exploit the fact that constants can be represented exactly by the FEM subspace $V_h$, i.e.~$c\in V_h$ for all $c\in\R$. This assumption is standard and it is required to achieve FEM convergence by the Bramble-Hilbert lemma, cf.~lemma 4.3.8 in \cite{brenner2007mathematical}. Let $\bm{\phi}_k = [\phi_1,\dots,\phi_{m_k}]^T$. This means that for each Haar cell $\square_k$ there exists a vector $\bm{c}_k\in\R^{m_k}$ such that $\ind_{\square_k}\equiv\bm{c}_k\cdot \bm{\phi}_k$. It is then straightforward to obtain $w_k$ from $\bm{b}^k_M$ since
\begin{align}
\label{eq:derive_wk_from_b_M}
\bm{c}_k\cdot\bm{b}^k_M = \sum\limits_{i=1}^{m_k}\langle \W, (\bm{c}_k)_i\phi_i\rangle =
\langle \W, \bm{c}_k\cdot \bm{\phi}_k\rangle = \langle \W, \ind_{\square_k}\rangle = |\square_k|w_k,
\end{align}
hence $w_k = |\square_k|^{-1}\bm{c}_k\cdot\bm{b}^k_M$. Note that $\bm{c}_k$ is always known, e.g.~for Lagrange basis functions on simplices we have $\bm{c}_k=\bm{1}\in\R^{m_k}$.
We can now sample $\W_R$ from its distribution by using the following algorithm, in which we exploit the same strategy we presented in \cite{Croci2018}:\\\vspace{-6pt}
\paragraph{Algorithm for the efficient sampling of $\W_R$.}
\begin{enumerate}
\item Loop over each Haar cell $\square_k$.
\item Use the technique presented in section 4.1 of \cite{Croci2018} to work supermesh cell by supermesh cell and sample a Gaussian vector $\bm{b}^k_M\sim\mathcal{N}(0,M_k)$ in linear cost complexity.
\item Set $w_k = |\square_k|^{-1}\bm{c}_k\cdot\bm{b}^k_M$ and compute $\bm{b}_R^k = \bm{b}_M^k - w_k\bm{I}^k$.
\end{enumerate}
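For concreteness, the correction step above can be illustrated with the following short NumPy sketch. This is an illustration only, not the implementation we use in practice: it assumes the local quantities $M_k$, $\bm{I}^k$, $\bm{c}_k$ and $|\square_k|$ have already been assembled, and it samples $\bm{b}_M^k$ through a dense Cholesky factorization of $M_k$ rather than with the linear-complexity supermesh-cell-wise strategy of \cite{Croci2018}.
\begin{verbatim}
import numpy as np

def sample_b_R(M_k, I_k, c_k, vol_k, rng):
    # Sample b_M^k ~ N(0, M_k); dense Cholesky is used here purely for
    # illustration, the actual algorithm works supermesh cell by cell.
    b_M = np.linalg.cholesky(M_k) @ rng.standard_normal(M_k.shape[0])
    w_k = np.dot(c_k, b_M) / vol_k    # w_k = |box_k|^{-1} c_k . b_M^k
    return b_M - w_k * I_k            # b_R^k = b_M^k - w_k I^k
\end{verbatim}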
\begin{remark}
The sampling strategies for $\W_\mathscr{L}$ and $\W_R$ presented in this work are conceptually different. In the $\W_\mathscr{L}$ case we use the Haar wavelet representation to make sure that the variables in the quasi-random sequence are ordered correctly. Therefore the use of the Haar representation is crucial in the sampling of $\W_\mathscr{L}$. In the $\W_R$ case, instead, the ordering is irrelevant as $\W_R$ is sampled by using pseudo-random numbers. For this reason we can ``forget'' about the wavelet representation in this case and sample $\W_R$ as it is done for any standard Gaussian field, i.e.~by factorising its covariance matrix after discretization.
\end{remark}
\begin{remark}
This algorithm has $O(\mathscr{N}_Sm_e^3)$ cost and $O(\mathscr{N}_Sm_e^2)$ memory complexity, where $\mathscr{N}_S$ is the total number of supermesh cells \cite{Croci2018}. As discussed in remark \ref{rem:supermesh_complexity}, $\mathscr{N}_S$ is of $O(\mathscr{N}_h + \mathscr{N}_\mathscr{L})$, where $\mathscr{N}_h$ and $\mathscr{N}_\mathscr{L}$ are the number of cells of $D_h$ and of $D_\mathscr{L}$ respectively.
\end{remark}
\section{Sampling coupled realizations for MLQMC}
\label{sec:coupled_realizations_MLQMC}
We now generalize the QMC sampling algorithm just presented to the MLQMC case. Compared to standard Monte Carlo, both MLMC and QMC already bring a significant computational improvement. When the two are combined into MLQMC, it is sometimes possible to obtain the best of both worlds and further improve the computational complexity and speed. However, to do so, we must be able to satisfy the requirements and assumptions underlying both QMC and MLMC: we must order the dimensions of our random input in decaying order of importance, as in QMC, and we must introduce an approximation level hierarchy and enforce a good coupling between the levels, as in MLMC. We now show how this can be done with white noise sampling.
In what follows we assume we have a MLQMC hierarchy of possibly non-nested FEM approximation subspaces $\{V^\ell\}_{\ell=1}^L$ over the meshes $\{D_h^\ell\}_{\ell=1}^L$ and of accuracy increasing with $\ell$. Since as in the MLMC case (see \cite{Croci2018}) the only stochastic element in \eqref{eq:white_noise_SPDE_reminder} is white noise, on each MLQMC level we must be able to draw Mat\'ern field samples $u_\ell\in V^\ell$ and $u_{\ell-1}\in V^{\ell-1}$ for $\ell > 1$ that satisfy the following variational problems coupled by the same white noise sample: for a given $\omega_\ell^n\in\Omega$, find $u_\ell\in V^\ell$ and $u_{\ell-1}\in V^{\ell-1}$ such that
\begin{align}
\label{eq:coupled1}
(u_{\ell},v_{\ell}) + \kappa^{-2}(\nabla u_{\ell},\nabla v_{\ell}) &= \langle \W,v_{\ell} \rangle (\omega^n_\ell),\quad\hspace{10.5pt}\text{for all } v_{\ell}\in V^\ell,\\
(u_{\ell-1},v_{\ell-1}) + \kappa^{-2}(\nabla u_{\ell-1},\nabla v_{\ell-1}) &= \langle \W,v_{\ell-1} \rangle (\omega^n_\ell),\quad\text{for all } v_{\ell-1}\in V^{\ell-1}.
\label{eq:coupled2}
\end{align}
Here the terms on the right hand side are coupled in the sense that they are centred Gaussian random variables with covariance $\E[\langle \W,v_{\lell} \rangle \langle \W,v_{s} \rangle] = (v_{\lell},v_s)$ for $\lell,s\in\{\ell,\ell-1\}$, as given by definition \ref{def:white_noise}. Again we order the dimensions of white noise by expanding it in the Haar wavelet basis as in \eqref{eq:Haar_expansion_full}, but this time we allow the Haar level to increase with the MLQMC level and we split the expansion on both MLQMC levels at the finer of the two Haar levels, $\mathscr{L}_\ell$,
\begin{align}
\label{eq:wavelet_MLQMC_white_noise1}
\W = \W_{\mathscr{L}_\ell} + \W_{R_\ell},
\end{align}
where the splitting of the expansion is done in the same way as in equation \eqref{eq:wavelet_QMC_white_noise}. From now on we assume that $\mathscr{L}_{\ell-1}\leq \mathscr{L}_\ell$, although extending the methods presented to decreasing Haar level hierarchies is straightforward. Let $\{\phi^\ell_i\}_{i=1}^{m_\ell}$ and $\{\phi^{\ell-1}_j\}_{j=1}^{m_{\ell-1}}$ be the basis functions spanning $V^\ell$ and $V^{\ell-1}$ respectively. Sampling white noise on both MLQMC levels again means to sample the vectors $\bm{b}_\mathscr{L}^\ell$, $\bm{b}_\mathscr{L}^{\ell-1}$, $\bm{b}_R^\ell$ and $\bm{b}_R^{\ell-1}$, with entries given by,
\begin{align}
(\bm{b}_\mathscr{L}^{\lell})_i = \langle \W_{\mathscr{L}_\ell}, \phi^{\lell}_i\rangle,\quad (\bm{b}_R^{\lell})_i = \langle \W_{R_\ell}, \phi^{\lell}_i\rangle,\quad\text{for } i = 1,\dots,m_{\lell},\quad \lell\in\{\ell,\ell-1\}.
\end{align}
Since we require both a multilevel coupling and a Haar wavelet expansion, this time we need to construct a \emph{three-way} supermesh $S_h$ between $D_{\mathscr{L}_\ell}$, $D_h^\ell$ and $D_h^{\ell-1}$ (note that $D_{\mathscr{L}_{\ell-1}}$ is always nested within $D_{\mathscr{L}_\ell}$, so there is no need for a four-way supermesh). Thanks to the supermesh construction we can split the support of all the basis functions so that each $\phi^\ell_i$ and $\phi^{\ell-1}_j$ has support entirely contained within a single Haar cell. In fact, we will assume for simplicity from now on that the supports of all basis functions have this property. The sampling of $\W$ in the MLQMC case is extremely similar to that of the QMC case, with only a few differences concerning the sampling of $\W_{R_\ell}$, which we now highlight.
Again, portions of $\W_{R_\ell}$ on separate Haar cells of $D_{\mathscr{L}_\ell}$ are independent and we can therefore sample $\W_{R_\ell}$ Haar cell-wise. For each Haar cell $\square_k$ and for $\lell\in\{\ell,\ell-1\}$, let $\phi^{\lell}_1,\dots,\phi^{\lell}_{m^{\lell}_k}$ be the basis functions with non-zero support over $\square_k$ and define the Haar cell correction vectors $\bm{b}_{R,k}^{\lell}$ with entries $(\bm{b}_{R,k}^{\lell})_i = \langle \W, \phi^{\lell}_i\rangle$ for $i\in\{1,\dots,m^{\lell}_k\}$ and covariances given by,
\begin{align}
\E[\bm{b}_{R,k}^{\lell}(\bm{b}_{R,k}^{\lell})^T] = M_k^{\lell} - \frac{1}{|\square_k|}\bm{I}^k_{\lell}(\bm{I}^k_{\lell})^T,\quad \E[\bm{b}_{R,k}^\ell(\bm{b}_{R,k}^{\ell-1})^T]=M_k^{\ell,\ell-1} - \frac{1}{|\square_k|}\bm{I}^k_\ell(\bm{I}^k_{\ell-1})^T,
\end{align}
where $(M_k^{\lell})_{ij}=(\phi^{\lell}_i,\phi^{\lell}_j)$, $(M_k^{\ell,\ell-1})_{ij}=(\phi^\ell_i,\phi^{\ell-1}_j)$ and $(\bm{I}^k_{\lell})_i = \int_D \phi^{\lell}_i \text{ d}\bm{x}$. If we define $w_k$ as in \eqref{eq:def_of_wk} we can again write
\begin{align}
\bm{b}_{R,k}^{\lell} = \bm{b}_{M,k}^{\lell} - w_k\bm{I}^k_{\lell},\quad\text{for }\lell\in\{\ell,\ell-1\},
\end{align}
where $\bm{b}_{M,k}^{\lell} \sim\mathcal{N}(0,M_k^{\lell})$. Since constants can be represented exactly by both $V^\ell$ and $V^{\ell-1}$, i.e.~for all $c\in\R$ and for all $\lell\in\{\ell,\ell-1\}$, we have that $c\in V^{\lell}$, then there exist two vectors $\bm{c}_k^\ell$ and $\bm{c}_k^{\ell-1}$ such that $\ind_{\square_k} \equiv \bm{c}_k^\ell\cdot\bm{\phi}_k^\ell\equiv\bm{c}_k^{\ell-1}\cdot\bm{\phi}_k^{\ell-1}$, where $\bm{\phi}_k^{\lell} = [\phi_1^{\lell},\dots,\phi_{m_k^{\lell}}^{\lell}]^T$ for $\lell\in\{\ell,\ell-1\}$. The same argument used to derive equation \eqref{eq:derive_wk_from_b_M} then gives
\begin{align}
w_k = \frac{1}{|\square_k|}\bm{c}_k^\ell\cdot\bm{b}_{M,k}^\ell = \frac{1}{|\square_k|}\bm{c}_k^{\ell-1}\cdot\bm{b}_{M,k}^{\ell-1}.
\end{align}
We can now proceed with the coupled sampling of $\W$ for MLQMC as follows:\\\vspace{-6pt}
\paragraph{Algorithm for the efficient sampling of $\W$ for MLQMC}
\begin{enumerate}
\item Compute the three-way supermesh between the FEM meshes and the Haar mesh $D_{\mathscr{L}_\ell}$ and split the support of the basis functions of $V^\ell$ and $V^{\ell-1}$ to obtain $\{\phi_i^{\lell}\}_{i=1}^{m^{\lell}}$ for $\lell\in\{\ell,\ell-1\}$, each with support entirely contained within a single Haar cell. Compute the scalar maps $\kappa^{\lell}(i)$ that map each $i$ to the index $k$ of the Haar cell $\square_k$ that contains the support of $\phi_i^{\lell}$ and compute $\int_D \phi_i^{\lell} \text{ d}\bm{x}$ for all $i=1,\dots,m^{\lell}$ and for ${\lell}\in\{\ell,\ell-1\}$. This step can be done offline.
\item Let $\mathscr{N}_{\mathscr{L}_\ell}$ be the number of cells of $D_{\mathscr{L}_\ell}$. Sample the vector $\bm{z}_{\mathscr{L}_\ell} \in \R^{\mathscr{N}_{\mathscr{L}_\ell}}$ of the coefficients in the expression for $\W_{\mathscr{L}_\ell}$ as $\bm{z}_{\mathscr{L}_\ell} = [\bm{z}_{\text{QMC}}^T,\bm{z}_{\text{MC}}^T]^T$, where $\bm{z}_{\text{QMC}}$ is a randomized low-discrepancy sequence point of dimension equal to the number of coefficients with $||\bm{l}||_1\leq {\mathscr{L}_\ell}$ and $\bm{z}_{\text{MC}}$ is sampled with a pseudo-random number generator.
\item Compute the Haar cell values $\bar{w}_k$ of $\W_\mathscr{L}$ over all $\square_k$ for $k=1,\dots,\mathscr{N}_{\mathscr{L}_\ell}$ in the same way as in the QMC case (this step does not depend on the FEM meshes).
\item Use the technique presented in section 4.2 of \cite{Croci2018} to work supermesh cell by supermesh cell and sample in linear cost complexity the coupled Gaussian vectors $\bm{b}^\ell_{M,k}$ and $\bm{b}^{\ell-1}_{M,k}$ with covariance,
\begin{align}
\E\left[
\left[\begin{array}{c}
\bm{b}^\ell_{M,k}\\ \hline
\bm{b}^{\ell-1}_{M,k}
\end{array}\right]
\left[\begin{array}{c}
\bm{b}^\ell_{M,k}\\ \hline
\bm{b}^{\ell-1}_{M,k}
\end{array}\right]^T\right] =
\left[\begin{array}{c|c}
M_k^\ell & M_k^{\ell,\ell-1} \\ \hline
(M_k^{\ell,\ell-1})^T & M_k^{\ell-1}
\end{array}\right].
\end{align}
\item For all $\lell\in\{\ell,\ell-1\}$, compute $(\bm{b}_\mathscr{L}^{\lell})_i = \bar{w}_{\kappa^{\lell}(i)}\int_D\phi_i^{\lell} \text{ d}\bm{x}$ for all $i=1,\dots,m^{\lell}$, then set $w_k = |\square_k|^{-1}\bm{c}_k^{\ell-1}\cdot\bm{b}_{M,k}^{\ell-1}$ and compute $\bm{b}_{R,k}^{\lell} = \bm{b}_{M,k}^{\lell} - w_k\bm{I}^k_{\lell}$.
\end{enumerate}
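For illustration, the coupled correction in steps 4--5 can be sketched as follows. Again this is a simplified sketch rather than the implementation we use: the joint vector is sampled here through a dense Cholesky factorization of its block covariance (which assumes the block covariance is positive definite, as in the non-nested case), and the local matrices and vectors are assumed to be available.
\begin{verbatim}
import numpy as np

def sample_coupled_b_R(M_f, M_c, M_fc, I_f, I_c, c_c, vol_k, rng):
    # Joint covariance of [b_M^fine; b_M^coarse] is [[M_f, M_fc], [M_fc^T, M_c]].
    C = np.block([[M_f, M_fc], [M_fc.T, M_c]])
    b = np.linalg.cholesky(C) @ rng.standard_normal(C.shape[0])
    b_M_f, b_M_c = b[:M_f.shape[0]], b[M_f.shape[0]:]
    w_k = np.dot(c_c, b_M_c) / vol_k   # the same w_k couples both levels
    return b_M_f - w_k * I_f, b_M_c - w_k * I_c
\end{verbatim}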
\begin{remark}[Complexity of the sampling of $\W$ for MLQMC]
\label{rem:MLQMC_sampling_complexity}
The overall complexity of this sampling strategy is $O(\mathscr{N}_{S_\ell}m_e^\ell + N_{\mathscr{L}_\ell}\mathscr{N}_{\mathscr{L}_\ell})$ in the standard Haar wavelet case and $O(\mathscr{N}_{S_\ell}m_e^\ell + \mathscr{N}_{\mathscr{L}_\ell})$ in the non-standard case (cf.~remark \ref{rem:compactly_supported_Haar_basis}), where (cf.~remark \ref{rem:supermesh_complexity}) $\mathscr{N}_{S_\ell}$ is the number of cells of the three-way supermesh on the MLQMC level $\ell$, $m_e^\ell$ is the number of dofs of $V^\ell$ per cell of $D_h^\ell$ and $N_{\mathscr{L}_\ell}$ is the number of wavelets that have non-zero support over any of the $\mathscr{N}_{\mathscr{L}_\ell}$ cells of $D_{\mathscr{L}_\ell}$. Since $\mathscr{N}_{S_\ell} = O(\mathscr{N}_h^\ell + \mathscr{N}_{\mathscr{L}_\ell})$ (cf.~theorem 1.1 in \cite{quasi-uniform-supermeshing}), where $\mathscr{N}_h^\ell$ is the number of cells of $D_h^\ell$, this gives an overall linear cost complexity in $\mathscr{N}_h^\ell$ and log-linear (linear for non-standard wavelets) in $\mathscr{N}_{\mathscr{L}_\ell}$.
\end{remark}
\begin{remark}[Simpler cases: nested meshes and $p$-refinement]
When the MLQMC mesh hierarchy is nested and/or the hierarchy is composed by taking a single mesh and increasing the polynomial degree of the FEM subspaces we have $V^{\ell-1}\subseteq V^\ell$. In this case everything we discussed still applies with the following simplifications: only a two-way supermesh between $D_h^\ell$ and $D_{\mathscr{L}_\ell}$ is needed on each MLQMC level in the $h$-refinement case. In the $p$-refinement case we only have one FEM mesh $D_h$ and a single two-way supermesh construction is needed between $D_h$ and the finest Haar mesh $D_{\mathscr{L}_L}$.
\end{remark}
\begin{remark}[Non-nested mesh hierarchies and embedded domains]
In practice, we assume that we are given a user-provided hierarchy $\{G_h^\ell\}_{\ell=0}^{L}$ of possibly non-nested FEM meshes of the domain $G$ on which we need the Mat\'ern field samples. From this we construct a boxed domain $D$ s.t.~$G\subset\joinrel\subset D$ and a corresponding hierarchy of Haar meshes $\{D_{\mathscr{L}_\ell}\}_{\ell=0}^{L}$ and of FEM meshes of $D$, $\{D_h^\ell\}_{\ell=0}^{L}$. As in \cite{Croci2018}, it is convenient to construct each $D_h^\ell$ so that $G_h^\ell$ is nested within it, so that each Mat\'ern field sample can be transferred exactly and at negligible cost on the mesh on which it is needed (this is the embedded domain strategy proposed in \cite{Osborn2017}).
\end{remark}
\begin{remark}[Generic wavelets]
\label{rem:QMC_generalizations_wavelets}
We expect the generalization of the presented sampling methods to generic wavelets to be straightforward, although it is unclear whether this would bring any considerable advantage. We leave this investigation to future research.
\end{remark}
\begin{remark}[General domains]
\label{rem:QMC_generalizations}
We briefly speculate on the extension of the methods presented to more general (i.e.~non boxed) domains. The same sampling method could be generalized to general convex domains by introducing ``generalized'' Haar wavelets and meshes, obtained by partitioning a mesh into sub-regions and defining the cells of $D_\mathscr{L}$ through aggregation of the cells of $D_h$. Establishing any theoretical results in this case would be more complex, but the same algorithm should carry forward after accounting for the fact that the ``Haar cells'' obtained through aggregation would have variable volume. The advantage of doing this is that no supermesh would then be required in the QMC case (the Haar mesh would be nested within $D_h$ by construction) and only one supermesh construction would be needed (between $D_h^\ell$ and $D_h^{\ell-1}$) in the non-nested MLQMC case. We leave the implementation of this extension to future work.
\end{remark}
\section{Numerical results}
\label{sec:MLQMC_num_res}
We now test the algorithms presented. We consider test problem \eqref{eq:diffusion_eqn_for_QMC_conv} over the domain $G=(-0.5,0.5)^d$ with forcing term $f = 1$, i.e.~we solve
\begin{align}
\label{eq:diffusion_eqn_for_QMC_conv_bis}
\begin{array}{rlc}
-\nabla\cdot(e^{u(\bm{x},\omega)}\nabla p(\bm{x},\omega)) = 1, & \bm{x}\in G =(-0.5,0.5)^d,& \omega\in\Omega,\\
p(\bm{x},\omega) = 0,& \bm{x}\in \partial G,& \omega\in\Omega,
\end{array}
\end{align}
where $u(\bm{x},\omega)$ is a Mat\'ern field sampled by solving equation \eqref{eq:white_noise_SPDE_reminder} over $D=(-1,1)^d$ with $\lambda = 0.25$ and mean and standard deviation chosen so that $\E[e^u]=1$, $\V[e^u]=0.2$. For simplicity, we take the $L^2(G)$ norm of $p$ squared, $P(\omega) = ||p||_{L^2(G)}^2(\omega)$ as our output functional of interest.
\begin{remark}
We do not consider functionals of the Mat\'ern field $u$ and we directly focus on the estimation of $\E[P]$. The reason is that in 2D and 3D the smoothness of $u$ is low and we only observe standard Monte Carlo convergence rates in numerical experiments (not shown).
\end{remark}
We solve equations \eqref{eq:white_noise_SPDE_reminder} and \eqref{eq:diffusion_eqn_for_QMC_conv_bis} with the FEniCS software package \cite{LoggEtAl2012}. For simplicity, we consider the $h$-refinement case and we discretize the equations using continuous piecewise-linear Lagrange elements. As the linear solver for both equations we employ the conjugate gradient routine of PETSc \cite{balay2014petsc} preconditioned by the BoomerAMG algebraic multigrid algorithm from Hypre \cite{hypre}. We declare convergence when the absolute size of the preconditioned residual norm is below a tolerance of $10^{-12}$. We employ the libsupermesh software package \cite{libsupermesh-tech-report} for the supermesh constructions. We use randomly digitally-shifted Sobol' sequences sampled with a custom-built\footnote{Available online at \url{bitbucket.org/croci/mkl_sobol/}.} Python and C wrapper of the Intel\textsuperscript{\tiny\textregistered} Math Kernel Library Sobol' sequence implementation, augmented with Joe and Kuo's primitive polynomials and direction numbers \cite{JoeKuo2008} (maximum dimension $= 21201$). All the algorithms presented (as well as the MLMC methods from \cite{Croci2018}) are available online within the femlmc software package\footnote{Available online at \url{bitbucket.org/croci/femlmc/}.}.
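For reference, a single deterministic solve of \eqref{eq:diffusion_eqn_for_QMC_conv_bis} in 2D for a given coefficient sample, together with the evaluation of $P$, can be written in a few lines of legacy FEniCS. This is only a sketch: the Mat\'ern field realization is assumed to be already available as a \texttt{Function} (here \texttt{u\_h}), its sampling being the subject of the algorithms presented above.
\begin{verbatim}
from dolfin import *

mesh = RectangleMesh(Point(-0.5, -0.5), Point(0.5, 0.5), 64, 64)  # G = (-0.5, 0.5)^2
V = FunctionSpace(mesh, "CG", 1)      # piecewise-linear Lagrange elements

u_h = Function(V)   # assumed: one Matern field realization, sampled as above

p, v = TrialFunction(V), TestFunction(V)
a = exp(u_h)*inner(grad(p), grad(v))*dx    # bilinear form with coefficient e^u
rhs = Constant(1.0)*v*dx                   # forcing term f = 1
bc = DirichletBC(V, Constant(0.0), "on_boundary")

p_h = Function(V)
solve(a == rhs, p_h, bc,
      solver_parameters={"linear_solver": "cg", "preconditioner": "hypre_amg"})

P = assemble(p_h*p_h*dx)                   # output functional ||p||^2_{L^2(G)}
\end{verbatim}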
We construct the mesh hierarchies $\{G_h^\ell\}_{\ell=0}^{L}$ and $\{D_h^\ell\}_{\ell=0}^{L}$ so that, for all MLQMC levels $\ell$, $G_h^\ell$ is nested within $D_h^\ell$, yet $G_h^{\ell-1}$ and $D_h^{\ell-1}$ are not nested within $G_h^{\ell}$ and $D_h^{\ell}$ respectively. We take all the meshes in both hierarchies to be simplicial, uniform and structured for simplicity with mesh sizes $h_\ell = 2^{-(\ell+1)}$ in 1D, $h_\ell = 2^{-1/2}\ 2^{-\ell}$ in 2D and $h_\ell = \sqrt{3}\ 2^{-(\ell + 1)}$ in 3D, although we do not exploit this structure in the implementation.
\begin{figure}[h!]
\centering
\begin{subfigure}{.325\textwidth}
\includegraphics[width=\textwidth, trim={0cm 0cm 0cm 0cm},clip]{./mlqmc_conv_1D_QMC_3-13-0.eps}
\end{subfigure}
\begin{subfigure}{.325\textwidth}
\includegraphics[width=\textwidth, trim={0cm 0cm 0cm 0cm},clip]{./mlqmc_bias_conv_2D_FEM_4-9_QMC_0-0.eps}
\end{subfigure}
\vspace{6pt}
\begin{subfigure}{.325\textwidth}
\includegraphics[width=\textwidth, trim={0cm 0cm 0cm 0cm},clip]{./mlqmc_bias_conv_3D_FEM_2-6_QMC_0-0.eps}
\end{subfigure}
\vspace{-6pt}
\caption{\textit{Logarithm of the absolute value of the expected value of $P_\ell$ and $P_\ell-P_{\ell-1}$ as a function of the MLQMC level $\ell$ in 1D (left), 2D (middle) and 3D (right). We observe a decay rate of $O(h^2)$ in all dimensions. These results are independent of the Haar level chosen as we always compute the exact action of white noise independently of the choice of $\mathscr{L}$.}}
\label{fig:MLQMC_bias_conv}
\vspace{-12pt}
\end{figure}
We first study how the quantities $|\E[P_\ell]|$ and $|\E[P_\ell-P_{\ell-1}]|$ vary as the MLQMC level is increased. Assuming that $u$ can be sampled exactly, we expect the MLMC parameter value $\alpha$ to be $\alpha = \min(\nu+1, p+1)$ \cite{Hackbusch1992}. Numerical results are shown in figure \ref{fig:MLQMC_bias_conv}, where we observe a decay rate of $\alpha=2$ in 1D, 2D and 3D. In 3D we might have expected the rate to be $1.5$ due to the lack of smoothness of the coefficient $u$, which is only in $C^{0.5-\epsilon}(\bar{G})$ for any $\epsilon>0$ \cite{Hackbusch1992}. However, at the discrete level, the FEM approximation of $u$ is in $W^{1,\infty}(G)$ a.s.~and we might be observing a pre-asymptotic regime.
As a next step, we analyse the convergence behaviour of QMC and MLQMC with respect to the number of samples. In the supplementary material (theorem \ref{th:hybrid_QMC_conv}) we show that in the QMC case we expect an initial faster-than-MC convergence rate followed by a standard MC rate of $O(N^{-1/2})$ and that the higher the Haar level is, the later the transition between the two regimes happens. No results regarding the MLQMC case were derived, but we expect a similar behaviour to occur. Furthermore, we would like to determine whether the multilevel technique can improve on QMC by bringing further variance reduction.\vspace{-6pt}
\begin{remark}
Another way of interpreting our hybrid approach is that we are splitting white noise into a smooth part $\W_\mathscr{L}$ and a rough part $\W_R$. QMC is effective at reducing the statistical error coming from the smooth part, but performs poorly when approximating the rough part and we are better off with directly using pseudo-random points. This aspect was experimentally investigated in \cite{Beentjes2018} and can be seen as another instance of the \emph{effective dimensionality} principle mentioned in section \ref{sec:background}.
\end{remark}
\vspace{-6pt}
We draw inspiration from the original MLQMC paper by Giles and Waterhouse \cite{GilesWaterhouse2009} and we study the convergence behaviour of both QMC and MLQMC as the MLQMC level is increased. Results are shown in figure \ref{fig:QMC_conv}. We increase the Haar level with the MLQMC level so that the Haar mesh size is always proportional to the FEM mesh size, but we consider two different strategies: 1) we choose the Haar mesh size to be comparable to the FEM mesh size (figures \ref{fig:QMC_conv1}, \ref{fig:QMC_conv3}, \ref{fig:QMC_conv5}) and 2) we pick the Haar mesh size to be smaller than the FEM mesh size (figures \ref{fig:QMC_conv2}, \ref{fig:QMC_conv4}, \ref{fig:QMC_conv6}). For both scenarios, we compute the variance $\V_\ell$ of the (ML)QMC estimator on MLQMC level $\ell$ by using $M=128$ ($M=64$ in 3D) randomizations of the Sobol' sequence used and we monitor the quantity $\log_2(N_\ell\V_\ell)$ as the number of samples $N_\ell$ is increased. Various colours are used in figure \ref{fig:QMC_conv} to indicate the different sample sizes. The horizontal lines correspond to QMC and the oblique lines to MLQMC.
For standard MC and MLMC, we have $\V_\ell = O(N_\ell^{-1})$, so that $\log_2(N_\ell\V_\ell)$ is $O(1)$. For this reason, if we were observing an MC-like convergence rate, we would see the different coloured lines of figure \ref{fig:QMC_conv} overlapping. The fact that this does not happen means that we are in fact observing a QMC-like rate which is faster than $O(N_\ell^{-1})$ (for the variance). However, it is clear by looking at figures \ref{fig:QMC_conv1}, \ref{fig:QMC_conv3} and \ref{fig:QMC_conv5} that as $N_\ell$ grows the lines get closer to each other, marking a decay to an $O(N_\ell^{-1})$ rate of convergence (for the variance), as predicted by theorem \ref{th:hybrid_QMC_conv} (see supplementary material). By comparing the figures on the left hand side to those on the right hand side, it is also clear that increasing the Haar level delays the occurrence of this behaviour in both the QMC and the MLQMC case. Furthermore, it appears that in the MLQMC case the convergence rate decays sooner than in the QMC case. Finally, we note that MLQMC indeed benefits from the combination of QMC and MLMC: the variance of the MLQMC estimator on any level is always smaller than that of the corresponding QMC estimator for the same number of samples, with large variance reductions on the fine levels.
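The diagnostic quantity plotted in figure \ref{fig:QMC_conv} can be computed as in the schematic sketch below, which assumes a routine \texttt{sample\_P(level, shift, n)} returning the \texttt{n} values of $P_\ell-P_{\ell-1}$ (or of $P_\ell$ in the QMC case) obtained with the \texttt{shift}-th randomization of the Sobol' sequence.
\begin{verbatim}
import numpy as np

def log2_NV(sample_P, level, N, M):
    # One randomized (ML)QMC estimator per digital shift, each averaging N points.
    estimators = [np.mean(sample_P(level, shift, N)) for shift in range(M)]
    V = np.var(estimators, ddof=1)     # estimator variance over the M shifts
    return np.log2(N * V)
\end{verbatim}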
We now focus on the 2D case only for simplicity and see how both QMC and MLQMC perform in practice when applied to equation \eqref{eq:diffusion_eqn_for_QMC_conv_bis}. In figure \ref{fig:MLQMC_conv_1} we study the adaptivity and cost of (ML)QMC as the root mean square error tolerance $\varepsilon$ is decreased for the same FEM hierarchy, but for two different Haar level hierarchies. The top plots in the figure correspond to the choice of Haar meshes with mesh size comparable to the FEM mesh size ($|\square_{\mathscr{L}_\ell}|^{1/2} = 2^{-\ell}$). The results in the bottom plots are instead obtained by fixing the Haar level to $\mathscr{L}_\ell = 6$ for all $\ell$. In both cases we fix the number of randomizations to be $M=32$.
In the plots on the left hand side in figure \ref{fig:MLQMC_conv_1} we see how MLQMC automatically selects the number of samples according to the greedy strategy highlighted in the supplementary material \ref{secSM:multilevel_methods} and in \cite{GilesWaterhouse2009} so as to satisfy the given error tolerance. As in the MLMC case, more samples are taken on coarse levels and only a few on the fine levels. The second Haar level strategy (figure \ref{fig:MLQMC_conv_1}, plot on the bottom left) uses higher Haar levels on the coarse MLQMC levels, which corresponds to a later decay to an $O(N^{-1/2})$ rate (cf.~figure \ref{fig:QMC_conv}). Therefore this strategy requires lower sample sizes (compare with the top left plot).
\begin{figure}[H]
\centering
\begin{subfigure}{.49\textwidth}
\includegraphics[width=\textwidth, trim={0cm 0cm 0cm 0cm},clip]{./qmc_conv_test_1D_FEM_2-9_QMC_3-10.eps}
\caption{}
\label{fig:QMC_conv1}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\includegraphics[width=\textwidth, trim={0cm 0cm 0cm 0cm},clip]{./qmc_conv_test_1D_FEM_2-9_QMC_4-11.eps}
\caption{}
\label{fig:QMC_conv2}
\end{subfigure}\\
\begin{subfigure}{.49\textwidth}
\includegraphics[width=\textwidth, trim={0cm 0cm 0cm 0cm},clip]{./qmc_conv_test_2D_FEM_2-7_QMC_2-7.eps}
\caption{}
\label{fig:QMC_conv3}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\includegraphics[width=\textwidth, trim={0cm 0cm 0cm 0cm},clip]{./qmc_conv_test_2D_FEM_2-7_QMC_5-10.eps}
\caption{}
\label{fig:QMC_conv4}
\end{subfigure}\\
\begin{subfigure}{.49\textwidth}
\includegraphics[width=\textwidth, trim={0cm 0cm 0cm 0cm},clip]{./qmc_conv_test_3D_FEM_2-6_QMC_2-6.eps}
\caption{}
\label{fig:QMC_conv5}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\includegraphics[width=\textwidth, trim={0cm 0cm 0cm 0cm},clip]{./qmc_conv_test_3D_FEM_2-6_QMC_3-66.eps}
\caption{}
\label{fig:QMC_conv6}
\end{subfigure}
\vspace{-6pt}
\caption{\textit{Convergence behaviour of (ML)QMC with respect to the number of samples $N_\ell$ in 1D (a)-(b) and 2D (c)-(d) ($M=128$), and in 3D (e)-(f) ($M=64$). Plots (a),(c),(e) (on the left) are obtained by choosing Haar mesh sizes comparable to the FEM mesh sizes: $|\square_{\mathscr{L}_\ell}| = 2^{-(\ell+1)}$ in 1D (a), and $|\square_{\mathscr{L}_\ell}|^{1/d} =2^{-\ell}$ in 2D (c) and 3D (e). Plots (b),(d),(f) (on the right) are obtained by choosing Haar meshes which are finer than the corresponding FEM meshes: $|\square_{\mathscr{L}_\ell}| = 2^{-(\ell+2)}$ in 1D (b), $|\square_{\mathscr{L}_\ell}|^{1/2} =2^{-(\ell+2)}$ in 2D (d) and $|\square_{\mathscr{L}_\ell}|^{1/3} = 2^{-(\ell+1)}$ in 3D (f). The (approximately) horizontal and oblique lines correspond to QMC and MLQMC respectively. Different colours indicate different sample sizes. On the $y$-axis we monitor (the logarithm of) the product between $N_\ell$ and the (ML)QMC estimator variance on level $\ell$. This product is $O(1)$ when the convergence rate is MC-like and therefore the coloured lines would overlap if an $O(N^{-1/2})$ MC rate is observed. In the figure we observe a pre-asymptotic QMC-like convergence rate that then tails off to a standard MC rate (the lines initially do not overlap, but they get closer and closer as $N_\ell$ is increased). This phenomenon always occurs (cf.~theorem \ref{th:hybrid_QMC_conv}), but it happens later when the Haar level is increased (figures on the right).}}
\label{fig:QMC_conv}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{.8\textwidth}
\includegraphics[width=\textwidth, trim={0cm 0cm 0cm 0cm},clip]{./mlqmc_test_2D_FEM_4-9_QMC_4-9-1.eps}
\end{subfigure}\\
\vspace{6pt}
\begin{subfigure}{.8\textwidth}
\includegraphics[width=\textwidth, trim={0cm 0cm 0cm 0cm},clip]{./mlqmc_test_2D_FEM_4-9_QMC_6const-1.eps}
\end{subfigure}
\caption{\textit{MLQMC convergence for the solution of \eqref{eq:diffusion_eqn_for_QMC_conv_bis}. We take $M=32$ and consider two Haar level hierarchies: $\mathscr{L}_\ell = 2 + \ell$ (top plots) and $\mathscr{L}_\ell = 6$ for all $\ell$ (bottom plots). In the plots on the left we show how the MLQMC algorithm automatically selects the optimal number of samples $N_\ell$ on each level to achieve a given tolerance $\varepsilon$. Note that we have dropped the first mesh of the hierarchy as it is too coarse and it would not bring any significant advantage to the performance of MLQMC (same reasoning as for MLMC, see \cite{giles2015multilevel}). We observe that on the finest levels only one sample is used, making MLQMC equivalent to plain MLMC on these levels. In the plot on the right we compare the efficiency of MLQMC with QMC for different tolerances. MLQMC appears to have a better-than-$O(\varepsilon^{-2})$ total cost complexity and significantly outperforms QMC.}}
\label{fig:MLQMC_conv_1}
\end{figure}
In the plots on the right hand side we show the overall cost of QMC and MLQMC as the root mean square error tolerance $\varepsilon$ is reduced. More specifically, we plot the quantity $\varepsilon^2C_{\text{tot}}$, where $C_{\text{tot}}$ is the total cost. The reason is that the total cost complexity of MLMC for this problem (MLMC parameters: $\beta = 2\alpha = 4$, $\gamma=2$, cf.~supplementary material \ref{secSM:multilevel_methods}) is $O(\varepsilon^{-2})$, so that the factor $\varepsilon^2C_{\text{tot}}$ is $O(1)$ for all $\varepsilon$. The fact that the MLQMC cost line is not horizontal, but decreases as $\varepsilon$ is reduced, shows that the total complexity of MLQMC is better than $O(\varepsilon^{-2})$, i.e.~that our MLQMC algorithm has a better-than-MLMC complexity. This improved complexity stems from the fact that we are observing a QMC-like convergence rate with respect to $N_\ell$.
As $\varepsilon$ is decreased, we expect the cost complexity to decay to an $\varepsilon^{-2}$ rate: for extremely fine tolerances very large sample sizes are required yielding the asymptotic $O(N^{-1/2})$ standard MC rate and harming the overall cost complexity. However, even in this case, the overall MLQMC cost benefits from the pre-asymptotic regime and MLQMC still outperforms MLMC (see figure \ref{fig:MLQMC_conv_2}). Similarly, QMC initially benefits from a faster convergence rate with respect to $N$. As the tolerance is decreased, the QMC rate decays to a standard MC rate and the total cost of QMC starts increasing faster than $O(\varepsilon^{-2})$.
Comparing the costs between the top and bottom of figure \ref{fig:MLQMC_conv_1}, it appears that increasing the Haar level on the coarse MLQMC levels improves the total MLQMC cost while, in the QMC case, decreasing the Haar level harms convergence. This suggests that the Haar level choice has a considerable impact on the overall MLQMC and QMC performance. We investigate this in figure \ref{fig:MLQMC_conv_2}, where we show the total cost of (ML)QMC for different Haar level hierarchies and we compare it with the cost of standard MLMC. The $x$-axis and the black lines in both plots are the same. We present the costs of two versions of MLMC: the black dash-dotted line corresponds to standard MLMC, while the black dashed line corresponds to a MLMC algorithm in which the number of samples are restricted to be in powers of $2$ (this restriction also applies to the MLQMC algorithm we use \cite{GilesWaterhouse2009}). MLQMC outperforms MLMC by a factor of approximately $8$, depending on the Haar level choice.
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\textwidth, trim={0cm 0cm 0cm 0cm},clip]{./multi_mlqmc_conv.eps}
\caption{\textit{MLMC, QMC and MLQMC total computational cost needed for the solution of \eqref{eq:diffusion_eqn_for_QMC_conv_bis} with the same FEM mesh hierarchy as in figure \ref{fig:MLQMC_conv_1}. In the (ML)QMC case, we take $M=32$ and consider different Haar level hierarchies which correspond to different computational costs. The $x$-axis and the MLMC lines are the same in both plots. MLQMC outperforms MLMC which in turn outperforms QMC.}}
\label{fig:MLQMC_conv_2}
\vspace{-20pt}
\end{figure}
Recall the convergence results with respect to the number of samples shown in figure \ref{fig:QMC_conv}: even if the convergence rate decreases as $N_\ell$ increases, it is clear from figure \ref{fig:MLQMC_conv_1} (left) that this only happens on the coarse levels where more samples are needed. Since for problem \eqref{eq:diffusion_eqn_for_QMC_conv_bis} and the FEM discretization chosen we are in the ``good'' case of the MLMC theorem (i.e.~$\beta>\gamma$, cf.~theorem \ref{th:MLMC} in the supplementary material), the multilevel cost is dominated by the sample cost on the coarse levels. We therefore expect to obtain computational gains by increasing the Haar level on the coarse MLQMC levels. At the same time, we do not expect to lose in computational efficiency if we decrease the Haar level on the fine levels as these are not dominating the total cost. Note that in the QMC case there is only one level and the only possible strategy is to keep the Haar level as high as required.
\begin{remark}
\label{rem:MLQMC_cost_no_supermesh_cells}
The results shown in figures \ref{fig:MLQMC_conv_1} and \ref{fig:MLQMC_conv_2} do not account for differences in the cost per sample due to variations in the number of supermesh cells. If the cost of solving the PDE with random coefficients of interest (e.g.~equation \eqref{eq:diffusion_eqn_for_QMC_conv_bis}) dominates over the cost of sampling white noise realizations, these results are still valid as is. Otherwise, extra care must be taken when using Haar meshes which are much finer than the corresponding FEM meshes as this results in a large number of supermesh cells. In the figures this would apply to Haar levels greater than $\mathscr{L}_\ell = \{4,\dots,10\}$ (gray line in the plot on the right) and there is clearly a trade-off: larger $\mathscr{L}$ means faster decay with respect to $N$, but larger costs per sample as well.
\end{remark}
By looking at figure \ref{fig:MLQMC_conv_2} it is clear that our expectations are met. In the QMC case (plot on the left) we see that a small Haar level results in significant cost increase for small tolerances, while for large Haar levels we retain good convergence with respect to $N$ and a cost complexity which looks just slightly worse than $O(\varepsilon^{-2})$. For the tolerances considered, there seems to be little advantage in increasing the Haar level beyond the $\mathscr{L}=8$ threshold. For this specific problem, the optimal strategy would be to increase the Haar level as the mesh is refined and set $\mathscr{L}_{\ell} = \{4,5,6,6,8,8\}$ so that the Haar level is increased only when needed. Generally speaking, we believe that it is never advantageous to use Haar meshes much finer than the FEM meshes (cf.~remark \ref{rem:MLQMC_cost_no_supermesh_cells}).
In the MLQMC case (plot on the right in figure \ref{fig:MLQMC_conv_2}), we note that increasing the Haar level on the coarse levels indeed brings computational advantages (e.g.~compare the gray and pink lines) and decreasing it on the fine levels does not seem to affect the total cost (e.g.~compare the pink with the orange line), as predicted. The optimal strategy therefore consists of increasing the Haar level on the coarse levels and either keeping it constant across the MLQMC hierarchy or possibly even decreasing it (there is little computational advantage in decreasing it if the FEM meshes on the fine levels are already much finer than the Haar mesh). For the MLQMC hierarchy, a good choice seems to fix $\mathscr{L}_\ell = 6$ for all $\ell$ since a larger Haar level would significantly increase the number of supermesh cells (cf.~remark \ref{rem:MLQMC_cost_no_supermesh_cells}).
Overall, our MLQMC strategy outperforms MLMC, which in turn outperforms QMC. Standard MC is always worse than QMC, by up to two orders of magnitude for small $\varepsilon$ (not shown).
\begin{remark}
The optimal Haar strategy is likely to change if the problem to be solved belongs to the other two cases of the MLMC theorem (theorem \ref{th:MLMC} in the supplementary material), i.e.~$\beta=\gamma$ or $\beta<\gamma$. In the first case ($\beta = \gamma$), the total multilevel cost is simultaneously dominated by all levels in the multilevel hierarchy and we expect in this case that the optimal strategy is to use a Haar mesh hierarchy of mesh sizes comparable to the FEM mesh sizes (e.g.~as for the gray line in the right plot of figure \ref{fig:MLQMC_conv_2}). In the latter case ($\beta < \gamma$), the total multilevel cost is dominated by the fine levels. In this case it might be advantageous to keep the Haar level low on the coarse levels and to increase it on the fine levels.
\end{remark}
\begin{remark}
If we are in the $\beta > \gamma$ case, then the Haar level is capped on the fine levels. Therefore, as previously mentioned in remarks \ref{rem:supermesh_complexity} and \ref{rem:MLQMC_sampling_complexity}, the overall white noise sampling complexity is asymptotically linear with respect to the number of cells of the FEM mesh considered even in the case in which we are using standard Haar wavelets (cf.~remark \ref{rem:compactly_supported_Haar_basis}). In the other cases of the MLMC theorem (theorem \ref{th:MLMC} in the supplementary material), it might be detrimental to cap the Haar level and non-standard Haar wavelets are to be preferred in case we cannot afford the additional logarithmic term in the complexity estimate.
\end{remark}
\section{Conclusions}
\label{sec:MLQMC_conclusions}
We presented a novel algorithm to efficiently compute the action of white noise and sample Mat\'ern fields within a QMC and MLQMC framework. This algorithm retains the computational efficiency of the MLMC case \cite{Croci2018} and still enforces the required multilevel coupling in a non-nested mesh hierarchy. The numerical results show that our technique works well in practice, that the convergence orders observed agree with the theory and that MLQMC outperforms MLMC and has a better cost complexity in the pre-asymptotic regime.
We remark that the sampling technique presented extends naturally to
any application in which spatial white noise realizations are needed within a finite element
framework provided that the solution is smooth enough.
An open problem is the derivation of a closed-form expression for the optimal number of samples on each MLQMC level and for the optimal Haar level hierarchy, but we leave this to future research. It would also be interesting to extend the algorithms presented to general higher degree wavelets and domains (cf.~remarks \ref{rem:QMC_generalizations_wavelets} and \ref{rem:QMC_generalizations}). The first enhancement (generic wavelets) could possibly improve the convergence rate with respect to the number of samples, while the second (general domains) would reduce the supermeshing complexity and consequently the white noise sample cost.
\section*{Acknowledgments}
The authors would like to acknowledge useful discussions with Marie Rognes, Abdul-Lateef Haji-Ali, Alberto Paganini and Casper Beentjes. The authors would also like to express their thanks to James R. Maddison for his assistance with the implementation of an interface between libsupermesh and FEniCS.
\printbibliography
\newpage
\section{Introduction}
The Universe is mostly filled with ionised gas.
However, in many astrophysical systems
such as galaxies, stars, and sub-stellar objects,
neutral materials are not only present but coexist with hot ionised gases.
In the interfaces where neutral and ionised media meet,
ions would inevitably encounter neutral atoms or molecules
and the interactions between them would allow for the capture of an electron or several electrons by the ions.
In this charge-exchange (CX) process,
the electron-capture hosts are often in an excited state.
Their subsequent de-excitation gives rise to photon emission,
with the photon energy determined by the energy levels involved in the electronic transition.
X-ray emission lines are produced when the emitting hosts are highly charged ions.
CX emission processes are efficient,
as the interaction cross-section may reach $10^{-15}{\rm cm}^2$
\citep{Tawara1985,McGrath1989,Greenwood2000}.
This is much larger than the Thomson electron-photon scattering cross-section
($\sigma_{\rm T} = 6.65\times 10^{-25}{\rm cm}^2$),
which characterises the common radiative processes occurring in hot astrophysical plasmas
(e.g., accretion flows onto compact objects and relativistic AGN jets)
that generate keV X-rays.
CX emission has been observed in astronomical environments ranging from sub-stellar sized objects,
e.g.\ comets \citep{Lisse1996,Cravens1997,Bodewits2007},
planets \citep{Branduardi-Raymont2004,Dennerl2006,Dennerl2008}
and moons \citep{Johnson1982,McGrath1989}
to systems of galactic-scale structures and beyond,
e.g.\ high-velocity clouds in galactic halos \citep{Lallement2004},
the galactic centre and the galactic ridge \citep{Tanaka1999,Tanaka2002},
and large-scale outflows from star-forming galaxies
\citep{Ranalli2008,Konami2011,Liu2011,Zhang2014}.
Although CX processes are often associated with ionised or partially ionised gases,
a thermally hot medium is not always required.
An example of this is the solar wind and comet interface \citep[see][]{Ip1989}.
The current spectroscopic diagnoses of keV and sub-keV astrophysical plasmas
are mostly based on the radiative recombination emission
associated with collisional ionisation/excitation and photo-ionisation processes
\citep[see][]{Smith2001,Ferland2003,Raymond2005,Kaastra2008,Porquet2010}.
Despite this, CX spectroscopy has been applied in many other disciplinary areas for decades,
e.g.\ in confined fusion reaction and reactor studies
\citep[e.g.][]{Isler1994,Ida2008,Li2016}.
Its use as a diagnostic tool of astrophysical plasmas outside the solar system~\citep[see][]{Wargelin2008,Dennerl2010}
has also recently gained attention.
CX lines are now identified as a useful means
by which the physical conditions in multi-phase outflows from galaxies can be probed
\citep[see][]{Konami2011,Liu2011,Liu2012MNRAS,Zhang2014}.
Large-scale galactic outflows are characteristic of starburst galaxies
\citep[see][for reviews]{Heckman2003,Veilleux2005}.
They are powered by the star-forming processes in galaxies
\citep{Mathews1971,Chevalier1985,Strickland2000,Meiksin2016},
where the frequent supernova explosions continually inject
large amounts of energy into the interstellar medium (ISM).
The out-of-galactic-plane outflow material emits strong continuum and line X-rays
\citep{Bregman1995,Dahlem1998,Wang2001,Strickland2004,Yamasaki2009},
implying that it contains substantial hot, thermal, ionised gases.
The discovery of synchrotron emission halos associated with galactic outflows
in some starburst galaxies, e.g.\ NGC~253 \citep{Carilli1992},
also indicate the presence of energetic non-thermal electrons and cosmic rays.
However, the filament-like structures in H$\alpha$ images \citep[see][]{Lehnert1996,Hoopes1999,Yoshida2011},
suggest that warm, partially ionised gas is interwoven amongst the hot ionised keV X-ray emitting gases.
Colour variations in the optical/UV emission are observed
in the conic outflows (hereafter, wind cone)
of galaxies \citep{Hutton2014,Hutton2015},
which are attributed to differential extinction caused by variations
in the dust properties and/or temperature inhomogeneity along the flows.
There is an ionisation cap at high altitudes ($\sim 10~{\rm kpc}$)
of the outflow in some starburst galaxies, e.g., M82
\citep{Devine1999,Tsuru2007}.
Although cooling effects are arguably very significant in altering the flow dynamics
in the so-called super-winds of starburst galaxies \citep{Heckman2003},
the thermal properties in the outflow
are by no means uniform throughout
(see e.g., the thermal and hydrodynamic profiles shown in \citealt{Chevalier1985}).
Recombination and radiative cooling would compete with ionisation and mechanical (shock) heating
in the wind cone \citep{Hoopes2003}, the transitional cap\footnote{This
transitional cap is not the ionisation cap mentioned above,
but a rough boundary
where there is a transition between the properties
of the inhomogeneities in the galactic wind cone.}, and the region above it.
Moreover, the thermal and dynamical instabilities that subsequently develop
will lead to the fragmentation of the ionised flow into ionised bubbles interspersed with cooler, denser, condensed clumps
\citep[see e.g.][]{Strickland2000,Pittard2003,Cooper2008ApJ,Fujita2009ApJ}.
These ionised and neutral sub-structures are entrained
in the flow
and continue migrating upwards \citep[see][]{Schwartz2004}.
As for the hot ionised bubbles,
some will escape to intergalactic space
where gravitational forces give way to inertial, radiative and buoyancy forces
\citep[cf. the model of bubble buoyancy in Cen A,][]{Saxton2001}, while others may be heated and evaporated as they traverse up the outflow zone.
Galactic outflows are evidently complex, multi-component, multi-phase fluids
\citep[see][]{Ohyama2002,Strickland2002,Melioli2013,Martin-Fernandez2016},
where hot ionised gases, warm partially ionised gases and cool neutral material
intermingle as well as segregate.
While we are able to paint a broad phenomenological picture of galactic outflows,
many questions regarding their finer geometries and physical properties
are still waiting to be answered.
For instance, what is the mass-loading of neutral material and warm gas
in the hot ionised outflows?
What are the filling factors of the ionised and neutral components in flows?
What are the internal geometries of the neutral material and the warm gas?
The answers to these questions are not only essential
to determine the dynamics of the system,
but they also provide insights for wider astrophysical issues
such as the chemical and energy transport processes at work within galaxies, and from galaxies into intergalactic space,
and the role of galactic outflows in shaping the present-day structure of our Universe.
In this work we investigate the interior geometries of
multi-phase multi-component galactic outflows,
utilising the information obtained from the CX spectroscopic analyses.
We determine the surface area to volume ratio of ionised gas and neutral material in a flow,
which characterises the strength of the CX emission,
and we derive the volume partitions of neutral and ionised fluid components.
We show that CX emission information
together with appropriate models for the thermal and dynamical properties
of the ionised gas and neutral material
can strongly constrain the internal geometries of outflow regions in starburst galaxies.
We organise the paper as follows:
In \S 2 we present a two-phase two-zone model
for the outflow region from star-forming galaxies,
with spherical neutral clumps entrenched in an ionised zone
and spherical gas bubbles in a neutral zone,
and we compute the surface-to-volume ratio of the ionised gas.
In \S 3 we relax the model to allow ellipticity in the neutral clumps and the ionised gas bubbles
and, finally, we generalise the model to consider clumps and bubbles with arbitrary aspect ratios.
We compute the corresponding surface-to-volume ratios for these cases.
With these ratios in-hand,
we use values inferred from the charge-exchange (CX) lines
observed in galactic outflows of starburst galaxies
and the timescale over which condensed, neutral clumps are ablated
to set constraints on the geometries and sizes of neutral clumps
and their filling factors within ionised outflows.
In \S4 we discuss our findings
in comparison to previous numerical and observational studies.
We also discuss the astrophysical implications
in the context of the survival of the neutral clumps in the galactic outflows
and the advection of remnant neutral clumps and their stripped gas
into the circumgalactic medium.
In \S 5 we give a brief summary of our findings.
\section{Two-phase two-zone model}
Consider that the outflow region is conical, and is enclosed in a neutral background medium.
The outflow region consists of two zones,
the first being predominantly ionised, and the other being predominantly neutral.
The predominantly ionised zone (hereafter the ionised zone) is located close to the galactic plane,
and the predominantly neutral zone (hereafter the neutral zone) is above the ionised zone at the outskirts.
Both zones are inhomogeneous.
In the ionised zone,
neutral clumps are entrained in an ionised flow;
in the neutral zone, ionised bubbles
are embedded in the neutral outflowing material.
Without losing generality we assume that the conic outflow region has a circular cross-section.
The opening half-angle of the wind cone is $\alpha$, and the height is $h$.
The ionised zone terminates at a height $h' (=\chi h\, , \, \chi \leq1)$,
and above it lies the neutral zone.
The radius of the circular surface where the two zones meet
is $h' \tan\alpha$.
\subsection{Surface area to volume ratios of individual spherical clumps and bubbles}
We first consider that both the dense neutral clumps in the ionised zone
and the ionised bubbles in the neutral zone are spherical.
The spherical assumption will be relaxed in later sections,
where other shapes will be considered.
In reality,
the clumps and the bubbles
would vary in their sizes and densities.
Without losing generality,
we assume that
the sizes and the densities
of the clumps and bubbles
can be represented by some characteristic values.
Then we may assign an effective radius $r_1$
for the neutral clumps
and an effective radius $r_2$ for the ionised bubbles,
and similarly
an effective number density $\rho_1$ for the clumps
and an effective number density $\rho_2$ for the bubbles.
Moreover,
the sizes of the neutral clumps and the ionised gas bubbles
are small, with the condition
${\rm Max}\;\!\!(r_1, r_2) \ll {\rm Min}\;\!(\chi h\tan\alpha,\chi h,h(1-\chi))$
generally satisfied.
A schematic illustration of the two-zone wind cone model
is shown in Fig.~\ref{fig:wind_cone}.
\begin{figure}
\begin{center}
\hspace*{-0.7cm}
\includegraphics[width=10cm]{fig_wind_cone_revised_vectorized.pdf}
\end{center}
\vspace*{-0.2cm}
\caption{A schematic illustration of the two-component, two-phase wind cone model
used in this study.
The ionised gas occupies the bottom region,
and neutral clumps are embedded within the ionised gas.
The region above the ionised gas cone is occupied by predominantly neutral material.
Bubbles of hot ionised gas are present in this neutral region.
The ``cap" is the transitional layer
between the ionised
region at the bottom of the outflow cone
and the neutral material at the top.
The entire wind cone is enclosed by neutral material (not shown).
Although the clumps and bubbles are drawn as spheres in the diagram,
they may assume any shape and various aspect ratios
within the appropriate astrophysical contexts of the calculations.
The opening half-angle of the wind cone is $\alpha$,
and the height of the entire cone is $h$.
The fractional height of the bottom ionised region is specified by the parameter $\chi$.
}
\label{fig:wind_cone}
\end{figure}
The total volume of the outflow region is $V_{\rm w} = (\pi h^3 \tan^2\! \alpha)/3$,
while the volumes of the ionised and neutral zones are $\chi^3 V_{\rm w}$
and $(1- \chi^3) V_{\rm w}$ respectively.
The total volume occupied by the ionised gas in the entire outflow is
\begin{equation}
V_{\rm ion} = V_{\rm w}
\left\{\chi^3\left[1 - \rho_1 \bigg(\frac{4\pi}{3} r_1^3 \bigg)\right]
+(1-\chi^3) \rho_2 \bigg(\frac{4\pi}{3} r_2^3 \bigg) \right\} \ ,
\end{equation}
and the total volume occupied by the neutral material is
\begin{equation}
V_{\rm neu} = V_{\rm w}
\left\{(1-\chi^3)\left[1 - \rho_2 \bigg(\frac{4\pi}{3} r_2^3 \bigg)\right]
+\chi^3 \rho_1 \bigg(\frac{4\pi}{3} r_1^3 \bigg) \right\} \ .
\end{equation}
There are four interfaces between the ionised gases and the neutral material in this configuration:
(i) the interface at which the ionised zone and the neutral zone
within the outflow region meet;
(ii) the interface at which the ionised zone meets with the neutral material enclosing the wind cone;
(iii) the interface between the ionised gas and the dense neutral clumps in the ionised zone; and
(iv) the interface between the neutral material and the ionised gas bubbles in the neutral zone.
If the interfacing surfaces in all these cases are smooth,
the respective areas are simply
$\pi r^2\, (= \chi^2 \pi h^2 \tan^2\!\alpha)$,
$\pi r \sqrt{r^2 +h'^2}\,(= \chi^2 \pi h^2 \tan^2\!\alpha\, {\rm cosec}\,\alpha )$,
$\chi^3 V_{\rm w} (\rho_1 4\pi r_1^2)$ and $(1- \chi^3) V_{\rm w} (\rho_2 4\pi r_2^2)$.
Summing them gives the total surface area of the interfacing boundaries
between the ionised gas and the neutral material in the outflow:
\begin{multline}
A_{\rm ion} = A_{\rm w} \bigg\{ \chi^2
+ \frac{h}{r_1} \frac{\chi^3}{(1+ {\rm cosec}\, \alpha)} \rho_1 \bigg( \frac{4\pi}{3} r_1^3\bigg) \\
+ \frac{h}{r_2} \frac{1- \chi^3}{(1+ {\rm cosec}\, \alpha)} \rho_2 \bigg( \frac{4\pi}{3} r_2^3\bigg)
\bigg\} \ ,
\end{multline}
where the total surface area of the outflow zone is
\begin{equation}
A_{\rm w} = {\pi}\, h^2 \tan^2\alpha (1+ {\rm cosec}\, \alpha) \ .
\end{equation}
Note that $\rho_1 (4\pi r_1^3)/3 \; (=f_1)$ and $\rho_2 (4\pi r_2^3)/3 \; (=f_2)$
are the volume filling fraction of the dense neutral clumps in the ionised zone
and the volume filling fraction of the ionised gas bubbles in the neutral zone respectively.
The relative enhancement of the area-to-volume ratio
for the surface interfacing the ionised gases and the neutral material
in the presence of spherical dense neutral clumps in the ionised zone
and spherical ionised gas bubbles in the neutral zone is therefore
\begin{equation}
\begin{split}
{\cal R} & = \frac{A_{\rm ion}/V_{\rm ion}}{A_{\rm w}/V_{\rm w}} \\
& = \frac{\chi^2(1+{\rm cosec} \alpha) + \frac{h}{r_1} {\chi^3 f_1}
+ \frac{h}{r_2} {(1-\chi^3) f_2}}{
(1+{\rm cosec} \alpha)\left[\chi^3(1-f_1)+(1-\chi^3)f_2\right]} \ .
\end{split}
\end{equation}
In the special case where the outflow is predominantly occupying
the ionised zone (with clumps of neutral material), $\chi =1$ and
\begin{equation}
{\cal R} = \frac{(1+{\rm cosec}\,\alpha)+\frac{h}{r_1} f_1}{(1+{\rm cosec}\,\alpha)(1-f_1)} \ .
\end{equation}
If the volume filling fraction of the dense neutral clumps is insignificant,
i.e.\ $h f_1 \ll r_1$ (which automatically implies $f_1 \ll 1$, although the converse is not always true),
the global geometrical factor $(1+ {\rm cosec}\, \alpha)$ will dominate in the numerator.
Then, we have
\begin{equation}
{\cal R} \approx 1 \ .
\end{equation}
This recovers the result for a fully ionised outflow enveloped by a neutral ambient medium.
If the volume filling fraction of the dense neutral clumps is not negligible,
and if the clump sizes are sufficiently small such that $r_1 \ll h$, then
\begin{equation}
{\cal R} \approx \frac{h}{r_1}\left[ \frac{f_1}{(1+{\rm cosec}\,\alpha)(1-f_1)} \right] \ .
\label{eq:r_factor_obs1}
\end{equation}
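These expressions are straightforward to evaluate numerically. As an illustration, the short Python sketch below implements the general expression for ${\cal R}$ given above; the parameter values in the example are arbitrary and serve only to demonstrate the calculation.
\begin{verbatim}
import numpy as np

def R_spherical(alpha, chi, f1, f2, h_over_r1, h_over_r2):
    # Enhancement of the ionised/neutral interface area-to-volume ratio
    # for spherical neutral clumps (f1) and ionised bubbles (f2).
    g = 1.0 + 1.0/np.sin(alpha)                  # (1 + cosec alpha)
    num = chi**2*g + h_over_r1*chi**3*f1 + h_over_r2*(1.0 - chi**3)*f2
    den = g*(chi**3*(1.0 - f1) + (1.0 - chi**3)*f2)
    return num/den

# e.g. a fully ionised cone (chi = 1) with a 1 per cent clump filling fraction
print(R_spherical(np.radians(30.0), 1.0, 0.01, 0.0, 1.0e3, 1.0e3))
\end{verbatim}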
\subsection{Surface area to volume ratios of individual ellipsoidal clumps and bubbles}
We now relax the spherical assumption for the dense neutral clumps and ionised gas bubbles.
Consider that they are ellipsoids, characterised by three semi-axes $a$, $b$ and $c$.
The volume of these ellipsoids is simply
\begin{equation}
V_{\rm ep} = \frac{4\pi}{3} abc \ .
\end{equation}
The exact surface area of a general ellipsoid involves elliptic integrals that must be evaluated numerically,
and a simple analytical expression in terms of elementary functions is not available in general.
This is impractical when extracting information from observational data,
for which a closed algebraic expression for the surface area is desirable.
An approximate analytic expression with good accuracy is therefore needed.
Hence, we may adopt the Thomsen formula (see Appendix \ref{A:ep})
for the surface area of ellipsoids
\begin{equation}
A_{\rm ep} = 4\pi \left[ \frac{1}{3} (a^p b^p + b^p c^p + c^p a^p)\right]^{1/p} \ ,
\end{equation}
with the index $p \approx 1.6075$.
Then we obtain a surface area to volume ratio for the ellipsoids as:
\begin{equation}
\frac{A_{\rm ep}}{V_{\rm ep}} =
3 \left[\frac{1}{3}\bigg(\frac{1}{a^p} +\frac{1}{b^p} +\frac{1}{c^p} \bigg) \right]^{1/p} \ .
\end{equation}
The surface area to volume ratio of a sphere with a radius $r$ is
\begin{equation}
\frac{A_{\rm sp}}{V_{\rm sp}} = \frac{3}{r} \ .
\end{equation}
Thus, the equivalent radius of a sphere with a volume the same as that of an ellipsoid
is the geometric mean of the three semi-axes of the ellipsoid, i.e.,
\begin{equation}
r = (abc)^{1/3} \ .
\end{equation}
We define the ratio $\Upsilon = A_{\rm ep}/A_{\rm sp}$ of the surface areas of the ellipsoid and the sphere respectively.
For an ellipsoid and a sphere with the same volume,
\begin{equation}
\begin{split}
{\hat \Upsilon} & = \frac{A_{\rm ep}/V_{\rm ep}}{A_{\rm sp}/V_{\rm sp}}
\bigg\vert_{V_{\rm ep}=V_{\rm sp}}\\
& = r \left[\frac{1}{3}\bigg(\frac{1}{a^p} +\frac{1}{b^p} +\frac{1}{c^p} \bigg) \right]^{1/p} \\
& = \left[\frac{1}{3}\bigg(\frac{1}{a^p} +\frac{1}{b^p} +\frac{1}{c^p} \bigg)
\bigg(\frac{1}{a^p}\frac{1}{b^p}\frac{1}{c^p} \bigg)^{-1/3} \right]^{1/p} \ .
\end{split}
\label{eq:ellipsoids}
\end{equation}
The $p$-th power of ${\hat \Upsilon}$
is essentially the ratio between the arithmetic mean and the geometric mean
of $a^{-p}$, $b^{-p}$ and $c^{-p}$, the reciprocals of the $p$-th powers of the three semi-axes.
It is always equal to or larger than 1, and is equal to one only when $a=b=c$ (i.e. a sphere).
${\hat \Upsilon}$ serves as a geometrical correction factor
for the surface area of an object when it deviates from a spherical shape.
Thus, a general formula for the enhancement of the surface area to volume ratio
of the ionised material, taking into account the ellipsoid shapes of the dense neutral clumps
and the ionised gas bubbles, is
\begin{equation}
{\cal R} = \frac{\chi^2(1+{\rm cosec} \alpha) + \frac{h}{r_1} {\chi^3 {\hat \Upsilon}_1 f_1}
+ \frac{h}{r_2} {(1-\chi^3) {\hat \Upsilon}_2 f_2}}{
(1+{\rm cosec} \alpha)\left[\chi^3(1-f_1)+(1-\chi^3)f_2\right]} \ ,
\label{eq:RE}
\end{equation}
where $\{{\hat \Upsilon_1(a_1,b_1, c_1)}, {\hat \Upsilon}_2(a_2, b_2, c_2)\}$
are the geometrical correction factors
for the ellipsoidal dense neutral clumps and the ionised gas bubbles,
and here $r_1 = (a_1 b_1 c_1)^{1/3}$ and $r_2 = (a_2 b_2 c_2 )^{1/3}$.
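A minimal Python sketch of the geometrical correction factor ${\hat \Upsilon}$ and of equation~\ref{eq:RE} (the semi-axes, filling factors and cone parameters below are arbitrary illustrative choices) is:
\begin{verbatim}
import numpy as np

P = 1.6075  # Thomsen index

def upsilon_hat(a, b, c, p=P):
    """Geometrical correction factor for an ellipsoid with semi-axes a, b, c."""
    am = (a**-p + b**-p + c**-p) / 3.0    # arithmetic mean of a^-p, b^-p, c^-p
    gm = (a * b * c) ** (-p / 3.0)        # geometric mean of a^-p, b^-p, c^-p
    return (am / gm) ** (1.0 / p)

def R_two_zone(chi, alpha_deg, h, f1, axes1, f2, axes2, p=P):
    """General two-zone enhancement of the area-to-volume ratio."""
    g = 1.0 + 1.0 / np.sin(np.radians(alpha_deg))
    r1 = np.prod(axes1) ** (1.0 / 3.0)
    r2 = np.prod(axes2) ** (1.0 / 3.0)
    u1, u2 = upsilon_hat(*axes1, p=p), upsilon_hat(*axes2, p=p)
    num = (chi**2 * g + (h / r1) * chi**3 * u1 * f1
           + (h / r2) * (1.0 - chi**3) * u2 * f2)
    den = g * (chi**3 * (1.0 - f1) + (1.0 - chi**3) * f2)
    return num / den

# Illustrative example (lengths in pc): oblate clumps, spherical bubbles
print(R_two_zone(chi=0.7, alpha_deg=30.0, h=3000.0,
                 f1=0.1, axes1=(30.0, 30.0, 5.0),
                 f2=0.2, axes2=(10.0, 10.0, 10.0)))
\end{verbatim}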
\subsection{Effects of the aspect ratios of individual clumps and bubbles}
\subsubsection{Cylindrical approximation}
Now consider that the clumps and bubbles
have shapes with substantial aspect ratios.
Here we adopt a simple approximation that
the elongated and flattened clumps (hereafter filament-like and pancake-like correspondingly)
and bubbles are approximated by cylinders.
The height/length of the cylinder is $t$ and the cross-sectional area is $\pi s^2$
(where $s$ is the radius of the cross-section).
For the filament-like clumps/bubbles, $t \gg s$;
for the pancake-like clumps/bubbles $t \ll s$.
The surface area, $2\pi s(s+t)$, and the volume, $\pi s^2 t$, of such cylinders
give a geometrical correction factor
\begin{equation}
\begin{split}
{\hat \Upsilon} & = \frac{A_{\rm ep}/V_{\rm ep}}{A_{\rm sp}/V_{\rm sp}}
\bigg\vert_{V_{\rm ep}=V_{\rm sp}}\\
& = \frac{2}{3} \left[ \frac{r \;\!(s+t)}{s\;\!t}\right] \\
& = \left(\frac{2}{9} \right)^{1/3} \left[\frac{(s+t)}{s^{1/3} t^{2/3}} \right] \ .
\end{split}
\end{equation}
Now consider an aspect ratio defined as
\begin{equation}
\zeta \equiv \frac{{\rm Max}\,(s,t/2)}{{\rm Min}\,(s,t/2)} \ .
\end{equation}
In terms of this aspect ratio, the geometrical correction factor is
\begin{equation}
{\hat \Upsilon}
= \left(\frac{1}{18} \right)^{1/3} \left[ \frac{n+(3-n)\zeta}{\zeta^{(3-n)/3}} \right] \ ,
\label{eq:n1n2}
\end{equation}
with $n=1$ for filament-like clumps and bubbles
and $n=2$ for pancake-like clumps and bubbles.
In the limit of an extreme aspect ratio, the expression becomes
\begin{equation}
{\hat \Upsilon}
\approx \left(\frac{1}{18} \right)^{1/3}\!\!(3-n)\, \zeta^{n/3} \ .
\end{equation}
In the cylindrical approximation,
the enhancement of the surface area to volume ratio ${\cal R}$
has the same expression as that in equation \ref{eq:RE},
except that the equivalent spherical radius is now
\begin{equation}
r = \left(\frac{3}{2}\right)^{1/3} \frac{t}{2} \ \zeta^{-2/3}\
\end{equation}
for the filament-like clumps and bubbles and
\begin{equation}
r = \left(\frac{3}{2}\right)^{1/3} s \ \zeta^{-1/3}\
\end{equation}
for the pancake-like clumps and bubbles.
It is interesting that, in the limit of extreme aspect ratios for the clumps or bubbles,
i.e. ${\zeta^{-1}\rightarrow 0}$, the factor is therefore
\begin{equation}
\frac{h}{r} {\hat \Upsilon} \approx \frac{2}{3} \left[ \frac{2 h\;\! \zeta }{t} \right]
= \frac{2}{3} \frac{h}{s} \ ,
\end{equation}
for filament-like cylindrical clumps and bubbles and
\begin{equation}
\frac{h}{r} {\hat \Upsilon} \approx \frac{2}{3} \left[ \frac{h\;\! \zeta }{2s} \right]
= \frac{1}{3} \frac{h}{t/2} \ ,
\end{equation}
for pancake-like cylindrical clumps and bubbles.
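A short Python sketch of the cylindrical correction factor of equation~\ref{eq:n1n2} and its extreme-aspect-ratio limit (for illustration only) is:
\begin{verbatim}
def upsilon_cylinder(zeta, n):
    """n = 1 for filament-like and n = 2 for pancake-like cylinders."""
    return ((1.0 / 18.0) ** (1.0 / 3.0)
            * (n + (3 - n) * zeta) / zeta ** ((3 - n) / 3.0))

def upsilon_cylinder_extreme(zeta, n):
    """Limit for zeta >> 1."""
    return (1.0 / 18.0) ** (1.0 / 3.0) * (3 - n) * zeta ** (n / 3.0)

for n in (1, 2):
    print(n, upsilon_cylinder(100.0, n), upsilon_cylinder_extreme(100.0, n))

# At zeta = 1 the cylinder value is (3/2)**(1/3) ~ 1.145, as quoted below.
print(upsilon_cylinder(1.0, 1), (3.0 / 2.0) ** (1.0 / 3.0))
\end{verbatim}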
\subsubsection{Elongated and flattened ellipsoids}
Here we consider the case where the clumps/bubbles are elongated and flattened ellipsoids.
For ellipsoids with a rotational symmetry axis,
the geometrical correction factor ${\hat \Upsilon}$, in equation \ref{eq:ellipsoids} obtained previously,
can be expressed in terms of an aspect ratio, which is now defined as
\begin{equation}
\zeta \equiv \frac{{\rm Max}\,(a,b,c)}{{\rm Min}\,(a,b,c)} \ .
\end{equation}
In the convention $a \geq b \geq c$, $\zeta$ is simply $a/c$.
Setting $a> b = c$ gives
\begin{equation}
{\hat \Upsilon} = \left[ \frac{1}{3} \bigg( \zeta^{-2p/3} + 2 \zeta^{p/3} \bigg) \right]^{1/p}
\end{equation}
for the prolate ellipsoids, and setting $a = b > c$ yields
\begin{equation}
{\hat \Upsilon} = \left[ \frac{1}{3} \bigg(2 \zeta^{-p/3} + \zeta^{2p/3} \bigg) \right]^{1/p}
\end{equation}
for the oblate ellipsoids.
Filaments can be considered as prolate ellipsoids with $\zeta \gg 1$,
and their geometrical correction factor is therefore
\begin{equation}
{\hat \Upsilon} \approx \left( \frac{2}{3}\right)^{1/p} \zeta^{1/3} \ .
\end{equation}
Oblate ellipsoids with $\zeta \gg 1$ would resemble pancakes,
and their geometrical correction factor is
\begin{equation}
{\hat \Upsilon} \approx \left( \frac{1}{3} \right)^{1/p} \zeta^{2/3} \ .
\end{equation}
The effective spherical radii $r$ of the ellipsoids are
\begin{equation}
r (n) = a \zeta^{-(3-n)/3} \ ,
\end{equation}
with $n =1$ for prolate ellipsoids and $n=2$ for oblate ellipsoids.
Hence,
\begin{equation}
\frac{h}{r} {\hat \Upsilon} \approx
\left(\frac{3-n}{3}\right)^{1/p}\! \frac{h\, \zeta}{a} = \left(\frac{3-n}{3}\right)^{1/p}\! \frac{h }{c} \ ,
\end{equation}
for the ellipsoids.
It is also worth noting that $(1/3)^{1/p} \approx 0.5049$ and $(2/3)^{1/p} \approx 0.7771$
for $p = 1.6075$ (see Appendix \ref{A:ep}),
which implies that $1/3<(1/3)^{1/p} < 2/3 < (2/3)^{1/p}$.
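The corresponding prolate and oblate factors can be evaluated with a few lines of Python (illustrative sketch only):
\begin{verbatim}
P = 1.6075

def upsilon_prolate(zeta, p=P):
    """Prolate ellipsoid, a > b = c, zeta = a/c."""
    return ((zeta**(-2*p/3.0) + 2*zeta**(p/3.0)) / 3.0) ** (1.0 / p)

def upsilon_oblate(zeta, p=P):
    """Oblate ellipsoid, a = b > c, zeta = a/c."""
    return ((2*zeta**(-p/3.0) + zeta**(2*p/3.0)) / 3.0) ** (1.0 / p)

# Large-zeta limits: (2/3)**(1/p) zeta**(1/3) and (1/3)**(1/p) zeta**(2/3)
for zeta in (1.0, 10.0, 100.0):
    print(zeta, upsilon_prolate(zeta), upsilon_oblate(zeta))
\end{verbatim}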
Fig.~\ref{fig:head} shows the dependence of ${\hat \Upsilon}$ on $\zeta$
for elongated and flattened clumps/bubbles modelled respectively by cylinders and ellipsoids.
As expected, the ellipsoids give $\hat \Upsilon =1$
and the cylinders give $\hat \Upsilon =(3/2)^{1/3}$ at $\zeta = 1$.
In both the elongated and flattened cases,
${\hat \Upsilon}$ of the ellipsoids and the cylinders have the same dependencies on $\zeta$
when the aspect ratio is sufficiently large.
Moreover, the value of $\hat \Upsilon$ hardly exceeds 12, even when $\zeta$ is as large as 100.
This restricts the values for the ratios of the interfacing surface areas
to the volumes between the ionised and neutral material.
Nevertheless, higher surface area to volume ratios can be obtained
if the interacting boundary is sufficiently rough (e.g.\ as would be the case with fractal structures).
Shapes with small aspect ratios may also emerge in outflows: for instance, clumps with a substantial relative speed with respect to the velocity
of the outflowing plasma
in which they are entrained could develop into shapes resembling a hamburger. We discuss the resulting surface area to volume ratios of such objects in Appendix~\ref{A-one}.
\begin{figure}
\begin{center}
\hspace*{-0.0cm}
\includegraphics[width=8.2cm]{fig_head-2-eps-relabel2}
\end{center}
\vspace*{-0.5cm}
\caption{Plot of the surface area enhancement factor ${\hat \Upsilon}$
against the aspect ratios $\zeta$ of cylindrical and axisymmetric ellipsoidal clumps/bubbles.
The solid lines denote long cylinders (corresponding to $n=1$ in equation \ref{eq:n1n2})
and prolate ellipsoids (corresponding to the filament-like clumps/bubbles), while
the dot-dashed lines denote short cylinders (corresponding to $n=2$ in equation \ref{eq:n1n2})
and oblate ellipsoids (corresponding to the pancake-like clumps/bubbles). }
\label{fig:head}
\end{figure}
\section{Results and discussion}
The general two-zone flow model is characterised by $\chi$, which takes a value between 0 and 1,
with the general formula for the enhancement of the surface area to volume ratio ${\cal R}$ being described by equation~\ref{eq:RE}.
We demonstrate our model by considering the two extreme cases in which the two-zone model reduces to a one-zone system:
(1) when $\chi =0$, in which a neutral background flow is embedded with ionised bubbles;
and (2) when $\chi =1$, in which an ionised flow is embedded with neutral clumps.
A more thorough parameter study and investigation of the two-zone model (i.e. when adopting values of $\chi$ between 0 and 1), is left to future work.
\subsection{Ionised bubbles in a neutral flow}
With the surface area to volume ratio of individual neutral clumps ${\hat \Upsilon}_1$
and ionised bubbles ${\hat \Upsilon}_2$
for specific shapes determined,
we are ready to calculate the enhancement of the surface area to volume ratio ${\cal R}$.
The relative size of the ionised gas dominated region
and the neutral material dominated region in the wind cone is specified by $\chi$.
We first consider the case with $\chi =0$
in which the wind cone is one-zone, with a neutral background flow which is embedded with ionised bubbles.
Putting $\chi = 0$ in equation (\ref{eq:RE}) gives
\begin{equation}
{\cal R} = \left(\frac{h}{r_2} \right) \frac{{\hat \Upsilon}_2}{(1+{\rm cosec} \alpha)} \ .
\label{eq:RE-c0}
\end{equation}
The enhancement factor ${\cal R}$ thus depends only on
the size ($r_2$) and the shape (through ${\hat \Upsilon}$) of the bubbles,
when the opening angle of the wind cone is specified.
Hence, it can be used
to put constraints on the geometrical properties of the bubbles in a neutral outflow.
For instance, in a predominantly neutral flow (i.e.\ with $\chi \approx 0$),
a value of ${\cal R} \approx 50$
derived from CX line spectroscopy in an observation
would imply (from Fig.~\ref{fig:ee00}) that the effective linear size of the ionised bubbles, $r_2$,
relative to the height of the wind cone, $h$, is about 1/200,
provided that the bubbles are sufficiently hot so as to maintain a roughly spherical shape
(cf. the value of $\mathcal{R} \approx 27$
derived for the outflow of M82 from X-ray spectroscopy in section~\ref{sec:calc_r_obs_m82}).
Even in the extreme case that the bubbles are compressed into a pancake shape,
the effective size of the bubbles relative to the height of the wind cone would not exceed 1/10 (cf. Fig.~\ref{fig:ee00}).
It is however unlikely that the ionised bubbles would have such an extremely flat pancake-like shape
as this would require a background flow in the wind cone to have very high speeds,
exceeding the thermal velocities of the ionised gas in the bubbles (i.e. ram pressure dominated),
yet still maintaining a uniform flow pattern with the absence of turbulence.
When $\chi = 0$,
${\cal R}$ is independent of $f_2$, the filling factor of the ionised gas bubbles,
and hence the fractional volumes of the neutral material and the ionised gas
are unconstrained by ${\cal R}$.
In other words, the total mass loading in the neutral outflow embedded with ionised bubbles
cannot be determined using the information obtained from CX line spectroscopy alone.
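As a simple worked inversion of equation~\ref{eq:RE-c0} (Python; the chosen ${\cal R}$, $\alpha$ and ${\hat \Upsilon}_2$ values are indicative only):
\begin{verbatim}
import numpy as np

def h_over_r2(R, alpha_deg, upsilon2=1.0):
    """Invert the chi = 0 expression: h/r2 implied by an observed R."""
    g = 1.0 + 1.0 / np.sin(np.radians(alpha_deg))
    return R * g / upsilon2

print(h_over_r2(50.0, 30.0, upsilon2=1.0))    # spherical bubbles: 150
print(h_over_r2(50.0, 30.0, upsilon2=10.0))   # pancake-like bubbles: 15
\end{verbatim}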
\begin{figure}
\begin{center}
\hspace*{-0.0cm}
\includegraphics[width=8.2cm]{fig_e00-relabel}
\end{center}
\vspace*{-0.0cm}
\caption{The relative enhancement of the surface area to volume ratio ${\cal R}$
of a neutral outflow with embedded ionised gas bubbles
as a function of bubble size, in terms of $h/r_2$,
with respect to the height $h$ of the wind cone (calculated from equation~\ref{eq:RE-c0}).
The opening half-angle of the conic wind zone $\alpha = 60^\circ$ (solid curves) and $30^\circ$ (dashed curves),
and the parameter $\chi =0$.
The curves from top to bottom in each panel
correspond to ${\hat \Upsilon} = 10,$ 9, 8, 7, 6, 5, 4, 3, 2, and 1 respectively.
The region below the lowermost curve (with ${\hat \Upsilon} = 1$) in each case is forbidden
as no physical object has a surface area to volume ratio smaller than that of a sphere.
A horizontal (dashed) line with ${\cal R}=50$ is shown as a reference.
This value of ${\cal R}$ is of the same order as that derived from the CX line spectroscopy (${\cal R}= 27 $) of the superwind of the starburst galaxy M82 -- see section~\ref{sec:calc_r_obs_m82} and~\citet{Zhang2014}.}
\label{fig:ee00}
\end{figure}
\subsection{Neutral clumps in an ionised flow}
\subsubsection{Filling factor}
\begin{figure}
\begin{center}
\hspace*{-0.0cm}
\includegraphics[width=\columnwidth]{fig_zpa001_relabel} \\
\includegraphics[width=\columnwidth]{fig_zpa003_relabel} \\
\includegraphics[width=\columnwidth]{fig_zpa010_relabel}
\end{center}
\caption{The relative enhancement of the surface area to volume ratio ${\cal R}$
in the presence of dense neutral clumps entrenched in
an ionised outflowing fluid
as a function of the filling factor of the clumps $f_1$
for ${\hat \Upsilon} =1$ (perfectly spherical clumps),
3 (very long filament-like, or moderately flat pancake-like clumps)
and 10 (extremely flat pancake-like clumps),
shown in the panels from top to bottom.
The opening half-angle of the conic wind zone $\alpha = 60^\circ$ (solid curves) and $30^\circ$ (dashed curves),
and the parameter $\chi =1$.
In each panel, the set of curves from top to bottom
correspond to an increase in the linear size of the dense clumps,
with ${\rm Log} (h/r_1) =$ 3, 2.6667, 2.3333, 2, 1.6667 and 1.3333 respectively. }
\label{fig:vpa}
\end{figure}
\begin{figure}
\begin{center}
\hspace*{-0.0cm}
\includegraphics[width=\columnwidth]{fig_f01z_relabel} \\
\vspace*{-0.2cm}
\includegraphics[width=\columnwidth]{fig_f10z_relabel} \\
\vspace*{-0.2cm}
\includegraphics[width=\columnwidth]{fig_f50z_relabel}
\end{center}
\vspace*{-0.2cm}
\caption{The relative enhancement of the surface area to volume ratio ${\cal R}$
in the presence of dense neutral clumps entrained within
an ionised outflow as a function of the parameter $h/r_1$,
which specifies the size of the neutral clumps with respect to the height of the wind cone.
Panels from top to bottom correspond to the filling factors of the clumps
$f_1 = $ 0.01, 0.1 and 0.5.
The opening half-angle of the conic wind zone $\alpha = 60^\circ$ (solid curves) and $30^\circ$ (dashed curves),
and the parameter $\chi =1$.
The curves from top to bottom in each panel
correspond to ${\hat \Upsilon} = 10,$ 9, 8, 7, 6, 5, 4, 3, 2, and 1 respectively.
The region below the lowermost curve (with ${\hat \Upsilon} = 1$) in each case is forbidden
as no object has a surface area to volume ratio smaller than that of a sphere.
The two vertical lines represent $h/r_1 = 250$ and $250/(10)^{1/3}$.
The former corresponds to a spherical clump with a radius $r = 40\;\! {\rm pc}$ (i.e.\ $\hat \Upsilon = 1$)
for a wind cone with a height $h = 10~{\rm kpc}$;
the latter corresponds to the equivalent radius of a long ellipsoidal-filament with an aspect ratio $\zeta = 10$
and a cross-sectional radius of $40~{\rm pc}$. }
\label{fig:f00}
\end{figure}
We next consider the case where $\chi =1$,
i.e. when the wind cone is one-zone, predominantly filled with ionised gas entrenched with neutral clumps.
There are two kinds of interfacing surfaces between the ionised gas and the neutral material in the flow
where CX emission lines could arise:
(i) the boundary surface of the wind cone where the ionised gas meets the surrounding neutral material,
and (ii) the surface layer of the neutral clumps carried by the flow.
By setting $\chi =1$ in equation (\ref{eq:RE}), we obtain
\begin{equation}
{\cal R} = \frac{(1+{\rm cosec} \alpha) + \frac{h}{r_1} { {\hat \Upsilon}_1 f_1} }
{(1+{\rm cosec} \alpha)(1-f_1)} \ .
\label{eq:RE-c1}
\end{equation}
The enhancement of the surface area to volume ratio ${\cal R}$
is now determined by the volume filling factor of the neutral clumps
as well as their individual sizes and geometries.
Fig.~\ref{fig:vpa} shows ${\cal R}$ as a function of $f_1$, the filling factor of the neutral material in the flow
for various effective linear sizes of the clumps $r_1$, expressed in terms of $(h/r_1)$.
Three clump geometries are shown in the panels from top to bottom:
spherical clumps (${\hat \Upsilon} = 1$),
filament-like and pancake-like clumps which have large aspect ratios (${\hat \Upsilon} = 3$ and 10 respectively).
Fig.~\ref{fig:f00} shows ${\cal R}$ as a function of the effective linear clump size,
in terms of $(h/r)$, for various ${\hat \Upsilon}$ values.
The panels from top to bottom correspond respectively
to the cases with filling factors of $f_1 = 0.01$, 0.1 and 0.5.
The general trends shown in Fig.~\ref{fig:vpa} and \ref{fig:f00} are that
smaller effective neutral clump sizes $r_1$ and larger ${\hat \Upsilon}$ values for individual clumps both give larger values of ${\cal R}$.
Moreover, ${\cal R}$ increases with the clump filling factor $f_1$.
An observed value of ${\cal R}$ would therefore yield constraints on the clump filling factor $f_1$
as well as the clump geometry\footnote{
We note that, in addition to the large observational and model uncertainties in derived values of $\mathcal{R}$
(c.f. section~\ref{sec:calc_r_obs_m82}),
there is some degeneracy between the shape and size of the clumps
and the filling factor $f_1$ and their impact on $\mathcal{R}$.
Unless these degeneracies can be broken,
the constraining power of $\mathcal{R}$ on the internal multi-phase structure of outflows remains limited.
This issue would be exacerbated by complex (e.g. fractal-like) geometry, which
may occur at the interfaces
between different phases of material
when Kelvin-Helmholtz and/or Rayleigh-Taylor instabilities
are present in an outflow.}
(i.e. the effective linear size in terms of $(h/r)$ and the aspect ratio as parametrised by $\hat \Upsilon$).
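The trends shown in Fig.~\ref{fig:vpa} and Fig.~\ref{fig:f00} can be reproduced with a minimal Python sketch of equation~\ref{eq:RE-c1} (parameter values chosen for illustration only):
\begin{verbatim}
import numpy as np

def R_chi1(f1, h_over_r1, upsilon1, alpha_deg):
    """chi = 1: neutral clumps entrained in an ionised flow."""
    g = 1.0 + 1.0 / np.sin(np.radians(alpha_deg))
    return (g + h_over_r1 * upsilon1 * f1) / (g * (1.0 - f1))

# R increases with f1, with h/r1 (i.e. smaller clumps) and with upsilon1
for f1 in (0.01, 0.1, 0.5):
    print(f1, R_chi1(f1, h_over_r1=1000.0, upsilon1=1.0, alpha_deg=60.0))
\end{verbatim}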
\subsubsection{Characteristic clump size}
In an outflow where the filling fraction of the neutral clumps is non-negligible,
the characteristic clump size
can be estimated using equation~\ref{eq:r_factor_obs1}, i.e. as:
\begin{equation}
\langle r_1 \rangle \approx \frac{h}{\cal R}\left[ \frac{f_1}{(1+{\rm cosec}\,\alpha)(1-f_1)} \right]
\label{eq:spherical_r}
\end{equation}
where $h$, the height of the outflow cone, and $\alpha$, the opening angle of the cone, can be determined by imaging observations.
${\cal R}$ can be derived from the relative strength of the CX lines.
The volume fraction of the ionised gas can in principle be obtained from
X-ray spectroscopic measurements (assuming a sensible model for the emission process),
and hence the volume filling fraction of clumps $f_1$ can be estimated.
This gives sufficient information for the characteristic size $\langle r_1 \rangle$ of the neutral clumps
entrenched within the ionised gas outflow to be constrained.
If filament-like structures are predominant within the outflow,
extreme prolate ellipsoids (or cylinders)
may be used as a more suitable approximation
for their morphology in the context of
parametrising their surface to volume ratios.
In this case, the characteristic clump size
is modified according to equation~\ref{eq:RE-c1}, thus yielding
\begin{equation}
\langle r_1 \rangle \approx \frac{h {\hat \Upsilon}_1 f_1}{(1+{\rm cosec}\,\alpha) \left[ {\cal R} (1-f_1) -1 \right] } \ ,
\label{eq:filament_r}
\end{equation}
for which the extra parameter ${\hat \Upsilon}_1$
may be estimated from the clump morphology (prolate ellipsoids or cylinders),
e.g. from infra-red (IR) emission which more directly reveals the structure of the cold material in the flow.
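A sketch of these inversions (Python; the input values are hypothetical examples rather than measurements) is:
\begin{verbatim}
import numpy as np

def r1_spherical(R, f1, h, alpha_deg):
    """Characteristic clump size for spherical clumps (same units as h)."""
    g = 1.0 + 1.0 / np.sin(np.radians(alpha_deg))
    return (h / R) * f1 / (g * (1.0 - f1))

def r1_nonspherical(R, f1, h, alpha_deg, upsilon1):
    """Characteristic size when upsilon1 is estimated from the morphology."""
    g = 1.0 + 1.0 / np.sin(np.radians(alpha_deg))
    return h * upsilon1 * f1 / (g * (R * (1.0 - f1) - 1.0))

# Hypothetical example: R = 27, f1 = 0.1, h = 3 kpc, alpha = 30 deg
print(r1_spherical(27.0, 0.1, 3.0, 30.0))                    # kpc
print(r1_nonspherical(27.0, 0.1, 3.0, 30.0, upsilon1=3.0))   # kpc
\end{verbatim}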
\subsubsection{Calculating $\mathcal{R}$ for the M82 superwind}
\label{sec:calc_r_obs_m82}
The {\it XMM-Newton} RGS spectrum of the M82 superwind
cannot be explained using an optically-thin thermal-plasma model,
but a good fit can be found
if CX emission is taken into account.
The CX emission would contribute
about one-quarter of the flux in the RGS wavelength range
(6$-$30 \AA),
and the hot plasma would have a temperature of $\sim$0.6 keV
and solar-like metal abundances~\citep{Zhang2014}.
Given that CX processes arise at the surface of the neutral material,
the observed CX component allows us to determine
the effective area of the interface $A_{\rm ion}$
between the hot plasma and the neutral gas.
The normalisation of the ACX model
\citep{Smith2012AN, Smith2014ApJ}\footnote{Also detailed online, at~\url{http://www.atomdb.org/CX/}.}, which represents the CX component, suggests a total flux of ions onto the M82 wind's interface area, $S_{\rm int}$, of
\begin{equation}
\int_{S_{\rm int}} {\rm d}S \ {n_{\rm H}}\;\! v
= 3.2\times10^{51}\,\rm{s^{-1}} \ ,
\end{equation}
with a measurement uncertainty of 15\%, plus an assumed 30\% ACX model uncertainty. Here $n_{\rm H}$ is the hydrogen number-density of
the hot plasma,
and $v$ is the relative velocity between the hot gas and neutral material.
Assuming that the hot plasma is homogeneous and interacts with the neutral material at a single characteristic velocity $v$, it follows that $A_{\rm ion}=\int {\rm d}S$.
The density $n_{\rm H}$ of the hot plasma can be inferred from the normalisation of the APEC (Astrophysical Plasma Emission Code;~\citealt{Foster2012ApJ}) model, given by
\begin{equation}
10^{-14}\int_{\rm wc} {{\rm d} V_{\rm ion}} \
\frac{n_{\rm e}n_{\rm H}}
{4\pi D^2}=0.0062 \ ,
\end{equation}
with an uncertainty of 20\%.
Here, "wc" denotes the region over which the integral is performed
(i.e. the wind cone), $V_{\rm ion}$ is the volume of the ionised gas in the wind,
and $n_{\rm e}$ is the electron density
(with $n_{\rm e}\simeq 1.2\;\! n_{\rm H}$ for solar-like abundance).
We set the distance to M82 $D=3.52\,\rm{Mpc}$ \citep{Tully2009AJ}.
For simplicity,
we consider that all the ionised gas is contained by
the conical outflow,
which has a half-cone opening angle $\alpha \approx 30^{\circ}$
and a height $\chi h\sim$3~kpc,
and ignore the volume filling fraction
of the dense neutral material inside the ionised outflow.
Then,
\begin{equation}
V_{\rm ion} =
\int_{\rm wc} {\rm d}V_{\rm ion}\approx
\frac{1}{3}\left(2\pi \chi^3h^3{\rm tan}^2\alpha\right)
=5.5\times10^{65}\,\rm{cm^3} \ ,
\end{equation}
and the density of plasma is estimated to be $n_{\rm H}= 0.04\,\rm{cm^{-3}}$.
While the hot ionised plasma has a velocity $\gtrsim 10^3~\text{km}\;\!\text{s}^{-1}$
in the outflow,
the entrained cool gas was found to move substantially slower
(at $\sim 500 ~{\rm km}\;\!{\rm s}^{-1}$; see \citealt{Melioli2013}).
An approximation of the relative velocity is
then $v\simeq 500~{\rm km}\;\!{\rm s}^{-1}$ $(\pm 30\%)$,
which is comparable to the sound speed of the hot plasma.
It follows that an estimate of the total surface area of the interfacing boundaries is
\begin{equation}
A_{\rm ion}=\int_{\rm wc} {\rm d}S = 1.6 \times10^{45}\,\rm{cm^2} \ ,
\end{equation}
when integrated across all interfacing surfaces in the wind cone.
Along the axis of the superwind, \ion{H}{i} was detected up to 10 kpc ($h_1$) toward
the south and beyond 5~kpc ($h_2$) to the north \citep{Martini2018ApJ}\footnote{There are indications that the northern ionised wind has broken the 3~kpc shell, and could reach as far as the 11.5~kpc `cap'. The wind density is low, and its contribution to the soft X-ray emission is much less than that of the southern wind~\citep{Zhang2014}.}.
Taking this as the entire wind bi-cone, the total surface area of the ionised wind is then
\begin{equation}
A_{\rm w}=
\frac{\pi}{3} \left(h_1^2+h_2^2 \right)
\;\!{\tan}^2\alpha \;\! \left(1+{\rm cosec}\,\alpha\right)
=1.25\times10^{45}\;\! {\rm cm}^2 \ ,
\end{equation}
and the total volume is
\begin{equation}
V_{\rm w}= \frac{\pi}{3} \left(h_1^3+h_2^3\right)\;\! {\rm tan}^2\alpha =1.15\times10^{67}\,\rm{cm^3} \ .
\end{equation}
The relative enhancement of the area-to-volume ratio then follows as
\begin{equation}
\mathcal{R}=\frac{A_{\rm ion}/V_{\rm ion}}{A_{\rm w}/V_{\rm w}}\simeq 27 \ .
\end{equation}
The uncertainty in this value is dominated
by that of the two volumes,
$V_{\rm ion}$ and $V_{\rm w}$.
The value of $\mathcal{R}$
could therefore be off by a factor of a few,
given that X-ray emission
from the hot ionised gas is dominated
by the southern wind cone in M82,
which leads to uncertainties in the hot gas density of 50\% and interfacing area of 60\%
(neglecting the assumed 30\% ACX model uncertainty).
We also note that
the filling factor of the neutral clumps
has been ignored in the above estimation,
which would in turn lead
to a value of $\mathcal{R}$
smaller than the value expected
when the filling factor of the neutral clumps
is taken into account.
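For transparency, the numerical chain of this subsection can be summarised in a short Python sketch (a re-computation for illustration only, using the input values quoted above; small differences from the quoted intermediate numbers arise from rounding):
\begin{verbatim}
import numpy as np

kpc, Mpc = 3.086e21, 3.086e24          # cm

alpha = np.radians(30.0)               # half-opening angle of the cone
chi_h = 3.0 * kpc                      # height of the ionised cone
D = 3.52 * Mpc                         # distance to M82
apec_norm = 0.0062                     # APEC normalisation
ion_flux = 3.2e51                      # s^-1, from the ACX normalisation
v_rel = 5.0e7                          # cm s^-1 (~500 km/s)
h1, h2 = 10.0 * kpc, 5.0 * kpc         # HI extents (south, north)

# Ionised (bi-)cone volume and hot-gas density
V_ion = (2.0 * np.pi / 3.0) * chi_h**3 * np.tan(alpha)**2
n_H = np.sqrt(apec_norm * 1e14 * 4.0 * np.pi * D**2 / (1.2 * V_ion))

# Interfacing area from the CX (ACX) normalisation
A_ion = ion_flux / (n_H * v_rel)

# Smooth wind-cone area and volume (same geometrical convention as the text)
g = 1.0 + 1.0 / np.sin(alpha)
A_w = (np.pi / 3.0) * (h1**2 + h2**2) * np.tan(alpha)**2 * g
V_w = (np.pi / 3.0) * (h1**3 + h2**3) * np.tan(alpha)**2

R = (A_ion / V_ion) / (A_w / V_w)
print(V_ion, n_H, A_ion, R)   # recovers R ~ 27 to within rounding
\end{verbatim}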
\section{Properties of charge-exchange line emitting clumps}
\subsection{Stripped gas and H$\alpha$ filaments}
\label{sec:halpha_filaments}
CX processes take place at the boundary between the gases in the neutral and the ionised phases,
and therefore CX lines carry information
about how the cool neutral gas interacts with the hot ionised gas in galactic outflows.
Observations have shown H$\alpha$ emitting filamentary structures
in the outflows of starburst galaxies
(see e.g. the HST H$\alpha$ images of M82 -- \citealt{Mutchler2007PASP}).
The H$\alpha$ trails are even more apparent in images
obtained by the William Herschel Telescope (WHT)\footnote{See \url{http://www.iac.es/telescopes/IAM/2011/73_may11_m82.jpg.html}
for WHT RGB + H$\alpha$ image.}.
Structures are seen
in the {\it Chandra} X-ray image of the M82 outflow, at 0.3$-$1.1 keV and 0.7$-$2.2 keV
\citep{Kilgard2011AAS}\footnote{The multi-band X-ray images are also available here: \url{http://www.chandra.harvard.edu/photo/2011/m82/}.}.
`Pancake'-like morphologies
were present at high flow altitudes in the 0.7$-$2.2 keV image.
These X-rays were due, at least in part,
to CX processes
operating in the interface between ionised and neutral material in the outflow.
CX lines, including those from \ion{C}{vi}, \ion{N}{vi}, \ion{N}{vii}, \ion{O}{viii}
and (most importantly) \ion{O}{vii},
are present in the 0.3$-$1.1 keV band,
while \ion{O}{viii} is present also in the 0.7$-$2.2 keV band \citep[see][]{Smith2012AN}.
Although spectroscopic fits to the lines in the RGS spectrum of M82
indicate multiple origins for the \ion{O}{vii} triplet,
very substantial contributions must come
from the CX process.\footnote{
The line ratios of \ion{O}{vii} triplets were found to be different
at different locations in M82.
If their physical origins were attributed
solely to the same process, we would expect the same line ratios
\citep[see observations and discussions in][]{Liu2012MNRAS}. }
\citet{Liu2011} estimated that CX processes contribute
90\% to the \ion{O}{vii} K$\alpha$ triplet lines
(with substantial but lower fractions for other species)
in the {\it XMM-Newton} RGS spectrum \citep[see also][]{Liu2012MNRAS}.
Note that the presence of \ion{O}{i} (63$\mu$m) and \ion{C}{ii} (158$\mu$m) line emission
from M82's outflow cones
suggests the presence of diffuse \ion{H}{i} gas, i.e. cold neutral gas
with temperature below $10^4~{\rm K}$,
while detection of [\ion{O}{iii}] emission
(88$\mu$m)
would suggest this diffuse \ion{H}{i}
is interspersed with \ion{H}{ii} regions \citep{Leroy2015ApJ},
i.e. hot ionised gas with a temperature of $10^6$~K or even above
\citep[see][]{Franceschini2000astroph,Contursi2013A&A}.
Moreover, CO emission is also observed around M82.
This comes from cold, dense molecular gas
\citep[e.g.][]{Franceschini2000astroph, Draine2011book}, so
the CO emission is presumably associated with clumps
whose inner cores are composed of cold gas
harbouring neutral species,
or species in low ionisation states.
This emission is clumpy in morphology
and extended along the direction of the outflow cones~\citep{Leroy2015ApJ}.
Observations of other nearby starburst galaxies
also show evidence of multi-phase clumpy outflows.
For instance, CX X-ray emission
(with a strong forbidden line in the \ion{O}{vii} K$\alpha$ triplet)
has been detected in NGC~253, M51, M82, M61, NGC~4631
and the Antennae galaxies.
There is also evidence of a combination of thermal and CX X-ray emission
in M94 and NGC 2903~\citep{Liu2012MNRAS, Wang2012AN}.
\begin{figure}
\begin{center}
\hspace*{-0.8cm}
\includegraphics[width=\columnwidth]{fig_clump_survival_vectorized.pdf}
\end{center}
\vspace*{-0.25cm}
\caption{Schematic illustration of a cold clump in a hot outflow showing the process by which material is stripped from the surface
in contact with the hot outflowing gas.
The sharp boundary between the clump
and the surrounding gas is the main source of CX emission.
Trails of semi-ionised warm gas form filamentary structures
ahead of the clump, comprising of gas torn off
from the clumps by the surrounding flow.
These warm gas filaments emit in H$\alpha$, and are heated by conduction and ionisation as they are exposed to the outflow environment when trailing away from the clump.
The clump itself is dense and self-shields against much of the external ionising radiation, with heating only operating effectively by conduction over much longer timescales.
}
\label{fig:clump_survival}
\end{figure}
This filamentary geometry appears to contradict
our deduction (from the CX lines)
that the neutral clumps have flattened oblate geometries,
i.e. shapes
resembling a hamburger, or even a pancake.
This apparent dilemma can be resolved as follows.
The H$\alpha$ filaments are not the CX line emitters.
They are trails of gas torn off from neutral clumps
by the fast flowing gas around them
\citep[see simulations of][]{Suchkov1994ApJ, Cooper2008ApJ, Cooper2009ApJ,
Scannapieco2015ApJ, BandaBarragan2016MNRAS,
Bruggen2016ApJ, BandaBarragan2018MNRAS, Goldsmith2018MNRAS}.
The stripped material from the cooler CX emitting clumps,
when warmed appropriately,
emits H$\alpha$ lines and appears as H$\alpha$ filaments.
Moreover, dense clumps are
more likely to survive
in the presence of
strong irradiation in a galactic outflow environment
than geometrically thin filaments
-- see Appendix~\ref{A:clumps_vs_filaments}.
\subsection{Survival of clumps}
An approximate pressure balance
between the gases of the neutral phase and the ionised phase
implies that the neutral clumps, which are cooler, should be denser.
While the ionised gases are accelerated mechanically or radiatively
to form a galactic outflow,
the neutral clumps (which have larger inertia)
are not accelerated very efficiently.
A pressure is therefore exerted on the slow-moving clumps
by the faster moving ionised gas from below,
causing compression of the clump material.
The shear between the fast-moving ionised gas and the slow-moving clumps
would also cause stripping, especially at the edges of the clumps
(see the schematic illustration in Fig.~\ref{fig:clump_survival}).
The stripped material naturally has a filamentary structure
but is less dense than the cooler clumps.
The filaments are thus thinner optically as well as geometrically.
For H$\alpha$ emission to be produced,
the material must be warm, with temperatures of around 10$^4$~K or slightly higher,
but not fully ionised.
Although the filaments are not fully ionised initially,
they are in direct contact with the hot ionised gas
and fully exposed to the ambient radiation.
The filaments are therefore heated by particle collisions, by starlight and by
down-scattered X-ray radiation from the ionised gas\footnote{
The H$\alpha$ emission is unlikely to be caused by the CX X-rays because the ionising (UV/X-ray) continuum due to the hot gas and stellar/SN emission carries much more energy and has more photons than the CX lines.},
as well as by a certain degree of thermal conduction (see, e.g. \citealt{Draine2011book} and also section~\ref{sec:entrained_clumps}).
In contrast,
the material in the inner core of a dense clump
is shielded from UV radiation and soft X-rays.
The timescale for conduction heating is also long
compared to the galactic outflow timescale
(see section~\ref{sec:entrained_clumps}),
so a shielded clump would remain cold and would not be a strong emitter in
X-rays or the UV, or even in the optical.
The presence of such clumps may, however, be traced by
the trails of material that is stripped from them.
A possible origin of these CX emitting clumps
is the dense interstellar medium (ISM) of the galaxy
entrained into the wind cone
and accelerated by the flow
\citep[see e.g.][]{Heckman1990ApJS, Cooper2008ApJ, Fujita2009ApJ, Sharma2012ApJ},
which pushes
on the lower surface of clumps\footnote{For the prescription of an outflow adopted in this model, the dominant pressure acting on these clouds is ram pressure, being around an order of magnitude greater than thermal pressure, with
$P_{\rm ram}/P_{\rm th} \sim 10$.}.
There are several possible consequences:
(i) the clump is accelerated in the direction of the flow,
but it traverses the outflow cone before it is transported to large altitudes;
(ii) the clump is entrapped
and accelerated sufficiently to become entrained in the plasma fluid;
(iii) the clump is evaporated before it can penetrate sufficiently far into the outflow cone for its dynamics to be significantly affected.
We shall assess each of these possibilities in turn.
\subsubsection{Acceleration and entrainment}
\label{sec:acceleation_clumps}
In M82, filaments and clump-like structures move along with an outflow
with velocities ${\tilde v}_{\rm clump} \sim 600~\text{km}~\text{s}^{-1}$~\citep[see][]{Strickland2009}.
The CX line emitters
are embedded structures in an outflow
and would have similar velocities.
Thus, we may assume ${v}_{\rm clump} \approx {\tilde v}_{\rm clump}$.
Consider that a clump entering the outflow is accelerated
to the observed velocities of those entrained in the flow.
With an initial zero upward velocity,
the velocity difference between the clump
and the outflowing fluid
$\Delta v$
is specified by the outflow velocity alone.
For definiteness,
we adopt the mid-point of the range suggested for M82,
i.e. we set $v_{\rm flow} = 1,800~\text{km}~\text{s}^{-1}$.
The fluid plasma outflow velocity is expected to rapidly reach
terminal velocity at low altitudes~\citep[see e.g.][]{Chevalier1985},
so the ram pressure acting on an outflow clump
could be approximated as
$P_{\rm ram} = \rho_{\rm flow} v_{\rm flow}^2$ throughout the outflow cone.
The rate of acceleration would decrease
as the clumps evaporate
(which reduces their intercepting cross-section),
while the velocity offset between the clump and the hot surrounding gas
decreases as the clump is accelerated.
These complications are not essential for the illustrative estimate here,
for which only a first, upper estimate of the timescale is required.
Assuming a roughly uniform acceleration,
we have
$a \sim \sigma_{\rm clump}\;\!P_{\rm ram}/M_{\rm c}$,
where $M_{\rm c} \sim 4\pi \;\! n_{\rm H} \;\! m_{\rm H} r'^3/3$
is the mass of the clump,
$\sigma_{\rm clump} \sim \pi r'^2$
is the effective surface intercepting the hot outflow fluid,
$n_{\rm H}$ is the hydrogen (number) density of the gas in the clump,
$r'$ is the radius of the clump,
and
$m_{\rm H}$ is hydrogen mass.
This gives a length-scale
\begin{equation}
\ell_{\rm a} \approx 0.74\;\!\left(\frac{\delta}{10^3}\right) \left( \frac{r'}{10\;\!\text{pc}} \right) \left(\frac{v_{\rm clump}}{600\;\!\text{km}\;\!\text{s}^{-1}}\right)^{2}\;\!\left(\frac{v_{\rm flow}}{1,800\;\!\text{km}\;\!\text{s}^{-1}}\right)^{-2} \;\!\text{kpc} \ ,
\end{equation}
over which the clump is accelerated to
velocities comparable to those of the observed entrained clumps
(here $\delta = n_{\rm H}/n_{\rm flow}$ is the over-density of a clump compared to the density
of the outflowing fluid).
The corresponding acceleration occurs over a timescale of
\begin{equation}
t_{\rm a} \approx 2.4\;\!\left(\frac{\delta}{10^3}\right)\;\! \left( \frac{r'}{10\;\!\text{pc}} \right) \;\! \left( \frac{v_{\rm clump}}{600\;\!\text{km}\;\!\text{s}^{-1}} \right)\;\! \left( \frac{v_{\rm flow}}{1,800\;\!\text{km}\;\!\text{s}^{-1}} \right)^{-2} \;\!\text{Myr} \ .
\label{eq:acc_timescale}
\end{equation}
The high-velocity clouds (HVCs) in the Milky Way
typically have velocities less than $100~\text{km}~\text{s}^{-1}$
relative to the Galactic standard of rest~\citep[e.g.][]{Wakker1991A&A, Putman2002AJ}.
Adopting the values observed for the HVCs in the Milky Way
as an initial (likely upper-bound) velocity of a clump passing into the outflow cone
in a direction perpendicular to the flow,
we would expect the HVC to take around 10 Myr to traverse a wind cone
of width 1~kpc.
This timescale is substantially longer than
the expected acceleration timescale (equation~\ref{eq:acc_timescale}).
The HVCs, and hence the clumps, passing into the flow would therefore
have sufficient time to be accelerated,
and would eventually become entrained into the flow
instead of traversing across and leaving it.
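These estimates can be sketched numerically as follows (Python; fiducial values follow the scalings quoted above):
\begin{verbatim}
pc = 3.086e18            # cm
kpc = 1.0e3 * pc
Myr = 3.156e13           # s
km_s = 1.0e5             # cm/s

def accel_scales(delta=1e3, r=10.0 * pc,
                 v_clump=600.0 * km_s, v_flow=1800.0 * km_s):
    """a ~ sigma P_ram / M_c = 3 v_flow^2 / (4 delta r')."""
    a = 3.0 * v_flow**2 / (4.0 * delta * r)
    return v_clump**2 / (2.0 * a) / kpc, v_clump / a / Myr

ell_a, t_a = accel_scales()
t_cross = (1.0 * kpc / (100.0 * km_s)) / Myr   # HVC crossing a 1-kpc wide cone
print(ell_a, t_a, t_cross)   # ~0.74 kpc, ~2.4 Myr, ~10 Myr
\end{verbatim}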
\subsubsection{Fate of entrained clumps}
\label{sec:entrained_clumps}
The entrained clumps (and their stripped gas) will be heated
by the hot gas in the outflow
and also by the ambient radiation,
leading to ionisation and evaporation.
For conduction,
heat diffusion is expected to be most effective along the minor axis
of the clumps (or the filaments),
because of both the larger interfacing cross-sectional area
and the shorter path from the interfacing surface to the interior core.
CX requires that the neutral material has temperatures $T_{\rm c} \approx 10^2-10^4~\text{K}$
\citep[e.g.][]{Strickland1997A&A, Lehnert1999ApJ}.
The ionised gas in the galactic outflow would have temperatures
reaching $T_{\rm h} \approx 10^7~\text{K}$
\citep[see, e.g.][]{McKeith1995A&A, Shopbell1998ApJ}.
The timescale over which a clump (i.e. an entrained HVC)
evaporates due to conduction heating is
\begin{equation}
t_{\rm evap} \approx 1.7\;\!\left(\frac{n_{\rm H}}{10~\text{cm}^{-3}}\right) \left( \frac{r'}{10~\text{pc}} \right)^2 \left(\frac{T_{\rm h}}{10^7~\text{K}}\right)^{-5/2} ~\text{Myr} \ ,
\end{equation}
\citep[see][]{Draine2011book},
where $n_{\rm H}$ is
in the range $10^{-1} - 10^1~\text{cm}^{-3}$
\citep[e.g.][]{Melioli2013},
and $r'$ is the characteristic clump cross-sectional radius along its minor axis,
which is of order 10~pc.
The timescale over which clumps are pushed along the wind cone
can be estimated from their observed velocity and the length-scale of the flow, i.e.
\begin{equation}
t_{\rm flow} \approx 4.9\;\! \left(\frac{h}{3~\text{kpc}}\right) \left(\frac{v_{\rm clump}}{600~\text{km s}^{-1}}\right)^{-1} ~\text{Myr} \ .
\label{eq:t_flow}
\end{equation}
Here, $h\approx3~\text{kpc}$ corresponds to the approximate scale height
of the super-wind as observed in M82~\citep{Strickland2009}.
Note that this parameter varies among systems \citep[see][]{Veilleux2005}.
For our adopted parameter choices,
$t_{\rm flow} > t_{\rm evap}$,
meaning that clumps (and warm filaments)
would generally evaporate and dissolve
into the surrounding hot fluid
before they can reach the top cap of the wind cone.
Moreover, comparison with the clump acceleration timescale (equation~\ref{eq:acc_timescale})
indicates
that clump entrainment/acceleration would also arise on comparable timescales,
i.e. with $t_{\rm a} \sim t_{\rm flow} \sim t_{\rm evap}$.
Thus, while some clumps entering an outflow
are gradually accelerated and evaporated,
other dense clumps could survive for their entire passage up the flow,
with the neutral gas they harbour being ultimately advected into circumgalactic space.
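A compact numerical comparison of the three timescales, using the scalings quoted above (Python; fiducial parameter values only), is:
\begin{verbatim}
def t_evap_Myr(n_H=10.0, r_pc=10.0, T_h=1.0e7):
    """Conductive evaporation timescale of a clump."""
    return 1.7 * (n_H / 10.0) * (r_pc / 10.0)**2 * (T_h / 1.0e7)**-2.5

def t_flow_Myr(h_kpc=3.0, v_clump=600.0):
    """Advection timescale along the wind cone."""
    return 4.9 * (h_kpc / 3.0) * (600.0 / v_clump)

def t_acc_Myr(delta=1.0e3, r_pc=10.0, v_clump=600.0, v_flow=1800.0):
    """Acceleration timescale of an entering clump."""
    return (2.4 * (delta / 1.0e3) * (r_pc / 10.0)
            * (v_clump / 600.0) * (1800.0 / v_flow)**2)

print(t_evap_Myr(), t_flow_Myr(), t_acc_Myr())   # 1.7, 4.9, 2.4 Myr
\end{verbatim}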
\subsection{The circumgalactic environment}
\subsubsection{Cooling}
The thermal cooling timescale of ionised gas is
$t_{\rm cool} \sim n_{\rm e}k_{\rm B}T_{\rm e}/
\Lambda_{\rm cool}(n_{\rm e},T_{\rm e},Z)$,
where $\Lambda_{\rm cool}$ is the cooling rate,
$n_{\rm e}$ is the electron number density,
$T_{\rm e}$ is the electron temperature,
$Z$ is the charge number of ions,
and $k_{\rm B}$ is the Boltzmann constant.
For ionised hydrogen cooled by free-free emission,
\begin{equation}
t_{\rm cool}
\sim \;\! \left(\frac{n_{\rm e}}{10^{-2} \;\! {\rm cm}^{-3}} \right)^{-1} \left(\frac{T_{\rm e}}{10^7 \;\! {\rm K}} \right)^{1/2} {\rm Gyr} \ .
\end{equation}
Thus, $t_{\rm cool} \gg t_{\rm flow}$
(equation \ref{eq:t_flow})
for an ionised outflow
with $n_{\rm e}\sim 10^{-2}\;\!{\rm cm}^{-3}$
and $T_{\rm e} \sim 10^7\;\!{\rm K}$.
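A sketch of this scaling (Python; illustrative parameter values only) is:
\begin{verbatim}
def t_cool_Gyr(n_e=1.0e-2, T_e=1.0e7):
    """Free-free cooling timescale, scaled as in the expression above."""
    return (n_e / 1.0e-2)**-1.0 * (T_e / 1.0e7)**0.5

# ~1 Gyr for the adopted outflow parameters, i.e. t_cool >> t_flow
print(t_cool_Gyr(), t_cool_Gyr(n_e=1.0e-1, T_e=1.0e6))
\end{verbatim}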
The hot ionised gas will be advected into the CGM,
and will eventually cool outside the host galaxy.
Partially ionised gas
of temperatures $T \sim 10^4\;\!{\rm K} - 10^6{\rm K}$
can be cooled more efficiently
through bound-free and bound-bound cooling processes
\citep[see][]{Sutherland1993ApJS}.
When the temperature of the outflowing ionised gas
falls below $10^6$~K,
bound-free and bound-bound processes will become dominant.
Condensation would occur in the circum-galactic environment
\citep[cf.][]{Putman2002AJ, Putman2003ApJ},
with the surviving clump remnants as seeds for density growth.
The condensates formed in this way would,
when falling back into the host galaxy,
manifest as objects
similar to the HVCs
that we observe in the Milky Way \citep[][]{Putman2012ARA&A}.
\subsubsection{Clump infall}
The loss of kinetic energy of CGM clumps
causes their infall back into the host galaxy
to become part of its ISM.
The rate at which this happens is determined by a number of processes,
in particular, (i) the ram pressure drag experienced by clumps in the CGM,
which leads to their energy loss and hence orbital decay,
and (ii) the collision and merging of clumps.
The infall timescale is given by
\begin{align}
t_{\rm dyn} &\approx \frac{\pi}{2}\frac{R^{3/2}}{\sqrt{2 G M_{\rm gal}}} \nonumber \\
& \approx 27 \;\! \left(\frac{R}{3\;\!\text{kpc}}\right)^{3/2}\;\!\left(\frac{M_{\rm gal}}{10^{10}\;\!\text{M}_{\odot}}\right)^{-1/2}~\text{Myr} \ ,
\end{align}
where $R$ is the lengthscale of the CGM and
$M_{\rm gal}$ is the dynamical mass of the system,
which is scaled here to that of M82
\citep[see e.g.][]{Strickland2009, Greco2012ApJ}.
The kinetic energy loss of a clump due to ram pressure drag
will lead to a decay of its orbit.
Assuming a uniform deceleration, we obtain a drag timescale of
\begin{align}
t_{\rm drag} & = \frac{M_{\rm c}}{ \pi \langle r' \rangle^2 \;\! m_{\rm p}\;\!n_{\rm bg} \;\! v_{\rm c}} \nonumber\\
& = \frac{4}{3} \frac{\langle r' \rangle}{v_{\rm c}} \delta \nonumber \\
& = 1.31 \;\! \left(\frac{\langle r' \rangle}{100~\text{pc}}\right) \;\! \left(\frac{v_{\rm c}}{100~\text{km}\;\!~\text{s}^{-1}}\right)^{-1}\;\!
\left(\frac{\delta}{1000}\right) ~\text{Gyr} \
\end{align}
\citep{Maller2004MNRAS},
where $\langle r' \rangle$ is the mean radius of the clump
(rather than the size of the minor axis),
for which we adopt a characteristic value of $100~\text{pc}$~\citep{Crighton2015MNRAS}.
Here $M_{\rm c}$ retains its earlier definition as the clump mass,
$v_{\rm c}$ is its velocity in the CGM,
$n_{\rm bg}$ is the number density of the hot component of the CGM gas,
and $\delta = n_{\rm H}/n_{\rm bg}$ is the over-density of a clump on the CGM background.
The CGM gas is slightly cooler than that of the outflow fluid,
and its temperature is expected to be
$T\approx 10^4 - 10^{5.5}~\text{K}$~\citep{Narayanan2010ApJ, Werk2013ApJS, Stocke2013ApJ, Tumlinson2017ARA&A}.
Since the drag timescale greatly exceeds the infall timescale $t_{\rm dyn}$,
the ram pressure drag-induced kinetic energy loss alone
is insufficient to cause galactic outflow material to return
and replenish the ISM of the host galaxy.
An alternative mechanism is clump-clump collisions.
The process is stochastic, in contrast to the drag process
which extracts a clump's kinetic energy continuously.
In a collision, some fraction of the kinetic energy of the colliding clumps
will be transferred into turbulence,
causing heating in the clump material.
When the thermal energy is radiated away,
it leaves a single massive remnant
with a kinetic energy lower than the combined kinetic energy of its pre-merger progenitors~\citep{Maller2004MNRAS}.
The efficiency of the clump-clump collision process may be estimated as follows.
The initial number of clumps (per CGM unit volume) is given by
\begin{align}
N_{\rm c} &\approx \frac{3\;\! M_{\rm neut}}{4 \pi M_{\rm c} R^3} \nonumber \\
& \approx 27 \;\! \left(\frac{M_{\rm c}}{10^6~\text{M}_{\odot}}\right)^{-1} \;\! \left(\frac{M_{\rm neut}}{3\times 10^9~\text{M}_{\odot}}\right) \;\! \left(\frac{R}{3~\text{kpc}}\right)^{-3}~\text{kpc}^{-3} \ ,
\end{align}
where $M_{\rm neut}$ is total mass content in the neutral material in the CGM,
and $M_{\rm c}$ is the averaged mass of a single clump.
It is argued that 20-30\% of the dynamical mass of the host galaxy
may reside in the cold, neutral component of the CGM~\citep[e.g.][]{Maller2004MNRAS, Stocke2013ApJ}.
The neutral material is likely to be condensates, i.e. in the form of clumps,
rather than being mixed-in with the ionised material\footnote{Note that here the neutral clumps are referred to in a general context, i.e. in that they are dense condensates in the CGM.
They can be formed by condensation of the CGM directly, or by condensation of the CGM
seeded by the remnant clumps which have survived their passage through the galactic outflow to be advected into the circumgalactic environment,
or by the recycling condensation of the galactic outflow material.}.
For a galaxy similar to M82,
the amount of neutral gas in the CGM may therefore be
$M_{\rm neut} = 3\times 10^{9}~\text{M}_{\odot}$
\citep[see e.g.][]{Maller2004MNRAS, Stocke2013ApJ, Werk2013ApJS}.
Presumably, the end-state of CGM clumps after they have undergone an infall
would be HVCs.
The masses of the Galactic HVCs are found to be within a range of
$10^5 - 5\times 10^6~\text{M}_{\odot}$~\citep[e.g.][]{Putman2012ARA&A}.
For definiteness,
we assign $M_{\rm c} = 10^{6}~\text{M}_{\odot}$
as the characteristic mass of CGM neutral clumps.
(We acknowledge these are observed to be comprised
of smaller sub-structures of filaments and clouds, c.f.
\citealt{Putman2003ApJ, Thom2008ApJ, Hsu2011AJ, Fox2014ApJ}, also seen in M31 -- \citealt{Thilker2004ApJ}.)
The clump-clump collisional timescale, taking a collisional cross-section $\sigma_{\rm c} \approx \pi \langle r' \rangle^2$, may be estimated as
\begin{align}
t_{\rm cc} &\approx \left[N_{\rm c} \;\! \sigma_{\rm c} v_{\rm c}\right]^{-1} \nonumber \\
&\approx 11.8 \;\! \left(\frac{N_{\rm c}}{26.5~\text{kpc}^{-3}}\right)^{-1}\;\!
\left(\frac{v_{\rm c}}{100~\text{km}~\text{s}^{-1}}\right)^{-1}\;\!
\left(\frac{\langle r' \rangle}{100~\text{pc}}\right)^{-2}~\text{Myr} \ ,
\end{align}
which is shorter than the timescale for drag-induced energy loss
for the parameters considered in this work.
Clump-clump collisions are therefore a viable mechanism
for returning the outflow material.
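The CGM timescales discussed in this subsection can be compared with a short Python sketch (fiducial values as quoted above; illustrative only):
\begin{verbatim}
def t_dyn_Myr(R_kpc=3.0, M_gal=1.0e10):
    """Infall (dynamical) timescale of CGM clumps."""
    return 27.0 * (R_kpc / 3.0)**1.5 * (M_gal / 1.0e10)**-0.5

def t_drag_Gyr(r_pc=100.0, v_c=100.0, delta=1000.0):
    """Ram-pressure drag timescale."""
    return 1.31 * (r_pc / 100.0) * (100.0 / v_c) * (delta / 1000.0)

def t_cc_Myr(N_c=26.5, v_c=100.0, r_pc=100.0):
    """Clump-clump collision timescale."""
    return 11.8 * (26.5 / N_c) * (100.0 / v_c) * (100.0 / r_pc)**2

# Drag is far slower than infall, whereas collisions act on ~10 Myr
print(t_dyn_Myr(), t_drag_Gyr(), t_cc_Myr())
\end{verbatim}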
Note that the collisional timescale estimated above is somewhat longer than
the timescale over which clumps are advected
(cf. equation~\ref{eq:t_flow}).
An advection timescale shorter than the kinetic energy loss timescale
implies a gradual build-up of the CGM.
The accumulation of outflow material
would persist until the termination of the starburst phase
(and hence the cut-off of the energy supply for the outflow --
see e.g.\ \citealt{Chevalier1985} for a generic scenario and the hydrodynamics formulation).
If the star formation occurs in cycles,
this ejection of multi-phase clumpy ISM matter will give rise
to complexity in the feedback interplay.
On the one hand, the depletion of cold gas in the ISM
and the advection of neutral material to the outside of the host galaxy
would undoubtedly suffocate the star formation process.
But, on the other hand, a multi-phase outflow would provide metal-enriched material,
which can be cooled more efficiently than the metal-poor pristine cosmological gas.
The remnant clumps would be seeds that nucleate CGM condensation,
and the subsequent infall of the newly formed CGM condensate
would reignite and fuel the next phase of the star formation cycle.
\subsection{Additional remarks}
The large-scale galactic outflow in M82
is believed to be driven by a combination of outward thermal pressure
and the coalescence of shocks
and outflowing material,
with power derived from an active star-forming core
at the base of the wind cone.
In such a setting,
the hot outflow would entrain cold neutral material from the ISM
which would form into clumps and filaments.
In a galactic outflow with a single phase medium,
an asymptotic flow velocity will develop
when the energy injection by the supernova explosions
is deposited into the thermal and kinetic energy of the fluid
\citep[see][]{Chevalier1985}.
In a multi-phase clumpy galactic outflow,
a fraction of the supernova power
will be dissipated for the compression of the cold neutral clumps that are entrained into the hot galactic outflow medium.
Such compression has already been seen in numerical simulations
and is also consistent with
the flattened hamburger/pancake shapes
that we infer from the surface-to-volume analyses
using observations of CX emission.
Another fraction of the supernova power
will be dissipated in the drag
between the clumps and the gas in the flow,
by which the clumps are accelerated
\citep[see][]{Larson1974MNRAS, Nath1997MNRAS, Veilleux2005, Bustard2016ApJ}.
A single fluid formulation is therefore inadequate
to model the thermodynamics and hydrodynamics of
such multi-phase clumpy galactic outflows.
Needless to say, further complications would also arise
due to the radiative cooling of the hot outflow medium
and the additional pressure exerted
by radiation and cosmic rays produced in star-forming regions
at the base of the wind cone.
Moreover,
the entrained clumps and filaments are not stationary structures
as they are subjected to mechanical ablation (i.e. stripping)
by the faster-moving fluid in their surroundings,
and to thermal evaporation by UV and X-rays
permeating through the hot galactic outflow medium.
In addition,
clump destruction can also arise due to thermal conduction
(see the discussions
in sections~\ref{sec:halpha_filaments} and~\ref{sec:entrained_clumps}).
The material evaporated or mechanically stripped from the clumps
will be pushed ahead in the flow.
This is heated and stretched,
forming a semi-ionised H$\alpha$-emitting trail.
The galactic outflow is likely to be turbulent,
and this would cause the filamentary trails to develop
fractal-like structures
rather than a well-defined interface surface
on which CX process would operate in a quasi-steady manner.
The survival of the stripped material against mixing
when it extends into the hot galactic outflow
depends on many factors and their interplay.
The dissolution of these fractal trails in a hot outflow
is an important avenue of future research,
given that it will strongly impact
on the mass-loading of the flow,
the redistribution of material in different fluid phases,
the evolution of substructures,
and the observational morphologies of
X-ray and H$\alpha$ emission features.
Galactic outflows in active star-forming galaxies
do not necessarily
take the form of a super-wind
in which dense clumps
and H$\alpha$ emitting filaments are entrained,
as in M82~\citep{Chevalier1985, Ohyama2002}.
In NGC~3077,
a starburst dwarf neighbour of M82,
bubbles and expanding shells
(instead of elongated filaments)
are observed ~\citep{Ott2003}.
The bubbles consist of hot gas,
and they are enclosed by a warm shell,
characterised by the H$\alpha$ emission.
In terms of the two-zone model
shown in Figure~\ref{fig:wind_cone},
the super-wind galactic outflows
would be dominated by the bottom zone,
while in the galactic outflows of NGC~3077
the bottom zone is negligible or absent.
Such different multi-phase structures
would have different hydrodynamic properties.
These would have impacts on the entrainment of HVCs,
if it happens,
and the subsequent evolution of HVCs/clumps in the outflow.
In fact,
the diversity of galactic outflow conditions
is now recognised,
and outflows from star-forming galaxies
can be powered by various driving mechanisms.
A galactic outflow can be driven by the radiation from
an intense star-forming core in the host galaxy~\citep{Dijkstra2008MNRAS, Nath2009MNRAS, Thompson2015MNRAS}.
It has also been suggested
that cosmic rays can play a very important role
in regulating the energy budget of galactic outflows
~\citep{Samui2010MNRAS, Uhlig2012MNRAS}\footnote{At high redshifts, the favoured mechanism for driving galactic outflows
is cosmic-ray pressure.
Studies have argued
that cosmic-ray driven outflows become increasingly important
in earlier epochs~\citep[e.g.][]{Samui2010MNRAS},
when host galaxy masses were presumably smaller
and star-formation rates were higher during bursts.}.
The ability of a neutral clump to survive
in a hot ionised galactic fluid,
an intense radiation field
or a bath of cosmic rays
may depend on the metallicity and even the magnetic field,
in addition to the density of the clump,
the outflow speed
of the hot ionised gas
and the length-scale of the outflow.
Ignoring magnetism for the time being,
in a low-temperature outflow
(e.g. in a radiatively driven system,
see \citealt{Zhang2018Gal})
clump destruction is less likely to arise
by thermal conduction.
Instead, the abundant ionising radiation responsible
for powering the outflow
will cause surface evaporation of a clump,
particularly
on the surface facing the host galaxy,
where the strong radiation is emitted
from the regions with intense star formation.
Outflows driven predominantly by cosmic ray pressure
may also exhibit a relatively low fluid temperature
in the outflow material
\citep[see e.g.][]{Samui2010MNRAS}.
As the energy deposition of the cosmic rays
is not concentrated at the base of the outflow cone,
the gas is accelerated
not only by a thermal pressure gradient
but also by a cosmic ray pressure gradient,
which shares some similarities
with radiation pressure.
A consequence is that
outflows can extend to altitudes higher than
those driven purely by supernova power mechanically, reaching a few tens of kpc instead of just a few kpc
\citep{Naab2017ARA&A, Jacob2018MNRAS, Girichidis2018MNRAS}.
Again, the lower temperatures
reduce the amount of heat transported into clumps
by means of thermal conduction,
which reduces this as a means of dissolving the clumps
and erasing the temperature inhomogeneities within the flow.
However, the lower velocities sometimes attributed to cold cosmic ray driven outflows~\citep[e.g.][]{Samui2010MNRAS}
mean that entrained clumps remain in the (much larger) outflow cone for greatly extended periods of time.
This would mean that a given clump/over-density
is exposed to eroding and irradiating processes
for much longer than its counterparts
in a supernova-driven hot outflow,
which may still lead to its eventual destruction
well before it has completed its passage of an outflow cone.
As such, it would not be advected into the circumgalactic space, despite the less efficient heat conduction process.
Cosmic rays can cause heating and ionisation
in the CGM and the intergalactic medium as well as the ISM
\citep[see e.g.][]{Owen2018MNRAS},
and tend to deposit their energy into dense targets.
Hence the dense cores of entrained neutral clumps
are more effective in capturing and absorbing the cosmic ray particles
than the galactic outflow medium.
The heating effect due to cosmic rays
can also be enhanced if clumps are magnetised.
As the energy is deposited mainly in the densest parts of clumps
and/or the most strongly magnetised regions,
clumps in a cosmic ray dominated flow are
pushed along by their cores
(i.e. they expand `inside-out' instead of being compressed by an external pressure force).
This could stretch clumps out over time,
if the magnetic fields are also stretched
along the flow of ionised fluid.
This will lead to the formation of extended filaments
oriented along the outflow direction
-- in contrast to a supernova-driven system,
where pancake- or hamburger-shaped clumps develop
with their flattened surfaces oriented perpendicularly
to the flow direction of the hot galactic outflow fluid.
Different clump-flow interactions
should therefore give different surface-to-volume ratios,
and the extent of the interfacing surface would accordingly
be reflected in the strength of the CX lines in the emission spectra.
Finally, we note that
in outflows
with a significant magnetic energy density and turbulence,
the heating/evaporation of entrained clumps in a hot outflow
may be reduced (if cosmic rays are insignificant).
This is because the thermal particles (usually electrons)
responsible for the conduction of heat propagate predominantly along magnetic field lines.
In the presence of tangled and turbulent magnetic fields,
such field lines increase the path that particles must travel when conducting heat,
thus slowing the conduction process down
\citep{Tao1995MNRAS, Chandran1998, Malyshkin2001ApJ}.
This would operate to protect entrained clumps,
enabling them to survive much longer in the outflow cone,
and allowing them to persist to higher altitudes.
\section{Summary and conclusions}
\label{sec:summary_and_conclusion}
Galactic outflows are complex multi-phase fluids,
with cold dense clumps and warm filaments entrained in a hot ionised gas.
CX (charge-exchange) processes are a characteristic of fluids
consisting of a neutral and an ionised phase.
For galactic outflows,
they operate in the interface surface between
neutral or partially ionised clumps and the ionised gas,
and the CX processes give rise to X-ray CX emission lines.
The large-scale outflows of a number of nearby starburst galaxies
show a strong forbidden line in the X-ray \ion{O}{vii} triplet \citep{Liu2012MNRAS, Wang2012AN},
which is interpreted as a CX line,
thus establishing the multi-phase nature of galactic outflows.
In this work, we conducted analyses of the surface-to-volume ratio
of dense clumps
based on observations of CX emission from starburst galaxies,
and hence constrained the geometries and structures of the dense neutral material
in galactic outflows.
More specifically we considered the relative enhancement
of the area-to-volume ratio of the dense neutral clumps with respect to spheres,
and we derived the aspect ratios of these clumps
from the observed strengths of CX lines, e.g. in M82~\citep{Liu2011}.
Our analyses indicated that the cold dense clumps in galactic outflows
such as those of M82
would have flattened shapes, resembling a hamburger or a pancake,
instead of elongated shapes.
The flattened geometry is consistent
with the findings of numerical simulations,
which show that dense clumps entrained in galactic outflows
would be ram pressure compressed.
Our analyses do not support an elongated geometry for the CX emission objects.
Thus, the filamentary features observed in the H$\alpha$ images of galactic outflows
are not primary CX emitters.
We interpret them as
warm trails of the stripped material
from the colder dense CX emitting clumps,
which are pushed forward (after having been stripped) and stretched into filamentary structures
by the faster-moving galactic outflow fluid.
These filaments are exposed
to the hot gas in their surroundings
and intense background radiation.
Though cold initially, they are gradually warmed.
This is consistent with numerical studies
which have shown
that
dense, slowly moving clumps are ablated
by shear and ram pressure
exerted by the fast-moving galactic outflow fluid,
and the stripped material forms long trails leading the clumps from which it originated.
We have found that
the entrainment, acceleration and evaporation/dissolution of neutral clumps
in a galactic outflow occur over comparable timescales
(see section~\ref{sec:entrained_clumps}).
Thus, some fraction of the clumps, and perhaps the stripped gas filaments,
can survive their entire passage through the galactic outflow
to be advected into the circumgalactic space surrounding their galaxy of origin.
These remnants are metal-enriched,
and they act as seeds for the condensation of the CGM (circumgalactic medium)
to form a new generation of clumps.
Clump-clump collisions in circumgalactic environments
would cause clump infall into the ISM of their host galaxy, as the clumps would lose a substantial fraction of their kinetic energies in a collision.
These new clumps could be the origin of HVCs (high-velocity clouds)
such as those observed in our Galaxy~\citep[see][]{Putman2012ARA&A}.
When the infalling clumps are re-entrained into the galactic outflow,
a recycling of galactic material is initiated.
\section*{Acknowledgements}
We thank the referee
for helpful comments
and suggestions to improve the manuscript.
KJL's research at UCL-MSSL
was supported by UCL through a MAPS Dean's Research Studentship
and by CUHK through a CN Yang Scholarship, a University Exchange Studentship,
a Physics Department SURE Studentship, and a New Asia College Scholarship.
ERO was supported by a UK Science and Technology Facilities Council PhD studentship.
This research has made use of the SAO/NASA ADS system and the arXiv system.
\bibliographystyle{mnras}
\section{Introduction}\label{sec_intro}
With the significant increase of online threats, governments and organizations~\cite{australia-ISPs, australia2-ISPs} increasingly demand that ISPs play a bigger role in preventative cyber security. ISPs actively employ measures to filter spoofed traffic, but can also have a key role in detecting other attacks~\cite{wired-ISPs}. One emerging attack vector that can be effectively tackled at the ISP level is the detection of compromised mobile devices. Recently, researchers proposed to identify compromised devices from an ISP's point of view~\cite{social,carrier,cellbot}. ISPs have direct access to key network traces and information, which enables them to perform early detection of compromised mobile devices. Once discovered, ISPs can inform their customers, including organizations, so that they can take proper actions~\cite{darkside}.
Indeed, organizations have encouraged the use of personal mobile devices in workplaces, increasing the security incidents involving mobile devices. A recent study shows that one in three organizations has faced a security incident due to compromised devices having malicious apps~\cite{mobilesecindex2019}. Among other undesirable behavior, such devices may leak sensitive information, perform unauthorized credit card transactions, and phone calls~\cite{mosaic,taintdroid,recon,bugfix,cellbot}. A key challenge in mitigating such security threats is to accurately detect compromised devices and take actions. As organizations have little control over mobile devices and do not have access to all mobile network traffic, one needs to perform the detection at the mobile network provider level.
A number of methods to detect malicious apps have been proposed in the literature, which mainly apply various static and dynamic code analysis techniques~\cite{carrier,robotdroid,taintdroid,wild} and network-based approaches~\cite{httpmalware,deviation,trafficav,machinelearning,imbalanced}. However, these techniques require inspecting the vast number of constantly created apps and identifying local features of every device and/or app. Therefore, a different method, one that is not only robust but also scales to a large network, is required to detect compromised devices.
Many free apps are typically developed with in-app advertisements promoting other apps or in-app purchases~\cite{appscanner}. While using such free apps, users are often tricked into authorizing the download of related apps and fall victim to \emph{drive-by-download} attacks~\cite{drive,phishing}. Further, many users also tend to install free apps that are not published in official app stores such as Google Play. For example, in some countries such as China, users are blocked from accessing Google Play and thus have to use various other stores with considerably lower and varying security guarantees~\cite{beyondgoogleplay}.
Motivated by the above observation, we hypothesize that there exists a \emph{homophily relationship} between devices and their installed apps so that devices sharing a similar set of apps will have a similar probability of being compromised. We thus formulate the compromised device detection problem as a graph-inference based classification task. Most apps require network connection between devices and their host servers while being downloaded, installed, or executed. We model such communication involving devices and mobile apps as a bipartite graph where one side is the set of devices (i.e., users) and the other is the set of mobile apps. To \emph{infer} whether an unknown device is compromised,
having evaluated several graph inference approaches, we apply Belief Propagation (BP), a well-known algorithm that has been widely used to reliably approximate an entity's likelihood of being malicious on probabilistic graphical models of large graphs, in a variety of security contexts including anomaly detection, fraud detection, and malicious domain detection~\cite{maldomain,fraudeagle,generalbp,enterprise_bp}.
Essentially, the effectiveness of BP depends on the strength of association between nodes in the graph~\cite{codaspy_issa}. Unlike other applications where associations are relatively straightforward to derive (e.g., between a malware-infected machine and its activity controlled by a command \& control server), it is quite challenging to derive such strong associations between devices and mobile apps because: (1) it is often hard for mobile apps to interfere with and taint other apps; and (2) user interactions are needed to take any action.
We empirically verify our hypothesis that, to a certain extent, there in fact exist associations which can be used to correctly identify compromised devices, using a 5-terabyte anonymized mobile network dataset provided by a cellular service provider. We further discuss the effect of the relatively weak associations in a device-app graph on BP. Concretely, we provide an in-depth analysis of the topological similarities and differences between a device-app graph and well-known domain-IP resolution graphs obtained from active DNS data~\cite{codaspy_issa}, and their impact on the behavior of BP. Finally, we investigate whether detected devices exhibit undesirable behavior in terms of privacy leakage and the hosting servers accessed by the devices.
In sum, we make the following main contributions. (1) We investigate an association between mobile devices and the apps installed on them, with which network administrators can successfully identify \textbf{unknown compromised devices} with little knowledge of the devices in the network. We model the associations as a device-app bipartite graph and apply a graph-based inference approach over it. To the best of our knowledge, this is the first attempt to use such associations to detect compromised devices in the mobile context. The key advantages of our approach are that (a) it is applicable regardless of device models, OS versions, app versions, or app types (e.g., phishing, malware) and (b) it can detect compromised devices at large scale without time-consuming investigation of individual devices. (2) Through experiments over a large-scale real-world dataset, we show that our approach can effectively detect compromised devices, achieving nearly 98\% accuracy in terms of AUC. (3) We further investigate the unique graph structure of the association between devices and their installed apps, and how it affects the choice of key parameters of BP and the effectiveness of our approach. (4) Finally, we validate our approach with a post-analysis of the behavior of detected unknown compromised devices. We show that these devices, most of whose apps are not known malicious apps, leak highly sensitive information in their network traffic, and tend to frequently access IPs and domains with malicious behavior such as fast-fluxing or being short-lived~\cite{fastflux:2008:NDSS,takedown:2019:NDSS}.
\section{The Proposed Approach}\label{sec_approach}
The aim of this work is to identify unknown compromised mobile devices given a small set of known ones and the network traffic data of mobile devices collected by a service provider. Concretely, we want to determine whether a given unknown device is compromised or not by analyzing its connections to known devices. We first highlight the main questions and challenges that we must address and describe how our approach tackles each challenge.
\subsection{The Baseline Approach}~\label{blacklist}
One trivial approach to detect compromised devices is to compare apps in a device against known malicious apps. However, similar to other blacklist based approaches utilized to detect malicious entities in the Internet, such an approach fails to detect compromised devices having previously unknown malicious apps ~\cite{impending}. One thus needs approaches that can predict the status of devices based on the limited prior knowledge. In this paper, we thus propose to employ a graph inference approach, presented in the following.
\subsection{The Graph-Inference Based Approach}\label{overview}
A key challenge to identify compromised devices using an inference algorithm is to first identify meaningful associations that can graphically demonstrate the homophily relationship. That is, we seek to define an association that is able to create two distinct clusters in the association graph, corresponding to compromised and not-compromised devices, respectively.
Mobile users mostly access the contents through the apps in their devices~\cite{robust}. We thus investigate the associations between devices and apps. Intuitively, if a device installs malicious apps, it is likely to download/install other malicious apps due to several reasons including in-app advertisements promoting similar apps~\cite{appscanner} and other drive-by-download attacks. Meanwhile, most apps require network connection between devices and their host servers while being downloaded, installed, or executed. We thus claim the likelihood of a device being compromised can be measured by analyzing its app usage behavior revealed in the network traffic. Our key insight is thus that there exists an association between a device and apps following homophily that can be used to identify other compromised devices.
We present a model to reflect the homophily relationship between a device and apps. Specifically, we capture the association between devices and their apps through analysis of network traffic (as an ISP has no direct access to devices) and model such associations as a bipartite graph (Section~\ref{sec_bipartite}).
To determine whether a device is compromised, we follow the guilt-by-association principle that has been extensively applied in various applications with a graph model~\cite{codaspy_issa,fraudeagle,guiltfile,impending,maldomain}. In a nutshell, the idea of guilt-by-association is to estimate the \emph{guiltiness} of a node by propagating the prior knowledge on some of the nodes in the graph model, given the homophily relationship between nodes.
One may utilize various approaches such as label propagation (LP)~\cite{lp:2002,marmite}, BP, or graph node embedding~\cite{node2vec} to perform inference over graphs. We show in Section~\ref{discussion} that the accuracy of these approaches is comparable with BP being slightly better than the other approaches and much more efficient than graph node embedding. Hence, we apply BP on the bipartite graphs we build (Section~\ref{sec_bp}).
\subsubsection{Constructing Bipartite Graphs}\label{sec_bipartite}
\begin{figure}[tb]
\begin{center}
\parbox{0.45\textwidth}{
\centering
\epsfig{file=bipartite_newdata.pdf,width=0.3\textwidth}
\caption{A device-app bipartite graph model}\label{bipartite}}
\end{center}
\end{figure}
We represent the associations between devices and apps as a bipartite graph $G = (V,E)$ where a set of devices $D = \lbrace d_1,\ldots,d_n\rbrace \subset V$ and a set of apps $A = \lbrace a_1,\ldots,a_m\rbrace \subset V$ are connected with undirected edges $e(d_i,a_j)$, where $d_i$ is a device and $a_j$ is an app. Fig.~\ref{bipartite} illustrates an example of a bipartite graph model where the left side is a set of devices (i.e., nodes in $D$) and the right side is a set of apps (i.e., nodes in $A$). Each node in $D$ may belong to one of three categories: not-compromised, compromised, and unknown, and each node in $A$ may belong to one of four categories: benign, malicious, suspicious, and unknown. As illustrated, a compromised device may have edges with all types of apps including malicious, suspicious, unknown, and benign. An unknown device may have edges with suspicious, unknown, and benign apps. A not-compromised device may have edges with benign and unknown apps, but does not have edges with suspicious or malicious apps.
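To make the construction concrete, the following is a minimal sketch (in Python, using networkx) of how such a device-app bipartite graph could be assembled from (device, app) pairs extracted from the traffic. The record format, label names, and helper signature are illustrative assumptions rather than the exact pipeline used in this work.
\begin{verbatim}
import networkx as nx

def build_device_app_graph(records, device_labels, app_labels):
    """records: iterable of (device_id, app_id) pairs extracted from traffic.
    device_labels / app_labels: dicts mapping identifiers to ground-truth
    labels; anything missing is treated as 'unknown'."""
    G = nx.Graph()
    for device, app in records:
        G.add_node(device, side='device',
                   label=device_labels.get(device, 'unknown'))
        G.add_node(app, side='app',
                   label=app_labels.get(app, 'unknown'))
        G.add_edge(device, app)    # undirected edge e(d_i, a_j)
    return G

# Toy example: two devices sharing one app
G = build_device_app_graph(
    [('d1', 'com.example.game'), ('d1', 'com.example.ads'),
     ('d2', 'com.example.ads')],
    device_labels={'d1': 'compromised'},
    app_labels={'com.example.ads': 'suspicious'})
\end{verbatim}
The same structure can be built with destination IPs in place of app strings when explicit app strings are unavailable.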
\subsubsection{Belief Propagation}\label{sec_bp}
We work based on the intuition that there exists an association between the probability of a device being compromised and the probability of having malicious apps. That is, the more malicious apps a device has, the more likely it is to download other malicious apps, resulting in homophily relationships between devices and apps.
Given the bipartite graph in Section~\ref{sec_bipartite} and the prior knowledge about devices, we thus aim to infer the probability of unknown devices being compromised or not using the guilt-by-association principle. To do so, we employ BP, which is shown to reliably estimate the posterior probabilities on probabilistic graphical models for large graphs in a variety of domains including anomaly detection, fraud detection, and malicious domain detection~\cite{guiltfile,codaspy_issa,fraudeagle}. In the following, we explain how we apply BP in our context.
We model each node $i \in V$ as a random variable, $x_i$, that can be in the set of \emph{states} $S = \lbrace good , bad \rbrace$ so that the badness and goodness of a node can be expressed by the probabilities $P(Bad)$ and $P(Good)$, respectively, where $P(Bad) + P(Good)= 1$. Our goal is then to determine the marginal probabilities $P(x_i = Good)$ and $P(x_i = Bad)$ for unknown devices.
BP computes the marginal probability of each node by iteratively passing local messages from its neighbors, given the prior knowledge of other nodes in the graph.
\begin{comment}
\begin{equation}\label{eq:pro}
\tag{Eq.1}
P(x_i) = \sum_{x_1}\cdots\sum_{x_{i-1}}\sum_{x_{i+1}}\cdots\sum_{x_n} P(x1,x2,\cdots,x_n)
\end{equation}
\end{comment}
At each iteration, BP computes the message vector $m_{ij}$ for each node $i$, and passes it to each of its neighbors $j \in N(i)$, where $N(i)$ is the set of $i$'s neighbors. $m_{ij}(x_j)$ is $i$'s belief that node $j$ is in state $x_j$ (i.e., $i$'s outgoing message vector to $j$), which is computed from $i$'s neighbors' messages about $i$. Concretely, there are three components to compute the message $m_{ij}(x_j)$: (1) the initial belief $\phi_i(x_i)$ for $i$ being in state $x_i$; (2) the product of all messages from $i$'s neighbors excluding $j$ (i.e., $i$'s incoming message vectors from $k \in N(i)$); and (3) the edge potential $\psi_{ij}(x_i,x_j)$ between two neighboring nodes $i$ and $j$ specifying the probability of $i$ being in state $x_i$ and $j$ being in state $x_j$. Formally, the message $m_{ij}$ is defined as:
\begin{equation}\label{eq:msg}
\tag{Eq.1}
m_{ij} (x_{j}) = \sum_{x_i \in S} [\phi_i(x_i)\psi_{ij}(x_i,x_j) \prod_{k \in N(i)\backslash j}m_{ki}(x_i)]
\end{equation}
We assign the initial belief for each node based on the ground truth labels, which is summarized in Table~\ref{bp_table}(a). Furthermore, Table~\ref{bp_table}(b) represents the edge potential matrix. In Section~\ref{sec_analysis}, we will discuss how we choose $\delta$ and $\epsilon$, and the effect of varying these parameters on detection accuracy.
\begin{table}[h!]
\begin{center}
\subtable[Initial beliefs for nodes in a graph\label{initial}]{
\begin{tabular}{|c|c|c|}\hline
\textbf{ } & P(Bad)& P(Good)\\
\hline
Bad & $\delta$ & 1- $\delta$\\
\hline
Good & 1- $\delta$ & $\delta$\\
\hline
Unknown & 0.5 & 0.5\\
\hline
\end{tabular}}
\subtable[Edge potentials\label{edge}]{
\begin{tabular}{|c|c|c|}\hline
\textbf{ } & Bad& Good\\
\hline
Bad & $\epsilon$ & $1-\epsilon$\\
\hline
Good & $1-\epsilon$ & $\epsilon$\\
\hline
\end{tabular} }
\caption{Initial beliefs and edge potentials for Belief Propagation }
\label{bp_table}
\end{center}
\end{table}
\begin{comment}
Following~\cite{maldomain}, we update $i$'s outgoing messages in \emph{a synchronous order} for simplicity. That is, the $i$'s outgoing messages in iteration $t$ is computed from $i$'s incoming messages in iteration $t-1$. And the message passing procedure stops when the messages converge, i.e, they do not change significantly between iterations. Note that although BP is not theoretically guaranteed to converge given arbitrary graph topologies, it is shown to converge quickly with highly accurate approximation in practice~\cite{guiltfile,maldomain}.
\end{comment}
Note that BP is not theoretically guaranteed to converge for arbitrary graphs. However, it has been shown to converge quickly with highly accurate approximations in practice~\cite{guiltfile,maldomain}. After the messages converge, i.e., when they do not change significantly between iterations, we compute the final belief for $i$ as follows:
\begin{equation}\label{eq:belief}
\tag{Eq.2}
b_i(x_i) = C\phi_i(x_i) \prod_{k \in N(i)}m_{ki}(x_i),
\end{equation}
where $C$ is a normalizing constant. Finally, we classify devices as compromised or not based on the final belief.
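To make Eq.~1 and Eq.~2 concrete, the following is a minimal Python sketch of synchronous BP on an undirected graph. The data structures, the normalisation of messages, and the convergence test are simplifying assumptions made for illustration; our actual implementation is written in C (see Section~\ref{sec_analysis}).
\begin{verbatim}
STATES = ('bad', 'good')

def edge_potential(xi, xj, eps):
    # psi_ij(x_i, x_j): eps on matching states, 1 - eps otherwise (Table 1(b))
    return eps if xi == xj else 1.0 - eps

def run_bp(neighbors, priors, eps=0.51, max_iters=10, tol=1e-6):
    """neighbors: dict node -> set of neighbouring nodes.
    priors: dict node -> {'bad': p, 'good': 1 - p} initial beliefs."""
    # msgs[(i, j)][x_j] is i's belief that j is in state x_j
    msgs = {(i, j): {s: 1.0 for s in STATES}
            for i in neighbors for j in neighbors[i]}
    for _ in range(max_iters):
        new_msgs = {}
        for (i, j) in msgs:
            out = {}
            for xj in STATES:                        # Eq. 1
                total = 0.0
                for xi in STATES:
                    prod = priors[i][xi] * edge_potential(xi, xj, eps)
                    for k in neighbors[i] - {j}:
                        prod *= msgs[(k, i)][xi]
                    total += prod
                out[xj] = total
            z = sum(out.values()) or 1.0             # normalise for stability
            new_msgs[(i, j)] = {s: v / z for s, v in out.items()}
        diff = max(abs(new_msgs[e][s] - msgs[e][s])
                   for e in msgs for s in STATES)
        msgs = new_msgs
        if diff < tol:                               # messages converged
            break
    beliefs = {}
    for i in neighbors:                              # Eq. 2
        b = {s: priors[i][s] for s in STATES}
        for k in neighbors[i]:
            for s in STATES:
                b[s] *= msgs[(k, i)][s]
        z = sum(b.values()) or 1.0                   # normalising constant C
        beliefs[i] = {s: v / z for s, v in b.items()}
    return beliefs
\end{verbatim}
A device in the test set is then classified as compromised when its final badness belief exceeds a chosen threshold.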
\section{Dataset}\label{sec_data}
\subsection{Mobile Network Traffic Dataset}\label{sec_ISPdata}
Our dataset contains 5 terabytes of mobile network traffic data collected over 5 days from a Chinese mobile service provider. Note that the ISP does not have any control over the apps running on devices, so it is not straightforward to identify which devices use which apps and to build a device-app bipartite graph from the traffic. In the following, we describe how we identify and extract information about devices and apps from the dataset.
Our approach relies on constructing bipartite graphs using entities extracted from the mobile network traffic including devices and apps. Fig.~\ref{fig:fields} shows the fields we use from IP packets to extract those entities. The \emph{app string} and destination IPs are important in extracting app information; whereas the source IPs are important to extract device information.
\begin{figure}[tb]
\begin{center}
\parbox{0.4\textwidth}{
\centering
\epsfig{file=field.pdf,width=0.28\textwidth
\caption{Simplified IP packet with fields used for data extraction}\label{fig:fields}}
\end{center}
\end{figure}
\subsubsection{Device Extraction}\label{sec:device_extraction}
Following previous research~\cite{appprint,carrier,mosaic,appscanner}, we consider each source IP address as a device. By doing so, we extract 250149 devices from our traffic.
\subsubsection{App Extraction}\label{sec:app_extraction}
Given the ISP traffic, one may utilize various information to extract app information such as app strings in the HTTP header, destination IP addresses in IP header, or unique TCP traffic patterns representing apps~\cite{appprint,appscanner,mosaic,tcp_finger}. We employ and compare two approaches using HTTP and/or IP headers.
\heading{(1) App Extraction using App Strings:} Although HTTPS dominates general web connections, the usage of HTTP is still significant in mobile apps according to recent research~\cite{inconsistent,networkframe,locationtracking,ample,lookat,bugfix,appcracker}. Although Android has defaulted to HTTPS traffic in all apps since 2018, this is not strictly enforced, and developers are still allowed to change the configuration to use HTTP~\cite{androidP}. Indeed, recent studies have observed that only part of the communication (e.g., initial requests) is secured over HTTPS~\cite{debate,mixcontent,appcracker}. This may be due to a few reasons. First, advertisement (ads) traffic accounts for a significant portion of mobile network traffic and is mostly carried over HTTP~\cite{locationtracking,breaking,mixcontent,bugfix,darkside}. Further, HTTPS adds significant costs (e.g., significantly increased latency and energy consumption) due to cryptographic operations and a required extra handshake, which is critical in mobile networks~\cite{httpscost,appcracker}. For the same reason, previous studies have also observed that most malicious traffic is carried over HTTP~\cite{impending,nazca,dissecting}.
\begin{figure}[tb]
\begin{center}
\parbox{0.5\textwidth}{
$GET /open/confirm.htm?pkg= com.sina.news$
\caption{An HTTP header with an app string}\label{pkt}}
\end{center}
\end{figure}
This approach thus focuses on HTTP traffic and extracts app information revealed in the HTTP header. That is, we extract the IP packets containing the app string field in the header. The app string often contains the name of the app binary file. 10\% of our traffic includes explicit app strings. An example of a packet with an app string is presented in Fig.~\ref{pkt}. We treat each unique app string as an app. By doing so, we gather 5870 app strings from our dataset.
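As a concrete illustration, app strings such as the one in Fig.~\ref{pkt} can be recovered from HTTP request lines with a few lines of Python. The parameter name \emph{pkg} follows that example; real traffic may carry the app string under other parameter names, so this sketch is illustrative only.
\begin{verbatim}
import re
from urllib.parse import urlparse, parse_qs

REQUEST_LINE_RE = re.compile(r'^(GET|POST)\s+(\S+)', re.IGNORECASE)

def extract_app_string(http_request_line):
    # Return the app string carried in the 'pkg' query parameter, if any.
    m = REQUEST_LINE_RE.match(http_request_line)
    if not m:
        return None
    params = parse_qs(urlparse(m.group(2)).query)
    values = params.get('pkg', [])
    return values[0] if values else None

# extract_app_string('GET /open/confirm.htm?pkg=com.sina.news')
# -> 'com.sina.news'
\end{verbatim}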
\heading{(2) App Extraction using IP:}
Most mobile apps require a network connection between devices and their host servers while being downloaded, installed, or executed. One way to extract app information without explicit app strings (e.g., from HTTPS traffic) is thus to use each destination IP, or a group of destination IPs, as the counterpart of an app, since an IP may represent a server hosting a specific app. As a first step towards dealing with traffic without app strings, we explore a naive approach that treats each IP as an app. Note that a single IP often does not reliably represent an app for several reasons~\cite{robust,appscanner}. Indeed, in Section~\ref{accuracy}, we will show that identifying compromised devices using only destination IPs results in high false positive rates. Meanwhile, we discuss how it can be improved in Section \ref{discussion}. To fairly compare the two approaches, we use the same HTTP dataset, from which 6150 destination IPs are extracted.
\begin{figure}[htbp]
\begin{center}
\parbox{0.5\textwidth}{
\centering
\subfigure[The CDF of the number of devices for all apps]{
\epsfig{file=app_device_cnt.pdf,width=0.24\textwidth, height=0.15\textheight}\label{num_devices_app}}
\subfigure[The CDF of the number of devices for all destination IPs]{
\epsfig{file=dstip_device_cnt.pdf,width=0.24\textwidth, height=0.15\textheight}\label{num_devices_ip}}
\caption{The distribution of number of devices}\label{cdf_num_devices}
\end{center}
\end{figure}
\subsubsection{A Device-App Graph}\label{sec:device_cnt}
By the end of this process, we have a mapping between devices and apps (either app strings or destination IPs), i.e., edges. Fig.\ref{num_devices_app} presents the CDF of the number of devices, where the x-axis represents the number of devices having each app string and the y-axis represents the corresponding CDF (i.e., the portion of apps). Note that nearly 50\% of apps have only one device. This is mainly because app strings may also include version names or market names of the apps, such as com.sina.news-7.19.3 and com.supercell.clashofclans.baidu, which we consider as individual apps.
Fig.\ref{num_devices_ip} presents the CDF of the number of devices where the x-axis represents the number of devices connecting to each destination IP and the y-axis represents the corresponding CDF (i.e., the portion of destination IPs). Fig.~\ref{cdf_num_devices} suggests that each device will have relatively more shared IPs with another, compared to the number of shared apps. We discuss the effect of this distribution in Section~\ref{accuracy}.
\subsection{Ground Truth Sets and Definitions for Detecting Compromised Devices}
Our approach requires the small sets of ground truth about compromised and not-compromised devices to apply BP. However, given the vast number of unknown apps and little, if any, knowledge on devices in practice, it is often hard to build a ground truth set. We first collect ground truth sets for apps (destination IPs and app binaries) (Section~\ref{sec:app_groundtruth}) and construct ground truth sets for devices (Section~\ref{sec:device_groundtruth}).
\subsubsection{Ground Truth Sets for Apps}\label{sec:app_groundtruth}
One may utilize any one or multiple intelligence sources to build a ground truth set for apps. We use VirusTotal (VT)~\cite{virustotal} to collect a ground truth set for apps. VT is a security intelligence portal for IPs, URLs and binaries, based on third-party anti-virus engines, widely used in the literature for building ground truth~\cite{beyondgoogleplay,unknown_malice,mastino_mining,impending}. For each query, VT aggregates the responses from more than 50 engines, each of which categorizes the queried IP, binary or URL to \emph{malicious} or \emph{benign}.
\begin{figure}[tb]
\begin{center}
\parbox{0.4\textwidth}{
\centering
\epsfig{file=market_stat.pdf,width=0.35\textwidth, height=0.18\textheight}
\caption{The CDF of the number of published app stores}\label{market_stat}}
\end{center}
\end{figure}
\heading{(1) App Binary Analysis}: To build a ground truth set, we first attempt to download Android binaries from 16 popular Chinese app stores~\cite{beyondgoogleplay} by searching for the app strings extracted in Section~\ref{sec:app_extraction}. App strings in the traffic are sometimes not readable, as they can be truncated or represented as simple digits or a serial number~\cite{appprint,ample}. Among the 5870 app strings, 2367 were found in at least one of the 16 app stores. Fig.\ref{market_stat} presents the Cumulative Distribution Function (CDF) of the number of app stores, where the x-axis represents the number of app stores having each app string and the y-axis represents the corresponding CDF (i.e., the portion of apps). Note that we also verify that the binaries indeed generate the corresponding app strings by capturing network traffic while installing and executing the binaries on two mobile devices (i.e., a Samsung Galaxy Note4 and a Sony Xperia Z3 Dual).
We upload the binaries to VT and check whether each is marked as malicious. Note that 29\% of app strings are published in multiple app stores, as shown in Fig.\ref{market_stat}. However, we observe that the maliciousness of each app is the same regardless of the app store from which it is downloaded. This observation agrees with prior research: the app string often correctly represents a specific app~\cite{beyondgoogleplay,usagepattern}. Furthermore, Haoyu \emph{et al.} observed in a large-scale study on Android apps that recent mobile malware does not spread by repackaging as much as before~\cite{beyondgoogleplay}. We thus argue that it is reasonable to rely on the app string to identify and evaluate each app.
Among 2367 binaries, 1711 apps are flagged as malicious by at least one VT engine and 656 apps are not flagged by any engine. Previous research suggests that evaluation based on VT may have a limited coverage or noise due to multiple reasons ~\cite{appriskanalysis,beyondgoogleplay,unknown_malice,droppereffect}. To reduce potential false positives, we label each app using thresholds as follows.
If an app is detected as malicious by at least $vt$ of the 60 VT engines, we label the app as \textbf{bad}; if an app is detected as malicious by at least one but fewer than $vt$ engines, we label the app as \textbf{suspicious}; if an app is not detected as malicious by any engine, we assume that the app is \textbf{good}; if we are not able to find a corresponding binary, we consider the app as \textbf{no-info}.\footnote{For consistency reasons, we follow previous work~\cite{polonium,guiltfile,fraudeagle,speagle} in choosing ``good'' and ``bad'' as BP labels though they may not sound technical.}
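The labelling rule above can be summarised by the following sketch; the per-engine report format is an assumption made for illustration, not the actual VT response schema.
\begin{verbatim}
def label_app(vt_report, vt_threshold=5):
    """vt_report: dict engine_name -> True if the engine flags the app as
    malicious, or None when no binary/report is available for the app."""
    if vt_report is None:
        return 'no-info'
    detections = sum(1 for flagged in vt_report.values() if flagged)
    if detections >= vt_threshold:
        return 'bad'
    if detections >= 1:
        return 'suspicious'
    return 'good'
\end{verbatim}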
Note that we aim to identify unknown compromised devices that may have installed unknown malicious apps. Many app stores are known to perform a vetting process to identify and remove malicious apps from the stores~\cite{beyondgoogleplay,unknown_malice,getoff}. As it is relatively easy to detect popular yet bad apps through such a general vetting process,
we exclude popular apps and app libraries. Concretely, we consider an app popular if it is used by more than $N_{p}$ devices. This filtering is important as it helps us avoid a number of false positives which can be induced by false associations in a graph-based approach~\cite{snare,maldomain,codaspy_issa,exposure}. We discuss the effect and limitations of this filtering in Section~\ref{accuracy} and Section~\ref{discussion}.
Table~\ref{gt_app} summarizes the number of bad apps with various $vt$ and $N_p$ thresholds.
\begin{table}[tb
\begin{center}
\small
\begin{tabular}{|c||c|c|c|c|c|}
\hline
\backslashbox[2.5cm]{App\\popularity}{VT}
& $vt=3$ & $vt=4$ &$vt=5$ &$vt=6$ &$vt=7$ \\
\hline\hline
Not-filtered &1195&1060&955 & 845&764\\
\hline
$N_{p}$ = 10000 &1190&1056&951 & 841&761\\
\hline
$N_{p}$ = 5000 &1189&1056&951 & 841&761\\
\hline
$N_{p}$ = 1000 &1178&1047&943 & 833&753\\
\hline
\end{tabular}
\caption{The number of bad apps with varying $vt$ and $N_p$ thresholds}
\label{gt_app}
\end{center}
\end{table}
\heading{(2) Destination IP Analysis}: Towards dealing with traffic without explicit app strings, we inspect destination IPs that devices are connecting to. Particularly, we checked 6150 destination IPs in our dataset extracted in Section~\ref{sec:app_extraction} against VT. Following previous research~\cite{codaspy_issa}, we label an IP as \textbf{bad}, if the IP is detected as malicious by two or more engines in VT; if the IP is detected as malicious by only one engine, we label it as \textbf{suspicious}. By doing so, we find 528 bad IPs.
\subsubsection{Ground Truth Sets for Devices}\label{sec:device_groundtruth}
Given the ground truth set for apps, we define a \textbf{bad device} as one using at least $N(A_b)$ bad apps, where $A_b$ is the set of bad apps; we define a \textbf{good device} as one not using any bad or suspicious apps. Note that we have two ground truth sets for apps (i.e., based on (1) app strings and (2) destination IPs). Accordingly, we also build two ground truth sets for devices.
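For clarity, the device labelling rule can be written as the following sketch; the input shapes and the default threshold are illustrative assumptions (the default of 2 mirrors the experimental setting in Section~\ref{sec_analysis}).
\begin{verbatim}
def label_device(device_apps, app_labels, n_bad_threshold=2):
    """device_apps: set of app identifiers observed for the device.
    app_labels: dict app id -> 'bad' / 'suspicious' / 'good' / 'no-info'."""
    labels = [app_labels.get(a, 'no-info') for a in device_apps]
    n_bad = labels.count('bad')
    if n_bad >= n_bad_threshold:          # at least N(A_b) bad apps
        return 'bad'
    if n_bad == 0 and labels.count('suspicious') == 0:
        return 'good'                     # no bad or suspicious apps
    return 'unknown'
\end{verbatim}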
\heading{(1) An App Binary Based Ground Truth Set: } Table~\ref{gt_device} summarizes the number of devices given the ground truth sets in Table.~\ref{gt_app}.
\begin{table}[tb
\begin{center}
\small
\begin{tabular}{|c|p{0.58cm}|p{0.58cm}|p{0.58cm}|p{0.58cm}|p{0.55cm}||p{0.65cm}|}
\hline
\backslashbox[2.1cm]{App\\popularity}{VT}
& $vt=3$ & $vt=4$ &$vt=5$ &$vt=6$ &$vt=7$ & good device \\
\hline\hline
Not-filtered &25847&16889&15449 & 14391&9280&53162\\
\hline
$N_{p}$ = 10000 &7547&5347&4385 & 3621&3169&25782\\
\hline
$N_{p}$ = 5000 &6871&5347&4385 & 3621&3169&24231\\
\hline
$N_{p}$ = 1000 &2759&2371&2004 & 1400&1086&12923\\
\hline
\end{tabular}
\caption{The number of devices with varying $vt$ and $N_p$ thresholds}
\label{gt_device}
\end{center}
\end{table}
\heading{(2) A Destination IP-based Ground Truth Set: }
We build a ground truth set for devices given the ground truth sets in Section~\ref{sec:app_groundtruth}(2).
By doing so, we found 442 bad devices connecting to 2 or more bad IPs and 13794 good devices.
\section{Experimental Results and Analysis}\label{sec_analysis}
\subsection{Experimental Setup}
\begin{table}[tb
\small
\centering
\begin{tabular}{|p{1.2cm}|p{5.5cm}|p{1cm}|}\hline
\textbf{Notation} & \textbf{Description} & \textbf{Default}\\
\hline
$N_p$ & The threshold to define a popular app & 1000 \\
\hline
$vt$ & The threshold for the number of VT engines detecting the app as malicious & 5 \\
\hline
$N(A_b)$ & The number of bad apps a bad device must have & 2\\
\hline
$\epsilon$ & The edge potential parameter & 0.51\\
\hline
$\delta$ & The initial belief parameter & 0.99\\
\hline
\end{tabular}
\caption{List of parameters for experiments and default values}
\label{param}
\end{table}
Table \ref{param} summarizes the notation for the parameters used in our experiments.
\textbf{\emph{BP Implementation:}} We implemented BP in C following the implementation in~\cite{maldomain}, as it was shown to be fast and scalable for large graphs. It only takes 1.25 seconds to run 10 BP iterations on average for our graphs.
\textbf{\emph{Ground Truth Sets for BP:}} In our bipartite graph, we have two types of nodes: apps and devices. Although BP can be used to classify both apps and devices in principle, we focus on the classification of devices. We thus consider the badness/goodness of apps as unknown.
Hence, BP inference is driven by two different sets of device ground truth labels: bad devices ($D_B$) and good devices ($D_G$). The number of instances in each set is described in Section~\ref{sec:device_groundtruth}. Note that the original data set is not balanced.
An unbalanced training set, however, would let the dominant class bias the final result~\cite{snare,maldomain}. We thus randomly choose an equal number of instances from each set to avoid such bias. For example, as described in Table~\ref{gt_device}, there are 12923 good devices and 2004 bad devices if we set $vt=5$ and $N_{p}= 1000$. In such a case, we use all 2004 bad devices as $D_B$, and randomly choose 2004 out of the 12923 good devices as $D_G$.
\textbf{\emph{Cross Validation:}} To validate our approach, we perform k-fold cross validation. We randomly divide each of $D_B$ and $D_G$ into k folds and run BP $k$ times. In each BP run, we use one of the $k$ folds as a testing set and the remaining $k-1$ folds as a training set. We rotate the testing fold across the $k$ folds in the $k$ runs of BP and the final results are the average of the results from the $k$ BP runs. For our data set, BP converges fast, and therefore we do not limit its number of iterations.
As discussed in Section~\ref{sec_bp}, each node has two belief scores representing its badness ($P(Bad)$) and goodness ($P(Good) = 1 - P(Bad)$), respectively. For simplicity, we only mention the badness scores in this section. In each BP run, we consider the devices in the testing set as unknowns and hence, their initial beliefs are set to 0.5, as described in Table~\ref{bp_table}(a). The initial beliefs of devices in the training set are set according to their ground truth labels. Specifically, we set $\delta$ = 0.99
and hence, the initial badness beliefs of devices in the training set from $D_B$ are set to 0.99, and 0.01 for those in the training set from $D_G$.
Devices in the testing set are labeled based on their average final beliefs. Specifically, we vary the threshold for final beliefs, and classify a device whose final belief is above the threshold as bad; otherwise we classify it as good. We then compute the true positive rate as the ratio of the number of bad devices that are correctly classified to the total number of bad devices in the test set. Similarly, the false positive rate is computed as the ratio of the number of good devices that are misclassified to the total number of good devices in the test set.
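The evaluation loop described above can be sketched as follows; the helper \emph{run\_bp} refers to the BP sketch in Section~\ref{sec_bp}, and the balanced sampling, fold construction, and threshold grid are simplifying assumptions made for illustration.
\begin{verbatim}
import random

def kfold_roc(neighbors, bad_devices, good_devices, k=10, delta=0.99):
    # Balance the classes, then rotate a test fold across k runs of BP.
    bad = random.sample(sorted(bad_devices), len(bad_devices))
    good = random.sample(sorted(good_devices), len(bad_devices))
    folds = [(bad[i::k], good[i::k]) for i in range(k)]
    points = []
    for i in range(k):
        test_bad, test_good = folds[i]
        train_bad = set(bad) - set(test_bad)
        train_good = set(good) - set(test_good)
        priors = {}
        for node in neighbors:
            if node in train_bad:
                priors[node] = {'bad': delta, 'good': 1 - delta}
            elif node in train_good:
                priors[node] = {'bad': 1 - delta, 'good': delta}
            else:   # test devices, unknown devices, and all apps
                priors[node] = {'bad': 0.5, 'good': 0.5}
        beliefs = run_bp(neighbors, priors)
        for thr in [x / 100 for x in range(0, 101, 5)]:
            tpr = sum(beliefs[d]['bad'] > thr for d in test_bad) / len(test_bad)
            fpr = sum(beliefs[d]['bad'] > thr for d in test_good) / len(test_good)
            points.append((thr, tpr, fpr))
    return points  # average per threshold across folds to draw the ROC curve
\end{verbatim}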
\subsection{Device Classification Accuracy}\label{accuracy}
To show the detection accuracy, we present a series of ROC curves where: the x-axis represents the false positive rate (FPR), the y-axis represents the true positive rate (TPR), and each point in the ROC curves represents a different threshold for the BP final belief scores. While doing so, we vary the parameters in Table \ref{param} that might affect the performance.
\heading{Varying the Ground Truth Set.} Fig.\ref{device_ip_app_roc} shows the ROC curves while varying the ground truth sets built based on destination IPs and app binaries. The figure clearly shows that the classification with the destination IP based ground truth set provides only modest accuracy. The best FPR and TPR it could achieve are 0.17 and 0.804, respectively. Such an accuracy is not acceptable in practice as it misclassifies a considerable number of devices. This is mainly because a single destination IP does not correctly represent an app, and thus a user's general behavior, while most activities on mobile devices occur through apps. Hence, if we treat each IP as an app, our approach will consider all devices connecting to the same IP as related, which in turn results in many false associations harming the accuracy of the graph-inference approach. In fact, 39\% of IPs are used by servers hosting two or more unrelated apps. In the following, we thus focus on detection with the app-binary based ground truth set, and discuss how detection with a destination IP-based ground truth set can be improved in Section~\ref{discussion}.
\begin{figure*}[htbp]
\begin{center}
\parbox{1.0\textwidth}{
\centering
\subfigure[Varying ground truth sets]{
\psfig{file=device_ip_app_roc.pdf,width=0.23\textwidth, height=0.18\textheight}\label{device_ip_app_roc}}
\subfigure[Varying popular app definitions]{
\psfig{file=roc_popularapps.pdf,width=0.23\textwidth, height=0.18\textheight}\label{roc_popularapp}}
\subfigure[Various $vt$]{
\psfig{file=roc_1000vt3_4_5_6_7.pdf,width=0.23\textwidth, height=0.18\textheight}\label{vt5_3_1}}
\subfigure[Various $\epsilon$]{
\psfig{file=roc_1000vt5.pdf,width=0.23\textwidth, height=0.18\textheight}\label{roc_epsilon}}
}
\caption{ROC curves with various parameter setting}\label{all_roc_curves}
\end{center}
\end{figure*}
\heading{Varying the Definition of Popular Apps.} Fig.\ref{roc_popularapp} shows the ROC curves while varying the definition of popular apps. Although the effect is not significant, the figure shows that with less filtering, such as $N_{p}= 10000$, the false positive rate increases. This is expected because many devices, if not all, using popular apps will be considered related, resulting in false associations~\cite{snare,maldomain,codaspy_issa,exposure}.
\heading{Varying $vt$ Values.} Note that there is no consensus on the right value for $vt$~\cite{appriskanalysis,beyondgoogleplay}. We thus show the results with various $vt$ values, i.e., $vt \in \{3, 4, 5, 6, 7\}$, in Fig.~\ref{vt5_3_1}. As the figure shows, there is no significant difference in the false positive rates and true positive rates when changing $vt$. We thus use $vt=5$ for further discussion in the following without loss of generality.
\heading{Varying Edge Potentials $\epsilon$.} Fig.\ref{roc_epsilon} shows the ROC curves while varying $\epsilon$. To be specific, when $\epsilon = 0.51$, we achieve a 0.89 TPR with only a 0.002 FPR. Interestingly (and differently from BP behavior in other applications such as~\cite{codaspy_issa,maldomain}), our results are sensitive to $\epsilon$, the edge potential parameter of BP. As shown in Fig.\ref{roc_epsilon}, as $\epsilon$ increases, the false positive rates increase. Furthermore, we achieve the best accuracy with low values of $\epsilon$ (e.g., 0.51). In the next section, we present an in-depth analysis to demonstrate the distinctive characteristics of the mobile traffic data that lead to this behavior.
\subsection{In-Depth Analysis of BP behavior}\label{sec_graphstruct}
Identifying the most effective edge potential values to accurately classify graph nodes using BP is a known problem, and recent work aims to automate the process of identifying such values\cite{edgepotential}. That said, it has also been observed in a variety of previous work \cite{maldomain,codaspy_issa,fraudeagle} that the accuracy of BP is not significantly impacted by different $\epsilon$ values. On the contrary, recall that our results are sensitive to $\epsilon$ as discussed in Section~\ref{accuracy}. In this section, we shed light on why edge potential value $\epsilon$ has an impact on our results, by providing in-depth analysis on distinctive network properties of two bipartite graphs from different applications. In the first graph (\emph{Mobile}), $\epsilon$ has obvious impact on accuracy, while in the other (\emph{DNS}), $\epsilon$ has no notable impact on accuracy.
\emph{Mobile} represents the bipartite graph built from our dataset. For our experiments, we have used various ground truth sets while changing $vt$ to define a bad device, which,
as shown in Fig.\ref{vt5_3_1}, have no significant impact on false positive rates and true positive rates. We also note that nodes in the ground truth drawn with different $vt$s do not have much topological difference. Without loss of generality, we thus use the ground truth drawn with $vt=5$ to provide analysis in the following.
\emph{DNS} represents the bipartite graph between domains and IPs built from the active DNS dataset in \cite{codaspy_issa}. To compare the impact of $\epsilon$ in different networks, we obtained the domain-ip bipartite graph and the ground truth labels on domains used in their work~\cite{codaspy_issa}. Concretely, in their bipartite graph, domains and IPs are connected as edges, each of which represents a domain resolving to an IP. They collected ground truth for bad domains by checking domains against VT and for good domains by checking domains against Alexa top list~\cite{alexa}.
\begin{figure}[tb]
\begin{center}
\parbox{0.4\textwidth}{
\centering
\epsfig{file=auc_newdata.pdf,width=0.38\textwidth, height=0.2\textheight}
}
\caption{AUC when varying $\epsilon$ for Mobile and DNS}\label{auc}
\end{center}
\end{figure}
To clearly capture the sensitivity to $\epsilon$ in each of the two graphs (\emph{Mobile} and \emph{DNS}), we measure the area under the ROC curve (AUC). Fig.\ref{auc} shows AUC with varying $\epsilon$, where the x-axis represents $\epsilon$ and the y-axis represents the corresponding AUC for each graph. The figure clearly shows that the classification accuracy in \emph{Mobile} gets lower (from 0.98 to 0.97), as we increase $\epsilon$ by 0.1 (0.51, 0.6, 0.7, 0.8, 0.9). On the other hand, the classification accuracy in \emph{DNS} stays almost the same (0.96), regardless of $\epsilon$. We argue that this different behavior of BP is due to the network structures and the topological locations of nodes in the ground truth.
For any two nodes $S$ and $T$ in the graph, their impact on each other depends on multiple variables, the most important of which are: (1) the length of the path between $S$ and $T$, (2) the number of paths between $S$ and $T$, and (3) the edge potential parameter $\epsilon$.
\begin{comment}
\begin{itemize}
\item the length of the path between $S$ and $T$,
\item the number of paths between $S$ and $T$, and
\item the edge potential parameter $\epsilon$.
\end{itemize}
\end{comment}
First, the longer the path between $S$ and $T$, the smaller $S$'s impact on $T$. This is because the influence carried by the edge potential diminishes as a message travels along the path between the two nodes (the message is attenuated by a fractional factor once for every edge on the path).
As a result, the final badness score will be insensitive to $\epsilon$ for graphs with long paths.
Second, the larger the number of paths between $S$ and $T$, the higher the impact of $S$ on $T$. This is because the final belief at $T$ is a function of the product of messages received on each path from $S$ to $T$. For example, assume that a bad node $S$ has $p$ paths to $T$, then $S$ sends a bad message $m_B(i)$ and a good message $m_G(i)$ on a path $i$. Since $S$ is bad, $m_B(i)$ is larger than $m_G(i)$. The final bad (good) impact of $S$ on $T$ is a function of the product of the $m_B(i)$ ($m_G(i)$) messages from all the $p$ paths. The larger the number of paths ($p$), the higher the difference between the $m_B(i)$ product and the $m_G(i)$ product, and hence, the higher the final badness score (due to the assumption that $S$ is bad in the example).
Finally, if we set $\epsilon = 1$, the path length will no longer have any impact, because length-1 has the same impact as length-1000; if we set $\epsilon$ close to 0.5, $b_S$'s impact on $b_T$ greatly diminishes except for very short paths (e.g., 2). \\
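A back-of-the-envelope illustration of the first and third points: along a single chain of otherwise-uninformative nodes (uniform 0.5 priors), each hop multiplies the sender's deviation from 0.5 by $(2\epsilon - 1)$, so a node's influence decays geometrically with path length. This toy calculation, which is not drawn from our experiments, shows why $\epsilon$ matters only when paths are short:
\begin{verbatim}
def propagated_badness(prior_bad, eps, path_length):
    # Deviation from 0.5 shrinks by a factor (2*eps - 1) per hop along a
    # chain of nodes whose own priors are uninformative (0.5, 0.5).
    deviation = (2 * prior_bad - 1) * (2 * eps - 1) ** path_length
    return 0.5 + deviation / 2

for eps in (0.51, 0.9):
    for length in (2, 6, 12):
        print(eps, length, round(propagated_badness(0.99, eps, length), 3))
# eps = 0.51: the received badness is ~0.500 after only 2 hops;
# eps = 0.90: a 2-hop path still delivers ~0.81, and 12 hops still ~0.53.
\end{verbatim}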
We next compare the datasets from the two graphs ({\em Mobile} and {\em DNS}) in terms of their topological features. Specifically, we are interested in the nodes in the ground truth set.
\textbf{Shortest path length:} Consider two clusters: bad ($C_B$) and good ($C_G$). The important intuition behind BP using homophily relationship is that each cluster's intra-cluster distance is supposed to be low, whereas inter-cluster distance between two clusters is supposed to be high. We thus measure the intra-cluster and inter-cluster distances in terms of the shortest path lengths between all pair of nodes in $C_B$ and $C_G$.
Fig.\ref{heatmap} provides the matrix representing the shortest path lengths between nodes in $C_B$ and $C_G$. The lengths range from 0 to 20, each of which is illustrated as a color between black (0) and yellow (20) in the matrix. The figure shows a few important observations. First, in both datasets, intra-cluster distances are generally smaller (darker and greener colors in the figure) than inter-cluster distances between $C_B$ and $C_G$. Second, $C_B$'s intra-cluster distances are the lowest (i.e., the darkest and greenest colors in the figure) in both datasets. Finally, the difference between intra-cluster and inter-cluster distances in $DNS$ is much larger than that in $Mobile$.
\begin{figure}[htbp]
\begin{center}
\parbox{0.4\textwidth}{
\centering
\epsfig{file=heatmap_newdata.pdf,width=0.38\textwidth
\caption{Shortest path length matrix between bad/good and bad/good nodes}\label{heatmap}}
\end{center}
\end{figure}
\begin{figure*}[htbp]
\begin{center}
\parbox{1.0\textwidth}{
\centering
\subfigure[CDF between good nodes]{
\psfig{file=benign_cdf_newdata.pdf,width=0.3\textwidth, height=0.18\textheight}\label{benign_cdf}}
\subfigure[CDF between bad nodes]{
\psfig{file=mal_cdf_newdata.pdf,width=0.3\textwidth, height=0.18\textheight}\label{mal_cdf}}
\subfigure[CDF between bad and good nodes]{
\psfig{file=benign_mal_cdf_newdata.pdf,width=0.3\textwidth, height=0.18\textheight}\label{benign_mal_cdf}}
\caption{Shortest path length CDF between bad/good and bad/good nodes}\label{short_path}
\end{center}
\end{figure*}
Fig.\ref{short_path} presents the CDF of shortest path lengths between nodes in $C_B$ and $C_G$, where the x-axis represents the shortest path lengths
and the y-axis represents the corresponding CDF (i.e., portion of node pairs); one line per each dataset. As shown in Fig.\ref{short_path}, the maximum lengths are 8 and 20 in Mobile and DNS, respectively.
Interestingly, $C_B$'s intra-cluster distances are similar in the two datasets. Specifically, 86.7\% of path lengths are within 4 (i.e., 2 or 4) in $Mobile$, and 79.9\% of path lengths are within 4 in $DNS$.
On the other hand, we observe different characteristics in $C_G$'s intra-cluster distances, and in the inter-cluster distance between $C_B$ and $C_G$, for each dataset. In \emph{Mobile}, 91\% of path lengths between nodes in $C_G$ are smaller than or equal to 6 and only 9\% of path lengths are greater than 6; these are in fact similar to the inter-cluster distances between $C_B$ and $C_G$, where 97.5\% of path lengths are smaller than or equal to 6 and only 2.5\% of path lengths are greater than 6. In \emph{DNS}, 60\% of path lengths between nodes in $C_G$ are smaller than or equal to 6, while 90\% of path lengths between nodes in $C_B$ and $C_G$ are more than 6. In other words, although the intra-cluster distance is smaller than the inter-cluster distance in both datasets (i.e., the homophily relationship holds), the difference between intra-cluster and inter-cluster distances in
\emph{Mobile} is relatively small. By contrast, the difference is relatively large in \emph{DNS}. On average, the difference between $C_B$'s intra-cluster distance and the inter-cluster distance is only 0.9 in \emph{Mobile}, whereas the difference is 8 in \emph{DNS}, as shown in Table \ref{avg_short}.
\begin{table}[htb
\centering
\small
\begin{tabular}{|c|c|c|c|}\hline
\textbf{}& Good-Good & Bad-Bad & Bad-Good\\
\hline
\textbf{Mobile} & 5.542 & 4.089 & 4.922\\
\hline
\textbf{DNS} & 6.43 &4.523 &12.062\\
\hline
\end{tabular}
\caption{Average shortest path length}
\label{avg_short}
\end{table}
Recall how the path length and $\epsilon$ affect the behavior of BP. Relatively long inter-cluster distance (i.e., 12) diminishes the impact of bad (good) domains on good (bad) domains, irrespective of $\epsilon$ in \emph{DNS}. On the other hand, $\epsilon$ plays a big role in classification accuracy in \emph{Mobile}, due to the small differences between intra-cluster and inter-cluster distances. Concretely, bad devices have more impact on good ones when we use higher $\epsilon$, resulting in the higher false positives. Hence, it is required to carefully choose $\epsilon$ close to 0.5 (e.g., 0.51) to avoid high false positives.
\textbf{Closeness centrality (CC)} of a node measures the average length of the shortest paths from the node to others \cite{trafficgraph}. Concretely, $CC$, of node $u$ is computed as:
\begin{displaymath}
CC_u = (N-1)\big/\sum_v l(v,u),
\end{displaymath}
where $N$ is the number of nodes in the graph and $l(v,u)$ is the shortest path length between $u$ and node $v$.
Essentially, CC takes into account both factors: the number of paths and the shortest path lengths. If all nodes in the graph are highly connected to each other with short path lengths, the CCs of all nodes will be similar. Indeed, the average CCs of bad and good devices in \emph{Mobile} are similar ($\approx$ 0.22) as shown in Table \ref{network_property}. On the other hand, the average CC of bad domains is relatively small (0.088), compared to those of good and unknown domains (0.141 and 0.113, respectively) in \emph{DNS}.
\begin{table}[htbp
\begin{center}%
\begin{tabular}{|p{3cm}||p{1.7cm}|p{1.7cm}|}\hline
\textbf{}&\textbf{Closeness Centrality} &\textbf{Eigenvector Centrality} \\
\hline\hline
Mobile(Bad) & 0.223 & 0.0087\\
\hline
Mobile (Good) & 0.219& 0.0022\\
\hline
Mobile (Unknown) & 0.219& 0.0031\\
\hline\hline
DNS (Bad) & 0.088 & 0.005\\
\hline
DNS (Good) & 0.141 & 0.179 \\
\hline
DNS (Unknown) & 0.113 & 0.006 \\
\hline
\end{tabular}
\caption{Network properties of the Mobile and DNS datasets}
\label{network_property}
\end{center}
\end{table}
Along with the average shortest path lengths given in Table \ref{avg_short}, we can conclude that the bad nodes in \emph{DNS} are much farther from other nodes and have fewer paths to other nodes, while good nodes are highly connected to good or unknown nodes, which is expected. This is because good domains are not likely to have many connections to bad domains, but have many connections to good or unknown domains. Hence, the classification accuracy is not sensitive to $\epsilon$ in \emph{DNS}.
\textbf{Eigenvector centrality (EC)} of a node measures its influence in the graph. Concretely, the $EC$ of node $u$ is computed as:
\begin{displaymath}
EC_u = \kappa_1^{-1}\sum_vA_{uv}EC_v,
\end{displaymath}
where the sum runs over $u$'s neighbors $v$, $A$ is the adjacency matrix of the graph, and $\kappa_1$ is its largest eigenvalue.
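Both centrality measures reported in Table \ref{network_property} can be computed directly on the device-app graph, for example with networkx as sketched below; grouping the nodes by their ground-truth label is an illustrative sketch rather than our exact measurement code.
\begin{verbatim}
import networkx as nx
from statistics import mean

def centrality_by_label(G):
    # Average closeness and eigenvector centrality per ground-truth label.
    cc = nx.closeness_centrality(G)
    ec = nx.eigenvector_centrality(G, max_iter=1000)
    groups = {}
    for node, data in G.nodes(data=True):
        groups.setdefault(data.get('label', 'unknown'), []).append(node)
    return {label: {'closeness': mean(cc[n] for n in nodes),
                    'eigenvector': mean(ec[n] for n in nodes)}
            for label, nodes in groups.items()}
\end{verbatim}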
\begin{figure*}[ht]
\begin{center}
\parbox{0.9\textwidth}{
\subfigure[Mobile eigenvector centrality distribution]{
\psfig{file=eigen_newdata.pdf,width=0.38\textwidth, height=0.2\textheight}\label{newbenign_eigen}}
\subfigure[DNS eigenvector centrality distribution]{
\psfig{file=mada_eigen.pdf,width=0.38\textwidth, height=0.2\textheight}\label{mada_eigen}}
}
\caption{Eigenvector centrality distributions of the Mobile and DNS datasets}\label{eigen}
\end{center}
\end{figure*}
A node with a high EC is highly connected to other \emph{influential} nodes. That is, messages most frequently pass through nodes with high EC, so such nodes play a key role during the belief propagation process. As shown in Table \ref{network_property}, there is a clear difference in ECs between the \emph{Mobile} and \emph{DNS} graphs. In general, the average ECs of bad, good, and unknown devices are similar (i.e., 0.0087, 0.0022, and 0.0031, respectively) in the \emph{Mobile} graph. This means that all nodes in the graph are highly connected with each other, so there are no significantly influential nodes in the graph. Note that the EC of bad devices is the highest, meaning that as a higher $\epsilon$ is used, the scores of bad devices can dominate the network, resulting in high false positives.
On the other hand, the average EC of good domains (0.179) are much higher than those of bad and unknown domains (0.005 and 0.006, respectively) in \emph{DNS} graph.
Fig.\ref{eigen} shows the distribution of eigenvector centralities of bad and good nodes in each dataset. Similar to results from $CC$, Fig.\ref{mada_eigen} shows that bad domains in \emph{DNS} are significantly further from other nodes and are not connected to influential nodes, meaning that there is a smaller number of paths to other nodes. Although the $EC$s of good domains are high on average, they are well-distributed. This is in fact expected, as there can be influential and non-influential domains.
By our definition in \emph{Mobile}, bad devices can have edges with all types of apps (i.e., bad, good, suspicious, and no-info apps); good devices can have edges with good and no-info apps. This means that good devices could have a similar number of paths with both good and bad devices; bad devices, however, have more paths with other bad devices than good devices. Consequently, bad devices become relatively influential and connected to other influential bad devices, resulting in the relatively high $EC$s as shown in Fig.\ref{newbenign_eigen}.
Recall how the number of paths and $\epsilon$ affect the behavior of BP. One important observation from Fig.\ref{eigen} is that bad devices are more influential on others than good devices in \emph{Mobile}, whereas bad domains are less influential on others in \emph{DNS}. Along with the results in Table \ref{avg_short}, we can conclude that bad devices receive more influence from other bad devices, especially from the influential ones, than from good devices, so good devices' messages have relatively little impact on bad devices. Consequently, there is not much change in false negatives, irrespective of $\epsilon$, as opposed to false positives.
\section{Post Analysis of Classified Devices}\label{sec_post_analysis}
Recall that our goal is to identify unknown compromised devices whose owners often inadvertently install apps without much consideration of the consequences. To further verify the accuracy of the classification results in Section~\ref{accuracy}, we measure the private information leakage on classified devices (Section~\ref{sec_leakage_analysis}) and study the underlying network infrastructure accessed by the devices (Section~\ref{sec_infratructure}).
\subsection{Privacy Leakage}\label{sec_leakage_analysis}
It is known that devices having bad apps often leak private information~\cite{droidjust,credroid}. We examine samples of classified good and bad devices to study private information leakage on them.
\textbf{Ethics}: It is important to note that our research is conducted on an anonymized version of the dataset where possible privacy concerns are carefully considered and addressed. Before analysis, all identifiable and personal information (e.g., phone number, user names) or device identifiers appearing in the traffic is anonymized or replaced with pseudo information.
We have compiled a list of private information which often leaks in mobile networks based on previous research~\cite{taintdroid,recon,mosaic,bugfix}. It has been shown that privacy leaks often occur in a structured format, i.e., a key-value pair in HTTP headers~\cite{recon,bugfix,locationtracking}. For example, a login password leaks with \emph{pwd=mypaSS123}, where \emph{pwd} is a key and \emph{mypaSS123} is a value. Note that the key for a specific type of private information might be different depending on each app or device~\cite{recon,bugfix}. From the dataset, we heuristically extract keywords that are highly related to each type of private information. Examples of such keywords are summarized in Table~\ref{pii_keywords} in the appendix. Note that we only present a few examples of keywords in Table~\ref{pii_keywords}, as the remaining keywords are small variations of those shown, such as imei1 and imei7.
Note that we also validate that the keywords are in fact used to leak the corresponding type of information by running apps on the two mobile devices mentioned in Section~\ref{sec_data}. The private information might have been obfuscated in the traffic (e.g., hashing). However, it has been shown that most information leakage occurs in plaintext in mobile networks~\cite{recon}. We thus argue that the following results represent the general behavior of each device.
We sort unknown devices by their final beliefs, and choose the top-100 devices with high scores as bad devices and the bottom-100 devices with low scores as good devices. Note that we choose scores derived from the results with parameters $vt=5$ and $\epsilon = 0.51$, based on the discussion in Sections~\ref{accuracy} and~\ref{sec_graphstruct}. Then, we inspect HTTP packets originating from each device. Specifically, we search for the keywords in all of each device's HTTP headers, and consider a device to leak its private information if a non-empty key-value string corresponding to the given keywords is found in any header.
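A minimal sketch of this key-value matching step is shown below; the keyword subset, header strings and the regular expression are illustrative assumptions and not the exact ones used in our measurement.
\begin{verbatim}
import re

# illustrative subset of the keyword list (the full list is in the appendix)
PII_KEYWORDS = {"password": ["pwd", "passwd"],
                "imei":     ["imei", "imei1"],
                "location": ["lat", "lon", "gps"]}

def leaked_types(http_headers):
    # Return the set of PII types for which a non-empty key=value pair appears.
    found = set()
    for header in http_headers:
        for pii_type, keys in PII_KEYWORDS.items():
            for key in keys:
                if re.search(r"(?:^|[?&;\s])" + re.escape(key) + r"=[^&;\s]+",
                             header, re.IGNORECASE):
                    found.add(pii_type)
    return found

headers = ["GET /track?lat=51.5&lon=-0.1 HTTP/1.1", "Cookie: session=abc"]
print(leaked_types(headers))   # {'location'}
\end{verbatim}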
\begin{figure*}[htbp]
\begin{center}
\parbox{0.9\textwidth}{
\centering
\subfigure[Statistics for Private information leakage]{
\psfig{file=pii_detail_newdata.pdf,width=0.38\textwidth, height=0.2\textheight}\label{pii_detail}}
\subfigure[The CDF of the number of leaked information types]{
\psfig{file=pii_type_cdf_newdata.pdf,width=0.38\textwidth, height=0.2\textheight}\label{pii_cdf}}
}
\caption{Private Information Leakage}\label{pii_app_traffic}
\end{center}
\end{figure*}
Fig.\ref{pii_detail} presents the ratio of the number of devices leaking each type of private information, where the x-axis represents each information type and the y-axis represents the ratio of the number of devices (i.e., the number of devices leaking the corresponding information over the total number of devices). Generally, a larger number of bad devices leak private information compared to good devices. Notably, only bad devices leak highly sensitive information including passwords and email addresses. Interestingly, some of the good devices also leak private information. Note that, as we select good devices with the bottom-100 scores, it is less likely that these classified good devices include false negatives.
One possible reason is that even non-bad apps, particularly location-based searching apps, sometimes leak private information such as location data and device identifiers in order to support their functionality~\cite{taintdroid,recon}. In fact, we observe that the majority of leaking apps on good devices are location-based searching apps.
To compare leaking on good and bad devices, we measure (i) the number of leaked private information types, and (ii) the distribution of the number of leaking apps and packets. Fig.\ref{pii_cdf} shows the CDF of the number of private information types leaked per device, where the x-axis represents the number of leaked private information types and the y-axis represents the CDF of the number of devices (i.e., the corresponding portion of devices). As shown in the figure, half of the good devices do not leak any information and 38\% of good devices leak only one information type. In contrast, 92\% of bad devices leak at least one private information type and 36\% of bad devices leak more than 5 information types.
\begin{figure*}[htbp]
\begin{center}
\parbox{1.0\textwidth}{
\centering
\subfigure[CDF of the leaking app ratio]{
\psfig{file=pii_app_cdf_newdata.pdf,width=0.23\textwidth, height=0.2\textheight}\label{pii_app_cdf}}
\subfigure[The number of leaking apps]{
\psfig{file=leak_app_count_newdata.pdf,width=0.23\textwidth, height=0.2\textheight}\label{pii_app_cnt}}
\subfigure[CDF of the leaking packet ratio]{
\psfig{file=pii_traffic_cdf_newdata.pdf,width=0.23\textwidth, height=0.2\textheight}\label{pii_traffic_cdf}}
\subfigure[The number of leaking packets]{
\psfig{file=pii_traffic_volume_newdata.pdf,width=0.23\textwidth, height=0.2\textheight}\label{pii_traffic_cnt}}
}
\caption{The distribution of leaking apps and packets of devices}\label{pii_app_traffic_leak}
\end{center}
\end{figure*}
We further measure how many apps and packets of each device leak private information. Fig.\ref{pii_app_cdf} and Fig.\ref{pii_traffic_cdf} present the CDFs of the leaking app and the leaking traffic ratios, respectively. The x-axis in each figure represents the leaking app and leaking traffic ratios, respectively; the y-axis represents the portion of devices.
Leaking app ratio is measured by the number of apps leaking information over the total number of apps of each device; leaking traffic ratio is measured by the number of packets leaking information over the total number of packets of each device. As shown in the figures, although some good devices also leak private information, the ratios of leaking apps and packets of each device are relatively small. Specifically, we observe that among all the good devices that leak private information (i.e., 50\% of the good devices), 30\% of them have less than 10\% of leaking apps (Fig.\ref{pii_app_cdf}); the traffic of 41\% of them has less than 10\% of leaking packets (Fig.\ref{pii_traffic_cdf}).
Fig.\ref{pii_app_cnt} and Fig.\ref{pii_traffic_cnt} present the distribution of the number of leaking apps and packets of devices, respectively. The x-axis represents each device; the y-axis in each figure represents the corresponding number of leaking apps and packets of each device, respectively. Note that 50\% of good devices and 8\% of bad devices do not leak any information so that their numbers of leaking apps and packets are 0, which are not shown in the figures.
Interestingly, these leaking apps are not necessarily the same set as the bad apps in the ground truth set in Section~\ref{sec_data}. In fact, 85\% of these leaking apps are not the apps originally flagged as bad. In other words, the BP based inference relies on a largely independent set of apps compared to the ground truth to detect bad devices.
Notably, we can see a clear difference between good and bad devices in terms of the number of leaking apps and packets. Specifically, good devices have at most one or two leaking apps, if any; whereas 35\% of bad devices have more than two leaking apps. Also, although some good devices leak information, their number of leaking packets is less than 30; whereas 23\% of bad devices have more than 30 leaking packets.
In summary, our privacy leakage analysis suggests that our approach can reliably detect unknown bad devices. Specifically, we show that although classified bad devices may not contain bad apps from the ground truth, they exhibit undesirable behavior in terms of leaking personal information. This result is also promising in that we may be able to identify unknown bad apps with our approach: apps leaking sensitive information are clearly undesirable, and one may further analyze the apps causing privacy leakage on the classified bad devices. Since our focus is to identify devices, we leave the further investigation of apps as future work.
\subsection{Network Infrastructure Accessed}\label{sec_infratructure}
We analyze the underlying network resources such as domains and IPs accessed by classified devices. It is well-known that miscreants utilize fast fluxing~\cite{fastflux:2008:NDSS}, where a given malicious domain is hosted at different IPs in a short period of time, to improve the availability of their malicious domains. Further, it has recently been shown that miscreants move their domains from one hosting provider to another frequently to evade take down~\cite{codaspy_issa}. In our dataset, 94\% of hosting providers possess a single AS (Autonomous System). Thus, the above observation on hosting providers can be generalized to ASes. In this experiment, we seek to find out if indeed this AS behavior exists in the IPs accessed by the apps on the classified devices. Fig.~\ref{fig:asn_cdf} shows the CDF of the number of ASes utilized to host domains accessed by classified good and bad devices. In line with the above observations from previous research, it shows that bad devices tend to access IPs from more ASes compared to good devices. The figure shows that more than 90\% of bad devices access IPs from more than 20 ASes, whereas only 30\% of good devices exhibit the same behavior.
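The per-device AS counting behind Fig.~\ref{fig:asn_cdf} can be sketched as follows; the traffic log format and the IP-to-ASN lookup table are hypothetical placeholders.
\begin{verbatim}
from collections import defaultdict

def as_counts(device_ip_log, ip_to_asn):
    # device_ip_log: iterable of (device_id, ip) pairs observed in the traffic.
    # ip_to_asn: mapping from an IP address to its origin AS number.
    seen = defaultdict(set)
    for device, ip in device_ip_log:
        asn = ip_to_asn.get(ip)
        if asn is not None:
            seen[device].add(asn)
    return {device: len(asns) for device, asns in seen.items()}

log = [("dev1", "10.0.0.1"), ("dev1", "10.0.0.2"), ("dev2", "10.0.0.1")]
lookup = {"10.0.0.1": 64512, "10.0.0.2": 64513}
print(as_counts(log, lookup))   # {'dev1': 2, 'dev2': 1}
\end{verbatim}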
\begin{figure*}[htbp]
\begin{center}
\parbox{0.9\textwidth}{
\subfigure[The CDF of the number of ASes utilized]{
\psfig{file=device_asn_cdf.pdf,width=0.38\textwidth, height=0.18\textheight
}\label{fig:asn_cdf}}
\subfigure[The CDF of the number of short lived domains accessed]{
\psfig{file=short_lived_cdf.pdf,width=0.38\textwidth,height=0.18\textheight
}\label{fig:short_lived_cdf}}
}
\caption{The CDFs of the number of ASes utilized and the number of short lived domains accessed}\label{network_infra}
\end{center}
\end{figure*}
As it is economical to create domains, miscreants nowadays use many disposable domains to launch their attacks~\cite{reg:2017:RAID}. We thus also explore whether domains accessed by bad devices exhibit such behavior. First, for the domains in the dataset, we extract the first seen and last seen dates of each domain from the Farsight passive DNS repository~\cite{DNSDB}, which collects DNS queries resolved world-wide and serves historical DNS query data since 2011. We define a short lived domain as one whose DNS footprint is less than 3 months. Most of these domains are usually taken down, sink holed, or black listed, if identified as malicious~\cite{takedown:2019:NDSS}. Fig.~\ref{fig:short_lived_cdf} shows the CDF of the number of short lived domains accessed by good and bad devices. The figure is consistent with previous research findings: in general, classified bad devices access more short lived domains compared to good devices. It shows that 20\% of bad devices access more than 40 short lived domains, whereas less than 10\% of good devices exhibit similar behavior.
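The short-lived-domain counting behind Fig.~\ref{fig:short_lived_cdf} can be sketched as follows; the domain names and first/last seen dates are made-up examples standing in for the passive DNS records.
\begin{verbatim}
from datetime import date, timedelta

WINDOW = timedelta(days=90)   # roughly the 3-month threshold used above

def count_short_lived(domains_accessed, dns_footprint):
    # dns_footprint: mapping domain -> (first_seen, last_seen) from passive DNS.
    count = 0
    for d in domains_accessed:
        if d in dns_footprint:
            first_seen, last_seen = dns_footprint[d]
            if last_seen - first_seen < WINDOW:
                count += 1
    return count

footprint = {"abc-tracker.example": (date(2018, 1, 3), date(2018, 2, 1)),
             "cdn.example":         (date(2012, 5, 1), date(2019, 1, 1))}
print(count_short_lived(["abc-tracker.example", "cdn.example"], footprint))  # 1
\end{verbatim}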
\section{Discussion and Limitations}\label{discussion}
\heading{Baseline Approaches.}
A naive baseline approach of utilizing a blacklist is restrictive and unable to detect compromised devices having previously unknown bad apps. In fact, our approach in general can detect twice as many unknown bad devices not in the ground truth. An important consideration at early stages of detection is to identify which predictive model would work well to infer compromised devices from graph structured data. We evaluate three popular techniques: (1) unsupervised node embedding along with Random Forest (Node2Vec)~\cite{node2vec}, (2) Label Propagation (LP)~\cite{lp:2002}, and (3) BP. Fig.\ref{compare_algo_app} shows the ROC curves for the three approaches
with the ground truth drawn with $vt = 5$. The ROC curves show that BP provides a lower FPR compared to LP and Node2Vec with RF. There are a few possible reasons why BP performs slightly better: (a) the node embedding based approach fails to capture labels in the embedding and may result in inaccurate classification when two or more nodes have similar structure but different labels, and (b) LP simply takes the average of the neighboring node values during each iteration and, unlike BP, it fails to capture the homophily relationships among neighboring nodes. Thus, it is not surprising that LP has the lowest performance of the three approaches. Further, BP is several orders of magnitude faster than Node2vec.
\begin{figure}[tb]
\begin{center}
\parbox{0.42\textwidth}{
\centering
\epsfig{file=compare_algorithms_newdata.pdf,width=0.38\textwidth, height=0.18\textheight}
\caption{A ROC curve for three graph-inference algorithms}\label{compare_algo_app}}
\end{center}
\end{figure}
\heading{App Strings and Filtering.}
Our study with the HTTP headers assumes that the app strings revealed can characterize a specific app and its badness, based on prior research results~\cite{beyondgoogleplay,usagepattern}. However, it is also known that malware writers often distribute repackaged apps by adding malicious code to popular legitimate apps, which may include the app string of such legitimate apps~\cite{unknown_malice}. As we filter out popular apps to avoid false associations in BP, such repackaged apps may lead to false negatives, meaning that devices having only maliciously repackaged apps might be filtered and thus not detected. However, note that our approach does not use the knowledge of bad apps during the inference. In fact, we show that bad devices are detected independently from the known bad apps, and our approach can be further extended to identify unknown bad apps by investigating the apps on detected devices, as discussed in Section \ref{sec_post_analysis}. We thus argue that considering a limited number of known apps does not have a significant impact on the detection performance.
\heading{HTTPS Encrypted Traffic.} Our approach can be applied to both encrypted and unencrypted traffic by extracting data in different ways. As mentioned in Section~\ref{sec_data}, however, a large percentage of mobile apps still rely on the HTTP protocol for their communication for various reasons~\cite{inconsistent,networkframe,locationtracking,ample,lookat,bugfix}. It has also been observed in previous studies that most malicious traffic is carried over HTTP for the same reasons~\cite{impending,nazca}. We thus believe that our approach, with its high detection accuracy (98\% AUC) using HTTP traffic, can successfully identify compromised devices in real-world mobile networks.
Towards dealing with HTTPS, we also suggested a naive approach using the IP header only. While its detection accuracy is not as promising as that obtained using HTTP, there is a line of work known as \emph{app fingerprinting} using IP headers or TCP traffic patterns that can be applied to improve the accuracy~\cite{realtime,appscanner,robust,tcp_finger}. That is, one may employ a supervised classifier to determine the specific app generating the observed HTTPS traffic~\cite{robust,appprint}. Once the apps in the traffic are identified, one can label the apps using external sources such as VT, and build a bipartite graph on which inference can be performed to detect compromised devices. We leave the investigation of this direction as future work.
\heading{VT intelligence.} We used VT intelligence to label the apps to establish ground truth. It has been pointed out in previous research that VT may have limited coverage, which could possibly bias results~\cite{unknown_malice,droppereffect}. We argue that the source for establishing ground truth is independent of our approach, and an organization may get access to other sources~\cite{androzoo,koodous} which can possibly further reduce false positives and negatives. The goal of our approach is to start with a small set of ground truth based on any accessible intelligence source (VT or others) and to expand this knowledge using graph inference to eventually identify devices which may have installed bad apps not detected previously. As discussed in Section~\ref{sec_post_analysis}, 85\% of leaking apps on detected devices are not the ones originally detected by VT, yet they leak a large amount of private information.
\section{Related Work}\label{sec_related}
\heading{Malicious App Detection.} Malicious app detection research falls into two categories:
code analysis and network analysis. Code analysis can further be categorized into static and dynamic analysis. Static analysis approaches derive signatures from app binaries based on features drawn from known malicious apps~\cite{droidminer,drebin}. However, it is often easy to evade detection by static analysis through code obfuscation and repackaging. Dynamic analysis monitors the behavior of an app such as privacy leakage or API calls~\cite{taintdroid,droidscope}. Unlike traditional desktop machines, however, it is often hard to perform run-time analysis on mobile devices because they are resource constrained. Network-analysis based approaches utilize traffic patterns such as packet sizes and detect anomalous network footprints~\cite{deviation,trafficav,imbalanced,automated}. However, these approaches still extract app-specific features which can easily be obfuscated. Also, they have the limitation that they cannot be applied in a general-purpose manner. Specifically, most proposed approaches analyze Android-based apps and cannot be directly used to detect iOS counterparts~\cite{taintdroid,robotdroid,androidmalware,droidminer}. By contrast, our approach identifies unknown compromised devices through graph inference without relying on device- or app-specific features. Indeed, this is one of our key contributions: a network administrator does not need to do deep analysis on each individual device on the network, but can infer whether an unknown device is compromised from other known devices. This also enables the administrator to quickly manage possible threats encountered at large scale.
There are only a few research efforts that approach mobile security from a network administrator's point of view. Lever \emph{et al.} provided a large-scale network level analysis of mobile malware by investigating the DNS traffic of mobile devices~\cite{carrier}. This research is valuable in that the authors show infection rates in real traces. However, the authors did not provide a solution to identify mobile threats. Zhu \emph{et al.} proposed a method based on social network analysis to prevent worm propagation in cellular networks~\cite{social}. The focus of this research was on worm propagation through MMS and SMS, which is different from our approach. Sharif \emph{et al.} propose a proactive approach that predicts whether a user is connecting to a malicious domain or web content by observing her mobile browsing behavior~\cite{impending}. In this paper, we focus on apps causing devices to be compromised rather than domains.
\heading{Graph-inference Approaches.} A graph-inference approach has been employed in many different applications including anomaly detection, malware detection, fake social network account detection, fraud detection, and malicious domain detection. These applications construct different types of graphs including file-machine or file-relation graphs~\cite{polonium,marmite,guiltfile}, reviewer-product graphs~\cite{fraudeagle,speagle}, host-domain graphs~\cite{maldomain,enterprise_bp}, domain-IP graphs~\cite{codaspy_issa} and social network account graphs~\cite{Yuan:2019:fakeaccounts}. Although researchers have applied BP on a variety of applications, there is little study on the effect of BP parameters in different types of networks. In fact, most researchers either mentioned the results are not sensitive to BP parameters such as edge potentials~\cite{maldomain,codaspy_issa} or stated that specific values work well without any further description~\cite{polonium,guiltfile,fraudeagle}.
However, we observe that the effectiveness of BP is relatively sensitive to characteristics of mobile networks. In Section~\ref{sec_graphstruct}, we thus discuss the unique characteristic of mobile networks in terms of their topology (i.e., where devices are closely connected to each other) compared to DNS based applications, and provide theoretical and experimental analysis on how such uniqueness may affect the results of BP.
\section{Conclusion}\label{sec_conclude}
We proposed a graph-inference based approach to identify compromised mobile devices. In doing so, we applied a well-known algorithm, BP, based on the intuition that devices sharing a similar set of apps will have a similar probability of being compromised. We studied this problem on real-world data that faithfully represents the actual behavior of mobile users, with which we demonstrated the effectiveness of our approach.
We further study the impact of graph topology on BP parameters and highlight the distinct features of the mobile graph. Finally, our privacy leakage and hosting infrastructure post-analyses support the claim that our approach can reliably detect unknown compromised devices without relying on device-specific features. It is also important to take appropriate actions after compromised devices are detected. In fact, we also discuss that further investigation of detected devices might be helpful to identify unknown malicious apps, which we leave as future work.
\section{Introduction}
In linear elasticity the overall properties of a layered elastic continuum can be found using formulas described in Backus~\cite{backus}.
Each individual layer is isotropic and thus characterised by the two elasticity parameters $c_{1111}$ and $c_{2323}$.
According to Backus~\cite{backus}, the overall averaged continuum will have transversely isotropic symmetry and thus can be described by the five elastic constants
\begin{equation}
\label{eq:elast-const1}
c^{\overline{\rm
TI}}_{1111}=\overline{\left(\dfrac{c_{1111}-2c_{2323}}{c_{1111}}\right)}^{\,2}
\,\,\,\overline{\left(\dfrac{1}{c_{1111}}\right)}^{\,-1}
+\overline{\left(\dfrac{4(c_{1111}-c_{2323})c_{2323}}{c_{1111}}\right)}\,,
\end{equation}
\begin{equation}
\label{eq:elast-const2}
c^{\overline{\rm
TI}}_{1122}=\overline{\left(\dfrac{c_{1111}-2c_{2323}}{c_{1111}}\right)}^{\,2}
\,\overline{\left(\dfrac{1}{c_{1111}}\right)}^{\,-1}
+\overline{\left(\dfrac{2(c_{1111}-2c_{2323})c_{2323}}{c_{1111}}\right)}\,,
\end{equation}
\begin{equation}
\label{eq:elast-const3}
c^{\overline{\rm TI}}_{1133}=\overline{\left(\dfrac{c_{1111}-2c_{2323}}{c_{1111}}\right)}\,\,
\,\,\overline{\left(\dfrac{1}{c_{1111}}\right)}^{\,-1}
\,,
\end{equation}
\begin{equation}
\label{eq:elast-const4}
c^{\overline{\rm TI}}_{2323}=\overline{\left(\dfrac{1}{c_{2323}}\right)}^{\,-1}
\,,
\end{equation}
\begin{equation}
\label{eq:elast-const5}
c^{\overline{\rm TI}}_{3333}=\overline{\left(\dfrac{1}{c_{1111}}\right)}^{\,-1}
\,,
\end{equation}
where the bar indicates an averaged value. Expressions \eqref{eq:elast-const1}--\eqref{eq:elast-const5} correspond to expressions (13) in the original paper~\cite{backus}, which are written in terms of Lam\'e's parameters $\lambda$ and $\mu$; using the relations $c_{1111}=\lambda+2\mu$ and $c_{2323}=\mu$ shows the equivalence of the two sets of expressions.
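The averages above can be evaluated as thickness-weighted averages over the layers. The following sketch evaluates expressions \eqref{eq:elast-const1}--\eqref{eq:elast-const5} numerically; the thickness-weighted form, the function name and the example layer parameters are illustrative assumptions.
\begin{verbatim}
import numpy as np

def backus_average(c1111, c2323, h):
    # c1111, c2323: per-layer elastic constants; h: layer thicknesses.
    c1111, c2323, h = map(np.asarray, (c1111, c2323, h))
    w = h / h.sum()
    avg = lambda x: np.sum(w * x)          # thickness-weighted average (the bar)
    r = (c1111 - 2.0 * c2323) / c1111
    inv11 = 1.0 / avg(1.0 / c1111)
    return {"1111": avg(r)**2 * inv11 + avg(4*(c1111 - c2323)*c2323/c1111),
            "1122": avg(r)**2 * inv11 + avg(2*(c1111 - 2*c2323)*c2323/c1111),
            "1133": avg(r) * inv11,
            "2323": 1.0 / avg(1.0 / c2323),
            "3333": inv11}

# two hypothetical isotropic layers with (lambda, mu) = (2, 1) and (1, 0.5)
lam = np.array([2.0, 1.0]); mu = np.array([1.0, 0.5])
print(backus_average(lam + 2*mu, mu, h=np.array([1.0, 1.0])))
\end{verbatim}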
For small deformations we may use the Backus average, but for large deformations linear elasticity is in general not suitable and we need to use nonlinear elasticity, which accounts for large deformations.
In this paper we propose an extended use of Backus average for finitely deformed materials.
A nonlinear elastic material may be treated essentially as linear elastic under small loads and small deformations.
Thus, each nonlinear elastic material may be associated with a corresponding linear elastic material.
As can be seen from the examples considered in this paper, we cannot, in general, describe a nonlinear material fully by the information obtained from its corresponding linear elastic counterpart.
Normally, we do not have enough information for this purpose (and this should not be considered surprising, because nonlinear elasticity is more general and linear elasticity is a particular case of nonlinear elasticity).
In order to fill this information gap when modelling the nonlinear behaviour of a material based on information from linear elasticity, we need to use some additional information or to make an educated guess.
\section{Theoretical background}
We label a material point in the reference configuration $\mathcal{B}_\mathrm{r}$ by a position vector $\mathbf{X}$, and this point in the current configuration $\mathcal{B}$ by a position vector $\mathbf{x}$. Deformation is described by the vector field $\boldsymbol{\chi}$, which relates the position of a particle in the reference configuration to the position of the same particle in the current configuration: $\mathbf{x}=\boldsymbol{\chi}(\mathbf{X})$. The deformation gradient tensor, denoted $\mathbf{F}$, is defined by
\begin{equation}
\mathbf{F}=\Grad\boldsymbol{\chi},
\end{equation}
where $\Grad$ is the operator defined with respect to $\mathbf{X}$.
The stored energy in a transversely isotropic nonlinear elastic material can be formulated in terms of invariants $I_1$, $I_2$, $I_3$, $I_4$, $I_5$, ~\cite{Spencer}.
Thus, the strain energy per unit reference volume can be written as a function
\begin{equation*}
W=W(I_1, I_2, I_3, I_4, I_5),
\end{equation*}
where invariants are defined by
\begin{equation*}
I_1=\tr {\bf C}, \quad I_2=I_3\tr ({\bf C}^{-1})\,, \quad I_3=\det {\bf C}\,,
\end{equation*}
\begin{equation*}
I_4={\bf A}\cdot{\bf CA}\,, \quad I_5={\bf A}\cdot{\bf C}^2{\bf A}\,,
\end{equation*}
where $\bf C={\bf F}^\mathrm{T}{\bf F}$ is the right Cauchy-Green deformation tensor.
For nonlinear elastic transversely isotropic material a unit vector $\bf A$ represents a fiber direction in the reference configuration.
In our case we do not have any fibres, because, according to Backus, transverse isotropy is caused by the layered structure of the body.
Therefore, we may identify $\bf A$ with imaginary fibres or $\bf A$ can be thought of as a direction of anisotropy in the reference configuration.
In the current configuration this direction deforms to $\bf a=\bf FA$.
The first Piola-Kirchhoff stress tensor can be found as the derivative of the strain energy function with respect to the deformation gradient tensor
\begin{equation*}
{\bf P}=\frac{\partial W}{\partial \bf F}\,.
\end{equation*}
Using the chain rule and the following relations
\begin{align*}
\frac{\partial I_1}{\partial \bf F}=2 {\bf F}, \quad &\frac{\partial I_2}{\partial \bf F}=2 I_1{\bf F}-2{\bf F}{\bf F}^\mathrm{T}{\bf F}\,, \quad \frac{\partial I_3}{\partial \bf F}=2I_3{\bf F}^\mathrm{-T}\,, \\[1ex]
&\frac{\partial I_4}{\partial {\bf F}}=2{\bf FA}\otimes{\bf A}, \quad \frac{\partial I_5}{\partial {\bf F}}=2 \left(\bf FCA\otimes A + FA \otimes CA\right)\,,
\end{align*}
we obtain
\begin{align*}
{\bf P}=&2W_1{\bf F} + 2W_2\left(I_1 {\bf F}-{\bf F}{\bf F}^\mathrm{T} {\bf F}\right)+2W_3I_3{\bf F}^\mathrm{-T}\\
&+2W_4{\bf FA}\otimes {\bf A}+2W_5\left(\bf FCA\otimes A + FA \otimes CA\right)\,.
\end{align*}
Thus, the Cauchy stress $\boldsymbol\sigma$ can be found using the standard relation
\begin{align}\label{cauchy-stress}
J\boldsymbol{\sigma}={\bf P}{\bf F}^\mathrm{T}=&2 W_1 {\bf B}+2W_2(I_1{\bf I}-{\bf B}){\bf B}+2W_3I_3{\bf I}\notag\\[1ex]
&+2W_4{\bf a}\otimes {\bf a}+2W_5({\bf Ba}\otimes{\bf a}+{\bf a}\otimes{\bf Ba}),
\end{align}
where $\bf B={\bf F}{\bf F}^\mathrm{T}$ is the left Cauchy-Green deformation tensor, and we recall that $J=\det \bf F$. Subscripts on the energy function $W$ denote the corresponding derivatives with respect to the invariants $I_1$,~..., $I_5$.
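The derivative relations used above can be checked numerically. The sketch below is a verification aid only, with a randomly chosen $\mathbf{F}$ and $\mathbf{A}$ along the $X_3$ axis; it compares the stated derivatives of $I_1$ and $I_4$ with central finite differences.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
F = np.eye(3) + 0.1 * rng.standard_normal((3, 3))   # generic deformation gradient
A = np.array([0.0, 0.0, 1.0])                       # anisotropy direction

I1 = lambda F: np.trace(F.T @ F)
I4 = lambda F: A @ (F.T @ F) @ A

def num_grad(fun, F, eps=1e-6):
    # central finite-difference derivative of a scalar function of F
    G = np.zeros_like(F)
    for i in range(3):
        for j in range(3):
            E = np.zeros_like(F); E[i, j] = eps
            G[i, j] = (fun(F + E) - fun(F - E)) / (2 * eps)
    return G

print(np.allclose(num_grad(I1, F), 2 * F, atol=1e-6))                  # dI1/dF = 2F
print(np.allclose(num_grad(I4, F), 2 * np.outer(F @ A, A), atol=1e-6)) # dI4/dF = 2FA (x) A
\end{verbatim}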
In the reference configuration, where $I_1=3$, $I_2=3$, $I_3=1$, $I_4=1$, $I_5=1$, the strain energy function has zero value
\begin{equation}\label{eq:energy-zero}
W(3,3,1,1,1)=0.
\end{equation}
The stress should also vanish in the reference configuration.
Thus, evaluating the expression for the stress~\eqref{cauchy-stress} at $\bf F=I$ we obtain
\begin{equation}\label{eq:stress-isotr-zero}
W_1(3,3,1,1,1)+2W_2(3,3,1,1,1)+W_3(3,3,1,1,1)=0\,,
\end{equation}
\begin{equation}\label{eq:stress-transverse-zero}
W_4(3,3,1,1,1)+2W_5(3,3,1,1,1)=0\,.
\end{equation}
The following conditions should be satisfied for consistency with linear transversely isotropic elasticity (Merodio \& Ogden~\cite{MO2003}):
\begin{equation}\label{eq:connection-start}
W_{11}+4W_{12}+4W_{22}+2W_{13}+4W_{23}+W_{33}=c_{11}/4\,,
\end{equation}
\begin{equation}\label{eq:connection-c12c11}
W_2+W_3=(c_{12}-c_{11})/4\,,
\end{equation}
\begin{equation}\label{eq:connection-c44}
W_1+W_2+W_5=c_{44}/2\,,
\end{equation}
\begin{equation}\label{eq:connection-end1}
W_{14}+2W_{24}+2W_{15}+W_{34}+4W_{25}+2W_{35}=(c_{13}-c_{12})/4\,,
\end{equation}
\begin{equation}\label{eq:connection-end}
W_{44}+4W_{45}+4W_{55}+2W_5=(c_{33}-c_{11}+2c_{12}-2c_{13})/4\,.
\end{equation}
All derivatives in the expressions~\eqref{eq:connection-start}--\eqref{eq:connection-end} are evaluated in the reference configuration. Note that for these connections it is important that the vector $\mathbf{A}$ is aligned along the $X_3$ axis in the reference configuration, which is shown explicitly in Appendix \ref{app:connections}.
It is well known that, due to the symmetries of the Cauchy stress $\boldsymbol{\sigma}$ and the small strain tensor $\boldsymbol{\varepsilon}$, the components of the elasticity tensor in linear elasticity may be written as a $6\times6$ matrix, and they appear in this form on the right-hand side of relations~\eqref{eq:connection-start}--\eqref{eq:connection-end}.
We can identify these components $c_{mn}$ with the components $c_{ijkl}$ of the fourth-order elasticity tensor, using the following rule for indices: $(11)\rightarrow1$, $(22)\rightarrow2$, $(33)\rightarrow3$, $(23)\rightarrow4$, $(13)\rightarrow5$, $(12)\rightarrow6$, where a pair of indices in parentheses corresponds to a pair of indices $ij$ or $kl$ in $c_{ijkl}$, and a single index after the arrow corresponds to an index $m$ or $n$ in $c_{mn}$.
Thus, we can write $c_{11}=c_{1111}$, $c_{12}=c_{1122}$, $c_{33}=c_{3333}$, $c_{13}=c_{1133}$, $c_{44}=c_{2323}$.
Axis~$x_3$ represents the axis of symmetry for linear elastic transversely isotropic material.
\section{Association of linear elastic materials with specific nonlinear models}
\label{lin with nonlin models}
Following~Merodio \& Ogden \cite{MO2003} let us start with a strain energy function which depends on three invariants $W(I_1, I_3, I_4)$
\begin{equation}\label{eq:potential-134}
W(I_1, I_3, I_4)=\hat{\mu}(I_1-3)+H(I_3)+F(I_4)\,,
\end{equation}
where $\hat{\mu}$ is a positive material constant and $F$ is a function satisfying
\newline $F(1)=0$, $F'(1)=0$.
From conditions~\eqref{eq:energy-zero} and~\eqref{eq:stress-isotr-zero} we obtain
\begin{equation*}
H(1)=0, \quad H'(1)=-\hat{\mu}.
\end{equation*}
Condition \eqref{eq:stress-transverse-zero} is also satisfied for function \eqref{eq:potential-134}.
We may specify function $F(I_4)$ as $F(I_4)=\frac{1}{2}\alpha(I_4-1)^2$, where $\alpha$ is a positive material parameter accounting for the degree of anisotropy.
Function $F(I_4)$ is often referred to as a reinforcing model and accounts for transversely isotropic properties of a material.
Therefore, we confirm that
\begin{equation*}
F(1)=0\,, \quad F'(1)=0,
\end{equation*}
and additionally we obtain
\begin{equation*}
F''(1)=\alpha\,.
\end{equation*}
We denote the value of the second derivative by $H''(1)=k$.
Thus, from expressions~\eqref{eq:connection-start}--\eqref{eq:connection-end} we obtain
\begin{align}\label{eq:system}
&c_{11}=4k, \quad c_{33}=4(k+\alpha), \quad c_{44}=2\hat{\mu},\quad c_{13}(=c_{12})=4(k-\hat{\mu}).
\end{align}
Note that from the strain energy potential~\eqref{eq:potential-134} it is always possible to find elastic constants $c_{11}$\,,...,\,$c_{44}$.
In contrast, in the present paper we are interested in the opposite problem: from the Backus averaging we want to find the parameters $\hat{\mu}$, $\alpha$ and $k$ that define function~\eqref{eq:potential-134}.
Thus, from function \eqref{eq:potential-134} we will be able to predict the overall transversely isotropic mechanical behaviour of the layers as a whole structure under significant loads and thus experiencing large deformations.
Note that in this case relations~\eqref{eq:system} can be viewed as a system of $5$ linear algebraic equations with $3$ unknown variables $\hat{\mu}$, $\alpha$ and $k$, and thus it may have no solutions for some combinations of elastic constants $c_{11}$\,,...,\,$c_{44}$.
The system~\eqref{eq:system} will always have a solution if the additional condition for elastic constants is satisfied
\begin{equation*}
c_{12}=c_{13}=c_{11}-2c_{44}.
\end{equation*}
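This parameter-recovery step can be sketched as follows; the function name, tolerance and example constants are illustrative, and the consistency check corresponds to the condition just stated.
\begin{verbatim}
def fit_reinforcing_model(c11, c12, c13, c33, c44, tol=1e-8):
    # Recover (mu_hat, alpha, k) of the potential
    #   W = mu_hat (I1 - 3) + H(I3) + 0.5 alpha (I4 - 1)^2
    # from the five transversely isotropic constants, using the system above.
    k = c11 / 4.0
    mu_hat = c44 / 2.0
    alpha = (c33 - c11) / 4.0
    ok = abs(c12 - (c11 - 2*c44)) <= tol and abs(c13 - (c11 - 2*c44)) <= tol
    if not ok:
        raise ValueError("constants incompatible: need c12 = c13 = c11 - 2 c44")
    return mu_hat, alpha, k

# hypothetical averaged constants satisfying the compatibility condition
print(fit_reinforcing_model(c11=4.0, c12=2.0, c13=2.0, c33=6.0, c44=1.0))
\end{verbatim}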
Let us consider another strain energy function with the reinforcing model defined by $F(I_5)=\frac{1}{2}\gamma(I_5-1)^2$
\begin{equation}\label{eq:potential-135}
W(I_1, I_3, I_5)=\hat{\mu}(I_1-3)+H(I_3)+F(I_5)\,.
\end{equation}
Thus, we have
\begin{equation*}
F(1)=0\,, \quad F'(1)=0\,, \quad F''(1)=\gamma\,.
\end{equation*}
Using connections~\eqref{eq:connection-start}--\eqref{eq:connection-end} we obtain
\begin{align}\label{eq:system-I-5}
&c_{11}=4k, \quad c_{33}=4(k+\gamma), \quad c_{44}=2\hat{\mu},\quad c_{13}(=c_{12})=4(k-\hat{\mu}).
\end{align}
Note that if we take material parameters $\alpha=\gamma$ in expressions~\eqref{eq:potential-134} and~\eqref{eq:potential-135}, then using connections ~\eqref{eq:connection-start}--\eqref{eq:connection-end} we obtain the same linear elastic material corresponding to different nonlinear elastic materials (due to different invariants $I_4$ and $I_5$ in the reinforcing models).
Thus, two different nonlinear elastic materials may correspond to the same linear elastic material. It is important to bear this fact in mind: in order to properly model a nonlinear elastic material we need to use additional information about the possible behaviour of such a material in the nonlinear regime.
The invariants $I_4$ and $I_5$ featuring in expressions~\eqref{eq:potential-134} and \eqref{eq:potential-135} model the behaviour of a material under large deformations differently.
Invariant $I_5$ accounts for an additional reinforcement under shear deformations. For details we refer to Merodio \& Ogden~\cite{MO2005}.
Let us consider another example. We define strain energy function as
\begin{equation*}
W=\hat{\mu}(I_1-3)+K(I_2)+H(I_3)+F(I_4).
\end{equation*}
Functions $K(I_2)$, $H(I_3)$ and $F(I_4)$ are defined so that the following conditions are satisfied
\begin{equation*}
K(3)=0\,, \quad K''(3)=q\,, \quad H(1)=0\,, \quad H'(1)=d\,, \quad H''(1)=k\,,
\end{equation*}
\begin{equation*}
\quad F(1)=0\,, \quad F'(1)=0\,, \quad F''(1)=\alpha\,.
\end{equation*}
From condition~\eqref{eq:stress-isotr-zero} we obtain
\begin{equation*}
K'(3)=-(\hat{\mu}+d)/2.
\end{equation*}
From connections~\eqref{eq:connection-start}--\eqref{eq:connection-end} we obtain
\begin{equation*}
4q+k=c_{11}/4\,, \quad d-\hat{\mu}=(c_{12}-c_{11})/2\,, \quad \hat{\mu}+d=c_{44}, \quad \alpha=(c_{33}-c_{11})/4\,.
\end{equation*}
Note that parameters $q$ and $k$ are not uniquely defined in this case. Thus, again, in order to extend the modelling to the nonlinear regime we need to make an educated guess or use some additional information about the material working under large loads.
At the end of this section let us consider a potential with a specified function $H(I_3)$ depending on the volumetric deformation captured by the invariant $I_3$.
For example, we may consider the following potential
\begin{equation}\label{eq:poten-I4}
W=\frac{\mu}{2}(I_1-3)-\mu\log \sqrt{I_3}+\frac{\lambda}{2}(\log\sqrt{I_3})^2+\frac{1}{2}\alpha(I_4-1)^2\,,
\end{equation}
where $\lambda$ and $\mu$ are Lam\'e's parameters.
Note that for this potential conditions~\eqref{eq:stress-isotr-zero} and~\eqref{eq:stress-transverse-zero} are satisfied. We want to find material parameters $\mu$, $\lambda$ and $\alpha$ in \eqref{eq:poten-I4} from the connections~\eqref{eq:connection-start}--\eqref{eq:connection-end}. Thus, we have
\begin{equation*}
2\mu+\lambda=c_{11}, \quad \mu=c_{44}, \quad \mu=(c_{11}-c_{12})/2, \quad c_{13}=c_{12}, \quad \alpha=(c_{33}-c_{11})/4.
\end{equation*}
We note that for compatibility we need the additional condition $c_{44}=(c_{11}-c_{12})/2$ to be satisfied.
Let us consider a particular case of expression \eqref{eq:poten-I4} without the term containing invariant $I_4$. Thus, we have the following expression for the potential
\begin{equation}\label{eq:iso-poten}
W=\frac{\mu}{2}(I_1-3)-\mu\log \sqrt{I_3}+\frac{\lambda}{2}(\log\sqrt{I_3})^2\,,
\end{equation}
typically used for modelling isotropic nonlinear elastic materials~(Bonet \& Wood~\cite{Bonet2008}). It is worth noting that from the expressions~\eqref{eq:connection-start}--\eqref{eq:connection-end1} we correctly recover the expressions for the components of the linear elasticity tensor $c_{ijkl}=\lambda\delta_{ij}\delta_{kl}+\mu(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk})$ in terms of Lam\'e's parameters:
\begin{equation*}
c_{1111}=2\mu+\lambda, \quad c_{2323}=\mu, \quad c_{1122}=\lambda, \quad c_{1133}=\lambda.
\end{equation*}
For incompressible linear elastic material the number of elastic constants reduces from five to three (Merodio \& Ogden~\cite{MO2005}). For this case connections~\eqref{eq:connection-start}--\eqref{eq:connection-end} can be specialized appropriately. For details we refer to Merodio \& Ogden~\cite{MO2005}.
\section{Formulation based on a predeformed configuration}
\label{predeformed-config}
In this section we use a different approach and explore the possibility of using a prestress formulation for Backus averaging for finitely deformed materials of a layered structure with waves being treated as superimposed displacements on a current deformed configuration.
Backus (1962) showed that a stack of thin isotropic layers can be considered equivalently as a homogenized transversely isotropic medium. All derivations were done within the context of linear elastic theory. In this section we attempt to see if a similar averaging can be made within finite elasticity for large deformations using a prestress formulation.
In the Backus averaging, the load applied to the horizontal plane of a medium which consists of thin horizontal layers can reasonably be considered as prestress. The wave propagating in this medium can be considered as a displacement field superimposed on the underlying loaded current configuration. Although Backus (1962) considers wave propagation, his problem is essentially static: in his paper the displacement fields do not depend on time. We also consider an incremental displacement field which is static and does not depend on time.
In this section, instead of the first Piola-Kirchhoff stress tensor we use the nominal stress tensor, which is the transpose of the first Piola-Kirchhoff stress tensor, $\mathbf{T}=\mathbf{P}^{\mathrm{T}}$.
Let us consider a small displacement superimposed on a finitely deformed configuration
\begin{equation*}
\dot{\bf x}=\dot{\boldsymbol{\chi}}({\bf X})\,.
\end{equation*}
We use the notation $\mathbf{u}=\dot{\bf x}$ for the incremental displacement $\dot{\bf x}$. The increment in the deformation gradient is defined by
\begin{equation*}
\dot{\mathbf{F}}=\Grad \dot{\boldsymbol \chi}.
\end{equation*}
Treating $\bf u$ as a function of $\bf x$, we obtain
\begin{equation}\label{dot-F}
\dot{\mathbf{F}}=(\grad \bf u) \,\bf F.
\end{equation}
Note that operator $\grad$ is defined with respect to $\bf x$, and we denote $\mathbf{L}=\grad\mathbf{u}$.
Incrementing expression
\begin{equation}\label{Cauchy-Nominal}
J\boldsymbol{\sigma}=\mathbf{F}\mathbf{T}
\end{equation}
for the compressible case, we obtain
\begin{equation}
\dot{J}\boldsymbol{\sigma}+J\dot{\boldsymbol{\sigma}}=\dot{\mathbf{F}}\mathbf{T}+\mathbf{F}\dot{\mathbf{T}}.
\end{equation}
Rearranging,
\begin{equation}
\dot{\boldsymbol{\sigma}}=J^{-1}\dot{\mathbf{F}}\mathbf{T}+J^{-1}\mathbf{F}\dot{\mathbf{T}}-J^{-1}\dot{J}\boldsymbol{\sigma}.
\end{equation}
We use expression \eqref{dot-F}, the relation $\dot{J}=J\tr\mathbf{L}$ and the push-forward version of the increment in the nominal stress tensor
\begin{equation}\label{push-f-inc-nom}
\dot{\mathbf{T}}_0=J^{-1}\mathbf{F}\dot{\mathbf{T}}.
\end{equation}
For details we refer to the book by R.W. Ogden ~\cite{Ogden}, Chapter 6. Thus, we obtain
\begin{equation}\label{compres}
\dot{\boldsymbol{\sigma}}=\mathbf{L}\boldsymbol{\sigma}+\dot{\mathbf{T}}_0-(\tr\mathbf{L})\boldsymbol{\sigma},
\end{equation}
or we can rewrite
\begin{equation}\label{re-compres}
\dot{\boldsymbol{\sigma}}=[\mathbf{L}-(\tr\mathbf{L})\mathbf{I}]\boldsymbol{\sigma}+\dot{\mathbf{T}}_0.
\end{equation}
For the incompressible case $\tr\mathbf{L}=0$, so that
\begin{equation}\label{incomp}
\dot{\boldsymbol{\sigma}}=\mathbf{L}\boldsymbol{\sigma}+\dot{\mathbf{T}}_0.
\end{equation}
The increment in the Cauchy stress, $\dot{\boldsymbol{\sigma}}$, is a symmetric tensor.
Note that, although $\mathbf{L}$ and $\dot{\mathbf{T}}_0$ are not symmetric, $\dot{\boldsymbol{\sigma}}$ is symmetric. The proof follows from expression \eqref{updated-rot-bal}, given in Appendix \ref{app:incremental}, and also from expression~(6.2.21) in~\cite{Ogden}, given in a different notation.
Expression~(6.2.21) in the present notation can be written as
\begin{equation}
\mathbf{L}\boldsymbol{\sigma}+\dot{\mathbf{T}}_0=\dot{\mathbf{T}}^{\mathrm{T}}_0+\boldsymbol{\sigma}\mathbf{L}^{\mathrm{T}}.
\end{equation}
Thus, the symmetry of \eqref{incomp} is established. For the compressible case~\eqref{compres}, the increment $\dot{\boldsymbol{\sigma}}$ is also symmetric, since
$(\tr\mathbf{L})\boldsymbol{\sigma}$ is symmetric.
The incremental constitutive law for the compressible case is
\begin{equation}\label{incr-constit-law}
\mathbf{\dot{T}}_0=\boldsymbol{\mathcal{A}}_0\mathbf{L}.
\end{equation}
Substituting \eqref{incr-constit-law} in \eqref{re-compres}, we obtain
\begin{equation}\label{fin-incr-law}
\dot{\boldsymbol{\sigma}}=[\mathbf{L}-(\tr\mathbf{L})\mathbf{I}]\boldsymbol{\sigma}+\boldsymbol{\mathcal{A}}_0\mathbf{L}.
\end{equation}
For an isotropic material the non-zero instantaneous elastic moduli $\boldsymbol{\mathcal{A}}_0$ can be specified; see Chapter 6 in~\cite{Ogden}. Thus, in component form we obtain from \eqref{fin-incr-law}
\begin{align}
\label{index-begin-incr-law}
\dot{\sigma}_{11}&=\frac{\partial u_1}{\partial x_1}\sigma_{11}+\frac{\partial u_1}{\partial x_2}\sigma_{21}+\frac{\partial u_1}{\partial x_3}\sigma_{31}-\left(\frac{\partial u_1}{\partial x_1}+\frac{\partial u_2}{\partial x_2}+\frac{\partial u_3}{\partial x_3}\right)\sigma_{11}\nonumber\\
&+\mathcal{A}_{01111}\frac{\partial u_1}{\partial x_1}+\mathcal{A}_{01122}\frac{\partial u_2}{\partial x_2}+\mathcal{A}_{01133}\frac{\partial u_3}{\partial x_3},
\end{align}
\begin{align}
\dot{\sigma}_{22}&=\frac{\partial u_2}{\partial x_1}\sigma_{12}+\frac{\partial u_2}{\partial x_2}\sigma_{22}+\frac{\partial u_2}{\partial x_3}\sigma_{32}-\left(\frac{\partial u_1}{\partial x_1}+\frac{\partial u_2}{\partial x_2}+\frac{\partial u_3}{\partial x_3}\right)\sigma_{22}\nonumber\\
&+\mathcal{A}_{02211}\frac{\partial u_1}{\partial x_1}+\mathcal{A}_{02222}\frac{\partial u_2}{\partial x_2}+\mathcal{A}_{02233}\frac{\partial u_3}{\partial x_3},
\end{align}
\begin{align}
\dot{\sigma}_{33}&=\frac{\partial u_3}{\partial x_1}\sigma_{13}+\frac{\partial u_3}{\partial x_2}\sigma_{23}+\frac{\partial u_3}{\partial x_3}\sigma_{33}-\left(\frac{\partial u_1}{\partial x_1}+\frac{\partial u_2}{\partial x_2}+\frac{\partial u_3}{\partial x_3}\right)\sigma_{33}\nonumber\\
&+\mathcal{A}_{03311}\frac{\partial u_1}{\partial x_1}+\mathcal{A}_{03322}\frac{\partial u_2}{\partial x_2}+\mathcal{A}_{03333}\frac{\partial u_3}{\partial x_3},
\end{align}
\begin{align}
\dot{\sigma}_{13}&=\frac{\partial u_1}{\partial x_1}\sigma_{13}+\frac{\partial u_1}{\partial x_2}\sigma_{23}+\frac{\partial u_1}{\partial x_3}\sigma_{33}-\left(\frac{\partial u_1}{\partial x_1}+\frac{\partial u_2}{\partial x_2}+\frac{\partial u_3}{\partial x_3}\right)\sigma_{13}\nonumber\\
&+\mathcal{A}_{01313}\frac{\partial u_3}{\partial x_1}+\mathcal{A}_{01331}\frac{\partial u_1}{\partial x_3},
\end{align}
\begin{align}
\dot{\sigma}_{23}&=\frac{\partial u_2}{\partial x_1}\sigma_{13}+\frac{\partial u_2}{\partial x_2}\sigma_{23}+\frac{\partial u_2}{\partial x_3}\sigma_{33}-\left(\frac{\partial u_1}{\partial x_1}+\frac{\partial u_2}{\partial x_2}+\frac{\partial u_3}{\partial x_3}\right)\sigma_{23}\nonumber\\
&+\mathcal{A}_{02323}\frac{\partial u_3}{\partial x_2}+\mathcal{A}_{02332}\frac{\partial u_2}{\partial x_3},
\end{align}
\begin{align}
\label{index-last-incr-law}
\dot{\sigma}_{12}&=\frac{\partial u_1}{\partial x_1}\sigma_{12}+\frac{\partial u_1}{\partial x_2}\sigma_{22}+\frac{\partial u_1}{\partial x_3}\sigma_{32}-\left(\frac{\partial u_1}{\partial x_1}+\frac{\partial u_2}{\partial x_2}+\frac{\partial u_3}{\partial x_3}\right)\sigma_{12}\nonumber\\
&+\mathcal{A}_{01212}\frac{\partial u_2}{\partial x_1}+\mathcal{A}_{01221}\frac{\partial u_1}{\partial x_2}.
\end{align}
Now we want to follow procedures similar to those carried out for the linear elastic case in~\cite{backus} and in~\cite{Slawinski}, Section $4.2$, i.e.
we need to rearrange the terms in expressions~\eqref{index-begin-incr-law}--\eqref{index-last-incr-law} according to how they vary throughout the height of the stack of layers: slowly or rapidly. Terms which vary significantly along the height of the stack of layers should be brought to one side. According to the Backus procedure, on the other side we may have slowly varying terms multiplied by the abruptly varying terms. Due to the complexity of expressions~\eqref{index-begin-incr-law}--\eqref{index-last-incr-law}, apparently, it is impossible to follow the procedures used by Backus for the linear elastic case. Also, note that in the present setting, due to the superimposed displacements on the current deformed configuration, we have the incremental Cauchy stress $\dot{\boldsymbol{\sigma}}$, which is absent in the Backus formulation since his solution is obtained purely within the linear elastic theory.
Also, it is worth noting that here the instantaneous elastic moduli $\boldsymbol{\mathcal{A}}_0$ are functions of strain, whereas in the classical linear elastic case the elastic parameters are simply constants.
\section{Summary and Conclusions}
In this paper we proposed a new approach for calculating the overall properties of a homogenized material. This approach is based on a combination of linear and nonlinear elastic theories. Let us summarise this procedure. For each isotropic layer we know the elasticity constants $c_{1111}$ and $c_{2323}$, or equivalently Lam\'e's parameters $\lambda$ and $\mu$. Then, we use expressions \eqref{eq:elast-const1}--\eqref{eq:elast-const5} to find the elastic constants of the overall material (but still in the linear regime for small deformations). Then, we use connections \eqref{eq:connection-start}--\eqref{eq:connection-end} and construct the strain energy potential, which accounts for the behaviour of the material in the nonlinear regime. In Section \ref{lin with nonlin models} we give examples of such potentials, including a fully specified potential (20). We note that although, in general, we need some additional information on how the material behaves in the nonlinear regime, a potential with the standard reinforcing model may be used by default.
We think it is also instructive to show the other approach, described in Section \ref{predeformed-config}, despite the fact that it did not lead to a successful result; nonetheless, we think that sometimes a negative result should be shown too. If one compares the equations given in the book~\cite{Slawinski}, Section 4.2, with those presented in this paper, \eqref{index-begin-incr-law}--\eqref{index-last-incr-law}, one finds many similar terms; however, the elastic moduli $\boldsymbol{\mathcal{A}}_0$ now depend on the strain and are not constants, as is the case for the linear elastic problem. As mentioned before, due to the complexity of expressions \eqref{index-begin-incr-law}--\eqref{index-last-incr-law}, further progress appears impossible. The derived equations can also be used for modelling small deformations superimposed on a deformed configuration. If the deformation field depends on time, we may think of it as a wave propagating in a prestressed medium. In Section 4 we concluded that the tensor $\dot{\boldsymbol{\sigma}}$ is symmetric, and thus we need to consider only the 6 expressions \eqref{index-begin-incr-law}--\eqref{index-last-incr-law}. The symmetry of the tensor $\dot{\boldsymbol{\sigma}}$ follows from the rotational balance equation by taking increments. In turn, the rotational balance equation is a consequence of the objectivity of the constitutive law, and the proof of this fact is given in Appendix \ref{app:incremental}.
\section*{Acknowledgments}
I would like to thank and acknowledge discussions with Professor Ray W. Ogden and Professor Michael Slawinski.
This research was performed in the context of The Geomechanics Project supported by Husky Energy.
Also, this research was partially supported by the Natural Sciences and Engineering Research Council of Canada, grant 238416-2013.
\section{Introduction}
In this paper, we consider the following trust region subproblem (TRS)
\begin{eqnarray*}
(\rm P_0)& \min&f(\x):= \x^T\A\x-2\b^T\x\\
&\rm s.t.&g(\x):=\|\x\|^2-1\le0,
\end{eqnarray*}
where $\A$ is an $n\times n$ nonzero real symmetric matrix, $\b\in \R^n$ and
$\|\cdot\|$ denotes the Euclidean $l_2$ norm. To avoid the trivial case, we make the blanket assumption that $n\ge 2$. Let $S_0$ be the nonempty optimal solution set of $\rm(P_0)$ and $f^*$ be the optimal value.
Problem $\rm(P_0)$ first arises as a subproblem in the trust region
method for nonlinear optimization \cite{conn2000trust,yuan2015recent}, and also admits applications in robust optimization \cite{ben2009robust} and least squares problems \cite{zhang2010derivative}. The generalized trust region subproblem, where the constraint is a general quadratic inequality, is also well studied in the literature \cite{more1993generalizations,pong2014generalized,jiang2018socp,jiang2019novel,jiang2018linear}.
When $\A$ is not positive semidefinite, problem $\rm(P_0)$ is a nonconvex problem and may have a local non-global optimal solution \cite{martinez1994local}.
However, problem $\rm(P_0)$ enjoys hidden convexity and the strong duality holds due to the celebrated S-lemma \cite{yakubovich1971s}.
Various methods have been developed in the literature for solving the TRS, e.g., a safeguarded Newton's method by solving the so-called secular equation \cite{more1983computing}, generalized Lanczos methods and its recent variants \cite{gould1999solving,zhang2017generalized,zhang2018nested} and a parametrized
eigenvalue approach based on semidefinite programming and duality theory \cite{rendl1997semidefinite}, to name a few.
Hazan and Koren \cite{hazan2016linear} proposed the first linear-time algorithm with complexity $\tilde O(\frac{N}{\sqrt{\epsilon}})$ for the TRS to achieve an $\epsilon$ optimal solution, where $N$ is the number of nonzero entries in the input.
Wang and Xia \cite{wang2017linear} and Ho-Nguyen and Kilinc-Karzan \cite{ho2017second} presented a linear-time algorithm to solve the TRS by applying Nesterov's accelerated gradient descent algorithm to a convex reformulation of $\rm(P_0)$, where such a reformulation was first proposed, to the best of our knowledge, by Flippo and Jansen \cite{flippo1996duality}.
Very recently, Beck and Vaisbourd \cite{beck2018globally} showed that a family of first-order methods for solving problem $\rm(P_0)$, including the projected and conditional gradient methods, converges to a global optimal solution under proper initialization.
However, the convergence rates for these methods are still largely unknown.
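To fix ideas, a minimal sketch of one such projected gradient scheme is given below; the problem data, step size and starting point are illustrative, and we refer to \cite{beck2018globally} for the initialization that guarantees convergence to a global solution.
\begin{verbatim}
import numpy as np

def projected_gradient_trs(A, b, x0, step, max_iter=5000, tol=1e-9):
    # min  x'Ax - 2 b'x  s.t. ||x|| <= 1, by projected gradient steps;
    # the step size is assumed smaller than 1/||A||_2.
    x = x0.copy()
    for _ in range(max_iter):
        y = x - step * 2.0 * (A @ x - b)      # gradient step
        nrm = np.linalg.norm(y)
        x_new = y / nrm if nrm > 1.0 else y   # projection onto the unit ball
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

A = np.diag([-1.0, 2.0]); b = np.array([0.3, 0.1])   # small indefinite example
x = projected_gradient_trs(A, b, x0=np.array([1.0, 0.0]), step=0.2)
print(x, x @ A @ x - 2 * b @ x)
\end{verbatim}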
Error bounds \cite{luo1993error,pang1997error} and the Kurdyka-{\L}ojasiewicz (KL) inequality \cite{attouch2010proximal} are widely used in the literature for analyzing rate of convergence for various optimization algorithms and for understanding the local variational geometry of the solution set of an optimization problem.
In this paper, we devote ourself to a thorough understanding of error bounds and the KL inequality for the TRS, which, as far as we know, are not available in the literature. Then, based on these results, we conduct an analysis of the convergence rate of projected gradient methods studied in \cite{beck2018globally}.
{Before stating the definitions of the H\"olderian error bounds and the KL inequality for the TRS, let us first define $B$ as the unit norm ball, i.e., $B := \{ \x\in\R^n \mid \norm{\x} \le 1\}$ and $\delta_{B}$ as the corresponding indicator function}
+\infty,&\text{otherwise.}\end{array}\right.$
\begin{dfn}[H\"olderian error bounds for the TRS]
\label{def:errorbound}
The TRS is said to satisfy the global error bound condition with modulus $\rho$ if there exists a constant $\tau >0$ such that
\[
{\rm dist}(\x,S_0)\le \tau \big(f(\x) - f^* + \delta_B(\x)\big)^{\rho}, \quad \forall\, \x\in \R^n.
\]
\end{dfn}
\noindent Notice that Definition \ref{def:errorbound} is sometimes referred to as the H\"olderian growth condition. Specifically, if $\rho = 1/2$, it is exactly the quadratic growth condition \cite{dontchev2009implicit}; and if $\rho = 1$, the objective $f(\x) + \delta_B(\x)$ is said to have weak sharp minima \cite{burke1993weak}. We shall emphasize that the residual function used in Definition \ref{def:errorbound}, i.e., the right-hand-side of the inequality, corresponds to the difference between the objective function value at any given point and the optimal value of the TRS.
This is different from (sub)gradient based error bounds, e.g., Luo-Tseng error bounds \cite{luo1993error}, where residual functions associated with certain (sub)gradient information are used in the definitions.
\begin{dfn}[KL inequality for the TRS]
\label{def:KL}
The TRS is said to satisfy the KL inequality at $\x^*\in S_0$ with an exponent $\varrho\in[0,1)$, if there exist $\tau >0$ and $\epsilon >0$ such that
\[
\big(f(\x) - f^* + \delta_{B}(\x)\big)^{\varrho} \le \tau {\rm dist}(-\nabla f(\x), N_B(\x)), \quad \forall \, \x\in B(\x^*,\epsilon),
\]
{where $N_B(\x)$ is the normal cone of $B$ at $\x$, $B(\x^*,\epsilon)$ is the $\epsilon$-neighborhood of $\x^*$, i.e., $B(\x^*,\epsilon) = \left\{\x\in \R^n \mid \norm{\x - \x^*}\le \epsilon \right\}$,
{and by convention the distance of any given point to an empty set is defined as $\infty$}.}
\end{dfn}
\noindent We note that Definition \ref{def:KL} is inherited from the definition given in \cite{bolte2017error}, except that our definition focuses only on optimal solutions instead of general stationary points.
Existing works on error bounds and the KL inequality related to the TRS include H\"olderian error bounds for convex quadratic inequality systems \cite{wang1994global} (see also Subsection 3.1), convex piecewise quadratic functions \cite{li1995error}, a system consisting of a nonconvex quadratic function and a polyhedron \cite{luo2000error}, and the KL property for spherically constrained quadratic optimization problems \cite{liu2016quadratic,gao2016ojasiewicz} and spherically constrained quartic-quadratic optimization problems \cite{zhang2019geometric}. However, none of the above results can be directly applied to the TRS due to the existence of the possibly nonconvex and nonhomogeneous quadratic objective function and the unit ball constraint.
Recently, \cite{bolte2017error} studied the relations between H\"olderian error bounds and the KL inequality for convex problems. Specifically, they showed that for convex problems, error bounds with moderate residual functions are equivalent to the KL inequalities and the sum of the H\"olderian error bound modulus and the KL exponent equals one. See also \cite{aragon2008characterization,cui2019r,li2018calculus,drusvyatskiy2018error,drusvyatskiy2014second} for other related discussions and results. {As far as we know}, all the available equivalent relations between H\"olderian error bounds and the KL inequality are obtained under the convexity assumption.
For nonconvex problems or even the special nonconvex TRS, however, such relations remain largely unknown.
In this paper, by combining the local geometry of the optimal solution set $S_0$ and the elegant H\"olderian error bound results for convex quadratic inequality systems in \cite{wang1994global}, we are able to obtain a comprehensive
characterization of a H\"olderian error bound for the TRS.
Specifically, we show that the H\"olderian error bound holds globally with modulus $\rho = 1/4$
in the \emph{TRS-ill case} (to be defined in \eqref{eq:con}) and with $\rho = 1/2$ otherwise.
Then, based on the obtained error bound results, we are able to derive the KL inequality for the TRS.
We show that for the TRS, the KL inequality always holds locally at any optimal solution with the KL exponent $\varrho = 3/4$ (if the TRS is convex, the KL inequality in fact holds globally).
More precisely, at any optimal solution the KL exponent is $3/4$ in the \emph{TRS-ill case} and $1/2$ otherwise; that is, the sum of the KL exponent $\varrho$ and the H\"olderian error bound modulus $\rho$ always equals one for the TRS. Hence, we successfully extend the equivalence between error bounds and the KL inequality from convex problems \cite{bolte2017error} to the nonconvex TRS.
We shall emphasize here that for the TRS, both error bounds and the KL inequality results, as well as their relations, are new in the literature.
Equipped with this thorough understanding, we are able to derive convergence rate results for algorithms for solving the TRS.
As an illustration, we study the convergence rate of the projected gradient methods considered in \cite{beck2018globally}.
Specifically, we show that projected gradient methods converge to a global optimal solution locally sublinearly in the \emph{TRS-ill case} and linearly otherwise.
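For concreteness, a minimal Python sketch of such a projected gradient iteration for the TRS is given below; the fixed stepsize and stopping rule are illustrative choices of ours and may differ from the schemes analyzed in \cite{beck2018globally}.
\begin{verbatim}
import numpy as np

def projected_gradient_trs(A, b, x0, stepsize, max_iter=1000, tol=1e-8):
    """Projected gradient iteration x_{k+1} = Pi_B(x_k - t grad f(x_k)).

    A minimal sketch with a fixed stepsize t; stepsize rules and safeguards
    of the methods analyzed in the cited reference may differ.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = 2.0 * (A @ x - b)                    # grad f(x) = 2(Ax - b)
        y = x - stepsize * g
        x_new = y / max(1.0, np.linalg.norm(y))  # projection onto the unit ball
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x
\end{verbatim}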
The remainder of this paper is organized as follows. In Section 2, we review some existing results from the literature that will be used in our proofs. Then, we conduct a thorough analysis of the H\"olderian error bound and the KL inequality for the TRS in Sections 3 and 4, respectively.
In Section 5, we study the convergence rate of projected gradient methods for solving the TRS with the help of the KL inequality.
We conclude our paper in Section 6.
\textbf{Notation.}
For any vector $\x\in\R^n$, we use $[\x]_+$ to denote its positive part. We use $(\cdot)^\dagger$ to denote the \textit{Moore-Penrose} pseudoinverse of a matrix. For any given nonempty closed set $C$, the distance from any given vector $\x\in\R^n$ to $C$ is denoted by ${\rm dist}(\x,C) := \min_{\y\in C} \norm{\x - \y}$. Meanwhile, define the possibly set-valued projection mapping $\Pi_C:\R^n \rightrightarrows \R^n$ as
$
\Pi_{C}(\x) = \left\{\v \in \R^n \mid \norm{\x - \v} = {\rm dist}(\x,C) \right\}.
$
{We define $\bf 0$ as the vector (or matrix) of all zeros and its dimension will be clear from the context.}
\section{Preliminaries}\label{sec:pre}
We recall some basic properties associated with the TRS.
Let $\A = \P{\mathbf \Lambda} \P^T$ be the spectral decomposition of $\A$, where $\P$ is an orthogonal matrix and ${\mathbf \Lambda} = {\rm Diag}(\lambda)$ is the diagonal matrix of eigenvalues with {$\lambda_1\le \cdots \le \lambda_n$}.
When $\lambda_1 <0$ (i.e., nonconvex trust region subproblems), we consider the following convex relaxation:
\begin{align*}
(\rm P_1) \quad \min&\ \tilde f(\x):= \x^T(\A-\lambda_1\I)\x-2\b^T\x+\lambda_1\\
{\rm s.t.}& \ \|\x\|^2\le1.
\end{align*}
Problem ${\rm (P_1)}$ is regarded as a relaxation of ${\rm (P_0)}$ since
\begin{equation}
\label{eq:fandtf}
[g(\x)]_+ = [\norm{\x}^2 - 1]_+ = 0 \quad \mbox{ and } \quad \tilde f(\x)= f(\x)-\lambda_1(\x^T\x-1)\le f(\x)
\end{equation}
whenever $\|\x\|\le1$ and $\lambda_1 <0$.
We shall emphasize here that the relaxation $({\rm P_1})$ plays a central role in our subsequent analysis.
Throughout this paper, we define the solution set of the problem $\rm(P_1)$ as $S_1$.
We summarize in the following lemma the corresponding results obtained in \cite[Lemmas 1 and 2]{flippo1996duality} to reveal the relations between $S_0$ and $S_1$.
\begin{lem}
\label{lemma:P1}
{Suppose $\lambda_1<0$. Then we have the following:
\begin{itemize}
\item $\norm{\x^*}=1, \forall \, \x^*\in S_0$,
\item $\tilde f(\x^*) = f(\x^*) $ for any $\x^*\in S_0$, and
\item $S_0 = S_1 \cap \{\x \in \R^n \mid \norm{\x} = 1\}$.
\end{itemize}}
\end{lem}
Lemma \ref{lemma:P1} also implies the well-known optimality conditions \cite{flippo1996duality,conn2000trust} for the TRS, i.e.,
$\x^*\in \R^n$ is an optimal solution to problem $\rm(P_0)$ if and only if for some $\lambda^*\in\R$, $(\x^*,\lambda^*)$ satisfies the following KKT conditions:
\begin{eqnarray*}
\|\x^*\|^2&\le& 1, \\
(\A+\lambda^*\I)\x^*&=&\b, \\
\A+\lambda^*\I&\succeq& {\bf 0},\\
\lambda^*&\ge&0,\\
\lambda^*(1-\|\x^*\|^2)&=&0~~~ (\text{complementary slackness}).
\end{eqnarray*}
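The KKT conditions above are straightforward to verify numerically for a candidate pair $(\x^*,\lambda^*)$; the following Python sketch (the tolerance and the dense eigenvalue test are illustrative choices of ours) performs this check.
\begin{verbatim}
import numpy as np

def check_trs_kkt(A, b, x_star, lam_star, tol=1e-8):
    """Numerically check the KKT conditions for a candidate pair (x*, lambda*).

    A minimal sketch: all comparisons use the tolerance `tol`, and the
    semidefiniteness test uses a dense eigenvalue computation.
    """
    I = np.eye(len(b))
    feas = np.linalg.norm(x_star) <= 1.0 + tol                     # ||x*||^2 <= 1
    stat = np.linalg.norm((A + lam_star * I) @ x_star - b) <= tol  # (A+l*I)x* = b
    psd  = np.linalg.eigvalsh(A + lam_star * I)[0] >= -tol         # A+l*I >= 0
    dual = lam_star >= -tol                                        # lambda* >= 0
    comp = abs(lam_star * (1.0 - np.linalg.norm(x_star)**2)) <= tol
    return feas and stat and psd and dual and comp
\end{verbatim}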
For the nonconvex TRS, i.e., $\lambda_1 <0$, it is well-known
that the problem can be categorized into easy and hard cases\footnote{We should point out that in this paper we use the categories of the easy and hard cases only for the nonconvex case, which is slightly different from the categories in \cite{fortin2004trust}.} (\cite{fortin2004trust}).
A brief review about the two cases is given as follows:
\begin{enumerate}
\item In the easy case, $\b\not\perp\Null(\A-\lambda_1\I)$, which implies that $\lambda^*>-\lambda_1$. In this case, the optimal solution is unique and is given by
$\x^*=(\A+\lambda^*\I)^{-1}\b$ with $\|\x^*\|=1$.
\item In the hard case, $\b\perp\Null(\A-\lambda_1\I)$. In this case, the optimal solution may not be unique. In fact, the optimal solution is given by either $\x^*=(\A+\lambda^*\I)^{-1}\b$ for some optimal Lagrange multiplier $\lambda^*>-\lambda_1$ (called hard case 1 in \cite{fortin2004trust})
or
$\x^*=(\A-\lambda_1\I)^{\dagger}\b+\v$, where $\v\in\Null(\A-\lambda_1\I)$ is such that $\|\x^*\|=1$ (called hard case 2 in \cite{fortin2004trust}). In particular, the case with $\v=\bf0$ is called hard case 2 (i), and the case with $\v\neq\bf 0$ is called hard case 2 (ii).
\end{enumerate}
We summarize the above characterizations in the following table.
\begin{center}
\begin{tabular}{cccc}
\toprule
Easy case & Hard case 1 & Hard case 2 (i) & Hard case 2 (ii) \\
\midrule
$\b\not\perp\Null(\A-\lambda_1\I)$ & $\b\perp\Null(\A-\lambda_1\I)$ & $\b\perp\Null(\A-\lambda_1\I)$, & $\b\perp\Null(\A-\lambda_1\I)$, \\[2pt]
(implies $\lambda^* > -\lambda_{1}$) &
and $\lambda^* > -\lambda_{1}$
& $\lambda^* = -\lambda_{1}$ & $\lambda^* = -\lambda_{1}$\\[2pt]
& & and $\norm{(\A-\lambda_1\I)^{\dagger}\b} = 1$ & and $\norm{(\A-\lambda_1\I)^{\dagger}\b} < 1$ \\[2pt]
\bottomrule
\end{tabular}
\captionof{table}{Different cases for the nonconvex TRS, i.e., $\lambda_1 < 0$.}
\end{center}
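The classification in the table above can be carried out numerically from a spectral decomposition of $\A$. The Python sketch below is our own illustration (the tolerance and the helper structure are not part of the analysis) and distinguishes the easy case, hard case 1 and hard cases 2 (i) and (ii) for a nonconvex instance.
\begin{verbatim}
import numpy as np

def classify_nonconvex_trs(A, b, tol=1e-10):
    """Classify a nonconvex TRS instance (lambda_1 < 0) into the cases above.

    A minimal sketch: A is assumed symmetric, a dense eigendecomposition is
    used, and the tolerance `tol` decides orthogonality and norm comparisons.
    """
    lam, P = np.linalg.eigh(A)            # lam[0] = lambda_1 <= ... <= lam[-1]
    lam1 = lam[0]
    assert lam1 < 0, "this classification is for the nonconvex case only"

    d = lam - lam1
    null_mask = d <= tol                  # eigenvectors spanning Null(A - lam1 I)
    if np.linalg.norm(P[:, null_mask].T @ b) > tol:
        return "easy case"                # b not orthogonal to Null(A - lam1 I)

    # xbar = (A - lam1 I)^dagger b, computed in the eigenbasis of A
    coeffs = np.where(null_mask, 0.0, (P.T @ b) / np.where(null_mask, 1.0, d))
    nrm = np.linalg.norm(P @ coeffs)
    if nrm > 1.0 + tol:
        return "hard case 1"
    if abs(nrm - 1.0) <= tol:
        return "hard case 2 (i)"
    return "hard case 2 (ii)"
\end{verbatim}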
\section{H\"olderian error bounds for the TRS}
In this section, we present our main results on H\"olderian error bounds for the TRS. Mainly, our analysis will be divided into two cases.
First, we consider the convex case, i.e., the case with $\lambda_1 \ge 0$. The more challenging nonconvex case with $\lambda_1 <0$ will be discussed later.
\subsection{Case with $\lambda_1\ge0$}
In this case, problem ${\rm (P_0)}$ is convex and H\"olderian error bounds for the TRS can be obtained by applying the elegant error bound results derived in \cite{wang1994global} for convex quadratic inequalities.
Let $[m]:=\{1,2,\ldots,m\}$. We recall the main definitions and results in \cite{wang1994global}.
\begin{dfn}
\label{dfn:singular}
Consider the inequality system
\[q_i(\x)\le0, \quad \forall i\in[m].\]
An inequality $q_i(\x) \le0$ in the system is said to be singular if $q_i(\x) = 0$ for any solution $\x$ to the system. Thus an inequality $q_i(\x) \le 0$ is nonsingular if there is a solution $\x_i$ to the system such that $q_i(\x_i)<0$. If every inequality in the system is singular, we say that the inequality system is singular.
\end{dfn}
\begin{dfn}
\label{def:critical}
Let $S$ be a singular system of inequalities. We say that $S$ is critical if one of the
following two conditions holds:
\begin{enumerate}
\item at most one of the inequalities is nonlinear; or,
\item after any one of the nonlinear inequalities is deleted, all the remaining nonlinear
inequalities become nonsingular.
\end{enumerate}
\end{dfn}
\begin{dfn}
\label{def:irregular}
An inequality in a system is called irregular if it is nonlinear, singular, and
contained in no critical subsystem.
\end{dfn}
The following definition introduces the degree of singularity, which will be used to determine the modulus of H\"olderian error bounds for a convex quadratic inequality system.
\begin{dfn}
\label{def:degreeofsingularity}
Let
\[q_i(\x)\le0,\quad \forall i\in[m]\]
be a system of inequalities. If there is no nonlinear, singular inequality, we define the
degree of singularity of this system to be zero. If there is at least one such inequality, we
define the degree of singularity of the system to be one plus the number of irregular
inequalities.
\end{dfn}
The main technical result we will use is the following global error bound for convex quadratic inequality systems.
\begin{lem}[Theorem 3.1 in \cite{wang1994global}]
\label{lem:eb}
Suppose
\[q_i(\x)\le0, \quad \forall i\in[m]\]
is a convex quadratic system with a nonempty solution set $S$ and let $[m]=K\oplus J$,
where $K$ is the index set of all the nonsingular constraints and $J$ is the index set of all the singular constraints.
Then there exists a constant $\tau>0$ such that,
\begin{equation}
\label{eq:degree}
\dt(\x,S)\le \tau\left(\sum_{i=1}^m [q_i(\x)]_++\sum_{j\in J}[q_j(\x)]_+ ^{1/2^d}\right),\quad \forall\, \x\in \R^n,
\end{equation}
where $d$ is the degree of singularity of the system\footnote{{The original inequality in \cite[Theorem 3.1]{wang1994global} is $\dt(\x,S)\le \tau(\|[q_K(\x)]_+\|+\|[q_J(\x)]_+\|+\|[q_J(\x)]_+\|^{1/2^d})$, from which inequality \eqref{eq:degree} follows directly with a possibly rescaling of the constant $\tau$.} }.
\end{lem}
We note that one important feature of Lemma \ref{lem:eb} is that the exponent of the term $[q_J(\x)]_+$ in the above inequality is related to $d$, the degree of singularity of the system.
As one can observe later, it is this special and computable quantity that makes our analysis possible. Indeed, when Lemma \ref{lem:eb} is applied to system
\begin{equation}
\label{eq:sysorigin}
f(\x)-f^*\le0, \quad g(\x)\le0,
\end{equation}
the main task will be computing its degree of singularity.
We first consider the case where the system is minimal, i.e., deleting either inequality leaves a nonsingular system.
\begin{lem}
\label{lem:EBnonnegreg}
Assume that $\lambda_1 \ge 0$ and $\min_{\x\in\R^n} f(\x)<f^*$. Then there exists a constant $\tau>0$ such that
\begin{equation}
\label{EB:regular}
\dt(\x,S_0)\le \tau \big(f(\x)-f^*+\delta_{B}(\x)\big)^{1/2}, \quad \forall \x\in \R^n.
\end{equation}
\end{lem}
\begin{proof}
From the definition of $f^*$, we know that the solution set of system \eqref{eq:sysorigin} is $S_0$. From Lemma \ref{lemma:P1}, it further holds that system \eqref{eq:sysorigin} is singular.
Moreover, we see that when either inequality in \eqref{eq:sysorigin} is deleted, the remaining inequality is nonsingular.
Hence, from Definition \ref{def:critical}, we know that system \eqref{eq:sysorigin} is critical.
Therefore, there is no irregular inequality in \eqref{eq:sysorigin} and the degree of singularity $d$ of system \eqref{eq:sysorigin} equals $1$.
Hence, by using Lemma \ref{lem:eb}, we have that for all $\x\in \R^n$,
\begin{equation}
\label{eq:distS}
\dt(\x,S_0)\le \tau_1\left([f(\x)-f^*]_++[g(\x)]_+
+[f(\x)-f^*]_+^{1/2}+[g(\x)]_+^{1/2}\right).
\end{equation}
By the Weierstrass theorem, we know that $[f(\x) - f^*]_+$ is bounded above over the unit ball, i.e., there exists a constant $M>0$ such that
\[
M = \max \left\{
[f(\x) - f^*]_+ \mid \norm{\x} \le 1
\right\}.
\]
If $M\le 1$, inequality \eqref{eq:distS} implies that \eqref{EB:regular} holds with $\tau = 2\tau_1$. If $M > 1$, we have from \eqref{eq:distS} that
\[
\dt(\x,S_0)\le 2\tau_1 (f(\x)-f^*+\delta_{B}(\x))^{1/2}, \quad \forall\, \x \mbox{ with } [f(\x) - f^*]_+\le 1,
\]
and
\begin{align*}
\dt(\x,S_0)\le{}& \tau_1 \big(M + (f(\x)-f^*+\delta_{B}(\x))^{1/2}\big)\\
\le{}& \tau_1(M+1)(f(\x)-f^*+\delta_{B}(\x))^{1/2},
\quad \forall\, \x \mbox{ with } [f(\x) - f^*]_+> 1.
\end{align*}
Combining all the above discussions, we see that \eqref{EB:regular} holds with $\tau = \max\{2, M+1\}\tau_1$.
\end{proof}
Now consider the case that $\min_{\x\in\R^n} f(\x) = f^*$. In this case, it is easy to see that $\{f(\x)-f^*\le0\}$ is a singular system. Hence, the degree of singularity
of system \eqref{eq:sysorigin} and the corresponding error bound modulus depend on
the singularity of the second inequality.
Indeed, since $\min_{\x\in\R^n} f(\x) = f^* > -\infty$, we know that $\b\in \ra(\A)$ and for all $\x \in\R^n$,
\begin{equation*}
\label{eq:fval}
f(\x) - f^* = (\x - \tilde \x)^T \A (\x - \tilde \x),
\end{equation*}
where $\tilde \x=\A^\dagger \b$ and $\norm{\tilde \x} \le 1$.
Therefore, the optimal solution set of problem $({\rm P_0})$ can be written as \begin{equation}
\label{eq:s0convex}
S_0= \{\x\in\R^n \mid \x = \tilde \x + \d, \, \|\x\|\le1, \, \d \in \Null(\A) \}.
\end{equation}
With these discussions in hand, we are ready to derive a H\"olderian error bound in the following lemma.
\begin{lem}
\label{lem:EBnonneg}
Assume that $\lambda_1\ge0$ and $\min_{\x\in\R^n} f(\x) = f^*$. Then, it holds that
\begin{enumerate}
\item if $\lambda_1 >0$, then
\[
\dt(\x,S_0)\le \sqrt{\frac{1}{\lambda_1}}\,[f(\x) - f^* + \delta_B(\x)]^{1/2}, \quad \forall\, \x\in\R^n;
\]
\item if $\lambda_1 = 0$ and $\norm{\tilde \x}= 1$, then $S_0 = \{ \tilde \x\}$ and there exists a constant $\tau >0$ such that
\begin{equation*}
\label{EB:1o4}
\dt(\x,S_0)\le \tau[f(\x) - f^* + \delta_B(\x)]^{1/4}, \quad \forall\, \x\in\R^n;
\end{equation*}
\item if $\lambda_1 = 0$ and $\norm{\tilde \x} < 1$, then there exists a constant $\tau >0$ such that
\begin{equation*}
\label{EB:1o2}
\dt(\x,S_0)\le \tau[f(\x) - f^* + \delta_B(\x)]^{1/2}, \quad \forall\, \x\in\R^n.
\end{equation*}
\end{enumerate}
\end{lem}
\begin{proof}
Case 1 follows directly from the fact that $f(\x) - f^* = (\x - \tilde \x)^T \A (\x - \tilde \x)\ge\lambda_1\|\x-\tilde\x\|^2$ and $S_0=\{\tilde\x\}$.
For case 2, since $\lambda_1 =0$ and $\{\x\in \R^n \mid \norm{\x} < 1\}\cap S_0 = \emptyset$, from \eqref{eq:s0convex}, we know that $S_0 = \{\tilde \x\}$, i.e., $S_0$ is a singleton.
Thus, the second inequality $g(\x) \le 0$ in system \eqref{eq:sysorigin} is singular. Meanwhile, the assumption that $\min_{\x\in\R^n} f(\x) = f^*$ implies that the only critical subsystem of \eqref{eq:sysorigin} is the first inequality $f(\x) - f^*\le 0$. Therefore, $g(\x) \le 0$ is irregular and the degree of singularity of \eqref{eq:sysorigin} equals $2$.
Then, Lemma \ref{lem:eb} asserts that there exists a constant $\tau_1>0$ such that
\[\dt(\x,S_0)\le \tau_1 \left([f(\x)-f^*]_++[g(\x)]_++[f(\x)-f^*]_+^{1/4}+[g(\x)]_+^{1/4}\right)\]
for all $\x\in\R^n$.
Following the same arguments in the proof of Lemma \ref{lem:EBnonnegreg}, we know that there exists a constant $\tau >0$ such that
\[
\dt(\x,S_0)\le \tau\big(f(\x) - f^* + \delta_B(\x)\big)^{1/4}, \quad \forall\, \x\in\R^n.
\]
In case 3, we have $\{\x \in \R^n \mid \norm{\x} < 1\}\cap S_0 \neq \emptyset$ and thus the second inequality $\norm{\x}^2 - 1\le 0$ is nonsingular. Hence the degree of singularity of system \eqref{eq:sysorigin} equals $1$ as there is no irregular inequality.
Then, Lemma \ref{lem:eb} asserts that there exists a constant $\tau_2>0$ such that
\[\dt(\x,S_0)\le \tau_2 \left([f(\x)-f^*]_++[g(\x)]_++[f(\x)-f^*]_+^{1/2}+[g(\x)]_+^{1/2}\right)\]
for all $\x\in\R^n$.
As in the proof of Lemma \ref{lem:EBnonnegreg}, we have
\[
\dt(\x,S_0)\le \tau\big(f(\x) - f^* + \delta_B(\x)\big)^{1/2}, \quad \forall\, \x\in\R^n.
\]
This completes the proof for the lemma.
\end{proof}
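To illustrate that the exponent $1/4$ in case 2 of Lemma \ref{lem:EBnonneg} cannot be improved in general, consider the simple two-dimensional instance $\A = {\rm Diag}(0,1)$ and $\b = (0,1)^T$ (an instance constructed here purely for illustration). Then $\lambda_1 = 0$, $\tilde\x = \A^\dagger\b = (0,1)^T$, $\norm{\tilde\x} = 1$ and $S_0 = \{\tilde\x\}$. For the boundary points $\x_\theta = (\sin\theta, \cos\theta)^T$, we have $f(\x_\theta) - f^* = (1-\cos\theta)^2 = \Theta(\theta^4)$, while ${\rm dist}(\x_\theta,S_0) = \sqrt{2-2\cos\theta} = \Theta(\theta)$ as $\theta\to 0$. Hence no error bound of the form ${\rm dist}(\x,S_0)\le\tau\big(f(\x)-f^*+\delta_B(\x)\big)^{\rho}$ with $\rho > 1/4$ can hold along this family of points, so the modulus $1/4$ is tight.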
\subsection{Case with $\lambda_1<0$}
In this section, we turn our attention to the nonconvex TRS. As stated in the preliminaries, the nonconvex TRS is categorized into the easy case and the hard case. We will derive error bounds for these cases separately.
Before diving into the proofs, we shall discuss the main ideas here.
Since $\lambda_1 <0$, the first inequality in the quadratic inequality system \eqref{eq:sysorigin} is {nonconvex}.
Fortunately, the following quadratic inequality system
\begin{equation}
\label{eq:sysnew}
\tilde f(\x)-f^*\le0, \quad g(\x)\le0,
\end{equation}
derived from the reformulation $\rm(P_1)$ (see Lemma \ref{lemma:P1}), is always a convex one.
Hence, we can apply Lemma \ref{lem:eb} to system \eqref{eq:sysnew} and then use the relations between the solution sets of
systems \eqref{eq:sysorigin} and \eqref{eq:sysnew} to establish meaningful error bounds for nonconvex quadratic inequality system \eqref{eq:sysorigin}.
We first study the easy case and hard case 1. In these two cases, the solution of problem $({\rm P_0})$ is unique and has unit norm.
\begin{lem}
\label{lem:ezh1}
In the easy case or hard case 1, there exists some constant $\tau>0$ such that
\begin{equation*}
\dt(\x,S_0)\le \tau\big(f(\x)-f^*+\delta_B(\x)\big)^{1/2},\quad \forall \x\in\R^n.
\end{equation*}
\end{lem}
\begin{proof}
In both the easy case and hard case 1, we see that $\min_{\x\in\R^n} \tilde f(\x) < f^*$.
Indeed, for the easy case, since $\b\not\perp \Null(\A - \lambda_1 \I)$, we have $\b\notin\ra(\A-\lambda_1\I)$. Then, it holds that $\min \tilde f(\x)=-\infty<f^*$.
For hard case 1, the optimal solution for $\min_{\x\in\R^n} \tilde f(\x)$
is achieved by $(\A-\lambda_1\I)^\dagger \b$, whose norm is larger than 1. This gives $\min_{\x\in\R^n} \tilde f(\x)<f^*$.
Similar to the case studied in Lemma \ref{lem:EBnonnegreg}, the degree of singularity of system \eqref{eq:sysnew} equals $1$. Hence, there exists some constant $\tau>0$ such that for all
$\x\in\R^n$,
\begin{equation*}
\dt(\x,S_1)\le \tau\big(\tilde f(\x)-f^*+\delta_B(\x)\big)^{1/2}.
\end{equation*}
From \eqref{eq:fandtf},
we see that
\[
\dt(\x, S_1) \le \tau\big( f(\x)-f^*+\delta_B(\x)\big)^{1/2}, \quad \forall \, \x\in\R^n.
\]
Note that in both the easy case and hard case 1, since $\min_{\x\in\R^n} \tilde f(\x) < f^*$,
we have that $\norm{\x} = 1$ for all $\x\in S_1$. This further implies that
$S_1 = S_0$
(in fact, $S_1=S_0=\{(\A+\lambda^* \I)^{-1} \b\}$ for the unique $\lambda^*>-\lambda_1$ such that $\|(\A+\lambda^* \I)^{-1} \b\|=1$) and thus completes the proof.
\end{proof}
In hard case 2, $\b$ is orthogonal to the eigenspace of matrix $\A$ corresponding to the smallest eigenvalue, i.e., $\b\perp \Null(\A-\lambda_1\I)$ and $f^* = \min_{\x\in\R^n} \tilde f(\x)$.
Denote $\bar \x=(\A-\lambda_1\I)^\dagger \b$. Then, we have
\begin{equation}
\label{eq:s0s1}
\begin{array}{lll}
S_0&=&\{\bar \x+ \v \in \R^n \mid \|\bar \x+ \v\|=1,\v\in \Null(\A-\lambda_1\I)\}~~~~ \mbox{
and }\\
S_1&=&\{\bar \x+ \v \in \R^n \mid \|\bar \x+ \v\|\le1,\v\in \Null(\A-\lambda_1\I)\}.
\end{array}
\end{equation}
Let us first consider the hard case 2 (i).
\begin{lem}
\label{lem:h2i}
In the hard case 2 (i),
there exists $\tau >0$ such that
\[
\dt(\x,S_0) \le \tau \big(f(\x) - f^* + \delta_{B}(\x)\big)^{1/4}, \quad \forall\,
\x\in\R^n.
\]
\end{lem}
\begin{proof}
From the assumption, we know that $S_0 = S_1 = \{\bar \x\}$. By applying Lemma \ref{lem:EBnonneg} case 2 to problem $({\rm P_1})$, we obtain that\[
\dt(\x,S_0) = \dt(\x,S_1)\le \tau\big(\tilde f(\x)-f^*+\delta_B(\x)\big)^{1/4}, \quad \forall\, \x\in\R^n.
\]
The conclusion then follows directly from \eqref{eq:fandtf}.
\end{proof}
Next we focus on the hard case 2 (ii), in which $S_1$ is not a singleton since $\norm{\bar \x} < 1$.
To establish the desired error bound inequality in this case, we need to characterize the connection between $\dt(\x,S_0)$ and $\dt(\x,S_1)$. For this purpose, we establish the following technical lemma.
\begin{lem}
\label{lem:proj0}
Given $\x\in\R^n$, let $\x_1:=\Pi_{S_1}(\x)$ be the projection of $\x$ onto the convex set $S_1$.
In the hard case 2 (ii), it holds that
\begin{enumerate}
\item if $\x_1 = \bar \x$, then $\Pi_{S_0}(\x) = S_0$ and
$\dt(\x, S_0) = \sqrt{\norm{\x - \bar \x}^2 + 1 - \norm{\bar \x}^2}\,$;
\item else if $\x_1 \neq \bar \x$, then
$\Pi_{S_0}(\x) = \bar\x+t_0\v_1$ with $\v_1=\frac{\x_1-\bar\x}{\|\x_1-\bar\x\|} \in \Null(\A - \lambda_1 \I)$ and $t_0\ge\|\x_1-\bar\x\|$ such that $\|\bar\x+t_0\v_1\|=1.$
\end{enumerate}
\end{lem}
\begin{proof}
In the first case, since $\x_1 = \Pi_{S_1}(\x) = \bar \x $, we know that
\[
\inprod{\x - \bar\x}{\z - \bar\x} \le 0, \quad \forall\, \z \in S_1,
\]
which, together with the structure of $S_1$, implies that
$\x - \bar \x \in \ra(\A - \lambda_1 \I).$
Thus, for any $\y \in S_0$, we have that
\[
\norm{\x - \y}^2 = \norm{\x - \bar \x + \bar \x - \y}^2
= \norm{\x - \bar \x}^2 + \norm{\bar\x - \y}^2,
\]
where the second equality follows from the facts that $\y - \bar\x \in \Null(\A - \lambda_1 \I)$ and {thus} $\inprod{\x - \bar \x}{\bar\x - \y} = 0$.
This, together with $1 = \norm{\y}^2 = \norm{\bar \x - \y}^2 + \norm{\bar \x}^2$ (due to $\y - \bar\x \in \Null(\A - \lambda_1 \I)$ and $\bar\x \in \ra(\A - \lambda_1\I)$), implies
\[\norm{\x - \y} = \sqrt{\norm{\x - \bar \x}^2 + 1 - \norm{\bar \x}^2},\quad \, \forall\, \y \in S_0.\] This completes the proof for the first case.
We next consider the second case.
If $\norm{\x_1} = 1$, we know that $\x_1 \in \Pi_{S_0}(\x)$. For any $\tilde \x \in \Pi_{S_0}(\x)$, it holds that
\[\tilde \x \in S_0 \subseteq S_1 \quad \mbox{ and } \quad {\rm dist}(\x, S_1) = \norm{\x - \x_1} = {\rm dist}(\x, S_0) = \norm{\x - \tilde \x},\]
and consequently, $\tilde \x = \Pi_{S_1}(\x) = \x_1$ {due to the uniqueness of the projection onto the convex set $S_1$}.
Hence, $\Pi_{S_0}(\x)$ is a singleton, i.e., \[\x_0 := \Pi_{S_0}(\x) = \x_1= \bar \x + \norm{\x_1 - \bar \x}\frac{\x_1 - \bar \x}{\norm{\x_1 - \bar\x}}.\]
Now we consider the case with $\norm{\x_1}<1$.
Without loss of generality, assume that the null space of $\A-\lambda_1\I$ is spanned by
an orthonormal basis $\{\v_1,\v_2,\ldots,\v_k\}$ with some $k\ge 1$, where $\v_1=\frac{\x_1-\bar\x}{\|\x_1-\bar\x\|}$.
Then, we can rewrite the solution set of $\rm(P_0)$ as $S_0=\{\x\in\R^n \mid \x = \bar\x+\sum_{i=1}^k\alpha_i\v_i, \, \|\x\|=1\}$.
Since $\x_1 = \Pi_{S_1}(\x)$, we have
\[
\inprod{\x - \x_1}{\z - \x_1} \le 0,\quad \forall\, \z \in S_1.
\]
This, together with \eqref{eq:s0s1} and $\norm{\x_1}<1$, implies that
$\inprod{\x - \x_1}{\d} = 0$ for all $\d\in \Null(\A - \lambda_1 \I)$. Therefore, there exists some vector $\s\in \ra(\A - \lambda_1 \I)$ such that
$\x = \x_1 + \s$.
Consider the projection of $\x$ onto $S_0$:
\begin{equation}
\label{prob:projxs0}
\min \|\x-\z\|^2~~~{\rm s.t.}~~\z\in S_0,
\end{equation}
which, due to $\x=\bar\x+\|\x_1-\bar\x\|\v_1+\s$ and $\z=\bar\x+\sum_{i=1}^k\mu_i \v_i$, is equivalent to
\begin{equation}\label{pb:310}
\min_{\mu_1,\ldots, \mu_k} \left\|\|\x_1-\bar\x\|\v_1+\s-\sum_{i=1}^k\mu_i \v_i\right\|^2~~~
{\rm s.t.}~~\left\|\bar\x+\sum_{i=1}^k\mu_i\v_i\right\|=1.
\end{equation}
Since $\s,\v_1,\ldots,\v_k$ are orthogonal to each other and $\norm{\v_i} = 1$ for $i=1,\ldots,k$, the objective in \eqref{pb:310} can be further written as:
\[
\left\|\|\x_1-\bar\x\|\v_1+\s-\sum_{i=1}^k\mu_i \v_i\right\|^2 = \|\x_1-\bar\x\|^2-2\mu_1\|\x_1-\bar\x\|+
\sum_{i=1}^{k} \mu_i^2
+\|\s\|^2.
\]
We further note that for any feasible solution $(\mu_1,\ldots,\mu_k)$ to \eqref{pb:310}, it holds that
$
\norm{\bar\x}^2 + \sum_{i=1}^{k}\mu_i^2 = 1.
$
Hence, \eqref{pb:310}, and consequently \eqref{prob:projxs0}, can be equivalently rewritten as
\begin{equation*}
\min_{\mu_1,\ldots, \mu_k}\|\x_1-\bar\x\|^2-2\mu_1\|\x_1-\bar\x\|+1-\|\bar\x\|^2+\|\s\|^2~~~
{\rm s.t.}~~\norm{\bar\x}^2 + \sum_{i=1}^{k}\mu_i^2 = 1,
\end{equation*}
whose unique optimal solution is clearly $\mu_1^*=\sqrt{1-\|\bar\x\|^2}$ and $\mu_i^*=0$ for $i=2,\ldots,k$.
Therefore, the projection problem
\eqref{prob:projxs0} has a unique optimal solution that takes the form
$\z^*=\bar\x+t_0\v_1$ with $t_0 > \norm{\x_1 - \bar \x}$ such that $\norm{\z^*} = 1$.
\end{proof}
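The formulas established in Lemma \ref{lem:proj0} also give a direct way to compute $\Pi_{S_1}(\x)$ and, when it is a singleton, $\Pi_{S_0}(\x)$ in the hard case 2 (ii). The Python sketch below is our own illustration (assuming $\b\perp\Null(\A-\lambda_1\I)$ and $\norm{(\A-\lambda_1\I)^\dagger\b}<1$, and relying on a dense eigendecomposition); it is not part of the analysis.
\begin{verbatim}
import numpy as np

def projections_hard_case_2ii(A, b, x, tol=1e-10):
    """Pi_{S_1}(x) and (when a singleton) Pi_{S_0}(x) in the hard case 2 (ii).

    A minimal sketch following the lemma above: it assumes b is orthogonal to
    Null(A - lambda_1 I) and ||(A - lambda_1 I)^dagger b|| < 1.
    """
    lam, P = np.linalg.eigh(A)
    d = lam - lam[0]
    null_mask = d <= tol
    # xbar = (A - lambda_1 I)^dagger b in the eigenbasis of A
    xbar = P @ np.where(null_mask, 0.0, (P.T @ b) / np.where(null_mask, 1.0, d))
    R = np.sqrt(max(1.0 - xbar @ xbar, 0.0))    # null-space radius available in B

    V = P[:, null_mask]                         # orthonormal basis of Null(A - lam1 I)
    n = V @ (V.T @ (x - xbar))                  # null-space component of x - xbar
    n_norm = np.linalg.norm(n)

    v = n if n_norm <= R else (R / n_norm) * n  # clip to the ball of radius R
    proj_S1 = xbar + v
    if n_norm <= tol:                           # Pi_{S_1}(x) = xbar: Pi_{S_0}(x) = S_0
        return proj_S1, None
    proj_S0 = xbar + (R / n_norm) * n           # push the null direction to the sphere
    return proj_S1, proj_S0
\end{verbatim}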
{Now we are ready to present the connection between $\dt(\x,S_0)$ and $\dt(\x,S_1)$ in the hard case 2 (ii).}
\begin{lem}
\label{lemma:xS}
In the hard case 2 (ii), there exists some constant $\gamma\in(0,\pi/2)$ with $\sin\gamma=\frac{1-\|\bar\x\|}{ \sqrt{(1 - \norm{\bar\x})^2 + 1-\|\bar\x\|^2}}$ such that
\begin{equation}
\label{eq:distxx0xx1}
{\rm dist}(\x,S_1) + \sqrt{1 - \norm{\x}^2} \ge {\rm dist}(\x, S_0)\sin\gamma , \quad \forall \x \in \left\{ \x\in\R^n \mid \norm{\x}\le 1 \right\}.
\end{equation}
\end{lem}
\begin{proof}
For any given $\x\in \R^n$ with $\norm{\x}\le 1$ and any given $\x_0 \in \Pi_{S_0}(\x)$, define $\v = \x_0 - \bar\x$.
Since $\norm{\bar\x} < 1$ in the hard case 2 (ii) and $\norm{\x_0} = 1$, we have that $\norm{\v} \neq 0$.
Define the line segment $L:= \left\{\bar\x + \alpha \v \in \R^n \mid \alpha \in [0,1]\right\}$. We note from \eqref{eq:s0s1} and $\bar\x = (\A-\lambda_1\I)^\dagger \b$ that $\inprod{\v}{\bar\x} = 0$.
If $\bar\x = \Pi_{S_1}(\x)$, then $\Pi_{S_1}(\x)\in L$;
otherwise, by case 2 in Lemma \ref{lem:proj0}, we know that $\x_0 = \Pi_{S_0}(\x) = \bar\x + t (\Pi_{S_1}(\x) - \bar\x)$ for some constant $t \ge 1$. Hence, $\Pi_{S_1}(\x) - \bar\x = \v/t$, i.e., $\Pi_{S_1}(\x) \in L$.
Since $L\subseteq S_1$, it further holds that
\begin{equation}
\label{eq:piLeqpiS}
\Pi_L(\x) = \Pi_{S_1}(\x).
\end{equation}
Let $\Pi_{L}(\x) = \bar\x + \alpha^*\v$. By the definition of $L$ and the properties of the projection operator $\Pi_L$,
it is not difficult to see that
\begin{equation*}
\inprod{\x - \Pi_{L}(\x)}{\v} \left\{
\begin{aligned}
& = 0, \mbox{ if } \alpha^*\in (0,1),\\
& \le 0, \mbox{ if } \alpha^* = 0,\\
& \ge 0, \mbox{ if } \alpha^* = 1.
\end{aligned}
\right.
\end{equation*}
We first
consider the case where $\inprod{\x - \Pi_{L}(\x)}{\v}>0$. In this case, we have $\alpha^*=1$ and thus
\[
{\rm dist}(\x,S_1) = \norm{\x - \Pi_{S_1}(\x)} = \norm{\x - \Pi_{L}(\x)} = \norm{\x-(\bar\x + \v)}
= \norm{\x - \x_0} = {\rm dist}(\x,S_0),
\]
i.e., \eqref{eq:distxx0xx1} holds trivially.
Then, we
argue that $\inprod{\x - \Pi_{L}(\x)}{\v} < 0$ cannot occur. Indeed, if this is not the case,
we must have
$\alpha^* =0$ and $\inprod{\x - \Pi_{L}(\x)}{\v} < 0$ and thus
$ \inprod{\x - \bar\x}{\v} < 0$.
Hence we further have that
\begin{equation}
\label{eq:L311}
\begin{array}{lll}
\norm{\x - (\bar\x - \v)}^2 &={}& \norm{\x - \bar\x}^2 + 2\inprod{\x - \bar\x}{\v}
+\norm{\v}^2 \\
&<{}& \norm{\x - \bar\x}^2 - 2\inprod{\x - \bar\x}{\v}
+\norm{\v}^2 \\
&={}& \norm{\x - (\bar\x +\v)}^2 = \norm{\x - \x_0}^2.
\end{array}
\end{equation}
Since {$\inprod{\bar\x}{\v} = 0$}, it holds that
$\norm{\bar\x - \v}^2 = \norm{\bar\x}^2 + \norm{\v}^2 = \norm{\bar\x + \v}^2 = 1$. This, together with the definition of $S_0$ in \eqref{eq:s0s1}, implies that $\bar\x - \v \in S_0$.
Since $\x_0 \in \Pi_{S_0}(\x)$, it holds that $\norm{\x - \x_0} \le \norm{\x - (\bar\x - \v)}$, which contradicts \eqref{eq:L311}.
Thus, in the subsequent analysis, we only need to focus on the case where $\inprod{\x - \Pi_{L}(\x)}{\v} = 0$.
{We first consider the case where $\bar \x=\bf0$. In this case, $\v=\x_0$.
Let $\u=\x-\Pi_L(\x)=\x-\alpha^*\v$. Hence we have $\u\perp\v$.
It holds that
$
\dt(\x, S_1)=\|\u\|, $
and
\[ \dt(\x,S_0) \le \|\x-\x_0\|=\|\u-(1-\alpha^*)\v\|.
\]
We then claim that \eqref{eq:distxx0xx1} holds.
To see this, since
\[
\dt(\x,S_1)^2+1-\|\x\|^2 \le \left(\dt(\x,S_1)+\sqrt{1-\|\x\|^2}\right)^2,
\]
it suffices to show
\[
\dt(\x,S_0)^2\sin^2\gamma \le\dt(\x,S_1)^2+1-\|\x\|^2, \]
which
is equivalent to
\begin{equation}
\label{eq:equi01}
\|\u+(1-\alpha^*)\v\|^2\le 2\left(\|\u\|^2+1-\|\x\|^2\right)
\end{equation}
due to $\sin\gamma=1/\sqrt2$ (since $\bar\x=0$), $\x=\u+\alpha^*\v$ and $\u\perp\v$. Meanwhile,
\eqref{eq:equi01} is further equivalent to
\[
\|\u\|^2+ (1-\alpha^*)^2\|\v\|^2+2\|\alpha^*\v\|^2\le 2,
\]
which is trivial since $\|\x\|^2=\|\u\|^2+\|\alpha^*\v\|^2\le1$, $0\le\alpha^*\le1$ and $\|\v\|=1$.}
In the remainder of this proof, we consider the remaining case where $\bar\x \neq \bf0$.
In this case, we derive a lower bound of $\norm{\x - \Pi_{S_1}(\x)} $,{ which is equivalent to $\norm{\x - \Pi_{L}(\x)}$ due to \eqref{eq:piLeqpiS},} by considering
the following optimization problem:
\begin{equation}
\label{prob:zy1y0}
\min_{\z \in \R^n} \left\{ \norm{\z - \Pi_{L}(\z)} \mid \inprod{\z - \Pi_{L}(\z)}{\v} = 0, \, \norm{\z} = \norm{\x}, \, \norm{\z - \x_0} = {\rm dist}(\x,S_0)
\right\}.
\end{equation}
From \eqref{eq:piLeqpiS} and $\inprod{\x - \Pi_{L}(\x)}{\v} = 0$, we see that problem \eqref{prob:zy1y0} has a non-empty closed and bounded feasible set, as $\x$ is always a feasible solution. Since the objective is continuous in $\z$, by the Weierstrass theorem we know that problem \eqref{prob:zy1y0} has a non-empty and compact solution set.
Let $\z^*$ be any optimal solution to problem \eqref{prob:zy1y0}. We know that
${\rm dist}(\x, S_1) =\norm{\x - \Pi_{L}(\x)}\ge \norm{\z^* - \Pi_{L}(\z^*)}$ as $\x$ is a feasible solution to \eqref{prob:zy1y0}.
Since $\|\x\|=\|\z^*\|$, it further holds that
\begin{equation}
\label{eq:distrelation}
{\rm dist}(\x,S_1) + \sqrt{1 - \norm{\x}^2} \ge \norm{\z^* - \Pi_{L}(\z^*)} + \sqrt{1 - \norm{\z^*}^2}.
\end{equation}
Recall that $\bar\x\neq\bf0$ and $\v\neq\bf0$. For any $\z \in \R^n$, since $\v\perp\bar \x$, we note that there exist $\lambda, \mu \in\R$ and $\u\in\R^n$ satisfying $\inprod{\u}{\bar \x} = \inprod{\u}{\v} = 0$ {(note that $\u=0$ if $n=2$)} such that
\begin{equation*}
\label{eq:zdecomp}
\z = \lambda \bar\x + \mu \v + \u.
\end{equation*}
Given the structure of $L$, we know that $\Pi_{L}(\z) = \bar \x + \alpha \v$ for some $\alpha \in [0,1]$.
Now, if, in addition, $\inprod{\z - \Pi_{L}(\z)}{\v} = 0$, it then holds that $\mu = \alpha \in [0,1]$, and
{since $\x_0 - \Pi_L(\z)=(\bar\x+\v)-(\bar\x+\mu\v) =(1-\mu)\v $, } we further know that
$\inprod{\z -\Pi_L(\z)}{\x_0 - \Pi_L(\z)} = 0$.
Hence, for any feasible solution $\z$ to problem \eqref{prob:zy1y0}, it holds that
\begin{equation}
\label{eq:dist}
\norm{\z - \Pi_{L}(\z)}^2
= \norm{\z - \x_0}^2 - \norm{\x_0 - \Pi_{L}(\z)}^2
= {\rm dist}^2(\x, S_0) - (1-\mu)^2\norm{\v}^2.
\end{equation}
Therefore, problem \eqref{prob:zy1y0} can be equivalently reformulated as
\begin{equation}
\label{prob:mu}
\begin{aligned}
\min_{\u\in\R^n, \, \mu,\lambda\in\R} \quad &\mu \\
{\rm s.t.}\quad & \lambda^2 \norm{\bar \x}^2 + \mu^2 \norm{\v}^2 + \norm{\u}^2 = \norm{\x}^2, \\
& (\lambda -1)^2 \norm{\bar \x}^2 + (\mu -1)^2\norm{\v}^2 + \norm{\u}^2 = {\rm dist}^2(\x,S_0), \\
& \inprod{\u}{\bar\x} = 0, \, \inprod{\u}{\v} = 0,\, \mu\in [0,1].
\end{aligned}
\end{equation}
We proceed by considering two cases, according to whether the dimension of problem \eqref{prob:zy1y0} satisfies $n = 2$ or $n\ge 3$.
\begin{enumerate}
\item[{\bf Case I:}] If $n = 2$, then for any feasible solution $(\mu,\lambda, \u)$ to \eqref{prob:mu}, it holds that $\u \equiv {\bf 0}$. Let $(\lambda^*,\mu^*,{\bf 0})$ be an optimal solution to \eqref{prob:mu}. Then,
$\z^*=\lambda^*\bar\x+\mu^*\v$ is an optimal solution to \eqref{prob:zy1y0}.
Define $\tilde \z = \lambda^*\beta\bar\x+\mu^*\v $,
where $\beta \ge 1$ is some constant such that $\norm{\tilde \z} = 1$.
Note that the above construction of $\tilde \z$ implies that $\Pi_{L}(\tilde \z) = \Pi_{L}(\z^*)$. Let $\tilde\theta\in(0,\pi/2)$ be the angle between $\tilde \z - \x_0$ and $\bar \x - \x_0$.
{Note that $\gamma\in(0,\pi/2)$ can be regarded as the angle between $\frac{\bar\x}{\|\bar\x\|}-\x_0$ and $\bar\x-\x_0$.} Then, geometric arguments assert that $\tan \tilde \theta \ge \tan \gamma$, i.e., $\tilde\theta \ge \gamma$ (see Figure \ref{fig:angles} for the illustration).
Hence,
\[
\norm{\tilde \z - \Pi_{L}(\tilde \z)} = \norm{\tilde \z -\Pi_{L}(\z^*)}
= \norm{\tilde \z - \x_0} \sin\tilde \theta \ge \norm{\tilde \z - \x_0}\sin \gamma.
\]
First consider the case where $\norm{\tilde \z - \x_0} \ge \norm{\z^* - \x_0}$.
In this case, we know that
\begin{equation} \label{eq:distz}
\begin{aligned}
& \sqrt{1 - \norm{\z^*}^2} + \norm{\z^* - \Pi_{L}(\z^*)} \ge \|\tilde \z-\z^*\|+ \norm{\z^* - \Pi_{L}(\z^*)} \\
\ge&\norm{\tilde \z - \Pi_{L}(\z^*)} \ge \norm{\tilde \z - \x_0} \sin \gamma \ge \norm{\z^* - \x_0} \sin \gamma = {\rm dist}(\x, S_0) \sin\gamma,
\end{aligned}
\end{equation}
where the first inequality follows from the facts that $\inprod{\tilde \z-\z^*}{\z^*}=(\beta-1)(\lambda^*)^2\norm{\bar\x}^2\ge0$ and $1=\|\tilde\z\|^2=\|\tilde \z-\z^*\|^{2}+\|\z^*\|^2+2\inprod{\tilde \z-\z^*}{\z^*}\ge\|\tilde \z-\z^*\|^{2}+\|\z^*\|^2$.
Next, we consider the case where $\norm{\tilde \z - \x_0} < \norm{\z^* - \x_0}$. To proceed, let $\theta\in (0,\pi/2)$ denote the angle between $\z^* - \x_0$ and $\bar\x - \x_0$.
{Since $\tilde\theta\in (0,\pi/2)$, and $\cos\theta=\frac{\|\x_0-\Pi_{L}(\z^*)\|}{\norm{\z^* - \x_0}} < \frac{\|\x_0-\Pi_{L}(\z^*)\|}{\norm{\tilde\z - \x_0}}= \cos\tilde\theta$, we see that $\sin\theta>\sin\tilde\theta$.}
Then, it holds that
\begin{equation}
\label{eq:distz2}
\norm{\z^* - \Pi_{L}(\z^*)} = \norm{\z^* - \x_0}\sin \theta > {\rm dist}(\x,S_0) \sin\tilde\theta \ge {\rm dist}(\x, S_0)\sin \gamma.
\end{equation}
From \eqref{eq:distrelation}, \eqref{eq:distz} and \eqref{eq:distz2}, we see that \eqref{eq:distxx0xx1} holds.
\item[{\bf Case II:}] If $n\ge3$, we observe that $\u$ can be eliminated from problem \eqref{prob:mu}
by using the fact that $\|\bar\x\|^2+\|\v\|^2=\|\x_0\|^2=1$. In particular, problem \eqref{prob:mu} can be rewritten as follows:
\begin{equation}
\label{prob:mu_rd}
\begin{aligned}
\min_{\mu,\lambda\in\R} \quad &\mu \\
{\rm s.t.}\quad & \lambda^2 \norm{\bar \x}^2 + \mu^2 \norm{\v}^2 \le \norm{\x}^2, \\
& (\lambda -1)^2 \norm{\bar \x}^2 + (\mu -1)^2\norm{\v}^2 \le {\rm dist}^2(\x,S_0), \\
& \norm{\x}^2 +1 - 2\lambda\norm{\bar \x}^2 - 2\mu\norm{\v}^2 = {\rm dist}^2(\x, S_0),\\
& \mu\in [0,1],
\end{aligned}
\end{equation}
where the equality constraint comes from eliminating $\|\u\|^2$ in the first two constraints in \eqref{prob:mu}.
Indeed, for any feasible solution $(\lambda,\mu,\u)$ to \eqref{prob:mu}, $(\lambda,\mu)$ is feasible to \eqref{prob:mu_rd}. Meanwhile, if $(\lambda,\mu)$ is feasible to \eqref{prob:mu_rd}, since $n\ge 3$, one can always find $\u\in\R^n$ satisfying $\|\u\|^2=\|\x\|^2-\big(\lambda^2 \norm{\bar \x}^2 + \mu^2 \norm{\v}^2\big) \ge 0$ and $\inprod{\u}{\bar\x} = 0, \, \inprod{\u}{\v} = 0$ such that
$(\lambda,\mu,\u)$ is a feasible solution to \eqref{prob:mu}.
Consequently, $(\lambda^*,\mu^*,\u^*)$ is an optimal solution to \eqref{prob:mu} if and only if $(\lambda^*,\mu^*)$ is an optimal solution to \eqref{prob:mu_rd}.
Let $\mu^*$ be the optimal value of problem \eqref{prob:mu_rd}. Then, $(\lambda^*,\mu^*)$ with $\lambda^* = \big(1 + \norm{\x}^2 - 2\mu^*\norm{\v}^2 - {\rm dist}^2(\x,S_0)\big)\big/\big(2\norm{\bar\x}^2\big)$ is the unique optimal solution to problem \eqref{prob:mu_rd}.
We consider three cases here:
\begin{itemize}
\item[\rm (i)] $\mu^* \in (0,1)$.
{Note that the first three constraints in \eqref{prob:mu_rd} result in a line segment, denoted by $F$, whose endpoints
are the two intersection points (note that since $\x$ is a feasible solution of \eqref{prob:zy1y0} with $\u={\bf 0}$, the two ellipses must intersect) of the two ellipses:
\[
\left\{
(\lambda, \mu) \mid \lambda^2 \norm{\bar \x}^2 + \mu^2 \norm{\v}^2 = \norm{\x}^2
\right\}\]
and
\[
\left\{(\lambda, \mu) \mid (\lambda -1)^2 \norm{\bar \x}^2 + (\mu -1)^2\norm{\v}^2 = {\rm dist}^2(\x,S_0)
\right\}.
\]
Hence the feasible set to \eqref{prob:mu_rd} can be written as $ F\cap \{(\lambda,\mu) \mid 0\le\mu\le1\}$.
Since $\norm{\bar\x} \neq 0$, the equality constraint in \eqref{prob:mu_rd} implies that $F$ cannot be parallel to the $\lambda$-axis. Now, $\mu^* \in (0,1)$ implies that the optimal solution to \eqref{prob:mu_rd} must be an endpoint of $F$.
This further implies that the first two inequality constraints in \eqref{prob:mu_rd} are active at the optimal solution $(\lambda^*,\mu^*)$.}
Then, {$(\lambda^*,\mu^*, \bf 0)$} is an optimal solution to \eqref{prob:mu}.
Therefore, $\z^*=\lambda^*\bar\x+\mu^*\v$ is an optimal solution to problem \eqref{prob:zy1y0}. The desired result then follows from the same arguments in Case I.
\item [\rm (ii)] $\mu^* = 1$. In this case, we have from \eqref{eq:dist} that
$\norm{\z^* - \Pi_{L}(\z^*)} = {\rm dist}(\x, S_0).$
Hence, we have
\[
\sqrt{1 - \norm{\x}^2} + {\rm dist}(\x, S_1) \ge \sqrt{1 - \norm{\x}^2} + \norm{\z^* - \Pi_{L}(\z^*)} \ge {\rm dist}(\x, S_0) \ge {\rm dist}(\x, S_0)\sin\gamma,
\]
where the first inequality follows from the optimality of $\z^*$ to \eqref{prob:zy1y0} and ${\rm dist}(\x,S_1)=\|\x-\Pi_{S_1}(\x)\|=\|\x-\Pi_L(\x)\|$.
\item[\rm (iii)] $\mu^* = 0$. In this case, $(\lambda^*,0,\u^*)$ with some $\u^*$ satisfying $\|\u^*\|^2=\|\x\|^2- (\lambda^*)^2 \norm{\bar \x}^2 $ and $\inprod{\u^*}{\bar\x} = 0, \, \inprod{\u^*}{\v} = 0$ is an optimal solution to \eqref{prob:mu}.
Then, $\z^* = \lambda^*\bar\x + \u^*$ is an optimal solution to \eqref{prob:zy1y0} and
$\Pi_{L}(\z^*) = \bar\x$. Let $\tilde \z := \z^* + \beta \v = \lambda^*\bar\x + \u^* + \beta\v$ with $\beta \ge 0$ such that
$\norm{\tilde \z} = 1$. Then, we see that
\begin{equation}
\label{eq:caseiii}
\norm{\tilde\z - \bar\x}^2 = (\lambda^* - 1)^2 \norm{\bar\x}^2 + \norm{\u^*}^2 + \beta^2 \norm{\v}^2
= 1 - 2\lambda^*\norm{\bar\x}^2 + \norm{\bar\x}^2 = {\rm dist}^2(\x,S_0),
\end{equation}
where the second equality holds since $\norm{\tilde\z} = 1$ and the third equality follows from the equality constraint in problem \eqref{prob:mu_rd}. Meanwhile, it holds that
\[
1 - \norm{\z^*}^2 = 1 - (\lambda^*)^2\norm{\bar\x}^2 - \norm{\u^*}^2 = \beta^2 \norm{\v}^2
=\norm{\tilde \z - \z^*}^2.
\]
Hence, we have
\begin{align*}
\sqrt{1 - \norm{\x}^2} + {\rm dist}(\x, S_1) \ge{}& \sqrt{1 - \norm{\z^*}^2} + \norm{\z^* - \Pi_{L}(\z^*)} \\
={}& \norm{\tilde \z - \z^*} + \norm{\z^* - \bar\x}
\\
\ge{}& \norm{\tilde \z - \bar\x} \\
\ge{}&{\rm dist}(\x, S_0) \sin \gamma,
\end{align*}
where the first inequality follows from {\eqref{eq:distrelation}}{, and the last inequality follows from \eqref{eq:caseiii}.}
\end{itemize}
\end{enumerate}
We have shown that \eqref{eq:distxx0xx1} holds
and thus completed the proof of Lemma \ref{lemma:xS}.
\end{proof}
{ \begin{figure}
\centering
\begin{tikzpicture}
[
scale=1.5,
point/.style = {draw, circle, fill = black, inner sep = 1pt},
dot/.style = {draw, circle, fill = black, inner sep = .2pt},
line width/.style = 1pt,
]
\begin{scope}
\clip (0,0) rectangle (4,4);
\draw (0,0) circle(4);
\draw (0,0) -- (4,0);
\end{scope}
\node (origin) at (0,0)[point, label = {below right:${\bf 0}$}]{};
\draw (0,0) -- (0,4);
\node (xbar) at (0,2.5) [point, label = {below left:$\bar\x$}]{};
\node (norxbar) at (0,4) [point, label = {below left:$\frac{\bar\x}{\|\bar\x\|}$}]{};
\node (x0) at (3.1225,2.5)[point, label = {below:$\x_0$}]{};
\draw (xbar) -- (x0);
\draw[dashed, blue] (x0) -- (0,4);
\node (zstar) at (1,0.5) [point, label = {below right:${\z^*}$}]{};
\node (ztilde) at (1,3.8730) [point, label = {above right:${\tilde\z}$}]{};
\draw[dashed, blue] (zstar) -- (ztilde);
\draw[dashed, red] (ztilde) -- (x0);
\draw[dashed, red] (zstar) -- (x0);
\node (plzstar) at (1,2.5) [point, label = {above left:${\Pi_{L}(\z^*)}$}]{};
\node (xbarend) at (0,4)[]{};
\node (xbarstart) at (0,1)[]{};
\draw pic["$\gamma$", draw=blue!100, <->, angle eccentricity=0.6, angle radius=0.7cm]
{angle=xbarend--x0--xbar};
\draw pic["$\tilde \theta$", draw=red!100, <->, angle eccentricity=1.1, angle radius=1.2cm]
{angle=ztilde--x0--xbar};
\draw pic["$\theta$", draw=red!100, <->, angle eccentricity=1.3, angle radius=1.5cm]
{angle=xbar--x0--zstar};
\draw (6.5,2) circle (2);
\node (origin2) at (6.5,2)[point, label = {below right:${\bf 0}$}]{};
\draw (6.5,2) -- (6.5,4);
\node (xbar1) at (6.5,3) [point, label = {below left:$\bar\x$}]{};
\node (norxbar1) at (6.5,4) [point, label = {below left:$\frac{\bar\x}{\|\bar\x\|}$}]{};
\node (x01) at (8.2321,3)[point, label = {below right:$\x_0$}]{};
\draw (xbar1) -- (x01);
\draw[dashed, blue] (x01) -- (6.5,4);
\node (zstar1) at (7.5,1) [point, label = {above right:${\z^*}$}]{};
\node (ztilde1) at (7.5,0.2679) [point, label = {below right:${\tilde\z}$}]{};
\node (xbarend1) at (6.5,4)[]{};
\draw[dashed, red] (ztilde1) -- (x01);
\node (plzstar1) at (7.5,3) [point, label = {above left:${\Pi_{L}(\z^*)}$}]{};
\draw[dashed, blue] (plzstar1) -- (ztilde1);
\draw pic["$\gamma$", draw=blue!100, <->, angle eccentricity=0.5, angle radius=0.5cm]
{angle=xbarend1--x01--xbar1};
\draw pic["$\tilde \theta$", draw=red!100, <->, angle eccentricity=1.3, angle radius=0.9cm]
{angle=xbar1--x01--ztilde1};
\end{tikzpicture}
\caption{Illustration of two possible scenarios of the positions of $\tilde\z$. In either case, it holds that $\tilde\theta \ge \gamma$.
{In the first case, it also holds that $\cos\theta\le\cos\tilde \theta$ (or equivalently, $\sin\theta\ge\sin\tilde\theta$).}}
\label{fig:angles}
\end{figure}}
Note that Lemma \ref{lem:EBnonneg} provides an error bound inequality involving ${\rm dist}(\x,S_1)$ for the convex problem $({\rm P_1})$, and Lemma \ref{lemma:xS} connects ${\rm dist}(\x,S_1)$ and ${\rm dist}(\x,S_0)$. Using these results, we obtain in the following lemma the desired error bound result in the hard case 2 (ii).
\begin{lem}
\label{lem:h2ii}
In the hard case 2 (ii), there exists a constant $\tau>0$ such that for all $\x\in \R^n$,
\begin{equation*}
\dt(\x,S_0)\le \tau\big(f(\x)-f^*+\delta_{B}(\x)\big)^{1/2}.
\end{equation*}
\end{lem}
\begin{proof}
Recall that $\tilde f(\x):=f(\x)-\lambda_1(\x^T\x-1)$ is convex. Note that in the hard case 2 (ii), it holds that $\min \tilde f(\x) = f^*$ and $\|\bar\x\|<1$. Then, by Lemma \ref{lem:EBnonneg} item 3, we know that there exists a constant $\tau_1 > 0$ such that
\begin{equation}
\label{eq:xS1}
\dt(\x,S_1)\le \tau_1\big(\tilde f(\x)-f^*+\delta_B(\x)\big)^{1/2}, \quad \forall\, \x\in\R^n.
\end{equation}
Without loss of generality, one can assume that $\tau_1 \ge 1/\sqrt{-\lambda_1}$, i.e., $2\tau_1^2 \lambda_1 + 2 \le 0$.
By Lemma \ref{lemma:xS}, we know that
\[
{\rm dist}(\x,S_0)\sin\gamma \le {\rm dist}(\x,S_1) + \sqrt{1 - \norm{\x}^2}, \quad \forall \norm{\x} \le 1,
\]
where $\sin\gamma = \frac{1-\|\bar\x\|}{ \sqrt{(1 - \norm{\bar\x})^2 + 1-\|\bar\x\|^2}}$.
Together with \eqref{eq:xS1}, this implies that for all $\norm{\x}\le 1$,
\begin{align*}
{\rm dist}^2(\x,S_0)\sin^2\gamma \le {}& 2 {\rm dist}^2(\x,S_1) + 2(1 -\norm{\x}^2) \\
\le {}& 2\tau_1^2(\tilde f(\x) - f^* + \delta_{B}(\x)) + 2(1 - \norm{\x}^2)\\
= {}& 2\tau_1^2(f(\x) - f^*+\delta_{B}(\x)) + (2 + 2\tau_1^2\lambda_1)(1 - \norm{\x}^2) \\
\le {}& 2\tau_1^2(f(\x) - f^*+\delta_{B}(\x)),
\end{align*}
where the last inequality holds since $2 + 2\tau_1^2\lambda_1\le 0$.
Thus, we know that
\[
{\rm dist}(\x,S_0) \le \frac{\sqrt{2}\tau_1}{\sin \gamma}\big(f(\x) - f^* + \delta_B(\x)\big)^{1/2}, \quad \forall\, \x\in \R^n.
\]
This completes the proof of the lemma.
\end{proof}
Now, with Lemmas \ref{lem:EBnonneg}, \ref{lem:ezh1}, \ref{lem:h2i} and \ref{lem:h2ii}, we can summarize the situations in which the H\"olderian error bound modulus is $1/4$ as the following \emph{TRS-ill case}:
\begin{equation}
\label{eq:con}
\lambda_1\le 0,~\b\in\ra(\A-\lambda_1\I) \text{ and } \norm{(\A-\lambda_1\I)^\dagger \b} = 1.
\end{equation}
After all these preparations, we arrive at the following theorem.
\begin{thm}
\label{thm:main}
For the trust region subproblem $({\rm P_0})$, there exists a constant $\tau_{EB} >0$ such that
\[\dt(\x,S_0)\le\tau_{EB} \big(f(\x)-f^*+\delta_{B}(\x)\big)_+^\rho,~\forall \, \x\in\R^n,\]
where
$\rho=\left\{
\begin{aligned}
& 1/4, \mbox{ for the TRS-ill case \eqref{eq:con}}, \\[5pt]
& 1/2, \mbox{ otherwise.}
\end{aligned}
\right.$
\end{thm}
Theorem \ref{thm:main} in fact shows that the H\"olderian error bound always holds with modulus $1/4$ in all cases, since the function value $f(\x)$ is bounded on the unit ball and the inequality $t^{1/2}\le t^{1/4}$ holds for all $t\in(0,1)$.
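The distinction between the two moduli in Theorem \ref{thm:main} can also be observed numerically. The following Python sketch (the instance is constructed by us purely for illustration) builds a hard case 2 (i) instance, which belongs to the TRS-ill case \eqref{eq:con}, and evaluates the ratios ${\rm dist}(\x,S_0)/(f(\x)-f^*)^{\rho}$ along boundary points approaching the optimal solution: the ratio stays bounded for $\rho=1/4$ but diverges for $\rho=1/2$.
\begin{verbatim}
import numpy as np

# A hard case 2 (i) instance: lambda_1 = -1, b is orthogonal to
# Null(A - lambda_1 I) = span(e_1), and ||(A - lambda_1 I)^dagger b|| = 1.
A = np.diag([-1.0, 1.0])
b = np.array([0.0, 2.0])
x_star = np.array([0.0, 1.0])                    # unique optimal solution
f_star = x_star @ A @ x_star - 2.0 * b @ x_star  # f* = -3

for theta in [0.3, 0.1, 0.03, 0.01]:
    x = np.array([np.sin(theta), np.cos(theta)]) # boundary points approaching x*
    gap = x @ A @ x - 2.0 * b @ x - f_star       # f(x) - f*
    dist = np.linalg.norm(x - x_star)
    print(theta, dist / gap**0.25, dist / gap**0.5)
# dist/gap^{1/4} stays bounded while dist/gap^{1/2} grows like 1/theta,
# consistent with the modulus rho = 1/4 in the TRS-ill case.
\end{verbatim}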
\section{KL inequality for the TRS}
In this section, based on the previously developed error bound results, we derive the KL inequality associated with the TRS.
To prove the KL inequality, let us first recall part of the results from \cite{bolte2017error} that essentially state that in the \emph{convex} setting, H\"olderian error bounds imply the KL inequality.
In fact, the equivalence between these two concepts for \emph{convex} problems is also obtained in \cite{bolte2017error}.
\begin{lem}[Corollary 6 (ii) in \cite{bolte2017error}]
\label{lem:eb2kl}
Let H be a real Hilbert space. Set
\[\mathcal{K}(0,+\infty)=\{\varphi\in C^0[0,+\infty)\cap C^1(0,+\infty),\varphi(0)=0,\varphi\text{ is concave and }\varphi'>0\}.\]
Let $h:H\rightarrow(-\infty, +\infty]$ be a proper, convex and lower-semicontinuous function with $\min h=0$. Let $\varphi\in \mathcal{K}(0,+\infty)$, $c>0$, and $S=\{x\in H\mid h(x)=\min h\}$.
If $s\varphi'(s)\ge c\varphi(s)$ for all $s>0$ and $\varphi(h(x))\ge\dt(x,S)$ for all $x\in \left\{x\in H\mid h(x) >0 \right\}$, then $\varphi'(h(x)){\rm dist}(0,\partial h(x))\ge c$ for all $x\in \left\{x\in H\mid h(x) >0 \right\}$.
\end{lem}
{Recall that we denote by $f^*$ the optimal value of the TRS $\rm(P_0)$.} With Lemma \ref{lem:eb2kl}, we are ready to prove the KL inequality for the convex TRS. As one can observe from Theorem \ref{thm:kl-convex}, the KL inequality for the convex TRS holds globally.
\begin{thm}
\label{thm:kl-convex}
For the TRS $({\rm P}_0)$ with $\lambda_1 \ge 0$, there exists some constant $\tau >0$ such that the KL inequality holds
\begin{equation}
\label{eq:KL-convex}
\big(f(\x)-f^*+\delta_{B}(\x)\big)^{1-\rho}\le\tau {\rm dist}(-\nabla f(\x), N_B(\x)), \quad \forall\, \x \in \R^n,
\end{equation}
where $\rho = \frac{1}{4}$ if $\lambda_1=0$, $\b\in{\rm Range}(\A)$ and $\norm{\A^{\dagger}\b} = 1$; otherwise $\rho = \frac{1}{2}$.
\end{thm}
\begin{proof}
By Theorem \ref{thm:main}, we know that there exists a constant $\tau >0$ such that for all $\x \in \R^n$
\begin{equation}\label{eq:ebcvx}
\dt(\x,S_0)\le \tau \big(f(\x)-f^*+\delta_{B}(\x)\big)^{\rho}
\end{equation}
with $\rho = \frac{1}{4}$ if $\lambda_1=0$, $\b\in{\rm Range}(\A)$ and $\norm{\A^{\dagger}\b} = 1$; and $\rho = \frac{1}{2}$ otherwise.
Now define $\varphi(s) = \tau s^\rho$ for all $s\ge 0$. Obviously, $\varphi\in {\cal K}(0,+\infty)$ and we also note that
\[
s\varphi'(s) = \tau s \rho s^{\rho - 1} = \tau \rho s^{\rho} = \rho \varphi(s), \quad \forall\, s> 0.
\]
Hence, by \eqref{eq:ebcvx}, Lemma \ref{lem:eb2kl} and the fact that $\dt(0,\partial [f(\x)+\delta_B(\x)])=\dt(-\nabla f(\x),N_B(\x))$, we have that for all $\x\not\in S_0$,
\[
\tau \rho (f(\x)-f^*+\delta_{B}(\x))^{\rho-1} {\rm dist}(-\nabla f(\x), N_B(\x)) \ge \rho.
\]
Thus, by noting that \eqref{eq:KL-convex} holds trivially if $\x\in S_0$, we complete the proof for the theorem.
\end{proof}
{Next, we consider the nonconvex case.
In this case, the analysis is more complicated.
We note that in \cite{drusvyatskiy2014second}, the authors show that, under a prox-regularity assumption \cite{poliquin2014generalized}, for extended-real-valued lower semicontinuous functions the error bound condition with modulus $1/2$ (coined the quadratic growth condition in \cite{drusvyatskiy2014second}) implies the metric subregularity property \cite{dontchev2009implicit}. However, for the TRS, the relations between the metric subregularity property and the KL inequality remain largely unknown. Hence, the results in \cite{drusvyatskiy2014second} cannot be directly used here.
In the following, we give a comprehensive analysis of the KL inequality for the nonconvex TRS.}
For ease of reading, we first provide a roadmap of the analysis. Apart from some general discussions and computations, our proof can be divided into three main steps. Specifically,
in the first step, we establish the KL inequality for testing points chosen from the intersection of the interior of the unit norm ball and a neighborhood of the given optimal reference point.
Then, we restrict our attention to testing points chosen from the boundary of the unit norm ball. We discuss the easy case and hard case 1 in the second step. The analysis for the hard case 2 is presented in the last step, where certain parts of the proofs are deferred to the Appendix.
We begin our analysis with some general discussions.
Let $\x^*$ be an optimal solution to the TRS $({\rm P}_0)$ with $\lambda_1 < 0$. We know from the KKT condition in Section \ref{sec:pre} that there exists $\lambda^* \ge 0$ such that
\begin{equation}
\label{eq:optnonconvex}
\nabla f(\x^*) + 2\lambda^* \x^* = 2\A\x^*-2\b+ 2\lambda^*\x^*= {\bf 0}, \, \lambda^* \ge -\lambda_1 > 0, \, \lambda^*(1 - \norm{\x^*}^2) = 0.
\end{equation}
Thus, $\norm{\x^*} = 1$ and $\norm{\nabla f(\x^*)} = 2\lambda^*$.
For any given $\x\in\R^n$, since
$$\|\nabla f(\x)-\nabla f(\x^{*})\|=\| 2\A(\x-\x^*)\|\le 2\|\A\|_2\|\x-\x^*\|,$$ it holds that
\begin{equation}
\label{eq:gradf}
\|\nabla f(\x)\|\ge\|\nabla f(\x^{*})\|-2\|\A\|_2\|\x-\x^*\|.
\end{equation}
Meanwhile since $f(\x)$ is a quadratic function, we have
\begin{equation}
\label{eq:valf}
f(\x)-f(\x^{*})= \inprod{\nabla f(\x^*)}{\x-\x^*}+ \inprod{\x - \x^*}{\A(\x - \x^*)}.
\end{equation}
Now, for any $\x$ satisfying $\|\x-\x^{*}\|\le\frac{\lambda^*}{\|\A\|_2+\lambda^*}$ and $\norm{\x}\le 1$, we know from \eqref{eq:gradf} and \eqref{eq:valf} that
\begin{equation*}
\left\{
\begin{aligned}
&\norm{\nabla f(\x)} \ge 2\lambda^* - \frac{2\lambda^*\norm{\A}_2}{\|\A\|_2+\lambda^*} = \frac{2(\lambda^*)^2}{\|\A\|_2+\lambda^*} = \frac{\lambda^*}{\|\A\|_2+\lambda^*} \norm{\nabla f(\x^*)}, \\
& f(\x)-f(\x^{*})\le \frac{2(\lambda^*)^2}{\|\A\|_2+\lambda^*}
+ \frac{\norm{\A}_2 (\lambda^*)^2}{(\|\A\|_2+\lambda^*)^2}
\le 2\lambda^* = \norm{\nabla f(\x^*)},
\end{aligned}
\right.
\end{equation*}
and consequently,
\begin{equation}
\label{eq:finteiror}
\big(f(\x) - f(\x^*)\big)^{\frac{1}{2}} \le \|\nabla f(\x^*)\|^{1/2} \le \frac{\|\A\|_2+\lambda^*}{\sqrt2(\lambda^*)^{3/2}}\|\nabla f(\x)\|.
\end{equation}
{Then we have the following results associated with the case where the testing points are chosen from the intersection of the interior of the unit norm ball and a neighborhood of the given optimal reference point $\x^*.$}
\begin{lem}
\label{lem:interiorKL}
{Suppose $\lambda_1 <0$}. Then there exist some constant $\tau >0$ and sufficiently small $\epsilon >0$ such that
\begin{equation*}
\label{eq:interiorKL}
\big(f(\x)-f^*+\delta_{B}(\x)\big)^{1/2}\le\tau{\rm dist}(-\nabla f(\x), N_B(\x)), \quad \forall\, \x\in B(\x^*,\epsilon)\cap \left\{\x\in\R^n \mid \norm{\x} < 1 \right\}.
\end{equation*}
\end{lem}
\begin{proof}
Note that for all $\x$ satisfying $\norm{\x} < 1$, it holds that $N_B(\x) = \{{\bf 0}\}$ and ${\rm dist}(-\nabla f(\x),N_B(\x)) = \norm{\nabla f(\x)}$. Hence, we know from \eqref{eq:finteiror} that
\begin{equation*}
\label{eq:KLnormxs1}
\big(f(\x)-f^*+\delta_{B}(\x)\big)^{1/2} \le \frac{\norm{\A}_2 + \lambda^*}{\sqrt2(\lambda^*)^{3/2}} {\rm dist}(-\nabla f(\x), N_B(\x)),
\end{equation*}
whenever
$\|\x-\x^{*}\|\le\frac{\lambda^*}{\|\A\|_2+\lambda^*}$ and $\norm{\x}< 1$.
\end{proof}
In the subsequent discussions, we will focus on the boundary of the unit norm ball, denoted by ${\bf bd}(B):=\left\{\x\in\R^n\mid \norm{\x} = 1 \right\}$. For all $\x\in {\bf bd}(B)$, it {follows from \eqref{eq:valf} } that
\begin{equation}\label{eq:fmfstar}
f(\x) - f(\x^*) = -\inprod{2\lambda^* \x^*}{\x - \x^*} + \inprod{\x - \x^*}{\A(\x - \x^*)} = \inprod{\x - \x^*}{(\A + \lambda^*\I) (\x - \x^*)},
\end{equation}
{where the second equality holds since $\|\x\|=\|\x^*\|=1.$}
Meanwhile, for any given $\x\in {\bf bd}(B)$, we have $N_B(\x) = \{\nu\x \in \R^n \mid \nu \ge 0\}$
and thus
\begin{equation}
\label{eq:distnorm1}
{\rm dist}(-\nabla f(\x), N_B(\x)) = \min_{\nu \ge 0} \norm{ 2(\A\x - \b) + \nu\x}.
\end{equation}
Let $\nu(\x)$ denote the optimal solution to problem \eqref{eq:distnorm1}. It can be verified that $\nu(\x) = \max\{\inprod{-2(\A\x - \b)}{\x},0\}$. Moreover, it holds that
\begin{equation}
\label{eq:nux}
\nu(\x) \to \nu(\x^*) =
\inprod{-2(\A\x^* -\b)}{\x^*} = 2\lambda^* >0 \, \mbox{ as } {\bf bd}(B) \ni \x\to \x^*.
\end{equation}
Hence, we know that there exists a constant $\epsilon_0 \in (0,1)$ such that
\begin{equation}
\label{eq:eps0}
\nu(\x) = \inprod{-2(\A\x - \b)}{\x} >0, \quad \forall\, \x \in {\bf bd}(B) \cap \left\{\x\in\R^n\mid \norm{\x -\x^*}\le \epsilon_0 \right\}.
\end{equation}
Now, we are ready to prove the desired KL inequality for the easy case and hard case 1.
\begin{lem}[KL inequality for the easy case and hard case 1]
\label{lem:KLeasyhard1}
Suppose $\lambda_{1} <0$. In the easy case and hard case 1, there exist some constant $\tau >0$ and sufficiently small $\epsilon >0$ such that
\begin{equation}
\label{eq:easkyhard1}
\big(f(\x)-f^*+\delta_{B}(\x)\big)^{1/2}\le\tau{\rm dist}(-\nabla f(\x), N_B(\x)), \quad \forall\, \x\in B(\x^*,\epsilon).
\end{equation}
\end{lem}
\begin{proof}
Note that in the easy case and hard case 1, it holds that
$\lambda^* > -\lambda_1 >0$ and $\norm{(\A - \lambda_1\I)^{\dagger}\b}\neq 1$.
Since $\lambda^* > -\lambda_1 >0$, from the discussion of \eqref{eq:nux}, we know that there exists a positive constant $\epsilon_1 \le \epsilon_0$ such that
$\nu(\x) + 2\lambda_1 > 0$ whenever $\norm{\x} = 1$ and $\norm{\x - \x^*}\le \epsilon_1$. Recall the objective function $\tilde f$ in the convex problem (${\rm P}_1$), i.e., $\tilde f(\x):= \x^T(\A-\lambda_1\I)\x-2\b^T\x+\lambda_1$.
We note for all $\x$ satisfying $\norm{\x} = 1$ and $\norm{\x - \x^*}\le \epsilon_1$ that
\begin{align*}
{\rm dist}(-\nabla \tilde f(\x), N_B(\x)) ={}& \min_{\mu\ge 0} \|2(\A-\lambda_{1}\I)\x-2\b+\mu\x\| \\
={}& \|2(\A-\lambda_{1}\I)\x-2\b+(-\inprod{2 (\A-\lambda_1 \I)\x - 2 \b}{\x})\x\| \\
={}& \|2(\A-\lambda_{1}\I)\x-2\b+(\nu(\x) + 2\lambda_1)\x\| \\
={}& \norm{2(\A\x - \b) + \nu(\x)\x} \\
={}&{\rm dist}(-\nabla f(\x), N_B(\x)),
\end{align*}
{where the second equality is due to $-\inprod{2 (\A-\lambda_1 \I)\x - 2 \b}{\x}=\nu(\x)+2\lambda_1>0.$}
Note that $\tilde f(\x) = f(\x)$ for all $\x \in {\bf bd}(B)$, and recall from Lemma \ref{lemma:P1} that $f^*$, the optimal value of the TRS $({\rm P}_0)$, is also the optimal value of the convex problem $({\rm P}_1)$.
Since $\norm{(\A - \lambda_1\I)^{\dagger}\b}\neq 1$, we obtain by Theorem \ref{thm:kl-convex} that there exists a positive constant $\tau_1$ such that for all $\x\in{\bf bd}(B)$ with $\norm{\x - \x^*}\le \epsilon_1$,
\begin{equation*}
\label{eq:easyKL}
\begin{aligned}
\big(f(\x)-f^*+\delta_{B}(\x)\big)^{\frac{1}{2}} ={}& \big(\tilde f(\x)-f^*+\delta_{B}(\x)\big)^{\frac{1}{2}} \\[2pt]
\le{}&\tau_1 {\rm dist}(-\nabla \tilde f(\x), N_B(\x)) = \tau_1{\rm dist}(-\nabla f(\x), N_B(\x)).
\end{aligned}
\end{equation*}
{This, together with Lemma \ref{lem:interiorKL}, implies that \eqref{eq:easkyhard1} holds with some positive constant $\epsilon$ and $\tau$.}
\end{proof}
Now, we are ready to present the analysis for the hard case 2. {Recall that $\lambda^*=-\lambda_1$ in the hard case.} First, we discuss the case where $\A - \lambda_1 \I ={\bf 0}$.
In this case, we know from the optimality condition \eqref{eq:optnonconvex} that $\b= {\bf 0}$ and thus $f(\x^*) = f(\x) =\lambda_1 \x^T\x = \lambda_1$ whenever $\norm{\x} = 1$. This, together with Lemma \ref{lem:interiorKL}, implies that the KL inequality holds with the exponent $1/2$.
Note that this case belongs to the hard case 2 (ii) as $\b\perp\Null(\A-\lambda_1\I)$ and $\|(\A-\lambda_1\I)^\dagger\b\|=0<1$.
Hence, in the subsequent analysis, we focus on the nontrivial case where $\A - \lambda_1\I \neq {\bf 0}$.
Recall the spectral decomposition of $\A$ as $\A = \P{\mathbf \Lambda} \P^T$ in Section \ref{sec:pre}. Then, $\A - \lambda_1 \I = \P ({\mathbf \Lambda} - \lambda_1\I) \P^T$ is the spectral decomposition of $\A -\lambda_1\I$. Denote the diagonal matrix $\D = {\mathbf \Lambda} - \lambda_1\I = {\rm Diag}(\d)$.
Since $\A - \lambda_1 \I \neq {\bf 0}$, there exists some integer $1 \le K <n$ such that
\begin{equation}
\label{eq:kD}
\left\{\begin{array}{llll}
\d_i &=&0 &\mbox{ for } i=1,\ldots,K,\\
\d_i &=& \lambda_i -\lambda_1 &\mbox{ for } i=K+1,\ldots, n,
\end{array}\right.
\end{equation}
and $\d_n \ge \cdots \ge \d_{K+1} > 0$.
{
For all $\x\in {\bf bd}(B)$, we define
\begin{equation}\label{eq:defalbet}
\alpha(\x) :=(\x^*)^T(\A-\lambda_1\I)(\x-\x^*) \mbox{ and }
\beta(\x) :=(\x-\x^*)^T(\A-\lambda_1\I)(\x-\x^*).
\end{equation}
For simplicity in the following discussion, we sometimes suppress the dependence on $\x$ in our notation. For example, we often write $\alpha(\x)$ and $\beta(\x)$ as $\alpha$ and $\beta$, respectively. However, this should not cause any confusion because the reference vector $\x$ will always be clear from the context.}
Let $\s=\P^T\x^*$ and $\z=\P^T(\x-\x^*)$ and we verify from \eqref{eq:defalbet} that
\begin{equation}\label{eq:defabP}
\left\{
\begin{array}{lllll}
\alpha &=&(\x^*)^{T}(\A-\lambda_1\I)(\x-\x^*) &=& \inprod{\s}{\D \z},\\
\beta &=&(\x-\x^*)^T(\A-\lambda_1\I)(\x-\x^*)\overset{\eqref{eq:fmfstar}}=f(\x)-f^* &=&\inprod{\z}{\D\z}. \\
\end{array}
\right.
\end{equation}
Recall the definition of $\epsilon_0$ in \eqref{eq:eps0}. It holds that for any $\x\in {\bf bd}(B)$ with $\norm{\x - \x^*}\le \epsilon_0$,
\begin{equation}
\label{eq:klpf1}
\begin{array}{ll}
&{\rm dist}^2(-\nabla f(\x), N_B(\x))\\
=&\|2(\A\x-\b)-2\lambda_1\x+(2\lambda_1+\nu(\x))\x\|^2\\
=&\|2(\A-\lambda_1\I)(\x-\x^*)+2\left[(\x^*)^T(\A\x^*-\b)-\x^T(\A\x-\b)\right]\x\|^2\\
=&\|2(\A-\lambda_1\I)(\x-\x^*)+2\left[f(\x^*)-f(\x)+(\x^*-\x)^T(\A-\lambda_1\I)\x^*\right]\x\|^2\\
=&\|2(\A-\lambda_1\I)(\x-\x^*)-2(\beta+\alpha)\x\|^2\\
=&\|2(\A-\lambda_1\I)(\x-\x^*)\|^2-8\x^T(\A-\lambda_1\I)(\x-\x^*)(\beta+\alpha)+\norm{2\beta\x+2\alpha\x}^2\\
=&\|2(\A-\lambda_1\I)(\x-\x^*)\|^2-4\left(\beta+\alpha\right)^2\\
=&4(\norm{\D\z}^2 - (\alpha + \beta)^2)
\end{array}
\end{equation}
where the second equality follows from the fact that $2\A\x^*-2\b-2\lambda_1\x^*={\bf 0}$ {(which is \eqref{eq:optnonconvex} with $\lambda^*=-\lambda_1$)} and the definition of $\nu(\x)$,
the third equality follows from $2\A\x^*-2\b-2\lambda_1\x^*={\bf 0}$ and \eqref{eq:fmfstar},
the fourth equality holds due to \eqref{eq:defalbet}, the fifth equality holds since $\x^T(\A-\lambda_1\I)(\x-\x^*)=\alpha+\beta$ and $\norm{\x} = 1$, and the last equality holds as $\z = \P^T(\x - \x^*)$.
Define function $H:{\bf bd}(B) \to \R$ as:
\begin{equation}\label{eq:defH}
H(\x):=\|(\A\x-\b)-\lambda_1\x+(\lambda_1+\nu(\x)/2)\x\|^2\overset{\eqref{eq:klpf1}}=\|\D\z-(\alpha+\beta)(\s+\z)\|^2=\|\D\z\|^2-(\alpha+\beta)^2,
\end{equation}
and thus we have
\begin{equation}\label{eq:distandH}
{\rm dist}^2(-\nabla f(\x), N_B(\x))=4H(\x), \quad \forall\, \x \in {\bf bd}(B)\cap \left\{\x\in\R^n\mid \norm{\x - \x^*}\le \epsilon_0 \right\}.
\end{equation}
In the next lemma, we show that the KL inequality holds with an exponent $1/2$ for the hard case 2 (ii).
\begin{lem}[KL inequality for the hard case 2 (ii)]
\label{lem:hard2ii}
Suppose that $\lambda_1 <0$. In the hard case 2 (ii), there exist some constant $\tau >0$ and sufficiently small $\epsilon >0$ such that
\begin{equation*}
\label{eq:hardKL}
\big(f(\x)-f^*+\delta_{B}(\x)\big)^{1/2}\le\tau{\rm dist}(-\nabla f(\x), N_B(\x)), \quad \forall\, \x\in B(\x^*,\epsilon).
\end{equation*}
\end{lem}
\begin{proof}
We have discussed above that the result holds in the case $\A - \lambda_1\I ={\bf 0}$. In the remainder, we consider the case where $\A - \lambda_1\I \neq {\bf 0}$.
In the hard case 2 (ii), since $\x^* \not\perp {\rm Null}(\A - \lambda_1 \I)$, i.e., $\s \not\perp {\rm Null}(\D)$, we know that
$
\sum_{j=1}^{K} \s_j^2>0,
$
and
from Lemma \ref{lem:funcPsi} in the Appendix that there exists $0 < \epsilon' < \epsilon_0$ such that for any $\x \in {\bf bd}(B)$ with $\norm{\x - \x^*}\le \epsilon'$,
\begin{equation*}\label{eq:Hhc2ii}
H(\x) \ge \frac{1}{2} \d_{K+1}(\sum_{j=1}^{K} \s_j^2) \beta,
\end{equation*}
i.e.,
\[
{\rm dist}^2(-\nabla f(\x), N_B(\x)) = 4H(\x) \ge 2\d_{K+1} (\sum_{j=1}^{K} \s_j^2) (f(\x) -f^*).
\]
This, together with Lemma \ref{lem:interiorKL}, completes the proof.
\end{proof}
Before discussing the general results for the hard case 2 (i), we state in the following lemma that in the hard case 2 (i), if the test point $\x$ is restricted to the subspace ${\rm Range}(\A - \lambda_1\I)$, the desired KL inequality can be derived by combining Lemma \ref{lem:KLeasyhard1} with a reduction strategy. To avoid tedious reduction arguments, we present the proof of the lemma in the Appendix.
\begin{lem}
\label{lem:KLld}
Suppose $\lambda_1 <0$. In the hard case 2 (i), there exist some constant $\tau >0$ and sufficiently small $\epsilon >0$ such that
\begin{equation*}
\label{eq:easkyhard2ild}
\big(f(\x)-f^*+\delta_{B}(\x)\big)^{1/2}\le\tau {\rm dist}(-\nabla f(\x), N_B(\x)), \quad \forall\, \x\in B(\x^*,\epsilon)\cap {\rm Range}(\A - \lambda_1 \I).
\end{equation*}
\end{lem}
After these preparations, we are ready to show in the next lemma that the KL inequality holds with the exponent $3/4$ for the hard case 2 (i), which is a subcase of the TRS-ill case \eqref{eq:con}.
\begin{lem}[KL inequality for the hard case 2 (i)]
\label{lem:hard2i}
Suppose that $\lambda_1 <0$. In the hard case 2 (i), there exist some constant $\tau >0$ and sufficiently small $\epsilon >0$ such that
\begin{equation}
\label{eq:hardKL2i}
\big(f(\x)-f^*+\delta_{B}(\x)\big)^{3/4}\le\tau{\rm dist}(-\nabla f(\x), N_B(\x)), \quad \forall\, \x\in B(\x^*,\epsilon).
\end{equation}
\end{lem}
\begin{proof}
Note that in the hard case 2 (i), it holds that
$\x^*\perp {\rm Null}(\A - \lambda_1 \I)$ (or equivalently, $\x^*\in {\rm Range}(\A - \lambda_1 \I)$ or $\s\in\ra(\D)$). Recall the definition of $\epsilon_0$ in \eqref{eq:eps0} and denote $\epsilon_1 : = \min \left\{
\epsilon_0, 1/2 \right\}$.
For any $\x\in{\bf bd}(B)$ with
$\norm{\x - \x^*}\le \epsilon_1$, we know from $\z = \P^T(\x-\x^*)$ that $\gamma:=\|\z\|^2 \le \epsilon_1^2 \le 1/4$.
Recall the definitions of $\alpha$ and $\beta$ in \eqref{eq:defalbet}.
In the following, we consider two cases, namely {\bf Case I:} $|\alpha+\beta|\le\frac{1}{2}\sqrt{\d_{K+1}}\beta^{1/2}$ and {\bf Case II:} $|\alpha+\beta|>\frac{1}{2}\sqrt{\d_{K+1}}\beta^{1/2}$:
{\bf Case I.} If $|\alpha+\beta|\le\frac{1}{2}\sqrt{\d_{K+1}}\beta^{1/2}$, from \eqref{eq:kD}, \eqref{eq:defabP} and \eqref{eq:defH}, we have
\begin{equation}\label{eq:hc2ii1}
H(\x)=\|\D \z\|^{2}-(\alpha+\beta)^2
\ge\d_{K+1}\beta-\frac{1}{4}\d_{K+1}\beta = \frac{3}{4}\d_{K+1}\beta, \quad \forall\, \x \in {\bf bd}(B) \mbox{ with } \norm{\x - \x^*}\le \epsilon_1.
\end{equation}
{\bf Case II.} Now suppose $|\alpha+\beta|>\frac{1}{2}\sqrt{\d_{K+1}}\beta^{1/2}$. Note that $\alpha=\inprod{\s}{\D\z}\to0$ and $\beta=\inprod{\z}{\D\z}\to0$ as $\z\to {\bf 0}$, or equivalently, $\x \to \x^*$.
Then, by reducing $\epsilon_0$ if necessary so that $\frac{1}{2}\sqrt{\d_{K+1}}\beta^{1/2}\ge 2\beta$, we have
$|\alpha|\ge|\alpha+\beta|-\beta\ge\frac{1}{2}|\alpha+\beta|\ge\beta$ and thus
\begin{equation}
\label{eq:algebe}
|\alpha|\ge\frac{1}{4}\sqrt{\d_{K+1}}\beta^{1/2} \quad ~~~~\forall~ \x\in{\bf bd}(B) \mbox{ with }\norm{\x - \x^*}\le \epsilon_1.
\end{equation}
Then if $\alpha\le0$, we know that for any $\x\in{\bf bd}(B)$ with
$\norm{\x - \x^*}\le \epsilon_1$,
\begin{equation}\label{eq:alphage0}
H(\x)=\|\D\z\|^2-\alpha^2-2\alpha\beta-\beta^2\ge 0-2\alpha\beta+\alpha\beta=-\alpha\beta\ge \frac{1}{4}\sqrt{\d_{K+1}}\beta^{3/2},
\end{equation}
where the first inequality holds as $\|\D\z\|^2-\alpha^2\ge(\s^T \D\z)^2-\alpha^2=0$ and $|\alpha|\ge\beta$.
Next consider the case with $\alpha>0$.
Recall that in the hard case 2 (i), we have $\s\in\ra(\D)$.
Hence, by reducing $\epsilon_0$ if necessary, we know from Lemma \ref{lem:KLld}, \eqref{eq:defabP}, \eqref{eq:defH} and \eqref{eq:distandH} that there exists some constant $c_1 \in (0,1)$ such that for any $\x \in{\bf bd}(B)\cap {\rm Range}(\A - \lambda_1 \I)$ with $\norm{\x - \x^*}\le \epsilon_1$,
\begin{equation}
\label{eq:KLrange}
\frac{1}{4}{\rm dist}^2(-\nabla f(\x), N_B(\x))= H(\x) = \norm{\D \z - (\alpha + \beta)(\s+\z)}^2 \ge c_1^2 \d_{K+1}\beta.
\end{equation}
For any given $\x \in {\bf bd}(B)$ satisfying $\norm{\x - \x^*}\le \epsilon_1$ {and $\x\notin{\rm Range}(\A - \lambda_1 \I)$}, consider the orthogonal decomposition of $\z=\z_1+\z_2$ such that $\z_1\in\ra(\D)$ and $\z_2\in\Null(\D)$. {We note that {$\z_1,\z_2\neq\bf0$} since $\x \notin{\rm Range}(\A - \lambda_1\I)$, $\x^*\in{\rm Range}(\A - \lambda_1 \I)$ and $\alpha >0$.} Let \begin{equation}\label{eq:defc}
c=\frac{\d_{K+1} c_1 }{16\sqrt{\d_n^2+\d_{K+1}^2}}<\frac{c_1}{16}<\frac{1}{16},
\end{equation} where $c_1$ is the constant in \eqref{eq:KLrange}.
We consider two subcases:
\begin{enumerate}
\item
Suppose that $\|\z_2\|<c\sqrt\gamma$.
Then, we have
\begin{equation}\label{eq:betage}
\beta = \inprod{\D\z}{\z} = \inprod{\D\z_1}{\z_1}\ge\d_{K+1}\|\z_1\|^2\ge\d_{K+1}(1-c^{2})\gamma\ge\frac{\d_{K+1}}{4}\gamma.
\end{equation}
In our proof, the dimension of subspace ${\rm Range}(\D)$ needs to be taken into consideration. Specifically, we use different strategies to handle cases with $\dim(\ra(\D))\le2$ and $\dim(\ra(\D))\ge3$ (which implicitly requires $n\ge 4$). The case with $\dim(\ra(\D))\le2$ is less insightful and its proof is presented in Appendix \ref{app:dim2}.
\medskip
Now we focus on the case with $\dim(\ra(\D))\ge3$.
Let $\tilde \z=\z_1+\z_3$ with $\z_3\in\ra(\D)$, $\z_3\perp\s$, $\z_3\perp \z_1$, $\|\z_3\|=\|\z_2\|$ and $\langle \z_3,\D\z_1\rangle\le0$. Note that $\z_3$, and consequently $\tilde\z$, is well-defined because $\dim(\ra(\D))\ge3$.
From $\tilde \z$, we can further construct $\tilde \x$, and the corresponding $\tilde \alpha$ and $\tilde \beta$ as in \eqref{eq:defabP}, in the following way:
\[
\tilde \x = \x^* + \P \tilde\z, \quad
\tilde\alpha=\s^T\D\tilde\z \quad \mbox{ and } \quad \tilde\beta=\tilde \z^T\D\tilde\z.
\]
Since $\norm{\s} = 1$, $\D\z_2=0$, $\langle \z_3,\D\z_1\rangle\le0$ and $\|\z_3\|=\|\z_2\|$,
we have
\begin{equation}\label{eq:alphatil}
|\tilde\alpha-\alpha|=|\s^T\D(\tilde\z-\z)|=|\s^T\D(\z_3-\z_2)|=|\s^T\D\z_3|\le\norm{\s}\norm{\D \z_3}\le \d_n\|\z_2\|,
\end{equation}
and
\begin{equation*}\label{eq:betatilub}
|\tilde\beta-\beta|=\big|2\langle \z_3,\D\z_1\rangle+\langle \z_3,\D\z_3\rangle\big| \le 2\d_n\|\z_3\|\|\z_1\|+\d_n\|\z_3\|^2 \le 3\d_n c\gamma\overset{\eqref{eq:betage}}\le\frac{12\d_n c}{\d_{K+1}}\beta.
\end{equation*}
Meanwhile, we have from \eqref{eq:defc} that $12 \d_n c/\d_{K+1} \le 3/4$, and consequently,
\begin{equation}
\label{eq:betatilde}
\frac{1}{4}\beta \le (1 - \frac{12 \d_n c}{\d_{K+1}})\beta \le \tilde\beta \le (1 + \frac{12\d_n c}{\d_{K+1}}) \beta \le 2 \beta.
\end{equation}
Furthermore, it holds that $\norm{\tilde \x - \x^*}^2 = \|\tilde\z\|^2=\|\z\|^2=\gamma$ and $\norm{\tilde \x}^2 = \|\s+\tilde\z\|^2=\|\s+\z\|^2 = \norm{\x}^2 = 1$.
Since $\tilde \x \in {\bf bd}(B)\cap {\rm Range}(\A - \lambda_1\I)$ and $\norm{\tilde \x - \x^*} = \sqrt{\gamma} \le \epsilon_1$, we know from \eqref{eq:KLrange} that
\begin{equation}\label{eq:consteasy}
\frac{1}{2}{\rm dist}(-\nabla f(\tilde\x), N_B(\tilde\x)) = \|\D \tilde \z-(\tilde\alpha+\tilde\beta)(\s+\tilde \z)\|\ge c_{1}\sqrt{\d_{K+1}}\tilde\beta^{1/2}.
\end{equation}
Next, we bound the difference between ${\rm dist}(-\nabla f(\tilde\x), N_B(\tilde\x))/2$ and ${\rm dist}(-\nabla f( \x), N_B( \x))/2$ by
\begin{equation}\label{eq:tildeDD}
\begin{array}{lll}
&&\|[\D \tilde \z-(\tilde\alpha+\tilde\beta)(\s+\tilde \z)]-[\D \z-(\alpha+\beta)(\s+\z)]\|\\
&=&\|\D (\tilde \z-\z)-(\tilde\alpha-\alpha)\s-\tilde\alpha\tilde\z+\alpha\z-\tilde\beta(\s+\tilde \z)+\beta(\s+\z)\|\\
&\le &\|\D\z_3\|+|\tilde\alpha-\alpha|+|\tilde\alpha|\|\tilde\z\|+|\alpha|\|\z\|+\tilde\beta+\beta\\
&\le& 2\d_{n}\|\z_2\|+2\d_n\gamma+ 3\beta,
\end{array}
\end{equation}
where
the first inequality follows from $\D(\tilde\z-\z)=\D(\z_3-\z_2)=\D\z_3$, $\|\s\|=1$ and $\|\s+\tilde\z\|=\|\s+\z\|=1$,
and the last inequality follows from
$\|\D\z_3\|\le \d_n\|\z_3\|=\d_n\|\z_2\|$, \eqref{eq:alphatil},
$|\alpha|\|\z\|=|\s^T\D\z|\|\z\|\le\d_n\|\z\|^2=\d_n\gamma$,
$|\tilde\alpha|\|\tilde\z\|=|\s^T\D\tilde\z|\|\tilde\z\|\le\d_n\|\tilde\z\|^2=\d_n\gamma$ and \eqref{eq:betatilde}.
The right-hand-side of \eqref{eq:tildeDD} can be further bounded by
\begin{equation*}
\begin{array}{lll}
2\d_{n}\|\z_2\|+2\d_n\gamma+ 3\beta&\overset{\eqref{eq:betage}}\le& 2\d_{n}\|\z_2\|+(3+\frac{8\d_n}{\d_{K+1}})\beta\\
&\overset{}\le& 2\d_{n}c\sqrt\gamma+(3+\frac{8\d_n}{\d_{K+1}})\beta\\
&\overset{\eqref{eq:defc}}\le&\frac{c_{1}}{8}\d_{K+1}\sqrt\gamma +(3+\frac{8\d_n}{\d_{K+1}})\beta\\
&\overset{\eqref{eq:betage}}\le&\frac{c_{1}}{4}\sqrt{\d_{K+1}}\beta^{1/2}+(3+\frac{8\d_n}{\d_{K+1}})\beta\\[2pt]
&{}=& \big(\frac{c_{1}}{4}\sqrt{\d_{K+1}} + (3 + \frac{8\d_n}{\d_{K+1}})\beta^{1/2}\big)\beta^{1/2}.
\end{array}
\end{equation*}
Since $\beta\to 0$ as $\x \to \x^*$, by reducing $\epsilon_0$ if necessary, we have
\[
(3 + \frac{8\d_n}{\d_{K+1}})\beta^{1/2} \le \frac{c_1}{8} \sqrt{\d_{K+1}}
\]
and consequently,
\begin{equation}
\label{eq:bdc3}
2\d_{n}\|\z_2\|+2\d_n\gamma+ 3\beta \le \frac{3c_1}{8} \sqrt{\d_{K+1}}\beta^{1/2}.
\end{equation}
Therefore, we know that
\begin{equation}\label{eq:hc2i2a}
\begin{array}{lll}
H(\x)^{1/2}&=&\|\D \z-(\alpha+\beta)(\s+\z)\|\\
&\ge& c_{1}\sqrt{\d_{K+1}}\tilde\beta^{1/2}-2\d_{n}\|\z_2\|-2\d_n\gamma-3\beta\\
&\ge&\frac{c_{1}}{2}\sqrt{\d_{K+1}}\beta^{1/2}-\frac{3c_{1}}{8}\sqrt{\d_{K+1}}\beta^{1/2} \\
&\ge&\frac{c_{1}}{8}\sqrt{\d_{K+1}}\beta^{1/2}, \\
\end{array}
\end{equation}
where the first inequality is due to \eqref{eq:consteasy} and \eqref{eq:tildeDD},
and the second inequality is due to \eqref{eq:betatilde} and \eqref{eq:bdc3}.
\item
On the other hand, suppose that $\|\z_2\|\ge c\sqrt\gamma$. Then, $\|\z_1\|\le \sqrt{(1-c^2)\gamma}$.
Let $\z'=\rho \z_1$ and {$\x'=\x^*+\P\z'$ (so that $\P^T\x' = \s + \z'$)}, where
$\rho=\gamma/\|\z_1\|^2\ge1/\sqrt{1-c^2}>1.$
Then we have
$$\norm{\x'}^2 = \|\s+\z'\|^2=1+2\s^T\z'+\|\z'\|^2=1-\rho\gamma+\rho^2\|\z_1\|^2=1,$$
where the third equality is due to
$$2\s^T\z'=2\rho\s^T\z_1=2\rho\s^T\z=\rho(\|\s+\z\|^2-\|\s\|^2-\|\z\|^2) = \rho(1 - 1 -\gamma)=-\rho\gamma.$$
Similar as in \eqref{eq:defabP}, define $\alpha'$ and $\beta'$ corresponding to $\x'$ as $\alpha'=\s^T\D\z'=\rho\alpha$ and $\beta'=\z'^T\D\z'=\rho^2\beta$.
Then, recalling the definition of $H$ in \eqref{eq:defH}, we have
\begin{eqnarray*}
\frac{H(\x')}{(\beta')^{3/2}}&=&\frac{\|\D\z'\|^2-\alpha'^2-2\alpha'\beta'-\beta'^2}{(\beta')^{3/2}}\\
&=&\frac{(\|\D\z\|^2-\alpha^2)/\rho-2\alpha\beta-\rho\beta^2}{\beta^{3/2}}\\
&=&\frac{(\|\D\z\|^2-\alpha^2-2\alpha\beta-\beta^2)/\rho+2(1/\rho-1)\alpha\beta+(1/\rho-\rho)\beta^2}{\beta^{3/2}}\\
&\le & \frac{H(\x)}{\rho\beta^{3/2}}+\frac{2(1/\rho-1)\alpha\beta}{\beta^{3/2}},
\end{eqnarray*}
where the last inequality is due to $1/\rho-\rho<0$.
Note that since $\|\x'\|=1$, it holds that $H(\x')=\|\D\z'\|^2-(\alpha'+\beta')^2=\|\D\z'-(\alpha'+\beta')(\s+\z')\|^2\ge0$.
Hence we have
\begin{equation}\label{eq:hc2i2b}
\frac{H(\x)}{\beta^{3/2}}\ge\frac{2\rho(1-1/\rho)\alpha\beta}{\beta^{3/2}}\overset{\eqref{eq:algebe}}\ge\frac{\rho-1}{2}\sqrt{\d_{K+1}}\ge\frac{1/\sqrt{1-c^2}-1}{2}\sqrt{\d_{K+1}}.
\end{equation}
\end{enumerate}
Combining (\ref{eq:defabP}--\ref{eq:distandH}), \eqref{eq:hc2ii1}, \eqref{eq:alphage0}, \eqref{eq:hc2i2a} and \eqref{eq:hc2i2b} (and \eqref{eq:hc2i2adim2} in Appendix \ref{app:dim2} for the case with ${\rm dim}({\rm Range}(\D)) \le 2$) with Lemma \ref{lem:interiorKL}, and noting that, {by reducing $\epsilon_0$ if necessary}, $\beta^{3/4}\le\beta^{1/2}$, we know that \eqref{eq:hardKL2i} holds with some positive constant $\tau$ and sufficiently small $\epsilon >0$.
\end{proof}
We summarize the results of Theorem \ref{thm:kl-convex} and Lemmas \ref{lem:interiorKL}--\ref{lem:hard2i} in the following theorem.
\begin{thm}
\label{cor}
There exist some constant $\tau_{KL} >0$ and sufficiently small $\epsilon >0$ such that the KL inequality holds
\begin{equation}
\label{eq:KLgeneral}
\big(f(\x)-f^*+\delta_{B}(\x)\big)^\varrho\le\tau_{KL}{\rm dist}(-\nabla f(\x), N_B(\x)), \quad \forall\, \x\in B(\x^*,\epsilon),
\end{equation}
where $\varrho=\left\{
\begin{aligned}
& {3/4}, \mbox{ for the {TRS-ill case} \eqref{eq:con},} \\%[5pt]
& {1/2}, \mbox{ otherwise.}
\end{aligned}
\right.$
\end{thm}
The above theorem in fact shows that the KL inequality with exponent $3/4$ holds in all cases, since $f(\x)-f^*$ is bounded on the unit ball and $t^{3/4}\le t^{1/2}$ for all $0\le t\le 1$.
\section{Convergence analysis of first order methods}
Recently, Beck and Vaisbourd \cite{beck2018globally} demonstrated that with a proper initialization, {projected gradient methods} (in fact, more general first order conic methods) for solving the TRS converge to a global optimal solution.
However, the rate of convergence of these algorithms is not studied in their paper.
Meanwhile, it is well known that projected gradient methods converge to a stationary point at rate $O(1/\sqrt{k})$ for {minimizing a} nonconvex smooth function with globally Lipschitz continuous gradient over a closed convex set, with $k$ being the iteration index; see, e.g., \cite[Chapter 1]{nesterov1998introductory} for unconstrained minimization and \cite[Chapter 10]{beck2017first} for general composite minimization.
Hence, a straightforward conclusion is that {projected gradient methods} for the TRS achieve a sublinear iteration complexity of $O(1/\sqrt{k})$.
Here, we improve this result by showing that the local convergence rate of {projected gradient methods} for solving the TRS is at least $O(1/k^2)$. In fact, in most cases, they even enjoy local linear convergence. As one can observe in the subsequent analysis in this section, the cornerstone for these improvements is the obtained KL inequality for the TRS.
Let $\Pi_B:\R^n \to \R^n$ be the Euclidean projector onto the unit norm ball $B$, i.e., for any $\z\in\R^n$, \[\Pi_B(\z)=\left\{
\begin{aligned}
& {\frac{\z}{\|\z\|}}, &\mbox{ if } \norm{\z} \ge 1, \\
& \z, &\mbox{ otherwise.}
\end{aligned}
\right.\] Our main contribution in this section is summarized in the following theorem for constant step size projected gradient methods. Note that the Lipschitz constant of the gradient of the objective function in the TRS is $L = 2\|\A\|_2$ and the step size we will use is $t\in(0,\frac{2}{L})$, i.e., $t\in(0,\frac{1}{\norm{\A}_2})$.
Specifically, we show that {projected gradient methods} achieve a locally sublinear rate of $O(1/k^2)$ in the \emph{TRS-ill case}; otherwise, the local convergence rate can be further improved to linear.
\begin{thm}[Constant step size projected gradient methods]
\label{thm:rate}
Let the step size $t\in(0, 2/L)$, i.e., $t\in(0,1/\norm{\A}_2)$. Suppose for all $k\ge 0$ that $\x_{k+1}=\Pi_B(\x_k-t\nabla f(\x_k))$ and that the sequence $\{\x_k\}$ converges to an optimal solution $\x^*$ to the TRS $\rm(P_0)$.
Then, there exists a sufficiently large positive integer $N$ such that
$\{\x_k\}_{k\ge N} \subseteq B(\x^*,\epsilon)$ where $\epsilon >0$ is the same constant in Theorem \ref{cor}.
Let $M=\frac{\tau_{KL} (2\|\A\|_2+1/t)}{\sqrt{1/t-\|\A\|_2}}$ with $\tau_{KL} >0$ being the constant in Theorem \ref{cor}. Then, in the \emph{TRS-ill case} \eqref{eq:con}, it holds that
\begin{equation}
\label{eq:rate1}
f(\x_k)-f^*\le\frac{1}{\left(\frac{k-N}{M^2(2+\frac{3}{2M^2}\sqrt{f(\x_N)-f^*})}+\frac{1}{\sqrt{f(\x_N)-f^*}}\right)^2}, \quad \forall\, k\ge N;
\end{equation}
otherwise, it holds that
\begin{equation*}
\label{eq:rate2}
f(\x_k)-f^*\le\left(\frac{M^2}{M^2 + 1}\right)^k(f(\x_N)-f^*), \quad \forall\, k\ge N.
\end{equation*}
\end{thm}
\begin{proof}
Since $\x_k \to \x^*$ as $k\to \infty$, for the given constant $\epsilon >0$ in Theorem \ref{cor}, there exists $N>0$ such that
$\x_k\in B(\x^*,\epsilon)$ for all $k\ge N$.
For all $k\ge 0$, one can rewrite the updating rule $\x_{k+1}=\Pi_B\big(\x_k-t\nabla f(\x_k)\big)$ in the following manner
\[\x_{k+1}=\argmin_\u \left\{ \nabla f(\x_k)^T(\u-\x_k)+\frac{1}{2t}\|\u-\x_k\|^2+\delta_B(\u)\right\},\]
whose optimality condition asserts:
\[ 0\in \x_{k+1}-(\x_k-t\nabla f(\x_k)) +N_B(\x_{k+1})\] with $N_B(\x_{k+1})$ being the normal cone of $B$ at $\x_{k+1}$. Denote for all $k\ge 0$, $\v_{k+1}= - (\x_{k+1} - \x_k)/t - \nabla f(\x_k)$. Then, it holds for all $k\ge 0$ that $\v_{k+1} \in N_B(\x_{k+1})$ and
\begin{equation}
\label{eq:con1}
\|\v_{k+1}+\nabla f(\x_k)\|= \|\x_{k+1}-\x_k\|/t.
\end{equation}
Since $1/t > \norm{\A}_2=L/2$, from Lemma 10.4 in \cite{beck2017first}, we know that
\begin{equation}
\label{eq:con2}
f(\x_k)-f(\x_{k+1})\ge ({1/t-\|\A\|_2})\|\x_k-\x_{k+1}\|^2, \quad \forall\, k\ge 0.
\end{equation}
Meanwhile, from the KL inequality \eqref{eq:KLgeneral}, it holds
for all $k\ge N$ that
\begin{eqnarray*}
(f(\x_{k+1})-f^*)^{\varrho}&\le & \tau_{KL} {\rm dist}(-\nabla f(\x_{k+1}), N_B(\x_{k+1}))\\
&\le& \tau_{KL} \|\nabla f(\x_{k+1})+\v_{k+1}\| \\
&=&\tau_{KL} \|\nabla f(\x_{k+1})-\nabla f(\x_k)+\nabla f(\x_k)+\v_{k+1}\|\\
&\le& \tau_{KL} \big(\|\nabla f(\x_{k+1})-\nabla f(\x_k)\|+\|\nabla f(\x_k)+\v_{k+1}\|\big),
\end{eqnarray*}
where $\varrho$ is the KL exponent and $\tau_{KL} >0$ is the constant in Theorem \ref{cor}.
The above inequality, together with the Lipschitz continuity of $\nabla f$ (note that the Lipschitz constant is $L=2\|\A\|_2$) and \eqref{eq:con1}, gives
\[
(f(\x_{k+1})-f^*)^{\varrho}\le \tau_{KL} (2\|\A\|_2+1/t)\|\x_k-\x_{k+1}\|, \quad \forall\, k\ge N.
\]
Substituting this to \eqref{eq:con2} implies
\[
(f(\x_{k+1})-f^*)^{\varrho}\le\frac{\tau_{KL} (2\|\A\|_2+1/t)}{\sqrt{1/t-\|\A\|_2}}(f(\x_{k})-f(\x_{k+1}))^\frac{1}{2}, \quad\forall\, k\ge N.
\]
Defining $r_k=f(\x_{k})-f^*$ and $M=\frac{\tau_{KL} (2\|\A\|_2+1/t)}{\sqrt{1/t-\|\A\|_2}}$, we have
\begin{equation}
\label{eq:rk}
r_{k+1}^{\varrho}\le M(r_{k}-r_{k+1})^\frac{1}{2}, \quad \forall\, k\ge N.
\end{equation}
We divide our discussions into two cases:
\begin{itemize}
\item In the \emph{TRS-ill case} \eqref{eq:con}, from Theorem \ref{cor} and \eqref{eq:rk}, we have $r_{k+1}^\frac{3}{2}\le M^2(r_k-r_{k+1})$.
Hence, for all $k\ge N$, we have {from \eqref{eq:con2}} that $r_{k+1} \le r_k$ and
\begin{eqnarray*}
\frac{1}{\sqrt{r_{k+1}}}-\frac{1}{\sqrt{r_{k}}}&\ge&\frac{1}{\sqrt{r_{k+1}}}-\frac{1}{\sqrt{r_{k+1}+\frac{1}{M^2}r_{k+1}^{3/2}}}\\
&=&\frac{\sqrt{r_{k+1}+\frac{1}{M^2}r_{k+1}^{3/2}}-\sqrt{r_{k+1}}}{\sqrt{r_{k+1}}\sqrt{r_{k+1}+\frac{1}{M^2}r_{k+1}^{3/2}}}\\
&=&\frac{r_{k+1}^{3/2}}{M^2\sqrt{r_{k+1}}\sqrt{r_{k+1}+\frac{1}{M^2}r_{k+1}^{3/2}}(\sqrt{r_{k+1}+\frac{1}{M^2}r_{k+1}^{3/2}}+\sqrt{r_{k+1}})}\\
&=&\frac{1}{M^2\sqrt{1+\frac{1}{M^2}\sqrt{r_{k+1}}}(\sqrt{1+\frac{1}{M^2}\sqrt{r_{k+1}}}+1)}\\
&=&\frac{1}{M^2(\sqrt{1+\frac{1}{M^2}\sqrt{r_{k+1}}}+1+\frac{1}{M^2}\sqrt{r_{k+1}})}\\
&\ge&\frac{1}{M^2(2+\frac{3}{2M^2}\sqrt{r_{k+1}})}\\
&\ge&\frac{1}{M^2(2+\frac{3}{2M^2}\sqrt{r_{N}})},
\end{eqnarray*}
where the second inequality follows from $\sqrt{1+\frac{1}{M^2}\sqrt{r_{k+1}}}\le 1+\frac{1}{2M^2}\sqrt{r_{k+1}}$.
Hence, for all $k\ge N$, we have
\[\frac{1}{\sqrt{r_k}}\ge\frac{k-N}{M^2(2+\frac{3}{2M^2}\sqrt{r_{N}})}+\frac{1}{\sqrt{r_N}}
\]
and thus
\[ r_k\le\frac{1}{\left(\frac{k-N}{M^2(2+\frac{3}{2M^2}\sqrt{r_{N}})}+\frac{1}{\sqrt{r_N}}\right)^2}\,.\]
\item Otherwise, from Theorem \ref{cor} and \eqref{eq:rk}, we have $r_{k+1}\le M^2 (r_k-r_{k+1})$ for all $k\ge N$. This implies \[
r_{k}\le \left(\frac{M^2}{M^2+1}\right)^{k - N}r_N,\quad \forall\, k\ge N.\]
\end{itemize}
We thus complete the proof for the theorem.
\end{proof}
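To make the iteration analyzed above concrete, the following is a minimal numerical sketch (not the authors' implementation) of the constant step size projected gradient method for the TRS, written in Python with NumPy. The factor $0.9$ in the step size and the stopping tolerance are placeholder choices, and, as remarked below, convergence to a global optimal solution additionally requires a properly chosen starting point \cite{beck2018globally}.
\begin{verbatim}
import numpy as np

def project_onto_ball(z):
    # Euclidean projection onto the unit ball B
    nrm = np.linalg.norm(z)
    return z / nrm if nrm >= 1.0 else z

def projected_gradient_trs(A, b, x0, max_iter=10000, tol=1e-10):
    # Constant step size projected gradient method for
    #   min f(x) = x^T A x - 2 b^T x   s.t. ||x|| <= 1,
    # with step size t in (0, 1/||A||_2), i.e., t in (0, 2/L).
    t = 0.9 / np.linalg.norm(A, 2)
    x = project_onto_ball(x0)
    for _ in range(max_iter):
        grad = 2.0 * (A @ x - b)          # gradient of f
        x_new = project_onto_ball(x - t * grad)
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x
\end{verbatim}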
We remark here
that the sublinear convergence rate \eqref{eq:rate1} holds in all cases as the KL inequality holds with exponent $3/4$ for all cases. Note that the assumption in Theorem \ref{thm:rate} that {projected gradient methods} converge to a global optimal solution is not restrictive at all. As mentioned in the introduction and at the beginning of this section, the assumption can be guaranteed as long as the starting point of the projected gradient method is properly chosen \cite{beck2018globally}.
Moreover, it is also noted in \cite{beck2018globally} that the initial point can be obtained without much difficulty.
Although the iteration complexity results derived in Theorem \ref{thm:rate} hold only locally around the optimal solution $\x^*$, by using ideas similar to those in \cite{han2017linear}, one can directly extend these results to a global version.
{To see this, let $N$ be the positive integer in Theorem \ref{thm:rate}.
Note from the proof of Theorem \ref{thm:rate} that $\{f(\x_k)\}_{k\ge 0}$ is non-increasing. Then, we have that for all $1\le k\le N$,
\[
0 \le f(\x_k)-f^* \le f(\x_0) - f^* \le
\min \left\{\frac{N^2( f(\x_0) - f^*)}{k^2} , (f(\x_0) - f^*) (\frac{M^2}{M^2 + 1})^{k-N} \right\}.
\]
Hence {it follows from Theorem \ref{thm:rate} that} there exist constants $C_1, C_2 >0$
such that
\begin{equation*}
\label{eq:complexity_global}
f(\x_k) - f(\x^*) \le
\left\{
\begin{array}{ll}
\frac{C_1}{k^2}, &\text{for the \emph{TRS-ill case} \eqref{eq:con}},\\
C_2\, (\frac{M^2}{M^2 + 1})^{k}, &\text{otherwise,}
\end{array}
\right.\quad \forall\, k\ge 1.
\end{equation*}
{Note that the constants $C_1,C_2$ depend on the choice of the initial point and can be hard to estimate explicitly.}
In light of Theorem \ref{thm:main}, we can further derive the convergence rate associated with $\dt(\x_k,S_0)$:
\begin{equation*}
\label{eq:complexity_global_dist}
\dt(\x_k,S_0)\le\left\{
\begin{array}{ll}
\frac{\tau_{EB}C_1^{1/4}}{\sqrt k}, &\text{for the \emph{TRS-ill case} \eqref{eq:con}},\\%[5pt]
\tau_{EB} \sqrt{C_2}\,(\frac{M^2}{M^2 + 1})^{k/2}, &\text{otherwise,}
\end{array}
\right. \quad \forall\, k\ge 1,
\end{equation*}
where $\tau_{EB} >0$ is the error bound modulus in Theorem \ref{thm:main}. We also note that the projected gradient method with backtracking line search can be analyzed in a similar way, as \eqref{eq:con2} still holds with a slightly more conservative constant, and thus we omit its analysis for simplicity.
\begin{rem}
It would be interesting to compare Theorem \ref{thm:rate} with the results obtained in \cite{zhang2017generalized} and \cite{carmon2018analysis}, which demonstrated that the generalized Lanczos trust-region (GLTR) method has a linear convergence rate in the easy case.
In contrast, we have proved that {projected gradient methods} converge sublinearly at rate $O(1/k^2)$ for the \emph{TRS-ill case}, and linearly otherwise (including the easy case).
\end{rem}
\begin{rem}
Based on the established KL inequality for the TRS in Theorem \ref{cor}, the same order of convergence rate for the gap $f(\x_k)-f(\x^*)$ and for $\dt(\x_k,S_0)$ can also be obtained by using the arguments in \cite{frankel2015splitting}. However, our proof here is simpler and makes the dependence on the constants explicit.
\end{rem}
\section{Conclusion}
In this paper, we conducted a thorough analysis of H\"olderian error bounds and the KL inequality associated with the TRS. Specifically,
we showed that for the TRS, a H\"olderian error bound holds with modulus 1/4 and the KL inequality holds with exponent 3/4.
Moreover, we demonstrated that the H\"olderian error bound modulus and the KL exponent are in fact both 1/2 except in the \emph{TRS-ill case} \eqref{eq:con}.
Given these results, we further proved that {projected gradient methods} for solving the TRS enjoy a local sublinear convergence rate of $O(1/k^2)$, which can be further improved to a linear rate except in the \emph{TRS-ill case}.
\section*{Acknowledgements}
We would like to thank Professor Jong-Shi Pang at University of Southern California for his helpful comments on an early version of this paper.
\section{Appendix}
\subsection{{Complementary} proof for Lemma \ref{lem:hard2ii}}
Here, we provide a supporting lemma that helps prove Lemma \ref{lem:hard2ii}.
With the same notation $\alpha,\beta$ as in \eqref{eq:defabP} and $H(\x)$ as in \eqref{eq:defH}, recall that $\|\s\|=\|\P^T\x^*\|=1$ and that $\{\d_i\}_{i=1}^n$ is given in \eqref{eq:kD}.
We have the following lemma.
\begin{lem}
\label{lem:funcPsi}
It holds that for all $\z\in\R^n$
\[
H(\x)= \sum_{i=1}^K \s_i^2\sum_{j= K+1}^{n} (\d_j \z_j)^2
+ \sum_{K+1\le i < j \le n}(\d_i\z_i \s_j - \d_j\z_j \s_i)^2 - 2\alpha\beta-\beta^2.
\]
Moreover, if $\s\not\perp {\rm Null}(\D)$, it holds that
\begin{equation}
\label{eq:psizs}
H(\x)\ge \frac{1}{2}\d_{K+1}(\sum_{j=1}^K \s_j^2) \beta
\end{equation}
for all $\z$ with sufficiently small $\norm{\z}$.
\end{lem}
\begin{proof}
By some calculations, we see that
\begin{align*}
H(\x) ={}& \sum_{i=1}^n (\d_i\z_i)^2 - \big(\sum_{i=1}^{n} \s_i \d_i \z_i\big)^2 - 2\alpha\beta-\beta^2\\
={}&\sum_{i=1}^n \sum_{j=1}^n \s_j^2 (\d_i \z_i)^2 - \big(\sum_{i=1}^{n} \s_i \d_i \z_i\big)^2 - 2\alpha\beta-\beta^2 \\
={}& \sum_{1\le i <j \le n}(\d_i\z_i \s_j - \d_j\z_j \s_i)^2 - 2\alpha\beta-\beta^2\\
={}&\sum_{i=1}^K \sum_{j=K+1}^{n} (\d_i\z_i \s_j - \d_j\z_j \s_i)^2
+ \sum_{i=1}^K \sum_{j=i+1}^{K} (\d_i\z_i \s_j - \d_j\z_j \s_i)^2\\
&+ \sum_{K+1\le i < j \le n}(\d_i\z_i \s_j - \d_j\z_j \s_i)^2 - 2\alpha\beta-\beta^2\\
={}& \sum_{i=1}^K \s_i^2\sum_{j=K+1}^{n} (\d_j \z_j)^2
+ \sum_{K+1\le i < j \le n}(\d_i\z_i \s_j - \d_j\z_j \s_i)^2 - 2\alpha\beta-\beta^2,
\end{align*}
where the second equality holds since $\norm{\s} = 1$,
the third equality follows from the fact that $\sum_{1\le i,j\le n} a_i^2 b_j^2 - (\sum_{1\le i \le n} a_ib_i)^2 = \sum_{1\le i <j \le n} (a_ib_j - a_jb_i)^2$,
and the last equality is due to $\d_i =0$ for $i=1,\ldots,K$.
Since $\sum_{i=K+1}^n\d_i^2 \z_i^2 \ge \sum_{i=K+1}^n \d_{K+1} \d_i \z_i^2 = \d_{K+1}\beta$, we obtain that
\[
H(\x)\ge (\d_{K+1} \sum_{j=1}^K\s_j^2 - 2\alpha-\beta)\beta, \quad \forall\, \z \in \R^n.
\]
If, further, $\s\not\perp {\rm Null}(\D)$, together with $\d_{K+1} >0$, we know that
$\d_{K+1} \sum_{j=1}^K \s_j^2 >0$.
This together with the fact that $\alpha\to 0$ and $\beta\to 0$ as $\z\to {\bf 0}$, implies that \eqref{eq:psizs} holds when $\norm{\z}$ is sufficiently small. We thus complete the proof.
\end{proof}
\subsection{Proof of Lemma \ref{lem:KLld}}
\begin{proof}
Recall the spectral decomposition of $\A$ and the definition of the diagonal matrix $\D$ as:
\[
\A = \P{\mathbf \Lambda}\P^T = \P {\rm Diag}(\lambda)\P^T, \quad
\A - \lambda_1\I = \P \D \P^T, \quad \D = {\mathbf \Lambda} -\lambda_1 \I,
\]
{where $\P \P^T = \P^T\P = \I$.}
Define linear operators $\mathcal{L}_{K} :\R^{n} \to \R^{K}$ and $\mathcal{U}_{K}:\R^n \to \R^{n-K}$ as
\[
\mathcal{L}_{K}(\x) = [\x_{1}, \ldots, \x_{K}]^T, \quad \mathcal U_{K}(\x) = [\x_{K+1},\ldots, \x_n]^T, \quad \forall\, \x\in\R^n.
\]
Let $\M = {\rm Diag}(\mathcal U_{K}(\lambda))$ where $\lambda \in \R^n$ is {the vector formed by all the eigenvalues} of $\A$ in ascending order. Define function $q$ as
\[
q(\z) := \inprod{\z}{\M\z} - 2\inprod{\mathcal U_{K}(\P^T \b)}{\z}, \quad \forall \z \in \R^{n-K}.
\]
Consider the following optimization problem
\begin{equation}
\label{prob:ming}
\min \left\{
q(\z) \mid \z \in \widehat{B} \right\} \mbox{ with } \widehat{B} : = \left\{ \z \in \R^{n - K} \mid \norm{\z} \le 1 \right\}.
\end{equation}
We show next that $\mathcal U_{K}(\P^T \x^*)$ is an optimal solution to \eqref{prob:ming}.
Since $\b, \x^*\in {\rm Range}(\A - \lambda_1 \I)$, we know that $\P^T \b, \P^T \x^* \in {\rm Range}({\mathbf \Lambda} - \lambda_1\I)$.
Note that since the first $K$ diagonal entries of the diagonal matrix ${\mathbf \Lambda} - \lambda_1\I$ are zeros, we have $\mathcal L_{K}(\P^T \x^*) =\mathcal L_{K}(\P^T\b) = \bf0$ and $\norm{\mathcal U_{K}(\P^T \x^*)} = 1$.
Moreover, since $(\A -\lambda_1 \I)\x^* = \b$, we have $({\mathbf \Lambda} - \lambda_1\I)(\P^T \x^*) = \P^T \b $.
Therefore, $ (\M - \lambda_1 \I) \mathcal U_K(\P^T\x^*) = \mathcal U_{K}(\P^T\b)$, i.e., $\mathcal U_{K}(\P^T \x^*)$ and $-\lambda_1$ solve the optimality conditions (see the KKT conditions in Section \ref{sec:pre}) associated with the smaller dimensional TRS \eqref{prob:ming}.
Since $\lambda_{K+1}$ is the smallest eigenvalue of $\M$ and {an optimal Lagrangian multiplier is} $-\lambda_1> -\lambda_{K+1}$, we know that problem \eqref{prob:ming} falls into the easy case or the hard case 1. Hence, by Lemma \ref{lem:KLeasyhard1}, we have that there exist $\epsilon_2, \tau >0$ such that for all $\z \in B({\cal U}_K(\P^T\x^*), \epsilon_2)$,
\begin{equation}
\label{eq:klqz}
[q(\z) - q(\mathcal U_K(\P^T\x^*)) + \delta_{\widehat{B}}(\z)]^{1/2} \le \tau {\rm dist}(-\nabla q(\z), N_{\widehat{B}}(\z)).
\end{equation}
Note that for all $\x \in {\rm Range}(\A - \lambda_1\I)$, it holds that $\mathcal L_K(\P^T\x) = \bf0$ and $\norm{{\cal U}_K(\P^T \x)} = \norm{\P^T \x} = \norm{\x}$.
Then, for all $\x\in {\rm Range}(\A - \lambda_1\I)$ with $\norm{\x} < 1$, we know that $\norm{{\cal U}_K(\P^T\x)} < 1$, $N_{\widehat{B}}({\cal U}_K(\P^T\x)) =\bf 0$, and
\begin{equation}
\label{eq:distqin}
\begin{aligned}
{\rm dist}(-\nabla q(\mathcal U_K(\P^T \x)), N_{\widehat{B}}(\mathcal U_K(\P^T \x))) ={}& \norm{\nabla q(\mathcal U_K(\P^T \x))} = \norm{2(\M\mathcal U_K(\P^T \x) - \mathcal U_K(\P^T \b))} \\[2pt]
={}&\norm{2({\mathbf \Lambda} \P^T\x - \P^T\b)} = \norm{2(\P{\mathbf \Lambda}\P^T\x -\b)} \\[2pt]
={}& \norm{2(\A\x - \b)} = {\rm dist}(-\nabla f(\x), N_B(\x)).
\end{aligned}
\end{equation}
Meanwhile, for all $\x \in {\rm Range}(\A - \lambda_1\I)$ with $\norm{\x} = 1$, it holds $\norm{{\cal U}_K(\P^T \x)} = 1$ and
\[
\norm{\mathcal U_K(\P^T\x) - \mathcal U_K(\P^T \x^*)} = \norm{\P^T\x - \P^T \x^*} =
\norm{\x - \x^*}.
\]
{Then for all $\x \in {\bf bd}(B) \cap {\rm Range}(\A - \lambda_1\I) $ satisfying $\norm{\x - \x^*}\le \epsilon: = \min\left\{\epsilon_0,\epsilon_2 \right\}$, we further have}
\begin{equation}
\label{eq:distqdistf}
\begin{aligned}
&{\rm dist}(-\nabla q(\mathcal U_K(\P^T \x)), N_{\widehat{B}}(\mathcal U_K(\P^T \x))) \\
={}& \norm{2(\M\mathcal U_K(\P^T \x) - \mathcal U_K(\P^T \b)) +
\inprod{-2(\M\mathcal U_K(\P^T \x) -\mathcal U_K(\P^T \b))}{\mathcal U_K(\P^T \x)}\mathcal U_K(\P^T \x) }\\
={}&\norm{2({\mathbf \Lambda} \P^T \x - \P^T\b) + \inprod{-2({\mathbf \Lambda} \P^T \x - \P^T\b)}{\P^T \x} \P^T \x}\\
={}&\norm{2(\P {\mathbf \Lambda} \P^T\x - \b) + \inprod{-2(\P{\mathbf \Lambda} \P^T\x - \b)}{\x} \x} \\
={}&\norm{2(\A\x - \b) + \inprod{-2(\A\x - \b)}{\x}\x}\\
={}&{\rm dist}(-\nabla f(\x), N_B(\x)),
\end{aligned}
\end{equation}
where $\epsilon_0$ is given in \eqref{eq:eps0}, the first and last equations follow from \eqref{eq:distnorm1}, \eqref{eq:nux}, \eqref{eq:eps0} and
$$\inprod{-2(\M\mathcal U_K(\P^T \x)-\mathcal U_K(\P^T \b))}{\mathcal U_K(\P^T \x)}=\inprod{-2(\A\x - \b)}{\x}>0,$$
and the third equation holds since $\P$ is an orthogonal matrix.
Since $q(\mathcal U_K(\P^T\x^*)) = f(\x^*) = f^*$ and
$q(\mathcal U_K(\P^T \x)) = f(\x)$ for all $\x \in {\rm Range}(\A - \lambda_1 \I)$, we know from \eqref{eq:klqz}, \eqref{eq:distqin} and \eqref{eq:distqdistf} that
\begin{equation*}\label{eq:hd2ieasy}
\big(f(\x) - f(\x^*) + \delta_B(\x)\big)^{1/2} \le \tau {\rm dist}(-\nabla f(\x), N_B(\x)),
\end{equation*}
for all
$\x\in B(\x^*,\epsilon) \cap {\rm Range}(\A - \lambda_1 \I)$.
\end{proof}
\subsection{Complementary proof for Lemma \ref{lem:hard2i}}\label{app:dim2}
As a complementary proof for Lemma \ref{lem:hard2i}, we use the same notation as in the main proof. Here, we consider {{\bf Case II} } with $\dim(\ra(\D)) \le 2$. {Recall that we are in the case where} $\|\s\| = \|\s+\z\|=1$, $\alpha = \inprod{\s}{\D\z} > 0$, $\|\z\|^2=\gamma < 1/4$, $\z = \z_1 + \z_2$ with $\z_1 \in \ra(\D)$ and $\z_2 \in {\rm Null}(\D)$ and $\norm{\z_2}\le c \sqrt{\gamma}$ with the constant $c \le 1/16$ given in \eqref{eq:defc}.
It can also be verified that $\inprod{\s}{\z_1} = \inprod{\s}{\z} = -\gamma/2$.
We first claim that $\dim(\ra(\D))>1$. Indeed, if $\dim(\ra(\D))=1$, since $\s \in \ra(\D)$, it holds that $\ra(\D)={\rm span}\{\s\}$, i.e., the space spanned by $\{\s\}$.
Since $\z_1\in\ra(\D)$ and $\inprod{\s}{\z_1} = -\gamma/2$, we have $\z_1 = -\gamma\s/2$.
Hence,
\[
\inprod{\s}{\D\z} = \inprod{\s}{\D\z_1} = -\frac{\gamma}{2} \inprod{\s}{\D\s} \overset{\eqref{eq:kD}}< 0.
\]
However, this contradicts our assumption that $\alpha = \inprod{\s}{\D\z} > 0$.
Hence, $\dim(\ra(\D)) = 2$.
Therefore, \eqref{eq:kD} reduces to
$$\d_n \ge \d_{n-1}>0=\d_{n-2}=\ldots=\d_1, \quad \mbox{ i.e., } \quad K = n-2.$$
Let $\e_i$, $i=1,\ldots,n$, be the standard basis of $\R^n$.
Then, we can represent $\s$ and $\z_1$ as linear combinations of $\e_{n-1}$ and $\e_{n}$, namely $\s=u_1\e_{n-1}+u_2\e_{n}$ and $\z_1=w_1\e_{n-1}+w_2\e_{n}$, respectively.
Since $\norm{\s} = 1, \norm{\z}^2 = \gamma$ and $\norm{\z_2} \le c\sqrt{\gamma}$, we know that
\begin{equation}
\label{eq:z1norm}
u_1^2 + u_2^2 = 1, \quad \mbox{and} \quad \norm{\z_1}^2 = w_1^2 + w_2^2 = \gamma - \norm{\z_2}^2 \ge (1 - c^2)\gamma.
\end{equation}
We discuss two cases here, i.e., cases with $(u_2\d_nw_2)(u_1\d_{n-1}w_1)> 0$ and $(u_2\d_nw_2)(u_1\d_{n-1}w_1) \le 0$.
Suppose that $(u_2\d_nw_2)(u_1\d_{n-1}w_1)> 0$. By reducing $\epsilon_0$ if necessary, we can assume without
loss of generality that
\begin{equation}
\label{eq:gammaeps1}
\sqrt{\gamma} \le \epsilon_1 = \min(\epsilon_0, 1/2) < \min (|u_1|, |u_2|).
\end{equation}
Since $\s^T\z=u_1w_1+u_2w_2=-\gamma/2<0$, $\d_{n-1}>0$ and $\d_n>0$, we have $(u_2\d_nw_2)<0$ and $(u_1\d_{n-1}w_1)<0$.
From \eqref{eq:z1norm} and $c\le 1/16$, we know that
\[
|w_1| + |w_2| \ge \sqrt{w_1^2 + w_2^2} \ge \frac{1}{2} \sqrt{\gamma}.
\]
This further implies that
\begin{eqnarray*}
\s^T\z = u_2w_2+u_1w_1 &=& -|u_2||w_2| - |u_1||w_1|\\
&\le& -\min(|u_1|,|u_2|)(|w_2|+|w_1|)\\
&\le&-\frac{\min(|u_1|,|u_2|)\sqrt\gamma}{2} \\
&\overset{\eqref{eq:gammaeps1}}<& -\frac{\gamma}{2},
\end{eqnarray*}
which contradicts the fact that $\s^T\z=-\gamma/2$.
Hence, we only need to consider the case with
$(u_2\d_nw_2)(u_1\d_{n-1}w_1)\le 0$. Then, it holds that
\begin{eqnarray}\label{eq:s1s2neq0}
\begin{array}{lll}
\|\D\z\|^2-\alpha^2&=&\d_n^2w_2^2+\d_{n-1}^2w_1^2-(u_2\d_nw_2+u_1\d_{n-1}w_1)^2\\
&\ge&(1-u_2^2)\d_n^2w_2^2+(1-u_1^2)\d_{n-1}^2w_1^2\\
&=&u_1^2 \d_{n}^2w_2^2+ u_2^2 \d_{n-1}^2w_1^2.
\end{array}
\end{eqnarray}
Suppose first that {$\min(u_1^2,u_2^2)= 0$}. Without loss of generality, we can assume $u_1 = 0$, and thus $|u_2| = 1$. Since $\inprod{\s}{\z_1} = -\gamma/2 = u_2w_2$, we know that $w_2^2 = \gamma^2/4$, which, together with \eqref{eq:z1norm} and the fact that $\gamma \le 1/4$ and $c \le 1/16$, implies that
\begin{equation}
\label{eq:w12}
w_1^2 \ge (1 - c^2)\gamma - \gamma^2/4 \ge (1 - 1/16^2 - 1/16)\gamma \ge \frac{7}{8}\gamma.
\end{equation}
Meanwhile, it holds that
\[
\beta= \inprod{\z}{\D\z} = \inprod{\z_1}{\D\z_1} \le \d_n \norm{\z_1}^2 \le \d_n \gamma,
\]
which, together with $\eqref{eq:s1s2neq0}$ and $\eqref{eq:w12}$, implies that
\begin{equation}
\label{eq:u1u20alpha}
\|\D\z\|^2-\alpha^2 \ge \frac{7}{8}\d_{n-1}^2 \gamma
\ge \frac{7\d_{n-1}^2}{8\d_n} \d_n \gamma \ge \frac{7\d_{n-1}^2}{8\d_n} \beta.
\end{equation}
Suppose now that {$\min(u_1^2,u_2^2)\neq 0$}. Then, from \eqref{eq:s1s2neq0}, we know that
\begin{equation}
\label{eq:u1u2n0alpha}
\begin{array}{lll}
\|\D\z\|^2-\alpha^2 &\ge{}& \min(u_1^2, u_2^2) (\d_n^2 w_2^2 + \d_{n-1}^2 w_1^2 ) \\
&\ge{} & \min(u_1^2, u_2^2) \d_{n-1} (\d_n w_2^2 + \d_{n-1} w_1^2 ) \\
&=& \min(u_1^2, u_2^2) \d_{n-1} \beta.
\end{array}
\end{equation}
Combining \eqref{eq:u1u20alpha} and \eqref{eq:u1u2n0alpha}, we conclude that if $\dim(\ra(\D))=2$, it holds that
\begin{equation*}\label{eq:dim2}
\|\D\z\|^2-\alpha^2\ge c_2\beta,
\end{equation*}
where $c_2=\frac{7\d_{n-1}^2}{8\d_n}$ if $\min(u_1^2, u_2^2)=0$ and $c_2=\min\{\frac{7\d_{n-1}^2}{8\d_n},\min(u_1^2, u_2^2) \d_{n-1}\}$ otherwise; in both cases $c_2>0$.
Since $\alpha\to0$ and $\beta\to0$ as $\gamma\to 0$ (or equivalently, $\x \to \x^*$), by reducing $\epsilon_0$ if necessary, we know in {{\bf Case II}} if $\dim(\ra(\D))=2$ that
\begin{equation}
\label{eq:hc2i2adim2}
\frac{1}{4}{\rm dist}^2(-\nabla f(\x), N_B(\x)) = H(\x)=\|\D\z\|^2-\alpha^2-2\alpha\beta-\beta^2\ge \beta(c_2 - 2\alpha - \beta) \ge \frac{c_2}{2}\beta,
\end{equation}
for all $\x \in {\bf bd}(B)$ with $\norm{\x - \x^*}\le \epsilon_1$.
\bibliographystyle{abbre}
\section{Introduction}
Semantic segmentation is an important visual scene understanding task with wide application, especially in
autonomous and assisted vehicle systems.
Recent deep network approaches (e.g., ~\cite{long2015fully, zhao2017pyramid, chen2018deeplab}) have
achieved impressive results, but
require large training datasets with precise pixel-level ground-truth annotation
and do not generalize well
over large domain shifts in viewpoint, lighting, etc.~\cite{yao2015semi}
These issues can potentially be addressed by unsupervised domain adaptation methods
that attempt to identify and correct for a shift in the appearance of the visual input from source to target domains.
A successful domain adaptation method will not only improve generalization,
but allow larger and more easily obtained synthetic ground truth to be used for training,
even if it does not perfectly represent the appearance of real scenes.
Such a well-generalized model can be trained with a synthetic image dataset for which ground-truth labels are available
and a real-world image dataset whose ground-truth labels remain unknown,
avoiding the labor-intensive and time-consuming annotation process.
The synthetic image dataset with ground-truth labels is referred to as the source domain,
while the real-world image dataset without ground-truth labels is referred to as the target domain.
\iffalse
Semantic segmentation is a crucial visual scene understanding task,
which has attracted intense interest and experienced wide applications, especially in self-driving.
To yield satisfactory performance, Deep Neural Networks (DNN) has been adopted by training the model on
large-scale, high-quality, dense pixel-wise labeled street scene
datasets~\cite{long2015fully, zhao2017pyramid, chen2018deeplab}.
Despite remarkable achievements, semantic segmentation is restricted with challenges
of diverse street scenes where there exist enormous scene variations concerning weather,
scene scenarios and time of day when it comes to unseen driving images.
This variation is named "domain shift"~\cite{yao2015semi} which refers to
the different distribution between annotated trained images and unseen testing images,
weaken the ability of the pre-trained model's generalization for testing.
In order to address this domain shift challenge for road semantic segmentation
one way is to build new large-scale driving datasets and annotate pixel-wise ground truth for each image.
To avoid this labor costing and high time-consuming job,
unsupervised domain adaptation methods have been proposed to narrow down the difference between
images from various road situations.
This promising idea allows to train a well-generalized model with both source domain (labeled) images and
target domain (unlabeled) images, but only source images' ground truth annotations.
\fi
\iffalse
\begin{figure}[t]
\centering
\includegraphics[width=1\textwidth]{f1.pdf}
\caption{Illustration of our proposed CCDA method,
consisting of a basic adaptation, a class-conditional multi-scale discriminator
and class-conditional loss.
With our proposed CCDA method, we manage to measure the performance
in a class-conditional way.}
\label{fig: f1}
\end{figure}
\fi
A common approach to solving the ``domain shift'' problem for deep network systems
is to modify the weights of the network to render representations produced by the network for target domain inputs more similar to representations produced for source domain inputs.
By further minimizing the distance between distributions of certain representations from both domains,
a well-generalized model can be obtained.
Some papers have focused on representations in the prediction space~\cite{tsai2018learning, vu2019advent}
while others have focused on representations in feature (latent) space~\cite{hoffman2016fcns, chen2017no, luo2019significance}.
Representational dissimilarity can be assessed using correlation distances~\cite{sun2016return} or maximum mean discrepancy~\cite{geng2011daml}.
However, more recent work has tended to focus on generative adversarial methods~\cite{goodfellow2014generative}
for unsupervised domain adaptation.
This adversarial principle has become prominent since it
achieves promising results for pixel-level prediction tasks~\cite{hoffman2016fcns, hoffman2018cycada, tsai2018learning, vu2019advent, saito2018maximum, luo2019taking}.
One limitation of prior work on unsupervised domain adaptation for semantic segmentation is that
domain adaptation tends to be more effective for more frequent classes~\cite{hoffman2016fcns, tsai2018learning}.
An underlying tendency can be observed that
representations of higher-frequency classes are easily extracted and adapted,
while adaptation for lower-frequency classes is prone to fail.
For driving datasets such as
Cityscapes~\cite{cordts2016cityscapes},
adaptation works fairly well for dominant classes such as road, car, buildings, vegetation, and sky,
but less well for infrequent classes such as sign or bicycle.
To address this issue, we propose a novel Class-Conditional Domain Adaptation method (CCDA).
It consists of a class-conditional multi-scale discriminator,
and a class-conditional loss function for both segmentation and adaptation.
The basic idea of our class-conditional multi-scale discriminator is to measure the alignment of
feature-level representations at both fine and coarse spatial scales.
The fine-scale branch evaluates the adaptation at the pixel level,
while for each coarse-scale patch in the image,
the loss is weighted equally over all classes occurring (or estimated to occur) within the patch,
regardless of the number of pixels associated with each class.
Our class-conditional multi-scale discriminator not only encourages the network to realign representations of
pixels belonging to the same class in a consistent, class-conditional way,
but also pays equal attention to each class occurring in a patch.
Meanwhile, the class-conditional loss function is designed to help the network
evaluate the performance of both segmentation and adaptation fairly across classes.
In summary, our proposed CCDA method makes three main contributions:
\begin{itemize}
\item {We propose a novel class-conditional multi-scale discriminator that allows class-conditional domain shift to be learned. }
\item {By equalizing the class-conditional loss for both segmentation and adaptation,
we further improve the performance for less frequent classes.}
\item {We demonstrate that our method achieves comparable performance to state-of-the-art algorithms
on two semantic segmentation domain adaptation scenarios.}
\end{itemize}
\section{Related Work}
While there has been substantial progress on domain adaptation for image classification~\cite{tzeng2017adversarial, ganin2015unsupervised, long2015learning, long2016unsupervised, hu2018duplex, zhang2019domain, xie2018learning, pan2019transferrable, ganin2016domain},
pixel-level tasks are more challenging due to the more direct dependence on local appearance. Nevertheless, increasing activity in autonomous vehicle applications has driven interest in domain adaptation for pixel-level segmentation of road scenes~\cite{hoffman2016fcns, zhang2017curriculum, chen2017no, vu2019advent, zou2018unsupervised, hong2018conditional, tsai2018learning, murez2018image, sankaranarayanan2018learning, zhang2018fully, zhu2018penalizing, chen2018road, luo2019taking}.
The most popular current approach relies on adversarial learning, where a discriminator is employed to align source and target representations either at the prediction-level~\cite{tsai2018learning, vu2019advent}
or the feature-level~\cite{hoffman2016fcns, chen2017no, luo2019significance}.
In~\cite{tsai2018learning}, Tsai \emph{et al.} first provide prediction-level representation alignment with a GAN network
for domain adaptation on semantic segmentation.
Vu \emph{et al.}~\cite{vu2019advent} then employ an entropy minimization technique during adversarial learning to improve domain adaptation at the prediction level, and
Luo \emph{et al.}~\cite{luo2019significance} use an information bottleneck approach to more fully remove task-independent information from feature representations.
Co-training adaptation using multi-view learning
has also been employed~\cite{saito2018maximum, saito2017adversarial, luo2019taking}.
\begin{figure*}[ht]
\centering
\includegraphics[width=1\textwidth]{121.pdf}
\caption{Overview of our proposed Class-Conditional Domain Adaptation.}
\label{fig:structure}
\end{figure*}
Approaches like pixel-level adaptation and self-training provide different directions for the process of domain adaptation,
and can be combined with the above representation adaptation methods.
The pixel-level adaptation approach is to view domain adaptation in part as a style transfer problem.
In this approach, images from one domain are transformed to have the `style' or appearance of images from another domain, while preserving the original `content' of the image from the first domain~\cite{zhang2018fully, sankaranarayanan2018learning, li2019bidirectional}.
The self-training approach is to alternately select unlabelled target samples with higher prediction probability
and utilize them with their predictions as pseudo ground-truth labels during training, while updating the learnt model
\cite{zou2018unsupervised, chen2019domain}.
Techniques like~\cite{chen2017no, luo2019taking, du2019ssf, tsai2019domain}
tend to boost domain adaptation performance for some classes or regions of the image more than others,
suggesting that a class- or region-conditioned domain adaptation approach may be required to achieve good adaptation over all classes.
In~\cite{chen2017no, du2019ssf}, an adversarial system is employed to train distinct domain classifiers for each segmentation class.
Luo \emph{et al.}~\cite{luo2019taking} instead use the disagreement between two classifiers to indicate the probability of incorrect representational alignment for each region of the image, increasing the weight of the adversarial loss for regions that appear to be poorly aligned.
Tsai \emph{et al.}~\cite{tsai2019domain} utilize the multiple modes of patch-level predictions to obtain a more accurate classification of the category distribution, and apply the adaptation based on the representation of this patch classification.
Meanwhile, for the self-training method, Zou \emph{et al.}~\cite{zou2018unsupervised} employ class-normalized confidence scores for pseudo ground-truth label selection to prevent the imbalanced selection of target domain samples for each class, which improves the performance on less frequent classes.
The common drawback of the previous adversarial learning adaptation methods is that they neglect the imbalanced frequencies of different classes even when they consider class information.
They fail to pay equal attention to each class because they do not take class-based performance into account.
Here we propose a class-conditional multi-scale discriminator
and a class-conditional loss function for both segmentation and adaptation.
By using the designed class label for each patch,
we allow the discriminator to consider class-conditional information for adaptation
equally over all classes.
Equalizing the loss over classes also improves the performance for less frequent classes.
Our method is more efficient than~\cite{chen2017no, du2019ssf} since we avoid training a separate domain classifier for each
class, and the multi-scale discriminator captures the domain shift and evaluates the adaptation at the pixel level
as well as the patch level.
\section{Methods}
In this section, we present our proposed CCDA approach using class-conditional multi-scale discriminator
and our class-conditional loss function for segmentation and adaptation.
We begin by describing a basic structure for domain adaptation.
Then, we explain in detail the innovations in our class-conditional multi-scale discriminator,
and describe the design of the class-conditional loss functions for the adaptation and segmentation components.
An overview of our proposed structure is shown in Figure \ref {fig:structure}.
\subsection{Basic Domain Adaptation Architecture} \label{section: Basic Domain Adaptation Structure}
We apply an adversarial learning approach for our unsupervised domain adaptation on segmentation,
since it is the most widely explored approach in this area.
A basic structure consists of three modules:
a feature encoder $\mathbf E$, a segmentation decoder $\mathbf S$,
and a discriminator $\mathbf D$.
The image data consist of source domain data and target domain data.
Each source image $ I_s \in\mathcal{I}_s$ is paired with ground truth pixel-level segmentation labels
$Y_s\in \mathcal{Y}_s$. Target images $I_t \in\mathcal{I}_t$ are assumed to have no ground truth data
available for training.
Our goal is to train the feature encoder $\mathbf E$ and segmentation decoder $\mathbf S$ to output good predictions
$ P_t $ on target domain images.
This is achieved through two processes, one that trains $\mathbf E$ and $\mathbf S$ to
output good segmentation prediction $ P_s $ for source image $I_s$ with associated label $ Y_s$,
and a second that uses the discriminator $\mathbf D$ to align the feature-level representations $ F_s $ and $ F_t $
output by the feature encoder $\mathbf E$ for the two domains.
The first (segmentation) process is trained by minimizing the segmentation cross-entropy loss:
\begin{equation}
\label{segloss1}
\mathcal{L}_{seg}(\mathbf E, \mathbf S) = -\sum_{h=1,w=1}^{H,W} \sum_{c=1}^C Y_s^{(h, w, c)}log(P_s^{(h, w, c)})
\end{equation}
where $ H, W $ are the image height and width, and $ C $ is the number of semantic classes.
$Y_s^{(h, w, c)}$ and $P_s^{(h, w, c)}$ are the ground truth and predicted states for Class $c$ at pixel $(h,w)$.
$P_s = \mathbf S(F_s) = \mathbf S(\mathbf E(I_s))$ is the output of segmentation decoder $\mathbf S$.
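For illustration, a minimal PyTorch-style sketch of this segmentation loss is given below; the function and variable names are placeholders of our own, and note that Equation \ref{segloss1} sums over pixels, whereas implementations often average instead.
\begin{verbatim}
import torch.nn.functional as F

def segmentation_loss(pred_logits, label):
    # pred_logits: (N, C, H, W) raw scores from S(E(I_s))
    # label:       (N, H, W) integer ground-truth map Y_s
    # reduction='sum' matches the pixel sum above; 'mean' is common in practice
    return F.cross_entropy(pred_logits, label, reduction='sum')
\end{verbatim}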
The second (alignment) process is trained adversarially to generate domain-invariant features.
Our discriminator module $\mathbf D$ tries to distinguish feature representations from source and target domains,
minimizing
\begin{equation}
\label{Dloss1}
\mathcal{L}_{D1}(\mathbf{D}) = \lambda_s \mathcal{L}_{bce}(\mathbf D(F_s), 0) +
\lambda_t \mathcal{L}_{bce}(\mathbf D(F_t), 1)
\end{equation}
where $\mathcal{L}_{bce}$ is the binary cross-entropy domain classification loss,
since the output of this basic discriminator $\mathbf D$ has a single channel.
Source and target domain samples are assigned labels of 0 and 1, respectively.
Concurrently, the feature encoder $\mathbf E$ tries to confuse $\mathbf D$, minimizing
\begin{equation}
\label{advloss1}
\mathcal{L}_{adv1}(\mathbf E) = \lambda_s \mathcal{L}_{bce}(\mathbf D(F_s), 1) +
\lambda_t \mathcal{L}_{bce}(\mathbf D(F_t), 0)
\end{equation}
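A minimal sketch of how these two adversarial losses may be implemented is shown below; treating the single-channel discriminator output as a logit, detaching the encoder features during the discriminator update, and the names \texttt{lam\_s}, \texttt{lam\_t} standing for $\lambda_s$ and $\lambda_t$ are standard implementation details assumed here rather than specified above.
\begin{verbatim}
import torch
import torch.nn.functional as F

def discriminator_loss(D, F_s, F_t, lam_s=1.0, lam_t=1.0):
    # L_D1: D tries to classify source features as 0 and target features as 1
    out_s = D(F_s.detach())
    out_t = D(F_t.detach())
    return (lam_s * F.binary_cross_entropy_with_logits(out_s, torch.zeros_like(out_s)) +
            lam_t * F.binary_cross_entropy_with_logits(out_t, torch.ones_like(out_t)))

def adversarial_loss(D, F_s, F_t, lam_s=1.0, lam_t=1.0):
    # L_adv1: the encoder E tries to fool D by swapping the domain labels
    out_s = D(F_s)
    out_t = D(F_t)
    return (lam_s * F.binary_cross_entropy_with_logits(out_s, torch.ones_like(out_s)) +
            lam_t * F.binary_cross_entropy_with_logits(out_t, torch.zeros_like(out_t)))
\end{verbatim}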
This adversarial learning process produces a rough alignment of features among all classes,
but tends to work less well for lower frequency classes that do not contribute
substantially to the cross-entropy loss. Also, since the feature map computed
by our encoder is spatiotopic but reduced in resolution relative
to the input, the alignment achieved by our adversarial process is at
the specific intermediate scale of our feature map, which may not capture domain
shift at smaller or larger scales.
These observations motivate our class-based multi-scale discriminator
and class-based loss function for segmentation and adaptation.
\subsection{Class-Conditional Multi-scale Discriminator} \label{section: Weakly Class-based Multi-level Discriminator}
Our proposed class-conditional multi-scale discriminator is composed of fine-scale and coarse-scale branches (Fig. \ref{fig:structure}).
The fine-scale branch measures alignment at the pixel level using the basic architecture with the loss functions in Equations \ref{Dloss1} and \ref{advloss1},
and thus can capture spatially detailed domain shift phenomena.
The coarse-scale branch measures class-conditional alignment
at a scale that is coarser than the feature scale, with equal weight given to each class present.
We first describe how to perform this class conditioning
by explaining the coarse-scale class label, and then elaborate on the structure of the class-conditional coarse-scale discriminator
branch as well as the class-conditional fine-scale branch.
\subsubsection{Coarse-Scale Class Labels}
We define a coarse-scale class label $ W \in \{0,1\}^C$ that indicates the presence or absence of each class within a rectangular patch of the image.
Note that since a patch may contain multiple classes, this label is not a one-hot label.
For source images, $W$ is computed by analyzing the pixel-level ground-truth labels $Y_s$ within the image back-projection of a patch.
If any pixel within the back-projected region of the image has class $c$, we set $W^c=1$, otherwise we set $W^c=0$.
For target domain images, we do not have ground-truth labels.
Instead, we assign coarse-scale class labels based on the projected pixel-level predictions $P_t^c$ of our segmentation module $\mathbf S$ for the patch.
In particular, given a confidence threshold $th_w$, if $P_t^c> th_w$ for any pixel within a patch,
we set $W^c=1$, otherwise we set $W^c=0$.
Note that binarizing the patch-level class label $W$ has the effect of equalizing class frequencies
at the patch level: $W^c=1$ regardless of how many pixels of class $c$ the patch contains.
This has the benefit of boosting adaptation performance for less frequent classes
by paying equal attention to every class a patch contains.
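As an illustration, the following sketch derives the patch-level label $W$ from source ground truth and from thresholded target predictions; the tensor layouts, helper names and the threshold value are assumptions for this example.
\begin{verbatim}
import torch

def coarse_label_source(y_onehot, patch):
    """y_onehot: (C, H, W) one-hot ground truth; patch = (h0, h1, w0, w1).
    W[c] = 1 if any pixel of class c falls inside the patch."""
    h0, h1, w0, w1 = patch
    region = y_onehot[:, h0:h1, w0:w1].reshape(y_onehot.shape[0], -1)
    return (region.sum(dim=1) > 0).float()            # (C,)

def coarse_label_target(p_t, patch, th_w=0.9):
    """p_t: (C, H, W) per-class prediction scores for a target image.
    W[c] = 1 if any pixel's score for class c exceeds the threshold th_w."""
    h0, h1, w0, w1 = patch
    region = p_t[:, h0:h1, w0:w1].reshape(p_t.shape[0], -1)
    return (region.max(dim=1).values > th_w).float()  # (C,)
\end{verbatim}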
\subsubsection{Class-conditional Coarse-scale Branch} \label{section: Class-conditional Coarse-scale Branch}
For the basic domain adaptation in Section \ref{section: Basic Domain Adaptation Structure},
the discriminator output is a scalar value indicating the domain of the input vector (in our case, 0 for source domain,
1 for target domain).
In contrast, the output of our class-conditional coarse-scale discriminator branch
consists of two vectors $O^s$ and $O^t$, each of length $C$.
$O^s$ carries estimates of patch-level class labels for the source domain:
a large value of $O^s_c$ indicates high confidence that the patch contains at least one pixel drawn from
the source domain and belonging to class $c$.
Similarly, $O^t$ carries estimates of patch-level class labels for the target domain.
The advantage of this dual vector representation is that it allows us to multiplex
both domain and class information, informing both an adversarial adaptation loss based on class
and a non-adversarial classification loss (Figure \ref{fig:adv_class}).
In particular, to inform the non-adversarial classification loss, we form the
vector $O^c=\sigma\left(O^s+O^t\right)$,
where $\sigma(\cdot)$ is a sigmoid function applied per class,
and compute a classification loss using the binary cross-entropy loss $\mathcal{L}_{bce}(O^c, W^c)$.
Note that including this classification loss in the discriminator will encourage
a feature-level domain alignment that preserves segmentation class information.
Meanwhile, the designed class label $W$ ensures that the preserved class information treats all classes equally,
which improves performance on less frequent classes.
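A minimal sketch of this class-presence loss, assuming the coarse-scale branch returns per-patch score vectors $O^s$ and $O^t$ of length $C$ (names and shapes are illustrative, not our exact code):
\begin{verbatim}
import torch
import torch.nn.functional as F

def coarse_classification_loss(o_s, o_t, w):
    """o_s, o_t: (C,) raw scores for source/target class presence in a patch.
    w       : (C,) binary patch-level class label.
    Forms O^c = sigmoid(O^s + O^t) and applies binary cross-entropy."""
    o_c = torch.sigmoid(o_s + o_t)
    return F.binary_cross_entropy(o_c, w)
\end{verbatim}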
\begin{figure}[t]
\centering
\includegraphics[width=1\textwidth]{13.pdf}
\caption{Details of the loss calculation in the coarse-scale class-conditional branch of the discriminator.}
\label{fig:adv_class}
\end{figure}
To inform the adversarial adaptation loss, we form the $C\times2$ matrix $O^{st}=f\left(\left[O^s, O^t\right]\right)$,
where $f(\cdot)$ is the softmax operation over rows, normalizing the sum of $O^s_c$ and $O^t_c$ to 1 for every class
$c$.
$O^{st}$ indicates, for each class, the probability that the pixels of that class within a patch come from the source
or the target domain.
This normalization will tend to spread the loss more uniformly across classes.
However, a patch may contain only a subset of the $C$ classes,
so not all of the $C$ channels are informative for the domain classification loss.
To form the final discriminator
domain adaptation loss, we therefore sum the class-conditional loss over the classes present in the patch,
weighting the sum
by the ground-truth patch-level class labels $W_s$ for the source domain,
and by the predicted patch-level class estimates $O_t$ for the target domain.
Combining with the non-adversarial classification loss, the total patch-level discriminator loss is:
{\setlength\abovedisplayskip{6pt}
\begin{equation}
\begin{aligned}
\label{Dloss_class}
\mathcal{L}_{D \_coarse}(\mathbf D) = \ &\lambda_c \mathcal{L}_{bce}(O^c, W^c)
\\ + &\lambda_s \sum_{c=1}^ C W_s^c \, \mathcal{L}_{ce}(O_s^{st} [c], 0)
\\ + & \lambda_t \sum_{c=1}^ C O_t^c \, \mathcal{L}_{ce}(O_t^{st} [c], 1)
\end{aligned}
\end{equation}
\setlength\belowdisplayskip{-6pt}}
where $O_s^{st}$ is the output $O^{st}$ for source domain images and $O_t^{st}$ is the output $O^{st}$ for target domain images.
The generative component of the adversarial loss is defined symmetrically:
\begin{small}
\begin{equation}
\begin{aligned}
\label{advloss_class}
\mathcal{L}_{adv \_coarse}(\mathbf E, \mathbf S) = \ &\lambda_c \mathcal{L}_{bce}(O^c, W^c)
\\ + &\lambda_s \sum_{c=1}^ C W_s^c \, \mathcal{L}_{ce}(O_s^{st} [c], 1)
\\ + & \lambda_t \sum_{c=1}^ C O_t^c \, \mathcal{L}_{ce}(O_t^{st} [c], 0)
\end{aligned}
\end{equation}
\end{small}
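The class-weighted adversarial terms of Equations \ref{Dloss_class} and \ref{advloss_class} can be sketched per patch as follows; the per-class softmax over $[O^s, O^t]$ follows the text, while the helper name and tensor shapes are assumptions. For a source patch the class weight is $W_s$ and the discriminator target is 0; for a target patch the weight comes from the $O^t$-based estimate and the discriminator target is 1, with the encoder/decoder update using the opposite target.
\begin{verbatim}
import torch
import torch.nn.functional as F

def coarse_adv_term(o_s, o_t, class_weight, domain):
    """Class-conditional domain term for one patch.
    o_s, o_t     : (C,) per-class source/target scores from the coarse branch.
    class_weight : (C,) W_s for a source patch, the O^t-based estimate for a
                   target patch; absent classes contribute zero.
    domain       : 0 for source, 1 for target when training the discriminator;
                   the encoder/decoder update uses the opposite value."""
    o_st = F.softmax(torch.stack([o_s, o_t], dim=1), dim=1)  # (C, 2), rows sum to 1
    ce = -torch.log(o_st[:, domain] + 1e-8)                  # per-class cross-entropy
    return (class_weight * ce).sum()
\end{verbatim}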
\subsubsection{Fine-Scale Class-Conditional Discriminator} \label{section: Class-based Loss for Adaptation}
The coarse-scale class-conditional adaptation can capture larger-scale domain shift effects
but may not capture shifts in finer detail. For that purpose, we employ a fine-scale class-conditional
discriminator branch that evaluates the adaptation at the pixel level.
This branch retains the scale of the feature representations and upsamples its output
to produce fine-scale domain classifications $U_s$ and $U_t$ that match the size of the original input.
For this fine-scale discriminator branch, we also evaluate the adaptation performance for each class
using a class-conditional loss.
For source domain images, we employ the ground-truth class labels $Y_s$ to calculate the loss for each class and average over classes
to form a class-conditional binary cross entropy loss:
\begin{small}
\begin{equation}
\label{class_bceloss_source}
\mathcal{L}_{cbce\_ s} (U_s, Y_s, l_d)=
\frac{1}{C}\sum_{c=1}^C\left(
\frac {\sum_{h, w} Y_s^{(h, w, c)} \mathcal{L}_{bce} (U_s^{(h, w)}, l_d)}
{\sum_{h, w} Y_s^{(h, w, c)} + \epsilon}
\right)
\end{equation}
\end{small}
where the ground truth domain label $l_d$ is set to $l_d = 0$ when training the discriminator $\mathbf D$,
and to $l_d = 1$ when training the encoder $\mathbf E$ and segmentation module $\mathbf S$, to confuse the discriminator.
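For reference, a sketch of Equation \ref{class_bceloss_source} under assumed tensor layouts (a one-hot label volume and a single-channel logit map) might look as follows; it is not a verbatim excerpt of our code.
\begin{verbatim}
import torch
import torch.nn.functional as F

def class_conditional_bce_source(u_s, y_s, l_d, eps=1e-6):
    """Sketch of Eq. (class_bceloss_source).
    u_s : (H, W) upsampled discriminator logits for a source image.
    y_s : (C, H, W) one-hot ground-truth labels.
    l_d : 0 when training D, 1 when training E and S."""
    target = torch.full_like(u_s, float(l_d))
    bce = F.binary_cross_entropy_with_logits(u_s, target, reduction="none")  # (H, W)
    per_class = (y_s * bce.unsqueeze(0)).sum(dim=(1, 2)) / (y_s.sum(dim=(1, 2)) + eps)
    return per_class.mean()
\end{verbatim}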
For target domain images, we do not have ground-truth class labels, and so we employ
the pixel-level class predictions $P_t$ instead
to form a pseudo-label $\hat{Y}_t$ by selecting the class with the highest prediction value:
\begin{equation}
\label{pseudo_label_for_target1}
\hat{Y}_t^{h, w, c} = \left\{
\begin{array}{lcl}
&{1} &\text{if} \ \ {c = \mathop{\arg\max} P_t^{h,w}} \\
&{0} &\text{otherwise}
\end{array}
\right.
\end{equation}
Some of these pixel-level class predictions will have low confidence, since the segmentation network relies heavily on source domain images;
such low confidence can be taken as an indication that domain shift is interfering with classification.
It is therefore beneficial to focus more on the adaptation of these uncertain regions by giving them a larger weight.
We identify these ambiguous pixels by a label $N_t$:
\begin{equation}
\label{pseudo_label_for_target2}
N_t^{h, w} = \left\{
\begin{array}{lcl}
&{1} &\text{if} \ \ {\mathop{\max}_c P_t^{h,w,c} < th_n} \\
&{0} &\text{otherwise}
\end{array}
\right.
\end{equation}
where $th_n$ is a threshold constant for selecting the uncertain pixels. We then add an additional term to the fine-scale domain adaptation loss that will serve to upweight these regions during feature alignment.
The final class-conditional binary cross-entropy loss for the target domain becomes:
{\setlength\abovedisplayskip{1pt}
\begin{small}
\begin{equation}
\begin{aligned}
\label{class_bceloss_target}
\mathcal{L}_{cbce\_ t} (U_t, \hat Y_t, l_d)=
&
\frac{1}{C}\sum_{c=1}^C
\left(
\frac {\sum_{h, w} \hat Y_t^{(h, w, c)} \mathcal{L}_{bce} (U_t^{(h, w)}, l_d)}
{\sum_{h, w} \hat Y_t^{(h, w, c)} + \epsilon}
\right)\\
+
&
\lambda_n \frac {\sum_{h, w} N_t^{(h, w)} \mathcal{L}_{bce} (U_t^{(h, w)}, l_d)}
{\sum_{h, w} N_t^{(h, w)} + \epsilon}
\end{aligned}
\end{equation}
\end{small}
\setlength\belowdisplayskip{-1pt}}
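The target-side computation, combining the pseudo-label of Equation \ref{pseudo_label_for_target1}, the uncertainty mask of Equation \ref{pseudo_label_for_target2} and the loss of Equation \ref{class_bceloss_target}, can be sketched as follows; the threshold and weight values shown are placeholders rather than the settings used in our experiments.
\begin{verbatim}
import torch
import torch.nn.functional as F

def class_conditional_bce_target(u_t, p_t, l_d, th_n=0.9, lam_n=1.0, eps=1e-6):
    """Sketch of Eqs. (pseudo_label_for_target1)-(class_bceloss_target).
    u_t : (H, W) upsampled discriminator logits for a target image.
    p_t : (C, H, W) softmax predictions of the segmentation module."""
    C = p_t.shape[0]
    conf, pred = p_t.max(dim=0)                                      # (H, W)
    y_hat = F.one_hot(pred, num_classes=C).permute(2, 0, 1).float()  # pseudo-label
    n_t = (conf < th_n).float()                                      # uncertain pixels
    target = torch.full_like(u_t, float(l_d))
    bce = F.binary_cross_entropy_with_logits(u_t, target, reduction="none")
    per_class = (y_hat * bce.unsqueeze(0)).sum(dim=(1, 2)) / (y_hat.sum(dim=(1, 2)) + eps)
    uncertain = (n_t * bce).sum() / (n_t.sum() + eps)
    return per_class.mean() + lam_n * uncertain
\end{verbatim}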
The fine-scale class-conditional adaptation discriminator loss over all images is then:
\begin{equation}
\begin{aligned}
\label{Dloss2}
\mathcal{L}_{D2}(\mathbf{D}) = \ & \lambda_s \mathcal{L}_{cbce\_ s}(U_s, Y_s, 0) \\
+ & \lambda_t \mathcal{L}_{cbce\_ t}(U_t, \hat Y_t, 1)
\end{aligned}
\end{equation}
The generative component of the adversarial fine-scale loss,
used to train the feature encoder $\mathbf E$ and segmentation decoder $\mathbf S$,
is defined symmetrically:
\begin{equation}
\begin{aligned}
\label{advloss2}
\mathcal{L}_{adv2}(\mathbf{E}, \mathbf{S}) = \ & \lambda_s \mathcal{L}_{cbce\_ s}(U_s, Y_s, 1) \\
+ & \lambda_t \mathcal{L}_{cbce\_ t}(U_t, \hat Y_t, 0)
\end{aligned}
\end{equation}
For stability, we blend these class-conditional fine-scale losses with the conventional losses from the basic architecture
defined in Equations \ref{Dloss1} and \ref{advloss1}:
\begin{equation}
\label{fine_disloss}
\mathcal{L}_{D\_ fine}(\mathbf D) = \beta \mathcal{L}_{D1} + (1 - \beta) \mathcal{L}_{D2}
\end{equation}
\begin{equation}
\label{fine_advloss}
\mathcal{L}_{adv\_ fine}(\mathbf E, \mathbf S) = \beta \mathcal{L}_{adv1} + (1 - \beta) \mathcal{L}_{adv2}
\end{equation}
By utilizing this class-conditional loss, the fine-scale discriminator branch evaluates alignment across all classes equally,
further improving performance on less frequent classes.
\subsection{Class-Conditional Segmentation Loss} \label{section: Class-based Loss for Segmentation}
The conventional loss employed for pixel-level semantic segmentation is the pixel-level
cross-entropy loss.
The final segmentation loss is then the mean loss over all pixels, regardless of class.
Unfortunately, this has the drawback that classes that are less frequent at the
pixel level, either because regions of that class occur infrequently or because they tend to be small,
do not contribute substantially to the loss function,
while pixels belonging to high-frequency classes dominate the training process.
As a result, the performance of the trained system
on the less frequent classes can be poor,
since pixels belonging to these classes tend to be neglected during training.
For domain adaptation systems, this has the additional consequence
that the system may never learn how to align representations across domains for these infrequent classes.
To begin to address this problem, we introduce a modified class-conditional loss for segmentation, which will
serve to distribute the loss more evenly across classes. In particular, we employ a blend of the
dice loss~\cite{milletari2016v} and the cross-entropy loss to train our segmentation network.
The idea of the dice loss comes from the Dice coefficient,
and it has been widely used in medical image segmentation~\cite{nie2018asdnet, wong20183d}. It has the form:
\begin{equation}
\label{diceloss1}
\mathcal{L}_{dice}(\mathbf E, \mathbf S) = 1 -
\frac{1}{C}\sum_{c=1}^C\left(
\frac {2 \sum_{h, w} Y_s^{(h, w, c)} P_s^{(h, w, c)}}
{\sum_{h,w}(Y_s^{(h, w, c)} + P_s^{(h, w, c)}) + \epsilon}
\right)
\end{equation}
Note that the loss is formed as the complement of a normalized segmentation performance averaged
over classes. The normalization, similar in spirit to intersection-over-union, will roughly equalize
the contribution of each class to the loss function. $\epsilon$ is a small constant that prevents division
by 0 for classes that do not appear in ground truth or predictions within an image.
Since the up-weighting of rare classes may introduce instability in training, we elect to employ a blend of
the dice loss with the conventional cross-entropy loss (Equation (\ref{segloss1})) to form the segmentation
prediction loss $\mathcal{L}_{pred}$:
\begin{equation}
\label{predloss}
\mathcal{L}_{pred}(\mathbf E, \mathbf S) = \alpha \mathcal{L}_{seg} + (1- \alpha) \mathcal{L}_{dice}
\end{equation}
With this class-conditional segmentation loss, prediction performance is evaluated equally across
all classes on source domain images, which in turn improves segmentation prediction
on target domain images after adaptation.
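A compact sketch of the dice loss in Equation \ref{diceloss1} and the blended prediction loss in Equation \ref{predloss} follows; the value of $\alpha$ shown is a placeholder, not the setting used in our experiments.
\begin{verbatim}
import torch

def dice_loss(p_s, y_s, eps=1e-6):
    """Sketch of Eq. (diceloss1).
    p_s, y_s : (C, H, W) predicted probabilities and one-hot ground truth."""
    inter = 2.0 * (p_s * y_s).sum(dim=(1, 2))
    denom = (p_s + y_s).sum(dim=(1, 2)) + eps
    return 1.0 - (inter / denom).mean()   # average the Dice score over classes

def prediction_loss(ce_loss, dice, alpha=0.5):
    """Sketch of Eq. (predloss): blend of cross-entropy and dice losses.
    alpha=0.5 is a placeholder value."""
    return alpha * ce_loss + (1.0 - alpha) * dice
\end{verbatim}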
\subsection{Complete Training Loss}
To summarize, the complete training process combines the class-conditional segmentation loss (Equation
\ref{predloss}), fine- and coarse-scale class-conditional domain adaptation discriminator losses
(Equations \ref{fine_disloss} and \ref{Dloss_class}) and fine- and coarse-scale domain adaptation adversarial losses (Equations \ref{fine_advloss} and \ref{advloss_class}):
\begin{equation}
\label{total_D}
\mathop{\min}_\mathbf{D} \mathcal{L}_{D\_ fine} + \mathcal{L}_{D\_ coarse}
\end{equation}
\begin{equation}
\label{total_G}
\mathop{\min}_{\mathbf{E}, \mathbf{S}} \ \mathcal{L}_{pred}
+ \mathcal{L}_{adv \_fine} + \mathcal{L}_{adv \_ coarse}
\end{equation}
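Operationally, one alternating update under Equations \ref{total_D} and \ref{total_G} could be organized as in the sketch below, where the two callables are assumed to rebuild the respective loss sums for the current source/target pair (with features detached for the discriminator update); this is a schematic, not our exact training loop.
\begin{verbatim}
def training_step(opt_ES, opt_D, compute_generator_losses, compute_discriminator_losses):
    """One alternating update following Eqs. (total_G) and (total_D).
    compute_generator_losses()     -> L_pred + L_adv_fine + L_adv_coarse
    compute_discriminator_losses() -> L_D_fine + L_D_coarse (on detached features)
    Both callables are assumed to rebuild their losses for the current batch."""
    # Update the encoder E and segmentation decoder S.
    opt_ES.zero_grad()
    compute_generator_losses().backward()
    opt_ES.step()

    # Update the discriminator D; features are detached so only D gets gradients.
    opt_D.zero_grad()
    compute_discriminator_losses().backward()
    opt_D.step()
\end{verbatim}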
\begin{table*}[ht]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{c|c|cccccccccccccccccccc}
\toprule
\multicolumn{22}{c}{\textbf {GTA5 $\rightarrow$ Cityscapes}} \\ \hline
& & 1 & 5 & 2 & 11 & 10 & 7 & 17 & 12 & 3 & 9 & 6 & 8 & 18 & 4 & 15 & 14 & 16 & 19 & 13 & \\
& \rotatebox{90} { Meth.} & \rotatebox{90}{ road} & \rotatebox{90}{ side.} & \rotatebox{90}{ buil.} & \rotatebox{90}{ wall} & \rotatebox{90}{ fence} & \rotatebox{90}{ pole} & \rotatebox{90}{ light} & \rotatebox{90}{ sign} & \rotatebox{90}{ vege.} & \rotatebox{90}{ terr.} & \rotatebox{90}{ sky} & \rotatebox{90}{ pers.} & \rotatebox{90}{ rider} & \rotatebox{90}{ car} & \rotatebox{90}{ truck} & \rotatebox{90}{ bus} & \rotatebox{90}{ train} & \rotatebox{90}{ mbike.} & \rotatebox{90}{ bike} & \rotatebox{90} { mIoU} \\
\hline
CDA~\cite{zhang2019curriculum} & \multirow{1}{*}{CT}
& 72.9 & 30.0 & 74.9 & 12.1 & 13.2 & 15.3 & 16.8 & 14.1 & 79.3 & 14.5 & \textbf{75.5} & 35.7 & 10.0 & 62.1 & 20.6 & 19.0 & 0.0 & \textbf{19.3} & 12.0 & 31.4 \\
CBST-SP~\cite{zou2018unsupervised} & \multirow{1}{*}{ST}
& \textbf{90.4} & 50.8 & 72.0 & 18.3 & 9.5 & \textbf{27.2} & \textbf{28.6} & 14.1 & \textbf{82.4} & 25.1 & 70.8 & 42.6 & \textbf{14.5} & 76.9 & 5.9 & 12.5 & \textbf{1.2} & 14.0 & \textbf{28.6} & 36.1 \\
Ours & & 90.0& \textbf{36.2}& \textbf{79.1}& \textbf{25.0}& \textbf{18.9}& 26.8& 27.6& \textbf{16.5}& 80.8& \textbf{31.1}& 73.4& \textbf{48.4}& 12.8& \textbf{81.2}& \textbf{25.6}& \textbf{24.8}& 0.0& 12.5& 5.4& \textbf{37.7}\\
\hline
AdaptSeg~\cite{tsai2018learning} & \multirow{3}{*} {AT-P}
& 87.3 & 29.8 & 78.6 & 21.1 & 18.2 & 22.5 & 21.5 & 11.0 & 79.7 & 29.6 & 71.3 & 46.8 & 6.5 & 80.1 & 23.0 & 26.9 & 0.0 & 10.6 & 0.3 & 35.0 \\
ADVENT~\cite{vu2019advent} &
& 86.9 & 28.7 & 78.7 & \textbf{28.5} & \textbf{25.2} & 17.1 & 20.3 & 10.9 & 80.0 & 26.4 & 70.2 & 47.1 & 8.4 & \textbf{81.5} & 26.0 & 17.2 & \textbf{18.9} & 11.7 & 1.6 & 36.1 \\
CLAN~\cite{luo2019taking} &
& 88.0 & 30.6 & \textbf{79.2} & 23.4 & 20.5 & 26.1 & 23.0 & 14.8 & \textbf{81.6} & \textbf{34.5} & 72.0 & 45.8 & 7.9 & 80.5 & \textbf{26.6} & \textbf{29.9} & 0.0 & 10.7 & 0.0 & 36.6\\
\hdashline
FCNs in the Wild~\cite{hoffman2016fcns} & \multirow{3}{*} {AT-F}
& 70.4 & 32.4 & 62.1 & 14.9 & 5.4 & 10.9 & 14.2 & 2.7 & 79.2 & 21.3 & 64.6 & 44.1 & 4.2 & 70.4 & 8.0 & 7.3 & 0.0 & 3.5 & 0.0 & 27.1 \\
SIBIN~\cite{luo2019significance} &
& 83.4 & 13.0 & 77.8 & 20.4 & 17.5 & 24.6 & 22.8 & 9.6 & 81.3 & 29.6 & \textbf{77.3} & 42.7 & 10.9 & 76.0 & 22.8 & 17.9 & 5.7 & \textbf{14.2} & 2.0 & 34.2 \\
Ours & &\textbf{90.0}& \textbf{36.2}& 79.1& 25.0& 18.9& \textbf{26.8}& \textbf{27.6}& \textbf{16.5}& 80.8& 31.1& 73.4& \textbf{48.4}& \textbf{12.8}& 81.2& 25.6& 24.8& 0.0& 12.5& \textbf{5.4}& \textbf{37.7}\\
\bottomrule
\end{tabular}%
}
\setlength{\belowcaptionskip}{10pt}\centering\caption{Adaptation from GTA5 to Cityscapes.
We present the per-class IoU and mean IoU.
The numbers above the class names indicate each class's frequency rank (in descending order) in Cityscapes
(please refer to~\cite{cordts2016cityscapes} for more details).
"CT", "ST" and "AT" denote curriculum-learning, self-training
and adversarial-learning methods, respectively.
"P" and "F" denote prediction-level and feature-level adaptation.
We highlight the best result in each column in \textbf{bold}.}
\label{Tab: gta52cityscape}
\end{table*}
\section{Experiments}
In this section, we evaluate our class-conditional domain adaptation method and present the experimental results.
First, we introduce the datasets and give implementation details of our network architecture.
Then, we compare against state-of-the-art methods and analyze the effectiveness of our CCDA method through an ablation study.
\begin{table*}[ht]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{c|c|cccccccccccccccccc}
\toprule
\multicolumn{20}{c}{\textbf {SYNTHIA $\rightarrow$ Cityscapes}} \\
\hline
& & 1 & 5 & 2 & 10 & 9 & 7 & 14 & 11 & 3 & 6 & 8 & 15 & 4 & 13 & 16 & 12 & \\
& \rotatebox{90} { Meth. } & \rotatebox{90} { road } & \rotatebox{90} { side. } & \rotatebox{90} { buil. } & \rotatebox{90} { wall* } & \rotatebox{90} { fence* } & \rotatebox{90} { pole* } & \rotatebox{90} { light } & \rotatebox{90} { sign } & \rotatebox{90} { vege. } & \rotatebox{90} { sky } & \rotatebox{90} { pers. } & \rotatebox{90} { rider } & \rotatebox{90} { car } & \rotatebox{90} { bus } & \rotatebox{90} { mbike. } & \rotatebox{90} { bike } & \rotatebox{90} { mIoU } & \rotatebox{90} { mIoU* } \\
\hline
CDA~\cite{zhang2019curriculum} &\multirow{1}{*}{CT}
& 57.4 & 23.1 & 74.7 & 0.5 & \textbf{0.6} & 14.0 & 5.3 & 4.3 & 77.8 & 73.7 & 45.0 & 11.0 & 44.8 & \textbf{21.2} & 1.9 & 20.3 & 29.7 & 35.4 \\
CBST-SP~\cite{zou2018unsupervised} & \multirow{1}{*}{ST}
& 69.6 & 28.7 & 69.5 & \textbf{12.1} & 0.1 & \textbf{25.4} & \textbf{11.9} & \textbf{13.6} & \textbf{82.0} & \textbf{81.9} & \textbf{49.1} & \textbf{14.5} & 66.0 & 6.6 & 3.7 & \textbf{32.4} & \textbf{35.4} & 36.1 \\
Ours & & \textbf{82.6} & \textbf{34.2} & \textbf{76.9} & 2.6 & 0.2 & 23.8 & 3.5 & 7.7 & 77.9 & 79.5 & 44.2 & 8.2 & \textbf{73.4} & 20.9 & \textbf{4.0} & 14.2 & 34.6 & \textbf{40.6} \\
\hline
AdaptSeg~\cite{tsai2018learning} &\multirow{3}{*} {AT-P}
& 78.9 & 29.2 & 75.5 & - & - & - & 0.1 & 4.8 & 72.6 & 76.7 & 43.4 & 8.8 & 71.1 & 16.0 & 3.6 & 8.4 & - & 37.6 \\
ADVENT~\cite{vu2019advent} &
& 67.9 & 29.4 & 71.9 & \textbf{6.3} & \textbf{0.3} & 19.9 & 0.6 & 2.6 & 74.9 & 74.9 & 35.4 & \textbf{9.6} & 67.8 & \textbf{21.4} & \textbf{4.1} & \textbf{15.5} & 31.4 & 36.6 \\
CLAN~\cite{luo2019taking} &
& 80.4 & 30.7 & 74.7 & - & - & - & 1.4 & 8.0 & 77.1 & 79.0 & 46.5 & 8.9 & \textbf{73.8} & 18.2 & 2.2 & 9.9 & - & 39.3 \\
\hdashline
FCNs in the Wild~\cite{hoffman2016fcns} &\multirow{4}{*} {AT-F}
& 11.5 & 19.6 & 30.8 & 4.4 & 0.0 & 20.3 & 0.1 & \textbf{11.7} & 42.3 & 68.7 & \textbf{51.2} & 3.8 & 54.0 & 3.2 & 0.2 & 0.6 & 20.2 & 22.9 \\
Cross-city~\cite{chen2017no} &
& 62.7 & 25.6 & 78.3 & - & - & - & 1.2 & 5.4 & \textbf{81.3} & \textbf{81.0} & 37.4 & 6.4 & 63.5 & 16.1 & 1.2 & 4.6 & - & 35.7 \\
SIBIN~\cite{luo2019significance} &
& 70.1 & 25.7 & \textbf{80.9} & - & - & - & \textbf{3.8} & 7.2 & 72.3 & 80.5 & 43.3 & 5.0 & 73.3 & 16.0 & 1.7 & 3.6 & - & 37.2 \\
Ours & & \textbf{82.6} & \textbf{34.2} & 76.9 & 2.6 & 0.2 & \textbf{23.8} & 3.5 & 7.7 & 77.9 & 79.5 & 44.2 & 8.2 & 73.4 & 20.9 & 4.0 & 14.2 & \textbf{34.6} & \textbf{40.6} \\
\bottomrule
\end{tabular}%
}
\setlength{\belowcaptionskip}{10pt}\centering\caption{Adaptation from SYNTHIA to Cityscapes.
The table setting is the same as Table \ref{Tab: gta52cityscape},
while mIoU and mIoU* are averaged over 16 and 13 categories, respectively.}
\label{Tab: synt2cityscape}
\end{table*}
\begin{table}[hbp]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{c|ccc|c}
\toprule
\multirow{2}{*} {Task} & \multicolumn{3}{c}{Method} & \multirow{2}{*} {mIoU}
\\
& $\mathcal{L}_{basic}$ & $\mathcal{L}_{basic\_class}$ & $\mathcal{L}_{coarse}$
& \\
\hline
\multirow{3}{*}
{\begin{tabular}[c]{@{}c@{}}GTA5 $\rightarrow$ \\ Cityscapes \end{tabular}}
& \checkmark & & & 34.9\\
& \checkmark & \checkmark & & 37.0\\
& \checkmark & \checkmark & \checkmark & 37.7\\
\bottomrule
\end{tabular}%
}
\setlength{\belowcaptionskip}{10pt}\centering\caption{Ablation study of our CCDA method.}
\label{Tab: ablation_r}
\end{table}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.8\textwidth, height=0.42\textheight]{show1.pdf}
\caption{Example results of our proposed Class-Conditional Domain Adaptation (CCDA)
for the GTA5 $\rightarrow$ Cityscapes task.
For each target image, we show the corresponding ground-truth map,
the result of the baseline architecture described in Section \ref{section: Basic Domain Adaptation Structure},
and the result of our full CCDA system.}
\label{fig: show}
\end{figure*}
\subsection{Datasets}
Manual generation of pixel-level ground truth for semantic segmentation is expensive.
Since pixel-level labels for synthetic datasets can be derived directly from the generative
model, synthetic data has the potential to vastly expand the quantity of labeled data used
to train deep semantic segmentation networks.
at inference time requires bridging any domain shift between the synthetic training data and the
real test data.
With this motivation, we evaluate our class-conditional domain adaptation method by employing two synthetic source domain datasets (SYNTHIA~\cite{ros2016synthia} and GTA5~\cite{richter2016playing}) and a real-world target domain dataset (Cityscapes~\cite{cordts2016cityscapes}).
This defines two adaptation tasks: GTA5 $\rightarrow$ Cityscapes, and SYNTHIA $\rightarrow$ Cityscapes.
The GTA5 dataset is a large synthetic dataset comprising 24,966 images with pixel-level ground-truth labels.
The image resolution is 1914 $\times$ 1052 pixels and the class labels are compatible with the Cityscapes dataset.
The SYNTHIA dataset is another widely used synthetic dataset for domain adaptation; it contains 9,400 images with pixel-level ground-truth labels at a resolution of 960 $\times$ 720 pixels.
For the real-world images, the Cityscapes dataset comprises 2,975 training images
and 500 validation images at a resolution of 2048 $\times$ 1024 pixels.
To train our domain adaptation model,
we employ both the images and the ground-truth labels from either the GTA5 or the SYNTHIA dataset for the source domain,
and only the images (not the labels) from the Cityscapes training set for the target domain.
We evaluate our model on the Cityscapes validation set, over 19 classes for GTA5 $\rightarrow$ Cityscapes
and over 13 and 16 classes for SYNTHIA $\rightarrow$ Cityscapes, following the convention of~\cite{tsai2018learning, zhang2019curriculum}.
\subsection{Implementation Details}
We implement our method in PyTorch and train on a single GeForce RTX 2080 Ti GPU with 11 GB of memory.
For segmentation, we use the DeepLab-v2~\cite{chen2018deeplab} framework with a small
VGG16~\cite{simonyan2014very} pre-trained model as backbone for our feature encoder $\mathbf E$ and
segmentation decoder $\mathbf S$.
For the discriminator module $\mathbf D$, the fine-scale branch has a structure similar to~\cite{tsai2018learning, luo2019taking},
consisting of 5 convolution layers with channel numbers $\{64, 128, 256, 512, 1 \}$.
To preserve fine-scale detail, we use 3 $\times$ 3 kernels and a stride of 1.
A final up-sampling layer is added at the end of this branch to rescale the output to the input image resolution.
For the coarse-scale branch, we share the first two convolution layers with the fine-scale branch,
and then apply 3 convolution layers with channel numbers $ \{256, 512, C\times 2 \}$, 3 $\times$ 3 kernels and a stride of 2
for downsampling.
Except for the last convolution layer in both branches,
each convolution layer in our discriminator module is followed by a Leaky-ReLU~\cite{maas2013rectifier} with a slope of $0.2$ for negative inputs.
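The two-branch discriminator described above can be summarized with the following PyTorch sketch; the layer counts, channel numbers, kernel sizes and strides follow the text, while padding and other details are assumptions made for illustration.
\begin{verbatim}
import torch
import torch.nn as nn

class MultiScaleDiscriminator(nn.Module):
    """Sketch of the two-branch discriminator; hyper-parameters follow the
    text, padding and other details are assumptions."""
    def __init__(self, in_channels, num_classes):
        super().__init__()
        lrelu = nn.LeakyReLU(0.2)
        # Shared stem: the first two fine-scale convolutions.
        self.shared = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, stride=1, padding=1), lrelu,
            nn.Conv2d(64, 128, 3, stride=1, padding=1), lrelu)
        # Fine-scale branch: three more 3x3 / stride-1 convolutions down to 1 channel.
        self.fine = nn.Sequential(
            nn.Conv2d(128, 256, 3, stride=1, padding=1), lrelu,
            nn.Conv2d(256, 512, 3, stride=1, padding=1), lrelu,
            nn.Conv2d(512, 1, 3, stride=1, padding=1))
        # Coarse-scale branch: three stride-2 convolutions down to 2C channels.
        self.coarse = nn.Sequential(
            nn.Conv2d(128, 256, 3, stride=2, padding=1), lrelu,
            nn.Conv2d(256, 512, 3, stride=2, padding=1), lrelu,
            nn.Conv2d(512, num_classes * 2, 3, stride=2, padding=1))

    def forward(self, feat, out_size):
        shared = self.shared(feat)
        # Fine output is upsampled to the input image resolution.
        fine = nn.functional.interpolate(self.fine(shared), size=out_size,
                                         mode="bilinear", align_corners=False)
        coarse = self.coarse(shared)   # (B, 2C, h, w): per-patch O^s and O^t
        return fine, coarse
\end{verbatim}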
To train our feature encoder $\mathbf E$ and segmentation decoder $\mathbf S$,
we use the Stochastic Gradient Descent (SGD) optimizer~\cite{bottou2010large} with a momentum of 0.9
and a weight decay of 5$e-$4.
The initial learning rate is set to 2.5$e-$4 and decays during training.
For the discriminator module $\mathbf D$, we apply the Adam~\cite{kingma2014adam} optimizer
with $\beta_1 = 0.9$ and $\beta_2 = 0.99$.
The initial learning rate is set to 1$e-$4 and decayed with the same policy as for SGD.
We train our model on crops of 512 $\times$ 1024 pixels, using one source domain image and one target domain image at a time,
following~\cite{luo2019taking, luo2019significance}.
We set the hyper-parameters $\lambda_s = \lambda_t = 0.0003$ for both the fine-scale and coarse-scale branches.
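The optimizer configuration can be reproduced roughly as follows; the polynomial decay helper is an assumption modelled on common DeepLab-style schedules rather than a statement of our exact policy.
\begin{verbatim}
import torch

def build_optimizers(seg_params, disc_params):
    # SGD for the encoder/decoder, Adam for the discriminator, as in the text.
    opt_ES = torch.optim.SGD(seg_params, lr=2.5e-4, momentum=0.9,
                             weight_decay=5e-4)
    opt_D = torch.optim.Adam(disc_params, lr=1e-4, betas=(0.9, 0.99))
    return opt_ES, opt_D

def poly_lr(base_lr, step, max_steps, power=0.9):
    # Assumed polynomial decay schedule for illustration.
    return base_lr * (1.0 - step / max_steps) ** power
\end{verbatim}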
\subsection{Results}
Table \ref{Tab: gta52cityscape} and Table \ref{Tab: synt2cityscape} summarize the performance of our method compared with the state of the art on the two transfer tasks GTA5 $\rightarrow$ Cityscapes
and SYNTHIA $\rightarrow$ Cityscapes, respectively.
For a fair comparison, we choose several state-of-the-art methods that use the same VGG16 backbone as our method,
including the adversarial-learning methods with prediction-level adaptation AdaptSeg~\cite{tsai2018learning}, ADVENT~\cite{vu2019advent}
and CLAN~\cite{luo2019taking},
and the adversarial-learning methods with feature-level adaptation FCNs in the Wild~\cite{hoffman2016fcns}, Cross-city~\cite{chen2017no}
and SIBIN~\cite{luo2019significance}.
We do not compare with the two most recent state-of-the-art methods~\cite{du2019ssf, tsai2019domain},
since they use a different setting of the Cityscapes dataset or add an extra pixel-level adaptation (image style-transfer) module.
We also compare our method with the self-training method CBST-SP~\cite{zou2018unsupervised}
and the curriculum-learning method CDA~\cite{zhang2019curriculum}.
These two methods achieve relatively strong performance on lower-frequency classes due to their strategy of alternately selecting
target domain samples for training the segmentation network.
In Table \ref{Tab: gta52cityscape},
we present our experimental results on the GTA5 $\rightarrow$ Cityscapes task
compared with the chosen state-of-the-art methods.
The table shows that our proposed CCDA method
performs better on average than all of these methods, and that this advantage derives from improvements over a wide range of classes.
We observe that our class-conditional method boosts the performance of lower-frequency classes substantially
while maintaining performance on higher-frequency classes such as road, building, vegetation and car,
and thus ultimately achieves a higher mean IoU.
In Table \ref{Tab: synt2cityscape}, comparison with current state-of-the-art methods on the SYNTHIA $\rightarrow$ Cityscapes transfer task
shows that our CCDA method performs favorably against the other algorithms on mIoU, indicating that it increases overall performance.
In particular, compared with other adversarial-learning methods, our CCDA has an advantage on lower-frequency classes.
Against the self-training and curriculum-learning methods, which perform better on several of the least frequent classes,
we still reach comparable results on these classes and outperform them in overall performance.
\subsection{Ablation Studies}
To better understand the impact of each component of our adaptation model,
we conducted an ablation study by selectively deactivating each component
and measuring the effect on performance for the GTA5 $\rightarrow$ Cityscapes transfer task.
In particular, we define three nested subset models:
1) $\mathcal{L}_{basic}$: using the basic domain adaptation architecture in
Section \ref{section: Basic Domain Adaptation Structure} with the segmentation loss and a fine-scale basic discriminator.
2) $\mathcal{L}_{basic} + \mathcal{L}_{basic\_class}$:
adding class-conditional loss for segmentation in Section \ref{section: Class-based Loss for Segmentation} as well as the class-conditional loss for fine-scale discriminator in Section \ref{section: Class-based Loss for Adaptation} on the basic architecture.
3) $\mathcal{L}_{basic} + \mathcal{L}_{basic\_class} + \mathcal{L}_{coarse}$:
further adding the class-conditional coarse-scale branch for discriminator in
Section \ref{section: Class-conditional Coarse-scale Branch}.
The results are shown in Table \ref{Tab: ablation_r}. Adding the class-conditional losses to the basic architecture
(for both segmentation and the fine-scale discriminator)
yields a 2.1\% improvement,
and the proposed coarse-scale branch brings a further 0.7\% improvement.
This verifies the effectiveness of our CCDA method,
including both the class-conditional losses and the class-conditional coarse-scale branch.
We also present qualitative segmentation examples in Figure \ref{fig: show},
which further illustrate the effectiveness of our CCDA method.
Our CCDA method outperforms the baseline
structure in two ways.
First, it provides cleaner and more accurate predictions on higher-frequency classes such as
road and sidewalk.
Second, it improves performance on lower-frequency classes such as light and sign.
\section{Conclusions}
We have developed a novel approach to solving an important problem in domain adaptation for semantic segmentation, namely, the poor performance often observed for infrequent classes. The solution hinges on the introduction of class-conditioning at multiple points in the model, including segmentation, coarse-scale domain adaptation and fine-scale domain adaptation, and upon equalizing across classes at several stages in the computation.
Evaluation on two transfer tasks demonstrates the effectiveness of our method, which boosts performance on infrequent classes while maintaining performance on the remaining classes. Generally, the proposed class-conditional domain adaptation method outperforms the state of the art on average, due to superior performance on a broad range of classes.
\\ \hspace*{\fill} \\
\noindent \textbf{Acknowledgements: }
We would like to thank the York University
Vision: Science to Applications (VISTA) program for its support.
\iffalse
\subsection{Language}
All manuscripts must be in English.
\subsection{Dual submission}
Please refer to the author guidelines on the CVPR 2020 web page for a
discussion of the policy on dual submissions.
\subsection{Paper length}
Papers, excluding the references section,
must be no longer than eight pages in length. The references section
will not be included in the page count, and there is no limit on the
length of the references section. For example, a paper of eight pages
with two pages of references would have a total length of 10 pages.
{\bf There will be no extra page charges for CVPR 2020.}
Overlength papers will simply not be reviewed. This includes papers
where the margins and formatting are deemed to have been significantly
altered from those laid down by this style guide. Note that this
\LaTeX\ guide already sets figure captions and references in a smaller font.
The reason such papers will not be reviewed is that there is no provision for
supervised revisions of manuscripts. The reviewing process cannot determine
the suitability of the paper for presentation in eight pages if it is
reviewed in eleven.
\subsection{The ruler}
The \LaTeX\ style defines a printed ruler which should be present in the
version submitted for review. The ruler is provided in order that
reviewers may comment on particular lines in the paper without
circumlocution. If you are preparing a document using a non-\LaTeX\
document preparation system, please arrange for an equivalent ruler to
appear on the final output pages. The presence or absence of the ruler
should not change the appearance of any other content on the page. The
camera ready copy should not contain a ruler. (\LaTeX\ users may uncomment
the \verb'\cvprfinalcopy' command in the document preamble.) Reviewers:
note that the ruler measurements do not align well with lines in the paper
--- this turns out to be very difficult to do well when the paper contains
many figures and equations, and, when done, looks ugly. Just use fractional
references (e.g.\ this line is $095.5$), although in most cases one would
expect that the approximate location will be adequate.
\subsection{Mathematics}
Please number all of your sections and displayed equations. It is
important for readers to be able to refer to any particular equation. Just
because you didn't refer to it in the text doesn't mean some future reader
might not need to refer to it. It is cumbersome to have to use
circumlocutions like ``the equation second from the top of page 3 column
1''. (Note that the ruler will not be present in the final copy, so is not
an alternative to equation numbers). All authors will benefit from reading
Mermin's description of how to write mathematics:
\url{http://www.pamitc.org/documents/mermin.pdf}.
\subsection{Blind review}
Many authors misunderstand the concept of anonymizing for blind
review. Blind review does not mean that one must remove
citations to one's own work---in fact it is often impossible to
review a paper unless the previous citations are known and
available.
Blind review means that you do not use the words ``my'' or ``our''
when citing previous work. That is all. (But see below for
techreports.)
Saying ``this builds on the work of Lucy Smith [1]'' does not say
that you are Lucy Smith; it says that you are building on her
work. If you are Smith and Jones, do not say ``as we show in
[7]'', say ``as Smith and Jones show in [7]'' and at the end of the
paper, include reference 7 as you would any other cited work.
An example of a bad paper just asking to be rejected:
\begin{quote}
\begin{center}
An analysis of the frobnicatable foo filter.
\end{center}
In this paper we present a performance analysis of our
previous paper [1], and show it to be inferior to all
previously known methods. Why the previous paper was
accepted without this analysis is beyond me.
[1] Removed for blind review
\end{quote}
An example of an acceptable paper:
\begin{quote}
\begin{center}
An analysis of the frobnicatable foo filter.
\end{center}
In this paper we present a performance analysis of the
paper of Smith \etal [1], and show it to be inferior to
all previously known methods. Why the previous paper
was accepted without this analysis is beyond me.
[1] Smith, L and Jones, C. ``The frobnicatable foo
filter, a fundamental contribution to human knowledge''.
Nature 381(12), 1-213.
\end{quote}
If you are making a submission to another conference at the same time,
which covers similar or overlapping material, you may need to refer to that
submission in order to explain the differences, just as you would if you
had previously published related work. In such cases, include the
anonymized parallel submission~\cite{Authors14} as additional material and
cite it as
\begin{quote}
[1] Authors. ``The frobnicatable foo filter'', F\&G 2014 Submission ID 324,
Supplied as additional material {\tt fg324.pdf}.
\end{quote}
Finally, you may feel you need to tell the reader that more details can be
found elsewhere, and refer them to a technical report. For conference
submissions, the paper must stand on its own, and not {\em require} the
reviewer to go to a techreport for further details. Thus, you may say in
the body of the paper ``further details may be found
in~\cite{Authors14b}''. Then submit the techreport as additional material.
Again, you may not assume the reviewers will read this material.
Sometimes your paper is about a problem which you tested using a tool which
is widely known to be restricted to a single institution. For example,
let's say it's 1969, you have solved a key problem on the Apollo lander,
and you believe that the CVPR70 audience would like to hear about your
solution. The work is a development of your celebrated 1968 paper entitled
``Zero-g frobnication: How being the only people in the world with access to
the Apollo lander source code makes us a wow at parties'', by Zeus \etal.
You can handle this paper like any other. Don't write ``We show how to
improve our previous work [Anonymous, 1968]. This time we tested the
algorithm on a lunar lander [name of lander removed for blind review]''.
That would be silly, and would immediately identify the authors. Instead
write the following:
\begin{quotation}
\noindent
We describe a system for zero-g frobnication. This
system is new because it handles the following cases:
A, B. Previous systems [Zeus et al. 1968] didn't
handle case B properly. Ours handles it by including
a foo term in the bar integral.
...
The proposed system was integrated with the Apollo
lunar lander, and went all the way to the moon, don't
you know. It displayed the following behaviours
which show how well we solved cases A and B: ...
\end{quotation}
As you can see, the above text follows standard scientific convention,
reads better than the first version, and does not explicitly name you as
the authors. A reviewer might think it likely that the new paper was
written by Zeus \etal, but cannot make any decision based on that guess.
He or she would have to be sure that no other authors could have been
contracted to solve problem B.
\medskip
\noindent
FAQ\medskip\\
{\bf Q:} Are acknowledgements OK?\\
{\bf A:} No. Leave them for the final copy.\medskip\\
{\bf Q:} How do I cite my results reported in open challenges?
{\bf A:} To conform with the double blind review policy, you can report results of other challenge participants together with your results in your paper. For your results, however, you should not identify yourself and should not mention your participation in the challenge. Instead present your results referring to the method proposed in your paper and draw conclusions based on the experimental comparison to other results.\medskip\\
\begin{figure}[t]
\begin{center}
\fbox{\rule{0pt}{2in} \rule{0.9\linewidth}{0pt}}
\end{center}
\caption{Example of caption. It is set in Roman so that mathematics
(always set in Roman: $B \sin A = A \sin B$) may be included without an
ugly clash.}
\label{fig:long}
\label{fig:onecol}
\end{figure}
\subsection{Miscellaneous}
\noindent
Compare the following:\\
\begin{tabular}{ll}
\verb'$conf_a$' & $conf_a$ \\
\verb'$\mathit{conf}_a$' & $\mathit{conf}_a$
\end{tabular}\\
See The \TeX book, p165.
The space after \eg, meaning ``for example'', should not be a
sentence-ending space. So \eg is correct, {\em e.g.} is not. The provided
\verb'\eg' macro takes care of this.
When citing a multi-author paper, you may save space by using ``et alia'',
shortened to ``\etal'' (not ``{\em et.\ al.}'' as ``{\em et}'' is a complete word.)
However, use it only when there are three or more authors. Thus, the
following is correct: ``
Frobnication has been trendy lately.
It was introduced by Alpher~\cite{Alpher02}, and subsequently developed by
Alpher and Fotheringham-Smythe~\cite{Alpher03}, and Alpher \etal~\cite{Alpher04}.''
This is incorrect: ``... subsequently developed by Alpher \etal~\cite{Alpher03} ...''
because reference~\cite{Alpher03} has just two authors. If you use the
\verb'\etal' macro provided, then you need not worry about double periods
when used at the end of a sentence as in Alpher \etal.
For this citation style, keep multiple citations in numerical (not
chronological) order, so prefer \cite{Alpher03,Alpher02,Authors14} to
\cite{Alpher02,Alpher03,Authors14}.
\begin{figure*}
\begin{center}
\fbox{\rule{0pt}{2in} \rule{.9\linewidth}{0pt}}
\end{center}
\caption{Example of a short caption, which should be centered.}
\label{fig:short}
\end{figure*}
\section{Formatting your paper}
All text must be in a two-column format. The total allowable width of the
text area is $6\frac78$ inches (17.5 cm) wide by $8\frac78$ inches (22.54
cm) high. Columns are to be $3\frac14$ inches (8.25 cm) wide, with a
$\frac{5}{16}$ inch (0.8 cm) space between them. The main title (on the
first page) should begin 1.0 inch (2.54 cm) from the top edge of the
page. The second and following pages should begin 1.0 inch (2.54 cm) from
the top edge. On all pages, the bottom margin should be 1-1/8 inches (2.86
cm) from the bottom edge of the page for $8.5 \times 11$-inch paper; for A4
paper, approximately 1-5/8 inches (4.13 cm) from the bottom edge of the
page.
\subsection{Margins and page numbering}
All printed material, including text, illustrations, and charts, must be kept
within a print area 6-7/8 inches (17.5 cm) wide by 8-7/8 inches (22.54 cm)
high.
Page numbers should appear in the footer, centered, 0.75 inches from the
bottom of the page, and should start at the correct page number rather than
the 4321 in the example. To do this, find the line (around
line 23)
\begin{verbatim}
\setcounter{page}{4321}
\end{verbatim}
where the number 4321 is your assigned starting page.
Make sure the first page is numbered by commenting out the first page being
empty on line 46
\begin{verbatim}
\end{verbatim}
\subsection{Type-style and fonts}
Wherever Times is specified, Times Roman may also be used. If neither is
available on your word processor, please use the font closest in
appearance to Times to which you have access.
MAIN TITLE. Center the title 1-3/8 inches (3.49 cm) from the top edge of
the first page. The title should be in Times 14-point, boldface type.
Capitalize the first letter of nouns, pronouns, verbs, adjectives, and
adverbs; do not capitalize articles, coordinate conjunctions, or
prepositions (unless the title begins with such a word). Leave two blank
lines after the title.
AUTHOR NAME(s) and AFFILIATION(s) are to be centered beneath the title
and printed in Times 12-point, non-boldface type. This information is to
be followed by two blank lines.
The ABSTRACT and MAIN TEXT are to be in a two-column format.
MAIN TEXT. Type main text in 10-point Times, single-spaced. Do NOT use
double-spacing. All paragraphs should be indented 1 pica (approx. 1/6
inch or 0.422 cm). Make sure your text is fully justified---that is,
flush left and flush right. Please do not place any additional blank
lines between paragraphs.
Figure and table captions should be 9-point Roman type as in
Figures~\ref{fig:onecol} and~\ref{fig:short}. Short captions should be centred.
\noindent Callouts should be 9-point Helvetica, non-boldface type.
Initially capitalize only the first word of section titles and first-,
second-, and third-order headings.
FIRST-ORDER HEADINGS. (For example, {\large \bf 1. Introduction})
should be Times 12-point boldface, initially capitalized, flush left,
with one blank line before, and one blank line after.
SECOND-ORDER HEADINGS. (For example, { \bf 1.1. Database elements})
should be Times 11-point boldface, initially capitalized, flush left,
with one blank line before, and one after. If you require a third-order
heading (we discourage it), use 10-point Times, boldface, initially
capitalized, flush left, preceded by one blank line, followed by a period
and your text on the same line.
\subsection{Footnotes}
Please use footnotes\footnote {This is what a footnote looks like. It
often distracts the reader from the main flow of the argument.} sparingly.
Indeed, try to avoid footnotes altogether and include necessary peripheral
observations in
the text (within parentheses, if you prefer, as in this sentence). If you
wish to use a footnote, place it at the bottom of the column on the page on
which it is referenced. Use Times 8-point type, single-spaced.
\subsection{References}
List and number all bibliographical references in 9-point Times,
single-spaced, at the end of your paper. When referenced in the text,
enclose the citation number in square brackets, for
example~\cite{Authors14}. Where appropriate, include the name(s) of
editors of referenced books.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|}
\hline
Method & Frobnability \\
\hline\hline
Theirs & Frumpy \\
Yours & Frobbly \\
Ours & Makes one's heart Frob\\
\hline
\end{tabular}
\end{center}
\caption{Results. Ours is better.}
\end{table}
\subsection{Illustrations, graphs, and photographs}
All graphics should be centered. Please ensure that any point you wish to
make is resolvable in a printed copy of the paper. Resize fonts in figures
to match the font in the body text, and choose line widths which render
effectively in print. Many readers (and reviewers), even of an electronic
copy, will choose to print your paper in order to read it. You cannot
insist that they do otherwise, and therefore must not assume that they can
zoom in to see tiny details on a graphic.
When placing figures in \LaTeX, it's almost always best to use
\verb+\includegraphics+, and to specify the figure width as a multiple of
the line width as in the example below
{\small\begin{verbatim}
\usepackage[dvips]{graphicx} ...
\includegraphics[width=0.8\linewidth]
{myfile.eps}
\end{verbatim}
}
\subsection{Color}
Please refer to the author guidelines on the CVPR 2020 web page for a discussion
of the use of color in your document.
\section{Final copy}
You must include your signed IEEE copyright release form when you submit
your finished paper. We MUST have this form before your paper can be
published in the proceedings.
Please direct any questions to the production editor in charge of these
proceedings at the IEEE Computer Society Press:
\url{https://www.computer.org/about/contact}.
\fi
\section{Introduction}
\label{sec:Introduction}
Internet of Things (IoT) and connected devices have transformed our lives. IoT systems are actively
deployed in a variety of settings, including homes, hospitals, battlefields, schools, airports, manufacturing plants, and more. The architecture of these systems, generally, consists of devices connected to one another or users/clients where the main network activity is data or information exchanges. In contrast to general and non-sensitive information exchange, such as reading sensors or controlling air conditioners remotely, there are many instances where critical or confidential information needs to be shared or routed among the devices. These pieces of information or ``secrets" are used by the devices or the users/clients to perform security or privacy related functions in the IoT system. The information could be encryption keys, digital signatures, login credentials, or important account numbers.
However, such a secret cannot be entrusted to any individual device, because the malfunction of a single device might then jeopardize the security of the entire network. Therefore, an appropriate approach is to split the secret and distribute it among multiple devices. The most commonly adopted technique in this area is threshold secret sharing (TSS). In an IoT or distributed system, TSS is generally carried out by a dealer (usually the server or administrator of the IoT system), which divides the secret and parcels those pieces among multiple holders (the devices), in such a way that the secret can only be reconstructed collaboratively by subsets of holders whose size has to reach a minimum number. This minimum size is called ``threshold." Below the threshold, the secret is theoretically safe and kept private from retrieval.
Practical secret sharing techniques are deployed in many real world applications that include IoT systems. The most common example is key management in wireless sensor networks. Rather than entrusting the cryptographic key to a single node, which can be easily compromised in hostile environments, the key is shared among a group of nodes and can only be retrieved collaboratively [\cite{AC2005}] to be used for digital signature or other cryptographic purposes at another terminal. If some nodes are found to be malfunctioning, then their access will be revoked, and they will be replaced by the same number of healthy nodes to reach the threshold. One such application is the ``Vanish" project [\cite{RG2009}], which uses the threshold property to make the secret key in a distributed system vanish when the number of shareholding nodes gradually decreases to below the threshold. Another application is in Hardware Security Module (HSM) based systems. HSMs are widely used in bank card payment systems. Some HSMs [\cite{TH2013}] are produced and distributed by certification authorities (CAs) and registration authorities (RAs) to generate and share important secret keys under Public Key Infrastructure (PKI). These HSMs also require implementation of a multi-part user authentication scheme, namely threshold secret sharing. The most well-known application is probably DNS Security (DNSSEC) [\cite{DNS2010}], which ensures the DNS (Domain Name System) servers can connect users and their Internet destinations (URLs and IPs) in a secure and verified manner. Its root key is split and shared among seven holders all over the world. In the case of an attack, if any five or more of the holders are able to come to a U.S. base, then they can reconstruct the root key using their shares to restore the Internet connections. Technology survey companies also use TSS to store sensitive survey data to prevent them from being extracted by any single data analyst without the participation of others [\cite{LA2016}].
However, although this technique reduces the risk of losing all the confidential information due to a malfunction of one or a few devices, there is still a danger when attackers compromise a larger group of them. Due to their distributed nature, TSS schemes are susceptible to a number of attacks, such as passive attacks, man-in-the-middle (MITM) attacks, or share manipulations, i.e., cheating. These attacks, resulting in share disclosure or distortions, may lead to the leakage of the original secret or retrieval of a false secret. Generally speaking, the TSS is able to maintain the privacy of the secret information under the existence of a small number (below the threshold) of cheaters. However, it alone cannot guarantee the integrity of the secret.
Although there have been many secure TSS schemes, they are often limited in the adversarial capabilities they can handle, i.e., their cheater tolerance. For instance, [\cite{CR2008}] proposed a secure version of TSS based on secret validation, which is able to detect, but not identify, any number of cheaters. The authors in [\cite{WM2016}], on the other hand, leveraged superimposed codes with secret verification and were able to locate no more than $n^{0.5}$ cheaters, where $n$ is the total number of devices involved. With higher computation complexity, [\cite{MS1981, RG2001, FG2006}] verify the shares with error control coding (particularly Maximum Distance Separable codes) to boost the cheater tolerance to nearly $n/3$. However, when the number of cheaters exceeds their fault tolerance, neither the privacy nor the integrity of the secret can be guaranteed. In addition, the dishonest parties can even frame the honest ones as cheaters.
Therefore, we propose a secure and robust scheme to enable the sharing of the confidential information in IoT systems with a stronger cheater tolerance. The major contributions of this work are:
\begin{enumerate}
\item The proposed approach uses Threshold Secret Sharing (TSS) to split the secret into shares distributed among the devices in the system, so the secret can only be retrieved collaboratively by groups of devices;
\item It adds additional security features on top of the original TSS functionality; specifically, it protects the confidentiality of the secret even when attackers have hijacked a group of devices;
\item It also ensures the integrity of the secret even when attackers hijack a large number of devices, collude, or manipulate the shares to forge fake secrets;
\item The proposed approach is able to detect and identify cheaters or compromised share holders up to a given theoretical upper bound;
\item It provides an automation tool to aid in the secret sharing procedure and system programming based on user-specified parameters.
\end{enumerate}
For evaluating the feasibility of the proposed robust secret sharing approach in systems consisting of physically distributed and connected devices, we introduce the Odysseus IoT open-interface testbed system. Testing on the Odysseus IoT testbed serves to validate the practicality of the attack models and associated defenses. It also highlights how a practical secure information sharing mechanism may be implemented.
Section II introduces the details of the Odysseus IoT testbed system, as well as the original threshold secret sharing scheme. The section also covers the attack model. Section III summarizes some existing secure protocols for the TSS. Section IV follows up with the vulnerabilities of those protocols under the attack model. Section V describes the proposed secure and robust secret sharing scheme, as well as a cheater identification protocol. Section VI presents the design automation tool, and finally, Section VII concludes the paper.
\section{The Odysseus IoT System, the Original Threshold Secret Sharing Scheme, and the Attack Models}
\label{sec:Related-Works}
In this section, we first introduce the Internet of Things (IoT) Testbed System, ``Odysseus", on which we evaluate the practicality of the proposed secure TSS scheme. We use this system to introduce and illustrate the proposed approach without a loss of generality, and to provide some deployment concreteness.
\subsection{System Model - Odysseus IoT}
The original motivation of developing a secure and robust TSS is to protect systems like the Odysseus IoT system. In such a system, the dealer is the service provider, which provides the Odysseus boards and is responsible for their deployment. The Odysseus boards are sensor hosting boards supporting various types of sensors. The boards have wireless communication modules for data exchange. The clients or users can pick the sensors to be installed and processed on the boards via GPIO ports before the boards' deployment. These sensors can either be heterogeneous or homogeneous. One example of Odysseus' application is in fire-fighting and rescue: heat sensors to map the fire intensity and location within a burning building, and motion sensors to identify human presence.
In general, the dealer (administrator) of the Odysseus system can deploy a large number of sensor boards to an area, and their sensor data can be requested remotely by different clients. From time to time, a client will request sensing data from a group of sensors, while retrieving from them a secret, if necessary. The secret, such as an encryption key, will be used by the client on various applications associated with the sensor data. The system chart and prototype of Odysseus are shown below in Fig. \ref{fig:Odysseus} and \ref{fig:prototype}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=3.5in]{Flowcharts/Odysseus.pdf}
\caption{\small The three layers of the Odysseus system: the dealer who deploys the boards and the secret, the sensor boards as the shareholders with wireless communication capability, and the client(s) who collects the data as well as the secret.
\label{fig:Odysseus}}
\vspace{-0.1in}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=3.0in]{Flowcharts/Prototype.png}
\caption{\small The prototype boards of Odysseus.
\label{fig:prototype}}
\vspace{-0.2in}
\end{center}
\end{figure}
The security of this IoT system also needs to be addressed. Although the dealer and clients can be trusted, the sensor hosting boards scattered all over a region are not physically monitored. Since any number of them can be subject to passive or active attacks, no critical information such as the secret can be entrusted to any individual board. There is even a danger of a large number of them being hijacked by attackers, who might thereby gain full access to those devices. Therefore, a secure protocol to maintain the privacy and integrity of the secret is needed, as well as error tolerance to deal with the existence of compromised boards.
\subsection{The Original Threshold Secret Sharing}
\label{sec:Original}
As noted in Section 1, TSS divides confidential information between devices, instead of storing the whole secret on each device, such that a defect in, or the compromise of, a single device will not impair the security of the entire network.
The following notations are used to describe and evaluate the original threshold secret sharing scheme, as well as the related secure variations:
\vspace{-0.1in}
\addtolength{\leftskip}{0mm}
\begin{itemize}[leftmargin=\dimexpr\parindent+1mm+0.5\labelwidth\relax]
\renewcommand{\labelitemi}{\scriptsize$\bullet$}
\setlength\itemsep{0.05em}
\item $ S $: the original secret (a piece of confidential information);
\item $ D_{i} $: the public ID of the $i^{th}$ shareholder;
\item $ h_{i} $: the share of $S$ of the $i^{th}$ shareholder;
\item $ t $: the threshold of a secret sharing scheme;
\item $ c_{est} $: the number of estimated cheaters;
\item $ c_{act} $: the number of actual cheaters;
\item $ n $: the total number of shareholders involved in a computation;
\item $ b $: the number of bits in a vector variable;
\item $\oplus$: the addition operator in finite fields;
\item $\cdot$ : the multiplication operator in finite fields;
\item $\bigoplus$: the cumulative sum operator in finite fields;
\item $\prod$: the cumulative product operator in finite fields;
\item $\mathtt{\sim}$ : the distortion of a vector;
\item $ MAC() $: a secure message authenticating function;
\item $ ENC() $: a cryptographic encryption function;
\item $ EtM() $: an Encrypt-then-MAC function;
\item $ K $: the cryptographic key;
\item $||$: the concatenation operator;
\item $ E $: the encoded secret where $E = EtM(S, K)$;
\item $ P_{miss} $: the probability of failing to detect cheating in the IoT system.
\end{itemize}
The concept of $t$-threshold secret sharing (TSS) was first introduced by Shamir [\cite{AS1979}]. He argued that all computations should be carried out over Galois finite field ($GF$) arithmetic, in order to maintain information-theoretic security. To share a secret $S$, a polynomial of degree $(t-1)$ is used to compute and distribute the shares, where the secret $S$ serves as the free or leading coefficient, and all other coefficients can be arbitrarily chosen. The shares are the evaluations of the polynomial at each holder's $D_i$.
The share distribution equation when $S$ is placed as the free coefficient is:
\begin{equation*}
h_{i} = S \oplus a_{1}D_{i} \oplus a_{2}D_{i}^2 \oplus \cdots \oplus a_{t-1}D_{i}^{t-1}.
\end{equation*}
And as the leading coefficient:
\begin{equation} \label{eq:1}
h_{i} = a_{0} \oplus a_{1}D_{i} \oplus a_{2}D_{i}^2 \oplus \cdots \oplus SD_{i}^{t-1} .
\end{equation}
where $S$, $h_i$, $D_i$ $\in GF(2^b)$.
The ID numbers, $D_i$, are publicly known, while the shares, $h_i$, are kept private by the shareholders.
With any subset of at least $t$ shareholders' IDs and shares, one can use the Lagrange interpolation formula to reconstruct the secret.
If $S$ is placed at the free coefficient, it can be retrieved by:
\begin{equation*}
S = \bigoplus_{i=0}^{t-1} {\frac{D_{i} \cdot h_{i}}{\prod_{j=0,j \neq i}^{t-1} {(D_{i} \oplus D_{j})}}}.
\end{equation*}
If $S$ is the leading coefficient, it can be retrieved by:
\begin{equation} \label{eq:2}
S = \bigoplus_{i=0}^{t-1} {\frac{h_{i}}{\prod_{j=0,j \neq i}^{t-1} {(D_{i} \oplus D_{j})}}}.
\end{equation}
Such a construction is $(t-1)$-private. This means it needs at least $t$ shareholders to reconstruct the secret and so any $(t-1)$ or fewer shareholders have zero knowledge of the secret.
For computational simplicity, in this paper we choose to place $S$ as the leading coefficient as shown in [Eq. \ref{eq:1} and \ref{eq:2}]. We also assume that the system works over finite field $GF(2^b)$, where, as in most computer systems, $b = 32, 64, 128, 256, \cdots$.
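To make the computations in [Eq. \ref{eq:1}] and [Eq. \ref{eq:2}] concrete, the following Python sketch implements share distribution and secret reconstruction. It is illustrative only: the field $GF(2^8)$ and the irreducible polynomial $x^8+x^4+x^3+x+1$ are assumptions chosen for brevity, whereas a deployment would use the larger $b$ mentioned above.
{\small
\begin{verbatim}
import secrets

B = 8                 # illustrative field size GF(2^8); the paper uses larger b
POLY = 0x11B          # x^8 + x^4 + x^3 + x + 1, irreducible over GF(2)

def gf_mul(x, y):
    """Carry-less multiplication of two field elements modulo POLY."""
    r = 0
    while y:
        if y & 1:
            r ^= x
        y >>= 1
        x <<= 1
        if x >> B:
            x ^= POLY
    return r

def gf_pow(x, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, x)
    return r

def gf_inv(x):
    return gf_pow(x, 2 ** B - 2)      # x^(2^b - 2) = x^(-1) in GF(2^b)

def poly_eval(coeffs, x):
    """Evaluate a_0 + a_1 x + ... + a_{t-1} x^(t-1) over GF(2^b)."""
    acc = 0
    for k, a in enumerate(coeffs):
        acc ^= gf_mul(a, gf_pow(x, k))
    return acc

def distribute(secret, ids, t):
    """Eq. (1): S as the leading coefficient, other coefficients random."""
    coeffs = [secrets.randbelow(2 ** B) for _ in range(t - 1)] + [secret]
    return [poly_eval(coeffs, d) for d in ids]

def reconstruct(ids, shares):
    """Eq. (2): recover the leading coefficient from t (ID, share) pairs."""
    s = 0
    for i, (di, hi) in enumerate(zip(ids, shares)):
        denom = 1
        for j, dj in enumerate(ids):
            if j != i:
                denom = gf_mul(denom, di ^ dj)
        s ^= gf_mul(hi, gf_inv(denom))
    return s
\end{verbatim}
}
For instance, \texttt{distribute(0x3F, [1, 2, 3, 4], 3)} produces four shares, and \texttt{reconstruct} applied to exactly three of them recovers \texttt{0x3F}, while any two shares reveal nothing about the secret.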
The original scheme's share distribution and secret reconstruction procedures are shown in Fig. \ref{fig:original}, which matches with the Odysseus and many other IoT architectures very well in the administrator - devices - clients three layer structure.
\begin{figure}[h]
\begin{center}
\includegraphics[width=3.5in]{Flowcharts/Original_SSS2.pdf}
\caption{\small The secret sharing and reconstruction flow. The reconstructor can be omitted if there is no end user and every shareholder (either device or person) has trustworthy computation capability. \label{fig:original}}
\vspace{-.2in}
\end{center}
\end{figure}
\begin{remark}
\normalfont Shamir's secret sharing scheme is supposed to work under finite field arithmetic where the field size should be a prime or a power of a prime. Ordinary arithmetic will be vulnerable and any secret can be retrieved by at most two carefully selected shareholders instead of $t$.
In ordinary positive integer arithmetic, for instance, if a shareholder's ID is $D_{i} = 1$, this holder's share will be $h_{i} = a_0 + a_1 + \cdots + S$, namely the sum of all the coefficients of (\ref{eq:1}). And in ordinary arithmetic, it is obvious that $h_{i} > a_{l} | a_{l} \in \{a_0, a_1, \cdots, S\}$. Then, this holder can find another holder with ID $D_{j} \geq h_{i}$ whose share is $h_{j}$. If these two shareholders work together, they can easily uncover the secret, regardless of $t$, by expressing $h_{j}$ in the radix of $D_{j}$, where the most significant digit will be $S$.
However, in finite field or modular arithmetic, one can never have $h_{i} > a_{l} | a_{l} \in \{a_0, a_1, \cdots, S\}$ if $h_{i} = a_0 \oplus a_1 \oplus \dots \oplus S$.
\hfill\(\square\)
\end{remark}
\subsection{Attack Model}
\label{sec:attack-models}
We define an attack model below, which is much stronger than what the original scheme and its conventional secure variations can handle.
\begin{definition} \label{def:attack-model}
\normalfont The attack model in this paper is described by the following characteristics:
\begin{enumerate}
\item The dealer and the clients are trusted;
\item The shareholders (devices in an IoT system) are not trusted and there is no limit to the number of compromised devices or cheaters.
\item The cheaters are able to gain full control of hijacked devices, meaning cheaters can read memory contents, use IO ports, or tamper with devices.
\item The cheaters can also eavesdrop or tamper with the communication channels between the devices, the dealer and/or the clients.
\item The attackers have the knowledge of the system's basic parameters ($n, t$, equations [\ref{eq:1}, \ref{eq:2}] etc.). They can work collaboratively.
\item The goals of the attackers are:
\begin{enumerate}
\item \textit{Passive attack}: to compute and acquire the original secret stealthily;
\item \textit{Active attack}: to select their own secret and submit it to the clients without being spotted.
\end{enumerate}
\end{enumerate}
\end{definition}
\textbf{\textit{Note:}} Besides the shares, each Odysseus board also submits its sensor data to the clients. However, because those are source data, their verification is another issue beyond the scope of this paper.
When the cheaters work collectively, they are able to share any information they hold, or to modify it according to their common interest. We also assume that the cheaters have sufficient computational power to calculate equations such as [\ref{eq:1}, \ref{eq:2}] and other necessary tasks.
\section{The Conventional Secure Protocols for TSS}
\label{sec:current}
In this section, we present some of the existing secure protocols and their associated passive and active attack models. We also highlight their vulnerabilities under our attack model.
\subsection{Against Passive Attacks}
The key property of TSS is that it only allows $t$ or more shareholders (devices) to retrieve the secret. Below this threshold, the secret information is theoretically secure. Namely, $t-1$ devices have no more knowledge of the secret than any individual device does. However, if the cheaters have compromised $t$ or more devices, so $c_{act} \geq t $, then the privacy of the secret is not guaranteed, since they can use [Eq. \ref{eq:2}] to retrieve it.
\subsection{Against Active Attacks}
\label{sec:convention}
Soon after the introduction of the Shamir's secret sharing scheme, it was noticed that if any number of the shareholders participating in the secret reconstruction apply an active attack by changing their shares to make $h_i$ to $\tilde{h_i} \neq h_i$, the retrieved secret will be distorted $\tilde{S} \neq S$ according to [Eq. \ref{eq:2}]. Therefore the authenticity of the submitted shares or the retrieved secret needs to be verified.
\subsubsection{Share Verification}
\label{sec:secret-verify}
Researchers [\cite{MS1981, RG2001, FG2006}] have proposed approaches to verify the validity of shares with a probability of 1. The common feature in these approaches is that the shares can be encoded into a specified error control code (ECC) codeword. The codeword's symbols, i.e., the shares, can then be verified and corrected up to the ECC's capability.
Particularly, the share distribution [Eq. \ref{eq:1}] is inherently equivalent to the non-systematic encoding equation of the well-known Reed-Solomon (RS) ECC codes. RS codes are maximum distance separable (MDS) codes which meet the Singleton bound with equality. With such a distribution equation, an ($n, t, d$) Reed-Solomon codeword ($h_{0}, h_{1}, \cdots h_{n-1}$) is encoded with $n$ symbols (shares) in total, $t$ information symbols, and distance $d = n-t+1$, which can correct up to $\frac{d-1}{2}$ (or $\frac{n-t}{2}$) erroneous symbols with the algorithms in [\cite{EB2015, SG2003}].
In the secret sharing language, with $n$ shareholders' IDs and shares, we are able to tolerate up to $c_{est} \leq \frac{n-t}{2}$ shares maliciously modified by cheaters. Theoretically speaking, the error correction capability of RS codes can tolerate up to $c_{est} < n/2$ cheaters if $n \gg t$. However, commonly, the assumption that there should be $c_{est} < t$ cheaters is made, such that a group of all cheaters have no access to the secret [\cite{HK1993}]. Then we have
\begin{equation} \label{eq:3}
c_{est} < n/3.
\end{equation}
If $n$ instead of $t$ shareholders are involved in the share error correction by RS decoders, then the correctness of the retrieved secret is ensured when [Eq. \ref{eq:3}] holds. Consequently, the secure secret sharing is both $(t-1)$-private and $(t-1)$-resilient; that is, up to $t-1$ shareholders cannot reconstruct the secret, and up to $t-1$ cheaters cannot affect the correctness of the secret [\cite{LM2015}].
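As a rough illustration of how extra shares enable verification, the following fragment (reusing the $GF(2^b)$ helpers sketched in Section \ref{sec:Original}) checks whether every submitted share lies on the degree-$(t-1)$ polynomial interpolated from the first $t$ shares. This is only a consistency check, not a full Reed-Solomon decoder: it does not correct shares, and, as Section \ref{sec:vulnerable} shows, it is defeated when the interpolating holders themselves collude.
{\small
\begin{verbatim}
def predict_share(known_ids, known_shares, x):
    """Evaluate, at point x, the degree-(t-1) polynomial interpolated
    from t known (ID, share) pairs (standard Lagrange interpolation)."""
    acc = 0
    for i, (di, hi) in enumerate(zip(known_ids, known_shares)):
        num, den = 1, 1
        for j, dj in enumerate(known_ids):
            if j != i:
                num = gf_mul(num, x ^ dj)
                den = gf_mul(den, di ^ dj)
        acc ^= gf_mul(hi, gf_mul(num, gf_inv(den)))
    return acc

def flag_inconsistent_shares(ids, shares, t):
    """Return the IDs whose shares disagree with the polynomial defined
    by the first t shares.  A real (n, t, d) RS decoder would instead
    correct up to (n - t)/2 distorted shares."""
    baseline_ids, baseline_shares = ids[:t], shares[:t]
    return [di for di, hi in zip(ids[t:], shares[t:])
            if predict_share(baseline_ids, baseline_shares, di) != hi]
\end{verbatim}
}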
\subsubsection{Secret Verification}
Besides share verification with a share correction probability of 1, another approach is to sign the original secret with a key $K$ using a message authentication code (MAC) function. Then the original secret is shared together with its MAC (usually concatenated) among the holders. Denoting the encoded secret as $(S || MAC(K, S))$, [Eq. \ref{eq:1}] becomes:
\begin{equation} \label{eq:4}
h_{i} = a_{0} \oplus a_{1}D_{i} \oplus a_{2}D_{i}^2 \oplus \cdots \oplus (S || MAC(K, S))D_{i}^{t-1} .
\end{equation}
At the reconstructor end, after the retrieval of the possibly distorted $(\tilde{S} || \widetilde{MAC(K, S)})$, the following authentication equation is evaluated:
\begin{equation} \label{eq:5}
MAC(\tilde{K},\tilde{S}) \stackrel{?}{=} \widetilde{MAC(K, S)}.
\end{equation}
An inequality detects cheating. If this MAC function has a high enough security level, such as a collision or mis-detection probability of $2^{-128}$ (or lower), then it is generally believed that all distortions will be spotted. The secure protocol of Shamir's secret sharing is shown below.
\begin{figure}[h]
\begin{center}
\includegraphics[width=3.5in]{Flowcharts/Signed.pdf}
\caption{\small The secret sharing scheme with secret authentication in the context of Odysseus system.
\label{fig:signed}}
\vspace{-0.2in}
\end{center}
\end{figure}
There are two common approaches for signing the original secret: HMAC with a key and AMD codes with a random vector.
\paragraph{A. HMAC with a Key} \hfill \break
HMAC, the keyed-hash message authentication code, is the most widely used authentication technique today. To sign a secret $S$, the nested construction is defined as follows [\cite{RFC2104}]:
\begin{definition}
\normalfont Let $HMAC()$ be the HMAC function, $K$ the signing key, and let $K'$ be derived from $K$ by padding zeros on the right up to the block size. Also let $H$ be a hashing function, $opad$ the outer padding and $ipad$ the inner padding. Then:
\begin{equation} \label{eq:6}
HMAC (K, S) = H \big( (K' \oplus opad) || H((K' \oplus ipad) || S) \big)
\end{equation}
\end{definition}
The client can authenticate the secret using the HMAC version of [Eq. \ref{eq:5}]:
\vspace{-0.1in}
\begin{equation} \label{eq:7}
HMAC(\tilde{K}, \tilde{S}) \stackrel{?}{=} \widetilde{HMAC(K,S)}.
\end{equation}
With SHA-2 256 or higher used for $H()$ [\cite{RFC6234}], the collision rate is less than $2^{-128}$, and thus, it is considered cryptographically secure.
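As an illustration of the check in [Eq. \ref{eq:7}], the following Python fragment signs and verifies a secret with the standard \texttt{hmac} and \texttt{hashlib} modules; representing $K$ and $S$ as byte strings is an assumption made only for the example.
{\small
\begin{verbatim}
import hmac
import hashlib

def sign_secret(key: bytes, secret: bytes) -> bytes:
    """Compute HMAC-SHA256(K, S), i.e., the tag shared along with the secret."""
    return hmac.new(key, secret, hashlib.sha256).digest()

def verify_secret(key: bytes, secret: bytes, tag: bytes) -> bool:
    """Eq. (7): recompute the tag from the retrieved secret and compare it,
    in constant time, with the retrieved tag."""
    expected = hmac.new(key, secret, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
\end{verbatim}
}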
\paragraph{B. AMD with a Random Number} \hfill \break
[\cite{CR2008}] have proposed an Algebraic Manipulation Detection (AMD) code to detect any modification of secrets with a probability close to 1. [\cite{WK2011}] later generalized this code with a flexible construction.
Unlike HMAC, it operates over finite fields and its security level is adjustable by block size $b$. The AMD encoding is defined as follows
\begin{definition}
\normalfont Let $K = (K_1, K_2, \cdots , K_m)$, where $K_i \in GF(2^b)$ is a randomly generated $b$-bit vector. A $g^{th}$-order Generalized Reed-Muller code ($GRM$) with $m$ variables consists of all codewords ($f(0), f(1), \cdots , f(2^{bm}-1)$), where $f(K)$ is a polynomial in $K = (K_1, K_2, \cdots , K_m)$ of degree up to $g$.
Let
\vspace{-.05in}
\begin{equation*}
A(K)=
\begin{cases} \bigoplus_{i=1}^{m} K_i^{g+2}, & \text{if $g$ is odd;}
\\
\bigoplus_{i=2}^{m-1} K_1K_i^{g+1}, & \text{if $g$ is even and $m>1$;}
\end{cases}
\end{equation*}
where $\bigoplus$ is the accumulated sum in $GF(2^b)$. Let
\vspace{-.05in}
\begin{equation*}
B(K, S)= \bigoplus_{1 \leq j_1 + j_2 + \cdots + j_m \leq g+1} y_{j_1, j_2, \cdots, j_m} \prod_{i=1}^m K_i^{j_i},
\end{equation*}
where $\prod_{i=1}^m K_i^{j_i}$ is a monomial of $K$ of degree between 1 and $g+1$, and $\prod_{i=1}^m K_i^{j_i} \notin \triangle B(K, S)$, which is defined by:
\begin{equation*}
\begin{cases} \{K_1^{g+1}, K_2^{g+1}, \cdots, K_m^{g+1}\}, \text{if $g$ is odd;}
\\
\{K_2^{g+1}, K_1K_2^{g}, \cdots, K_1K_m^{g}\}, \text{if $g$ is even and $m>1$}.
\end{cases}
\end{equation*}
Let $f(K, S) = A(K) \oplus B(K, S)$, then a generalized AMD codeword is composed of the vectors $(S, K, f(K, S)) $, where $S$ is the information portion, $K$ the random vector, and $ f(K, S) $ the redundancy signature portion [\cite{WK2011}]. \hfill\(\blacksquare\)
\end{definition}
\begin{remark}
\normalfont If the attack involves a non-zero error on the information $S$, which is the major purpose of almost all attacks, then in $f(K, S)$ the term $A(K)$ can be omitted [\cite{BK2017}]. Furthermore, if only one random number vector is used, the encoding equation can be simplified to
\begin{equation} \label{eq:AMD}
AMD(K, S) = f(K, S ) = \bigoplus_{1 \leq j_1 + \cdots + j_i + \cdots + j_m \leq g+1} S_{j_1, \cdots, j_i, \cdots, j_m} K^{j_i}
\end{equation}
where $S_{j_i}$ is a $b$-bit block of $S$. \hfill\(\blacksquare\)
\end{remark}
The client can authenticate the secret using the AMD version of [Eq. \ref{eq:5}]:
\begin{equation} \label{eq:9}
AMD(\tilde{K}, \tilde{S}) \stackrel{?}{=} \widetilde{AMD(K,S)}.
\end{equation}
The probability of mis-detecting a distortion of $S$ in [Eq. \ref{eq:9}] has an upper bound, $\frac{g}{2^b}$ [\cite{CR2008}], where $g$ is a very small number in most constructions. With 128 bits (or larger) selected as $b$, the security level of AMD codes will be on the same order of HMAC ($2^{-128}$ or less in attack mis-detection rate).
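To make the encoding of [Eq. \ref{eq:AMD}] and the check of [Eq. \ref{eq:9}] concrete, the sketch below implements the single-random-vector case with the monomial degrees used later in the numeric example of Section \ref{example} ($S_0K \oplus S_1K^2 \oplus \cdots$). It relies on the $GF(2^b)$ helpers assumed in the Shamir sketch of Section \ref{sec:Original} and is illustrative rather than a hardware-ready AMD encoder.
{\small
\begin{verbatim}
def amd_sign(key, secret_blocks):
    """Simplified single-random-vector AMD signature over GF(2^b):
    f(K, S) = S_0*K ^ S_1*K^2 ^ ...; gf_mul and gf_pow are the GF(2^b)
    helpers from the earlier Shamir sketch."""
    sig = 0
    for j, block in enumerate(secret_blocks, start=1):
        sig ^= gf_mul(block, gf_pow(key, j))
    return sig

def amd_verify(key, retrieved_blocks, retrieved_sig):
    """Eq. (9): recompute the signature from the retrieved blocks and the
    regenerated key, and compare it with the retrieved signature."""
    return amd_sign(key, retrieved_blocks) == retrieved_sig
\end{verbatim}
}
Since a nonzero distortion of the blocks changes the signature for all but at most $g$ of the $2^b$ possible keys, the mis-detection probability of this check stays within the $g/2^b$ bound quoted above.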
\textit{\textbf{Note:}} Although HMAC and AMD codes are different approaches for authenticating the retrieved secrets, there is no essential difference in their design philosophy as [Eq. \ref{eq:7}] and [Eq. \ref{eq:9}] have shown.
It should be noted that there are two potential drawbacks to the secret verification approach. First, no method for transmitting the MAC key, $K$, from the dealer to the client is specified for either approach. In addition, while these approaches can detect the distortion of the secret, they cannot identify the cheaters or restore the correct secret.
\section{Vulnerabilities of the Conventional Secure TSS Schemes}
\label{sec:vulnerable}
In this section, we illustrate the vulnerabilities associated with conventional secure schemes under the attack model defined previously. Because of the distributed nature of IoT systems, it is not unusual to have attacks of a scale unanticipated by designers. The demand for a more secure and robust confidential information sharing scheme for IoT systems is the main motivation for the approach proposed in the next section.
\subsection{Passive Attack: Acquiring the Original Secret}
Usually an assumption has to be made that $c_{est} < t$ so that a group of all cheaters cannot retrieve the secret by themselves. However, a case with more actual cheaters than estimated, such that $c_{act} \geq t > c_{est}$, could exist. With any $t$ of them, the cheaters can easily acquire the original secret by [Eq. \ref{eq:2}].
\subsection{Active Attack: Making the Secret Inaccessible}
Here we assume the IoT system's TSS is already equipped with a share verification module. As mentioned in Section 3.2.1, the essence of such a module is to encode the shares into a codeword, whose validity can be verified by the RS decoding algorithm. Although RS codes are known for their strong error correction (tolerating $c_{est} < n/3$ cheaters), their encoding procedure is linear and thus susceptible to cheating exploits.
If the number of cheaters satisfies $(n/3 < c_{act} < n-t+1)$, then although the RS decoder can still raise an alarm for cheating, the distortions are already beyond the share error correction capability of the RS code. Therefore, the system is unable to retrieve the secret or identify the cheaters.
\subsection{Active Attack: Forging a Legal Secret}
\label{sec:forge}
If the number of cheaters satisfies $ (n-t+1 \leq c_{act} \leq n) $, they will be able to manipulate the entire system. For instance, the cheaters can pick a share distribution polynomial different from [Eq. \ref{eq:1}] with random coefficients $b_i$ and their own forged secret $\tilde{S}$:
\begin{equation} \label{eq:10}
h'_{i} = b_{0} \oplus b_{1}D_{i} \oplus b_{2}D_{i}^2 \oplus \cdots \oplus \tilde{S}D_i^{t-1}
\end{equation}
The new shares $h'_{i}$ of the cheaters will be the evaluation of [Eq. \ref{eq:10}] by the same IDs $D_i$. When $c_{act} \geq n-t+1$, the cheaters' shares will form a new legal RS codeword which will never be detected by the RS decoder. The secret reconstruction will then submit the cheaters' secret $\tilde{S}$ to the client. If the client uses it on his/her own important applications such as digital signatures, the attackers can effortlessly break those applications.
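A short sketch of this collusion is given below, reusing the helpers from the Shamir sketch of Section \ref{sec:Original}; the values involved are illustrative. Because the forged shares again lie on a degree-$(t-1)$ polynomial, they form a legal RS codeword whenever $c_{act} \geq n-t+1$ and pass the share verification of Section 3.2.1.
{\small
\begin{verbatim}
def forge_shares(fake_secret, cheater_ids, t):
    """Eq. (10): the colluding holders agree on fresh random coefficients
    and their own secret, then re-evaluate the polynomial at their public
    IDs.  The result is indistinguishable from an honest distribution."""
    coeffs = [secrets.randbelow(2 ** B) for _ in range(t - 1)] + [fake_secret]
    return [poly_eval(coeffs, d) for d in cheater_ids]
\end{verbatim}
}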
\begin{example}
\label{example1}
\normalfont A secret sharing system has a secret $S = 111$ in the $GF(2^3)$ finite field. It requires $t=2$ shareholders to reconstruct the secret every time. The following share distribution polynomial is used to generate the shares:
\[
h_{i} = a_{0} \oplus SD_{i} = 010 \oplus 111D_{i}.
\]
The protocol is designed in such a way that up to 1 cheater can be tolerated. Therefore, in the secret reconstruction stage there will be $n = 3c_{est}+1 = 4$ shareholders involved. Suppose that in the secret reconstruction, shareholders with IDs $ D_0 = 001, D_1 = 010, D_2 = 011, D_3 = 100 $ are involved. The shares distributed to them are $ h_0 = 101, h_1 = 111, h_2 = 010, h_3 = 001$. These 4 shares form a legal RS codeword $v = (101, 111, 010, 001)$ with distance $d=n-t+1=3$ and it can correct up to 1 error.
Now all 4 of them are cheating collusively, and they have selected their own secret $\tilde{S} = 100$ and a different share distribution polynomial:
\[
h'_{i} = b_{0} \oplus \tilde{S}D_{i} = 001 \oplus 100D_{i}.
\]
Thus their shares will be maliciously changed to $h_0 = 101, h_1 = 010, h_2 = 110, h_3 = 111$, which is also a legal codeword $v' = (101, 010, 110, 111)$ of a $(n, t, d) = (4, 2, 3)$ RS code. This codeword will unfortunately be considered a valid codeword by the RS decoding algorithm [\cite{SG2003}] and there will be no cheating alarm. As a result, the fake secret $\tilde{S} = 100$ is retrieved from those shares under [Eq. \ref{eq:2}]. During the entire procedure the cheating will not be detected. \hfill\(\square\)
\end{example}
\subsection{Active Attack: Framing Honest Shareholders}
\label{sec:framing}
Another vulnerability that cheaters can exploit when $ (n-t+1 \leq c_{act} \leq n) $ is to frame honest shareholders, so that the decoder treats the honest parties as ``cheaters" and cheaters as ``honest shareholders."
If $c_{act}$ is large enough that the number of honest shareholders is $ n-c_{act} \leq \frac{n-t}{2} $, then the honest shareholders are within the RS decoder's error correction capability. Since all of the cheaters' shares are generated by the same forged secret sharing polynomial, the honest minority will be treated as cheaters and ``corrected." The cheaters' fake secret will be regarded as the valid secret as the result of [Eq. \ref{eq:2}].
\begin{example}
\normalfont Suppose that we have the same secret sharing system as in Example \ref{example1}. Let us have three shareholders \{$D_0 = 001, D_1 = 010, D_2 = 011$\} as cheaters, and shareholder $ D_3 = 100 $ is an honest participant. The codeword for the shares submitted to the RS decoder will be $v' = (101, 010, 110, 001)$. $v'$ will be decoded as $(101, 010, 110, 111)$ which is the cheaters' codeword. Shareholder $D_3=100$ will be labeled as a ``cheater". Consequently, the forged secret $\tilde{S}=100$ (as in Example \ref{example1})
will be retrieved. \hfill\(\square\)
\end{example}
\subsection{Active Attack: Against Secret Verification}
As mentioned above, one can design an IoT system with secret verification TSS capabilities. Although such a design has a high probability of detecting any number of share distortions, it alone is not able to identify the cheaters nor correct the shares. In addition, there is another problem that needs to be addressed: how to securely pass the MAC key $K$ from the dealer to the client (as in Fig. \ref{fig:signed}) in order to conduct the secret authentication, given that the transmission channel might be eavesdropped on.
There can be more types of attacks besides the ones listed above. In particular, when the number of cheaters is beyond estimation, the entire system can be subject to total manipulation. Therefore, there is a demand for a more secure and resilient scheme to handle such severe attacks.
\section{A Secure and Robust Secret Sharing Scheme for IoT}
\label{sec:Proposed}
In this section, we propose a new secure and robust secret sharing scheme for IoT systems. Compared to existing secret sharing schemes, which offer limited protection against cheaters, the advantages of the proposed scheme are:
\begin{enumerate}
\item The proposed scheme protects both the confidentiality and the integrity of the secret;
\item The proposed scheme is able to detect and identify the cheaters up to the theoretical upper bound;
\item The proposed scheme uses the Physical Unclonable Functions (PUF) to ensure the security of the cryptographic key update;
\item The proposed scheme works in an adaptive manner, such that a more powerful module will only be activated when the previous module fails. Thus, the scheme functions in a cost-efficient way and consumes a minimum of resources on average.
\end{enumerate}
The following subsections are organized to present an overview of the proposed scheme, a detailed introduction of the modules of this scheme, and, finally, a simple numeric example to demonstrate the scheme.
\subsection{Overview of the Proposed Secure Secret Sharing Scheme}
The proposed scheme has four stages to ensure the basic functionality and authenticity of the secret sharing.
\textbf{Stage 1: Dealer - Encoding and Distribution of the Secret} \\
First, the dealer will encode the secret $S$ with an Encrypt-then-MAC function $EtM()$ to $E = EtM(K, S)$, where $K$ is randomly picked from the dealer's repository, which stores the challenge and response pairs (CRPs) of the client's PUF. Then the dealer distributes $E$ using [Eq. \ref{eq:1}] to $n$ shareholders. The detailed key transmission protocol will be introduced in later subsections.
\textbf{Stage 2: Client - Secret Retrieving} \\
The client will select an arbitrary set of $t$ shareholders to participate in the secret retrieving using [Eq. \ref{eq:2}]. The retrieved secret will be authenticated by [Eq. \ref{eq:7} or \ref{eq:9}] by the $K$ generated at the client end. If the authentication claims the secret is valid, then it is considered a successful secret reconstruction with no cheating. If not, the scheme moves to Stage 3 for share correction.
\textbf{Stage 3: Client - Share Error Correction} \\
This stage uses the Reed-Solomon error correction module in the classic protocol. Here, $n = 3c_{est} + 1$ shareholders will be invited to participate in the protocol, where $c_{est}$ is the number of estimated cheaters defined by the system. The RS decoder will try to correct the shares and then send them back to the secret reconstruction and verification modules at the client end. If the protocol passes both the share correction (by the RS decoder module) and secret verification (by the authentication module), then the secret reconstruction is successful. When $c_{act} < n/3$, the cheater tolerance probability is $100\%$. If either module fails, then the protocol ascends to its fourth stage, indicating that the actual number of cheaters is greater than $n/3$.
\textbf{Stage 4: Client - Group Testing} \\
This stage will be activated if the previously retrieved secret is not legal. It will involve up to $n$ shareholders, among whom there are at least $n/3$ cheaters. The client will generate a group testing pattern which is able to identify up to $ c_{est} = n - t $ cheaters with a minimum number of $t$ honest holders. Even if there are more than $n - t$ cheaters, it is still able to detect the cheating, although the correct secret is beyond reconstruction because there are not enough honest holders.
The work flow of the proposed scheme is shown below.
\begin{figure}[h]
\begin{center}
\includegraphics[width=3.5in]{Flowcharts/Flowchart_BW2.pdf}
\caption{\small \textbf{Stage 1} and \textbf{2} are sufficient if the number of actual cheaters $c_{act} = 0$. If cheating is detected by \textbf{Stage 2}, then \textbf{Stage 3} with RS decoder is called under the assumption of $c_{est} < n/3$. If \textbf{Stage 3} fails then \textbf{Stage 4} with group testing is able to identify $n/3 \leq c_{est} \leq n - t$ cheaters and retrieve the correct secret. If $c_{act}$ is even beyond this scale, an additional invitation module can be introduced to resolve the issue.
\label{fig:Flowchart_PUF}}
\vspace{-.2in}
\end{center}
\end{figure}
\subsection{Secret Encoding}
In order to perform the obscuration and authentication of the secret, we apply the Encrypt-then-MAC function to encode the original secret $S$ to $E = EtM(K, S) = ENC(K, S) || MAC(K, ENC(K, S))$. The encryption function $ENC()$ can be the standard AES or another lightweight approach, and the $MAC()$ function can be either HMAC, with a fixed security level $P_{miss}$, or AMD codes, with a flexible $P_{miss}$, as mentioned in Section \ref{sec:secret-verify}. AMD codes are able to trade off between the security level and hardware cost by adjusting the vector size $b$, which makes them an ideal choice for IoT systems with limited resources. Therefore, AMD may be a better choice for this class of systems.
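A minimal Encrypt-then-MAC sketch is given below. It is illustrative only: the keystream is derived with SHA-256 in counter mode as a toy stand-in for a real cipher such as AES, the same key is reused for encryption and authentication purely to mirror the $EtM(K, S)$ notation (a deployment would derive separate keys), and the byte-string types are assumptions made for the example.
{\small
\begin{verbatim}
import hashlib
import hmac

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR the plaintext with a SHA-256-derived keystream; a toy stand-in
    for ENC(), not a substitute for AES in a real deployment."""
    stream = b""
    counter = 0
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(p ^ s for p, s in zip(plaintext, stream))

def etm_encode(key: bytes, secret: bytes) -> bytes:
    """E = ENC(K, S) || MAC(K, ENC(K, S))."""
    cipher = toy_encrypt(key, secret)
    tag = hmac.new(key, cipher, hashlib.sha256).digest()
    return cipher + tag
\end{verbatim}
}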
For some distributed systems without a client end, it is not possible to maintain the confidentiality of the secret if more than $t$ devices are compromised. This is because the TSS scheme for this case entrusts the secret to the devices themselves. However, for other IoT systems with a client end like the Odysseus, it is possible to protect the privacy of the secret even if $c_{act} \geq t$ with the help of the client. Even if the attackers have compromised more than $t$ devices, they will only acquire the cipher, but not the secret's plaintext.
However, there is a critical issue of transmitting the EtM key to the client securely, which we discuss below.
\subsection{EtM Key Transmission}
The core of this proposed scheme's security is to establish a secure transmission channel for $K$ that is
\addtolength{\leftskip}{0mm}
\begin{itemize}[leftmargin=\dimexpr\parindent+3mm+0.5\labelwidth\relax]
\renewcommand{\labelitemi}{\scriptsize$\bullet$}
\item \textbf{Eavesdrop resistant}: if the cheaters eavesdrop on the channel, they should not acquire any knowledge of $K$; \vspace{-0.1in}
\item \textbf{Easy to update}: it should be easy and secure to update $K$ on both the dealer and the client sides; \vspace{-0.1in}
\item \textbf{Unforgeable}: a cheater should not be able to predict, duplicate, or forge the keys; \vspace{-0.1in}
\item \textbf{Unique}: in the case of a multi-client secret sharing system, different clients should have different sets of keys. \vspace{-0.1in}
\end{itemize}
Based on the criteria above, a Physical Unclonable Function (PUF) based approach is an excellent and fitting solution. Another choice is to use public and private key pairs. Considering that Odysseus and many other IoT systems are hardware based, it is very convenient and natural to implement PUFs on them, so we will use PUFs to facilitate the transmission of $K$ in this paper. Although the concept of PUF has been known since 1983 [\cite{DB1983}], the term PUF only came into wide use in 2002 [\cite{BG2002}]. A PUF is a piece of hardware that produces unpredictable responses upon challenges due to manufacturing variations. PUFs are both easy to make and hard to duplicate, even when the exact same circuit layout and manufacturing procedure are used. A PUF can be made from a device's memory cells or circuits without modifying the device's architecture. Because of its attributes of randomness and uniqueness, PUF provides an inexpensive and integrated solution for random number or secret key generation, dynamic authentication, and identification [\cite{MY2016}].
The PUF serves as a cryptographic primitive in the manner of challenge-response pairs (CRPs). Each PUF's output (response) is a non-linear function of the outside input (challenge) and the PUF’s own physical, intrinsic, and unique diversity, its ``Silicon Fingerprints" [\cite{EE2010}]. Given the same challenge, the same PUF design on different circuits will return different responses, which cannot be predicted by just having the challenge vector. Therefore, PUF is an ideal choice in facilitating the transmission of $K$.
\subsubsection{Key Transmission Protocol}
\begin{algorithm} \label{alg: CRP}
\normalfont For the $k^{th}$ round of secret sharing, denote the secret as $S_k$, the arbitrarily selected challenge and response of the client's PUF as $CHL_k$ and $K_k$ respectively. Then the EtM key $K$ is transmitted from the dealer to the client as follows
\begin{enumerate}
\item When a client registers with the dealer, the dealer challenges the client's intrinsic PUF with a set of inputs and stores its CRPs; \vspace{-0.05in}
\item Before $S_k$ is distributed, the dealer selects an arbitrary CRP and uses its response $K_k$ to encode the secret with $EtM()$ to $E_k$. At the same time, the challenge $CHL_k$ takes the position of the share distribution polynomial's free coefficient. Therefore [Eq. \ref{eq:1}] becomes:
\begin{equation} \label{eq:11}
h_{i} = CHL_k \oplus a_{1}D_{i} \oplus a_{2}D_{i}^2 \oplus \cdots \oplus E_kD_{i}^{t-1} .
\end{equation}
Then the encoded secret $E_k$ is distributed in the form of shares to the devices of the IoT system;
\item When $S_k$ needs to be retrieved, $t$ holders will turn in their IDs and shares to the client; \vspace{-0.05in}
\item The client uses [Eq. \ref{eq:2}] to retrieve the encoded secret $E_k$, and by another Lagrange interpolation formula the client calculates $CHL_k$:
\begin{equation} \label{eq:12}
CHL_k = \bigoplus_{i=0}^{t-1} {\frac{D_{i} \cdot h_{i}}{\prod_{j=0,j \neq i}^{t-1} {(D_{i} \oplus D_{j})}}}.
\end{equation}
\item The client takes $CHL_k$ to its PUF and regenerates the corresponding response $K_k$, which is the same key used by dealer to EtM $S_k$. This $K_k$ is used to authenticate and decrypt the retrieved encoded secret $E_k$. \hfill\(\blacksquare\)
\end{enumerate}
\end{algorithm}
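The following sketch walks through the dealer and client sides of the key transmission protocol above. The \texttt{SimulatedPUF} class, the hypothetical \texttt{encode\_secret} callback, and the enrollment dictionary are stand-ins introduced only for illustration; a real deployment would use the client's physical PUF and the EtM encoder of Section 5.2. The sharing helpers \texttt{poly\_eval}, \texttt{reconstruct}, and \texttt{predict\_share} are those sketched earlier.
{\small
\begin{verbatim}
import secrets

class SimulatedPUF:
    """Stand-in for the client's intrinsic PUF: responses are fixed per
    challenge and unpredictable without access to this object."""
    def __init__(self):
        self._table = {}
    def response(self, challenge):
        if challenge not in self._table:
            self._table[challenge] = secrets.randbelow(2 ** B)
        return self._table[challenge]

def dealer_distribute(enrolled_crps, secret, ids, t, encode_secret):
    """Steps 1-2: pick an enrolled CRP, encode the secret with the
    response, and embed the challenge as the free coefficient (Eq. 11).
    encode_secret(K, S) is a hypothetical EtM/AMD encoder (Section 5.2)."""
    chl, key = secrets.choice(list(enrolled_crps.items()))
    e = encode_secret(key, secret)
    coeffs = [chl] + [secrets.randbelow(2 ** B) for _ in range(t - 2)] + [e]
    return [poly_eval(coeffs, d) for d in ids]

def client_retrieve(puf, ids, shares, t):
    """Steps 4-5: recover E (leading coefficient, Eq. 2) and CHL (free
    coefficient, i.e. the interpolated polynomial evaluated at x = 0,
    as in Eq. 12), then regenerate K from the local PUF."""
    e = reconstruct(ids[:t], shares[:t])
    chl = predict_share(ids[:t], shares[:t], 0)
    key = puf.response(chl)
    return key, e
\end{verbatim}
}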
The Odysseus system (or other IoT systems) equipped with the proposed scheme will have the workflow shown in Fig. 6.
\begin{figure}[h]
\begin{center}
\includegraphics[width=3.5in]{Flowcharts/Proposed.pdf}
\caption{\small The dealer now shares both the encoded secret and the challenge to the devices. Once the client retrieves the secret, stage 2 to 4 in Section 5.1 will be performed to identify the cheaters (if any).
\label{fig:proposed}}
\end{center}
\end{figure}
The advantage of this protocol is that $CHL_k$ leaks no information about $K_k$. Even if $t$ or more cheaters calculate $CHL_k$, they are still not able to acquire the corresponding $K_k$, because the CRPs of a PUF are not predictable. Moreover, if the PUF module generates an erroneous $\tilde{K_k}$ due to aging, temperature variations or device instability, the fuzzy extractor is able to correct the error using its ECC feature.
When a new secret $S_l$ is about to be distributed, the dealer can select another CRP of the PUF to EtM the secret, and embed the new challenge $CHL_l$ to [Eq. \ref{eq:11}]. This makes the update of the key to $K_l$ simple and secure.
\subsubsection{The Selection of PUFs for Secret Sharing}
Based on where the variation comes from, there are multiple types of PUFs. Delay PUFs and memory PUFs are the two popular implementations. A delay PUF uses the random variation in delays of wires and gates, and their race condition to generate the response bits. A memory PUF is based on the random initial state (1 or 0) of each memory cell.
Based on the size of the challenge-response pairs, there are weak and strong PUFs, which have different applications in security. Weak PUFs' CRP size grows linearly with the PUF size, while strong PUFs' CRPs grow exponentially.
In our design, we consider the frequent updates of the key (up to one key per secret). Thus, we have selected delay PUFs because of their large sets of CRPs. We use FPGAs to implement the secret sharing system with both the Ring Oscillator (RO) PUF, based on the race condition of two ROs, and the Arbiter PUF, based on the delay difference between two MUX chains [\cite{MS2010}]. We also improved the design of both to increase the Hamming distance among the responses, while developing a design automation tool (introduced in Section \ref{sec:Automation}).
\subsection{Cheater Identification by Group Testing}
\label{sec:cheater-identification}
In Section 3.2.2 we pointed out that secret verification alone does not identify the cheaters nor help to retrieve the correct secret. Therefore, in this paper we propose adaptive group testing, which works together with secret verification for cheater identification. It can locate up to $c_{est} = n - t$ cheaters, which is the theoretical upper bound. This means that in a $t$-threshold secret sharing scheme, among all the $n$ shareholders participating, our scheme needs as few as $t$ honest parties to retrieve the correct secret. The test construction follows.
\begin{construction} \label{cont_M}
\normalfont For any $t$-threshold secret sharing scheme, suppose among $n$ holders there are $c$ attackers where $ 0 \leq c \leq n-t $. A test pattern to identify the honest holders and attackers can be constructed as a binary matrix $M$ of size $T \times n$, where $T$ is the number of tests needed at most. The rows of $M$ consist of all of the different $n$-bit vectors with exactly $t$ 1's and so $T = \binom{n}{t}$. Each column of the matrix therefore has $\binom{n-1}{t-1}$ number of 1's. The 1's in each row (test) correspond to the shareholders participating in that particular test. Each test is a two-step procedure:
\begin{enumerate}
\item A secret reconstruction using [Eq. \ref{eq:2}] to retrieve the secret $\tilde{E}$ with its specific participants;
\item An authentication using [Eq. \ref{eq:7} or \ref{eq:9}] over $\widetilde{E}$ to verify the validity of the retrieved secret.
\end{enumerate}
The test syndrome is a $T$-bit binary vector $u$, where 0's in $u$ indicate the equality of [Eq. \ref{eq:7} or \ref{eq:9}], and 1's the inequality. \hfill\(\blacksquare\)
\end{construction}
Then the cheater identification algorithm is:
\begin{algorithm}
\normalfont For any $t$-threshold secret sharing scheme and its corresponding group testing matrix $M$, there are $n$ shareholders participating in the tests, indexed by $H = \{0, 1, 2, \cdots, n-1 \}$. Among the $n$ shareholders there are $c_{est}$ cheaters where $ n/3 \leq c_{est} \leq n-t $. Let $w = (w_0, w_1, \cdots, w_{n-1})$ be an $n$-digit vector and $w = u^{\top} \times M$, where $u$ is the $T$-bit binary test syndrome and $\times$ is the multiplication of regular arithmetic. The cheaters' indexes belong to the set $\{ l | \ w_l = \binom{n-1}{t-1} \}$, and the rest of the holders are honest. \hfill\(\blacksquare\)
\end{algorithm}
However, the testing technique above requires $\binom{n}{t}$ tests in total to identify the cheaters. This can be a large number when $n$ and $t$ are large. Therefore, its adaptive form, given below, drastically reduces the average number of tests to a linear formula.
\begin{algorithm} \label{alg:adaptive}
\normalfont For a test pattern $M$ of size $T \times n$ generated by Construction \ref{cont_M}, $\triangle T$ is the number of tests needed to find the first 0 (equality of [Eq. \ref{eq:7} or \ref{eq:9}]) in the test syndrome. The $n$ shareholders are indexed by $H = \{0, 1, 2, \cdots , n-1\} $. The $t$ honest holders identified by this test are indexed by $I = \{i_0, i_1, \cdots, i_{t-1} \}$. The system only needs to run at most $n-t$ more tests whose participants are $\{i_0, i_1, \cdots, i_{t-2}, j\}$, where $j \in H \backslash I $. Each test's syndrome indicates holder $j$ as an attacker or not by 1 or 0. The total number of tests needed to identify all holders is then at most $\triangle T + (n-t)$. \hfill\(\blacksquare\)
\end{algorithm}
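A compact sketch of Construction \ref{cont_M} and the identification algorithm above (in their non-adaptive form, for clarity) is given below. The callback \texttt{is\_valid} is an assumed helper that reconstructs the secret from a size-$t$ subset via [Eq. \ref{eq:2}] and authenticates it via [Eq. \ref{eq:7} or \ref{eq:9}]; rather than materializing the $\binom{n}{t} \times n$ matrix $M$, the fragment accumulates the vector $w = u^{\top} \times M$ test by test.
{\small
\begin{verbatim}
from itertools import combinations
from math import comb

def identify_cheaters(ids, shares, t, is_valid):
    """Run one test per size-t subset of holders (the rows of M) and flag
    as cheaters exactly those holders for which every test they joined
    failed, i.e. those with w_l = C(n-1, t-1)."""
    n = len(ids)
    fail_count = [0] * n                       # accumulates w = u^T x M
    for subset in combinations(range(n), t):   # one row of M per subset
        ok = is_valid([ids[i] for i in subset], [shares[i] for i in subset])
        if not ok:
            for i in subset:
                fail_count[i] += 1
    per_holder_tests = comb(n - 1, t - 1)
    return [i for i in range(n) if fail_count[i] == per_holder_tests]
\end{verbatim}
}
The adaptive variant of Algorithm \ref{alg:adaptive} stops this loop at the first passing subset and then runs at most $n-t$ further targeted tests, one per remaining holder.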
\subsection{Extra Invitation Module}
If the group testing module in Stage 4 cannot successfully identify the $c_{act}$ cheaters in the system, where $n-t < c_{act} \leq n$, then the number of honest shareholders is less than $t$.
At this point, our scheme will still raise the cheating alarm based on the secret authentication. Moreover, the protocol is adaptive enough to be extended to a further stage that includes an invitation module. This module can pull additional participants into the execution of the protocol and perform new rounds of group testing. From the hardware perspective, the invitation module can be power-gated and disabled when not in use.
\begin{algorithm}
\normalfont Let the number of honest shareholders in the current group testing be $\triangle t$ and $ 0 \leq \triangle t < t $. Suppose the system is able to identify an extra set of $t$ honest shareholders from another group. Then these $t$ honest parties can be combined into the current group with the modified group testing matrix of size $ \binom{n+t}{t} \times (n+t) $. With this new test pattern, the $\triangle t + t$ honest shareholders can be identified and the rest will be properly labeled as cheaters.\hfill\(\blacksquare\)
\end{algorithm}
\subsection{Numeric Examples} \label{example}
Here we present two illustrative examples to demonstrate the security of the proposed protocol. The first one will be under a passive attack and the second one under an active attack.
\begin{example}
\normalfont For an Odysseus system equipped with the proposed secret sharing scheme, there are $t$ cheaters who want to compute the original secret $S$ stealthily. However, they can only acquire $E = EtM(K, S)$ and $CHL$. Without the client's PUF, they are not able to have the response $K$ to $CHL$. Therefore, $S$ still remains unknown to the $t$ curious cheaters. \hfill\(\square\)
\end{example}
In the second example, for simplicity we will not perform the encryption function $ENC()$ in the EtM. For the MAC function, we will use $AMD()$, since it is able to work with very short vectors. Thus, this numeric example will be relatively small and easy to follow.
\begin{example}
\label{rasss-example}
\normalfont We start with a share distribution among the boards of a seven-board Odysseus system configuration. We deploy on the system the proposed secure TSS scheme with threshold $t=3$. The original secret is a digital signature $S \in GF{(2^{12})}$ where $S = 001111110000 = 0x3F0$. The RS decoder in this scheme is constructed under the assumption that there are at most 2 cheaters. However, in the actual scenario there are 4 devices that have been compromised by cheaters.
\textbf{\textit{Stage 1: Secret Encoding and Share Distribution}} \\
The original secret $0x3F0$ is first encoded by the AMD encoding equation [Eq. \ref{eq:AMD}]. Using \text{Definition 3.2} we choose $b=4$ such that the encoding and decoding are over $GF(2^4)$, $m=1$ such that the random vector has only one symbol, and $g=3$ such that $S$ is partitioned into 3 symbols $S = (S_0, S_1, S_2)$ where $S_0 = 0x3, S_1 = 0xF$, and $S_2 = 0x0$. Suppose the dealer has chosen a response from the client's PUF which is $K = 0x0006$ whose corresponding challenge is $CHL = 0xAAAA$. The original secret will be encoded to an AMD codeword $E = AMD(K, S)$ by:
\begin{equation*}
AMD(K, S) = S_0K \oplus S_1K^2 \oplus S_2K^3 = 0x1 \Rightarrow E = (0x3F01).
\end{equation*}
Then with the share distribution polynomial:
\begin{equation*}
h_{i} = CHL \oplus a_{1}D_{i} \oplus ED_{i}^2
\end{equation*}
where $a_1=0x5555$ is an arbitrarily chosen coefficient and $CHL, a_1, E \in GF(2^{16})$, this encoded secret is shared to seven Odysseus boards with IDs and shares $\{D_i : h_i\}$ = $\{1 : 0xC0FE\}$, $\{2 : 0xFC04\}$, $\{3 : 0x9650\}$, $\{4 : 0x0FB4\}$, $\{5 : 0x65E0\}$, $\{6 : 0x591A\}$, $\{7 : 0x334E\}$.
However, devices $\{3, 4, 6, 7\}$ have been compromised by cheaters and they have selected another secret $\tilde{S}=0xABCD$ and forged another share distribution polynomial:
\begin{equation*}
\tilde{h}_{i} = 0xAAAA \oplus 0x7777D_{i} \oplus 0xABCD\cdot D_{i}^2.
\end{equation*}
By their IDs, their shares are changed to: $\{3 : 0x2686\}$, $\{4 : 0xDBAF\}$, $\{6 : 0x9A2F\}$, $\{7 : 0x4695\}$.
\textbf{\textit{Stage 2: Secret Reconstruction and Verification}}\\
First, let us assume that Odysseus devices $\{2, 3, 4\}$ are selected to reconstruct the secret, of which $\{3, 4\}$ are cheaters. By the secret reconstruction [Eq. \ref{eq:2}] the retrieved secret is
\begin{equation*}
\widetilde{E} = 0x5522.
\end{equation*}
The reconstructed secret will be verified by the AMD decoder using [Eq. \ref{eq:9}]: $\widetilde{AMD(K, S)} \stackrel{?}{=} AMD(\widetilde{K},\widetilde{S})$. Through the computation over $GF(2^4)$ we have the following inequality:
\begin{equation*}
\widetilde{AMD(K, S)}
\neq [AMD(\widetilde{K},\widetilde{S}) = \widetilde{S_0}\widetilde{K} \oplus \widetilde{S_1}\widetilde{K}^2 \oplus \widetilde{S_2}\widetilde{K}^3].
\end{equation*}
Thus, cheating is detected and Stage 3 will be initiated under the assumption of $c_{est}=2$ cheaters.
\textbf{\textit{Stage 3: Share Error Correction}}\\
Under the RS decoder, $n = 3c_{est}+1 = 7$ shareholders will be involved and up to 2 shares can be corrected using an $(n, t, d) = (7, 3, 5)$ RS code. However, there are a total of $c_{act}=4$ cheaters $\{3, 4, 6, 7\}$, which is beyond the capability of this RS decoder. Therefore, the protocol moves to its fourth stage upon the failure of error correction.
\textbf{\textit{Stage 4: Group Testing}}\\
This stage is designed under the assumption that among all the 7 Odysseus boards from Stage 3, only $t = 3$ are not compromised by cheaters. The group testing matrix $M$ of size $T \times n$ can be constructed with \text{Construction 5.1}, where $T = \binom{n}{t} = 35, n = 7$. To save space $M$ is listed in its transposed form $M^{\top}$:
\newcommand\hlt[1]{\tikz[overlay, remember picture,baseline=-\the\dimexpr\fontdimen22\textfont2\relax]\node[rectangle, fill=blue!50, fill opacity=0.3, text opacity=1,] {$#1$};}
\[ \resizebox{\linewidth}{!}{%
$
\begin{array}{c|ccccc:ccccc:ccccc:ccccc:ccccc:ccccc:ccccc|ccccc}
& 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 & 21 & 22 & 23 & 24 & 25 & 26 & 27 & 28 & 29 & 30 & 31 & 32 & 33 & 34 & 35 \\ \hline
1 & \hlt{1} & 0 & 0 & 0 & 0 & \hlt{1} & \hlt{1} & \hlt{1} & \hlt{1} & \hlt{1} & \hlt{1} & \hlt{1} & \hlt{1} & \hlt{1} & \hlt{1} & \hlt{1} & \hlt{1} & \hlt{1} & \hlt{1} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
2 & \hlt{1} & \hlt{1} & 0 & 0 & 0 & \hlt{1} & \hlt{1} & \hlt{1} & \hlt{1} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \hlt{1} & \hlt{1} & \hlt{1} & \hlt{1} & \hlt{1} & \hlt{1} & \hlt{1} & \hlt{1} & \hlt{1} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
3 & \hlt{1} & \hlt{1} & \hlt{1} & 0 & 0 & 0 & 0 & 0 & 0 & \hlt{1} & \hlt{1} & \hlt{1} & \hlt{1} & 0 & 0 & 0 & 0 & 0 & 0 & \hlt{1} & \hlt{1} & \hlt{1} & 0 & 0 & 0 & 0 & 0 & 0 & \hlt{1} & \hlt{1} & \hlt{1} & \hlt{1} & \hlt{1} & 0 & 0 \\
4 & 0 & \hlt{1} & \hlt{1} & \hlt{1} & 0 & \hlt{1} & 0 & 0 & 0 & \hlt{1} & 0 & 0 & 0 & \hlt{1} & \hlt{1} & \hlt{1} & 0 & 0 & 0 & 0 & 0 & 0 & \hlt{1} & \hlt{1} & \hlt{1} & 0 & 0 & 0 & \hlt{1} & \hlt{1} & 0 & 0 & 0 & \hlt{1} & \hlt{1} \\
5 & 0 & 0 & \hlt{1} & \hlt{1} & \hlt{1} & 0 & \hlt{1} & 0 & 0 & 0 & \hlt{1} & 0 & 0 & \hlt{1} & 0 & 0 & \hlt{1} & \hlt{1} & 0 & \hlt{1} & 0 & 0 & \hlt{1} & 0 & 0 & \hlt{1} & \hlt{1} & 0 & 0 & 0 & \hlt{1} & \hlt{1} & 0 & \hlt{1} & 0 \\
6 & 0 & 0 & 0 & \hlt{1} & \hlt{1} & 0 & 0 & \hlt{1} & 0 & 0 & 0 & \hlt{1} & 0 & 0 & \hlt{1} & 0 & \hlt{1} & 0 & \hlt{1} & 0 & \hlt{1} & 0 & 0 & \hlt{1} & 0 & \hlt{1} & 0 & \hlt{1} & \hlt{1} & 0 & \hlt{1} & 0 & \hlt{1} & 0 & \hlt{1} \\
7 & 0 & 0 & 0 & 0 & \hlt{1} & 0 & 0 & 0 & \hlt{1} & 0 & 0 & 0 & \hlt{1} & 0 & 0 & \hlt{1} & 0 & \hlt{1} & \hlt{1} & 0 & 0 & \hlt{1} & 0 & 0 & \hlt{1} & 0 & \hlt{1} & \hlt{1} & 0 & \hlt{1} & 0 & \hlt{1} & \hlt{1} & \hlt{1} & \hlt{1} \\
\hline
\end{array}
$
}\]
Each test involves 3 boards and the secret retrieved by them is to be verified by [Eq. \ref{eq:9}]. Since boards $\{1, 2, 5\}$ are not compromised by cheaters, test 7 is the first test with syndrome 0.
Based on the adaptive \text{Algorithm 5.3}, $\triangle T = 7$. The system only needs to run the tests $\{1, 6, 8, 9\}$, whose participants are boards $\{1, 2, j\}$ where $j \in H \backslash I = \{3, 4, 6, 7\}$. Since tests 1 and 6 have already been run, only tests $\{8, 9\}$ remain. The actual number of tests performed is then $9 < \triangle T + (n-t) \ll \binom{n}{t} = 35$.
In this way, the Odysseus boards that have been hijacked by cheaters are identified as $\{3, 4, 6, 7\}$. The functional boards $\{1,2,5\}$ will be able to retrieve the encoded legal secret $E = 0x3F01$ and, therefore, the correct digital signature $S = 0x3F0$. \hfill\(\square\)
\end{example}
\section{Design Evaluation and Automation}
\label{sec:Automation}
In this section we will evaluate the proposed scheme and offer a design automation tool for it.
\subsection{Mis-detection Probability}
In the previous example, the AMD code works over $GF(2^4)$, where the error mis-detection probability is $\overline{P_{miss}} = \frac{3}{2^4}$ in the worst case. To increase the security level, one can simply have the protocol work over a larger field. If the system uses HMAC as the $MAC()$ function, then $P_{miss}$ is a fixed value close to 0. Therefore, we will only test the performance of the $AMD()$ under different block sizes.
In our experiments, $n/3 \leq c_{act} \leq n-t$. The sizes of the encoded secret $E$ are set to $\{8, 16, 32, 48, 64, 80, 96, 128\}$ bits, which are the cases for most real-world applications. Therefore, the AMD codes are over $GF(2^b)$ fields where $b \in \{2, 4, 8, 12, 16, 20, 24, 32\}$. A comparison is made between the experimental $P_{miss}$ (under $4 \cdot 2^{b}$ rounds of attack and defense) and the theoretical $\overline{P_{miss}}$.
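As a rough illustration of this experiment, the sketch below estimates the miss rate of the $AMD()$ check under a simplified attack model: an additive error chosen independently of $K$ (an assumption on our part; the exact experimental attack strategy is not spelled out here). The empirical rate should stay below the worst-case bound $g/2^b$.
\begin{verbatim}
import random

def gf_mul(a, b, poly, bits):
    r, hi = 0, 1 << bits
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & hi:
            a ^= poly
    return r

def amd_tag(K, S, poly, bits):
    tag, Kp = 0, 1
    for s in S:
        Kp = gf_mul(Kp, K, poly, bits)          # Kp = K^(i+1)
        tag ^= gf_mul(s, Kp, poly, bits)
    return tag

def estimate_pmiss(bits, poly, g=3, rounds=None, seed=1):
    rng, q = random.Random(seed), 1 << bits
    rounds = rounds or 4 * q                    # 4 * 2^b rounds, as in the experiments
    miss = 0
    for _ in range(rounds):
        S  = [rng.randrange(q) for _ in range(g)]
        dS = [rng.randrange(1, q)] + [rng.randrange(q) for _ in range(g - 1)]
        dT = rng.randrange(q)                   # forged tag offset
        K  = rng.randrange(q)                   # key unknown to the attacker
        forged = [s ^ d for s, d in zip(S, dS)]
        miss += amd_tag(K, forged, poly, bits) == amd_tag(K, S, poly, bits) ^ dT
    return miss / rounds

print(estimate_pmiss(bits=4, poly=0b10011))     # expected to stay below 3/16
\end{verbatim}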
\begin{figure}[h]
\begin{center}
\includegraphics[width=3.25in]{Figures/Pmask.pdf}
\caption{\small The experimental $P_{miss}$ matches the theoretical upper bound $\overline{P_{miss}} = \frac{g}{2^b}$. The experimental results are usually better than the upper bound because [Eq. \ref{eq:9}] does not always have $g$ solutions in the finite field. Also, when $b \geq 32$ the experiments did not miss a single attack. \label{fig:Exp_Theo}}
\end{center}
\vspace{-0.2in}
\end{figure}
\subsection{Hardware and Runtime Overheads}
In this subsection we evaluate the complexity of the proposed scheme under $c_{act} < n/3$ and $n/3 \leq c_{act} \leq n-t$, and the hardware and runtime overheads between these two settings. The hardware cost is measured on a Xilinx Virtex-7 XC7VX330T FPGA board, and the timing on an Intel\textsuperscript{\textregistered} Core\texttrademark \ i7-6700 @ 3.4GHz machine with 8 GB memory running Linux.
\begin{table}[htp]
\centering
\resizebox{\linewidth}{!}{%
\begin{threeparttable}
\renewcommand{\arraystretch}{1.2}
\caption{\large Hardware and Runtime Evaluations} \label{tab:overhead}
\begin{tabular}{|c||c|c|c||c|c|c||}
\hline
$ \boldsymbol{E}$ & \multicolumn{3}{c||}{\textbf{Hardware} (Slices)} & \multicolumn{3}{c||}{\textbf{Timing} ($10^6$ clock cycles)} \\
(bits) & $c_{act} < n/3$ & $c_{act} \geq n/3$ & Overhead & $c_{act} < n/3$ & $c_{act} \geq n/3$ & Overhead \\
\hline
\hline
{\textbf{8}} & 521 & 828 & 0.59 & 0.47 & 3.50 & 7.38 \\
\hline
{\textbf{16}} & 1492 & 2256 & 0.51 & 0.56 & 5.13 & 9.17 \\
\hline
{\textbf{32}} & 3977 & 6164 & 0.55 & 1.36 & 14.65 & 10.75 \\
\hline
{\textbf{48}} & 6114 & 9462 & 0.55 & 1.89 & 22.34 & 11.81 \\
\hline
{\textbf{64}} & 8462 & 12749 & 0.51 & 2.55 & 27.37 & 10.75 \\
\hline
{\textbf{80}} & 9895 & 15804 & 0.59 & 3.18 & 32.47 & 10.21 \\
\hline
{\textbf{96}} & 11873 & 18918 & 0.59 & 3.68 & 40.90 & 11.12 \\
\hline
{\textbf{128}} & 17842 & 27695 & 0.55 & 4.79 & 50.05 & 10.44 \\
\hline
\end{tabular}
\begin{tablenotes}
\begin{footnotesize}
\small
\item [I] With only 60\% hardware overhead, the cheater tolerance capability can be drastically improved.
\item [II] The timing overhead is efficiently reduced by Algorithm \ref{alg:adaptive}.
\end{footnotesize}
\end{tablenotes}
\end{threeparttable}
}
\vspace{-0.1in}
\end{table}
\subsection{Design Automation}
Although one can manually build a secret sharing system with a PUF on FPGAs, it still involves a considerable amount of work: writing the HDL code, fixing the routing and placement of the PUF's basic elements, configuring the bitstream, and so forth. Also, with a change in one parameter, the entire system may need to be modified. Therefore, we have designed an automation tool that simply takes the user's input parameters (secret size, security level (for AMD only; HMAC defaults to $2^{-128}$), total number of holders $n$, threshold $t$, and MAC function). In addition, we also provide a PUF automation tool to generate the PUFs based on user-specified response and challenge sizes.
In this tool, the system's HDL codes and PUF's fixed-routing configuration are pre-written in a folder named ``Templates." The tool will generate the system according to user specified parameters based on the files in this folder. For any future modification of the system, only the templates need to be adjusted, and the generator tool can stay unchanged.
\begin{figure}[h]
\begin{center}
\includegraphics[width=3.5in]{Flowcharts/GUIs.png}
\caption{\small The GUIs for the secret sharing system generator (left) and the PUF generator (right). With this tool, any researcher can generate his/her own customized secret sharing system in a few clicks to assist research on secret sharing and PUFs.
\label{fig:GUI1}}
\vspace{-0.4in}
\end{center}
\end{figure}
\section{Conclusion}
In this paper we have proposed a secure and robust scheme to share confidential information in IoT systems. This scheme uses Threshold Secret Sharing (TSS) to split the information into shares kept by all devices in the system, so that the malfunction of a single device will not harm the security of the entire system. In the case of more erroneous or rogue devices, this scheme ensures both the privacy and the integrity of that piece of information, even when a large number of sophisticated and coordinated attackers hijack the devices. The scheme is able to identify all of the compromised devices, while still keeping the secret unknown to, and unforgeable by, the attackers. In contrast, earlier secure schemes suffer from the leakage of secrets, the forgery of fake secrets, or even the misidentification of honest devices as cheaters. This scheme works in an adaptive manner, such that a more powerful (and power-consuming) security module will only be activated when the previous modules fail. Therefore, the average power consumption is minimized. This scheme also applies to other IoT systems with a structure similar to Odysseus.
\section{Introduction}
Long matchmaking waiting times can be a disaster for online games. However, such a situation is very common for new or fading games with few players. Even players with high ranks in popular games may experience long waiting times, sometimes longer than the game time. For example, public data from a \emph{League of Legends} server \cite{ref_lol} shows that the average waiting time reaches up to 45 minutes for the highest ranks, even though \emph{League of Legends} is a session-based game with relatively short sessions, averaging around 34 minutes according to the game developers \cite{ref_trace}.
Conventional wisdom may suggest attracting more (high quality) players to the game, which often involves huge investments in game design. While such investments certainly help, we focus on the rule design of games, which is the other determinant of a game's attractiveness.
We submit that a good rule design can significantly improve the game's attractiveness without any huge extra investment.
Figure \ref{fig:rule_design} illustrates our decomposition of the whole rule chain into a sequence of \emph{decision making} problems. We sequentially discuss the rule design of each component in the rule chain and their combinations. Using a queueing theoretic framework, we seek to answer these \emph{decision making} problems and offer more insights on better rule designs for online games.
\begin{figure}[t]
\centering
\includegraphics[width=2.8in]{ruleDesign.pdf}
\caption{The rule design for a 2v2 battle game.\vspace{-0.5cm}}
\label{fig:rule_design}
\end{figure}
\subsection{Related Work}
Most previous research on online game matchmaking focuses on understanding the network latency's impact. For example, Agarwal \emph{et al.} conduct the latency estimation for matchmaking through geolocation and network coordinate systems in \cite{ref_latency_sensitive}. Manweiler \emph{et al.} extend the research to mobile games in \cite{ref_switchboard}, where the network is much more unstable.
Surprisingly, the mathematical treatment of online game matchmaking waiting time has only emerged recently. V\'eron \emph{et al.} study user traces and analyze the influence of ranking on the waiting time in \cite{ref_trace}. Yuval \emph{et al.} propose to define the matchmaking cost in terms of the matched requests and the waiting time, and study the minimal cost perfect matching in \cite{ref_haste}. Malak and Deja examine the game structure's impact on matchmaking in \cite{ref_game_structure_sensitive}. Alman and McKay lay out theoretical foundations for matchmaking and propose a solution to extract the fairest and most uniform games in \cite{ref_foundation}.
\subsection{Our Contributions}
In contrast to previous works, we focus on understanding the waiting time through a queueing framework, and we seek to understand different rule designs' impact on the matchmaking mechanism. The principal contributions in this paper can be summarized as follows:
\begin{itemize} \setlength\itemsep{0.3em}
\item \emph{Decomposition of Rule Design}: Based on the queueing framework, we decompose the whole rule chain design into a sequence of decision making problems. Such a decomposition allows us to discuss the rule design of each component and their combinations, and thus to identify bottlenecks in rule designs.
\item \emph{Optimal Matchmaking Mechanism}: We use the 2v2 battle game to derive the optimal matchmaking mechanism. By comparing different service orders for diverse arrivals, we submit that the optimal matchmaking mechanism is the \emph{packing} service order and that it is robust to small network latency.
\item \emph{Static Rule Design}: We use choosing sides before the battle as an example to highlight the impact of static rule design on the matchmaking mechanism. We find that allowing all players to choose sides yields an \emph{unbounded} expected waiting time. This inspires us to examine the value of choice-free players for the matchmaking mechanism. We show that a small portion of choice-free players can already guarantee a reasonable expected waiting time.
\item \emph{Dynamic Rule Design}: As for the dynamic rule design, we consider a game that offers multiple zones for players of different levels, and examine whether the game designer should allow skilled players to battle in a junior zone. Coincidentally, we refer to the skilled players who are indifferent between the battle zones as choice-free players. Being choice-free benefits them as well as the whole system.
\end{itemize}
\vspace{0.1cm}
The rest of the paper is organized as follows. Section \ref{sec:systemModel} introduces our system model and revisits the basic queueing results for 2v2 battle games. By comparing different service orders, we derive the optimal matchmaking mechanism in Section \ref{sec:design}. Based on this optimal mechanism, we investigate the impacts of \emph{static} and \emph{dynamic} rule designs in Section \ref{sec:static} and Section \ref{sec:dynamic}, respectively. Section \ref{sec:conclusion} delivers the concluding remarks and points out possible future directions.
\section{System Model}
\label{sec:systemModel}
The matchmaking problem can be modeled in the queueing framework. When a player starts to search for a game, he or she joins the queueing system. Take the 2v2 game as an example: players can make a team with a friend and join the queueing system together, or choose to join the system on their own. Through the matchmaking mechanism, when there are sufficient players (in the 2v2 game setting, four players are sufficient), they may start a game and leave the queue.
\subsection{Assumptions}
To simplify our analysis and establish a stylized model, we make the following assumptions:
\begin{enumerate}
\item The network latency is negligible.
\item The arrival process of players follows Poisson Process.
\item All players are identical regarding their skills.
\end{enumerate}
\vspace{0.3cm}
\noindent \textbf{Remark}: We will relax the first assumption by examining the matchmaking mechanism's robustness to different service orders. The last assumption guarantees that the players are indifferent to their teammates as well as their rivals. In many games, only players with similar ranks can be matched together, and our subsequent analysis can be generalized to such cases straightforwardly. For other games, one may need to take into account players' self-achievement when designing the matchmaking, which is outside the scope of this paper. This assumption allows us to focus on understanding the waiting time in different matchmaking mechanisms, and our conclusions can serve as a benchmark for general online game matchmaking mechanism design.
\vspace{0.2cm}
In order to facilitate our understanding of the rule designs' impact on matchmaking mechanisms, in this section, we review the classical queueing theory results for two cases: the $k$-player game, and the 2v2 battle game.
\subsection{$k$-player Games}
The first type of game is the most basic one, the standard $k$-player game. One example could be the Mahjong game, which involves four identical players. The simple matchmaking mechanism waits for a total of $k$ players and then starts a game.
Denote the Poisson arrival rate of players by $\lambda$. Figure \ref{kplayerqueue} shows the Continuous Time Markov Chain (CTMC) of such a $k$-player queue. By solving the balance equations, we can prove the following lemma.
\begin{lemma}
For $k$-player games with Poisson arrival rate $\lambda$, any matchmaking mechanism, which ensures that players start a game as soon as there are at least $k$ of them in the queue, yields the same
expected waiting time $\mathbb{E}[T]$ as follows:
\begin{equation}
\mathbb{E}[T]^{\text{$k$-player}} = \frac{k - 1}{2 \lambda}.
\end{equation}
\label{thm_k_player}
\end{lemma}
\noindent \textbf{Proof}: It is sufficient to identify that the stationary probability distributions are as follows
\begin{equation}
\pi_0 = \pi_1 = \cdots = \pi_{k - 1} = \frac{1}{k}.
\end{equation}
Then, Little's Law \cite{ref_little} immediately yields the result. \hfill $\blacksquare$
\vspace{0.2cm}
\noindent \textbf{Remark}: When the game only requires a single player, i.e., $k=1$, our conclusion also holds. It simply degenerates to the trivial case where there won't be any queue in the system.
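The closed form in Lemma \ref{thm_k_player} is easy to check empirically. The short Monte Carlo sketch below (illustrative only; the parameter values are arbitrary) simulates the $k$-player queue and reports the average wait, which should be close to $(k-1)/(2\lambda)$.
\begin{verbatim}
import random

def kplayer_mean_wait(k, lam, n_games=200_000, seed=0):
    rng = random.Random(seed)
    t, queue, waits = 0.0, [], []
    for _ in range(n_games * k):
        t += rng.expovariate(lam)      # next Poisson arrival
        queue.append(t)
        if len(queue) == k:            # k players present: the game starts
            waits.extend(t - s for s in queue)
            queue.clear()
    return sum(waits) / len(waits)

print(kplayer_mean_wait(4, 1.0))       # close to (4 - 1) / (2 * 1.0) = 1.5
\end{verbatim}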
\begin{figure}[t]
\centering
\begin{tikzpicture}
\graph[->/.tip=stealth,
grow right=1.3cm,
nodes={circle,fill=black!15,inner sep=0,minimum size=8mm}]
{
{
[edges={bend left,"$\lambda$" above}]
0 -> 1 -> 2 -> 3[as=$\cdots$,fill=black!5] -> 4[as=$k - 2$] -> 5[as = $k - 1$];
};
5 ->[bend left=30,looseness=0.6,"$\lambda$" below] 0;
};
\end{tikzpicture}
\caption{The CTMC of the queueing system for $k$-player game. \vspace{-0.5cm}}
\label{kplayerqueue}
\end{figure}
\subsection{2v2 Games}
Another popular form of game involves battles, and can often be modeled as a 2v2 game. One popular example could be \textit{Clash Royale} \cite{ref_cr}. In such battle games, players can make a team with a friend to join the battle, or just go on a quick match by themselves.
Denote the individual arrival rate by $\lambda_1$, and the team (consisting of 2 players) arrival rate by $\lambda_2$. Thus, the total arrival rate of players into the queueing system is $\lambda_{\text{total}} = \lambda_1 + 2\lambda_2$.
We seek to understand the average waiting time, denoted by $T$, for all kinds of players. By analyzing the CTMC as shown in Fig. \ref{2v2queue}, we can prove the following Lemma.
\begin{lemma}
For 2v2 games with total arrival rate $\lambda_{\text{total}}$ with $\lambda_1>0$, any matchmaking mechanism, which starts a game as soon as there are at least four players in the queue, enjoys the same expected waiting time $\mathbb{E}[T]$ as follows
\begin{equation}
\mathbb{E}[T]^{\text{2v2}} = \frac{3}{2 \lambda_{\text{total}}}.
\end{equation}
\label{thm_2v2}
\end{lemma}
\noindent \textbf{Remark}: When $\lambda_1 = 0$, the 2v2 battle reduces to the 2-player case. In this case, Lemma \ref{thm_2v2} won't hold. Hence, we require the additional condition $\lambda_1>0$ to maintain the structure of the 4-player game without degeneration.
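Lemma \ref{thm_2v2} can also be verified numerically from the CTMC of Fig. \ref{2v2queue}: solve $\pi Q = 0$ for the four states and apply Little's law. The sketch below is an illustration of this check, not part of the proof.
\begin{verbatim}
import numpy as np

def expected_wait_2v2(lam1, lam2):
    Q = np.zeros((4, 4))
    for i in range(4):
        Q[i, (i + 1) % 4] += lam1      # individual arrival (a game starts at 4 players)
        Q[i, (i + 2) % 4] += lam2      # team arrival
        Q[i, i] = -(lam1 + lam2)
    A = np.vstack([Q.T, np.ones(4)])   # stationary equations plus normalization
    b = np.append(np.zeros(4), 1.0)
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    mean_queue = pi @ np.arange(4)     # E[number of players waiting]
    return mean_queue / (lam1 + 2 * lam2)   # Little's law

print(expected_wait_2v2(0.4, 0.3))     # = 1.5 = 3 / (2 * lambda_total)
\end{verbatim}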
\begin{figure}[t]
\centering
\begin{tikzpicture}
\graph[->/.tip=stealth,
grow right=2cm,
nodes={circle,fill=black!15,inner sep=0,minimum size=8mm}]
{
{
[edges="$\lambda_1$"]
{
[edges={above, bend left}]
0 -> 1 -> 2 -> 3;
};
{
[edges={below, bend left=50, looseness=0.9}]
3 -> 0;
};
};
{
[edges="$\lambda_2$"]
{
[edges={above, bend left=60, looseness=0.9}]
0 -> 2;
1 -> 3;
};
{
[edges={below, bend left=30}]
2 -> 0;
3 -> 1;
};
};
};
\end{tikzpicture}
\caption{The CTMC of a 2v2 queue.\vspace{-0.5cm}}
\label{2v2queue}
\end{figure}
\subsection{Criteria for Rule Design}
The rule design must consider the associated system performance measured by the total expected waiting time, and individual impacts assessed by the variance of waiting time.
While Lemma \ref{thm_2v2} is remarkable in guaranteeing the same expected waiting time for all players, different matchmaking mechanisms may yield diverse expected waiting times for the individual arrivals and the team arrivals. This leads us to employ the variance of the waiting time over all arrivals as the evaluation metric for different matchmaking mechanisms before the static or dynamic designs are involved. Once the rule designers begin to trade off their choices for the static and dynamic rule designs, the conditions of Lemma \ref{thm_2v2} no longer hold. Thus, both the system performance and the individual impacts are included as criteria for rule selection.
\section{Rule Design: the Basics}
\label{sec:design}
Experts in queueing theory may propose that the most straightforward mechanism is a First-In-First-Out (FIFO) central queue. However, from the players' point of view, we submit that the fairest but seemingly inefficient mechanism is to divide individual arrivals and team arrivals into two different queues. In this section, we first compare the central queue with the separate queue system, and then evaluate the performance of matchmaking mechanisms with various service orders.
\subsection{Central Queue or Two Queues?}
The two-queue system is fair to players in that a team of friends usually cooperates better than a team of two random players; team players may even collude offline.
We denote the waiting time for individual arrivals by $T_1$, and that for team arrivals by $T_2$. Such a two-queue system divides the 2v2 game into a 4-player game with arrival rate $\lambda_1$ and a 2-player game with arrival rate $\lambda_2$. Lemma \ref{thm_k_player} dictates
\begin{align}
\mathbb{E}[T_1]^{\text{2Q}} &= \frac{3}{2\lambda_1}, \\
\mathbb{E}[T_2]^{\text{2Q}} &= \frac{1}{2\lambda_2}.
\end{align}
Based on $\mathbb{E}[T_1]^{\text{2Q}}$ and $\mathbb{E}[T_2]^{\text{2Q}}$, standard mathematical manipulations yield
\begin{align}
\mathbb{E}[T]^{\text{2Q}} &= \frac{5}{2\lambda_{\text{total}}}, \\
\mathbf{Var}[T]^{\text{2Q}} &= \frac{3\lambda_1^2 - \lambda_1\lambda_2 + 11\lambda_2^2}{2\lambda_{\text{total}}^2\lambda_1\lambda_2}.
\end{align}
\vspace{0.1cm}
\noindent \textbf{Remark}: Obviously, in this straightforward yet fair mechanism, fairness pays! The expected waiting time for all kinds of arrivals is 67\% more than that of the class of mechanisms considered in Lemma \ref{thm_2v2}. Later, we will use numerical studies to highlight that the variance $\mathbf{Var}[T]^{\text{2Q}}$ is also significantly larger than that of many other mechanisms.
\subsection{Service Order for Central Queue}
When there is only one central queue for the 2v2 game, players are matched as soon as there are 4 players in the queue. However, even with the Poisson arrival assumption, there is a non-negligible probability of 5 players waiting in the queue: a team may arrive when 3 players are already waiting. For the 3 waiting players, if a team is among them, then we need to match the two teams together for the new game since teams cannot be broken up. Hence, the remaining hurdle is to choose a service order for the case of 3 individual players in the queue. Possible service orders include:
\begin{enumerate} \setlength\itemsep{0.3em}
\item \emph{FIFO}: According to players' arrival time, match the first two individual players with the team.
\item \emph{Packing}: Whenever there are two individual players in the queueing system, pack them into a team. Thus, we only need to match the teams in FIFO service order. In our 2v2 game setting, it is equivalent to FIFO.
\item \emph{Last-In-First-Out, LIFO}: In contrast to FIFO, LIFO matches the \emph{last} two individual players with the team according to their arrival time.
\end{enumerate}
\vspace{0.1cm}
\noindent \textbf{Remark}: It is straightforward to see that FIFO outperforms LIFO. However, it is important to consider both service orders due to \emph{network latency}. Small network latency may affect the true arrival orders of the players. Hence, evaluating both service orders' performance helps us assess the 2v2 battle game's sensitivity to service order and thus its sensitivity to small network latency.
Note that Lemma \ref{thm_2v2} dictates the mean waiting time is the same for all the three service orders:
\begin{equation}
\mathbb{E}[T]^{\text{FIFO}} =\mathbb{E}[T]^{\text{Packing}}=\mathbb{E}[T]^{\text{LIFO}}= \frac{3}{2\lambda_{\text{total}}}.
\end{equation}
However, the CTMC in Fig. \ref{2v2queue} does not contain enough information for us to evaluate the waiting time variance resulting from different service orders. Let states $2a$ and $3a$ denote that players in the queue are all individuals, and states $2b$ and $3b$ denote that there is a team in the queue. This leads to a CTMC in Fig. \ref{2v2FIFO}. It is worth noting that all the three service orders yield the same CTMC.
\begin{figure}[t]
\centering
\begin{tikzpicture}[->/.tip=stealth,nodes={inner sep=0,minimum size=8mm}]
\node[circle,fill=black!15] (0) at (0, 0) {0};
\node[circle,fill=black!15] (1) at (3, 0) {1};
\node[circle,fill=black!15] (2a) at (0, 3) {$2a$};
\node[circle,fill=black!15] (3a) at (3, 3) {$3a$};
\node[circle,fill=black!15] (2b) at (0, -3) {$2b$};
\node[circle,fill=black!15] (3b) at (3, -3) {$3b$};
\draw (0) edge[->,"$\lambda_1$" above] (1);
\draw (1) edge[->,"$\lambda_1$" above right=-0.3, near end] (2a);
\draw (2a) edge[->,"$\lambda_1$" above] (3a);
\draw (3a) edge[->,"$\lambda_1$" above left=-0.3, near end] (0);
\draw (2b) edge[->,"$\lambda_1$" above] (3b);
\draw (3b) edge[->,"$\lambda_1$" above] (0);
\draw (0) edge[->,"$\lambda_2$" left, bend right] (2b);
\draw (1) edge[->,"$\lambda_2$" right, bend left] (3b);
\draw (2a) edge[->,"$\lambda_2$" left] (0);
\draw (3a) edge[->,"$\lambda_2$" right] (1);
\draw (2b) edge[->,"$\lambda_2$" left, bend right] (0);
\draw (3b) edge[->,"$\lambda_2$" right, bend left] (1);
\end{tikzpicture}
\caption{The CTMC for single queue 2v2 game.\vspace{-0.0cm}}
\label{2v2FIFO}
\end{figure}
By solving stationary equations for this CTMC, we can obtain desired evaluation metrics for the three service orders:
\begin{equation}
\begin{aligned}
&\mathbf{Var}[T]^{\text{FIFO}} =\mathbf{Var}[T]^{\text{Packing}}= \frac{7 \lambda_1^2+\lambda_1 \lambda_2+4 \lambda_2^2}{2 \lambda_1 (\lambda_1+\lambda_2)^3} \\
&\ \ \ \ \ \ \ \ \ \ \ \ \ \ -\frac{2 \lambda_1 \lambda_2^3+3 \lambda_2^4}{\lambda_1 \lambda_{\text{total}} (\lambda_1+\lambda_2)^4}- \left [ \mathbb{E}[T]^{\text{FIFO}} \right ] ^2,
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
& \mathbf{Var}[T]^{\text{LIFO}} = \frac{5 \lambda_1^3 \lambda_2+7 \lambda_1^2 \lambda_2^2+10 \lambda_1 \lambda_2^3+2 \lambda_2^4}{2 \lambda_1 (\lambda_1+\lambda_2)^3 \left(\lambda_1^2+2 \lambda_1 \lambda_2+2 \lambda_2^2\right)} \\
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\frac{8 \lambda_1^2+7 \lambda_1 \lambda_2}{2 (\lambda_1+\lambda_2)^4} - \left [ \mathbb{E}[T]^{\text{LIFO}} \right ] ^2.
\end{aligned}
\end{equation}
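A direct way to compare the service orders is event simulation. The sketch below (illustrative only; parameter values are arbitrary) simulates the central queue and reports the empirical mean and variance of the waiting time for FIFO/packing and LIFO; the means coincide while the variances stay close to each other, in line with the formulas above.
\begin{verbatim}
import random, statistics

def simulate_2v2(lam1, lam2, order="FIFO", n_events=300_000, seed=7):
    rng = random.Random(seed)
    t, waits = 0.0, []
    singles, team = [], None                 # waiting individuals / waiting team
    for _ in range(n_events):
        t += rng.expovariate(lam1 + lam2)
        if rng.random() < lam1 / (lam1 + lam2):        # an individual arrives
            singles.append(t)
            if (team is not None and len(singles) == 2) or len(singles) == 4:
                waits.extend(t - s for s in singles)
                if team is not None:
                    waits.extend([t - team] * 2)
                singles, team = [], None
        else:                                           # a team of two arrives
            if team is not None:                        # two teams: match them
                waits.extend([t - team] * 2 + [0.0, 0.0])
                team = None
            elif len(singles) >= 2:                     # team + two individuals
                picked = singles[:2] if order == "FIFO" else singles[-2:]
                singles = singles[2:] if order == "FIFO" else singles[:-2]
                waits.extend([t - s for s in picked] + [0.0, 0.0])
            else:
                team = t
    return statistics.mean(waits), statistics.variance(waits)

for order in ("FIFO", "LIFO"):
    print(order, simulate_2v2(0.5, 0.25, order))       # mean ~ 3/(2*1.0) = 1.5
\end{verbatim}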
\begin{figure}[t]
\centering
\includegraphics[width=2.6in]{2v2_variance.pdf}
\caption{The variances of waiting time with normalized $\lambda_{\text{total}}$.\vspace{-0.5cm}}
\label{2v2graph}
\end{figure}
\begin{figure*}[t]
\centering
\begin{subfigure}{.3\textwidth}
\centering
\includegraphics[width=1.\linewidth]{packing0_2_new.pdf}
\caption{$\lambda_1 = 0.2$}
\label{fig:sub-first}
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\centering
\includegraphics[width=1.\linewidth]{packing0_5_new.pdf}
\caption{$\lambda_1= 0.5$}
\label{fig:sub-second}
\end{subfigure}
\begin{subfigure}{.3\textwidth}
\centering
\includegraphics[width=1.\linewidth]{packing0_8_new.pdf}
\caption{$\lambda_1 = 0.8$}
\label{fig:sub-third}
\end{subfigure}
\caption{Evaluate the inter-arrival time of 1v1 game approximation with diverse $\lambda_1$'s with normalized $\lambda_{\text{total}}$ (orange bins: histograms for 1v1 game approximation, blue bins: histograms for Poisson process with the same total arrival rate). }
\label{fig:approx}
\end{figure*}
\subsection{Performance Assessment and Rule Design}
Figure \ref{2v2graph} compares the variance of different matchmaking mechanisms by normalizing $\lambda_{\text{total}}$ to be one.
The single queue mechanisms all outperform the straightforward two queue mechanism. Note that $\mathbf{Var}[T]^{\text{2Q}}$ reaches minimum at
\begin{equation}
\frac{\lambda_1}{\lambda_{\text{total}}} = 2\sqrt{33} - 11 \approx 0.489.
\end{equation}
It is also worth noting that though it is straightforward to argue FIFO and packing outperform LIFO service order, their variances do \emph{not} differ much. Thus, \textbf{the optimal matchmaking mechanism is central queue in FIFO/packing service order}, but the whole family of single queue matchmaking mechanisms are \emph{insensitive} to service orders (and hence small network latency!) in the sense of reasonable waiting time variances.
\vspace{0.2cm}
\noindent \textbf{Remark}: For games with more players, say $k$v$k$ battle games, it is more complicated to design the optimal matchmaking mechanism. However, the intuition is that packing would be a good choice when $k$ is small. A relatively large $k$ may display the problem's combinatorial nature in that we need to solve a subset sum problem to decide which $k$ players to pack together whenever a new player arrives.
\vspace{0.2cm}
Having derived the optimal matchmaking mechanism, we are ready to examine different rule designs' impacts on the matchmaking mechanism through the expected waiting time. We are especially interested in two kinds of rule designs for 2v2 battle game: allowing players to choose sides, and allowing players to join 2 queues simultaneously in the 2-queue matchmaking mechanism.
\section{Static Rule Design: Choosing Sides}
\label{sec:static}
In some 2v2 games, there could be differences in various aspects between the two battle sides. Players may prefer one side over the other. This leads to the first rule design of our interest: it would be nice to allow players to choose sides, as this will obviously improve the players' satisfaction with the game. However, this rule may inevitably increase the expected waiting time for the game (Lemma \ref{thm_2v2} \emph{won't} hold with this additional rule!). We investigate these trade-offs in this section.
In the basic model analyzed in the previous section, there exist only \textit{finitely many} possible states. However, once side selection is allowed, the state space becomes \textit{infinite}. Choosing sides thus complicates the rule design.
This warrants exploring a simple but representative approximation of the 2v2 game. To obtain useful insights for rule design through a neat analysis, in this work we select the 1v1 game to approximate the 2v2 game. The first subsection demonstrates the conditions under which such an approximation is accurate. Then, we present insights obtained from the approximated game, which are crucial for rule designs.
\subsection{Approximation for 2v2 Game with Packing}
The 2v2 game with the packing service order can be viewed as a 1v1 game by counting each pack of 2 players as a single arrival. However, such a 1v1 game approximation will \emph{not} lead to a CTMC. This is because packing two individual players makes the inter-arrival time distribution non-exponential.
By normalizing $\lambda_\text{total}$ to be 1, we plot the histograms of the inter-arrival time of the 1v1 game approximation for diverse $\lambda_1$'s in Fig. \ref{fig:approx}. We also compare the histograms with a Poisson arrival process of the same total arrival rate. We can observe that the only hurdle preventing the distribution from being exponential is the arrival of individual players. As $\lambda_1$ increases, the 1v1 game approximation becomes less accurate.
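For reference, the packed arrival process behind these histograms can be generated as follows (an illustrative sketch; a team counts as one packed arrival, while two consecutive individual players close one pack).
\begin{verbatim}
import random

def packed_interarrivals(lam1, lam2, n=100_000, seed=3):
    rng = random.Random(seed)
    t, pending_single, arrivals = 0.0, False, []
    while len(arrivals) < n:
        t += rng.expovariate(lam1 + lam2)
        if rng.random() < lam1 / (lam1 + lam2):   # an individual player
            if pending_single:
                arrivals.append(t)                # the second individual closes a pack
            pending_single = not pending_single
        else:                                     # a team: one packed arrival
            arrivals.append(t)
    return [b - a for a, b in zip(arrivals, arrivals[1:])]

gaps = packed_interarrivals(0.2, 0.4)
print(sum(gaps) / len(gaps))      # mean gap ~ 1 / (lam1/2 + lam2) = 2.0
\end{verbatim}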
Nonetheless, in the subsequent analysis, we stick to the 1v1 battle game approximation for the 2v2 battle game to obtain more insights through analytical studies. We want to emphasize that though our conclusions are only valid for the case when the game attracts significantly more team arrivals than individual arrivals, they provide useful intuition for other cases as well.
\subsection{Impacts of Proportions Having Preferred Side}
\label{sec:hurt}
The analysis of the 1v1 game approximation reveals that the proportion of players preferring one side, referred to as the players' structure in the following sections, determines the consequence of allowing side selection.
We first demonstrate the effects of the players' structure by the extreme case where we allow and require all players to choose their favorite side: namely, either side A or side B. Denote the arrival rate of players choosing side A by $\lambda_A$, and that of players choosing side B by $\lambda_B$. To figure out the CTMC, it suffices to identify its states: an integer $i$ is enough. For positive $i$, state $i$ implies there are $i$ players choosing side A in the queue. For negative $i$, state $i$ implies $-i$ players choosing side B. Figure \ref{2sidequeue} plots such a CTMC.
\begin{figure}[t]
\centering
\begin{tikzpicture}
\graph[->/.tip=stealth,
grow right=1.3cm,
nodes={circle,fill=black!15,inner sep=0,minimum size=8mm}]
{
{
[edges={bend left,"$\lambda_A$" above}]
c[as=$\cdots$,fill=black!5] -> b[as=-2] -> a[as=-1] -> 0 -> 1 -> 2 -> 3[as=$\cdots$,fill=black!5];
};
{
[edges={bend left,"$\lambda_B$" below}]
3 -> 2 -> 1 -> 0 -> a -> b -> c;
};
};
\end{tikzpicture}
\caption{The CTMC for allowing all players to choose sides.}
\label{2sidequeue}
\end{figure}
Unfortunately, this CTMC is quite classical in that it is transient when $\lambda_A \neq \lambda_B$, and null recurrent when $\lambda_A = \lambda_B$. Thus, even in the latter case, one can show that the expected waiting time for all players is \emph{unbounded}. Hence, \emph{game designers cannot afford to allow all players to choose sides freely}.
However, introducing a small portion of choice-free players (who are indifferent in choosing sides) into the game can reverse the conclusion. Denote the arrival rate of choice-free players by $\lambda_C$. Figure \ref{2sidequeue2} plots the new CTMC.
\begin{fact}
The CTMC in Fig. \ref{2sidequeue2} is positive recurrent iff
\begin{equation}\label{lambda_conditon}
\lambda_C>|\lambda_A-\lambda_B|.
\end{equation}
\end{fact}
\noindent\textbf{Remark}: This is a straightforward yet remarkable fact. If by design the two battle sides are balanced, i.e., $\lambda_A$ and $\lambda_B$ do not differ much, then the game can afford to allow more players to choose sides. In contrast, if the game is not carefully designed, we may have to force many players to take the side they do not like. This will further reduce their satisfaction with the game: a disaster for the game designer.
\begin{figure}[t]
\centering
\begin{tikzpicture}
\tikzstyle{every node}=[font=\footnotesize]
\node[circle,fill=black!15,inner sep=0,minimum size=8mm] (4) at (3.9,-1.95) {$1'$};
\graph[->/.tip=stealth,
grow right=1.3cm,
nodes={circle,fill=black!15,inner sep=0,minimum size=8mm}]
{
[edges=bend left]
{
c[as=$\cdots$,fill=black!5] ->["$\lambda_A + \lambda_C$" above] b[as=-2] ->["$\lambda_A + \lambda_C$" above] a[as=-1] ->["$\lambda_A + \lambda_C$" above] 0 ->["$\lambda_A$" above] 1 ->["$\lambda_A$" above] 2 ->["$\lambda_A$" above] 3[as=$\cdots$,fill=black!5];
};
{
[edges="$\lambda_B + \lambda_C$" below]
3 -> 2 -> 1 -> 0;
};
{
[edges="$\lambda_B$" below]
0 -> a -> b -> c;
};
0 ->["$\lambda_C$" right, bend left=5, near end] (4) ->["$\lambda_A + \lambda_B + \lambda_C$" left, bend left=5, near start] 0;
};
\end{tikzpicture}
\caption{The CTMC with choice-free players.}
\label{2sidequeue2}
\end{figure}
The above remark indicates that \textit{the valuable players include not only the choice-free players but also the players balancing the proportions of the two sides}. Denote the waiting time for players choosing side A, players choosing side B, and choice-free players by $T_A$, $T_B$, and $T_C$, respectively. Solving the CTMC in Fig. \ref{2sidequeue2} yields (see Appendix \ref{app:static} for more details)
\begin{align}
\mathbb{E}[T_A] &= \frac{(\lambda_B + \lambda_C)\pi_0}{(\lambda_B + \lambda_C - \lambda_A)^2},\label{eqTa}\\
\mathbb{E}[T_B]& = \frac{(\lambda_A + \lambda_C)\pi_0}{(\lambda_A + \lambda_C - \lambda_B)^2},\label{eqTb}\\
\mathbb{E}[T_C]& = \frac{\pi_0}{\lambda_A + \lambda_B + \lambda_C},\label{eqTc}
\end{align}
where $\pi_0$ denotes the stationary probability at state $0$. Denote $\lambda_{\text{total}}$ as the total arrival rate $\lambda_A + \lambda_B + \lambda_C$ into the system, we have
\begin{equation}
\pi_0 = \left(\frac{\lambda_A}{\lambda_B \!+\! \lambda_C\! -\! \lambda_A} \!+\! \frac{\lambda_B}{\lambda_A \!+\! \lambda_C\! -\! \lambda_B} + \lambda_C\lambda_{\text{total}}^{-1} + 1\right)^{-1}\!\!\!.\nonumber
\end{equation}
With Eq. (\ref{eqTa})-(\ref{eqTc}), we can further obtain the expected waiting time for all players:
\begin{equation}
\begin{aligned}
\mathbb{E}[T] &= \frac{1}{2(\lambda_A+\lambda_C-\lambda_B)}+\frac{1}{2(\lambda_B+\lambda_C-\lambda_A)}\\
&\quad +\frac{1}{\lambda_A+\lambda_B+\lambda_C} -\frac{1}{2\lambda_C}\\
&\quad -\frac{\lambda_A+\lambda_B+2 \lambda_C}{2(2\lambda_A \lambda_B+\lambda_A\lambda_C+\lambda_B\lambda_C+\lambda_C^2)}.
\end{aligned}
\label{etside}
\end{equation}
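These closed forms can be checked against a direct simulation of the side-selection queue. The sketch below (illustrative only; it requires the stability condition $\lambda_C > |\lambda_A - \lambda_B|$) matches each arrival with the earliest compatible waiting player and reports the empirical mean waits per class, which should approach Eqs. (\ref{eqTa})--(\ref{eqTc}).
\begin{verbatim}
import random

def simulate_sides(lam_A, lam_B, lam_C, n_arrivals=300_000, seed=11):
    rng = random.Random(seed)
    lam_tot = lam_A + lam_B + lam_C
    t, waiting = 0.0, []                     # waiting players: (arrival time, class)
    waits = {"A": [], "B": [], "C": []}
    for _ in range(n_arrivals):
        t += rng.expovariate(lam_tot)
        u = rng.random() * lam_tot
        cls = "A" if u < lam_A else ("B" if u < lam_A + lam_B else "C")
        head = waiting[0][1] if waiting else None
        if head is not None and (head != cls or cls == "C"):
            t0, c0 = waiting.pop(0)          # match with the earliest compatible player
            waits[c0].append(t - t0)
            waits[cls].append(0.0)
        else:
            waiting.append((t, cls))         # same-side (or empty-system) arrival waits
    return {c: sum(v) / len(v) for c, v in waits.items()}

print(simulate_sides(0.3, 0.4, 0.3))
# the closed forms give roughly E[T_A] ~ 1.08, E[T_B] ~ 3.70, E[T_C] ~ 0.25 here
\end{verbatim}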
To decipher Eq.\eqref{etside}, we normalize $\lambda_{\text{total}}$ to be 1.
Figure \ref{sidegraph} examines how $\mathbb{E}[T]$ varies with $\lambda_A$ and $\lambda_B$. With normalized $\lambda_{\text{total}}$, for any fixed $\lambda_B$, $\mathbb{E}[T]$ hardly changes with $\lambda_A$ as long as $\lambda_A < \lambda_B$. In contrast, when $\lambda_A > \lambda_B$, $\mathbb{E}[T]$ increases sharply. Due to the symmetric nature of $\lambda_A$ and $\lambda_B$ to $\mathbb{E}[T]$, we conclude that $\max(\lambda_A, \lambda_B)$ dominates $\mathbb{E}[T]$. That is, the benefit of being choice-free depends on the \emph{imbalance} between $\lambda_A$ and $\lambda_B$. This can be further exemplified by the improvement factor: we define improvement factor $q$ as follows:
\begin{equation}
q = \frac{\mathbb{E}[T_C]}{\mathbb{E}[T_B]} = \frac{(\lambda_{\text{total}} - 2\lambda_B)^2}{\lambda_{\text{total}}(\lambda_{\text{total}} - \lambda_B)}.
\end{equation}
Given $\lambda_{\text{total}}=1$, we can further study the first order derivative of $q$ with respect to $\lambda_B$:
\begin{equation}
\frac{\partial q}{\partial \lambda_B} = \frac{1}{(1-\lambda_B)^2} - 4 < 0.
\end{equation}
This derivative is always negative since we assume $\lambda_B < 0.5 \lambda_\text{total}$. Hence, as $\lambda_B$ increases (the two sides become more symmetric), the improvement factor decreases, and it decreases \emph{more slowly}!
However, $\mathbb{E}[T]$ does not display such monotonicity with respect to $\lambda_B$. Zooming in Fig. \ref{sidegraph}, we plot Fig. \ref{sidegraph_zoom_in} to highlight that though $\mathbb{E}[T]$ is generally insensitive to the changes in $\lambda_A$ when $\lambda_A<\lambda_B$, increasing $\lambda_A$ could first help reduce the mean waiting time, then hurt the system performance!
\subsection{Guideline for Rule Design}
To sum up, it is crucial to evaluate the players' preference distribution before the rule design. With a more symmetric player preference distribution, the game can afford to let more players choose sides according to their preferences. However, this issue raises a very important future work: players' preferences may change over time, and thus the ideal rule design could be somewhat dynamic. Hence, it is important to establish an adaptive rule design to constantly track the changes in the players' preference distribution. Also, the mechanism design for attracting players who balance the sides deserves careful exploration.
\begin{figure}[t]
\centering
\includegraphics[width=2.6in]{choose_side.pdf}
\caption{The expected waiting time with normalized $\lambda_{\text{total}}$.}
\label{sidegraph}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=2.6in]{choose_side_0_4.pdf}
\caption{$\mathbb{E}[T]$ with normalized $\lambda_{\text{total}}$ and $\lambda_B = 0.4$.}
\label{sidegraph_zoom_in}
\end{figure}
\begin{comment}
\begin{table}[t]
\centering
\caption{The expected waiting time when $\lambda_{\text{total}} = 1$.}
\begin{tabular}{|l|l|l|l|}
\hline
$\lambda_A$ & $\lambda_B$ & $\lambda_C$ & $\mathbb{E}[T]$ \\
\hline
0.4999 & 0.4999 & 0.0002 & 2499.9998 \\
0.4901 & 0.4999 & 0.01 & 2475.2425 \\
0.49976 & 0.4999 & 0.00034 & 2071.0781 \\
0.491 & 0.499 & 0.01 & 227.76781 \\
0.495 & 0.495 & 0.01 & 49.990101 \\
0 & 0.495 & 0.505 & 49.019802 \\
0.405 & 0.495 & 0.1 & 47.533665 \\
0.488 & 0.495 & 0.017 & 41.404813 \\
0.45 & 0.45 & 0.1 & 4.9108911 \\
0 & 0.45 & 0.55 & 4.1818182 \\
0.35 & 0.45 & 0.2 & 4.0016181 \\
0.37 & 0.45 & 0.18 & 3.9952017 \\
0.4 & 0.4 & 0.2 & 2.3461538 \\
0.3 & 0.4 & 0.3 & 1.8796296 \\
0.2 & 0.4 & 0.4 & 1.8333333 \\
0 & 0.4 & 0.6 & 1.8333333 \\
0.1 & 0.4 & 0.5 & 1.8318966 \\
0.15 & 0.4 & 0.45 & 1.8312448 \\
0.3 & 0.3 & 0.4 & 1.0431034 \\
0.2 & 0.3 & 0.5 & 0.8736559 \\
0.1 & 0.3 & 0.6 & 0.8295455 \\
0 & 0.3 & 0.7 & 0.8214286 \\
0.2 & 0.2 & 0.6 & 0.6568627 \\
0.1 & 0.2 & 0.7 & 0.5953990 \\
0 & 0.2 & 0.8 & 0.5833333 \\
0.1 & 0.1 & 0.8 & 0.5274390 \\
0 & 0.1 & 0.9 & 0.5138889 \\
0 & 0 & 1 & 0.5 \\
\hline
\end{tabular}
\label{sidetable}
\end{table}
\end{comment}
\section{Dynamic Rule Design: Climbing the Ladder}
\label{sec:dynamic}
We have focused on the static analysis of the 2v2 battle game in the sense that we assume all players are identical across time. In this section, we consider a dynamic setting that allows the players to have different levels. This is true for many games: players start from the beginner level and can only play in a junior zone. When their skills climb to a higher level, they are allowed to play in the advanced zone\footnote{One concrete example could be the game \textit{Majsoul} \cite{ref_majsoul}. There are 5 arenas: Bronze, Silver, Gold, Jade and Throne Arena. Players are allowed to battle in different arenas according to their levels in the game.}. Though in practice there could be more zones in a game, we restrict ourselves to the two-zone scenario to obtain analytical solutions.
\subsection{Battle in Both Zones: the Formulation}
One straightforward rule design problem could be to decide the level that separates the players into the two zones. However, such a decision making problem heavily depends on the concrete game setting. In this section, we turn to another decision making problem: some games allow the experienced players to battle in the junior zone while others do not. In terms of the expected waiting time, how much would it help to allow the experienced players to \emph{battle in both zones}?
We follow the 1v1 game approximation in this section so as to focus on understanding the dynamic rule design's impact on matchmaking. In such a setting, there are 3 types of arrivals: green players for Zone A (the junior zone) arrive with rate $\lambda_A$, experienced players only for Zone B (the advanced zone) arrive with rate $\lambda_B$, and experienced players for both zones (choice-free players) arrive with rate $\lambda_C$.
However, this is not enough to formulate the CTMC. It remains undecided what happens when a choice-free experienced player joins the two queues and finds that both queues are non-empty. For simplicity, we employ the FIFO service order to deal with such situations. That is, we match the new arrival with the player who arrived first (in fact, before the new arrival, there are exactly two players in the system, one in each queue). We use \emph{state 0} to indicate there is no player in either queue, and use \emph{states A, B, and C} to indicate that, from \emph{state 0}, the new arrival is a green player for Zone A, an experienced player for Zone B, and a choice-free player for both zones, respectively. We denote the state where the player queueing in Zone A arrived first by \emph{state AB}. In contrast, we denote the corresponding state where the player queueing in Zone B arrived first by \emph{state BA}. With these notations, we plot the CTMC in Fig. \ref{1v1sfifo}.
\begin{figure}[t]
\centering
\begin{tikzpicture}[->/.tip=stealth,nodes={inner sep=0,minimum size=8mm}]
\node[circle,fill=black!15] (0) at (0, 0) {0};
\node[circle,fill=black!15] (A) at (2.3, 1.5) {A};
\node[circle,fill=black!15] (B) at (2.3, -1.5) {B};
\node[circle,fill=black!15] (C) at (-2.3, 0) {C};
\node[circle,fill=black!15] (AB) at (4.6, 1.5) {AB};
\node[circle,fill=black!15] (BA) at (4.6, -1.5) {BA};
\draw (0) edge[->,"$\lambda_A$" above left, bend left] (A);
\draw (B) edge[->,"$\lambda_A$" above] (BA);
\draw (BA) edge[->,"$\lambda_A$" below, bend left] (B);
\draw (0) edge[->,"$\lambda_B$" above right] (B);
\draw (A) edge[->,"$\lambda_B$" above, bend left] (AB);
\draw (AB) edge[->,"$\lambda_B$" below] (A);
\draw (0) edge[->,"$\lambda_C$" below, bend left] (C);
\draw (A) edge[->,"$\lambda_A + \lambda_C$" below right] (0);
\draw (AB) edge[->,"$\lambda_A + \lambda_C$" below right, near start] (B);
\draw (B) edge[->,"$\lambda_B + \lambda_C$" below left, bend left] (0);
\draw (BA) edge[->,"$\lambda_B + \lambda_C$" above right, near start] (A);
\draw (C) edge[->,"$\lambda_A + \lambda_B + \lambda_C$" above, bend left] (0);
\end{tikzpicture}
\caption{CTMC for dynamic rule design in FIFO order.}
\label{1v1sfifo}
\end{figure}
Solving this CTMC, Little's Law dictates that (see Appendix \ref{app:dynamic} for more details)
\begin{equation}
\mathbb{E}[T_A]\! =\! \frac{2\lambda_A \lambda_B \!+\! \lambda_A \lambda_C \!+\! 2\lambda_B^2 \!+\! 4\lambda_B \lambda_C \!+\! \lambda_C^2}{(\lambda_A\!+\!\lambda_C) (\lambda_B\!+\!\lambda_C) (\lambda_A\!+\!\lambda_B\!+\!\lambda_C)}\pi_0,
\label{1v1set1fifo}
\end{equation}
\begin{equation}
\mathbb{E}[T_B] \!=\! \frac{2\lambda_A \lambda_B \!+\! \lambda_B \lambda_C \!+\! 2\lambda_A^2 \!+\! 4\lambda_A \lambda_C \!+\! \lambda_C^2}{(\lambda_A\!+\!\lambda_C) (\lambda_B\!+\!\lambda_C) (\lambda_A\!+\!\lambda_B\!+\!\lambda_C)}\pi_0,
\end{equation}
\begin{equation}
\mathbb{E}[T_C] = \frac{\pi_0}{\lambda_A + \lambda_B + \lambda_C},
\end{equation}
where $\pi_0$ is the stationary distribution for being at \emph{state 0}. Using the law of total probability, we can obtain the expected waiting time for all arrivals:
\begin{equation}
\mathbb{E}[T]
= \frac{\lambda_A \mathbb{E}[T_A] + \lambda_B \mathbb{E}[T_B] + \lambda_C \mathbb{E}[T_C]}{\lambda_A + \lambda_B + \lambda_C}.
\end{equation}
\subsection{Benefits of Being Choice-Free}
To understand the benefits for experienced players of being choice-free, we define the improvement factor $q$ as follows:
\begin{equation}
q \!=\! \frac{\mathbb{E}[T_C]}{\mathbb{E}[T_B]} \!=\! \frac{(\lambda_A+\lambda_C) (\lambda_B+\lambda_C)}{2\lambda_A^2 \!+\! 2\lambda_A \lambda_B \!+\! 4\lambda_A \lambda_C \!+\! \lambda_B \lambda_C \!+\! \lambda_C^2}.
\end{equation}
Obviously, $q$ measures how much waiting time the experienced players save by joining the two queues instead of sticking to the advanced zone. To obtain more insights, note that when $\lambda_A = \lambda_B$, we have
\begin{equation}
q = \frac{\lambda_B + \lambda_C}{4\lambda_B + \lambda_C}.
\end{equation}
That is, when $\lambda_A = \lambda_B$, the benefit for the first experienced player who switches to being a choice-free player is a 75\% saving in expected waiting time! As the number of choice-free players increases, this benefit diminishes.
As Fig. \ref{fig:2_zones} illustrates, when $\lambda_C=0$, and $\lambda_\text{total}=1$,
\begin{equation}
q=0.5(1-\lambda_A),
\end{equation}
which implies that the initial benefit of being choice-free solely depends on $\lambda_A$! When the junior zone is already crowded compared to the advanced zone, then there is no incentive for being choice-free.
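A quick numerical sweep of the closed form for $q$ makes this trade-off explicit; the snippet below (illustrative only) fixes $\lambda_A=\lambda_B$ and normalizes $\lambda_{\text{total}}=1$, showing how the benefit shrinks as choice-free players accumulate.
\begin{verbatim}
def improvement_factor(lam_A, lam_B, lam_C):
    num = (lam_A + lam_C) * (lam_B + lam_C)
    den = (2 * lam_A**2 + 2 * lam_A * lam_B + 4 * lam_A * lam_C
           + lam_B * lam_C + lam_C**2)
    return num / den

for lam_C in (0.0, 0.1, 0.2, 0.4, 0.6):      # sweep the choice-free arrival rate
    lam = (1 - lam_C) / 2                    # lam_A = lam_B, lam_total = 1
    print(lam_C, round(improvement_factor(lam, lam, lam_C), 3))
# q grows from 0.25 (a 75% saving) towards 1 as lam_C increases
\end{verbatim}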
\begin{figure}[t]
\centering
\includegraphics[width=3in]{2_zones.pdf}
\caption{The improvement factor $q$ vs. $\lambda_A$ and $\lambda_C$.}
\label{fig:2_zones}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=3in]{2_zones_et.pdf}
\caption{$\mathbb{E}[T]$ vs. $\lambda_A$ and $\lambda_B$ with normalized $\lambda_{\text{total}}$.}
\label{fig:2_zones_et}
\end{figure}
\begin{comment}
\begin{table}[t]
\centering
\caption{The expected waiting time and $q$ when $\lambda_{\text{total}} = 1$, $\lambda_A \geq 0.1$, $\lambda_B \geq 0.1$ and $\lambda_C \leq 0.2$.}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|}
\hline
$\lambda_B$ & $\lambda_A$ & $\lambda_C$ & $\mathbb{E}[T]$ & $\mathbb{E}[T_B]$ & $\mathbb{E}[T_C]$ & $q$ \\
\hline
0.1 & 0.9 & 0 & 1 & 5 & 0.25 & 0.05 \\
\hline
0.2 & 0.8 & 0 & 1 & 2.5 & 0.25 & 0.1 \\
0.1 & 0.8 & 0.1 & 0.828 & 3.321 & 0.336 & 0.101 \\
\hline
0.3 & 0.7 & 0 & 1 & 1.667 & 0.25 & 0.15 \\
0.2 & 0.7 & 0.1 & 0.891 & 1.992 & 0.305 & 0.153 \\
0.1 & 0.7 & 0.2 & 0.737 & 2.458 & 0.381 & 0.155 \\
\hline
0.4 & 0.6 & 0 & 1 & 1.25 & 0.25 & 0.2 \\
0.3 & 0.6 & 0.1 & 0.914 & 1.423 & 0.293 & 0.206 \\
0.2 & 0.6 & 0.2 & 0.810 & 1.638 & 0.345 & 0.211 \\
\hline
0.5 & 0.5 & 0 & 1 & 1 & 0.25 & 0.25 \\
0.4 & 0.5 & 0.1 & 0.923 & 1.106 & 0.288 & 0.261 \\
0.3 & 0.5 & 0.2 & 0.840 & 1.226 & 0.330 & 0.269 \\
\hline
0.6 & 0.4 & 0 & 1 & 0.833 & 0.25 & 0.3 \\
0.5 & 0.4 & 0.1 & 0.923 & 0.904 & 0.288 & 0.319 \\
0.4 & 0.4 & 0.2 & 0.848 & 0.978 & 0.326 & 0.333 \\
\hline
0.7 & 0.3 & 0 & 1 & 0.714 & 0.25 & 0.35 \\
0.6 & 0.3 & 0.1 & 0.914 & 0.764 & 0.293 & 0.384 \\
0.5 & 0.3 & 0.2 & 0.840 & 0.811 & 0.330 & 0.407 \\
\hline
0.8 & 0.2 & 0 & 1 & 0.625 & 0.25 & 0.4 \\
0.7 & 0.2 & 0.1 & 0.891 & 0.660 & 0.305 & 0.462 \\
0.6 & 0.2 & 0.2 & 0.810 & 0.690 & 0.344 & 0.5 \\
\hline
0.9 & 0.1 & 0 & 1 & 0.556 & 0.25 & 0.45 \\
0.8 & 0.1 & 0.1 & 0.828 & 0.579 & 0.336 & 0.581 \\
0.7 & 0.1 & 0.2 & 0.737 & 0.593 & 0.381 & 0.643 \\
\hline
\end{tabular}
\label{1v1stable}
\end{table}
\end{comment}
Besides the intuitions, we can further conclude that $q$ increases rapidly (and hence the benefit decreases rapidly) as $\lambda_A$ decreases (fewer junior players). In fact, the maximal benefit of being choice-free could save the players up to 90\% of the expected waiting time when $\lambda_A=0.8$ and $\lambda_B=0.2$.
The system also benefits from the existence of choice-free players, as shown in Fig.~\ref{fig:2_zones_et}. The more choice-free players there are, the lower the expected total waiting time. However, the marginal system value of an additional choice-free player decreases. Nevertheless, the proportion of choice-free players in the total population determines whether the game designers should allow the players to select the battle zone according to their own preferences.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we focus on the optimal matchmaking mechanisms for the 2v2 battle game. Using the 1v1 game approximation of the packing service order, we analyze both the static rule design's and the dynamic rule design's impact on the matchmaking mechanisms. We submit that in both cases, choice-free players are vital to the system: being choice-free benefits them as well as the whole system.
Much remains unknown. The CTMCs for games with more players are more complicated, and it is hard to obtain analytical insights. Also, in this paper our goal for the game rule design is to minimize the expected waiting time. However, in practice, many more aspects concerning the players' satisfaction with the game need to be carefully examined.
It would also be interesting to consider more dynamic scenarios in which the players' preference distribution and the players' level distribution change over time. This inspires us to construct more adaptive rule designs in the future.
\section{Introduction}
\label{intro}
In 1977, Mackey and Glass \cite{L} proposed the following non-linear differential equation with constant delay
\begin{equation}
x'(t) = -\alpha x(t) + \dfrac{\beta x(t-\tau)}{\theta^n + x^n(t-\tau)},\quad 0 <n.
\end{equation}
in order to describe the concentration of mature cells in the blood. Here $\alpha$, $\beta$, $\tau$ and $\theta$ are positive constants, the unknown $x$ stands for the density of mature cells in the blood circulation, $\alpha$ is the rate at which cells are lost from the circulation at time $t$, and the flux
$$f(x(t-\tau)):= \dfrac{\beta x(t-\tau)}{\theta^n+ x^n(t-\tau)}$$
of cells into the circulation depends on $x(t - \tau)$ at time $t -\tau$, where $\tau$ is the time delay between the
production of immature cells in the bone marrow and their maturation.\\
\\
Since its introduction in the literature, the hematopoiesis model has gained a lot of attention and various extensions.
Hence, under some additional conditions some authors \cite{K1,K2,M1,N1} considered an extended version of eq.(1) and obtained the existence and attractivity of the unique positive periodic and almost periodic solutions of the following model
\begin{equation}
x(t)= {\displaystyle -a(t) x(t) +\sum^{N}_{i=1} \dfrac{b_i(t)x^m(t-\tau_i(t))}{1 + x^n(t-\tau_i(t))}},\quad 0\leq m\leq 1, 0<n.
\end{equation}
Recently, there have been extensive and valuable contributions dealing with oscillations of the hematopoiesis model with and without delays, see, e.g., \cite{K1,B,K2,O,H} and references therein.\\
Also, the stability of various models has been extensively investigated by many authors recently; see \cite{M,A,M1,F,B1,N} and references therein.\\
\\
As is well known, in real-world applications equations with a harvesting term generally provide a more realistic and reasonable description of models in mathematical biology, and in particular in population dynamics. Hence, the investigation of biological dynamics with harvesting is a meaningful
subject in the exploitation of biological resources, which is related to the optimal management of renewable resources \cite{K,P}.\\
\\
Besides, the study of oscillations and dynamical systems of biological origin is an exciting topic; valuable results in this field can be found in \cite{H1,H2,B0,M0,M2} and the references therein.\\
\\
Motivated by the discussion above, the main subject of this paper is to study the existence and the global attractivity
of the unique positive pseudo almost periodic solution for the generalized Mackey-Glass model with a nonlinear harvesting term and mixed delays. Roughly speaking, we shall consider the following hematopoiesis model
\begin{eqnarray}
x'(t)=-a(t)x(t)+\sum^{N}_{i=1} \dfrac{b_i(t)x^m(t-\tau_i(t))}{1 + x^n(t-\tau_i(t))}- H(t,x(t-\sigma(t))), \quad 1<m \leq n,\ t\in \mathbb{R}.
\end{eqnarray}
However, to the author's best knowledge, there are no publications considering positive pseudo almost periodic solutions for the Mackey-Glass model with a harvesting term and $1<m\leq n$.\\
\\
The remainder of this paper is organized as follows: In Section 2, we introduce some necessary notations, definitions and fundamental properties of the space PAP($\mathbb{R}$,$\mathbb{R}^+$) which will be used throughout the paper. In Section 3, the model is given. In Section 4,
the existence of the unique positive pseudo almost periodic solution for the considered system is established. Section 5 is devoted to the stability of the pseudo almost periodic solution: based on a suitable Lyapunov function and the Dini derivative, we
give some sufficient conditions ensuring that all solutions converge exponentially to the positive pseudo almost periodic solution of equation (3). At last, an illustrative example is given in Section 6. It should be mentioned that the main results of this paper are Theorems 1 and 2.
\section{Preliminaries}
\label{sec:1}
In this section, we recall some basic definitions and lemmas which are used in what follows. In this paper, $BC(\mathbb{R}, \mathbb{R})$ denotes the set of bounded continuous functions from $\mathbb{R}$ to $\mathbb{R}$. Note that $(BC(\mathbb{R}, \mathbb{R}), \|.\|_{\infty})$ is a Banach space with the sup norm $$\|f\|_{\infty} :=\underset{t\in \mathbb{R}}{sup} |f(t)|.$$
\begin{definition} Let u(.) $\in BC(\mathbb{R}, \mathbb{R})$, u(.) is said to be almost periodic (a.p) on $\mathbb{R}$ if, for any $\epsilon > 0$, the set
$$T(u, \epsilon) = \{\delta; |u(t + \delta) - u(t)| < \epsilon, \text{ for all }t \in \mathbb{R}\}$$ is
relatively dense; that is, for any $\epsilon > 0$, it is possible to find a real number $l = l(\epsilon) > 0$ such that any interval of length $l(\epsilon)$ contains a number $\delta= \delta(\epsilon)$ with $$|u(t + \delta) - u(t)| <\epsilon, \text{ for all }t \in \mathbb{R}.$$
\end{definition}
We denote by $AP(\mathbb{R}, \mathbb{R})$ the set of the almost periodic functions from $\mathbb{R}$ to $\mathbb{R}$.
\begin{remark}
Let $u_i (.)$ , $1 \leq i \leq m$ denote almost periodic functions and
$\epsilon > 0$ be an arbitrary real number. Then there exists a positive real number
$L = L(\epsilon)> 0$ such that every interval of length L contains at least one common
$\epsilon-almost$ period of the family of functions $ u_i (.)$, $1 \leq i \leq m.$
\end{remark}
Besides, the concept of pseudo almost periodicity (p.a.p) was introduced by Zhang \cite{D} in the early nineties. It is a natural generalization of the classical almost periodicity. Precisely, define the class of functions $PAP_0(\mathbb{R}, \mathbb{R})$ as follows
$$\bigg\{f \in BC(\mathbb{R}, \mathbb{R}); \underset{T\rightarrow+\infty}{lim} \dfrac{1}{2T} \int_{-T}^T |f(t)|dt = 0\bigg\} .$$
A function $f \in BC(\mathbb{R}, \mathbb{R})$ is called pseudo almost periodic if it can be expressed as $f = h + \phi$, where $h \in AP(\mathbb{R}, \mathbb{R})$
and $\phi \in PAP_0(\mathbb{R}, \mathbb{R})$. The collection of such functions will be denoted by $PAP(\mathbb{R}, \mathbb{R})$. The functions h and $\phi$ in the
above definition are, respectively, called the almost periodic component and the ergodic perturbation of the pseudo almost periodic function $f$. The decomposition given in definition above is unique. It should be mentioned that pseudo almost periodic functions
possess many interesting properties; we shall need only a few of them and for the proofs we shall refer to \cite{D,D1,M3}.
\begin{remark}
Observe that (PAP($\mathbb{R}$, $\mathbb{R}$), $\|.\|_{\infty}$) is a Banach
space and $AP(\mathbb{R}, \mathbb{R})$ is a proper subspace of $PAP(\mathbb{R}, \mathbb{R})$, since the function $\psi(t) = cos(2 t) +sin(2 \sqrt{5}t) +e^{-t^2 |sin (t)|}$ is pseudo almost periodic but not almost periodic.
\end{remark}
\begin{properties}\cite{D} If f,g $\in PAP(\mathbb{R},\mathbb{R})$, then the following assertions hold:
\begin{enumerate}
\item[(a)] f.g, f+g $\in PAP(\mathbb{R}, \mathbb{R})$.
\item[(b)] $\dfrac{f}{g} \in PAP(\mathbb{R}, \mathbb{R})$, if $\underset{t\in \mathbb{R}}{inf}|g(t)|>0$.
\end{enumerate}
\end{properties}
\begin{definition}\cite{D}
Let $\Omega \subseteq \mathbb{R}$ and let K be any compact subset of $\Omega$. We define the class of functions \\$PAP_0(\Omega\times\mathbb{R}, \mathbb{R})$ as follows
$$\bigg\{\psi\in C(\Omega\times \mathbb{R}; \mathbb{R}); \underset{T\rightarrow+\infty}{lim} \dfrac{1}{2T} \int_{-T}^T |\psi(s,t)|dt = 0\bigg\}$$
uniformly with respect to $s\in K$.
\end{definition}
\begin{definition} (Definition 2.12, \cite{E}) \item Let $\Omega \subseteq \mathbb{R}$. A continuous function f : $\mathbb{R} \times\Omega \longrightarrow \mathbb{R}$ is called pseudo almost periodic (p.a.p)
in t uniformly with respect to $x \in \Omega$ if the two following conditions are satisfied :\\
i) $\forall x \in \Omega, f(., x) \in PAP(\mathbb{R},\mathbb{R}),$\\
ii) for all compact K of $\Omega$, $\forall \epsilon> 0, \exists \delta > 0, \forall t \in \mathbb{R}, \forall x_1, x_2 \in K$,
\begin{center}
$|x_1 - x_2| \leq \delta \Rightarrow |f(t, x_1) - f(t, x_2)| \leq \epsilon$.
\end{center}
Denote by $PAP_U(\Omega\times \mathbb{R}; \mathbb{R})$ the set of all such functions.
\end{definition}
\section{The model}
\label{sec:2}
In order to generalize and improve the above models, let us consider the following Mackey-Glass model with a non-linear harvesting term and several concentrated delays
\begin{eqnarray}
x'(t)=-a(t)x(t)+\sum^{N}_{i=1} \dfrac{b_i(t)x^m(t-\tau_i(t))}{1 + x^n(t-\tau_i(t))}- H(t,x(t-\sigma(t)))
\end{eqnarray}
where $t \in \mathbb{R}$ and
\begin{enumerate}
\item[$\blacktriangleright$]The function a : $\mathbb{R}\longrightarrow\mathbb{R^+}$ is pseudo almost periodic(p.a.p) and $\underset{t\in \mathbb{R}}{inf} a(t) >0.$
\item[$\blacktriangleright$] For all $1\leq i \leq N$, the functions $\tau_i,\sigma$, $b_i$ : $\mathbb{R}\longrightarrow\mathbb{R^+}$ are p.a.p.
\item[$\blacktriangleright$] The term H $\in PAP_U(\mathbb{R}\times\mathbb{R},\mathbb{R}^+$) satisfies the Lipschitz condition : $\exists L_H > 0$ such that
$${\displaystyle \mid H(t,x)-H(t,y) \mid < L_H \mid x-y \mid,\quad \forall x,y,t \in \mathbb{R}}.$$
\end{enumerate}
Throughout the rest of this paper, for every bounded function $f : \mathbb{R} \rightarrow \mathbb{R}$, we denote
$$f^+ = \underset {t\in \mathbb{R}}{sup} f(t),\quad f^- =\underset {t\in \mathbb{R}}{inf} f(t).$$ Set $r =\underset{t \in \mathbb{R}}{sup} \max\bigg(\tau_1(t),\ldots,\tau_N(t),\sigma(t)\bigg).$
Denote by $BC ([-r, 0] , \mathbb{R}^+$) the set of bounded continuous functions from $[-r, 0]$ to $\mathbb{R}^+$. If $x(.)$ is defined on $[-r + t_0, \sigma[$ with $t_0, \sigma \in \mathbb{R}$, then we define $x_t \in C([-r, 0] , \mathbb{R}$) by $x_t(\theta) = x(t + \theta)$ for all $\theta \in [-r, 0]$. Notice that we restrict ourselves to $\mathbb{R}^+$-valued functions since only non-negative solutions of (4) are biologically meaningful. So, let us consider the following initial condition
\begin{equation}
x_{t_0} = \phi,\quad \phi \in BC ([-r, 0] , \mathbb{R}^+) \text{ and }\phi (0) > 0.
\end{equation}
We write $x_t (t_0, \phi)$ for a solution of the admissible initial value problem (4) and (5). Also, let $[t_0, \eta(\phi)[$ be the maximal right-interval of existence of $x_t(t_0, \phi)$.
\section{Main results}
\label{sec:3}
As pointed out in the introduction, we shall give here sufficient conditions which ensures existence and uniqueness of pseudo almost periodic solution of (4). In order to prove this result, we will state the following lemmas. For simplicity, we denote $x_t(t_0, \phi)$ by $x(t)$ for all $t\in \mathbb{R}$.
\begin{proposition}
A positive solution $x(.)$ of model (4)-(5) is bounded on $[t_0, \eta(\phi)[$, and $\eta(\phi)=+\infty$.
\end{proposition}
\begin{proof}\item
\text{ } We have for each $t \in [t_0, \eta(\phi)[$ the solution verifies
$$x(t)= {\displaystyle e^{- \int^t_{t_0} a(u) du} \phi(0) +\int^t_{t_0} e^{-\int^t_{s} a(u) du}\bigg[\sum^{N}_{i=1} \dfrac{b_i(s)x^m(s-\tau_i(s))}{1 + x^n(s-\tau_i(s))}- H(s,x(s-\sigma(s)))\bigg]ds}.$$
So, by $\underset{x\geq0}{sup}\dfrac{x^m}{1+x^n}\leq 1,\text{ }\forall 1<m\leq n$, we obtain
$$\begin{array}{lll}
x(t)\leq {\displaystyle \phi(0) +\int^t_{t_0} e^{-a^-(t-s)}\sum^{N}_{i=1} b_i^+ds}&\leq {\displaystyle \phi(0) + \dfrac{1}{a^-}[1- e^{-a^-(t-t_0)}]\sum^{N}_{i=1} b_i^+}\\
&\leq \phi(0) + \dfrac{1}{a^-}\sum^{N}_{i=1} b_i^+< +\infty,
\end{array}$$
which proves that $x(.)$ is bounded. The second part of the conclusion is given by Theorem 2.3.1 in \cite{I}: we have that $\eta(\phi)=+\infty$.
\end{proof}
\begin{proposition}\item
If $a^->\sum^N_{i=1}b_i^+,$ then each positive solution $x_t(t_0,\phi)$ of model (4)-(5) satisfies
$$x(t) \underset{t \rightarrow +\infty}{\longrightarrow} 0.$$
\end{proposition}
\begin{proof}\item
\text{ } We define the continuous function
\begin{eqnarray*}G : [0,1]
&\longrightarrow& \mathbb{R}\\
y &\longmapsto& y-a^-+\sum^N_{i=1}b_i^+ e^{y r}.\end{eqnarray*}
From the hypothesis, we obtain $G(0)<0$; hence, by continuity, there exists $\lambda \in ]0,1]$ such that $$G(\lambda)<0. \qquad (C.1)$$
Let us consider the function $W(t)=x(t)e^{\lambda t}$. Calculating the left derivative of $W(.)$ and using the inequality
$$\dfrac{x^m}{1+x^n} \leq x ,\quad \forall 1<m\leq n, $$we obtain
\begin{eqnarray*}
D^-W(t) &=& \lambda x(t)e^{\lambda t}+x'(t)e^{\lambda t}\\
\\
&=&\lambda x(t) e^{\lambda t}+e^{\lambda t}[-a(t)x(t)+\sum_{i=1}^{N} b_i(t) \frac{x^m(t-\tau_i(t))}{1+x^n(t-\tau_i(t))}-H(t,x(t-\sigma(t)))]\\
\\ &\leq& e^{\lambda t}((\lambda- a^-)x(t) + \sum_{i=1}^{N} b^+_i x(t-\tau_i(t))).
\end{eqnarray*}
Let us prove that
\begin{eqnarray*}
W(t) &=& x(t)e^{\lambda t}< e^{\lambda t_0} M = Q, \quad\forall t\geq t_0,
\end{eqnarray*}
where $M$ is any constant with $M>\underset{t\in[t_0-r,t_0]}{sup}\, x(t)$ (such an $M$ exists since $x(.)$ is bounded).
Suppose that there exists $t_1>t_0$ such that
\begin{eqnarray*}
W(t_1) &=& Q, W(t)<Q, \text{ for all } t_0-r\leq t< t_1.
\end{eqnarray*}
Then
\begin{eqnarray*}
0\leq D^-W(t_1)&\leq&(\lambda- a^-)x(t_1) e^{\lambda t_1}+ \sum_{i=1}^{N} b^+_i x(t_1-\tau_i(t_1))e^{\lambda t_1} \\
\\ &\leq& (\lambda- a^-) Q+ \sum_{i=1}^{N} b^+_i x(t_1-\tau_i(t_1))e^{\lambda t_1} e^{\lambda \tau_i} e^{-(\lambda \tau_i(t_1))}\\
\\ &=&(\lambda- a^-) Q+ \sum_{i=1}^{N} b^+_i x(t_1-\tau_i(t_1)) e^{\lambda (t_1-\tau_i(t_1))} e^{\lambda r}\\
\\&\leq& [\lambda- a^-+ \sum_{i=1}^{N} b^+_i e^{\lambda r}] Q\\
\\&<& 0 \qquad (\text{by \textbf{(C.1)}})
\end{eqnarray*}
which is a contradiction.
Consequently, $x(t) e^{\lambda t}< Q$. Then, $x(t)< e^{-\lambda t}Q \underset{t \rightarrow +\infty}{\longrightarrow} 0$.
\end{proof}
\begin{remark} Set $f_{n,m}(u) =\dfrac{u^m}{1+u^n}$; one can check the following:\\
\text{\quad if $m<n$}:
\[
\left \{
\begin{array}{r c l}
f_{n,m}'(u) = \dfrac{u^{m-1}(m-(n-m)u^n)}{(1 + u^n)^2}> 0,\forall u \in \left[ 0,\sqrt[n]{\dfrac{m}{n-m} }\right] \qquad (C.3)\\
\\f_{n,m}'(u) = \dfrac{u^{m-1}(m-(n- m)u^n)}{(1 + u^n)^2}<0,\forall u \in \left] \sqrt[n]{\dfrac{m}{ n-m} },+\infty \right[ \qquad (C.4),
\end{array}
\right.
\]\\
and \text{if } $m= n$:
$$f_{n,m}'(u) = \dfrac{u^{m-1}m}{(1 + u^m)^2}> 0,\forall u \in [ 0,+\infty[.\qquad (C.5)$$
If $m<n$, one can choose $k \in \left]0, \sqrt[n]{\dfrac{m}{ n-m} }\right[ $ and combining with \textbf{(C.3)} and \textbf{(C.4)} there exists a constant \\ $\overset{\backsim}{ k}>\sqrt[n]{\dfrac{m}{ n-m} } $ such that
\begin{center}
$f_{n,m}(k)= f_{n,m}(\overset{\backsim}{k}).$ \qquad (C.6)
\end{center}
Moreover, $$\underset{u\geq0}{sup}\dfrac{u^m}{1+u^n}\leq 1, \quad \forall 1<m\leq n. \qquad (C.7)$$\\
\end{remark}
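For readers who wish to check condition \textbf{(C.6)} in a concrete case, the following small Python sketch (not part of the original analysis; the values of $m$, $n$ and $k$ below are illustrative assumptions only) locates the companion constant $\overset{\backsim}{k}$ numerically, using the monotonicity properties \textbf{(C.3)}--\textbf{(C.4)}.
\begin{verbatim}
# Hedged sketch, not part of the original analysis: for 1 < m < n and a
# chosen k below the critical point (m/(n-m))^(1/n), locate the companion
# value k_tilde above it with f_{n,m}(k) = f_{n,m}(k_tilde), i.e. (C.6).
# The values of m, n and k below are illustrative only.
from scipy.optimize import brentq

def f(u, m, n):
    return u**m / (1.0 + u**n)

def find_k_tilde(k, m, n, upper=1e6):
    crit = (m / (n - m)) ** (1.0 / n)   # maximizer of f_{n,m} on [0, +infinity)
    assert 0 < k < crit, "k must lie below the critical point"
    target = f(k, m, n)
    # by (C.4), f_{n,m} decreases from its maximum to 0 on [crit, +infinity),
    # so f_{n,m}(u) - target changes sign on [crit, upper] for upper large enough
    return brentq(lambda u: f(u, m, n) - target, crit, upper)

if __name__ == "__main__":
    m, n, k = 2, 3, 0.5
    k_tilde = find_k_tilde(k, m, n)
    print(k_tilde, f(k, m, n), f(k_tilde, m, n))   # the last two values should agree
\end{verbatim}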
$$\text{Let }\mathcal{C}^0=\{\psi \in BC([-r,0] , \mathbb{R}^+); k\leq \psi \leq M\}.$$
\begin{definition}A positive solution $ x(.)$ of the differential equation is permanent if there exists $t^*\geq 0$, A and B ; $B > A > 0$ such that
$$A \leq x(t) \leq B \quad \text{ for }t \geq t^*.$$
\end{definition}
\begin{lemma}
Suppose that there exist two positive constants M and k satisfying:
\begin{enumerate}
\item[[H1]]
\text{If} m<n:
$$0<k < \sqrt[n]{\dfrac{m}{ n-m} } < M \leq \overset{\backsim}{k}\quad (\text {$\overset{\backsim}{k}$ was given by \textbf{(C.6)}})$$
\text{If $ m= n$ we have:}
$$0<k < M$$
\item[[H2]] ${\displaystyle -a^- M+\sum^N_{i=1}b_i^+ - H^-}<0$
\item[[H3]] ${\displaystyle -a^+ k+\sum^N_{i=1}b_i^- \dfrac{k^m}{1+k^n} -H^+}> 0$
\end{enumerate}
and $\phi \in \mathcal{C}^0$; then the solution $ x(.)$ of (4)-(5) is permanent and $\eta(\phi)=+\infty$.
\end{lemma}
\begin{proof} \item
\qquad Actually, we prove that $x(.)$ is bounded in $[ t_0,\eta(\phi)[ .$ \\
$\bullet$ First, we claim that
$$ x(t) < M, \forall t \in [ t_0,\eta(\phi)[ .\qquad (i)$$
Contrarily, there exists $t_1 \in ] t_0,\eta(\phi)[ $ such that:
\[
\left \{
\begin{array}{llll}
x(t)<M, \forall t \in \left[t_0-r,t_1\right[\\
x(t_1) = M
\end{array}
\right.
\]
Calculating the right derivative of $x(.)$ and by $\textbf{(H2)}$ and \textbf{(C.7)}, we obtain
$$
\begin{array}{ll}
0 \leq x'(t_1)&=-a(t_1) x(t_1)+ \sum^{N}_{i=1} \dfrac{b_i(t_1)x^m(t_1-\tau_i(t_1))}{1 + x^n(t_1-\tau_i(t_1))}- H(t_1,x(t_1-\sigma(t_1)))\\
\\&\leq -a(t_1) M +\sum^{N}_{i=1} b_i(t_1) -H^- \\
\\&< -a^- M +\sum^{N}_{i=1} b_i^+-H^-\\
\\& < 0,
\end{array}
$$
which is a contradiction. So it implies that (i) holds.\\
\\
$\bullet $ Next, we prove that
\begin{center}
$k < x(t) ,\forall t \in [ t_0,\eta(\phi)[ .$ \qquad (ii)
\end{center}
Otherwise, there exists $t_2 \in ] t_0,\eta(\phi)[ $ such that
\[
\left \{
\begin{array}{llll}
x(t)>k, \forall t \in \left[t_0-r,t_2\right[\\
x(t_2) = k
\end{array}
\right.
\]
Calculating the right derivative of $x(.)$ and combining with $\textbf{(H3)}$, \textbf{(C.3)} and \textbf{(C.5)}, we obtain
$$
\begin{array}{ll}
0 \geq x'(t_2)&=-a(t_2) x(t_2)+ \sum^{N}_{i=1} \dfrac{b_i(t_2)x^m(t_2-\tau_i(t_2))}{1 + x^n(t_2-\tau_i(t_2))}- H(t_2,x(t_2-\sigma(t_2)))\\
\\& \geq -a(t_2) k + \sum^{N}_{i=1} \dfrac{b_i(t_2) k^m}{1 +k^n}-H^+\\
\\& > -a^+ k + \sum^{N}_{i=1} \dfrac{b_i^- k^m}{1 +k^n}-H^+\\
\\& > 0,\\
\end{array}
$$
which is a contradiction, and consequently (ii) holds. From Theorem 2.3.1 in \cite{I}, we have that $\eta(\phi)=+\infty$. The proof of the lemma is now completed.
\end{proof}
$$\text{Let }\mathcal{B}=\{\psi \in PAP(\mathbb{R} , \mathbb{R}); k\leq \psi \leq M\}.$$
\begin{lemma}
$\mathcal{B}$ is a closed subset of $PAP(\mathbb{R},\mathbb{R})$.
\end{lemma}
\begin{proof}
Let $(\psi_n)_{n \in \mathbb{N}} \subset \mathcal{B}$ such that $\psi_n \longrightarrow \psi$.
$ \text{Let us prove that } \psi \in \mathcal{B}.$\\
\\
Clearly, $\psi \in PAP(\mathbb{R},\mathbb{R})$ and we obtain that $$\begin{array}{lll} \psi_n \underset{n\longrightarrow +\infty}{\longrightarrow }\psi &\Leftrightarrow \forall \epsilon> 0, \exists n_0>0 \text{ such that } \mid \psi_n(t)-\psi(t)\mid \leq \epsilon, (\forall t \in \mathbb{R}, \forall n>n_0)\\&\Leftrightarrow \forall \epsilon> 0, \exists n_0>0 \text{ such that } -\epsilon \leq \psi_n(t)-\psi(t)\leq\epsilon,(\forall t \in \mathbb{R}, \forall n>n_0).\end{array}$$
Let t $\in \mathbb{R}$, we obtain then
$$-\epsilon+ k \leq \psi(t)=[\psi(t)-\psi_n(t) ]+\psi_n(t) \leq \epsilon +M.$$
So, $\psi \in \mathcal{B}$.
\end{proof}
\begin{lemma}[Theorem 2.17,\cite{E}]
If f $\in PAP_U(\mathbb{R} \times \mathbb{R},\mathbb{R})$ and for each bounded subset B of $\mathbb{R}$, f is bounded on $\mathbb{R} \times B$, then the Nemytskii operator
$$N_f : PAP(\mathbb{R},\mathbb{R}) \longrightarrow PAP(\mathbb{R}, \mathbb{R}) \text { with } N_f(u)=f(.,u(.))$$
is well defined.
\end{lemma}
\begin{lemma}\cite{G}
Let f,g $\in AP(\mathbb{R},\mathbb{R})$. If $g^->0$ then $F \in AP(\mathbb{R},\mathbb{R})$ where
$$F(t)={\displaystyle \int^t_{-\infty}e^{-\int^t_s g(u)du} f(s) ds}, \quad t\in\mathbb{R}.$$
\end{lemma}
\begin{lemma}\cite{D}
Let F $\in PAP_U(\mathbb{R}\times \mathbb{R},\mathbb{R})$ verifies the Lipschitz condition: $\exists L_F >0$ such that
$$| F(t,x)-F(t,y)| \leq L_F |x-y|,\quad \forall x,y\in \mathbb{R} \text{ and } t \in \mathbb{R}.$$
If h $\in PAP(\mathbb{R},\mathbb{R})$, then the function $F(.,h(.)) \in PAP(\mathbb{R},\mathbb{R})$.
\end{lemma}
\begin{theorem}
If conditions $\textbf{(H1)-( H3)}$ and
$$\textbf{[H4]}: {\displaystyle \underset{t\in \mathbb{R} }{sup}\bigg\{-a(t)+\sum^N_{i=1}b_i(t)\bigg[\dfrac{(n-m)}{4}+\dfrac{m}{(1+k^n)^2}\bigg]M^{m-1}+L_H\bigg\}}<0$$ are fulfilled, then the equation (4) has a unique p.a.p solution x(.) in the region $\mathcal{B}$, given by
$${\displaystyle x(t)=\int^t_{-\infty} e^{-\int^t_s a(u)du}\bigg[\sum^{N}_{i=1} \dfrac{b_i(s)x^m(s-\tau_i(s))}{1 + x^n(s-\tau_i(s))}- H(s,x(s-\sigma(s)))\bigg]ds}.$$
\end{theorem}
\begin{proof}\item
\text{ }Step 1:\\
Clearly $\mathcal{B}$ is a bounded set. Now, let $\psi \in \mathcal{B}$ and $f(t,z)=\psi(t-z)$; since the function $\psi$ is continuous and the space PAP($\mathbb{R},\mathbb{R})$ is translation invariant, the function $f \in PAP_U(\mathbb{R} \times \mathbb{R}, \mathbb{R}^+)$. Furthermore, $\psi$ is bounded, so $f$ is bounded on $\mathbb{R}\times B$ for every bounded subset $B$ of $\mathbb{R}$.
By Lemma 4, the Nemytskii operator
\begin{eqnarray*}
N_f : PAP(\mathbb{R},\mathbb{R})&\longrightarrow & PAP(\mathbb{R},\mathbb{R})
\\\tau_i &\longmapsto &f(.,\tau_i(.))\end{eqnarray*}
is well defined for ${\displaystyle \tau_i \in PAP(\mathbb{R},\mathbb{R})}$, $1 \leq i \leq N$. Consequently, $${\displaystyle \bigg[t\longmapsto\psi(t-\tau_i(t))\bigg]\in PAP(\mathbb{R},\mathbb{R^+})}\text{ for all } i=1,...,N.$$
Since ${\displaystyle \underset{t\in \mathbb{R}}{inf} |1+\psi^n(t-\tau_i(t))|>0}$, using Properties 1 of p.a.p functions one has
$${\displaystyle\bigg [ t \longmapsto \sum^{N}_{i=1} \dfrac{b_i(t)\psi^m(t-\tau_i(t))}{1 + \psi^n(t-\tau_i(t))} \bigg] \in PAP(\mathbb{R},\mathbb{R})}.$$
Also, since the harvesting term satisfies the Lipschitz condition, by Lemma 6,
$${\displaystyle \bigg [G : t \longmapsto \sum^{N}_{i=1} \dfrac{b_i(t)\psi^m(t-\tau_i(t))}{1 + \psi^n(t-\tau_i(t))} -H(t,\psi(t-\sigma(t))\bigg] \in PAP(\mathbb{R},\mathbb{R})}.$$
\text{ }Step 2:
Let us define the operator $\Gamma$ by
$$\Gamma(\psi)(t)=\int^t_{-\infty} e^{-\int^t_s a(u)du} G(s) ds$$
We shall prove that $\Gamma$ maps $\mathcal{B}$ into itself. First, since the functions G(.) and a(.) are p.a.p one can write $$G=G_1+ G_2 \text{ and }a=a_1+ a_2,$$where $G_1,a_1\in AP(\mathbb{R},\mathbb{R})$ and $G_2,a_2 \in PAP_0(\mathbb{R}, \mathbb{R})$.
So, one can deduce
$$\begin{array}{ll}
\Gamma(\psi)(t)&{\displaystyle=\int^t_{-\infty} e^{-\int^t_s a_1(u)du} G_1(s)ds+\int^t_{-\infty} \bigg[ e^{-\int^t_s a(u)du} G(s)-e^{-\int^t_s a_1(u)du}G_1(s)\bigg]ds}\\
\\&{\displaystyle=\int^t_{-\infty} e^{-\int^t_s a_1(u)du} G_1(s)ds+\int^t_{-\infty} e^{-\int^t_s a_1(u)du} G_1(s)\bigg[e^{-\int^t_s a_2(u)du}-1\bigg]ds}\\
\\&+ \int^t_{-\infty}e^{-\int^t_s a(u)du}G_2(s)ds\\
\\&{\displaystyle=I(t)+II(t)+III(t). }\end{array}$$\\
By the lemma \textbf{5}, $I(.) \in AP(\mathbb{R},\mathbb{R})$. \\
\\
Now, we show that II(.) is ergodic. One has
$$\begin{array}{lll}
II(t)&={\displaystyle\int^t_{-\infty} e^{-\int^t_s a_1(u)du} G_1(s)\bigg[e^{-\int^t_s a_2(u)du}-1\bigg]ds}\\
\\&={\displaystyle \int^{+\infty}_0 e^{-\int^t_{t-v} a_1(u)du} G_1(t-v)\bigg[e^{-\int^t_{t-v} a_2(u)du}-1\bigg]dv}\\
\\&={\displaystyle \int^{v_0}_0 +\int^{+\infty}_{v_0} e^{-\int^{v}_0a_1(t-s)ds} G_1(t-v)\bigg[e^{-\int^{v}_0 a_2(t-s)ds}-1\bigg]dv}\\
\\&=II_1(t)+II_2(t).
\end{array}$$
Since $a^-_1\geq a^-$ and for large enough $v_0$, we obtain
$$\begin{array}{lll}
|II_2(t)|&\leq {\displaystyle \int^{+\infty}_{v_0} e^{-\int^{v}_0 a_1(t-s)ds} |G_1(t-v)|\bigg[|e^{-\int^{v}_0 a_2(t-s)ds}|+1\bigg]dv}\\
\\&\leq {\displaystyle \int^{+\infty}_{v_0} \|G_1\|_{\infty}\bigg[e^{-\int^{v}_0 a(t-s)ds}+ e^{-\int^{v}_0 a_1(t-s)ds}\bigg]dv}\\
\\&\leq {\displaystyle \int^{+\infty}_{v_0} 2 \|G_1\|_{\infty}e^{- a^-v}dv <\dfrac{\epsilon}{2}}.
\end{array}$$
So, $II_2 \in PAP_0(\mathbb{R},\mathbb{R})$.\\
It remains to prove that $II_1(.) \in PAP_0(\mathbb{R},\mathbb{R})$.\\
Firstly, we prove that the following function
$$\mu(v,t)={\displaystyle \int^v_0 a_2(t-s)ds, \qquad (v\in [0,v_0], t\in \mathbb{R})}$$
is in $PAP_0([0,v_0] \times \mathbb{R}, \mathbb{R})$. Clearly $|\mu(v,t)|\leq {\displaystyle\int^v_0 |a_2(t-s)|ds}$, then it is obviously sufficient to prove that the function $${\displaystyle\int^v_0 |a_2(.-s)|ds \in PAP_0(\mathbb{R},\mathbb{R}).}$$
We have $a_2 (.)\in PAP_0(\mathbb{R},\mathbb{R})$, for $\epsilon >0$ there exists $T_0 >0$ such that
$${\displaystyle \dfrac{1}{2T} \int^T_{-T} |a_2(t-s)|dt \leq \dfrac{\epsilon}{v}, \qquad (T\geq T_0, s\in [0,v])}.$$
Since $[0,v]$ is bounded, the Fubini's theorem gives for $\epsilon >0$ there exists $T_0 >0$ such that $${\displaystyle \dfrac{1}{2T} \int^T_{-T} \int^v_0 |a_2(t-s)|ds dt\leq \epsilon, \qquad T\geq T_0}.$$
So, $${\displaystyle\int^v_0 |a_2(.-s)|ds \in PAP_0(\mathbb{R},\mathbb{R})}$$ which implies the required result.\\
Then, we obtain
$$\begin{array}{ lll}
{\displaystyle\dfrac{1}{2T} \int^T_{-T}|II_1(t)|dt } & {\displaystyle\leq \dfrac{1}{2T} \int^T_{-T}\int^{v_0}_0 e^{-\int^{v}_0 a_1(t-s)ds} |G_1(t-v)||e^{-\int^{v}_0 a_2(t-s)ds}-1|dv dt }\\
\\& ={\displaystyle\dfrac{1}{2T} \int^T_{-T}\int^{v_0}_0 |G_1(t-v)||e^{-\int^{v}_0 a(t-s)ds}-e^{-\int^{v}_0 a_1(t-s)ds}|dv dt }.
\end{array}$$
By the mean value theorem, $\exists \eta \in ]0,1[$ such that
\begin{eqnarray*}
{\displaystyle\dfrac{1}{2T} \int^T_{-T}|II_1(t)|dt }\leq &&{\displaystyle \dfrac{1}{2T} \int^T_{-T}\int^{v_0}_0 |G_1(t-v)|e^{-[(1-\eta)\int^{v}_0 a(t-s)ds+ \eta \int^{v}_0 a_1(t-s)ds]}}\\
\\&&\times\bigg(\int^{v}_0|a_2(t-s)|ds\bigg) dv dt .
\end{eqnarray*}
Since the function $\mu \in PAP_U([0,v_0] \times \mathbb{R}, \mathbb{R})$ and by virtue of Fubini's theorem, for $\epsilon>0$, $\exists T_1>0$ such that
$$\begin{array}{lll}
{\displaystyle\dfrac{1}{2T} \int^T_{-T}|II_1(t)|dt }&{\displaystyle\leq \int^{v_0}_0 \|G_1\|_{\infty} \dfrac{\epsilon}{ 2\|G_1\|_{\infty} v_0}}=\dfrac{\epsilon}{2}, \qquad (\forall T\geq T_1).
\end{array}$$\\
So, $II_1\in PAP_0(\mathbb{R},\mathbb{R})$.\\
Finally, we study the ergodicity of III(.). We have
$$\begin{array}{ll}
{\displaystyle \dfrac{1}{2T}\int^T_{-T} |III(t)|dt}& {\displaystyle \leq \dfrac{1}{2T} \int^T_{-T} \int ^t _{-\infty} e^{-(t-s) a^-} \mid G_2(s) \mid ds dt}\\
\\&\leq III_1(T) + III_2(T),
\end{array}$$
where $$III_1(T)={\displaystyle \dfrac{1}{2T} \int^T_{-T} \int ^t _{-T} e^{-(t-s) a^-} \mid G_2(s) \mid ds dt}$$ and
$$III_2(T)={\displaystyle \dfrac{1}{2T} \int^{T}_{-T} \int ^{-T} _{-\infty} e^{-(t-s) a^-} \mid G_2(s) \mid ds dt}.$$
Next, we prove that $$\underset{T\rightarrow +\infty}{lim} III_1(T)=\underset{T\rightarrow +\infty}{lim}III_2(T)=0.$$
By the Fubini's theorem, we obtain
$$\begin{array}{ll}
III_1(T)&{\displaystyle = \int ^{+\infty}_0 e^{- a^- u} \dfrac{1}{2T} \int^T_{-T} \mid G_2(t-u) \mid dt du}\\
\\&{\displaystyle= \int ^{+\infty}_0 e^{- a^- u} \dfrac{1}{2T} \int^{T-u}_{-T-u} \mid G_2(t) \mid dt du}\\
\\&{\displaystyle \leq \int ^{+\infty}_0 e^{- a^- u} \dfrac{1}{2T} \int^{T+u}_{-(T+u)} \mid G_2(t) \mid dt du.}
\end{array} $$
Now, since $G_2 \in PAP_0(\mathbb{R},\mathbb{R})$, then the function $\Psi_T$ defined by
$${\displaystyle \Psi_T(u)=\dfrac{T+u}{T} \dfrac{1}{2(T+u)} \int^{T+u}_{-(T+u)} \mid G_2(t) \mid dt} $$
is bounded and satisfy $\underset{T\longrightarrow +\infty}{lim} \Psi_T(u)=0$. From the Lebesgue dominated convergence theorem, we obtain $$\underset{T\rightarrow +\infty}{lim}III_1(T)=0.$$
On the other hand, notice that $\|G_2\|_\infty<+\infty$ and that ${\displaystyle \int ^{-T} _{-\infty} e^{-(t-s) a^-}ds=\dfrac{e^{-(t+T) a^-}}{a^-}}$, so we obtain
$$\begin{array}{ll}
III_2(T)&{\displaystyle \leq \dfrac{\|G_2\|_{\infty} }{2T} \int^T_{-T} \dfrac{e^{-(t+T) a^-}}{a^-} dt}\\
\\&{\displaystyle \leq \dfrac{\|G_2\|_{\infty} }{2T\, (a^-)^2}}\qquad \underset{T\longrightarrow +\infty }{\longrightarrow 0},
\end{array} $$
which implies the required result.\\
\\
Step 3:\\
Let
\vspace{-0.5cm}
\begin{eqnarray*}\gamma : [0,1]&\longrightarrow& \mathbb{R}\\
u&\longmapsto& {\displaystyle \underset{t\in \mathbb{R} }{sup}\bigg\{-a(t)+\bigg[\sum^N_{i=1}b_i(t) \bigg(\dfrac{(n-m)}{4}+\dfrac{m}{(1+k^n)^2}\bigg)M^{m-1}+L_H\bigg]e^u\bigg\}.}\end{eqnarray*}
It is clear that $\gamma$ is continuous function on [0,1].\\
From $\textbf{(H4)}$ : $\gamma$(0)<0, so $\exists \zeta \in [0,1]$ such that $$\gamma(\zeta)<0\qquad (C.8).$$
Next, we claim that $ \Gamma(\psi)(t) \in [k,M]$ for all $t\in \mathbb{R} .$\\
For $\psi \in \mathcal{B}$, we have $$
\begin{array}{lll}
\bullet \text{ }\Gamma(\psi)(t)&\leq{\displaystyle \int ^t_{-\infty} e^{-(t-s) a^-} \left[ \sum^{N}_{i=1} b_i^+ -H^-\right]ds} \qquad(\text{By \textbf{(C.7)}})\\
\\&\leq{\displaystyle \int ^t_{-\infty} e^{-a^-(t-s)} a^- M ds} \qquad ({\text{By \textbf{(H2)}} )}\\
\\&=M.\\
\\ \bullet \text{ }\Gamma(\psi)(t)&\geq {\displaystyle \int^t_{-\infty} e^{-(t-s)a^+}\left[ \sum^{N}_{i=1} \dfrac{b_i^- k^m}{1+k^n} -H^+\right] ds} \quad (\text{By \textbf{(C.3)} and \textbf{(C.5)}})\\
\\ &\geq {\displaystyle \int ^t_{-\infty} e^{-(t-s)a^+} a^+ k ds} \qquad (\text{By \textbf{(H3)}} )\\
\\&=k.
\end{array}
$$
Thus $\Gamma$ is a self-mapping from $\mathcal{B}$ to $\mathcal{B}$. \\
\\
$\ast$ $\Gamma$ is a contraction. Indeed;
Let $\varphi ,\psi \in \mathcal{B} $, we have \begin{eqnarray*}
\Vert \Gamma(\varphi)-\Gamma(\psi)\Vert _\infty &=&\underset{t\in \mathbb{R}}{sup}|\Gamma(\varphi)(t)-\Gamma(\psi)(t)|\\
\\ &\leq &{\displaystyle \underset{t\in \mathbb{R}}{sup} \int^t_{-\infty} e^{-\int^t_s a(u)du} \sum^{N}_{i=1} b_i(s) \bigg| \dfrac{\varphi^m(s-\tau_i(s))}{1 + \varphi^n(s-\tau_i(s))}-\dfrac{\psi^m(s-\tau_i(s))}{1 + \psi^n(s-\tau_i(s))}\bigg| }\\
\\&&+\bigg|H(s,\varphi(s-\sigma(s)))-H(s,\psi(s-\sigma(s)))\bigg|ds.
\end{eqnarray*}
By the mean value theorem, one can obtain $$\begin{array}{lll}
\bigg|\dfrac{x^m}{1+x^n}- \dfrac{y^m}{1+y^n}\bigg|&=|g'(\theta)| |x-y| \qquad \qquad\text{\qquad \text{\qquad }where $\bigg[g : t\in \mathbb{R}^+ \longrightarrow \dfrac{t^m}{1+t^n}\bigg]$}\\ \\
&=\bigg|\dfrac{\theta^{m-1+n}(m-n)+m\theta^{m-1}}{(1 + \theta^n)^2}\bigg| |x-y|\\
\\&\leq \bigg[\dfrac{\theta^{m-1}(n-m)}{4 } +\dfrac{m\theta^{m-1}}{(1+\theta^n)^2}\bigg]|x-y|,\\
\end{array} $$ where $x,y \in [k,M]$, $\theta$ lies between $x$ and $y$. \\
Consequently, the following estimate hold
$$
\begin{array}{lll}
{\displaystyle \Vert \Gamma(\varphi)-\Gamma(\psi)\Vert_\infty}& \leq {\displaystyle \underset{t\in \mathbb{R}}{sup} \int^t_{-\infty} e^{-\int^t_s a(u)du} \bigg(\sum^{N}_{i=1} b_i(s) \bigg[ \dfrac{(n-m)M^{m-1}}{4}+\dfrac{m M^{m-1} }{(1+k^n)^2}\bigg]}\\
\\& \bigg| \varphi(s-\tau_i(s))-\psi(s-\tau_i(s))\bigg| +L_H \parallel \varphi-\psi \parallel_\infty\bigg) ds\\
\\& {\displaystyle \leq \underset{t\in \mathbb{R}}{sup} \int^t_{-\infty} e^{-\int^t_s a(u)du} \bigg(\sum^{N}_{i=1} b_i(s) \bigg[\dfrac{(n-m)}{4}+\dfrac{m}{(1+k^n)^2}\bigg]M^{m-1}+L_H\bigg)}\\
\\& {\displaystyle \times \parallel \varphi-\psi \parallel _\infty ds}\\
\\&\leq {\displaystyle \parallel \varphi-\psi \parallel _\infty \underset{t \in \mathbb{R}}{sup} \int ^t_{-\infty} e^{-\int^t_s a(u)du}a(s) e^{-\zeta}ds} \qquad (\text{By \textbf{(C.8)}})\\
\\& \leq {\displaystyle e^{-\zeta} \parallel \varphi-\psi \parallel_\infty},\end{array}$$
which proves that $\Gamma$ is a contraction on ${\displaystyle \mathcal{B}}$. By the Banach fixed point theorem, the operator $\Gamma$ has a unique fixed point ${\displaystyle x^* (.)\in \mathcal{B}}$, which corresponds to the unique p.a.p solution of equation (4).
\end{proof}
\section{The stability of the pseudo almost periodic solution}
\label{sec:4}
\begin{definition}\cite{I} Let $f : \mathbb{R} \longrightarrow \mathbb{R}$ be a continuous function, then the upper right derivative of $f$ is defined as
$$D^+f(t)= \overline{\underset{h\rightarrow0^+}{lim}}\dfrac{f(t + h) - f(t)}{h}.$$
\end{definition}
\begin{definition} We say that a solution $x^*$ of Eq. (4) is a global attractor or globally asymptotically stable (GAS) if for any
positive solution $x_t(t_0,\phi)$
$$\underset{t\rightarrow +\infty}{lim}|x^*(t)-x_t(t_0,\phi)|=0.$$
\end{definition}
\begin{theorem}
Under the assumptions \textbf{H(1)-H(4)}, the positive pseudo almost periodic solution $x^*(.)$ of the equation (4) is a global attractor.
\end{theorem}
\begin{proof}
Firstly, set $x_t(t_0,\phi)$ for $\phi \in \mathcal{C}^0$ by $x(t)$ for all $t \in \mathbb{R}$. Let $$y(.)=x(.)-x^*(.).$$Then,
\begin{eqnarray*}
y'(t) &=& -a(t)[x(t)-x^*(t)]+\sum^{N}_{i=1} b_i(t)\bigg[\dfrac{x^m(t-\tau_i(t))}{1 + x^n(t-\tau_i(t))}-\dfrac{x^*{^m}(t-\tau_i(t))}{1 + x^*{^n}(t-\tau_i(t))}\bigg] \\
\\ &&-\bigg [H(t,x(t-\sigma(t)))-H(t,x^*(t-\sigma(t)))\bigg].
\end{eqnarray*}
Let us define a continuous function $\Delta :\mathbb{R^+}\longrightarrow \mathbb{R}$ by
$$\Delta(u)= u-a^-+\bigg[L_{H}+\sum_{i=1}^{N} b^+_i\bigg(\dfrac{(n-m)}{4}+\dfrac{m}{(1+k^n)^2}\bigg)M^{m-1}\bigg]\exp(u r).$$
From $\textbf{(H4)}$, we have $\Delta (0)<0$, then there exists $\lambda\in \mathbb{R}^+$, such that$$\Delta(\lambda)<0 \qquad (C.11).$$
We consider the Lyapunov functional $V(t)=|y(t)|e^{\lambda t}$. Calculating the upper right derivative of $V(t)$, we obtain
\begin{eqnarray*}
D^+V(t) &\leq & \bigg[-a(t)|y(t)|+\sum_{i=1}^{N} b_i(t)\bigg|\frac{x^m(t-\tau_i(t))}{1+x^n(t-\tau_i(t))} -\frac{(x^*)^m(t-\tau_i(t))}{1+(x^*)^n(t-\tau_i(t))}\bigg|+ \bigg|H(t,x^*(t-\sigma(t)))\\
\\&&-H(t,x(t-\sigma(t)))\bigg|\bigg]e^{\lambda t}+\lambda|y(t)|e^{\lambda t}\\
\\&\leq & e^{\lambda t}\bigg((\lambda-a^-)|y(t)|+L_H|y(t-\sigma(t))|+\sum_{i=1}^{N} b^+_i \bigg|\frac{x^m(t-\tau_i(t))}{1+x^n(t-\tau_i(t))}-\frac{(x^*)^m(t-\tau_i(t))}{1+(x^*)^n(t-\tau_i(t))}\bigg|\bigg).
\end{eqnarray*}
We claim that
\begin{eqnarray*}
V(t) &=& |y(t)|e^{\lambda t}< e^{\lambda t_0}(M+\max_{t<t_0}|x(t)-x^*(t)|)=M_1, \forall t\geq t_0.
\end{eqnarray*}
Suppose that there exists $t_1>t_0$ such that
\begin{eqnarray*}
V(t_1) &=& M_1, V(t)<M_1, \forall \text{ } t_0-r\leq t< t_1.
\end{eqnarray*}
Besides,
\begin{eqnarray*}
0\leq D^+V(t_1)&\leq& (\lambda-a^-)|y(t_1)|e^{\lambda t_1}+L_H|y(t_1-\sigma(t_1))|e^{\lambda (t_1-\sigma(t_1))}e^{\sigma(t_1) \lambda}\\
\\&&+\sum_{i=1}^{N} {b_i}^+ e^{\lambda t_1}\bigg[\frac{x^m(t_1-\tau_i(t_1))}{1+x^n(t_1-\tau_i(t_1))}-\frac{(x^*)^m(t_1-\tau_i(t_1))}{1+(x^*)^n(t_1-\tau_i(t_1))}\bigg].
\end{eqnarray*}
On the other hand, for all $x,x^* \in [k,M]$ we have
$$\bigg|\dfrac{x^m}{1+x^n}-\dfrac{(x^*)^m}{1+(x^*)^n}\bigg| \leq \bigg[\dfrac{\zeta^{m-1}(n-m)}{4 } +\dfrac{m\zeta^{m-1}}{(1+\zeta^n)^2}\bigg] |x-x^*|,$$
where $\zeta$ lies between $x$ and $x^*$, so that $\zeta \in [k,M]$.
\\
So, we obtain
\begin{eqnarray*}
\bigg |\dfrac{x^m(t-\tau_i(t))}{1+x^n(t-\tau_i(t))}-\frac{(x^*)^m(t-\tau_i(t))}{1+(x^*)^n(t-\tau_i(t))}\bigg | \leq \bigg[\dfrac{M^{m-1}(n-m)}{4 } +\dfrac{m\,M^{m-1}}{(1+k^n)^2}\bigg] |y(t-\tau_i(t))|.
\end{eqnarray*}
Then,
\begin{eqnarray*}
0&\leq& D^+V(t_1)\\
\\&\leq& (\lambda-a^-)|y(t_1)|e^{\lambda t_1}+L_H|y(t_1-\sigma(t_1))|e^{\lambda (t_1-\sigma(t_1))}e^{\sigma(t_1)\lambda}\\
\\ &&+\sum_{i=1}^{N} b^+_i\, e^{\lambda (t_1-\tau_i(t_1)) }e^{\tau_i(t_1) \lambda}|y(t_1-\tau_i(t_1))|\bigg[\dfrac{M^{m-1}(n-m)}{4 } +\dfrac{m M^{m-1}}{(1+k^n)^2}\bigg]\\
\\&\leq& \bigg(\lambda-a^-+L_H e^{r\lambda}+\sum_{i=1}^{N} b^+_i e^{r \lambda}\bigg[ \dfrac{(n-m)}{4}+\dfrac{m}{(1+k^n)^2}\bigg]M^{m-1} \bigg)M_1.
\end{eqnarray*}
However, by \textbf{(C.11)}
\begin{eqnarray*}
\lambda-a^-+L_H e^{r\lambda}+\sum_{i=1}^{N} b^+_i e^{r \lambda}\bigg [ \dfrac{(n-m)}{4}+\dfrac{m}{(1+k^n)^2}\bigg]M^{m-1} <0,
\end{eqnarray*}
which is a contradiction. Consequently, $|y(t)|< M_1 e^{-\lambda t},\text{ } \forall t> t_0$, and hence $|x(t)-x^*(t)|\underset{t\rightarrow +\infty}{\longrightarrow}0$.
\end{proof}
\section{An example}
\label{sec:5}
In this section, we present an example to check the validity of our theoretical results obtained in the previous sections.\\
\\
First, we construct a function $\omega(t)$. For $n = 1, 2,...$ and $0 \leq i < n$, $$a_n=\dfrac{n^3-n}{3}$$ and intervals $$I_n^i = [a_n + i, a_n + i + 1].$$ Choose a non-negative, continuous function g on [0,1] defined by $$g(s) =\dfrac{8}{\pi}\sqrt{s-s^2}.$$
Define the function $\omega$ on $\mathbb{R}$ by
\[\omega(t)=
\left \{
\begin{array}{lll}
g[t-(a_n+i)], &\qquad t \in I_n^i,\\
0, &\qquad t\in \mathbb{R}^+ \setminus \bigcup \{I_n^i: n=1,2,\ldots,\ 0\leq i < n \}, \\
\omega(-t), &\qquad t<0.
\end{array}
\right.
\]\\
From {\color{blue}{(\cite{D}, p.211, Example 1.7)}}, we know that $\omega \in PAP_0( \mathbb{R}, \mathbb{R})$ is ergodic. However, $\omega \notin C_0( \mathbb{R}, \mathbb{R})$.
\begin{figure}[hbtp]
\includegraphics[scale=0.4]{ergodique.PNG}
\caption{Diagram of $\omega$}
\end{figure}
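The construction of $\omega$ above can be reproduced numerically. The following hedged Python sketch evaluates $\omega(t)$ and crudely checks that the sliding mean $\frac{1}{2T}\int_{-T}^{T}|\omega(t)|dt$ decays as $T$ grows; the truncation $n\leq n_{\max}$ and the sampling step are assumptions made only so that the check runs in finite time.
\begin{verbatim}
# Hedged numerical sketch of the ergodic perturbation omega constructed above:
# a_n = (n^3 - n)/3, I_n^i = [a_n + i, a_n + i + 1] for 0 <= i < n,
# g(s) = (8/pi) sqrt(s - s^2) on [0, 1], and omega(-t) = omega(t).
import numpy as np

def g(s):
    return (8.0 / np.pi) * np.sqrt(np.clip(s - s * s, 0.0, None))

def omega(t, n_max=200):
    t = abs(t)                            # even extension
    for n in range(1, n_max + 1):
        a_n = (n**3 - n) / 3.0
        if t < a_n:                       # the blocks [a_n, a_n + n] move to the right
            return 0.0
        if t <= a_n + n:                  # t falls inside one of the intervals I_n^i
            i = min(int(t - a_n), n - 1)
            return g(t - (a_n + i))
    return 0.0

if __name__ == "__main__":
    # crude check that (1/2T) * int_{-T}^{T} |omega(t)| dt shrinks as T grows
    for T in (10.0, 100.0, 1000.0):
        ts = np.linspace(-T, T, 20001)
        vals = np.array([abs(omega(t)) for t in ts])
        print(T, vals.mean())             # sample mean approximates the sliding mean
\end{verbatim}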
Let us consider the case $n=m=2$ , $N=1$,\\
\\
$a(t)=0.38+ \dfrac{|sin(t)+sin (\pi t)|}{400} + \dfrac{ \pi\omega(t)}{800}$, $b_1(t) =1+\dfrac { sin^2( t) + sin^2(\sqrt{2}t)}{10}+ \dfrac{1}{100(1+t^2)},$ \\
\\
$ \tau_1(t) = cos^2(t) + cos^2 (\sqrt{2}t) + 1 +e^{-t^2}, \text{ }H(t,x)=0.01|sin(t)+cos(\sqrt{3}t)| \dfrac{|x|}{1+x^2},\text{ and }\sigma(t)=|sin(t)-sin(\pi t)|.$\\
\\
Clearly, $a^+=0.39, a^-=0.38, b^+=1.21, b^-=1, H^+=5*10^{-3}, H^-=0, r=4.$\\
\\
It is not difficult to see that H $\in PAP_U(\mathbb{R} \times \mathbb{R},\mathbb{R}^+)$ and satisfies the Lipschitz condition with $L_H=10^{-2}$.\\
\\Let k=2, M=3.29. We obtain easily:
\begin{enumerate}
\item[[H1]] $0<2<3.29;$
\item[[H2]] ${\displaystyle -a^- M+\sum^N_{i=1}b_i^+ - H^-}= -0.04 < 0$;
\item[[H3]] ${\displaystyle -a^+ k+\sum^N_{i=1}b_i^- \dfrac{k^m}{1+k^n} -H^+}= 0.015> 0$;
\item[[H4]] ${\displaystyle \underset{t\in \mathbb{R} }{sup}\bigg\{-a(t)+\sum^N_{i=1}b_i(t)\bigg[\dfrac{(n-m)}{4}+\dfrac{m}{(1+k^n)^2}\bigg]M^{m-1}+L_H\bigg\}}$= -0.0515<0.
\end{enumerate}
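As a quick sanity check, the three inequalities above can be re-evaluated directly from the quoted constants; the short Python sketch below only reproduces this arithmetic (the name LH stands for the Lipschitz constant $L_H$, and the supremum in [H4] is bounded using $a^-$ and $b^+$).
\begin{verbatim}
# Quick arithmetic re-check of (H2)-(H4) with the constants quoted above
# (m = n = 2, N = 1).  LH denotes the Lipschitz constant L_H of H.
a_minus, a_plus = 0.38, 0.39
b_plus, b_minus = 1.21, 1.0
H_plus, H_minus = 5e-3, 0.0
LH = 1e-2
k, M, m, n = 2.0, 3.29, 2, 2

H2 = -a_minus * M + b_plus - H_minus                      # (H2): should be < 0
H3 = -a_plus * k + b_minus * k**m / (1 + k**n) - H_plus   # (H3): should be > 0
# (H4): upper bound of the supremum, obtained with inf a = a^- and sup b_1 = b^+
H4 = -a_minus + b_plus * ((n - m) / 4 + m / (1 + k**n)**2) * M**(m - 1) + LH
print(round(H2, 4), round(H3, 4), round(H4, 4))           # -0.0402, 0.015, -0.0515
\end{verbatim}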
Then, all the conditions in Theorems {\color{blue}1} and {\color{blue}2} are satisfied. Therefore, there exists a unique pseudo almost periodic solution $x^*$ in $\mathcal{B}=\{\phi \in PAP(\mathbb{R},\mathbb{R}); k\leq \phi (t) \leq M,\text{} \forall t\in \mathbb{R}\}$ which is a global attractor. \\
\begin{figure}[hbtp]
\includegraphics[scale=0.7]{deuxvaleurs.PNG}
\caption{Numerical solutions of the example for the initial values $x_0=0.1$ and $x_0=1$}
\end{figure}
\begin{remark}
Notice that, in view of the above example, the condition of Proposition 4.2 is necessary. Besides, the results are verified by the numerical simulations in Fig. 2.
\end{remark}
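For completeness, trajectories such as those of Fig. 2 can be generated with a simple step-by-step integration of equation (4) with the example coefficients. The following Python sketch is only illustrative: the forward-Euler scheme, the step size, the constant initial history and the nearest-grid-point treatment of the delays are our own assumptions, not part of the simulations reported above.
\begin{verbatim}
# Hedged sketch: forward-Euler integration of the delayed model (4) with the
# example coefficients (m = n = 2, N = 1); illustrative only.
import numpy as np

def a(t):
    return 0.38 + abs(np.sin(t) + np.sin(np.pi * t)) / 400.0

def b1(t):
    return 1.0 + (np.sin(t)**2 + np.sin(np.sqrt(2) * t)**2) / 10.0 + 1.0 / (100.0 * (1.0 + t**2))

def tau1(t):
    return np.cos(t)**2 + np.cos(np.sqrt(2) * t)**2 + 1.0 + np.exp(-t**2)

def sigma(t):
    return abs(np.sin(t) - np.sin(np.pi * t))

def H(t, x):
    return 0.01 * abs(np.sin(t) + np.cos(np.sqrt(3) * t)) * abs(x) / (1.0 + x**2)

def simulate(x0, t_end=100.0, dt=1e-2, r=4.0):
    ts = np.arange(-r, t_end + dt, dt)
    xs = np.full(ts.shape, float(x0))               # constant history on [-r, 0]
    start = int(round(r / dt))                      # index of t = 0
    for i in range(start, len(ts) - 1):
        t = ts[i]
        x_tau = xs[max(i - int(tau1(t) / dt), 0)]   # delayed states, nearest grid point
        x_sig = xs[max(i - int(sigma(t) / dt), 0)]
        rhs = -a(t) * xs[i] + b1(t) * x_tau**2 / (1.0 + x_tau**2) - H(t, x_sig)
        xs[i + 1] = xs[i] + dt * rhs
    return ts, xs

if __name__ == "__main__":
    for x0 in (0.1, 1.0):
        ts, xs = simulate(x0)
        print(x0, xs[-1])                           # final value of each trajectory
\end{verbatim}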
\section{Conclusion}
\label{sec:6}
In this paper, some new conditions were given ensuring the existence and uniqueness of the positive pseudo almost periodic solution of the hematopoiesis model with mixed delays and with a non-linear harvesting term (which is more realistic).\\
Also, the global attractivity of the unique pseudo almost periodic solution of the considered model is established by means of a new and suitable Lyapunov function.\\
\\To the best of our knowledge, this is the first paper considering such solutions for this generalized model.\\
Notice that our approach can be applied to the case of the almost automorphic and pseudo almost automorphic solutions of the considered model.
\section{Introduction}
Quantum dots (QD) in semiconductor heterostructures are sometimes regarded as
artificial atoms, but one aspect in which they differ from real atoms is that
they live in an invasive environment. QD carriers interact with both acoustic
and optical phonons, and sometimes with free carriers in the wetting layer (WL)
or in the bulk. Such interactions play an important role in the QD optical
properties, and especially in the dissipative behavior, like carrier population
redistribution and polarization dephasing.
In this respect, the role of the phonons enjoyed a much larger attention in the
literature, one reason being the availability of the independent boson model
(IBM) \cite{mahan,huang_rhys, duke_mahan}, which is both simple and in
certain circumstances exact. The
popularity of the method cannot be overestimated, it being used in various
contexts like the theory of exciton dephasing and absorption \cite{zimm_ebg,
wacker,li_jqe}, phonon-assisted exciton recombination \cite{hofling},
phonon-mediated off-resonant light-matter coupling in QD lasers
\cite{florian_1, hohenester}, generation of entangled phonon states
\cite{hahn_kuhn} or phonon-assisted adsorption in graphene \cite{sengupta}, to
cite only a few.
The IBM relies on the carrier-phonon interaction being diagonal in the
carrier states. If not, or if additional interactions are present, the method
ceases to be exact, but it is still helpful as a way to handle a part of the
interaction, responsible for the polaron formation.
This diagonality implies that the bosons see an occupation number, either zero
or one, according to whether the QD excitation is present or not. The
difference
between the two cases amounts, in a linear coupling, to a displacement of the
phonon oscillation centers, without changing their frequency. In other words
one has just a change of basis, performed by a unitary operator, the polaronic
transform.
The diagonality requirement means that the method is particularly suited for
calculating pure dephasing, i.e. polarization decay without population changes,
and absorption spectra \cite{zimm_ebg, besombes}, where it produces exact
analytic results.
In view of this wide range of applications it is legitimate to ask whether such
a unitary transform does not exist for the interaction with the free carriers
too.
We show that the answer is positive, if we frame the problem as above, i.e. if
the interaction is diagonal in the QD states. We illustrate the situation in
the case of a two-level system where the electron-hole vacuum acts as the
ground state and an exciton pair as the excited state. The charge distribution
of the latter acts as a scattering center for the carriers in the continuum. It
is known \cite{taylor, sitenko} that the free and the scattered continua are
unitarily equivalent, with the transform provided by the scattering matrix. In
many ways this approach resembles the IBM, and can be regarded as its fermionic
analogue. Many similarities between the two can be seen, but we also point out
the
differences. Most importantly, the method is not exact any more, but it lends
itself to a diagrammatic expansion.
In a previous paper \cite{florian_2} this approach has been already used in the
context of a nonresonant Jaynes-Cummings (JC) model. The cavity feeding was
assisted by the fermionic bath of WL carriers, which compensated the energy
mismatch. The situation there is more complicated due to the QD states getting
mixed by the JC interaction with the photonic degrees of freedom.
In the present paper, we address a simpler, more clearcut situation, involving
only the exciton-continuum interaction. We illustrate the method by calculating
the polarization decay and absorption line shape, as functions of bath
temperature and carrier concentration. We also discuss in more detail the
similarities and differences with respect to the IBM.
\section{The model}
\label{sec:model}
The system under consideration consists of a QD exciton interacting with a
fermionic thermal bath, represented by the WL carriers. The latter are taken
interactionless and in thermal equilibrium.
The Hamiltonian describing the problem is ($\hbar=1$)
\begin{align}
\begin{split}
H = &\,\varepsilon^{\phantom\dagger}_X X^\dagger X + (1 - X^\dagger X)\,h_0 + X^\dagger X
\, h_X\,, \\
h_0 = & \sum_{\lambda=e,h}\sum_{\vec k} \varepsilon^{\lambda}_{\vec k}
\lambda^\dagger_{\vec k} \lambda^{\phantom\dagger}_{\vec k}\,, \quad\quad h_X = h_0
+ W \, .
\label{eq:hamiltonian}
\end{split}
\end{align}
Here $X^{\dagger}$, $X$ are the excitonic creation and annihilation operators.
Specifically, considering in the QD one $s$-state in each band, with operators
$e^{\dagger},e$ and $h^{\dagger},h$ for electrons and holes respectively, we have $X=he$.
Limiting the QD configurations to the neutral ones, one has $X^{\dagger} X=e^{\dagger} e=h^{\dagger}
h$.
The exciton energy is $\varepsilon^{\phantom\dagger}_X$. The WL Hamiltonian is given by
$h_0$, with the subscripted $\lambda$ symbol meaning either $e$ or $h$
WL-continuum operators, and the momentum index $\vec k$ including tacitly the
spin. In the presence of the exciton the WL Hamiltonian $h_X$ gets additionally
a term $W$, describing the interaction with the QD carriers.
This is expressed in terms of the matrix elements
\begin{equation}
V^{\lambda\lambda'}_{ij,kl} = \sum_{\vec q} V_{\vec
q}\bra{\varphi^{\lambda}_i}
e^{i\vec q\vec r}\ket{\varphi^{\lambda}_l}\bra{\varphi^{\lambda'}_j} e^{-i\vec
q\vec r}\ket{\varphi^{\lambda'}_k}
\label{eq:mat_el}
\end{equation}
of the Coulomb potential $V_{\vec q}$, between one-particle states, with
$\lambda, \lambda'=e,h$. Only two of these indices are needed
since band index conservation is, as usual, assumed.
The WL-QD interaction has the form
\begin{align}
\begin{split}
& X^{\dagger} X \,W= \\
& \sum_{\vec k, \vec k'} e^{\dagger}_{\vec k} e^{\phantom\dagger}_{\vec k'}
\left[ V^{ee}_{s \vec k, \vec k's} e^{\dagger} e
-V^{he}_{s \vec k, \vec k's} h^{\dagger} h
-V^{ee}_{s \vec k, s \vec k'} e^{\dagger} e \right] + \\
& \sum_{\vec k, \vec k'} h^{\dagger}_{\vec k} h^{\phantom\dagger}_{\vec k'}
\left[ V^{hh}_{s \vec k, \vec k's} h^{\dagger} h
-V^{eh}_{s \vec k, \vec k's} e^{\dagger} e
-V^{hh}_{s \vec k, s \vec k'} h^{\dagger} h \right] \, .
\label{eq:W}
\end{split}
\end{align}
It shows that the WL carriers are scattered by an external field produced by
the exciton, having the form
\begin{equation}
W = \sum_{\lambda=e,h}\sum_{\vec k, \vec k'} W^\lambda_{\vec k, \vec k'}
\lambda ^{\dagger}_{\vec k} \lambda ^{\phantom\dagger}_{\vec k'} \,.
\label{eq:scat_field}
\end{equation}
In each $W^\lambda_{\vec k, \vec k'}$ the first two terms describe the direct,
electrostatic interaction of WL carriers with the QD electron and hole,
respectively. The difference between repulsion and attraction is nonzero due to
the different charge densities of these two. The exciton is globally neutral,
but local charges are usually present and generate scattering. The strength
of the scattering depends on the degree of charge compensation within the
exciton.
The third term is the exchange contribution, and it is not expected to be
large,
since around $\vec q =0$ the matrix elements in Eq.\eqref{eq:mat_el} become
overlap
integrals between orthogonal states.
The idea of the present method relies on the unitary equivalence between the WL
Hamiltonians $h_0$ and $h_X$ provided by the scattering matrix $\mathcal
S(0,-\infty)$ (see e.g. ~\cite{taylor,sitenko}). One has
\begin{equation}
\mathcal S(-\infty,0) \, h_X \, \mathcal S(0,-\infty) = h_0 \, ,
\label{eq:hh0}
\end{equation}
with $\mathcal S$ generated by the scattering potential $W$
\begin{align}
\begin{split}
\mathcal S(t_1, t_2) &= \mathcal S^\dagger(t_2,t_1)\\
&=\mathcal{T} \exp \left[ -i \int_{t_2}^{t_1} \, W(t)\,\text{d}t
\right] \,, \quad t_1>t_2 \, .
\end{split}
\end{align}
$\mathcal{T}$ is the time ordering operator and the interaction representation
of the perturbation $W(t)$ with respect to $h_0$ is used.
For the full WL-QD Hamiltonian we formally eliminate the interaction part using
the unitary transform
\begin{equation}
U = 1-X^\dagger X + X^\dagger X \,\mathcal S(0,-\infty) \, ,
\end{equation}
which switches on the action of the $\mathcal S$-matrix only when the exciton
is present. It follows that
\begin{align}
\begin{split}
U^\dagger & \left[(1-X^\dagger X)\, h_0 + X^\dagger X \, h_X
\right] U = \\
(1-X^\dagger X)\, h_0
+ & X^\dagger X\, \mathcal S(-\infty,0)\,h_X\,
\mathcal S(0,-\infty) = h_0 \, .
\end{split}
\end{align}
The excitonic operators are changed, according to $\widetilde X= U^{\dagger}\, X\,
U=X\, \mathcal S(0,-\infty)=\mathcal S(0,-\infty)\, X$ and similarly
$\widetilde X^{\dagger}= X^{\dagger} \mathcal S(-\infty,0) $, but $X^{\dagger} X$ remains
invariant. Therefore in the transformed problem
\begin{equation}
\widetilde H= U^{\dagger}\, H\, U = \varepsilon^{\phantom\dagger}_X X^{\dagger}\, X + h_0\, ,
\label{eq:H_tilde}
\end{equation}
the QD and the WL become uncoupled.
This follows faithfully the effect produced by the polaronic transform in the
bosonic bath case, as described by the IBM.
Assuming an instantaneous excitation at $t=0$, the linear optical polarization
of the QD is expressed in terms of the exciton retarded Green's function
\begin{equation}
\mathcal P(t)= -i\, \theta(t) \braket{X(t) X^{\dagger}} =
-i\,\theta(t)\text{Tr}\{\rho\, X(t) X^{\dagger} \}\, .
\end{equation}
In the unitary transformed picture
\begin{equation}
\mathcal P(t)= -i\, \theta(t)\text{Tr}\{\widetilde \rho\, \widetilde X(t)
\widetilde X^{\dagger} \} \, ,
\end{equation}
the problem is interaction-free, and QD and WL operators evolve independently.
Therefore
\begin{align}
\widetilde X(t) & = e^{-i \varepsilon^{\phantom\dagger}_X t} X e^{i h_0 t} \mathcal S(0,
-\infty) e^{-i h_0 t} \nonumber \\
&= e^{-i \varepsilon^{\phantom\dagger}_X t} X
\mathcal S (t,-\infty)\,,
\end{align}
and as a consequence
\begin{equation}
\mathcal P(t)= -i\, \theta(t) e^{-i \varepsilon^{\phantom\dagger}_Xt}
\braket{XX^{\dagger}}\braket{\mathcal S(t,0)} \, .
\end{equation}
Essentially, besides a trivial exciton energy oscillation, the problem boils
down to evaluating the thermal bath average of the scattering matrix $\mathcal
S(t,0)$ for positive times.
This is again formally similar to the IBM case, but differing in the details,
as will be discussed below. In the present case one makes use of the linked
cluster (cumulant) expansion for $\braket{\mathcal S (t,0)}$
\cite{abrikosov,mahan}, in which a lot of resummation has been
performed. As a consequence $\braket{\mathcal S(t,0)}$ is expressed
as an exponential, $\exp[\Phi(t)]$, where $\Phi(t)$ is the sum of all connected
diagrams with no external points $\Phi(t)=\sum_n L_n(t)$, where the diagram
$L_n\, , n=1,2,3 \dots$ of order $n$ comes with a factor $1/n$.
Its internal points are time-integrated from $0$ to $t$. In our case the
interaction is an external potential, not a many-body one and the elementary
interaction vertex in the diagrams is as in Fig.~\ref{fig:diagrams}(a). The
first diagrams of the expansion are represented in Fig.~\ref{fig:diagrams}(b).
One has
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{figure1.eps}
\caption{(a) Elementary interaction vertex. (b) First three connected
diagrams $L_1$, $L_2$ and $L_3$ of the linked cluster expansion in the
evaluation of the thermal average of the $S$-matrix $\braket{S(t,0)}$.}
\label{fig:diagrams}
\end{figure}
\begin{align}
\begin{split}
L_1(t) =& - \sum_{\lambda,\vec k} \int_0^t \text{d}t_1
W^{\lambda}_{\vec k,\vec k}\, G^0_{\lambda \vec k}(t_1,t_1^+) \\
L_2 (t)=& -\frac{1}{2} \sum_{\lambda, \vec k,\vec k'}
\Big|W^{\lambda}_{\vec k,\vec k'}\Big|^2 \\
&\times\int_0^t \text{d}t_1\int_0^t\text{d}t_2 \,G^0_{\lambda \vec
k}(t_1,t_2) \, G^0_{\lambda \vec k'}(t_2,t_1)\,,
\end{split}
\label{eq:diag_uneval}
\end{align}
where $G^0_{\lambda \vec k}$ is the free Green's function for the WL state
$\lambda \vec k$.
For the first-order contribution one obtains an imaginary, linear time
dependence, which amounts to a correction to the exciton energy.
\begin{equation}
L_1(t) = - i\sum_{\lambda,\vec k} W^{\lambda}_{\vec k,\vec k}\,
f^\lambda_{\vec k}\, \,t
\label{eq:L1}
\end{equation}
with $f^\lambda_{\vec k}$ the Fermi function for the WL level carrying the
same indices.
More important is the second order diagram ($\varepsilon^\lambda_{\vec k
\vec k'}$ denotes the difference $\varepsilon^\lambda_{\vec
k}-\varepsilon^\lambda_{\vec k'}$)
\begin{align}
\begin{split}
L_2(t) =&\sum_{\lambda,\vec k,\vec k'} \Big| W^{\lambda}_{\vec
k,\vec k'}\Big|^2 (1-f^{\lambda}_{\vec k})f^{\lambda}_{\vec k'}\,\\
&\times \Big( e^{-i\,\varepsilon^{\lambda}_{\vec k\vec k'}\,t}
-1 +i\,\varepsilon^{\lambda}_{\vec k\vec k'}\,t
\Big)\Big(\varepsilon^{\lambda}_{\vec k\vec k'}\Big)^{-2} \, .
\end{split}
\label{eq:L2}
\end{align}
Initially it decays quadratically with time, as obvious from the double
integral from $0$ to $t$ in Eq.\eqref{eq:diag_uneval}, and also reflected in
Eq.\eqref{eq:L2} by the subtraction of the first two terms in the expansion of
the exponential.
More relevant is the long-time behavior of the real part
\begin{equation}
\text{Re} L_2(t) = -\sum_{\lambda,\vec k,\vec k'} \Big| W^{\lambda}_{\vec
k,\vec k'}\Big|^2 (1-f^{\lambda}_{\vec k})f^{\lambda}_{\vec k'}\,
\,\frac{1-\cos(\varepsilon^\lambda_{\vec k
\vec k'}\, t)}{(\varepsilon^\lambda_{\vec k \vec k'})^2} \, ,
\label{eq:real_L2}
\end{equation}
which controls the polarization decay. Indeed, using the large $t$ asymptotics
of $(1-\cos\omega t)/\omega^2 \sim \pi\, \delta(\omega)\, t$, one finds an
exponential attenuation $\mathcal P(t) \sim \exp(-\Gamma t)$ with the decay
rate given by
\begin{equation}
\Gamma = \pi \sum_{\lambda,\vec k,\vec k'} \Big| W^{\lambda}_{\vec
k,\vec k'}\Big|^2 (1-f^{\lambda}_{\vec k})f^{\lambda}_{\vec
k'}\,\delta(\varepsilon^\lambda_{\vec k \vec k'})\, .
\label{eq:Gamma}
\end{equation}
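To make the structure of Eq.\eqref{eq:Gamma} explicit, the following Python sketch evaluates it for a toy model: a single isotropic two-dimensional band with dimensionless dispersion $\varepsilon(k)=k^2$ ($\hbar=1$, arbitrary units), a Gaussian ansatz for $|W_{\vec k \vec k'}|$ and a Lorentzian-broadened $\delta$-function. Every number in it is an illustrative assumption, none corresponds to the InAs/GaAs parameters used later; the sketch merely shows that the rate is controlled by the thermal window $(1-f)f$ around the chemical potential and therefore grows with temperature.
\begin{verbatim}
# Hedged toy discretization of the rate formula above: one isotropic 2D band
# with dimensionless dispersion eps(k) = k^2 (hbar = 1, arbitrary units), a
# Gaussian ansatz for |W_kk'|, Fermi factors, and a Lorentzian-broadened
# delta function.  All parameter values are illustrative assumptions.
import numpy as np

def fermi(eps, mu, T):
    x = np.clip((eps - mu) / T, -60.0, 60.0)        # clip to avoid overflow warnings
    return 1.0 / (np.exp(x) + 1.0)

def gamma_rate(T, mu=0.05, W0=1e-3, alpha=0.3, eta=2e-3, kmax=1.0, nk=800):
    k   = np.linspace(1e-4, kmax, nk)
    dk  = k[1] - k[0]
    eps = k**2
    f   = fermi(eps, mu, T)
    w   = k * dk / (2.0 * np.pi)                    # radial 2D measure (toy normalization)
    W2  = W0**2 * np.exp(-(k[:, None] - k[None, :])**2 / (2.0 * alpha**2))
    dlt = (eta / np.pi) / ((eps[:, None] - eps[None, :])**2 + eta**2)
    occ = (1.0 - f[:, None]) * f[None, :]           # Pauli blocking factors
    return np.pi * np.einsum('i,ij,j->', w, W2 * occ * dlt, w)

if __name__ == "__main__":
    # the rate grows with temperature: more phase space in the (1-f)f window
    for T in (0.002, 0.005, 0.01, 0.02):
        print(T, gamma_rate(T))
\end{verbatim}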
A comparative discussion with the IBM is in order. The dephasing process does
not imply a change of population (pure dephasing) and therefore the decay rate
$\Gamma$ does not involve energy transfer, as seen by the presence of the
$\delta$-function. In the case of IBM that means only zero-energy phonons are
involved. Then all the discussion takes place around the spectral edge, and
depends
on the density of states and coupling constants there. Usually they vanish as
a higher power of energy and overcome the singularity of the Bose-Einstein
distribution, with the result that $\Gamma=0$. This leads to the problem of
the zero-phonon line (ZPL) appearing as an artificial pure $\delta$-peak in the
spectrum. This is a weak point and several ways out have been devised, like
including a phenomenological line broadening \cite{zimm_ebg}, a phonon-phonon
interaction \cite{zimmulj}, or considering a lower dimensionality \cite{wacker}
to enhance the density of states. The fermionic case is free from this problem,
since $\Gamma$ relies on quantities around the chemical potential.
Also, it is worth noting that limiting the expansion to $L_2$ gives the exact
result in the IBM, while here it is only an approximation. One may plead in
favor of neglecting higher diagrams by arguing that a lot of compensation takes
place between the direct terms in Eq.\eqref{eq:W} and the exchange terms are
small, in other words the QD-WL coupling is weak. Nevertheless, this is not
sufficient, since it might turn out that higher order diagrams behave as
higher powers in time, and thus asymptotically overtake the second order one.
We argue below that this is not the case.
Indeed, the structure of the diagrams is such that the $\theta$-functions
contained in the Green's functions splits the expression into integrals of the
form
\begin{equation}
I_n(t) = \int_0^t \text{d}t_1 e^{i \omega_1 t_1}\int_0^{t_1} \text{d}t_2 e^{i
\omega_2 t_2} \,\ldots \int_0^{t_{n-1}} \text{d}t_n e^{i \omega_n t_n} \, .
\label{eq:In}
\end{equation}
The frequency appearing in each time integration is the difference of the
energies of the Green's functions meeting at the corresponding internal points.
Summing these pairwise differences around the diagram entails the relation
$\omega_1+\omega_2+ \ldots + \omega_n=0$.
On the other hand, the Laplace transform of $I_n(t)$ can be easily calculated
and gives
\begin{align}
J_n(s) = &\frac{1}{s} \,
\frac{1}{s+i\omega_1}\,\frac{1}{s+i(\omega_1+\omega_2)}
\cdots \nonumber \\
&\cdots \frac{1}{s+i(\omega_1+\omega_2+ \ldots +\omega_n)}
\label{eq:Jn}
\end{align}
The last factor is actually $1/s$, like the first, so that $J_n(s) \sim 1/s^2$
around $s=0$. This corresponds to a behavior $I_n(t) \sim t$ as $t \to \infty$
for all $n$. For instance, the $n=2$ case, discussed above, can be recovered
from $J_2(s) = 1/s^2 \,\cdot 1/(s+i \omega_1)$. The low-$s$ asymptotics of its
real part generates the linear large time behavior times the
$\pi \delta(\omega_1)$ factor, with $\omega_1= \varepsilon^\lambda_{\vec k,
\vec k'}$.
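The linear large-time growth of the nested integrals can also be checked numerically. The sketch below (Python; the frequencies are arbitrary illustrative values subject only to the zero-sum constraint) builds $I_3(t)$ by repeated cumulative integration and compares $I_3(t)/t$ at large $t$ with the slope $-1/[\omega_1(\omega_1+\omega_2)]$ predicted by the double pole of $J_3(s)$ at $s=0$.
\begin{verbatim}
# Hedged numerical check that I_3(t) grows linearly when w1 + w2 + w3 = 0.
# The frequencies below are illustrative; the predicted slope comes from the
# 1/s^2 behavior of J_3(s) near s = 0.
import numpy as np

def cumint(y, dt):
    """Cumulative trapezoidal integral of y on a uniform grid, starting at 0."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1])) * dt
    return out

w1, w2 = 0.7, -1.9
w3 = -(w1 + w2)                        # enforce the zero-sum constraint
t = np.linspace(0.0, 2000.0, 400001)
dt = t[1] - t[0]

inner = cumint(np.exp(1j * w3 * t), dt)           # innermost time integral
mid   = cumint(np.exp(1j * w2 * t) * inner, dt)   # middle time integral
I3    = cumint(np.exp(1j * w1 * t) * mid, dt)     # outermost time integral

slope_pred = -1.0 / (w1 * (w1 + w2))
print((I3[-1] / t[-1]).real, slope_pred)          # should roughly agree at large t
\end{verbatim}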
Using $g$ as a generic notation for the coupling strength, we conclude that the
contribution of the diagram of order $n$ at large times is $\sim g^n\,t$. This
is in agreement with the so-called weak interaction limit \cite{spohn}, stating
that when $g\to 0$ and simultaneously $t \to \infty$ so that $g^2\,t$ remains
constant, the Born-Markov dissipative evolution becomes exact. Indeed, here all
$n>2$ contributions vanish in this scaling limit.
\section{Numerical example}
As an illustration we consider an InAs/GaAs heterostructure, with a
self-assembled QD on a WL of $L=2.2$ nm width.
The relevant material parameters are those of Vurgaftman {\em et al.}
\cite{vurgaftman}. We assume the wavefunctions to be factorized into the
square-well solution $\chi^\lambda(z)$ along the growth direction
and the in-plane function. The latter are taken as oscillator ground-state
gaussians for the QD $s$-states for electrons and holes, and as plane waves,
orthogonalized on the former, for the WL extended states. The gaussians are
defined by their width $\alpha_\lambda$ in the reciprocal space, i.e.
$\varphi^\lambda_s(\vec r)= \alpha_\lambda/\sqrt{\pi} \exp(-\alpha^2_\lambda
r^2/2) $ with $\vec r$ here the in-plane position.
These parameters depend on many geometric and composition features of the QD,
so that they can reach a broad set of values. For the sake of our example we
take $\alpha_e=0.2/\text{nm}$ and $\alpha_h=0.1/\text{nm}$.
The phonon-induced dephasing is expected to be less important at low
temperatures. The Coulomb-assisted dephasing depends on both temperature and
WL-carrier concentration, therefore lowering the temperature and increasing the
concentration it has a chance to compete with the phononic processes.
We consider only the neutral charging, with electrons and holes in the
WL at the same concentration $n$.
\begin{figure}[t]%
\includegraphics*[width=\linewidth,height=0.8\linewidth]{figure2.eps}
\caption{(color online) Time evolution of the real part of $\Phi$ for two
temperatures,
5K (a) and 10K (b) at the same carrier concentration
$n=10^{12}/\text{cm}^2$.
The dephasing reaches a linear decay whose rate increases with
temperature.}
\label{fig:fig2}
\end{figure}
In Fig.~\ref{fig:fig2} the time evolution of the real part of $\Phi(t)$ is
plotted for two
different temperatures. The initial quadratic behavior is followed by a linear
decrease, whose slope $\Gamma$ is the dephasing rate predicted by
Eq.\eqref{eq:Gamma}. It increases with temperature, as confirmed by
Fig.~\ref{fig:fig3}, which shows the values of $\Gamma$ at various
temperatures and carrier concentrations.
\begin{figure}[t]%
\includegraphics*[width=\linewidth,height=0.8\linewidth]{figure3.eps}
\caption{(color online) Dephasing rates $\Gamma$ as functions of
temperature for different carrier concentrations.}
\label{fig:fig3}
\end{figure}
The range of those values is such that $\hbar \Gamma$ is of the
order of a few $\mu \text{eV}$. This is comparable with results for dephasing
by phonons at low temperatures both in theoretical simulations \cite{zimm_ebg,
takagahara} and in experimental data \cite{woggon, borri_prl}. Experimental
data obtained by four-wave mixing \cite{borri_prl} do not separate phonon and
injected carrier contributions to dephasing, but their total effect is still in
the $\mu \text{eV}$ range.
For an increase of temperatures from 5K to 120K the dephasing grows by roughly
one order of magnitude. In the same conditions the rate of dephasing by phonons
gains two orders of magnitude \cite{zimm_ebg, woggon}, showing a higher
sensitivity to temperature. Yet in the case of the fermionic bath the decay is
enhanced also by increasing a second controllable parameter, the carrier
concentration.
As mentioned above, the description of the phonon dephasing by the IBM runs
into the ZPL problem. As seen in Refs. \cite{li_jqe,zimmulj} the slope of the
long-time
linear asymptotics is zero, for reasons discussed in Sec.\ref{sec:model}. This
leads to an unphysical pure $\delta$-spike in the frequency domain.
\begin{figure}[t]%
\includegraphics*[width=\linewidth,height=0.8\linewidth]{figure4.eps}
\caption{(color online) Absorption spectra for $n=10^{12}/\text{cm}^2$
at T = 10, 30, 50, and 100K.}
\label{fig:fig4}
\end{figure}
This is not the case with the fermionic bath, as also seen in
Fig.~\ref{fig:fig4}.
The main feature of the spectra is their Lorentzian shape, as a consequence of
the exponential decay in the time domain. Still, the quadratic initial behavior
replaces the cusp at $t=0$ of a pure exponential by a smooth matching. In the
frequency domain this leads to a departure from the Lorentzian, in the sense of
a faster decay at large frequencies.
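This behavior can be illustrated with a simple numerical toy model (our own construction with arbitrary units and an assumed crossover time, not the $\Phi(t)$ computed here): a decay that starts quadratically and then becomes linear produces high-frequency tails that fall below the Lorentzian of a pure exponential.
\begin{verbatim}
import numpy as np

Gamma, tc = 1.0, 2.0                    # decay rate and onset time (toy)
t = np.linspace(0.0, 40.0, 40001)
omega = np.array([0.0, 2.0, 5.0])       # detunings from resonance

def line(phi_t):
    # A(omega) ~ Re int_0^inf exp(i omega t + Phi(t)) dt
    return np.array([np.trapz(np.exp(phi_t) * np.cos(w * t), t)
                     for w in omega])

A_smooth = line(-Gamma * (np.sqrt(t**2 + tc**2) - tc))  # quadratic onset
A_lorentz = line(-Gamma * t)                            # pure exponential
print(A_smooth / A_lorentz)  # ratio falls well below 1 at the largest detuning
\end{verbatim}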
\begin{figure}[t]%
\includegraphics*[width=\linewidth,height=0.8\linewidth]{figure5.eps}
\caption{(color online) Temperature dependence of the dephasing rate for
different WL widths $L$ and spatial extension parameters $\alpha$. All curves are for
$n=10^{12}/\text{cm}^2$.}
\label{fig:fig5}
\end{figure}
In Fig.~\ref{fig:fig5} we also consider the dependence of dephasing on other,
more geometric parameters. It is seen that $\Gamma$ is not very sensitive to
the WL width $L$, but it is significantly influenced by the spatial extension
of the QD $s$-states. The broader the states, the stronger the dephasing, due
to a more efficient scattering.
\section{Conclusions}
In conclusion, we have shown that a fermionic counterpart of the popular IBM
is possible. It describes the QD exciton interaction with the fermionic bath
consisting of injected carriers in the bulk or WL.
Similarities and differences to the IBM are pointed out. For instance, the
present solution takes the form of a diagrammatic series expansion, while the
IBM is exact, but this advantage is lost as soon as other interactions are
present. Also, our case is free from the ZPL problem inherent to the bosonic
case.
The dephasing process is controlled not only by temperature but also by the
chemical potential of the bath. The numerical illustration shows that at low
temperatures and higher carrier concentrations
the dephasing times are comparable with those produced by the phonon
interaction. But, of course, this is also dependent on the parameters of the
particular case considered. The dephasing gets stronger at higher temperature
and concentration, as well as with broader charge distribution of QD states.
\section*{Acknowledgments}
The authors acknowledge financial support from CNCS-UEFISCDI Grant No.
PN-III-P4-ID-PCE-2016-0221
(I.V.D. and P.G.) and from the Romanian Core Program PN19-03, Contract No. 21
N/08.02.2019, (I.V.D. and M. \c T.).
\section{Introduction}
Simultaneous translation is a translation task where the translation process starts before the end of an input.
It helps real-time spoken language communications such as human conversations and public talks.
A usual machine translation system works at the sentence level and starts its translation process after it reads the end of a sentence.
It would not be appropriate for spoken languages due to roughly two issues: (1) sentence boundaries are not clear and (2) a large latency occurs for a long input.
Previous studies tackled this problem by an incremental process,
in order to reduce the translation latency for a given input.
\newcite{fujita13interspeech} proposed a phrase-based approach to the simultaneous translation
based on phrasal reordering probabilities.
\newcite{oda-etal-2015-syntax} proposed a syntax-based method to determine when to start translation of observed inputs.
Such an approach faces a trade-off between speed and accuracy; reducing the translation latency using very limited context information also causes a loss in translation accuracy.
This becomes more serious especially in a syntactically-distant language pair such as English and Japanese,
where we sometimes have to wait for a latter part of a source sentence to determine the corresponding former part in the target language.
Recent neural machine translation (NMT) studies tried an incremental processing for the simultaneous translation.
\newcite{gu2017learning} proposed a reinforcement learning approach to determine \textit{when to translate}
based on two different actions: READ to take one input token and WRITE to generate one output token.
While they reported some latency reduction without the loss of translation accuracy,
the NMT model itself is trained independently from this incremental manner and is not fully optimized for simultaneous translation.
\newcite{ma2018stacl} proposed a very simple incremental method called \textit{Wait-k},
where the decoder starts to generate output tokens after the encoder reads \textit{k} tokens and then works token-by-token.
Here, some required inputs may not be observed by the encoder;
however, the decoder has to predict the next output token even in that case.
This approach enables a simple end-to-end simultaneous NMT with implicit anticipation of unobserved inputs.
It showed high translation accuracy with small latency on some common English-to-German and Chinese-to-English datasets.
The latency hyperparameter \textit{k} can be used to control the speed-accuracy trade-off,
but it has to be large enough for a distant language pair like English-Japanese.
We observed a problem in translating a phrase longer than \textit{k} tokens in our pilot study on English-to-Japanese translation.
In this work, we propose a novel incremental NMT method that uses a special token \text{\textless wait\textgreater} in the target language
which is generated when the translation model chooses to read the next input token instead of generating an output token.
The proposed method uses Connectionist Temporal Classification (CTC) \cite{graves2006connectionist}
to handle ambiguities in possible positions inserting \text{\textless wait\textgreater} in the training time.
CTC is applied to sequential model training such as automatic speech recognition,
where we have a reference word sequence but do not have the corresponding segmentation or alignment in an acoustic signal.
We conduct experiments in English-to-Japanese simultaneous translation with the proposed and baseline methods
and show the proposed method achieves a good translation performance with relatively small latency.
The proposed method can determine when to wait or translate in an adaptive manner and is useful in simultaneous translation tasks.
\section{Simultaneous machine translation by \textit{Wait-k} model}
First, we review a general NMT model following the formulation by \citet{luong-pham-manning:2015:EMNLP} and the ``Wait-k'' model \cite{ma2018stacl} that is the baseline model for simultaneous NMT.
Given a source sentence $X$ and a target sentence $Y$ as follows:
\begin{align*}
X=\{\textbf{x}_1, \textbf{x}_2, ..., \textbf{x}_I\}, \\
Y=\{\textbf{y}_1, \textbf{y}_2, ..., \textbf{y}_J\},
\end{align*}
where $\textbf{x}_i \in \mathbb{R}^{S \times 1}$ is a one-hot vector of the i-th input word,
$I$ is the length of the input sentence $X$,
$\textbf{y}_i \in \mathbb{R}^{T \times 1}$ is a one-hot vector of the i-th output word,
and $J$ is the length of the output sentence $Y$.
The problem of translation from the source to the target language can be solved by finding the best target language sentence $\hat{Y}$ that maximizes the conditional probability
\begin{equation}
\hat{Y} = \mathop{\rm arg~max}\limits_Y p ( Y | X ) .
\end{equation}
In general NMT manner, the conditional probability is decomposed by the product of conditional generation probabilities of $\textbf{y}_{j}$ given the source sentence $X$ and preceding target words $\textbf{y}_{<j}$:
\begin{equation}
p (Y | X) = \prod_{j=1}^{J} p_\theta (\textbf{y}_j | \textbf{y}_{<j}, X), \label{eq:general_word_prob}
\end{equation}
where $\textbf{y}_{<j}$ represents the target words up to position $j$, and $\theta$ indicates the model parameters.
In contrast, the model for simultaneous translation has to output translated words given only prefix words of the source sentence.
Therefore, the conditional probability is decomposed as follows:
\begin{equation}
p (Y | X) = \prod_{j=1}^{J} p_\theta (\textbf{y}_j | \textbf{y}_{<j}, \textbf{x}_{<g(j)}), \label{eq:simultaneous_word_prob}
\end{equation}
where $\textbf{x}_{<g(j)}$ are the source words up to position $g(j)$ and $g(j)$ represents the number of encoded source tokens when the model outputs $j$ words.
In the ``Wait-k'' model, $g(j)$ is defined as follows:
\begin{equation}
g(j) = \begin{cases}
k + j - 1 & (j < I - k) \\
I & (\mathrm{otherwise})
\end{cases}
\end{equation}
Here, $k$ is the hyperparameter which indicates that the target sentence generation is $k$ tokens behind the source sentence input; it takes a constant value in the ``Wait-k'' model.
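For concreteness, the schedule above can be transcribed directly (a minimal sketch with assumed toy values of $I$ and $k$, using 1-indexed positions as in the equations):
\begin{verbatim}
def g(j, k, I):
    # number of source tokens read before emitting the j-th target token
    return k + j - 1 if j < I - k else I

I, k = 10, 3
print([g(j, k, I) for j in range(1, 13)])
# -> [3, 4, 5, 6, 7, 8, 10, 10, 10, 10, 10, 10]
\end{verbatim}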
The model is composed of an encoder (\S\ref{seq:encoder}) and a decoder with the attention mechanism (\S\ref{seq:attention_and_decoder}) that are both implemented using recurrent neural networks (RNNs);
the encoder converts source words into a sequence of vectors, and the decoder generates target language words one-by-one with the attention mechanism based on the conditional probability shown in the equation \ref{eq:general_word_prob} and \ref{eq:simultaneous_word_prob}.
The details are described below.
\subsection{Encoder}
\label{seq:encoder}
The encoder takes the token sequence of a source sentence $X$ as input and returns forward hidden vectors $\overrightarrow{\textbf{h}_i}(1 \leq i \leq I)$ of the forward RNNs:
\begin{equation}
\overrightarrow{\textbf{h}_i} = \mathrm{RNN}(\overrightarrow{\textbf{h}_{i-1}}, \textbf{x}_i).
\end{equation}
The general NMT model also calculates backward hidden vectors using backward RNNs run over the reversed source sentence.
However, we only use forward hidden vectors because we cannot use the information of the whole sentence in the simultaneous translation task.
\subsection{Decoder with Attention}
\label{seq:attention_and_decoder}
The decoder takes source hidden vectors as inputs and returns target language words one-by-one with the attention mechanism.
The decoder RNN recurrently generates target words using its hidden state and an output context.
The conditional generation probability of the target word $\textbf{y}_j$ is defined as follows:
\begin{align}
p_\theta (\textbf{y}_j | \textbf{y}_{<j}, \textbf{x}_{\leq g(j)}) &= \mathrm{softmax}(\textbf{W}_s\tilde{\textbf{b}_j}), \\
\tilde{\textbf{b}_j} &= \mathrm{tanh}(\textbf{W}_c [ \textbf{c}_j; \textbf{d}_j]), \\
\textbf{d}_j &= \mathrm{RNN}(\textbf{d}_{j-1}, \textbf{y}_{j-1}).
\end{align}
Here, $\textbf{W}_s$ and $\textbf{W}_c$ are trainable parameters and $\textbf{c}_j$ is a context vector that retrieves source language inputs in the form of a weighted sum of the source hidden vectors, defined as follows.
\begin{align}
\textbf{c}_j &= \sum^{g(j)}_{i=1}{\alpha_{ij} \overrightarrow{\textbf{h}_i}}, \\
\alpha_{ij} &= \frac{
\mathrm{exp}(score(\textbf{d}^T_j, \overrightarrow{\textbf{h}_i}))
}{
\sum^{g(j)}_{i'=1}{\mathrm{exp}(score(\textbf{d}^T_j, \overrightarrow{\textbf{h}_{i'}}))
}}.
\end{align}
The \textit{score} function above can be defined in some different ways as discussed by \citet{luong-pham-manning:2015:EMNLP}.
In this paper, we use \textit{dot} attention for this \textit{score} function.
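A minimal sketch of this prefix-restricted attention (our own illustration with assumed toy shapes, not the training code used in the experiments) is:
\begin{verbatim}
import numpy as np

def dot_attention_context(d_j, H, g_j):
    # d_j: decoder state (hidden,); H: forward encoder states (I, hidden)
    # only the g_j source positions read so far are attended
    H_vis = H[:g_j]
    scores = H_vis @ d_j                     # dot score for each position
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                     # softmax over visible positions
    return alpha @ H_vis                     # context vector c_j

rng = np.random.default_rng(0)
H = rng.normal(size=(10, 4))                 # toy encoder states, I = 10
d = rng.normal(size=4)                       # toy decoder state
print(dot_attention_context(d, H, g_j=3))    # uses only the first 3 states
\end{verbatim}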
\section{Proposed Method}
In this work, we propose a method to decide the output timing adaptively.
The proposed method introduces a special token \text{\textless wait\textgreater} \ into the target-side vocabulary, which is output when the model chooses to delay the translation instead of generating a content token.
In this section, we first review a standard objective function, softmax cross-entropy and show the problem that occurs when this function is applied to \text{\textless wait\textgreater} \ (\S\ref{seq:sce}).
After that, we introduce an objective function, called Connectionist Temporal Classification, to handle this problem (\S\ref{seq:ctc}).
Finally, we propose a new objective function to adjust a trade-off between translation accuracy and latency (\S\ref{seq:delay_panalty}) and explain how to combine these objective functions (\S\ref{seq:combine_objective_functions}).
\subsection{Softmax Cross-Entropy}
\label{seq:sce}
Softmax Cross-Entropy (SCE) is a commonly used token-level objective function for multi-class classification including word generation in NMT, defined as follows:
\begin{equation}
\ell_{ent} = -\sum_{j=1}^{J}{\sum_{k=1}^{K}{\textbf{y}_{jk}\log{p_\theta(\textbf{y}_{jk} | \textbf{y}_{<j}, \textbf{x}_{<g(j)})}}},
\end{equation}
where $\textbf{y}_{jk}$ is the k-th element of the one-hot vector corresponding to the j-th word of the reference sentence and $p(\textbf{y}_{jk}|\cdot)$ is the generation probability of $\textbf{y}_{jk}$.
A correct sequence that corresponds to the output sequence one-by-one is necessary to use SCE as an objective function for NMT.
However, in the proposed method, we cannot simply use SCE because we do not know in advance when the delay should occur.
To avoid this problem, we set the loss for delay tokens to 0 during the time steps $t\ (t \leq g(I))$ in which the model is allowed to output \text{\textless wait\textgreater} , i.e. while the source sentence is still being read.
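The masking rule can be sketched as follows (our reading of the rule, with assumed toy tensors; positions whose reference label is \text{\textless wait\textgreater} \ simply contribute no loss):
\begin{verbatim}
import numpy as np

WAIT = 0   # assumed vocabulary index of the <wait> token

def masked_sce(log_probs, refs):
    # log_probs: (T, V) log-probabilities; refs: (T,) reference token ids,
    # padded with <wait> at the positions where the model may delay
    loss = 0.0
    for t, y in enumerate(refs):
        if y == WAIT:
            continue                      # zero loss for delay positions
        loss -= log_probs[t, y]
    return loss

rng = np.random.default_rng(1)
logits = rng.normal(size=(6, 5))
log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
print(masked_sce(log_p, np.array([WAIT, WAIT, 2, 3, WAIT, 4])))
\end{verbatim}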
\subsection{Connectionist Temporal Classification}
\label{seq:ctc}
As we mentioned in the previous section, we set the loss value for \text{\textless wait\textgreater} \ to 0, but this causes the problem that the generation of \text{\textless wait\textgreater} \ itself is not optimized.
To address this problem, we use an objective function called Connectionist Temporal Classification (CTC) \cite{graves2006connectionist} for sequence-level optimization.
CTC extends the output sequence to a path $\bm{\pi} \in \Omega(\textbf{y})$ of length $T$ by allowing token repetitions and the output of \text{\textless wait\textgreater} .
Conversely, we can recover the original output sequence $\textbf{y} = \Omega^{-1}(\bm{\pi})$ by removing \text{\textless wait\textgreater} \ and all token repetitions.
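The collapse map $\Omega^{-1}$ can be sketched as follows (following the usual CTC convention of merging repetitions before removing the blank-like \text{\textless wait\textgreater} ; string labels are used here only for readability):
\begin{verbatim}
WAIT = "<wait>"

def collapse(path):
    # merge consecutive repetitions, then remove <wait>
    merged = [tok for i, tok in enumerate(path)
              if i == 0 or tok != path[i - 1]]
    return [tok for tok in merged if tok != WAIT]

print(collapse(["<wait>", "a", "a", "<wait>", "b", "b", "c"]))
# -> ['a', 'b', 'c']
\end{verbatim}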
The objective function is defined as the sum of the probabilities of all possible paths $\bm{\pi} \in \Omega(\textbf{y})$, which can be computed efficiently with the forward-backward algorithm, as follows:
\begin{align}
\ell_{ctc}
&= \sum_{\bm{\pi} \in \Omega (\textbf{y})}{p(\bm{\pi} | X)} \nonumber \\
&= \sum_{\bm{\pi} \in \Omega (\textbf{y})}{\prod_{t=1}^{T}{p(\pi_t | \pi_{<t}, \textbf{x}_{g(t)})}},
\end{align}
where $\pi_t$ is the t-th element of $\bm{\pi}$.
\subsection{Delay Penalty}
\label{seq:delay_panalty}
Furthermore, we introduce a new objective function, called Delay Penalty, to control latency.
We use this function only when an output token causes a delay; that is, when the model outputs \text{\textless wait\textgreater} \ or the same token as the previous one.
The Delay Penalty is defined as a negative log-likelihood of not producing a delayed token, as follows:
\begin{align}
\ell_{del} &= -\sum_{t=1}^{T}{\log (1 - w_t)}, \\
w_t &= \begin{cases}
p(\text{\textless wait\textgreater} | \textbf{y}_{<t}, \textbf{x}_{<g(t)}) & (\textbf{y}_t = \text{\textless wait\textgreater} ) \\
p(\textbf{y}_{t-1} | \textbf{y}_{<t}, \textbf{x}_{<g(t)}) & (\textbf{y}_t = \textbf{y}_{t-1}) \\
0 & (otherwise)
\end{cases}
\end{align}
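A direct transcription of this penalty (our own sketch with assumed toy probabilities, where probs[t] is the model's full output distribution at step t) is:
\begin{verbatim}
import numpy as np

WAIT = 0   # assumed vocabulary index of the <wait> token

def delay_penalty(probs, outputs):
    loss, prev = 0.0, None
    for t, y in enumerate(outputs):
        if y == WAIT:
            w = probs[t, WAIT]            # delaying by emitting <wait>
        elif prev is not None and y == prev:
            w = probs[t, y]               # delaying by repeating a token
        else:
            w = 0.0                       # no delay at this step
        loss -= np.log(1.0 - w)
        prev = y
    return loss

probs = np.array([[0.7, 0.1, 0.2],
                  [0.2, 0.6, 0.2],
                  [0.1, 0.5, 0.4]])
print(delay_penalty(probs, outputs=[0, 1, 1]))
\end{verbatim}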
\subsection{Objective Function}
\label{seq:combine_objective_functions}
For optimization, we combine three objective functions introduced so far, as follows:
\begin{equation}
\ell = \ell_{ent} + \ell_{ctc} + \alpha \ell_{del} .
\end{equation}
Here, $\alpha$ is a hyperparameter to adjust the amount of latency directly.
\section{Experiments}
We conducted simultaneous translation experiments from English to Japanese and discussed accuracy, latency, and issues for translation results.
\subsection{Settings}
All models were implemented as described in the previous sections using PyTorch\footnote{\url{https://pytorch.org}}.
Both the encoders and the decoders were two-layered unidirectional LSTMs \cite{hochreiter1997lstm}, and the decoder used input feeding \cite{luong-pham-manning:2015:EMNLP}.
The number of dimensions in word embeddings and hidden vectors was set to 512, and the minibatch size was 64.
We use Adam \cite{Kingma2015adam} for optimization with the default parameters.
The learning rate was set to $10^{-1}$, and gradient clipping was set to 5.
The dropout probability was set to $0.3$.
The learning rate was adjusted by a decay factor of $1/\sqrt{2}$ when the validation loss was larger than that in the previous epoch.
Then, we chose the best parameter/model with the smallest validation loss for evaluation.
We used two different corpora for the experiments: small\_parallel\_enja\footnote{\url{https://github.com/odashi/small_parallel_enja}} and Asian Scientific Paper Excerpt Corpus (ASPEC) \cite{NAKAZAWA16.621}.
small\_parallel\_enja is a small-scale corpus consisting of sentences filtered to a length of 4 to 16 words, and ASPEC is a mid-scale corpus in the scientific paper domain.
Table \ref{table:corpora} shows their detailed statistics.
\begin{table}[tbp]
\centering
\caption{Number of sentences for each corpus used in the experiments.}
\label{table:corpora}
\begin{tabular}{c||c|c|c}\hline
\multirow{2}{*}{Corpus} & \multicolumn{3}{c}{Number of Sentence} \\ \cline{2-4}
& Train & Valid. & Test \\ \hline\hline
small\_parallel\_enja & 50k & 500 & 500 \\ \hline
ASPEC & 964k & 1790 & 1812 \\ \hline
\end{tabular}
\end{table}
All datasets were tokenized into subword units \cite{sennrich-haddow-birch:2016:P16-12:BPE,kudo-richardson-2018-sentencepiece} by using Sentencepiece\footnote{\url{https://github.com/google/sentencepiece}}.
The source and target language vocabularies were independent, and their size was set to 4000 tokens for small\_parallel\_enja and 8000 tokens for ASPEC, respectively.
We filtered out sentences with more than 60 tokens or a length ratio larger than 9 from the training set.
We used ``{\bf Wait-k}'' models and general NMT models as baseline models.
The general NMT models were attention-based encoder-decoders that translated from full-length source sentences (called {\bf Full Sentence}).
For evaluation metrics, we used BLEU \cite{papineni2002bleu} and RIBES \cite{isozaki-etal-2010-automatic} to measure translation accuracy, and token-level delay to measure latency.
We used Kytea \cite{neubig-etal-2011-kytea} as the tokenization method for evaluating Japanese translation accuracy.
\subsection{Experiments with Small-scale Corpus}
We conducted small-scale experiments using small\_parallel\_enja.
We compared different hyperparameters: $k = \{3, 5\}$ and $\alpha = \{0, 0.01, 0.03, 0.05\}$.
\begin{table*}[htb]
\centering
\caption{Latency and automatic evaluation scores with small\_parallel\_enja. Latencies are shown by averages and standard deviations (in parentheses) in the number of tokens.}
\begin{tabular}{cc|c||c|c}\hline
\multicolumn{2}{c|}{Method} & Latency (\#tokens) & BLEU & RIBES \\\hline\hline
\multicolumn{2}{c|}{Full sentence} & 9.75 ($\pm$ 2.69) & 34.53 & 84.03 \\ \hline\hline
Wait-k \cite{ma2018stacl} & k=3 & 3.00 ($\pm$ 0.00) & 31.06 & 82.46 \\
& k=5 & 5.00 ($\pm$ 0.00) & 33.29 & 83.45 \\\hline\hline
Ours & $\alpha$=0.00 & 4.32 ($\pm$ 3.14) & 28.01 & 81.78 \\
& $\alpha$=0.01 & 4.29 ($\pm$ 3.16) & 30.42 & 82.60 \\
& $\alpha$=0.03 & 2.88 ($\pm$ 2.95) & 26.47 & 80.51 \\
& $\alpha$=0.05 & 0.80 ($\pm$ 1.96) & 22.60 & 77.86 \\\hline
\end{tabular}
\label{table:exp1_result}
\end{table*}
Table~\ref{table:exp1_result} shows the results in latency and automatic evaluation scores on small\_parallel\_enja.
The full sentence scores are upper bounds of incremental methods.
The proposed method reduced the average latency by more than 50\% from the full sentence baseline
with some loss in BLEU and RIBES.
The BLEU and RIBES results by the proposed method were worse than those by Wait-k.
This would be due to some degradation in the parts where the proposed method adaptively chose smaller latencies,
while Wait-k keeps a fixed latency.
\subsection{Experiments with Mid-scale Corpus}
We investigated the performance on longer and more complex sentences by the experiments using ASPEC.
We compared different hyperparameters: $k = \{5, 7\}$ and $\alpha = \{0.03, 0.05, 0.1\}$.
\begin{table*}[htb]
\centering
\caption{Latency and automatic evaluation scores with ASPEC.}
\label{table:exp2_result}
\begin{tabular}{cc|c||c|c}\hline
\multicolumn{2}{c|}{Method} & Latency (\#tokens) & BLEU & RIBES \\\hline\hline
\multicolumn{2}{c|}{Full sentence} & 29.81 ($\pm$ 14.30) & 32.22 & 80.17 \\ \hline\hline
Wait-k \cite{ma2018stacl} & k=5 & 5.00 ($\pm$ 0.00) & 21.53 & 71.40 \\
& k=7 & 7.00 ($\pm$ 0.00) & 23.20 & 73.21 \\ \hline\hline
Ours & $\alpha$=0.03 & 23.03 ($\pm$ 14.08) & 24.86 & 72.59 \\
& $\alpha$=0.05 & 21.96 ($\pm$ 13.88) & 22.45 & 70.60 \\
& $\alpha$=0.1 & 17.13 ($\pm$ 12.69) & 23.66 & 72.27 \\ \hline
\end{tabular}
\end{table*}
Table~\ref{table:exp2_result} shows the results in latency and automatic evaluation scores on ASPEC.
We can see the proposed method showed much larger latency than Wait-k.
This is probably due to many long and complex phrases used in scientific articles in ASPEC.
Wait-k has to translate such a long phrase without sufficient input observations due to its strict fixed latency strategy.
On the other hand, the proposed method can wait for more input tokens adaptively by generating \text{\textless wait\textgreater}
at the cost of large latency.
\subsection{Discussion}
In the experimental results above,
the proposed method determined the translation latency adaptively:
a short delay for short and simple inputs as in small\_parallel\_enja and
a long delay for long and complex inputs as in ASPEC.
Here we discuss our results in detail using some examples.
Table~\ref{table:example} shows translation examples on small\_parallel\_enja.
In the first example, the proposed method gives a correct translation result by adaptive waits.
Wait-k generated unrelated words \cjk{野球} (\textit{baseball}) and \cjk{飲-み} (\textit{drink}) due to the poor input observations with its small fixed latency.
The proposed method waited until the subword \textit{swim} was observed and successfully generated the word \cjk{泳-ぐ} (\textit{swim}).
\begin{table*}[htp]
\centering
\caption{Translation examples in small\_parallel\_enja. \textit{w} shows the generation of \text{\textless wait\textgreater} token.}
\label{table:example}
\includegraphics[width=\linewidth]{example.png}
\end{table*}
However, the proposed method sometimes generated consecutive \text{\textless wait\textgreater} symbols until the end of input, as shown in the second example.
This is probably due to our training strategy;
the latency penalty would not be large enough to make the model choose a small-latency translation
at the cost of some increase in the SCE- and CTC-based losses.
The translation data in the experiments are not from simultaneous interpretation but from standard translation,
so the current task does not fully match the proposed approach.
The use of data specialized for simultaneous translation, such as the \textit{monotonic translations}
produced in simultaneous interpretation, would be important in practice.
\section{Conclusion}
In this paper, we proposed an adaptive latency control method for simultaneous neural machine translation
in syntactically distant language pairs.
We introduced a meta token \text{\textless wait\textgreater} to wait until the observation of the next input token.
We proposed a CTC-based loss function to perform optimization using bilingual data without appropriate positions of \text{\textless wait\textgreater} ,
which is used along with the latency penalty and a standard word prediction loss.
The experimental results suggest the proposed method determines when to translate or when to wait in an adaptive manner.
Future work includes further analyses on translation accuracy in different latency conditions and time-based latency evaluation instead of the token-based one.
\section*{Acknowledgments}
A part of this work is supported by JSPS Kakenhi JP17H06101.
\section{Introduction}
Usually, when one talks about $\textit{information speed}$, one envisions two parties A and B, where A aims to communicate some message to B; A then sends the message encoded in a physical system (signal or information carrier) to B and the information speed is simply the speed of the signal which is limited by the speed of light. In this sense, quantum mechanics cannot provide any enhancement to the information speed. However, what if the information that needs to be transmitted is not localized at the sender's station, as it was at A in the given example? What if the information of interest is encoded in a $\textit{global}$ property of dispersed pieces of information, each localized at a different location? In this case, if we define information speed as a quantity inversely proportional to the time needed to acquire and transmit some generally global information, quantum mechanics may provide some advantage with respect to classical theory.\\
In this paper we show that preparing information carriers in spatial superposition provides an arbitrarily high speed up of an information theoretic task involving the acquisition and transmission of globally encoded information. In order to formally address the subject matter we first describe the scenario of interest and introduce the auxiliary notion of $\textit{k-way signaling behaviors}$ within a device-independent formalism \cite{device}. We then proceed by proving that a single quantum particle in spatial superposition outperforms classical particles at collecting and transmitting delocalized information. This is shown by the violation of a specific inequality which poses sharp bounds on the performance of $k$-way signalling processes. One quantum particle outperforms $N$ classical particles in a single shot; however, since the overhead is negligible for large $N$, we introduce a slight modification of our inequality, and show that multiple rounds enable a quadratic enhancement of the information speed (for large $N$). Our findings have a natural application in scenarios involving information sources with limited power, and are based solely on the quantum superposition principle. Our result can thus be seen in light of recent developments that put forward quantum superposition as a genuine resource for information processing, such as in two-way communication with one particle \cite{2-way}, quantum acausal processes \cite{Brukner}, superposition of orders \cite{orders} and directions \cite{directions}, quantum combs \cite{combs}, quantum switch \cite {switch} and quantum causal models \cite{causal models}. Some of these novel phenomena have been demonstrated in recent experiments \cite{exp1, exp2, exp3}.
\section{Acquirement and transmission of delocalized information}\label{Acquirement and transmission of delocalized information}
The scenario of interest consists of one party, whom we'll refer to as Alice, and $N$ pieces of information $\left\{x_1,x_2,...,x_N \right\}$ dispersed at $N$ different locations, as pictured in Figure \ref{fig:Fig1}. Alice is connected to each of the $N$ locations with communication channels which enable a bidirectional transmission of information. Her goal is to learn some global property $a=a(x_1,...,x_N)$ as a function of spatially dispersed information pieces $x_i$. For simplicity we assume that the $N$ locations containing the local information are not mutually connected by any communication channel, i.e. the information does not flow in between the different regions. This restriction can be understood as one forcing the pieces of information to be truly isolated/localized and removing the dependence on the geometry of the problem.
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{Fig1_2}
\caption{ Alice is connected to each of the $N$ information locations via $N$ communication channels. In order to gather the required global information, Alice sends her signals towards the $N$ locations. Upon receiving back her signals, she decodes the message and produces a classical output $a$.}
\label{fig:Fig1}
\end{figure}
One way for Alice to learn $a(x_1,...,x_N)$ is to send $N$ signals which will encode the information pieces $x_i$, retrieve them and calculate the function $a$. We denote the time she needs to complete this action (acquisition and transmission of information pieces) as $\tau$ and call it unit time. Suppose now that Alice has limited resources to complete the task, i.e. she has power to send at most $k$ signals per unit time, where $k<N$. This restriction can come from some natural assumption, such as the limited power of the source of information carriers. Upon receiving back the signals, she decodes them and produces an outcome $a$. In general, the information pieces are randomly sampled from some distribution; the process is thus mathematically fully characterized by the following set of conditional probabilities, or $\textit{behavior}$
\begin{equation}\label{eq:Behavior}
\left\{ P\left( a|x_1,...,x_N \right); \quad \forall a\in O; x_1,..,x_N \in I\right\},
\end{equation}
where $I$ and $O$ denote the $\textit{input}$ and $\textit{output}$ alphabets pertaining respectively to $\left\{x_1,...,x_N \right\}$ and to $a$. $P\left( a|x_1,...,x_N \right)$ thus denotes the probability that Alice produces the output $a$ conditioned on the dispersed information being $\left\{x_1,...,x_N \right\}$.\\
For a moment, we shall forget that Alice is using $k$ classical signals, and define the more general notion of \textit{$k$-way signaling}, which we introduce in a device-independent manner in what follows.
\theoremstyle{definition}
\begin{definition}
A behavior $\left\{ P\left( a|x_1,...,x_N \right), \forall a, x_i\right\}$ is said to be $\textit{k-way signaling}$ $\textit{iff}$ there exists a set of weights $\left\{q_{j_1,...,j_k}, \forall j_1,...,j_k\right\}$ and a set of probability distributions $\left\{P\left( a|x_{j_1},...,x_{j_k} \right), \forall j_1,...,j_k \right\}$ such that the following is satisfied:
\begin{gather*}
P\left( a|x_1,...,x_N \right)=\sum_{j_1,...,j_k} q_{j_1,...,j_k} P\left( a|x_{j_1},...,x_{j_k} \right);\\
\sum_{j_1,...,j_k} q_{j_1,...,j_k}=1;\quad q_{j_1,...,j_k} \geq 0,\quad \forall j_1,...,j_k,
\end{gather*}
where the domain of the indices $\left\{j_1,...,j_k\right\}$ ranges over all $N \choose k$ subsets of the $N$ locations.
\end{definition}
The intuition behind the latter definition is the following: if the system exhibits $k$-way signaling, it means that its behavior can be modeled by Alice choosing to communicate with locations pertaining to $\left\{x_{j_1},...,x_{j_k}\right\}$ with probability $q_{j_1,...,j_k}$.\\
For example, for $N=3$, a two-way signaling distribution can be decomposed as
\begin{equation*}
\begin{split}
P\left( a|x_1,x_2,x_3 \right)&=q_{12}P\left( a|x_1,x_2\right)+q_{13}P\left( a|x_1,x_3\right)\\
&+q_{23}P\left( a|x_2,x_3\right);
\end{split}
\end{equation*}
\begin{align}
\sum_{i<j}q_{ij}=1; \quad q_{ij} \geq 0, \forall i<j,
\end{align}
where $q_{ij}$ denotes the probability of Alice communicating with locations pertaining to $x_i$ and $x_j$. The definition above will help us to quantify Alice's performance in a device-independent way, thereby providing a fair comparison between classical and quantum resources.\\
\section{Genuine $N$-way signaling}\label{Genuine $N$-way signaling}
The set of all $k$-way signaling behaviors forms a polytope when embedded in a real vector space, since it is closed under probabilistic mixing (a simple proof is given in Appendix \ref{app:A}). Thus, one can characterize this set via necessary and sufficient conditions in form of facet inequalities. These and similar methods have been applied e.g. in the investigation of Bell's inequalities \cite{Nonlocality} and in causal modelling \cite{Causal1, Causal2}. Providing the full characterization of a polytope is hard in general and will not be pursued here (especially since we want to make a statement about the scenario involving a general number of parties). Instead, here we will focus on a particular inequality that we obtained as a natural generalization of inequalities computed numerically for low $N$ using the Python package \textit{cdd} \cite{cdd}. Moreover, we focus our attention to the case of binary inputs and outputs, i.e. $a,x_i\in\{0,1\}$.\\
The aforementioned inequality is the following:
\begin{equation}\label{eq:ineq}
B\equiv -P(1|0,0,...,0)+\sum_{i=1}^{N}P\left(1|0,...,x_i=1,...0\right)\leq N-1,
\end{equation}
which is satisfied by any $k$-way-signalling behavior, for $k<N$.\\
To see that this is the case, notice that any $(N-1)$-way signalling behavior can be expressed as a convex sum of processes which leave out one location from the communication. If the $i$-th location is left out, then
\begin{equation}
P(a|0,0,...,x_i=0,...,0)=P(a|0,0,...,x_i=1,...,0),
\end{equation}
so the first negative term in B cancels at least one of the positive terms and leaves the maximum achievable value equal to $N-1$. The analogous reasoning holds for $k<N-1$. Thus, the violation of this inequality necessarily implies $N$-way signalling.\\
Note that our inequality represents the probability of successfully accomplishing a task involving information genuinely globally encoded in space. Namely, Alice is supposed to compute the function
\begin{equation}\label{game}
a(x_1,...,x_N)=\begin{cases}
0, & \text{if $x_i=0,\quad \forall i$},\\
1, & \text{if $\exists j$ s.t. $x_j=1$ and $x_i=0, \forall i\neq j$ },
\end{cases}
\end{equation}
where the $N+1$ settings are uniformly distributed. Inequality \eqref{eq:ineq} then implies that Alice's probability of successfully computing the latter function is bounded by $P_{bound}=\frac{N}{N+1}$, unless her performance exhibits $N$-way-signalling.\\
Suppose now that Alice possesses limited resources and sends $k$ classical signals per unit time $\tau$, thus achieving $k$-way signalling. In this case, it is clear that she can achieve at best $P_{bound}$ in a single-shot experiment. In order to surpass this threshold she needs to send her $k$ signals at least $\left\lceil \frac{N}{k} \right\rceil$ times before producing an output, thereby $\textit{effectively}$ exhibiting $N$-way-signalling behavior.\\
\section{Quantum enhanced information speed}\label{Quantum enhanced information speed}
\subsection{Single query}
In the following we show the possibility of achieving the violation of inequality \eqref{eq:ineq} for arbitrary $N$ by using a single quantum particle (one signal per unit time) prepared in spatial superposition. Here we will focus on the scenario involving a single shot (i.e. single query), while the multiple query case will be addressed later.\\
We assume a simple model where at each location $x_i$ the information is encoded into a crystal that applies a local phase shift $e^{i x_i \phi_i}$ to the state of the particle, where $\left\{\phi_i\right\}$ are fixed angles known to Alice. The protocol can be summed up as follows. Alice prepares her signal in a uniform spatial superposition of trajectories directed towards the $N$ locations; after interacting with the crystals, the wave packets are bounced back to Alice who performs a binary measurement thereby producing an output $a$.\\
The initial wave function is:\\
\begin{equation}
\ket{\psi_0}=\frac{1}{\sqrt{N}} \sum_n \ket{n},
\end{equation}
where $\left\{\ket{n}\right\}$ is the basis of spatial modes corresponding to the $N$ trajectories directed towards their pertaining locations.\\
After encoding, the wave function is transformed to
\begin{equation}
\ket{\psi}_{x_1,...,x_N}=\frac{1}{\sqrt{N}} \sum_n e^{i\phi_n x_n} \ket{n}.
\end{equation}
Upon getting back her signals, Alice performs a binary measurement defined by a general POVM $\Pi \equiv \left\{\Pi_0,\Pi_1\right\}$, thereby producing an outcome $a\in \left\{0,1\right\}$.\\
Let us denote the quantum state that arises via encoding when $\left\{x_1=0,x_2=0,...,x_N=0\right\}$ with $\rho_0$, and the one that arises from encoding when $\left\{x_1=0,x_2=0,...,x_i=1,...,x_N=0\right\}$ with $\rho^{(i)}$. Then, if we introduce the following averaged state
\begin{equation}
\rho_1=\frac{1}{N}\sum_{i=1}^{N} \rho^{(i)},
\end{equation}
the left hand side of inequality \eqref{eq:ineq} can be rewritten as
\begin{equation}
\begin{split}
B&=-1+(N+1)\left[ \frac{1}{N+1}\Tr(\Pi_0 \rho_0) + \frac{N}{N+1}\Tr(\Pi_1 \rho_1) \right]\\
&\equiv -1+(N+1)P_W.
\end{split}
\end{equation}
The expression $P_W=\left[ \frac{1}{N+1}\Tr(\Pi_0 \rho_0) + \frac{N}{N+1}\Tr(\Pi_1 \rho_1) \right]$ is the probability of successfully distinguishing the quantum states $\rho_0$ and $\rho_1$ given their respective prior probabilities $p_0=\frac{1}{N+1}$ and $p_1=\frac{N}{N+1}$. It is known \cite{Discrimination} that this probability is bounded by
\begin{equation}
\max\limits_{\Pi}P_W=\frac{1}{2}(1+||p_1\rho_1-p_0\rho_0||_1),
\end{equation}
where $||A||_1$ denotes the trace norm of $A$.\\
The maximum achievable value of $B$ for a given encoding scheme is then given by
\begin{equation}
\max\limits_{\Pi}B=-1+\frac{N+1}{2}(1+||p_1\rho_1-p_0\rho_0||_1)\equiv N-1+\delta,
\end{equation}
where
\begin{equation}\label{eq:max_bound}
\delta=\frac{1}{2}-\frac{N}{2}+\frac{N+1}{2}||p_1\rho_1-p_0\rho_0||_1
\end{equation}
is the amount of violation.\\
Let us first analyse the case $N=2$. We set the two crystals' phases to $\phi_{1,2}=\pi$. Alice is supposed to output $a=0$ if both settings are equal to 0 and $a=1$ if one of the settings is equal to 1. Clearly, states $\rho_0$ and $\rho_1$ are mutually orthogonal and thus perfectly distinguishable, thereby enabling Alice to saturate the logical bound of our inequality, i.e. $\delta=1$ (the details are presented in Appendix \ref{app:B}). In contrast, if she uses one classical signal per unit time, she needs double the time to achieve a violation. Therefore, spatial coherence doubles the information speed involved in completing the task.\\
The $N\geq3$ case is more complicated and the detailed analysis is presented in Appendix \ref{app:C}. In order to analytically demonstrate the possibility of violating the inequality, we set $\left\lceil \frac{N}{2} \right\rceil$ of the crystals' phases to some angle $\phi$ and the rest to $-\phi$; we show that a clear violation $\delta>0$ is achieved for $\cos(\phi)>\frac{N(N-6)+4}{(N-2)^2}$ (with a small correction for odd $N$). The numerical results of the violation are shown in Figure \ref{fig:Fig2}.
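A numerical reproduction attempt of this violation (our own sketch following the phase choice above; it scans over $\phi$ and evaluates the Helstrom expression for $\delta$) is:
\begin{verbatim}
import numpy as np

def violation(N, phi):
    phases = np.array([phi] * ((N + 1) // 2) + [-phi] * (N // 2))
    psi0 = np.ones(N) / np.sqrt(N)
    rho0 = np.outer(psi0, psi0)
    rho1 = np.zeros((N, N), dtype=complex)
    for i in range(N):                      # setting with x_i = 1 only
        psi = psi0.astype(complex)
        psi[i] *= np.exp(1j * phases[i])
        rho1 += np.outer(psi, psi.conj()) / N
    p0, p1 = 1.0 / (N + 1), N / (N + 1.0)
    trace_norm = np.abs(np.linalg.eigvalsh(p1 * rho1 - p0 * rho0)).sum()
    return 0.5 - N / 2.0 + (N + 1) / 2.0 * trace_norm

for N in [2, 3, 5, 10]:
    delta = max(violation(N, p) for p in np.linspace(0.01, np.pi, 300))
    print(N, round(delta, 3))
\end{verbatim}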
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{NEWFIGURE}
\caption{ \textbf{Violation of the inequality}. The left graph represents the quantum violation of inequality \eqref{eq:ineq} as function of the number of information locations $N$. The table on the right compares the value of the inequality achievable using classical particles ($B_{CL}$) and the one achievable with a single quantum particle in spatial superposition ($B_{QM}$).}
\label{fig:Fig2}
\end{figure}
We have therefore proved the possibility of genuine multi-way signalling with an arbitrary number of locations using one particle in spatial superposition within one unit of time $\tau$. On the other hand, suppose that Alice possesses a source producing one particle with a defined trajectory per unit time: in this case, she cannot achieve the quantum performance for this task even in $\left(N-1\right)\tau$ time, since she can communicate with maximally one location per unit time. Hence, spatial coherence as a resource provides an arbitrarily large enhancement of the information speed as defined by our task and inequality \eqref{eq:ineq}. Note that there is no conflict with special relativity, since the information carriers' speed is limited by the speed of light, which is reflected in the time $\tau$ that light needs to travel back and forth from Alice to the crystals.\\
\subsection{Multiple queries}
In the previous section we provided a proof of the possibility of achieving arbitrarily high levels of signalling using a single quantum particle in a single shot. However, the violation of the inequality scales poorly for large $N$, as seen in Fig. \ref{fig:Fig2}. In what follows, we will analyse how Alice's performance in the quantum case improves by allowing her to send her particle multiple times towards the parties, i.e. by relaxing the restriction of one unit of time $\tau$. In order to show a clear gap between the classical and the quantum performance, we will analyse a variation of the task/game \eqref{game}: namely, the setting and goals of the game remain the same, but we set the prior probabilities to be $\frac{1}{2}$. The winning probability for the new task is thus given by:
\begin{equation}\label{game2}
P_W=\frac{1}{2}P\left(0|0,...,0\right)+\frac{1}{2}\frac{1}{N}\sum_{i=1}^{N}P\left(1|0,...,x_i=1,...0\right).
\end{equation}
Now, if Alice possesses one classical particle/signal and has at disposal $k$ units of time $\tau$ for communication, then she can exhibit at most $k$-way signalling. For instance, if she communicates with the first $k$ parties, then the probability distributions obey the following:
\begin{equation}
\begin{split}
P\left(a|x_1,...,x_k,x_{k+1},...x_{N}\right)&=P\left(a|x_1,...,x_k,x_{k+1}',...,x_{N}'\right),\\
&\forall a,x_i,x_j'=0,1.
\end{split}
\end{equation}
The best strategy Alice can employ is to output 0 if $\left\{x_i=0, i=1,...,k\right\}$ and to output 1 otherwise. The maximum winning probability after $k$ classical queries is thus:
\begin{equation}\label{win_game_1}
P_{Cl}(k)=\frac{1}{2}\left(1+\frac{k}{N}\right).
\end{equation}
Next, suppose that Alice can prepare her particle in spatial superposition and query it through the boxes $k$ times (in $k$ units of time $\tau$). After each query, she can apply a fixed unitary transformation, and at the end of the $k$ queries she performs a measurement. We will show that Grover's algorithm (with a modified final measurement) can give her the same speed-up for the execution of her task as the well known $O(\sqrt{N})$ speed-up obtained in Grover's search \cite{Grover1, Grover2}.\\
Let us suppose that the $N$ parties encode their local bits via $\pi$-phase-shifts $\phi_i=x_i\pi$ and that the initial state prepared by Alice is a uniform superposition $\ket{\psi_0}=\frac{1}{\sqrt{N}} \sum_n \ket{n}$. Moreover, after each query, Alice applies the following unitary transformation (also known as the \textit{inversion about mean operation} \cite{Grover2}):
\begin{equation}
U=2\ket{\psi_0}\bra{\psi_0}-\mathbbm{1}.
\end{equation}
If all the settings are 0, then the final state after $k$ runs is $\ket{\psi_0}$, since $U\ket{\psi_0}=\ket{\psi_0}$. On the other hand, if the $i$-th setting is $x_i=1$ and all the others are 0, then the final state after $k$ queries is equal to the one obtained after $k$ queries of Grover's algorithm \cite{Grover2}:
\begin{equation}
\ket{\psi_k^{(i)}}=\cos\left(\frac{2k+1}{2}\theta\right)\ket{\bar{i}}+\sin\left(\frac{2k+1}{2}\theta\right)\ket{i},
\end{equation}
where $\ket{\bar{i}}\equiv\frac{1}{\sqrt{N-1}}\sum_{j\neq i}\ket{j}$ and $\sin\left(\theta/2\right)=1/\sqrt{N}$. Setting $k=\frac{\pi}{4}\sqrt{N}$ and taking the limit of large $N$ one obtains
\begin{equation}
\ket{\psi_k^{(i)}}\approx \ket{i}-\frac{1}{\sqrt{N}}\ket{\bar{i}}.
\end{equation}
Therefore, the winning probability \eqref{game2} after $O(\sqrt{N})$ queries is
\begin{equation}\label{value}
P_W=\frac{1}{2}\left(\Tr(\Pi_0 \rho_0)+\Tr(\Pi_1 \rho_1) \right),
\end{equation}
where $\left\{\Pi_0,\Pi_1\right\}$ is a POVM, $\rho_0=\ket{\psi_0}\bra{\psi_0}$, and
\begin{equation}
\rho_1= \frac{1}{N}\sum_i \ket{i}\bra{i}+O(N^{-3/2})=\frac{1}{N}\mathbbm{1}+O(N^{-3/2}).
\end{equation}
The maximum achievable value of expression \eqref{value} is determined by the Helstrom bound:
\begin{equation}
\max\limits_{\Pi}P_W=\frac{1}{2}\left(1+\frac{1}{2}||\rho_1-\rho_0||_1\right).
\end{equation}
Using the fact that the trace norm of an operator is the sum of the absolute values of its eigenvalues we obtain
\begin{equation}
\max\limits_{\Pi}P_W=1-\frac{1}{2N}+O(N^{-3/2}).
\end{equation}
Therefore, Alice needs only $O(\sqrt{N})$ queries to achieve the task with unit probability (for large $N$), which thereby provides a $O(\sqrt{N})$ speed-up with respect to the classical case; in other words, spatial coherence increases the information speed by $O(\sqrt{N})$. Note that we did not prove the optimality of the latter algorithm; it might be the case that one can achieve a better speed-up with a different encoding of local bits and a different intermediate unitary transformation $U$.\\
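The gap can be checked numerically (our own sketch, not a proof of optimality; it simulates the $\pi$-phase oracle and the inversion-about-mean step and evaluates the Helstrom winning probability):
\begin{verbatim}
import numpy as np

def helstrom_win(N):
    k = int(round(np.pi / 4 * np.sqrt(N)))   # number of queries
    psi0 = np.ones(N) / np.sqrt(N)
    U = 2 * np.outer(psi0, psi0) - np.eye(N) # inversion about the mean
    rho1 = np.zeros((N, N))
    for i in range(N):                       # setting with x_i = 1 only
        psi = psi0.copy()
        for _ in range(k):
            psi[i] *= -1.0                   # pi-phase at the marked site
            psi = U @ psi
        rho1 += np.outer(psi, psi) / N
    rho0 = np.outer(psi0, psi0)              # all-zero settings: unchanged
    tn = np.abs(np.linalg.eigvalsh(rho1 - rho0)).sum()
    return k, 0.5 * (1.0 + 0.5 * tn)

for N in [16, 64, 256]:
    k, p_q = helstrom_win(N)
    print(N, k, round(p_q, 3), "classical:", round(0.5 * (1 + k / N), 3))
\end{verbatim}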
\section{Discussion}
We started the previous section by proving the possibility of exhibiting $N$-way-signalling with a single quantum particle in superposition by violating an inequality which separates $(N-1)$ from $N$-way-signalling behaviors. Since the violation of the inequality drops quickly to 0 for large $N$, this violation ought to be seen as a proof of principle, rather than as a compelling application. Moreover, the $(N-1)$-way signalling bound in \eqref{eq:ineq} can be saturated already with a random guess (i.e. one does not gain any advantage by using $(N-1)$ classical signals with respect to no signals at all). As a response to the latter drawback, we introduced the game \eqref{game2}, where we chose different prior probabilities with respect to the preceding game in order to obtain a scaling of the probability of success with the amount of signalling. Indeed, the probability of success \eqref{win_game_1} scales linearly with the amount of signalling, i.e. with the number of classical queries. We then proceeded by showing that a Grover-like algorithm provides a quadratic enhancement of the information speed.\\
The latter can be understood from a more practical point of view as an oracle problem that can be solved more efficiently using quantum resources. Here we want to emphasize the method we came up with this new oracle problem and its pertaining quantum speed-up. We started off with the definition of $k$-way-signalling behaviors and showed that they constitute a polytope and can consequently be characterized in terms of facet inequalities. Generally, each inequality corresponds to a particular game and can therefore be interpreted as an oracle problem. This generic method can perhaps be used to discover new oracle problems with a potential quantum advantage by defining a polytope structure of conditional probability distributions $P\left(a|x_1,...,x_N\right)$ (e.g. in our particular case: the set of $k$-way-signalling distributions) and investigating its facet inequalities. The latter inequalities can be interpreted as oracle problems, for which one can try to obtain a quantum advantage. The converse research direction is also interesting to pursue, i.e., can every oracle problem be seen as a facet inequality of a certain polytope? These connections can prove useful in shedding light on the structure of oracle problems in general, and can provide further intuition on the quantum mechanical speed-up obtained in these problems.\\
An interesting remark about our scheme is the fact that the only resource used is coherence of paths and all the information is encoded via local phases. It still remains open whether internal degrees of freedom could increase the performance. Moreover, in this work we have not analysed the scenario in which Alice sends more quantum signals/particles per unit time, in the case of which one might expect further advantage arising from the entanglement between the particles. We leave these considerations for future work.\\
\renewcommand\refname{Bibliography}
\addcontentsline{toc}{section}{Bibliography}
\section{Introduction}
The standard $\Lambda$ Cold Dark Matter (\ensuremath{\Lambda{\rm CDM}}) paradigm has been remarkably successful at large scales, with predictions from \ensuremath{\Lambda{\rm CDM}}\ numerical simulations providing an excellent match to observational data \citep[e.g.][]{ade14, springel06, clowe06, dawson13, baur16}. However, discrepancies between \ensuremath{\Lambda{\rm CDM}}\ predictions and the observations remain at smaller scales, \citep[see][for a review]{bullock17}. One of the earliest noticed, as well as most studied, of these is the so called ``cusp-core problem'', where observations of isolated dwarf and low surface brightness galaxies (i.e. galaxies for which observations are expected to be least affected by systematic errors arising from non circular motions and uncertainties in the stellar mass to light ratio) typically find that the dark matter halo has a constant density core towards the centre, while \ensuremath{\Lambda{\rm CDM}}\ models predict a cuspy density profile \citep[e.g.][]{deblok10,oh11,ogiya14,ogiyamori14,ogiya15}.
In detail, \ensuremath{\Lambda{\rm CDM}}\ simulations which do not include baryonic physics predict that dark matter halos of all masses have a universal density profile with the density distribution in the inner regions following a power law $\rho \sim$ r$^{\alpha}$ with slope $\alpha $ = -1 \citep{navarro96, navarro97}. Steeper slopes $\alpha \approx -1.5$ were found by \citet{moore98, moore99}, and more recent simulations find that the inner slope shows some variation and mass dependence with $\alpha$ lying in the range -(0.8 -- 1.4) \citep{ricotti03, ricotti07, delpopolo10, delpopolo12, dicintio14}. For smaller galaxies the slope is found to decrease towards the center reaching $\alpha $= -0.8 at $\sim 100$~pc from the centre \citep{stadel09, navarro10, delpopolo11}. In contrast however, observations of dwarf and low-surface brightness (LSB) galaxies indicate a flat ($\alpha \approx -0.2$) dark matter density core \citep[e.g.][]{deBlok01, deBosma02, oh11,oh15}. Some observational studies \citep{simon05, adams14} find intermediate slopes, i.e. steeper than would be expected from a constant density core, but still significantly shallower than the slopes predicted by simulations.
Various solutions have been proposed to resolve the cusp-core problem; these broadly fall into three categories. The first set of solutions propose that the dark matter could be warm or weakly self interacting, this would erase the cusps that arise in pure \ensuremath{\Lambda{\rm CDM}}\ \citep[e.g.][]{spergel00, rocha13, elbert15, kaplinghat16, schneider17}. The second category of solutions invokes baryonic processes to generate cores from an originally cuspy distribution. Basically, repeated gas outflows resulting from supernova explosions from star formation concentrated at the galaxy centre result in a re-distribution of the baryonic as well as dark matter \citep[see e.g.][]{pontzen12, governato12, pontzen14, read16}. However, some simulations find that the density profile of the dark matter is consistent with cuspy profiles even after including baryonic outflows \citep{ceverino09, marianacci14,schaller15}. The third category of solutions suggest that there are residual systematic problems in determining and modelling the rotation curves of galaxies, and these problems could result in a mis-identification of cores as cusps \citep[e.g.][]{vandenbosch00, swaters03, hayashi04,oman17,pineda17}. These residual systematic effects include smoothing of the rotation curve because of the finite resolution of the observations \citep{vandenbosch00}, incorrectly measured inclination angles \citep[e.g.][]{rhee04, read16b}, improperly modelled pressure support \citep[e.g.][]{rhee04, valenzuela07, pineda17} or unmodelled non-circular motions \citep[e.g.][]{rhee04, valenzuela07, oman17}. All of these can lower the inner rotation velocities and mask the cuspy distributions.
More recently, \citet{pineda17} studied systematic effects in observational studies using high-resolution hydrodynamic simulations of dwarf galaxies and they find that the cored isothermal halos are favoured in spite of the fact that their simulations contain NFW halos.
Traditionally, the H{\sc i} rotation curves used in mass modelling were derived from the 2-D velocity fields \citep[e.g.][]{deBlok02,oh11,oh15}. Recently there have been several software packages developed that determine the rotation curves by directly modelling the full 3D data cube, including modelling of instrumental effects such as beam smearing. Rotation curves derived in this way would be expected not to suffer from the flattening associated with beam smearing, and as such better trace the underlying circular velocities. In this work, we derive the rotation curves by using both the 3D and 2D tilted ring fitting routines. We compare the properties of dark matter halos that were obtained using the rotation curves from both the approaches.
The galaxies that we have in our sample all lie in voids. As such the dark matter distribution in these galaxies is also of interest by itself. There are at least two possible reasons why void galaxies may have different dark matter halo properties than galaxies in higher density regions. The first is that the large scale environment is expected to correlate with the properties of individual galaxy halos. Simulations find that, for halo masses $<$ 5 $\times$ 10$^{11}$ $h^{-1}$ M$_{\odot}$, halos in cluster regions are on average $\sim$ 30 -- 40 $\%$ more concentrated and have $\sim$ 2 times higher central densities than halos in voids \citep{avila05}. In models where the distribution of dark matter in central regions of the galaxy is driven by stellar feedback processes \citep[e.g.][]{pontzen12, governato12}, void galaxies, with their typically higher star formation rates \citep{moorman16} could be more affected by such baryonic processes as compared to galaxies in high density regions.
This paper is organized as follows. In \S \ref{obs}, we describe the sample and the procedures used for the derivation of rotation curves. The construction of mass models is described \S \ref{massmodels}. In \S \ref{results}, we discuss the dark matter density profiles derived from the 2-D and 3-D rotation curves and finally, we summarize the main results in \S \ref{summary}.
\section{Kinematic analysis}
\label{obs}
\subsection{Rotation curves}
\label{rot}
Our sample galaxies are gas rich dwarfs, lying in nearby voids, and for which we have reasonably well resolved H{\sc i} data cubes. The sample selection and the data reduction is discussed in detail in \citet[][hereafter Paper~I]{kurapati18}, and the interested reader is referred to that for more details. The derivation of rotation curves is also presented in detail in Paper~I, and is briefly summarized below. As discussed earlier, rotation curves can be derived by fitting a tilted ring model either to the data cube (i.e. a ``3-D'' approach) or to the velocity field (i.e. a ``2-D'' approach). We use both of these approaches to derive rotation curves for our sample galaxies. The velocity field was determined using the `MOMNT' task in {\sc aips }; the moment method is a commonly used method for determining H{\sc i} velocity fields.
For comparison, we also derive the velocity field using Gauss-Hermite fits to the individual spectra, a comparison of the results obtained using the moment method and the Gauss-Hermite fits is presented in Sec.~\ref{ssec:gauss-hermite}.
In the ``2-D'' approach, we derive the rotation curves by fitting the tilted ring model to this velocity field using the `ROTCUR' task from the {\sc gipsy }\ software package \citep{vanderhulst92}. In the ``3-D'' approach we derive the rotation curves by fitting the tilted ring model directly to the H{\sc i} data cube using the {\sc fat } pipeline \citep[v5.0.2,][]{kamphuis15}. We fit flat discs, where all parameters except the surface brightness and the rotational velocity of each ring have the same value at all radii. Our data do not show obvious warping signatures. The {\sc fat } and ROTCUR derived rotation curves \citep[shown in Fig. A1, A2, and A3 of][]{kurapati18} are in broad agreement for 8 of the 11 galaxies. The main difference is that the inner parts of the rotation curves derived using {\sc fat } are steeper than those derived using ROTCUR. This is likely due to the fact that {\sc fat } operates directly on data cubes instead of velocity fields. It is therefore expected to be less affected by projection effects such as ``beam smearing". For the remaining 3 galaxies, the rotation curves derived using the {\sc fat } pipeline were unreliable in the central regions probably due to their clumpy H{\sc I} distribution. Hence, we exclude these 3 galaxies for the construction of the mass models.
The ``3-D'' approach works well for galaxies with inclinations upto 90$^{\circ}$ whereas the ``2-D'' approach is expected to be reliable for the galaxies with inclinations upto $\sim$ 70$^{\circ}$ \citep{begeman89}. In our sample, the galaxy UGC4148 is above this limit of 70$^{\circ}$ and the galaxies J0929+1155 and J0926+3343 have inclinations of $\sim$ 70$^{\circ}$. The optical and kinematic parameters (as derived from the {\sc fat}) for the selected 8 galaxies are listed in Table \ref{tab:sample}. The columns are as follows. (1): galaxy name, (2): distance to the galaxy in Mpc, (3): absolute B-band magnitude (corrected for Galactic extinction using extinction values from NED), (4): H{\sc i} mass ($\times $ 10$^{7}$ M$_{\odot}$), and (5): inclination angle in degrees, (6): velocity dispersion in \ensuremath{\rm km \ s^{-1}}\ and (7): maximum rotation velocity in \ensuremath{\rm km \ s^{-1}}\ as derived from the {\sc fat}.
\subsection{Correction for pressure support}
\label{pressure}
\begin{table}
\begin{footnotesize}
\caption{Parameters of galaxies selected for this study}
\label{tab:sample}
\begin{tabular}{ p{1.4cm} p{0.6cm} p{0.8cm} p{1.0cm} p{0.4cm} p{0.7cm} p{0.5cm}}
\\
\hline
Name & $d$ (Mpc) & \ensuremath{\rm M_B} & $\ensuremath{\rm M_{HI}}$ (10$^{7}$M$_{\odot}$) & i$^{ a}$ & $\sigma^{ b}$ & V$_{\rm max}^{ b}$\\
\hline
KK246$^{c}$ & 6.85 & -13.69 & 9.0 & 62 & 10.9 & 42.0 \\
UGC4115 & 7.73 & -14.75 & 31.9 & 55 & 13.6 & 56.5 \\
J0926+3343 & 10.6 & -12.90 & 5.2 & 69 & 11.5 & 31.5 \\
UGC5288 & 11.4 & -15.61 & 90.2 & 38 & 9.1 & 72.4 \\
UGC4148 & 13.5 & -15.18 & 78.4 & 83 & 8.4 & 63.9 \\
J0630+23 & 22.9 & -15.89 & 135.1& 51 & 10.9 & 80.2 \\
J0626+24 & 23.2 & -15.64 & 63.8 & 62 & 10.2 & 80.3 \\
J0929+1155 & 24.3 & -14.69 & 36.6 & 72 & 9.1 & 62.3 \\
\hline
\multicolumn{7}{l}{ $^{ a}$ The inclination angle (i) is in degrees, }\\
\multicolumn{7}{l}{$^{ b}$ Velocity dispersion ($\sigma$), maximum velocity (V$_{\rm max}$) are in km~s$^{-1}$ } \\
\multicolumn{7}{l}{$^{ c}$ The \ensuremath{\rm M_B}\ and $d$ values for KK246 and all other galaxies are}\\
\multicolumn{7}{l}{taken from \citet{kreckel11} and \citet{pustilnik11}.}\\
\end{tabular}
\end{footnotesize}
\end{table}
The observed rotation velocities are usually smaller than the true circular velocities, as the random gas motions in the gaseous disc provide ``pressure support'' to the disc. This effect is particularly significant in dwarf galaxies, whose velocity dispersions are comparable to the maximum rotation velocity. In order to construct the mass models, the observed rotation velocities have to be corrected for this pressure support \citep[e.g.][]{meurer96,begum04}. The correction is given by: \\
$v_{c}^{2} = v_{o}^{2} - r \, \sigma^{2} \bigg[\frac{d}{dr}(\ln \Sigma_{HI}) + \frac{d}{dr}(\ln \sigma^{2}) - \frac{d}{dr}(\ln 2h_{z}) \bigg]$ \\
where $v_{c}$ is the corrected velocity, $v_{o}$ is the observed rotation velocity, $\Sigma_{HI}$ is the H{\sc i} surface density, $\sigma$ is the velocity dispersion and $h_{z}$ is the scale height of the disc. We assume that the scale height does not vary with radius and that the velocity dispersion is constant across the galaxy (but we have also checked that assuming a linear gradient in both the velocity dispersion and the scale height does not make a significant difference -- see below). Under the assumption of a constant scale height and velocity dispersion, the pressure correction simplifies to: \\
$v_{c}^{2} = v_{o}^{2} - r \, \sigma^{2} \bigg[\frac{d}{dr}(\ln \Sigma_{HI}) \bigg]$
We use parametric fits to the de-projected radial surface density profiles (themselves obtained using the task `ELLINT' in the {\sc gipsy } software) to determine the pressure support correction. For most of the galaxies, either a Gaussian or a double Gaussian profile fits the radial surface density distribution best; an exponential profile fits best in the case of KK246. For the velocity dispersion, we use the values determined from the data cubes by the {\sc fat } pipeline.
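As an illustration, a minimal Python sketch of the simplified correction above, assuming a toy Gaussian surface density profile and a constant velocity dispersion (all numbers are placeholders, not values for any galaxy in the sample), could read:
\begin{verbatim}
import numpy as np

# Illustrative sketch of the pressure support correction for a constant
# velocity dispersion and a Gaussian HI surface density profile.
r        = np.linspace(0.1, 5.0, 50)             # ring radii in kpc
v_obs    = 50.0 * r / np.sqrt(1.0 + r**2)        # toy observed rotation curve (km/s)
sigma    = 10.0                                  # HI velocity dispersion (km/s)
r0       = 2.0                                   # Gaussian scale of Sigma_HI (kpc)
sigma_HI = np.exp(-r**2 / (2.0 * r0**2))         # surface density (normalisation drops out)

dlnSigma_dr = np.gradient(np.log(sigma_HI), r)   # d ln(Sigma_HI) / dr

v_c_sq = v_obs**2 - r * sigma**2 * dlnSigma_dr
v_c    = np.sqrt(np.clip(v_c_sq, 0.0, None))     # corrected rotation curve (km/s)
\end{verbatim}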
The rotation curves after correcting for the ``pressure support'' are shown in Appendix A. For all the galaxies, we find that the corrections are usually small in the inner regions, but are significant in the outer regions, except for J0926+3343, which has the smallest angular diameter. Our measurement of the DM density profile slope $\alpha$ is most sensitive to the inner part of the rotation curve, and is hence not much affected by the pressure support correction. We have also computed the corrections assuming a linear gradient in both the scale height and the velocity dispersion (6--12 km s$^{-1}$), and find that the conclusion that the corrections are not significant at the inner radii remains unchanged.
\begin{figure*}
\noindent
\captionsetup[subfigure]{aboveskip=-1pt,belowskip=-1pt}
\subfloat{\includegraphics[width = 3.25in]{J0626+24_dm.pdf}}
\subfloat{\includegraphics[width = 3.25in]{J0626+24_density.pdf}} \\
\subfloat{\includegraphics[width = 3.25in]{J0929+1155_dm.pdf}}
\subfloat{\includegraphics[width = 3.25in]{J0929+1155_density.pdf}} \\
\subfloat{\includegraphics[width = 3.25in]{KK246_dm.pdf}}
\subfloat{\includegraphics[width = 3.25in]{KK246_density.pdf}}
\caption{Left panels: Mass models for J0626+24, J0929+1155 and KK246. The open circles indicate the total observed velocities after the correction for pressure support. The blue filled circles indicate the velocity contribution of the dark matter halo (after subtracting the contribution of baryons). The red dashed line and the green solid line represent the best fit NFW model and the best fit ISO model. The blue dotted line and the brown dashed line represent the gas and stars respectively. The residual velocities from best-fit ISO and NFW model are shown by the green filled circles and the red open circles. Right panels: The mass density profiles of J0626+24, J0929+1155 and KK246. The black dotted line indicates the data points used for the estimation of inner density slope.}
\label{fig:dens1}
\end{figure*}
\begin{figure*}
\noindent
\captionsetup[subfigure]{aboveskip=-1pt,belowskip=-1pt}
\subfloat{\includegraphics[width = 3.25in]{U4148_dm.pdf}}
\subfloat{\includegraphics[width = 3.25in]{U4148_density.pdf}} \\
\subfloat{\includegraphics[width = 3.25in]{J0630+23_dm.pdf}}
\subfloat{\includegraphics[width = 3.25in]{J0630+23_density.pdf}} \\
\subfloat{\includegraphics[width = 3.25in]{U5288_dm.pdf}}
\subfloat{\includegraphics[width = 3.25in]{U5288_density.pdf}}
\caption{Left panels: Mass models for UGC4148, J0630+23 and UGC5288. The open circles indicate the total observed velocities after the correction for pressure support. The blue filled circles indicate the velocity contribution of the dark matter halo (after subtracting the contribution of baryons). The red dashed line and the green solid line represent the best fit NFW model and the best fit ISO model. The blue dotted line and the brown dashed line represent the gas and stars respectively. The residual velocities from best-fit ISO and NFW model are shown by the green filled circles and the red open circles. Right panels: The mass density profiles of UGC4148, J0630+23 and UGC5288. The black dotted line indicates the data points used for the estimation of inner density slope.}
\label{fig:dens2}
\end{figure*}
\begin{figure*}
\noindent
\captionsetup[subfigure]{aboveskip=-1pt,belowskip=-1pt}
\subfloat{\includegraphics[width = 3.25in]{U4115_dm.pdf}}
\subfloat{\includegraphics[width = 3.25in]{U4115_density.pdf}} \\
\subfloat{\includegraphics[width = 3.25in]{J0926+3343_dm.pdf}}
\subfloat{\includegraphics[width = 3.25in]{J0926+3343_density.pdf}}
\caption{Left panels: Mass models for UGC4115 and J0926+3343. For UGC4115, the fit did not yield physical results when both c and R$_{200}$ were allowed to vary. Hence, we fit the model by fixing the c value to the average concentration parameter of the other galaxies ($\sim$ 5.5). The open circles indicate the total observed velocities after the correction for pressure support. The blue filled circles indicate the velocity contribution of the dark matter halo (after subtracting the contribution of baryons). The red dashed line and the green solid line represent the best fit NFW model and the best fit ISO model. The blue dotted line and the brown dashed line represent the gas and stars respectively. The residual velocities from best-fit ISO and NFW model are shown by the green filled circles and the red open circles. Right panels: The mass density profiles of UGC4115 and J0926+3343. The black dotted line indicates the data points used for the estimation of inner density slope.}
\label{fig:dens3}
\end{figure*}
\section{Mass models}
\label{massmodels}
\begin{table*}
\begin{footnotesize}
\caption{Optical and dark matter properties }
\label{table:dm}
\begin{tabular}{ p{1.6cm} p{0.7cm} p{0.8cm} p{1.2cm} p{0.7 cm} p{1.4cm} p{1.9cm} p{0.7cm} p{1.4cm} p{2.1cm} p{0.7cm}}
\\
\hline
& & & & & Isothermal & & & NFW & & \\
\hline
Galaxy & D & M$_{B}$ & V$_{rot}$ & $\gamma_{\ast}$ & r$_{c}$ & $\rho_{0}$ &$\chi^{2}_{r}$ & c & r$_{200}$ & $\chi_{r}^{2}$ \\
& (Mpc) & & (km s$^{-1}$) & & (kpc) & (M$_{\odot}$ pc$^{-3}$) & & & (kpc) & \\
\hline
KK246 & 6.85 & -13.69 & 48.5 & 1.07 & 1.20$\pm$0.26 & 39$\pm$12 & 0.30 & 2.7$\pm$0.7 & 52$\pm$10 & 0.06 \\
U4115$^{ a}$ & 7.73 & -14.75 & 62.9 & 0.29 & 1.87$\pm$0.20 & 31$\pm$4 & 0.09 & ..(5.5) &.. (37$\pm$3)&..(0.78)\\
J0926+3343 & 10.6 & -12.90 & 38.3 & 0.06 & 0.34$\pm$0.19 & 163$\pm$143 & 0.52 & 1.9$\pm$6.1 & 60$\pm$161 & 0.15 \\
U5288 & 11.4 & -15.61 & 80.6 & 0.36 & 0.25$\pm$0.05 & 1341$\pm$507& 0.71 & 11.4$\pm$1.7& 41$\pm$2 & 1.21 \\
U4148 & 13.5 & -15.18 & 67.3 & 0.17 & 0.50$\pm$0.11 & 289$\pm$118 & 0.69 & 8.4$\pm$0.6 & 39$\pm$1 & 0.14 \\
J0630+23 & 22.9 & -15.89 & 86.0 & 0.24 & 1.92$\pm$0.24 & 42$\pm$8 & 0.34 & 3.4$\pm$0.8 & 71$\pm$10 & 0.39 \\
J0626+24 & 23.2 & -15.64 & 88.0 & 0.27 & 0.89$\pm$0.33 & 144$\pm$90 & 2.14 & 5.1$\pm$1.1 & 60$\pm$8 & 0.52 \\
J0929+1155 & 24.3 & -14.69 & 66.8 & 0.22 & 0.64$\pm$0.10 & 185$\pm$48 & 0.17 & 5.8$\pm$0.7 & 46$\pm$3 & 0.07 \\
\hline
\multicolumn{11}{l}{$^{ a}$ For UGC4115, the fit did not yield physical results when both c and R$_{200}$ were allowed to vary for the NFW halo model. }\\
\multicolumn{11}{l}{Hence, we fix the c value to the average concentration parameter of the other galaxies ($\sim$ 5.5) to fit the NFW halo model. }
\end{tabular}
\end{footnotesize}
\end{table*}
The circular velocity reflects the gravitational potential of the total matter content of the galaxy, which includes stars, gas and dark matter. We use the H{\sc i} and optical de-projected radial surface brightness profiles and subtract the dynamical contribution of the atomic gas and stars in order to constrain the dark matter distribution. The derivation of the stellar and gas surface densities is described in \S \ref{ste} and \S \ref{gas}. In \S \ref{ml}, we discuss the various assumptions regarding the stellar mass to luminosity ratio, and the effect they have on the derived dark matter distribution. We assume that the contribution of the molecular gas is negligible since, for faint dwarf galaxies such as those in our sample, the molecular fraction is believed to be low \citep[e.g.][]{taylor98, schruba12, cormier14}. The contribution of the ionized gas is also expected to be small and has been neglected. The H{\sc i} gas and stars hence form the dominant baryonic component of the galaxy. In \S \ref{dark} below, we describe the different dark matter models that were used for the construction of the mass models. Once the form of the contribution of each of these components was determined, mass models were fit to the pressure support corrected rotation curves using the `ROTMAS' task in {\sc gipsy }. The modelling procedure consists of a $\chi^{2}$ minimization of
$v_{c}^{2} -\gamma v_{*}^{2} -v_{g}^{2} -v_{h}^{2} $, where $v_{c}$ is the rotation velocity after correcting for the pressure support, $\gamma$ is the mass to luminosity ratio, $v_{*}, v_{g} $ and $v_{h} $ are the rotation velocities contributed by stars, gas and dark matter halo respectively.
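A minimal Python sketch of this minimisation step, with placeholder input curves and a toy two-parameter halo rotation curve (the concrete isothermal and NFW forms are given in \S \ref{dark}), could be:
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

# Placeholder input curves; in practice these come from the data reduction.
r      = np.linspace(0.3, 5.0, 20)               # ring radii in kpc
v_c    = 55.0 * r / np.sqrt(1.0 + r**2)          # pressure-corrected curve (km/s)
v_star = 10.0 * np.sqrt(r) * np.exp(-r / 4.0)    # stellar contribution, gamma = 1 (km/s)
v_gas  = 8.0 * np.sqrt(r) * np.exp(-r / 6.0)     # gas contribution (km/s)
dv     = np.full_like(r, 3.0)                    # velocity uncertainties (km/s)
gamma  = 0.3                                     # fixed stellar mass-to-light ratio

def v_halo(r, p):
    # toy halo rotation curve with two free parameters (amplitude, turnover radius)
    v0, r0 = p
    return v0 * r / np.sqrt(r**2 + r0**2)

def residuals(p):
    model_sq = gamma * v_star**2 + v_gas**2 + v_halo(r, p)**2
    return (v_c**2 - model_sq) / (2.0 * v_c * dv)   # error on v^2 propagated from dv

fit = least_squares(residuals, x0=[40.0, 1.0])      # best-fit halo parameters in fit.x
\end{verbatim}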
\subsection{Stellar Component}
\label{ste}
We use the $g$-band optical images to calculate the contribution of the stellar disc to the rotation curve. The optical $g$-band images were either taken from SDSS \citep{ahn12} or from PanSTARRS \citep{flewelling16} for those galaxies which lie outside the SDSS footprint. The de-projected luminosity profiles were derived using the task `ELLINT' in {\sc gipsy }, using the parameters obtained from the tilted ring fits. We fit an exponential to the extracted luminosity profile to obtain the scale length (h). We derive the $g$-band mass to luminosity ratio from the $g-i$ colors \citep[taken from][]{perepelitsyna14} and the stellar mass to light relations given in \citet{zibetti09}. We use the derived mass to luminosity ratio to convert the luminosity profiles into stellar mass profiles. For the mass modelling, we assume a vertical sech$^{2}$(z) scale height distribution, with the ratio of scale length to scale height (h/z$_{\circ}$) taken to be 5. We have also confirmed that the choices of the vertical profile and the value of h/z$_{\circ}$ do not affect the mass models significantly.
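A Python sketch of these two steps (an exponential fit to a toy luminosity profile and the colour-based conversion to a mass profile) is given below; the colour--M/L coefficients are placeholders and are not the \citet{zibetti09} values.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

r   = np.linspace(0.2, 4.0, 15)                  # radii in kpc
L_g = 2.0e7 * np.exp(-r / 1.2)                   # toy g-band profile (Lsun / kpc^2)

def expdisc(r, L0, h):
    return L0 * np.exp(-r / h)

(L0, h), _ = curve_fit(expdisc, r, L_g, p0=[L_g[0], 1.0])   # scale length h in kpc

g_i  = 0.6                                       # observed g-i colour (placeholder)
a, b = -1.0, 1.0                                 # placeholder colour-M/L coefficients
ml_g = 10.0**(a + b * g_i)                       # g-band mass-to-light ratio
Sigma_star = ml_g * expdisc(r, L0, h)            # stellar mass surface density
\end{verbatim}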
\subsection{Gas Component}
\label{gas}
We use the total integrated H{\sc i} intensity maps to derive the contribution of the gas to the observed rotation velocity. We apply the tilted ring parameters (\S \ref{rot}) to derive the de-projected H{\sc i} radial surface density profiles using the task `ELLINT' in {\sc gipsy}. For most galaxies, the H{\sc i} surface density profiles derived in this way broadly match the profiles derived using the {\sc fat } pipeline to fit directly to the data cube. In some cases, however, we find that the {\sc fat }\ derived surface density tends to be slightly higher in the inner regions of the galaxy. The reason for this difference is unclear. We show in \S \ref{results} that our main results are not affected significantly if one uses the {\sc fat } derived surface brightness profiles instead of the `ELLINT' ones. We scale the derived H{\sc i} surface density profiles by a factor of 1.35 to account for the contribution of helium and metals to the gas mass. As discussed above, we assume that the contribution of molecular gas and ionized gas to the total gas mass is negligible.
\subsection{Dark matter halo}
\label{dark}
We use two well-known models, viz. the isothermal and the NFW models, to parametrize the dark matter distribution. The parameters of these models are briefly summarized below.
\subsubsection{Isothermal halo model}
The density distribution of the observationally motivated pseudo-isothermal (ISO) halo model \citep[e.g.][]{begeman91} is:
\begin{equation}
\rho_{iso} (r) =\rho_{0}[1+(r/r_{c})^{2}]^{-1}
\end{equation}
where $\rho_0$ is the core density and $r_c$ the core radius of the halo. The corresponding circular velocity is:
\begin{equation}
V_{iso} (r) = \sqrt{4 \pi G \rho_{0} r_{c}^{2}\Big [1-\dfrac{r_{c}}{r} \tan^{-1}\Big(\dfrac{r}{r_{c}}\Big)\Big ]}
\end{equation}
The inner slope ($\rho \sim r^{\alpha}$) of the mass density profile for the isothermal halo model is $\alpha = 0$ (since for $r \ll r_{c}$, $\rho_{iso} \approx \rho_{0}$).
\subsubsection{NFW halo model}
The density profile of the NFW halo model \citep{navarro96} is given by:
\begin{equation}
\rho_{NFW} (r) =\dfrac{\rho_{i}}{(r/r_{s})(1+r/r_{s})^{2}}
\end{equation}
where $ r_{s} $ is a characteristic radius of the dark matter halo and $\rho_{i}$ is a characteristic density related to the density of the universe at the time of collapse of the halo.
The corresponding circular velocity is:
\begin{equation}
V_{NFW} (r) = v_{200} \sqrt{\dfrac{\ln(1+cx)-cx/(1+cx)}{x[\ln(1+c)-c/(1+c)]}}
\end{equation}
where $ c=r_{200}/r_{s} $ is the concentration parameter and $ x=r/r_{200} $; $ r_{200} $ is the radius at which the mean density of the halo is equal to 200 times the critical density and $ v_{200} $ is the rotational velocity at $ r_{200}$. $ v_{200} $ is related to $ r_{200}$ as $ v_{200} $ = h$ r_{200}$, where h is the dimensionless Hubble parameter. The inner slope of the density distribution is $\alpha \sim -1$ (at radii $r \ll r_{s}$, $\rho_{NFW} \approx \rho_{i}$ \big($\frac{r_{s}}{r}$ \big)).
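For reference, the two halo rotation curves can be evaluated with the short Python functions below (with G in units of kpc\,(km\,s$^{-1}$)$^{2}$\,M$_{\odot}^{-1}$; the value adopted for the dimensionless Hubble parameter is an assumption of this sketch):
\begin{verbatim}
import numpy as np

G = 4.301e-6          # kpc (km/s)^2 / Msun
h = 0.7               # dimensionless Hubble parameter (assumed here)

def v_iso(r, rho0, rc):
    # pseudo-isothermal halo; rho0 in Msun/pc^3, rc and r in kpc, result in km/s
    rho0_kpc3 = rho0 * 1.0e9
    return np.sqrt(4.0 * np.pi * G * rho0_kpc3 * rc**2 *
                   (1.0 - (rc / r) * np.arctan(r / rc)))

def v_nfw(r, c, r200):
    # NFW halo; c dimensionless, r200 and r in kpc, V200 = h * r200 in km/s
    v200 = h * r200
    x = r / r200
    return v200 * np.sqrt((np.log(1.0 + c * x) - c * x / (1.0 + c * x)) /
                          (x * (np.log(1.0 + c) - c / (1.0 + c))))
\end{verbatim}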
\subsection{Constructing the mass models}
\label{ml}
As described above, mass models for the galaxies were fit using three components, i.e. the stellar disc, the gas disc and the dark matter halo. A number of different fits were performed. Firstly, for the dark matter halo we fit both the isothermal and NFW halo models. For each of these we allowed five different possibilities for the contributions from the baryonic components, viz. (1) {\it Constant $\gamma_{\ast}$:} In this case, we use a fixed mass to light ratio as derived from stellar population synthesis (SPS) models. (2) {\it Maximum disc:} This model assumes that the observed rotation curve in the inner regions is almost entirely due to the stellar component. In this case, we scale the rotation curve due to the stellar component to the maximum value for which the dark matter density is non-negative at all radii. This model provides a lower limit to the dark matter density at all radii. (3) {\it Minimum disc:} In this case, we assume that the contribution of baryons to the observed rotation curve is zero. This assumption provides an upper limit to the dark matter density. (4) {\it Minimum disc + gas:} The stellar disc contribution to the rotation velocity is assumed to be zero, but the contribution of the gas disc is fully included. (5) {\it Free $\gamma_{\ast}$:} In this model, we allow the mass-to-light ratio to be a free parameter. In some of the galaxies, this assumption did not yield physical results. The fit results of individual galaxies for the various assumptions on $\gamma_{\ast}$ are presented in Table \ref{table:gamma} in the Appendix.
\section{Results}
\label{results}
Fig.~\ref{fig:dens1}--\ref{fig:dens3} show the fit results for case (1), i.e. where the stellar mass to light ratio is fixed using the $g-i$ colors, using the rotation curves derived with {\sc fat }; the corresponding fit parameters are given in Table~\ref{table:dm}. We find that the NFW halo provides a better fit for most of the galaxies in terms of fit quality. However, the $\chi^{2}_{red}$ is less than 1 in almost all cases, indicating that the errors provided by the {\sc fat } package are likely to be overestimated. Although both fits are formally acceptable, the residual velocities for the NFW model (shown by red open circles) are generally smaller than the residual velocities for the isothermal model (shown by green filled circles) in the central regions. For 2 galaxies, viz. UGC4115 and UGC5288, the isothermal halo gave a better fit than the NFW halo. We note that the galaxy UGC5288 has a strong bar. For the galaxy UGC4115, we did not obtain physical results when both $c$ and R$_{200}$ were allowed to vary freely during the fitting process. For this galaxy we hence fix the value of $c$ to the average concentration parameter of the other galaxies ($\sim 5.5$).
\subsection{Dark matter density profiles}
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{alpha.pdf}
\includegraphics[width=1.0\linewidth]{alpha_gipsy.pdf}
\caption{The value of the inner slope ($\alpha$) of the dark matter density distribution versus the radius of the innermost point. The data points from this work are shown by blue circles. The red stars are from \citet{deBlok01}, the green pentagons are from \citet{deBosma02}, the yellow diamonds are from \citet{swaters00}, and the cyan triangles are from \citet{verheijen97}. The black dotted lines are the theoretical slopes for the isothermal halo with core radii of 0.5 (left), 1 (middle) and 2 (right) kpc. The red solid line is the NFW model (r$^{-1}$) and the green dashed line is the CDM r$^{-1.5}$ model with c=8 and v$_{200}$ = 100 km s$^{-1}$. Upper panel: slopes obtained using the rotation curves derived from {\sc fat } (3-D approach); lower panel: slopes obtained using the rotation curves derived from ROTCUR (2-D approach).}
\label{fig:alpha}
\end{figure}%
Apart from fitting mass models, one can also use the rotation curve to directly determine the inner slope of the density profile \citep[e.g.][]{oh11a}. We assume a spherical halo potential \citep{trachternach08} to convert the rotation curve to the dark matter density profile using the following equation:
\begin{equation}
\rho(R) = \frac{1}{4\pi G} \bigg[2\frac{V}{R} \frac{\partial V}{\partial R} + \bigg( \frac{V}{R} \bigg)^{2} \bigg]
\end{equation}
where G is the gravitational constant and $V(R)$ is the rotation velocity after correcting for pressure support at a radius R. The right panels of Fig~\ref{fig:dens1}--\ref{fig:dens3} show the derived dark matter density profiles of the individual galaxies. We plot both the dark matter densities derived from the total rotation curve (brown circles) and from the rotation curve after subtracting the contribution of baryons (green squares). The best-fit dark matter density profiles of isothermal halo and NFW halo models for a fixed mass to light ratio are also overplotted. As can be seen, the central dark matter density profiles are steeper than expected for the isothermal halo model, but are a good match to what would be expected from the NFW model.
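In practice, this conversion can be done with a simple finite-difference evaluation of the derivative, as in the Python sketch below (toy rotation curve; radii in kpc, velocities in km s$^{-1}$):
\begin{verbatim}
import numpy as np

G = 4.301e-6    # kpc (km/s)^2 / Msun

def density_from_rotation_curve(r, v):
    # spherical mass density implied by a rotation curve v(r); result in Msun/kpc^3
    dv_dr = np.gradient(v, r)
    return (2.0 * (v / r) * dv_dr + (v / r)**2) / (4.0 * np.pi * G)

r   = np.linspace(0.3, 5.0, 20)                  # toy pressure-corrected curve
v   = 55.0 * r / np.sqrt(1.0 + r**2)
rho = density_from_rotation_curve(r, v)          # Msun / kpc^3
rho_pc3 = rho / 1.0e9                            # Msun / pc^3
\end{verbatim}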
In order to quantify the cuspiness of the dark matter distribution in the central regions of the galaxies, the logarithmic inner slope of the density profiles was measured following the method described in \citet{deBlok01} \citep[see also][]{deBlok02, oh11, oh15}. We first determine the ``break radius" in the central regions of the galaxy. The break radius is defined as the radius at which the variation of the logarithmic slope of the density profile is maximum. The inner density slope ($\alpha$) is then measured using a least-squares fit to the data points that lie within the break radius. The range over which the fit is performed is shown in the right panels of Fig~\ref{fig:dens1}--\ref{fig:dens3} by a black dotted line. The uncertainty in $\alpha$ is taken to be half of the difference between the slopes measured when one includes or excludes the first data point outside the break radius in the fit. This error estimate is probably more appropriate than that derived from the formal fit error. The inner density slopes were measured from both the observed rotation curve and the rotation curve obtained after subtracting the baryonic contribution. The former case corresponds to the ``minimum disc assumption"; in the latter case the rotation velocity corresponds to the contribution of the dark matter alone. The measured slope $\alpha$ and the uncertainty in slope $\Delta \alpha$ for the dark matter only profiles are shown in the right panels of Fig~\ref{fig:dens1}--\ref{fig:dens3}. We find that the mean values of the inner density slopes are $\alpha$ = -1.39 $\pm$0.19 and $\alpha$ = -1.48 $\pm$0.23 for the minimum disc assumption and the dark matter only profile respectively.
We note that the two values overlap within the errorbars. The measured logarithmic inner density slopes for all the galaxies are listed in Table \ref{table:alpha}. These values are in better agreement with the logarithmic inner slope ($\sim$ -1) of the cuspy NFW profile, and are inconsistent with the slope of $\sim 0$ expected for the ISO halo. We also note that, for all the galaxies, the radius within which we measure the inner density slope (r$_{\rm d}$) is much smaller than the best fit characteristic radius (r$_{s}$) of the NFW halo: r$_{\rm d}$ is typically $\sim$ 10\% of r$_{s}$ and within 30\% of r$_{s}$ in all cases. For the isothermal haloes, in contrast, we find that r$_{\rm d}$ is typically comparable to the best fit core radius (r$_{c}$) and is larger than r$_{c}$ in a few cases, which may lead to steeper DM density profiles (steeper than $\alpha \sim 0$).
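A minimal Python sketch of this measurement is given below; it assumes that the break radius is located at the ring where the point-to-point change of the logarithmic slope is largest, which is one simple way to implement the definition above.
\begin{verbatim}
import numpy as np

def inner_slope(r, rho):
    # break radius and inner logarithmic slope of a density profile rho(r)
    logr, logrho = np.log10(r), np.log10(rho)
    local_slope = np.gradient(logrho, logr)
    i_break = np.argmax(np.abs(np.diff(local_slope))) + 1   # break radius index
    alpha_in, _ = np.polyfit(logr[:i_break + 1], logrho[:i_break + 1], 1)
    # uncertainty: half the change in alpha when the first point outside the
    # break radius is included in the fit
    stop = min(i_break + 2, len(r))
    alpha_out, _ = np.polyfit(logr[:stop], logrho[:stop], 1)
    return alpha_in, 0.5 * abs(alpha_out - alpha_in), r[i_break]
\end{verbatim}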
\subsection{Comparison with earlier studies}
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{alpha_rmax.pdf}
\includegraphics[width=1.0\linewidth]{alpha_rmax_gipsy.pdf}
\caption{The value of the inner slope ($\alpha$) of the dark matter density distribution versus the number of resolution elements across the major axis of the galaxy. The data points from this work are shown by blue circles. The red squares are from \citet{oh15}. Upper panel: slopes obtained using the rotation curves derived from {\sc fat } (3-D approach); lower panel: slopes obtained using the rotation curves derived from ROTCUR (2-D approach).}
\label{fig:alpha_rmax}
\end{figure}%
The steep inner slopes that we measure are in contradiction with several earlier measurements based on independent samples. For example, \citet{deBlok01} measure $\alpha$ to be $-0.2 \pm 0.2$, while \citet{oh11} find $\alpha = -0.29 \pm 0.07$ and \citet{oh15} obtain $\alpha = -0.42 \pm 0.21$ using the minimum disc assumption. Some earlier studies \citep[e.g.][]{verheijen97} have however found steeper slopes, i.e. $\alpha \sim -1.8$. Insufficient sampling of the dark matter density profiles in the inner region could lead to an artificial steepening of the slope \citep{deBlok01}. This is because the logarithmic slope of the density profile becomes steeper towards the outer regions; steeper slopes would hence be obtained if data points from the outer region are included in the estimation of the slope. To check whether this undersampling is the cause of the steeper slopes that we measure, we compare the number of resolution elements across the galaxy for the galaxies in our sample and in the literature. Fig.~\ref{fig:alpha_rmax} (upper panel) shows the value of the inner slope versus the number of resolution elements across the major axis of the galaxy. The blue circles indicate the galaxies from our sample and the red squares are the galaxies from earlier studies. As can be seen, the galaxies in our sample have a similar number of resolution elements to the galaxies in the literature. This indicates that the steeper slopes we obtain are not due to a different sampling of the inner regions of the dark matter haloes. The other point of departure of this study is that we use rotation curves derived using a fit to the 3-D data cube, instead of a fit to the 2-D velocity field. To see what difference this makes, we show in Fig.~\ref{fig:alpha_rmax} (lower panel) the slopes measured for our sample using the 2-D velocity field and the {\sc gipsy } task ROTCUR. As can be seen, the slopes measured for our sample and the slopes measured for earlier samples agree quite well when we use rotation curves derived from the 2-D velocity field. Fig.~\ref{fig:alpha}, which shows the value of the inner slope ($\alpha$) versus the radius of the innermost point, gives a similar result. The upper panel of Fig.~\ref{fig:alpha} shows slopes for our galaxies based on the {\sc fat } rotation curves, while the lower panel shows the slopes measured using ROTCUR on the 2-D velocity fields. The data points from this work are shown by blue circles and the other symbols are measurements taken from other studies (see the caption for details). The black dotted lines show the slopes for the isothermal model, while the red solid line and the green dashed line show the $\Lambda$CDM r$^{-1}$ \citep{navarro96} and r$^{-1.5}$ \citep{moore99} models respectively. The mean value of $\alpha$ for the estimates from rotation curves derived from the 2-D velocity field is $\alpha = -0.49 \pm 0.24$, which is consistent with the values obtained in the literature.
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{density_scale.pdf}
\includegraphics[width=1.0\linewidth]{density_scale_gipsy.pdf}
\caption{Upper panel: the density profiles derived using the {\sc fat } (3-D approach) rotation curves. Lower panel: the density profiles derived using the ROTCUR (2-D approach) rotation curves. The black dotted line represents NFW haloes ($\alpha \sim -1$) with V$_{200}$ in the range 10--110 km s$^{-1}$, and the blue dashed line is the best-fit isothermal halo model.}
\label{fig:density}
\end{figure}%
\subsection{Scaled density profiles}
So far we have been dealing with each galaxy separately, combining the results from parametric fits to the individual rotation curves. A complementary approach is to suitably scale each rotation curve, so that they can be combined, and then compare this composite curve with theoretical models. To do this, we follow \citet[][]{hayashi06, oh11, oh15} and scale the pressure corrected rotation velocities to V$_{0.3}$ and the radii to R$_{0.3}$, where R$_{0.3}$ is the radius at which the logarithmic slope of the rotation curve ($d$ log$V$/$d$ log$R$) becomes 0.3 and V$_{0.3}$ is the velocity at this radius. This scaling radius can generally be well determined for all the galaxies as it lies between the rising ($d$ log$V$/$d$ log$R$ = 1) and the flat ($d$ log$V$/$d$ log$R$ = 0) parts of the rotation curve. For our sample, we can determine R$_{0.3}$ for all the galaxies except J0926+3343; for J0926+3343 the outermost radius is taken as the scaling radius. In order to compare this composite rotation curve with the NFW model, we also scale the theoretical NFW curves, with V$_{200}$ ranging from 10 to 110 km s$^{-1}$ and concentration parameters obtained using the empirical c--V$_{200}$ relation \citep{deBlok03, mcGaugh07}, with respect to their R$_{0.3}$ and V$_{0.3}$. Fig.~\ref{fig:density} shows the plot of the scaled density versus the scaled radius. The black dotted line represents NFW haloes ($\alpha \sim -1$) with V$_{200}$ in the range 10--110 km s$^{-1}$ and the blue dashed line is the best-fit isothermal halo model. Once again we see a clear difference between the results derived from the 2-D and 3-D rotation curves. The upper panel shows the density profile obtained using the {\sc fat }-derived rotation curves, which is consistent with the cuspy NFW profile. The lower panel shows the density profile derived using the ROTCUR-derived rotation curves; this is not consistent with the NFW profile but is in good agreement with the best-fit isothermal model, in contrast to the {\sc fat }-derived case.
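The determination of the scaling radius can be sketched in Python as below, where the logarithmic slope of a toy rotation curve is interpolated onto the value 0.3; if the slope never drops to 0.3 the outermost point is used, as done for J0926+3343.
\begin{verbatim}
import numpy as np

def scaling_radius(r, v, target=0.3):
    # R_0.3 and V_0.3: radius and velocity where dlogV/dlogR drops to the target value
    logr, logv = np.log10(r), np.log10(v)
    slope = np.gradient(logv, logr)
    below = np.where(slope <= target)[0]
    if len(below) == 0:
        return r[-1], v[-1]                      # fall back to the outermost point
    i = max(below[0], 1)
    f = (slope[i - 1] - target) / (slope[i - 1] - slope[i])
    r03 = r[i - 1] + f * (r[i] - r[i - 1])
    return r03, np.interp(r03, r, v)

r = np.linspace(0.3, 5.0, 20)
v = 55.0 * r / np.sqrt(1.0 + r**2)
R03, V03 = scaling_radius(r, v)                  # used to scale the radii and densities
\end{verbatim}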
\subsection{Further comparison of the 2-D and 3-D approaches}
The difference between the rotation curves derived with the 2-D and 3-D methods could, in principle, arise from {\sc fat } incorrectly modelling the data cube. To test this, we derive moment maps from the best fit model data cube produced by {\sc fat } and derive rotation curves as well as surface brightness profiles from them using the 2-D routines in {\sc gipsy}. We show the rotation curves as well as the surface brightness profiles for all the galaxies in Fig. \ref{fig:fatmodel1}, \ref{fig:fatmodel2}, and \ref{fig:fatmodel3}. For each of these quantities 3 sets of curves are shown: the curves derived by {\sc fat } (red squares) and {\sc gipsy } (black circles) when run on the original observed data, as well as the curves produced by the {\sc gipsy } tasks when run on the model produced by {\sc fat } (green diamonds). We find that in most cases the rotation curves produced by the {\sc gipsy } routines run on the original data very closely match those produced when the same routines are run on the {\sc fat } model, especially in the inner regions. This indicates that the {\sc fat } model is a good fit to the observed data, and that the differences that we see in the curves arise because of the systematic differences between the 2-D and 3-D approaches. There are significant differences in both the rotation curves and the surface brightness profiles. The differences in the surface brightness profiles would lead to differences in the estimated contribution of the mass of the gas disc to the total rotation velocity, as well as to differences in the estimated pressure support correction. We confirm that the differences in the pressure support correction are small in the inner regions (i.e. the regions that we use to determine the inner slope of the density profile). We remind the reader that we compare our results for the minimum disc approximation with results from the literature that also use the minimum disc approximation. As such, this comparison is unaffected by the assumed surface density profile.
\subsection{Comparison with Gauss-Hermite velocity fields}
\label{ssec:gauss-hermite}
We also derive the velocity fields using Gauss-Hermite fits to the individual spectra, using the task `XGAUFIT' in {\sc gipsy}. The Gauss-Hermite polynomial adds an $h_{3}$ (skewness) term to the velocity profiles, and hence takes into account asymmetries in the profiles. We could not derive a reliable Hermite velocity field for the galaxy J0929+1155, the galaxy with the lowest signal to noise. The difference in slopes between the Hermite velocity field and the first moment map is significant for the galaxy UGC5288, which is a barred galaxy. As before, we used the velocity fields to calculate the rotation curves using the task ROTCUR; these were then used to calculate the logarithmic slope of the density profile in exactly the same manner as detailed above. These slopes are listed in Table \ref{table:alpha}, and the logarithmic slopes derived using the three different estimates of the rotation curve (i.e. from the moment method and ROTCUR, the Gauss-Hermite fit and ROTCUR, and {\sc fat}) are shown in Fig.~\ref{fig:slope-compare}. As can be seen, the slopes derived with the Hermite velocity fields are somewhat steeper than the slopes derived with the first moment maps, but not as steep as those from the {\sc fat}-derived curves. The average slope obtained using the Gauss-Hermite fits is -0.71 $\pm$ 0.33. If we exclude the galaxy J0929+1155, the average slopes obtained using the moment maps and {\sc fat } are -0.56 $\pm$ 0.25 and -1.41 $\pm$ 0.21 respectively.
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{alpha_all.pdf}
\caption{The value of the inner slope ($\alpha$) of the dark matter density distribution versus the radius of the innermost point. The slopes derived with the {\sc fat } rotation curves are shown by blue circles. The green and red circles indicate the slopes derived from the Gauss-Hermite fits (and ROTCUR) and from the moment method (and ROTCUR) respectively.}
\label{fig:slope-compare}
\end{figure}%
\subsection{Comparison with simulated galaxies}
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{v_circ_max.pdf}
\caption{Circular velocity at r = 2 kpc versus the maximum circular velocity, V$_{max}$, for 8 void dwarf galaxies. The black solid line represents the correlation expected for the NFW halo.}
\label{fig:vcirc}
\end{figure}
The contribution of systematic effects in observational studies to the cusp-core problem was investigated by \citet{pineda17} using hydrodynamical simulations of dwarf galaxies. They mimic realistic kinematic observations and fit mock rotation curves. Their model galaxies also suggest that the minimum disc approximation would cause one to infer a DM profile that is flatter than it actually is, which is in contrast with the widely accepted claim. Our results are in agreement with this: we find a flatter inner density slope with the minimum disc assumption ($\alpha$ = -1.39 $\pm$ 0.19) than for the dark matter only profile, where we find a relatively steeper slope ($\alpha$ = -1.48 $\pm$ 0.23), although the two values agree within the error bars.
They also find that it is extremely challenging to fully correct for the pressure support even with the data available from the highest quality cusp-core studies and that even small errors of a few km s$^{-1}$ can cause dark matter cusps to be disguised as cores. In our particular case, we note that the pressure correction terms are small in the central part of the galaxy, and different assumptions do not make a significant difference to the final rotation curve (see \S \ref{pressure}).
\citet{oman15} find that the `cusp-core problem' is better characterized as an `inner mass deficit' problem, where they compare the inner circular velocities of observed galaxies with those of $\Lambda$CDM galaxies of the same maximum velocity (V$_{\rm max}$). Following their prescription, in Fig.~\ref{fig:vcirc} we plot the circular velocity at 2 kpc against the maximum measured rotation speed, using the rotation curves derived with {\sc fat }. We interpolate linearly between nearby data points to get the velocity at exactly 2 kpc. The black solid line represents the correlation expected for the NFW halo. The dwarf galaxies in our sample are in good agreement with NFW haloes.
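The interpolation itself is a one-line operation; a minimal sketch with a toy rotation curve is:
\begin{verbatim}
import numpy as np

r = np.linspace(0.3, 5.0, 20)                    # toy FAT-derived rotation curve (kpc)
v = 55.0 * r / np.sqrt(1.0 + r**2)               # km/s

v_2kpc = np.interp(2.0, r, v)                    # circular velocity at exactly 2 kpc
v_max  = np.max(v)                               # maximum measured rotation speed
\end{verbatim}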
\citet{oman17} measure H{\sc i} rotation curves using the $^{\rm 3D}${\small BAROLO} tilted ring modelling tool for galaxies from the APOSTLE $\Lambda$CDM hydrodynamical simulations. Their results suggest that non-circular motions in the gas lead to an underestimate of the circular velocities in the central regions, which can be misinterpreted as evidence for cores in the dark matter. They suggest that the failure of tilted ring models when applied to galaxies with non-negligible non-circular motions could be a possible resolution of the cusp-core problem. In our case, we find no clear evidence for non-circular motions, except for UGC~5288, where there appears to be a strong bar, and we indeed find that the NFW halo is not a good fit for this galaxy.
\subsection{Environmental dependence of DM halo properties}
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{c_v200.pdf}
\includegraphics[width=1.0\linewidth]{c_v200_gipsy.pdf}
\caption{The concentration parameter versus V$_{200}$ in km s$^{-1}$. Upper panel: the parameters for the galaxies in our sample derived using the rotation curves from {\sc fat } (3-D approach). Lower panel: the parameters for the galaxies in our sample derived using the rotation curves from ROTCUR (2-D approach). }
\label{fig:cv200}
\end{figure}%
One of the aims of this study was to check whether the void galaxies have different DM halo properties compared to galaxies outside voids. Figure \ref{fig:cv200} shows the concentration parameter versus V$_{200}$ (km s$^{-1}$). The black circles in the upper and lower panels represent the NFW halo parameters derived for the void dwarf galaxies using {\sc fat } and ROTCUR respectively. We were able to obtain physical values for the concentration parameters only for 6 rotation curves derived with {\sc fat } and 4 rotation curves derived with ROTCUR. The magenta squares \citep{oh11a}, red diamonds \citep{oh15}, and blue triangles \citep{deBlok01} represent the NFW halo parameters of galaxies outside voids.
As can be seen from the plot, the data for the void dwarfs from our sample and the other dwarfs from the literature overlap. We perform a 2-D two-sample Kolmogorov-Smirnov (KS) test to compare quantitatively the distributions of dwarf galaxies in voids and dwarf galaxies in average density regions. We do not include the \citet{deBlok01} sample in the comparison as their galaxies lie outside the rotational velocity range of our galaxies. Since four galaxies (DDO43, DDO46, DDO52, and F564-V3) from the comparison sample happen to reside in voids \citep{pustilnik18}, we include them in the void galaxy sample. The test gives a probability of 0.11 for the void galaxies and the galaxies from average density regions being drawn from the same distribution. This p-value indicates that there is no clear statistical evidence for the two samples (i.e. dwarfs from voids and from average density regions) being drawn from different populations. However, a larger sample is required to draw stronger conclusions. It would also be interesting to compare against a sample of dwarfs specifically chosen to lie in regions of higher than average density.
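The exact implementation of the 2-D KS test is not spelt out here; one simple quadrant-based variant (in the spirit of Fasano \& Franceschini), with the significance estimated by a permutation test rather than an analytic formula, is sketched below. The two arrays would hold the (c, V$_{200}$) pairs of the void and comparison samples.
\begin{verbatim}
import numpy as np

def ks2d_statistic(a, b):
    # maximum difference between the fractions of the two samples falling in the
    # four quadrants around any data point (a quadrant-based 2-D KS statistic)
    d = 0.0
    for x, y in np.vstack([a, b]):
        for sx in (+1, -1):
            for sy in (+1, -1):
                fa = np.mean((sx * (a[:, 0] - x) > 0) & (sy * (a[:, 1] - y) > 0))
                fb = np.mean((sx * (b[:, 0] - x) > 0) & (sy * (b[:, 1] - y) > 0))
                d = max(d, abs(fa - fb))
    return d

def ks2d_pvalue(a, b, n_perm=2000, seed=0):
    # permutation estimate of the probability of a statistic at least as large
    # as the observed one if both samples share the same parent distribution
    rng = np.random.default_rng(seed)
    d_obs, pooled, n_a = ks2d_statistic(a, b), np.vstack([a, b]), len(a)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        if ks2d_statistic(pooled[idx[:n_a]], pooled[idx[n_a:]]) >= d_obs:
            count += 1
    return (count + 1) / (n_perm + 1)
\end{verbatim}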
\section{Summary and Conclusions}
\label{summary}
We have derived rotation curves for 8 galaxies that lie in the Lynx-Cancer void using 3-D and 2-D tilted ring fitting routines. We construct mass models and find that both the isothermal and NFW haloes are a good description of the dark matter distribution of these galaxies in terms of fit quality (i.e. $\chi^{2}_{red}$). We convert the rotation curves derived using the 2-D and 3-D approaches into density profiles. This allows us to examine the central dark matter density profiles and to distinguish between cores and cusps at the centres of the galaxies. We find that the dark matter halo density profiles derived using the 3-D approach are consistent with the NFW profile and that the measured inner slopes ($\alpha$ = -1.39 $\pm$ 0.19) are steeper than the values of the slopes from the literature ($\alpha$ = -0.2 $\pm$ 0.2 in \citet{deBlok01}, $\alpha$ = -0.29 $\pm$ 0.07 in \citet{oh11} and $\alpha$ = -0.42 $\pm$ 0.21 in \citet{oh15}). Since the mass models for the galaxies in the literature were constructed using 2-D approaches, we also use the rotation curves derived using the 2-D approach to estimate the inner density slope ($\alpha$). The values of $\alpha \sim -0.5$ to $-0.7$ that we obtain using the 2-D approach are consistent with the slopes obtained in the literature. This suggests that the fundamental differences between the 3-D and 2-D tilted ring fitting routines affect the slope of the central dark matter density profiles. Since our sample size is modest, it is important to check these results using larger samples.
\section*{Acknowledgements}
This paper is based in part on observations taken with the GMRT. We thank the staff of the
GMRT who made these observations possible. The GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research. PK is partially supported by BMBF project 05A17PC2 for D-MeerKAT. The work of SAP on this project was supported by RSCF grant No. 14-12-00965.
\section{Introduction}
The pure (monomial) $p$-spin disordered spherical model is a solvable classical system that has been the focus of intense study since it appeared in the literature in the early 90s. Its static~\cite{Crisanti1992}, metastable~\cite{Cavagna1998,Kurchan1993} and
dynamic~\cite{Cugliandolo1993} properties can be obtained,
in the thermodynamic limit, with analytic methods (namely, the replica trick, the Thouless-Anderson-Palmer approach
and the Schwinger-Dyson equations coupling linear response and correlation functions). A rather complete and consistent picture emerges from these studies.
In particular, the model realises the random first order phase transition scenario
and, for this reason, it is accepted as the simplest model for fragile glass physics~\cite{Berthier2011,Bouchaud1996,Cavagna2009,Cugliandolo2002}.
An easy but intriguing generalisation consists in adding two different pure spherical models, with potential energies involving interactions between different numbers of spins and, for concreteness, both strictly larger than two. This construction yields a mixed $p$-spin, still spherical, disordered model. One reason for being interested in these generalisations is that, in the glassy context, they extend the mode-coupling approach developed to describe the dynamics above the dynamic critical temperature $T_d$ and capture richer relaxations of the correlation functions~\cite{Berthier2011,Bouchaud1996,Cavagna2009,Cugliandolo2002}. Another reason is that, in mappings between optimisation problems and disordered spin systems, models with several $p$-spin terms naturally arise~\cite{Monasson1997}. Finally, one can simply be interested in the behaviour of such an extended Hamiltonian.
Standard knowledge on the metastability of the spherical $p$-spin models suggests that the behaviour of the mixed case should be different from the one of the monomial model in many respects. Indeed, a simple and very convenient property of the equilibrium and metastable states of the pure model is lost. In the monomial model, due to the homogeneity of the Hamiltonian, the states can be followed in temperature until the spinodal at which they disappear, without crossing, merging or dividing~\cite{Kurchan1993}. In other words, there is no chaos in temperature. In particular, the states that dominate the equilibrium properties are the same in the whole low temperature phase~\cite{Kurchan1993,Barrat,Barrat1997}. This simple structure is lost in the mixed case.
The static properties of the mixed model, derived with the replica method, remain very similar to the ones of the pure model: at a critical temperature $T_s$ the replica symmetry is broken into a one-step replica symmetry breaking form signaling the equilibrium transition from the disordered paramagnetic phase to the low-temperature glassy one~\cite{Crisanti2006}. The relaxation dynamics from totally random initial conditions, mimicking equilibrium at infinite temperature, indicate the existence of a dynamic transition at a higher temperature $T_d$, below which the evolution is forced to remain out of equilibrium in the infinite system size limit~\cite{Cugliandolo1996}, taking place on a threshold level, at higher (free) energy density than the equilibrium one.
However, in line with chaotic structures, the early works on the mixed model~\cite{Barrat1997} already showed peculiar behaviour. In particular, the dynamics of initial states in equilibrium at $T'\in [T_s, T_d]$ quenched at very low temperatures $T<T_{\rm RSB}(T')$ showed ageing phenomena at energy levels below the (flat) threshold level that attracts the relaxation of initial states at $T'\gg T_d$. Several later papers improved the analysis and confirmed the result just described~\cite{Capone2006,Sun2012,Dembo2019}. In particular, in Ref.~\cite{Folena2019}, equilibrium initial conditions at $T'\in [T_d,T_{\rm onset}]$ were considered and, very surprisingly, memory of the initial conditions after quenches to very low temperatures was observed in the numerical solutions of the Schwinger-Dyson equations. More details on this and other peculiar features of the metastable and dynamic properties of the mixed model are given in the Background Subsection~\ref{subsec:background}.
In this paper we first show that chaos in temperature, in a sense that we will make precise later, is present in the low temperature regime $T<T_d$ (and not only below $T_s$) in the mixed model. We then introduce and study a constrained free energy density function, of Thouless-Anderson-Palmer (TAP) \cite{Thouless1977} type but with new conditions, that allows one to identify a possible origin of the differences in the quenched dynamic behaviour of the mixed and pure spherical models. We also set the stage for a generalisation of the dynamic approach to follow the evolution of equilibrium initial conditions in more detail than done so far.
The paper is structured as follows. In Section~\ref{sec:model} we present the model and we recall how its stochastic dynamics are described via a Langevin process. Section~\ref{sec: temperature chaos} focuses on temperature chaos captured by the Franz-Parisi (FP) potential and the TAP free energy. In Sec.~\ref{sec: A TAP-like approach to quenches} we introduce the constrained TAP free energy approach. Sections~\ref{sec:Application to the mixed $p$-spin model} and~\ref{se: an exact approach for the constrained free energy} are devoted to the derivation of our results and their discussion, also in connection with predictions from the use of the FP potential. A concluding Section closes the paper. In four appendices we present some properties of the unconstrained TAP free energy landscape and we provide details on the derivation of the constrained one.
\section{The model}
\label{sec:model}
In this Section we introduce the model
and we recall some of its most relevant properties.
\subsection{Definitions}
\label{subsec:definitions}
We study a disordered spherical spin model with Hamiltonian equal to the
sum of two $p$-spin terms:
\begin{eqnarray}
H_J[\{s_i\}]
&=&H_{p_1}[\{s_i\}]+H_{p_2}[\{s_i\}]=-\sum_{i_1<\dots<i_{p_1}}J_{i_1\dots i_{p_1}} s_{i_1}\dots s_{i_{p_1}}-\sum_{i_1<\dots<i_{p_2}}J_{i_1 \dots i_{p_2}}s_{i_1}\dots s_{i_{p_2}}
\; .
\label{eq:Hamiltonian}
\end{eqnarray}
The $i=1,\dots,N$ spin variables are real and continuous, $-\infty < s_i < \infty$,
and they are globally constrained to satisfy
\begin{equation}
\frac{1}{N} \sum_{i=1}^N s^2_i = 1
\; .
\label{eq:spherical-constraint}
\end{equation}
Quenched disorder is introduced by the interaction constants
$J_{i_1 \dots i_{p_\ell}}$ that are {\it independent} random variables
taken from two Gaussian distributions with mean and variance
\begin{equation}
{\rm I\!E} [J_{i_1 \dots i_{p_\ell}}]=0 \qquad\mathrm{and}\qquad {\rm I\!E} [{J^2_{i_1 \dots i_{p_\ell}}}]=\frac{{J_{p_\ell}^2}p_\ell!}{2{N}^{p_\ell-1}}
\end{equation}
where $J_{p_\ell}>0$ and $\ell=1,2$.
The random exchanges induce correlations between the Hamiltonian (\ref{eq:Hamiltonian}) evaluated on two different spin configurations
$\{s_i\}$ and $\{s'_i\}$. Defining their overlap
\begin{equation}
C_{ss'} \equiv \frac{1}{N} \sum_{i=1}^N s_i s'_i
\end{equation}
one has
\begin{eqnarray}
\nu(C_{ss'})
\equiv
{\rm I\!E}[H_J[\{s_i\}]H_J[\{s'_i\}]]
=
\frac{J_{p_1}^2}{2} C_{ss'}^{p_1}
+
\frac{J_{p_2}^2}{2} C_{ss'}^{p_2}
\; .
\label{eq:random-pot-corr}
\end{eqnarray}
For later convenience, we denote this expectation value by $\nu$.
The stochastic dynamics are governed by overdamped Langevin equations
\begin{equation}
\eta \frac{d s_i(t)}{dt} =
- \frac{\delta H_J[\{s_j(t)\}]}{\delta s_i(t)} - \mu(t) s_i(t) + \xi_i(t)
\; ,
\label{eq:Langevin}
\end{equation}
with $\mu(t)$ a Lagrange multiplier that imposes the spherical constraint (\ref{eq:spherical-constraint}) all along the evolution, and $\xi_i(t)$
a time dependent Gaussian random force with zero mean and delta-correlations:
\begin{equation}
\langle \xi_i(t) \rangle = 0
\; ,
\quad
\langle \xi_i(t) \xi_j(t')\rangle = 2 \eta k_BT \delta(t-t') \delta_{ij}
\; .
\end{equation}
The evolution starts from initial conditions
$\{s_i(0)\}$ that are chosen with different criteria. The ones most commonly used are in equilibrium at temperature $T'$. In the infinite temperature limit, $T'\to\infty$, their statistics are mimicked with a flat probability distribution. At finite temperature, $T'<+\infty$, the disorder-dependent Gibbs-Boltzmann weight at $T'$ is used to sample $\{s_i(0)\}$.
In the thermodynamic limit, the correlation function $C(t,t')$ and the linear response function $R(t,t')$ are the main observables that describe the dynamics of the system. The first one is the overlap of a spin $s_i$ taken at two times $t$ and $t'$ strictly larger than the initial one, which hereafter we set to zero, averaged over the thermal noise and the initial conditions (denoted with angular brackets), over the disorder (indicated with ${\rm I\!E}$), and over the whole system:
\begin{equation}
C(t,t')=\frac{1}{N}\sum_i {\rm I\!E}\Big[\langle s_i (t) s_i (t') \rangle\Big]
\; .
\label{eq:Cttp}
\end{equation}
We choose to distinguish this `late times' function from the correlation between the initial configuration $\{s_i(t=0)\}$ and the configuration at a later time $\{s_i(t>0)\}$:
\begin{equation}
C(t,0)=\frac{1}{N}\sum_i {\rm I\!E}\Big[\langle s_i (t) s_i (0) \rangle\Big]
\; .
\label{eq:Ct0}
\end{equation}
The response function is calculated from the difference between the average evolution of a given spin $s_i$ in the presence of a magnetic field $h_i(t)$, which shifts the forces as $-\delta_{s_i(t)} H[\{ s_j(t)\}] \rightarrow -\delta_{s_i(t)} H[\{ s_j(t)\}]+h_i(t)$, and the free evolution. More formally, the linear response function can be written as
\begin{equation}
R(t,t')=\frac{1}{N}\sum_i {\rm I\!E}\Big[ \delta_{h_i(t')} \langle s_i (t)\rangle_h \Big]\Bigr|_{h_i(t)=0} \;.
\end{equation}
In the following we set the units such that $\eta=k_B=1$.
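For concreteness, a minimal numerical sketch of these dynamics for a very small system (Euler discretisation, $p_1=3$, $p_2=4$, and the spherical constraint enforced by a projection after each step instead of the exact Lagrange multiplier $\mu(t)$) could look as follows; it is meant only as an illustration of Eq.~(\ref{eq:Langevin}), not as a means to reach the thermodynamic limit.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, T, dt, n_steps = 32, 0.5, 1.0e-2, 2000
J3, J4 = 1.0, 1.0

# Unsymmetrised Gaussian couplings; with these variances the correlation of the
# random potential matches the ordered-sum Hamiltonian up to 1/N corrections.
A3 = rng.normal(0.0, np.sqrt(J3**2 / (2.0 * N**2)), size=(N, N, N))
A4 = rng.normal(0.0, np.sqrt(J4**2 / (2.0 * N**3)), size=(N, N, N, N))

def grad_H(s):
    # dH/ds_i for H = - sum A3_{jkl} s_j s_k s_l - sum A4_{jklm} s_j s_k s_l s_m
    g3 = (np.einsum('ijk,j,k->i', A3, s, s) +
          np.einsum('jik,j,k->i', A3, s, s) +
          np.einsum('jki,j,k->i', A3, s, s))
    g4 = (np.einsum('ijkl,j,k,l->i', A4, s, s, s) +
          np.einsum('jikl,j,k,l->i', A4, s, s, s) +
          np.einsum('jkil,j,k,l->i', A4, s, s, s) +
          np.einsum('jkli,j,k,l->i', A4, s, s, s))
    return -(g3 + g4)

s0 = rng.normal(size=N)
s0 *= np.sqrt(N) / np.linalg.norm(s0)          # random ("infinite T") initial condition
s, C_t0 = s0.copy(), []
for t in range(n_steps):
    noise = rng.normal(size=N) * np.sqrt(2.0 * T * dt)
    s = s - dt * grad_H(s) + noise             # overdamped Langevin step (eta = kB = 1)
    s *= np.sqrt(N) / np.linalg.norm(s)        # project back onto the sphere
    C_t0.append(s @ s0 / N)                    # overlap with the initial condition
                                               # (single noise/disorder realisation)
\end{verbatim}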
It is known that this model has different static, metastable and dynamic behaviour depending on whether one of the two $p$ parameters takes the value $2$ or not, see Ref.~\cite{Crisanti2006} for details. In the following study we will choose the convention $p_1<p_2$
and we will focus on $2<p_1$.
\subsection{Background}
\label{subsec:background}
As we have already written in the Introduction, these models have an equilibrium phase transition at a temperature $T_s$ determined from, for example, the analysis of the symmetry breaking properties in the replica calculation of the thermodynamic free energy. The replica structure goes from being symmetric above $T_s$ to one-step replica symmetry breaking (RSB) below it, indicating the presence of a glassy equilibrium phase at low temperatures. The transition is discontinuous in the sense that the order parameter jumps, but it is of second order thermodynamically. There is no Gardner temperature below which a full RSB solution would be needed in this model. The equilibrium phase diagram is discussed in detail in App. C of Ref.~\cite{Crisanti2006}.
The relaxation dynamics from random, infinite temperature, initial conditions cannot equilibrate below a temperature $T_d$ ($>T_s$). Still, the correlation function with the initial condition and the two-time correlation at widely separated times both approach zero, when the times are taken to diverge after the thermodynamic limit, in the whole post-quench temperature range. Below $T_d$, this complete decorrelation gives rise to the so-called {\it weak ergodicity breaking} scenario~\cite{Bouchaud1992,Cugliandolo1995}. The relaxation approaches a flat region of phase space named the threshold~\cite{Cugliandolo1993}. The dynamic transition at $T_d$ is also discontinuous. These conclusions can be extracted from Ref.~\cite{Cugliandolo1996}, since the mixed model is a special case of the ones studied in this reference. The dynamic transition line can also be found with a replica study in which marginality is imposed; see App. C in Ref.~\cite{Crisanti2006} for the development of this approach and Fig. 17 in the same reference for the phase diagram of the mixed model with $p_1=3$ and $p_2=4$.
The static phases and phase transitions, and the dynamic properties after quenches from infinite temperature, just described are in complete analogy with the ones of the pure $p$-spin spherical model.
Thouless, Anderson \& Palmer (TAP)~\cite{Thouless1977} introduced a formalism that allows one to define and investigate a free energy landscape that is a function of all relevant order parameters and thus access metastable states of all kinds. This approach extends Landau's to disordered systems.
For the disordered spin models we are dealing with, the order parameters are the local magnetisations, $\langle s_i\rangle = m_i$, and they are order $N$ in number. In pure $p$-spin models, the TAP free energy landscape is complex but relatively simple at the same time~\cite{Kurchan1993,Crisanti1995}. It starts having a complex structure, with stationary points that are associated to metastable states, at temperatures that are well above $T_s$. But these states are organised in such a way that they neither cross, merge nor bifurcate; therefore, once one of them is identified at, for example, zero temperature, it can be followed in temperature until it disappears at its spinodal.
Pure $p$-spin models have, in a finite window of temperatures above $T_s$, an exponentially large number of non-trivial states with, e.g., different and non zero local magnetisations, $\{m_i^{\alpha}\}$ with $\alpha$ the state identification,
that combine to yield paramagnetic global properties. De Dominicis and Young~\cite{deDominicis1983}
showed that proper equilibrium averages can be recovered from the average of the value of the selected observable, $O$, in each of these states, $O(\{m_i^\alpha\})$, weighted with a Boltzmann probability factor $e^{-\beta N f(\{m_i^\alpha\})}/Z$, and summed over all $\{m_i^\alpha\}$. In the development of this calculation the number of metastable states with the same TAP free energy
density, ${\cal N}(f,T)$, plays a crucial role. Indeed, the sum over $\{m_i^\alpha\}$ is transformed into an integral over $f$, and the complexity or configurational entropy, that is to say, the logarithm of their number, $\Sigma(f,T) = \ln {\cal N}(f,T)$, intervenes in the statistical weight that is modified, in the continuum limit, to be $\exp[-\beta N (f-T \Sigma(f,T))]$.
This nice structure is partly due to the homogeneity of the monomial potential of the pure models and it is partially lost in the mixed problems, that present {\it temperature chaos}~\cite{Rizzo2006}.
Barrat {\it et al.}~\cite{Barrat1997} calculated the Franz-Parisi (FP) effective potential~\cite{Franz1995} as an alternative way to observe the bifurcation of metastable states in the mixed model.
The FP potential is the Legendre transform of the free energy of the system under a local field proportional to
a particular equilibrium configuration at a chosen temperature $T'$. The dependence on the strength of the
local field, say $\epsilon$, is exchanged, under the Legendre transform,
into a dependence on the overlap between the reference configuration
and the ones at the working temperature. The FP potential is, therefore, the free energy cost to keep a
system in equilibrium at temperature $T$ at a fixed overlap with a generic equilibrium
configuration at another temperature $T'$. The need to break replica symmetry in the mixed model to calculate this potential
below another characteristic temperature $0<T_{\rm RSB}(T')<T'$
was interpreted as a signature of the multifurcation of metastable states below this same temperature.
In the same paper, Barrat {\it et al.}~\cite{Barrat1997} derived and performed a first study of the Schwinger-Dyson dynamic equations for the disorder averaged
model quenched from equilibrium at a temperature $T_s< T'<T_d$ to a lower temperature $T<T'$. The average over the equilibrium initial conditions was dealt with using the replica trick, as pioneered in Ref.~\cite{Houghton83} and, since
$T'>T_s$, no replica symmetry breaking was used. Nevertheless, in this range of temperatures, a complex TAP free energy landscape already exists (as discussed in the third paragraph in this Section). Therefore, the initial configurations drawn with the Gibbs-Boltzmann measure are interpreted as being within one non-trivial TAP state with non-zero values of the local magnetisations $\{m_i^\alpha \neq 0\}$ that, however, are averaged over in this calculation and are not individually accessed. The authors showed that above the
temperature $T_{\rm RSB}(T')$ the dynamics occur as in equilibrium and the correlation with the initial condition does not approach zero but a value consistent with the state following interpretation. However, below $T_{\rm RSB}(T')$ these solutions no longer exist and
the authors conjectured that the dynamics age
forever with the very unusual feature of keeping a memory of the initial condition, via a non-zero asymptotic
value of the correlation function
$C(t,0)$
({\it strong ergodicity breaking}). The picture developed in this paper was later
confirmed in~\cite{Sun2012} where a planting procedure was used to
generate the initial conditions and the adiabatic state following method~\cite{Krzakala2010} was applied.
Next, Capone {\it et al.}~\cite{Capone2006} studied the Schwinger-Dyson equations for equilibrium initial conditions (also imposed with a replica calculation) in more detail
than done in Ref.~\cite{Barrat1997}. On the one
hand, they confirmed the results of Barrat {\it et al.}~\cite{Barrat1997}, with the usual restrained equilibrium state following above $T_{\rm RSB}(T')$ and ageing below this temperature
taking place in a marginal manifold (supposedly the one in which the initial state opens up) that lies {\it below} the threshold one approached with quenches from $T' \rightarrow +\infty$. However, they also realised that the asymptotic equations derived with an ageing {\it Ansatz}, which fix, for example, the various long-time limit values of the correlation, {\it do not have a solution} below another characteristic temperature $T_c(T')<T_{\rm RSB}(T')$. (These equations are the same ones that fix the parameters $q_o$, $q$ and $x$ in the 1RSB calculation of the
FP potential and, therefore, do not have a solution either below $T_c(T')$.) The authors complemented the dynamic analysis with
a static one in which they calculated a {\it constrained complexity}, defined as the number of states at temperature $T$ with given free energy density and
overlap with {\it all} reference equilibrium states at $T'$. The temperature
$T_{\rm RSB}(T')$ was then associated with the one at which this constrained complexity vanishes.
Several new features of the quench dynamics of the mixed model have recently been shown with a numerical integration of the Schwinger-Dyson equations ~\cite{Folena2019}. The authors identified a temperature $T_{\rm onset}$, higher than the usual dynamical temperature $T_d$, below which the system memorises the initial condition
when instantaneously quenched to a sufficiently low temperature.
They have also shown that the system can go through an ageing regime where the description used for the pure $p$-spin case fails.
In fact, the marginal states reached through this ageing dynamics have a non-zero overlap with the initial condition, and the usual analytical {\it Ansatz} with weak long term memory and weak ergodicity breaking features used to describe ageing regimes~\cite{Cugliandolo1993} does not fit the simulations because of this fact.
With the aim of clarifying the origin of the unexpected behavior found in the references cited above~\cite{Barrat1997,Capone2006,Sun2012,Folena2019}, we here revisit the TAP approach by using new constraints, \`a la FP. The idea is to keep track of the individual TAP states that contribute to the equilibrium measure at $T'$. These states, which we identify with the ones in which the initial conditions lie, should, we claim, have a distinctive dynamic evolution.
In order to clarify the following discussions and to set orders of magnitude, the values of $T_s$ and $T_d$ for $p_1=3$, $p_2=4$ and $J_{p_1}=J_{p_2}=1$ are
\begin{equation}
\label{eq: temp. charac.}
T_s\approx 0.762 \quad \text{and}\quad T_d\approx 0.805\; .
\end{equation}
We recall that we consider $T' \in [T_s,T_d]$ and $T_{\rm RSB}(T')$ is a function of $T'$ that varies from $0$ to $T_d$.
\section{Temperature chaos}
\label{sec: temperature chaos}
Let us consider, as in Refs.~\cite{Barrat1997,Capone2006,Folena2019}, an
equilibrated system at $T'$, described by the Gibbs-Boltzmann distribution $P[\{s_i\}]\propto \exp\big(-\beta' H_J[\{s_i\}]\big)$.
The unrestrained TAP analysis shows that the equilibrium measure in the temperature window $T'\in [T_s,T_d]$
is dominated by an ensemble of non-trivial TAP states with $\{m_i \neq 0\}$, in the sense
that these are the states that carry the largest weight in the measure $Z^{-1}(\beta') \,
\exp[-\beta' N(f-T' \Sigma(f,T'))]$
with $\Sigma(f,T')$ the complexity calculated in Refs.~\cite{Rieger1992,Rizzo2006}.
TAP states are fully parametrised by their overlap $q = N^{-1} \sum_i m_i^2$ and the adimensional energy densities $\varepsilon_{p_1}= N^{-1} J_{p_1}^{-1} q^{-p_1/2} H_{p_1}[\{m_i\}]$
and $\varepsilon_{p_2}=N^{-1}J_{p_2}^{-1} q^{-p_2/2} H_{p_2}[\{m_i\}]$, see Eq.~(\ref{eq:fTAP}) and App.~\ref{sec:app_TAP}.
Therefore, at each temperature $T'$,
the measure above is dominated by TAP states with optimised values of $q_{eq}$,
$\varepsilon_{p_1, eq}$ and $\varepsilon_{p_2, eq}$
given by
\begin{eqnarray}
\label{eq: equilibrium}
\frac{q_{eq}}{1-q_{eq}} &=&
\frac{p_1\beta'^2 J_{p_1}^2}{2} q_{eq}^{p_1-1}+\frac{p_2 \beta'^2 J_{p_2}^2}{2} q_{eq}^{p_2-1}
\; ,
\\
\varepsilon_{p_\ell,eq}&=&-\frac{\beta' J_{p_\ell}}{2}\Big[p_\ell q_{eq}^{\frac{p_\ell}{2}-1}-(p_\ell-1)
q_{eq}^{\frac{p_\ell}{2}} \Big]
\qquad \mbox{for} \quad \ell=1,2
\; .
\end{eqnarray}
This is similar to what happens in the pure model, though with an extra `parameter' $\varepsilon_{p_2,eq}$. In the following we will consider that a TAP state is ``followed'' after a change in temperature whenever the temperature change induces a simple homothetic transformation (a global rescaling or homogeneous dilation) of the magnetisation configuration, in other words, whenever the magnetisations of the state at the new temperature are simply $m_i \rightarrow \alpha \, m_i$. Whenever this property is not verified, it is straightforward to see that $\varepsilon_{p_1,eq}$ and $\varepsilon_{p_2,eq}$ change with temperature; in this case we will talk about chaotic behavior.
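For concreteness, the equilibrium condition above is easy to solve numerically. The following is a minimal sketch of such a computation (it assumes Python with numpy and scipy, which are of course not part of this paper); the temperature and the root brackets are illustrative choices, and selecting the physically relevant branch among the non-trivial roots requires comparing the corresponding free energies, which is not done here.
\begin{verbatim}
# Minimal sketch: non-trivial solutions of Eq. (eq: equilibrium) for the
# mixed (3+4)-spin model with J_{p1} = J_{p2} = 1.  Assumes numpy and scipy.
import numpy as np
from scipy.optimize import brentq

p1, p2, J1, J2 = 3, 4, 1.0, 1.0

def residual(q, beta):
    """q/(1-q) - (p1 b^2 J1^2/2) q^(p1-1) - (p2 b^2 J2^2/2) q^(p2-1)."""
    return q / (1.0 - q) - 0.5 * beta**2 * (p1 * J1**2 * q**(p1 - 1)
                                            + p2 * J2**2 * q**(p2 - 1))

def nontrivial_roots(beta, n_grid=2000):
    """Bracket every sign change of the residual on a grid in (0, 1)."""
    qs = np.linspace(1e-3, 1.0 - 1e-3, n_grid)
    res = residual(qs, beta)
    roots = []
    for a, b, ra, rb in zip(qs[:-1], qs[1:], res[:-1], res[1:]):
        if ra * rb < 0.0:
            roots.append(brentq(residual, a, b, args=(beta,)))
    return roots

def eps_eq(q, beta, p, J):
    """Adimensional energy density, second line of Eq. (eq: equilibrium)."""
    return -0.5 * beta * J * (p * q**(p / 2 - 1) - (p - 1) * q**(p / 2))

Tp = 0.78                    # illustrative T' in [T_s, T_d] ~ [0.762, 0.805]
beta_p = 1.0 / Tp
for q in nontrivial_roots(beta_p):
    print(q, eps_eq(q, beta_p, p1, J1), eps_eq(q, beta_p, p2, J2))
\end{verbatim}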
The FP potential is well adapted to study the equilibrium
behaviour of a system at temperature $T$, constrained to have a given overlap with itself when in equilibrium at another temperature $T'$. Moreover,
the results for the relevant parameters found with the FP match the asymptotic overlaps $q_o$, $q$ and the energy density derived with a dynamic approach in which the system is initialised in equilibrium at $T'$ and evolved at a different temperature $T$.
In fact, both FP and dynamics approaches
yield parameters $q_o$ and $q$ determined by the set of equations
\begin{eqnarray}
\label{eq: dynamics (1)}
q_o^2 &=&
q-(1-q)^2 \Big[\frac{p_1 J_{p_1}^2\beta^2}{2}q^{p_1-1}+\frac{p_2 J_{p_2}^2\beta^2}{2}q^{p_2-1}\Big]
\; ,
\\
\label{eq: dynamics (2)}
\frac{1}{1-q} &=&
\frac{p_1 J_{p_1}^2\beta'\beta}{2} q_o^{{p_1}-2}+\frac{p_2 J_{p_2}^2\beta'\beta}{2} q_o^{{p_2}-2}
\; ,
\end{eqnarray}
as long as $T \in [T_{\rm RSB}(T'),T']$.
In the FP calculation~\cite{Barrat1997},
$q$ is the order parameter of the constrained system and $q_o$ its overlap with the equilibrated system at $T'$. In the dynamic calculation~\cite{Barrat1997,Capone2006}, $q_o = \lim_{t\to\infty} C(t,0)$, see Eq.~(\ref{eq:Ct0}), while $q=\lim_{t\to\infty}
\lim_{t'\to\infty} C(t,t')$, see Eq.~(\ref{eq:Cttp}).
Both approaches exhibit the same transition temperature $T_{\rm RSB}(T')$, which has been interpreted as the onset of an ageing regime in which the system
approaches marginal states
whose order parameter is determined by a different equation
\begin{equation}
\label{eq:q_marginal}
T^2 = \nu''(q_{\rm marg}) (1-q_{\rm marg})^2
\; .
\end{equation}
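These asymptotic conditions can be checked numerically. The sketch below is a minimal illustration (it assumes Python with numpy and scipy; the temperatures, the initial guess and the brackets are illustrative choices, and the form $\nu(q)=\sum_{\ell} J_{p_\ell}^2 q^{p_\ell}/2$, hence $\nu''(q)=\sum_{\ell} J_{p_\ell}^2 p_\ell(p_\ell-1) q^{p_\ell-2}/2$, is our assumption for the kernel entering Eq.~(\ref{eq:q_marginal})): it solves Eqs.~(\ref{eq: dynamics (1)}) and (\ref{eq: dynamics (2)}) for $(q_o,q)$ and Eq.~(\ref{eq:q_marginal}) for $q_{\rm marg}$.
\begin{verbatim}
# Minimal sketch: (q_o, q) from Eqs. (dynamics 1)-(2) and q_marg from
# Eq. (q_marginal), for p1=3, p2=4, J_{p1}=J_{p2}=1.  Assumes numpy/scipy;
# nu(q) = sum_l J_l^2 q^{p_l}/2 is an assumption made here, not in the text.
import numpy as np
from scipy.optimize import fsolve, brentq

p1, p2, J1, J2 = 3, 4, 1.0, 1.0

def system(x, beta, beta_p):
    q_o, q = x
    eq1 = q_o**2 - q + (1.0 - q)**2 * 0.5 * beta**2 * (
        p1 * J1**2 * q**(p1 - 1) + p2 * J2**2 * q**(p2 - 1))
    eq2 = 1.0 / (1.0 - q) - 0.5 * beta * beta_p * (
        p1 * J1**2 * q_o**(p1 - 2) + p2 * J2**2 * q_o**(p2 - 2))
    return [eq1, eq2]

def nu_pp(q):
    return 0.5 * (p1 * (p1 - 1) * J1**2 * q**(p1 - 2)
                  + p2 * (p2 - 1) * J2**2 * q**(p2 - 2))

Tp, T = 0.78, 0.77        # illustrative T' and T, with T_RSB(T') < T < T'
q_o, q = fsolve(system, x0=[0.70, 0.70], args=(1.0 / T, 1.0 / Tp))
# bracket chosen to pick the larger root of the marginality condition
q_marg = brentq(lambda qq: T**2 - nu_pp(qq) * (1.0 - qq)**2, 0.5, 1.0 - 1e-9)
print("q_o =", q_o, " q =", q, " q_marg =", q_marg)
\end{verbatim}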
The comparison of results obtained with the FP
potential and the usual unconstrained TAP free energy can already show that there should be temperature chaos in the temperature interval $[T_{\rm RSB}(T'),T']$. We justify this claim as follows.
The energy density of the system at temperature $T$
can be obtained with the two approaches and
compared. On the one hand, the system's
energy density, in the $[T_{\rm RSB}(T'),T']$ interval, calculated with the FP potential is
\begin{eqnarray}
\label{eq: quench dynamics(1)}
e_{\rm FP}
&=&
-\frac{\beta'}{2}(J_{p_1}^2 q_o^{p_1}+J_{p_2}^2 q_o^{p_2})-\frac{\beta}{2}\Big[J_{p_1}^2 (1-q^{p_1})+J_{p_2}^2 (1-q^{p_2})\Big]
\end{eqnarray}
(This expression can be read from Eq.~(27) in Ref.~\cite{Barrat1997}, setting the last term to zero and renaming the variables, $\tilde p = q_o$ and $q_1=q$.) Using Eqs.~(\ref{eq: dynamics (1)}) and (\ref{eq: dynamics (2)}) to replace $q_o$ and $q$, one can readily get the dependence of $e_{\rm FP}$ on $T$ and $T'$.
On the other hand, the energy density derived from the TAP free energy is
\begin{eqnarray}
e_{\rm TAP}
&=& q^{p_1/2} J_{p_1}
\varepsilon_{p_1}+ q^{p_2/2} J_{p_2}\varepsilon_{p_2}-\frac{ \beta J_{p_1}^2}{2}\Big[(p_1-1)q^{p_1}-{p_1}q^{p_1-1}+1\Big] -\frac{ \beta J_{p_2}^2}{2}\Big[(p_2-1)q^{p_2}-{p_2}q^{p_2-1}+1\Big] \;
\label{eq:eTAP}
\end{eqnarray}
with $q$ the order parameter $q=\frac{1}{N}\sum_i m_i^2$ and $\{m_i\}$ the local magnetisations in a TAP state. We dropped the subscripts $eq$ used in Eq.~(\ref{eq: equilibrium}) for simplicity. The equations that fix the $N$
local magnetisations, $\partial f_{\rm TAP}/\partial m_i =0$, can be multiplied by $m_i$ and summed over $i$
to yield an extra equation that relates $q$ to $\varepsilon_{p_1}$ and $\varepsilon_{p_2}$:
\begin{eqnarray}
\label{eq: quench dynamics(2)}
0=1+ \beta \frac{1-q}{q} \sum_{\ell=1}^2 J_{p_\ell}
p_\ell q^{p_\ell/2} \varepsilon_{p_\ell} +
\beta^2 (1-q)^2
\sum_{\ell=1}^2
J_{p_\ell}^2 \, \frac{p_\ell(p_\ell-1)}{2} \, q^{p_\ell-2} \; .
\end{eqnarray}
We use this condition to obtain $\varepsilon_{p_1}$ as a function of
$q, \varepsilon_{p_2}$ and $T$ that we replace in Eq.~(\ref{eq:eTAP})
and thus rewrite the TAP energy density in the form
$e_{\rm TAP}(q,\varepsilon_{p_2}, T)$.
If we now require that $q$ (and $q_o$)
be determined by Eqs.~(\ref{eq: dynamics (1)}) and (\ref{eq: dynamics (2)}) we can rewrite
the TAP energy density in a new form that is $e_{\rm TAP}(\varepsilon_{p_2}, T, T')$.
If the systems at temperature $T$ described by the TAP and FP approaches were the same, the FP and TAP energies should coincide and the condition
$e_{\rm FP}(T,T')=e_{\rm TAP}(\varepsilon_{p_2},T,T')$ should be verified. This gives an equation that determines $\varepsilon_{p_2}(T,T')$. In the following we will call
$\varepsilon_{p_2,{\rm FP}}(T,T')$ the adimensional energy density obtained through this method. One can check numerically, see Figs.~\ref{fig:energy_variation} and \ref{fig:energy_variation(2)}, that in the mixed model $\varepsilon_{p_2, {\rm FP}}$ thus obtained depends on both temperatures $T$ and $T'$.
Hence, for any temperature $T \in [T_{\rm RSB}(T'),T'[$ the constrained system shifts away from the original TAP state. On the contrary, in the pure model, the same construction yields a $T$-independent energy density $\varepsilon_p$, indicating that there is no chaos in temperature in this case.
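The numerical check just described can be organised as in the following minimal sketch (Python with numpy and scipy are assumed; the temperatures, the initial guess and the root bracket are illustrative choices and differ from the values used in the figures, and the solutions of the coupled equations only exist for $T\in[T_{\rm RSB}(T'),T']$).
\begin{verbatim}
# Minimal sketch of the construction of eps_{p2,FP}(T,T'):
# (i) solve Eqs. (dynamics 1)-(2) for (q_o, q); (ii) evaluate e_FP of
# Eq. (quench dynamics(1)); (iii) eliminate eps_{p1} with
# Eq. (quench dynamics(2)) and solve e_TAP = e_FP for eps_{p2}.
# p1 = 3, p2 = 4, J_{p1} = J_{p2} = 1.  Assumes numpy and scipy.
import numpy as np
from scipy.optimize import fsolve, brentq

p1, p2 = 3, 4

def overlaps(T, Tp, guess=(0.70, 0.70)):
    b, bp = 1.0 / T, 1.0 / Tp
    def eqs(x):
        qo, q = x
        return [qo**2 - q + (1 - q)**2 * 0.5 * b**2
                * (p1 * q**(p1 - 1) + p2 * q**(p2 - 1)),
                1.0 / (1 - q) - 0.5 * b * bp
                * (p1 * qo**(p1 - 2) + p2 * qo**(p2 - 2))]
    return fsolve(eqs, guess)

def e_FP(T, Tp):
    b, bp = 1.0 / T, 1.0 / Tp
    qo, q = overlaps(T, Tp)
    return -0.5 * bp * (qo**p1 + qo**p2) - 0.5 * b * ((1 - q**p1) + (1 - q**p2))

def eps1_from_constraint(eps2, q, T):
    """Eq. (quench dynamics(2)) solved for eps_{p1} at fixed eps_{p2}."""
    b = 1.0 / T
    rest = (1.0 + b * (1 - q) / q * p2 * q**(p2 / 2) * eps2
            + b**2 * (1 - q)**2 * 0.5
            * (p1 * (p1 - 1) * q**(p1 - 2) + p2 * (p2 - 1) * q**(p2 - 2)))
    return -rest / (b * (1 - q) / q * p1 * q**(p1 / 2))

def e_TAP(eps2, q, T):
    """TAP energy density of Eq. (eq:eTAP) with eps_{p1} eliminated."""
    b = 1.0 / T
    eps1 = eps1_from_constraint(eps2, q, T)
    onsager = sum(0.5 * b * ((p - 1) * q**p - p * q**(p - 1) + 1) for p in (p1, p2))
    return q**(p1 / 2) * eps1 + q**(p2 / 2) * eps2 - onsager

def eps2_FP(T, Tp):
    qo, q = overlaps(T, Tp)
    target = e_FP(T, Tp)
    return brentq(lambda e2: e_TAP(e2, q, T) - target, -5.0, 5.0)

Tp = 0.78
for T in (0.778, 0.774, 0.770):
    print(T, eps2_FP(T, Tp))
\end{verbatim}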
\begin{figure}[h]
\centering
\includegraphics[width=0.55\textwidth]{figure1a.png}
\quad\quad
\includegraphics[width=0.55\textwidth]{figure1b.png}
\caption{The adimensional energy densities derived with the FP potential as explained at the end of Sec.~\ref{sec: temperature chaos} (a proxy for a system equilibrated at $T'=1/\beta'$ and quenched to a bath temperature $T=1/\beta$). Panel (a) focuses on the mixed $p$-spin model with $T'\approx 0.801$, $p_1=3$, $p_2=4$ and $J_{p_1}=J_{p_2}=1$ ($T_s$ and $T_d$ are recalled in Eq.~(\ref{eq: temp. charac.})). Panel (b) focuses on the pure $p$-spin model with $p=3$, $T'\approx 0.609$ ($T_s\approx0.586$ and $T_d\approx0.612$) and $J=1$; here we retrieve the state-following behavior, as $\varepsilon_{FP}(T,T')$ is constant for all temperatures $T$.
}
\label{fig:energy_variation}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{figure1c.png}
\caption{We reproduce the energy density diagram for metastable TAP states (described in App.~\ref{sec:app_TAP}) with the exclusion zone bounded by the limit states. On top of it we add the trajectory described by the adimensional energy densities $\varepsilon_{p_1,\rm FP}(T,T')$ and $\varepsilon_{p_2,\rm FP}(T,T')$. It emphasises the chaos in temperature when a system is equilibrated at a bath temperature $T$ and constrained with a system at $T'$. The trajectory was obtained for $p_1=3$, $p_2=4$, $J_{p_1}=J_{p_2}=1$ and $T'\approx 0.801$. It starts at $T=T'$ and, following the direction of the arrows, the temperature $T$ decreases.}
\label{fig:energy_variation(2)}
\end{figure}
\section{A constrained TAP free energy density}
\label{sec: A TAP-like approach to quenches}
The TAP approach consists in probing the local minima of the (rough) free energy landscape with respect to the local magnetisations $\langle s_i \rangle=m_i$, where the angular brackets denote a static statistical average. This description allows one to reach an understanding of metastability in disordered mean-field models. Moreover, it enabled one to recover equilibrium results, originally derived with the replica trick~\cite{Crisanti2006}, and to grasp the outcome of the relaxation dynamics following quench protocols from disordered~\cite{Biroli1999,Cugliandolo1993}
and metastable initial conditions~\cite{Barrat1996,Biroli1999,Franz1995} in the pure $p$-spin model.
Different methods to obtain the TAP free energy and the ensuing TAP equations have been developed throughout the years~\cite{Biroli1999,Crisanti1995,Mezard1987,Rieger1992,Thouless1977}. One can cite, for example, the cavity method, the diagrammatic expansion of the free energy or the historical derivation by Thouless, Anderson and Palmer~\cite{Thouless1977}. We use here the derivation based on the Legendre transform of the thermodynamic free energy, first introduced by Georges and Yedidia~\cite{Georges1991}.
\subsection{Justification of the approach and definition of the free energy}
\label{sec: justification of the approach}
Let us take a vector in the $N$ dimensional phase space with components $\{v_i\}$. It could be given by the ensemble of
local magnetisations $\{ v_i = m_i^{\sigma} \}$ that characterise a TAP state at temperature $T'$, where $\sigma$ is the label that identifies the TAP state chosen, or it could be just a generic $N$-dimensional vector.
We require that the (thermal averaged) overlap between a configuration
$\{s_i\}$ and this vector be
\begin{equation}
\sum_i \langle s_i \rangle v_i =Nq_o
\; .
\end{equation}
In this section we will compute the free energy of a system, at temperature $T=1/\beta$, when the configurations are constrained to have, on average,
overlap $q_o$ with the reference configuration defined by
$\{ v_i\}$.
More explicitly, we calculate the constrained free energy
\begin{eqnarray}
\label{eq: free energy def}
-\beta F_J [\beta,q_o,\{v_i\},l]&=&\ln \Bigg\{ {\rm Tr}_{\{s_i\}}\Big[e^{-\beta H_J[\{s_i\}]-\frac{\lambda}{2}\sum_i (s_i^2-l)-h\sum_i(s_i v_i-q_o)}\Big] \Bigg\}=\ln Z_J[\beta,q_o,\{v_i\},l]\\
&=&-\beta F_J^\star[\beta,\{h v_i\},\lambda]+\frac{N \lambda l}{2}+N h q_o \nonumber
\end{eqnarray}
up to order $O(N)$, with
\begin{eqnarray}
\partial_\lambda \big(-\beta F_J[\beta,q_o,\{v_i\},l]\big) &=& 0 \implies \sum_i \langle s_i^2 \rangle= N l \; ,
\\
\partial_h \big(-\beta F_J[\beta,q_o,\{v_i\},l]\big)
&=&
0 \implies \sum_i \langle s_i \rangle v_i=Nq_o \; ,
\end{eqnarray}
or in another fashion
\begin{equation}
\label{eq: constraining the free energy}
\partial_\lambda \big(-\beta F_J^\star[\beta,\{h v_i\},\lambda]\big)
=
\frac{N l}{2} \implies \sum_i \langle s_i^2 \rangle= N l \; ,
\end{equation}
\begin{equation}
\label{eq: constraining the free energy(2)}
\partial_h \big(-\beta F_J^\star[\beta,\{h v_i\},\lambda]\big)
=
Nq_o \;\implies \sum_i \langle s_i \rangle v_i=Nq_o \; .
\end{equation}
The function $F_J^\star[\beta,\{h v_i\},\lambda]$ is the Legendre transform of $F_J[\beta,q_o,\{v_i\},l]$. Moreover the parameter $\lambda$ enforces the spherical constraint while $h$ fixes the global overlap with the reference state $\{v_i\}$.
In the end the free energy defined in this way has to be extremised with respect to $q_o$. Two arguments can be offered to justify this statement. The first one consists in requiring that the equilibrium properties of the system be described by the thermodynamic free energy
\begin{equation}
-\beta F_J[\beta,l]=\ln \Bigg\{ {\rm Tr}_{\{s_i\}}\Big[e^{-\beta H_J[\{s_i\}]-\frac{\lambda}{2}\sum_i (s_i^2-l)}\Big] \Bigg\}=\ln Z_J[\beta,l]
\label{eq:free energy}
\end{equation}
where the parameter $\lambda$ still enforces the spherical constraint. Thus, if the constrained free energy
$F_J[\beta,q_o,\{v_i\},l]$
is made extreme for $q_o=\hat{q}_o$, one has
\begin{eqnarray}
\partial_{q_o} \big(-\beta F_J[\beta,q_o,\{v_i\},l]\big)\Bigr|_{q_o=\hat{q}_o} =0 \implies h\Bigr|_{q_o=\hat{q}_o}=0
\end{eqnarray}
and one recovers
\begin{eqnarray}
-\beta F_J[\beta,\hat{q}_o,\{v_i\},l]=\ln \Bigg\{ {\rm Tr}_{\{s_i\}}\Big[e^{-\beta H_J[\{s_i\}]-\frac{\lambda}{2}\sum_i (s_i^2-l)}\Big] \Bigg\}=-\beta F_J[\beta,l]
\end{eqnarray}
with
\begin{equation}
\sum_i \langle s_i \rangle v_i =N \hat{q}_o
\; .
\end{equation}
The second argument is based on the usual saddle point approximations performed for extensive quantities.
Indeed, the thermodynamic free energy can be rewritten as
\begin{eqnarray}
-\beta F_J[\beta,l]&=&
\ln \Bigg\{ {\rm Tr}_{\{s_i\}}\Big[ \int dq_o \; e^{-\beta H_J[\{s_i\}]-\frac{\lambda}{2}\sum_i (s_i^2-l)} \, \delta\big(\sum_i s_i v_i-N q_o\big)\Big] \Bigg\}\\
&=&
\ln \Bigg\{ {\rm Tr}_{\{s_i\}}\Big[ \int dh \; dq_o \; e^{-\beta H_J[\{s_i\}]-\frac{\lambda}{2}\sum_i (s_i^2-l)+h(\sum_i s_i v_i -N q_o)}\Big] \Bigg\}
\nonumber\\
&=&
\ln \int dh \, dq_o \; e^{-\beta N f_J[\beta,h,q_o,\{v_i\},l]}\nonumber
\; .
\end{eqnarray}
In the thermodynamic limit the free energy is then deduced from the saddle point with respect to $h$ and $q_o$:
\begin{eqnarray}
-\beta F_J[\beta,l]=-\beta N f_J[\beta,\, \hat{h} \!, \, \hat{q}_o,\{v_i\},l]
\end{eqnarray}
where $\hat{h}$ and $\hat{q}_o$ are determined by
\begin{eqnarray}
\partial_h f_J\big[\beta,h,q_o,\{v_i\},l\big]\Bigr|_{\substack{h=\hat{h}\\q_o=\hat{q}_o}}=0 \quad \text{and} \quad \partial_{q_o} f_J\big[\beta,h,q_o,\{v_i\},l\big]\Bigr|_{\substack{h=\hat{h}\\q_o=\hat{q}_o}}=0 \; .
\end{eqnarray}
In part of our analysis, we will choose $\{v_i\}$ to be a metastable TAP state at a given temperature $T'$, and it will be designated as the reference state $\{m_i^{\sigma}\}$. The system described by the spins $\{s_i\}$ will be referred to as the constrained system, and the free energy $-\beta F_J[\beta,q_o,\{v_i\}]$ will be called the constrained free energy. For the moment, we keep $\{v_i\}$ generic.
\subsection{Taylor expansion of the free energy}
\label{subsec: Taylor dev}
Following the approach pioneered by Georges and Yedidia \cite{Biroli1999,Georges1991} we perform a Taylor expansion of the constrained free energy around $\beta=0$ up to second order in $\beta$. Concretely, the series reads
\begin{eqnarray}
-\beta F_J[\beta,q_o,\{v_i\},l] & = & \sum_{k=0}^{+\infty}\frac{\beta^k}{k!}\partial^k_\beta (-\beta F_J)\Bigr|_{\beta=0}\\
& = & -\beta F_J\Bigr|_{\beta=0} +\beta \partial_\beta (-\beta F_J)\Bigr|_{\beta=0} +\frac{\beta^2}{2} \partial^2_\beta (-\beta F_J)\Bigr|_{\beta=0}+O(\beta^3)
\; . \nonumber
\end{eqnarray}
Throughout the calculation we will use the notation
\begin{eqnarray}
\langle \quad.\quad\rangle &=&
{\rm Tr}_{\{s_i\}}\Big[\quad.\quad e^{-\beta H_J[\{ s_i \}]-\frac{\lambda}{2}\sum_i (s_i^2-l)-h\sum_i(s_i v_i-q_o)}\Big]/Z_J[\beta,q_o,\{v_i\},l]
\; , \\
\langle \quad.\quad\rangle\Bigr|_{\beta=0}
&=&
{\rm Tr}_{\{s_i\}}\Big[\quad.\quad e^{-\frac{\lambda}{2}\sum_i (s_i^2-l)-h\sum_i(s_i v_i-q_o)}\Big]/Z_J[\beta=0,q_o,\{v_i\},l]
\; .
\end{eqnarray}
Let us now compute the first terms in the series.
\vspace{0.2cm}
\noindent
\textit{$0^{\rm th}$ order in $\beta$}.
The first term is simply given by the trace over the Gaussian weight
\begin{equation}
-\beta F_J\Bigr|_{\beta=0}= \ln \Bigg\{ {\rm Tr}_{\{s_i\}}\Big[e^{-\frac{\lambda}{2}\sum_i (s_i^2-l)-h\sum_i(s_i v_i-q_o)}\Big] \Bigg\}
\; .
\end{equation}
After integrating over all spin configurations $\{s_i\}$ the previous expression yields (up to a constant)
\begin{eqnarray}
\label{eq: 0th order TAP}
-\frac{1}{N}\beta F_J\Bigr|_{\beta=0} & = & \frac{l\lambda}{2}-\frac{\ln\lambda}{2}+h \hspace{0.01cm} q_o+ \frac{h^2}{2\lambda} \frac{1}{N}\sum_i v_i^2
\; .
\end{eqnarray}
One can note that at this order
\begin{equation}
\langle s_i\rangle\Bigr|_{\beta=0}=
- \frac{1}{h} \frac{\partial(-\beta F_J|_{\beta=0})}{\partial v_i}
=
\frac{-h v_i}{\lambda}
\qquad \text{and} \qquad
\Big\langle \big(s_i-\langle s_i\rangle\big)^2\Big\rangle\Bigr|_{\beta=0}
=
\frac{1}{h^2} \frac{\partial^2(-\beta F_J|_{\beta=0})}{\partial (v_i)^2}
=
\frac{1}{\lambda}
\; .
\end{equation}
Introducing
\begin{equation}
q_{v} = \frac{1}{N}\sum_i v_i^2
\;
\end{equation}
and using the condition of vanishing variation of the free energy with respect to the Lagrange
multipliers:
\begin{equation}
\partial_\lambda \big(-\beta F_J\big)\Bigr|_{\beta=0}=0 =l-\frac{1}{\lambda}-
\frac{h^2}{\lambda^2} \, q_v
\end{equation}
and
\begin{equation}
\partial_h \big(-\beta F_J\big)\Bigr|_{\beta=0}=0 = q_o + \frac{h}{\lambda} \,
q_v
\; ,
\end{equation}
one eliminates $h$ and $\lambda$ to obtain a concise expression for the free energy
\begin{eqnarray}
\label{eq: 0th order TAP}
-\beta F_J\Bigr|_{\beta=0} & = &
\frac{N}{2}
\ln\Big(l-\frac{q_o^2}{q_v}\Big)
\; ,
\end{eqnarray}
and the mean values
\begin{equation}
\label{eq: mean high T}
\langle s_i\rangle\Bigr|_{\beta=0}
=\frac{q_o}{q_v} v_i \qquad \text{and} \qquad
\langle s_i^2\rangle\Bigr|_{\beta=0}
=l-\frac{q_o^2}{q_v}+\frac{q_o^2}{q_v^2} \, v_i^2
\; .
\end{equation}
At this order, the free energy is just the entropy of non-interacting spins lying on the sphere of radius $\sqrt{lN}$ with magnetisations $\{(q_o/q_v) v_i\}$. The
last expression in Eq.~(\ref{eq: mean high T}) implies that the global spherical constraint is preserved.
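The elimination of $h$ and $\lambda$ leading to Eqs.~(\ref{eq: 0th order TAP}) and (\ref{eq: mean high T}) can be checked with a few lines of symbolic algebra; the sketch below assumes Python with sympy (neither of which is part of this work) and drops the additive constants, exactly as in the text.
\begin{verbatim}
# Minimal symbolic check of the beta = 0 term: eliminating h and lambda
# reproduces (1/2) ln(l - q_o^2/q_v) per spin (up to the dropped constants)
# and the mean values of Eq. (eq: mean high T).  Assumes sympy.
import sympy as sp

l, qv, qo, vi = sp.symbols('l q_v q_o v_i', positive=True)
h, lam = sp.symbols('h lambda')

# free energy density at beta = 0, constants dropped
f0 = l * lam / 2 - sp.log(lam) / 2 + h * qo + h**2 * qv / (2 * lam)

sol = sp.solve([sp.Eq(sp.diff(f0, lam), 0), sp.Eq(sp.diff(f0, h), 0)],
               [lam, h], dict=True)[0]

f0_extremal = sp.simplify(f0.subs(sol))
print(f0_extremal)       # equivalent to log(l - q_o**2/q_v)/2 + 1/2

mean_s = sp.simplify(-sol[h] * vi / sol[lam])                         # <s_i>
mean_s2 = sp.simplify(1 / sol[lam] + sol[h]**2 * vi**2 / sol[lam]**2)  # <s_i^2>
print(mean_s, mean_s2)
\end{verbatim}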
\vspace{0.2cm}
\noindent
\textit{$1^{st}$ order in $\beta$}.
The derivative with respect to $\beta$
that yields the first order contribution in $\beta$ reads
\begin{eqnarray}
-\beta \frac{\partial (\beta F_J)}{\partial \beta}\Bigr|_{\beta=0} & = &-\beta \Big\langle H_J[\{s_i\}]+\partial_\beta h\sum_i(s_i v_i-q_o)+\frac{\partial_\beta \lambda}{2}\sum_i({s_i}^2-l)\Big\rangle\Bigr|_{\beta=0}
\; .
\end{eqnarray}
The last two terms vanish as the Lagrangian constraints are verified on average. Taking the limit $\beta=0$ all spins are decoupled, the averages can be explicitly computed, and one finds
\begin{eqnarray}
\label{eq: 1st order TAP}
-\beta \frac{\partial (\beta F_J)}{\partial \beta}\Bigr|_{\beta=0} & = & -\beta \Bigg\{ \Big(\frac{q_o}{q_v}\Big)^{p_1} H_{p_1} [\{ v_i \}] + \Big(\frac{q_o}{q_v}\Big)^{p_2} H_{p_2} [\{ v_i \}] \Bigg\}
\; .
\end{eqnarray}
Taking $v_i=\langle s_i\rangle$, $q_v=q_o$, and combining the $0^{th}$ order with the $1^{st}$ order the standard mean field
result, in which no overlap constraint is imposed, is retrieved.
\vspace{0.25cm}
\noindent
\textit{$2^{nd}$ order in $\beta$}.
The second order correction yields the Onsager reaction
term.
From now on we will consider, for simplicity, the usual case
in which the spherical constraint is set to $l=1$. We will thus drop the dependence of the constrained free energy on $l$ and write it as $-\beta F_J[\beta,q_o,\{v_i\}]$.
The second derivative of the constrained free energy with respect to $\beta$ yields
\begin{eqnarray}
\label{eq:2nd order free energy term(1)}
\frac{\partial^2 (-\beta F_J)}{\partial \beta^2}& = &\Bigg\langle \Big\{H_J[\{ s_i\}]+\frac{1}{2}\partial_\beta \lambda \sum_i(s_i^2-1)+\partial_\beta h\sum_i(s_i v_i-q_o)\Big\}^2\Bigg\rangle-\Big\langle H_J[\{s_i\}]\Big\rangle^2
\; .
\end{eqnarray}
At this point the first order corrections in $\beta$ to $h$ and $\lambda$ have to be
computed. To do so one can use the Maxwell relations and, after some manipulations, write:
\begin{eqnarray}
\label{eq: Maxwell}
\partial_\beta h\Bigr|_{\beta=0} & = &\frac{1}{N}\partial_\beta\Big[ \partial_{q_o}
\big(-\beta F_J\big)\Big]\Bigr|_{\beta=0}= \frac{1}{N}\partial_{q_o}\Big[ \partial_\beta\big(-\beta F_J\big)\Big]\Bigr|_{\beta=0} = \frac{1}{N}\partial_{q_o} \Big\langle - H_J[\{ s_i \}] \Big\rangle \Bigr|_{\beta=0}\nonumber\\
& = & -\frac{1}{N}\partial_{q_o}\Bigg\{ \Big(\frac{q_o}{q_v}\Big)^{p_1} H_{p_1} [\{ v_i \}] + \Big(\frac{q_o}{q_v}\Big)^{p_2} H_{p_2} [\{ v_i\}] \Bigg\}\nonumber\\
&=&- \frac{1}{N} \Bigg\{ p_1 \Big(\frac{{q_o}^{p_1-1}}{{q_v}^{p_1}}\Big) H_{p_1} [\{ v_i \}] + p_2 \Big(\frac{{q_o}^{p_2-1}}{{q_v}^{p_2}}\Big) H_{p_2} [\{ v_i \}] \Bigg\}
\; ,\\
\partial_\beta \lambda\Bigr|_{\beta=0} & = &\frac{2}{N}\partial_\beta \Big[\partial_{l}\big(-\beta F_J\big)\Big]\Bigr|_{\beta=0} = \frac{2}{N}\partial_l \Big[\partial_{\beta}\big(-\beta F_J\big)\Big]\Bigr|_{\beta=0} = \frac{2}{N}\partial_{l} \Big\langle - H_J[\{s_i\}] \Big\rangle \Bigr|_{\beta=0} \nonumber\\
&=& \frac{2}{N}\partial_l \Bigg\{ \Big(\frac{q_o}{q_v}\Big)^{p_1} H_{p_1} [\{ v_i \}] + \Big(\frac{q_o}{q_v}\Big)^{p_2} H_{p_2} [\{ v_i \}] \Bigg\}= 0
\; .
\end{eqnarray}
The last identity allows us to simplify the second derivative of the free energy that becomes
\begin{eqnarray}
\label{eq:2nd order free energy term}
\frac{\partial^2 (-\beta F_J)}{\partial \beta^2}\Bigr|_{\beta=0}& = & \Big\langle (H_J[\{s_i\}])^2 \Big\rangle\Bigr|_{\beta=0}-\Big\langle H_J[\{s_i\}]\Big\rangle\Bigr|_{\beta=0}^2+2\partial_\beta h\Big\langle H_J[\{s_i\}]\sum_i(s_i v_i-q_o)\Big\rangle\Bigr|_{\beta=0}\nonumber\\
&&+(\partial_\beta h)^2\Bigg\langle\Big[\sum_i(s_i v_i-q_o)\Big]^2\Bigg\rangle\Biggr|_{\beta=0}
\; .
\end{eqnarray}
In order to keep the next calculations comprehensible
we will introduce a compact notation and rewrite
$H_J [\{s_i\}]$ as follows
\begin{eqnarray}
H_J[\{ s_i \}]& = &-\sum_{i_1\neq\dots\neq i_{p_1}}\frac{J_{i_1\dots i_{p_1}}}{p_1 !} s_{i_1}\dots s_{i_{p_1}}-\sum_{i_1\neq\dots\neq i_{p_2}}\frac{J_{i_1\dots i_{p_2}}}{p_2 !}s_{i_1}\dots s_{i_{p_2}}
\nonumber\\
& = &-\sum_{\{i_{p_1}\}}J_{\{i_{p_1}\}} S_{\{i_{p_1}\}}-\sum_{\{i_{p_2}\}}J_{\{i_{p_2}\}}S_{\{i_{p_2}\}}
\end{eqnarray}
with
\begin{eqnarray}
\begin{array}{rclrclrcl}
\{i_{p_1}\} &\equiv& i_1 \neq \dots \neq i_{p_1}
\;,
& \qquad \{k,i_{2},i_{p_1}\} &\equiv& k\neq i_2 \neq \dots \neq i_{p_1} \; ,
& \qquad J_{\{i_{p_1}\}} &=&\frac{J_{i_1\dots i_{p_1}}}{p_1 !} \; ,
\\
S_{\{i_{p_1}\}} &=& s_{i_1}\dots s_{i_{p_1}}
\; , & \qquad
S_{\{i_{2},i_{p_1}\}} &=& s_{i_2}\dots s_{i_{p_1}} \; ,
& \qquad \Delta S_{\{i_{p_1}\}} &=& \langle{S_{\{i_{p_1}\}}}^2\rangle-{\langle S_{\{i_{p_1}\}}\rangle}^2 \; ,
\\
V_{\{i_{2},i_{p_1}\}} &=& v_{i_2} \dots v_{i_{p_1}} \; .
& &&
\end{array}
\end{eqnarray}
We now proceed to evaluate each term in Eq.~(\ref{eq:2nd order free energy term}) separately; the details of the calculations can be found in App.~\ref{sec:appA}. To begin with we focus on the extensive contribution of the variance of $H_J[\{ s_i \}]$,
\begin{eqnarray}
&&
\left.
\left[
\Big\langle (H_J[\{s_i\}])^2 \Big\rangle-\Big\langle H_J[\{s_i\}]\Big\rangle^2
\right]
\right|_{\beta=0}
= \frac{N J_{p_1}^2}{2}\Big[1-\Big(\frac{q_o^2}{q_v}\Big)^{p_1}\Big] +\frac{N J_{p_2}^2}{2}\Big[1-\Big(\frac{q_o^2}{q_v}\Big)^{p_2}\Big]
\nonumber \\
& &
\qquad\qquad
-\frac{N J_{p_1}^2}{2}\Big(1-\frac{q_o^2}{q_v}\Big)p_1\Big(\frac{q_o^2}{q_v}\Big)^{p_1-1} -\frac{N J_{p_2}^2}{2}\Big(1-\frac{q_o^2}{q_v}\Big)p_2\Big(\frac{q_o^2}{q_v}\Big)^{p_2-1} \nonumber\\
& &
\qquad\qquad
+\Big(1-\frac{q_o^2}{q_v}\Big)\sum_k \left[\sum_{ \{i_2,i_{p_1}\}} J_{\{k,i_{2},i_{p_1}\}} \Big(\frac{{q_o}}{q_v}\Big)^{p_1-1} V_{\{i_{2},i_{p_1}\}}+ \sum_{ \{j_2,j_{p_2}\}} J_{\{k,j_{2},j_{p_2}\}} \Big(\frac{{q_o}}{q_v}\Big)^{p_2-1} V_{\{j_{2},j_{p_2}\}} \right]^2
\; .
\label{eq: 2nd order TAP(1)}
\end{eqnarray}
The remaining terms in Eq.~(\ref{eq:2nd order free energy term}) yield
\begin{eqnarray}
\label{eq: 2nd order TAP(2)}
\left.
(\partial_\beta h)^2 \Bigg\langle\Big[\sum_i(s_i v_i-q_o)\Big]^2\Bigg\rangle \right|_{\beta=0} & = & N \left(\partial_\beta h\Bigr|_{\beta=0}\right)^2\Big(1-\frac{q_o^2}{q_v}\Big)q_v
\end{eqnarray}
and
\begin{eqnarray}
\label{eq: 2nd order TAP(3)}
\left.
(\partial_\beta h) \Big\langle H_J[\{s_i\}]\sum_i(s_i v_i-q_o)\Big\rangle
\right|_{\beta=0}
& = & {{\partial_\beta h}\Bigr|_{\beta=0}}\hspace{0.1cm}p_1 \Big(1-\frac{q_o^2}{q_v}\Big) \Big(\frac{q_o}{q_v}\Big)^{p_1-1}H_{p_1}[\{ v_i \}] \nonumber\\
& & + {{\partial_\beta h}\Bigr|_{\beta=0}}\hspace{0.1cm}p_2 \Big(1-\frac{q_o^2}{q_v}\Big) \Big(\frac{q_o}{q_v}\Big)^{p_2-1}H_{p_2}[\{ v_i \}] \;.
\end{eqnarray}
In the last two expressions, Eq.~(\ref{eq: Maxwell}) can be used to replace ${{\partial_\beta h}\Bigr|_{\beta=0}}$. Finally, gathering all three orders of the Taylor expansion, Eqs.~(\ref{eq: 0th order TAP}), (\ref{eq: 1st order TAP}), (\ref{eq: 2nd order TAP(1)}), (\ref{eq: 2nd order TAP(2)}) and (\ref{eq: 2nd order TAP(3)}), the constrained free energy becomes
\begin{eqnarray}
\label{eq:free energy}
-\beta F_J[\beta,q_o,\{v_i\}]&=&\frac{N}{2}\ln\Big(1-\frac{q_o^2}{q_v}\Big)-\beta \Bigg\{ \Big(\frac{q_o}{q_v}\Big)^{p_1} H_{p_1} [\{ v_i \}] + \Big(\frac{q_o}{q_v}\Big)^{p_2} H_{p_2} [\{ v_i \}] \Bigg\}\nonumber\\
& &+\frac{N \beta^2 J_{p_1}^2}{4}\Big[1-p_1\Big(\frac{q_o^2}{q_v}\Big)^{p_1-1}+(p_1-1)\Big(\frac{q_o^2}{q_v}\Big)^{p_1}\Big] \nonumber\\
& &+\frac{N \beta^2 J_{p_2}^2}{4}\Big[1-p_2\Big(\frac{q_o^2}{q_v}\Big)^{p_2-1}+(p_2-1)\Big(\frac{q_o^2}{q_v}\Big)^{p_2}\Big] \nonumber\\
& &+\frac{\beta^2}{2}\Big(1-\frac{q_o^2}{q_v}\Big)\sum_k \Big[\sum_{ \{i_2,i_{p_1}\}} J_{\{k,i_{2},i_{p_1}\}} \Big(\frac{{q_o}}{q_v}\Big)^{p_1-1} V_{\{i_{2},i_{p_1}\}} \nonumber\\
& & \hspace{2.8cm}+ \sum_{ \{j_2,j_{p_2}\}} J_{\{k,j_{2},j_{p_2}\}} \Big(\frac{{q_o}}{q_v}\Big)^{p_2-1} V_{\{j_{2},j_{p_2}\}} \Big]^2 \nonumber\\
& &-\frac{\beta^2}{2 q_v N} \Big(1-\frac{q_o^2}{q_v}\Big)\Bigg\{ p_1 \Big(\frac{q_o}{q_v}\Big)^{p_1-1} H_{p_1} [\{ v_i \}] + p_2 \Big(\frac{q_o}{q_v}\Big)^{p_2-1} H_{p_2} [\{ v_i \}] \Bigg\}^2 +O(\beta^3) \; .
\end{eqnarray}
One can rewrite this expression under the form
\begin{eqnarray}
\label{eq:free energy(1)}
-\beta F_J[\beta,q_o,\{v_i\}]&=&
-\beta F_{\rm TAP}\Big[\beta,\Big\{\frac{q_o}{q_v} v_i\Big\}\Big]\nonumber\\
& &+\frac{\beta^2}{2}\Big(1-\frac{q_o^2}{q_v}\Big)\sum_k \Big[\sum_{ \{i_2,i_{p_1}\}} J_{\{k,i_{2},i_{p_1}\}}\Big(\frac{{q_o}}{q_v}\Big)^{p_1-1} V_{\{i_{2},i_{p_1}\}} \nonumber\\
& & \hspace{2.8cm}+ \sum_{ \{j_2,j_{p_2}\}} J_{\{k,j_{2},j_{p_2}\}} \Big(\frac{{q_o}}{q_v}\Big)^{p_2-1} V_{\{j_{2},j_{p_2}\}} \Big]^2 \nonumber\\
& &-\frac{\beta^2}{2 q_v N} \Big(1-\frac{q_o^2}{q_v}\Big)\Bigg\{ p_1 \Big(\frac{q_o}{q_v}\Big)^{p_1-1} H_{p_1} [\{ v_i \}] + p_2 \Big(\frac{q_o}{q_v}\Big)^{p_2-1} H_{p_2} [\{ v_i \}] \Bigg\}^2 +O(\beta^3)
\end{eqnarray}
where $-\beta F_{\rm TAP}$ is the unconstrained TAP free energy for the mixed model
\begin{eqnarray}
-\beta F_{\rm TAP}[\beta,\{m_i\}]&=&\frac{N}{2}\ln(1-q)-\beta \Big( H_{p_1} [\{ m_i \}] + H_{p_2} [\{ m_i \}] \Big)\nonumber\\
& &+\frac{N \beta^2 J_{p_1}^2}{4}\Big[1-p_1 q^{p_1-1}+(p_1-1) q^{p_1}\Big]
+\frac{N \beta^2 J_{p_2}^2}{4}\Big[1-p_2 q^{p_2-1}+(p_2-1) q^{p_2}\Big]
\label{eq:fTAP}
\end{eqnarray}
with
\begin{equation}
m_i=\frac{q_o}{q_v}v_i \quad \text{and} \quad q=\frac{1}{N}\sum_i m_i^2
\; .
\label{eq:q-def}
\end{equation}
(In~\cite{Annibale2004} this same $F_{\rm TAP}$ appears and it is presented as the result of a perturbative expansion in which one of the two $p$-spin
Hamiltonians is treated as a perturbation with respect to the other one. In~\cite{Rizzo2006} the TAP free energy is considered to be exact to order $N$ and it describes exactly the statics of the model.)
Fixing the reference $\{v_i\}$ and the temperature $T$ of the system, one can note that the constrained free energy only depends on the overlap $q_o$, and not on an extensive number of parameters as is the case for the usual TAP free energy. Here, however, we will have to keep track of the choice of the reference state. We finally emphasise that this derivation differs from the usual TAP calculation in that the constrained free energy is determined only up to $O(\beta^3)$ correction terms that we cannot ensure are subleading in $N$. Indeed, we have replaced the local fields $\{h_i\}$ of the TAP method (see App.~\ref{sec: app TAP free energy}) by a global field $h$. Thus the cavity method arguments (perturbing the system with one additional spin) or the diagrammatic expansion in Ref.~\cite{Crisanti1995} do not apply here, as the local structure provided by the fields $\{h_i\}$ is lost.
\subsection{A particular case: the pure $p$-spin model}
In the case of the pure $p$-spin model, with a single term in the Hamiltonian $H_J[\{s_i\}]=H_p[\{s_i\}]$, the constrained
free energy simplifies drastically. Moreover, taking $\{v_i=m_i^\sigma\}$ a metastable TAP state at temperature $T'$,
the stationary points of the constrained free energy are of two kinds: either the system keeps a non-vanishing overlap with the reference state, $q_o\neq 0$,
or it becomes paramagnetic, $q_o=0$. In the dynamic interpretation of this approach, the former situation is linked to the possibility of following the initial state in a, say, low temperature quench while the latter corresponds to escaping the non-trivial TAP state towards the disordered paramagnetic phase.
To begin with, one can note that if the reference state $\{m_i^\sigma\}$ is metastable at a temperature $T'=1/\beta'$,
it should verify the TAP equations
\begin{eqnarray}
\frac{m_k^\sigma}{1-q_\sigma}&=&\beta'\sum_{\{i_2,i_{p}\}} J_{k,\{i_{2},i_{p}\}} M_{\{i_{2},i_{p}\}}
-{\beta'^2 J^2}(1-q_\sigma)\frac{p(p-1)}{2} {q_\sigma}^{p-2} m_k^\sigma
\qquad \text{with} \qquad k=1,\dots, N
\; ,
\end{eqnarray}
where $M_{\{i_{2},i_{p}\}}= m_{i_2}^\sigma \dots m_{i_p}^\sigma$ and $q_\sigma=N^{-1}\sum_i (m_i^\sigma)^2$. These equations are
obtained as stationary point conditions
on $F_{\rm TAP}$ in Eq.~(\ref{eq:fTAP}) with $J_{p_2}=0$, $J_{p_1}=J$ and $p_1=p$. They lead straightforwardly to
\begin{eqnarray}
\frac{Nq_\sigma}{1-q_\sigma}&=&-\beta' p H_{p} [\{ m_i^\sigma \}]-{\beta'^2 J^2} N (1-q_\sigma)\frac{p(p-1)}{2} q_\sigma^{p-1}
\; .
\end{eqnarray}
Again, a lengthy calculation shows that the last two terms in Eq.~(\ref{eq:free energy}) cancel out and the constrained free energy becomes
\begin{eqnarray}
\label{eq: free energy pure p spin}
-\beta F_J [\beta,q_o,\{m_i^{\sigma}\}]&=&\frac{N}{2}\ln(1-\frac{q_o^2}{q_\sigma})-\beta \Big(\frac{q_o}{q_\sigma}\Big)^{p} H_p[\{ m_i^\sigma \}] +\frac{N \beta^2 {J^2}}{4}\Big[1-p\Big(\frac{q_o^2}{q_\sigma}\Big)^{p-1}+(p-1)\Big(\frac{q_o^2}{q_\sigma}\Big)^{p}\Big]+O(\beta^3)\nonumber\\
&=&-\beta F_{\rm TAP}\Big[\beta,\Big\{\frac{q_o}{q_\sigma} m_i^{\sigma}\Big\}\Big]+O(\beta^3)\; ,
\end{eqnarray}
that is to say, the TAP free energy for a $p$-spin model with local magnetisations and overlap
\begin{equation}
m_i=\Big(\frac{q_o}{q_\sigma}\Big) m_i^\sigma \qquad \text{and} \qquad q=\frac{1}{N}\sum_i m_i^2=\frac{q_o^2}{q_\sigma}
\; ,
\end{equation}
respectively.
In fact, as detailed in App.~\ref{sec:app_link_TAP_constrained}, the constrained free energy $-\beta F_J [\beta,q_o,\{m_i^{\sigma}\}]$ is strictly equal to the TAP free energy $-\beta F_{\rm TAP}\Big[\beta,\Big\{\frac{q_o}{q_\sigma} m_i^{\sigma}\Big\}\Big]$ in this case. In other words, the $O(\beta^3)$ and higher order terms vanish in the $N \rightarrow +\infty$ limit; Eq.~(\ref{eq: free energy pure p spin}) is then exact and not an approximation.
As previewed at the beginning of this section, the solutions minimising the free energy are such that
\begin{eqnarray}
&&\partial_{q_o}(-\beta F_J)=\frac{2q_o}{q_\sigma}\partial_{\frac{q_o^2}{q_\sigma}}(-\beta F_J)=0
\qquad \Rightarrow \qquad
\left\{
\begin{array}{l}
\displaystyle{
\frac{p(p-1)}{2}z^2+p \varepsilon z+1
= 0
} \; , \\
q_o=0 \; ,
\end{array}
\right.
\end{eqnarray}
with
\begin{eqnarray}
z= \frac{J}{T} \Big(1-\frac{q_o^2}{q_\sigma}\Big)\Big(\frac{q_o^2}{q_\sigma}\Big)^{\frac{p}{2}-1} \qquad \text{and} \qquad \varepsilon =\frac{1}{N J {q_\sigma}^{\frac{p}{2}}}H_J[\{ m_i^\sigma \}]
\; .
\end{eqnarray}
The interpretation of these solutions, in dynamical terms, is the following.
On the one hand, the $q_o\neq 0$ solution corresponds to the system (after the quench) staying in the same TAP metastable state up to the rescaling $ m_i=({q_o}/{q_\sigma}) m_i^\sigma$. On the other hand, with the $q_o=0$ solution one recovers the free energy of the paramagnetic state; it corresponds to a quench to high temperature, where the first solution is not available anymore (i.e. the initial TAP state becomes unstable). These conclusions have already been drawn in previous papers, see e.g. Ref.~\cite{Barrat1997}, using different methods and
focusing on the dynamics with an initial temperature $T' \in [T_s,T_d]$.
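As an illustration, the $q_o\neq 0$ branch can be evaluated explicitly: one first solves the quadratic equation for $z$ and then inverts the definition of $z$ for $q_o^2/q_\sigma$. A minimal numerical sketch follows (it assumes Python with numpy and scipy; the values of $\varepsilon$ and $T$ are illustrative, and selecting the physical stationary point among the printed ones would require checking the second derivative, which is not done here).
\begin{verbatim}
# Minimal sketch: the q_o != 0 branch for the pure p-spin model.
# Solve (p(p-1)/2) z^2 + p*eps*z + 1 = 0 for z, then invert
# z = (J/T)(1-x) x^(p/2-1) with x = q_o^2/q_sigma.  Assumes numpy/scipy.
import numpy as np
from scipy.optimize import brentq

p, J = 3, 1.0

def z_branches(eps):
    """Real roots of (p(p-1)/2) z^2 + p*eps*z + 1 = 0, if any."""
    a, b, c = p * (p - 1) / 2.0, p * eps, 1.0
    disc = b**2 - 4 * a * c
    if disc < 0:
        return []                # no real root: the state cannot be followed
    return [(-b - np.sqrt(disc)) / (2 * a), (-b + np.sqrt(disc)) / (2 * a)]

def x_from_z(z, T):
    """Invert z = (J/T)(1-x) x^(p/2-1) on the large-x branch, if possible."""
    f = lambda x: (J / T) * (1 - x) * x**(p / 2 - 1) - z
    x_max = (p - 2.0) / p        # maximum of (1-x) x^(p/2-1), valid for p > 2
    if f(x_max) < 0:
        return None              # z too large: spinodal reached at this T
    return brentq(f, x_max, 1.0 - 1e-12)

eps, T = -1.20, 0.55             # illustrative TAP-state energy density and T
for z in z_branches(eps):
    x = x_from_z(z, T) if z > 0 else None
    if x is not None:
        print("z =", z, " q_o^2/q_sigma =", x)
\end{verbatim}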
We conclude that the constrained free energy density does not provide further information about the behaviour of the pure model, compared to what had been derived from the unconstrained one.
\section{Application to the mixed $p$-spin model}
\label{sec:Application to the mixed $p$-spin model}
In this section we apply the constrained free energy density to the analysis of the mixed model.
\subsection{Simplification of the free energy and stability condition}
Contrary to the pure $p$-spin model, the last two terms of the constrained
free energy in Eq.~(\ref{eq:free energy(1)}) do not cancel out, and they can be seen as being at the origin of
some of the peculiar features of these models.
For the following analysis we will consider the reference state $\{v_i=m_i^\sigma\}$ to be a metastable TAP state at a given temperature $T'=1/\beta'$. Under this choice,
the constrained free energy can be simplified to (for details, see App.~\ref{sec:appB})
\begin{equation}
\label{eq:free energy mixed p-spin}
-\beta F_J [\beta,q_o,\{m_i^{\sigma}\}] =-\beta F_{\rm TAP}\Big[\beta,\Big\{\frac{q_o}{q_\sigma} m_i^{\sigma}\Big\}\Big]+\frac{\beta^2}{2}\Big(1-\frac{q_o^2}{q_\sigma}\Big)\Bigg[\Big(\frac{{q_o}}{q_\sigma}\Big)^{p_2-1}-\Big(\frac{{q_o}}{q_\sigma}\Big)^{p_1-1}\Bigg]^2\Delta H_{p_2}[\{ m_i^\sigma \}]+O(\beta^3)
\end{equation}
where we used the form of $F_{\rm TAP}$ in Eq.~(\ref{eq:fTAP}) and
\begin{equation}
\Delta H_{p_2}[\{ m_i^\sigma \}]= \sum_k \Big(\partial_{m_k^\sigma} H_{p_2} [\{ m_i^\sigma \}] \Big)^2-\frac{{p_2}^2}{N}\Big(H_{p_2} [\{ m_i^\sigma \}]\Big)^2=\left\|{\nabla H_{p_2}}\right\|^2-\frac{{p_2}^2}{N}
\Big( H_{p_2} [\{ m_i^\sigma \}]\Big)^2 \; .
\end{equation}
The extra term $\Delta H_{p_2}[\{ m_i^\sigma \}]$, which depends on the reference $\{m_i^\sigma\}$, cannot be rewritten using the usual variables $q_\sigma$, $H_{p_1} [\{ m_i^\sigma \}]$ and $H_{p_2} [\{ m_i^\sigma \}]$. It is a direct consequence of the non-homogeneity of the Hamiltonian. More practically, this can be seen from the terms
\begin{eqnarray}
\sum_{ \{i_2,i_{p_1}\}} J_{\{k,i_{2},i_{p_1}\}}\Big(\frac{{q_o}}{q_v}\Big)^{p_1-1} V_{\{i_{2},i_{p_1}\}} + \sum_{ \{j_2,j_{p_2}\}} J_{\{k,j_{2},j_{p_2}\}} \Big(\frac{{q_o}}{q_v}\Big)^{p_2-1} V_{\{j_{2},j_{p_2}\}} \;
\end{eqnarray}
appearing in Eq.~(\ref{eq:free energy(1)}).
The dependence on $({q_o}/{q_v})^{p_1-1}$ and $({q_o}/{q_v})^{p_2-1}$ prevents us from simplifying with the help of the TAP equations. Indeed, for such simplifications one would rather need terms of the form (for details, see App.~\ref{sec:appB})
\begin{eqnarray}
\sum_{ \{i_2,i_{p_1}\}} J_{\{k,i_{2},i_{p_1}\}} V_{\{i_{2},i_{p_1}\}} + \sum_{ \{j_2,j_{p_2}\}} J_{\{k,j_{2},j_{p_2}\}} V_{\{j_{2},j_{p_2}\}} \; .
\end{eqnarray}
As detailed in App.~\ref{sec:app_link_TAP_constrained}, if we impose $q_o=q_\sigma$ the expression for the constrained free energy~(\ref{eq:free energy mixed p-spin}) becomes exact at any order in $\beta$ in the $N\rightarrow +\infty$ limit. Indeed, the $O(\beta^3)$ term yields sub-extensive contributions to the free energy.
For the general case, up to $O(\beta^3)$ corrections, any metastable state should minimise the free energy with respect to $q_o$ such that
\begin{eqnarray}
\label{eq: stability}
\partial_{q_o}\Bigg\{ -\beta F_{\rm TAP}\Big[\beta,\Big\{\frac{q_o}{q_\sigma} m_i^{\sigma}\Big\}\Big] \Bigg\}+ \frac{\beta^2}{2}\,\partial_{q_o}\Bigg\{\Big(1-\frac{q_o^2}{q_\sigma}\Big)\Bigg[\Big(\frac{{q_o}}{q_\sigma}\Big)^{p_2-1}-\Big(\frac{{q_o}}{q_\sigma}\Big)^{p_1-1}\Bigg]^2\Delta H_{p_2}[\{ m_i^\sigma \}]\Bigg\}=0
\; .
\end{eqnarray}
Looking at the case $\beta=\beta'$ one can check that $q_o=q_\sigma$ is a stationary point of the constrained free energy yielding simply $F_J [\beta',q_\sigma,\{m_i^{\sigma}\}]=F_{\rm TAP}[\beta',\{m_i^{\sigma}\}]$.
We recover here the expected result that the reference state $\{m_i^{\sigma}\}$ is metastable for $\beta=\beta'$. However, the general situation (for any $\beta,\beta'$) is non-trivial. In fact, taking the variation of the constrained free energy with respect to $q_o$ yields the stationary conditions
\begin{eqnarray}
\label{eq:stability}
\partial_{q_o}(-\beta F_J) &=& 0\\
&\Longleftrightarrow &\left\{
\begin{array}{lll}
q_o=0 \\
\\
0=1+\sum_{\ell=1}^2 \Big\{\frac{p_\ell(p_\ell-1)}{2}z_\ell^2+p_\ell \varepsilon_{p_\ell}[\{ m_i^\sigma \}] z_\ell\Big\} \\
\hspace{0.65cm}+{q_\sigma} \frac{\Delta H_{p_2}[\{ m_i^\sigma \}]}{N} \Bigg[\displaystyle{\frac{ z_2}{J_{p_2}{q_\sigma}^{\frac{p_2}{2}}}-\frac{ z_1}{J_{p_1}{q_\sigma}^{\frac{p_1}{2}}}}\Bigg] \Bigg[\frac{ z_2}{J_{p_2}{q_\sigma}^{\frac{p_2}{2}}}\Big(p_2-1-\frac{1}{\frac{q_\sigma}{q_o^2}-1}\Big)-\frac{z_1}{J_{p_1}{q_\sigma}^{\frac{p_1}{2}}}\Big(p_1-1-\frac{1}{\frac{q_\sigma}{q_o^2}-1}\Big)\Bigg] \\
\end{array}
\right.\nonumber
\end{eqnarray}
with
\begin{eqnarray}
z_\ell= {J_{p_\ell} \beta} \Big(1-\frac{q_o^2}{q_\sigma}\Big)\Big(\frac{q_o^2}{q_\sigma}\Big)^{\frac{p_\ell}{2}-1} \quad \text{and} \quad \varepsilon_{p_\ell} [\{ m_i^\sigma \}] =\frac{1}{N J_{p_\ell} {q_\sigma}^{\frac{p_\ell}{2}}}H_{p_\ell} [\{ m_i^\sigma \}] \quad \qquad \mbox{for} \qquad \ell = 1,2
\; .
\end{eqnarray}
\subsection{Discussion}
If we now focus on a reference $\{v_i = m_i^\sigma\}$
that is a metastable TAP state, the constrained free energy $-\beta F_J[\beta,q_o,\{m_i^{\sigma}\}]$ makes the chaos in temperature appear clearly. In particular it shows that different states $\{m_i^{\sigma}\}$ yield different constrained systems. To see these features
we compute the total energy,
\begin{eqnarray}
\label{eq:energy_shift}
E_J[\beta,q_o,\{m_i^{\sigma}\}]
&=&
-\partial_\beta \Bigg\{-\beta F_J[\beta,q_o,\{m_i^{\sigma}\}]\Bigg\}\\
&=&E_{\rm TAP}\Big[\beta,\Big\{\frac{q_o}{q_\sigma}m_i^{\sigma}\Big\}\Big]-\beta\Big(1-\frac{q_o^2}{q_\sigma}\Big)\Bigg[\Big(\frac{{q_o}}{q_\sigma}\Big)^{p_2-1}-\Big(\frac{{q_o}}{q_\sigma}\Big)^{p_1-1}\Bigg]^2\Delta H_{p_2}[\{ m_i^\sigma \}]
\nonumber
\; .
\end{eqnarray}
The first term, $E_{\rm TAP}\Big[\beta,\Big\{\frac{q_o}{q_\sigma}m_i^{\sigma}\Big\}\Big]$,
is the energy that the system would have if it were at temperature $T$
within the same TAP state as the reference. The sole change in energy would then be given by the state deformation represented by the renormalisation $m_i^\sigma \rightarrow (q_o/q_\sigma) \, m_i^{\sigma}$.
The information about the
shift of TAP state from the reference to another one is thus contained in the second term in Eq.~(\ref{eq:energy_shift}), as it implies $E_J[\beta,q_o,\{m_i^\sigma\}] \neq E_{\rm TAP}\Big[\beta,\Big\{\frac{q_o}{q_\sigma}m_i^{\sigma}\Big\}\Big]$. We note that the constrained system depends explicitly on the reference state $\{m_i^\sigma\}$ via the value taken by $\Delta H_{p_2}[\{ m_i^\sigma \}]$. Again, it is important to point out that, as the mixed $p$-spin glass model has a non-homogeneous Hamiltonian, this term cannot be rewritten using the usual equilibrium parameters $q_\sigma$, $H_{p_1}[\{m_i^\sigma\}]$ and $H_{p_2}[\{m_i^\sigma\}]$. In the case of the pure $p$-spin model
$\Delta H_{p_2}[\{ m_i^\sigma \}]=0$ and there is no shift of TAP state.
Chaos in temperature can also be observed through the minimisation of the constrained free energy. Indeed, the value of $q_o$ determined by Eq.~(\ref{eq:stability}) depends on the reference $\{m_i^\sigma\}$, again via the ``parameter'' $\Delta H_{p_2}[\{ m_i^\sigma \}]$ (see Fig.~\ref{fig:TAP_quench}). Consequently, different TAP states impose different overlaps $q_o$, besides yielding constrained systems with different energies.
Interpreting this result in dynamic terms, a system initially equilibrated at $T'$ and then quenched to a temperature $T \in [T_{\rm RSB}(T'),T'[$ ends up in different metastable states depending on which TAP state $\{m_i^{\sigma}\}$ it initially lay in. This is different from what happens in the pure $p$-spin model, in which metastable TAP states can be fully followed in temperature.
This chaos in temperature can be observed neither with the FP potential nor with the Schwinger-Dyson equations~\cite{Capone2006} (in the context of dynamics). Indeed, both procedures average over the constraining system (respectively, the initial system), thus making it impossible to keep track of each reference state. This chaotic behavior of the mixed $p$-spin model may also help to interpret the strange dynamics observed for quenches to low temperatures ($T<T_{\rm RSB}(T')$) and the zero temperature dynamics of Ref.~\cite{Folena2019}.
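A minimal numerical sketch of this minimisation is given below (Python with numpy and scipy are assumed); the reference-state parameters $q_\sigma$, $\varepsilon_{p_1}$, $\varepsilon_{p_2}$ and the values of $\Delta H_{p_2}/N$ are illustrative inputs rather than the values used for Fig.~\ref{fig:TAP_quench} (in practice $q_\sigma$, $\varepsilon_{p_1}$ and $\varepsilon_{p_2}$ would be taken from the equilibrium solution at $T'$), and the grid-plus-bracketing scan simply reports whatever non-trivial roots it finds.
\begin{verbatim}
# Minimal sketch: q_o(T) from the non-trivial branch of Eq. (eq:stability),
# for a reference TAP state parametrised by (q_sigma, eps1, eps2) and by
# Delta H_{p2}/N.  All numerical inputs are illustrative.  Assumes numpy/scipy.
import numpy as np
from scipy.optimize import brentq

p1, p2, J1, J2 = 3, 4, 1.0, 1.0

def stationarity(qo, beta, q_sig, eps1, eps2, dH_over_N):
    x = qo**2 / q_sig                        # q_o^2 / q_sigma
    z1 = J1 * beta * (1 - x) * x**(p1 / 2 - 1)
    z2 = J2 * beta * (1 - x) * x**(p2 / 2 - 1)
    u1 = z1 / (J1 * q_sig**(p1 / 2))
    u2 = z2 / (J2 * q_sig**(p2 / 2))
    c = x / (1 - x)                          # 1 / (q_sigma/q_o^2 - 1)
    out = 1.0
    out += 0.5 * p1 * (p1 - 1) * z1**2 + p1 * eps1 * z1
    out += 0.5 * p2 * (p2 - 1) * z2**2 + p2 * eps2 * z2
    out += q_sig * dH_over_N * (u2 - u1) * (u2 * (p2 - 1 - c) - u1 * (p1 - 1 - c))
    return out

q_sig, eps1, eps2 = 0.70, -0.86, -0.85       # illustrative reference state
for dH in (0.0, 0.5, 1.0):                   # illustrative Delta H_{p2}/N values
    for T in (0.78, 0.75, 0.72):
        f = lambda qo: stationarity(qo, 1.0 / T, q_sig, eps1, eps2, dH)
        qs = np.linspace(1e-3, np.sqrt(q_sig) - 1e-3, 400)
        vals = np.array([f(q) for q in qs])
        idx = np.where(vals[:-1] * vals[1:] < 0)[0]
        roots = [brentq(f, qs[i], qs[i + 1]) for i in idx]
        print("dH/N =", dH, " T =", T, " q_o roots:", roots)
\end{verbatim}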
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{figure3.png}
\caption{Changing the temperature $T=1/\beta$, we plot the $q_o$ that minimises the constrained free energy (see Eq.~(\ref{eq:stability})). For the reference $\{m_i^\sigma\}$ we focus on the metastable TAP states at $T'=1/\beta'$ that dominate the Gibbs-Boltzmann distribution $P[\{s_i\}]\propto \exp\big(-\beta' H_J[\{s_i\}]\big)$. We recall that they are fully parametrised by their overlap $q = N^{-1} \sum_i m_i^2$ and the adimensional energy densities $\varepsilon_{p_1}= N^{-1} J_{p_1}^{-1} q^{-p_1/2} H_{p_1}[\{m_i\}]$
and $\varepsilon_{p_2}=N^{-1}J_{p_2}^{-1} q^{-p_2/2} H_{p_2}[\{m_i\}]$. However, $\Delta H_{p_2}[\{ m_i^\sigma \}]$ is a priori not fixed and was taken to have the values given in the key. As parameters we fixed $T'\approx 0.801$, $p_1=3$, $p_2=4$ and $J_{p_1}=J_{p_2}=1$. We took for $H_{p_1}[\{m_i^\sigma\}]$ and $H_{p_2}[\{m_i^\sigma\}]$ the values given by the equilibrated system at $T'$. }
\label{fig:TAP_quench}
\end{figure}
\section{An exact approach for the constrained free energy}
\label{se: an exact approach for the constrained free energy}
In this section we present a method to derive the constrained free energy exactly. It consists in mapping the TAP free energy onto the constrained one. Via the high temperature expansion we have already shown the equivalence between the two free energies when $q_o=q_\sigma$; in this situation the constrained system is a TAP state with local magnetisations $\langle s_i\rangle=m_i^\sigma$. To generalise this result to any value of $q_o$ we start by considering the Legendre transforms of the constrained and TAP free energies, see Eqs.~(\ref{eq: free energy def}) and~(\ref{eq: free energy def2}):
\begin{eqnarray}
-\beta F_J^\star[\beta,\{h v_i\},\lambda]&=&-\beta F_J [\beta,q_o,\{v_i\},l]-\frac{N \lambda l}{2}-N h q_o\\
&=&\ln \Bigg\{ {\rm Tr}_{\{s_i\}}\Big[e^{-\beta H_J[\{s_i\}]-\frac{\lambda}{2}\sum_i (s_i^2-l)-h\sum_i(s_i v_i-q_o)}\Big] \Bigg\}-\frac{N \lambda l}{2}-N h q_o\nonumber\\
&=&\ln \Bigg\{ {\rm Tr}_{\{s_i\}}\Big[e^{-\beta H_J[\{s_i\}]-\frac{\lambda}{2}\sum_i (s_i^2)-\sum_i (h v_i) s_i}\Big] \Bigg\} \;,\nonumber
\end{eqnarray}
\begin{eqnarray}
-\beta F_{\rm TAP}^\star[\beta,\{h_i\},\lambda']&=&-\beta F_{\rm TAP}[\beta,\{m_i\},l]-\frac{N \lambda' l}{2}-\sum_i h_i m_i\\
&=&\ln \Bigg\{ {\rm Tr}_{\{s_i\}}\Big[e^{-\beta H_{J}[\{s_i\}]-\frac{\lambda'}{2}\sum_i (s_i^2-l)-\sum_ih_i(s_i -m_i)}\Big] \Bigg\}-\frac{N \lambda' l}{2}-\sum_i h_i m_i \nonumber\\
&=&\ln \Bigg\{ {\rm Tr}_{\{s_i\}}\Big[e^{-\beta H_J[\{s_i\}]-\frac{\lambda'}{2}\sum_i (s_i^2)-\sum_i h_i s_i}\Big] \Bigg\} \;.\nonumber
\end{eqnarray}
The first step of our approach is to Taylor expand the two free energies in powers of $\beta$, keeping the Lagrange multipliers $\lambda'$, $\lambda$, $\{h_i\}$ and $h$ constant. In more detail, we write
\begin{equation}
-\beta F_J^\star[\beta,\{h v_i\},\lambda] = \sum_{k=0}^{+\infty}\frac{\beta^k}{k!}\partial^k_\beta \Big(-\beta F_J^\star[\beta,\{h v_i\},\lambda]\Big)\Bigr|_{\beta=0}
\end{equation}
and
\begin{equation}
-\beta F_{\rm TAP}^\star[\beta,\{h_i\},\lambda'] = \sum_{k=0}^{+\infty}\frac{\beta^k}{k!}\partial^k_\beta \Big(-\beta F_{\rm TAP}^\star[\beta,\{h_i\},\lambda']\Big)\Bigr|_{\beta=0} \; .
\end{equation}
The $0^{\rm th}$ order is a Gaussian integral in both cases; it is almost identical to the $0^{\rm th}$ order expansion in Sec.~\ref{subsec: Taylor dev}. We have straightforwardly (up to additive constants)
\begin{eqnarray}
-\beta F_J^\star[\beta,\{h v_i\},\lambda]\Bigr|_{\beta=0}=-\frac{N}{2}\ln(\lambda)+\sum_i \frac{h^2 v_i^2}{2\lambda} \; ,
\end{eqnarray}
\begin{eqnarray}
-\beta F_{\rm TAP}^\star[\beta,\{h_i\},\lambda']\Bigr|_{\beta=0}=-\frac{N}{2}\ln(\lambda')+\sum_i \frac{h_i^2}{2\lambda'} \; .
\end{eqnarray}
For the constrained free energy the following orders are simply functions of $\lambda$ and $\{h v_i\}$; they are of the generic form
\begin{eqnarray}
\partial_\beta^k \Big(-\beta F_J^\star[\beta,\{h v_i\},\lambda]\Big)\Bigr|_{\beta=0}&=&\Big\langle A_k [\{s_i\}]\Big\rangle\Bigr|_{\beta=0}+ B_k \Big[\Big\{\langle s_i\rangle\bigr|_{\beta=0}\Big\}\Big]\\
&=&A'_k \Big[\Big\{\langle s_i\rangle\bigr|_{\beta=0} \; ; \;\langle s_i^2\rangle\bigr|_{\beta=0}\Big\}\Big]+ B_k \Big[\Big\{\langle s_i\rangle\bigr|_{\beta=0}\Big\}\Big] \nonumber
\end{eqnarray}
with
\begin{eqnarray}
\label{eq:average1}
\langle s_i\rangle\Bigr|_{\beta=0}
&=&
-\frac{h v_i}{\lambda} \quad \text{and} \quad \langle s_i^2\rangle\Bigr|_{\beta=0}
=
\frac{1}{\lambda}+\frac{h^2 v_i^2}{\lambda^2} \; .
\end{eqnarray}
The case of the TAP free energy is analogous; we have
\begin{eqnarray}
\partial_\beta^k \Big(-\beta F_{\rm TAP}^\star[\beta,\{h_i\},\lambda']\Big)\Bigr|_{\beta=0}&=&\Big\langle A_k [\{s_i\}]\Big\rangle\Bigr|_{\beta=0}+ B_k \Big[\Big\{\langle s_i\rangle\bigr|_{\beta=0}\Big\}\Big]\\
&=&A'_k \Big[\Big\{\langle s_i\rangle\bigr|_{\beta=0} \; ; \;\langle s_i^2\rangle\bigr|_{\beta=0}\Big\}\Big]+ B_k \Big[\Big\{\langle s_i\rangle\bigr|_{\beta=0}\Big\}\Big]\nonumber
\end{eqnarray}
with
\begin{eqnarray}
\label{eq:average2}
\langle s_i\rangle\Bigr|_{\beta=0}
&=&
-\frac{h_i}{\lambda'} \quad \text{and} \quad \langle s_i^2\rangle\Bigr|_{\beta=0}
=
\frac{1}{\lambda'}+\frac{h_i^2}{\lambda'^2} \; .
\end{eqnarray}
As an example the $1^{\rm st}$ order is
\begin{eqnarray}
& \partial_\beta \Big(-\beta F_J^\star[\beta,\{h v_i\},\lambda]\Big)\Bigr|_{\beta=0}=-\Big\langle H_J [\{s_i\}]\Big\rangle\Bigr|_{\beta=0}=-H_J\big[\{\frac{h v_i}{\lambda}\}\big]
\end{eqnarray}
and
\begin{eqnarray}
& \partial_\beta \Big(-\beta F_{\rm TAP}^\star[\beta,\{h_i\},\lambda']\Big)\Bigr|_{\beta=0}=-\Big\langle H_J [\{s_i\}]\Big\rangle\Bigr|_{\beta=0} =-H_J\big[\{\frac{h_i}{\lambda'}\}\big] \; .
\end{eqnarray}
At this stage it is important to note that the expansion in terms of the functions $A'_k$ and $B_k$ is identical for both Legendre transforms; thus they differ from each other only through their spin averages, Eqs.~(\ref{eq:average1}) and (\ref{eq:average2}). The next step of our reasoning is to set $\lambda=\lambda'$ and $h_i=h v_i$ for all the local fields; then the Legendre transforms become equal:
\begin{eqnarray}
\label{eq: equality1}
-\beta F_J^\star[\beta,\{h v_i\},\lambda]=-\beta F_{\rm TAP}^\star[\beta,\{h v_i\},\lambda]\; .
\end{eqnarray}
For the TAP free energy the Taylor expansion is known exactly when we set
\begin{eqnarray}
h_i&=&\frac{-m_i}{l-q}-\beta \partial_{m_i} H_J [\{m_j\}]-\frac{p_1(p_1-1)\beta^2 J_{p_1}^2}{2}(l-q)q^{p_1-2}m_i-\frac{p_2(p_2-1)\beta^2 J_{p_2}^2}{2}(l-q)q^{p_2-2}m_i \; ,\\
\lambda&=&\frac{1}{l-q}+\frac{p_1\beta^2 J_{p_1}^2}{2}(l^{p_1-1}-q^{p_1-1})+\frac{p_2\beta^2 J_{p_2}^2}{2}(l^{p_2-1}-q^{p_2-1})
\end{eqnarray}
with $q=\frac{1}{N}\sum_i m_i^2$. Thus we retrieve Eq.~(\ref{eq:free_energy_TAP})
\begin{eqnarray}
\label{eq: equality2}
-\beta F_{\rm TAP}^\star[\beta,\{h_i\},\lambda] &=&
-\frac{N \lambda l}{2}-\sum_i h_i m_i-\beta F_{\rm TAP}[\beta,\{m_i\},l] \\
&=&-\frac{N \lambda l}{2}-\sum_i h_i m_i+\frac{N}{2}\ln(l-q)- \beta N q^{\frac{p_1}{2}} \, J_{p_1} \, \varepsilon_{p_1} -\beta N q^{\frac{p_2}{2}} \, J_{p_2} \, \varepsilon_{p_2}
\nonumber\\
& & +\frac{N \beta^2 J_{p_1}^2}{4}\Big[l^{p_1}-p_1 l q^{p_1-1}+(p_1-1)q^{p_1}\Big]+\frac{N \beta^2 J_{p_2}^2}{4}\Big[l^{p_2}-p_2 l q^{p_2-1}+(p_2-1)q^{p_2}\Big] \; . \nonumber
\end{eqnarray}
Combining Eqs.~(\ref{eq: equality1}) and (\ref{eq: equality2}) we can finally write
\begin{eqnarray}
-\beta F_J^\star[\beta,\{h v_i\},\lambda] =-\beta F_{\rm TAP}^\star[\beta,\{h v_i\},\lambda]=-\beta F_{\rm TAP}[\beta,\{m_i\},l] -\frac{N \lambda l}{2}-h\sum_i v_i m_i
\end{eqnarray}
with the prescription
\begin{eqnarray}
\label{eq:h vs magnetisations}
h v_i&=&\frac{-m_i}{l-q}-\beta \partial_{m_i} H_J [\{m_j\}]-\frac{p_1(p_1-1)\beta^2 J_{p_1}^2}{2}(l-q)q^{p_1-2}m_i-\frac{p_2(p_2-1)\beta^2 J_{p_2}^2}{2}(l-q)q^{p_2-2}m_i \; ,\\
\lambda&=&\frac{1}{l-q}+\frac{p_1\beta^2 J_{p_1}^2}{2}(l^{p_1-1}-q^{p_1-1})+\frac{p_2\beta^2 J_{p_2}^2}{2}(l^{p_2-1}-q^{p_2-1}) \; .
\end{eqnarray}
To satisfy the spherical constraint and the overlap with the reference we write Eqs.~(\ref{eq: constraining the free energy}) and (\ref{eq: constraining the free energy(2)}) as follows
\begin{eqnarray}
\partial_\lambda \big(-\beta F_J^\star[\beta,\{h v_i\},\lambda]\big)
&=&
\partial_\lambda \big(-\beta F_{\rm TAP}^\star[\beta,\{h v_i\},\lambda]\big)=\frac{N l}{2} \implies l = 1 \; ,
\\
\partial_h \big(-\beta F_J^\star[\beta,\{h v_i\},\lambda]\big)
&=&
\sum_i v_i \partial_{hv_i} \big(-\beta F_{\rm TAP}^\star[\beta,\{h v_i\},\lambda]\big)=\sum_i v_i m_i \;\implies \sum_i v_i m_i = N q_o \; .
\end{eqnarray}
We can also note, as explained in subsection \ref{sec: justification of the approach}, that extremising the constrained free energy with respect to $q_o$ is equivalent to setting $h=0$. It follows straightforwardly from Eq.~(\ref{eq:h vs magnetisations}) that the state $\{m_i\}$ obtained by minimising the constrained free energy is a metastable TAP state that verifies
\begin{eqnarray}
\label{eq: constrained TAP}
&0=\frac{-m_i}{1-q}-\beta \partial_{m_i} H_J [\{m_j\}]-\frac{p_1(p_1-1)\beta^2 J_{p_1}^2}{2}(1-q)q^{p_1-2}m_i-\frac{p_2(p_2-1)\beta^2 J_{p_2}^2}{2}(1-q)q^{p_2-2}m_i \; .
\end{eqnarray}
To sum up, the constrained free energy describes a TAP state $\{m_i\}$ with a spherical norm $l=1$ and an overlap $\sum_i m_i v_i=N q_o$ with the reference. It is a metastable TAP state at the temperature $1/\beta$ when the constrained free energy is extremised with respect to $q_o$.
There is one last ambiguity that we have to take care of with the Legendre transforms. In fact, we can map one value of $q_o$ to one value of $h$ only in a region where the constrained free energy is either convex or concave. The same problem appears with the TAP free energy for the conjugate variables $\{h_i\}$ and $\{m_i\}$. More practically, if we set a value for $q_o$, and consequently fix $h$, there are still numerous sets of magnetisations $\{m_i\}$ that verify Eq.~(\ref{eq:h vs magnetisations}).
However, if we focus on a reference $\{v_i=m_i^\sigma\}$ that is a metastable TAP state at $\beta'$, it is possible to pin down the right set of magnetisations for a given value of $q_o$. Indeed, one can recall that the constrained free energy (\ref{eq:free energy mixed p-spin}) is known exactly under this assumption when $q_o=q_\sigma$:
\begin{equation}
-\beta F_J[\beta,q_o=q_\sigma,\{m_i^{\sigma}\}] =-\beta F_{\rm TAP}\big[\beta,\{ m_i^{\sigma}\},l=1\big] \; .
\end{equation}
In that case the constrained free energy describes a system with magnetisations $\{m_i^\sigma\}$ at temperature $1/\beta$. Consequently, the constrained free energy in the convex/concave region around $q_o=q_\sigma$ describes a TAP state with magnetisations $\{m_i\}$ in the convex/concave region around $\{m_i^\sigma\}$ (see Fig.~\ref{fig:quench}). As for a general reference state $\{v_i\}$, this TAP state has a spherical norm $l=1$ and an overlap $\sum_i m_i m_i^\sigma=N q_o$ with the reference. It extremises the constrained free energy when it becomes metastable, in other words when the local magnetisations follow Eq.~(\ref{eq: constrained TAP}).
\begin{figure}
\centering
\includegraphics[width=0.65\textwidth]{figure2a.png}
\includegraphics[width=0.61\textwidth]{figure2b.png}
\caption{The upper and middle graphs represent the TAP free energy landscape at temperatures $T'=1/\beta'$ and $T=1/\beta$ respectively. The lower graph represents the constrained free energy landscape at temperature $T=1/\beta$. As the reference state plays a particular role in our discussion, we emphasised its coordinates ($\{m_i^\sigma\}$ for the TAP free energy and $q_\sigma$ for the constrained one) with red dashed lines. If we select the convex/concave region around $q_o=q_\sigma$, the constrained free energy describes a TAP state in the same convex/concave region as the reference state $\{m_i^\sigma\}$. The two regions are delimited in blue in the middle and lower graphs. The fields $h$ and $\{h_i\}$ are defined through the slope of their respective free energies; the green arrows emphasise the non-bijective nature of the Legendre transforms that results from this definition. Indeed, it is straightforward to see that a non-convex/concave function can give the same set of fields $\{h_i\}$ and $h$ for different values of $\{m_i\}$ and $q_o$. }
\label{fig:quench}
\end{figure}
\section{Conclusions and outlook}
In this paper we first showed how chaos is present for any quench when the system is initially equilibrated at $T'\in [T_s,T_d]$. Using both the TAP approach and the FP potential we saw that there is in fact no simple rescaling $m_i \rightarrow\alpha \,m_i$ of all magnetisations linking the reference state at a given temperature to the constrained one at a different temperature. In dynamical terms, this is interpreted as a non-trivial relation, or an absence of any simple one, between the initial state and the quenched one.
We then introduced a constrained free energy which enabled us to describe a system equilibrated at temperature $T$ and enforced to have a fixed overlap with a reference state. Performing a high temperature expansion of this free energy we saw the role of the reference in the context of constrained equilibrium. In particular, for the mixed $p$-spin model, each reference (taken to be metastable at a temperature $T'$) yields a different equilibrium state depending on the value taken by a ``parameter" that we called $\Delta H_{p_2}[\{m_i^\sigma\}]$. Finally we linked this new free energy to the unconstrained TAP one; this demonstrated that the equilibrated system corresponds to one given TAP state, which is metastable when the constrained free energy is extremised with respect to the overlap with the reference.
To reach a complete understanding of the metastable properties, the number of metastable TAP states (also called the complexity) with fixed parameters $q$, $H_{p_1}[\{m_i\}]$, $H_{p_2}[\{m_i\}]$ and
$\Delta H_{p_2}[\{ m_i^\sigma \}]$ is still missing. This quantity would probably allow us to match the constrained free energy with the Franz-Parisi potential (up to $O(\beta^3)$ correction terms), in the same fashion as the TAP free energy is linked via an extra complexity term to the free energy derived with the replica trick. Besides, this calculation would tell us in which interval the value of $\Delta H_{p_2}[\{ m_i^\sigma \}]$ is expected to lie.
Another interesting perspective would be to determine the complexity $\Sigma[q,q_o,\{m_i^\sigma\}]$ of the metastable TAP states with an order parameter $q$ and a fixed overlap $q_o$ with another metastable TAP state $\{m_i^\sigma\}$. Previous papers have already pushed forward in this direction. For example, in Refs.~\cite{Ros2018,Ros2019} Ros {\it et al.} have calculated such a complexity with an extra average over the disorder, in other words they performed the annealed average ${\rm I\!E}\Big[\exp{\big(\Sigma[q,q_o,\{m_i^\sigma\}]\big)}\Big]$. Unfortunately, as our TAP-like approach is specifically disorder dependent, we cannot directly use their results in our discussion.
\label{sec:co}
\acknowledgements{We warmly thank J. Kurchan, T. Rizzo, M. Tarzia and F. Zamponi for very useful discussions and suggestions.}
\section*{Introduction}
Throughout the paper we denote by $\pi$ a set of primes. A finite group is called a {\em $\pi$-group},
if all prime divisors of its order belong to~$\pi$. Given a finite group $G$, by $\Oo_\pi(G)$ we denote its {\em $\pi$-radical}, i.~e. the largest normal $\pi$-subgroup of~$G$,
and by $G^\sharp$ we always denote $G\setminus \{1\}$.
The Baer--Suzuki theorem \cite{Baer,Suz,AlpLy} states
\begin{BSThm}
Let $p$ be a prime, $G$ a finite group, and $x\in G$.
Then $x\in \Oo_p(G)$ if and only if $\langle x,x^g \rangle$ is a $p$-group for every~${g\in G}$.
\end{BSThm}
Clearly, in this theorem only the ``if'' part is nontrivial.
Various generalizations and analogues of the Ba\-er--Su\-zu\-ki theorem were investigated by many authors in
\cite{AlpLy,Mamont,Soz,OMS,Gu,FGG,GGKP,GGKP1,GGKP2, Palchik, Tyut,Tyut1,BS_odd,BS_Dpi}. For example, N. Gordeev, F. Grunewald, B. Kunyavskii, and E. Plotkin in \cite{GGKP2},
and independently P. Flavell, S. Guest, and R. Guralnick in \cite{FGG}, have shown that if every four elements in a given conjugacy class of a finite group generate a solvable subgroup,
then the conjugacy class is contained in the solvable radical of the group.
The following proposition shows that one cannot replace $p$ in the Baer--Suzuki theorem with a set of primes~$\pi$.
\begin{Prop}
\label{ex1}
Let $m$ be a natural number. Choose a prime $r$ and a set $\pi$ of primes so that $r-1>m$ and $\pi$ includes all primes less than~$r$ and does not include~$r$. Then,
in the symmetric group $G=S_r$, any $m$ transpositions generate a $\pi$-sub\-gro\-up, while $\Oo_\pi(G)=1$.
\end{Prop}
The proposition not only shows that, in general, the fact that $x$ together with any of its conjugates generates a $\pi$-sub\-gro\-up does not guarantee that $x$ lies in $\Oo_\pi(G)$.
It also shows that there does not exist $m$ such that, for every set of primes $\pi$ and for every finite group $G$, the equality
$$
\Oo_\pi(G)=\{x\in G\mid \langle x_1,\ldots,x_m\rangle \text{ is a }\pi\text{-group for every }x_1,\ldots,x_m\in x^G\}
$$
holds.
However we show that a weaker analogue of the Baer--Suzuki theorem for the $\pi$-radical holds. The goal of this paper is to prove Theorems~\ref{t2} and~\ref{t3} below.
\begin{theorem}\label{t2}
Let $\pi$ be a set of primes. Then there exists a natural $m$ (depending on $\pi$) such that for every finite group $G$ the equality
$$
\Oo_\pi(G)=\{x\in G\mid \langle x_1,\ldots,x_m\rangle \text{ is a }\pi\text{-group for every }x_1,\ldots,x_m\in x^G\}
$$
holds, i.e. $x\in\Oo_\pi(G)$ if and only if every $m$ conjugates of $x$ generate a $\pi$-sub\-gro\-up.
\end{theorem}
Similarly to~\cite[Definition~1.15]{GGKP}, given a set of primes $\pi$, let $\bs(\pi)$ be the minimal $m$ such that the conclusion of Theorem~\ref{t2} remains valid,
i.~e. the minimal $m$ such that, for every finite group $G$ and its element $x$, if any $m$ conjugates of $x$ generate a $\pi$-sub\-gro\-up then $x\in \Oo_\pi(G)$.
In other words $m<\bs(\pi)$ if and only if there exists a group $G$ and $x\in G\setminus \Oo_\pi(G)$ such that any $m$ conjugates of $x$ generate a $\pi$-sub\-gro\-up. Replacing
$G$ with $G/\Oo_\pi(G)$ and taking the image of $x$ in $G/\Oo_\pi(G)$, we see that $\bs(\pi)$ can be defined by the condition that if $m<\bs(\pi)$ then there exists $G$ with
$\Oo_\pi(G)=1$ and a nonidentity $x\in G$ such that any $m$ of its conjugates generate a $\pi$-sub\-gro\-up.
The conclusion of Theorem~\ref{t2} is evident if $\pi$ is the set of all primes. In this case, we can take $m$ in the statement of Theorem~\ref{t2} to be equal to $1$ (and even $0$).
The Baer--Su\-zu\-ki theorem implies that if $\pi=\{p\}$, then $\bs(\pi)=2$. V.N.\,Tyutyanov \cite{Tyut1} and, much later, the second author in~\cite{BS_odd} showed that if $2\notin\pi$
then $\bs(\pi)=2$. In this paper, we prove the following
\begin{theorem}\label{t3}
Let $\pi$ be a proper subset of the set of all primes. Then
$$
r-1\leq \bs(\pi)\leq \max\{11,2(r -2)\},
$$
where $r$ is the minimal prime not in~$\pi$.
\end{theorem}
The lower bound in Theorem \ref{t3} follows by Proposition~\ref{ex1}.
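For instance (the particular sets $\pi$ below are chosen purely for illustration), Theorem~\ref{t3} gives
$$
4\leq \bs(\{2,3\})\leq \max\{11,\,2\cdot 3\}=11 \qquad\text{and}\qquad 12\leq \bs(\{2,3,5,7,11\})\leq \max\{11,\,2\cdot 11\}=22,
$$
since the minimal excluded primes are $r=5$ and $r=13$ respectively. For $r\leq 7$ the upper bound is thus governed by the uniform constant $11$, while for $r\geq 11$ the term $2(r-2)$ takes over.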
In order to prove Theorem \ref{t2} and obtain the upper bound in Theorem \ref{t3}, we use the reduction to almost simple groups obtained in \cite{BS_odd} (see Lemma~\ref{red}), and the results of R.~Guralnick and J.~Saxl~\cite{GS}. They obtained, for every finite simple group $L$ and every automorphism $x\in \Aut(L)$ of prime order, upper bounds\footnote{For sporadic groups these bounds were substantially improved in~\cite{DiMPZ}.} on~$\alpha(x,L)$, defined as follows.
\begin{definition} \cite{GS}\label{alphaxL} Let $L$ be a nonabelian simple group and $x$ its nonidentity automorphism. Let $\alpha(x)=\alpha(x,L)$ be the minimal number of $L$-conjugates of~$x$ which generate the group $\langle x, \Inn(L)\rangle$.
\end{definition}
Similarly to Definition \ref{alphaxL}, we introduce the number $\beta_r(x,L)$ which also plays an important role in this paper.
\begin{definition} Let $r$ be a prime divisor of the order of a nonabelian simple group $L$. For a nonidentity automorphism $x$ of $L$, denote by
$\beta_r(x,L)=\beta_r(x)$ the minimal number of
$L$-conjugates of $x$ which generate a subgroup of order divisible by~$r$.
\end{definition}
It follows immediately from the definitions that, for every nonabelian simple group $L$ and every nonidentity $x\in\Aut(L)$, if $r$ divides $|L|$ then the inequality
$$
\beta_r(x,L)=\beta_{r}(x)\leq \alpha(x,L)
$$
holds.
The results from \cite{BS_odd} imply that $\beta_2(x,L)\leq2$ for any nonabelian simple group $L$ and its nonidentity automorphism~$x$.
In this paper, we always assume that $r$ is an odd prime. We provide a rather rough upper bound for $\beta_r(x,L)$ depending on $r$ only, where $L$ is a simple group of order divisible by $r$, and $x\in \Aut(L)^\sharp$. It follows by definition that if $y\ne 1$ is a power of~$x$, then $\beta_r(x,L)\leq\beta_r(y,L)$. In particular, it is enough to find upper bounds for $\beta_r(x,L)$ in the case when the order of $x$ is prime. We derive Theorem~\ref{t2} and the upper bound for $\bs(\pi)$ in Theorem~\ref{t3} from the following statement which is the main result of this paper.
\begin{theorem}\label{t4}
Let $r$ be an odd prime, $L$ a nonabelian simple group, and let $x\in\Aut(L)$ be of prime order. Then one of the following statements is true.
\begin{itemize}
\item[$(1)$] $\alpha(x,L)\leq 11$;
\item[$(2)$] $L$ is isomorphic to an alternating or classical group of Lie type, the order of $L$ is not divisible by $r$, and
$\alpha(x,L)\leq r-1$;
\item[$(3)$] $L$ is isomorphic to an alternating or classical group of Lie type, the order of $L$ is divisible by $r$, and
$\beta_r(x,L)\leq 2(r-2)$.
\end{itemize}
In particular, if $r$ divides $|L|$ then $\beta_r(x,L)\leq\max\{11, 2(r-2)\}$, and if $r$ does not divide~$|L|$, then $\alpha(x,L)\leq \max\{11, r-1\}\leq \max\{11, 2(r-2)\}$.
\end{theorem}
The number $11$ in Theorem~\ref{t4} (and, as a consequence, in Theorem~\ref{t3}) is a uniform bound for $\alpha(x,L)$ in the case when $L$ is a sporadic or exceptional Lie type group. The existence of such a bound follows by the results in~\cite{GS}. It is likely that a detailed investigation of $\beta_r(x,L)$ in sporadic, exceptional, and classical Lie type groups of small ranks allows one to reduce or even remove the number $11$ from both Theorems~\ref{t3} and~\ref{t4}. We think the bound $2(r-2)$ for $\bs(\pi)$ and $\beta_r(x,L)$ is also too big. The authors do not know any counterexamples to the following statements (notice that the first is a corollary to the second).
\begin{Conj}
Let $\pi$ be a proper subset of the set of all primes containing at least two elements, and let $r$ be the minimal prime not in~$\pi$. Then $$\bs(\pi)=\left\{\begin{array}{rl}
r, & \text{ if } r\in\{2,3\}, \\
r-1, & \text{ if } r\geq 5.
\end{array}\right.
$$
\end{Conj}
\begin{Conj}
For any nonabelian simple group $L$ of order divisible by~$r$ and every automorphism $x$ of $L$ of prime order, we have
$$\beta_r(x,L)\leq\left\{\begin{array}{rl}
r, & \text{ if } r\in\{2,3\}, \\
r-1, & \text{ if } r\geq 5.
\end{array}\right.
$$
\end{Conj}
In the case $r=2$, as we have noted above, both conjectures are true. There are examples (see~\cite[Example~2]{BS_odd}, and Proposition~\ref{beta_A_n_prop} below) showing that, for $r=3$, the value $\beta_r(x,L)$ can be equal to~$3$. Thus the bound on $\bs(\pi)$ follows from the bounds on $\beta_r(x,L)$ for a nonabelian simple group~$L$ and an odd prime~$r$. In the case of alternating groups, the sharp bound is given by the following proposition.
\begin{Prop} \label{beta_A_n_prop}
Let $L=A_n$, $n\geq 5$, let $r\leq n$ be a prime, and let $x\in \Aut (L)$ be of prime order. Then
\begin{itemize}
\item[$(1)$] $\beta_r(x,L)=r-1$ if $x$ is a transposition;
\item[$(2)$] if $r=3$, $n=6$, and $x$ is an involution not lying in $S_6$ then $\beta_r(x,L)= 3$;
\item[$(3)$] $\beta_r(x,L)\leq r-1$ for all other $x$.
\end{itemize}
\end{Prop}
Recall that a class of finite groups~$\mathfrak{X}$ is called {\em radical} if every finite group $G$ possesses an {\em $\mathfrak{X}$-radical} $G_{\mathfrak{X}}$, i.e. the largest normal $\mathfrak{X}$-sub\-gro\-up \footnote{As usual, a group $X\in\mathfrak{X}$ is called an {\em $\mathfrak{X}$-gro\-up}. The largest normal $\mathfrak{X}$-sub\-gro\-up is the normal $\mathfrak{X}$-sub\-gro\-up containing any other normal $\mathfrak{X}$-sub\-gro\-up.}. According to~\cite[Definition~1.15]{GGKP}, the {\em Baer--Suzuki width} $\bs(\mathfrak{X})$ of a radical class $\mathfrak{X}$ is defined as the exact lower bound of the set of all natural numbers $b$ such that, for every finite group $G$, the $\mathfrak{X}$-ra\-di\-cal $G_{\mathfrak{X}}$ is equal to
$$\{x\in G\mid \langle x_1,\dots,x_b\rangle\in\mathfrak{X}\text{ for every } x_1,\dots,x_b\in x^G\}.$$
N. Gordeev, F. Grunewald, B. Kunyavskii, and E. Plotkin state the following problem~\cite[Problem~1.16]{GGKP}: for which radical classes~$\mathfrak{X}$ does the inequality $\bs(\mathfrak{X})<\infty$ hold? The results of this paper show that, for every set of primes $\pi$, the class of all $\pi$-groups has finite Baer--Suzuki width. Moreover, we believe that the results of the paper make substantial progress toward the solution of~\cite[Problem~1.16]{GGKP} in general.
\section{Preliminaries}
\subsection{Reduction to almost simple groups and general lemmas}
\begin{definition}
\cite{BS_odd} Let $\pi$ be a set of primes and $m$ be a positive integer. We say that a finite group $G$ {\it lies in ${\mathcal {BS}}_{\pi}^{m}$} ($G\in{{\mathcal B}{\mathcal S}}_{\pi}^{m}$), if
$$
\Oo_\pi(G)=\{x\in G\mid \langle x_1,\ldots,x_m\rangle \text{ is a }\pi\text{-group for every }x_1,\ldots,x_m\in x^G\}.
$$
\end{definition}
\begin{lemma}\label{red} {\em \cite[Lemma~11]{BS_odd}} Suppose that not all finite groups are contained in $\BS_{\pi}^{m}$ for some $m\geq 2$, and choose ${G\notin\BS_{\pi}^{m}}$ of minimal order. Then $G$ possesses a subgroup $L$ and an element $x$ such that
\begin{itemize}
\item[$(1)$] $L\trianglelefteq G$;
\item[$(2)$] $L$ is nonabelian simple;
\item[$(3)$] $L$ is neither $\pi$- nor $\pi'$-gro\-up, where $\pi'$ is the complement of $\pi$ in the set of all primes;
\item[$(4)$] $C_G(L)=1$;
\item[$(5)$] any $m$ conjugates of $x$ generate a $\pi$-gro\-up;
\item[$(6)$] $x$ has prime order;
\item[$(7)$] $G=\langle x, L\rangle$.
\end{itemize}
\end{lemma}
\begin{lemma} \label{2notinpi} {\em \cite[Theorem 1]{BS_odd}} If $2\not\in\pi$, then $\BS_\pi^2$ includes all finite groups.
\end{lemma}
Let $G$ be fixed. For $x\in G$ and a group $K$ we say that $x$ {\it is drawn into $K$} ($x\leadsto K$),
if there exists a subgroup $H\leq G$ such that
\begin{itemize}
\item[$(1)$] $x\in H$;
\item[$(2)$] there exists an epimorphism from $H$ onto $K$;
\item[$(3)$] the image of $x$ in $K$ under the epimorphism is nontrivial.
\end{itemize}
The following statement is immediate from the definition.
\begin{lemma} \label{leadsto}
Let $G$ and $K$ be almost simple groups with socles $S$ and $L$ respectively, and assume a prime $r$ divides the orders of both $S$ and $L$. Assume that the inequality $\beta_r(y,L)\leq m$ holds for all $y\in K^\sharp$. If $x\leadsto K$ for some $x\in G^\sharp$, then $\beta_r(x,S)\leq m$.
\end{lemma}
The next lemma allows one to find a conjugate of the centralizer of an automorphism $x$ that is normalized, but not centralized, by~$x$.
\begin{lemma} \label{guest} {\em \cite[Lemma~15]{GuestLevy}}
Let $G$ be a finite group and $x\in\Aut(G)$ be a $p$-element. Set $M=C_G(x)$. Suppose that $p$ divides $|G:M|$, and either $M=N_G(M)$ or $Z(M) =1$. Then there exists a conjugate of $M$ that is normalized but not centralized by~$x$.
\end{lemma}
\subsection{Information on almost simple groups}
For finite simple groups we use the notations of ATLAS~\cite{Atlas}.
In Table~\ref{tab1} we collect the information on the orders of finite simple classical groups.
\begin{table}
\caption
{Simple classical groups of Lie type over a field $\mathbb{F}_q$}\label{tab1}
\begin{tabular}{|l|l|c|r|}\hline
$L$ & Restrictions & $|L|$& $d$ \\
\hline\hline
\multirow{2}{*}{$A_{n-1}(q)\cong L_n(q)$}&$n\geq2$; $q>3$ for & \multirow{2}{*}{$\frac{1}{d} q^{n(n-1)/2} \prod\limits_{i=1}^n(q^{i}-1)$}& \multirow{2}{*}{$(n,q-1)$}\\
& $n=2$&&\\\hline
\multirow{2}{*}{${^2A}_{n-1}(q)\cong U_n(q)$}& $n\geq3$; $q>2$ &\multirow{2}{*}{$\frac{1}{d} q^{n(n-1)/2} \prod\limits_{i=1}^n(q^{i}-(-1)^{i})$} &\multirow{2}{*}{$(n,q+1)$}\\
& for $n=3$ &&\\ \hline
$B_n(q)\cong O_{2n+1}(q)$& $n\geq3$ & $\frac{1}{d} q^{n^2} \prod\limits_{i=1}^n(q^{2i}-1)$& $(2,q-1)$\\
\hline
\multirow{2}{*}{$C_n(q)\cong S_{2n}(q)$}& $n\geq2$; $q>2$ for &\multirow{2}{*}{ $\frac{1}{d} q^{n^2} \prod\limits_{i=1}^n(q^{2i}-1)$} & \multirow{2}{*}{$(2,q-1)$}\\
& $n=2$&&\\\hline
$D_n(q)\cong O^+_{2n}(q)$& $n\geq4$ &$\frac{1}{d} q^{n(n-1)}(q^n-1) \prod\limits_{i=1}^{n-1}(q^{2i}-1)$ & $(4,q^n-1)$ \\ \hline
${^2D}_n(q)\cong O_{2n}^-(q)$& $n\geq4$ &$\frac{1}{d} q^{n(n-1)}(q^n+1) \prod\limits_{i=1}^{n-1}(q^{2i}-1)$ & $(4,q^n+1)$\\ \hline
\end{tabular}
\end{table}
Our terminology for automorphisms of groups of Lie type agrees with that of \cite{GLS} and differs from that of \cite{Car}. We quote it here explicitly.
The definition of inner-diagonal automorphisms is the same in \cite{Car} and in \cite{GLS}, and we use this definition. In \cite[Definition 2.5.10]{GLS} subgroups $\Phi_K$ and $\Gamma_K$ of $\Aut(K)$ are defined for an arbitrary group of Lie type $K$. For groups of Lie type we usually use the letter~$L$, so we denote the corresponding subgroups by $\Phi_L$ and $\Gamma_L$. We denote the group of inner-diagonal automorphisms of $L$ by $\L$ or $\Inndiag(L)$.
\begin{lemma}\label{Aut} {\em \cite[Theorem 2.5.12]{GLS}} Let $L$ be a simple group of Lie type over a field ${\mathbb F}_q$ of characteristic $p$. Then $\Aut(L)$ is a split extension of $\L$ by an abelian group $\Phi_L\Gamma_L$. Moreover $\Phi_L\Gamma_L\cong\Phi_L\times\Gamma_L$, except the following cases:
\begin{itemize}
\item[$(1)$] $L=B_2(q)$, $q$ is a power of $2$ and $\Phi_L\Gamma_L$ is cyclic with $|\Phi_L\Gamma_L:\Phi_L|=2$;
\item[$(2)$] $L=F_4(q)$, $q$ is a power of $2$ and $\Phi_L\Gamma_L$ is cyclic with $|\Phi_L\Gamma_L:\Phi_L|=2$;
\item[$(3)$] $L=G_2(q)$, $q$ is a power of $3$ and $\Phi_L\Gamma_L$ is cyclic with $|\Phi_L\Gamma_L:\Phi_L|=2$.
\end{itemize}
\end{lemma}
An automorphism of prime order $\alpha\in\Aut(L)\setminus\L$ of an untwisted group of Lie type $L$ is called
\begin{itemize}
\item[]{\em field modulo $\L$}, if the image of $\alpha$ in $\Aut(L)/\L$ lies in $\Phi_L\L/\L$; elements of $\Phi_L$ we call {\em canonical field automorphisms} of $L$;
\item[] {\em graph modulo} $\L$, if $L$ is not isomorphic to $B_2(2^n)$, $F_4(2^n)$, and $G_2(3^n)$ and the image of $\alpha$ in $\Aut(L)/\L$ lies in $\Gamma_L\L/\L$; elements of $\Gamma_L$ we call {\it canonical graph automorphisms} of $L$;
\item[] {\em graph-field modulo} $\L$ in the remaining cases; at that elements of $\Phi_L\Gamma_L\setminus\Phi_L$ for $B_2(2^n)$, $F_4(2^n)$, and $G_2(3^n)$, and elements of $\Phi_L\Gamma_L\setminus(\Phi_L\cup\Gamma_L)$ for the remaining untwisted groups of Lie type we call {\it canonical graph-field automorphisms} of $L$.
\end{itemize}
Let $L$ be a twisted group of Lie type, not isomorphic to a Suzuki or a Ree group, obtained from its untwisted analogue as the set of fixed points of an automorphism of order $d\in\{2,3\}$ (see \cite[Ch.~13]{Car}). In this case $\Gamma_L=1$ (see \cite[Theorem~2.5.12]{GLS}). Consider $\alpha\in\Aut(L)\setminus\L$ of prime order. We say that $\alpha$ is
\begin{itemize}
\item[]{\em field modulo $\L$}, if the order of $\alpha$ is coprime to $d$; elements of $\Phi_L$ we call {\em canonical field automorphisms} of $L$;
\item[] {\em graph modulo} $\L$, if the order of $\alpha$ equals $d$; elements of $\Phi_L$ we call {\em canonical graph automorphisms} of $L$.
\item[] there are no graph-field automorphisms of prime order modulo $\L$.
\end{itemize}
Finally, all noninner automorphisms of a Suzuki or a Ree group $L$, are called {\em field modulo~$\L$}.
Notice that the notions of a field and a graph-field modulo $\L$ automorphism $\alpha$ of $L$ coincide with the notions of a field and a graph-field automorphism in \cite[Definition~2.5.13]{GLS}.
\begin{lemma}\label{Field_Aut} {\em \cite[Proposition~4.9.1]{GLS}} Let $L={}^d\Sigma(q)$ be a simple group of Lie type over a field ${\mathbb F}_q$, where $\Sigma$ is an indecomposable root system, $d$ is either an empty symbol, or $2$ (i.e. ${}^d\Sigma(q)\ne{}^3D_4(q)$). Let $x$ and $y$ be automorphisms of $L$, having the same prime order. Assume also that both $x$ and $y$ are either field or graph-field automorphisms modulo~$\L$. Then subgroups $\langle x\rangle$ and
$\langle y\rangle$ are conjugate under~$\L$. Moreover if $x$ is a graph-field automorphism, and ${}^d\Sigma\in\{A_{n-1},D_{n}\}$, then $|x|=2$ and ${}^2\Sigma(q^{1/2})\leq C_{L}(x)\leq \widehat{{}^2\Sigma(q^{1/2})}$.
\end{lemma}
By $\tau$ we denote the automorphism of $\GL_n(q)$, acting by
$$\tau:A\mapsto (A^{-1})^\top,$$
where $A^\top$ is the transpose of $A$. If $q$ is a power of a prime $p$, by $\varphi_{p^k}$ we denote an automorphism of $\GL_n(q)$ acting by
$$\varphi_{p^k}:(a_{i,j})\mapsto (a_{i,j}^{p^k}).$$
We use the same symbols $\tau$ and $\varphi_{p^k}$ for the induced automorphisms of $\mathrm{PGL}_n(q)$, $\mathrm{SL}_n(q)$, and $L_n(q)=\mathrm{PSL}_n(q)$. In particular, if $q=p^k$ and $r$ divides $k$, then $\varphi_{q^{1/r}}$ is a field automorphism of order~$r$. For definiteness, we always assume that $\mathrm{PSU}_n(q)=O^{p'}\left(C_{\mathrm{PGL}_n(q^2)}\left(\tau\varphi_{q}\right)\right)$. As usual, we use the notations $L_n^\varepsilon(q)=\mathrm{PSL}_n^\varepsilon(q)$, $\varepsilon=\pm$ for linear and unitary groups, assuming $L_n^+(q)=\mathrm{PSL}_n^+(q)=\mathrm{PSL}_n(q)$ and $L_n^-(q)=\mathrm{PSL}_n^{-}(q)=\mathrm{PSU}_n(q)$. By $E_k$ we denote the identity $(k\times k)$-matrix and by $A\otimes B$ the Kronecker product of matrices $A$ and~$B$.
\begin{lemma}\label{GraphAutGLU}
Let $L=L_n^\varepsilon(q)$ be a simple projective special linear or unitary group and $n\geq 5$. Then the following statements hold:
\begin{itemize}
\item[{\em (1)}] If $n$ is odd, then the coset $\L\tau$ of $\langle\L, \tau\rangle$ contains exactly one class of conjugate involutions, and every such involution normalizes, but does not centralize, a subgroup $H$ of $L$ such that $H\cong O_n(q)$.
\item[{\em (2)}] If $n$ is even and $q$ is odd, then the coset $\L\tau$ of $\langle\L, \tau\rangle$ contains exactly three classes of conjugate involutions with representatives $x_0,x_+,x_-$ such that $x_\delta$ normalizes but does not centralize $H_\delta$, where $\delta\in \{0,+,-\}$
$$
H_\delta\cong\left\{
\begin{array}{cc}
S_n(q), &\text{if } \delta=0, \\
O^+_n(q), &\text{if } \delta=+, \\
O^-_n(q), &\text{if } \delta=-.
\end{array}
\right.
$$
\item[{\em (3)}] If both $n$ and $q$ are even, then the coset $\L\tau$ of $\langle\L, \tau\rangle$ contains exactly two classes of conjugate involutions, and every such involution normalizes, but does not centralize, a subgroup isomorphic to~$S_n(q)$.
\end{itemize}
\end{lemma}
\begin{proof}[Proof]
The statement on the centralizers and the number of $\L$-conjugacy classes of involutions for $q$ odd follows by \cite[Theorem~4.5.1 with Table~4.5.1]{GLS}, while for $q$ even by \cite[(19.8)]{AsSeitz}. If $n$ is odd, then the socle of the centralizer in $\L$ of a graph involution is isomorphic to $H$, while for $n$ even and $q$ odd the socle of the centralizer in $\L$ is isomorphic to $H_\delta$; in particular, the centralizers of $x_0$, $x_+$, and~$x_-$ are pairwise nonisomorphic. Moreover, the centers of these centralizers are trivial. Since for $n\geq 5$ these centralizers have even indices, by Lemma~\ref{guest} every such involution normalizes, but does not centralize, a subgroup conjugate to its centralizer. Thus we obtain statements (1) and~(2) of the lemma.
Now assume that both $n$ and $q$ are even.
In this case we can take $x=\tau I$ as a representative of one of the conjugacy classes of involutions, where $I$ is the projective image of a block-diagonal matrix with blocks $\left(\begin{array}{rr}0&1\\1&0 \end{array}\right)$ on the diagonal, i.~e.
$$
I=\left(
\begin{array}{rr}
0 & 1 \\
1 & 0
\end{array}
\right)\otimes E_{n/2}.
$$
It is easy to see that $C_L(x)\cong S_n(q)$ (if $L$ is linear, direct computations show that $C_L(x)$ consists exactly of projective matrices $A$ satisfying $A^\top IA=I$, and these matrices form $S_n(q)$; if $L$ is unitary, see \cite[(19.8)]{AsSeitz}). Since the index $\vert L:C_L(x)\vert$ is even and $Z(C_L(x))=1$, by Lemma~\ref{guest} there exists a subgroup $M$ of $L$, isomorphic to $C_L(x)$, that is normalized, but not centralized, by $x$. In order to obtain a representative $y$ of the second conjugacy class, it is enough to take any transvection $t\in C_L(x)$ and set $y=\tau I t$. By construction, $y$ normalizes but does not centralize $C_L(x)\cong S_n(q)$.
\end{proof}
\begin{lemma}\label{Graph_Inv_D_n} Let $V$ be a vector space over a field $\mathbb{F}_q$ of odd order $q$, $\dim V=2n$ for $n\geq 5$, and $V$ is equipped with a nondegenerate symmetric bilinear form of the sign $\varepsilon\in\{+,-\}$. Let ${\rm O}$ be the full group of isometries and $\Delta$ be the full group of similarities of $V$, ${\rm SO}$ be the subgroup of ${\rm O}$ consisting of elements with determinant $1$, $\Omega={\rm O}'$. Denote by $$\overline{\phantom{x}}:\Delta\rightarrow\Delta/Z(\Delta)$$
the canonical epimorphism. Let $L=\overline{\Omega}=O_{2n}^\varepsilon(q)$. Then the following statements hold.
\begin{itemize}
\item[$(1)$] A canonical graph automorphism $\overline{\gamma}$ of $L$ is contained in $\overline{{\rm O}}\setminus \overline{{\rm SO}}$, while $\overline{\Delta}$ coincides with $\langle\L,\overline{\gamma}\rangle$.
\item[$(2)$] All graph modulo $\L$ involutions are images of involutions from $\Delta$.
\item[$(3)$] If $n$ is even, then there are $n/2$ classes of $\L$-conjugate graph modulo $\L$ involutions with representatives $\overline{\gamma}_1=\overline{\gamma}, \overline{\gamma}_2,\dots,\overline{\gamma}_{n/2}$, where $\gamma_i$ for $i=1,\dots, n/2$ is an involution in ${\rm O}$ such that the eigenvalue $-1$ of $\gamma_i$ has multiplicity ${2i-1}$. Every $\overline{\gamma}_i$ normalizes but does not centralize a subgroup of $L$ isomorphic to $O_{2n-1}(q)$.
\item[$(4)$] If $n$ is odd, then there exist $(n+3)/2$ classes of $\L$-conjugate graph modulo $\L$ involutions with representatives $\overline{\gamma}_1=\overline{\gamma}, \overline{\gamma}_2,\dots,\overline{\gamma}_{(n+1)/2}$ and $\overline{\gamma}_{(n+1)/2}'$, where, for $i=1,\dots, (n+1)/2$, $\gamma_i$ is an involution in ${\rm O}$ such that the eigenvalue $-1$ of $\gamma_i$ has multiplicity ${2i-1}$, while $\gamma'_{(n+1)/2}$ is an involution in $\Delta\setminus {\rm O}$ whose eigenvalue $-1$ has multiplicity $n$. Every $\overline{\gamma}_i$ normalizes but does not centralize a subgroup of $L$ isomorphic to $O_{2n-1}(q)$, while $\overline{\gamma}_{(n+1)/2}'$ normalizes but does not centralize a subgroup of $L$ isomorphic to~$O_{n}(q^2)$.
\end{itemize}
\end{lemma}
\begin{proof} All statements of the lemma, except the existence of subgroups normalized but not centralized by the corresponding involutions, follow by \cite[Theorems~4.3.1 and 4.3.3, Tables~4.3.1 and 4.3.3, and Remark~4.5.4]{GLS}.
We show that for the involutions $\gamma_i$ there exists a nondegenerate invariant subspace $W$ of $V$ with $\dim W=2n-1$ on which the action is nonscalar. Since $\gamma_i$ is an isometry, the subspaces $V_+$ and $V_-$, consisting of eigenvectors corresponding to the eigenvalues $1$ and $-1$ respectively, are orthogonal and $V= V_+\oplus V_-$. So $V_+$ and $V_-$ are nondegenerate $\gamma_i$-stable subspaces, and $\gamma_i$ acts on both of them as a scalar multiplication. Let $u\in V_+$ be a nonsingular vector. Then the subspace $W=u^\perp$ is $\gamma_i$-stable. Since $\dim V_+=2n-2i+1>1$, the restriction of $\gamma_i$ to $W$ has two eigenvalues $1$ and~$-1$, and so its action on $W$ is nonscalar. Hence $\overline{\gamma}_i$ normalizes but does not centralize the derived subgroup $I(W)'\cong O_{2n-1}(q)$ of the group of all isometries of~$W$.
By~\cite[Table~4.5.1]{GLS} the centralizer of $\overline{\gamma}_{(n+1)/2}'$ in $\L$ is isomorphic to the group of inner-diagonal automorphisms of $O_{n}(q^2)$. The centralizer has a trivial center and even index in~$\L$. By Lemma~\ref{guest} we conclude that $\overline{\gamma}_{(n+1)/2}'$ normalizes but does not centralize a subgroup of $L$ isomorphic to~$O_{n}(q^2)$.
\end{proof}
\begin{lemma}\label{Graph_Inv_D_n_even_q} Assume $L=D_n(q)=O_{2n}^\varepsilon (q)$ for $\varepsilon\in\{+,-\}$, $n\geq 4$, and even~$q$. Then every graph modulo $\L$ involution of $\Aut(L)$ normalizes a subgroup of $L$ isomorphic to $O_{2n-2}^\eta(q)\times O_2^{\varepsilon\eta}(q)$ for some $\eta\in\{+,-\}$ and does not centralize the component $O_{2n-2}^\eta(q)$ in this product.
\end{lemma}
\begin{proof} Denote by $V$ a vector space of dimension $2n$ over $\mathbb{F}_q$ equipped with a nondegenerate quadratic form of sign~$\varepsilon$. We identify $L$ and $\Omega(V)={\rm O}(V)'$, where ${\rm O}(V)$ is the isometry group of~$V$.
Let $t$ be a graph modulo $\L$ involution of $\Aut(L)$. It is a well-known fact that $t$ belongs to ${\rm O}(V)$ (see, for example, \cite[p.~34--35]{Bray} and \cite[\S~2.7--2.8]{KL}).
In view of \cite[(7.6) and (8.10)]{AsSeitz}, since $t\not \in \Omega(V)$, the rank of $t-1$ is odd. Now by \cite[(7.5)(1)]{AsSeitz} we obtain that $t$ stabilizes a decomposition $V=Y\oplus Y^\perp$, where $Y$ is nondegenerate of dimension~$2$. Therefore $t$ normalizes $\Omega(Y)\times \Omega(Y^\perp)$. If $t$ does not centralize $\Omega(Y^\perp)$, we obtain the lemma. If $t$ centralizes $\Omega(Y^\perp)$, then $t$ acts trivially on $Y^\perp$, i.~e. $Y^\perp$ consists of vectors fixed by $t$. In this case we can take any nondegenerate $2$-dimensional subspace $U$ of $Y^\perp$. By construction, $t$ stabilizes $U\oplus U^\perp$ and acts nontrivially on $U^\perp$. Hence $t$ normalizes $\Omega(U)\times\Omega(U^\perp)$ and does not centralize~$\Omega(U^\perp)$.
\end{proof}
\begin{lemma}\label{alpha_A_n} {\em \cite[Lemma~6.1]{GS}} Assume $L=A_n$, $n\geq 5$, is a simple alternating group. Then $\alpha(x,L)\leq n-1$ for any nonidentity element $x\in\Aut(L)$. Moreover, if $n\ne 6$ and $x$ is not a transposition, then~$\alpha(x,L)\leq n/2$.
\end{lemma}
Notice that $A_6\cong L_2(9)$ so for $L=A_6$ the information on $\alpha(x,L)$ is provided in Lemma~\ref{alpha_classic} below.
\begin{lemma}\label{Sporadic} {\em \cite[Table~1]{GS}} Let $L$ be a simple sporadic group. Then $\alpha(x,L)\leq 8$ for every nonidentity element $x\in\Aut(L)$.
\end{lemma}
\begin{lemma}\label{Exceptional} {\em \cite[Theorem~5.1]{GS}} Let $L$ be a simple exceptional group of Lie type. Then $\alpha(x,L)\leq 11$ for every nonidentity $x\in\Aut(L)$.
\end{lemma}
\begin{lemma} \label{SemisimpleInvariant} Let $V$ be a vector space of dimension $n> 2$ over a finite field $F$, equipped with either trivial, or nondegenerate bilinear or unitary form. Let $x$ be a nonscalar similarity of $V$, and assume that the characteristic of $F$ does not divide~$|x|$. Assume also that $x$ possesses a $1$-dimensional invariant subspace~$U$. Then there exists an $x$-invariant subspace $W$ of codimension $1$ such that $x$ is nonscalar on it. Moreover, if $U$ is nondegenerate then $W$ can be chosen nondegenerate.
\end{lemma}
\begin{proof} Let $W$ be either an $x$-invariant complement of $U$, if the form is trivial (the existence of such a complement follows by the Maschke theorem), or $W=U^\perp$, if the form is nondegenerate. In any case, $W$ is $x$-invariant of codimension~1 and $W$ is nondegenerate if $U$ is nondegenerate.
If $x$ is nonscalar on $W$, we obtain the lemma. Otherwise, we show that there is an $x$-invariant subspace $W_0$ of codimension 1 such that $W_0\ne W$. This implies that $V=W+W_0$ and $W\cap W_0\ne 0$, because $n>2$. Since $x$ is nonscalar on $V$, we conclude that $x$ is nonscalar on~$W_0$. Furthermore, we show that $W_0$ can be chosen nondegenerate if $U$ is nondegenerate.
If $x$ is scalar on $W$ then every $1$-dimensional subspace of $W$ is $x$-invariant. Moreover, if the form is nondegenerate, one can choose a $1$-dimensional subspace $U_0$ of $W$ such that $U_0^\perp\ne W$ (otherwise $W$ would be totally isotropic, which contradicts the Witt lemma). Moreover, if $U$ is nondegenerate, then the form is not symplectic. Therefore, the nondegenerate subspace $W$ possesses a nonisotropic vector, and $U_0$ can be chosen nondegenerate.
Choose $W_0$ to be an $x$-invariant complement of $U_0$, if the form is trivial, or $W_0=U_0^\perp$, if the form is nondegenerate. It is easy to see that $W_0$ satisfies the conclusion of Lemma~\ref{SemisimpleInvariant}.
\end{proof}
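The following toy instance illustrates the second step of the proof; the data are our own choice and serve only as an example. Take
$$
V=F^{3}\ \text{with the trivial form},\qquad x={\rm diag}(a,a,b),\ a\neq b,\qquad U=\langle e_3\rangle,\qquad W=\langle e_1,e_2\rangle .
$$
Here $x$ acts on $W$ as the scalar $a$, but choosing $U_0=\langle e_1\rangle\leq W$ and $W_0=\langle e_2,e_3\rangle$ produces an $x$-invariant subspace of codimension $1$ on which $x$ has the two distinct eigenvalues $a$ and $b$, hence acts nonscalarly.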
\begin{center}\begin{table}
\caption
{Bounds on $\alpha(x,L)$ for classical groups $L$}\label{tab2}
\begin{center}\begin{tabular}{|c|c|c|c|r|}\hline
\multirow{2}{*}{ $L$ } & conditions & conditions & conditions & \multirow{2}{*}{$\alpha(x,L)$} \\
& on $n$ & on $q$ & on $x$ & \\
\hline\hline
\multirow{18}{*}{$A_{n-1}(q)\cong L_n(q)$} & \multirow{10}{*}{$n=2$} & \multirow{3}{*}{$q\ne 5,9$} & $|x|> 2$ & $2$ \\
\cline{4-5}
& & & field, $|x|=2$ & $\leq 4$ \\
\cline{4-5}
& & &\multirow{2}{*}{ not field, $|x|=2$} & \multirow{2}{*}{$3$}\\
\cline{3-3}
& & \multirow{4}{*}{$q= 9$} & & \\
\cline{4-5}
& & & field, $|x|=2$ & $5$ \\
\cline{4-5}
& & & $|x|=3$ & $3$ \\
\cline{4-5}
& & & $|x|>3$ & $2$ \\
\cline{3-5}
& & \multirow{3}{*}{$q=5$} & $|x|>2$ & $2$ \\
\cline{4-5}
& & & not diagonal, $|x|=2$ & $3$ \\
\cline{4-5}
& & & diagonal, $|x|=2$ & $4$ \\
\cline{3-5}
\cline{2-5}
& \multirow{3}{*}{ $n=3$} & & not graph-field & \multirow{2}{*}{$\leq 3$} \\
& & & or $|x|\ne 2$ & \\
\cline{4-5}
& & & graph-field, $|x|=2$ & $\leq 4$ \\
\cline{2-5}
& \multirow{4}{*}{$n=4$} & \multirow{2}{*}{$q>2$} &graph & $\leq 6$ \\
\cline{4-5}
& & & \multirow{2}{*}{not graph} & \multirow{2}{*}{$\leq 4$} \\
\cline{3-3}
& & \multirow{2}{*}{$q=2$} & & \\
\cline{4-5}
& & & graph & $7$ \\
\cline{2-5}
& \multirow{1}{*}{$n>4$} & & & $\leq n$ \\
\hline \hline
\multirow{11}{*}{${^2A}_{n-1}(q)\cong U_n(q)$} & \multirow{4}{*}{$n=3$} &$q>3$ & & $\leq 3$ \\
\cline{3-5}
& &\multirow{3}{*}{$q=3$} & not inner & \multirow{2}{*}{$\leq 3$} \\
& & & or $|x|\ne 2$ & \\
\cline{4-5}
& & &inner, $|x|=2$ & $4$ \\
\cline{2-5}
& \multirow{6}{*}{$n=4$} & \multirow{2}{*}{$q>2$} & not graph & $\leq 4$ \\
\cline{4-5}
& & & \multirow{2}{*}{graph } & \multirow{2}{*}{$\leq 6$ } \\
\cline{3-3}
& & \multirow{4}{*}{$q=2$} & & \\ \cline{4-5}
& & & transvection & $\leq 5$ \\
\cline{4-5}
& & & not transvection & \multirow{2}{*}{$\leq 4$} \\
& & & or not graph & \\
\cline{2-5}
& $n>4$ & & & $\leq n$ \\
\hline \hline
\multirow{7}{*}{$C_{n}(q)\cong S_{2n}(q)$} & \multirow{4}{*}{ $n=2$} & \multirow{2}{*}{ $q>3$} & $|x|= 2$ & $\leq 5$ \\
\cline{4-5}
& & & \multirow{2}{*}{$|x|> 2$ } & \multirow{2}{*}{$\leq 4$ } \\
\cline{3-3}
& & \multirow{2}{*}{$q=3$} & & \\
\cline{4-5}
& & & $|x|= 2$ & $\leq 6$ \\
\cline{2-5}
& \multirow{3}{*}{$n>2$} & & not transvection & $\leq n+3$ \\
\cline{3-5}
& & odd & \multirow{2}{*}{ transvection } & $\leq 2n$ \\
\cline{3-3} \cline{5-5}
& & even & & $\leq 2n+1$ \\
\hline \hline
\multirow{4}{*}{$B_{n}(q)\cong O_{2n+1}(q)$} & \multirow{4}{*}{$n\geq 3$} & \multirow{2}{*}{odd} & reflection & $ 2n+1$ \\
\cline{4-5}
& & & not reflection & \multirow{2}{*}{$\leq n+3$} \\
\cline{3-4}
& & \multirow{2}{*}{even} & not transvection & \\
\cline{4-5}
& & & transvection & $2n+1$ \\
\hline \hline
\multirow{2}{*}{$D_{n}(q)\cong O_{2n}^+(q)$,} & \multirow{4}{*}{$n\geq 4$} & \multirow{2}{*}{odd} & reflection & $ 2n$ \\
\cline{4-5}
& & & not reflection & \multirow{2}{*}{$\leq n+3$} \\
\cline{3-4}
\multirow{2}{*}{${}^2D_{n}(q)\cong O_{2n}^-(q)$} & & \multirow{2}{*}{even} & not transvection & \\
\cline{4-5}
& & & transvection & $2n$ \\
\hline
\end{tabular}\end{center}
\end{table}\end{center}
\begin{lemma} \label{alpha_classic}
Let $L$ be a simple classical group and $x$ its automorphism of prime order. Then a bound on $\alpha(x,L)$ is given in the last column of Table~{\em\ref{tab2}}.
\end{lemma}
As a corollary to Lemma \ref{alpha_classic}, we immediately obtain
\begin{lemma}\label{Small_rank} Let $L$ be a simple classical group of Lie type, possessing an automorphism $x$ of prime order such that $\alpha(x,L)> 11$. Then
\begin{itemize}
\item[$(1)$] if $L=A_{n-1}(q)=L_n(q)$ or $L={}^2A_{n-1}(q)=U_n(q)$, then $n\geq12$;
\item[$(2)$] if $L=B_{n}(q)=O_{2n+1}(q)$ or $L=C_{n}(q)=S_{2n}(q)$, then $n\geq6$;
\item[$(3)$] if $L=D_{n}(q)=O^+_{2n}(q)$ or $L={}^2D_{n}(q)=O^-_{2n}(q)$, then $n\geq6$.
\end{itemize}
\end{lemma}
\begin{lemma} \label{alpha_irreducible}
Let $L$ be a simple classical group and $x$ be its automorphism of prime order, induced by an irreducible similarity. Then $\alpha(x,L)\leq 3$.
\end{lemma}
\begin{proof} See \cite[proof of Theorems 4.1--4.4]{GS}. More precisely, for linear groups \cite[page~534]{GS}, for unitary groups \cite[page~536]{GS}, for symplectic groups \cite[page~538]{GS}, and for orthogonal groups \cite[page~539]{GS}.
\end{proof}
\begin{lemma} \label{Parabolic} {\em \cite[Lemma 2.2]{GS}}
Let $L$ be a simple group of Lie type, let $G = \L$ and let $x\in G$.
\begin{itemize}
\item[$(1)$] If $x$ is unipotent, let $P_1$ and $P_2$ be distinct maximal parabolic subgroups containing a common Borel subgroup of $G$ with unipotent radicals $U_1$ and $U_2$. Then $x$ is conjugate to an element of $P_i\setminus U_i$ for $i = 1$ or $i = 2$.
\item[$(2)$] If $x$ is semisimple, assume that $x$ lies in a parabolic subgroup of~$G$. If the rank of $L$ is at least two, then there exists a maximal parabolic subgroup~$P$ with Levi complement $J$ such that $x$ is conjugate to an element of $J$ that is not centralized by any Levi component (possibly solvable) of~$J$.
\end{itemize}
\end{lemma}
\subsection{Large $r'$-subgroups in classical groups}
\begin{lemma} \label{r_not divi}
Let $L$ be a simple classical group or an alternating group. Assume that an odd prime $r$ does not divide the order of $L$.
Then the following statements hold.
\begin{itemize}
\item[$(1)$] If $L=A_n$, then $n\leq r-1$.
\item[$(2)$] If $L=L_n(q)$, then $n\leq r-2$ and $ r\geq n+2.$
\item[$(3)$] If $L=U_n(q)$, then $n\leq r-2$ and $ r\geq n+2.$
\item[$(4)$] If $L=S_{2n}(q)$, then $2n\leq r-3$ and $r\geq 2n+3.$
\item[$(5)$] If $L=O_{2n+1}(q)$, then $2n\leq r-3$ and $r\geq 2n+3.$
\item[$(6)$] If $L=O^+_{2n}(q)$ or $L=O^-_{2n}(q)$, then $2n\leq r-1$ and $r\geq 2n+1.$
\item[$(7)$] If $L=L_n(q^2)$, then $2n\leq r-3$ and $r\geq 2n+3.$
\end{itemize}
\end{lemma}
\begin{proof} Since $r$ does not divide $|L|$, the numbers $r$ and $q$ are coprime. Moreover, by Fermat's little theorem $r$ divides $q^{r-1}-1$, so
the order of $L$ (see Table~\ref{tab1}) is not divisible by~$q^{r-1}-1$. Now, using the parity of $r-1$, we see that $q^{r-1}-1=q^{r-1}-(-1)^{r-1}$, and we obtain the statement of the lemma by using Table~\ref{tab1}.
\end{proof}
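For a concrete instance of how the bound arises (the value $r=5$ is chosen only for illustration), take $L=L_n(q)$ with $(q,5)=1$. By Fermat's little theorem $5$ divides $q^{4}-1$, and this factor occurs in $|L_n(q)|$ as soon as $n\geq 4$; hence
$$
5\nmid |L_n(q)| \implies n\leq 3=r-2,
$$
in agreement with statement $(2)$.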
\begin{lemma} \label{non_parabolic_divisor}
Assume, $L$ is a simple classical group of Lie type satisfying one of the following conditions:
\begin{itemize}
\item $L=A_{n-1}(q)=L_n(q)=L_n^+(q)$, $n\geq 2$;
\item $L={}^2A_{n-1}(q)=U_n(q)=L_n^-(q)$, $n\geq 3$;
\item $L=B_{n}(q)=O_{2n+1}(q)$, $n\geq 3$;
\item $L=C_{n}(q)=S_{2n}(q)$, $n\geq 2$;
\item $L=D_{n}(q)=O^+_{2n}(q)$, $n\geq 4$;
\item $L={}^2D_{n}(q)=O^-_{2n}(q)$, $n\geq 4$.
\end{itemize}
Suppose that an odd prime $r$ does not divide the order of a maximal parabolic subgroup of~$L$.
Then one of the following statements holds:
\begin{itemize}
\item[$(1)$] $L=L_n(q)=L_n^+(q)$ and $r\geq \left[\displaystyle\frac{n+1}{2}\right]+2\geq\displaystyle\frac{n+4}{2}$;
\item[$(2)$] $L=U_n(q)=L_n^-(q)$ and $r\geq \left[\displaystyle\frac{n+1}{2}\right]+2\geq\displaystyle\frac{n+4}{2}$;
\item[$(3)$] $L=S_{2n}(q)$ and $r\geq \displaystyle\frac{2n+7}{3}$;
\item[$(4)$] $L=O_{2n+1}(q)$ and $r\geq \displaystyle\frac{2n+7}{3}$;
\item[$(5)$] $L=O^+_{2n}(q)$ and $r\geq \displaystyle\frac{2n+5}{3}$;
\item[$(6)$] $L=O^-_{2n}(q)$ and $r\geq \displaystyle\frac{2n+5}{3}$.
\end{itemize}
\end{lemma}
\begin{proof} Denote by $P$ a maximal parabolic subgroup of~$L$ of order not divisible by~$r$. The Levi factor of $P$ has at most two components, corresponding to the connected components of the Dynkin diagram with one node removed. The number $r$ does not divide the orders of the components. Assume that we remove the root $r_m$ for $A_{n-1}(q)$, $B_n(q)$, $C_n(q)$, and $D_n(q)$, or the root $r_m^1$ for ${}^2A_{n-1}(q)$ and ${}^2D_{n}(q)$, in Fig.~\ref{p90}--\ref{p92}\footnote{By symmetry we may assume that $m\ne n$ in the case $D_n(q)$.}. Then we obtain one of the following cases\footnote{We agree that $O_6^+(q)=D_3(q)=A_3(q)=L_4(q)$, $O_4^+(q)=D_2(q)=A_1(q)\times A_1(q)=L_2(q)\times L_2(q)$, $O_6^-(q)={}^2D_3(q)={}^2A_3(q)=U_4(q)$, $O_4^-(q)={}^2D_2(q)=A_1(q^2)=L_2(q^2)$. }:
\begin{itemize}
\item[(a)] $L=A_{n-1}(q)=L_n(q)$ and $r$ does not divide the orders of $A_{m-1}(q)=L_m(q)$ and $A_{n-1-m}(q)=L_{n-m}(q)$, where $1\leq m \leq n-1$;
\item[(b)] $L={}^2A_{n-1}(q)=U_n(q)$ and $r$ does not divide the orders of $A_{m-1}(q^2)=L_m(q^2)$ and ${}^2A_{n-1-2m}(q)=U_{n-2m}(q)$, where $1\leq m \leq [n/2]$;
\item[(c)] $L=B_n(q)=O_{2n+1}(q)$ and $r$ does not divide the orders of $A_{m-1}(q)=L_m(q)$ and $B_{n-m}(q)=O_{2(n-m)+1}(q)$, where $1\leq m \leq n-1$;
\item[(d)] $L=C_n(q)=S_{2n}(q)$ and $r$ does not divide the orders of $A_{m-1}(q)=L_m(q)$ and $C_{n-m}(q)=S_{2(n-m)}(q)$, where $1\leq m \leq n-1$;
\item[(e)] $L=D_n(q)=O^+_{2n}(q)$ and $r$ does not divide the orders of $A_{m-1}(q)=L_m(q)$ and $D_{n-m}(q)=O^+_{2(n-m)}(q)$, where $1\leq m \leq n-1$;
\item[(f)] $L={}^2D_n(q)=O^-_{2n}(q)$ and $r$ does not divide the orders of $A_{m-1}(q)=L_m(q)$ and ${}^2D_{n-m}(q)=O^-_{2(n-m)}(q)$, where $1\leq m \leq n-1$.
\end{itemize}
Consider all these cases separately.
\begin{figure}
\begin{center}
\unitlength=8mm \begin{picture}(4,1)(3,0)
\put(0,0){\circle {0.2}} \put(0,0.5){\makebox(0,0){$r_1$}}
\put(2,0){\circle {0.2}} \put(2,0.5){\makebox(0,0){$r_2$}}
\put(4,0){\circle {0.2}} \put(4,0.5){\makebox(0,0){$r_3$}}
\put(6,0){\circle {0.2}} \put(6,0.5){\makebox(0,0){$r_{n-2}$}}
\put(8,0){\circle {0.2}} \put(8,0.5){\makebox(0,0){$r_{n-1}$}}
\put(0.1,0){\line(1,0){1.8}}
\put(2.1,0){\line(1,0){1.8}}
\put(4.1,0){\line(1,0){0.5}}
\put(5.4,0){\line(1,0){0.5}}
\put(5,0){\makebox(0,0){$\cdots$}}
\put(6.1,0){\line(1,0){1.8}}
\end{picture}
\caption{\label {p90}
$A_{n-1}(q)$. }
\end{center}
\end{figure}
\vspace{8cm}
\begin{figure}
\begin{center}
\unitlength=8mm \begin{picture}(4,5)(3,0)
\put(0,5){\circle {0.2}} \put(0,5.5){\makebox(0,0){$r_1$}}
\put(2,5){\circle {0.2}} \put(2,5.5){\makebox(0,0){$r_2$}}
\put(4,5){\circle {0.2}} \put(4,5.5){\makebox(0,0){$r_3$}}
\put(6,5){\circle {0.2}} \put(6.5,5.5){\makebox(0,0){$r_{k-1}$}}
\put(8,5){\circle {0.2}} \put(8.5,5.5){\makebox(0,0){$r_{k}$}}
\put(0.1,5){\line(1,0){1.8}}
\put(2.1,5){\line(1,0){1.8}}
\put(4.1,5){\line(1,0){0.5}}
\put(5.4,5){\line(1,0){0.5}}
\put(5,5){\makebox(0,0){$\cdots$}}
\put(6.1,5){\line(1,0){1.8}}
\put(0,3){\circle {0.2}} \put(0,3.5){\makebox(0,0){$r_{n-1}$}}
\put(2,3){\circle {0.2}} \put(2,3.5){\makebox(0,0){$r_{n-2}$}}
\put(4,3){\circle {0.2}} \put(4,3.5){\makebox(0,0){$r_{n-3}$}}
\put(6,3){\circle {0.2}} \put(6.5,3.5){\makebox(0,0){$r_{k+2}$}}
\put(8,3){\circle {0.2}} \put(8.5,3.5){\makebox(0,0){$r_{k+1}$}}
\put(0.1,3){\line(1,0){1.8}}
\put(2.1,3){\line(1,0){1.8}}
\put(4.1,3){\line(1,0){0.5}}
\put(5.4,3){\line(1,0){0.5}}
\put(5,3){\makebox(0,0){$\cdots$}}
\put(6.1,3){\line(1,0){1.8}}
\put(8,4.9){\line(0,-1){1.8}}
\put(4,2.7){\vector(0,-1){1.7}}
\put(0,0){\circle {0.2}} \put(0,0.5){\makebox(0,0){$r^1_1$}}
\put(2,0){\circle {0.2}} \put(2,0.5){\makebox(0,0){$r^1_2$}}
\put(4,0){\circle {0.2}} \put(4,0.5){\makebox(0,0){$r^1_3$}}
\put(6,0){\circle {0.2}} \put(6,0.5){\makebox(0,0){$r^1_{k-1}$}}
\put(8,0){\circle {0.2}} \put(8,0.5){\makebox(0,0){$r^1_{k}$}}
\put(0.1,0){\line(1,0){1.8}}
\put(2.1,0){\line(1,0){1.8}}
\put(4.1,0){\line(1,0){0.5}}
\put(5.4,0){\line(1,0){0.5}}
\put(5,0){\makebox(0,0){$\cdots$}}
\put(6,0.1){\line(1,0){2}}
\put(6,-0.1){\line(1,0){2}}
\end{picture}
\caption{\label {p901}
${}^2A_{n-1}(q)$, where $n-1=2k$, $k=\left[\displaystyle\frac{n}{2}\right]$. }
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\unitlength=8mm \begin{picture}(4,7)(3,0)
\put(0,5){\circle {0.2}} \put(0,5.5){\makebox(0,0){$r_1$}}
\put(2,5){\circle {0.2}} \put(2,5.5){\makebox(0,0){$r_2$}}
\put(4,5){\circle {0.2}} \put(4,5.5){\makebox(0,0){$r_3$}}
\put(6,5){\circle {0.2}} \put(6.2,5.5){\makebox(0,0){$r_{k-1}$}}
\put(0.1,5){\line(1,0){1.8}}
\put(2.1,5){\line(1,0){1.8}}
\put(4.1,5){\line(1,0){0.5}}
\put(5.4,5){\line(1,0){0.5}}
\put(5,5){\makebox(0,0){$\cdots$}}
\put(6.1,4.95){\line(2,-1){1.8}}
\put(0,3){\circle {0.2}} \put(0,3.5){\makebox(0,0){$r_{n-1}$}}
\put(2,3){\circle {0.2}} \put(2,3.5){\makebox(0,0){$r_{n-2}$}}
\put(4,3){\circle {0.2}} \put(4,3.5){\makebox(0,0){$r_{n-3}$}}
\put(6,3){\circle {0.2}} \put(6.2,3.5){\makebox(0,0){$r_{k+1}$}}
\put(8,4){\circle {0.2}} \put(8,4.5){\makebox(0,0){$r_{k}$}}
\put(0.1,3){\line(1,0){1.8}}
\put(2.1,3){\line(1,0){1.8}}
\put(4.1,3){\line(1,0){0.5}}
\put(5.4,3){\line(1,0){0.5}}
\put(5,3){\makebox(0,0){$\cdots$}}
\put(6.1,3.05){\line(2,1){1.8}}
\put(4,2.7){\vector(0,-1){1.7}}
\put(0,0){\circle {0.2}} \put(0,0.5){\makebox(0,0){$r^1_1$}}
\put(2,0){\circle {0.2}} \put(2,0.5){\makebox(0,0){$r^1_2$}}
\put(4,0){\circle {0.2}} \put(4,0.5){\makebox(0,0){$r^1_3$}}
\put(6,0){\circle {0.2}} \put(6,0.5){\makebox(0,0){$r^1_{k-1}$}}
\put(8,0){\circle {0.2}} \put(8,0.5){\makebox(0,0){$r^1_{k}$}}
\put(0.1,0){\line(1,0){1.8}}
\put(2.1,0){\line(1,0){1.8}}
\put(4.1,0){\line(1,0){0.5}}
\put(5.4,0){\line(1,0){0.5}}
\put(5,0){\makebox(0,0){$\cdots$}}
\put(6,0.1){\line(1,0){2}}
\put(6,-0.1){\line(1,0){2}}
\end{picture}
\caption{\label{p902}
${}^2A_{n-1}(q)$, where $n-1=2k-1$, $k=\left[\displaystyle\frac{n}{2}\right]$. }
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\unitlength=8mm \begin{picture}(9,2)
\put(0,0){\circle {0.2}} \put(0,0.5){\makebox(0,0){$r_1$}}
\put(2,0){\circle {0.2}} \put(2,0.5){\makebox(0,0){$r_2$}}
\put(4,0){\circle {0.2}} \put(4,0.5){\makebox(0,0){$r_3$}}
\put(6,0){\circle {0.2}} \put(6,0.5){\makebox(0,0){$r_{n-1}$}}
\put(8,0){\circle {0.2}} \put(8,0.5){\makebox(0,0){$r_{n}$}}
\put(0.1,0){\line(1,0){1.8}}
\put(2.1,0){\line(1,0){1.8}}
\put(4.1,0){\line(1,0){0.5}}
\put(5.4,0){\line(1,0){0.5}}
\put(5,0){\makebox(0,0){$\cdots$}}
\put(6,0.1){\line(1,0){2}}
\put(6,-0.1){\line(1,0){2}}
\end{picture}
\caption{\label {p903}
$B_{n}(q)$ and $C_{n}(q)$. }
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\unitlength=8mm \begin{picture}(9,4)
\put(0,2){\circle {0.2}} \put(0,2.5){\makebox(0,0){$r_1$}}
\put(2,2){\circle {0.2}} \put(2,2.5){\makebox(0,0){$r_2$}}
\put(4,2){\circle {0.2}} \put(4,2.5){\makebox(0,0){$r_{n-3}$}}
\put(6,2){\circle {0.2}}
\put(5.8,2.5){\makebox(0,0){$r_{n-2}$}} \put(8,4){\circle
{0.2}} \put(8.63,4){\makebox(0,0){$r_{n-1}$}} \put(8,0){\circle
{0.2}} \put(8.5,0){\makebox(0,0){$r_n$}}
\put(0.1,2){\line(1,0){1.8}}
\put(2.1,2){\line(1,0){0.5}}
\put(3.4,2){\line(1,0){0.5}}
\put(3,2){\makebox(0,0){$\cdots$}}
\put(4.1,2){\line(1,0){1.8}}
\put(6.08,2.08){\line(1,1){1.83}}
\put(6.08,1.92){\line(1,-1){1.83}}
\end{picture}
\caption{\label {p91}
$D_n(q)$.}
\end{center}
\end{figure}
\begin{figure} \begin{center} \unitlength=9mm
\begin{picture}(9,6) \put(0,4){\circle {0.2}}
\put(0,4.5){\makebox(0,0){$r_1$}} \put(2,4){\circle {0.2}}
\put(2,4.5){\makebox(0,0){$r_2$}} \put(4,4){\circle {0.2}}
\put(4,4.5){\makebox(0,0){$r_{n-3}$}} \put(6,4){\circle {0.2}}
\put(5.8,4.5){\makebox(0,0){$r_{n-2}$}} \put(8,6){\circle
{0.2}} \put(8.7,6){\makebox(0,0){$r_{n-1}$}}
\put(8,2){\circle {0.2}}
\put(8.5,2){\makebox(0,0){$r_n$}}
\put(0.1,4){\line(1,0){1.8}}
\put(2.1,4){\line(1,0){0.5}}
\put(3.4,4){\line(1,0){0.5}}
\put(3,4){\makebox(0,0){$\cdots$}}
\put(4.1,4){\line(1,0){1.8}}
\put(6.08,4.08){\line(1,1){1.83}}
\put(6.08,3.92){\line(1,-1){1.83}}
\put(4,3){\vector(0,-1){1.9}}
\put(0,0){\circle {0.2}} \put(0,0.5){\makebox(0,0){$r_1^1$}}
\put(2,0){\circle {0.2}} \put(2,0.5){\makebox(0,0){$r_2^1$}}
\put(4,0){\circle {0.2}}
\put(4,0.5){\makebox(0,0){$r_{n-3}^1$}} \put(6,0){\circle {0.2}}
\put(5.8,0.5){\makebox(0,0){$r_{n-2}^1$}} \put(8,0){\circle
{0.2}} \put(8,0.5){\makebox(0,0){$r_{n-1}^1$}}
\put(0.1,0){\line(1,0){1.8}}
\put(2.1,0){\line(1,0){0.5}}
\put(3.4,0){\line(1,0){0.5}}
\put(3,0){\makebox(0,0){$\cdots$}}
\put(4.1,0){\line(1,0){1.8}}
\put(6,0.1){\line(1,0){2}}
\put(6,-0.1){\line(1,0){2}}
\end{picture}
\caption{\label {p92}
${^2D_n(q)}$.}
\end{center}
\end{figure}
\medskip
C~a~s~e~~(a). We have $2\max\{m,n-m\}\geq m+(n-m)=n$. So by Lemma~\ref{r_not divi} the inequality
$$
r-2\geq\max\{m,n-m\}\geq n/2,
$$
holds, whence statement~(1) of the lemma follows.
\medskip
C~a~s~e~~(b). Notice that if the order of $L_m(q^2)$ is not divisible by $r$, then by Lemma~\ref{r_not divi} we obtain $r-2>r-3\geq 2m$.
Whence applying Lemma~\ref{r_not divi} to $U_{n-2m}(q)$ we obtain the inequality
$$
r-2\geq\max\{2m,n-2m\}\geq n/2,
$$ and as in Case (a), substituting $m$ by $2m$, we obtain item~(2) of the lemma.
\medskip
C~a~s~e~s~~(c,d). Since the orders of $S_{2k}(q)$ and $O_{2k+1}(q)$ are equal, it is enough to consider case (d). The inequality $$3\max\{m,2(n-m)+1\}\geq 2m+2(n-m)+1=2n+1$$ and Lemma~\ref{r_not divi} imply
$$
r\geq \max\{m+2,2(n-m)+3\}=\max\{m,2(n-m)+1\}+2\geq \frac{2n+1}{3}+2=\frac{2n+7}{3},
$$
whence item~(3) (or (4) in case (c)) of the lemma holds.
\medskip
C~a~s~e~s~~(e,f). Since $3\max\{m+1,2(n-m)\}\geq 2(m+1)+2(n-m)=2n+2$, by Lemma~\ref{r_not divi} the inequality
$$
r\geq \max\{m+2,2(n-m)+1\}=\max\{m+1,2(n-m)\}+1\geq \frac{2n+2}{3}+1=\frac{2n+5}{3}
$$
holds. Whence items~(5) and (6) of the lemma hold.
\end{proof}
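In applications the lemma is typically used in contrapositive form; the following instance, with $r$ and $n$ chosen only as an example, illustrates this. For $L=L_7(q)$ statement $(1)$ gives $\left[\frac{7+1}{2}\right]+2=6$, so
$$
r=5<6 \implies 5 \text{ divides the order of every maximal parabolic subgroup of } L_7(q).
$$
Indeed, every Levi factor of $L_7(q)$ has a component $L_m(q)$ with $m\geq 4$, whose order is divisible by $q^{4}-1$ and hence by $5$ when $(q,5)=1$, while for $5\mid q$ the unipotent radical already has order divisible by~$5$.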
\begin{lemma} \label{non_non-degenerated_divisor}
Let $L$ be isomorphic to $U_n(q)$ with $n\geq 12$, or to $S_{2n}(q)$, $O_{2n+1}(q)$, $O_{2n}^+(q)$, or $O_{2n}^-(q)$ with $n\geq 6$. Assume an odd prime $r$ does not divide the order of the stabilizer in $L$ of a nondegenerate subspace $U$ of the underlying space $V$. Then one of the following statements holds:
\begin{itemize}
\item[$(1)$] $L=U_n(q)$ and $r\geq \left[\displaystyle\frac{n+1}{2}\right]+2\geq\displaystyle\frac{n+4}{2}$;
\item[$(2)$] $L=S_{2n}(q)$ and $r\geq 2\left[\displaystyle\frac{n+1}{2}\right]+3\geq n+3$;
\item[$(3)$] $L=O_{2n+1}(q)$ and $r\geq 2\left[\displaystyle\frac{n+1}{2}\right]+1\geq n+1$;
\item[$(4)$] $L=O^+_{2n}(q)$ or $L=O^-_{2n}(q)$ and $r\geq 2\left[\displaystyle\frac{n+1}{2}\right]+1\geq n+1$.
\end{itemize}
\end{lemma}
\begin{proof}
Since an element of $L$ stabilizing a nondegenerate subspace $U$ also stabilizes the (nondegenerate) orthogonal complement $U^\perp$,
replacing, if necessary, $U$ with $U^\perp$,
we can assume that for some $m<n$ one of the following cases occurs:
\begin{itemize}
\item[(a)] $L=U_n(q)$, $\dim U=m$ and $r$ does not divide the orders of $U_m(q)$ and $U_{n-m}(q)$, where $1\leq m \leq n-1$;
\item[(b)] $L=S_{2n}(q)$, $\dim U=2m$ and $r$ does not divide the orders of $S_{2m}(q)$ and $S_{2(n-m)}(q)$, where $1\leq m \leq n-1$;
\item[(c)] $L=O_{2n+1}(q)$, $\dim U=2m$ and $r$ does not divide the orders of $O^\pm_{2m}(q)$ and $O_{2(n-m)+1}(q)$, where $1\leq m \leq n-1$;
\item[(d)] $L=O^\pm_{2n}(q)$, $\dim U=2m+1$ and $r$ does not divide the orders of $O_{2m+1}(q)$ and $O_{2(n-m-1)+1}(q)$, where $1\leq m \leq n-1$;
\item[(e)] $L=O^\varepsilon_{2n}(q)$, $\varepsilon\in\{+,-\}$, $\dim U=2m$, ${\rm sgn}\,U=\delta\in\{+,-\}$ and $r$ does not divide the orders of $O^\delta_{2m}(q)$ and $O^{\varepsilon\delta}_{2(n-m)}(q)$, where $1\leq m \leq n-2$.
\end{itemize}
Consider all these cases separately.
\medskip
C~a~s~e~~(a). Since by Lemma~\ref{r_not divi} $$r-2\geq \max\{m,n-m\},$$ as in case (a) in the proof of Lemma~\ref{non_parabolic_divisor}, we obtain item~(1) of the lemma.
\medskip
C~a~s~e~~(b). By Lemma~\ref{r_not divi}
$$r-3\geq \max\{2m,2(n-m)\}= 2\max\{m,n-m\}\geq 2\left[\displaystyle\frac{n+1}{2}\right]\geq n,$$
and item~(2) of the lemma follows.
\medskip
C~a~s~e~~(c). By Lemma~\ref{r_not divi}
$$r\geq \max\{2m+1,2(n-m-1)+3\}= 2\max\{m,n-m\}+1\geq 2\left[\displaystyle\frac{n+1}{2}\right]+1\geq n+1,$$
and item~(3) of the lemma holds.
\medskip
C~a~s~e~~(d). By Lemma~\ref{r_not divi}
$$r\geq \max\{2m+3,2(n-m-1)+3\}= 2\max\{m,(n-1)-m\}+3\geq 2\left[\displaystyle\frac{n}{2}\right]+3\geq n-1+3=n+2.$$
Since $n+2>n+1$, we obtain item~(4) of the lemma.
\medskip
C~a~s~e~~(e). By Lemma~\ref{r_not divi}
$$r\geq \max\{2m+1,2(n-m)+1\}= 2\max\{m,n-m\}+1\geq 2\left[\displaystyle\frac{n+1}{2}\right]+1\geq n+1,$$
and item~(4) of the lemma holds.
\end{proof}
\section{Proof of Proposition~\ref{ex1}}
Let $T$ be a set of transpositions in $S_r$. Consider a graph $\Gamma$ with the vertex set $\Omega=\{1,\dots,r\}$, where two different vertices $i,j$ are adjacent if and only if $(ij)\in T$. If $\Delta_1,\dots, \Delta_k$ are the connected components of $\Gamma$, then clearly $$\langle T\rangle\leq\Sym\Delta_1\times\dots\times \Sym\Delta_k.$$ Now, if $m=|T|<r-1$, then $\Gamma$ is disconnected, so $|\Delta_i|\leq r-1$ for every $i$, and each $\Sym\Delta_i$ is a $\pi$-group (recall that $r=\min \pi'$, so every prime less than $r$ belongs to $\pi$). Therefore $\langle T\rangle$ is a $\pi$-group.
So for every proper subset $\pi$ of the set of all primes and for $r=\min \pi'$ every $m<r-1$ transpositions generate a $\pi$-subgroup in $S_r$, while~$\Oo_\pi(S_r)=1$.
\hfill{$\Box$}
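The statement can also be checked by a direct computation in a small case. The sketch below (written for SymPy, with the illustrative choice $r=5$, $\pi=\{2,3\}$ and $m=3$; it is not part of the proof) verifies that any three transpositions of $S_5$ generate a $\{2,3\}$-group.
\begin{verbatim}
from itertools import combinations
from sympy import primefactors
from sympy.combinatorics import Permutation, PermutationGroup

r = 5                       # the excluded prime; pi contains all primes below r
pi = {2, 3}
# all transpositions of S_r, written on the points 0,...,r-1
transpositions = [Permutation([[i, j]], size=r)
                  for i, j in combinations(range(r), 2)]

# every m = r - 2 = 3 of them must generate a pi-subgroup
for triple in combinations(transpositions, 3):
    order = PermutationGroup(list(triple)).order()
    assert set(primefactors(order)) <= pi, (triple, order)
print("all triples of transpositions in S_5 generate {2,3}-groups")
\end{verbatim}
The check succeeds for the reason given above: three edges can never connect five vertices, so the generated group is always contained in a product of symmetric groups of degree at most four.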
\section{Proof of Proposition~\ref{beta_A_n_prop}}
Notice that $(12\dots r)=(12)(13)\dots(1r)$. Whence if $x$ is a transposition, then $\beta_{r}(x,L)\leq r-1$. On the other hand, Proposition~\ref{ex1} shows that for a transposition $x$ the inequality $\beta_{r}(x,L)\geq r-1$ holds. So item (1) of the proposition holds.
Now we consider $A_6$ and an involution $x$ lying outside of $S_6$. It is known that $L=A_6\cong L_2(9)$ and
$x$ is a diagonal automorphism of $L_2(9)$. By Lemma \ref{alpha_classic} for $r\in \{3,5\}$ we have
$$\beta_{r}(x,L)\leq\alpha(x,L)=3.$$ So for $r=5$
$$\beta_{r}(x,L)\leq 3<4=r-1,$$ therefore item (3) of the proposition holds for $A_6$ and this $x$. In order to prove item (2) of the proposition we need to show that $\beta_3(x,L)=3$. Since $\beta_3(x,L)\leq \alpha(x,L)=3$, it remains to show that $\beta_{3}(x,L)\ne 2$.
Assume that $y\in x^L$ is such that the order of $D=\langle x,y\rangle$ is divisible by 3. Since $D$ is a dihedral group, this means that $x$ inverts an element of order $3$ in $\langle xy\rangle\leq L$. The group $L=A_6$ is known to contain exactly two classes of conjugate elements of order 3, with representatives $(123)$ and $(123)(456)$. It follows that every element of order $3$ of $L$ is conjugate to its inverse. Thus $x$ leaves invariant some (and hence any) conjugacy class of elements of order $3$. This is a contradiction, since by \cite[Exercise~2.18]{Wilson} $x$ interchanges these two classes.
It remains to show that if $L=A_n$ and $x\in S_n$, then $\beta_{r}(x,L)\leq r-1$.
We use induction on $n$. By item (1) of the proposition we may assume that $x$ is not a transposition.
Assume first that $n=5$. Then $r\in \{3,5\}$. If $x\in L$ is of odd order, then by Lemma~\ref{alpha_A_n}, $$\beta_{r}(x,L)\leq\alpha(x,L)\leq 2\leq r-1.$$
So assume that $x$ is an involution, i.e. $x$ is a product of two independent transpositions. Since $$[(12)(45)][(13)(45)]=(123)\,\,\,\text{ and }\,\,\,[(13)(45)][(14)(23)]=(12345),$$ we conclude
$$\beta_{r}(x,L)=2\leq r-1.$$
Assume $n=6$. Notice that $L\cong L_2(9)$. Again $r\in \{3,5\}$. We may assume $|x|\ne r$. In view of item (1) we may also assume that $x$ is not a transposition and is not a product of three transpositions, since these are conjugate under an outer automorphism of $A_6$.
If $r=5$, then by Lemma \ref{alpha_A_n} we obtain $$\beta_{r}(x,L)\leq\alpha(x,L)=3\leq 4=r-1.$$
So assume $r=3$. If $|x|=5$, then Lemma \ref{alpha_classic} implies $$\beta_{r}(x,L)\leq\alpha(x,L)=2=r-1.$$
Finally assume $|x|=2$, i.e. $x$ is a product of two independent transpositions. As in the case $n=5$, we derive~$\beta_{r}(x,L)=2= r-1.$
Let $n>6$. Since $r\in \pi(L)$, the inequality $r\leq n$ holds.
Assume $x\in S_n$ has a fixed point. Then $x$ is contained in a point stabilizer isomorphic to $S_{n-1}$. If $r\in \pi(S_{n-1})$, then $\beta_{r}(x,L)\leq r-1$
by induction. If $r\notin \pi(S_{n-1})$, then $r=n$ and by Lemma~\ref{alpha_A_n} we have $$\beta_{r}(x,L)\leq \alpha(x,L)\leq n/2=r/2<r-1.$$
Assume $\langle x\rangle$ acts without fixed points, but not transitively. Then $x$ is contained in a subgroup of the form
$S_m\times S_{n-m}$, $2\leq m\leq n-2$ and the projections of $x$ on both components are nontrivial. If the orders of the components are not divisible by $r$, we obtain
$$
\beta_{r}(x,L)\leq \alpha(x,L)\leq n/2\leq\max\{m, n-m\}<r,
$$
whence the proposition follows. Assume $r$ divides the order of at least one of the components in the product $S_m\times S_{n-m}$. If $\max\{m,n-m\}\geq 5$, the induction hypothesis implies the desired inequality. So we may assume $\max\{m,n-m\}<5$ and $r=3$. Since $n>6$, we also obtain $\max\{m,n-m\}\geq n/2>3$, and so $\max\{m,n-m\}=4$. Thus $n\in\{7,8\}$ and $S_m\times S_{n-m}$ is equal to either $S_3\times S_4$ or $S_4\times S_4$.
If $x\leadsto S_3$ then $\beta_{3}(x,L)\leq 2=r-1$. Since $S_3$ is a homomorphic image of $S_4$, in order to finish the consideration of nontransitive action of $\langle x\rangle$ it remains to consider the following configuration: $n=8$ and $x$ is a product of four independent transpositions. In this case $x\in S_2\times S_6$ and $x\leadsto S_6$, so by induction
$\beta_{3}(x,L)\leq 2=r-1$.
Finally assume that $\langle x\rangle$ acts transitively. Since $x$ is of prime order, this implies that $n=|x|$ is a prime, and without loss of generality we may assume
$$
x=(12\dots n).
$$
Let $y=(123)$. Then
$$
x^{-1}(x^{y^{-1}})=x^{-1}yxy^{-1}=(134).
$$
Thus the subgroup $\langle x, x^{y^{-1}}\rangle$ contains a $3$-cycle and is primitive, since it is transitive and $n$ is prime. By the Jordan theorem \cite[Theorem 3.3E]{DM}, we conclude $L\leq \langle x, x^{y^{-1}}\rangle$, whence $$\beta_{r}(x,L)\leq \alpha(x,L)=2\leq r-1,$$
and the proposition follows. \hfill{$\Box$}
\section{Proof of Theorem~\ref{t4}}
Theorem~\ref{t4} is true if $\alpha(x,L)\leq 11$. So by Lemmas
\ref{Sporadic}, \ref{Exceptional}, and \ref{Small_rank}, we derive that Theorem~\ref{t4} holds for the sporadic groups, for the exceptional groups of Lie type, for $A_{n-1}(q)=L_n(q)$ and ${}^2A_{n-1}(q)=U_n(q)$ if $n\leq 11$, for $B_n(q)=O_{2n+1}(q)$ and $C_{n}(q)=S_{2n}(q)$ if $n\leq 5$, and for $D_n(q)=O_{2n}^+(q)$ and ${}^2D_n(q)=O_{2n}^-(q)$ if $n\leq 5$.
The theorem also holds in the case when $L=A_n$ is an alternating group. Indeed, if $|L|$ is not divisible by $r$, then $r>n$ and by Lemma \ref{alpha_A_n} $$\alpha(x,L)\leq n-1<r-1,$$ i.e. item (2) of Theorem~\ref{t4} holds. If $r$ divides $|L|$ and $n\ne 6$, then by Proposition~\ref{beta_A_n_prop}, we have
$$
\beta_r(x,L)\leq r-1\leq 2(r-2).
$$
For $n=6$ the inequality $\alpha(x,L)\leq 5<11$ holds and the theorem follows.
Thus by the classification of finite simple groups \cite[Theorem~0.1.1]{AschLyoSmSol} it remains to prove that Theorem~\ref{t4} holds in the following cases:
\begin{itemize}
\item $L=A_{n-1}(q)=L_n(q)$ or $L={}^2A_{n-1}(q)=U_n(q)$ for $n\geq12$;
\item $L=B_{n}(q)=O_{2n+1}(q)$ or $L=C_{n}(q)=S_{2n}(q)$ for $n\geq6$;
\item $L=D_{n}(q)=O^+_{2n}(q)$ or $L={}^2D_{n}(q)=O^-_{2n}(q)$ for $n\geq6$.
\end{itemize}
Assume that Theorem~\ref{t4} is not true and choose a nonabelian simple group $L$ of minimal order possessing an automorphism $x$ of prime order for which the theorem fails. Thus $L$ is one of the classical groups mentioned above and, for some prime $r$, one of the following conditions holds:
\begin{itemize}
\item[(a)] $\alpha(x,L)\geq 12$,
\item[(b)] if $r$ does not divide $|L|$, then $\alpha(x,L)>r-1$,
\item[(c)] if $r$ divides $|L|$, then $\beta_r(x,L)>2(r-2)$.
\end{itemize}
We prove a series of steps that lead to the final contradiction.
\medskip
\begin{itemize}
\item[$(i)$] {\em $r$ divides $|L|$ and $\beta_r(x,L)>2(r-2)$.}
\end{itemize}
\medskip
Assume $r$ does not divide $|L|$. Then by Lemma~\ref{r_not divi} one of the following statements holds:
\begin{itemize}
\item $L=L_n(q)$ or $L=U_n(q)$ and $n\leq r-2$, whence by Lemma~\ref{alpha_classic} we obtain $\alpha(x,L)\leq n\leq r-2$, a contradiction with~(b);
\item $L=O_{2n+1}(q)$ or $L=S_{2n}(q)$ and $2n\leq r-3$, by Lemma~\ref{alpha_classic} we obtain $\alpha(x,L)\leq 2n+1\leq r-2$, a contradiction with~(b);
\item $L=O^+_{2n}(q)$ or $L=O^-_{2n}(q)$ and $2n\leq r-1$, again by Lemma \ref{alpha_classic} we obtain $\alpha(x,L)\leq 2n\leq r-1$; a contradiction with~(b).
\end{itemize}
Hence $r$ divides $|L|$ and $\beta_r(x,L)>2(r-2)$ in view of (c). Thus we obtain $(i)$.
\medskip
Now using the bounds on $\alpha(x,L)$ from Lemma~\ref{alpha_classic} and the fact that $L$ possesses an automorphism $x$ such that $\alpha(x,L)\geq\beta_r(x,L)\geq 2r-3$ we conclude that
\medskip
\begin{itemize}
\item[$(ii)$] {\em The following statements hold.
\begin{itemize}
\item[$(1)$] If $L=L_n(q)$ or $L=U_n(q)$, then $n\geq 2r-3$;
\item[$(2)$] If $L$ is one of the groups $S_{2n}(q)$, $O_{2n+1}(q)$ or $O^\pm_{2n}(q)$, then $n\geq 2r-6$, possibly excepting the following cases\footnote{In view of the isomorphism $S_{2n}(q)\cong O_{2n+1}(q)$ for $q$ even, we need not consider the case $L=O_{2n+1}(q)$ with $q$ even separately.}:
\begin{itemize}
\item[$(2a)$] $L=S_{2n}(q)$, $q$ is odd, $x$ is an inner automorphism induced by a transvection; in this case $n\geq r-1$;
\item[$(2b)$] $L=S_{2n}(q)$, $q$ is even, $x$ is an inner automorphism induced by a transvection; in this case $n\geq r-2$;
\item[$(2c)$] $L=O_{2n+1}(q)$, $q$ is odd, $x$ is an inner-diagonal automorphism, induced by a reflection; in this case $n\geq r-2$;
\item[$(2d)$] $L=O^\pm_{2n}(q)$, $q$ is odd, $x$ is an inner-diagonal automorphism, induced by a reflection; in this case $n\geq r-2$;
\item[$(2e)$] $L=O^\pm_{2n}(q)$, $q$ is even, $x$ is an inner automorphism induced by a transvection; in this case $n\geq r-2$.
\end{itemize}
\end{itemize}
}
\end{itemize}
\medskip
\begin{itemize}
\item[$(iii)$] {\em Cases $(2a)$--$(2e)$ in step $(ii)$ are impossible. If $L$ is one of the groups $S_{2n}(q)$, $O_{2n+1}(q)$ or $O^\pm_{2n}(q)$, then $\alpha(x,L)\leq n+3$.}
\end{itemize}
\medskip
Assume one of the cases $(2a)$--$(2e)$ holds. The inequality $n\geq r-2$ is satisfied in all these cases, and it implies for $r\geq 7$
$$2(n-1)\geq 2(r-3)> r-1>r-3.$$ By Lemma~\ref{r_not divi} the orders of $S_{2(n-1)}(q)$, $O_{2n-1}(q)$, and $O^\pm_{2(n-1)}(q)$ are divisible by~$r$. Clearly, the orders of $S_{2(n-1)}(q)$, $O_{2n-1}(q)$, and $O^\pm_{2(n-1)}(q)$ are also divisible by~$r$ for $r=3,5$.
We claim that any transvection of $S_{2n}(q)$ or $O^\pm_{2n}(q)$ (in the latter case $q$ is even) is contained as a central element in a subgroup $H$ such that $H/Z(H)\cong S_{2(n-1)}(q)$ or $H/Z(H)\cong O^\pm_{2(n-1)}(q)$, respectively. Indeed, all transvections in these groups are conjugate. Consider the stabilizer of the decomposition of $V$ into an orthogonal sum of nondegenerate subspaces $U$ and $W$ of dimensions $2$ and $2(n-1)$ respectively, and let $H$ be a subgroup in this stabilizer consisting of all elements acting on $U$ as scalars. Clearly, $H$ contains a transvection and has the desired structure.
Now notice that for every such subgroup $H$ the order $|H/Z(H)|$ is divisible by $r$. Since $L$ is a minimal counterexample, it follows that in cases $(2a)$, $(2b)$, and $(2e)$
$$
\beta_r(x,L)\leq 2(r-2).
$$
Since the reflections in orthogonal groups over fields of odd characteristic are conjugate, every reflection induces a reflection on a nondegenerate subspace of codimension $2$ and acts as the identity on its orthogonal complement. Therefore, every reflection in $O_{2n+1}(q)$ or $O^\pm_{2n}(q)$ with $q$ odd normalizes but does not centralize a subgroup $H$ such that $H/Z(H)\cong O_{2n-1}(q)$ or $H/Z(H)\cong O^\pm_{2(n-1)}(q)$. Again the order $|H/Z(H)|$ is divisible by $r$, and the minimality of $L$ implies that in cases $(2c)$ and $(2d)$
$$
\beta_r(x,L)\leq 2(r-2).
$$
Now if $L$ is one of $S_{2n}(q)$, $O_{2n+1}(q)$, or $O^\pm_{2n}(q)$, then $\alpha(x,L)\leq n+3$ by Lemma~\ref{alpha_classic}.
\medskip
\begin{itemize}
\item[$(iv)$] {\em $x$ is not unipotent.
}
\end{itemize}
\medskip
Otherwise by Lemma~\ref{Parabolic} we may assume that there exists a maximal parabolic subgroup $P$ such that $x$ is contained in $P\setminus U$, where $U$ is the unipotent radical of $P$. Therefore $x$ has a nontrivial image in~$P/\Oo_\infty(P)$ and one of the following cases holds:
\begin{itemize}
\item[$(iv1)$] $L=L_n(q)$; $P$ corresponds to a set $J$ of fundamental roots of the Dynkin diagram in Fig.~\ref{p90}, where either $J=\{r_2,\dots,r_{n-1}\}$ or $J=\{r_1,\dots,r_{n-2}\}$;
$P/\Oo_\infty(P)$ is isomorphic to a subgroup of $\Aut(L_{n-1}(q))$, containing~$L_{n-1}(q)$.
\item[$(iv2)$] $L=U_n(q)$; $P$ corresponds to a set $J^1$ of fundamental roots of the Dynkin diagrams in Figs.~\ref{p901} and~\ref{p902}, where either $J^1=\{r_2^1,\dots, r_{[n/2]}^1\}$ and $P/\Oo_\infty(P)$ is isomorphic to a subgroup of $\Aut(U_{n-2}(q))$ containing $U_{n-2}(q)$, or $J^1=\{r_1^1,\dots, r_{[n/2]-1}^1\}$ and $P/\Oo_\infty(P)$ is isomorphic to a subgroup of $\Aut(L_{[n/2]}(q^2))$ containing $L_{[n/2]}(q^2)$.
\item[$(iv3)$] $L=S_{2n}(q)$ or $L=O_{2n+1}(q)$; $P$ corresponds to a set $J$ of fundamental roots of the Dynkin diagram in Fig.~\ref{p903}, where either $J=\{r_2,\dots, r_{n}\}$ and $P/\Oo_\infty(P)$ is isomorphic to a subgroup of $\Aut(S_{2(n-1)}(q))$, containing $S_{2(n-1)}(q)$ or to a subgroup of $\Aut(O_{2n-1}(q))$ containing $O_{2n-1}(q)$, or $J=\{r_1,\dots, r_{n-1}\}$ and $P/\Oo_\infty(P)$ is isomorphic to a subgroup of $\Aut(L_{n}(q))$ containing~$L_{n}(q)$.
\item[$(iv4)$] $L=O^+_{2n}(q)$;
$P$ corresponds to a set $J$ of fundamental roots in the Dynkin diagram in Fig.~\ref{p91}, where either $J=\{r_2,\dots, r_{n}\}$ and $P/\Oo_\infty(P)$ is isomorphic to a subgroup of $\Aut(O^+_{2(n-1)}(q))$ containing $O^+_{2(n-1)}(q)$, or $J=\{r_1,\dots, r_{n-1}\}$ and $P/\Oo_\infty(P)$ is isomorphic to a subgroup of $\Aut(L_{n}(q))$ containing $L_{n}(q)$.
\item[$(iv5)$] $L=O^-_{2n}(q)$;
$P$ corresponds to a set $J^1$ of fundamental roots in the Dynkin diagram in Fig.~\ref{p92}, where either $J^1=\{r^1_2,\dots, r^1_{n-1}\}$ and $P/\Oo_\infty(P)$ is isomorphic to a subgroup of $\Aut(O^-_{2(n-1)}(q))$ containing $O^-_{2(n-1)}(q)$, or $J^1=\{r^1_1,\dots, r^1_{n-2}\}$ and $P/\Oo_\infty(P)$ is isomorphic to a subgroup of $\Aut(L_{n-1}(q))$ containing $L_{n-1}(q)$.
\end{itemize}
We consider all possibilities case by case and show that $\beta_r(x,L)\leq \max\{11, 2(r-2)\}$, thus obtaining a contradiction.
In view of step $(ii)$, for both $(iv1)$ and $(iv2)$ the inequality $n\geq 2r-3$ holds. Moreover $n\geq12$. Whence $n-1,n-2$, and $2[n/2]$ are greater than $r-2$, and by Lemma~\ref{r_not divi} the orders of $L_{n-1}(q)$, $U_{n-2}(q)$, and $L_{[n/2]}(q^2)$ are divisible by $r$. By the minimality of $L$ we derive
$$\beta_r(x,L)\leq \max\{11, 2(r-2)\}.$$
Also by step $(ii)$ in case $(iv3)$ the inequality $n\geq 2r-6$ holds. Moreover $n\geq6$. Whence $n> r-2$ and $2(n-1)>r-3$, and by Lemma~\ref{r_not divi} the orders of $S_{2(n-1)}(q)$, $O_{2n-1}(q)$, and $L_{n}(q)$ are divisible by $r$. Again by the minimality of $L$ we obtain
$$\beta_r(x,L)\leq \max\{11, 2(r-2)\}.$$
Finally, by step $(ii)$ in cases $(iv4)$ and $(iv5)$ the inequality $n\geq2r-6$ holds, and also $n\geq6$. Whence $n>n-1> r-2$ and $2(n-1)>r-1$, and by Lemma~\ref{r_not divi} the orders of $O^\pm_{2(n-1)}(q)$, $L_{n}(q)$, and $L_{n-1}(q)$ are divisible by $r$. Like above, by the minimality of $L$ we conclude that $$\beta_r(x,L)\leq \max\{11, 2(r-2)\}.$$
\begin{itemize}
\item[$(v)$] {\em The automorphism $x$ is not induced by an irreducible semisimple inner-diagonal element. }
\end{itemize}
\medskip
This step follows immediately by Lemma~\ref{alpha_irreducible}.
\begin{itemize}
\item[$(vi)$] {\em The automorphism $x$ is not induced by a semisimple inner-diagonal element contained in a proper parabolic subgroup of~$\L$. }
\end{itemize}
\medskip
Assume by contradiction that $x$ is induced by a semisimple inner-diagonal element lying in a proper parabolic subgroup. By Lemma~\ref{Parabolic}, $x$ is conjugate to an element of the Levi complement $J$ of a maximal parabolic subgroup $P$, and normalizes but does not centralize every component of this complement. Then $x$ induces a nontrivial automorphism on each component, and, if the order of a component is divisible by $r$, the minimality of $L$ implies that
$$\beta_r(x,L)\leq \max\{11, 2(r-2)\}.$$
If $r$ divides the order of none of the components, then $r$ does not divide the order of $P$. By Lemma~\ref{non_parabolic_divisor} one of the following possibilities occurs:
\begin{itemize}
\item
$L=L_n(q)$ or $L=U_n(q)$ and $r\geq \displaystyle\frac{n+4}{2}$; a contradiction with the inequality $n\geq 2r-3$ obtained in step~$(ii)$.
\item
$L=S_{2n}(q)$ or $L=O_{2n+1}(q)$ and $r\geq \displaystyle\frac{2n+7}{3}$; a contradiction with $n\geq 2r-6$ obtained in step~$(ii)$, since $n\geq 6$.
\item
$L=O^\pm_{2n}(q)$ and $r\geq \displaystyle\frac{2n+5}{3}$. If $(n,r)\ne(8,7)$, then we obtain a contradiction with $n\geq 2r-6$ obtained in step~$(ii)$, since $n\geq 6$. If $(n,r)=(8,7)$, then by Lemma~\ref{alpha_classic} we have
$$
\beta_r(x,L)\leq \alpha(x,L)\leq n+3=11\leq\max\{11, 2(r-2)\}.
$$
\end{itemize}
\begin{itemize}
\item[$(vii)$] {\em If $L$ is one of the groups $U_n(q)$, $S_{2n}(q)$, $O_{2n+1}(q)$, or $O_{2n}^\pm(q)$, then the automorphism $x$ is not induced by a similarity of the underlying space~$V$ of~$L$ possessing a proper nondegenerate invariant subspace. }
\end{itemize}
Assume $U$ is a nontrivial nondegenerate $x$-invariant subspace of minimal possible dimension. Set $W=U^\perp$. Then $W$ is also $x$-invariant, $V=U\oplus W$ since $U$ is nondegenerate, and $\dim W\geq \displaystyle\frac{1}{2}\dim V$ by the minimality of $\dim U$. If $\dim U=1$, we may assume that a preimage $x^*$ of $x$ in the group of all similarities of $V$ is nonscalar on $W$
(see Lemma~\ref{SemisimpleInvariant}). If $\dim U>1$, then $x$ is nonscalar on $W$ by the choice of $U$ (otherwise $U$ would not be of minimal possible dimension). Let $H$ be the stabilizer of both $U$ and $W$ in $\L$. Then $x\in H$. Consider the projective image $P\Delta(W)$ of the group of all similarities $\Delta(W)$ of~$W$. Clearly $P\Delta(W)$ is a homomorphic image of $H$ and the image $\overline{x}$ of $x$ in $P\Delta(W)$ is nontrivial. Let $S$ be the socle of $P\Delta(W)$. If $r$ divides $|S|$, then by the choice of $L$ we have
$$
\beta_r(x,L)\leq \beta_{r}(\overline{x},S)\leq \max\{11, 2(r-2)\}.
$$
Similarly, if $\dim U>1$ and the order of the socle $T$ of $P\Delta(U)$ (here $P\Delta(U)$ is the projective image of the group of similarities $\Delta(U)$ of $U$) is divisible by $r$, we derive
$$
\beta_r(x,L)\leq \max\{11, 2(r-2)\}.
$$
So we may assume that $r$ does not divide the order of the stabilizer in $\L$ of some decomposition $V=U\oplus W$ into the sum of mutually orthogonal nondegenerate subspaces $U$ and~$W$.
By Lemma~\ref{non_non-degenerated_divisor} one of the following cases holds:
\begin{itemize}
\item $L=U_n(q)$ and $r\geq
\displaystyle\frac{n+4}{2}$; a contradiction with the inequality $n\geq 2r-3$ obtained in Step~$(ii)$.
\item $L=S_{2n}(q)$ and $r\geq n+3$; a contradiction with the inequality $n\geq 2r-6$ obtained in Step~$(ii)$ and condition $n\geq 5$.
\item $L=O_{2n+1}(q)$ and $r\geq n+1$; a contradiction with the inequality $n\geq 2r-6$ obtained in Step~$(ii)$ and condition $n\geq 6$.
\item $L=O^+_{2n}(q)$ or $L=O^-_{2n}(q)$ and $r\geq n+1$; a contradiction with the inequality $n\geq 2r-6$ obtained in Step~$(ii)$ and condition $n\geq 6$.
\end{itemize}
\begin{itemize}
\item[$(viii)$] {\em $x$ is not inner-diagonal. }
\end{itemize}
This follows from $(iv)$--$(vii)$.
\begin{itemize}
\item[$(ix)$] {\em $x$ is not a field automorphism modulo $\L$.
}
\end{itemize}
Otherwise by Lemma~\ref{Field_Aut} we may assume that $x$ is a canonical field automorphism. Then $x$ induces a nontrivial field automorphism on the stabilizer $H$ of a $2$-dimensional nondegenerate ($1$-dimensional in case $L=L_n(q)$) subspace (chosen so that the restriction of the corresponding quadratic form to the subspace has sign $\varepsilon$, if $L=O_{2n}^\varepsilon(q)$), and on the socle $S$ of $H/\Oo_\infty(H)$. Moreover,
\begin{itemize}
\item[$(ix1)$] $S\cong L_{n-1}(q)$, if $L=L_n(q)$;
\item[$(ix2)$] $S\cong U_{n-2}(q)$, if $L=U_n(q)$;
\item[$(ix3)$] $S\cong S_{2(n-1)}(q)$, if $L=S_{2n}(q)$;
\item[$(ix4)$] $S\cong O_{2n-1}(q)$, if $L=O_{2n+1}(q)$;
\item[$(ix5)$] $S\cong O^+_{2(n-1)}(q)$, if $L=O^\varepsilon_{2n}(q)$.
\end{itemize}
As above, if $|S|$ is divisible by $r$, the induction implies $$
\beta_r(x,L)\leq \max\{11, 2(r-2)\}.
$$
Otherwise by Lemma~\ref{r_not divi} we obtain one of the following inequalities:
\begin{itemize}
\item[$(ix1)$] $r-2\geq n-1$, a contradiction with the inequality $n\geq 2r-3$ obtained in Step~$(ii)$;
\item[$(ix2)$] $r-2\geq n-2$, a contradiction with the inequality $n\geq 2r-3$, obtained in Step~$(ii)$, or with $n\geq 12$;
\item[$(ix3,4)$] $r-3\geq 2(n-1)$, a contradiction with the inequality $n\geq 2r-6$, obtained in Step~$(ii)$, or with $n\geq 5$;
\item[$(ix5)$] $r-1\geq 2(n-1)$, a contradiction with the inequality $n\geq 2r-6$, obtained in Step~$(ii)$, or with $n\geq 6$.
\end{itemize}
Thus by Lemma~\ref{Aut} it follows that
\begin{itemize}
\item[$(x)$] {\em Modulo~$\L$, $x$ is either a graph-field automorphism and $L\in\{L_n(q),O_{2n}^+(q)\}$, or a graph automorphism and \\ $L\in\{L_n(q),U_n(q),O_{2n}^+(q),O_{2n}^-(q)\}$. Moreover $|x|=2$.
}
\end{itemize}
We exclude both remaining possibilities for $x$.
\begin{itemize}
\item[$(xi)$] {\em $x$ is not a graph-field automorphism modulo~$\L$.
}
\end{itemize}
Suppose the contrary. Then by Lemma~\ref{Field_Aut} we have $q=q_0^2$ and $C_L(x)\cong U_{n}(q_0)$, if $L=L_n(q)$; and $C_L(x)\cong O_{2n}^-(q_0)$, if $L=O^+_{2n}(q)$. It is easy to see that the index $|L:C_L(x)|$ is even. Moreover, $Z(C_L(x))=1$. By Lemma~\ref{guest}, $x$ normalizes but does not centralize a subgroup $H=C_L(x)^g$ for some $g\in L$. Therefore $x$ induces on $H$ a nontrivial automorphism $\overline{x}$.
\begin{itemize}
\item If $L=L_n(q)$, then $n\geq 2r-3>r-2$, $H\cong U_{n}(q_0)$, and by Lemma~\ref{r_not divi}, $r$ divides~$|H|$.
\item If $L=O_{2n}^+(q)$, then $H\cong O^-_{2n}(q_0)$, $n\geq 2r-6$, whence, using the inequality $n\geq 6$, we conclude $2n>r-1$, and by Lemma~\ref{r_not divi}, $r$ divides~$|H|$.
\end{itemize}
Therefore, by induction we obtain
$$
\beta_r(x,L)\leq \beta_{r}(\overline{x},H)\leq \max\{11, 2(r-2)\};
$$
a contradiction.
\begin{itemize}
\item[$(xii)$] {\em $x$ is not a graph automorphism modulo~$\L$. }
\end{itemize}
Suppose the contrary. Then one of the following possibilities occurs: $L=L^\varepsilon_n(q)$ or $L=O^\varepsilon_{2n}(q)$, $\varepsilon\in\{+,-\}$. Consider these possibilities separately.
\begin{itemize}
\item $L=L^\varepsilon_n(q)$, $\varepsilon\in\{+,-\}$.
\end{itemize}
By Lemma~\ref{GraphAutGLU}, $x$ normalizes but does not centralize a subgroup $H$ of $L$ isomorphic to $O_n(q)=O_{2k+1}(q)$, if $n=2k+1$, and a subgroup isomorphic to either $S_n(q)=S_{2k}(q)$ or $O^\pm_n(q)=O^\pm_{2k}(q)$, if $n=2k$. Then $x$ induces on $H$ a nontrivial automorphism $\overline{x}$. Since $n\geq 2r-3$ and $n\geq 12$, in all cases we have
$$2k\geq n-1\geq 2r-4>r-1>r-3.
$$
Whence $|H|$ is divisible by $r$ by Lemma~\ref{r_not divi}, and the induction implies
$$
\beta_r(x,L)\leq \beta_{r}(\overline{x},H)\leq \max\{11, 2(r-2)\}.
$$
\begin{itemize}
\item $L=O^\varepsilon_{2n}(q)$, $\varepsilon\in\{+,-\}$.
\end{itemize}
As above, we first choose a subgroup $H$ of $L$ such that $x$ normalizes but does not centralize it.
If $q$ is even, then $L=\L$ and by Lemma \ref{Graph_Inv_D_n_even_q} a graph automorphism $\gamma$ of $\L$ normalizes but does not centralize a subgroup isomorphic to $ O_{2n-2}^{\varepsilon\eta}(q)$ for an appropriate $\eta\in\{+,-\}$.
Assume that $q$ is odd. In this case by Lemma~\ref{Graph_Inv_D_n}, $x$ normalizes but not centralizes a subgroup $H$ of $L$, such that either $H\cong O_{2n-1}(q)$, or $n=2k+1$ and $H\cong O_{2k+1}(q^2)$.
Since $n\geq 2r-6$ and $n\geq 6$, we obtain that $$2(n-1)\geq n>r-3.$$ Whence $|H|$ is divisible by $r$ by Lemma~\ref{r_not divi}, and by induction we have $$
\beta_r(x,L)\leq \beta_{r}(\overline{x},H)\leq \max\{11, 2(r-2)\}.
$$
\section{Proofs of Theorems~\ref{t2} and~\ref{t3}}
Let $\pi$ be a proper subset of the set of all primes, $r$ be the minimal prime not in $\pi$ and $$m:=\max\{11, 2(r-2)\}.$$ In view of Proposition~\ref{ex1}, in order to prove Theorems~\ref{t2} and~\ref{t3} it is enough to prove that $\BS_\pi^m$ coincides with the class of all finite groups.
Assume the contrary, and let $G\notin \BS_\pi^m$ be of minimal order. By Lemma~\ref{2notinpi} we obtain that $r>2$.
By Lemma~\ref{red} we conclude that $G$ is isomorphic to an almost simple group with simple socle $L$, satisfying the following properties: $L$ is neither a $\pi$- nor a $\pi'$-group, it admits an automorphism $x$ of prime order lying in $\pi$ such that any $m$ conjugates of $x$ generate a $\pi$-group, and $G\cong \langle L,x\rangle$.
Since $L$ is not a $\pi$-group, there exists a prime divisor $s$ of the order of~$L$, not lying in~$\pi$. Clearly $s\geq r$.
Now $\alpha(x,L)> m$, since otherwise $m$ conjugates of $x$ generate a subgroup of $\langle L,x\rangle$ whose order is divisible by $s$, contrary to the assumption that any $m$ conjugates of $x$ generate a $\pi$-group. In particular, $$\alpha(x,L)>11\text{ and } \alpha(x,L)>2(r-2)\geq r-1.$$ The same arguments imply $$\text{ if }r\text{ divides the order of }L,\text{ then }\beta_r(x,L)>m\geq 2(r-2).$$ This contradicts Theorem~\ref{t4}.
\bibliographystyle{amsplain}
\section{Introduction}
\label{sec:intro}
\ac{UQ} requires the solution of a stochastic \ac{PDE} with random data. Several methods for solving stochastic \acp{PDE}, such as stochastic Galerkin \cite{lemaitre2010spectral,ghanem1991stochastic} or stochastic collocation \cite{Babuska2010}, are based on a standard approximation in space like \acp{FE} or finite volumes, and different types of polynomial expansions in the stochastic space \cite{Xiu2003}. Although very powerful for some particular problems, most of these techniques suffer the so-called `curse of dimensionality', i.e. a dramatic increase of computational cost with the number of stochastic variables. Besides, some of these methods are intrusive: given a code that can solve a deterministic problem, it needs to be modified to solve a stochastic one.
In contrast, the classical \ac{MC} method does not present these drawbacks. On the one hand it is a sampling method, i.e., it only requires the repeated evaluation of a deterministic model, and it can be implemented in a non-intrusive way. On the other hand it is known to converge to the exact statistics of the solution as the number of input samples tends to infinity, independently of the dimensionality of the stochastic space and mostly independently of the physics of the problem under consideration, as long as some moments of the \ac{QoI} are bounded. However, the number of samples required to achieve statistical convergence combined with the complexity of the computational model required to have enough spatial and/or temporal accuracy can make it prohibitively expensive.
To address this difficulty, the \ac{MLMC} method was introduced in \cite{Kebaier2005} (with two levels) and in \cite{Giles2008} (with several levels) and applied to solve stochastic \acp{PDE} with random input parameters in \cite{Giles2012,Mishra2012,Mishra2012a,Pisaroni2017}. The \ac{MLMC} method exploits a hierarchy of discretizations of the underlying differential problem (using a hierarchy of meshes). The method computes expectations of the difference between solutions at two consecutive levels of accuracy, reducing the variance of the problem at hand and thus decreasing the number of samples required at finer discretization levels. By doing this, most of the computational cost is transferred to coarse discretization levels, where a large number of samples is used to control the statistical error of the \ac{MLMC} estimator, whereas only a few samples are used on the finest (and most costly) levels, to control the discretization error.
Depending on the application, different types of uncertain data can be considered, like material properties (\ac{PDE} parameters), boundary and/or initial conditions, and the geometry/topology of the domain. The \ac{MLMC} method can be used out-of-the-box in most cases but it requires a hierarchy of discretizations as input, which can be difficult to generate when dealing with complex geometries and unstructured body-fitted meshes. For this reason \ac{MLMC} methods are barely used to perform \ac{UQ} when complex geometries are considered. This is the first limitation we aim to address herein, utilizing embedded methods with \ac{MLMC} in a method referred to as \ac{EMLMC}.
On top of that, leaving simple academic constructions aside, geometry is hardly known exactly and surfaces are rarely smooth when looking at sufficiently small scale. Variation between different samples of natural materials, e.g. porous rocks, and cost limits of diminishing manufacturing tolerances of fabrication processes make geometry an epistemic uncertainty in general. Geometric uncertainties can have a strong impact on the solution of a \ac{PDE}, e.g. they trigger boundary layer separation in fluids, and these effects are important in many fields, e.g. geology \cite{Deffeyes1982}, tribology \cite{hutchings2017tribology} and lubrication \cite{hamrock2004fundamentals}, nano-technology \cite{Nosonovsky2008} and biology \cite{Park2012}. The first attempts to predict the impact of roughness were through deterministic parameterizations, like periodic indentations \cite{Taylor1971,Richardson1971}, sinusoidal corrugation \cite{Fyrillas2001} and fractal representations, e.g. using the well-known Von Koch snowflake curve \cite{Cajueiro1999,Blyth2003}. After that, stochastic representations of surface roughness have been adopted frequently, which make it possible to incorporate the lack of information or measurement errors. Examples of these descriptions include random fields to represent the thickness in shell structures (see \cite{Stefanou2004} and the references therein), random fractal representations \cite{Sarkar1993} and random fields with a given spatial correlation structure to represent deviations from a nominal geometry \cite{Xiu2006,Tartakovsky2006,Mohan2011874,Zayernouri2013}. In engineering design, geometry is exactly defined using computational tools and in this case the uncertainty is rooted in the manufacturing process, as mentioned. Uncertainty also arises when measuring, e.g. when the geometry is determined from digital images, where it is rooted in image scanning and resolution. Using pixelated images for computing may result in significant errors, even in the limit of small resolution \cite{Babuska2001,Babuska2003}. However, it is possible to recover a definition of the geometry using level set methods \cite{sethian1999level} and, in fact, this can be done statistically, e.g. using polynomial chaos expansions, to define random level set representations \cite{Stefanou2009}. Curiously, level set and phase field representations can also be obtained from noisy images solving stochastic \acp{PDE} \cite{Patz2013}.
There are three approaches to deal with geometric uncertainty. The first one is based on domain mappings, the second on perturbation methods and the third one on the use of fictitious domain methods, also known as embedded or immersed methods. The first approach started with the stochastic mapping concept that was introduced in \cite{Xiu2006} to transform a (deterministic or stochastic) problem in a random domain to a stochastic problem in a deterministic domain. This approach was applied to the stochastic analysis of roughness in flow problems in \cite{Tartakovsky2006}. In \cite{Mohan2011874} the main idea is to fix the mesh connectivity and change the coordinates of the nodes using deformation strategies that incorporate uncertainty through the solution of auxiliary \ac{PDE} with random boundary conditions. In \cite{Castrillon-Candas2016} a domain mapping approach is used to prove the convergence of a stochastic collocation method based on Smolyak grids. Because the construction of globally smooth mappings is difficult, a piecewise smooth domain mapping approach is proposed in \cite{Chaudhry20181127}. The second approach was introduced in \cite{Sarkar1993} where an analytical perturbation method was used to find solutions of the Laplace equation. A perturbation method developed using shape calculus (a Taylor expansion using shape derivatives and its linearization around a nominal domain) was proposed in \cite{Harbrecht2008,Harbrecht2013,Harbrecht2016}. Because shape derivatives are difficult to compute, a perturbation approach based on approximated stochastic boundary conditions (of Robin type) was proposed in \cite{Dambrine2017943,Dambrine2016921}. Although it is not based on a formal perturbation expansion, the stochastic smoothed profile method of \cite{Zayernouri2013} can be included in this category. This method transforms a random domain into a random force-term in a deterministic domain through the introduction of a smoothly spreading interface layer to represent rough boundaries and its error grows with the layer thickness \cite{Luo2008}. By definition, these approaches are not applicable to geometrically complex and even topologically random domains.
The use of embedded methods, which rely on a fixed background grid and a discrete approximation of the geometry using a level set function, for the solution of \acp{PDE} in stochastic domains combined with polynomial chaos was proposed in \cite{Canuto2007} and applied to fluid dynamics problems in \cite{Parussini2010}. Among the several possible immersed methods, those in \cite{Canuto2007,Parussini2010} impose boundary conditions through Lagrange multipliers defined in a \ac{FE} space built from a boundary discretization (independent of the domain discretization). This construction results in a saddle point linear system for which specific preconditioners have been recently developed \cite{Gordon2014}. Alternatively, the use of the so-called \ac{XFEM} method \cite{Moes1999,sukumar_modeling_2001} for dealing with embedded stochastic geometries using polynomial chaos was proposed in \cite{Nouy2007,Nouy2008,Nouy20101312,Nouy20113066} and named \ac{X-SFEM}. It was later applied to heat transfer \cite{Lang20131031} and structural problems \cite{Schoefs2016}. The lack of robustness of embedded \ac{FE} methods is not addressed in these works.
The use of \ac{MLMC} in complex random domains using body-fitted meshes would require generating an unstructured mesh (and its corresponding mesh hierarchy) per sample, which is absolutely impractical. For this reason, and the previously described problems related to the generation of mesh hierarchies in general, we also consider embedded \ac{FE} methods in this work. However, the naive use of these methods exhibits some problems. The most salient one is the so-called small cut cell problem. The intersection of cells with the domain surface cannot be controlled and one can end up with cells for which the ratio between the cell volume and its portion in the domain interior is arbitrarily large. It is well-known that the condition number of the resulting linear system blows up with these ratios. As a result, plain embedded \ac{FE} methods are not robust. Different remedies have been proposed so far in the literature. For conforming \ac{FE} methods in which trace continuity must be enforced between cells, one can consider methods that introduce artificial dissipation (stabilization) in order to weakly enforce zero jumps of the derivatives (up to the \ac{FE} order being used) on facets that belong to cut cells (see \cite{burman_cutfem:_2015} for more details). Another approach is to create aggregated meshes such that every aggregate contains at least one interior cell. Whereas the generation of discontinuous \ac{FE} spaces on aggregated meshes is straightforward (enforcing the local \ac{FE} space to be the same polynomial space as in a standard cell), it is far more complicated for conforming \acp{FE}. Using such a naive approach for conforming methods would generate global constraints and too much \emph{rigidization}, which would affect the convergence properties of the method, its implementation, and the sparsity of the resulting linear system; we note that this is in fact the \emph{strong} version of the weakly enforced constraints in \cite{burman_cutfem:_2015}. The aggregated \ac{FE} method has been proposed in \cite{Badia2017a} to enable aggregation techniques for conforming methods. It has three key features: (a) the constraints are cell-local and their implementation in \ac{FE} codes is easy; (b) it keeps the convergence order of standard body-fitted \ac{FE} spaces; (c) it does not perturb the Galerkin formulation with any kind of artificial dissipation and does not involve the computation of (possibly) high-order derivatives on the element boundaries.
In this work, we propose the \ac{EMLMC} framework. It combines the \ac{MLMC} method on a hierarchy of background meshes (which can be simple structured meshes) and the \ac{agfem} at every mesh level to capture every sample of the random domain. In particular, we consider random domains implicitly defined through random level-set functions and the marching tetrahedra scheme to approximate the random domain at every mesh level. The resulting method extends the applicability of \ac{MLMC} to complex random domains and it is robust. We state the model problem in Section \ref{sec:problem} and we describe the \ac{MLMC} method in Section \ref{sec:mc_and_mlmc}. The construction of the \ac{agfem} discretization for each sample is then described in Section \ref{sec:agg}. Finally, a set of numerical experiments are described in Section \ref{sec:examples} and we draw some concluding remarks in Section \ref{sec:conclusions}.
\def\mathcal{B}{\mathcal{B}}
\section{Problem formulation}
\label{sec:problem}
As a model problem we consider the following elliptic stochastic problem. Given an oriented manifold $\mathcal{M}({\omega})\subset \mathbb{R}^d$ and its corresponding interior domain ${\mathcal{D}}({\omega})$, find $u({\boldsymbol{x}},{\omega})$ such that
\begin{align}
-{\boldsymbol{\nabla}} \cdot \left( {k} {\boldsymbol{\nabla}} u \right) = f \ \hbox{ in } \ {\mathcal{D}}({\omega}), \qquad
u = u_0 \ \hbox{ on } \, \mathcal{M}({\omega}), \label{eq:model}
\end{align}
almost surely (a.s.), i.e., for almost all ${\omega} \in \Omega $, which denotes the uncertainty, described by a complete probability space $(\Omega,{\mathcal{F}},P)$. Although stochastic coefficients, i.e. the diffusion ${k} = {k}({\boldsymbol{x}},{\omega})$, forcing $f=f({\boldsymbol{x}},{\omega})$, and boundary condition $u_0 = u_0({\boldsymbol{x}},{\omega})$ could be considered random fields too, we assume they are deterministic, as it is usual in the literature when stochastic domains are considered \cite{Chaudhry20181127,Mohan2011874,Xiu2006,Harbrecht2008,Harbrecht2013,Harbrecht2016,Dambrine2017943,Dambrine2016921}. Let us assume that all realizations $\mathcal{M}({\omega})$ (and ${\mathcal{D}}({\omega})$) are bounded. We can define a bounded artificial domain $\mathcal{B}$ that contains all possible realizations of $\mathcal{M}({\omega})$, i.e. ${\mathcal{D}}({\omega}) \subset \mathcal{B}, \, \forall {\omega} \in \Omega$. We also assume that ${k}$ and $f$ are defined in $\mathcal{B}$, independently of ${\omega} \in \Omega$. With the random solution of this problem at hand we aim to compute $\E\left(Q(u)\right)$ where $Q$ is a deterministic \ac{QoI}, e.g. an integral of $u$ on a subregion or surface, and $\E$ the expectation.
Although it is not strictly necessary (at least in deterministic domains \cite{Charrier2013,Teckentrup2013a}), uniform ellipticity is often assumed to guarantee the well-posedness of \Eq{eq:model} (see \cite{Barth2011,Babuska2010,Chaudhry20181127}). Therefore, we assume ${k}$ uniformly bounded from below, i.e. there exists ${k}_0>0$ such that ${k}({\boldsymbol{x}}) \ge {k}_0, \, \forall {\boldsymbol{x}} \in \mathcal{B}$ (and $\forall {\omega} \in \Omega$ if a stochastic diffusion ${k}({\boldsymbol{x}},{\omega})$ is considered). Under these assumptions, the bilinear form associated to the weak form of \eqref{eq:model} is bounded and coercive and the Lax-Milgram lemma guarantees a solution, for any ${\omega} \in \Omega$, uniformly bounded in terms of $ \|f\|_{L^2( \mathcal{B})} $ \cite{Chaudhry20181127}.
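For completeness, we sketch the a priori bound behind this statement in the simplest case $u_0=0$ ($C_P$ below denotes the Poincar\'e constant of $\mathcal{B}$, which applies to the zero extension of $u$ and is therefore independent of ${\omega}$). Testing the weak form of \eqref{eq:model} with $u$ itself gives
\begin{align*}
{k}_0 \| {\boldsymbol{\nabla}} u \|^2_{L^2({\mathcal{D}}({\omega}))}
\leq \int_{{\mathcal{D}}({\omega})} {k}\, {\boldsymbol{\nabla}} u \cdot {\boldsymbol{\nabla}} u \, \mathrm{d}V
= \int_{{\mathcal{D}}({\omega})} f u \, \mathrm{d}V
\leq C_P\, \|f\|_{L^2(\mathcal{B})}\, \| {\boldsymbol{\nabla}} u \|_{L^2({\mathcal{D}}({\omega}))},
\end{align*}
so that $\| {\boldsymbol{\nabla}} u \|_{L^2({\mathcal{D}}({\omega}))} \leq C_P {k}_0^{-1} \|f\|_{L^2(\mathcal{B})}$ uniformly in ${\omega}$.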
\def\mathcal{S}{\mathcal{S}}
\def\mathcal{M}{\mathcal{M}}
\def\mathcal{T}{\mathcal{T}}
\def{\ell}{{\ell}}
\def{\T}^\M{{\mathcal{T}}^\mathcal{M}}
\def{\T}^\D{{\mathcal{T}}^{\mathcal{D}}}
\def\widetilde{\D}{\widetilde{{\mathcal{D}}}}
\def\mathcal{V}{\mathcal{V}}
\def\widetilde{\X}{\widetilde{\mathcal{V}}}
\def\mathcal{I}{\mathcal{I}}
\section{Monte Carlo and its multilevel extension}
\label{sec:mc_and_mlmc}
\subsection{The Monte Carlo method}
\label{sec:mc}
The aim is to compute the expected value $\E(Q)$ of the \ac{QoI} $Q(u)$. Using a discrete approximation in a mesh $\mathcal{T}_h$ of size $h$ having $M \propto h^{-d} $ degrees of freedom, we actually compute a discrete approximation to the \ac{QoI} using $u_h$, the discrete \ac{FE} solution of the continuous problem at hand. Let us denote $Q_h=Q(u_h)$. The standard MC algorithm computes the approximation of its expected value as the average
\begin{equation}
\E(Q) \approx \overline{Q}_h = \frac{1}{N} \sum_{i=1}^N Q_h^i, \label{eq:mc_average}
\end{equation}
where $Q_h^i$ are obtained evaluating the \ac{QoI} using the $N$ realizations (samples) $u_h^i$ of the stochastic solution $u_h({\boldsymbol{x}},{\omega})$ on the given mesh $\mathcal{T}_h$. The mean square error of this approximation is
\begin{equation}
e^2(\overline{Q}_h) = \E\left[(\overline{Q}_h - \E(Q))^2\right] = (\E(Q_h -Q))^2 + \V(\overline{Q}_h)
\label{eq:mc_error}
\end{equation}
where $\V$ is the variance. As it is well known, the first term is the (squared) discretization or bias error and its reduction requires refining the mesh, whereas the second one is the statistical error that decays as
\begin{equation}
\V(\overline{Q}_h) = N^{-1} \V(Q_h),
\label{eq:mc_variance}
\end{equation}
so its reduction requires increasing the number of samples. Denoting the complexity of evaluating $Q^i_h$ as $C_h$, it is possible to estimate the total computational cost of the MC algorithm, $C_{\rm MC}$, under the following assumptions.
\begin{assumption}
\label{as:1}
There exist $\alpha$ and $c_{\alpha}$ such that
$$|\E(Q_h -Q)| \leq c_{\alpha}h^{\alpha} .$$
\end{assumption}
\begin{assumption}
\label{as:2}
There exist $\gamma$ and $c_{\gamma}$ such that
$$C_h \leq c_{\gamma} h^{-\gamma} .$$
\end{assumption}
By \Ass{as:1} the mesh size required to achieve a discretization error smaller than $\varepsilon$ is $h \preceq \varepsilon^{1/\alpha}$ (hereafter $a \preceq b$ means that there is a constant $c$, independent of $h$, such that $a \le c b$).
On the other hand, by \Eq{eq:mc_variance}, the number of samples required to achieve a statistical error $ \left[\V(\overline{Q}_h)\right]^{1/2} \leq \varepsilon $ is $N \propto \varepsilon^{-2}$, where $a\propto b$ means that there exist constants $c$ and $C$ such that $c b \le a \le C b$. Then
\begin{equation}
C_{\rm MC} = N C_h \preceq N h^{-\gamma} \preceq \varepsilon^{-(2+\gamma/\alpha)}
\label{eq:mc_complex}
\end{equation}
Note that the factor $\varepsilon^{-\gamma/\alpha} $ comes from the discretization error and represents the (asymptotic) cost of one sample.
For the model problem of \Sec{sec:problem}, \Ass{as:1} holds with $\alpha=2$ \cite{Teckentrup2013a}. Now the constant $\gamma$ depends on $d$ and the type of solver. For naive Gaussian elimination $\gamma=3d$, whereas $\gamma=d$ for the optimal multigrid method. In between, for sparse linear solvers $\gamma=3(d-1)$ (for $d=2,3$) whereas for the (unpreconditioned) \ac{CG} algorithm $\gamma=d+1$. In the last case, for example, the computational cost scales as $\varepsilon^{-4}$ when $d=3$.
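The following sketch illustrates this cost model (the constants and the toy tolerances are ours, chosen only to mimic \Ass{as:1} and \Ass{as:2}; it is not the solver used in \Sec{sec:examples}):
\begin{verbatim}
import math

def mc_cost_model(eps, alpha=2.0, gamma=4.0, var_Q=1.0):
    """Plain MC cost model: mesh size from the bias (Assumption 1),
    number of samples from the statistical error, cost per sample
    from Assumption 2; all constants are set to one."""
    h = eps ** (1.0 / alpha)           # discretization error ~ eps
    N = math.ceil(var_Q / eps ** 2)    # statistical error    ~ eps
    cost = N * h ** (-gamma)           # ~ eps^-(2 + gamma/alpha)
    return h, N, cost

for eps in (1e-1, 1e-2, 1e-3):
    # gamma = d + 1 = 4 (CG in 3D): the cost scales as eps^-4.
    h, N, cost = mc_cost_model(eps)
    print(f"eps={eps:.0e}  h={h:.2e}  N={N}  cost={cost:.2e}")
\end{verbatim}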
\subsection{The Multilevel Monte Carlo method}
\label{sec:mlmc}
The \ac{MLMC} algorithm exploits the linearity of the expectation using a hierarchy of $L+1$ meshes $\mathcal{T}_0,\mathcal{T}_1,...,\mathcal{T}_L$ of sizes $h_0>h_1>...>h_L$. Although there are other options \cite{Haji-Ali2016}, we assume $h_{\ell}=h_0s^{-{\ell}}$ (i.e. each mesh in the hierarchy is obtained by uniformly dividing each cell into $s^d$ subcells). Performing $\{N_{\ell}\}_{{\ell}=0,...,L}$ simulations for different values of the random parameters ${\omega}$ on $\{\mathcal{T}_{\ell}\}_{{\ell}=0,...,L}$, the expectation on the coarse grid is corrected using the whole hierarchy as
\begin{equation}
\E(Q_L) = \E(Q_0) + \sum_{{\ell}=1}^L \E(Q_{\ell}-Q_{{\ell}-1}) \approx \sum_{{\ell}=0}^L \overline{Y_{\ell}}:=\widetilde{Q}_L \label{eq:mlmc_average}
\end{equation}
where $Q_{\ell}=Q(u_{{\ell}})$ is the approximation of the \ac{QoI} computed using the \ac{FE} solution $u_{\ell}$ for the mesh $\mathcal{T}_{\ell}$ and $Y_{\ell}=Q_{\ell}-Q_{{\ell}-1}$ for ${\ell}=1,...,L$ and $Y_0=Q_0$.
The total error of this approximation is now given by
\begin{equation}
e^2( \widetilde{Q}_L ) = \E\left[( \widetilde{Q}_L - \E(Q) )^2\right] = (\E(Q_L -Q))^2 + \V(\widetilde{Q}_L)
\label{eq:mlmc_error}
\end{equation}
The first term, the (squared) discretization error, is the same as in the \ac{MC} method and it requires mesh $\mathcal{T}_L$ to be fine enough. For this term to be smaller than, e.g. $\varepsilon^2/2$, \Ass{as:1} requires $c_{\alpha} h_L^{\alpha} \leq 2^{-1/2} \varepsilon$, or, equivalently (absorbing $h_0^{\alpha}$ into $c_{\alpha}$),
\begin{equation}
L = \lceil {\frac{1}{\alpha} \log_s \left(\sqrt{2}c_{\alpha} \varepsilon^{-1}\right)} \rceil
\label{eq:mlmc_num_levels}
\end{equation}
where, as usual, $\lceil x \rceil$ denotes the unique integer $n$ satisfying $x\leq n<x+1$. The second term in \Eq{eq:mlmc_error}, the statistical error, given by $\V(\widetilde{Q}_L) = \sum_{{\ell}=0}^{L} N_{\ell}^{-1} \V(Y_{\ell})$, is very different from the one in the \ac{MC} method thanks to the following assumption.
\begin{assumption}
\label{as:3}
For any ${\ell}=0,....L$, there exist $\beta$ and $c_{\beta}$ such that
$$\V(Y_{\ell}) \leq c_{\beta}h_{\ell}^{\beta}.$$
\end{assumption}
Thanks to \Ass{as:3}, $\V(Y_{\ell})$ tends to zero with $h_{\ell}$ and in this sense, the \ac{MLMC} method can be understood as a variance reduction method. Under \Ass{as:1}, \Ass{as:2} and \Ass{as:3}, it is possible to explicitly bound the total computational cost and to minimize it to obtain the optimal number of samples to be taken on each level \cite{Giles2008}.
The number of samples on each level required to have a (squared) statistical error smaller than $\varepsilon^2/2$ is given by
\begin{equation}
N_{\ell} = \lceil {2 \varepsilon^{-2} \sqrt{\frac{\V(Y_{\ell})}{C_{\ell}}} \sum_{i=0}^{L} \sqrt{\V(Y_i)C_i}} \rceil,
\label{eq:mlmc_num_samples}
\end{equation}
where $C_{\ell}$ is the complexity (computational cost) of evaluating $Q_{\ell}$, i.e. taking one sample at level ${\ell}$. Taking the bounds in \Ass{as:2} and \Ass{as:3} as sharp, \Eq{eq:mlmc_num_samples} can also be written as
\begin{equation}
N_{{\ell}} = \lceil {s^{\Gamma (L-{\ell})} N_{L}} \rceil,
\label{eq:num_samples_ratio}
\end{equation}
where
$$\Gamma=\frac{1}{2}(\gamma+\beta).$$
The computational complexity of the \ac{MLMC} algorithm is given by $C_{\rm MLMC} = \sum_{{\ell}=0}^{L} N_{\ell} C_{\ell}$, and it can be bounded using \Ass{as:2} and \Ass{as:3} as
\begin{equation}
C_{\rm MLMC} \preceq \varepsilon^{-2-\max(0,\frac{\gamma-\beta}{\alpha})},
\label{eq:mlmc_complexity_bound}
\end{equation}
with an additional logarithmic factor when $\gamma=\beta$.
There are three different situations depending on whether the (computational cost required to reduce the) statistical error dominates, is of the same order as, or is dominated by the discretization error. In the first case, $\beta>\gamma$, and because the statistical error is eliminated using coarse grids, the complexity of the \ac{MLMC} algorithm is independent of the fine mesh size. In the last one, $\beta<\gamma$ and the complexity includes a factor $\varepsilon^{-\gamma/\alpha} $ that comes from the discretization error, which also appears in the \ac{MC} complexity estimate \eqref{eq:mc_complex}. In applications of \acp{PDE} with random coefficients, it is typical to have $\beta=2\alpha$ and in this case $C_{\rm MLMC} \preceq \varepsilon^{-\gamma/\alpha} $, which is the complexity of one sample at the finest grid, that is, the \ac{MLMC} algorithm does not \emph{see} the sampling cost \cite{Pisaroni2017}. Thus, the relation between parameters $\beta$ and $\gamma$ determines whether the effort should be concentrated on coarse or fine meshes.
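These quantities can be evaluated explicitly once the constants are known; the following sketch computes the number of levels from \Eq{eq:mlmc_num_levels} and the per-level sample sizes from \Eq{eq:num_samples_ratio} (the values of $\alpha$, $\beta$, $\gamma$, $c_{\alpha}$ and the finest-level sample size are illustrative; in practice the latter follows from \Eq{eq:mlmc_num_samples}):
\begin{verbatim}
import math

def mlmc_setup(eps, alpha=2.0, beta=4.0, gamma=4.0, s=2,
               c_alpha=1.0, n_fine=10):
    """Number of levels and per-level sample sizes of the MLMC estimator."""
    L = math.ceil(math.log(math.sqrt(2.0) * c_alpha / eps, s) / alpha)
    Gamma = 0.5 * (gamma + beta)
    N = [math.ceil(s ** (Gamma * (L - l)) * n_fine) for l in range(L + 1)]
    return L, N

L, N = mlmc_setup(1e-2)
print(L, N)   # most of the samples are taken on the coarsest levels
\end{verbatim}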
From the previous discussion the crucial role of the constants $\alpha$, $\beta$, $\gamma$, $c_{\alpha}$, $c_{\beta}$ and $c_{\gamma}$ is apparent. In general, these constants are unknown (they are problem and solver dependent) but their estimation is required to define the optimal number of levels and samples in \ac{MLMC}. The so-called adaptive \ac{MLMC} algorithms have been developed to face this challenge by estimating these constants with extrapolations and/or statistical inference \cite{Giles2008,Cliffe2011,Collier2015}. For example, the variances in \Eq{eq:mlmc_num_samples} can be estimated (a posteriori) using the sample variance
\begin{equation}
\V(Y_{\ell}) \approx V(Y_{\ell}) = \frac{1}{N_{\ell}} \sum_{i=1}^{N_{\ell}} (Y^i_{\ell} - \overline{Y_{\ell}})^2.
\label{eq:sample_variance}
\end{equation}
We do not follow this strategy herein, but we exploit the knowledge of the problem at hand.
For the problem of \Sec{sec:problem}, $\beta=4$ \cite{Teckentrup2013a} and, using a \ac{CG} algorithm, $\gamma=d+1$, so the statistical error dominates the computational cost and the optimal ratio between the numbers of samples across the hierarchy is given by \Eq{eq:num_samples_ratio} with $\Gamma=4$ for $d=3$ and $\Gamma=3.5$ for $d=2$.
Observe that \Ass{as:1} and \Ass{as:3} (redefining $c_{\alpha}$ to include $h_0^{\alpha}$ and $c_{\beta}$ to include $h_0^{\beta}$) can be written as
\begin{align}
|\E(Q_{\ell}-Q)| \leq c_{\alpha} s^{-\alpha {\ell}}, \qquad \V(Y_{\ell}) \leq c_{\beta} s^{-\beta {\ell}}.\label{eq:error_decay}
\end{align}
The corresponding sample approximations $|Q-\widetilde{Q}_{\ell}|$ (see \Eq{eq:mlmc_average}) and $V(Y_{\ell})$ (see \Eq{eq:sample_variance}) statistically converge to these values and thus, with a large enough number of samples, one can check spatial convergence (i.e., with respect to $\ell$) for these quantities too.
In the numerical examples of \Sec{sec:examples} $s=2$ and therefore the (logarithmic) convergence rate with respect to the level is $2\log(2)$ for the expected value and $4\log(2)$ for the variance.
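Such a convergence check can be automated by a least-squares fit of the level-wise sample quantities in logarithmic scale, e.g. as in the following sketch (the input arrays below are synthetic placeholders decaying exactly with $\alpha=2$ and $\beta=4$ for $s=2$; in \Sec{sec:examples} they are replaced by the computed $\overline{Y_{\ell}}$ and $V(Y_{\ell})$):
\begin{verbatim}
import numpy as np

def observed_rates(Y_means, Y_vars, s=2):
    """Fit |E(Y_l)| ~ s^(-alpha l) and V(Y_l) ~ s^(-beta l) for l >= 1."""
    levels = np.arange(1, len(Y_means))
    alpha = -np.polyfit(levels, np.log(np.abs(Y_means[1:])) / np.log(s), 1)[0]
    beta = -np.polyfit(levels, np.log(Y_vars[1:]) / np.log(s), 1)[0]
    return alpha, beta

Y_means = np.array([1.0, 2.0 ** -2, 2.0 ** -4, 2.0 ** -6, 2.0 ** -8])
Y_vars = np.array([1.0, 2.0 ** -4, 2.0 ** -8, 2.0 ** -12, 2.0 ** -16])
print(observed_rates(Y_means, Y_vars))   # approximately (2.0, 4.0)
\end{verbatim}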
\section{The aggregated FE method}
\label{sec:agg}
The mesh hierarchy required by the \ac{MLMC} algorithm is built from $\mathcal{T}_0$, an initial shape-regular partition of $\mathcal{B}$, generating $\mathcal{T}_{{\ell}+1}$ by uniform refinement of $\mathcal{T}_{\ell}$.
In our implementation, we consider (a bounding box) $\mathcal{B} := \prod_{i=1}^d[x^i_m,x^i_M]$ and $\mathcal{T}_0$ a uniform Cartesian mesh. At every level ${\ell}$, we consider a piecewise linear approximation $\mathcal{M}_{\ell}({\omega})$ of the manifold $\mathcal{M}({\omega})$. When $\mathcal{M}({\omega})$ is already a polytopal surface mesh, e.g. an STL mesh, no approximation is required at this step. However, if the resolution of $\mathcal{M}({\omega})$ is very fine compared to the coarser mesh sizes, using it directly would involve a very costly numerical integration at coarser levels. A case of practical interest that will be addressed in this work is when $\mathcal{M}({\omega})$ is represented implicitly with a level-set function $\phi({\boldsymbol{x}})$. In this case, we construct a $\mathcal{C}^0$ Lagrangian \ac{FE} space of arbitrary order on the grid $\mathcal{T}_{\ell}$, which is represented with ${\widetilde{\X}}(\mathcal{T}_{\ell})$. Then, one can define $\mathcal{M}_{\ell}({\omega})$ as the result of applying the marching tetrahedra algorithm \cite{Doi91} to the interpolation $\phi_{\ell}({\boldsymbol{x}})$ of the level-set function $\phi({\boldsymbol{x}})$ onto $\widetilde{\X}(\mathcal{T}_{\ell})$. When considering quad/hex meshes, one can simply decompose each cell into tetrahedra before applying the marching tetrahedra algorithm.
\newcommand{\closure}[2][3] {
{}\mkern#1mu\overline{\mkern-#1mu#2}}
Given the oriented manifold $\mathcal{M}_{\ell}({\omega})$ (and its corresponding interior, the domain ${\mathcal{D}}_{\ell}({\omega})$) and the background mesh $\mathcal{T}_{\ell}$, due to the piecewise linear nature of the manifold, it is possible to compute exactly the intersection of each cell $K \in \mathcal{T}_{\ell}$ with $\mathcal{M}_{\ell}({\omega})$ and ${\mathcal{D}}_{\ell}({\omega})$. We represent with ${\T}^\D_{\ell}({\omega})$ (resp. ${\T}^\M_{\ell}({\omega})$) the set of active (resp., cut) cells in $\mathcal{T}_{\ell}$ that intersect ${\mathcal{D}}_{\ell}({\omega})$ (resp., $\mathcal{M}_{\ell}({\omega})$); ${\T}^\D_{\ell}({\omega}) \setminus {\T}^\M_{\ell}({\omega})$ is the set of interior cells in ${\mathcal{D}}_{\ell}({\omega})$.
Numerical integration over cells $K \in {\T}^\M_\ell({\omega})$ can be carried out by generating a simplicial mesh $\mathcal{I}^{\mathcal{D}}_K$ of $K \cap {\mathcal{D}}_{\ell}({\omega})$, e.g. using a Delaunay triangulation.
Interior cell volume integration is straightforward, and we can simply take $\mathcal{I}_K^{{\mathcal{D}}} = K$. We proceed analogously to create surface meshes for ${K} \cap \mathcal{M}_{\ell}({\omega})$. We define the volume integration mesh as the union of the interior cells and the integration meshes for cut cells $ \mathcal{I}_{\ell}^{\mathcal{D}}({\omega}) := \bigcup_{K \in {\T}^\D_\ell({\omega})} I^{\mathcal{D}}_K$; analogously for the surface mesh $\mathcal{I}_{\ell}^\mathcal{M}({\omega})$. These integration meshes always exist and are cell-wise local, i.e., no conformity is required among cells.
We note that $\mathcal{I}_{\ell}^{\mathcal{D}}({\omega})$ is a body-fitted mesh of ${\mathcal{D}}_\ell({\omega})$ and $\mathcal{I}_{\ell}^\mathcal{M}({\omega})$ is a surface mesh of $\mathcal{M}_{\ell}({\omega})$. However, the shape regularity of these meshes is not relevant since they are only used for integration purposes. For approximations of $\mathcal{M}({\omega})$ of order higher than one, we can consider the same approach described above supplemented with an additional degree elevation of the linear cell-wise integration meshes. These techniques impose some constraints on the ratio between surface curvature and mesh resolution and are not used in Section \ref{sec:examples}, since we restrict ourselves to first order \acp{FE}.
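As an illustration of the cell-wise intersection step, the following sketch computes the points where the zero level set of $\phi_{\ell}$ cuts the edges of a single tetrahedron by linear interpolation (a minimal sketch of the marching tetrahedra rule for one cell; the example level-set function is ours and this is not the implementation used in \Sec{sec:examples}):
\begin{verbatim}
import numpy as np
from itertools import combinations

def edge_cut_points(vertices, phi):
    """Intersections of the zero level set with the edges of a tetrahedron.

    vertices: (4, 3) array of vertex coordinates.
    phi:      (4,) array of level-set values at the vertices
              (negative inside the domain, positive outside)."""
    points = []
    for i, j in combinations(range(4), 2):
        if phi[i] * phi[j] < 0.0:             # the edge (i, j) is cut
            t = phi[i] / (phi[i] - phi[j])    # linear interpolation weight
            points.append((1.0 - t) * vertices[i] + t * vertices[j])
    return np.array(points)

# Unit tetrahedron cut by the plane x = 0.25, i.e. phi(x, y, z) = x - 0.25.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
print(edge_cut_points(verts, verts[:, 0] - 0.25))   # three cut points
\end{verbatim}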
\def\boldsymbol{\nabla}{\boldsymbol{\nabla}}
In this work, $H^1$-conforming Lagrangian spaces of arbitrary order are considered for the discretization of the \ac{PDE} problem at hand. We denote with $\widetilde{\X}_\ell({\T}^\D,{\omega})$ the \ac{FE} space on the grid ${\T}^\D_{\ell}({\omega})$. This space is defined on $\widetilde{\D}({\omega}) \supseteq {\mathcal{D}}({\omega})$, where $\widetilde{\D}({\omega})$ is the union of all cells in ${\T}^\D_{\ell}({\omega})$. Thus, ${\T}^\D_{\ell}({\omega})$ is not a body-fitted mesh of ${\mathcal{D}}({\omega})$ and embedded \ac{FE} techniques must be used to properly capture the geometry. Let us consider the weak form of \eqref{eq:model} in which the essential boundary conditions are imposed weakly using Nitsche's method. First, we define the cell-wise forms
\def\mathcal{A}{\mathcal{A}}
\def\boldsymbol{n}{\boldsymbol{n}}
\defK{K}
\begin{align}
& \mathcal{A}^{\ell}_K(u,v) := \int_{K \cap {\mathcal{D}}_{\ell}({\omega})} \boldsymbol{\nabla} u \cdot \boldsymbol{\nabla} v \mathrm{d}V + \int_{K \cap \mathcal{M}_{\ell}({\omega})} (\tau_K u v - v ( \boldsymbol{n} \cdot \boldsymbol{\nabla} u) - u (\boldsymbol{n} \cdot \boldsymbol{\nabla} v) ) \mathrm{d}S, \\
& {b^{\ell}_K(v)} := \int_{K \cap {\mathcal{D}}_{\ell}({\omega})} f v \mathrm{d}V + \int_{K \cap \mathcal{M}_{\ell}({\omega})} (\tau_K u_0 v - (\boldsymbol{n} \cdot \boldsymbol{\nabla} v) u_0 ) \mathrm{d}S,
\end{align}
where $\tau_K \approx \mathcal{O}(h_{\ell}^{-2})$ is a positive parameter that must be large enough to ensure coercivity of the bilinear form. The global form $\mathcal{A}^{\ell}: H_0^1({\mathcal{D}}_{\ell}({\omega})) \rightarrow H^{-1}({\mathcal{D}}_{\ell}({\omega}))$ and right-hand side term $b^{\ell} \in H^{-1}({\mathcal{D}}_\ell({\omega}))$ are stated as the sum of the element contributions, i.e.,
\begin{align}\label{eq:bil-form}
\mathcal{A}^{\ell}(u,v) := \sum_{K \in {\T}^\D_{\ell}({\omega})} \mathcal{A}^{\ell}_K(u,v), \quad
b^{\ell}{ (v)} := \sum_{K \in {\T}^\D_{\ell}({\omega})} b^{\ell}_K(v).
\end{align}
The use of embedded \ac{FE} methods has a dramatic impact on the condition number of the resulting linear systems, an issue known as the \emph{small cut cell} problem. Given that the mesh $\mathcal{T}_{\ell}$ is shape regular, one can define its characteristic cell size $h_{\ell}$. It is well-known in body-fitted \ac{FE} methods that the condition number of the linear system that results from the discretization of second order elliptic \acp{PDE} is $\mathcal{O}(h_\ell^{-2})$. However, for unfitted \ac{FE} methods, it also deteriorates with the maximum of ${|K \cap {\mathcal{D}}_{\ell}({\omega})|}^{-1}{|K|}$ among all cells $K \in {\T}^\D_{\ell}$. The portion of the cell in the physical domain cannot be controlled and it can be arbitrarily close to zero. As a result, embedded \ac{FE} methods can produce almost singular linear systems to be solved, and thus are not reliable.
\def\mathcal{Q}{\mathcal{Q}}
\def\mathcal{G}{\mathcal{G}}
To fix the small cut cell problem, we consider the \ac{agfem} recently proposed in \cite{Badia2017a}. The idea is to start with the mesh ${\T}^\D_{\ell}$ and generate an aggregated mesh $\mathcal{G}_{\ell}({\omega})$ in which the cut cells with small volume ratio within the physical domain are aggregated to one interior cell (see \cite[Alg. 2.1]{Badia2017a}). Such aggregation is next used to define constraints over the standard \ac{FE} space $\widetilde{\X}({\T}^\D_{\ell},{\omega})$. We represent the resulting \ac{agfe} space with $\mathcal{V}(\mathcal{G}_{\ell},{\omega}) \subseteq \widetilde{\X}({\T}^\D_{\ell},{\omega})$. We refer the interested reader to \cite{Badia2017a} for a detailed exposition of the mesh aggregation algorithm, the computation of the constraints, the implementation issues, and its numerical analysis, to \cite{Verdugo2019} for its parallel implementation, and to \cite{Badia2018b} for a mixed \ac{agfe} space for the Stokes problem. \ac{agfe} spaces are not affected by the small cut cell problem and the condition number of the resulting matrix is $\mathcal{O}(h_{\ell}^{-2})$ \cite{Badia2017a}. The numerical experiments in \cite{Verdugo2019} show that a standard parallel algebraic multigrid solver is effective to solve \ac{agfe} linear systems using default settings. The \ac{agfe} space approximation of \eqref{eq:model} at a given level ${\ell}$ reads as follows: find $u_{\ell} \in \mathcal{V}(\mathcal{G}_{\ell},{\omega})$ such that
\begin{align}
\mathcal{A}^{\ell}(u_{\ell},v_{\ell}) = b^{\ell}(v_{\ell}) \qquad \hbox{ for any } \, v_{\ell} \in \mathcal{V}(\mathcal{G}_{\ell},{\omega}).\label{eq:agfeprob}
\end{align}
We note that all these terms can be computed using the cell-wise integration triangulations commented above. The condition number of the matrix resulting from the \ac{agfem} discretization has been proven to be $\mathcal{O}(h_{\ell}^{-2})$ as in body-fitted methods (see \cite{Badia2017a}). All the steps required to apply \ac{agfem} in the frame of \ac{MLMC} are listed in Alg. \ref{alg:alg}.
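To convey the flavour of the aggregation step before stating the complete procedure, the following sketch performs a greedy aggregation on a two-dimensional structured background grid: every cut cell is attached to the aggregate of a neighbouring interior cell, so that each aggregate contains at least one interior cell. It only mimics the purpose of \cite[Alg. 2.1]{Badia2017a} and does not reproduce it (in particular, it aggregates all cut cells, whereas the actual method only constrains cut cells with a small interior volume fraction):
\begin{verbatim}
import numpy as np
from collections import deque

def aggregate(status):
    """Greedy aggregation on a structured grid.

    status: 2D integer array, one entry per cell:
            1 interior, 0 cut, -1 exterior.
    Returns, for each cell, the index of the interior root cell of its
    aggregate (or (-1, -1) for exterior cells)."""
    nx, ny = status.shape
    root = -np.ones((nx, ny, 2), dtype=int)
    queue = deque()
    for i in range(nx):
        for j in range(ny):
            if status[i, j] == 1:          # interior cells are their own root
                root[i, j] = (i, j)
                queue.append((i, j))
    while queue:                           # sweep outwards from interior cells
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            k, m = i + di, j + dj
            if 0 <= k < nx and 0 <= m < ny and status[k, m] == 0 \
                    and root[k, m, 0] < 0:
                root[k, m] = root[i, j]    # the cut cell joins the aggregate
                queue.append((k, m))
    return root

status = np.array([[1, 1, 0, -1],
                   [1, 1, 0, -1],
                   [1, 0, 0, -1]])
print(aggregate(status)[..., 0])           # root row index of each cell
\end{verbatim}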
\def\w^i{{\omega}^i}
\begin{algorithm}
\caption{\ac{EMLMC}\label{alg:alg}}
\For{${\ell} \in \{1,\ldots,L\}$}{
Compute $\mathcal{T}_{\ell}$ as uniform refinement of $\mathcal{T}_{{\ell}-1}$\\
Generate the \ac{FE} space $\widetilde{\X}(\mathcal{T}_{\ell})$
}
$\widetilde{Q}_L = 0 $ \\
\For{${\ell} \in \{0,\ldots,L\}$}{
$\overline{Y_l} = 0$\\
\For{$i \in \{1,\ldots,N_{\ell}\}$}{
Sample the random variables and get $\w^i$ \\
Set $k={\ell}$ \\
Interpolate the level-set function $\phi({\boldsymbol{x}},\w^i)$ in the \ac{FE} space $\widetilde{\X}(\mathcal{T}_k,\w^i)$ to obtain $\phi_k({\boldsymbol{x}},\w^i)$ \nllabel{lin:interp} \\
Run the marching tetrahedra algorithm for $\phi_k({\boldsymbol{x}}, \w^i)$ and compute $\mathcal{M}_k(\w^i)$ and ${\mathcal{D}}_k(\w^i)$ \\
Compute the intersection of cells in $\mathcal{T}_k(\w^i)$ with $\mathcal{M}_k(\w^i)$ and ${\mathcal{D}}_k(\w^i)$ to obtain ${\T}^\D_k(\w^i)$ and ${\T}^\M_k(\w^i)$ and the cell-local integration meshes $\mathcal{I}_k^{\mathcal{M}}(\w^i)$ and $\mathcal{I}_k^{{\mathcal{D}}}(\w^i)$ \\
Generate the \ac{FE} space $\widetilde{\X}({\T}^\D_k,\w^i)$ \\
Run the mesh aggregation algorithm in \cite[Alg. 2.1]{Badia2017a} to compute the aggregated mesh $\mathcal{G}_k(\w^i)$ \\
Generate the \ac{agfe} space $\mathcal{V}(\mathcal{G}_k,\w^i)$ as in \cite[Sect. 3]{Badia2017a}\\
Compute the solution $u_k(\w^i)$ of \eqref{eq:agfeprob} \\
Compute the \ac{QoI} $Q^i_k=Q(u_k)$ \nllabel{lin:qoi} \\
$\overline{Y_l} = \overline{Y_l} + Q^i_k$\\
\If{${\ell} > 0$}{
Set $k={\ell}-1$ \\
Repeat lines \ref{lin:interp} to \ref{lin:qoi}.\\
$\overline{Y_l} = \overline{Y_l} - Q^i_k$\\
}
}
$\widetilde{Q}_L = \widetilde{Q}_L + \overline{Y_l} / N_l$ \\
}
\end{algorithm}
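As an illustration of the control flow in Alg. \ref{alg:alg}, the following minimal Python sketch reproduces the sampling and telescoping loop. Here \texttt{solve\_sample} and \texttt{sample\_omega} are hypothetical placeholders standing for lines \ref{lin:interp} to \ref{lin:qoi} and for the generation of the random inputs, respectively (the actual implementation is in \texttt{FEMPAR}, not Python); note that the same sample $\w^i$ is used on both levels ${\ell}$ and ${\ell}-1$.
\begin{verbatim}
def emlmc_estimate(solve_sample, sample_omega, N):
    """MLMC estimator of Alg. 1: N[l] samples per level l = 0,...,L."""
    L = len(N) - 1
    Q_tilde = 0.0
    for l in range(L + 1):
        Y_bar = 0.0
        for i in range(N[l]):
            omega = sample_omega(l, i)            # random geometry parameters
            Y = solve_sample(l, omega)            # Q_l for this sample
            if l > 0:
                Y -= solve_sample(l - 1, omega)   # same sample on the coarser level
            Y_bar += Y
        Q_tilde += Y_bar / N[l]
    return Q_tilde
\end{verbatim}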
\section{Numerical examples}
\label{sec:examples}
In this section we present three numerical examples aimed at illustrating the efficiency and robustness of the proposed approach. The first one is a convergence test whose goal is to assess the actual convergence rates achieved by the implementation. The second one is the solution of \eqref{eq:model} in a complex random domain, illustrating the robustness of the proposed approach in a quantitative way. Finally, we solve \eqref{eq:model} in a situation where the randomness may induce changes in the topology. These numerical experiments have been carried out using \texttt{FEMPAR} v1.0.0~\cite{fempar,Badia2019}, an open source object-oriented scientific computing library for the simulation of complex multiphysics problems governed by \acp{PDE} at large scales, which incorporates embedded \ac{FE} machinery \cite{Verdugo2019}.
The implementation of \ac{EMLMC} was made in \texttt{FEMPAR} by developing a new software layer that permits the concurrent execution of a model, understood as the actual implementation of an algorithm for the solution of a \ac{PDE}. In our case, the model was developed using the numerical tools provided by \texttt{FEMPAR} for the automatic generation of background meshes $\mathcal{T}_{\ell}$ of arbitrary size, the level-set description of the geometry and its interpolation, the construction of the \ac{agfem} discretization, the numerical integration and the solution of the linear systems using domain decomposition preconditioners. The implementation of sampling methods made in this new module \cite{workflow} exploits three levels of parallelism: across levels, across samples and in the evaluation of each sample, e.g. employing a domain decomposition method, although the latter feature has not been exploited in the examples below (each sample is computed using one processor).
\subsection{A convergence test}
\label{subsec:Canuto}
In this section we consider an example with an analytic solution proposed in~\cite{Canuto2007}.
The spatial domain $ {\mathcal{D}}(R)\subset \R^2$ is a circle centered at $(0.5,0.5)$ whose radius $R$ is a truncated Gaussian random variable with mean $\mu=0.3$, standard deviation $\sigma=0.025$, and restricted to $[a,b]$ with $a=0.2$ and $b=0.4$, which we denote by ${\mathcal{TN}}(a,b,\mu,\sigma)$. In this random domain we solve problem \Eq{eq:model} with $f=4$, $k=1$, and Dirichlet boundary conditions $u_0=u_{\rm ex}$ where
\begin{equation}
u_{\rm ex}({\boldsymbol{x}}):=R^2-|{\boldsymbol{x}}_1 - 0.5|^2 - |{\boldsymbol{x}}_2 - 0.5|^2,
\label{eq:solution_canuto}
\end{equation}
is the exact solution. We consider two different (although similar) \acp{QoI}, the average of the solution over the whole domain $\omega_1={\mathcal{D}}(R)$ or over a (deterministic) subregion $ \omega_2=\widehat{D} \subset {\mathcal{D}}$ taken as $\widehat{D}=[0.5-\delta,0.5+\delta]^2$ with $\delta=0.125$, that is,
\begin{equation}
Q_i(u):=\frac{1}{|\omega_i|}\int_{\omega_i}u
\label{eq:qois}
\end{equation}
where $|\omega_i|$ denotes the area of $\omega_i$. Using \Eq{eq:solution_canuto} we get $Q_1=0.5 R^2$ and $Q_2=R^2 - 2\delta^2/3$. The \ac{PDF} of the ${\mathcal{TN}}$ can be written in terms of the error function and the variance can be computed easily using, e.g., \cite{truncatednormal}, which gives
$\E(Q_1)=0.04531216540324139$ and $\E(Q_2)=0.08020766413981611$.
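These reference values can be reproduced with the following short Python check, a sketch relying on \texttt{scipy.stats.truncnorm} (the computation in the text uses \cite{truncatednormal}):
\begin{verbatim}
from scipy.stats import truncnorm

mu, sigma, a, b = 0.3, 0.025, 0.2, 0.4
R = truncnorm((a - mu) / sigma, (b - mu) / sigma, loc=mu, scale=sigma)

ER2 = R.var() + R.mean() ** 2        # E(R^2) of the truncated normal
delta = 0.125
EQ1 = 0.5 * ER2                      # E(Q_1) = 0.5 E(R^2)
EQ2 = ER2 - 2.0 * delta ** 2 / 3.0   # E(Q_2) = E(R^2) - 2 delta^2 / 3
print(EQ1, EQ2)                      # approx. 0.0453122 and 0.0802077
\end{verbatim}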
Because the \ac{MLMC} estimate given by \Eq{eq:mlmc_average} is a random variable, so is the squared error $(\widetilde{Q}_L-\E(Q))^2$: different realizations of the experiment produce different results, and its expected value is determined by \eqref{eq:mlmc_error}. Therefore, to evaluate the error decay in \eqref{eq:error_decay} we approximate it by a sample average, running each experiment $K$ times, i.e., we compute the error as
\begin{equation}
{\mathcal{E}}_L^2 = \frac{1}{K} \sum_{k=1}^K (\widetilde{Q}^k_L-\E(Q))^2,
\label{eq:average_error}
\end{equation}
where $\widetilde{Q}^k_L$ is the result of the $k$-th realization of the experiment. The numerical experiments have been performed with $K=100$, whereas a value of $K=30$ was found to be sufficient in the context of hyperbolic conservation laws \cite{Mishra2012b}. Likewise we compute the averaged variances of the \ac{QoI} differences between levels as
\begin{equation}
\mathcal{V}_l = \frac{1}{K} \sum_{k=1}^K V(Y_l^k)
\label{eq:average_variance}
\end{equation}
where $V(Y_l^k)$ is given by \Eq{eq:sample_variance} for the $k$-th realization of the experiment. In practice, each realized \ac{MLMC} computation, here indexed by $k$, is computed with a different seed of the \ac{PRNG}. In the studies here we utilize the xoshiro256** algorithm \cite{PRNGxoshiro256}, which has a 256 bit state, for generating pseudo-random sequences. The algorithm is well suited for this work as it is quick to advance the state and statistically robust when seeded well. The detailed seeding procedure is not especially relevant to the results presented here, but the basic process is as follows. The seeding process uses the recommended splitmix64 \ac{PRNG} \cite{PRNGsplitmix} to advance from an initial 64 bit seed. Its outputs are used, together with physically generated random bits, to seed the state of a Mersenne Twister \cite{Matsumoto1998}, specifically MT19937-64, whose outputs are used, again together with physically generated random bits, to seed the initial state of the xoshiro256**. The generator has a period of $2^{256}-1$, i.e., its state repeats only after $2^{256}-1$ evaluations. The algorithm is well adapted to parallelization because a jump of $2^{128}$ states can be computed cheaply. Using this feature, a jump is performed to set the initial state for each sample, so each process has an effective $2^{128}$ calls to the random number generator before reaching a state realized by another process, allowing for effective and efficient large scale computations. The jumps are done consistently, so that the initial state depends only on the seed and the sample number, which makes the stochastic inputs reproducible regardless of the scheduling or order of the computations.
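The essence of this per-sample seeding can be sketched as follows. The snippet uses numpy's PCG64 generator, which offers an analogous jump-ahead facility, purely to illustrate the mechanism; the computations in this work use xoshiro256** as described above.
\begin{verbatim}
from numpy.random import Generator, PCG64

def sample_rng(seed, sample_index):
    """Independent stream for one sample: the generator state depends only
    on (seed, sample_index), irrespective of which process computes it."""
    return Generator(PCG64(seed).jumped(sample_index))

rng = sample_rng(seed=1234, sample_index=42)
radius = rng.uniform(0.1, 0.2)   # e.g. one of the random geometry parameters
\end{verbatim}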
We consider $L=5$ and the number of samples per level is defined according to \Eq{eq:num_samples_ratio} with $\Gamma=7/2$. Each sample is computed taking $\mathcal{B}=[0,1]^2$ and $\mathcal{T}_0$ a Cartesian mesh of $8 \times 8 $ \acp{FE}, and $\mathcal{T}_{{\ell}+1}$ is generated by dividing each element of $\mathcal{T}_{\ell}$ into four subelements (two per direction). \Fig{fig:canuto_avg_err} shows the average error ${\mathcal{E}}_L$ as a function of the level for $N_5=\{3,6\}$. Even for these low values of $N_5$, the level estimates show good convergence, in agreement with the expected decay rate in \Ass{as:1}.
\begin{figure}
\centering
\includegraphics[scale=0.024]{QoIAveragesByLevelCombinedCanuto2d.pdf}
\caption{Averaged error ${\mathcal{E}}_L$ (given by \Eq{eq:average_error}) for each level and sample size (as from \Eq{eq:num_samples_ratio}) evaluated with $Q_1$ (Full Domain) and $Q_2$ (Partial Domain) from \Eq{eq:qois}, compared to the theoretical expected decay rate.}
\label{fig:canuto_avg_err}
\end{figure}
\Fig{fig:canuto_var} shows the sample variance estimate \eqref{eq:average_variance} as a function of the level. The variance of the \ac{QoI} differences between levels decays with the level at a rate slightly higher than the one predicted by the theory in \cite{Teckentrup2013a}, verifying \Ass{as:3}. The determination of these variances is important in adaptive \ac{MLMC} methods as they are required to obtain the optimal number of samples per level in \Eq{eq:mlmc_num_samples}.
\begin{figure}
\centering
\includegraphics[scale=0.024]{QoIVarianceCombinedCanuto2d.pdf}
\caption{Averaged sample variance between levels (given by \Eq{eq:average_variance}) evaluated with $Q_1$ (Full Domain) and $Q_2$ (Partial Domain) from \Eq{eq:qois}.}
\label{fig:canuto_var}
\end{figure}
The number of samples in \Eq{eq:mlmc_num_samples} also depends on the complexity of sampling at each level $C_{\ell}$ which is shown in \Fig{fig:canuto_cost}. This complexity is measured as the CPU time per sample averaged at each level, i.e., if $t^i_{\ell}$ is the elapsed CPU time in sample $i$ of level ${\ell}$, we estimate the complexity of sampling at each level as
\begin{equation}
C_{\ell}=\frac{1}{N_{\ell}}\sum_{i=1}^{N_{\ell}} t^i_{\ell}.
\end{equation}
As can be seen in \Fig{fig:canuto_cost}, the scaling is in agreement with \Ass{as:2}.
\begin{figure}
\centering
\includegraphics[scale=0.024]{ComputationCostCanuto2d.pdf}
\caption{Computational cost as a function of the level.}
\label{fig:canuto_cost}
\end{figure}
Having verified that the \ac{EMLMC} algorithm shows an error decay, a variance decay and a cost increase in agreement with \Ass{as:1} to \Ass{as:3}, and since these assumptions yield the complexity bound \Eq{eq:mlmc_complexity_bound}, we expect the actual error to decrease accordingly as the cost increases. The actual error decrease with respect to the total complexity, measured as $$C = \sum_{{\ell}=0}^{L}\sum_{i=1}^{N_{\ell}} t^i_{\ell},$$ is shown in \Fig{fig:canuto_samples} and the agreement with the scaling \Eq{eq:mlmc_complexity_bound} is evident. The important gain of \ac{MLMC} with respect to \ac{MC} in deterministic domains with random coefficients is also observed for the \ac{EMLMC} with this random geometry.
\begin{figure}
\centering
\includegraphics[scale=0.024]{ErrorVsCostCombinedCanuto2d.pdf}
\caption{Trade off between error and cost, evaluated with $Q_1$ (Full Domain) and $Q_2$ (Partial Domain) from \Eq{eq:qois}.}
\label{fig:canuto_samples}
\end{figure}
\subsection{Dealing with complex random geometries}
\label{subsec:complex}
The aim of this example is to show the robustness of the proposed approach to perform \ac{UQ} in complex random geometries. To this end, we consider a stochastic extension of a benchmark commonly used to test numerical methods designed to deal with interfaces described by a level-set \cite{hautefeuille_robust_2012,burman_cutfem:_2015}, the Poisson problem in a popcorn domain, a ball with spherical protuberances attached to its surface. Therefore, we aim to solve problem \Eq{eq:model} with $k=1$ and $f=-\nabla^2 u_{\rm ex}$ and Dirichlet boundary conditions $u_0=u_{\rm ex}$, where
\begin{equation}
u_{\rm ex} = \sin(k\|{\boldsymbol{x}}-{\boldsymbol{x}}_c\|)
\end{equation}
is defined in $\mathcal{B}=[0,1]^d$ with ${\boldsymbol{x}}_c=(1/2,1/2,1/2)$ and each sample is computed in a domain ${\mathcal{D}}({\omega})$ defined through a level set function, i.e. ${\boldsymbol{x}} \in {\mathcal{D}}({\omega})$ iff $\phi({\boldsymbol{x}},{\omega})<0$. The classical definition of the popcorn domain is
\begin{equation}
\phi({\boldsymbol{x}}) = \|{\boldsymbol{x}} - {\boldsymbol{x}}_0 \| - \rho_0 - \sum_{i=1}^n A \exp\left( - \|{\boldsymbol{x}}- {\boldsymbol{x}}_i\|^2 / \sigma^2 \right)
\end{equation}
with given parameters $\rho_0$ (the radius of the ball), ${\boldsymbol{x}}_i$ (the location of spherical protuberances), $A$ and $\sigma$ (which control the size of the protuberances) (see e.g. \cite{burman_cutfem:_2015}).
We consider a stochastic variation of this geometry defined by a random level set for $d=2,3$, although in this section we are primarily interested in the case $d=3$. This function is built to represent a random central ellipse (labelled as $0$) with smaller random ellipses attached to its boundary (labelled by $j$ with $1\leq j \leq n$) and is defined by
\begin{equation}
\phi({\boldsymbol{x}},{\omega}) = \min_{j\in \{0:n\}}\left( \|{\boldsymbol{x}}-{\boldsymbol{x}}_j\|_{\bm{AR}(j)} - \rho_j\right),
\label{eq:level_set_popcorn}
\end{equation}
where ${\boldsymbol{x}}_j$ are random centers and $\rho_j$ random radii. The deformation from circles to ellipses is achieved by the norm
\begin{equation}
\|{\boldsymbol{y}}\|_{\bm{AR}(j)} = \sqrt{\left<(\bm{A}_j\bm{R}_j)^{-1}{\boldsymbol{y}},(\bm{A}_j\bm{R}_j)^{-1}{\boldsymbol{y}}\right>},
\end{equation}
which depends on stretching ($\bm{A}_j$) and rotation ($\bm{R}_j$) matrices defined as follows. Distortion matrices $\bm{A}_j$ are square, diagonal matrices with diagonal entries $(\bm{A}_j)_{ii}$ defined by parameters $A^{\prime}_j(i)$ ($1\leq i \leq d$) as
\begin{equation}
\label{eq:axis_normalization_def}
(\bm{A}_j)_{ii} := \frac{A^{\prime}_j(i)}{\sqrt{\sum_{k=1}^d (A^{\prime}_j(k))^2}}.
\end{equation}
Rotations are defined in each $(x_i,x_{i+1})$ plane ($1\leq i\leq d-1$)
through the rotation matrix $\bm{R}^{\prime}(\Theta_j(i))$ defined from random parameters $\Theta_j(i)$ as
\begin{equation}
\label{eq:rotation_def1}
\bm{R}^{\prime}(\Theta_j(i)) := \left(\begin{array}{cc}
\cos(\Theta_j(i)) & -\sin(\Theta_j(i))\\
\sin(\Theta_j(i)) & \cos(\Theta_j(i))\\
\end{array}\right),
\end{equation}
Let $\bm{R}(\Theta_j(i))$ be the embedding of $\bm{R}^\prime(\Theta_j(i))$ into $\R^d$, acting as the identity on the remaining $d-2$ coordinates. We then define $\bm{R}_j$ as the single matrix obtained by applying the rotations $\bm{R}(\Theta_j(1)),\ldots, \bm{R}(\Theta_j(d-1))$ in order, i.e.,
\begin{equation}
\label{eq:rotation_def3}
\bm{R}_j := \prod_{i=1}^{d-1}\bm{R}(\Theta_j(i)) = \bm{R}(\Theta_j(d-1))\cdots\bm{R}(\Theta_j(1)) .
\end{equation}
The random parameters describing this geometry are distributed according to
\begin{itemize}
\item $n \in {\mathcal{P}}(11)$
\item $A^{\prime}_j \in {\mathcal{U}}(0.8,1.3)^d$
\item ${\boldsymbol{x}}_0 \in {\mathcal{U}}(0.4,0.6)^d$
\item $\rho_0 \in {\mathcal{U}}(0.1,0.2)$
\item $\bm\Theta_j \in {\mathcal{U}}(0,2\pi)^{d-1}$
\item ${\boldsymbol{y}}_j \in \mathcal{S}^d$
\item ${\boldsymbol{x}}_j={\boldsymbol{x}}_0+ \rho_0\bm{A}_0\bm{R}_0{\boldsymbol{y}}_j$
\item $\rho_j \in {\mathcal{U}}(0.03,0.1)$
\end{itemize}
where ${\mathcal{P}}$ and ${\mathcal{U}}$ denote Poisson and uniform distributions, respectively, and $\mathcal{S}^d$ is the uniform distribution on the surface of a $d$-dimensional unit sphere. Note that using a level-set description makes it possible to define complex domains through Boolean operations. In particular, given two level-set functions $\phi_1$ and $\phi_2$ describing $\Omega_1$ and $\Omega_2$ respectively, the function $\min(\phi_1,\phi_2)$ describes $\Omega_1 \cup \Omega_2$.
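For illustration, a sample of this random level set can be generated and evaluated with the following Python sketch. The parameter distributions follow the list above; the specific random number generator and the sampling on the sphere via normalized Gaussian vectors are implementation choices of the sketch, not of the method.
\begin{verbatim}
import numpy as np

def rotation(d, thetas):
    """R_j: product of planar rotations, each acting in an (x_i, x_{i+1}) plane."""
    R = np.eye(d)
    for i, th in enumerate(thetas):
        Ri = np.eye(d)
        c, s = np.cos(th), np.sin(th)
        Ri[i, i], Ri[i, i + 1] = c, -s
        Ri[i + 1, i], Ri[i + 1, i + 1] = s, c
        R = Ri @ R                     # apply the rotations in order
    return R

def sample_geometry(rng, d=3):
    """One realization: a list of (center x_j, matrix A_j R_j, radius rho_j)."""
    def ellipse(center, radius):
        A = rng.uniform(0.8, 1.3, d)
        A = np.diag(A / np.linalg.norm(A))                 # normalized stretching A_j
        R = rotation(d, rng.uniform(0.0, 2.0 * np.pi, d - 1))
        return center, A @ R, radius
    x0, rho0 = rng.uniform(0.4, 0.6, d), rng.uniform(0.1, 0.2)
    bodies = [ellipse(x0, rho0)]                           # central ellipse, j = 0
    AR0 = bodies[0][1]
    for _ in range(rng.poisson(11)):                       # attached ellipses, j >= 1
        y = rng.normal(size=d)
        y /= np.linalg.norm(y)                             # uniform point on the sphere
        bodies.append(ellipse(x0 + rho0 * AR0 @ y, rng.uniform(0.03, 0.1)))
    return bodies

def phi(x, bodies):
    """Level set: min over ellipses of ||x - x_j||_{A_j R_j} - rho_j."""
    return min(np.linalg.norm(np.linalg.solve(AR, x - xj)) - rho
               for xj, AR, rho in bodies)

rng = np.random.default_rng(0)
print(phi(np.array([0.5, 0.5, 0.5]), sample_geometry(rng)))  # negative if inside
\end{verbatim}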
A few samples of these stochastic domains are shown in \Fig{fig:popcorn_samples}, where the important variation of the morphology and size of the domains is apparent.
\begin{figure}[!b]
\centering
\includegraphics[width=0.45\textwidth]{3dpopcorn0001-fs8}
\includegraphics[width=0.45\textwidth]{3dpopcorn0002-fs8} \\
\includegraphics[width=0.45\textwidth]{3dpopcorn0003-fs8}
\includegraphics[width=0.45\textwidth]{3dpopcorn0004-fs8} \\
\includegraphics[width=0.45\textwidth]{3dpopcorn0005-fs8}
\includegraphics[width=0.45\textwidth]{3dpopcorn0006-fs8} \\
\caption{Some realizations of the solution of \Eq{eq:model} in a stochastic popcorn domain.}
\label{fig:popcorn_samples}
\end{figure}
An important problem with sampling methods, as described in \Sec{sec:intro}, is the robustness of the solver employed to compute the samples, and robustness is precisely an important feature of the \ac{agfem} employed herein and described in \Sec{sec:agg}. We performed a quantitative evaluation of this feature by running the \ac{EMLMC} method with and without the aggregation procedure, using an iterative solver for the linear system arising from the \ac{FE} discretization. Due to the symmetry of problem \eqref{eq:model}, we use the \ac{CG} algorithm; as is well known, the number of iterations required to converge is proportional to the square root of the condition number of the system matrix. We compute statistics of the number of iterations (maximum, minimum and average) taking 1000 samples per level. The results obtained in a 2D version of the problem are shown in \Fig{fig:num_iter_2D}, whereas those obtained in the 3D setting are shown in \Fig{fig:num_iter_3D}.
\begin{figure}[!b]
\centering
\includegraphics[width=0.7\textwidth]{num_iter_pop_corn_2D}
\caption{Number of \ac{CG} iterations required to converge, to a tolerance of $10^{-8}$ in the relative norm, for the solution of each (2D) sample of the \ac{EMLMC}, as a function of the mesh size on each level, with and without aggregation.}
\label{fig:num_iter_2D}
\end{figure}
\begin{figure}[!b]
\centering
\includegraphics[width=0.45\textwidth]{num_iter_pop_corn_3D_wa}
\includegraphics[width=0.45\textwidth]{num_iter_pop_corn_3D_woa}
\caption{Number of \ac{CG} iterations required to converge, to a tolerance of $10^{-8}$ in the relative norm, for the solution of each (3D) sample of the \ac{EMLMC}, as a function of the mesh size on each level, (a) with and (b) without aggregation (note the difference in the scales).}
\label{fig:num_iter_3D}
\end{figure}
\subsection{Dealing with stochastic topologies}
In this section we consider a problem in a stochastic domain where the randomness may induce changes in the topology. It consists of a square plate with two circular holes of uncertain position, which may overlap to form a single one. We consider problem \eqref{eq:model} with Dirichlet boundary conditions on the left and right sides and zero Neumann boundary conditions on the top and bottom sides and on the boundary of the internal holes. Physically, this problem corresponds to the heat transfer in a plate where the left (low) and right (high) temperatures are prescribed and heat flows through the plate at a rate that depends on the location and size of the holes. The natural quantity of interest is therefore
\begin{equation}
Q(u) :=\int_{x_1=0} \partial_{x_1} u.
\label{eq:qoi_def_heat}
\end{equation}
The stochastic domain is defined, again, through a level set function $\phi({\boldsymbol{x}},{\omega}) = \phi_1({\boldsymbol{x}},{\omega})+ \phi_2({\boldsymbol{x}},{\omega})$ where
\begin{equation}
\phi_i({\boldsymbol{x}},{\omega}) = \|{\boldsymbol{x}} - {\boldsymbol{x}}_i({\omega}) \| - \rho_i
\end{equation}
for $i=1,2$ represent the two interior circular holes (which do not conduct heat), one located in the upper half of the plate and one in the lower half. We study the influence of the radius on the average total heat flux through the plate for random positions of the holes. We consider three cases, $\rho_i=0.18$, $\rho_i=0.2$ and $\rho_i=0.22$ for $i=1,2$. The first components of ${\boldsymbol{x}}_i$ for $i=1,2$ (the holes' centers) are drawn uniformly from $[0.23,0.77]$. The upper circle has the $x_2$ coordinate of its center drawn uniformly from $[0.70,0.76]$, and the lower circle has the $x_2$ coordinate of its center drawn uniformly from $[0.24,0.30]$. The union of these circles forms the non-conductive interior domain.
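A realization of the hole centers can be sketched as follows (a small illustrative snippet; \texttt{rng} is any numpy random generator):
\begin{verbatim}
import numpy as np

def sample_holes(rng, rho=0.2):
    """Draw the two hole centers; the holes overlap, forming a single hole,
    when the centers are closer than 2*rho."""
    x1 = np.array([rng.uniform(0.23, 0.77), rng.uniform(0.70, 0.76)])  # upper hole
    x2 = np.array([rng.uniform(0.23, 0.77), rng.uniform(0.24, 0.30)])  # lower hole
    return x1, x2, np.linalg.norm(x1 - x2) < 2.0 * rho

rng = np.random.default_rng(7)
x1, x2, single_hole = sample_holes(rng, rho=0.22)
\end{verbatim}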
We consider $L=5$ and the number of samples per level are defined according to \Eq{eq:num_samples_ratio} with $\Gamma=7/2$. Each sample is computed by taking $\mathcal{B}=[0,1]^2$ and $\mathcal{T}_0$ a Cartesian mesh of $8 \times 8 $ \acp{FE} and $\mathcal{T}_{{\ell}+1}$ is generated by dividing each element of $\mathcal{T}_{\ell}$ into four subelemements (two per direction). A few samples of the solution are shown in \Fig{fig:two_holes_geometry}, where the topology changes in the domain can be observed. The proposed \ac{EMLMC} method is able to automatically compute realizations using the same procedure followed in other cases, no special treatment of topology changes is required.
\Fig{fig:two_holes_flux} shows $\widetilde{Q}_L $ for the three different radii. As can be seen, and as expected, when the size of the holes is increased the effective area available for heat conduction is decreased, thus resulting in a reduction of the heat flux through the plate.
\begin{figure}[!b]
\centering
\includegraphics[scale=0.024]{FluxByRadius.pdf}
\caption{Average $Q(u)$ value as from \Eq{eq:qoi_def_heat} for three different radii at different levels of computation.}
\label{fig:two_holes_flux}
\end{figure}
\begin{figure}[!b]
\centering
\includegraphics[width=0.45\textwidth]{heat0001-fs8}
\includegraphics[width=0.45\textwidth]{heat0009-fs8} \\
\includegraphics[width=0.45\textwidth]{heat0003-fs8}
\includegraphics[width=0.45\textwidth]{heat0010-fs8} \\
\includegraphics[width=0.45\textwidth]{heat0005-fs8}
\includegraphics[width=0.45\textwidth]{heat0019-fs8} \\
\caption{Some realizations of the solution of \Eq{eq:model} in a plate with two (stochastic) holes.}
\label{fig:two_holes_geometry}
\end{figure}
\section{Conclusions}\label{sec:conclusions}
In this article we propose the \ac{EMLMC} method, which consists of drawing samples in \ac{MLMC} using a robust embedded method, the \ac{agfem}, for the discretization of the problem. Once again, we emphasize that applying the standard \ac{MLMC} on complex geometries can be hard if body-fitted methods are used for the discretization, due to the difficulty of the mesh-hierarchy generation, which typically requires human intervention. On top of that, when dealing with random geometries the construction of the mesh hierarchy might even be impossible using body-fitted techniques, or may simply fail for a particular sample. Using embedded discretization methods on a mesh hierarchy generated on a bounding box circumvents these problems.
The resulting algorithm is a powerful method to perform \ac{UQ} on complex random domains thanks to the robustness provided by the \ac{agfem} algorithm. The numerical examples presented herein show that the error and variance decay and the complexity are similar to those observed in the standard \ac{MLMC} (with deterministic geometries and body-fitted discretizations) and are in line with theoretical expectations. Therefore, the \ac{EMLMC} shows the same cost reduction with respect to \ac{MC} as the standard \ac{MLMC}, making it an excellent method to perform \ac{UQ} on complex random domains.
\bibliographystyle{plain}
\section{Introduction}
\label{sec:intro}
\ac{UQ} requires the solution of a stochastic \ac{PDE} with random data. Several methods for solving stochastic \acp{PDE}, such as stochastic Galerkin \cite{lemaitre2010spectral,ghanem1991stochastic} or stochastic collocation \cite{Babuska2010}, are based on a standard approximation in space like \acp{FE} or finite volumes, and different types of polynomial expansions in the stochastic space \cite{Xiu2003}. Although very powerful for some particular problems, most of these techniques suffer the so-called `curse of dimensionality', i.e. a dramatic increase of computational cost with the number of stochastic variables. Besides, some of these methods are intrusive: a code that solves the deterministic problem needs to be modified to solve the stochastic one.
In contrast, the classical \ac{MC} method does not present these drawbacks. On the one hand, it is a sampling method, i.e., it only requires the repeated evaluation of a deterministic model, and it can be implemented in a non-intrusive way. On the other hand, it is known to converge to the exact statistics of the solution as the number of input samples tends to infinity, independently of the dimensionality of the stochastic space and mostly independently of the physics of the problem under consideration, as long as some moments of the \ac{QoI} are bounded. However, the number of samples required to achieve statistical convergence, combined with the complexity of the computational model required to reach enough spatial and/or temporal accuracy, can make it prohibitively expensive.
To address this difficulty, the \ac{MLMC} method was introduced in \cite{Kebaier2005} (with two levels) and in \cite{Giles2008} (with several levels) and applied to solve stochastic \acp{PDE} with random input parameters in \cite{Giles2012,Mishra2012,Mishra2012a,Pisaroni2017}. The \ac{MLMC} method exploits a hierarchy of discretizations of the underlying differential problem (using a hierarchy of meshes). The method computes expectations of the difference between solutions at two consecutive levels of accuracy, reducing the variance of the problem at hand and thus decreasing the number of samples required at finer discretization levels. By doing this, most of the computational cost is transferred to coarse discretization levels, where a large number of samples is used to control the statistical error of the \ac{MLMC} estimator, whereas only a few samples are used on the finest (and most costly) levels to control the discretization error.
Depending on the application, different types of uncertain data can be considered, like material properties (\ac{PDE} parameters), boundary and/or initial conditions, and the geometry/topology of the domain. The \ac{MLMC} method can be used out-of-the-box in most cases but it requires a hierarchy of discretizations as input, which can be difficult to generate when dealing with complex geometries and unstructured body-fitted meshes. For this reason \ac{MLMC} methods are barely used to perform \ac{UQ} when complex geometries are considered. This is the first limitation we aim to address herein, utilizing embedded methods with \ac{MLMC} in a method referred to as \ac{EMLMC}.
On top of that, leaving simple academic constructions aside, geometry is hardly known exactly and surfaces are rarely smooth when looked at sufficiently small scales. Variation between different samples of natural materials, e.g. porous rocks, and cost limits of diminishing manufacturing tolerances of fabrication processes make geometry an epistemic uncertainty in general. Geometric uncertainties can have a strong impact on the solution of a \ac{PDE}, e.g. they trigger boundary layer separation in fluids, and these effects are important in many fields, e.g. geology \cite{Deffeyes1982}, tribology \cite{hutchings2017tribology} and lubrication \cite{hamrock2004fundamentals}, nano-technology \cite{Nosonovsky2008} and biology \cite{Park2012}. The first attempts to predict the impact of roughness were through deterministic parameterizations, like periodic indentations \cite{Taylor1971,Richardson1971}, sinusoidal corrugation \cite{Fyrillas2001} and fractal representations, e.g. using the well-known Von Koch snowflake curve \cite{Cajueiro1999,Blyth2003}. After that, stochastic representations of surface roughness have been adopted frequently, which make it possible to incorporate the lack of information or measurement errors. Examples of these descriptions include random fields to represent the thickness of shell structures (see \cite{Stefanou2004} and the references therein), random fractal representations \cite{Sarkar1993} and random fields with a given spatial correlation structure to represent deviations from a nominal geometry \cite{Xiu2006,Tartakovsky2006,Mohan2011874,Zayernouri2013}. In engineering design, geometry is exactly defined using computational tools and in this case the uncertainty is rooted in the manufacturing process, as mentioned. Uncertainty also arises from measurements, e.g. when the geometry is determined from digital images, where it originates in the image scanning and resolution. Using pixelated images for computing may result in significant errors, even in the limit of small resolution \cite{Babuska2001,Babuska2003}. However, it is possible to recover a definition of the geometry using level set methods \cite{sethian1999level} and, in fact, this can be done statistically, e.g. using polynomial chaos expansions, to define random level set representations \cite{Stefanou2009}. Curiously, level set and phase field representations can also be obtained from noisy images by solving stochastic \acp{PDE} \cite{Patz2013}.
There are three approaches to deal with geometric uncertainty. The first one is based on domain mappings, the second on perturbation methods and the third one on the use of fictitious domain methods, also known as embedded or immersed methods. The first approach started with the stochastic mapping concept that was introduced in \cite{Xiu2006} to transform a (deterministic or stochastic) problem in a random domain into a stochastic problem in a deterministic domain. This approach was applied to the stochastic analysis of roughness in flow problems in \cite{Tartakovsky2006}. In \cite{Mohan2011874} the main idea is to fix the mesh connectivity and change the coordinates of the nodes using deformation strategies that incorporate uncertainty through the solution of auxiliary \acp{PDE} with random boundary conditions. In \cite{Castrillon-Candas2016} a domain mapping approach is used to prove the convergence of a stochastic collocation method based on Smolyak grids. Because the construction of globally smooth mappings is difficult, a piecewise smooth domain mapping approach is proposed in \cite{Chaudhry20181127}. The second approach was introduced in \cite{Sarkar1993}, where an analytical perturbation method was used to find solutions of the Laplace equation. A perturbation method developed using shape calculus (a Taylor expansion using shape derivatives and its linearization around a nominal domain) was proposed in \cite{Harbrecht2008,Harbrecht2013,Harbrecht2016}. Because shape derivatives are difficult to compute, a perturbation approach based on approximated stochastic boundary conditions (of Robin type) was proposed in \cite{Dambrine2017943,Dambrine2016921}. Although it is not based on a formal perturbation expansion, the stochastic smoothed profile method of \cite{Zayernouri2013} can be included in this category. This method transforms a random domain into a random force term in a deterministic domain through the introduction of a smoothly spreading interface layer that represents rough boundaries; its error grows with the layer thickness \cite{Luo2008}. By definition, these approaches are not applicable to geometrically complex and even topologically random domains.
The use of embedded methods, which rely on a fixed background grid and a discrete approximation of the geometry using a level set function, for the solution of \acp{PDE} in stochastic domains combined with polynomial chaos was proposed in \cite{Canuto2007} and applied to fluid dynamics problems in \cite{Parussini2010}. Among the several possible immersed methods, those in \cite{Canuto2007,Parussini2010} impose boundary conditions through Lagrange multipliers defined in a \ac{FE} space built from a boundary discretization (independent of the domain discretization). This construction results in a saddle point linear system for which specific preconditioners have been recently developed \cite{Gordon2014}. Alternatively, the use of the so-called \ac{XFEM} method \cite{Moes1999,sukumar_modeling_2001} for dealing with embedded stochastic geometries using polynomial chaos was proposed in \cite{Nouy2007,Nouy2008,Nouy20101312,Nouy20113066} and named \ac{X-SFEM}. It was later applied to heat transfer \cite{Lang20131031} and structural problems \cite{Schoefs2016}. The lack of robustness of embedded \ac{FE} methods is not addressed in these works.
The use of \ac{MLMC} in complex random domains using body-fitted meshes would require generating an unstructured mesh (and its corresponding mesh hierarchy) per sample, which is absolutely impractical. For this reason, and due to the previously described problems related to the generation of mesh hierarchies in general, we also consider embedded \ac{FE} methods in this work. However, the naive use of these methods exhibits some problems. The most salient one is the so-called small cut cell problem. The intersection of cells with the domain surface cannot be controlled and one can end up with cells for which the ratio between the cell volume and its portion in the domain interior is arbitrarily large. It is well-known that the condition number of the resulting linear system blows up with these ratios. As a result, plain embedded \ac{FE} methods are not robust. Different remedies have been proposed so far in the literature. For conforming \ac{FE} methods in which trace continuity must be enforced between cells, one can consider methods that introduce artificial dissipation (stabilization) in order to weakly enforce zero jumps of the derivatives (up to the \ac{FE} order being used) on facets that belong to cut cells (see \cite{burman_cutfem:_2015} for more details). Another approach is to create aggregated meshes such that every aggregate contains at least one interior cell. Whereas the generation of discontinuous \ac{FE} spaces on aggregated meshes is straightforward (enforcing the local \ac{FE} space to be the same polynomial space as in a standard cell), it is far more complicated for conforming \acp{FE}. Using such a naive approach for conforming methods would generate global constraints and too much \emph{rigidization}, which would affect the convergence properties of the method, its implementation, and the sparsity of the resulting linear system; we note that this is in fact the \emph{strong} version of the weakly enforced constraints in \cite{burman_cutfem:_2015}. The aggregated \ac{FE} method has been proposed in \cite{Badia2017a} to enable aggregation techniques for conforming methods. It has three key features: (a) the constraints are cell-local and their implementation in \ac{FE} codes is easy; (b) it keeps the convergence order of standard body-fitted \ac{FE} spaces; (c) it does not perturb the Galerkin formulation with any kind of artificial dissipation and does not involve the computation of (possibly) high-order derivatives on the element boundaries.
In this work, we propose the \ac{EMLMC} framework. It combines the \ac{MLMC} method on a hierarchy of background meshes (which can be simple structured meshes) and the \ac{agfem} at every mesh level to capture every sample of the random domain. In particular, we consider random domains implicitly defined through random level-set functions and the marching tetrahedra scheme to approximate the random domain at every mesh level. The resulting method extends the applicability of \ac{MLMC} to complex random domains and it is robust. We state the model problem in Section \ref{sec:problem} and we describe the \ac{MLMC} method in Section \ref{sec:mc_and_mlmc}. The construction of the \ac{agfem} discretization for each sample is then described in Section \ref{sec:agg}. Finally, a set of numerical experiments are described in Section \ref{sec:examples} and we draw some concluding remarks in Section \ref{sec:conclusions}.
\def\mathcal{B}{\mathcal{B}}
\section{Problem formulation}
\label{sec:problem}
As a model problem we consider the following elliptic stochastic problem. Given an oriented manifold $\mathcal{M}({\omega})\subset \mathbb{R}^d$ and its corresponding interior domain ${\mathcal{D}}({\omega})$, find $u({\boldsymbol{x}},{\omega})$ such that
\begin{align}
-{\boldsymbol{\nabla}} \cdot \left( {k} {\boldsymbol{\nabla}} u \right) = f \ \hbox{ in } \ {\mathcal{D}}({\omega}), \qquad
u = u_0 \ \hbox{ on } \, \mathcal{M}({\omega}), \label{eq:model}
\end{align}
almost surely (a.s.), i.e., for almost all ${\omega} \in \Omega $, which denotes the uncertainty, described by a complete probability space $(\Omega,{\mathcal{F}},P)$. Although stochastic coefficients, i.e. the diffusion ${k} = {k}({\boldsymbol{x}},{\omega})$, forcing $f=f({\boldsymbol{x}},{\omega})$, and boundary condition $u_0 = u_0({\boldsymbol{x}},{\omega})$ could be considered random fields too, we assume they are deterministic, as it is usual in the literature when stochastic domains are considered \cite{Chaudhry20181127,Mohan2011874,Xiu2006,Harbrecht2008,Harbrecht2013,Harbrecht2016,Dambrine2017943,Dambrine2016921}. Let us assume that all realizations $\mathcal{M}({\omega})$ (and ${\mathcal{D}}({\omega})$) are bounded. We can define a bounded artificial domain $\mathcal{B}$ that contains all possible realizations of $\mathcal{M}({\omega})$, i.e. ${\mathcal{D}}({\omega}) \subset \mathcal{B}, \, \forall {\omega} \in \Omega$. We also assume that ${k}$ and $f$ are defined in $\mathcal{B}$, independently of ${\omega} \in \Omega$. With the random solution of this problem at hand we aim to compute $\E\left(Q(u)\right)$ where $Q$ is a deterministic \ac{QoI}, e.g. an integral of $u$ on a subregion or surface, and $\E$ the expectation.
Although it is not strictly necessary (at least in deterministic domains \cite{Charrier2013,Teckentrup2013a}), uniform ellipticity is often assumed to guarantee the well-posedness of \Eq{eq:model} (see \cite{Barth2011,Babuska2010,Chaudhry20181127}). Therefore, we assume ${k}$ uniformly bounded from below, i.e. there exists ${k}_0$ such that ${k}({\boldsymbol{x}}) \ge {k}_0, \, \forall {\boldsymbol{x}} \in \mathcal{B}$ (and $\forall {\omega} \in \Omega$ if a stochastic diffusion ${k}({\boldsymbol{x}},{\omega})$ is considered). Under these assumptions, the bilinear form associated to the weak form of \eqref{eq:model} is bounded and coercive and the Lax-Milgram lemma guarantees a solution for any ${\omega} \in \Omega$ uniformly bounded by $ \|f\|_{L^2( \mathcal{B})} $ \cite{Chaudhry20181127}.
\def\mathcal{S}{\mathcal{S}}
\def\mathcal{M}{\mathcal{M}}
\def\mathcal{T}{\mathcal{T}}
\def{\ell}{{\ell}}
\def{\T}^\M{{\mathcal{T}}^\mathcal{M}}
\def{\T}^\D{{\mathcal{T}}^{\mathcal{D}}}
\def\widetilde{\D}{\widetilde{{\mathcal{D}}}}
\def\mathcal{V}{\mathcal{V}}
\def\widetilde{\X}{\widetilde{\mathcal{V}}}
\def\mathcal{I}{\mathcal{I}}
\section{Monte Carlo and its multilevel extension}
\label{sec:mc_and_mlmc}
\subsection{The Monte Carlo method}
\label{sec:mc}
The aim is to compute the expected value $\E(Q)$ of the \ac{QoI} $Q(u)$. Using a discrete approximation in a mesh $\mathcal{T}_h$ of size $h$ having $M \propto h^{-d} $ degrees of freedom, we actually compute a discrete approximation to the \ac{QoI} using $u_h$, the discrete \ac{FE} solution of the continuous problem at hand. Let us denote $Q_h=Q(u_h)$. The standard MC algorithm computes the approximation of its expected value as the average
\begin{equation}
\E(Q) \approx \overline{Q}_h = \frac{1}{N} \sum_{i=1}^N Q_h^i, \label{eq:mc_average}
\end{equation}
where $Q_h^i$ are obtained evaluating the \ac{QoI} using the $N$ realizations (samples) $u_h^i$ of the stochastic solution $u_h({\boldsymbol{x}},{\omega})$ on the given mesh $\mathcal{T}_h$. The mean square error of this approximation is
\begin{equation}
e^2(\overline{Q}_h) = \E\left[(\overline{Q}_h - \E(Q))^2\right] = (\E(Q_h -Q))^2 + \V(\overline{Q}_h)
\label{eq:mc_error}
\end{equation}
where $\V$ is the variance. As it is well known, the first term is the (squared) discretization or bias error and its reduction requires refining the mesh, whereas the second one is the statistical error that decays as
\begin{equation}
\V(\overline{Q}_h) = N^{-1} \V(Q_h),
\label{eq:mc_variance}
\end{equation}
so its reduction requires increasing the number of samples. Denoting the complexity of evaluating $Q^i_h$ as $C_h$, it is possible to estimate the total computational cost of the MC algorithm, $C_{\rm MC}$, under the following assumptions.
\begin{assumption}
\label{as:1}
There exist $\alpha$ and $c_{\alpha}$ such that
$$|\E(Q_h -Q)| \leq c_{\alpha}h^{\alpha} .$$
\end{assumption}
\begin{assumption}
\label{as:2}
There exist $\gamma$ and $c_{\gamma}$ such that
$$C_h \leq c_{\gamma} h^{-\gamma} .$$
\end{assumption}
By \Ass{as:1} the mesh size required to achieve a discretization error smaller than $\varepsilon$ is $h \preceq \varepsilon^{1/\alpha}$ (hereafter $a \preceq b$ means that there is a constant $c$, independent of $h$, such that $a \le c b$).
On the other hand, by \Eq{eq:mc_variance}, the number of samples required to achieve a statistical error $ \left[\V(\overline{Q}_h)\right]^{1/2} \leq \varepsilon $ is $N \propto \varepsilon^{-2}$, where here $a\propto b$ means that there exist constants $c$ and $C$ such that $c\varepsilon^{-2} \le N \le C\varepsilon^{-2}$. Then
\begin{equation}
C_{\rm MC} = N C_h \preceq N h^{-\gamma} \preceq \varepsilon^{-(2+\gamma/\alpha)}
\label{eq:mc_complex}
\end{equation}
Note that the factor $\varepsilon^{-\gamma/\alpha} $ comes from the discretization error and represents the (asymptotic) cost of one sample.
For the model problem of \Sec{sec:problem}, \Ass{as:1} holds with $\alpha=2$ \cite{Teckentrup2013a}. Now the constant $\gamma$ depends on $d$ and the type of solver. For naive Gaussian elimination $\gamma=3d$, whereas $\gamma=d$ for the optimal multigrid method. In between, for sparse linear solvers $\gamma=3(d-1)$ (for $d=2,3$) whereas for the (unpreconditioned) \ac{CG} algorithm $\gamma=d+1$. In the last case, for example, the computational cost scales as $\varepsilon^{-4}$ when $d=3$.
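For instance, with $\alpha=2$ and $\gamma=4$ (the values quoted above for $d=3$ and \ac{CG}), the cost estimate can be sketched as follows, with the constants $c_{\alpha}$ and $c_{\gamma}$ set to one purely for illustration:
\begin{verbatim}
def mc_cost(eps, alpha=2.0, gamma=4.0):
    """Rough MC cost model: h ~ eps^(1/alpha), N ~ eps^-2, C_MC ~ N * h^-gamma,
    i.e. C_MC ~ eps^-(2 + gamma/alpha); all constants set to one."""
    h = eps ** (1.0 / alpha)
    N = eps ** -2.0
    return N * h ** -gamma

print(mc_cost(1e-3))   # ~ (1e-3)^-4 = 1e12 for alpha = 2, gamma = 4
\end{verbatim}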
\subsection{The Multilevel Monte Carlo method}
\label{sec:mlmc}
The \ac{MLMC} algorithm exploits the linearity of the expectation using a hierarchy of $L+1$ meshes $\mathcal{T}_0,\mathcal{T}_1,...,\mathcal{T}_L$ of sizes $h_0>h_1>...>h_L$. Although there are other options \cite{Haji-Ali2016}, we assume $h_{\ell}=h_0s^{-{\ell}}$ (i.e. each mesh in the hierarchy is obtained by uniformly dividing each cell into $s^d$ subcells). Performing $\{N_{\ell}\}_{{\ell}=0,...,L}$ simulations for different values of the random parameters ${\omega}$ on $\{\mathcal{T}_{\ell}\}_{{\ell}=0,...,L}$, the expectation on the coarse grid is corrected using the whole hierarchy as
\begin{equation}
\E(Q_L) = \E(Q_0) + \sum_{{\ell}=1}^L \E(Q_{\ell}-Q_{{\ell}-1}) \approx \sum_{{\ell}=0}^L \overline{Y_{\ell}}:=\widetilde{Q}_L \label{eq:mlmc_average}
\end{equation}
where $Q_{\ell}=Q(u_{{\ell}})$ is the approximation of the \ac{QoI} computed using the \ac{FE} solution $u_{\ell}$ for the mesh $\mathcal{T}_{\ell}$ and $Y_{\ell}=Q_{\ell}-Q_{{\ell}-1}$ for ${\ell}=1,...,L$ and $Y_0=Q_0$.
The total error of this approximation is now given by
\begin{equation}
e^2( \widetilde{Q}_L ) = \E\left[( \widetilde{Q}_L - \E(Q) )^2\right] = (\E(Q_L -Q))^2 + \V(\widetilde{Q}_L)
\label{eq:mlmc_error}
\end{equation}
The first term, the (squared) discretization error, is the same as in the \ac{MC} method and it requires mesh $\mathcal{T}_L$ to be fine enough. For this term to be smaller than, e.g. $\varepsilon^2/2$, \Ass{as:1} requires $c_{\alpha} h_L^{\alpha} \leq 2^{-1/2} \varepsilon$, or, equivalently,
\begin{equation}
L = \lceil {\frac{1}{\alpha} \log_s \left(\sqrt{2}c_{\alpha} \varepsilon^{-1}\right)} \rceil
\label{eq:mlmc_num_levels}
\end{equation}
where, as usual, $\lceil x \rceil$ denotes the smallest integer $n$ satisfying $n \geq x$. The second term in \Eq{eq:mlmc_error}, the statistical error, given by $\V(\widetilde{Q}_L) = \sum_{{\ell}=0}^{L} N_{\ell}^{-1} \V(Y_{\ell})$, is very different from the one in the \ac{MC} method thanks to the following assumption.
\begin{assumption}
\label{as:3}
For any ${\ell}=0,\ldots,L$, there exist $\beta$ and $c_{\beta}$ such that
$$\V(Y_{\ell}) \leq c_{\beta}h_{\ell}^{\beta}.$$
\end{assumption}
Thanks to \Ass{as:3}, $\V(Y_{\ell})$ tends to zero with $h_{\ell}$ and in this sense, the \ac{MLMC} method can be understood as a variance reduction method. Under \Ass{as:1}, \Ass{as:2} and \Ass{as:3}, it is possible to explicitly bound the total computational cost and to minimize it to obtain the optimal number of samples to be taken on each level \cite{Giles2008}.
The number of samples on each level required to have a (squared) statistical error smaller than $\varepsilon^2/2$ is given by
\begin{equation}
N_{\ell} = \lceil {2 \varepsilon^{-2} \sqrt{\frac{\V(Y_{\ell})}{C_{\ell}}} \sum_{i=0}^{L} \sqrt{\V(Y_i)C_i}} \rceil,
\label{eq:mlmc_num_samples}
\end{equation}
where $C_{\ell}$ is the complexity (computational cost) of evaluating $Q_{\ell}$, i.e. taking one sample at level ${\ell}$. \Eq{eq:mlmc_num_samples} can also be written as
\begin{equation}
N_{{\ell}} = \lceil {s^{\Gamma (L-{\ell})} N_{L}} \rceil,
\label{eq:num_samples_ratio}
\end{equation}
where
$$\Gamma=\frac{1}{2}(\gamma+\beta).$$
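A direct evaluation of \Eq{eq:mlmc_num_samples} can be sketched as follows, for hypothetical per-level variances and costs following \Ass{as:2} and \Ass{as:3} with $s=2$ and the rates $\beta=4$ and $\gamma=3$ used later for the two-dimensional problem:
\begin{verbatim}
import math

def samples_per_level(V, C, eps):
    """N_l = ceil( 2 eps^-2 sqrt(V_l/C_l) sum_i sqrt(V_i C_i) )."""
    total = sum(math.sqrt(v * c) for v, c in zip(V, C))
    return [math.ceil(2.0 * eps ** -2.0 * math.sqrt(v / c) * total)
            for v, c in zip(V, C)]

beta, gamma, L = 4.0, 3.0, 5                     # hypothetical decay/growth rates
V = [2.0 ** (-beta * l) for l in range(L + 1)]   # V(Y_l) ~ s^(-beta l)
C = [2.0 ** (gamma * l) for l in range(L + 1)]   # C_l ~ s^(gamma l)
print(samples_per_level(V, C, eps=1e-3))         # decreasing with the level
\end{verbatim}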
The computational complexity of the \ac{MLMC} algorithm is given by $C_{\rm MLMC} = \sum_{{\ell}=0}^{L} N_{\ell} C_{\ell}$, and it can be bounded using \Ass{as:2} and \Ass{as:3} as
\begin{equation}
C_{\rm MLMC} \preceq \varepsilon^{-2-\max(0,\frac{\gamma-\beta}{\alpha})},
\label{eq:mlmc_complexity_bound}
\end{equation}
with an additional logarithmic factor when $\gamma=\beta$.
There are three different situations depending on whether the (computational cost required to reduce the) statistical error dominates, is of the same order of, or is dominated by the discretization error. In the first case, $\beta>\gamma$ and because the statistical error is eliminated using coarse grids, the complexity of the \ac{MLMC} algorithm is independent of the fine mesh size. In the last one $\beta<\gamma$ and the complexity includes a factor $\varepsilon^{-\gamma/\alpha} $ that comes from the discretization error, which also appears in the \ac{MC} complexity estimate \eqref{eq:mc_complex}. In applications of \acp{PDE} with random coefficients, it is typical to have $\beta=2\alpha$ and in this case, $C_{\rm MLMC} \leq \varepsilon^{-\gamma/\alpha} $ which is the complexity of one sample at the finest grid, that is, the \ac{MLMC} algorithm does not \emph{see} the sampling cost \cite{Pisaroni2017}. Thus, the relation between parameters $\beta$ and $\gamma$ determines whether the effort should be concentrated on coarse or fine meshes.
From the previous discussion the crucial role of the constants $\alpha$, $\beta$, $\gamma$, $c_{\alpha}$, $c_{\beta}$ and $c_{\gamma}$ is apparent. In general, these constants are unknown (they are problem and solver dependent) but their estimation is required to define the optimal number of levels and samples in \ac{MLMC}. The so-called adaptive \ac{MLMC} algorithms have been developed to face this challenge by estimating these constants with extrapolations and/or statistical inference \cite{Giles2008,Cliffe2011,Collier2015}. For example, the variances in \Eq{eq:mlmc_num_samples} can be estimated (a posteriori) using the sample variance
\begin{equation}
\V(Y_{\ell}) \approx V(Y_{\ell}) = \frac{1}{N_{\ell}} \sum_{i=1}^{N_{\ell}} (Y^i_{\ell} - \overline{Y_{\ell}})^2.
\label{eq:sample_variance}
\end{equation}
We do not follow this strategy herein, but we exploit the knowledge of the problem at hand.
For the problem of \Sec{sec:problem} $\beta=4$ \cite{Teckentrup2013a} and, using a \ac{CG} algorithm, $\gamma=d+1$, so the statistical error dominates the computational cost and the optimal ratio between the numbers of samples across the hierarchy is given by \Eq{eq:num_samples_ratio} with $\Gamma=4$ for $d=3$ and $\Gamma=3.5$ for $d=2$.
Observe that \Ass{as:1} and \Ass{as:3} (redefining $c_{\alpha}$ to include $h_0^{-\alpha}$ and $c_{\beta}$ to include $h_0^{-\beta}$ ) can be written as
\begin{align}
|\E(Q_{\ell}-Q)| \leq c_{\alpha} s^{-\alpha {\ell}}, \qquad \V(Y_{\ell}) \leq c_{\beta} s^{-\beta {\ell}}.\label{eq:error_decay}
\end{align}
The corresponding sample approximations $|Q-\widetilde{Q}_{\ell}|$ (see \Eq{eq:mlmc_average}) and $V(Y_{\ell})$ (see \Eq{eq:sample_variance}) statistically converge to these values and thus, with a large enough number of samples, one can check spatial convergence (i.e., with respect to $\ell$) for these quantities too.
In the numerical examples of \Sec{sec:examples} $s=2$ and therefore, the (logarithmic) convergence rate with respect to the level is $2\log(2)$ and $4\log(2)$ respectively.
\section{The aggregated FE method}
\label{sec:agg}
The mesh hierarchy required by the \ac{MLMC} algorithm is built from $\mathcal{T}_0$, an initial shape-regular partition of $\mathcal{B}$, generating $\mathcal{T}_{{\ell}+1}$ by uniform refinement of $\mathcal{T}_{\ell}$.
In our implementation, we consider (a bounding box) $\mathcal{B} := \prod_{i=1}^d[x^i_m,x^i_M]$ and $\mathcal{T}_0$ a uniform Cartesian mesh. At every level ${\ell}$, we consider a piecewise linear approximation $\mathcal{M}_{\ell}({\omega})$ of the manifold $\mathcal{M}({\omega})$. When $\mathcal{M}({\omega})$ is already a polytopal surface mesh, e.g. an STL mesh, no approximation is required at this step. However, if the resolution of $\mathcal{M}({\omega})$ is very fine compared to the coarser mesh sizes, it would involve a very costly numerical integration at the coarser levels. A case of practical interest, which will be addressed in this work, is when $\mathcal{M}({\omega})$ is represented implicitly with a level-set function $\phi({\boldsymbol{x}})$. In this case, we construct a $\mathcal{C}^0$ Lagrangian \ac{FE} space of arbitrary order on the grid $\mathcal{T}_{\ell}$, which is represented with ${\widetilde{\X}}(\mathcal{T}_{\ell})$, and define $\mathcal{M}_{\ell}({\omega})$ as the result of applying the marching tetrahedra algorithm \cite{Doi91} to the interpolation $\phi_{\ell}({\boldsymbol{x}})$ of the level-set function $\phi({\boldsymbol{x}})$ onto $\widetilde{\X}(\mathcal{T}_{\ell})$. When considering quad meshes, one can simply decompose each $n$-cube into simplices before applying the marching tetrahedra algorithm.
\newcommand{\closure}[2][3] {
{}\mkern#1mu\overline{\mkern-#1mu#2}}
Given the oriented manifold $\mathcal{M}_{\ell}({\omega})$ (and its corresponding interior, the domain ${\mathcal{D}}_{\ell}({\omega})$) and the background mesh $\mathcal{T}_{\ell}$, due to the piecewise linear nature of the manifold, it is possible to compute exactly the intersection of each cell $K \in \mathcal{T}_{\ell}$ with $\mathcal{M}_{\ell}({\omega})$ and ${\mathcal{D}}_{\ell}({\omega})$. We represent with ${\T}^\D_{\ell}({\omega})$ (resp. ${\T}^\M_{\ell}({\omega})$) the set of active (resp., cut) cells in $\mathcal{T}_{\ell}$ that intersect ${\mathcal{D}}_{\ell}({\omega})$ (resp., $\mathcal{M}_{\ell}({\omega})$); ${\T}^\D_{\ell}({\omega}) \setminus {\T}^\M_{\ell}({\omega})$ is the set of interior cells in ${\mathcal{D}}_{\ell}({\omega})$.
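In practice, the classification of background cells into interior, exterior and cut cells can be read off the nodal values of the interpolated level set. A minimal sketch, assuming a nodal \ac{FE} interpolation, the convention $\phi<0$ inside the domain, and simple array-based mesh data, is:
\begin{verbatim}
import numpy as np

def classify_cells(phi_nodes, cells):
    """Classify cells from nodal level-set values: 'interior' (all phi < 0),
    'exterior' (all phi > 0) or 'cut' (sign change); interior and cut cells
    form T^D, cut cells form T^M."""
    labels = []
    for conn in cells:              # conn: node indices of one background cell
        v = phi_nodes[conn]
        if np.all(v < 0.0):
            labels.append('interior')
        elif np.all(v > 0.0):
            labels.append('exterior')
        else:
            labels.append('cut')
    return labels
\end{verbatim}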
Numerical integration over cells $K \in {\T}^\M_\ell({\omega})$ can be carried out by generating a simplicial mesh $\mathcal{I}^{\mathcal{D}}_K$ of $K \cap {\mathcal{D}}_{\ell}({\omega})$, e.g. using a Delaunay triangulation.
Interior cell volume integration is straightforward, and we can simply take $\mathcal{I}_K^{{\mathcal{D}}} = K$. We proceed analogously to create surface meshes for ${K} \cap \mathcal{M}_{\ell}({\omega})$. We define the volume integration mesh as the union of the interior cells and the integration meshes for cut cells $ \mathcal{I}_{\ell}^{\mathcal{D}}({\omega}) := \bigcup_{K \in {\T}^\D_\ell({\omega})} \mathcal{I}^{\mathcal{D}}_K$; analogously for the surface mesh $\mathcal{I}_{\ell}^\mathcal{M}({\omega})$. These integration meshes always exist and are cell-wise local, i.e., no conformity is required among cells.
We note that $\mathcal{I}_{\ell}^{\mathcal{D}}({\omega})$ is a body-fitted mesh of ${\mathcal{D}}_\ell({\omega})$ and $\mathcal{I}_{\ell}^\mathcal{M}({\omega})$ is a surface mesh of $\mathcal{M}_{\ell}({\omega})$. However, the shape regularity of the meshes is not relevant since they are only used for integration purposes. For higher (than one) order approximations of $\mathcal{M}({\omega})$, we can consider the same approach described above supplemented with an additional degree elevation of the linear cell-wise integration meshes. These techniques impose some constraints on the ratio between surface curvature and mesh resolution and are not being used in Section \ref{sec:examples}, since we restrict ourselves to first order \acp{FE}.
\def\boldsymbol{\nabla}{\boldsymbol{\nabla}}
In this work, $H^1$-conforming Lagrangian spaces of arbitrary order are considered for the discretization of the \ac{PDE} problem at hand. We denote with $\widetilde{\X}_\ell({\T}^\D,{\omega})$ the \ac{FE} space on the grid ${\T}^\D_{\ell}({\omega})$. This space is defined on $\widetilde{\D}({\omega}) \supseteq {\mathcal{D}}({\omega})$, where $\widetilde{\D}({\omega})$ is the union of all cells in ${\T}^\D({\omega})$. Thus, ${\T}^\D_{\ell}({\omega})$ is not a body-fitted mesh of ${\mathcal{D}}({\omega})$ and embedded \ac{FE} techniques must be used to properly capture the geometry. Let us consider the weak form of \eqref{eq:model} in which the essential boundary conditions are imposed weakly using Nitsche's method. First, we define the cell-wise forms
\def\mathcal{A}{\mathcal{A}}
\def\boldsymbol{n}{\boldsymbol{n}}
\defK{K}
\begin{align}
& \mathcal{A}^{\ell}_K(u,v) := \int_{K \cap {\mathcal{D}}_{\ell}({\omega})} \boldsymbol{\nabla} u \cdot \boldsymbol{\nabla} v \mathrm{d}V + \int_{K \cap \mathcal{M}_{\ell}({\omega})} (\tau_K u v - v ( \boldsymbol{n} \cdot \boldsymbol{\nabla} u) - u (\boldsymbol{n} \cdot \boldsymbol{\nabla} v) ) \mathrm{d}S, \\
& {b^{\ell}_K(v)} := \int_{K \cap {\mathcal{D}}_{\ell}({\omega})} f v \mathrm{d}V + \int_{K \cap \mathcal{M}_{\ell}({\omega})} (\tau_K u_0 v - (\boldsymbol{n} \cdot \boldsymbol{\nabla} v) u_0 ) \mathrm{d}S,
\end{align}
where $\tau_K \approx \mathcal{O}(h_{\ell}^{-2})$ is a positive parameter that must be large enough to ensure coercivity of the bilinear form. The global form $\mathcal{A}^{\ell}: H_0^1({\mathcal{D}}_{\ell}({\omega})) \rightarrow H^{-1}({\mathcal{D}}_{\ell}({\omega}))$ and right-hand side term $b^{\ell} \in H^{-1}({\mathcal{D}}_\ell({\omega}))$ are stated as the sum of the element contributions, i.e.,
\begin{align}\label{eq:bil-form}
\mathcal{A}^{\ell}(u,v) := \sum_{K \in {\T}^\D_{\ell}({\omega})} \mathcal{A}^{\ell}_K(u,v), \quad
b^{\ell}{ (v)} := \sum_{K \in {\T}^\D_{\ell}({\omega})} b^{\ell}_K(v).
\end{align}
The use of embedded \ac{FE} methods has a dramatic impact on the condition number of the resulting linear systems, known as the \emph{small cut cell} problem. Given that the mesh $\mathcal{T}_{\ell}$ is shape regular, one can define its characteristic cell size $h_{\ell}$. It is well-known in body-fitted \ac{FE} methods that the condition number of the linear system that results from the discretization of second order elliptic \acp{PDE} is $\mathcal{O}(h_\ell^{-2})$. However, for unfitted \ac{FE} methods, it also deteriorates with the maximum of ${|K \cap {\mathcal{D}}_{\ell}({\omega})|}^{-1}{|K|}$ among all cells $K \in {\T}^\D_{\ell}$. The portion of a cell lying in the physical domain cannot be controlled and can be arbitrarily close to zero. As a result, embedded \ac{FE} methods can produce nearly singular linear systems and are thus not robust.
\def\mathcal{Q}{\mathcal{Q}}
\def\mathcal{G}{\mathcal{G}}
To fix the small cut cell problem, we consider the \ac{agfem} recently proposed in \cite{Badia2017a}. The idea is to start with the mesh ${\T}^\D_{\ell}$ and generate an aggregated mesh $\mathcal{G}_{\ell}({\omega})$ in which the cut cells with small volume ratio within the physical domain are aggregated to an interior cell (see \cite[Alg. 2.1]{Badia2017a}). Such aggregation is next used to define constraints over the standard \ac{FE} space $\widetilde{\X}({\T}^\D_{\ell},{\omega})$. We represent the resulting \ac{agfe} space with $\mathcal{V}(\mathcal{G}_{\ell},{\omega}) \subseteq \widetilde{\X}({\T}^\D_{\ell},{\omega})$. We refer the interested reader to \cite{Badia2017a} for a detailed exposition of the mesh aggregation algorithm, the computation of the constraints, implementation issues, and its numerical analysis, to \cite{Verdugo2019} for its parallel implementation, and to \cite{Badia2018b} for a mixed \ac{agfe} space for the Stokes problem. \ac{agfe} spaces are not affected by the small cut cell problem and the condition number of the resulting matrix is $\mathcal{O}(h_{\ell}^{-2})$ \cite{Badia2017a}. The numerical experiments in \cite{Verdugo2019} show that a standard parallel algebraic multigrid solver is effective for solving \ac{agfe} linear systems using default settings. The \ac{agfe} space approximation of \eqref{eq:model} at a given level ${\ell}$ reads as follows: find $u_{\ell} \in \mathcal{V}(\mathcal{G}_{\ell},{\omega})$ such that
\begin{align}
\mathcal{A}^{\ell}(u_{\ell},v_{\ell}) = b^{\ell}(v_{\ell}) \qquad \hbox{ for any } \, v_{\ell} \in \mathcal{V}(\mathcal{G}_{\ell},{\omega}).\label{eq:agfeprob}
\end{align}
We note that all these terms can be computed using the cell-wise integration triangulations described above. The condition number of the matrix resulting from the \ac{agfem} discretization has been proven to be $\mathcal{O}(h_{\ell}^{-2})$, as in body-fitted methods (see \cite{Badia2017a}). All the steps required to apply \ac{agfem} in the framework of \ac{MLMC} are listed in Alg. \ref{alg:alg}.
\def\w^i{{\omega}^i}
\begin{algorithm}
\caption{\ac{EMLMC}\label{alg:alg}}
\For{${\ell} \in \{1,\ldots,L\}$}{
Compute $\mathcal{T}_{\ell}$ as uniform refinement of $\mathcal{T}_{{\ell}-1}$\\
Generate the \ac{FE} space $\widetilde{\X}(\mathcal{T}_{\ell})$
}
$\widetilde{Q}_L = 0 $ \\
\For{${\ell} \in \{0,\ldots,L\}$}{
$\overline{Y_{\ell}} = 0$\\
\For{$i \in \{1,\ldots,N_{\ell}\}$}{
Sample the random variables and get $\w^i$ \\
Set $k={\ell}$ \\
Interpolate the level-set function $\phi({\boldsymbol{x}},\w^i)$ in the \ac{FE} space $\widetilde{\X}(\mathcal{T}_k)$ to obtain $\phi_k({\boldsymbol{x}},\w^i)$ \nllabel{lin:interp} \\
Run the marching tetrahedra algorithm for $\phi_k({\boldsymbol{x}}, \w^i)$ and compute $\mathcal{M}_k(\w^i)$ and ${\mathcal{D}}_k(\w^i)$ \\
Compute the intersection of cells in $\mathcal{T}_k$ with $\mathcal{M}_k(\w^i)$ and ${\mathcal{D}}_k(\w^i)$ to obtain ${\T}^\D_k(\w^i)$ and ${\T}^\M_k(\w^i)$ and the cell-local integration meshes $\mathcal{I}_k^{\mathcal{M}}(\w^i)$ and $\mathcal{I}_k^{{\mathcal{D}}}(\w^i)$ \\
Generate the \ac{FE} space $\widetilde{\X}({\T}^\D_k,\w^i)$ \\
Run the mesh aggregation algorithm in \cite[Alg. 2.1]{Badia2017a} to compute the aggregated mesh $\mathcal{G}_k(\w^i)$ \\
Generate the \ac{agfe} space $\mathcal{V}(\mathcal{G}_k,\w^i)$ as in \cite[Sect. 3]{Badia2017a}\\
Compute the solution $u_k(\w^i)$ of \eqref{eq:agfeprob} \\
Compute the \ac{QoI} $Q^i_k=Q(u_k(\w^i))$ \nllabel{lin:qoi} \\
$\overline{Y_{\ell}} = \overline{Y_{\ell}} + Q^i_k$\\
\If{${\ell} > 0$}{
Set $k={\ell}-1$ \\
Repeat lines \ref{lin:interp} to \ref{lin:qoi}.\\
$\overline{Y_{\ell}} = \overline{Y_{\ell}} - Q^i_k$\\
}
}
$\widetilde{Q}_L = \widetilde{Q}_L + \overline{Y_{\ell}} / N_{\ell}$ \\
}
\end{algorithm}
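For readers who prefer the pseudocode in executable form, the control flow of the algorithm above can be summarized by the following minimal Python sketch. The helpers \texttt{sample\_omega} and \texttt{qoi\_on\_level} are illustrative placeholders (they are not part of \texttt{FEMPAR}) standing in for the level-set sampling, the \ac{agfe} discretization and the solve performed at each level; only the telescoping \ac{MLMC} logic is reproduced.
\begin{verbatim}
# Minimal Python sketch of the control flow of the algorithm above;
# sample_omega and qoi_on_level are illustrative placeholders for the
# level-set sampling, AgFEM assembly and solve described in the text.
def emlmc_estimate(L, N, sample_omega, qoi_on_level):
    Q_L = 0.0
    for l in range(L + 1):
        Y_l = 0.0
        for _ in range(N[l]):
            omega = sample_omega()
            # QoI on the current level for this sample ...
            Y_l += qoi_on_level(l, omega)
            # ... minus the coarse-level QoI for the correction term.
            if l > 0:
                Y_l -= qoi_on_level(l - 1, omega)
        Q_L += Y_l / N[l]
    return Q_L
\end{verbatim}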
\section{Numerical examples}
\label{sec:examples}
In this section we present three numerical examples aimed at illustrating the efficiency and robustness of the proposed approach. The first one is a convergence test whose goal is to assess the actual convergence rates of the implementation. The second one is the solution of \eqref{eq:model} in a complex random domain, illustrating in a quantitative way the robustness of the proposed approach. Finally, we solve \eqref{eq:model} in a situation where the randomness may induce changes in the topology. These numerical experiments have been carried out using \texttt{FEMPAR} v1.0.0~\cite{fempar,Badia2019}, an open source object-oriented scientific computing library for the simulation of complex multiphysics problems governed by \acp{PDE} at large scales, which incorporates embedded \ac{FE} machinery \cite{Verdugo2019}.
The \ac{EMLMC} method was implemented in \texttt{FEMPAR} by developing a new software layer that permits the concurrent execution of a model, understood as the actual implementation of an algorithm for the solution of a \ac{PDE}. In our case, the model was built using the numerical tools provided by \texttt{FEMPAR}: the automatic generation of background meshes $\mathcal{T}_{\ell}$ of arbitrary size, the level-set description of the geometry and its interpolation, the construction of the \ac{agfem} discretization, the numerical integration, and the solution of the linear systems using domain decomposition preconditioners. The implementation of sampling methods in this new module \cite{workflow} exploits three levels of parallelism: across levels, across samples, and in the evaluation of each sample, e.g., employing a domain decomposition method. The latter feature is not exploited in the examples below, where each sample is computed using one processor.
\subsection{A convergence test}
\label{subsec:Canuto}
In this section we consider an example with analytic solution proposed in~\cite{Canuto2007}.
The spatial domain $ {\mathcal{D}}(R)\subset \R^2$ is a circle centered at $(0.5,0.5)$ whose radius $R$ is a truncated Gaussian random variable with mean $\mu=0.3$, standard deviation $\sigma=0.025$, and restricted to $[a,b]$ with $a=0.2$ and $b=0.4$, which we denote by ${\mathcal{TN}}(a,b,\mu,\sigma)$. In this random domain we solve problem \Eq{eq:model} with $f=4$, $k=1$, and Dirichlet boundary conditions $u_0=u_{\rm ex}$ where
\begin{equation}
u_{\rm ex}({\boldsymbol{x}}):=R^2-|{\boldsymbol{x}}_1 - 0.5|^2 - |{\boldsymbol{x}}_2 - 0.5|^2,
\label{eq:solution_canuto}
\end{equation}
is the exact solution. We consider two different (although similar) \acp{QoI}, the average of the solution over the whole domain $\omega_1={\mathcal{D}}(R)$ or over a (deterministic) subregion $ \omega_2=\widehat{D} \subset {\mathcal{D}}$ taken as $\widehat{D}=[0.5-\delta,0.5+\delta]^2$ with $\delta=0.125$, that is,
\begin{equation}
Q_i(u):=\frac{1}{|\omega_i|}\int_{\omega_i}u
\label{eq:qois}
\end{equation}
where $|\omega_i|$ denotes the area of $\omega_i$. Using \Eq{eq:solution_canuto} we get $Q_1=0.5 R^2$ and $Q_2=R^2 - 2\delta^2/3$. The \ac{PDF} of the ${\mathcal{TN}}$ can be written in terms of the error function and the variance can be computed easily using, e.g., \cite{truncatednormal}, which gives
$\E(Q_1)=0.04531216540324139$ and $\E(Q_2)=0.08020766413981611$.
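These reference values can be reproduced with a few lines of code; the following SciPy snippet is only a sanity check of the quoted expectations and is independent of the \texttt{FEMPAR} implementation.
\begin{verbatim}
# Sanity check of E(Q_1) and E(Q_2) for R ~ TN(0.2, 0.4, 0.3, 0.025).
from scipy.stats import truncnorm

mu, sigma, a, b, delta = 0.3, 0.025, 0.2, 0.4, 0.125
R = truncnorm((a - mu) / sigma, (b - mu) / sigma, loc=mu, scale=sigma)
ER2 = R.moment(2)                  # second raw moment E(R^2)
print(0.5 * ER2)                   # E(Q_1) ~ 0.0453121...
print(ER2 - 2 * delta ** 2 / 3)    # E(Q_2) ~ 0.0802076...
\end{verbatim}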
Because the \ac{MLMC} estimate given by \Eq{eq:mlmc_average} is a random variable, so is the squared error $(\widetilde{Q}_L-\E(Q))^2$. Different realizations of the experiment produce different results, and the expected error is determined by \eqref{eq:mlmc_error}. Therefore, to evaluate the error decay in \eqref{eq:error_decay}, we approximate the expected squared error by a sample average, running each experiment $K$ times, i.e., we compute the error as
\begin{equation}
{\mathcal{E}}_L^2 = \frac{1}{K} \sum_{k=1}^K (\widetilde{Q}^k_L-\E(Q))^2,
\label{eq:average_error}
\end{equation}
where $\widetilde{Q}^k_L$ is the result of the $k$-th realization of the experiment. The numerical experiments have been performed with $K=100$, whereas a value of $K=30$ was found to be sufficient in the context of hyperbolic conservation laws \cite{Mishra2012b}. Likewise we compute the averaged variances of the \ac{QoI} differences between levels as
\begin{equation}
\mathcal{V}_l = \frac{1}{K} \sum_{k=1}^K V(Y_l^k)
\label{eq:average_variance}
\end{equation}
where $V(Y_l^k)$ is given by \Eq{eq:sample_variance} for the $k$-th realization of the experiment. In practice, each realized \ac{MLMC} computation, here indexed by $k$, is computed with a different seed of the \ac{PRNG}. In the studies here we utilize the xoshiro256** algorithm \cite{PRNGxoshiro256} for generating pseudo random sequences, which has a 256-bit state. The algorithm is well suited for this work as it is quick to advance the state and statistically robust when seeded well. The detailed seeding procedure is not essential for the results presented here, but we outline the basic process. The seeding uses the recommended splitmix64 \ac{PRNG} \cite{PRNGsplitmix} to advance from an initial 64-bit seed. Its outputs are used, together with physically generated random bits, to seed the state space of a Mersenne Twister \cite{Matsumoto1998}, specifically MT19937-64, and the outputs of the Mersenne Twister are used, together with physically generated random bits, to seed the initial state of the xoshiro256**. The generator has a period of $2^{256}-1$, i.e., its state repeats only after $2^{256}-1$ evaluations. The algorithm is well suited to parallelization because a jump of $2^{128}$ states can be computed cheaply. Using this feature, jumps are performed to set the initial state for each sample, so each process can make $2^{128}$ calls to the random number generator before reaching a state realized by another process, which allows effective and efficient large scale computations. The jumps are performed consistently, so that the initial state depends only on the seed and the sample number, making the stochastic inputs reproducible regardless of the order in which computations are scheduled.
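The jump-based seeding strategy can be illustrated with a short snippet. The sketch below uses NumPy's \texttt{PCG64} bit generator, which exposes an analogous \texttt{jumped} method, rather than the xoshiro256** generator actually employed in our implementation; it only illustrates the idea that the stream of sample $i$ is a function of the seed and $i$ alone.
\begin{verbatim}
# Illustration of jump-based seeding (NumPy's PCG64 is used here as a
# stand-in for xoshiro256**): the substream of sample i depends only on
# the seed and i, so results do not depend on scheduling.
import numpy as np

def stream_for_sample(seed, i):
    bit_gen = np.random.PCG64(seed)
    # Each jump advances the state by a huge fixed number of steps,
    # yielding non-overlapping substreams for the individual samples.
    return np.random.Generator(bit_gen.jumped(i))

rng = stream_for_sample(12345, 3)
print(rng.standard_normal(2))  # identical on every run and every process
\end{verbatim}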
We consider $L=5$ and the number of samples per level is defined according to \Eq{eq:num_samples_ratio} with $\Gamma=7/2$. Each sample is computed taking $\mathcal{B}=[0,1]^2$ and $\mathcal{T}_0$ a Cartesian mesh of $8 \times 8 $ \acp{FE}, and $\mathcal{T}_{{\ell}+1}$ is generated by dividing each element of $\mathcal{T}_{\ell}$ into four subelements (two per direction). \Fig{fig:canuto_avg_err} shows the average error ${\mathcal{E}}_L$ as a function of the level for $N_5=\{3,6\}$. Even for these low values of $N_5$, the level estimates show good convergence, in agreement with the expected decay rate in \Ass{as:1}.
\begin{figure}
\centering
\includegraphics[scale=0.024]{QoIAveragesByLevelCombinedCanuto2d.pdf}
\caption{Averaged error ${\mathcal{E}}_L$ (given by \Eq{eq:average_error}) for each level and sample size (as from \Eq{eq:num_samples_ratio}) evaluated with $Q_1$ (Full Domain) and $Q_2$ (Partial Domain) from \Eq{eq:qois}, compared to the theoretical expected decay rate.}
\label{fig:canuto_avg_err}
\end{figure}
\Fig{fig:canuto_var} shows the sample variance estimate \eqref{eq:average_variance} as a function of the level. The variance of the \ac{QoI} differences between levels decays with the level at a rate slightly higher than the one predicted by the theory in \cite{Teckentrup2013a}, verifying \Ass{as:3}. The determination of these variances is important in adaptive \ac{MLMC} methods, as they are required to obtain the optimal number of samples per level in \Eq{eq:mlmc_num_samples}.
\begin{figure}
\centering
\includegraphics[scale=0.024]{QoIVarianceCombinedCanuto2d.pdf}
\caption{Averaged sample variance between levels (given by \Eq{eq:average_variance}) evaluated with $Q_1$ (Full Domain) and $Q_2$ (Partial Domain) from \Eq{eq:qois}.}
\label{fig:canuto_var}
\end{figure}
The number of samples in \Eq{eq:mlmc_num_samples} also depends on the complexity of sampling at each level $C_{\ell}$ which is shown in \Fig{fig:canuto_cost}. This complexity is measured as the CPU time per sample averaged at each level, i.e., if $t^i_{\ell}$ is the elapsed CPU time in sample $i$ of level ${\ell}$, we estimate the complexity of sampling at each level as
\begin{equation}
C_{\ell}=\frac{1}{N_{\ell}}\sum_{i=1}^{N_{\ell}} t^i_{\ell}.
\end{equation}
As can be seen in \Fig{fig:canuto_cost}, the scaling is in agreement with \Ass{as:2}.
\begin{figure}
\centering
\includegraphics[scale=0.024]{ComputationCostCanuto2d.pdf}
\caption{Computational cost as a function of the level.}
\label{fig:canuto_cost}
\end{figure}
Having verified that the \ac{EMLMC} algorithm exhibits error and variance decays and a cost increase in agreement with \Ass{as:1} to \Ass{as:3}, and since these assumptions yield the complexity bound \Eq{eq:mlmc_complexity_bound}, we expect the actual error to decrease with the cost accordingly. The error decrease with respect to the total complexity, measured as $$C = \sum_{{\ell}=0}^{L}\sum_{i=1}^{N_{\ell}} t^i_{\ell},$$ is shown in \Fig{fig:canuto_samples}, and the agreement with the scaling \Eq{eq:mlmc_complexity_bound} is evident. The significant gain of \ac{MLMC} with respect to \ac{MC}, well known for deterministic domains with random coefficients, is also observed for the \ac{EMLMC} with this random geometry.
\begin{figure}
\centering
\includegraphics[scale=0.024]{ErrorVsCostCombinedCanuto2d.pdf}
\caption{Trade off between error and cost, evaluated with $Q_1$ (Full Domain) and $Q_2$ (Partial Domain) from \Eq{eq:qois}.}
\label{fig:canuto_samples}
\end{figure}
\subsection{Dealing with complex random geometries}
\label{subsec:complex}
The aim of this example is to show the robustness of the proposed approach to perform \ac{UQ} in complex random geometries. To this end, we consider a stochastic extension of a benchmark commonly used to test numerical methods designed to deal with interfaces described by a level-set \cite{hautefeuille_robust_2012,burman_cutfem:_2015}, the Poisson problem in a popcorn domain, a ball with spherical protuberances attached to its surface. Therefore, we aim to solve problem \Eq{eq:model} with $k=1$ and $f=-\nabla^2 u_{\rm ex}$ and Dirichlet boundary conditions $u_0=u_{\rm ex}$, where
\begin{equation}
u_{\rm ex} = \sin(k\|{\boldsymbol{x}}-{\boldsymbol{x}}_c\|)
\end{equation}
is defined in $\mathcal{B}=[0,1]^d$ with ${\boldsymbol{x}}_c=(1/2,1/2,1/2)$ and each sample is computed in a domain ${\mathcal{D}}({\omega})$ defined through a level set function, i.e. ${\boldsymbol{x}} \in {\mathcal{D}}({\omega})$ iff $\phi({\boldsymbol{x}},{\omega})<0$. The classical definition of the popcorn domain is
\begin{equation}
\phi({\boldsymbol{x}}) = \|{\boldsymbol{x}} - {\boldsymbol{x}}_0 \| - \rho_0 - \sum_{i=1}^n A \exp\left( - \|{\boldsymbol{x}}- {\boldsymbol{x}}_i\|^2 / \sigma^2 \right)
\end{equation}
with given parameters $\rho_0$ (the radius of the ball), ${\boldsymbol{x}}_i$ (the location of spherical protuberances), $A$ and $\sigma$ (which control the size of the protuberances) (see e.g. \cite{burman_cutfem:_2015}).
We consider a stochastic variation of this geometry defined by a random level set for $d=2,3$, although in this section we are primarily interested in the case $d=3$. This function is built to represent a random central ellipse (labelled as $0$) with smaller random ellipses attached to its boundary (labelled by $j$ with $1\leq j \leq n$) and is defined by
\begin{equation}
\phi({\boldsymbol{x}},{\omega}) = \min_{j\in \{0:n\}}\left( \|{\boldsymbol{x}}-{\boldsymbol{x}}_j\|_{\bm{AR}(j)} - \rho_j\right),
\label{eq:level_set_popcorn}
\end{equation}
where ${\boldsymbol{x}}_j$ are random centers and $\rho_j$ random radii. The deformation from circles to ellipses is achieved by the norm
\begin{equation}
\|{\boldsymbol{y}}\|_{\bm{AR}(j)} = \sqrt{\left<(\bm{A}_j\bm{R}_j)^{-1}{\boldsymbol{y}},(\bm{A}_j\bm{R}_j)^{-1}{\boldsymbol{y}}\right>},
\end{equation}
which depends on stretching ($\bm{A}_j$) and rotation ($\bm{R}_j$) matrices defined as follows. Distortion matrices $\bm{A}_j$ are square, diagonal matrices with diagonal entries $(\bm{A}_j)_{ii}$ defined by parameters $A^{\prime}_j(i)$ ($1\leq i \leq d$) as
\begin{equation}
\label{eq:axis_normalization_def}
(\bm{A}_j)_{ii} := \frac{A^{\prime}_j(i)}{\sqrt{\sum_{k=1}^d (A^{\prime}_j(k))^2}}.
\end{equation}
Rotations are defined in each $(x_i,x_{i+1})$ plane ($1\leq i\leq d-1$)
through the rotation matrix $\bm{R}^{\prime}(\Theta_j(i))$ defined from random parameters $\Theta_j(i)$ as
\begin{equation}
\label{eq:rotation_def1}
\bm{R}^{\prime}(\Theta_j(i)) := \left(\begin{array}{cc}
\cos(\Theta_j(i)) & -\sin(\Theta_j(i))\\
\sin(\Theta_j(i)) & \cos(\Theta_j(i))\\
\end{array}\right).
\end{equation}
Let $\bm{R}(\Theta_j(i))$ be the embedding of $\bm{R}^\prime(\Theta_j(i))$ into $\R^d$, acting as the identity on the remaining $d-2$ coordinates of the vector. We then define $\bm{R}_j$, the single matrix representing the application of all rotations in $\R^d$, by applying the rotations $\bm{R}(\Theta_j(1)),\cdots, \bm{R}(\Theta_j(d-1))$ in this order, that is,
\begin{equation}
\label{eq:rotation_def3}
\bm{R}_j := \prod_{i=1}^{d-1}\bm{R}(\Theta_j(i)) = \bm{R}(\Theta_j(d-1))\cdots\bm{R}(\Theta_j(1)) .
\end{equation}
The random parameters describing this geometry are distributed according to
\begin{itemize}
\item $n \in {\mathcal{P}}(11)$
\item $A^{\prime}_j \in {\mathcal{U}}(0.8,1.3)^d$
\item ${\boldsymbol{x}}_0 \in {\mathcal{U}}(0.4,0.6)^d$
\item $\rho_0 \in {\mathcal{U}}(0.1,0.2)$
\item $\bm\Theta_j \in {\mathcal{U}}(0,2\pi)^{d-1}$
\item ${\boldsymbol{y}}_j \in \mathcal{S}^d$
\item ${\boldsymbol{x}}_j={\boldsymbol{x}}_0+ \rho_0\bm{A}_0\bm{R}_0{\boldsymbol{y}}_j$
\item $\rho_j \in {\mathcal{U}}(0.03,0.1)$
\end{itemize}
where ${\mathcal{P}}$ and ${\mathcal{U}}$ denote the Poisson and uniform distributions, respectively, and $\mathcal{S}^d$ is the uniform distribution on the surface of a $d$-dimensional unit sphere. Note that using a level-set description permits defining complex domains through Boolean operations. In particular, given two level-set functions $\phi_1$ and $\phi_2$ describing $\Omega_1$ and $\Omega_2$, respectively, the function $\min(\phi_1,\phi_2)$ describes $\Omega_1 \cup \Omega_2$.
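As an illustration, one realization of the level set \eqref{eq:level_set_popcorn} can be sampled and evaluated with a few lines of NumPy; the sketch below is restricted to $d=2$ for brevity, and the function names are illustrative rather than part of the actual implementation.
\begin{verbatim}
# Sketch (d = 2) of sampling and evaluating one realization of the
# random level set above; names are illustrative only.
import numpy as np
rng = np.random.default_rng(0)

def stretch(Ap):                     # matrix A_j built from A'_j
    return np.diag(Ap / np.linalg.norm(Ap))

def rot(t):                          # rotation matrix R_j (d = 2)
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def sample_geometry():
    n = rng.poisson(11)
    x0 = rng.uniform(0.4, 0.6, size=2)
    rho0 = rng.uniform(0.1, 0.2)
    AR0 = stretch(rng.uniform(0.8, 1.3, size=2)) @ rot(rng.uniform(0, 2 * np.pi))
    centers, mats, radii = [x0], [AR0], [rho0]
    for _ in range(n):               # small ellipses on the central boundary
        t = rng.uniform(0, 2 * np.pi)
        y = np.array([np.cos(t), np.sin(t)])          # uniform on the circle
        centers.append(x0 + rho0 * AR0 @ y)
        mats.append(stretch(rng.uniform(0.8, 1.3, size=2))
                    @ rot(rng.uniform(0, 2 * np.pi)))
        radii.append(rng.uniform(0.03, 0.1))
    return centers, mats, radii

def phi(x, geom):                    # min over ellipses = union of ellipses
    centers, mats, radii = geom
    return min(np.linalg.norm(np.linalg.solve(M, x - c)) - r
               for c, M, r in zip(centers, mats, radii))

geom = sample_geometry()
print(phi(np.array([0.5, 0.5]), geom) < 0)   # is the point inside D(omega)?
\end{verbatim}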
A few samples of these stochastic domains are shown in \Fig{fig:popcorn_samples}, where the important variation of the morphology and size of the domains is apparent.
\begin{figure}[!b]
\centering
\includegraphics[width=0.45\textwidth]{3dpopcorn0001}
\includegraphics[width=0.45\textwidth]{3dpopcorn0002} \\
\includegraphics[width=0.45\textwidth]{3dpopcorn0003}
\includegraphics[width=0.45\textwidth]{3dpopcorn0004} \\
\includegraphics[width=0.45\textwidth]{3dpopcorn0005}
\includegraphics[width=0.45\textwidth]{3dpopcorn0006} \\
\caption{Some realizations of the solution of \Eq{eq:model} in a stochastic popcorn domain.}
\label{fig:popcorn_samples}
\end{figure}
An important problem with sampling methods, as described in \Sec{sec:intro}, is the robustness of the solver employed to compute samples; this robustness is an important feature of the \ac{agfem} employed herein and described in \Sec{sec:agg}. We performed a quantitative evaluation of this feature by running the \ac{EMLMC} method with and without the aggregation procedure, using an iterative solver for the linear system arising from the \ac{FE} discretization. Due to the symmetry of problem \eqref{eq:model}, we use the \ac{CG} algorithm; as is well known, the number of iterations required to converge is proportional to the square root of the condition number of the system matrix. We compute statistics of the number of iterations (maximum, minimum and average) taking 1000 samples per level. The results obtained in a 2D version of the problem are shown in \Fig{fig:num_iter_2D}, whereas those obtained in the 3D setting are shown in \Fig{fig:num_iter_3D}.
\begin{figure}[!b]
\centering
\includegraphics[width=0.7\textwidth]{num_iter_pop_corn_2D}
\caption{Number of \ac{CG} iterations required to converge, to a tolerance of $10^{-8}$ in the relative norm, for the solution of each (2D) sample of the \ac{EMLMC} as a function of the mesh size on each level, with and without aggregation.}
\label{fig:num_iter_2D}
\end{figure}
\begin{figure}[!b]
\centering
\includegraphics[width=0.45\textwidth]{num_iter_pop_corn_3D_wa}
\includegraphics[width=0.45\textwidth]{num_iter_pop_corn_3D_woa}
\caption{Number of \ac{CG} iterations required to converge, to a tolerance of $10^{-8}$ in the relative norm, for the solution of each (3D) sample of the \ac{EMLMC} as a function of the mesh size on each level, (a) with and (b) without aggregation (note the difference in the scales).}
\label{fig:num_iter_3D}
\end{figure}
\subsection{Dealing with stochastic topologies}
In this section we consider a problem in a stochastic domain where the randomness may induce changes in the topology. It consists of a square plate with two circular holes of uncertain position which may overlap to form a single one. We consider problem \eqref{eq:model} with Dirichlet boundary conditions on the left and right sides and zero Neumann boundary conditions on the top and bottom sides and on the boundary of the internal holes. Physically, this problem corresponds to the heat transfer in a plate where the left (low) and right (high) temperatures are prescribed and heat flows through the plate at a rate that depends on the location and size of the holes. The natural quantity of interest is therefore
\begin{equation}
Q(u) :=\int_{x_1=0} \partial_{x_1} u.
\label{eq:qoi_def_heat}
\end{equation}
The stochastic domain is defined, again, through a level set function $\phi({\boldsymbol{x}},{\omega}) = \phi_1({\boldsymbol{x}},{\omega})+ \phi_2({\boldsymbol{x}},{\omega})$ where
\begin{equation}
\phi_i({\boldsymbol{x}},{\omega}) = \|{\boldsymbol{x}} - {\boldsymbol{x}}_i({\omega}) \| - \rho_i
\end{equation}
for $i=1,2$ represent the two interior circular holes (which do not conduct heat), one located in the upper half of the plate and one in the lower half. We study the influence of the radius on the average total heat flux through the plate for random positions of the holes. We consider three cases, $\rho_i=0.18$, $\rho_i=0.2$ and $\rho_i=0.22$ for $i=1,2$. The first components of ${\boldsymbol{x}}_i$ for $i=1,2$ (the holes' centers) are drawn uniformly from $[0.23,0.77]$. The upper circle has the $x_2$ coordinate of its center drawn uniformly from $[0.70,0.76]$, and the lower circle has the $x_2$ coordinate of its center drawn uniformly from $[0.24,0.30]$. The union of these circles forms the non-conductive interior domain.
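For illustration, the following short sketch samples one realization of the two holes and checks whether they merge into a single hole; the names are illustrative and the check simply compares the distance between the centers with the sum of the radii.
\begin{verbatim}
# Sketch: sample the two hole centers and detect a topology change
# (the two circles merging into a single hole).
import numpy as np
rng = np.random.default_rng(0)

def sample_holes():
    upper = np.array([rng.uniform(0.23, 0.77), rng.uniform(0.70, 0.76)])
    lower = np.array([rng.uniform(0.23, 0.77), rng.uniform(0.24, 0.30)])
    return upper, lower

rho = 0.22
upper, lower = sample_holes()
# Two circles of equal radius overlap iff the distance between their
# centers is smaller than 2*rho.
print(np.linalg.norm(upper - lower) < 2 * rho)
\end{verbatim}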
We consider $L=5$ and the number of samples per level is defined according to \Eq{eq:num_samples_ratio} with $\Gamma=7/2$. Each sample is computed by taking $\mathcal{B}=[0,1]^2$ and $\mathcal{T}_0$ a Cartesian mesh of $8 \times 8 $ \acp{FE}, and $\mathcal{T}_{{\ell}+1}$ is generated by dividing each element of $\mathcal{T}_{\ell}$ into four subelements (two per direction). A few samples of the solution are shown in \Fig{fig:two_holes_geometry}, where the topology changes in the domain can be observed. The proposed \ac{EMLMC} method is able to compute realizations automatically using the same procedure followed in the other cases; no special treatment of topology changes is required.
\Fig{fig:two_holes_flux} shows $\widetilde{Q}_L $ for the three different radii. As can be seen, and as expected, when the size of the holes increases, the effective area available for heat conduction decreases, resulting in a reduction of the heat flux through the plate.
\begin{figure}[!b]
\centering
\includegraphics[scale=0.024]{FluxByRadius.pdf}
\caption{Average $Q(u)$ value as from \Eq{eq:qoi_def_heat} for three different radii at different levels of computation.}
\label{fig:two_holes_flux}
\end{figure}
\begin{figure}[!b]
\centering
\includegraphics[width=0.45\textwidth]{heat0001}
\includegraphics[width=0.45\textwidth]{heat0009} \\
\includegraphics[width=0.45\textwidth]{heat0003}
\includegraphics[width=0.45\textwidth]{heat0010} \\
\includegraphics[width=0.45\textwidth]{heat0005}
\includegraphics[width=0.45\textwidth]{heat0019} \\
\caption{Some realizations of the solution of \Eq{eq:model} in a plate with two (stochastic) holes.}
\label{fig:two_holes_geometry}
\end{figure}
\section{Conclusions}\label{sec:conclusions}
In this article we propose the \ac{EMLMC} method, which consists in drawing the \ac{MLMC} samples using a robust embedded method, the \ac{agfem}, for the discretization of the problem. Once again, we emphasize that applying the standard \ac{MLMC} on complex geometries can be hard if body-fitted methods are used for the problem discretization, due to the difficulty of generating the mesh hierarchy, which typically requires human intervention. On top of that, when dealing with random geometries the construction of the mesh hierarchy might even be impossible using body-fitted techniques or may simply fail for a particular sample. Using embedded discretization methods on a mesh hierarchy generated on a bounding box circumvents these problems.
The resulting algorithm is a powerful method to perform \ac{UQ} on complex random domains thanks to the robustness provided by the \ac{agfem} algorithm. The numerical examples presented herein show that the error and variance decays and the complexity are similar to those observed in the standard \ac{MLMC} (with deterministic geometries and body-fitted discretizations) and are in line with theoretical expectations. Therefore, the \ac{EMLMC} shows the same cost reduction with respect to \ac{MC} as the standard \ac{MLMC}, making it an excellent method to perform \ac{UQ} on complex random domains.
\section{Introduction}
\label{section1}
Let $X_1,\ldots,X_n$ be possibly dependent and not necessarily identically distributed (DNID) discrete random variables (RV's) and denote by $X_{1:n},\ldots,X_{n:n}$ the corresponding order statistics. The aim of this paper is to derive expressions for single moments of $X_{r:n}$, $1\leq r\leq n$, and to use these results to obtain moments of lifetimes of
coherent systems consisting of components having discrete operating times.
The study of order statistics from discrete populations and their moments has a long history. As described in the review paper by Nagaraja \cite{N92}, the interest in this topic arose from practical problems like statistical testing for homogeneity, and ranking populations and selecting the best one. Tank and Eryilmaz \cite{TE15} pointed out its utility in statistical quality control and start-up demonstration tests. Eisenberg \cite{E08} motivated computing the mean of the maximum of independent and identically distributed (IID) geometric RV's by a statistical problem in bioinformatics. Another field where discrete order statistics find application is reliability theory. They are used in the analysis of technical systems when component lifetimes have discrete probability distributions. Discrete component lifetimes appear naturally when the system is monitored in discrete time or when it performs a task repetitively and its components have certain probabilities of failure upon each cycle, see, e.g., \cite[p 17]{BP96}. Some other examples where discrete lifetimes are involved are discrete-time shock models; for details see \cite{E16szok} and the references therein.
An important class of systems studied in reliability theory is that of so-called coherent systems, i.e., systems in which every component is relevant and replacing a failed component by a working one cannot cause a working system to fail. There is a vast literature on coherent systems, yet it is restricted mainly to the case when component lifetimes are jointly continuously distributed. A few results are known to be valid also in the most general case of arbitrary distributions of component lifetimes (see for instance \cite{NRS07}, \cite{NSBB08}, \cite{MR14}, \cite{BE15}, \cite{EKT16}), and these can be applied in particular when component lifetimes are discrete. Results holding specifically for the discrete case are sparse and, to the best of our knowledge, concern only a~special subclass of coherent systems, namely $k$-out-of-$n$ systems (\cite{W62}, \cite{TE15}, \cite{D18}, \cite{DD19}); for the definition of $k$-out-of-$n$ systems see the beginning of Section~\ref{section4}. Consequently, there are still many open problems related to reliability properties of coherent systems with discrete component lifetimes. This work is intended to solve one such problem and to provide methods, convenient for numerical computations, of finding means, variances and other moments of times to failure of coherent systems operating in discrete time. For this purpose we first concentrate on single moments of discrete order statistics and derive exact and approximate formulas that allow their computation using software.
The paper is organized as follows. In Section \ref{section2}, we establish expressions for single moments of order statistics from DNID discrete RV's. In the case when these expressions involve infinite summations, we propose a~procedure that can be used for numerical computations and leads to approximate results with an error not exceeding a given value. In particular, we present explicit formulas for obtaining the desired approximations when the marginal distributions of the parent RV's are Poisson or negative binomial. In Section \ref{section3}, we look at samples with the multivariate geometric distribution introduced in \cite{EM73} to describe dependent lifetimes of units exposed to common shocks. We derive closed-form expressions which enable numerical computation of exact values of moments of the corresponding order statistics. Section \ref{section4} is devoted to applications of the results of the two preceding sections in reliability theory, namely to finding moments of the lifetime of a coherent system operating in discrete time. We provide general exact and approximate formulas for these moments and evaluate them for the bridge system with some selected joint distributions of component lifetimes. The results presented in the paper are illustrated in numerous tables and some figures.
\section{Moments of order statistics from DNID discrete RV's}
\label{section2}
In \cite{DD18} expressions for moments of order statistics arising from independent and not necessarily identically distributed (INID) discrete RV's were derived. We begin this section with a generalization of this result to the case when the underlying RV's are DNID.
\begin{theorem}
\label{th1}
Let $X_1,X_2,\ldots,X_n$ be DNID RV's taking values in the set of non-negative integers. Then, for $1\leq r\leq n$ and $p=1,2,\ldots$, we have
\begin{eqnarray}
EX^p_{r:n}&=&\sum\limits_{m=0}^\infty \big((m+1)^p-m^p\big) \sum_{s=0}^{r-1} \sum_{(j_1,\ldots,j_n)\in\mathcal{P}_s} \nonumber \\
&&
P\left(\bigg(\bigcap\limits_{l=1}^s\left\{X_{j_l}\leq m\right\}\bigg)\cap \bigg(\bigcap\limits_{l=s+1}^n\left\{X_{j_l}>m\right\}\bigg)\right), \label{th1t1}
\end{eqnarray}
or, equivalently,
\begin{eqnarray}
EX^p_{r:n}&= &\sum_{m=0}^\infty \big((m+1)^p-m^p\big)
\left\{1-\sum\limits_{s=r}^{n}\sum_{(j_1,\ldots,j_n)\in\mathcal{P}_s} \right.\nonumber \\
&& \left.P\left(\bigg(\bigcap\limits_{l=1}^s\left\{X_{j_l}\leq m\right\}\bigg)\cap \bigg(\bigcap\limits_{l=s+1}^n\left\{X_{j_l}>m\right\}\bigg)\right)\right\}, \label{th1t2}
\end{eqnarray}
where $\mathcal{P}_s$ denotes the subset of permutations $(j_1,j_2,\ldots,j_n)$ of $(1,2,\ldots,n)$ satisfying
$$j_1<j_2<\cdots<j_s \hbox{ and } j_{s+1}<j_{s+2}<\cdots<j_n,$$
and it is understood that $\mathcal{P}_0=\mathcal{P}_n=\{(1,2,\ldots,n)\}$.
\end{theorem}
\begin{proof}
We use the following well known fact: if a discrete RV $X$ has a~support consisting only of non-negative integers, then
\begin{equation}
\label{fact1_formula}
EX^p=p\int\limits_0^\infty x^{p-1}P(X>x)dx=\sum_{m=0}^\infty \big((m+1)^p-m^p\big)P(X>m), \quad p=1,2,\ldots.
\end{equation}
This fact implies that if $X_1,X_2,\ldots,X_n$ are DNID RV's taking values in the set of non-negative integers, then, for $1\leq r\leq n$ and $p=1,2,\ldots$,
\begin{equation}
\label{th1d1}
EX_{r:n}^p=\sum_{m=0}^\infty \big((m+1)^p-m^p\big)P\left(X_{r:n}>m\right).
\end{equation}
Thus we are reduced to finding $P\left(X_{r:n}>m\right)$. For this purpose observe that
$$\{X_{r:n}>m\}=\bigcup_{s=0}^{r-1}A_s \; \hbox{ and } \; \{X_{r:n}\leq m\}=\bigcup_{s=r}^{n}A_s,$$
where the events $A_s$, $s=0,1,\ldots,n$, given by
$$A_s=\left\{\textrm{exactly }s\textrm{ of }X_i\textrm{ are }\leq m\textrm{ and the remaining }n-s\textrm{ of }X_i\textrm{ are }>m \right\}$$
are pairwise disjoint. Therefore
\begin{equation}
\label{th1d2}
P(X_{r:n}>m)=\sum_{s=0}^{r-1}P\left(A_s\right),
\end{equation}
or equivalently,
\begin{equation}
\label{th1d3}
P(X_{r:n}>m)=1-P(X_{r:n}\leq m)=1-\sum_{s=r}^{n}P\left(A_s\right).
\end{equation}
But, for $s=0,1,\ldots,n$,
\begin{equation}
\label{th1d4}
P\left(A_s\right)=\sum_{(j_1,\ldots,j_n)\in\mathcal{P}_s}P\left(\bigg(\bigcap\limits_{l=1}^s\left\{X_{j_l}\leq m\right\}\bigg)\cap \bigg(\bigcap\limits_{l=s+1}^n\left\{X_{j_l}>m\right\}\bigg)\right).
\end{equation}
Combining (\ref{th1d1}) with (\ref{th1d2})-(\ref{th1d4}) yields (\ref{th1t1}) and (\ref{th1t2}). The proof is complete.
\end{proof}
\begin{remark}
\label{rem1}
{\bf (1)} Under the additional assumption that $X_1,X_2,\ldots,X_n$ are independent and $X_i$ has cumulative distribution function $F_i(x)=P(X_i\leq x)$ and survival function $\bar{F}_i(x)=P(X_i> x)$, $i=1,\ldots,n$, formulas (\ref{th1t1}) and (\ref{th1t2}) reduce to
$$EX^p_{r:n}=\sum\limits_{m=0}^\infty \big((m+1)^p-m^p\big)\sum_{s=0}^{r-1}
\sum_{(j_1,\ldots,j_n)\in\mathcal{P}_s} \bigg(\prod_{l=1}^sF_{j_l}(m)\bigg) \bigg(\prod_{l=s+1}^n\bar{F}_{j_l}(m)\bigg)
$$
and
$$EX^p_{r:n}= \sum_{m=0}^\infty \big((m+1)^p-m^p\big)
\left\{1-\sum\limits_{s=r}^{n}\sum_{(j_1,\ldots,j_n)\in\mathcal{P}_s}\bigg(\prod_{l=1}^sF_{j_l}(m)\bigg) \bigg(\prod_{l=s+1}^n\bar{F}_{j_l}(m)\bigg)\right\},
$$
respectively. Thus we recover \citep[Theorem 2.1]{DD18}.
\noindent
{\bf (2)} Let $\mathcal{P}$ denote the set of all permutations $(j_1,j_2,\ldots,j_n)$ of $(1,2,\ldots,n)$. If in Theorem \ref{th1} we additionally require that the random vector $(X_1,X_2,\ldots,X_n)$ is exchangeable, that is, for $(j_1,j_2,\ldots,j_n)\in\mathcal{P}$,
$(X_{j_1},X_{j_2},\ldots,X_{j_n})$ has the same distribution as $(X_1,X_2,\ldots,X_n)$,
then, for all $(j_1,j_2,\ldots,j_n)\in\mathcal{P}$,
\begin{eqnarray*}
P\left(\bigg(\bigcap\limits_{l=1}^s\left\{X_{j_l}\leq m\right\}\bigg)\cap \bigg(\bigcap\limits_{l=s+1}^n\left\{X_{j_l}>m\right\}\bigg)\right) && \\
=P\left(\bigg(\bigcap\limits_{l=1}^s\left\{X_{l}\leq m\right\}\bigg)\cap \bigg(\bigcap\limits_{l=s+1}^n\left\{X_{l}>m\right\}\bigg)\right). &&
\end{eqnarray*}
Since there are exactly ${n \choose s}$ elements of $\mathcal{P}_s$, (\ref{th1t1}) and (\ref{th1t2}) simplify to
\begin{eqnarray*}
EX^p_{r:n} &=& \sum\limits_{m=0}^\infty \big((m+1)^p-m^p\big)\\
&& \times\sum_{s=0}^{r-1} {n\choose s}
P\left(\bigg(\bigcap\limits_{l=1}^s\left\{X_{l}\leq m\right\}\bigg)\cap \bigg(\bigcap\limits_{l=s+1}^n\left\{X_{l}>m\right\}\bigg)\right),
\end{eqnarray*}
and
\begin{eqnarray*}
EX^p_{r:n}&=& \sum_{m=0}^\infty \big((m+1)^p-m^p\big) \\
&& \times \left\{1-\sum\limits_{s=r}^{n}
{n\choose s}
P\left(\bigg(\bigcap\limits_{l=1}^s\left\{X_{l}\leq m\right\}\bigg)\cap \bigg(\bigcap\limits_{l=s+1}^n\left\{X_{l}>m\right\}\bigg)\right)\right\},
\end{eqnarray*}
respectively.
\noindent
{\bf (3)}
If $r<\frac{n+1}{2}$ then it is better to use (\ref{th1t1}) rather than (\ref{th1t2}), because it requires a smaller number of arithmetic operations. If in turn $r>\frac{n+1}{2}$ then (\ref{th1t2}) leads to a shorter computation time than (\ref{th1t1}).
In particular, applying (\ref{th1t1}) with $r=1$ and (\ref{th1t2}) with $r=n$ gives, for $p=1,2,\ldots$,
\begin{equation}
\label{firstOS}
EX^p_{1:n}= \sum\limits_{m=0}^\infty \big((m+1)^p-m^p\big) P\left(\bigcap\limits_{l=1}^n\{X_l>m\}\right),
\end{equation}
\begin{equation}
\label{largestOS}
EX^p_{n:n}=
\sum\limits_{m=0}^\infty \big((m+1)^p-m^p\big)
\left\{1-P\left(\bigcap\limits_{l=1}^n\{X_l\leq m\}\right)\right\}.
\end{equation}
\noindent
{\bf (4)} In \citep[Section 3.2]{E17} a formula for the cumulative distribution function of $X_{r:n}$ has been derived under the assumption that $(X_1,X_2,\ldots,X_n)$ consists of multiple types of RV's such that the RV's of the same type are exchangeable and the RV's of different types are arbitrarily dependent. This formula can be alternatively used to obtain moments of $X_{r:n}$ whenever the above assumption is fulfilled.
\end{remark}
\medskip
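The formula in part {\bf (1)} of Remark \ref{rem1} can be implemented directly when the supports are finite. The following Python sketch, whose function name is illustrative, evaluates $EX^p_{r:n}$ for independent $X_i$ with values in $\{0,1,\ldots,M\}$ given their marginal cumulative distribution functions.
\begin{verbatim}
# Sketch of the independent-case formula of the remark above:
# E X_{r:n}^p for independent X_i on {0,...,M} with marginal CDFs F[i](m).
from itertools import combinations

def moment_os_inid(F, r, p, M):
    n, total = len(F), 0.0
    for m in range(M + 1):
        prob_gt = 0.0                    # accumulates P(X_{r:n} > m)
        for s in range(r):               # s = 0, ..., r-1
            for idx in combinations(range(n), s):
                term = 1.0               # one element of the class P_s
                for j in range(n):
                    Fj = F[j](m)
                    term *= Fj if j in idx else 1.0 - Fj
                prob_gt += term
        total += ((m + 1) ** p - m ** p) * prob_gt
    return total

# Example: maximum of three independent uniform RV's on {0,...,5}.
F = [lambda m: min(m + 1, 6) / 6.0] * 3
print(moment_os_inid(F, r=3, p=1, M=5))  # about 3.958
\end{verbatim}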
Theorem \ref{th1} is very general in the sense that it provides formulas for moments of order statistics from discrete RV's $X_1,X_2,\ldots,X_n$ under the single assumption that the $X_i$'s take values in the set of non-negative integers. Theoretically we can use this theorem for any marginal distributions of the $X_i$'s (with supports containing only non-negative integers) and for any dependence structure between the $X_i$'s. Yet, in practice formulas (\ref{th1t1}) and (\ref{th1t2}) work well only when the supports of $X_1,X_2,\ldots,X_n$ are finite; then the infinite sums $\sum_{m=0}^\infty$ in (\ref{th1t1}) and (\ref{th1t2}) contain only finitely many non-zero terms, which allows their evaluation using software. To illustrate the application of Theorem \ref{th1} in the case of finite supports of $X_1,X_2,\ldots,X_n$ with a numerical example, we consider the random vector $(X_1,\ldots,X_{10})$ with multinomial distribution $Mult(20,(p_1,\ldots,p_{10}))$, i.e., we assume that
$$
P(X_i=n_i, i=1,\ldots,10)=\left\{
\begin{array}{cl}
\frac{20!}{n_1!\ldots n_{10}!}p_1^{n_1}\ldots p_{10}^{n_{10}} & \hbox{ if } \; \sum_{i=1}^{10}n_i=20 \\
0 & \hbox{ otherwise}
\end{array},
\right.
$$
where $p_i>0$, $i=1,\ldots,10$, and $\sum_{i=1}^{10}p_i=1$. In Table \ref{Table1}, for various values of $p_i>0$, $i=1,\ldots,10$, we present means, second raw moments (in brackets) and variances (in double brackets) of the corresponding order statistics $X_{r:10}$, $1\leq r\leq 10$.
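Since the multinomial distribution has a finite support, the exact values in Table \ref{Table1} can be obtained from (\ref{th1t1}); a quick Monte Carlo simulation, sketched below with NumPy, offers an independent cross-check of the tabulated moments.
\begin{verbatim}
# Monte Carlo cross-check (not the exact formula) of Table 1: simulate
# Mult(20, p) samples and average the sorted components.
import numpy as np
rng = np.random.default_rng(0)

p = np.full(10, 0.1)                           # first row of Table 1
samples = rng.multinomial(20, p, size=200_000)
os_ = np.sort(samples, axis=1)                 # column r-1 holds X_{r:10}
print(os_.mean(axis=0))                        # means of X_{1:10},...,X_{10:10}
print((os_.astype(float) ** 2).mean(axis=0))   # second raw moments
\end{verbatim}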
\bigskip
\begin{landscape}
\begin{table}[ht]
\footnotesize
\caption{Mean, second raw moment (in brackets) and variance (in double brackets) of $X_{r:10}$ from $(X_1,X_2,\ldots,X_{10})\sim Mult(20,(p_1,p_2,\ldots,p_{10}))$}
\begin{tabular}{|l||c|c|c|c|c|c|c|c|c|c||}
\hline
$p_i\backslash r$ &1 & 2& 3 & 4 &5 &6 &7 &8 &9 &10\\ \hline\hline
\multirow{3}{*}{$p_1=p_2=\cdots=p_{10}=0.1$} &0.215&0.654&0.991&1.325&1.733&2.011&2.368&2.873&3.421&4.410\\
&(0.215)&(0.662)&(1.120)&(1.987)&(3.203)&(4.148)&(5.847)&(8.477)&(12.048)&(20.292)\\
&((0.169))&((0.234))&((0.139))&((0.233))&((0.199))&((0.104))&((0.240))&((0.226)) &((0.343))&((0.846))\\\hline
$p_1=\cdots=p_5=0.08, $ &0.182&0.600&0.953&1.280&1.691&1.998&2.373&2.892&3.484&4.547\\
\multirow{2}{*}{$p_6=\cdots=p_{10}=0.12$} &(0.182)&(0.606)&(1.057)&(1.862)&(3.077)&(4.112)&(5.877)&(8.604)&(12.519)&(21.625)\\
&((0.149))&((0.245))&((0.149))&((0.224))&((0.218))&((0.120))&((0.246))&((0.240))&((0.379))&((0.950))\\\hline
$p_i=0.045+0.01i$, &0.148 &0.540&0.911&1.236&1.648& 1.985&2.381&2.916&3.551&4.683\\
\multirow{2}{*}{$i=1,\ldots,10$} &(0.148)&(0.544)&(0.993)&(1.742)&(2.950)&(4.078)&(5.924)&(8.758)&(13.024)&(22.973)\\
&((0.126))&((0.252))&((0.164))&((0.215))&((0.234))&((0.138))&((0.253))&((0.254))&((0.413))&((1.043))\\\hline
$p_1=\cdots=p_9=0.05,$ &0.009&0.077&0.284&0.597&0.877&1.118&1.451&1.914&2.672&11.001\\
\multirow{2}{*}{$p_{10}=0.55$} &(0.009)&(0.077)&(0.284)&(0.603)&(0.934)&(1.423)&(2.385)&(4.011)&(7.815)&(125.959)\\
&((0.008))&((0.071))&((0.204))&((0.246))&((0.165))&((0.174))&((0.280))&((0.348))&((0.674))&((4.938))\\\hline
\end{tabular}
\label{Table1}
\end{table}
\end{landscape}
A problem arises when we want to apply Theorem \ref{th1} to compute moments of order statistics from RV's with infinite supports. Then the sums $\sum_{m=0}^\infty$ in (\ref{th1t1}) and (\ref{th1t2}) have infinitely many positive terms and we are not able to add all these terms using software. We propose two solutions to this problem. The first solution is a truncation method presented below and leading to approximate values of $EX^p_{r:n}$. The second one, described in detail in Section~\ref{section3}, concerns a special case when the random vector $(X_1,X_2,\ldots,X_n)$ has a multivariate geometric distribution.
In the truncation method we first fix $d>0$, the desired accuracy of the result. Next we split the infinite series in (\ref{th1d1}) into two parts as follows
\begin{eqnarray*}
EX_{r:n}^p&=&\sum_{m=0}^{M_0} \big((m+1)^p-m^p\big)P\left(X_{r:n}>m\right) \\
&+&\sum_{m=M_0+1}^\infty \big((m+1)^p-m^p\big)P\left(X_{r:n}>m\right),
\end{eqnarray*}
where $M_0$ is so chosen that
\begin{equation}
\label{s2M0}
\sum_{m=M_0+1}^\infty \big((m+1)^p-m^p\big)P\left(X_{r:n}>m\right)\leq d.
\end{equation}
Then by (\ref{th1d2}) - (\ref{th1d4}) both the equivalent approximate formulas
\begin{eqnarray}
EX^p_{r:n}&\approx&\sum\limits_{m=0}^{M_0} \big((m+1)^p-m^p\big) \sum_{s=0}^{r-1} \sum_{(j_1,\ldots,j_n)\in\mathcal{P}_s} \nonumber \\
&&
P\left(\bigg(\bigcap\limits_{l=1}^s\left\{X_{j_l}\leq m\right\}\bigg)\cap \bigg(\bigcap\limits_{l=s+1}^n\left\{X_{j_l}>m\right\}\bigg)\right) \label{af1}
\end{eqnarray}
and
\begin{eqnarray}
EX^p_{r:n}&\approx&\sum\limits_{m=0}^{M_0} \big((m+1)^p-m^p\big)
\left\{1-\sum\limits_{s=r}^{n}\sum_{(j_1,\ldots,j_n)\in\mathcal{P}_s} \right.\nonumber \\
&& \left.P\left(\bigg(\bigcap\limits_{l=1}^s\left\{X_{j_l}\leq m\right\}\bigg)\cap \bigg(\bigcap\limits_{l=s+1}^n\left\{X_{j_l}>m\right\}\bigg)\right)\right\} \label{af2}
\end{eqnarray}
introduce an error not greater than $d$.
What is left is to find $M_0$ satisfying (\ref{s2M0}). First observe that if $EX^p_{r:n}$ is finite then the infinite series in (\ref{th1d1}) converges and consequently for any $d>0$ there exists $M_0$ such that (\ref{s2M0}) holds. This means that the finiteness of $EX^p_{r:n}$ guarantees the existence of $M_0$ satisfying (\ref{s2M0}). Yet, to derive a~convenient formula for $M_0$ we will need a stronger assumption than $EX^p_{r:n}<\infty$, namely condition (\ref{th2a1}) given later on. To see this note that
\begin{eqnarray}
&&\sum_{m=M_0+1}^\infty \big((m+1)^p-m^p\big)P\left(X_{r:n}>m\right) \nonumber\\
&=&\sum_{m=M_0+1}^\infty \big((m+1)^p-m^p\big) \nonumber \\
&& \times\sum_{s=0}^{r-1}
\sum_{(j_1,\ldots,j_n)\in\mathcal{P}_s}P\left(X_{j_1}\leq m, \ldots, X_{j_s}\leq m, X_{j_{s+1}}> m,\ldots,X_{j_n}> m\right) \nonumber\\
&\leq& \sum_{m=M_0+1}^\infty \big((m+1)^p-m^p\big)\sum_{s=0}^{r-1}
\sum_{(j_1,\ldots,j_n)\in\mathcal{P}_s}P\left(X_{j_n}> m\right) \nonumber\\
&\leq& \sum_{m=M_0+1}^\infty \big((m+1)^p-m^p\big) \max_{j=1,\ldots,n}P\left(X_j> m\right)
\sum_{s=0}^{r-1}
\sum_{(j_1,\ldots,j_n)\in\mathcal{P}_s}1 \nonumber\\
&=& \sum_{m=M_0+1}^\infty \big((m+1)^p-m^p\big) \max_{j=1,\ldots,n}P\left(X_j> m\right)
\sum_{s=0}^{r-1}
{n \choose s} \nonumber\\
&=&\sum_{s=0}^{r-1}{n \choose s} \sum_{m=M_0+1}^\infty \big((m+1)^p-m^p\big) \sum_{x=m+1}^{\infty}P(X_{j_{max(m)}}=x),
\label{s2ps1}
\end{eqnarray}
where
\begin{equation}
\label{jmaxm}
j_{max(m)}=\operatorname{argmax}_{j=1,\ldots,n}P\left(X_j> m\right).
\end{equation}
In the exchangeable case all the probabilities $P\left(X_j> m\right)$, $j=1,\ldots,n$, are equal, so we can take $j_{max(m)}=1$, $m=0,1,\ldots$. Hence in this case $j_{max(m)}$ does not depend on $m$. This is so also in some non-exchangeable cases of interest (see the beginning of the proofs of Corollaries \ref{th2c1} and \ref{th2c2}). Assuming that $j_{max(m)}$ does not depend on $m$, denoting
$$j_{max(m)}=j_0, \quad m=0,1,\ldots$$
and interchanging the order of summation in (\ref{s2ps1}) we get
\begin{eqnarray*}
&&\sum_{m=M_0+1}^\infty \big((m+1)^p-m^p\big)P\left(X_{r:n}>m\right) \\
&\leq&\sum_{s=0}^{r-1}{n \choose s}
\sum_{x=M_0+2}^{\infty}P(X_{j_0}=x)
\sum_{m=M_0+1}^{x-1} \big((m+1)^p-m^p\big) \\
&=&\sum_{s=0}^{r-1}{n \choose s}
\sum_{x=M_0+2}^{\infty}P(X_{j_0}=x)
\big(x^p-(M_0+1)^p\big) \\
&\leq&\sum_{s=0}^{r-1}{n \choose s}
\sum_{x=M_0+2}^{\infty}x^pP(X_{j_0}=x).
\end{eqnarray*}
It follows that the condition
\begin{equation}
\label{warM0}
\sum_{x=M_0+2}^{\infty}x^pP(X_{j_0}=x)\leq d \left(\sum_{s=0}^{r-1}{n \choose s}\right)^{-1}
\end{equation}
implies (\ref{s2M0}). Note that for any $d>0$ there exists $M_0$ satisfying (\ref{warM0}) if $EX^p_{j_0}<\infty$, because then $\lim_{M_0\to\infty} \sum_{x=M_0+2}^{\infty}x^pP(X_{j_0}=x)=0$.
Summarizing, we have proved the following result.
\begin{theorem}
\label{th2}
Let the assumptions of Theorem \ref{th1} hold. Moreover, suppose that $j_{max(m)}$ defined in (\ref{jmaxm}) does not depend on $m$ and denote it briefly by $j_0$. If, for fixed $p\in\{1,2,\ldots\}$,
\begin{equation}
\label{th2a1}
EX^p_{j_0}<\infty,
\end{equation}
then, for $1\leq r\leq n$, $EX^p_{r:n}$ is finite and the approximate formulas (\ref{af1}) and (\ref{af2}) with $M_0$ satisfying (\ref{warM0}) introduce an error not greater than $d$.
\end{theorem}
Condition (\ref{warM0}) determining $M_0$ can be rewritten in an explicit form for some specific univariate marginal distributions of $(X_1,\ldots,X_n)$. Below we consider two cases: the first one when these marginal distributions are Poisson, and the second one when these are negative binomial. By $F_X^{\leftarrow}$ we denote the quantile function of the RV $X$, i.e.,
$$
F_X^{\leftarrow}(q)=\min\{x\in\mathbb{R} : P(X\leq x)\geq q\}, \quad q\in(0,1).
$$
\begin{corollary}
\label{th2c1}
Let the marginal distributions of the random vector \linebreak $(X_1,\ldots,X_n)$ be Poisson distributions $\mathrm{Pois}(\lambda_j)$ with $\lambda_j>0$, $j=1,\ldots,n$, i.e.,
$$P(X_j=x)=\operatorname{e}^{-\lambda_j}\frac{\lambda_j^x}{x!}, \quad x=0,1,\ldots, \; j=1,\ldots, n.$$
Set $ j_0=\operatorname{argmax}_{j=1,\ldots,n}\lambda_j$.
Then the approximate formulas (\ref{af1}) and (\ref{af2}) with
$$
M_0=\left\{
\begin{array}{lcl}
p-2 & \hbox{if} & d_{Pois}\leq 0 \\
F_{X_{j_0}}^{\leftarrow}(d_{Pois})+p-1 & \hbox{if} & d_{Pois}\in(0,1)
\end{array}
\right.,
$$
where
$$
d_{Pois}=1-d\, 2^{-p(p-1)/2}\lambda_{j_0}^{-p}\left(\sum_{s=0}^{r-1}{n \choose s}\right)^{-1},
$$
introduce an error not exceeding the fixed value $d>0$.
\end{corollary}
\begin{proof}
If $X_j\sim \mathrm{Pois}(\lambda_j)$, then
$$
P(X_j>m)=1+\left(-\operatorname{e}^{-\lambda_j}\right) \sum_{x=0}^m\frac{\lambda_j^x}{x!}, \quad m=0,1,\ldots,
$$
which shows that, for any fixed $m\in\{0,1,\ldots\}$, $P(X_j>m)$ is an increasing function of $\lambda_j>0$. Consequently, for $m=0,1,\ldots$,
$$
\max_{j=1,\ldots,n}P(X_j>m)=P(X_{j_0}>m), \quad \hbox{ where } j_0=\operatorname{argmax}_{j=1,\ldots,n}\lambda_j.
$$
Hence $j_{max(m)}$ defined in (\ref{jmaxm}) does not depend on $m$ and we have $j_{max(m)}=j_0$. Therefore the assumptions of Theorem \ref{th2} are satisfied. Before we apply this theorem we show that if $M_0\geq p-2$, then the condition
\begin{equation}
\label{M0Pois}
P(X_{j_0}>M_0+1-p)\leq d\, 2^{-p(p-1)/2}\lambda_{j_0}^{-p}\left(\sum_{s=0}^{r-1}{n \choose s}\right)^{-1},
\end{equation}
implies (\ref{warM0}). To do this, note that, for any $y=2,3,\ldots$ and $q=0,1,\ldots$,
\begin{equation}
\label{nierpom}
y^q=(y-1+1)^q=\sum_{i=0}^q{q \choose i}(y-1)^i\leq \sum_{i=0}^q{q \choose i}(y-1)^q=(y-1)^q2^q.
\end{equation}
Consequently,
$$
\begin{array}{lcll}
&& \hspace{-7mm}\displaystyle \sum_{x=M_0+2}^\infty x^pP(X_{j_0}=x) = \sum_{x=M_0+2}^\infty\operatorname{e}^{-\lambda_{j_0}} \lambda_{j_0}^x\frac{x^{p-1}}{(x-1)!}&\\
&\leq & \displaystyle 2^{p-1}\sum_{x=M_0+2}^\infty\operatorname{e}^{-\lambda_{j_0}} \lambda_{j_0}^x\frac{(x-1)^{p-2}}{(x-2)!} & \hbox{ if } p\geq 2\\
&\leq & \displaystyle 2^{p-1}2^{p-2}\sum_{x=M_0+2}^\infty\operatorname{e}^{-\lambda_{j_0}} \lambda_{j_0}^x\frac{(x-2)^{p-3}}{(x-3)!} & \hbox{ if } p\geq 3 \hbox{ and } M_0\geq 1\\
&\leq & \displaystyle \cdots &\\
&\leq & \displaystyle 2^{p\!-\!1}2^{p\!-\!2}\cdots 2^{p\!-\!(p\!-\!1)}\sum_{x=M_0+2}^\infty\operatorname{e}^{-\lambda_{j_0}} \lambda_{j_0}^x\frac{(x\!-\!(p\!-\!1))^{p-p}}{(x-p)!} & \hbox{ if } M_0\geq p-2\\
&=& \displaystyle 2^{p(p-1)/2} \lambda_{j_0}^p \sum_{x=M_0+2}^\infty\operatorname{e}^{-\lambda_{j_0}} \frac{\lambda_{j_0}^{x-p}}{(x-p)!} &\\
&=& \displaystyle 2^{p(p-1)/2} \lambda_{j_0}^pP(X_{j_0}>M_0+1-p).&
\end{array}
$$
Thus
indeed if $M_0\geq p-2$ and (\ref{M0Pois}) holds then (\ref{warM0}) is satisfied. But if $d_{Pois}\in(0,1)$ then the smallest $M_0$ for which (\ref{M0Pois}) is true equals $F_{X_{j_0}}^{\leftarrow}(d_{Pois})+p-1$. Now
application of Theorem \ref{th2} finishes the proof.
\end{proof}
\begin{corollary}
\label{th2c2}
Let the marginal distributions of the random vector \linebreak $(X_1,\ldots,X_n)$ be negative binomial distributions $\mathrm{NBin}(R,p_j)$ with $R>0$ and $p_j\in(0,1)$, $j=1,\ldots,n$, i.e.,
$$P(X_j=x)=\frac{\Gamma(x+R)}{x!\Gamma(R)}(1-p_j)^xp_j^R, \quad x=0,1,\ldots, \; j=1,\ldots, n.$$
Set $j_0=\operatorname{argmin}_{j=1,\ldots,n}p_j$ and
\begin{equation}
\label{defXfalka}
\tilde{X}\sim \mathrm{NBin}(R+p,p_{j_0}).
\end{equation}
Then the approximate formulas (\ref{af1}) and (\ref{af2}) with
$$
M_0=\left\{
\begin{array}{lcl}
p-2 & \hbox{if} & d_{NBin}\leq 0 \\
F_{\tilde{X}}^{\leftarrow}(d_{NBin})+p-1 & \hbox{if} & d_{NBin}\in(0,1)
\end{array}
\right.,
$$
where
$$
d_{NBin}=1-\frac{d}{2^{p(p-1)/2}\prod_{i=0}^{p-1}(R+i)\sum_{s=0}^{r-1}{n \choose s}}\left(\frac{p_{j_0}}{1-p_{j_0}}\right)^p,
$$
introduce an error not exceeding the fixed value $d>0$.
\end{corollary}
\begin{proof}
The proof is similar to that of Corollary \ref{th2c1}. First we observe that, for any fixed $R>0$ and $m\in\{0,1,\ldots\}$, $P(X_j>m)$ is a decreasing function of $p_j\in(0,1)$, because
$$
P(X_j>m)=1+\left(-p_j^R\right) \sum_{x=0}^m\frac{\Gamma(x+R)}{x!\Gamma(R)}(1-p_j)^x, \quad m=0,1,\ldots.
$$
Consequently, for $m=0,1,\ldots$,
$$
\max_{j=1,\ldots,n}P(X_j>m)=P(X_{j_0}>m), \quad \hbox{ where } j_0=\operatorname{argmin}_{j=1,\ldots,n}p_j,
$$
which shows that $j_{max(m)}$ defined in (\ref{jmaxm}) does not depend on $m$ and $j_{max(m)}=j_0$.
Now using (\ref{nierpom}) we get
$$
\begin{array}{lcll}
&&\hspace{-7mm} \displaystyle \sum_{x=M_0+2}^\infty x^pP(X_{j_0}=x)=\sum_{x=M_0+2}^\infty x^{p-1}\frac{\Gamma(x+R)}{(x-1)!\Gamma(R)}(1-p_{j_0})^xp_{j_0}^R & \\
&\leq & \displaystyle 2^{p-1}\sum_{x=M_0+2}^\infty (x-1)^{p-2}\frac{\Gamma(x+R)}{(x-2)!\Gamma(R)}(1-p_{j_0})^xp_{j_0}^R & \hbox{ if } p\geq 2\\
&\leq & \displaystyle \cdots &\\
&\leq & \displaystyle 2^{p(p-1)/2} \sum_{x=M_0+2}^\infty
\frac{\Gamma(x+R)}{(x-p)!\Gamma(R)}(1-p_{j_0})^xp_{j_0}^R
& \hbox{ if } M_0\geq p-2 \\
&=& \displaystyle 2^{p(p-1)/2}R(R+1)\cdots(R+p-1) \left(\frac{1-p_{j_0}}{p_{j_0}}\right)^p & \\
&& \qquad \times P(\tilde{X}>M_0+1-p),&
\end{array}
$$
where the RV $\tilde{X}$ is defined in (\ref{defXfalka}). It follows that if $M_0\geq p-2$ then the condition
\begin{equation*}
P(\tilde{X}>M_0+1-p)\leq \frac{d}{2^{p(p-1)/2}R(R+1)\cdots(R+p-1)\sum_{s=0}^{r-1}{n \choose s}}\left(\frac{p_{j_0}}{1-p_{j_0}}\right)^p,
\end{equation*}
implies (\ref{warM0}). Theorem \ref{th2} now yields the desired conclusion.
\end{proof}
The quantile functions of RV's with Poisson and negative binomial distributions are implemented in statistical packages. Therefore Corollaries \ref{th2c1} and \ref{th2c2} enable the numerical computation of approximate values of raw moments of order statistics from the random vector $(X_1,X_2,\ldots,X_n)$ with univariate marginal Poisson or negative binomial distributions. Moreover, Corollaries \ref{th2c1} and \ref{th2c2} can be applied in the case of any dependence structure between $X_1,\ldots,X_n$. To illustrate the computational details we fix $n=10$, the level of accuracy $d=0.0005$, and assume that $X_1,\ldots,X_{10}$ are independent. Selecting some values of $\lambda_i$, $i=1,\ldots,10$, in Tables \ref{Table2} and \ref{Table3} we present means $EX_{r:10}$ and second raw moments $EX^2_{r:10}$, respectively, of all order statistics from $(X_1,\ldots,X_{10})$, where $X_i\sim \mathrm{ Pois}(\lambda_i)$, $i=1,\ldots,10$. Furthermore, in each table, in brackets underneath the moments, we provide the values of $M_0$, i.e., the number of terms in the sum sufficient to obtain the desired accuracy. Corresponding results for $(X_1,\ldots,X_{10})$ with $X_i$, $i=1,\ldots,10$, having the negative binomial distribution $\mathrm{NBin}(R,p_{i})$ are given in Tables \ref{Table4} and \ref{Table5} for $R=2$, $R=5$ and selected values of $p_i$, $i=1,\ldots,10$.
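For instance, in the independent Poisson case the computations behind Tables \ref{Table2} and \ref{Table3} can be organized as in the following SciPy sketch, which combines the choice of $M_0$ from Corollary \ref{th2c1} with the product formula from part {\bf (1)} of Remark \ref{rem1}; the function name is illustrative.
\begin{verbatim}
# Sketch for independent X_i ~ Pois(lambda_i): approximate E X_{r:n}^p
# with the truncation level M_0 chosen as in the Poisson corollary above.
from itertools import combinations
from math import comb
from scipy.stats import poisson

def moment_os_poisson(lams, r, p, d=5e-4):
    n, lam0 = len(lams), max(lams)
    S = sum(comb(n, s) for s in range(r))
    d_pois = 1.0 - d * 2.0 ** (-p * (p - 1) / 2) / (lam0 ** p * S)
    M0 = p - 2 if d_pois <= 0 else int(poisson.ppf(d_pois, lam0)) + p - 1
    total = 0.0
    for m in range(M0 + 1):
        F = [poisson.cdf(m, lam) for lam in lams]
        prob_gt = 0.0                       # P(X_{r:n} > m)
        for s in range(r):
            for idx in combinations(range(n), s):
                term = 1.0
                for j in range(n):
                    term *= F[j] if j in idx else 1.0 - F[j]
                prob_gt += term
        total += ((m + 1) ** p - m ** p) * prob_gt
    return total

# First row of Table 2: ten independent Pois(1) components.
print(moment_os_poisson([1.0] * 10, r=1, p=1))    # about 0.010 (M_0 = 6)
print(moment_os_poisson([1.0] * 10, r=10, p=1))   # about 2.738 (M_0 = 9)
\end{verbatim}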
\bigskip
\begin{landscape}
\begin{table}[ht]
\footnotesize
\begin{center}
\caption{Mean of $X_{r:10}$ from $(X_1,\ldots,X_{10})$, where $X_i\sim$Pois($\lambda_i$) and $X_{1},\ldots,X_{10}$ are independent}
\begin{tabular}{|l||c|c|c|c|c|c|c|c|c|c||}
\hline
$\lambda_i\backslash r$ &1 & 2& 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10\\ \hline\hline
$\lambda_i=1, i=1,\ldots,10$ & 0.010 & 0.070 & 0.225& 0.471& 0.737& 0.979& 1.230 & 1.551& 1.990& 2.738\\
& (6) & (7) & (8) & (8) & (8) & (9) & (9) & (9) & (9) & (9)\\ \hline
$\lambda_i=1, i=1,\ldots,5$ & 0.081 & 0.343& 0.722&1.117 &1.557 & 2.116&2.864 &3.851 &5.155 &7.193 \\
$\lambda_i=i-4, i=6,\ldots,10$& (17) & (19) & (20) & (21) & (22) & (22) & (23) & (23) &(23) & (23)\\ \hline
$\lambda_i=1, i=1,\ldots,5$ & 0.102 &0.414 &0.860 &1.389 &2.220 &6.497 & 8.367 &9.879 &11.483 &13.788 \\
$\lambda_i=10, i=6,\ldots,10$& (24) & (27) &(28) &(29) &(30) &(31) & (31) &(31) & (31) & (31)\\ \hline
$\lambda_i=i, i=1,\ldots,10$ &0.620 & 1.598&2.587 &3.585 &4.601 &5.653 & 6.774 &8.030 &9.578 & 11.974\\
&(24) &(27) &(28) &(29) & (30) &(31) & (31) & (31) &(31) & (31)\\ \hline
$\lambda_i=2i+1, i=1,\ldots,4$ &2.482 &4.354 &5.806 &6.969 &7.980 &8.934 &9.901 &10.963 &12.272 & 14.339\\
$\lambda_i=10, i=5,\ldots,10$& (24) &(27) &(28) &(29) &(30) &(31) &(31) &(31) &(31) & (31)\\ \hline
$\lambda_1=\lambda_2=\lambda_3=10$& 7.375&9.844 &12.339 &16.587 &19.696 &22.727 &26.539 &30.155 &34.638 &50.099 \\
$\lambda_4=\lambda_5=\lambda_6=20$& (83)&(87) &(90) &(92) &(93) &(94) &(94) & (94) & (94) & (94)\\
$\lambda_7=\lambda_8=\lambda_9=30$& & & & & & & & & & \\
$\lambda_{10}=50$& & & & & & & & & & \\ \hline
\end{tabular}
\label{Table2}
\end{center}
\end{table}
\begin{table}[ht]
\footnotesize
\begin{center}
\caption{Second raw moments of $X_{r:10}$ from $(X_1,\ldots,X_{10})$, where $X_i\sim$Pois($\lambda_i$) and $X_{1},\ldots,X_{10}$ are independent}
\begin{tabular}{|l||c|c|c|c|c|c|c|c|c|c||}
\hline
$\lambda_i\backslash r$ &1 & 2& 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10\\ \hline\hline
$\lambda_i=1, i=1,\ldots,10$& 0.010&0.070 &0.227 &0.480 &0.789 &1.173 &1.770 &2.751 &4.412 &8.319 \\
& (7) &(8) & (9) & (9) &(10) &(10) & (10) &(10) &(10) & (10)\\ \hline
$\lambda_i=1, i=1,\ldots,5$ & 0.082 &0.360 &0.839 &1.585 &2.848 &5.042 &9.030 &16.084 &28.522 &55.608 \\
$\lambda_i=i-4, i=6,\ldots,10$& (20) &(22) &(23) &(24) & (25) &(25) & (25) &(25) &(25) & (25)\\\hline
$\lambda_i=1, i=1,\ldots,5$ & 0.105&0.453 &1.116 &2.419 &5.809 &45.464 &72.835 &100.538 &135.397 &195.864 \\
$\lambda_i=10, i=6,\ldots,10$& (28)&(31) & (32)& (33) & (34) &(34) & (34) & (34) & (34)& (34)\\\hline
$\lambda_i=i, i=1,\ldots,10$ & 0.870 &3.318 &7.636 &13.961 &22.455 &33.449 &47.660 &66.701 &94.812 & 149.138\\
&(28) &(31) & (32) &(33) & (34) & (34) & (34) & (34) & (34)& (34)\\ \hline
$\lambda_i=2i+1, i=1,\ldots,4$ &7.922 &20.733 &35.407 &50.229 &65.376 &81.593 &99.979 &122.459 & 153.556& 210.746\\
$\lambda_i=10, i=5,\ldots,10$ &(28) &(31) & (32) & (33) & (34) & (34) & (34) & (34)& (34) & (34)\\ \hline
$\lambda_1=\lambda_2=\lambda_3=10$&58.889 &101.184 &157.427 &282.417 &395.389 &524.549 &714.111 &921.182 &1217.132 & 2557.719\\
$\lambda_4=\lambda_5=\lambda_6=20$&(92) & (96) & (98) & (100) & (101) & (102) & (102) & (102) & (102) & (102)\\
$\lambda_7=\lambda_8=\lambda_9=30$& & & & & & & & & & \\
$\lambda_{10}=50$& & & & & & & & & & \\ \hline
\end{tabular}
\label{Table3}
\end{center}
\end{table}
\end{landscape}
\bigskip
\begin{landscape}
\begin{table}[ht]
\begin{center}
\footnotesize
\caption{Mean of $X_{r:10}$ from $(X_1,\ldots,X_{10})$, where $X_i\sim$ NBin($R$,$p_i$) and $X_{1},\ldots,X_{10}$ are independent}
\begin{tabular}{|c|l||c|c|c|c|c|c|c|c|c|c||}
\hline
$R$ & $p_i$ &1 & 2& 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10\\ \hline\hline
\multirow{8}{*}{2} &$p_i=0.1i-0.05$ & 0.003&0.049&0.248&0.665&1.268&2.129&3.500&6.017&12.024&39.429\\
&$i=1,\ldots,10$& (271)&(321)&(354)&(378)&(394)&(404)&(410)&(413)&(414)&(414)\\\cline{2-12}
&$p_i=0.25$ &0.768 &1.708&2.617&3.534&4.512&5.603&6.883&8.497&10.788&15.090\\
&$i=1,\ldots,10$& (41)&(50) &(56) &(60) &(63) &(64) &(66) &(66) &(66) &(66)\\\cline{2-12}
&$p_i=0.25, i=1,\ldots,8$, &0.409&1.112&1.888&2.716&3.633&4.685&5.946&7.556&9.859&14.194\\
& $p_9=p_{10}=0.5$ &(41) &(50) &(56) &(60) &(63) &(64) &(66) &(66) &(66)&(66)\\\cline{2-12}
&$p_i=0.25, i=1,\ldots,8$, &0.121&0.562&1.353&2.279&3.304&4.455&5.797&7.469&9.816&14.179\\
& $p_9=p_{10}=0,75$ &(41) &(50) &(56) &(60) &(63) &(64) &(66) &(66)& (66)&(66)\\\hline
\multirow{8}{*}{5} &$p_i=0.1i-0.05$ &0.080&0.519&1.302&2.350&3.791&5.889&9.210&15.240&29.435&95.509\\
&$i=1,\ldots,10$&(415)&(471)&(509)&(535)&(553)&(564)&(570)&(573) &(574) &(575)\\\cline{2-12}
&$p_i=0.25$ &5.295&7.732&9.639&11.387&13.123&14.953&16.998&19.454&22.774&28.644\\
&$i=1,\ldots,10$&(64) &(74) &(81) &(86) &(89) &(91) &(92) &(93) &(93) &(93)\\\cline{2-12}
&$p_i=0.25, i=1,\ldots,8$, &2.843&4.983&7.084&9.072&11.031&13.057&15.276&17.892&21.363&27.398\\
&$p_9=p_{10}=0.5$ &(64)&(74) &(81) &(86) &(89) &(91) &(92) &(93) &(93)&(93)\\\cline{2-12}
&$p_i=0.25, i=1,\ldots,8$, &0.852&2.296&5.987&8.589&10.806&12.952&15.228&17.872&21.356&27.396\\
&$p_9=p_{10}=0.75$ &(64) &(74) &(81) &(86) &(89) &(91) &(92) &(93) &(93)&(93)\\\hline
\end{tabular}
\label{Table4}
\end{center}
\end{table}
\end{landscape}
\bigskip
\begin{landscape}
\begin{table}[ht]
\footnotesize
\begin{center}
\caption{Second raw moments of $X_{r:10}$ from $(X_1,\ldots,X_{10})$, where $X_i\sim$ NBin($R$,$p_i$) and $X_{1},\ldots,X_{10}$ are independent}
\begin{tabular}{|c|l||c|c|c|c|c|c|c|c|c|c||}
\hline
$R$ & $p_i$ &1 & 2& 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10\\ \hline\hline
\multirow{8}{*}{2} &$p_i=0.1i-0.05$ &0.003&0.050&0.271&0.874&2.327&5.918&15.425&45.583&189.511&2254.318\\
&$i=1,\ldots,10$&(369)&(418)&(451)&(474)&(490)&(500)&(506) &(509) &(510)&(510)\\\cline{2-12}
&$p_i=0.25$ &1.407 &4.245&8.571&14.671&23.111&34.924&52.086&78.900&127.351&254.734\\
&$i=1,\ldots,10$&(51)&(60) &(66) &(70) &(73) &(75) &(76) &(77) &(77) &(77)\\\cline{2-12}
&$p_i=0.25, i=1,\ldots,8$, &0.579&2.068&4.737&8.976&15.375&24.943&39.576&63.396&107.904&228.447\\
& $p_9=p_{10}=0.5$ &(51) &(60) &(66) &(70) &(73) &(75) &(76) &(77) &(77) &(77)\\\cline{2-12}
&$p_i=0.25, i=1,\ldots,8$, &0.134&0.772&2.808&6.782&13.235&23.067&38.084&62.335&107.267&228.182\\
&$p_9=p_{10}=0.75$ &(51) &(60) &(66) &(70) &(73) &(75) &(76) &(77) &(77) &(77)\\\hline
\multirow{8}{*}{5} &$p_i=0.1i-0.05$ &0.084&0.667&2.456&6.867&16.860&39.644&96.143&264.684&1011.931&10968.740\\
&$i=1,\ldots,10$&(541)&(595)&(631)&(656)&(673)&(684) &(691) &(693) &(694) &(695)\\\cline{2-12}
&$p_i=0.25$ &33.944&65.736&99.251&136.645&180.123&232.831&300.169&393.140&540.481&867.679\\
&$i=1,\ldots,10$&(79) &(89) &(96) &(100) &(103) &(105) &(107) &(107) &(107) &(107)\\\cline{2-12}
&$p_i=0.25, i=1,\ldots,8$, &11.095&28.637&55.155&88.494&129.197&179.639&244.761&335.147&478.848&799.026\\
&$p_9=p_{10}=0.5$ &(79) &(89) &(96) &(100) &(103) &(105) &(107) &(107) &(107) &(107)\\\cline{2-12}
&$p_i=0.25, i=1,\ldots,8$, &1.562&7.117&42.120&81.087&125.032&177.352&243.566&334.577&478.621&798.965\\
&$p_9=p_{10}=0.75$ &(79) &(89) &(96) &(100) &(103) &(105) &(107) &(107) &(107)&(107)\\\hline
\end{tabular}
\label{Table5}
\end{center}
\end{table}
\end{landscape}
\bigskip
\section{Multivariate geometric case}
\label{section3}
In this section we will derive formulas convenient for numerical computation of exact values of moments of order statistics from random vectors with multivariate geometric (MVG) distribution.
Before we give the definition of MVG distribution let us recall that the possibly extended RV
$X$ is said to be geometrically distributed with parameter $\pi\in[0,1]$ (denoted by $X\sim ge(\pi)$) if
$$P(X=k)=\pi(1-\pi)^{k}, \quad k=0,1,2,\ldots,$$
or equivalently if
$$P(X>k)=(1-\pi)^{k+1}, \quad k=-1,0,1,\ldots.$$
In particular, if $\pi=1$ then $P(X=0)=1$. If $\pi=0$ then $P(X>k)=1$ for $k=-1,0,1,\ldots$, which means that $X$ is an extended RV with defective distribution and $P(X=\infty)=1$.
\begin{definition}
\label{MVG}
The random vector $(X_1,X_2,\ldots,X_n)$ has the MVG distribution with parameters $\theta_I\in[0,1]$, $I\in J$, if
\begin{equation*}
X_i=\min\{M_I, I\ni i\},\quad i=1,2,\ldots,n,
\end{equation*}
where
\begin{itemize}
\item[(a)] $J$ is the class of all nonempty subsets of the set $\{1,2,\ldots,n\}$;
\item[(b)] the RV's $M_I$, $I\in J$, are independent;
\item[(c)] for $I\in J$, the RV $M_I$ is geometrically distributed with parameter $1-\theta_I$;
\item[(d)] for every $i=1,2,\ldots,n$, there exists $I$ such that $i\in I$ and $\theta_I\in[0,1)$.
\end{itemize}
\end{definition}
The above definition was introduced by Esary and Marshall \cite{EM73} to describe the random lifetimes of $n$ unrepairable units. These units, denoted by $U_1, U_2,\ldots, U_n$, are exposed to various shocks that happen at discrete times (cycles) and may cause unit failures. More precisely, during each cycle
\begin{itemize}
\item if the unit $U_i$ is still operating, it is exposed to a shock which it survives with probability $\theta_{\{i\}}$ and does not survive with probability $1-\theta_{\{i\}}$, $1\leq i\leq n$;
\item if any of the units $U_{i_1}, U_{i_2}$ is still operating, then these two units are exposed to a~common shock which all the working units among $\{U_{i_1}, U_{i_2}\}$ survive with probability $\theta_{\{i_1,i_2\}}$ and do not survive with probability $1-\theta_{\{i_1,i_2\}}$, $1\leq i_1<i_2\leq n$;
\item if any of the units $U_{i_1}, U_{i_2}, U_{i_3}$ is still operating, then these three units are exposed to a~common shock which all the working units among $\{U_{i_1}, U_{i_2}, U_{i_3}\}$ survive with probability $\theta_{\{i_1,i_2,i_3\}}$ and do not survive with probability $1-\theta_{\{i_1,i_2,i_3\}}$, $1\leq i_1<i_2<i_3\leq n$;
$$\vdots$$
\item if any of the $n$ units $U_1, U_2,\ldots, U_n$ is still operating, then all the units are exposed to a~common shock which the working units survive with probability $\theta_{\{1,2,\ldots,n\}}$ and do not survive with probability $1-\theta_{\{1,2,\ldots,n\}}$.
\end{itemize}
Moreover, it is assumed that before the first cycle all the units are in the working state and that different shocks affect the operation of the units independently. If $X_i$ denotes the number of cycles which the unit $U_i$ survived, $1\leq i\leq n$, then the random vector $(X_1,X_2,\ldots,X_n)$ has the MVG distribution with parameters $\theta_I$, $I\in J$.
Note that Condition (d) of Definition \ref{MVG} ensures that each of the $n$ units will finally break down with probability one. In other words, each RV among $X_1,X_2,\ldots,X_n$ has a~non-defective distribution.
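To make the shock-model construction concrete, the following Python sketch (an illustration of ours, not taken from \cite{EM73}; the function and variable names are our own) simulates one realization of $(X_1,X_2,\ldots,X_n)$. The dictionary \texttt{theta} maps a subset $I$ to $\theta_I$; subsets omitted from it are treated as having $\theta_I=1$.
\begin{verbatim}
# Illustrative sketch (ours): simulate the shock model of Definition MVG.
# `theta` maps a frozenset I to theta_I; missing subsets act as theta_I = 1.
import random

def sample_mvg(n, theta):
    alive = set(range(1, n + 1))
    survived = {i: 0 for i in alive}      # cycles survived by unit i
    while alive:
        failed = set()
        for I, t in theta.items():
            # the shock tied to I occurs while some unit of I still operates
            if alive & set(I) and random.random() > t:
                failed |= alive & set(I)
        for i in alive - failed:
            survived[i] += 1
        alive -= failed
    return [survived[i] for i in range(1, n + 1)]

theta = {frozenset({1}): 0.9, frozenset({2}): 0.9, frozenset({3}): 0.8,
         frozenset({1, 2, 3}): 0.95}
print(sample_mvg(3, theta))
\end{verbatim}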
Esary and Marshall \cite{EM73} gave some properties of the MVG distribution while in \cite{DD19} and \cite{D18} $k$-out-of-$n$ systems with components having some special MVG lifetimes were studied.
Below we present further properties of the MVG distribution. These will be needed later in this section to establish closed-form formulas for factorial moments of order statistics from random vectors with the MVG distribution. Furthermore, these will prove useful in Section \ref{section4} to examine times to failure of coherent systems consisting of elements with MVG lifetimes.
Here, and subsequently, $E\left(Y\right)_p$ denotes the $p$th factorial moment of a RV $Y$, i.e., for $p=1,2,\ldots$,
$$E\left(Y\right)_p=E\big(Y(Y-1)\cdots(Y-p+1)\big)$$
and, for $S=\{l_1,\ldots,l_m\}\subset\{1,\ldots,n\}$, $1\leq m\leq n$,
$$X_{1:S}=\min\{X_{l_1},\ldots,X_{l_m}\}.$$
\begin{theorem}\label{MVG_properties}
Let the random vector $(X_1,X_2,\ldots,X_n)$ have the MVG distribution with parameters $\theta_I\in [0,1]$, $\emptyset\neq I\subset \{1,2,\ldots,n\}$.
\begin{itemize}
\item[(i)] For $k_1,k_2,\ldots,k_n\in\{-1,0,1,\ldots\}$,
\begin{eqnarray}
\hspace{-5mm}P(X_1>k_1,X_2>k_2,\ldots,X_n>k_n)&=&\prod_{\emptyset\neq I\subset \{1,2,\ldots,n\}}\theta_{I}^{\max\{k_i, i\in I\}+1} \label{sfMVG} \\
&=& \prod\limits_{i=1}^n\prod\limits_{1\leq j_1<\ldots<j_i\leq n}\theta_{\{j_1,\ldots,j_i\}}^{\max\{k_{j_1},\ldots,k_{j_i}\}+1}. \nonumber
\end{eqnarray}
\item[(ii)] $X_{1:n}\sim ge\left(1-\prod\limits_{\emptyset\neq I\subset \{1,2,\ldots,n\}}\theta_I\right).$
\item[(iii)]If $1\leq s\leq n$, $1\leq l_1<l_2<\cdots<l_s\leq n$, and $\tilde{J}$ denotes the family of all subsets (along with the empty set) of the set $\{1,2,\ldots,n\}\setminus\{l_1,\ldots,l_s\}$, then the random vector $(X_{l_1},X_{l_2},\ldots,X_{l_s})$ has the MVG distribution with parameters
\begin{equation*}
\hat{\theta}_I=\prod\limits_{\tilde{I}\in\tilde{J}}\theta_{I\cup\tilde{I}}, \quad \emptyset\neq I\subset\{l_1,l_2,\ldots,l_s\}.
\end{equation*}
\item[(iv)] Under the assumptions of Part (iii) we have
\begin{equation}
\label{MinSubset}
X_{1:\{l_1,l_2,\ldots,l_s\}} \sim ge\left(
1-\prod_{I\subset\{1,\ldots,n\}, I\cap\{l_1,\ldots,l_s\}\neq \emptyset} \theta_I \right)
\end{equation}
and, in consequence, for $p=1,2,\ldots$,
\begin{equation}
\label{factorialMin}
E\left(X_{1:\{l_1,l_2,\ldots,l_s\}}\right)_p
=p!\left(\frac{\theta}{1-\theta}\right)^{p},
\end{equation}
where $ \theta=\prod_{I\subset\{1,\ldots,n\}, I\cap\{l_1,\ldots,l_s\}\neq \emptyset}
\theta_I$.
\end{itemize}
\end{theorem}
\begin{proof}
(i) Let $J$ be as in Definition \ref{MVG}. Then, for $k_{1}, \ldots,k_{n}\in\{-1,0,1,\ldots\}$,
\begin{eqnarray*}
P(X_1>k_1,\ldots,X_n>k_n)&=&P\left(\min\{M_I: I\ni i\}>k_i, i=1,2,\ldots,n\right) \\
&=&P\left(M_I>\max\{k_i,i\in I\} \textrm{ for every } I\in J\right) \\
&=&\prod\limits_{I\in J}P\left(M_I>\max\{k_i,i\in I\}\right) \\
&=&\prod\limits_{I\in J}\theta_I^{\max\{k_i,i\in I\}+1} \\
&=&\prod\limits_{i=1}^n\prod\limits_{1\leq j_1<\ldots<j_i\leq n}\theta_{\{j_1,\ldots,j_i\}}^{\max\{k_{j_1},\ldots,k_{j_i}\}+1}
\end{eqnarray*}
as required.
\medskip\noindent
(ii) Since $P(X_{1:n}>k)=P(X_1>k,X_2>k,\ldots,X_n>k)$,
(\ref{sfMVG}) immediately implies
\begin{eqnarray*}
P(X_{1:n}>k)&=&\prod_{\emptyset\neq I\subset \{1,2,\ldots,n\}}\theta_I^{\max\{k,\ldots,k\}+1} \\
&=&\left(\prod_{\emptyset\neq I\subset \{1,2,\ldots,n\}}\theta_I\right)^{k+1},\quad k=-1,0,1,\ldots,
\end{eqnarray*}
which gives the desired conclusion.
\medskip\noindent
(iii) Let $k_i=-1$ if $i\notin\{l_1,\ldots,l_s\}$. Then, for $k_{l_1}, \ldots,k_{l_s}\in\{-1,0,1,\ldots\}$,
\begin{eqnarray}
P\left(X_{l_1}>k_{l_1},\ldots,X_{l_s}>k_{l_s}\right)&=& P\left(X_i>k_i, i=1,2,\ldots,n\right)\nonumber\\
&=&\prod_{\emptyset\neq I\subset\{1,2,\ldots,n\}}\theta_I^{\max\{k_i,i\in I\}+1} \nonumber\\
&=&\prod_{I\subset\{1,2,\ldots,n\},I\cap\{l_1,\ldots,l_s\}\neq \emptyset}\
\theta_I^{\max\{k_i, i\in I\cap\{l_1,\ldots,l_s\} \}+1}\nonumber\\
&=&\prod_{\emptyset\neq I\subset\{l_1,\ldots,l_s\}}\left(\prod\limits_{\tilde{I}\in\tilde{J}}\theta_{I\cup\tilde{I}}\right)^{\max\{k_{i}, i\in I\}+1}, \label{rozk_brzeg}
\end{eqnarray}
where the second equality is a consequence of (\ref{sfMVG}). Comparing (\ref{rozk_brzeg}) with (\ref{sfMVG}) gives the assertion of (iii).
\medskip\noindent
(iv) By (ii) and (iii), $X_{1:\{l_1,l_2,\ldots,l_s\}}$ has the geometric distribution with parameter $1-\theta$, where
$$\theta=\prod\limits_{\emptyset\neq I\subset \{l_1,l_2,\ldots,l_s\}}\hat{\theta}_I
=\prod\limits_{\emptyset\neq I\subset \{l_1,l_2,\ldots,l_s\}} \prod\limits_{\tilde{I}\in\tilde{J}}\theta_{I\cup\tilde{I}}
=\prod_{I\subset\{1,\ldots,n\}, I\cap\{l_1,\ldots,l_s\}\neq \emptyset} \theta_I$$
as claimed in (\ref{MinSubset}). Relation (\ref{factorialMin}) is an immediate consequence of the well known formula for factorial moments of the geometric distribution.
\end{proof}
In particular, Theorem \ref{MVG_properties} (iii) asserts that univariate marginal distributions of the MVG distribution are geometric. More precisely, if the random vector $(X_1,X_2,\ldots,X_n)$ has the MVG distribution with parameters $\theta_I\in [0,1]$, $\emptyset\neq I\subset \{1,2,\ldots,n\}$, then
$$X_i\sim ge\left(1-\prod\limits_{\{i\}\subset I\subset\{1,2,\ldots,n\}}\theta_I \right), \quad i=1,2,\ldots,n.$$
If, moreover, $\theta_I=1$ for all $\emptyset\neq I\subset \{1,2,\ldots,n\}$ except singletons, then $X_1,X_2,\ldots,X_n$ are independent and $X_i\sim ge\left(1-\theta_{\{i\}}\right)$, $i=1,2,\ldots,n$.
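Parts (ii)--(iv) of Theorem \ref{MVG_properties} translate directly into a small numerical routine. The Python sketch below (ours; the names are illustrative) computes the parameter $\theta$ of the geometric distribution of $X_{1:S}$ and the factorial moments given by (\ref{factorialMin}).
\begin{verbatim}
# Illustrative sketch (ours), implementing Theorem MVG_properties (iv).
from math import factorial

def theta_min(S, theta):
    # product of theta_I over all I intersecting S; then X_{1:S} ~ ge(1 - result)
    prod = 1.0
    for I, t in theta.items():
        if set(I) & set(S):
            prod *= t
    return prod

def factorial_moment_min(S, theta, p):
    # E(X_{1:S})_p = p! * (theta/(1-theta))^p, see (factorialMin)
    th = theta_min(S, theta)
    return factorial(p) * (th / (1.0 - th)) ** p

theta = {frozenset({1}): 0.9, frozenset({2}): 0.9, frozenset({3}): 0.8,
         frozenset({1, 2, 3}): 0.95}
print(factorial_moment_min({1, 2}, theta, p=1))   # expectation of min(X_1, X_2)
\end{verbatim}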
Theorem \ref{MVG_properties} (iv) provides formulas for factorial moments of the smallest order statistic from a random vector $(X_1,X_2,\ldots,X_n)$ with the MVG distribution. To extend these formulas to larger order statistics we will use the following result by Balakrishnan et al. \cite{BBM92}. For some other interesting relations for cumulative distribution functions of order statistics we refer the reader to \cite{E13}.
\begin{theorem}
Let the random vector $(X_1,X_2,\ldots,X_n)$ have any joint distribution and $1\leq r\leq n$. By $F_{r:n}$ and $F_{1:S}$, $S\subset\{1,2,\ldots,n\}$, denote the cumulative distribution functions of $X_{r:n}$ and $X_{1:S}$, respectively. Then, for all $x$,
\begin{equation}
\label{recc}
F_{r:n}(x)=\sum_{j=0}^{r-1} (-1)^{r-1-j} {{n-j-1} \choose {n-r}} \sum_{S \subset \{1,\ldots,n\}, |S|=n-j} F_{1:S}(x),
\end{equation}
where $|S|$ stands for the number of elements of the set $S$.
\end{theorem}
Relation (\ref{recc}) can be rewritten in terms of probability mass functions and hence in terms of factorial moments, provided they exist. For $1\leq r\leq n$ and $p=1,2,\ldots$, we get
\begin{equation}
\label{reccfac}
E\left(X_{r:n}\right)_p=\sum_{j=0}^{r-1} (-1)^{r-1-j}
{{n-j-1} \choose {n-r}}\sum_{S \subset \{1,\ldots,n\},|S|=n-j} E\left(X_{1:S}\right)_p.
\end{equation}
Combining Theorem \ref{MVG_properties} (iv) with (\ref{reccfac}) we obtain the main result of this section.
\begin{theorem}
\label{MVGmoments}
Under the assumption of Theorem \ref{MVG_properties}, for $1\leq r\leq n$ and $p=1,2,\ldots$,
\begin{equation}
\label{MVGfac}
E\left(X_{r:n}\right)_p
=p!\sum_{j=0}^{r-1} (-1)^{r-1-j} {{n-j-1} \choose {n-r}} S_{j,p},
\end{equation}
where
$$S_{0,p}=\left(\frac{1}{1-\theta_{all}}-1 \right)^p$$
and, for $1\leq j\leq n-1$,
$$S_{j,p}=\sum_{\{s_1,\ldots,s_{j}\}\subset\{1,\ldots,n\}} \left(\frac{1}{1-\frac{\theta_{all}}{\prod\limits_{\emptyset\neq I\subset \{s_1,\ldots,s_{j}\}}\theta_I}}-1 \right)^p$$
with
\begin{equation}
\label{teta.all}
\theta_{all}=\prod\limits_{\emptyset\neq I\subset \{1,2,\ldots,n\}}\theta_I.
\end{equation}
In particular,
\begin{equation}
\label{MVGexp}
E\left(X_{r:n}\right)=\sum_{j=0}^{r-1} (-1)^{r-1-j} {{n-j-1} \choose {n-r}} S_{j,1}
\end{equation}
and
\begin{eqnarray}
\label{MVGvar}
&&\hspace{-7mm}Var\left(X_{r:n}\right) =E\left(X_{r:n}(X_{r:n}-1) \right)+E\left(X_{r:n}\right)-\big(E\left(X_{r:n}\right)\big)^2 \nonumber \\
&=& 2\sum_{j=0}^{r-1} (-1)^{r-1-j} {{n-j-1} \choose {n-r}} S_{j,2}+E\left(X_{r:n}\right)\left(1-E\left(X_{r:n}\right)\right).
\end{eqnarray}
\end{theorem}
\begin{proof}
For $0\leq j\leq n-1$,
\begin{eqnarray}
&&\sum_{S \subset \{1,\ldots,n\},|S|=n-j} E\left(X_{1:S}\right)_p=\sum_{\{s_1,\ldots,s_{n\!-\!j}\}\subset\{1,\ldots,n\}}
E\left(X_{1:\{s_1,\ldots,s_{n-j}\}}\right)_p \nonumber \\
&=&p!\sum_{\{s_1,\ldots,s_{n-j}\}\subset\{1,\ldots,n\}}
\left(\frac{1}{1-\prod\limits_{I\subset \{1,\ldots,n\},I\cap \{s_1,\ldots,s_{n-j}\}\neq\emptyset}\theta_I}-1 \right)^p
\label{thmMVG1}
\end{eqnarray}
by Theorem \ref{MVG_properties} (iv).
But, for $j=0$ and $\{s_1,\ldots,s_{n}\}\subset \{1,\ldots,n\}$,
\begin{equation}
\label{thmMVG2}
\prod\limits_{I\subset \{1,\ldots,n\},I\cap \{s_1,\ldots,s_{n-j}\}\neq\emptyset}\theta_I =
\prod\limits_{\emptyset\neq I\subset \{1,\ldots,n\}}\theta_I=\theta_{all},
\end{equation}
while for $1\leq j\leq n-1$ and $\{s_1,\ldots,s_{n-j}\}\subset \{1,\ldots,n\}$,
\begin{eqnarray}
\prod\limits_{I\subset \{1,\ldots,n\},I\cap \{s_1,\ldots,s_{n-j}\}\neq\emptyset}\theta_I
&=&\frac{\prod\limits_{\emptyset\neq I\subset \{1,\ldots,n\}}\theta_I}{
\prod\limits_{\emptyset\neq I\subset \{1,\ldots,n\}\setminus \{s_1,\ldots,s_{n-j}\}}\theta_I} \nonumber \\
&=&\frac{\theta_{all}}{\prod\limits_{\emptyset\neq I\subset \{1,\ldots,n\}\setminus \{s_1,\ldots,s_{n-j}\}}\theta_I}. \label{thmMVG3}
\end{eqnarray}
Substituting (\ref{thmMVG2}) and (\ref{thmMVG3}) into (\ref{thmMVG1}), and noticing that, for $1\leq j\leq n-1$,
\begin{eqnarray*}
\sum_{\{s_1,\ldots,s_{n-j}\}\subset\{1,\ldots,n\}}
\left(\frac{1}{1-\frac{\theta_{all}}{\prod\limits_{\emptyset\neq I\subset \{1,\ldots,n\}\setminus \{s_1,\ldots,s_{n-j}\}}\theta_I}}-1 \right)^p && \\
=\sum_{\{s_1,\ldots,s_{j}\}\subset\{1,\ldots,n\}} \left(\frac{1}{1-\frac{\theta_{all}}{\prod\limits_{\emptyset\neq I\subset \{s_1,\ldots,s_{j}\}}\theta_I}}-1 \right)^p=S_{j,p}, &&
\end{eqnarray*}
we get
\begin{eqnarray*}
&&\sum_{\{s_1,\ldots,s_{n-j}\}\subset\{1,\ldots,n\}}
E\left(X_{1:\{s_1,\ldots,s_{n-j}\}}\right)_p \\
&=&\left\{
\begin{array}{lcl}
p!\left(\frac{1}{1-\theta_{all}}-1 \right)^p=p!S_{0,p} & \hbox{ if } & j= 0 \\
p!S_{j,p} & \hbox{ if } & 1\leq j\leq n-1
\end{array}
\right..
\end{eqnarray*}
Now application of (\ref{reccfac}) finishes the proof.
\end{proof}
Applying (\ref{MVGexp}) and (\ref{MVGvar}) to the case when $X_1, X_2, \ldots, X_n$ are independent, i.e., $\theta_I=1$ for all $\emptyset\neq I\subset \{1,2,\ldots,n\}$ except singletons, we recover known formulas for the expectation and variance of the $r$th order statistic from INID geometric RV's \citep[formulas (4.11) and (4.13)]{DD18}.
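For completeness, a direct Python transcription of Theorem \ref{MVGmoments} is sketched below (ours; practical only for small $n$, since all $2^n-1$ parameters enter through $\theta_{all}$). The mean and variance then follow from (\ref{MVGexp}) and (\ref{MVGvar}).
\begin{verbatim}
# Illustrative sketch (ours) of formula (MVGfac); `theta` maps frozensets to
# theta_I, missing subsets act as theta_I = 1.
from itertools import chain, combinations
from math import comb, factorial, prod

def nonempty_subsets(items):
    items = list(items)
    return chain.from_iterable(combinations(items, k)
                               for k in range(1, len(items) + 1))

def mvg_factorial_moment(n, theta, r, p):
    get = lambda I: theta.get(frozenset(I), 1.0)
    theta_all = prod(get(I) for I in nonempty_subsets(range(1, n + 1)))
    def S(j):                                  # S_{j,p} of Theorem MVGmoments
        if j == 0:
            return (1.0 / (1.0 - theta_all) - 1.0) ** p
        total = 0.0
        for A in combinations(range(1, n + 1), j):
            denom = prod(get(I) for I in nonempty_subsets(A))
            total += (1.0 / (1.0 - theta_all / denom) - 1.0) ** p
        return total
    return factorial(p) * sum((-1) ** (r - 1 - j) * comb(n - j - 1, n - r) * S(j)
                              for j in range(r))

def mvg_mean_var(n, theta, r):
    # mean and variance via (MVGexp) and (MVGvar)
    m1 = mvg_factorial_moment(n, theta, r, 1)
    m2 = mvg_factorial_moment(n, theta, r, 2)   # E(X(X-1))
    return m1, m2 + m1 * (1.0 - m1)
\end{verbatim}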
Furthermore, under the additional assumption that the random vector $(X_1, X_2, \ldots, X_n)$ is exchangeable, Theorem \ref{MVGmoments} takes on the following simpler form. In its formulation we adopt the convention that
\begin{equation}
\label{konw}
{{j} \choose {s}}=0 \; \hbox{ if } \; s>j\geq 1.
\end{equation}
\begin{corollary}
\label{exchMVGmoments}
Let the random vector $(X_1,X_2,\ldots,X_n)$ be exchangeable and have the MVG distribution with parameters $\theta_I\in [0,1]$, $\emptyset\neq I\subset \{1,2,\ldots,n\}$, where
\begin{equation}
\label{cMVGexch}
\theta_{\{i_1,\ldots,i_s\}}=\theta_s\in[0,1] \quad \hbox{ for all } 1\leq s\leq n \hbox{ and } 1\leq i_1<\cdots<i_s\leq n
\end{equation}
and at least one $\theta_s$, $1\leq s\leq n$, is not equal to 1. Then, for $1\leq r\leq n$ and $p=1,2,\ldots$,
$$
E\left(X_{r:n}\right)_p
=p!\sum_{j=0}^{r-1} (-1)^{r-1-j} {{n-j-1} \choose {n-r}}{{n} \choose {j}}\left(\frac{1}{1-\prod\limits_{s=1}^n \theta_s^{{{n} \choose {s}}-{{j} \choose {s}}}} -1\right)^p.
$$
In particular,
$$
E\left(X_{r:n}\right)=\sum_{j=0}^{r-1} (-1)^{r-1-j} {{n-j-1} \choose {n-r}}{{n} \choose {j}}\left(\frac{1}{1-\prod\limits_{s=1}^n \theta_s^{{{n} \choose {s}}-{{j} \choose {s}}}} -1\right)
$$
and
\begin{eqnarray*}
Var\left(X_{r:n}\right)&=&2\sum_{j=0}^{r-1} (-1)^{r-1-j} {{n-j-1} \choose {n-r}}{{n} \choose {j}}\left(\frac{1}{1-\prod\limits_{s=1}^n \theta_s^{{{n} \choose {s}}-{{j} \choose {s}}}} -1\right)^2 \\
&+&E\left(X_{r:n}\right)\left(1-E\left(X_{r:n}\right)\right).
\end{eqnarray*}
\end{corollary}
\begin{proof}
Condition (\ref{cMVGexch}) guarantees that the random vector $(X_1,X_2,\ldots,X_n)$ is indeed exchangeable. If this is the case, then
\begin{eqnarray*}
\theta_{all}&=&\prod\limits_{\emptyset\neq I\subset \{1,2,\ldots,n\}}\theta_I=
\prod_{s=1}^n\prod_{\{i_1,\ldots,i_{s}\}\subset\{1,\ldots,n\}}\theta_{\{i_1,\ldots,i_{s}\}} \\
&=&\prod_{s=1}^n\prod_{\{i_1,\ldots,i_{s}\}\subset\{1,\ldots,n\}}\theta_s=\prod_{s=1}^n\theta_s^{{n} \choose {s}}
\end{eqnarray*}
and, for $\{s_1,\ldots,s_{j}\}\subset\{1,\ldots,n\}$,
$$
\prod\limits_{\emptyset\neq I\subset \{s_1,\ldots,s_j\}}\theta_I=
\prod_{s=1}^j\theta_s^{{j} \choose {s}}=
\prod_{s=1}^n\theta_s^{{j} \choose {s}},
$$
by (\ref{konw}).
Consequently, using notation of Theorem \ref{MVGmoments}, we have
\begin{equation}
\label{coMVGexch1}
S_{0,p}={{n} \choose {0}}\left(\frac{1}{1-\prod\limits_{s=1}^n\theta_s^{{n} \choose {s}}}-1 \right)^p
\end{equation}
and, for $1\leq j\leq n-1$,
\begin{eqnarray}
S_{j,p}&=&\sum_{\{s_1,\ldots,s_{j}\}\subset\{1,\ldots,n\}} \left(\frac{1}{1-\prod\limits_{s=1}^n\theta_s^{{{n} \choose {s}}-{{j} \choose {s}}}}-1 \right)^p \nonumber \\
&=&{{n} \choose {j}} \left(\frac{1}{1-\prod\limits_{s=1}^n\theta_s^{{{n} \choose {s}}-{{j} \choose {s}}}}-1 \right)^p.
\label{coMVGexch2}
\end{eqnarray}
Substituting (\ref{coMVGexch1}) and (\ref{coMVGexch2}) into (\ref{MVGfac}) completes the proof.
\end{proof}
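In the exchangeable case, Corollary \ref{exchMVGmoments} can be coded directly. The sketch below (ours; Python's \texttt{comb} already follows convention (\ref{konw})) reproduces, for instance, the value $0.285$ in the first row of Table \ref{Table7}.
\begin{verbatim}
# Illustrative sketch (ours) of Corollary exchMVGmoments; theta_s[s-1] = theta_s.
from math import comb, factorial, prod

def exch_mvg_factorial_moment(n, theta_s, r, p):
    def term(j):
        power = prod(theta_s[s - 1] ** (comb(n, s) - comb(j, s))
                     for s in range(1, n + 1))
        return comb(n, j) * (1.0 / (1.0 - power) - 1.0) ** p
    return factorial(p) * sum((-1) ** (r - 1 - j) * comb(n - j - 1, n - r) * term(j)
                              for j in range(r))

theta_s = [0.9, 0.99] + [1.0] * 8       # first row of Table 7
print(exch_mvg_factorial_moment(10, theta_s, r=1, p=1))   # approx. 0.285
\end{verbatim}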
We finish this section with Tables \ref{Table6} and \ref{Table7} presenting means and variances (in brackets) of all order statistics from some MVG samples of size $10$ which are non-exchangeable and exchangeable, respectively. They were obtained using Theorem \ref{MVGmoments} and Corollary \ref{exchMVGmoments}.
\begin{landscape}
\begin{table}[ht]
\footnotesize
\caption{Mean and variance (in brackets) of $X_{r:10}$ from $(X_1,\ldots,X_{10})$ with MVG distribution with parameters $\theta_I$, $\emptyset\neq I\subset\{1,\dots,10\}$ (not listed parameters are assumed to be equal to 1)}
\begin{tabular}{|l||c|c|c|c|c|c|c|c|c|c||}
\hline
$\theta_I\backslash r$ &1 & 2&3 &4 &5 &6 &7 &8 &9 &10\\ \hline\hline
$\theta_{\{1\}}=\cdots=\theta_{\{8\}}=0.9$ & 0.375 & 1.138 & 2.110 &3.239 &4.563 & 6.157 &8.149 &10.784 & 14.644 & 21.851\\
$\theta_{\{9\}}=\theta_{\{10\}}=0.8$& (0.516) & (1.407) & (2.456) &(3.876) & (5.978)&(9.271)&(14.827)&(25.311) &(49.390) & (137.343) \\
$\theta_{\{1,\ldots,10\}}=0.99$& & & & & & & & & & \\\hline
$\theta_{\{1\}}=\cdots=\theta_{\{8\}}=0.9$ & 0.213 & 0.583 & 1.115 & 1.760 & 2.525 &3.450 &4.614 & 6.184 & 8.566 & 13.406 \\
$\theta_{\{9\}}=\theta_{\{10\}}=0.8$& (0.258) & (0.681) & (1.194) &(1.787)&(2.546)&(3.623)&(5.287)&(8.202)&(14.656) & (40.025) \\
$\theta_{\{i,j\}}=0.99, 1\leq i<j\leq 10$& & & & & & & & & & \\\hline
$\theta_{\{1\}}=\cdots=\theta_{\{8\}}=0.9$ & 0.336 & 0.997 &1.850 & 2.860 & 4.061 & 5.535 & 7.424 & 10.016 & 14.030 & 22.350 \\
$\theta_{\{9\}}=\theta_{\{10\}}=0.8$ & (0.449) & (1.221) & (2.121) & (3.258) & (4.849) & (7.238) & (11.153) & (18.502) & (35.978) &(109.293)\\
$\theta_{\{1,j\}}=0.99, 2\leq j\leq 10$& & & & & & & & & & \\\hline
$\theta_{\{1\}}=\cdots=\theta_{\{8\}}=0.9$ & 0.314 & 0.919 & 1.674 & 2.524 & 3.478 & 4.565 & 5.835 & 7.372 & 9.344 & 12.209\\
$\theta_{\{9\}}=\theta_{\{10\}}=0.8$& (0.413) & (1.118) & (1.960) & (3.088) & (4.779) & (7.469) & (11.993) & (20.185) & (36.868) & (80.375) \\
$\theta_{\{1,j\}}=0.99, 2\leq j\leq 10$& & & & & & & & & & \\
$\theta_{\{1,\ldots,10\}}=0.95$& & & & & & & & & & \\\hline
\end{tabular}
\label{Table6}
\end{table}
\begin{table}[ht]
\footnotesize
\caption{Mean and variance (in brackets) of $X_{r:10}$ from $(X_1,\ldots,X_{10})$ with exchangeable MVG distribution with parameters $\theta_s$, $s=1,2,\ldots,10$, defined in (\ref{cMVGexch}) (not listed parameters are assumed to be equal to 1)}
\begin{tabular}{|l||c|c|c|c|c|c|c|c|c|c||}
\hline
$\theta_s\backslash r$ &1 & 2&3 &4 &5 &6 &7 &8 &9 &10\\ \hline\hline
$\theta_{1}=0.9$& 0.285 & 0.705 & 1.303 & 2.008 & 2.839 & 3.835 &5.080 & 6.740 & 9.229 &14.208 \\
$\theta_{2}=0.99$& (0.366) & (0.885)&(1.499)&(2.208)&(3.101)&(4.338)&(6.197)&(9.366)&(16.184)&(42.216) \\\hline
$\theta_{1}=0.9$& 0.036 &0.077 & 0.211 & 0.387 & 0.656 & 0.992 & 1.420 & 1.984 & 2.828 & 4.515 \\
$\theta_{2}=0.95$&(0.037)& (0.080) &(0.203)&(0.345)&(0.513)&(0.694)&(0.927)&(1.317)&(2.150)& (5.244) \\\hline
$\theta_{1}=0.9$& 0.036 & 0.077 &0.209 &0.382 &0.648 &0.979 &1.398 &1.948 &2.764 & 4.367 \\
$\theta_{2}=0.95$&(0.037)&(0.080)&(0.201)&(0.341)&(0.509)&(0.690)&(0.926)&(1.324)& (2.177) & (5.293) \\
$\theta_{10}=0.99$& & & & & & & & & & \\\hline
$\theta_{2}=0.95$& 0.110 & 0.110 & 0.403 & 0.560 & 0.948 & 1.332 & 1.857 & 2.540 & 3.567 & 5.619 \\
& (0.123) & (0.123) & (0.398) & (0.546) & (0.791) & (1.065) & (1.425) & (2.040) & (3.312) & (7.967) \\\hline
$\theta_{8}=0.95$& 0.110 & 0.110 & 0.110 & 0.110 & 0.110 & 0.110 & 0.110 & 0.110 & 0.403 & 0.587 \\
& (0.123) & (0.123) & (0.123) & (0.123) & (0.123) & (0.123) & (0.123) & (0.123) & (0.398) & (0.592) \\\hline
\end{tabular}
\label{Table7}
\end{table}
\end{landscape}
\section{Moments of lifetimes of coherent systems}
\label{section4}
In this section we establish formulas that are convenient for numerical evaluation of moments of lifetimes of coherent systems operating in discrete time.
First observe that from results given in Sections \ref{section2} and \ref{section3} we immediately obtain moments of lifetimes of two important types of technical structures, namely of $k$-out-of-$n:F$ and $k$-out-of-$n:G$ systems. To see this, let $X_1,X_2,\ldots,X_n$ be random lifetimes of $n$ items and $T_{k,n:F}$ ($T_{k,n:G}$) denote the lifetime of the $k$-out-of-$n:F$ ($k$-out-of-$n:G$) system built up from these items. Since a $k$-out-of-$n:F$ system fails when at least $k$ of its components are broken, and $k$-out-of-$n:G$ system works as long as at least $k$ of its components are working we have, for any joint distribution of $(X_1,X_2,\ldots,X_n)$ and $1\leq k\leq n$,
$$
T_{k,n:F}=X_{k:n} \hbox{ and } T_{k,n:G}=X_{n-k+1:n}.
$$
Therefore using Theorem \ref{th1} (\ref{th2})
we can find moments (their approximate values) of $T_{k,n:F}$ and $T_{k,n:G}$ whenever $X_1,X_2,\ldots,X_n$ are discrete RV's taking values in finite (infinite) subsets of the set of non-negative integers. In particular, Corollaries \ref{th2c1} and \ref{th2c2}
enable us to compute approximations of these moments when the univariate marginal distributions of $(X_1,X_2,\ldots,X_n)$ are Poisson or negative binomial. Theorem \ref{MVGmoments} along with Corollary \ref{exchMVGmoments} give moments of $T_{k,n:F}$ and $T_{k,n:G}$ in the case when the joint distribution of component lifetimes is MVG. Furthermore, values in Tables \ref{Table1}-\ref{Table7}
can be interpreted as means, second raw moments and/or variances of $T_{k,n:F}$ (for $r=k$) and $T_{k,n:G}$ (for $r=n-k+1$).
\begin{example}
Water is supplied to a factory by $10$ pipes. Every morning of a working day, valves are opened to provide water to the factory. There is one main valve which opens the general water flow to all the pipes and $10$ valves which open the water flow into the separate pipes. The factory can work as long as at least $k$ ($1\leq k\leq 10$) pipes supply water. If the probability that the main valve will not break down during opening is $\theta_{\{1,\ldots,10\}}$, the probability that the valve of the $i$th pipe will not break down during opening is $\theta_{\{i\}}$, $i=1,\ldots,10$, and all the valves work independently and are not repaired, then the system supplying water to the factory is a $k$-out-of-$10:G$ system with components having the MVG distribution with parameters $\theta_I$, $\emptyset\neq I\subset\{1,\dots,10\}$, where $\theta_I=1$ for $I\notin\{\{1\},\ldots,\{10\},\{1,\ldots,10\}\}$. The results of Section \ref{section3} allow us to find the moments of the random number of working days up to a failure of this system.
In particular, taking $r=n-k+1=11-k$ in the first row of Table \ref{Table6}, one obtains means and variances of this number for $k=1,\ldots,10$ when $\theta_{\{1\}}=\cdots=\theta_{\{8\}}=0.9$, $\theta_{\{9\}}=\theta_{\{10\}}=0.8$ and $\theta_{\{1,\ldots,10\}}=0.99$.
\end{example}
If we want to find moments of lifetimes of coherent systems other than $k$-out-of-$n$ structures, we need more effort. The rest of this section is devoted to presenting methods of doing this. We start by recalling the relevant concepts and facts.
Let us consider a coherent system $S$ consisting of $n$ components numbered $1,2,\ldots,n$. We say that $P\subset\{1,2,\ldots,n\}$ is a path set of $S$ if system $S$ functions when all the elements with indices in $P$ work. Similarly, $C\subset\{1,2,\ldots,n\}$ is called a cut set of $S$ if system $S$ fails whenever all the elements with indices in $C$ break down. A path (cut) set is said to be minimal if it does not contain any strict subset being a path (cut) set.
Using the concepts of minimal path and cut sets we can write a useful representation for the lifetime of a coherent system. If the coherent system $S$ has $s$ minimal path sets $P_1,P_2,\ldots,P_s$ and $v$ minimal cut sets $C_1,C_2,\ldots,C_v$, then
\begin{equation}
\label{repMPCs}
T=\max\limits_{1\leq j\leq s}\min\limits_{i\in P_j}X_i=\min\limits_{1\leq j\leq v}\max\limits_{i\in C_j}X_i,
\end{equation}
where $T$ denotes the lifetime of system $S$ and $X_i$ is the lifetime of its $i$th component, $i=1,\ldots,n$; see \cite[p 13]{BP75}.
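Representation (\ref{repMPCs}) translates directly into code. The Python sketch below (ours) evaluates the system lifetime from the minimal path sets, using a $2$-out-of-$3$:$G$ system as a check (for which $T=X_{2:3}$).
\begin{verbatim}
# Illustrative sketch (ours) of representation (repMPCs); components numbered from 1.
def system_lifetime(x, path_sets):
    # x[i-1] is the lifetime of component i
    return max(min(x[i - 1] for i in P) for P in path_sets)

paths_2_of_3 = [{1, 2}, {1, 3}, {2, 3}]   # minimal path sets of a 2-out-of-3:G system
print(system_lifetime([7, 2, 5], paths_2_of_3))   # 5, the second largest lifetime
\end{verbatim}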
Applying (\ref{repMPCs}) and the inclusion-exclusion formula, we get expressions for the survival function of $T$ (see \cite[pp. 25-26]{BP75} and \cite{NRS07})
\begin{eqnarray}
\hspace{-4mm} P(T>t)&=&\sum\limits_{j=1}^s(-1)^{j+1}\sum\limits_{\{k_1,\ldots,k_j\}\subset\{1,\ldots, s\}}P\left(X_{1:\bigcup_{l=1}^jP_{k_l}}>t\right) \label{sf1} \\
&=&1-\sum\limits_{j=1}^v(-1)^{j+1}\sum\limits_{\{k_1,\ldots,k_j\}\subset\{1,\ldots, v\}}P\left(X_{\left|\bigcup_{l=1}^jC_{k_l}\right| :\bigcup_{l=1}^jC_{k_l}}\leq t\right) \nonumber \\
&=& \sum\limits_{j=1}^v(-1)^{j+1}\sum\limits_{\{k_1,\ldots,k_j\}\subset\{1,\ldots, v\}}P\left(X_{\left|\bigcup_{l=1}^jC_{k_l}\right| :\bigcup_{l=1}^jC_{k_l}}> t\right), \label{sf2}
\end{eqnarray}
where, for $A=\{i_1,\ldots,i_m\}\subset\{1,\ldots,n\}$, $1\leq m\leq n$, and $1\leq r\leq m$,
$X_{r:A}$ denotes the $r$th order statistic from $X_{i_1},\ldots,X_{i_m}$, and as in the previous section $|A|$ stands for the number of elements in $A$.
Under the assumption that the random vector $(X_1,X_2,\ldots,X_n)$ is exchangeable, (\ref{sf1}) and (\ref{sf2}) simplify considerably. Indeed, then the distribution of $X_{1:\bigcup_{l=1}^jP_{k_l}}$ and $X_{\left|\bigcup_{l=1}^jC_{k_l}\right| :\bigcup_{l=1}^jC_{k_l}}$ depends only on the number of elements in $\bigcup_{l=1}^jP_{k_l}$ and $\bigcup_{l=1}^jC_{k_l}$, respectively, and we get
\begin{equation}
\label{sfexch}
P(T>t)=\sum_{i=1}^n\alpha_iP(X_{1:i}>t)=\sum_{i=1}^n\beta_iP(X_{i:i}>t),
\end{equation}
where $\alpha_i, \beta_i$, $i=1,2,\ldots,n$, are real numbers depending on the structure of the coherent system but not on the distribution of $(X_1,X_2,\ldots,X_n)$, and satisfying $\sum_{i=1}^n\alpha_i=\sum_{i=1}^n\beta_i=1$. The vectors $\mathbf{a}=(\alpha_1,\ldots,\alpha_n)$ and $\mathbf{b}=(\beta_1,\ldots,\beta_n)$ are called minimal and maximal signatures, respectively; see \cite{NRS07}.
Now we are ready to state and prove the main result of this section providing formulas for raw and factorial moments of lifetimes of coherent systems operating in discrete time. In the sequel, $\mathbb{I}(\cdot)$ denotes the indicator function, i.e., $\mathbb{I}(B)=1$ if $B$ is true and $\mathbb{I}(B)=0$ otherwise.
\begin{theorem}
\label{mCS}
Let $T$ be the lifetime of a coherent system with component lifetimes $X_1,X_2,\ldots,X_n$ and with minimal path and cut sets $P_1,P_2,\ldots,P_s$ and $C_1,C_2,\ldots,C_v$, respectively.
\begin{itemize}
\item[(i)] If $X_i$ takes values in the set of non-negative integers, $i=1,2,\ldots,n$, then, for $p=1,2,\ldots$,
\begin{eqnarray}
ET^p&=&\sum\limits_{m=0}^\infty \big((m+1)^p-m^p\big) \sum_{i=1}^{n} \sum\limits_{\{k_1,\ldots,k_i\}\subset\{1,\ldots, n\}}
\alpha_{\{k_1,\ldots,k_i\}} \nonumber \\
&&\times P\left(\bigcap\limits_{l=1}^i\{X_{k_l}>m\}\right) \label{rmCS1} \\
&=&\sum\limits_{m=0}^\infty \big((m+1)^p-m^p\big) \sum_{i=1}^{n} \sum\limits_{\{k_1,\ldots,k_i\}\subset\{1,\ldots, n\}}
\beta_{\{k_1,\ldots,k_i\}} \nonumber \\
&&\times \left\{1-P\left(\bigcap\limits_{l=1}^i\{X_{k_l}\leq m\}\right)\right\}, \label{rmCS1A}
\end{eqnarray}
where, for $\{k_1,\ldots,k_i\}\subset\{1,\ldots,n\}$,
\begin{equation}
\label{alfy}
\alpha_{\{k_1,\ldots,k_i\}}=\sum\limits_{j=1}^s(-1)^{j+1}\sum\limits_{\{s_1,\ldots,s_j\}\subset\{1,\ldots, s\}}\mathbb{I}\left(\bigcup_{l=1}^jP_{s_l}=\{k_1,\ldots,k_i\}\right)
\end{equation}
and
\begin{equation*}
\beta_{\{k_1,\ldots,k_i\}}=\sum\limits_{j=1}^v(-1)^{j+1}\sum\limits_{\{s_1,\ldots,s_j\}\subset\{1,\ldots, v\}}\mathbb{I}\left(\bigcup_{l=1}^jC_{s_l}=\{k_1,\ldots,k_i\}\right) .
\end{equation*}
\item[(ii)] Under the hypothesis of part (i), if moreover we assume that $j_{max(m)}$ defined in (\ref{jmaxm}) does not depend on $m$ (and denote it by $j_0$), and require that, for a fixed $p\in\{1,2,\ldots\}$, $EX_{j_0}^p<\infty$, then
\begin{eqnarray}
ET^p&\approx&\sum\limits_{m=0}^{M_0} \big((m+1)^p-m^p\big) \sum_{i=1}^{n} \sum\limits_{\{k_1,\ldots,k_i\}\subset\{1,\ldots, n\}}
\alpha_{\{k_1,\ldots,k_i\}} \nonumber \\
&&\times P\left(\bigcap\limits_{l=1}^i\{X_{k_l}>m\}\right) \label{rmCSapp}
\end{eqnarray}
and
\begin{eqnarray}
ET^p&\approx&\sum\limits_{m=0}^{\bar{M}_0} \big((m+1)^p-m^p\big) \sum_{i=1}^{n} \sum\limits_{\{k_1,\ldots,k_i\}\subset\{1,\ldots, n\}}\beta_{\{k_1,\ldots,k_i\}} \nonumber \\
&&\times \left\{1-P\left(\bigcap\limits_{l=1}^i\{X_{k_l}\leq m\}\right)\right\}
\label{rmCSappA}
\end{eqnarray}
introduce an error not greater than $d>0$, provided that
\begin{equation}
\label{M0CS}
\sum_{x=M_0+2}^{\infty}x^pP(X_{j_0}=x)\leq d \left(\sum_{i=1}^{n}\sum\limits_{\{k_1,\ldots,k_i\}\subset\{1,\ldots, n\}}\alpha_{\{k_1,\ldots,k_i\}}^+\right)^{-1}
\end{equation}
and
\begin{equation*}
\sum_{x=\bar{M}_0+2}^{\infty}x^pP(X_{j_0}=x)\leq \frac{d}{2^n-1} \left(\sum_{i=1}^{n}\sum\limits_{\{k_1,\ldots,k_i\}\subset\{1,\ldots, n\}}\beta_{\{k_1,\ldots,k_i\}}^+\right)^{-1}
\end{equation*}
respectively, where
$$
\alpha_{\{k_1,\ldots,k_i\}}^+=
\left\{
\begin{array}{ll}
\alpha_{\{k_1,\ldots,k_i\}} & \hbox{if } \; \alpha_{\{k_1,\ldots,k_i\}}>0 \\
0 & \hbox{otherwise}
\end{array}
\right.
$$
and
$$
\beta_{\{k_1,\ldots,k_i\}}^+=
\left\{
\begin{array}{ll}
\beta_{\{k_1,\ldots,k_i\}} & \hbox{if } \; \beta_{\{k_1,\ldots,k_i\}}>0 \\
0 & \hbox{otherwise}
\end{array}
\right..
$$
\item[(iii)] If $X_1,X_2,\ldots,X_n$ are non-negative RV's and for a fixed $p\in\{1,2,\ldots\}$ we have
\begin{equation}
\label{thmCSiii}
E\left(X_{1:\{k_1,\ldots,k_i\} }^p \right)<\infty \; \hbox{ for } \emptyset\neq \{k_1,\ldots,k_i\}\subset \{1,\ldots, n\}
\end{equation}
$$
\left( E\left(X_{i:\{k_1,\ldots,k_i\} }^p \right)<\infty \; \hbox{ for } \emptyset\neq \{k_1,\ldots,k_i\}\subset \{1,\ldots, n\} \right),
$$
then
\begin{equation}
ET^p = \sum_{i=1}^{n} \sum\limits_{\{k_1,\ldots,k_i\}\subset\{1,\ldots, n\}}\alpha_{\{k_1,\ldots,k_i\}}E\left(X_{1:\{k_1,\ldots,k_i\} }^p \right)
\label{rmCS2}
\end{equation}
\begin{equation}
\left(ET^p = \sum_{i=1}^{n} \sum\limits_{\{k_1,\ldots,k_i\}\subset\{1,\ldots, n\}}\beta_{\{k_1,\ldots,k_i\}}E\left(X_{i:\{k_1,\ldots,k_i\} }^p \right) \right)
\label{rmCS2A}
\end{equation}
and
\begin{equation}
E(T)_p = \sum_{i=1}^{n} \sum\limits_{\{k_1,\ldots,k_i\}\subset\{1,\ldots, n\}}\alpha_{\{k_1,\ldots,k_i\}}
E\left(X_{1:\{k_1,\ldots,k_i\} }\right)_p \label{fmCS}
\end{equation}
\begin{equation}
\left( E(T)_p = \sum_{i=1}^{n} \sum\limits_{\{k_1,\ldots,k_i\}\subset\{1,\ldots, n\}}\beta_{\{k_1,\ldots,k_i\}}E\left(X_{i:\{k_1,\ldots,k_i\} } \right)_p \right).
\label{fmCSA}
\end{equation}
\end{itemize}
\end{theorem}
\begin{proof}
We will show (\ref{rmCS1}), (\ref{rmCSapp}), (\ref{rmCS2}) and (\ref{fmCS}). The proof of (\ref{rmCS1A}), (\ref{rmCSappA}), (\ref{rmCS2A}) and (\ref{fmCSA}) goes along the same lines.
From (\ref{sf1}) we have
\begin{eqnarray}
P(T>t) &=&\sum_{i=1}^{n} \sum\limits_{\{k_1,\ldots,k_i\}\subset\{1,\ldots, n\}}
\sum\limits_{j=1}^s(-1)^{j+1}\sum\limits_{\{s_1,\ldots,s_j\}\subset\{1,\ldots, s\}} \nonumber \\ && \hspace{5mm} \mathbb{I}\left(\bigcup_{l=1}^jP_{s_l}=\{k_1,\ldots,k_i\}\right)
P(X_{1:\{k_1,\ldots,k_i\}}>t) \nonumber \\
&=&\sum_{i=1}^{n} \sum\limits_{\{k_1,\ldots,k_i\}\subset\{1,\ldots, n\}}\alpha_{\{k_1,\ldots,k_i\}} P(X_{1:\{k_1,\ldots,k_i\}}>t).
\label{dthmCSf1}
\end{eqnarray}
\noindent
(i) Using (\ref{fact1_formula}) and next (\ref{dthmCSf1}) we get
\begin{eqnarray*}
ET^p&=&\sum\limits_{m=0}^\infty \big((m+1)^p-m^p\big) P(T>m) \\
&=&\sum\limits_{m=0}^\infty \big((m+1)^p-m^p\big) \sum_{i=1}^{n} \sum\limits_{\{k_1,\ldots,k_i\}\subset\{1,\ldots, n\}}
\alpha_{\{k_1,\ldots,k_i\}} P(X_{1:\{k_1,\ldots,k_i\}}>m).
\end{eqnarray*}
But from (\ref{th1d2}) and (\ref{th1d4}) we see that
\begin{equation}
\label{prawdop_Min}
P(X_{1:\{k_1,\ldots,k_i\}}>m)= P\left(\bigcap\limits_{l=1}^i\{X_{k_l}>m\}\right),
\end{equation}
and (\ref{rmCS1}) is established.
\noindent
(ii) The error of the approximation in (\ref{rmCSapp}) is given by
\begin{eqnarray}
Error&=&\sum\limits_{m=M_0+1}^{\infty} \big((m+1)^p-m^p\big) \sum_{i=1}^{n} \sum\limits_{\{k_1,\ldots,k_i\}\subset\{1,\ldots, n\}}
\alpha_{\{k_1,\ldots,k_i\}} \nonumber \\
&&\times P\left(\bigcap\limits_{l=1}^i\{X_{k_l}>m\}\right). \label{s4error}
\end{eqnarray}
From (\ref{dthmCSf1}) - (\ref{s4error}) it follows that
\begin{equation}
\label{dthmCSf3}
Error\geq 0.
\end{equation}
On the other hand, Theorem \ref{th2} shows that the assumption $EX_{j_0}^p<\infty$ guarantees that (\ref{thmCSiii}) holds and hence that the infinite series
\begin{eqnarray*}
&&\sum\limits_{m=M_0+1}^{\infty} \big((m+1)^p-m^p\big) P(X_{1:\{k_1,\ldots,k_i\}}>m) \\
&=&\sum\limits_{m=M_0+1}^{\infty} \big((m+1)^p-m^p\big)P\left(\bigcap\limits_{l=1}^i\{X_{k_l}>m\}\right)
\end{eqnarray*}
is convergent whenever $\emptyset\neq \{k_1,\ldots,k_i\}\subset \{1,\ldots, n\}$. This allows us to change the order of summation in (\ref{s4error}) to get
\begin{eqnarray}
Error&=& \sum_{i=1}^{n} \sum\limits_{\{k_1,\ldots,k_i\}\subset\{1,\ldots, n\}}\alpha_{\{k_1,\ldots,k_i\}} \sum\limits_{m=M_0+1}^{\infty} \big((m+1)^p-m^p\big) \nonumber \\
&&\times P(X_{1:\{k_1,\ldots,k_i\}}>m) \nonumber \\
&\leq& \sum_{i=1}^{n} \sum\limits_{\{k_1,\ldots,k_i\}\subset\{1,\ldots, n\}}\alpha^+_{\{k_1,\ldots,k_i\}} \sum\limits_{m=M_0+1}^{\infty} \big((m+1)^p-m^p\big) \nonumber \\
&&\times P(X_{1:\{k_1,\ldots,k_i\}}>m). \label{dthmCSf4}
\end{eqnarray}
Theorem \ref{th2} implies
\begin{eqnarray}
\sum\limits_{m=M_0+1}^{\infty} \big((m+1)^p-m^p\big)P(X_{1:\{k_1,\ldots,k_i\}}>m) && \nonumber \\
\leq d\left( \sum_{i=1}^{n} \sum\limits_{\{k_1,\ldots,k_i\}\subset\{1,\ldots, n\}}\alpha^+_{\{k_1,\ldots,k_i\}} \right)^{-1}, && \label{dthmCSf5}
\end{eqnarray}
provided that (\ref{M0CS}) holds. Now (\ref{dthmCSf3}) - (\ref{dthmCSf5}) show that
$$0\leq Error \leq d,$$
which means that (\ref{rmCSapp}) gives an accuracy not worse than $d$.
\noindent
(iii) Since $T$ is a non-negative RV we have
$$ET^p=p\int_0^{\infty}x^{p-1}P(T>x)dx$$
and from (\ref{dthmCSf1}) we see that
\begin{eqnarray*}
ET^p&=&\sum_{i=1}^{n} \sum\limits_{\{k_1,\ldots,k_i\}\subset\{1,\ldots, n\}}\alpha_{\{k_1,\ldots,k_i\}}
p\int_0^{\infty}x^{p-1}P(X_{1:\{k_1,\ldots,k_i\}}>x)dx \\
&=&\sum_{i=1}^{n} \sum\limits_{\{k_1,\ldots,k_i\}\subset\{1,\ldots, n\}}\alpha_{\{k_1,\ldots,k_i\}} E\left(X^p_{1:\{k_1,\ldots,k_i\}} \right),
\end{eqnarray*}
which is precisely (\ref{rmCS2}).
Condition (\ref{thmCSiii}) guarantees that $ET^p$ is finite. Hence so are $ET^j$, $j=1,\ldots,p-1$. Moreover, for some real numbers $c_j$, $j=1,\ldots,p$, we have
$$x(x-1)\cdots(x-(p-1))=\sum_{j=1}^{p}c_jx^j.$$
Consequently
\begin{eqnarray*}
E(T)_p &=&\sum_{j=1}^{p}c_jE\left(T^j\right) \\
&=&\sum_{j=1}^{p}c_j \sum_{i=1}^{n} \sum\limits_{\{k_1,\ldots,k_i\}\subset\{1,\ldots, n\}}\alpha_{\{k_1,\ldots,k_i\}} E\left(X^j_{1:\{k_1,\ldots,k_i\}} \right) \\
&=& \sum_{i=1}^{n} \sum\limits_{\{k_1,\ldots,k_i\}\subset\{1,\ldots, n\}}\alpha_{\{k_1,\ldots,k_i\}}
E\left(\sum_{j=1}^{p}c_jX^j_{1:\{k_1,\ldots,k_i\}} \right) \\
&=& \sum_{i=1}^{n} \sum\limits_{\{k_1,\ldots,k_i\}\subset\{1,\ldots, n\}}\alpha_{\{k_1,\ldots,k_i\}}
E\left(X_{1:\{k_1,\ldots,k_i\}} \right)_p,
\end{eqnarray*}
and (\ref{fmCS}) is proved.
\end{proof}
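The coefficients $\alpha_{\{k_1,\ldots,k_i\}}$ of (\ref{alfy}) can be generated mechanically by inclusion--exclusion over the minimal path sets. The Python sketch below (ours) does this; applied to the minimal path sets of the bridge system analysed later in this section, it returns, in particular, the coefficient $2$ for the full set $\{1,\ldots,5\}$.
\begin{verbatim}
# Illustrative sketch (ours) of the coefficients alpha_S in (alfy).
from itertools import combinations
from collections import defaultdict

def alpha_coefficients(path_sets):
    alpha = defaultdict(int)
    for j in range(1, len(path_sets) + 1):
        for choice in combinations(path_sets, j):
            union = frozenset().union(*choice)
            alpha[union] += (-1) ** (j + 1)
    return dict(alpha)

bridge_paths = [frozenset({1, 2}), frozenset({3, 4}),
                frozenset({1, 3, 5}), frozenset({2, 4, 5})]
print(alpha_coefficients(bridge_paths))
\end{verbatim}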
If the random vector $(X_1,X_2,\ldots,X_n)$ is exchangeable, then, for any $t$ and $\emptyset\neq \{k_1,\ldots,k_i\}\subset \{1,\ldots, n\}$, we have
$$P(X_{1:\{k_1,\ldots,k_i\}}>t)=P(X_{1:i}>t).$$
Hence comparing (\ref{sfexch}) with (\ref{dthmCSf1}) and using (\ref{alfy}) we obtain the following formula for the minimal signature. For $1\leq i\leq n$,
\begin{eqnarray}
\alpha_i & = & \sum\limits_{\{k_1,\ldots,k_i\}\subset\{1,\ldots, n\}}\alpha_{\{k_1,\ldots,k_i\}} \label{alfy_exch} \\
&=&\sum_{j=1}^{s}(-1)^{j+1} \sum\limits_{\{s_1,\ldots,s_j\}\subset\{1,\ldots, s\}}\mathbb{I}\left(\left|\bigcup_{l=1}^jP_{s_l}\right|=i \right). \nonumber
\end{eqnarray}
Similarly, for $1\leq i\leq n$,
$$\beta_i=
\sum_{j=1}^{v}(-1)^{j+1} \sum\limits_{\{s_1,\ldots,s_j\}\subset\{1,\ldots, v\}}\mathbb{I}\left(\left|\bigcup_{l=1}^jC_{s_l}\right|=i \right). $$
Minimal and maximal signatures for all coherent systems with three and four exchangeable components are given in \cite{NRS07}, those for all coherent systems with five exchangeable components are tabulated in \cite{NR10}, while those for some special cases of consecutive-$k$-out-of-$n$ systems can be found in \cite{NE07} and \cite{E10}. In general, computation of minimal and/or maximal signatures of complex systems is a demanding and time-consuming task. In the literature, techniques for finding Samaniego signatures are considerably better developed than those for minimal and maximal signatures; see, for example, \cite{DZH12}, \cite{R17}, \cite{DCX18} and \cite{YC18}. The Samaniego signature of a coherent system with lifetime $T$ and $n$ exchangeable components with lifetimes $X_1,X_2,\ldots,X_n$ is a vector $\mathbf{s}=(s_1,\ldots,s_n)$ such that
$$P(T>t)=\sum_{i=1}^ns_iP(X_{i:n}>t) \; \hbox{ for any } t$$
and $\mathbf{s}$ depends only on the structure of the system and not on the distribution of $(X_1,X_2,\ldots,X_n)$. The existence of such a vector $\mathbf{s}$ for any coherent system with exchangeable components was proved by Navarro et al. \cite{NSBB08}, who generalized earlier results by Samaniego \cite{S85} and Navarro and Rychlik \cite{NR07}. Minimal and maximal signatures can be easily determined from the corresponding Samaniego signature. Indeed, under the assumption that $(X_1,X_2,\ldots,X_n)$ is exchangeable, (\ref{recc}) can be rewritten as
\begin{eqnarray*}
P(X_{r:n}>t) &=&\sum_{j=0}^{r-1} (-1)^{r-1-j} {{n-j-1} \choose {n-r}} {n \choose n-j}P(X_{1:n-j}>t) \\
&=&\sum_{i=n-r+1}^{n} (-1)^{r-1-n+i} {{i-1} \choose {n-r}} {n \choose i}P(X_{1:i}>t).
\end{eqnarray*}
It follows that
\begin{eqnarray*}
\sum_{r=1}^{n}s_r P(X_{r:n}>t) &=&\sum_{r=1}^{n}s_r\sum_{i=n-r+1}^{n} (-1)^{r-1-n+i} {{i-1} \choose {n-r}} {n \choose i}P(X_{1:i}>t) \\
&=&\sum_{i=1}^{n}{n \choose i}\sum_{r=n-i+1}^{n}s_r (-1)^{r-1-n+i} {{i-1} \choose {n-r}} P(X_{1:i}>t),
\end{eqnarray*}
and hence that
\begin{equation}
\label{alfyzSamaniego}
\alpha_i={n \choose i}\sum_{r=n-i+1}^{n}s_r (-1)^{r-1-n+i} {{i-1} \choose {n-r}} , \; i=1,\ldots,n.
\end{equation}
Similarly, using \cite[formula (2.2)]{BBM92} we can derive formulas for $\beta_i$, $ i=1,\ldots,n$, in terms of Samaniego signature $(s_1,\ldots,s_n)$.
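Formula (\ref{alfyzSamaniego}) is also easy to evaluate numerically. The Python sketch below (ours) recovers the minimal signature $(0,2,2,-5,2)$ of the bridge system from its Samaniego signature $(0,\frac{1}{5},\frac{3}{5},\frac{1}{5},0)$.
\begin{verbatim}
# Illustrative sketch (ours) of (alfyzSamaniego); s[r-1] = s_r.
from math import comb

def minimal_signature(s):
    n = len(s)
    return [comb(n, i) * sum(s[r - 1] * (-1) ** (r - 1 - n + i) * comb(i - 1, n - r)
                             for r in range(n - i + 1, n + 1))
            for i in range(1, n + 1)]

print(minimal_signature([0, 1/5, 3/5, 1/5, 0]))   # (0, 2, 2, -5, 2) up to rounding
\end{verbatim}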
Knowledge of the minimal and/or maximal signatures of a coherent system simplifies the computation of moments of the lifetime of this system in the case when its components are exchangeable. In this case Theorem \ref{mCS} specializes to the following result.
\begin{corollary}
\label{mCSexch}
Let the random vector $(X_1,X_2,\ldots,X_n)$ be exchangeable and write $T$ for the lifetime of a coherent system with component lifetimes $X_1,X_2,\ldots,X_n$ and with minimal and maximal signatures $(\alpha_1,\ldots,\alpha_n)$ and $(\beta_1,\ldots,\beta_n)$, respectively.
\begin{itemize}
\item[(i)] If $X_i$ takes values in the set of non-negative integers, $i=1,2,\ldots,n$, then, for $p=1,2,\ldots$,
\begin{eqnarray*}
ET^p&=&\sum\limits_{m=0}^\infty \big((m+1)^p-m^p\big) \sum_{i=1}^{n} \alpha_{i}
P\left(\bigcap\limits_{l=1}^i\{X_{l}>m\}\right) \\
&=&\sum\limits_{m=0}^\infty \big((m+1)^p-m^p\big) \sum_{i=1}^{n} \beta_{i}
\left\{1-P\left(\bigcap\limits_{l=1}^i\{X_{l}\leq m\}\right)\right\}.
\end{eqnarray*}
Moreover, if $EX_1^p<\infty$ then the approximate formulas
$$ET^p\approx \sum\limits_{m=0}^{M_0} \big((m+1)^p-m^p\big) \sum_{i=1}^{n} \alpha_{i}
P\left(\bigcap\limits_{l=1}^i\{X_{l}>m\}\right) $$
and
$$ET^p\approx \sum\limits_{m=0}^{\bar{M}_0} \big((m+1)^p-m^p\big) \sum_{i=1}^{n} \beta_{i}
\left\{1-P\left(\bigcap\limits_{l=1}^i\{X_{l}\leq m\}\right)\right\}$$
introduce an error not greater than $d$, provided that
$$
\sum_{x=M_0+2}^{\infty}x^pP(X_1=x)\leq d \left(\sum_{i=1}^{n}\alpha_{i}^+\right)^{-1}
$$
and
\begin{equation*}
\sum_{x=\bar{M}_0+2}^{\infty}x^pP(X_{1}=x)\leq \frac{d}{2^n-1} \left(\sum_{i=1}^{n} \beta_{i}^+\right)^{-1}
\end{equation*}
respectively, where
$
\alpha_{i}^+=
\left\{
\begin{array}{ll}
\alpha_{i} & \hbox{if } \; \alpha_{i}>0 \\
0 & \hbox{otherwise}
\end{array}
\right.
$
and
$
\beta_{i}^+=
\left\{
\begin{array}{ll}
\beta_{i} & \hbox{if } \; \beta_{i}>0 \\
0 & \hbox{otherwise}
\end{array}
\right..
$
\item[(ii)] If $X_1,X_2,\ldots,X_n$ are non-negative RV's and for a fixed $p\in\{1,2,\ldots\}$ we have $EX_1^p<\infty$, then
$$ET^p=\sum_{i=1}^{n} \alpha_{i} E\left(X_{1:i}^p\right)=\sum_{i=1}^{n} \beta_{i} E\left(X_{i:i}^p\right)$$
and
\begin{eqnarray}
E(T)_p&=&\sum_{i=1}^{n} \alpha_{i} E\left(X_{1:i}\right)_p \label{fmCSexch} \\
&=& \sum_{i=1}^{n} \beta_{i} E\left(X_{i:i}\right)_p. \nonumber
\end{eqnarray}
\end{itemize}
\end{corollary}
It should be noted that (\ref{rmCS2}) was presented in a slightly different form by Navarro et al. \cite[Corollary 5.1]{NRS07}. Here we give it for completeness. Yet in the context of discrete lifetimes of components, (\ref{fmCS}) proves to be more useful than (\ref{rmCS2}). For example, (\ref{factorialMin}) and (\ref{fmCS}), together with its simplified version (\ref{fmCSexch}) valid for exchangeable components, yield closed-form formulas describing single moments of times to failure of coherent systems with components having MVG lifetimes.
\begin{corollary}
\label{mCSMVG}
Let the random vector $(X_1,X_2,\ldots,X_n)$ have the MVG distribution with parameters $\theta_I\in [0,1]$, $\emptyset\neq I\subset \{1,2,\ldots,n\}$. Write $T$ for the lifetime of a coherent system with component lifetimes $X_1,X_2,\ldots,X_n$ and with minimal path sets $P_1,P_2,\ldots,P_s$. Then, for $p=1,2,\ldots$,
\begin{equation}
\label{fmCSMVG}
E(T)_p=p!\sum_{i=1}^{n} \sum\limits_{\{k_1,\ldots,k_i\}\subset\{1,\ldots, n\}}\alpha_{\{k_1,\ldots,k_i\}}
\left(\frac{1}{1-\frac{\theta_{all}}{\prod\limits_{\emptyset\neq I\subset \{1,\ldots,n\}\setminus\{k_1,\ldots,k_i\}}\theta_I}}-1\right)^p,
\end{equation}
where $\alpha_{\{k_1,\ldots,k_i\}}$ and $\theta_{all}$ are given in (\ref{alfy}) and (\ref{teta.all}), respectively.
In particular, if the component lifetimes $X_1,X_2,\ldots,X_n$ are exchangeable with the MVG distribution with parameters $\theta_I\in [0,1]$, $\emptyset\neq I\subset \{1,2,\ldots,n\}$ satisfying (\ref{cMVGexch}),
and $(\alpha_1,\ldots,\alpha_n)$ is the minimal signature of the coherent system, then (\ref{fmCSMVG}) simplifies to
\begin{equation}
\label{fmCSMVGexch}
E(T)_p=p!\sum_{i=1}^{n} \alpha_{i}\left(\frac{1}{1-\prod\limits_{j=1}^{n} \theta_j^{{n \choose j}-{n-i \choose j}}}-1\right)^p.
\end{equation}
\end{corollary}
We illustrate Corollary \ref{mCSMVG} with the following example.
\begin{example}
Let us consider the bridge system presented in Figure \ref{bridge} and assume that the joint distribution of its component lifetimes $X_1,X_2,\ldots,X_5$ is MVG with parameters $\theta_I\in [0,1]$, $\emptyset\neq I\subset \{1,2,\ldots,5\}$.
We will find the expectation $ET$ and variance $Var(T)$ of the time to failure of this system.
\begin{figure}[htb]
\centering
\begin{tabular}{c}
$\xymatrix@R=8pt@C=15pt{
& & *+[F-,]{1} \ar@{-}[r] & *[o]{} \ar@{-}[r] & *+[F-,]{2} \ar@{-}[rd] \\
*[o]{} \ar@{-}[r] & *[o]{} \ar@{-}[ru] \ar@{-}[rd] & & *+[F-,]{5} \ar@{-}[u] \ar@{-}[d] & & *[o]{} \ar@{-}[r] & \\
& & *+[F-,]{4} \ar@{-}[r] & *[o]{} \ar@{-}[r] & *+[F-,]{3} \ar@{-}[ru]}$
\end{tabular}
\caption{\small Bridge system}\label{bridge}
\end{figure}
There are four minimal path sets of this bridge system:
$$
P_1=\{1,2\}, P_2=\{3,4\}, P_3=\{1,3,5\}, P_4=\{2,4,5\}.
$$
Moreover,
$$P_1\cup P_2=\{1,2,\ldots,5\}\setminus\{5\},$$
$$P_1\cup P_3=\{1,2,\ldots,5\}\setminus\{4\},$$
$$P_1\cup P_4=\{1,2,\ldots,5\}\setminus\{3\},$$
$$P_2\cup P_3=\{1,2,\ldots,5\}\setminus\{2\},$$
$$P_2\cup P_4=\{1,2,\ldots,5\}\setminus\{1\},$$
$$P_3\cup P_4=\{1,2,\ldots,5\}$$
and
$$\bigcup_{l=1}^jP_{i_l}=\{1,2,\ldots,5\} \; \hbox{ if } 1\leq i_1<\cdots<i_j\leq 5 \hbox{ and } j\geq 3.$$
Hence
\begin{equation}
\label{bridge_alfy}
\alpha_{\{k_1,\ldots,k_i\}}=\left\{
\begin{array}{cl}
1 & \hbox{ if } \; \{k_1,\ldots,k_i\}\in\{P_1,P_2,P_3,P_4\} \\
-1 & \hbox{ if } \; \{k_1,\ldots,k_i\}\in \left\{\{1,2,\ldots,5\}\setminus\{i\}, i=1,2,\ldots,5\right\}\\
-1+{4 \choose 3}-1=2 & \hbox{ if } \; \{k_1,\ldots,k_i\}=\{1,2,\ldots,5\}\\
0 & \hbox{ otherwise}
\end{array},
\right.
\end{equation}
by (\ref{alfy}). From (\ref{fmCSMVG}) we get, for $p=1,2,\ldots$,
\begin{eqnarray}
E(T)_p &=& p! \left\{\left(\frac{1}{1-\frac{\theta_{all}}{\theta_{\{3\}} \theta_{\{4\}}\theta_{\{5\}} \theta_{\{3,4\}} \theta_{\{3,5\}} \theta_{\{4,5\}} \theta_{\{3,4,5\}} }}-1\right)^p \right. \nonumber \\
&&+ \left(\frac{1}{1-\frac{\theta_{all}}{\theta_{\{1\}} \theta_{\{2\}}\theta_{\{5\}} \theta_{\{1,2\}} \theta_{\{1,5\}} \theta_{\{2,5\}} \theta_{\{1,2,5\}} }}-1\right)^p \nonumber \\
&& + \left(\frac{1}{1-\frac{\theta_{all}}{\theta_{\{2\}} \theta_{\{4\}}\theta_{\{2,4\}} }}-1\right)^p
+ \left(\frac{1}{1-\frac{\theta_{all}}{\theta_{\{1\}} \theta_{\{3\}}\theta_{\{1,3\}} }}-1\right)^p \nonumber \\
&&\left. -\sum_{i=1}^{5} \left(\frac{1}{1-\frac{\theta_{all}}{\theta_{\{i\}} }}-1\right)^p
+2\left(\frac{1}{1-\theta_{all}}-1\right)^p \right\}, \label{fmbridge}
\end{eqnarray}
where $\theta_{all}=\prod_{\emptyset\neq I\subset \{1,2,\ldots,5\}}\theta_I$. Taking $p=1$ and $p=2$ we obtain $ET$ and $E(T(T-1))$, respectively. Then we can compute $Var(T)$ using the relation $Var(T)=E(T(T-1))+ET\left(1-ET\right)$. In Table \ref{Table8} we demonstrate values of $ET$ and $Var(T)$ obtained for the following selected settings of the parameters $\theta_I$, $\emptyset\neq I\subset \{1,2,\ldots,5\}$ ($\theta_I$ which are not listed are assumed to equal $1$):
\begin{enumerate}
\item[(1)] $\theta_{\{1\}}=0.9$, $\theta_{\{3\}}=0.8$, $\theta_{\{1,4,5\}}=\theta_{\{2,3,5\}}=0.99$;
\item[(2)] $\theta_{\{1\}}=\theta_{\{2\}}=0.9$, $\theta_{\{3\}}=\theta_{\{4\}}=\theta_{\{5\}}=0.8$, $\theta_{\{1,4,5\}}=\theta_{\{2,3,5\}}=0.99$;
\item[(3)] $\theta_{\{1\}}=\theta_{\{2\}}=0.9$, $\theta_{\{3\}}=\theta_{\{4\}}=\theta_{\{5\}}=0.8$;
\item[(4)] $\theta_{\{i\}}=0.9$, $i=1,2,\ldots,5$, $\theta_{\{i,j\}}=0.95$, $i,j\in\{1,2,\ldots,5\}$ and $i\neq j$;
\item[(5)] $\theta_{\{i\}}=0.9$, $i=1,2,\ldots,5$, $\theta_{\{i,j\}}=0.95$, $i,j\in\{1,2,\ldots,5\}$ and $i\neq j$, $\theta_{\{1,2,\ldots,5\}}=0.99$.
\end{enumerate}
Clearly (3) corresponds to the case when the RV's $X_1, X_2,\ldots,X_5$ are independent, while (4) and (5) to the case when the random vector $(X_1, X_2,\ldots,X_5)$ is exchangeable.
\begin{table}[ht]
\footnotesize
\caption{Expectation and variance of $T$ for the bridge system when $(X_1,X_2,\ldots,X_5)$ has MVG distribution with parameters $\theta_I$, $\emptyset\neq I\subset\{1,2,3,4,5\}$ (not listed parameters are assumed to be equal to 1)}
\begin{tabular}{|c||c|c||}
\hline
$\theta_I$ & E$T$ & Var$(T)$\\ \hline\hline
$\theta_{\{1\}}=0.9$, $\theta_{\{3\}}=0.8$ &49.251& 2474.938\\
$\theta_{\{1,4,5\}}=\theta_{\{2,3,5\}}=0.99$ && \\ \hline
$\theta_{\{1\}}=\theta_{\{2\}}=0.9$, $\theta_{\{3\}}=\theta_{\{4\}}=\theta_{\{5\}}=0.8$ &4.751&16.996\\
$\theta_{\{1,4,5\}}=\theta_{\{2,3,5\}}=0.99$ && \\ \hline
$\theta_{\{1\}}=\theta_{\{2\}}=0.9$, $\theta_{\{3\}}=\theta_{\{4\}}=\theta_{\{5\}}=0.8$ &5.237&20.001\\\hline
$\theta_{\{i\}}=0.9$, $i=1,\ldots,5$, $\theta_{\{i,j\}}=0.95$, $i,j\in\{1,2,\ldots,5\}$ and $i\neq j$ &2.163&4.167\\ \hline
$\theta_{\{i\}}=0.9$, $i=1,\ldots,5$, $\theta_{\{i,j\}}=0.95$, $i,j\in\{1,2,\ldots,5\}$ and $i\neq j$ &2.109&4.034\\
$\theta_{\{1,2,\ldots,5\}}=0.99$ &&\\ \hline
\end{tabular}
\label{Table8}
\end{table}
In general, in the situation when $X_1, X_2,\ldots,X_5$ are independent, that is when $\theta_I=1$ for all $\emptyset\neq I\subset \{1,2,\ldots,5\}$ except singletons, (\ref{fmbridge}) simplifies to
\begin{eqnarray}
E(T)_p &=& p! \left\{\left(\frac{1}{1-\theta_{\{1\}} \theta_{\{2\}}}-1\right)^p
+\left(\frac{1}{1-\theta_{\{3\}} \theta_{\{4\}}}-1\right)^p\right. \nonumber \\
&&+ \left(\frac{1}{1-\theta_{\{1\}} \theta_{\{3\}} \theta_{\{5\}} }-1\right)^p
+ \left(\frac{1}{1-\theta_{\{2\}} \theta_{\{4\}} \theta_{\{5\}} }-1\right)^p \nonumber \\
&&\left. -\sum_{i=1}^{5} \left(\frac{1}{1-\frac{\theta_{all}^{ind}}{\theta_{\{i\}} }}-1\right)^p
+2\left(\frac{1}{1-\theta_{all}^{ind}}-1\right)^p\right\}, \label{fmbridge_ind}
\end{eqnarray}
where $\theta_{all}^{ind}=\prod_{i=1}^5\theta_{\{i\}}$.
If in turn $X_1, X_2,\ldots,X_5$ are exchangeable with
\begin{equation}
\label{thety_exch_bridge}
\theta_{\{i_1,\ldots,i_s\}}=\theta_s\in[0,1] \quad \hbox{ for all } 1\leq s\leq 5 \hbox{ and } 1\leq i_1<\cdots<i_s\leq 5,
\end{equation}
where at least one $\theta_s$, $1\leq s\leq 5$, is not equal to 1, then to compute $E(T)_p$ we can use directly (\ref{fmCSMVGexch}) but first we need to find the minimal signature $(\alpha_1,\alpha_2,\ldots,\alpha_5)$ for the bridge system. We can do this using one of the following methods:
\begin{enumerate}
\item substitute (\ref{bridge_alfy}) into (\ref{alfy_exch});
\item apply (\ref{alfyzSamaniego}) together with the known fact that the Samaniego signature $(s_1,s_2,\ldots,s_5)$ for the bridge system is equal to $(0,\frac{1}{5},\frac{3}{5},\frac{1}{5},0)$;
\item find the bridge system in Table 1 of \cite{NR10} (it is in the row numbered $N=93$) and then read its minimal signature from Table 2 of \cite{NR10}.
\end{enumerate}
The result is $(\alpha_1,\alpha_2,\ldots,\alpha_5)=(0,2,2,-5,2)$. Hence (\ref{fmCSMVGexch}) gives
\begin{eqnarray}
E(T)_p &=& p! \left\{2\left(\frac{1}{1-\frac{\theta_{all}^{exch}}{\theta_1^3\theta_2^3\theta_3}}-1\right)^p
+2\left(\frac{1}{1-\frac{\theta_{all}^{exch}}{\theta_1^2\theta_2}}-1\right)^p\right. \nonumber \\
&&\left. -5\left(\frac{1}{1-\frac{\theta_{all}^{exch}}{\theta_1}}-1\right)^p
+2\left(\frac{1}{1-\theta_{all}^{exch}}-1\right)^p \right\}, \label{fmbridge_exch}
\end{eqnarray}
where $\theta_{all}^{exch}=\prod_{j=1}^{5}\theta_j^{{5 \choose j}}$.
Of course (\ref{fmbridge_exch}) can be also obtained by substituting (\ref{thety_exch_bridge}) into (\ref{fmbridge}).
In the case when $X_1,X_2,\ldots,X_5$ are IID and geometrically distributed with parameter $\pi\in(0,1)$, both (\ref{fmbridge_ind}) and (\ref{fmbridge_exch}) lead to
\begin{eqnarray*}
E(T)_p &=& p! \left\{2\left(\frac{1}{1-(1-\pi)^2}-1\right)^p
+2\left(\frac{1}{1-(1-\pi)^3}-1\right)^p \right. \\
&&\left. -5\left(\frac{1}{1-(1-\pi)^4}-1\right)^p
+2\left(\frac{1}{1-(1-\pi)^5}-1\right)^p \right\},
\end{eqnarray*}
by taking
$$
\theta_{\{i_1,\ldots,i_s\}}=
\left\{
\begin{array}{ll}
\theta_1=1-\pi & \hbox{ if } \{i_1,\ldots,i_s\}=\{i\}, i=1,2,\ldots,5, \\
\theta_s=1 & \hbox{ otherwise.}
\end{array}
\right.
$$
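A small Python sketch (ours) evaluating the last display returns $ET$ and $Var(T)$ for any $\pi$; it uses the minimal signature $(0,2,2,-5,2)$ and the relation $Var(T)=E(T(T-1))+ET\left(1-ET\right)$.
\begin{verbatim}
# Illustrative sketch (ours): bridge system with IID ge(pi) components.
from math import factorial

def bridge_factorial_moment(pi, p):
    alpha = {2: 2, 3: 2, 4: -5, 5: 2}          # minimal signature (0, 2, 2, -5, 2)
    return factorial(p) * sum(a * (1.0 / (1.0 - (1.0 - pi) ** i) - 1.0) ** p
                              for i, a in alpha.items())

def bridge_mean_var(pi):
    m1 = bridge_factorial_moment(pi, 1)
    m2 = bridge_factorial_moment(pi, 2)        # E(T(T-1))
    return m1, m2 + m1 * (1.0 - m1)            # Var(T) = E(T(T-1)) + ET(1 - ET)

print(bridge_mean_var(0.1))
\end{verbatim}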
In Figure \ref{wykresETVarT} we present the expectation $ET$ and variance $Var(T)$ as functions of $\pi\in(0,0.25)$ in the case when $X_1,X_2,\ldots,X_5$ are IID and $X_i$, $i=1,2,\ldots,5$, are geometrically distributed with parameter $\pi$.
\begin{figure}
\includegraphics[width=1\textwidth]{WykresyBridgeIIDgeom}
\caption{Expectation $ET$ and variance $Var(T)$ as functions of $\pi\in(0,0.25)$ for the bridge system when $X_1,X_2,\ldots,X_5$ are IID and $X_i\sim ge(\pi)$, $i=1,2,\ldots,5$}
\label{wykresETVarT}
\end{figure}
\end{example}
Before we present the next example let us mention a useful observation.
\begin{remark}
\label{Sec4Rem1}
If $X_i\sim \mathrm{Pois}(\lambda_i)$, $\lambda_i>0$, $i=1,2,\ldots,n$, then the same analysis as that used in the proof of Corollary \ref{th2c1} shows that (\ref{M0CS}) is satisfied if
\begin{equation}
\label{M0Pois_system}
M_0=\left\{
\begin{array}{lcl}
p-2 & \hbox{if} & \tilde{d}_{Pois}\leq 0 \\
F_{X_{j_0}}^{\leftarrow}(\tilde{d}_{Pois})+p-1 & \hbox{if} & \tilde{d}_{Pois}\in(0,1)
\end{array}
\right.,
\end{equation}
where $ j_0=\operatorname{argmax}_{j=1,\ldots,n}\lambda_j$ and
$$
\tilde{d}_{Pois}=1-d\, 2^{-p(p-1)/2}\lambda_{j_0}^{-p} \left(\sum_{i=1}^{n}\sum\limits_{\{k_1,\ldots,k_i\}\subset\{1,\ldots, n\}}\alpha_{\{k_1,\ldots,k_i\}}^+\right)^{-1}.
$$
\end{remark}
\begin{example}
Let us again consider the bridge system presented in Figure~\ref{bridge} but now let its component lifetimes $X_1,X_2,\ldots,X_5$ be independent and $X_i\sim \mathrm{Pois}(\lambda_i)$, $\lambda_i>0$, $i=1,2,\ldots,5$.
Then using Theorem \ref{mCS} (ii), (\ref{bridge_alfy}) and Remark \ref{Sec4Rem1} we see that to compute $ET^p$, $p=1,2,\ldots$, with an error not greater than $d>0$ we can apply the approximate formula
\begin{eqnarray}
ET^p&\approx&\sum\limits_{m=0}^{M_0} \big((m+1)^p-m^p\big) \sum_{i=1}^{n} \sum\limits_{\{k_1,\ldots,k_i\}\subset\{1,\ldots, n\}}
\alpha_{\{k_1,\ldots,k_i\}} \prod_{l=1}^iP\left(X_{k_l}>m\right) \nonumber \\
&=&\sum\limits_{m=0}^{M_0} \big((m+1)^p-m^p\big)
\bigg\{P(X_1>m)P(X_2>m)+P(X_3>m)P(X_4>m) \nonumber \\
&&+\left(\prod_{i=1}^5P(X_i>m)\right)\bigg(\frac{1}{P(X_2>m)P(X_4>m)}+\frac{1}{P(X_1>m)P(X_3>m)} \nonumber \\
&& -\sum_{j=1}^5\frac{1}{P(X_j>m)}+2\bigg)
\bigg\}, \label{bridge_Pois}
\end{eqnarray}
where $M_0$ is given by (\ref{M0Pois_system}) with $ j_0=\operatorname{argmax}_{j=1,\ldots,5}\lambda_j$ and $\tilde{d}_{Pois}=1-d\, 2^{-p(p-1)/2}\lambda_{j_0}^{-p}6^{-1}$.
Moreover, if $X_1,X_2,\ldots,X_5$ are not only independent but also identically distributed, then (\ref{bridge_Pois}) reduces to
\begin{eqnarray*}
ET^p&\approx&2\sum\limits_{m=0}^{M_0} \big((m+1)^p-m^p\big) \left(P(X_1>m)\right)^2 \bigg\{1+P(X_1>m) \\
&& -\frac{5}{2}\left(P(X_1>m)\right)^2+\left(P(X_1>m)\right)^3\bigg\}.
\end{eqnarray*}
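A Python sketch (ours) of the last approximation, with $M_0$ chosen as in Remark \ref{Sec4Rem1} (for the bridge system $\sum\alpha^+=6$; the quantile is taken as the smallest $k$ with $F(k)\geq q$), is given below. With $\lambda=1$, $p=1$ and $d=0.0005$ it returns a value close to the entry $0.877$ of Table \ref{Table9}.
\begin{verbatim}
# Illustrative sketch (ours): IID Pois(lam) bridge system, truncated sum with
# M_0 from Remark Sec4Rem1.
from math import exp

def pois_cdf(m, lam):
    term, total = exp(-lam), exp(-lam)
    for k in range(1, m + 1):
        term *= lam / k
        total += term
    return total

def pois_quantile(q, lam):
    k, term, total = 0, exp(-lam), exp(-lam)
    while total < q:
        k += 1
        term *= lam / k
        total += term
    return k

def bridge_pois_moment(lam, p, d):
    d_tilde = 1.0 - d * 2.0 ** (-p * (p - 1) / 2) * lam ** (-p) / 6.0
    M0 = p - 2 if d_tilde <= 0 else pois_quantile(d_tilde, lam) + p - 1
    total = 0.0
    for m in range(M0 + 1):
        s = 1.0 - pois_cdf(m, lam)             # P(X_1 > m)
        total += ((m + 1) ** p - m ** p) * s ** 2 * (1 + s - 2.5 * s ** 2 + s ** 3)
    return 2.0 * total

print(bridge_pois_moment(lam=1.0, p=1, d=0.0005))   # about 0.877, cf. Table 9
\end{verbatim}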
For illustrative purposes we fixed the level of accuracy $d=0.0005$. Then in Table \ref{Table9}, for some selected values of $\lambda_1, \lambda_2,\ldots,\lambda_5$, we demonstrate expectations $ET$ and second raw moments $ET^2$ for the bridge system when $X_1,X_2,\ldots,X_5$ are independent and $X_i\sim \mathrm{Pois}(\lambda_i)$, $i=1,2,\ldots,5$.
Additionally in brackets we provide the corresponding values of $M_0$, that is the numbers of terms in the sum sufficient to obtain the desired accuracy.
Next, under the stronger assumption that $X_1,X_2,\ldots,X_5$ are IID with $X_i\sim \mathrm{Pois}(\lambda)$, in Figure \ref{wykresETET2}, we present $ET$ and $ET^2$ as functions of $\lambda\in(0,100)$. It is interesting that the function $ET(\lambda)$ describing the dependence of $ET$ on $\lambda\in(1,100)$ is such that $ET(\lambda)\approx \lambda$. Moreover, numerical calculations show that this property holds also for $\lambda\geq100$, and $|ET(\lambda)-\lambda|$ decreases as $\lambda$ increases.
\bigskip
\begin{table}[ht]
\footnotesize
\caption{Expectation and second raw moment of $T$ for the bridge system when $X_1,X_2,\ldots,X_5$ are independent and $X_i\sim Pois(\lambda_i)$, $i=1,2,\ldots,5$}
\begin{tabular}{|c||c|c||}
\hline
$\lambda_i$ & E$T$ & E$T^2$\\ \hline\hline
$\lambda_i=1$, $i=1,2,\ldots,5$ &0.877&1.246\\
&(6) &(8)\\\hline
$\lambda_i=i$, $i=1,2,\ldots,5$ &2.728&8.935\\
&(17)&(19)\\\hline
$\lambda_i=6-i$, $i=1,2,\ldots,5$ &3.458&13.980\\
&(17) &(19)\\\hline
$\lambda_1=\lambda_2=10$, $\lambda_3=\lambda_4=20$, $\lambda_5=50$ &17.600&321.251\\
&(86)&(95)\\\hline
$\lambda_1=\lambda_4=20$, $\lambda_2=50$, $\lambda_3=\lambda_5=10$&20.103&422.855\\
&(86)&(95)\\\hline
\end{tabular}
\label{Table9}
\end{table}
\begin{figure}
\includegraphics[width=1\textwidth]{WykresyBridgeIIDPoiss_new}
\caption{Expectation $ET$ and second raw moment $ET^2$ as functions of $\lambda\in(0,100)$ for the bridge system when $X_1,X_2,\ldots,X_5$ are IID and $X_i\sim \mathrm{Pois}(\lambda)$, $i=1,2,\ldots,5$}
\label{wykresETET2}
\end{figure}
\end{example}
\section*{Acknowledgments}
The authors wish to thank Prof. Jorge Navarro for suggesting the extension of the formulas describing moments of lifetimes of $k$-out-of-$n$ systems to formulas valid for arbitrary coherent systems.
A. D. and A. G. acknowledge financial support for this research from Warsaw University of Technology under Grant no. 504/03910/1120 and Polish National Science Center under Grant no. 2015/19/B/ST1/03100, respectively.
\section{Background}
In this section we present the problem Dataset-on-Demand (DoD) solves (Section~\ref{subsec:user_experience}) and how we evaluate it. We also describe Aurum, on which we build DoD, in Section~\ref{subsec:aurum}. The overall architecture of DoD is shown in \mbox{Fig.\hspace{0.25em}}\ref{fig:arch}.
\mypar{Definitions} A query view, $qv$, consists of a set of attributes, $A$,
and, optionally, a set of tuples, $T$. A tuple is a set of values, $V$, drawn
from the attributes' domain. If $qv$ contains tuples, these can have values for
only a subset of $A$. Each attribute and tuple value in a query view is called
an \emph{attribute} and \emph{value} constraint, respectively.
A view candidate, $c$, consists of a set of attributes $A' \subseteq A$ and a set of tuples $T'$ that contains the tuples $T$ defined in $qv$.
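For concreteness, a query view could be represented along the following lines; this is only an illustrative sketch, and the class and field names are ours rather than DoD's actual data structures.
\begin{verbatim}
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class QueryView:
    attributes: List[str]        # attribute constraints A
    example_tuples: List[Dict[str, str]] = field(default_factory=list)

# the employee-address query view with one example value
qv = QueryView(attributes=["employee", "address"],
               example_tuples=[{"employee": "Raul CF"}])
\end{verbatim}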
\subsection{The Dataset-on-Demand Problem}
\label{subsec:user_experience}
Consider a user who wants to know the \textsf{address} of every \textsf{employee} in a company. The user writes a query view with the relevant attributes, as shown in the example of \mbox{Fig.\hspace{0.25em}}\ref{fig:intro-example}. We do not assume the user knows the precise attribute names, but the DoD interface helps identify them through an autocomplete feature.
In addition to the schema, the user may add example tuples to the query view,
such as `Raul CF' in the example (see \mbox{Fig.\hspace{0.25em}}\ref{fig:intro-example}). We assume users may make mistakes when introducing example tuples, e.g.,\ typos. In addition, the data may be dirty and not match the user's input. As a consequence, we do not expect that the values in the query view will appear exactly in the data.
\mypar{View candidate search} Given $qv$, DoD finds a collection of
select-project-join queries that, when executed, lead to a collection of candidate views, $C$. In \mbox{Fig.\hspace{0.25em}}\ref{fig:arch}, step 1, the user submits a query view to DoD, which produces (step 2) $|C|=3$ output views. In practice, the number of views is often in the tens or more. The size of $C$ may be large for several reasons. For example, the attributes of the query view can appear in multiple different tables, e.g.,\ `employee' may appear in many tables that are irrelevant to this query. In addition, some of the inclusion
dependencies used to produce the views may be wrong, leading to spurious views
(such as joining with the table `Customers' in the example of
\mbox{Fig.\hspace{0.25em}}\ref{fig:intro-example}). Furthermore, some $c \in C$ may fulfill the query
view only partially, e.g.,\ lacking some attributes. These views have to be
considered as well, because if all views that fully fulfill the query view are
wrong, the users can consider these partially fulfilled views. We explain this
in Section~\ref{sec:engine}.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{img/arch2}
\caption{Step 0: Aurum builds a discovery index from a collection of tables.
Step 1: A user creates and submits a query view. Step 2: DoD searches for views
based on the input query view. Step 3: The 4C method classifies all views into 4
groups (figure shows c1=compatible, and c3=complementary). Step 4: a
presentation strategy uses the 4 groups to reduce the number of views. Step 5:
the resulting views are presented to the user.}
\label{fig:arch}
\end{figure}
If DoD were to produce all $C$, the choice space users would need to investigate
would be very large: they would need to look at every view to choose the right
one, or settle for the first one they find that seems appropriate, at the risk of leaving better views unexplored. Although we can prioritize candidate views that
fulfill the largest number of constraints, there are still many to consider
manually, so we need a strategy to help users select the right candidate view.
\mypar{View candidate selection} Given the candidate views, $C$, 4C classifies
them into 4 groups (step 3 of \mbox{Fig.\hspace{0.25em}}\ref{fig:arch}), and a \emph{presentation
strategy} is applied to reduce the total size of views that users must inspect.
This is shown in step 4 in \mbox{Fig.\hspace{0.25em}}\ref{fig:arch}.
Different presentation strategies cater to different needs. For example, with
the \textsf{4c-summary} strategy (explained later) users may interact with the
system indicating `good' views and `bad' views, and this information can be used
to further narrow down the search. Alternatively, DoD can avoid any user
intervention with a \textsf{multi-row relation} presentation strategy we present
later. We explain this in Section~\ref{sec:4c}.
\underline{\emph{4C Example:}} Suppose the \emph{`employee-address'} query view
above produces 16 views. Out of those, 14 fully fulfill the query view, and 2
others are partial, i.e.,\ lack some attribute defined in the query view. Because
we prefer views that fully fulfill the query view, we apply the 4C method with
the \textsf{4c-summary} presentation strategy (presented in more detail later)
to the group with 14 views first. The 4C method finds that 8 of the 14 views
are \emph{compatible}, i.e.,\ they are exactly the same. This can happen, for
example, because equivalent join attributes were used to assemble the views, or
because there are copies of the underlying tables in different databases. In
this case, \textsf{4C-summary} summarizes those 8 views into 1. Out of the
remaining 6 views, 3 of them are \emph{contained} in another one. This can
happen, for example, because a view joined with tables that contained employees
of \emph{engineering}, and another view joined with tables that contained both
\emph{engineering} and \emph{sales}. Instead of showing the 4 views,
\textsf{4c-summary} shows only the one that contains the other 3.
Suppose the previous two summarized views are \emph{complementary}: each view
has values that do not exist in the other one. This can happen due to the
existence of different versions of the same table. \textsf{4c-summary} unions
these two views into 1. Finally, out of the remaining views, 4C finds a pair
that is \emph{contradictory}, that is, when looking at a particular key in one
view, the row values are different than when looking at that same key in the
other view. For example, an employee `Raul' has a `Pie street' address in one
view, but a `Flea Av' in another one. This can happen because the address
attribute in one view referred to `work address', while another one to `home
address'. It is possible to ask users to resolve contradictions, and use their
actions to further reduce the size of the choice space. Alternatively, another
presentation strategy can assemble a single view with multiple values for
employee `Raul', so this contradiction can be dealt with downstream. At the end,
instead of looking at 14 different views, 4C with \textsf{4c-summary} reduces
the size to a few choices, so it becomes much easier to select the right view
(step 5 in \mbox{Fig.\hspace{0.25em}}\ref{fig:arch}).
Classifying all the views in $C$ into the 4 classes is an expensive
process because it is necessary to compare all cells of each view to all other
views. We show later the algorithms we introduce to make the process practical.
\subsection{Data Discovery in Databases, Data Lakes, and Clouds with Aurum}
\label{subsec:aurum}
In this section, we describe Aurum~\cite{aurum}, an open source data discovery
platform on top of which we build DoD.
\mypar{Aurum overview} Aurum reads relations from data sources such as
databases, lakes, and files in filesystems, and produces a \emph{discovery
index} (corresponds to step 0 in \mbox{Fig.\hspace{0.25em}}\ref{fig:arch}). The discovery index is
logically a hypergraph where the nodes correspond to columns and edges represent
relationships between columns, e.g.,\ there is an inclusion dependency
relationship. Hyperedges are used to indicate columns that are part of some
hierarchy, e.g.,\ they appear in the same relation. This is useful to identify
tables from relevant columns. In addition to these edges, the discovery index
also includes a full text search index of textual columns, as well as their
attribute names.
Aurum provides a discovery API that uses the discovery index to answer complex
discovery queries. For example, a discovery program can ask for nodes in the
graph (columns) that contain a value, and then it can ask whether the tables
that contain those columns (using hyperedges) can be joined together. We detail
the relevant discovery API calls when describing DoD's engine in the next
section.
\mypar{Why approximate inclusion dependencies suffice} State of the art
approaches assume that a PKFK graph that indicates how to
join every pair of tables is available \cite{sfour, mweaver, exemplarqueries}.
This is a strong assumption we do not make. Obtaining such a graph across
multiple databases is extremely hard, and even if it existed, it would not be
enough to solve the problem of semantically different join paths (see the
example in Section~\ref{sec:introduction}).
DoD instead relies on inclusion dependencies, which Aurum finds automatically,
and uses these as a proxy to understand which tables join with each other on
which attributes. When using inclusion dependencies, we accept that some will not
be correct attributes on which to join tables, and will therefore lead to wrong
views. Nevertheless, this approach works because spurious views are classified
and contrasted with other (correct) views as part of the 4C method, so users can
choose the correct one.
Further, instead of using exact inclusion dependencies, DoD uses
\emph{approximate} inclusion dependencies. An approximate inclusion dependency
between two columns exists when values of one column are approximately included
in the other, and at least one of the two columns is almost unique. These
relaxations help with two practical problems. First, it is sometimes useful to use approximate keys to combine datasets that could not be combined otherwise, e.g.,\ joining two tables on an attribute `Office Phone' that is not a real key. This is common when tables come from different sources and nobody assigned a PKFK at design time. Second, because data is dirty, if we only considered exact keys, we would miss opportunities to combine datasets.
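For intuition, the check for an approximate inclusion dependency between two columns could look roughly as follows; this is a sketch using pandas, and the thresholds as well as the set-based containment measure are our assumptions, not Aurum's actual estimator.
\begin{verbatim}
import pandas as pd

def approx_inclusion(a: pd.Series, b: pd.Series,
                     containment=0.9, uniqueness=0.9):
    # values of `a` approximately included in `b`, and at least
    # one of the two columns almost unique
    va, vb = set(a.dropna()), set(b.dropna())
    if not va or not vb:
        return False
    contained = len(va & vb) / len(va)
    almost_unique = max(a.nunique() / len(a), b.nunique() / len(b))
    return contained >= containment and almost_unique >= uniqueness
\end{verbatim}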
\subsection{Performance Metrics}
There are two key metrics to evaluate DoD: the ability of 4C to reduce the total
number of views,
and the end-to-end runtime. To measure the first, we
apply the presentation strategy and measure how many views remain at the end.
Second, if finding and classifying all views takes hours, then the benefits of
DoD in comparison to manually inspecting all views would be unclear, so it is important for the end-to-end runtime to be no more than a few minutes.
\section{Discussion}
One fundamental difference of Aurum-DoD with respect to previous efforts to assemble views for users is that it merges the mapping and view-selection problems. Instead of relying on users to create mappings manually, or to select automatically suggested mappings, Aurum-DoD assembles all possible mappings. This is a good idea because, although some mappings will be wrong due to data quality issues or an incorrect inclusion dependency, multiple mappings with different semantics may also exist. In both cases, humans are better suited to choose the view that satisfies their needs.
We have identified a number of additional consequences of Aurum-DoD's design
which we have not exploited in this paper but are worth mentioning here as
avenues of future work:
\begin{myitemize}
\item\textbf{Focus on high-value tables} As the amount of tables that companies
accumulate grows, their governance (documentation, quality control, etc.)
becomes harder and harder. A history of query views submitted from users to
Aurum-DoD produces a rich log of tables that were involved in the assembling of
views. Because the views were requested by actual users, the collection of
tables indicates highly-valuable tables. This can be used to prioritize where to
dedicate preparation/governance effort.
\item\textbf{Beam data preparation} Similar to how we know what tables appear
more often in the assembly of views, we know what tables DoD tries to join
unsuccessfully. These cases provide an input for transformation algorithms,
which can be used to identify transformations to the tables so they join with
each other. The advantage of DoD in this case is on highlighting what tables are
worth joining, as trying to learn transformations across \emph{all} tables is
not possible.
\item\textbf{Find PKFK} Aurum-DoD relies on Aurum's ability to identify
inclusion dependencies automatically across many data sources. No information
from users is necessary, however, some of the inclusion dependencies won't be
real PKFKs. The 4C algorithm means that those wrong views will be ultimately
detected by users. However, it also demands more compute cycles that are not
dedicated to useful work. Therefore, although Aurum-DoD presents a practical way
of dealing with approximate PKFKs, in an automatic way, more research to
identify better quality PKFK will be welcome.
\item\textbf{Trustworthiness, cleanliness} People's relationship with data is often subjective: a user may trust a view more because they recognize some of its data, and a view that is dirty to one user may be clean to another. Aurum-DoD
does not make any decision on either the trustworthiness or cleanliness of views
and instead offers an efficient way of navigating them. However, it is
possible to include more metadata while the views are being assembled to help
users make their subjective calls.
\end{myitemize}
All in all, we think Aurum-DoD, as a system, provides an alternative way of
performing end to end data preparation. In this paper, we present a first step
in the automatic assembly of SPJ queries from an input query view definition,
and the organization of the output results for easy human and machine
consumption.
\section{Evaluation}
In this section we evaluate DoD with respect to the two critical metrics of
interest: capacity to reduce the size of the view choice space, and end-to-end
performance. We organize the section around 3 key questions:
\begin{myitemize}
\item Does DoD-4C reduce the size of the view choice space? The default view
choice space size is the number of candidate views generated by a query view. We
want to understand the reduction achieved by 4C.
\item Is the 4C-chasing algorithm fast enough to be practical? Classifying the
candidate views into the 4 categories is an expensive process. The longer time
it takes, the lower the benefits it yields. We compare with a baseline to
demonstrate its efficiency and practicality.
\item What is the end-to-end performance of DoD? We conduct experiments to understand DoD's end-to-end performance, the factors that contribute to it, its scalability, and the impact of each of the ad-hoc processing optimizations.
\end{myitemize}
We start the section by presenting the setup and datasets used and then explore
each of the questions above. We conclude the section with a summary of the
results obtained.
\subsection{Setup and Datasets}
We conduct all experiments on a MacBook Pro with 8GB of memory and a Core i7 with 4 cores at 3GHz each. We use two real-world datasets in our evaluation:
\textsf{DWH: }The MIT data warehouse dataset consists of 160 tables that have been integrated from 116 different databases. The dataset is heterogeneous, with information concerning many different aspects of the institute, faculty, facilities, students, subjects, footage, etc. Every table in this dataset is smaller than 100MB.
\textsf{CHE: }This dataset consists of 113 tables from two popular public
chemical databases, ChEMBL~\cite{chembl}, and DrugCentral~\cite{drugcentral}.
The databases contain some overlapping information, but their emphasis is
different, so it is common to conduct integration tasks between them. Tables of
this dataset are of varying sizes, with a few multi-gigabyte tables, others
below 1GB, and others within 100MB.
\smallskip
We use Aurum to build a discovery index for each dataset. We only provide Aurum
with a connector to the databases, and we do not give any information about
PKFKs or similar: all information DoD uses is inferred by Aurum. We use a
collection of 10 query views from the 2 datasets above, 5 queries from each
dataset. Throughout the section, we check that at least one of the candidate
views is a correct view of the DoD output; we have correct views for each of the
10 query views we use.
\subsection{Does DoD-4C reduce human intervention?}
The goal of 4C is to reduce the number of candidate views produced by the search
process. In this experiment we first submit query views to DoD and obtain
the number of candidate views generated. This number is shown in the `Original
Views' column of table~\ref{table:4csummary} (for \textsf{DWH}, we only show the
3 query views that produced a large number of views). This is the size of the
original view choice space: the number of views that users would need to
consider without the 4C method. Using the original candidate views as input,
we use the 4C method and measure the reduction of the view choice space.
We have implemented two presentation strategies. With \textsf{multi-row}, we
will always obtain 1 multi-row view per schema type, so the more interesting
case, and the one for which we show results, is that of \textsf{4c-summary}.
The results for \textsf{4c-summary} are shown in the `4c-summary' column of
table~\ref{table:4csummary}. The `x(y)' notation indicates the total number of
views (x), and the number of interactions users must make (y), i.e.,\ when choosing
among contradictory views. \textsf{4c-summary} reduces by several factors the
number of views for both datasets.
Note that users would likely not inspect each of the original candidate views.
Instead, they may select the first candidate view that seems to address their
needs. This, however, may leave a better view uncovered. In contrast, 4C does
not skip any candidate view, leading to a more exhaustive exploration.
\begin{table}[]
\centering
\begin{tabular}{|c|c|c|}
& \textbf{Original views} & \textbf{4c-summary} \\ \hline
\textbf{DWH2} & 9 & 4 \\
\textbf{DWH4} & 99 & 3(4) \\
\textbf{DWH5} & 102 & 11(11) \\
\textbf{CHE1} & 12 & 2(2) \\
\textbf{CHE2} & 27 & 3(4) \\
\textbf{CHE3} & 50 & 14(5) \\
\textbf{CHE4} & 54 & 2 \\
\textbf{CHE5} & 127 & 23(23)
\end{tabular}
\caption{Original and reduced views}
\label{table:4csummary}
\end{table}
\smallskip
\noindent\textbf{When does 4C work?} 4C's ability to reduce the size of the choice space
does not depend on the number of original candidate views, but on the actual
content of the views. Consider the column for \textsf{DWH4} and \textsf{DWH5},
as well as \textsf{CHE3} and \textsf{CHE4}. Although in both cases the original
number of views is similar, the reduction is quite different.
To understand why this is the case, consider a pathological case in which each
view is contradictory with respect to every other view, so every candidate view
must be considered at the output. Or, consider the opposite case where all views
are compatible with each other, and hence summarized into only 1, in which no
candidate view needs to be considered. Most query views we have used, including
the ones we use in this evaluation, contain a mix of the 4 groups. This is due
to semantic differences and wrong inclusion dependencies.
\mypar{Note on PKFK} Many candidate views are spurious because they were
obtained using a wrong join path. If the real PKFK-graph were available, the number of candidate views would be lower, but the PKFK-graph is rarely available. An important contribution of DoD is precisely that it does not assume its existence and still deals with those spurious views post-hoc.
\subsection{Is 4C-Chasing fast enough?}
4C's ability to reduce the size of the view choice space is only beneficial as
long as the reduction is achieved fast. If users had to wait for hours to
retrieve the results, the benefits would be unclear.
To understand \textsf{4C-Chasing} algorithm's efficiency, we measure its runtime
over the set of query views that yielded a high number of candidate views (we
use those with low candidate views to understand its overhead later). Further,
to justify the design of \textsf{4C-Chasing}, we compare its runtime with a
baseline implementation of 4C, called \textsf{No-Chasing}. \textsf{No-Chasing}
hashes rows so it can classify compatible and contained views quickly, but it
must perform the cell by cell comparison across views to identify complementary
and contradictory ones, as explained in Section~\ref{sec:4c}. A more naive
strategy that inspects each row individually takes too long even when the number
of candidate views is small.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\columnwidth]{evaluation/4cefficiency/chasing-benefit}
\caption{Runtime comparison of 4C-Chasing vs No-Chasing for different number of
views}
\label{fig:chasing-benefit}
\end{figure}
The results of \mbox{Fig.\hspace{0.25em}}\ref{fig:chasing-benefit} show that with the \textsf{4C-Chasing} algorithm we can summarize the views about 2 orders of magnitude faster than \textsf{No-Chasing} for most queries. Even the more modest improvements of \textsf{CHE2} and \textsf{CHE3} are 38\% and 84\%, respectively.
In summary, with \textsf{4C-Chasing} it is possible to vastly reduce the
view choice space in under 2 minutes, making it practical.
\noindent\textbf{What about 4C's overhead?}
4C is not necessary when the number of candidate views is low, or when
classifying them is very fast, e.g.,\ most views are compatible. In both of these
cases, we want to understand the overhead of executing 4C and its impact
on the end-to-end execution. To measure the overhead, we executed
\textsf{4C-Chasing} on query views that either produced few candidate views
(\textsf{DWH1}, \textsf{DWH2}, \textsf{CHE1}) or that produced mostly compatible
views, such as \textsf{CHE4}.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\columnwidth]{evaluation/4cefficiency/chasing-overhead}
\caption{Both 4C-Chasing and No-Chasing overhead is negligible when the number
of views is low}
\label{fig:chasing-overhead}
\end{figure}
The results of \mbox{Fig.\hspace{0.25em}}\ref{fig:chasing-overhead} show that both \textsf{4C-Chasing}'s
and \textsf{No-Chasing}'s overhead is low: under 2 seconds in both cases.
This means that it is possible to always execute 4C because its overhead on the
end-to-end runtime is negligible.
\subsection{Performance of DoD}
The end-to-end performance of DoD does not depend on the 4C method only, but
also on the view search process, which we evaluate here. For that, we measure
the time it takes for DoD to generate all the candidate views from the input
queries of both datasets. In the case of \textsf{DWH}, we fully materialize the
views, while in the case of \textsf{CHE}, we use the consistent sampling
strategy (see Section~\ref{sec:engine}), because the underlying tables of the
dataset are much larger and many joins would run out of memory otherwise.
\begin{table}[]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
& \textbf{\begin{tabular}[c]{@{}c@{}}Runtime\\ (s)\end{tabular}} &
\textbf{\begin{tabular}[c]{@{}c@{}}CG\end{tabular}} &
\textbf{\begin{tabular}[c]{@{}c@{}}P\end{tabular}} &
\textbf{\begin{tabular}[c]{@{}c@{}}JG\end{tabular}} &
\textbf{\begin{tabular}[c]{@{}c@{}}Join\\ graphs\end{tabular}} &
\textbf{\begin{tabular}[c]{@{}c@{}}MG\end{tabular}} \\ \hline
\textbf{DWH1} & 0.27 &
3 & 1 & 0
& 0 & 2
\\
\textbf{DWH2} & 46.8 &
22 & 48 & 9
& 2140 & 2
\\
\textbf{DWH3} & 4.6 &
15 & 9 & 3
& 9 & 9
\\
\textbf{DWH4} & 139.1 &
15 & 12 & 12
& 572 & 99
\\
\textbf{DWH5} & 683 &
24 & 47 & 12
& 2148 & 102
\\
\textbf{CHE1} & 856 &
4 & 2 & 2
& 69 & 12
\\
\textbf{CHE2} & 15.9 &
9 & 14 & 2
& 38 & 27
\\
\textbf{CHE3} & 16.1 &
11 & 6 & 5
& 61 & 50
\\
\textbf{CHE4} & 249.1 &
5 & 2 & 2
& 79 & 54
\\
\textbf{CHE5} & 37.5 &
11 & 11 & 6
& 127 & 122
\end{tabular}
\caption{Statistics about e2e view search process for both datasets. Candidate
groups (CG), Pairs of tables (P), Joinable groups (JG), Materializable groups (MG)}
\label{table:e2edod}
\end{table}
Table~\ref{table:e2edod} shows the end to end runtime (first column) of DoD for
different queries (shown in the rows) as well as statistics about each
execution. The total runtime depends on several aspects of the search process:
the effort to understand if tables join (\textsf{Does Join?}), the effort to
check whether the joinable tables are materializable (\textsf{Does
Materialize?}), and the time to actually materialize views
(\textsf{Materialize}). Runtime numbers, per operation are shown in
\mbox{Fig.\hspace{0.25em}}\ref{fig:e2edod} and \mbox{Fig.\hspace{0.25em}}\ref{fig:e2edod_chem}. The figures also include a
category \textsf{Other}, to represent time spent doing work that does not fit in
the categories above. We analyze each cost next:
\smallskip
\noindent\textbf{Does Join?} The \textsf{P} column in the table indicates
the pairs of tables for which the system must find join paths (see
Section~\ref{subsubsec:joinable}). This number depends on the number of
candidate groups, as well as the number of tables per group. The higher the
number of pairs of tables, the costlier the \textsf{Does Join?} operation is, as
confirmed by the results in the table and figures. This effect is more clearly
seen in the case of the \textsf{DWH} dataset, because there are more pairs of
tables for which to find join paths.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\columnwidth]{evaluation/e2edod/e2edod-short}
\caption{Total runtime split by component and for different queries (1/2)}
\label{fig:e2edod}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.9\columnwidth]{evaluation/e2edod/e2edod-long}
\caption{Total runtime split by component and for different queries of (2/2)}
\label{fig:e2edod_chem}
\end{figure}
\smallskip
\noindent\textbf{Does Materialize?} The \textsf{Join Graphs} column of
Table~\ref{table:e2edod} indicates how many potential ways we can join the
pairs of tables identified before. This number depends to some extent on the
number of joinable groups found (column \textsf{JG}), but mostly on how
connected the specific tables are within Aurum's discovery index. For each
joinable graph, DoD must check if the graph is materializable (see
Section~\ref{subsubsec:ismaterializable}), so naturally, the higher the number
of join graphs, the higher the cost of \textsf{Does Materialize?}.
\smallskip
\mypar{Materialize} Finally, the \textsf{MG} column (Table~\ref{table:e2edod}),
materializable groups, indicates how many of the materializable join graphs, if
materialized, would lead to a view that satisfies the input query view. This
variable helps with understanding how much of the runtime is dedicated to
actually materialize the joins to produce the output candidate views. In turn,
the cost of materialization depends on how many views to materialize, and how
expensive each join is. For example, in the case of the \textsf{DWH} dataset,
query \textsf{DWH5} is dominated by the materialization time. Each join is quite
cheap, but there are 102 joins to perform, which explains the relatively high
runtime. In the case of the \textsf{CHE} dataset, runtime is dominated by the
materialization step. This is when using the sampling strategy of Section~\ref{sec:engine}, which is necessary for this dataset because the underlying tables are large. Without it, many of the joins run out of memory, and those that do not each take about 6-8x longer to complete. The reason for the bottleneck is that, in order to obtain the sample, it is necessary to read the entire relation into memory first---Pandas does not support reading a relation selectively in an efficient way.
\subsubsection{Scalability}
DoD is built on top of Aurum, and benefits from its scalability~\cite{aurum}.
However, DoD uses other operations, such as those performed by the ad-hoc query
engine to understand if a join graph is materializable, and to materialize it
when it is. In this experiment, we want to understand whether DoD's view search
scales as the amount of work to perform grows. Ideally, the runtime depends
linearly on the amount of work that is necessary to perform.
\begin{table}[h]
\centering
\begin{tabular}{|l|c|c|c|}
\multicolumn{1}{|c|}{} & \textbf{\begin{tabular}[c]{@{}c@{}}X1\\
(9)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}X2\\ (36)\end{tabular}}
& \textbf{\begin{tabular}[c]{@{}c@{}}X4\\ (192)\end{tabular}} \\
\hline
\textbf{\# Pairs tables}
& 5 & 38 & 172 \\
\textbf{\# Join graphs}
& 9 & 72 & 576 \\
\textbf{\# Materializable groups}
& 3 & 24 & 192 \\
\end{tabular}
\caption{Statistics for the scalability experiment}
\label{table:scalability}
\end{table}
In order to conduct this experiment, we use the \textsf{DWH2} query view as
input, however, we use 3 instances of the problem with different scale factors.
The first one uses the original \textsf{DWH} dataset. The second one duplicates
the original (X2), and the third one quadruplicates it (X4). Note that we are
increasing the complexity exponentially, not linearly, as we are multiplying the
number of tables to check for joinability, as well as materialization, and the
total number of candidate views to materialize. The exact numbers that indicate
the amount of work are shown in table~\ref{table:scalability}.
We measure the time the system takes to perform each of these operations, and
compute the `Normalized Time' as the time per operation, i.e.,\ instead of the
aggregated one. A scalable system would show that the time per operation remains
constant regardless the scale factor.
\begin{figure}[h]
\centering
\includegraphics[width=0.95\columnwidth]{evaluation/scalabilitydod/scalability}
\caption{Scalability of main time-consuming procedures of DoD as the scale
factor increases: two times (X2), and four times (X4).}
\label{fig:scalability}
\end{figure}
\mbox{Fig.\hspace{0.25em}}\ref{fig:scalability} shows the results of the experiment. The \textsf{X} axis
shows the scale factor and the \textsf{Y} shows the normalized time (time per
operation). The figure shows that the time for \textsf{Does Materialize?} and
\textsf{Materialize} remains constant despite the work growth due to the
increase of the scale factor. The \textsf{Does Join?} operation cost increases a
bit for the 4X case, but it remains within the same scale factor.
\subsubsection{Ablation test}
In this last section, we conduct experiments to understand the benefits of the
optimizations of the ad-hoc query engine. For that, we select two
query views, \textsf{DWH4} and \textsf{DWH2}. We select these two because they
perform different amounts of work and produce different numbers
of views. We execute the query views with different optimizations enabled and
measure their runtime.
\mbox{Fig.\hspace{0.25em}}\ref{fig:ablation} shows the results of executing the queries without any
optimization enabled, \textsf{None} in the \textsf{X} axis. We then activate
optimizations incrementally. First, we activate the caching mechanism for the
\textsf{Does Join?} operation (\textsf{+C1} in the Figure), then the \textsf{Does
Materialize?} optimization, \textsf{+C2}, and last we execute the full
DoD, labeled \textsf{All}, which includes the relation cache to avoid reads from
disk.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\columnwidth]{evaluation/ablation/ablation}
\caption{Performance improves from left to right as more optimizations are
enabled (2 different queries)}
\label{fig:ablation}
\end{figure}
The figure clearly shows the performance benefits of \textsf{+C1} and
\textsf{+C2}. The advantages of the optimization for the \textsf{Materialize}
operation, which are the difference between the right-most point in the figure
and the previous one, are less prominent. The relative value of each of these
optimizations depends on the amount of work assigned to each of these
operations during the view search. However, in aggregate, the optimizations
implemented reduce the time with respect to a baseline implementation by a factor
of 4-6X.
\mypar{Note on sampling join} Without sampling join, many of the joins on large
tables run out of memory. Those that do not run out of memory take 6-8X more
time to complete.
\subsection{Summary of Results}
The evaluation focused on two key metrics of interest. First, we showed how 4C
reduces the size of the view choice space by an order of magnitude. It achieves
this reduction while executing within a few minutes at most. End-to-end, DoD
finds candidate views within minutes, and, in some cases within a few seconds.
In essence, DoD trades expensive human time for cheaper compute cycles to
accelerate view search and presentation.
\section{Conclusion}
DoD is a system that identifies views as combinations of multiple tables that
may span multiple heterogeneous data sources. DoD only requires a query view as
input, and does not assume the existence of a PKFK-graph. With the query view,
it finds all views that fully or partially satisfy it, and then reduces the size
of the choice space using the \textsf{4c-chasing} algorithm and a presentation
strategy. The 4C method is an effective way of dealing with views that are
semantically different, as well as with data quality problems.
\section{DoD Engine}
\label{sec:engine}
In this section we describe how DoD finds $C$, which corresponds to step 2 in
\mbox{Fig.\hspace{0.25em}}\ref{fig:arch}.
We start by describing the search process in Section
\ref{subsec:assembly-process}, as well as some details on the ad-hoc processing
engine we use to join relations across sources
(Section~\ref{subsubsec:ismaterializable}).
\subsection{Finding Candidate Views}
\label{subsec:assembly-process}
The search process finds $C$ from an input query view
following the next stages:
\subsubsection{Identify Candidate Tables}
During the first step, DoD identifies the tables
\emph{relevant} to the input query view. A table is relevant to the query
view if it fulfills an attribute constraint, a value constraint, or
any number of the previous. A table fulfills an attribute constraint when it
contains one attribute that appears in the query view. A table fulfills a
value constraint when the table contains both an attribute and value
constraint that were defined in the same column of the query view. Note that a
table that only fulfills a value constraint, but not the corresponding attribute
constraint, is not considered relevant: this is because values alone can appear
in many different columns of the underlying datasets.
To identify underlying tables that fulfill an attribute constraint, DoD uses
Aurum's data discovery API. Given an attribute name, Aurum returns all columns
(and the corresponding tables) that have an attribute name that matches (string
equality) the input attribute name. Because users make use of Aurum's
autocomplete feature to assemble the attributes in the query view in the first
place (see Section~\ref{subsec:user_experience}), exact string matching is sufficient to
identify the tables.
To identify underlying tables that fulfill a value constraint, DoD
uses Aurum again. It first finds the set of tables that fulfill the attribute
constraint, as above. The results are the attribute constraint group. Then, DoD
uses Aurum to search for the value constraint. Aurum returns columns that
contain the value constraint using a full-text search index. This is the
value constraint group. Finally, DoD takes the set intersection of both
groups: the result is the set of tables that fulfill the value
constraints.
The above procedure is performed for every attribute and value
constraint in the query view. The output of this process is a list of
\emph{relevant} tables along with the query view constraints they satisfy. We
call the union of relevant tables the \emph{candidate tables}.
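The intersection-based lookup above can be sketched as follows; the two dictionaries stand in for Aurum's attribute-name index and full-text index, so the interfaces shown are hypothetical and only illustrate the logic.
\begin{verbatim}
def find_relevant_tables(attributes, example_tuples, attr_index, text_index):
    # attr_index: attribute name -> tables whose schema contains it
    # text_index: value -> tables containing that value (full-text hit)
    relevant = {}                   # table -> set of fulfilled constraints
    for attr in attributes:
        for table in attr_index.get(attr, []):
            relevant.setdefault(table, set()).add(("attr", attr))
    for t in example_tuples:
        for attr, value in t.items():
            attr_group = set(attr_index.get(attr, []))
            value_group = set(text_index.get(value, []))
            for table in attr_group & value_group:   # both must hold
                relevant.setdefault(table, set()).add(("value", attr, value))
    return relevant
\end{verbatim}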
\subsubsection{Find Candidate Groups}
In this step, DoD wants to identify a group of tables that, if joined, would
satisfy the query view definition. The ideal group would fulfill all the
constraints in the query view, and would contain as few tables as possible---so
fewer joins would be necessary. Finding this group is akin to solving a set
cover problem over the candidate tables. This, however, is not sufficient,
because it may be impossible to join the tables in that group.
Instead, DoD finds \emph{all} subsets of tables from the candidate tables set
that, together, fulfill as many constraints from the input query view as
possible. Each subset of candidate tables is called a \emph{candidate group}.
Each candidate group, if materialized, would lead to a view that fulfills or
partially fulfills the query view.
DoD performs the search of candidate groups using a procedure that caters to two
preferences. First, groups that fulfill more constraints are preferred, as those
are closer to the query view. Second, among candidate groups that fulfill the
same number of constraints, smaller groups are preferred, because those involve
fewer joins. These criteria are implemented in a greedy search process that is
shown in detail in Algorithm~\ref{alg:candidatetables} and works as follows:
\input{algorithms/candidate_groups}
DoD sorts the candidate tables based on the number of constraints they fulfill
(line~\ref{alg:ct:sor}), from higher to lower. The search process takes the first
table of such list, called reference (\textsf{ref} in the algorithm) and adds it
to a candidate group (lines~\ref{alg:ct:s}-\ref{alg:ct:e}). Then, it iterates
over the tables in sort order (line~\ref{alg:ct:so}) checking, at each step,
whether the table fulfills constraints not covered by the reference table
(line~\ref{alg:ct:check}). When it does, the table is added to the candidate
group. At this point, if all constraints are fulfilled by the candidate group,
the group is stored and the process selects a new reference table. If not, then
the search continues. Note that in some cases, a single table may fulfill all
constraints (this is why it is necessary to check for query view fulfillment every time the candidate group is updated, such as in lines~\ref{alg:ct:check1} and \ref{alg:ct:check2}). This happens when the input query view corresponds to an existing
table.
The output of this step is a collection of candidate groups
(line~\ref{alg:ct:done}), each with the set of constraints it fulfills.
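As an illustration, the greedy search can be sketched as below; it is a simplification of Algorithm~\ref{alg:candidatetables} (it omits the de-duplication of groups, and the data structures are assumptions).
\begin{verbatim}
def candidate_groups(table_constraints, all_constraints):
    # table_constraints: table -> set of query-view constraints it fulfills
    groups = []
    tables = sorted(table_constraints,
                    key=lambda t: len(table_constraints[t]), reverse=True)
    for ref in tables:
        group, covered = [ref], set(table_constraints[ref])
        for t in tables:
            if covered == all_constraints:
                break                       # the group fulfills the query view
            if t != ref and table_constraints[t] - covered:
                group.append(t)             # adds uncovered constraints
                covered |= table_constraints[t]
        groups.append((group, covered))     # fully or partially fulfilling
    return groups
\end{verbatim}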
\subsubsection{Select Joinable Groups}
\label{subsubsec:joinable}
The goal of this step is to select among the candidate groups, those that are
\emph{joinable groups}. A joinable group is a candidate group in which all the
tables contained can be combined into one single table through the relational
join operation. There may be multiple ways of joining the tables of a joinable
group. This can happen if there exists more than one inclusion dependency
between a pair of tables, or there are different intermediate tables that can be
used to join two tables. Each strategy to join such tables is called a
\emph{join graph}. A joinable group, then, is a candidate group for which there
is at least one existing join graph.
For each candidate group, DoD must find all join graphs that permit combining
all tables. It is necessary to find all join graphs because some may be using a
wrong inclusion dependency (leading to a wrong view) or one that is correct, but
incompatible with the user's intent---when there are different semantic views
that fulfill the query view.
To find all the join graphs, DoD first finds all the join paths between every
pair of tables in the group. Then, it combines join paths together to form join
graphs that permit joining all tables in the group. DoD discards duplicate join
graphs---those that join the same tables on the same attributes---as well as
join graphs that do not join every table in the group.
In order to identify a join path between a pair of tables, DoD uses Aurum's
discovery API. Aurum returns all join paths between two tables, given a maximum
number of hops, by querying its discovery index. Once all join graphs are
identified, these are sorted based on the number of joins it is necessary to
perform in each case. The output of the process is then the joinable groups,
each with the fulfilled query view constraints.
\mypar{Note on value constraints} A joinable group indicates that when the
tables are joined together, the materialization will contain some of the
attribute constraints defined in the query view. It cannot say, however, whether
the materialization will contain a value constraint. This is because Aurum can
identify whether there are inclusion dependencies between two tables, but it
cannot answer whether the tables, when joined, would contain a particular value.
Identifying what joinable groups will contain the desired specification at the
output is the goal of the next step.
\subsection{Select Materializable Groups}
\label{subsubsec:ismaterializable}
A materializable group is a joinable group that, when materialized, contains all
the constraints fulfilled by the joinable group, including the value constraints. Because
Aurum cannot help with identifying materializable groups, DoD checks each
joinable group individually, i.e.,\ it performs the join and checks whether the
value constraint appears at the output.
In this section, we discuss alternative ways of achieving that, and explain the
alternative we chose. We then dedicate the rest of the section to specific
optimizations we had to implement in DoD to increase its efficiency.
The need for querying across data sources calls for a federated query
engine~\cite{haasfederation, bigdawg}. These engines receive queries expressed
in some general language, e.g.,\ SQL, and compile them into specific languages of
the data sources. They have a couple of drawbacks. First, such engines require configuring a \emph{connector} for each data source and coding the logic to translate from the general query language to the source-specific one.
Second, they need to interact with all the data sources in the enterprise,
making these systems hard to deploy in practice.
Another alternative would be to assume all data is in a data lake with querying
capacity, and run all queries there directly. However, in practice we have
found that data must often remain in certain data sources (e.g.,\ for security or governance reasons), or has not yet arrived in the lake. We only assume read
access.
We implemented an ad-hoc query engine that does not suffer the deployment
problems of federated systems and does not assume all data is in a central
repository, or data lake. The engine is implemented using Python
Pandas~\cite{pandas}. Its main advantage is that instead of relying on external
querying capacity, it owns query processing. Despite being slower than database query engines, the ability to seamlessly access multiple sources is a good match for DoD: we do not need to modify any system and can control with fine granularity how processing is done. We describe next a few technical details of the ad-hoc engine.
\subsubsection{Materializing a Join Graph}
A join graph contains the information necessary to join a set of N tables. The
join graph nodes represent the tables, and an edge between two nodes indicates that
the two tables can be joined together. Each node contains the attribute on which
the table should be joined.
To materialize a join graph, joins are performed from the \emph{leaf nodes to inner
ones}. A leaf node is one that is connected to only one other node. Every time a join is performed, its node and edge are removed. The procedure repeats N-1
times, until the resulting joined table is obtained.
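A sketch of this leaf-to-inner materialization is shown below. We represent the join graph with networkx and, for simplicity, keep the pair of join attributes on each edge rather than on the nodes; both choices are our assumptions rather than DoD's internal representation, and the sketch assumes the join graph is a tree.
\begin{verbatim}
import networkx as nx
import pandas as pd

def materialize(join_graph: nx.Graph, tables: dict) -> pd.DataFrame:
    # tables: node -> DataFrame; edge attribute "on" = (left_attr, right_attr)
    g = join_graph.copy()
    frames = {n: tables[n] for n in g.nodes}
    while g.number_of_edges() > 0:
        leaf = next(n for n in g.nodes if g.degree(n) == 1)   # pick a leaf
        nbr = next(iter(g.neighbors(leaf)))
        l_attr, r_attr = g.edges[leaf, nbr]["on"]
        frames[nbr] = frames[leaf].merge(frames[nbr],
                                         left_on=l_attr, right_on=r_attr)
        g.remove_node(leaf)                                   # N-1 joins in total
    return frames[next(iter(g.nodes))]
\end{verbatim}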
\mypar{On Join Ordering} Choosing the right join ordering is crucial for good
performance; this is one of the classic problems query optimizers aim to
solve. Query optimizers rely on data statistics to estimate what join
orderings are cheaper to compute. Unfortunately, in the scenario we consider, we
cannot assume that the underlying tables will contain statistics, because they
may come from different databases, and even file systems. Instead, DoD records the actual output cardinality as it executes joins, and relies on this information for future joins, as opposed to using the default \emph{leaf to inner
nodes} join strategy. Since each query view may trigger multiple similar joins,
this strategy is beneficial.
\subsubsection{Expensive-To-Compute Joins}
When materializing join graphs, DoD needs to deal with two practical problems:
expensive joins and joins that have an output larger-than-memory. We explain how
we deal with both problems next:
\mypar{Sampling Joins} Certain joins are so expensive that they become the main
computational bottleneck of the end-to-end view search. Instead of materializing
the entire join, it is possible to join only a \emph{sample} and then run the 4C
method on the sample join. Because 4C runs on a sample, and not the full
join, the view 4C class may change once the full join is materialized.
Therefore, joining only a sample comes with the tradeoff that users may need to
try a few more views before finding the right one.
The main challenge of joining on a sample is selecting the sample so that the
resulting views can be fed into the 4C method. For example, a naive sampling
strategy that selects uniformly from each join graph does not guarantee that
tuples will overlap in the resulting views, because each sampling process is
independent of the others. However, we need the samples to overlap; otherwise, the 4C method will not find a good classification of views.
To address this problem, DoD employs a strategy that selects samples in a
consistently random way. For every join, it uses a hash function on the join
attribute of the larger table to map the values from all relations to a common
hash space. The sample corresponds to the values with the top-K minimum (or
maximum) hash values, with K indicating the sample size. Because this strategy
is applied consistently across all join graphs, DoD makes sure that the
resulting views can be compared with each other, unlike with independent uniform
sampling. This strategy is reminiscent of conditional random
sampling~\cite{crs}, which has been used before to summarize and compare sets of
elements. As a final note, the sampled views are not guaranteed to contain the
value constraints defined in the input query view. If users want to see the
value constraints in the candidate view, it is possible to specifically search for their keys in the table and include those rows as part of the sample, although this is not necessary for 4C to work.
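The consistent sampling can be sketched as follows; the concrete hash function (a truncated MD5) and the pandas implementation are our assumptions and only illustrate the idea of keeping the rows whose join-key hashes are among the $k$ smallest.
\begin{verbatim}
import hashlib
import pandas as pd

def consistent_sample(df: pd.DataFrame, join_attr: str, k: int = 1000):
    # hash the join key into a common space and keep the k smallest keys,
    # so samples taken for different join graphs overlap on the same keys
    h = df[join_attr].astype(str).map(
        lambda v: int(hashlib.md5(v.encode()).hexdigest()[:15], 16))
    h = h.astype("int64")
    keep = set(h.drop_duplicates().nsmallest(k))
    return df[h.isin(keep)]
\end{verbatim}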
\mypar{Larger-than-Memory Joins} Sometimes it is necessary to materialize views
that do not fit in memory, so users can analyze them offline. Because Pandas
assumes that joins can be performed in memory, we implement a spill-to-disk
strategy to overcome this limitation. However, because the spill-Join is
significantly less efficient than the default in-memory join, we want to use it
only when the join won't fit in memory. The challenge is that without statistics
it is hard to estimate when this is the case.
Our solution is inspired by dynamic query reoptimization~\cite{reoptimization}.
When joining two tables, DoD selects a sample from one table and joins it with
the other table. It measures the output cardinality of the partial join, and
uses it to estimate the output cardinality of the whole table. With a measure of
the memory required by each row, we can then make an informed decision on what
join strategy to use: whether to use spill-Join, our external algorithm
implementation (when it does not fit in memory) or directly in memory (when it
does).
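A minimal sketch of this estimate and of the resulting decision is given below; the sample size, the linear scaling of the partial cardinality, and the memory-budget check are assumptions used for illustration.
\begin{verbatim}
import pandas as pd

def estimate_join_rows(left, right, l_attr, r_attr, sample_size=10000):
    # join a sample of `left` against `right` and scale up the output size
    sample = left.sample(min(sample_size, len(left)), random_state=0)
    partial = sample.merge(right, left_on=l_attr, right_on=r_attr)
    return int(len(partial) * len(left) / max(len(sample), 1))

def choose_join_strategy(left, right, l_attr, r_attr,
                         bytes_per_row, memory_budget):
    rows = estimate_join_rows(left, right, l_attr, r_attr)
    return "in-memory" if rows * bytes_per_row <= memory_budget else "spill-join"
\end{verbatim}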
\subsubsection{Caching Optimizations}
There are 3 operations that are the main computational bottlenecks during the
view search performed by DoD: determining whether a candidate group is
joinable, determining whether a joinable group is materializable, and
materializing the group.
We make extensive use of caches to amortize certain operations that are executed
often. To determine whether a pair of tables can be joined with each other, we
make a call to Aurum. Although this operation is relatively well optimized by
the Aurum engine, it must be performed multiple times, so caching the responses
helps with skipping computation. To determine whether a join graph is
materializable, we need to execute a join query with predicates given by the
constraints in the query view. For a given pair of table-attributes, we can
remember if they do not materialize. Then, before trying to materialize a new
join graph, we first check using this cache that the join graph does not contain
a non-materializable path (this strategy is equivalent to the optimizations
performed by S4~\cite{sfour}). Last, DoD reads tables from data sources in
order to check whether they are materializable and, if so, to materialize them.
Keeping an LRU cache with the read tables helps to avoid expensive IO.
\section{The 4C Method}
\label{sec:4c}
We assume we have found all candidate views, $C$, (step 2 in \mbox{Fig.\hspace{0.25em}}\ref{fig:arch}),
and present the 4C method. 4C classifies views into four classes
that we use to reduce the size of $C$. We start formally defining the 4
categories in Section~\ref{subsec:4c}. Then we describe the algorithms
(Section~\ref{subsec:algo}) and conclude the section describing different
presentation strategies (Section~\ref{subsec:presentation}).
\subsection{Compatible, Contained, Complementary, Contradictory Views}
\label{subsec:4c}
A candidate view, $v_i$, contains a set of rows, $R_i$, that can be retrieved with the
function $f$, $f(v_i) = R_i$. We can refer to a row, $r_i \in R_i$, with its key value,
$k_i$, which we can use to obtain the row; $k(R_i, k_i) = r_i$.
\mypar{Compatible views} Two candidate views, $v_1$ and $v_2$, are compatible if
they have the same cardinality $|R_1| = |R_2|$, and their set of tuples is
exactly the same: $|(R_1 \setminus R_2)| = 0$ and $|(R_2 \setminus R_1)| = 0$,
where the symbol $\setminus$ indicates the set difference operation.
\mypar{Contained views} A view, $v_1$, contains another view, $v_2$, when $|(R_1
\setminus R_2)| > 0$ and $|(R_2 \setminus R_1)| = 0$, that is, when all rows of
$v_2$ are contained in $v_1$.
\mypar{Complementary views} Two views, $v_1$ and $v_2$ are complementary when
$|(R_1 \setminus R_2)| > 0$ and $|(R_2 \setminus R_1)| > 0$, that is, each view
has rows not contained in the other view.
\mypar{Contradictory views} A view, $v_1$ contradicts another view, $v_2$, if a
key value, $k_i$, that exists in both views yields distinct rows, $k(R_1, k_i)
\ne k(R_2, k_i)$. Two rows are distinct when any value of any of their
attributes is different.
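To make these definitions concrete, the following sketch classifies a single pair of views using the set differences above; it assumes the key attribute is known and unique within each view (DoD instead estimates the key from uniqueness ratios), and the 4C-Chasing algorithm of the next subsection organizes this computation far more efficiently.
\begin{verbatim}
import pandas as pd

def rows_by_key(view: pd.DataFrame, key: str) -> dict:
    return {rec[key]: tuple(sorted(rec.items()))
            for rec in view.to_dict("records")}

def classify_pair(v1: pd.DataFrame, v2: pd.DataFrame, key: str) -> str:
    r1, r2 = rows_by_key(v1, key), rows_by_key(v2, key)
    s1, s2 = set(r1.values()), set(r2.values())
    if s1 == s2:
        return "compatible"
    if s2 <= s1:
        return "contains"        # v1 contains v2
    if s1 <= s2:
        return "contained"       # v1 is contained in v2
    # a shared key value that yields different rows is a contradiction
    if any(r1[k] != r2[k] for k in r1.keys() & r2.keys()):
        return "contradictory"
    return "complementary"
\end{verbatim}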
\smallskip
Although the description above refers to two views, the 4C method classifies
multiple views into each of the above classes. For example, many views can be
compatible among themselves. One view may contain many other views. One single
view can be complementary with respect to many others, and contradict many
others as well. Similarly, a view can contain another view, be complementary to
another one, compatible with another group and be in contradiction with other
views on different rows, all at the same time. Next, we present the algorithm we
use to classify the candidate views into the 4 above groups.
\subsection{The 4C-Chasing Algorithm}
\label{subsec:algo}
Obvious methods to obtain the 4C classification of views perform poorly. For
example, an algorithm could compare each cell of each view to all other views,
but this would become too slow even when the number of views is low. One good baseline method hashes the rows of the views and uses the sets of hashes to quickly tell apart compatible and contained groups from complementary and contradictory ones. Unfortunately, this improved baseline still needs to use the
per-cell comparison to distinguish between complementary and contradictory
views, so its performance remains low.
We introduce the \textsf{4C-Chasing} algorithm to speed up the classification.
This algorithm is orders of magnitude faster than the baseline, as we show in
the evaluation section.
\subsubsection{Core 4C-Chasing}
The main body of the algorithm is shown in Algorithm~\ref{alg:main}.
We know each $c \in C$ has a subset of the
attributes defined in the query view. When the candidate view contains all
attributes from the query view, we say their schema fully fulfills the
query view. We prefer these views because they are closer to the definition given by
the user. However, we must also consider candidate views with a schema that
partially fulfills the query view (i.e.,\ they contain only a subset of the
attributes in the query view), in case all fully fulfilled views may be wrong.
Because 4C works on views with the same schema (attributes in this case), the
first step is to separate the candidate views into groups according to their
schema (line~\ref{alg:4c:classifyschema}). Then, the algorithm runs the 4C
classification for each group (see line~\ref{alg:4c:foreachschema:start}),
leading to the classification of views into the four groups: compatible
(\textsf{C1}), contained (\textsf{C2}), complementary (\textsf{C3}), and
contradictory, \textsf{C4}.
\input{algorithms/4c.tex}
The algorithm first identifies quickly compatible and contained groups of views.
Then, it identifies complementary and contradictory views among the remaining
ones.
\mypar{Prepare views} In the main body of the loop, the algorithm first attaches
metadata to each view, which consists of a value per attribute that indicates
how likely the attribute is to be a key (line~\ref{alg:4c:metadata}). This is
computed as the ratio between the total number of distinct values in the
attribute column and the total number of values in it. This metadata
will be used later in the \textsf{identify\_c3\_and\_c4()} function.
\mypar{Identify compatible groups} The next step is to identify groups of views
that are compatible (c1) with each other (line~\ref{alg:4c:compatible}), which
corresponds to Algorithm~\ref{alg:c1}. The main idea is to
hash each view (see line~\ref{alg:c1:hashview}), and insert
the view on a map that is keyed on the hash value. A view hash is the sum of its
rows hashes, and a row hash is the sum of its cells hashes. With this hashing
method, compatible views are guaranteed to have the same hash value, so this is
an efficient way of identifying groups of compatible views fast.
Because all compatible views are identical, we select one only to continue the
classification (line~\ref{alg:4c:selection}).
The selected views are then given to a function in charge of identifying contained
groups (c2), as well as groups that may be either complementary or contradictory
(c34), see line~\ref{alg:4c:c2}.
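The order-independent hashing used to form the compatible groups can be sketched as follows; the concrete hash function is an assumption, and only the sum-of-hashes structure mirrors the description above.
\begin{verbatim}
import hashlib
import pandas as pd

def cell_hash(value) -> int:
    return int(hashlib.md5(str(value).encode()).hexdigest()[:15], 16)

def view_hash(view: pd.DataFrame) -> int:
    # row hash = sum of its cell hashes; view hash = sum of its row hashes,
    # so compatible views obtain the same hash regardless of row order
    return sum(sum(cell_hash(v) for v in rec.values())
               for rec in view.to_dict("records"))

def compatible_groups(views: dict) -> dict:
    # views: name -> DataFrame; group view names by their view hash
    groups = {}
    for name, v in views.items():
        groups.setdefault(view_hash(v), []).append(name)
    return groups
\end{verbatim}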
\mypar{Identify contained groups} The function for finding contained views is
shown in Algorithm~\ref{alg:c2}. For each pair of views (loop in
line~\ref{alg:c2:foreachview}), the algorithm obtains the list and the set of
hashes of each view respectively. The list will be used to identify indexes of
rows (because it maintains the order), while the sets are used to quickly tell
whether the views are contained or may be complementary.
Lines~\ref{alg:c2:startc2} to \ref{alg:c2:endc2} check whether the hashes of one view are completely contained in those of the first, in which case that view is contained in the first and this is recorded in the variable $gc2$. Note that
because the loop enumerates pairs of views in both directions, it is not
necessary to check for containment in both directions here.
If the views are not contained, the algorithm tries to decide if they are
candidates for the complementary and contradictory group. This decision
is done in lines~\ref{alg:c2:startc34} and \ref{alg:c2:endc34}. The
algorithm computes the set difference of rows in both directions and checks
whether it is non-empty in both cases. When it is, this indicates that each view
has hashes not contained in the other view. This can happen for two different
reasons: either one view has rows that do not exist in the other (a case of
complementarity), or certain rows differ in some values, which
would also lead to different hashes and indicate a contradiction. The algorithm
returns the identified groups of contained views as well as those that may be
complementary or contradictory.
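The containment and candidate checks can be pictured as follows (a sketch over
row-hash sets; names do not correspond to the pseudocode and the reverse
direction of each pair is assumed to be enumerated separately):
\begin{verbatim}
def row_hashes(view):
    return {sum(hash(cell) for cell in row)
            for row in view.itertuples(index=False)}

def classify_pair(view_i, view_j):
    hi, hj = row_hashes(view_i), row_hashes(view_j)
    if hj <= hi:
        return 'c2'    # view_j is contained in view_i
    if (hi - hj) and (hj - hi):
        return 'c34'   # candidate complementary/contradictory pair
    return None        # containment in the other direction is checked
                       # when the pair is visited in reverse order
\end{verbatim}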
\subsubsection{Distinguishing Complementary and Contradictory Views}
The last part of the algorithm is to identify which pairs of tables in $C34$ are
complementary and which are contradictory. This corresponds to
line~\ref{alg:4c:c34} in the main body of Algorithm~\ref{alg:main}. The body of
this function is shown in more detail in Algorithm~\ref{alg:c34}.
Checking whether views are complementary or contradictory is expensive. This is
because it is not possible to recover the view row values from the hash. Hence,
it requires checking each row cell individually for all rows that are part of
the set difference, and comparing them with all other views.
The insight that the 4C-chasing algorithm exploits is that if there exists a
particular contradiction in one value between two views, it is likely that at
least one of the views contradicts other views on the same value as well---for
example, all views that were assembled using a particular join attribute between
two tables will share the same contradictions. Therefore, instead of finding
contradictions between each pair of views, we can use the contradictions we find
to quickly test whether the same contradiction exists between other pairs too,
i.e.,\ we can chase the contradiction across the views in the group.
\mypar{Separate complementary from contradictory} The algorithm first represents
the pairs of views in an undirected graph (the chasing graph), see
line~\ref{alg:c34:buildgraph}. It then obtains a pair from the input pairs
(line~\ref{alg:c34:pop}) and checks whether the different values (hashes)
identified between this pair are due to complementarity or contradictions. For
that, it first selects the relevant rows from each view
(lines~\ref{alg:c34:sel1} and \ref{alg:c34:sel2}), and then it obtains the
values for the key of each of those views. The key values are obtained by first
identifying the most likely key attribute, $k$, of those views
(line~\ref{alg:c34:findkey}), and then projecting that attribute on the
selection (lines~\ref{alg:c34:proj1} and \ref{alg:c34:proj2}). This produces
$KV_i$ and $KV_j$, which are the key values that correspond to the distinct
hashes of each view. At this point, the intersection of these two sets,
$KV_{ix}$, corresponds to the contradictory examples, while the rest of the values
correspond to the complementary ones (line~\ref{alg:c34:compkeys}). Note that when the
keys themselves of two views are contradictory, the views will be classified as
complementary, because the sets of hashes differ and the key
intersection, $KV_{ix}$, is empty.
Within the set of contradictory keys, it is still necessary to identify which
cell value or values caused the contradiction. We want the specific cell values
so we can test these with other views, and so we can include this fine-grained
information with the metadata produced at the output. The contradictory cells
are obtained in lines~\ref{alg:c34:digstart} to \ref{alg:c34:digend}. Here we
call \textsf{mark\_graph\_node()} to mark the pair in the graph as contradictory, and attach the
key attribute, $k_c$ (note that $k_c$ does not need to be the same as $k$), as
well as the key values that produced the contradiction, $V_c$.
\mypar{Chasing graph} The last part of the algorithm uses the nodes of the graph
that have been previously marked (due to a contradiction), together with the already found
contradiction, to identify which other connected views contradict the node as well
(lines~\ref{alg:c34:chasestart} to \ref{alg:c34:chaseend}). Without
\textsf{4C-chasing}, testing a pair involves performing $r$ operations in the
worst case, with $r$ being the number of rows of the larger-cardinality
relation. This cost must then be paid for all pairs of contradicting views,
$|c34|^2$. In contrast, with the new algorithm, the cost of comparing each pair
is only $1$ for cases where the contradictions are shared. The benefits of the
algorithm are therefore directly related to how many contradictions are shared
between views. Since contradictions are often shared across many views,
this algorithm is efficient.
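The chase itself can be pictured with the following simplified sketch; it
assumes the views share the same columns and are materialized as pandas
DataFrames, and it omits the bookkeeping on the chasing graph:
\begin{verbatim}
def chase_contradiction(views, i, k_c, v_c, neighbors):
    # Test the already-found contradictory key values v_c of view i against
    # the neighboring views, instead of re-running a full pairwise comparison.
    ref = views[i].set_index(k_c)
    contradicting = []
    for j in neighbors:
        other = views[j].set_index(k_c)
        shared = [v for v in v_c if v in other.index]
        if any(not ref.loc[[v]].equals(other.loc[[v]]) for v in shared):
            contradicting.append(j)
    return contradicting
\end{verbatim}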
Once finished, all views in $C$ are classified into the 4 classes, which, along with the query
used to assemble each view and the join graphs used, are given as input to a
\emph{presentation strategy}.
Specialized users such as data
stewards may want a fine-grained understanding of the SPJ query used to assemble a
specific view, the indices of rows on which two views disagree, or
the attribute and value that correspond to the contradiction between a
pair of views. They can use the information collected so far directly.
\subsection{Presentation Strategies}
\label{subsec:presentation}
The goal of a presentation strategy is to use the 4 classes into which the
candidate views are classified in order to reduce the size of the view
choice space. We next explain two such strategies:
\mypar{4c-summary} This strategy reduces the view choice space by taking
automatic actions for compatible, contained, and complementary views, and then
asking a user to choose an option among contradictory views. This strategy was
illustrated in the example in Section~\ref{subsec:user_experience}. It
summarizes compatible groups with a representative view, and contained groups with the
highest cardinality view. When views are complementary, the union of the views
is shown. Finally, when views are contradictory, the system shows users the
view that contradicts the largest number of other views. If users prefer one view,
that information is used to prune the other views; otherwise, the next contradiction
is shown. This strategy makes two key assumptions. First, that when views are
contained, the highest cardinality one is preferred. Second, that when views are
complementary, the union is preferred. These actions do not guarantee that the
view the user wants is chosen, e.g.,\ a user may prefer the view for a particular
year (that may be contained in another view). However, users can always inspect
the operations performed by the presentation strategy and recover the right
view.
\mypar{Multi-row views} This strategy does not require user intervention. A
view is shown exactly once for each schema group. When a view is in a
compatible, contained, or complementary group, it is unioned with its
complementary views and shown once. When the view has contradictory views, the
system performs a special union, where the rows that contradict each other
become \emph{multi-rows}. A multi-row is a tuple that has more
than one value for the same key. This is not a valid relation, but it gives users
all the context they need to make their decision, and it can be useful to feed to
certain downstream tasks---for example, one that uses multi-rows to apply an
entity resolution algorithm \cite{entityresolution}.
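As an illustration, the multi-row union can be sketched with pandas as follows
(the key attribute is assumed to be known; this is not the system's actual
code):
\begin{verbatim}
import pandas as pd

def multi_row_union(view_a, view_b, key):
    merged = pd.concat([view_a, view_b], ignore_index=True).drop_duplicates()
    # rows that share a key value but differ elsewhere become multi-rows
    multi = merged[merged.duplicated(subset=[key], keep=False)]
    return merged, multi
\end{verbatim}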
\smallskip
Different presentation strategies cater to different needs and present different
tradeoffs, and more presentation strategies than the ones presented above are
possible. While \textsf{4c-summary} relies on minimal user intervention to deal
with contradictory views, the \textsf{multi-row} strategy further automates the
process. In all cases, all collected metadata is shown along with the view so
users can build trust in the results.
\section{Introduction}
\label{sec:introduction}
Analysts of modern organizations need to answer questions that require combining
data from multiple different data sources, such as data lakes, databases and
even files in cloud storage. Example combinations of datasets, or \emph{views},
include training datasets necessary to build ML models, or the information
needed to answer a business question. Finding the desired view is hard for two
reasons. First, one must find relevant tables among many hundreds or thousands
of data sources in modern organizations. Second, one must understand how to
combine the relevant tables to create the desired view. Most users do not know
how to find or join the view they need, often resorting to asking colleagues.
Even expert users find the process to be error-prone and time-consuming.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{img/intro-example}
\caption{Example of multiplicity of candidate views}
\label{fig:intro-example}
\end{figure}
Query-by-example~\cite{qbe,examplestutorial} based approaches such as
S4~\cite{sfour}, MWeaver~\cite{mweaver}, and Exemplar~\cite{exemplarqueries}
help non-expert users complete a partially fulfilled \emph{query view}. The
query view may consist of attributes, tuple values, or a combination of both.
The systems identify a query that fulfills the query view.
However, they suffer from two limitations. First, they do not handle
semantic ambiguity. Consider the example of \mbox{Fig.\hspace{0.25em}}\ref{fig:intro-example}. The
desired view is the top-left relation, which requests \textsf{employee} and
\textsf{address}. It is possible to assemble 4 different views that fulfill
that specification by joining the \textsf{Employees} table with any of the
other tables. Joining with \textsf{Billing Address}, which contains \emph{home}
address, however, is semantically different than joining with
\textsf{Staff-2019} or \textsf{Staff-2020}, which contain \emph{work} address
and refer to two different years. In this case, the multiplicity of views stems
from semantic heterogeneity in the data, i.e.,\ multiple possible mappings, a
problem that has long been identified in the data integration community
\cite{schemamappingasquerydiscovery}. Second, these approaches assume knowledge
of the right join attributes between each pair of tables (they assume the
existence of a PKFK graph), so they always know the correct way of joining any
pair of tables. Assuming the existence of a PKFK graph is a strong assumption that we
do not make in this paper, because obtaining the right PKFK graph across many
tables stored in different systems and databases is a notoriously hard
problem~\cite{findpkfk1, findpkfk2}. Instead of making such an assumption, we
concede that some join attributes will be wrong and deal with these cases. For
example, in \mbox{Fig.\hspace{0.25em}}\ref{fig:intro-example}, the \textsf{Employees} table may be
wrongly joined with \textsf{Customers} using \textsf{EID} and \textsf{CID} as
join attributes. Although there is an inclusion dependency between those
attributes (both are integers drawn from the same domain), it is nevertheless
not a valid join path, and will hence lead to a spurious view. We do not aim to
automatically detect these cases, but to identify them efficiently when
presenting views to users so they can select the correct one.
The crux of the problem is that the number of views that fulfill a given query
view is large due to semantic heterogeneity, erroneous join
attributes, and other anomalies. As a consequence, users are overwhelmed with
a large choice space, from which they have to select the view.
In this paper we introduce dataset-on-demand (DoD), a \emph{view discovery
system} that helps users identify a combination of datasets (a view) from many
data sources. With DoD, users first declare the view they need by providing the
attributes they wish to see in the result as well as (possibly partial) tuples (see the
example query view in \mbox{Fig.\hspace{0.25em}}\ref{fig:intro-example}). Then, the system automatically
finds a collection of views that fulfill the specification. With DoD, finding
both the relevant tables and correct join attributes is treated as a single view
selection problem. DoD works by finding all possible views that fulfill the
specification, and then classifying those views into groups using a new method,
called \emph{4C}, which dramatically reduces the size of the view choice space.
As a result, users are only involved at the end of the process, and they choose
the right view from a smaller choice space instead of from all the resulting
views.
To reduce the size of the choice space, the 4C method first classifies all views
that fulfill a query view into 4 category groups, and then a \emph{presentation
strategy} uses those 4 groups to summarize, combine, and prune views. The 4
category groups are: \emph{compatible}, which means that views are identical;
\emph{contained}, which means that the values of one view subsume others;
\emph{complementary}, which means that the set difference of tuples of views is
not empty; and \emph{contradictory}, which means that certain tuples are
incompatible across views. An example of contradictory views is shown in
\mbox{Fig.\hspace{0.25em}}\ref{fig:intro-example}: the view that selects \emph{home} address will
contradict the view that selects \emph{work} address, assuming the employee does
not live at the workplace. Once the views are classified into the 4 groups, a
presentation strategy reduces the number of views users have to
inspect. There are different presentation strategies that cater to different
needs. For example, one presentation strategy may combine compatible views into
a single view, show only the highest cardinality view among contained views,
union complementary views, and ask users to pick the preferred view among
contradictory views.
From a performance point of view, there are two operations that are expensive to
compute. First, classifying views into the 4 classes becomes expensive when
there are many input views. We propose a \textsf{4C-Chasing} algorithm that
minimizes the amount of computing resources needed by skipping computation when
possible. Second, identifying all views involves searching for relevant tables
among many data sources, and obtaining all possible ways to join and materialize
them, which potentially involves executing a query across many data sources. We
build an ad-hoc processing engine to perform off-core joins when tables are part
of different data sources, e.g.,\ a database and a CSV file in cloud blob storage.
We build DoD on top of Aurum~\cite{aurum} to help with the search for relevant
tables and join paths.
To evaluate DoD we focus on two key metrics: reduction of the size of the view
choice space, and its end-to-end performance. We show that with 4C we
can reduce the size of the view choice space by 2-10X. We also demonstrate the
end-to-end performance of DoD, which can find and present views automatically
within minutes, and in many cases within a few seconds.
\section{Related Work}
In this section, we explain how DoD fits into the extensive literature on
data integration.
\mypar{View-by-Example} The initial ideas of this approach were presented in
\cite{schemamappingasquerydiscovery}, in a theoretical way, and later
implemented in Clio~\cite{clio}. More recently, on the theoretical side, new
contributions have helped understand the problem in more
depth~\cite{approxmappings}. The current class of view-by-example systems
\cite{samples4, sfour, mweaver, exemplarqueries} aims to find views for users using
a view definition, like DoD. Unlike DoD, these systems make two key assumptions
that set them apart: i) they assume knowledge about how to join any pair of tables; and
ii) they assume that the total number of views satisfying the input definition is small. Although
the more advanced systems have a way of ranking the resulting views, such as in
S4~\cite{sfour}, this does not help disambiguate semantically different views. In
contrast, DoD does not make these assumptions and deals with these
problems directly.
\mypar{Data Integration} In the early years, data integration required humans
to provide mappings (more generally, a mediated schema) between heterogeneous
sources in order to understand how to combine them \cite{clio, teenageyears,
manifold}. As the number of sources has grown and become more heterogeneous,
these human-intensive approaches have given way to more automatic
alternatives. DoD does not require users to provide any mediated schema or
mapping between sources. Instead, users only participate at the end to select a
view among a reduced choice space. Furthermore, with the \textsf{multi-row}
presentation strategy, human involvement is further reduced.
\mypar{Aiding Mapping Selection} Because creating mappings for integration is
hard, many approaches have appeared to assist users with the selection of the best
mapping. In \cite{datadrivenunderstanding}, this problem receives a theoretical
treatment. Later, practical approaches to drive the mapping selection have
appeared \cite{bonifati}. In these approaches, the mapping selection is driven
by the user, who is still in charge of determining what mappings are correct.
These approaches are appropriate when users are assumed to be familiar with the
schema. DoD does not assume knowledge of the schema from users, and does not ask
them to reason about mappings. With DoD, users can still inspect the mappings
and make decisions at that level, but they also have the actual view created,
which they can use to make the final decision.
\mypar{Data Discovery} Data discovery solutions such as Goods \cite{goods} and
Infogather~\cite{infogather} assist with identifying relations of interest, but
they do not solve the end-to-end problem of view creation. Data catalog
solutions~\cite{catalog1, catalog2, catalog3} help with organizing, annotating, and
sharing data sources but do not directly help with finding or joining the
datasets. Aurum~\cite{aurum} exposes a data discovery API that permits answering
many different queries, as long as analysts know how to write code, which is
often not the case.
\mypar{Keyword search in databases} Keyword search systems \cite{discover,banks,
keyword1} are precursors of many discovery and view-by-example systems today.
These systems aim to identify tuples in databases that contain the input
keywords. To find such tuples, the systems identify ways to join the involved
tables together. However, these systems do not take attribute names as input;
they assume the existence of a PKFK graph, and they do not help with assembling
the full view, nor with presenting the many views in a concise way.
\section{Introduction}
\label{sec:intro}
Accents are defined as variations in pronunciation within a language and are often peculiar to geographical regions, individuals, social groups, etc. As one of the major sources in speech variability, accents have posed a grand technical challenge to the robustness of ASR systems. Due to the acoustic discrepancy among accents, an ASR system that is trained on the speech data of one accent (e.g., native) often fails to recognize speech of another unseen accent (e.g., non-native). In this work, we focus on learning accent-invariant representations, aiming to build a universal ASR system that is generalizable across accents.
There is an extensive literature on multi-accent modeling for speech recognition \cite{rao2017multi} \cite{lin2009study}. The existing approaches can be categorized into two classes in general: accent-independent and accent-dependent. Accent-independent modeling focuses on building a universal model that generalizes well across accents. One popular baseline is to train a model on all the data of different accents \cite{elfeky2016towards} \cite{kamper2011multi} \cite{vergyri2010automatic}. Elfeky \emph{et al.}~have attempted to build a unified multi-accent recognizer from a pre-defined unified set of CD states by learning from the ensemble of accent-specific models \cite{elfeky2016towards}. Yang \emph{et al.}~have proposed to jointly model the ASR acoustic model and an accent identification classifier through multi-task learning \cite{yang2018joint}. Accent-dependent approaches either take accent-related information, such as accent embeddings or i-vectors, as a complementary input in the modeling or adapt a unified model on accent-specific data \cite{li2018multi} \cite{chen2015improving} \cite{yoo2019highly} \cite{vu2014multilingual} \cite{huang2014multi}. Accent-dependent models usually outperform the unified ones with known accent labels or on an accent-specific dataset, while accent-independent models demonstrate better generalizability on average when accent labels are unavailable during testing.
Generative adversarial nets (GANs) \cite{goodfellow2014generative} and the gradient reversal technique \cite{ganin2014unsupervised} have gained popularity in learning representations that are invariant to domains or conditions \cite{chen2018phonetic} \cite{serdyuk2016invariant} \cite{sun2018domain} \cite{bousmalis2017unsupervised}. Serdyuk \emph{et al.}~have applied adversarial training to generate noise-invariant representations for speech recognition \cite{serdyuk2016invariant}. Gradient reversal training has recently been used for learning domain-invariant features to alleviate the mismatch between accents during training \cite{sun2018domain}. Bousmalis \emph{et al.}~have proposed a GAN-based pixel-level transformation from one domain to the other and have shown great improvement over the state-of-the-art on a number of domain adaptation tasks \cite{bousmalis2017unsupervised}.
This work focuses on learning accent-invariance with the goal of building a unified accent-independent system for end-to-end speech recognition. Pre-training has shown its superiority in many computer vision and NLP tasks \cite{krizhevsky2012imagenet} \cite{devlin2018bert} \cite{mikolov2013distributed}, while research efforts on accent model pre-training thus far have been limited. We propose a novel pre-training framework, \texttt{AIPNet}, based on GANs for accent-invariant representation learning: \textbf{A}ccent \textbf{I}nvariant \textbf{P}re-training \textbf{Net}works. Unlike most of the existing work that unites the modeling of acoustics and accents in a single stage, our approach decouples accent modeling from acoustic modeling and consists of two stages: pre-training and fine-tuning. In the pre-training stage, \texttt{AIPNet} is built through adversarial training to disentangle accent-invariant and accent-specific characteristics from acoustic features. As transcriptions are not needed in pre-training, \texttt{AIPNet} allows us to make use of many possible accent resources for which transcriptions are unavailable. In the fine-tuning stage, we adopt an attention-based encoder-decoder model for sequence-to-sequence speech recognition. Specifically, we plug the accent-invariant embeddings from \texttt{AIPNet} into the ASR model for further optimization. Experimental results on $9$ English accents show significant WER reduction compared to four popular baselines, indicating the effectiveness of \texttt{AIPNet} on accent-invariance modeling. As a general framework for learning domain-invariance, \texttt{AIPNet} can be easily generalized to model any variabilities, such as speakers or speech noise, in addition to accents.
\section{AIPNet}
\label{sec:approach}
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{approach_v1.png}
\caption{The framework of \texttt{AIPNet} including both pre-training and fine-tuning stages.}
\label{fig:approach}
\end{figure}
In this section we describe \texttt{AIPNet} in detail. Our approach consists of two stages: pre-training and fine-tuning. In the pre-training stage, the model is built through adversarial training with the goal of learning accent-invariant representations. In the fine-tuning stage, we stack the pre-trained model with downstream tasks for further optimization. In this work, we use end-to-end ASR as a downstream application, focusing on improving accent robustness for speech recognition. The framework of \texttt{AIPNet} is illustrated in Fig.~\ref{fig:approach}.
Suppose the input is an utterance $\mathbf{X} = (\mathbf{x}_1, \mathbf{x}_2, ..., \mathbf{x}_T)$, where $\mathbf{x}_t$ represents the feature vector at time step $t$. The speaker accent corresponding to $\mathbf{x}_t$ is denoted as $a_t \in \{1, 2, ..., C\}$, where $C$ is the number of accents in the training data.
\subsection{Accent-Invariance Pre-training}
The goal of pre-training is to learn accent-invariant representations from accented training data. We define three types of losses for this purpose, including adversarial loss to disentangle accent-invariant and accent-specific information, reconstruction loss to enforce acoustic characteristics to be preserved in the disentangled representations, as well as consistency regularization to detach linguistic information from accent-specific representations.
\subsubsection{Adversarial Loss}
\label{subsec:adversarial}
To learn accent-invariant representations, we define two mappings from speech data: accent-invariant generator $G_{AI}(\mathbf{x}_t)$ and accent-specific generator $G_{AS}(\mathbf{x}_t)$. We also define two discriminators $D_{AI}(G_{AI})$ and $D_{AS}(G_{AS})$ that output probabilities of accents to ensure that $G_{AI}$ and $G_{AS}$ encode the corresponding information. Specifically, we train $D_{AI}$ and $D_{AS}$ to maximize the probability of assigning correct accent labels to samples from $G_{AI}$ and $G_{AS}$ respectively, \emph{i.e.,} minimizing cross-entropy loss $L_{CE}^{AI}$ and $L_{CE}^{AS}$:
\begin{align}
\min_{D_{AS}, G_{AS}} L_{CE}^{AS} &= \sum_{t=1}^T -\log P(a_t | G_{AS}(\mathbf{x}_t)),\\
\min_{D_{AI}} L_{CE}^{AI} &= \sum_{t=1}^T -\log P(a_t | G_{AI}(\mathbf{x}_t)).
\label{eq:L_CE_AS}
\end{align}
To decouple accent-related information from $G_{AI}$, we simultaneously train $G_{AI}$ such that $D_{AI}$ is confused about accent labels of samples from $G_{AI}$. The objective is to maximize cross-entropy loss $L_{CE}^{AI}$, equivalent to minimize the negative cross-entropy:
\begin{align}
\min_{G_{AI}} -L_{CE}^{AI} = \sum_{t=1}^T \log P(a_t | G_{AI}(\mathbf{x}_t)).
\label{eq:L_NE_AI}
\end{align}
\subsubsection{Reconstruction Loss}
\label{subsec:autoencoding}
The adversarial loss defined between $D_{AI}$ and $G_{AI}$ enforces that accent-specific information is disentangled from $G_{AI}$ but preserved in $G_{AS}$. To ensure acoustic characteristics are encoded in the representations from both generators, we further define a decoder with an autoencoding structure that reconstructs the speech feature $\mathbf{x}_t$ as $\mathbf{x}'_t$ from $G_{AI}(\mathbf{x}_t)$ and $G_{AS}(\mathbf{x}_t)$. The decoder is trained by minimizing the reconstruction error $L_{R}$:
\begin{align}
\min_{decoder, G_{AI}, G_{AS}} L_{R} = \sum_{t=1}^T \parallel {\mathbf{x}'_t} - {\mathbf{x}_t} \parallel_2^2.
\label{eq:L_R}
\end{align}
\subsubsection{Consistency Regularization}
\label{subsec:consistency}
Accent-specific attributes are generally stable within an utterance while linguistic-related acoustics have larger intra-utterance variance across time frames. Inspired by the utterance-level stability of accent-specific attributes, we impose a consistency regularization for $G_{AS}(\mathbf{x}_t)$ such that accent-specific representations from $G_{AS}$ are consistent across time frames within an utterance:
\begin{align}
\min_{G_{AS}} L_{CR} = \sum_{t=1}^{T-1} \parallel {G_{AS}(\mathbf{x}_{t+1}) - G_{AS}(\mathbf{x}_{t})} \parallel_2^2.
\label{eq:L_CR}
\end{align}
This regularization reinforces the preservation of accent-specific information in $G_{AS}$ while implicitly encouraging linguistic content to be disentangled from $G_{AS}$. The multi-scale nature of information in speech data has also been exploited in voice conversion and speech denoising \cite{hsu2017unsupervised}.
\subsubsection{Iterative Training}
\label{subsec:iterative}
Given the minimax two-player game between $D_{AI}$ and $G_{AI}$, \texttt{AIPNet} pre-training consists of repeating the following two steps in an iterative manner.
\begin{itemize}
\item Update the discriminator $D_{AI}$ by minimizing $L_D$,
\item Freeze the discriminator $D_{AI}$ and update the rest of the network by minimizing $L_G$,
\end{itemize}
\vspace{-6pt}
\begin{align}
L_D &= L_{CE}^{AI}, \\
L_G &= -L_{CE}^{AI} + \lambda_1 L_{CE}^{AS} + \lambda_2 L_{R} + \lambda_3 L_{CR},
\end{align}
where $\lambda$s are hyper-parameters.
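To make the two-step procedure concrete, the following is a minimal
PyTorch-style sketch of one pre-training iteration on a single utterance. The
module interfaces, the optimizers, and the use of per-frame means instead of
sums are illustrative assumptions rather than our actual implementation;
$D_{AI}$ and $D_{AS}$ are assumed to output per-frame logits.
\begin{verbatim}
import torch
import torch.nn.functional as F

def pretrain_step(x, accent, G_AI, G_AS, D_AI, D_AS, decoder,
                  opt_D, opt_G, lambdas=(1.0, 10.0, 10.0)):
    # x: (T, d) utterance features; accent: integer accent label
    a = torch.full((x.size(0),), accent, dtype=torch.long)

    # Step 1: update D_AI only, predicting accents from G_AI features.
    opt_D.zero_grad()
    loss_D = F.cross_entropy(D_AI(G_AI(x).detach()), a)
    loss_D.backward()
    opt_D.step()

    # Step 2: freeze D_AI and update the rest of the network.
    opt_G.zero_grad()
    h_ai, h_as = G_AI(x), G_AS(x)
    adv = -F.cross_entropy(D_AI(h_ai), a)            # confuse D_AI
    ce_as = F.cross_entropy(D_AS(h_as), a)           # keep accent info in G_AS
    recon = F.mse_loss(decoder(torch.cat([h_ai, h_as], dim=-1)), x)
    consist = (h_as[1:] - h_as[:-1]).pow(2).sum(-1).mean()
    loss_G = adv + lambdas[0]*ce_as + lambdas[1]*recon + lambdas[2]*consist
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()
\end{verbatim}
Here \texttt{opt\_D} optimizes only the parameters of $D_{AI}$, while
\texttt{opt\_G} optimizes $G_{AI}$, $G_{AS}$, $D_{AS}$ and the decoder.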
\begin{table*}[t]
\centering
\caption{WER (\%) of different approaches in each accent in supervised setting. \texttt{F1} indicates fine-tuning with $L_{ASR}$; \texttt{F2} indicates fine-tuning with $L'_G$; AI indicates accent-independent model; AD indicates accent-dependent model.}
\label{table:supervised}
\begin{tabular}{c|c c|c|c c|c c c c c c c c}
\toprule[2pt]
\multicolumn{3}{c|}{Approach} &
\multicolumn{1}{c|}{Ave.} &
\multicolumn{1}{c}{US} &
\multicolumn{1}{c|}{Non-US} &
\multicolumn{1}{c}{CA} &
\multicolumn{1}{c}{FR} &
\multicolumn{1}{c}{IN} &
\multicolumn{1}{c}{KR} &
\multicolumn{1}{c}{PH} &
\multicolumn{1}{c}{LA} &
\multicolumn{1}{c}{GB} &
\multicolumn{1}{c}{VN} \\
\toprule[2pt]
\multirow{4}{*}{Baselines} &
\texttt{B1} & AI & 8.7 & 5.7 & 9.0 & 6.4 & 8.4 & 11.2 & \bf{9.9} & 7.2 & \bf{7.8} & 8.0 & 13.0 \\ \cline{2-14}
& \texttt{B2} & AI & 8.8 & \bf{5.0} & 9.1 & 6.6 & 9.3 & 11.0 & 10.3 & \bf{6.7} & 8.1 & 8.1 & 12.9 \\ \cline{2-14}
& \texttt{B3} & AD & 8.6 & 5.4 & 8.9 & 6.7 & 8.5 & 10.9 & 10.0 & 6.8 & 8.6 & 7.9 & \bf{12.0} \\ \cline{2-14}
& \texttt{B4} & AI & 8.8 & 5.8 & 9.1 & 6.1 & 8.5 & 11.7 & 10.7 & 7.4 & 8.4 & \bf{7.8} & \bf{12.0} \\
\toprule[1.5pt]
\multirow{2}{*}{\texttt{AIPNet}} &
\texttt{F1} & AI & \bf{8.4} & 5.6 & \bf{8.7} & \bf{6.0} & \bf{8.1} & \bf{9.9} & 10.3 & 6.9 & 8.0 & \bf{7.8} & 12.4 \\ \cline{2-14}
& \texttt{F2} & AI & 10.1 & 6.2 & 10.5 & 7.9 & 10.1 & 12.8 & 12.1 & 8.2 & 9.5 & 9.4 & 13.9 \\
\toprule[2pt]
\end{tabular}
\end{table*}
\begin{table*}[t]
\centering
\caption{WER (\%) of different approaches in each accent in semi-supervised setting. \texttt{F1} indicates fine-tuning with $L_{ASR}$; \texttt{F2} indicates fine-tuning with $L'_G$; PL indicates pseudo labeling; AI indicates accent-independent model; AD indicates accent-dependent model.}
\label{table:semi-supervised}
\begin{tabular}{c|c|c c|c|c c|c c c c c c c c}
\toprule[2pt]
\multicolumn{4}{c|}{Approach} &
\multicolumn{1}{c|}{Ave.} &
\multicolumn{1}{c}{US} &
\multicolumn{1}{c|}{Non-US} &
\multicolumn{1}{c}{CA} &
\multicolumn{1}{c}{FR} &
\multicolumn{1}{c}{IN} &
\multicolumn{1}{c}{KR} &
\multicolumn{1}{c}{PH} &
\multicolumn{1}{c}{LA} &
\multicolumn{1}{c}{GB} &
\multicolumn{1}{c}{VN} \\
\toprule[2pt]
\multirow{2}{*}{w/o PL} & Baseline &
\texttt{B1} & AI & 29.9 & 10.6 & 31.8 & 22.0 & 33.1 & 41.0 & 33.1 & 28.2 & 28.7 & 28.3 & 40.6 \\ \cline{2-15}
& \texttt{AIPNet} & \texttt{F1} & AI & 27.9 & \bf{9.4} & 29.8 & 20.1 & 30.8 & 39.0 & 32.8 & 25.5 & 26.4 & 25.3 & 39.2 \\
\toprule[1.5pt]
\multirow{4}{*}{w/ PL} & \multirow{4}{*}{Baselines} &
\texttt{B1} & AI & 26.2 & 10.3 & 27.8 & 18.6 & 28.3 & 36.1 & 29.6 & 24.8 & 25.1 & 24.4 & 35.8 \\ \cline{3-15}
& & \texttt{B2} & AI & 25.9 & \bf{9.4} & 27.6 & 19.0 & 27.7 & 36.5 & 29.6 & 24.2 & 23.8 & 25.1 & 34.9 \\ \cline{3-15}
& & \texttt{B3} & AD & 25.9 & 9.6 & 27.5 & 19.5 & 28.0 & 36.4 & 29.1 & 23.7 & 24.2 & 24.8 & 35.0 \\ \cline{3-15}
& & \texttt{B4} & AI & 25.0 & 9.7 & 26.5 & \bf{18.1} & 26.7 & 34.9 & 28.3 & 23.7 & 23.4 & 23.7 & 33.6 \\
\toprule[1.5pt]
\multirow{2}{*}{w/ PL} & \multirow{2}{*}{\texttt{AIPNet}} &
\texttt{F1} & AI & 25.7 & 12.1 & 27.0 & 19.7 & 27.4 & 34.7 & 28.9 & 23.0 & 23.6 & 24.5 & 34.6 \\ \cline{3-15}
& & \texttt{F2} & AI & \bf{24.6} & 11.8 & \bf{25.9} & 19.0 & \bf{26.0} & \bf{32.6} & \bf{28.0} & \bf{22.2} & \bf{22.8} & \bf{23.1} & \bf{33.5} \\
\toprule[2pt]
\end{tabular}
\vspace{-0.1cm}
\end{table*}
\subsection{Fine-tuning for End-to-End Speech Recognition}
\label{subsec:ASR}
In the fine-tuning stage, the outputs of $G_{AI}$, which encode accent-invariant linguistic content, can be plugged in as inputs to any downstream speech task that aims to improve accent robustness, as shown in Fig.~\ref{fig:approach}. In this work, we focus on multi-accent speech recognition and adopt Listen, Attend and Spell (\texttt{LAS}), a popular attention-based encoder-decoder model \cite{chan2016listen} for sequence-to-sequence speech recognition. \texttt{LAS} consists of an encoder that encodes an input sequence into high-level representations and an attention-based decoder that generates a sequence of labels from the encoded representations. The encoder is typically a unidirectional or bidirectional LSTM and the decoder is a unidirectional LSTM.
The label inventory for \texttt{LAS} modeling consists of $200$ word pieces and is further augmented with two special symbols $<$sos$>$ and $<$eos$>$ indicating the start of a sentence and the end of a sentence respectively. \texttt{LAS} models the posterior probability of a label sequence $\mathbf{y}$ given the input feature sequence $G_{AI}(\mathbf{X})$ and the previous label history $\mathbf{y}_{1:j-1}$:
\begin{align}
P(\mathbf{y} | G_{AI}(\mathbf{X})) = \prod_{j=1} P(y_j | G_{AI}(\mathbf{X}), \mathbf{y}_{1:j-1}).
\end{align}
Both encoder and decoder can be trained jointly for speech recognition by maximizing the log probability or minimizing $L_{ASR}$:
\begin{align}
L_{ASR} = \sum_{j=1} -\log P(y_j | G_{AI}(\mathbf{X}), \mathbf{y}_{1:j-1}).
\end{align}
There are two ways of fine-tuning: 1) fine-tune $G_{AI}$ and \texttt{LAS} with $L_{ASR}$, which requires only transcriptions in the training data; 2) continue with adversarial training as described in Section~\ref{subsec:iterative} with $L'_G = L_G + \lambda_4 L_{ASR}$, which requires both transcriptions and accent labels in the training data. In the experiments, we report results using both ways.
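As a rough illustration of the first mode, $L_{ASR}$ can be computed with
teacher forcing as in the sketch below; the \texttt{las} decoder interface and
the tokenization are assumptions made only for exposition:
\begin{verbatim}
import torch.nn.functional as F

def asr_loss(g_ai, las, x, y):
    # x: (T, d) utterance features; y: (S,) word-piece ids with <sos>/<eos>
    feats = g_ai(x)                 # accent-invariant representations
    logits = las(feats, y[:-1])     # predict y_j from G_AI(X) and y_{1:j-1}
    return F.cross_entropy(logits, y[1:])
\end{verbatim}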
\section{Experiments}
\label{sec:exp}
\subsection{Dataset}
\label{subsec:dataset}
The dataset used in experiments contains utterances in a variety of domains, such as weather or music, collected through crowdsourced workers. There are $9$ English accents in total in the dataset, including United States (US), Korea (KR), Philippines (PH), Canada (CA), India (IN), France (FR), Britain (GB), Vietnam (VN) and Latin America (LA). The training set contains $4$M ($3.8$K hours) utterances among which $1\%$ is split as validation data. Particularly, there are $1$M and $780$K utterances in US and LA respectively and about $330$K data in each of the remaining accents. The testing set has $10.8$K utterances with $1.2$K utterances in each accent. In both training and testing sets, we extract acoustic features using $80$-dimensional log Mel-filterbank energies that are computed over a $25$ms window every $10$ms.
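This feature extraction can be reproduced along the following lines (a sketch
using the \texttt{librosa} package; the $16$\,kHz sample rate is an assumption,
as it is not stated here):
\begin{verbatim}
import librosa

def logmel_features(wav_path, sr=16000):
    # 80-dim log Mel-filterbank energies, 25 ms window, 10 ms hop
    y, sr = librosa.load(wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=int(0.025 * sr),
        hop_length=int(0.010 * sr), n_mels=80)
    return librosa.power_to_db(mel).T   # shape: (num_frames, 80)
\end{verbatim}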
\subsection{Experimental Setup}
\label{subsec:setup}
The architecture of each module in \texttt{AIPNet} is a multi-layer LSTM. Specifically, we represent $G_{AI}$, $G_{AS}$ and the decoder using $2$ LSTM layers with a hidden size of $768$, $256$ and $1024$ respectively. $D_{AI}$ and $D_{AS}$ are represented by an LSTM layer with softmax outputs.
The configuration of \texttt{LAS} includes a $4$-layer LSTM encoder and a $2$-layer LSTM decoder, each with a hidden size of $1024$. The hyperparameters $(\lambda_1, \lambda_2, \lambda_3, \lambda_4)$ are swept within the range $[0.1, 30]$. Our experiments have shown that the final results are generally stable across different hyperparameter settings. For simplicity, we report results with $(\lambda_1, \lambda_2, \lambda_3, \lambda_4) = (1, 10, 10, 10)$ in this paper. We use batch size of $16,000$ tokens with $32$ GPUs for training. We use Adam with learning rate of $5 \times 10^{-4}$ in pre-training and $2.5 \times 10^{-4}$ in fine-tuning, $\beta_1 = 0.9$, $\beta_2 = 0.999$. A dropout rate of $0.1$ is applied to all the layers. We pre-train \texttt{AIPNet} for $15$ epochs and fine-tune \texttt{LAS} for $20$ epochs. During inference, speech features are fed into $G_{AI}$ that is absorbed as part of \texttt{LAS} encoder and outputs of \texttt{LAS} are decoded using beam size of $20$ without any external language model.
\subsection{Baselines}
We compare our approach against four popular baselines \texttt{B1}-\texttt{B4} for multi-accent speech recognition in the experiments. \texttt{B1} is an accent-independent model which is trained on the data from all the accents. \texttt{B2} and \texttt{B3} have shown strong performance on multi-accent speech recognition in \cite{li2018multi}. Specifically, we append accent labels at the end of each label sequence and \texttt{B2} is trained on the updated sequences from all accents. As accent information is not required at inference, \texttt{B2} is accent-independent. When training \texttt{B3}, which is accent-dependent, we transform the accent $1$-hot vector into an embedding through a learnable linear matrix and feed the learned embedding into the \texttt{LAS} encoder. During \texttt{B1}-\texttt{B3} training, $G_{AI}$ is part of the \texttt{LAS} encoder containing $6$ LSTM layers (see Section \ref{subsec:setup}). \texttt{B4} is the most similar to our approach in spirit, aiming to learn accent-invariant features through gradient reversal \cite{sun2018domain}. The gradient reversal approach keeps the $G_{AI}$, $D_{AI}$ and ASR model modules from Fig.~\ref{fig:approach}. Instead of using the iterative training in Section \ref{subsec:iterative}, we add a gradient reversal layer between $G_{AI}$ and $D_{AI}$ to reverse the backpropagated gradient for $G_{AI}$ training. For more details about \texttt{B4}, we refer readers to \cite{sun2018domain}.
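For reference, the gradient reversal layer used in \texttt{B4} can be written in
a few lines of PyTorch; this is a standard sketch of the technique of
\cite{ganin2014unsupervised}, not the exact code used in our experiments:
\begin{verbatim}
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)        # identity in the forward pass
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output        # reversed gradient in the backward pass

# usage: accent_logits = D_AI(GradReverse.apply(G_AI(x)))
\end{verbatim}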
\begin{figure}[t]
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=.95\linewidth]{baseline_top60_300_legend.png}
\caption{$G_{AI}$ embedding from B1.}
\label{fig:tsne_B1}
\end{subfigure}%
\begin{subfigure}{.24\textwidth}
\centering
\includegraphics[width=.95\linewidth]{disentangle_top60_300.png}
\caption{$G_{AI}$ embedding from F2.}
\label{fig:tsne_F2}
\end{subfigure}
\caption{t-SNE 2-D plots of $G_{AI}$ embedding from B1 and F2 (w/ PL) in each accent. Each color represents each accent.}
\label{fig:tsne}
\end{figure}
\subsection{Experimental Results}
\label{subsec:results}
As described in Section \ref{sec:approach}, \texttt{AIPNet} pre-training requires only accent labels in the training data. This approach hence becomes especially useful when there is a large amount of accented data without available transcriptions. We design experiments in two settings, \emph{i.e.,} a supervised setting where transcriptions are available in all accents and a semi-supervised setting where transcriptions are available only in the US accent.
\subsubsection{Results in Supervised Setting}
\label{subsubsec:supervised}
Table~\ref{table:supervised} summarizes the results of different approaches in supervised setting. In our approach, we report results of fine-tuning $G_{AI}$ and \texttt{LAS} with $L_{ASR}$ using transcriptions (\texttt{F1}), as well as those of fine-tuning the entire network with $L'_G$ using both transcriptions and accent labels (\texttt{F2}). We can see that fine-tuning with $L_{ASR}$ (\texttt{F1}) outperforms the baselines by $2.3\sim4.5\%$ relative reduction on average WER. Compared to all the baselines, \texttt{F1} has achieved improvement in CA, FR, GB, and especially IN ($9.1\sim15.3\%$ reduction) but has shown a mediocre performance in each of the remaining accents.
\subsubsection{Results in Semi-supervised Setting}
\label{subsubsec:semi-supervised}
In semi-supervised setting where transcriptions are available only in US accent, we compare the performance between \texttt{B1} and \texttt{F1}. The results are presented in the first two rows of Table \ref{table:semi-supervised}. As \texttt{B2}, \texttt{B3}, \texttt{B4} and \texttt{F2} require the availability of pairs of transcriptions and accent labels for training, the results of these approaches are not available in such scenario. The results have shown that our approach significantly outperforms the baseline model in all accents, achieving $3.4\sim11.3\%$ relative WER reduction.
One popular and effective method for semi-supervised learning is to generate target pseudo labels for unlabeled data using an initial model \cite{lee2013pseudo}. To achieve better performance, we generate pseudo transcriptions for non-US training data using the US model. As a result, we are able to follow all the experiments in supervised setting in Section \ref{subsubsec:supervised}. The results with pseudo labeling (PL) are presented in the last six rows of Table \ref{table:semi-supervised}. By comparing the performance between models with and without pseudo labeling, we can observe that pseudo labeling has shown significant gains for all the approaches and almost in each accent, exhibiting its effectiveness on improving generalization performance using unlabeled data. In the scenario with pseudo labeling, fine-tuning with $L'_G$ (\texttt{F2}) outperforms the baselines by $1.6\sim6.1\%$ relative reduction on average WER and consistently achieves the best performance in all non-US accents except for CA.
\subsection{Analysis}
In this section, we analyze the properties of \texttt{AIPNet} to better understand its superiority for multi-accent speech recognition. Without loss of generality, we use \texttt{B1} and \texttt{F2} (w/ PL) in semi-supervised setting in the analysis.
\textbf{Learning accent-invariance} To comprehend the effectiveness of \texttt{AIPNet} on learning accent-invariant representations, we extract embedding (outputs) of $G_{AI}$ from \texttt{B1} and \texttt{F2} respectively for $300$ data samples in each accent. Fig.~\ref{fig:tsne} shows t-SNE $2$-D visualization of $G_{AI}$ embedding from \texttt{B1} (Fig.~\ref{fig:tsne_B1}) and \texttt{F2} (Fig.~\ref{fig:tsne_F2}) respectively for each accent \cite{maaten2008visualizing}. As can be seen, $G_{AI}$ outputs from the baseline \texttt{B1} tend to be clustered in each accent while those from our approach \texttt{F2} are mixed across different accents. The visualization demonstrates the validity of the accent-invariant features learned through \texttt{AIPNet} and further explains the better generalization performance that our approach has achieved across accents.
\textbf{Reducing overfitting} We further investigate the trend of word piece validation accuracy of the ASR model in \texttt{B1} and \texttt{F2}, as shown in Fig.~\ref{fig:valid_acc}. Compared to \texttt{B1}, \texttt{F2} learns more slowly and reaches a better local optimum.
The learning objective of \texttt{F2} consists of both $L_{ASR}$ and accent-related regularizers (see Section \ref{sec:approach}). This observation corroborates the effectiveness of the regularization in our approach on reducing the risk of overfitting. It is worth noting that such benefit from the accent-related regularization in fine-tuning is not observed in supervised setting (see Table \ref{table:supervised}).
The sufficient training data with high-quality transcriptions in the supervised setting empowers the learning capability of the ASR model. As a result, the additional regularization might even weaken this learning capability.
\begin{figure}[t]
\centering
\includegraphics[width=.85\linewidth]{valid_acc.png}
\caption{Word piece validation accuracy of ASR model in B1 and F2.}
\label{fig:valid_acc}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
In this paper, we proposed \texttt{AIPNet}, a GAN-based pre-training network, for learning accent-invariant representations, aiming to build a unified speech recognition system that generalizes well across accents. As transcriptions are not needed in pre-training, \texttt{AIPNet} provides the flexibility of making use of many possible accent resources for which transcriptions are unavailable. Experiments have shown promising results on $9$ English accents compared to the baselines, especially in the case when transcriptions are not available in all accents. Experimental results have demonstrated the effectiveness of \texttt{AIPNet} on learning accent-invariance.
\bibliographystyle{IEEEbib}
\section{Introduction}
Topological Data Analysis (TDA) is a relatively new branch of data analysis using techniques from topology to study data \citep{Edelsbrunner2002,zomorodian2012topological,Zomorodian2005}. It has been applied with great success in several fields such as biomolecular chemistry \citep{xia2018multiscale,xia2015multiresolution}, drug design \citep{cang2018integration}, and network analysis \citep{Carstens2013}. Notably, the winners of the Drug Design Data Resource (D3R) Grand Challenge have utilized TDA in their algorithms \citep{nguyen2019mathematical,nguyen2019mathdl}. Topological data analysis can be combined with methods in machine learning (including deep learning) \citep{hofer2017deep,nguyen2019mathematical} as well as statistical methods \citep{bubenik2015statistical}.
Though the term ``topology'' can be used to refer to a wide array of subjects, the topological tools used in TDA generally refer to algebraic topology, or to be specific persistent homology \citep{Ghrist2008,edelsbrunner2012persistent}. Broadly speaking, persistent homology analyzes the ``shape'' of the data to deduce intrinsic properties of the data. Other prominent tools in TDA include Mapper \citep{singh2007topological,ray2017survey} and discrete Morse theory \citep{forman1998morse,forman2002user,wu2020discrete}. Due to the fact that TDA works quite differently from most other data analysis techniques, it can sometimes detect features that are missed by traditional methods of analysis \citep{nicolau2011topology}.
Traditionally, the strengths of TDA include the fact that it analyzes data in a coordinate-free way \citep{lum2013extracting,offroy2016topological} (independent of the coordinate system chosen), as well as being translation-invariant and rotation-invariant \citep{khasawneh2016chatter,bonis2016persistence}. As a direct consequence of these strengths, however, it may be hard for TDA to effectively analyze data that is sensitive to choice of coordinates, translation, and/or rotation. Examples of such data include cases where each coordinate represents a fundamentally different feature (e.g.\ light, temperature, humidity). In Section \ref{sec:symmetrybreak}, we introduce two basic techniques, \emph{symmetry-breaking} and \emph{anchor points} to allow TDA to better study such data with heterogeneous features.
In this paper, we develop a novel {method} for topological machine learning for analyzing multivariate time series, with application to room occupancy detection. We use a dataset originating from the seminal paper by \citet{candanedo2016accurate}. In their research, data recorded from light, temperature, humidity and CO\textsubscript{2} sensors is provided. The main goal is to predict occupancy in an office room using these data. {We also include an additional experiment on Activity Recognition, using data from accelerometers.}
The outline of our {method} is summarized in Figure \ref{fig:workflow}. Firstly, we convert the multivariate time series to point cloud data via sliding windows \citep{gidea2018topological}. We also apply our techniques of symmetry-breaking and anchor points to the point clouds. Secondly, we generate persistence diagrams from the point cloud data. Lastly, we calculate the Wasserstein distance between the persistence diagrams and use the $k$-nearest neighbors algorithm ($k$-NN) for supervised machine learning (classification). {We will elaborate on each block in Figure} \ref{fig:workflow}, {both in theory and practice, in Sections} \ref{sec:methodology} {and} \ref{sec:experiments} {respectively.}
\begin{figure}[htbp]
\begin{center}
\begin{tikzpicture}[
node distance = 5mm and 7mm,
start chain = going right,
disc/.style = {shape=cylinder, draw, shape aspect=0.3,
shape border rotate=90,
text width=20mm, align=center, font=\linespread{0.8}\selectfont},
mdl/.style = {shape=ellipse, aspect=2.2, draw},
alg/.style = {draw, align=center, font=\linespread{0.8}\selectfont}
]
\begin{scope}[every node/.append style={on chain, join=by -Stealth}]
\node (n1) [disc] {Multivariate\\ Time Series};
\node (n2) [alg] {Point\\ Clouds};
\node (n3) [alg] {Persistence\\ Diagrams};
\node (n4) [alg] {$k$-NN\\ (Wasserstein\\ distance)};
\node (n3) [mdl] {Classification};
\end{scope}
\end{tikzpicture}
\end{center}
\caption{Outline of Topological Machine Learning for Multivariate Time Series.}
\label{fig:workflow}
\end{figure}
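A minimal sketch of this pipeline, using the \texttt{ripser}, \texttt{persim}
and \texttt{scikit-learn} packages, is given below; the window length, stride,
homology dimension and $k$ are illustrative choices rather than the values used
in our experiments:
\begin{verbatim}
import numpy as np
from ripser import ripser
from persim import wasserstein
from sklearn.neighbors import KNeighborsClassifier

def windows_to_diagrams(ts, w=50, stride=10):
    # ts: array of shape (T, d); each window of w consecutive time points
    # is treated as a point cloud in R^d
    return [ripser(ts[i:i + w], maxdim=1)['dgms'][1]   # H1 diagrams
            for i in range(0, len(ts) - w + 1, stride)]

def pairwise_wasserstein(dgms_a, dgms_b):
    return np.array([[wasserstein(a, b) for b in dgms_b] for a in dgms_a])

# dgms_train, y_train, dgms_test: per-window diagrams and occupancy labels
# knn = KNeighborsClassifier(n_neighbors=5, metric='precomputed')
# knn.fit(pairwise_wasserstein(dgms_train, dgms_train), y_train)
# y_pred = knn.predict(pairwise_wasserstein(dgms_test, dgms_train))
\end{verbatim}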
\subsection{Related work}
In the paper by \citet{gidea2018topological}, the authors introduce a new method based on topological data analysis to analyze financial time series and discover potential early signs of a financial crash. A key innovative factor in their paper is that their method can deal with multivariate time series (more than one time-dependent variable), which is different from the time-delay coordinate embedding for 1D time series \citep{takens1981detecting}. {Our paper generalizes their approach of converting a multivariate time series to a point cloud by introducing a new parameter representing stride. Other than that, our paper differs significantly in how we quantitatively study the data.} \citet{gidea2018topological} {makes use of the $L^p$-norm to study persistence landscapes, while our method uses the 1-Wasserstein distance of persistence diagrams to carry out supervised machine learning.}
In the paper by \citet{tran2019topological}, the authors study a delay-variant embedding method that constructs the topological features by considering the time delay as a variable parameter instead of considering it as a single fixed value. Their method studies multiple-time-scale patterns in a time series, which contains more information than just using a single time delay. {In their seminal work, a time series $x(t)$ is mapped to $m$-dimensional points using delay coordinates [$x(t),x(t-\tau),\dots,x(t-(m-1)\tau)]$ on the embedded space, where $\tau$ is a variable parameter denoting time delay and $m$ denotes the embedding dimension. Our proposed method differs from delay-variant embedding in two main ways. Firstly, our method works for multivariate time series, while the delay-variant embedding method focuses on univariate time series. Secondly, our proposed method did not consider time delay as a variable parameter.}
In the paper by \citet{merelli2016topological}, the authors study multivariate time series characterization using TDA, with applications to the epileptic brain. Their methodology is based on computing the Pearson correlation coefficients matrix for each window, followed by computing and plotting the weighted persistent entropy. {As a comparison, our method does not require the computation of the Pearson correlation coefficients matrix for each time window. Furthermore, our method includes a way of comparing quantitatively between two multivariate time series (via the Wasserstein distance of persistence diagrams). The paper by} \citet{merelli2016topological} {excels at comparing qualitatively two signals based on observing peaks in the plot of the respective weighted persistent entropies.}
\citet{umeda2017time} studied volatile time series using TDA. The methodology of the paper involves converting the time series into a quasi-attractor, and extracting topological information from the quasi-attractor in the form of a Betti sequence. In the learning step, a one-dimensional convolutional neural network (CNN) is used as a classifier. {For comparison, our method does not use CNN or other deep learning architectures. In addition, the paper assumes that an observed time series $\{x_1,\dots,x_t\}$ has a difference function $x_{k+1}=f(x_k,\dots,x_1)$ that specifies its behavior. For our method, we do not require or use this assumption.}
\citet{dirafzoon2016action} proposed a novel framework for activity recognition from 3D motion capture data using TDA. In the paper, point clouds are obtained from time series data using Takens' delay embedding \citep{takens1981detecting}. Subsequently, a feature vector for each time window is created using lengths of the most persistent off-diagonal features in the persistence diagram, with the maximum persistence interval length. As the final step, a nearest neighbor classifier with majority vote using Euclidean distance for the feature vectors is used, in order to classify each window. {We remark that our method is quite different in the way we use the topological information. Instead of a feature vector consisting of maximum persistence interval lengths, we use the Wasserstein distance of the persistence diagrams of each time window. In addition, the above paper uses preprocessing of the signals using a median filter for noise removal. For our method, no noise removal of the time series data was required.}
\citet{hofer2017deep} proposed and developed a technique that enables inputting topological signatures to deep neural networks and learn a task-optimal representation during training. An advantage of their method is that it learns the representation instead of mapping topological signatures to a pre-defined representation. {For comparison, our method does not use any deep learning technique or architecture. Also, the above paper mainly studies image (2D object shapes) and graph data, whereas we mainly focus on (multivariate) time series in this paper.}
In the paper by \citet{ravishanker2019topological}, the authors provide a comprehensive review of TDA for time series. In the work by \citet{seversky2016time}, the authors study the framework for the exploration of TDA techniques applied to time-series data. They consider and explore properties such as stability with respect to time series length, the approximation accuracy of sparse filtration methods, and the discriminating ability of persistence diagrams as a feature for learning. We note that both papers \citep{ravishanker2019topological,seversky2016time} utilize the time-delay coordinate embedding \citep{takens1981detecting}, also known as Takens' embedding \citep{mindlin1992topological}.
It is noted that due to the popularity and usefulness of time series in general, there are many other papers studying time series using TDA \citep{rucco2015topological,sanderson2017computational,gidea2018topological2,pita2017topological}.
We also remark that there are several established papers studying time series using other types of topology, in the broader sense of the word topology \citep{zhang2006complex,muldoon1993topology,tsonis2008topology,bonanno2003topology,djauhari2015optimality,mindlin1992topological}.
\subsection{Contribution}
Our paper combines four key concepts: ``TDA'', ``machine learning ($k$-NN)'', ``multivariate'' and ``time series'', resulting in a novel {method} for topological machine learning on multivariate time series. Since TDA is a relatively new branch of data analysis, our paper also helps to validate and provide further evidence that topological methods work well in analyzing data. In addition, we demonstrate that TDA can be effectively combined with machine learning tools (e.g.\ the $k$-NN algorithm) to study multivariate time series data. Furthermore, we propose two basic methods, symmetry-breaking and anchor points, to study data that is sensitive to the choice of coordinates, translation and/or rotation.
For applications, we demonstrate that our method can be effectively used to detect room occupancy. The detection of occupancy in buildings has been estimated to save energy in the order of 30\% to 42\% \citep{candanedo2016accurate,erickson2011observe,dong2009sensor}. Due to privacy concerns, it is also of interest to detect the presence of occupants without the use of a camera. Other applications for occupancy detection include security and analysis of building occupant behaviors \citep{candanedo2016accurate}.
{In addition, we apply our method to Activity Recognition (AR) data and obtain good results. We use data collected from accelerometers embedded on wearable sensors. AR is an emerging field of research with many potential applications such as monitoring the daily activity of the elderly (for ensuring their safety), fitness tracking and health informatics. There are also various other applications in the healthcare, human behavior modeling and human-machine interaction domains} \citep{palumbo2013multisensor}.
\section{Background}
We provide a brief overview of the relevant concepts in algebraic topology and persistent homology, and refer the reader to the appropriate sources for more details. A classical reference for algebraic topology is the text by \citet{Hatcher2002}, while the following papers provide an excellent introduction to persistent homology \citep{Ghrist2008,Zomorodian2005,edelsbrunner2008persistent}.
\subsection{Simplicial complexes}
A \emph{simplicial complex} $K$ is a family of subsets of a set $S$ such that for every $\tau\subseteq\sigma\in K$, we have $\tau\in K$. The sets $\sigma\in K$ are called the \emph{faces} (or \emph{simplices}) of the simplicial complex $K$. We call the singleton sets $\{v\}$ the \emph{vertices} of $K$. The dimension of a simplex $\sigma\in K$ is defined to be $\dim(\sigma)=|\sigma|-1$, and we call a simplex of dimension $k$ a \emph{$k$-simplex}. Simplices of dimension 0, 1, 2, 3 can be viewed as representing a \emph{vertex}, \emph{edge}, \emph{triangle} and \emph{tetrahedron} respectively, as shown in Figure \ref{fig:simplices}.
\begin{figure}[!htbp]
\begin{center}
\begin{tikzpicture}
\node[circle,fill=black,inner sep=0pt,minimum size=3pt] (a) at (0,0){};
\node[below] at (0,-0.1) {$v_1$};
\node[below] at (0,-1) {vertex $\{v_1\}$};
\end{tikzpicture}
\begin{tikzpicture}
\node[circle,fill=black,inner sep=0pt,minimum size=3pt] (a) at (0,0){};
\node[circle,fill=black,inner sep=0pt,minimum size=3pt] (b) at (1.5,0){};
\node[below] at (0,-0.1) {$v_1$};
\node[below] at (1.5,-0.1) {$v_2$};
\node[below] at (0.75,-1) {edge $\{v_1,v_2\}$};
\draw (a) -- (b);
\end{tikzpicture}
\begin{tikzpicture}
\coordinate (a) at (0,0);
\coordinate (b) at (1.5,0);
\coordinate (c) at (0.75,1.3);
\filldraw[gray!50] (a.center) -- (b.center) -- (c.center) -- cycle;
\draw (a.center) -- (b.center) -- (c.center) -- cycle;
\node[circle,fill=black,inner sep=0pt,minimum size=3pt] at (a){};
\node[circle,fill=black,inner sep=0pt,minimum size=3pt] at (b){};
\node[circle,fill=black,inner sep=0pt,minimum size=3pt] at (c){};
\node[below] at (0,-0.1) {$v_1$};
\node[below] at (1.5,-0.1) {$v_2$};
\node[left] at (0.65,1.3) {$v_3$};
\node[below] at (0.75,-1) {triangle $\{v_1,v_2,v_3\}$};
\end{tikzpicture}
\begin{tikzpicture}
\coordinate (a) at (0,0);
\coordinate (b) at (1.5,0);
\coordinate (c) at (0.75,1.3);
\coordinate (d) at (1.8,0.6);
\begin{scope}[dashed,opacity=0.5]
\draw (a) -- (d);
\end{scope}
\draw[fill=gray,opacity=0.5] (a)--(b)--(c);
\draw[fill=gray,opacity=0.5] (b)--(c)--(d);
\draw (a.center) -- (b.center) -- (c.center) -- cycle;
\draw (c) -- (d);
\draw (b) -- (d);
\node[circle,fill=black,inner sep=0pt,minimum size=3pt] at (a){};
\node[circle,fill=black,inner sep=0pt,minimum size=3pt] at (b){};
\node[circle,fill=black,inner sep=0pt,minimum size=3pt] at (c){};
\node[circle,fill=black,inner sep=0pt,minimum size=3pt] at (d){};
\node[below] at (0,-0.1) {$v_1$};
\node[below] at (1.5,-0.1) {$v_2$};
\node[left] at (0.65,1.3) {$v_3$};
\node[right] at (1.8,0.7) {$v_4$};
\node[below] at (0.5,-1) {tetrahedron $\{v_1,v_2,v_3,v_4\}$};
\end{tikzpicture}
\caption{A 0-simplex (vertex), 1-simplex (edge), 2-simplex (triangle) and 3-simplex (tetrahedron).}
\label{fig:simplices}
\end{center}
\end{figure}
A type of simplicial complex commonly used in TDA is the \emph{Vietoris-Rips complex} (or \emph{Rips complex} for short) which is defined as follows.
\begin{definition}
Let $\{x_i\}$ be a set of points in Euclidean space. The Rips complex $\mathcal{R}_\epsilon$ is the simplicial complex whose $k$-simplices are determined by each subset of $k+1$ points $\{x_j\}_{j=0}^k$ which are pairwise within distance $\epsilon$.
\end{definition}
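For small point clouds, the Rips complex can be enumerated by brute force directly from this definition. The following Python sketch is our own and is given purely for illustration (practical TDA libraries use far more efficient constructions):
\begin{verbatim}
from itertools import combinations
import numpy as np

def rips_complex(points, epsilon, max_dim=2):
    """Enumerate the simplices of the Rips complex R_epsilon by brute force."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    # Pairwise Euclidean distances between the points.
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    simplices = [(i,) for i in range(n)]  # the vertices (0-simplices)
    for k in range(1, max_dim + 1):
        for subset in combinations(range(n), k + 1):
            # A k-simplex is included iff its points are pairwise within epsilon.
            if all(dist[i, j] <= epsilon
                   for i, j in combinations(subset, 2)):
                simplices.append(subset)
    return simplices

# Three nearby points and one far-away point:
pts = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.8), (5.0, 5.0)]
print(rips_complex(pts, epsilon=1.2))
# the triangle (0, 1, 2) appears; point 3 only contributes a vertex
\end{verbatim}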
We also introduce the concept of a \emph{filtration} of a simplicial complex $K$, which is a nested sequence of complexes $\emptyset=K^0\subseteq K^1\subseteq\dots\subseteq K^m=K$. We say that $K$ is a \emph{filtered complex}.
\subsection{Homology}
The $k$th \emph{chain group} $C_k$ of a simplicial complex $K$ is the free abelian group with basis the set of oriented $k$-simplices. The boundary operator $\partial_k: C_k\to C_{k-1}$ is defined on an oriented simplex $\sigma=[v_0,v_1,\dots,v_k]$ by
\begin{equation*}
\partial_k(\sigma)=\sum_{i=0}^k (-1)^i [v_0,\dots,\hat{v_i},\dots,v_k]
\end{equation*}
and extended linearly. The notation $\hat{v_i}$ indicates the deletion of the vertex $v_i$.
The \emph{cycle group} $Z_k$ and \emph{boundary group} $B_k$ are defined as $Z_k=\ker\partial_k$ and $B_k=\Ima\partial_{k+1}$ respectively. The $k$th \emph{homology group} is defined to be the quotient group $H_k=Z_k/B_k$. Informally, the rank of the $k$th homology group $\beta_k=\rank(H_k)$ (also called the $k$th Betti number) counts the number of $k$-dimensional holes in the simplicial complex $K$. For instance, $\beta_0$ counts the number of connected components (0-dim holes), $\beta_1$ counts the number of ``circular holes'' (1-dim holes), while $\beta_2$ counts the number of ``voids'' or ``cavities'' (2-dim holes). We show an example in Figure \ref{fig:betti}.
\begin{figure}[htbp]
\begin{center}
\begin{tikzpicture}[scale=0.8]
\coordinate (a) at (0,0);
\coordinate (b) at (1.5,0);
\coordinate (c) at (0.75,1.3);
\coordinate (d) at (0.75,-1.3);
\filldraw[gray!50] (a.center) -- (b.center) -- (c.center) -- cycle;
\draw (a.center) -- (b.center) -- (c.center) -- cycle;
\draw (a) --(d) --(b);
\node[circle,fill=black,inner sep=0pt,minimum size=3pt] at (a){};
\node[circle,fill=black,inner sep=0pt,minimum size=3pt] at (b){};
\node[circle,fill=black,inner sep=0pt,minimum size=3pt] at (c){};
\node[circle,fill=black,inner sep=0pt,minimum size=3pt] at (d){};
\node[left] at (-0.1,0) {$v_1$};
\node[right] at (1.6,0) {$v_2$};
\node[left] at (0.65,1.3) {$v_3$};
\node[left] at (0.65,-1.3) {$v_4$};
\end{tikzpicture}
\caption{For the above simplicial complex, we have $\beta_0=1$ (1 connected component), $\beta_1=1$ (1 circular hole which corresponds to the unshaded region) and $\beta_2=0$ (no ``voids'').}
\label{fig:betti}
\end{center}
\end{figure}
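The Betti numbers in Figure \ref{fig:betti} can be checked directly from the ranks of the boundary operators, using $\beta_k=(\dim C_k-\rank\partial_k)-\rank\partial_{k+1}$. The following Python sketch is our own illustration (the ranks are computed numerically with NumPy rather than with a dedicated TDA library) and recovers $\beta_0=1$, $\beta_1=1$ and $\beta_2=0$:
\begin{verbatim}
import numpy as np

# Simplices of the example complex above (indices 0..3 stand for v1..v4).
vertices  = [(0,), (1,), (2,), (3,)]
edges     = [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3)]
triangles = [(0, 1, 2)]  # only the shaded triangle is filled in

def boundary_matrix(k_simplices, km1_simplices):
    """Matrix of the boundary operator C_k -> C_{k-1}, with signs (-1)^i."""
    index = {s: i for i, s in enumerate(km1_simplices)}
    D = np.zeros((len(km1_simplices), len(k_simplices)))
    for col, s in enumerate(k_simplices):
        for i in range(len(s)):
            face = s[:i] + s[i + 1:]          # delete the i-th vertex
            D[index[face], col] = (-1) ** i
    return D

d1 = boundary_matrix(edges, vertices)    # C_1 -> C_0
d2 = boundary_matrix(triangles, edges)   # C_2 -> C_1
rank = np.linalg.matrix_rank
beta0 = len(vertices) - rank(d1)             # rank of d_0 is 0
beta1 = (len(edges) - rank(d1)) - rank(d2)
beta2 = len(triangles) - rank(d2)            # there are no 3-simplices
print(beta0, beta1, beta2)                   # -> 1 1 0
\end{verbatim}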
\subsection{Persistent homology}
Given a filtered complex $K$, the $i$th complex $K^i$ is naturally associated with the boundary operators $\partial_k^i$ and groups $C_k^i$, $Z_k^i$, $B_k^i$ and $H_k^i$. The \emph{$p$-persistent $k$th homology group} of $K^i$ is then defined as
\begin{equation*}
H_k^{i,p}=Z_k^i/(B_k^{i+p}\cap Z_k^i).
\end{equation*}
An equivalent definition of persistent homology groups is $H_k^{i,p}\cong\Ima \eta_k^{i,p}$, where $\eta_k^{i,p}: H_k^i\to H_k^{i+p}$ is the homomorphism that maps a homology class into the one that contains it \citep{ren2018weighted,Zomorodian2005}.
In brief, persistent homology studies a family of spaces parameterized by a distance $\epsilon$. The filtered complex $K$ is commonly obtained by the construction of Rips complexes over a range of distances $\epsilon$. Those topological features which persist over a parameter range can then be detected, revealing meaningful structures in the data.
\section{Methodology}
\label{sec:methodology}
In this section, we describe our methodology for studying multivariate time series using topological data analysis (TDA).
\subsection{Standardization of data}
\label{sec:standardize}
Following best practices in machine learning, we first standardize our data such that the values of each feature in the data have zero-mean and unit-variance. An advantage of standardizing is to prevent a feature with larger scale from completely dominating the other features.
\subsection{Converting multivariate time series to a point cloud}
\label{sec:pointcloud}
{The initial data type accepted for TDA is point cloud data. Hence, in order to study multivariate time series using topological methods, we would need to convert the multivariate time series to a point cloud. An example of such a conversion is the approach of }\citet{gidea2018topological}, {which we will describe in detail below}.
We adopt the approach of \citet{gidea2018topological} to convert a multivariate time series to a point cloud data set. We also generalize \citet{gidea2018topological} by introducing a new parameter $s$, representing the stride.
We consider a multivariate time series consisting of $d$ 1-dimensional time series $\{x^k_n\}_n$, where $k=1,\dots,d$. Fix a sliding window of size $w$. For each time $t_n$, we define a point $x(t_n)=(x_n^1,\dots,x_n^d)\in\mathbb{R}^d$. Subsequently, for each time-window of size $w$, we obtain a point cloud
\[X_n=(x(t_{1+s(n-1)}),x(t_{2+s(n-1)}),\dots,x(t_{w+s(n-1)}))\]
consisting of $w$ points in $\mathbb{R}^d$.
In brief, the length of the sliding window $w$ determines the size of the point cloud while the number $d$ of 1D time series determines the dimension of the point cloud \citep{gidea2018topological}. The stride $s$ determines how much the time-window slides for each consecutive point cloud. The value of $s$ corresponding to the original paper \citep{gidea2018topological} is a stride value of $s=1$.
In this paper, {for the first experiment on room occupancy}, we will choose a value of $w=s=10$, corresponding to non-overlapping sliding windows of length 10. {In the second experiment on activity recognition, we choose $w=s=5$, corresponding to non-overlapping windows of length 5. In principle, we can choose sliding windows of any length greater than 1. A sliding window of length 1 should be avoided as it leads to the point cloud having 1 point only (not counting the anchor point). A single point has trivial persistent homology and hence is not well suited for our topological method. By choosing different lengths of sliding windows in our two experiments, we demonstrate that our method can work for different lengths of sliding windows.}
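As an illustration, the conversion from a multivariate time series to a sequence of point clouds can be sketched as follows (our own code; the function and variable names are purely illustrative):
\begin{verbatim}
import numpy as np

def time_series_to_point_clouds(series, w, s):
    """Convert a multivariate time series into point clouds: `series' is a
    (T, d) array whose n-th row is x(t_n); `w' is the window length and
    `s' the stride.  Returns one (w, d) point cloud per time window."""
    series = np.asarray(series, dtype=float)
    clouds, start = [], 0
    while start + w <= len(series):
        clouds.append(series[start:start + w])
        start += s
    return clouds

# Room-occupancy setting: d = 5 variables, non-overlapping windows of 10.
toy = np.random.randn(100, 5)
clouds = time_series_to_point_clouds(toy, w=10, s=10)
print(len(clouds), clouds[0].shape)  # -> 10 (10, 5)
\end{verbatim}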
\subsection{Symmetry-breaking and anchor points}
\label{sec:symmetrybreak}
In classical TDA, each coordinate plays the same role and has the same importance. For instance, in the case of $\mathbb{R}^3$, the $x$, $y$, $z$ coordinates are treated equally. Due to this symmetry property, topological methods excel in analyzing spatial data such as 3D point clouds \citep{singh2007topological,rosen2018inferring}.
However, this property may lead to TDA being unable to distinguish between certain point clouds. For example, persistent homology is unable to distinguish the two point clouds
\begin{equation}
\label{eq:x1x2}
\begin{split}
X_1&=\{(0,0,0,0,0), (1,0,0,0,0)\},\\
X_2&=\{(0,0,0,0,0), (0,1,0,0,0)\},
\end{split}
\end{equation}
since in both point clouds the points are equidistant from each other (with distance 1). Alternatively, we can see that the two point clouds can be obtained from each other by rotation (and hence TDA is unable to distinguish them due to rotation-invariance). This can be a problem for certain data where each coordinate represents a fundamentally different type of feature (heterogeneous features). For instance, in our room occupancy data, the first coordinate represents temperature while the second represents humidity, so we \emph{do} actually want to distinguish between the two point clouds.
To this end, we introduce two basic techniques, symmetry-breaking and anchor points. Symmetry-breaking refers to adding a fixed constant vector to each point in the point cloud, while an anchor point refers to a fixed point that is introduced to the point cloud. We define them more precisely as follows.
\begin{definition}[Symmetry-breaking]
Let $X$ be a point cloud consisting of points in $\mathbb{R}^d$. Let $\mathbf{v}=(c_1,c_2,\dots,c_d)$ be a fixed vector in $\mathbb{R}^d$. We define the point cloud $X'$ obtained by \emph{symmetry-breaking} (of $X$) to be:
\begin{equation*}
X'=\{\mathbf{x}+\mathbf{v}\mid \mathbf{x}\in X\}.
\end{equation*}
\end{definition}
\begin{definition}[Anchor points]
Let $X$ be a point cloud consisting of points in $\mathbb{R}^d$. Let $A=\{\mathbf{a_1},\dots,\mathbf{a_n}\}$ be a set of points in $\mathbb{R}^d$, which we call \emph{anchor points}.
We also define a new point cloud $Y=X\cup A$, called the point cloud augmented by the anchor points.
\end{definition}
In this paper, we let $\mathbf{v}=(0,1,2,3,4)\in\mathbb{R}^5$ be the fixed vector for symmetry-breaking. We take $A=\{(0,0,0,0,0)\}$, i.e., the only anchor point is the origin. With this choice, the point clouds in \eqref{eq:x1x2} become:
\begin{equation*}
\begin{split}
Y_1&=X_1'\cup A=\{(0,1,2,3,4),(1,1,2,3,4),(0,0,0,0,0)\}\\
Y_2&=X_2'\cup A=\{(0,1,2,3,4),(0,2,2,3,4),(0,0,0,0,0)\}.
\end{split}
\end{equation*}
We note that now the distance between $(1,1,2,3,4)$ and the origin $(0,0,0,0,0)$ is $\sqrt{31}$, while the distance between $(0,2,2,3,4)$ and the origin is $\sqrt{33}$. Hence, TDA is now able to distinguish between the point clouds $Y_1$ and $Y_2$ as desired.
We also remark that the inclusion of anchor point(s) can further distinguish between point clouds that differ only by a translation. We illustrate this in Figure \ref{fig:translation}. Without the anchor point, TDA is generally unable to distinguish the two point clouds due to translation-invariance.
\begin{figure}[!htbp]
\begin{center}
\begin{tikzpicture}
\draw[->,thick] (-2,0)--(2,0) node[right]{$x$};
\draw[->,thick] (0,-2)--(0,2) node[above]{$y$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] (a) at (0,0){};
\node[left] at (-0.1,0.2) {anchor point};
\node[right] at (1.1,1) {point cloud};
\node[circle,fill=black,inner sep=0pt,minimum size=3pt] (a) at (1,1){};
\node[circle,fill=black,inner sep=0pt,minimum size=3pt] (a) at (0.9,0.9){};
\node[circle,fill=black,inner sep=0pt,minimum size=3pt] (a) at (0.75,0.9){};
\node[circle,fill=black,inner sep=0pt,minimum size=3pt] (a) at (0.9,0.75){};
\node[circle,fill=black,inner sep=0pt,minimum size=3pt] (a) at (0.75,0.7){};
\end{tikzpicture}
\begin{tikzpicture}
\draw[->,thick] (-2,0)--(2,0) node[right]{$x$};
\draw[->,thick] (0,-2)--(0,2) node[above]{$y$};
\node[circle,fill=black,inner sep=0pt,minimum size=5pt] (a) at (0,0){};
\node[left] at (-0.1,0.2) {anchor point};
\node[right] at (1.1,2) {point cloud (translated)};
\node[circle,fill=black,inner sep=0pt,minimum size=3pt] (a) at (1,2){};
\node[circle,fill=black,inner sep=0pt,minimum size=3pt] (a) at (0.9,1.9){};
\node[circle,fill=black,inner sep=0pt,minimum size=3pt] (a) at (0.75,1.9){};
\node[circle,fill=black,inner sep=0pt,minimum size=3pt] (a) at (0.9,1.75){};
\node[circle,fill=black,inner sep=0pt,minimum size=3pt] (a) at (0.75,1.7){};
\end{tikzpicture}
\caption{The inclusion of the anchor point at the origin can distinguish between the two point clouds which only differ by a translation. This is due to the differences in distance from the point clouds to the anchor point.}
\label{fig:translation}
\end{center}
\end{figure}
{We further summarize the proposed method of symmetry-breaking and anchor points in Algorithm} \ref{alg:algorithm}.
\begin{algorithm}[htbp]
\caption{Symmetry-breaking and anchor points}
\hspace*{\algorithmicindent} \textbf{Input:} Point cloud $X$ consisting of points in $\mathbb{R}^d$.\\
\hspace*{\algorithmicindent} \textbf{Output:} Augmented point cloud $Y=X'\cup A$.
\begin{algorithmic}[1]
\Procedure{SymmBreakAnchor}{$X$}
\State $\mathbf{v} \gets (c_1,c_2,\dots,c_d)$
\State $A \gets \{\mathbf{a_1},\dots,\mathbf{a_n}\}$
\State $X' \gets \emptyset$
\For{$\mathbf{x}$ in $X$}
\State $X' \gets X'\cup\{\mathbf{x}+\mathbf{v}\}$
\EndFor
\State $Y\gets X'\cup A$
\State \textbf{return} $Y$
\EndProcedure
\end{algorithmic}
\label{alg:algorithm}
\end{algorithm}
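For concreteness, a direct Python translation of Algorithm \ref{alg:algorithm}, using the default choices adopted in this paper (offset vector $(0,1,\dots,d-1)$ and a single anchor point at the origin), could look as follows; it is a sketch for illustration only:
\begin{verbatim}
import numpy as np

def symm_break_anchor(X, v=None, anchors=None):
    """Symmetry-breaking followed by augmentation with anchor points.
    Defaults: v = (0, 1, ..., d-1) and a single anchor at the origin,
    matching the choices used in this paper."""
    X = np.asarray(X, dtype=float)
    d = X.shape[1]
    if v is None:
        v = np.arange(d, dtype=float)
    if anchors is None:
        anchors = np.zeros((1, d))
    return np.vstack([X + v, anchors])  # Y = X' together with the anchors

# The two point clouds X_1 and X_2 above become distinguishable:
X1 = np.array([[0, 0, 0, 0, 0], [1, 0, 0, 0, 0]])
X2 = np.array([[0, 0, 0, 0, 0], [0, 1, 0, 0, 0]])
Y1, Y2 = symm_break_anchor(X1), symm_break_anchor(X2)
print(np.sum((Y1[1] - Y1[2]) ** 2))  # -> 31.0 (squared distance to anchor)
print(np.sum((Y2[1] - Y2[2]) ** 2))  # -> 33.0
\end{verbatim}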
\subsection{From point cloud to persistence diagram}
A persistence diagram \citep{cohen2007stability} is a multiset of points in $\Delta:=\{(b,d)\in\mathbb{R}^2\mid b,d\geq 0, b\leq d\}$. Each point $(b,d)$ represents a generator of the homology group (of a chosen dimension), where $b$ denotes the birth of the generator and $d$ its death. In short, the persistence diagram can be viewed as a visual representation of the persistent homology of a point cloud. The persistence diagram is independent of the choice of generators and thus is unique \citep{cohen2010lipschitz}. A key result is the stability of persistence diagrams with respect to the Hausdorff distance, bottleneck distance \citep{cohen2007stability}, as well as the Wasserstein distance \citep{cohen2010lipschitz}. Such stability results are desirable as they imply robustness against noise.
In this paper, we will focus on the Wasserstein distance with $p=1$, also known as the 1-Wasserstein distance or ``earth mover's distance''. The 1-Wasserstein distance is widely used in computer science to compare discrete distributions \citep{rubner2000earth,rabin2009statistical}. The Wasserstein distance is defined as follows \citep{cohen2010lipschitz,mileyko2011probability,berwald2018computing}.
\begin{definition}
The $p$-th Wasserstein distance between two persistence diagrams $D_1$, $D_2$ is defined as
\begin{equation*}
W_p(D_1,D_2)=\left(\inf_{\varphi:D_1\to D_2}\sum_{x\in D_1}\| x-\varphi(x)\|^p_\infty\right)^{1/p},
\end{equation*}
where the infimum is taken over all bijections $\varphi$ between $D_1$ and $D_2$.
\end{definition}
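For small diagrams, this distance can be computed via the standard reduction to a minimum-cost perfect matching, in which each diagram is augmented with the orthogonal projections of the other diagram's points onto the diagonal. The Python sketch below is our own illustration (it assumes SciPy is available) and is not the implementation used in our experiments:
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein_distance(D1, D2, p=1):
    """p-Wasserstein distance between two persistence diagrams, given as
    arrays of (birth, death) pairs, via a minimum-cost perfect matching."""
    D1, D2 = np.atleast_2d(D1), np.atleast_2d(D2)
    proj1 = np.column_stack([D1.mean(axis=1)] * 2)  # D1 projected onto diagonal
    proj2 = np.column_stack([D2.mean(axis=1)] * 2)  # D2 projected onto diagonal
    A = np.vstack([D1, proj2])  # augmented copy of D1
    B = np.vstack([D2, proj1])  # augmented copy of D2
    n1, n2 = len(D1), len(D2)
    # L-infinity ground distance between every pair of points, raised to p.
    cost = np.max(np.abs(A[:, None, :] - B[None, :, :]), axis=-1) ** p
    cost[n1:, n2:] = 0.0  # matching diagonal to diagonal costs nothing
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum() ** (1.0 / p)

D1 = np.array([[0.0, 1.0], [0.2, 0.5]])
D2 = np.array([[0.0, 1.1]])
print(wasserstein_distance(D1, D2, p=1))  # -> 0.25
\end{verbatim}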
\subsection{The $k$-nearest neighbors algorithm}
To carry out classification (supervised machine learning), we utilize the $k$-nearest neighbors algorithm ($k$-NN) based on the Wasserstein distance. The $k$-NN algorithm is a relatively simple yet effective machine learning algorithm that has been successfully applied across a wide range of domains \citep{batista2009k}.
For each point cloud $X$ (corresponding to a time window) in the test set, we will determine its $k$-nearest neighbors $\{Y_1,Y_2,\dots,Y_k\}$ in the training set, with respect to the Wasserstein distance. We then classify $X$ based on the majority class of the elements in the set $\{Y_1,Y_2,\dots,Y_k\}$.
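This classification step can be sketched as follows; the code is our own illustration and assumes a function computing the Wasserstein distance between two persistence diagrams (such as the sketch above) is available:
\begin{verbatim}
import numpy as np
from collections import Counter

def knn_predict(test_diagrams, train_diagrams, train_labels, k, dist_fn):
    """Classify each test diagram by majority vote of its k nearest
    training diagrams under the distance dist_fn."""
    predictions = []
    for D in test_diagrams:
        dists = np.array([dist_fn(D, T) for T in train_diagrams])
        nearest = np.argsort(dists)[:k]     # indices of the k closest diagrams
        votes = Counter(train_labels[i] for i in nearest)
        predictions.append(votes.most_common(1)[0][0])
    return predictions

# e.g. knn_predict(test_pds, train_pds, labels, k=50,
#                  dist_fn=wasserstein_distance)
\end{verbatim}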
\section{Experiments}
\label{sec:experiments}
The experiments were mainly implemented in Python, with the exception of computing the persistence diagrams and Wasserstein distances using the R package \texttt{TDA} \citep{fasy2014introduction}. The code used in the paper is publicly available on GitHub: \url{https://github.com/wuchengyuan88/room-occupancy-topology}.
\subsection{Room occupancy}
\label{sec:roomoccupancy}
For this section, we study room occupancy data from the seminal paper by \citet{candanedo2016accurate}. In their setup, an office room with dimensions of 5.85m $\times$ 3.50m $\times$ 3.53m (W$\times$D$\times$H) was monitored for temperature, humidity, light and CO\textsubscript{2} levels. An Arduino microcontroller was used to acquire the data, and a digital camera was used to determine if the room was occupied or not. The data was recorded during the month of February (winter) in Mons, Belgium. The room was heated by hot water radiators \citep{candanedo2016accurate}.
Our experiment concerns supervised machine learning (binary classification): we train a model to predict whether the room is non-occupied (class 0) or occupied (class 1) during a time window. We consider a room to be occupied if it is occupied during any period in the time window. That is, if a room is empty during some period in the time window, but occupied at other times in the same time window, we still classify it as occupied (class 1).
The 5 time-dependent variables in the data are the \texttt{Temperature}, \texttt{Humidity}, \texttt{Light}, \texttt{CO2}, \texttt{HumidityRatio} readings of each time period. Hence, the dimension of each point cloud is $d=5$. Measurements of each variable were taken after each time period of 1 minute. We divide the time periods into non-overlapping time windows of 10 minutes each (10 time periods).
Following \citet{candanedo2016accurate}, we split the data into a training set and two test sets (Test Set 1 and Test Set 2), using a training--test ratio of 80:20. Following best practices in studying time series, we strictly respect the temporal order of the training and test sets. That is, our training and test sets come from distinct and non-overlapping time periods. Test Set 1 comes from time periods that are \emph{after} the training set, while Test Set 2 originates from time periods that are \emph{before} the training set. Benefits of having two such test sets include demonstrating that our method can predict \emph{future} as well as \emph{past} room occupancy (using the time-dependent variables). A further summary of the data sets can be found in Table \ref{table:datasetsummary}.
\begin{table}[!htp]
\footnotesize
\caption{Description of data sets.}
\begin{center}
\begin{tabular}{llll}
\hline
& & \multicolumn{2}{l}{Data class distribution (\%)} \\\cline{3-4}
Data set & Number of time windows & 0 (non-occupied) & 1 (occupied) \\
\hline
Training Set & 800 & 77.5 & 22.5 \\
Test Set 1 & 200 & 70.5 & 29.5 \\
Test Set 2 & 200 & 57.0 & 43.0 \\
\hline
\end{tabular}
\end{center}
\label{table:datasetsummary}
\end{table}
\subsubsection{Standardization of data}
We standardize each of the 5 variables to zero-mean and unit-variance using StandardScaler from the Scikit-learn package.
Technically, we are conducting two separate experiments, the first with Training Set and Test Set 1, and the second with Training Set and Test Set 2. Hence, we perform the standardization accordingly, first by standardizing the combined data set consisting of Training Set and Test Set 1, and then separately standardizing the combined data set consisting of Training Set and Test Set 2.
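With scikit-learn, this standardization can be sketched as follows (the array names and sizes below are illustrative only):
\begin{verbatim}
import numpy as np
from sklearn.preprocessing import StandardScaler

# Rows are 1-minute time periods, columns are the 5 variables (illustrative).
train, test1 = np.random.rand(8000, 5), np.random.rand(2000, 5)

scaler = StandardScaler()
combined = scaler.fit_transform(np.vstack([train, test1]))  # zero mean, unit var
train_std, test1_std = combined[:len(train)], combined[len(train):]
# The same procedure is repeated separately for Training Set and Test Set 2.
\end{verbatim}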
\subsubsection{Converting multivariate time series to a point cloud}
We follow the procedure outlined in Section \ref{sec:pointcloud}.
We set $w=s=10$ which corresponds to non-overlapping time windows of 10 minutes each. That is, each point cloud (not counting the anchor point) contains 10 points in $\mathbb{R}^5$.
\subsubsection{Symmetry-breaking and anchor points}
We use $\mathbf{v}=(0,1,2,3,4)$ as the fixed vector for symmetry-breaking. We use the origin $\mathbf{0}=(0,0,0,0,0)$ as the anchor point. That is, we augment each point cloud (corresponding to a time window) with the origin $\mathbf{0}$.
After symmetry-breaking, the mean and standard deviation (SD) of the 5 time-dependent variables for each experiment are as described in Table \ref{table:aftersymbreak}.
\begin{table}[!htp]
\footnotesize
\caption{Mean and standard deviation (SD) of time-dependent variables.}
\begin{center}
\begin{tabular}{llllll}
\hline
& Temperature & Humidity & Light & CO\textsubscript{2} & Humidity Ratio \\
\hline
Mean & 0 & 1 & 2 & 3 & 4 \\
SD & 1 & 1 & 1 & 1 & 1 \\
\hline
\end{tabular}
\end{center}
\label{table:aftersymbreak}
\end{table}
\subsubsection{From point cloud to persistence diagram}
For each point cloud, we construct the persistence diagram using the \texttt{ripsDiag} function in the R package \texttt{TDA}. The \texttt{ripsDiag} function uses a filtration of Rips complexes obtained from the point cloud to compute the persistence diagram. We show examples of two persistence diagrams from different classes in Figure \ref{fig:twopd}.
\begin{figure}[!htbp]
\begin{center}
\includegraphics[scale=0.4]{pc60_diagram.eps}
\includegraphics[scale=0.4]{pc0_diagram.eps}
\caption{The persistence diagram on the left belongs to a time window of class 0 (non-occupied), while that on the right belongs to a time window of class 1 (occupied). The points refer to homological features in dimension 0.}
\label{fig:twopd}
\end{center}
\end{figure}
To calculate the 1-Wasserstein distance between two persistence diagrams, we use the \texttt{wasserstein} function, also from the R package \texttt{TDA}, with the default value of $p=1$. For this paper, we use the option \texttt{dimension=0} to specify that distances between persistence diagrams are computed using 0-dimensional features. This is because, for our data, we find that 1-dimensional (and higher) features rarely appear in the persistence diagrams, possibly due to the relatively small size of the point cloud.
\subsubsection{The $k$-nearest neighbors algorithm}
For this experiment, we choose $k=50$ for the $k$-NN algorithm. Since we have two test sets, we initially use Test Set 1 as a validation set to select a suitable value for the parameter $k$. Subsequently, we use the same $k$ for Test Set 2. We show the accuracy, sensitivity (true positive rate) and specificity (true negative rate) for various values of $k$ in Table \ref{table:differentk} (for Test Set 1). We select $k=50$ as the accuracy, sensitivity and specificity values are relatively good. We remark that we do not over-optimize for any single metric (e.g.\ accuracy) since other metrics are important as well in our study of room occupancy.
\begin{table}[!htp]
\footnotesize
\caption{Accuracy, sensitivity and specificity for different values of $k$ for Test Set 1.}
\begin{center}
\begin{tabular}{lllllllllll}
\hline
Value of $k$ & 10 & 20 & 30 & 40 & \textbf{50} & 60 & 70 & 80 & 90 & 100 \\
\hline
Accuracy (\%) & 81 & 83 & 82 & 84 & \textbf{84} & 84 & 84 & 85 & 86 & 87 \\
Sensitivity (\%) & 88 & 88 & 80 & 81 & \textbf{80} & 76 & 69 & 68 & 64 & 63 \\
Specificity (\%) & 77 & 81 & 83 & 85 & \textbf{87} & 88 & 90 & 93 & 96 & 97 \\
\hline
\end{tabular}
\end{center}
\label{table:differentk}
\end{table}
To save computation time, we do not actually need to calculate the Wasserstein distance between all $\binom{1000}{2} = 499{,}500$ pairs of persistence diagrams. We just need to calculate, for each of the 200 persistence diagrams in the test set, their respective Wasserstein distances from the 800 persistence diagrams in the training set. This amounts to $200\times 800=160{,}000$ computations of Wasserstein distances, which is about 68\% fewer computations than calculating distances between all pairs of diagrams.
The computation of Wasserstein distances is the most time-consuming part of the algorithm, but it still takes a reasonably short time of 15 minutes (after the above reduction in computations) on a 2019 MacBook Pro with a 2.4 GHz Intel Core i5 and 8 GB of 2133 MHz LPDDR3 memory.
\subsection{Activity Recognition}
{In addition, we perform a second experiment on Activity Recognition (AR) data. Our data is derived from the paper by} \citet{palumbo2013multisensor}, {which is available on the UCI Machine Learning Repository} \citep{Dua:2019}. {In their work, the authors collected data sampled by accelerometers embedded on wearable sensors as well as Received Signal Strength (RSS) of beacon packets exchanged between the sensors.}
{For this experiment, our goal is to predict the activity carried out by the user based on the given data. We focus on two activities: cycling (class 1) and standing up (class 0). We remark that standing up is an active movement (not merely standing still), with a characteristic vertical momentum pattern. The $y$-axis component of the accelerometer placed on the chest typically shows values indicative of the vertical direction of movement} \citep{palumbo2013multisensor}.
{The 6 time-dependent variables in the data are \texttt{avg\_rss12}, \texttt{var\_rss12}, \texttt{avg\_rss13}, \texttt{var\_rss13}, \texttt{avg\_rss23}, \texttt{var\_rss23}, where \texttt{avg} and \texttt{var} denote the mean and variance values of the RSS signals, respectively. Measurements were taken after each time period of 250 milliseconds.
We divide the data into non-overlapping time windows of 5 time periods (1.25 seconds), where each time window represents a particular activity.}
{Subsequently, we split the data into training, validation and test sets using a ratio of 60:20:20. A summary of the data sets can be found in Table} \ref{table:datasetsummary2}.
\begin{table}[!htp]
\footnotesize
\caption{Description of data sets for activity recognition experiment.}
\begin{center}
\begin{tabular}{llll}
\hline
& & \multicolumn{2}{l}{Data class distribution (\%)} \\\cline{3-4}
Data set & Number of time windows & 0 (standing up) & 1 (cycling) \\
\hline
Training Set & 1728 & 51.7 & 48.3 \\
Validation Set & 576 & 48.6 & 51.4 \\
Test Set & 576 & 46.4 & 53.6 \\
\hline
\end{tabular}
\end{center}
\label{table:datasetsummary2}
\end{table}
{The experiment is conducted in a similar manner as the first experiment in Section} \ref{sec:roomoccupancy}, {with two differences. Firstly, we choose $w=s=5$ corresponding to non-overlapping time windows of 5 time periods. The main reason for the above choice is to show that our method can work for time windows of different sizes. Secondly, due to there being 6 time-dependent variables, we use $\mathbf{v}=(0,1,2,3,4,5)$ as the fixed vector for symmetry-breaking.}
{For the $k$-NN algorithm, we use the validation set to select the optimal value of $k$. We select $k=40$ corresponding to a high validation accuracy of $98.61\%$, sensitivity of $99.32\%$ and specificity of $97.86\%$.}
\section{Results and discussion}
\subsection{Room Occupancy}
We obtain good results (80\% and above) for the key metrics of accuracy, sensitivity (recall of positive class) and specificity (recall of negative class) for both test sets. We summarize our results (including additional metrics such as precision and $F_1$ score) in Table \ref{table:finalresult}. {We remark that all metrics in Table }\ref{table:finalresult} {are computed by the \texttt{scikit-learn} package in Python. We consider these metrics as they are among the most popular in machine learning. Other than accuracy, metrics like sensitivity and specificity help to measure how an algorithm performs on an unbalanced dataset. We also state the confusion matrix $C$ where $C_{i,j}$ is the number of observations known to be in group $i$ and predicted to be in group $j$. The confusion matrices for Test Set 1 and Test Set 2 are}
$\begin{bmatrix}
122 & 19\\
12 &47
\end{bmatrix}$ {and}
$\begin{bmatrix}
109 & 5\\
14 &72
\end{bmatrix}$
{respectively.}
{We also show a sample manual calculation, using information from the confusion matrix for Test Set 2, in Equation} \ref{eq:sample1}.
\begin{equation}
\label{eq:sample1}
\begin{split}
\text{Accuracy}&=\frac{\text{TP}+\text{TN}}{\text{TP}+\text{TN}+\text{FP}+\text{FN}}=\frac{72+109}{72+109+5+14}=0.91,\\
\text{Precision (class 1)}&=\frac{\text{TP}}{\text{TP}+\text{FP}}=\frac{72}{72+5}=0.94.
\end{split}
\end{equation}
\begin{table}[!htp]
\footnotesize
\caption{Results for Test Set 1 and Test Set 2 (using $k=50$).}
\begin{center}
\begin{tabular}{llllllll}
\hline
& & & & \multicolumn{2}{c}{Precision} & \multicolumn{2}{c}{$F_1$ score} \\\cline{5-6}\cline{7-8}
& Accuracy & Sensitivity & Specificity & (class 0) & (class 1) & (class 0) & (class 1) \\
\hline
Test Set 1 & 0.84 & 0.80 & 0.87 & 0.91 & 0.71 & 0.89 & 0.75 \\
Test Set 2 & 0.91 & 0.84 & 0.96 & 0.89 & 0.94 & 0.92 & 0.88 \\
\hline
\end{tabular}
\end{center}
\label{table:finalresult}
\end{table}
We remark that our results are not directly comparable with the seminal work by \citet{candanedo2016accurate}, even though we are using data derived from the same data set. This is because \citet{candanedo2016accurate} deals with real time occupancy detection (predicting room occupancy at each time period of 1 minute), while our work focuses on predicting room occupancy during a time window (of 10 time periods totalling 10 minutes).
There are also some additional difficulties in predicting room occupancy during a time window, especially with regards to correctly predicting non-occupancy (class 0). For instance, successfully predicting non-occupancy in a time window of length 10 is equivalent to 10 consecutive successful predictions of non-occupancy (for 10 time periods). Hence, even for a highly accurate model for real-time prediction (e.g.\ 95\% accuracy), the chance of correctly predicting non-occupancy for a time window of length 10 drops to $0.95^{10}\approx 59.9\%$ (assuming independence).
In view of the above discussions, our results show that our topological method is able to effectively and accurately predict room occupancy (and also non-occupancy) for time windows. Advantages of the time window approach include reducing the amount of data (many time periods are combined into a single time window) and hence computational time. In practice, energy saving measures also have good potential to work well with time windows. For example, it is practical to switch off the room air conditioner after the room has been empty for a time window, rather than immediately upon the room being vacant, since the occupants may only be temporarily leaving the room to return a short while later.
\subsection{Activity Recognition}
{We obtain very good results (above $99\%$) for all key metrics. We summarize our test set results in Table} \ref{table:finalresult2}. {The confusion matrix $C$ is }
$\begin{bmatrix}266 &1\\
0 &309
\end{bmatrix}$. {We remark that all metrics in Table }\ref{table:finalresult2} {are computed by the \texttt{scikit-learn} package in Python. We also show a sample manual calculation, using information from the confusion matrix, in Equation} \ref{eq:sample2}.
\begin{equation}
\label{eq:sample2}
\begin{split}
\text{Accuracy}&=\frac{\text{TP}+\text{TN}}{\text{TP}+\text{TN}+\text{FP}+\text{FN}}=\frac{309+266}{309+266+1+0}=0.9983,\\
\text{Specificity}&=\frac{\text{TN}}{\text{TN}+\text{FP}}=\frac{266}{266+1}=0.9963.
\end{split}
\end{equation}
\begin{table}[!htp]
\footnotesize
\caption{Results for Test Set.}
\begin{center}
\begin{tabular}{llllllll}
\hline
& & & & \multicolumn{2}{c}{Precision} & \multicolumn{2}{c}{$F_1$ score} \\\cline{5-6}\cline{7-8}
& Accuracy & Sensitivity & Specificity & (class 0) & (class 1) & (class 0) & (class 1) \\
\hline
& 0.9983 & 1.0000 & 0.9963 & 1.0000 & 0.9968 & 0.9981 & 0.9984 \\
\hline
\end{tabular}
\end{center}
\label{table:finalresult2}
\end{table}
{Our good results are in line with those obtained in other previous works on activity recognition} \citep{palumbo2013multisensor,gallicchio2011user,lara2012centinela}. {Hence, it is a clear indication of the effectiveness of our proposed method for the activity recognition tasks of the type considered (cycling and standing up).}
{In addition, we remark that the time window chosen for this experiment is very short (1.25 seconds). Hence, this shows that our method can recognize and classify the activity types effectively even when given a very short time series consisting of data from the accelerometer sensors.}
\section{Conclusions}
This work provides a new {method} based on topological data analysis (TDA) to study multivariate time series. We use techniques in persistent homology (persistence diagram and Wasserstein distance) in combination with the $k$-nearest neighbors algorithm ($k$-NN) to perform supervised machine learning of time windows. In this paper, we also introduce methods (symmetry-breaking and anchor points) to allow TDA to better analyze data with heterogeneous features that are sensitive to translation, rotation, or choice of coordinates.
For applications, we first focus on room occupancy detection. Room occupancy detection is important in multiple ways, including energy saving, security and occupant behavior analysis. It is also important, for privacy reasons, to use non-intrusive types of data (such as light, humidity, temperature) instead of cameras (which may contain facial images of individuals) to detect room occupancy. Experimental results demonstrate the effectiveness of predicting room occupancy during a time window using topological methods.
{In the second application, we focus on activity recognition, which is an emerging field of research. It has many potential applications such as inferring the daily activity of the elderly for ensuring their safety, as well as applications in the healthcare, human behavior modeling and human-machine interaction domains} \citep{palumbo2013multisensor}. {The experimental results also demonstrate that topological methods are effective in recognizing activities based on accelerometer data from wearable devices.}
\section*{Disclosure statement}
No potential conflict of interest was reported by the authors.
\section*{Acknowledgements}
{The authors wish to thank the referees most warmly for numerous suggestions that have improved the exposition of this paper.}
\bibliographystyle{apacite}
\section{Introduction}
\vspace{-1mm}
\label{sec:intro}
\myparatight{Byzantine-robust federated learning} In \emph{federated learning} (also known as \emph{collaborative learning})~\cite{Konen16,McMahan17}, the training dataset is decentralized among multiple client devices (e.g., desktops, mobile phones, IoT devices), which could belong to different users or organizations. These users/organizations do not want to share their local training datasets, but still desire to jointly learn a model. For instance, multiple hospitals may desire to learn a healthcare model without sharing their sensitive data to each other.
Each client device (called \emph{worker device}) maintains a \emph{local model} for its local training dataset. Moreover, the service provider has a \emph{master device} (e.g., cloud server), which maintains a \emph{global model}. Roughly speaking, federated learning repeatedly performs three steps: the master device sends the current global model to worker devices; worker devices update their local models using their local training datasets and the global model, and send the local models to the master device; and the master device computes a new global model via aggregating the local models according to a certain \emph{aggregation rule}.
For instance, the \emph{mean} aggregation rule that takes the average of the local model parameters as the global model is widely used under non-adversarial settings. However, the global model can be arbitrarily manipulated for mean even if just one worker device is compromised~\cite{Blanchard17,Yin18}. Therefore, the machine learning community recently proposed multiple aggregation rules (e.g., Krum~\cite{Blanchard17}, Bulyan~\cite{Mhamdi18}, trimmed mean~\cite{Yin18}, and median~\cite{Yin18}), which aimed to be robust against Byzantine failures of certain worker devices.
\begin{figure}[!t]
\vspace{-5mm}
\centering
{\includegraphics[width=0.3 \textwidth]{figs/flowchart}}
\caption{\emph{Data} vs. \emph{local model} poisoning attacks.}
\vspace{-4mm}
\label{flowchart}
\end{figure}
\neil{\myparatight{Existing data poisoning attacks are insufficient}
We consider attacks that aim to manipulate the \emph{training phase} of machine learning such that the learnt model (we consider the model to be a classifier) has a high testing error rate indiscriminately for testing examples, which makes the model unusable and eventually leads to denial-of-service attacks. Figure~\ref{flowchart} shows the training phase, which includes two components, i.e., \emph{training dataset collection} and \emph{learning process}. The training dataset collection component is to collect a training dataset, while the learning process component produces a model from a given training dataset. Existing attacks mainly inject malicious data into the training dataset before the learning process starts, while the learning process is assumed to maintain integrity. Therefore, these attacks are often called \emph{data poisoning attacks}~\cite{rubinstein2009antidote,biggio2012poisoning,xiao2015feature,poisoningattackRecSys16,Jagielski18,Suciu18}. {In federated learning, an attacker could only inject the malicious data into the worker devices that are under the attacker's control.} As a result, these data poisoning attacks have limited success to attack Byzantine-robust federated learning (see our experimental results in Section~\ref{comparison-data}).}
\myparatight{Our work} We perform the first study on \emph{local model poisoning attacks} to Byzantine-robust federated learning. \neil{Existing studies~\cite{Blanchard17,Yin18} only showed local model poisoning attacks to federated learning with the non-robust mean aggregation rule.}
{\bf Threat model.} Unlike existing data poisoning attacks that compromise the integrity of training dataset collection, we aim to compromise the integrity of the learning process in the training phase (see Figure~\ref{flowchart}).
We assume the attacker has control of some worker devices and manipulates the local model parameters sent from these devices to the master device during the learning process. The attacker may or may not know the aggregation rule used by the master device. To contrast with data poisoning attacks, we call our attacks {local model poisoning attacks} as they directly manipulate the local model parameters.
{\bf Local model poisoning attacks.} A key challenge of local model poisoning attacks is how to craft the local models sent from the compromised worker devices to the master device. To address this challenge, we formulate crafting local models as solving an optimization problem in each iteration of federated learning. Specifically, the master device could compute a global model in an iteration if there are no attacks, which we call \emph{before-attack} global model. Our goal is to craft the local models on the compromised worker devices such that the global model deviates the most towards the inverse of the direction along which the before-attack global model would change. Our intuition is that the deviations accumulated over multiple iterations would make the learnt global model differ from the before-attack one significantly.
We apply our attacks to four recent Byzantine-robust federated learning methods including Krum, Bulyan, trimmed mean, and median.
Our evaluation results on the MNIST, Fashion-MNIST, CH-MNIST, and Breast Cancer Wisconsin (Diagnostic) datasets
show that our attacks can substantially increase the error rates of the global models under various settings of federated learning. For instance, when learning a deep neural network classifier for MNIST using Krum, our attack can increase the error rate from 0.11 to 0.75. Moreover, we compare with data poisoning attacks including \emph{label flipping attacks} and \emph{back-gradient optimization based attacks}~\cite{munoz2017towards} (state-of-the-art untargeted data poisoning attacks for multi-class classifiers), which poison the local training datasets on the compromised worker devices. We find that these data poisoning attacks have limited success to attack the Byzantine-robust federated learning methods.
{\bf Defenses.} Existing defenses against data poisoning attacks essentially aim to sanitize the training dataset.
One category of defenses~\cite{Cretu08,barreno2010security,Suciu18,Tran18} detects malicious data based on their negative impact on the error rate of the learnt model. For instance, \emph{Reject on Negative Impact (RONI)}~\cite{barreno2010security} measures the impact of each training example on the error rate of the learnt model and removes the training examples that have large negative impact. Another category of defenses~\cite{Feng14,Liu17AiSec,Jagielski18} leverages new loss functions, solving which detects malicious data and learns a model simultaneously. For instance, Jagielski et al.~\cite{Jagielski18} proposed TRIM, which aims to jointly find a subset of training dataset with a given size and model parameters that minimize the loss function. The training examples that are not in the selected subset are treated as malicious data. However, these defenses are not directly applicable for our local model poisoning attacks because our attacks do not inject malicious data into the training dataset.
To address the challenge, we generalize RONI and TRIM to defend against our {local model poisoning attacks}. Both defenses remove the local models that are potentially malicious before computing the global model using a Byzantine-robust aggregation rule in each iteration. One defense removes the local models that have large negative impact on the error rate of the global model (inspired by RONI that removes training examples that have large negative impact on the error rate of the model), while the other defense removes the local models that result in large loss (inspired by TRIM that removes the training examples that have large negative impact on the loss), where the error rate and loss are evaluated on a validation dataset. We call the two defenses \emph{Error Rate based Rejection (ERR)} and \emph{Loss Function based Rejection (LFR)}, respectively. \neil{Moreover, we combine ERR and LFR, i.e., we remove the local models that are removed by either ERR or LFR. Our empirical evaluation results show that LFR outperforms ERR; and the combined defense is comparable to LFR in most cases.} Moreover, LFR can defend against our attacks in certain cases, but LFR is not effective enough in other cases. For instance, LFR can effectively defend against our attacks that craft local models based on the trimmed mean aggregation rule, but LFR is not effective against our attacks that are based on the Krum aggregation rule. Our results show that we need new defense mechanisms to defend against our local model poisoning attacks.
Our key contributions can be summarized as follows:
\begin{packeditemize}
\item We perform the first systematic study on attacking Byzantine-robust federated learning.
\item {We propose \emph{local model poisoning attacks} to Byzantine-robust federated learning.}
Our attacks manipulate the local model parameters on compromised worker devices during the learning process.
\item {We generalize two defenses for data poisoning attacks to defend against local model poisoning attacks. Our results show that, although one of them is effective in some cases, they have limited success in other cases}.
\end{packeditemize}
\section{Background and Problem Formulation}
\label{sec:background}
\subsection{Federated Learning}
\label{federatedlearning}
Suppose we have $m$ worker devices and the $i$th worker device has a local training dataset $D_i$. The worker devices aim to collaboratively learn a classifier. Specifically, the model parameters $\mathbf{w}$ of the classifier are often obtained via solving the following optimization problem: $\min_{\mathbf{w}} \sum_{i=1}^m F(\mathbf{w}, D_i)$,
where $F(\mathbf{w}, D_i)$ is the objective function for the local training dataset on the $i$th device and characterizes how well the parameters $\mathbf{w}$ model the local training dataset on the $i$th device. Different classifiers (e.g., logistic regression, deep neural networks) use different objective functions.
In federated learning, each worker device maintains a local model for its local training dataset. Moreover, we have a master device to maintain a global model via aggregating local models from the $m$ worker devices.
Specifically, federated learning performs the following three steps in each iteration:
{\bf Step I.} The master device sends the current global model parameters to all worker devices.
{\bf Step II.} The worker devices update their local model parameters using the current global model parameters and their local training datasets in parallel. In particular, the $i$th worker device essentially aims to solve the optimization problem $\min_{\mathbf{w}_i} F(\mathbf{w}_i, D_i)$ with the global model parameters $\mathbf{w}$ as an initialization of the local model parameters $\mathbf{w}_i$. A worker device could use any method to solve the optimization problem, though \emph{stochastic gradient descent} is the most popular one. Specifically, the $i$th worker device updates its local model parameters $\mathbf{w}_i$ as $\mathbf{w}_i=\mathbf{w} - \alpha\cdot \frac{\partial F(\mathbf{w}, B_i)}{\partial \mathbf{w}}$,
where $\alpha$ is the learning rate and $B_i$ is a randomly sampled batch from the local training dataset $D_i$. Note that a worker device could apply stochastic gradient descent multiple rounds to update its local model.
After updating the local models, the worker devices send them to the master device.
{\bf Step III.} The master device {aggregates} the local models from the worker devices to obtain a new global model according to a certain \emph{aggregation rule}. Formally, we have $ \mathbf{w} = \mathcal{A}(\mathbf{w}_1, \mathbf{w}_2, \cdots, \mathbf{w}_m)$.
The master device could also randomly pick a subset of worker devices and send the global model to them; the picked worker devices update their local models and send them to the master device; and the master device aggregates the local models to obtain the new global model~\cite{McMahan17}. \neil{We note that, for the aggregation rules we study in this paper, sending local models to the master device is equivalent to sending gradients to the master device, which aggregates the gradients and uses them to update the global model.}
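The three steps can be summarized by the following sketch. The sketch is ours and is for illustration only: the least-squares local objective, the hyperparameters, and the function names are placeholders rather than the models or systems evaluated in this paper.
\begin{verbatim}
import numpy as np

def local_update(w_global, X_i, y_i, lr=0.1, rounds=5, batch_size=32):
    """Step II (sketch): a worker refines the global model on its local
    data with stochastic gradient descent on a least-squares objective."""
    w = w_global.copy()
    for _ in range(rounds):
        idx = np.random.choice(len(X_i), size=min(batch_size, len(X_i)),
                               replace=False)
        Xb, yb = X_i[idx], y_i[idx]
        grad = 2 * Xb.T @ (Xb @ w - yb) / len(idx)  # gradient of local loss
        w = w - lr * grad
    return w

def federated_learning(w, workers, aggregate, iterations=50):
    """Steps I--III repeated: broadcast, local updates, aggregation."""
    for _ in range(iterations):
        local_models = [local_update(w, X_i, y_i) for X_i, y_i in workers]
        w = aggregate(np.stack(local_models))  # Step III: aggregation rule A
    return w

mean_rule = lambda models: models.mean(axis=0)  # the non-robust mean rule

# Toy usage: 5 workers whose local data come from the same linear model.
np.random.seed(0)
true_w = np.array([1.0, -2.0, 0.5])
workers = []
for _ in range(5):
    X = np.random.randn(100, 3)
    workers.append((X, X @ true_w + 0.01 * np.random.randn(100)))
w = federated_learning(np.zeros(3), workers, mean_rule)
\end{verbatim}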
\subsection{Byzantine-robust Aggregation Rules}
A naive aggregation rule is to average the local model parameters as the global model parameters.
This \emph{mean} aggregation rule is widely used under non-adversarial settings~\cite{Dean12,Konen16,McMahan17}. However, mean is not robust under adversarial settings. In particular, an attacker can manipulate the global model parameters arbitrarily for this mean aggregation rule when compromising only one worker device~\cite{Blanchard17,Yin18}.
Therefore, the machine learning community has recently developed multiple aggregation rules that aim to be robust even if certain worker devices exhibit Byzantine failures.
Next, we review several such aggregation rules.
\myparatight{Krum~\cite{Blanchard17} and Bulyan~\cite{Mhamdi18}} Krum selects one of the $m$ local models that is similar to other models as the global model. The intuition is that even if the selected local model is from a compromised worker device, its impact may be constrained since it is similar to other local models possibly from benign worker devices. Suppose at most $c$ worker devices are compromised. For each local model $\mathbf{w}_i$, the master device computes the $m-c-2$ local models that are the closest to $\mathbf{w}_i$ with respect to Euclidean distance. Moreover, the master device computes the sum of the squared distances between $\mathbf{w}_i$ and its closest $m-c-2$ local models. Krum selects the local model with the smallest sum of squared distance as the global model. When $c < \frac{m-2}{2}$, Krum has theoretical guarantees for the convergence for certain objective functions.
The Euclidean distance between two local models could be substantially influenced by a single model parameter. Therefore, Krum could be influenced by some abnormal model parameters~\cite{Mhamdi18}. To address this issue, Mhamdi et al.~\cite{Mhamdi18} proposed Bulyan, which essentially combines Krum and a variant of trimmed mean (trimmed mean will be discussed next). Specifically, Bulyan first iteratively applies Krum to select $\theta$ ($\theta \leq m-2c$) local models.
Then, Bulyan uses a variant of trimmed mean to aggregate the $\theta$ local models. In particular, for each $j$th model parameter, Bulyan sorts the $j$th parameters of the $\theta$ local models, finds the $\gamma$ ($\gamma \leq \theta - 2c$) parameters that are the closest to the median, and computes their mean as the $j$th parameter of the global model. When $c \leq \frac{m-3}{4}$, Bulyan has theoretical guarantees for the convergence under certain assumptions of the objective function.
Since Bulyan is based on Krum, our attacks for Krum can transfer to Bulyan (see Appendix~\ref{attackBulyan}). Moreover, Bulyan is not scalable because it executes Krum many times in each iteration and Krum computes pairwise distances between local models.
Therefore, we will focus on Krum in the paper.
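For reference, the Krum selection rule described above can be sketched in a few lines of Python (our own illustration; each local model is represented as a flattened parameter vector):
\begin{verbatim}
import numpy as np

def krum(local_models, c):
    """Select the local model whose summed squared distance to its
    m - c - 2 closest other local models is smallest."""
    W = np.asarray(local_models, dtype=float)
    m = len(W)
    # Pairwise squared Euclidean distances between local models.
    sq_dist = np.sum((W[:, None, :] - W[None, :, :]) ** 2, axis=-1)
    scores = []
    for i in range(m):
        d = np.sort(np.delete(sq_dist[i], i))  # distances to the other models
        scores.append(d[:m - c - 2].sum())     # m - c - 2 closest neighbours
    return W[int(np.argmin(scores))]           # becomes the new global model
\end{verbatim}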
\myparatight{Trimmed mean~\cite{Yin18}} This aggregation rule aggregates each model parameter independently. Specifically, for each $j$th model parameter, the master device sorts the $j$th parameters of the $m$ local models, i.e., $w_{1j}, w_{2j}, \cdots, w_{mj}$, where $w_{ij}$ is the $j$th parameter of the $i$th local model, removes the largest and smallest $\beta$ of them, and computes the mean of the remaining $m-2\beta$ parameters as the $j$th parameter of the global model. Suppose at most $c$ worker devices are compromised. This trimmed mean aggregation rule achieves \emph{order-optimal} error rate when ${c} \leq \beta < \frac{m}{2}$ and the objective function to be minimized is strongly convex. Specifically, the order-optimal error rate is $\tilde{O}(\frac{c}{m\sqrt{n}} + \frac{1}{\sqrt{mn}})$,\footnote{$\tilde{O}$ is a variant of the $O$ notation, which ignores the logarithmic terms.\label{fn:1}} where $n$ is the number of training data points on a worker device (worker devices are assumed to have the same number of training data points).
\myparatight{Median~\cite{Yin18}} In this median aggregation rule, for each $j$th model parameter, the master device sorts the $j$th parameters of the $m$ local models and takes the median as the $j$th parameter of the global model. Note that when $m$ is an even number, median is the mean of the middle two parameters. Like the trimmed mean aggregation rule, the median aggregation rule also achieves an order-optimal error rate when the objective function is strongly convex.
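Both rules operate on each model parameter independently and can be sketched as follows (our own illustration):
\begin{verbatim}
import numpy as np

def trimmed_mean(local_models, beta):
    """Coordinate-wise trimmed mean: for each parameter, remove the beta
    largest and beta smallest values, then average the remaining ones."""
    W = np.sort(np.asarray(local_models, dtype=float), axis=0)
    return W[beta:len(W) - beta].mean(axis=0)

def coordinate_median(local_models):
    """Coordinate-wise median of the local model parameters."""
    return np.median(np.asarray(local_models, dtype=float), axis=0)

# One abnormal local model barely moves the aggregated parameters:
models = np.array([[1.0, 2.0], [1.1, 2.1], [0.9, 1.9], [100.0, -100.0]])
print(trimmed_mean(models, beta=1))   # -> [1.05 1.95]
print(coordinate_median(models))      # -> [1.05 1.95]
\end{verbatim}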
\subsection{Problem Definition and Threat Model}
\label{sec:threatmodel}
\myparatight{Attacker's goal} Like many studies on poisoning attacks~\cite{rubinstein2009antidote,biggio2012poisoning,biggio2013poisoning,xiao2015feature,Jagielski18,poisoningattackRecSys16,YangRecSys17}, we consider an attacker's goal is to manipulate the learnt global model such that it has a high error rate indiscriminately for testing examples. Such attacks are known as \emph{untargeted poisoning attacks}, which make the learnt model unusable and eventually lead to denial-of-service attacks. \alan{For instance, an attacker may perform such attacks to its competitor's federated learning system.} Some studies also considered other types of poisoning attacks (e.g., \emph{targeted poisoning attacks}~\cite{Suciu18}), which we will review in Section~\ref{related}.
We note that the Byzantine-robust aggregation rules discussed above can \emph{asymptotically} bound the error rates of the learnt global model under certain assumptions of the objective functions, and some of them (i.e., trimmed mean and median) even achieve \emph{order-optimal} error rates.
These theoretical guarantees seem to imply the difficulty of manipulating the error rates. However, the asymptotic guarantees do not precisely characterize the \emph{practical} performance of the learnt models. Specifically, the asymptotic error rates are quantified using the $\tilde{O}$ notation. The $\tilde{O}$ notation ignores any constant, e.g., $\tilde{O}(\frac{1}{\sqrt{n}})$=$\tilde{O}(\frac{100}{\sqrt{n}})$. However, such constants can significantly influence a model's error rate in practice. As we will show, although these asymptotic error rates still hold for our local model poisoning attacks since they hold for Byzantine failures, our attacks can nevertheless significantly increase the testing error rates of the learnt models in practice.
\myparatight{Attacker's capability} We assume the attacker has control of $c$ worker devices. \alan{Specifically, like Sybil attacks~\cite{sybil} to distributed systems, the attacker could inject $c$ fake worker devices into the federated learning system or compromise $c$ benign worker devices.} However, we assume the number of worker devices under the attacker's control is less than 50\% (otherwise, it would be easy to manipulate the global models). We assume the attacker can arbitrarily manipulate the local models sent from these worker devices to the master device. For simplicity, we call these worker devices \emph{compromised worker devices} no matter whether they are fake devices or compromised benign ones.
\myparatight{Attacker's background knowledge} The attacker knows the code, local training datasets, and local models on the compromised worker devices.
We characterize the attacker's background knowledge along the following two dimensions:
{\bf Aggregation rule.}
We consider two scenarios depending on whether the attacker knows the aggregation rule or not. In particular, the attacker could know the aggregation rule in various scenarios.
For instance, the service provider may make the aggregation rule public in order to increase transparency and trust of the federated learning system~\cite{McMahan17}.
When the attacker does not know the aggregation rule, we will craft local model parameters for the compromised worker devices based on a certain aggregation rule. Our empirical results show that such crafted local models could also attack other aggregation rules. In particular, we observe different levels of \emph{transferability} of our local model poisoning attacks between different aggregation rules.
{\bf Training data.} We consider two cases (\emph{full knowledge} and \emph{partial knowledge}) depending on whether the attacker knows the local training datasets and local models on the benign worker devices. In the full knowledge scenario, the attacker knows the local training dataset and local model on every worker device.
We note that the full knowledge scenario has limited applicability in practice for federated learning as the training dataset is decentralized on many worker devices, and \neil{we use it to estimate the \emph{upper bound} of our attacks' threats for a given setting of federated learning}. In the partial knowledge scenario, the attacker only knows the local training datasets and local models on the compromised worker devices.
Our threat model is inspired by multiple existing studies~\cite{Papernot16,Papernot16Distillation,Jagielski18,Suciu18} on adversarial machine learning.
For instance, Suciu et al.~\cite{Suciu18} recently proposed to characterize an attacker's background knowledge and capability for data poisoning attacks with respect to multiple dimensions such as \emph{Feature}, \emph{Algorithm}, and \emph{Instance}. Our aggregation rule and training data dimensions are essentially the Algorithm and Instance dimensions, respectively. We do not consider the Feature dimension because the attacker controls some worker devices and already knows the features in our setting.
Some Byzantine-robust aggregation rules (e.g., Krum~\cite{Blanchard17} and trimmed mean~\cite{Yin18}) need to know the upper bound of the number of compromised worker devices in order to set parameters appropriately. For instance, trimmed mean removes the largest and smallest $\beta$ local model parameters, where $\beta$ is at least the number of compromised worker devices (otherwise trimmed mean can be easily manipulated). \alan{To calculate a lower bound for our attack's threat, we consider a hypothetical, strong service provider who knows the number of compromised worker devices and sets parameters in the aggregation rule accordingly.}
\section{Our Local Model Poisoning Attacks}
We focus on the case where the aggregation rule is known. When the aggregation rule is unknown, we craft local models based on an assumed one. Our empirical results in Section~\ref{unknownrule} show that our attacks have different levels of transferability between aggregation rules.
\subsection{Optimization Problem}
\label{sec:opt}
\alan{Our idea is to manipulate the global model via carefully crafting the local models sent from the compromised worker devices to the master device in each iteration of federated learning.
We denote by $s_j$ the changing direction of the $j$th global model parameter in the current iteration when there are no attacks, where $s_j=1$ or $-1$. $s_j=1$ (or $s_j=-1$) means that the $j$th global model parameter increases (or decreases) upon the previous iteration. We consider that the attacker's goal (we call it the \emph{directed deviation goal}) is to deviate each global model parameter the most towards the inverse of the direction along which it would change without attacks.
Suppose in an iteration, $\mathbf{w}_i$ is the local model that the $i$th worker device intends to send to the master device when there are no attacks. Without loss of generality, we assume the first $c$ worker devices are compromised.
Our directed deviation goal is to craft local models $\mathbf{w}_1', \mathbf{w}_2', \cdots, \mathbf{w}_c'$ for the compromised worker devices via solving the following optimization problem in each iteration:
\begin{align}
\label{problem}
&\max_{\mathbf{w}_1', \cdots, \mathbf{w}_c'} \mathbf{s}^T (\mathbf{w} - \mathbf{w}'),\nonumber \\
\text{subject to } & \mathbf{w}=\mathcal{A}(\mathbf{w}_1, \cdots, \mathbf{w}_c, \mathbf{w}_{c+1}, \cdots, \mathbf{w}_m), \nonumber \\
& \mathbf{w}'=\mathcal{A}(\mathbf{w}_1', \cdots, \mathbf{w}_c', \mathbf{w}_{c+1}, \cdots, \mathbf{w}_m),
\end{align}
where $\mathbf{s}$ is a column vector of the changing directions of all global model parameters, $\mathbf{w}$ is the before-attack global model, and $\mathbf{w}'$ is the after-attack global model. \neil{Note that $\mathbf{s}$, $\mathbf{w}$, and $\mathbf{w}'$ all depend on the iteration number. Since our attacks manipulate the local models in each iteration, we omit the explicit dependency on the iteration number for simplicity.}
In our preliminary exploration of formulating poisoning attacks, we also considered a \emph{deviation goal}, which does not consider the global model parameters' changing directions. We empirically find that our attacks based on both the directed deviation goal and the deviation goal achieve high testing error rates for Krum. However, the directed deviation goal substantially outperforms the
deviation goal for trimmed mean and median aggregation rules. Appendix~\ref{goals} shows our deviation goal and the empirical comparisons between the deviation goal and the directed deviation goal. }
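For concreteness, the changing-direction vector $\mathbf{s}$ can be computed as in the following minimal sketch (ours; the variable names are ours and ties are broken towards $+1$):
\begin{verbatim}
import numpy as np

def changing_directions(w, w_re):
    # w:    before-attack global model of the current iteration
    # w_re: global model received from the master device, i.e., the
    #       global model obtained in the previous iteration
    # s_j = 1 if the j-th parameter increases without attacks, else -1.
    return np.where(w >= w_re, 1.0, -1.0)

w_re = np.array([0.5, -0.2, 0.1])
w = np.array([0.6, -0.3, 0.2])
print(changing_directions(w, w_re))  # [ 1. -1.  1.]
\end{verbatim}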
\subsection{Attacking Krum} Recall that Krum selects one local model as the global model in each iteration. Suppose $\mathbf{w}$ is the selected local model in the current iteration when there are no attacks. Our goal is to craft the $c$ compromised local models such that the local model selected by Krum has the largest directed deviation from $\mathbf{w}$. Our idea is to make Krum select a certain crafted local model (e.g., $\mathbf{w}_1'$ without loss of generality) via crafting the $c$ compromised local models. Therefore, we aim to solve the optimization problem in Equation~\ref{problem} with $\mathbf{w}'=\mathbf{w}_1'$ and the aggregation rule is Krum.
\myparatight{Full knowledge} The key challenge of solving the optimization problem is that the constraint of the optimization problem is highly nonlinear and the search space of the local models $\mathbf{w}_1', \cdots, \mathbf{w}_c'$ is large. To address the challenge, we make two approximations. Our approximations represent suboptimal solutions to the optimization problem, which means that the attacks based on the approximations may have suboptimal performance. However, as we will demonstrate in our experiments, our attacks already substantially increase the error rate of the learnt model.
First, we restrict $\mathbf{w}_{1}'$ as follows: $\mathbf{w}_{1}' = \mathbf{w}_{Re} - \lambda \mathbf{s} $, where $\mathbf{w}_{Re}$ is the global model received from the master device in the current iteration (i.e., the global model obtained in the previous iteration) and $\lambda > 0$. This approximation explicitly models the directed deviation between the crafted local model $\mathbf{w}_{1}'$ and the received global model.
We also explored the approximation $\mathbf{w}_{1}' = \mathbf{w} - \lambda \mathbf{s}$, which means that we explicitly model the directed deviation between the crafted local model and the local model selected by Krum before attack. However, we found that our attacks are less effective using this approximation.
Second, to make $\mathbf{w}_1'$ more likely to be selected by Krum, we craft the other $c-1$ compromised local models to be close to $\mathbf{w}_1'$. In particular, when the other $c-1$ compromised local models are close to $\mathbf{w}_1'$, $\mathbf{w}_1'$ only needs to have a small distance to $m-2c-1$ benign local models in order to be selected by Krum. In other words, the other $c-1$ compromised local models ``support'' the crafted local model $\mathbf{w}_1'$. In implementing our attack, we first assume the other $c-1$ compromised local models are the same as $\mathbf{w}_1'$, then we solve $\mathbf{w}_1'$, and finally we randomly sample $c-1$ vectors, whose distance to $\mathbf{w}_1'$ is at most $\epsilon$, as the other $c-1$ compromised local models. With our two approximations, we transform the optimization problem as follows:
\begin{align}
&\max_{\lambda} \lambda \nonumber \\
\label{problem2}
\text{subject to }&\mathbf{w}_1' = Krum(\mathbf{w}_1', \cdots, \mathbf{w}_c', \mathbf{w}_{(c+1)}, \cdots, \mathbf{w}_m), \nonumber \\
&\mathbf{w}_1'= \mathbf{w}_{Re} - \lambda \mathbf{s}, \nonumber \\
& \mathbf{w}_i' = \mathbf{w}_1', \text{ for } i = 2, 3, \cdots, c.
\end{align}
More precisely, the objective function in the above optimization problem should be $\mathbf{s}^T(\mathbf{w}-\mathbf{w}_{Re}) + \lambda \mathbf{s}^T\mathbf{s}$. However, $\mathbf{s}^T(\mathbf{w}-\mathbf{w}_{Re})$ is a constant and $\mathbf{s}^T\mathbf{s}=d$ where $d$ is the number of parameters in the global model. Therefore, we simplify the objective function to be just $\lambda$.
After solving $\lambda$ in the optimization problem, we can obtain the crafted local model $\mathbf{w}_{1}'$. Then, we randomly sample $c-1$ vectors whose distance to $\mathbf{w}_1'$ is at most $\epsilon$ as the other $c-1$ compromised local models. We will explore the impact of $\epsilon$ on the effectiveness of our attacks in experiments.
{\bf Solving $\lambda$.} Solving $\lambda$ in the optimization problem in Equation~\ref{problem2} is key to our attacks. First, we derive an upper bound of the solution $\lambda$ to the optimization problem. Formally, we have the following theorem.
\begin{theorem}
\label{theoremLambda}
Suppose $\lambda$ is a solution to the optimization problem in Equation~\ref{problem2}. $\lambda$ is upper bounded as follows:
\begin{align}
\lambda \le & \sqrt{\frac{1}{(m-2c-1)d} \cdot \min_{c+1\le i\le m}{\sum_{l\in {\tilde{\Gamma}_{\mathbf{w}_i}^{m-c-2}}}} D^2(\mathbf{w}_l,\mathbf{w}_i)} \nonumber\\
& + \frac{1}{\sqrt{d}}\cdot \max_{c+1\le i\le m}{D(\mathbf{w}_i,\mathbf{w}_{Re})},
\end{align}
where $d$ is the number of parameters in the global model, $D(\mathbf{w}_l, \mathbf{w}_i)$ is the Euclidean distance between $\mathbf{w}_l$ and $\mathbf{w}_i$, and $\tilde{\Gamma}_{\mathbf{w}_i}^{m-c-2}$ is the set of $m-c-2$ benign local models that have the smallest Euclidean distances to $\mathbf{w}_i$.
\end{theorem}
\begin{proof}
See Appendix~\ref{appendix:proof}.
\end{proof}
Given the upper bound, we use a binary search to solve $\lambda$. Specifically, we initialize $\lambda$ as the upper bound and check whether Krum selects $\mathbf{w}_{1}'$ as the global model; if not, then we halve $\lambda$; we repeat this process until Krum selects $\mathbf{w}_{1}'$ or $\lambda$ is smaller than a certain threshold (this indicates that the optimization problem may not have a solution). In our experiments, we use $1\times10^{-5}$ as the threshold.
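A minimal sketch of this binary search is given below (ours). It assumes, as in the attack, that the other $c-1$ compromised local models equal $\mathbf{w}_1'$ during the search, and uses a simplified Krum selection that scores each local model by the sum of squared distances to its $m-c-2$ closest models:
\begin{verbatim}
import numpy as np

def krum_select(models, c):
    # Simplified Krum: score each model by the sum of squared distances
    # to its m - c - 2 closest other models; return the index of the
    # model with the smallest score.
    W = np.stack(models)
    m = W.shape[0]
    d2 = ((W[:, None, :] - W[None, :, :]) ** 2).sum(axis=-1)
    scores = [np.sort(np.delete(d2[i], i))[:m - c - 2].sum()
              for i in range(m)]
    return int(np.argmin(scores))

def solve_lambda(w_re, s, benign_models, c, lam_upper, threshold=1e-5):
    # Binary search for lambda in the full knowledge attack on Krum.
    lam = lam_upper
    while lam >= threshold:
        w1 = w_re - lam * s               # crafted local model
        candidates = [w1] * c + list(benign_models)
        if krum_select(candidates, c) == 0:
            return lam, w1                # Krum picks the crafted model
        lam /= 2.0                        # otherwise halve lambda
    return None, None                     # no solution found
\end{verbatim}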
\myparatight{Partial knowledge} In the partial knowledge scenario, the attacker does not know the local models on the benign worker devices, i.e., $\mathbf{w}_{(c+1)}, \cdots, \mathbf{w}_m$. As a result, the attacker does not know the changing directions $\mathbf{s}$ and cannot solve the optimization problem in Equation~\ref{problem2}. However, the attacker has access to the before-attack local models on the $c$ compromised worker devices. Therefore, we propose to craft compromised local models based on these before-attack local models. First, we compute the mean of the $c$ before-attack local models as $\tilde{\mathbf{w}}=\frac{1}{c}\sum_{i=1}^c \mathbf{w}_i$. Second, we estimate the changing directions using the mean local model. Specifically, if the mean of the $j$th parameter is larger than the $j$th global model parameter received from the master device in the current iteration, then we estimate the changing direction for the $j$th parameter to be $1$, otherwise we estimate it to be $-1$. For simplicity, we denote by $\tilde{\mathbf{s}}$ the vector of estimated changing directions.
Third, we treat the before-attack local models on the compromised worker devices as if they were local models on benign worker devices, and we aim to craft local model $\mathbf{w}_{1}'$ such that, among the crafted local model and the $c$ before-attack local models, Krum selects the crafted local model.
Formally, we have the following optimization problem:
\begin{align}
&\max_{\lambda} \lambda \nonumber \\
\label{problem3}
\text{subject to }&\mathbf{w}_1' = Krum(\mathbf{w}_1', \mathbf{w}_{1}, \cdots, \mathbf{w}_c), \nonumber \\
&\mathbf{w}_1'= \mathbf{w}_{Re} - \lambda \tilde{\mathbf{s}}.
\end{align}
Similar to Theorem~\ref{theoremLambda}, we can also derive an upper bound of $\lambda$ for the optimization problem in Equation~\ref{problem3}. Moreover, similar to the full knowledge scenario, we use a binary search to solve $\lambda$. However, unlike the full knowledge scenario, if we cannot find a solution $\lambda$ until $\lambda$ is smaller than a threshold (i.e., $1\times10^{-5}$), then we add one more crafted local model $\mathbf{w}_2'$ such that among the crafted local models $\mathbf{w}_1'$, $\mathbf{w}_2'$, and the $c$ before-attack local models, Krum selects the crafted local model $\mathbf{w}_1'$. Specifically, we solve the optimization problem in Equation~\ref{problem3} with $\mathbf{w}_2'$ added into the Krum aggregation rule. Like the full knowledge scenario, we assume $\mathbf{w}_2'=\mathbf{w}_1'$. If we still cannot find a solution $\lambda$ until $\lambda$ is smaller than the threshold, we add another crafted local model. We repeat this process until we find a solution $\lambda$. We find that such an iterative search process makes our attack more effective for Krum in the partial knowledge scenario.
After solving $\lambda$, we obtain the crafted local model $\mathbf{w}_1'$. Then, like the full knowledge scenario, we randomly sample $c-1$ vectors whose distance to $\mathbf{w}_1'$ is at most $\epsilon$ as the other $c-1$ compromised local models.
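Putting these steps together, a minimal sketch of the partial knowledge attack on Krum is given below (ours; it reuses the binary search sketched above and omits, for brevity, the iterative addition of extra crafted local models when no solution is found):
\begin{verbatim}
import numpy as np

def partial_knowledge_krum(w_re, compromised_models, lam_upper):
    # compromised_models: list of the c before-attack local models on
    # the compromised worker devices.
    w_mean = np.mean(compromised_models, axis=0)
    # Estimated changing directions from the mean local model.
    s_tilde = np.where(w_mean >= w_re, 1.0, -1.0)
    # Treat the before-attack compromised models as if they were benign
    # and search for lambda such that Krum selects w_re - lambda*s_tilde;
    # solve_lambda is the binary search sketched above (here the search
    # treats only the single crafted model as compromised).
    lam, w1 = solve_lambda(w_re, s_tilde, compromised_models,
                           c=1, lam_upper=lam_upper)
    return lam, w1
\end{verbatim}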
\subsection{Attacking Trimmed Mean}
Suppose ${w}_{ij}$ is the $j$th \emph{before-attack} local model parameter on the $i$th worker device and ${w}_{j}$ is the $j$th before-attack global model parameter in the current iteration.
We discuss how we craft each local model parameter on the compromised worker devices.
We denote by $w_{max,j}$ and $w_{min,j}$ the maximum and minimum of the $j$th local model parameters on the benign worker devices, i.e., $w_{max,j}$=$\max\{{w}_{(c+1)j}, {w}_{(c+2)j}, \cdots, {w}_{mj}\}$ and $w_{min,j}$=$\min\{{w}_{(c+1)j}, {w}_{(c+2)j}, \cdots, {w}_{mj}\}$.
\myparatight{Full knowledge}
Theoretically, we can show that the following attack can maximize the directed deviations of the global model (i.e., an optimal solution to the optimization problem in Equation~\ref{problem}): if $s_j=-1$,
then we use any $c$ numbers that are larger than $w_{max,j}$ as the $j$th local model parameters on the $c$ compromised worker devices, otherwise we use any $c$ numbers that are smaller than $w_{min,j}$ as the $j$th local model parameters on the $c$ compromised worker devices.
Intuitively, our attack crafts the compromised local models based on the maximum or minimum benign local model parameters, depending on which one deviates the global model towards the inverse of the direction along which the global model would change without attacks. The sampled $c$ numbers should be close to $w_{max,j}$ or $w_{min,j}$ to avoid being outliers and being detected easily. Therefore, when implementing the attack, if $s_j=-1$, then we randomly sample the $c$ numbers in the interval [$w_{max,j}, b\cdot w_{max,j}$] (when $w_{max,j} > 0$) or [$w_{max,j}, w_{max,j}/b$] (when $w_{max,j} \leq 0$), otherwise we randomly sample the $c$ numbers in the interval [$w_{min,j}/b, w_{min,j}$] (when $w_{min,j} > 0$) or [$b\cdot w_{min,j}, w_{min,j}$] (when $w_{min,j} \leq 0$). Our attack does not depend on $b$ once $b>1$. In our experiments, we set $b=2$.
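A minimal sketch (ours) of how the $j$th parameter of the $c$ compromised local models can be crafted in the full knowledge scenario, with $b=2$, is:
\begin{verbatim}
import numpy as np

def craft_param_full(benign_params_j, s_j, c, b=2.0, rng=np.random):
    # benign_params_j: j-th parameter of the m - c benign local models.
    w_max = np.max(benign_params_j)
    w_min = np.min(benign_params_j)
    if s_j == -1:
        # Sample c values just above the largest benign parameter.
        lo, hi = (w_max, b * w_max) if w_max > 0 else (w_max, w_max / b)
    else:
        # Sample c values just below the smallest benign parameter.
        lo, hi = (w_min / b, w_min) if w_min > 0 else (b * w_min, w_min)
    return rng.uniform(lo, hi, size=c)
\end{verbatim}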
\myparatight{Partial knowledge} An attacker faces two challenges in the partial knowledge scenario. First, the attacker does not know the changing direction variable $s_j$ because the attacker does not know the local models on the benign worker devices.
Second, for the same reason, the attacker does not know the maximum $w_{max,j}$ and minimum $w_{min,j}$ of the benign local model parameters.
As in our attack on Krum, to address the first challenge, we estimate the changing direction variables using the local models on the compromised worker devices.
One naive strategy to address the second challenge is to use a very large number as $w_{max,j}$ or a very small number as $w_{min,j}$. However, if we craft the compromised local models based on $w_{max,j}$ or $w_{min,j}$ that are far away from their true values, the crafted local models may be outliers and the master device may detect the compromised local models easily.
Therefore, we propose to estimate $w_{max,j}$ and $w_{min,j}$ using the before-attack local model parameters on the compromised worker devices.
In particular, the attacker can compute the mean $\mu_j$ and standard deviation $\sigma_j$ of each $j$th parameter on the compromised worker devices.
Based on the assumption that the $j$th parameters of the benign worker devices are samples from a Gaussian distribution with mean $\mu_j$ and standard deviation $\sigma_j$, we can estimate that $w_{max,j}$ is smaller than $\mu_j + 3\sigma_j$ or $\mu_j + 4\sigma_j$ with large probabilities; and $w_{min,j}$ is larger than $\mu_j - 4\sigma_j$ or $\mu_j - 3\sigma_j$ with large probabilities. Therefore, when $s_j$ is estimated to be $-1$, we sample $c$ numbers from the interval $[\mu_j + 3\sigma_j, \mu_j + 4\sigma_j]$ as the $j$th parameter of the $c$ compromised local models, which means that the crafted compromised local model parameters are larger than the maximum of the benign local model parameters with a high probability (e.g., 0.898 -- 0.998 when $m=100$ and $c=20$ under the Gaussian distribution assumption). When $s_j$ is estimated to be $1$, we sample $c$ numbers from the interval $[\mu_j - 4\sigma_j, \mu_j - 3\sigma_j]$ as the $j$th parameter of the $c$ compromised local models, which means that the crafted compromised local model parameters are smaller than the minimum of the benign local model parameters with a high probability. The $j$th model parameters on the benign worker devices may not accurately follow a Gaussian distribution. However, our attacks are still effective empirically.
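A minimal sketch (ours) of the corresponding partial knowledge crafting for the $j$th parameter, where \texttt{w\_re\_j} denotes the $j$th parameter of the global model received from the master device, is:
\begin{verbatim}
import numpy as np

def craft_param_partial(compromised_params_j, w_re_j, c, rng=np.random):
    # compromised_params_j: j-th parameter of the c before-attack
    # compromised local models.
    mu = np.mean(compromised_params_j)
    sigma = np.std(compromised_params_j)
    s_j = 1.0 if mu >= w_re_j else -1.0   # estimated changing direction
    if s_j == -1:
        # Larger than the benign maximum with high probability.
        return rng.uniform(mu + 3 * sigma, mu + 4 * sigma, size=c)
    else:
        # Smaller than the benign minimum with high probability.
        return rng.uniform(mu - 4 * sigma, mu - 3 * sigma, size=c)
\end{verbatim}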
\subsection{Attacking Median} We use the same attacks for trimmed mean to attack the median aggregation rule. For instance, in the full knowledge scenario,
we randomly sample the $c$ numbers in the interval [$w_{max,j}, b\cdot w_{max,j}$] or [$w_{max,j}, w_{max,j}/b$] if $s_j=-1$, otherwise we randomly sample the $c$ numbers in the interval [$w_{min,j}/b, w_{min,j}$] or [$b\cdot w_{min,j}, w_{min,j}$].
\section{Evaluation}
\label{sec:exp}
\alan{We evaluate the effectiveness of our attacks using multiple datasets in different scenarios, e.g., the impact of different parameters and known vs. unknown aggregation rules. Moreover, we compare our attacks with existing attacks.}
\subsection{Experimental Setup}
\label{exp:setup}
\myparatight{Datasets} We consider four datasets: MNIST, Fashion-MNIST, CH-MNIST~\cite{kather2016multi}\footnote{We use a pre-processed version from \url{https://www.kaggle.com/kmader/colorectal-histology-mnist\#hmnist_64_64_L.csv}.} and Breast Cancer Wisconsin (Diagnostic) \cite{Dua:2019}. MNIST and Fashion-MNIST each include 60,000 training examples and 10,000 testing examples, where each example is a 28$\times$28 grayscale image.
Both datasets are 10-class classification problems. The CH-MNIST dataset consists of 5000 images of histology tiles from patients with colorectal cancer. The dataset is an 8-class classification problem. Each image has 64$\times$64 grayscale pixels. We randomly select 4000 images as the training examples and use the remaining 1000 as the testing examples. \alan{The Breast Cancer Wisconsin (Diagnostic) dataset is a binary classification problem to diagnose whether a person has breast cancer. The dataset contains 569 examples, each of which has 30 features describing the characteristics of a person's cell nuclei. We randomly select 455 (80\%) examples as the training examples, and use the remaining 114 examples as the testing examples.}
\myparatight{Machine learning classifiers} We consider the following classifiers.
{\bf Multi-class logistic regression (LR).} The considered aggregation rules have theoretical guarantees for the error rate of the LR classifier.
{\bf Deep neural networks (DNN).} For MNIST, Fashion-MNIST, and Breast Cancer Wisconsin (Diagnostic), we use a DNN with the architecture described in Table~\ref{DNN-architecture} in Appendix. We use ResNet20~\cite{he2016deep} for CH-MNIST. Our DNN architecture does not necessarily achieve the smallest error rates for the considered datasets, as our goal is not to search for the best DNN architecture. Our goal is to show that our attacks can increase the testing error rates of the learnt DNN classifiers.
\myparatight{Compared attacks} We compare the following attacks.
{\bf Gaussian attack.} This attack randomly crafts the local models on the compromised worker devices. Specifically, for each $j$th model parameter, we estimate a Gaussian distribution using the before-attack local models on all worker devices. Then, for each compromised worker device, we sample a number from the Gaussian distribution and treat it as the $j$th parameter of the local model on the compromised worker device. We use this Gaussian attack to show that crafting compromised local models randomly cannot effectively attack the Byzantine-robust aggregation rules (a sketch of this baseline is given after this list).
{\bf Label flipping attack.} This is a data poisoning attack that does not require knowledge of the training data distribution. On each compromised worker device, this attack flips the label of each training instance. Specifically, we flip a label $l$ as $L-l-1$, where $L$ is the number of classes in the classification problem and $l=0,1,\cdots, L-1$.
{\bf Back-gradient optimization based attack~\cite{munoz2017towards}.} This is the state-of-the-art untargeted data poisoning attack for multi-class classifiers. We note that this attack is not scalable and thus we compare our attacks with this attack on a subset of MNIST separately. The results are shown in Section~\ref{comparison-data}.
{\bf Full knowledge attack or partial knowledge attack.} Our attack when the attacker knows the local models on all worker devices or the compromised ones.
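As referenced above, a minimal sketch (ours) of the Gaussian attack baseline is:
\begin{verbatim}
import numpy as np

def gaussian_attack(local_models, compromised_idx, rng=np.random):
    # local_models: (m, d) before-attack local models, one per row.
    # Fit a per-parameter Gaussian over all workers and replace each
    # compromised worker's local model with a fresh sample from it.
    mu = local_models.mean(axis=0)
    sigma = local_models.std(axis=0)
    poisoned = local_models.copy()
    for i in compromised_idx:
        poisoned[i] = rng.normal(mu, sigma)
    return poisoned
\end{verbatim}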
\begin{table}[!t]\renewcommand{\arraystretch}{0.9}
\centering
\caption{\neil{Default setting for key parameters.}}
\addtolength{\tabcolsep}{-2pt}
\begin{tabular}{|c|c|c|} \hline
{\small Parameter} & {\small Description} & {\small Value} \\ \hline
{\small $m$} & {\small Number of worker devices.} & {\small 100} \\ \hline
{\small $c$} & {\small Number of compromised worker devices.} & {\small 20} \\ \hline
{\small $p$} & {\small Degree of Non-IID.} & {\small 0.5} \\ \hline
{\small $\epsilon$} & {\small Distance parameter for Krum attacks.} & {\small 0.01} \\ \hline
{\small $\beta$} & {\small Parameter of trimmed mean.} & {\small $c$} \\ \hline
\end{tabular}
\label{defaultparameter}
\vspace{-2mm}
\end{table}
\myparatight{Parameter setting} We describe parameter setting for the federated learning algorithms and our attacks. Table~\ref{defaultparameter} summarizes the default setting for key parameters. \alan{We use MXNet~\cite{chen2015mxnet} to implement federated learning and attacks. We repeat each experiment for 50 trials and report the average results. We observed that the variances are very small, so we omit them for simplicity.}
\label{parametersetting}
{\bf Federated learning algorithms.} By default, we assume $m=100$ worker devices; each worker device applies one round of stochastic gradient descent to update its local model; and the master device aggregates local models from all worker devices.
One unique characteristic of federated learning is that the local training datasets on different devices may not be \emph{independently and identically distributed} (i.e., \emph{non-IID})~\cite{McMahan17}. We simulate federated learning with different non-IID training data distributions. Suppose we have $L$ classes in the classification problem, e.g., $L=10$ for the MNIST and Fashion-MNIST datasets, and $L=8$ for the CH-MNIST dataset. We evenly split the worker devices into $L$ groups. \alan{We model non-IID federated learning by assigning a training instance with label $l$ to the $l$th group with probability $p$, where $p > 0$.}
A higher $p$ indicates a higher degree of non-IID. For convenience, we call the probability $p$ \emph{degree of non-IID}. Unless otherwise mentioned, we set $p=0.5$.
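A minimal sketch (ours) of this non-IID assignment is given below; we assume here that, with the remaining probability $1-p$, a training instance is assigned to one of the other $L-1$ groups chosen uniformly at random, and that instances are assigned to a uniformly random worker within the chosen group:
\begin{verbatim}
import numpy as np

def split_non_iid(labels, num_workers, L, p, rng=np.random):
    # labels: array of training labels in {0, ..., L-1}.
    # Workers are evenly split into L groups; an example with label l
    # goes to the l-th group with probability p, otherwise to one of
    # the other groups chosen uniformly at random.
    groups = np.array_split(np.arange(num_workers), L)
    assignment = []
    for l in labels:
        if rng.random() < p:
            g = l
        else:
            g = rng.choice([k for k in range(L) if k != l])
        assignment.append(rng.choice(groups[g]))
    return np.array(assignment)

# Example: 100 workers, 10 classes, degree of non-IID p = 0.5.
labels = np.random.randint(0, 10, size=60000)
workers = split_non_iid(labels, num_workers=100, L=10, p=0.5)
\end{verbatim}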
We set 500 iterations for the LR classifier on MNIST; we set 2,000 iterations for the DNN classifiers on all four datasets; and we set the batch size to be 32 in stochastic gradient descent, except that we set the batch size to be 64 for Fashion-MNIST as such setting leads to a more accurate model.
The trimmed mean aggregation rule prunes the largest and smallest $\beta$ parameters, where $ c \leq \beta < \frac{m}{2}$. Pruning more parameters leads to larger testing error rates without attacks. By default, we consider $\beta=c$ as the authors of trimmed mean did~\cite{Yin18}.
{\bf Our attacks.} Unless otherwise mentioned, we assume that 20 worker devices are compromised. Our attacks to Krum have a parameter $\epsilon$, which is related to the distance between the crafted compromised local models. We set $\epsilon=0.01$ (we will study the impact of $\epsilon$ on our attack). We do not set $\epsilon=0$ because $\epsilon=0$ makes the $c$ compromised local models exactly the same, making the compromised local models easily detected by the master device.
Our attacks to trimmed mean and median have a parameter $b$ in the full knowledge scenario, where $b>1$. Our attacks do not depend on $b$ once $b>1$.
We set $b=2$. \neil{Unless otherwise mentioned, we assume that the attacker manipulates the local models on the compromised worker devices in each iteration.}
\begin{table}[!t]\renewcommand{\arraystretch}{1}
\centering
\caption{\neil{Testing error rates of various attacks.}}
\vspace{-1mm}
\centering
\addtolength{\tabcolsep}{-4pt}
\subfloat[LR classifier, MNIST]{
\begin{tabular}{|c|c|c|c|c|c|} \hline
{\small } & {\small NoAttack} & {\small Gaussian} & {\small LabelFlip} &{\small Partial} & {\small Full} \\ \hline
{\small Krum} & {\small 0.14} & {\small 0.13} & {\small 0.13} &{\small 0.72} & {\small 0.80} \\ \hline
{\small Trimmed mean} & {\small 0.12} & {\small 0.11} & {\small 0.13} &{\small 0.23} & {\small 0.52} \\ \hline
{\small Median} & {\small 0.13} & {\small 0.13} & {\small 0.15} &{\small 0.19} & {\small 0.29} \\ \hline
\end{tabular}
\label{attackeffective-MNIST-LR}
}
\subfloat[DNN classifier, MNIST]{
\begin{tabular}{|c|c|c|c|c|c|} \hline
{\small } & {\small NoAttack} & {\small Gaussian} & {\small LabelFlip} &{\small Partial} & {\small Full} \\ \hline
{\small Krum} & {\small 0.11} & {\small 0.10} & {\small 0.10} &{\small 0.75} & {\small 0.77} \\ \hline
{\small Trimmed mean} & {\small 0.06} & {\small 0.07} & {\small 0.07} &{\small 0.14} & {\small 0.23} \\ \hline
{\small Median} & {\small 0.06} & {\small 0.06} & {\small 0.16} &{\small 0.28} & {\small 0.32} \\ \hline
\end{tabular}
\label{attackeffective-MNIST-DNN}
}
\subfloat[DNN classifier, Fashion-MNIST]{
\begin{tabular}{|c|c|c|c|c|c|} \hline
{\small } & {\small NoAttack} & {\small Gaussian} & {\small LabelFlip} &{\small Partial} & {\small Full} \\ \hline
{\small Krum} & {\small 0.16} & {\small 0.16} & {\small 0.16} &{\small 0.90} & {\small 0.91} \\ \hline
{\small Trimmed mean} & {\small 0.10} & {\small 0.10} & {\small 0.12} &{\small 0.26} & {\small 0.28} \\ \hline
{\small Median} & {\small 0.09} & {\small 0.12} & {\small 0.12} &{\small 0.21} & {\small 0.29} \\ \hline
\end{tabular}
\label{attackeffective-fashion}
}
\subfloat[DNN classifier, CH-MNIST]{
\begin{tabular}{|c|c|c|c|c|c|} \hline
{\small } & {\small NoAttack} & {\small Gaussian} & {\small LabelFlip} &{\small Partial} & {\small Full} \\ \hline
{\small Krum} & {\small 0.29} & {\small 0.30} & {\small 0.43} &{\small 0.73} & {\small 0.81} \\ \hline
{\small Trimmed mean} & {\small 0.17} & {\small 0.25} & {\small 0.37} &{\small 0.69} & {\small 0.69} \\ \hline
{\small Median} & {\small 0.17} & {\small 0.20} & {\small 0.17} &{\small 0.57} & {\small 0.63} \\ \hline
\end{tabular}
\label{attackeffective-CHMNIST}}
\subfloat[DNN classifier, Breast Cancer Wisconsin (Diagnostic)]{
\begin{tabular}{|c|c|c|c|c|c|} \hline
{\small } & {\small NoAttack} & {\small Gaussian} & {\small LabelFlip} &{\small Partial} & {\small Full} \\ \hline
{\small Krum} & {\small 0.03} & {\small 0.04} & {\small 0.14} &{\small 0.17} & {\small 0.17} \\ \hline
{\small Trimmed mean} & {\small 0.02} & {\small 0.03} & {\small 0.05} &{\small 0.14} & {\small 0.15} \\ \hline
{\small Median} & {\small 0.03} & {\small 0.03} & {\small 0.04} &{\small 0.17} & {\small 0.18} \\ \hline
\end{tabular}
\label{attackeffective-BreastCancer}}
\vspace{-8mm}
\label{overallresults}
\end{table}
\subsection{Results for Known Aggregation Rule}
\label{sec:knownrule}
\begin{figure*}[!t]
\centering
\subfloat[Krum]{\includegraphics[width=0.33 \textwidth]{figs/MNIST-LR-Krum-federated}\label{label:MNIST-LR-Krum-federated}}
\subfloat[Trimmed mean]{\includegraphics[width=0.33 \textwidth]{figs/MNIST-LR-Trim-federated}}
\subfloat[Median]{\includegraphics[width=0.33 \textwidth]{figs/MNIST-LR-Median-federated}}
\vspace{-2mm}
\subfloat[Krum]{\includegraphics[width=0.33 \textwidth]{figs/MNIST-CNN-Krum-federated}\label{label:MNIST-CNN-Krum-federated}}
\subfloat[Trimmed mean]{\includegraphics[width=0.33 \textwidth]{figs/MNIST-CNN-Trim-federated}}
\subfloat[Median]{\includegraphics[width=0.33 \textwidth]{figs/MNIST-CNN-Median-federated}}
\caption{Testing error rates for different attacks as we have more compromised worker devices on MNIST. (a)-(c): LR classifier and (d)-(f): DNN classifier.}
\vspace{-4mm}
\label{MNIST-worker}
\end{figure*}
\noindent
\neil{{\bf Our attacks are effective:} Table~\ref{overallresults} shows the testing error rates of the compared attacks on the four datasets.
First, these results show that our attacks are effective and substantially outperform existing attacks, i.e., our attacks result in higher error rates. For instance, when the dataset is MNIST, the classifier is LR, and the aggregation rule is Krum, our partial knowledge attack increases the error rate from 0.14 to 0.72 (around 400\% relative increase).
Gaussian attacks only increase the error rates in several cases, e.g., median aggregation rule for Fashion-MNIST, and trimmed mean and median for CH-MNIST. Label flipping attacks can increase the error rates for DNN classifiers in some cases but have limited success for LR classifiers. }
Second, Krum is less robust to our attacks than trimmed mean and median, except on Breast Cancer Wisconsin (Diagnostic) where Krum is comparable to median. A possible reason why trimmed mean and median outperform Krum is that Krum picks one local model as the global model, while trimmed mean and median aggregate multiple local models to update the global model (the median selects one local model parameter for each model parameter, but the selected parameters may be from different local models). Trimmed mean is more robust to our attacks in some cases while median is more robust in other cases. \alan{Third, we observe that the error rates may depend on the data dimension. For instance, MNIST and Fashion-MNIST have 784 dimensions, CH-MNIST has 4096 dimensions, and Breast Cancer Wisconsin (Diagnostic) has 30 dimensions. For the DNN classifiers, the error rates are higher on CH-MNIST than on other datasets in most cases, while the error rates are lower on Breast Cancer Wisconsin (Diagnostic) than on other datasets in most cases.}
\alan{We note that federated learning may have a higher error rate than centralized learning, even if the robustness feature is not considered (i.e., the mean aggregation rule is used). For instance, {the DNN classifiers respectively achieve testing error rates 0.01, 0.08, 0.07, and 0.01 in centralized learning on the four datasets,} while they respectively achieve testing error rates 0.04, 0.09, 0.09, and 0.01 in federated learning with the mean aggregation rule on the four datasets.
However, in the scenarios where users' training data can only be stored on their edge/mobile devices, e.g., for privacy purposes, centralized learning is not applicable and federated learning may be the only option even though its error rate is higher. Compared to the mean aggregation rule, Byzantine-robust aggregation rule increases the error rate without attacks. However, if Byzantine-robust aggregation rule is not used, a single malicious device can make the learnt global model totally useless~\cite{Blanchard17,Yin18}. To summarize, in the scenarios where users' training data can only be stored on their edge/mobile devices and there may exist attacks, Byzantine-robust federated learning may be the best option, even if its error rate is higher. }
\begin{figure*}[!t]
\centering
\subfloat[Krum]{\includegraphics[width=0.33 \textwidth]{figs/MNIST-LR-Krum-diffbias}}
\subfloat[Trimmed mean]{\includegraphics[width=0.33 \textwidth]{figs/MNIST-LR-Trim-diffbias}}
\subfloat[Median]{\includegraphics[width=0.33 \textwidth]{figs/MNIST-LR-Median-diffbias}}
\subfloat[Krum]{\includegraphics[width=0.33 \textwidth]{figs/MNIST-CNN-Krum-diffbias}}
\subfloat[Trimmed mean]{\includegraphics[width=0.33 \textwidth]{figs/MNIST-CNN-Trim-diffbias}}
\subfloat[Median]{\includegraphics[width=0.33 \textwidth]{figs/MNIST-CNN-Median-diffbias}}
\vspace{-2mm}
\caption{Testing error rates for different attacks as we increase the degree of non-IID on MNIST. (a)-(c): LR classifier and (d)-(f): DNN classifier.}
\vspace{-3mm}
\label{MNIST-noniid}
\end{figure*}
\myparatight{Impact of the percentage of compromised worker devices} Figure~\ref{MNIST-worker} shows the error rates of different attacks as the percentage of compromised worker devices increases on MNIST.
Our attacks increase the error rates significantly as we compromise more worker devices; label flipping only slightly increases the error rates; and Gaussian attacks have no notable impact on the error rates. Two exceptions are that Krum's error rates decrease when the percentage of compromised worker devices increases from 5\% to 10\% in Figure~\ref{label:MNIST-LR-Krum-federated} and from 10\% to 15\% in Figure~\ref{label:MNIST-CNN-Krum-federated}. We suspect the reason is that Krum selects one local model as a global model in each iteration. We have similar observations on the other datasets. Therefore, we omit the corresponding results for simplicity.
\myparatight{Impact of the degree of non-IID in federated learning} Figure~\ref{MNIST-noniid} shows the error rates for the compared attacks for different degrees of non-IID on MNIST.
Error rates of all attacks including no attacks increase as we increase the degree of non-IID, except that the error rates of our attacks to Krum fluctuate as the degree of non-IID increases. A possible reason is that as the local training datasets on different worker devices are more non-IID, the local models are more diverse, leaving more room for attacks. For instance, an extreme example is that if the local models on the benign worker devices are the same, it would be harder to attack the aggregation rules, because their aggregated model would be more likely to depend on the benign local models.
\begin{figure*}[!t]
\centering
\vspace{-2mm}
\subfloat[]{\includegraphics[width=0.33 \textwidth]{figs/MNIST-LR-Median-InnerIter}\label{impact:round}}
\subfloat[]{\includegraphics[width=0.33 \textwidth]{figs/MNIST-LR-Median-diffworkers}\label{impact:worker}}
\subfloat[]{\includegraphics[width=0.33 \textwidth]{figs/MNIST-LR-Median-diffsubset}\label{impact:subset}}
\vspace{-2mm}
\caption{\alan{(a) Impact of the number of rounds of stochastic gradient descent worker devices use to update their local models in each iteration on our attacks. (b) Impact of the number of worker devices on our attacks. (c) Impact of the number of worker devices selected in each iteration on our attacks. MNIST, LR classifier, and median are used.}}
\vspace{-3mm}
\label{MNIST-LR-parasetting}
\end{figure*}
\begin{figure*}[!t]
\centering
\subfloat[]{\includegraphics[width=0.33 \textwidth]{figs/MNIST-LR-Trim-diffbeta}\label{MNIST-LR-Trim-beta}}
\subfloat[]{\includegraphics[width=0.33 \textwidth]{figs/MNIST-LR-Krum-diffepsilon}\label{MNIST-LR-Krum-epsilon}}
\subfloat[]{\includegraphics[width=0.33 \textwidth]{figs/MNIST-LR-Median-randround}\label{MNIST-LR-Median-randround}}
\vspace{-2mm}
\caption{\alan{(a) Testing error rates of the trimmed mean aggregation rule when using different $\beta$. (b) Testing error rates of the Krum aggregation rule when our attack uses different $\epsilon$. (c) Testing error rates of the median aggregation rule when our attacks poison a certain fraction of randomly selected iterations of federated learning. MNIST and LR classifier are used.}}
\vspace{-3mm}
\label{MNIST-LR-parasetting1}
\end{figure*}
\myparatight{Impact of different parameter settings of federated learning algorithms} We study the impact of various parameters in federated learning including the number of rounds of stochastic gradient descent each worker device performs, number of worker devices, number of worker devices selected to update the global model in each iteration, and $\beta$ in trimmed mean. \alan{In these experiments, we use MNIST and the LR classifier for simplicity. Unless otherwise mentioned, we consider median, as median is more robust than Krum and does not require configuring extra parameters (trimmed mean requires configuring $\beta$). Moreover, for simplicity, we consider partial knowledge attacks as they are more practical.}
Worker devices can perform multiple rounds of stochastic gradient descent to update their local models. Figure~\ref{impact:round} shows the impact of the number of rounds on the testing error rates of our attack. The testing error rates decrease as we use more rounds of stochastic gradient descent for both no attack and our partial knowledge attack. This is because more rounds of stochastic gradient descent lead to more accurate local models, and the local models on different worker devices are less diverse, leaving a smaller attack space. However, our attack still increases the error rates substantially even if we use more rounds. For instance, our attack still increases the error rate by more than 30\% when using 10 rounds of stochastic gradient descent. We note that a large number of rounds result in large computational cost for worker devices, which may be unacceptable for resource-constrained devices such as mobile phones and IoT devices.
Figure~\ref{impact:worker} shows the testing error rates of our attack as the number of worker devices increases, where 20\% of worker devices are compromised. Our attack is more effective (i.e., testing error rate is larger) as the federated learning system involves more worker devices. \alan{We found a possible reason is that our partial knowledge attacks can more accurately estimate the changing directions with more worker devices. For instance, for trimmed mean of the DNN classifier on MNIST, our partial knowledge attacks can correctly estimate the changing directions of 72\% of the global model parameters on average when there are 50 worker devices, and this fraction increases to 76\% when there are 100 worker devices.}
In federated learning~\cite{McMahan17}, the master device could randomly sample some worker devices and send the global model to them; the sampled worker devices update their local models and send the updated local models to the master device; and the master device updates the global model using the local models from the sampled worker devices. Figure~\ref{impact:subset} shows the impact of the number of worker devices selected in each iteration on the testing error rates of our attack, where the total number of worker devices is 100. \alan{Since the master device randomly selects a subset of worker devices in each iteration, a smaller number of compromised worker devices are selected in some iterations, while a larger number of compromised worker devices are selected in other iterations. On average, among the selected worker devices, $\frac{c}{m}$ of them are compromised ones, where $c$ is the total number of compromised worker devices and $m$ is the total number of worker devices. Our Figure~\ref{MNIST-worker} shows that our attacks become effective when $\frac{c}{m}$ is larger than 10\%-15\%. Note that an attacker can inject a large number of fake devices to a federated learning system, so $\frac{c}{m}$ can be large.}
The trimmed mean aggregation rule has a parameter $\beta$, which should be at least the number of compromised worker devices. Figure~\ref{MNIST-LR-Trim-beta} shows the testing error rates of no attack and our partial knowledge attack as $\beta$ increases. Roughly speaking, our attack is less effective (i.e., testing error rates are smaller) as more local model parameters are trimmed. This is because our crafted local model parameters on the compromised worker devices are more likely to be trimmed when the master device trims more local model parameters. However, the testing error of no attack also \alan{slightly} increases as $\beta$ increases.
The reason is that more benign local model parameters are trimmed and the mean of the remaining local model parameters becomes less accurate. The master device may be motivated to use a smaller $\beta$ to guarantee performance when there are no attacks.
\myparatight{Impact of the parameter $\epsilon$ in our attacks to Krum} Figure~\ref{MNIST-LR-Krum-epsilon} shows the error rates of the Krum aggregation rule when our attacks use different $\epsilon$, where MNIST dataset and LR classifier are considered. We observe that our attacks can effectively increase the error rates using a wide range of $\epsilon$. Moreover, our attacks achieve larger error rates when $\epsilon$ is smaller. This is because when $\epsilon$ is smaller, the distances between the compromised local models are smaller, which makes it more likely for Krum to select the local model crafted by our attack as the global model.
\myparatight{Impact of the number of poisoned iterations} Figure~\ref{MNIST-LR-Median-randround} shows the error rates of the median aggregation rule when our attacks poison the local models on the compromised worker devices in a certain fraction of randomly selected iterations of federated learning. Unsurprisingly, the error rate increases when poisoning more iterations.
\begin{table}[!t]\renewcommand{\arraystretch}{1}
\centering
\caption{\neil{Testing error rates of attacks on the DNN classifier for MNIST when the master device chooses the global model with the lowest testing error rate.}}
\addtolength{\tabcolsep}{-4pt}
{
\begin{tabular}{|c|c|c|c|c|c|} \hline
{\small } & {\small NoAttack} & {\small Gaussian} & {\small LabelFlip} &{\small Partial} & {\small Full} \\ \hline
{\small Krum} & {\small 0.10} & {\small 0.10} & {\small 0.09} &{\small 0.69} & {\small 0.70} \\ \hline
{\small Trimmed mean} & {\small 0.06} & {\small 0.06} & {\small 0.07} &{\small 0.12} & {\small 0.18} \\ \hline
{\small Median} & {\small 0.06} & {\small 0.06} & {\small 0.06} &{\small 0.11} & {\small 0.32} \\ \hline
\end{tabular}
}
\label{adaptivetraining}
\vspace{-3mm}
\end{table}
\neil{\myparatight{Alternative training strategy} Each iteration results in a global model. Instead of selecting the last global model as the final model, an alternative training strategy is to select the global model that has the lowest \alan{testing} error rate.\footnote{\alan{We give advantages to the alternative training strategy since we use testing error rate to select the global model.}} Table~\ref{adaptivetraining} shows the testing error rates of various attacks on the DNN classifier for MNIST, when such alternative training strategy is adopted. \alan{In these experiments, our attacks attack each iteration of federated learning, and the column ``NoAttack'' corresponds to the scenarios where no iterations are attacked.} Compared to Table~\ref{attackeffective-MNIST-DNN}, this alternative training strategy is \alan{slightly} more secure against our attacks. However, our attacks are still effective. For instance, for the Krum, trimmed mean, and median aggregation rules, our partial knowledge attacks still increase the testing error rates by 590\%, 100\%, and 83\%, respectively. } \alan{Another training strategy is to roll back to a few iterations ago if the master device detects an unusual increase of training error rate. However, such training strategy is not applicable because the training error rates of the global models still decrease until convergence when we perform our attacks in each iteration. In other words, there are no unusual increases of training error rates.}
\vspace{-4mm}
\subsection{Results for Unknown Aggregation Rule}
\label{unknownrule}
We craft local models based on one aggregation rule and show the attack effectiveness for other aggregation rules.
Table~\ref{transfer_result} shows the transferability between aggregation rules, where MNIST and LR classifier are considered.
We observe different levels of transferability between aggregation rules. Specifically, the Krum based attack transfers well to trimmed mean and median, e.g., the Krum based attack increases the error rate from 0.12 to 0.15 (25\% relative increase) for trimmed mean, and from 0.13 to 0.18 (38\% relative increase) for median. The trimmed mean based attack does not transfer to Krum but transfers well to median. For instance, the trimmed mean based attack increases the error rate from 0.13 to 0.20 (54\% relative increase) for median.
\begin{table}[!t]\renewcommand{\arraystretch}{1}
\centering
\caption{Transferability between aggregation rules. ``Krum attack'' and ``Trimmed mean attack'' mean that we craft the compromised local models based on the Krum and trimmed mean aggregation rules, respectively. Partial knowledge attacks are considered. The numbers are testing error rates.}
\centering
{
\begin{tabular}{|c|c|c|c|} \hline
{\small } & {\small Krum} & {\small Trimmed mean} & {\small Median} \\ \hline
{\small No attack} & {\small 0.14} & {\small 0.12} & {\small 0.13}\\ \hline
{\small Krum attack} & {\small 0.70} & {\small 0.15} & {\small 0.18}\\ \hline
{\small Trimmed mean attack} & {\small 0.14} & {\small 0.25} & {\small 0.20}\\ \hline
\end{tabular}
}
\vspace{-4mm}
\label{transfer_result}
\end{table}
\subsection{Comparing with Back-gradient Optimization based Attack}
\label{comparison-data}
Back-gradient optimization based attack (BGA)~\cite{munoz2017towards}
is the state-of-the-art untargeted data poisoning attack for multi-class classifiers such as multi-class LR and DNN. \alan{BGA formulates a bilevel optimization problem, where the inner optimization is to minimize the training loss on the poisoned training data and the outer optimization is to find poisoning examples that maximize the minimal training loss in the inner optimization. BGA iteratively finds the poisoned examples by alternately solving the inner minimization and outer maximization problems.}
We implemented BGA and verified that our implementation can reproduce the results reported by the authors.
However, BGA is not scalable to the entire MNIST dataset. Therefore, we uniformly sample 6,000 training examples in MNIST, and we learn a 10-class LR classifier. Moreover, we assume 100 worker devices, randomly distribute the 6,000 examples to them, and assume 20 worker devices are compromised.
\myparatight{Generating poisoned data} We assume an attacker has \emph{full knowledge} about the training datasets on all worker devices. Therefore, the attacker can use BGA to generate poisoned data based on the 6,000 examples. In particular, we run the attack for 10 days on a GTX 1080Ti GPU, which generates 240 ($240/6000=4\%$) poisoned examples. We verified that these poisoned data can effectively increase the testing error rate if the LR classifier is learnt in a \emph{centralized} environment. In particular, the poisoned data can increase the testing error rate of the LR classifier from 0.10 to 0.16 (60\% relative increase) in centralized learning.
However, in federated learning, the attacker can only inject the poisoned data to the compromised worker devices. We consider two scenarios on how the attacker distributes the poisoned data to the compromised worker devices:
{\bf Single worker.} In this scenario, the attacker distributes the poisoned data on a single compromised worker device.
{\bf Uniform distribution.} In this scenario, the attacker distributes the poisoned data to the compromised worker devices uniformly at random.
\alan{We consider the two scenarios because they represent two extremes for distributing data (concentrated or evenly distributed) and we expect one extreme to maximize attack effectiveness.}
\neil{Table~\ref{data_poison_result_knownagg} compares BGA with our attacks. We observe that BGA has limited success at attacking Byzantine-robust aggregation rules, while our attacks can substantially increase the testing error rates. We note that if the federated learning system uses the \emph{mean} aggregation rule, BGA is still successful. For instance, when the mean aggregation rule is used, BGA can increase the testing error rate by 50\% when distributing the poisoned data to the compromised worker devices uniformly at random. However, when applying our attacks for trimmed mean to attack the mean aggregation rule, we can increase the testing error rates substantially more (see the last two cells in the second row of Table~\ref{data_poison_result_knownagg}). }
\begin{table}[!t]\renewcommand{\arraystretch}{1}
\centering
\caption{\neil{Testing error rates of back-gradient optimization based attacks (SingleWorker and Uniform) and our attacks (Partial and Full).}}
\centering
\addtolength{\tabcolsep}{-4pt}
\begin{tabular}{|c|c|c|c|c|c|} \hline
{\small } & {\small NoAttack} & {\small SingleWorker} & {\small Uniform} &{\small Partial} & {\small Full} \\ \hline
{\small Mean} & {\small 0.10} & {\small 0.11} & {\small 0.15} &{\small 0.54} & {\small 0.69} \\ \hline
{\small Krum} & {\small 0.23} & {\small 0.24} & {\small 0.25} &{\small 0.85} & {\small 0.89} \\ \hline
{\small Trimmed mean} & {\small 0.12} & {\small 0.12} & {\small 0.13} &{\small 0.27} & {\small 0.32} \\ \hline
{\small Median} & {\small 0.13} & {\small 0.13} & {\small 0.14} &{\small 0.19} & {\small 0.21} \\ \hline
\end{tabular}
\vspace{-3mm}
\label{data_poison_result_knownagg}
\end{table}
\vspace{-3mm}
\section{Defenses}
\label{sec:defense}
We generalize RONI~\cite{barreno2010security} and TRIM~\cite{Jagielski18}, which were designed to defend against data poisoning attacks, to defend against our local model poisoning attacks. Both generalized defenses remove the local models that are potentially malicious before computing the global model in each iteration of federated learning. One generalized defense removes the local models that have large negative impact on the error rate of the global model (inspired by RONI that removes training examples that have large negative impact on the error rate of the model), while the other defense removes the local models that result in large loss (inspired by TRIM that removes the training examples that have large negative impact on the loss). In both defenses, we assume the master device has a small \emph{validation dataset}. Like existing aggregation rules such as Krum and trimmed mean, we assume the master device knows the upper bound $c$ of the number of compromised worker devices. \alan{We note that our defenses make the global model slower to learn and adapt to new data as that data may be identified as from potentially malicious local models.}
\myparatight{Error Rate based Rejection (ERR)}
In this defense, we compute the impact of each local model on the error rate for the validation dataset and remove the local models that have large negative impact on the error rate. Specifically, suppose we have an aggregation rule. For each local model, we use the aggregation rule to compute a global model $A$ when the local model is included and a global model $B$ when the local model is excluded. We compute the error rates of the global models $A$ and $B$ on the validation dataset, which we denote as $E_A$ and $E_B$, respectively. We define $E_A-E_B$ as the \emph{error rate impact} of a local model. A larger error rate impact indicates that the local model increases the error rate more significantly if we include the local model when updating the global model. We remove the $c$ local models that have the largest error rate impact, and we aggregate the remaining local models to obtain an updated global model.
\myparatight{Loss Function based Rejection (LFR)} In this defense, we remove local models based on their impact on the loss instead of error rate for the validation dataset. Specifically, like the error rate based rejection, for each local model, we compute the global models $A$ and $B$. We compute the \alan{cross-entropy loss function values} of the models $A$ and $B$ on the validation dataset, which we denote as $L_A$ and $L_B$, respectively. Moreover, we define $L_A-L_B$ as the \emph{loss impact} of the local model. Like the error rate based rejection, we remove the $c$ local models that have the largest loss impact, and we aggregate the remaining local models to update the global model.
\neil{\myparatight{Union (i.e., ERR+LFR)} In this defense, we combine ERR and LFR. Specifically, we remove the local models that are removed by either ERR or LFR.}
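A minimal sketch (ours) of LFR is given below; ERR is analogous with the validation error rate in place of the loss, and Union removes the union of the local models rejected by ERR and LFR:
\begin{verbatim}
import numpy as np

def lfr(local_models, aggregate, val_loss, c):
    # local_models: list of m local models; aggregate: the aggregation
    # rule (e.g., Krum, trimmed mean, or median); val_loss: function
    # mapping a global model to its cross-entropy loss on the master
    # device's validation dataset.
    m = len(local_models)
    loss_all = val_loss(aggregate(local_models))   # model A (all included)
    impacts = []
    for i in range(m):
        rest = local_models[:i] + local_models[i + 1:]
        impacts.append(loss_all - val_loss(aggregate(rest)))  # L_A - L_B
    keep = np.argsort(impacts)[:m - c]   # drop the c largest loss impacts
    return aggregate([local_models[i] for i in keep])
\end{verbatim}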
\begin{table}[!t]\renewcommand{\arraystretch}{1}
\centering
\caption{\neil{Defense results. The numbers are testing error rates. The columns ``Krum'' and ``Trimmed mean'' indicate the attacker's assumed aggregation rule when performing attacks, while the rows indicate the actual aggregation rules and defenses. Partial knowledge attacks are considered.}}
\centering
\addtolength{\tabcolsep}{-2pt}
\begin{tabular}{|c|c|c|c|} \hline
{\small } & {\small No attack} & {\small Krum} & {\small Trimmed mean} \\ \hline
{\small Krum } & {\small 0.14} & {\small 0.72} & {\small 0.13}\\ \hline
{\small Krum + ERR} & {\small 0.14} & {\small 0.62} & {\small 0.13}\\ \hline
{\small Krum + LFR} & {\small 0.14} & {\small 0.58} & {\small 0.14}\\ \hline
{\small Krum + Union} & {\small 0.14} & {\small 0.48} & {\small 0.14}\\ \hline \hline
{\small Trimmed mean } & {\small 0.12} & {\small 0.15} & {\small 0.23}\\ \hline
{\small Trimmed mean + ERR} & {\small 0.12} & {\small 0.17} & {\small 0.21}\\ \hline
{\small Trimmed mean + LFR} & {\small 0.12} & {\small 0.18} & {\small 0.12}\\ \hline
{\small Trimmed mean + Union} & {\small 0.12} & {\small 0.18} & {\small 0.12}\\ \hline \hline
{\small Median } & {\small 0.13} & {\small 0.17} & {\small 0.19}\\ \hline
{\small Median + ERR} & {\small 0.13} & {\small 0.21} & {\small 0.25}\\ \hline
{\small Median + LFR} & {\small 0.13} & {\small 0.20} & {\small 0.13}\\ \hline
{\small Median + Union} & {\small 0.13} & {\small 0.19} & {\small 0.14}\\ \hline
\end{tabular}
\label{defense-result}
\end{table}
\myparatight{Defense results} \alan{Table~\ref{defense-result} shows the defense results of ERR, LFR, and Union, where partial knowledge attacks are considered. We use the default parameter setting discussed in Section~\ref{parametersetting}, e.g., 100 worker devices, 20\% of compromised worker devices, the MNIST dataset, and the LR classifier. Moreover, we sample 100 testing examples uniformly at random as the validation dataset. Each row of the table corresponds to a defense, e.g., Krum + ERR means that the master device uses ERR to remove the potentially malicious local models and uses Krum as the aggregation rule. Each column indicates the attacker's assumed aggregation rule when performing attacks, e.g., the column ``Krum'' corresponds to attacks that are based on Krum. We have several observations.}
First, LFR is comparable to or much more effective than ERR, i.e., LFR achieves similar or much smaller testing error rates than ERR. For instance, Trimmed mean + ERR and Trimmed mean + LFR achieve similar testing error rates (0.17 vs. 0.18) when the attacker crafts the compromised local models based on Krum. However, Trimmed mean + LFR achieves a much smaller testing error rate than Trimmed mean + ERR (0.12 vs. 0.21) when the attacker crafts the compromised local models based on trimmed mean. \neil{Second, Union is comparable to LFR in most cases, except one case (Krum + LFR vs. Krum + Union under the Krum-based attack), where Union is more effective.}
\neil{Third, LFR and Union can effectively defend against our attacks in some cases. For instance, Trimmed mean + LFR (or Trimmed mean + Union) achieves the same testing error rate for both no attack and attack based on trimmed mean. However, our attacks are still effective in other cases even if LFR or Union is adopted. For instance, an attack, which crafts compromised local models based on Krum,
still effectively increases the error rate from 0.14 (no attack) to 0.58 (314\% relative increase) for Krum + LFR. Fourth, the testing error rate grows in some cases when a defense is deployed. This is because the defenses may remove benign local models, which increases the testing error rate of the global model.}
\vspace{-3mm}
\section{Related Work}
\label{related}
Security and privacy of federated/collaborative learning are much less explored, compared to centralized machine learning. Recent studies~\cite{Hitaj17,Melis19,Nasr19} explored privacy risks in federated learning, which are orthogonal to our study.
\alan{\myparatight{Poisoning attacks} Poisoning attacks aim to compromise the integrity of the training phase of a machine learning system~\cite{barreno2006can}. The training phase consists of two components, i.e., training dataset collection and learning process.
Most existing poisoning attacks compromise the training dataset collection component, e.g., inject malicious data into the training dataset. These attacks are also known as \emph{data poisoning attacks}~\cite{rubinstein2009antidote,biggio2012poisoning,xiao2015feature,Jagielski18,poisoningattackRecSys16,YangRecSys17,Nelson08poisoningattackSpamfilter,munoz2017towards,Suciu18,Gu17,Chen17,Bagdasaryan18,shafahi2018poison,Fang18,Wang19}. Different from data poisoning attacks, our local model poisoning attacks compromise the learning process.
Depending on the goal of a poisoning attack, we can classify poisoning attacks into two categories, i.e., \emph{untargeted poisoning attacks}~\cite{rubinstein2009antidote,biggio2012poisoning,xiao2015feature,Jagielski18,poisoningattackRecSys16,YangRecSys17} and \emph{targeted poisoning attacks}~\cite{Nelson08poisoningattackSpamfilter,Suciu18,Liu18,Gu17,Chen17,Bagdasaryan18,shafahi2018poison,Bhagoji19}. Untargeted poisoning attacks aim to make the learnt model have a high testing error indiscriminately for testing examples, which eventually result in a denial-of-service attack.
In targeted poisoning attacks, the learnt model produces attacker-desired predictions for particular testing examples, e.g., predicting spams as non-spams and predicting attacker-desired labels for testing examples with a particular trojan trigger (these attacks are also known as \emph{backdoor/trojan attacks}~\cite{Gu17}). However, the testing error for other testing examples is unaffected.
Our local model poisoning attacks are untargeted poisoning attacks. Different from existing untargeted poisoning attacks that focus on centralized machine learning, our attacks are optimized for Byzantine-robust federated learning. We note that Xie et al.~\cite{Xie19} proposed inner product manipulation based untargeted poisoning attacks to Byzantine-robust federated learning including Krum and median, which is concurrent to our work.
\myparatight{Defenses} Existing defenses were mainly designed for data poisoning attacks to centralized machine learning. They essentially aim to detect the injected malicious data in the training dataset. One category of defenses~\cite{Cretu08,barreno2010security,Suciu18,Tran18} detects malicious data based on their (negative) impact on the performance of the learnt model.
For instance,
Barreno et al.~\cite{barreno2010security} proposed \emph{Reject on Negative Impact (RONI)}, which measures the impact of each training example on the performance of the learnt model and removes the training examples that have large negative impact. Suciu et al.~\cite{Suciu18} proposed a variant of RONI (called tRONI) for targeted poisoning attacks. In particular, tRONI measures the impact of a training example on only the target classification and excludes training examples that have large impact.
Another category of defenses~\cite{Feng14,Liu17AiSec,Jagielski18,Steinhardt17} proposed new loss functions, optimizing which obtains model parameters and detects the injected malicious data simultaneously. For instance,
Jagielski et al.~\cite{Jagielski18} proposed TRIM, which aims to jointly find a subset of training dataset with a given size and model parameters that minimize the loss function. The training examples that are not in the selected subset are treated as malicious data.
These defenses are not directly applicable for our local model poisoning attacks because our attacks do not inject malicious data into the training dataset.
For federated learning, the machine learning community recently proposed several aggregation rules (e.g., Krum~\cite{Blanchard17}, Bulyan~\cite{Mhamdi18}, trimmed mean~\cite{Yin18}, median~\cite{Yin18}, and others~\cite{ChenPOMACS17}) that were claimed to be robust against Byzantine failures of certain worker devices. Our work shows that these defenses are not effective in practice against our optimized local model poisoning attacks that carefully craft local models on the compromised worker devices. Fung et al.~\cite{Fung18} proposed to compute a weight for each worker device according to historical local models and take the weighted average of the local models to update the global model. However, their method can only defend against label flipping attacks, which can already be defended against by existing Byzantine-robust aggregation rules. We propose ERR and LFR, which are respectively generalized from RONI and TRIM, to defend against our local model poisoning attacks. We find that these defenses are not effective enough in some scenarios, highlighting the need for new defenses against our attacks.
\myparatight{Other security and privacy threats to machine learning} Adversarial examples~\cite{barreno2006can,szegedy2013intriguing} aim to make a machine learning system predict labels as an attacker desires via adding carefully crafted noise to normal testing examples in the testing phase. Various methods (e.g.,~\cite{szegedy2013intriguing,goodfellow2014explaining,Papernot16,CarliniSP17,laskov2014practical,sharif2016accessorize,liu2016delving,PracticalBlackBox17,Athalye18}) were proposed to generate adversarial examples, and many defenses (e.g.,~\cite{goodfellow2014explaining,madry2017towards,Papernot16Distillation,detection2,detection1,xu2017feature,region}) were explored to mitigate them. Different from poisoning attacks, adversarial examples compromise the testing phase of machine learning. Both poisoning attacks and adversarial examples compromise the integrity of machine learning. An attacker could also compromise the confidentiality of machine learning. Specifically, an attacker could compromise the confidentiality of users' private training or testing data via various attacks such as \emph{model inversion attacks}~\cite{fredrikson2014privacy,fredrikson2015model}, \emph{membership inference attacks}~\cite{membershipInfer,membershipLocation,Melis19}, and \emph{property inference attacks}~\cite{Ateniese15,Ganju18}. Moreover, an attacker could also compromise the confidentiality/intellectual property of a model provider via stealing its model parameters and hyperparameters~\cite{liang2016cracking,tramer2016stealing,WangHyper18}.
}
\vspace{-3mm}
\section{Conclusion, Limitations, and Future Work}
We demonstrate that the federated learning methods, which the machine learning community claimed to be robust against Byzantine failures of some worker devices, are vulnerable to our local model poisoning attacks that manipulate the local models sent from the compromised worker devices to the master device during the learning process. In particular, to increase the error rates of the learnt global models, an attacker can craft the local models on the compromised worker devices such that the aggregated global model deviates the most towards the inverse of the direction along which the global model would change when there are no attacks. Moreover, finding such crafted local models can be formulated as optimization problems. We can generalize existing defenses for data poisoning attacks to defend against our local model poisoning attacks. Such generalized defenses are effective in some cases but are not effective enough in other cases. Our results highlight that we need new defenses to defend against our local model poisoning attacks.
Our work is limited to untargeted poisoning attacks. It would be interesting to study targeted poisoning attacks to federated learning. Moreover, it is valuable future work to design new defenses against our local model poisoning attacks, e.g., new methods to detect compromised local models and new adversarially robust aggregation rules.
\vspace{-3mm}
\section{Acknowledgements}
We thank the anonymous reviewers and our shepherd Nikita Borisov for constructive reviews and comments. This work was supported by NSF grant No.1937786.
\vspace{-3mm}
\bibliographystyle{plain}
\section{\NetworkFull{}}
In this section, we first define the category-coherent decision,
and then introduce the structure of the {\em{\ModuleFull{} (\ModuleShort{})}} together with three corresponding loss functions for training it.
In addition, we discuss the large category issue that hinders \ModuleShort{} training and our solution to it.
Finally, we demonstrate several exemplars of \NetworkFull{} obtained by
integrating \ModuleShort{}s into some popular backbone network architectures.
\subsection{The Category-coherent Decision}
Given inputs with the same object category, if their corresponding decisions are similar, then we call these decisions {\bf {category-coherent decisions}}.
Note that we also allow inputs from multiple different categories to share the same decision result.
In this paper, we define the category-coherent decision over $n$ ($n\leq N$) {\bf{auxiliary categories}},
namely ${\bf{D}}=\left( D_1,...,D_n \right)$ with $D_1+D_2+...+D_n=1$.
\subsection{Structure of \ModuleFull{}}
The {\em{\ModuleFull{} (\ModuleShort{})}} is a computational unit that is devised to
make a category-coherent early decision~\cite{zamir-2017-Feedback} based on the features encoded in an early layer and then propagate it to the following network layers to guide them.
A diagram of {{\ModuleShort{}}} is shown in Fig.~\ref{fig:cdn}.
\subsubsection{Make a Soft Decision}
Given an intermediate feature map {\bf{${\bf{U}}$}} ({\bf{${\bf U}$}} $\in \mathbb{R}^{C \times H \times W}$), our aim is to make a category-coherent decision to guide subsequent network layers without bringing too much additional computational cost.
Therefore, instead of continuing to convolve on it, we propose to adopt global average pooling (GAP) to drastically reduce the feature dimensions.
As verified in~\citet{lin-2013-NetInNet}, the pooled feature map {\bf{${\bf{U^{'}}}$}}, which contains channel-wise statistics, is usually discriminative enough for classification.
We thus adopt a fully connected network $F$ with one or two layers to make a decision based on it.
To enable this decision branch to be optimized together with the whole network in an end-to-end manner, we apply the softmax function to the output and thus obtain a ``soft'' decision ${\bf{D}}$.
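As an illustration, a minimal PyTorch sketch of this decision branch is given below (the module and argument names are illustrative rather than part of a released implementation); it follows the GAP, FC, ReLU, FC, Softmax structure described above, with the bottleneck reduction ratio of 16 used in our implementation details.
\begin{verbatim}
import torch
import torch.nn as nn

class DecisionBranch(nn.Module):
    # GAP -> FC -> ReLU -> FC -> Softmax, producing a soft decision D
    # over n_aux auxiliary categories (each row sums to 1).
    def __init__(self, channels, n_aux=2, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # global average pooling
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, n_aux),
            nn.Softmax(dim=1),
        )

    def forward(self, u):                     # u: (b, C, H, W)
        u_prime = self.pool(u).flatten(1)     # U': (b, C)
        return self.fc(u_prime)               # D:  (b, n_aux)
\end{verbatim}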
\subsubsection{Decision Propagation}
To make use of the information aggregated in the intermediate decision ${\bf{D}}$, a straightforward idea is to dynamically route accordingly~\cite{murthy-2016-decsion-mining-low-confident-recursive}, making the network form a deep decision tree. However, a deep decision tree brings an explosive increase in parameters and cannot recover from previous false decisions, so we instead follow~\citet{mirza-2014-conditionalGAN} and consider the intermediate decision as a conditional code, such that the category prediction process can be directed by conditioning on it.
Specifically, we expand the decision vector $\bf{D} ~(\bf{D}\in\mathbb{R}^{n})$ to be with the same resolution as the feature map of {\bf{${\bf{V}}$}}, by copying the decision scores directly,
and then concatenate the expanded decision {\bf{${\bf{D^{''}}} ~({\bf{D^{''}}}\in \mathbb{R}^{n \times H \times W})$}}
with {\bf{${\bf{V}}$}} as additional channels (see Fig.~\ref{fig:cdn}).
Note that, {\bf{${\bf{V}}$}} could be {\bf{${\bf{U}}$}} itself or any other feature map outputted by a subsequent network layer, and we also allow propagating one decision to multiple layers.
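A minimal sketch of this propagation step (illustrative names, assuming the PyTorch setting above) is:
\begin{verbatim}
import torch

def propagate_decision(d, v):
    # Expand the decision d (b, n) to (b, n, H, W) by copying the scores
    # spatially, then concatenate it with the feature map v (b, C, H, W)
    # as additional channels.
    b, n = d.shape
    _, _, h, w = v.shape
    d_expanded = d.view(b, n, 1, 1).expand(b, n, h, w)
    return torch.cat([v, d_expanded], dim=1)   # (b, C + n, H, W)
\end{verbatim}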
\subsection{Loss Functions for \ModuleShort{}}
To enable \ModuleShort{} to make category-coherent decisions, we propose three novel loss functions to guide it.
In the following, we will describe them one by one.
\para{Notations}
We denote {\bf{$\mathcal{D}~ (\mathcal{D} \in \mathbb{R}^{b \times n}) $}} as all the ${\bf{D}}$s in a mini-batch with size of $b$, where {\bf{$\mathcal{D}_{kj}$}} is the decision score (confidence) of the $j$-th auxiliary category for the $k$-th instance in the batch.
\para{Decision Explicit Loss}
If the intermediate decision made by \ModuleShort{} is ambiguous, the following layers could hardly get any useful information from it.
Therefore, we introduce a decision explicit loss to encourage
the decision score of one or several auxiliary categories to have relatively larger values, while avoiding all the auxiliary categories to have similar scores.
The loss function is defined as follows:
{
{
\begin{equation}
L_{explicit}({\bf{\mathcal{D}}})= \frac{1}{b} \sum_{k\in \{ 1...b \}} \left( - \sum_{j \in \{1...n \}}{\mathcal{D}_{kj}\log{\mathcal{D}_{kj}}} \right),
\end{equation}
}
}
\noindent
which is the average entropy of the decisions; minimizing it encourages the decision scores of the auxiliary categories to differ substantially rather than stay uniform.
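A one-line PyTorch sketch of this loss (illustrative; a small constant is added inside the logarithm for numerical stability, and $\mathcal{D}$ is assumed to be a $b\times n$ tensor) is:
\begin{verbatim}
def decision_explicit_loss(D, eps=1e-8):
    # Mean entropy of the soft decisions D (b, n); minimizing it pushes
    # each row towards a peaked (explicit) decision.
    return (-(D * (D + eps).log()).sum(dim=1)).mean()
\end{verbatim}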
\para{Decision Consistent Loss}
Simply enforcing the decision to be explicit is not enough.
We also wish the decisions for different instances with the same original category to be consistent.
Specifically, their decision scores for the same auxiliary category should be similar.
Therefore, we propose a decision consistent loss which is defined as follows:
\begin{equation}
\setlength{\abovedisplayskip}{2pt}
\setlength{\belowdisplayskip}{2pt}
L_{consistent}({\bf{\mathcal{D}}})= \frac{1}{Nn} \sum_{i\in \{1...N\}}
\sum_{j\in \{1...n\}}\mathcal{V}_{ij},
\end{equation}
\noindent where $\mathcal{V}_{ij}$ is the sample variance of those $\mathcal{D}_{kj}$ $(k \in \{1...b\})$ for the instances whose original category is $i$ in the batch.
Denote $\mathcal{I}~(\mathcal{I} \in \mathbb{R}^{b\times N})$ as the indicator matrix for a batch of data:
if the original category of the $k$-th instance in the batch is $i$, then $\mathcal{I}_{ki}=1$; otherwise $\mathcal{I}_{ki}=0$.
Thus the mean decision score $\mathcal{M}_{ij}$
of the $j$-th auxiliary category for all the instances in the batch with original category $i$ could be calculated with the following equation:
\begin{equation}
\setlength{\abovedisplayskip}{2pt}
\setlength{\belowdisplayskip}{2pt}
\mathcal{M}_{ij} = \frac{\sum_{k \in \{1...b\}} \left( \mathcal{I}_{ki} \mathcal{D}_{kj} \right)}{ \sum_{k \in \{1...b\}} \mathcal{I}_{ki} + \delta },
\label{equ:M}
\end{equation}
\noindent where $\delta$ is a small value
to avoid a divide-by-zero error. After that, we can calculate $\mathcal{V}_{ij}$ with
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\mathcal{V}_{ij} = \frac{\sum_{k \in \{1...b\}} \mathcal{I}_{ki} {\left( \mathcal{D}_{kj} - \mathcal{M}_{ij} \right)}^{2} }{ \sum_{k \in \{1...b\}} \mathcal{I}_{ki} -1 + \delta }
\label{equ:V}
\end{equation}
By substituting Equation~\ref{equ:M} into Equation~\ref{equ:V} and expanding the formulation, we obtain a new equation:
\vspace{-2mm}
\begin{small}
\begin{equation}
\begin{aligned}
\mathcal{V}_{ij} &= \frac{\sum_{k \in \{1...b\}} \mathcal{I}_{ki} {\mathcal{D}_{kj}}^{2} } { \sum_{k \in \{1...b\}} \mathcal{I}_{ki} -1 + \delta } \\
&- \frac{ {\left( \sum_{k \in \{1...b\}} { \mathcal{I}_{ki} {\mathcal{D}_{kj}}} \right)} ^{2} } { {\left( \sum_{k \in \{1...b\}} \mathcal{I}_{ki} + \delta \right)} {\left( \sum_{k \in \{1...b\}} \mathcal{I}_{ki} -1 + \delta \right)}}
\label{equ:V2}
\end{aligned}
\end{equation}
\end{small}
\vspace{-2mm}
However, calculating each $\mathcal{V}_{ij}$ one by one is very time-consuming;
we therefore leverage matrix operations to accelerate the computation.
The derived equation is as follows:
\begin{equation}
\mathcal{V} = \frac{ {\mathcal{I}}^{\mathrm{T}} \times \left(\mathcal{D} \mathcal{D}\right)}{ \mathcal{I}^{'} -1} - \frac{ \left( {\mathcal{I}}^{\mathrm{T}} \times \mathcal{D}\right) \left( {\mathcal{I}}^{\mathrm{T}} \times \mathcal{D}\right)}{ \mathcal{I}^{'} \left(\mathcal{I}^{'} - 1\right)},
\end{equation}
\noindent
which yields a matrix whose entry in the
$i$-th row and $j$-th column is $\mathcal{V}_{ij}$, namely
$\mathcal{V} \in \mathbb{R}^{N\times{n}}$.
The operator $\times$ denotes matrix multiplication, while all other operations are conducted element-wise.
$\mathcal{I}^{'} \in \mathbb{R}^{N\times n} $ is a two-dimensional matrix with $\mathcal{I}^{'}_{ij} = \sum_{k \in \{1...b\}} \mathcal{I}_{ki} + \delta$, for arbitrary $j \in \{1...n\}$.
Although
simple, this reformulation is critical for training the module efficiently.
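The matrix form translates directly into a few tensor operations; the following is an illustrative PyTorch sketch, assuming $\mathcal{D}$ is a $b\times n$ tensor of soft decisions and the original category IDs are given as an integer label vector:
\begin{verbatim}
import torch
import torch.nn.functional as F

def decision_consistent_loss(D, labels, num_classes, delta=1e-4):
    # Vectorized per-class sample variance of the decision scores,
    # following the matrix form above.  D: (b, n), labels: (b,).
    I = F.one_hot(labels, num_classes).float()        # indicator I: (b, N)
    counts = I.sum(dim=0, keepdim=True).t() + delta   # I': (N, 1)
    sum_sq = I.t() @ (D * D)                          # I^T x (D D): (N, n)
    sums   = I.t() @ D                                # I^T x D:     (N, n)
    V = sum_sq / (counts - 1) - (sums * sums) / (counts * (counts - 1))
    return V.mean()                                   # 1/(N n) * sum of V_ij
\end{verbatim}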
\para{Decision Balance Loss}
Besides the above two losses, we also propose a decision balance loss to avoid the degenerate situation in which, no matter what original category the input instance belongs to,
\ModuleShort{} explicitly assigns it to a single auxiliary category.
The decision balance loss therefore encourages all auxiliary categories to be assigned in a balanced way, in the form of a negative entropy:
\vspace{-3mm}
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\begin{split}
L_{balance}({\bf{\mathcal{D}}}) &= \sum_{j \in \{1...n\}} m_j \log m_j, \\
m_j &= \sum_{k \in \{1...b\}}\mathcal{D}_{kj} + \delta.
\end{split}
\end{equation}
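An illustrative PyTorch sketch of this loss, following the formula as written (with the column sums $m_j$ left unnormalized), is:
\begin{verbatim}
def decision_balance_loss(D, delta=1e-4):
    # Negative entropy of the column sums m_j of D (b, n); since the row
    # sums of D are fixed to 1, minimizing this term encourages the batch
    # to be spread evenly over the auxiliary categories.
    m = D.sum(dim=0) + delta
    return (m * m.log()).sum()
\end{verbatim}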
\begin{figure}[t!]
\centering
\vspace{-2.5mm}
\includegraphics[width=0.83\linewidth]{LargeCategoryIssue.png}
\vspace{-2.5mm}
\caption{Demonstration of loading data with 4 categories (a circle with digit $i$ inside indicates an instance whose original category is $i$) into a batch of size 4:
(a) the traditional approach results in only one instance of each category in a training batch;
(b) our approach, which loads more data samples into memory and then shuffles and splits the samples into multiple batches,
increases the number of instances of the same category in each training batch. }
\vspace{-2.5mm}
\label{fig:largeCategory}
\end{figure}
\subsection{Large Category Issue }
\label{ssec:largecategoryissue}
For all the original categories that appear in a batch, we expect their decision scores to be distributed consistently, explicitly, and in a balanced way over all the auxiliary categories.
However, due to the limitation of computational resources and the training issues of large-batch SGD~\cite{goyal-2017-largeBatchSGD}, the batch size is normally set to a small number between 1 and 256.
For tasks with 100, 1000, or even more original categories, randomly loading a batch of data results in only two or even fewer instances per original category, so that
$L_{consistent}$
can hardly work.
Therefore, instead of simply increasing the batch size to maintain more data samples,
we propose a novel load-shuffle-split strategy
that resolves the large category issue without enlarging the batch size significantly.
Take Fig.~\ref{fig:largeCategory} as an example, where both the original category number and the mini-batch size are 4.
The load-shuffle-split strategy has three key steps:
(1) instead of loading only 4 data samples, we load more samples in each iteration (e.g., 8);
(2) we first generate a number list $\left[ 1,2,3,4\right]$ containing all the category IDs, then shuffle it to obtain another list $\left[ 1,4,3,2\right]$, and finally split the shuffled list
into two lists: $\left[ 1,4\right]$ and $\left[ 3,2\right]$;
(3) we split the 8 data samples into two training batches according to the category IDs in the two lists generated in the last step (see Fig.~\ref{fig:largeCategory} again), and then train on the two batches separately. In this case, the number of instances of each original category in a training batch is doubled.
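An illustrative Python sketch of the strategy (assuming the samples loaded in one iteration are given as a list of (image, label) pairs) is:
\begin{verbatim}
import random

def load_shuffle_split(samples, num_splits):
    # Group the loaded samples into num_splits training batches according
    # to a shuffled split of the category IDs that appear in them.
    categories = sorted({label for _, label in samples})
    random.shuffle(categories)                         # shuffle category IDs
    chunk = (len(categories) + num_splits - 1) // num_splits
    batches = []
    for s in range(num_splits):
        allowed = set(categories[s * chunk:(s + 1) * chunk])
        batches.append([(x, y) for x, y in samples if y in allowed])
    return batches
\end{verbatim}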
\begin{figure}[t!]
\centering
\vspace{-1mm}
\includegraphics[width=0.75\linewidth]{CDNResNet.png}
\vspace{-2.5mm}
\caption{The schema of (a) the original residual unit; and (b) the \ModuleShort{} based residual unit.}
\vspace{-2.5mm}
\label{fig:cdnResNet}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{CDNInception.png}
\vspace{-4.5mm}
\caption{The schema of (a) the original Inception; and (b) the \ModuleShort{} based Inception.}
\vspace{-5.5mm}
\label{fig:cdnIncep}
\end{figure}
\subsection{Exemplar \NetworkFull{}}
Our \ModuleShort{} is very flexible and could be integrated into various classification network architectures to form \NetworkFull{} (\NetworkShort{}s).
As it is straightforward to apply it to VGG network~\cite{Simonyan-2014-VGG} or AlexNet~\cite{Krizhevsky-2012-AlexNet},
in this section we only illustrate how to integrate \ModuleShort{}s into modern sophisticated architectures.
For residual networks, we take ResNet~\cite{He-2016-ResNet} as an example. As ResNet is organized by stacking multiple residual blocks, we thus integrate
\ModuleShort{} into each residual unit, see Fig.~\ref{fig:cdnResNet} for a demonstration. The intermediate decision is propagated along the residual branch, and thus these neurons could be guided to learn better residuals.
For other popular architectures, such as Inception network, we also demonstrate how to integrate \ModuleShort{} into the Inception module in Fig.~\ref{fig:cdnIncep}.
The integration of \ModuleShort{}(s) with many other ResNet and Inception variants, such as ResNeXt~\cite{Xie-2016-ResNext}, Inception-ResNet~\cite{Szegedy-2017-InceptionResNet} could be constructed
in similar schemes.
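For concreteness, one plausible PyTorch sketch of a \ModuleShort{}-based residual unit is given below; it reuses the illustrative \texttt{DecisionBranch} and \texttt{propagate\_decision} sketches above, and the exact layer layout follows the spirit of Fig.~\ref{fig:cdnResNet} rather than a released implementation.
\begin{verbatim}
import torch.nn as nn

class DecisionResidualUnit(nn.Module):
    # A decision is made on the block input and propagated along the
    # residual branch as extra channels; the decision D is also returned
    # so that the three auxiliary losses can be computed on it.
    def __init__(self, channels, n_aux=2):
        super().__init__()
        self.decision = DecisionBranch(channels, n_aux)
        self.conv1 = nn.Conv2d(channels + n_aux, channels, 3,
                               padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        d = self.decision(x)                    # soft decision from the input
        out = propagate_decision(d, x)          # concat decision channels
        out = self.relu(self.bn1(self.conv1(out)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x), d
\end{verbatim}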
\section{Conclusion}
We have presented the
\ModuleFull{} (\ModuleShort{}),
a novel drop-in computational unit
that could
propagate the category-coherent decision made upon an early layer of CNNs to guide the latter layers for image classification.
\NetworkFull{} generated by integrating \ModuleShort{}s into existing classification networks could be trained in an end-to-end fashion, and bring consistent improvements with minimal additional computational cost.
Extensive comparisons validate the effectiveness and superiority of our approach.
We hope \ModuleShort{} becomes an important component of various
networks for image classification.
In the future, we plan to extend our approach to handle more vision tasks, e.g., detection
and segmentation.
\section{Experiments}
In this section, we evaluate the image classification performance of our approach on four publicly available datasets.
Our main focus is {\em{on demonstrating \ModuleShort{} could improve the performance of backbone CNN networks on image classification}}, but not on pushing the state-of-the-art results.
Therefore, we spend more space to compare our approach with popular baseline networks on three relatively small-scale datasets due to limited computational resources, and finally report our results on the ImageNet 2012 dataset~\cite{Deng-2009-Imagenet} to validate the scalability of our approach.
\subsection{Implementation}
We implement \ModuleShort{} and reproduce all the evaluated networks with PyTorch~\cite{paszke2017-pytorch}.
The decision network $F$ of \ModuleShort{} is constructed with two fully connected (FC) layers
with a
ReLU~\cite{Nair-2010-Relu} in between, followed by a Softmax layer to normalize the decision scores. To limit the model complexity, we reduce the dimension in the first FC layer with a reduction ratio of 16.
For all the \ModuleShort{}s integrated into a network, we assume all their auxiliary category numbers are exactly the same to ease network construction, and we set it to 2 if not specifically stated.
For VGG, we add BatchNorm (BN)~\cite{ioffe-2015-batch}, remove Dropout~\cite{Srivastava-2014-dropout}, and use one FC layer.
For Inception, we choose v1 with BN.
All other models are identical to the original papers.
\subsection{Dataset and Training Details}
{\bf{CIFAR-10 and CIFAR-100}}~\cite{Krizhevsky-2009-Cifar} consist of 60k 32$\times$32 images that belong to 10 and 100
categories respectively.
We train the models on the whole training set with 50k images in a mini-batch of 128, and evaluate them on the test set.
We set the initial learning rate with 0.1, and drop it by 0.2 at 60, 120 and 160 epochs for total 200 epochs.
For data augmentation,
we pad 4 pixels on each side of the image, and randomly sample a 32$\times$32 crop from the padded image or its horizontal flip, and then apply the simple mean/std normalization.
{\bf{CINIC-10}}~\cite{storkey-2018-cinic-10}
contains 270k 32$\times$32 images belonging to 10 categories, equally split into three subsets: train, validation, and test.
We train the models on the train set with a mini-batch of 128 and evaluate them on the test set.
The training starts with an initial learning rate of 0.1, and cosine annealed to zero for total 300 epochs, based on the same data augmentation scheme as in CIFARs.
{\bf{ImageNet}}~\cite{Deng-2009-Imagenet} consists of 1.2 million training images and 50k validation images from 1k classes.
We train the models
with minimal data augmentation including random resized crop, flip and the simple mean/std normalization on the whole training set and report results on the validation set.
The initial learning rate is set to 0.1 and decreased by a factor of 10 every 30 epochs to a total of 100 epochs.
During training, the three loss functions are calculated for all the \ModuleShort{}s in the network, and the average of each loss is accumulated with the traditional cross-entropy loss for classification. We set the weighting of the three loss terms to be 0.01 on ImageNet while 0.1 on others.
All the models are trained from scratch with SGD using default parameters as the optimizer, and the weights are initialized following~\citet{he-2015-Init}.
We evaluate the single-crop performance at each epoch and report the best one.
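The overall objective can be sketched as follows (illustrative; it reuses the loss sketches above, averages each auxiliary loss over all \ModuleShort{}s in the network, and adds them to the cross-entropy loss with the weighting described above):
\begin{verbatim}
import torch.nn.functional as F

def total_loss(logits, labels, decisions, num_classes, weight=0.1):
    # decisions: list of soft decisions D, one per decision module.
    ce = F.cross_entropy(logits, labels)
    k = len(decisions)
    exp_l = sum(decision_explicit_loss(D) for D in decisions) / k
    con_l = sum(decision_consistent_loss(D, labels, num_classes)
                for D in decisions) / k
    bal_l = sum(decision_balance_loss(D) for D in decisions) / k
    return ce + weight * (exp_l + con_l + bal_l)
\end{verbatim}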
\subsection{CIFAR and CINIC-10 Experiments}
To evaluate the effectiveness of \ModuleShort{}, we first perform extensive ablation experiments on three relatively small datasets to verify that \NetworkShort{}s with integrating \ModuleShort{}s outperform the corresponding baseline networks without bells and whistles, and then compare them with the state-of-the-art methods to demonstrate the superiority.
\begin{table}[t!]
\centering
\scalebox{0.8}{
\begin{tabular}{l|c|c|l|l}
\hline
\multicolumn{1}{c|}{\multirow{2}{*}{Architecture}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\#Category \\ per batch\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\#Images\\ per iter. \end{tabular}} & \multicolumn{2}{c}{Acc (\%)} \\ \cline{4-5}
\multicolumn{1}{c|}{} & & & Top-1 & Top-5 \\ \hline
ResNet-20 & - & - & 68.82 &91.03 \\
ResNet-20 & 10 & 1280 & 64.88 &89.54 \\
\ModuleForNet{}-ResNet-20 & 10 & 1280 & 65.34 & 89.87 \\
\ModuleForNet{}-ResNet-20 & 25 & 512 & {\bf{70.51}} & {\bf{92.25}} \\
\ModuleForNet{}-ResNet-20 & 50 & 256 & 69.80 & 91.98 \\
\ModuleForNet{}-ResNet-20 & 100 & 128 & 69.50 & 91.69 \\ \hline
ResNet-56 & - & - & 72.23 & 92.36 \\
ResNet-56 & 10 & 1280 & 54.13 & 78.65 \\
\ModuleForNet{}-ResNet-56 & 10 & 1280 & 53.73 & 78.86 \\
\ModuleForNet{}-ResNet-56 & 25 & 512 & {\bf{73.76}} & {\bf{93.39}} \\
\ModuleForNet{}-ResNet-56 & 50 & 256 & 73.58 & 93.24 \\
\ModuleForNet{}-ResNet-56 & 100 & 128 & 73.41 & 93.18 \\ \hline
\end{tabular}}
\vspace{-2.5mm}
\caption{Classification results on CIFAR-100 with different numbers of categories in a training batch.
\#Images per iter. is the number of images that are loaded into memory in each iteration, and
\#Category per batch indicates the number of categories that
appear in a training batch.
}
\label{tab:CategoryInABatch}
\end{table}
\begin{table}[t!]
\centering
\scalebox{0.87}{
\begin{tabular}{l|c|l|l}
\hline
\multicolumn{1}{c|}{\multirow{2}{*}{Architecture}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}\#Auxiliary \\ category\end{tabular}} & \multicolumn{2}{c}{Acc (\%)} \\ \cline{3-4}
\multicolumn{1}{c|}{} & & Top-1 & Top-5 \\ \hline
\ModuleForNet{}-ResNet-20 & 2 & {\bf{70.51}} & {\bf{92.52}} \\
\ModuleForNet{}-ResNet-20 & 5 & 69.49 & 91.65 \\
\ModuleForNet{}-ResNet-20 & 10 & 69.91 & 91.56 \\
\ModuleForNet{}-ResNet-20 & 25 & 69.41 & 91.67 \\ \hline
\ModuleForNet{}-ResNet-56 & 2 & 73.76 & {\bf{93.39}} \\
\ModuleForNet{}-ResNet-56 & 5 & {\bf{73.86}} & 93.28 \\
\ModuleForNet{}-ResNet-56 & 10 & 73.51 & 93.32 \\
\ModuleForNet{}-ResNet-56 & 25 & 73.02 & 92.95 \\ \hline
\end{tabular}}
\vspace{-2.5mm}
\caption{Classification results on CIFAR-100 with different numbers of auxiliary categories for intermediate decisions.}
\vspace{-3mm}
\label{tab:AuxiliaryCategory}
\end{table}
\subsubsection{Category Number in Each Batch}
As mentioned before, to handle the large category issue that affects the estimation of decision consistent loss, we proposed a load-shuffle-split strategy.
In this part, we will evaluate the effects brought by the strategy.
Since CIFAR-100 consists of images from 100 categories while the mini-batch size is 128, we choose it for this investigation.
Specifically, we evaluate 4 different configurations,
each of which has around 128 images in a training batch for fair comparison.
The results in Tab.~\ref{tab:CategoryInABatch} show that \ModuleForNet{}-ResNet-20 and \ModuleForNet{}-ResNet-56 both obtain the best results when each training batch is restricted to images from 25 categories, instead of the default 100 categories.
Therefore we could conclude that our load-shuffle-split strategy is useful for training \NetworkShort{}s on large category datasets.
However, when the number of categories in a training batch keeps decreasing, the performance drops heavily.
The reason is that CNNs require the data to be i.i.d. distributed, and reducing the number of categories in each batch hurts this distribution, thus degrading the performance.
To validate this, we also conduct experiments on the baseline ResNet-20 and ResNet-56 with
10 categories in each batch and find that their performance also drops.
Even so, our approach can still leverage the strategy to handle the large category issue.
\begin{table}[t!]
\centering
\scalebox{0.87}{
\begin{tabular}{l|l|l|l}
\hline
\multirow{2}{*}{Architecture} & \multirow{2}{*}{Configuration} & \multicolumn{2}{c}{Acc (\%)} \\ \cline{3-4}
& & Top-1 & Top-5 \\ \hline
\ModuleForNet{}-ResNet-20 & w/o $L_{explicit}$ & 70.31 & 92.11 \\
\ModuleForNet{}-ResNet-20 & w/o $L_{consistent}$ & 70.07 & 92.05 \\
\ModuleForNet{}-ResNet-20 & w/o $L_{balance}$ & 69.44 & 91.88 \\
\ModuleForNet{}-ResNet-20 & \multicolumn{1}{c|}{-} & {\bf{70.51}} & {\bf{92.25}} \\ \hline
\ModuleForNet{}-ResNet-56 & w/o $L_{explicit}$ & 73.27 & 93.19 \\
\ModuleForNet{}-ResNet-56 & w/o $L_{consistent}$ & 73.52 & 93.15 \\
\ModuleForNet{}-ResNet-56 & w/o $L_{balance}$ & 72.87 & 92.73 \\
\ModuleForNet{}-ResNet-56 & \multicolumn{1}{c|}{-} & {\bf{73.76}} & {\bf{93.39}} \\ \hline
\end{tabular}
}
\vspace{-2.5mm}
\caption{Classification results of \ModuleForNet{}-ResNets on the CIFAR-100 dataset with ablating part of loss functions.}
\vspace{-3mm}
\label{tab:ThreeLoss}
\end{table}
\subsubsection{{Auxiliary Category Number}}
To investigate the effects of auxiliary category number in \ModuleShort{},
we follow the best configuration in the above experiments that set the category number in each training batch as 25, and report the experimental results in Tab.~\ref{tab:AuxiliaryCategory}.
It could be seen that the performance with 2 auxiliary categories is very good and stable, while the results with more auxiliary categories
vary a lot across the two models.
The reason is probably that the current supervision is not strong enough to force \ModuleShort{} to make use of more auxiliary categories.
Besides, we will show that it is the decision scores that encode some meaningful information about the original category, rather than the auxiliary category itself (see Sec.~\ref{subsec:vis}).
Therefore, we simply choose the auxiliary category number to be 2.
\subsubsection{Three Loss Functions}
These experiments are to evaluate the influence of each loss function for training \ModuleShort{} by ablating one of them. The results depicted in Tab.~\ref{tab:ThreeLoss} show that the performance drops if we ablate any loss function.
Particularly, $L_{balance}$ has the largest influence on the classification results and the accuracies drop by nearly 1\% for both models, which probably indicates that \ModuleShort{} easily degenerates to consistently and explicitly assigning all pooled feature maps to a single auxiliary category.
In addition, we would like to point out that \ModuleForNet{}-ResNets with ablating one loss function could still outperform the baseline networks whose results are reported in Tab.~\ref{tab:CategoryInABatch}, validating the effectiveness of \ModuleShort{}.
\begin{table*}[t!]
\centering
\scalebox{0.87}{
\begin{tabular}{l|cc|cccc}
\cline{1-6}
\multirow{2}{*}{Architecture} & \multicolumn{1}{l|}{\multirow{2}{*}{\#params}} & \multicolumn{1}{l|}{\multirow{2}{*}{\#MACs}} & \multicolumn{3}{c}{Acc (\%)} & \\ \cline{4-6}
& \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & CIFAR10 & CIFAR-100 & CINIC-10 & \\ \cline{1-6}
NIN~\cite{lin-2013-NetInNet} & 996.99k & 0.22G & 89.71 & 67.76 & 80.10 \\
DDN~\cite{murthy-2016-decsion-mining-low-confident-recursive} * & - & - & 90.32 & 68.35 & - \\
DCDJ~\cite{baek-2017-deepDecisionJungle} * & - & - & - & 68.80 & - \\
\ModuleForNet{}-NIN & 997.9k & 0.23G & {\bf{90.87}}{\tiny{(1.16)}} & {\bf{69.11}}{\tiny{(1.35)}} & {\bf{81.07}}{\tiny{(0.97)}} \\ \cline{1-6}
ResNet-56~\cite{He-2016-ResNet} & 855.77k & 0.13G & 93.62 & 72.23 & 84.74 & \\
SE-ResNet-56~\cite{Hu-2017-SENet} & 861.82k & 0.13G & 94.28 & {\bf{73.81}} & 85.09 & \\
\ModuleForNet{}-ResNet-56 & 894.96k & 0.14G & {\bf{94.35}} {\tiny{(0.73)}} & 73.76 {\tiny{(1.53)}} & {\bf{85.50}} {\tiny{(0.76)}} & \\ \cline{1-6}
ResNet-110~\cite{He-2016-ResNet} & 1.73M & 0.26G & 93.98 & 73.94 & 85.18 & \\
SE-ResNet-110~\cite{Hu-2017-SENet} & 1.74M & 0.26G & {\bf{94.58}} & 74.42 & 85.57 & \\
\ModuleForNet{}-ResNet-110 & 1.81M & 0.28G & {{94.56}}{\tiny{(0.58)}} & {\bf{74.85}} {\tiny{(0.91)}} & {\bf{86.34}} {\tiny{(1.16)}} & \\ \cline{1-6}
GoogLeNet~\cite{Szegedy-2015-Inception} & 6.13M & 1.53G & 95.27 & 79.41 & 87.89 & \\
\ModuleForNet{}-GoogLeNet & 6.35M & 1.54G & {\bf{95.65}}{\tiny{(0.38)}} & {\bf{80.73}}{\tiny{(1.32)}} & {\bf{88.31}}{\tiny{(0.42)}} & \\ \cline{1-6}
VGG13~\cite{Simonyan-2014-VGG} & 9.42M & 0.23G & 94.18 & 74.42 & 85.04 & \\
\ModuleForNet{}-VGG13 & 9.49M & 0.23G & {\bf{94.61}} {\tiny{(0.43)}} & {\bf{74.94}} {\tiny{(0.52)}} & {\bf{85.59}}{\tiny{(0.55)}} & \\ \cline{1-6}
\end{tabular}
}
\vspace{-3mm}
\caption{Classification results on the CIFAR-10, CIFAR-100 and CINIC-10 datasets. Note that the number of parameters and MACs are calculated based on the experiments on CIFAR-10. The numbers in brackets denote the performance improvement. * indicates the results are reported in the original papers. }
\vspace{-5mm}
\label{tab:compare}
\end{table*}
\subsubsection{Comparisons with the State-of-the-art Methods}
\label{subsec:compare_STOA}
We conduct extensive experiments on the three challenging datasets: CIFAR-10, CIFAR-100 and CINIC-10, with
various popular architectures, including ResNets~\cite{He-2016-ResNet}, Network in Network (NIN)~\cite{lin-2013-NetInNet}, GoogLeNet~\cite{Szegedy-2015-Inception} and VGG~\cite{Simonyan-2014-VGG} as backbones.
The results demonstrated in Tab.~\ref{tab:compare} show that by integrating \ModuleShort{}s, all the networks consistently obtain significantly better performance (e.g., more than 1.5\% improvement for \ModuleForNet{}-ResNet-56 on CIFAR-100),
validating the effectiveness and versatility of \ModuleShort{}. Particularly, we would like to point out that \ModuleForNet{}-ResNet-56 outperforms the original ResNet-110 on both the CIFAR10 and CINIC-10 datasets, but with nearly half the numbers of parameters and multiply-and-accumulates (MACs). In addition, we also compare our approach with two latest decision tree based methods: Deep Convolutional Decision Jungle (DCDJ)~\cite{baek-2017-deepDecisionJungle} and Deep Decision Network (DDN)~\cite{murthy-2016-decsion-mining-low-confident-recursive}.
Since they have not released their code, we compare our results with those reported in the original papers.
It could be seen that our \ModuleForNet{}-NIN outperforms DCDJ and DDN with all using NIN as the backbone network.
Finally, we compare our \ModuleShort{} with the most advanced SE block~\cite{Hu-2017-SENet}, whose motivation is to improve the performance of various backbone architectures in the manner of attention.
From Tab.~\ref{tab:compare}, we could see that \ModuleShort{} is comparable with the SE block on classification and sometimes is superior to it.
\begin{figure*}[t!]
\centering
\includegraphics[width=0.96\linewidth]{rational.png}
\vspace{-3.5mm}
\caption{Top row: each column visualizes the decisions made by one of the 9 \ModuleShort{}s of \ModuleForNet{}-ResNet-20 on 512 images in CIFAR-10 (from left to right: \ModuleShort{} of the earliest layer to the latest layer).
The vertical position of each image indicates the decision score:
the closer to the bottom, the larger the decision score assigned to the first auxiliary category;
the images are randomly spread along the horizontal direction.
Bottom row: zoomed views of the decisions and the corresponding categories represented with colored circles.
Note that the decision scores of the first auxiliary category are
concentrated in a small range instead of the whole $\left[0,1\right]$,
but we can still see that the images are well semantically clustered, especially for the decisions made by the last \ModuleShort{}.
}
\vspace{-5mm}
\label{fig:vis}
\end{figure*}
\subsubsection{Complexity Analysis}
To enable practical use, \ModuleShort{} is expected to provide an effective trade-off between complexity and performance.
Therefore, we report the statistics of complexity
in Tab.~\ref{tab:compare}.
It could be seen that, by integrating \ModuleShort{}s, the increased numbers of parameters and multiply-and-accumulates (MACs) are less than 5\% of their original ones.
In several previous subsections, we have validated that the resulting improvements are significant.
Therefore, we conclude that the overhead brought by \ModuleShort{} is worthwhile.
\subsubsection{Visualization and Discussion}
\label{subsec:vis}
To investigate what \ModuleShort{} learns, we visualize the decisions made by the 9 \ModuleShort{}s of \ModuleForNet{}-ResNet-20 on 512 images from the CIFAR-10 dataset in Fig.~\ref{fig:vis},
with \ModuleShort{} of the earliest layer to the latest layer located from left to right in sequence.
For the decisions made by each \ModuleShort{}, all 512 images are visualized with their vertical positions determined by the decision scores assigned to them:
the larger the decision score assigned to the first auxiliary category (out of two), the lower the image is located, while all the images are randomly spread along the horizontal direction.
{\bf{With limited supervision, the decision scores are concentrated in a small range instead of the whole $\left[0,1\right]$}}, but we will show that this is enough to distinguish the categories.
We could see that the images in the first column are distributed along the vertical direction with ``blue'' images located at the top and ``green'' images at the bottom, indicating that
{{the first \ModuleShort{} probably makes decisions based on the color information.}}
Although simple, frogs and airplanes are separated quite well, validating that low-level information is useful for classification.
{{The second \ModuleShort{} seems to not work}},
assigning equal decision scores to the two auxiliary categories.
This behavior is similar to ResNet that allows some gated shortcuts to be closed.
Interestingly, the 3-5th \ModuleShort{}s make almost reversed decisions (e.g., the score is about one minus the score made by the other \ModuleShort{}),
indicating that
{{the auxiliary categories made by different \ModuleShort{}s could be different, and neural networks have the ability to decode these decisions.}}
We rotate the decisions in the 5-th column, and find it is somewhat consistent with the decisions in the first column, but has better semantic clustering along the vertical direction.
For example, all the airplanes (see the red circle in Fig.~\ref{fig:vis}) are located in the upper part,
while a ``green'' airplane is located in the lower part by mistake in the first decisions (see the black rectangle in the first zoomed view in Fig.~\ref{fig:vis}),
validating that {{our approach can recover from some false decisions made earlier.}}
We also visualize the decisions made by the last \ModuleShort{}, and find that the instances with the same categories (e.g., airplane, automobile) are located within a quite small range along the vertical axis
and are well separated with some other categories,
therefore we could conclude {\bf{it is the decision score that encodes some meaningful information about the object category, rather than the auxiliary category itself.}}
From the three zoomed views, we could see that the decisions are progressively refined, validating our intuition to propagate decisions.
Particularly, trucks and automobiles are located in similar vertical ranges, which could
be considered as belonging to a coarse category ``man-made objects'' mentioned by the category hierarchy.
However, other ``man-made objects'', such as airplanes and ships, are mixed with the objects belonging to the coarse category ``animals''.
Therefore, we conclude that
{{the decision made by \ModuleShort{} is not based on the man-made category hierarchy, but another division that is better in the view of CNNs. }}
\begin{table}[t!]
\centering
\vspace{-2mm}
\scalebox{0.82}{
\begin{tabular}{l|cc|ccccc}
\hline
\multirow{2}{*}{Architecture} & \multicolumn{1}{l|}{\multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}\#Category \\ in a batch\end{tabular}}} & \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Batch \\ size\end{tabular}} & \multicolumn{5}{c}{Acc(\%)} \\ \cline{4-8}
& \multicolumn{1}{l|}{} & & \multicolumn{2}{c|}{Top-1} & \multicolumn{3}{c}{Top-5} \\ \hline
ResNet-18 & - & 256 & \multicolumn{2}{l}{68.06} & \multicolumn{3}{l}{88.55} \\
\ModuleForNet{}-ResNet-18 & - & 256 & \multicolumn{2}{l}{68.65} & \multicolumn{3}{l}{88.83} \\
\ModuleForNet{}-ResNet-18 & 250 & $\sim$ 1024 & \multicolumn{2}{l}{\textit{{\bf{69.10}}}} & \multicolumn{3}{l}{{\bf{89.03}}} \\ \cline{1-8}
ResNet-50 & - & 256 & \multicolumn{2}{l}{74.42} & \multicolumn{3}{l}{91.97} \\
\ModuleForNet{}-ResNet-50 & - & 256 & \multicolumn{2}{l}{{{75.47}}} & \multicolumn{3}{l}{{{92.75}}} \\
\ModuleForNet{}-ResNet-50 & 250 & $\sim$ 768 & \multicolumn{2}{l}{{\bf{76.15}}} & \multicolumn{3}{l}{{\bf{92.98}}} \\ \hline
GoogLeNet & - & 256 & \multicolumn{2}{l}{70.68} & \multicolumn{3}{l}{90.08} \\
\ModuleForNet{}-GoogLeNet & - & 256 & \multicolumn{2}{l}{{{{{71.22}}}}} & \multicolumn{3}{l}{{{{{90.37}}}}} \\
\ModuleForNet{}-GoogLeNet & 250 & $\sim$ 1024 & \multicolumn{2}{l}{{{{\bf{71.66}}}}} & \multicolumn{3}{l}{{{{\bf{90.54}}}}} \\
\cline{1-8}
ResNet-101 & - & 256 & \multicolumn{2}{l}{76.66} & \multicolumn{3}{l}{93.23} \\
\ModuleForNet{}-ResNet-101 & - & 256 & \multicolumn{2}{l}{{{{{77.59}}}}} & \multicolumn{3}{l}{{{{93.81}}}} \\
\ModuleForNet{}-ResNet-101 & 200 & $\sim$ 512 & \multicolumn{2}{l}{{\bf{78.21}}} & \multicolumn{3}{l}{{\bf{93.92}}} \\ \hline
\end{tabular}
}
\vspace{-2.8mm}
\caption{Classification results on ImageNet.}
\vspace{-5mm}
\label{tab:imagenet}
\end{table}
\subsection{ImageNet Experiments}
We also evaluate the performance of various \NetworkShort{}s on ImageNet~\cite{Deng-2009-Imagenet}.
The results depicted in Tab.~\ref{tab:imagenet} show that \NetworkShort{}s outperform all the baseline networks by large margins (e.g., $\sim$0.6\% for ResNet-18 and GoogLeNet, $\sim$1.0\% for ResNet-50 and ResNet-101) even when each training batch randomly samples instances from all 1000 categories, in which case $L_{consistent}$ can hardly contribute to the training.
Therefore, we also conduct experiments with setting batch sizes to be 1024, 768 and 512, while enforcing the category number in a batch to be 250, 250 and 200 using the load-shuffle-split strategy.
We could see that the improvements on top-1 accuracy for all \NetworkShort{}s are finally enlarged to 1.0\% - 1.6\%,
validating the scalability of our approach.
\section{Introduction}
Image classification~\cite{akata2015evaluation,blot2016max,elsayed2018large,li2017improving,rastegari2016xnor,tang2015improving,wang2018ensemble,yan2012beyond}, aiming at classifying an image into one of several predefined categories, is an essential problem in computer vision.
In the last few decades, researchers focused on representing images with hand-crafted low-level descriptors~\cite{chen2009wld,lowe-2004-sift}, and then discriminating them with a classifier (e.g., SVM~\cite{chang-2011-libsvm} or its variants~\cite{lu2007gait,maji2008classification}).
However, due to the lack of high-level
features, the performance saturated.
Thanks to
the availability of huge
labeled datasets~\cite{lu-2014-twoclass,russakovsky2015imagenet} and powerful computational infrastructures, convolutional neural networks (CNNs) could automatically extract discriminative high-level features from the training images,
significantly improving the state-of-the-art performance.
Although high-level features
are more discriminative, adopting them alone to classify images is still challenging,
since with a growing number of categories, the possibilities of confusion increase.
In addition,
features in the early layers have been shown to be able to separate groups of classes at a higher level of the category hierarchy~\cite{bilal-2017-convLearnHierarchy}.
Therefore, researchers attempt to combine high and low-level features together to exploit their complementary strengths~\cite{yu2017exploiting}.
However, simply combining them yields features of relatively high dimensionality, hindering practical use.
Other researchers employ low-level features to make coarse decisions and then utilize high-level features to make finer ones, based on the idea of divide-and-conquer.
This could be achieved by designing deep decision trees that implement traditional decision trees~\cite{quinlan-1986-DecisionTree} with CNNs.
With the hierarchical structure of categories,
a straightforward way is to let the network at the root node identify the coarsest category and then dynamically route to the network of a child node to determine a finer one recursively~\cite{kontschieder-2015-decForest}.
However, hierarchical information about categories is not always available, so researchers must design suitable division schemes, making the training process extremely complex (e.g., multi-staged). Besides, current deep decision tree based methods face
two other fatal weaknesses: (1) the network has to store all the tree branches, making the number of parameters explosively larger than that of a single classification network; (2) once the decision is routed down a false path, it can hardly be recovered.
To resolve the above issues, we propose a novel \ModuleFull{} (\ModuleShort{}).
The key idea is that if we let an early layer generate a category-coherent decision and then propagate it along the network,
the latter layers can be guided to encode more discriminative features.
By stacking a collection of \ModuleShort{}s into backbone network architectures for image classification,
the generated \NetworkFull{} (\NetworkShort{}s) are explicitly formulated to progressively encode more descriptive features guided by the decisions made in early layers, and then to iteratively refine the decisions based on the newly generated features.
In the view of residual learning~\cite{He-2016-ResNet}, it is much easier to optimize the refining process than to optimize the making of an unreferenced new decision from scratch.
Besides the advantage of easier optimization, the property of \ModuleShort{} enables \NetworkShort{}s to overcome the weaknesses of common deep decision trees naturally.
Firstly, in contrast to dynamically routing between several different branches after making a decision, \ModuleShort{} applies the decision as a conditional code to the latter layers similar to~\citet{mirza-2014-conditionalGAN}, such that it could be propagated without bringing additional network branches.
Thanks again to this decision propagation scheme, \NetworkShort{}s can recover from false decisions made earlier, since no hard routing is performed.
Furthermore, instead of explicitly designating what each intermediate decision indicates,
\ModuleShort{} relies on the weak supervision provided by three novel loss functions
to automatically learn a more suitable and coherent division of the categories than the man-made category hierarchy,
and can be trained in a totally end-to-end fashion with the backbone networks.
In summary, our contribution is threefold:
\begin{itemize}[leftmargin=*]
\item
We design a novel \ModuleShort{}, which could propagate
the decision made upon an early layer to guide the latter layers.
\item
We propose three novel loss functions to enforce
\ModuleShort{} to make category-coherent decisions.
\item
We demonstrate a general way to integrate \ModuleShort{}s into various backbone networks to form \NetworkShort{}s.
\end{itemize}
Extensive comparison results on four publicly available datasets validate that \ModuleShort{} consistently improves the classification performance and is superior to the state-of-the-art methods.
Code will be made public upon paper acceptance.
\section{Related Work}
{\bf{Category Hierarchy}}, in which categories form a semantic hierarchy consisting of many levels of abstraction, has been well exploited~\cite{grauman2011learning,saha2018class2str,tousch-2012-semanticHierarchiesSurvey}.
\citet{deng-2014-LabelRelationGraphs} introduced hierarchy and exclusion graphs that capture semantic relations between any two labels to improve classification.
\citet{yan-2015-HierarchicalCNN} proposed a two-level hierarchical CNN with the first layer separating easy classes using a coarse category classifier and the second layer handling difficult classes utilizing fine category classifiers.
To mimic the high-level reasoning ability of humans,
\citet{goo-2016-taxonomy} introduced a regulation layer that could abstract and differentiate object categories based on a given taxonomy, significantly improving the performance.
However, the man-made category hierarchy may not be
a good division from the view of CNNs.
\firstpara{Deep Decision Trees/Forests}
The cascade of sample splitting in decision trees has been well explored
by traditional machine learning approaches~\cite{quinlan-1986-DecisionTree}.
With the rise of deep networks,
researchers have attempted to design deep decision trees or forests~\cite{zhou-2017-deepForest} to solve the classification problem.
Since prevailing approaches for decision tree training typically operate in a greedy and local manner, hindering representation learning with CNNs,
\citet{kontschieder-2015-decForest} introduced a novel stochastic routing for decision trees, enabling split-node parameter learning via backpropagation.
Without requiring the user to set the number of trees,
\citet{murthy-2016-decsion-mining-low-confident-recursive} proposed a ``data-driven'' deep decision network that stage-wisely introduces decision stumps to classify confident samples and partitions the remaining, hard-to-classify data into smaller clusters for learning successive expert networks in the next stage.
\citet{ahmed-2016-NetworkOfExperts} further proposed to jointly train a generalist that discriminates coarse groupings of categories and experts aimed at accurate recognition of classes within each specialty, obtaining substantial improvement.
Instead of clustering data based on image labels, \citet{chen-2018-Semi-supervisedHierarchicalCNN} proposed a large-scale unsupervised maximum margin clustering technique to split images into a number of hierarchical clusters iteratively, learning cluster-level CNNs at parent nodes and category-level CNNs at leaf nodes.
Different from the above approaches, which implement each decision branch with a separate routing network, \citet{xiong-2015-conditionalNetwork} proposed a conditional CNN framework for face recognition, which dynamically routes by activating a part of the kernels, making the deep decision tree more compact.
Based on it, \citet{baek-2017-deepDecisionJungle} proposed a fully connected ``soft'' decision jungle structure that makes the decisions recoverable, leading to more discriminative intermediate representations and higher accuracy.
Our \ModuleShort{} could be considered as a deep decision tree based approach, and the most similar work to ours is~\citet{baek-2017-deepDecisionJungle}.
However, the difference is at least three-fold.
Firstly, instead of dynamically activating a part of the kernels to reduce parameters, which makes each kernel serve only a part of the decisions,
our \ModuleShort{} adopts conditional codes to propagate decisions, enforcing every kernel to work for all the decisions and thus making full use of the neurons.
Secondly, their approach requires layers whose channel number is larger than the category number, which can hardly be satisfied in real cases with 1k or more categories, while our solution has no such restriction.
Last but not least, we design three novel loss functions to enforce \ModuleShort{} to make category-coherent decisions.
\begin{figure*}[t!]
\centering
\includegraphics[width=0.75\linewidth]{CDN.png}
\vspace{-3mm}
\caption{The demonstration of \ModuleFull{}: given a feature map {\bf{$ {\bf{ U}}$}}, we first conduct global average pooling to obtain the pooled feature {\bf{${\bf{U^{'}}}$}}; we then classify it using the network layer ${{F}}$ to get an intermediate decision {\bf{${\bf{D}}$}}; finally, the decision is expanded and concatenated with the feature map {\bf{${\bf{V}}$}}. Note that {\bf{${\bf{V}}$}} could be {\bf{${\bf{U}}$}} or any other subsequent feature map.}
\vspace{-3mm}
\label{fig:cdn}
\end{figure*}
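For clarity, the operations depicted in Figure~\ref{fig:cdn} can be summarized by the following minimal PyTorch sketch; the class and argument names are ours rather than the exact implementation, and the layer sizes simply follow the decision-branch description (two FC layers around a ReLU, a Softmax, and a reduction ratio of 16):
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecisionPropagationSketch(nn.Module):
    # Sketch of the decision propagation module: GAP -> FC -> ReLU -> FC -> Softmax
    # gives the intermediate decision D, which is expanded and concatenated with V.
    def __init__(self, in_channels, num_aux_categories=2, reduction=16):
        super().__init__()
        hidden = max(in_channels // reduction, 1)
        self.fc1 = nn.Linear(in_channels, hidden)
        self.fc2 = nn.Linear(hidden, num_aux_categories)

    def forward(self, u, v):
        # u: (N, C, H, W) feature map used to make the decision
        # v: feature map the decision is propagated to (may be u itself)
        pooled = F.adaptive_avg_pool2d(u, 1).flatten(1)       # global average pooling
        d = F.softmax(self.fc2(F.relu(self.fc1(pooled))), dim=1)
        code = d[:, :, None, None].expand(-1, -1, v.size(2), v.size(3))
        return torch.cat([v, code], dim=1)                    # decision as a conditional code
\end{verbatim}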
\firstpara{Belief Propagation in CNNs}
Belief propagation has been well studied for a long time especially by traditional methods~\cite{conitzer-2019-belief,Felzenszwalb-2006-EfficientBP}.
Actually, the concept of belief propagation has also been exploited by various deep networks.
Highway networks~\cite{Srivastava-2015-Highway} allow unimpeded information to propagate across several layers along information highways.
ResNets~\cite{He-2016-identity} propagate the identities via the well-defined res-block structures.
Compared with those {\em{skip-connection based methods}} that propagate the identity feature maps directly, the intermediate decisions propagated by our approach are of relatively lower dimensions but provide more explicit (category-coherent) guidance.
Therefore, our designed \ModuleShort{} could be considered as another feasible solution for belief propagation.
Besides, we will also see that \ModuleShort{} could be easily integrated into skip-connection based networks to further improve their performance.
\section{Experimental Setup}
\label{sect:setup}
In this section, we describe the experimental settings used in our experiments. We always use \emph{grid search} along with \emph{5-fold cross-validation} on a training set to find the optimal hyperparameters of the different learning methods. Below, we first present the detailed setup of our approach and then describe the technical details of the other learning methods used in our comparative studies on different datasets.
\subsection{Setup in Our Approach}
In our implementation of the operator net, we have to address the differentiation between \emph{a selected feature whose value is zero} and \emph{a removed feature masked with zero}, an issue arising from the use of binary masks in our work. Thus, we design the operator net architecture shown in Figure \ref{fig:example_ON}. Instead of feeding only the selected features, $\pmb x \otimes \pmb m$, to the first hidden layer, we concatenate the mask, $\pmb m$, used to indicate the selected features, and the selected features themselves, $\pmb x \otimes \pmb m$, to form the input fed to the first hidden layer, as illustrated in Figure \ref{fig:example_ON}. Thus, the dimension of the input to the first hidden layer is $2d$ rather than the $d$ features described in the main text. It is worth mentioning that we investigated other ways to tackle the aforementioned ``zero-value'' issue, e.g., stipulating a value beyond the range of any feature for a removed feature in $\pmb x \otimes \pmb m$. However, none of those yields better performance than the architecture presented above.
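To make this input construction explicit, a minimal sketch (function and variable names are ours) of how a training example is assembled before being fed to the first hidden layer is:
\begin{verbatim}
import numpy as np

def operator_input(x, m):
    # x: a d-dimensional feature vector; m: a binary mask of the same length.
    # The masked features x*m and the mask m are concatenated,
    # giving the 2d-dimensional input described above.
    return np.concatenate([x * m, m], axis=-1)
\end{verbatim}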
Moreover, there are specific settings in our approach due to technical reasons; e.g., the loss function used to train the selector net and the subtle technical details related to Phase II in our alternate learning algorithm, as described in Sect. 3.3 of the main text.
In our experiments, the loss function used to train the selector net presented in Eq.(2b) of the main text is actually replaced by a {\it weighted} loss as follows:
$$
{\cal L}_S ({\cal M}'; \varphi ) = \frac 1 {2|{\cal M}'|} \sum_{\pmb m \in {\cal M}'}
w_{\pmb m}
\Big ( f_S(\varphi; \pmb m) - \frac 1 {|{\cal D}|} \sum_{({\pmb x}, {\pmb y}) \in {\cal D}}
l(\pmb x \otimes \pmb m, \pmb y; \theta) \Big )^2,
$$
where $w_{\pmb m} = 10, 5, 1$ when $\pmb m = {\pmb m}_{t,best}$ (the best performed subset found in the last step), $\pmb m = {\pmb m}_{t+1,opt}$ (the optimal subset generated in the current step), and $\pmb m$ is any of other subsets in ${\cal M}_{t+1}'$, respectively (c.f. Phase II-A in Sect. 3.3 of the main text). The above weighted selector loss exploits what has been learned so far in order to facilitate the stochastic local search in tackling the combinatorial optimization problem more effectively.
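As an illustration, a minimal NumPy sketch of this weighted selector loss is given below; the argument names are ours, and the per-mask operator losses are assumed to have already been averaged over ${\cal D}$:
\begin{verbatim}
import numpy as np

def weighted_selector_loss(selector_preds, operator_losses, weights):
    # selector_preds[i]  : f_S(phi; m_i), the selector's prediction for mask m_i
    # operator_losses[i] : the operator loss for m_i, averaged over the data D
    # weights[i]         : w_{m_i} (10, 5 or 1, as described above)
    selector_preds = np.asarray(selector_preds, dtype=float)
    operator_losses = np.asarray(operator_losses, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return np.sum(weights * (selector_preds - operator_losses) ** 2) / (2 * len(weights))
\end{verbatim}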
\begin{figure}[t]
\centering
\includegraphics[trim=20 670 370 30, clip,width=0.5\linewidth]{figure/example_ON}
\caption{The actual implementation of the operator net in our experiments to overcome the ``zero-value'' representation issue. As a result, both the selected features, $\pmb x \otimes \pmb m$, and the mask, $\pmb m$, used to indicate those selected features are concatenated as the input to the first hidden layer.}
\label{fig:example_ON}
\end{figure}
\begin{table}[t]
\caption{The optimal architectural hyperparameters of our dual-net learning model in our experiments. \textbf{ CNN$^*$}: there are
3 convolutional layers of 32, 64 and 128 channels with kernel sizes of 5, 3 and 3, respectively. Each convolutional layer is followed by a max-pooling layer. Two input channels are used to cope with the ``zero-value'' issue, one
for the selected features, $\pmb x \otimes \pmb m$, and the other for the mask, $\pmb m$ (c.f. Figure \ref{fig:example_ON}).
Two dense layers of 30 neurons each are stacked on top of the last convolutional layer. The output layer of 15 softmax neurons corresponds to the 15 classes.}
\begin{center}
\makebox[\textwidth][c]{
\begin{tabular}{|c|c|c|}
\hline
{\bf Data Set} & {\bf Operator Net} & {\bf Selector Net} \\
\hline
4-way Classification & $20 \rightarrow 60\rightarrow 30\rightarrow 20 \rightarrow 4$ & $10 \rightarrow 100\rightarrow 50\rightarrow 10 \rightarrow 1$ \\
Nonlinear Regression & $20 \rightarrow 100\rightarrow 50\rightarrow 25 \rightarrow 1$ & $10 \rightarrow 100\rightarrow 50\rightarrow 10 \rightarrow 1$ \\
Binary Classification & $20 \rightarrow 60\rightarrow 30\rightarrow 20 \rightarrow 1$ & $10 \rightarrow 100\rightarrow 50\rightarrow 10 \rightarrow 1$ \\
\hline
MNIST Subset & $1568 \rightarrow 500\rightarrow 250\rightarrow 100 \rightarrow 1$ & $784 \rightarrow 300\rightarrow 200\rightarrow 100 \rightarrow 1$ \\
Glass & $20 \rightarrow 50\rightarrow 25\rightarrow 10 \rightarrow 6$ &$10 \rightarrow 500\rightarrow 250\rightarrow 100 \rightarrow 1$ \\
Vowel & $20 \rightarrow 50\rightarrow 25\rightarrow 10 \rightarrow 11$ &$10 \rightarrow 500\rightarrow 250\rightarrow 100 \rightarrow 1$ \\
TOX-171 & $11568 \rightarrow 100\rightarrow 50\rightarrow 20\rightarrow 4$ & $5784 \rightarrow 500\rightarrow 250\rightarrow 100 \rightarrow 1$ \\
Yale & \textbf{CNN$^*$} & $1024 \rightarrow 500\rightarrow 250\rightarrow 100 \rightarrow 1$ \\
\hline
Enhancer–Promoter & $204 \rightarrow 300\rightarrow 200\rightarrow 50 \rightarrow 3$ & $102 \rightarrow 500\rightarrow 250\rightarrow 100 \rightarrow 1$ \\
\hline
RNA-seq & $40528 \rightarrow 1000\rightarrow 500\rightarrow 200 \rightarrow 5$ & $20264 \rightarrow 500\rightarrow 250\rightarrow 100 \rightarrow 1$ \\
\hline
\end{tabular}
}
\end{center}
\label{tab:architectures}
\end{table}
\begin{table}[t]
\caption{Other optimal hyperparameters of our dual-net learning model in our experiments. $E_1$ is the number of SGD training batches (instead of epochs) in Phase I of the alternate learning. In Phase II.A, $|{\cal M}_t'|$ refers to the number of different optimal subset candidates used in a single batch during the SGD learning. $f=\frac {|{\cal M}_{t,1}'|} {|{\cal M}_{t,2}'|}$ is the fraction that governs the exploitation-exploration trade-off in the selector learning, and $s_p$ indicates the number of elements perturbed. For details of the alternate learning algorithm, see Section 3.3 in the main text.}
\begin{center}
\makebox[\textwidth][c]{
\begin{tabular}{|c|c|c|c|c|}
\hline
{\bf Data Set} & $E_1$ & $|{\cal M}_t'|$ & $f$ & $s_p$ \\
\hline
4-way Classification & $6,000$ & $32$ & $0.5$ & $2$ \\
Nonlinear Regression & $6,000$ & $32$ & $0.5$ & $2$ \\
Binary Classification & $6,000$ & $32$ & $0.5$ & $2$ \\
\hline
MNIST Subset & $10,000$ & $32$ & $0.5$ & $5$ \\
Glass & $10,000$ & $128$ & $0.5$ & $2$ \\
Vowel & $6,000$ & $128$ & $0.5$ & $2$ \\
TOX-171 & $1,500$ & $128$ & $0.5$ & $5$ \\
Yale & $4,500$ & $128$ & $0.5$ & $5$ \\
\hline
Enhancer–Promoter & $10,000$ & $64$ & $0.5$ & $5$ \\
\hline
RNA-seq & $8000$ & $32$ & $0.5$ & $5$ \\
\hline
\end{tabular}
}
\end{center}
\label{tab:hyperparameters}
\end{table}
In our experiments, the 3-step validation procedure used for \emph{generation of the optimal subset} ensures its optimality within those feature subset candidates (c.f. Phase II.A in Sect. 3.3 of the main text). However, the condition to exit the loop of repeating steps i)-iii) may not always be satisfied. In our experiments, we hence set the maximum number of repetitions of this test to \emph{5}, so that the subset optimality validation procedure always terminates within \emph{five} iterations. In addition, the parameters of the operator and the selector nets are updated at different frequencies in Phase II of the alternate learning; i.e., the parameters of the operator net are updated once on each batch in Phase II.B, while the parameters of the selector net are updated once every 8 batches in Phase II.A.
We employ MLPs (CNNs) with sigmoid (ReLU) neurons for the operator net and MLPs with sigmoid neurons for the selector net in our dual-net architecture. For training the MLPs (CNNs), we adopt the \emph{Adam optimizer} (Adam with Nesterov momentum for the operator net)\cite{kingma2014Adam} via the stochastic gradient descent (SGD) procedure. Early stopping is used based on the losses evaluated on the validation data\footnote{In our alternate learning procedure, we use the operator loss incurred by the optimal subset, ${\pmb m}_{t,opt}$, on the validation set for early stopping. For clarity and details, see Section \ref{sect:behavior}.}. All the optimal hyperparameters used in our experiments are summarized in Tables \ref{tab:architectures} and \ref{tab:hyperparameters}, respectively.
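For reference, a minimal sketch of the corresponding optimizer configuration in TensorFlow 2.x / Keras (where Adam with Nesterov momentum is available as \texttt{Nadam}) is:
\begin{verbatim}
import tensorflow as tf

# Optimizers as described above: Adam with Nesterov momentum (Nadam)
# for the operator net, plain Adam for the selector net.
operator_optimizer = tf.keras.optimizers.Nadam()
selector_optimizer = tf.keras.optimizers.Adam()
# Early stopping is driven by the operator validation loss
# evaluated with the optimal mask m_opt only (see Section sect:behavior).
\end{verbatim}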
\subsection{Optimal Hyperparameters in Other Methods}
\begin{table}[t]
\caption{Optimal regularization hyperparameters, $\lambda$, used in LASSO on different datasets in our experiments.}
\begin{center}
\makebox[\textwidth][c]{
\begin{tabular}{|c|c|c|c|c|c|}
\hline
{\bf Data Set} & Fold-1 & Fold-2 & Fold-3 & Fold-4 & Fold-5 \\
\hline
4-way classification&0.055&0.056&0.046&0.008&0.042\\
Nonlinear Regression&0.001&0.0&0.053&0.001&0.0\\
Binary Classification&0.025&0.07&0.081&0.02&0.039\\
\hline
\end{tabular}
}
\end{center}
\label{tab:lasso_params}
\end{table}
\begin{table}[ht]
\caption{Optimal hyperparameters (\#tree, depth) for RF on different datasets in our experiments.}
\begin{center}
\makebox[\textwidth][c]{
\begin{tabular}{|c|c|c|c|c|c|}
\hline
{\bf Data Set} & Fold-1 & Fold-2 & Fold-3 & Fold-4 & Fold-5 \\
\hline
4-way classification& (80, 14) & (70, 11) & (90, 15) & (150, 14) & (150, 12) \\
Nonlinear Regression& (50, 15) & (60, 15) & (50, 14) & (60, 10) & (50, 8) \\
Binary Classification& (60, 14) & (50, 10) & (90, 8) & (90, 15) & (160, 13) \\
\hline
MNIST& (200, 21) & (200, 20) & (210, 21) & (200, 21) & (200, 20) \\
\hline
Enhancer-Promoter& (120, 11) & (120, 12) & (100, 15) & (150, 13) & (60, 14) \\
\hline
\end{tabular}
}
\end{center}
\label{tab:rf_params}
\end{table}
Nine other methods have also been employed for a comparative study on different datasets. We strictly follow the original settings described in their papers. We implement the deep learning algorithms ourselves with TensorFlow 2.0 \cite{tensorflow2015-whitepaper} and Keras \cite{chollet2015keras}. For the other methods, we use the existing code in the Python scikit-learn library \cite{scikit-learn} for FS, LASSO, RF and RFE, or the authors' project websites for BAHSIC\footnote{BAHSIC webpage: \url{https://www.cc.gatech.edu/~lsong/code.html}}, mRMR\footnote{PyMRMR library: \url{https://pypi.org/project/pymrmr/}} and CCM \footnote{CCM repository: \url{https://github.com/Jianbo-Lab/CCM}}. Our source code will be made available after the completion of review. Below, we summarize the actual optimal hyperparameters pertaining to those methods used in our comparative study.
\paragraph{Deep Feature Selection (DFS)} \cite{Li2016DFS}. For DFS, we use MLPs with the same architectures as the operator net in our dual-net architecture, apart from the input layers, for a given task, as shown in Table \ref{tab:architectures}. Instead of concatenating the selected features and the mask indicating the selected features as in our operator net, DFS appends an additional one-to-one layer between the input and the first hidden layer. Similarly, the sigmoid neurons are used in their modified MLPs and the Adam optimizer \cite{kingma2014Adam} is adopted for training the MLPs via SGD. The optimal regularization hyperparameter is $\lambda =0.01$ for the 3 synthetic datasets and the MNIST subset after a grid search over a large range of $\lambda$. For the Enhancer-Promoter dataset, the optimal hyperparameter is $\lambda = 0.008$. The rest of the parameters are kept the same as suggested in \cite{Li2016DFS}, i.e., $\lambda_2 = 1, \alpha_1 = 0.0001, \alpha_2 = 0$.
Note that we implement the DFS code ourselves with TensorFlow 2.0 and Keras since the authors' code is only applicable to a specific dataset.
\paragraph{Average Input Gradient (AvGrad)} \cite{hechtlinger2016Inputgradient}. Since this is simply a post-processing method for feature importance ranking (FIR) based on a trained MLP, we employ the same MLP architecture as that of our operator net, apart from the input, and the Adam optimizer \cite{kingma2014Adam} via SGD on the 3 synthetic datasets and the MNIST subset (Table \ref{tab:architectures}).
\paragraph{Forward Selection (FS)} \cite{guyon2003SFintro}. For FS, we employ MLPs as the base learner in this wrapper method, with the training procedure identical to that used for AvGrad on the 3 synthetic datasets. For FIR, FS always ranks the importance of an early selected feature higher than that of others selected later in the forward subset selection procedure.
\paragraph{LASSO} \cite{tibshirani1996LASSO}. We use grid search to find the optimal regularization hyperparameter, $\lambda$, in LASSO. The optimal hyperparameters found in the 5 folds are listed in Table \ref{tab:lasso_params}.
\paragraph{Random Forest (RF)} \cite{breiman2001RF}. We use grid search to find the optimal hyperparameters: the number of decision trees and the depth of the trees. We search over a range from 50 to 220 trees and between 7 and 24 in depth. The optimal hyperparameters found in the 5 folds are listed in Table \ref{tab:rf_params}.
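A minimal scikit-learn sketch of this grid search for the classification datasets is shown below (the step sizes of the grid are illustrative, and \texttt{RandomForestRegressor} is used analogously for the regression dataset):
\begin{verbatim}
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Grid search over the number of trees (50-220) and tree depth (7-24)
# with 5-fold cross-validation, as used throughout our experiments.
param_grid = {"n_estimators": list(range(50, 221, 10)),
              "max_depth": list(range(7, 25))}
search = GridSearchCV(RandomForestClassifier(), param_grid, cv=5)
# search.fit(X_train, y_train); best = search.best_params_
\end{verbatim}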
\paragraph{Recursive Feature Estimation (RFE)} \cite{guyon2002RFE}. We use 1 step for all the datasets apart from TOX-171 and Yale datasets where 5 steps are used. In our experiments, we adopt the default values for underlying estimators (linear SVM) with $C=1$ and $\gamma = \frac{1}{n_{features} * \text{var}(X)}$.
\paragraph{Backward Elimination using HSIC (BAHSIC)} \cite{song2007BAHSIC1,song2012BAHSIC2}. The default hyperparameter specifying the fraction of features removed in each iteration is set to 0.1, as suggested in their papers. In our experiments, we adopt the inverse kernels suggested in their papers and on the BAHSIC webpage.
\paragraph{Minimal Redundancy Maximal Relevance Criterion (mRMR)} \cite{peng2005mRMR}. No hyperparameter needs to be tuned in this method. In our experiments, we adopt the ``MIQ'' option suggested in the \texttt{PyMRMR} library.
\paragraph{Conditional Covariance Minimization (CCM)} \cite{chen2017CCM}. We use $\epsilon=0.001$ for two synthetic classification datasets, 4-way and binary classification, and 4 benchmark datasets, Glass, Vowel, TOX-171 and Yale. For the nonlinear regression dataset, we use $\epsilon=0.1$. As all 7 datasets were used in the paper, we adopt the optimal hyperparameters reported in the paper and suggested in the CCM repository.
As CCM, RFE, BAHSIC, mRMR and LASSO are filtering methods for feature selection, we need to measure their performance based on another learning model. For CCM, RFE, BAHSIC and mRMR, we adopt the same setting used in \cite{chen2017CCM}, i.e., SVM/SVR with a Gaussian kernel of optimal hyperparameters: $C=1$ and $\gamma = \frac{1}{n_{features} * \text{var}(X)}$. For LASSO, we use the same MLPs used in deep learning models, i.e., DFS, AvGrad and ours (operator net).
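Concretely, the downstream classifier used to evaluate the filter methods corresponds to the following scikit-learn configuration; note that $\gamma = \frac{1}{n_{features} * \text{var}(X)}$ is exactly scikit-learn's \texttt{gamma='scale'}, and the variable names below are illustrative:
\begin{verbatim}
from sklearn.svm import SVC

# RBF-kernel SVM with C = 1 and gamma = 1 / (n_features * Var(X)),
# i.e., scikit-learn's gamma='scale'.
clf = SVC(C=1.0, kernel="rbf", gamma="scale")
# clf.fit(X_train[:, selected], y_train)
# acc = clf.score(X_test[:, selected], y_test)
\end{verbatim}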
\section{More Results on Synthetic and Benchmark Data}
\label{sect:benchmark}
In this section, we demonstrate the typical learning behavior of our alternate learning algorithm on different datasets, describe the detailed information of benchmark datasets used in our experiments and report more experimental results.
\begin{figure}[t]
\centering
\fbox {
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/tensorboard_regression/epoch_loss}
\caption{selector-loss}
\vspace*{4mm}
\end{subfigure}
}
\fbox {
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figure/tensorboard_regression/epoch_train_loss}
\caption{operator-loss-train}
\vspace*{4mm}
\end{subfigure}
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figure/tensorboard_regression/epoch_train_m_opt_loss}
\caption{operator-loss-train(${\pmb m}_{opt}$)}
\vspace*{4mm}
\end{subfigure}
}
\fbox{
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figure/tensorboard_regression/epoch_val_loss}
\caption{operator-loss-val}
\end{subfigure}
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figure/tensorboard_regression/epoch_val_m_opt_loss}
\caption{operator-loss-val(${\pmb m}_{opt}$)}
\end{subfigure}
}
\caption{\textbf{Synthetic nonlinear regression dataset}. Evolution of the operator and the selector losses in Phase II ($d=10, s=5$). The \texttt{x-axis} corresponds to the number of batches and \texttt{y-axis} refers to the loss statistics of 5 folds.
(a) The selector loss. (b) The operator loss on the training set. (c) The operator loss on the training set with ${\pmb m}_{opt}$ only.
(d) The operator loss on the validation set.
(e) The operator loss on the validation set with ${\pmb m}_{opt}$ only.
Note that Phase II starts when the operator net has been trained for 6,000 batches in Phase I.}
\label{fig:synth_training}
\end{figure}
\subsection{Learning Behavior}
\label{sect:behavior}
As described in Section 3.3 of the main text, our alternate learning algorithm trains two learning models, the operator and the selector, simultaneously in an alternate manner; i.e., in Phase II, the learning behaviors of the operator and the selector nets mutually affect each other in each batch during the SGD learning. This is different from most existing deep learning algorithms, which involve only a single deep neural network to be trained. Therefore, we need to investigate how our proposed learning model behaves. Below, we exhibit the typical learning behavior of our alternate learning on 3 datasets: \emph{synthetic nonlinear regression}, \emph{MNIST subset} and \emph{Yale}.
Figure \ref{fig:synth_training} illustrates the learning behavior of the operator and the selector in terms of losses in Phase II on the synthetic \emph{nonlinear regression} dataset. It is observed from Figure \ref{fig:synth_training}(a) that the operator trained in Phase I provides informative training examples, so that the averaged selector loss of 5 folds decreases monotonically as required. As evident in Figures \ref{fig:synth_training}(b) and \ref{fig:synth_training}(d), the averaged operator loss on the training and validation sets further decreases steadily as the selector keeps offering more ``promising'' optimal mask candidates achieved by the stochastic local search for combinatorial optimization. It is clearly seen in Figures \ref{fig:synth_training}(b) and \ref{fig:synth_training}(d) that at the beginning of Phase II (up to 1k batches), the operator loss on both the training and validation sets decreases sharply once the selector has been involved. Also, the loss may be reduced substantially when an optimal mask is identified, as shown in Figure \ref{fig:synth_training}(d) (between 6k and 7k batches). At the end of Phase II.A in each iteration, we always obtain an optimal mask, ${\pmb m}_{t,opt}$; thus, we can measure the operator loss with such optimal masks only. Accordingly, Figures \ref{fig:synth_training}(c) and \ref{fig:synth_training}(e) illustrate the evolution of the operator loss evaluated with ${\pmb m}_{t,opt}$ only on the training and validation sets. In contrast to the operator loss with all optimal mask (subset) candidates shown in Figure \ref{fig:synth_training}(d), the abrupt loss drop resulting from the identified optimal mask is much more visible in Figure \ref{fig:synth_training}(e). Therefore, early stopping in our alternate learning algorithm is based on the operator loss evaluated with ${\pmb m}_{t,opt}$ only. Overall, Figure \ref{fig:synth_training} demonstrates that our alternate learning algorithm works well and eventually converges for this regression task.
\begin{figure}[ht]
\centering
\fbox {
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/tensorboard_mnist/epoch_loss}
\caption{selector-loss}
\vspace*{4mm}
\end{subfigure}
}
\fbox{
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figure/tensorboard_mnist/epoch_train_loss}
\caption{operator-loss-train}
\vspace*{4mm}
\end{subfigure}
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figure/tensorboard_mnist/epoch_train_m_opt_loss}
\caption{operator-loss-train(${\pmb m}_{opt}$)}
\vspace*{4mm}
\end{subfigure}
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figure/tensorboard_mnist/epoch_train_categorical_accuracy}
\caption{operator-ACC-train}
\vspace*{4mm}
\end{subfigure}
}
\fbox {
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figure/tensorboard_mnist/epoch_val_loss}
\caption{operator-loss-val}
\vspace*{4mm}
\end{subfigure}
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figure/tensorboard_mnist/epoch_val_m_opt_loss}
\caption{operator-loss-val(${\pmb m}_{opt}$)}
\vspace*{4mm}
\end{subfigure}
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figure/tensorboard_mnist/epoch_val_categorical_accuracy}
\caption{operator-ACC-val}
\vspace*{4mm}
\end{subfigure}
}
\caption{\textbf{MNIST Benchmark Subset}. Evolution of the operator and the selector losses in Phase II ($d=784, s=85$). The \texttt{x-axis} corresponds to the number of batches and \texttt{y-axis} refers to the loss statistics of 5 folds.
(a) The selector loss.
(b) The operator loss on the training set.
(c) The operator loss on the training set with ${\pmb m}_{opt}$ only.
(d) The classification accuracy evaluated on the training set.
(e) The operator loss on the validation set.
(f) The operator loss on the validation set with ${\pmb m}_{opt}$ only.
(g) The classification accuracy evaluated on the validation set.
Note that Phase II starts when the operator net has been trained for 10,000 batches in Phase I. The spike at the maximum batch in (e)-(g) corresponds to the results evaluated on the test set with the trained operator net upon the completion of the alternate learning.}
\label{fig:training_mnist}
\end{figure}
\begin{figure}[ht]
\centering
\fbox {
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/tensorboard_yale/epoch_loss}
\caption{selector-loss}
\vspace*{4mm}
\end{subfigure}
}
\fbox {
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figure/tensorboard_yale/epoch_train_loss}
\caption{operator-loss-train}
\vspace*{4mm}
\end{subfigure}
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figure/tensorboard_yale/epoch_train_m_opt_loss}
\caption{operator-loss-train(${\pmb m}_{opt}$)}
\vspace*{4mm}
\end{subfigure}
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figure/tensorboard_yale/epoch_train_categorical_accuracy}
\caption{operator-ACC-train}
\vspace*{4mm}
\end{subfigure}
}
\fbox {
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figure/tensorboard_yale/epoch_val_loss}
\caption{operator-loss-val}
\end{subfigure}
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figure/tensorboard_yale/epoch_val_m_opt_loss}
\caption{operator-loss-val(${\pmb m}_{opt}$)}
\end{subfigure}
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figure/tensorboard_yale/epoch_val_categorical_accuracy}
\caption{operator-ACC-val}
\end{subfigure}
}
\caption{\textbf{Yale Benchmark Dataset}. Evolution of the operator and the selector losses in Phase II ($d=784, s=10, 30$). The \texttt{x-axis} corresponds to the number of batches and \texttt{y-axis} refers to the loss statistics of 5 folds. The light/dark colors correspond to $s=10$/$s=30$, respectively.
(a) The selector loss.
(b) The operator loss on the training set.
(c) The operator loss on the training set with ${\pmb m}_{opt}$ only.
(d) The classification accuracy evaluated on the training set.
(e) The operator loss on the validation set.
(f) The operator loss on the validation set with ${\pmb m}_{opt}$ only.
(g) The classification accuracy evaluated on the validation set.
Note that Phase II starts when the operator net has been trained for 4,500 batches in Phase I. The spike at the maximum batch (6k) in (e)-(g) corresponds to the results evaluated on the test set with the trained operator net upon the completion of the alternate learning.}
\label{fig:yale_training}
\end{figure}
Next, Figure \ref{fig:training_mnist} illustrates the learning behavior of the operator and the selector in terms of losses in Phase II on the MNIST benchmark subset, a \emph{binary classification} task.
It is seen from Figure \ref{fig:training_mnist}(a) that the averaged selector loss of 5 folds shows a decreasing trend as the number of batches increases, although it no longer drops monotonically.
The sharp selector loss increase at around 10k batches is typical and reflects the nature of our stochastic local search procedure in tackling the combinatorial optimization issue. The sharp increase is likely caused by the fact that the identified optimal mask leads to a sharp operator loss reduction, and the selector net had not seen such training examples before this moment. This analysis is corroborated by all the results around 10k batches shown in the other plots in Figure \ref{fig:training_mnist}.
As evident in Figures \ref{fig:training_mnist}(b) and \ref{fig:training_mnist}(c), the averaged operator loss on the training set further decreases in general. Using an alternative performance index, we also show the averaged classification accuracy measured on the training set in Figure \ref{fig:training_mnist}(d), which allows one to see the learning performance vividly. Likewise, we illustrate the averaged operator loss and accuracy on the validation set in Figures \ref{fig:training_mnist}(e)-(g). Once again, we can see that our alternate learning algorithm works very well, and the operator validation loss evaluated with ${\pmb m}_{opt}$ only provides solid evidence for early stopping. In general, the learning behavior on this binary classification dataset very much resembles that on the nonlinear regression dataset (c.f. Figure \ref{fig:synth_training}). After the alternate learning is completed, we can evaluate the performance of the trained operator net on the test set in the same manner. To show the test performance, we depict the averaged loss evaluated on the test set with all the optimal mask candidates and with the optimal mask, as well as the accuracy based on the optimal mask, at the maximum batch in Figures \ref{fig:training_mnist}(e)-(g). Interestingly, it is seen in Figures \ref{fig:training_mnist}(e)-(g) that the test performance is significantly better than the validation performance in terms of both the losses and the accuracy. This suggests that our alternate learning algorithm yields favorable generalization performance on this benchmark dataset.
Finally, Figure \ref{fig:yale_training} shows the learning behavior of the operator and the selector in terms of losses in Phase II on the Yale benchmark dataset, a \emph{multiclass classification} task. For this facial image dataset, we employ the convolutional neural network described in Table \ref{tab:architectures} to carry out the operator net. To understand the learning behavior better, we compare the alternate learning for different subset sizes, $s=10$ and $s=30$. It is observed from Figure \ref{fig:yale_training}(a) that the averaged selector loss for the two subset sizes behaves quite differently. For $s=10$, the selector loss decreases sharply over the first few hundred batches and then increases sharply. The limited amount of information carried in 10 out of 1024 features may account for this phenomenon. In contrast, the evolution of the selector loss for $s=30$ is similar to that shown in Figures \ref{fig:synth_training}(a) and \ref{fig:training_mnist}(a). Figures \ref{fig:yale_training}(b)-(d) suggest that, for both subset sizes, the averaged operator loss keeps decreasing and the accuracy keeps increasing on the training set. In contrast, the averaged operator loss for both subset sizes trends upward on the validation set after 1.5k batches, as shown in Figures \ref{fig:yale_training}(e) and (f). This looks like a typical overfitting scenario. As seen in Figure \ref{fig:yale_training}(g), however, the averaged classification accuracy on the validation set generally keeps increasing regardless of the subset size. Furthermore, for $s=30$, the averaged operator test losses and the test accuracy shown in Figures \ref{fig:yale_training}(e)-(g) (at 6k batches) also provide evidence of good generalization performance. Surprisingly, the alternate learning behavior on this benchmark dataset is inconsistent with the normal behavior of a learning system. While we do not fully understand such learning behavior, our preliminary analysis implies that this phenomenon could be caused by the \emph{covariate shift} nature of this facial image dataset and the limited training data. In the Yale dataset, the images of an individual subject correspond to different facial expressions. Since there are only limited training examples and the selector learning is constrained by the operator training performance, the stochastic local search in Phase II.A may have to do a lot of exploration in order to find the ``genuine'' optimal subset (mask). This can be observed from the fluctuating operator validation loss shown in Figures \ref{fig:yale_training}(e) and (f). Thanks to our stochastic exploration-exploitation strategy, some sub-optimal subsets may still direct the learning toward an acceptable level of performance.
In summary, we exhibit typical yet different learning behaviors of our dual-net architecture trained by the alternate learning algorithm in Figures \ref{fig:synth_training}-\ref{fig:yale_training}. In most situations, including the one reported in Section \ref{sect:enhance-promoter} and others not reported here, we can use the operator validation loss evaluated with the optimal mask only for early stopping. On some occasions, however, we encounter ``strange'' learning behavior, as exemplified in Figure \ref{fig:yale_training}. In such cases, we might have to use the validation classification accuracy (or validation MSE in regression) for early stopping. We are going to investigate such ``strange'' learning behavior in our ongoing work.
\subsection{Detailed Information on Benchmark Data}
In this section, we provide the detailed information on 4 benchmark datasets described in the main text.
In our comparative study, we choose 4 challenging benchmark datasets for feature selection evaluation. As reported in \cite{chen2017CCM}, state-of-the-art feature selection methods, including the latest strong ones, do not perform well on the following datasets.
\noindent
\textbf{Glass dataset}\footnote{[online]: \url{https://archive.ics.uci.edu/ml/machine-learning-databases/glass/}}
The Glass dataset is a famous UCI benchmark for predicting the type of glass
based on its chemical composition. It contains 9 chemical features plus an instance ID that is normally not treated as a feature. In the experimental setting of CCM \cite{chen2017CCM}, the ID was treated as an additional feature, so that 10 features are used in their experiments. Because the instances in the data file are arranged in a non-shuffled manner according to their class labels, the ID feature turns out to be one of the most important features, so that CCM and other strong feature selection methods yield very high accuracy; e.g., CCM achieves 86\% on average \cite{chen2017CCM}. In our experiment, we follow this setting so that our approach yields 90\%+ accuracy (c.f. Figure 4 in the main text). Without the ID feature, however, all the methods including ours yield considerably lower accuracies, although ours still outperforms those used in our comparative study. In the 5-fold cross-validation, the accuracy of our approach drops to the level of 75\%-80\%, quite close to the known top accuracy of 80\% on the OpenML platform\footnote{[online]: \url{https://www.openml.org/t/3815}}.
\noindent
\textbf{Vowel Dataset}\footnote{[online]: \url{https://www.openml.org/d/307}}
The Vowel is yet another famous UCI benchmark dataset for predicting English vowels from acoustic features. Following the same setting used in CCM \cite{chen2017CCM}, we use a newer version of this dataset so that we can make a fair comparison to those feature selection methods used in our comparative study.
\noindent
\textbf{TOX-171 dataset}\footnote{[online]: \url{http://featureselection.asu.edu/old/datasets.php}}
The TOX-171 is a biological microarray dataset with only 43 instances/class but 5,784 features. The nature of this dataset makes a deep learning model very prone to overfitting. As shown in Figure 4 in the main text, our approach does not outperform the CCM in general, which reveals the limitation of our approach.
\noindent
\textbf{Yale Dataset}\footnote{[online]: \url{http://www.cad.zju.edu.cn/home/dengcai/Data/FaceData.html}}
The Yale dataset is a well-known facial image benchmark. It contains 15 individual subjects, and 11 images of different facial expressions (e.g., wink, happy and sad) were collected from each individual. When this dataset is used for face recognition, a random split could lead to a certain degree of covariate shift; the instances in the training and validation/test sets may follow different distributions even though their distributions conditional on the label are the same. This causes difficulty for all learning models without covariate shift adaptation.
\begin{figure}[H]
\centering
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/MNIST_folds/mnist_importance_1}
\end{subfigure}
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/MNIST_folds/mnist_8_importance_1}
\end{subfigure}
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/MNIST_folds/mnist_3_importance_1}
\end{subfigure}
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/MNIST_folds/mnist_importance_2}
\end{subfigure}
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/MNIST_folds/mnist_8_importance_2}
\end{subfigure}
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/MNIST_folds/mnist_3_importance_2}
\end{subfigure}
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/MNIST_folds/mnist_importance_3}
\end{subfigure}
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/MNIST_folds/mnist_8_importance_3}
\end{subfigure}
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/MNIST_folds/mnist_3_importance_3}
\end{subfigure}
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/MNIST_folds/mnist_importance_4}
\end{subfigure}
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/MNIST_folds/mnist_8_importance_4}
\end{subfigure}
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/MNIST_folds/mnist_3_importance_4}
\end{subfigure}
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/MNIST_folds/mnist_importance_1_full}
\end{subfigure}
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/MNIST_folds/mnist_8_importance_1_full}
\end{subfigure}
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/MNIST_folds/mnist_3_importance_1_full}
\end{subfigure}
\caption{{\bf MNIST Subset}. Feature importance maps ($d=784, s=85$) generated with the method described in Section \ref{sect:visual-FIM}. From top to bottom, first 4 rows correspond to feature importance maps achieved from different folds. The bottom row is the full feature importance map corresponding the feature importance map shown in Figure 4 of the main text. }
\label{fig:mnist_other_folds}
\end{figure}
\subsection{Feature Importance Map}
\label{sect:visual-FIM}
Due to the limited space in the main text, we demonstrate only one feature importance map on the MNIST subset. Below, we show more feature importance maps achieved from other folds on the MNIST subset and those yielded by our approach on the Yale dataset.
To superimpose the feature importance maps on the background image (the mean of the raw images), we apply the following method. A blank image is first created in the HSV (hue-saturation-value) colour format. The hue, in the range [0, 270], encodes the importance; the saturation is set to 1.0; and the mean background image from the dataset is encoded in the value channel. Since the feature importance ranking (FIR) scores are normalised, no negative FIR scores are shown, ensuring that unselected features keep the background colour.
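A minimal sketch of this visualisation is given below; the helper is our own (not the exact plotting code), and it assumes both the normalised FIR scores and the mean background image are scaled to $[0,1]$:
\begin{verbatim}
import numpy as np
from matplotlib.colors import hsv_to_rgb

def importance_map_rgb(importance, background, max_hue_deg=270.0):
    # importance: normalised FIR scores in [0, 1] (negatives clipped to 0)
    # background: mean of the raw images, scaled to [0, 1]
    imp = np.clip(importance, 0.0, 1.0)
    hsv = np.stack([imp * max_hue_deg / 360.0,   # hue encodes importance
                    np.ones_like(imp),           # saturation fixed to 1.0
                    background], axis=-1)        # value carries the background
    return hsv_to_rgb(hsv)
\end{verbatim}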
Figure \ref{fig:mnist_other_folds} shows the different feature importance maps achieved from the other 4 folds. As our FIR approach described in Section 3.4 of the main text measures the FIR scores based on the input gradient, it can obtain the input gradient for all the features regardless of whether a feature is selected or not. Hence, we can also generate a full feature importance map. It is observed from Figure \ref{fig:mnist_other_folds} that the feature importance maps achieved from different folds are very consistent, and the full feature importance map provides a clearer picture in terms of explainability/interpretability.
\begin{figure}[H]
\centering
\begin{subfigure}{0.195\textwidth}
\centering
\includegraphics[angle=270,origin=c,width=\linewidth]{figure/yale/raw_10}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\centering
\includegraphics[angle=270,origin=c,width=\linewidth]{figure/yale/raw_30}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\centering
\includegraphics[angle=270,origin=c,width=\linewidth]{figure/yale/raw_50}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\centering
\includegraphics[angle=270,origin=c,width=\linewidth]{figure/yale/raw_70}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\centering
\includegraphics[angle=270,origin=c,width=\linewidth]{figure/yale/raw_90}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\centering
\includegraphics[angle=270,origin=c,width=\linewidth]{figure/yale/face_10}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\centering
\includegraphics[angle=270,origin=c,width=\linewidth]{figure/yale/face_30}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\centering
\includegraphics[angle=270,origin=c,width=\linewidth]{figure/yale/face_50}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\centering
\includegraphics[angle=270,origin=c,width=\linewidth]{figure/yale/face_70}
\end{subfigure}
\begin{subfigure}{0.195\textwidth}
\centering
\includegraphics[angle=270,origin=c,width=\linewidth]{figure/yale/face_90}
\end{subfigure}
\caption{{\bf Yale Dataset}. Feature importance maps achieved with different subset mask sizes: $s=10,30,50,70,90$ (from left to right) out of $d=32 \times 32$. The second row corresponds to the images generated by superimposing the feature importance maps onto the background image, i.e., the mean face image averaged over 11 images collected from an individual.}
\label{fig:yale_faces}
\end{figure}
As Yale is a facial image dataset, we can also illustrate the feature importance maps in Figure \ref{fig:yale_faces} for visual inspection. The visual inspection reveals that increasing the mask size $s$ results in a less clear visual representation of feature importance. In comparison, the best-performing mask size of 30 clearly selects several meaningful and discriminative features, e.g., pixels near the lips, nose and eyes, and ranks their importance properly, as shown in the 2nd column of Figure \ref{fig:yale_faces}. As this dataset has limited instances (7 training examples/class on average), we reckon that the use of a large subset mask size is likely to cause overfitting, as revealed by the feature importance maps shown in Figure \ref{fig:yale_faces}.
\begin{figure}[ht]
\centering
\fbox {
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/tensorboard_charts_dfs/epoch_loss}
\caption{selector-loss}
\vspace*{4mm}
\end{subfigure}
}
\fbox {
\centering
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=1.02\linewidth]{figure/tensorboard_charts_dfs/epoch_train_loss}
\caption{operator-loss-train}
\vspace*{4mm}
\end{subfigure}
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=1.02\linewidth]{figure/tensorboard_charts_dfs/epoch_train_m_opt_loss}
\caption{operator-loss-train(${\pmb m}_{opt}$)}
\vspace*{4mm}
\end{subfigure}
\hspace*{1mm}
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=1.01\linewidth]{figure/tensorboard_charts_dfs/epoch_train_categorical_accuracy}
\caption{operator-ACC-train}
\vspace*{4mm}
\end{subfigure}
}
\fbox {
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figure/tensorboard_charts_dfs/epoch_val_loss}
\caption{operator-loss-val}
\end{subfigure}
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figure/tensorboard_charts_dfs/epoch_val_m_opt_loss}
\caption{operator-loss-val(${\pmb m}_{opt}$)}
\end{subfigure}
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figure/tensorboard_charts_dfs/epoch_val_categorical_accuracy}
\caption{operator-ACC-val}
\end{subfigure}
}
\caption{\textbf{Enhancer-Promoter Dataset}. Evolution of the operator and the selector losses in Phase II ($d=102, s=35$). The \texttt{x-axis} corresponds to the number of batches and the \texttt{y-axis} refers to the loss statistics over 5 folds.
(a) The selector loss.
(b) The operator loss on the training set.
(c) The operator loss on the training set with ${\pmb m}_{opt}$ only.
(d) The classification accuracy evaluated on the training set.
(e) The operator loss on the validation set.
(f) The operator loss on the validation set with ${\pmb m}_{opt}$ only.
(g) The classification accuracy evaluated on the validation set.
Note that Phase II starts when the operator net has been trained for 10,000 batches in Phase I. The spikes at the maximum batch in (e)-(g) correspond to the results evaluated on the test set with the trained operator net upon the completion of the alternate learning.}
\label{fig:training}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=.52\linewidth]{figure/CellLine/GM_200_results}
\caption{Classification accuracies yielded by different methods
on the Enhancer-Promoter dataset: cell line GM12878 (200 bp). The shaded regions refer to the performance range between the minimum and the maximum accuracies over 5 folds. While DFS and RF yield only one result with all the 102 features, the other methods produce results at different subset sizes, $s=15, 25, 35, 45, 55$. }
\label{fig:enhance_promoter_perf}
\end{figure}
\section{More Results on Real-World Data}
\label{sect:enhance-promoter}
In this section, we report more results on the enhancer-promoter dataset and the UCI gene expression cancer RNA-Seq dataset.
\subsection{Results on Enhancer-Promoter Dataset}
To evaluate our approach on real-world data, we adopt the GM12878 cell line (a lymphoblastoid cell line) dataset \cite{Li2016DFS}.
This is the dataset for which the \emph{deep feature selection} (DFS) method \cite{Li2016DFS} was specifically proposed. Therefore, we follow their setting by using only the annotated DNA regions of the GM12878 cell line (200 bp).
In the original dataset, there are 7 classes and 102 features; each class has 3,000 instances apart from one that has only 2,878 instances. The 7 classes are \emph{\textbf{active promoter, active enhancer}, active exon, inactive promoter, inactive enhancer, inactive exon} and \emph{unknown regions}. The main interest in medicine is classifying the function of DNA sequences into enhancer, promoter and background, since non-coding gene regulatory enhancers are essential to transcription in mammalian cells \cite{anderson2014enhancer}.
Following the suggestion in \citep{anderson2014enhancer, Li2016DFS}, we merge inactive enhancers, inactive promoters, active exons, inactive exons and unknown regions into a background class. Thus, we have a 3-class imbalanced dataset, as the background class has roughly 5 times more instances than the other two classes: active promoter and active enhancer. We follow a preprocessing method consisting of two steps: 1) making the dataset balanced by down-sampling so that each of the 3 classes has 2,878 samples; 2) overcoming the natural skewness of the biological data by taking the logarithm of the input. To avoid the zero-value issue in the logarithm, we add a small positive number to each feature, $x \leftarrow x + 0.01$, in our experiments. Note that step 2) is not described in the DFS paper, but we believe that this is important for such a data distribution.
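A minimal sketch of these two preprocessing steps is given below, assuming that \texttt{X} (instances $\times$ 102 features) and the label vector \texttt{y} have already been loaded from the repository files; the random seed and helper names are illustrative.
\begin{verbatim}
import numpy as np

def preprocess(X, y, n_per_class=2878, eps=0.01, seed=0):
    rng = np.random.default_rng(seed)
    # 1) balance the 3 classes by down-sampling each to n_per_class samples
    keep = np.concatenate([rng.choice(np.flatnonzero(y == c), size=n_per_class,
                                      replace=False) for c in np.unique(y)])
    X_bal, y_bal = X[keep], y[keep]
    # 2) reduce the natural skewness by a log transform, x <- log(x + 0.01)
    X_bal = np.log(X_bal + eps)
    return X_bal, y_bal
\end{verbatim}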
It is also worth clarifying that we see some discrepancies between the data presented in the authors' repository\footnote{[online]: \url{https://github.com/yifeng-li/DECRES/tree/master/data}\label{fn:en-pro}}
and their article \cite{Li2016DFS}. Two main differences include 1) 102 features in the repository but 92 features stated in the article; 2) at least 2,878 instances/class in the repository but only 2,156 instances mentioned in the article. While we use the dataset in their repository, we have done our best to keep all the settings suggested in their article for a fair comparison in our experiments.
Due to the limited space in the main text, we only report the result of our approach for a subset mask size of $s=35$. Here, we report more results of our approach and the other methods used in the comparative study on this dataset.
We first illustrate the learning behavior of our dual-net model on this real-world dataset in Figure \ref{fig:training}. As shown in Figure \ref{fig:training}(a), the averaged selector loss exhibits the typical behavior described in Section \ref{sect:behavior}. The loss fluctuation and the loss reduction trend in the loss evolution vividly exhibit how the stochastic local search strategy works in finding optimal subset masks. As shown in Figures \ref{fig:training}(b)-\ref{fig:training}(g), the learning progresses steadily, as evident in the evolution of the averaged operator losses and the averaged classification accuracies on the training and the validation sets. It is also clearly seen in Figures \ref{fig:training}(b)-\ref{fig:training}(g) that overfitting occurs once the optimal subset mask is identified at around 10.5k batches. Once again, this observation provides solid evidence supporting early stopping based on the operator training/validation losses measured on the optimal subset masks (${\pmb m}_{opt}$). Furthermore, it is also seen in Figures \ref{fig:training}(e)-\ref{fig:training}(g) (at the maximum batches) that the averaged test losses and the averaged classification accuracies yielded by the trained operator net on 5 folds are superior to their counterparts on the validation set. Once again, this suggests that our alternate learning leads to favorable generalization performance, although an earlier stopping may yield a better accuracy.
Apart from the comparative study specified in the main text, we have conducted further experiments on this dataset for a comparison to two recent state-of-the-art methods
\cite{linden2019explanationbb,skrlj2020fiesan}
that obtain the populationwise FIR by aggregating the instancewise FIR. In \cite{linden2019explanationbb}, the global aggregation method workable on this dataset is the homogeneity-weighted importance, which is the same as the global LIME importance proposed in \cite{ribeiro2016should}. In our experiment, we use an MLP of
the architecture 102-300-200-50-3 and \texttt{n\_samples=500} to obtain the LIME importance on the validation set \cite{ribeiro2016should}. In the SAN
\cite{skrlj2020fiesan}, the populationwise FIR is obtained via either
instance-level aggregation (SAN\_AGGR) or a global attention layer (SAN\_GLOBAL). In our experiments, we use the same settings suggested by the authors \cite{skrlj2020fiesan}\footnote{We do this experiment with the authors' code: \url{https://gitlab.com/skblaz/attentionrank}.} with the following hyperparameters: $k=1$, \texttt{p\_dropout=20\%}, \texttt{epochs=32}, \texttt{batch\_size=32}, \texttt{n\_1\_1=128} (number of hidden neurons in the SAN). In terms of feature selection, both methods fall into the filtering category. Therefore, we employ an MLP of the architecture s-300-200-50-3 as a classifier trained on the selected feature subsets of $s=15,25,35,45,55$, respectively, for this 3-class classification task.
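The following sketch illustrates how such filter-style baselines are evaluated: an MLP of architecture s-300-200-50-3 is trained on the $s$ features chosen by a ranking method. It uses \texttt{scikit-learn} for brevity; the training hyperparameters beyond the architecture, and the variable names, are illustrative rather than the exact settings of our experiments.
\begin{verbatim}
import numpy as np
from sklearn.neural_network import MLPClassifier

def evaluate_subset(selected_idx, X_train, y_train, X_test, y_test):
    # MLP of architecture s-300-200-50-3, where s = len(selected_idx)
    clf = MLPClassifier(hidden_layer_sizes=(300, 200, 50), max_iter=500)
    clf.fit(X_train[:, selected_idx], y_train)
    return clf.score(X_test[:, selected_idx], y_test)

# e.g., accuracy for the top-35 features of some ranking `scores`:
# acc = evaluate_subset(np.argsort(scores)[-35:], X_train, y_train, X_test, y_test)
\end{verbatim}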
\begin{figure}[h]
\centering
\begin{subfigure}{1.0\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/CellLine/dfs_lime_new_200.png}
\end{subfigure}
\begin{subfigure}{1.0\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/CellLine/dfs_san_global_new_200.png}
\end{subfigure}
\begin{subfigure}{1.0\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/CellLine/dfs_san_aggr_new_200.png}
\end{subfigure}
\begin{subfigure}{1.0\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/CellLine/ours_s55.png}
\end{subfigure}
\caption{Feature importance ranking (FIR) scores yielded by LIME, SAN and ours for top 55 features ($s=55$) on the Enhancer-Promoter dataset: GM12878 Cell line (200 bp).}
\label{fig:enhance-promoter1}
\end{figure}
We report the accuracies yielded by 6 different methods for different subset sizes. As shown in Figure \ref{fig:enhance_promoter_perf}, it is evident that our approach yields slightly better accuracies than DFS \cite{Li2016DFS}, a method especially proposed for this biological dataset, when the subset size of selected features is larger than 15. Also, our approach outperforms RFE \cite{guyon2002RFE}, a state-of-the-art feature selection method especially effective on gene selection for cancer classification, and RF \cite{breiman2001RF}, a well-known off-the-shelf ensemble learning model. Furthermore, it is evident from Figure \ref{fig:enhance_promoter_perf} that ours, along with DFS, also outperforms those methods that aggregate the instancewise FIR to obtain a populationwise one at all subset sizes ranging from 15 to 55.
For feature importance ranking (FIR), we show the FIR scores yielded by the two instancewise aggregation-based methods and ours for $s=55$ in Figure \ref{fig:enhance-promoter1}. It is observed that the two methods and ours yield different FIR results, and different settings in the SAN also result in different FIR for the top 55 features. Due to the lack of ground truth, it is difficult to draw an affirmative conclusion, but the experimental results suggest that populationwise FIR is an extremely challenging problem for real-world data.
We further show the FIR scores for the top-40 features produced by DFS, RF, RFE and ours in Figure \ref{fig:enhance-promoter}. The FIR scores of the RFE and the RF are generated based on the RFE feature importance estimator \cite{guyon2002RFE} and the out-of-bag errors \cite{breiman2001RF}, respectively. The FIR score of the DFS is obtained based on the magnitude of the shrunk weights between the input and the first hidden layer, as introduced in the DFS method \cite{Li2016DFS}. From Figure \ref{fig:enhance-promoter}, it is observed that our approach yields relatively consistent FIR results when different subset sizes are used, given that the importance ranking order of the top features only varies by one or two positions. Also, our approach is the only one to consistently rank ``RNA'' and two important genes, ``ATF2'' and ``ATF3'', among the most important features regardless of feature subset sizes. In comparison, the DFS also selects those two genes but does not rank them on the top. On the other hand, the RFE chooses other genes, ``RAD21'', ``PGISLANDS'' and ``H3K4ME3'', as the most important features irrespective of feature subset sizes. It can also be seen in Figure \ref{fig:enhance-promoter} that the DFS and ours, two deep learning models, rank the importance of features similarly but differently from the RFE and the RF, which yield similar FIR scores. Our experimental results on this real-world dataset suggest that deep learning models may lead to different results from the existing state-of-the-art and off-the-shelf machine learning models for FIR. Thus, learning models of different types should be considered simultaneously and their results can be fused at the discretion of domain experts in such real-world applications.
To evaluate the efficiency, we record the average training time on this dataset in terms of 5-fold cross-validation. Our experiments are conducted on a server with the following specification and environment: Intel Core i7-5930K CPU (3.50GHz), NVIDIA GeForce GTX TITAN X GPU, 64 GB RAM and CentOS 7. In summary, our algorithm takes around 1,100 sec, while RF, SAN, DFS, LIME and RFE take around 2.5, 35, 90, 540 and 1,700 sec, respectively. The dual-net architecture along with the alternate learning is responsible for the high computational load of our approach.
\begin{figure}[H]
\centering
\begin{subfigure}{1.0\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/CellLine/dfs_dfs_new_200.png}
\end{subfigure}
\begin{subfigure}{1.0\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/CellLine/dfs_rf_new_200.png}
\end{subfigure}
\begin{subfigure}{1.0\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/CellLine/dfs_SM/rfe_s15.png}
\end{subfigure}
\begin{subfigure}{1.0\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/CellLine/dfs_SM/rfe_s25.png}
\end{subfigure}
\begin{subfigure}{1.0\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/CellLine/dfs_SM/rfe_s35.png}
\end{subfigure}
\begin{subfigure}{1.0\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/CellLine/dfs_SM/rfe_s45.png}
\end{subfigure}
\begin{subfigure}{1.0\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/CellLine/dfs_SM/rfe_s55.png}
\end{subfigure}
\begin{subfigure}{1.0\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/CellLine/dfs_SM/ours_s15.png}
\end{subfigure}
\begin{subfigure}{1.0\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/CellLine/dfs_SM/ours_s25.png}
\end{subfigure}
\begin{subfigure}{1.0\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/CellLine/dfs_SM/ours_s35.png}
\end{subfigure}
\begin{subfigure}{1.0\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/CellLine/dfs_SM/ours_s45.png}
\end{subfigure}
\begin{subfigure}{1.0\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/CellLine/dfs_SM/ours_s55.png}
\end{subfigure}
\caption{Accuracy and feature importance ranking (FIR) scores yielded by different methods on the Enhancer-Promoter dataset: GM12878 Cell line (200 bp). While DFS and RF yield only one result with all the 102 features, RFE and ours produce results at different subset sizes, $s=15, 25, 35, 45, 55$. Note that the results yielded by RFE and ours for $\textcolor{red}{s=35}$ above are deliberately not labelled with the subset size, to indicate that they have been reported in the main text.}
\label{fig:enhance-promoter}
\end{figure}
\newpage
\subsection{Results on RNA-seq Data}
Apart from the comparative study on the Enhancer-Promoter dataset, we have further applied our approach to the UCI gene expression cancer RNA-Seq data set\footnote{[online]: \url{https://archive.ics.uci.edu/ml/datasets/gene+expression+cancer+RNA-Seq}}, to evaluate our approach on a dataset of many features.
The gene expression cancer RNA-Seq dataset is part of The Cancer Genome Atlas Pan-Cancer Analysis Project, which maintains the original data.
Gene expression data are composed of DNA microarray and RNA-Seq data, and their analysis facilitates the clarification of biological mechanisms and the development of drugs. In comparison to hybridization-based microarray technology,
RNA-Seq covers a larger range of expression levels and contains more information.
The dataset is a random extraction of gene expression profiles of patients with
five different types of tumors: BRCA (breast), KIRC
(kidney), COAD (colon), LUAD (lung) and PRAD (prostate).
The dataset contains 801 samples, each of which has 20,531 biological features or genes. The data set is imbalanced and there are 300, 146, 78, 141 and 136 samples for BRCA, KIRC, COAD, LUAD and PRAD, respectively.
Unlike other methods, e.g., \cite{GARCIADIAZ20201916}, we do not pre-process the imbalanced data in our experiment apart from the removal of 267 constant features. In other words, we use only 20,264 features of each sample to train our dual-net model. All the data are standardised to zero mean and unit standard deviation. The dataset is randomly split into two subsets, training and test, with 600 and 201 samples, respectively. Four-fold cross-validation on the training subset is used for parameter estimation and hyperparameter tuning of our dual-net model. To make a fair comparison to the best performer on this dataset as reported in \cite{GARCIADIAZ20201916}, we use the identical setting of $s=49$ in our experiment. The information on the dual-net architecture and the optimal hyperparameter values used in this experiment is provided in Tables \ref{tab:architectures} and \ref{tab:hyperparameters}, respectively.
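A minimal sketch of this preparation (constant-feature removal, standardisation and the 600/201 split) is shown below; the array names and the random seed are assumptions, and the exact split and cross-validation folds used in our experiments may differ.
\begin{verbatim}
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

def prepare_rna_seq(X, y, seed=0):
    X = X[:, X.std(axis=0) > 0]            # drop the constant features
    X = StandardScaler().fit_transform(X)  # zero mean, unit standard deviation
    return train_test_split(X, y, train_size=600, test_size=201,
                            random_state=seed)
\end{verbatim}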
\begin{table}[h]
\begin{center}
\caption{Accuracy yielded by different methods on RNA-seq dataset (adapted from Table 7 in \cite{GARCIADIAZ20201916}).}
\makebox[\textwidth][c]{
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Method & \# Samples & \# Features & \# Classes & \# Selected Features & \# Classifiers & Accuracy \\
\hline
\cite{DING2005} & 96 & 4026 & 9 & $<$60 & 1 & 0.9730\\
\cite{MAHATA2007775} & 62 & 6000 & 2 & 15 & 1 & 0.9677\\
\cite{LIU20102763} & 97 & 24481 & 2 & 7& 7 & 0.9381\\
\cite{LIU20102763} & 102 & 12600 & 2 & 4 & 7 &0.9706 \\
\cite{BEST2015666} & 175 & 1072 & 2 & - & 110 & 0.9500 \\
\cite{PIAO201739} & 215 & 1047 & 4 & - & 20 & 0.9860 \\
\cite{SAYGILI2018} & 569 & 32& 2& 24 & 1& 0.9877 \\
\hline
\cite{GARCIADIAZ20201916} & 801 & 20531 & 5 & 49 & 5 & 0.9881 \\
\textbf{Ours} & \textbf{801}& \textbf{20531} & \textbf{5} & \textbf{49} & \textbf{1} &\textbf{0.9938} \\
\hline
\end{tabular}
}
\label{tab:rna_seq_table}
\end{center}
\end{table}
Table \ref{tab:rna_seq_table} shows the existing results of several feature selection methods \cite{GARCIADIAZ20201916,DING2005,MAHATA2007775,LIU20102763,BEST2015666,PIAO201739,SAYGILI2018}
on this dataset with various settings, although most of the existing methods do not work on the entire dataset. From Table \ref{tab:rna_seq_table}, it is clearly seen that, under the same settings, our approach outperforms the best performer on this dataset in the literature.
The experimental result on this nontrivial real-world dataset demonstrates that our approach works well for a dataset with many features as long as there are enough training examples, as required by deep learning. Thus, we firmly believe that our approach will be applicable to large datasets with many features, e.g., images with millions of pixels. We shall look into the scalability of our approach in our ongoing work.
In summary, our experimental results manifest that, leveraged with deep learning, our approach outperforms a number of state-of-the-art FIR and feature selection methods on two biological datasets. This suggests that our approach would be a strong candidate for feature selection and feature importance ranking in real-world biological applications.
\newpage
\section{Pseudo Code}
\label{sect:pseudo}
In this section, we describe the implementation of our alternate learning algorithm used to train our proposed dual-net neural architecture for feature importance ranking underlying feature selection. The pseudo code\footnote{Our source code and all the related information regarding the experimental settings are available online: \url{https://github.com/maksym33/FeatureImportanceDL}.} in
Algorithm \ref{algo:main-steps} carries out the alternate learning algorithm as described in Sect. 3.3 of the main text. The pseudo code in Algorithm \ref{algo:opt-mask} implements a subroutine used in line 10 of Algorithm \ref{algo:main-steps} to generate an optimal feature subset in the current step as described in Phase II.A (c.f. Sect. 3.3 in the main text). In Algorithm \ref{algo:main-steps}, lines 1-7 correspond to Phase I, lines 9-12 carry out Phase II.A and lines 13-18 implement Phase II.B. Phases II.A and II.B alternate until the early stopping condition is satisfied, as implemented by the loop from line 8 to line 23.
\vspace*{5mm}
\begin{algorithm}[ht]
\caption{Alternate Learning Algorithm}
\label{algo:main-steps}
\begin{algorithmic}[1]
\REQUIRE loss function of operator net, $ l(\pmb x \otimes \pmb m, \pmb y; \theta)$, selector net, $f_{S}(\varphi,\pmb m)$
\REQUIRE feature set/subset size $d$ and $s$, fraction of random masks $f$, perturbation factor $s_p$
\REQUIRE number of optimal subset candidates $|\mathcal{M}'|$, number of batches $E_1$ in Phase I.
\REQUIRE mask weight vector used in the weighted selector loss $\pmb w_{S}$ of $|\mathcal{M}'|$ elements.
\notrainglecomment{
}
\FOR{$e \gets 0$ to $E_1$}
\STATE ${\cal M}_1'= \big \{ {\pmb m}_i| {\pmb m}_i = {\rm Random}({\cal M}, s) \big \}_{i=1}^{|{\cal M}'|}$ \COMMENT{create a random batch of masks}
\STATE ${\cal L}_O ({\cal D}, {\cal M}_1'; \theta) = \frac 1 {|{\cal M}_1'||{\cal D}|}
 \sum_{\pmb m \in {\cal M}_1'} \sum_{(\pmb x, \pmb y) \in {\cal D}} l(\pmb x \otimes \pmb m,\pmb y; \theta)$ \COMMENT {calculate the operator loss}
\STATE $\theta'' \triangleq \theta' - \eta \nabla_{\theta}
{\cal L}_O ({\cal D}, {\cal M}_1'; \theta)|_{\theta=\theta'}$ \COMMENT{update the parameters in operator}
\STATE ${\cal L}_S ({\cal M}_1'; \varphi ) = \frac 1 {2|{\cal M}_1'|} \sum_{\pmb m \in {\cal M}_1'}
\Big ( f_S(\varphi; \pmb m) - \frac 1 {|{\cal D}|} \sum_{({\pmb x}, {\pmb y}) \in {\cal D}}
l(\pmb x \otimes \pmb m, \pmb y; \theta) \Big )^2$ \COMMENT{calculate the MSE loss of the selector net}
\STATE $\varphi'' \triangleq \varphi' - \eta \nabla_{\varphi}
{\cal L}_S ({\cal M}_1'; \varphi)|_{\varphi=\varphi'}$
\COMMENT{update the weights in selector}
\ENDFOR
\FOR{$t \gets 0$ to $\infty$}
\STATE ${\cal M}_{t+1,1}'= \big \{ {\pmb m}_i| {\pmb m}_i = {\rm Random}({\cal M}, s) \big \}_{i=1}^{(1-f)|{\cal M}'|}$ \COMMENT{create a random batch of masks}
\STATE ${\pmb m}_{t+1,opt} \Leftarrow \mathrm{\textit{generateOptimalMask}} (f_{S}(\varphi_t))$ \COMMENT{implemented in Algorithm \ref{algo:opt-mask} }
\STATE ${\cal M}_{t+1,2}' = \{{\pmb m}_{t,best}\} \cup
\{{\pmb m}_{t+1,opt}\} \cup \big \{{\pmb m}_i|{\pmb m}_i={\rm Perturb}({\pmb m}_{t+1,opt},s_p) \big \}_{i=1}^{f|{\cal M}'|-2}$ \COMMENT{collect the best mask from last step, the current optimal mask and those perturbed optimal masks}
\STATE ${\cal M}_{t+1}' = {\cal M}_{t+1,1}' \cup {\cal M}_{t+1,2}'$ \COMMENT{form new subset candidates for operator}
\STATE ${\cal L}_O ({\cal D}, {\cal M}_{t+1}'; \theta) = \frac 1 {|{\cal M}_{t+1}'||{\cal D}|}
\sum_{\pmb m \in {\cal M}_{t+1}'} \sum_{(\pmb x, \pmb y) \in {\cal D}} l(\pmb x \otimes \pmb m, \pmb y; \theta)$ \COMMENT {calculate the operator loss}
\STATE $\theta_{t+1} \triangleq \theta_t - \eta \nabla_{\theta}
{\cal L}_O ({\cal D}, {\cal M}_{t+1}'; \theta)|_{\theta=\theta_t}$ \COMMENT{update the parameters in the operator}
\STATE ${\cal L}_S ({\cal M}_{t+1}'; \varphi ) = \frac 1 {2|{\cal M}_{t+1}'|} \sum_{i=1}^{|{\cal M}_{t+1}'|}
{w_{S,i}} \Big ( f_S(\varphi; \pmb m_i) - \frac 1 {|{\cal D}|} \sum_{({\pmb x}, {\pmb y}) \in {\cal D}}
l(\pmb x \otimes \pmb m_i, \pmb y; \theta) \Big )^2$ \COMMENT{calculate the weighted MSE loss of the selector}
\STATE $\varphi_{t+1} \triangleq \varphi_t - \eta \nabla_{\varphi}
{\cal L}_S ({\cal M}_{t+1}'; \varphi)|_{\varphi=\varphi_t}$ \COMMENT{update the parameters in the selector}
\STATE $\pmb m_{t+1,best} = \text{argmin}_{\pmb m \in {\cal M}_{t+1}'}\Big( \sum_{({\pmb x}, {\pmb y}) \in {\cal D}}
l(\pmb x \otimes \pmb m, \pmb y; \theta)\Big)$ \COMMENT{record the best performed mask}
\STATE $\mathcal{L}_{t,m_{opt}} \leftarrow \big(\sum_{({\pmb x}, {\pmb y}) \in {\cal D}} l(\pmb x \otimes \pmb m, \pmb y; \theta) \big)[(1-f)+1]$ \COMMENT{record loss of the ${\pmb m}_{opt}$, $(1-f)+1$ should be its index}
\IF{$checkEarlyStopping(\mathcal{L}_{t,m_{opt}})$}
\STATE{$\theta_t = restoreBestWeights()$} \COMMENT{stopping condition is met}
\STATE{\bf break}
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[t]
\caption{Generation of Optimal Feature Subset}
\label{algo:opt-mask}
\begin{algorithmic}[1]
\REQUIRE selector net $f_{S}(\varphi,\pmb{m})$
\REQUIRE input feature set size $d$, selected subset size $s$
\STATE $\pmb m_0 \leftarrow (\frac{1}{2},\frac{1}{2},...,\frac{1}{2}) $
\STATE $\pmb \delta_{m_0} = \frac{\partial f_{S}(\varphi,\pmb m)}{\partial \pmb m}|_{\pmb m=\pmb m_0}$ \COMMENT{calculate input gradient}
\STATE $\pmb m_{opt} \leftarrow (0,0,...,0) $
\STATE $(\pmb i_{unmasked},\pmb i_{masked} )\leftarrow \mathrm{argsort}(\pmb \delta_{m_0})$ \COMMENT{determine indexes with 1s (unmasked, top $s$ biggest gradients) and 0s (masked, the smallest gradients)}
\STATE $\pmb m_{opt}[\pmb i_{unmasked}] \leftarrow (1,1,...,1) $\COMMENT{complete creating ${\pmb m}_{opt}$}
\STATE $\pmb \delta_{m_{opt}} = \frac{\partial f_{S}(\varphi,\pmb m)}{\partial \pmb m}|_{\pmb m=\pmb m_{opt}}$ \COMMENT{recalculate the gradient at $\pmb m=\pmb m_{opt}$} \label{step:m_opt_validation}
\STATE $i_{min} \leftarrow \mathrm{argmin}(\pmb \delta_{m_{opt}}[i_{unmasked}])$ \COMMENT{index of minimum unmasked gradient}
\STATE $i_{max} \leftarrow \mathrm{argmax}(\pmb \delta_{m_{opt}}[i_{masked}])$ \COMMENT{index of maximum masked gradient}
\STATE $\pmb i_{neg} \leftarrow \mathrm{argwhere}(\pmb \delta_{m_{opt}}[i_{unmasked}] <0)$\COMMENT{create a set of unmasked indices that have negative gradients}
\FOR{$i$ in $\pmb i_{neg}$}
\STATE $\pmb m'_{opt} \leftarrow \pmb m_{opt} $ \label{step:m_opt_step_1}\COMMENT{\textbf{Validation step 1}}
\STATE $\pmb m'_{opt}[i] \leftarrow 0 $ \COMMENT{mask the negative, previously unmasked, input}
\STATE $\pmb m'_{opt}[i_{max}] \leftarrow 1 $ \COMMENT{unmask the biggest (gradient-wise), previously masked input}
\IF{$f_{S}(\varphi,\pmb m'_{opt})<f_{S}(\varphi,\pmb m_{opt})$}
\STATE $\pmb m_{opt}\leftarrow \pmb m'_{opt}$ \COMMENT{replace $\pmb m_{opt}$ and restart the validation}
\STATE recalculate $\pmb i_{unmasked}$ and $\pmb i_{masked}$
\STATE goto step \ref{step:m_opt_validation}
\ENDIF
\ENDFOR
\STATE $\pmb m''_{opt} \leftarrow \pmb m_{opt} $ \label{step:m_opt_step_2}\COMMENT{\textbf{Validation step 2}}
\STATE $\pmb m''_{opt}[i_{min}] \leftarrow 0 $ \COMMENT{mask the smallest (gradient-wise), previously unmasked, input}
\STATE $\pmb m''_{opt}[i_{max}] \leftarrow 1 $ \COMMENT{unmask the biggest (gradient-wise), previously masked input}
\IF{$f_{S}(\varphi,\pmb m''_{opt})<f_{S}(\varphi,\pmb m_{opt})$}
\STATE $\pmb m_{opt}\leftarrow \pmb m''_{opt}$ \COMMENT{replace $\pmb m_{opt}$ and restart the validation}
\STATE recalculate $\pmb i_{unmasked}$ and $\pmb i_{masked}$
\STATE goto step \ref{step:m_opt_validation}
\ENDIF
\end{algorithmic}
\end{algorithm}
\newpage
\small
\bibliographystyle{unsrt}
\section{Introduction}
\label{sect:intro}
In machine learning, feature importance ranking (FIR) refers to the task of measuring the contributions of individual input features (variables) to the performance of a supervised learning model. FIR has become one of the powerful tools in explainable/interpretable AI \cite{samek2017explainable} to facilitate the understanding of decision-making by a learning system and the discovery of key factors in a specific domain, e.g., in medicine, which genes are likely the main causes of a cancer \cite{guyon2002RFE}.
Due to the existence of features that are correlated/dependent or irrelevant to targets in high-dimensional real data, feature selection \cite{guyon2003SFintro} is often employed to address the well-known curse-of-dimensionality challenge and to improve the generalization of a learning system, where a subset of optimal features is selected in terms of pre-defined criteria to maximize the performance of the learning system. Feature selection may be conducted at either the population or the instance level; populationwise methods find an optimal feature subset collectively for all the instances in a population, while instancewise ones tend to uncover a subset of salient features specific to a single instance. In practice, FIR is always closely associated with feature selection by ranking the importance of the features in an optimal subset, and it can also be used as a proxy for feature selection, e.g., \cite{guyon2002RFE,song2007BAHSIC1,song2012BAHSIC2}.
Deep learning has turned out to be extremely powerful in intelligent system development, but its purported ``black box'' nature makes it difficult to apply to tasks demanding explainability/interpretability. Recently, FIR for deep learning has become an active research area where most works focus on instancewise FIR \cite{hooker2019benchmark} and only a few works exist for populationwise FIR/feature selection, e.g., \cite{Li2016DFS}. In a populationwise scenario, feature selection needs to find an optimum in detecting any functional dependence between input data and targets, which is NP-hard in general \cite{weston2003NP-hard-FS}. The high degree of nonlinearity in deep learning exacerbates this combinatorial optimization problem.
In this paper, we address a populationwise FIR issue in deep learning: for a feature set, finding an optimal feature subset of a fixed size that maximizes the performance of a deep neural network and ranking the importance of all the features in this optimal subset simultaneously. To tackle this problem, we propose a novel dual-net neural architecture, where an operator net works on a supervised learning task via optimal subset candidates provided by a selector net, which learns to find the optimal feature subset and rank feature importance via the learning performance feedback of the operator. The two nets are trained jointly in an alternate manner. After learning, the selector net is used to find an optimal feature subset and rank feature importance, while the operator net makes predictions based on the optimal feature subset for test data.
A thorough evaluation on synthetic, benchmark and real datasets via a comparative study manifests that our approach leveraged by deep learning outperforms several state-of-the-art FIR and supervised feature selection methods.
\section{Related Work}
\label{sect:related}
In the context of deep learning, there exist three methods for FIR; i.e., \emph{regularization}, \emph{greedy search} and \emph{averaged input gradient}.
The deep feature selection (DFS) \cite{Li2016DFS} was proposed for FIR with the same idea behind the regularized linear models \cite{tibshirani1996LASSO,zou2005Elasticnet}.
The DFS suffers from several issues, e.g., a high computational burden in finding optimal regularization hyper-parameters and vanishing gradients. Moreover, the weight-shrinkage idea \cite{tibshirani1996LASSO,zou2005Elasticnet} may not always work for complex dependence between input features and targets, since the use of shrunk weights as feature importance is theoretically justifiable only for linear models. It seems straightforward to apply a greedy search method, e.g., forward subset selection (FS) \cite{friedman2001els}, to deep learning for FIR. However, this method inevitably incurs extremely high computational cost and may end up with only a sub-optimal result. Finally, some instancewise FIR methods have been converted into populationwise ones, e.g., the averaged input gradient (AvGrad) \cite{hechtlinger2016Inputgradient}, which uses the average of all the saliency maps extracted from individual instances for FIR, and global aggregation \cite{ribeiro2016should,linden2019explanationbb,skrlj2020fiesan}, which uses different aggregation mechanisms to achieve populationwise feature importance ranking. As local explanations are specific to the instance level and often inconsistent with global explanations at the population level, the simple accumulation of instancewise FIR results may not work for populationwise FIR. In contrast, our method overcomes all the limitations stated above.
In machine learning, regularized linear models, e.g., LASSO \cite{tibshirani1996LASSO}, and random forest (RF) \cite{breiman2001RF} are two off-the-shelf FIR methods. Other strong FIR methods include the SVM-based RFE \cite{guyon2002RFE} and the dependence-maximization based BAHSIC \cite{song2007BAHSIC1,song2012BAHSIC2}. In general, such methods may have limited learning capacity for complex tasks in comparison to deep learning and may not always work
for complex dependence between input features and targets. On the other hand, according to the definition in \cite{song2007BAHSIC1,chen2017CCM}, our FIR problem formulation can be treated as a sub-problem of supervised feature selection when the size of an optimal feature subset is pre-specified. To this end, our method is closely related to several strong feature selection methods with the same setting, including those working on mutual information criteria, e.g., mRMR \cite{peng2005mRMR}, and the kernel-based CCM \cite{chen2017CCM}, although such methods do not consider FIR. Leveraged with deep learning, our approach is more effective than the aforementioned FIR and supervised feature selection methods, as manifested in our experiments.
\section{Method}
\label{sect:method}
\subsection{Problem Formulation}
\label{subsect:problem}
Suppose
${\cal D} = \{ {\cal X}, {\cal Y} \}$
is a dataset used for supervised learning. In this dataset, $(\pmb{x}, \pmb{y})$ is a training example, where $\pmb{x} \in {\cal X}$ is a vector of $d$ features and $\pmb{y} \in {\cal Y}$ is its corresponding target. Let ${\pmb m} \in {\cal M}$ denote a $d$-dimensional binary mask vector of $0/1$ elements, where $||{\pmb m}||_0 = s$, $s<d$ and $|{\cal M}|= \binom{d}{s}$. Thus, we can use such a mask vector to indicate a feature subset: $\{{\pmb x} \otimes {\pmb m} \}_{{\pmb x} \in {\cal X}}$, where $\otimes$ denotes the Hadamard product that yields a subset of $s$ features for any instance ${\pmb x} \in {\cal X}$.
Assume that ${\cal Q}({\pmb x}, {\pmb m})$ quantifies the instance-level performance of a learning system trained on ${\cal D}$ via a feature subset, $\{{\pmb x} \otimes {\pmb m} \}_{{\pmb x} \in {\cal X}}$, the \emph{feature importance ranking} (FIR) can then be formulated as follows:
\begin{equation}
\big ( {\pmb m}^*,{\rm Score}({\pmb m}^*) \big ) = \argmax_{{\pmb m} \in {\cal M}}
\sum_{ {\pmb x} \in {\cal X}} {\cal Q}({\pmb x}, {\pmb m}),
\label{eq:fir-def}
\end{equation}
where ${\pmb m}^*$ is the indicator of an optimal feature subset discovered by an FIR algorithm and ${\rm Score}({\pmb m}^*)$ quantifies the importance of all the selected features in this optimal subset.
\begin{figure*}[t!]
\centering
\begin{subfigure}{0.42\textwidth}
\hspace*{-3mm}
\centering
\fbox{\includegraphics[width=\textwidth]{figure/model.pdf}}
\end{subfigure}
\begin{subfigure}{0.549\textwidth}
\centering
\fbox{\includegraphics[width=\textwidth]{figure/update.pdf}}
\end{subfigure}
\caption{Our feature importance ranking model. (a) Dual-net architecture. (b) Parameter update.}
\label{fig:model}
\end{figure*}
Ideally, an FIR approach should be able to: 1) detect any functional dependence between input features and targets;
2) rank the importance of all the selected features to reflect their contributions to the learning performance; and 3) preserve the detected functional dependence and the feature importance ranking in test data.
\subsection{Model Description}
\label{subsect:model}
To tackle the FIR problem stated in Eq.(\ref{eq:fir-def}) effectively with the three criteria described in Sect. \ref{subsect:problem}, we propose a deep learning model of dual nets, \emph{operator} and \emph{selector}, as shown in Fig. \ref{fig:model}(a). The operator net is employed to accomplish a supervised learning task, e.g., classification or regression, on a given feature subset provided by the selector net, while the selector net is designed to learn to find an optimal feature subset based on the performance feedback of the operator net working on optimal feature subset candidates during learning. Both the operator and the selector nets are trained jointly in an alternate manner (c.f. Sect. \ref{subsect:learn}) to reach a synergy for FIR.
Technically, the operator is implemented with a deep neural network parameterized with $\theta$, $f_O(\theta; \pmb x, \pmb m)$, for a given task, e.g., a multi-layer perceptron (MLP) or a convolutional neural network (CNN). This net is trained on ${\cal D}$ based on different feature subsets to learn
$f_O\!: {\cal X} \times {\cal M} \rightarrow {\cal Y}$.
After learning (c.f. Sect. \ref{subsect:learn}), the trained operator net,
$f_O(\theta^*; \pmb x, {\pmb m}^*)$, is applied to the test data for prediction, where $\theta^*$ is the optimal parameters of the operator net and $\pmb m^*$ is generated by the trained selector net (c.f. Sect. \ref{subsect:deployment}).
In our method, the selector is implemented with an MLP parameterized with $\varphi$, $f_S(\varphi; \pmb m)$. As defined in Eq.(\ref{eq:fir-def}), a selected optimal feature subset should maximize the average performance of the operator quantified by ${\cal Q}({\pmb x}, {\pmb m})$ for all $\pmb x \in \cal X$. Thus, we want the selector net to learn to predict the average performance of the operator net on different feature subsets; i.e., $f_S\!\!: {\cal M} \rightarrow \mathbb{R}$. After being trained properly (c.f. Sect. \ref{subsect:learn}), we can use an algorithm working on the trained selector net with the optimal parameters $\varphi^*$, $f_S(\varphi^*; \pmb m)$, to generate an optimal feature subset indicated by ${\pmb m}^*$ and rank feature importance to achieve ${\rm Score}({\pmb m}^*)$ (c.f. Sect. \ref{subsect:deployment}).
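For concreteness, a minimal sketch of this dual-net architecture is given below. We use PyTorch and a single hidden layer per net purely for illustration; the layer sizes are placeholders and not the configurations used in our experiments.
\begin{verbatim}
import torch
import torch.nn as nn

class Operator(nn.Module):       # f_O(theta; x, m): predicts targets from x (*) m
    def __init__(self, d, n_out, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_out))
    def forward(self, x, m):
        return self.net(x * m)   # the Hadamard product selects the feature subset

class Selector(nn.Module):       # f_S(phi; m): predicts the operator's average loss
    def __init__(self, d, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))
    def forward(self, m):
        return self.net(m).squeeze(-1)
\end{verbatim}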
\subsection{Learning Algorithm}
\label{subsect:learn}
In essence, the FIR defined in Eq.(\ref{eq:fir-def}) is a combinatorial optimization problem. According to the no free lunch theorems for optimization \cite{wolpert1997NFL},
no algorithm can perform better than a random strategy in expectation in the setting of combinatorial optimization. Therefore, our learning algorithm is developed by combining learning with a stochastic local search procedure enhanced by injecting noise \cite{mengshoel2008noise} on a small number of candidate feature subsets, ${\cal M}' \subset {\cal M}$, to avoid exhaustive search.
For a training data set, ${\cal D} = \{\cal X, \cal Y \} = \big \{ (\pmb x, \pmb y) \big \}_{\pmb x \in {\cal X}, \pmb y \in {\cal Y}}$, a mask subset, ${\cal M}'$, converts each training example $(\pmb x, \pmb y) \in {\cal D}$ into $|{\cal M}'|$ examples:
$\big \{(\pmb x \otimes \pmb m, \pmb y) \big \}_{\pmb m \in {\cal M}'}$. Thus, the loss functions on ${\cal M}'$ (changing during learning) for the operator and the selector nets are defined respectively as follows:
\begin{subequations}
\begin{equation}
{\cal L}_O ({\cal D}, {\cal M}'; \theta) = \frac 1 {|{\cal M}'||{\cal D}|}
\sum_{\pmb m \in {\cal M}'} \sum_{(\pmb x, \pmb y) \in {\cal D}} l(\pmb x \otimes \pmb m, \pmb y; \theta),
\label{eq:oloss}
\end{equation}
\begin{equation}
{\cal L}_S ({\cal M}'; \varphi ) = \frac 1 {2|{\cal M}'|} \sum_{\pmb m \in {\cal M}'}
\Big ( f_S(\varphi; \pmb m) - \frac 1 {|{\cal D}|} \sum_{({\pmb x}, {\pmb y}) \in {\cal D}}
l(\pmb x \otimes \pmb m, \pmb y; \theta) \Big )^2.
\label{eq:sloss}
\end{equation}
\end{subequations}
Here, $l(\pmb x \otimes \pmb m, \pmb y; \theta)$ is an instance-level cross-entropy/categorical cross-entropy loss for binary/multi-class classification or the mean square error (MSE) loss for regression. In Eq.(\ref{eq:sloss}), we utilize the loss of the operator net, $l(\pmb x \otimes \pmb m, \pmb y; \theta)$, to characterize its learning performance, ${\cal Q}({\pmb x}, {\pmb m})$, since maximizing ${\cal Q}({\pmb x}, {\pmb m})$ is equivalent to minimizing $l(\pmb x \otimes \pmb m, \pmb y; \theta)$.
As described in Sect. \ref{subsect:model}, during learning, the operator net relies on the selector net to provide a subset of masks, ${\cal M}'$, indicating different optimal feature subset candidates, while the selector net requires the performance feedback from the operator net, $l(\pmb x \otimes \pmb m, \pmb y; \theta)$ for all
$\pmb m \in {\cal M}'$. The two nets in our learning model hence have to be trained alternately. Below, we present the main learning steps of our two-phase learning algorithm, while the pseudo code can be found in Sect. D of the supplementary materials.
\paragraph{Phase I: Initial Operator Learning via Exploration.}
From scratch, we start training the operator net by using a small number of random feature subsets for several epochs until it can stably yield different performance on different feature subsets. Technically, in each epoch, we randomly draw a subset of different masks, ${\cal M}_1'$, from $\cal M$; i.e., ${\cal M}_1'= \big \{ {\pmb m}_i| {\pmb m}_i = {\rm Random}({\cal M}, s) \big \}_{i=1}^{|{\cal M}'|}$, where ${\rm Random}({\cal M}, s)$ is a function that randomly draws a $d$-dimensional mask of $s$ one-elements and $d\!-\!s$ zero-elements from $\cal M$. If $\theta$ is trained by stochastic gradient descent (SGD), then it is updated by
$\theta'' \triangleq \theta' - \eta \nabla_{\theta}
{\cal L}_O ({\cal D}, {\cal M}_1'; \theta)|_{\theta=\theta'}$\footnote{Parameters are actually updated on a batch $\cal B$ randomly drawn from $\cal D$, hence $\frac {|\cal D|} {|\cal B|}$ times in an epoch.} where $\eta$ is a learning rate. After $E_1$ epochs, we set $\theta_1 = \theta''(E_1)$ and ${\pmb m}_{1,opt}' = \argmin_{{\pmb m} \in {\cal M}_1'} \sum_{(\pmb x, \pmb y) \in {\cal D}} l(\pmb x \otimes \pmb m, \pmb y; \theta_1)$ to be used at the beginning of Phase II-A;
i.e., $t=1$ as shown in Fig. \ref{fig:model}(b).
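A minimal sketch of this exploration phase, assuming operator/selector modules like those sketched in Sect. \ref{subsect:model}, is given below: \texttt{random\_masks} plays the role of ${\rm Random}({\cal M}, s)$ and \texttt{operator\_loss} computes the mask-averaged loss of Eq.(\ref{eq:oloss}) for one batch; a classification loss is assumed for illustration.
\begin{verbatim}
import torch

def random_masks(n_masks, d, s):
    # Random(M, s): d-dimensional masks with exactly s ones and d-s zeros
    masks = torch.zeros(n_masks, d)
    for i in range(n_masks):
        masks[i, torch.randperm(d)[:s]] = 1.0
    return masks

def operator_loss(operator, x, y, masks,
                  criterion=torch.nn.CrossEntropyLoss()):
    # operator loss: instance-level loss averaged over all masks in M'
    return torch.stack([criterion(operator(x, m), y) for m in masks]).mean()
\end{verbatim}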
\paragraph{Phase II-A: Selector Learning via Operator's Feedback.} As illustrated in Fig. \ref{fig:model}(b), the operator provides training examples for the selector at step $t$: $\big \{ \big ({\pmb m}, \frac 1 {|{\cal D}|} \sum_{({\pmb x}, {\pmb y}) \in {\cal D}} l(\pmb x \otimes \pmb m, \pmb y; \theta_t) \big ) \big \}_{{\pmb m} \in {\cal M}_t'}$. Using SGD with $\varphi_1$ initialized randomly, the parameters in the selector net, $\varphi$, are updated by
$\varphi_{t+1} \triangleq \varphi_t - \eta \nabla_{\varphi}
{\cal L}_S ({\cal M}_t'; \varphi)|_{\varphi=\varphi_t}$. Then, we adopt an \emph{exploration-exploitation} strategy to generate a new mask subset, ${\cal M}_{t+1}'$, for the operator learning at step $t\!+\!1$. Thus, ${\cal M}_{t+1}'$ is divided into two mutually exclusive subsets: ${\cal M}_{t+1}'={\cal M}_{t+1,1}' \cup {\cal M}_{t+1,2}'$. Motivated by the role of noise in stochastic local search \cite{mengshoel2008noise}, ${\cal M}_{t+1,1}'$ is generated via exploration to avoid overfitting:
${\cal M}_{t+1,1}' = \big \{ {\pmb m}_i| {\pmb m}_i = {\rm Random}({\cal M}, s) \big \}_{i=1}^{|{\cal M}_{t+1,1}'|}$. Motivated by the input gradient idea
\cite{hechtlinger2016Inputgradient}, ${\cal M}_{t+1,2}'$ is generated by exploitation of the selector net, $f_S(\varphi_{t+1}; \pmb m)$, as follows:
\textbf{a) Generation of an optimal subset}.
Starting with $d$-dimensional ${\pmb m}_0= \big (\frac 1 2, \cdots, \frac 1 2 \big )$, meaning that every feature has an equal chance to be selected, we have ${\pmb \delta}_{{\pmb m}_0} =
{\frac {\partial f_S(\varphi_{t+1}; \pmb m)} {\partial {\pmb m}}} |_{{\pmb m}={\pmb m}_0}$. As input features of the larger gradients contribute more to the learning performance of the operator, we can find top $s$ features based on their gradients by
$({\pmb m}_{opt}, \bar{\pmb m}_{opt}) = {\rm argsort}({\pmb \delta}_{{\pmb m}_0}, s)$ where ${\pmb m}_{opt}$ is the mask to indicate top $s$ features and $\bar{\pmb m}_{opt}$ is the mask for the remaining $d\!-\!s$ features. To ensure the optimality of ${\pmb m}_{opt}$, we come up with a three-step \emph{validation} procedure: \textbf{i}) Re-evaluate the contributions of top $s$ features by $({\pmb m}_{opt}, \bar{\pmb m}_{opt}) = {\rm argsort}({\pmb \delta}_{{\pmb m}_{opt}}, s)$ where
${\pmb \delta}_{{\pmb m}_{opt}} =
{\frac {\partial f_S(\varphi_{t+1}; \pmb m)} {\partial {\pmb m}}} |_{{\pmb m}={\pmb m}_{opt}}$; \textbf{ii}) Replace a feature of negative gradient in ${\pmb m}_{opt}$ with the one of the largest gradient in $\bar{\pmb m}_{opt}$ if one exists;
\textbf{iii}) Further check the optimality via a function
$({\pmb m}_{opt}', \bar{\pmb m}_{opt}')={\rm swap}({\pmb m}_{opt}, \bar{\pmb m}_{opt})$ that yields ${\pmb m}_{opt}'$ by swapping between the feature of least gradient in ${\pmb m}_{opt}$ and the one of the largest gradient in $\bar{\pmb m}_{opt}$. Repeat (i)-(iii) until $f_S(\varphi_{t+1}; {\pmb m}_{opt}) \leq f_S(\varphi_{t+1}; {\pmb m}_{opt}')$. After going through the validation procedure, ${\pmb m}_{t+1,opt}$ is obtained for step $t\!+\!1$.
\textbf{b) Generation of optimal subset candidates via perturbation}. As the optimal subset ${\pmb m}_{t+1,opt}$ might be a local optimum, we further inject noise to generate more optimal subset candidates by a perturbation function ${\rm Perturb}({\pmb m}_{opt},s_p)$, sketched in code below. For $s_p < s$, ${\rm Perturb}({\pmb m}_{opt},s_p)$ randomly flips $s_p$ different elements in
${\pmb m}_{opt}$/$\bar{\pmb m}_{opt}$ from 1/0 to 0/1 and swaps between changed elements in ${\pmb m}_{opt}$ and $\bar{\pmb m}_{opt}$.
Applying ${\rm Perturb}({\pmb m}_{opt},s_p)$ repeatedly leads to multiple optimal subset candidates;
\textbf{c) Formation of optimal subset candidates}. Assembling \textbf{a)} and \textbf{b)} leads to ${\cal M}_{t+1,2}' = \{{\pmb m}_{t,best}\} \cup
\{{\pmb m}_{t+1,opt}\} \cup \big \{{\pmb m}_i|{\pmb m}_i={\rm Perturb}({\pmb m}_{t+1,opt},s_p) \big \}_{i=1}^{|{\cal M}_{t+1,2}'|-2}$. Here, we always include ${\pmb m}_{t,best}$, the subset that leads to the best learning performance of the operator net in the last step (step $t$), as the most important subset candidate in the current step (step $t\!+\!1$) in order to make the operator learning progress steadily.
Note that ${\pmb m}_{t,best}$ may not be ${\pmb m}_{t,opt}$.
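A minimal sketch of this exploitation step is given below, again in PyTorch for illustration: \texttt{optimal\_mask} reads the selector's input gradient at ${\pmb m}_0$ and keeps the top-$s$ features (the three-step validation procedure is omitted for brevity), and \texttt{perturb} mimics ${\rm Perturb}({\pmb m}_{opt}, s_p)$; both are sketches rather than our actual implementation.
\begin{verbatim}
import torch

def optimal_mask(selector, d, s):
    m0 = torch.full((d,), 0.5, requires_grad=True)
    selector(m0).backward()                        # input gradient of the selector at m_0
    m_opt = torch.zeros(d)
    m_opt[torch.topk(m0.grad, s).indices] = 1.0    # keep the top-s features
    return m_opt

def perturb(m_opt, s_p):
    ones = torch.nonzero(m_opt == 1).flatten()
    zeros = torch.nonzero(m_opt == 0).flatten()
    m = m_opt.clone()
    m[ones[torch.randperm(len(ones))[:s_p]]] = 0.0    # flip s_p selected features off
    m[zeros[torch.randperm(len(zeros))[:s_p]]] = 1.0  # and s_p unselected ones on
    return m
\end{verbatim}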
\paragraph{Phase II-B: Operator Learning via Optimal Subset Candidates from Selector.} After completing the training of Phase II-A at step $t$, the selector net provides the optimal subset candidates, ${\cal M}_{t+1}'={\cal M}_{t+1,1}' \cup {\cal M}_{t+1,2}'$, for the operator net,
as illustrated in Fig. \ref{fig:model}(b). At step $t\!+\!1$, the operator net is thus trained based on ${\cal M}_{t+1}'$ with SGD:
$\theta_{t+1} \triangleq \theta_t - \eta \nabla_{\theta}
{\cal L}_O ({\cal D}, {\cal M}_{t+1}'; \theta)|_{\theta=\theta_t}$.
As shown in Fig. \ref{fig:model}, our alternate algorithm enables the operator and the selector nets to be trained jointly in Phase II until a pre-specified stopping condition is satisfied.
\subsection{Deployment}
\label{subsect:deployment}
After the learning described in Sect. \ref{subsect:learn} is accomplished, we obtain the optimal parameters of the operator and the selector nets, $\theta^*$ and $\varphi^*$.
By using the trained selector net, $f_S(\varphi^*; \pmb m)$, we find out an optimal feature subset with the same procedure used in Phase II-A as follows: 1) starting
with ${\pmb m}_0= \big (\frac 1 2, \cdots, \frac 1 2 \big )$, calculate the gradient ${\pmb \delta}_{{\pmb m}_0} =
{\frac {\partial f_S(\varphi^*; \pmb m)} {\partial {\pmb m}}} |_{{\pmb m}={\pmb m}_0}$;
2) finding top $s$ features by
$({\pmb m}^*, \bar{\pmb m}^*) = {\rm argsort}({\pmb \delta}_{{\pmb m}_0}, s)$, where ${\pmb m}^*$ indicates the optimal subset of top $s$ features;
and 3) going through the validation procedure described in Phase II.A to ensure the optimality of ${\pmb m}^*$. Thus, feature importance ranking on the final ${\pmb m}^*$ is done by setting ${\rm Score}({\pmb m}^*)= {\frac {\partial f_S(\varphi^*; \pmb m)} {\partial {\pmb m}}} |_{{\pmb m}={\pmb m}^*}$ and sorting the input gradients of selected features.
During test, for a test instance, $\hat{\pmb x}$, the trained operator net, $f_O(\theta^*; \pmb x, {\pmb m})$, can be used to make a prediction, $f_O(\theta^*; \hat{\pmb x}, {\pmb m}^*)$, via $\hat{\pmb x} \otimes {\pmb m}^*$, which allows a supervised learning task to be done based on only the optimal feature subset, ${\pmb m}^*$, found out with our proposed approach.
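The sketch below illustrates this deployment step for a classification operator, assuming a trained selector mapping a mask to a scalar and a trained operator as sketched in Sect. \ref{subsect:model}; the validation procedure of ${\pmb m}^*$ is omitted for brevity.
\begin{verbatim}
import torch

def rank_and_predict(selector, operator, x_test, d, s):
    # gradient of the selector's prediction w.r.t. the all-1/2 mask
    m0 = torch.full((d,), 0.5, requires_grad=True)
    selector(m0).backward()
    top = torch.topk(m0.grad, s).indices        # indices of the optimal subset m*
    m_star = torch.zeros(d)
    m_star[top] = 1.0
    # Score(m*): input gradient of the selector evaluated at m*
    m_var = m_star.clone().requires_grad_(True)
    selector(m_var).backward()
    score = m_var.grad[top]                     # FIR scores of the s selected features
    with torch.no_grad():                       # prediction on the masked test data
        pred = operator(x_test, m_star).argmax(dim=-1)
    return m_star, score, pred
\end{verbatim}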
\section{Experiments}
\label{sect:experiment}
In this section, we evaluate our approach on synthetic, benchmark and real-world datasets, where we always use 5-fold cross-validation for evaluation and report the performance statistics, i.e., the \emph{mean} and \emph{standard deviation} estimated on 5 folds. We describe our main settings briefly in the main text, and the details of all the experimental settings can be found in Sect. A of the Supplementary Materials.
\subsection{Synthetic Data}
\label{sect:synthetic}
Our first evaluation employs 3 synthetic datasets in literature \cite{chen2017CCM,friedman2001els} for feature selection regarding regression and binary/multiclass classification as follows:
\noindent
\textbf{XOR as 4-way classification} \cite{chen2017CCM}.
Group 8 corners of the cube, $(v_0,v_1,v_2) \in \{-1,+1\}^3$, by the tuples $(v_0v_2, v_1v_2)$, leading to 4 sets of vectors paired with their negations $\{v^{(c)}, -v^{(c)}\}$. For a class $c$, a point is generated from the mixture distribution: $\frac 1 2 [N(v^{(c)}, 0.5I_3) + N(-v^{(c)}, 0.5I_3)]$. Then, form a 10-D feature vector for each example by adding 7 standard noise features, $(X_3,\cdots,X_9) \sim N(0,I_7)$.
\noindent
\textbf{Nonlinear regression} \cite{chen2017CCM}.
$Y= -2\sin(2X_0)+\max(X_1,0) + X_2 + \exp(-X_3)+ \epsilon$, where
$(X_0,\cdots,X_9) \sim N(0, I_{10})$ and $\epsilon \sim N(0,1)$, leading to a 10-D feature vector for each example.
\noindent
\textbf{Binary classification} \cite{friedman2001els}. To generate examples, set $Y=-1$ when $(X_0,\cdots,X_9) \sim N(0,I_{10})$ and $Y=+1$ when $X_0$ through $X_3$ are standard normal conditioned on $9 \leq \sum_{i=0}^3 X_i^2 \leq 16$ and $(X_4,\cdots,X_9) \sim N(0,I_6)$, resulting in a 10-D feature vector for each example.
For each dataset, we randomly generate 512 and 1024 examples, respectively, for training and test.
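As an example, the nonlinear-regression dataset above can be generated with the following sketch (the other two datasets follow analogously); the random seeds are arbitrary.
\begin{verbatim}
import numpy as np

def nonlinear_regression(n, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, 10))            # (X_0, ..., X_9) ~ N(0, I_10)
    eps = rng.standard_normal(n)                # eps ~ N(0, 1)
    y = (-2 * np.sin(2 * X[:, 0]) + np.maximum(X[:, 1], 0.0)
         + X[:, 2] + np.exp(-X[:, 3]) + eps)
    return X, y

X_train, y_train = nonlinear_regression(512)
X_test, y_test = nonlinear_regression(1024, seed=1)
\end{verbatim}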
With our problem formulation described in Sect. \ref{subsect:problem}, our experiment on synthetic data simulates an application scenario that selects $s$ out of $d$ features where $s$ is larger than the number of features relevant to the target in a dataset. As there are up to 4 relevant features in the above 3 datasets, we choose $s=5$ in our experiment and compare with all the methods reviewed in Sect. \ref{sect:related}, including DFS \cite{Li2016DFS}, AvGrad \cite{hechtlinger2016Inputgradient}, FS \cite{friedman2001els} based on MLP, LASSO \cite{tibshirani1996LASSO}, RF \cite{breiman2001RF}, RFE \cite{guyon2002RFE}, BAHSIC \cite{song2007BAHSIC1,song2012BAHSIC2}, mRMR \cite{peng2005mRMR} and CCM \cite{chen2017CCM}. According to a taxonomy \cite{guyon2003SFintro}, DFS, AvGrad, RF and ours are embedding methods, FS is a wrapper method and all the others are filtering methods. For those filtering methods, we use exactly the same kernel SVM/SVR described in those papers \cite{guyon2002RFE,song2007BAHSIC1,peng2005mRMR,song2012BAHSIC2,chen2017CCM} and an MLP on LASSO for classification/regression.
While DFS, AvGrad, LASSO and RF work on FIR for all 10 features, all other methods work with the same setting as ours by finding out top 5 features and FIR.
Fig. \ref{fig:synthetic} shows the feature selection and FIR results yielded by different methods regarding the top 5 features on the 3 synthetic datasets, where the FIR scores are normalized within each method and an equal FIR score is assigned to all the features selected by those methods that do not consider FIR. It is observed from Fig. \ref{fig:synthetic} that our approach always finds the relevant features in all 5 folds and does FIR properly by assigning negative scores (gradients), meaning unimportant, to irrelevant features. For the 4-way classification, DFS, RF, RFE, BAHSIC and CCM also find the 3 relevant features in all 5 folds but the others fail, as shown in Fig. \ref{fig:synthetic}(a), although mRMR and CCM cannot yield FIR scores. In terms of accuracy, ours outperforms all the other 9 methods despite the fact that DFS, AvGrad and RF work directly on the full feature set. For the nonlinear regression, FS, RF, RFE and CCM also select the 4 relevant features in all 5 folds but ours yields the least MSE, as shown in Fig. \ref{fig:synthetic}(b). For the binary classification, all the methods apart from LASSO find the 4 relevant features in all 5 folds, as shown in Fig. \ref{fig:synthetic}(c). For this dataset, those state-of-the-art filtering methods yield better accuracy than the others, and the accuracy resulting from ours is slightly worse but comparable to theirs. In terms of FIR on all relevant features, ours is entirely consistent with that yielded by RF, yet ours performs significantly better than RF on the 3 datasets. In comparison to the existing FIR methods for deep learning, ours always outperforms DFS, AvGrad and FS on the 3 datasets in terms of both FIR and learning performance.
\begin{figure}[t]
\fbox{
\begin{subfigure}{0.97\textwidth}
\centering
\begin{subfigure}{0.19\textwidth}
\centering
\includegraphics[trim=35 0 35 0,clip,width=.99\linewidth ]{figure/DFS/XOR.png}
\end{subfigure}
\begin{subfigure}{0.19\textwidth}
\centering
\includegraphics[trim=35 0 35 0,clip,width=.99\linewidth ]{figure/AvGrad/XOR.png}
\end{subfigure}
\begin{subfigure}{0.19\textwidth}
\centering
\includegraphics[trim=35 0 35 0,clip,width=.99\linewidth ]{figure/FS/XOR.png}
\end{subfigure}
\begin{subfigure}{0.19\textwidth}
\centering
\includegraphics[trim=35 0 35 0,clip,width=.99\linewidth ]{figure/LASSO/XOR.png}
\end{subfigure}
\begin{subfigure}{0.19\textwidth}
\centering
\includegraphics[trim=35 0 35 0,clip,width=.99\linewidth ]{figure/RF/XOR.png}
\end{subfigure}
\begin{subfigure}{0.19\textwidth}
\centering
\includegraphics[trim=35 0 35 0,clip,width=.99\linewidth ]{figure/RFE/XOR.png}
\end{subfigure}
\begin{subfigure}{0.19\textwidth}
\centering
\includegraphics[trim=35 0 35 0,clip,width=.99\linewidth ]{figure/BAHSIC/XOR.png}
\end{subfigure}
\begin{subfigure}{0.19\textwidth}
\centering
\includegraphics[trim=35 0 35 0,clip,width=.99\linewidth ]{figure/MRMR/XOR.png}
\end{subfigure}
\begin{subfigure}{0.19\textwidth}
\centering
\includegraphics[trim=35 0 35 0,clip,width=.99\linewidth ]{figure/CCM/XOR.png}
\end{subfigure}
\begin{subfigure}{0.19\textwidth}
\centering
\includegraphics[trim=35 0 35 0,clip,width=.99\linewidth ]{figure/Ours_(s=5)/XOR.png}
\end{subfigure}
\caption{}
\end{subfigure}
}
\fbox{
\begin{subfigure}{0.97\textwidth}
\centering
\begin{subfigure}{0.19\textwidth}
\centering
\includegraphics[trim=35 0 35 0,clip,width=.99\linewidth ]{figure/DFS/regression.png}
\end{subfigure}
\begin{subfigure}{0.19\textwidth}
\centering
\includegraphics[trim=35 0 35 0,clip,width=.99\linewidth ]{figure/AvGrad/regression.png}
\end{subfigure}
\begin{subfigure}{0.19\textwidth}
\centering
\includegraphics[trim=35 0 35 0,clip,width=.99\linewidth ]{figure/FS/regression.png}
\end{subfigure}
\begin{subfigure}{0.19\textwidth}
\centering
\includegraphics[trim=35 0 35 0,clip,width=.99\linewidth ]{figure/LASSO/regression.png}
\end{subfigure}
\begin{subfigure}{0.19\textwidth}
\centering
\includegraphics[trim=35 0 35 0,clip,width=.99\linewidth ]{figure/RF/regression.png}
\end{subfigure}
\begin{subfigure}{0.19\textwidth}
\centering
\includegraphics[trim=35 0 35 0,clip,width=.99\linewidth ]{figure/RFE/regression.png}
\end{subfigure}
\begin{subfigure}{0.19\textwidth}
\centering
\includegraphics[trim=35 0 35 0,clip,width=.99\linewidth ]{figure/BAHSIC/regression.png}
\end{subfigure}
\begin{subfigure}{0.19\textwidth}
\centering
\includegraphics[trim=35 0 35 0,clip,width=.99\linewidth ]{figure/MRMR/regression.png}
\end{subfigure}
\begin{subfigure}{0.19\textwidth}
\centering
\includegraphics[trim=35 0 35 0,clip,width=.99\linewidth ]{figure/CCM/regression.png}
\end{subfigure}
\begin{subfigure}{0.19\textwidth}
\centering
\includegraphics[trim=35 0 35 0,clip,width=.99\linewidth ]{figure/Ours_(s=5)/regression.png}
\end{subfigure}
\caption{}
\end{subfigure}
}
\fbox{
\begin{subfigure}{0.97\textwidth}
\centering
\begin{subfigure}{0.19\textwidth}
\centering
\includegraphics[trim=35 0 35 0,clip,width=.99\linewidth ]{figure/DFS/orange_skin.png}
\end{subfigure}
\begin{subfigure}{0.19\textwidth}
\centering
\includegraphics[trim=35 0 35 0,clip,width=.99\linewidth ]{figure/AvGrad/orange_skin.png}
\end{subfigure}
\begin{subfigure}{0.19\textwidth}
\centering
\includegraphics[trim=35 0 35 0,clip,width=.99\linewidth ]{figure/FS/orange_skin.png}
\end{subfigure}
\begin{subfigure}{0.19\textwidth}
\centering
\includegraphics[trim=35 0 35 0,clip,width=.99\linewidth ]{figure/LASSO/orange_skin.png}
\end{subfigure}
\begin{subfigure}{0.19\textwidth}
\centering
\includegraphics[trim=35 0 35 0,clip,width=.99\linewidth ]{figure/RF/orange_skin.png}
\end{subfigure}
\begin{subfigure}{0.19\textwidth}
\centering
\includegraphics[trim=35 0 35 0,clip,width=.99\linewidth ]{figure/RFE/orange_skin.png}
\end{subfigure}
\begin{subfigure}{0.19\textwidth}
\centering
\includegraphics[trim=35 0 35 0,clip,width=.99\linewidth ]{figure/BAHSIC/orange_skin.png}
\end{subfigure}
\begin{subfigure}{0.19\textwidth}
\centering
\includegraphics[trim=35 0 35 0,clip,width=.99\linewidth ]{figure/MRMR/orange_skin.png}
\end{subfigure}
\begin{subfigure}{0.19\textwidth}
\centering
\includegraphics[trim=35 0 35 0,clip,width=.99\linewidth ]{figure/CCM/orange_skin.png}
\end{subfigure}
\begin{subfigure}{0.19\textwidth}
\centering
\includegraphics[trim=35 0 35 0,clip,width=.99\linewidth ]{figure/Ours_(s=5)/orange_skin.png}
\end{subfigure}
\caption{}
\end{subfigure}
}
\caption{5-fold cross-validation results (mean$\pm$std) on synthetic datasets ($s=5, d=10$). (a) XOR 4-way classification. (b) Nonlinear regression. (c) Binary classification. * refers to a filtering method, and blue/red colors indicate a feature selected in all 5 folds/fewer folds, respectively.}
\label{fig:synthetic}
\end{figure}
\subsection{Benchmark Data}
\label{sect:benchmark}
We further evaluate our approach on several well-known benchmark datasets from two different perspectives: explainability of FIR and learning performance on supervised feature selection. Evaluation on more benchmark datasets can be found in Sect. B of the Supplementary Materials.
\noindent
\textbf{MNIST Dataset} \cite{Mnist}. To demonstrate the explainability of FIR via visual inspection, we employ an MNIST subset of hard-to-distinguish digits ``3'' and ``8'' for binary classification. The information on this subset is summarized in Table \ref{tab:benchmark}.
For comparison, we also apply 3 embedding methods, DFS, AvGrad and RF, to this subset. To assess the explainability of FIR, we adopt the same fully-connected MLP instead of a CNN for DFS, AvGrad and the operator net in ours ($s=85, d=784$). This setting ensures that no other mechanisms, such as convolution/pooling layers, can help a model automatically extract salient features for FIR.
As a result, the accuracies yielded by DFS, AvGrad, RF and \textbf{ours} on the test data are $97.42\pm 0.30\%$, $99.27\pm 0.04\%$, $98.84\pm 0.03 \%$ and $\bf{99.31\pm 0.08\%}$, respectively, where ours and DFS use 85 and 212 features, respectively, while AvGrad and RF need all 784 features. For visual inspection, we normalize the FIR scores achieved by different methods to the same range and illustrate typical feature importance maps produced by the 4 methods in one fold in Fig. \ref{fig:mnist_importances}. It is observed from Fig. \ref{fig:mnist_importances}(a),(b) that DFS and AvGrad, two FIR methods for deep learning, do not produce explainable maps. In contrast, it is evident from Fig. \ref{fig:mnist_importances}(d-f) that ours yields a meaningful map where the features (pixels) that distinguish between ``3'' and ``8'' images are vividly highlighted in terms of their importance. Again, ours yields a map similar to that of RF (cf. Fig. \ref{fig:mnist_importances}(c)) but outperforms this off-the-shelf FIR method.
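As an illustration of the visual-inspection step, the following sketch normalizes a (hypothetical) 784-dimensional importance vector to a common range and superimposes it on the mean image of a class. It is a minimal stand-in for the procedure behind Fig. \ref{fig:mnist_importances}, with placeholder arrays instead of the actual FIR scores and MNIST images.
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

importance = np.random.rand(784)     # placeholder for one method's FIR scores
mean_digit = np.random.rand(28, 28)  # placeholder for the mean image of "3"

span = importance.max() - importance.min() + 1e-12
imp_map = ((importance - importance.min()) / span).reshape(28, 28)

plt.imshow(mean_digit, cmap="gray")
plt.imshow(imp_map, cmap="jet", alpha=0.5)   # superimpose the importance map
plt.colorbar(label="normalized importance")
plt.savefig("importance_overlay.png")
\end{verbatim}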
\begin{table}[t]
\begin{center}
\caption{Information on benchmark and real-world datasets used in our experiments.}
\begin{tabular}{|c|c|c|c|c|c|c| }
\hline
Data Set & MNIST & glass & vowel & TOX-171 & yale & Enhancer–Promoter \\
\hline
\hline
\#Features & 784 & 10 & 10 & 5784 & 1024 ($32 \times 32$) & 102\\
\#Classes & 2 & 6 & 11 & 4 &15 & 3\\
\#Training & 11,982 & 150 & 742 & 137 & 132 & 5,756\\
\#Testing & 1,984 & 64 & 248 & 34 & 33 & 2,878 \\
\hline
\end{tabular}
\label{tab:benchmark}
\end{center}
\end{table}
\begin{figure}[t]
\centering
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{figure/MNIST/mnist_DFS.png}
\caption{}
\end{subfigure}
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{figure/MNIST/mnist_ag.png}
\caption{}
\end{subfigure}
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{figure/MNIST/mnist_RF.png}
\caption{}
\end{subfigure}
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{figure/MNIST/mnist_masked_importance.png}
\caption{}
\end{subfigure}
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{figure/MNIST/mnist_3_importance.png}
\caption{}
\end{subfigure}
\begin{subfigure}{0.16\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{figure/MNIST/mnist_8_importance.png}
\caption{}
\end{subfigure}
\begin{subfigure}{0.9\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{figure/MNIST/mnist_colorbar.png}
\end{subfigure}
\caption{Feature importance maps yielded by different FIR methods. (a) DFS. (b) AvGrad. (c) RF. (d-f) Ours and our map superimposed on the mean images of ``3'' and ``8'', respectively, for clarity.}
\label{fig:mnist_importances}
\end{figure}
\begin{figure}[h]
\centering
\begin{subfigure}{.245\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figure/benchmarks/glass.png}
\end{subfigure}
\begin{subfigure}{.245\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figure/benchmarks/vowel.png}
\end{subfigure}
\begin{subfigure}{.245\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figure/benchmarks/tox.png}
\end{subfigure}
\begin{subfigure}{.245\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figure/benchmarks/yale.png}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=.99\linewidth]{figure/benchmarks/legend.png}
\end{subfigure}
\caption{Classification accuracies (vertical axis) yielded by the supervised feature selection methods and ours for different numbers of selected features
(horizontal axis) on 4 benchmark datasets.}
\label{fig:benchmark_res}
\end{figure}
\noindent
\textbf{Feature Selection Benchmark}. We further conduct an evaluation on supervised feature selection. As our approach shares the same setting as supervised feature selection methods, we compare ours to the strong supervised feature selection methods RFE, BAHSIC, mRMR and CCM on four benchmark datasets: \texttt{glass} \cite{Dua2019UCI}, \texttt{vowel} \cite{Dua2019UCI}, \texttt{TOX-171} \cite{li2018featureselectdata} and \texttt{yale} \cite{UCSD2001yale}, as summarized in Table \ref{tab:benchmark}.
For our model, we employ MLPs to implement the operator for \texttt{glass}, \texttt{vowel} and \texttt{TOX-171}, but a CNN for \texttt{yale}, to demonstrate the flexibility of our dual-net architecture. Following the setting used in \cite{chen2017CCM}, we employ kernel SVMs for classification on the features selected by the 4 filtering methods.
It is evident from Fig. \ref{fig:benchmark_res} that ours outperforms all the others on \texttt{glass}, \texttt{vowel} and \texttt{yale} by a large margin. On \texttt{TOX-171}, ours yields results comparable to the strongest performer, CCM; this dataset has 5,700+ features but only 109 training examples for parameter estimation in each of the 5 folds, which is very challenging for deep learning.
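The evaluation protocol for the filtering baselines can be summarized by the following sketch: a kernel SVM is trained only on the selected feature subset and the cross-validated accuracy is reported as the number of selected features varies. The dataset and the univariate selector below are placeholders (neither \texttt{glass} nor one of the compared methods); the snippet only illustrates the pipeline.
\begin{verbatim}
from sklearn.datasets import load_wine
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)   # placeholder dataset
for k in (2, 4, 6, 8):              # number of selected features
    clf = make_pipeline(StandardScaler(),
                        SelectKBest(f_classif, k=k),   # placeholder selector
                        SVC(kernel="rbf", C=1.0, gamma="scale"))
    acc = cross_val_score(clf, X, y, cv=5)
    print("k =", k, "accuracy", round(acc.mean(), 3),
          "+/-", round(acc.std(), 3))
\end{verbatim}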
\subsection{Real-world Data}
\label{sect:realdata}
We finally evaluate our approach on real-world enhancer–promoter data, a challenging task that classifies the function of DNA sequences into enhancer, promoter and background \cite{anderson2014enhancer}. As listed in Table \ref{tab:benchmark}, the data used in this experiment are sampled from annotated DNA regions of the GM12878 cell line (200 bp), the same as used in DFS \cite{Li2016DFS}, a feature selection method dedicated to this task. For comparison, we also apply DFS as well as RF and RFE, the two strongest FIR methods in our experiments, to this dataset. The same MLP architecture is used to implement DFS and the operator net in our model.
Fig. \ref{fig:enhance-promoter} shows the accuracies and the FIR scores of the top $s=35$ out of $d=102$ features yielded by the 4 different methods; the features colored in red correspond to genes whose functions are well documented in the medicine and genetics literature (see Table 2 in \cite{Li2016DFS} for details). In terms of accuracy, ours is comparable to DFS and slightly better than RF and RFE, where the test accuracies of RFE and ours are based on $35$ features, while the accuracies of RF and DFS are achieved with all 102 features and with the 94 features selected by DFS via weight shrinkage, respectively. As seen in Fig. \ref{fig:enhance-promoter}, the top features ranked by DFS and ours, two deep learning methods, appear quite similar but differ significantly from those ranked by RF and RFE. The biological implications of the results shown in Fig. \ref{fig:enhance-promoter} are worth investigating further from a biological/medical perspective.
More results on this dataset can be found in Sect. C of the Supplementary Materials.
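A simple way to quantify the similarity of the rankings in Fig. \ref{fig:enhance-promoter}, such as the apparent agreement between DFS and ours, is the overlap of the top-$s$ feature sets of two methods. The sketch below computes this overlap for two hypothetical score vectors with $d=102$ and $s=35$; the scores are random placeholders, not the actual FIR outputs.
\begin{verbatim}
import numpy as np

d, s = 102, 35
rng = np.random.default_rng(1)
scores_a = rng.random(d)   # placeholder FIR scores of method A
scores_b = rng.random(d)   # placeholder FIR scores of method B

top_a = set(np.argsort(scores_a)[::-1][:s])
top_b = set(np.argsort(scores_b)[::-1][:s])
shared = len(top_a & top_b)
print("shared top-35 features:", shared,
      "Jaccard:", round(shared / len(top_a | top_b), 2))
\end{verbatim}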
\begin{figure}[t]
\centering
\begin{subfigure}{1.0\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/CellLine/dfs_dfs_new_200.png}
\end{subfigure}
\begin{subfigure}{1.0\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/CellLine/dfs_rf_new_200.png}
\end{subfigure}
\begin{subfigure}{1.0\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/CellLine/dfs_rfe_new_200.png}
\end{subfigure}
\begin{subfigure}{1.0\textwidth}
\centering
\includegraphics[width=\linewidth]{figure/CellLine/dfs_sn_new_200.png}
\end{subfigure}
\caption{Accuracy and FIR scores yielded by different methods on GM12878 Cell line (200 bp).}
\label{fig:enhance-promoter}
\end{figure}
Regarding our alternate learning algorithm, our empirical studies suggest that it generally converges to a local optimum (see Sects. B and C in the Supplementary Materials for details).
\section{Discussion}
In general, our idea is motivated by RF \cite{breiman2001RF} and dropout regularization \cite{srivastava2014dropout}; our exploration-exploitation strategy (cf. Sect. \ref{subsect:learn}) allows for the simultaneous use of different feature subsets and the random dropout of input ``nodes'' during learning. As the joint use of multiple feature subsets in learning effectively provides more training examples, each with fewer (randomly chosen) features, our approach could offer an alternative way to improve generalization in deep learning when only limited training examples are available, even when FIR/feature selection is not of interest in such application scenarios.
We also wish to draw a connection between our proposed approach and evolutionary computation for feature selection \cite{xue2016fectureselectionec}. In our approach, a single deep learning model, the operator, works on different feature masks simultaneously during learning, carrying out the functionality of a population of individual learning models in evolutionary computation. Instead of the purely stochastic operations, mutation and crossover, applied to individual learners in a population in evolutionary computation, our selector, implemented by another single deep learning model, uses a more efficient gradient-guided local stochastic search strategy to reduce the search space for combinatorial optimization. In general, our approach bears the spirit of evolutionary computation but addresses the combinatorial optimization issue in an entirely distinct manner, which leads to a more effective yet efficient approach to populationwise FIR and feature selection.
Our proposed approach is scalable to big data and can easily make use of any state-of-the-art deep learning techniques as its component models for populationwise FIR and feature selection. In terms of computational complexity, however, our approach suffers from a high computational burden in training due to the use of the dual-net architecture involving two deep learning models and the alternate learning procedure (see Sect. C in the Supplementary Materials for details). Nevertheless, this computational load could be addressed, or at least alleviated, by the latest developments in deep learning, e.g., EfficientNet \cite{tan2019efficientnet}.
Our approach can be applied to the generic populationwise feature selection problem of finding an optimal feature subset among the $\sum_{s=1}^{d-1}\binom{d}{s}=2^d-2$ candidate subsets of a feature set of $d$ features. Instead of a direct search of the entire subset space, we adopt a strategy that makes our model work in parallel on different subset sizes, the same as used in state-of-the-art supervised feature selection methods, e.g., CCM \cite{chen2017CCM}. On the other hand, our approach might incur a higher computational burden in learning than those kernel-based methods.
Also, our approach is extensible to group-based FIR and feature selection by introducing group feature constraints into our stochastic local search procedure (cf. Sect. \ref{subsect:learn}), which would overcome the limitation of linear models, e.g., group LARS/LASSO \cite{yuan2006groupLASSO}, in capturing the complex functional dependency between grouped input features and targets. Furthermore, our proposed dual-net architecture can also be extended to unsupervised feature selection by implementing the operator with an autoencoder-like learning model.
In conclusion, we propose a dual-net neural architecture along with an alternate learning algorithm that enables deep learning to work effectively for FIR and feature selection. A thorough evaluation demonstrates that our approach outperforms several state-of-the-art FIR and supervised feature selection methods. In ongoing work, we are extending our approach to instancewise FIR, group-based and unsupervised feature selection scenarios and exploring its potential in challenging real applications.
\section*{Broader Impact}
This research does not directly raise ethical issues or immediate societal consequences. In the future, the approach presented in this paper might be applied in different domains, e.g., medicine and life science, where ethical aspects and societal consequences might have to be considered.
\section*{Acknowledgement}
The authors are grateful to the three anonymous reviewers for their comments. In particular, the authors would like to thank the anonymous reviewer who offered the insight of viewing our contributions from an evolutionary computation perspective.
\small
\bibliographystyle{unsrt}
\section{Introduction}
\label{sec:intro}
\subsection{Model}
\label{subsec:model}
The constrained-degree percolation model was introduced in \cite{Teodoro14} as follows. Consider an infinite transitive connected graph $G=({\ensuremath{\mathbb V}} ,{\ensuremath{\mathbb E}} )$, let $\kappa$ be a positive integer such that $\kappa\leq \deg(G)$, where $\deg(v)=|\{u\in{\ensuremath{\mathbb V}} :uv\in{\ensuremath{\mathbb E}} \}|$ and $\deg (G)= \deg (v)$ for all $v\in{\ensuremath{\mathbb V}} $, since $G$ is transitive.
Let $(U_e)_{e\in{\ensuremath{\mathbb E}} }$ be a sequence of independent and identically distributed uniform random variables on $[0,1]$. For each $t\in [0,1]$, define a continuous time percolation model, denoting by $\omega^{G,\kappa}(t)\in\{0,1\}^{{\ensuremath{\mathbb E}} }$ the configuration of {\em open} (1) or {\em closed} (0) bonds.
At time $t=0$, we declare all bonds closed (i.e.\ $\omega_{e}^{G,\kappa}(0)=0$ for all $e\in{\ensuremath{\mathbb E}} $). As time progresses, bonds may become open. Each bond $e\in{\ensuremath{\mathbb E}} $ tries to open at time $U_e$; it succeeds if and only if both its end-vertices have degree at most $\kappa -1$ in the cluster of open bonds.
More formally, the model is described by the probability space $(\Omega, \ensuremath{\mathcal F},{\ensuremath{\mathbb P}} )$, where $\Omega=[0,1]^{{\ensuremath{\mathbb E}} }$ is the space of clocks, $\ensuremath{\mathcal F}$ is the $\sigma$-algebra generated by cylinder sets of $\Omega$ and ${\ensuremath{\mathbb P}} $ is the product of Lebesgue measures on $[0,1]$. Given the sequence of clocks $(U_e)_{e\in{\ensuremath{\mathbb E}} }$, the percolation configuration on the bond $v_1v_2$ at time $t\in [0,1]$, denoted $\o_{v_1v_2}^{G,\k}(t)$, is the indicator function of the intersection of the events
\[\{U_{v_1v_2}\leq t\}\] and \[\left\{\left|\left\{u\in{\ensuremath{\mathbb V}} \setminus\{v_{3-i}\} :\omega_{v_iu}^{G,\kappa}(U_{v_1v_2})=1\right\}\right|<\k\right\}\mbox{ for } i\in\{1,2\}.\]
Using the Harris graphical construction, one can establish that this model is well defined (see e.g.\ \cite{Liggett05}). On the other hand, it has a dependence of infinite range and does not satisfy the FKG inequality, nor the insertion tolerance (or finite energy) property. When $\kappa\ge\deg(G)$ the constrained-degree percolation model at time $t$ reduces to the ordinary Bernoulli bond percolation model with parameter $t$.
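To make the dynamics concrete, the following Python sketch simulates the model on a finite box of ${\ensuremath{\mathbb Z}} ^2$ following the graphical construction: edges attempt to open in increasing order of their clocks $U_e$ and succeed precisely when both endpoints currently have open degree at most $\kappa-1$. The box size and parameters are illustrative, and boundary effects are ignored.
\begin{verbatim}
import random

def simulate(n=50, kappa=3, t=0.65, seed=0):
    # Constrained-degree percolation on an n x n box of Z^2 via the
    # graphical construction: process edges in increasing clock order.
    random.seed(seed)
    sites = [(x, y) for x in range(n) for y in range(n)]
    edges = [((x, y), (x + 1, y)) for x in range(n - 1) for y in range(n)] \
          + [((x, y), (x, y + 1)) for x in range(n) for y in range(n - 1)]
    clocks = {e: random.random() for e in edges}
    degree = {v: 0 for v in sites}
    open_edges = []
    for e in sorted(edges, key=clocks.get):
        if clocks[e] > t:
            break  # all remaining clocks ring after time t
        u, v = e
        if degree[u] <= kappa - 1 and degree[v] <= kappa - 1:
            open_edges.append(e)  # the bond opens and stays open
            degree[u] += 1
            degree[v] += 1
    return open_edges

print(len(simulate()), "bonds open at time t")
\end{verbatim}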
Given $\o\in\{0,1\}^{\ensuremath{\mathbb E}} $, the notation $0\leftrightarrow\infty$ means that there are infinitely many vertices connected to origin by paths of open edges in $\o$. We simplify the notation denoting the event $\{0\leftrightarrow\infty\mbox{ in }\omega^{G,\kappa}(t)\}$ by $\{0\leftrightarrow\infty\mbox{ at }t\}$.
The {\em probability of percolation} is the function $\theta^{G,\kappa}(t):[0,1]\rightarrow[0,1]$, where $\theta^{G,\kappa}(t)={\ensuremath{\mathbb P}} (0\leftrightarrow\infty\mbox{ at }t)$. By definition, the function $\theta^{G,\kappa}(t)$ is non-decreasing in $t$, then it is natural to define the {\em critical time} \[t_{\mathrm{c}}^{G,\k}:=\sup\{t\in [0,1]:\theta^{G,\kappa}(t)=0\},\] with the convention $\sup \varnothing = +\infty$. Whenever they are clear from the context, we will drop the indices $G$ and $\kappa$ from the notation.
Throughout this work, we will almost exclusively deal with the hypercubic lattice ${\ensuremath{\mathbb L}} ^d=({\ensuremath{\mathbb Z}} ^d,{\ensuremath{\mathbb E}} ^d)$, where ${\ensuremath{\mathbb E}} ^d=\{uv\in{\ensuremath{\mathbb Z}} ^d\times{\ensuremath{\mathbb Z}} ^d: \|u-v\|_1=1\}$.
\subsection{Related Works}
\label{subsec:background}
In \cite{Teodoro14}, it was proved that for the hypercubic lattice ${\ensuremath{\mathbb L}} ^d,\ d\geq 2$, when $\kappa=2d-1$, there is percolation at time $t=1$, that is, $\theta^{{\ensuremath{\mathbb L}} ^d, 2d-1}(1)>0$. In \cite{DeLima20}, it was shown that there is a nontrivial phase transition on the square lattice ${\ensuremath{\mathbb L}} ^2$ in the nontrivial case $\kappa=3$. More precisely (see Theorem 1 therein), it was proved that $t_{\mathrm{c}}^{{\ensuremath{\mathbb L}} ^2,3}\in \left(\frac{1}{2},1\right)$. With a martingale argument, it was also proved in \cite{DeLima20} that for all dimensions $d\geq 2$ and $\kappa=2$, $t_{\mathrm{c}}^{{\ensuremath{\mathbb L}} ^d,2}=+\infty$, that is, $\theta^{{\ensuremath{\mathbb L}} ^d,2}(t)=0$ for all $t\in [0,1].$ We emphasise that, prior to the present work, nothing was known about percolation for other values of $\kappa$, or for $t<1$, when $d\geq 3$.
In \cite{DeLima20}, the uniqueness of the infinite cluster was also studied, as well as the constrained-degree percolation on the regular $d$-ary trees, ${\ensuremath{\mathbb T}} ^d$, for which it is proved that $t_{\mathrm{c}}^{{\ensuremath{\mathbb T}} ^d,3}<1$ for all $d\geq 2$.
The idea of random systems of constraints has a long history in the physics literature and goes back to the work of Flory \cite{Flory39}, in which the dimer (or domino) tiling problem was introduced. In 1979, the paper \cite{Gaunt79} introduced the percolation with restricted valence model, which is essentially the same model as the constrained-degree percolation studied here, but in a site percolation version instead of bond percolation.
Some recent mathematical works on variations of percolation models with some kind of constraints on the vertices are \cites{Grimmett10, Grimmett17,Garet18a,Holroyd21}.
\subsection{Results}
\label{subsec:results}
The main goal of this work is to prove that there is a phase transition ($t_{\mathrm{c}}^{{\ensuremath{\mathbb L}} ^d,\kappa}<1$) for the hypercubic lattice, ${\ensuremath{\mathbb L}} ^d$, for $d\geq 3$ and some nontrivial values of $\kappa$, that is, $3\leq\kappa\leq 2d-1$. Moreover, we seek non-perturbative results applying beyond $t\approx 1$, as well as for $\kappa$ much smaller than the least constrained case, $\k=2d-1$. Indeed, we even manage to treat constraints not diverging with $d\to\infty$. From now on, we will denote the critical time for the hypercubic lattice ${\ensuremath{\mathbb L}} ^d$, $t_{\mathrm{c}}^{{\ensuremath{\mathbb L}} ^d,\k}$, by $t_{\mathrm{c}}^\kappa (d)$.
\begin{thm}
\label{th:general}
Let $c=1.7$ and $\k\ge 10$. Then for any $d>\k/2$ we have
\[t_{\mathrm{c}}^{\k}(d)\le c/d.\]
Moreover, still with $c=1.7$, for lower dimensions we have the stronger results
\[t_{\mathrm{c}}^{\k}(d)\le\frac{c}{d}\quad \text{ for }\begin{cases}
d=4&\k=7,\\
d\in\{5,6\}&\k\ge 8,\\
d\in\{7,8,9,10,11,12,14,16\}&\k\ge 9.
\end{cases}
\]
\end{thm}
Notice that by a standard branching process comparison for ordinary percolation, it is easy to show that $t_{\mathrm{c}}^{\k}(d)\ge 1/(2d-1)$ for all $\k$ and $d$, so that \cref{th:general} shows that $t_{\mathrm{c}}^{\k}(d)=\Theta(1/d)$ as $d\to\infty$. In fact, for high dimensions and weak constraints, the following sharp result is obtained as a byproduct of the proof of \cref{th:general}.
\begin{thm}
\label{th:high}
For any two integer sequences $(\k_n)$ and $(d_n)$ such that $\k_n\to\infty$ and $d_n\to\infty$ as $n\to\infty$, we have
\[\lim_{n\to\infty}d_n\cdot t_{\mathrm{c}}^{\k_n}(d_n)=\frac{1}{2}.\]
\end{thm}
In order to illustrate the fact that our approach is not intrinsically high-dimensional, we further adapt it to obtain a nontrivial result even in three dimensions.
\begin{thm}
\label{th:3d}
Let $d=3$ and $\k=5$. Then
\[t_{\mathrm{c}}^{\k}(d)\le 0.62.\]
Moreover, the same holds for the graph ${\ensuremath{\mathbb L}} ^2_{\boxtimes} =({\ensuremath{\mathbb Z}} ^2,\{uv\in{\ensuremath{\mathbb Z}} ^2\times{\ensuremath{\mathbb Z}} ^2:\|u-v\|_\infty =1\})$, the matching graph of ${\ensuremath{\mathbb L}} ^2$, obtained from the square lattice by adding the diagonals of each face, and $\k=7$.
\end{thm}
Finally, let us mention that in \cref{app} we establish a quantitative improvement of a result of Chayes and Schonmann \cite{Chayes00} on mixed site-bond percolation in the case of ${\ensuremath{\mathbb L}} ^2$, which may be of independent interest. It is used in the proof of \cref{th:general} to allow the treatment of smaller values of $\k$.
\subsection{Ideas of the proofs}
\label{subsec:sketch}
Let us give an overview of the proofs of our main results. In both cases the idea, though peculiar, is quite simple. In this section we prefer to omit some technical issues in order not to obscure the essence, hoping that this will not lead to confusion.
\subsubsection{General result}
\label{subsubsec:sketch:general}
We first sketch the proof of \cref{th:general}. Although the same proof will directly apply to all sets of parameters in the statement of the theorem, the reader is advised to think of $t=c/d$ with $c$ sufficiently large, but fixed; $\k$ sufficiently large, depending on $c$, but also fixed; $d$ even and going to infinity. This is essentially the setting of \cref{th:high}.
Naively, the guiding principle is ``look for percolation via unsaturated sites'', which is a non-monotone event and thus prohibits perturbative arguments. Intuitively, for our choice of parameters each site should have less than $\k$ edges with $U_e\le t$ (we call such edges feasible) with high probability (since $\k\gg 2c$ and the degree of each vertex is approximately Poisson with parameter $2c$). Discarding the remaining vertices (called saturated), we only need to show that edges are still open with fairly high probability. Fortunately, the information that a vertex was not saturated is not significant, as this event is likely, so the edges of those vertices should almost form an independent Bernoulli bond percolation. It is then not unreasonable to hope that the resulting nearly independent mixed site-bond percolation with site parameter close to $1$ and bond parameter close to $t$ would be supercritical, as it is known that the critical probability of bond percolation on ${\ensuremath{\mathbb L}} ^d$ satisfies $p_{\mathrm{c}}(d)=(1+o(1))/2d<c/d=t$ \cite{Kesten90}. Unfortunately, we could not formalise this intuition and rather take several detours, while keeping the same guideline.
The first technique we rely on originates from a classical work of Holley and Liggett \cite{Holley78}, where it was used to prove an upper bound of order $1/d$ on the critical parameter of the contact process in high dimensions. Similarly to \cite{Holley78} we map ${\ensuremath{\mathbb Z}} ^d$ to ${\ensuremath{\mathbb Z}} ^2$ with each two neighbours connected by $\lfloor d/2\rfloor$ edges in one direction and $\lfloor d/2\rfloor$ in the opposite one as follows. We split the $d$ vectors of the canonical basis of ${\ensuremath{\mathbb Z}} ^d$ in two halves, viewing the first $\lfloor d/2\rfloor$ as pointing east (their opposites point west), while the other half point north (see \cref{eq:Fi}).
There are several advantages to working in two dimensions rather than directly on ${\ensuremath{\mathbb L}} ^d$. Firstly, the control we have on mixed percolation deteriorates quickly with dimension. Secondly, exploring only few of the edges around a vertex allows us to keep the distributions of $U_e$ close to their original \emph{i.i.d.} uniform laws despite the dependencies. In particular, under this mapping the percolation model acquires the ``finite energy'' property, though we will not use it explicitly.
We build an exploration of a part of the cluster of $0$ in the original constrained percolation model, so as to compare it with mixed site-bond percolation on ${\ensuremath{\mathbb L}} ^2$ via the mapping described above. The exploration should rather be viewed in ${\ensuremath{\mathbb L}} ^2$, as we will never visit the same site there twice. Starting with $0$ as our only active site, we repeat the following steps until we run out of untreated active sites. We first verify whether the active site under consideration is saturated (has more than $\k$ feasible edges). If it is, we close it and move on. Notice that we may have discovered at most $3$ feasible edges to that site previously, since it has degree $4$ in ${\ensuremath{\mathbb L}} ^2$ and there is no point in considering vertices all of whose neighbours are already in the cluster of $0$ in ${\ensuremath{\mathbb L}} ^2$. Thus, the vertex remains open with high probability, as $\k-3\gg 2c$.
Knowing that a vertex is open does not tell us much about whether or not we can reach its neighbours in ${\ensuremath{\mathbb L}} ^2$ via feasible edges. We activate each of the inactive neighbouring vertices if we find at least one feasible edge among the $\lfloor d/2\rfloor$ from our current position. Since it suffices to find one feasible bond per neighbour, the next neighbour is not heavily penalised by the previous one becoming active, as $\k-3\gg 2c$. Thus, the probability of a neighbour being activated, given that our original site remained open, is close to the probability that a Poisson random variable with parameter $c/2$ (as there are $\lfloor d/2\rfloor$ edges) is non-zero, which is close to $1$ for $c$ sufficiently large.
Summing up, when viewed in ${\ensuremath{\mathbb L}} ^2$, the exploration opens each active site with probability close to $1$ and then activates each of its inactive neighbours with probability close to $1$. This clearly corresponds to the exploration of the cluster of $0$ in ${\ensuremath{\mathbb L}} ^2$ in a mixed site-bond percolation with both parameters close to $1$. Since this is easily seen to be supercritical, we obtain that with positive probability there is an infinite path in ${\ensuremath{\mathbb L}} ^d$ whose edges are all feasible and whose sites are all unsaturated, which concludes the proof that $t\ge t_{\mathrm{c}}^{\k}$.
For ``finite'' values of $c$, $\k$ and $d$ we aim for a comparison with a site-bond percolation with bond parameter slightly larger than $1/2$ and site parameter very close to $1$. We then use a refinement of a result of Chayes and Schonmann \cite{Chayes00} established in \cref{app} to prove that the parameters are indeed supercritical. In order to prove \cref{th:high} we employ the same strategy, with the difference that we now divide the $d$ directions into $d'\ll\min(d,\k)$ groups and thus reduce the problem to mixed site-bond percolation on ${\ensuremath{\mathbb L}} ^{d'}$. We then use a simpler qualitative version of the result of \cite{Chayes00} as obtained already by Liggett, Schonmann and Stacey \cite{Liggett97} together with Kesten's result \cite{Kesten90} affirming that $d'\cdot p_{\mathrm{c}}(d')\to1/2$ for ordinary bond percolation as $d'\to\infty$.
It is important to note that our argument is intrinsically non-monotone and it is therefore not possible to bring the matter down to a qualitative result on mixed site-bond percolation such as the classical theorem of Liggett, Schonmann and Stacey \cite{Liggett97}. Instead, we require a rather good quantitative bound on the critical curve of mixed percolation. This non-monotonicity is also the reason for obtaining quite strong non-perturbative upper bounds on $t_{\mathrm{c}}^{\k}$ in \cref{th:general}, contrary to previous works \cites{DeLima20, Teodoro14}, but, on the downside, for the values of $\k$ and $d$ for which the resulting mixed percolation is subcritical for any choice of $t$, we recover no result at all.
\subsubsection{The cubic lattice}
\label{subsubsec:3d}
The proof of \cref{th:3d} will use some of the ingredients of \cref{th:general}. The two most important differences are that we will no longer systematically discard saturated vertices and that we will look for a comparison with two-dimensional bond percolation rather than mixed site-bond percolation. We will focus on ${\ensuremath{\mathbb L}} ^3$, as ${\ensuremath{\mathbb L}} ^2_{\boxtimes}$ is treated identically. We fix $\k=5$ and $t=0.62$ as in \cref{th:3d}.
This time no mapping is required to reduce ${\ensuremath{\mathbb L}} ^3$ to ${\ensuremath{\mathbb L}} ^2$, we rather directly look for percolation in the horizontal plane containing $0$. As it was pointed out in \cites{DeLima20,DoAmaral21}, the constrained percolation model does not have a clear monotonicity w.r.t.\ the underlying graph (see \cref{sec:open} below), even if the constraint $\k$ is adjusted accordingly, so we will not rely on any type of monotonicity. Instead, we will use the edges pointing out of the plane to save certain vertices which seem saturated in the plane.
We explore the cluster of $0$ in the plane, treating one active vertex $a$ at a time as follows. We activate the neighbours of $a$ if their bond to $a$ is feasible with the exception of the case in which $a$ is saturated (i.e.\ all 6 edges from $a$ are feasible). If $a$ is saturated, rather than closing it, we look at whether one of its edges going out of the plane happens to have $U_e$ larger than all the edges from $a$ in the plane. If that is the case, we activate all neighbours, while if it fails, we activate none.
Our goal is then to show that the net result of treating each vertex stochastically dominates activating each neighbour independently with probability $p>1/2$. Intuitively, the probability of activating all neighbours should be $t^k-t^{2+k}\frac{k}{k+2}$, where $k$ is the number of inactive neighbours. It is then reasonable to hope to be able to establish the desired stochastic domination with
\[p=\max_{k\in\{1,2,3\}}\left(t^k-t^{2+k}\frac{k}{k+2}\right)^{1/k}>1/2.\]
However, more care is needed, as we do not only look at the feasibility of edges, but also at the actual value of their $U_e$. This information may potentially accumulate, propagate and interfere with the probability that the unexplored edges out of the plane have larger $U_e$ than the ones already (partially) explored in the plane from the same vertex. Fortunately, carefully choosing what information to reveal, we are able to ensure that when we activate a vertex the corresponding $U_e$ is either uniformly distributed on $[0,t]$ (as we know it is feasible) or is further biased towards small values, which is in our favour when we compare it with edges out of the plane. This is quite natural, as the only information we may acquire on $U_e$ in addition to being feasible is that it is smaller than one of the edges out of the plane at one of its endpoints.
\section{General case---proof of \cref{th:general,th:high}}
\label{sec:high}
In this section we start by proving \cref{th:general}, assuming the results on mixed site-bond percolation from \cref{app}, namely \cref{cor:pc}, which will be used as a black box. We refer the reader to \cref{subsubsec:sketch:general} for a high-level sketch of the argument.
\begin{proof}[Proof of \cref{th:general}]
Let $t=c/d$ with $c=1.7$ and call an edge $e$ \emph{feasible} if $U_e\le t$. We consider the map
\begin{equation}
\label{eq:Fi}\F(x)=\left(\sum_{i=1}^{\lfloor d/2\rfloor}x_i,\sum_{i=d-\lfloor d/2\rfloor+1}^dx_i\right)
\end{equation}
from ${\ensuremath{\mathbb Z}} ^d$ to ${\ensuremath{\mathbb Z}} ^2$. We will build a supercritical mixed site-bond percolation process on ${\ensuremath{\mathbb L}} ^2$ stochastically minorating the image of the cluster of $0$ in the constrained percolation model on ${\ensuremath{\mathbb L}} ^d$ with parameters $\k$ and $t$. We will do so by exploring the cluster of $0$ in the following way.
We will construct $A_{n}\subset {\ensuremath{\mathbb Z}} ^d$ the set of \emph{active} sites at time $n\in{\ensuremath{\mathbb N}} $ and the sets of \emph{open}, \emph{closed} and \emph{useless} vertices, $O_{n},C_{n},U_{n}\subset A_{n}$ respectively. For all $n\in{\ensuremath{\mathbb N}} $ we set $A'_{n}=\F(A_{n})$, $O'_{n}=\F(O_{n})$, $C'_{n}=\F(C_{n})$, and $U'_{n}=\F(U_{n})$. Unless otherwise stated, when incrementing $n$ all the above sets remain unchanged.
\begin{algo}
\label{algo:general}
Initialise $A_0=\{0\}$, $U_0=C_0=O_0=\varnothing$ and $n=0$.
\begin{enumerate}[label=Step \arabic*,ref=Step \arabic*]
\item\label{step:1}
If $A_n=O_n\cup C_n\cup U_n$, then END. Otherwise, fix $a\in A_n\setminus(O_n\cup C_n\cup U_n)$. If $\F(a)$ has no neighbour outside $A'_{n}$, set $U_{n+1}=U_n\cup\{a\}$, increment $n$ and repeat \ref{step:1}. Otherwise,
\begin{itemize}
\item if $a$ has at most $\k$ feasible edges, set $O_{n+1}=O_n\cup\{a\}$, increment $n$ and go to \ref{step:2};
\item otherwise, set $C_{n+1}=C_n\cup\{a\}$, increment $n$ and repeat \ref{step:1}.
\end{itemize}
It is important to note that we do not explore the state (feasible or not) of the edges of the vertex $a$, but just ask whether there are more than $\k$ feasible ones or not.
\item\label{step:2}
Let $\{o\}=O_n\setminus O_{n-1}$. Let $V'=\{v'_i,i\in I\}$ be the set of neighbours of $\F(o)$ which are not in $A'_n$. For each $i\in I$ explore the $\lfloor d/2\rfloor$ edges from $o$ to $\F^{-1}(v_i')$ one by one until a feasible edge is discovered. If such an edge is found, let $v_i$ be its endpoint (other than $o$). Set $A_{n+1}=A_n\cup\{v_i,i\in I\}$, increment $n$, and go to \ref{step:1}.
\end{enumerate}
\end{algo}
Let us make a few observations about this algorithm. First, the map $\F$ is always injective on $A_n$, since vertices $v'\in V'$ considered for activation in \ref{step:2} are not in $A_{n}'$ and at most one preimage by $\F$ of each $v'$ is activated, corresponding to the first feasible edge discovered. Furthermore, it is clear that all vertices in $O_n$ have at most $\k$ feasible edges, so feasible edges between vertices in $O_n$ are open in the constrained percolation. Moreover, by induction all vertices in $O_n$ are connected to $0$ (since each new active vertex is connected to an open one). In particular, if the algorithm does not finish, then $0$ belongs to an open infinite cluster in the constrained percolation model. On the other hand, in ${\ensuremath{\mathbb L}} ^2$ the algorithm only considers neighbours of $O'_n$, so it never terminates if and only if $0$ is in an infinite cluster $\bigcup_n O'_n$ in ${\ensuremath{\mathbb L}} ^2$.
We next analyse what information we have on the feasibility of different edges. Clearly, nothing is known about edges not incident with any active vertex. Let $n>0$ and $a\in A_n\setminus(O_n\cup C_n\cup U_n)$ be the vertex considered by \cref{algo:general} in \ref{step:1}. It is not hard to see that $a$ became active (in \ref{step:2}) at the time when we discovered the first feasible edge $e(a)$ connecting $a$ to an open vertex. Assume that $a$ does not become useless (which is purely deterministic at the time of consideration of $a$). Then $a$ has at most two edges from other active vertices (since $\F$ is injective on $A_n$) and we have no information on its remaining (at least $2d-3$) edges other than $e(a)$. If $a$ is declared open, in \ref{step:2} we additionally know that it has at most $\k$ feasible edges (including the one, two or three edges we previously had some information on).
Let $B_{m,p}$ denote the cumulative distribution function of the binomial law with parameters $m$ and $p$ (which is a step function continuous to the right).
\begin{claim}
The probability that a vertex considered in \ref{step:1} and not declared useless becomes open, conditionally on the information revealed by the algorithm until that moment, is at least $s:=B_{2d-3,t}(\k-3)$.
\end{claim}
\begin{proof}
There are $j\le3$ explored edges to active vertices and nothing is known about the other $2d-j$ edges, so the conditional probability we seek is at least $B_{2d-j,t}(\k-j)\ge s$.
\end{proof}
\begin{claim}
The probability that a neighbour $v'$ of $\F(o)$ in \ref{step:2} becomes active, conditionally on the information revealed by the algorithm until the moment when $v'$ is considered, is at least \[b:=1-\frac{(1-t)^{\lfloor d/2\rfloor}}{s}\ge 1-\frac{1}{s\cdot\exp\left(\frac{c}{2}\left(1-\frac 1 d\right)\right)}.\]
\end{claim}
\begin{proof}
Let $i$ be the number of neighbours already activated by $o$. Observe that we have revealed $i+1$ feasible edges of $o$ ($e(o)$ used to make $o$ active and one for each neighbour activated by $o$ until now during \ref{step:2}) as well as several unfeasible edges. Additionally, we have information on $j\le 2$ more of its edges (to active vertices not activated by $o$). However, $i+j+1\le 3$, since there are only $4$ neighbours of $o'$ and $\F$ is injective on $A_n$.
Let us denote by $X_1,\dots, X_{k}$ with $\lfloor d/2\rfloor \le k\le 2d-3$ the ${\ensuremath{\mathbbm{1}}} _{U_e\le t}$ for the unexplored edges $e$ of $o$, labelled so that $X_1,\dots,X_{\lfloor d/2\rfloor}$ correspond to edges from $o$ to $\F^{-1}(v')$. The $X_l$ are \emph{i.i.d.} Bernoulli variables with parameter $t$. We further define $X_{k+1},\dots,X_{2d}$ similarly for the remaining edges from $o$. Then $2d-k-i-j-1$ of the latter $X_l$ are already known to be $0$ and up to reordering, we assume them to be $X_{k+i+j+2},\dots,X_{2d}$. In total, the probability that $v'$ is not activated by $o$ is
\begin{multline*}
{\ensuremath{\mathbb P}} \left(\sum_{l=1}^{\lfloor d/2\rfloor} X_l=0\left|\sum_{l=1}^{k+i+j+1}X_l\le\k\right.\right)\\
\le\max_{m\in[1,i+j+1]}{\ensuremath{\mathbb P}} \left(\sum_{l=1}^{\lfloor d/2\rfloor} X_l=0\left|\sum_{l=1}^{k}X_l\le\k-m\right.\right)\le\frac{B_{\lfloor d/2\rfloor,t}(0)}{B_{k,t}(\k-i-j-1)}.
\end{multline*}
Thus, it suffices to note that
\[B_{k,t}(\k-i-j-1)\ge B_{2d-i-j-1,t}(\k-i-j-1)\ge B_{2d-3,t}(\k-3)=s.\qedhere\]
\end{proof}
Observe that $s$ and $b$ are increasing in $\k$, so it suffices to treat $\k=10$. Let us note that if we only wanted to prove that $t_{\mathrm{c}}^{\k}(d)\le c/d$ for $d$ large enough, we are already done by \cref{cor:pc} and the fact that \begin{align*}\lim_{d\to\infty}s&{}= P_{2c}(\k-3)\approx0.9770, &\lim_{d\to\infty}b&{}=1-\frac{e^{-c/2}}{P_{2c}(\k-3)}\approx 0.5625,\end{align*}
where $P_\l$ denotes the cumulative distribution function of a Poisson random variable with parameter $\l$. In order to obtain the desired result for all $d$, we will need a quantitative version of this convergence.
We claim that for all $d>\k/2$ we have $s\ge 0.9765$ and $b\ge 0.5622$. Indeed, one may verify these inequalities directly for $d\le 4000$ (by computer) and, for $d>4000$ use the facts that
\[s\ge B_{2d,t}(\k-3)\ge P_{2c}(7)-\frac{c\left(1-e^{-2c}\right)}{4000}\]
by Chen's inequality (see e.g.\ \cite{Steele94}*{Eq. (5.5)}) and so
\[b\ge 1-\frac{e^{-\frac{c}{2}\left(1-\frac{1}{4000}\right)}}{P_{2c}(7)-\frac{c\left(1-e^{-2c}\right)}{4000}}.\]
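The finite-range check mentioned above ($d\le 4000$) is a direct computation. As an illustration, the following sketch evaluates $s=B_{2d-3,t}(\k-3)$ and $b=1-(1-t)^{\lfloor d/2\rfloor}/s$ for $c=1.7$, $\k=10$ and all $\k/2<d\le 4000$, and checks the claimed lower bounds (the range $d>4000$ being covered by Chen's inequality as above).
\begin{verbatim}
from math import floor
from scipy.stats import binom

c, kappa = 1.7, 10
worst_s, worst_b = 1.0, 1.0
for d in range(6, 4001):                    # all d with kappa/2 < d <= 4000
    t = c / d
    s = binom.cdf(kappa - 3, 2 * d - 3, t)  # s = B_{2d-3,t}(kappa - 3)
    b = 1 - (1 - t) ** floor(d / 2) / s     # b = 1 - (1-t)^{floor(d/2)} / s
    worst_s = min(worst_s, s)
    worst_b = min(worst_b, b)
print(worst_s >= 0.9765, worst_b >= 0.5622)
\end{verbatim}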
From the above it remains to check that mixed percolation with site and bond parameters $s\ge 0.9765$ and $b\ge0.5622$ respectively in two dimensions does percolate with positive probability, which follows directly from \cref{cor:pc}.
Turning to the specific low-dimensional cases in the statement of \cref{th:general}, the same proof applies with the corresponding sets of parameters. Indeed, in all cases we have either $s\ge 0.9809$ and $b\ge 0.5596$ or $s\ge0.9708$ and $b\ge0.5806$, which are supercritical by \cref{cor:pc}.
\end{proof}
We next explain the minor modifications needed in the proof above to establish \cref{th:high}.
\begin{proof}[Proof of \cref{th:high}]
As explained in \cref{subsec:results}, $t_{\mathrm{c}}^{\k}(d)\ge 1/(2d-1)$, so it suffices to prove that for any $c>1/2$ and $\k$ and $d$ large enough depending on $c$ we have $t_{\mathrm{c}}^{\k}(d)\le c/d$. Let us fix $c>1/2$, $d'$ large enough depending on $c$, so that $p_{\mathrm{c}}(d')<1-e^{-c/d'}$ for ordinary bond percolation, which is possible, since $\lim_{d'}d'p_{\mathrm{c}}(d')=1/2$ \cite{Kesten90}. We then fix $\k,d$ large enough depending on $c$ and $d'$ and set $t=c/d$.
Instead of \cref{eq:Fi}, we consider the map
\[\F:{\ensuremath{\mathbb Z}} ^d\to{\ensuremath{\mathbb Z}} ^{d'}:(x_i)_{i=1}^{d}\mapsto\left(\sum_{j=1}^{\lfloor d/d'\rfloor}x_{j+(i-1)\lfloor d/d'\rfloor}\right)_{i=1}^{d'}.\]
We then proceed exactly as in the proof of \cref{th:general} to establish a comparison with mixed site-bond percolation on ${\ensuremath{\mathbb L}} ^{d'}$ with parameters \begin{align*}
s&{}=B_{2d-(2d'-1),t}(\k-(2d'-1)),\\
b&{}=1-\frac{(1-t)^{\lfloor d/d'\rfloor}}{s}.
\end{align*}
By the Poisson approximation, letting $d,\k\to\infty$ (regardless of the relationship between the two), while keeping $d'$ fixed, we have $s\to 1$ and $b\to 1-e^{-c/d'}>p_{\mathrm{c}}(d')$. In particular, taking $d$ and $\k$ large enough we have $s\ge 1-\varepsilon$ and $b\ge p_{\mathrm{c}}(d')+\d$ for any $\varepsilon,\d>0$ small enough depending only on $d'$.
Considering site percolation on ${\ensuremath{\mathbb L}} ^{d'}$ with parameter $s\ge 1-\varepsilon$, by \cite{Liggett97} we have that for $\varepsilon$ small enough depending on $d'$ and $\d$ it stochastically dominates ordinary bond percolation with parameter $b'=1-\d$. We may then conclude that mixed site-bond percolation on ${\ensuremath{\mathbb L}} ^{d'}$ with parameters $s$ and $b$ stochastically dominates pure bond percolation with parameter $bb'> p_{\mathrm{c}}(d')$, which concludes the proof.
\end{proof}
\section{Low dimensional models---proof of \cref{th:3d}}
\label{sec:low}
In this section we prove \cref{th:3d}, refining our strategy from \cref{sec:high} as outlined in \cref{subsubsec:3d}.
\begin{proof}[Proof of \cref{th:3d}]
Let us begin by treating the cubic lattice, from which the two-dimensional result on ${\ensuremath{\mathbb L}} ^2_{\boxtimes}$ will follow immediately.
For any vertex $v\in {\ensuremath{\mathbb Z}} ^3$ we denote by $E_v=\{uv\in {\ensuremath{\mathbb E}} ^3\}$ the set of edges from $v$. Denote by $P$ the plane ${\ensuremath{\mathbb Z}} ^2\times\{0\}\subset{\ensuremath{\mathbb Z}} ^3$. Our aim will be to establish a comparison with supercritical bond percolation in $P$. Let $\k=5$, $t=0.62$ and call an edge \emph{feasible} if $U_e<t$. We will explore the edges with at least one vertex in $P$ according to the following somewhat improved version of \cref{algo:general}.
We will construct the sets of \emph{active, open} and \emph{closed} sites $A_n\subset P$, $O_n,C_n\subset A_n$ respectively, as well as the sets $B_n\subset \{uv\in{\ensuremath{\mathbb E}} ^3:u\in P,v\in P\}$ and $S_n\subset \{uv\in {\ensuremath{\mathbb E}} ^3:u\in P\}$ of \emph{boundary} and \emph{spoilt} edges respectively, as follows (see \cref{fig:algo:3d} for an example). Unless otherwise stated, when incrementing $n$ all the above sets remain unchanged. Whenever an edge $e$ becomes spoilt, we reveal the value of $U_e$.
\begin{figure}
\centering
\begin{tikzpicture}[x=2.0cm,y=2.0cm]
\draw [ultra thick] (0,0)-- (1,0);
\draw [ultra thick] (0,0)-- (0,1);
\draw [ultra thick] (1,0)-- (1,1);
\draw [ultra thick] (1,0)-- (1,-1);
\draw [ultra thick] (1,-1)-- (0,-1);
\draw [ultra thick] (1,-1)-- (2,-1);
\draw [ultra thick] (1,1)-- (2,1);
\draw [ultra thick] (2,1)-- (3,1);
\draw [ultra thick,color=gray] (3,1)-- (3,0);
\draw [ultra thick] (3,1)-- (4,1);
\draw [ultra thick] (4,1)-- (4,0);
\draw [ultra thick] (4,0)-- (4,-1);
\draw [dashed] (0.8,-1.2)-- (1,-1);
\draw (1,-1)-- (1.2,-0.8);
\draw (1.8,-1.2)-- (2,-1);
\draw (2,-1)-- (2.2,-0.8);
\draw [ultra thick] (1,0)-- (2,0);
\draw (1,0)-- (1.2,0.2);
\draw (1,0)-- (0.8,-0.2);
\draw (1,1)-- (0.8,0.8);
\draw (1,1)-- (1.2,1.2);
\draw [dashed] (0,0)-- (0.2,0.2);
\draw [dashed] (0,0)-- (-0.2,-0.2);
\draw [ultra thick,color=gray] (4,-1)-- (3,-1);
\draw [dashed] (2,1)-- (1.8,0.8);
\draw [dashed] (2,1)-- (2.2,1.2);
\draw [dashed] (3,1)-- (2.8,0.8);
\draw (3,1)-- (3.2,1.2);
\draw (4,1)-- (3.8,0.8);
\draw (4,1)-- (4.2,1.2);
\draw (4,0)-- (3.8,-0.2);
\draw (4,0)-- (4.2,0.2);
\draw [dashed] (4,-1)-- (3.8,-1.2);
\draw (4,-1)-- (4.2,-0.8);
\draw (0,1)-- (-0.2,0.8);
\draw (0,1)-- (0.2,1.2);
\draw [ultra thick,color=gray] (4,0)-- (5,0);
\draw [dashed] (0,0)-- (-1,0);
\draw [dashed] (2,0)-- (3,0);
\draw [dashed] (2,1)-- (2,2);
\draw [dashed] (1,1)-- (1,2);
\draw [ultra thick,color=gray] (3,1)-- (3,2);
\draw [dashed] (4,1)-- (4,2);
\draw [dashed] (4,1)-- (5,1);
\draw [dashed] (4,-1)-- (5,-1);
\draw [dashed] (4,-1)-- (4,-2);
\draw [dashed] (1,-1)-- (1,-2);
\draw [dashed] (-1,-1)-- (0,-1);
\draw [dashed] (0,-1)-- (0,-2);
\draw [dashed] (0,-1)-- (0.2,-0.8);
\draw (0,-1)-- (-0.2,-1.2);
\draw (2,0)-- (1.8,-0.2);
\draw [dashed] (2,0)-- (2.2,0.2);
\draw (2,-1)-- (2,-2);
\draw (2,-1)-- (3,-1);
\draw (2,-1)-- (2,0);
\draw (0,1)-- (0,2);
\draw (0,1)-- (1,1);
\draw (0,1)-- (-1,1);
\draw (3,0)-- (4,0);
\draw [dashed] (0,-1)-- (0,0);
\draw [dashed] (2,1)-- (2,0);
\draw [rotate around={45:(1.1,0.1)},ultra thick] (1.1,0.1) ellipse (0.32cm and 0.16cm);
\draw [rotate around={45:(3.9,-0.1)},ultra thick] (3.9,-0.1) ellipse (0.32cm and 0.16cm);
\fill (3,-1) circle (3pt);
\fill (5,0) circle (3pt);
\fill (3,0) circle (3pt);
\fill (3,2) circle (3pt);
\draw (0,1) node[cross,ultra thick] {};
\draw (2,-1) node[cross,ultra thick] {};
\draw (3,-0.4) node[label=$a$] {};
\draw (2.8,0.25) node[label=$b(a)$] {};
\draw (0.9,-1) node[label=$0$] {};
\draw (0.5,-0.4) node[label=$e_1$] {};
\draw (0.9,0.25) node[label=$e_2$] {};
\draw (1.5,-0.4) node[label=$e_3$] {};
\draw (1.25,0.15) node[label=$e$] {};
\draw (0.8,-0.75) node[label=$b(v)$] {};
\draw (1.1,-0.3) node[label=$v$] {};
\draw (4.5,-0.4) node[label=$e'_1$,anchor=north east] {};
\draw (3.9,-0.75) node[label=$e'_2$,anchor=north east] {};
\draw (3.7,-0.4) node[label=$e'$,anchor=north east] {};
\draw (3.8,0.25) node[label=$b(v')$] {};
\draw (4.1,-0.3) node[label=$v'$] {};
\end{tikzpicture}
\caption{Illustration of \cref{algo:3d}. The currently discovered part of the cluster of the origin is thickened. The active sites which are neither open or closed yet, $A_n\setminus (O_n\cup C_n)$ are represented by dots, the closed ones, $C_n$, are crossed out, the open sites, $O_n$, are all the remaining vertices of the thick cluster, the boundary edges, $B_n$, are drawn in grey, the spoilt ones, $S_n$, are black. The solid lines represent feasible edges, while dashed ones are not feasible. Notice that the vertex $a$ will surely become closed when it is examined, as it has no inactive neighbours in $P$. The two circled edges going out of the plane $P$ were used to save their respective vertices from being closed due to having all their 6 edges feasible. Namely, we have $U_{e}\ge\max(U_{e_1},U_{e_2},U_{e_3},U_{b(v)})$ and $U_{e'}\ge\max(U_{e'_1},U_{e'_2},U_{b(v')})$.
\label{fig:algo:3d}}
\end{figure}
\begin{algo}
\label{algo:3d}
Initialise $A_0=\{0\}$, $O_0=C_0=B_0=S_0=\varnothing$ and $n=0$. REPEAT the following.
If $A_n=O_n\cup C_n$, then END. Otherwise, fix $a\in A_n\setminus(O_n\cup C_n)$ and let $b(a)$ denote the edge in $B_n$ with endpoint $a$ (there will always be exactly one such edge except for $a=0$, in which case we make the convention $\{b(0)\}=\varnothing$). If $a$ has no neighbour in $P\setminus A_n$, set $C_{n+1}=C_n\cup\{a\}$, $B_{n+1}=B_n\setminus\{b(a)\}$, $S_{n+1}=S_n\cup E_a$, increment $n$ and go back to REPEAT. Otherwise, for each edge $av$ in $E_a\setminus S_n$ we reveal whether it is feasible or not, let $\G(a)$ denote the set of vertices $v\in P\setminus A_n$ such that $av$ is feasible, set $E^{\mathrm{out}}_a=\{av,v\in\G(a)\}$ and proceed as follows.
\begin{itemize}
\item If at most $5$ edges in $E_a$ are feasible, set $A_{n+1}=A_n\cup\G(a)$, $O_{n+1}=O_n\cup\{a\}$, $B_{n+1}=(B_n\setminus\{b(a)\})\cup E^{\mathrm{out}}_a$, $S_{n+1}=S_n\cup \left(E_a\setminus E^{\mathrm{out}}_a\right)$, increment $n$ and go back to REPEAT.
\item Otherwise, let $au$ be the edge with largest $U_{au}$ among \[E^{\mathrm{out}}_a\cup\{b(a)\}\cup\{av, v\not\in P\}.\]
If $u\in P$, then set $C_{n+1}=C_n\cup\{a\}$, $B_{n+1}=B_n\setminus \{b(a)\}$, $S_{n+1}=S_n\cup E_a$, increment $n$ and go back to REPEAT. Otherwise, set $A_{n+1}=A_n\cup\G(a)$, $O_{n+1}=O_n\cup\{a\}$, $B_{n+1}=(B_n\setminus\{b(a)\})\cup E^{\mathrm{out}}_a$, $S_{n+1}=S_n\cup (E_a\setminus E^{\mathrm{out}}_a)$, increment $n$ and go back to REPEAT.
\end{itemize}
\end{algo}
Let us make a few observations about this algorithm. First, since any vertex $v$ is activated at most once, it is clear that $b(v)$ is well defined. Moreover, at the time of activation of a vertex $v$ the other endpoint $a$ of $b(v)$ becomes open, which guarantees that the constraint at $a$ is not violated by $b(v)$ (either there are not 6 feasible edges at $a$ or at least one of them is to be added after $b(v)$) and that edge is feasible. Thus, whenever a vertex $v$ becomes open, the edge $b(v)$ is known to be present in the constrained percolation, since we have also checked that the constraint at $v$ is not violated by that edge. Hence, all open vertices belong to the cluster of $0$. In particular, if the algorithm does not finish, then $0$ belongs to an infinite cluster. On the other hand, the algorithm only activates neighbours of open vertices, so it never terminates if and only if $0$ is in an infinite cluster $\bigcup_n O_n$ in $P$.
Further note that at any given time edges (with at least one end in $P$, as others will never be used) are divided in three categories: spoilt, boundary and \emph{unexplored}. We know nothing about the value $U_e$ for unexplored $e$, we know the exact value for spoilt $e$ and we will next assess boundary edges and show that we may view them as unexplored. Observe also that all non-boundary edges incident with open or closed vertices are spoilt, while all edges incident with a vertex $a\in A_n\setminus(O_n\cup C_n)$ except its boundary edge, $b(a)$ are unexplored.
\begin{lem}
\label{lem:decoupling}
For $n\ge 0$ conditionally on the information revealed by the algorithm until time $n$, the corresponding $\s$-algebra being denoted by $\ensuremath{\mathcal F}_n$ (the random set $B_n$ is measurable w.r.t.\ $\ensuremath{\mathcal F}_n$), the $(U_e)_{e\in B_n}$ are independent and each $U_e$ has a uniform law on $[0,p(e)]$ with $p(e)\le t$ measurable w.r.t.\ $\ensuremath{\mathcal F}_n$.
\end{lem}
\begin{proof}
Observe that the connected components of $B_n$ are stars centered at open vertices. We will prove the statement by induction on $n$, so we assume it holds for a given $n$.
At step $n$ \cref{algo:3d} reveals information only about the edges adjacent to a certain vertex $a\in A_n\setminus (O_n\cup C_n)$ and does not take into account any other edges. In particular, the values of $(U_{av})_{v\in P\setminus A_n}$, which were previously unexplored, are independent of $(U_e)_{e\in B_n\setminus\{b(a)\}}$ (conditionally on $\ensuremath{\mathcal F}_n$) by induction hypothesis. If $a\in C_{n+1}$ there is nothing left to prove, since we have simply spoiled $E_a\setminus S_n$ (namely, $\ensuremath{\mathcal F}_{n+1}=\s(\ensuremath{\mathcal F}_n,(U_e)_{e\in E_a\setminus S_n})$) and these edges were either $b(a)$ or unexplored, so they were all independent of $(U_e)_{e\in B_n\setminus\{b(a)\}}$ by induction hypothesis. We next assume that $a\in O_{n+1}$ and consider two cases.
Assume first that there are at most $5$ feasible edges in $E_a$. Then we only explored which of the edges in $E_a\setminus S_n$ are feasible, spoiled $E_a\setminus(E^{\mathrm{out}}_a\cup S_n)$ and made $E^{\mathrm{out}}_a$ boundary edges. In particular, we have \[\ensuremath{\mathcal F}_{n+1}=\s\left(\ensuremath{\mathcal F}_n,E^{\mathrm{out}}_a,(U_e)_{e\in E_a\setminus\left(E^{\mathrm{out}}_a\cup S_n\right)}\right)\]
(recall that $E^{\mathrm{out}}_a$ is a random set). Hence, conditionally on $\ensuremath{\mathcal F}_{n+1}$, we only know that $U_e\le t$ for $e\in E^{\mathrm{out}}_a$ by definition of $E^{\mathrm{out}}_a$ and we are done.
Finally, assume that all six edges in $E_a$ are feasible, but the vertex $u$ from \cref{algo:3d} is not in $P$. In this case $E^{\mathrm{out}}_a=\{av,v\in P\setminus A_n\}$ and, conditionally on $\ensuremath{\mathcal F}_{n+1}$ we only know that $U_e\le \max_{v\not\in P}(U_{av})$ for all $e\in E^{\mathrm{out}}_a$. Yet, $\max_{v\not\in P}(U_{av})$ is measurable w.r.t.\ $\ensuremath{\mathcal F}_{n+1}$, since both such edges $av$ are in $S_{n+1}$, as $a\in O_{n+1}$. Finally, since all edges in $E_a$ are feasible by hypothesis, we obtain that $\max_{v\not\in P}(U_{av})\le t$ and we are done.
\end{proof}
We next establish the desired comparison with ordinary percolation.
\begin{lem}
\label{lem:comparison}
Fix $n>0$ and let $a\in A_n\setminus(O_n\cup C_n)$ be the vertex considered at that step. Let $X=\{v\in P\setminus A_n,va\in E_a\}$ be the set of sites which may be added to $A_n$ at this step. Conditionally on $\ensuremath{\mathcal F}_n$ the variables $({\ensuremath{\mathbbm{1}}} _{x\in A_{n+1}\setminus A_n})_{x\in X}$ stochastically dominate \emph{i.i.d.} Bernoulli variables with parameter $p>0.5$.
\end{lem}
\begin{proof}
If $X$ is empty there is nothing to prove, so we have $|X|\in\{1,2,3\}$. Let $k$ denote the number of edges $av\in S_n$. Without loss of generality we will assume that $k=3-|X|$, as otherwise we may simply condition on the value of $U_{av}$ for $v\in A_n\setminus (O_n\cup C_n)$.
If any of the $U_{av}>t$ for $av\in S_n$, then $B_{n+1}\setminus B_n$ is simply the set of feasible edges from $a$ to $X$, which gives that $({\ensuremath{\mathbbm{1}}} _{x\in A_{n+1}\setminus A_n})_{x\in X}$ are exactly \emph{i.i.d.} Bernoulli with parameter $t$, since these edges are unexplored.
Let us assume that all $3-|X|$ edges $av\in S_n$ are feasible. Let $N=\sum_{x\in X}{\ensuremath{\mathbbm{1}}} _{x\in A_{n+1}\setminus A_n}$. By symmetry it suffices to show that $N$ stochastically dominates a binomial random variable with parameters $|X|$ and $p$.
Let us fix $|X|=3$ for a start. We claim that
\begin{align}
\label{eq:N1}{\ensuremath{\mathbb P}} (N=1|\ensuremath{\mathcal F}_n)&{}=3t(1-t)^{2},\\
\label{eq:N2}{\ensuremath{\mathbb P}} (N=2|\ensuremath{\mathcal F}_n)&{}=3t^2(1-t),\\
\label{eq:N3}{\ensuremath{\mathbb P}} (N=3|\ensuremath{\mathcal F}_n)&{}\ge t^3-t^5+\frac{2}{6}t^5.
\end{align}
\cref{eq:N1,eq:N2} follow directly from \cref{algo:3d}, since $N<3$ guarantees that there are at most 5 feasible edges at $a$. To check \cref{eq:N3}, we notice that we need all three edges $ax$ for $x\in X$ to be feasible; in the case when all edges at $a$ are feasible (we already know from $\ensuremath{\mathcal F}_n$ that $b(a)$ is, but the other $2+|X|$ edges not in $S_n$ are unexplored) we still have a chance that the largest $U_{av}$ among $b(a)$ and the $2+|X|$ unexplored edges is achieved for $v\not\in P$. Using \cref{lem:decoupling} the latter probability is indeed at least $2/(3+|X|)$. It then suffices to check that
\begin{align*}
3t(1-t)^{2}+3t^2(1-t)+ t^3-t^5+\frac{2}{6}t^5&{}> 1-(1-p)^3,\\
3t^2(1-t)+t^3-t^5+\frac{2}{6}t^5&{}> p^2(p+3(1-p)),\\
t^3-t^5+\frac{2}{6}t^5&{}> p^3,
\end{align*}
for $p=0.5$, which is immediate.
For $|X|\in\{1,2\}$ we proceed identically, reaching the inequalities
\begin{align*}
2t(1-t)+t^2-t^4+\frac{2}{5}t^4&{}> p^2+2p(1-p),\\
t^2-t^4+\frac{2}{5}t^4&{}> p^2,\\
t-t^3+\frac{2}{4}t^3&{}> p,
\end{align*}
which are again easily verified for $p=0.5$.
\end{proof}
With \cref{lem:comparison} we are ready to conclude the proof of \cref{th:3d} for ${\ensuremath{\mathbb L}} ^3$. Indeed, it follows that one can couple the exploration of \cref{algo:3d} with an exploration of the cluster of $0$ in bond percolation in $P$ with parameter $p>0.5$ in such a way that the set of eventually active sites contains the cluster of $0$ in the latter percolation model. Since the critical probability of bond percolation in two dimensions is $1/2$ \cite{Kesten80}, this concludes our proof.
In order to deal with the square lattice with diagonals added, ${\ensuremath{\mathbb L}} ^2_{\boxtimes}$, it suffices to consider the edges $(x,x+(1,1))$ and $(x,x+(1,-1))$ as analogues of $(x,x+(0,0,1))$ and $(x,x+(0,0,-1))$ in the cubic lattice.
\end{proof}
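The elementary verification of the displayed inequalities at $p=0.5$ can also be reproduced numerically. The following Python sketch is purely illustrative: the concrete value of $t$ fixed for \cref{th:3d} is not restated in this section, so the sketch simply scans $t$ and reports the interval on which all six inequalities hold simultaneously.
\begin{verbatim}
# Numerical check of the six displayed inequalities at p = 0.5.
# The value of t used in the theorem is fixed elsewhere, so we scan t.
import numpy as np

def all_hold(t, p=0.5):
    lhs = [3*t*(1-t)**2 + 3*t**2*(1-t) + t**3 - t**5 + t**5/3,
           3*t**2*(1-t) + t**3 - t**5 + t**5/3,
           t**3 - t**5 + t**5/3,
           2*t*(1-t) + t**2 - t**4 + 2*t**4/5,
           t**2 - t**4 + 2*t**4/5,
           t - t**3 + t**3/2]
    rhs = [1 - (1-p)**3, p**2*(p + 3*(1-p)), p**3,
           p**2 + 2*p*(1-p), p**2, p]
    return all(l > r for l, r in zip(lhs, rhs))

ok = [t for t in np.linspace(0, 1, 10001) if all_hold(t)]
print(min(ok), max(ok))  # interval of t where all six inequalities hold
\end{verbatim}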
\begin{rem}
Applying an analogous argument to the triangular or checkerboard lattices with $\k=5$ does not quite work as it stands, since we only manage to compare the constrained degree percolation model (with $t=0.678$) with bond percolation on ${\ensuremath{\mathbb L}} ^2$ with parameter $p=0.47$, which is subcritical. However, it is likely that working slightly more, one could give a nontrivial result in that setting as well.
\end{rem}
\section{Open problems}
\label{sec:open}
Several questions can arise concerning this model. We can consider other graphs or allow the constraint $\kappa (v)$ to be a function of the vertex set, for example, but in the context of the present work, we would like to mention some open problems concerning the critical time $t_{\mathrm{c}}$ for the hypercubic lattice. Some of these questions were already stated in \cites{DoAmaral21,DeLima20}.
In ordinary Bernoulli percolation on ${\ensuremath{\mathbb L}} ^d$, there is a trivial coupling that shows that the percolation threshold is a non-increasing function of the dimension $d$. This same coupling does not work to show that the critical time is a non-increasing function of $d$, which seems to be true.
\begin{ques}
For $\kappa$ fixed, is the function $t_{\mathrm{c}}^{\k}(d)$ non-increasing in $d$?
\end{ques}
Still concerning the monotonicity of $t_{\mathrm{c}}$, keeping the dimension fixed, one may ask whether $t_{\mathrm{c}}^{\k}(d)$ is monotone in $\k$. For example, it was shown in \cite{DeLima20} that $t_{\mathrm{c}}^2(2)=+\infty$ and $\frac{1}{2}<t_{\mathrm{c}}^3(2)<1$, and it is known from \cite{Kesten80} that $t_{\mathrm{c}}^4(2)=\frac{1}{2}$. Numerical support for the following conjecture was provided in \cite{DoAmaral21} for $d\in\{3,4\}$.
\begin{conj}
For all $d\ge 3$ the function $t_{\mathrm{c}}^{\k}(d)$ is non-increasing in $\k$.
\end{conj}
If the previous conjecture holds, it is natural to define the \emph{critical constraint} $\kappa_{\mathrm{c}}(d):=\min\{\kappa: t_{\mathrm{c}}^\kappa (d)<1\}$. In this language, \cref{th:general,th:3d} provide that $\k_{\mathrm{c}}(d)\le \min(10,2d-1)$ for all $d\ge 3$. In \cite{DoAmaral21} some simulations were performed for dimensions $d=3$ and $4$ that support the following conjecture.
\begin{conj}
For all $d\ge 3$ it holds that $\kappa_{\mathrm{c}}(d)=3$.
\end{conj}
We remark that in \cite{DeLima20} an analogous result was proved for the regular trees.
Turning to high dimensions, in view of our treatment in \cref{th:general,th:high}, it seems reasonable to expect a positive answer to the following question.
\begin{ques}
Does $\lim_{d\to\infty}d\cdot t_{\mathrm{c}}^{\k}(d)$ exist for all $\k\ge 3$?
\end{ques}
\section*{Acknowledgements}
I.H. was supported by ERC Starting Grant 680275 MALIG. B.N.B.L. was supported in part by CNPq grant 305811/2018-5 and FAPERJ (Pronex E-26/010.001269/2016). The authors would like to thank the organisers of the Bernoulli-IMS One World Symposium 2020, which was the occasion for them to ``meet'' and introduce the first author to the model. Thanks are also due to La\"etitia Comminges and Djalil Chafa\"i for pointing us to \cite{Steele94} and to Lyuben Lichev for proofreading.
\section{Introduction}
The past decade has seen a surge in the development of deep neural network models to solve NLP tasks. Pretrained language models such as ELMO \cite{elmo}, BERT \cite{Devlin:18}, XLNet \cite{xlnet}, etc., have achieved state-of-the-art results on various NLP tasks. For tasks such as QA, earlier models were introduced with a fixed purpose for each layer, such as modelling query-passage interaction. However, BERT does not have a predefined purpose for its individual layers. With the rapid introduction of such models, it becomes necessary to analyse them at the layer level in order to \textit{interpret} the model. \\
Earlier works ~\citep{tenney2019bert,peters2018dissecting} analyse syntactic and semantic purposes for each layer in the model. \citet{Clark:19} specifically analyses BERT's attention heads for syntactic and linguistic phenomena. Most works on the analysis of such models work with tasks such as sentiment classification, syntactic/semantic tags prediction, NLI etc. \\
However, there has not been much work on analyzing models for complex tasks like RCQA. We believe that this is due to the sheer number of parameters and non-linearities in deep QA models. Therefore, in this work, we analyse BERT's layers on the RCQA dataset SQuAD \cite{Rajpurkar:16} and take a first step towards identifying a logical function for each layer, which in turn identifies the specific layers of the model that should be further interpreted from a human point of view. \\
We first define 3 logical purposes a layer could serve and classify BERT's layers into them: \\
\begin{itemize}[nolistsep]
\item \textit{distinctive}: a distinct part of the model's overall logic (it could be one of the predefined purposes found in earlier QA models, but not necessarily)
\item \textit{performance booster}: performing mathematical adjustments in its high dimensions that are essential to the model's performance but are not perceivable by the human eye.
\item \textit{verification}: re-affirming/verifying the work done by the previous layers
\end{itemize}
The first 2 purposes describe layers that are essential, while the third refers to layers that are redundant for the model's performance and interpretability. In Section \ref{layerwise_ig} we mathematically define a layer's functionality as the way it distributes its attribution over its input items (in this case, passage words) using Integrated Gradients \cite{Sundararajan:17}. In Section \ref{experiments} we provide layer-comparison experiments and experiments on pruning BERT's layers; together, these help to classify BERT's layers into the 3 defined logical purposes.
Finally, we provide a qualitative study on identifying the roles of distinctive versus other layers.
\section{Related Work}
The past few years have seen many works on RCQA datasets~\citep{Lai:17,Nguyen:16,Joshi:17} and subsequent deep QA models~\citep{Seo:16, dhingra2016gated, Yu:18} to solve them. Numerous BERT-based models~\citep{Liu:19,Lan:19,Devlin:18} have neared human-level performance on such datasets. To study the interpretability of such models, various attribution methods ~\cite{Bach:15, Sundararajan:17, Ribeiro:16} have been proposed. Works such as ~\citep{tenney2019bert,peters2018dissecting,Clark:19} focus on analyzing model layers and assigning them syntactic and semantic meaning using probing classifiers. \citet{si2019does} analyzes BERT using adversarial attacks similar to earlier works \cite{Jia:17, Mudrakarta:18} and show that these models are likely to be fooled.
\section{Implementation Details}
\textbf{SQuAD}: 90k/10k train/dev samples, each with a 100--300 word passage, a natural language query, and an answer span located in the passage itself.
\noindent \textbf{Integrated Gradients}: The integrated gradients for a model $M$ and a passage word $w_i$, embedded as $x_i \in \mathbf{R}^L$, are defined as follows:
\begin{align}
\nonumber IG(x_i) = \int\limits_{\alpha=0}^1\frac{\partial M(\tilde{x} + \alpha(x_i - \tilde{x}))}{\partial x_i}\ \ d\alpha
\end{align}
where $\tilde{x}$ is a zero vector that serves as a baseline to measure the integrated gradient for $w_i$. We approximate the above integral across $50$ uniform samples of $\alpha$ in $[0,1]$.
\noindent \textbf{BERT}: In this work, we use the BERT-BASE model which has 12 Transformer blocks(layers) each with a multi-head self-attention and a feed-forward neural network. We use the official code and pre-trained checkpoints\footnote{\url{https://github.com/google-research/bert}} and fine-tune it to get an F1 score of $88.73$ on SQuAD's dev split.
\section{Layer-wise Integrated Gradients} \label{layerwise_ig}
In this section, we extend integrated gradients for BERT at the layer-level.
\noindent For a given passage P consisting of $n$ words $[w_1,w_2,\dots,w_n]$, query $Q$, and model $f$ with $\theta$ parameters, the task of predicting the answer span is modeled as :
\begin{align}
\nonumber p(w_s, w_e) &= f(w_s, w_e|P,Q,\theta)
\end{align}
where $w_s,w_e$ are the predicted answer start and end words or positions.
\noindent For any given layer \textit{l}, the above is equivalent to:
\begin{align}
\nonumber p(w_s, w_e) &= f_l(w_s, w_e|E_{l-1}(P),E_{l-1}(Q),\theta)
\end{align}
where $f_l$ is the forward propagation from layer $l$ to prediction. $E_{l}(.)$, is the representation learnt for passage or query words by a given layer \textit{l}. To elaborate, we consider the network below the layer \textit{l} as a blackbox which generates \textit{input} representations for layer \textit{l}. \\
\noindent We now calculate the integrated gradients for each layer $IG_l(x_i)$ for all passage words $w_i$ using Algorithm \ref{imp_distribution}. We then compute importance scores for each $w_i$ by taking the euclidean norm of $IG(w_i)$ and then normalize it to give a probability distribution $I_l$ over passage words.
\begin{algorithm}
\caption{To compute Layer-wise Integrated Gradients for layer \textit{l}}
\label{imp_distribution}
\begin{algorithmic}[1]
\STATE $\tilde{p} = 0$ \quad //zero baseline \\
\STATE $G_l(p) = \frac{1}{m}\sum_{k=1}^{m}\frac{\partial f_l(\tilde{p} + \frac{k}{m}(p - \tilde{p}))}{\partial E_l}$
\STATE $IG_l(p) = [(p-\tilde{p}) \times G_l(p)] $
\STATE // Compute squared norm at each row \\
\STATE $\tilde{I}_l([w_1,\dots,w_k]) = ||IG_l(p)||$
\STATE Normalize $\tilde{I}_l$ to a probability distribution $I_l$
\end{algorithmic}
\end{algorithm}
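\noindent A minimal illustrative sketch of Algorithm~\ref{imp_distribution} is given below. The function \texttt{forward\_from\_layer} is an assumption of the sketch: it stands for the truncated forward pass $f_l$ from layer $l$'s input representations to the probability of the predicted span, and is not part of the released BERT code.
\begin{verbatim}
# Illustrative sketch of layer-wise Integrated Gradients (Algorithm 1).
# `forward_from_layer` plays the role of f_l: it maps layer-l input
# representations of the passage to a scalar score of the predicted span.
import torch

def layerwise_ig(forward_from_layer, reps, m=50):
    baseline = torch.zeros_like(reps)              # zero baseline
    grad_sum = torch.zeros_like(reps)
    for k in range(1, m + 1):
        point = (baseline + (k / m) * (reps - baseline)).requires_grad_(True)
        score = forward_from_layer(point)          # scalar output
        grad_sum += torch.autograd.grad(score, point)[0]
    ig = (reps - baseline) * grad_sum / m          # IG_l(p), shape (n_words, d)
    norms = ig.norm(dim=-1)                        # per-word importance
    return norms / norms.sum()                     # normalized distribution I_l

# Toy usage with a stand-in forward function, not the real BERT head.
reps = torch.randn(20, 16)                         # 20 passage words, 16-dim reps
I_l = layerwise_ig(lambda x: x.sum(dim=-1).softmax(0)[3], reps)
print(I_l.shape, float(I_l.sum()))                 # torch.Size([20]) ~1.0
\end{verbatim}
In practice, $f_l$ is obtained by feeding the interpolated representations through the remaining BERT layers and the span-prediction head, as defined above.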
\section{Experiments} \label{experiments}
As described in Section \ref{layerwise_ig}, we quantify and visualize a layer's function as how it distributes importance over the passage words, using the distribution $I_l$. To compute the similarity between any two layers $x,y$, we measure the \textit{Jensen-Shannon Divergence (JSD)} between their corresponding importance distributions $I_x,I_y$. The JSD scores are calculated between every pair of layers in the model, and are visualised as an $n_l \times n_l$ heatmap ($n_l$ is the number of layers in the model). The higher the JSD score, the more dissimilar the two layers are, and the more different the words the two layers consider as salient. All heatmaps visualised in this section are averaged over 500 samples of SQuAD's dev-split. \\
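\noindent The pairwise comparison itself can be sketched as follows; the code is illustrative, and the array \texttt{imps} is assumed to contain one importance distribution $I_l$ per layer.
\begin{verbatim}
# Illustrative computation of the n_l x n_l JSD heatmap between layers.
import numpy as np

def jsd(p, q, eps=1e-12):
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)       # base-2 JSD, in [0, 1]

def jsd_heatmap(importances):                    # shape: (n_layers, n_words)
    n_l = len(importances)
    heat = np.zeros((n_l, n_l))
    for x in range(n_l):
        for y in range(n_l):
            heat[x, y] = jsd(importances[x], importances[y])
    return heat

imps = np.random.dirichlet(np.ones(300), size=12)   # 12 layers, 300 words
print(jsd_heatmap(imps).shape)                       # (12, 12)
\end{verbatim}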
\subsection{JSD analysis} \label{subsec:overall_jsd}
We first present pairwise layer JSD scores for BERT in Fig. \ref{fig:overall_jsd}. We observe low JSD scores between all pairs of layers, with only a minimal increase as the layers go further apart (min/max JSD observed is just 0.06/0.41), giving a preliminary indication that the layers are highly similar to each other.
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{emnlp2020-templates/jsd_overall/bert_squad_js_rem0_avg_500.png}
\caption{JSD between $I_l$'s}
\label{fig:overall_jsd}
\end{figure}
\subsection{JSD with top-k retained/removed} \label{subsec:topk}
To further evaluate the source of the similarity,
we analyse the distribution in two parts: (i) we retain only the top-k scores in each layer and zero out the rest, which denotes the head of the distribution; (ii) we zero out the top-k scores in each layer and retain the rest, which denotes the tail of the distribution. In either case we re-normalize to maintain a probability distribution. The resulting heatmaps can be seen in Fig. \ref{fig:bert_jsd_rem_keep}.
\begin{figure}[h]
\centering
\begin{subfigure}{0.22\textwidth}
\centering
\includegraphics[width=\textwidth]{jsd_rem_keep/bert_squad_js_keep2_avg_500.png}
\caption{}
\label{bert_jsd1}
\end{subfigure}
\begin{subfigure}{0.22\textwidth}
\centering
\includegraphics[width=\textwidth]{jsd_rem_keep/bert_squad_js_rem2_avg_500.png}
\caption{}
\label{bert_jsd2}
\end{subfigure}
\caption{JSD between $I_l$'s with top-2 items removed/retained}
\label{fig:bert_jsd_rem_keep}
\end{figure}
When comparing just the top-2 items in heatmap \ref{bert_jsd1}, we see higher values (min 0.08/max 0.72) than in heatmap \ref{fig:overall_jsd}; when the top-2 items are removed, we see lower values (min 0.09/max 0.26). Therefore we conclude that a layer's function is reflected in the words high up in the importance distribution, and as they are removed, we encounter an almost uniform distribution across the less important words. Hence, to correctly identify different layer functionalities, we need to focus only on the head (top-k words) and not the tail.\\
Further, in heatmap \ref{bert_jsd1}, we see higher JSD scores when layers 0-6 are each compared with all the layers (min 0.28/max 0.72), whereas we see much lower JSD values between layers 7-11 (min 0.08/max 0.32). This suggests that layers 7-11 have fairly similar functionalities.
\subsection{JSD analysis split by question type}
In this section, we analyze the JSD heatmaps split by question type, motivated by the observation that the model approaches different question types differently. For example, ``what'' or ``who'' questions require entities as answers, and in SQuAD can probably be answered more directly, whereas questions like ``why'' or ``how'' require a more in-depth reading of the passage. Hence we analyze the heatmaps for each question type separately (with the top-2 words retained). The results can be found in Fig. \ref{fig:bert_jsd_qtype}.
\begin{figure}[h]
\centering
\begin{subfigure}{0.22\textwidth}
\centering
\includegraphics[width=\textwidth]{jsd_qtype/what_bert_squad_js_keep2_avg_500.png}
\caption{}
\label{bert_jsd_what}
\end{subfigure}
\begin{subfigure}{0.22\textwidth}
\centering
\includegraphics[width=\textwidth]{jsd_qtype/why_bert_squad_js_keep2_avg_500.png}
\caption{}
\label{bert_jsd_why}
\end{subfigure}
\caption{JSD of $I_l$'s, split by question types}
\label{fig:bert_jsd_qtype}
\end{figure}
\noindent The `what' heatmap (Fig. \ref{bert_jsd_what}) indicates behaviour similar to that observed in Section \ref{subsec:overall_jsd}. However, the heatmap for ``why'' (Fig. \ref{bert_jsd_why}) shows slightly higher JSD in the later layers as well, supporting the hypothesis that such questions require a deeper understanding of the passage and hence more work by the model. We present heatmaps for other question types in the appendix.
\subsection{Case Study: Classifying Layers} \label{case_study}
\input{emnlp2020-templates/example_table1}
We observed in Section \ref{subsec:topk} that layers 0-6 have high JSD scores with all of BERT's layers; hence, we first classify them as \textit{distinctive}. Layers 7-11 have low JSD scores between each other; they could be classified as either \textit{verification} or \textit{performance booster}. To resolve this ambiguity, we perform pruning experiments on BERT, wherein we remove certain layers from the original pre-trained BERT, train it on SQuAD, and observe the change in performance (Table \ref{tab:pruned_bert}).
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|}
\hline
\textbf{Layers pruned} & \textbf{\%F1} & \textbf{\%Drop in F1}\\
\hline
None & 88.73 & - \\ \hline
11 & 88.66 & 0.07 \\ \hline
10,11 & 87.81 & 0.85\\ \hline
9,10,11 & 86.58 & 1.23 \\ \hline
8,9,10,11 & 86.4 & 0.18\\ \hline
7,8,9,10,11 & 85.15 & 1.25\\ \hline
6,7,8,9,10,11 & 83.75 & 1.4\\ \hline
\end{tabular}
\caption{Pruned BERT models on SQuAD's dev-set}
\label{tab:pruned_bert}
\end{table}
We iteratively drop the layers to identify the impact they have on the model's performance. We first drop only layer 11, then drop 10 \& 11 and so on until layer 6. \\
We see that pruning layer 11 causes almost no change in performance (only a 0.07\% dip). Dropping layers 10 and then 9 causes a further dip of $\sim$1\% each. However, dropping layer 8 does not have a large impact (only 0.18\%). Again, dropping layers 7 and then 6 has a tangible impact (1.25\% and 1.4\%, respectively). \\
We have already classified layer 6 as \textit{distinctive} using the JSD scores, and the pruning experiment further corroborates that. Based on the pruning results, we now classify layers 11 and 8 as \textit{verification}, since their removal causes almost no reduction in the model's performance. From the JSD experiment, layers 10,9,7 seemed redundant in functionality; however, their removal caused a noticeable dip in BERT's performance. Hence, we classify them as \textit{performance booster}.\\
\subsection{Qualitative Analysis}
Finally, we determine and analyse the top-5 words of each layer and compute their overlap with the question words and the predicted answer span. We find that query words make up 14.7--24.3\% of the top-5 words in the (distinctive) layers 0-6 and 10.2--18.9\% in the other layers 7-11. Further, answer words make up 26--31.2\% in layers 0-6 and 30.8--34.6\% in layers 7-11.\\
We present a corroborating example in Table \ref{tab:qual_eg}. We see that all these six layers give a high score to the answer span itself (`disastrous', `situation'). Further, we see that the initial layers 0,1 and 2 are also trying to make a connection between the passage and the query (`relegated', `because', `Polonia' get high importance scores). \\
Hence, based on this qualitative analysis, we conclude that the distinctive layers focus on contextual understanding and interaction between the query and passage. In contrast, the non-distinctive layers focus on enhancing and verifying the model's prediction.
\section{Conclusion}
In this work, we analyzed BERT's layers' functionalities using their respective importance distributions over passage words. We presented a JSD-based comparison of the same to understand how the layers work individually as well as collectively. Further, we presented a pruning experiment on BERT's layers, which in combination with the JSD experiment, helped to classify BERT's layers into 3 logical roles (distinctive, verification, and performance booster). Through preliminary experiments, we found that the identified distinctive layers have contextual and query-passage interaction purposes. In contrast, the other layers work on enhancing the performance and re-verifying the predicted answer. As future work, we would like to extend this to a more detailed analysis of the discovered distinctive layers.
\input{emnlp2020-templates/tsne}
\input{emnlp2020-templates/semantic_stats}
\newpage
\bibliographystyle{acl_natbib}
\section{Introduction}
The past decade has witnessed a surge in the development of deep neural network models to solve NLP tasks. Pretrained language models such as ELMO \cite{elmo}, BERT \cite{Devlin:18}, XLNet \cite{xlnet}, etc., have achieved state-of-the-art results on various NLP tasks. This success motivated various studies to understand how BERT achieves human-level performance on these tasks. \citet{tenney2019bert,peters2018dissecting} analyze syntactic and semantic roles played by different layers in such models. \citet{Clark:19} specifically analyze BERT's attention heads for syntactic and linguistic phenomena. Most of these works focus on tasks such as sentiment classification, syntactic/semantic tag prediction, natural language inference, and so on.
However, to the best of our knowledge, BERT has not been thoroughly analyzed for complex tasks like RCQA. It is a challenging task because of 1) the large number of parameters and non-linearities in BERT, and 2) the absence of pre-defined roles across layers in BERT as compared to pre-BERT models like BiDAF \citep{Seo:16} or DCN \citep{Xiong:16}. In this work, we take the first step to identify each layer's role using the attribution method of Integrated Gradients \cite{Sundararajan:17}. We then try to map these roles to the following functions, deemed necessary in pre-BERT models to reach the answer: (i) learn contextual representations for the passage and the question, individually, (ii) attend to information in the passage specific to the question and, (iii) predict the answer.
\noindent We perform analysis on the SQuAD \citep{Rajpurkar:16} and DuoRC \citep{Saha:18} datasets. We observe that the initial layers primarily focus on question words that are present in the passage. In the later layers, the focus on question words decreases, and more focus is on the supporting words that surround the answer and the predicted answer span. Further, through a focused analysis of quantifier questions (questions that require a numerical entity as the answer), we observe that BERT pays importance to many words similar to the answer (of the same type, such as numbers) in later layers. We find this intriguing since, even after marking confusing words spread across the passage as important, BERT's prediction accuracy is high. We also provide qualitative analysis to demonstrate the above trends.
\section{Related Work}
In the past few years, various large-scale datasets have been proposed for the RCQA task ~\citep{Nguyen:16,Joshi:17, Rajpurkar:16,Saha:18} which have led to various deep neural-network (NN) based architectures such as ~\citet{Seo:16, dhingra2016gated}. Additionally, with complex pretraining, models such as ~\citet{Liu:19,Lan:19,Devlin:18} are very close to human-level performance.
Due to the large number of parameters and non-linearity of deep NN models, the answer to the question ``how did the model arrive at the prediction?'', is not known; hence, they are termed as \textit{blackbox models}. Motivated by this question, there have also been many works that analyze the interpretability of deep NN models on NLP tasks; many of them analyze models based on in-built attention mechanisms \citep{Jain:19, serrano2019attention, wiegreffe2019attention}. Further, various attribution methods such as ~\citet{Bach:15, Sundararajan:17} have been proposed to analyze them. \citet{tenney2019bert} and \citet{peters2018dissecting} perform a layerwise analysis of BERT and BERT-like models to assign them syntactic and semantic meaning using probing classifiers. \citet{si2019does} question BERT's working on QA tasks through adversarial attacks, similar to \citet{Jia:17, Mudrakarta:18}. They point out that BERT is prone to be fooled by such attacks. Unlike these earlier works, we focus on analyzing BERT's layers specifically for RCQA to understand their QA-specific roles and their behavior on potentially confusing quantifier questions.
\section{Experimental Setup}
For our BERT analysis, we use the BERT-BASE model, which has 12 Transformer blocks (layers), each with a multi-head self-attention and a feed-forward neural network. We use the official code and pre-trained checkpoints\footnote{\url{https://github.com/google-research/bert}} and fine-tune it for two epochs on the SQuAD and DuoRC datasets to achieve F1 scores of $88.73$ and $54.80$ on their respective dev-splits. We use SQuAD \cite{Rajpurkar:16} 1.1 with 90k/10k train/dev samples, each with a 100--300 word passage, and the SelfRC dataset in DuoRC \cite{Saha:18} with 60k/13k train/dev samples, each with a passage of 500 words on average. For each passage, both datasets have a natural language query and an answer span in the passage itself.
\section{Layer-wise Functionality} \label{layerwise_ig}
As discussed earlier, we aim to understand each BERT layer's functionality for the RCQA task; we want to identify the passage words that are of primary importance at each layer for the answer. Intuitively, the initial layers should focus on question words, and the latter should zoom in on contextual words that point to the answer. To analyze the above, we use the attribution method Integrated Gradients \citep{Sundararajan:17} on BERT at a layerwise level.
For a given passage P consisting of $n$ words $[w_1,w_2,\dots,w_n]$, query $Q$, and model $f$ with $\theta$ parameters, answer prediction is modeled as:
\begin{align}
\nonumber p(w_s, w_e) &= f(w_s, w_e|P,Q,\theta)
\end{align}
where $w_s,w_e$ are the predicted answer start and end words or positions.
\noindent For any given layer \textit{l}, the above is equivalent to:
\begin{align}
\nonumber p(w_s, w_e) &= f_l(w_s, w_e|E_{l-1}(P),E_{l-1}(Q),\theta)
\end{align}
where $f_l$ is the forward propagation from layer $l$ to the prediction. $E_{l}(.)$, is the representation learnt for passage or query words by a given layer \textit{l}. To elaborate, we consider the network below the layer \textit{l} as a blackbox which generates \textit{input} representations for layer \textit{l}.
The \textbf{Integrated Gradients} for a Model $M$, a passage word $w_i$, embedded as $x_i \in \mathbf{R}^L$ is:
\begin{align}
\nonumber IG(x_i) = \int\limits_{\alpha=0}^1\frac{\partial M(\tilde{x} + \alpha(x_i - \tilde{x}))}{\partial x_i}\ \ d\alpha
\end{align}
where $\tilde{x}$ is a zero vector, that serves as a baseline to measure integrated gradient for $w_i$.
We calculate the integrated gradients at each layer $IG_l(x_i)$ for all passage words $w_i$ using Algorithm \ref{imp_distribution}. We approximate the above integral across $50$ uniform samples of $\alpha \in [0,1]$. We then compute importance scores for each $w_i$ by taking the euclidean norm of $IG(w_i)$ and normalizing it to get a probability distribution $I_l$ over the passage words.
\begin{algorithm}
\caption{To compute Layer-wise Integrated Gradients for layer \textit{l}}
\label{imp_distribution}
\begin{algorithmic}[1]
\STATE $\tilde{p} = 0$ \quad //zero baseline \\
\STATE $m=50$ \\
\STATE $G_l(p) = \frac{1}{m}\sum_{k=1}^{m}\frac{\partial f_l(\tilde{p} + \frac{k}{m}(p - \tilde{p}))}{\partial E_l}$
\STATE $IG_l(p) = [(p-\tilde{p}) \times G_l(p)] $
\STATE // Compute squared norm for each word \\
\STATE $\tilde{I}_l([w_1,\dots,w_k]) = ||IG_l(p)|| \in \mathbb{R}^k$
\STATE Normalize $\tilde{I}_l$ to a probability distribution $I_l$
\end{algorithmic}
\end{algorithm}
\subsection{JSD with top-k retained/removed} \label{subsec:topk}
We quantify and visualize a layer's function as its distribution of importance over the passage words, $I_l$. To compute the similarity between any two layers $x,y$, we measure the \textit{Jensen-Shannon Divergence (JSD)} between their corresponding importance distributions $I_x, I_y$. We calculate the JSD scores between every pair of layers in the model and visualize them as an $n_l \times n_l$ heatmap ($n_l$ is the number of layers in the model). A higher JSD score corresponds to the two layers being more different, which further means that the two layers consider different words as salient. We visualize heatmaps for the dev-splits of SQuAD (Figures \ref{bert_jsd1}, \ref{bert_jsd2}) and DuoRC (Figures \ref{bert_jsd3}, \ref{bert_jsd4}), averaging over 1000 samples in each case.\\
We analyze the distribution in two parts: (i) we retain only top-k scores in each layer and zero out the rest, which denotes the distribution's head. (ii) we zero the top-k scores in each layer and retain the rest, which denotes the distribution's tail. In either case, we re-normalize to get a probability distribution.
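The head/tail split can be sketched as follows (illustrative code only; the input is assumed to be the vector of per-word importance scores $I_l$ of one layer).
\begin{verbatim}
# Illustrative head/tail split of an importance distribution I_l.
import numpy as np

def head_tail(I_l, k=2):
    I_l = np.asarray(I_l, dtype=float)
    top = np.argsort(I_l)[-k:]          # indices of the top-k words
    head = np.zeros_like(I_l)
    head[top] = I_l[top]                # keep only the top-k scores
    tail = I_l.copy()
    tail[top] = 0.0                     # zero out the top-k scores
    return head / head.sum(), tail / tail.sum()

head, tail = head_tail(np.random.dirichlet(np.ones(50)), k=2)
print(head.sum(), tail.sum())           # both re-normalized to 1.0
\end{verbatim}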
\begin{figure}
\centering
\begin{subfigure}{0.22\textwidth}
\centering
\includegraphics[width=\textwidth]{jsd_rem_keep/bert_squad_js_keep2_avg_500.png}
\caption{}
\label{bert_jsd1}
\end{subfigure}
\begin{subfigure}{0.22\textwidth}
\centering
\includegraphics[width=\textwidth]{jsd_rem_keep/bert_squad_js_rem2_avg_500.png}
\caption{}
\label{bert_jsd2}
\end{subfigure} \\
\begin{subfigure}{0.22\textwidth}
\centering
\includegraphics[width=\textwidth]{jsd_rem_keep/bert_duorc_js_keep2_avg_500.png}
\caption{}
\label{bert_jsd3}
\end{subfigure}
\begin{subfigure}{0.22\textwidth}
\centering
\includegraphics[width=\textwidth]{jsd_rem_keep/bert_duorc_js_rem2_avg_500.png}
\caption{}
\label{bert_jsd4}
\end{subfigure}
\caption{JSD between $I_l$'s with top-2 items removed/retained (SQuAD - (a), (b), DuoRC - (c), (d))}
\label{fig:bert_jsd_rem_keep}
\end{figure}
When comparing just the top-2 items, we see higher values (min 0.08/max 0.72) in heatmap \ref{bert_jsd1} than in heatmap \ref{bert_jsd2} (min 0.09/max 0.26). Similarly, we see higher values (min 0.23/max 0.89) in heatmap \ref{bert_jsd3} than in heatmap \ref{bert_jsd4} (min 0.12/max 0.28). We conclude that a layer's function is reflected in words high up in the importance distribution. As we remove them, we encounter an almost uniform distribution across the less important words. Hence, to correctly identify a layer's functionality, we need to focus only on the head (top-k words) and not on the tail.
\section{Results and Discussions}
\begin{table}
\centering
\resizebox{0.85\linewidth}{!}{%
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Layer Name} & \textbf{\begin{tabular}[c]{@{}c@{}}\% answer\\ span\end{tabular}} & \textbf{\% Q-words} & \textbf{\begin{tabular}[c]{@{}c@{}}\% Contextual\\ Words\end{tabular}} \\ \hline \hline
Layer 0 & 26.99 & 22.94 & 9.45 \\ \hline
Layer 1 & 26.09 & 24.35 & 9.43 \\ \hline
Layer 2 & 29.9 & 22.41 & 11.65 \\ \hline
Layer 3 & 30.44 & 19.55 & 11.13 \\ \hline
Layer 4 & 30.06 & 18.33 & 11.23 \\ \hline
Layer 5 & 30.75 & 14.71 & 11.57 \\ \hline
Layer 6 & 31.25 & 15.33 & 11.94 \\ \hline
Layer 7 & 32.37 & 12.29 & 12.32 \\ \hline
Layer 8 & 30.78 & 18.91 & 12.07 \\ \hline
Layer 9 & 34.58 & 10.21 & 13.41 \\ \hline
Layer 10 & 34.31 & 10.56 & 13.39 \\ \hline
Layer 11 & 34.63 & 12.0 & 13.74 \\ \hline
\end{tabular}%
}
\caption{Semantic statistics of top-5 words - SQuAD}
\label{tab:semantic_stats_squad}
\end{table}
\begin{table}
\centering
\resizebox{0.85\linewidth}{!}{%
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Layer Name} & \textbf{\begin{tabular}[c]{@{}c@{}}\% answer\\ span\end{tabular}} & \textbf{\% Q-words} & \textbf{\begin{tabular}[c]{@{}c@{}}\% Contextual\\ Words\end{tabular}} \\ \hline \hline
Layer 0 & 35.14 & 17.89 & 27.53 \\ \hline
Layer 1 & 37.29 & 18.29 & 29.88 \\ \hline
Layer 2 & 38.30 & 19.59 & 30.05 \\ \hline
Layer 3 & 34.37 & 18.88 & 25.83 \\ \hline
Layer 4 & 33.93 & 20.77 & 26.20 \\ \hline
Layer 5 & 36.32 & 16.16 & 27.97 \\ \hline
Layer 6 & 35.34 & 15.75 & 27.05 \\ \hline
Layer 7 & 41.20 & 10.57 & 31.12 \\ \hline
Layer 8 & 40.38 & 8.50 & 22.16 \\ \hline
Layer 9 & 41.25 & 8.03 & 17.9 \\ \hline
Layer 10 & 43.93 & 5.58 & 15.85 \\ \hline
Layer 11 & 44.37 & 6.00 & 33.74 \\ \hline
\end{tabular}%
}
\caption{Semantic statistics of top-5 words - DuoRC}
\label{tab:semantic_stats_duo}
\end{table}
\input{example_table1}
\subsection{Probing layers: QA functionality}
Based on the defined layers' functionality $I_l$, we try to identify which layers focus more on the question, the context around the answer, etc. We segregate the passage words into three categories: \textit{answer words, supporting words, and query words}, where supporting words are the words surrounding the answer within a window size of 5. Query words are the question words which appear in the passage. We take the top-5 words marked as important in $I_l$ for any layer $l$ and compute how many words from each of the above-defined categories appear in the top-5 words (results in Tables \ref{tab:semantic_stats_squad} and \ref{tab:semantic_stats_duo}).
We observe similar overall trends for both SQuAD and DuoRC. From Column 3, it is evident that the model first tries to identify the part of the passage where the question words are present. As it gets more confident about the answer (Column 2), the question words' importance decreases. From Col. 4, we infer that the layers' contextual role increases from the initial to the final layers.
\noindent \textbf{Qualitative Example:} We present a visualization of the top-5 words of the first and last three layers (with respect to $I_l$) in Table \ref{tab:qual_eg} for a sample from SQuAD. We see that all six layers give a high score to the answer span itself (`disastrous', `situation'). Further, we see that the initial layers 0,1 and 2 are also trying to connect the passage and the query (`relegated', `because', `Polonia' get high importance scores). Hence, in this example, we see that the initial layers incorporate interaction between the query and passage. In contrast, the last layers focus on enhancing and verifying the model's prediction.
\subsection{Visualizing Word Representations}
We now qualitatively analyze the word representations of each layer. We visualize the t-SNE plot for one such {passage, question, answer} triplet from SQuAD (see Table \ref{tab:eg_squad}) in Figures \ref{fig:tsne1}, \ref{fig:tsne2}. We visualize the answer, supporting words, query words, and special tokens. Note that we have grayed out the other words in the passage. In initial layers (such as layer 0), we observe that similar words such as stop-words, team names, numbers \{eight, four\}, etc., are close to each other. In Layer 4, the passage, question, and answer come closer to each other. By layer 9, we see that the answer words are segregated from the rest of the words, even though the passage word `four', which is of the same type as the answer `eight' (a number), is still close to `eight'. We make two further interesting observations here: (i) in later layers, the question words separate from the answer and the supporting words; (ii) across all 12 layers, embeddings for \textit{four, eight} remain very close together, which could have easily led the model to a wrong prediction. However, the model still predicts the answer `eight' correctly. We were not able to identify the layer where the distinction between the two confusing answers occurs.
\input{example_squad}
\begin{figure}[h]
\centering
\begin{subfigure}{0.41\textwidth}
\centering
\includegraphics[width=\textwidth]{wordemb_tsne_qn31/bert_perp40/tsne_qn_31_layer_0.png}
\end{subfigure}
\begin{subfigure}{0.41\textwidth}
\centering
\includegraphics[width=\textwidth]{wordemb_tsne_qn31/bert_perp40/tsne_qn_31_layer_4.png}
\end{subfigure}
\caption{t-SNE plots - word embeddings of layers 0, 4 for the example in Table \ref{tab:eg_squad}. For layer 0 similar words (e.g., team names, stop words) are close to each other. For intermediate layers like Layer 4, all the contextual, answer and question words intermingle.}
\label{fig:tsne1}
\end{figure}
\begin{figure}[h]
\centering
\begin{subfigure}{0.41\textwidth}
\centering
\includegraphics[width=\textwidth]{wordemb_tsne_qn31/bert_perp40/tsne_qn_31_layer_9.png}
\end{subfigure}
\begin{subfigure}{0.41\textwidth}
\centering
\includegraphics[width=\textwidth]{wordemb_tsne_qn31/bert_perp40/tsne_qn_31_layer_11.png}
\end{subfigure}
\caption{t-SNE plots- word embeddings of layers 9, 11 for the example in Table \ref{tab:eg_squad}. In layers 9-11, the answer \textit{eight} segregates from other words. However, numerical entity \textit{four}, is very close to the answer.}
\label{fig:tsne2}
\end{figure}
\noindent\textbf{Quantifier questions:} For a detailed analysis of quantifier questions like \textit{how many, how much} that could have many confusing answers (i.e., numerical words) in the passage, we perform further analysis. Based on our layer-level functionality $I_l$, we compute the number of words that are numerical quantities in the top-5 words, and the entire passage, and compute their ratio. This represents the ratio of confusing words that are marked as important by each layer. There are 799 and 310 such questions in SQuAD and DuoRC, respectively.
\noindent Interestingly, we observe that this ratio \textit{increases} as we go higher up (\underline{SQuAD:} L0 - 5.6\%, L10 - 17.7\%, L11 - 15.5\%, \underline{DuoRC:} L0 - 12.9\%, L10 - 21.6\%, L11 - 22.6\%). For the example in Table \ref{tab:eg_squad}, we observed that in its later layers, BERT gives high importance to the words `eight', `four', and `second' (numerical quantities), even though the latter are not related or necessary to answer the question. This shows that BERT, in its later layers, distributes its focus over confusing words. However, it \textit{still} manages to predict the correct answer for such questions (87.35\% EM for such questions for SQuAD, and 53.5\% in DuoRC); BERT also has high confidence in predicting the answer for such questions (86.5\% vs 80.4\% for quantifier questions with more than one numerical entity in the passage vs non-quantifier questions in SQuAD, 95.2\% vs 87.2\% in DuoRC).
This behavior is very different from the assumed roles a layer might take to answer the question, as it is expected that such words were considered in the initial rather than final layers. This shows the complexity of BERT and the difficulty of interpreting it for the RCQA task.
\section{Conclusion}
In this work, we highlight that the lack of pre-defined roles for layers adds to the difficulty of interpreting highly complex BERT-based models. We first define each layer's functionality using Integrated Gradients. We present results and analysis to show that BERT is learning some form of passage-query interaction in its initial layers before arriving at the answer. We found the following observations interesting and with a potential to be probed further: (i) why do the question word representations move away from contextual and answer representation in later layers? (ii) If the focus on confusing words increases from the initial to later layers, how does BERT still have a high accuracy? We hope that this work will help the research community interpret BERT for other complex tasks and explore the above open-ended questions.
\section*{Acknowledgements}
We thank the Department of Computer Science and Engineering, IIT Madras and the Robert Bosch Center for Data Science and Artificial Intelligence, IIT Madras (RBC-DSAI) for providing us compute resources. We thank Google for supporting Preksha Nema contribution through the Google Ph.D. Fellowship programme. We also thank the anonymous reviewers for their valuable and constructive suggestions.
\bibliographystyle{acl_natbib}
\section{Introduction}
\IEEEPARstart{I}{n} future wireless networks, by building and utilizing the computation capability in edge nodes, e.g., access points (APs), edge networks can be established that are able to conduct complex tasks via
intelligent scheduling and processing \cite{park2019wireless}. Several works utilizing machine learning have been proposed for future communications, e.g.,
C-RAN \cite{luo2020power}, MIMO channel information feedback system \cite{lu2018mimo} and multi-antenna quantization \cite{8845636}.
However, the massive raw data generated by user devices triggers two key problems, i.e., privacy disclosure and the high cost of data transmission, making the above-mentioned intelligent applications difficult to implement in wireless networks.
Federated learning (FL) has been proposed by Google as a promising machine learning (ML) technology to solve the above problems \cite{mcmahan2016communication}.
Specifically, there are two key challenges for the deployment of FL in wireless networks. On one hand, local data samples in UEs are diversely distributed, i.e., non-independent and identically distributed (non-IID) and unbalanced \cite{mcmahan2016communication,zhao2018federated}. To strike a balance between computational efficiency and convergence, Google proposed a novel FL architecture, referred to as FedAvg \cite{mcmahan2016communication}. Additionally, other researchers, e.g. \cite{FEDL}, tried to accelerate the convergence by converting the optimization problem into sub-problems.
Another challenge is the limited communication capacity, e.g., limited bandwidth.
To address it, scheduling policy-aided FL architectures were utilized in \cite{yang2019scheduling, zeng2019energy, AoU}. The authors in \cite{AoU} adopted the age of update (AoU) as the scheduling policy to accelerate model convergence in mobile edge networks. Also, the authors in \cite{zeng2019energy} proposed a bandwidth resource scheduling policy for FL in wireless networks.
Their scheduling policies either only take FL model convergence into consideration \cite{AoU} or simply adopt random selection \cite{yang2019scheduling}.
However, there is little literature that jointly considers energy efficiency and model convergence for federated learning in wireless communications. The authors in \cite{FEDL} carefully analysed the trade-off between the energy consumed by UEs and FL convergence without bandwidth-limited constraints. The authors in \cite{zeng2019energy} proposed an energy-efficient bandwidth resource scheduling policy for FL in wireless networks, but it only considered the communication energy cost.
Against the above background, in this paper, we aim to save the energy consumption of the UEs and improve the convergence performance of FL in a bandwidth-limited network.
We propose an efficient sliding differential evolution-based scheduling (SDES) policy with lower computational complexity compared with existing methods.
To the best of our knowledge, this is the first work to address the above joint problem well.
In detail, we introduce a convergence reference (CR) of the overall training model and propose the SDES policy to reduce the energy consumption and accelerate model convergence by choosing the optimal subgroup of UEs.
Compared to conventional mathematical iterative tools, the proposed SDES can process the computational tasks in parallel.
Experiments verify the effectiveness of the proposed solution, in terms of both energy saving and convergence acceleration, in the bandwidth-limited network.
\section{System Model}
\begin{figure}[t]
\centering
\includegraphics[height=5cm]{img/pdf/WFL_scheme.pdf} \setlength\belowcaptionskip{0cm}
\caption{Federatd learning in wireless networks}
\label{WFL}
\end{figure}
We consider a FL system as shown in Fig.~\ref{WFL}, where a set $\mathcal{K}$ of $K$ UEs are connected to one AP. Each UE $k$ stores a local dataset $\mathcal{D}_k$, with its size denoted by $D_k = |\mathcal{D}_k|$. Thus, the whole data size equals to $D=\sum_{k=1}^{K}{D_k}$. Dataset $\mathcal{D}_k$ denotes the collection of data samples in the form of input-output pairs as $\{\mathbf{x}_i^{(k)},y_i^{(k)}\}^{D_k}_{i=1}$, where $\mathbf{x}_i^{(k)} \in \mathbb{R}^d$ is an input sample vector with $d$ features, and $y_i^{(k)} \in \mathbb{R}$ is the labeled output value for sample $\mathbf{x}_i^{(k)}$. Considering the user preference, different $\mathcal{D}_k$'s are non-IID and their corresponding data size $D_k$ varies.
\subsection{Model Convergence}
The goal of AP is to learn a statistical model over the data that resides on the $K$ associated UEs. Mathematically, AP needs to fit the model parameter $\mathbf{w}^t \in \mathbb{R}^d$ which characterizes the output $y_i$, by minimizing a particular loss function $f_i(\mathbf{w}^t)=\ell(\mathbf{x}_i^{(k)},y_i^{(k)}; \mathbf{w}^t)$ in the $t$-th communication round. Formally, the loss function on the dataset of UE $k$ is
\begin{equation}
F_k(\mathbf{w}^t):= \frac{1}{D_k}\sum_{i \in \mathcal{D}_k}
f_i(\mathbf{w}^t).
\label{perloss}
\end{equation}
Then, the global loss function minimization problem in AP can be expressed as
\begin{equation}
\label{global_loss_function} \min \limits_{\mathbf{w}^t} \quad F(\mathbf{w}^t):=\sum_{k=1}^{K}\frac{D_k}{D}{ F_k(\mathbf{w}^t)}.
\end{equation}
To protect the user privacy, UE $k$ only exchanges its model parameters $\mathbf{w}^t_k$ with AP.
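A FedAvg-style aggregation consistent with the weighted objective in \eqref{global_loss_function} can be sketched as follows; the snippet is illustrative only, and the numerical values in the example call are made up.
\begin{verbatim}
# Illustrative FedAvg-style aggregation consistent with Eq. (2):
# the AP combines local parameters w_k weighted by D_k / D.
import numpy as np

def federated_average(local_params, data_sizes):
    data_sizes = np.asarray(data_sizes, dtype=float)
    weights = data_sizes / data_sizes.sum()
    return sum(w * p for w, p in zip(weights, local_params))

w_global = federated_average([np.ones(4), 3.0 * np.ones(4)],
                             data_sizes=[100, 300])
print(w_global)   # [2.5 2.5 2.5 2.5]
\end{verbatim}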
\subsection{Energy Consumption}
In bandwidth-limited systems, the number of UEs, $K$, far exceeds the number of subchannels, $N$. Only a small portion of UEs, referred to as the updating set $\mathcal{S}[t]$, are selected for participating in the $t$-th communication round. $\mathcal{S}[t]=\{k \mid S_k[t]=1, k=1,2,...,K\}$, where $S_k[t]=1$ implies that UE $k$ is in the updating set $\mathcal{S}[t]$, otherwise $S_k[t]=0$.
The energy consumed by UEs in the updating set $\mathcal{S}[t]$ consists of two components, i.e., the transmission energy consumption and the computing energy consumption.
In fact, all UEs share the same model with their local parameters, and we use a constant $J$ to denote the size of $\mathbf{w}^t$. We assume that in the $t$-th communication round, UE $k$ is assigned the $n$-th subchannel with channel gain $h_{k,n}$ and bandwidth $B_n$. Then, the achievable data rate of UE $k$ can be
\begin{equation}
r_k=B_n\ln{\left( 1+\frac{h_{k,n}^2 P_{k,n}}{N_0} \right)},
\label{trate}
\end{equation}
where $P_{k,n}$ denotes the corresponding power allocation, and $N_0$ denotes the variance of the white Gaussian noise.
To obtain the minimal transmit power, we assume the achievable rate $r_k$ equals the threshold transmission rate $R_k$, and the energy for signal transmission of UE $k$ is formulated as
\begin{equation}
E_{k,\text{TP}}= \tau_k \cdot P_{k,n} = \frac{J}{R_k} \cdot \frac{N_0}{h_{k,n}^2}\left(e^{\frac{R_k}{B_n}}-1\right), \label{tp}
\end{equation}
where $\tau_k$ is the time duration of the signal transmission process.
On the other hand, the computing energy consumed by UE $k$ to train its local model can be written as
\begin{equation}
E_{k,\text{CP}}= \sum_{i=1}^{c_k D_k}\frac{\alpha_k}{2}{f_k}^2=\frac{\alpha_k}{2}{c_k}{D_k}{f_k}^2,
\label{cp}
\end{equation}
where $c_k$ denotes the number of CPU cycles for executing one sample of data; $c_k D_k$ denotes the number of CPU cycles in one local round; and $f_k$ is the CPU-cycle frequency. Then, the total energy consumption of UEs in the $t$-th communication round is
\begin{equation}
E_{\text{P}}[t]= \sum_{k=1}^{K} {S_k[t] (E_{k,\text{TP}}+\kappa E_{k,\text{CP}})}, \quad S_k[t] \in \{0,1\}
\label{EP}
\end{equation}
where $\kappa$ is the number of local rounds for local model training.
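
For concreteness, the per-UE energy model of \eqref{trate}--\eqref{EP} can be sketched in a few lines of Python as below; all variable names and numerical values (model size, noise power, number of local rounds) are illustrative assumptions of ours rather than part of the system design.
\begin{verbatim}
import numpy as np

# Sketch of the per-UE energy model in Eqs. (3)-(6).
# All names and numerical values are illustrative assumptions.
J = 86.6e3 * 8            # model size in bits
N0 = 1e-8                 # noise power (W)
kappa = 1                 # number of local training rounds

def transmit_energy(R_k, B_n, h_kn):
    """E_TP in Eq. (4): energy to upload the model at threshold rate R_k."""
    tau = J / R_k                                      # transmission duration
    P_kn = (N0 / h_kn**2) * (np.exp(R_k / B_n) - 1.0)  # Eq. (3) with r_k = R_k
    return tau * P_kn

def compute_energy(alpha_k, c_k, D_k, f_k):
    """E_CP in Eq. (5): energy of one local training round."""
    return 0.5 * alpha_k * c_k * D_k * f_k**2

def round_energy(S, E_tp, E_cp):
    """E_P[t] in Eq. (6) for a 0/1 scheduling vector S (element-wise arrays)."""
    return np.sum(S * (E_tp + kappa * E_cp))
\end{verbatim}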
\subsection{Problem Formulation}
We aim to minimize the weighted sum of the global loss function in \eqref{global_loss_function} and the total energy consumption in \eqref{EP} in the $t$-th communication round as:
\begin{subequations}\label{ooptimfun}
\begin{align}
\label{ooptim1}\min \limits_{\mathcal{S}[t]} \quad & F(\mathbf{w}^t) + \zeta E_{\text{P}}[t], \ t \in \{0,1,..,T\} & \\
\label{ooptim2}\mathrm{s.t.} \quad & S_k[t] \in \{0,1\}, \ \forall k \in \{1,2,\cdots,K\} & \\
\label{ooptim3}& \sum_{k=1}^{K} S_k[t] = N, &
\end{align}
\end{subequations}
where $\zeta$ is the weighting factor that balances the loss function and the energy consumption; $T$ denotes the number of communication rounds between the AP and the UEs;
constraint \eqref{ooptim3} reflects that the number of available sub-channels is $N$.
\section{Sliding Differential Evolution Based Scheduling}
There are two challenges for solving the optimization problem in \eqref{ooptimfun}.
On one hand, due to the limited bandwidth, only a small portion of the UEs' training losses and model parameters can be uploaded to the AP. This makes it impossible to calculate the global loss $F(\mathbf{w}^t)$ in \eqref{ooptim1} accurately.
Also, it is difficult to obtain the relationship between $\mathcal{S}[t]$ and $\mathbf{w}^t$ in \eqref{ooptimfun}, since $\mathbf{w}^t$ depends on the model training.
On the other hand, this optimization problem is a combinatorial problem which normally does not have low-complexity solutions.
For instance, in the case of 100 UEs and 25 available sub-channels, $C(100,25) \approx 2.4\times 10^{23}$ evaluations are needed for exhaustive search, where $C(n,k) = \frac{n!}{k!(n-k)!}$ is the number of $k$-combinations of a set of $n$ elements.
In this section, we first introduce the convergence reference (CR) function to replace $F(\mathbf{w}^t)$ as the convergence measure, so that both terms in \eqref{ooptim1} depend only on the scheduling set $\mathcal{S}[t]$.
Based on the CR value and the energy consumption expression, we then propose the SDES policy to solve \eqref{ooptimfun} efficiently.
\subsection{Convergence Reference (CR) Function}
To address the aforementioned two challenges, we propose the concept of a convergence reference (CR). The CR function collects information from the previously updated UEs and uses it as a proxy for model convergence, thereby transforming the convergence problem into finding the optimal updating set $\mathcal{S}[t]$, consistent with the energy term in \eqref{ooptim1}.
We build the CR function on a staleness-loss (SL) measure, where the CR value is utilized to select the optimal subgroup of UEs. The SL measure captures the convergence contribution of the local models by combining two existing methods, i.e., the staleness method and the training loss method.
The staleness method, also referred to as the age of update (AoU) \cite{AoU}, measures convergence in terms of model training times. It records the number of rounds ${T}_k[t]$
since UE $k$ last uploaded its model before the $t$-th round,
written as ${T}_k[t]=({T}_k[t-1]+1)(1-S_k[t-1])$. The staleness method leverages ${T}_k[t]$ to avoid over-training and under-training \cite{chen2019deep}.
Moreover, the training loss method records the training loss for all the UEs, where the training loss for UE $k$ is $\mathcal{L}_k[t] = F_k(\mathbf{w}^t)$ in the $t$-th communication round in \eqref{perloss}.
Based on the above two methods, we introduce the staleness-loss (SL) measure by combining the staleness and the training loss. The set of SL values for all UEs in the $t$-th communication round can be written as:
\begin{equation*}
\mathcal{C}[t]=\{C_1[t],C_2[t],\cdots,C_k[t],\cdots, C_K[t]\}
\end{equation*}
where $C_k[t] = T_k[t] \mathcal{L}_k[t]$.
Then, we introduce the convergence reference (CR) function based on the SL value as
\begin{equation}
T_\text{L}[t]=
\frac{(\sum^{K}_{k=1}{D_k V_k[t] S_k[t]})^{1-\beta}}{1-\beta},
\label{Combined_Cov}
\end{equation}
where $\beta \in (0,1)$ is a constant that adjusts the sensitivity to changes of $\sum^{K}_{k=1}{D_k V_k[t] S_k[t]}$, and $D_k$ is the size of the user data. $V_k[t] \in \{T_k[t], \mathcal{L}_k[t], C_k[t]\}$ denotes the measure used in the CR function. Then, in the $t$-th communication round, the objective in \eqref{ooptim1} can be re-written as:
\begin{subequations}\label{optimfun}
\begin{align}
\label{optim1} \min \limits_{\mathcal{S}[t]} \quad & - T_{\text{L}}[t] + \zeta E_{\text{P}}[t], \ t \in \{0,1,..,T\} & \\
& \mathrm{s.t.} \ \eqref{ooptim2}, \ \eqref{ooptim3} \nonumber &
\end{align}
\end{subequations}
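
The following short Python sketch illustrates how the staleness update, the SL measure and the CR value of \eqref{Combined_Cov} could be evaluated for a given scheduling vector; all variable names and numbers are illustrative assumptions.
\begin{verbatim}
import numpy as np

# Sketch of the staleness update, the SL measure and the CR function in
# Eq. (8); all names and numbers are illustrative.
def update_staleness(T_prev, S_prev):
    """T_k[t] = (T_k[t-1] + 1)(1 - S_k[t-1]), evaluated for all UEs at once."""
    return (T_prev + 1) * (1 - S_prev)

def cr_value(D, V, S, beta=0.7):
    """CR function T_L[t] of Eq. (8) for measure V_k[t] and 0/1 schedule S."""
    x = np.sum(D * V * S)
    return x**(1.0 - beta) / (1.0 - beta)

# Example with K = 4 UEs (made-up numbers).
D = np.array([600., 1200., 300., 900.])   # local dataset sizes D_k
T = np.array([2., 0., 5., 1.])            # staleness T_k[t]
L = np.array([0.8, 0.5, 1.1, 0.6])        # training losses L_k[t]
C = T * L                                 # SL measure C_k[t]
S = np.array([1, 0, 1, 0])                # schedule with N = 2 selected UEs
print(cr_value(D, C, S))
\end{verbatim}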
\subsection{The Sliding Differential Evolution (SDE) Concept}
The optimization problem in \eqref{optimfun} is NP-hard. Differential evolution (DE) \cite{storn1996usage} is a common method for this kind of problem, where DE generates $M$ individuals whose chromosome length equals $K$, i.e., the dimensionality of \eqref{optim1}. We assume there are $G_{\text{DE}}$ generations in DE, and the DE algorithm terminates after $G_{\text{DE}}$ iterations. The execution time is proportional to the cost $c(K)$ of one objective function evaluation of \eqref{optimfun} \cite{opara2019differential} with dimensionality $K$, and the number of elementary operations is proportional to the maximal iteration number $G_{\text{DE}}$ and the population size, i.e., $\mathcal{O}(c(K) \cdot M \cdot G_{\text{DE}})$. However, traditional DE methods suffer from heavy computational complexity when the scale of \eqref{optimfun} increases. Therefore, we propose the concept of sliding differential evolution (SDE) to decrease the computational complexity by reducing the chromosome length from $K$ to $W$ and the number of generations from $G_{\text{DE}}$ to $G_{\text{SDES}}$ with parallel computation, where $W$ is the length of the energy windows and $G_{\text{SDES}}$ is the number of generations in SDE. Consequently, its complexity is $\mathcal{O}(c(W) \cdot M \cdot G_{\text{SDES}}).$
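
To make the complexity comparison concrete, the small Python sketch below evaluates the exhaustive-search count $C(K,N)$, the window-dependent generation budget $G_{\text{SDES}}$ used by SDES, and rough operation counts for DE and SDE; the chosen values of $W$, $M$ and $G_{\text{DE}}$ and the proxy $c(K)\propto K$ are assumptions for illustration only.
\begin{verbatim}
from math import comb, ceil

# Rough operation-count comparison between DE and SDE.
# W, M, G_DE and the proxy c(K) ~ K are assumptions for illustration.
K, N, W = 100, 25, 40      # UEs, subchannels, window length
M, G_DE = 50, 200          # population size, DE generation budget

print(comb(K, N))          # exhaustive search count C(100,25) ~ 2.4e23

G_SDES = min(ceil(comb(K, W) / M), G_DE)   # generation budget per window

ops_DE   = K * M * G_DE                    # ~ c(K) * M * G_DE with c(K) ~ K
ops_SDES = W * M * G_SDES                  # per window; windows run in parallel
print(G_SDES, ops_DE, ops_SDES)
\end{verbatim}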
\subsection{Sliding Differential Evolution-based Scheduling (SDES)}
The sliding differential evolution-based scheduling (SDES) algorithm is shown in Algorithm~\ref{alg:SDES} and its process is summarized in Fig.~\ref{SWpic}. SDES takes the steps as follows:
\begin{itemize}
\item {\bf a)} Energy windows generation:
We first leverage a sliding window (SW) of length $W$ to generate $K-W+1$ energy windows, where $N \le W \le K$, as sketched in the code example after this list. Specifically, we first sort all UEs in ascending order of their energy consumption, and then align the head of the SW with the first UE to select the first $W$ UEs as one window.
Similarly, we slide the window towards the end of the queue to obtain the remaining $K-W$ energy windows.
\item {\bf b)} Alternative individuals evolution:
In each window, we utilize DE to evolve one alternative individual, i.e., one scheduling scheme with the minimal value of \eqref{optim1} over the $W$ UEs, where the chromosome length is $W$. We conduct the DE operations in the $K-W+1$ energy windows in parallel, and generate $K-W+1$ alternative individuals.
\item {\bf c)} Optimal solution selection:
We select the optimal individual, i.e., the best solution of \eqref{optim1} from the $K-W+1$ alternative individuals.
\end{itemize}
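
A minimal Python sketch of step {\bf a)} is given below; the per-UE energies are random placeholders, and the variable names are ours.
\begin{verbatim}
import numpy as np

# Step a) sketch: build the K-W+1 energy windows over UEs sorted by energy.
# The per-UE energies are random placeholders.
rng = np.random.default_rng(0)
K, W = 100, 40                      # number of UEs and window length (assumed)
E = rng.uniform(0.1, 1.0, size=K)   # E_TP + kappa * E_CP per UE (placeholder)

order = np.argsort(E)               # UE indices in ascending order of energy
windows = [order[w:w + W] for w in range(K - W + 1)]
# Each window holds the UE indices handed to one parallel DE instance (step b).
\end{verbatim}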
\begin{figure}[t]
\centering
\includegraphics[height=8cm]{img/pdf/Sliding_Window.pdf}
\caption{Sliding differential evolution algorithm}
\label{SWpic}
\end{figure}
\begin{algorithm}[t]
\caption{ SDES }
\label{alg:SDES}
{\bf Parameters:}
$W$: the length of the SW in \eqref{optim1}; $M$: the population size; $f_{CR}$: the crossover rate; $F$: the mutation weighting factor; $G_{\text{SDES}}$: the number of generations\\
{\bf Input:}
Optimization problem \eqref{optim1}; $K-W+1$ energy windows
\begin{algorithmic}[1]
\For{$w=1,2,\cdots,K-W+1$} \qquad \%\% {\textbf{Step a}}
\newline
\qquad \textit{Initialization:}
\State Generate Initial population $P^{0,w}$ with $M$ individuals
\For{$g=0,1,\cdots,G_{\text{SDES}}-1$} \qquad \quad \%\% {\textbf{Step b}}
\State Calculate the fitness value $Q(\cdot)$ of Generation $P^{g,w}$
\For{each individual $\mathbf{x}^{(g,w)}_i$ in Generation $P^{g,w}$}
\newline
\qquad \qquad \textit{(a) Mutation:}
\State Select three individuals $r1\neq r2\neq r3$ via RWS
\State $\mathbf{v}^{(g,w)}_i = \mathbf{x}^{(g,w)}_{r1}+F \cdot (\mathbf{x}^{(g,w)}_{r2} - \mathbf{x}^{(g,w)}_{r3}) $
\newline
\qquad \qquad \textit{(b) Crossover:}
\State $\mathbf{u}^{(g,w)}_i=\mathbf{x}^{(g,w)}_i$
\State $j$ randomly selected from $\{1,2,...,W\}$, $L=1$
\Repeat
\State $u^{(g,w)}_{i,j} =v^{(g,w)}_{i,j}$
\State $j = (j+1)$ modulo $W$
\State $L = L + 1$
\Until{$rand(0,1)<f_{CR}$ and $L<W$}
\newline
\qquad \qquad \textit{(c) Selection:}
\If{$Q(\mathbf{u}^{(g,w)}_i) \ge Q(\mathbf{x}^{(g,w)}_i)$}
\State add $\mathbf{u}^{(g,w)}_i$ in the next generation $P^{g+1,w}$
\Else{ add $\mathbf{x}^{(g,w)}_i$ in the next generation $P^{g+1,w}$}
\EndIf
\EndFor
\EndFor
\EndFor
\State Add the best individual in the population $P^{G_{\text{SDES}},w}$ to the alternative individual list
\end{algorithmic}
{\bf Output:}
The optimal individual from $K-W+1$ alternative individuals
\qquad \qquad \qquad \qquad \qquad \qquad \quad \%\% \textit{\textbf{Step c}}
\end{algorithm}
For the DE operations in each energy window in the above {\bf Step b)}, we define the number of generations as $G_{\text{SDES}}=\min\{\lceil \frac{C(K,W)}{M} \rceil, G_{\text{DE}} \}$, where $M$ is the number of individuals and $G_{\text{DE}}$ is the number of evolution generations in the traditional DE algorithm. Each individual $\mathbf{x}^{(g,w)}_{i}$ represents one solution of \eqref{optim1}. For instance, in the $w$-th energy window, the agent first generates the initial population $P^{0,w}=\{\mathbf{x}^{(0,w)}_1, \mathbf{x}^{(0,w)}_2,...,\mathbf{x}^{(0,w)}_M\}$. Each $\mathbf{x}^{(0,w)}_i=\{x^{(0,w)}_{i,1},x^{(0,w)}_{i,2},...,x^{(0,w)}_{i,W}\}$ meets the constraints on $S_k[t]$ in \eqref{ooptimfun}, where \eqref{ooptim3} is rewritten as $\sum_{j=1}^{W} x^{(0,w)}_{i,j} = N$. Any individual violating the constraints of \eqref{ooptimfun} is abandoned. Then each individual $\mathbf{x}^{(g,w)}_i$ of the $g$-th generation in the $w$-th energy window generates offspring through three steps, given as
\begin{itemize}
\item {\bf Mutation:} We choose three individuals from $P^{g,w}$ via roulette wheel selection (RWS) to generate $\mathbf{v}^{(g,w)}_i$.
\item {\bf Crossover:} We cross the current individual $\mathbf{x}^{(g,w)}_i$ with $\mathbf{v}^{(g,w)}_i$ and then generate $\mathbf{u}^{(g,w)}_i$.
\item {\bf Selection:} We choose the appropriate offspring between $\mathbf{u}^{(g,w)}_i$ and $\mathbf{x}^{(g,w)}_i$ by comparing their fitness values.
\end{itemize}
The RWS in the mutation step associates the probability of selecting individual $x$ with the fitness function, as $p(x)={\frac {Q(x)}{\sum_{j=1}^{M}Q(j)}}$.
The fitness function $Q(\cdot)$ is transformed from the optimization objective in \eqref{optim1} via linear scaling as follows
\begin{subequations}\label{DEfunc}
\begin{align}
\label{DEfunc1} & Q(\mathbf{x}^{(g,w)}_{i})=\alpha_1 \cdot O(\mathbf{x}^{(g,w)}_{i})+ \beta_1 & \\
\label{DEfunc2} \text{where} \ \ & O(\mathbf{x}^{(g,w)}_{i}) = -(-T_{\text{L}}[t] +\zeta E_{\text{P}}[t])\bigg|_{S_k[t] \in \mathbf{x}^{(g,w)}_{i}} & \\
\label{DEfunc3} &\alpha_1 = \frac{O_{\text{avg}}}{O_{\text{avg}}-O_{\text{min}}}, \ O_{\text{avg}} = \frac{1}{M} \sum_{j=1}^{M} {O(\mathbf{x}^{(g,w)}_{j})} & \\
\label{DEfunc4} & \beta_1 = \frac{-O_{\text{min}}O_{\text{avg}}}{O_{\text{avg}}-O_{\text{min}}}, \ O_{\text{min}} = \min \limits_{j=1,...,M} {O(\mathbf{x}^{(g,w)}_{j})}&
\end{align}
\end{subequations}
and \eqref{DEfunc2} takes the negative of the objective in \eqref{optim1}, so that minimizing \eqref{optim1} corresponds to maximizing the fitness.
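
For illustration, the linear scaling of \eqref{DEfunc} and the RWS probabilities can be sketched as follows in Python; the function names are ours and the sketch assumes the objective values $O(\mathbf{x}^{(g,w)}_i)$ of the current population are already available.
\begin{verbatim}
import numpy as np

# Linear fitness scaling of Eq. (10) and RWS probabilities; the sketch
# assumes the objective values O(x_i) of the population are given.
def fitness(O_values):
    """Q = alpha1 * O + beta1: worst individual -> 0, average -> O_avg."""
    O = np.asarray(O_values, dtype=float)
    O_avg, O_min = O.mean(), O.min()
    if np.isclose(O_avg, O_min):          # degenerate population
        return np.ones_like(O)
    alpha1 = O_avg / (O_avg - O_min)
    beta1 = -O_min * O_avg / (O_avg - O_min)
    return alpha1 * O + beta1

def rws_probabilities(Q):
    """Selection probabilities p(x) = Q(x) / sum_j Q(j)."""
    Q = np.asarray(Q, dtype=float)
    return Q / Q.sum()
\end{verbatim}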
\subsection{Two Cases of SDES: $W$=$K$ and $W$=$N$}
The computational complexity of the DE algorithm is $\mathcal{O}(c(K) \cdot M \cdot G_{\text{DE}})$, while that of SDES is $\mathcal{O}(c(W) \cdot M \cdot G_{\text{SDES}})$, where $G_{\text{SDES}}=\min\{\lceil \frac{C(K,W)}{M} \rceil, G_{\text{DE}} \}$. Since $W \le K$ and $G_{\text{SDES}} \le G_{\text{DE}}$, with equality only when $W=K$, SDES decreases the computational complexity compared with DE.
However, the performance of the scheduling policy generated by SDES degrades as $W$ decreases.
To investigate this trade-off, we analyse two extreme cases of SDES, i.e., $W$=$K$ and $W$=$N$. More specifically, when $W$=$K$, there is only one energy window in SDES, and the SDES algorithm generates the best solution of \eqref{optim1} at the highest computational cost. When $W$=$N$, all the UEs in one energy window are selected as the scheduling policy and there is no need to run DE, so SDES generates the worst solution at the lowest computational cost.
\section{Simulation Results}
\begin{table}[t]
\begin{center}
\caption{Parameter Settings}
\label{tal1}
\begin{tabular}{|p{50pt}|p{200pt}|p{150pt}|}
\hline
Symbol & Parameters & Value \\ \hline
$N$, $K$ & Number of subchannels, UEs & 25, 100 \\ \hline
$N_0$, $B$, $R$ & Noise, bandwidth, threshold transmission rate& $10^{-8}$ W, 10Mbps, 500Kbps \\ \hline
$\eta$, $\text{J}$, $\text{D}$ & Learning rate, model and data size & 0.1, 86.6 KB, 47.04 MB \\ \hline
$\text{f}$, $\alpha$, $\text{C}$ & CPU frequency, capacitance coefficient, cycles to execute & 2GHz, $2*10^{-28}$, 20 cycle/bit\\ \hline
$L(d_{k,n})$ & Path loss of Rayleigh fading & $99.3+20\log{d_{k,n}}$ \\ \hline
$d_{k,n}$ & Distribution of UE $k$ & Uniform in [5,50] m \\ \hline
\end{tabular}
\end{center}
\end{table}
In this simulation, we adopt the orthogonal frequency division multiple access (OFDMA) system,
and the details of the system are summarized in Table \ref{tal1}.
We assume all $N$ sub-channels share the same bandwidth of $\frac{B}{N}$.
In FL training, the task is to classify handwritten digits using the MNIST dataset. The dataset distribution over the UEs is unbalanced and non-IID, where unbalanced means that the dataset size varies greatly between different UEs.
The training model is a 6-layer convolutional neural network (CNN) consisting of two 5$\times$5 convolution layers with rectified linear unit (ReLU) activation.
The two convolution layers have 10 and 20 channels respectively, each followed by 2$\times$2 max pooling; they are followed by a fully-connected layer with 50 units and ReLU activation, and a log-softmax output layer.
Next, we validate the overall performance of SDES with respect to energy saving and model convergence through the CR function in \eqref{Combined_Cov}, where SDES ($W$=$K$) and SDES ($W$=$N$) are examined. The measure $V_k[t]$ in the CR function can be selected from $\{T_k[t], \mathcal{L}_k[t],C_k[t]\}$.
We adopt the FedAvg from Google \cite{mcmahan2016communication} as the benchmark and set $\zeta=5$.
In detail, when the weight factor $\zeta > 5$, SDES focuses more on energy saving and consequently improves energy efficiency, at the expense of convergence performance. When $\zeta < 5$, the model convergence improves while the energy efficiency deteriorates.
\begin{figure}[t]
\centering
\subfloat[Training loss ($W$=$N$)]{
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=3 in]{img/pdf/1-1.pdf}
\end{minipage}} %
\subfloat[Training loss ($W$=$K$)]{
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=3 in]{img/pdf/1-3.pdf}
\end{minipage}}
\\
\subfloat[Energy consumption ($W$=$N$)]{
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=3 in]{img/pdf/1-2.pdf}
\end{minipage}}%
\subfloat[Energy consumption ($W$=$K$)]{
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=3 in]{img/pdf/1-4.pdf}
\end{minipage}}
\\
\subfloat[Model convergence]{
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=3 in]{img/pdf/All_100_Loss.pdf}
\end{minipage}}%
\subfloat[Total energy consumption]{
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=3 in]{img/pdf/100_ALL_Power.pdf}
\end{minipage}}
\caption{ SDES of $V_k[t] \in \{T_k[t], \mathcal{L}_k[t], C_k[t]\}$ ($\beta = 0.7, \ \zeta = 5$)}
\label{4way-eta0.5}
\end{figure}
\begin{figure}[t]
\subfloat[Instantaneous energy consumption]{
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=3 in]{img/pdf/2-2.pdf}
\end{minipage}} %
\subfloat[Optimization function value \eqref{optim1}]{
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[width=3 in]{img/pdf/2-3.pdf}
\end{minipage}}
\caption{ SDES ($W \in \{K,N\}$) ($\beta = 0.7, \ \zeta = 5,\ V_k[t]=C_k[t]$)}
\label{combined-0.5}
\end{figure}
Fig.~\ref{4way-eta0.5} shows the performance gain of the proposed $C_k[t]$ measure in \eqref{Combined_Cov}, compared with the staleness $T_k[t]$ and the training loss $\mathcal{L}_k[t]$. One can see
in Fig.~\ref{4way-eta0.5}(a) and (b) that SDES with $C_k[t]$ achieves convergence close to the optimal solution (i.e., FedAvg), which only considers the model convergence. FedAvg may often train the models of UEs with bad channel conditions or with large dataset sizes, making it suffer from a huge energy expense. Both cases converge fast at the beginning and also perform well towards the end.
In Fig.~\ref{4way-eta0.5}(c) and (d), we compare the cumulative energy consumption among the three measures. SDES with $\mathcal{L}_k[t]$ has the lowest cumulative energy consumption but poor convergence performance, as shown in Fig.~\ref{4way-eta0.5}(a).
Moreover, SDES ($W$=$K$) and ($W$=$N$) with $C_k[t]$ achieve the first and second best performance in energy saving, as they consider both $\mathcal{L}_k[t]$ and $T_k[t]$.
Fig.~\ref{4way-eta0.5}(e) and (f) show that SDES achieves the best performance in energy conservation, and that SDES with $C_k[t]$ and $W$=$K$ has a similar convergence performance to FedAvg.
Fig.~\ref{combined-0.5} further analyses the instantaneous performance of SDES ($W$=$K$) and SDES ($W$=$N$) in terms of energy saving, where $C_k[t]$ is applied in the CR function. One can see that both cases perform well in energy saving compared with FedAvg.
Moreover, one sees that the performance of SDES ($W$=$K$) with respect to model convergence in Fig.~\ref{4way-eta0.5} and energy saving in Fig.~\ref{combined-0.5}(a) is better than that of SDES ($W$=$N$). This is because more computational resources are required in the case of $W$=$K$, as explained in Section III.D.
The proposed SDES can be extended to more general cases, where the UEs are mobile with time-varying channels or several APs are deployed in FL. In the former case, the energy consumption of the UEs changes constantly. In the latter case, the UEs send their trained models to the appropriate APs considering the channel conditions, and the APs then centralize all the data into one AP for the global model training. Compared with the benchmark solution FedAvg, SDES bears an acceptable computational complexity in real-time applications. Moreover, the weight factor should be chosen carefully, since a bad choice may lead to unacceptable model convergence performance.
\section{Conclusion}
In this paper, we have proposed a novel energy-efficient scheduling policy, i.e., SDES, for federated learning in bandwidth-limited systems with energy-limited UEs. We have utilized the CR function for model convergence and introduced the SDES algorithm, which reduces the computational complexity with a parallel computing architecture. Simulations show that the proposed SDES performs well in model convergence, and that it can significantly reduce the energy consumed by UEs compared with the benchmark solution in bandwidth-limited networks.
In the future, we will focus on energy efficiency in more practical federated learning scenarios in wireless communication, where the dataset contains complicated real information and the number of UEs is extended to the order of thousands.
\bibliographystyle{IEEEtran}
\section{Introduction}
With the realization of masers and lasers quantum optics has proved fertile ground for thermodynamic research in open quantum systems.
An archetypal example is the Scovil-Schulz-DuBois heat engine based on a three-level maser driven by two heat baths of different temperatures \cite{ScovilPRL1959}. This system served as a model to develop a variety of approaches for the microscopic description of
quantum systems in contact with thermal baths in interaction with classical \cite{LambPR1964,GevaPRE1994,GevaJCP1996,BoukobzaPRA2006b,ScullyPNAS2011} or quantized light fields \cite{BoukobzaPRA2006a,GhoshPNAS2018,NiedenzuQuantum2019}.
Of key relevance is the formulation of work and heat
in the quantum realm.
Refs.~\cite{PuszCommMathPhys1978,AlickiJPA1979} defined work flow (power) and heat
flow by partitioning the time derivative of the expectation value of the
\textit{full} Hamiltonian, i.e. including the time-dependent interaction with
classical degrees of freedom, such as a microwave field. Later, Boukobza and Tannor \cite{BoukobzaPRA2006b} proposed an alternative
definition of power and heat flow by restricting to the \textit{bare} Hamiltonian, which describes the system itself
and lacks explicit time dependence. This is sometimes conceptually simpler and was, e.g., also used in Refs.~\cite{KirsanskasPRB2018,JaseemPRE2020}. The authors argued \cite{BoukobzaPRL2007} that the bare
heat flows always provide a positive entropy production \cite{SpohnJMP1978} for the three-level maser,
while this was not the case for full heat flows in their treatment, which thus may violate the second law of thermodynamics.
(We use full and bare in the sense that they
relate to the Hamiltonian from which the flows are derived.)
The correct definition of heat and work is actually still an open issue, see, e.g., the discussion on page 339 of \cite{GelbwaserAdvAtomMolPhys2015}, where further references are given.
In this work we study the definitions for work and heat for the three-level maser coupled to a classical microwave field, where the bath couplings are treated by a Lindblad dissipator as outlined in Sec.~\ref{sec:system}.
Sec.~\ref{sec:retrace} focuses on the different definitions of heat and work, where we essentially follow Ref.~\cite{BoukobzaPRL2007},
which found a violation of positive entropy production for the full approach.
In Sec.~\ref{SecEffectiveEnergies} we present a reformulation of their expressions, which
allows for the correct identification of energies supplied by the baths. Using these
we recover a strictly positive entropy production for the full heat flows.
\section{The system}
\label{sec:system}
We consider the three-level system of Scovil and Schulz-DuBois \cite{ScovilPRL1959} consisting of an upper ($u$) and lower ($l$) maser level and the
ground level ($g$), see Fig.~\ref{fig:atom}.
Throughout this article we set $\hbar=k_B=1$ in order to simplify the notation.
The full system Hamiltonian, $H = H_0 + V(t)$,
consists of the bare Hamiltonian $H_0 = \omega_u\sigma_{uu}+\omega_l\sigma_{ll}$
and a modulating external field
$V(t) = \epsilon ({\rm e}^{{\rm i}\omega_d t}\sigma_{lu}+{\rm e}^{-{\rm i}\omega_d t}\sigma_{ul})$,
where $\epsilon$ is the strength of the driving field, $\omega_d$ its modulating frequency, and we use the operators $\sigma_{ij} = \ket{i}\bra{j}$.
Without loss of generality the energy of the ground state $\ket{g}$ is set to zero.
The three-level system is connected to two bosonic reservoirs (baths), which are labeled by $\alpha$, where $\alpha\in\{u,l\}$.
The bath $\alpha$
couples to the transition $\ket{g}\leftrightarrow\ket{\alpha}$ with strength $\gamma_\alpha$, where
an average number of excitations $n_\alpha$ is available in the bath.
The model and the analysis of its steady-state behavior summarized below
follow recent work in Refs.~\cite{BoukobzaPRA2006b,BoukobzaPRL2007,JaseemPRE2020}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{entropy_maser_figure1.pdf}
\caption{Energy diagram of the three-level maser subjected to a modulating field (dotted arrow), where the transitions $g\leftrightarrow u$ and $g\leftrightarrow l$ (full arrows) are coupled to different reservoirs.
A finite value of $\Delta = \omega_d - (\omega_u - \omega_l)$ reflects the detuning between the modulating field and the energy level difference.}
\label{fig:atom}
\end{figure}
The time evolution of the system density operator $\rho$ is assumed to be Markovian
and governed by the Lindblad master equation \cite{LindbladCMP1976}
\begin{equation}
\dot\rho = -{\rm i}[H(t),\rho]+\mathcal{L}_u[\rho]+\mathcal{L}_l[\rho]
\label{eq:system:lindblad}
\end{equation}
where the coupling to the baths are described by
$\mathcal{L}_\alpha[\rho] = \gamma_\alpha n_\alpha \mathcal{D}_{\sigma_{\alpha g}}[\rho]+\gamma_\alpha (n_\alpha+1)\mathcal{D}_{\sigma_{g\alpha}}[\rho]$
with the dissipator $\mathcal{D}_\sigma[\rho] = \sigma\rho\sigma^\dagger-
\frac{1}{2}\left\{\sigma^\dagger\sigma\rho+\rho\sigma^\dagger\sigma\right\}$.
To simplify the master equation we remove the time dependence of the Hamiltonian
by transforming the system to a rotating frame \cite{BoukobzaPRL2007,JaseemPRE2020}. For $X = \omega_l\sigma_{ll}+(\omega_l+\omega_d)\sigma_{uu}$, we define
$A^\textrm{rot} = U(t) A U^\dagger(t)$ according to the unitary operator
$U(t) = {\rm e}^{{\rm i} Xt}$.
While the dissipative terms are unaffected by the choice of the rotating frame,
the unitary part of the quantum evolution is determined by the Hamiltonian
\begin{equation}
\widetilde{H} = H^\mathrm{rot}-X = -\Delta\sigma_{uu} + \epsilon(\sigma_{ul}+\sigma_{lu})
\end{equation}
with the detuning parameter $\Delta = \omega_d - (\omega_u-\omega_l)$.
Solving Eq.~\eqref{eq:system:lindblad} for the steady state
in the rotating frame (details are given in App.~\ref{AppSystem})
yields the net transition rate $R_{u\rightarrow l}$ from the upper to the lower level
\begin{equation}
R_{u\rightarrow l}= \frac{A(\gamma_u,\gamma_l,n_u,n_l,\epsilon)}{F(\gamma_u,\gamma_l,n_u,n_l,\epsilon,\Delta)}(n_u-n_l)
\label{EqExpressionR}
\end{equation}
where $A$ and $F$ are both positive \cite{BoukobzaPRL2007}, see Eq.~\eqref{EqExpressionAF}. Thus
$R_{u\rightarrow l}$ has the same sign as the difference $n_u-n_l$ between bath occupations which is driving the transitions.
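
For readers who wish to reproduce the steady state numerically, a minimal Python/NumPy sketch is given below. It builds the Liouvillian of Eq.~\eqref{eq:system:lindblad} in the rotating frame, solves for the stationary density matrix, and extracts $R_{u\rightarrow l}$ from the stationarity of the upper-level population, i.e. from the net pumping by bath $u$. The basis ordering $\{\ket{g},\ket{l},\ket{u}\}$ and all parameter values are illustrative choices of ours.
\begin{verbatim}
import numpy as np

# Minimal sketch: steady state of the driven three-level maser in the
# rotating frame.  Basis ordering {|g>, |l>, |u>} and all parameter
# values are illustrative choices, not taken from the text.
dim = 3
g, l, u = 0, 1, 2

def op(i, j):                     # sigma_{ij} = |i><j|
    m = np.zeros((dim, dim), dtype=complex)
    m[i, j] = 1.0
    return m

w_u, w_l, w_d = 1.5, 0.5, 1.02    # level energies and drive frequency
eps = 0.05                        # drive strength
gam_u, gam_l = 0.02, 0.02         # bath couplings
n_u, n_l = 2.0, 0.5               # bath occupations
Delta = w_d - (w_u - w_l)         # detuning

# rotating-frame Hamiltonian, Eq. (2)
H = -Delta * op(u, u) + eps * (op(u, l) + op(l, u))

def dissipator(A, rate):
    """Column-stacked superoperator of rate * D_A[rho]."""
    I = np.eye(dim)
    AdA = A.conj().T @ A
    return rate * (np.kron(A.conj(), A)
                   - 0.5 * np.kron(I, AdA)
                   - 0.5 * np.kron(AdA.T, I))

I = np.eye(dim)
Liou = -1j * (np.kron(I, H) - np.kron(H.T, I))
for a, gam, n in [(u, gam_u, n_u), (l, gam_l, n_l)]:
    Liou += dissipator(op(a, g), gam * n)          # excitation |g> -> |a>
    Liou += dissipator(op(g, a), gam * (n + 1))    # decay      |a> -> |g>

# steady state: Liou vec(rho) = 0 together with Tr(rho) = 1
trace_row = np.eye(dim).flatten(order='F')
Amat = np.vstack([Liou, trace_row])
b = np.zeros(dim**2 + 1, dtype=complex)
b[-1] = 1.0
rho = np.linalg.lstsq(Amat, b, rcond=None)[0].reshape((dim, dim), order='F')

# In steady state the drive-induced net rate u -> l balances the net
# pumping of bath u into the upper level (cf. Eq. (3) and Eq. (6)):
R_ul = (gam_u * n_u * rho[g, g] - gam_u * (n_u + 1) * rho[u, u]).real
print("R_u->l =", R_ul)
\end{verbatim}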
\section{Work, heat, and entropy}
\label{sec:retrace}
Let the average energy in the system be $\nangle{E} = \mathrm{Tr}\{\rho H\}$.
The typical definitions of full power and full heat flows in the density matrix formalism are \cite{PuszCommMathPhys1978,AlickiJPA1979}
\begin{equation}
P = \dot{W} = \mathrm{Tr}\{\rho\dot{H}\},\quad\dot{Q} = \mathrm{Tr}\{\dot{\rho} H\}
\label{eq:typical}
\end{equation}
where we use the convention that positive values of $P$ and $\dot{Q}$ correspond to an increase of energy in the system.
Alternatively, some authors define the
work and heat for systems coupled to a time-dependent external field
on the basis of the bare Hamiltonian,
$\nangle{E_0} = \mathrm{Tr}\{\rho H_0\}$ \cite{BoukobzaPRA2006b,KirsanskasPRB2018}.
Based on the first law of thermodynamics the bare flows are identified from
\begin{equation}
\dot{E_0} =-{\rm i}\mathrm{Tr}\{\rho[H_0,V(t)]\}+\sum_{\alpha\in\{u,l\}}\mathrm{Tr}\{\mathcal{L}_\alpha[\rho]H_0\}
\label{EqHeatWorkBareHam}
\end{equation}
where the first (unitary) term is interpreted as the bare power $P_0$ and the second (dissipative)
term as the sum of bare heat flows $\dot{Q}_{0\alpha}$ from the respective baths to the system.
These terms can be either evaluated in the original or the rotating frame due to the invariance of the trace under cyclic permutations of operators. (This is simpler compared to the first definition with full power and heat flow Eq.~\eqref{eq:typical}, where the transformations
$[\dot{A}]^\textrm{rot}\neq \mathrm{d}A^\textrm{rot}/\mathrm{d} t$ for $A=H,\rho$ are more involved.)
From these definitions the steady state bare power and heat flow become (see App.~\ref{AppBare})
\begin{equation}
\begin{split}
P_0 &= -R_{u\rightarrow l}(\omega_u-\omega_l)\\
\dot{Q}_{0u} &= +R_{u\rightarrow l}\omega_u\\
\dot{Q}_{0l} &= -R_{u\rightarrow l}\omega_l
\label{eq:boukobza}
\end{split}
\end{equation}
We note that the bare power and heat flow correspond to the net transition rate
$R_{u\rightarrow l}$ multiplied with the respective bare transition energies from $H_0$.
The second law of thermodynamics requires a positive definite entropy production.
Spohn's entropy production function for the engine reads \cite{SpohnJMP1978}
\begin{equation}
\sigma = \frac{\partial S}{\partial t} - \frac{\dot{Q}_u}{T_u} - \frac{\dot{Q}_l}{T_l}
\label{eq:spohn}
\end{equation}
where $S=-\mathrm{Tr}\{\rho\ln\rho\}=-\mathrm{Tr}\{\rho^\textrm{rot}_\textrm{steady state}\ln\rho^\textrm{rot}_\textrm{steady state}\}$
is the von Neumann entropy \cite{VonNeumannBook1932} of the three-level system,
which is constant in steady state.
The temperatures of the baths are commonly related to the mean occupations as
\begin{equation}
T_\alpha = \frac{\omega_\alpha}{\log\left(1+\frac{1}{n_\alpha}\right)}
\label{eq:temperature}
\end{equation}
by using the appropriate Bose distribution function.
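Explicitly, with the Bose occupation $n_\alpha=\left[{\rm e}^{\omega_\alpha/T_\alpha}-1\right]^{-1}$ of the bath mode at the transition energy $\omega_\alpha$, one has ${\rm e}^{\omega_\alpha/T_\alpha}=1+1/n_\alpha$, which inverts to Eq.~\eqref{eq:temperature}.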
Using the bare heat flows from Eq.~\eqref{eq:boukobza} Boukobza and Tannor found \cite{BoukobzaPRL2007}
\begin{equation}
\sigma_0 = R_{u\rightarrow l}\left[\log\left(1+\frac{1}{n_l}\right)-\log\left(1+\frac{1}{n_u}\right)\right] > 0
\label{eq:btentropy}
\end{equation}
which is positive definite as both factors have the same sign of $(n_u-n_l)$, see Eq.~\eqref{EqExpressionR}.
In contrast, using the full heat flows from Eq.~\eqref{eq:typical} together with the temperatures from Eq.~\eqref{eq:temperature},
Boukobza and Tannor \cite{BoukobzaPRL2007} detected a negative entropy production for some operation points.
This suggested that the definition of work and heat based on the bare Hamiltonian \eqref{EqHeatWorkBareHam}
should be preferred.
\section{Resolution by effective energies}\label{SecEffectiveEnergies}
Here, we rewrite the results for the full power and heat flows evaluated from Eq.~\eqref{eq:typical} in the form:
\begin{equation}
\begin{split}
P &= -R_{u\rightarrow l} \omega_d,\\
\quad\dot{Q}_u &= +R_{u\rightarrow l} \tilde{\omega}_u,\\
\quad\dot{Q}_l &= -R_{u\rightarrow l} \tilde{\omega}_l
\label{eq:typflux}
\end{split}
\end{equation}
with effective energies (see App.~\ref{AppFull})
\begin{equation}
\begin{split}
\tilde{\omega}_{u}=\omega_{u}+\frac{\Delta\gamma_{u}(n_{u}+1)}{\gamma_u(n_u+1)+\gamma_l(n_l+1)}\\
\tilde{\omega}_{l}=\omega_{l}-\frac{\Delta\gamma_{l}(n_{l}+1)}{\gamma_u(n_u+1)+\gamma_l(n_l+1)}
\end{split}
\label{EqErenorm}
\end{equation}
Comparing Eq.~\eqref{eq:boukobza} with Eq.~\eqref{eq:typflux} we note that all flows are proportional to the transition rate $R_{u\to l}$, describing the round trip rate of the engine. However, there are different
energy factors in each term.
For vanishing detuning, $\Delta = 0$, the respective energy factors in Eq.~\eqref{eq:boukobza} and Eq.~\eqref{eq:typflux} agree. Here, the heat fluxes from the baths are determined by the level energies
$\omega_\alpha$ and the power transferred from the light field is given by the photon energy $\omega_d$, as expected.
However, for finite detuning, i.e.\ $\Delta =\omega_d-\omega_u+\omega_l\neq 0$, energy conservation does not allow for this structure, and the full and the bare approach provide different remedies: In the bare approach based on Eq.~\eqref{EqHeatWorkBareHam}, the power supplied by the ac field changes its energy factor, $\omega_d\to \omega_u-\omega_l$, see Eq.~\eqref{eq:boukobza}. This appears unphysical,
as a quantized ac field should exchange energy in portions of $\hbar \omega_d$, and it thus may result in an error of the order
$\Delta R_{u\to l}$ in the power. In contrast, in the full approach, the bare level energies are replaced by effective ones,
$\omega_\alpha \to \tilde{\omega}_\alpha$, see Eq.~\eqref{eq:typflux}, which satisfy $\omega_d=\tilde{\omega}_u-\tilde{\omega}_l$, so that energy conservation holds with the frequency of the ac field.
Here we argue that the effective energies Eq.~\eqref{EqErenorm} should be taken seriously in the full approach
and thus be used in the definitions of the bath temperatures
\begin{equation}
\widetilde{T}_\alpha = \frac{\tilde{\omega}_\alpha}{\log\left(1+\frac{1}{n_\alpha}\right)}
\label{eq:temperatureNew}
\end{equation}
Then Eq.~\eqref{eq:spohn} provides
\begin{equation}
\begin{split}
\sigma &= -\frac{\dot{Q}_u}{\widetilde{T}_u}-\frac{\dot{Q}_l}{\widetilde{T}_l}
= R_{u\rightarrow l}\left[\frac{\tilde{\omega}_l}{\widetilde{T}_l}-\frac{\tilde{\omega}_u}{\widetilde{T}_u}\right]\\
&=R_{u\rightarrow l}\left[\log\left(1+\frac{1}{n_l}\right)-\log\left(1+\frac{1}{n_u}\right)\right]
\end{split}
\end{equation}
which is identical with the entropy production function
Eq.~\eqref{eq:btentropy} from the bare approach and, most importantly, positive definite.
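As a numerical illustration of this difference (the following Python snippet and its parameter values are purely illustrative and not taken from the cited references), one can evaluate the full heat flows \eqref{eq:typflux} with the closed-form rate \eqref{EqExpressionR} and compare the entropy production obtained with the bare temperatures \eqref{eq:temperature} to the one obtained with the effective temperatures \eqref{eq:temperatureNew}. With these parameters the bare-temperature expression turns negative at sufficiently large detuning, while the effective-temperature expression reproduces Eq.~\eqref{eq:btentropy} and stays positive:
\begin{verbatim}
import numpy as np

# illustrative parameters (arbitrary units), chosen such that n_u > n_l
gu, gl, nu, nl, eps = 0.7, 0.4, 1.3, 0.2, 0.5
wu, wl = 5.0, 1.0                       # bare level energies omega_u, omega_l
Gam = gu*(nu+1) + gl*(nl+1)

def rate(Delta):                        # R_{u->l} from Eqs. (EqExpressionR), (EqExpressionAF)
    A = gu*gl/4*Gam*eps**2
    F = Gam/2*(gu*(3*nu+1) + gl*(3*nl+1))/2*eps**2 \
        + gu*gl/4*(3*nu*nl + 2*nu + 2*nl + 1)*(Gam**2/4 + Delta**2)
    return A/F*(nu - nl)

Lu, Ll = np.log(1 + 1/nu), np.log(1 + 1/nl)
for Delta in [0.0, 1.0, 3.0]:
    wu_eff = wu + Delta*gu*(nu+1)/Gam   # effective energies, Eq. (EqErenorm)
    wl_eff = wl - Delta*gl*(nl+1)/Gam
    R = rate(Delta)
    Qu, Ql = R*wu_eff, -R*wl_eff        # full heat flows, Eq. (eq:typflux)
    sigma_bare = -Qu*Lu/wu - Ql*Ll/wl   # bare temperatures T_a = w_a/log(1+1/n_a)
    sigma_eff  = -Qu*Lu/wu_eff - Ql*Ll/wl_eff
    # the last two columns coincide, cf. Eq. (eq:btentropy)
    print(Delta, sigma_bare, sigma_eff, R*(Ll - Lu))
\end{verbatim}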
\begin{figure}
\includegraphics[width=6cm]{entropy_maser_figure2.pdf}
\caption{Sketch of the spectral functions \eqref{EqSpectralFunction} for the upper and lower levels, $u$ and $l$.
The black arrow shows the optical transition at frequency $\omega_d$ not matching the energy difference of the bare states.
The energies $\tilde{\omega}_u$ and $\tilde{\omega}_l$ are given by \eqref{EqErenorm}.}
\label{FigSpectralFunctions}
\end{figure}
Now, we want to highlight the particular meaning of the energies $\tilde{\omega}_u$, $\tilde{\omega}_l$ from Eq.~\eqref{EqErenorm}.
Due to life-time broadening the energies of the
levels $u$ and $l$ are smeared out by Lorentzian spectral functions (here normalized to one)
\begin{equation}
A_\alpha(\omega)=\frac{1}{2\pi}\frac{\gamma_\alpha(1+n_\alpha)}{(\omega-\omega_\alpha)^2+\gamma_\alpha^2(1+n_\alpha)^2/4}
\label{EqSpectralFunction}
\end{equation}
with a full width at half maximum (FWHM) $\gamma_\alpha(1+n_\alpha)$ resulting from the decay of the states by relaxation to the ground level. This allows for energy-conserving
transitions between the levels $u$ and $l$ at the energy $\omega_d$
imposed by the ac field even if $\omega_d\neq\omega_u-\omega_l$, see Fig.~\ref{FigSpectralFunctions}.
Fermi's golden rule provides the transition rate from the initial level $u$ with energy $\omega$
(close to, but not necessarily equal to, $\omega_u$).
\[
W_{u\to l}=2\pi\epsilon^2 A_l(\omega-\omega_d)
\]
Weighting with the density $A_u(\omega)$ of the initial state and multiplying with
the difference in occupation
$f_u(\omega)-f_l(\omega-\omega_d)$ of the levels (technically, $f_\alpha$ is
the ratio between the imaginary part of the lesser Green's function and the spectral function\cite{HaugJauhoBook1996}),
we obtain the net transition rate
\begin{equation}
R_{u\to l}=2\pi\epsilon^2 \int d \omega\, A_u(\omega)A_l(\omega-\omega_d)[f_u(\omega)-f_l(\omega-\omega_d)]\label{EqRbySpectral}
\end{equation}
Neglecting the energy dependence of $f_\alpha$ over the width of the spectral functions (which would be relevant to study dispersive/Bloch gain \cite{WackerNatPhys2007}), we set
$f_u(\omega)-f_l(\omega-\omega_d)\approx \rho_{uu}-\rho_{ll}$. Then, some algebra, see Eq.~\eqref{EqConvolutionSpectral},
results in the expression \eqref{EqRate3Level}. This shows the equivalence of
this Green's function based treatment
with the density matrix calculations used above.
Eq.~\eqref{EqRbySpectral} shows that there is not a single definite energy involved for the upper and lower level, if broadening is taken
into account.
However, as the transitions occur with the weight $A_u(\omega)A_l(\omega-\omega_d)$,
we can identify the average energy for the upper level involved in transitions
\begin{equation}
\langle \omega\rangle_u=\frac{\int d \omega\, \omega A_u(\omega)A_l(\omega-\omega_d)}{\int d \omega A_u(\omega)A_l(\omega-\omega_d)}
\end{equation}
and obtain after some algebra, see Eqs.~(\ref{EqConvolutionSpectral},\ref{EqConvolutionSpectralE}),
$\langle \omega\rangle_u=\tilde{\omega}_u$ with $\tilde{\omega}_u$ from Eq.~\eqref{EqErenorm}.
The average energy for the lower level involved is then $\langle \omega\rangle_l=\langle \omega\rangle_u-\omega_d=\tilde{\omega}_l$.
Thus we find that the effective levels from Eq.~\eqref{EqErenorm} are the average energies involved in the optical transition
if level broadening is taken into account. These are the average energies which need to be added to or removed from the respective bath
after a transition has taken place in order to restore the previous state. Therefore the bath properties at these energies are of most relevance, which
justifies the definition of temperature via Eq.~\eqref{eq:temperatureNew}.
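This identification can also be checked numerically: the short sketch below (with purely illustrative parameters) evaluates $\langle \omega\rangle_u$ by direct integration of the spectral functions \eqref{EqSpectralFunction} and compares it with $\tilde{\omega}_u$ from Eq.~\eqref{EqErenorm}.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

gu, gl, nu, nl = 0.7, 0.4, 1.3, 0.2     # illustrative parameters
wu, wl, Delta = 5.0, 1.0, 0.8
wd = wu - wl + Delta                    # driving frequency

def A(w, w0, fwhm):                     # Lorentzian of Eq. (EqSpectralFunction)
    return fwhm/(2*np.pi)/((w - w0)**2 + fwhm**2/4)

prod = lambda w: A(w, wu, gu*(1+nu))*A(w - wd, wl, gl*(1+nl))
norm = quad(prod, -200, 200, limit=500)[0]
mean = quad(lambda w: w*prod(w), -200, 200, limit=500)[0]/norm

Gam = gu*(nu+1) + gl*(nl+1)
print(mean, wu + Delta*gu*(nu+1)/Gam)   # both numbers agree (up to truncation)
\end{verbatim}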
Energy exchange with the bath $\alpha$ at energies different from $\omega_\alpha$ requires that the
energies available in the baths cover a range of several $\gamma_\alpha$ around $\omega_\alpha$.
In the Green's function picture, this is the basis for assuming an energy-independent self-energy (i.e. a constant width in the spectral function).
For the Lindblad kinetics, the Markovian limit used requires a short bath correlation time $\tau_B\ll 1/\gamma_\alpha$ \cite{BreuerBook2006}
and consequently a spectral width of the bath well surpassing $\gamma_\alpha$.
This demonstrates again the consistency between the Green's function based treatment and the density matrix calculations.
Let us finally consider the Carnot efficiency of the engine.
Ref.~\cite{BoukobzaPRA2013} reported the occurrence of efficiencies
above $1-T_l/T_u$ for $\Delta >0$ in the semi-classical treatment of the ac
field. This is based on Eq.~(15)
of Ref.~\cite{BoukobzaPRA2013}, which (in our notation)
expresses the efficiency as
\begin{equation}
\eta=\frac{-P}{\dot{Q}_u}=\frac{\omega_d}{\tilde{\omega}_u}=1-\frac{\tilde{\omega}_l}{\tilde{\omega}_u}\, . \label{Eqeta}
\end{equation}
A positive power output $(-P)$ from the engine is based on $R_{u\to l}>0$
and thus requires $n_u>n_l$
by Eq.~\eqref{EqExpressionR}. Then our new definition of temperatures \eqref{eq:temperatureNew} provides
\[
n_u>n_l\Rightarrow \frac{\tilde{T}_u}{\tilde{\omega}_u}>\frac{\tilde{T}_l}{\tilde{\omega}_l}
\Leftrightarrow
\frac{\tilde{\omega}_l}{\tilde{\omega}_u}>\frac{\tilde{T}_l}{\tilde{T}_u}
\]
so that the efficiency of Eq.~\eqref{Eqeta} satisfies the Carnot bound $\eta < 1-\tilde{T}_l/\tilde{T}_u$.
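A simple numerical scan over randomly chosen (purely illustrative) parameters confirms this conclusion; as in the argument above, only operation points with positive effective energies are considered.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
for _ in range(10000):
    gu, gl = rng.uniform(0.1, 2.0, 2)
    nl = rng.uniform(0.01, 2.0)
    nu = nl + rng.uniform(0.01, 3.0)        # positive power output requires n_u > n_l
    wl = rng.uniform(0.5, 3.0)
    wu = wl + rng.uniform(0.5, 5.0)
    Delta = rng.uniform(-1.5, 1.5)
    Gam = gu*(nu+1) + gl*(nl+1)
    wu_eff = wu + Delta*gu*(nu+1)/Gam       # Eq. (EqErenorm)
    wl_eff = wl - Delta*gl*(nl+1)/Gam
    if wl_eff <= 0 or wu_eff <= 0:          # the argument assumes positive energies
        continue
    Tu = wu_eff/np.log(1 + 1/nu)            # Eq. (eq:temperatureNew)
    Tl = wl_eff/np.log(1 + 1/nl)
    eta = 1 - wl_eff/wu_eff                 # Eq. (Eqeta)
    assert eta < 1 - Tl/Tu
print("eta < 1 - T_l/T_u for all sampled operation points")
\end{verbatim}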
\section{Conclusion}
Both definitions of heat and work, applying either the full or the bare system Hamiltonian, provide identical (and positive definite) expressions for the entropy production for the common three-level maser driven by thermal baths. For the case of the full heat flow, it is crucial to carefully analyse the energies exchanged
with the baths. These differ from the bare level energies if the ac field does not match the transition frequency, and disregarding this can lead to apparent violations of the second law as reported earlier \cite{BoukobzaPRL2007}.
While both the full and the bare approach are thermodynamically consistent and provide identical expressions for the entropy production, the full approach requires an elaborate description of the energies transferred to the baths, which relies on the steady state in our treatment.
Furthermore, it is an open issue whether such a description can be extended to transient behaviour, non-monochromatic fields, or non-cyclic operation \cite{BoukobzaPRA2013}.
On the other hand, the bare approach attributes the transition frequency rather than the ac frequency to the work output, which introduces a (typically small) error.
\section*{Acknowledgements}
We thank the Knut and Alice Wallenberg foundation and NanoLund for financial support.
\onecolumngrid
\begin{appendix}
\section{Detailed derivations for the steady state solution}
\label{AppSystem}
After transforming to the rotating frame, Eq.~\eqref{eq:system:lindblad} provides the
equations of motion for $\rho_{ij}=\langle i |{\rho}^\textrm{rot}|j\rangle$
\begin{eqnarray}
\frac{\d}{\d t} \rho_{gg}&=&\gamma_u (n_u+1)\rho_{uu}
+\gamma_l (n_l+1)\rho_{ll}-(n_u\gamma_u+n_l\gamma_l) \rho_{gg}\\
\frac{\d}{\d t} \rho_{uu}&=&\gamma_u n_u\rho_{gg}-\gamma_u (n_u+1)\rho_{uu}
+{\rm i}\epsilon (\rho_{ul}-\rho_{ul}^*)\label{EqrhoLL}\\
\frac{\d}{\d t} \rho_{ll}&=&\gamma_l n_l\rho_{gg}-\gamma_l (n_l+1)\rho_{ll}
-{\rm i}\epsilon (\rho_{ul}-\rho_{ul}^*)\label{EqrhoRR}\\
\frac{\d}{\d t} \rho_{ul}&=&{\rm i} \Delta \rho_{ul}+{\rm i}\epsilon(\rho_{uu}-\rho_{ll})-[\gamma_u(n_u+1)+\gamma_l(n_l+1)]\rho_{ul}/2
\label{EqrhoLR}
\end{eqnarray}
In the steady state (superscript $^\textrm{ss}$), Eq.~\eqref{EqrhoLR} provides
\begin{equation}
\rho^\textrm{ss}_{ul}=\frac{-\epsilon(\rho^\textrm{ss}_{uu}-\rho^\textrm{ss}_{ll})}{\Delta+{\rm i}[\gamma_u(n_u+1)+\gamma_l(n_l+1)]/2}\label{EqrhoLRstat}
\end{equation}
Furthermore, we identify the net rate of transitions between $u$ and $l$ due to the ac field:
\begin{equation}
R_{u\to l}=-{\rm i}\epsilon (\rho^\textrm{ss}_{ul}-\rho^{\textrm{ss}*}_{ul})=C (\rho^\textrm{ss}_{uu}-\rho^\textrm{ss}_{ll})\quad \textrm{with }
C=\frac{\epsilon^2[\gamma_u(n_u+1)+\gamma_l(n_l+1)]}{[\gamma_u(n_u+1)+\gamma_l(n_l+1)]^2/4+\Delta^2}
\label{EqRate3Level}
\end{equation}
Using $\rho_{gg}=1- \rho_{uu}- \rho_{ll}$,
Eqs.~(\ref{EqrhoLL},\ref{EqrhoRR},\ref{EqRate3Level}) provide
the system of equations
\[\begin{split}
\gamma_u n_u=& [\gamma_u(2n_u+1)+C]\rho^\textrm{ss}_{uu}+(\gamma_un_u-C)\rho^\textrm{ss}_{ll}\\
\gamma_l n_l=& (\gamma_l n_l-C)\rho^\textrm{ss}_{uu}+[\gamma_l(2n_l+1)+C]\rho^\textrm{ss}_{ll}
\end{split}\]
with the solution
\[\begin{split}
\rho^\textrm{ss}_{uu}&=\frac{\gamma_u\gamma_ln_u(n_l+1)+C(\gamma_un_u+\gamma_l n_l)}{[\gamma_l(2n_l+1)+C] [\gamma_u(2n_u+1)+C]- (\gamma_l n_l-C)(\gamma_u n_u-C)}\\
\rho^\textrm{ss}_{ll}&=\frac{\gamma_u\gamma_ln_l(n_u+1)+C(\gamma_un_u+\gamma_l n_l)}{[\gamma_l(2n_l+1)+C][\gamma_u(2n_u+1)+C]- (\gamma_l n_l-C)(\gamma_u n_u-C)}
\end{split}\]
so that
\[
\rho^\textrm{ss}_{uu}-\rho^\textrm{ss}_{ll}=\frac{\gamma_l\gamma_u (n_u-n_l)}{\gamma_u\gamma_l(3n_u n_l+2n_u+2n_l +1)+C[\gamma_u(3n_u+1)+\gamma_l(3n_l+1)]}
\]
is proportional to the occupation differences of the baths. Inserting into Eq.~\eqref{EqRate3Level}, we obtain
Eq.~\eqref{EqExpressionR} from the main article, where
\begin{equation}
\begin{split}
A=&\frac{\gamma_l\gamma_u}{4}[\gamma_u(n_u+1)+\gamma_l(n_l+1)]\epsilon^2\\
F=&\frac{\gamma_u(n_u+1)+\gamma_l(n_l+1)}{2}\frac{\gamma_u(3n_u+1)+\gamma_l(3n_l+1)}{2}\epsilon^2\\
&+\frac{\gamma_l\gamma_u}{4}(3n_u n_l+2n_u+2n_l +1)
\left\{\frac{\left[\gamma_u(n_u+1)+\gamma_l(n_l+1)\right]^2}{4}+\Delta^2\right\}
\label{EqExpressionAF}
\end{split}\end{equation}
are quadratic polynomials in $\epsilon$. Thus the rate
$R_{u\to l}\propto \epsilon^2$ for small coupling $\epsilon$ to the ac field,
while it saturates for $\epsilon^2\gtrsim \gamma_u^2 (n_u+1)^2+\gamma_l^2 (n_l+1)^2+4\Delta^2$.
$A$ and $F$ are identical with the expressions in Eq.~(13) of \cite{BoukobzaPRL2007}, where
$\gamma_{0\alpha}=\gamma_{\alpha}/2$ is used.
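The expressions above can be cross-checked numerically. The following sketch (with illustrative parameters, not used elsewhere in this work) builds the Lindblad generator of Eq.~\eqref{eq:system:lindblad} in the rotating frame, obtains the steady state as its null vector, and compares the resulting rate $R_{u\to l}=-{\rm i}\epsilon(\rho^\textrm{ss}_{ul}-\rho^\textrm{ss}_{lu})$ with the closed form $A(n_u-n_l)/F$ from Eqs.~(\ref{EqExpressionR},\ref{EqExpressionAF}):
\begin{verbatim}
import numpy as np

gu, gl, nu, nl, eps, Delta = 0.7, 0.4, 1.3, 0.2, 0.5, 0.9   # illustrative values

g, u, l = np.eye(3)                                  # basis |g>, |u>, |l>
op = lambda a, b: np.outer(a, b)                     # |a><b|

H = -Delta*op(u, u) + eps*(op(u, l) + op(l, u))      # rotating-frame Hamiltonian
jumps = [np.sqrt(gu*nu)*op(u, g), np.sqrt(gu*(nu+1))*op(g, u),
         np.sqrt(gl*nl)*op(l, g), np.sqrt(gl*(nl+1))*op(g, l)]

# Lindblad generator acting on the row-major vectorisation of rho
I3 = np.eye(3)
L = -1j*(np.kron(H, I3) - np.kron(I3, H.T))
for C in jumps:
    CdC = C.conj().T @ C
    L += np.kron(C, C.conj()) - 0.5*(np.kron(CdC, I3) + np.kron(I3, CdC.T))

w, v = np.linalg.eig(L)
rho = v[:, np.argmin(np.abs(w))].reshape(3, 3)       # null vector = steady state
rho /= np.trace(rho)

R_num = np.real(-1j*eps*(rho[1, 2] - rho[2, 1]))     # rows/columns ordered as g, u, l

Gam = gu*(nu+1) + gl*(nl+1)
A = gu*gl/4*Gam*eps**2
F = Gam/2*(gu*(3*nu+1) + gl*(3*nl+1))/2*eps**2 \
    + gu*gl/4*(3*nu*nl + 2*nu + 2*nl + 1)*(Gam**2/4 + Delta**2)
print(R_num, A/F*(nu - nl))                          # the two rates agree
\end{verbatim}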
\section{Heat and work with bare Hamiltonian}
\label{AppBare}
The definition of heat flow from the bare Hamiltonian \eqref{EqHeatWorkBareHam} provides the
bare heat entering from bath $u$ (note that the energy of the ground level is zero)
\begin{equation}
\dot{Q}_{0u}= \omega_u\langle u|{\cal L}_u({\rho})|u\rangle
=\omega_u\left[\gamma_u n_u\rho_{gg}-\gamma_u(n_u+1)\rho_{uu}\right]\label{EqHeat1bareL}
\end{equation}
Note that the diagonal elements of ${\rho}(t)$
are identical in the original and rotating frame.
Thus, in the steady state, Eq.~\eqref{EqrhoLL} provides
$\dot{Q}_{0u}^\textrm{ss}=-{\rm i} \omega_u\epsilon (\rho^\textrm{ss}_{ul}-\rho^\textrm{ss}_{lu})
=\omega_uR_{u\to l}$
and similarly we get $\dot{Q}_{0l}^\textrm{ss}=-\omega_l R_{u\to l}$.
Finally, the bare work \eqref{EqHeatWorkBareHam} done by the field on our system is
\[
P_0=
{\rm i} \textrm{Tr}\{{\rho}[{V}(t),{H}_0]\}=
{\rm i} \textrm{Tr}\{{\rho}^\mathrm{rot}[{V}^\mathrm{rot},{H}_0^\mathrm{rot}]\}=
{\rm i}\epsilon(\omega_u-\omega_l)(\rho_{ul}-\rho_{ul}^*)
\]
which in the steady state provides $P_0^\textrm{ss}=-(\omega_u-\omega_l)R_{u\to l}$
so that $\dot{Q}_{0u}^\textrm{ss}+\dot{Q}_{0l}^\textrm{ss}+P^\textrm{ss}_0=0$, as required by energy conservation.
These are the terms provided in Eq.~\eqref{eq:boukobza} without the superscript $^\textrm{ss}$.
\section{Heat and work with full Hamiltonian}
\label{AppFull}
With the definition \eqref{eq:typical}, we obtain the power transferred to the system
\[
P(t)={\rm i}\epsilon\omega_d \textrm{Tr}\left\{{\rho} \left(|l\rangle \langle u|{\rm e}^{{\rm i}\omega_d t}-
|u\rangle \langle l|{\rm e}^{-{\rm i}\omega_d t}\right)\right\}=
{\rm i}\epsilon \omega_d (\rho_{ul} -\rho_{lu})
\stackrel{\textrm {in ss}}{=}-\omega_dR_{u\to l}
\]
which corresponds to the net rate of absorbed photons $(-R_{u\to l})$ times the photon energy $\omega_d$. (Note that we defined $\rho_{ul}$ in the rotating frame, see App.~\ref{AppSystem}, so that $\rho_{ul}=\langle u |{\rho}^\textrm{rot}|l\rangle=\langle u |{\rho}|l\rangle{\rm e}^{{\rm i}\omega_d t}$.)
For the heat flow, the unitary evolution of ${\rho}(t)$ due to the Hamiltonian does not contribute, as
$\textrm{Tr}\left\{[{\rho},H]H\right\}=\textrm{Tr}\left\{{\rho}[H,H]\right\}=0$,
where we used the invariance of the trace under cyclic permutations.
Thus we can restrict ourselves to the non-unitary part here. Then the part with $H_0$ provides the heat current $\dot{Q}_{0u}$ from Eq.~\eqref{EqHeat1bareL}. We have to add the part with ${V}(t)$ and find
\[
\dot{Q}_{u}=\dot{Q}_{0u}
+\epsilon\left(
\langle u|{\cal L}_u[{\rho}]|l\rangle{\rm e}^{{\rm i}\omega_d t}
+\langle l|{\cal L}_u[{\rho}]|u\rangle{\rm e}^{-{\rm i}\omega_d t}\right)
=\dot{Q}_{0u}
-\epsilon\frac{\gamma_u(n_u+1)}{2} (\rho_{ul}+ \rho_{lu})
\]
Using Eqs.~(\ref{EqrhoLRstat},\ref{EqRate3Level}) we get in the steady state
\begin{eqnarray}
\dot{Q}^\textrm{ss}_{u}&=&\dot{Q}_{0u}^\textrm{ss}-
\frac{\gamma_u(n_u+1)}{2}
\frac{\Re\left\{\rho^{ss}_{ul}\right\}}{\Im\left\{\rho^{ss}_{ul}\right\}} R_{u\to l}
= R_{u\to l} \tilde{\omega}_u \label{EqQ1}\\
\dot{Q}_{l}^\textrm{ss}&=&\dot{Q}_{0l}^\textrm{ss}-
\frac{\gamma_l(n_l+1)}{2}
\frac{\Re\left\{\rho^{ss}_{ul}\right\}}{\Im\left\{\rho^{ss}_{ul}\right\}} R_{u\to l}
= -R_{u\to l} \tilde{\omega}_l \label{EqQ2}\\
\textrm{with } \tilde{\omega}_u&=&\omega_u +\frac{\Delta\gamma_u(n_u+1)}{\gamma_u(n_u+1)+\gamma_l(n_l+1)},\quad \tilde{\omega}_l=\omega_l -\frac{\Delta\gamma_l(n_l+1)}{\gamma_u(n_u+1)+\gamma_l(n_l+1)}\label{EqQEnergies}
\end{eqnarray}
where $\Re\{z\}$ and $\Im\{z\}$ denote, respectively, the real and imaginary part of a complex value $z$.
The full power and heat flow satisfy energy conservation $P^\textrm{ss}+\dot{Q}_{u}^\textrm{ss}+\dot{Q}_{l}^\textrm{ss}=0$ and
provide Eqs.~(\ref{eq:typflux},\ref{EqErenorm}), where we omitted the superscript $^\textrm{ss}$.
\section{Convolution of Lorentzians}
We consider the function
\[
P(\omega,\Delta)=\frac{1}{2\pi}\frac{2\gamma_u}{\omega^2+\gamma_u^2}\frac{2\gamma_l}{(\omega-\Delta)^2+\gamma_l^2}
\]
which is, up to a constant prefactor, the product of two Lorentzian spectral functions with FWHM $2\gamma_\alpha$.
Then we find with the residue theorem
\begin{equation}\begin{split}
\int\d \omega\, P(\omega,\Delta)=& {\rm i}\left[\frac{2\gamma_u}{2{\rm i}\gamma_u}
\frac{2\gamma_l}{({\rm i}\gamma_u-\Delta)^2+\gamma_l^2}+\frac{2\gamma_u}{(\Delta+{\rm i}\gamma_l)^2+\gamma_u^2}
\frac{2\gamma_l}{2{\rm i}\gamma_l}\right]\\
=& \frac{2\gamma_l\left[(\Delta+{\rm i}\gamma_l)^2+\gamma_u^2\right]+2\gamma_u\left[(\Delta-{\rm i}\gamma_u)^2+\gamma_l^2\right]}
{\left[(\Delta-{\rm i}\gamma_u)^2+\gamma_l^2\right]\left[(\Delta+{\rm i}\gamma_l)^2+\gamma_u^2\right]}
\\
=& \frac{2(\gamma_u+\gamma_l)\left[\Delta^2-2{\rm i}\Delta(\gamma_u-\gamma_l)-(\gamma_u-\gamma_l)^2\right]}
{\left[\Delta^2+(\gamma_u+\gamma_l)^2\right]\left[\Delta^2-2{\rm i}\Delta(\gamma_u-\gamma_l)-(\gamma_u-\gamma_l)^2\right]}
= \frac{2(\gamma_u+\gamma_l)}{\Delta^2+(\gamma_u+\gamma_l)^2}
\label{EqConvolutionSpectral}
\end{split}\end{equation}
where the third identity is verified by comparing the results of the products in numerator and denominator, respectively.
The main result is that we obtain a Lorentzian with the sum of the individual widths.
Similarly we find
\begin{equation}\begin{split}
\int\d \omega\,\omega P(\omega,\Delta)=& {\rm i}\left[\frac{2\gamma_u}{2{\rm i}\gamma_u}
\frac{{\rm i}\gamma_u 2\gamma_l}{({\rm i}\gamma_u-\Delta)^2+\gamma_l^2}+\frac{2\gamma_u(\Delta+{\rm i}\gamma_l)}{(\Delta+{\rm i}\gamma_l)^2+\gamma_u^2}
\frac{2\gamma_l}{2{\rm i}\gamma_l}\right]\\
=& \frac{2{\rm i}\gamma_u\gamma_l\left[(\Delta+{\rm i}\gamma_l)^2+\gamma_u^2\right]+2\gamma_u(\Delta+{\rm i}\gamma_l)\left[(\Delta-{\rm i}\gamma_u)^2+\gamma_l^2\right]}
{\left[(\Delta-{\rm i}\gamma_u)^2+\gamma_l^2\right]\left[(\Delta+{\rm i}\gamma_l)^2+\gamma_u^2\right]}
\\
=& \frac{2\gamma_u\Delta\left[\Delta^2-2{\rm i}\Delta(\gamma_u-\gamma_l)-(\gamma_u-\gamma_l)^2\right]}
{\left[\Delta^2+(\gamma_u+\gamma_l)^2\right]\left[\Delta^2-2{\rm i}\Delta(\gamma_u-\gamma_l)-(\gamma_u-\gamma_l)^2\right]}
= \frac{2\gamma_u\Delta}{\Delta^2+(\gamma_u+\gamma_l)^2}
\label{EqConvolutionSpectralE}
\end{split}\end{equation}
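Both results are easily verified by direct numerical integration, e.g.\ with the following short check (arbitrary illustrative values of $\gamma_u,\gamma_l,\Delta$):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

gu, gl, Delta = 0.6, 1.1, 1.7
P = lambda w: (2*gu/(w**2 + gu**2))*(2*gl/((w - Delta)**2 + gl**2))/(2*np.pi)

I0 = quad(P, -np.inf, np.inf)[0]
I1 = quad(lambda w: w*P(w), -np.inf, np.inf)[0]
print(I0, 2*(gu + gl)/(Delta**2 + (gu + gl)**2))   # Eq. (EqConvolutionSpectral)
print(I1, 2*gu*Delta/(Delta**2 + (gu + gl)**2))    # Eq. (EqConvolutionSpectralE)
\end{verbatim}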
\end{appendix}
\twocolumngrid
\section{Introduction} Let $K$ be an origin-symmetric convex body of volume 1 in ${\mathbb R}^n,$ and let $f$ be any non-negative measurable function on $K$ with $\int_Kf=1.$ Does there exist a constant $c_n$ depending only on $n$ so that for any such $K$ and $f$ there exists a direction $\xi\in S^{n-1}$ with $\int_{K\cap \xi^\bot}f \ge c_n?$ Here $\xi^\bot=\{x\in {\mathbb R}^n: (x,\xi)=0\}$ is the central hyperplane perpendicular to $\xi,$ and integration is with respect to Lebesgue measure on $\xi^\bot.$ It was proved in \cite{K-2014} that, in spite of the generality of the question, the answer to this question is positive, and one can take $c_n>\frac 1{2\sqrt{n}}.$ In \cite{CGL} this result was extended to non-symmetric bodies $K.$ Moreover, it was shown in \cite{KK} that this estimate is optimal up to a logarithmic term, and the logarithmic term was removed in \cite{KLi}, so, finally, $c_n\sim \frac 1{\sqrt{n}}.$ We write $a\sim b$ if there exist absolute constants $c_1,c_2>0$ such that $c_1a\le b \le c_2a.$
Note that the same question for volume, where $f\equiv 1,$ is the matter of the slicing problem of Bourgain \cite{Bo1, Bo2}. In this case the best known result $c_n>cn^{-\frac 14},$ where $c>0$ is an absolute constant, is due to Klartag \cite{Kla} who removed a logarithmic term from an earlier result of Bourgain \cite{Bo3}. The constant does not depend on the dimension for several classes of bodies $K.$ For example, it was proved in \cite{K2} that if $K$ belongs to the class of unconditional convex bodies, the constant $c_n=\frac 1{2e}$ works for all functions $f.$ The same happens for intersection bodies \cite{K-2012}, and for the unit balls of subspaces of $L_p,\ p>2$ where the constant is of the order $p^{-1/2}$ \cite{KP}.
Denote by
$$Rf(\xi,t) = \int_{K\cap \{x\in {\mathbb R}^n:(x,\xi)=t\}} f(x) dx,\qquad \xi\in S^{n-1},\ t\in {\mathbb R}$$
the Radon transform of $f.$ The result described above means that the sup-norm of the Radon transform of a probability density on an origin-symmetric convex body in ${\mathbb R}^n$ is bounded from below by a positive constant depending only on the dimension $n.$
In this note we prove a similar estimate for the derivatives of the Radon transform. For an origin-symmetric infinitely smooth convex body $K$ in ${\mathbb R}^n,$ an infinitely differentiable function $f$ on $K$, and $q\in {\mathbb C},\ \Re q>-1,$ we denote the fractional derivative of the order $q$ of the Radon transform of $f$ by
$$(Rf(\xi,t))_t^{(q)}(0)=\left(\int_{K\cap \{x:\ (x,\xi)=t\}} f(x) dx\right)_t^{(q)}(0).$$
The estimate that we prove is as follows.
\begin{theorem} \label{fract-der}
There exists an absolute constant $c>0$ so that for any infinitely smooth origin-symmetric convex body $K$ of volume 1 in ${\mathbb R}^n,$ any even infinitely smooth probability density $f$ on $K,$ and any $q\in {\mathbb R},\ 0\le q\le n-2,$ which is not an odd integer, there exists a direction $\xi\in S^{n-1}$ so that
$$\left(c \frac {q+1}{\sqrt{n\log^3(\frac{ne}{q+1})}}\right)^{q+1}\le \frac 1{\cos(\frac{\pi q}2)} (Rf(\xi,t))_t^{(q)}(0).$$
\end{theorem}
Compare this with (\ref{eucl-norm}) where it is shown that for $f\equiv 1$ and $K=B_n$ the Euclidean ball of volume 1,
the constant in the left-hand side does not depend on $n,$ and is greater or equal to $(c(q+1))^{\frac{q+1}2},$ where $c$ is an absolute constant. We deduce Theorem \ref{fract-der} from a more general result.
\begin{theorem} \label{slicing} Suppose $K$ is an infinitely smooth origin-symmetric convex body in ${\mathbb R}^n$, $f$ is a non-negative even infinitely smooth function on $K$ and $-1<q<n-1$ is not an odd integer. Then
$$\int_K f \le \frac {n}{(n-q-1)\ 2^q \pi^{\frac{q-1}2} \Gamma(\frac{q+1}2)} |K|^{\frac{q+1}n} \left(d_{\rm ovr}(K,L^n_{-1-q})\right)^{q+1}$$
$$\times \max_{\xi\in S^{n-1}} \frac 1{\cos(\frac{\pi q}2)} (Rf(\xi,t))_t^{(q)}(0).$$
\end{theorem}
Here $|K|$ stands for volume of proper dimension, and the outer volume ratio distance from $K$ to the class $L_{-1-q}^n$ of the unit balls of $n$-dimensional spaces that embed in $L_{-1-q}$ (see definition below) is defined by
$$d_{{\rm {ovr}}}(K,L_{-1-q}^n) = \inf \left\{ \left( \frac {|D|}{|K|}\right)^{1/n}:\ K\subset D,\ D\in L_{-1-q}^n \right\}.$$
For some classes of bodies $K$ the constant in the left-hand side of the estimate of Theorem \ref{fract-der} does not depend on the dimension, and behaves like in the Euclidean case (formula (\ref{eucl-norm})). This follows from estimates for the distance $d_{\rm ovr}(K,L^n_{-1-q}).$ Indeed, this distance is equal to 1 when $K$ is an intersection body. It is less or equal to $e$ for unconditional convex bodies $K,$ and it is less than $c\sqrt{p}$ for the unit balls of subspaces of $L_p$ with $p>2,$ where $c$ is an absolute constant; see remarks at the end of the paper. Also note
that the unit balls of subspaces of $L_p$ with $0<p\le 2$ are intersection bodies, so the distance is 1; see \cite{K5}.
Our next result is related to the Busemann-Petty problem \cite{BP} which asks the following question. Let $K,L$ be origin-symmetric convex bodies in ${\mathbb R}^n,$ and suppose that the $(n-1)$-dimensional volume of every central hyperplane section of $K$ is smaller than the same for $L,$ i.e. $|K\cap \xi^\bot|\le |L\cap\xi^\bot|$ for every $\xi\in S^{n-1}.$ Does it necessarily follow that the $n$-dimensional volume of $K$ is smaller than the volume of $L,$ i.e. $|K|\le |L|?$ The answer is affirmative if the dimension $n\le 4,$ and it is negative when $n\ge 5;$ see \cite{G3, K1} for the solution and its history.
It was proved in \cite{K-1999} (see also \cite[Theorem 5.12]{K1}) that the answer to the Busemann-Petty problem becomes affirmative if one compares the derivatives of the parallel section function of high enough orders. Namely, denote by
$$A_{K,\xi}(t) = R(\chi_K)(\xi,t)= |K\cap \{x\in {\mathbb R}^n:\ (x,\xi)=t\}|,\quad t\in {\mathbb R}$$
the parallel section function of $K$ in the direction $\xi.$ If $K,L$ are infinitely smooth origin-symmetric convex bodies in ${\mathbb R}^n,\ n\ge 4$, $q\in [n-4,n-1)$ is not an odd integer, and for every $\xi\in S^{n-1}$ the fractional derivatives of the order $q$ of the parallel section functions at zero satisfy
$$\frac 1{\cos(\frac{\pi q}2)}A_{K,\xi}^{(q)}(0)\le \frac 1{\cos(\frac{\pi q}2)}A_{L,\xi}^{(q)}(0),$$
then $|K|\le |L|.$ For $-1<q<n-4$ this is no longer true.
Another generalization of the Busemann-Petty problem, known as the isomorphic
Busemann-Petty problem, asks whether the inequality for volumes holds up to an absolute constant. Does there exist an absolute constant $C$ so that for any dimension $n$ and any origin-symmetric
convex bodies $K,L$ in ${\mathbb R}^n$ satisfying
$|K\cap \xi^\bot|\le |L\cap \xi^\bot|$ for all $\xi\in S^{n-1},$
we have $ |K|\le C |L| ?$ This question is equivalent to the slicing problem of Bourgain mentioned above.
Zvavitch \cite{Zv} considered an extension of the Busemann-Petty problem to general Radon transforms, as follows. Suppose that $K,L$ are origin-symmetric convex bodies in ${\mathbb R}^n$, and $f$ is an even continuous strictly positive function on ${\mathbb R}^n.$ Suppose that $R(f\vert_K)(\xi,0)\le R(f\vert_L)(\xi,0)$ for every $\xi\in S^{n-1},$ where $f\vert_K$ is the restriction of $f$ to $K.$
Does it necessarily follow that $|K|\le |L|?$ Isomorphic versions of this result were proved in \cite{KZ, KPZv}.
In this note we generalize these results to general Radon transforms as follows.
\begin{theorem} \label{bp} Let $K,L$ be infinitely smooth origin-symmetric convex bodies in ${\mathbb R}^n,$ $f,g$ non-negative infinitely differentiable functions on $K$ and $L$, respectively, $\|g\|_{\infty}=g(0)=1,$ and $q\in (-1,n-1)$ is not an odd integer. If for every $\xi\in S^{n-1}$
$$\frac 1{\cos(\frac{\pi q}2)}(Rf(\xi,t))_t^{(q)}(0)\le \frac 1{\cos(\frac{\pi q}2)}(Rg(\xi,t))_t^{(q)}(0),$$
then
$$\int_K f \le \frac n{n-q-1} \left(d_{\rm ovr}(K,L^n_{-1-q})\right)^{q+1}\left(\int_L g\right)^{\frac {n-q-1}{n}} |K|^{\frac {q+1}n} .$$
\end{theorem}
In the case $q=0$ this result was proved in \cite{KPZv}.
\section{Notation and auxiliary facts}
A closed bounded set $K$ in ${\mathbb R}^n$ is called a {\it star body} if
every straight line passing through the origin crosses the boundary of $K$
at exactly two points, the origin is an interior point of $K,$
and the {\it Minkowski functional}
of $K$ defined by
$\|x\|_K = \min\{a\ge 0:\ x\in aK\}$
is a continuous function on ${\mathbb R}^n.$ If $x\in S^{n-1},$ then $\|x\|_K^{-1}=r_K(x)$
is the radius of $K$ in the direction of $x.$ A star body $K$ is {\it origin-symmetric} if $K=-K.$
A star body $K$ is called {\it convex} if for any $x,y\in K$ and every $0<\lambda<1,$
$\lambda x+ (1-\lambda)y\in K.$
The polar formula for the volume of a star body is as follows:
\begin{equation} \label{polar}
|K|=\frac1n \int_{S^{n-1}} \|x\|_K^{-n} dx.
\end{equation}
A star body $K$ in ${\mathbb R}^n$ is called {\it infinitely smooth} if $\|x\|_K\in C^{\infty}(S^{n-1}).$
We use the techniques of the Fourier approach
to sections of convex bodies that has recently been developed; see \cite{K1}.
The Fourier transform of a
distribution $f$ is defined by $\langle\hat{f}, \phi\rangle= \langle f, \hat{\phi} \rangle$ for
every test function $\phi$ from the Schwartz space $ \mathcal{S}$ of rapidly decreasing infinitely
differentiable functions on ${\mathbb R}^n$. For any even distribution $f$, we have $(\hat{f})^\wedge
= (2\pi)^n f$.
If $K$ is a star body and $0<p<n,$
then $\|\cdot\|_K^{-p}$ is a locally integrable function on ${\mathbb R}^n$ and represents a distribution.
Suppose that $K$ is infinitely smooth, i.e. $\|\cdot\|_K\in C^\infty(S^{n-1})$ is an infinitely differentiable
function on the sphere. Then by \cite[Lemma 3.16]{K1}, the Fourier transform of $\|\cdot\|_K^{-p}$
is an extension of some function $g\in C^\infty(S^{n-1})$ to a homogeneous function of degree
$-n+p$ on ${\mathbb R}^n.$ When we write $\left(\|\cdot\|_K^{-p}\right)^\wedge(\xi),$ we mean $g(\xi),\ \xi \in S^{n-1}.$
If $K,L$ are infinitely smooth star bodies, the following spherical version of Parseval's
formula was proved in \cite{K-1999} (see \cite[Lemma 3.22]{K1}): for any $p\in (-n,0)$
\begin{equation}\label{parseval}
\int_{S^{n-1}} \left(\|\cdot\|_K^{-p}\right)^\wedge(\xi) \left(\|\cdot\|_L^{-n+p}\right)^\wedge(\xi)\, d\xi =
(2\pi)^n \int_{S^{n-1}} \|x\|_K^{-p} \|x\|_L^{-n+p}\ dx.
\end{equation}
A distribution is called {\it positive definite} if its Fourier transform is a positive distribution in
the sense that $\langle \hat{f},\phi \rangle \ge 0$ for every non-negative test function $\phi.$
In this note we extensively use fractional derivatives. In general, the
fractional derivative of the order $q$ of a test function $\phi\in {\mathcal S}({\mathbb R})$
is defined as the convolution of $\phi$ with $t_+^{-1-q}/\Gamma(-q):$
$$\phi^{(q)}(x) = \langle \frac{t_+^{-1-q}}{\Gamma(-q)}, \phi(x-t) \rangle .$$
We need this definition only with $x=0.$ This allows us to replace the test function
$\phi$ by any function differentiable up to a certain order in a neighborhood of zero.
Let $m\in {\mathbb N}\cup \{0\}$ and suppose that $h$ is a continuous
function on ${\mathbb R}$ that is $m$ times continuously differentiable
in some neighborhood of zero.
For $q\in {\mathbb C},\ -1<\Re(q)<m,\ q\neq 0,1,...,m-1,$
the {\it fractional derivative} of the order $q$ of the function $h$ at zero
is defined as the action of the distribution $t_+^{-1-q}/\Gamma(-q)$ on the
function $h,$ as follows:
$$h^{(q)}(0) = \frac{1}{\Gamma(-q)}
\int_0^1 t^{-1-q}\big(h(t)-h(0)-...-h^{(m-1)}(0)
\frac{t^{m-1}}{(m-1)!}\big)\ dt +$$
\begin{equation}\label{fracder}
\frac{1}{\Gamma(-q)}\int_1^\infty t^{-1-q}h(t) dt +
\frac{1}{\Gamma(-q)} \sum_{k=0}^{m-1} \frac{h^{(k)}(0)}{k!(k-q)}.
\end{equation}
It is easy to see that for a fixed $q$ the definition does not depend on
the choice of $m>\Re(q),$ as long as $h$ is $m$ times continuously
differentiable. Note that without dividing by $\Gamma(-q)$ the expression
for the fractional derivative represents an analytic function in the
domain $\{q\in {\mathbb C}:\ \Re(q)>-1\}$ not including integers, and has simple
poles at integers. The function $\Gamma(-q)$ is analytic
in the same domain and also has simple poles at non-negative integers,
so after the division we get an analytic function in the whole domain
$\{q\in {\mathbb C}:\ m>\Re(q)>-1\},$ which also defines fractional derivatives
of integer orders. Moreover, computing the limit as $q\to k,$
where $k$ is a non-negative integer, we see that
the fractional derivatives of integer orders coincide with usual
derivatives up to a sign (when we compute the limit the first two summands
in the right-hand side of (\ref{fracder}) converge to zero, since $\Gamma(-q)\to \infty,$
and the limit in the third summand can be computed using the property $\Gamma(x+1)=x\Gamma(x)$
of the $\Gamma$-function):
$$h^{(k)}(0) = (-1)^k \frac{d^k}{d t^k}
h(t)\vert_{t=0}.$$
If $h$ is an even function its derivatives of odd orders at the origin
are equal to zero and, for $m-2<\Re(q)<m,$ the expression (\ref{fracder})
turns into
\begin{equation}\label{fracdereven}
h^{(q)}(0)
={1\over \Gamma(-q)}\int^\infty_0 t^{-q-1}
\left(h(t) - \sum_{j=0}^{ (m-2)/2} {t^{2j}\over (2j)!}
h^{(2j)}(0)\right) d t.
\end{equation}
We also note that if $-1<q<0$ then
\begin{equation} \label{fracder-1}
h^{(q)}(0) = \frac{1}{\Gamma(-q)}\int_0^\infty t^{-1-q} h(t)\ dt.
\end{equation}
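As a purely numerical illustration of these definitions (not used in any of the proofs below), one can evaluate (\ref{fracder}) by direct integration; here the $h^{(k)}(0)$ entering the Taylor subtraction are the ordinary derivatives $\frac{d^k}{dt^k}h(0)$. For $h(t)=e^{-t}$ the fractional derivative $h^{(q)}(0)$ equals $1$ for every admissible $q$, in agreement with the sign convention above, since $(-1)^k\frac{d^k}{dt^k}h(0)=1$ for all $k$.
\begin{verbatim}
import numpy as np
from math import gamma, factorial
from scipy.integrate import quad

def frac_deriv_at_zero(h, q, ord_derivs):
    """Formula (fracder) with m = len(ord_derivs) > q; ord_derivs[k] = (d/dt)^k h(0)."""
    m = len(ord_derivs)
    taylor = lambda t: sum(ord_derivs[k]*t**k/factorial(k) for k in range(m))
    I1 = quad(lambda t: t**(-1-q)*(h(t) - taylor(t)), 0, 1)[0]
    I2 = quad(lambda t: t**(-1-q)*h(t), 1, np.inf)[0]
    S = sum(ord_derivs[k]/(factorial(k)*(k - q)) for k in range(m))
    return (I1 + I2 + S)/gamma(-q)

h = lambda t: np.exp(-t)
for q in [0.3, 1.6, 2.5]:
    m = int(np.floor(q)) + 1
    print(q, frac_deriv_at_zero(h, q, [(-1)**k for k in range(m)]))  # all close to 1
\end{verbatim}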
We need a simple fact that can be found in \cite[Lemma 3.14]{K1}.
\begin{lemma} \label{lemma3.14} Let $0<q<1.$ For every even test function $\phi$ and every fixed vector $\theta\in S^{n-1}$
$$\int_{{\mathbb R}^n} |(\theta,\xi)|^{-1-q}\phi(\xi) d\xi = \frac{2\Gamma(-q)\cos(\pi q/2)}{\pi} \int_0^\infty t^{q}\hat{\phi}(t\theta) dt.$$
\end{lemma}
Our next lemma is a generalization of Theorem 3.18 from \cite{K1}, which was proved in \cite{GKS}.
\begin{lemma}
Let $K$ be an infinitely smooth origin-symmetric convex body in ${\mathbb R}^n,$ let $f$ be an even infinitely smooth function on $K,$ and let $q\in (-1,n-1).$ Then for every fixed $\xi\in S^{n-1}$
\begin{equation}\label{fourier}
(Rf(\xi,t))_t^{(q)}(0)
\end{equation}
$$= \frac{\cos(\pi q/2)}{\pi} \left(|x|_2^{-n+q+1}\left(\int_0^{\frac{|x|_2}{\|x\|_K}} r^{n-q-2}f(r\frac{x}{|x|_2}) dr \right) \right)^\wedge_x(\xi).$$
\end{lemma}
\noindent {\bf Proof : \ } Let $-1<q<0.$ Then
$$(Rf(\xi,t))_t^{(q)}(0) = \frac 1{2\Gamma(-q)}
\int_{-\infty}^{\infty} |t|^{-1-q}\left(\int_{K\cap \{x:\ (x,\xi)=t\}} f(x) dx\right) dt$$
$$=\frac 1{2\Gamma(-q)} \int_{{\mathbb R}^n} |(x,\xi)|^{-1-q} f(x) \chi_K(x) dx $$
$$=\frac 1{2\Gamma(-q)} \int_{S^{n-1}} |(\theta,\xi)|^{-1-q}\left( \int_0^{\|\theta\|_K^{-1}} r^{n-q-2} f(r\theta) dr\right) d\theta$$
Consider the latter as a homogeneous function of degree $-1-q$ of $\xi\in {\mathbb R}^n\setminus\{0\},$ apply it to an even test function $\phi$ and use Lemma \ref{lemma3.14}:
$$ \langle(Rf(\xi,t))_t^{(q)}(0), \phi \rangle$$
$$ = \dfrac{1}{2\Gamma(-q)} \int_{{\mathbb R}^n} \phi(\xi) \left(\int_{S^{n-1}} \left| ( \theta, \xi ) \right|^{-1 -q} \left( \int_0^{\|\theta\|_{K}^{-1}} r^{n-q-2} f(r\theta) dr\right) d\theta \right) d\xi $$
$$= \frac{\cos(q\pi/2)}{\pi} \int_{S^{n-1}} \left(\int_0^{\infty} t^q\hat{\phi}(t\theta) dt \right)\left(\int_0^{\|\theta\|_K^{-1}} r^{n-q-2} f(r\theta) dr \right) d\theta.$$
On the other hand, if we apply the function in the right-hand side of (\ref{fourier}) to the test function $\phi$ we get
$$ \left\langle \dfrac{\cos(\pi q / 2)}{\pi} \left( |x|_2^{-n + q + 1} \left( \int_0^{\frac{|x|_2}{\|x\|_K}} r^{n-q-2} f(\frac{rx}{|x|_2}) dr\right) \right)_x^{\wedge} (\xi), \phi(\xi)\right\rangle $$
$$ =\dfrac{\cos(\pi q / 2)}{\pi} \int_{{\mathbb R}^n} |x|_2^{-n + q + 1} \left( \int_{0}^{\frac{|x|_2}{\|x\|_K}} r^{n-q-2} f(\frac{rx}{|x|_2}) dr\right) \hat{\phi}(x) dx$$
$$ =\dfrac{\cos(\pi q / 2)}{\pi} \int_{S^{n-1}} \left(\int_0^\infty t^q \hat{\phi} (t\theta) dt\right) \left( \int_0^{\|\theta\|_K^{-1}} r^{n-q-2} f(r\theta) dr\right) d\theta.$$
This proves the result for $-1<q<0.$ By a simple argument similar to that in Lemma 2.22 from \cite{K1}, one can see that both sides
of (\ref{fourier}) are analytic functions of $q$ in the domain $-1<\Re q <n-1.$
By analytic extension, (\ref{fourier}) holds for all $-1<q<n-1.$
\begin{flushright} $\Box$ \end{flushright}
Let us compute the derivative of the Radon transform in the case where $f\equiv 1$ and $K=B_2^n,$ the unit Euclidean ball.
We denote by $\chi_K$ the indicator function of $K.$ Let $B_n=B_2^n/|B_2^n|^{\frac 1n}$ be the Euclidean ball of volume 1 in ${\mathbb R}^n.$
\begin{co} \label{eucl1} For $-1<q<n-1$ and every $\xi\in S^{n-1}$
$$(R(\chi_{B_2^n})(\xi,t))_t^{(q)}(0)=|B_2^n\cap \{x:(x,\xi)=t\}|_t^{(q)}(0)$$$$
=((1-t^2)^{\frac{n-1}2})^{(q)}(0) |B_2^{n-1}|= \frac{\cos(\frac{\pi q}2)}{\pi(n-q-1)} (|x|_2^{-n+q+1})^\wedge(\xi)$$
$$=\frac{2^{q+1} \pi^{\frac{n-2}2} \Gamma(\frac{q+1}2) \cos(\frac{\pi q}2)}{(n-q-1)\Gamma(\frac{n-q-1}2)}.$$
\end{co}
\noindent {\bf Proof : \ } Use (\ref{fourier}) with $f\equiv 1$ and $K=B_2^n$ and the formula for the Fourier transform of powers of the Euclidean norms from \cite{GS}: if $\lambda \in (-n,0),$ then
\begin{equation} \label{eucl}
(|x|_2^{\lambda})^\wedge (\xi) = \frac{2^{\lambda +n} \pi^{\frac{n}2} \Gamma(\frac{\lambda +n}2)}
{\Gamma(\frac{-\lambda}2)}|\xi|_2^{-\lambda-n}.
\end{equation}
\begin{flushright} $\Box$ \end{flushright}
\bigbreak
Let $B_n=B_2^n/|B_2^n|^{\frac 1n}$ be the Euclidean ball of volume 1 in ${\mathbb R}^n,$ and $q\ge 0$ not an odd integer.
Then by Corollary \ref{eucl1} and since $|B_2^n|=\pi^{n/2}/\Gamma(\frac n2+1),$ $\Gamma(x+1)=x\Gamma(x)$,
and the $\Gamma$-function is log-convex (see \cite[Lemma 2.14]{K1}),
\begin{equation}\label{eucl-norm}
\frac 1{\cos(\frac{\pi q}2)} (R(\chi_{B_n})(\xi,t))_t^{(q)}(0)=\frac 1{\cos(\frac{\pi q}2)|B_2^n|^{\frac{n-q-1}n}}(R(\chi_{B_2^n})(\xi,t))_t^{(q)}(0)
\end{equation}
$$= \frac{2^{q} \pi^{\frac{n-2}2} \Gamma(\frac{q+1}2)\left(\Gamma(1+\frac n2)\right)^{\frac {n-q-1}n}}{\Gamma(\frac{n-q-1}2+1)\pi^{\frac{n-q-1}2}}\ge c^{q+1}(q+1)^{\frac{q+1}2},$$
where $c$ is an absolute constant.
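The closed form of Corollary \ref{eucl1} can also be reproduced numerically from the definition of the fractional derivative. The following sketch (for $n=5$ and two values of $q$; it is not used in any argument) computes the fractional derivative at zero of the parallel section function of $B_2^n$, extended by zero for $|t|>1$, via (\ref{fracdereven}) with $m=2$, and compares it with the expression from Corollary \ref{eucl1}.
\begin{verbatim}
from math import gamma, pi, cos
from scipy.integrate import quad

n = 5
vol_ball = lambda k: pi**(k/2)/gamma(k/2 + 1)          # |B_2^k|
h = lambda t: vol_ball(n-1)*(1 - t**2)**((n-1)/2)      # parallel section function, |t|<=1

for q in [0.5, 1.5]:                                    # 0 < q < 2, not an odd integer
    # (fracdereven) with m = 2; for t > 1 the function vanishes, so that part of
    # the integral equals -h(0)*t^(-q-1), integrated analytically to -h(0)/q
    val = (quad(lambda t: t**(-q-1)*(h(t) - h(0.0)), 0, 1, limit=500)[0]
           - h(0.0)/q)/gamma(-q)
    closed = 2**(q+1)*pi**((n-2)/2)*gamma((q+1)/2)*cos(pi*q/2) \
             /((n-q-1)*gamma((n-q-1)/2))
    print(q, val, closed)                               # the two columns agree
\end{verbatim}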
\bigbreak
For $q>0,$ the radial $q$-sum of star bodies $K$ and $L$ in ${\mathbb R}^n$ is defined as a new star body $K\tilde{+}_qL$ whose radius
in every direction $\xi\in S^{n-1}$ is given by
$$r^q_{K\tilde{+}_qL}(\xi)= r^q_{K}(\xi) + r^q_{L}(\xi),\qquad \forall \xi\in S^{n-1}.$$
The radial metric in the class of origin-symmetric star bodies is defined by
$$\rho(K,L)=\sup_{\xi\in S^{n-1}} |r_K(\xi)-r_L(\xi)|.$$
For the purposes of this article, we adopt the following definition of intersection bodies.
\begin{df}\label {bp-bodies}For $0<q<n,$ we define the class of {\it generalized $q$-intersection bodies} ${\cal{BP}}_q^n$ in ${\mathbb R}^n$
as the closure in the radial metric of radial $q$-sums of finite collections of origin-symmetric ellipsoids in ${\mathbb R}^n.$
\end{df}
When $q=1,$ by the result of Goodey and Weil \cite{GW}, we get the original class of intersection bodies ${\cal{I}}_n$ in ${\mathbb R}^n$ introduced by Lutwak \cite{Lu}. For integers $k,\ 1<k<n$, by the result of Grinberg and Zhang \cite{GZ}, we get the classes
of generalized $k$-intersection bodies introduced by Zhang \cite{Z2}. Note that the original definitions of Lutwak and Zhang are different; see \cite[Chapter 4]{K1} for details.
Another generalization of the concept of an intersection body was introduced
in \cite{K-2000}. For $k=1,$ this is the original definition of Lutwak.
\begin{df}For an integer $k,\ 1\le k <n$ and star bodies $D,L$ in ${\mathbb R}^n,$
we say that $D$ is the $k$-intersection body of $L$ if
$$|D\cap H^{\perp}|= |L\cap H|,\qquad \forall H\in Gr_{n-k}.$$
Taking the closure in the radial metric of the class of
$k$-intersection bodies of star bodies, we define the class of
{$k$-intersection bodies}.
\end{df}
The latter definition is related to embeddings in $L_p$-spaces. By $L_p,\ p>0$ we mean the $L_p$-space of functions on $[0,1]$ with Lebesgue measure. It was proved in \cite{K-2000} that an $n$-dimensional normed
space embeds isometrically in $L_p,$ where $p>0$
and $p$ is not an even integer, if and only if the Fourier transform in the sense of Schwartz distributions
of the function $\Gamma(-p/2)\|\cdot\|^p$ is a non-negative distribution outside of the origin in ${\mathbb R}^n.$
The concept of embedding of finite dimensional normed spaces in $L_p$ with negative $p$
was introduced in \cite{K-CMB, K-2000}, as an
extension of embedding into $L_p$ with $p>0,$ as follows.
\begin{df} For a star body $D$ in ${\mathbb R}^n,$ we say that $({\mathbb R}^n,\|\cdot\|_D)$ embeds in $L_{-p},\ 0<p<n$
if the function $\|\cdot\|_D^{-p}$ represents a positive definite distribution.
We denote the class of such star bodies by $L_{-p}^n.$
\end{df}
A connection between $k$-intersection bodies and embedding in $L_p$ with
negative $p$ was found in \cite{K-2000}.
\begin{theorem} \label{connection} Let $1\le k < n.$ The following are equivalent:
\item{(i)} An origin symmetric star body $D$ in ${\mathbb R}^n$ is a $k$-intersection
body;
\item{(ii)} The space $({\mathbb R}^n,\|\cdot\|_D)$ embeds in $L_{-k}.$
\end{theorem}
It was proved in \cite{K-2000} (see also \cite{M2, KY}) that for any integer $k,\ 1\le k <n$ every generalized $k$-intersection body is a $k$-intersection body. We need an extension of this fact to non-integers, as follows.
\begin{lemma} \label{incl} For every $0<q<n,$ we have ${\cal{BP}}_q^n\subset L_{-q}^n.$
\end{lemma}
\noindent {\bf Proof : \ } The powers of the Euclidean norm $|x|_2^{-q},\ 0<q<n$ represent positive definite distributions in ${\mathbb R}^n;$ see formula (\ref{eucl}).
Because of the connection between the Fourier transform of distributions and linear transformations, the norms generated by origin-symmetric ellipsoids in ${\mathbb R}^n$ have the same property, and, therefore, radial $q$-sums of ellipsoids in ${\mathbb R}^n$ belong to
the class $L_{-q}^n.$ The fact that positive definiteness is preserved under limits in the radial metric follows from
\cite[Lemma 3.11]{K1}. \begin{flushright} $\Box$ \end{flushright}
\section{Proofs of main results}
\noindent{\bf Proof of Theorem \ref{bp}.} Let $D$ be such that $|D|^{1/n}\le (1+\delta)\, d_{\rm ovr}(K,L^n_{-1-q})\, |K|^{1/n},\ K\subset D,\ D\in L^n_{-1-q},\ \delta>0.$ By approximation, we can assume that $D$ is infinitely smooth; see \cite[Lemma 4.10]{K1}.
Then $(\|x\|_D^{-1-q})^\wedge$ is a positive function on the sphere. By (\ref{fourier}) for every $\xi\in S^{n-1}$
$$\left(|x|_2^{-n+q+1}\left(\int_0^{\frac{|x|_2}{\|x\|_K}} r^{n-q-2}f(r\frac{x}{|x|_2}) dr \right) \right)^\wedge_x(\xi) $$$$\le
\left(|x|_2^{-n+q+1}\left(\int_0^{\frac{|x|_2}{\|x\|_L}} r^{n-q-2}g(r\frac{x}{|x|_2}) dr \right) \right)^\wedge_x(\xi).$$
Multiplying both sides by $(\|x\|_D^{-1-q})^\wedge(\xi)$, integrating over the sphere and using Parseval's formula on the sphere
(\ref{parseval}) we get
$$\int_{S^{n-1}} \|\theta\|_D^{-1-q}\left( \int_0^{\|\theta\|_K^{-1}} r^{n-q-2} f(r\theta) dr\right) d\theta$$
$$\le \int_{S^{n-1}} \|\theta\|_D^{-1-q}\left( \int_0^{\|\theta\|_L^{-1}} r^{n-q-2} g(r\theta) dr\right) d\theta,$$
or
\begin{equation}\label{sect12}
\int_K \|x\|_D^{-1-q} f(x) dx \le \int_L \|x\|_D^{-1-q} g(x) dx.
\end{equation}
Since $K\subset D,$ we have $1\ge \|x\|_K\ge \|x\|_D$ for every $x\in K.$ Therefore,
$$\int_K \|x\|_D^{-1-q}f(x) dx \ge \int_K \|x\|_K^{-1-q}f(x) dx \ge \int_K f(x) dx.$$
On the other hand, by the Lemma from Section 2.1 of Milman-Pajor
\cite[p.76]{MP},
$$
\left(\frac{\int_{L}\|x\|_D^{-1-q} {g}(x) dx}{ \int_{D}\|x\|_D^{-1-q} dx} \right)^{1/(n-q-1)} \le \left(\frac{\int_{L}{g}(x) dx \large}{ \int_{D} dx} \right)^{1/n}.
$$
Since $\int_D\|x\|_D^{-1-q} dx =\frac{n}{n-q-1} |D|$ (see \cite{MP}), we can estimate the right-hand side of
(\ref{sect12}) by
$$\int_L \|x\|_D^{-1-q} g(x) dx \le \frac n{n-q-1} \left(\int_{L}{g}(x) dx\right)^{\frac {n-q-1}n} |D|^{\frac {q+1}n}.$$
Combining these estimates with the definition of $D$ and sending $\delta$ to zero, we get the result.
\begin{flushright} $\Box$ \end{flushright}
\noindent{\bf Proof of Theorem \ref{slicing}.} Consider a number $\varepsilon>0$ such that for every $\xi\in S^{n-1}$
$$\frac 1{\cos(\pi q/2)}(R(f\vert_K)(\xi,t))_t^{(q)}(0)\le \frac {\varepsilon}{\cos(\pi q/2)}(R(\chi_{B_2^n})(\xi,t))_t^{(q)}(0).$$
By (\ref{fourier}) for every $\xi\in S^{n-1}$
$$\left(|x|_2^{-n+q+1}\left(\int_0^{\frac{|x|_2}{\|x\|_K}} r^{n-q-2}f(r\frac{x}{|x|_2}) dr \right) \right)^\wedge_x(\xi) \le
\frac{\varepsilon}{n-q-1}\left(|x|_2^{-n+q+1}\right)^\wedge(\xi)$$
Let $D$ be such that $|D|^{1/n}\le (1+\delta)\, d_{\rm ovr}(K,L^n_{-1-q})\, |K|^{1/n},\ K\subset D,\ D\in L^n_{-1-q},\ \delta>0.$ By approximation, we can assume that $D$ is infinitely smooth.
Then $(\|x\|_D^{-1-q})^\wedge$ is a positive function on the sphere.
Multiplying both sides of the latter inequality by $(\|x\|_D^{-1-q})^\wedge(\xi)$, integrating over the sphere and using Parseval's formula on the sphere
we get
\begin{equation}\label{sect13}
\int_K \|x\|_D^{-1-q} f(x) dx \le \frac{\varepsilon}{n-q-1}\int_{S^{n-1}} \|x\|_D^{-1-q} dx.
\end{equation}
Since $K\subset D,$ we have
\begin{equation}\label{eq14}
\int_K \|x\|_D^{-1-q} f(x) dx \ge \int_K \|x\|_K^{-1-q} f(x) dx \ge \int_K f.
\end{equation}
On the other hand, by H\"older's inequality and the polar formula for volume,
\begin{equation}\label{eq15}
\int_{S^{n-1}} \|x\|_D^{-1-q} dx \le
|S^{n-1}|^{\frac{n-q-1}n} n^{\frac{q+1}n} |D|^{\frac{q+1}n}.
\end{equation}
Put
$$\varepsilon=\max_{\xi\in S^{n-1}} \frac{\frac 1{\cos(\pi q/2)}(R(f\vert_K)(\xi,t))_t^{(q)}(0)}{\frac {1}{\cos(\pi q/2)}(R(\chi_{B_2^n})(\xi,t))_t^{(q)}(0)},$$
then (\ref{sect13}), (\ref{eq14}), (\ref{eq15}) and the choice of $D$ (send $\delta$ to zero) imply
$$\int_K f\le c(n,q) |D|^{\frac{q+1}n} \max_{\xi\in S^{n-1}} \frac 1{\cos(\pi q/2)}(R(f\vert_K)(\xi,t))_t^{(q)}(0)$$
$$\le c(n,q) \left(d_{\rm ovr}(K,L^n_{-1-q})\right)^{q+1} |K|^{\frac{q+1}n} \max_{\xi\in S^{n-1}} \frac 1{\cos(\pi q/2)}(R(f\vert_K)(\xi,t))_t^{(q)}(0),$$
where by Corollary \ref{eucl1}, the property $\Gamma(x+1)=x\Gamma(x)$ of the $\Gamma$-function and the expression for the surface area of the unit Euclidean sphere (see for example \cite[Corollary 2.20]{K1}),
$$c(n,q)= \frac{\pi \Gamma(\frac{n-q-1}2) n^{\frac{q+1}n} 2^{\frac{n-q-1}n} \pi^{\frac{n-q-1}2}}
{2^{q+1}\pi^{\frac n2} \Gamma(\frac{q+1}2) \left(\Gamma(\frac n2)\right)^{\frac{n-q-1}n}}$$
$$= \frac{n\ \Gamma(\frac{n-q-1}2+1)}{2^q \pi^{\frac{q-1}2} \Gamma(\frac{q+1}2) (n-q-1)
\left(\Gamma(\frac n2 +1)\right)^{\frac{n-q-1}n}}\le \frac{n}{2^q \pi^{\frac{q-1}2} \Gamma(\frac{q+1}2) (n-q-1)},$$
where in the last step we used the log-convexity of the $\Gamma$-function (see \cite[Lemma 2.14]{K1}) to estimate
$$\frac{\Gamma(\frac{n-q-1}2+1)}{\left(\Gamma(\frac n2 +1)\right)^{\frac{n-q-1}n}}\le 1.$$
\begin{flushright} $\Box$ \end{flushright}
To prove Theorem \ref{fract-der} we need the estimate for the outer volume ratio distance from \cite[Theorem 1.1]{KPZ}. Note that this estimate was proved in \cite{KPZ} for integers $q,$ but the proof remains exactly the same for non-integers. Also note that a mistake in the proof in \cite{KPZ} was corrected in \cite[Section 5]{K2}.
\begin{pr} \label{kpz} (\cite{KPZ}) For every $q\in [1,n-1]$ and every origin-symmetric convex body $K$ in ${\mathbb R}^n$
$$d_{\rm ovr}(K, {\cal{BP}}_q^n)\le C\sqrt{\frac{n\log^3(\frac{ne}{q})}{q}},$$
where $C$ is an absolute constant, and
$$d_{{\rm {ovr}}}(K, {\cal{BP}}_q^n) = \inf \left\{ \left( \frac {|D|}{|K|}\right)^{1/n}:\ K\subset D,\ D\in {\cal{BP}}_q^n \right\}.$$
\end{pr}
\noindent{\bf Proof of Theorem \ref{fract-der}.} By Lemma \ref{incl} and Proposition \ref{kpz},
\begin{equation}\label{general}
d_{\rm ovr}(K,L^n_{-1-q})\le d_{\rm ovr}(K, {\cal{BP}}_{q+1}^n)\le C\sqrt{\frac{n\log^3(\frac{ne}{q+1})}{q+1}}.
\end{equation}
Now the result follows from Theorem \ref{slicing}. Note that
$$\frac n{n-q-1} \le e^{\frac{q+1}{n-q-1}} \le e^{q+1}.$$
\begin{flushright} $\Box$ \end{flushright}
We conclude this note with several remarks about the classes $L_{-p}^n$ and the outer volume ratio distance
$d_{\rm ovr}(K,L^n_{-p}).$ First, every class $L_{-p},\ 0<p<n$ contains the unit balls of all $n$-dimensional subspaces of $L_r$ with $0<r\le 2;$ see \cite{K5} and \cite[Theorem 6.17]{K1}. In particular all of these classes contain the $\ell_1^n$-ball
$$B_1^n=\{x\in {\mathbb R}^n: |x_1|+...+|x_n|\le 1\}.$$
This implies that the distance $d_{\rm ovr}(K,L^n_{-p})\le e$ for every unconditional convex body $K$ and every $0<p<n;$
see \cite{K2}. Since all the classes $L_{-p}$ contain origin-symmetric ellipsoids, by a result from \cite{M1} (see also \cite{KP}),
the distance $d_{\rm ovr}(K,L^n_{-p})\le c\sqrt{r},$ where $K$ is the unit ball of an $n$-dimensional subspace of $L_r$ with $r>2,$ and $c$ is an absolute constant. It was proved in \cite{M2} that the class $L_{-m}^n$ contains $L_{-k}^n$ when $m,k$ are integers and $m$ is divisible by $k.$ This inclusion for arbitrary $0<k<m<n$ has not been established in general. It is only known that if $p>s,$ then $L_{-p}^n$ contains a body that is not in $L_{-s}^n;$ see \cite{KaZ, Y}. If $n-3\le p<n,$ the class
$L_{-p}^n$ contains all origin-symmetric convex bodies in ${\mathbb R}^n;$ see \cite[Corollary 4.9]{K1}. More results about embeddings in $L_{-p}$ can be found in \cite{KaK, KPZ, K6, M1, M2}, \cite[Chapter 6]{K1} and \cite{KY}. It would be interesting to know whether the estimate (\ref{general}) is sharp.
\section{Introduction}\label{intro}
Experimental searches for charged lepton flavor violation (cLFV) are among the most sensitive probes of new physics beyond the Standard Model (SM) of elementary particles. In the SM, such flavor changing neutral current interactions in the lepton sector are absent at tree level as well as for massless neutrinos, and even when neutrino masses are introduced in an effective theory approach via the dimension five neutrino mass operator, they only get induced at loop level at tiny rates far below envisioned observational possibilities.
As an indirect probe of new physics, cLFV is known to be sensitive to extensions of the SM at scales far beyond the reach of direct searches at present and currently discussed future colliders. At present, particularly strong limits on LFV $\mu - e$ transitions come from $Br(\mu\to e\gamma)\le 4.2\times 10^{-13}$ \cite{TheMEG:2016wtm}, and on LFV $\tau - e$ transitions from $Br(\tau\to e\gamma)\le 3.3\times 10^{-8}$ \cite{Aubert:2009ag} and $Br(\tau\to 3e)\le 2.7\times 10^{-8}$ \cite{Hayasaka:2010np}.
Planned experiments to extend the cLFV searches beyond these limits include MEG II \cite{Baldini:2018nnn}, which could reach a sensitivity for $Br(\mu\to e\gamma)$ down to $6\times 10^{-14}$. Furthermore, the Mu3e experiment plans to reach a sensitivity for $Br(\mu\to 3e)$ down to $2\times 10^{-15}$ \cite{Arndt:2020obb}. Regarding muon to electron conversion, the Mu2e and COMET experiments have the goal to increase the sensitivity to the $\mu-e$ conversion rate by four orders of magnitude down to $3\times 10^{-17}$ \cite{Miscetti:2020gkk,Shoukavy:2019ydh}, and the PRISM project even aims at a sensitivity down to $10^{-18}$ \cite{Alekou:2013eta}. Both $B$-factories BABAR and BELLE II aim to improve the sensitivity on LFV $\tau$ decays by more than an order of magnitude down to $Br(\tau\to e\gamma) < 3\times 10^{-9} $ and $Br(\tau\to 3e) < 1.2\times 10^{-9} $ \cite{Kou:2018nap,Bona:2007qt,Lusiani:2010eg}.
In this paper, we show that future electron-proton ($ep$) colliders such as the LHeC would be excellent facilities for probing the cLFV conversion of an electron into a muon or a tau via the effective coupling to a neutral gauge boson or a neutral scalar. To explore the potential for discovering cLFV induced by heavy new physics in a model-independent way, we consider a general effective Lagrangian for our sensitivity calculations via collider simulations at the reconstructed level. In addition, as an example model where flavor changing neutral current (FCNC) operators inducing cLFV are generated at loop level, we consider the extension of the Standard Model by sterile neutrinos. There we show that the LHeC could probe the LFV conversion of an electron into a muon beyond the current experimental bounds, and could reach more than an order of magnitude higher sensitivity for the LFV conversion of an electron into a tau.
\section{High sensitivity to cLFV at $ep$ colliders}
\label{sec.2}
Compared to electron-positron colliders, the high center-of-mass energy at $ep$ colliders can provide an environment to test the SM at high energies with comparably low rates of background. Two examples of possible future $ep$ colliders are the Large Hadron electron Collider (LHeC) \cite{Agostini:2020fmq,Bruening:2013bga,AbelleiraFernandez:2012cc,Klein:2009qt} and the $ep$ mode of the Future Circular Collider (FCC). At the LHeC, the center-of-mass energy of 1.3 TeV with a total of 3 $\text{ab}^{-1}$ integrated luminosity would be achieved by the use of the 7 TeV proton beam of the LHC in addition to a 60 GeV electron beam with up to $80\% $ polarization. Moreover, the proposed electron-proton experiment at the FCC (FCC-eh) is designed with the same energy level of the electron beam from the LHeC electron linac, but with the upgraded proton beam with energy of 50 TeV from the FCC-hh. This will achieve a center-of-mass energy of 3.5 TeV. This environment can be employed for significantly improving the PDF measurements and lowering the associated systematic uncertainties. At the same time, an impact on the precision of some Higgs measurements is anticipated. In general, electron-proton colliders would be a great environment for testing certain types of new physics beyond the Standard Model, as has been explored in various studies (cf.\ e.g.\ \cite{Antusch:2020fyz,Jana:2019tdm,Flores-Sanchez:2019jcx,Azuelos:2019bwg,Antusch:2019eiz,DelleRose:2018ndz,Dev:2019hev}).
\subsection{cLFV via effective vertices at $ep$ colliders}
Charged lepton flavor violating (cLFV) processes can occur at the LHeC through an effective vertex that couples the incoming electron to a muon or a tau and a neutral scalar or vector boson. With the neutral scalar or vector boson in the $t$-channel, the effective flavor changing neutral current (FCNC) interactions can lead to $e-\mu$ or $e-\tau$ flavor transitions, as shown in Fig.\ \ref{fig:1}. The processes have a specific kinematics that can be used to efficiently discriminate the signal from the SM background. A particularly useful feature, as we will discuss below in section \ref{sec:kinematics}, is that at low momentum transfer the final state lepton, i.e.\ the $\mu$ or $\tau$, is dominantly emitted in the backward region of the detector (cf.\ \cite{Agostini:2020fmq,AbelleiraFernandez:2012cc}). At the LHeC, we will show that this allows to almost completely suppress the relevant SM backgrounds in some cases.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.3]{feynman_1.png}
\caption{Feynman diagrams for cLFV processes at the LHeC induced by effective operators (represented by blobs in the diagrams) that couple the incoming electron to a muon or a tau and a vector boson $V_\nu$ (left) or a scalar $S$ (right).
}
\label{fig:1}
\end{figure}
The effective FCNC Lagrangian for charged leptons contains effective operators coupling the charged leptons to neutral scalars and neutral vector bosons. The effective Lagrangian for the couplings to neutral scalars is given by
\begin{align}
{\mathcal{L}^{\text{scalar}}_{\mathrm{eff}}} = \bar{\ell}_\alpha P_{L,R} \ell_\beta S\ N_{L,R} ,
\label{eq:1}
\end{align}
with $\ell_\beta, \ell_\alpha, S$ representing the incoming and outgoing charged leptons and the neutral scalar boson of the effective vertex, respectively. $N_{L,R}$ represents the left and right form factors of the effective scalar operator and $P_{L,R}$ are the chiral projection operators.
We note that expressions like $P_{L,R} N_{L,R}$ are shorthand notations for the sum over both combinations, $P_{L} N_{L}+P_{R} N_{R}$.
The part of the effective Lagrangian for the coupling to vector bosons can be expressed in terms of monopole and dipole operators. The effective Lagrangian containing the monopole operators is given by
\begin{align}
{\mathcal{L}^{\text{monopole}}_{\mathrm{eff}}} = \bar{\ell}_\alpha \gamma_\mu P_{L,R} \ell_\beta\left[A_{L,R}\ g^{\mu\nu}+ B_{L,R}(g^{\mu\nu} q^2-q^\mu q^\nu) \right] V_\nu ,
\label{eq:2}
\end{align}
where $q$ is the momentum of the gauge boson $V_\nu$ and where in the SM $V_\nu$ is either $Z$ or $\gamma$. $A_{L,R}$ and $B_{L,R}$ are the form factors of the monopole operators. The effective Lagrangian containing the dipole operator is given by
\begin{align}
{\mathcal{L}^{\text{dipole}}_{\mathrm{eff}}} = \bar{\ell}_\alpha \sigma^{\mu\nu} P_{L,R} \ell_\beta\ q_\mu V_\nu\ D_{L,R} ,
\label{eq:3}
\end{align}
with $\sigma^{\mu\nu}=\frac{i}{2}[\gamma^\mu,\gamma^\nu]$ and $D_{L,R}$ denoting the left and right form factors of the dipole operator.
\subsection{Low background for cLFV due to specific kinematics}\label{sec:kinematics}
The differential cross sections of the cLFV processes (cf.\ Fig.\ \ref{fig:1}) depend on the center of mass energy $s$ and the two kinematic variables $q^2$ and the Bjorken variable $x$. At the electron-proton colliders, the Bjorken $x$ can be obtained from the measurement of the inelasticity $y_e$ as \cite{Klein:2008di}
\begin{align}
x = \frac{q^2}{s\ y_e},\hspace{6mm} \text{with}\hspace{6mm} y_e = 1- \frac{E_\mu}{2E_e}\left(1-\cos\theta \right),
\end{align}
with $E_\mu,E_e$ being the energies of the scattered muon and the incoming electron, respectively. The scattering angle $\theta$ is defined between the direction of the outgoing particles and the proton beam. For the region of the parameter space with $x \approx E_e/E_p$,
the energy of the scattered muon is approximately equal to the electron beam energy, which causes the cross section to peak in the
backward direction of the detector. For larger $q^2$, $x$ is larger due to the larger energy transfer from the proton beam, which
pushes the scattered muons somewhat more into the forward direction \cite{AbelleiraFernandez:2012cc}.
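As a simple numerical illustration of this kinematic reconstruction, the following minimal Python sketch (not part of our analysis chain; the beam energies $E_e = 60$ GeV and $E_p = 7$ TeV and the example muon kinematics are assumptions chosen only for illustration) evaluates $y_e$ and the Bjorken $x$ from the scattered-muon energy and angle:
\begin{verbatim}
import numpy as np

# Illustrative sketch: reconstruct the inelasticity y_e and Bjorken x from
# the scattered-muon kinematics, assuming LHeC-like beam energies.
E_e, E_p = 60.0, 7000.0                 # GeV
s = 4.0 * E_e * E_p                     # squared center-of-mass energy

def bjorken_x(E_mu, theta, q2):
    """y_e = 1 - E_mu/(2 E_e)*(1 - cos(theta)),  x = q2/(s*y_e)."""
    y_e = 1.0 - E_mu / (2.0 * E_e) * (1.0 - np.cos(theta))
    return q2 / (s * y_e), y_e

# Example: a 60 GeV muon emitted in the backward region (theta ~ 2.8 rad)
x, y_e = bjorken_x(E_mu=60.0, theta=2.8, q2=100.0)
print(f"y_e = {y_e:.3f},  x = {x:.5f},  E_e/E_p = {E_e/E_p:.5f}")
\end{verbatim}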
\begin{figure}
\centering
~~~~~~\includegraphics[scale=0.5]{theta.pdf}
\caption{Examples for the muon angular distributions at the reconstructed level for the photon (red), $Z$ boson (blue) and Higgs boson (black) mediated cLFV processes shown in Fig.\ \ref{fig:1}. The distributions in the plot correspond to the contributions from the form factors $A^{Z/\gamma}_{L,R}$ and $N^{H}_{L,R}$, with total number of events normalized to one. Note that the $y$-axis has a logarithmic scale. The forward direction is the proton beam direction and the backward direction is the electron beam direction. }
\label{fig:2}
\end{figure}
The SM background processes take place through the charged and neutral currents with $W^\pm$ and $Z/H$ boson exchange. For the charged current, a ($t$-channel) $W$ boson can radiate a $Z/\gamma^\ast$ which then generates an $\ell\bar{\ell}$ pair. For the neutral currents, a ($t$-channel) $Z/H$ boson can generate charged leptons via radiating weak gauge bosons which then decay leptonically.
Other backgrounds come from the decays of on-shell produced bosons, e.g. $pe^-\to Z e^- j$, $Z\to\mu^\pm\mu^\mp$. The production of the on-shell $Z$ boson requires a large energy transfer, and thus the dimuons will be detected mainly in the forward region of the detector. Accordingly, the cLFV process at the LHeC through an effective vertex can provide a unique signal in the backward direction that is almost background free.
In Fig.\ \ref{fig:2}, we show examples for the angular distribution of the scattered muons at the LHeC, for the case of exchanged photons, $Z$ bosons, and SM Higgs particles (showing as examples the form factors $A^{Z/\gamma}_{L,R}$ and $N^{H}_{L,R}$).
As one can see, the scattered muons are dominantly emitted in the backward direction.
For the massive mediators $(Z, H)$, the cross section is maximal at $q^2=M^2$ and thus the peak shifts towards the forward direction compared to the photon case. A similar effect occurs for the form factors with momentum dependence, $B^{Z/\gamma}_{L,R}$ and $D^{Z/\gamma}_{L,R}$, which will be discussed in section \ref{sec:model-independent} (with angular distributions shown in Fig.~\ref{fig:theta_FF}).
For the simulation, we have implemented the effective vertices in MadGraph \cite{Alwall:2014hca}.
After the events are generated with MadGraph, Pythia \cite{Sjostrand:2006za} is used for showering and hadronization. For a fast LHeC detector simulation we use Delphes \cite{deFavereau:2013fsa}. The event reconstruction has been done with MadAnalysis5 \cite{Conte:2012fm}, with the requirement that the scattered muons have to be hard, with $P_T> 25$ GeV.
\section{LHeC sensitivity to cLFV from heavy neutral leptons}
In this section, before we turn to the model-independent analysis, we investigate the sensitivity of the LHeC for cLFV induced by heavy neutral leptons (also referred to as ``heavy neutrinos'' or ``sterile neutrinos''). In particular, we will explore the LHeC sensitivity to the combinations $|\theta_e\theta^\ast_\mu|$ and $|\theta_e\theta^\ast_\tau|$ of active-sterile neutrino mixing angles within the ``Symmetry Protected Seesaw Scenario'' (SPSS) benchmark scenario (cf.\ \cite{Antusch:2015mia,Antusch:2016ejd}), and compare it with the current bounds from non-collider experiments. The most relevant present constraints on the mixing parameters come from the two body decays, e.g. $\ell_\alpha\to \ell_e \gamma$ \cite{TheMEG:2016wtm,Aubert:2009ag}, and the three body decays $\ell_\alpha\to 3 \,\ell_e$ \cite{Bellgardt:1987du,Baranov:1990uh,Hayasaka:2010np} of taus and muons ($\alpha = \mu,\tau$). For final state muons we also consider the constraint from the $\mu - e$ conversion search at SINDRUM II \cite{Dohmen:1993mp}.
\subsection{Benchmark scenario: SPSS}\label{sec:SPSS}
For the analysis of the LHeC sensitivities and the comparison to the present experimental constraints, we consider the SPSS benchmark model. In this subsection, we will only give a brief summary of the SPSS and refer for details to \cite{Antusch:2015mia,Antusch:2016ejd}. Beyond the particle content of the SM, the scenario includes two sterile neutrinos with opposite charges under an approximate ``lepton number''-like symmetry. The small observed neutrino masses arise from the small breaking of the ``lepton number''-like symmetry.
For the study of cLFV, we can treat the protective ``lepton number''-like symmetry as being exact, such that lepton number is conserved. A discussion of the parameter regions in which lepton number violating effects can be observable in the SPSS benchmark model with small symmetry breaking can be found in \cite{Antusch:2017ebe}.
The Lagrangian density of the SPSS benchmark model, including the sterile neutrino pair $N_R^1$ and $N_R^2$, is given by:
\begin{equation}
\mathcal{L} = \mathcal{L}_\mathrm{SM} - \overline{N_R^1}
M_N
N^{2\,c}_R - y_{\nu_{\alpha}}\overline{N_{R}^1} \widetilde \phi^\dagger \, L^\alpha
+\mathrm{H.c.}
+ \dots \;,
\label{eqn:lagrange}
\end{equation}
where $L^\alpha$ ($\alpha=e,\mu,\tau$) and $\phi$ are the lepton and Higgs doublets, respectively, and the parameters $y_{\nu_{\alpha}}$ denote the complex-valued neutrino Yukawa couplings. $M_N$ is the heavy neutral lepton (Majorana) mass parameter.
The dots indicate additional terms which can be neglected in this study. They may contain additional heavy neutral leptons that are decoupled from collider phenomenology and indirect searches, as well as the terms which slightly break the ``lepton number''-like symmetry.
After electroweak symmetry breaking the neutral leptons (i.e.\ the active and sterile neutrinos) have a symmetric mass matrix, which can be diagonalized by a unitary 5 $\times$ 5 matrix $U$, cf.\ \cite{Antusch:2015mia}.
The mass eigenstates are $\tilde n_j = \left(\nu_1,\nu_2,\nu_3,N_4,N_5\right)^T_j = U_{j \alpha}^{\dagger} n_\alpha$. They include the three light neutrinos (which are actually massless in the symmetry limit) and two heavy neutrinos with (in the symmetry limit) degenerate mass eigenvalues $M_N$.
The off-diagonal block of the mixing matrix $U$ governs the interactions of the heavy neutrinos. It can be quantified by the active-sterile neutrino mixing angles $\theta_\alpha$ related to the neutrino Yukawa couplings $y_{\nu_{\alpha}}$ via
\begin{equation}
\theta_\alpha = \frac{y_{\nu_\alpha}^{*}}{\sqrt{2}}\frac{v_\mathrm{EW}}{M_N}\,, \qquad |\theta|^2 := \sum_{\alpha} |\theta_\alpha|^2\,,
\label{def:thetaa}
\end{equation}
where $v_\mathrm{EW} = 246.22$ GeV denotes the vacuum expectation value of the Higgs field.
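For orientation, the following minimal Python sketch (the helper name is ours; the benchmark values $M_N = 1$ TeV and $|\theta_e\theta^\ast_\mu| = 10^{-3}$ with $\theta_e=\theta_\mu$ are those used later in the analysis) relates the active-sterile mixing angle defined above to the corresponding Yukawa coupling:
\begin{verbatim}
import numpy as np

# theta_alpha = y_nu^* v_EW / (sqrt(2) M_N); here we invert the relation to
# find the Yukawa coupling implied by the benchmark mixing angle.
v_EW = 246.22                                  # GeV

def mixing_angle(y_nu, M_N):
    return y_nu * v_EW / (np.sqrt(2.0) * M_N)

M_N = 1000.0                                   # GeV
theta = np.sqrt(1.0e-3)                        # theta_e = theta_mu benchmark
y_nu = theta * np.sqrt(2.0) * M_N / v_EW       # implied Yukawa coupling
print(f"theta = {theta:.4f}  <->  y_nu = {y_nu:.3f}")
print(f"check: {mixing_angle(y_nu, M_N):.4f}")
\end{verbatim}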
Due to the mixing of the active and sterile neutrinos, the heavy neutrino mass eigenstates participate in the weak interactions as
\begin{eqnarray}
j_\mu^\pm & \supset & \frac{g}{2} \, \theta_\alpha \, \bar \ell_\alpha \, \gamma_\mu P_L \left(-\mathrm{i} N_4 + N_5 \right) + \text{H.c.} \,, \label{eqn:weakcurrent1}\\
j_\mu^0 & = & \frac{g}{2\,c_W} \sum\limits_{i,j=1}^5 \vartheta_{ij} \overline{ \tilde n_i} \gamma_\mu P_L \tilde n_j\,, \\
\mathcal{L}_{\rm Yuk.} & \supset & \frac{M_N}{v_\mathrm{EW}} \sum\limits_{i=1}^3 \left(\vartheta_{i4}^* \overline{N_4^c}+ \vartheta_{i5}^*\overline{N^c_5}\right) H\, \nu_i +\text{ H.c.} \,.
\label{eqn:weakcurrent2}
\end{eqnarray}
$g$ is the weak coupling constant, $c_W$ the cosine of the Weinberg angle and $P_L = {1 \over 2}(1-\gamma^5)$ is the left-chiral projection operator. $H$ denotes the real scalar Higgs boson and $\vartheta_{ij} := \sum_{\alpha=e,\mu,\tau} U^\dagger_{i\alpha}U_{\alpha j}^{}$.
Finally, we note that in the symmetry limit of the SPSS benchmark model, only the moduli $|\theta_{e}|$, $|\theta_{\mu}|$ and $|\theta_{\tau}|$ of the active-sterile mixing angles and the (w.l.o.g.\ real and positive) mass parameter $M_N$ are physical.
Furthermore, we remark that via the relation
$
|V_{ \alpha N}|^2 = | \theta_\alpha |^2 \:,
$
one can readily translate our results (which we will give in terms of the active-sterile neutrino mixing angles $\theta_\alpha$) to the neutrino mixing matrix elements $V_{ \alpha N}$ often used in the literature.
\subsection{Calculation of the form factors for the cLFV operators}
To calculate the form factors for the cLFV operators within the SPSS from the respective penguin diagrams (cf.\ Fig.~\ref{fig:3}), we use the package Peng4BSM@LO \cite{Bednyakov:2013tca}. Peng4BSM@LO is a Mathematica package that calculates the contributions to the form factors of certain effective operators originating from one-loop penguin Feynman diagrams. In order to allow for generic finite form factors, the package calculates the form factors as a first-order expansion in the small masses and momenta of the external fermions. We remark that the cLFV penguin processes have no tree-level amplitude and are thus finite at the one-loop level: the UV divergences vanish when we sum over all diagrams and apply the unitarity condition of the leptonic mixing matrix $U$.
We find (using Peng4BSM@LO \cite{Bednyakov:2013tca}) that the form factors in the SM extension by heavy neutral leptons (within the SPSS benchmark scenario) are given by
\begin{align*}
B^\gamma_L &= \sum_{k=1}^5\frac{e^2 |\theta_e\theta^\ast_{\alpha}|}{1152\pi^2 M^3_W\sin^3\theta_W(1-x^2_k)^4 }\left[ (x^2_k-1)(3e\ v_{EW} \ x^2_k(2x^4_k+5x^2_k-1)\right.\nonumber\\
&\left.-M_W\sin\theta_W(11x^6_k-27x^4_k+90x^4_k-20))
+12(M_W\sin\theta_Wx^4_k(x^4_k-4x^2_k+12)-3e\ v_{EW}\ x^6_k) \ln(x_k)\right],\\
\\
D^\gamma_L &= \frac{-i\ e^2\ M_e|\theta_e\theta^\ast_{\alpha}|}{384\pi^2 \ M^2_W\sin^2\theta_W}\sum_{k=1}^5\left(\frac{7-34x^2_k+63x^4_k-34x^6_k-2x^8_k-(48x^6_k-12x^4_k)\ln(x_k)}{(1-x^2_k)^4}\right),\\
\\D^\gamma_R &= \frac{-i\ e^2\ M_{\alpha}|\theta_e\theta^\ast_{\alpha}|}{384\pi^2\ M^2_W\sin^2\theta_W}\sum_{k=1}^5\left(\frac{7-34x^2_k+63x^4_k-34x^6_k-2x^8_k-(48x^6_k-12x^4_k)\ln(x_k)}{(1-x^2_k)^4}\right),\\
\end{align*}
\begin{align*}
A^Z_L &= \frac{ e^2|\theta_e\theta^\ast_{\alpha}|}{16\pi^2\ M_W\ \cos\theta_W\sin^3\theta_W }\\
&\times\sum_{k=1}^5\left(\frac{ 1 }{(1-x^2_k)^4}\left[ (x^2_k-1)(M_W(8x^2_k\sin^2\theta_W-9x^2_k-1)-4 e\sin\theta_W\ v_{EW} x^2_k)\right.\right. \nonumber\\
&\left.\left. +4(M_W(5-4\sin^2\theta_W)+2 e \sin\theta_W v_{EW})x^4_k \ln(x_k) \right] \right),\\
\\
B^Z_L &= -\sum_{k=1}^5\frac{i \text{e}^2 |\theta_e\theta^\ast_{\alpha}|}{2304\pi^2 \cos\theta_W M^3_W \sin^3\theta_W (1-x^2_k)^4}\\
&\times \left[(x^2_k-1)(6 x^2_k \text{e } \sin\theta_W v_{EW} (2x^4_k+5x^2_k-1)+M_W(-12 -2 (\sin^2\theta_W-12)x^2_k\right.\nonumber\\
&\left.+(7\sin^2\theta_W-12)x^4_k-11x^6_k \sin^2\theta_W +\cos^2\theta_W(11x^6_k-47x^4_k+178x^2_k-40)))\right.\\
&\left. -12(6 x^2_k \text{e } \sin\theta_W v_{EW} x^6_k+M_W(4-14x^2_k+8(2+3 \cos^2\theta_W)x^4_k -2x^6_k(3+4 \cos^2\theta_W)\right.\\
&\left. +(\cos^2\theta_W-\sin^2\theta_W)x^8_k))\ln(x_k)\right],\\
\\
D^Z_L &= -\sum_{k=1}^5\frac{i \text{e}^2 M_e |\theta_e\theta^\ast_{\alpha}|}{768\pi^2 \cos\theta_W M^2_W \sin^3\theta_W (1-x^2_k)^4}\\
&\times \left[(x^2_k-1)(8+ x^2_k(\sin^2\theta_W-24) +x^4_k(16-5 \sin^2\theta_W) \right.\\
&\left. -2 x^6_k \sin^2\theta_W \cos^2\theta_W(14-53x^2_k+67x^4_k+2x^6_k)) -4x^2_k(2-2x^2_k(4+3 \cos^2\theta_W)\right.\\
&\left. +3 x^4_k(2+7\cos^2\theta_W-\sin^2\theta_W)\ln(x_k))\right],\\
\\
\end{align*}
\begin{align*}
D^Z_R &= -\sum_{k=1}^5\frac{i \text{e}^2 M_{\alpha} |\theta_e\theta^\ast_{\alpha}|}{768\pi^2 \cos\theta_W M^2_W \sin^3\theta_W (1-x^2_k)^4}\\
&\times \left[(x^2_k-1)(8+ x^2_k(\sin^2\theta_W-24) +x^4_k(16-5 \sin^2\theta_W)\right.\\
&\left. -2 x^6_k \sin^2\theta_W \cos^2\theta_W(14-53x^2_k+67x^4_k+2x^6_k)) \right.\\
&\left. -4x^2_k(2-2x^2_k(4+3 \cos^2\theta_W)+3 x^4_k(2+7\cos^2\theta_W-\sin^2\theta_W)\ln(x_k))\right],
\end{align*}
\begin{align*}
B^\gamma_R = A^Z_R= B^Z_R = A^\gamma_{L,R} = 0\:.
\end{align*}
In the above equations, we have defined $x_k := \frac{M_{\tilde{n}_k}}{M_W}$. $e$ is the electric charge, $M_\alpha$ is the mass of the charged lepton $\ell_\alpha$ (with $\alpha = \mu,\tau$) and $\theta_W$ denotes the weak mixing angle.
We note that the lepton self energy diagrams with virtual photon exchange do not contribute to the amplitude since they cancel against terms from the $W$ boson and Goldstone boson diagrams. The monopole term that is proportional to $q^\mu q^\nu$, cf.\ Eq.~(\ref{eq:2}), vanishes, as it should, since otherwise it would violate the conservation of the quark current.
For the case of $Z$ boson exchange the dipole form factors, $D^Z_{L,R}$, flip the chirality of the outgoing fermions. They are suppressed since they are proportional to the lepton mass \cite{Bernabeu:1995sq,Budny:1976hq,Gutierrez-Rodriguez:2015rfa}.
We have neglected the contributions from the effective operators with the SM Higgs boson, because they are suppressed by the small couplings of the Higgs to the beam quarks.
\begin{figure}[t!]
\centering
\includegraphics[scale=0.48]{feynman_egamma.png}\\
\includegraphics[scale=0.48]{feynman_ez.png}
\caption{Feynman diagrams generating the effective vertices for $e^- \to \ell_\alpha \gamma$ and $e^-\to \ell_\alpha Z$ in extensions of the SM by heavy neutral leptons.
$\tilde{n}_k$ runs over all (light and heavy) neutral lepton mass eigenstates.
}
\label{fig:3}
\end{figure}
\subsection{Method for obtaining the cLFV sensitivity at the LHeC}
In the following, we assume that the heavy neutral leptons have sufficiently large masses that they cannot be directly produced at the LHeC. With this condition satisfied, we will apply the effective operator treatment.
The amplitudes for the $e-\mu$/$e-\tau$ conversion processes $pe^-\to \mu^- j$/$pe^-\to \tau^- j$
are given by
\begin{align}
{\mathcal{M}_{LHeC}} = {\mathcal{M_{\gamma^\ast}}} + {\mathcal{M_{Z}}},
\end{align}
with $ {\mathcal{M_{\gamma^\ast}}}$ and $ {\mathcal{M_{Z}}}$ denoting the amplitudes for virtual photon and $Z$ boson exchange
\begin{align*}
&{\mathcal{M}_{\gamma^\ast}} = \bar{u}_{l_\alpha}\left[ B^\gamma_{L,R} P_{L,R} \,q^2\,\gamma^\nu - i\sigma^{\mu\nu} q_\mu \, D^\gamma_{L,R} P_{L,R} \right] u_{e} \left(\frac{-ie\ g_{\mu\nu}}{q^2} \right) \,\bar{u}_{q}(- ie Q_q\gamma^\mu) v_{q} \:, \\
&{\mathcal{M}_{Z}} =
\bar{u}_{l_\alpha}\left[A^Z_{L,R}P_{L,R}\gamma^\nu +B^Z_{L,R}P_{L,R}q^2\gamma^\nu- i\sigma^{\mu\nu} q_\mu D^Z_{L,R}P_{L,R}\right] u_{e} \left(\frac{-ig_{\mu\nu}}{q^2-M^2_Z} \right)\bar{u}_{q} (\gamma^\mu \,g_{L,R} P_{L,R}) v_{q}.
\end{align*}
$Q_q$ is the quark charge and $g_{L,R}$ are the left and right couplings of the $Z$ boson with quarks (where again expressions like $g_{L,R} P_{L,R}$ stand for the sums, i.e.\ $g_{L} P_{L}+g_{R} P_{R}$). $B^\gamma_{L,R},D^\gamma_{L,R}, A^Z_{L,R}, B^Z_{L,R}$ and $D^Z_{L,R}$ are the effective form factors of the one-loop penguin diagrams in Fig.~\ref{fig:3}, with results given in the previous subsection.
In the following we will carry out the cLFV sensitivity analysis for the case of muons in the final state, $pe^-\to \mu^- j$, and taus in the final state, $pe^-\to \tau^- j$, separately. These two searches at the LHeC can test the combinations $|\theta_e\theta^\ast_\mu|$ and $|\theta_e\theta^\ast_\tau|$ of the flavor-dependent active-sterile mixing angles, respectively, for a given heavy neutrino mass $M_N$.
In the analysis with muons in the final state, we initially fix $|\theta_e\theta^\ast_\mu|= 10^{-3}$ with $\theta_e=\theta_\mu$ and $\theta_\tau=0$, and for the analysis with taus in the final state we fix $|\theta_e\theta^\ast_\tau|= 10^{-3}$ with $\theta_e=\theta_\tau$ and $\theta_\mu=0$. We then use MadGraph \cite{Alwall:2014hca} to calculate the total cross section and generate the events, where the form factors and their Lorentz structures have been carefully implemented as described in \cite{Degrande:2011ua}. The parton shower and hadronization are done by Pythia \cite{Sjostrand:2006za}. For fast detector simulation we use Delphes \cite{deFavereau:2013fsa}. For event reconstruction and analysis we use MadAnalysis \cite{Conte:2012fm,Conte:2014zja}.
\subsection{Event reconstruction and analysis}
For signal reconstruction (at the reconstructed level after detector simulation), we require at least one muon with $P_T \ge 25$ GeV and jets with $P_T\ge 5$ GeV. For tau lepton reconstruction, we use an identification efficiency of $75\%$ for tau leptons with $P_T \ge 25$ GeV and a misidentification rate of about $1\%$ \cite{Bagliesi:2007qx,Bagliesi:2006ck}. For the process with final state taus, we use Pythia for the tau decays and then the Delphes analysis module to reconstruct the hadronic tau jet, with an identification efficiency of $75\%$ for the signal and $60\%$ for the background. The most relevant backgrounds and their total cross sections are shown in Table \ref{tab:1} for final state taus (left) and final state muons (right).
\begin{table}[h!]
\resizebox{7.4cm}{!}{
\parbox{.68\linewidth}{
\begin{tabular}{|c|c|c|}
\hline
$\#$&Backgrounds $\tau$ final state & $\sigma_{(LHeC)}$ [pb] \\ \hline\hline
bkg1&$p e^-\to Z/\gamma^\ast (\to \tau^-\tau^+)\ \nu_l\ j $ & 0.0316 \\ \hline
bkg2&$p e^-\to W^\pm(\to \tau^\pm\ {\nu}_\tau) \ e^-\ j $ &0.2657 \\ \hline
bkg3&$p e^-\to Z Z(\to \tau^-\tau^+) \ \nu_l\ j $ & 1.1$\times 10^{-5}$ \\ \hline
bkg4&$p e^-\to Z(\to \tau^-\tau^+) W^\pm(\to \tau^\pm\ {\nu}_\tau) \ \nu_l\ j $& 2.64$\times 10^{-5}$ \\ \hline
\end{tabular}
}}
\hfill
\resizebox{7.4cm}{!}{
\parbox{.68\linewidth}{
\begin{tabular}{|c|c|c|}
\hline
$\#$ &Backgrounds $\mu$ final state& $\sigma_{(LHeC)}$ [pb] \\ \hline\hline
bkg1&$p e^-\to Z/\gamma^\ast (\to \mu^-\mu^+)\ \nu_l\ j $ & 0.0316 \\ \hline
bkg2&$p e^-\to W^\pm(\to \mu^\pm\ {\nu}_\mu) \ e^-\ j $ &0.2657 \\ \hline
bkg3&$p e^-\to Z/\gamma^\ast (\to \tau^-\tau^+\to\text{leptons})\ \nu_l\ j $ &9.1$\times 10^{-4}$ \\ \hline
bkg4&$p e^-\to W^\pm(\to \tau^\pm\ {\nu}_\tau\to\text{leptons}) \ e^-\ j $ & 0.0451 \\ \hline
bkg5&$p e^-\to Z Z(\to \mu^-\mu^+) \ \nu_l\ j $ & 1.1$\times 10^{-5}$ \\ \hline
bkg6&$p e^-\to Z(\to \mu^-\mu^+) W^\pm(\to \mu^\pm\ {\nu}_\mu) \ \nu_l\ j $& 2.64$\times 10^{-5}$ \\\hline
\end{tabular}
}}
\caption{Dominant background processes considered in our analysis and their total cross sections for final state taus (left) and final state muons (right). The cross sections are obtained from MadGraph, while the subsequent tau decays are handled by Pythia. The samples have been produced with the following parton level cuts: $P_T(j)\ge 5$ GeV, $P_T(l)\ge 2$ GeV and $|\eta(l/j)|\le 4.5$. }
\label{tab:1}
\end{table}
It is worth mentioning that other backgrounds like $pe^-\to h\nu j$ with the SM Higgs decaying to a lepton pair are suppressed by the small Yukawa couplings, while the single top production process $pe^-\to\nu t $ is suppressed by the small CKM matrix element involved.
In order to enhance the signal-to-background ratio, we reconstruct four variables that can distinguish between the signal and all relevant backgrounds.
\begin{figure}[h!]
\includegraphics[scale=0.35]{fig1.pdf}\includegraphics[scale=0.35]{fig2.pdf}\\
\includegraphics[scale=0.35]{fig3.pdf}\includegraphics[scale=0.35]{fig4.pdf}
\caption{Distributions of kinematic variables (before any cuts are applied) for the signal events with $M_N=1$ TeV, for the process $pe^-\to\mu^- j$ with muons in the final state, with all relevant background events from Table \ref{tab:1} (right) superimposed and normalized to one. Upper left: angular distribution in radians for hard muons in the final state. Upper right: missing transverse energy. Lower left: transverse momentum of anti-muons. Lower right: transverse momentum of final state electrons.}
\label{fig:5}
\end{figure}
In Fig.~\ref{fig:5}, we show the kinematic distributions of the signal with final state muons versus all backgrounds superimposed. The most important variable is the angular distribution of the final state hard leptons ($\mu/\tau$). They are mainly detected in the backward region of the detector while all the background processes produce hard leptons ($\mu/\tau$) in the forward region of the detector. For the case of signals with hard muons in the final state, the signal events have very low missing energy, while for taus in the final state there is a larger source of missing energy due to the hadronic tau reconstruction. Additionally, the transverse momenta of electrons or $\mu^+/\tau^+$ in the signal events are very small since the only source for them is the decay of radiated photons. In order to enhance the signal to background ratio, we optimize the cuts on these reconstructed kinematic variables as shown in Table \ref{tab:2} (left) for tau final states and (right) for muon final states for the benchmark point with $M_N=1$ TeV.
\begin{table}[h!]
\resizebox{7.4cm}{!}{
\parbox{.68\linewidth}{
\begin{tabular}{|c|c|c|}
\hline
Cut &Background events & Signal events \\ \hline\hline
Normalized events (no cut)&297528&8147 \\ \hline
$P_T(\tau^+)\le 10$ GeV& 137986 &8117 \\ \hline
$\slashed{E}_T\le 100$ GeV& 132844 & 8110 \\ \hline
$P_T(e)\le 10$GeV& 14036& 8110 \\ \hline
$\theta(\tau^-) \ge 1.5$ rad&3561 &5302 \\ \hline
\end{tabular}
}}
\hfill
\resizebox{7.4cm}{!}{
\parbox{.68\linewidth}{
\begin{tabular}{|c|c|c|}
\hline
Cut &Background events & Signal events \\ \hline\hline
Normalized events (no cut)&343600&11639 \\ \hline
$P_T(\mu^+)\le 10$ GeV&180114 &11596.75 \\ \hline
$\slashed{E}_T\le 50$ GeV& 126183 & 11517.4 \\ \hline
$P_T(e)\le 10$ GeV& 12705& 11517.3 \\ \hline
$\theta(\mu^-) \ge 1.5$ rad&4822.8 & 8925.9 \\ \hline
\end{tabular}}}
\caption{Cut flow, i.e.\ the number of signal events and of all backgrounds summed, for the processes $pe^-\to\tau^- j$ (left table) and $pe^-\to\mu^- j$ (right table) at the LHeC with an integrated luminosity of $3\ \text{ab}^{-1}$. For the signal events with final state taus we fix $\theta_e=\theta_\tau$, $\theta_\mu=0$ and $|\theta_e\theta^\ast_\tau| = 10^{-3}$, which corresponds to a total cross section of $0.01173$ pb (before the tau decays). For the signal events with muons in the final state we fix $\theta_e=\theta_\mu$, $\theta_\tau=0$ and $|\theta_e\theta^\ast_\mu| = 10^{-3}$, which corresponds to a total cross section of $0.01164$ pb. The heavy neutrino mass parameter $M_N$ has been set to $1$ TeV. The numbers of signal and background events without cuts correspond to the above-given total cross sections and integrated luminosity.}
\label{tab:2}
\end{table}
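To illustrate the structure of such a cut flow, the following schematic Python sketch applies the muon-channel cuts of Table~\ref{tab:2} to purely hypothetical reconstructed-level arrays; the toy distributions are assumptions made only for illustration, and the actual analysis is performed with MadAnalysis5 on the Delphes output.
\begin{verbatim}
import numpy as np

# Schematic cut flow: the optimized muon-channel cuts applied to toy arrays.
rng = np.random.default_rng(0)
n = 100_000
ev = {
    "pt_mu_plus": rng.exponential(5.0, n),     # GeV, soft anti-muons
    "met":        rng.exponential(20.0, n),    # GeV, missing transverse energy
    "pt_e":       rng.exponential(4.0, n),     # GeV, final-state electrons
    "theta_mu":   rng.uniform(0.0, np.pi, n),  # rad, hard-muon polar angle
}
cuts = [
    ("pt(mu+) <= 10 GeV",      ev["pt_mu_plus"] <= 10.0),
    ("MET <= 50 GeV",          ev["met"] <= 50.0),
    ("pt(e) <= 10 GeV",        ev["pt_e"] <= 10.0),
    ("theta(mu-) >= 1.5 rad",  ev["theta_mu"] >= 1.5),
]
mask = np.ones(n, dtype=bool)
for name, cut in cuts:
    mask &= cut
    print(f"{name:24s} -> {mask.sum():7d} events surviving")
\end{verbatim}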
\subsection{Results: sensitivities to the active-sterile mixing angles at the LHeC}
Given the number of signal events and the number of background events after the optimized cuts, the LHeC sensitivity at $90\%$ confidence level (CL) is obtained by rejecting the signal-plus-background hypothesis against the background-only hypothesis, using the formula \cite{Antusch:2018bgr,Abe:2018bpo}
\begin{align}
\sigma_{sys} = \left[2\left((N_s+N_b) \ln\frac{(N_s+N_b)(N_b+\sigma_b^2)}{N^2_b+(N_s+N_b)\sigma^2_b} - \frac{N^2_b}{\sigma^2_b}\ln (1+\frac{\sigma^2_b N_s}{N_b(N_b+\sigma^2_b)} )\right)\right]^{1/2},
\label{eq:sigma}
\end{align}
with $N_s$ and $N_b$ being the number of signal and background events, and with $\sigma_b$ being the systematic uncertainty, taken to be $2\%$ \cite{AbelleiraFernandez:2012cc} for background events only.
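A direct implementation of Eq.~(\ref{eq:sigma}) is straightforward; in the following Python sketch we take $\sigma_b$ to be $2\%$ of the background yield, as described above, and evaluate the significance for the muon-channel yields after all cuts in Table~\ref{tab:2}:
\begin{verbatim}
import numpy as np

def significance(N_s, N_b, rel_sys=0.02):
    """Significance of Eq. (eq:sigma) with sigma_b = rel_sys * N_b."""
    sb2 = (rel_sys * N_b) ** 2
    t1 = (N_s + N_b) * np.log(
        (N_s + N_b) * (N_b + sb2) / (N_b**2 + (N_s + N_b) * sb2))
    t2 = (N_b**2 / sb2) * np.log(
        1.0 + sb2 * N_s / (N_b * (N_b + sb2)))
    return np.sqrt(2.0 * (t1 - t2))

# Muon-channel yields after all cuts (Table 2, right).  The benchmark point
# |theta_e theta_mu*| = 1e-3 lies far above the eventual sensitivity limit,
# so the corresponding significance is very large (~54).
print(significance(N_s=8925.9, N_b=4822.8))
\end{verbatim}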
For obtaining the current limits from non-collider experiments we use the following experimental constraints at $90\%$ CL:
\begin{align*}
&Br(\mu\to e\gamma)\le 4.2\times 10^{-13}\ \cite{TheMEG:2016wtm} \:, \\
&Br(\tau\to e\gamma)\le 3.3\times 10^{-8}\ \cite{Aubert:2009ag} \:, \\
&Br(\mu\to e^-e^+e^-)\le 1.0\times 10^{-12}\ \cite{Bellgardt:1987du,Baranov:1990uh} \:, \\
&Br(\tau\to e^-e^+e^-)\le 2.7\times 10^{-8}\ \cite{Hayasaka:2010np} \:, \\
&Cr(\mu-e,\:^{197}_{79}\text{Au})\le 7\times 10^{-13}\ \text{\cite{Bertl:2006up}} \:.
\end{align*}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.28]{limit_mu_new.pdf}\\
\includegraphics[scale=0.28]{limit_tau_new.pdf}
\caption{ Estimated sensitivities to the active-sterile neutrino mixing angle combinations $|\theta_e\theta^\ast_\mu|$ (upper panel) and $|\theta_e\theta^\ast_\tau|$ (lower panel). The black lines show our results for the LHeC sensitivity from the processes $pe^-\to\mu^- j$ and $pe^-\to\tau^- j$, respectively, with $1.3$ TeV center-of-mass energy and integrated luminosity of $3\ \text{ab}^{-1}$. The green line in the upper panel corresponds to the current limit from $\mu-e$ conversion; the red and blue lines in both panels show the current limits from $\ell_\alpha \to 3e$ and $\ell_\alpha \rightarrow e \gamma$ (taken from \cite{Alonso:2012ji}), respectively. }
\label{fig:limit}
\end{figure}
From the bounds on the branching (or conversion) ratios for $\ell_\alpha\to e\gamma$, $\ell_\alpha\to 3e$ and $\mu-e$ conversion in nuclei, we calculate the limits on the active-sterile neutrino mixing angles using the formulae given in \cite{Alonso:2012ji}. It is worth mentioning that the processes $\ell_\alpha\to 3e$ and $\mu-e $ conversion in nuclei have an energy scale of $q^2= M^2_\alpha$ (with $ \alpha =\mu,\tau$), which implies that the $Z$ boson contribution is suppressed by the squared mass difference in the propagator due to the small energy transfer \cite{Lee:1977tib}. On the other hand, at the LHeC the energy scale is $\sim 1.3$ TeV and thus the $Z$ boson can have a much larger contribution. The largest contribution indeed comes from the effective operator with form factor $A^Z_L$.
In Fig.~\ref{fig:limit} we present our results for the LHeC sensitivities to the active-sterile neutrino mixing angles and compare them with the current limits from non-collider experiments. The result with muons in the final state (where the process is sensitive to $|\theta_e\theta^\ast_\mu|$) is shown in the upper plot, while the one with taus in the final state (sensitive to $|\theta_e\theta^\ast_\tau|$) is shown in the lower plot. The results show that with sensitivities down to $|\theta_e\theta^\ast_\mu| \le 2\times 10^{-5}$ and $|\theta_e\theta^\ast_\tau|\le 3\times 10^{-5}$ (for the example of $M_N = 1$ TeV), the LHeC can provide better sensitivity than the current limit in both cases.
Let us now also compare with the planned future experiments, using the sensitivity goals stated in section \ref{intro}. Regarding the mixing parameter combination $|\theta_e\theta_\mu^\ast|$, the future run of MEG II will improve the sensitivity to about $9\times 10^{-6}$, while the Mu3e experiment will be sensitive to this mixing parameter combination down to $4.8\times 10^{-6}$. The future PRISM experiment even aims at a sensitivity of $2.5\times 10^{-6}$. These sensitivities would be better than the ones reachable at the LHeC.
Regarding the mixing parameter combination $|\theta_e\theta_\tau^\ast|$ responsible for the conversion of an electron into a tau, the sensitivity at the LHeC can be better than the current limits by more than an order of magnitude. Furthermore, the LHeC can even provide a better sensitivity than the future runs of the B-factories BABAR and BELLE II, which can probe the mixing parameter combination $|\theta_e\theta_\tau^\ast|$ down to $ 4\times 10^{-4}$ and $ 9\times 10^{-4}$, respectively.
Very sensitive tests of the active-sterile neutrino mixing angles would also be possible at the FCC-ee \cite{Abada:2019lih}, via the accurate measurement of electroweak precision observables (EWPOs). However, the EWPOs are not sensitive to the same parameter combinations, but rather to $|\theta_e|^2+|\theta_\mu|^2$ and $|\theta_\tau|^2$. In the large mass limit for the heavy neutral leptons, sensitivities down to about $|\theta_e|^2+|\theta_\mu|^2 \sim 10^{-5}$ and $|\theta_\tau|^2 \sim 6\times 10^{-4}$ could be achieved \cite{Antusch:2016ejd,Antusch:2015mia,Antusch:2014woa}. We remark that e.g.\ for both $|\theta_e|$ and $|\theta_\tau|$ slightly below the (maximal) FCC-ee sensitivities from EWPOs, this would imply $|\theta_e\theta_\tau^\ast| \lesssim 8\times 10^{-5}$, potentially still within reach of the LHeC.
\section{Model-independent results}\label{sec:model-independent}
In this section, we calculate the model independent LHeC sensitivities for the form factors of the FCNC operators inducing cLFV given in section 2.1. The results can be used to estimate the LHeC discovery potential for generic heavy new physics that generates these effective operators. To calculate the LHeC sensitivities, we again analyse the processes $pe^-\to\mu^-j$ and $pe^-\to\tau^-j$, mediated by a cLFV effective coupling to photon, $Z$ boson, and SM Higgs. In the following, we use the form factors with superscripts to identify the considered boson, i.e.\ $\gamma$, $Z$ or $H$. To obtain the LHeC sensitivities, we follow the same procedure used in the previous section and perform an analysis at the reconstructed level, switching on only one form factor at a time.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.28]{crossection.pdf}\\
\caption{Total cross section for the process $pe^-\to\mu^- j$ as a function of the size of the individual form factors given in Eqs.~(\ref{eq:1}), (\ref{eq:2}) and (\ref{eq:3}), for the LHeC with $7$ TeV protons and $60$ GeV electrons with $80\%$ polarization. For the form factors $B^{Z/\gamma}_{L,R}$ and $D^{Z/\gamma}_{L,R}$, the $x$-axis shows their size in units of GeV$^{-2}$ and GeV$^{-1}$, respectively.
}
\label{fig:crosssection}
\end{figure}
In Fig.~\ref{fig:crosssection} we show the total cross section of $pe^-\to\mu^-j$ in picobarn as a function of the size of the individual form factors. One can see that the largest cross sections come from the monopole form factors $B^{Z/\gamma}_{L,R}$, which is due to the momentum transfer squared attached to the effective vertex. The dipole form factors, $D^{Z/\gamma}_{L,R}$, also have a comparatively large cross section due to the $q^\nu$ attached to the effective vertex. The form factors corresponding to the SM Higgs contribution have the lowest cross sections since the coupling of the SM Higgs to the beam quarks is suppressed by the small Yukawa couplings. We remark that all considered kinematic distributions of the final state particles, except the angular distribution of the final state lepton, do not change when different form factors are considered.
On the other hand, due to the dependence of the monopole and dipole form factors on the momentum of the mediator particle, the angular distributions of the final state leptons are shifted towards the forward direction.
In Fig.~\ref{fig:theta_FF}, we show the angular distributions of the final state muons for the process $pe^-\to\mu^-j$, with the total event number normalized to one. The shift of the angular distributions towards the forward direction (in addition to the shift discussed earlier for processes with massive mediators compared to the photon-mediated process) indeed weakens the signal vs.\ background separation, but other characteristics such as $P_T(l^+)$, $\slashed{E}_T$ and $P_T(e)$ can still be used to improve the sensitivity.
\begin{figure}[h!]
\centering
~~~\includegraphics[scale=0.6]{theta_FF.pdf}
\caption{Angular distribution of the muons for the process $pe^-\to\mu^- j$ at the reconstructed level, considering the monopole and dipole form factors for the effective operators that mediate the process via photon and Z boson exchange. The total event numbers are normalized to one. The forward direction is the proton beam direction and the backward direction is the electron beam direction.}
\label{fig:theta_FF}
\end{figure}
\begin{table}[!htb]
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|c|c|c|c|c|}
\hline
Form Factors&Cut &Background events & Signal events& LHeC sensitivity $90\%$ CL \\ \hline\hline
$N^H_L/N^H_R$&Normalized events (no cut)&343600&274/33& 4.49$\times 10^{-3}/3.55\times10^{-2}$\\ \hline
$N^H_L/N^H_R$&$P_T(\mu^+)\le 10$ GeV&180114 &274/33& 3.08$\times 10^{-3}/2.57\times 10^{-2}$ \\ \hline
$N^H_L/N^H_R$&$\slashed{E}_T\le 50$ GeV& 126183 & 269 /32& 2.65$\times 10^{-3}/2.22\times 10^{-2}$\\ \hline
$N^H_L/N^H_R$&$P_T(e)\le 10$GeV& 12705& 269/32 & 8.41$\times 10^{-4}/7.05\times10^{-3}$ \\ \hline
$N^H_L/N^H_R$&$\theta(\mu^-) \ge 0.5$ rad&9322 &232/28 & 8.36$\times 10^{-4}/6.90\times10^{-3}$ \\ \hline
\hline
Form Factors&Cut &Background events & Signal events& LHeC sensitivity $90\%$ CL \\ \hline\hline
$A^\gamma_L/A^\gamma_R$&Normalized events (no cut)&343600&$69000/7875$ &$1.75\times10^{-5}/1.50\times10^{-4}$\\ \hline
$A^\gamma_L/A^\gamma_R$&$P_T(\mu^+)\le 10$ GeV& 180114 &$68700/7840$ &$1.31\times10^{-5}/1.09\times10^{-4}$\\ \hline
$A^\gamma_L/A^\gamma_R$&$\slashed{E}_T\le 50$ GeV& 126183 &$68673/7837$ &$1.11\times10^{-5}/9.12\times10^{-5}$ \\ \hline
$A^\gamma_L/A^\gamma_R$&$P_T(e)\le 10$GeV& 12705&$68673/7837$ &$4.93\times10^{-6}/3.12\times10^{-5}$\\ \hline
$A^\gamma_L/A^\gamma_R$&$\theta(\mu^-) \ge 1.5$ rad&4823 &$67586/7713$& $3.94\times10^{-6}/2.16\times10^{-5}$ \\ \hline
\hline
Form Factors&Cut &Background events & Signal events& LHeC sensitivity $90\%$ CL \\ \hline\hline
$D^\gamma_L/D^\gamma_R\ \text{[GeV}^{-1}]$&Normalized events (no cut)&343600&2.41$\times10^7/2.66\times10^6$ &$1.58\times10^{-7}/1.42\times10^{-6}$\\ \hline
$D^\gamma_L/D^\gamma_R\ \text{[GeV}^{-1}]$&$P_T(\mu^+)\le 10$ GeV& 180114 &2.39$\times10^7/2.65\times10^6$ &$1.46\times10^{-7}/1.31\times10^{-6}$\\ \hline
$D^\gamma_L/D^\gamma_R\ \text{[GeV}^{-1}]$&$\slashed{E}_T\le 50$ GeV& 126183 &2.37$\times10^7/2.63\times10^6$ &$1.41\times10^{-7}/1.27\times10^{-6}$ \\ \hline
$D^\gamma_L/D^\gamma_R\ \text{[GeV}^{-1}]$&$P_T(e)\le 10$GeV& 12705& 2.37$\times10^7/2.63\times10^6$ &$1.12\times10^{-7}/1.01\times10^{-6}$\\ \hline
$D^\gamma_L/D^\gamma_R\ \text{[GeV}^{-1}]$&$\theta(\mu^-) \ge 0.3$ rad&10935 &2.37$\times10^7/2.63\times10^6$& $1.05\times10^{-7}/9.44\times10^{-7}$ \\ \hline
\hline
Form Factors&Cut &Background events & Signal events & LHeC sensitivity $90\%$ CL \\ \hline\hline
$B^\gamma_L/B^\gamma_R\ \text{[GeV}^{-2}]$&Normalized events (no cut)&343600&$9.52\times10^{10}/9.99\times10^{9}$&$1.35\times10^{-9}/1.28\times10^{-8}$ \\ \hline
$B^\gamma_L/B^\gamma_R\ \text{[GeV}^{-2}]$&$P_T(\mu^+)\le 10$ GeV& 180114 &$9.52\times10^{10}/9.99\times10^{9}$ &$1.31\times10^{-9}/1.25\times10^{-8}$ \\ \hline
$B^\gamma_L/B^\gamma_R\ \text{[GeV}^{-2}]$&$\slashed{E}_T\le 50$ GeV& 126183 & $9.26\times10^{10}/9.74\times10^{9}$&$1.31\times10^{-9}/1.25\times10^{-8}$ \\ \hline
$B^\gamma_L/B^\gamma_R\ \text{[GeV}^{-2}]$&$P_T(e)\le 10$GeV& 12705& $9.26\times10^{10}/9.74\times10^{9}$&$1.21\times10^{-9}/1.15\times10^{-8}$ \\ \hline
$B^\gamma_L/B^\gamma_R\ \text{[GeV}^{-2}]$&$\theta(\mu^-) \ge 0.1$ rad&11898 &$9.25\times10^{10}/9.73\times10^{9}$& $1.20\times10^{-9}/1.14\times10^{-8}$ \\ \hline
\hline
Form Factors&Cut &Background events & Signal events & LHeC sensitivity $90\%$ CL \\ \hline\hline
$A^Z_L/A^Z_R$&Normalized events (no cut)&343600&17458/2182 &$6.77\times10^{-5}/5.37\times10^{-4}$\\ \hline
$A^Z_L/A^Z_R$&$P_T(\mu^+)\le 10$ GeV& 180114 &17394/2174 &$5.00\times10^{-5}/3.91\times10^{-4}$\\ \hline
$A^Z_L/A^Z_R$&$\slashed{E}_T\le 50$ GeV& 126183& 17276/2159 & $4.20\times10^{-5}/3.30\times10^{-4}$\\ \hline
$A^Z_L/A^Z_R$&$P_T(e)\le 10$GeV& 12705& 17276/2159&$1.54\times10^{-5}/1.07\times10^{-4}$ \\ \hline
$A^Z_L/A^Z_R$&$\theta(\mu^-) \ge 1.5$ rad&4823&13389/1674 & $1.36\times10^{-5}/8.74\times10^{-5}$ \\ \hline
\hline
Form Factors&Cut &Background events & Signal events & LHeC sensitivity $90\%$ CL \\ \hline\hline
$D^Z_L/D^Z_R\ \text{[GeV}^{-1}]$&Normalized events (no cut)&343600&$3.69\times10^6/4.22\times10^5$ &$5.66\times10^{-7}/4.95\times10^{-6}$\\ \hline
$D^Z_L/D^Z_R\ \text{[GeV}^{-1}]$&$P_T(\mu^+)\le 10$ GeV& 180114 &$3.68\times10^6/4.21\times10^5$ &$4.96\times10^{-7}/4.33\times10^{-6}$ \\ \hline
$D^Z_L/D^Z_R\ \text{[GeV}^{-1}]$&$\slashed{E}_T\le 50$ GeV& 126183 &$3.55\times10^6/4.06\times10^5$&$4.75\times10^{-7}/4.15\times10^{-6}$ \\ \hline
$D^Z_L/D^Z_R\ \text{[GeV}^{-1}]$&$P_T(e)\le 10$GeV&12705&$3.55\times10^6/4.06\times10^5$&$3.48\times10^{-7}/3.04\times10^{-6}$ \\ \hline
$D^Z_L/D^Z_R\ \text{[GeV}^{-1}]$&$\theta(\mu^-) \ge 0.1$ rad&11898 &$3.55\times10^6/4.06\times10^5$ &$3.45\times10^{-7}/3.01\times10^{-6}$ \\ \hline
\hline
Form Factors&Cut &Background events & Signal events & LHeC sensitivity $90\%$ CL \\ \hline\hline
$B^Z_L/B^Z_R\ \text{[GeV}^{-2}]$&Normalized events (no cut)&343600&$6.99\times10^{10}/5.42\times10^9$ &$1.60\times10^{-9}/2.07\times10^{-8}$\\ \hline
$B^Z_L/B^Z_R\ \text{[GeV}^{-2}]$&$P_T(\mu^+)\le 10$ GeV& 180114 &$6.96\times10^{10}/5.39\times10^9$ &$1.56\times10^{-9}/2.01\times10^{-8}$\\ \hline
$B^Z_L/B^Z_R\ \text{[GeV}^{-2}]$&$\slashed{E}_T\le 50$ GeV& 126183 & $6.95\times10^{10}/5.39\times10^9$ & $1.53\times10^{-9}/1.97\times10^{-8}$\\ \hline
$B^Z_L/B^Z_R\ \text{[GeV}^{-2}]$&$P_T(e)\le 10$GeV&12705& $6.95\times10^{10}/5.39\times10^9$ &$1.41\times10^{-9}/1.82\times10^{-8}$\\ \hline
\end{tabular}}
\caption{LHeC sensitivities and cut efficiencies for the individual form factors (cf. section 2.1) of the FCNC operators inducing cLFV $e-\mu$ conversion, from the process $pe^-\to\mu^- j$ and with an integrated luminosity of $3 \text{ ab}^{-1}$.}
\label{tab:FF_mu}
\end{table}
\begin{table}[!htb]
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|c|c|c|c|c|}
\hline
Form Factors&Cut &Background events & Signal events& LHeC sensitivity $90\%$ CL \\ \hline\hline
$N^H_L/N^H_R$&Normalized events (no cut)&297528& $148/ 17$&$8.62\times10^{-3}/3.17\times10^{-2}$ \\ \hline
$N^H_L/N^H_R$&$P_T(\tau^+)\le 10$ GeV& 137986 & $148 / 17$&$5.12\times10^{-3}/2.12\times10^{-2}$ \\ \hline
$N^H_L/N^H_R$&$\slashed{E}_T\le 100$ GeV& 132844 & $147 /16$ &$5.09\times10^{-3}/2.01\times10^{-2}$ \\ \hline
$N^H_L/N^H_R$&$P_T(e)\le 10$ GeV& 14036& $147 / 16$ & $1.61\times10^{-3}/1.48\times10^{-2}$\\ \hline
$N^H_L/N^H_R$&$\theta(\tau^-) \ge 0.5$ rad&8641 & $126 / 14 $ & $1.47\times10^{-3}/1.32\times10^{-2}$ \\ \hline
\hline
Form Factors&Cut &Background events & Signal events& LHeC sensitivity $90\%$ CL \\ \hline\hline
$A^\gamma_L/A^\gamma_R$&Normalized events (no cut)&297528& $37260/4252$&$2.98\times10^{-5}/2.57\times10^{-4}$ \\ \hline
$A^\gamma_L/A^\gamma_R$&$P_T(\tau^+)\le 10$ GeV& 137986 & $37098/4234$&$2.09\times10^{-5}/1.76\times10^{-4}$ \\ \hline
$A^\gamma_L/A^\gamma_R$&$\slashed{E}_T\le 100$ GeV& 132844 & $37096/4234$ &$2.05\times10^{-5}/1.73\times10^{-4}$ \\ \hline
$A^\gamma_L/A^\gamma_R$&$P_T(e)\le 10$ GeV& 14036& $37096/4234$ & $8.30\times10^{-6}/5.86\times10^{-5}$\\ \hline
$A^\gamma_L/A^\gamma_R$&$\theta(\tau^-) \ge 1.5$ rad&3561 & $36504 /4166 $ & $5.75\times10^{-6}/3.33\times10^{-5}$ \\ \hline
Form Factors&Cut &Background events & Signal events& LHeC sensitivity $90\%$ CL \\ \hline\hline
$D^\gamma_L/D^\gamma_R\ \text{[GeV}^{-1}]$&Normalized events (no cut)&297528& $1.30\times10^7/1.44\times10^6$&$2.31\times10^{-7}/2.08\times10^{-6}$ \\ \hline
$D^\gamma_L/D^\gamma_R\ \text{[GeV}^{-1}]$&$P_T(\tau^+)\le 10$ GeV& 137986 & $1.29\times10^7/1.43\times10^6$&$2.07\times10^{-7}/1.86\times10^{-6}$ \\ \hline
$D^\gamma_L/D^\gamma_R\ \text{[GeV}^{-1}]$&$\slashed{E}_T\le 100$ GeV& 132844 & $1.29\times10^7/1.43\times10^6$ &$2.06\times10^{-7}/1.85\times10^{-6}$ \\ \hline
$D^\gamma_L/D^\gamma_R\ \text{[GeV}^{-1}]$&$P_T(e)\le 10$ GeV& 14036& $1.29\times10^7/1.43\times10^6$ & $1.62\times10^{-7}/1.45\times10^{-6}$\\ \hline
$D^\gamma_L/D^\gamma_R\ \text{[GeV}^{-1}]$&$\theta(\tau^-) \ge 0.3$ rad&11993& $1.29\times10^7 /1.43\times10^6 $ & $1.61\times10^{-7}/1.45\times10^{-6}$ \\ \hline
Form Factors&Cut &Background events & Signal events& LHeC sensitivity $90\%$ CL \\ \hline\hline
$B^\gamma_L/B^\gamma_R\ \text{[GeV}^{-2}]$&Normalized events (no cut)&297528& $5.14\times10^{10}/5.41\times10^9$&$1.88\times10^{-9}/1.79\times10^{-8}$ \\ \hline
$B^\gamma_L/B^\gamma_R\ \text{[GeV}^{-2}]$&$P_T(\tau^+)\le 10$ GeV& 137986 & $5.14\times10^{10}/5.41\times10^9$&$1.82\times10^{-9}/1.73\times10^{-8}$ \\ \hline
$B^\gamma_L/B^\gamma_R\ \text{[GeV}^{-2}]$&$\slashed{E}_T\le 100$ GeV& 132844 & $5.10\times10^{10}/5.43\times10^9$ &$1.81\times10^{-9}/1.72\times10^{-8}$ \\ \hline
$B^\gamma_L/B^\gamma_R\ \text{[GeV}^{-2}]$&$P_T(e)\le 10$ GeV& 14036& $5.10\times10^{10}/5.43\times10^9$ & $1.67\times10^{-9}/1.59\times10^{-8}$\\ \hline
$B^\gamma_L/B^\gamma_R\ \text{[GeV}^{-2}]$&$\theta(\tau^-) \ge 0.1$ rad&12993& $5.10\times10^{10}/5.43\times10^9$ & $1.66\times10^{-9}/1.58\times10^{-8}$ \\ \hline
Form Factors&Cut &Background events & Signal events& LHeC sensitivity $90\%$ CL \\ \hline\hline
$A^Z_L/A^Z_R$&Normalized events (no cut)&297528& $12221/1222$&$9.00\times10^{-5}/9.01\times10^{-4}$ \\ \hline
$A^Z_L/A^Z_R$&$P_T(\tau^+)\le 10$ GeV& 137986 & $12176/1218$&$6.19\times10^{-5}/6.18\times10^{-4}$ \\ \hline
$A^Z_L/A^Z_R$&$\slashed{E}_T\le 100$ GeV& 132844 & $12165/1217$ &$6.08\times10^{-5}/6.07\times10^{-4}$ \\ \hline
$A^Z_L/A^Z_R$&$P_T(e)\le 10$ GeV& 14036& $12165/1217$ & $2.19\times10^{-5}/2.18\times10^{-4}$\\ \hline
$A^Z_L/A^Z_R$&$\theta(\tau^-) \ge 1.5$ rad&3561& $7953/795$ & $1.89\times10^{-5}/1.88\times10^{-4}$ \\ \hline
Form Factors&Cut &Background events & Signal events& LHeC sensitivity $90\%$ CL \\ \hline\hline
$D^Z_L/D^Z_R\ \text{[GeV}^{-1}]$&Normalized events (no cut)&297528& $1.99\times10^6/2.28\times10^5$&$8.64\times10^{-7}/5.33\times10^{-6}$ \\ \hline
$D^Z_L/D^Z_R\ \text{[GeV}^{-1}]$&$P_T(\tau^+)\le 10$ GeV& 137986 & $1.98\times10^6/2.27\times10^5$&$7.24\times10^{-7}/3.95\times10^{-6}$ \\ \hline
$D^Z_L/D^Z_R\ \text{[GeV}^{-1}]$&$\slashed{E}_T\le 100$ GeV& 132844 & $1.97\times10^6/2.25\times10^5$ &$7.22\times10^{-7}/3.92\times10^{-6}$ \\ \hline
$D^Z_L/D^Z_R\ \text{[GeV}^{-1}]$&$P_T(e)\le 10$ GeV& 14036& $1.97\times10^6/2.25\times10^5$ & $5.05\times10^{-7}/2.10\times10^{-6}$\\ \hline
$D^Z_L/D^Z_R\ \text{[GeV}^{-1}]$&$\theta(\tau^-) \ge 0.1$ rad&12993& $1.97\times10^6/2.25\times10^5$ & $5.00\times10^{-7}/2.07\times10^{-6}$ \\ \hline
Form Factors&Cut &Background events & Signal events& LHeC sensitivity $90\%$ CL \\ \hline\hline
$B^Z_L/B^Z_R\ \text{[GeV}^{-2}]$&Normalized events (no cut)&297528& $3.78\times10^{10}/2.93\times10^9$&$2.22\times10^{-9}/9.13\times10^{-9}$ \\ \hline
$B^Z_L/B^Z_R\ \text{[GeV}^{-2}]$&$P_T(\tau^+)\le 10$ GeV& 137986 & $3.77\times10^{10}/2.91\times10^9$&$2.15\times10^{-9}/8.75\times10^{-9}$ \\ \hline
$B^Z_L/B^Z_R\ \text{[GeV}^{-2}]$&$\slashed{E}_T\le 100$ GeV& 132844 & $3.77\times10^{10}/2.91\times10^9$ &$2.18\times10^{-9}/8.88\times10^{-9}$ \\ \hline
$B^Z_L/B^Z_R\ \text{[GeV}^{-2}]$&$P_T(e)\le 10$ GeV& 14036& $3.77\times10^{10}/2.91\times10^9$ & $2.00\times10^{-9}/7.94\times10^{-9}$\\ \hline
\end{tabular}}
\caption{LHeC sensitivities and cut efficiencies for the individual form factors (cf. section 2.1) of the FCNC operators inducing cLFV $e-\tau$ conversion, from the process $pe^-\to\tau^- j$ and with an integrated luminosity of $3 \text{ ab}^{-1}$.}
\label{tab:FF_tau}
\end{table}
Our model-independent results are presented in Tables \ref{tab:FF_mu} and \ref{tab:FF_tau}, where we show the LHeC sensitivities to the individual form factors at 90$\%$ CL, based on the processes $pe^-\to\mu^-j$ and $pe^-\to\tau^-j$, respectively. For the analysis, we initially fix the value of the considered form factor to $10^{-3}$, with all other form factors set to zero, and calculate the total initial cross section, which is used to normalize the generated events to an integrated luminosity of $3 \text{ ab}^{-1}$. In order to increase the signal over background yield, the cuts have been optimized for each form factor individually. Given the number of signal and background events after each cut, we have calculated the LHeC sensitivity at 90$\%$ CL for rejecting the signal-plus-background hypothesis against the background-only hypothesis using the formula in Eq.~(\ref{eq:sigma}).
\section{Summary and conclusions}
In this work we have investigated the sensitivity of electron-proton ($ep$) colliders, in particular of the LHeC, for charged lepton flavor violation (cLFV). In an effective theory approach, we have considered a general effective Lagrangian for the conversion of an electron into a muon or a tau via the effective coupling of the charged leptons to a neutral gauge boson or a neutral scalar field.
For the photon, the $Z$ boson and the Higgs particle of the SM, we have presented the sensitivities of the LHeC (with 3 $\text{ab}^{-1}$ integrated luminosity) for the coefficients of the effective operators (cf.\ section \ref{sec:model-independent} and Table \ref{tab:FF_mu} for the results with muons and Table \ref{tab:FF_tau} for the results with taus in the final state), calculated from an analysis at the reconstructed level.
As an example for a model where such flavor changing neutral current (FCNC) operators are generated at loop level, we have considered the extension of the Standard Model by sterile neutrinos in the context of the SPSS benchmark model. Our results for the sensitivities to the active-sterile neutrino mixing angle combinations $|\theta_e\theta^\ast_\mu|$ and $|\theta_e\theta^\ast_\tau|$ are shown in Fig.~\ref{fig:limit}.
Our results show that the LHeC (with 3 $\text{ab}^{-1}$ integrated luminosity) could already probe the LFV conversion of an electron into a muon beyond the current experimental bounds, and could reach a sensitivity more than an order of magnitude beyond the present limits for the LFV conversion of an electron into a tau.
We have argued that the very high sensitivities at the LHeC for some of the form factors are possible because the converted charged lepton is dominantly emitted in the backward direction, enabling an efficient separation of the signal from the background. The LHeC reach we obtained is in fact mainly statistics limited, and higher sensitivities could be achieved with higher integrated luminosity.
In summary, $ep$ colliders such as the proposed LHeC would be excellent facilities for probing cLFV. Especially for the case of cLFV electron-tau conversion, they could reach the best sensitivities among all currently envisioned experiments, opening up a great discovery potential for new physics beyond the SM.
\section*{Acknowledgments}
This work has been supported by the Swiss National Science Foundation under the project number 200020/175502. A.R. acknowledges the hospitality of the Department of Physics, University of Basel, where the visit was supported through the SU-FPDC Grant Program. A.H.\ thanks Oliver Fischer for useful discussions.
\bibliographystyle{h-elsevier}
\section{Infrared graphene local optical response}
At infrared frequencies the optical response of graphene is dominated by the conical band structure ${\cal E} = \pm v_{\rm F} |{\bf p}_\parallel|$ around the two Dirac points of the first Brillouin zone, where $v_{\rm F} \approx 9 \cdot 10^5$ m$/$s is the Fermi velocity and ${\cal E},{\bf p}_\parallel$ are the electron energy and momentum, respectively. In the present investigation we mainly focus on the infrared range of wavelengths $5 \: \mu m < \lambda < 20 \: \mu m$. While in undoped graphene the Fermi energy lies at the Dirac points, injection of charge carriers through electrical gating \cite{Chen2011} or chemical doping \cite{Liu2011} efficiently shifts the Fermi level up to $E_{\rm F} \approx 1$ eV owing to the conical dispersion and the 2D electron confinement. The response of graphene to photons of energy $\hbar \omega$ and in-plane momentum $\hbar {\bf k}_{\parallel}$ is described by the surface conductivity $\sigma({\bf k}_{\parallel},\omega)$, which is generally affected by both intraband and interband electron dynamics. The dependence of $\sigma$ on ${\bf k}_{\parallel}$ physically arises from electron-hole pair excitation and it generally yields unwanted absorption (Landau damping) and nonlocal effects. However, if
\begin{equation} \label{locality}
\frac{{ {k_ \parallel } }}{{k_{\rm F} }} < \frac{{\hbar \omega }}{{E_{\rm F} }} < 2 - \frac{{ {k_ \parallel } }}{{k_{\rm F} }}
\end{equation}
where $k_{\rm F} = E_{\rm F} /\hbar v_{\rm F}$ is the Fermi wave number, the photon momentum is too small to trigger intraband transitions and interband transitions are forbidden by the Pauli exclusion principle \cite{Hwang}. For interactions with photons satisfying Eq.(\ref{locality}), nonlocal effects can be neglected and graphene displays a marked metal-like behavior with a long relaxation time $\tau = \mu E_{\rm F}/ev_{\rm F}^2$, where $\mu$ is the electron mobility, which, in contrast to noble metals, can reach the picosecond time scale at moderate doping and purity (both affecting the electron mobility) \cite{JavierACSPhot}. In such a local regime, the random phase approximation (RPA) provides for the graphene conductivity the integral expression
\begin{equation} \label{sigma}
\sigma(\omega) = \frac{-ie^2}{\pi\hbar^2(\omega+i/\tau)}\int_{-\infty}^{+\infty}d{\cal E} \left\{|{\cal E}| \frac{ \partial f_{\cal E} }{\partial {\cal E}} + \frac{ {\rm sign} \left(\cal E\right) }{1 - 4 {\cal E}^2/[\hbar(\omega+i/\tau)]^2} f_{\cal E} \right\},
\end{equation}
where $f_{\cal E} = \{ {\rm exp}[({\cal E} - E_{\rm F})/k_{\rm B} T] +1 \}^{-1}$ is the Fermi function ($k_{\rm B}$ is the Boltzmann constant and $T$ is the temperature). In our analysis, we focus on photon-graphene interactions satisfying Eq.(\ref{locality}) and accordingly we model the graphene surface conductivity by means of Eq.(\ref{sigma}).
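For concreteness, Eq.(\ref{sigma}) can be evaluated numerically on a simple energy grid; the following Python sketch does so for illustrative parameters ($E_{\rm F} = 0.5$ eV, mobility $\mu = 10^4\ \mathrm{cm^2/(V\,s)}$, $\lambda = 10\ \mu$m, $T = 300$ K) that are assumptions chosen only for this example:
\begin{verbatim}
import numpy as np
from scipy.constants import e, hbar, k

# Numerical sketch of the local RPA conductivity (SI units).
v_F = 9.0e5                           # m/s
E_F = 0.5 * e                         # J
mob = 1.0                             # m^2/(V s)  (= 1e4 cm^2/(V s))
tau = mob * E_F / (e * v_F**2)        # relaxation time (~0.6 ps)
T = 300.0
omega = 2.0 * np.pi * 3.0e8 / 10.0e-6 # lambda = 10 micron
w = omega + 1j / tau

E = np.linspace(-40.0 * E_F, 40.0 * E_F, 800001)
f = 1.0 / (np.exp(np.clip((E - E_F) / (k * T), -700, 700)) + 1.0)
df_dE = np.gradient(f, E)

integrand = (np.abs(E) * df_dE
             + np.sign(E) * f / (1.0 - 4.0 * E**2 / (hbar * w)**2))
sigma = -1j * e**2 / (np.pi * hbar**2 * w) * np.trapz(integrand, E)
print(sigma)                          # complex sheet conductivity in siemens
\end{verbatim}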
It is worth noting that, in the regime where the Fermi energy is greater than the photon energy ($E_{\rm F} > \hbar \omega$), Eq.(\ref{locality}) can be cast as
\begin{equation} \label{locality2}
k_\parallel < \left( {\frac{c}{{v_F }}} \right)k_0 \simeq 333\,k_0,
\end{equation}
where $k_0 = \omega/c$ (vacuum wavenumber), which specifies the wavevector range of those photons not triggering nonlocal effects at frequency $\omega$.
\begin{figure}[!t] \label{Fig1}
\center
\includegraphics[width=0.6\textwidth]{FigS1.pdf}
\caption{Geometry of the electron-graphene-sphere interaction.}
\end{figure}
\section{Excitation of graphene-nanoparticle system by fast electrons}
The geometry of the graphene-nanoparticle system interacting with relativistic electrons is sketched in Fig.1. The graphene monolayer lies on the plane $z=0$ and it is embedded in a homogeneous transparent medium whose real dielectric constant is $\varepsilon$. The plasmonic nanoparticle of radius $a$ has dielectric permittivity $\varepsilon _{\rm NP} (\omega)$ and it lies upon the graphene sheet with its center located at ${\bf r}_{\rm NP} = - a{\bf{\hat e}}_z$. An electron of velocity ${\bf{v}} = v{\bf{\hat e}}_z$ normally impinges on the graphene sheet at an impact parameter $d$ with respect to the nanosphere center. In order to evaluate the radiation emitted by the system via cathodoluminescence, we first examine the electromagnetic interaction of the moving charge with the graphene monolayer and subsequently incorporate the effect of the nanoparticle by resorting to the dipole approximation. We hereafter label with $\parallel$ a vector lying in the $xy$ plane (i.e. ${\bf{A}}_\parallel = A_x {\bf{\hat e}}_x + A_y {\bf{\hat e}}_y$) and we label with a subscript $\omega$ a frequency-domain quantity by adopting the spectral analysis
\begin{equation}
f_\omega = \frac{1}{{2\pi }}\int {dt} \;e^{i\omega t} f\left( t \right).
\end{equation}
\subsection{Electron-graphene interaction}
An electron of charge $-e<0$ moving on the trajectory ${\bf{r}}_{\rm e} \left( t \right) = - d {\bf{\hat e}}_x + vt{\bf{\hat e}}_z$ is equivalent, in the frequency domain, to the charge and current densities $\rho _\omega = -\frac{e}{{2\pi v}}\delta \left( {x + d } \right)\delta \left( y \right)e^{i\frac{\omega }{v}z}$, ${\bf{J}}_\omega = \rho _\omega v{\bf{\hat e}}_z$
whereas graphene hosts the surface current density ${\bf{K}}_{\omega \parallel } = \sigma (\omega) {\bf{E}}_{\omega \parallel }$, where ${\bf{E}}_{\omega \parallel }$ is the in-plane part of the total electric field at $z=0$. Direct solution of the Maxwell equations in the frequency domain with the above charge and current densities, together with the boundary conditions at $z=0$ (continuity of the tangential component of the electric field and discontinuity of the tangential component of the magnetic field produced by the graphene surface current density), leads to the electric field
\begin{equation} \label{Field_q_GRA}
{\bf{E}}_\omega ^{\left(\rm eg\right)} = {\bf{E}}_\omega ^{\rm \left( e \right)} + {\bf{E}}_\omega ^{\left( \rm {g} \right)},
\end{equation}
where
\begin{eqnarray} \label{fields}
{\bf{E}}_\omega ^{\rm \left( e \right)} &=& \frac{{ E_{\omega 0} }}{{\varepsilon \beta ^2 \gamma }}e^{i \frac{\omega}{v} z } \left[ { - K_1 \left( {\frac{{\omega \rho }}{{v\gamma }}} \right){\bf{\hat e}}_\rho + \frac{i}{\gamma }K_0 \left( {\frac{{\omega \rho }}{{v\gamma }}} \right){\bf{\hat e}}_z } \right], \nonumber \\
{\bf{E}}_\omega ^{\left( \rm {g} \right)} &=& \frac{{ E_{\omega 0} }}{{\varepsilon \beta }}\int\limits_0^\infty {dk_ \parallel } e^{ik_z \left| z \right|} \frac{{k_ \parallel ^2 }}{{k_0 }}\frac{{k_z J_1 \left( {k_ \parallel \rho } \right){\bf{\hat e}}_\rho + i \: {\mathop{\rm sign}} \left( z \right)k_ \parallel J_0 \left( {k_ \parallel \rho } \right){\bf{\hat e}}_z }}{{\left[ {k_ \parallel ^2 + \left( {\frac{\omega }{{v\gamma }}} \right)^2 } \right]\left( {k_z + k_0 \frac{{2\varepsilon }}{{Z_0 \sigma }}} \right)}},
\end{eqnarray}
\begin{figure}[!t] \label{Fig2}
\center
\includegraphics[width=1\textwidth]{FigS2.pdf}
\caption{({\bf a}) Real and ({\bf b}) imaginary part of the GPP pole $\kappa_{\rm p}$ (normalized with the vacuum wavenumber $k_0$) for $\varepsilon = 2$. On the dashed curve the Fermi energy $E_{\rm F}$ is equal to the photon energy $hc / \lambda$.}
\end{figure}
where $\beta = \frac{v}{c}$, $\gamma = \frac{1}{\sqrt{1 - \varepsilon \beta ^2}}$ is the Lorentz contraction factor, $Z_0 = \sqrt {\frac{\mu _0}{\varepsilon _0 }}$ is the vacuum impedance, $E_{\omega 0} = e \frac{k_0 Z_0}{4\pi ^2}$ and $k_z = \sqrt {k_0^2 \varepsilon - k_ \parallel ^2 }$ with ${\mathop{\rm Im}\nolimits} \left(k_z \right) \geq 0$. Here cylindrical coordinates coaxial with the charge trajectory have been introduced through $\rho = \sqrt {\left( {x + d } \right)^2 + y^2 }$ and ${\bf{\hat e}}_\rho = \nabla \rho$, while $K_n$ are the modified Bessel functions of the second kind and $J_n$ are the Bessel functions of the first kind.
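The evanescent character of ${\bf{E}}_\omega ^{\rm \left( e \right)}$ is easily visualized numerically; the following minimal Python sketch evaluates the modified-Bessel envelopes of its components (up to the common prefactor $E_{\omega 0}/(\varepsilon \beta^2 \gamma)$) for assumed values $\beta = 0.7$, $\varepsilon = 2$ and $\lambda = 10\ \mu$m:
\begin{verbatim}
import numpy as np
from scipy.special import kv

# Radial envelopes of the bare electron field (sub-Cherenkov: eps*beta^2 < 1).
c = 3.0e8
eps, beta = 2.0, 0.7
lam = 10.0e-6
omega = 2.0 * np.pi * c / lam
v = beta * c
gamma = 1.0 / np.sqrt(1.0 - eps * beta**2)

rho = np.linspace(0.05e-6, 5.0e-6, 200)   # radial distance from trajectory (m)
arg = omega * rho / (v * gamma)
E_rho = kv(1, arg)                        # radial component       ~ K_1
E_z = kv(0, arg) / gamma                  # longitudinal component ~ K_0/gamma
print(E_rho[0], E_z[0])
\end{verbatim}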
The field ${\bf{E}}_\omega ^{\rm \left( e \right)}$ is the well-known field produced by an electron uniformly moving in a homogeneous medium with permittivity $\varepsilon$, whereas ${\bf{E}}_\omega ^{\left( \rm {g} \right)}$ is a source-free field produced by the graphene sheet (accordingly vanishing for $\sigma = 0$). Note that, due to graphene rotational invariance around the charge trajectory, the field ${\bf{E}}_\omega ^{\left( \rm {g} \right)}$ lies on the radial $\rho z$ plane as does ${\bf{E}}_\omega ^{\left( {\rm e} \right)}$, and the whole field ${\bf{E}}_\omega ^{\left( {\rm eg}\right)} $ is transverse magnetic (TM). We here focus on the sub-Cherenkov regime where $v < \frac{c}{\sqrt \varepsilon}$, so that ${\mathop{\rm Im}\nolimits} \left( \gamma \right) = 0$ and ${\bf{E}}_\omega ^{\left( {\rm e} \right)}$ displays an exponentially decaying profile (through the modified Bessel functions) and does not produce electromagnetic radiation. On the other hand, ${\bf{E}}_\omega ^{\left( \rm {g} \right)}$ is responsible for the transition radiation (TR) associated with the graphene surface charge redistribution caused by the electron crossing the monolayer (see below). The analysis of such a field is simplified by noting that it can be cast as
\begin{equation} \label{E_gra}
{\bf{E}}_\omega ^{\left( \rm {g} \right)} = \left( {k_0^2 \varepsilon + \nabla \nabla \cdot } \right) {\bf \Pi} _\omega ^{\left( \rm {g} \right)}
\end{equation}
where
\begin{equation} \label{Hertz_gra}
{\bf \Pi} _\omega ^{\left( \rm {g} \right)} = E_{\omega 0} \frac{{ i \: {\rm sign}\left( z \right)}}{{\varepsilon \beta k_0 }}\int\limits_0^\infty {dk_\parallel } \frac{{e^{ik_z \left| z \right|} k_ \parallel J_0 \left( {k_\parallel \rho } \right)}}{{\left[ {k_\parallel ^2 + \left( {\frac{\omega }{{v\gamma }}} \right)^2 } \right]\left( {k_z + k_0 \frac{{2\varepsilon }}{{Z_0 \sigma }}} \right)}}{\bf{\hat e}}_z
\end{equation}
is the Hertz vector.
Electron velocity $v$ affects the distribution of photon wavevectors $k_\parallel$ through the characteristic Lorentzian profile of width
\begin{equation}
\Delta k_\parallel \sim \frac{\omega} {v\gamma } = \frac{k_0} { \beta \gamma }
\end{equation}
whereas graphene yields the standard Fresnel coefficient for TM polarization whose pole at the complex wavevector
\begin{equation} \label{GPP_pole}
\kappa_{\rm p} = k_0 \sqrt {\varepsilon - \left( {\frac{{2\varepsilon }}{{Z_0 \sigma }}} \right)^2 }
\end{equation}
(occurring only if ${\mathop{\rm Im}\nolimits} \left( \sigma \right) > 0$) signals the excitation of graphene plasmon polaritons (GPPs). The real and imaginary parts of $\kappa_{\rm p}$, normalized with the vacuum wavenumber $k_0$, are plotted in Fig.2{\bf a} and 2{\bf b}, respectively, for $\varepsilon = 2$. It is worth noting that if $E_{\rm F} < \frac{hc}{\lambda}$ (the region on the left side of the grey surface in Fig.2{\bf a} and 2{\bf b}), the plasmon resonance at $k_\parallel = {\rm Re} \left( \kappa_{\rm p} \right) > 150 \: k_0$ falls far outside the Lorentzian distribution $\Delta k_\parallel < 10 \: k_0$ (for $\beta > 0.1$ electrons) and it has a very low quality factor since ${\rm Im} \left( \kappa_{\rm p} \right) > 10 \: k_0$. In other words, if the Fermi energy is smaller than the photon energy, GPPs are effectively not excited by the relativistic electron and consequently the graphene field ${\bf \Pi} _\omega ^{\left( \rm {g} \right)}$ is not affected by electrical gating. Therefore, since electrical tunability is among our main targets, we will focus on the regime $E_{\rm F} > \frac{{hc}}{\lambda } = \hbar \omega$.
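As a rough numerical cross-check of this discussion (not part of the derivation), the pole of Eq.(\ref{GPP_pole}) can be evaluated with a few lines of code. The sketch below assumes a purely intraband (Drude-like) graphene conductivity, which reproduces the behavior of ${\rm Re} \left( \kappa_{\rm p} \right)$ in the $E_{\rm F} > \hbar \omega$ regime but misses the interband losses responsible for the large ${\rm Im} \left( \kappa_{\rm p} \right)$ at $E_{\rm F} < \hbar \omega$; the results in this work use the full conductivity of Eq.(\ref{sigma}).
\begin{verbatim}
import numpy as np

e, hbar, c = 1.602176634e-19, 1.054571817e-34, 2.99792458e8
Z0 = 376.730  # vacuum impedance [Ohm]

def sigma_intra(omega, E_F, tau=1e-13):
    # intraband (Drude-like) conductivity; E_F in Joule, tau = relaxation time
    return 1j * e**2 * E_F / (np.pi * hbar**2 * (omega + 1j / tau))

def gpp_pole(lam, E_F_eV, eps=2.0):
    # Eq. (GPP_pole): kappa_p = k0 * sqrt(eps - (2*eps/(Z0*sigma))^2)
    omega = 2.0 * np.pi * c / lam
    k0 = omega / c
    sig = sigma_intra(omega, E_F_eV * e)
    return k0 * np.sqrt(eps - (2.0 * eps / (Z0 * sig))**2 + 0j)

lam = 10e-6  # 10 um, i.e. photon energy hc/lambda ~ 0.124 eV
for E_F in (0.05, 0.3, 0.6):  # Fermi energies in eV
    kp = gpp_pole(lam, E_F)
    k0 = 2.0 * np.pi / lam
    print(E_F, kp.real / k0, kp.imag / k0)
\end{verbatim}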
\begin{figure}[!t] \label{Fig3}
\center
\includegraphics[width=0.6\textwidth]{FigS3.pdf}
\caption{Sommerfeld contour (black, grey and green curves) used to identify the GPP contribution to the field produced by graphene at the electron crossing.}
\end{figure}
In the chosen $E_{\rm F} > \hbar \omega$ regime, the local model for the graphene surface conductivity of Eq.(\ref{sigma}) is fully adequate to describe the interaction with relativistic electrons. In fact, from Eq.(\ref{Hertz_gra}), the broadest photon wavevector distribution occurs at the graphene plane $z=0$; it has a width of the order of $\Delta k_\parallel < 10 \: k_0$ (for $\beta > 0.1$ electrons) and it hosts the additional GPP peak at $k_\parallel = {\mathop{\rm Re}\nolimits} \left( \kappa_{\rm p} \right) < 150 \: k_0$ (see Fig.2{\textbf a}), so that Eq.(\ref{locality2}) is fully satisfied.
The integral expression in Eq.(\ref{Hertz_gra}) is also useful to identify the GPP contribution to the graphene field, which is sufficiently accurate in the near field and far from the electron trajectory (plasmon pole approximation, see below). By using the well-known relation $J_0 \left( {k_\parallel \rho } \right) = \frac{1}{2}\left[ {H_0^{\left( 1 \right)} \left( {k_\parallel \rho } \right) - H_0^{\left( 1 \right)} \left( {e^{i\pi } k_\parallel \rho } \right)} \right]$, where $H_0^{\left( 1 \right)} \left( \zeta \right)$ is the analytic continuation from the positive real axis of the Hankel function of the first kind of order $0$, Eq.(\ref{Hertz_gra}) can be cast as
\begin{equation} \label{contour1}
\Pi _{\omega z}^{\left( {\rm g} \right)} = E_{\omega 0} \frac{{i \: {\rm sign}\left( z \right)}}{{2\varepsilon \beta k_0 }}\int_\Gamma {d\kappa } \;\frac{{e^{ik_z \left| z \right|} \kappa H_0^{\left( 1 \right)} \left( {\kappa \rho } \right)}}{{\left[ {\kappa ^2 + \left( {\frac{\omega }{{v\gamma }}} \right)^2 } \right]\left( {k_z + k_0 \frac{{2\varepsilon }}{{Z_0 \sigma }}} \right)}}
\end{equation}
where the integration is performed along the upper side of the real axis ($\Gamma$) due to the branch cut of $H_0^{\left( 1 \right)} \left( {\kappa \rho } \right)$ along the negative real axis (see Fig.3). Due to its asymptotic $|\kappa| \rightarrow \infty$ behavior, $H_0^{\left( 1 \right)} \left( {\kappa \rho } \right) \approx \sqrt {\frac{2}{{\pi \kappa \rho }}} \exp \left( {i \kappa \rho - i \frac{\pi }{4}} \right)$, the Hankel function asymptotically vanishes in the upper half-plane, so that, in view of Jordan's lemma, we require ${e^{ik_z \left| z \right|} }$ to asymptotically vanish by choosing the Riemann sheet of $k_z = \sqrt {k_0^2 \varepsilon - \kappa ^2 }$ uniformly satisfying ${\rm Im} \left( k_z \right) \ge 0$ (i.e. $\sqrt \zeta = \sqrt {\left| \zeta \right|} \exp \left[ {\frac{i}{2}\arg \left( \zeta \right)} \right]$, with $0 \le \arg \left( \zeta \right) < 2\pi$ and branch cut at ${\rm Im} \left(\zeta\right) = 0$, ${\rm Re} \left(\zeta\right) > 0$). For mathematical convenience, we let $\varepsilon$ have a small positive imaginary part, so that $k_z$ has branch points at $\kappa = \pm k_0 \sqrt \varepsilon$ close to the real axis and branch cuts along the curve ${\mathop{\rm Im}\nolimits} \left( {k_0^2 \varepsilon - \kappa ^2 } \right) = 0$, ${\mathop{\rm Re}\nolimits} \left( {k_0^2 \varepsilon - \kappa ^2 } \right) > 0$, comprising two hyperbola portions asymptotically approaching the imaginary axis (see Fig.3). The integrand in Eq.(\ref{contour1}) has four simple poles: two GPP poles at $\kappa = \pm \kappa_{\rm p}$ (see Eq.(\ref{GPP_pole})) and two electronic poles at $\kappa = \pm \frac{{i\omega }}{{v\gamma }}$ close to the imaginary axis and lying on the branch cuts (since $ k_0^2 \varepsilon - \left( { \pm \frac{{i\omega }}{{v\gamma }}} \right)^2 = \left( {\frac{\omega }{v}} \right)^2$ is real and positive). The residue theorem applied to the Sommerfeld contour reported in Fig.3 (black, grey and green curves), together with Jordan's lemma, implies that the integral along $\Gamma$ equals $2\pi i$ times the residue at $\kappa_{\rm p}$ minus the integral over the contour $\Upsilon$ (green curve surrounding the branch cut), so that Eq.(\ref{contour1}) yields
\begin{equation} \label{contour2}
\Pi _{\omega z}^{\left( {g} \right)} = E_{\omega 0} \frac{{\pi \: {\rm sign}\left( z \right)}}{{\varepsilon \beta k_0 }}\left\{ {\left[ {\frac{{ e^{ik_z \left| z \right|} k_z H_0^{\left( 1 \right)} \left( {\kappa \rho } \right)}}{{\kappa ^2 + \left( {\frac{\omega }{{v\gamma }}} \right)^2 }}} \right]_{\kappa = \kappa _{\rm p} } + \frac{1}{{2\pi i }}\int\limits_\Upsilon {d\kappa } \;\frac{{e^{ik_z \left| z \right|} \kappa H_0^{\left( 1 \right)} \left( {\kappa \rho } \right)}}{{\left[ {\kappa ^2 + \left( {\frac{\omega }{{v\gamma }}} \right)^2 } \right]\left( {k_z + k_0 \frac{{2\varepsilon }}{{Z_0 \sigma }}} \right)}}} \right\}.
\end{equation}
The first term is evidently the field of the GPP excited by the electron, which is closely confined to the graphene plane with evanescent decay length $\sim \frac{1}{{{\mathop{\rm Re}\nolimits} \left( {\kappa _{\rm p} } \right)}}$ (since $k_z \left( {\kappa _{\rm p} } \right) \simeq i\kappa _{\rm p}$) and displays a radially oscillating asymptotic profile with period $\sim \frac{{2\pi }}{{{\mathop{\rm Re}\nolimits} \left( {\kappa _{\rm p} } \right)}}$ and decay length $\sim \frac{1}{{{\mathop{\rm Im}\nolimits} \left( {\kappa _{\rm p} } \right)}}$. The second, integral term in Eq.(\ref{contour2}) is responsible, in the far field, for the transition radiation produced by the electron crossing whereas, close to the graphene plane, it is tightly confined around the electron trajectory with the same radial decay length as the electron field ${\bf{E}}_\omega ^{\rm \left( e \right)}$. In fact, for $z=0$ and $\rho \rightarrow \infty$ the leading contribution to the integral comes from the infinitesimal circle around the pole $\frac{i \omega}{v \gamma}$, since the upper and lower portions of the contour $\Upsilon$ provide negligible contributions ($H_0^{\left( 1 \right)} \left( {\kappa \rho } \right)$ has a very fast exponential decay over the upper portion and very rapidly oscillates over the lower portion). Hence, performing the integral over the infinitesimal circle $\kappa = \frac{i \omega}{v \gamma} + \eta e^{i \phi}$ ($\eta \rightarrow 0^+$), we get
\begin{equation}
\frac{1}{{2\pi i}}\int\limits_\Upsilon {d\kappa } \;\frac{ {e^{ik_z \left| z \right|} \kappa H_0^{\left( 1 \right)} \left( {\kappa \rho } \right)}}{{\left[ {\kappa ^2 + \left( {\frac{\omega }{{v\gamma }}} \right)^2 } \right]\left( {k_z + k_0 \frac{{2\varepsilon }}{{Z_0 \sigma }}} \right)}} \simeq - \;\frac{{K_0 \left( {\frac{{\omega \rho }}{{v\gamma }}} \right)}}{{i\pi \left( {k_0 \frac{{2\varepsilon }}{{Z_0 \sigma }} + \frac{\omega }{v}} \right)}}
\end{equation}
which displays the same vanishing exponential profile as the electron field ${\bf{E}}_\omega ^{\rm \left( e \right)}$ in the first of Eqs.(\ref{fields}).
It is worth noting for our later purposes that, in the chosen regime of Fermi energy greater than the photon energy (where GPPs are effectively excited), the field close to the graphene plane and radially far from the electron trajectory is dominated by the GPP contribution (since all the other terms display radial exponential decay). More precisely, if the condition $\frac{{\omega \rho }}{{v\gamma }} \gg 1$ is satisfied, the so-called plasmon pole approximation holds and the field ${\bf{E}}_\omega ^{\left( {\rm eg}\right)}$ of Eq.(\ref{Field_q_GRA}), from Eq.(\ref{E_gra}) and the GPP term of Eq.(\ref{contour2}), reduces to
\begin{equation}
{\bf{E}}_\omega ^{\left( {\rm eg}\right)} = E_{\omega 0} \frac{\pi }{{\varepsilon \beta k_0 }}\left[ {e^{ik_z \left| z \right|} \kappa k_z \frac{{ - ik_z H_1^{\left( 1 \right)} \left( {\kappa \rho } \right){\bf{\hat e}}_\rho + {\rm sign}\left( z \right)\kappa H_0^{\left( 1 \right)} \left( {\kappa \rho } \right){\bf{\hat e}}_z }}{{\kappa ^2 + \left( {\frac{\omega }{{v\gamma }}} \right)^2 }}} \right]_{\kappa = \kappa _{\rm p} }
\end{equation}
which, using the relation $k_z \left( {\kappa _{\rm p} } \right) \simeq i\kappa _{\rm p}$, can be cast as
\begin{equation} \label{PlaField}
{\bf{E}}_\omega ^{\left( {\rm eg} \right)} = E_{\omega 0} \frac{{i\pi }}{{\varepsilon \beta k_0}}\frac{{\kappa _{\rm p}^3 \left[ {H_1^{\left( 1 \right)} \left( {\kappa _{\rm p} \rho } \right){\bf{\hat e}}_\rho + \frac{z}{{\left| z \right|}}H_0^{\left( 1 \right)} \left( {\kappa _{\rm p} \rho } \right){\bf{\hat e}}_z } \right]}}{{ {\kappa _{\rm p}^2 + \left( {\frac{\omega }{{v\gamma }}} \right)^2 } }}e^{ - \kappa _{\rm p} \left| z \right|}.
\end{equation}
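For completeness, the plasmon-pole-approximation field of Eq.(\ref{PlaField}) is straightforward to evaluate numerically; the following minimal sketch (with $\kappa_{\rm p}$ and the prefactor constants supplied as inputs) is only illustrative and is not the code used to produce the figures.
\begin{verbatim}
import numpy as np
from scipy.special import hankel1

def ppa_field(rho, z, kappa_p, omega, v, gamma, E0, eps, beta, k0):
    # Eq. (PlaField): GPP field in the plasmon pole approximation,
    # returned as its radial and vertical components (E_rho, E_z)
    pref = E0 * 1j * np.pi / (eps * beta * k0)
    denom = kappa_p**2 + (omega / (v * gamma))**2
    common = pref * kappa_p**3 / denom * np.exp(-kappa_p * np.abs(z))
    E_rho = common * hankel1(1, kappa_p * rho)
    E_z = common * np.sign(z) * hankel1(0, kappa_p * rho)
    return E_rho, E_z
\end{verbatim}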
\subsection{Nanoparticle excitation}
We consider a plasmonic nanoparticle whose dielectric permittivity is described by the Drude model $\varepsilon _{\rm NP} (\omega) = 1 - \frac{\omega_{\rm p}^2}{\omega^2 + i \omega \Gamma}$ which accurately applies to transparent conductors with plasma frequency $\omega_{\rm p}$ in the mid-infrared. The nanoparticle-graphene evanescent coupling entails the hybridization of nanoparticle localized plasmons (NLPs) and GPPs thus yielding hybrid plasmonic modes which, in the presence of the moving electron, are excited by the field ${\bf{E}}_\omega ^{\left( {\rm eg}\right)}$ discussed in the previous section.
Since the radius $a$ is much smaller than the mid-infrared wavelengths, we here resort to the electrostatic (non-retarded) approximation where the nanoparticle is modelled by a point dipole located at ${\bf r}_{\rm NP} = -a \hat{\bf e}_z$ whose dipole moment (in the frequency domain) is ${\bf{p}}_\omega = \alpha {\bf{E}}_\omega ^{\left( {\rm ext} \right)}$, where ${\bf{E}}_\omega ^{\left( {\rm ext} \right)}$ is the field experienced by the dipole (without the self-field) and $\alpha = 4\pi \varepsilon _0 \varepsilon a^3 \left( {\frac{{\varepsilon _{\rm NP} - \varepsilon }}{{\varepsilon _{\rm NP} + 2\varepsilon }}} \right)$ is the well-known polarizability of the sphere. Due to the presence of the graphene sheet at $z=0$, the field radiated by the point dipole is
\begin{equation} \label{Field_p}
{\bf{E}}_\omega ^{\left( {\rm NP} \right)} = \left\{ \uptheta \left( { - z} \right) {\left[ {G^{\left( {\rm i} \right)} + G^{\left( {\rm r} \right)} } \right]+ \uptheta \left( z \right) \left[ {G^{\left( {\rm t} \right)} } \right]} \right\}{\bf{p}}_\omega
\end{equation}
where
\begin{eqnarray} \label{irt}
G ^{\left( {\rm i} \right)} &=& \int {d^2 {\bf{k}}_\parallel e^{i{\bf{k}}_\parallel \cdot {\bf{r}}_\parallel } } e^{ik_z \left| {z + a} \right|} \left[ {1 - {\mathop{\rm sign}} \left( {z + a} \right)\frac{{{\bf{\hat e}}_z {\bf{k}}_\parallel ^{\rm T} }}{{k_z }}} \right]\left[ {i\frac{{k_0^2 \varepsilon I_\parallel - {\bf{k}}_\parallel {\bf{k}}_\parallel ^{\rm T} - {\mathop{\rm sign}} \left( {z + a} \right)k_z {\bf{k}}_\parallel {\bf{\hat e}}_z^{\rm T} }}{{8\pi ^2 \varepsilon _0 \varepsilon k_z }}} \right], \nonumber \\
G^{\left( {\rm r} \right)} &=& \int {d^2 {\bf{k}}_\parallel e^{i{\bf{k}}_\parallel \cdot {\bf{r}}_\parallel } } e^{ik_z \left( { - z + a} \right)} \left( {1 + \frac{{{\bf{\hat e}}_z {\bf{k}}_\parallel ^{\rm T} }}{{k_z }}} \right)\left[ {r_{\rm TE} \left( {1 - \frac{{{\bf{k}}_\parallel {\bf{k}}_\parallel ^{\rm T} }}{{k_\parallel ^2 }}} \right) + r_{\rm TM} \left( {\frac{{{\bf{k}}_\parallel {\bf{k}}_\parallel ^{\rm T} }}{{k_\parallel ^2 }}} \right)} \right]\left( {i\frac{{k_0^2 \varepsilon I_\parallel - {\bf{k}}_\parallel {\bf{k}}_\parallel ^{\rm T} - k_z {\bf{k}}_\parallel {\bf{\hat e}}_z^{\rm T} }}{{8\pi ^2 \varepsilon _0 \varepsilon k_z }}} \right), \nonumber \\
G^{\left( {\rm t} \right)} &=& \int {d^2 {\bf{k}}_\parallel e^{i{\bf{k}}_\parallel \cdot {\bf{r}}_\parallel } } e^{ik_z \left( {z + a} \right)} \left( {1 - \frac{{{\bf{\hat e}}_z {\bf{k}}_\parallel ^{\rm T} }}{{k_z }}} \right)\left[ {t_{\rm TE} \left( {1 - \frac{{{\bf{k}}_\parallel {\bf{k}}_\parallel ^{\rm T} }}{{k_\parallel ^2 }}} \right) + t_{\rm TM} \left( {\frac{{{\bf{k}}_\parallel {\bf{k}}_\parallel ^{\rm T} }}{{k_\parallel ^2 }}} \right)} \right]\left( {i\frac{{k_0^2 \varepsilon I_\parallel - {\bf{k}}_\parallel {\bf{k}}_\parallel ^{\rm T} - k_z {\bf{k}}_\parallel {\bf{\hat e}}_z^{\rm T} }}{{8\pi ^2 \varepsilon _0 \varepsilon k_z }}} \right). \nonumber \\
\end{eqnarray}
Here the dyadic notation $( {{\bf{ab}}^{\rm T} } ){\bf{c}} = \left( {{\bf{b}} \cdot {\bf{c}}} \right) {\bf{a}}$ has been used, $I_\parallel = {\bf{\hat e}}_x {\bf{\hat e}}_x^{\rm T} + {\bf{\hat e}}_y {\bf{\hat e}}_y^{\rm T}$, and the reflection and transmission coefficients for TE and TM waves are
\begin{equation}
\begin{array}{*{20}c}
{r_{\rm TE} = - \frac{\displaystyle {k_0 \frac{{Z_0 \sigma }}{2}}}{\displaystyle {k_z + k_0 \frac{{Z_0 \sigma }}{2}}},} & {t_{\rm TE} = \frac{\displaystyle{k_z }}{\displaystyle{k_z + k_0 \frac{{Z_0 \sigma }}{2}}},} & {r_{\rm TM} = - \frac{\displaystyle{k_z }}{\displaystyle{k_z + k_0 \frac{{2\varepsilon }}{{Z_0 \sigma }}}},} & {t_{\rm TM} = \frac{\displaystyle{k_0 \frac{{2\varepsilon }}{\displaystyle{Z_0 \sigma }}}}{\displaystyle{k_z + k_0 \frac{{2\varepsilon }}{{Z_0 \sigma }}}}.} \\
\end{array}
\end{equation}
The first term in Eq.(\ref{Field_p}) is the standard dipole field $G^{\left( {\rm i} \right)} {\bf{p}}_\omega = \left( {k_0^2 \varepsilon + \nabla \nabla \cdot } \right)\left( {\frac{1}{{4\pi \varepsilon _0 \varepsilon }}\frac{{e^{ik_0 \sqrt \varepsilon \left| {{\bf{r}} + a{\bf{\hat e}}_z } \right|} }}{{\left| {{\bf{r}} + a{\bf{\hat e}}_z } \right|}}{\bf{p}}_\omega } \right)$ in the angular spectrum representation whereas $G^{\left( {\rm r} \right)} {\bf{p}}_\omega$ and $G^{\left( {\rm t} \right)} {\bf{p}}_\omega$ are the reflected and transmitted fields, respectively, produced by the graphene sheet (and accordingly $G^{\left( {\rm r} \right)}=0$ and $G^{\left( {\rm t} \right)}=G^{\left( {\rm i} \right)}$ for $\sigma = 0$, since in this case $r_{\rm TE} = r_{\rm TM} =0$ and $t_{\rm TE} = t_{\rm TM} =1$). Evidently, the TM reflection and transmission coefficients have the plasmon pole $\kappa_{\rm p}$ (see Eq.(\ref{GPP_pole})) which signals the well-known ability of the nano-antenna to excite GPPs.
Note that, due to the factor $e^{ik_z a}$ in the second and third of Eqs.(\ref{irt}), the broadest photon wavevector distribution at the graphene plane $z=0$ has a width of the order of $\frac{1}{a} < 105\,k_0$ (for $a = 30 \: {\rm nm}$ and $\lambda < 20 \: {\rm \mu m}$), with the same GPP peak discussed in the above section. Therefore, in the chosen $E_{\rm F} > \hbar \omega$ regime, Eq.(\ref{locality2}) is fully satisfied and nonlocal effects do not play any role in the nanoparticle-graphene interaction.
In the presence of the moving electron, the overall field is
\begin{equation} \label{totalfield}
{\bf{E}}_\omega = {\bf{E}}_\omega ^{\left( {\rm eg} \right)} + {\bf{E}}_\omega ^{\left( \rm NP \right)}
\end{equation}
and the field experienced by the dipole is ${\bf{E}}_\omega ^{\left( {\rm ext} \right)} = \left[ {{\bf{E}}_\omega ^{\left( {\rm eg} \right)} + G^{\left( {\rm r} \right)} {\bf{p}}_\omega } \right]_{{\bf{r}} = {\bf{r}}_{\rm NP} } $ so that, using the nanosphere polarizability $\alpha$, we get
\begin{equation}
{\bf{p}}_\omega = \left[ {\frac{1}{{\frac{1}{\alpha } - G^{\left( {\rm r} \right)} }}{\bf{E}}_\omega ^{\left( {\rm eg} \right)} } \right]_{{\bf{r}} = {\bf{r}}_{\rm NP} }
\end{equation}
for the induced dipole moment. Since the field ${{\bf{E}}_\omega ^{\left( {\rm eg} \right)} }$ lies on the radial $\rho z$ plane, this equation implies that the dipole moment has only $x$- and $z$- components (i.e. ${\bf{p}}_\omega = p_{\omega x} {\bf{\hat e}}_x + p_{\omega z} {\bf{\hat e}}_z$) given by
\begin{eqnarray} \label{momentcomp}
p_{\omega x} &=& \frac{\displaystyle {E_{\omega \rho }^{\left( {\rm eg} \right)} \left( {{\bf{r}}_{\rm NP} } \right)}}{\displaystyle {\frac{1}{\alpha } - \frac{i}{{8\pi \varepsilon _0 \varepsilon }}\int\limits_0^\infty {dk_\parallel } e^{i2k_z a} \left( {r_{\rm TE} \frac{{k_\parallel k_0^2 \varepsilon }}{{k_z }}\, + r_{\rm TM} k_\parallel k_z } \right)}}, \nonumber \\
p_{\omega z} &=& \frac{\displaystyle{E_{\omega z}^{\left( {\rm eg} \right)} \left( {{\bf{r}}_{\rm NP} } \right)}}{\displaystyle{\frac{1}{\alpha } - \frac{i}{{8\pi \varepsilon _0 \varepsilon }}\int\limits_0^\infty {dk_\parallel } e^{i2k_z a} \left(- {r_{\rm TM} \frac{{2k_\parallel ^3 }}{{k_z }}} \right)}}
\end{eqnarray}
where the angular integration has been performed in $G^{\left( \rm r \right)}$. Equation (\ref{totalfield}), with the help of Eqs.(\ref{momentcomp}), fully describes the field accompanying the interaction of the relativistic electron with the graphene-nanoparticle system. Hybrid plasmonic resonances of the nanoparticle-graphene system are identified by the poles of ${\bf{p}}_\omega$, so that Eqs.(\ref{momentcomp}) reveal that the fast electron is able to excite two different hybrid plasmonic modes whose dipole moments are purely $x$- and $z$-polarized, respectively. In order for the denominators of Eqs.(\ref{momentcomp}) to be very small, the $\frac{1}{\alpha}$ and the integral contributions have to be comparable, which requires both NLPs and GPPs to be excited. Therefore the hybrid plasmonic resonances appear spectrally close to the nanoparticle resonance wavelength in the Fermi energy range where the graphene plasmonic resonance occurs.
The overall field of Eq.(\ref{totalfield}) turns out to be highly sensitive to the graphene Fermi energy (electrical tunability) for two main reasons. First, the GPP peak appears in the wavevector spectral distributions of all the graphene reaction fields, i.e. the one directly induced by the electron ${\bf{E}}_\omega ^{\left( {\rm g} \right)}$ (second of Eqs.(\ref{fields})) and the two fields produced by the dipole $G^{\left( {\rm r} \right)}{\bf{p}}_\omega$ and $G^{\left( {\rm t} \right)}{\bf{p}}_\omega$ (Eq.(\ref{Field_p})). Second, and most importantly for our purposes, the dipole field ${\bf{E}}_\omega ^{\left( {\rm NP} \right)}$ directly experiences the above-discussed hybrid plasmonic resonances, with a particularly spectacular impact, since they are carried by ${\bf{p}}_\omega$, thus uniformly enhancing the overall wavevector spectral distribution of ${\bf{E}}_\omega ^{\left( {\rm NP} \right)}$.
\section{Directionality of cathodoluminescence emission and its tuning}
The field ${\bf{E}}_\omega$ of Eq.(\ref{totalfield}) has spectral components with $k_\parallel < k_0 \sqrt{\varepsilon}$ which survive in the far field. This physically corresponds to emission of radiation by the target (here the graphene-nanoparticle system) upon interaction with the fast electron, a well-known fact usually referred to as cathodoluminescence (CL). We here investigate the tunability of the spectral CL emission, provided by the graphene Fermi energy, with emphasis on the angular distribution of the radiation pattern.
\subsection{Far field and spectral-angular distribution of the CL emission}
Since we are considering the sub-Cherenkov regime, the electron field ${\bf E}^{\left(\rm e\right)}_{\omega}$ (in the first of Eqs.(\ref{fields})) does not contribute to the emitted radiation so that, after suppressing it, the total field of Eq.(\ref{totalfield}) in the far field ($k_0 r \rightarrow \infty$) reduces to
\begin{equation} \label{farfield}
{\bf{E}}_\omega = \frac{{e^{i\sqrt \varepsilon \left(k_0 r\right)} }}{{k_0 r}}\left[ {{\bf{f}}^{\left( {\rm g} \right)} + {\bf{f}}^{\left( {\rm NP} \right)} } \right]
\end{equation}
where
\begin{eqnarray} \label{amplitudes}
{\bf{f}}^{\left( {\rm g} \right)} &=& E_{\omega 0} e^{i\sqrt \varepsilon \left( {k_0 d} \right)\sin \theta \cos \varphi } \frac{{\left( {\frac{{Z_0 \sigma }}{{2\sqrt \varepsilon }}} \right)}}{{1 + \left( {\frac{{Z_0 \sigma }}{{2\sqrt \varepsilon }}} \right)\left| {\cos \theta } \right|}}\left( {\frac{{\beta \sin \theta \cos \theta }}{{\varepsilon \beta ^2 \cos ^2 \theta - 1}}} \right){\bf{\hat e}}_\theta , \nonumber \\
{\bf{f}}^{\left( {\rm NP} \right)} &=& \uptheta \left( { - \cos \theta } \right) e^{i\sqrt \varepsilon \left( {k_0 a} \right)\cos \theta } \left( {{\bf{\hat e}}_\theta {\bf{\hat e}}_\theta ^{\rm T} + {\bf{\hat e}}_\varphi {\bf{\hat e}}_\varphi ^{\rm T} } \right) \frac{{k_0^3 {\bf{p}}_\omega}}{{4\pi \varepsilon _0 }} + \nonumber \\
&+& \uptheta \left( { - \cos \theta } \right) e^{ - i\sqrt \varepsilon \left( {k_0 a} \right)\cos \theta } \left[ {\frac{{\left( {\frac{{Z_0 \sigma }}{{2\sqrt \varepsilon }}} \right)\cos \theta }}{{1 - \left( {\frac{{Z_0 \sigma }}{{2\sqrt \varepsilon }}} \right)\cos \theta }}\left( {\cos 2\theta \: {\bf{\hat e}}_\theta {\bf{\hat e}}_\theta ^{\rm T} + \sin 2\theta \: {\bf{\hat e}}_\theta {\bf{\hat e}}_r^{\rm T} } \right) + \frac{{\left( {\frac{{Z_0 \sigma }}{{2\sqrt \varepsilon }}} \right)}}{{\cos \theta - \left( {\frac{{Z_0 \sigma }}{{2\sqrt \varepsilon }}} \right)}}{\bf{\hat e}}_\varphi {\bf{\hat e}}_\varphi ^{\rm T} } \right] \frac{{k_0^3 {\bf{p}}_\omega}}{{4\pi \varepsilon _0 }} + \nonumber \\
&+& \uptheta \left( {\cos \theta } \right) e^{i\sqrt \varepsilon \left( {k_0 a} \right)\cos \theta } \left[ {\frac{1}{{1 + \left( {\frac{{Z_0 \sigma }}{{2\sqrt \varepsilon }}} \right)\cos \theta }}{\bf{\hat e}}_\theta {\bf{\hat e}}_\theta ^{\rm T} + \frac{{\cos \theta }}{{\cos \theta + \left( {\frac{{Z_0 \sigma }}{{2\sqrt \varepsilon }}} \right)}}{\bf{\hat e}}_\varphi {\bf{\hat e}}_\varphi ^{\rm T} } \right]
\frac{{k_0^3 {\bf{p}}_\omega}}{{4\pi \varepsilon _0 }} ,
\end{eqnarray}
in which polar spherical coordinates $\left( r,\theta,\varphi \right)$ have been introduced together with their coordinate unit vectors ${\bf{\hat e}}_r$, ${\bf{\hat e}}_\theta$ and ${\bf{\hat e}}_\varphi$. Here ${\bf{f}}^{\left( {\rm g} \right)}$ is the far field amplitude of the graphene field ${\bf E}^{\left(\rm g\right)}_{\omega}$ and it describes the transition radiation (TR) which is generated by the electron crossing the graphene sheet. Note that ${\bf{f}}^{\left( {\rm g} \right)}$ is along the ${\bf{\hat e}}_\theta$ direction, it has a phase factor accounting for the electron impact parameter $d$ and it displays a Fresnel-like coefficient (proportional to $\sigma$) modulated by a standard $\beta$-dependent factor (not diverging in the sub-Cherenkov regime $\sqrt \varepsilon \beta < 1$ we are considering). On the other hand, ${\bf{f}}^{\left( {\rm NP} \right)}$ is the far-field amplitude of the dipole field ${\bf E}^{\left(\rm NP \right)}_{\omega}$ and it describes the diffraction radiation (DR) which is outcoupled from the nanoparticle excited by the field ${\bf E}^{\left(\rm eg\right)}_{\omega}$. The amplitude ${\bf{f}}^{\left( {\rm NP} \right)}$ has three contributions arising from the fields $G ^{\left( {\rm i} \right)} {\bf p}_\omega$, $G ^{\left( {\rm r} \right)} {\bf p}_\omega$ and $G ^{\left( {\rm t} \right)} {\bf p}_\omega$, respectively, and it has components both along ${\bf{\hat e}}_\theta$ and ${\bf{\hat e}}_\varphi$ which are suitable projections of the dipole moment ${\bf p}_\omega$.
The total energy emitted by CL per incoming electron is $U = \int\limits_{ - \infty }^\infty {dt} \int {d\Omega } \: r^2 {\bf{\hat e}}_r \cdot \left[ {{\bf{E}}\left( {{\bf{r}},t} \right) \times {\bf{H}}\left( {{\bf{r}},t} \right)} \right]$ which, resorting to the frequency domain, can be suitably cast as a superposition of photon energy quanta $\frac{hc}{\lambda}$, i.e.
\begin{equation}
U = \int\limits_0^\infty {d\lambda } \left( {\frac{{hc}}{\lambda }} \right)\int {d\Omega } \;\frac{{dN}}{{d\Omega d\lambda }}
\end{equation}
where $\frac{{dN}}{{ d\Omega d\lambda}} = \frac{{4\pi }}{{\hbar \lambda }} r^2 {\bf{\hat e}}_r \cdot {\mathop{\rm Re}\nolimits} \left( {{\bf{E}}_\omega \times {\bf{H}}_\omega ^* } \right)$ is the number of photons emitted per incoming electron, per unit of solid angle of emission and per unit of photon wavelength. By using Eq.(\ref{farfield}) and the far field relation ${\bf{H}}_\omega = \frac{{\sqrt \varepsilon }}{{Z_0 }}{\bf{\hat e}}_r \times {\bf{E}}_\omega$, we get the spectral-angular distribution of the photon emission probability
\begin{equation} \label{emission}
\frac{{dN}}{{d\Omega d\lambda }} = \frac{{\lambda \sqrt \varepsilon }}{{\pi \hbar Z_0 }}\left[ {\left| {f_\theta ^{\left( {\rm g} \right)} + f_\theta ^{\left( {\rm NP} \right)} } \right|^2 + \left| {f_\varphi ^{\left( {\rm NP} \right)} } \right|^2 } \right]
\end{equation}
revealing that the $\theta$-components of the graphene and dipole fields interfere in the CL radiation pattern. Since TR and DR have different spatial symmetry properties, their interference in Eq.(\ref{emission}) provides peculiar directionality traits to the overall CL emission. In addition, since the nanoparticle excitation strongly depends, at each wavelength, on the graphene Fermi energy, it turns out that the CL emission directionality can effectively be tuned by electrical gating.
\subsection{CL emission directionality}
In order to investigate CL emission directionality more closely, we note that in our nanophotonic setup the inequalities $k_0 d \ll 1$, $k_0 a \ll 1$ and $\left| {\frac{{Z_0 \sigma }}{{2\sqrt \varepsilon }}} \right| \ll 1$ hold in the chosen infrared range so that Eqs.(\ref{amplitudes}) reduce to
\begin{eqnarray}
{\bf{f}}^{\left( {\rm g} \right)} &=& E_{\omega 0} \left( {\frac{{Z_0 \sigma }}{{2\sqrt \varepsilon }}} \right)\left( {\frac{{\beta \sin \theta \cos \theta }}{{\varepsilon \beta ^2 \cos ^2 \theta - 1}}} \right){\bf{\hat e}}_\theta , \nonumber \\
{\bf{f}}^{\left( {\rm NP} \right)} &=& \left( {{\bf{\hat e}}_\theta {\bf{\hat e}}_\theta ^{\rm T} + {\bf{\hat e}}_\varphi {\bf{\hat e}}_\varphi ^{\rm T} } \right)\frac{{k_0^3 {\bf{p}}_\omega }}{{4\pi \varepsilon _0 }}.
\end{eqnarray}
Since the amplitude $f_\theta ^{\left( {\rm g} \right)}$ does not depend on $\varphi$, the TR angular distribution ($ \sim \left| {f^{\left( {\rm g} \right)} } \right|^2$) is axially symmetric around the electron trajectory with its characteristic double cone shape (see Fig.1b of the main text) of aperture $\theta_{\rm max}$ (with $\tan \theta _{\rm max } = \sqrt {1 - \beta ^2 \varepsilon }$) and maximum $\left| {f_\theta ^{\left( {\rm g} \right)} } \right|_{\rm max } \simeq E_{\omega 0} \frac{\beta }{2}\left( {\frac{{Z_0 \sigma }}{{2\sqrt \varepsilon }}} \right)$. The amplitude ${\bf{f}}^{\left( {\rm NP} \right)}$ is the standard dipole far field amplitude (see Fig.1b of the main text) and the maximum of its $\theta$-component is $\left| {f_\theta ^{\left( {\rm NP} \right)} } \right|_{\max } \simeq \frac{{k_0^3 \left| {{\bf{p}}_\omega } \right|}}{{4\pi \varepsilon _0 }}$. Therefore, the relative impact of TR and DR to their interference is basically measured by the ratio
\begin{equation}
R = \frac{{\left| {f_\theta ^{\left( {\rm NP} \right)} } \right|_{\max } }}{{\left| {f_\theta ^{\left( {\rm g} \right)} } \right|_{\max } }} \cong \frac{{4\sqrt \varepsilon }}{{\beta Z_0 \sigma }}\frac{{k_0^3 \left| {{\bf{p}}_\omega } \right|}}{{4\pi \varepsilon _0 E_{\omega 0} }}
\end{equation}
which can be brought close to 1 by adjusting the electron velocity. Once the condition $R \simeq 1$ is achieved, TR and DR interference is effective and the directionality of the CL angular distribution basically stems from the relations
\begin{eqnarray} \label{relations}
f_\theta ^{\left( {\rm g} \right)} \left( {\theta } \right)
&=& - f_\theta ^{\left( {\rm g} \right)} \left( {\pi - \theta } \right), \nonumber \\
f_\theta ^{\left( {\rm NP} \right)} \left( {\theta ,\varphi } \right) &=& \frac{{k_0^3 }}{{4\pi \varepsilon _0 }}\left( {\cos \theta \cos \varphi \: p_{\omega x} - \sin \theta \: p_{\omega z} } \right).
\end{eqnarray}
The first of these equations states that the TR amplitude has opposite signs in the half-spaces $z>0$ and $z<0$, whereas the second equation shows that the DR amplitude does not have this property if $p_{\omega z} \neq 0$, resulting in different interference patterns in the two half-spaces. In addition, the dependence of $f_\theta ^{\left( {\rm NP} \right)}$ on $\varphi$ implies that the interference is not axially symmetric around the $z$-axis, with the maximum emission direction angle $\varphi_{\rm max}$ depending on the dipole moment components $p_{\omega x}$ and $p_{\omega z}$.
\subsection{Impact of the GPP phase on the CL directionality}
As discussed above, the tunability of the maximal CL emission direction is an interferometric effect relying on the dependence of the excited dipole moment ${\bf p}_{\omega}$ on the Fermi energy at each wavelength. As a consequence, the effect is particularly spectacular close to the hybrid nanoparticle-graphene resonances, where $|{\bf p}_{\omega}|$ is highly sensitive to variations of $E_{\rm F}$. There is, however, a specific situation where the phases $\arg p_{\omega x}$ and $\arg p_{\omega z}$ play a significant role, leading to an even more spectacular angular emission phenomenology. This happens when the nanoparticle is far enough from the electron trajectory that it experiences only the GPP field, whose phase is a rapidly-varying function of both $\lambda$ and $E_{\rm F}$.
To discuss this effect, we choose the impact parameter $d$ in such a way that
\begin{equation}
d > \frac{{\beta \gamma }}{{2\pi }}\lambda
\end{equation}
for each wavelength in the considered infrared spectral domain, so that the plasmon pole approximation of Eq.(\ref{PlaField}) holds. Since in this regime $\left| {\kappa _{\rm p} d} \right|$ is very large, we can resort to the Hankel function asymptotic behavior $H_n^{\left( 1 \right)} \left( \zeta \right) \simeq \sqrt {\frac{2}{{\pi \zeta }}} e^{i\left( {\zeta - \frac{\pi }{4} - n\frac{\pi }{2}} \right)}$ (for $|\zeta| \rightarrow \infty$), so that the field at the nanoparticle (triggering its dipole moment, see Eq.(\ref{momentcomp})), from Eq.(\ref{PlaField}), can be cast as
\begin{equation}
{\bf{E}}_\omega ^{\left( {\rm eg} \right)} \left( {{\bf{r}}_{NP} } \right) = E_{\omega 0} \sqrt {\frac{{2\pi }}{{\kappa _{\rm p} d}}} \frac{{\kappa _{\rm p}^3 e^{ - \kappa _{\rm p} a} }}{{\varepsilon \beta k_0 \left[ {\kappa _{\rm p}^2 + \left( {\frac{\omega }{{v\gamma }}} \right)^2 } \right]}}e^{i\left( {\kappa _{\rm p} d - \frac{\pi }{4}} \right)} \left( {{\bf{\hat e}}_x + i{\bf{\hat e}}_z } \right)
\end{equation}
where the relation $k_z \simeq i\kappa _{\rm p}$ (correct in this regime) has been exploited. As expected, the GPP field turns out to be circularly polarized in the $xz$ plane (i.e. carrying transverse momentum-locked spin) and exhibits the plasmon phase factor $e^{i\kappa _{\rm p} d }$, which is a rapidly-varying function of both $\lambda$ and $E_{\rm F}$ since ${\mathop{\rm Re}\nolimits} \left( {\kappa _{\rm p} d} \right)$ is large (see Fig.2a). Due to Eqs.(\ref{momentcomp}), both $p_{\omega x}$ and $p_{\omega z}$ turn out to be proportional to $e^{i\kappa _{\rm p} d }$, which is their only rapidly-varying phase factor. Now, from Eq.(\ref{emission}) (and the second of Eqs.(\ref{relations})), the interference term in the angular emission pattern is
\begin{equation}
2{\mathop{\rm Re}\nolimits} \left( {f_\theta ^{\left( {\rm g} \right)*} f_\theta ^{\left( {\rm NP} \right)} } \right) = \frac{{k_0^3 }}{{2\pi \varepsilon _0 }}{\mathop{\rm Re}\nolimits} \left[ {f_\theta ^{\left( g \right)*} \left( {\cos \theta \cos \varphi \;p_{\omega x} - \sin \theta \;p_{\omega z} } \right)} \right]
\end{equation}
which, due to the plasmon phase factor $e^{i\kappa _{\rm p} d }$ in both $p_{\omega x}$ and $p_{\omega z}$, is evidently a rapidly varying function of both $\lambda$ and $E_{\rm F}$. We conclude that, in the plasmon pole approximation, the CL angular distribution is highly sensitive to the Fermi energy, thus providing the effective ability to tune the maximal emission direction very easily through small variations of $E_{\rm F}$. Conversely, from a spectroscopic perspective, the phenomenon can also be exploited to extract the GPP phase by comparing the CL angular distributions at different Fermi energies, at each wavelength.
\section{Supporting Movies}
Dependence on the Fermi energy of various quantities related to the CL emission considered in the main text, at the on-resonance wavelength $\lambda = 6 \, {\mu m}$ (Supporting Movie S1) and the two on-resonance wavelengths $\lambda = 11.2 \, {\mu m}$ (Supporting Movie S2) and $\lambda = 12.6 \, {\mu m}$ (Supporting Movie S3). The plotted quantities evolving with the increasing Fermi energy are:
the moduli and phases of the (normalized) NP dipole moment components $\tilde p_{\omega x}$ and $\tilde p_{\omega z}$ (extracted from Fig.2c of the main text), the angle-resolved TR emission pattern coaxially superimposed on the electron trajectory, the angle-resolved DR emission pattern, the dipole polarization ellipse ${\bf{p}}\left( t \right) = {\mathop{\rm Re}\nolimits} \left( {{\bf{p}}_\omega e^{ - i\omega t} } \right)$ (arbitrarily rescaled for visualization purposes) superimposed on the DR pattern, and the angle-resolved CL emission pattern.
\section{Introduction}
The occurrence of hatespeech has been increasing~\cite{hate_speech_increasing}. It has become easier than before to reach a large audience quickly via social media, which increases the temptation for inappropriate behaviors such as hatespeech and the potential damage to social systems. In particular, hatespeech interferes with civil discourse and turns good people away. Furthermore, hatespeech in the virtual world can lead to physical violence against certain groups in the real world\footnote{https://www.nytimes.com/2018/10/31/opinion/caravan-hate-speech-bowers-sayoc.html}\footnote{https://www.washingtonpost.com/nation/2018/11/30/how-online-hate-speech-is-fueling-real-life-violence}, so it should not be ignored on the grounds of freedom of speech.
To detect hatespeech, researchers developed human-crafted feature-based classifiers \cite{chatzakou2017mean,davidson2017automated,waseem2016hateful,macavaney2019hate}, and proposed deep neural network architectures \cite{zampieri2019semeval,gamback2017using,park2017one,badjatiya2017deep,agrawal2018deep}.
However, these methods might not explore all the important features for hatespeech detection, they ignored pre-trained language model understanding, or they proposed uni-directional language models that read only from left to right or right to left.
Recently, the BERT (Bidirectional Encoder Representations from Transformers) model \cite{devlin2018bert} has achieved tremendous success in Natural Language Processing. The key innovation of BERT lies in applying the transformer~\cite{vaswani2017attention} to language modeling tasks.
A BERT model pre-trained on these language modeling tasks forms a good basis for further fine-tuning on supervised tasks such as machine translation and question answering, \emph{etc}.
Recent work on hatespeech detection \cite{nikolov2019nikolov} has applied the BERT model and has shown prominent results over previous hatespeech classifiers. However, we point out two of its limitations in the hatespeech detection domain. First, previous studies \cite{elsherief2018peer,elsherief2018hate} have shown that a hateful corpus has distinguished linguistic/semantic characteristics compared to a non-hateful corpus. For instance, hatespeech sequences are often informal or even
intentionally mis-spelled~\cite{elsherief2018hate,arango2019hate}, so words in hateful sequences can sit in a long tail when ranking their uniqueness, and a comment can be hateful or non-hateful using the same words \cite{zhang2019hate}.
For example, ``dick'' in the sentence ``Nobody knew dick about what that meant'' is non-hateful, but ``d1ck'' in ``You are a weak small-d1cked keyboard warrior'' is hateful \footnote{\textbf{It is important to note that this paper contains hate speech examples, which may be offensive to some readers. They do not represent the views of the authors. We tried to make a balance between showing less number of hate speech examples and illustrating the challenges in real-world applications.}}.
Thus, to better understand hateful vocabularies and contexts, it is better to pre-train on a mixture of both hateful and non-hateful corpora. Doing so helps to overcome the
limitation of using BERT models pre-trained on non-hateful corpora like English Wikipedia and BookCorpus. Second, even the smallest pre-trained BERT ``base'' model contains 110M parameters. It takes a lot of computational resources to pre-train, fine-tune, and serve.
Some recent efforts aim to reduce the complexity of the BERT model with knowledge distillation techniques such as DistilBERT \cite{sanh2019distilbert} and TinyBERT \cite{jiao2019tinybert}. In these methods, a pre-trained BERT-like model is used as a teacher model, and a smaller student model (i.e. TinyBERT, DistilBERT, \emph{etc}.) is trained to produce output similar to that of the teacher model. Unfortunately, while their complexity is reduced, their performance on NLP tasks is also degraded compared to BERT. Another direction is to use cross-layer parameter sharing, such as ALBERT \cite{Lan2020ALBERT}. However, ALBERT's computational time is similar to BERT's, since the number of layers remains the same; likewise, its inference is equally expensive.
Based on the above observation and analysis, we aim to investigate whether it is possible to achieve better hatespeech prediction performance than state-of-the-art machine learning classifiers, including classifiers based on the publicly available BERT model, while significantly reducing the number of parameters compared with the BERT model. We believe that performing the pre-training tasks from the ground up and on a hatespeech-related corpus would allow the model to understand hatespeech patterns better and enhance the predictive results. However, while language model pre-training tasks require a large-scale corpus, available hatespeech datasets are normally small: only 16K$\sim$115K annotated comments \cite{waseem2016hateful,wulczyn2017ex}. Thus, we introduce a large annotated hatespeech dataset with 1.4M comments extracted from Yahoo News and Yahoo Finance. To reduce the complexity, we reduce the number of layers and hidden size, and propose Quaternion-based Factorization mechanisms in the BERT architecture. To further improve the model effectiveness and robustness, we introduce a multi-source ensemble-head fine-tuning architecture, as well as target-based adversarial training.
The major contributions of our work are:
\squishlist
\item We reduce the number of parameters in BERT considerably, and consequently the training/inferencing time and memory, while achieving better performance compared to the much larger BERT models, and other state-of-the-art hatespeech detection methods.
\item We pre-train from the ground up a hateful language model with our proposed Quaternion Factorization methods on a large-scale hatespeech dataset, which gives better performance than fine tuning a pretrained BERT model.
\item We propose a flexible classification net with multi-sources and multi-heads, building on top of the learned sequence representations to further enhance our model's predictive capability.
\item We utilize adversarial training with a proposed fine-grained and adaptive noise magnitude to improve our model's performance.
\squishend
\section{Related Work}
\label{sec:related}
Some of the earlier works in hatespeech detection have applied a variety of classical machine learning algorithms \cite{chatzakou2017mean,davidson2017automated,waseem2016hateful,macavaney2019hate}. Their intuition is to do feature engineering (i.e. manually generate features), then apply classification methods such as SVM, Random Forest, and Logistic Regression. The features are mostly Term-Frequency Inverse-Document-Frequency scores or Bag-of-Words vectors, and can be combined with additional features extracted from the user account's meta information and network structure (i.e., followers, followees, \emph{etc}). Those methods are suboptimal as they mainly rely on the quality and quantity of the human-crafted features.
Recent works have used deep neural network architectures for hatespeech detection \cite{zampieri2019semeval,mouswe2} such as CNN \cite{gamback2017using,park2017one}, RNN (i.e. LSTM and GRU) \cite{badjatiya2017deep,agrawal2018deep}, combining CNN with RNN \cite{zhang2018detecting}, or fine tuning a pretrained language models \cite{indurthi-etal-2019-fermi}.
Another direction focuses on testing the generalization of current hatespeech classifiers \cite{agrawal2018deep,dadvar2018cyberbullying,grondahl2018all}, where those methods are evaluated on other datasets and domains such as Twitter data \cite{waseem2016hateful}, Wikipedia data \cite{wulczyn2017ex}, Formspring data \cite{reynolds2011using}, and YouTube comment data \cite{dadvar2014experts}.
Unlike previous works, we pre-train a hateful language model, then build a multi-source multi-head hatespeech classifier with regularized adversarial training to enhance the model's performance.
\section{Problem Definition}
\label{sec:problem}
Given an input text sequence $s = [w_1, w_2, ..., w_n]$, where \{$w_1$, $w_2$, ..., $w_n$\} are words and $n = |s|$ is the maximum length of the input sequence $s$, the hatespeech classification task aims to build a mapping function $f: s=[w_1, w_2, ..., w_n] \longrightarrow \mathcal{R} \in [0, 1]$, where $f$ takes $s$ as input and returns a probability score $P(y=1 | s) \in [0, 1]$ indicating how likely $s$ is to be classified as hatespeech. In this paper, we approximate $f$ by a deep neural classifier, where we first pretrain $f$ with unsupervised language modeling tasks to enhance its language understanding. Then, we train $f$ with the hatespeech classification task to produce $P(y=1 | s)$.
\section{Our approach -- HABERTOR}
\label{sec:approach}
\begin{figure*}[t]
\centering
\begin{subfigure}{0.41\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/traditional-tune-BERT-1.pdf}
\caption{Traditional Fine-tuning BERT.}
\label{fig:bert-traditional}
\end{subfigure}
\hfill
\begin{subfigure}{0.58\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/bert-flex-ensemble-1.pdf}
\caption{HABERTOR with two sources and ensemble of 2 heads.}
\label{fig:bert-flex-ensemble}
\end{subfigure}
\vspace{-10pt}
\caption{Architecture comparison of traditional fine-tuned BERT and HABERTOR multi-source ensemble heads.}
\label{fig:architecture}
\vspace{-15pt}
\end{figure*}
\subsection{Tokenization} The BERT model relies on WordPiece (WP)~\cite{wu2016google}, Google's internal code that breaks down each word into common sub-word units (``wordpieces'').
These sub-words are like character n-grams, except that they are automatically chosen to ensure that
each of these sub-words is frequently observed in the input corpus.
WP improves handling of rare words, such as intentionally mis-spelled abusive words, without the need for a huge vocabulary. A comparable implementation that is open sourced is SentencePiece (SP)~\cite{kudo2018sentencepiece}. Like WP, the vocab size is predetermined. Both WP and SP are unsupervised learning models.
Since WP is not publicly released, we train an SP model using our training data and then use it to tokenize input texts.
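For illustration, a minimal sketch of this step using the open-source \emph{sentencepiece} library is shown below (file names and the vocab size are placeholders, not the exact configuration used in our experiments):
\begin{verbatim}
import sentencepiece as spm

# train the tokenizer on the training split only
spm.SentencePieceTrainer.train(
    input="train_comments.txt",
    model_prefix="habertor_sp",
    vocab_size=32000,
)

# load the trained model and tokenize a (hateful) example from the paper
sp = spm.SentencePieceProcessor(model_file="habertor_sp.model")
print(sp.encode("You are a weak small-d1cked keyboard warrior", out_type=str))
\end{verbatim}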
\subsection{Parameter Reduction with Quaternion Factorization}
Let V denote the vocab size, E the embedding size, H the hidden size, L the number of layers, and F the feed-forward/filter size. In BERT, F = 4H, E = H, and the number of attention heads is $H/64$.
Encoding the vocabs takes VH parameters.
Each BERT layer contains three parts: (i) attention, (ii) filtering/feedforward, and (iii) output. Each of the three parts has 4$\text{H}^2$ parameters.
Thus, a BERT layer has 12$\text{H}^2$ parameters and a BERT-base setting with 12 layers has $\text{VH} + 144 \text{H}^2$ parameters.
Please refer to Section \ref{sec:bert-param-analysis} in the Appendix for details.
Recently, Quaternion representations have shown their benefits over Euclidean representations in many neural designs \cite{parcollet2018quaternion,tay-etal-2019-lightweight}: (i) a Quaternion number consists of a real component and three imaginary components, encouraging a richer extent of expressiveness; and (ii) a Quaternion transformation reduces the number of parameters by 75\% compared to the traditional Euclidean transformation because of the weight sharing in the Hamilton product. Hence, we propose Quaternion factorization strategies to significantly reduce the model's parameters as follows:
\noindent\textbf{Vocab Factorization (VF)}: Inspired by \citet{Lan2020ALBERT}, we encode V vocabs using Quaternion representations with an embedding size E$\ll$H. Then, we apply a Quaternion transformation to transform E back to H, and concatenate all four parts of a Quaternion to form a regular Euclidean embedding. This leads to a total of VE + EH/4 parameters, compared to VE + EH in ALBERT.
\noindent\textbf{Attention Factorization (AF)}: If the input sequences have length N, the output of the multi-head attention is N$\times$N, which does not depend on the hidden size H. Hence, it is unnecessary to produce the attention Query, Key, and Value with the same input hidden size H and cost 3$\text{H}^2$ parameters per a layer. Instead, we produce the attention Query, Key, and Value in size C$\ll$H using linear Quaternion transformations, leading to 3CH/4 parameters.
\noindent\textbf{Feedforward Factorization (FF):} Instead of linearly transforming from H to 4H (i.e. 4$H^2$ parameters), we apply Quaternion transformations from H to I, and from I to 4H, with I$\ll$H is an intermediate size. This leads to a total of (HI/4 + IH) parameters.
\noindent\textbf{Output Factorization (OF):} We also apply Quaternion transformations from 4H to I, then from I to H. This results in (HI + IH/4) parameters, compared to 4$H^2$ in BERT.
When we apply all the above compression techniques together, the total number of parameters is reduced to VE + EH/4 + L(3CH/4 + $H^2$ + 5HI/2). In particular, with BERT-base settings of V=32k, H=768, L=12, if we set E=128, C=192, and I=128, the total number of parameters is reduced from 110M to \textbf{only 8.4M}.
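The closed-form counts above can be transcribed into a short sanity-check sketch (biases, LayerNorm, and position embeddings are omitted, so the totals are approximate, and the exact figure for {HABERTOR} also depends on the chosen configuration):
\begin{verbatim}
def bert_like_params(V, H, L):
    # V*H token embeddings + 12*H^2 per layer (attention, feed-forward, output)
    return V * H + L * 12 * H**2

def habertor_params(V, H, L, E, C, I):
    # VE + EH/4 (vocab factorization) + per-layer quaternion-factorized blocks
    return V * E + E * H // 4 + L * (3 * C * H // 4 + H**2 + 5 * H * I // 2)

# BERT-base-like setting: roughly the 110M parameters quoted above
print(bert_like_params(V=32_000, H=768, L=12) / 1e6)
\end{verbatim}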
\subsection{Pretraining tasks}
Similar to BERT, we pre-train our {HABERTOR}
with two unsupervised learning/\emph{language modeling} tasks: (i) masked token prediction, and (ii) next sentence prediction.
We describe some modifications that we made to the original BERT's implementation as follows:
\subsubsection{Masked token prediction task}
BERT generates only one masked training instance for each input sequence. Instead, inspired by \citet{liu2019roberta}, we generate $\tau$ training instances by randomly sampling with replacement masked positions $\tau$ times. We refer to $\tau$ as a \emph{masking factor}. Intuitively, this helps the model to learn differently combined patterns of tokens in the same input sequence, and boosts the model's language understanding. This small modification works especially well when we have a smaller pre-training data size, which is often true for a domain-specific task (e.g., hatespeech detection).
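A simplified sketch of this modification is given below (BERT's 80/10/10 token-replacement rule and special tokens are omitted for brevity; the 15\% masking probability is the usual default, not necessarily our exact setting):
\begin{verbatim}
import random

def make_masked_instances(token_ids, mask_id, tau=5, mask_prob=0.15):
    # tau = masking factor: each input sequence yields tau masked-LM instances,
    # each with an independently re-sampled set of masked positions
    instances = []
    for _ in range(tau):
        masked = list(token_ids)
        labels = [-100] * len(token_ids)   # -100 marks positions not predicted
        for pos, tok in enumerate(token_ids):
            if random.random() < mask_prob:
                masked[pos] = mask_id
                labels[pos] = tok
        instances.append((masked, labels))
    return instances
\end{verbatim}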
\subsubsection{Next sentence prediction task}
In BERT, the two input sentences are already paired and prepared in advanced. In our case, we have to preprocess input text sequences to prepare paired sentences for the next sentence prediction task. We conduct the following preprocessing steps:
\textbf{Step 1:} We train an unsupervised sentence tokenizer from \emph{nltk} library. Then we use the trained sentence tokenizer to tokenize each input text sequence into (splitted) sentences.
\textbf{Step 2:}
In BERT, with 50\% probability two consecutive sentences are paired as \emph{next}, and with 50\% probability two non-consecutive sentences are paired as \emph{not next}. In our case, our input text sequences can be broken into one, two, three, or more sentences. For input text sequences that consist of only one tokenized sentence, the only choice is to pair it with another random sentence to generate a \emph{not next} example. By following our 50-50 rule described in the Appendix, we ensure that an equal number of \emph{next} and \emph{not next} examples are generated.
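The snippet below is a simplified illustration of the pairing choices just described (the exact 50-50 balancing rule that guarantees equal counts is given in the Appendix; helper names are illustrative only):
\begin{verbatim}
import random

def make_sentence_pairs(docs):
    # docs: list of input text sequences, each tokenized into a list of sentences;
    # assumes more than one input sequence is available
    pairs = []
    for doc in docs:
        if len(doc) >= 2 and random.random() < 0.5:
            i = random.randrange(len(doc) - 1)
            pairs.append((doc[i], doc[i + 1], 1))               # "next"
        else:
            other = random.choice([d for d in docs if d is not doc])
            pairs.append((random.choice(doc), random.choice(other), 0))  # "not next"
    return pairs
\end{verbatim}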
\vspace{-3pt}
\subsection{Training the hatespeech prediction task}
For the hatespeech prediction task,
we propose a multi-source multi-head {HABERTOR} classifier.
The architecture comparison of the traditional fine-tuning BERT and our proposal is shown in Figure \ref{fig:architecture}. We note two main differences in our design as follows.
First, as shown in Figure \ref{fig:bert-flex-ensemble}, our {HABERTOR} has separated classification heads/nets for different input sequences of different sources but with a shared language understanding knowledge.
Intuitively, instead of measuring the same probabilities $P (y| s)$ for all input sequences, it injects additional prior source knowledge of the input sequences to measure $P (y| s,$ ``news'') or $P (y| s$, ``finance'').
Second, in addition to being multi-source, {HABERTOR} with an \emph{ensemble} of $h$ heads provides even more capability to model data variance. For each input source, we employ an ensemble of several classification heads (i.e. two classification heads for each source in Figure \ref{fig:bert-flex-ensemble}) and use a pooling layer on top to aggregate results from those classification heads. We use three pooling functions: \emph{min, max, mean}. \emph{min} pooling indicates that {HABERTOR} classifies an input comment as a hateful one only if all of the heads classify it as hatespeech, which puts a more stringent requirement on classifying hatespeech. On the other hand, {HABERTOR} will predict an input comment as a normal comment if at least one of the heads recognizes it as a normal one, which is less strict. Similarly, using \emph{max} pooling puts more restriction on declaring comments as normal, and less restriction on declaring hatespeech. Finally, \emph{mean} pooling considers the average voting from all heads.
Note that our design generalizes the traditional fine-tuning BERT architecture when $h$=1 and the two classification nets share the same weights. Thus, {HABERTOR} is more flexible than the conventional fine-tuning BERT.
Also, {HABERTOR} can be extended trivially to problems that have $q$ sources, with $h$ separated classification heads for $q$ different sources. When predicting input sequences from new sources, {HABERTOR} averages the scores from all separated classification nets.
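A minimal PyTorch sketch of this classification net is given below (class and argument names are illustrative, not our exact implementation):
\begin{verbatim}
import torch
import torch.nn as nn

class MultiSourceEnsembleHeads(nn.Module):
    def __init__(self, hidden_size, sources=("news", "finance"), h=2, pool="mean"):
        super().__init__()
        # h separate classification heads per source
        self.heads = nn.ModuleDict({
            src: nn.ModuleList([nn.Linear(hidden_size, 1) for _ in range(h)])
            for src in sources})
        self.pool = pool

    def forward(self, cls_vec, source):
        # cls_vec: [batch, hidden] transformed embedding of the [CLS] token
        scores = torch.cat([head(cls_vec) for head in self.heads[source]], dim=-1)
        if self.pool == "min":
            pooled = scores.min(dim=-1).values
        elif self.pool == "max":
            pooled = scores.max(dim=-1).values
        else:
            pooled = scores.mean(dim=-1)
        return torch.sigmoid(pooled)   # P(y=1 | s, source)
\end{verbatim}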
\vspace{-3pt}
\subsection{Parameter Estimation}
Estimating parameters in the pretraining tasks in our model is similar to BERT, and we leave the details in the Appendix due to space limitation.
For the hatespeech prediction task, we use the transformed embedding vector of the \emph{[CLS]} token as a summarized embedding vector for the whole input sequence. Let $\boldsymbol{S}$ be a collection of sequences $s_i$.
Note that $s_i$ is a normal sequence, not corrupted or concatenated with another input sequence. Let $\boldsymbol{y}_i$ be the supervised ground truth label for the input sequence, and let $\boldsymbol{\hat{y}}_i=P(\boldsymbol{y}_i| s_i,$ ``news'') (Figure \ref{fig:bert-flex-ensemble}) when $s_i$ is a \emph{news} input sequence, or $\boldsymbol{\hat{y}}_i=P(\boldsymbol{y}_i | s_i,$ ``finance'') when $s_i$ is a \emph{finance} input sequence. The hateful prediction task aims to minimize the following binary cross entropy loss:
\vspace{-0.1cm}
\begin{equation}
\label{equa:hate-speech-loss}
\nonumber
\resizebox{0.5\textwidth}{!}{$
\begin{aligned}
\mathcal{L}_{hs} = \operatorname*{argmin}_{\theta} \;
-\sum_{i=1}^{|\boldsymbol{S}|}
& \boldsymbol{y}_i \log \big( \boldsymbol{\hat{y}}_i\big) +
(1 - \boldsymbol{y}_i) \log \; \big( 1 - \boldsymbol{\hat{y}}_i \big)
\end{aligned}
$}
\vspace{-0.1cm}
\end{equation}
\noindent\textbf{Regularize with adversarial training:} To make our model more robust to perturbations of the input embeddings, we further regularize our model with adversarial training. There exist several state-of-the-art \emph{target}-based adversarial attacks such as the Fast Gradient Method (\emph{FGM}) \cite{miyato2016adversarial}, the Basic Iterative Method \cite{kurakin2016adversarial}, and the Carlini L2 attack \cite{carlini2017towards}. We use the \emph{FGM} method as it is effective and efficient according to our experiments.
In \emph{FGM}, the noise magnitude is a scalar value and is a manually chosen hyper-parameter. This is sub-optimal, as different adversarial directions of different dimensions are scaled similarly; moreover, manually tuning the noise magnitude is expensive and not optimal.
Hence, we propose to extend \emph{FGM} with a \emph{learnable} and \emph{fine-grained} noise magnitude, where the noise magnitude is parameterized by a learnable vector, providing different scales for different adversarial dimensions. Moreover, the running time of our proposal is similar to that of \emph{FGM}.
The basic idea of the adversarial training is to add a small perturbation noise $\delta$ to each of the token embeddings that makes the model mis-classify hateful comments as normal comments, and vice versa. Given the input sequence $s_i=[w_1^{(i)}, w_2^{(i)}, ..., w_u^{(i)}]$ with ground truth label $y_i$, let $\Tilde{y}_i$ be the adversarial target class of $s_i$ such that $\Tilde{y}_i \ne y_i$. In the hatespeech detection domain, our model is a binary classifier. Hence, when $y_i = 1$ ($s_i$ is a hateful comment), $\Tilde{y}_i = 0$ and vice versa. Then, the perturbation noise $\delta$ is learned by minimizing the following cost function:
\vspace{-0.1cm}
\begin{equation}
\label{equa:hate-speech-adv-loss1}
\resizebox{0.42\textwidth}{!}{$
\begin{aligned}
& \mathcal{L}_{adv} = \operatorname*{argmin}_{\delta, \delta \in [a, b]} \;
-\sum_{i=1}^{|\boldsymbol{S}|}
log P
\big(
\boldsymbol{\Tilde{y}_i} \; | s_i + \delta_i ; \hat{\theta}
\big)
\end{aligned}
$}
\vspace{-0.05cm}
\end{equation}
Note that in Eq. (\ref{equa:hate-speech-adv-loss1}), $\delta$ is constrained to be less than a predefined noise magnitude scalar in the traditional FGM method. In our proposal, $\delta$ is constrained within a range $[a, b]$ (i.e. $min(\delta) \ge$ a $\wedge$ $max(\delta) \le$ b ).
\noindent Solving Eq. (\ref{equa:hate-speech-adv-loss1}) is expensive and not easy, especially with complicated deep neural networks. Thus, we approximate each perturbation noise $\delta_i$ for each input sequence $s_i$ by linearizing the partial loss $- log P\big(\boldsymbol{\Tilde{y}_i} \; | s_i + \delta_i ; \hat{\theta} \big)$ around $s_i$.
Particularly, $\delta_i$ is measured by:
\vspace{-0.1cm}
\begin{equation}
\label{equa:hate-speech-adv-loss2}
\resizebox{0.35\textwidth}{!}{$
\begin{aligned}
\delta_i = -\epsilon \times
\frac {\nabla_{s_i} \big( -\log P\big(\boldsymbol{\Tilde{y}_i} \; | s_i ; \hat{\theta} \big) \big) }
{\Vert \nabla_{s_i} \big(-\log P\big(\boldsymbol{\Tilde{y}_i} \; | s_i ; \hat{\theta} \big) \big) \Vert_2}
\end{aligned}
$}
\vspace{-0.05cm}
\end{equation}
In Eq.~(\ref{equa:hate-speech-adv-loss2}), $\epsilon$ is a {\em learnable vector} with the same dimensionality as $\delta_i$. Enforcing the constraint $\delta_i \in [a, b]$ in Eq.~(\ref{equa:hate-speech-adv-loss1}) then reduces to restricting $\epsilon \in [a, b]$, which is trivially achieved by projecting $\epsilon$ onto $[a, b]$.
Finally, {HABERTOR} aims to minimize the following cost function:
\vspace{-0.1cm}
\begin{equation}
\label{equa:hate-speech-final-loss}
\mathcal{L} = \mathcal{L}_{hs} + \lambda_{adv}\mathcal{L}_{adv} - \lambda_{\epsilon} \Vert \epsilon \Vert_2,
\vspace{-0.1cm}
\end{equation}
\noindent where $\Vert \epsilon \Vert_2$ is an additional term that encourages larger perturbations so that the model learns to be as robust as possible, and $\lambda_{\epsilon}$ is a hyper-parameter to balance its effect. Note that we first learn the optimal values of all token embeddings and {HABERTOR}'s parameters before learning the adversarial noise $\delta$. Also, regularized adversarial training only increases the training time, not the inference time, since it does not introduce extra parameters for the model during inference.
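To make the modified FGM step concrete, the following is a minimal PyTorch-style sketch of computing the fine-grained perturbation of Eq.~(\ref{equa:hate-speech-adv-loss2}) with a learnable noise-magnitude vector. It is a sketch under our own assumptions (e.g., that the encoder accepts pre-computed embeddings via \texttt{inputs\_embeds} and returns one logit per sequence), not the released implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F

def fgm_delta(model, embeds, adv_targets, epsilon, a=1.0, b=5.0):
    # embeds: token embeddings (batch, seq_len, dim) of the input sequences s_i
    # adv_targets: flipped labels y_tilde = 1 - y, shape (batch,)
    # epsilon: learnable vector of shape (dim,), projected into [a, b]
    embeds = embeds.detach().requires_grad_(True)
    logits = model(inputs_embeds=embeds)              # hateful scores, shape (batch,)
    adv_loss = F.binary_cross_entropy_with_logits(    # -log P(y_tilde | s_i)
        logits, adv_targets.float())
    grad = torch.autograd.grad(adv_loss, embeds)[0]   # gradient w.r.t. embeddings
    norm = grad.flatten(1).norm(p=2, dim=1).clamp_min(1e-12).view(-1, 1, 1)
    eps = epsilon.clamp(a, b)                         # keep epsilon inside [a, b]
    return (-eps * grad / norm).detach()              # delta_i, added to embeds
\end{verbatim}
During training, \texttt{epsilon} would be registered as a model parameter so that it is updated together with the final loss in Eq.~(\ref{equa:hate-speech-final-loss}).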
\section{Empirical Study}
\label{sec:emperical}
\subsection{Experiment Setting}
\noindent\textbf{Dataset:}
Our primary dataset was extracted from user comments on Yahoo News and Finance over five years, and consists of 1,429,138 labeled comments. Among them, 944,391 comments are from Yahoo News and 484,747 are from Yahoo Finance. There are 100,652 hateful comments.
The 1.4M labeled data was collected as follows~\cite{hate_speech_Nobata_2016}: comments that are reported as ``abusive'' for any reason by users of Yahoo News and Finance are sent to in-house trained raters for review and labeling.
To further validate the generalizability of {HABERTOR}, we perform transfer-learning experiments on two other publicly available hatespeech datasets:
Twitter \cite{waseem2016hateful}, and Wikipedia (i.e. Wiki) \cite{wulczyn2017ex}.
The Twitter dataset consists of 16K annotated tweets, including 5,054 hateful tweets (i.e., 31\%). The Wiki dataset has 115K labeled discussion comments from English Wikipedia talk pages, including 13,590 hateful comments (i.e., 12\%). The statistics of the three datasets are shown in Table~\ref{table:datasets}.
\noindent\textbf{Train/Dev/Test split:} We split the dataset into train/dev/test sets with a ratio 70\%/10\%/20\%.
We tune hyper-parameters on the dev set, and report final results on the test set. Considering the critical mistakes reported in \citet{arango2019hate} when building machine learning models (i.e. extracting features using the entire dataset, including testing data, \emph{etc}.), we generate vocabs, pre-train the two language modeling tasks, and train the hatespeech prediction task using \textbf{only the training set}.
\begin{table}[t]
\centering
\setlength{\tabcolsep}{2pt}
\caption{Statistics of the three datasets.}
\vspace{-10pt}
\label{table:datasets}
\resizebox{0.75\linewidth}{!}{
\begin{tabular}{lcccc}
\toprule
Statistics/Datasets & Yahoo & Twitter & Wiki \\
\midrule
Total & 1.4M & 16K & 115K \\
\# Hateful & 100K & 5K & 13K \\
\% of hatespeech & 7\% & 31\% & 12\% \\
\bottomrule
\end{tabular}
}
\vspace{-12pt}
\end{table}
\noindent\textbf{Baselines, our Models and hyper-parameter Settings:}
We compare our models with 15 state-of-the-art baselines: Bag of Words (BOW) \cite{dinakar2011modeling,van2015automatic}, NGRAM, CNN \cite{kim2014convolutional}, VDCNN \cite{conneau2016very}, FastText \cite{joulin2016fasttext}, LSTM \cite{cho2014learning}, att-LSTM, RCNN \cite{lai2015recurrent}, att-BiLSTM \cite{lin2017structured}, Fermi (the best hatespeech detection method as reported in \citet{basile-etal-2019-semeval}) \cite{indurthi-etal-2019-fermi}, Q-Transformer \cite{tay-etal-2019-lightweight}, Tiny-BERT \cite{jiao2019tinybert}, DistilBERT-base \cite{sanh2019distilbert}, ALBERT-base \cite{Lan2020ALBERT}, and BERT-base \cite{devlin2018bert,nikolov2019nikolov}. We are aware of other recent language models such as Transformer-XL \cite{dai2019transformer}, RoBERTa \cite{liu2019roberta}, and DialoGPT \cite{zhang2019dialogpt}, to name a few. However, as these models are even heavier than BERT-base, we do not compare with them. Detailed descriptions of the baselines and hyper-parameter settings are given in the Appendix.
\noindent\textbf{Our models:} We denote \emph{HABERTOR} as our model without using any factorization, \emph{HABERTOR-VQF} as \emph{HABERTOR} + Vocab Quaternion Factorization, \emph{HABERTOR-VAQF} as \emph{HABERTOR} + Vocab + Attention Quaternion Factorization, \emph{HABERTOR-VAFQF} as \emph{HABERTOR} + Vocab + Attention + Feedforward Quaternion Factorization, and \emph{HABERTOR-}\emph{VAFOQF} as \emph{HABERTOR} + Vocab + Attention + Feedforward + Output Quaternion Factorization.
\noindent\textbf{Measurement:}
We evaluate models on the following metrics: Area Under the Curve (\emph{AUC}), Average Precision (\emph{AP}), False Positive Rate (FPR), False Negative Rate (FNR), and F1 score\footnote{Both AP and F1 account for Precision and Recall, so we do not further report Precision and Recall to save space.}. In real-world settings with imbalanced datasets, we care more about FPR and FNR. Thus, we report FPR at 5\% of FNR (FPR@5\%FNR), meaning we allow 5\% of hateful texts to be misclassified as normal ones, then report FPR at that point. Similarly, we report FNR at 5\% of FPR (FNR@5\%FPR).
Except for AUC and AP, the other metrics are reported using an optimal threshold
selected by using the development set.
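For clarity, the sketch below shows one way to read FPR@5\%FNR and FNR@5\%FPR off a ROC curve with scikit-learn; it reflects our reading of the metric definitions rather than the exact evaluation script.
\begin{verbatim}
from sklearn.metrics import roc_curve

def fpr_at_fnr(y_true, y_score, fnr_budget=0.05):
    # smallest FPR while letting at most fnr_budget of hateful texts slip through
    fpr, tpr, _ = roc_curve(y_true, y_score)
    fnr = 1.0 - tpr
    return fpr[fnr <= fnr_budget].min()

def fnr_at_fpr(y_true, y_score, fpr_budget=0.05):
    # smallest FNR while flagging at most fpr_budget of normal texts as hateful
    fpr, tpr, _ = roc_curve(y_true, y_score)
    fnr = 1.0 - tpr
    return fnr[fpr <= fpr_budget].min()
\end{verbatim}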
\begin{table}[t]
\centering
\setlength{\tabcolsep}{2pt}
\caption{Parameter comparison between \emph{HABERTOR-VAFOQF} and other LMs. ``--'' indicates not available.}
\vspace{-10pt}
\label{table:config}
\resizebox{1.\linewidth}{!}{
\begin{tabular}{lcccccc}
\toprule
Statistics & \specialcell{\textbf{HABERTOR}\\-\textbf{VAFOQF}} &
\specialcell{AL-\\BERT} &
\specialcell{Tiny-\\BERT} & \specialcell{Distil-\\BERT} & \specialcell{BERT-\\base} \\
\midrule
Layers (L) & \textbf{6} & 12 & 4 & 6 & 12 \\
Attention heads & \textbf{6} & 12 & 12 & 12 & 12 \\
Attention size (C) & \bf{192} & -- & -- & -- & -- \\
Embedding (E) & \textbf{128} & 128 & -- & -- & -- \\
Hidden (H) & \textbf{384} & 768 & 312 & 768 & 768 \\
Intermediate size (I)& \textbf{128} & -- & -- & -- & -- \\
Feedforward size & \textbf{1,536} & 3072 & 1,200 & 3,072 & 3,072 \\
Vocab (V) & \textbf{40k} & 30k & 30k & 30k & 30k \\
\midrule
Parameters & \bf{7.1M} & 12M & 14.5M & 65M &110M \\
\bottomrule
\end{tabular}
}
\vspace{-15pt}
\end{table}
\begin{table*}[t]
\centering
\caption{Performance of all models that we train on Yahoo train data, test on Yahoo test data and report results on Yahoo News and Yahoo Finance separately.
Best baseline is \underline{underlined}, \textbf{better} results than best baseline are \textbf{bold}.}
\vspace{-8pt}
\label{table:yahoo-performance}
\resizebox{\textwidth}{!}{
\begin{tabular}{l|cc|ccccc|ccccc}
\toprule
\multicolumn{1}{c}{\multirow{2}{*}{Model}} &
\multicolumn{2}{|c}{\bf Yahoo} &
\multicolumn{5}{|c}{\bf Yahoo News} & \multicolumn{5}{|c}{\bf Yahoo Finance} \\
\multicolumn{1}{c}{} &
\multicolumn{1}{|c}{AUC} & \multicolumn{1}{c}{AP} &
\multicolumn{1}{|c}{AUC} & \multicolumn{1}{c}{AP} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}FPR@\\ 5\%FNR\end{tabular}} & \begin{tabular}[c]{@{}c@{}}FNR@\\ 5\%FPR\end{tabular} & F1 & \multicolumn{1}{|c}{AUC} & \multicolumn{1}{c}{AP} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}FPR@\\ 5\%FNR\end{tabular}} & \begin{tabular}[c]{@{}c@{}}FNR@\\ 5\%FPR\end{tabular} & F1 \\
\midrule
%
BOW & 85.91 & 48.35
& 85.07 & 51.37 & 61.13 & 50.53 & 49.01
& 85.83 & 36.80 & 60.97 & 49.43 & 40.15\\
NGRAM & 84.19 & 42.15
& 83.17 & 45.00 & 63.45 & 57.45 & 43.59
& 84.29 & 31.63 & 63.42 & 53.94 & 35.95\\
CNN & 91.21 & 63.03
& 90.64 & 65.64 & 47.50 & 36.23 & 60.61
& 91.20 & 52.30 & 45.59 & 33.96 & 51.93 \\
VDCNN & 88.10 & 58.08
& 87.65 & 60.75 & 60.39 & 41.56 & 56.12
& 88.17 & 48.72 & 62.43 & 38.78 & 50.38 \\
FastText & 91.64 & 60.15
& 90.97 & 63.16 & 41.80 & 38.09 & 58.35
& 92.13 & 47.97 & 37.75 & 34.30 & 49.36 \\
LSTM & 91.83 & 64.17
& 91.14 & 66.59 & 43.81 & 35.09 & 60.96
& 92.38 & 54.44 & 38.26 & 31.45 & 53.36 \\
att-LSTM & 91.83 & 64.39
& 91.10 & 66.77 & 44.24 & 34.86 & 61.37
& 92.43 & 54.79 & 38.32 & 30.75 & 53.79 \\
RCNN & 91.17 & 63.34
& 90.52 & 65.72 & 48.49 & 36.37 & 60.29
& 91.32 & 53.77 & 49.40 & 32.17 & 52.73 \\
att-BiLSTM & 92.52 & 64.17
& 91.93 & 66.82 & 38.07 & 34.68 & 61.54
& 92.93 & 53.97 & 36.05 & 31.14 & 52.58 \\
Fermi & 86.53 & 41.52
& 86.10 & 45.16 & 53.33 & 55.60 & 45.65
& 85.45 & 27.53 & 56.60 & 56.48 & 33.27 \\
Q-Transformer
& 92.34 & 64.43
& 91.81 & 67.06 & 39.12 & 34.17 & 61.82
& 92.64 & 54.41 & 37.71 & 29.74 & 53.51\\
Tiny-BERT & 93.60 & 68.70
& 93.03 & 70.80 & 34.50 & 30.37 & 64.42
& 94.09 & 60.25 & 31.16 & 25.09 & 57.58 \\
DistilBERT & 93.68 & 69.15
& 93.13 & 71.25 & 34.33 & 30.05 & 64.69
& 94.12 & 60.56 & 29.23 & 24.94 & 58.01 \\
ALBERT & 93.50 & 67.99
& 92.93 & 70.28 & 34.56 & 31.15 & 63.82
& 93.94 & 58.73 & 30.12 & 25.87 & 56.37 \\
BERT-base & \underline{94.14} & \underline{70.05}
& \underline{93.56} & \underline{71.65} & \underline{32.15} & \underline{28.91} & \underline{65.30}
& \underline{94.60} & \underline{62.34} & \underline{29.14} & \underline{22.81} & \underline{59.72} \\
\midrule
HABERTOR
& {\bf 94.77} & {\bf 72.35}
& {\bf94.12} & {\bf73.79} & {\bf29.26} & {\bf27.12} & {\bf67.09}
& {\bf95.72} & {\bf65.93} & {\bf22.03} & {\bf18.99} & {\bf62.38} \\
HABERTOR-VQF
& \textbf{94.70} & \textbf{71.82}
& \textbf{94.00} & \textbf{73.25} & \textbf{29.50 }& \textbf{27.79} & \textbf{66.57 }
& \textbf{95.81} & \textbf{65.20} & \textbf{20.78} & \textbf{20.08 }& \textbf{61.60}
\\
HABERTOR-VAQF
& \textbf{94.59} & \textbf{71.53}
& \textbf{93.90} & \textbf{73.02} & \textbf{29.94} & \textbf{27.92} & \textbf{66.51}
& \textbf{95.63} & \textbf{64.64} & \textbf{23.08} & \textbf{20.39} & \textbf{60.84} \\
HABERTOR-VAFQF
& \textbf{94.43} & \textbf{70.75}
& \textbf{93.72} & \textbf{72.37} & \textbf{31.86} & \textbf{28.58} & \textbf{65.81}
& \textbf{95.42} & \textbf{63.07} & \textbf{22.87} & \textbf{21.43} & \textbf{60.11}\\
HABERTOR-VAFOQF
& \textbf{94.18} & 69.92
& 93.51 & 71.63 & 32.47 & 29.26 & \textbf{65.35}
& \textbf{95.00} & 61.99 & \textbf{24.95} & 22.81 & 59.50 \\
\bottomrule
\end{tabular}
}
\vspace{-12pt}
\end{table*}
\noindent \textbf{Model size comparison}:
\emph{HABERTOR} has 26M of parameters.
\emph{HABERTOR-VQF} and \emph{HABERTOR-VAQF} have 16.2M and 13.4M of parameters, respectively.
\emph{HABERTOR-VAFQF} and \emph{HABERTOR-VAFOQF} have 10.3M and 7.1M of parameters, respectively.
The size of all five models is much smaller than BERT-base (i.e. 110M of parameters).
The configuration comparison of {HABERTOR-VAFOQF} and other pretrained language models
is given in Table~\ref{table:config}.
\emph{HABERTOR-VAFOQF} has about half of TinyBERT's parameters (7.1M vs. 14.5M), roughly one ninth of DistilBERT's size, and 0.59 times ALBERT's size.
\subsection{Experimental results}
\subsubsection{Performance comparison}
Table~\ref{table:yahoo-performance} shows the performance of all models on the Yahoo dataset. Note that we train on the Yahoo training set, which contains both Yahoo News and Finance data, report results on Yahoo News and Finance separately, and report only AUC and AP on the combined data (the ``Yahoo'' columns in Table~\ref{table:yahoo-performance}). We see that {Fermi} performed worst among all models, mainly because Fermi transfers the pre-trained embeddings from the USE model to an SVM classifier without further fine-tuning. This limits {Fermi}'s ability to understand domain-specific contexts.
Q-Transformer works the best among non-LM baselines, but worse than LM baselines as it is not pretrained.
{BERT-base} performed the best among all baselines.
Also, the distilled models performed worse than {BERT-base} because they are compressed versions of BERT-base, which serves as their teacher model.
Next, we compare the performance of our proposed models against each other. Table \ref{table:yahoo-performance} shows that our models' performance decreases as we compress more components (\emph{p-value} $<$ 0.05 under the directional Wilcoxon signed-rank test). We attribute this to a trade-off between model size and model performance, as factorizing a component naturally loses some of its information.
Then, we compare our proposed models with BERT-base -- the best baseline. Table \ref{table:yahoo-performance} shows that except for \emph{HABERTOR-VAFOQF},
our other proposals outperformed BERT-base, improving F1-score by an average of 1.2\% and 1.5\% in Yahoo News and Yahoo Finance, respectively (\emph{p-value} $<$ 0.05). Recall that in addition to improving hatespeech detection performance, our models are much smaller than BERT-base. For example, \emph{HABERTOR} saves 84M parameters compared to BERT-base, and \emph{HABERTOR-VAFQF} saves nearly 100M parameters. Interestingly, even our smallest \emph{HABERTOR-VAFOQF} model (7.1M parameters) achieves results similar to BERT-base (i.e. the performance difference between them is not significant under the directional Wilcoxon signed-rank test).
These results show the effectiveness of our proposed models against {BERT-base}, the best baseline, and reinforce the need for pretraining a language model on a hateful corpus for better hateful language understanding.
\subsubsection{Running time and memory comparison}
\noindent\textbf{Running time:} Among LM baselines, TinyBERT is the fastest. Though ALBERT has the smallest number of parameters by adopting the cross-layer weight sharing mechanism, ALBERT has the same number of layers as BERT-base, leading to a similar computational expense as BERT-base.
Our \emph{HABERTOR-VQF} and \emph{HABERTOR-VAQF} have a parameter size very similar to \emph{TinyBERT}, and their training/inference times are similar. Interestingly, even though \emph{HABERTOR} has 26M parameters, its runtime is also competitive with \emph{TinyBERT}. This is because, among the 26M parameters in \emph{HABERTOR}, 15.4M are used for encoding 40k vocabs; these are not computational parameters and are only updated sparsely during training. \emph{HABERTOR-VAFQF} and \emph{HABERTOR-VAFOQF} significantly reduce the number of parameters compared to TinyBERT, leading to a speedup during the training and inference phases. In particular, our experiments on 4 K80 GPUs with a batch size of 128 show that \emph{HABERTOR-VAFOQF} is 1.6 times faster than TinyBERT.
\noindent\textbf{Memory consumption:} Our experiments with a batch size of 128 on 4 K80 GPUs show that among the LM baselines, TinyBERT and ALBERT are the most lightweight models, consuming $\sim$13GB of GPU memory. Compared to TinyBERT and ALBERT, \emph{HABERTOR} takes an additional 4GB of GPU memory, while \emph{HABERTOR-VQF} and \emph{HABERTOR-VAQF} have a similar memory consumption, and \emph{HABERTOR-VAFQF} and \emph{HABERTOR-VAFOQF} reduce GPU memory usage by 1$\sim$3 GB.
\noindent\textbf{Compared to BERT-base:} In general, \emph{HABERTOR} is 4$\sim$5 times faster and uses 3.1 times less GPU memory than {BERT-base}. Our most lightweight model, \emph{HABERTOR-VAFOQF}, even reduces GPU memory usage by 3.6 times while remaining as effective as BERT-base. The memory savings also indicate that we could increase the batch size to perform inference even faster.
\begin{table}[t]
\centering
\caption{Generalizability of \emph{HABERTOR} and top baselines. Report AUC, AP, and F1 on each test set.}
\vspace{-10pt}
\label{table:generalizability}
\resizebox{0.5\textwidth}{!}{
\begin{tabular}{l|lll|lll}
\toprule
& \multicolumn{3}{c|}{Twitter} & \multicolumn{3}{c}{Wiki} \\
\toprule
Model & \multicolumn{1}{c}{AUC} & \multicolumn{1}{c}{AP} & F1 & \multicolumn{1}{c}{AUC} & \multicolumn{1}{c}{AP} & F1 \\
\midrule
Fermi & 89.03 & 79.23 & 74.52& 96.59& 84.26& 75.51\\
TinyBERT & 92.23 & 83.88 & 78.33 & 97.10 & 87.64 & 79.70 \\
DistilBERT & 92.13 & 80.21 & 77.89 & 97.23 & 88.16 & 80.21 \\
ALBERT & 92.55 & 86.51 & 78.76 & 97.66 & 88.91 & 80.66 \\
BERT & \underline{93.21} & \underline{86.67} & \underline{79.68} & \underline{97.75} & \underline{89.23} & \underline{80.73} \\
\midrule
HABERTOR &
\textbf{93.52} & \textbf{88.57} & \textbf{81.22 } &
97.46 & 88.65 & \textbf{80.81} \\
HABERTOR-VQF &
\textbf{93.94} & \textbf{88.45} & \textbf{81.21} &
97.40 & 88.64 & 80.66 \\
HABERTOR-VAQF &
\textbf{93.57} & \textbf{87.66} & \textbf{80.23} &
97.45 & 88.61 & 80.63 \\
HABERTOR-VAFQF &
\textbf{93.51} & \textbf{87.38} & \textbf{80.16} &
97.37 & 88.21 & 80.23 \\
HABERTOR-VAFOQF &
\textbf{93.49} & \textbf{87.14} & \textbf{80.06} &
97.23 & 87.94 & 79.61 \\
\bottomrule
\end{tabular}
}
\vspace{-15pt}
\end{table}
\subsubsection{Generalizability analysis}
We perform hatespeech Language Model transfer learning on the Twitter and Wiki hatespeech datasets to understand our models' generalizability. We use our models' pre-trained language model checkpoints learned from the Yahoo hatespeech dataset, and fine-tune them on the Twitter/Wiki datasets. Note that the fine-tuning also includes regularized adversarial training for best performance.
Next, we compare the performance of our models with Fermi and the four LM baselines -- the best baselines reported in Table \ref{table:yahoo-performance}.
Table~\ref{table:generalizability} shows that {BERT-base} performed best among the fine-tuned LM baselines, which is consistent with our reported results on the Yahoo dataset in Table \ref{table:yahoo-performance}. When comparing against BERT-base (i.e. the best baseline) on the Twitter dataset, all our models outperformed BERT-base.
On the Wiki dataset, interestingly, our models are very competitive with BERT-base and achieve similar F1-scores. Recall that BERT-base has the major advantage of being pre-trained on 2,500M Wiki words, and thus potentially understands Wiki language styles and contexts better. In contrast, HABERTOR and its four factorized versions are pre-trained on 33M words from the Yahoo hatespeech dataset. As shown in the ablation study (refer to \emph{AS2} in Section \ref{sec:ablation} of the Appendix), a larger pre-training data size leads to better language understanding and higher hatespeech prediction performance.
Hence, if we acquire larger pre-training data with more hateful examples, our models' performance can be boosted further.
All of these results show that our models generalize well to other hatespeech datasets compared with BERT-base, with a significant reduction in model complexity.
\vspace{-3pt}
\subsubsection{Ablation study}
\begin{table}[]
\centering
\caption{Comparison of the \textbf{traditional} FGM with a \textit{fixed}, \textit{scalar} noise magnitude against the FGM with \textbf{our} proposed \textit{fine-grained}, \textit{adaptive} noise magnitude. Better
results are in \textbf{bold}.}
\vspace{-5pt}
\label{table:adv-comparison}
\resizebox{0.5\textwidth}{!}{
\begin{tabular}{l|l|lll|lll}
\toprule
\multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & \multicolumn{3}{c|}{Twitter} & \multicolumn{3}{c}{Wiki} \\ \hline
\multicolumn{1}{c|}{Model} & \multicolumn{1}{c|}{Type} & \multicolumn{1}{c}{AUC} & \multicolumn{1}{c}{AP} & F1 & \multicolumn{1}{c}{AUC} & \multicolumn{1}{c}{AP} & F1 \\
\midrule
{\multirow{2}{*}{HABERTOR}}
& {traditional}
& 93.54 & 87.88 & 79.84
& 97.50 & 88.14 & 80.13
\\
\cline{2-8}
& {\textbf{ours}}
& 93.52 & \textbf{88.57} & \textbf{81.22}
& 97.46 & \textbf{88.65} & \textbf{80.81}
\\
\midrule
\midrule
{\multirow{2}{*}{\specialcell{HABERTOR-\\VQF}}}
& {traditional}
& 93.62 & 88.09 & 80.26
& 97.44 & 88.19 & 80.11 \\
\cline{2-8}
& {\textbf{ours}}
& 93.94 & \textbf{88.45} & \textbf{81.21}
& 97.40 & \textbf{88.64} & \textbf{80.66} \\
\midrule
\midrule
{\multirow{2}{*}{\specialcell{HABERTOR-\\VAQF}}}
& {traditional}
& 93.03 & 86.77 & 79.56
& 97.44 & 88.15 & 80.16 \\
\cline{2-8}
& {\textbf{ours}}
& 93.57 & \textbf{87.66} & \textbf{80.23}
& 97.45 & \textbf{88.61} & \textbf{80.63} \\
\midrule
\midrule
{\multirow{2}{*}{\specialcell{HABERTOR-\\VAFQF}}}
& {traditional}
& 92.89 & 86.42 & 79.64
& 97.42 & 88.08 & 79.71 \\
\cline{2-8}
& {\textbf{ours}}
& 93.51 & \textbf{87.38} & \textbf{ 80.16}
& 97.37 & \textbf{88.21} & \textbf{80.23} \\
\midrule
\midrule
{\multirow{2}{*}{\specialcell{HABERTOR-\\VAFOQF}}}
& {traditional}
& 93.08 & 86.67 & 79.33
& 97.28 & 87.40 & 79.19 \\
\cline{2-8}
& {\textbf{ours}}
& 93.49 & \textbf{87.14} & \textbf{80.06}
& 97.23 & \textbf{87.94} & \textbf{79.61} \\
\bottomrule
\end{tabular}
}
\vspace{-10pt}
\end{table}
\noindent{\bf Effectiveness of the adversarial attacking method FGM with our fine-grained and adaptive noise magnitude:}
To show the effectiveness of the FGM attacking method with our proposed fine-grained and adaptive noise magnitude, we compare the performance of HABERTOR and its four factorized versions when (i) using a fixed, scalar noise magnitude as in the \textit{traditional} FGM method, and (ii) using a fine-grained, adaptive noise magnitude as in \textit{our} proposal. We evaluate the results by performing Language Model transfer learning on the Twitter and Wiki datasets and present results in Table \ref{table:adv-comparison}. Note that the noise magnitude range is set to [1, 5] in both cases (i) and (ii) for a fair comparison, and we manually search the optimal value of the noise magnitude in the \textit{traditional} FGM method using the development set of each dataset.
We observe that in all our five models, learning with our modified FGM produces better results compared to learning with a traditional FGM, confirming the effectiveness of our proposed fine-grained and adaptive noise magnitude.
We also plot the histogram of the learned noise magnitudes of HABERTOR on the Twitter and Wiki datasets. Figure \ref{fig:noise-vis} shows that different embedding dimensions are assigned different learned noise magnitudes, demonstrating the need for our proposed fine-grained and adaptive noise magnitude, which automatically assigns different noise scales to different embedding dimensions.
\begin{figure}[t]
\centering
\begin{subfigure}{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/noise-twitter-habertor.pdf}
\caption{Twitter}
\label{fig:noise-vis-twitter}
\end{subfigure}
\hfill
\begin{subfigure}{0.23\textwidth}
\centering
\includegraphics[width=\textwidth]{figs/noise-wiki-habertor.pdf}
\caption{Wiki.}
\label{fig:noise-vis-wiki}
\end{subfigure}
\vspace{-10pt}
\caption{Histogram of the learned noise magnitude when performing Language Model transfer learning of HABERTOR on (a) Twitter, and (b) Wiki datasets.}
\label{fig:noise-vis}
\vspace{-15pt}
\end{figure}
\noindent\textbf{Additional Ablation study:} We conduct several ablation studies to understand HABERTOR's sensitivity. Due to space limitation, we summarize the key findings as follows, and leave detailed information and additional study results in the Appendix:
(i) A large \emph{masking factor} in {HABERTOR} is helpful to improve its performance;
(ii) Pretraining with a larger hatespeech dataset or a more fine-grained pretraining can improve the hatespeech prediction performance;
and (iii) Our fine-tuning architecture with multi-source and ensemble of classification heads helps improve the performance.
\vspace{-3pt}
\subsubsection{Further application discussion}
\begin{table}[t]
\centering
\caption{Application of our models on the sentiment classification task using Amazon Prime Pantry reviews.}
\vspace{-10pt}
\label{table:application}
\resizebox{0.42\textwidth}{!}{
\begin{tabular}{l|ccc}
\toprule
Model & AUC & AP & F1 \\
\midrule
ALBERT-base & 98.77 & 99.77 & 97.95 \\
BERT-base & 99.16 & 99.84 & 98.42 \\
\midrule
HABERTOR & 99.10 & 99.83 & 98.39 \\
HABERTOR+VQF & 99.09 & 99.83 & 98.27 \\
HABERTOR+VAQF & 98.90 & 99.80 & 98.07 \\
HABERTOR+VAFQF & 98.87 & 99.79 & 98.05 \\
HABERTOR+VAFOQF & 98.61 & 99.75 & 97.78 \\
\bottomrule
\end{tabular}
}
\vspace{-15pt}
\end{table}
Our proposals were designed for the hatespeech detection task, but to an extent they can be applied to other text classification tasks. To illustrate this point, we evaluate our models (i.e. all our pretraining and fine-tuning designs) on a sentiment classification task. Particularly, we used 471k Amazon-Prime-Pantry reviews \cite{mcauley2015image}, selected because its size is reasonable for fast pretraining, fine-tuning, and obtaining results.
After some preprocessing (i.e. removing duplicated reviews, labeling reviews with rating scores $\ge$ 4 as positive and ratings $\le$ 2 as negative, and dropping the neutral class for easy illustration), we obtained 301k reviews and split them into 210k-training/30k-development/60k-testing sets with a ratio of 70/10/20; a minimal sketch of this label mapping follows.
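The sketch below is ours; the column names (\texttt{overall} for the star rating, \texttt{reviewText} for the text) follow the common Amazon review dump format and are assumptions.
\begin{verbatim}
import pandas as pd

def build_sentiment_labels(reviews: pd.DataFrame) -> pd.DataFrame:
    # drop duplicated reviews, remove the neutral class, and binarize ratings
    df = reviews.drop_duplicates(subset="reviewText")
    df = df[df["overall"] != 3].copy()                 # no neutral class
    df["label"] = (df["overall"] >= 4).astype(int)     # >=4 positive, <=2 negative
    return df[["reviewText", "label"]]
\end{verbatim}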
Next, we pretrained our models on 210k training reviews which contain 5.06M of words.
Then, we fine-tuned our models on the 210k training reviews, selected a classification threshold on the 30k development reviews, and report AUC, AP, and F1 on the 60k testing reviews. We compare the performance of our models with fine-tuned BERT-base and ALBERT-base -- two best baselines.
We observe that although it is pretrained on only the 5.06M words of the 210k training reviews, HABERTOR performs very similarly to BERT-base, while improving over ALBERT-base. Except for HABERTOR-VAFOQF, whose F1-score is slightly smaller than ALBERT-base's, our other three compressed models worked better than ALBERT-base, showing the effectiveness of our proposals.
\section{Conclusion}
\vspace{-3pt}
In this paper, we presented the \emph{HABERTOR} model for detecting hatespeech. \emph{HABERTOR} understands the language of the hatespeech datasets better than BERT-base, is 4-5 times faster, uses less than 1/3 of the memory, and achieves better hatespeech classification performance.
Overall, \emph{HABERTOR} outperforms 15 state-of-the-art hatespeech classifiers and generalizes well to unseen hatespeech datasets, verifying not only its efficiency but also its effectiveness.
\vspace{-5pt}
\section*{Acknowledgments}
\vspace{-3pt}
This work was supported in part by NSF grant CNS-1755536.
\section{Appendix}
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{figs/Quaternions.pdf}
\vspace{-10pt}
\caption{Comparison between a linear Euclidean transformation (Left) and a linear Quaternion transformation (Right). The Hamilton product in Quaternion space is replaced with an equivalent dot product in real space for easy reference. Computing each output dimension in the real-valued transformation (left) always needs 4 new parameters, resulting in 16 degrees of freedom. In contrast, only 4 parameters are used and shared in producing all 4 output dimensions in the Quaternion transformation, leading to better inter-dependency encoding and a 75\% parameter saving.
}
\vspace{-10pt}
\label{fig:quaternions}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{figs/BertArchitecture.pdf}
\caption{General view of the BERT architecture. Uncovering the architecture from left to right.}
\label{fig:bert-architecture}
\end{figure*}
\subsection{Parameter Estimation for pretraining HABERTOR with language model tasks}
Given two input sentences $s_i=[w_1^{(i)}, w_2^{(i)}, ..., w_u^{(i)}]$ and $s_j=[w_1^{(j)},$ $w_2^{(j)},$ $..., w_v^{(j)}]$, let the text sequence be $c_l = s_{ij} = [w_1^{(i)}, w_2^{(i)}, ..., w_u^{(i)}, w_1^{(j)}, w_2^{(j)}, ..., w_v^{(j)}]$=$[w_1, ..., w_{n}]$ ($n=u+v$) with label $y_l$, where we have already paired the sentences to generate a \emph{next} (i.e. $y_l = 1$) or \emph{not next} (i.e. $y_l = 0$) training instance. Let $\bar{c}_l$ be a corrupted sequence of $c_l$, where we mask some tokens in $c_l$. Denote by $\boldsymbol{C}$ a collection of such training text sequences $c_{l}$. The masked token prediction task aims to reconstruct each $c_l \in \boldsymbol{C}$ given the corrupted sequence $\bar{c_l}$. In other words, the masked token prediction task maximizes the following log-likelihood:
\vspace{-5pt}
\begin{equation*}
\resizebox{0.3\textwidth}{!}{$
\begin{aligned}
\mathcal{L}_1 &=
\operatorname*{argmax}_{\theta} \; \sum_{l=1}^{|\boldsymbol{C}|} \log \; p_\theta (c_l | \bar{c_l}) \\
&\approx
\sum_{l=1}^{|\boldsymbol{C}|} \sum_{t=1}^{n} \mathbbm{1}_t \; \log \; p_\theta(w_t | \bar{c_l})
\end{aligned}
$}
\vspace{-5pt}
\end{equation*}
where $\mathbbm{1}_t$ is an indicator function with $\mathbbm{1}_t$ = 1 when the $t^{th}$ token is a \emph{[MASK]} token, and $\mathbbm{1}_t$ = 0 otherwise. $\theta$ refers to all the model's learnable parameters, and $w_t$ is the ground-truth token at position $t$. Denote by $H_\theta (c_l) = [H_\theta (c_l)_1, H_\theta (c_l)_2, ..., H_\theta (c_l)_n]$ the sequence of transformed output embedding vectors obtained at the final layer for the corresponding $n$ tokens in the sequence $c_l$, where $H_\theta (c_l)_t \in \mathbb{R}^d$ and $d$ is the embedding size. By parameterizing a linear layer with a transformation $W_1 \in \mathbb{R}^{V \times d}$ (where $V$ is the vocabulary size) as a decoder, we can rewrite $\mathcal{L}_1$
as follows:
\begin{equation*}
\resizebox{0.5\textwidth}{!}{$
\begin{aligned}
&\mathcal{L}_1
= \operatorname*{argmin}_{\theta}
-\sum_{l=1}^{|\boldsymbol{C}|}
\sum_{t=1}^{n}
\mathbbm{1}_t \log\frac{\exp \bigg( \big\lbrack W_1H_\theta (\bar{c}_l)_t \big\rbrack _{w_t} \bigg)}
{\sum_{k=1}^{V}
\exp \bigg( \big\lbrack W_1H_\theta (\bar{c}_l)_t \big\rbrack _k \bigg)}
\end{aligned}
$}
\end{equation*}
where $\lbrack \cdot \rbrack _{w_t}$ refers to the entry of the output vector corresponding to the ground-truth token $w_t$.
For the next sentence prediction task, the objective is to minimize the following binary cross entropy loss function:
\vspace{-0.2cm}
\begin{equation*}
\resizebox{0.44\textwidth}{!}{$
\begin{aligned}
\mathcal{L}_2 = \operatorname*{argmin}_{\theta} \; -\sum_{l=1}^{|\boldsymbol{C}|}
& y_l \log \big( \sigma (W_2 H_\theta (c_l)_1) \big) + \\
& (1 - y_l) \log \big( 1 - \sigma (W_2 H_\theta (c_l)_1) \big)
\end{aligned}
$}
\vspace{-5pt}
\end{equation*}
\noindent where $W_2 \in \mathbb{R}^{d}$ and $H_\theta (c_l)_1$ refers to the embedding vector of the first token in the sequence $c_l$, i.e. the \emph{[CLS]} token. The intuition is that the \emph{[CLS]} embedding vector summarizes the information of all other tokens via the attention Transformer network~\cite{vaswani2017attention}.
Then, pretraining with two language modeling tasks aims to minimize both loss functions
$\mathcal{L}_1$ and $\mathcal{L}_2$
by:
$\mathcal{L}_{LM} = \operatorname*{argmin}_{\theta} \; \big( \mathcal{L}_1 + \mathcal{L}_2 \big)$.
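As a rough PyTorch-style sketch of how $\mathcal{L}_1$ and $\mathcal{L}_2$ can be combined in practice (the tensor shapes and the use of an ignore index to play the role of the indicator $\mathbbm{1}_t$ are our assumptions):
\begin{verbatim}
import torch.nn.functional as F

def pretraining_loss(token_logits, token_labels, nsp_logits, nsp_labels):
    # token_logits: (batch, seq_len, vocab) decoder outputs W_1 * H(c_bar)
    # token_labels: (batch, seq_len) ground-truth ids, -100 at unmasked positions
    # nsp_logits:   (batch,) next-sentence score before the sigmoid
    # nsp_labels:   (batch,) 1 = next sentence, 0 = not next
    l1 = F.cross_entropy(token_logits.transpose(1, 2),   # L_1: masked-token loss
                         token_labels, ignore_index=-100)
    l2 = F.binary_cross_entropy_with_logits(              # L_2: next-sentence loss
        nsp_logits, nsp_labels.float())
    return l1 + l2                                         # L_LM
\end{verbatim}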
\subsection{Quaternion}
In mathematics, Quaternions\footnote{https://en.wikipedia.org/wiki/Quaternion} are a hypercomplex number system. A Quaternion number \emph{P} in a Quaternion space $\mathbb{H}$ is formed by a real component (\emph{r}) and three imaginary components as follows:
\vspace{-6pt}
\begin{equation}
\label{equa:quaternion}
P = r + a \boldsymbol{i} + b \boldsymbol{j} + c \boldsymbol{k},
\end{equation}
where $\boldsymbol{ijk} = \boldsymbol{i^2} = \boldsymbol{j^2} = \boldsymbol{k^2} = -1$. The non-commutative multiplication rules of quaternion numbers are: $\boldsymbol{ij}=\boldsymbol{k}$, $\boldsymbol{jk}=\boldsymbol{i}$, $\boldsymbol{ki}=\boldsymbol{j}$, $\boldsymbol{ji}=-\boldsymbol{k}$, $\boldsymbol{kj}=-\boldsymbol{i}$, $\boldsymbol{ik}=-\boldsymbol{j}$. In Eq.~(\ref{equa:quaternion}), $r, a, b, c$ are real numbers $\in \mathbb{R}$. Note that $r, a, b, c$ can also be extended to real-valued vectors to obtain a Quaternion embedding, which we use to represent each word-piece embedding.
\noindent\textbf{Algebra on Quaternions:}
We present the Hamilton product on Quaternions, which is the heart of the linear Quaternion-based transformation.
The Hamilton product (denoted by the $\otimes$ symbol) of two Quaternions $P \in \mathbb{H}$ and $Q \in \mathbb{H}$ is defined as:
\vspace{-4pt}
\begin{equation}
\label{equa:quaternion-hamiltonprod}
\begin{aligned}
P \otimes Q =& (r_P r_Q - a_P a_Q - b_P b_Q - c_P c_Q) + \\
& (r_P a_Q + a_P r_Q + b_P c_Q - c_P b_Q) \boldsymbol{i} + \\
& (r_P b_Q - a_P c_Q + b_P r_Q + c_P a_Q) \boldsymbol{j} + \\
& (r_P c_Q + a_P b_Q - b_P a_Q + c_P r_Q) \boldsymbol{k}
\end{aligned}
\end{equation}
\noindent\textbf{Activation function on Quaternions:} Similar to \cite{tay-etal-2019-lightweight,parcollet2018quaternion}, we use a \emph{split activation function} because of its stability and simplicity. \emph{Split activation function} $\beta$ on a Quaternion $P$ is defined as:
\vspace{-4pt}
\begin{equation}
\label{equa:quaternion-activation}
\beta(P) = f(r) + f(a) \boldsymbol{i} + f(b)\boldsymbol{j} + f(c)\boldsymbol{k}
\end{equation}
where $f$ is any standard activation function for Euclidean (real-valued) inputs.
\noindent\textbf{Why does a linear Quaternion transformation reduce 75\% of parameters compared to the linear Euclidean transformation?}
Figure \ref{fig:quaternions} shows a comparison between a traditional linear Euclidean transformation and a linear Quaternion-based transformation.
In Euclidean space, the same input is multiplied with different weights to produce different output dimensions. Particularly, given a real-valued 4-dimensional vector $[r_{in}, a_{in}, b_{in}, c_{in}]$, we need to parameterize a weight matrix of 16 parameters (i.e. 16 degrees of freedom) to transform the 4-dimensional input vector into a 4-dimensional output vector $[r_{out}, a_{out}, b_{out},$ $c_{out}]$. With the Quaternion transformation, however, the input vector is represented with 4 components, where $r_{in}$ is the value of the real component and $a_{in}$, $b_{in}$, $c_{in}$ are the corresponding values of the three imaginary parts $\boldsymbol{i}$, $\boldsymbol{j}$, $\boldsymbol{k}$, respectively. Because of the weight-sharing nature of the Hamilton product, different output dimensions take different combinations of the same input with exactly the same 4 weight parameters \{$r_{w}, a_{w}, b_{w}, c_{w}$\}. Thus, the Quaternion transformation reduces the number of parameters by 75\% compared to the real-valued representation in Euclidean space.
\noindent\textbf{Quaternion-Euclidean conversion:}
Another excellent property of using Quaternion representations and Quaternion transformations is that converting from Quaternion to Euclidean space and vice versa is convenient. To convert a real-valued vector $v \in \mathbb{R}^d$ into a Quaternion-based vector, we treat the first $d/4$ dimensions of $v$ as the value of the real component, and the next three blocks of $d/4$ dimensions as the three imaginary parts, respectively. Similarly, to convert a Quaternion vector $v \in \mathbb{H}^d$ into a real-valued vector, we simply concatenate all four components of the Quaternion vector and treat the concatenated vector as a real-valued vector in Euclidean space.
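The Hamilton product of Eq.~(\ref{equa:quaternion-hamiltonprod}) and the conversion between real and Quaternion vectors can be sketched in NumPy as follows; the splitting of a real vector into four equal blocks follows the convention described above.
\begin{verbatim}
import numpy as np

def hamilton_product(p, q):
    # p, q: quaternions given as stacked (r, a, b, c) component arrays
    rp, ap, bp, cp = p
    rq, aq, bq, cq = q
    return np.stack([
        rp*rq - ap*aq - bp*bq - cp*cq,   # real part
        rp*aq + ap*rq + bp*cq - cp*bq,   # i part
        rp*bq - ap*cq + bp*rq + cp*aq,   # j part
        rp*cq + ap*bq - bp*aq + cp*rq,   # k part
    ])

def real_to_quaternion(v):
    # split a d-dimensional real vector into four d/4-dimensional components
    return np.stack(np.split(v, 4))

def quaternion_to_real(q):
    # concatenate the four components back into a single real vector
    return np.concatenate(q)
\end{verbatim}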
\subsection{Analysis on the BERT's Parameters}
\label{sec:bert-param-analysis}
Figure \ref{fig:bert-architecture} presents a general view of the BERT architecture. Each BERT layer contains three parts: (i) attention, (ii) filtering, and (iii) output.
The \emph{attention} part parameterizes three weight transformation matrices H$\times$H to form key, query, and value from the input, and another weight matrix H$\times$H to transform the output attention results. The total parameters of this part are 4$H^2$.
The \emph{filtering} part parameterizes a weight matrix H$\times$4H to transform the output of the \emph{attention} part, leading to a total of 4$H^2$ parameters.
The \emph{output} part parameterizes a weight matrix 4H$\times$H to transform the output of the \emph{filtering} part from 4H back to H, resulting in 4$H^2$ parameters.
Thus, a BERT layer has 12$H^2$ parameters, and a BERT-base setting with 12 layers has 144$H^2$ parameters. Taking into account the number of parameters for encoding $V$ vocabs, the total number of parameters of BERT is $VH + 144H^2$.
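As a quick sanity check of this formula (a back-of-the-envelope count that ignores positional/segment embeddings, biases, and layer norms):
\begin{verbatim}
def transformer_params(vocab_size, hidden, layers):
    # V*H vocab embeddings + 12*H^2 per layer (attention, filtering, output)
    return vocab_size * hidden + layers * 12 * hidden ** 2

# BERT-base: V = 30k, H = 768, 12 layers -> ~108M, close to the reported 110M
print(transformer_params(30_000, 768, 12) / 1e6)
\end{verbatim}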
\subsection{50-50 Rule}
To ensure the 50-50 rule, we perform the following method: let M be the number of input text sequences that we can split into multiple sentences, and N be the number of input sequences that can be tokenized into only one sentence. We want the number of sentences generated as \emph{next} sentence pairs (sampled with probability $p_1$) to be roughly equal to the number of sentences formed as \emph{not next} sentence pairs (sampled with probability $p_2$). In other words, $M \times p_1 = (M + N)\times p_2$ or $\frac{p_1}{p_2} = \frac{(M + N)}{M}$. Since $p_1 + p_2 = 1$, replacing $p_2 = 1 - p_1$, we have: $M\times p_1 = (M + N) (1 - p_1) \longrightarrow p_1 = \frac{(M + N)}{(2M + N)}$. With $p_1$ established, we set $p_1$ as the probability for a sentence to be paired with another consecutive sentence in the same input sequence to generate a \emph{next} sentence example; a short sketch follows below.
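A small sketch of this sampling probability, with a check that the expected numbers of \emph{next} and \emph{not next} pairs match (the counts of M and N below are hypothetical):
\begin{verbatim}
def next_pair_probability(m, n):
    # p1 = (M + N) / (2M + N): chance of forming a "next" pair from a
    # multi-sentence sequence, so that next / not-next pairs are balanced
    return (m + n) / (2 * m + n)

m, n = 900_000, 500_000                      # hypothetical sequence counts
p1 = next_pair_probability(m, n)
p2 = 1.0 - p1
assert abs(m * p1 - (m + n) * p2) < 1e-3     # expected pair counts are equal
\end{verbatim}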
\subsection{Baselines and Hyper-parameter Settings}
15 Baselines are described as follows:
\squishlist
\item {\bf BOW}: Similar to \citet{dinakar2011modeling,van2015automatic}, we extract bag of words features from input sequences, then a traditional machine learning classifier is built on top of the extracted features.
\item {\bf NGRAM}: It is similar to BOW model, except using n-gram features of the input sequence.
\item {\bf CNN} \cite{kim2014convolutional}: It is a state-of-the-art word based CNN neural network model.
\item {\bf VDCNN} \cite{conneau2016very}: It is a character based CNN model with a deeper architecture and optional shortcut between layers.
\item {\bf FastText} \cite{joulin2016fasttext}: An extension of the Word2Vec model, where it represents each word as an n-gram of characters to provide embeddings for rare words.
\item {\bf LSTM} \cite{cho2014learning}:
We use: (i) the last LSTM output vector, and (ii) a pooling layer (\emph{max} and \emph{mean}) to aggregate LSTM output vectors and report only the best results.
\item {\bf att-LSTM}: A LSTM model with an attention layer to aggregate LSTM hidden state vectors.
\item {\bf RCNN} \cite{lai2015recurrent}: A combination between a bi-directional recurrent structure to capture contextual information and a \emph{max} pooling layer to extract key features.
\item {\bf att-BiLSTM} \cite{lin2017structured}: It is a self-attentive Bidirectional LSTM model.
\item {\bf Fermi} \cite{indurthi-etal-2019-fermi}: The best hatespeech detection method, as reported in \cite{basile-etal-2019-semeval}. It built a SVM classifier on top of the pretrained embeddings from Universal Sentence Encoder (USE) \cite{cer2018universal} model.
\item\textbf{Q-Transformer} \cite{tay-etal-2019-lightweight}: It is a Quaternion Transformer. It replaces all Euclidean embeddings and linear transformations with Quaternion embeddings and Quaternion linear transformations. We use the \emph{full} version of Q-Transformer due to its high effectiveness.
\item \textbf{Tiny-BERT} \cite{jiao2019tinybert}: It is a compressed model obtained by performing knowledge distillation on BERT-base during its pretraining phase, with a smaller number of layers and smaller embedding sizes. We adopt the 4-layer Tiny-BERT with 14.5M parameters.
\item \textbf{DistilBERT-base} \cite{sanh2019distilbert}: another knowledge-distilled version of the BERT-base model, distilled during BERT's pre-training phase.
\item \textbf{ALBERT-base} \cite{Lan2020ALBERT}: a lightweight version of the BERT-base model with parameter-sharing strategies and an inter-sentence coherence pretraining task.
\item \textbf{BERT-base} \cite{devlin2018bert}: Similar to \citet{nikolov2019nikolov}, we use pre-trained BERT with 12 layers and uncased (our experiments show uncased works better than cased vocab) to perform fine-tuning for the hatespeech detection.
\squishend
For baselines that require word embeddings, to maximize their performance, we initialize word embeddings with both GloVe pre-trained word embeddings~\cite{Pennington14glove:global} and random initialization, and report their best results. We implement BOW and NGRAM with Naive Bayes, Random Forest, Logistic Regression, and Xgboost classifiers, and report the best results.
By default, our vocab size is set to 40k. The number of pretraining epochs is set to 60, and the batch size is set to 768.
The learning rate is set to 5e-5 for the masked token prediction and next sentence prediction tasks, which are the two pretraining tasks, and 2e-5 for the hatespeech prediction task, which is the fine-tuning task. The default design of {HABERTOR} is given at Figure \ref{fig:bert-flex-ensemble}, with one separated classification net with an ensemble of 2 heads for each input source.
The \emph{masking factor $\tau$} is set to 10.
The noise magnitude's bound constraint $[a, b]=[1,2]$ in Yahoo dataset, and $[a, b]=[1,5]$ in Twitter and Wiki datasets. $\lambda_{adv}$=1.0, and $\lambda_\epsilon$=1 in all three datasets.
We use the \emph{min} pooling function to put a more stringent requirement on classifying hatespeech comments, as hatespeech-labeled comments are the minority. All the pre-trained language models are fine-tuned with the Yahoo train set. For all other baselines, we vary the hidden size in \{96, 192, 384\} and report their best results.
We build VDCNN with 4 convolutional blocks, which have 64, 128, 256 and 512 filters with a kernel size of 3, and 1 convolution layer. Each convolutional block includes two convolution layers.
For {FastText}, we find that 1,2,3-grams and 1,2,3,4,5-character grams give the best performance. All models are optimized using Adam optimizer \cite{kingma2014adam}.
\subsection{Ablation Study}
\label{sec:ablation}
\begin{table*}[!h]
\centering
\caption{Ablation study of HABERTOR on Yahoo dataset (i.e. both Yahoo News + Finance, to save space). Default results are in bold. Better results compared to the default one are underlined.}
\label{table:ablation}
\resizebox{0.7\textwidth}{!}{
\begin{tabular}{c|l|ccccc}
\toprule
\multirow{3}{*}{{\bf Goal}} & \multicolumn{1}{c}{\multirow{2}{*}{Model}} & \multicolumn{5}{|c}{\bf Yahoo} \\
& \multicolumn{1}{c}{} & \multicolumn{1}{|c}{AUC} & AP & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}FPR@\\ 5\%FNR\end{tabular}} & \begin{tabular}[c]{@{}c@{}}FNR@\\ 5\%FPR\end{tabular} & F1 \\
\midrule
& Default
& {\bf 94.77} & {\bf 72.35} & {\bf 26.11} & {\bf 25.08} & {\bf 66.18} \\
\midrule
\multirow{2}{*}{\textbf{AS1}}
& - adv
& 94.60 & 71.19 & 26.97 & 25.78 & 65.02 \\
& - adv + $\tau=1$
& 94.32 & 70.27 & 28.08 & 26.69 & 64.78 \\
\midrule
\multirow{2}{*}{{\bf AS2}}
& - adv + $\tau=1$ under 250k data
& 92.61 & 64.71 & 36.43 & 32.51 & 60.13 \\
& - adv + $\tau=1$ under 500k data
& 94.04 & 69.11 & 29.82 & 27.99 & 63.34 \\
\midrule
\multirow{3}{*}{{\bf AS3}}
& + single source + single head
& 94.70 & 71.82 & 26.82 & 25.55 & 65.16\\
& + single head
& 94.70 & 72.15 & 26.66 & 25.20 & 65.59 \\
& + ensemble 4
& \underline{94.78} & 72.18 & 26.29 & \underline{24.97} & 65.78 \\
& + ensemble 8
& 94.71 & 72.06 & 26.13 & 25.08 & 65.56 \\
\midrule
\multirow{1}{*}{{\bf AS4}}
& - adv + $\tau=1$ - pretraining
& 92.48 & 65.26 & 36.47 & 32.10 & 60.66\\
\midrule
\multirow{5}{*}{{\bf AS5}}
& + 3 layers
& 94.54 & 71.25 & 27.15 & 25.90 & 64.98\\
& + 4 layers
& 94.67 & 71.53 & 26.25 & 25.50 & 65.38\\
& + 192 hidden size
& 94.57 & 71.00 & 26.56 & 25.93 & 65.05\\
& + 3 att heads
& 94.69 & 72.00 & 26.72 & 25.43 & 65.75\\
& + 4 att heads
& 94.69 & 72.06 & 26.61 & 25.22 & 65.80\\
& + 12 att heads
& 94.70 & 72.01 & 26.28 & 25.14 & 65.64\\
\midrule
\end{tabular}
}
\end{table*}
\noindent{\bf Effectiveness of regularized adversarial training and masking factor $\tau$ (AS1):} Recall that by default, {HABERTOR} has 2 classification nets, each of the two nets has an ensemble of 2 classification heads, \emph{masking factor} $\tau=10$, and is trained with regularized adversarial training. {HABERTOR - \emph{adv}} indicates {HABERTOR} without regularized adversarial training, and {HABERTOR - \emph{adv} + $\tau$=1} indicates {HABERTOR} without regularized adversarial training and \emph{masking factor} $\tau$ of 1 instead of 10. Comparing {HABERTOR} with {HABERTOR - \emph{adv}}, we see a drop of AP by 1.16\%, F1-score by 1.16\%, and the average error rate increases by 0.78\% (i.e. average of FPR@5\%FNR and FNR@5\%FPR). This shows the effectiveness of additional regularized adversarial training to make {HABERTOR} more robust.
Furthermore, comparing {HABERTOR - \emph{adv}} (with the default $\tau$=10) with {HABERTOR - \emph{adv} + $\tau=1$}, we observe a drop in AP of 0.92\%, in F1-score of 0.24\%, and an increase in the average error rate of 1.01\%. Together with the previous comparison, this shows the need for both regularized adversarial training with our proposed fine-grained and adaptive noise magnitude and a large \emph{masking factor} in {HABERTOR}.
\noindent{\bf Is pretraining with a larger domain-specific dataset helpful? (AS2):} We answer the question by answering a reverse question: does pretraining with smaller data reduce performance? We pre-train {HABERTOR} with 250k Yahoo comments data (4 times smaller), and 500k Yahoo comments data (2 times smaller). Then, we compare the results of HABERTOR \emph{- adv} + $\tau=1$ with HABERTOR \emph{- adv} + $\tau=1$ under 250k data, and HABERTOR \emph{- adv} + $\tau=1$ under 500k data. Table \ref{table:ablation} shows the results. We observe that pretraining with a larger data size increases the hatespeech prediction performance. We see a smaller drop when pretraining with 1M data vs 500k data (AP drops 0.6\%), and a bigger drop when pretraining with 500k data vs 250k data (AP drops 4.4\%).
We reason that when the pretraining data size is too small, important linguistic patterns that may exist in the test set are not fully observed in the training set. In short, pretraining with larger hatespeech data can improve the hatespeech prediction performance.
Note that BERT-base is pre-trained on 3,300M words, which are 106 times larger than HABERTOR (only 31M words). Hence, the performance of HABERTOR can be boosted further when pre-training a hatespeech language model with a larger number of hateful representatives.
\noindent{\bf Usefulness of separated source prediction and ensemble heads (\textbf{AS3}):} We compare HABERTOR under the \emph{Default} settings to using \emph{single source + single head} (i.e. one classification head for all data sources, see Figure~\ref{fig:bert-traditional}), \emph{single head} (i.e. multi-source and each source has a single classification head, see Figure~\ref{fig:bert-flex-ensemble}), and using more ensemble heads (i.e. multi-source + more ensemble classification heads, see Figure~\ref{fig:bert-flex-ensemble}).
Table \ref{table:ablation} shows that the overall performance order is \emph{multi-source + ensemble of 2 heads} $>$ \emph{multi-source + single head} $>$ \emph{single source + single head}, indicating the usefulness of our multi-source and ensemble of classification heads architecture in the fine-tuning phase. However, when the number of ensemble heads $\ge$ 4, we do not observe better performance.
\noindent{\bf Is pretraining the two language modeling tasks helpful for the hatespeech detection task? (AS4)} We compare {HABERTOR-\emph{adv}} + $\tau=1$ with {HABERTOR-\emph{adv}} + $\tau=1$ - \emph{pretraining}, where we ignore the pretraining step and consider HABERTOR as an attentive network for pure supervised learning with random parameter initialization. In Table \ref{table:ablation}, the performance of HABERTOR without language model pretraining degrades sharply: AUC drops by $\sim$2\%, AP drops by $\sim$5\%, the FPR and FNR errors are $\sim$9\% and $\sim$5\% higher, respectively, and F1 drops by $\sim$4\%. These results show the significant impact of the pretraining tasks on hatespeech detection.
\noindent{\bf Is HABERTOR sensitive to its number of layers, attention heads, and embedding size? (AS5)} In Table \ref{table:ablation}, we observe that {HABERTOR}+\emph{3 layers} and {HABERTOR}+\emph{4 layers} work worse than {HABERTOR} (6 layers), indicating that a deeper model does help to improve hatespeech detection. However, when we increase the number of attention heads from 6 to 12, or decrease the number of attention heads from 6 to 4, we observe that the performance becomes worse. We reason that when we set the number of attention heads to 12, since there is no mechanism to constrain different attention heads to attend to different information, they may end up focusing on similar things, as shown in \cite{clark2019does}. But when reducing the number of attention heads to 4, the model is not complex enough to attend to more relevant information, leading to worse performance.
Similarly, when we reduce the embedding size from 384 in {HABERTOR} to 192, the performance is worse. Note that we could not perform experiments with larger embedding sizes and/or more layers due to the high running time and memory consumption. However, we can see in Table~\ref{table:ablation} that the performance of the smaller {HABERTOR} with 3 layers, 4 layers, or 192 hidden size is still slightly better than the BERT-base results reported in Table \ref{table:yahoo-performance}. This again indicates the need for pretraining language models on a hatespeech-related corpus for the hatespeech detection task.
\begin{figure}[t]
\centering
\begin{subfigure}{0.46\columnwidth}
\centering
\includegraphics[width=0.99\textwidth]{figs/AUC-vary-pretraining-epoch.png}
\vspace{-15pt}
\caption{AUC.}
\label{fig:AUC-pretraining}
\end{subfigure}
\hfill
\begin{subfigure}{.46\columnwidth}
\centering
\includegraphics[width=0.99\textwidth]{figs/AP-vary-pretraining-epoch.png}
\vspace{-15pt}
\caption{AP.}
\label{fig:AP-pretraining}
\end{subfigure}\hfill
\vspace{-5pt}
\caption{AUC and AP of HABERTOR without regularized adversarial training on Yahoo dataset when varying the number of epochs for the pretraining task.}
\label{fig:compare-fine-grained}
\vspace{-15pt}
\end{figure}
\noindent{\bf Effectiveness of fine-grained pretraining (AS6):} Since the pretraining phase is unsupervised, a natural question is how much fine-grained pretraining we should perform to obtain good hatespeech prediction performance, i.e., how many pretraining epochs are enough. To answer this question, we vary the number of pretraining epochs in \{10, 20, 30, ..., 60\} before performing the fine-tuning phase with the hatespeech classification task. We report the changes in AUC and AP of fine-tuned {HABERTOR} on the Yahoo dataset without regularized adversarial training in Figure \ref{fig:compare-fine-grained}. We observe that more fine-grained pretraining helps to increase the hatespeech prediction results, which is similar to a recent finding in \citet{liu2019roberta}, especially from 10 to 40 epochs. After 40 epochs, however, the improvement is smaller.
\section{Challenges and Opportunities}
\label{sec.challenge}
\vspace{1mm}
\subsection{\textbf{Memory and Computational Expensiveness of DNN Models}}
DNN models that achieve state-of-the-art performance are memory and computationally expensive.
To illustrate this, Table \ref{tab.CNN_examples} lists the details of some of the most commonly used DNN models.
As shown, these models normally contain millions of model parameters and consume billions of floating-point operations (FLOPs).
This is because these DNN models are designed for high accuracy without taking resource consumption into consideration.
Although computing resources in edge devices are expected to become increasingly powerful, they are far more constrained than those of cloud servers.
Therefore, filling the gap between the high computational demand of DNN models and the limited computing resources of edge devices represents a significant challenge.
\begin{table}[b]
\centering
\scalebox{0.75}{
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\textbf{DNN} & \textbf{\begin{tabular}[c]{@{}c@{}}Top-5 Error\\ (\%)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Latency\\ (ms)\end{tabular}} & \textbf{Layers} & \textbf{\begin{tabular}[c]{@{}c@{}}FLOPs\\ (billion)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Parameters\\ (million)\end{tabular}} \\ \hline
AlexNet & 19.8 & 14.56 & 8 & 0.7 & 61 \\ \hline
GoogleNet & 10.07 & 39.14 & 22 & 1.6 & 6.9 \\ \hline
VGG-16 & 8.8 & 128.62 & 16 & 15.3 & 138 \\ \hline
ResNet-50 & 7.02 & 103.58 & 50 & 3.8 & 25.6 \\ \hline
ResNet-152 & 6.16 & 217.91 & 152 & 11.3 & 60.2 \\ \hline
\end{tabular}
}
\vspace{2mm}
\caption{Memory and computational expensiveness of some of the most commonly used DNN models.}
\label{tab.CNN_examples}
\end{table}
To address this challenge, the opportunities lie in exploiting the \textit{redundancy} of DNN models in terms of parameter representation and network architecture.
In terms of parameter representation redundancy, to achieve the highest accuracy, state-of-the-art DNN models routinely use 32 or 64 bits to represent model parameters.
However, for many tasks like object classification and speech recognition, such high-precision representations are not necessary and thus exhibit considerable redundancy.
Such redundancy can be effectively reduced by applying parameter quantization techniques which use 16 bits, 8 bits, or even fewer bits to represent model parameters; a toy sketch is given below.
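As a hedged illustration of the idea (real deployments would rely on a framework's quantization toolchain rather than this simple uniform scheme):
\begin{verbatim}
import numpy as np

def quantize_uniform(weights, num_bits=8):
    # uniformly quantize float weights to num_bits integers, then dequantize
    qmax = 2 ** num_bits - 1
    w_min, w_max = weights.min(), weights.max()
    scale = (w_max - w_min) / qmax
    q = np.round((weights - w_min) / scale).astype(np.uint8)  # 8-bit storage
    return q * scale + w_min                                   # dequantized view

w = np.random.randn(1000).astype(np.float32)
print("max abs error:", np.abs(w - quantize_uniform(w)).max())
\end{verbatim}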
In terms of network architecture redundancy, state-of-the-art DNN models use overparameterized network architectures and thus many of their parameters are redundant.
To reduce such redundancy, the most effective technique is model compression.
In general, DNN model compression techniques can be grouped into two categories.
The first category focuses on compressing large DNN models that are pretrained into smaller ones.
For example, \cite{han2015deep} proposed a model compression technique that prunes out unimportant model parameters whose values are lower than a threshold.
However, although this parameter pruning approach is effective at reducing model sizes, it does not necessarily reduce the number of operations involved in the DNN model.
To overcome this issue, \cite{li2016pruning} proposed a model compression technique that prunes out unimportant filters which effectively reduces the computational cost of DNN models.
The second category focuses on designing efficient small DNN models directly.
For example, \cite{howard2017mobilenets} proposed to use depth-wise separable convolutions, which are small and computation-efficient, to replace conventional convolutions, which are large and computationally expensive; this reduces not only model size but also computational cost.
As an orthogonal approach, \cite{hinton2015distilling} proposed a technique referred to as knowledge distillation to directly extract useful knowledge from large DNN models and pass it to a smaller model, which achieves prediction performance similar to the large models but with far fewer parameters and much lower computational cost; a sketch of the idea follows.
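A minimal PyTorch sketch of the distillation idea, where a small student matches the softened outputs of a large teacher in addition to the ground-truth labels (the temperature and weighting values are illustrative):
\begin{verbatim}
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # soft-target loss: match the teacher's softened class distribution
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    # hard-label loss: standard cross entropy on the ground-truth labels
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
\end{verbatim}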
\subsection{\textbf{Data Discrepancy in Real-World Settings}}
The performance of a DNN model is heavily dependent on its training data, which is supposed to share the same or a similar distribution with the test data.
Unfortunately, in real-world settings, there can be a considerable \textit{discrepancy} between the training data and the test data.
Such discrepancy can be caused by variation in the sensor hardware of edge devices as well as various noisy factors in the real world that degrade the quality of the test data.
For example, the quality of images taken in real-world settings can be degraded by factors such as illumination, shading, blurriness, and indistinguishable background~\cite{zeng2017mobiledeeppill} (see Figure \ref{fig.challenges} as an example).
Speech data sampled in noisy places such as busy restaurants can be contaminated by voices from surrounding people.
The discrepancy between training and test data could degrade the performance of DNN models, which becomes a challenging problem.
\begin{figure}[t]
\centering
\includegraphics[scale=0.55]{challenges.pdf}
\caption{Illustration of differences between training and test images of the same pills under five different scenarios~\cite{zeng2017mobiledeeppill}. For each scenario, the image on the left is the training image; and the image on the right is the test image of the same pill. Due to the deterioration caused by a variety of real-world noisiness such as shading, blur, illumination and background, training image and test image of the same pill look very different.}
\vspace{-3mm}
\label{fig.challenges}
\end{figure}
To address this challenge, we envision that the opportunities lie in exploring data augmentation techniques as well as designing noise-robust loss functions.
Specifically, to ensure the robustness of DNN models in real-world settings, a large volume of training data that contain significant variations is needed.
Unfortunately, collecting such a large volume of diverse data that cover all types of variations and noise factors is extremely time-consuming.
One effective technique to overcome this dilemma is data augmentation.
Data augmentation techniques generate variations that mimic those that occur in real-world settings.
By using the large amount of newly generated augmented data as part of the training data, the discrepancy between training and test data is minimized.
As a result, the trained DNN models become more robust to the various noisy factors in the real world.
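As a simple sketch of this idea, assuming images are stored as numpy arrays in $[0, 255]$, one can randomly perturb brightness and inject noise so that the training set covers more of the degradations seen at test time; a real pipeline would also add blur, shading, background changes, and so on.
\begin{verbatim}
import numpy as np

def augment(image, rng):
    # Return a randomly perturbed copy of an image with values in [0, 255].
    out = image.astype(np.float32)
    out = out * rng.uniform(0.6, 1.4)                   # random illumination change
    out = out + rng.normal(0.0, 10.0, size=out.shape)   # additive sensor-like noise
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)
augmented = [augment(image, rng) for _ in range(4)]     # several variants per original image
\end{verbatim}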
A complementary technique to data augmentation is to design loss functions that are robust to discrepancy between the training data and the test data.
Examples of such noise-robust loss functions include triplet loss \cite{DBLP:journals/corr/SchroffKP15} and variational autoencoder \cite{kingma2013auto}.
These noise-robust loss functions encourage a DNN model to learn features that are invariant to the various noise factors that degrade the quality of test data, even if the training data and test data do not share a similar distribution.
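As an illustration, the triplet loss of \cite{DBLP:journals/corr/SchroffKP15} pulls an anchor embedding towards a positive example (e.g., a clean image of the same class) and pushes it away from a negative one; a minimal numpy sketch of the loss is shown below, with randomly generated embeddings standing in for the output of a DNN.
\begin{verbatim}
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Encourage d(anchor, positive) + margin <= d(anchor, negative).
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin).mean()

emb = lambda n: np.random.randn(n, 128)   # stand-in for embeddings produced by a DNN
print(triplet_loss(emb(16), emb(16), emb(16)))
\end{verbatim}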
\noindent
\subsection{\textbf{Constrained Battery Life of Edge Devices}}
For edge devices that are powered by batteries, reducing energy consumption is critical to extending devices' battery lives.
However, some of the sensors that edge devices heavily rely on to collect data from individuals and the physical world, such as cameras, are designed to capture high-quality data and are therefore power hungry.
For example, video cameras incorporated in smartphones today have increasingly high resolutions to meet people's photographic demands.
As such, the quality of images taken by smartphone cameras is comparable to images that are taken by professional cameras, and image sensors inside smartphones are consuming more energy than ever before, making energy consumption reduction a significant challenge.
To address this challenge, we envision that the opportunities lie in exploring smart data subsampling techniques, matching data resolution to DNN models, and redesigning sensor hardware to make it low-power.
First, to reduce energy consumption, one commonly used approach is to turn on the sensors only when needed.
However, there are streaming applications that require sensors to be always on.
As such, it requires DNN models to be run over the streaming data in a continuous manner.
To reduce energy consumption in such scenarios, opportunities lie in subsampling the streaming data and processing only the informative subsampled data points while discarding those that contain redundant information.
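One simple embodiment of this idea is change-based subsampling, in which only frames that differ sufficiently from the last processed frame are forwarded to the DNN; the threshold and difference measure below are hypothetical choices for illustration.
\begin{verbatim}
import numpy as np

def select_informative_frames(frames, threshold=8.0):
    # Yield only frames whose mean absolute difference from the last
    # processed frame exceeds the threshold; the rest are discarded.
    last = None
    for frame in frames:
        if last is None or np.mean(np.abs(frame.astype(np.float32) - last)) > threshold:
            last = frame.astype(np.float32)
            yield frame                      # only these frames reach the DNN

stream = (np.random.randint(0, 256, (224, 224), dtype=np.uint8) for _ in range(100))
processed = sum(1 for _ in select_informative_frames(stream))
print("frames forwarded to the DNN:", processed)
\end{verbatim}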
Second, while sensor data such as raw images are high-resolution, DNN models are designed to process images at a much lower resolution (e.g., $224\times224$).
The mismatch between high-resolution raw images and low-resolution DNN models incurs considerable unnecessary energy consumption, including energy consumed to capture high-resolution raw images and energy consumed to convert high-resolution raw images to low-resolution ones to fit the DNN models.
To address the mismatch, one opportunity is to adopt a dual-mode mechanism.
The first mode is traditional sensing mode for photographic purposes that captures high-resolution images.
The second mode is DNN processing mode that is optimized for deep learning tasks.
Under this mode, the resolution of collected images is enforced to match the input requirements of DNN models.
Lastly, to further reduce energy consumption, another opportunity lies in redesigning sensor hardware to reduce the energy consumption related to sensing.
When collecting data from onboard sensors, a large portion of the energy is consumed by the analog-to-digital converter (ADC).
There are early works that explored the feasibility of removing ADC and directly using analog sensor signals as inputs for DNN models \cite{likamwa2016redeye}. Their promising results demonstrate the significant potential of this research direction.
\subsection{\textbf{Heterogeneity in Sensor Data}}
Many edge devices are equipped with more than one onboard sensor.
For example, a smartphone has a GPS sensor to track geographical locations, an accelerometer to capture physical movements, a light sensor to measure ambient light levels, a touchscreen sensor to monitor users’ interactions with their phones, a microphone to collect audio information, and a camera to capture images and videos.
Data obtained by these sensors are by nature \textit{heterogeneous} and are diverse in format, dimensions, sampling rates, and scales.
How to take the data heterogeneity into consideration when building DNN models and how to effectively integrate the heterogeneous sensor data as inputs for DNN models is a significant challenge.
To address this challenge, one opportunity lies in building a multi-modal deep learning model that takes data from different sensing modalities as its inputs.
For example, \cite{radu2016towards} proposed a multi-modal DNN model that uses Restricted Boltzmann Machines for activity recognition.
Similarly, \cite{bhattacharya2016smart} also proposed a multi-modal DNN model for smartwatch-based activity recognition.
Besides building multi-modal DNN models, another opportunity lies in combining information from heterogeneous sensor data extracted at different dimensions and scales.
As an example, \cite{chang2015heterogeneous} proposed a multi-resolution deep embedding approach for processing heterogeneous data at different dimensions.
\cite{DBLP:journals/corr/YaoHZZA16} proposed an architecture that integrates convolutional and recurrent neural networks for processing heterogeneous data at different scales.
\subsection{\textbf{Heterogeneity in Computing Units}}
\vspace{-1mm}
Besides data heterogeneity, edge devices are also confronted with heterogeneity in on-device computing units.
As computing hardware becomes more and more specialized, an edge device could have a diverse set of onboard computing units including traditional processors such as CPUs, DSPs, GPUs, and FPGAs as well as emerging domain-specific processors such as Google's Tensor Processing Units (TPUs).
Given the increasing heterogeneity in onboard computing units, mapping deep learning tasks and DNN models to the diverse set of onboard computing units is challenging.
To address this challenge, the opportunity lies in mapping the operations involved in DNN model execution to the computing units that are optimized for them.
State-of-the-art DNN models incorporate a diverse set of operations but can be generally grouped into two categories: parallel operations and sequential operations.
For example, the convolution operations involved in convolutional neural networks (CNNs) are matrix multiplications that can be efficiently executed in parallel on GPUs which have the optimized architecture for executing parallel operations.
In contrast, the operations involved in recurrent neural networks (RNNs) have strong sequential dependencies, and better fit CPUs which are optimized for executing sequential operations where operator dependencies exist.
The diversity of operations suggests the importance of building an architecture-aware compiler that is able to decompose a DNN model at the operation level and then allocate the right type of computing unit to execute the operations that fit its architecture characteristics.
Such an architecture-aware compiler would maximize the hardware resource utilization and significantly improve the DNN model execution efficiency.
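Conceptually, such a compiler pass inspects each operation in the model graph and assigns it to the computing unit that matches its character; the sketch below uses hypothetical operation names and device labels and captures only the dispatch logic, ignoring the data-transfer costs and memory constraints that a real compiler would have to weigh.
\begin{verbatim}
# Hypothetical operation-to-device mapping for illustration only.
PARALLEL_OPS = {"conv2d", "matmul", "depthwise_conv"}
SEQUENTIAL_OPS = {"lstm_cell", "gru_cell", "argmax_decode"}

def assign_devices(op_sequence):
    placement = []
    for op in op_sequence:
        if op in PARALLEL_OPS:
            placement.append((op, "GPU"))    # parallel-friendly kernels
        elif op in SEQUENTIAL_OPS:
            placement.append((op, "CPU"))    # strong sequential dependencies
        else:
            placement.append((op, "CPU"))    # conservative default
    return placement

print(assign_devices(["conv2d", "conv2d", "lstm_cell", "matmul"]))
\end{verbatim}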
\subsection{\textbf{Multi-Tenancy of Deep Learning Tasks}}
\vspace{-1mm}
The complexity of real-world applications requires edge devices to concurrently execute multiple DNN models which target different deep learning tasks~\cite{fangzeng2018nestdnn}.
For example, a service robot that interacts with customers needs to not only track the faces of the individuals it interacts with but also recognize their facial emotions at the same time.
These tasks all share the same data inputs and the limited resources on the edge device.
How to effectively share the data inputs across concurrent deep learning tasks and efficiently utilize the shared resources to maximize the overall performance of all the concurrent deep learning tasks is challenging.
In terms of input data sharing, currently, data acquisition for concurrently running deep learning tasks on edge devices is exclusive.
In other words, at runtime, only a single deep learning task is able to access the sensor data inputs at a time.
As a consequence, when there are multiple deep learning tasks running concurrently on edge devices, each deep learning task has to explicitly invoke system APIs to obtain its own data copy and maintain it in its own process space.
This mechanism causes considerable system overhead as the number of concurrently running deep learning tasks increases.
To address this input data sharing challenge, one opportunity lies in creating a \textit{data provider} that is transparent to deep learning tasks and sits between them and the operating system as shown in Figure \ref{fig.data_sharing}.
The \textit{data provider} creates a single copy of the sensor data inputs such that deep learning tasks that need to acquire data all access this single copy for data acquisition.
As such, a deep learning task is able to acquire data without interfering with other tasks.
More importantly, it provides a solution that scales in terms of the number of concurrently running deep learning tasks.
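A minimal sketch of such a data provider is given below, assuming in-process tasks that register callbacks; a real system-level implementation would more likely rely on shared memory across processes, but the single-copy principle is the same.
\begin{verbatim}
class DataProvider:
    # Keeps a single copy of the latest sensor frame and serves all tasks from it.

    def __init__(self):
        self._latest = None
        self._subscribers = []

    def subscribe(self, task_callback):
        self._subscribers.append(task_callback)

    def on_new_sensor_data(self, frame):
        self._latest = frame                    # the only copy that is maintained
        for callback in self._subscribers:
            callback(self._latest)              # every task reads the same copy

provider = DataProvider()
provider.subscribe(lambda f: print("face tracking got frame", id(f)))
provider.subscribe(lambda f: print("emotion recognition got frame", id(f)))
provider.on_new_sensor_data(object())           # both tasks see the identical object
\end{verbatim}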
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.6,clip=true,trim = 85mm 63mm 100mm 50mm]{data_sharing.pdf}
\end{center}
\vspace{-4mm}
\caption{Illustration of data sharing mechanism.}
\label{fig.data_sharing}
\vspace{-4mm}
\end{figure}
In terms of resource sharing, in common practice, DNN models are designed for individual deep learning tasks.
However, existing works in deep learning show that DNN models exhibit layer-wise semantics where bottom layers extract basic structures and low-level features while layers at upper levels extract complex structures and high-level features.
This key finding aligns with a sub-field in machine learning named multi-task learning~\cite{caruana1997multitask}.
In multi-task learning, a single model is trained to perform multiple tasks by sharing low-level features while high-level features differ for different tasks.
For example, a DNN model can be trained for scene understanding as well as object classification~\cite{zhou2014object}.
Multi-task learning provides a perfect opportunity for improving the resource utilization for resource-limited edge devices when concurrently executing multiple deep learning tasks.
By sharing the low-level layers of the DNN model across different deep learning tasks, redundancy across deep learning tasks can be maximally reduced.
In doing so, edge devices can efficiently utilize the shared resources to maximize the overall performance of all the concurrent deep learning tasks.
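Schematically, sharing amounts to running the shared bottom layers once per input and branching only at small task-specific heads; the sketch below uses arbitrary layer sizes and plain linear layers as stand-ins for real feature extractors.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
shared = [rng.standard_normal((512, 256)), rng.standard_normal((256, 128))]
heads = {"face_tracking": rng.standard_normal((128, 4)),
         "emotion": rng.standard_normal((128, 7))}

def forward(x):
    for w in shared:                 # shared low-level layers run once per input
        x = np.maximum(0.0, x @ w)
    return {task: x @ w for task, w in heads.items()}   # cheap task-specific heads

outputs = forward(rng.standard_normal((1, 512)))
print({task: out.shape for task, out in outputs.items()})
\end{verbatim}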
\subsection{\textbf{Offloading to Nearby Edges}}
\vspace{-2mm}
Edge devices that have extremely limited resources, such as low-end IoT devices, may still not be able to afford executing even the most memory- and computation-efficient DNN models locally.
In such scenarios, instead of running the DNN models locally, it is necessary to offload the execution of DNN models.
As mentioned in the introduction section, offloading to the cloud has a number of drawbacks, including leaking user privacy and suffering from unpredictable end-to-end network latency that could affect user experience, especially when real-time feedback is needed.
Considering those drawbacks, a better option is to offload to nearby edge devices that have ample resources to execute the DNN models.
To realize edge offloading, the key is to come up with a model partition and allocation scheme that determines which part of the model should be executed locally and which part should be offloaded.
To answer this question, the first aspect that needs to be taken into account is the size of the intermediate results of executing a DNN model.
A DNN model adopts a layered architecture.
The sizes of intermediate results generated out of each layer have a pyramid shape (Figure \ref{fig.pyramid}), decreasing from lower layers to higher layers.
As a result, partitioning at lower layers would generate larger sizes of intermediate results, which could increase the transmission latency.
The second aspect that needs to be taken into account is the amount of information to be transmitted.
For a DNN model, the amount of information generated out of each layer decreases from lower layers to higher layers.
Partitioning at lower layers would prevent more information from being transmitted, thus preserving more privacy.
As such, the edge offloading scheme creates a trade-off between computation workload, transmission latency, and privacy preservation.
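A toy sketch of reasoning about this trade-off is shown below; the per-layer output sizes, compute costs, and uplink throughput are made-up numbers, and a real scheme would also fold the privacy consideration into the objective.
\begin{verbatim}
# Hypothetical per-layer output sizes (KB) of a pyramid-shaped DNN.
layer_output_kb = [800, 400, 200, 60, 10]
local_cost_per_layer = 5.0   # assumed local compute cost per layer (ms)
kb_per_ms = 100.0            # assumed uplink throughput to the nearby edge

def partition_cost(cut):
    # Cost of executing layers [0, cut) locally and sending that layer's output.
    compute = cut * local_cost_per_layer
    transmit = layer_output_kb[cut - 1] / kb_per_ms
    return compute + transmit

best = min(range(1, len(layer_output_kb) + 1), key=partition_cost)
print("partition after layer", best, "estimated cost", partition_cost(best))
\end{verbatim}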
\begin{figure}[!ht]
\begin{center}
\includegraphics[scale=0.5,clip=true,trim = 100mm 60mm 100mm 50mm]{pyramid.pdf}
\end{center}
\vspace{-4mm}
\caption{Illustration of intermediate results of a DNN model. The size of intermediate results generated out of each layer decreases from lower layers to higher layers. The amount of information generated out of each layer also decreases from lower layers to higher layers. }
\label{fig.pyramid}
\vspace{-4mm}
\end{figure}
\subsection{\textbf{On-Device Training}}
\vspace{-2mm}
In common practice, DNN models are trained on high-end workstations equipped with powerful GPUs where training data are also located.
This is the approach that giant AI companies such as Google, Facebook, and Amazon have adopted.
These companies have been collecting a gigantic amount of data from users and use those data to train their DNN models.
This approach, however, is privacy-intrusive, especially for mobile phone users because mobile phones may contain the users’ privacy-sensitive data.
How to best protect users' privacy while still obtaining well-trained DNN models becomes a challenging problem.
To address this challenge, we envision that the opportunity lies in on-device training.
As compute resources in edge devices become increasingly powerful, especially with the emergence of AI chipsets, it will become feasible in the near future to train a DNN model locally on edge devices.
By keeping all the personal data that may contain private information on edge devices, on-device training provides a privacy-preserving mechanism that leverages the compute resources inside edge devices to train DNN models without sending the privacy-sensitive personal data to the giant AI companies.
Moreover, today, gigantic amounts of data are generated by edge devices such as mobile phones on a daily basis.
These data contain valuable information about users and their personal preferences.
With such personal information, on-device training enables training personalized DNN models that deliver personalized services to maximally enhance user experience.
\section{Concluding Remarks}
\label{sec.con}
\vspace{-2mm}
\noindent
Edge computing is revolutionizing the way we live, work, and interact with the world.
With the recent breakthrough in deep learning, it is expected that in the foreseeable future, the majority of edge devices will be equipped with machine intelligence powered by deep learning.
To realize the full promise of deep learning in the era of edge computing, there are daunting challenges to address.
In this book chapter, we presented eight challenges at the intersection of computer systems, networking, and machine learning.
These challenges are driven by the gap between high computational demand of DNN models and the limited battery lives of edge devices, the data discrepancy in real-world settings, the needs to process heterogeneous sensor data and concurrent deep learning tasks on heterogeneous computing units, and the opportunities for offloading to nearby edges and on-device training.
We also proposed opportunities that have the potential to address these challenges.
We hope our discussion can inspire new research that turns the envisioned intelligent edge into reality.
\section{Introduction}
\label{sec.intro}
Of all the technology trends that are taking place right now, perhaps the biggest one is \textit{edge computing}~\cite{shi2016edge,shi2016promise}.
It is the one that is going to bring the most disruption and the most opportunity over the next decade.
Broadly speaking, edge computing is a new computing paradigm which aims to leverage devices that are deployed at the Internet's edge to collect information from individuals and the physical world as well as to process the collected information in a distributed manner~\cite{satyanarayanan2017emergence}.
These devices, referred to as \textit{edge devices}, are physical devices equipped with sensing, computing, and communication capabilities.
Today, we are already surrounded by a variety of such edge devices:
our mobile phones and wearables are edge devices; home intelligence devices such as Google Nest and Amazon Echo are edge devices; autonomous systems such as drones, self-driving vehicles, and robots that vacuum the carpet are also edge devices.
These edge devices continuously collect a variety of data including images, videos, audio, text, user logs, and many others, with the ultimate goal of providing a wide range of services that improve the quality of people's everyday lives.
Although the Internet is the backbone of edge computing, the true value of edge computing lies at the intersection of gathering data from sensors and extracting meaningful information from the collected sensor data.
Over the past few years, deep learning (i.e., Deep Neural Networks (DNNs))~\cite{lecun2015deep} has become the dominant data analytics approach due to its capability to achieve impressively high accuracies on a variety of important computing tasks such as speech recognition \cite{hinton2012deep}, machine translation \cite{bahdanau2014neural}, object recognition \cite{krizhevsky2012imagenet}, face detection \cite{taigman2014deepface}, sign language translation \cite{fang2017deepasl}, and scene understanding \cite{zhou2014learning}.
Driven by deep learning's splendid capability, companies such as Google, Facebook, Microsoft, and Amazon are embracing this technological breakthrough and using deep learning as the core technique to power many of their services.
Deep learning models are known to be expensive in terms of computation, memory, and power consumption~\cite{he2016deep, simonyan2014very}.
As such, given the resource constraints of edge devices, the \textit{status quo} approach is based on the cloud computing paradigm in which the collected sensor data are directly uploaded to the cloud; and the data processing tasks are performed on the cloud servers where abundant computing and storage resources are available to execute the deep learning models.
Unfortunately, cloud computing suffers from three key drawbacks that make it less favorable to applications and services enabled by edge devices.
First, data transmission to the cloud becomes impossible if the Internet connection is unstable or even lost.
Second, data collected at edge devices may contain very sensitive and private information about individuals. Directly uploading those raw data onto the cloud constitutes a great danger to individuals' privacy.
Most importantly, as the number of edge devices continues to grow exponentially, the bandwidth of the Internet becomes the bottleneck of cloud computing, making it no longer feasible or cost-effective to transmit the gigantic amount of data collected by those devices to the cloud.
In this book chapter, we aim to provide our insights for answering the following question: \textit{can edge computing leverage the amazing capability of deep learning?}
As computing resources in edge devices become increasingly powerful, especially with the emergence of Artificial Intelligence (AI) chipsets, we envision that in the near future, the majority of edge devices will be equipped with machine intelligence powered by deep learning.
The realization of this vision requires considerable innovation at the intersection of computer systems, networking, and machine learning.
In the following, we describe eight research challenges followed by opportunities that have high promise to address those challenges.
We hope this book chapter acts as an enabler that inspires new research, eventually leading to the realization of the envisioned intelligent edge.
\section{Introduction}\label{sec:Intro}
Game \gls{AI} has been around for several decades. Many computer games make use of \gls{AI}, whether to serve as the players' opponents or allies, to create real-time challenges by altering the environment, or even to generate entire new levels. Although the main value of computer games is entertainment, they can also be used to improve other aspects of the real or virtual world. For example, some games have educational value as well. Games based on vehicle simulators, for instance, can teach and train piloting skills that players can later apply in real life. A similar approach could be used with artificial autonomous agents. An agent that can pilot a simulated car in a game, for example as a rival to the player during a race, could later be used as a basis for self-driving cars. It all depends on the level of detail of said simulation, but in the end, this kind of development is done in several steps of incremental complexity.
There is a wide variety of skills that games can teach. Multiplayer games, for instance, can teach social skills. In those games, players may find themselves competing against other players or cooperating with them. While the competition may focus more on the individual skills, cooperation allows players to develop skills as team members.
Competitions of Game \gls{AI}, similarly to other AI fields \cite{kitano:robocup}, have been used frequently and successfully to foster the development of new and innovative research. Competitions motivate many researchers to work on difficult problems and provide, at the same time, a framework for common ground to contextualise and compare research. Also, competitions may be used to support teaching of Game \gls{AI} (and \gls{AI} in general) as they present very concrete problems for students to address and, typically, share openly the submissions and results.
In this paper, we present the Geometry Friends Game \gls{AI} Competition~\cite{GF} that runs around a physics based collaborative puzzle-platformer game (Geometry Friends). Because of its cooperative nature, the game promotes interesting \gls{AI} challenges that are not often seen in game \gls{AI} competitions. It provides challenges of collaboration and team AI at different levels. In the paper, we discuss some of the results obtained so far and the implications and potential importance of the competition for game \gls{AI} research. We start by explaining the Geometry Friends game in section~\ref{sec:Game}, after some discussion of related work on competitions of game AI, in section~\ref{sec:RW}. In section~\ref{sec:problems} we enumerate the \gls{AI} problems that can be explored with the game. We then present the competition and framework in the respective sections~\ref{sec:Competition} and \ref{sec:FrameWork}. Afterwards, we describe briefly the solutions developed so far, in section~\ref{sec:solutions}, most of which competed in previous competitions. Finally, we draw some conclusions in section~\ref{sec:conclusions} and discuss future work.
\section{Game AI Competitions}\label{sec:RW}
Game AI competitions have been taking place for several decades.
For example, the Computer Olympiad ran by ICGA\footnote{www.icga.org}, started in 1989. It challenges participants to submit agents, or programs, that can play one or more games, mostly board and card games.
The Battlecode\footnote{www.battlecode.org} is described as ``MIT's longest-running programming competition''. It presents a turn-based strategic game where two teams of agents compete against each other having to manage resources and use combat tactics. Each year, different challenges are added to the game.
There are some competitions that include collaboration as a challenge for the Game AI and some other competitions address Physics-Based Simulation Games~\cite{PBSG}. Both these topics are relevant in the Geometry Friends competition, but we will focus our discussion here in the collaboration aspects that are more central and unique in the competition.
The General Video Game AI Competition\cite{GVGAI} challenges the participants to develop agents that can play any game, without any prior knowledge of it. The agents are tested with a game whose rules they do not know. It presents many different kinds of games to participants and, more recently, has included collaborative games as well.
The Malmo Collaborative AI Challenge\footnote{https://www.microsoft.com/en-us/research/academic-program/collaborative-ai-challenge/}, presents a mini-game based on the ``stag-hunt'' paradigm, implemented with Project Malmo\cite{Malmo}, an experimentation and research platform for AI, built on top of the Minecraft game. The challenge was designed to encourage research related to various problems in Collaborative AI. It invites participants to develop collaborative AI solutions to achieve high scores across a range of partners. In the game presented players need to work together to achieve a common goal, as is the case of Geometry Friends.
Furthermore, the Hanabi competition~\cite{Hanabi} introduces the collaborative turn-based card game with two possible tracks. In one of the tracks the agents follow the same strategy, while in the other they are not aware of who they are paired with. The game focuses on the ``why'' of the other player's intentions, so a model of the other should be created. It is also about understanding how the other players will understand the instructions given to them.
The FruitPunch AI-esports competition\footnote{www.fruitpunch.ai/competition/} currently has a multiplayer human-agent cooperative game where the challenges are smart navigation, decision making and risk management. In this game, named ``Isaac's Labyrinth'', there are multiple teams of two players: one agent and one of their developers. The objective of the game is to navigate through a dynamic maze to catch some fruits. While the agent performs the navigation, the human player can place or reveal traps. This competition focuses on approaches that use reinforcement learning.
These are just some examples of Game \gls{AI} competitions. Some of these share similar problems with the ones in Geometry Friends, but none deal with them in the same way, nor all at the same time. Card and board games usually focus on developing or improving search algorithms, or other generalized strategies that may rely on patterns. In Geometry Friends, this is also important for planning and control. Some of the games presented in these competitions are turn-based, and although they might have time limits to perform their actions at each turn, there is no real-time issue being explored. ``Isaac's Labyrinth'' also has the component of adversaries, and focuses on human-agent cooperation, which may be a future possibility for the Geometry Friends competition since the game allows human players to play with artificial agents.
Hanabi focuses on understanding the partner's intentions through implicit communication, which can be explored in Geometry Friends as well.
\section{Geometry Friends Game}\label{sec:Game}
Geometry Friends is a collaborative physics-based puzzle and platform game. It has two characters, a circle and a rectangle, each with different possible actions, as shown in Fig.~\ref{fig:characters}: jump and roll, for the circle, and morph and slide, for the rectangle.
\begin{figure}[h]
\centerline{\includegraphics[width=\columnwidth]{images/circleAndRectangleActions.png}}
\caption{Circle and rectangle characters and their possible actions} \label{fig:characters}
\end{figure}
The game is composed of a series of levels in a physics-based 2D world. Each level is defined by a set of platforms and collectibles (diamonds) that the characters need to collect. The kinds of challenges presented vary according to the shape and position of the platforms, the position of the diamonds and the initial position of the two characters. For example, catching some diamonds might require motion coordination between the two characters, while in other cases it will be better to split the task and have each character catch part of the diamonds by itself. The platforms can be of one of three colours. The characters will only collide with platforms that have a different colour than themselves. The rectangle will collide against black and yellow platforms, and ignore the green ones. The circle will collide against black and green platforms, and ignore the yellow ones. Figure~\ref{fig:cooplevel01} shows two examples of cooperative levels. In the level on the left, there are two diamonds that can only be caught by the circle (the highest and the lowest), and one only by the rectangle. Although the circle can reach the highest diamond, it can only do so with the help of the rectangle, since the yellow platform is not considered as a support by the circle. This means the circle needs to ``ride'' the rectangle to reach the other side of the screen.
\begin{figure}[h]
\centering
\begin{minipage}{.5\columnwidth}
\centering
\includegraphics[width=\columnwidth]{images/cooplevel01.png}
\end{minipage}%
\begin{minipage}{.5\columnwidth}
\centering
\includegraphics[width=\columnwidth]{images/cooplevel03.png}
\end{minipage}
\caption{Sample cooperative levels from the 2017 competition} \label{fig:cooplevel01}
\end{figure}
The game also has single-player levels. Those are similar to the cooperative, but only one of the characters is placed in the game world. The goal is the same, to collect all diamonds. These levels, help testing agents in the role of each of the characters separately.
\section{AI PROBLEMS TO EXPLORE}\label{sec:problems}
As simple as the Geometry Friends game appears to be, it deals with several problems of great interest to \gls{AI} research. The game presents several situations of combined task and motion planning, in a dynamic physics-based world, that agents need to deal with in real time. More importantly, it presents collaborative challenges from different perspectives.
The main \gls{AI} problem the competition focuses on is cooperation. But, it includes serious challenges for other subjects as well (see Fig.~\ref{fig:problems}). The game involves planning, motion control, and situational assessment. Hence to develop successful agents for Geometry Friends, all these should be taken into consideration in a balanced way. On top of that, each of the subjects must consider cooperation on its own perspective. For example, to define common plans, to execute actions together and to have in consideration the others' view when assessing the game situations.
\begin{figure}[h]
\centerline{\includegraphics[width=\columnwidth]{images/AIProblems.png}}
\caption{AI problems that can be explored in Geometry Friends} \label{fig:problems}
\end{figure}
\subsection{Cooperation}
As stated, the main objective of the competition is to encourage the participants to develop a team of two agents capable of solving cooperative levels. In this context, we define cooperation as a ``set of actions made by the two agents in order to achieve a common goal''. The decision to cooperate or not is not addressed. We are concerned with the aspects that lead the agents to good cooperation, which is measured through their performance in the game. A collaborative challenge is one that entails that if the agents achieve cooperation their performance will improve. In fact, in Geometry Friends, it is often the case that the level is impossible if good cooperation is not achieved.
There are several important aspects to address in order to achieve good cooperation. As the motivation to cooperate is assumed by the fact that both agents are on the same team and they have no individual rewards on the game, most problems are related to coordination. By coordination we mean that the actions that the two agents take must be properly sequenced and timed as there are dependencies between them.
\subsubsection{Task Division}
The cooperative levels may present problems where, to be efficient and effective, the agents must decide the best order in which to catch the diamonds. If a diamond can be caught by both agents, they should be able to decide which of them will do it, and not waste time by having both try to do so. They should also know how to divide the tasks in a way that no agent needs to wait longer than necessary to catch a diamond that requires joint action (see below). In fact, the division of the task should identify which parts can be solved by each individual alone and which parts need simultaneous action.
Figure~\ref{fig:cooplevel04}, on the left, presents an example of a level that can be solved faster if the agents catch first their respective diamonds (for the circle the one right above, and for the rectangle the one on the right-low corner) and then go for the one that requires cooperation.
\begin{figure}[h]
\centering
\begin{minipage}{.5\columnwidth}
\centering
\includegraphics[width=\columnwidth]{images/cooplevel04.png}
\end{minipage}%
\begin{minipage}{.5\columnwidth}
\centering
\includegraphics[width=\columnwidth]{images/cooplevel05.png}
\end{minipage}
\caption{Cooperation with different task division strategies} \label{fig:cooplevel04}
\end{figure}
The level shown on the right of the figure presents the opposite situation. The agents start at a position that makes their joint action easy and quick to perform as the first step of the cooperation. Not taking advantage of their initial positions will make it harder to catch the highest diamond and will waste time.
\subsubsection{Joint Action}
Many of the cooperative levels present situations that require both characters to perform coordinated actions at the same time.
For example, as shown before in Figure~\ref{fig:cooplevel01} (left), the circle may need to ride the rectangle to reach other places in the level; this action needs good timing to make sure the characters move together. In general, the circle may use the rectangle as another (possibly moving) platform to jump on. This is often the case when the circle needs help to reach higher ground. Conversely, the rectangle may need the help of the circle to climb a platform, as shown in Figure~\ref{fig:cooplevel01} (right), or to cross from one platform to another, as it lacks the ability to jump. In the first case the rectangle can slide against the circle to tilt to the right, and in the latter case, the circle needs to collide with the rectangle at the right moment to apply additional force to its movement.
\subsubsection{Communication}\label{subsubsec:communication}
Communication is important for cooperation. The Geometry Friends game supports research on either explicit or implicit communication.
We see explicit communication as the exchange of signals with pre-established meaning. The competition framework allows the agents to send messages that can be used as a channel of explicit communication. On the other hand, agents may attempt to send messages using the other available actions, or imply intents through the observation of the partner's behavior. For example, the circle may try to draw the rectangle's attention to a certain position, by rolling to the place and jumping nonstop, expecting the rectangle to understand the message it is trying to convey. If the meaning of this sort of action is not previously known to both agents, then it counts as implicit communication. This kind of communication is more challenging and also of great importance for \gls{AI} research, including game \gls{AI}.
If one of the agents is human, we are in a situation of human-agent interaction. In this case, we can research good ways to communicate using restrictive channels. The agents may be able to quickly transmit useful information to the player either through audio or visual signals, but the other way around may be harder. The players may need to be able to reference certain positions in the level by pointing, for example, but this may not be enough to make their intentions clear. Reading and learning from the actions of the other players will be important for the agents. Another approach is to use Natural Language. There is an increasing interest in the use of Natural Language as a natural interface for AI systems, and there have been several achievements that make this approach promising.
The Geometry Friends game is an interesting test-bed for communication as it deals with communication from a collaborative perspective. Communication can be used to support mutual understanding, to share different perspectives on situation awareness, to devise plans together, to convey intentions, to coordinate actions and even for motivation and emotional support.
\subsection{Situational Awareness}
Situational Awareness is about the perception of the environment, but also includes comprehension of the perceived state, which supports the prediction of the future states\cite{Endsley1995}.
The perceptions are the input given by the sensors of the agent. The game world is fully observable, hence the agents can get information about the full state of the level (e.g. the position of all elements). The perception is not a challenge, but the comprehension comes with some challenges. On the one hand, the dynamic part of the characters' movement is not given by the sensors; for example, the agents do not receive information about the angular velocity of the character or the force that is being applied to generate movement. On the other hand, while it is fairly easy to build a representation of the state of the level, it is not as easy to understand what the agents can do in the level. Agents should build representations of the level that include information about the action affordances as well, for example, how platforms may be connected and what the possible jump points for the circle are. This should also include the identification of situations that may render the conclusion of the level impossible (e.g. falling into a pit). Note that in the presence of coloured platforms, the areas that are available for each of the characters are typically different.
To assess the situation in this game, it is also important to identify which diamonds need joint action and which do not, and understand the intentions of the partner and what it is doing to be able to coordinate the actions.
\subsection{Planning}\label{sebsec:planning}
In Geometry Friends agents need to consider future states for action, in general to come up with a plan for action to solve the level in an efficient way, but also to avoid ending in states that make the level impossible. The game presents opportunities for hierarchical planning as it combines challenges of both task and motion planning. Task planning is more strategic and involves deciding the general movement around the level (e.g. which platforms to take, and in what order) and coordination actions (e.g. where and when the two characters should meet to perform a joint action). Motion planning is lower level and involves deciding the control actions needed to move around the level correctly.
So, in other words, task planning is about puzzle solving. Most of the levels of Geometry Friends present at least one type of puzzle to solve:
\begin{itemize}
\item Deciding which diamond each character should catch, either by itself or with the help of the other. See Figure~\ref{fig:cooplevel04}.
\item Finding the correct order to catch the diamonds. See Figure~\ref{fig:circlelevel01}.
\item Finding a plan that reduces the path between the platforms. See Figure~\ref{fig:circlelevel03} (left).
\item In general, it is a combination of the above. See Figure~\ref{fig:circlelevel03} (right).
\end{itemize}
\begin{figure}[h]
\centering
\begin{minipage}{.5\columnwidth}
\centering
\includegraphics[width=\columnwidth]{images/circlelevel01.png}
\end{minipage}%
\begin{minipage}{.5\columnwidth}
\centering
\includegraphics[width=\columnwidth]{images/circlelevel02.png}
\end{minipage}
\caption{The circle should start by the diamond on the top platform} \label{fig:circlelevel01}
\end{figure}
\begin{figure}[h]
\centering
\begin{minipage}{.5\columnwidth}
\centering
\includegraphics[width=\columnwidth]{images/circlelevel03.png}
\end{minipage}%
\begin{minipage}{.5\columnwidth}
\centering
\includegraphics[width=\columnwidth]{images/cooplevel06.png}
\end{minipage}
\caption{Left: The circle should take the right platform first. Right: a level that combines three kinds of challenges} \label{fig:circlelevel03}
\end{figure}
Some levels may test effectiveness, by providing challenges where the order of the diamonds will influence the solvability of the level, like the one in Figure~\ref{fig:circlelevel01} (left). In this level, if the agent does not catch the diamond on the left first, then it will not be able to do it later. This serves to test if the agent is capable of planning carefully in situations with limited paths to take. On the other hand, all levels are tested for efficiency, through the time limit and the score, which benefits the fastest agents. Figure~\ref{fig:circlelevel01} (right) and Figure~\ref{fig:circlelevel03} (left) are examples where the order does not matter to solvability but has impact on the time taken to finish the level.
When combining different types of puzzle, we are testing if the agent is able to deal with these problems in a balanced way. In Figure~\ref{fig:circlelevel03} (right) the circle should not catch the diamond on the bottom before catching the highest one on the right. To catch this, the circle needs the help of the rectangle. If the rectangle decides to catch the other two diamonds first, the circle will waste time waiting for the rectangle, while if they catch the highest one first, the circle can then go catch the one on the bottom while the rectangle catches the other two.
Motion planning deals with the timing of the low-level actions that the agent can perform in the game. The approach for this kind of planning can be coupled more with the high-level planning or with the motion control. But, in fact, it is a bridge between the two. Motion planning is the process that makes sure that the agent correctly performs the actions that make the plan execution concrete in the game world.
\subsection{Motion Control}
Motion control is about executing the actions in the game world in a way that follows the plan of action. In Geometry Friends, the characters' control is strongly coupled with the physics engine. For example, the movement is achieved by applying a force to the character (e.g. to move the circle a force is applied to make it roll). The collisions are as realistic as the physics engine allows\footnote{We are using a Box2D implementation - https://box2d.org/.}. For example, the spin of the circle affects the collision. Friction and gravity affect the movement as well. This means that predicting the movement of a character is not necessarily easy. Controlling a character in this context is closer to controlling a simulated vehicle than a common character in a platform game.
Moreover, the environment is dynamic, when playing cooperative levels, and non-deterministic in general. It is dynamic from the point of view of a character as the game state may change without the intervention of the character (e.g. it observes changes, but it did not perform any action). It is non-deterministic mostly because of the physics engine. There are slight changes in the internal state of the physics engine that are not perceptible to the character; for this reason, the results of the same action may differ in states that the character perceives as identical. This non-determinism is also due to the non-discrete nature of some of the control actions, which are of a physics-control nature. To move a character, the agent needs to turn on or off a force in the direction of the desired movement, but the effect depends on the current state of the character; for example, if it is moving in the opposite direction it does not move immediately in the desired direction, as it needs to decelerate beforehand.
The above reasons make the motion execution, and planning, challenging. And task re-planning might be necessary as well.
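As a concrete illustration of why this matters for control, the hypothetical snippet below chooses between the two roll actions while taking the current velocity into account, so that the circle starts braking before it overshoots its target; the braking-distance estimate is a made-up heuristic and not part of the game.
\begin{verbatim}
def choose_roll_action(circle_x, circle_vx, target_x, decel=200.0):
    # Pick ROLL_LEFT / ROLL_RIGHT so that the circle brakes in time.
    # decel is an assumed effective deceleration in game units per second^2.
    braking_distance = (circle_vx ** 2) / (2.0 * decel)
    error = target_x - circle_x
    if circle_vx > 0 and error > 0 and braking_distance >= error:
        return "ROLL_LEFT"           # already moving right: start braking early
    if circle_vx < 0 and error < 0 and -braking_distance <= error:
        return "ROLL_RIGHT"          # already moving left: start braking early
    return "ROLL_RIGHT" if error > 0 else "ROLL_LEFT"

print(choose_roll_action(circle_x=100.0, circle_vx=250.0, target_x=160.0))
\end{verbatim}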
\section{THE COMPETITION}\label{sec:Competition}
The Geometry Friends Game AI Competition\footnote{https://geometryfriends.gaips.inesc-id.pt/} has been running since 2013. The main objective of the competition is to present different cooperative challenges to \gls{AI} agents.
The competition includes two main tracks: the Cooperative Track and the Single AI Track that is divided in two sub-tracks: the Circle Track and the
Rectangle Track. Although the main track is the Cooperative Track, we also include single player tracks for participants that want to tackle the puzzle problem-solving and the characters’ control issues before undertaking the more demanding task of cooperation because, to excel in the cooperation task, agents will need good individual control and problem-solving capabilities.
Participants are free to use any approach and algorithm they believe will solve the levels, while bearing in mind that the challenge is real-time. Levels have a time limit.
The ranking process ensures an unbiased comparison between all competitors. Submitted agents are evaluated and automatically tested through ten levels per track. Five of those levels are made public, to aid the agent development, while the other five are kept private until the end of the competition. The objective of keeping some levels private is to help understand if there is any type of overspecialization of the agents. If an agent solves all public levels, and none of the private ones, then the agent was specifically trained to solve the public ones, and may not be able to solve any other.
Each level has a time limit and, if the agent's solution takes longer than this limit, the level is considered incomplete and the score is computed accordingly. Each submission is
evaluated in each level a total of ten times, to reduce the effects of chance due to the real-time execution of the physics engine.
This results in a total of 100 runs per submission. Agents are ranked by the total number of collectibles that they gather in each level as well as by the time they take to
complete the level. The final score of a level is the average score across the 10 runs of the level. The score of run $i$ is computed through the following formula:
\begin{equation}\label{Eq:Score}
\textsc{score}_{i} = V_{completed} \times \frac{ (T_{max} - t)} {T_{max}} + ( V_{collect} \times N_{collect} ),
\end{equation}
where $V_{completed}$ is a bonus awarded for completing the level, $T_{max}$ is the time limit, $t$ is the time the agent took to finish the level, $V_{collect}$ is the score awarded per diamond collected, and $N_{collect}$ is the number of diamonds collected.
The final score is the sum of the scores of the ten levels of the respective track.
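For reference, the scoring of a single run translates directly into code; the sketch below uses placeholder constants, since the concrete values of $V_{completed}$, $V_{collect}$ and $T_{max}$ are set per level, and it assumes that the completion bonus term is simply dropped for incomplete runs.
\begin{verbatim}
def run_score(completed, time_taken, n_collected,
              v_completed=100.0, v_collect=20.0, t_max=100.0):
    # Score of a single run, following Eq. (1); constants are placeholders.
    completion_bonus = v_completed * (t_max - time_taken) / t_max if completed else 0.0
    return completion_bonus + v_collect * n_collected

def level_score(runs):
    # Average over the 10 runs of a level; each run is (completed, time, collected).
    return sum(run_score(*r) for r in runs) / len(runs)

print(run_score(completed=True, time_taken=35.0, n_collected=3))
\end{verbatim}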
\section{THE FRAMEWORK}\label{sec:FrameWork}
The competition provides a framework to develop, run and test agents (e.g. it includes the executable of the game). Some examples of agents are provided and some starting guides as well.
\subsection{Agent development}\label{sec:AgentDevel}
The framework provides the baseline implementation for the two agents. This baseline contains the mandatory main classes that implement the agents' interfaces in C\#, together with a project solution. The participants may organize the rest of the project as they see fit, as long as it is possible to compile the agents from another machine.
The interface provides four input/output methods. Agents receive two types of sensor arrays as input: one at the beginning of the level, through the method $Setup()$, and the other at each update cycle, through the method $SensorsUpdate()$.
The setup method shares the static information of the level and sends the following information through its arguments:
\begin{itemize}
\item $nI$ - The total number of obstacles (platforms) and diamonds.
\item $rI$ and $cI$ - The initial state of the rectangle and circle characters.
\item $oI$, $rPI$ and $cPI$ - The information about the obstacles.
\item $colI$ - The information about the diamonds.
\item $area$ - Game area dimensions.
\item $timeLimit$ - The time limit of the level.
\end{itemize}
Agents can also make their preparatory computations and loading of relevant data in the setup method.
The agents receive additional information through the sensors update method. The state is shared in the following arguments:
\begin{itemize}
\item $nC$ - The number of uncaught diamonds.
\item $rI$ and $cI$ - The current state of the rectangle and circle characters.
\item $colI$ - The information of the diamonds in the level.
\end{itemize}
An obstacle, or platform, is represented by its position and size, and a diamond by its position. The circle character information contains the position, velocity and radius, and the rectangle character information contains the position, velocity and height.
At each update, the agent is expected to return an action within the possible ones, through the method $getAction()$. The agent can also perform computation in response to the $Update(TimeSpan time)$ method that is regularly called on each game update cycle.
The circle character may choose the following actions:
\begin{itemize}
\item \textit{Jump.} This applies a force to give an upward impulse to the character. It only works if the circle is on solid ground. If this is chosen and the circle is moving in the air, the action has no effect.
\item \textit{Roll left.} This applies a force to the left in the attempt to make the circle spin to the left and roll if it is on a platform. The action is always possible. The circle can spin on air. Applying a force to roll left does not make the circle to instantly start moving left. If the circle is spinning to the right, it is first necessary to counter the current rotation. While on, the angular velocity is changed until it reaches a maximum value.
\item \textit{Roll right.} Similar to roll left, but the force is applied to spin the circle to the right.
\end{itemize}
The rectangle character may choose the following actions:
\begin{itemize}
\item \textit{Slide left.} This applies a force that attempts to push the Rectangle character to the left. As in the case of the roll action of the circle character, the movement may not start immediately to the left because of the character's current speed. While on, the velocity increases until it reaches its limit.
\item \textit{Slide right.} Similar to Slide Left, but pushing the rectangle to the right.
\item \textit{Morph up.} This reshapes the rectangle character, at a constant rate, increasing its height and reducing its width to keep a constant area (and mass). While on, the control is performed gradually until the height limit is reached. It can be performed while in the air.
\item \textit{Morph down.} Similar to Morph Up but decreases the rectangle's height.
\end{itemize}
Both characters can choose not to perform an action or send a message to the other character. By sending a message they share an object that may contain any information. The message is delivered in the next game update and is handled by a specific method (i.e. $HandleAgentMessages(AgentMessage[] newMessages)$).
In either case, no control is executed and the characters keep their current movement. Note that a stop action is not available. To stop, the agent needs to perform a counter action (e.g. request the application of a force in the opposite direction of the current movement) or wait for the effects of friction. Friction has more effect on the rectangle, because it slides.
In addition, participants should be aware that characters may move without directly activating any movement controls, due to other moving objects (e.g. because of collisions between characters) in the game world. The constants used in the physics engine, such as, gravity, friction, and the force and torque values used in the characters’ controls are provided in the API.
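To make the interaction pattern concrete, the following is a Python-style sketch of the sense-decide-act cycle; actual competition agents must implement the corresponding C\# interface, and the method names, data layout and naive diamond-chasing policy below are only illustrative analogues of the elements described above.
\begin{verbatim}
class CircleAgentSketch:
    # Illustrative only: mirrors the Setup / SensorsUpdate / GetAction cycle.

    def setup(self, obstacles, diamonds, circle_state, time_limit):
        # Precompute anything expensive here; the real Setup() also receives
        # the rectangle state, the coloured platforms and the game area.
        self.targets = list(diamonds)
        self.circle_x = circle_state["x"]

    def sensors_update(self, circle_state, remaining_diamonds):
        self.circle_x = circle_state["x"]
        self.targets = list(remaining_diamonds)

    def get_action(self):
        if not self.targets:
            return "NO_ACTION"
        target_x = self.targets[0]["x"]          # naive: head for the first diamond
        if abs(target_x - self.circle_x) < 10:
            return "JUMP"
        return "ROLL_RIGHT" if target_x > self.circle_x else "ROLL_LEFT"

agent = CircleAgentSketch()
agent.setup(obstacles=[], diamonds=[{"x": 300}], circle_state={"x": 100}, time_limit=100)
agent.sensors_update({"x": 100}, [{"x": 300}])
print(agent.get_action())   # -> "ROLL_RIGHT"
\end{verbatim}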
\subsection{Additional Tools}
The framework provides additional tools that can be used to support the development of agents to the competition.
\subsubsection{Level Editor}
The Geometry Friends framework includes a level editor to allow the creation of new levels. This way, the developers can create all the scenarios they may need to train and test their agents. For example, agents that use machine learning algorithms may rely on a great number of different levels during their training phase. Creating simpler levels can also help developers to visually understand what isolated problems the agents are already (or not) capable of solving.
There are also packs available with levels that were used in previous competitions and in other contexts of the game (e.g. for human to human collaboration).
\subsubsection{Forward Model}
The framework includes a component to support a forward model. It clones the game state and is able to run a simulation (faster than the real-time pace of the game), resulting in a prediction of a future state, within a time interval, if a given action is performed.
\subsubsection{Batch Simulation}
It is possible to run the game in a batch. This means that developers can run their agents in a set of levels repeatedly with or without the game graphics on. If the graphics are off it is possible to run the game at different speeds. The tools write a report with the results of all individual executions of an agent per level. It is also possible to run different agents in the batch process, to support comparison, for example.
This batch simulation is, in fact, the core engine for running the competition and dealing automatically with the submissions received from the website.
\subsubsection{Visual Debug}
While running the game, it is possible to turn on visual debug to get some additional visual information. This is drawn on top of the game's graphical user interface during the execution of the game. The developers can write their own information on this tool (e.g. lines, figures and text). There is a text debug output as well, but the visual feature is quite useful to help understanding the agents' behaviour at run-time.
\section{CURRENT SOLUTIONS}\label{sec:solutions}
Along the years the competition has been running, several solutions have been proposed.
In this section, we will briefly describe these solutions. These descriptions are mainly based on a report that is requested by the submission process. Submissions also include source-code that can be analysed to understand further their particular technical details. The source code can be found in the competition website \footnote{In http://gaips.inesc-id.pt/geometryfriends/, and in the new site https://geometryfriends.gaips.inesc-id.pt/, for submissions after 2018}. The best solutions have been kept as baseline agents for the competition.
Unfortunately, in the first editions of the competition the submission of source code was not mandatory, hence some descriptions in this paper lack some detail in the cases that the report was not sufficiently informative.
With this section, we want to understand how these agents deal with the following aspects:
\begin{itemize}
\item The tracks the agents solve.
\item Input processing.
\item Planning algorithms.
\item Motion control.
\item Planning/Execution layer structure.
\item Problems identified.
\end{itemize}
\subsection{Agents}
\subsubsection{\textbf{CIBot}}
The CIBot agents~\cite{CIBotCoop}\cite{CIBotRectangle}\cite{CIBotCircle}, have a different approach for each character, and for the cooperation track.
The agents make a first analysis of the level, checking where it is possible for them to move, and generating a directed graph. In the case of the rectangle, the graph includes transitions for 3 different shapes of the rectangle (full height, mid height and lowest height).
The rectangle agent uses a Monte-Carlo Tree Search (MCTS) algorithm. The directed graph extracted from the level analysis is converted into a tree to be used by the search. The initial graph and, consequently, the tree, has information on its edges about one of three shapes the rectangle needs to take to go from one node to the other.
The circle agent uses Dijkstra's algorithm for path finding. The resulting path is then divided into an action plan, which is translated into the actions the agent needs to perform in order to reach every node of the plan.
The control for each agent consists of an action queue that is filled by the planner. The control is simple and greedily moves the agent to the place represented by the graph node, where it then executes one of the actions: stay, jump or morph.
For the cooperative levels, the agents first attempt to catch the diamonds they can by themselves, and then enter a ``cooperative state''. The authors define this state as having the circle above a morphed-down rectangle, in what they call a ``riding position''. This allows the circle to reach higher diamonds or platforms. To get into this ``riding position'', the agents follow a set of rules that involve placing the rectangle agent in the right position and waiting for the circle to be on top of the rectangle, which then morphs up.
The authors identify some problems: the MCTS algorithm sometimes returns impossible paths, and the cooperation approach does not deal with all cases. The approach failed in most private levels, despite its good performance on the public levels.
\subsubsection{\textbf{KUAS-IS Lab}}
The KUAS-IS Lab agents~\cite{KUAS-IS} only play the single-player tracks. The two agents use a similar approach: they use A* search and Q-learning to solve the levels. A* search is used to find the shortest path that goes through all the diamonds in the level. This is based on a graph that represents the level. The graph is similar to that of CIBot, but it also includes the diamonds' positions. The search heuristic uses the distance to the collectible; however, the agent tries to avoid the pitfalls of a purely greedy approach, for example when going to the closest diamond first renders the level impossible. To address this problem, the search heuristic is weighted by the values of a Q-table that was built beforehand (offline) by training with Q-Learning. The control policy makes use of the offline Q-Learning training as well.
The authors recognize the difficulty of controlling the characters and thus suggest the use of a reinforcement learning approach.
\subsubsection{\textbf{OPU-SCOM 2014}}
The OPU-SCOM agent~\cite{OPU-SCOM} deals only with the rectangle character for the single-player track. It uses a hierarchical approach, dividing the agent into two layers.
The first layer searches for a global strategy while the second is related to the control of the character and searches for the sequence of actions to complete the strategy. In the first layer, the level is converted into a graph by generating cells that cover the level. Cells are generated by tracing lines aligned with all platforms' edges. Each cell becomes a node in the graph and neighbouring cells are connected. Nodes are added for the diamonds and for the initial position of the character; these are connected to the node that represents the cell they are in. Dijkstra's algorithm is applied to define possible paths between the character and the diamonds, and a particle swarm optimisation (PSO) algorithm searches for the best order in which to pick up the diamonds, given the possible paths. The agent selects the first diamond on the ordered list returned by the PSO and defines a hierarchical task plan composed of meta-tasks representing goals, such as catching a diamond or falling down, and a set of sub-objectives.
The second layer takes the meta-tasks, in the order defined in the plan, and selects and executes the actions needed to achieve them. It retrieves this information from a mapping between meta-tasks and tasks, adding the corresponding parameters. The agent uses four different tasks corresponding to the control actions: slide left, slide right, morph up and morph down. Tasks are executed until they reach successful termination, which is checked at every step (e.g. the character reached the targeted move position). Upon termination, the next task is started.
\subsubsection{\textbf{OPU-SCOM 2015}}
In 2015, the team submitted an updated version of the agent~\cite{OPU-SCOM2015}, which follows a similar hierarchical approach divided into two layers. The first one is used to compute the global strategy, by first identifying (offline) the various sub-objectives of the game and then generating the best strategy (sequence of sub-objectives) using a combination of a genetic algorithm and a neural network. The second layer then generates orders to perform the computed strategy.
The sub-objectives are defined in a list that was compiled by identifying the relevant sub-objectives while running the OPU-SCOM 2014 agent. The sub-objectives are related to changes in the values of the properties of the game objects (e.g. increase or decrease the values of the X and Y coordinates, increase or decrease the velocity in terms of X or Y, or increase or decrease the height of the rectangle). A strategy generator, based on a neural network using a Neuro Evolution of Augmenting Topologies (NEAT) approach, then takes the game state at run-time as input and outputs a single sub-objective. This sub-objective is then sent to the second layer. The second layer uses hard-coded rules that translate each sub-objective to its corresponding control action. For example, the sub-objective of increasing X corresponds to the control action slide right.
\subsubsection{\textbf{RL Agent}}
The RL Agent \cite{RL} was developed to tackle the single-player circle track, but it was never submitted to the competition. It was, nevertheless, the baseline for the development of the PG-RL Agent submitted to the competition in 2015.
The RL Agent divided the problem of solving a Geometry Friends level in three sub-problems (SP):
\begin{itemize}
\item SP1 - Catching the diamonds on the platform the agent is currently on.
\item SP2 - Deciding the next platform to go to.
\item SP3 - Moving to a platform.
\end{itemize}
A Geometry Friends level is then solved by repeatedly solving the series (SP1 $\rightarrow$ SP2 $\rightarrow$ SP3), starting with the platform where the character is initially placed.
The initial step, performed at setup time, is the creation of a navigational graph and the identification of platforms. A platform was defined as the region of an obstacle where the agent can move without having to jump. This means that the platforms defined in the level can be separated into more than one platform in the agent's representation of the game world. Then the diamonds in the level are assigned to platforms (to support SP1). They are assigned to the platform right below them.
SP1 and SP3 are treated as motion control problems and use a reinforcement learning approach. When solving one platform (SP1), the character tries to catch all the diamonds assigned to the platform in a greedy way. The following features were used for learning: the position of the closest diamond on the platform, the circle's position in relation to the edges of the platform, the presence of a safe edge (blocked by an obstacle), the character's distance to the closest diamond, the character's velocity in X and the number of diamonds on the platform. The reward was given by the number of collected diamonds, the number of diamonds to collect, the distance to the closest diamond and the time left until the level's time limit. SP3 uses additional features, such as the distance between the platforms, the distance to the jump point on the edge and the size of the landing platform. The reward is the time taken to perform the jump. The training of the agent used a biased randomized action selection that makes the agent move more often than jump during exploration.
To solve SP2, a \gls{DFS} search is used, taking the agent's current position as the starting point and trying to reach a platform that still has diamonds to collect. SP2 is triggered any time the agent is on a platform that has no diamonds to collect. Once SP2 finds a solution, SP3 is triggered to make the agent jump to the first platform returned by the search. If the agent is on a platform with diamonds to collect, SP1 is triggered.
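A minimal sketch of the resulting control loop is shown below; solve_platform, find_path_to_diamonds and jump_to are hypothetical stand-ins for the learned SP1/SP3 policies and the \gls{DFS} search of SP2.
\begin{verbatim}
# Illustrative sketch of the RL Agent's SP1 -> SP2 -> SP3 loop;
# the helper functions are hypothetical stand-ins.
def solve_level(agent, level):
    while level.diamonds_remaining() and not level.time_out():
        platform = agent.current_platform()
        if platform.has_diamonds():
            # SP1: greedy collection on the current platform
            solve_platform(agent, platform)
        else:
            # SP2: DFS over platforms with diamonds left
            path = find_path_to_diamonds(agent, level)
            if not path:
                break   # no reachable diamonds remain
            # SP3: learned jump controller to the next platform
            jump_to(agent, path[0])
\end{verbatim}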
The authors recognize that the assignment of the diamonds has some problems when diamonds are between platforms. Additionally, the greedy \gls{DFS} search will not perform well in complex puzzles. Some problems were found when trying to jump to small platforms.
\subsubsection{\textbf{PG-RL Agent}}
The PG-RL Agent \cite{PGRL} extended the RL approach described in the previous section to work for the rectangle and cooperation tracks. It also improved the training set of the RL agent for the circle to include more situations for the agent to train on, in particular to improve jumping from one platform to another. The feature vector was simplified by merging the information about the position and size of the platform and the speed of the character, and by adding symmetry so that situations with symmetric coordinates are treated as the same, hence reducing the dimensionality of the game state. The sub-problem SP2 was solved using Dijkstra's algorithm instead of the \gls{DFS} to achieve shorter solutions that save time and improve the score.
The feature set for the rectangle was similar to the one used for the circle, only including an additional feature with the height of the character. The navigational graph was generated taking into account the shape of the rectangle. The overall algorithm was the same, involving series of (SP1 $\rightarrow$ SP2 $\rightarrow$ SP3).
To address collaboration, a naive approach was used. The agents kept their initial training and trained further on a set of levels with cooperative challenges. A new feature, the position of the other character, was included, but it was only activated when the characters were close together. In the other cases the agents would represent the state as a single-player problem. This makes it easier for the agent to reuse its previous knowledge. The planning part that solves SP2 did not change much. In the end, however, the agent did not perform very well in the cooperation track.
\subsubsection{\textbf{RRT Agent}}
The RRT Agent \cite{GFRRT} uses the \gls{RRT} algorithm to address the single-player tracks.
This solution is divided into two layers, planning and control. The \gls{RRT} is used for planning at the beginning of the level, and returns a path containing a sequence of what the authors define as ``points''. These points are states to reach, and can be descriptive or direct instructions. For example, one possible point is ``jump'', instructing the circle agent to jump, while another one is ``diamond above'', indicating that a diamond is above the agent.
The controller then takes the information it needs from the path points, and computes an estimate of the velocity needed when reaching each point, as well as determining when the agent should jump or morph. To help the agents achieve the correct velocity, a \gls{PID} controller is used. Some of the points given by the plan already have motion instructions to be followed by the controller.
The authors recognize some problems related to the control of the characters, more specifically, of the rectangle.
\subsubsection{\textbf{RRT2017 Agent}}
The RRT2017 Agent~\cite{RRT-EPIA} was developed using the RRT Agent as a baseline. It improved the state projections of the search by using the forward model provided by the competition framework, which was not yet available for the first agent.
Using this forward model was more computationally expensive, but it was more precise. Some other extensions were included in the \gls{RRT} search to reduce the search time. A re-planning capability was also added, in case the planner was not able to find a complete solution at first, or the agent failed to reach a desired state of the plan. Some of these extensions were included to deal with the cooperation track.
This agent uses a different controller than its baseline. This controller performs real-time motion planning by checking the current position and velocity of the agent and comparing it to the desired one. It then computes the action needed, depending on the acceleration it needs to achieve said velocity at said position. If the state of the plan has a jump action associated, then the agent jumps when it reaches said state.
The main problems pointed out by the authors are related to the search time taken due to the use of the forward model. This is a problem in particular for the cooperative agents. The authors also note that the controller can fail in some tricky situations where the margins of error used to check the action results may prevent the success of the action.
\subsubsection{\textbf{Rule-based RRT Agent}}
The rule-based RRT agent~\cite{RB-RRT} took the RRT2017 agent as a baseline to improve the cooperation aspect of the agents, by identifying cooperation problems and creating a set of rules to deal with each of them. The agents have a single-player mode and a cooperative mode.
At the beginning of the level, each diamond is assigned to either agent, or to both if cooperation is required. In the latter case, the type of cooperation problem is also identified. The authors identify three possible cooperation problems:
\begin{itemize}
\item Height: the circle cannot reach the diamond by itself, needing the rectangle to be used as a higher platform.
\item Tight space: there is a diamond between two platforms that only the rectangle can reach, but it needs the help of the circle to climb the platform.
\item Agent-specific platforms: some platforms can only be used as such by one of the agents, and the other needs the first in order to cross them.
\end{itemize}
The agents catch their assigned diamonds first, and only then enter cooperative mode, where they follow the rules according to the problem.
\subsubsection{\textbf{Subgoal A* Agent}}
The Subgoal A* Agent~\cite{SubGoalAStarThesis} uses a modification of the A* algorithm, the subgoal A*, to play the rectangle track.
The agent starts by creating an abstraction of the level to be used by the search algorithm, considering the abilities of the character. This results in a boolean matrix where the space occupied by obstacles is set to true. After that, a graph is created, where the edges indicate the action to be taken in order to move from one node to the other, including the shape the rectangle must take as well as the direction to follow.
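As a simple illustration, such an occupancy matrix can be built by rasterising the level's obstacle rectangles onto a grid; the sketch below assumes obstacles are given as (x, y, width, height) tuples in pixel coordinates and uses an illustrative level size.
\begin{verbatim}
import numpy as np

def occupancy_matrix(obstacles, width=1280, height=800):
    # Boolean level abstraction: True where the space is
    # occupied by an obstacle. 'obstacles' is assumed to be a
    # list of (x, y, w, h) tuples in pixels; the default level
    # size is illustrative only.
    grid = np.zeros((height, width), dtype=bool)
    for x, y, w, h in obstacles:
        grid[y:y + h, x:x + w] = True
    return grid
\end{verbatim}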
The search is then performed by the subgoal A*, an adaptation of the A* algorithm that adds a new property to each node. This property contains the diamonds that have been caught, allowing the search to use a goal state in which all diamonds have been caught.
The controller then executes the plan, using a set of 13 rules, according to the nodes of the plan.
\subsubsection{\textbf{KIT Agent}}
The KIT Agent~\cite{KIT} divides the problem into planning and its execution.
Similar to the RL Agent approach, there is a redefinition of the platforms into the regions where the agents can move. From there, a directed graph is obtained by identifying from which platforms it is possible to reach other platforms and collect diamonds. To determine this, trajectories are predicted by simulating the possible motion of the circle. The planning is done by applying the Subgoal A* algorithm, similarly to~\cite{SubGoalAStarThesis}, to the directed graph.
The execution of the plan is done by the controller. This takes into account the positions and horizontal velocities given by the returned path. Then, like other previously mentioned agents, the KIT Agent selects the actions according to a previous Q-learning process.
The authors are aware of some limitations of this solution. For example, the agent may fail in levels with narrow platforms, which require small steps and jumps in order to be climbed, and in levels with positions that can only be reached by colliding with platforms.
\subsubsection{\textbf{Supervised DL Agent}}
The Supervised DL Agent~\cite{Supervised} uses a deep learning approach for the circle character.
The authors developed their technique by observing humans playing the game. They captured screenshots and recorded the keys the human player pressed to perform the actions. This human-generated data was then used for the network training.
During training, the action selection is $\epsilon$-greedy; after that, it is purely greedy.
The authors acknowledge that, to follow this kind of approach successfully, the agents need more data and training.
\subsubsection{\textbf{Neural Reinforcement Learning Agent}}
The Neural Reinforcement Learning Agent~\cite{NRL} uses a reinforcement learning approach that uses neural network function approximation to deal with the single-player circle track.
The network has an input layer, three hidden layers and a linear output layer. This output layer represents the approximate Q-value for each action available at the input state. The network input consists of the distances to the obstacles around the agent in 32 different directions.
This approach alternated between training, where the objective was to optimize the neural network to minimize the error of the value functions; and data gathering, where the agent interacted with the environment according to an $\epsilon$-greedy policy.
The author also recognizes that such an approach needs more training time and data, despite some improvements in the agent's neural architecture.
\subsubsection{\textbf{MARL-GF}}
The MARL-GF agent~\cite{MARLThesis} uses a \gls{MARL} approach for the cooperation track. The agents have three layers: trajectory analysis, planning and control.
The trajectory analysis layer takes into consideration the general motion equations for constant acceleration for both characters and also makes a climbing platform analysis in the case of the rectangle. For the cooperative levels, it also considers the riding position (when the circle stands on the rectangle and both characters move together).
The planning layer uses the Subgoal A* search on a directed graph, like some of the previously mentioned agents. For the cooperation levels, a directed graph is generated for each character and cooperative component, and then merged into a single directed graph on which the algorithm is performed.
For the control, both characters use Q-Learning, as many others do. As for the cooperation aspects, the document refers to the use of either Team Q-Learning or Optimal Adaptive Learning for the training phase.
\subsubsection{\textbf{NKUST}}
The NKUST~\cite{NKUST} agents address cooperation as well.
The ``riding position'' is the main focus, as it is the most common type of collaboration in the presented levels. First, the area where the rectangle can move is computed, and, thus, which diamonds it can catch. With this, the circle character can get more information about the areas it can reach, since the rectangle is seen as a movable platform.
When searching for a path, the priority is catching the diamonds in an order that makes it possible to catch the highest number of diamonds, by using a \gls{DFS} trial. Then an attempt to find a path is made with the A* algorithm. If no complete path is found, the agent follows the one with the most diamonds it has already discovered.
In terms of task division, the rectangle has priority over the diamonds. If this character can catch all of them by itself, then the circle is not used. Only when there are one or more diamonds that the rectangle cannot catch does the circle perform the path planning, considering the motion space of the rectangle. Whenever the circle needs the rectangle to reach a position, this information is added to the ``collaborative nodes'' of the search. When the circle finishes its planning, the rectangle performs another, with the information of the nodes that need collaboration, if any exist.
To perform the cooperative tasks, an agent will wait for the other if it reaches the start position for the cooperative motions first.
\subsubsection{\textbf{AGAgent}}
The AGAgent~\cite{AGAgent} performs several planning steps for the circle track.
At the beginning of the level, the original state space is converted into abstract regions. These abstract regions are what has previously been described as platforms, i.e. surfaces, referred to as segments, on which the character can move. The agent then explores this new state space, using control policies, with the objective of identifying the possible transitions between the states. These policies consist of:
\begin{itemize}
\item RollTo($x$, $vx$): Rolls to the target $x$ position with velocity $vx$.
\item JumpAndStop: Jumps and forces the agent to stop when landing.
\item FallAndStop: Lets the agent roll off the segment and forces it to stop when landing.
\item RollUp: Rolls in an attempt to climb a segment.
\end{itemize}
With this, a graph is generated and an A* search is performed.
The returned plan already contains the actions necessary for its execution, as a result of the policies previously used during the initial exploration.
One of the problems the authors mention is related to the simulation phase where sometimes the predictions returned are not accurate.
\subsubsection{\textbf{Other Non-participant Agents}}
There are some agents developed for the Geometry Friends game that did not participate in the competition, but have been described in research publications.
Özgen et al.~\cite{Generalized} proposed a circle agent that also resorts to reinforcement learning. Similar to the Supervised DL Agent, the learning is done by having stacked frames as input to a convolutional neural network, where the resulting image is processed into states. Actions are then returned as output. The performance of the agent was compared with human and random agent performances. This approach was able to complete more games than the random agent, though it still had difficulties in doing so. The authors believe this problem may be due to the architecture of the network not being able to generalize to all levels, or that it may need to be iterated with more frames so it can converge.
Simões et al.~\cite{Geo2} used asynchronous deep learning to create the agents' policies. The agents were tested with both \gls{A3C} and \gls{ANQ} approaches. The authors explain that they initially had difficulties during training, due either to the high time cost per training step or to an insufficient number of samples to achieve decent policies. In order to help speed up the training process, they used the breadcrumbs approach and input shaping techniques to decrease the complexity of the network.
The authors of these agents implemented their own version of the game to develop their agents.
\subsection{Explored AI Problems}
In the following tables we summarise the problems the agents deal with and the algorithms/strategies they use. Table~\ref{tab:tracks} shows the tracks the agents try to solve. The planning algorithms and strategies used are summarised in Table~\ref{tab:planning}, and Table~\ref{tab:execution} shows the algorithms and strategies used for plan execution/motion control.
\begin{figure}[h]
\centerline{\includegraphics[width=\columnwidth]{images/layers.png}}
\caption{Examples of different layer separation for planning and control} \label{fig:planninglayers}
\end{figure}
Figure~\ref{fig:planninglayers} shows the different ways the agents described previously organise the layers of planning and execution. The first, on the left, has task planning, motion planning and plan execution in three separate layers. In this case, there is a first planner (task) that outputs a set of objectives, which are turned into a set of actions after going through the second planner (motion), to be then executed by the controller. In the second case, both task and motion planning are done together. The difference here is that the task planning is done already considering the challenges of motion control. The third implies that the controller receives only the set of planned objectives and, in real-time, for each objective, plans the necessary actions to follow. The fourth represents the cases where the motion planning is done offline, usually through machine learning or through the definition of rules, but the task planning is still done while the agent is running. In the last case, both task and motion planning are performed offline. In Table~\ref{tab:layers} we list the agents by the layer structure we believe they follow, according to their description. The numbers represent the structures from Fig.~\ref{fig:planninglayers} from left to right. It is possible to conclude that most agents use an offline motion planning approach.
The results of the 2019 competition, presented in Table~\ref{tab:compResults}, show that the MARL-GF and KIT agents are the most promising solutions at the moment. Still, they did not perform perfectly, with several levels of their tracks they were not able to finish, especially in the cooperation track.
\begin{table}[h]
\begin{tabular}{lccc}
\multirow{2}{*}{Agent} & \multicolumn{3}{c}{Tracks} \\ \cline{2-4}
& \multicolumn{1}{l}{Cooperation} & \multicolumn{1}{l}{Circle} & \multicolumn{1}{l}{Rectangle} \\ \hline
CIBot & Yes & Yes & Yes \\
KUAS-IS Lab & No & Yes & Yes \\
OPU-SCOM 2014 & No & No & Yes \\
OPU-SCOM 2015 & No & No & Yes \\
RL Agent & No & Yes & No \\
PG-RL & Yes & Yes & Yes \\
RRT & Yes & Yes & Yes \\
Subgoal A* & No & No & Yes \\
RRT2017 & Yes & Yes & Yes \\
KIT & Yes & Yes & Yes \\
Supervised DL Agent & No & Yes & No \\
NRL & No & Yes & No \\
MARL-GF & Yes & Yes & Yes \\
NKUST & Yes & No & No \\
Rule-based RRT & Yes & Yes & Yes \\
AGAgent & No & Yes & No
\end{tabular}
\caption{\label{tab:tracks}Tracks agents are prepared for.}
\end{table}
\begin{table}[h]
\begin{tabular}{lccc}
\multirow{2}{*}{Agent} & \multicolumn{3}{c}{Planning} \\ \cline{2-4}
& \multicolumn{1}{l}{Cooperation} & \multicolumn{1}{l}{Circle} & \multicolumn{1}{l}{Rectangle} \\ \hline
CIBot & - & Dijkstra & MCTS \\
KUAS-IS Lab & - & A* & A* \\
OPU-SCOM 2014 & - & - & PSO \\
OPU-SCOM 2015 & - & - & NEAT \\
RL Agent & - & DFS & - \\
PG-RL & - & Dijkstra & Dijkstra \\
RRT & - & RRT & RRT \\
Subgoal A* & - & - & Subgoal A* \\
RRT2017 & RRT & RRT & RRT \\
KIT & - & Subgoal A* & Subgoal A* \\
Supervised Agent & - & DL & - \\
NRL & - & RL & - \\
MARL-GF & MARL & Subgoal A* & Subgoal A* \\
NKUST & A* & - & - \\
Rule Base RRT & Rules & RRT & RRT \\
AGAgent & - & A* & -
\end{tabular}
\caption{\label{tab:planning}Planning algorithms/strategies used by the agents.}
\end{table}
\begin{table}[h]
\begin{tabular}{lcc}
\multirow{2}{*}{Agent} & \multicolumn{2}{c}{Execution} \\ \cline{2-3}
& Circle & Rectangle \\ \hline
CIBot & Rule-based & Rule-based \\
KUAS-IS Lab & Q-Learning & Q-Learning \\
OPU-SCOM 2014 & - & Rule-based \\
OPU-SCOM 2015 & - & Rule-based \\
RL Agent & RL & - \\
PG-RL & RL & RL \\
RRT & Plan Execution + PID & Plan Execution + PID \\
Subgoal A* & - & Rule-based \\
RRT2017 & Motion planing & Motion planning \\
KIT & Q-Learning & Q-Learning \\
Supervised DL Agent & DL & - \\
NRL & RL & - \\
MARL-GF & Q-Learning & Q-Learning \\
NKUST & Q-Learning & Q-Learning \\
Rule Base RRT & \begin{tabular}[c]{@{}c@{}}Rule-based +\\ Motion Planning\end{tabular} & \begin{tabular}[c]{@{}c@{}}Rule-based +\\ Motion Planning\end{tabular} \\
AGAgent & Plan Execution & -
\end{tabular}
\caption{\label{tab:execution}Motion/Control algorithms used by the agents.}
\end{table}
\begin{table}[h]
\begin{tabular}{lllll}
\multicolumn{5}{c}{Structure} \\ \hline
\multicolumn{1}{c}{1} & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{3} & \multicolumn{1}{c}{4} & \multicolumn{1}{c}{5} \\ \hline
OPU-SCOM 2014 & CIBot & RRT2017 & KUAS-IS Lab & Supervised \\
OPU-SCOM 2015 & RRT & RB RRT & PG-RL & NRL \\
& AGAgent & NKUST & Subgoal A* & \cite{Generalized} \\
& & & KIT & \cite{Geo2} \\
& & & MARL-GF &
\end{tabular}
\caption{\label{tab:layers}Organisation of planning layers as presented in Figure \ref{fig:planninglayers}.}
\end{table}
\begin{table}[h]
\centering
\begin{tabular}{lll}
\multicolumn{1}{c}{Track} & \multicolumn{1}{c}{Agent} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Score\\ (approx.)\end{tabular}} \\ \hline
\multirow{4}{*}{Cooperation} & MARL-GF & 11887 \\
& RRT2017 & 4490 \\
& Rule-based RRT & 4166 \\
& NKUST & 3087 \\ \hline
\multirow{3}{*}{Circle} & KIT & 14804 \\
& AGAgent & 14394 \\
& RRT2017 & 8858 \\ \hline
Rectangle & MARL-GF & 13946 \\
& RRT2017 & 10826 \\
& Subgoal A* & 5042
\end{tabular}
\caption{\label{tab:compResults}Results of the 2019 Competition}
\end{table}
\section{Conclusions and Future Work}\label{sec:conclusions}
Game AI competitions are important motivators for research and development on Game AI, which deals with issues that can be applied to more general AI problems.
We believe that the Geometry Friends Game AI Competition is a strong competition, for it deals with several \gls{AI} problems related to cooperation, task and motion planning, and control, all of which need to work in real time. Some of these problems are already explored by other competitions, but our competition is unique in the way it assembles them all at the same time.
Geometry Friends is also a game that can later be expanded to add more challenges and deal with other AI problems.
First of all, the game has some features that have yet to be introduced, such as moving platforms. These features are implemented in the game engine but left out of the competition levels, as we are currently waiting to get better results in the tracks as they are before adding more complexity to the base problem.
The competition may also include, in the future, a level generation track, which we believe would be an interesting scenario for the development of PCG (Procedural Content Generation) algorithms. The creation of puzzle game scenarios is not something completely new in the PCG community, although we believe that the Geometry Friends game will prove an interesting test-bed for these algorithms. The main novelty is the cooperative nature of Geometry Friends, which adds some interesting challenges, as levels should be fun for both players, for example. Additionally, making sure that the levels are solvable may actually be a hard problem. We already have a clear specification for the levels, which would facilitate the development of this track. Nevertheless, we would still require an adequate framework to evaluate the generated levels. One possibility would be to follow the current evaluation practices in level generation tracks, where the ranking process consists of interleaving artificially generated levels with human-created levels and allowing human players to play all of them.
This may lead to the creation of a level generation tool that creates an unlimited number of levels automatically. This may be quite important for machine learning approaches that need a large number of examples in order to generalise their performance. It would also remove the bias of human designers, who may not include enough diversity in the levels.
It would also be interesting to explore a systematic way (eventually automatic) to categorize the levels in terms of the challenges they provide. This can guide the definition of specific sets of levels and support a better comparison of the agents being developed. In particular, it would be interesting to assess the difficulty of the levels.
Another extension that has been considered is an Agent
Believability track. This track would consist of creating agents
that would act in a human-believable way. The ranking process
could consist of a Turing-like test where users would view
various human and agent game-play sessions and try to identify which are human and which are artificial.
Finally, the last idea currently being developed, which is
in fact one of the main motivations for the development of the whole agent framework, is the Human-AI Cooperation track.
This will consist of developing a single AI agent (Circle or
Rectangle) that can play cooperatively with a human player.
Besides the challenges presented in the previous sections, this
track would require agents to effectively communicate with
human players, predict their movements and interact with their
characters in an entertaining way. The ranking of this track would require tracking the performance of both players (such as the number of levels solved and how long it took to solve them), randomly having users play with other users or agents (without their knowledge), and asking them how their playing experience was. This will involve concerns similar to those of the believability track.
\section*{Acknowledgment}
This work was supported by national funds through FCT, Fundação para a Ciência e a Tecnologia, under project UIDB/50021/2020.
\bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro}
Automatic real-time fire (or flame) detection by analysing video sequences is increasingly deployed in a wide range of auto-monitoring tasks. The monitoring of urban and industrial areas and public places using security-driven CCTV video systems has given rise to the consideration of these systems as sources of initial fire detection (in addition to heat/smoke based systems). Furthermore, the on-going consideration of remote vehicles for automatic fire detection and monitoring tasks \cite{bardshaw91robots,MARTINEZDEDIOS2006uavfire} further enhances the demand for autonomous fire detection from such platforms. Detecting fire in images and video is a challenging task among object classification tasks, due to the inconsistency of its shape and pattern, which vary with the underlying material composition.
Many earlier traditional approaches in this area are attribute based, relying for example on colour \cite{Healey93fire,CELIK2009fire}, possibly combined with higher-order temporal information \cite{Phillips02fire,liu2004fire,Toreyin2005fire}. The work of \cite{Healey93fire} uses a colour based threshold approach on input video. This is expanded in the work of \cite{Phillips02fire}, which incorporates both colour and motion, utilising a colour histogram to classify fire pixels and examining the temporal variation of the pixels to determine which are fire. The work by \cite{liu2004fire} further explores temporal variation in the Fourier coefficients of fire regions to capture the contour of the region. A slightly more recent attribute based approach \cite{Toreyin2005fire} models flame coloured objects using Markov models to help distinguish the flames.
\begin{figure}[htb!]
\centering
\includegraphics[width=\linewidth]{fire_ex2.pdf}
\vspace{-0.8cm}
\caption{An illustrative example of binary full-frame fire detection (A) and localisation via superpixel segmentation (B) (fire = green, no-fire = red).}
\label{fig:fire_ex}
\end{figure}
Before the advent of deep learning, fire detection work mainly used shallow machine learning approaches with extracted attributes as input \cite{zhang09fire,Ko2009fire,Chen11:Fire}. The work by \cite{zhang09fire} uses a shallow neural network incorporating a single hidden layer. The model feeds six characteristics of the flame-coloured regions as input, mapping to a hidden layer of seven neurons. A two-class Support Vector Machine (SVM) is used in the work of \cite{Ko2009fire} to find a separation between pixels in candidate fire regions, to try to remove noise from smoke and differences between frames. Following from this, the work of \cite{Chen11:Fire} develops a non-temporal approach that uses a decision tree as the classification architecture, feeding colour-texture feature descriptors as the input, and achieves 80-90\% mean true positive detection with 7-8\% false positives.
With the advent of deep learning, the focus has shifted from identifying explicit image attributes. A large increase in classification accuracy has come from creating end-to-end solutions for fire classification \cite{namozov2018efficient,sharma2017deep} and localisation on full-frame/superpixel images (Figure \ref{fig:fire_ex}), achieved by feeding raw image pixel data as input to a Convolutional Neural Network (CNN) architecture. The work of \cite{namozov2018efficient} proposes a custom architecture consisting of convolution, fully connected and pooling layers on a custom dataset using Generative Adversarial Networks (GAN). The work of \cite{sharma2017deep} considers deep CNN architectures such as VGG16 \cite{simonyan2014vgg} and ResNet50 \cite{he2016deep} for the fire detection task. However, these strategies consist of a large number of parameters, which leads to slower processing times and may not be suitable for deployment on the low-powered embedded devices commonly found in deployed detection and wide area surveillance systems.
Several CNN architectures \cite{DBLP:journals/corr/IandolaMAHDK16,DBLP:journals/corr/HowardZCKWWAA17,DBLP:journals/corr/abs-1807-11164,DBLP:journals/corr/ZophVSL17} are designed to have low complexity without compromising on accuracy. The introduction of depth-wise separable convolutions, which split a conventional convolution into a per-channel $H \times W$ depth-wise convolution followed by a $1 \times 1$ point-wise convolution, vastly reduces the number of floating-point operations and the overall parameter size of a CNN architecture. Recent advancements in creating compact CNN architectures, which focus on reducing the overhead costs of computation and parameter storage, involve pruning CNN architectures and compressing the weights of various layers \cite{DBLP:journals/corr/LiKDSG16,DBLP:journals/corr/MolchanovTKAK16} without significantly compromising the original accuracy.
In the work of \cite{DBLP:journals/corr/MolchanovTKAK16}, a greedy criteria-based pruning of convolutional kernels by backpropagation is proposed. This strategy \cite{DBLP:journals/corr/MolchanovTKAK16} is computationally efficient and maintains good generalisation in the pruned CNN architecture.
An approach by \cite{DBLP:journals/corr/LiKDSG16} presents an acceleration method for CNN architectures, pruning convolutional filters that are identified as having a small effect on the output accuracy. By removing entire filters in the network together with their connecting feature maps, it avoids an increase in sparsity and reduces computational costs. A one-shot pruning and retraining strategy is adopted in this work \cite{DBLP:journals/corr/LiKDSG16} to save retraining time when pruning filters across multiple layers, which is critical for creating a reduced complexity CNN architecture.
Recent works on non-temporal fire detection \cite{dunnings18fire, samarth19fire} outperform conventional architectures by simplifying complex, high-performance, generalised architectures. FireNet and InceptionV1-OnFire are proposed in the work of \cite{dunnings18fire}, where the architectures are simplified versions of AlexNet \cite{NIPS2012_4824} and InceptionV1 \cite{DBLP:journals/corr/SzegedyLJSRAEVR14} respectively. Both architectures offer better performance in fire detection than their parent architectures, with FireNet \cite{dunnings18fire} achieving $17$ frames per second (fps) with an accuracy of $0.92$ for the binary classification task. Further architectural advancements, the InceptionV3 \cite{szegedy2016rethinking} and InceptionV4 \cite{szegedy2017inception} architectures, are experimentally simplified in the most recent work of \cite{samarth19fire}, which achieves an accuracy of $0.96$ for the full-frame and $0.94$ for the superpixel classification task. Both architectures achieve high accuracy while maintaining high computational efficiency and throughput.
In this work, we explicitly consider a non-temporal fire detection strategy by proposing CNN architectures of significantly reduced complexity compared to the prior work of \cite{Chen11:Fire,dunnings18fire,samarth19fire}. Our key contributions are the following:
\begin{itemize}
\item[--] We propose two simplified compact CNN architectures (NasNet-A-OnFire and ShuffleNetV2-OnFire), which are experimentally defined as architectural subsets of seminal CNN architectures \cite{DBLP:journals/corr/ZophVSL17,DBLP:journals/corr/abs-1807-11164} offering maximal performance for the fire detection task.
\item[--] We employ the proposed compact CNN architectures for (a) binary fire detection, \{{\it fire, no-fire}\}, in full-frame imagery and (b) in-frame classification and localisation of fire using superpixel segmentation \cite{achanta2012slic}.
\end{itemize}
\section{Proposed Approach} \label{sec:proposal}
We consider two CNN architectures, NasNet-A-Mobile \cite{DBLP:journals/corr/ZophVSL17} and ShuffleNetV2 \cite{DBLP:journals/corr/abs-1807-11164} (Section \ref{ssec:ref_arch}), which are experimentally optimised using filter pruning \cite{DBLP:journals/corr/LiKDSG16} for fire detection (Section \ref{ssec:sim_arch}). Subsequently, we expand this work for in-frame fire localisation via superpixel segmentation (Section \ref{ssec:superpixel}).
\subsection{Reference Architectures} \label{ssec:ref_arch}
We select NasNet-A-Mobile \cite{DBLP:journals/corr/ZophVSL17} and ShuffleNetV2 \cite{DBLP:journals/corr/abs-1807-11164} due to their compactness and high performance on ImageNet \cite{deng2009imagenet} classification. Both architectures have high level structures, containing normal cell and reduction cell, however with fundamental differences in how these cells are structured at a low level. Due to the modular structures of the architectures, it is easy to remove/modify different cells.
\begin{figure}[htb!]
\centering
\includegraphics[width=8cm]{nas_normal_reduction.pdf}
\vspace{-0.4cm}
\caption{Normal and reduction cells of NasNet-A-Mobile \cite{DBLP:journals/corr/ZophVSL17}.}
\label{fig:nasneta_cells}
\end{figure}
\noindent \textbf{NasNet-A-Mobile} \cite{DBLP:journals/corr/ZophVSL17} consists of an initial $3 \times 3$ convolution layer followed by a sequence, repeated three times, consisting of a number of reduction cells and four normal cells.
The normal and reduction cells (Figure \ref{fig:nasneta_cells}) both take as input the output of the previous cell and of the cell before that, creating a residual network. The only convolution layers present in the normal cell are three $3 \times 3$ and two $5 \times 5$ depth-wise separable convolutions. The reduction cell contains one $3 \times 3$, two $5 \times 5$, and two $7 \times 7$ depth-wise separable convolutions. The rest of the layers are either average or max pooling layers.
\noindent \textbf{ShuffleNetV2} \cite{DBLP:journals/corr/abs-1807-11164} consists of an initial $3 \times 3$ convolution layer followed by a $3 \times 3$ max pooling layer. This is followed by three stages of reduction and normal cells (Figure \ref{fig:shuffle}). There is only one reduction cell in each stage and the numbers of normal cells are $[3, 7, 3]$. This is followed by a final point-wise convolution and a global pooling layer. The normal cell starts by splitting the channels in half, where one half is unchanged and the other half has three convolutions: two point-wise $1 \times 1$ convolutions and a $3 \times 3$ depth-wise convolution. In the normal cell, the output dimension is equal to the input. The channels are concatenated and shuffled in order to mix the channels across the branches. This is applied in both halves of the cells.
The reduction cell does not split the channels and both branches receive the whole input. The right branch of the reduction cell is similar to the right branch of the normal cell, however the depth-wise convolution has a stride of $2$ to reduce the height and width dimensions by a factor of two. The use of point-wise and depth-wise convolutions allows the network to go very deep without a large increase in the number of parameters.
\begin{figure}[htb!]
\centering
\includegraphics[width=6.5cm]{shuff_nor_red.pdf}
\vspace{-0.4cm}
\caption{Normal and reduction cell of ShuffleNetV2 \cite{DBLP:journals/corr/abs-1807-11164}.}
\label{fig:shuffle}
\end{figure}
\vspace{-0.2cm}
\subsection{Simplified CNN Architectures} \label{ssec:sim_arch}
We experimentally investigate variations in architectural configurations of each reference architecture (Section \ref{ssec:ref_arch}).
In the simplified CNN architectures, we use transfer learning from models pre-trained on ImageNet \cite{deng2009imagenet}, removing the final fully connected layer from both architectures \cite{DBLP:journals/corr/ZophVSL17,DBLP:journals/corr/abs-1807-11164} and creating a new linear layer that maps to a single value for binary \{{\it fire, no-fire}\} classification. The entire architecture is frozen, except the final linear layer, for the first half of training. Subsequently, we unfreeze the final convolution layer for ShuffleNetV2 \cite{DBLP:journals/corr/abs-1807-11164}, and the final two reduction and normal cell iterations for NasNet-A-Mobile \cite{DBLP:journals/corr/ZophVSL17}. This prevents the network from over-fitting during training and allows us to train the model for longer for better generalisation.
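As an illustration, a minimal PyTorch sketch of this setup for the ShuffleNetV2 variant is given below; it assumes the torchvision implementation, whose final convolution block and fully connected layer are named conv5 and fc, and it sketches the procedure rather than the exact training code used here.
\begin{verbatim}
import torch.nn as nn
from torchvision import models

# Start from an ImageNet pre-trained ShuffleNetV2 and adapt it
# for binary {fire, no-fire} classification (sketch only).
model = models.shufflenet_v2_x1_0(pretrained=True)

# freeze the whole backbone
for param in model.parameters():
    param.requires_grad = False

# replace the final fully connected layer by a single-logit
# head; the new layer is trainable by default
model.fc = nn.Linear(model.fc.in_features, 1)

# ...after the first half of training, unfreeze the final
# convolution block for fine-tuning:
for param in model.conv5.parameters():
    param.requires_grad = True
\end{verbatim}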
\subsubsection{Simplified NasNet-A-Mobile} \label{sssec:nas}
The base model for NasNet-A-Mobile \cite{DBLP:journals/corr/ZophVSL17} is pre-trained with $1,056$ output channels for ImageNet \cite{deng2009imagenet} classification. The main experimentation with this architecture revolves around the number of normal cells in the model.
\begin{table}[htb]
\centering
\renewcommand*{\arraystretch}{0.85}
\caption{ NasNet-A-Mobile variants with different components.}
\vspace{-0.3cm}
\begin{tabular}{lccccc}
\hline
{\scriptsize Architecture} & {\scriptsize $A_{N}=2$} & {\scriptsize $A_{N}=4$} & {\scriptsize $A_{3} = A_{N}$} & {\scriptsize $A_{3}=0$} & {\scriptsize {\shortstack[l]{Reduced \\ Filter}}} \\ \hline
\scriptsize $NasNet_{v01}$ & & {\color{green} \checkmark } & {\color{green} \checkmark } & & \\
\scriptsize $NasNet_{v02}$ & & {\color{green} \checkmark } & & {\color{green} \checkmark } & \\
\scriptsize{$\underline{NasNet_{v03}}$} & {\color{green} \checkmark } & & {\color{green} \checkmark } & & \\
\scriptsize $NasNet_{v04}$ & {\color{green} \checkmark } & & & {\color{green} \checkmark } & \\
\scriptsize $NasNet_{v05}$ & & {\color{green} \checkmark } & {\color{green} \checkmark } & & {\color{green} \checkmark } \\
\scriptsize $NasNet_{v06}$ & & {\color{green} \checkmark } & & {\color{green} \checkmark } & {\color{green} \checkmark } \\
\scriptsize $NasNet_{v07}$ & {\color{green}
\checkmark } & & {\color{green} \checkmark } & & {\color{green} \checkmark } \\
\scriptsize $NasNet_{v08}$ & {\color{green} \checkmark } & & & {\color{green} \checkmark } & {\color{green} \checkmark } \\ \hline
\end{tabular}
\label{tab:nasnet_variants}
\end{table}
In our simplified NasNet-A-Mobile architectures, we experiment with eight different variants, covering four different architectural structures with and without reduced filters (Table \ref{tab:nasnet_variants}). The number of filters is determined by the number of penultimate filters specified in the architecture. We reduce this number from $1,056$ to $480$ penultimate filters for the reduced filter variants, creating a reduction of $60\%$ of the filters throughout the whole model. This drastically reduces the number of parameters but achieves a very low accuracy for each of the reduced filter variants (Figure \ref{fig:graphs}-A).
There is a sharp difference in accuracy between the reduced filter variants and the full filter variants of the NasNet-A-Mobile architecture \cite{DBLP:journals/corr/ZophVSL17} (points \{a,b,c,d\} vs. points \{e,f,g,h\} in Figure \ref{fig:graphs}-A), with the best reduced filter variant achieving $0.77$ accuracy compared to $0.95$ for the corresponding full filter variant.
\begin{figure*}[htb!]
\centering
\includegraphics[width=15cm]{perform_plot_3.pdf}
\vspace{-0.4cm}
\caption{Parameter size and accuracy comparison for the two architecture variants: (A) NasNet-A-Mobile, where a to h represent $NasNet_{v01}$ - $NasNet_{v08}$. (B) ShuffleNetV2, where 1 to 9 represent $ShuffleNetV2_{v01}$ - $ShuffleNetV2_{v09}$.}
\label{fig:graphs}
\end{figure*}
\subsubsection{Simplified ShuffleNetV2} \label{sssec:shuf}
The number of parameters in the ShuffleNetV2 architecture \cite{DBLP:journals/corr/abs-1807-11164} is $340,000$. Upon examining the distribution of parameters in the architecture, over $200,000$ of them are contained in the final convolutional layer.
Therefore we freeze the parameters in the first half of the network and experimentally incorporate the filter pruning strategy to further reduce the complexity of the final convolutional layer, without compromising the accuracy. We adopt an approach similar to that proposed in the work of \cite{DBLP:journals/corr/LiKDSG16}, which computes the L2-norm of the filters; we then sort the filters and remove those with the lowest values. The intuition behind this strategy is that filters with lower values contribute less to the final output of the architecture. The model is then retrained after the pruned filters are removed.
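A minimal PyTorch sketch of this pruning criterion is shown below; the surrounding bookkeeping (rebuilding the smaller convolution, its batch normalisation and the following linear layer from the kept filters) is omitted, and the function is illustrative rather than the exact code used.
\begin{verbatim}
import torch

def filters_to_keep(conv_layer, num_to_prune):
    # Rank the output filters of a convolutional layer by their
    # L2 norm and return the indices of the filters to keep;
    # the lowest-norm filters are the ones pruned.
    weight = conv_layer.weight.data   # [out_ch, in_ch, kH, kW]
    norms = weight.view(weight.size(0), -1).norm(p=2, dim=1)
    order = torch.argsort(norms, descending=True)
    keep = order[: weight.size(0) - num_to_prune]
    return torch.sort(keep).values

# e.g. keep = filters_to_keep(final_conv, num_to_prune=960)
# for the ShuffleNetV2-OnFire variant, before retraining.
\end{verbatim}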
\begin{table}[htb!]
\centering
\renewcommand*{\arraystretch}{0.85}
\caption{ShuffleNetV2 \protect\cite{DBLP:journals/corr/abs-1807-11164} pruning variants on the final convolutional filters.}
\vspace{-0.3cm}
\begin{tabular}{lccc}
\hline
\small Architecture & \shortstack[l]{\small Pruned \\ \small Filters} & \shortstack[l]{\small Final Convolutional \\ \small Layer Parameters} & \shortstack[l]{\small Total \\ \small Parameters} \\ \hline
\scriptsize $ShuffleNet_{v01}$ & \small 128 & \small 196,608 & \small 342,897 \\
\scriptsize $ShuffleNet_{v02}$ & \small 256 & \small 147,456 & \small 292,897 \\
\scriptsize $ShuffleNet_{v03}$ & \small 384 & \small 122,800 & \small 267,937 \\
\scriptsize $ShuffleNet_{v04}$ & \small 512 & \small 98,304 & \small 242,977 \\
\scriptsize $ShuffleNet_{v05}$ & \small 640 & \small 73,728 & \small 218,017 \\
\scriptsize $ShuffleNet_{v06}$ & \small 768 & \small 49,152 & \small 193,057 \\
\scriptsize $ShuffleNet_{v07}$ & \small 896 & \small 24,576 & \small 168,097 \\
\scriptsize $\underline{ShuffleNet_{v08}}$ & \small 960 & \small 12,288 & \small 155,617 \\
\scriptsize $ShuffleNet_{v09}$ & \small 992 & \small 6,144 & \small 149,377 \\ \hline
\end{tabular}
\label{tab:shufflenet_reduction}
\end{table}
Table \ref{tab:shufflenet_reduction} shows the number of filters pruned in the different variants of the ShuffleNetV2 architecture. We start by pruning $128$ filters, which represents $1/8$th of the number of filters in the final convolutional layer. We remove $128$ filters in each iteration and continue as long as the accuracy does not degrade ($ShuffleNet_{v01}$ to $ShuffleNet_{v07}$). We subsequently prune a further $64$ filters in the $ShuffleNet_{v08}$ variant and $32$ filters in the $ShuffleNet_{v09}$ variant; however, at this stage we stop pruning due to a decrease in accuracy (points 8, 9 in Figure \ref{fig:graphs}-B).
\begin{figure}[htb]
\centering
\includegraphics[width=\linewidth]{final_nasnet_architecture.png}
\vspace{-0.8cm}
\caption{Reduced complexity architecture for NasNet-A-OnFire optimised for fire detection.}
\label{fig:nasnetarchitecture_reduced}
\end{figure}
Following exhaustive experimentation with both architectures, we propose the following two reduced complexity architectural variants, modified for the binary \{{\it fire, no-fire}\} classification task.
\noindent \textbf{NasNet-A-OnFire} is a variant based on $NasNet_{v03}$, such that each group of normal cells in the model contains only two normal cells compared to the previous four (Table \ref{tab:nasnet_variants}, highlighted with underline). The normal and reduction cells in the NasNet-A-OnFire architecture remain the same, as shown in Figure \ref{fig:nasneta_cells}. The total number of parameters in NasNet-A-OnFire is $3.2$ million.
\begin{figure}[htb]
\centering
\includegraphics[width=\linewidth]{shufflenetv2onfire.pdf}
\vspace{-0.8cm}
\caption{Reduced complexity architecture for ShuffleNetV2-OnFire optimised for fire detection.}
\label{fig:ShuffleNetV2-OnFire}
\end{figure}
\noindent \textbf{ShuffleNetV2-OnFire} is a variant of $ShuffleNet_{v08}$ (Table \ref{tab:shufflenet_reduction}, highlighted with underline) with the same fundamental architecture design as ShuffleNetV2. In ShuffleNetV2-OnFire (Figure \ref{fig:ShuffleNetV2-OnFire}), by applying the filter pruning strategy, we reduce the number of filters in the final convolutional layer, which leads to a total of only $155,617$ model parameters.
We employ these two CNN architectures for the \{{\it fire, no-fire}\} classification task, applied to full-frame binary and superpixel segmentation based fire detection.
\vspace{-0.0cm}
\subsection{Superpixel Localisation} \label{ssec:superpixel}
We further expand this work to in-image fire localisation using superpixel regions \cite{achanta2012slic}, contrary to earlier work \cite{Phillips02fire,CELIK2009fire} that largely relies on colour based initial localisation. Superpixel based techniques over-segment an image into perceptually meaningful regions which are similar in colour and texture. We use Simple Linear Iterative Clustering (SLIC) \cite{achanta2012slic}, which performs iterative clustering in a similar manner to {\it k-means}, in reduced spatial dimensions, such that the image is segmented into approximately equally sized superpixels (Figure \ref{fig:fire_ex}-B). Each superpixel region is classified using the proposed NasNet-A-OnFire/ShuffleNetV2-OnFire architectures, formulated as a binary \{{\it fire, no-fire}\} fire detection task.
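An illustrative sketch of this localisation pipeline, using the SLIC implementation from scikit-image, is given below; the model is assumed to return a single fire logit for a $224 \times 224$ RGB input, and pre-processing is deliberately simplified.
\begin{verbatim}
import numpy as np
import torch
from skimage.segmentation import slic
from skimage.transform import resize

def localise_fire(frame, model, n_segments=100, threshold=0.5):
    # Classify each SLIC superpixel of an RGB frame as
    # fire / no-fire and return a boolean mask of fire pixels.
    segments = slic(frame, n_segments=n_segments, compactness=10)
    fire_mask = np.zeros(frame.shape[:2], dtype=bool)
    for label in np.unique(segments):
        region = segments == label
        ys, xs = np.where(region)
        patch = frame[ys.min():ys.max() + 1,
                      xs.min():xs.max() + 1]
        patch = resize(patch, (224, 224), anti_aliasing=True)
        x = torch.from_numpy(patch).float()
        x = x.permute(2, 0, 1).unsqueeze(0)
        with torch.no_grad():
            p = torch.sigmoid(model(x)).item()
        if p > threshold:
            fire_mask |= region
    return fire_mask
\end{verbatim}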
\section{Evaluation} \label{sec:evaluation}
\subsection{Experimental Setup} \label{sec:exp}
In this section, we present the dataset and implementation details used in our experiments.
\subsubsection{Dataset} \label{ssec:db}
We use the dataset created in the work \cite{dunnings18fire} to train and evaluate the performance of our proposed architectures.
The dataset consists of $26,339$ full-frame images (Figure \ref{fig:fire_ex}-A) of size $224 \times 224$, with $14,266$ images of fire, and $12,073$ images of non-fire. Training is performed over a set of $23,408$ images (70 : 20 data split) and testing reported over $2,931$ images. The superpixel (SLIC \cite{achanta2012slic}) training set consists of $54,856$ fire, and $167,400$ non-fire superpixels with a test set of $1178$ fire and $881$ non-fire examples.
\subsubsection{Implementation Details} \label{ssec:implenmentation}
The proposed architectures are implemented in PyTorch \cite{paszke2019pytorch} and trained with the following configuration: backpropagation optimisation performed via Stochastic Gradient Descent (SGD), a binary cross entropy loss function, a learning rate of $0.0005$, and $40$ epochs. We measure performance using the following CPU and GPU configurations: an Intel Core i5 CPU with 8GB of RAM, and an NVIDIA 2080Ti GPU.
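A compact sketch of this training configuration is shown below; a single-logit output with BCEWithLogitsLoss is assumed to realise the binary cross entropy formulation, and data loading/augmentation is omitted.
\begin{verbatim}
import torch
import torch.nn as nn

def train(model, loader, epochs=40, lr=0.0005, device="cuda"):
    # Training sketch: SGD + binary cross entropy, as used for
    # both OnFire architectures (data loading omitted).
    model = model.to(device)
    criterion = nn.BCEWithLogitsLoss()
    optimiser = torch.optim.SGD(
        [p for p in model.parameters() if p.requires_grad],
        lr=lr)
    for _ in range(epochs):
        for images, labels in loader:
            images = images.to(device)
            labels = labels.float().unsqueeze(1).to(device)
            optimiser.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimiser.step()
\end{verbatim}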
\subsection{Results} \label{ssec:results}
We present the results of the simplified architectures compared to the state-of-the-art for the binary classification (Section \ref{ssec:bin_cls}) and superpixel localisation (Section \ref{ssec:super_loc}) tasks. For statistical comparison of the different architectures, we use the metrics of true positive rate (TPR), false positive rate (FPR), F-score (F), precision (P), accuracy (A), complexity (number of parameters in millions, C), the ratio between accuracy and the number of parameters in the architecture (A:C), and achievable frames per second (fps) throughput.
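For completeness, the first five of these follow their standard definitions in terms of the true/false positive (TP, FP) and true/false negative (TN, FN) counts:
\begin{equation*}
\mathrm{TPR} = \frac{TP}{TP+FN}, \quad
\mathrm{FPR} = \frac{FP}{FP+TN}, \quad
P = \frac{TP}{TP+FP},
\end{equation*}
\begin{equation*}
F = \frac{2\,P\cdot\mathrm{TPR}}{P+\mathrm{TPR}}, \quad
A = \frac{TP+TN}{TP+TN+FP+FN}.
\end{equation*}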
\subsubsection{Binary Classification} \label{ssec:bin_cls}
From the results presented in Table \ref{tab:bin_classification} for the full-frame \{{\it fire, no-fire}\} classification task, our proposed architectures achieve performance comparable to prior works \cite{dunnings18fire,samarth19fire}. We present only the highest performing variants from our experimentation, NasNet-A-OnFire and ShuffleNetV2-OnFire (Table \ref{tab:bin_classification}-lower).
\begin{table}[htb!]
\centering
\renewcommand*{\arraystretch}{0.85}
\caption{The statistical performance for full-frame binary fire detection. Upper: Prior works. Lower: Our approaches.}
\vspace{-0.3cm}
\begin{tabular}{llllll}
\hline
\small Architecture & \small TPR & \small FPR & \small F & \small P & \small A \\ \hline
\rowcolor{blue!4}
\scriptsize{FireNet \cite{dunnings18fire}} & \small 0.92 & \small 0.09 & \small 0.93 & \small 0.93 & \small 0.92 \\
\rowcolor{blue!8}
\scriptsize{Inception V1-OnFire \cite{dunnings18fire}} & \small \textbf{0.96} & \small 0.10 & \small 0.94 & \small 0.93 & \small 0.93 \\
\rowcolor{blue!4}
\scriptsize{Inception V3-OnFire \cite{samarth19fire}} & \small 0.95 & \small 0.07 & \small 0.95 & \small 0.95 & \small 0.94 \\
\rowcolor{blue!8}
\scriptsize{Inception V4-OnFire \cite{samarth19fire}} & \small 0.95 & \small 0.04 & \small \textbf{0.96} & \small \textbf{0.97} & \small \textbf{0.96} \\ \hline
\rowcolor{blue!14}
\scriptsize{\textbf{NasNet-A-OnFire}} & \small 0.92 & \small \textbf{0.03} & \small 0.94 & \small 0.96 & \small 0.95 \\
\rowcolor{blue!14}
\scriptsize{\textbf{ShuffleNetV2-OnFire}} & \small 0.93 & \small 0.05 & \small 0.94 & \small 0.94 & \small 0.95 \\ \hline
\end{tabular}
\label{tab:bin_classification}
\end{table}
Both architectures achieve an FPR comparable to, or lower than, the minimum FPR among the current state-of-the-art approaches \cite{dunnings18fire,samarth19fire} (Table \ref{tab:bin_classification}-upper), with ShuffleNetV2-OnFire achieving FPR: $0.05$ and NasNet-A-OnFire achieving FPR: $0.03$ (compared to Inception V4-OnFire \cite{samarth19fire} with FPR: $0.04$). The overall accuracy remains consistent, with both proposed architectures achieving A: $0.95$. Although there is a slight decrease in TPR for both architectures (Table \ref{tab:bin_classification}-lower), they still achieve TPR comparable to prior works, considering the compactness and reduced complexity of the architectures as presented in Tables \ref{tab:bin_cllas} and \ref{tab:super_pix_speed}. NasNet-A-OnFire offers the best performance in terms of FPR (FPR: 0.03), whereas ShuffleNetV2-OnFire achieves a better TPR of $0.93$. ShuffleNetV2-OnFire outperforms InceptionV3-OnFire \cite{samarth19fire} in both FPR (0.05 vs 0.07) and accuracy (A: 0.95 vs 0.94), these being the two smallest architectures (Table \ref{tab:super_pix_speed}).
\subsubsection{Superpixel Localisation} \label{ssec:super_loc}
For superpixel based fire detection, NasNet-A-OnFire achieves the highest TPR of $0.98$ (Table \ref{tab:superpixel_classification}-lower), outperforming the prior work achieving TPR: $0.94$ (InceptionV3-OnFire/InceptionV4-OnFire \cite{samarth19fire}, Table \ref{tab:superpixel_classification}-upper). However, NasNet-A-OnFire suffers from a high FPR of $0.15$ compared to InceptionV3-OnFire/InceptionV4-OnFire \cite{samarth19fire} with FPR: 0.07 and FPR: 0.06 respectively. ShuffleNetV2-OnFire achieves a lower FPR (FPR: 0.08, Table \ref{tab:superpixel_classification}-lower), which is comparable to prior work \cite{dunnings18fire,samarth19fire}, and achieves a similar accuracy (A: 0.94).
Both of our architectures achieve a higher F-score (F: $0.98$) and Precision (P: $0.99$) outperforming prior work \cite{dunnings18fire,samarth19fire}.
\begin{table}[htb!]
\centering
\renewcommand*{\arraystretch}{0.85}
\caption{Localisation results using within frame superpixel approach. Upper: Prior works. Lower: Our approaches.}
\vspace{-0.3cm}
\begin{tabular}{llllll}
\hline
\small Architecture & \small TPR & \small FPR & \small F & \small P & \small A \\ \hline
\rowcolor{blue!4}
\scriptsize{InceptionV1-OnFire \cite{dunnings18fire}} & \small 0.92 & \small 0.17 & \small 0.9 & \small 0.88 & \small 0.89 \\
\rowcolor{blue!8}
\scriptsize{InceptionV3-OnFire \cite{samarth19fire}} & \small 0.94 & \small 0.07 & \small 0.94 & \small 0.93 & \small 0.94 \\
\rowcolor{blue!4}
\scriptsize{InceptionV4-OnFire \cite{samarth19fire}} & \small 0.94 & \small \textbf{0.06} & \small 0.94 & \small 0.94 & \small 0.94 \\ \hline
\rowcolor{blue!14}
\scriptsize{\textbf{NasNet-A-OnFire}} & \small \textbf{0.98} & \small 0.15 & \small \textbf{0.98} & \small \textbf{0.99} & \small \textbf{0.97} \\
\rowcolor{blue!14}
\scriptsize{\textbf{ShuffleNetV2-OnFire}} & \small 0.94 & \small 0.08 & \small 0.97 & \small \textbf{0.99} & \small 0.94 \\ \hline
\end{tabular}
\label{tab:superpixel_classification}
\end{table}
Qualitative examples of fire localisation, using ShuffleNetV2-OnFire, are illustrated in Figure \ref{fig:sp_tp}. Each example presents challenging scenarios that could lead to false positive detection. These examples demonstrate the robustness of the proposed architecture for the fire detection task in various challenging scenarios.
\begin{figure}[htb!]
\centering
\includegraphics[width=\linewidth]{sp_tp.pdf}
\vspace{-0.8cm}
\caption{Exemplar superpixel based fire localisation using ShuffleNetV2-OnFire on two challenging scenarios: (A) image containing a fireman wearing an outfit of similar colour to the fire and (B) image containing a red coloured truck (fire = green, no-fire = red).}
\label{fig:sp_tp}
\end{figure}
\subsubsection{Architecture Simplification vs Speed} \label{ssec:model_speed}
Table \ref{tab:bin_cllas} presents the computational efficiency and speed of full-frame classification using the different architectures. ShuffleNetV2-OnFire improves computational efficiency by a factor of $7.8$, achieving A:C of $608.97$ compared to $77.9$ for InceptionV1-OnFire \cite{dunnings18fire}, while running on the CPU configuration. It also improves classification speed (fps: $40$) compared to FireNet \cite{dunnings18fire} (fps: $17$), while having $437\times$ fewer parameters (C: $0.156$ million) and achieving higher accuracy (A: $95$\%). On the GPU configuration, throughput increases further, with ShuffleNetV2-OnFire achieving $69$ fps and NasNet-A-OnFire achieving $35$ fps (Table \ref{tab:bin_cllas}-lower). Overall, ShuffleNetV2-OnFire is the best performing architecture for full-frame binary classification in terms of accuracy and efficiency (A:C: $608.97$), outperforming prior works \cite{dunnings18fire}.
\begin{table}[htb!]
\centering
\renewcommand*{\arraystretch}{0.85}
\caption{Computational efficiency for full-frame classification.}
\vspace{-0.3cm}
\begin{tabular}{lllll}
\hline
\small Architecture & \small C & \small A(\%) & \small A:C & \small fps \\ \hline
\rowcolor{blue!4}
\scriptsize{FireNet \cite{dunnings18fire}} & \small 68.3 & \small 91.5 & \small 1.3 & \small 17 \\
\rowcolor{blue!8}
\scriptsize{InceptionV1-OnFire \cite{dunnings18fire}} & \small 1.2 & \small 93.4 & \small 77.9 & \small 8.4 \\ \hline
\rowcolor{blue!14}
\scriptsize{\textbf{NasNet-A-OnFire}} & \small 3.2 & \small \textbf{95.3} & \small 29.78 & \small 7 \\
\rowcolor{blue!14}
\scriptsize{\textbf{ShuffleNetV2-OnFire}} & \small \textbf{0.156} & \small 95 & \small \textbf{608.97} & \small \textbf{40} \\ \hline
\rowcolor{blue!14}
\scriptsize{\textbf{NasNet-A-OnFire (GPU)}} & \small 3.2 & \small \textbf{95.3} & \small 29.78 & \small 35 \\
\rowcolor{blue!14}
\scriptsize{\textbf{ShuffleNetV2-OnFire (GPU)}} & \small \textbf{0.156} & \small 95 & \small \textbf{608.97} & \small \textbf{69} \\ \hline
\end{tabular}
\label{tab:bin_cllas}
\vspace{0.05cm}
\caption{Computational efficiency for superpixel localisation.}
\vspace{-0.3cm}
\begin{tabular}{lllll}
\hline
\small Architecture & \small C & \small A(\%) & \small A:C & \small fps \\ \hline
\rowcolor{blue!4}
\scriptsize{InceptionV3-OnFire \cite{samarth19fire}} & \small 0.96 & \small 94.4 & \small 98.09 & \small 13.8 \\
\rowcolor{blue!8}
\scriptsize{InceptionV4-OnFire \cite{samarth19fire}} & \small 7.18 & \small 95.6 & \small 13.37 & \small 12 \\ \hline
\rowcolor{blue!14}
\scriptsize{\textbf{NasNet-A-OnFire (GPU)}} & \small 3.2 & \small \textbf{97.1} & \small 30.34 & \small 5 \\
\rowcolor{blue!14}
\scriptsize{\textbf{ShuffleNetV2-OnFire (GPU)}} & \small \textbf{0.156} & \small 94.4 & \small \textbf{605.13} & \small \textbf{18} \\ \hline
\end{tabular}
\label{tab:super_pix_speed}
\end{table}
In Table \ref{tab:super_pix_speed}, we present the computational efficiency of superpixel localisation (all superpixels are processed for each frame) running on the GPU configuration. Although NasNet-A-OnFire achieves the highest accuracy (A: $97.1$), it operates at the lowest throughput (fps: $5$). ShuffleNetV2-OnFire, consisting of only $0.156$ million parameters, provides the maximal throughput of $18$ fps, outperforming InceptionV3-OnFire \cite{samarth19fire} (fps: $13.8$), as presented in Table \ref{tab:super_pix_speed}-lower.
We also measure the runtime of our proposed architectures on a low-powered embedded device (Nvidia Xavier-NX \cite{xavier_nx}), as presented in Table \ref{tab:xaviernx_speed}. The PyTorch implementation operates at a lower speed (fps: 6 \& 18) compared to the standard CPU/GPU implementations. However, conversion to 16-bit floating point precision (via TensorRT, FP16) improves the inference speed by $\sim$3-6 times, achieving $35$ fps (NasNet-A-OnFire) and $49$ fps (ShuffleNetV2-OnFire), without compromising detection accuracy.
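A hedged sketch of one possible deployment route (the exact conversion path is not detailed here): export the trained PyTorch model to ONNX and then build an FP16 TensorRT engine, e.g. with \texttt{trtexec --onnx=onfire.onnx --fp16}:
\begin{verbatim}
# Hedged sketch: ONNX export prior to TensorRT FP16 engine building.
import torch

model.eval()
dummy = torch.randn(1, 3, 224, 224)       # network input size
torch.onnx.export(model, dummy, "onfire.onnx",
                  input_names=["input"], output_names=["logit"],
                  opset_version=11)
\end{verbatim}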
\begin{table}[htb!]
\centering
\renewcommand*{\arraystretch}{0.8}
\caption{Runtime (fps) comparison of \{{\it fire, no-fire}\} classification on different hardware devices.}
\vspace{-0.3cm}
\begin{tabular}{lcccc}
\hline
\multirow{2}{*}{\small Architecture} & \small CPU & \small GPU & \multicolumn{2}{c}{\small Xavier-NX} \\ \cline{2-5}
& \small PyTorch & \small PyTorch & \small PyTorch & \small TensorRT \\ \hline
\rowcolor{blue!4}
\scriptsize NasNet-A-OnFire & \small 7 & \small 35 & \small 6 & \small \textbf{35} \\
\rowcolor{blue!14}
\scriptsize ShuffleNetV2-OnFire & \small 40 & \small 69 & \small 18 & \small \textbf{49} \\ \hline
\end{tabular}
\label{tab:xaviernx_speed}
\end{table}
\section{Conclusion} \label{sec:conclusion}
We propose a compact, reduced complexity CNN architecture (ShuffleNetV2-OnFire) for fire detection that is over six times more compact than prior works \cite{dunnings18fire,samarth19fire}, with only $\sim$0.15 million parameters, and that operates over $\sim$2 times faster. The proposed CNN architectures (NasNet-A-OnFire and ShuffleNetV2-OnFire) are not only compact but also achieve comparable accuracy, with $95$\% for full-frame and $94.4$\% for superpixel based fire detection. Furthermore, deployment on low-powered devices (achieving $49$ fps) makes our architectures suitable for a range of real-world applications. Overall, we illustrate a strategy for simplifying CNN architectures through experimental analysis and filter pruning, while maintaining accuracy and increasing computational efficiency. Future work will focus on additional synthetic image data training via Generative Adversarial Networks.
\section*{Introduction}\label{sec:intro}
A singularity is the germ of a complex or real analytic space $(V,p )$
that is not regular at $p $. Equisingularity means equivalent or similar singularity and it is always necessary to make precise which equivalence of singularities we have in mind. Thus two singularities $(X, x)$ and $(Y, y)$ are analytically equivalent if there is an analytic isomorphism germ
$\phi : (X,x) \to (Y,y)$. If $\phi $ is only a homeomorphism then we say
that $(X, x)$ and $(Y, y)$ are topologically equivalent. If
$(X, x)$ and $(Y, y)$ are both subspaces of the affine space $(\mathbb{K}^n,0)$,
$\mathbb{K}=\mathbb R$ or $\mathbb{C}$, then we may require $\phi $ to be the restriction of an isomorphism (resp. homeomorphism) of the ambient spaces $\Phi : (\mathbb{K}^n, 0) \to(\mathbb{K}^n, 0) $. If such $\Phi $ exists we say then that $(X, x)$ and $(Y, y)$ are
ambient analytically (resp. topologically) equivalent.
Let $V$ be a real or complex analytic space. Then there exists a stratification $\mathcal S$ of $V$, that is a decomposition
of $V$ into analytic manifolds that, moreover, are usually required to satisfy some additional properties. For the notion of stratification and a historical account of stratification theory, we refer the reader to the paper of D. Trotman \cite{trotman2020} in the first volume of this handbook and the references therein. It is known that there always exists a stratification of $V$ that is topologically equisingular along each stratum,
that is if $p_1$ and $p_2$ belong to the same stratum then
$(V, p_1)$ and $(V, p_2)$ are topologically equivalent.
If $V$ is a subspace of $\mathbb{K}^n$ then one may, moreover, require this
stratification to be ambient topologically equisingular. This can be achieved by constructing a Whitney stratification of $V$.
Another and entirely independent way of constructing such a stratification is Zariski equisingularity, which is the subject of this survey.
Recall that, in general, there is no stratification
that is analytically equisingular along each stratum, as the classical example of Whitney
\cite[Example 13.1]{whitneymorse65} shows: $V = \{(x,y,z)\in \mathbb{K}^3; xy(y+x) (y-zx)=0\}$
admits a continuous family of analytically (or even $C^1$-diffeomorphically) non-equivalent singularities, due to the phenomenon of continuous moduli
(the cross-ratio in this example).
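To make the continuous modulus explicit (this elementary computation is added here for the reader's convenience), note that the fiber of $V$ over $z=c$ is the union of the four lines $x=0$, $y=0$, $y=-x$, $y=cx$ through the origin, of slopes $\infty, 0, -1, c$. With the usual convention for the cross-ratio on $\mathbb P^1$,
$$
(\infty, 0; -1, c) = \frac{0-c}{0-(-1)} = -c,
$$
which varies with $c$; since an analytic equivalence of two such germs acts linearly on the tangent lines at the origin and hence preserves the cross-ratio up to the finite ambiguity of permuting the four branches, the germs $(V, (0,0,c))$ are pairwise analytically non-equivalent for generic distinct values of $c$.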
In 1971 in "Some open questions in the theory of singularities"
\cite{zariski71open}, O. Zariski proposed a general theory of equisingularity for complex algebraic and analytic hypersurfaces.
Zariski's approach was based on a new version of equisingularity that Zariski called algebro-geometric equisingularity, since it was defined by purely algebraic means but it reflected many geometric properties. For instance, as Varchenko shows in
\cite{varchenkoizv72,varchenko73,varchenkoICM75} answering a question posed by Zariski, Zariski equisingularity, which we now call the algebro-geometric equisingularity of Zariski, implies ambient topological triviality. Zariski equisingularity under an additional genericity of projection assumption implies Whitney conditions as shown by Speder
\cite{speder75}.
This notion of equisingularity extended Zariski's earlier work on the singularities of plane curves, their equivalence and their families,
see \cite{zariski65-S1,zariski65-S2,zariski68-S3}.
For the general case of hypersurface singularities over an algebraically closed field of characteristic zero Zariski presented his program in \cite{zariski76} and \cite{zariski1979}. The paper \cite{zariski1979} "Foundations of a general theory of equisingularity on $r$-dimensional algebroid and algebraic varieties, of embedding dimension $r + 1${"}, published in 1979, contains complete foundations of this theory, stated for algebroid varieties over an algebraically closed field of characteristic zero.
(Recall that algebroid varieties are the varieties defined by ideals of the rings of formal power series, see \cite{lefschetz53} Ch. IV.
and \cite{zariski1979} Section 2.) Since then Zariski equisingularity has been widely applied in the theory of singularities. We present in this survey an account, certainly incomplete, of this development.
Intuitively, Zariski's notion can be characterized by two properties:
\begin{enumerate}
\item
If $(V,P_1)$ and $(V,P_2)$ are equisingular, then $P_1$ is a regular point of $V$ if and only if $P_2$ is a regular point of $V$.
\item
If $W\subset \Sing V$ is non-singular then $V$ is equisingular along $W$ at $P\in W$ if and only if for all sufficiently general projections
$\pi: \mathbb{K}^{r+1}\to \mathbb{K}^r$ the discriminant locus of $\pi|_V$ is equisingular
along $\pi(W) $ at $\pi(P)$.
\end{enumerate}
Formally, one may talk about two notions of Zariski equisingularity,
of a hypersurface along its nonsingular subvariety and of a family of hypersurface singularities parameterized by a finite number of parameters. This is already present, implicitly,
in \cite{zariski71open}, where the former one is motivated by the latter one. We shall follow this path in this survey as well.
In Section \ref{sec:curves} we describe the equisingular families of complex plane curve singularities. This description is based on the Puiseux with parameter theorem, Theorem \ref{thm:PuiseuxTheorem}. Then we introduce Zariski equisingularity of families, Sections \ref{sec:curves} and \ref{sec:infamilies}. As we have mentioned, Zariski equisingular families are topologically trivial. Therefore Zariski equisingularity implies the generic topological equisingularity of real or complex, algebraic varieties or analytic spaces (not necessarily hypersurfaces). We present this principle in subsection \ref{ssec:arbitrarycodimension}. As a consequence, Zariski equisingularity provides an algorithmic construction of a topologically equisingular stratification. More
applications to Algebraic Geometry are presented in subsections \ref{ssec:fundamentalgroup} and \ref{ssec:generalposition}.
In Section \ref{sec:construction} we show how to construct equisingular deformations of a given singularity. This construction appears in many applications, in particular it is used to show that a (real or complex) analytic singularity is homeomorphic to an algebraic one, see subsection \ref{ssec:homeoanalytic-algebraic}, and, moreover we may assume that the latter one is defined over the field of algebraic numbers $\overline \mathbb{Q}$, see subsection \ref{ssec:homeotonumberfields}.
At the end of Section \ref{sec:construction} we discuss applications of Zariski equisingularity to trivializing families of analytic function and map germs, see subsections \ref{ssec:functions} and \ref{ssec:smoothmappings}.
In Section \ref{sec:ZEalong} we present the original notion of Zariski equisingularity of a hypersurface $V$ along a nonsingular subvariety $W$, and a related notion of the dimensionality type.
Zariski equisingularity of a hypersurface along a nonsingular subvariety is defined by taking successive co-rank 1 projections and their discriminants, and a similar construction is used to define Zariski equisingularity in families. The main, and to some extent still open, problem is to decide which projection to take in order to verify whether such equisingularity holds. As follows from Zariski's work, in the case of families of plane curve singularities, equisingularity for a single projection implies equisingularity for all transverse projections; for this notion see subsection \ref{ssec:equimultiplicity}. Therefore, originally, Zariski considered transverse projections as sufficient for such verification, see \cite{zariski71open}. In
\cite{luengo85} Luengo gave an example of a family of surface singularities in $\mathbb{C}^3$ that is Zariski equisingular for one transverse projection but not for a generic or generic
linear projection. Therefore, in \cite{zariski1979}, Zariski
proposed to build this theory on the notion of "generic" projection. The definition of such generic projection given in \cite{zariski1979} is
therefore crucial. It involves adding all the coefficients of a generic formal change of coordinates as indeterminates to the ground field. As Zariski also showed in \cite{zariski1979}, a generic (in a more standard meaning) polynomial projection gives the same theory, that is to say the same notion of generic Zariski equisingularity along a nonsingular subvariety. But it is not known how to verify which polynomial projections are generic in this sense, or even whether there is a bound on the
minimal degree of such polynomial generic projections. This makes algorithmic computations of the dimensionality type and related notions of generic Zariski equisingularity and Zariski's canonical stratification impossible. The algebraic case was studied in more detail by Hironaka \cite{hironaka79}, where the algebraic semicontinuity of the minimal degree of such polynomial projection is shown.
The question whether a generic linear projection is always sufficient is still open for dimensionality type $\ge 2$, though the case of the dimensionality type 2 is fairly well understood thanks to \cite{brianconhenry80}.
\subsection*{General set-up}
In this survey we present Zariski's theory in the complex analytic
set-up, which seems to be the most
common and of the biggest interest for singularity theory. There are two obvious extensions that one has to keep in mind. The first one, as the original definition of Zariski, is the theory of algebroid varieties over an arbitrary algebraically closed field of characteristic zero, when one works with the varieties defined by the ideals in the ring of formal power series.
The second one is the real analytic set-up.
Many results on Zariski equisingularity, such as topological triviality, are valid in both the complex analytic and the real analytic set-up. The real analytic set-up sometimes requires more careful statements, for instance, replacing analytic sets by the equations or ideals defining them. In general,
for Zariski equisingularity, the assumption that the ground field is algebraically closed does not seem to be essential, unlike the assumption that it is of characteristic zero, which is necessary.
In Section \ref{sec:curves}, which can be considered as a motivation for the general definition, we discuss the equisingularity of complex
plane curves. Section \ref{sec:infamilies} and \ref{sec:construction}
are presented for complex and real analytic or algebraic spaces.
For the definitions, theorems and proofs of these two sections there is
no essential difference between the real and the complex case.
The second part of Section \ref{sec:ZEalong}, the dimensionality type,
is presented in the algebroid set-up, like Zariski's original definition. Every statement of this section holds in complex analytic case. We also believe that it can be carried over to the real analytic set-up, but this has yet to be done.
\subsection*{Notation and terminology.}
We denote by $\mathbb{K}$ either $\mathbb R$ or $\mathbb{C}$. Thus, by $\mathbb{K}$-analytic we mean either real analytic or holomorphic (complex analytic). Sometimes we abbreviate it saying that a space or a map is analytic if the ground field,
$\mathbb{C}$ or $\mathbb R$, is clear from the context or if the result holds in both cases.
By an analytic space, we mean one in the sense of \cite{narasimhan66}.
As we work mostly in the local analytic case, it suffices to consider only analytic set germs. For an analytic space $X$ by $\Sing X$ we denote the set of singular points of $X$, i.e. the support of the singular subspace of $X$. By $\Reg X$ we denote its complement $X\setminus \Sing X$, the
set of regular points of $X$. For an analytic function germ $F$ we denote by $V(F)$ its zero set and by $F_{red}$ its reduced (i.e. square free) form. By a real analytic arc, we mean a real analytic map $\gamma : I\to X$, where $I=(-1,1)$ and $X$ is a real or a complex analytic space.
For a polynomial monic in $z$,
$F(x,z) = z^d+ \sum_{i =1}^d a_i(x) z^{d-i}$, with coefficients analytic functions in $x$, we denote by $D_F(x)$ its discriminant, and by
$\Delta_F(x)$ its discriminant locus, the zero set of $D_F(x)$.
The discriminant of $F$, and more generally the generalized discriminants of $F$, are recalled in Appendix, Section \ref{sec:discriminants}.
We say that $f\in \mathbb{K}\{x\}$ is \emph{a unit} if $f(0)\ne 0$.
We often use Weierstrass Preparation Theorem. Recall briefly its statement, see for instance
\cite[Theorem 2, p. 12] {narasimhan66}, \cite[Ch. 3, \S 2]{lojasiewiczbook} for more details. Let $F(x,z) \in \mathbb{K}\{x,z\}$ be regular in the variable $z$, that is $F(0,z)= z^d \unit (z)$. Then there are $a_i(x) \in \mathbb{K}\{x\}$ such that $a_i(0)=0$ and
\begin{align*}
F(x,z) = \unit (x,z)\, (z^d+ \sum_{i =1}^d a_i(x) z^{d-i}).
\end{align*}
We call the monic polynomial $z^d+ \sum_{i =1}^d a_i(x) z^{d-i}$, \emph{the Weierstrass polynomial associated to $F$}. An analogous statement holds for formal power series, i.e. for $F(x,z) \in \mathbb{K}[[x,z]]$.
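As a minimal illustrative example (added here for the reader): for $F(x,z)=(1+x)z^{2}-x$ we have $F(0,z)= z^{2}$, so $F$ is regular in $z$ with $d=2$, and
\begin{align*}
F(x,z) = (1+x)\Bigl(z^{2}-\frac{x}{1+x}\Bigr),
\end{align*}
so the unit is $1+x$ and the Weierstrass polynomial associated to $F$ is $z^{2}+a_2(x)$ with $a_2(x)=-\frac{x}{1+x}=-x+x^{2}-x^{3}+\cdots$, and indeed $a_2(0)=0$.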
\subsection*{Acknowledgements.}
I would like to thank Jean-Baptiste Campesato, Clint McCrory, Laurentiu Paunescu, and Guillaume Rond for several remarks and suggestions concerning the earlier versions of this survey. I would like to thank as well the anonymous referee for many precise and helpful suggestions.
\section{Equisingular families of plane curve singularities}\label{sec:curves}
We recall the notion of equisingular families of complex plane curve singularities.
There are several equivalent definitions that are proposed by Zariski in
\cite{zariski65-S1,zariski65-S2}.
We use the one based on the discriminant of a local projection. Firstly, this is the definition that Zariski generalizes to the higher-dimensional case. Secondly, by Puiseux with parameter theorem, it gives an equiparameterization of such singularities by fractional power series.
Let
\begin{align}\label{eq:weierstrass-F}
F(t,x,y) = y^d+ \sum_{i =1}^d a_i(t,x) y^{d-i}
\end{align}
be a unitary polynomial in $y\in \mathbb{C}$ with complex analytic coefficients $a_i(t,x)$, defined on $U_{\varepsilon,r} = U_{\varepsilon}
\times U_r $, where $ U_\varepsilon = \{t\in \mathbb{C}^l; \|t\|< \varepsilon\}$, $U_r= \{x\in \mathbb{C} ; | x|<r\}$ .
Here $t=(t_1, \ldots, t_l)$ is considered as a parameter.
One also often assumes that $F$ is reduced (has no multiple factors)
so that its discriminant $D_{F}$ is not identically equal to zero.
For arbitrary $F$ we either consider $D_{F_{red}}$ or, equivalently, the first not identically equal to zero generalized discriminant of $F$,
see Appendix, Section \ref{sec:discriminants}.
\begin{thm}[Puiseux with parameter] \label{thm:PuiseuxTheorem}
Suppose that the discriminant of $F_{red}$
is of the form $D_{F_{red}} (t,x) = x^M \unit (t,x)$ where $\unit (t,x)$ is a complex analytic function defined and nowhere vanishing on $U_{\varepsilon,r}$. Then there is a positive integer $N$ and complex analytic functions $\tilde \xi_i(t,u) $ defined on $U_{\varepsilon}
\times U_{r^{1/N}} $ such that
$$
F(t,u^N,y) = \prod _{i=1}^d(y- \tilde \xi_i (t,u)) .
$$
Let $\theta $ be an $N$th root of unity. Then for each $i$ there is $j$ such
that $\tilde \xi_i ( t,\theta u) = \tilde \xi_j (t, u)$.
\end{thm}
If $F$ is irreducible then one can take $N=d$. In general, $N=d!$ always works, but it need not be minimal.
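For a concrete illustration (this example is added here and is not taken from the original sources): take $l=1$ and
\begin{align*}
F(t,x,y)=y^{2}-(1+t)x^{3},
\end{align*}
which is reduced, with $D_{F}(t,x)=4(1+t)x^{3}=x^{3}\unit(t,x)$ on $\{|t|<1\}\times U_r$. Taking $N=2$ we obtain
\begin{align*}
F(t,u^{2},y)=\bigl(y-\sqrt{1+t}\,u^{3}\bigr)\bigl(y+\sqrt{1+t}\,u^{3}\bigr),
\end{align*}
with $\tilde \xi_{1,2}(t,u)=\pm\sqrt{1+t}\,u^{3}$ analytic in $(t,u)$, and the root of unity $\theta=-1$ exchanges $\tilde \xi_{1}$ and $\tilde \xi_{2}$, as in the last statement of the theorem.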
If $M=0$ then, by the Implicit Function Theorem (IFT), the roots of $F$, which we denote by $\xi_1 (t,x), ... , \xi_d(t,x)$, are $\mathbb{C}$-analytic functions of $(t,x)$.
Moreover two such $\xi_i$ and $\xi_j$ either coincide or are distinct everywhere.
In general, for arbitrary $M$, Theorem \ref{thm:PuiseuxTheorem} implies that the projection of
the zero set $V=V(F)$ of $F$ onto $U_{\varepsilon}$, given by $(t,x,y)\mapsto t$, is topologically trivial.
To see it one may use the following corollary.
\begin{cor}\label{cor:Puiseuxcor1}
For $x_0$ fixed, the roots of $F$, $\xi_1 (t,x_0), \ldots, \xi_d(t,x_0)$, can be chosen complex analytic in $t$.
Moreover, if $\xi_i(0,x_0)=\xi_j(0,x_0)$ then $\xi_i(t,x_0)\equiv \xi_j(t,x_0)$. Thus the multiplicity of each $\xi_i(t,x_0)$ as a root of $F$ is independent of $t$.
\end{cor}
\begin{proof}
It suffices to show it for $F$ reduced. Then for $x_0\ne 0$ it follows from the IFT. Let us show it for $x_0=0$.
The family $\xi_1 (t,0), \ldots, \xi_d(t,0)$ coincides, as an unordered set, with $\tilde \xi_1 (t,0), \ldots, \tilde \xi_d(t,0)$.
If $\tilde \xi_i(0,0)= \tilde \xi_j(0,0)$ then $\tilde \xi_i(t,u)- \tilde \xi_j(t,u)$ is
either identically zero or divides $u^{NM}$ and hence equals
a power of $u$ times a unit.
\end{proof}
Using Corollary \ref{cor:Puiseuxcor1} we may trivialize $V$ topologically with respect to the parameter $t$ by
\begin{align*}
\Phi (t, x, \xi_i(0,x)) = (t, x, \xi_i(t,x)) \quad i=1, \ldots, d.
\end{align*}
The map $\Phi $ is, by Corollary \ref{cor:Puiseuxcor1}, complex analytic in $t$ and one can show, moreover, that it is a local homeomorphism.
(It follows, for instance, from the much more general \cite[Theorem 1.2]{PP17}.)
The parameterized Puiseux Theorem, Theorem \ref{thm:PuiseuxTheorem}, can be proven in the same way as the classical Puiseux Theorem by considering the finite covering of $V(F)$ over
$U_{\varepsilon} \times U_r^*$, where $U_r^*= U_r \setminus \{0\}$. Then, for a positive integer $N$, the pullback of this covering by $(t,u) \to (t, u^N)=(t,x)$ is trivial, its sheets define the roots $\tilde \xi_i(t,u) $ that extend analytically to
$U_{\varepsilon} \times \{0\}$ by Riemann's Removable Singularity Theorem. For details
we refer for instance to \cite{pawlucki84}.
Note also that this theorem is a special case of the Jung-Abhyankar Theorem, see \cite{jung1908}, \cite{abhyankar55}, which in the complex analytic case can be proven along exactly the same lines, see \cite[Proposition 2.1]{PR2012}.
\begin{thm}[Jung-Abhyankar]\label{thm:Jung}
Let ${k}$ be an algebraically closed field of characteristic zero and let
$f\in {k}[[x_1, \ldots ,x_{r+1}]]$ be of the form
\begin{align*}
f(x_1, \ldots ,x_{r+1}) = x_{r+1}^d+ \sum_{i =1}^d a_i(x_1, \ldots ,x_{r}) x_{r+1}^{d-i}.
\end{align*}
Suppose the discriminant $D_f$ of $f$ equals a monomial $\prod_{i=1}^k x_i ^{n_i}$ times a unit. Then the roots of $f$ are fractional power series in $x_1, \ldots ,x_{r}$.
More precisely, there is a positive integer $N$, such that the roots of
$f(u_1^N, \ldots , u_k^N, x_{k+1}, \dots , x_{r+1})$ belong to
${k}[[u_1, \ldots , u_k, x_{k+1}, \dots , x_{r}]]$.
\end{thm}
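A simple example illustrating the theorem (added for the reader): $f=x_{3}^{2}-x_{1}x_{2}$ has discriminant $D_{f}=4x_{1}x_{2}$, a monomial in the first two variables, and for $N=2$
\begin{align*}
f(u_{1}^{2},u_{2}^{2},x_{3})=(x_{3}-u_{1}u_{2})(x_{3}+u_{1}u_{2}),
\end{align*}
so the roots $\pm u_{1}u_{2}$ of $f$ become (here even polynomial) power series in $u_{1},u_{2}$.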
\subsection{Equisingular families of plane curve singularities. Definition.}
Let us fix a local projection $pr : \mathbb{C}^{l}\times \mathbb{C}^2\to \mathbb{C}^l$ and suppose that $F$ is a complex analytic function defined
in a neighborhood of the origin in $\mathbb{C}^{l}\times \mathbb{C}^2$ and vanishing
identically on $T=\mathbb{C}^l \times \{0\}$. We assume that $F$ is reduced and consider its zero set
$V=V(F)=F^ {-1}(0)$ as a family of plane curve singularities
\begin{align*}
t \rightsquigarrow (V_t,0)= (V\cap pr ^{-1} (t),0)
\end{align*}
parameterized by $t\in (\mathbb{C}^l,0)$.
We say that a local system of coordinates
$t_1, \ldots, t_l,x,y$ is \emph{$pr $-compatible} if $pr (t,x,y) = t$, where $t=(t_1, \ldots, t_l)$, and $T=\{x=y=0\}$.
Suppose that in such a system of coordinates $F$ is regular in variable $y$,
i.e. $F(0,0,y) \not \equiv 0$.
Then, by the Weierstrass Preparation Theorem, we may assume that, up to a multiplication by an analytic unit, $F$ is of the form \eqref{eq:weierstrass-F} with all $a_i(0,0) =0$.
Note also that because $T\subset V(F)$, we have $F(t,0,0)\equiv 0$.
\begin{defn}\label{def:equising-curves}
We say that $V = V(F)$ is
\emph{an equisingular family of plane curve singularities} if there are a $pr $-compatible
system of coordinates $t,x,y$, such that $F$ is regular in variable $y$, and a non-negative integer $M$, such that the discriminant $D_F (t,x)$ of $F$
is of the form
\begin{align}\label{eq:discriminant}
D_F (t,x) = x^M \unit(t,x).
\end{align}
\end{defn}
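To illustrate the definition (this example is added here and is not taken from the original sources): the family
\begin{align*}
F(t,x,y)=y^{2}-x^{3}-tx^{4}
\end{align*}
is equisingular, since $D_{F}(t,x)=4x^{3}(1+tx)=x^{3}\unit(t,x)$, whereas for $G(t,x,y)=y^{2}-x^{2}(x+t)$ one gets $D_{G}(t,x)=4x^{2}(x+t)$, which is not of the form \eqref{eq:discriminant}; and indeed $(V(G)_{t},0)$ is a node for $t\neq0$ and a cusp for $t=0$, so this family cannot be equisingular.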
Equisingular families of plane curve singularities were studied in the
algebroid set-up (i.e. defined by $F$ being a formal power series) over an algebraically closed field of characteristic zero by Zariski
\cite{zariski65-S1,zariski65-S2,zariski68-S3} mainly by means of
(equi)resolution. All the results of \cite{zariski65-S1,zariski65-S2,zariski68-S3}, properly stated, are valid for the complex analytic case. In particular, Zariski has shown that in such families the special fiber $(V_0,0)$ and the generic fiber $(V_{t_{gen}}, 0)$ are equivalent plane curve singularities, see
\cite[section 6]{zariski65-S1} and \cite[section 3]{zariski65-S1} for several equivalent definitions of equivalent plane curve singularities.
In the complex analytic set-up, two complex plane curve singularities are equivalent if and, only if they are ambient topologically equivalent.
By \cite[Theorem 7]{zariski65-S1}, Zariski equisingular families of plane curve singularities are equimultiple, that is to say $\mult_{(t,0,0)} V$ is independent of $t$. If this multiplicity equals $d= \deg_y F$, then we say that
the associated projection $\pi (x,y,t) = (x,t)$ is \emph{transverse}. Geometrically it means that the kernel of $\pi$ is not included in the tangent cone $C_0(V)$.
Because the equimultiple families are normally pseudo-flat
(continuity of the tangent cone), it is enough to check the transversality
for the special fiber $V_0$; if it holds for the special fiber then it also holds for the generic one. In Theorem 7 of \cite{zariski65-S1}, Zariski also shows the following result.
\begin{thm}[\cite{zariski65-S1}, Theorem 7]
\label{thm:equisingulacurves}
If a family of plane curve singularities is equisingular (for a not necessarily transverse projection) then it is equisingular for all transverse projections.
\end{thm}
Note that if $V=V(F)$ is equisingular then the singular locus $\Sing V$ of $V$ is
$T=\mathbb{C}^l \times \{0\}$. In \cite[section 8]{zariski65-S2} Zariski shows that a
family $V(F)$ of plane curve singularities is equisingular if and only if
$\Sing V = T$ and $\{V\setminus T , T\}$ is a Whitney stratification of $V$. In the complex analytic case, this gives another proof of the fact that equisingular families of plane curve singularities are topologically trivial, and the following holds.
\begin{cor}\label{cor:Puiseuxcor2}
The Puiseux pairs of the roots $\xi_i(t,x)$ and the contact exponents between different branches of $V_t$ are independent of $t$.
\end{cor}
In the complex analytic set-up, in \cite[p.~623]{teissier77} B. Teissier gives 12 characterizations of equisingular families of plane curve singularities, including (equi)resolutions, constancy of Milnor number, Whitney's conditions, and topological triviality.
\subsection{Equisingular families of plane curve singularities and Puiseux with parameter.} \label{ssec:puiseux-with-par}
The Puiseux with parameter theorem, Theorem \ref{thm:PuiseuxTheorem}, gives the following
criterion of equisingularity of families of plane curve singularities.
\begin{thm}\label{thm:Jung2}
Let $F$ be reduced and of the form \eqref{eq:weierstrass-F} in a $pr $-compatible system of coordinates $t,x,y$. We also assume $a_i(0,0)=0$ for all $i$. Then $V(F)$ is an equisingular family of plane curve singularities for this system of coordinates if and only if there are $\tilde \xi_i \in \mathbb{C}\{t,u\}, i=1,\ldots ,d $, and strictly positive integers $N$, $k_{ij}, i<j$, such that
\begin{align}
F(t,u^N,y) = \prod_{i=1}^d (y- \tilde \xi_i(t,u))
\end{align}
and $\tilde \xi_i - \tilde \xi_j = u^{k_{ij}} \unit (t,u) $ or $\tilde \xi_i$ and $\tilde \xi_j$ coincide everywhere (the latter possibility may occur
only if $F$ is not reduced).
\end{thm}
The above observation implies, in particular, Corollary \ref{cor:Puiseuxcor1}.
We also note that it implies that all $a_i $ of \eqref{eq:weierstrass-F} satisfy $a_i(t,0)\equiv 0$.
Indeed, by the assumption $T\subset V(F)$ there is a root $\tilde \xi_{j} $ of $F$ such that
$\tilde \xi_{j} (t,0) \equiv 0 $. Since $\tilde \xi_{i} (0,0) =0 $ for all $i$ the
last claim of the above theorem implies our assertion.
\section{Zariski equisingularity in families}\label{sec:infamilies}
Zariski equisingularity of families of singular varieties was introduced by
Zariski in \cite{zariski71open} in the context of equisingularity of a hypersurface along a smooth subvariety, that we discuss in Section \ref{sec:ZEalong}. This is a direct generalization of Definition
\ref{def:equising-curves} but instead of a single co-rank one projection one considers a system of such subsequent projections. It can be formulated over any field, in particular, in the analytic case over $\mathbb{K}=\mathbb R$ or $\mathbb{C}$.
Recall that for $x=(x_1, \ldots, x_n) \in \mathbb{K}^n$ we denote $x^i = (x_1, \ldots, x_i) \in \mathbb{K}^i$.
\begin{defn}\label{def:system}
By a \emph{local system of pseudopolynomials in $x=(x_1,... ,x_n)\in \mathbb{K}^n$ at $(0,0)\in \mathbb{K}^l\times \mathbb{K}^n$, with a parameter $t\in U \subset \mathbb{K}^l$}, we mean a family of $\mathbb{K}$-analytic functions
\begin{align}\label{def:pseudopolynomials}
F_{i} (t, x^i )= x_i^{d_i}+ \sum_{j=1}^{d_i} a_{i-1,j} (t,x^{i-1})
x_i^{d_i-j}, \quad i=0, \ldots,n,
\end{align}
defined on $U\times U_i$, where $U_i $
is a neighborhood of the origin in $\mathbb{K}^i$, with the coefficients $a_{i,j}$ vanishing identically on $T=U\times \{0\}$.
This includes $d_i=0$, in which case we mean $F_i\equiv 1$.
\end{defn}
\begin{defn}\label{def:ZAinfamilies}
Let $V= F^{-1}(0)$ be an analytic hypersurface in a neighborhood
of the origin in $\mathbb{K}^l \times \mathbb{K}^n$.
We say that $V$ is \emph{Zariski equisingular with respect to the
parameter $t$ (and the system of coordinates $x_1, \ldots, x_n$)}
if there are $k\ge 0$ and a system of pseudopolynomials $F_{i} (t, x^i )$ such that
\begin{enumerate}
\item
$F_n$ is the Weierstrass polynomial associated to $F$.
\item
for every $i$, $k\le i\le n-1$, the discriminant of $(F_{i+1})_{red}$ (or, equivalently, the first not identically equal to zero generalized discriminant of $F_{i+1}$, see Appendix, Section \ref{sec:discriminants}) divides $F_{i}$.
\item
$F_k \equiv 1$ (and then we put $F_i \equiv 1$
for all $0\le i< k$).
\end{enumerate}
\end{defn}
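A minimal worked example of this definition (added here for illustration): consider the family of surface germs in $\mathbb{K}\times\mathbb{K}^{3}$ defined by
\begin{align*}
F_{3}(t,x_{1},x_{2},x_{3})=x_{3}^{2}-x_{2}^{3}(1+tx_{1}).
\end{align*}
Here $F_{3}$ is reduced, its discriminant with respect to $x_{3}$ equals $4x_{2}^{3}(1+tx_{1})$, which divides $F_{2}(t,x_{1},x_{2})=x_{2}^{3}$ in the ring of analytic germs at the origin (the quotient being a unit). The reduced form of $F_{2}$ is $x_{2}$, whose discriminant is a nonzero constant (with the usual convention for monic linear polynomials), so we may take $F_{1}\equiv1$ and $k=1$. Hence $V=F_{3}^{-1}(0)$ is Zariski equisingular with respect to $t$ in these coordinates.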
\begin{rem}
In the above definition, we suppose that the system of local coordinates
$x_1, \ldots, x_n$ is fixed. Of course one may say that $V$ is \emph{Zariski equisingular with respect to the parameter $t$}, if such a system exists. This raises a variety of interesting
questions, for instance, how to check whether such a system exists. We will discuss it in Section \ref{sec:ZEalong} in a slightly different set-up, Zariski equisingularity along a nonsingular subspace.
\end{rem}
\begin{rem}
In the original definition of Zariski equisingularity \cite{zariski71open} and also in \cite{varchenkoizv72}, \cite{varchenkoICM75}, the condition 2. was stated in an apparently more restrictive way:
\begin{enumerate}
\item [2'.]
for every $i$, $k\le i\le n-1$, $F_{i}$ is the Weierstrass polynomial associated to the discriminant of $(F_{i+1})_{red}$.
\end{enumerate}
The definition given here comes from \cite{PP17}
and is often easier to work with than the original one. Probably, both definitions are equivalent.
\end{rem}
\subsection{Topological equisingularity and topological triviality}\label{ssec:topequising}
In \cite{zariski71open} Zariski asked the following question: \medskip
\noindent
\emph{Does algebro-geometric equisingularity
(i.e. Zariski equisingularity), in the complex analytic case, imply topological equisingularity or even differential equisingularity?}
\medskip
By the latter one, Zariski meant Whitney's conditions (a) and (b). The answer to this part of Zariski's question depends on how generic the system of coordinates giving Zariski equisingularity is, or equivalently how generic the projections defining the subsequent discriminants are, see section \ref{ssec:relationto} below. In 1972 Varchenko \cite{varchenkoizv72} gave the affirmative answer to the first part of the question, see also
\cite{varchenko73} and \cite{varchenkoICM75} for the statement of results.
\begin{thm}\label{thm:topological_triviality}
Suppose that $V$ is Zariski equisingular with respect to the parameter
$t$. Then there are
neighborhoods $U$ of the origin in $\mathbb{K}^l$, $\Omega_0$ of the origin in $\mathbb{K}^n$, and $\Omega$ of the origin in $\mathbb{K}^{l+n}$, and a
homeomorphism
\begin{align}\label{eq:homeomorphismPh}
\Phi : U \times \Omega_0 \to \Omega,
\end{align}
such that
\begin{enumerate}
\item [\rm (i)]
$\Phi (t, 0) = (t,0) $, $\Phi (0, x_1, \ldots, x_n) = (0, x_1, \ldots, x_n) $;
\item [\rm (ii)]
$\Phi $ has a triangular form
\begin{align}\label{eq:Phitriangular}
\Phi (t, x_1, \ldots , x_n) = (t, \Psi _1(t, x_1), \ldots , \Psi _{n-1} (t,x_1, \ldots , x_{n-1}), \Psi _{n} (t, x_1, \ldots , x_{n}) );
\end{align}
\item [\rm (iii)]
$\Phi (U\times (V\cap \Omega_0))=V\cap \Omega$.
\end{enumerate}
\end{thm}
We note that Varchenko's result gives local topological triviality, a property stronger than the topological equisingularity. Here by topological equisingularity we mean the constancy of local topological types of $V_t: = V\cap (\{t\}\times \mathbb{K}^n)$ at the origin, i.e. the existence of homeomorphism germs $h_t : (V_0,0)\to (V_t,0)$, possibly given by ambient homeomorphisms $H_t : (\mathbb{K}^n,0) \to (\mathbb{K}^n,0)$.
The (ambient) topological triviality, that is the existence of $\Phi $ of
\eqref{eq:homeomorphismPh} implies that such $H_t(x) = \Phi (t,x)$ depends continuously on $t$.
The details of the proof of Theorem \ref{thm:topological_triviality}
are published in \cite{varchenkoizv72}. Strictly speaking the proof in \cite{varchenkoizv72} is in the global polynomial case but it can be adapted easily to the local analytic case. The homeomorphism $\Phi$ is constructed in the complex case $\mathbb{K}=\mathbb{C}$. The real case follows from the complex one under a standard argument using the invariance by complex conjugation.
The functions $\Psi _i$ are constructed inductively so that every
\begin{align}\label{eq:Phiinduction}
\Phi _i (t,x_1, \ldots , x_i) = (t, \Psi _1(t, x_1), \ldots , \Psi _{i} (t,x_1, \ldots , x_{i}) )
\end{align}
induces topological triviality of $F_i^{-1}(0)$. Given $\Phi _i$,
$\Phi _{i+1}$ is constructed in two steps.\\
\noindent
\emph{Step 1.} One lifts $\Phi _i$ to the zero set $F_{i+1}^{-1}(0)$ of $F_{i+1}$.
Such a continuous lift exists and is unique thanks to the following lemma, cf. the multiplicity preservation lemmas of Section 2 of \cite{varchenkoizv72} or Lemma on page 429 of \cite{varchenkoICM75}. This lemma and the standard argument of the continuity of roots show that such a lift is continuous.
\begin{lemma}\label{lem:multiplicityconstant}
Let \begin{align}\label{def:pseudopolynomial}
F (x)= x_n^{d}+ \sum_{j=1}^{d} a_{j} (x^{n-1})
x_n^{d-j},
\end{align} be a pseudopolynomial defined in a neighborhood of $p=(p',p_n) \in \mathbb{C}^n$.
Let $H_t : (\mathbb{C}^{n-1},p')\to
(\mathbb{C}^{n-1}, p'_t)$, $t\in [0,1]$, be a continuous family of local homeomorphisms preserving the discriminant locus of $F$, that is
$H_t (\Delta_F, p') = (\Delta_F, p'_t)$.
Then the number of distinct roots of $F$ over $p'_t$, as well as their multiplicities, are independent of $t$.
\end{lemma}
\noindent
\emph{Step 2.}
As soon as $\Phi _{i+1}$ is defined on the zero set $F_{i+1}^{-1}(0)$, it suffices to extend it to the ambient space. This is obtained in Section 1 of \cite{varchenkoizv72} by the covering isotopy lemma, see also the Fundamental Lemma of \cite{varchenko73}. The construction of such an extension is based on a triangulation of the base space, so that the finite branched covering $F_{i+1}^{-1}(0) \to \mathbb{C}^{l+i}$ is trivial over each open simplex, and a simplicial extension argument.
\subsection{Arcwise analytic triviality}\label{ssec:arcwise}
Zariski equisingularity implies much stronger triviality property than just the topological one.
The following result was shown in \cite[Theorem 3.1]{PP17}.
\begin{thm}\label{thm:arcwise-analytic_triviality}
Suppose that $V$ is Zariski equisingular with respect to the parameter $t$. Then there are neighborhoods $U$ of the origin in $\mathbb{K}^l$, $\Omega_0$ of the origin in $\mathbb{K}^n$, and $\Omega$ of the origin in $\mathbb{K}^{l+n}$, and a
homeomorphism
\begin{align}\label{eq:trivialization}
\Phi : U \times \Omega_0 \to \Omega,
\end{align}
such that
\begin{enumerate}
\item [\rm (i)]
$\Phi (t, 0) = (t,0) $, $\Phi (0, x_1, \ldots, x_n) = (0, x_1, \ldots, x_n) $;
\item [\rm (ii)]
$\Phi $ has a triangular form \eqref{eq:Phitriangular} ;
\item [\rm (iii)]
there is $C>0$ such that for all $(t,x)\in U \times \Omega_0$
\begin{align*}
C^{-1} |F_n(\Phi (0, x))| \le |F_n(\Phi (t,x )) | \le C |F_n(\Phi (0,x)) | ;
\end{align*}
\item [\rm (iv)]
For $(t, x_1, \ldots , x_{i-1})$ fixed,
$\Psi _i (t, x_1, \ldots , x_{i-1}, \cdot ): \mathbb{K} \to \mathbb{K}$ is bi-Lipschitz
and the Lipschitz constants of
$\Psi _i$ and $\Psi _i ^{-1}$ can be chosen independent of $(t, x_1, \ldots , x_{i-1})$;
\item [\rm (v)]
$\Phi $ is an {arc-wise analytic} trivialization of the projection $\Omega \to U$ .
\end{enumerate}
\end{thm}
(Note that (iii) of Theorem \ref{thm:arcwise-analytic_triviality} implies (iii) of
Theorem \ref{thm:topological_triviality}.)
Let us recall after \cite{PP17} the notion of arc-wise analytic trivialization. First, we need to recall, after \cite{kurdyka88}, the notion of arc-analytic map. Let $Y, Z$ be real analytic spaces. We say that a map
$g(z) : Z\to Y$ is \emph{arc-analytic} if for every real analytic arc $z (s) : I\to Z$, $ g(z(s))$ is analytic in $s$. Suppose now that
$T, Y, Z$ are $\mathbb{K}$-analytic spaces, $T$ nonsingular. We say that a map
$f (t,z) : T \times Z\to Y$ is \emph{{arc-wise analytic } in} $t$
if it is $\mathbb{K}$-analytic in $t$ and arc-analytic in $z$, that is for every real analytic arc $z (s) : I\to Z$, the map
$ f(t, z(s))$ is analytic in both $t$ and $s$. Note that in the complex analytic case it means that $ f(t, z(s))$ can be written as a convergent power series $\sum _{\alpha=(\alpha_1, \ldots, \alpha_ l)} \sum _k a_{\alpha,k} t^ \alpha s^ k$ in $t$ complex and $s$ real.
\begin{rem}
In the complex analytic case, it is in general impossible to have
the complex analytic dependence of $\Phi $ on $x$, even only on the complex arcs. This rigidity property already appears for moduli spaces of elliptic curves.
\end{rem}
Suppose now that, moreover, $\pi : Y\to T$ is $\mathbb{K}$-analytic.
We say $$\Phi (t,z) : T \times Z\to Y$$
is an \emph{{arc-wise analytic} trivialization of $\pi$}, see \cite[Definition 1.2]{PP17}, if it satisfies the following properties
\begin{enumerate}
\item
$\Phi $ is a subanalytic homeomorphism (semi-algebraic in the algebraic case),
\item
$\Phi $ is {arc-wise analytic} in $t$ (in particular it is $\mathbb{K}$-analytic with respect to $t$),
\item
$\pi \circ \Phi (t,z)=t$ for every $(t,z) \in T\times Z$,
\item
the inverse of $\Phi $ is arc-analytic,
\item
there exist $\mathbb{K}$-analytic stratifications $\{Z_i\}$ of $Z$ and $\{Y_i\}$ of $Y,$ such that for each $i$,
$Y_i = \Phi (T\times Z_i)$ and $\Phi _{|T\times Z_i} : T\times Z_i \to Y_i$
is a real analytic diffeomorphism.
\end{enumerate}
The proof of Theorem \ref{thm:arcwise-analytic_triviality} follows Varchenko's strategy, \cite{varchenkoizv72}, which we recalled briefly in subsection \ref{ssec:topequising}. It is technically simpler since Step 2 of the proof, the extension of the trivialization to the ambient space, is based on the Whitney Interpolation Formula,
see \cite{whitneymorse65}, \cite[Appendix I]{PP17}. The homeomorphism
$\Phi _{i+1}$ is given by a precise algebraic formula (formula (3.5) of
\cite{PP17}
) in terms of the roots of the pseudopolynomial $F_{i+1}$ and $\Phi _{i}$. (This algebraic formula is a real rational map, it involves, in particular, the square of the distance to the roots of $F_{i+1}$.
There is no such complex rational formula and no hope, of course, of making $\Phi _{i+1}$ complex arc-analytic, which would have been, by Hartogs' theorem, just complex analytic.)
The fact that thus obtained trivialization $\Phi _{i+1}$ is {arc-wise analytic } is
proven by induction on $i$. The inductive step is obtained by
a reduction to the Puiseux with parameter theorem, Theorem \ref{thm:PuiseuxTheorem}. Let $x^{i}(s)$ be a real analytic arc. By the inductive assumption $\Phi _{i} (t,x^i(s))$ is analytic in $t,s$. Therefore $P(t,s,x_{i+1})=F_{i+1}(\Phi _{i} (t,x^{i}(s)), x_{i+1})$ is a pseudopolynomial with respect to $x_{i+1}$ depending analytically on $s$ and $t$. The main point of the proof is to show that $P(t,s,x_{i+1})$ defines a Zariski equisingular family of plane curve singularities parameterized by $t$. It follows in essence by the
stability of the discriminant by a base change, though technically it is more involved: $P$ is not necessarily reduced even if $F_{i+1}$ is, see the proof of
\cite[Theorem 3.1]{PP17}
for more details.
\begin{rem}
Arc-wise analytic triviality is, in part, motivated by the relation of Zariski equisingularity and equiresolution of singularities and the theory of blow-analytic equivalence, see the last paragraphs of subsection \ref{ssec:equiresolution}.
\end{rem}
\subsection{Whitney's Fibering Conjecture.}\label{ssec:fiberingconjecture}
In \cite{PP17}, Theorem \ref{thm:arcwise-analytic_triviality} is used to
show Whitney's fibering conjecture.
Whitney stated this conjecture in the context of
the regularity conditions (a) and (b) introduced in
\cite{whitneyannals65}. These conditions on a stratification imply topological triviality along each stratum. This trivialization is obtained by the flow of "controlled" vector fields, as follows from the proofs of the Thom-Mather Isotopy Lemmas. By stating the Fibering Conjecture, Whitney wanted a stronger version of triviality, namely that the stratified set locally fibers into submanifolds isomorphic to strata.
\begin{conjecture*}[Whitney's fibering conjecture, \cite{whitneymorse65} section 9, p.230] \label{conjecture}
Any analytic subvariety $V\subset U$ ($U$ open in $\mathbb{C}^n$)
has a stratification such that each point $p_0\in V$ has a neighborhood
$U_0$ with a semi-analytic fibration.
\end{conjecture*}
By a semi-analytic fibration Whitney meant a local trivialization as in
\eqref{eq:trivialization} that depends analytically on the parameter $t$.
Whitney does not specify the dependence on $x$, besides that he requires it to be continuous and that the existence of such fibration should imply
Whitney's regularity conditions (a) and (b) (Whitney's semi-analytic fibration should not be confused with the notion of semi-analytic set introduced about the same time by
{\L}ojasiewicz in \cite{lojasiewiczIHES}). Partial results on Whitney's fibering conjectures were obtained in \cite{hardtsullivan88}, and in the smooth case in \cite{muroloduplesseistrotman18}.
Whitney's fibering conjecture was proven in \cite{PP17}
in the local complex and real analytic cases and in global algebraic cases by means of Zariski equisingularity and arc-wise analytic triviality.
More precisely, by Theorem \ref{thm:arcwise-analytic_triviality},
every such set has a stratification that
locally admits arc-wise analytic trivializations (see the previous subsection) along each stratum. Existence of such trivialization guarantees Whitney's regularity condition (a) but not necessarily condition (b)
(see Brian\c con-Speder example, Example \ref{ex:briancon-speder} below).
Here we touch for the first time in this survey an interesting and important feature: some properties of Zariski equisingular families depend on the genericity of the system of coordinates $x_1, \ldots ,x_n$. In order to guarantee Whitney's condition (b) we consider transverse Zariski equisingularity, see \cite[Definition 4.1]{PP17}. We call Zariski equisingularity \emph{transverse (or transversal)} if at each inductive stage the kernel of the projection $(t, x^i) \to (t, x^{ i-1})$ is not included in the tangent cone to $F_i = 0$ at the origin.
If we have a family that is transverse Zariski equisingular then,
by Theorem 4.3 of \cite{PP17}, the arc-wise analytic trivialization constructed in \cite{PP17}
satisfies additionally the property, called regularity,
\begin{align*}
C^{-1} \|x\| \le \|\Phi (t,x) - (t,0)\| \le C \|x\| ,
\end{align*}
for a constant $C$ independent of $t$ and $x$.
Geometrically it means that the trivialization $\Phi $ preserves the magnitude of the distance to
$U\times \{0\}$. It is proven in
\cite[Proposition 7.4]{PP17} that this regularity implies Whitney's regularity condition (b) (and even Verdier's condition (w)) along $U\times \{0\}$, see also \cite[Section 7]{PP17}.
We discuss the relation of Zariski equisingularity, the plain one or with extra conditions such as transversality and genericity, and
Whitney's conditions in subsection \ref{ssec:relationto}.
\subsection{Algebraic Case}\label{ssec:algebraic}
In the papers \cite{varchenko73,varchenkoICM75} Varchenko considers the families of analytic singularities while the paper \cite{varchenkoizv72} deals with the families of affine or projective algebraic varieties. Similarly, the families of algebraic varieties were considered in sections 5 and 9 of \cite{PP17}. The version presented below is stated in \cite{PRpreprint18}. It follows
from the proof of the main theorem, Theorem 3.3, of \cite{PP17}, see also Theorems 3.1 and 4.1 of \cite{varchenkoizv72} in the complex case, Theorems 6.1 and 6.3 of \cite{varchenkoizv72} in the real case, and Proposition 5.2 and Theorem 9.2 of \cite{PP17}, where the global algebraic case is treated.
\begin{thm}\label{thm:algebraic}
Let $\mathcal V$ be an open connected neighborhood of $\tt$ in $\mathbb{K}^r$ and
let $\mathcal O_{\mathcal V}$ denote the ring of $\mathbb{K}$-analytic functions on
$\mathcal V$.
Let $t=(t_1, \ldots , t_r)$ denote the variables in $\mathcal V$ and
let $x=(x_1,\ldots, x_n)$ be a set of variables in $\mathbb{K}^n$. Suppose that for $i=k_0,\ldots, n,$ there are given
\begin{align}\label{eq:system-extension}
F_i(t,x^i)=x_i^{d_i}+\sum_{j=1}^{d_i}a_{i-1,j}(t,x^{i-1})x_i^{d_i-j}\in
\mathcal O_{\mathcal V}[x^{i}],
\end{align}
with $\ d_i>0$, such that
\begin{enumerate}
\item[(i)] for every $i>k_0$, the first non identically equal to zero generalized discriminant of $F_i(t,x^{i-1},x_i)$
divides $F_{i-1} (t,x^{i-1})$.
\item[(ii)] the first non identically equal to zero generalized discriminant of $F_{k_0}$ is independent of $x$ and does not vanish on $\mathcal V$.
\end{enumerate}
Then, for every $\mathbf{q} \in\mathcal V$ there is a homeomorphism
$$h_{\mathbf{q}}: \{\tt\}\times\mathbb{K}^n\rightarrow \{\mathbf{q}\}\times\mathbb{K}^n$$ such that
$h_{\mathbf{q}} (V_{\tt})=V_{\mathbf{q}}$,
where for ${\mathbf{q}} \in \mathcal V$ we denote $V_\mathbf{q}=\{(\mathbf{q},\mathbf{x})\in \mathcal V \times \mathbb{K}^n\mid F_n(\mathbf{q},\mathbf{x})=0\}$.
Moreover, if $F_n=G_1\cdots G_s$ then for every $j=1,\ldots, s$
$$h_{\mathbf{q}}\left(G_j^{-1}(0)\cap(\{\tt\}\times\mathbb{K}^n)\right)=G_j^{-1}(0)
\cap(\{\mathbf{q} \}\times\mathbb{K}^n).$$
\end{thm}
\begin{rem}
By construction of \cite{varchenkoizv72, varchenkoizv73,varchenkoICM75} and \cite{PP17}, the homeomorphisms $h_\mathbf{q}$ can be obtained by a local topological trivialization. That is, there are a neighborhood $\mathcal W$ of $\tt$ in $\mathbb{K}^r$ and a homeomorphism
$$
\Phi : \mathcal W \times \mathbb{K}^n \to \mathcal W \times \mathbb{K}^n,
$$
so that $\Phi (\mathbf{q},x) = h_{\mathbf{q}}(\tt,x)$.
This $\Phi $ is triangular of the form \eqref{eq:Phitriangular}. If we
write $\Phi (\mathbf{q},x)= (\mathbf{q}, \Psi _\mathbf{q} (x)) $, i.e. as a family of homeomorphisms
$h_\mathbf{q}= \Psi _\mathbf{q} :\mathbb{K}^n \to \mathbb{K}^n$, then, as follows from \cite{PP17},
we may require that:
\begin{enumerate}
\item
The homeomorphism $\Phi$ is subanalytic. In the algebraic case,
i.e. if we replace in the assumptions
$\mathcal O_{\mathcal V}$ by the ring of regular or
$\mathbb{K}$-valued Nash functions on
$\mathcal V$, $\Phi $ can be chosen semialgebraic.
\item
$\Phi $ is arc-wise analytic. In particular, each $h_\mathbf{q}$ and its inverse
$h_\mathbf{q}^{-1}$ are arc-analytic.
\end{enumerate}
\end{rem}
\begin{rem}\label{rk:proj}
If $F_i$ are homogeneous in $x$, then the functions $\Psi _\mathbf{q}$ satisfy, by construction,
$$\forall \lambda\in\mathbb{K}^*,\forall x\in \mathbb{K}^n\ \ \ \Psi_\mathbf{q}(\lambda x)=\lambda \Psi_\mathbf{q}(x).
$$
Hence if we define $\P (V_\mathbf{q})=\{(\mathbf{q},\mathbf{x})\in \mathcal V \times \P_\mathbb{K}^n\mid F_n(\mathbf{q},\mathbf{x})=0\}$, the homeomorphism $h_\mathbf{q}$ induces a homeomorphism between $\P(V_{\tt})$ and $\P(V_{\mathbf{q}})$.
\end{rem}
\subsection{Principle of generic topological equisingularity}\label{ssec:arbitrarycodimension}
Varchenko applies Theorem \ref{thm:topological_triviality} to establish in \cite[Sections 5 and 6]{varchenkoizv72} generic topological equisingularity for families of
real or complex, affine or projective, algebraic sets.
The principle of generic topological equisingularity says that
in an algebraic family $X_t$ of algebraic sets, parameterized by $t\in T$, where $T$ is an irreducible, not necessarily nonsingular, algebraic variety,
there is a proper algebraic subset $Y$ of $T$ such that
the fibers $X_t$ have constant topological type for $t$ from each connected component of $T\setminus Y$.
In the complex algebraic case $T\setminus Y$ is connected by the irreducibility of $T$, in the real algebraic case it has finitely many connected components. For analytic spaces or sets, a similar principle holds locally. In both analytic and algebraic cases, the results give actually local topological triviality of the family $X_t$ over $T\setminus Y$. We give examples of possible precise statements below. Let us first make some remarks.
Generic topological equisingularity can be proven, in general, either by Zariski equisingularity or by stratification theory using Whitney's stratification and Thom-Mather Isotopy Lemmas, see \cite[Theorem 4.2.17]{trotman2020}. Whitney's stratification approach is independent of the choice of coordinates and simple to define.
But the trivializations obtained by this method are not explicit since they are flows of "controlled" vector fields. Even if such vector fields can be chosen subanalytic or semialgebraic, not much can be said about the regularity of their flows.
Zariski's equisingularity method is more explicit and in a way constructive. It uses the actual equations and coordinate systems. This can be considered either as a drawback or as an advantage.
The trivializations can be chosen subanalytic (semialgebraic in the
algebraic case), as shown in \cite{PP17}. Actually, the trivializations
are given there by explicit formulas in terms of the coefficients of the polynomials and their roots.
In the real case, triangulation provides another method for proving generic topological triviality. The classical triangulation procedures are based on a construction similar to that of Zariski equisingularity, i.e. subsequent co-rank 1 projections and their discriminants. For instance, a beautiful result on semialgebraic triviality was shown by Hardt using this approach in \cite{hardt80}. For a fairly complete account of this approach, the reader may consult \cite{BCR} and the references therein.
It is fairly straightforward to apply Theorem \ref{thm:topological_triviality} to obtain generic topological equisingularity for families of hypersurfaces.
In the case of varieties and spaces of arbitrary codimension the argument
goes as follows.
If $F= G_1 \cdots G_k$, then, under the assumptions of Lemma \ref{lem:multiplicityconstant}, for $x^{n-1}$ fixed, the
number of roots of each $G_{j} (\Phi _{n-1} (t,x^{n-1}), x_n)=0$ is independent of $t$, see Lemma 2.2 of \cite{varchenkoizv72} or Proposition 3.6 of \cite{PP17}. In particular $\Phi $
trivializes not only $V(F)=F ^{-1}(0)$ but also each of
$V(G_j)= G_j ^{-1}(0)$.
Thus \cite{varchenkoizv72} implies the following.
\begin{thm}\label{thm:many-components}
If $F= G_1 \cdots G_k$, then for each $j=1, \ldots , k$
the homeomorphism $\Phi $ of Theorem \ref{thm:topological_triviality} satisfies $\Phi (T\times V(G_j )\cap \Omega_0)=V(G_j) \cap \Omega$,
where
$V_t(G_j) = G_j^ {-1} (0) \cap (\{t\}\times \mathbb{K}^n)$. In particular $\Phi $ trivializes $\{G_1= \cdots = G_k=0\}$.
\end{thm}
Now let us give two possible exact statements for this principle taken from \cite{PP17}.
Note that they give not only generic topological equisingularity but
much stronger generic arc-wise analytic triviality.
\begin{thm}[{\cite[Theorem 9.3]{PP17}}, cf. {\cite[Theorem 5.2, Theorem 6.4]{varchenkoizv72}}]\label{thm:theoremequisingularity3}
Let $T$ be an algebraic variety (over $\mathbb{K}$) and let $\mathcal X =\{X_k\}$ be a finite family of
algebraic subsets of $T \times \mathbb P^{n-1}_\mathbb{K}$. Then there exists
an algebraic
stratification $\mathcal S$ of $T$ such that for every stratum $S$ and for
every $t_0\in S$ there is a neighborhood $U$ of $t_0$ in $S$ and a semialgebraic {arc-wise analytic }
trivialization of $\pi,$ preserving each set of the family $\mathcal X,$
\begin{align}\label{eq:raagtrivial}
\Phi : U \times \mathbb P^{n-1}_\mathbb{K} \to \pi^{-1} (U),\end{align}
$\Phi (t,x)= (t,\Psi (t,x))$, $\Phi (t_0,x)= (t_0,x)$,
where $\pi:T \times \mathbb P^{n-1}_\mathbb{K}\to T$ denotes the projection.
\end{thm}
\begin{thm}[{\cite[Theorem 6.2]{PP17}}]\label{thm:genericequisingularity}
Let $T$ be a $\mathbb{K}$-analytic space, $U\subset \mathbb{K}^n$ an open neighborhood of the origin,
$\pi : T\times U \to T$ the standard projection, and let
$\mathcal X=\{X_k\}$ be a finite family of $\mathbb{K}$-analytic subsets of $T\times U$.
Let $t_0\in T$.
Then there exist an open neighborhood $T' $ of $t_0$ in $T$ and a proper $\mathbb{K}$-analytic subset $Z\subset T'$, containing $\Sing T'$, such that for every $t\in T'\setminus Z$, $\mathcal X$
is regularly {arc-wise analytically } equisingular along $T\times \{0\}$ at $t$.
Moreover, there is an analytic stratification of an open neighborhood of $t_0$ in $T$ such that for every stratum $S$ and every $t\in S$, $\mathcal X$
is regularly {arc-wise analytic } equisingular along $S\times \{0\}$ at $t$.
\end{thm}
In the above theorem by saying that $\mathcal X$
is \emph{{arc-wise analytically } equisingular along $T\times \{0\}$ at $t\in \Reg T$} we mean that there are neighborhoods $B $
of $t$ in $\Reg T$ and $\Omega$ of $(t,0)$ in $T\times \mathbb{K}^n$, and an {arc-wise analytic } trivialization
$\Phi : B \times \Omega_t \to \Omega$, where $\Omega_t = \Omega\cap \pi^{-1} (t)$, such that
$\Phi (B\times \{0\} ) = B\times \{0\} $ and, for every $k$, $\Phi (B\times (X_{k,t}\cap \Omega_t)) = X_k \cap \Omega, $
where $X_{k,t}= X_k \cap \pi^{-1} (t)$. We say that $\mathcal X$
is \emph{regularly {arc-wise analytically } equisingular along
$T\times \{0\}$ at $t\in T$} if, moreover, $\Phi $ preserves, up to a constant, the distance to
$T\times \{0\}$, as we explained at the end of section
\ref{ssec:fiberingconjecture}. The latter property is related to Whitney's conditions,
see Section 7 of \cite{PP17}.
\subsection{Zariski's theorem on the fundamental group.} \label{ssec:fundamentalgroup}
Varchenko in \cite{varchenkoizv72} applies topological triviality of Zariski equisingular projective algebraic varieties to prove Zariski's theorem on the fundamental group of the complement. This theorem says that, for a complex projective hypersurface $V_{n-1}\subset \mathbb P^n_\mathbb{C}$ with $n>2$, the fundamental group of the complement $\mathbb P^n_\mathbb{C} \setminus V_{n-1}$ coincides with the fundamental group of the complement of a general hyperplane section $V_{n-1}\cap H$ in $H\cong \mathbb P^{n-1}_\mathbb{C}$.
This theorem was announced by Zariski in \cite{zariski37},
but the proof published there is not considered complete.
Another complete proof of this theorem, different from the one of Varchenko, is given in \cite{hammle71,hammle73}.
\subsection{General position theorem.} \label{ssec:generalposition}
In \cite{mccroryetal19} Zariski equisingular families of affine or projective algebraic varieties are used, together with Whitney interpolation, to prove stratified general position and transversality theorems for semialgebraic subsets of algebraic stratifications.
In classical algebraic topology, general position of chains was used by Lefschetz to define the intersection pairing on the homology of a manifold.
This approach is based on the possibility of moving a "subvariety" $Z$ of a
$C^\infty$ manifold $M$, by a family of diffeomorphisms, so that the image of $Z$ becomes transverse to another given "subvariety" $W$ of $M$. This principle was made precise by Trotman
\cite{trotman79}
and, independently, by Goresky
\cite{goresky81}. They proved that, by a diffeomorphism, one can put two Whitney stratified closed subsets $Z$ and $W$ of $M$ in stratified general position.
The main theorem of \cite{mccroryetal19} is expressed
in terms of a submersive family of diffeomorphisms introduced in
\cite[I.1.3.5]{StratifiedMorseTheory}.
Let $T$ and $M$ be $C^\infty$ manifolds and let $\Psi : T\times M \to M$ be a $C^\infty$ map.
Consider $\Psi _t : M \to M$, $\Psi _t(x) = \Psi (t, x)$, and $\Psi ^x : T \to M$, $\Psi ^x(t) = \Psi (t, x)$. We say $\Psi $ is \emph{a family of diffeomorphisms} if for all $t\in T$ the map $\Psi _t$ is a diffeomorphism. The family $\Psi $ is called \emph{submersive} if, for each $(t, x) \in T\times M$, the differential $D\Psi ^x$ is surjective.
By Theorem \cite[I.1.3.6]{StratifiedMorseTheory}, if $\Psi : T\times M \to M$ is submersive and both $Z$ and $W$ are Whitney stratified closed subsets
of $M$ then the set of $t\in T$ such that $\Psi _t(Z)$ is transverse to $W$ is dense in $T$ and open provided $Z$ is compact.
A good example of a submersive family is a transitive action $\Psi : G\times M \to M$ of a Lie group $G$. Note that in this case Theorem \cite[I.1.3.6]{StratifiedMorseTheory} gives the characteristic $0$ part of Kleiman's transversality of a general translate theorem
\cite{kleiman74}. For a stratified set $X= \bigsqcup S_i$ we say that $\Psi : T \times X \to X$ is \emph{a stratified submersive family of diffeomorphisms}
if for each stratum $S_j$, we have $\Psi (T\times S_j)\subset S_j$, and the map $\Psi : T \times S_j \to S_j$ is a submersive family of diffeomorphisms.
In algebraic geometry the intersection of cycles on nonsingular varieties can be defined via a moving lemma that allows one to move a cycle,
see \cite[Section 11.4]{F:IT}.
But there is no moving lemma nor an algebraic general position theorem for singular varieties. In the original construction of Intersection Cohomology \cite{goreskymacphersonIH},
in order to define the intersection pairing on singular complex algebraic varieties equipped with a Whitney stratification, Goresky and MacPherson used a piecewise linear general position theorem of McCrory
\cite{mccrory78}. The main theorem of \cite{mccroryetal19}
shows the existence of such a stratified submersive family in the arc-wise analytic category of \cite{PP17}, see also subsection \ref{ssec:arcwise}.
\begin{thm}[{\cite[Theorem 1.1]{mccroryetal19}}]\label{thm:projective}
Let $\mathcal V = \{V_i\}$ be a finite family of algebraic subsets of projective space $\mathbb P_\mathbb{K}^n$. There exists an algebraic stratification $\mathcal S = \{S_j\}$ of $\mathbb P_\mathbb{K}^n$ compatible with each $V_i$ and a semialgebraic stratified submersive family of diffeomorphisms $\Psi : U\times \mathbb P_\mathbb{K}^n \to \mathbb P_\mathbb{K}^n$, where $U$ is an open neighborhood of the origin in $\mathbb{K}^{n+1}$,
such that $\Psi (0,x)=x$ for all $x\in \mathbb P_\mathbb{K}^n$. Moreover, the map $\Phi :U\times \mathbb P_\mathbb{K}^ n \to U\times \mathbb P_\mathbb{K}^n$, $\Phi (t,x) = (t,\Psi (t,x))$, is an arc-wise analytic trivialization of the projection $U\times \mathbb P_\mathbb{K}^n\to U$.
\end{thm}
A similar result holds for affine varieties; see \cite[Corollary 3.2]{mccroryetal19}.
The proof of Theorem \ref{thm:projective} is rather tricky.
It uses the formulas of \cite{PP17} from Step 2 of the construction of the topological trivialization of Zariski equisingular families, see section \ref{ssec:topequising}. These formulas are based on Whitney interpolation and can be perturbed by introducing complex parameters;
these are the $t\in U$ of the theorem. The whole construction is applied to a trivial family, that is to the product $U\times \mathbb P_\mathbb{K}^n$, thus producing a non-trivial arc-wise analytic trivialization of a trivial family.
Theorem \ref{thm:projective} implies the general position in terms of the expected dimension of the intersection and the general transversality.
The general position in terms of dimension is exactly what is needed to define the intersection pairing for the intersection homology,
cf. \cite{goreskymacphersonIH}.
The general position in terms of dimension can be expressed as follows; here dimension means the real dimension, since we consider semialgebraic sets.
\begin{cor} [{\cite[Proposition 1.3]{mccroryetal19}}]\label{cor:generalposition1}
Let $\Psi : U\times \mathbb P_\mathbb{K}^ n \to \mathbb P^n$ be a stratified family as in Theorem \ref{thm:projective}, and let $\mathcal S$ be the associated algebraic stratification of $\mathbb P_\mathbb{K}^n$.
Let $Z$ and $W$ be semialgebraic subsets of $\mathbb P_\mathbb{K}^n$. There is an open dense semialgebraic subset $U'$ of $U$ such that, for all $t \in U'$ and all strata $S\in \mathcal S$,
$$
\dim (Z \cap \Psi _t^{-1} (W )\cap S) \le \dim (Z\cap S) + \dim (W\cap S) - \dim S.
$$
\end{cor}
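For instance (a toy instance of the above inequality, not taken from \cite{mccroryetal19}), let $\mathbb{K}=\mathbb R$, $n=2$, and let $Z$ and $W$ be two algebraic curves in $\mathbb P_\mathbb{R}^2$. On every two-dimensional stratum $S\in\mathcal S$ the right-hand side is at most $1+1-2=0$, so for $t$ in a dense open semialgebraic subset of $U$ the intersection $Z\cap\Psi_t^{-1}(W)\cap S$ is finite.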
If $\mathcal S$ is a stratification of a semialgebraic set $X$, and $\mathcal T$ is a stratification of a semialgebraic subset $Y$ of $X$,
then $(Y,\mathcal T)$ is a \emph{substratified object} of $(X,\mathcal S)$ if each stratum of $\mathcal T$ is contained in a stratum of $\mathcal S$. Two substratified objects $(Z,\mathcal A)$ and $(W,\mathcal B)$ of $(X,\mathcal S)$ are \emph{transverse} in $(X,\mathcal S)$ if, for every pair of strata $A\in\mathcal A$ and $B\in\mathcal B$ such that $A$ and $B$ are contained in the same stratum $S\in\mathcal S$, the manifolds $A$ and $B$ are transverse in $S$.
\begin{cor}[{\cite[Proposition 1.5]{mccroryetal19}}]\label{cor:transversality}
Let $\Psi : U\times \mathbb P^ n \to \mathbb P^n$ be a stratified family as in Theorem \ref{thm:projective}, and let $\mathcal S$ be the associated algebraic stratification of $\mathbb P^n$.
Let $Z$ and $W$ be semialgebraic subsets of $\mathbb P^n$, with semialgebraic stratifications $\mathcal A$ of $Z$ and $\mathcal B$ of $W$ such that $(Z,\mathcal A)$ and $(W,\mathcal B)$ are substratified objects of $(\mathbb P^n,\mathcal S)$. There is an open dense semialgebraic subset $U'$ of $U$ such that, for all $t \in U'$, $(Z,\mathcal A)$ is transverse to $\Psi _t^{-1} (W,\mathcal B)$ in $(\mathbb P^n,\mathcal S)$.
\end{cor}
In a recent paper \cite{MPintersectionhomology18} Corollary \ref{cor:transversality} is used to define an intersection pairing for \emph{real intersection homology}, an analog of intersection homology for real algebraic varieties.
\section{Construction of equisingular deformations}\label{sec:construction}
Let $f$ be either a polynomial or the germ of an analytic function, and let $V=V(f)$ denote the zero set of $f$. We explain below how to construct Zariski equisingular deformations of $V$ (or more precisely of its equation $f$). The idea comes from \cite{mostowski84}, where the local complex analytic case was considered.
We begin with the global polynomial case as considered in \cite{PRpreprint18} since it is conceptually simpler and does not require Artin approximation. Note also that this method can be applied as well to construct equisingular deformations of sets given by a system of several equations
as was explained at the end of section \ref{ssec:arbitrarycodimension}.
\subsection{Global polynomial case}\label{ssec:deform-polyn}
Let $V$ be an algebraic subset of $\mathbb{K}^n$ and let the polynomials $g_1$,\ldots, $g_s\in\mathbb{K}[x]$ generate the ideal defining $V$.
Let
$$g_i=\sum_{\a\in\mathbb N^n}g_{i,\a}x^\a.$$
In general, a deformation of the $g_{i,\a}$, even an arbitrarily
small one, may destroy the topological structure of $V$ due to the presence of singularities (and of "singularities at infinity" in the global case).
In this method we construct a finite number of constraints
satisfied by the coefficients $g_{i,\a}$, namely the equations \eqref{eq:2}, \eqref{eq:3},
\eqref{eq:5}, \eqref{eq:rational-relations} and the inequations \eqref{eq:cond-c_i} and \eqref{eq:4} below, with the following property.
Any deformation $t\mapsto g_{i,\a}(t)$ with $g_{i,\a}(0)=g_{i,\a}$ that satisfies the same constraints \eqref{eq:cond-c_i} and \eqref{eq:2}-\eqref{eq:rational-relations} is, by construction, Zariski equisingular. In particular any such deformation is topologically trivial.
Moreover, the terms appearing in \eqref{eq:cond-c_i} and \eqref{eq:2}-\eqref{eq:rational-relations} are rational functions in the $g_{i,\a}$ with rational coefficients, that is, they belong to $\mathbb{Q}(u_{i,\a})$ for some new indeterminates $u_{i,\a}$.
Let us fix a finite set of coefficients $g_{i,\a}\in \mathbb{K}$ that contains all
the nonzero ones. In what follows we will perturb only these coefficients and keep all the others equal to zero.
After a linear change of the coordinates $x$ with rational coefficients we may assume that
\begin{align}\label{eq:g-r-polynomials}
g_i=c_i x_n^{p_i}+\sum_{j=1}^{p_i}b_{n-1,i,j}(x^{n-1})x_n^{p_i-j} =\sum_{\b\in\mathbb N^n}a_{n,i,\b}x^\b ,\quad \forall i=1,\ldots, s,
\end{align}
with
\begin{align}\label{eq:cond-c_i}
c_i\ne 0, \qquad i = 1, \ldots, s.
\end{align}
By multiplying each $g_i$ by $1/c_i$ we can assume that $c_i=1$ for every $i$. Denote by $f=f_n$ the product of the $g_i$ and by $a_{n}$ the vector of coefficients $a_{n,i,\b}$. The entries of $a_n$ are rational functions in the
original $g_{i,\a}$ (i.e. before the linear change of coordinates $x$) with rational coefficients, say
\begin{equation}\label{eq:lin_ch}
a_n=A_n(g_{i,\a}),
\end{equation}
where $A_n=(A_{n,i,\b})_{i,\b}\in \mathbb{Q}(u_{i,\a})^{N_n}$ for some integer $N_n>0$.
Let the integer $l_n$ be defined by
$$D_{n,l_n}(a_{n})\not\equiv 0 \text { and } D_{n,l}(a_n)\equiv 0, \quad \forall l<l_n, $$
where $D_{n,l}$ denotes the $l$-th generalized discriminant of $f_n$,
see Appendix.
After a new linear change of coordinates $x^{n-1}$ with rational coefficients, $D_{n,l_n}(a_{n})=e_{n-1}f_{n-1}$
with $e_{n-1}\neq 0$ and
$$f_{n-1}=\sum_{\b\in\mathbb N^{n-1}}a_{n-1,\b}x^\b=x_{n-1}^{d_{n-1}}+\sum_{j=1}^{d_{n-1}}b_{n-2,j}(x^{n-2})x_{n-1}^{d_{n-1}-j}$$
for some constants $a_{n-1,\b}$ and polynomials $b_{n-2,j}$.
We repeat this construction and define recursively a sequence of polynomials $f_j(x^j)$,
monic in $x_j$, such that
\begin{align}\label{eq:2}
D_{j+1,l_{j+1}}(a_{j+1})=e_j\left(x_{j}^{d_{j}}+\sum_{k=1}^{d_{j}}b_{j-1,k}(x^{j-1})x_{j}^{d_{j}-k}\right)=e_j\left(\sum_{\b\in\mathbb N^j} a_{j,\b}x^\b\right)=e_jf_j
\end{align}
is the first generalized discriminant of $f_{j+1}$ that is not identically equal to zero, and $a_j$ denotes the vector of coefficients $a_{j,\b}$.
This way we get a system of equations
\begin{align}\label{eq:3}
D_{j+1,l}(a_{j+1})\equiv 0\ \ \forall l<l_{j+1},\end{align}
and inequations
\begin{align}\label{eq:4}
\qquad e_j\ne 0 , \end{align}
for $j=n-1, \ldots, k_0$, until we get
\begin{align}\label {eq:5}
f_{k_0}=1 \text { for some }
k_0\geq 0.
\end{align}
By \eqref{eq:cond-c_i}, \eqref{eq:lin_ch} and \eqref{eq:2}, the $c_i$, the entries of the $a_k$, and the $e_j$ are rational functions in the $g_{i,\a}$ with rational coefficients, say
\begin{align}\label{eq:rational-relations}
c_i= C_i(g_{i,\a}), \ \ a_k=A_k(g_{i,\a}),\ \ e_j=E_j(g_{i,\a}),
\end{align}
for some $C_i\in\mathbb{Q}(u_{i,\a})$, $A_k\in\mathbb{Q}(u_{i,\a})^{N_k}$ and $E_j\in\mathbb{Q}(u_{i,\a})$.
Thus \eqref{eq:cond-c_i} and \eqref{eq:2}-\eqref{eq:rational-relations} are equations and inequations, with rational coefficients, on the original coefficients $g_{i,\a}$.
Let $\mathcal V$ be an open connected neighborhood of a point $\tt \in \mathbb{K}^l$ and let $\mathcal O_{\mathcal V}$ denote the ring of $\mathbb{K}$-analytic functions on
$\mathcal V$.
Suppose that $g_{i,\a}(t)\in \mathcal O_{\mathcal V}$, where $t\in \mathcal V$, satisfy $g_{i,\a}= g_{i,\a}(\tt)$.
For $t\in\mathcal V$ and $ i=1,\ldots, s,$ we define
\begin{align*}
\tilde g_i(t,x):=\sum_{\a\in\mathbb N^n}g_{i,\a}(t) x^\a .
\end{align*}
We claim that if the $g_{i,\a}(t)$ satisfy the identities and the inequations
\eqref{eq:cond-c_i} and \eqref{eq:2}-\eqref{eq:rational-relations}, then the family $t\to \{\tilde g_1(t,x)= \cdots = \tilde g_s(t,x) = 0\}$ is topologically trivial for $t$ in a small neighborhood of $\tt$ in
$\mathcal V$. For this we construct a system $F_j (t, x^j)$ satisfying the assumptions of Theorem \ref{thm:algebraic}. We set
\begin{align*}
F_n(t,x)=\prod_{i=1}^s G_i(t,x), \text{ where } G_i(t,x)=\sum_{\b\in\mathbb N^n}A_{n,i,\b}(g_{i,\a}(t)) x^\b, i=1,\ldots, s,
\end{align*}
and $A_n=(A_{n,i,\b})_{i,\b}\in \mathbb{Q}(u_{i,\a})^{N_n}$ given in
\eqref{eq:lin_ch}. Similarly for $j=k_0,\ldots, n-1$ we set
\begin{align}\label{eq:functions-F}
F_j(t,x^j)=\sum_{\b\in\mathbb N^j}A_{j,\b}(g_{i,\a}(t))x^\b.
\end{align}
Note that each $G_i(\tt,x)$ coincides with $g_i$ after the linear change of coordinates made during the construction.
It is clear from the above construction that the family $(F_j(t,x^j))$ satisfies the assumptions of Theorem \ref{thm:algebraic}. Let us summarize this in the following.
\begin{thm}\label{thm:deformation-polynomial}
Suppose that $g_{i,\a}(t)$ satisfy the identities and the inequations \eqref{eq:cond-c_i} and \eqref{eq:2}-\eqref{eq:rational-relations}. Then $F_n (t, x)$ defines a Zariski equisingular family with respect to the parameter $t$.
\end{thm}
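To see the constraints at work on the simplest example, consider the following computation (ours; the signs and exact normalizations depend on the conventions for the generalized discriminants fixed in the Appendix). Take $n=2$, $s=1$ and $g_1=x_2^2-x_1^3$. Then $c_1=1$, $f_2=g_1$, and the first generalized discriminant of $f_2$ with respect to $x_2$ is the usual one,
$$
D_{2,1}(a_2)=4x_1^3 ,
$$
so $l_2=1$ and \eqref{eq:2} reads $D_{2,1}(a_2)=e_1f_1$ with $e_1=4$ and $f_1=x_1^3$. Since $f_1$ has only one distinct root, $D_{1,1}(a_1)\equiv D_{1,2}(a_1)\equiv 0$ and $D_{1,3}(a_1)\ne 0$, so $l_1=3$ and the construction stops with $f_0=1$, $k_0=0$. A deformation such as $g_1(t,x)=x_2^2-(1+t)x_1^3$ gives $D_{2,1}=4(1+t)x_1^3=e_1(t)\,x_1^3$ with $e_1(t)\ne 0$ for $t$ close to $0$ and leaves $f_1=x_1^3$ unchanged, so the constraints \eqref{eq:cond-c_i} and \eqref{eq:2}-\eqref{eq:rational-relations} persist and, by Theorem \ref{thm:deformation-polynomial}, this family is Zariski equisingular. By contrast, $g_1(t,x)=x_2^2-x_1^3-tx_1^2$ gives $f_1(t,x_1)=x_1^3+tx_1^2$, which for $t\ne 0$ has two distinct roots, so $D_{1,2}(a_1(t))\ne 0$ and the constraint \eqref{eq:3} fails, in accordance with the fact that the nodal curve is not homeomorphic to the cuspidal one.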
\subsection{Application: Algebraic sets are homeomorphic to algebraic sets defined over algebraic number fields}\label{ssec:homeotonumberfields}
The following result was proven in \cite{PRpreprint18}.
\begin{thm}\label{thm:homeotoalgebraicglobal}
Let $V\subset \mathbb{K}^n$ (resp. $V\subset \P_\mathbb{K}^n$) be an affine (resp. projective) algebraic set, where $\mathbb{K}=\mathbb R$ or $\mathbb{C}$. Then there exist an affine (resp. projective) algebraic set $W\subset \mathbb{K}^n$ (resp. $W\subset \P_\mathbb{K}^n$) and a homeomorphism $h:\mathbb{K}^n\longrightarrow \mathbb{K}^n$ (resp. $h:\P_\mathbb{K}^n\longrightarrow \P_\mathbb{K}^n$) such that:
\begin{enumerate}
\item[(i)] the homeomorphism $h$ maps $V$ onto $W$,
\item[(ii)] $W$ is defined by polynomial equations with coefficients in
$\overline\mathbb{Q}\cap \mathbb{K}$,
\item[(iii)] the variety $W$ is obtained from $V$ by a Zariski equisingular deformation. In particular the homeomorphism $h$ can be chosen semialgebraic and arc-analytic.
\end{enumerate}
\end{thm}
Suppose, as in the previous section, that the ideal defining $V$ is generated by the polynomials $g_1$,\ldots, $g_s\in\mathbb{K}[x]$.
In order to prove Theorem \ref{thm:homeotoalgebraicglobal} one constructs
in \cite{PRpreprint18}
a deformation $t\mapsto g_{i,\a}(t)$ of the coefficients $g_{i,\a}\in \mathbb{K}$ of the $g_i$ that
preserves all polynomial relations over $\mathbb{Q}$ satisfied by these coefficients. Therefore this deformation preserves the identities
\eqref{eq:2}, \eqref{eq:3}, \eqref{eq:5} and \eqref{eq:rational-relations}. If it is sufficiently small the inequations \eqref{eq:cond-c_i}, \eqref{eq:4} are also preserved and, by Theorem \ref{thm:deformation-polynomial}, the deformation is equisingular in the sense of Zariski.
This construction is particularly simple if the field extension ${k}$ of $\mathbb{Q}$ generated by the coefficients $g_{i,\a}$ is a purely transcendental extension of $\mathbb{Q}$. For the general case we refer the reader to \cite{PRpreprint18}.
Thus assume that ${k}=\mathbb{Q}(\tt_1,\ldots, \tt_r)$, where the $\tt_i\in\mathbb{K}$ are algebraically independent over $\mathbb{Q}$. Then there are rational functions
$g_{i,\a} (t) \in \mathbb{Q}(t)$,
$t=(t_1,\ldots, t_r)$, such that $g_{i,\a} =g_{i,\a} (\tt )$. Let $\mathcal V$ be a neighborhood of $\tt= (\tt_1,\ldots, \tt_r)$ that does not contain the poles of the $g_{i,\a}(t) $. Since
$\tt_i\in\mathbb{K}$ are algebraically independent any polynomial relation with coefficients
in $\mathbb{Q}$, satisfied by $g_{i,\a} =g_{i,\a} (\tt )$, is also satisfied by $
g_{i,\a} (t)$.
In particular, $g_{i,\a} (t)$ satisfy the identities
\eqref{eq:2}, \eqref{eq:3}, \eqref{eq:5} and \eqref{eq:rational-relations}
as we wanted.
Choose $\mathbf{q}\in (\overline\mathbb{Q})^r \cap \mathcal V$ sufficiently close to $\tt$.
Then all $g_{i,\a} (\mathbf{q} ) \in \overline\mathbb{Q}$. Therefore the family $(F_j)$,
defined by \eqref{eq:functions-F}, satisfies the hypothesis of Theorem \ref{thm:algebraic} and the hypersurfaces $X_0:=\{F_n(\mathbf{q},x)=0\}$ and $X_1:=\{F_n(\tt,x)=0\}$ are homeomorphic.
Moreover, the homeomorphism thus constructed maps every component of $X_0$ defined by $G_i(\mathbf{q} ,x)=0$ onto the component of $X_1$ defined by $G_i(\tt ,x)=0$, as in Theorem \ref{thm:many-components}. This proves that the algebraic variety $V=\{g_1=\cdots =g_s=0\}$ is homeomorphic to the algebraic variety $\{G_1(\mathbf{q} ,x)=\cdots=G_s(\mathbf{q} ,x)=0\}$, defined by polynomial equations over $\overline\mathbb{Q}$.
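For a deliberately trivial illustration of this argument (ours, not from \cite{PRpreprint18}), let $V=\{x_2^2-\tt x_1^3=0\}\subset\mathbb{C}^2$ with $\tt=\pi$, so that ${k}=\mathbb{Q}(\pi)$ is purely transcendental over $\mathbb{Q}$. The coefficients are the values at $\tt$ of the rational functions $1$ and $-t$, and the constraints of subsection \ref{ssec:deform-polyn} hold for all $t\ne 0$ (cf. the computation following Theorem \ref{thm:deformation-polynomial}). Hence any $\mathbf{q}\in\mathbb{Q}$ sufficiently close to $\pi$ yields a curve $\{x_2^2=\mathbf{q}\, x_1^3\}$ homeomorphic to $V$; in this particularly simple case the two curves are of course already isomorphic by a linear change of coordinates.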
A result analogous to Theorem \ref{thm:homeotoalgebraicglobal} in the local case, for singularities of analytic spaces or analytic functions was proven by G. Rond in \cite{rond18}.
\begin{rem}
Note that, by the above proof, in the special case when ${k}$ is a purely transcendental extension of $\mathbb{Q}$, we may replace,
in the statement of Theorem \ref{thm:homeotoalgebraicglobal}, $\overline\mathbb{Q}$ by $\mathbb{Q}$ if
$\mathbb{K}=\mathbb R$, resp. $\mathbb{Q}[i]$ if
$\mathbb{K}=\mathbb{C}$.
In general, it is an open problem whether every algebraic variety
is homeomorphic to a variety defined over $\mathbb{Q}$, resp. $\mathbb{Q}[i]$. In \cite{teissierCRAS90} B. Teissier gave an example of a complex analytic surface singularity defined over $\mathbb{Q} (\sqrt 5)$ which is not Whitney equisingular to any singularity defined over $\mathbb{Q}$.
\end{rem}
\begin{question}{1. Open problem}Is every complex algebraic variety homeomorphic to a variety defined over
$\mathbb{Q}[i]$ ?
Is every real algebraic variety homeomorphic to a variety defined over
$\mathbb{Q}$ ?
\end{question}
\begin{question}{2. Open problem}Is every complex analytic set germ homeomorphic to a set germ defined over $\mathbb{Q}[i]$ ?
Is every real analytic set germ homeomorphic to a
set germ defined over $\mathbb{Q}$ ?
\end{question}
\subsection{Analytic case}\label{ssec:deform-analytic}
Suppose now that $V$ is the germ at the origin of an analytic subset of $\mathbb{K}^n$ and let $g_1$,\ldots, $g_s\in\mathbb{K}\{x\}$ generate the ideal defining $V$. We describe below, following \cite{mostowski84}, the construction of Zariski equisingular deformations of $V$. The main idea is similar to that of subsection \ref{ssec:deform-polyn}, that is, to use the discriminants of subsequent linear projections to construct a system of "constraints", that is, equations and inequations satisfied by the $g_{i}$. These are the equations and inequations \eqref{eq: polynomials:f_i}, \eqref{eq:discriminants_i} defined below. Then any deformation of the
$g_{i}$ that satisfies the same constraints is Zariski equisingular.
The main difference comes from the fact that now we are not going to use the coefficients of the $g_{i}$, since there are infinitely many of them. Instead we treat \eqref{eq: polynomials:f_i}, \eqref{eq:discriminants_i} as a system of equations on the functions $u_i(x^i), a_{i,j}(x^i)$, that is, on the coefficients of these subsequent discriminants.
Let us consider a finite set of distinguished polynomials
$g_1, \ldots, g_s \in\mathbb{K}\{x\}$:
\begin{align*}
g_{i} ( x)= x_n^{r_i}+ \sum_{j=1}^{r_i} a_{n-1,i,j}
(x^{n-1}) x_n^{r_i-j} ,
\end{align*}
i.e. we suppose $a_{n-1,i,j} (0) =0$ for all $i,j$. Arrange
$a_{n-1,i,j}$ in a row vector
$a_{n-1} \in \mathbb{K}\{x^{n-1}\}^{p_n}$, where $p_n:=\sum_i r_i$.
Let $f_n$ be the product of the $g_i$'s. The generalized discriminants
$D_{n,i} $ of $f_n$ are
polynomials in the entries of $a_{n-1}$.
Let $l_n$ be a positive integer such that \begin{align}\label{discriminants:n}
D_{n,l} ( a_{n-1} )\equiv 0 \qquad l<l_n ,
\end{align}
and $D_{n,l_n} ( a_{n-1} ) \not \equiv 0$.
Then, after a linear change of coordinates $x^{n-1}$, by the Weierstrass Preparation Theorem, we may write
\begin{align*}
D_{n,l_n} ( a_{n-1} ) = u_{n-1} (x^{n-1}) \Big (x_{n-1}^{p_{n-1}}
+ \sum_{j=1}^{p_{n-1}} a_{n-2,j} (x^{n-2}) x_{n-1}^{p_{n-1}-j} \Big )
.
\end{align*}
where $u_{n-1}(0)\ne 0$ and for all $j$, $a_{n-2,j}(0)=0$. We denote $$
f_{n-1} = x_{n-1}^{p_{n-1}}+
\sum_{j=1}^{p_{n-1}} a_{n-2,j} ( x^{n-2}) x_{n-1}^{p_{n-1}-j} $$
and the vector
of its coefficients $a_{n-2,j}$ by $a_{n-2} \in \mathbb{K}\{x^{n-2}\}^{ p_{n-1}}$.
Let $l_{n-1}$ be the positive integer such that the first $l_{n-1}-1$ generalized discriminants $D_{n-1,l} $
of $f_{n-1}$ are identically zero and $D_{n-1,l_{n-1}} $ is not. Then again we define
$f_{n-2} ( x^{n-2})$ as the Weierstrass polynomial associated to
$D_{n-1,l_{n-1}} $.
We continue this construction and
define a sequence of pseudopolynomials $f_{i} ( x^i )$, $i=1, \ldots, n-1$, such that
$f_i= x_i^{p_i}+ \sum_{j=1}^{p_i} a_{i-1,j} (x^{i-1}) x_i^{p_i-j} $ is the Weierstrass polynomial associated to the first non-identically zero generalized discriminant $D_{i+1,l_{i+1}} ( a_{i} )$ of $f_{i+1}$,
where we denote in general $a_{i}= (a_{i,1} , \ldots , a_{i,p_{i+1}} )$,
\begin{align}\label{eq: polynomials:f_i}
D_{i+1,l_{i+1}} ( a_{i} ) = u_{i} (x^{i}) \Big (x_i^{p_i}+ \sum_{j=1}^{p_i} a_{i-1,j} (x^{i-1}) x_i^{p_i-j} \Big) ,
\quad i=0,...,n-1 .
\end{align}
Thus, for $i=0,...,n-1,$ the vector
of functions $a_i$ satisfies
\begin{align}\label{eq:discriminants_i}
D_{i+1,l} ( a_{i} )\equiv 0 \text { for } l<l_{i+1} , \quad D_{i+1,l_{i+1}} ( a_{i} ) \ne 0.
\end{align}
This means in particular that
\begin{align*}
D_{1,l} ( a_{0} ) \equiv 0 \quad \text {for } l<l_1 \text { and } D_{1,l_1} ( a_{0} ) \equiv u_0 ,
\end{align*}
where $u_0$ is a non-zero constant.
The following theorem follows from the construction of the family
$u_i(t, x^i)$, $a_{i,j}(t,x^i)$.
\begin{thm}\label{thm:deformation-analytic}
Suppose that we extend all the functions $u_i(x^i), a_{i,j}(x^i)$ to analytic families $u_i(t, x^i), a_{i,j}(t,x^i)\in \mathbb{K}\{t,x\}$, $u_i(0, x^i)= u_i(x^i), a_{i,j}(0,x^i)= a_{i,j}(x^i)$, where $t\in \mathbb{K}^ l$ is considered as a parameter. If the identities and
the inequations of \eqref{eq: polynomials:f_i}, \eqref{eq:discriminants_i} are still satisfied by these extensions $u_i(t, x^i), a_{i,j}(t,x^i)$
then the family $f_n(t,x)=0$ is Zariski equisingular.
\end{thm}
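The computation carried out after Theorem \ref{thm:deformation-polynomial} transfers verbatim to this local setting (again a computation of ours, subject to the conventions of the Appendix): for the germ $g_1=x_2^2-x_1^3\in\mathbb{K}\{x_1,x_2\}$ we have $a_{1,1,1}\equiv 0$, $a_{1,1,2}=-x_1^3$, $D_{2,1}(a_1)=4x_1^3$, hence $l_2=1$, $u_1=4$, $f_1=x_1^3$, $a_{0,j}\equiv 0$ and $l_1=3$, $D_{1,3}(a_0)=u_0\ne 0$. The deformation $a_{1,1,2}(t,x_1)=-(1+t)x_1^3$, with $a_{1,1,1}(t,x_1)\equiv 0$, changes only the unit, $u_1(t)=4(1+t)$, so \eqref{eq: polynomials:f_i} and \eqref{eq:discriminants_i} are preserved and Theorem \ref{thm:deformation-analytic} shows that the family $x_2^2-(1+t)x_1^3=0$ is Zariski equisingular.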
\subsection{Application: Analytic set germs are homeomorphic to algebraic ones}\label{ssec:homeoanalytic-algebraic}
The problem of approximation of analytic objects (sets or mappings) by algebraic ones has a long history, see e.g. \cite{bochnakkucharz84} and the bibliography therein. In particular, several results were obtained in the case of isolated singularities.
The local topological algebraicity of analytic set germs, in the general set-up, was first established in \cite{mostowski84} by Mostowski. Given an analytic set germ $(V,0) \subset (\mathbb{K}^n,0)$, Mostowski shows the existence
of a local homeomorphism $\tilde h: (\mathbb{K}^{2n+1},0) \to (\mathbb{K}^{2n+1},0)$ such that, after the embedding
$(V,0) \subset (\mathbb{K}^n,0) \subset (\mathbb{K}^{2n+1},0)$, the image $\tilde h(V)$ is algebraic.
It is easy to see that Mostowski's proof together with Theorem 2 of \cite{bochnakkucharz84} gives
the following result.
\begin{thm} \label{thm:homeotoalgebraic}
Let $\mathbb{K} = \mathbb R$ or $\mathbb{C}$.
Let $(V,0) \subset (\mathbb{K}^n,0)$ be an analytic germ.
Then there is a homeomorphism $h: (\mathbb{K}^n,0) \to (\mathbb{K}^n,0)$ such that $h(V)$
is the germ of an algebraic subset of $\mathbb{K}^n$.
\end{thm}
We remark that in \cite {mostowski84} Mostowski states his results
only for $\mathbb{K}=\mathbb R$ but his proof also works for $\mathbb{K}=\mathbb{C}$.
The proof of Theorem \ref{thm:homeotoalgebraic} is, in principle,
similar to the one of Theorem \ref{thm:homeotoalgebraicglobal}, but it is technically much more demanding. The main idea is to use Theorem \ref{thm:deformation-analytic} and deform analytic solutions
of \eqref{eq: polynomials:f_i} and \eqref{eq:discriminants_i}
to algebraic ones. Here by algebraic solutions we mean solutions given by algebraic power series (an algebraic power series is a power series algebraic over $\mathbb{K}[x_1,...,x_n]$, for example the power series
$u(x)$ such that $u(0)=1$ and $u(x)^2=1+x$). Recall that the Artin approximation theorem states that convergent power series solutions of algebraic equations can be approximated by algebraic power series solutions.
Clearly, we need a stronger result, not only an approximation but also a parameterized deformation from the old, convergent solutions to the new, algebraic ones. This is provided
by P{\l}oski's version of Artin approximation, see \cite{ploski74}.
Finally, in order to apply Theorem \ref{thm:deformation-analytic} we need the nested Artin approximation, i.e. solutions
$u_i(t,x^i), a_{i,j}(t,x^i)\in \mathbb{K}\{t,x^i\}$, of \eqref{eq: polynomials:f_i} and \eqref{eq:discriminants_i}, that depend only on $x_1, \ldots, x_i $
and not on $x_k$ for $k>i$. Nested Artin Approximation Theorem follows from the N\'eron Desingularization, proven by Popescu \cite{popescu86}, and was not available at the time Mostowski's paper \cite {mostowski84} was written. Instead, Mostowski proposes a recursive construction
of the system of equations \eqref{eq: polynomials:f_i} and \eqref{eq:discriminants_i} giving Zariski equisingularity conditions by local linear changes of coordinates and,
at the same time, step by step, provides the deformation-approximation by algebraic power series solutions following the recipe given in \cite{ploski74}.
One may shorten Mostowski's construction significantly by using a stronger result, the nested variant of P{\l}oski's version of the Artin Approximation. This is done in \cite{BPR17}, where
such Nested Artin-P{\l}oski-Popescu Approximation Theorem is proven.
This theorem was used in \cite{BPR17} to deform $u_i(x^i), a_{i,j}(x^i)$ to algebraic power series solutions of \eqref{eq: polynomials:f_i} and \eqref{eq:discriminants_i}. Furthermore, a result of Bochnak-Kucharz \cite{bochnakkucharz84},
based on Artin-Mazur Theorem of \cite{artinmazur65}, allows one
to approximate the zeros of algebraic power series (or equivalently germs of Nash functions) by the zeros of polynomial functions.
A stronger version of Theorem \ref{thm:homeotoalgebraic} was given in \cite{BKPR18} where it was shown that such a homeomorphism $h$ can be found with any prescribed order of tangency at the origin. \\
\begin{question}{3. Open problem}{What is the best level of regularity of homeomorphisms for which the statement of Theorem \ref{thm:homeotoalgebraic} holds ? It is known, for instance, that Theorem \ref{thm:homeotoalgebraic} is no longer true if one replaces "homeomorphism" by "diffeomorphism", for examples see \cite{bochnakkucharz84} and the last section of \cite{BPR17}. It is not known whether Theorem \ref{thm:homeotoalgebraic} holds true if one requires the homeomorphism $h$ to be bi-Lipschitz.}
\end{question}
\subsection{Equisingularity of function germs.} \label{ssec:functions}
Zariski Equisingularity can also be used to construct topologically trivial deformations of analytic map germs, see \cite{varchenkoizv72}. Let us consider first the case of functions as studied in \cite{BPR17}, that is the mappings with values in $\mathbb{K}$.
Given a family $g_t(y) = g(t, y_1, \ldots , y_{n-1})$ of such germs, parameterized by $t\in (T,t_0)$, we consider the associated family of set germs defined by the graph of $g$, that is, the zero set of
$F(t,x_1, \ldots, x_n) := x_1 - g(t, x_2, \ldots , x_{n})$, and construct a topological trivialization $h_t$ of $V=V(F)$ that does not move the variable $x_1$:
\begin{align}\label{eq:preservex_1}
h_t (x_1, \ldots, x_n) = (x_1 , \hat h_t (x_1, x_2, \ldots, x_n))
\end{align}
so that $V\ni (t_0,x)$ if and only if $(t, h_t (x)) \in V$.
Set $\sigma_t (y) := \hat h_t (g_{t_0}(y), y) $.
Then
$$
g_t\circ \sigma_t = g_{t_0},
$$
that is, $g_t$ and $g_{t_0}$ are right topologically equivalent (i.e. equivalent by a homeomorphism of the source). Moreover, since $\sigma_t$ depends continuously on $t$, the family $g_t$ is topologically trivial.
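Explicitly (a direct check using \eqref{eq:preservex_1} and the definition of $V$): for $y=(x_2,\ldots,x_n)$ and $x=(g_{t_0}(y),y)$ we have $(t_0,x)\in V$, hence $(t,h_t(x))\in V$, and since $h_t$ does not move the first coordinate this means
$$
g_{t_0}(y)=x_1=g_t\big(\hat h_t (g_{t_0}(y),y)\big)=g_t(\sigma_t(y)).
$$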
We now follow the main ideas of \cite{BPR17} in order to explain the construction of topological trivialization of a family $V_t$ of analytic subspaces of $(\mathbb{K} ^n,0)$ that preserves the variable $x_1$.
For this we adapt the definition of Zariski equisingular families, Definition \ref{def:ZAinfamilies}, by changing it slightly, and also
by changing accordingly the construction of equisingular deformations.
The point is that, when we make linear changes of coordinates in order to replace a function by its Weierstrass polynomial, we are no longer allowed to change the variable $x_1$ or to mix it with the other variables.
So if one of the subsequent discriminants is divisible by $x_1$ we cannot proceed the way we did before. Therefore we replace the assumptions (2) and (3) of Definition \ref{def:ZAinfamilies} by
\begin{enumerate}
\item [2'.]
There are $q_i\in \mathbb N$ such that the discriminant of $(F_{i})_{red}$ divides $x_1^{q_i} F_{i-1} (t, x^{i-1})$.
\item [3'.]
$F_1\equiv 1$.
\end{enumerate}
Then the construction of the homeomorphisms that we presented in Section \ref{ssec:topequising} gives the following version of Theorem \ref{thm:topological_triviality}, that is a simplified statement of
\cite[Theorem 5.1]{BPR17}.
\begin{thm}\label{thm:triv-functions}
Suppose that $V$ is Zariski equisingular with respect to the parameter $t$ in the sense of Definition \ref{def:ZAinfamilies}, with the conditions 2 and 3 replaced by conditions 2' and 3'. Then we may require that the homeomorphism $\Phi $ of \eqref{eq:trivialization} satisfies
additionally $\Psi _1(t, x_1) =x_1$.
\end{thm}
\begin{proof}
Idea of proof.
Because $F_1\equiv 1$, by 2', the discriminant locus of $F_2$ is either empty or given by $x_1=0$.
Therefore we may take $\Psi _1(t, x_1) =x_1$.
Then we show by induction on $i$ that each $\Phi _i$
can be lifted so that the lift $\Phi _{i+1}$ preserves the zero set of
$F_{i+1}$ and the values of $x_1$. The former
condition follows by inductive assumption and the fact that $\Phi _i$ preserves the discriminant locus of $F_{i+1}$.
The latter condition is satisfied trivially since $\Phi _{i+1}$
is a lift of $\Phi _{i}$.
\end{proof}
As a corollary we obtain the following result.
\begin{thm} [{\cite[Theorem 1.2]{BPR17}}]
\label{thm:homeotopolynomial}
Let $\mathbb{K} = \mathbb R$ or $\mathbb{C}$.
Let $g: (\mathbb{K}^n,0)\to (\mathbb{K}, 0)$ be an analytic function germ.
Then there is a homeomorphism $\sigma : (\mathbb{K}^n,0) \to (\mathbb{K}^n,0)$ such that $g\circ \sigma$
is the germ of a polynomial.
\end{thm}
For the proof of Theorem \ref{thm:homeotopolynomial} one follows the
proof of Theorem \ref{thm:homeotoalgebraic}, which gives such a homeomorphism to a Nash function germ and not directly to a polynomial, since we cannot get a better result just using the Artin approximation.
Recall that a function is \emph{Nash} if it is analytic and satisfies an algebraic equation. Thus $f:(\mathbb{K}^n,0) \to \mathbb{K}$ is the germ of a Nash function if and only if its Taylor series
is an algebraic power series. For more details on real and complex Nash functions and sets see \cite{BCR}, \cite{bochnakkucharz84}.
The final step of the proof of Theorem \ref{thm:homeotopolynomial}, passing from a Nash germ to a polynomial germ, follows from
\cite{bochnakkucharz84}, that is, in essence, from the Artin-Mazur Theorem,
and a Thom stratification argument; see section 5.5 of \cite{BPR17} for details.
There is a common generalization of Theorems \ref{thm:homeotopolynomial} and \ref{thm:homeotoalgebraic}.
\begin{thm} [{\cite[Theorem 1.3]{BPR17}}] \label{thm:generalhomeo}
Let $(V_i,0) \subset (\mathbb{K}^n,0)$ be a finite family of analytic set germs and let $g: (\mathbb{K}^n,0)\to (\mathbb{K}, 0)$ be an analytic function germ. Then there is a homeomorphism $\sigma : (\mathbb{K}^n,0) \to (\mathbb{K}^n,0)$ such that $g\circ \sigma$
is the germ of a polynomial, and for each $i$, $\sigma^{-1} (V_i)$
is the germ of an algebraic subset of $\mathbb{K}^n$.
\end{thm}
Theorem \ref{thm:generalhomeo} cannot be extended to many functions or to maps with values in $\mathbb{K}^p$ for $p > 1$, see \cite[Example 6.3]{BPR17}.
\begin{cor}[{\cite[Corollary 1.4]{BPR17}}]
Let $g: (V,p)\to (\mathbb{K},0)$ be an analytic function germ defined on the germ $(V,p)$ of an analytic space. Then there
exists an algebraic affine variety $V_1$, a point $p_1\in V_1$, the germ of a polynomial function $g_1:(V_1,p_1)\to
(\mathbb{K},0)$ and a homeomorphism $\sigma : (V_1,p_1) \to (V,p)$ such that $g_1= g\circ \sigma$.
\end{cor}
We do not know whether the above results hold true with "homeomorphism" replaced by "bi-Lipschitz homeomorphism".
\begin{question}{4. Open problem}{Is an analytic function germ bi-Lipschitz homeomorphic to a Nash or a polynomial germ ? }
\end{question}
Unlike the analogous open problem for analytic set germs, it is more likely that the answer to this one is negative. The reason is the following.
By the existence of Lipschitz stratification, cf. \cite{mostowski85}, \cite{PA94}, the bi-Lipschitz equivalence of analytic set germs does not have continuous moduli (the principle of generic bi-Lipschitz triviality, analogous to the one described in subsection
\ref{ssec:arbitrarycodimension}, holds true). On the other hand, the bi-Lipschitz right equivalence of analytic function germs admits continuous moduli, see \cite{henryparus2003}, \cite{henryparus2004}.
Using Theorem \ref{thm:triv-functions} one may show that the principle of generic topological equisingularity of analytic function germs holds true.
(An alternative proof follows again by stratification theory, more precisely by Thom stratification and Thom-Mather Isotopy Lemma.)
Let $T$ be a $\mathbb{K}$-analytic space and let $g_t(y)=g(t, y) : (T,t_0)\times (\mathbb{K}^n,0) \to (\mathbb{K} ,0)$ be a $\mathbb{K}$-analytic family of $\mathbb{K}$-analytic function germs.
We say that the family $g_t(y)$ is \emph{topologically trivial at $t_0$} (for topological right equivalence) if there are an open neighborhood $T' $ of $t_0$ in $T$, neighborhoods $\Omega_0$ of the origin in $\mathbb{K}^n$ and $\Omega$ of $(t_0,0)$ in $T\times \mathbb{K}^{n}$, and a
homeomorphism
\begin{align*}
\Phi : T' \times \Omega_0 \to \Omega,
\end{align*}
such that
$g(\Phi (t,y))=g(t_0,y)$. Then the following statement holds.
\begin{cor}[Principle of generic topological equisingularity of analytic function germs,
{\cite[Theorem 8.5]{PP17}}]\label{thm:gen-top-equising-functions}
Let $T$ be a $\mathbb{K}$-analytic space and let $g_t(y) : (T,t_0)\times (\mathbb{K}^n,0) \to (\mathbb{K} ,0)$ be a $\mathbb{K}$-analytic family of $\mathbb{K}$-analytic function germs. Let $t_0\in T$.
Then there exists an analytic stratification of an open neighborhood of $t_0$ in $T$ such that for every stratum $S$ and every $t'_0\in S$, the family
$g_t(y), t\in S$ is topologically trivial at $t_0'$.
\end{cor}
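A classical illustration (not taken from \cite{PP17}) is the family of germs $g_t(x,y)=xy(x-y)(x-ty)$ at the origin of $\mathbb{C}^2$, $T=\mathbb{C}$. For $t\ne 0,1$ the zero set is an ordinary four-fold point (four distinct lines), and along the strata of a stratification as in the corollary the family is topologically trivial, even though the germs are in general not analytically right equivalent to each other, since the cross-ratio of the four tangent lines is an invariant of analytic equivalence. The parameters $t=0$ and $t=1$, where the zero set degenerates to three lines (one of them double), necessarily lie in lower-dimensional strata.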
\subsection{Local topological classification of smooth mappings.}\label{ssec:smoothmappings}
The principle of generic topological equisingularity does not hold for germs of mappings. That is, it is known, by an example of Thom \cite{thom62} (see also \cite{nakai84}), that the topological classification of real or complex, analytic or polynomial map germs admits continuous moduli. This means that there are families of polynomial map germs $f_t:(\mathbb{K}^n,0)\to (\mathbb{K}^p,0)$, depending polynomially on $t$, that have different topological types for different $t$, provided $n\ge 3$, $p\ge 2$, see \cite{nakai84}.
Recall that we say that two germs $f_i:(\mathbb{K}^n,0)\to (\mathbb{K}^p,0)$, $i=1,2$,
have \emph{the same topological type} if there exist homeomorphism germs
$h:(\mathbb{K}^n,0)\to (\mathbb{K}^n,0)$ and $g:(\mathbb{K}^p,0)\to (\mathbb{K}^p,0)$ such that
$f_1\circ h= g\circ f_2$, in other words, if they are right-left topologically equivalent.
A smooth map germ $f:(\mathbb R^n,0)\to (\mathbb R^p,0)$ is \emph{topologically $r$-determined} if every smooth map germ with the same $r$-jet as $f$ is topologically equivalent to $f$. In \cite{thom64} Thom proposed a stabilization theorem: For any positive integer $r$, there is a closed semialgebraic subset $\Sigma_r$ of the $r$-jet space $J_r(n,p)$ such that
(i)
$\codim \Sigma_r \to \infty$ as $r\to \infty$, and (ii) if the $r$-jet of a map-germ $f$ belongs to $J_r(n,p) \setminus \Sigma_r$, then $f$ is topologically $r$-determined. In other words "most" smooth mappings, that is up to a set of infinite codimension in the jet space, look algebraic and are finitely determined. Thom gave a sketch of proof in \cite{thom64}. The first complete proof was given by Varchenko in
\cite{varchenkoizv73,varchenkoizv74}
using ideas very different from those of Thom, namely Zariski equisingularity. Actually, Varchenko proved a much stronger result.
\begin{thm}[{\cite[Theorem 2]{varchenkoICM75}}]\label{thm:finiteldetermined}
There exists a partition of the space of $r$-jets $J_r(n,p)$ in disjoint semialgebraic sets $V_0, V_1, \ldots$ having the following properties.
\begin{enumerate}
\item
Maps whose jets live in the same $V_i$, $i>0$, are (right-left) topologically equivalent.
\item
Any germ whose $r$-jet is in $V_i$ for $i>0$ is simplicial for suitable triangulations of
$\mathbb R^n$ and $\mathbb R^p$.
\item
The codimension of $V_0$ in $J_r(n,p)$ tends to infinity as $r$ tends to infinity.
\end{enumerate}
\end{thm}
The stabilization theorem of Thom was also shown by du Plessis in
\cite{duplessis82}. The proof given there follows Thom's original ideas: stratification theory, transversality, isotopy lemmas and Mather's ideas about versal unfoldings.
Another application of the Zariski equisingularity method to finite determinacy was given in
\cite{bobadilla2004}, where the function case ($p=1$) was considered. Note that topologically finitely determined function germs
$f:(\mathbb{K}^n,0)\to (\mathbb{K},0)$ have isolated singularities (or are regular).
In \cite{bobadilla2004} Bobadilla gives a meaningful version of Theorem
\ref{thm:finiteldetermined} for non-isolated singularities by considering functions belonging to a fixed ideal $I$ instead of the whole space of analytic germs at the origin. We refer the reader to
\cite{bobadilla2004} for details.
\begin{rem}
If the target space of $f_t$ has dimension bigger than one, then the method we applied in the previous subsection to trivialize families of function germs may not work. In general we cannot trivialize the family of graphs starting from the variables in the target, as we did by taking $F(t,x_1, \ldots, x_n) := x_1 - g(t, x_2, \ldots , x_{n})$, if this graph is not included in the zero set of a Weierstrass polynomial in a variable of the source.
This is related to the presence of fibers of dimension bigger than the expected dimension (the dimension of the source minus the dimension of the target), not only for $f$ but for every function (discriminant) obtained during the construction process.
Even if this phenomenon of "blowing-up" of the special fiber is not present, that is, if we can apply the Zariski method without mixing the variables of the source and of the target, we cannot, in general, construct a topological trivialization that is the identity on the target. That means that, if $p\ge 2$, we get right-left equivalence instead of the right equivalence of Corollary \ref{thm:gen-top-equising-functions}.
\end{rem}
\section{Equisingularity along a nonsingular subspace.
Zariski's dimensionality type}\label{sec:ZEalong}
In \cite[Definition 3] {zariski71open}, see also \cite{varchenkoICM75}, Zariski introduced the notion of algebro-geometric equisingularity, now called Zariski equisingularity, of an algebroid hypersurface $V\subset \mathbb{K}^{r+1}$ along a nonsingular subspace of $\Sing V$. This notion can be easily adapted to the complex and real analytic set-ups.
Let $V=f^{-1}(0)$ be an analytic hypersurface defined in a neighborhood of a point $P \in \mathbb{K}^{r+1}$. As before we assume that $f$ is reduced. Let
$W $ be a nonsingular analytic subspace of $\Sing V$ containing $P$.
Let $x_1,x_2, \ldots, x_{r+1}$ be a local coordinate system at $P$.
Consider a set of $r$ elements $z_1, z_2, \ldots ,z_r$ of the local ring of $V$ at $P$ :
$$
z_i = z_i(x_1, x_2, \ldots, x_{r+1}) = z_{i,1} + z_{i,2} + \cdots , \quad
i = 1, 2, \ldots, r,
$$
where the $z_i$ are convergent power series in the $x$'s, and $z_{i,\alpha}$ is homogeneous of degree $\alpha$.
We say that the $r$ elements $z_i$ form \emph{a set of parameters} if the following two conditions are satisfied :
\begin{enumerate}
\item [(a)]
\emph{$x = 0$ is an isolated solution of the $r + 1$ equations $z_1(x) = z_2(x) = \cdots = z_r(x) = f(x) = 0$.}
\item [(b)]
\emph{The $ r$ linear forms $z_{i,1}$ are linearly independent.}
\end{enumerate}
If condition (b) is satisfied, then the $r$ linear equations
$z_{i,1} (x) = 0$, $i = 1, 2, \ldots , r$, define a line $l_z$ through $P$ and
the parameters $z_i$ define a co-rank one projection $\pi_z $ of a neighborhood of
$P$ in $\mathbb{K}^{r+1}$ onto a neighborhood of $\overline P= \pi_z (P)$
in $\mathbb{K}^{r}$.
This projection $\pi_z$ is called \emph{permissible} if the fiber $\pi_z^{-1}(\pi_z (P ))$, which is a nonsingular curve, is transverse to $W $ (here
"transverse" means that the tangent line to the fiber is not tangent to $W$).
If this curve is transverse to $V$ at $P$, that is $l_z$ does not belong to the tangent cone $C_P (V)$ to $V$ at $P$, then the projection is called \emph{transversal} (or transverse) and the $z_i(x), i=1, \ldots, r$, are \emph{transversal parameters}.
Let $\pi_z$ be a permissible projection and let $\pi_{z,V}$ denote the restriction of $\pi_z$ to $V$.
We may then suppose that locally $f$ is a suitable reduced Weierstrass polynomial whose discriminant $D_f$ is an analytic function in
$(z_1, z_2, \ldots ,z_r)$. Denote by $\Delta_z$ its zero set, that is, the discriminant locus of $\pi_{z,V}$.
The image $\pi_z (W )$ is a nonsingular variety $\overline {\W}$ of the same dimension as $W $.
Since we have assumed that $W\subset \Sing V$, we have $\overline {\W}\subset \Delta_z$.
If $\dim W = \dim \Delta_z =r-1$ then we say that $V$ is \emph{Zariski equisingular at $P $ along $W $} if $\overline P$ is a non-singular point of $\Delta_z$. In the general case, Zariski's definition is inductive and goes as follows.
\begin{defn}\label{def:ZAalong}
We say that $V$ is \emph{Zariski equisingular at $P $ along $W $ } if there exists a permissible projection $\pi_z$ such that
$\Delta_z$ is Zariski equisingular along $\overline {\W}$ at $\overline P$
(or if $\overline P$ is a nonsingular point of $\Delta_z$).
\end{defn}
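To illustrate the definition at the simplest singular points (a computation of ours), let $V=\{x^2-zy^2=0\}\subset\mathbb{K}^3$, the Whitney umbrella, so $r=2$, let $W$ be the $z$-axis, which is $\Sing V$, and let $P=(0,0,z_0)$ with $z_0\ne 0$. The parameters $z_1=y$, $z_2=z-z_0$ satisfy conditions (a) and (b), and the projection $\pi_z(x,y,z)=(y,z-z_0)$ is permissible and transversal: the line $l_z$ is the $x$-axis direction, which does not lie in the tangent cone $\{x^2=z_0y^2\}$ of $V$ at $P$. The discriminant of $x^2-zy^2$ with respect to $x$ is $4zy^2$, whose zero set near $\overline P=0$ is the nonsingular curve $\{y=0\}=\pi_z(W)$; hence $V$ is Zariski equisingular along $W$ at $P$.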
\subsection{Equimultiplicity. Transversality of projection}\label{ssec:equimultiplicity}
As Zariski states on page 489 of \cite{zariski71open}, the algebro-geometric equisingularity, i.e. Zariski equisingularity as defined
in Definition \ref{def:ZAalong}, implies equimultiplicity.
\begin{prop}\label{prop:equimultiplicity}
If $V$ is Zariski equisingular at $P $ along $W $ then the multiplicity
of $V$ is constant along $W $.
\end{prop}
Zariski proves it when $\dim W = \dim V-1=r-1$, see \cite[Theorem 7]{zariski65-S1}, and in the general algebroid case in \cite{zariski75}.
For a proof in the complex, and also real analytic case, see
\cite[Proposition 3.6]{PP17}.
Similarly to Definition \ref{def:ZAalong} one may
define \emph{transverse Zariski equisingularity along a nonsingular subspace} as the one given by
transverse projections. By Proposition \ref{prop:equimultiplicity}, because the equimultiple families are normally pseudo-flat
(continuity of the tangent cone), the transversality of $\pi_z$ at $P$ implies the transversality at all points of $W $ in a neighborhood of $P$.
One can also define \emph{generic or generic linear Zariski equisingularity along a nonsingular subspace}. For the generic linear version this means that we require at each stage the projection to be chosen from a Zariski open non-empty set of linear projections. Note that a priori this notion depends on the choice of coordinates and it is not clear whether it is preserved by nonlinear changes of coordinates. We discuss the notion of generic projection in subsections \ref{ssec:motivation} and \ref{ssec:dimensionality}.
\subsection{Relation to other equisingularity conditions}\label{ssec:relationto}
As we mentioned before, Varchenko showed in \cite{varchenkoizv72}, see also
\cite{varchenkoICM75}, that in the complex or real analytic case Zariski equisingularity implies topological triviality.
In \cite[Question E]{zariski71open}, Zariski asked as well whether Zariski equisingularity implies Whitney's conditions. This has been disproved by
Brian{\c c}on and Speder in \cite{brianconspeder75a} for the equisingularity as defined in Definition \ref{def:ZAalong}.
In \cite{speder75} Speder shows that if $V$ is Zariski equisingular along
a nonsingular variety $W $ for sufficiently generic projections, then the pair $(\Reg (V), W )$ satisfies Whitney's Conditions. For instance, generic linear projections, that is from a Zariski open non-empty subset of such projections, are generic in the sense of Speder. This result was improved in \cite{PP17}, where it was shown that transverse Zariski equisingularity, both in real and complex analytic cases, implies Whitney's Conditions, see Theorem 4.3 and Theorem 7.1 of \cite{PP17} for precise statements.
There are several classical examples describing the relation between
Zariski equisingularity and Whitney's conditions. The general set up
for these examples is the following. Consider a complex algebraic hypersurface
$X\subset \mathbb{C}^4$ defined by a polynomial $F(t,x,y,z)=0$ such that $\Sing X =T$, where $T$ is the $t$-axis. Let $\pi : \mathbb{C}^4 \to T$ be the standard projection. In all these examples $X_t =\pi ^{-1} (t)$, $t\in T$, is a family of isolated singularities, topologically trivial along $T$. These examples relate the following conditions:
\begin{enumerate}
\item
$X$ is Zariski equisingular along $T$, Definition \ref{def:ZAalong}.
\item
$X$ is Zariski equisingular along $T$ for a transverse projection.
\item
$X$ is Zariski equisingular along $T$ for a generic system of coordinates. Here we consider "generic" in the sense of \cite{brianconhenry80}. It is equivalent, see loc. cit., to being generic linear,
or generic in the sense of Zariski \cite{zariski1979}, which we recall in Section \ref{ssec:dimensionality} below.
\item
The pair $(X\setminus T,T)$ satisfies Whitney's conditions (a) and (b).
\end{enumerate}
Clearly (3)$\Rightarrow$(2)$\Rightarrow$(1). Speder showed (3)$\Rightarrow$(4) in \cite{speder75}
and (2)$\Rightarrow$(4)
for families of complex analytic hypersurfaces with isolated singularities in $\mathbb{C}^3$ in his thesis
\cite{spederthese} (not published).
Theorem 7.1 of \cite{PP17}
gives (2)$\Rightarrow$(4) in the general case. As the examples below show, all the other
implications are false.
\begin{example}[\cite {brianconspeder75a}]\label{ex:briancon-speder}
\begin{align}
F(x,y,z,t)= z^5 + t y^6 z + y^7 x + x^{15}
\end{align}
This example satisfies (1) for the projections $(x,y,z) \to (y,z) \to x$ but (4) fails.
As follows from Theorem 7.1 of \cite{PP17}, (2) fails as well.
\end{example}
\begin{example}[\cite {brianconspeder75b}]
\begin{align}
F(x,y,z,t)= z^3 + t x^4 z + y^6 + x^{6}
\end{align}
In this example (4) is satisfied and (3) fails.
This example satisfies (1) for the projections $(x,y,z) \to (x,z) \to x$.
\end{example}
\begin{example}[\cite {luengo85}]\label{ex:luengo}
\begin{align}
F(x,y,z,t)= z^{16} + ty z^3 x^7 + y^6z^4+ y^{10} + x^{10}
\end{align}
In this example (2) is satisfied and (3) fails.
\end{example}
\begin{example}[\cite{parusinski1985}]
\begin{align}
F(x,y,z,t)= x^9 + y ^{12} + z^{15} + tx^3 y ^4 z^5
\end{align}
In this example (4) is satisfied and (1) fails. This also shows that (4) does not imply (2).
\end{example}
\subsection{Lipschitz equisingularity}\label{ssec:lipschitz}
In 1985 T. Mostowski \cite{mostowski85} introduced the notion of Lipschitz stratification and showed the existence of such stratification for germs of complex analytic subsets of $\mathbb{C}^n$. For complex algebraic varieties, such stratification exists globally.
The existence of Lipschitz stratification for real analytic spaces and algebraic varieties was shown in \cite{Lipschitz-Fourier}, \cite{PA94}.
Lipschitz stratification satisfies the extension property of stratified Lipschitz vector fields from lower-dimensional to higher-dimensional strata, and therefore implies local bi-Lipschitz triviality along each stratum (and hence Lipschitz equisingularity as well). Mostowski's construction is similar to the one of Zariski, but involves considering many co-rank one generic projections at each stage of construction. For more on Lipschitz stratification,
we refer the interested reader to \cite{mostowski85}, \cite{Lipschitz-review}, \cite{halupczok-yin2018}.
By Lipschitz saturation, see \cite{PTpreprint}, an equisingular family of complex analytic plane curves is bi-Lipschitz trivial, i.e. trivial by a local ambient bi-Lipschitz homeomorphism. In general, there is a conjectural relation between Lipschitz and Zariski equisingularity, at least in the complex analytic set up.
\begin{question}{5. Open problem}{Are generically Zariski equisingular families of complex hypersurfaces bi-Lipschitz equisingular? Does Zariski equisingularity provide a "natural" way of construction of Lipschitz stratification in the sense of Mostowski ?
}
\end{question}
For families of complex surface singularities, that is, along a nonsingular subspace of codimension 2, the following results have been announced in \cite{NPpreprint}, \cite{PPpreprint19}. In \cite{NPpreprint} it was shown that generically Zariski equisingular families of normal complex surface singularities are bi-Lipschitz trivial. In \cite{PPpreprint19} it was shown that a natural stratification given by subsequent generic
(or generic linear) projections of a complex hypersurface satisfies Mostowski's Conditions in codimension 2. In particular, the latter result implies that generically Zariski equisingular families of (not necessarily isolated) complex surface hypersurface singularities are Lipschitz equisingular.
\subsection{Zariski dimensionality type. Motivation}\label{ssec:motivation}
When $\dim W = \dim V - 1$,
$V$ is Zariski equisingular at $P $ along $W $
if and only if $V$ is isomorphic to the total space of an equisingular family of plane curve singularities along $W $, see \cite[Theorem 4.4]{zariski65-S2}. Then, moreover, Zariski equisingularity can be realized by
any transversal projection $\pi_z$.
Guided by this example, Zariski conjectured in \cite[Question I]{zariski71open},
that Zariski equisingularity for a single permissible projection implies the equisingularity for generic projection (or for almost all projections that we recall later in section \ref{ssec:almostall}).
An affirmative answer to this question would imply, in particular, that if there exists an equisingular projection then there exists a transversal equisingular projection. Both statements turned out not to be true.
In \cite{luengo85} Luengo gave an example of a family of surface
singularities in $\mathbb{C}^3$ that is Zariski equisingular for one projection, that is even transversal, but is not equisingular for the generic projection, see Example
\ref{ex:luengo}. Brian\c con and Speder gave in \cite{brianconspeder75a}
an example that is equisingular for one projection but there is no transversal projection that gives Zariski equisingularity, see Example
\ref{ex:briancon-speder}.
Therefore Zariski in \cite{zariski1979} proposes a different strategy. Instead of arbitrary permissible projections, or even transversal
projections, Zariski uses generic projections to define the equisingularity relation. (We recall what "generic" means for Zariski in the next subsection.) Having fixed such an equisingularity relation, Zariski introduces the notion of dimensionality type. For this the equisingularity relation should first satisfy the following property.\\
\noindent
\emph{The set of points of equivalent singularities forms a locally nonsingular subspace of $V$ whose codimension depends only on this equisingularity class.}
\\
Thus, for a point $P \in V$ the set of points equivalent to $(V,P)$ is nonsingular and its codimension in $V$ characterizes how complicated the singularity is. This codimension is then called \emph{the dimensionality type of $(V,P)$}. The points of dimensionality type $0$ are the nonsingular points of $V$. The simplest singular points of $V$, of dimensionality type $1$, are those at which $V$ is isomorphic to the total space of an equisingular family of plane curves.
The closure of the set of points of a fixed equisingularity type may contain
points of different equisingularity types, but only of higher dimensionality type, and only on finitely many such equisingular strata.
The precise meaning of the word "generic" is the main point of Zariski's definition. Let us make a quick comment on an apparently obvious choice. Similarly to Definition \ref{def:ZAalong}, one may
define \emph{generic linear Zariski equisingularity} as the one given by linear projections belonging to a Zariski open non-empty subset of linear projections.
Except in the case of dimensionality type $1$, it is not clear whether such a notion of generic linear Zariski equisingularity is preserved by non-linear local changes of coordinates, nor whether it implies generic Zariski equisingularity.
\subsection{Zariski dimensionality type}\label{ssec:dimensionality}
Formally, Zariski's original definition of the dimensionality type requires a field extension by infinitely many indeterminates. Therefore, exceptionally in this subsection, we work over an arbitrary algebraically closed field of characteristic zero that will be denoted by ${k}$,
and instead of the category of complex analytic spaces we consider the category of algebraic or algebroid varieties. Recall that the algebroid varieties are the varieties defined by ideals of the rings of formal power series, see \cite[Ch. IV]{lefschetz53} and \cite[Section 2]{zariski1979}.
We note, however, that in \cite[Proposition 5.3]{zariski1979} Zariski shows that his definition involving such a field extension can be replaced by a condition that is based on the notion of almost all projections that does not require a field extension.
We recall the approach via almost all projections in the next subsection.
Let ${k}$ be an algebraically closed field of characteristic zero.
Consider an algebroid hypersurface $
V=f^{-1}(0) \subset ({k}^{r+1},P )$ at $P \in {k}^{r+1}$ defined in a local system of coordinates by a formal power series $f\in {k} [[x_1, \ldots , x_{r+1}]]$ (that we assume reduced). Zariski's definition of the dimensionality type is based on the following notion of generic projection. \emph{The generic projection}, in the sense of \cite{zariski1979}, is the map
$\pi_u (x) = (\pi_{u,1}(x), \ldots, \pi_{u,r}(x))$, with
\begin{align}\label{eq:genprojection}
\pi_{u,i} (x) = \sum_{d\ge 1} \sum_{\nu_1+ \cdots + \nu_{r+1}=d} u^{(i)}_{\nu_1, \cdots, \nu_{r+1} } x^\nu .
\end{align}
This map is defined over ${k}^*$, any field extension of $k$ that contains all coefficients $u^{(i)}_{\nu_1, \cdots, \nu_{r+1} }$
as indeterminates, thus formally
$\pi_u: (({k}^*)^{r+1},P ) \to (({k}^*)^{r},P_0^*)$, where
$P_0^*= \pi_u (P)$. Denote by
$\Delta_u ^*\subset ({{k}^*}^{r},P_0^* )$ the discriminant locus of
$(\pi_u)_{|V^*}$, where $V^*=f^{-1}(0) \subset ({{k}^*}^{(r+1)},P )$.
Let $W $ be a nonsingular algebroid subspace of $\Sing V$ and let
$\overline {\W^*} = \pi_u (W )$. If $\dim W =\dim V-1$ then we say that
$V$ is generically Zariski equisingular at $P $ along $W $
if $\bar P^*$ is a non-singular point of $\Delta_u^*$. In general, the definition is similar to Definition \ref{def:ZAalong}.
\begin{defn}\label{def:generic equisingularity}
We say that $V$ is \emph{generically Zariski equisingular at $P $ along $W $ }
if $\Delta_u ^*$ is generically Zariski equisingular
along $\overline {\W^*}$ at $\bar P^*$ (or if $\bar P^*$ is a non-singular point of $\Delta_u^*$).
\end{defn}
The definition of dimensionality type of \cite{zariski1979} is again recursive. It is defined
for any point $Q$ of $V$, not only the closed point $P $.
\begin{defn}\label{def:dimensionalitytype}
Any simple (i.e. non-singular) point $Q$ of $V$ is of dimensionality type $0$. Let $Q$ be a singular point of $V$ and let $Q^*_0= \pi_u (Q)$.
Then \emph{the dimensionality type of $V$ at $Q$},
denoted by $\operatorname {d.t. } (V,Q)$, is equal to
$$\operatorname {d.t. } (V,Q) = 1+ \operatorname {d.t. } (\Delta^*_u,Q^*_0).$$
\end{defn}
The notions of generic Zariski equisingularity and of dimensionality type are independent of the choice of
this field extension ${k}^*$, see
\cite{zariski1979} and \cite{zariski1980}.
As follows from \cite{zariski1979} the set of points where the dimensionality type is constant, say equal to $\sigma$, is either
empty
or a nonsingular locally closed subvariety of $V$ of codimension $\sigma$.
The dimensionality type defines a stratification $V= \sqcup_\alpha S_\alpha $ of $V$ that satisfies the frontier condition, i.e. if
$S_\alpha \cap \overline {S_\beta} \ne \emptyset$ then $S_\alpha \subset \overline {S_\beta}$, and
$V$ is generically Zariski equisingular along $W $ at $P$ if and only if $W$ is contained in the stratum containing $P$.
A singularity is of dimensionality type $1$ if and only if it is isomorphic to the total space of an equisingular family of plane curve singularities, see \cite[Theorem 4.4]{zariski65-S2}.
Moreover, if this is the case, then
$V$ is such an equisingular family for any transverse system of coordinates.
\subsection{Almost all projections.}\label{ssec:almostall}
In \cite[Proposition 5.3]{zariski1979} Zariski shows that, in Definitions \ref{def:generic equisingularity} and \ref{def:dimensionalitytype}, the generic projection $\pi_u$ can be replaced by a condition that involves almost all projections $\pi_{\bar u}: {k}^{r+1}\to {k}^{r}$ (so it does not require a field extension).
One says that a property holds \emph{for almost all projections} if
there exists a finite set of polynomials $\mathcal G=\{G_\mu\}$ in the indeterminates $u^{(i)}_{\nu_1, \cdots, \nu_{r+1} }$ and coefficients in ${k}$ such that
this property holds for all projections $\pi_{\bar u}$ for $\bar u$
satisfying $\forall_\mu G_\mu (\bar u)\ne 0 $.
Here the bar denotes the specialization $u \to \bar u$, i.e.
we replace all indeterminates $u^{(i)}_{\nu_1, \cdots, \nu_{r+1} }$ by
elements of ${k}$, $\bar u^{(i)}_{\nu_1, \cdots, \nu_{r+1} }\in {k}$. Thus, for almost all projections $\pi_{\bar u} $
the dimensionality type $\operatorname {d.t. } (V,P )$ equals
\begin{align}\label{eq:dimtype-polynproj}
\operatorname {d.t. } (V,P ) = 1+ \operatorname {d.t. } (\Delta_{\bar u},\pi_{\bar u} (P )), \end{align}
where $\Delta_{\bar u}$
denotes the discriminant locus of ${\pi_{\bar u}}_{|V}$. Since the finite set of polynomials $\mathcal G$ involves nontrivially only finitely many indeterminates $u^{(i)}_{\nu_1, \cdots, \nu_{r+1} }$, we may specialize the remaining ones to $0$, and then the projection
$\pi_{\bar u} $ becomes polynomial.
This means that, as soon as we know the set of polynomials $\mathcal G$,
we may compute the dimensionality type of $V$ at $P $
just by computing $\operatorname {d.t. } (\Delta_{\bar u},\pi_{\bar u} (P ))$,
for only one polynomial projection $\pi_{\bar u} $, satisfying $\forall_\mu G_\mu (\bar u)\ne 0 $.
Similarly, in order to check whether $V$ is generically Zariski
equisingular at $P $ along $S$,
it suffices to check it for $\Delta_{\bar u}$
along $\pi_{\bar u} (S)$ at $\pi_{\bar u} (P )$.
\subsection{Canonical stratification of hypersurfaces}\label{ssec:canonstrat}
The dimensionality type defines a canonical stratification of a given algebroid or algebraic hypersurface over an algebraically closed field of characteristic zero.
Unfortunately, in general, no specific information on the polynomials of
$\mathcal G$ is given in \cite{zariski1979}.
Zariski's construction is purely transcendental,
and there is no explicit bound on the degree of such polynomial
projections. This makes, for instance, an algorithmic computation of Zariski's canonical stratification impossible. The algebraic case was studied in more detail by Hironaka \cite{hironaka79}, where the semicontinuity of such a degree in Zariski topology is shown. This implies in particular that the dimensionality type induces, in the algebraic case, a stratification by locally closed algebraic subvarieties.
For complex analytic singularities, we can define the dimensionality type using generic polynomial or generic analytic projections. It follows from Speder \cite[Theorems I-IV]{speder75} that, in the complex analytic case, the canonical stratification thus defined satisfies Whitney's conditions (a) and (b). It also follows from Theorems 3.7 and 7.3 of \cite{PP17}, where a stronger regularity condition, called (arc-w), was proven.
It is an open question whether generic linear projections are generic in the sense of Zariski.
\begin{question}{6. Open problem}{Are generic linear projections sufficient to define generic Zariski equisingularity and the dimensionality type? More precisely, is the formula \eqref{eq:dimtype-polynproj} valid for a Zariski open non-empty set of linear projections $\pi_{\bar u}$?
}
\end{question}
Even if the answers to the above questions were positive, this would not give an algorithm to compute the dimensionality type and the canonical stratification automatically, but a positive answer would probably help in considering the other related open problems that we summarize below.
\begin{question}{7. Open problems}{
Characterize, geometrically or algebraically, the generic polynomial projections in the sense of Zariski.
}
\end{question}
In the case of strata of dimensionality type 2 a partial answer to
both questions of problem 6 was obtained in \cite{brianconhenry80}.
In this paper Brian\c con and Henry characterized
generically Zariski equisingular families of isolated surface singularities
in the 3-space in terms of local analytic invariants: Teissier's numbers (multiplicity, Milnor number, and Milnor number of a generic plane section), the number of double points and the number of cusps of the apparent contours of the generic projection of the generic fibre of a mini-versal deformation. All these numbers are local analytic invariants, and therefore a generic linear change of coordinates is generic in the sense of Zariski. This shows in particular that, if $\Sing V$ is of codimension 2 in $V$ at $P$, then the generic linear projections are sufficient to verify whether
$\operatorname {d.t. } (V,P) = 2$.
Note that the answer to problem 7 should be quite subtle even in the case of dimensionality type 2. Let us recall that in
\cite{luengo85} Luengo gave an example of a family of surface
singularities in $\mathbb{C}^3$ that is Zariski equisingular for one transverse
projection but not for the generic ones.
\begin{question}{8. Open problems}{
Is Zariski's canonical stratification of a complex analytic hypersurface Lipschitz equisingular?
}
\end{question}
This problem is a version of problem 5.
As we have mentioned in section
\ref{ssec:lipschitz}, the positive answer for the strata of codimension 2 was announced in \cite{PPpreprint19} and, if moreover $\codim (\Sing V) =2$, in \cite{NPpreprint}.
\subsection{Zariski equisingularity and equiresolution of singularities}\label{ssec:equiresolution}
In the case of dimensionality type $1$, i.e., equivalently, for families of plane curve singularities, Zariski equisingularity can be expressed in terms of blowings-up (monoidal transformations) and equiresolution.
More precisely, firstly, the following property of stability by blowings-up holds.
\begin{thm}[{\cite[Theorem 7.4]{zariski65-S2}}]\label{thm:equiresolution}
Assume that the singular locus $W$ of $V$ is of codimension $1$ and let $P$ be a regular point of $W$. Let $\pi : V' \to V$ be the blowing-up of $W$, and let $\pointgen$ denote a general point of $W$.
Then $V$ is of dimensionality type $1$ along $W$ at $P$ if and only if the following two conditions hold
\begin{enumerate}
\item [\rm (1)]
$\pi^{-1} (P) $ is a finite set of the same cardinality as
$\pi^{-1} (\pointgen) $,
\item [\rm (2)]
each $P '\in \pi^{-1} (P) $ is either a nonsingular point
of $V'$ or a point of dimensionality type $1$.
\end{enumerate}
\end{thm}
In the complex analytic set up the conditions (1) and (2) mean that
over a neighborhood of $P$, $W':=\pi^{-1} (W) \to W$ is a finite analytic covering and $V'$ is Zariski generically equisingular along each connected component of $W'$ (this includes the case that $V'$ is nonsingular along this connected component).
Secondly, the repeated process of such blowings-up leads not only to
a resolution $\tilde \pi :\tilde V \to V$ of $V$ but also to an equiresolution in the following sense. Fix a local projection $pr : \mathbb{K}^{r+1}\to W$, such that $pr ^{-1} (P)$ is nonsingular and transverse to $W$. The fibers of this projection restricted to $V$ are plane curve singularities $V_t$ parameterized by $t\in W$. Then the restrictions of $\tilde \pi$, $\tilde V_t:= \tilde \pi^{-1}(V_t) \to V_t$, are the resolutions of $V_t$, see
e.g. \cite[Corollary 7.5]{zariski65-S2}
and the paragraph after it, and the induced projections of $\tilde V$ and
of the exceptional divisor $E$ of $\tilde \pi$ onto $W$ are submersions. Note that $\tilde V$ coincides with a normalization of $V$, and hence it is also an equinormalization of the family $V_t$ (this can be deduced as well from the Puiseux with parameter theorem, Theorem \ref{thm:PuiseuxTheorem}).
\begin{question}{Question}{
Do the two properties, stability by blowing-up and equiresolution, hold
for arbitrary codimension strata of Zariski's stratification?
}
\end{question}
The first part of this question was stated by Zariski in \cite{zariski76}: "Now, one test that any definition of equisingularity must meet is the test of its stable behavior along $W$ under blowing-up of $W$". It also appears in questions F, G and H of \cite{zariski71open}.
An example of Luengo \cite{luengo84}
shows that the generic Zariski equisingularity does not satisfy the stability under blowings-up property. That is,
$V= \{z^7+ y^ 7+ty^5x^3+ x^ {10}=0\}\subset \mathbb{C}^4$ is generically Zariski equisingular along $W=\Sing V= \{x=y=z=0\}$ but the blow-up $\tilde V$ of $V$ along $W$ is not generically Zariski equisingular along $\tilde W=\Sing \tilde V$ ($\tilde W $ is a nonsingular curve in this example).
Moreover, reciprocally, a blowing-up may make non-equisingular families equisingular, as another example from \cite{luengo84} shows. In this example
$V= \{z^4+ y^6+tz^2 y^3+ x^{8}=0\}\subset \mathbb{C}^4$ is not generically Zariski equisingular along $W=\Sing V= \{x=y=z=0\}$ (the origin is of dimensionality type 3), but the strata of
the Zariski canonical stratification of the blow-up $\tilde V \to V$ of $V$ along $W$ project submersively onto $W$ (there is no point of the highest possible dimensionality type $3$ in $\tilde V$).
In order to answer the second part of this question one has to make precise what equiresolution, also often called simultaneous resolution, means.
To start with, there are embedded resolutions (modifications of the ambient space containing $V$) and non-embedded ones.
The concept of equiresolution was largely studied and clarified in \cite{Teissier76-77-80}
within the context of non-embedded resolutions of complex analytic surface singularities, and in \cite{Lip97}
in the context of embedded equiresolution of complex analytic or algebraic varieties.
The relation between generic Zariski equisingularity and equiresolution depends on which notion of equiresolution is adopted. In the first example of Luengo \cite{luengo84} cited above,
$V$, understood as a family $V_t$, does not have a strong simultaneous resolution in the sense of Teissier \cite{Teissier76-77-80},
if we require, moreover, that this resolution is given by a sequence of blowings-up of non-singular equimultiple centers following Hironaka's algorithm; see \cite{luengo84} for details.
In \cite{Lip97}
Lipman proposes a strategy to prove a weaker version of
equiresolution for such families. This proof is completed, in the algebraic case, by Villamayor in \cite{villamayor2000}. Villamayor's equiresolution is a modification of the ambient nonsingular space containing $V$ that is more general than the ones obtained as compositions of sequences of blowings-up of nonsingular centers. Moreover, it is not required that the induced resolution of $V$ is an isomorphism over $V \setminus \Sing V$.
There is another open problem related to the equiresolution of
families of singularities. Namely, it is not clear whether, in general, equiresolution can be used to construct topological trivializations. Let us explain this with a simplified example. Suppose that $V$ is a hypersurface of a nonsingular (real or complex) analytic manifold $M$, $\tilde \pi : \tilde M \to M$ is a modification, the composition of blowings-up of smooth centers for instance, and that
\begin{enumerate}
\item [(i)]
there is a local (at $P \in W$) analytic projection $pr : M\to W$,
such that $pr ^{-1} (P)$ is nonsingular and transverse to $W$ and whose fibers restricted to $V$ define a family of reduced hypersurfaces $V_t$, $t\in W$.
\item [(ii)]
$\tilde \pi ^{-1} (V)$ is a divisor with normal crossings that is the union of the strict transform $\tilde V$ of $V$, assumed non-singular, and the exceptional divisor $E$.
\item[(iii)]
The strata of the canonical stratification of $\tilde \pi ^{-1} (V)$ (as a divisor with normal crossings) project by $\tilde pr := pr \circ \tilde \pi $ submersively onto $W$.
\end{enumerate}
Then, by a version of the Ehresmann fibration theorem, there is a trivialization of $\tilde pr $ that preserves the strata of $\tilde \pi ^{-1} (V)$ and hence also $\tilde V$. Moreover, by
\cite{kuo85},
this trivialization can be made real analytic.
If this trivialization is a lift by $\tilde \pi$ of a trivialization of $pr $ then we call the latter \emph{a blow-analytic trivialization}.
$$
\xymatrix{
\tilde M \ar@{{}->}[rr]^{\tilde \pi } \ar[rd]_{\tilde pr } & & M \ar[ld]^{pr } \\
& W &
}
$$
In \cite{kuo85}
Kuo developed the theory of blow-analytic equivalence of real analytic function germs. Kuo shows that
for families of isolated hypersurface (or function) singularities such
blow-analytic trivializations exist under the following additional assumptions: $W = \Sing V$ and $\tilde \pi $ is an isomorphism over the complement of $W$. In this case he constructs a nice (real analytic) trivialization of the resolution space that projects to a topological trivialization of $pr $. In particular, this shows that the principle of generic blow-analytic equisingularity of real analytic function germs holds, see \cite[Theorem 1]{kuo85}. But in the general, non-isolated singularity case it is not even clear whether there is a topological trivialization that lifts to the resolution space.
The existence of a blow-analytic trivialization of a family of
non-isolated singularities
remains the main open problem of Kuo's theory, see \cite{FKK1998}, \cite{koike2004}.
Blow-analytic trivializations are, in particular, arc-analytic, and, actually, at least in the algebraic (i.e. Nash) case, blow-analytic and arc-analytic maps coincide, see \cite{bierstonemilmanarcanalytic}, \cite{subfunc}. Thus there is a hope that Theorem \ref{thm:arcwise-analytic_triviality}, proven using Zariski equisingularity, can help in developing blow-analytic theory of non-isolated singularities.
\section{Appendix. Generalized discriminants} \label{sec:discriminants}
We recall the notion of generalized discriminants,
see Appendix IV of \cite{whitneybook}, \cite{BPRbook}, \cite{roy2006}, or Appendix B of \cite{PP17}.
Let ${k}$ be an arbitrary field of characteristic zero and let
\begin{align}\label{type}
F(Z) = Z^d + \sum_{i=1}^ d a_iZ^{d-i} = \prod_{i=1}^d (Z-\xi_i)\in {k} [Z],
\end{align}
be a polynomial with coefficients $a_i\in {k}$ and the roots $\xi_i\in \overline {k}$.
Recall that the discriminant of $F$ is a polynomial in the coefficients
$a_i$ that can be defined either in terms of the roots
$$D_F = \prod_{1\le j_1< j_2 \le d} (\xi_{j_2}-\xi_{j_1})^2 ,$$
or as the determinant of the Jacobi-Hermite matrix
$$D_F =\left|\begin{array}{cccc} s_0 & s_1 & \cdots & s_{d-1}\\
s_1 & s_2 & \cdots & s_d \\
\cdots & \cdots & \cdots & \cdots\\
s_{d-1} & s_d & \cdots & s_{2d-2} \end{array}\right| ,$$
where $s_i=\sum_{j=1}^d \xi_j^i$, for $i\in \mathbb N$, are Newton's symmetric functions. Thus $D_F =0$ if and only if $F$ has a multiple root in $\overline {k}$.
\emph{The generalized discriminants, or subdiscriminants}, $D_{d+1-l}$ of $F$, $l=1, \ldots, d$,
can be defined as the principal minors of the Jacobi-Hermite matrix
$$D_{d+1-l} : =\left|\begin{array}{cccc} s_0 & s_1 & \cdots & s_{l-1}\\
s_1 & s_2 & \cdots & s_l \\
\cdots & \cdots & \cdots & \cdots\\
s_{l-1} & s_l & \cdots & s_{2l-2} \end{array}\right| ,$$
and we put $D_d=1$ by convention. Thus $D_{d+1-l}$ are polynomials in the coefficients $a_i$.
The generalized discriminants can be defined equivalently in terms of the roots
$$
D_{d+1-l} = \sum_{r_1 <\cdots < r_{l}} \, \prod_{j_1< j_2;\, j_1, j_2 \in \{r_1, \ldots ,r_{l}\}} (\xi_{j_2}-\xi_{j_1})^2.
$$
In particular $D_1= D_F$ and $F$ admits exactly $l$ distinct roots in $\overline {k}$ if and only if $D_{1} = \cdots = D_{d-l}=0$ and
$D_{d-l+1} \ne 0$.
If $F$ is not reduced, which in this case means that it has multiple roots, the generalized discriminants can replace the (classical)
discriminant of $F_{red}$. Here $F_{red}$ equals $\prod (Z-\xi_i)$,
where the product is taken over all distinct roots of $F$.
The following lemma is easy.
\begin{lemma}\label{lem:twodiscr}
Suppose $F$ has exactly $l>1$ distinct roots in
$\overline {k}$ of multiplicities $\mathbf m=(m_1, ..., m_l)$. Then there is a positive constant $C= C_{l,\mathbf m}$, depending only on
$\mathbf m=(m_1, ..., m_l)$, such that the generalized discriminant
$D_{d-l+1}$ of $F$ and
the standard discriminant $D_{F_{red}}$ of $F_{red}$ are related by
$$
D_{d-l+1}= C D_{F_{red}}.
$$
\end{lemma}
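As a purely illustrative sanity check (ours, not part of the original text), the following short \texttt{SymPy} computation evaluates the generalized discriminants of a concrete quartic with root multiplicities $(2,1,1)$ via the principal minors of the matrix of Newton sums, and compares $D_{d-l+1}$ with the discriminant of $F_{red}$ as in Lemma \ref{lem:twodiscr}. The chosen roots are arbitrary.
\begin{verbatim}
import sympy as sp

Z = sp.symbols('Z')
# a degree-4 polynomial with exactly l = 3 distinct roots, of multiplicities (2, 1, 1)
roots = {sp.Integer(1): 2, sp.Integer(2): 1, sp.Integer(-3): 1}
d = 4

# Newton sums s_i = sum_j xi_j^i, the roots being counted with multiplicity
s = [sum(m * r**i for r, m in roots.items()) for i in range(2 * d)]

def D(k):
    # generalized discriminant D_k = D_{d+1-l}: the l x l principal minor, with l = d + 1 - k
    l = d + 1 - k
    return sp.Matrix(l, l, lambda i, j: s[i + j]).det()

print(D(1))    # the full discriminant: 0, since F has a multiple root
print(D(2))    # D_{d-l+1} with l = 3: nonzero, detecting exactly 3 distinct roots

# Lemma: D_{d-l+1} is a constant multiple of the discriminant of F_red
F_red = sp.Mul(*[(Z - r) for r in roots])
print(sp.simplify(D(2) / sp.discriminant(F_red, Z)))   # a positive constant (here 2)
\end{verbatim}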
We conclude with the following obvious consequence of the implicit function theorem.
\begin{lemma}\label{lem:implicit}
Let $F\in \mathbb{C} \{x_1, ... ,x_n\}[Z]$ be a monic polynomial in $Z$ such that the discriminant $D_{F_{red}}$ does not vanish at the origin. Then, there is a neighborhood $U$ of $0\in \mathbb{C}^n$ such that
the complex roots of $F$ are analytic on $U$, distinct, and of constant multiplicities (as roots of $F$).
\end{lemma}
\bibliographystyle{siam}
\section{Introduction}
Fock and Chekhov \cite{FC} defined a noncommutative algebra related to the Teichmuller space of a punctured surface. The algebra is a noncommutative torus whose defining relations come from a triangulation of the underlying surface. There is an action of the mapping class group on this algebra, and if a representation is fixed by this action, then this gives rise to a projective representation of the mapping class group.
In this paper, we study a toy model of quantum Teichmuller space, the noncommutative torus in two variables, $\mathcal{W}_q$, which can be thought of as being associated to the two torus $T^2=S^1\times S^1$.
The ``trivial'' representation of $\mathcal{W}_q$ is fixed by all elements of the mapping class group, and so gives rise to a projective representation of the mapping class group of the torus, $SL_2(\mathbb{Z})$. In this paper we compute the trace and the determinant of the matrix associated to each element of $SL_2(\mathbb{Z})$ by this projective representation, in the case where $q$ is a root of unity of prime order.
We begin with a section of preliminaries, starting with the definition of $\mathcal{W}_q$ and a description of the action of $SL_2(\mathbb{Z})$ on $\mathcal{W}_q$ as automorphisms.
After reviewing properties of matrix algebras in subsection \ref{mat}, we will review representations of algebras in subsection \ref{rep}. We finish the preliminaries with a review of Gauss sums.
In section \ref{reptor} we give models of the irreducible representations of $\mathcal{W}_q$ and prove they are indeed irreducible.
Next, in section \ref{skno}, we take an arbitrary element $B\in SL_2(\mathbb{Z})$ and find a matrix whose action by conjugation is the same as $B$'s action as an automorphism on the representation of $\mathcal{W}_q$. Finally in sections \ref{trcalc} and \ref{detcalc}, we calculate the trace and determinant (respectively) of the conjugating matrix.
\section{Preliminaries}
\subsection{Noncomutative Tori}\label{weyl}
Let $\mathcal{W}_q = \mathbb{C}[l,l^{-1},m,m^{-1}]_q$ be the non-commuting Laurent polynomials in variables $l$ and $m$ with $lm = q^2ml$ for some $q\in\mathbb{C}\backslash \{0\}$. We will study $\mathcal{W}_q$ using the following basis. Let $e_{r,s} = q^{-rs}l^rm^s$. The set $\{e_{r,s}\mid r,s\in\mathbb{Z}\}$ forms a basis for $\mathcal{W}_q$ over $\mathbb{C}$ where we can take products of elements in this basis as follows,
\begin{align*} e_{p,t}*e_{r,s} &= q^{-pt-rs}l^pm^tl^rm^s\\
&= q^{-pt-rs}l^p(q^{-2tr}l^rm^t)m^s\\
&= q^{-pt-rs-2tr}q^{(p+r)(t+s)}q^{-(p+r)(t+s)}l^{p+r}m^{t+s}\\
&= q^{ps-rt}e_{p+r,t+s}\\
&= q^{\begin{vmatrix}
p & t\\
r & s
\end{vmatrix}}e_{p+r,t+s}
\end{align*}
Lastly, note that if $n$ is odd and $q$ is a primitive $n$th root of unity, then the center of $\mathcal{W}_q$ (see Definition \ref{algdef}) is spanned by $\{e_{np,nt}\mid p,t\in\mathbb{Z}\}$. We can now view $\mathcal{W}_q$ as a module over its center (see Definition \ref{repdef}).
\begin{prop} The algebra $\mathcal{W}_q$ is a free module over its center of rank $n^2$, with basis $\{e_{i,j}\mid 0\leq i,j<n\}$.
\end{prop}
\begin{proof}
It is clear that $\{e_{i,j}\mid 0\leq i,j<n\}$ spans $\mathcal{W}_q$ over $Z(\mathcal{W}_q)$ since for any $r,s\in\mathbb{Z}$ we can find $p$ and $t$ so that $r = pn + i$ and $s = tn + j$ with $0\leq i,j < n$ so that $e_{r,s} = q^{-\begin{vmatrix}pn & tn\\ i & j\end{vmatrix}}e_{pn,tn}*e_{i,j}$. To show that this set is linearly independent, first note that $\{e_{np,nt}\mid p,t\in\mathbb{Z}\}$ is linearly independent over $\mathbb{C}$ as it is a subset of a basis for $\mathcal{W}_q$ over $\mathbb{C}$. Suppose that $\sum_{i,j=0}^{n-1}\alpha_{i,j}e_{i,j} = 0$ with $\alpha_{i,j}\in Z(\mathcal{W}_q) = \langle e_{np,nt}\mid p,t\in\mathbb{Z}\rangle$. Write $\alpha_{i,j} = \sum_k\beta_{i,j}^ke_{np_{i,j}^k,nt_{i,j}^k}$ where $\beta_{i,j}^k\in\mathbb{C}$ and the choice of the $\beta_{i,j}^k$ is unique. Then
\begin{align*}
0 &= \sum_{i,j=0}^{n-1}\alpha_{i,j}e_{i,j}\\
&= \sum_{i,j=0}^{n-1}\left(\sum_k\beta_{i,j}^ke_{np_{i,j}^k,nt_{i,j}^k}\right)e_{i,j}\\
&= \sum_{i,j=0}^{n-1}\sum_k\beta_{i,j}^kq^{\begin{vmatrix}np_{i,j}^k & nt_{i,j}^k\\i & j\end{vmatrix}}e_{np_{i,j}^k + i,nt_{i,j}^k + j}\\
&= \sum_{i,j=0}^{n-1}\sum_k\beta_{i,j}^ke_{np_{i,j}^k + i,nt_{i,j}^k + j}
\end{align*}
But this is a linear combination of unique elements in $\{e_{i,j}\mid i,j\in\mathbb{Z}\}$, our basis for $\mathcal{W}_q$ over $\mathbb{C}$, and thus $\forall i,j,k$ it must be that $\beta_{i,j}^k = 0$ by the linear independence of our basis. Therefore, $\forall i,j\in\{0,\ldots,n-1\}$, $\alpha_{i,j} = 0$.
\end{proof}
\begin{prop} There is a left action of $SL_2(\mathbb{Z})$ as automorphisms on $\mathcal{W}_q$ defined by $$\begin{pmatrix}
a & b\\
c & d
\end{pmatrix}e_{p,t} = e_{ap + bt,cp + dt}.$$
\end{prop}
\begin{proof}
We need only show that $\begin{pmatrix}
a & b\\
c & d
\end{pmatrix}(e_{p,t}*e_{r,s}) = \begin{pmatrix}
a & b\\
c & d
\end{pmatrix}e_{p,t} * \begin{pmatrix}
a & b\\
c & d
\end{pmatrix}e_{r,s}$. This can be accomplished through direct computation:
\begin{align*}
\begin{pmatrix}
a & b\\
c & d
\end{pmatrix}(e_{p,t} * e_{r,s}) &= q^{\begin{vmatrix}
p & t\\
r & s
\end{vmatrix}}\begin{pmatrix}
a & b\\
c & d
\end{pmatrix}e_{p+r,t+s}\\
&= q^{\begin{vmatrix}
p & t\\
r & s
\end{vmatrix}} e_{a(p+r) + b(t+s),c(p+r) + d(t+s)}\\
&= q^{\begin{vmatrix}a & b\\c & d\end{vmatrix}\cdot\begin{vmatrix}p & r\\t & s\end{vmatrix}}e_{ap + ar + bt + bs, cp + cr + dt + ds}\\
&= q^{\begin{vmatrix}ap + bt & ar + bs\\cp + dt & cr + ds\end{vmatrix}}e_{ap + bt + ar + bs, cp + dt + cr + ds}\\
&= e_{ap + bt,cp + dt} * e_{ar + bs,cr + ds}\\
&= \begin{pmatrix}
a & b\\
c & d
\end{pmatrix}e_{p,t} * \begin{pmatrix}
a & b\\
c & d
\end{pmatrix}e_{r,s}
\end{align*}
\end{proof}
\subsection{Matrix Algebras} \label{mat}
\begin{defi}\label{algdef} An \textbf{algebra} $A$ over a field $F$ is a vector space with an additional bilinear binary operation $\cdot:A\times A\rightarrow A$ usually called multiplication. We assume that the multiplication is associative, and there is a unit element. The \textbf{center} of $A$, $Z(A)$ are those elements of $A$ that commute with all other elements, i.e. $Z(A) = \{x\in A\mid x\cdot a = a\cdot x\ \forall a\in A\}$.
\end{defi}
\iffalse
\begin{defi} An algebra is called \textbf{prime} if for any $a,b\in A$, then $arb = 0\Rightarrow a = 0$ or $b = 0$ for all $r\in A$.
\end{defi}
\fi
\begin{defi}\label{repdef} Given an algebra, $A$, a \textbf{left $A$-module} or \textbf{representation} of $A$ is a vector space $V$ over $\mathbb{C}$ along with a homomorphism $\rho:A\rightarrow End(V)$. We say the representation is \textbf{irreducible} if $\rho$ is onto.
\end{defi}
Let $M_n(\mathbb{C})$ be the algebra of $n\times n$-matrices with complex entries. There is a standard basis denoted $E_{i,j}$ of matrices that are zero except in the $(i,j)$-entry. In this paper the indices $i$ and $j$ run from $0$ to $n-1$. It is well known that
\begin{equation*} E_{i,j}E_{k,l}=\delta^j_kE_{i,l} \end{equation*} where $\delta^j_k$ is the Kronecker delta. Note that the center of $M_n(\mathbb{C})$ consists exactly of the scalar multiples of the identity. We will also use the Skolem-Noether theorem \cite{G}, which ensures that every automorphism of $M_n(\mathbb{C})$ is inner, i.e., if $\theta:M_n(\mathbb{C})\rightarrow M_n(\mathbb{C})$ is an automorphism, then there exists $C\in M_n(\mathbb{C})$, unique up to scalar multiples, so that for all $A\in M_n(\mathbb{C})$,
\begin{equation*} \theta(A)=C^{-1}AC. \end{equation*}
\begin{remark} After choosing a basis for the vector space $V$, $End(V)$ can be identified with $M_n(\mathbb{C})$. In this paper we treat representations as homomorphisms into $M_n(\mathbb{C})$. \end{remark}
\subsection{Irreducible Representations}\label{rep}
We begin this section, by showing that irreducible representations of an algebra, $A$, are determined, up to equivalence, by their kernels. Then we show that under certain circumstances (that will appear in Section \ref{reptor}), those kernels are fully determined by their intersections with $Z(A)$.
\begin{prop}\label{uniqueirr} If $A$ is an associative algebra, and $\rho:A\rightarrow M_n(\mathbb{C})$ is irreducible with $I = \ker\rho$, then $\rho$ is completely determined by $I$.
\end{prop}
\begin{proof} Suppose $\rho_1,\rho_2:A\rightarrow M_n(\mathbb{C})$ are both onto and $\ker\rho_1=\ker\rho_2$. Then $\rho_2\circ\rho_1^{-1}$ is a well-defined automorphism of $M_n(\mathbb{C})$. Therefore, by the Skolem-Noether theorem, this map must be inner, so that there is some $C\in GL_n(\mathbb{C})$ such that $C\rho_1C^{-1} = \rho_2$.
\end{proof}
\begin{prop}\label{amax} Let $A$ be an associative algebra that is a free module of rank $n^2$ over its center. Let $\rho:A\rightarrow M_n(\mathbb{C})$ be an irreducible representation, and let $I=ker(\rho)$, then $(I\cap Z(A))\cdot A = I$.
In particular, this means the kernel of $\rho$ is determined by its intersection with the center of $A$. \end{prop}
\begin{proof}
Because $I$ is an ideal of $A$, it is clear that $(I\cap Z(A))\cdot A\subseteq I$, so it is sufficient show the other inclusion. Notice that $\rho\vert_{Z(A)}:Z(A)\rightarrow Z(M_n(\mathbb{C})) = \mathbb{C}I_n$ is a homomorphism from $Z(A)$ onto a field; consequently, $\ker(\rho\vert_{Z(A)}) = I\cap Z(A)$ is a maximal ideal of $Z(A)$. If $B=\{e_i\}_{i=1}^{n^2}$ is a basis for $A$ over $Z(A)$ (of order $n^2$), then $\rho(B)$ must be a spanning set of $M_n(\mathbb{C})$ since $\rho$ is onto. Therefore, the elements of $\rho(B)$ are linearly independent as they span an $n^2$-dimensional vector space. Let
$$J_i:=\{z\in Z(A)\mid \exists\, z_j\in Z(A) \text{ such that } ze_i + \sum_{j\neq i}z_je_j\in I\}.$$
This is an ideal of $Z(A)$ because the $e_i$ form a basis, so that the expression $ze_i + \sum_{j\neq i}z_je_j$ is unique. Notice that for all $i$, $I\cap Z(A)$ is contained in $J_i$, which implies $I\cap Z(A) = J_i$ since $I\cap Z(A)$ is maximal. As this is true for each $i$, every element of $I$, written in the basis $\{e_i\}$, has all of its coefficients in $I\cap Z(A)$, and hence $I\subseteq (I\cap Z(A))\cdot A$. Therefore, $(I\cap Z(A))\cdot A = I$.
\end{proof}
\subsection{ Legendre Symbols and Gauss Sums}
Lastly, we introduce Gauss Sums which will be instrumental in Section \ref{trcalc}.
\begin{defi}
Given two integers $a$ and $p$ such that $a\not\equiv 0\mod p$, then the associated \textbf{quadratic symbol}, or \textbf{Legendre symbol}, is defined to be
$$\left(\frac{a}{p}\right) = \left\{\begin{array}{ccc}
1 & \text{if} & a\equiv x^2\mod p \text{ for some integer } x\\
-1 & \text{if} & a\not\equiv x^2\mod p \text{ for every integer } x
\end{array}\right.$$ This value depends only on the residue class of $a\mod p$.
\end{defi}
\begin{defi} A \textbf{Gauss quadratic sum} is a sum of the form,
$$\sum_{x\mod b}e^{\frac{2\pi i}{b}ax^2} =: G(a,b)$$
where $a$ and $b$ are relatively prime, non-zero integers with $b>0$.
\end{defi}
Lang gives an exposition \cite{L} of Gauss' calculation showing that if $b\geq 1$ is odd, then $G(a,b) = \left(\frac{a}{b}\right)G(1,b)$, and that $G(1,b) = \frac{1 + i^{-b}}{1 + i^{-1}}\sqrt{b}$ so that for $b\geq 1$ odd, $$G(a,b) = \left(\frac{a}{b}\right)\frac{1 + i^{-b}}{1 + i^{-1}}\sqrt{b}.$$
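As an illustration only (not part of the original argument), the following small numerical check compares the direct evaluation of $G(a,b)$ with the closed form above for an arbitrarily chosen odd prime $b$; the Legendre symbol is computed by Euler's criterion, which requires $b$ to be prime.
\begin{verbatim}
import numpy as np

def gauss_sum(a, b):
    # direct evaluation of G(a, b) = sum_{x mod b} exp(2 pi i a x^2 / b)
    return sum(np.exp(2j * np.pi * a * x * x / b) for x in range(b))

def legendre(a, p):
    # Legendre symbol (a/p) for an odd prime p, via Euler's criterion
    return 1 if pow(a % p, (p - 1) // 2, p) == 1 else -1

a, b = 3, 7            # gcd(a, b) = 1, b an odd prime
closed_form = legendre(a, b) * (1 + 1j**(-b)) / (1 + 1j**(-1)) * np.sqrt(b)
print(gauss_sum(a, b), closed_form)   # both should equal -i*sqrt(7) here
\end{verbatim}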
\section{Irreducible Representations of the Noncommutative Torus}\label{reptor}
We now describe representatives of every irreducible representation of $\mathcal{W}_q$. Choose an $n$th root $b^{1/n}$ of $b$. Define $\rho_{a,b}:\mathcal{W}_q\rightarrow M_n(\mathbb{C})$ to be the representation of $\mathcal{W}_q$ determined by $\rho_{a,b}(l) = L_a$ and $\rho_{a,b}(m) = M_b$ where $a,b\in\mathbb{C}$ and
$$L_a = \begin{pmatrix}
0 & 0 & \cdots & 0 & a\\
1 & 0 & \cdots & 0 & 0\\
0 & 1 & \cdots & 0 & 0\\
\vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & \cdots & 1 & 0
\end{pmatrix},\ M_b = \begin{pmatrix}
b^{\frac{1}{n}} & 0 & 0 & \cdots & 0\\
0 & b^{\frac{1}{n}}q^{-2} & 0 & \cdots & 0\\
0 & 0 & b^{\frac{1}{n}}q^{-4} & \cdots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
0 & 0 & 0 & \cdots & b^{\frac{1}{n}}q^{-2(n-1)}
\end{pmatrix}$$
\begin{remark} Even though the matrix $M_b$ depends on the choice of $b^{1/n}$, the equivalence class of the representation does not. As this is a representation of an associative algebra that is free of rank $n^2$ over its center, by Proposition \ref{amax} the representation is determined by the intersection of its kernel with $Z(\mathcal{W}_q)$. In this case that is the ideal $(l^n-a,m^n-b)$, which does not depend on the choice of $n$th root of $b$.\end{remark}
\begin{prop} The representation, $\rho_{a,b}:\mathcal{W}_q\rightarrow M_n(\mathbb{C})$, is irreducible (surjective).
\end{prop}
\begin{proof}
Recall that $1 + q + q^2 + \cdots + q^{n-1} = \frac{q^n-1}{q-1} = 0$. We have that $\sum_{i=0}^{n-1}M_1^i = nE_{0,0}$, the matrix where every entry but the top left corner is zero. Hence, $\frac{1}{n}\sum_{i=0}^{n-1}(b^{-\frac{1}{n}}M_b)^i = \frac{1}{n}\sum_{i=0}^{n-1}M_1^i = E_{0,0}$. Then $L_a^iE_{0,0} = E_{i,0}$ for $0\leq i\leq n-1$. Finally, $E_{i,0}(\frac{1}{a}L_a)L_a^{k-1} = E_{i,n-k}$ for $1\leq k\leq n-1$, so that $L_a$ and $M_b$ together generate every $E_{i,j}$, and these span all of $M_n(\mathbb{C})$. Therefore, $\rho_{a,b}$ is irreducible.
\end{proof}
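The following numerical sketch, included only as an illustration, checks the relation $L_aM_b = q^2M_bL_a$ and the spanning property used in the proof above; the prime $n$ and the values of $a$ and $b$ below are arbitrary choices.
\begin{verbatim}
import numpy as np

n = 5                                   # an arbitrarily chosen odd prime
q = np.exp(2j * np.pi / n)              # a primitive n-th root of unity
a, b = 2.0, 3.0                         # arbitrary nonzero parameters

L = np.zeros((n, n), dtype=complex)     # L_a: a in the upper-right corner, 1's below the diagonal
L[0, n - 1] = a
for i in range(1, n):
    L[i, i - 1] = 1.0
M = np.diag([b**(1.0 / n) * q**(-2 * i) for i in range(n)])   # M_b

# the defining relation l m = q^2 m l holds for the images L_a, M_b
assert np.allclose(L @ M, q**2 * (M @ L))

# irreducibility: the n^2 products L_a^i M_b^j span M_n(C)
prods = np.array([(np.linalg.matrix_power(L, i) @ np.linalg.matrix_power(M, j)).ravel()
                  for i in range(n) for j in range(n)])
print(np.linalg.matrix_rank(prods))     # expected: n**2 = 25
\end{verbatim}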
\begin{prop}\label{fixedrep} If $B\in SL_2(\mathbb{Z})$, and $1$ is not an eigenvalue of $B$, then the only representations of $\mathcal{W}_q$ that can be fixed by $B$ are $\rho_{a,b}:\mathcal{W}_q\rightarrow M_n(\mathbb{C})$ where $a$ and $b$ are roots of unity whose order divides the determinant of $B-I_2$. \end{prop}
\proof Suppose that $B=\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ and $\rho_{e,f}:\mathcal{W}_q\rightarrow M_n(\mathbb{C})$ is fixed by $B$. Since irreducible representations of $\mathcal{W}_q$ are determined by the intersection of their kernel with the center we are looking for $\lambda,\mu \in \mathbb{C}-\{0\}$ that solve
the system of equations
\begin{equation*} \lambda^n=e , \ \mu^n=f \end{equation*} \begin{equation*} \lambda^{na}\mu^{nb}=e , \ \lambda^{nc}\mu^{nd}=f. \end{equation*}
Performing a multiplicative row operation we get,
\begin{equation*} \lambda^{n(a-1)}\mu^{nb}=1 , \ \lambda^{nc}\mu^{n(d-1)}=1.\end{equation*}
Using elementary operations of determinant $1$, done multiplicatively, we can make this system,
\begin{equation*} \lambda^{ne_1}=1, \ \mu^{ne_2}=1,\end{equation*} where $e_1e_2$ is equal to the determinant of $ \begin{pmatrix} a-1 & b \\ c & d-1 \end{pmatrix}$.
From this we see that $\lambda^n$ and $\mu^n$ are roots of unity whose order divides the determinant of $B-I_2$.
\qed
\begin{remark} For a particular $\begin{pmatrix} a & b \\c & d \end{pmatrix}$ there can be some fixed representations, which hold out the promise of invariants of the mapping cylinders of those matrices. In order to get a problem that we can solve uniformly, we now restrict our attention to the representation $\rho_{1,1}:\mathcal{W}_q\rightarrow M_n(\mathbb{C})$, which is fixed by all elements of $SL_2(\mathbb{Z})$. We are in fact studying a projective representation of the mapping class group of the torus, which should be related to the Witten-Reshetikhin-Turaev representations \cite{BWP}. \end{remark}
\begin{remark}Proposition \ref{fixedrep} shows us that the only representation $\rho_{a,b}$ fixed by all of $SL_2(\mathbb{Z})$ is $\rho_{1,1}$. From now on, we will refer to $\rho_{1,1}$ as $\rho$, $L_1$ as $L$, and $M_1$ as $M$. \end{remark}
\section{Finding the Conjugating Matrix using the Skolem-Noether Theorem} \label{skno}
We now know that every $B = \begin{pmatrix}
a & b\\
c & d
\end{pmatrix}\in SL_2(\mathbb{Z})$ acts as an automorphism of $\mathcal{W}_q$, and consequently on $Z(\mathcal{W}_q)$. Thus, if an ideal $I\subseteq Z(\mathcal{W}_q)$ is fixed by $B$, then $B$ induces an automorphism of $M_n(\mathbb{C})$, which is necessarily inner. Our goal in this section is to determine the conjugating matrix associated with the automorphism induced by $B$. We begin with the following crucial observation.
\begin{lemma}\label{conjcomp}
The first column of $CE_{k,0}C^{-1}$ is a constant multiple of the $k$th column of $C$.
\end{lemma}
\begin{proof} Note that
$$ME_{k,l}N = (m_{i,j})E_{k,l}(n_{i,j}) = (m_{i,j})(\delta_{i}^kn_{l,j}) = (m_{i,k}n_{l,j})$$
where $\delta_{j}^k = \left\{\begin{array}{cc}
0 & k\neq j\\
1 & k = j
\end{array}\right.$. Specifically, the first column of $(m_{i,j})E_{k,0}(n_{i,j}) = (m_{i,k}n_{0,j})$ is $n_{0,0}\begin{pmatrix}
m_{0,k}\\
\vdots\\
m_{n-1,k}
\end{pmatrix}$, a constant multiple of the $k$th column of $M$.
\end{proof}
Thus, if we know what our automorphism does to the matrices $E_{k,0}$, then we can determine our conjugating matrix (which is only defined up to a scalar multiple).\\
The last thing we need in order to compute the conjugating matrix is the following lemma, which will be helpful in all future computations.
\begin{lemma}\label{ecompute} If $A = (a_{i,j})_{i,j=0}^{n-1}$, then $L^rM^sA = (q^{-2s(i-r)}a_{i-r,j})$ (where the index $i-r$ is taken modulo $n$).
\end{lemma}
\begin{proof}
Note that $L$ is a permutation matrix so that $LA = (a_{i-1,j})$ where the index is taken modulo $n$. It is also clear that $MA = (q^{-2i}a_{i,j})$ since $M$ is diagonal. More generally, this means $L^rA = (a_{i-r,j})$ and $M^sA = (q^{-2is}a_{i,j})$ so that $L^rM^sA = (q^{-2s(i-r)}a_{i-r,j})$.
\end{proof}
\begin{theorem}\label{conjmat}
The conjugating matrix associated with $B$ is $$C = (c_{i,j})_{i,j=0}^{n-1} = (q^{-b^{-1}d(i-aj)^2 - 2cj(i-aj) - acj^2}).$$
\end{theorem}
\begin{proof}
Because the sum $1 + q + q^2 + \cdots + q^{n-1} = \frac{q^n - 1}{q-1} = 0$, and since $n$ is prime, we have that $\sum_{i=0}^{n-1}M^i = \sum_{i=0}^{n-1}e_{0,i} = nE_{0,0}$, and from here, we can use $L = e_{1,0}$ to get our desired matrices (up to a scalar multiple): $$L^j\sum_{i=0}^{n-1}M^i = e_{j,0}\sum_{i=0}^{n-1}e_{0,i} = nE_{j,0}.$$
Now we can see where our automorphism sends these matrices to determine our conjugating matrix as in Lemma \ref{conjcomp}. The matrix $\begin{pmatrix}
a & b\\
c & d
\end{pmatrix}$ sends $e_{j,0}\sum_{i=0}^{n-1}e_{0,i}$ to $e_{aj,cj}\sum_{i=0}^{n-1}e_{bi,di}$ whose first column (indexed by $j=0$) is the first column of $\sum_{i=0}^{n-1}e_{bi,di}$, call this vector $v = \begin{pmatrix}
v_0\\
\vdots\\
v_{n-1}
\end{pmatrix}$. Since $e_{bi,di} = q^{-bdi^2}L^{bi}M^{di}$, and the first column of $M$ is $\begin{pmatrix}
1\\
0\\
\vdots\\
0
\end{pmatrix}$, we have that $v_{bi} = q^{-bdi^2}$ where the index is taken modulo $n$. Hence, $v_i = q^{-bd(b^{-1}i)^2} = q^{-b^{-1}di^2}$ where $b^{-1}$ is the multiplicative inverse of $b$ in $\mathbb{Z}/n\mathbb{Z}$. In general, the first column of $e_{aj,cj}\sum_{i=0}^{n-1}e_{bi,di}$ is $e_{aj,cj}v = q^{-acj^2}L^{aj}M^{cj}(q^{-b^{-1}di^2})_{i=0}^{n-1}$ from which we can finally write down our desired matrix by using Lemma \ref{ecompute}.
$$q^{-acj^2}\left[ L^{aj}M^{cj}(q^{-b^{-1}di^2})\right] = q^{-acj^2}(q^{-b^{-1}d(i-aj)^2}q^{-2cj(i-aj)}) = (q^{-b^{-1}d(i-aj)^2 - 2cj(i-aj) - acj^2}).$$
\end{proof}
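As an independent numerical check (ours, not part of the proof), the following sketch verifies that conjugation by the matrix $C$ of Theorem \ref{conjmat} realizes the action of $B$, in the sense that $C\rho(e_{p,t})C^{-1} = \rho(e_{ap+bt,\,cp+dt})$, for one arbitrarily chosen prime $n$, $q = e^{2\pi i/n}$, and one arbitrarily chosen $B\in SL_2(\mathbb{Z})$ with $b$ invertible modulo $n$.
\begin{verbatim}
import numpy as np

n = 5                                          # an arbitrarily chosen prime
q = np.exp(2j * np.pi / n)

L = np.roll(np.eye(n), 1, axis=0)              # rho(l) = L: the cyclic shift
M = np.diag([q**(-2 * i) for i in range(n)])   # rho(m) = M

def rho(p, t):
    # rho(e_{p,t}) = q^{-pt} L^p M^t
    return q**(-p * t) * np.linalg.matrix_power(L, p % n) @ np.linalg.matrix_power(M, t % n)

a, b, c, d = 1, 1, 1, 2                        # an arbitrary element of SL_2(Z), det = 1
b_inv = pow(b, n - 2, n)                       # b^{-1} mod n (n prime)

C = np.array([[q**(-b_inv * d * (i - a * j)**2 - 2 * c * j * (i - a * j) - a * c * j**2)
               for j in range(n)] for i in range(n)])
C_inv = np.linalg.inv(C)

for p in range(n):
    for t in range(n):
        assert np.allclose(C @ rho(p, t) @ C_inv, rho(a * p + b * t, c * p + d * t))
print("conjugation by C realizes the action of B")
\end{verbatim}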
\section{Trace Calculation}\label{trcalc}
We now compute the trace of the matrix found in Section \ref{skno}.
\begin{theorem}
Let $B = \begin{pmatrix}
a & b\\
c & d
\end{pmatrix}$. The matrix $C$ we computed in the last section, associated to $B$ has
\begin{equation*} \Tr(C) = \left(\frac{K_B}{n}\right)\frac{1 + i^{-n}}{1 + i^{-1}}\sqrt{n} \end{equation*} where $K_B := -(b^{-1}d(1-a)^2 + c(2-a))$ \end{theorem}
\proof Above we see that the elements along the diagonal of our conjugating matrix are of the form $c_{i,i} = q^{-b^{-1}d(i-ai)^2 - 2ci(i-ai) - aci^2} = q^{-i^2(b^{-1}d(1-a)^2 + 2c(1-a) + ac)} = q^{-i^2(b^{-1}d(1-a)^2 + c(2-a))}$. Define $K_B := -(b^{-1}d(1-a)^2 + c(2-a))$. Then we have that $\Tr(C) = \sum_{i=0}^{n-1}q^{i^2K_B}$ and this is just the Gauss sum, $G(K_B,n) = \left(\frac{K_B}{n}\right)G(1,n) = \left(\frac{K_B}{n}\right)\frac{1 + i^{-n}}{1 + i^{-1}}\sqrt{n}$ where $\left(\frac{K_B}{n}\right)$ is the Legendre symbol of $K_B$ with respect to $n$.\qed
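A quick numerical illustration (not part of the proof): for the arbitrary choices $n=5$, $q=e^{2\pi i/n}$ and $B=\begin{pmatrix} 1 & 1\\ 1 & 2\end{pmatrix}$, the trace of the matrix $C$ of Theorem \ref{conjmat} agrees with the Gauss-sum expression above.
\begin{verbatim}
import numpy as np

n = 5
q = np.exp(2j * np.pi / n)
a, b, c, d = 1, 1, 1, 2
b_inv = pow(b, n - 2, n)                       # b^{-1} mod n

C = np.array([[q**(-b_inv * d * (i - a * j)**2 - 2 * c * j * (i - a * j) - a * c * j**2)
               for j in range(n)] for i in range(n)])

K_B = -(b_inv * d * (1 - a)**2 + c * (2 - a))
legendre = 1 if pow(K_B % n, (n - 1) // 2, n) == 1 else -1   # Euler's criterion
closed_form = legendre * (1 + 1j**(-n)) / (1 + 1j**(-1)) * np.sqrt(n)
print(np.trace(C), closed_form)                # both should equal sqrt(5) here
\end{verbatim}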
\section{Determinant Calculation}\label{detcalc}
We now wish to calculate the determinant of $C$, which would be impractical to find through direct calculation, and so we will use the following.
\begin{prop}$CC^* = nI_n$ where $C^*$ is the conjugate transpose of $C$.
\end{prop}
\begin{proof}
Let $v = (q^{-b^{-1}di^2})_{i=0}^{n-1}$ as above. We have shown that the $j$th column of $C$ is $e_{aj,cj}v = (e_{a,c})^jv$, that is, $C = (v\ Av\ A^2v\ \cdots\ A^{n-1}v)$ where $A:=e_{a,c}$. Hence, $$CC^* = (v\ Av\ A^2v\ \cdots\ A^{n-1}v)\begin{pmatrix}
v^*\\
v^*A^*\\
v^*(A^*)^2\\
\vdots\\
v^*(A^*)^{n-1}
\end{pmatrix} = \sum_{k=0}^{n-1}A^kvv^*(A^*)^k.$$
We know that $v^*=(q^{b^{-1}dj^2})$ and so $vv^*$ is the matrix $(q^{b^{-1}d(j^2-i^2)})$. By definition, $A^k = q^{-ack^2}L^{ak}M^{ck}$ so that $A^kvv^* = (q^{-ack^2}q^{-2ck(i-ak)}q^{b^{-1}d(j^2-(i-ak)^2)})$ by Lemma \ref{ecompute}. Finally, consider $(A^*)^k = q^{ack^2}(M^*)^{ck}(L^*)^{ak}$. To compute this, we will need a statement analogous to Lemma \ref{ecompute}, but for $L^*$ and $M^*$.
\begin{lemma} If $A = (a_{i,j})_{i,j=0}^{n-1}$, then $A(M^*)^s(L^*)^r = (q^{2s(j-r)}a_{i,j-r})$ (where the index $j-r$ is taken modulo $n$).
\end{lemma}
\begin{proof}
Note $L^*$ is a (column) permutation matrix so that $(a_{i,j})L^* = (a_{i,j-1})$, and note $(a_{i,j})M^* = (q^{2j}a_{i,j})$ so that in general $(a_{i,j})(L^rM^s)^* = (a_{i,j})(M^*)^s(L^*)^r = (q^{2s(j-r)}a_{i,j-r})$.
\end{proof}
We can now use this to calculate,
\begin{align*}
A^kvv^*(A^*)^k &= (q^{ack^2}q^{2ck(j-ak)}q^{-ack^2}q^{-2ck(i-ak)}q^{b^{-1}d((j-ak)^2-(i-ak)^2)})\\
&= (q^{2ck((j-ak)-(i-ak)) + b^{-1}d((j-ak)^2 - (i-ak)^2)})\\
&= (q^{2ck((j-ak)-(i-ak)) + b^{-1}d(j^2 - i^2 - 2ak(j-i))})
\end{align*}
Therefore, $CC^* = (\sum_{k=0}^{n-1}q^{2ck((j-ak)-(i-ak)) + b^{-1}d(j^2 - i^2 - 2ak(j-i))})$ and it is clear that if $i=j$, then the above becomes $\sum_{k=0}^{n-1}q^0=n$. If $i\neq j$, then the exponent of $q$ is linear in $k$ and since $n$ is prime and $q$ is a primitive $n$th root of unity, we have that we will get a different root of unity for each $k$ so that we get the sum $\sum_{k=0}^{n-1}q^k = 0$. Therefore, $CC^* = nI_n$.
\end{proof}
Using this fact, we can conclude,
\begin{theorem} $\det(C) = \pm\sqrt{n}^n$. \end{theorem}
\begin{proof}
$\det(CC^*) = \det(C)\det(C^*) = \det(C)^2 = n^n\Rightarrow \det(C) = \pm\sqrt{n}^n$.
\end{proof}
Therefore, $$\frac{\Tr(C)}{\sqrt[n]{\det(C)}} = \pm\left(\frac{K_B}{n}\right)\frac{1+i^{-n}}{1 + i^{-1}}$$
This should be related to the Witten-Reshetikhin-Turaev invariant of the mapping cylinder of the corresponding element of the mapping class group \cite{BWP}. We inspected these invariants as computed by Lisa Jeffrey \cite{J}, but do not personally see the connection.
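For completeness, here is a small numerical check (again only an illustration, with the same arbitrary choices $n=5$, $q=e^{2\pi i/n}$ and $B$ as above) of the identity $CC^{*}=nI_{n}$ and of the resulting modulus $|\det(C)| = \sqrt{n}^{\,n}$.
\begin{verbatim}
import numpy as np

n = 5
q = np.exp(2j * np.pi / n)
a, b, c, d = 1, 1, 1, 2
b_inv = pow(b, n - 2, n)

C = np.array([[q**(-b_inv * d * (i - a * j)**2 - 2 * c * j * (i - a * j) - a * c * j**2)
               for j in range(n)] for i in range(n)])

print(np.allclose(C @ C.conj().T, n * np.eye(n)))   # True: C C^* = n I_n
print(abs(np.linalg.det(C)), np.sqrt(n)**n)         # both approximately 55.9
\end{verbatim}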
\section{Introduction\label{sec:introduction}}
\IEEEPARstart{U}{nder-determined} inverse problems are of fundamental importance
to modern mathematical and machine learning applications. In these problems, one
aims to recover or approximate a ground truth signal $x_{0} \in \ensuremath{\mathbb{R}}^{N}$ from
noisy measurements $y \in \ensuremath{\mathbb{R}}^{m}$ when $m \ll N$. The paradigm of
generalized compressed sensing (GCS) further specifies that the signal $x_{0}$
be well characterized by some structural proxy, known \emph{a priori}, and that
the measurement process be linear: $y = Ax_{0} + \eta z$, where
$A \in \ensuremath{\mathbb{R}}^{m \times N}$ is typically \emph{random}, and $z$ is either a
random or deterministic corruption with noise scale $\eta > 0$.
For example, in MR imaging, one may wish to recover the wavelet coefficients of
an image by randomly subsampling its Fourier
coefficeints~\cite{lustig2007sparse, lustig2008compressed}. In geophysics, one
may wish to determine a region's bathymetry from a small number of radar
measurements taken at the surface~\cite{kumar2015source}, or to obtain a
subsurface image using a small number of
geophones~\cite{herrmann2012fighting}. Recent investigations suggest how
well-analyzed approaches for solving inverse problems may help to elucidate
``mysterious'' behaviours of high-dimensional non-linear function
approximators~\cite{hastie2019surprises, mei2019generalization}. Moreover,
compressed sensing theory may be used to prove recovery guarantees for certain
neural network architectures and particular data regimes~\cite{hand2019global}.
The now classical CS result~\cite{foucart2013mathematical} shows that when
$x_{0} \in \ensuremath{\mathbb{R}}^{N}$ is an $s$-sparse signal, $m \geq Cs \log (e N /s)$
measurements suffice to efficiently recover $x_{0}$ from $(y, A)$ \emph{with
high probability} on the realization of $A$. The \textsc{Lasso} is a common
and well-analyzed tool for effecting the recovery of
$x_{0}$~\cite{bickel2009simultaneous, chen2001atomic, shaobing1994basis,
foucart2013mathematical, stojnic2013framework, tibshirani1996regression,
van2008probing}. Currently, ``\textsc{Lasso}'' is an umbrella term referring
to three or more different \textsc{Lasso} programs, though it originally
referred to the $\ell_{1}$-constrained program {$(\mathrm{LS}_{\tau})$}, which is defined
below~\cite{tibshirani1996regression}. Effective for its ability to perform
simultaneous best-basis and subset selection~\cite{tibshirani1996regression},
the \textsc{Lasso} is a convex optimization approach that has several variants
and cousins~\cite{bickel2009simultaneous, oymak2013squared,
thrampoulidis2018precise, van2008probing}.
Of particular interest to this work, we introduce three common \textsc{Lasso}
programs and their solutions:
\begin{align}
\label{eq:ls}
\hat x(\tau) %
&\in \argmin_{x\in \ensuremath{\mathbb{R}}^{N}} \big\{ \|y - Ax\|_{2}^{2} : \|x\|_{1} \leq \tau \big\} %
\tag{$\mathrm{LS}_{\tau}$}
\\
\label{eq:bp}
\tilde x(\sigma) %
&\in \argmin_{x\in \ensuremath{\mathbb{R}}^{N}} \big\{ \|x\|_{1} : \|y - Ax\|_{2}^{2} \leq \sigma^{2} \big\} %
\tag{$\mathrm{BP}_{\sigma}$}
\\
\label{eq:qp}
x^{\sharp}(\lambda) %
&\in \argmin_{x\in \ensuremath{\mathbb{R}}^{N}} \big\{ \frac12 \|y - Ax\|_{2}^{2} + \lambda \|x\|_{1} \big\}. %
\tag{$\mathrm{QP}_{\lambda}$}
\end{align}
Some naming ambiguity for these programs exists in the literature. Here, we
refer to {$(\mathrm{LS}_{\tau})$} as constrained \textsc{Lasso}, {$(\mathrm{QP}_{\lambda})$} as unconstrained
\textsc{Lasso}, and {$(\mathrm{BP}_{\sigma})$} as [quadratically constrained] basis pursuit;
solutions for each program are denoted, respectively, by $\hat x(\tau)$,
$\tilde x(\sigma)$ and $x^{\sharp}(\lambda)$. Our notation and naming convention
for these three programs is similar to that used in~\cite{van2008probing}.
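For readers who wish to experiment, the following is a minimal numerical sketch (ours, not part of the paper) of the three programs using the \texttt{cvxpy} modelling package; the problem sizes, noise level, and parameter values $\tau$, $\sigma$, $\lambda$ below are arbitrary illustrative choices, not necessarily the tuned values discussed later.
\begin{verbatim}
import numpy as np
import cvxpy as cp

np.random.seed(0)
m, N, s, eta = 50, 200, 5, 0.1                  # arbitrary illustrative sizes
x0 = np.zeros(N); x0[:s] = 1.0                  # an s-sparse ground truth
A = np.random.randn(m, N) / np.sqrt(m)          # rows: independent isotropic (sub)gaussian
y = A @ x0 + eta * np.random.randn(m)

def solve(program, value):
    x = cp.Variable(N)
    if program == "LS":        # constrained LASSO (LS_tau)
        prob = cp.Problem(cp.Minimize(cp.sum_squares(y - A @ x)),
                          [cp.norm(x, 1) <= value])
    elif program == "BP":      # quadratically constrained basis pursuit (BP_sigma)
        prob = cp.Problem(cp.Minimize(cp.norm(x, 1)),
                          [cp.sum_squares(y - A @ x) <= value**2])
    else:                      # unconstrained LASSO (QP_lambda)
        prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(y - A @ x)
                                      + value * cp.norm(x, 1)))
    prob.solve()
    return x.value

for program, value in [("LS", np.linalg.norm(x0, 1)),   # tau = ||x0||_1
                       ("BP", eta * np.sqrt(m)),        # sigma roughly ||eta z||_2
                       ("QP", 0.1)]:                    # an arbitrary lambda
    print(program, np.linalg.norm(solve(program, value) - x0))
\end{verbatim}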
Generalizations of these programs, commonly referred to as generalized
\textsc{Lasso}, allow for the recovery of signals with other kinds of structure
that are well modelled by convex proxy sets. To introduce the generalized
\textsc{Lasso} programs, first let
$\emptyset \neq \mathcal{K} \subseteq \ensuremath{\mathbb{R}}^{N}$ be a convex set and denote by
$\|\cdot \|_{\mathcal{K}}$ the Minkowski functional of $\mathcal{K}$ (i.e.,~ gauge). For
$\sigma, \tau, \lambda > 0$, the following \emph{generalized} \textsc{Lasso}
programs, which are convex, are defined by:
\begin{align}
\label{eq:lsK}
\hat x(\tau; A, y, \mathcal{K}) %
&\in \argmin_{x\in \ensuremath{\mathbb{R}}^{N}} \big\{ \|y - Ax\|_{2}^{2} : %
x \in \tau \mathcal{K} \big\} %
\tag{$\mathrm{LS}_{\tau, \mathcal{K}}$}
\\
\label{eq:bpK}
\tilde x(\sigma; A, y, \mathcal{K}) %
&\in \argmin_{x\in \ensuremath{\mathbb{R}}^{N}} \big\{ \|x\|_{\mathcal{K}} : \|y - Ax\|_{2}^{2} \leq \sigma^{2} \big\} %
\tag{$\mathrm{BP}_{\sigma, \mathcal{K}}$}
\\
\label{eq:qpK}
x^{\sharp}(\lambda; A, y, \mathcal{K}) %
&\in \argmin_{x\in \ensuremath{\mathbb{R}}^{N}} \big\{ \frac12 \|y - Ax\|_{2}^{2} + \lambda \|x\|_{\mathcal{K}} \big\}. %
\tag{$\mathrm{QP}_{\lambda, \mathcal{K}}$}
\end{align}
In the standard CS setting, the gauge is the $\ell_{1}$-norm, though $x_{0}$ is
assumed to belong to the set of $s$-sparse vectors
$\Sigma_{s}^{N} := \{ x \in \ensuremath{\mathbb{R}}^{N} : |\supp(x)| \leq s\}$. So,
$x_{0}$ does not necessarily belong to the convex proxy set $\mathcal{K} = B_{1}^{N}$,
where $B_{1}^{N}$ denotes the $N$-dimensional unit $1$-norm ball. In particular,
$B_{1}^{N}$ itself serves as a convex proxy set for sparse vectors in the sense
that if $x\in \ensuremath{\mathbb{R}}^{N}$ is $s$-sparse, then $\|x\|_{1} / \|x\|_{2}$ is small
relative to non-sparse vectors.
It is worth mentioning a brief note on uniqueness. When $A$ is under-determined
and has a suitable randomness, it is straightforward to show the programs {$(\mathrm{BP}_{\sigma})$}
and {$(\mathrm{QP}_{\lambda})$} admit unique solutions almost surely on the realization of $A$. A
detailed exposition for {$(\mathrm{QP}_{\lambda})$} is given in~\cite{tibshirani2013lasso}. For a
sufficient condition on $A$ giving uniqueness of {$(\mathrm{BP}_{\sigma})$},
see~\cite{zhang2015necessary}. However, {$(\mathrm{LS}_{\tau})$} does not always admit a unique
solution. For instance, if $\tau$ is ``too large'', then there may be infinitely
many solutions $x \in \tau B_{1}^{N}$ satisfying $\|y - Ax \|_{2} = 0$. This
fact is fundamental to one of our results in~\autoref{sec:LS-instability}. By
mild abuse of notation, when the solution to a program is unique we will replace
``$\in$'' with ``$=$'' in the definitions of the solutions for each
program. Otherwise, we define each of $\hat x(\tau), \tilde x(\sigma)$, and
$x^{\sharp}(\lambda)$ as the solution yielding worst-case error, and which
appears first when ordered lexicographically. For example, $\hat x(\tau)$ refers
to the particular solution solving {$(\mathrm{LS}_{\tau})$} such that
$\|\hat x(\tau) - x_{0}\|_{2} \geq \|\hat x - x_{0}\|_{2}$ for any other
$\hat x$ solving {$(\mathrm{LS}_{\tau})$}. We make an analogous modification to the definitions of
the solutions to the generalized \textsc{Lasso} programs.
To relate the recovery performance of each program, we compare their recovery
errors. While there are several possibilities for measuring the recovery error,
the expected squared error and noise-normalized expected squared error of the
estimator are common when the noise, $z$, is random~\cite{oymak2013squared}. In
this work, we'll define the loss for an estimator as the noise normalized
squared error of that estimator (with respect to the ground truth signal
$x_{0}$); and define the estimator's risk as the expectation of the loss with
respect to $z$. Note that the risk and loss are functions of the random matrix
$A$. Specifically, the loss is defined for {$(\mathrm{LS}_{\tau})$}, {$(\mathrm{BP}_{\sigma})$}, {$(\mathrm{QP}_{\lambda})$} respectively by:
\begin{align*}
\hat{L}(\tau; x_{0}, A, \eta z) %
&:= \eta^{-2} \|\hat x(\tau) - x_{0} \|_{2}^{2},
\\
\tilde L(\sigma; x_{0}, A, \eta z) %
&:= \eta^{-2} \|\tilde x(\sigma) - x_{0} \|_{2}^{2},
\\
L^{\sharp}(\lambda; x_{0}, A, \eta z) %
&:= \eta^{-2} \|x^{\sharp}(\eta\lambda) - x_{0} \|_{2}^{2},
\end{align*}
and the risk by:
\begin{align*}
\hat R (\tau; x_{0}, A, \eta) %
&:= \E_{z} \hat{L}(\tau; x_{0}, A, \eta z) %
\\\tilde R (\sigma; x_{0}, A, \eta) %
&:= \E_{z} \tilde L(\sigma; x_{0}, A, \eta z) %
\\R^{\sharp} (\lambda; x_{0}, A, \eta) %
&:= \E_{z} L^{\sharp}(\lambda; x_{0}, A, \eta z). %
\end{align*}
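The risk can also be estimated empirically; the following self-contained sketch (an illustration only, with arbitrarily chosen problem sizes) approximates $\hat R(\tau; x_{0}, A, \eta)$ at the parameter value $\tau = \|x_{0}\|_{1}$ by averaging the loss over independent draws of $z$ for one fixed draw of $A$.
\begin{verbatim}
import numpy as np
import cvxpy as cp

np.random.seed(1)
m, N, s, eta, trials = 50, 200, 5, 0.1, 20      # arbitrary illustrative choices
x0 = np.zeros(N); x0[:s] = 1.0
A = np.random.randn(m, N) / np.sqrt(m)          # one fixed draw of A
tau = np.linalg.norm(x0, 1)

losses = []
for _ in range(trials):
    z = np.random.randn(m)
    y = A @ x0 + eta * z
    x = cp.Variable(N)
    cp.Problem(cp.Minimize(cp.sum_squares(y - A @ x)),
               [cp.norm(x, 1) <= tau]).solve()
    losses.append(np.linalg.norm(x.value - x0)**2 / eta**2)

print(np.mean(losses))    # Monte Carlo estimate of the risk at tau = ||x0||_1
\end{verbatim}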
Minimax order-optimal error rates are well-known for
$\hat x(\tau; y, A, \mathcal{K})$ when $\tau$ is equal to the optimal
parameter choice, $A$ is a matrix whose rows are independent, isotropic
subgaussian random vectors, and $\mathcal{K}$ is a symmetric, closed convex
set containing the origin~\cite{foucart2013mathematical, liaw2017simple,
oymak2013squared}. A kind of equivalence between the three estimators
(\emph{cf}. \autoref{prop:foucart-program-equivalence}) allows, in kind, for
the characterization of the error rates for $\tilde x(\sigma)$ and
$x^{\sharp}(\lambda)$ when $\sigma$ and $\lambda$ are optimally
tuned. However, the error of $\hat x(\tau; y, A, \mathcal{K})$ is not fully
characterized in the setting where $\tau$ is not the optimal
choice. Similarly, the programs {$(\mathrm{LS}_{\tau})$}, {$(\mathrm{BP}_{\sigma})$} and {$(\mathrm{QP}_{\lambda})$} are often referred to
interchangeably, but a full comparison of the error of the three estimators
$\hat x(\tau), \tilde x(\sigma)$, and $x^{\sharp}(\lambda)$, as a function of
their governing parameters, is lacking. It is an open question if there are
settings in which one estimator is always preferable to another.
Understanding the sensitivity of a \textsc{Lasso} program to its parameter
choice is crucial. While theoretical guarantees for recovery error are
typically given for an oracular choice of the parameter, the optimal parameter
setting is generally unknown in practice. Thus, the usefulness of theoretical
recovery guarantees may hinge on the assumption that the recovery error is
stable with respect to variation of the governing parameter. In particular,
one may hope that small changes in the governing parameter beget no more than
small changes in the risk or loss.
We take a step toward characterizing the performance and sensitivity of the
three programs introduced by examining particular asymptotic parameter regimes
for each program. We do this by extending results of~\cite{berk2020sl1mpc,
berk2019pdparmsens} from the setting where $A$ is identity to the setting
where $A$ is a matrix whose rows are independent, isotropic subgaussian random
vectors. In this setting, we prove the existence of regimes in which CS
programs exhibit sensitivity to their parameter choice: small changes in
parameter values can lead to blow-up in risk. Despite the notion of
equivalence hinted at above, we demonstrate regimes in which one program
exhibits sensitivity, while the other two do not. For example, in the very
sparse regime, our theory and simulations suggest not to use {$(\mathrm{BP}_{\sigma})$}. In the
low-noise regime, they suggest not to use {$(\mathrm{LS}_{\tau})$}. Assuredly, we identify
situations where CS programs perform well in theory and \emph{in silico}
alike. We hope that the asymptotic theory, coupled with fairly extensive
numerical simulations, aid practitioners in deciding which CS program to
select.
\section{Summary of results to follow\label{sec:summary-results}}
As a way of alluding to the main results to follow, we start by describing three
sibling results. We intend for them to contrast the behaviour of the three
$\ell_{1}$ programs that are the main focus of this work. Define the worst-case
risk for {$(\mathrm{LS}_{\tau})$} in the low-noise regime by:
\begin{align*}
R^{*}(s, A) %
:= \lim_{\eta \to 0} \sup_{x \in \Sigma_{s}^{N} \cap \partial B_{1}^{N}} %
\hat R (1; x, A, \eta)
\end{align*}
Importantly, under mild assumptions, $R^{*}(s, A)$ is nearly equivalent to the
optimally tuned worst-case risk for {$(\mathrm{LS}_{\tau})$}. Namely, for all $\eta > 0$,
\begin{align*}
R^{*}(s, A) %
\leq \sup_{x \in \Sigma_{s}^{N}} \hat R(\|x\|_{1}; x, A, \eta)
\leq C R^{*}(s, A).
\end{align*}
This result is treated formally in \autoref{sec:rhat-nearly-monotone}. In each
case discussed below, the performance of the estimators will be compared to
$R^{*}(s, A)$ as a benchmark, noting that this quantity is minimax order optimal
in the sense of \autoref{prop:Rstar-minimax-optimal}.
In~\autoref{sec:LS-instability}, we show that {$(\mathrm{LS}_{\tau})$} exhibits an asymptotic
instability in the low-noise regime. There is exactly one value $\tau^{*}$ of
the governing parameter yielding minimax order-optimal error, with any choice
$\tau \neq \tau^{*}$ yielding markedly worse behaviour. This result holds for
normalized $K$-subgaussian matrices $A$, which are defined
in~\autoref{def:isgrm}. The intuition provided by this result is that {$(\mathrm{LS}_{\tau})$} is
extremely sensitive to the value of $\tau$ in the low-noise regime, making
empirical use of {$(\mathrm{LS}_{\tau})$} woefully unstable in this regime.
\begin{thm}[{$(\mathrm{LS}_{\tau})$} instability simplified]
Let $1 \leq s \leq m < N < \infty$ be integers. If $A \in \ensuremath{\mathbb{R}}^{m \times N}$
is a normalized $K$-subgaussian matrix with
$m > C_{\varepsilon} \delta^{-2} K^{2} \log (K) s \log
\left(\tfrac{N}{s}\right)$, then with probability at least $1 - \varepsilon$ on
the realization of $A$,
\begin{align*}
\lim_{\eta \to 0} \sup_{x \in \Sigma_{s}^{N} \cap B_{1}^{N}} %
\frac{\hat R(\tau; x, A, \eta)}{R^{*}(s, A)} =
\begin{cases}
1 & \tau = 1\\
\infty & \text{otherwise}
\end{cases}
\end{align*}
\end{thm}
Next, in~\autoref{sec:qp} we state a rephrasing
of~\cite[Theorem~3]{shen2015stable}. The result shows there is a parameter
$\lambda^{*}$ such that {$(\mathrm{QP}_{\lambda})$} is not sensitive to its parameter choice for
$\lambda \geq \lambda^{*}$. Right-sided parameter stability of {$(\mathrm{QP}_{\lambda})$} was first
established in~\cite[Theorem~7.2]{bickel2009simultaneous}. This well-known
result is contrasted in~\autoref{sec:numerical-results} with numerical results
demonstrating a left-sided parameter instability for {$(\mathrm{QP}_{\lambda})$} in the regime of high
sparsity, low noise, and large dimension.
\begin{thm}[{$(\mathrm{QP}_{\lambda})$} right-sided stability]
Let $A \in \ensuremath{\mathbb{R}}^{m \times N}$ be a normalized $K$-subgaussian matrix for
$1 \leq m < N < \infty$. There is an absolute constant $C> 0$ such that if
$\lambda \geq C \sqrt{\log N}$ and
$m \geq C_{\varepsilon} \delta^{-2} K^{2} \log(K) s \log
\left(\tfrac{N}{s}\right)$, then with probability at least $1 - \varepsilon$
on the realization of $A$,
\begin{align*}
R^{\sharp}(\lambda; x_{0}, A, \eta) \leq C \lambda^{2} s.
\end{align*}
\end{thm}
In the initial work, $\lambda^{*} = 2 \sqrt{2\log N}$
\cite[Theorem~7.2]{bickel2009simultaneous}. In the present phrasing, we write
only that $\lambda^{*} = C\sqrt{\log N}$ for an absolute constant $C > 0$. Note
that when the data are Gaussian, right-sided stability of {$(\mathrm{QP}_{\lambda})$} has been
examined in~\cite{thrampoulidis2015asymptotically}; of {$(\mathrm{QP}_{\lambda, \mathcal{K}})$},
in~\cite{thrampoulidis2018precise}.
Finally, in~\autoref{sec:analysis-bp} we show that {$(\mathrm{BP}_{\sigma})$} is poorly behaved for
all $\sigma > 0$ when $x_{0}$ is very sparse. In particular, under mild
restrictions on the aspect ratio of the measurement matrix, we show that
$\tilde R(\sigma; x_{0}, A, \eta)$ is asymptotically suboptimal for \emph{any}
$\sigma > 0$ when $s / N$ is sufficiently small. The theorem below states that
the minimax risk for {$(\mathrm{BP}_{\sigma})$}, relative to the benchmark risk $R^{*}$, converges in
probability to $\infty$.
\begin{thm}[{$(\mathrm{BP}_{\sigma})$} instability simplified]
\label{thm:bp-instability-simplified}
Fix $\eta > 0$, an integer $s \geq 1$, and suppose for $m : \ensuremath{\mathbb{N}} \to \ensuremath{\mathbb{N}}$
that $m (N)/ N \to \gamma \in (0, 1)$. For each $N$, suppose
$A = A(N) \in \ensuremath{\mathbb{R}}^{m(N) \times N}$ is a normalized $K$-subgaussian
matrix. Then, for all $M > 0$,
\begin{align*}
\lim_{N\to \infty} \mathbb{P}\left(%
\inf_{\sigma > 0} \sup_{x \in \Sigma_{s}^{N}} %
\frac{ \tilde R(\sigma; x, A, \eta)}{R^{*}(s,A)} > M \right) %
= 1.
\end{align*}
\end{thm}
Numerical results supporting~\autoref{sec:LS-instability}
and~\autoref{sec:analysis-bp} are discussed
in~\autoref{sec:numerical-results}. Proofs of most of the theoretical results
are deferred to~\autoref{sec:proofs}. Next, we add two clarifications. First,
the three programs are equivalent in a sense.
\begin{prop}[Program equivalence {\cite[Proposition 3.2]{foucart2013mathematical}}]
\label{prop:foucart-program-equivalence}
Let $0 \neq x_{0} \in \ensuremath{\mathbb{R}}^{N}$ and $\lambda > 0$. Where
$x^{\sharp}(\lambda)$ solves {$(\mathrm{QP}_{\lambda})$}, define
$\tau := \|x^{\sharp}(\lambda)\|_{1}$ and
$\sigma := \|y - Ax^{\sharp}(\lambda)\|_{2}$. Then $x^{\sharp}(\lambda)$
solves {$(\mathrm{LS}_{\tau})$} and {$(\mathrm{BP}_{\sigma})$}.
\end{prop}
However, $\tau$ and $\sigma$ are functions of $z$, a random variable, and this
mapping may not be smooth. Thus, parameter stability of one program is not
implied by that of another. Second, $R^{*}(s, A)$ has the desirable property
that it is computable up to multiplicative constants~\cite{liaw2017simple}.
\begin{prop}[Risk equivalences]
\label{prop:Rstar-minimax-optimal}
Fix $\delta, \varepsilon > 0$, let $1 \leq s \leq m < \infty$ and $N \geq 2$ be
integers, and let $\eta > 0$. Suppose $A \in \ensuremath{\mathbb{R}}^{m \times N}$ is a normalized
$K$-subgaussian matrix satisfying
$m > C_{\varepsilon} \delta^{-2} K^{2} \log K s\log(N/s)$, and suppose that
$y = A x_{0} + \eta z$ for $z \in \ensuremath{\mathbb{R}}^{m}$ with
$z_{i} \ensuremath{\overset{\text{iid}}{\sim}} \mathcal{N}(0, 1)$. Let
$M^{*}(s, N) := \inf_{x^{*}} \sup_{x_{0} \in \Sigma_{s}^{N}}
\eta^{-2} \|x^{*} - x_{0}\|_{2}^{2}$ be the minimax risk over arbitrary
estimators $x^{*} = x^{*}(y)$. With probability at least $1 - \varepsilon$ on
the realization of $A$, there exist constants $c, C_{\delta} > 0$ such that
\begin{align*}
cs \log (N /s ) %
& \leq M^{*}(s, N) %
\leq \inf_{\lambda > 0} \sup_{x_{0} \in \Sigma_{s}^{N}} %
R^{\sharp}(\lambda; x_{0}, A, \eta) %
\\
& \leq C_{\delta} R^{*}(s, A) %
\leq C_{\delta} s \log (N /s).
\end{align*}
\end{prop}
In this work, we focus primarily on the versions of \textsc{Lasso} for which
$\|\cdot\|_{1}$ is the structural proxy, and $\Sigma_{s}^{N}$ the structure set
for the data $x_{0}$. In addition, we discuss the pertinence of our results to
the Generalized \textsc{Lasso} setting. For instance,
in~\autoref{lem:nuc-norm-recovery}, we show how~\autoref{thm:ls-instability}
adapts to the setting of parameter sensitivity for low-rank matrix recovery using
the nuclear norm. Further, we connect our discussion on {$(\mathrm{QP}_{\lambda})$} in~\autoref{sec:qp} to
results for more general gauges~\cite{thrampoulidis2015asymptotically,
thrampoulidis2018precise}, works which have developed tools suitable for
analyzing parameter sensitivity of {$(\mathrm{QP}_{\lambda, \mathcal{K}})$} when the data are Gaussian. While it
remains an open question to determine how our results
in~\autoref{sec:analysis-bp} may be extended to analyze parameter sensitivity of
{$(\mathrm{BP}_{\sigma, \mathcal{K}})$}, we conjecture that a suboptimality result like that exemplified
in~\autoref{thm:bp-instability-simplified} exists under analogous assumptions
for {$(\mathrm{BP}_{\sigma, \mathcal{K}})$}.
\section{Related Work}
\label{sec:related-work}
Several versions of the \textsc{Lasso} program are well-studied in the context
of solving CS problems~\cite{foucart2013mathematical}. The program {$(\mathrm{LS}_{\tau})$} was
first posed in~\cite{tibshirani1996regression}. An analysis of its risk when
$\tau = \|x_{0}\|_{1}$ and the noise $z$ is deterministic may be found
in~\cite{foucart2013mathematical}. A sharp non-asymptotic analysis for the
generalized constrained \textsc{Lasso} may be found
in~\cite{oymak2013squared}. There, the risk was shown to depend on specific
geometric properties of the regularizer. When the measurement matrix has
independent isotropic subgaussian rows, it has been demonstrated how a geometric
quantity may unify the quantification of generalized constrained \textsc{Lasso}
risk~\cite{liaw2017simple}. Risk bounds for generalized constrained
\textsc{Lasso} with nonlinear observations were characterized
in~\cite{plan2016generalized}. Recent work has shown how dimensional parameters
governing signal recovery problems in ridgeless least squares regression affect
the average out-of-sample risk in some settings~\cite{hastie2019surprises,
mei2019generalization}.
Non-asymptotic bounds for the unconstrained \textsc{Lasso} were developed
in~\cite{bickel2009simultaneous}, which also determines an order-optimal choice
for the program's governing parameter. The asymptotic risk for the unconstrained
\textsc{Lasso} is determined analytically in~\cite{bayati2011dynamics,
bayati2012lasso}. Sharp, non-asymptotic risk bounds for the generalized
unconstrained \textsc{Lasso} are developed
in~\cite{thrampoulidis2015regularized,
thrampoulidis2015asymptotically}. In~\cite{thrampoulidis2015asymptotically},
$R^{\sharp}(\lambda)$ is examined for $\lambda$ about $\lambda_{\mathrm{opt}}$,
while~\cite{thrampoulidis2018precise} examines the risk as a function of its
governing parameter for other kinds of $M$-estimators. Both assume Gaussianity
of the data, and neither considers sensitivity with respect to parameter choice.
Basis pursuit is a third popular phrasing of the \textsc{Lasso} program, first
proposed in~\cite{shaobing1994basis, chen2001atomic}. For a theoretical
treatment of basis pursuit, we refer to~\cite{foucart2013mathematical}. Analytic
connections between basis pursuit and other \textsc{Lasso} programs are
exploited for fast computation of solutions in~\cite{van2008probing}.
Other modifications of the standard \textsc{Lasso} have also been examined. For
example, sharp non-asymptotic risk bounds for the so-called square-root
\textsc{Lasso} were obtained in~\cite{oymak2013squared}. Related to basis
pursuit, instance optimality of an exact $\ell_{1}$ decoder is analyzed
in~\cite{wojtaszczyk2010stability}.
Sensitivity to parameter choice was analyzed for three proximal denoising (PD)
programs that are analogues of the ones considered in this
work~\cite{berk2020sl1mpc, berk2019pdparmsens}. PD is a simplification of CS, in
which $A$ is the identity matrix. There, the authors prove an asymptotic
cusp-like behaviour for constrained PD risk in the low-noise regime, an
asymptotic phase transition for unconstrained PD risk in the low-noise regime,
and asymptotic suboptimality of the basis pursuit PD risk in the very sparse
regime. The current work develops non-trivial generalizations of the results
in~\cite{berk2020sl1mpc}, proving asymptotic results about the sensitivity of
$\ell_{1}$ minimization for the generalized constrained \textsc{Lasso}, and
generalized basis pursuit.
\section{Main theoretical tools}
\label{sec:main-theor-tools}
\subsection{Notation}
\label{sec:notation}
Let $\cvx (\mathcal{C})$ denote the convex hull of the set
$\mathcal{C}\subseteq \ensuremath{\mathbb{R}}^{N}$:
\begin{align*}
\cvx (\mathcal{C}) %
& := \{ \sum_{j = 1}^{J} \alpha_{j} x_{j} : x_{j} \in \mathcal{C}, \alpha_{j} \geq 0, \sum \alpha_{j} = 1, J < \infty \} \\
& = \bigcap_{\mathclap{\substack{\mathcal{C}' \supseteq \mathcal{C}\\\mathcal{C}' \text{ is convex}}}} \mathcal{C}'.
\end{align*}
Let $\cone (\mathcal{C})$ denote the cone of $\mathcal{C}$:
\begin{align*}
\cone(\mathcal{C}) := \{ \lambda x: \lambda \geq 0, x \in \mathcal{C}\}.
\end{align*}
Define the descent cone of a convex function $f : \ensuremath{\mathbb{R}}^{N} \to \ensuremath{\mathbb{R}}$ at a
point $x \in \ensuremath{\mathbb{R}}^{N}$ by
\begin{align*}
T_{f}(x) := \cone \{ z - x : z\in \ensuremath{\mathbb{R}}^{N}, f(z) \leq f(x) \}.
\end{align*}
When $f = \|\cdot\|_{1}$ we write $T(x) := T_{\|\cdot\|_{1}}(x)$. By abuse of
notation, we write $T_{\mathcal{C}}(x) := T_{\|\cdot\|_{\mathcal{C}}} (x)$ to
refer to the descent cone of the gauge $\|\cdot\|_{\mathcal{C}}$ at a point $x$,
where $\mathcal{C} \subseteq \ensuremath{\mathbb{R}}^{N}$ is a convex set.
Given $x \in \ensuremath{\mathbb{R}}^{N}$, denote the $0$-norm of $x$ by
$\|x\|_{0} = \# \{ j \in [N] : x_{j} \neq 0\}$, where
$[N] := \{ 1, 2, \ldots, N\}$. Note that $\|\cdot \|_{0}$ is not a norm. For
$0 \leq p \leq \infty$, denote the $\ell_{p}$ ball by
$B_{p}^{N} := \{ x \in \ensuremath{\mathbb{R}}^{N} : \|x\|_{p} \leq 1\}$. For $s, N \in \ensuremath{\mathbb{N}}$
with $0 \leq s \leq N$, denote the set of at most $s$-sparse vectors by
\begin{align*}
\Sigma_{s}^{N} := \{ x\in \ensuremath{\mathbb{R}}^{N} : \|x\|_{0} \leq s\},
\end{align*}
and define $\Sigma_{-1}^{N} := \emptyset$. Define the following sets:
\begin{align*}
\mathcal{L}_{s}(r) %
& := r\cdot \cvx (\Sigma_{s}^{N} \cap \ensuremath{\mathbb{S}}^{N-1}),
\\
\mathcal{L}_{s} %
& := \mathcal{L}_{s}(2),
\\
\mathcal{L}^{*}_{s} %
& := \mathcal{L}_{2s}(4). %
\end{align*}
Additionally, define the sets:
\begin{align*}
\mathcal{J}_{s}^{N} %
&:= \left\{ x \in \ensuremath{\mathbb{R}}^{N} : \|x\|_{1} \leq \sqrt s \|x\|_{2}\right\},
\\
\mathcal{K}_{s}^{N} %
&:= \left\{ x \in \ensuremath{\mathbb{R}}^{N} : \|x\|_{2} \leq 1 \, %
\And \, \|x\|_{1} \leq \sqrt s\right\}. %
\end{align*}
Observe that $\mathcal{J}_{s}^{N}$ is a cone, and that
$\mathcal{K}_{s}^{N} = B_{2}^{N} \cap \sqrt s B_{1}^{N} =
\cvx(\mathcal{J}_{s}^{N} \cap \ensuremath{\mathbb{S}}^{N-1})$.
\subsection{Tools from probability theory}
\label{sec:geometric-tools}
We start by introducing subgaussian random variables, which generalize Gaussian
random variables, but retain certain desirable properties, such as Gaussian-like
tail decay and moment bounds.
\begin{defn}[Subgaussian random variable]
A random variable $X$ is called subgaussian if there exists a constant $K > 0$
such that the moment generating function of $X^{2}$ satisfies, for all
$\lambda$ such that $|\lambda| \leq K^{-1}$,
\begin{align*}
\E \exp(\lambda^{2} X^{2}) \leq \exp(K^{2} \lambda^{2}).
\end{align*}
The subgaussian norm of $X$ is defined by
\begin{align*}
\|X\|_{\psi_{2}} := \inf \{ t > 0 : \E \exp(X^{2} / t^{2}) \leq 2 \}.
\end{align*}
\end{defn}
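To make the definition concrete, the following Monte Carlo sketch approximates $\|X\|_{\psi_{2}}$ by bisection on $t$; the distributions, sample sizes, and bracketing interval are illustrative choices only. For a Rademacher variable the exact value $1/\sqrt{\log 2}$ is available as a sanity check.
\begin{verbatim}
# Sketch only: Monte Carlo/bisection approximation of the subgaussian norm
#     ||X||_{psi_2} = inf{ t > 0 : E exp(X^2 / t^2) <= 2 }.
import numpy as np

def psi2_norm_mc(samples, lo=0.5, hi=50.0, iters=60):
    """Bisection for the smallest t with mean(exp(samples^2/t^2)) <= 2.
    Assumes lo is infeasible and hi is feasible for the given samples."""
    def feasible(t):
        return np.mean(np.exp(samples ** 2 / t ** 2)) <= 2.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if feasible(mid) else (mid, hi)
    return hi

rng = np.random.default_rng(0)
rademacher = rng.choice([-1.0, 1.0], size=100_000)
# For Rademacher X, E exp(X^2/t^2) = exp(1/t^2), so ||X||_{psi_2} = 1/sqrt(log 2).
print(psi2_norm_mc(rademacher), 1.0 / np.sqrt(np.log(2.0)))
print(psi2_norm_mc(rng.uniform(-1.0, 1.0, size=100_000)))  # bounded => subgaussian
\end{verbatim}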
We similarly define subexponential random variables.
\begin{defn}[Subexponential random variable]
A random variable $X$ is called subexponential if there exists a constant
$K > 0$ such that the moment generating function of $|X|$ satisfies, for all
$\lambda$ such that $0 \leq \lambda \leq K^{-1}$,
\begin{align*}
\E \exp ( \lambda |X|) \leq \exp(K \lambda ).
\end{align*}
The subexponential norm of $X$ is defined by
\begin{align*}
\|X\|_{\psi_{1}} := \inf \{ t > 0 : \E \exp(|X| / t) \leq 2 \}.
\end{align*}
\end{defn}
Additionally, we call $X \in \ensuremath{\mathbb{R}}^{N}$ a $K$-subgaussian random vector if
$\|X\|_{\psi_{2}} := \sup_{a \in \ensuremath{\mathbb{S}}^{N-1}} \|\ip{a, X}\|_{\psi_{2}} \leq K$;
analogously so for $K$-subexponential random vectors. Where it is either clear
or irrelevant, we may omit observing the norm parameter and refer to a
$K$-subgaussian random vector simply as a subgaussian random vector; likewise
with a subexponential random vector. For properties and equivalent definitions
of subgaussian and subexponential random variables and vectors,
see~\cite[Chapter~2]{vershynin2018high}. Next, we introduce a piece of jargon
for the sake of concision.
\begin{defn}[$K$-subgaussian matrix]
\label{def:isgrm}
Given $m, N \in \ensuremath{\mathbb{N}}$, call $A \in \ensuremath{\mathbb{R}}^{m \times N}$ a $K$-subgaussian
matrix if $A$ has rows $A_{i}^{T}$ that are independent, isotropic
$K$-subgaussian random vectors:
\begin{align*}
\E A_{i} A_{i}^{T} = I, \qquad %
\| A_{i}\|_{\psi_{2}} \leq K, \qquad %
i \in [m]. %
\end{align*}
Further, call $\tfrac{1}{\sqrt m} A$ a normalized $K$-subgaussian matrix.
\end{defn}
In this work, we crucially leverage the fact that $A$ satisfies a restricted
isometry property (RIP). An exposition on RIP and restricted isometry constants
may be found in~\cite{foucart2013mathematical}. As the results of this work
concern $K$-subgaussian matrices, we state a classical version of RIP for such
matrices restricted to the set of $s$-sparse vectors.
\begin{thm}[RIP for subgaussian matrices {\cite[Theorem 9.2]{foucart2013mathematical}}]
\label{thm:rip-subgaus}
Let $A \in \ensuremath{\mathbb{R}}^{m \times N}$ be a normalized $K$-subgaussian matrix. There
exists a constant $C = C_{K} > 0$ such that the restricted isometry constant
of $A$ satisfies $\delta_{s} \leq \delta$ with probability at least
$1 - \varepsilon$ provided
\begin{align*}
m \geq C \delta^{-2} ( s \ln (eN/s) + \ln (2 \varepsilon^{-1})).
\end{align*}
\end{thm}
\begin{rmk}
Setting $\varepsilon = 2 \exp( -\delta^{2} m / (2C))$ yields the condition
\begin{align*}
m \geq 2 C \delta^{-2} s \ln ( eN/s)
\end{align*}
which guarantees that $\delta_{s} \leq \delta$ with probability at least $1 - 2 \exp ( - \delta^{2} m / (2C))$.
\end{rmk}
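Computing $\delta_{s}$ exactly is combinatorial in nature. As a purely heuristic numerical illustration (not a certificate), the following Python sketch samples random supports of size $s$ and records the worst deviation of the corresponding submatrix from an isometry, which yields a lower bound on $\delta_{s}$; the matrix, dimensions, and trial count are our own choices for the sketch.
\begin{verbatim}
# Sketch only: heuristic lower bound on the restricted isometry constant
# delta_s of a normalized matrix, by checking randomly sampled supports.
import numpy as np

def rip_lower_bound(A, s, n_trials=200, seed=0):
    rng = np.random.default_rng(seed)
    N = A.shape[1]
    delta = 0.0
    for _ in range(n_trials):
        S = rng.choice(N, size=s, replace=False)       # a random support of size s
        sv = np.linalg.svd(A[:, S], compute_uv=False)  # singular values of A_S
        delta = max(delta, sv[0] ** 2 - 1.0, 1.0 - sv[-1] ** 2)
    return delta   # lower bound: only the sampled supports are checked

rng = np.random.default_rng(1)
m, N, s = 400, 1000, 10
A = rng.normal(size=(m, N)) / np.sqrt(m)               # normalized Gaussian matrix
print("heuristic lower bound on delta_s:", rip_lower_bound(A, s))
\end{verbatim}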
A tool necessary to the development of the results in
\autoref{sec:LS-instability} and \autoref{sec:analysis-bp} (specifically
\autoref{prop:unif-noise-scale}, \ref{prop:gw-lb-2}, and \ref{prop:gw-ub-2})
characterizes the variance and tail decay of the supremum of a Gaussian
process. For an introduction to random processes, we refer the reader
to~\cite{vershynin2018high, adler2009random}.
\begin{thm}[Borell-TIS inequality {\cite[Theorem 2.1.1]{adler2009random}}]
\label{thm:borell-tis}
Let $T$ be a topological space and let $\{f_{t}\}_{t \in T}$ be a centred (i.e.,~
mean-zero) Gaussian process almost surely bounded on $T$ with
\begin{align}
\label{eq:borell-norm}
\vertiii{f}_{T} %
&:= \sup_{t \in T} f_{t}, %
&
\sigma_{T}^{2} %
&:= \sup_{t\in T} \E \big[ f_{t}^{2} \big]
\end{align}
such that $\vertiii{f}_{T}$ is almost surely finite. Then $\E \vertiii{f}_{T}$ and
$\sigma_{T}$ are both finite and for each $u > 0$,
\begin{align*}
\mathbb{P}\big( \vertiii{f}_{T} > \E \vertiii{f}_{T} + u\big) %
\leq \exp\big( - \frac{u^{2}}{2 \sigma_{T}^{2}} \big).
\end{align*}
Observe that $\vertiii{f}_{T}$ is notation; $\vertiii{\cdot}_{T}$ is not a
norm. By symmetry, one may derive an analogous lower-tail
inequality. Consequently, one also has for each $u > 0$,
\begin{align*}
\mathbb{P}\big( \left|\vertiii{f}_{T} - \E \vertiii{f}_{T}\right| > u\big) %
\leq 2 \exp\big( - \frac{u^{2}}{2 \sigma_{T}^{2}} \big).
\end{align*}
\end{thm}
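As an illustrative numerical check of the tail bound, the sketch below considers the finite Gaussian process $f_{t} = \ip{g, t}$ indexed by unit-norm points $t$ (so that $\sigma_{T} = 1$) and compares the empirical upper-tail frequency of $\vertiii{f}_{T}$ with $\exp(-u^{2}/2)$; the empirical mean of the suprema stands in for $\E \vertiii{f}_{T}$, and all dimensions are illustrative choices.
\begin{verbatim}
# Sketch only: empirical check of the Borell-TIS tail bound for the finite
# Gaussian process f_t = <g, t>, with t ranging over unit-norm points.
import numpy as np

rng = np.random.default_rng(0)
m, n_pts, trials, u = 50, 200, 20000, 1.0
T = rng.normal(size=(m, n_pts))
T /= np.linalg.norm(T, axis=0)                 # unit-norm index points: sigma_T = 1
sups = (rng.normal(size=(trials, m)) @ T).max(axis=1)   # sup_t <g, t> per draw
# Empirical mean of the suprema stands in for E sup_t f_t below.
emp_tail = np.mean(sups > sups.mean() + u)
print("empirical tail:", emp_tail, " Borell-TIS bound:", np.exp(-u ** 2 / 2))
\end{verbatim}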
\subsection{Geometric tools from probability}
\label{sec:geometric-tools-from}
Analysis of structured signals benefits from the ability to characterize their
\emph{effective dimension}. In this work, we capture this notion of effective
dimension with the Gaussian complexity and Gaussian width (gw), which
are closely related.
\begin{defn}[Gaussian complexity]
Let $T \subseteq \ensuremath{\mathbb{R}}^{N}$. Define the Gaussian complexity of $T$ by
\begin{align*}
\gamma (T) %
:= \E \sup_{x \in T} \left|\ip{x, g}\right|, %
\qquad g \sim \mathcal{N}(0, I_{N}).
\end{align*}
\end{defn}
\begin{defn}[Gaussian width]
Let $T \subseteq \ensuremath{\mathbb{R}}^{N}$. Define the Gaussian width of $T$ by:
\begin{align*}
\w(T) %
:= \E \sup_{x \in T} \ip{x, g}, %
\qquad g \sim \mathcal{N}(0, I_{N}).
\end{align*}
\end{defn}
\begin{rmk*}
If $T$ is symmetric, then the gw of $T$ satisfies
$\w(T) = \frac12 \E \sup_{x \in T - T}\ip{x, g}$.
\end{rmk*}
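For a concrete example, the gw of $\Sigma_{s}^{N} \cap B_{2}^{N}$ may be estimated by Monte Carlo: for each draw of $g$, the supremum of $\ip{x, g}$ over unit-norm $s$-sparse $x$ equals the $\ell_{2}$ norm of the $s$ largest-magnitude entries of $g$. The following sketch, with illustrative dimensions, compares such an estimate with the order $\sqrt{s \log(2N/s)}$, anticipating \autoref{lem:mean-width-sparse-signals}; the two agree up to a constant factor.
\begin{verbatim}
# Sketch only: Monte Carlo estimate of w(Sigma_s^N intersect B_2^N).  For each
# Gaussian draw g, sup <x, g> over unit-norm s-sparse x equals the l2 norm
# of the s largest-magnitude entries of g.
import numpy as np

def gw_sparse_ball_mc(N, s, n_mc=2000, seed=0):
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(n_mc):
        g = rng.normal(size=N)
        vals.append(np.linalg.norm(np.sort(np.abs(g))[-s:]))
    return float(np.mean(vals))

N, s = 2000, 10
print("MC estimate:", gw_sparse_ball_mc(N, s),
      " order sqrt(s log(2N/s)):", np.sqrt(s * np.log(2 * N / s)))
\end{verbatim}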
Next we state two results controlling the deviation of a $K$-subgaussian matrix
on a bounded set, which generalize the idea of RIP introduced
in~\autoref{thm:rip-subgaus}. These results were first proved
in~\cite{liaw2017simple}, and an improved dependence on the constant $K$ was
then obtained in~\cite{xiaowei2019taildep}. The results are stated using the
improved constant $\tilde K := K \sqrt{\log K}$; we refer the reader
to~\cite[Theorem 2.1]{xiaowei2019taildep} for further details.
\begin{thm}[{\cite[Theorem 1.1]{liaw2017simple}}]
\label{thm:liaw-11}
Let $A \in \ensuremath{\mathbb{R}}^{m\times N}$ be a $K$-subgaussian matrix and
$T \subseteq \ensuremath{\mathbb{R}}^{N}$ bounded. Then
\begin{align*}
\E \sup_{x \in T} \left| \|Ax\|_{2} - \sqrt m \|x\|_{2} \right| %
\leq C \tilde K \gamma (T).
\end{align*}
\end{thm}
Another version of this result holds, where the deviation is instead controlled
by the gw and radius, rather than the Gaussian complexity.
\begin{thm}[{\cite[Theorem 1.4]{liaw2017simple}}]
\label{thm:liaw-14}
Let $A \in \ensuremath{\mathbb{R}}^{m \times N}$ be a $K$-subgaussian matrix and
$T \subseteq \ensuremath{\mathbb{R}}^{N}$ bounded. For any $u \geq 0$ the event
\begin{align}
\label{eq:thm-1-4}
\sup_{x\in T} \big|\|Ax\|_{2} &- \sqrt m \|x\|_{2}\big| %
\nonumber\\
&\leq C\tilde K \left[ \w(T) + u \cdot \rad(T)\right]
\end{align}
holds with probability at least $1 - 3\exp(-u^{2})$. Here,
$\rad(T) := \sup_{x \in T}\|x\|_{2}$ denotes the radius of $T$.
\end{thm}
\begin{rmk}
If $u \geq 1$ the bound in~\eqref{eq:thm-1-4} can be loosened to the
following simpler one:
\begin{align*}
\sup_{x \in T} \left|\|Ax\|_{2} - \sqrt m \|x\|_{2}\right| %
\leq C\tilde K u \gamma(T).
\end{align*}
\end{rmk}
Setting $T := \ensuremath{\mathbb{S}}^{N-1}$, and using the improved constant obtained
in~\cite[Theorem 2.1]{xiaowei2019taildep} gives the following corollary.
\begin{coro}[Largest singular value of $K$-subgaussian matrices]
\label{coro:largest-subgaus-singval}
Let $A \in \ensuremath{\mathbb{R}}^{m \times N}$ be a $K$-subgaussian matrix. For all
$t \geq 0$, with probability at least $1 - 3 \exp(-t^{2})$,
\begin{align*}
\left|\|A\| - \sqrt m\right| %
\leq C \tilde K \left[\sqrt N + t\right].
\end{align*}
\end{coro}
Finally, we state the following comparison inequality for two centred Gaussian
processes.
\begin{thm}[Sudakov-Fernique inequality {\cite[Theorem
7.2.11]{vershynin2018high}}]
\label{thm:sudakov-fernique}
Let $(X_{t})_{t \in T}, (Y_{t})_{t \in T}$ be two mean-zero Gaussian
processes. Assume that, for all $s,t \in T$, we have
\begin{align*}
\E (X_{t} - X_{s})^{2} \leq \E (Y_{t} - Y_{s})^{2}.
\end{align*}
Then
\begin{align*}
\E \sup_{t \in T} X_{t} \leq \E \sup_{t \in T} Y_{t}.
\end{align*}
\end{thm}
\subsection{Geometric tools}
\label{sec:geometric-tools-1}
In this section, we introduce tools primarily relevant to obtaining recovery
bounds for compressed sensing in the classical setting where
$\mathcal{K} = B_{1}^{N}$. We start by recalling that sparse vectors have low
effective dimension, as does their difference set.
\begin{lem}[gw of the sparse signal set {\cite[Lemma 2.3]{plan2013robust}}]
\label{lem:mean-width-sparse-signals}
There exist absolute constants $c, C > 0$ such that
\begin{align*}
cs \log (2N /s) %
& \leq \w^{2} \left((\Sigma_{s}^{N}\cap B_{2}^{N}) %
- (\Sigma_{s}^{N} \cap B_{2}^{N})\right) %
\\
& \leq C s \log (2N /s).
\end{align*}
For possibly different absolute constants $c, C >0$, one also has
\begin{align*}
cs \log (2N /s) %
\leq \w^{2} \left(\Sigma_{s}^{N}\cap B_{2}^{N}\right) %
\leq C s \log (2N /s).
\end{align*}
\end{lem}
In addition, we also recall that the descent cone of the $\ell_{1}$ ball has
comparable effective dimension to the set of sparse vectors. For example,
see~\cite[Proposition 9.24]{foucart2013mathematical} for a related result
showing
$\w^{2}\left(T_{B_{1}^{N}}(x) \cap \ensuremath{\mathbb{S}}^{N-1}\right) \leq 2s \log(eN/s)$
when $x$ is $s$-sparse. To clarify the connection between the gw of $B_{1}^{N}$
and that of $\Sigma_{s}^{N}$, we recall the following result.
\begin{lem}[Convexification {\cite[Lemma 3.1]{plan2013one}}]
\label{lem:convexification}
One has
\begin{align*}
\cvx(\Sigma_{s}^{N} \cap B_{2}^{N}) %
\subseteq \mathcal{K}_{s}^{N} %
\subseteq 2 \cvx(\Sigma_{s}^{N} \cap B_{2}^{N}).
\end{align*}
\end{lem}
Next, it will be useful to leverage the following equivalent characterization
for the $\ell_{1}$ descent cone. Recall that $\sgn(\alpha) := 1$ if
$\alpha > 0$, $\sgn(\alpha) = -1$ if $\alpha < 0$ and $\sgn(0) = 0$.
\begin{lem}[Equivalent descent cone characterization]
\label{lem:equivalent-descent-cone}
Let $x \in \Sigma_{s}^{N}$ with non-empty support set
$\mathcal{T} \subseteq [N]$ and define $\mathcal{C} := \|x\|_{1}
B_{1}^{N}$. If
$K(x) := \{ h \in \ensuremath{\mathbb{R}}^{N} : \|h_{\mathcal{T}^{C}}\|_{1} \leq - \ip{\mathrm{sgn}(x),
h}\}$, then $T_{\mathcal{C}}(x) = K(x)$.
\end{lem}
Finally, recall that \textsc{Lasso} solutions admit the following descent cone
condition.
\begin{lem}[Descent cone condition]
\label{lem:descent-cone-condition}
Let $x \in \Sigma_{s}^{N}$ have non-empty support set $T \subseteq [N]$. Suppose
$y = A x + \eta z$ for $\eta > 0$, $z \in \ensuremath{\mathbb{R}}^{m}$, and
$A \in \ensuremath{\mathbb{R}}^{m\times N}$. Let $\hat x$ solve {$(\mathrm{LS}_{\tau})$} with $\tau =
\|x\|_{1}$. Then $\|h\|_{1} \leq 2 \sqrt s \|h\|_{2}$, where $h = \hat x - x$.
\end{lem}
\begin{proof}[Proof of {\autoref{lem:descent-cone-condition}}]
We use \autoref{lem:equivalent-descent-cone} above before applying
Cauchy-Schwarz:
\begin{align*}
\|\hat x - x\|_{1} %
& = \|h_{T}\|_{1} + \|h_{T^{C}}\|_{1} %
\\
& \leq \ip{\mathrm{sgn}(\hat x_{T} - x), h_{T}} - \ip{\mathrm{sgn}(x), h} %
\\
& \leq \|\mathrm{sgn}(\hat x_{T} - x) - \mathrm{sgn}(x)\|_{2} \|h\|_{2} %
\\
& \leq 2 \sqrt s \|h\|_{2}.
\end{align*}
\end{proof}
\subsection{Refinements on bounds for gw}
\label{sec:refin-bounds-gauss}
Crucial to the results of \autoref{sec:analysis-bp} are two recent results
appearing in~\cite{bellec2019localized}. These results are a fine-tuning of
standard gw results for bounded convex polytopes.
\begin{prop}[{\cite[Proposition 1]{bellec2019localized}}]
\label{prop:bellec1}
Let $m \geq 1$ and $N \geq 2$. Let $T$ be the convex hull of $2N$ points in
$\ensuremath{\mathbb{R}}^{m}$ and assume $T\subseteq B_{2}^{m}$. Then for $\gamma \in (0, 1)$,
\begin{align*}
\w & (T \cap \gamma B_{2}^{m}) %
\\
& \leq \min \big\{ %
4 \sqrt{ \max \big\{1, \log(8e N \gamma^{2}) \big\} }, %
\gamma \sqrt{ \min\{m, 2N\}} \big\}
\end{align*}
\end{prop}
The work also proves a lower bound on the gw of bounded convex polytopes.
\begin{prop}[{\cite{bellec2019localized}}]
\label{prop:bellec2}
Let $m \geq 1$ and $N \geq 2$. Let $\gamma \in (0, 1]$ and assume for
simplicity that $s = 1/ \gamma^{2}$ is a positive integer such that
$s \leq N /5$. Let $T$ be the convex hull of the $2N$ points
$\{ \pm M_{1}, \ldots, \pm M_{N}\} \subseteq \ensuremath{\mathbb{S}}^{m-1}$. Assume that for some
real number $\kappa \in (0, 1)$ we have
\begin{align*}
\kappa \| \theta \|_{2} \leq \| M \theta \|_{2} \qquad %
\text{for all $\theta \in \ensuremath{\mathbb{R}}^{N}$ such that $\|\theta\|_{0} \leq 2s$}.
\end{align*}
Then
\begin{align*}
\w (T \cap \gamma B_{2}^{m}) %
\geq ( \sqrt 2 / 4) \kappa \sqrt{ \log ( N \gamma^{2} / 5)}.
\end{align*}
\end{prop}
In particular, the above two results may be combined to obtain a bound on a
random polytope, namely the image of a (non-random) polytope under a normalized
$K$-subgaussian matrix; the proof is given in~\autoref{sec:proofs-refin-bounds}.
\begin{coro}[Controlling random hulls]
\label{coro:bellec-random-hulls}
Fix $\delta, \varepsilon > 0$, $\gamma \in (0, 1]$ and let
$A \in \ensuremath{\mathbb{R}}^{m \times N}$ be a normalized $K$-subgaussian matrix. Assume for
simplicity that $s = 1 / \gamma^{2} \in \ensuremath{\mathbb{N}}$ with $s < N/5$ and let $T$
denote the convex hull of the $2N$ points $\{\pm A^{j} : j \in [N]\}$. Assume
$m > C_{\varepsilon} \delta^{-2} \tilde K^{2} s \log (2N / s)$. With
probability at least $1 - \varepsilon$, for any $\alpha \in (0, (1-\delta))$,
\begin{align*}
(\sqrt 2 / 4)& (1 - \delta)^{2} \sqrt{
\log \frac{N\alpha^{2}}{5(1-\delta)^{2}}} %
\\
& \leq \w(T \cap \alpha B_{2}^{m}) %
\\
& \leq \min \big\{%
4 (1 + \delta) \sqrt{ \max \big\{1,
\log \frac{8e N \alpha^{2}}{(1 + \delta)^{2}} \big\} }, %
\\
&\qquad \qquad \alpha \sqrt{ \min\{m, 2N\}} \big\}.
\end{align*}
\end{coro}
\subsection{Projection lemma}
\label{sec:projection-lemma}
For $x \in \ensuremath{\mathbb{R}}^{N}$ and $\mathcal{C} \subseteq \ensuremath{\mathbb{R}}^{N}$ nonempty, denote
the distance of $x$ to $\mathcal{C}$ by
$\mathrm{dist}(x, \mathcal{C}) := \inf_{w \in \mathcal{C}} \|x - w\|_{2}$. If
$\mathcal{C}$ is a closed and convex set, there exists a unique point in
$\mathcal{C}$ attaining the infimum. We denote this point
\begin{align*}
\mathrm{P}_{\mathcal{C}}(x) := \argmin_{w \in \mathcal{C}} \|x - w\|_{2}.
\end{align*}
The projection lemma was an important tool in~\cite{berk2020sl1mpc} for showing
parameter instability of two \textsc{Lasso} programs in the proximal denoising
setting. The result was first proved in~\cite[Lemma 15.3]{oymak2016sharp}, and a
simpler proof of a restated version given in~\cite{berk2020sl1mpc}. The result
shall play a critical role in this work, too. We include it here for
completeness.
\begin{lem}[Projection lemma {\cite[Lemma 3.2]{berk2020sl1mpc}}]
\label{lem:projection-lemma}
Let $\mathcal{K} \subseteq \ensuremath{\mathbb{R}}^{m}$ be a non-empty closed and convex set with
$0 \in \mathcal{K}$, and fix $\lambda \geq 1$. Then, for any $z \in \ensuremath{\mathbb{R}}^{m}$,
\begin{align*}
\|\mathrm{P}_{\mathcal{K}}(z) \|_{2} \leq \|\mathrm{P}_{\lambda \mathcal{K}}(z) \|_{2}.
\end{align*}
\end{lem}
\autoref{lem:projection-lemma} admits the following corollary, useful in proving
\autoref{lem:bp-oc-zero}. Its proof is deferred to
\autoref{sec:proofs-proj-lemma}.
\begin{coro}[{\cite[Corollary 3.1]{berk2020sl1mpc}}]
\label{cor:projection-lemma}
Let $\mathcal{K} \subseteq \ensuremath{\mathbb{R}}^{m}$ be a non-empty closed and convex set
with $0 \in \mathcal{K}$ and let $\|\cdot \|_{\mathcal{K}}$ be the gauge of
$\mathcal{K}$. Given $y \in \ensuremath{\mathbb{R}}^{m}$, define for $\alpha > 0$,
\begin{align*}
q_{\alpha} := \argmin \{ \|q\|_{\mathcal{K}} : \|q - y\|_{2} \leq \alpha \}.
\end{align*}
Then $\|q_{\alpha}\|_{2}$ is decreasing in $\alpha$.
\end{coro}
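For a quick numerical illustration of \autoref{lem:projection-lemma} in the special case $\mathcal{K} = B_{1}^{N}$, the sketch below projects a Gaussian vector onto scaled $\ell_{1}$ balls using the standard sorting-based projection and verifies that the norms of the projections are nondecreasing in the scale; the dimension and scales are illustrative choices.
\begin{verbatim}
# Sketch only: numerical check of the projection lemma with K = B_1^N,
# using the standard sorting-based projection onto a scaled l1 ball.
import numpy as np

def project_l1_ball(v, r):
    """Euclidean projection of v onto { x : ||x||_1 <= r }."""
    if np.sum(np.abs(v)) <= r:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    j = np.arange(1, v.size + 1)
    rho = np.nonzero(u * j > css - r)[0][-1] + 1
    theta = (css[rho - 1] - r) / rho
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

rng = np.random.default_rng(0)
z = rng.normal(size=500)
# ||P_{lam K}(z)||_2 should be nondecreasing in lam >= 1 when K = B_1^N.
norms = [np.linalg.norm(project_l1_ball(z, lam)) for lam in (1.0, 2.0, 4.0, 8.0)]
print(norms, all(a <= b + 1e-12 for a, b in zip(norms, norms[1:])))
\end{verbatim}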
\section{{$(\mathrm{LS}_{\tau})$} parameter instability}
\label{sec:LS-instability}
The main result of this section is proved in the case of standard CS, where
$x_{0} \in \Sigma_{s}^{N}$ and where tight bounds on the effective dimension of
the structure set are known (e.g.,~ bounds on $\gamma(\mathcal{L}_{s})$ or
$\gamma (\mathcal{K}_{s}^{N})$). Define $\tau^{*} := \|x_{0}\|_{1}$. The
following result states that $\hat L$ is almost surely suboptimal in the
limiting low-noise regime when $\tau \neq \tau^{*}$, while $\hat R(\tau^{*})$ is
order-optimal. A proof of the result may be found in
\autoref{sec:optimal-choice-tau}, with supporting lemmas in
\autoref{sec:proofs-constr-lasso}.
\begin{thm}[Asymptotic singularity]
\label{thm:ls-instability}
Fix $\delta, \varepsilon > 0$ and let $1 \leq s \leq m < N < \infty$ be
integers. Let $x_{0} \in \Sigma_{s}^{N}\setminus \Sigma_{s-1}^{N}$ with
$\tau^{*} := \|x_{0}\|_{1}$ and $\tau > 0$ such that $\tau\neq \tau^{*}$. Let
$\eta > 0$ and let $z \in \ensuremath{\mathbb{R}}^{m}$ with $z_{i} \ensuremath{\overset{\text{iid}}{\sim}}
\mathcal{N}(0,1)$. Suppose $A \in \ensuremath{\mathbb{R}}^{m\times N}$ is a normalized
$K$-subgaussian matrix, and assume $m$ satisfies
\begin{align*}
m > C_{\varepsilon} \tilde K^{2} \delta^{-2} s \log \frac{eN}{s}.
\end{align*}
Almost surely on the realization of $(A, z)$,
\begin{align*}
\lim_{\eta \to 0} \hat L(\tau; x_{0}, A, \eta z) %
= \infty. %
\end{align*}
With probability at least $1 - \varepsilon$ on the realization of $A$, there
exist constants $0 < c_{\delta} < C_{\delta} < \infty$ such that
\begin{align*}
c_{\delta} s \log \frac{N}{s}
\leq \lim_{\eta \to 0} \sup_{x \in \Sigma_{s}^{N}}
\hat R(\|x\|_{1}; x, A, \eta) %
\leq C_{\delta} s \log \frac{N}{2s}.
\end{align*}
\end{thm}
For clarity, observe that the parameter $\|x\|_{1}$ appearing in the lower bound
mirrors the definition of $\tau^{*}$. Importantly, the
spirit of this result extends to the situation where $x_{0}$ belongs, more
generally, to some convex proxy set $\mathcal{K} \subseteq \ensuremath{\mathbb{R}}^{N}$. In
particular, the blow-up of $\hat L$ in the limiting low-noise regime holds
independent of the assumptions on $\mathcal{K}$, except that $\mathcal{K}$ be
bounded. For instance,~\autoref{lem:nuc-norm-recovery} addresses an
analogous result in the case where the signal is a $d\times d$ matrix and
$\mathcal{K}$ is the nuclear norm ball. Further, worst-case bounds on $\hat R$
are well-known in the case where $\mathcal{K}$ is a convex polytope, and are
useful when $\mathcal{K}$ has small gw~\cite{bellec2019localized,
liaw2017simple}.
\section{Analysis of {$(\mathrm{QP}_{\lambda})$}}
\label{sec:qp}
\subsection{Right-sided parameter stability}
\label{sec:right-sided-param}
In this section we present a contrast to the type of sensitivity observed in
\autoref{sec:LS-instability}. Specifically, the result serves to demonstrate
that {$(\mathrm{QP}_{\lambda})$} is not sensitive to its parameter choice if the chosen parameter is
too large. This so-called right-sided parameter stability is important in
practical settings, as it suggests that recovery will not be penalized ``too
heavily'' if the parameter is chosen incorrectly to be too large. Such leniency
is reassuring, since the exact optimal choice of $\lambda$ is rarely known in an
experimental setting.
The right-sided parameter stability for {$(\mathrm{QP}_{\lambda})$} was first proved
in~\cite[Theorem~7.2]{bickel2009simultaneous}. When the data are Gaussian,
right-sided stability of {$(\mathrm{QP}_{\lambda})$} has been examined
in~\cite{thrampoulidis2015asymptotically}; of {$(\mathrm{QP}_{\lambda, \mathcal{K}})$},
in~\cite{thrampoulidis2018precise}. Here, we state a specialized rephrasing of a
version more suitably adapted to the present
work~\cite[Theorem~3]{shen2015stable}. This version is the same as that stated
in~\autoref{sec:summary-results}.
\begin{thm}[Specialized {\cite[Theorem~3]{shen2015stable}}]
\label{thm:qp-r-stability}
For integers $1 \leq s \leq m < N < \infty$, suppose
$x_{0} \in \Sigma_{s}^{N}$ and let $A \in \ensuremath{\mathbb{R}}^{m \times N}$ be a normalized
$K$-subgaussian matrix. There is an absolute constant $C> 0$ such that if
$\lambda \geq C \sqrt{\log N}$ and
$m \geq C_{\varepsilon} \tilde K^{2} \delta^{-2} s \log \tfrac{N}{s}$, then
with probability at least $1 - \varepsilon$ on the realization of $A$,
\begin{align*}
R^{\sharp}(\lambda; x_{0}, A, \eta) \leq C s \lambda^{2}.
\end{align*}
\end{thm}
In particular, over-guessing $\lambda$ results in no more than a quadratic
penalty on the bound for the recovery error. Consequently, {$(\mathrm{QP}_{\lambda})$} is right-sided
parameter stable --- it is not sensitive to variation of its governing parameter
when the parameter is sufficiently large. We note, thereby, that there exist
regimes in which \textsc{Lasso} programs are not sensitive to their parameter
choice. Perhaps more importantly: there are regimes in which one program may be
sensitive to its parameter choice, and another program is not. Namely, we see
that {$(\mathrm{LS}_{\tau})$} can be sensitive to its parameter choice in the low-noise regime (so
the correct choice of $\tau$ is an imperative), while recovery for the very same
data using {$(\mathrm{QP}_{\lambda})$} is not sensitive to $\lambda$ (if $\lambda$ is sufficiently
large).
\section{Analysis of {$(\mathrm{BP}_{\sigma})$}}
\label{sec:analysis-bp}
The final program that we subject to scrutiny is {$(\mathrm{BP}_{\sigma})$}. It is well-known under
standard assumptions that an optimal choice of $\sigma$ yields order-optimal
risk $\tilde R(\sigma^{*}; x_{0}, A, \eta)$ with high probability on the
realization of $A$. In this section, we demonstrate the existence of a regime in
which any choice of $\sigma$ fails to yield order-optimal recovery for {$(\mathrm{BP}_{\sigma})$}. A
key message of this section is that {$(\mathrm{BP}_{\sigma})$} performs poorly if the signal is too
sparse and the number of measurements is too large. We demonstrate this
behaviour for two regimes: the underconstrained setting, where $\sigma$ is ``too
large'', and the overconstrained setting, where $\sigma$ is ``too small''. Each of
these settings includes the case where $\sigma$ is chosen ``just right''; we will
see that {$(\mathrm{BP}_{\sigma})$} risk fails to achieve order optimality in this case as well.
For the duration of this section, we will consider $x_{0} \in \Sigma_{s}^{N}$
where $s$ may or may not be allowed to be $0$. We will clarify this explicitly
in each instance. The main result of the section will be stated in the case
where $A \in \ensuremath{\mathbb{R}}^{m \times N}$ is a normalized $K$-subgaussian matrix whose
number of rows $m = m(N)$ satisfies a particular growth condition. Thus, the measurement
vector will be given by $y = Ax_{0} + \eta z$ where $\eta > 0$ is the noise
scale and $z_{i} \ensuremath{\overset{\text{iid}}{\sim}} \mathcal{N}(0, 1)$ as before. For the sake of analytical
and notational simplicity, we assume that $\eta$ is independent of $N$. However,
we eventually make clear how $\sigma$ may be allowed to depend on the ambient
dimension, and that our result holds irrespective of this dependence.
\subsection{Underconstrained parameter instability}
\label{sec:underc-param-inst}
As a ``warm-up'' for the main result, we start by demonstrating that there is a
regime in which $\tilde R(\sigma; x_{0}, A, \eta)$ fails to achieve minimax
order-optimality when restricted to $\sigma \geq \eta \sqrt m$. Specifically, if
$m$ is too large, then there is a (sufficiently sparse) vector
$x_{0} \in \Sigma_{s}^{N}$ such that, with high probability on the realization
of $A$, the risk $\tilde R(\sigma; x_{0}, A, \eta)$ is large regardless of the
choice of $\sigma \in [\eta \sqrt m, \infty)$. We defer the proof of this result
to \autoref{sec:subopt-regime-underc}.
\begin{lem}[Underconstrained maximin {$(\mathrm{BP}_{\sigma})$}]
\label{lem:uc-bp-subgaus}
Fix $\delta, \varepsilon, \eta > 0$, let $1 \leq s < m \leq N$ be integers,
and suppose $A \in \ensuremath{\mathbb{R}}^{m \times N}$ is a normalized $K$-subgaussian
matrix. If
\begin{align*}
m %
> C_{\varepsilon} \delta^{-2} \tilde K^{2} s^{2} \log^{2} \left(\frac{N}{s}\right),
\end{align*}
then with probability at least $1 - \varepsilon$ on the realization of $A$,
\begin{align*}
\inf_{\sigma \geq \eta \sqrt m} \sup_{x \in \Sigma_{s}^{N}} %
\tilde R(\sigma; x, A, \eta) %
\geq C \sqrt m.
\end{align*}
\end{lem}
\begin{rmk}
In some settings, it may not be appropriate for $m$ to depend
logarithmically on $N$. If $m$ and $N$ satisfy the power law relation
$m = N^{k}$ for some $k > 0$, then under the assumptions of
\autoref{lem:uc-bp-subgaus},
\begin{align*}
\E_{z} \|\tilde x (\sigma) - x_{0} \|_{2}^{2} = \Omega(N^{k/2}).
\end{align*}
\end{rmk}
\subsection{Minimax suboptimality}
\label{sec:bp-minimax-suboptimality}
Just as observed in \autoref{sec:underc-param-inst}, the results of this section
hold in the regime where the aspect ratio approaches a constant:
$m / N \to \gamma \in (0, 1)$. Our simulations in \autoref{sec:bp-numerics}
support suboptimality of $\tilde R$, and sensitivity of {$(\mathrm{BP}_{\sigma})$} to its parameter
choice for aspect ratios ranging from $\gamma = 0.1$ to $\gamma = 0.45$.
Our result is of an asymptotic nature in one additional sense. We have stated
that $\tilde R$ may be suboptimal for ``very sparse'' signals $x_{0}$. This is
specified in the sense that, while $m$ and $N$ may be allowed to grow, $s$
remains fixed. Our numeric simulations demonstrate how this assumption may be
interpreted as the inability of {$(\mathrm{BP}_{\sigma})$} to effectively recover the off-support of
the signal $x_{0}$ (i.e.,~ the all-zero sub-vector $x_{T^{C}}$, where
$T \subseteq [N]$ denotes the support of $x_{0}$).
Thus, it is in this setting, where the number of measurements is sufficiently
large, and the sparsity sufficiently small, that we show
$\tilde R(\sigma; x_{0}, A, \eta)$ is asymptotically suboptimal, regardless of
the choice of $\sigma$.
\begin{thm}[{$(\mathrm{BP}_{\sigma})$} minimax suboptimality]
\label{thm:bp-minimax-suboptimality}
Fix $\varepsilon, \eta > 0$ and $m: \ensuremath{\mathbb{N}} \to \ensuremath{\mathbb{N}}$ with
$m(N)/ N \to \gamma \in (0, 1)$. There is $N_{0} \geq 2$ and $p > 0$ so that
for any $N \geq N_{0}$ and any $1 \leq s < m(N_{0})$, if
$A \in \ensuremath{\mathbb{R}}^{m \times N}$ is a normalized $K$-subgaussian matrix, then with
probability at least $1 - \varepsilon$ on the realization of $A$,
\begin{align}
\label{eq:bp-minimax-suboptimality}
\inf_{\sigma > 0} \sup_{x \in \Sigma_{s}^{N}}
\tilde R(\sigma; x, A, \eta) %
\geq C_{\gamma, K} N^{p}.
\end{align}
\end{thm}
A minor modification of the above result allows one to show that the minimax
risk for {$(\mathrm{BP}_{\sigma})$}, relative to the benchmark risk $R^{*}(s, A)$, converges in
probability to $\infty$.
\begin{coro}[{$(\mathrm{BP}_{\sigma})$} suboptimal in probability]
Fix $\eta > 0$, an integer $s \geq 1$, and suppose for $m : \ensuremath{\mathbb{N}} \to \ensuremath{\mathbb{N}}$
that $m (N)/ N \to \gamma \in (0, 1)$. For each $N$, suppose
$A = A(N) \in \ensuremath{\mathbb{R}}^{m(N) \times N}$ is a normalized $K$-subgaussian
matrix. Then, for all $M > 0$,
\begin{align*}
\lim_{N\to \infty} \mathbb{P}\left(%
\inf_{\sigma > 0} \sup_{x \in \Sigma_{s}^{N}} %
\frac{ \tilde R(\sigma; x, A, \eta)}{R^{*}(s,A)} > M \right) %
= 1.
\end{align*}
\end{coro}
\section{Numerical results}
\label{sec:numerical-results}
Let $\mathfrak{P} \in \{ \text{$(\mathrm{LS}_{\tau})$}, \text{$(\mathrm{QP}_{\lambda})$}, \text{$(\mathrm{BP}_{\sigma})$}\}$ be a CS program
with solution $x^{*}(\upsilon)$, where $\upsilon \in \{ \tau, \lambda, \sigma\}$
is the associated parameter. Given a signal $x_{0} \in \ensuremath{\mathbb{R}}^{N}$, matrix
$A \in \ensuremath{\mathbb{R}}^{m\times N}$, and noise $\eta z \in \ensuremath{\mathbb{R}}^{m}$, denote by
$\mathscr{L}(\upsilon; x_{0}, A, \eta z)$ the loss associated to
$\mathfrak{P}$. For instance, if $\mathfrak{P} = \text{{$(\mathrm{LS}_{\tau})$}}$, then
$\mathscr{L} = \hat L$. In most cases, the signal $x_{0}$ for our numerical
simulations will be $s$-sparse, and $s$ will be ``small''. For simplicity, and
to ensure adequate separation of the ``signal'' from the ``noise'', each
non-zero entry of $x_{0}$ will be equal to $N$, except where otherwise
noted. Unless otherwise noted, the measurement matrix $A$ will have entries
$A_{ij} \ensuremath{\overset{\text{iid}}{\sim}} \mathcal{N}(0, m^{-1})$.
Define $\upsilon^{*} := \upsilon^{*}(x_{0}, A, \eta) > 0$ to be the value of
$\upsilon$ yielding best risk (i.e.,~ where
$\E_{z} \mathscr{L} (\cdot; x_{0}, A, \eta z)$ is minimal) and let the
normalized parameter $\rho$ for the problem $\mathfrak{P}$ be given by
$\rho := \upsilon / \upsilon^{*}$. Note that $\rho = 1$ minimizes the risk
$\E_{z} \mathscr{L} (\rho \upsilon^{*}; x_{0}, A, \eta z)$ over $\rho$; by the law
of large numbers, this risk is well approximated by an average of such losses over
many realizations $\hat z$. Let $L(\rho) := \mathscr{L}(\rho \upsilon^{*})$ denote
the loss for $\mathfrak{P}$ as a function of the normalized parameter, let
$\{ \rho_{i}\}_{i=1}^{n}$ denote a sequence of points in the normalized
parameter space, and define the average loss for $\mathfrak P$ at any point
$\rho_{i}$ by
\begin{align*}
\bar L ( \rho_{i}; x_{0}, A, \eta, k) %
:= k^{-1} \sum_{j = 1}^{k} L(\rho_{i} \upsilon^{*};
x_{0}, A, \eta \hat z_{ij}),
\end{align*}
where $\hat z_{ij}$ is the $(i,j)$-th realization of noise;
$\hat z_{ij} \sim \mathcal{N}(0, I_{m})$ for all $(i, j) \in [n] \times [k]$. We
may also refer to $\bar L$ as the empirical risk or noise-normalized squared
error (nnse). Note that $\bar L$ depends on
$(\hat z_{ij} : i \in [n], j \in [k])$ and that notating this dependence is
omitted for simplicity. Below, $\hat z_{ij}$ are not necessarily sampled
independently. In fact, to obtain tractable computational simulations, we will
frequently have $\hat z_{ij} = \hat z_{i' j}$ for $i, i' \in [n]$. Where
necessary, we disambiguate the average losses with a subscript:
$\bar{L}_{\text{$(\mathrm{LS}_{\tau})$}}$, $\bar{L}_{\text{$(\mathrm{QP}_{\lambda})$}}$ and $\bar{L}_{\text{$(\mathrm{BP}_{\sigma})$}}$ for
the programs {$(\mathrm{LS}_{\tau})$}, {$(\mathrm{QP}_{\lambda})$} and {$(\mathrm{BP}_{\sigma})$}, respectively.
In this section, we include plots representing average loss of a program with
respect to that program's normalized governing parameter. Both the optimal
parameter $\upsilon^{*}$ and the average loss $\bar L$ are approximated by
$\upsilon^{\dagger}$ and $L^{\dagger}$, respectively, using RBF interpolation as
described in~\autoref{sec:rbf-approximation}. Parameter settings for the
interpolations are provided in~\autoref{sec:interp-param-sett}. To this end, the
average loss $\E_{z} L(\rho; x_{0}, A, \eta z)$ is approximated from $k$
realizations of the true loss on a logarithmically spaced grid of $n$ points
centered about $\rho = 1$. The approximation is computed using multiquadric RBF
interpolation and the values of the parameters of the interpolation,
$(k, n, \varepsilon_{\text{rbf}}, \mu_{\text{rbf}}, n_{\text{rbf}})$, are stated
in each instance where the computation was performed. In every case, due to
concentration effects, for a given program and given parameter value the
realizations cluster very closely about the average loss. Therefore, the RBF
interpolant lies very close to the average loss curve computed from the loss
realizations, and has the added advantage of accommodating nonuniformly spaced
data points. In the main graphics, we omit the original
data point cloud in favour of presenting clean, interpretable plots. However, we
include auxiliary plots of the average loss approximant and the point cloud to
visualize goodness of fit. In addition, these latter visualizations serve to
support how a program's order-optimality for a single realization may be
impacted by averaging over noise.
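The following Python sketch indicates, under our own illustrative choices (a synthetic loss curve, grid sizes, and RBF parameters), how a multiquadric RBF fit of an average loss curve may be produced with \texttt{scipy.interpolate.Rbf}~\cite{2020SciPy-NMeth}. It is not the pipeline used to generate the figures; the parameter settings actually used are given in \autoref{sec:interp-param-sett}.
\begin{verbatim}
# Sketch only: multiquadric RBF fit of an (illustrative, synthetic) average
# loss curve over a logarithmically spaced normalized-parameter grid.
import numpy as np
from scipy.interpolate import Rbf

rng = np.random.default_rng(0)
n, k = 61, 15                                   # grid points, noise realizations
rho = np.logspace(-0.5, 0.5, n)                 # normalized parameter grid about 1
true_loss = 1.0 + 5.0 * np.log(rho) ** 2        # stand-in loss curve
losses = true_loss * np.exp(0.05 * rng.normal(size=(k, n)))  # noisy realizations
avg = losses.mean(axis=0)                       # empirical average loss on the grid

rbf = Rbf(np.log(rho), np.log(avg), function="multiquadric",
          epsilon=0.1, smooth=1e-3)             # fit in log-log coordinates
rho_eval = np.logspace(-0.5, 0.5, 301)          # finer evaluation grid
avg_loss_approx = np.exp(rbf(np.log(rho_eval)))
print(avg_loss_approx.min())
\end{verbatim}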
There is one final caveat to note in how the plots were generated. Computational
methods available to the authors for computing solutions to {$(\mathrm{BP}_{\sigma})$} and {$(\mathrm{LS}_{\tau})$} were
much slower than those available for {$(\mathrm{QP}_{\lambda})$}. Consequently, ensuring computational
tractability of our numerical simulations required solving {$(\mathrm{QP}_{\lambda})$} and obtaining
corresponding parameter values from those problem instances. Namely, given
$\{\rho_{i}\}_{i=1}^{n}$ and solutions $x^{\sharp}(\lambda_{i})$ where
$\lambda_{i} = \rho_{i} \lambda^{*}, i \in [n]$, we use
\autoref{prop:foucart-program-equivalence} to obtain
$\tau_{i} := \|x^{\sharp}(\lambda_{i})\|_{1}$ and
$\sigma_{i} := \|y - Ax^{\sharp}(\lambda_{i})\|_{2}$. Thus, we obtain loss
curves $\hat L(\tau_{i}; x_{0}, A, \eta z)$ and
$\tilde L(\sigma_{i}; x_{0}, A, \eta z)$ by solving {$(\mathrm{QP}_{\lambda})$} on a sufficiently fine
grid, yielding $(\lambda_{i}, x^{\sharp}(\lambda_{i}))$. This allows us to
approximate the optimal parameter choices for {$(\mathrm{LS}_{\tau})$} and {$(\mathrm{BP}_{\sigma})$} and therefore
determine within some numerical tolerance all of $\bar L_{\text{$(\mathrm{LS}_{\tau})$}}$,
$\bar L_{\text{$(\mathrm{QP}_{\lambda})$}}$ and $\bar L_{\text{$(\mathrm{BP}_{\sigma})$}}$. Further details for
approximating the average loss of a program are given
in~\autoref{sec:rbf-approximation} where we describe radial basis function (RBF)
interpolation~\cite{buhmann2003radial, 2020SciPy-NMeth}.
\subsection{{$(\mathrm{LS}_{\tau})$} numerics}
\label{sec:ls-numerics}
The data generating process for the numerics in this section is as follows. Fix
$A \in \ensuremath{\mathbb{R}}^{m\times N}$, $\eta > 0$ and $x_{0} \in \Sigma_{s}^{N}$. Fix a
logarithmically spaced grid of $n$ points for the normalized parameter,
$\{\rho_{i}\}_{i=1}^{n}$, centered about $1$. Generate $k$ realizations
$\{z_{j}\}_{j=1}^{k}$ of the noise. Obtain
$\lambda_{i} := \rho_{i} \lambda^{*}(A, x_{0}, \eta)$ and obtain
$\ell_{ij} := L^{\sharp}(\lambda_{i}; x_{0}, A, \eta z_{j})$ after computing
$x^{\sharp}(\lambda_{i}; z_{j})$. Observe that
$\hat L(\tau_{ij}; x_{0}, A, \eta z_{j}) = \ell_{ij}$ where
$\tau_{ij} := \|x^{\sharp}(\lambda_{i}; z_{j})\|_{1}$. Similarly, for
$\sigma_{ij} := \|y_{j} - Ax^{\sharp}(\lambda_{i}; z_{j})\|_{2}$, one has
$\tilde L(\sigma_{ij}; x_{0}, A, \eta z_{j}) = \ell_{ij}$. Finally, for a
sufficiently fine and wide numerical grid, one may approximate the normalized
parameter grids $\{ \tau_{ij}\}$ and $\{ \sigma_{ij}\}$ using the values
$\{ \ell_{ij}\}$. Consequently, for each program we are able to approximate the
average loss $\bar L$ by obtaining a clever approximate interpolant of
$\{(\tau_{ij}, \ell_{ij}) : (i,j) \in [n] \times [k]\}$,
$\{(\sigma_{ij}, \ell_{ij}) : (i,j) \in [n] \times [k]\}$ or
$\cup_{j \in [k]}\{(\rho_{i}, \ell_{ij}): i \in [n]\}$, respectively, while only
having to solve {$(\mathrm{QP}_{\lambda})$}. This particular bit of good fortune is guaranteed to us
by \autoref{prop:foucart-program-equivalence}. As stated, the clever approximant
is obtained using multiquadric RBF
interpolation~\cite{buhmann2003radial,2020SciPy-NMeth}. For more background on
kernel methods for function approximation and radial basis functions in
particular, we refer the reader to~\cite{buhmann2003radial, hastie2009elements,
murphy2012machine}. Some additional detail to this end is provided in \autoref{sec:rbf-approximation}.
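A minimal sketch of the parameter-mapping step described above is given below. It assumes a solver for {$(\mathrm{QP}_{\lambda})$} (for instance, an ISTA-style routine like the earlier \texttt{solve\_qp} sketch) is supplied by the caller, and records, for each noise draw and each $\lambda$ on the grid, the loss together with the induced parameters $\tau$ and $\sigma$ from \autoref{prop:foucart-program-equivalence}.
\begin{verbatim}
# Sketch only: one sweep over lambda produces loss curves for all three
# programs via the program equivalence (tau = ||x_sharp||_1 and
# sigma = ||y - A x_sharp||_2).  `solve_qp` is an assumed helper, e.g. an
# ISTA-style routine as sketched earlier.
import numpy as np

def sweep(A, x0, eta, lam_grid, n_noise, solve_qp, seed=0):
    rng = np.random.default_rng(seed)
    records = []                                   # rows: (lam, tau, sigma, loss)
    for _ in range(n_noise):
        y = A @ x0 + eta * rng.normal(size=A.shape[0])
        for lam in lam_grid:
            xs = solve_qp(A, y, lam)               # x_sharp(lam) for this draw
            tau = np.sum(np.abs(xs))               # induced parameter for (LS_tau)
            sigma = np.linalg.norm(y - A @ xs)     # induced parameter for (BP_sigma)
            records.append((lam, tau, sigma, np.sum((xs - x0) ** 2) / eta ** 2))
    return np.array(records)
\end{verbatim}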
The numerics for {$(\mathrm{LS}_{\tau})$}, appearing in \autoref{fig:low-noise-numerics}, concern
the case in which $\eta$ is small. When the ambient dimension is modest
($N = 10^{4}$) and the noise scale only moderately small
($\eta = 2\cdot 10^{-3}$), parameter instability of {$(\mathrm{LS}_{\tau})$} is readily
observed. Minute changes in $\tau$ lead to blow-up in the nnse and the peak
signal-to-noise ratio (psnr) (left and right plot, respectively). Indeed, for
the range plotted, it is difficult to visually segment the left half of the
{$(\mathrm{LS}_{\tau})$} average loss curve from the right half. These observations support the
asymptotic theory of \autoref{sec:LS-instability}. Moreover, the simulations
suggest that the other two programs, {$(\mathrm{BP}_{\sigma})$} and {$(\mathrm{QP}_{\lambda})$} are relatively much less
sensitive to the choice of their governing parameter.
\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{berk1.pdf}\quad
\includegraphics[width=.45\textwidth]{berk2.pdf}
\caption{{$(\mathrm{LS}_{\tau})$} parameter instability in the low-noise regime. Average loss
(left) plotted on a log-log scale with respect to the normalized parameter;
average psnr (right) plotted on a log-linear scale with respect to the
normalized parameter. The data parameters are
$(s, m, N, \eta, k, n) = (1, 2500, 10^{4}, 2\cdot 10^{-3}, 15, 301)$.}
\label{fig:low-noise-numerics}
\end{figure}
\subsection{{$(\mathrm{QP}_{\lambda})$} numerics}
\label{sec:qp-numerics}
In this section we visualize the average loss of {$(\mathrm{QP}_{\lambda})$} as a function of its
normalized parameter $\rho = \lambda / \lambda^{*}$. In
\autoref{fig:qp-instability}, the average loss for {$(\mathrm{QP}_{\lambda})$} is plotted with respect
to $\rho$ for an aspect ratio $\delta$ ranging between $0.25$ and $4$. As
suggested by \autoref{thm:qp-r-stability}, the average loss appears to scale
quadratically with respect to the normalized parameter for values $\rho >
1$. For $\rho \in (0.5, 0.9)$, the average loss appears to scale
super-quadratically with respect to the normalized parameter, with the rate of
growth increasing as a function of $\delta$. This behaviour suggests that {$(\mathrm{QP}_{\lambda})$}
can be sensitive to its parameter choice if $\rho$ is too small. The intuition
for the observed behaviour is that {$(\mathrm{QP}_{\lambda})$} increasingly behaves like ordinary
least squares when $\lambda \to 0$. Each average loss was approximated from 15
realizations of the loss using multiquadric RBF interpolation. Due to
concentration effects, the realizations for each parameter value clustered very
closely to the approximated average loss. The left-hand plot
of \autoref{fig:qp-instability} is plotted on a log-log scale, while the right-hand
plot is plotted on a linear-linear scale. The linear-linear plot readily
demonstrates how over-guessing $\lambda$ by a factor of $2$ is more robust to
error than under-guessing $\lambda$ by a factor of $2$.
\begin{figure}[h]
\centering
\includegraphics[width=.45\textwidth]{berk3.pdf}\quad %
\includegraphics[width=.45\textwidth]{berk4.pdf}
\caption{Average loss of {$(\mathrm{QP}_{\lambda})$} plotted with respect to its normalized parameter in
the low-noise, high sparsity regime. Parameters for the simulation are
$(s, N, \eta, k, n) = (1, 10^{4}, 10^{-5}, 15, 301)$. The aspect ratio of the
matrix $A \in \ensuremath{\mathbb{R}}^{m \times N}$ with $A_{ij} \ensuremath{\overset{\text{iid}}{\sim}} \mathcal{N}(0, m^{-1})$
takes values $\delta \in \{4^{-1}, \ldots, 4\}$, as shown in the legend. The
data are visualized on a log-log scale (left) and linear scale (right).}
\label{fig:qp-instability}
\end{figure}
In \autoref{fig:qp-instability-2}, we visualize {$(\mathrm{QP}_{\lambda})$} average loss with respect
to its normalized parameter. In the top row, we include two plots similar to
\autoref{fig:qp-instability}, but for $\delta = 0.25, 0.45$ only. Again, the
left-hand plot is on a log-log scale while the right-hand plot is on a
linear-linear scale. The bottom row depicts the goodness of fit of the RBF
approximation for the average loss.
\begin{figure*}[h]
\centering
\includegraphics[width=.45\textwidth]{berk5.pdf}\quad
\includegraphics[width=.45\textwidth]{berk6.pdf}
\includegraphics[width=.9\textwidth]{berk7.pdf}
\caption{Average loss of {$(\mathrm{QP}_{\lambda})$} plotted with respect to its normalized
parameter in the low-noise, high-sparsity regime. Parameters for the
simulation are $(s, N, \eta, k, n) = (1, 10^{4}, 10^{-5}, 25, 201)$. The
aspect ratio of the matrix $A \in \ensuremath{\mathbb{R}}^{m \times N}$ with
$A_{ij} \ensuremath{\overset{\text{iid}}{\sim}} \mathcal{N}(0, m^{-1})$ takes values $\delta = 0.25, 0.45$, as
shown in the legend. The data in the top row are visualized on a
$\log$-$\log$ scale (left) and linear scale (right). The bottom row depicts
the quality of fit for the RBF approximation of the average loss for
$\delta = 0.25$ (left) and $\delta = 0.45$ (right). }
\label{fig:qp-instability-2}
\end{figure*}
\subsection{{$(\mathrm{BP}_{\sigma})$} numerics}
\label{sec:bp-numerics}
This section includes numerical simulations depicting the sensitivity of {$(\mathrm{BP}_{\sigma})$}
to its parameter choice. These numerics serve to support the asymptotic theory
developed in \autoref{sec:analysis-bp}.
The graphics in \autoref{fig:bp-numerics-1} serve as an initial depiction of
{$(\mathrm{BP}_{\sigma})$} parameter instability, depicting the average loss for each program and for
$N \in \{4000, 7000\}$, $\delta \in \{ 0.1, 0.25, 0.45\}$. Each plot depicts the
average loss as a function of the normalized parameter for {$(\mathrm{LS}_{\tau})$} (green), {$(\mathrm{BP}_{\sigma})$}
(orange) and {$(\mathrm{QP}_{\lambda})$} (blue). The domain of the normalized parameter in each plot
is $(0.2, 5)$. A single realization of $A$ was fixed and the average loss was
computed from $k = 50$ realizations of the noise by constructing a function
approximator using radial basis function approximation with a multiquadric
kernel. The RBF approximator was evaluated on a logarithmically spaced grid of
$n_{\text{rbf}} = 301$ points centered about $1$. The loss values for {$(\mathrm{LS}_{\tau})$} and
{$(\mathrm{BP}_{\sigma})$} were computed by using the program equivalence described
by~\autoref{prop:foucart-program-equivalence}. In particular, for computational
expediency, once $A$ and $z$ were fixed, the \textsc{Lasso} program was solved
using only {$(\mathrm{QP}_{\lambda})$}, for all $\lambda$ in a specified range. For each
$x^{\sharp}(\lambda)$, we obtained $\hat x (\tau)$ and $\tilde x (\sigma)$ using
that $\hat x(\tau) = \tilde x (\sigma) = x^{\sharp}(\lambda)$ for
$\tau := \|x^{\sharp}(\lambda) \|_{1}$ and
$\sigma := \|y - A x^{\sharp}(\lambda)\|_{2}$.
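The following minimal sketch (not the code used for the experiments)
illustrates this shortcut: {$(\mathrm{QP}_{\lambda})$} is solved by a simple ISTA iteration for an
assumed objective $\tfrac{1}{2}\|Ax - y\|_{2}^{2} + \lambda\|x\|_{1}$, and the
equivalent {$(\mathrm{LS}_{\tau})$} and {$(\mathrm{BP}_{\sigma})$} parameters are read off from each solution. The
problem sizes, the parameter grid, and the solver are illustrative assumptions.
\begin{verbatim}
# Minimal sketch of the program-equivalence shortcut: solve (QP_lambda) on a
# grid of lambda values, then read off tau = ||x#(lambda)||_1 and
# sigma = ||y - A x#(lambda)||_2 for (LS_tau) and (BP_sigma).
import numpy as np

def solve_qp(A, y, lam, n_iter=2000):
    """ISTA for the assumed objective (1/2)||Ax - y||_2^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v = x - A.T @ (A @ x - y) / L      # gradient step
        x = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
m, N, s, eta = 120, 400, 5, 0.1
A = rng.normal(0.0, m ** -0.5, (m, N))     # A_ij ~ N(0, 1/m)
x0 = np.zeros(N); x0[:s] = 1.0
y = A @ x0 + eta * rng.standard_normal(m)

for lam in np.logspace(-2, 0, 9):
    x_sharp = solve_qp(A, y, lam)
    tau = np.abs(x_sharp).sum()                 # equivalent (LS_tau) parameter
    sigma = np.linalg.norm(y - A @ x_sharp)     # equivalent (BP_sigma) parameter
    loss = np.linalg.norm(x_sharp - x0) ** 2 / eta ** 2
    print(f"lambda={lam:.3g}  tau={tau:.3f}  sigma={sigma:.3f}  loss={loss:.3g}")
\end{verbatim}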
For convenience, we refer to each plot in \autoref{fig:bp-numerics-1} \emph{via}
its (row, column) position in the figure. The collection of plots serves to
depict how the average loss changes as a function of $N$ and $\delta = m/N$ when
$\eta = 1$. Namely, as $N$ increases, the average loss for {$(\mathrm{BP}_{\sigma})$} becomes sharper
about the optimal parameter choice. In addition, as $\delta$ increases, we
observe the same phenomenon. In \autoref{fig:bp-numerics-2}, similar content is
depicted, but for $\eta = 100$. In this case, $n_{\text{rbf}} = 501$ was used.
Specific parameter settings for the RBF approximation for each set of problem
parameters and program are detailed in
\autoref{tab:rbf-parameter-settings}. Because {$(\mathrm{BP}_{\sigma})$} parameter instability is not
easily visualized in small dimensions (e.g.,~ for $N < 10^{6}$), we supply several
plots visualizing the quality of the RBF approximation. Namely, the
approximation-quality plots corresponding with \autoref{fig:bp-numerics-1} may be
found in \autoref{fig:bp-numerics-1-approx-quality}, while those corresponding
with \autoref{fig:bp-numerics-2} are included directly in that figure. Each row
of these plots is a triptych;
each column corresponds to a program: {$(\mathrm{LS}_{\tau})$} for the left-most, {$(\mathrm{BP}_{\sigma})$} in the
centre; and {$(\mathrm{QP}_{\lambda})$} on the right. These plots depict a single line and a
collection of points. The points correspond to individual loss values for each
realization of the noise and each normalized parameter value computed. The line
corresponds to the RBF approximation of the average loss for that program. The
domain for the {$(\mathrm{LS}_{\tau})$} plots is $(0.95, 0.95^{-1})$ in the normalized parameter
space. For {$(\mathrm{BP}_{\sigma})$} it is $(0.9, 0.9^{-1})$, and for {$(\mathrm{QP}_{\lambda})$} $(0.75, 0.75^{-1})$.
In \autoref{fig:bp-numerics-1-approx-quality}, one may observe by inspection
that the loss realizations for {$(\mathrm{LS}_{\tau})$} and {$(\mathrm{QP}_{\lambda})$} typically achieve their minima very close
to $\rho = 1$. In contrast, the {$(\mathrm{BP}_{\sigma})$} loss achieves its optimum over a relatively
wider range of the domain. This behaviour is integral to how the {$(\mathrm{BP}_{\sigma})$} risk is
sensitive to the choice of $\sigma$.
There appear to be two competing factors that impact sensitivity to parameter
choice and program optimality. The first is stability of the parameter with
respect to variation due to the noise realization. This has already been
described: because $\sigma (\lambda^{*}, z)$ varies greatly as a function of
$z$, program optimality is destroyed by suboptimal loss values near
$\sigma = \sigma^{*}$. On the other hand, this variation also tends to have a
smoothing effect about the optimal normalized parameter. In contrast, because
$\tau(\lambda^{*}, z)$ does not vary as greatly in this manner, its sensitivity
to parameter choice is not smoothed by such local averaging effects.
\begin{figure*}[h]
\centering
\includegraphics[width=.45\textwidth]{berk8.pdf}\hfill
\includegraphics[width=.45\textwidth]{berk9.pdf}
\includegraphics[width=.45\textwidth]{berk10.pdf}\hfill
\includegraphics[width=.45\textwidth]{berk11.pdf}
\includegraphics[width=.45\textwidth]{berk12.pdf}\hfill\hphantom{a}
\caption{Each plot depicts the average loss as a function of the normalized
parameter for each of the three programs under consideration. The collection
of plots depicts how the average loss changes as a function of $N$ and
$\delta = m/N$. Details for each plot will be given by referencing the (row,
column) position of the plot in this figure. The domain of the normalized
parameter in each plot is $(0.2, 5)$. A single realization of $A$ was fixed
and the average loss was computed from $k = 50$ realizations of the noise by
constructing a function approximator using radial basis function
approximation with a multiquadric kernel. The RBF approximator was evaluated
on a logarithmically spaced grid of $n = 301$ points centered about $1$. %
\textbf{(1,1):} $(s, N, \delta, \eta) = (1, 4000, 0.1, 1)$;
\textbf{(1,2):} $(s, N, \delta, \eta) = (1, 7000, 0.1, 1)$;
\textbf{(2,1):} $(s, N, \delta, \eta) = (1, 4000, 0.25, 1)$;
\textbf{(2,2):} $(s, N, \delta, \eta) = (1, 7000, 0.25, 1)$;
\textbf{(3,1):} $(s, N, \delta, \eta) = (1, 4000, 0.45, 1)$.}
\label{fig:bp-numerics-1}
\end{figure*}
\begin{figure*}[h]
\centering
\begin{minipage}[h]{.7\linewidth}
\large $N = 4000, \eta = 1$
\hfill
\large $N = 7000, \eta = 1$
\end{minipage}
\includegraphics[width=.45\textwidth]{berk13.pdf}
\hfill
\includegraphics[width=.45\textwidth]{berk14.pdf}
\includegraphics[width=.45\textwidth]{berk15.pdf}
\hfill
\includegraphics[width=.45\textwidth]{berk16.pdf}
\includegraphics[width=.45\textwidth]{berk17.pdf}
\hfill\hphantom{a}
\caption{Each plot depicts the quality of the RBF approximation about the
optimal normalized parameter. In every case, the left-most plot depicts
the loss and (approximate) average loss of {$(\mathrm{LS}_{\tau})$}; the middle that for {$(\mathrm{BP}_{\sigma})$};
and the right that for {$(\mathrm{QP}_{\lambda})$}. Top-to-bottom:%
$(s, N, \delta, \eta) = (1, 4000, 0.1, 1)$;
$(s, N, \delta, \eta) = (1, 4000, 0.25, 1)$;
$(s, N, \delta, \eta) = (1, 4000, 0.45, 1)$;
$(s, N, \delta, \eta) = (1, 7000, 0.1, 1)$;
$(s, N, \delta, \eta) = (1, 7000, 0.25, 1)$.
}
\label{fig:bp-numerics-1-approx-quality}
\end{figure*}
\begin{figure*}[h]
\centering
\includegraphics[width=.45\textwidth]{berk18.pdf}
\hfill \includegraphics[width=.45\textwidth]{berk19.pdf}
\vspace{-6pt}
\textcolor{lightgrey}{\rule{.45\textwidth}{.75pt}}\hfill
\textcolor{lightgrey}{\rule{.45\textwidth}{.75pt}}
\vspace{6pt}
\includegraphics[width=.45\textwidth]{berk20.pdf}
\hfill
\includegraphics[width=.45\textwidth]{berk21.pdf}
\caption{\textbf{Top row:} Each plot depicts the average loss as a function of
the normalized parameter for each of the three programs under
consideration. The collection of plots depicts how the average loss changes as
a function of $\delta = m/N$. The domain of the normalized parameter in each
plot is $(0.2, 5)$. A single realization of $A$ was fixed and the average loss
was computed from $k = 50$ realizations of the noise by constructing a
function approximator using radial basis function approximation with a
multiquadric kernel. The RBF approximator was evaluated on a logarithmically
spaced grid of $n_{\text{rbf}} = 501$ points centered about
$1$. \textbf{Bottom row:} Each plot depicts the quality of the RBF
approximation about the optimal normalized parameter. In each triptych, the
left plot depicts the loss and (approximate) average loss of {$(\mathrm{LS}_{\tau})$}; the middle
that for {$(\mathrm{BP}_{\sigma})$}; and the right that for {$(\mathrm{QP}_{\lambda})$}. \textbf{Left column:}
$(s, N, \delta, \eta) = (1, 7000, 0.1, 100)$; %
\textbf{Right column:} $(s, N, \delta, \eta) = (1, 7000, 0.25, 100)$.}
\label{fig:bp-numerics-2}
\end{figure*}
\subsection{More synthetic examples}
\label{sec:more-synth-exampl}
In this section, we display three synthetic examples where only $s$ and $\eta$
were changed. Thus, the effect of sparsity and noise scale is readily
observed. In each of these figures, the aspect ratio of the measurement matrix
was $\delta = 0.25, 0.45$ for the left- and right-hand plots respectively
(except \autoref{fig:synthetic-example-1} where $\delta = 0.25$ was too small to
achieve recovery). The average loss curves for each program were computed from
$k = 25$ realizations of loss curves that were, themselves, generated on a
logarithmically spaced grid of $n = 201$ points centered about the optimal choice
of the normalized parameter, $\rho = 1$. The loss realizations were again
computed by solving {$(\mathrm{QP}_{\lambda})$} and using the correspondence between \textsc{Lasso}
programs to compute the loss curves for {$(\mathrm{LS}_{\tau})$} and {$(\mathrm{BP}_{\sigma})$}.
The bottom row of \autoref{fig:synthetic-example-0} and
\autoref{fig:synthetic-example-2}, and the right half of
\autoref{fig:synthetic-example-1} depict the quality of the approximation of the
average loss curve for each program. Specifically, each program appears with its
own facet, in which are displayed the individual loss realizations
$L(\rho_{i}; x_{0}, A, \eta z_{j}), i \in [n], j \in [k]$ as grey points, and
the average loss $\bar L(\rho; x_{0}, A, \eta)$ as a coloured line. The top row
of \autoref{fig:synthetic-example-0} and \autoref{fig:synthetic-example-2}, and
the left half of \autoref{fig:synthetic-example-1} compare the average loss
curves for each program, where the average losses are plotted on a log-log scale
with respect to the normalized parameter.
The first figure, \autoref{fig:synthetic-example-0}, displays a setting similar
to \autoref{fig:low-noise-numerics}. The noise scale was $\eta = 10^{-5}$ and
$s = 1$. Thus, the setting depicts the low-noise high-sparsity regime. The
second figure, \autoref{fig:synthetic-example-1}, depicts a moderately low-noise
regime, with a large value of $s$ (so large that $\delta = 0.25$ did not yield
adequate recovery). Thus, this figure depicts the regime in which $x_{0}$ is
near the limit of acceptable sparsity for the CS regime. Finally, the parameter
settings for \autoref{fig:synthetic-example-2} were $s = 100$ and $\eta =
100$. In particular, sparsity is modest, and the noise scale is large (the
variance equals the ambient dimension, $\eta^{2} = N = 10^{4}$).
It is readily observed that {$(\mathrm{LS}_{\tau})$} is highly sensitive to its parameter choice in
both low-noise regimes. We observe that {$(\mathrm{BP}_{\sigma})$} becomes more sensitive to its
parameter choice as $s$ decreases from $750$ to $100$ to $1$. Finally, we
observe that {$(\mathrm{QP}_{\lambda})$} is most sensitive to its parameter choice in the low-noise
high-sparsity regime. This left-sided sensitivity is consistent with the theory
and numerical simulations for the corresponding proximal denoising setting
in~\cite{berk2019pdparmsens, berk2020sl1mpc}.
\begin{figure*}[h]
\centering
\includegraphics[width=.45\textwidth]{berk22.pdf}
\hfill
\includegraphics[width=.45\textwidth]{berk23.pdf}
\includegraphics[width=.45\textwidth]{berk24.pdf}
\hfill
\includegraphics[width=.45\textwidth]{berk25.pdf}
\caption{Parameter instability numerics in the low-noise, high-sparsity
regime. \textbf{Top row:} Average loss is plotted with respect to the
normalized parameter for each program. \textbf{Bottom row:} Visualizations of
RBF approximation quality for average loss (best seen on a
computer). \textbf{Left:} %
$(s, N, m, \eta, k, n) = (1, 10^{4}, 2500, 10^{-5}, 25, 201)$; \textbf{Right:}
$(s, N, m, \eta, k, n) = (1, 10^{4}, 4500, 10^{-5}, 25, 201)$.}
\label{fig:synthetic-example-0}
\end{figure*}
\begin{figure}[h]
\centering
\includegraphics[width=.44\textwidth]{berk26.pdf}
\hfill
\includegraphics[width=.46\textwidth]{berk27.pdf}
\caption{Parameter instability numerics in the low-sparsity regime with
parameters $(s, N, m, \eta, k, n) = (750, 10^{4}, 4500, 10^{-1}, 25,
201)$. \textbf{Left:} Average loss is plotted with respect to the normalized
parameter for each program. \textbf{Right:} Visualizations of the RBF
approximation quality for average loss.}
\label{fig:synthetic-example-1}
\end{figure}
\begin{figure*}[h]
\centering
\includegraphics[width=.45\textwidth]{berk28.pdf}\hfill
\includegraphics[width=.45\textwidth]{berk29.pdf}
\includegraphics[width=.45\textwidth]{berk30.pdf}\hfill
\includegraphics[width=.45\textwidth]{berk31.pdf}
\caption{Parameter instability numerics for intermediate parameter values:
$(s, N, \eta, k, n) = (10^{2}, 10^{4}, 10^{-1}, 25, 201)$. \textbf{Left:}
$m = 2500$; \textbf{Right:} $m = 4500$. \textbf{Top:} Average loss is plotted
with respect to the normalized parameter for each program. \textbf{Bottom:}
Visualizations of average loss approximation quality.}
\label{fig:synthetic-example-2}
\end{figure*}
\subsection{Realistic Examples}
\label{sec:realistic-examples}
We next include two realistic examples in addition to the synthetic ones of the
previous sections. We show how CS programs may exhibit sensitivity as a function
of their governing parameter for a 1D and 2D wavelet problem. In each example,
we will include plots similar to those appearing above, though with some key
differences. As above, the average loss is computed from several realizations of
the loss, which depend in turn on realizations of the noise. However, we plot
the loss corresponding to a single realization as a function of the normalized
parameter; the average loss is still required, as it is used to compute the
normalized parameter. As before, we approximate the average loss and the
normalized parameter using RBF interpolation, described in
\autoref{sec:rbf-approximation}. The figures of this section contain three main
pieces: the psnr as a function of the normalized parameter; the loss, equal to
the nnse, as a function of the normalized parameter; and a grid of plots that
allows for comparison of CS recovery by visualizing the recoveries in the signal
domain. The latter grid of plots shall be referred to as ``grid plots'', while
the psnr and nnse plots shall be referred to as ``reference plots'', as they
contain annotations that relate them to the grid plots.
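For concreteness, the two reference-plot quantities may be computed as in the
following minimal sketch (not the authors' code); the nnse is the loss
$\eta^{-2}\|\hat\xi - \xi_{0}\|_{2}^{2}$ used for the examples below, while the
psnr peak-value convention shown here is an assumption.
\begin{verbatim}
# Minimal sketch of the two reference-plot metrics for a recovered signal
# xi_hat against the ground truth xi0.
import numpy as np

def nnse(xi_hat, xi0, eta):
    """Noise-normalized squared error: eta^{-2} ||xi_hat - xi0||_2^2."""
    return np.linalg.norm(xi_hat - xi0) ** 2 / eta ** 2

def psnr(xi_hat, xi0):
    """Peak signal-to-noise ratio; the peak is taken as max|xi0| (an assumption)."""
    mse = np.mean((xi_hat - xi0) ** 2)
    peak = np.max(np.abs(xi0))
    return 10.0 * np.log10(peak ** 2 / mse)
\end{verbatim}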
We now include a brief description of the so-called grid plots and associated
reference plots that appear in this section. Other than plotting the loss rather
than the average loss, a key difference of the reference plots from the plots of
\autoref{sec:ls-numerics}--\ref{sec:bp-numerics} is that they have been
annotated with vertical black dashed lines and coloured dots. Where the loss
for a program intersects a black dashed line, we show a representative
solution for that program, with the normalized parameter for the problem given
(approximately) by the $x$-intercept of the vertical line. Because the
programs were solved on a grid, the true value of the normalized parameter is
given by the coloured dot appearing nearest the black dashed line. The $x$ axis
for the reference plots is the normalized parameter (plotted on a log
scale). The $y$ axis for the reference plots is either the psnr (plotted on a
linear scale) or the nnse (plotted on a log scale). The representatives for each
chosen normalized parameter value and each program are plotted as a faceted grid
below the reference plot. The chosen normalized parameter value is given at the
top of each column, while the program used to recover the noisy ground truth
signal is described in the legend.
\subsubsection{1D wavelet compressed sensing}
\label{sec:1d-wavel-compr}
The signal $\xi_{0} \in \ensuremath{\mathbb{R}}^{N}$, $N = 4096$, was constructed in the
Haar-wavelet domain. In particular, $x_{0} \in \ensuremath{\mathbb{R}}^{N}$ has $10$ non-zero
coefficients, each equal to $N$. Let $\mathcal{W}_{1}$ denote the $1$D Haar
wavelet transform. Thus, $x_{0} = \mathcal{W}_{1}\xi_{0}$ where
$\|x_{0}\|_{0} = 10$. Next, for $A \in \ensuremath{\mathbb{R}}^{m \times N}$ where $m = 1843$,
define $y = A x_{0} + \eta z$ where $z_{i} \ensuremath{\overset{\text{iid}}{\sim}} \mathcal{N}(0, 1)$ and
$\eta = 50$. The signal's wavelet coefficients were recovered using {$(\mathrm{LS}_{\tau})$}, {$(\mathrm{QP}_{\lambda})$}
and {$(\mathrm{BP}_{\sigma})$} for several realizations of the noise $z$ and over a grid of
normalized parameter values. For example, $\hat x(\tau_{i}; x_{0}, A, z^{(j)})$
is the {$(\mathrm{LS}_{\tau})$} recovery of the wavelet coefficients $x_{0}$ from $(y^{(j)}, A)$
with $\tau = \tau_{i}$, where $y^{(j)} := Ax_{0} + \eta z^{(j)}$. The recovered
signal is thus given by
$\hat \xi(\tau_{i}) := \mathcal{W}_{1}^{-1} \hat x(\tau_{i})$ and the loss given by
$\eta^{-2} \| \hat \xi(\tau_{i}) - \xi_{0}\|_{2}^{2}$. The loss is defined
similarly for the other programs. Specifically, the loss is measured in the
signal domain and not the wavelet domain. The average loss was approximated from
$k = 50$ loss realizations using RBF interpolation, as described in
\autoref{sec:rbf-approximation}, on a grid of $n = 501$ points logarithmically
spaced and centered about $\rho = 1$.
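The following minimal sketch (not the code used for the experiments) mirrors
this construction at a smaller, illustrative size: an orthonormal Haar analysis
matrix, a sparse coefficient vector, Gaussian measurements, a simple ISTA
stand-in for the \textsc{Lasso} solve, and the loss measured in the signal
domain. The dimensions, the choice of $\lambda$, and the solver are assumptions
made for illustration only.
\begin{verbatim}
# Minimal sketch of the 1D Haar-wavelet experiment (smaller sizes than in the text).
import numpy as np

def haar_matrix(n):
    """Orthonormal 1D Haar analysis matrix of size n x n (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        top = np.kron(H, [1.0, 1.0])
        bot = np.kron(np.eye(H.shape[0]), [1.0, -1.0])
        H = np.vstack([top, bot]) / np.sqrt(2.0)
    return H

def ista(A, y, lam, n_iter=2000):
    """ISTA for the assumed objective (1/2)||Ax - y||_2^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v = x - A.T @ (A @ x - y) / L
        x = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)
    return x

rng = np.random.default_rng(0)
N, m, s, eta = 256, 128, 10, 0.5
W = haar_matrix(N)                     # analysis: x = W xi; synthesis: xi = W.T x
x0 = np.zeros(N); x0[:s] = float(N)    # s non-zero coefficients, each equal to N
xi0 = W.T @ x0                         # ground-truth signal
A = rng.normal(0.0, m ** -0.5, (m, N))
y = A @ x0 + eta * rng.standard_normal(m)

x_hat = ista(A, y, lam=2.0 * eta)      # illustrative parameter choice
xi_hat = W.T @ x_hat                   # recovered signal
loss = np.linalg.norm(xi_hat - xi0) ** 2 / eta ** 2   # loss in the signal domain
print(loss)
\end{verbatim}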
Results of this simulation are depicted in \autoref{fig:lasso-realistic-1d} with
RBF interpolation parameter settings given in
\autoref{tab:lasso-realistic-1d}. The results shown in
\autoref{fig:lasso-realistic-1d} depict data from only a single noise
realization: the top-most graphic shows psnr as a function of the normalized
parameter; the middle graphic plots the loss as a function of the normalized
parameter; and the bottom group compares the ground-truth signal and recovered
signals in the signal domain. While psnr and loss are plotted instead of average
psnr and average loss, the normalized parameter was computed from the average
loss, as usual (\emph{cf.} \autoref{sec:rbf-approximation}). Correspondingly,
observe that the optimal parameter choice for each program may not appear at
$\rho = 1$, since the optimal normalized parameter for a particular loss
realization is not necessarily equal to the optimal normalized parameter for the
expected loss. In the bottom group of 15 plots, each row corresponds with a
particular program --- {$(\mathrm{LS}_{\tau})$}, {$(\mathrm{QP}_{\lambda})$} and {$(\mathrm{BP}_{\sigma})$}, from top to bottom --- while each
column corresponds with a particular value of the normalized parameter --- 0.5,
0.75, 1, 1.3 and 2, from left to right. The recovered signal for that program
and normalized parameter value is shown as a coloured line, while the ground
truth signal is shown as a black line.
As $\eta = 50$, the problem lies outside of the small-noise regime. As such,
{$(\mathrm{BP}_{\sigma})$} is more sensitive to its parameter choice than {$(\mathrm{QP}_{\lambda})$}, and more sensitive
than {$(\mathrm{LS}_{\tau})$} for $\rho > 1$ due to the relatively high sparsity of the
signal. Since suboptimality of {$(\mathrm{BP}_{\sigma})$} is observed for risk or average loss rather
than for individual loss realizations, we do not observe suboptimality of {$(\mathrm{BP}_{\sigma})$}
loss in these graphics. As expected, {$(\mathrm{LS}_{\tau})$} is sensitive to its parameter choice
for $\rho < 1$, as the ground-truth solution lies outside the feasible set in
this setting. It appears that the loss is mildly more sensitive to
under-guessing $\tau$ in this regime than it is to over-guessing $\sigma$. This
is readily observed from all of the plots in the figure, especially by comparing
those in the bottom group of 15.
For comparison with the middle plot of \autoref{fig:lasso-realistic-1d}, we
include a plot of the average loss for each program as a function of the
normalized parameter in \autoref{fig:lasso-realistic-1d-avg-loss} (left
plot). Beside it is a triptych visualizing the RBF approximation quality for the
average loss.
\begin{figure*}[h]
\centering
\includegraphics[width=.8\textwidth]{berk32.pdf}
\includegraphics[width=.8\textwidth]{berk33.pdf}
\includegraphics[width=.8\textwidth]{berk34.pdf}
\caption{Realistic example in 1D for
$(s, N, m, \eta, k, n) = (10, 4096, 1843, 50, 50, 501)$. Ground truth signal
$x_{0}$ defined in the Haar wavelet domain with first $s$ coefficients equal
to $N$. Noise added in the Haar wavelet domain; recovery error measured in the
    signal domain. \textbf{Top:} psnr as a function of the normalized
    parameter for each program. \textbf{Middle:} nnse as a function of
    the normalized parameter for each program. \textbf{Bottom:} The ground truth
and recovered signal for a single realization of the noise, faceted by the
approximate normalized parameter value (given in the title) and by program (as
depicted in the legend). }
\label{fig:lasso-realistic-1d}
\end{figure*}
\begin{figure*}[h]
\centering
\includegraphics[width=.3\textwidth]{berk35.pdf}\hfill
\includegraphics[width=.6\textwidth]{berk36.pdf}
\caption{Parameter sensitivity numerics for $1$D wavelet CS example with
    parameter settings $(s, N, m, \eta, k, n) = (10, 4096, 1843, 50, 50,
501)$. \textbf{Left:} Average loss for each program plotted with respect to
the normalized parameter. \textbf{Right:} A visualization of approximation
quality for the average loss: {$(\mathrm{LS}_{\tau})$}, {$(\mathrm{BP}_{\sigma})$} and {$(\mathrm{QP}_{\lambda})$}, from left to right.}
\label{fig:lasso-realistic-1d-avg-loss}
\end{figure*}
\subsubsection{2D Wavelet Compressed Sensing}
\label{sec:2d-wavel-compr}
\begin{figure}[h]
\centering
\includegraphics[width=.3\textwidth]{berk37.pdf}
\caption{The square Shepp-Logan phantom (sslp).}
\label{fig:sslp-base}
\end{figure}
In this section, we describe numerical simulations for a 2D wavelet compressed
sensing problem. The signal, $\xi_{0}$, is an $80\times 80$ image of the
so-called square Shepp-Logan phantom (sslp), visualized
in~\autoref{fig:sslp-base}. The sslp was first used
in~\cite{berk2020sl1mpc}. Let $\mathcal{W}$ denote the Haar wavelet transform
and define $x_{0} := \mathcal{W}\xi_{0} \in \ensuremath{\mathbb{R}}^{6400}$ to be the vector of
Haar wavelet coefficients for the signal $\xi_{0}$. The linear measurements are
taken as
\begin{align*}
y = A x_{0} + \eta z, \qquad %
A_{ij} \ensuremath{\overset{\text{iid}}{\sim}} \mathcal{N}\left(0, m^{-1}\right), z_{i} \ensuremath{\overset{\text{iid}}{\sim}} \mathcal{N}(0, 1), \eta > 0.
\end{align*}
The signal's wavelet coefficients were recovered using {$(\mathrm{QP}_{\lambda})$} to obtain
$x^{\sharp}(\lambda_{i})$, where $i \in [n]$ enumerates the grid of parameter
values. The recovered image is then given by
$\xi^{\sharp}(\lambda_{i}) := \mathcal{W}^{-1}(x^{\sharp}(\lambda_{i}))$. By
using the method described previously at the beginning of
\autoref{sec:numerical-results}, the corresponding solutions for {$(\mathrm{LS}_{\tau})$} and {$(\mathrm{BP}_{\sigma})$}
were computed, obtaining $\hat x(\tau_{i})$ and $\tilde x(\sigma_{i})$,
respectively, in addition to the corresponding images
$\hat \xi(\tau_{i}) := \mathcal{W}^{-1}(\hat x (\tau_{i}))$ and
$\tilde \xi(\sigma_{i}) := \mathcal{W}^{-1}(\tilde x (\sigma_{i}))$,
$i \in [n]$. As in \autoref{sec:1d-wavel-compr}, the loss has been modified to
measure the nnse in the image domain. For example, the {$(\mathrm{LS}_{\tau})$} loss is given as
$\eta^{-2}\|\hat\xi(\tau_{i}) - \xi_{0}\|_{2}^{2}$; similarly for the other two
programs.
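For completeness, the following minimal sketch (not the authors' code) shows
one common separable convention for a 2D Haar transform at an illustrative
$64 \times 64$ size; the stand-in image, the dimensions, and the assumption
that the experiments use exactly this separable convention are illustrative
only.
\begin{verbatim}
# Minimal sketch of a separable 2D Haar analysis/synthesis pair.
import numpy as np

def haar_matrix(n):
    """Orthonormal 1D Haar analysis matrix of size n x n (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        top = np.kron(H, [1.0, 1.0])
        bot = np.kron(np.eye(H.shape[0]), [1.0, -1.0])
        H = np.vstack([top, bot]) / np.sqrt(2.0)
    return H

n = 64
W1 = haar_matrix(n)
xi0 = np.zeros((n, n)); xi0[16:48, 16:48] = 1.0   # stand-in piecewise-constant image
coeffs = W1 @ xi0 @ W1.T                          # separable 2D analysis
x0 = coeffs.ravel()                               # vector of wavelet coefficients
assert np.allclose(W1.T @ coeffs @ W1, xi0)       # synthesis recovers the image
\end{verbatim}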
Average loss as a function of the normalized parameter $\rho$ is shown in
\autoref{fig:2d-wavelet-nnse-1} for $\eta = 10^{-2}, 1/2$ with $m = 2888$ (i.e.,~
$m / N \approx 0.45$). The average loss was approximated using RBF interpolation
from $k=50$ realizations along a logarithmically spaced grid of $501$ points
centered about $\rho=1$ using the method described in
\autoref{sec:rbf-approximation}. The parameter settings for the RBF
interpolation are provided in \autoref{tab:2d-wvlt-cs-params}. Plots showing the
approximation quality of the RBF interpolation are given in the bottom
row of \autoref{fig:2d-wavelet-nnse-1}. In these plots, individual realizations
of the nnse for the recovery are shown as grey points. The RBF interpolant is
given by the coloured line in each plot. The approximation quality is only
visualized for a narrow region about $\rho = 1$. Indeed, the approximation
quality of the RBF interpolant was observed, in every case, to be better away
from $\rho = 1$ than about $\rho = 1$: ensuring good interpolation of the loss
realizations about $\rho = 1$ was observed to be sufficient for ensuring good
interpolation of the average loss over the region of interest,
$\rho \in [10^{-1}, 10^{1}]$.
In the left column of the figure, where $\eta = 10^{-2}$, we observe that {$(\mathrm{LS}_{\tau})$}
is relatively more sensitive to its parameter choice than either {$(\mathrm{BP}_{\sigma})$} or
{$(\mathrm{QP}_{\lambda})$}. In particular, for this problem, we observe that $\eta = 10^{-2}$ is
sufficient to lie within the low-noise regime. Due to how the solutions for
{$(\mathrm{LS}_{\tau})$} were computed from those for {$(\mathrm{QP}_{\lambda})$}, the average loss curve for {$(\mathrm{LS}_{\tau})$} is
not resolved over the full domain for the normalized parameter. This reinforces
how small changes in the normalized parameter value for {$(\mathrm{LS}_{\tau})$} correspond to
relatively much larger changes in the normalized parameter value for {$(\mathrm{QP}_{\lambda})$}.
In the right column of the figure, where $\eta = 1/2$, we observe that {$(\mathrm{BP}_{\sigma})$} is
relatively more sensitive to its parameter choice than either {$(\mathrm{LS}_{\tau})$} or {$(\mathrm{QP}_{\lambda})$}. We
expect this is due to the relatively high sparsity of the signal. Again, the
average loss curve for {$(\mathrm{BP}_{\sigma})$} is not resolved over the full plotted domain of the
normalized parameter. This underscores how changes in the governing parameter for
{$(\mathrm{QP}_{\lambda})$} correspond with relatively smaller changes in the governing parameter for
{$(\mathrm{BP}_{\sigma})$}. In particular, {$(\mathrm{BP}_{\sigma})$} is more sensitive to its governing parameter than
{$(\mathrm{QP}_{\lambda})$} in the present problem. This observation is supported by the theory of
\autoref{sec:bp-minimax-suboptimality}.
As in previous numerical simulations, the bottom row of
\autoref{fig:2d-wavelet-nnse-1} includes triptychs depicting the average loss
approximation quality for the RBF interpolation of the loss realizations
(\emph{cf.} \autoref{sec:rbf-approximation}).
\begin{figure*}[h]
\centering
\includegraphics[width=.45\textwidth]{berk38.pdf}\hfill
\includegraphics[width=.45\textwidth]{berk39.pdf}
\vspace{-6pt}
\textcolor{lightgrey}{\rule{.45\textwidth}{.75pt}}\hfill
\textcolor{lightgrey}{\rule{.45\textwidth}{.75pt}}
\vspace{6pt}
\includegraphics[width=.45\textwidth]{berk40.pdf}\hfill \includegraphics[width=.45\textwidth]{berk41.pdf}
\caption{Average loss for a 2D wavelet compressed sensing problem, plotted as
a function of $\rho$; $(s, N, m) = (416, 6418, 2888)$ with
$(k, n) = (50, 501)$. \textbf{Left:} $\eta = 10^{-2}$. \textbf{Right:}
$\eta = 1/2$. \textbf{Top:} The average loss (i.e.,~ nnse) for each program as
a function of $\rho$. The average loss was approximated using RBF
interpolation with parameters given
in~\autoref{tab:2d-wvlt-cs-params}. \textbf{Bottom:} Plots to evaluate the
    quality of the RBF interpolation. In each plot, individual realizations of
the loss are visible as grey points; the approximation to the average loss
is visible as the coloured line through those points. }
\label{fig:2d-wavelet-nnse-1}
\end{figure*}
In both \autoref{fig:realistic-lasso-sslp-1} and
\autoref{fig:realistic-lasso-sslp-2}, we use four main elements to depict the
results of a 2D wavelet CS problem, each for a single realization of the
noise. In each figure, the top row depicts the psnr curves for each program
(left) and loss curves for each program (right). The bottom row of the figure
contains two groupings of 15 plots each. Each grid of 15 plots is faceted by
program ({$(\mathrm{LS}_{\tau})$}, {$(\mathrm{BP}_{\sigma})$} and {$(\mathrm{QP}_{\lambda})$}, top-to-bottom) and normalized parameter value
($0.5$, $0.75$, $1$, $1.3$, $2$, left-to-right). Each (program, normalized
parameter) tuple on the left-hand side of the figure corresponds with its
partner on the right-hand side. Specifically, the left-hand grid of images
depicts the recovered image for a given (program, normalized parameter) tuple,
while the corresponding right-hand image depicts the pixel-wise nnse in the
signal domain. The details of these images are best examined on a computer.
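The pixel-wise nnse maps in the right-hand grids may be generated as in the
following minimal sketch (not the authors' code); the stand-in images are
illustrative.
\begin{verbatim}
# Minimal sketch of a pixel-wise nnse map for an image pair.
import numpy as np

def pixelwise_nnse(xi_hat, xi0, eta):
    """Per-pixel noise-normalized squared error."""
    return (xi_hat - xi0) ** 2 / eta ** 2

rng = np.random.default_rng(0)
xi0 = np.zeros((80, 80)); xi0[20:60, 20:60] = 1.0     # stand-in ground truth
xi_hat = xi0 + 0.01 * rng.standard_normal((80, 80))   # stand-in recovery
err_map = pixelwise_nnse(xi_hat, xi0, eta=0.5)        # view with, e.g., plt.imshow
\end{verbatim}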
In \autoref{fig:realistic-lasso-sslp-1}, the parameter settings are
$(s, N, m, \eta, k, n) = (416, 6418, 2888, 10^{-2}, 50, 501)$. In particular,
$\eta$ lies within the low-noise regime, as observed by the relative sensitivity
of {$(\mathrm{LS}_{\tau})$} to its parameter choice. In~\autoref{fig:realistic-lasso-sslp-2}, the
parameter settings are
$(s, N, m, \eta, k, n) = (416, 6418, 2888, 1/2, 50, 501)$. In particular, the
noise scale is relatively larger. The relatively high sparsity of the signal
causes {$(\mathrm{BP}_{\sigma})$} to be relatively more sensitive to its parameter choice than either
{$(\mathrm{LS}_{\tau})$} or {$(\mathrm{QP}_{\lambda})$}. These observations are supported by the theory
of~\autoref{sec:LS-instability} and~\autoref{sec:bp-minimax-suboptimality}.
\begin{figure*}[h]
\centering
\includegraphics[width=0.45\textwidth]{berk42.pdf}\hfill
\includegraphics[width=0.45\textwidth]{berk43.pdf}
\includegraphics[width=0.45\textwidth]{berk44.pdf}\hfill
\includegraphics[width=0.45\textwidth]{berk45.pdf}
\caption{A 2D wavelet compressed sensing problem using the square Shepp-Logan
phantom; $(s, N, m, \eta) = (416, 6418, 2888, 10^{-2})$ with
$(k, n) = (50, 501)$. \textbf{Top row:} psnr (left) and nnse (right),
plotted as a function of $\rho$. The plotted curves were generated from the
single realization of the measurements that correspond to the grids depicted
below them. \textbf{Bottom grids:} The left grid of 15 images shows the
recovered image for each of five values of $\rho$:
$\rho \in \{ \frac12, \frac34, 1, \frac43, 2\}$; and for each program:
{$(\mathrm{LS}_{\tau})$}, {$(\mathrm{QP}_{\lambda})$}, {$(\mathrm{BP}_{\sigma})$}. The right grid of 15 images shows the pixel-wise nnse
of the recovered image for the same values of $\rho$, and for the three
programs. Colour bars provide scale, and are best observed on a
computer. The stated values of $\rho$ are approximate; the values of $\rho$
for which the images are depicted are marked by points in the nnse and psnr
    plots of the same colour as the loss curve on top of which they are plotted.}
\label{fig:realistic-lasso-sslp-1}
\end{figure*}
\begin{figure*}[h]
\centering
\includegraphics[width=0.45\textwidth]{berk46.pdf}\hfill
\includegraphics[width=0.45\textwidth]{berk47.pdf}
\includegraphics[width=0.45\textwidth]{berk48.pdf}\hfill
\includegraphics[width=0.45\textwidth]{berk49.pdf}
\caption{A 2D wavelet compressed sensing problem using the square Shepp-Logan
phantom; $(s, N, m, \eta) = (416, 6418, 2888, 1/2)$ with
$(k, n) = (50, 501)$. \textbf{Top row:} psnr (left) and nnse (right),
plotted as a function of $\rho$ for each program. The plotted curves were
generated from the single realization of the measurements that correspond to
the grids depicted below them. \textbf{Bottom grids:} The left grid of 15
images shows the recovered image for each of five values of $\rho$:
$\rho \in \{ \frac12, \frac34, 1, \frac43, 2\}$; and for each program:
{$(\mathrm{LS}_{\tau})$}, {$(\mathrm{QP}_{\lambda})$}, {$(\mathrm{BP}_{\sigma})$}. The right grid of 15 images shows the pixel-wise nnse
of the recovered image for the same values of the normalized parameter, and
for the three programs. Colour bars provide scale, and are best observed on
a computer. The stated values of $\rho$ are approximate; the values of
$\rho$ for which the images are depicted are marked by points in the nnse
    and psnr plots of the same colour as the loss curve on top of which they are
plotted.}
\label{fig:realistic-lasso-sslp-2}
\end{figure*}
\section{Proofs}
\label{sec:proofs}
\subsection{Risk equivalences}
\label{sec:risk-equivalences}
\begin{proof}[Proof of {\autoref{prop:Rstar-minimax-optimal}}]
The left-most inequality,
\begin{align*}
cs \log (N /s ) %
\leq M^{*}(s, N), %
\end{align*}
is a consequence of~\cite[Theorem 1]{candes2013well}, and the second inequality,
\begin{align*}
M^{*}(s, N) %
\leq \inf_{\lambda > 0} \sup_{x_{0} \in \Sigma_{s}^{N}} %
R^{\sharp}(\lambda; x_{0}, A, \eta) %
\end{align*}
is trivial. The third inequality,
\begin{align*}
\inf_{\lambda > 0} \sup_{x_{0} \in \Sigma_{s}^{N}} %
R^{\sharp}(\lambda; x_{0}, A, \eta) %
\leq C_{\delta} R^{*}(s, A)
\end{align*}
is a consequence of~\cite[Theorem 6]{bickel2009simultaneous} and
\autoref{coro:opt-risk-near-equiv}. Indeed, $R^{*}(s, A)$ may be lower-bounded
by the optimally tuned worst-case risk,
$\sup_{x \in \Sigma_{s}^{N}} \hat R(\|x\|_{1}; x, A, \eta)$, which is again
lower-bounded by $c s \log (N /s )$ due to~\cite[Theorem 1]{candes2013well}. In
particular, selecting constants appropriately gives
\begin{align*}
& \inf_{\lambda > 0} \sup_{x_{0} \in \Sigma_{s}^{N}} %
R^{\sharp}(\lambda; x_{0}, A, \eta) %
\\
&\qquad \leq C_{\delta} s \log(N/s) %
\leq C_{\delta} M^{*}(s, N) %
\\
&\qquad \leq C_{\delta} \sup_{x \in \Sigma_{s}^{N}} \hat R(\|x\|_{1}; x, A, \eta) %
\leq C_{\delta} R^{*}(s, A).
\end{align*}
The final inequality, a variant of which may be found in~\cite{liaw2017simple}
or~\cite{oymak2013squared}, easily follows
from~\autoref{lem:ls-instability-ec}.
\end{proof}
\subsection{$\hat R$ is nearly monotone}
\label{sec:rhat-nearly-monotone}
We first quote a specialized version of a result introduced
in~\cite{liaw2017simple}, which gives a kind of local characterization of the
deviation inequality presented in~\autoref{thm:liaw-14}.
\begin{thm}[{\cite[Theorem~1.7]{liaw2017simple}}]
\label{thm:liaw-17}
Let $A$ be a normalized $K$-subgaussian matrix and $T\subseteq \ensuremath{\mathbb{R}}^{N}$ a
convex set. For any $t \geq 1$, it holds with probability at least
$1 - \exp(-t^{2})$ that
\begin{align*}
\left| \|Ax\|_{2} - \sqrt m \|x \|_{2}\right| %
\leq t \cdot C \tilde K \gamma(T \cap \|x\|_{2} B_{2}^{N}), %
\qquad \text{for all } x \in T.
\end{align*}
\end{thm}
We now present the main result of this section.
\begin{prop}[$\hat R$ is nearly monotone]
\label{prop:Rhat-nearly-monotone}
Let $A$ be a normalized $K$-subgaussian matrix and
$\mathcal{K} \subseteq \ensuremath{\mathbb{R}}^{N}$ a non-empty closed convex set. Fix
$\delta, \eta > 0$, $0 < \tau_{1} \leq \tau_{2} < \infty$ and
$x_{0} \in \mathcal{K}$ with $\|x_{0}\|_{\mathcal{K}} = 1$. For any
$t \geq 1$, if $m$ satisfies
$m > Ct^{2}\tilde K^{2}\delta^{-2} \gamma^{2}(T_{\mathcal{K}}(x_{0}) \cap
\ensuremath{\mathbb{S}}^{N-1})$, then with probability at least $1 - \exp(-t^{2})$ on the
realization of $A$,
\begin{align*}
\hat R(\tau_{1}; \tau_{1} x_{0}, A, \eta)
\leq \frac{1 + \delta}{1 - \delta} \hat R(\tau_{2}; \tau_{2} x_{0}, A, \eta).
\end{align*}
\end{prop}
\begin{proof}[Proof of {\autoref{prop:Rhat-nearly-monotone}}]
Given $0 < \tau_{1} \leq \tau_{2} < \infty$, let
$\tau \in \{ \tau_{1}, \tau_{2}\}$, define $y(\tau) = A\tau x_{0} + \eta z$,
and define
\begin{align*}
q(\tau) := A\hat w(\tau), %
\quad \text{where} \quad %
\hat w(\tau) &:= \hat x(\tau; A, y) - \tau x_{0}, %
\\
\tau &\in \{\tau_{1}, \tau_{2}\}.
\end{align*}
Observe that $q(\tau)$ may be written as
\begin{align*}
q(\tau) &\in \argmin \{ \|q - \eta z\|_{2} : q \in \tau \mathcal{K}' \}, %
\\
\mathcal{K}' &:= \{ A(x - x_{0}) : x \in \mathcal{K} \}.
\end{align*}
The set $\mathcal{K}' \subseteq \ensuremath{\mathbb{R}}^{m}$ is non-empty, closed and convex,
with $0 \in \mathcal{K}'$. In particular, \autoref{lem:projection-lemma}
implies
\begin{align*}
\|q(\tau_{1})\|_{2} \leq \|q(\tau_{2})\|_{2}.
\end{align*}
By~\cite[Theorem 1.7]{liaw2017simple}, for any $t \geq 1$ it holds with
probability at least $1 - \exp(-t^{2})$ on $A$ that for all
$w \in T_{\mathcal{K}}(x_{0})$,
\begin{align*}
\sqrt m \cdot \left|\|Aw\|_{2} - \|w\|_{2} \right| %
\leq Ct\tilde K \gamma(T_{\mathcal{K}}(x_{0}) \cap \ensuremath{\mathbb{S}}^{N-1}).
\end{align*}
Accordingly, since $\hat w(\tau) \in T_{\mathcal{K}}(x_{0})$ for
$\tau = \tau_{1}, \tau_{2}$, under the assumption on $m$ it holds with
probability at least $1 - \exp(-t^{2})$ that
\begin{align*}
(1 - \delta) \|\hat w(\tau_{1})\|_{2} %
&\leq \|q(\tau_{1})\|_{2} \\
&\leq \|q(\tau_{2})\|_{2} %
\leq (1 + \delta) \|\hat w(\tau_{2})\|_{2}. %
\end{align*}
In particular,
$\|\hat w(\tau_{1})\|_{2} \leq \frac{1+\delta}{1-\delta} \|\hat
w(\tau_{2})\|_{2}$. Since this bound holds for every realization of $z$, taking
expectations yields the result:
\begin{align*}
\hat R(\tau_{1}; \tau_{1} x_{0}, A, \eta) %
\leq \frac{1+\delta}{1-\delta} \hat R(\tau_{2}; \tau_{2}x_{0}, A, \eta).
\end{align*}
\end{proof}
\begin{coro}
\label{coro:opt-risk-near-equiv}
Under the assumptions of \autoref{prop:Rhat-nearly-monotone}, the optimally
tuned worst-case risk for {$(\mathrm{LS}_{\tau})$} is nearly equivalent to $R^{*}(s, A)$, in the
sense that
\begin{align*}
R^{*}(s, A) %
\leq \sup_{x \in \Sigma_{s}^{N}} \hat R(\|x\|_{1} ; x, A, \eta)
\leq C R^{*}(s, A).
\end{align*}
\end{coro}
\begin{proof}[Proof of {\autoref{coro:opt-risk-near-equiv}}]
The $\sup$ defining the optimally tuned worst-case risk may be decoupled as
\begin{align}
\label{eq:opt-risk-decouple}
\sup_{x' \in \Sigma_{s}^{N}} \hat R(\|x'\|_{1}; x', A, \eta) %
= \sup_{\tau > 0} \sup_{x \in \Sigma_{s}^{N} \cap \ensuremath{\mathbb{S}}^{N-1}} \hat R (\tau; \tau x, A, \eta).
\end{align}
Applying a standard scaling property gives the relation:
\begin{align*}
\hat R(\tau; \tau x, A, \eta) %
& = \left(\frac{\tau}{\eta}\right)^{2} %
\E \|\hat x(1; y/\tau, A) - x \|_{2}^{2} %
\\
& = \hat R(1; x, A, \eta / \tau). %
\end{align*}
The lower bound follows trivially from these two observations. To prove the
upper bound, we start by observing two facts. First,
$\Sigma_{s}^{N} \cap \ensuremath{\mathbb{S}}^{N-1}$ is compact, so there is $x^{*}(\tau)$
achieving the supremum over the set $\Sigma_{s}^{N} \cap \ensuremath{\mathbb{S}}^{N-1}$ in
\eqref{eq:opt-risk-decouple}. Next, if the supremum over $\tau > 0$ is
achieved for $\tau \to \infty$, there is nothing to show, since
\begin{align*}
\sup_{\tau > 0} &\sup_{x \in \Sigma_{s}^{N}\cap \ensuremath{\mathbb{S}}^{N-1}} \hat R(\tau; \tau x, A, \eta) %
\\
&= \lim_{\tau \to \infty} \sup_{x \in \Sigma_{s}^{N}\cap
\ensuremath{\mathbb{S}}^{N-1}}
\hat R(\tau; \tau x, A, \eta)
\\
&= \lim_{\tau \to \infty} \sup_{x \in \Sigma_{s}^{N}\cap \partial B_{1}^{N}}
\hat R(1; x, A, \eta/ \tau)
\\
& = \lim_{\eta \to 0} \sup_{x \in \Sigma_{s}^{N}\cap \partial B_{1}^{N}}
\hat R(1; x, A, \eta).
\end{align*}
Otherwise, the supremum is achieved for some $0 \leq \tau^{*} < \infty$. Let
$(\tau_{i})_{i \in \ensuremath{\mathbb{Z}}}$ be an arbitrary bi-infinite monotone sequence with
$\tau_{i} \xrightarrow{i\to -\infty} \tau^{*}$ and
$\tau_{i} \xrightarrow{i\to\infty} \infty$. For any $i \leq j$,
\autoref{prop:Rhat-nearly-monotone} and properties of the supremum give
\begin{align*}
\sup_{x \in \Sigma_{s}^{N} \cap \ensuremath{\mathbb{S}}^{N-1}} &\hat R(\tau_{i}; \tau_{i} x, A, \eta) %
\\
&= \hat R(\tau_{i}; \tau_{i} x^{*}(\tau_{i}), A, \eta) %
\\
& \leq C \hat R(\tau_{j}; \tau_{j} x^{*}(\tau_{i}), A, \eta) %
\\
&\leq C \hat R(\tau_{j}; \tau_{j} x^{*}(\tau_{j}), A, \eta) %
\\
& = C \sup_{x \in \Sigma_{s}^{N}\cap \ensuremath{\mathbb{S}}^{N-1}}\hat R(\tau_{j}; \tau_{j} x, A, \eta)
\end{align*}
As the above chain of inequalities holds for any pair $i < 0$ and $j > 0$,
taking $i \to -\infty$ and $j \to \infty$ gives,
\begin{align*}
\sup_{\tau > 0} \sup_{x \in \Sigma_{s}^{N} \cap \ensuremath{\mathbb{S}}^{N-1}}\hat R(\tau; \tau x, A, \eta)
&\leq C \sup_{x \in \Sigma_{s}^{N}\cap \ensuremath{\mathbb{S}}^{N-1}}%
\hat R(\tau_{j}; \tau_{j}x, A, \eta) %
\\
& \xrightarrow{j\to\infty} C \liminf_{\tau \to \infty}
\sup_{x \in \Sigma_{s}^{N} \cap \ensuremath{\mathbb{S}}^{N-1}} \hat R(\tau; \tau x, A, \eta). %
\end{align*}
Finally, combining the above with an application of the standard scaling property yields
\begin{align*}
\sup_{\tau > 0}& \sup_{x \in \Sigma_{s}^{N} \cap \ensuremath{\mathbb{S}}^{N-1}}\hat R(\tau; \tau x, A, \eta) %
\\
&\leq C \liminf_{\tau \to \infty} \sup_{x \in \Sigma_{s}^{N} \cap \ensuremath{\mathbb{S}}^{N-1}} %
\hat R(\tau; \tau x, A, \eta) %
\\
&= C \liminf_{\tau \to \infty} \sup_{x \in \Sigma_{s}^{N} \cap \partial B_{1}^{N}} %
\hat R(1; x, A, \eta / \tau) %
\\
& = C \liminf_{\eta \to 0} \sup_{x \in \Sigma_{s}^{N} \cap \partial B_{1}^{N}} %
\hat R(1; x, A, \eta) %
\end{align*}
\end{proof}
\subsubsection{Controlling a conditionally Gaussian process}
\label{sec:control-condl-gp}
Here we prepare two technical results that are used to control the error of the
tuned approximation ($\tau = \tau^{*}$) uniformly with respect to the noise
scale $\eta > 0$. First, we specialize a result of~\cite{liaw2017simple}. Next,
with high probability on the realization of $A$, we control in expectation the
extreme values of a conditionally Gaussian process.
\begin{lem}[Corollary of {\autoref{thm:liaw-14}}]
\label{lem:corollary-lmpv-thm1.4}
Fix $\delta, \varepsilon, r > 0$ and let $A \in \ensuremath{\mathbb{R}}^{m \times N}$ be a
normalized $K$-subgaussian matrix. For a constant $C_{\varepsilon} > 0$, if
\begin{align}
\label{eq:m-bd-near-isometry}
m %
> C_{\varepsilon} \delta^{-2} \tilde K^{2} r^{2} s \log\left(
\frac{2N}{s}\right),
\end{align}
it holds with probability at least $1 - \varepsilon$ on the realization of
$A$ that
\begin{align}
\label{eq:near-isometry}
\sup_{x \in \mathcal{L}_{s}(r)} \left|\|Ax \|_{2} - \|x\|_{2}\right| %
< \delta.
\end{align}
\end{lem}
\begin{proof}[Proof of {\autoref{lem:corollary-lmpv-thm1.4}}]
If $s = 0$ the result holds trivially. For $s \geq 1$, this lemma is a
straightforward consequence of \autoref{thm:liaw-14}. Set
$u := \sqrt{\log(2\varepsilon^{-1})}$. Indeed, by that result, it holds with
probability at least $1 - \varepsilon$ on the realization of $A$ that
\begin{align*}
\sup_{x \in \mathcal{L}_{s}(r)} &\left|\|Ax\|_{2} - \|x\|_{2}\right| %
\\
& \leq C_{1} m^{-1/2} \tilde K r \left[\w(\Sigma_{s}^{N} \cap \ensuremath{\mathbb{S}}^{N-1})
+ u \right],
\end{align*}
where $C_{1}$ is an absolute constant. By
\autoref{lem:mean-width-sparse-signals}, there is an absolute constant
$C_{2} > 0$ so that
\begin{align*}
\w^{2}(\Sigma_{s}^{N} \cap \ensuremath{\mathbb{S}}^{N-1}) %
\leq C_{2}^{2} s \log \left(\frac{2N}{s}\right).
\end{align*}
In particular,~\eqref{eq:near-isometry} holds if
\begin{align*}
C_{1} \tilde K m^{-1/2}r \left[ %
C_{2} \sqrt{s \log \left(\frac{2N}{s}\right)} + u\right] %
< \delta.
\end{align*}
Observe that this condition is satisfied if~\eqref{eq:m-bd-near-isometry}
holds:
\begin{align*}
m &> C_{\varepsilon}\delta^{-2} \tilde K^{2} r^{2} s
\log\left(\frac{2N}{s}\right),
\\
C_{\varepsilon} %
&:= 4C_{1}^{2} \cdot \max \left\{ %
\log\left(2\varepsilon^{-1}\right), C_{2}^{2} \right\}.
\end{align*}
\end{proof}
\begin{lem}[Conditionally Gaussian process]
\label{lem:conditional-gp-expectation}
Let $\mathcal{K} \subseteq \mathcal{K}_{s}^{N} \cap \ensuremath{\mathbb{S}}^{N-1}$ and suppose
$A \in \ensuremath{\mathbb{R}}^{m \times N}$ is a normalized $K$-subgaussian matrix. Let
$z \in \ensuremath{\mathbb{R}}^{m}$ with $z_{i} \ensuremath{\overset{\text{iid}}{\sim}} \mathcal{N}(0, 1)$ and define
\begin{align*}
f(A, z) := \sup_{x \in \mathcal{K}} \ip{A x, z}. %
\end{align*}
Let $\delta, \varepsilon > 0$ and $s \in \ensuremath{\mathbb{N}}$ with $s \geq 1$. There is a
constant $C_{\varepsilon} > 0$, depending only on $\varepsilon$, so
that if
\begin{align*}
m > C_{\varepsilon} \delta^{-2} \tilde K^{2} s \log(N/s),
\end{align*}
then with probability at least $1 - \varepsilon$ on the realization of $A$,
\begin{align*}
\E\left[ f(A,z) \mid A\right] %
\leq C_{\delta} \sqrt{s \log (2N/s)}
\end{align*}
where $C_{\delta} > 0$ is a constant depending only on
$\delta$.
\end{lem}
\begin{proof}[Proof of {\autoref{lem:conditional-gp-expectation}}]
By \autoref{lem:convexification},
\begin{align*}
\mathcal{K} %
\subseteq \mathcal{K}_{s}^{N} \cap \ensuremath{\mathbb{S}}^{N-1} %
\subseteq \mathcal{L}_{s}.
\end{align*}
Therefore, $f(A,z) \leq \sup_{x \in \mathcal{L}_{s}} \ip{A x,
z}$. Furthermore,
\begin{align*}
\mathcal{L}_{s} - \mathcal{L}_{s} \subseteq \mathcal{L}^{*}_{s}.
\end{align*}
By \autoref{lem:corollary-lmpv-thm1.4},
\begin{align}
\label{eq:conditional-gp-event}
\max_{j \in [N]} \left|\|A^{j}\|_{2} - 1\right| %
\leq \sup_{x \in \mathcal{L}^{*}_{s}} \left|\|Ax\|_{2} - \|x\|_{2}\right| %
< \delta %
\end{align}
with probability at least $1 - \varepsilon$ if $m$ satisfies
\begin{align}
\label{eq:m-bd-conditional-gp}
m %
> 32 C_{\varepsilon} \delta^{-2} \tilde K^{2} s \log(N/s).
\end{align}
Next, where $x \in \mathcal{L}_{s}$, define the random processes
\begin{align*}
X_{x} %
&:= \ip{Ax, z},
& z_{i} &\ensuremath{\overset{\text{iid}}{\sim}} \mathcal{N}(0, 1);
\\
Y_{x} %
&:= (1 + \delta) \ip{x, g},
& g_{i} &\ensuremath{\overset{\text{iid}}{\sim}} \mathcal{N}(0,1).
\end{align*}
Assume~\eqref{eq:m-bd-conditional-gp} holds and condition on the event
$\mathcal{A}$ described by~\eqref{eq:conditional-gp-event}. Then
for any $x, y \in \mathcal{L}_{s}$, $x - y \in \mathcal{L}_{s} - \mathcal{L}_{s} \subseteq \mathcal{L}^{*}_{s}$, so
\begin{align*}
\E (X_{y} - X_{x})^{2} %
& = \|A(x - y)\|_{2}^{2}
\\
& \leq (1 + \delta)^{2} \|x - y\|_{2}^{2}
= \E (Y_{y} - Y_{x})^{2}.
\end{align*}
Namely, conditioned on $\mathcal{A}$, the Sudakov-Fernique inequality
(\autoref{thm:sudakov-fernique}) gives
\begin{align*}
\E\left[f(A,z) \mid A\right] %
& \leq \E\left[\sup_{x \in \mathcal{L}_{s}} X_{x}\right] %
\leq \E\left[\sup_{x \in \mathcal{L}_{s}} Y_{x}\right] %
\\
&= (1 + \delta) \w(\mathcal{L}_{s})
\\
& \leq 2 C (1+ \delta) \sqrt{s \log(2N/s)},
\end{align*}
where $C > 0$ is an absolute constant.
\end{proof}
\begin{rmk}[Subgaussianity of $f(A,z) \mid A$]
\label{rmk:fAz-subgaussianity}
Conditioned on $A$, Borell-TIS (\autoref{thm:borell-tis}) gives subgaussian
concentration of $f(A,z)$ about $\E[f(A,z)\mid A]$. In particular,
\begin{align*}
\left\|f(A,z) - \E\left[f(A,z) \mid A\right] \right\|_{\Psi_{2}} %
\lesssim \sigma_{\mathcal{K}}
\end{align*}
where, on the event $\mathcal{A}$ as defined in the proof of
\autoref{lem:conditional-gp-expectation},
\begin{align*}
\sigma_{\mathcal{K}}^{2} %
= \sup_{x \in \mathcal{K}} \E \left[|\ip{Ax, z}|^{2}\right] %
= \sup_{x \in \mathcal{K}} \|Ax\|_{2}^{2} \leq (1 + \delta)^{2}.
\end{align*}
Note that subgaussianity of $f(A, z)$ about $\E[f(A, z)\mid A]$ can also be
established using concentration of Lipschitz functions of Gaussians. Indeed,
since $\mathcal{K} \subseteq \ensuremath{\mathbb{S}}^{N-1}$, for each $A$ it holds that $f(A, z)$
is Lipschitz in $z$. In fact, one can show that ``for most'' $A$, $f(A, z)$ is
``nearly'' $1$-Lipschitz.
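One brief way to see this: for any $z, z' \in \ensuremath{\mathbb{R}}^{m}$,
\begin{align*}
f(A, z) - f(A, z') %
\leq \sup_{x \in \mathcal{K}} \ip{Ax, z - z'} %
\leq \sup_{x \in \mathcal{K}} \|Ax\|_{2} \, \|z - z'\|_{2},
\end{align*}
and likewise with the roles of $z$ and $z'$ exchanged; on the event
$\mathcal{A}$, the right-hand side is at most $(1 + \delta)\|z - z'\|_{2}$.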
\end{rmk}
\subsection{Proofs for constrained Lasso sensitivity}
\label{sec:proofs-constr-lasso}
\subsubsection{Suboptimal choice of $\tau$}
\label{sec:subopt-choice-tau}
The first result required to prove \autoref{thm:ls-instability} concerns the
case where {$(\mathrm{LS}_{\tau})$} is controlled by a parameter that is too large. Under mild
regularity assumptions on the mapping $A$, we show that this underconstrained
problem cannot recover even the least-squares proximal denoising error rate in
the limiting low-noise regime. The second result of this section concerns the
situation where $\tau$ is too small, $\tau < \tau^{*}$. In this overconstrained
problem, the ground truth does not lie in the feasible set and one expects this
to be detrimental to recovery performance. We confirm this intuition
irrespective of the assumptions on the measurement matrix $A$.
\begin{lem}[Underconstrained {$(\mathrm{LS}_{\tau})$}]
\label{lem:ls-instability-uc}
Let $A \in \ensuremath{\mathbb{R}}^{m\times N}$ and assume that $\dim(\Null (A)) >
0$. Given $x_{0} \in \ensuremath{\mathbb{R}}^{N}, \eta > 0$ and $z \in \ensuremath{\mathbb{R}}^{m}$ with
$z_{i} \ensuremath{\overset{\text{iid}}{\sim}} \mathcal{N}(0, 1)$, let $y := Ax_{0} + \eta z$. Suppose
$\tau > \|x_{0}\|_1$. Almost surely on the realization of $z$,
\begin{align*}
\lim_{\eta \to 0} \hat L (\tau; x_{0}, A, \eta z) = \infty.
\end{align*}
\end{lem}
\begin{proof}[Proof of {\autoref{lem:ls-instability-uc}}]
Define $\rho := \tau - \tau^{*}$, where $\tau^{*} := \|x_{0}\|_1$. For
simplicity, first assume $\Span (A) = \ensuremath{\mathbb{R}}^{m}$. There exists $\zeta \in
\ensuremath{\mathbb{R}}^{N}$ such that $A\zeta = z$, and so $A(x_{0} + \eta \zeta) = Ax_{0} +
\eta z = y$. Moreover, if $\eta < \rho \|\zeta\|_{1}^{-1}$ then $x_{0} + \eta
\zeta \in \tau B_{1}^{N}$. In particular, $\xi := x_{0} + \eta \zeta$ solves
{$(\mathrm{LS}_{\tau})$}, because it is feasible and achieves the lowest possible objective value
for {$(\mathrm{LS}_{\tau})$}. Notice $\|\zeta\|_{1} < \infty$ almost surely, so for any
realization of $z$, $\eta < \rho \|\zeta\|_{1}^{-1}$ holds for all $\eta$
sufficiently small. Specifically, we have constructed $\xi$ solving {$(\mathrm{LS}_{\tau})$}, and
lying on the interior of $\tau B_{1}^{N}$. Consequently, almost surely there
is $\nu \in \Null (A)$ so that $\xi + \nu \in \tau B_{1}^{N}$ and still $A
(\xi + \nu) = y$. Scale $\nu$ if necessary so that $\|\xi + \nu\|_{1} \in
[\frac{1}{2}(\tau + \tau^{*}), \tau]$. Then, almost surely on the realization
of $z$,
\begin{align*}
\hat L (\tau; x_{0}, A, \eta z) %
&\geq \eta^{-2} \|\xi + \nu - x_{0}\|_{2}^{2} %
\\
&\geq \frac{1}{N\eta^{2}} \|\xi + \nu - x_{0} \|_{1}^{2} %
\\
&\geq \frac{\rho^{2}}{4N \eta^{2}} %
\xrightarrow{\eta \to 0} \infty.
\end{align*}
The case $\Span(A) \neq \ensuremath{\mathbb{R}}^{m}$ is similar. This case is interesting only
when $z \not\in \Span(A)$; otherwise, we argue
as above. In this setting, define $P$ to be the projection onto the range of
$A$ with $P^{\perp} = (I - P)$ being its orthogonal component. We may
re-write the objective of {$(\mathrm{LS}_{\tau})$} as
\begin{align*}
\|y - Ax\|_{2}^{2} %
& = \|(P + P^{\perp})(y - Ax) \|_{2}^{2} %
\\
& = \|P(y - Ax) + P^{\perp} y\|_{2}^{2} %
\\
& = \|\tilde y - Ax\|_{2}^{2} + \|P^{\perp} y\|_{2}^{2}
\end{align*}
where $\tilde y := Py$. Therefore, when $z \not \in \Span (A)$, solving {$(\mathrm{LS}_{\tau})$}
is equivalent to solving
\begin{align}
\argmin \left\{ \|Py - Ax\|_{2} : \|x\|_{1} \leq \tau\right\}.
\tag{$\star$}
\end{align}
By construction, $\tilde y = Py \in \Rng(A)$, so we may apply the same
argument as above to the program $(\star)$, implying
\begin{align*}
\lim_{\eta \to 0}\hat L (\tau; x_{0}, A, \eta z) = \infty.
\end{align*}
\end{proof}
\begin{lem}[Overconstrained {$(\mathrm{LS}_{\tau})$}]
\label{lem:ls-instability-oc}
Fix $\tau < \tau^{*}$. Almost surely on the realization $z$,
\begin{align*}
\lim_{\eta \rightarrow 0} \hat{L} \left(\tau ; x_{0}, A, \eta z\right)%
= \infty.
\end{align*}
\end{lem}
\begin{proof}[{Proof of \autoref{lem:ls-instability-oc}}]
Let $\rho := \tau^{*} - \tau > 0$. For any solution $\xi$ to {$(\mathrm{LS}_{\tau})$}, one has
\begin{align*}
\eta^{-2} \|\xi - x_{0} \|_{2}^{2} %
\geq \frac{\rho^{2}}{N \eta^{2}}. %
\end{align*}
By definition, the desired result follows immediately:
\begin{align*}
\hat L (\tau; x_{0}, A, \eta z) %
\geq \eta^{-2} \|\xi - x_{0}\|_{2}^{2} %
\geq \frac{\rho^{2}}{N \eta ^{2}} %
\xrightarrow{\eta \to 0} \infty.
\end{align*}
\end{proof}
\subsubsection{Uniform control over noise scales}
\label{sec:uniform-control-over-noise-scales}
In this section, we control {$(\mathrm{LS}_{\tau})$} in the optimally tuned setting, uniformly over
the noise scale $\eta$. Specifically, for any $x_{0} \in \Sigma_{s}^{N}$ we
control the expected error of recovery for {$(\mathrm{LS}_{\tau})$} uniformly over the noise scale
$\eta > 0$. The results of \autoref{sec:control-condl-gp} are crucial for this
purpose.
\begin{prop}[Uniform over noise scale]
\label{prop:unif-noise-scale}
Let $0 \leq s < N < \infty$ be integers and let $m \in \ensuremath{\mathbb{N}}$. Let
$A \in \ensuremath{\mathbb{R}}^{m \times N}$ be a normalized $K$-subgaussian matrix, and fix
$\delta, \varepsilon > 0$. Suppose that $y = Ax_{0} + \eta z$ for $\eta > 0$
and $z \in \ensuremath{\mathbb{R}}^{m}$ with $z_{i} \ensuremath{\overset{\text{iid}}{\sim}} \mathcal{N}(0, 1)$. With probability
at least $1 - \varepsilon$ on the realization of $A$, there exist constants
$C_{\delta}, C_{\varepsilon} > 0$ so that if
\begin{align*}
m > C_{\varepsilon} \delta^{-2} \tilde K^{2} s \log \left(\frac{N}{2s}\right),
\end{align*}
then for all $\eta > 0$:
\begin{align*}
\E \left[ \|\hat x - x_{0} \|_{2}^{2} \mid A \right] %
&\leq C_{\delta} \eta^{2} s \log \left(\frac{N}{2s}\right). %
\end{align*}
where $\hat x = \hat x(\tau^{*})$ solves {$(\mathrm{LS}_{\tau})$} with
$\tau = \tau^{*} := \|x_{0}\|_{1}$.
\end{prop}
\begin{proof}[Proof of {\autoref{prop:unif-noise-scale}}]
If $s = 0$, the result holds trivially as, by construction,
$\|\hat x - x_{0}\|_{2} = 0$ almost surely. Suppose $s \geq 1$. By definition
of $\hat x$, where $h := \hat x - x_{0}$,
\begin{align*}
\|A \hat x - y\|_{2}^{2} \leq \|A x_{0} - y\|_{2}^{2} %
\quad \implies \quad %
\|Ah\|_{2}^{2} \leq 2 \eta \ip{A h, z}.
\end{align*}
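To see the implication, substitute $y = Ax_{0} + \eta z$ into the first
inequality and expand the square:
\begin{align*}
\|Ah - \eta z\|_{2}^{2} \leq \|\eta z\|_{2}^{2} %
\quad \iff \quad %
\|Ah\|_{2}^{2} - 2\eta \ip{Ah, z} \leq 0.
\end{align*}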
\noindent\textbf{Step 1:} Lower bound $\|Ah\|_{2}$ with high
probability. Note that $\|Aw\|_{2} = \|w\|_{2} \|A \hat w\|_{2}$ for
$w \neq 0$ where $\hat w := w / \|w\|_{2}$. By
\autoref{lem:corollary-lmpv-thm1.4}, there is an event $\mathcal{A}_{1}$ with
$\mathbb{P}(\mathcal{A}_{1}) \geq 1 - \varepsilon /2$ on which
\begin{align*}
\sup_{x \in \mathcal{K}_{4s}^{N}} \left|\|Ax\|_{2} - \|x\|_{2}\right| %
\leq \sup_{x \in \mathcal{L}_{4s}} \left|\|Ax\|_{2} - \|x\|_{2}\right| %
< \delta_{1} %
\end{align*}
if $m$ satisfies
\begin{align}
\label{eq:unif-m-lb1}
m %
> 16 C_{\varepsilon}' \delta_{1}^{-2} \tilde K^{2}
s\log\left(\frac{N}{2s}\right).
\end{align}
Specifically, $h \in \mathcal{J}_{4s}^{N}$ by
\autoref{lem:descent-cone-condition}, meaning
$\hat h \in \mathcal{J}_{4s}^{N} \cap \ensuremath{\mathbb{S}}^{N-1} \subseteq
\mathcal{K}_{4s}^{N}$ if $h \neq 0$. So, conditioning on $\mathcal{A}_{1}$
and enforcing~\eqref{eq:unif-m-lb1}, one has
\begin{align}
\label{eq:unif-over-noise-scales-1}
\|A h\|_{2}^{2} %
& = \|h\|_{2}^{2} \|A\hat h\|_{2}^{2} %
\geq \|h\|_{2}^{2} \left(\|\hat h\|_{2} - \delta_{1}\right)^{2}
\geq (1 - \delta_{1})^{2}\|h\|_{2}^{2}.
\end{align}
The inequality $(1 - \delta_{1})^{2}\|h\|_{2}^{2} \leq \|Ah\|_{2}^{2}$
holds also for $h = 0$.
\noindent\textbf{Step 2a:} Upper bound $\ip{Ah, z}$. Again
using that $h \in \mathcal{J}_{4s}^{N}$,
\begin{align}
\label{eq:unif-over-noise-scales-2}
2\eta \ip{Ah, z} %
\leq 2\eta\|h\|_{2} \sup_{\hat h \in \mathcal{K}_{4s}^{N} \cap \ensuremath{\mathbb{S}}^{N-1}} %
\ip{A\hat h, z}. %
\end{align}
\textbf{Step 2b:} Control the latter quantity in expectation. By
\autoref{lem:conditional-gp-expectation}, there is $C_{\varepsilon}'' > 0$ so
that for
\begin{align}
\label{eq:unif-m-lb2}
m > 4 C_{\varepsilon}''\delta^{-2}\tilde K^{2} s \log \left(\frac{N}{4s}\right),
\end{align}
there is an event $\mathcal{A}_{2}$ holding
with probability at least $1 - \varepsilon/2$, on which there is a constant
$C_{\delta} > 0$ such that
\begin{align*}
\E \left[ \sup_{\hat h \in \mathcal{K}_{4s}^{N}} \ip{A\hat h, z} \mid A \right]%
\leq 2C_{\delta} \sqrt{s \log \left(\frac{N}{2s}\right)}.
\end{align*}
%
\textbf{Step 3:} Now combine steps 1 and 2a. Assume $m$ simultaneously
satisfies~\eqref{eq:unif-m-lb1} and~\eqref{eq:unif-m-lb2}, and condition on
$\mathcal{A}_{1} \cap \mathcal{A}_{2}$, which holds with probability at least
$1- \varepsilon$. Combining~\eqref{eq:unif-over-noise-scales-1}
and~\eqref{eq:unif-over-noise-scales-2}, and letting
$\delta_{1} := 1 - 2^{-1/2}$ gives
\begin{align*}
\|h\|_{2} %
\leq 4\eta \sup_{\hat h \in \mathcal{K}_{4s}^{N}} \ip{A \hat h, z}.
\end{align*}
Take expectation of both sides and bound the quantity by applying step
2b. This yields,
\begin{align*}
\E \left[ \|h\|_{2}\mid A\right] %
\leq 8 C_{\delta} \eta \sqrt{s \log \left(\frac{N}{2s}\right)}.
\end{align*}
Note that by setting $C_{\varepsilon} %
:= \max \{ \frac{32 C_{\varepsilon}'}{3 - 2 \sqrt 2} , 4
C_{\varepsilon}''\}$, it suffices to require
\begin{align*}
m %
> C_{\varepsilon} \delta^{-2} \tilde K^{2} s \log \left( \frac{N}{2s}\right).
\end{align*}
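The first term in the maximum accounts for~\eqref{eq:unif-m-lb1}: with $\delta_{1} = 1 - 2^{-1/2}$,
\begin{align*}
\delta_{1}^{2} = \frac{(\sqrt 2 - 1)^{2}}{2} = \frac{3 - 2 \sqrt 2}{2},
\qquad \text{so} \qquad
16 C_{\varepsilon}' \delta_{1}^{-2} = \frac{32 C_{\varepsilon}'}{3 - 2 \sqrt 2};
\end{align*}
the second accounts for~\eqref{eq:unif-m-lb2}, using $\log(N/4s) \leq \log(N/2s)$ (and, for the first bound, that $\delta^{-2} \geq 1$ whenever $\delta \leq 1$).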
Alternatively, one may also apply a standard fact for subgaussian random
variables. Recall as in~\eqref{eq:borell-norm},
$\vertiii{X_{w}}_{K_{4s}^{N}} := \sup_{w \in K_{4s}^{N}} X_{w}$. Then
$\left\| \vertiii{X_{w}}_{K_{4s}^{N}} - \E \vertiii{X_{w}}_{K_{4s}^{N}}
\right\|_{\Psi_{2}} \leq \sigma_{K_{4s}^{N}}^{2}$ by \autoref{thm:borell-tis},
and so there is an absolute constant $C > 0$ such that on $\mathcal{A}_{1}$,
\begin{align*}
&\left\|\vertiii{X_{w}}_{K_{4s}^{N}} - \E_{z} \vertiii{X_{w}}_{K_{4s}^{N}} %
\right\|_{L^{2}}^{2} %
\\
&= \E_{z} \vertiii{X_{w}}_{K_{4s}^{N}}^{2} - \left(
\E_{z} \vertiii{X_{w}}_{K_{4s}^{N}}\right)^{2} %
\\
& \leq C \sigma_{K_{4s}^{N}}^{2} \leq C (1 + \delta_{1})^{2}.
\end{align*}
where $X_{w} := \ip{Aw, z}$ conditioned on $A$. In particular, choosing
instead $\delta_{1} := 1 - 2^{-1/4}$,
\begin{align*}
\E \left[ \|h\|_{2}^{2}\mid A\right] %
\leq 8 \eta^{2} \left(4 C_{\delta}^{2} s \log \frac{N}{2s} %
+ C\sqrt 2 \right).
\end{align*}
Rearranging, and observing that the right hand term in parentheses is small
relative to the left hand term, we may obtain a new absolute constant
$C_{\delta} > 0$ depending only on $\delta$ such that
\begin{align*}
\eta^{-2} \E \left[ \|\hat x - x_{0}\|_{2}^{2} \mid A \right] %
\leq C_{\delta} s \log \frac{N}{2s}.
\end{align*}
\end{proof}
\begin{rmk}
In the proof above, no attempt was made to optimize constants. In fact,
several simplifications were made for clarity of presentation, which in turn
resulted in larger than necessary constants.
\end{rmk}
\begin{rmk}[Uniform control over noise scale and signal class]
Observe that the result above is uniform over noise scale $\eta > 0$ and
signal $x_{0} \in \Sigma_{s}^{N}$. In particular, we could have written
(conditioning on $A$),
\begin{align*}
\sup_{\eta > 0} \sup_{x_{0} \in \Sigma_{s}^{N}}
\hat R (\tau^{*}; x_{0}, A, \eta) %
\leq C_{\delta} s \log \frac{N}{2s}.
\end{align*}
\end{rmk}
\subsubsection{Optimal choice of $\tau$ and phase transition}
\label{sec:optimal-choice-tau}
Here, we synthesize the technical results
of~\autoref{sec:uniform-control-over-noise-scales} to show that, with high
probability on the realization of $A$, {$(\mathrm{LS}_{\tau})$} achieves order-optimal risk in the
limiting low-noise regime when $m$ is sufficiently large and $\tau =
\tau^{*}$.
\begin{lem}[Tuned {$(\mathrm{LS}_{\tau})$}]
\label{lem:ls-instability-ec}
Fix $\delta, \varepsilon > 0$ and let $A \in \ensuremath{\mathbb{R}}^{m\times N}$ be a
normalized $K$-subgaussian matrix. For $s\in \ensuremath{\mathbb{N}}$ fixed with
$0 \leq s \leq m$, suppose $x_{0} \in \Sigma_{s}^{N}$ and $\eta > 0$. If
$m$ satisfies
\begin{align*}
m > C_{\varepsilon}' \delta^{-2} \tilde K^{2} s \log \frac{N}{2s},
\end{align*}
then, with probability at least $1 - \varepsilon$ on the realization $A$,
there exist constants $0 < c_{\delta} < C_{\delta} < \infty$ such that
\begin{align*}
c_{\delta} \cdot s \log\left( \frac{N}{s}\right) %
&\leq \lim_{\eta \to 0}
\sup_{x_{0} \in \Sigma_{s}^{N}}
\hat{R} (\tau^{*}; x_{0}, A, \eta) %
\\
& \leq C_{\delta} \cdot s\log\left( \frac{N}{2s} \right).
\end{align*}
\end{lem}
\begin{proof}[{Proof of \autoref{lem:ls-instability-ec}}]
For simplicity of the proof, we assume $\tilde K^{2} = 1$.
\noindent\textbf{Upper bound:} Given $\delta, \varepsilon_{1} > 0$, assume
\begin{align*}
m > C_{\varepsilon_{1}} \delta^{-2} s \log \frac{N}{2s}.
\end{align*}
With probability at least $1 - \varepsilon_{1}$ on the realization of $A$, by
\autoref{prop:unif-noise-scale}, for any $x_{0} \in \Sigma_{s}^{N}$ and
$\eta > 0$,
\begin{align*}
\hat R(\tau^{*}; x_{0}, A, \eta) %
\leq C_{\delta} \cdot s \log \frac{N}{2s}.
\end{align*}
In particular,
\begin{align*}
\lim_{\eta \to 0} \sup_{x_{0} \in \Sigma_{s}^{N}}
\hat R(\tau^{*}; x_{0}, A, \eta) %
\leq C_{\delta} \cdot s \log \frac{N}{2s}.
\end{align*}
\noindent \textbf{Lower bound:} From \autoref{coro:opt-risk-near-equiv}
and~\cite[Theorem 1]{candes2013well},
\begin{align*}
\sup_{x_{0} \in \Sigma_{s}^{N}}\hat R(\tau^{*}; x_{0}, A, \eta) %
& \geq \inf_{x_{*}} \sup_{x_{0} \in \Sigma_{s}^{N}} \eta^{-2}
\E \|x_{*} - x_{0} \|_{2}^{2}
\\
& \geq \frac{C_{1} N}{\|A\|_{F}^{2}}s \log\left(\frac{N}{s}\right).
\end{align*}
In particular,
\begin{align*}
\lim_{\eta \to 0} \sup_{x_{0} \in \Sigma_{s}^{N}}
\hat R(\tau^{*}; x_{0}, A, \eta) %
\geq \frac{C_{1} N}{\|A\|_{F}^{2}}s \log\left(\frac{N}{s}\right).
\end{align*}
Now, $\E \|A\|_{F}^{2} = N$, and $\|A\|_{F}^{2}$ admits subexponential
concentration around its expectation by Bernstein's inequality~\cite[Corollary
2.8.3]{vershynin2018high}. Therefore, with probability at least
$1 - \varepsilon_{2}$ on the realization of $A$, there is a constant
$c_{\delta} > 0$ depending only on $C_{1}$ and $\delta$ such that
\begin{align*}
\lim_{\eta \to 0} \sup_{x_{0} \in \Sigma_{s}^{N}}
\hat R(\tau^{*}; x_{0}, A, \eta) %
\geq c_{\delta} \cdot s \log\left(\frac{N}{s}\right),
\end{align*}
under the condition that
\begin{align*}
m \geq C \delta^{-2}N^{-1} \log \frac{2}{\varepsilon_{2}}.
\end{align*}
\noindent\textbf{Combine:} Finally, set $\varepsilon_{1} = \varepsilon_{2}
= \varepsilon/2$. Under the assumptions on $m$, with probability at least
$1 - \varepsilon$ on the realization of $A$ it holds that
\begin{align*}
c_{\delta} \cdot s \log\left(\frac{N}{s}\right) %
& \leq \lim_{\eta \to 0} \sup_{x_{0} \in \Sigma_{s}^{N}}
\hat R(\tau^{*} ; x_{0}, A, \eta) %
\\
& \leq C_{\delta} \cdot s \log \frac{N}{2s}.
\end{align*}
\end{proof}
We conclude this section with the proof of \autoref{thm:ls-instability} which
combines \autoref{lem:ls-instability-ec} and the results of
\autoref{sec:subopt-choice-tau}. Namely, even when $m$ is sufficiently large,
{$(\mathrm{LS}_{\tau})$} admits order-optimal risk in the limiting low-noise regime only when the
governing parameter is chosen optimally.
\begin{proof}[Proof of {\autoref{thm:ls-instability}}]
This result follows immediately from the lemmas of this section. Indeed, a
direct application of \autoref{lem:ls-instability-ec} gives
\begin{align*}
c_{\delta} \cdot s \log\left(\frac{N}{s}\right) %
& \leq \lim_{\eta \to 0} \sup_{x \in \Sigma_{s}^{N}}
\hat R(\tau^{*} ; x, A, \eta) %
\\
& \leq C_{\delta} \cdot s \log \frac{N}{2s}.
\end{align*}
Otherwise,
$\tau \neq \tau^{*}$. First, if $\tau < \tau^{*}$, then
\autoref{lem:ls-instability-oc} immediately implies
\begin{align*}
\lim_{\eta \to 0} \hat L (\tau; x_{0}, A, \eta z) = \infty.
\end{align*}
Otherwise, assume $\tau > \tau^{*}$. In order to apply
\autoref{lem:ls-instability-uc}, $A$ must satisfy $\dim(\Null(A)) > 0$, which
holds trivially, as $m < N$. In particular, \autoref{lem:ls-instability-uc}
implies almost surely on $(A, z)$,
\begin{align*}
\lim_{\eta \to 0} \hat L (\tau; x_{0}, A, \eta z) = \infty.
\end{align*}
\end{proof}
\begin{rmk}
The proof of \autoref{thm:ls-instability} proceeds whether $z$ is
deterministic (say with fixed norm $\|z\|_{2} = \sqrt m$) or has entries
$z_{i} \ensuremath{\overset{\text{iid}}{\sim}} \mathcal{N}(0,1)$. We have presented it this way so that the
assumption is consistent with the implicit assumption on the noise for the
result concerning $\hat R(\tau^{*}; x_{0}, A, \eta)$.
\end{rmk}
\subsection{Proofs for basis pursuit suboptimality}
\label{sec:proofs-basis-pursuit}
\subsubsection{Suboptimal regime for underconstrained basis pursuit}
\label{sec:subopt-regime-underc}
This section contains the proof for \autoref{lem:uc-bp-subgaus} in
\autoref{sec:underc-param-inst}.
\begin{proof}[Proof of {\autoref{lem:uc-bp-subgaus}}]
It suffices to prove this result for the best choice of $\sigma$ and any
$x \in \Sigma_{s}^{N}$. In particular, choose $x_{0} \in \Sigma_{s}^{N}$
having at least one non-zero entry, and for which the non-zero entries have
magnitude satisfying $|x_{0,j}| \geq C\eta \sqrt{m}$,
$j \in \supp(x_{0}) \subseteq [N]$. For this choice of $x_{0}$, let
$y = Ax_{0} + \eta z$ and define the event
$\mathcal{F} := \{ \|y\|_{2} \leq \sigma \}$.
For any $\sigma \geq \eta \sqrt m$, re-choose $x_{0} \in \Sigma_{s}^{N}$ if
necessary so that moreover $\mathbb{P}(\mathcal{F}^{C}) \geq
0.99$. Restricting to $\mathcal{F}^{C}$, the solution to {$(\mathrm{BP}_{\sigma})$} satisfies, by
the KKT conditions~\cite{bertsekas2003convex},
\begin{align*}
\eta^{2} m %
\leq \sigma^{2} %
= \|Ah \|_{2}^{2} - 2 \eta \ip{Ah, z} + \eta^{2} \|z\|_{2}^{2}.
\end{align*}
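A sketch of where this identity comes from: on $\mathcal{F}^{C}$ one has $\|y\|_{2} > \sigma$, so $x = 0$ is infeasible for {$(\mathrm{BP}_{\sigma})$} and the constraint must be active at the solution; writing $h$ for the difference between the {$(\mathrm{BP}_{\sigma})$} solution and $x_{0}$ and using $y = A x_{0} + \eta z$,
\begin{align*}
\sigma^{2} = \|A h - \eta z\|_{2}^{2}
= \|Ah\|_{2}^{2} - 2 \eta \ip{A h, z} + \eta^{2} \|z\|_{2}^{2},
\end{align*}
while $\sigma \geq \eta \sqrt m$ by assumption.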
By \autoref{lem:corollary-lmpv-thm1.4}, it holds with probability at least
$1 - \varepsilon$ on the realization of $A$ that
\begin{align*}
(1 + \delta)^{2} \|h\|_{2}^{2} %
\geq \|Ah \|_{2}^{2} %
\geq \eta^{2}(m - \|z\|_{2}^{2}) + 2\eta \ip{Ah, z}
\end{align*}
Define the event
$\mathcal{Z}_{\leq} := \{ \|z\|_{2}^{2} \leq m - 2 \sqrt m\}$ and observe
that further restricting to $\mathcal{F}^{C} \cap \mathcal{Z}_{\leq}$ thereby
gives
\begin{align*}
(1 + \delta)^{2} \|h\|_{2}^{2} %
& \geq 2 \eta^{2}\sqrt m - 2 \eta \|h\|_{2} f(A,z)
\\
& \geq 2 \eta^{2}\sqrt m - \frac{1}{2} \|h\|_{2}^{2}
- 2\eta^{2} f^{2}(A,z),
\end{align*}
where $f(A, z)$ is defined as in \autoref{lem:conditional-gp-expectation} with
$\mathcal{K} = \mathcal{K}_{2s}^{N}\cap \ensuremath{\mathbb{S}}^{N-1}$. Indeed, where
$\hat h = h / \|h\|_{2}$, one has $\ip{A\hat h, z} \leq f(A,z)$ since
$\hat h \in \mathcal{K}_{2s}^{N}\cap \ensuremath{\mathbb{S}}^{N-1}$ with high probability on the
realization of $A$; the last inequality in the display above is Young's
inequality $2ab \leq \tfrac{1}{2} a^{2} + 2 b^{2}$, applied with $a = \|h\|_{2}$
and $b = \eta f(A,z)$. This yields the following bound on the risk:
\begin{align}
\label{eq:uc-bp-subgaus-1}
\tilde R(\sigma; &\, x_{0}, A, \eta)
\nonumber
\\
&\geq \eta^{-2} \E_{z} \left[\|h\|_{2}^{2} \cdot
\1 \left( \mathcal{F}^{C} \cap \mathcal{Z}_{\leq}\right)\right] %
\nonumber
\\
& \geq C_{\delta} \E_{z}\left[\big(\sqrt m - f^{2}(A,z)\big)
\cdot \1 \left( \mathcal{F}^{C} \cap \mathcal{Z}_{\leq}\right) \right]
\nonumber
\\
& = C_{\delta} \sqrt m
\mathbb{P}\left(\mathcal{F}^{C} \cap \mathcal{Z}_{\leq}\right)
- C_{\delta} \E_{z} \left[f^{2}(A,z)\cdot
\1\left(\mathcal{F}^{C} \cap \mathcal{Z}_{\leq}\right)\right]
\nonumber
\\
&\geq C_{\delta} \sqrt m
\mathbb{P}\left(\mathcal{F}^{C} \cap \mathcal{Z}_{\leq}\right)
- C_{\delta} \E_{z} f^{2}(A,z)
\end{align}
Finally, we bound $\E_{z} f^{2}(A,z) = \E [ f^{2}(A,z) \mid A]$. With high
probability on the realization of $A$:
\begin{align*}
\E [ f^{2}(A,z) \mid A] \leq C \E[f(A, z) \mid A]^{2} \leq C_{\delta}s \log (N/s).
\end{align*}
Above, we have first used~\cite[Exercise~7.6.1]{vershynin2018high} followed by
an application of~\autoref{lem:conditional-gp-expectation}. Another way to see
this would be through the successive application
of~\autoref{rmk:fAz-subgaussianity}
and~\autoref{lem:conditional-gp-expectation}, noting that
$f(A, z) - \E[f(A, z)\mid A]$ is a centered subgaussian random variable.
Consequently, using that
$\mathbb{P}(\mathcal{F}^{C} \cap \mathcal{Z}_{\leq}) \geq
C$,~\eqref{eq:uc-bp-subgaus-1} becomes
\begin{align}
\label{eq:uc-bp-subgaus-2}
\tilde R(\sigma; x_{0}, A, \eta) %
\geq C_{\delta} \left(\sqrt m - s\log(N/s)\right).
\end{align}
The result follows trivially from the definition of $\sup$ and by the initial
assumption on $m$.
\end{proof}
\subsubsection{Suboptimal regime for overconstrained basis pursuit}
\label{sec:overc-param-inst}
In this section, we show that $\tilde R(\sigma; x_{0}, A, \eta)$ is suboptimal
for $\sigma \leq \eta \sqrt m$. To the chagrin of the beleaguered reader, the
proofs in this section require several technical lemmas, some assumptions and
notation. As much as possible, we attempt to relegate to the appendix those
details that, we believe, do not aid the reader's intuition and are particularly
technical.
The flow of this section will proceed as follows. After establishing required
preliminary details, we state and prove results concerning the ability of {$(\mathrm{BP}_{\sigma})$}
to recover the $0$ vector from noisy random measurements. The results exhibit a
regime in which $\tilde R(\sigma; x_{0}, A, \eta)$ may be lower-bounded in the
case where $\sigma = \eta \sqrt m$. Then, we proceed by showing that {$(\mathrm{BP}_{\sigma})$}
performs no better if $\sigma$ is allowed to be smaller. In particular, we
obtain lower bounds on $\tilde R(\sigma; x_{0}, A, \eta)$ for
$\sigma \leq \eta \sqrt m$. Motivation for this latter result is readily
observed by a re-phrasing of the projection lemma in
\autoref{prop:oc-solution-ordering}.
\paragraph{Preliminaries.}
For $z \in \ensuremath{\mathbb{R}}^{m}$ and $\sigma > 0$ define
$F(z; \sigma) := \{ q \in \ensuremath{\mathbb{R}}^{m} : \|q - z\|_{2}^{2} \leq \sigma^{2}\}$ and
denote $F := F(z; \sqrt m)$. For a matrix $A \in \ensuremath{\mathbb{R}}^{m \times N}$, denote
$B_{1, A} := \{ Ax \in \ensuremath{\mathbb{R}}^{m} : x \in B_{1}^{N}\}$, and define the gauge of
$B_{1, A}$ by
\begin{align}
\|q \|_{1, A} %
&:= \inf \{ \|x\|_{1} : Ax = q, x \in \ensuremath{\mathbb{R}}^{N} \} %
\nonumber
\\
\label{eq:defn-gauge-B1A}
& = \inf \{ \lambda > 0 : q \in \lambda B_{1, A}\}.
\end{align}
Recall a gauge is nonnegative, positively homogeneous, convex and vanishes at
the origin. Moreover, note that $B_{1, A}$ is a random set, and so
$\|\cdot\|_{1, A}$ is random. Now, for a matrix $A \in \ensuremath{\mathbb{R}}^{m\times N}$,
$z \in \ensuremath{\mathbb{R}}^{m}$ and $\sigma > 0$, define the program
\begin{align}
\label{eq:bq}
\tilde q(\sigma; A, z) %
:= \argmin \big\{ \|q\|_{1, A} : q \in F(z; \sigma) \big\},
\tag{$\mathrm{BQ}_{\sigma}$}
\end{align}
where $\|\cdot\|_{1, A}$ is defined as in~\eqref{eq:defn-gauge-B1A}. Where
clear, we omit notating the dependence of $\tilde q(\sigma; A, z)$ on $A$ and
$z$, writing simply $\tilde q(\sigma)$.
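It may help to record how {$(\mathrm{BQ}_{\sigma})$} relates to {$(\mathrm{BP}_{\sigma})$} in the setting used below, where $x_{0} = 0$ and $\eta = 1$, so that the data are $y = z$ (a sketch, assuming minimizers exist):
\begin{align*}
\min \big\{ \|x\|_{1} : \|Ax - z\|_{2} \leq \sigma \big\}
= \min \big\{ \|q\|_{1, A} : q \in F(z; \sigma) \big\},
\end{align*}
and if $\tilde x(\sigma)$ solves {$(\mathrm{BP}_{\sigma})$} then $\tilde q(\sigma) = A \tilde x(\sigma)$ attains the right-hand minimum; this identification is used explicitly in the proof of \autoref{lem:bp-oc-tuned-zero}.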
With the above notation, we define an admissible ensemble. The elements of an
admissible ensemble will be used to state the main lemma,
\autoref{lem:geometric-lemma}. The technical arguments characterizing their
properties appear in \autoref{sec:techn-lemm-supp}.
\begin{defn}[Admissible ensemble]
Let $0 \leq s < N$ be integers, and let $m : \ensuremath{\mathbb{N}} \to \ensuremath{\mathbb{N}}$ be an
integer-valued function mapping $N \mapsto m(N)$ such that
$\lim_{N\to \infty} m(N) / N = \gamma \in (0, 1)$. For any
$0 < \theta < \min\{1-\gamma, \gamma\}$, define $N_{\theta} \geq 1$ to be the
least integer such that for all $N \geq N_{\theta}$,
\begin{align*}
\left|\frac{m(N)}{N} - \gamma\right| < \theta.
\end{align*}
Where $N \geq 2$, let $A(N)$ be a family of normalized $K$-subgaussian
matrices $A = A(N) \in \ensuremath{\mathbb{R}}^{m(N) \times N}$. Define
$N_{*} := \max \{ N_{\theta}, N_{\text{RIP}}\}$ where $N_{\mathrm{RIP}} \geq 1$ is the
least positive integer such that for all $N \geq N_{\mathrm{RIP}}$,
\begin{align*}
m(N) \geq C_{\varepsilon} \delta^{-2} \tilde K^{2} s \log \frac{2N}{s}.
\end{align*}
where $\delta, \varepsilon > 0$ are fixed in advance.
Let $z = z(N) \in \ensuremath{\mathbb{R}}^{m(N)}$ with $z_{i} \ensuremath{\overset{\text{iid}}{\sim}} \mathcal{N}(0, 1)$. Define
$F = F(z; \sqrt{m(N)}) = \{ q \in \ensuremath{\mathbb{R}}^{m(N)} : \|q - z\|_{2}^{2} \leq
m(N)\}$ and omit writing explicitly its dependence on $N$, unless
necessary. Define $\alpha_{1} = \alpha_{1}(N) := a_{1} m(N)^{1/4}$ for some
dimension-independent constant $a_{1} > 0$;
$\lambda = \lambda(N) := L \sqrt{\frac{m(N)}{\log N}}$ for some
dimension-independent constant $L > 1$; and
\begin{align*}
K_{1} &= K_{1}(N) := \lambda(N) B_{1, A} \cap \alpha_{1}(N) B_{2}^{m(N)},
\\
K_{2} &= K_{2}(N) := \lambda(N) B_{1, A} \cap \alpha_{2}(N) B_{2}^{m(N)},
\end{align*}
where $0 < \alpha_{2} = \alpha_{2}(N) \leq \alpha_{1}$ will be quantified in
\autoref{prop:gw-ub-1}. Lastly, define the following random processes. For
$g \in \ensuremath{\mathbb{R}}^{m}$ with $g_{i} \ensuremath{\overset{\text{iid}}{\sim}} \mathcal{N}(0, 1)$, let
\begin{align*}
X_{1} &:= \sup_{x \in K_{1}}|\ip{x, g}|,
&
X_{2} &:= \sup_{x \in K_{2}} |\ip{x, g}|.
\end{align*}
Thus we define an $(s, m(N), N, \delta, \varepsilon, \theta)$-admissible
ensemble as the collection $(A(N), z(N), K_{1}(N), K_{2}(N), X_{1}, X_{2})$
satisfying the conditions just described, defined for all $N \geq N_{*}$. This
collection will generally be abbreviated to
$(A, z, K_{1}, K_{2}, X_{1}, X_{2})$ where clear.
\end{defn}
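For a concrete instance of this bookkeeping (an illustration only; the constants $a_{1}$ and $L$ remain those provided by \autoref{lem:geometric-lemma}): taking $m(N) = \lceil \gamma N \rceil$ gives $|m(N)/N - \gamma| < 1/N$, so that $N_{\theta} \leq \lceil 1/\theta \rceil$, and then
\begin{align*}
\alpha_{1}(N) = a_{1} \lceil \gamma N \rceil^{1/4},
\qquad
\lambda(N) = L \sqrt{\frac{\lceil \gamma N \rceil}{\log N}}.
\end{align*}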
Where possible, we simplify notation by omitting explicit dependence on
arguments. For example, if $N$ is fixed, then we may refer to $m(N)$ simply as
$m$. Note, however, that for any $N \geq 2$, $K_{1}$ and $K_{2}$ always depend
on $\alpha_{1} = \alpha_{1}(N)$ and $\alpha_{2} = \alpha_{2}(N)$,
respectively. Further observe that $K_{1}$ and $K_{2}$ are random, as they
depend on the matrix $A$. Observe that $N_{\theta}$ depends on $\theta$,
$\gamma$ and $m(\cdot)$, and omit writing explicitly its dependence on the
latter two; we assume $m(\cdot)$ and $\gamma$ are fixed in advance. Requiring
$N \geq N_{\text{RIP}}$ is the key condition on $m(N)$ so that
\autoref{lem:corollary-lmpv-thm1.4} holds. Clearly, $N_{\mathrm{RIP}}$ depends
on the parameters $\delta, \varepsilon, K, s$ and the function $m(\cdot)$; for
simplicity of presentation we omit writing explicitly its dependence on these
parameters. Finally, note that the parameters on which $N_{*}$ depends are
exactly those on which $N_{\theta}$ and $N_{\text{RIP}}$ depend.
\begin{prop}
\label{prop:oc-solution-ordering}
Let $z \in \ensuremath{\mathbb{R}}^{m}$ and $A \in \ensuremath{\mathbb{R}}^{m \times N}$ be a normalized
$K$-subgaussian matrix with $1 \leq m < N$. If
$0 < \sigma_{1} < \sigma_{2} < \infty$ and $\tilde q(\sigma)$ solves {$(\mathrm{BQ}_{\sigma})$}
then almost surely on $(A, z)$,
\begin{align*}
\|\tilde q (\sigma_{1}) \|_{2} \geq \|\tilde q (\sigma_{2})\|_{2}.
\end{align*}
\end{prop}
\begin{proof}[Proof of {\autoref{prop:oc-solution-ordering}}]
The result follows by~\autoref{cor:projection-lemma}, because
$\|\cdot\|_{1, A}$ is a gauge.
\end{proof}
\paragraph{The geometric lemma.}
Next, we state a lemma with a geometric flavour, \autoref{lem:geometric-lemma},
which is the main workhorse for proving suboptimality of $\tilde R$ in the
overconstrained setting. It is a generalization of \cite[Lemma
6.2]{berk2020sl1mpc}.
\begin{lem}[Geometric Lemma]
\label{lem:geometric-lemma}
Fix $\delta, \varepsilon_{1}, \varepsilon_{2} > 0$ and
$\theta \in (0, \gamma)$. Given an
$(s, m, N, \delta, \varepsilon_{1}, \theta)$-admissible ensemble, there is a
choice of $a_{1} > 0$ defining $\alpha_{1}(N)$; $L > 1$ defining $\lambda(N)$;
an integer $N_{0} \geq N_{*}$; and absolute constants $p, k > 0$, so that the
following occurs. For all $N \geq N_{0}$, with probability at least
$1 - \varepsilon_{1}$ on the realization of $A$, there is an event
$\mathcal{E} := \mathcal{E}(\varepsilon_{1}, \varepsilon_{2})$ for $z$ on
which
\begin{align*}
&1.~K_{1} \cap F \neq \emptyset, %
&
&2.~K_{2} \cap F = \emptyset, %
\\
&3.~\alpha_{2} > C N^{p}, %
&
&4.~\mathbb{P}(\mathcal{E}) > k.
\end{align*}
Above, $k$ depends on $N_{0}$ and $\varepsilon_{2}$ only; $p$ on
$\delta, \gamma$ and $\theta$ only.
\end{lem}
\begin{proof}[Proof of \autoref{lem:geometric-lemma}]
For constants $0 < C_{2} < C_{1} < \infty$, define the events
\begin{align*}
\mathcal{Z}_{<} &:= \{ \|z\|_{2}^{2} \leq m + C_{1}\sqrt m \}
\\
\mathcal{Z}_{>} &:= \{ \|z\|_{2}^{2} \geq m + C_{2} \sqrt m \}.
\end{align*}
By \autoref{prop:gw-lb-3} and \ref{prop:gw-ub-3}, there is an integer
$N_{0} \geq N_{*}$ (select the larger of the two bestowed by each result),
and respective events, $\mathcal{E}_{1}, \mathcal{E}_{2}$, so that with
probability at least $1 - \varepsilon_{1}$ on the realization of $A$,
\begin{align*}
\mathbb{P}(\mathcal{E}_{1}) %
&\geq \mathbb{P}(\mathcal{Z}_{<}) - \varepsilon_{2} %
& %
\mathbb{P}(\mathcal{E}_{2}) %
&\geq \mathbb{P}(\mathcal{Z}_{>}) - \varepsilon_{2} .
\end{align*}
In particular, for $\mathcal{E} := \mathcal{E}_{1} \cap \mathcal{E}_{2}$,
choose a largest such absolute constant
$k := k(N_{0}, C_{1}, C_{2}, \varepsilon_{2}) > 0$ so that
\begin{align*}
\mathbb{P}(\mathcal{E}) %
= \mathbb{P}(\mathcal{E}_{1} \cap \mathcal{E}_{2}) %
\geq \mathbb{P}(\mathcal{Z}_{<}\cap \mathcal{Z}_{>})
- 2 \varepsilon_{2} %
\geq k.
\end{align*}
As per \autoref{prop:gw-lb-3} and \autoref{prop:gw-ub-3}, conditioning on
$\mathcal{E}$ and letting $N \geq N_{0}$ gives $K_{1} \cap F \neq \emptyset$
and $K_{2} \cap F = \emptyset$ with probability at least $1 - \varepsilon_{1}$
on the realization of $A$, as desired. In this regime, that there exists
$p > 0$ satisfying $\alpha_{2} = \alpha_{2}(N) \geq C N^{p}$ is a consequence
of \autoref{prop:gw-ub-2}. One need simply select the largest $p$ satisfying
for all $N \geq N_{0}$:
\begin{align*}
C N^{p}\sqrt{\log N} \leq C_{\delta, \gamma, L, \theta} N^{d/2}.
\end{align*}
Thus, for all $N \geq N_{0}$, with probability at least $1 - \varepsilon_{1}$ on
the realization of $A$ there exists an event $\mathcal{E}$ for $z$ on which
all four of the desired criteria hold.
\end{proof}
\paragraph{Implications for overconstrained basis pursuit.} Finally, we state
the main results of this section. The first result,
\autoref{lem:bp-oc-tuned-zero}, uses the geometric lemma to show that there
exists a regime in which $\tilde R$ is suboptimal in the setting where
$x_{0} = 0$ and $\sigma = \eta \sqrt m$. From there, we show in
\autoref{lem:bp-oc-zero} that $\tilde R$ is no better if $\sigma$ is any
larger. This is enough to state a maximin suboptimality result for {$(\mathrm{BP}_{\sigma})$}, with
$\sigma$ restricted to $(0, \eta \sqrt m]$, in
\autoref{thm:bp-oc-maximin}. Notably, this result is stronger than the analogous
minimax statement, which necessarily follows from the maximin result.
\begin{lem}[Lower bound $\tilde R(\eta \sqrt m; 0, A, \eta)$]
\label{lem:bp-oc-tuned-zero}
Fix $\delta, \varepsilon, \eta > 0$ and suppose $m : \ensuremath{\mathbb{N}} \to \ensuremath{\mathbb{N}}$
satisfies $m(N)/N \to \gamma \in (0, 1)$. There is $N_{0} \in \ensuremath{\mathbb{N}}$ and an
absolute constant $p > 0$ so that for all $N \geq N_{0}$, if
$A \in \ensuremath{\mathbb{R}}^{m(N) \times N}$ is a normalized $K$-subgaussian matrix, then
with probability at least $1 - \varepsilon$ on the realization of $A$,
\begin{align*}
\tilde R (\eta \sqrt m; 0, A, \eta) \geq C_{\delta, \gamma, K} N^{p}.
\end{align*}
\end{lem}
\begin{proof}[Proof of \autoref{lem:bp-oc-tuned-zero}]
By a simple scaling argument, it suffices to assume $\eta = 1$: when $x_{0} = 0$ the data are $y = \eta z$, and rescaling $x \mapsto x/\eta$ in {$(\mathrm{BP}_{\sigma})$} shows that the solution with parameter $\eta \sqrt m$ and noise $\eta z$ is $\eta$ times the solution with parameter $\sqrt m$ and noise $z$, whence $\tilde R(\eta \sqrt m; 0, A, \eta) = \tilde R(\sqrt m; 0, A, 1)$. Consider an
$(s, m, N, \delta, \varepsilon, \theta)$-admissible ensemble. By
\autoref{lem:geometric-lemma}, there is a choice of $a_{1} > 0$ for
$\alpha_{1}(N)$ and $L > 1$ for $\lambda (N)$, an integer $N_{0} \geq N_{*}$
and absolute constants $k, p > 0$ so that with probability at least
$1 - \varepsilon/2$ on the realization of $A$, there is an event $\mathcal{E}$
for $z$ on which $K_{1} \cap F \neq \emptyset$, $K_{2} \cap F = \emptyset$,
and for which $\mathbb{P}(\mathcal{E}) \geq k$. Where $\tilde q$ solves
{$(\mathrm{BQ}_{\sigma})$}, observe that $\tilde q = A\tilde x(\sqrt m)$ and moreover, by
construction, $\tilde q \in (K_{1} \setminus K_{2}) \cap F$. In particular,
\begin{align*}
\|\tilde q\|_{1, A} \leq \lambda, \qquad %
\alpha_{2} \leq \|\tilde q\|_{2} \leq \alpha_{1}.
\end{align*}
By \autoref{coro:largest-subgaus-singval} and our initial assumptions,
\begin{align*}
\|A\| %
&\leq 1 + C \tilde K\left(1 + \sqrt{\frac{N}{m}}\right) %
\\
&\leq 1 + C\tilde K\left(1 + (\gamma - \theta)^{-1/2}\right) %
= C_{\gamma, K, \theta},
\end{align*}
with probability at least $1 - C\exp(-m)$. Note, by re-choosing $N_{0}$ if
necessary,
\begin{align*}
1 - C\exp(-m) %
&\geq 1 - C\exp(-N(\gamma - \theta)) %
\\
& \geq 1 - C_{\gamma, \theta}\exp(-N_{0}) %
\geq 1 - \varepsilon/2.
\end{align*}
In particular, for $N \geq N_{0}$, with probability at least $1 - \varepsilon$
on the realization $A$, it holds with probability at least $k$ on $z$ that
\begin{align*}
\alpha_{2} \leq \|\tilde q\|_{2} \leq \|A\| \|\tilde x(\sqrt m)\|_{2} %
\leq C_{\gamma, K, \theta}\|\tilde x (\sqrt m)\|_{2}.
\end{align*}
On the same event, by item 3 of \autoref{lem:geometric-lemma}, there is an
absolute constant $p > 0$ so that
$\alpha_{2} \geq C_{\delta, \gamma, L, \theta} N^{p}$, whence
\begin{align*}
\|\tilde x(\sqrt m)\|_{2} \geq C_{\delta, \gamma, K, L, \theta} N^{p}.
\end{align*}
Finally, this immediately implies that for $N \geq N_{0}$, with probability
at least $1 - \varepsilon$ on the realization of $A$,
\begin{align*}
\tilde R(\sqrt m; 0, A, 1) %
&\geq \E \left[\|\tilde x(\sqrt m)\|_{2}^{2}
\mid \mathcal{E}\right] \mathbb{P}(\mathcal{E})
\\
&\geq C_{\delta, \gamma, K, L, \theta} k N^{p}.
\end{align*}
\end{proof}
\begin{lem}[Lower bound $\tilde R(\sigma; 0, A, \eta)$, $\sigma < \eta \sqrt m$]
\label{lem:bp-oc-zero}
Fix $\delta, \varepsilon, \eta > 0$ and suppose $m : \ensuremath{\mathbb{N}} \to \ensuremath{\mathbb{N}}$
satisfies $m(N)/N \to \gamma \in (0, 1)$. There is $N_{0} \in \ensuremath{\mathbb{N}}$ and
absolute constant $p > 0$ so that for all $N \geq N_{0}$, if
$A \in \ensuremath{\mathbb{R}}^{m(N) \times N}$ is a normalized $K$-subgaussian matrix, it
holds with probability at least $1 - \varepsilon$ on the realization of $A$
that for any $0 < \sigma \leq \eta \sqrt m$,
\begin{align*}
\tilde R(\sigma; 0, A, \eta) \geq C_{\delta, \gamma, K} N^{p}.
\end{align*}
\end{lem}
\begin{proof}[Proof of {\autoref{lem:bp-oc-zero}}]
The proof of this result is nearly identical to that of
\autoref{lem:bp-oc-tuned-zero}. The crucial difference is its use of
\autoref{prop:oc-solution-ordering}, using which one argues
\begin{align*}
\alpha_{2} %
\leq \|\tilde q(\sqrt m) \|_{2} %
\leq \|\tilde q(\sigma) \|_{2} %
\leq C_{\gamma, K, \theta} \|\tilde x(\sigma)\|_{2}
\end{align*}
to show, in the appropriate regime, that
$\|\tilde x(\sigma) \|_{2} \geq C_{\delta, \gamma, K, L, \theta} N^{p}$.
\end{proof}
\begin{thm}[Overconstrained maximin]
\label{thm:bp-oc-maximin}
Fix $\delta, \varepsilon, \eta > 0$ and suppose $m : \ensuremath{\mathbb{N}} \to \ensuremath{\mathbb{N}}$
satisfies $m(N) / N \to \gamma \in (0, 1)$. For any $s \geq 0$, there is an
integer $N_{0} \in \ensuremath{\mathbb{N}}$ and an absolute constant $p > 0$ so that for all
$N \geq N_{0}$, if $A \in \ensuremath{\mathbb{R}}^{m(N)\times N}$ is a normalized
$K$-subgaussian matrix, then it holds with probability at least
$1 - \varepsilon$ on the realization of $A$ that
\begin{align*}
\sup_{x \in \Sigma_{s}^{N}} \inf_{\sigma \leq \eta \sqrt m}
\tilde R(\sigma; x, A, \eta) %
\geq C_{\delta, \gamma, K, \theta} N^{p}.
\end{align*}
\end{thm}
\begin{proof}[Proof of {\autoref{thm:bp-oc-maximin}}]
By a scaling argument, it suffices to consider the case $\eta =
1$. Establishing an admissible ensemble and using \autoref{lem:bp-oc-zero},
there is $N_{0} \geq N_{*}$ such that for any $N \geq N_{0}$, with probability
at least $1 - \varepsilon$ on $A$,
\begin{align*}
\sup_{x \in \Sigma_{s}^{N}} \inf_{\sigma \leq \sqrt m}
\tilde R(\sigma; x, A, 1) %
&\geq \inf_{\sigma \leq \sqrt m}
\tilde R(\sigma; 0, A, 1) %
\\
& \geq C_{\delta, \gamma, K, \theta} N^{p}.
\end{align*}
\end{proof}
\subsubsection{Suboptimal regime for basis pursuit}
\label{sec:subopt-regime-basis}
This section contains the proof for \autoref{thm:bp-minimax-suboptimality}, the
main result of \autoref{sec:analysis-bp} establishing a regime in which {$(\mathrm{BP}_{\sigma})$} is
minimax suboptimal. In essence, it combines \autoref{lem:uc-bp-subgaus} and
\autoref{lem:bp-oc-zero}.
\begin{proof}[Proof of {\autoref{thm:bp-minimax-suboptimality}}]
By a scaling argument, it suffices to consider the case $\eta = 1$. Re-write
the minimax expression \eqref{eq:bp-minimax-suboptimality} as
\begin{align*}
\inf_{\sigma > 0} \sup_{x \in \Sigma_{s}^{N}} %
\tilde R(\sigma; x, A, 1) %
&= \min \left\{ %
\inf_{\sigma \leq \sqrt m} S(\sigma), %
\inf_{\sigma > \sqrt m} S(\sigma) \right\},
\\
S(\sigma) &:= \sup_{x \in \Sigma_{s}^{N}} \tilde R(\sigma; x, A, 1).
\end{align*}
For any $N \geq N_{*}$, by \autoref{lem:uc-bp-subgaus}, it holds with
probability at least $1 - \varepsilon/2$ on the realization of $A$ that
\begin{align*}
\inf_{\sigma > \sqrt m} S(\sigma) \geq C_{\delta, \gamma, \theta} \sqrt N.
\end{align*}
Next observe that the trivial lower bound
$S(\sigma) \geq \tilde R(\sigma; 0, A, 1)$ holds for any $\sigma >0$, because
$0 \in \Sigma_{s}^{N}$. In particular, \autoref{lem:bp-oc-zero} yields an
$N_{0} \geq N_{*}$ and absolute constant $p > 0$ such that, with probability
at least $1 - \varepsilon/2$ on the realization of $A$,
\begin{align*}
\inf_{\sigma \leq \sqrt m} S(\sigma) %
\geq \inf_{\sigma \leq \sqrt m} \tilde R(\sigma; 0, A, 1) %
\geq C_{\delta,\gamma,K,\theta} N^{p}.
\end{align*}
Consequently, there is an absolute constant $p > 0$ so that, for all
$N \geq N_{0}$, it holds with probability at least $1 - \varepsilon$ on the
realization of $A$ that
\begin{align*}
\inf_{\sigma > 0} \sup_{x \in \Sigma_{s}^{N}}
\tilde R (\sigma; x, A, 1) %
&\geq \min \left\{ C_{\delta, \gamma, \theta} \sqrt N,
C_{\delta, \gamma, K, \theta} N^{p} \right\} %
\\
& \geq C_{\delta, \gamma, K, \theta} N^{p}.
\end{align*}
\end{proof}
\section{Conclusion}
\label{sec:conclusion}
This work examined the relative sensitivity of three \textsc{Lasso} programs to
their governing parameters: {$(\mathrm{LS}_{\tau})$}, {$(\mathrm{BP}_{\sigma})$} and {$(\mathrm{QP}_{\lambda})$}. We proved asymptotic
cusp-like behaviour of $\hat R(\tau; x_{0}, A, \eta)$ in the limiting low-noise
regime in \autoref{sec:LS-instability}. Numerical simulations in
\autoref{sec:numerical-results} support these observations for even modest
dimensional parameters and noise scales.
In \autoref{sec:qp}, we recall a result establishing right-sided stability of
{$(\mathrm{QP}_{\lambda})$} for a class of matrices that satisfy a version of RIP.\@ The result does
not address sensitivity of {$(\mathrm{QP}_{\lambda})$} to its governing parameter when the governing
parameter is less than its optimal value. In \autoref{sec:numerical-results}, we
demonstrate numerically that there are regimes in which {$(\mathrm{QP}_{\lambda})$} is sensitive to
its governing parameter $\lambda$ when $\lambda < \lambda^{*}$. This sensitivity
is readily observed in the rightmost plot of \autoref{fig:qp-instability}. This
observation establishes a numerical connection to the numerics and theory
of~\cite{berk2019pdparmsens, berk2020sl1mpc}, in which the authors analyze the
proximal denoising setting. Moreover, we observe that {$(\mathrm{QP}_{\lambda})$} is more sensitive to
its choice of parameter when the aspect ratio is larger. We believe this is due
to there being a smaller null-space, which has the effect of shrinking the space
of possible solutions. This behaviour is visible in both plots of
\autoref{fig:qp-instability}: the error curves for larger $\delta$ are steeper
for $\lambda < \lambda^{*}$ than those for smaller values of $\delta$.
In \autoref{sec:analysis-bp}, we proved asymptotic suboptimality of
$\tilde R(\sigma; x_{0}, A, \eta)$ in a certain dimensional regime that falls
outside the typical CS regime where $m \approx Cs\log(N/s)$. In particular, for
$m \approx \delta N$, $\delta \in (0, 1)$, we show that {$(\mathrm{BP}_{\sigma})$} risk is
asymptotically suboptimal for ``very sparse'' signals. We demonstrate that this
theory is relevant to the CS regime in \autoref{sec:numerical-results}, in which
we show that the loss and average loss for {$(\mathrm{BP}_{\sigma})$} are sensitive to the value of
the governing parameter if the ground-truth signal is very sparse. Furthermore,
\autoref{fig:bp-numerics-1} and \autoref{fig:bp-numerics-2} depict suboptimality
of the {$(\mathrm{BP}_{\sigma})$} risk for modest choices of dimensional parameters.
Future works include extending the main results to the generalized
\textsc{Lasso} setting, using more general atomic norms. A rigorous examination
of low-rank matrix recovery could be interesting. Finally, it would be useful to
understand when a convex program is expected to exhibit sensitivity to its
governing parameter, and to determine systematically the regime in which that
instability arises.
\section{Acknowledgments}
\label{sec:acknowledgments}
We would like to thank Xiaowei Li for a careful reading of portions of the
manuscript.
\section{Introduction}
Brezis and Nirenberg
in their famous paper \cite{Brezis1983} introduced the problem
\begin{equation}\label{p}
-\Delta u=|u|^{4\over n-2} u+\lambda u \ \hbox{in}\ \Omega,\ u=0\ \hbox{on}\ \partial\Omega,
\end{equation}
where $\Omega$ is a bounded domain in $\mathbb R^n$ and $n\ge3.$
A huge number of results concerning \eqref{p} have been obtained since then. Let us summarize the most relevant results connected with the topic of the present paper.\\
First of all, the classical Pohozaev's identity ensures that \eqref{p} does not have any solutions if $\lambda\le0$ and $\Omega$ is a star-shaped domain.
A simple argument shows that problem \eqref{p} does not have any positive solutions if $\lambda\ge \lambda_1(\Omega)$, where $\lambda_1(\Omega)$ is the first eigenvalue of $-\Delta$ with Dirichlet boundary condition. The existence of a least energy positive solution $u_\lambda$ to \eqref{p}, i.e. a solution which achieves
the infimum
$$m_{ \lambda}:=\inf\limits_{u \in H^1_0(\Omega )\setminus\{0\}}
{\int\limits_{\Omega} \( |\nabla u |^2-\lambda u ^2\)d x\over ( \int\limits_{\Omega } | u |^{p+1}d x)^{2\over p+1}}$$
has been proved by Brezis and Nirenberg in \cite{Brezis1983} when $\lambda\in (0,\lambda_1(\Omega))$ in dimension $n\ge4$ and when $\lambda\in(\lambda^*(\Omega),\lambda_1(\Omega))$ in dimension $n=3$ where $\lambda^*(\Omega)>0$ depends on the domain $\Omega$. If $\Omega$ is the ball then $\lambda^*(\Omega)=\frac14\lambda_1(\Omega) $ (see \cite{Brezis1983}), while the general case has been treated by Druet in \cite{druet}.
The existence of a sign-changing solution has been proved by Cerami, Solimini and Struwe in \cite{css} when $\lambda\in(0,\lambda_1(\Omega))$ and $n\ge6$ and by Capozzi, Fortunato and Palmieri in \cite{cfp} when $\lambda\ge\lambda_1(\Omega)$ and $n\ge4.$
\\
There is a wide literature about the study of the asymptotic profile of the solutions when the parameter $\lambda$ approaches either zero or some strictly positive values depending on the dimension $n$ and the domain $\Omega.$
In the following, we will focus on the existence of solutions which exhibit a positive or negative blow-up phenomenon as $\lambda$ approaches some particular values.
When the parameter $\lambda$ approaches zero, positive and sign-changing solutions which blow-up positively or negatively at one or more points in $\Omega$ do exist provided the dimension $n\ge4.$
Rey in \cite{rey}, Musso and Pistoia in \cite{mupi} and Esposito, Pistoia and V\'{e}tois in \cite{epv} built solutions to \eqref{p} with simple positive or negative blow-up points, i.e. around each point the solution looks like
a positive or a negative standard bubble. Here the standard bubbles are the functions
\begin{equation}\label{bub} U_{\delta, \xi}(x):= \alpha_n\frac{\delta^{n-2\over2} }{\(\delta^2+|x-\xi|^2\)^{n-2\over2}},\ \hbox{with}\ \delta>0,\ \xi\in \mathbb R^n\ \hbox{and}\ \alpha_n:=(n(n-2))^{n-2\over4 },\end{equation}
which are the only
positive solutions of the equation $-\Delta U= U^{n+2\over n-2}$ in $ \mathbb R^n $ (see \cite{A, cgs, T}).
More precisely, if $\lambda$ is small enough problem \eqref{p} has a positive solution which blows-up at one point (see \cite{rey} if $n\ge5$ and \cite{epv} if $n=4$)
and a sign-changing solution which blows-up positively and negatively at two different points (see \cite{mupi} if $n\ge5$). As far as we know the existence of multiple concentration in the case $n=4$ is still open.
If $n=3$ positive solutions of \eqref{p} blowing-up at a single point when the parameter $\lambda$ approaches a strictly positive number have been found by Del Pino, Dolbeault and Musso in \cite{ddm}. Moreover, sign-changing solutions having both positive and negative blow-up points can be constructed arguing as Musso and Salazar in \cite{ms}, where they found solutions which blow-up at more points when $\lambda$ is close to a suitable strictly positive number.
In higher dimension $n\ge7$
Premoselli \cite{pre} found an arbitrarily large number of sign-changing solutions to \eqref{p} with a towering blow-up point in $\Omega$, i.e. around the point the solution looks like the superposition of
bubbles of alternating sign (see also Iacopetti and Vaira \cite{iava1}). In particular, if $\Omega$ is a ball these solutions are nothing but the radially symmetric nodal solutions.
Conversely, if $\Omega$ is the ball in low dimension $n=3,4,5,6$, Atkinson, Brezis and Peletier in \cite{abp} proved that
problem \eqref{p} does not have any sign-changing radial solutions when $\lambda\in(0,\lambda_*)$ where $\lambda_*$ depends on the dimension $n.$ In particular, we expect that in low dimension the blowing-up (or blowing-down) phenomenon takes place when $\lambda $ approaches a positive value different from zero. In fact
Iacopetti and Vaira in \cite{iava2} proved that if $n=4,5$ and $\lambda$ approaches the first eigenvalue $\lambda_1(\Omega)$ the problem \eqref{p} has a sign-changing solution which blows-down at the origin and shares the shape of the positive first eigenfunction associated with $\lambda_1(\Omega)$ far away. So a natural question arises:{\it is it possible to find a sign-changing blowing-up solution of \eqref{p} in dimension $n=6$ when $\lambda$ approaches some strictly positive number?}
\\
In the present paper, we give a positive answer. In order to state our result, we need to introduce some notation and the assumptions.
\\
Let $u_0$ be a solution to
\begin{equation}\label{20}
\begin{cases}
-\Delta u_0=|u_0|u_0+ \lambda_0u_0\ \hbox{in}\ \Omega
,\\
u_0=0\ \hbox{on}\ \partial\Omega
\end{cases}
\end{equation}
If $\xi_0\in\Omega$ is such that $\max_\Omega u_0=u_0(\xi_0)>0,$
we suppose that
\begin{equation}\label{200}
\boxed{\lambda_0=2u_0(\xi_0)} \end{equation}
We assume that $u_0$ is non-degenerate, i.e.
\begin{equation}\label{30} \boxed{
\begin{cases}
-\Delta v =(2|u_0| + \lambda_0)v \ \hbox{in}\ \Omega\\
v=0\ \hbox{on}\ \partial\Omega.
\end{cases}\ \Rightarrow\ v\equiv0}
\end{equation}
If $v_0$ solves
\begin{equation}\label{v0}
\begin{cases}
-\Delta v_0-(2 |u_0|+\lambda_0 )v_0=u_0\quad\mbox{in}\,\, \Omega\\
v_0=0\ \hbox{on}\ \partial\Omega,
\end{cases}
\end{equation}
we require that
\begin{equation}\label{v00}\boxed{2v_0(\xi_0) \not=1}\end{equation}
We will show that the problem
\begin{equation}\label{p1}
\begin{cases}
-\Delta u=u|u| +(\lambda_0+\epsilon) u\ \hbox{in}\ \Omega,\\
u=0\ \hbox{on}\ \partial\Omega,
\end{cases}
\end{equation}
where $\Omega$ is a bounded domain in $\mathbb R^6$ has a sign-changing solution which blows-down at $\xi_0$ as $|\epsilon|$ approaches zero (note that $\epsilon$ is not necessarily positive) and shares the shape of $u_0$ far away from $\xi_0.$
More precisely our existence result reads as follows.
\begin{theorem}\label{main1} Assume \eqref{200}, \eqref{30} and \eqref{v00}. There exists $\varepsilon_0>0$ such that
\begin{enumerate}
\item if $1-2v_0(\xi_0) >0$ and $\epsilon\in(0,\varepsilon_0)$
\item if $1-2v_0(\xi_0) <0$ and $\epsilon\in(- \varepsilon_0,0)$
\end{enumerate}
then there exists a sign-changing solution $u_\epsilon$ of the problem \eqref{p1}
which blows-up at the point $\xi_0$ as $\epsilon\to0$. More precisely
$$
u_\epsilon (x)=u_0(x)+\epsilon v_0(x)-PU_{\delta_\epsilon,\xi_\epsilon}(x)+\phi_\epsilon(x)
$$
with as $\epsilon\to0$
$$\delta_\epsilon|\epsilon|^{-1}\to d>0,\ \xi_\epsilon\to\xi_0\ \hbox{and}\ \|\phi_\epsilon\|_{H^1_0(\Omega)}=\mathcal O\(\epsilon^2|\ln|\epsilon||^{\frac23}\).$$
\end{theorem}
Here $P U_{\delta,\xi}$ denotes the projection onto $H^1_0(\Omega) $ of the standard bubble $ U_{\delta, \xi}$ defined in \eqref{bub}, i.e.
$
-\Delta P U_{\delta,\xi}=U^2_{\delta,\xi}$ in $\Omega$ with
$P U_{\delta,\xi}=0$ on $\partial\Omega.$\\
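For later reference, let us record the explicit form of the bubble in the present dimension $n=6$: here $\alpha_6=24$, so that \eqref{bub} reads
$$U_{\delta, \xi}(x)= \frac{24\,\delta^{2}}{\(\delta^2+|x-\xi|^2\)^{2}},$$
which solves $-\Delta U_{\delta,\xi}= U_{\delta,\xi}^{2}$ in $\mathbb R^6$, consistently with the quadratic nonlinearity $|u|u$ appearing in \eqref{p1}.\\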
It is natural to ask for which domains $\Omega$ the assumptions \eqref{200}, \eqref{30} and \eqref{v00}
hold true.
If $\Omega$ is the ball and $u_0$ is the positive solution they are all satisfied (see \cite{Sri93} for \eqref{30}, \cite{aggpv} for \eqref{200} and \eqref{v00}).
More generally, we can only prove that assumptions \eqref{200} and \eqref{30} are satisfied for most domains $\Omega$ (see Theorem \ref{main}) when $u_0$ is the least energy positive solution to \eqref{20}. It would be interesting to prove that \eqref{v00} also holds for generic domains. \\
Let us state our generic result.
Let $\Omega_0$ be a bounded and smooth domain in $\mathbb R^6$ and let $D$ be an open neighbourhood of $\overline{\Omega_0}$.
Set $\Omega_\theta:=\Theta(\Omega_0)$ where $\Theta=I+\theta,$ $\theta\in C^{3,\alpha}(\overline D,\mathbb R^6)$ with $ \|\theta\|_{2,\alpha}\le\rho, $ with
$\alpha\in(0,1)$ and $\rho$ small enough. Let us consider the problem on the perturbed domain $\Omega_\theta$
\begin{equation}\label{1theta}
\Delta u+\lambda u+|u| u=0\ \hbox{in}\ \Omega_\theta,\ u=0\ \hbox{on}\ \partial\Omega_\theta.
\end{equation}
\begin{theorem}\label{main-generico}
The set
$$\begin{aligned}\Xi:=\big\{\theta\in C^{3,\alpha}(\overline D,\mathbb R^6)\ :\ &\hbox{if $\lambda>0$ and $u\in H^1_0(\Omega_\theta)$ solves \eqref{1theta} }\\
& \hbox{then $u$ is non-degenerate} \big\}\end{aligned}$$
is a residual subset in $C^{3,\alpha}(\overline D,\mathbb R^6),$
i.e. $C^{3,\alpha}(\overline D,\mathbb R^6)\setminus \Xi$ is a countable union of closed subsets without interior points.
\\
Moreover, if $\lambda\in(0,\lambda_1(\Omega_\theta))$ and $u_\lambda$ denotes the least energy positive solution of \eqref{1theta}, for any $\theta\in \Xi$ there exists $\lambda_\theta\in (0,\lambda_1(\Omega_\theta))$ such that
$$\lambda_\theta=2\max\limits_{\Omega_\theta} u_{\lambda_\theta}.$$
\end{theorem}
The proof of Theorem \ref{main1} is based upon the well-known Ljapunov-Schmidt reduction. In Section 2 we describe the main steps of the proof, omitting many details which can be found, up to minor changes, in the quoted papers. We only prove what cannot be immediately deduced from known results. In particular, we point out the careful construction of the ansatz \eqref{sol}, which has to be refined up to second order, and the delicate estimate of the reduced energy \eqref{cruc1} given in Proposition \ref{cruc0}, whose leading term \eqref{cruc2} arises from the interaction between the bubble and the second order term in the ansatz.\\
The proof of Theorem \ref{main-generico} relies on a classical transversality argument and it is carried out in Section 3.
\section{The existence of a sign-changing solution}
\subsection{Setting of the problem and the choice of the ansatz}
In what follows we denote by $$(u , v):=\int_\Omega \nabla u\nabla v\, dx,\ \|u\|:=\(\int_\Omega |\nabla u|^2\, dx\)^{\frac 12}\ \hbox{and}\ |u|_r:=\(\int_\Omega |u|^r\, dx\)^{\frac 1 r}$$
the inner product and the corresponding norm in $H^1_0(\Omega)$ and the standard norm in $L^r(\Omega)$, respectively. When $A\neq \Omega$ is any Lebesgue measurable set we specify the domain of integration by using $\|u\|_A, |u|_{r, A}$. \\
Let $(-\Delta)^{-1}: L^{\frac 32}(\Omega)\to H^1_0(\Omega)$ be the operator which associates to $v$ the unique solution $u$ of the equation $$-\Delta u =v \quad \mbox{in}\,\, \Omega\qquad u=0\quad\mbox{on}\,\, \partial\Omega.$$ By H\"older's inequality and the Sobolev embedding it follows that
$$\|(-\Delta)^{-1}(v)\|\le C |v|_{\frac 32}\qquad\forall\,\, v\in L^{\frac 32}(\Omega)$$ for some positive constant $C$, which does not depend on $v.$ Hence we can rewrite problem \eqref{p1} as \begin{equation}\label{pb1r} u=(-\Delta)^{-1}[f(u)+(\lambda_0+\epsilon) u]\quad u\in H^1_0(\Omega)\end{equation} with $f(u)=|u|u$.\\
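Indeed (a one-line sketch): writing $w:=(-\Delta)^{-1}(v)$ and using $H^1_0(\Omega)\hookrightarrow L^{3}(\Omega)$ (recall that $2^*=3$ in dimension $6$),
$$\|w\|^2=\int_\Omega v\, w\, dx\le |v|_{\frac 32}\,|w|_{3}\le C\, |v|_{\frac 32}\,\|w\|.$$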
Next we recall the expansion of the projection of the bubble. We denote by $G(x, y)$ the Green's function of the Laplace operator given by
$$
G(x, y)=\frac{1}{4\omega_6}\(\frac{1}{|x-y|^4}-H(x, y)\)$$
where $\omega_6$ denotes the surface area of the unit sphere in $\mathbb R^6$ and $H$ is the regular part of the Green's function, namely for all $y\in\Omega$, $H(x, y)$ satisfies $$\Delta H(x, y)=0\quad\mbox{in}\,\, \Omega\qquad H(x, y)=\frac{1}{|x-y|^4}\quad\mbox{for}\,\, x\in\partial\Omega.$$
It is known that the following expansion holds (see \cite{rey})
\begin{equation}\label{varphiexp}PU_{\delta,\xi}(x)=U_{\delta,\xi}(x)-\alpha_6 \delta^2 H(x, \xi)+\mathcal O\(\delta^4\)\ \hbox{as}\ \delta\to0\end{equation}
uniformly with respect to $\xi$ in compact sets of $\Omega$.
Moreover we recall (see \cite{B}) that every solution to the linear equation $$-\Delta\psi=2U_{\delta, \xi}\psi\quad\mbox{in}\,\, \mathbb R^6$$ is a linear combination of the functions $Z_{\delta, \xi}^j$, $j=0, \ldots, 6$, given by $$Z_{\delta,\xi}^0(x)=\partial_\delta U_{\delta, \xi}(x)=2\alpha_6\delta \frac{|x-\xi|^2-\delta^2}{\(\delta^2+|x-\xi|^2\)^3}$$ and $$Z_{\delta, \xi}^j(x)=\partial_{\xi_j}U_{\delta, \xi}(x)=4\alpha_6\delta^2\frac{x_j-\xi_j}{\(\delta^2+|x-\xi|^2\)^3}\qquad j=1, \ldots, 6.$$ If we denote by $P Z_{\delta, \xi}^j$ the projection of $Z_{\delta, \xi}^j$ onto $H^1_0(\Omega)$, i.e.
$$-\Delta PZ_{\delta,\xi}^j= f'(U_{\delta, \xi})Z_{\delta,\xi}^j\quad\mbox{in}\,\, \Omega,\,\, PZ_{\delta,\xi}^j=0\quad\mbox{on}\,\,\, \partial\Omega, $$ elliptic estimates give $$PZ_{\delta, \xi}^0(x)=Z_{\delta, \xi}^0-2\delta \alpha_6 H(x, \xi)+\mathcal O\(\delta^3\)\ \hbox{as}\ \delta\to0$$
and
$$PZ_{\delta,\xi}^j(x)=Z_{\delta,\xi}^j-\delta^2\alpha_6\partial_{\xi_j}H(x, \xi)+\mathcal O\(\delta^4\), \quad j=1, \ldots, 6 \ \hbox{as}\ \delta\to0$$
uniformly with respect to $\xi$ in compact sets of $\Omega.$ \\
We look for a solution of \eqref{p1} of the form
\begin{equation}\label{sol}
u_\epsilon(x)=\underbrace{u_0(x)+\epsilon v_0-PU_{\delta, \xi}(x)}_{:=W_{\delta, \xi}}+\phi_\epsilon(x)
\end{equation}
where $\delta, \xi$ are chosen so that
\begin{equation}\label{sceltapar}
\delta=|\epsilon| d\ \hbox{with}\ d\in\(\sigma, \frac{1}{\sigma}\)\ \hbox{and}\ \xi=\xi_0+\sqrt\delta \eta\ \hbox{with}\ |\eta|\le \frac{1}{\sigma}\ \hbox{where $\sigma>0$ is small}
\end{equation} and $\phi_\epsilon$ is a remainder term which is small as $\epsilon\to0$ which belongs to the space $\mathcal K_{\delta, \xi}^\bot$ defined as follows.
Now let us define $$\mathcal K_{\delta, \xi}:={\rm span}\{P Z_{\delta,\xi}^j\,\,:\,\, j=0, \ldots, 6\}$$ and $$\mathcal K_{\delta, \xi}^\bot:=\{\phi\in H^1_0(\Omega)\,\,:\,\,\, (\phi, PZ_{\delta,\xi}^j)=0\,\, j=0, \ldots, 6\}.$$
Let us denote by $\Pi_{\delta, \xi}$ and $\Pi_{\delta,\xi}^\bot$ the projection of $H^1_0(\Omega)$ on $\mathcal K_{\delta,\xi}$ and $\mathcal K_{\delta,\xi}^\bot$ respectively.\\
Then solving problem \eqref{pb1r} is equivalent to solving the system
\begin{equation}\label{sist1}
\Pi_{\delta,\xi}^\bot\left\{u_\epsilon(x)-(-\Delta)^{-1}\left[f(u_\epsilon)+\lambda u_\epsilon\right]\right\}=0\end{equation}
\begin{equation}\label{sist2}
\Pi_{\delta,\xi}\left\{u_\epsilon(x)-(-\Delta)^{-1}\left[f(u_\epsilon)+\lambda u_\epsilon\right]\right\}=0\end{equation}\vskip0.2cm
\subsection{The remainder term: solving equation (\ref{sist1})}
The equation \eqref{sist1} can be written as
$$
\mathcal L_{\delta,\xi}(\phi_\epsilon)+\mathcal R_{\delta,\xi}+\mathcal N_{\delta,\xi}(\phi_\epsilon)=0
$$
where
$$
\mathcal L_{\delta,\xi}(\phi_\epsilon)=\Pi_{\delta,\xi}^\bot\left\{\phi_\epsilon(x)-(-\Delta)^{-1}\left[f'(W_{\delta, \xi})\phi_\epsilon+\lambda \phi_\epsilon\right]\right\}
$$
is the linearized operator at the approximate solution,
$$
\mathcal R_{\delta,\xi}=\Pi_{\delta,\xi}^\bot\left\{W_{\delta, \xi}(x)-(-\Delta)^{-1}\left[f(W_{\delta, \xi})+\lambda W_{\delta, \xi}\right]\right\}$$
is the error term and $$
\mathcal N_{\delta,\xi}(\phi_\epsilon)=\Pi_{\delta,\xi}^\bot\left\{-(-\Delta)^{-1}\left[f(W_{\delta, \xi}+\phi_\epsilon)-f(W_{\delta, \xi})-f'(W_{\delta, \xi})\phi_\epsilon\right]\right\}$$
is a quadratic term in $\phi_\epsilon$.\\
First of all, we estimate the size of the error term $\mathcal R_{\delta,\xi}.$
\begin{lemma}\label{error}For any $\sigma>0$
there exist $c>0$ and $\varepsilon_0>0$ such that for any $d>0$ and $\eta\in\mathbb R^6$ satisfying \eqref{sceltapar} and for any $\epsilon\in(-\varepsilon_0,\varepsilon_0)$
$$\|\mathcal R_{\delta,\xi} \|\le c \e^{2}|\ln|\e||^{\frac 23}.$$\end{lemma}
\begin{proof}
First we remark that
$$\begin{aligned}
&-\Delta W_{\delta,\xi}-|W_{\delta,\xi}|W_{\delta,\xi}-(\lambda_0+\e)W_{\delta,\xi}\\ &=-\Delta u_0-\e\Delta v_0-U_{\de, \xi}^2-|u_0+\e v_0- PU_{\de,\xi}|(u_0+\e v_0- PU_{\de,\xi})\\
&-\lambda_0 u_0-\lambda_0\e v_0+(\lambda_0+\e)PU_{\de,\xi}-\e u_0-\e^2v_0\\
&=-|u_0+\e v_0- PU_{\de,\xi}|(u_0+\e v_0- PU_{\de,\xi})-U_{\de, \xi}^2+|u_0|u_0\\ &+\e\underbrace{\(-\Delta v_0 -\lambda_0 v_0 - u_0\)}_{=2|u_0| v_0\ \hbox{\tiny because of \eqref{v0}}}+(\lambda_0+\e)PU_{\de,\xi}-\e^2v_0. \end{aligned}$$
By the continuity of $\Pi_{\delta,\xi}^\bot$ we get that
$$\begin{aligned} \|\mathcal R_{\delta,\xi}\|&\le c\left|-\Delta W_{\delta,\xi}-f(W_{\delta,\xi})-\lambda W_{\delta,\xi}\right|_{\frac 32}\\
&\le c \underbrace{\left|-|u_0+\e v_0- PU_{\de,\xi}|(u_0+\e v_0- PU_{\de,\xi})-PU_{\delta,\xi}^2+|u_0|u_0+2\e|u_0|v_0\right|_{\frac 32}}_{(I)}\\ &+c\underbrace{\left|PU_{\delta,\xi}^2-U_{\delta,\xi}^2\right|_{\frac 32}}_{(II)}\\
&+(\lambda_0+\e)\left|PU_{\delta,\xi}\right|_{\frac 32}+\underbrace{\e^2\left|v_0\right|_{\frac 32}}_{:=\mathcal O\(\e^2\)}
\end{aligned}$$
First of all, we point out that
$$|PU_{\delta,\xi}|_{\frac 32}\le c |U_{\delta,\xi}|_{\frac 32}\le c \de^2|\ln\de|^{\frac 23}.$$
and by \eqref{varphiexp}
$$\begin{aligned}
(II)&\le c\(\int_\Omega\underbrace{|PU_{\delta,\xi}-U_{\delta,\xi}|^{\frac 32}}_{=O(\delta^3)}\underbrace{|PU_{\delta,\xi}+U_{\delta,\xi}|^{\frac 32}}_{\le cU_{\delta,\xi}^{\frac 32}}\)^{\frac 23}\le c \delta^2 \(\int_\Omega |U_{\delta,\xi}|^{\frac 32}\, dx\)^{\frac 23} =\mathcal O\( \de^4|\ln\de|^{\frac 23}\).\end{aligned}$$
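Both estimates rely on the elementary scaling bound (a sketch, with $B(\xi,R)\supseteq\Omega$):
$$\int_\Omega U_{\de,\xi}^{\frac 32}\, dx\le \alpha_6^{\frac 32}\,\de^{3}\int_{B(0,R)}\frac{dx}{\(\de^2+|x|^2\)^{3}}= \alpha_6^{\frac 32}\,\de^{3}\int_{B(0,R/\de)}\frac{dy}{\(1+|y|^2\)^{3}}=\mathcal O\(\de^3|\ln\de|\),$$
since in $\mathbb R^6$ the last integrand behaves like $|y|^{-6}$ at infinity; raising to the power $\frac 23$ gives $|U_{\de,\xi}|_{\frac 32}=\mathcal O\(\de^2|\ln\de|^{\frac 23}\)$.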
First let us estimate $(I)$ in $B(\xi, \sqrt\de)$ and $\Omega\setminus B(\xi, \sqrt\de)$:
$$\begin{aligned} (I)&\le c \(\int_{B(\xi, \sqrt\de)}\big||u_0+\e v_0-PU_{\de,\xi}|(u_0+\e v_0-PU_{\de,\xi})+(PU_{\de,\xi})^2\big|^{\frac 32}\)^{\frac 23}\\ &+c\underbrace{\(\int_{B(\xi, \sqrt\de)}\big||u_0|u_0+2\e |u_0| v_0\big|^{\frac 32}\, dx\)^{\frac 23}}_{=\mathcal O(\delta^2)}\\
&+ c\(\int_{\Omega\setminus B(\xi, \sqrt \de)}\big||u_0+\e v_0-PU_{\de, \xi}|(u_0+\e v_0-PU_{\de, \xi})-|u_0|u_0- 2|u_0| (\e v_0-PU_{\de, \xi})\big|^{\frac 32}\)^{\frac 23}\\
&+c\(\int_{\Omega\setminus B(\xi, \sqrt \de)}\big|(PU_{\de,\xi})^2+2|u_0|PU_{\de, \xi}\big|^{\frac 32}\, dx\)^{\frac 23}\\
&=\mathcal O\(\delta^2|\ln\de|^{\frac 23}\),\end{aligned}$$
since
by mean value Theorem (here $\theta\in[0,1]$)
$$\begin{aligned}&\int_{B(\xi, \sqrt\de)}\big||u_0+\e v_0-PU_{\de,\xi}|(u_0+\e v_0-PU_{\de,\xi})+(PU_{\de,\xi})^2\big|^{\frac 32}
\\
&=2\int_{B(\xi, \sqrt\de)}|(\theta(u_0+\e v_0)-PU_{\de,\xi})(u_0+\e v_0)|^{\frac 32}\, dx
\\ &
\le c\underbrace{\int_{B(\xi, \sqrt\de)}|PU_{\de,\xi}|^{\frac32}\, dx}_{=\mathcal O(\delta^3|\log\delta|)}+c\underbrace{\int_{B(\xi, \sqrt\de)}|u_0+\e v_0|^{3}\, dx}_{=\mathcal O(\delta^3)}
\end{aligned},$$
and by the inequality
\begin{equation}\label{a1}
\big||a+b|(a+b)-|a|a-2|a|b\big|\le 7 b^2\ \hbox{for any}\ a,b\in \mathbb R
\end{equation}
$$\begin{aligned}&\int_{\Omega\setminus B(\xi, \sqrt \de)}\Big||u_0+\e v_0-PU_{\de, \xi}|(u_0+\e v_0-PU_{\de, \xi})-|u_0|u_0- 2|u_0| (\e v_0-PU_{\de, \xi})\Big|^{\frac 32}\\
&\le c\int_{\Omega\setminus B(\xi, \sqrt \de)}|\e v_0-PU_{\de, \xi}|^{3}dx\\
&\le c\underbrace{\int_{\Omega\setminus B(\xi, \sqrt \de)}|\e v_0|^{3}dx}_{=\mathcal O(\epsilon^3)}+c\underbrace{\int_{\Omega\setminus B(\xi, \sqrt \de)}| U_{\de, \xi}|^{3}dx}_{=\mathcal O(\delta^3)},\\
&\left(\int_{\Omega\setminus B(\xi, \sqrt \de)}\Big||u_0+\e v_0-PU_{\de, \xi}|(u_0+\e v_0-PU_{\de, \xi})-|u_0|u_0- 2|u_0| (\e v_0-PU_{\de, \xi})\Big|^{\frac 32}\right)
^\frac23=O(\e^2)
\end{aligned}$$
and
$$\int_{\Omega\setminus B(\xi, \sqrt \de)}\Big||PU_{\de, \xi}|(PU_{\de, \xi})+2|u_0|PU_{\de, \xi}\Big|^{\frac 32}\le c\underbrace{\int_{\Omega\setminus B(\xi, \sqrt \de)}| U_{\de, \xi} |^3\, dx}_{=\mathcal O(\delta^3 )}+\underbrace{\int_{\Omega\setminus B(\xi, \sqrt \de)}| U_{\de, \xi}|^{\frac 32}\, dx}_{=\mathcal O(\delta^3|\log\delta|)}.$$
which ends the proof.
\end{proof}
Next we analyze the invertibility of the linear operator $\mathcal L_{\delta,\xi}$ (see for example \cite{va}, Lemma 2.4 or \cite{rv}, Lemma 4.2).
\begin{lemma}\label{inv}
For any $\sigma>0$ there exist $c>0$ and $\varepsilon_0>0$ such that for any $d>0$ and $\eta\in\mathbb R^6$ satisfying \eqref{sceltapar} and for any $\e\in(-\varepsilon_0,\varepsilon_0)$
$$\|\mathcal L_{\delta,\xi}(\phi) \|\ge c \|\phi\|\ \hbox{for any}\ \phi\in \mathcal K_{\de,\xi}^\bot.$$ Moreover, $\mathcal L_{\de,\xi}$ is invertible and $\|\mathcal L_{\de,\xi}^{-1}\|\le \frac 1 c.$\end{lemma}
We are in position now to find a solution of the equation \eqref{sist1} whose proof relies on a standard contraction mapping argument (see for example \cite{mupi}, Proposition 1.8 and \cite{mipive}, Proposition 2.1)
\begin{proposition}\label{solphi}
For any $\sigma>0$ there exist $c>0$ and $\varepsilon_0>0$ such that for any $d>0$ and $\eta\in\mathbb R^6$ satisfying \eqref{sceltapar} and for any $\e\in(-\varepsilon_0,\varepsilon_0)$
there exists a unique $\phi_\epsilon=\phi_\epsilon(d,\eta)\in \mathcal K_{\de,\xi}^\bot$ solution to \eqref{sist1} which is continuously differentiable with respect to $d$ and $\eta$ and such that
$$
\|\phi_\epsilon\|\le c \e^2|\ln|\e||^{\frac 23}.
$$ \end{proposition}
\subsection{The reduced problem: solving equation (\ref{sist2})}
To solve equation \eqref{sist2}, we shall find the parameter $\delta$ and the point $\xi\in\Omega$ as in \eqref{sceltapar}, i.e. $d>0$ and $\eta\in\mathbb R^6,$ so that \eqref{sist2} is satisfied. \\ It is well known that this problem has a variational structure, in the sense that finding solutions of \eqref{sist2} reduces to finding critical points of an explicit finite dimensional functional. Indeed, let $J_\e: H^1_0(\Omega)\to \mathbb R$ be defined by $$J_\e(u):=\frac 12 \int_\Omega |\nabla u|^2\, dx-\frac \lambda 2 \int_\Omega u^2\, dx-\frac 13\int_\Omega |u|^3\, dx$$ and let $\tilde J_\e: \mathbb R_+\times \mathbb R^6\to \mathbb R$ be the reduced energy defined by $$\tilde J_\e(d, \eta)=J_\e(W_{\delta, \xi}+\phi_\epsilon).$$
\begin{proposition}\label{cruc0} For any $\sigma>0$ there exists $\varepsilon_0>0$ such that for any $\epsilon\in(-\varepsilon_0,\varepsilon_0)$
\begin{equation}\label{cruc1}
\tilde J_\e(d,\eta)= \mathfrak c_0(\e)+|\e|^3 \Upsilon(d,\eta) +o\(|\e|^3\)
\end{equation}
with
\begin{equation}\label{cruc2}
\Upsilon(d,\eta):= \mathtt{sgn}(\e)\(1-2v_0(\xi_0)\) d^2\mathfrak a_1+d^3\(\mathfrak a_2 \langle D^2 u_0(\xi_0)\eta, \eta\rangle-\mathfrak a_3 \),
\end{equation}
uniformly with respect to $(d,\eta)$ satisfying \eqref{sceltapar}, where
$\mathfrak c_0(\e)$ depends only on $\e$ and the $\mathfrak a_i$'s are positive constants.
Moreover, if $(d, \eta)$ is a critical point of $\tilde J_\e$, then $W_{\delta,\xi}+\phi_\epsilon$ is a solution of \eqref{p1}.\\
\end{proposition}
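Before giving the proof, we note, as a purely formal indication which neglects all the error terms, how the expansion \eqref{cruc2} produces the sign conditions of Theorem \ref{main1}. Since $\xi_0$ is an interior maximum point of $u_0$, $D^2u_0(\xi_0)\le 0$, and at $\eta=0$
$$\Upsilon(d,0)=\mathtt{sgn}(\e)\(1-2v_0(\xi_0)\)\mathfrak a_1\, d^2-\mathfrak a_3\, d^3,$$
which admits an interior maximum point
$$d^*={2\mathfrak a_1\over 3\mathfrak a_3}\,\mathtt{sgn}(\e)\(1-2v_0(\xi_0)\)>0$$
exactly when $\mathtt{sgn}(\e)\(1-2v_0(\xi_0)\)>0$, i.e. in the two cases listed in Theorem \ref{main1}.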
\begin{proof}
It is quite standard to prove that if $(d, \eta)$ satisfies \eqref{sceltapar} and is a critical point of $\tilde J_\e$, then $W_{\delta,\xi}+\phi_\epsilon$ is a solution of \eqref{p1}
(see for example \cite{mipive}, Proposition 2.2). Moreover, it is not difficult to check that $\tilde J_\e(d,\eta)=J_\e(W_{\delta,\xi})+o\(|\e|^3\) $ uniformly with respect to $(d, \eta)$ which
satisfies \eqref{sceltapar} (see for example \cite{mipive}, Proposition 2.2). \\
We need only to estimate the main term of the reduced energy $J_\e(W_{\delta,\xi})$, i.e.
\begin{align*} &J_\e(u_0+\e v_0-PU_{\de,\xi})\\ &= \frac 1 2 \intO |\nabla(u_0+\e v_0- PU_{\de,\xi})|^2-\frac{\lambda_0+\e}{2}\intO(u_0+\e v_0- PU_{\de,\xi})^2
-\frac 13\intO |u_0+\e v_0- PU_{\de,\xi}|^3 \\
&=\frac 1 2 \intO |\nabla(u_0+\e v_0)|^2+\frac 12 \intO|\nabla PU_{\de,\xi}|^2-\frac{\lambda_0+\e}{2}\intO(u_0+\e v_0)^2-\frac{\lambda_0+\e}{2}\intO(PU_{\de,\xi})^2\\ &-\underbrace{\(\intO \nabla u_0\nabla PU_{\de,\xi}-\lambda_0\intO u_0 PU_{\de,\xi}\)}_{=\intO |u_0|u_0 PU_{\de,\xi}}-\e\underbrace{ \(\intO \nabla v_0\nabla PU_{\de,\xi}-\lambda_0 \intO v_0 PU_{\de,\xi}- \intO u_0 PU_{\de,\xi}\)}_{= \intO 2|u_0| v_0 PU_{\de,\xi} } \\
&+ \e^2\intO v_0PU_{\de,\xi}\\
&-\frac 13\intO |u_0+\e v_0- PU_{\de,\xi}|^3\\
& =\underbrace{\frac 1 2 \intO |\nabla(u_0+\e v_0)|^2-\frac{\lambda_0+\e}{2}\intO(u_0+\e v_0)^2-\frac13 \intO |u_0+\e v_0 |^3}_{=:I_1} \\
&+ \underbrace{\frac 12 \intO|\nabla PU_{\de,\xi}|^2-\frac 13\intO PU_{\de,\xi}^3}_{=:I_2}\underbrace{-\frac{\lambda_0}{2}\intO PU_{\de,\xi}^2+\intO u_0 PU_{\de,\xi}^2}_{=:I_3}
\underbrace{-\frac\e 2\intO PU_{\de,\xi}^2+\e\int v_0 PU_{\de,\xi}^2}_{=:I_4}\\
&\underbrace{ -\frac 13\intO\(|u_0+\e v_0- PU_{\de,\xi}|^3-|u_0+\e v_0|^3- PU_{\de,\xi}^3+3(u_0 +\e v_0)PU_{\de,\xi}^2+3|u_0 +\e v_0|(u_0+\e v_0) PU_{\de,\xi}\)}_{=:I_5}\\&
\underbrace{ + \intO\Big[|u_0 +\e v_0|(u_0+\e v_0) -(|u_0|u_0 +2\e|u_0| v_0)\Big] PU_{\de,\xi} }_{=:I_6}+ \underbrace{\e^2\intO v_0PU_{\de,\xi}}_{=:I_7}\end{align*}
It is clear that
$$I_7=\mathcal O\(\e^2\intO{\delta^2\over |x-\xi|^4}dx\right)=\mathcal O\(\e^2\delta^2\)=\mathcal O\(\e^4\).$$
To estimate $I_6$, by \eqref{a1} it follows that
$$I_6=O\left(\e^2\intO PU_{\de,\xi}
\right)=O\(\e^2\delta^2\)=\mathcal O\(\e^4\).$$
Now, $I_1$ depends neither on $d$ nor on $\eta$ and it will be included in the constant $\mathfrak c_0$ in \eqref{cruc1}.
By \eqref{varphiexp}
\begin{equation*}
\begin{aligned}
I_2&=\frac 12 \intO U_{\de,\xi}^3-\frac 13\intO PU_{\de,\xi}^3\\ &=\frac 12 \intO U_{\de,\xi}^3-\frac 13\intO\big(U_{\delta,\xi}(x)-\alpha_6 \delta^2 H(x, \xi)+\mathcal O\(\delta^4\)\big)^3\\
&=
\frac1 6
\int\limits_{\mathbb R^6}U^3+\mathcal O\left(\delta^2\intO U_{\delta,\xi}^2\right)+O\(\delta^4\)\\ &=\frac1 6
\int\limits_{\mathbb R^6}U_{\delta,\xi}^3+O\(\delta^4\).
\end{aligned}
\end{equation*}\\
Now,
setting $\varphi_{\delta,\xi}:=PU_{\delta, \xi}-U_{\delta, \xi}=\mathcal O(\delta^2),$ by \eqref{varphiexp} and \eqref{sceltapar}
\begin{equation*}
\begin{aligned}
I_3&=\intO \( u_0(x)-\frac{\lambda_0}2\)(U_{\delta, \xi}+\varphi_{\delta,\xi})^2\\
& =\intO \( u_0(x)-u_0(\xi_0)\)U_{\delta, \xi}^2+\mathcal O(\delta^4)\\
&=\intO\left[\frac12\langle D^2 u_0(\xi_0)(x-\xi_0),(x-\xi_0)\rangle+\mathcal O(|x-\xi_0|^3)\right]\alpha_6^2\frac{\delta^4}{(\delta^2+|x-\xi|^2)^4}dx +\mathcal O(\delta^4)\\
&=\alpha_6^2\intO\frac12 \langle D^2 u_0(\xi_0)(x-\xi_0),(x-\xi_0)\rangle \frac{\delta^4}{(\delta^2+|x-\xi|^2)^4}dx +\mathcal O(\delta^4)\\
&=\alpha_6^2\delta^2\int\limits_{\Omega-\xi\over\delta}\frac12 \langle D^2 u_0(\xi_0)(\delta y+\sqrt\delta\eta),(\delta y+\sqrt\delta\eta)\rangle \frac{1}{(1+|y|^2)^4}dy +\mathcal O(\delta^4)\\
&=\frac{\alpha_6^2}2\delta^3\(\int\limits_{\mathbb R^6}\frac{1}{(1+|y|^2)^4}dy\) \langle D^2 u_0(\xi_0)\eta,\eta\rangle +\mathcal O(\delta^4|\ln\delta|)\\
&=\frac{\alpha_6^2}2d^3|\e|^3\(\int\limits_{\mathbb R^6}\frac{1}{(1+|y|^2)^4}dy\) \langle D^2 u_0(\xi_0)\eta,\eta\rangle +\mathcal O(\e^4|\ln|\e||).
\end{aligned}
\end{equation*}
and analogously
\begin{equation*}
\begin{aligned}
I_4&=\e\intO\(v_0(x)-\frac12\) PU_{\de,\xi}^2\\ &=\e\left[\alpha_6^2\delta^2\(\int\limits_{\mathbb R^6}\frac{1}{(1+|y|^2)^4}dy\) \(v_0(\xi_0)-\frac12\) +o(1)\right]\\
&=\e^3d^2\left[\alpha_6^2\(\int\limits_{\mathbb R^6}\frac{1}{(1+|y|^2)^4}dy\) \(v_0(\xi_0)-\frac12\) +o(1)\right]
\end{aligned}
\end{equation*}
Finally, we have to estimate $I_5.$
We point out that
$$|u_0+\e v_0- PU_{\de,\xi}|^3-|u_0+\e v_0|^3- PU_{\de,\xi}^3+3(u_0 +\e v_0)PU_{\de,\xi}^2+3|u_0 +\e v_0|(u_0+\e v_0) PU_{\de,\xi}=0\ \hbox{if}\ u_0+\e v_0\le0$$
and so
$$\begin{aligned} I_5 &=-\frac 13\int_{\{u_0+\e v_0\ge0\}}\(|u_0+\e v_0- PU_{\de,\xi}|^3-(u_0+\e v_0)^3- PU_{\de,\xi}^3\right.\\ &\left.
\hskip 4truecm+3(u_0 +\e v_0)PU_{\de,\xi}^2+
3 (u_0+\e v_0)^2 PU_{\de,\xi}\)dx \\
&=-\frac 13\int_{\{u_0+\e v_0\ge PU_{\delta, \xi}\}}\left(-2PU_{\delta, \xi}^3+6(u_0 +\e v_0) PU_{\de,\xi}^2\right)\\
&-\frac 13\int_{\{0< u_0+\e v_0<PU_{\delta, \xi}\}}\left(-2(u_0+\e v_0)^3+6(u_0 +\e v_0)^2 PU_{\de,\xi} \right).\end{aligned}$$
First of all we claim that for any $\sigma>0$ there exists $\varepsilon_0>0$ such that for any $\e\in(-\varepsilon_0,\varepsilon_0)$ and $(d,\eta)$ satisfying \eqref{sceltapar}
\begin{equation}\label{claimlevel}
B\(\xi, R^1_\delta\sqrt\delta\)\subset\{x\in\Omega\,:\, 0< u_0(x)+\e v_0(x)< P U_{\delta, \xi}(x)\} \cap B\(\xi,\delta^\frac14\)\subset B\(\xi, R^2_\delta\sqrt\delta\)
\end{equation}
where
\begin{equation}\label{ro} R^1_\delta,R^2_\delta=R_0+o(1)\ \hbox{with}\ R_0:=\(\frac{\alpha_6}{u_0(\xi_0)}\)^{\frac 14}. \end{equation}
We recall that $\delta=\mathcal O(\epsilon)$ and also that $P U_{\delta, \xi}(x)=\alpha_6{\delta^2\over(\delta^2+|x-\xi|^2)^2}+\mathcal O(\epsilon^2)$ uniformly in $\Omega.$
If $|x-\xi|<R^1_\delta\sqrt\delta$ with $\delta$ small enough, then by the mean value theorem $u_0(x)+\e v_0(x)=u_0(\xi_0)+\mathcal O_1(\epsilon) $ and
$$ \begin{aligned}u_0(x)+\e v_0(x)< P U_{\delta, \xi}(x)\ &\Leftrightarrow\ \frac{u_0(\xi_0)}{\alpha_6}+\mathcal O_1(\epsilon)< {\delta^2\over(\delta^2+|x-\xi|^2)^2}\\
& \Leftrightarrow\
|x-\xi|\le \sqrt\delta \underbrace{\({1\over \(\frac{u_0(\xi_0)}{\alpha_6}+\mathcal O_1(\epsilon)\)^{\frac12}}-\delta\)^{1\over2}}_{R^1_\delta}\end{aligned}$$
and the first inclusion in \eqref{claimlevel} together with \eqref{ro} follow.
On the other hand, again by the mean value theorem
we have $u_0(x)+\e v_0(x)=u_0(\xi_0)+\mathcal O_2(\sqrt \delta)$ for any $x\in B(\xi,\delta^\frac14)$
and arguing as above we get the second inclusion in \eqref{claimlevel} and \eqref{ro}. \\
It is useful to point out that by \eqref{claimlevel} we immediately get
\begin{equation}\label{claimlevel+}
B^c\(\xi, R^1_\delta\sqrt\delta\) \supset
\{x\in\Omega\,:\, u_0(x)+\e v_0(x)\ge P U_{\delta, \xi}(x)\}\cup B^c\(\xi,\delta^\frac14\)
\supset B^c\(\xi, R^2_\delta\sqrt\delta\)
\end{equation}
Now by \eqref{claimlevel} and \eqref{claimlevel+} we deduce
$$\begin{aligned} I_5&=-\frac 13\int_{\{u_0+\e v_0\ge PU_{\delta, \xi}\}}\left(-2PU_{\delta, \xi}^3+6(u_0 +\e v_0) PU_{\de,\xi}^2\right)\\
&-\frac 13\int_{\{0<u_0+\e v_0<PU_{\delta, \xi}\}}\left(-2(u_0+\e v_0)^3+6(u_0 +\e v_0)^2 PU_{\de,\xi} \right)\\
&=-\frac 13\int_{\{u_0+\e v_0\ge PU_{\delta, \xi}\}\cup B^c\(\xi,\delta^\frac14\)}\left(-2PU_{\delta, \xi}^3+6(u_0 +\e v_0) PU_{\de,\xi}^2\right)\\
&+\frac 13\int_{B^c\(\xi,\delta^\frac14\)\setminus \{u_0+\e v_0\ge PU_{\delta, \xi}\}\cap B^c\(\xi,\delta^\frac14\) }\left(-2PU_{\delta, \xi}^3+6(u_0 +\e v_0) PU_{\de,\xi}^2\right)\\
&-\frac 13\int_{\{0<u_0+\e v_0<PU_{\delta, \xi}\}\cap B\(\xi,\delta^\frac14\)}\left(-2(u_0+\e v_0)^3+6(u_0 +\e v_0)^2 PU_{\de,\xi} \right)\\
& -\frac 13\int_{\{0<u_0+\e v_0<PU_{\delta, \xi}\}\cap B^c\(\xi,\delta^\frac14\)}\left(-2(u_0+\e v_0)^3+6(u_0 +\e v_0)^2 PU_{\de,\xi} \right)\\
&=-\frac 13\int_{\{u_0+\e v_0\ge PU_{\delta, \xi}\}\cup B^c\(\xi,\delta^\frac14\) }\left(-2PU_{\delta, \xi}^3+6(u_0 +\e v_0) PU_{\de,\xi}^2\right)\\
&-\frac 13\int_{\{0<u_0+\e v_0<PU_{\delta, \xi}\} \cap B\(\xi,\delta^\frac14\)}\left(-2(u_0+\e v_0)^3+6(u_0 +\e v_0)^2 PU_{\de,\xi} \right)+o(\delta^3),
\end{aligned}$$
because
$$\begin{aligned}& \int_{B^c\(\xi,\delta^\frac14\)\setminus\{u_0+\e v_0\ge PU_{\delta, \xi}\}\cap B^c\(\xi,\delta^\frac14\)}\left(-2PU_{\delta, \xi}^3+6(u_0+\e v_0) PU_{\delta, \xi}^2 \right)\\ &=\mathcal O\(
\int_{ B^c\(\xi,\delta^\frac14\)}\left(U_{\delta, \xi}^3+ U_{\delta, \xi}^2 \right)\)\\
&=\mathcal O\(\delta^\frac72\),\end{aligned}$$
and
$$\begin{aligned}&\int_{\{0<u_0+\e v_0<PU_{\delta, \xi}\}\cap B^c\(\xi,\delta^\frac14\)}\left(-2(u_0+\e v_0)^3+6(u_0 +\e v_0)^2 PU_{\de,\xi} \right)\\ &=\mathcal O\(\delta^3\mathtt{meas}\{0<u_0(x)<2\delta\}\) \\
&=o(\delta^3)
\end{aligned}
$$
since
$PU_{\delta, \xi}(x)=\mathcal O(\delta)$ if $|x-\xi|\ge \delta^\frac14$ and $\{0<u_0+\e v_0<PU_{\delta, \xi}\}\cap B^c\(\xi,\delta^\frac14\)\subset \{0<u_0(x)<2\delta\}$ if $\delta$ is small enough.
Next we claim that
$$
\begin{aligned}
&-\frac 13\int_{\{u_0+\e v_0\ge PU_{\delta, \xi}\}\cup B^c\(\xi,\delta^\frac14\) }\left(-2PU_{\delta, \xi}^3+6(u_0 +\e v_0) PU_{\de,\xi}^2\right)\\ &-\frac 13\int_{\{0<u_0+\e v_0<PU_{\delta, \xi}\} \cap B\(\xi,\delta^\frac14\)}\left(-2(u_0+\e v_0)^3+6(u_0 +\e v_0)^2 PU_{\de,\xi} \right)+o(\delta^3) \\
&=-\frac 13\int_{\{u_0+\e v_0\ge PU_{\delta, \xi}\}\cup B^c\(\xi,\delta^\frac14\) }\left(-2PU_{\delta, \xi}^3+6u_0 PU_{\delta, \xi}^2 \right)\\ &-\frac 13\int_{\{0<u_0+\e v_0<PU_{\delta, \xi}\} \cap B\(\xi,\delta^\frac14\)}\left(-2u_0^3+6u_0^2 PU_{\delta, \xi} \right)+o(\delta^3),\\
\end{aligned}
$$
Indeed using \eqref{claimlevel+} and \eqref{claimlevel} we get
$$\int_{\{u_0+\e v_0\ge PU_{\delta, \xi}\}\cup B^c\(\xi,\delta^\frac14\) } PU_{\de,\xi}^2=\mathcal O\(
\int_{ B^c\(\xi,\delta^\frac12\)} U_{\delta, \xi}^2 \)=\mathcal O\(\delta^3\),$$
$\mathtt{meas} B \(\xi,\delta^\frac12\)=\mathcal O(\delta^3)$ and
$$\int_{\{u_0+\e v_0<PU_{\delta, \xi}\}\cap B \(\xi,\delta^\frac14\) } PU_{\de,\xi} =\mathcal O\(
\int_{ B \(\xi,\delta^\frac12\)} U_{\delta, \xi} \)=\mathcal O\(\delta^3\).$$
We estimate the last two terms in the expansion of $I_5.$ By \eqref{claimlevel+}
\begin{equation*}
B^c\(\xi, R^2_\delta\sqrt\delta\)\subset\{x\in\Omega\,:\, u_0(x)+\e v_0(x)\ge P U_{\delta, \xi}\}\cup B^c\(\xi,\delta^\frac14\)\subset B^c\(\xi, R^1_\delta\sqrt\delta\)\
\end{equation*}
Hence
$$\begin{aligned}\int_{|x-\xi|>R_\delta^2\sqrt\de}\left(-2PU_{\delta, \xi}^3+6u_0 PU_{\delta, \xi}^2 \right)&\le \int_{\{u_0+\e v_0\ge PU_{\delta, \xi}\}\cup B^c\(\xi,\delta^\frac14\)}\left(-2PU_{\delta, \xi}^3+6u_0 PU_{\delta, \xi}^2 \right)\\&\le \int_{|x-\xi|>R_\de^1\sqrt\de}\left(-2PU_{\delta, \xi}^3+6u_0 PU_{\delta, \xi}^2 \right).\end{aligned}$$
Now if $R_\delta$ denotes either $R^1_\delta$ or $R^2_\delta$ we get
$$\begin{aligned}&\int_{|x-\xi|>R_\de\sqrt\de}\left(-2PU_{\delta, \xi}^3+6u_0 PU_{\delta, \xi}^2 \right)\\ &=-2\int_{|x-\xi|>R_\de\sqrt\de}U_{\de,\xi}^3+6\int_{|x-\xi|>R_\de\sqrt\de}u_0 U_{\de,\xi}^2+\mathcal O\(\delta^4\)\\
& =-2\int_{|y|>\frac{R_\de}{\sqrt\de}}\frac{\alpha_6^3}{(1+|y|^2)^6}+6\de^2\int_{|y|>\frac{R_\de}{\sqrt\de}}u_0(\de y+\xi)\frac{\alpha_6^2}{(1+|y|^2)^4}+\mathcal O\(\de^4\)\\
& =-2\omega_6\alpha_6^3\int_{\frac{R_\de}{\sqrt\de}}^{+\infty}\frac{r^5}{(1+r^2)^6}+6\de^2\omega_6\alpha_6^2 u_0(\xi_0)\int_{\frac{R_\de}{\sqrt\de}}^{+\infty}\frac{r^5}{(1+r^2)^4}+\mathcal O\(\de^4\int_{\frac{R_\de}{\sqrt\de}}^{+\infty}\frac{r^7}{(1+r^2)^4}\)+\mathcal O\(\de^4\)\\
&= -\frac 13\omega_6\alpha_6^3 R_\de^{-6}\de^3+3\de^3\omega_6\alpha_6^2R_\de^{-2}u_0(\xi_0)+\mathcal O\(\de^4|\log\de|\)\\
&= -\frac 13\omega_6\alpha_6^3 R_0^{-6}\de^3+3\de^3\omega_6\alpha_6^2R_0^{-2}u_0(\xi_0)+o\(\de^{3}\), \ \hbox{because of \eqref{ro}}.
\end{aligned}$$
and by comparison
\begin{equation}\label{fuoripalla}
\int_{\{u_0+\e v_0\ge PU_{\delta, \xi}\}\cup B^c\(\xi,\delta^\frac14\)}\left(-2PU_{\delta, \xi}^3+6u_0 PU_{\delta, \xi}^2 \right)= -\frac 13\omega_6\alpha_6^3 (R_0)^{-6}\de^3+3\de^3\omega_6\alpha_6^2(R_0)^{-2}u_0(\xi_0)+o\(\de^{3}\).\end{equation}
In a similar way, by \eqref{claimlevel}
$$\begin{aligned}\int_{|x-\xi|<R_\delta^1\sqrt\de}\left(-2u_0^3+6u_0^2 PU_{\delta, \xi} \right)&\le \int_{\{0< u_0+\e v_0<PU_{\delta, \xi}\}\cap B\(\xi,\delta^\frac14\)}\left(-2u_0^3+6u_0^2 PU_{\delta, \xi} \right)\\ &\le \int_{|x-\xi|<R_\de^2\sqrt\de}\left(-2u_0^3+6u_0^2 PU_{\delta, \xi} \right).\end{aligned}$$
and if $R_\delta$ denotes either $R^1_\delta$ or $R^2_\delta$ we get
$$\begin{aligned}&\int_{|x-\xi|<R_\de\sqrt\de}\(-2u_0^3+6u_0^2 PU_{\delta, \xi}\)\\ &=-2\de^6\int_{|y|<\frac{R_\de}{\sqrt\de}}u_0^3(\de y+\xi)+6\de^4\int_{|y|<\frac{R_\de}{\sqrt\de}}u_0^2(\de y+\xi)\frac{\alpha_6}{(1+|y|^2)^2}+\mathcal O\(\de^5\)\\
& =\(-2u_0^3(\xi_0)+\mathcal O\(\sqrt\de\)\)\de^6\omega_6\int_0^{\frac{R_\de}{\sqrt \de}}r^5+6\alpha_6\(u_0^2(\xi_0)+\mathcal O\(\sqrt\de\)\)\de^4\omega_6\int_0^{\frac{R_\de}{\sqrt\de}}\frac{r^5}{(1+r^2)^2}+\mathcal O\(\de^5\)\\
& =-2\de^3u_0^3(\xi_0)\omega_6R_\de^6+3\alpha_6\de^3u_0^2(\xi_0)\omega_6R_\de^2+\mathcal O\(\de^{\frac 72}\)\\
& =-2\de^3u_0^3(\xi_0)\omega_6R_0^6+3\alpha_6\de^3u_0^2(\xi_0)\omega_6R_0^2+o\(\de^3\), \ \hbox{because of \eqref{ro}}.
\end{aligned}$$
and by comparison
\begin{equation}\label{dentropalla}
\int_{\{u_0+\e v_0<PU_{\delta, \xi}\}\cap B\(\xi,\delta^\frac14\)}\left(-2u_0^3+6u_0^2 PU_{\delta, \xi} \right)=-2\de^3u_0^3(\xi_0)\omega_6R_0^6+3\alpha_6\de^3u_0^2(\xi_0)\omega_6R_0^2+o\(\de^{3}\)\end{equation}
Finally, by \eqref{dentropalla} and \eqref{fuoripalla}
$$I_5=|\e|^3d^3\(-\frac{11}9\omega_6\alpha_6^{\frac 32}(u_0(\xi_0))^{\frac 32}+o(1)\)$$
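For the reader's convenience we point out how the constant $\frac{11}9$ arises: since $R_0^4=\frac{\alpha_6}{u_0(\xi_0)}$ by \eqref{ro}, the right hand sides of \eqref{fuoripalla} and \eqref{dentropalla} reduce to
$$\frac 83\,\omega_6\alpha_6^{\frac 32}(u_0(\xi_0))^{\frac 32}\de^3+o\(\de^3\)\quad\hbox{and}\quad \omega_6\alpha_6^{\frac 32}(u_0(\xi_0))^{\frac 32}\de^3+o\(\de^3\),$$
respectively, so that $I_5=-\frac 13\(\frac 83+1\)\omega_6\alpha_6^{\frac 32}(u_0(\xi_0))^{\frac 32}\de^3+o\(\de^3\)$, and $\de^3=d^3|\e|^3$ as in the estimate of $I_3$.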
Collecting all the previous estimates we get
$$
\tilde J_\e(d,\eta)= \mathfrak c_0(\e)+|\e|^3\underbrace{\left\{ \mathtt{sgn}(\e)\(1-2v_0(\xi_0)\) d^2\mathfrak a_1+d^3\(\mathfrak a_2 \langle D^2 u_0(\xi_0)\eta, \eta\rangle-\mathfrak a_3 \)\right \}}_{=:\Upsilon(d,\eta)}+o\(|\e|^3\)
$$
with
$$\begin{aligned}
&\mathfrak a_1 =\alpha_6^2\(\int\limits_{\mathbb R^6}\frac{1}{(1+|y|^2)^4}dy\)=96\omega_6 \\
&\mathfrak a_2 =\frac{\alpha_6^2}2\int\limits_{\mathbb R^6}\frac{dy}{(1+|y|^2)^4}\\
&\mathfrak a_3 =\frac{11}{9}\omega_6\alpha_6^{\frac 32}(u_0(\xi_0))^{\frac 32}
\end{aligned}$$
and that concludes the proof.
\end{proof}
We are now in position to prove Theorem \ref{main1}.
\begin{proof}[\bf Proof of Theorem \ref{main1}: completed] The claim follows by Proposition \ref{cruc0} taking into account that
if $ \mathtt{sgn}(\e)\(1-2v_0(\xi_0)\)>0$ the function $\Upsilon$ always has an isolated maximum point
$(d_0,0),$ with $ d_0:={2\mathfrak a_1\over 3\mathfrak a_3}\mathtt{sgn}(\e)\(1-2v_0(\xi_0)\)$, which is stable under uniform perturbations.
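Indeed, setting $A:=\mathtt{sgn}(\e)\(1-2v_0(\xi_0)\)$, along $\eta=0$ we have $\partial_d\Upsilon(d,0)=d\(2\mathfrak a_1 A-3\mathfrak a_3 d\)$, which is positive for $0<d<d_0$ and negative for $d>d_0$.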
\end{proof}
\section{A generic result}\label{gen}
Let $\Omega_0$ be a bounded and smooth domain in $\mathbb R^n$, let $D$ be an open neighbourhood of $\overline{\Omega_0}$ and let $\alpha\in(0,1).$
There exists $\epsilon>0$ such that if $\theta\in C^{3,\alpha}(\overline D,\mathbb R^n)$ with $ \|\theta\|_{2,\alpha}\le\epsilon $ then $\Theta=I+\theta$ maps $\Omega_0$ in a one-to-one way onto the smooth domain $\Omega_\theta:=\Theta(\Omega_0)$ with boundary $\partial\Omega_\theta=\Theta(\partial\Omega_0).$\\
If $x\in\Omega_0$ we agree that $\hat x=\Theta x=(I+\theta)x\in\Omega_\theta,$ with $\theta\in V.$ If
$\hat u\in H^1_0(\Omega_\theta)\cap H^2(\Omega_\theta)$ then it is clear that
$u=\hat u\circ\Theta \in H^1_0(\Omega_0)\cap H^2(\Omega_0).$
\\
Our result reads as follows.
\begin{theorem}\label{main}
The set
\begin{equation}\label{non-de}\begin{aligned}\Xi:=\big\{\theta\in C^{3,\alpha}(\overline D,\mathbb R^n)\ :\ &\hbox{if $\lambda>0$ and $u\in H^1_0(\Omega_\theta)$ solve }\\
&\Delta u+\lambda u+|u|^{4\over n-2}u=0\ \hbox{in}\ \Omega_\theta,\ u=0\ \hbox{on}\ \partial\Omega_\theta\\
& \hbox{then $u$ is non-degenerate} \big\}\end{aligned}\end{equation}
is a residual subset in $C^{3,\alpha}(\overline D,\mathbb R^n),$
i.e. $C^{3,\alpha}(\overline D,\mathbb R^n)\setminus \Xi$ is a countable union of closed subsets without interior points.
\end{theorem}
The proof relies on the following abstract transversality theorem (see \cite{Q,ST,U}).
\begin{theorem}\label{tran}
Let $X,Y,Z$ be three Banach spaces and $U\subset X,$ $V\subset Y$ open subsets.
Let $F:U\times V\to Z$ be a $C^\alpha-$map with $\alpha\ge1.$ Assume that
\begin{itemize}
\item[i)] for any $y\in V$, $F(\cdot,y):U\to Z$ is a Fredholm map of index $l$ with $l\le\alpha;$
\item[ii)] $0$ is a regular value of $F$, i.e. the operator $F'(x_0,y_0):X\times Y\to Z$ is onto at any point $(x_0,y_0)$ such that $F(x_0,y_0)=0;$
\item[iii)] the map $\pi\circ i:F^{-1}(0)\to Y$ is $\sigma-$proper, i.e. $F^{-1}(0)=\cup_{s=1}^{+\infty} C_s$
where $C_s$ is a closed set and the restriction $\pi\circ i_{|_{C_s}}$ is proper for any $s$; here $i:F^{-1}(0)\to Y$ is the canonical embedding and $\pi:X\times Y\to Y$ is the projection.
\end{itemize}
Then the set
$\mathcal V:=\left\{y\in V\ :\ 0\ \hbox{is a regular value of } F(\cdot,y)\right\}$
is a residual subset of $V$, i.e. $V\setminus \mathcal V$ is a countable union of closed subsets without interior points.
\end{theorem}
Indeed, in our case we choose
$$\begin{aligned}
&X=\mathbb R\times \(H^1_0(\Omega_0)\cap H^2(\Omega_0)\)\\
&U=(0,\infty)\times \(H^1_0(\Omega_0)\cap H^2(\Omega_0)\setminus\{0\}\)\\
&Y= C^{3,\alpha}\(\overline D,\mathbb R^n\)\\
&V=\mathcal B_\epsilon:=\left\{\theta\in C^{3,\alpha}(\overline D,\mathbb R^n)\ :\ \|\theta\|_{2,\alpha}<\epsilon\right\}\\
&Z=\mathbb R\times L^2(\Omega_0).
\end{aligned}$$
$X$ and $Z$ are Banach spaces equipped with the norms
$\|(a,u)\|_X:=|a|+\|u\|_{H^1_0\cap H^2(\Omega_0)},$
and $\|(a,u)\|_Z:=|a|+\|u\|_{L^2(\Omega_0)},$
respectively.
Moreover, the function $F:U\times V\to Z$ is defined by
$$
F(\lambda,u,\theta):=\(Q (\lambda,\hat u,\theta),\Delta _{\hat x} \hat u+|\hat u|^{p-1}\hat u+\lambda \hat u\),
$$
where
$$
Q (\lambda,\hat u,\theta):=\int\limits_{\Omega_\theta}\(|\nabla_{\hat x} \hat u|^2-|\hat u|^{p+1}-\lambda \hat u^2\)d\hat x.
$$
It is clear that
$$F(\lambda,u,\theta)=(0,0)\ \Leftrightarrow\ \Delta _{\hat x} \hat u+|\hat u|^{p-1}\hat u+\lambda \hat u=0\ \hbox{in}\ \Omega_\theta,\ \hat u=0\ \hbox{on}\ \partial\Omega_\theta. $$
Theorem \ref{main} will follow by Theorem \ref{tran} as soon as we prove that $F$ satisfies the assumptions and this is done below.\\
First of all, we rewrite $F$ in terms of the $x-$variable (see \cite{ST,P})
\begin{lemma} We have
\begin{equation}\label{f1}
Q (\lambda,\hat u,\theta):=\int\limits_{\Omega_0}\left\{\nabla u\cdot\left[(\det \Theta')(\Theta')^{-1}(^t\Theta')^{-1}\nabla u\right]-\(|u|^{p+1}+\lambda u^2\)(\det \Theta')\right\}d x.
\end{equation}
and
\begin{equation}\label{f2}\Delta _{\hat x} \hat u+|\hat u|^{p-1}\hat u+\lambda \hat u=
{\mathrm {div}}\left[(\det \Theta')(\Theta')^{-1}(^t\Theta')^{-1}\nabla u\right]+\(| u|^{p-1} u+\lambda u\)(\det\Theta').
\end{equation}
\end{lemma}
At this point it is useful to point out the following fact.
\begin{remark}\label{norm}
We can choose $\epsilon>0$ small enough so that for any $\theta\in \mathcal B_\epsilon$
$$\(\ \int\limits_{\Omega_0}\(\left| \left\langle(\det \Theta')(\Theta')^{-1}(^t\Theta')^{-1}\nabla u,\nabla u\right\rangle\right|^2
+\left|{\mathrm {div}}\left[(\det \Theta')(\Theta')^{-1}(^t\Theta')^{-1}\nabla u\right]\right|^2\)dx\)^{1/2}$$
defines on $H^1_0(\Omega_0)\cap H^2(\Omega_0)$ a norm which is equivalent to the standard one
$$\| u\|_{H^1_0\cap H^2(\Omega_0)}=\(\ \int\limits_{\Omega_0}\(|\nabla u|^2+|\Delta u|^2\)dx\)^{1/2}.$$
\end{remark}
Next, we check the differentiability of $F$ (see \cite{ST,P}).
\begin{lemma}
The function $F$ is differentiable at any $(\lambda_0,u_0,\theta_0)\in U\times V$ such that $F(\lambda_0,u_0,\theta_0)=(0,0).$
Moreover if $\Theta_0=I+\theta_0$
\begin{equation}\label{f3}
\begin{aligned} &F'(\lambda_0,u_0,\theta_0)[\lambda,u]\\ &=\( \int\limits_{\Omega_0}\left\{2\nabla u_0\cdot\left[(\det \Theta_0')(\Theta_0')^{-1}(^t\Theta_0')^{-1}\nabla u\right]-\((p+1)|u_0|^{p-1}u_0+2\lambda _0 u_0\)u(\det \Theta_0')\right\}d x\right.\\ & \qquad -\lambda \int\limits_{\Omega_0}u_0^2(\det \Theta'_0)d x,\\
&\quad\left.\mathrm {div}\left[(\det \Theta_0')(\Theta_0')^{-1}(^t\Theta_0')^{-1}\nabla u\right]+\(p| u_0|^{p-1} +\lambda_0 \)u(\det\Theta_0')+\lambda u_0(\det\Theta'_0),\)
\end{aligned}
\end{equation}
and if $\theta_0=0$
\begin{equation}\label{f4}
\begin{aligned} &F'(\lambda_0,u_0,\theta_0)[\theta]\\ &=\(\int\limits_{\Omega_0}\left\{\nabla u_0\cdot\left[(\mathrm {div}\theta)\nabla u_0-(\theta'+{}^t\theta')\nabla u_0\right]-\(|u_0|^{p+1}+ \lambda _0 u_0^2\)(\mathrm {div}\theta)\right\}d x\right.,\\
&\quad\left.{\mathrm {div}}\left[(\mathrm {div}\theta)\nabla u_0-(\theta'+{}^t\theta')\nabla u_0\right]+\(| u_0|^{p-1} u_0 +\lambda_0u_0 \)(\mathrm {div}\theta)\).
\end{aligned}
\end{equation}
\end{lemma}
Let us check assumption i) of Theorem \ref{tran}.
\begin{lemma}
For any $\theta\in V$ the function $F(\cdot,\cdot,\theta)$ is a Fredholm map from $U$ into $Z$ of index 0.
\end{lemma}
\begin{proof}
The partial derivative $F'_{\lambda,u}(\lambda_0,u_0,\theta_0):X\to Z$ is the sum of an isomorphism $\mathcal I$ and a compact perturbation $\mathcal K,$ namely
$$
\mathcal I(\lambda,u):=
\( -\lambda \int\limits_{\Omega_0}u_0^2(\det \Theta'_0)d x,\mathrm {div}\left[(\det \Theta_0')(\Theta_0')^{-1}(^t\Theta_0')^{-1}\nabla u\right]\)$$
and
$$\begin{aligned}
\mathcal K(\lambda,u ):=&\( \int\limits_{\Omega_0}\left\{2\nabla u_0\cdot\left[(\det \Theta_0')(\Theta_0')^{-1}(^t\Theta_0')^{-1}\nabla u\right]-\((p+1)|u_0|^{p-1}u_0+2\lambda _0 u_0\)u(\det \Theta_0')\right\}d x\right.,\\
&\left. \(p| u_0|^{p-1} +\lambda_0 \)u(\det\Theta_0')+\lambda u_0(\det\Theta'_0)\).\end{aligned}
$$
The map $\mathcal I$ is an isomorphism (because of Remark \ref{norm} and the fact that $\int\limits_{\Omega_0}u_0^2(\det \Theta'_0)d x\not=0$), while $\mathcal K$ is a compact operator, so that $F'_{\lambda,u}(\lambda_0,u_0,\theta_0)$ is a Fredholm operator of index $0.$
\end{proof}
Let us check assumption iii) of Theorem \ref{tran}.
\begin{lemma} The map $\pi\circ i: F^{-1}(0)\to Y$ is $\sigma-$proper.
\end{lemma}
\begin{proof}
Let us write
$$F^{-1}(0,0)=\cup_{m=1}^\infty \mathcal C_m,\ \mathcal C_m=\(A_m\times B_m\times C_m\)\cap F^{-1}(0,0),$$
where
$$A_m:=\left\{\frac1m\le \lambda\le m\right\},$$
$$\begin{aligned}B_m:&=\left\{u\in {H^1_0(\Omega_0)\cap H^2(\Omega_0)}\ :\ \frac1m\le \|u\|:=\(\ \int\limits_{\Omega_0}\(|\nabla u|^2+(\Delta u)^2\)dx\)^{\frac12}\le m
\right\}\end{aligned}$$
and
$$C_m:= \left\{\theta\in C^{3,\alpha}(\overline D,\mathbb R^n)\ :\ \|\theta\|_{2,\alpha}\le \epsilon\(1-\frac1m\)\right\}. $$
Let us fix $m$. We have to prove that if $(\theta_k)_{k\ge1}\subset C_m$ with $\theta_k\to\theta$ and $(\lambda_k,u_k)_{k\ge1}\subset A_m\times B_m$ is such that
$F(\lambda_k,u_k,\theta_k)=0$ then, up to a subsequence, $(\lambda_k,u_k)\to(\lambda,u)\in A_m\times B_m$ and $F(\lambda,u,\theta)=0.$
First of all, up to a subsequence, we have $\lambda_k\to\lambda\in A_m$ and $u_k\to u$ weakly in $H^1_0(\Omega_0)\cap H^2(\Omega_0)$ and strongly in $L^q(\Omega_0)$ for any $q>1$ if $n=3,4$ and $1<q<{2n\over n-4}$ if $n\ge5.$
If $\Theta_k=I+\theta_k$ we know that $\Theta_k\to \Theta:=I+\theta$ in $ C^{1,\alpha}(\Omega_0,\mathbb R^n).$ Now, condition $F(\lambda_k,u_k,\theta_k)=0$ reads as
$${\mathrm {div}}\(\underbrace{(\det \Theta_k')(\Theta_k')^{-1}(^t\Theta_k')^{-1}}_{=A_k}\nabla u_k\)+\underbrace{\(| u_k|^{p-1} u_k+\lambda _k u_k\) (\det\Theta_k')}_{=f_k}=0\ \hbox{in}\ \Omega_0,\ u=0\ \hbox{on}\ \partial\Omega_0.$$
In particular, for any $ \varphi\in H^1_0(\Omega_0)$
\begin{equation}\label{comp1}\int\limits_{\Omega_0}\left[\left\langle A_k\nabla u_k,\nabla \varphi\right\rangle+f_k\varphi\right]dx=0\end{equation}
and so passing to the limit
\begin{equation}\label{comp2}\int\limits_{\Omega_0}\left[\left\langle \underbrace{(\det \Theta')(\Theta')^{-1}(^t\Theta')^{-1}}_{=A}\nabla u,\nabla \varphi\right\rangle+\underbrace{\(| u|^{p-1} u+\lambda u\)(\det\Theta')}_{=f}\varphi\right]dx=0,\end{equation}
namely
$${\mathrm {div}}\(A\nabla u\)+f=0\ \hbox{in}\ \Omega_0,\ u=0\ \hbox{on}\ \partial\Omega_0,$$
i.e. $F(\lambda,u,\theta)=0.$\\
Now, let us prove that $u_k\to u$ strongly in $H^1_0(\Omega_0)\cap H^2(\Omega_0).$
By \eqref{comp1} and \eqref{comp2} we deduce
$$\begin{aligned}\int\limits_{\Omega_0} \left\langle A\nabla (u_k-u),\nabla (u_k-u)\right\rangle&=\int\limits_{\Omega_0} \left\langle A\nabla u_k, \nabla u_k\right\rangle+ \int\limits_{\Omega_0} \left\langle A\nabla u , \nabla u \right\rangle-2\int\limits_{\Omega_0} \left\langle A\nabla u , \nabla u_k\right\rangle\\
&=\int\limits_{\Omega_0} \left\langle (A-A_k)\nabla u_k, \nabla u_k\right\rangle+ \int\limits_{\Omega_0}\(-f_ku_k-fu+2fu_k\)\\
&=o(1),
\end{aligned}$$
because $A_k\to A$ in $C^0(\Omega_0)$ and $u_k\to u$ strongly in $L^{2n\over n-2}(\Omega_0).$
Moreover, we also have
$$\begin{aligned}\int\limits_{\Omega_0}\({\mathrm {div}}\(A\nabla (u_k-u)\)\)^2&=\int\limits_{\Omega_0}\({\mathrm {div}}\((A-A_k)\nabla u_k\)-f_k+f \)^2\\
&\le 2\int\limits_{\Omega_0}\({\mathrm {div}}\((A-A_k)\nabla u_k\)\)^2+2\int\limits_{\Omega_0}\(f_k-f \)^2\\
&=o(1), \end{aligned}$$
because $A_k\to A$ in $C^0(\Omega_0)$ and $u_k\to u$ strongly in $L^{2(n+2)\over n-2}(\Omega_0).$
Then the claim follows directly from Remark \ref{norm}.
\end{proof}
Let us check assumption ii) of Theorem \ref{tran}.
\begin{proposition}$(0,0)$ is a regular value of $F.$
\end{proposition}
\begin{proof}
Let $(\lambda_0,u_0,\theta_0)\in U\times V$ such that $F(\lambda_0,u_0,\theta_0)=(0,0).$ We shall prove that
if $(\lambda,u)\in X$ is such that
\begin{equation}\label{f5}
\left\{ \begin{aligned}&F'(\lambda_0,u_0,\theta_0)[\lambda,u]=0\\
&\langle F'(\lambda_0,u_0,\theta_0)[\theta],(\lambda,u)\rangle_Z=0\ \hbox{for any}\ \theta\in Y\end{aligned}\right.\ \Rightarrow
\ \lambda=0\ \hbox{and}\ u\equiv0. \end{equation}
Without loss of generality we can assume $\theta_0=0.$ Then $\Theta_0=I$ and by \eqref{f1} and \eqref{f2} condition $F(\lambda_0,u_0,\theta_0)=(0,0)$ reads as
\begin{equation}\label{f6}
\left\{\begin{aligned}&\int\limits_{\Omega_0}\(|\nabla u_0|^2-|u_0|^{p+1}-\lambda_0 u_0^2\) d x=0\\
&\Delta u_0+|u_0|^{p-1}u_0+\lambda_0u_0=0\ \hbox{in}\ \Omega_0,\ u=0\ \hbox{on}\ \partial\Omega_0.
\end{aligned}\right.\end{equation}
Moreover by \eqref{f3} and \eqref{f4} condition \eqref{f5} can be rephrased as
\begin{equation}\label{f7}\left\{
\begin{aligned}
&
\int\limits_{\Omega_0}\left\{2\nabla u_0 \nabla u -\((p+1)|u_0|^{p-1}u_0+2\lambda_0 u_0\)u -\lambda u_0^2 \right\}d x=0\\
& \Delta u+\(p| u_0|^{p-1} +\lambda_0 \)u +\lambda u_0 =0\ \hbox{in}\ \Omega_0,\ u=0\ \hbox{on}\ \partial\Omega_0
\end{aligned}\right.\end{equation}
and
\begin{equation}\label{f8}
\begin{aligned}
&\lambda \int\limits_{\Omega_0}\left\{\nabla u_0\cdot\left[(\mathrm {div}\theta)\nabla u_0-(\theta'+{}^t\theta')\nabla u_0\right]-\(|u_0|^{p+1}+ \lambda _0 u_0^2\)(\mathrm {div}\theta)\right\}d x\\
&+\int\limits_{\Omega_0}\left\{{\mathrm {div}}
\left[(\mathrm {div}\theta)\nabla u_0-(\theta'+{}^t\theta')\nabla u_0\right]+\(| u_0|^{p-1} u_0 +\lambda_0u_0 \)(\mathrm {div}\theta)\right\}udx=0 \ \forall\ \theta\in Y.
\end{aligned}
\end{equation}
We can simplify expression \eqref{f8}. Indeed, taking into account that
\begin{equation}\label{f10}\Delta u_0+\underbrace{| u_0|^{p-1} u_0 +\lambda_0u_0}_{=g(u_0)}=0\ \hbox{in}\ \Omega_0,\ u=0\ \hbox{on}\ \partial\Omega_0,
\end{equation}
we have
$$\begin{aligned}\mathrm {div}\left[(\mathrm {div}\theta)\nabla u_0-(\theta'+{}^t\theta')\nabla u_0\right]&=\mathrm {div}(\theta\Delta u_0)-\Delta (\theta\nabla u_0)=-\mathrm {div}(g(u_0)\theta)-\Delta (\theta\nabla u_0)\\ &=-g(u_0)(\mathrm {div}\theta)-g'(u_0)\nabla u_0\theta-\Delta (\theta\nabla u_0).\end{aligned}$$
Moreover,
$$\int\limits_{\Omega_0}\Delta (\theta\nabla u_0)udx=-\int\limits_{\partial\Omega_0} \theta\nabla u_0\partial_\nu udx+\int\limits_{\Omega_0} \theta\nabla u_0\Delta udx$$
Therefore, \eqref{f8} reads as
\begin{equation}\label{f9}
\begin{aligned}
0= &\lambda \int\limits_{\Omega_0}\left\{\left[g(u_0) u_0(\mathrm {div}\theta)+g'(u_0)u_0\nabla u_0\theta+\theta\nabla u_0\underbrace{\Delta u_0}_{=-g(u_0)}\right]-\underbrace{\(|u_0|^{p+1}+ \lambda _0 u_0^2\)}_{=g(u_0)u_0}(\mathrm {div}\theta)\right\}d x\\ &-\lambda \int\limits_{\partial\Omega_0}\theta\nabla u_0\partial_\nu u_0dx\\
&+\int\limits_{\Omega_0}\left\{\left[-g(u_0)u(\mathrm {div}\theta)-g'(u_0)u\nabla u_0\theta-\theta\nabla u_0\underbrace{\Delta u}_{=-g'(u_0)u-\lambda u_0}\right]+\underbrace{\(| u_0|^{p-1} u_0 +\lambda_0u_0 \)}_{=g(u_0)}(\mathrm {div}\theta)u\right\} dx\\ &+ \int\limits_{\partial\Omega_0}\theta\nabla u_0\partial_\nu udx\\ &
=\lambda \int\limits_{\Omega_0}\underbrace{\(g'(u_0)u_0-g(u_0)
+u_0\)}_{= (p-1)|u_0|^{p-1}u_0+u_0}\theta\nabla u_0dx + \int\limits_{\partial\Omega_0}\theta\nabla u_0\(\partial_\nu u
-\lambda \partial_\nu u_0\)dx.
\end{aligned}
\end{equation}
Now, we prove that $\lambda=0$. Indeed, by taking deformations $\theta$ which fix the boundary of $\Omega_0$, by \eqref{f9} we get
$$\lambda \int\limits_{\Omega_0} \left[ (p-1)|u_0|^{p-1}u_0+u_0\right]\theta\nabla u_0dx=0\ \hbox{for any}\ \theta\in V,\ \theta=0\ \hbox{on}\ \partial\Omega_0.$$
If $\lambda\not=0$ then we necessarily have
$$ u_0\left[ (p-1)|u_0|^{p-1}+1\right]\nabla u_0=0\ \hbox{a.e. in}\ \Omega_0,$$
and so $u_0\nabla u_0=0$ a.e. in $\Omega_0.$ This is not possible because $u_0$ solves \eqref{f10} and by the unique continuation theorem in \cite{AKS} we know
that $\mathrm {meas}\{x\in\Omega_0\ :\ u_0(x)=0\}=\mathrm {meas}\{x\in\Omega_0\ :\ \nabla u_0(x)=0\}=0.$\\
Since $\lambda=0$ by \eqref{f9} we deduce that
$$ \int\limits_{\partial\Omega_0}\theta\nabla u_0\partial_\nu u
dx =0\ \hbox{for any}\ \theta\in Y$$
and arguing exactly as in \cite{ST}, page 313-314, we deduce that $u=0.$
That concludes the proof.
\end{proof}
\begin{proposition}\label{least}
For any $\theta\in \Xi$ as in \eqref{non-de} there exists $\lambda_\theta\in (0,\lambda_1(\Omega_\theta))$ such that
\begin{equation}\label{cru}\lambda_\theta=2\max\limits_{\Omega_\theta} u_{\lambda_\theta}.\end{equation}
\end{proposition}
\begin{proof}
Let $\theta\in \Xi$ as in \eqref{non-de} and let us consider the perturbed domain $\Omega_\theta$. For any $\lambda\in (0,\lambda_1(\Omega_\theta))$ let $u_{\lambda}$ be the least energy positive solution on the domain $\Omega_\theta$, which is non-degenerate because of Theorem \ref{main}.
Therefore, by the Implicit Function Theorem we deduce that there exists a continuous curve $\lambda\mapsto u_\lambda$ of such solutions.
Let us consider the continuous function
$$f(\lambda):=\lambda-2\|u _\lambda\|_{L^{\infty}(\Omega_\theta)},\ \lambda\in (0,\lambda_1(\Omega_\theta)).$$
Since
$$\lim\limits_{\lambda\to0}\|u _\lambda\|_{L^{\infty}(\Omega_\theta)}=+\infty\ \hbox{and}\ \lim\limits_{\lambda\to\lambda_1(\Omega_\theta)}\|u _\lambda\|_{L^{\infty}(\Omega_\theta)}=0$$
(see \cite{Han91} and the classical bifurcation theory, respectively),
there exists $\lambda_\theta$ such that $f(\lambda_\theta)=0$ and the claim \eqref{cru} follows.
\end{proof}
\begin{proof}[\bf Proof of Theorem \ref{main-generico}.] It follows immediately by Theorem \ref{main} and Proposition \ref{least}.
\end{proof}
\bibliographystyle{abbrv}
\section{Introduction}
\label{cha:intro}
Electroencephalography (EEG) is one of the most common methods of measuring brain electrical activities \cite{ubeyli2009combined}. The signals contain complex information about the state of the human brain, which can be used to study brain functions, detect brain abnormalities and so on. Due to the uniqueness of the EEG signal from every individual, it can be a reliable source of biometric identification \cite{campisi2014brain,palaniappan2007biometrics}; on the other hand, such personal information could be used for malicious purposes if not well protected. With increasing interest in and demand for research on EEG signals, there have been many EEG databases disclosed to the public. Therefore, there is a need to protect from exposure the personal identity information in the EEG signals of the experiment subjects, who will kindly contribute to such public EEG datasets in the future.
Our key contribution is the creation of labelled EEG with dummy identities mimicking EEG signals from groups that have common features, but with averaged identity information. The EEG data with dummy identities helps disguise the identity information in the original EEG by composing a training set in the target domain, to which the original EEG is transformed. The experiment results suggest that the dummy identities effectively hide real identities in EEG, while our approach keeps the features of interest.
\section{Related Works}
\label{cha:background}
Yao et al. proposed a feature filter to protect certain diseases from exposure \cite{yao2020information}. There, the EEG signals with disease information are transferred to healthy EEG signals without such disease, so that the disease information can be effectively removed. The limitation of that approach is that it can only deal with the features from a limited number of categories (closed set), such that the EEG signals can be transferred from one class to another. For information that has potentially infinite classes (open set), such as personal identity, it is not feasible to find a class of EEG signals that has no identity information.
Instead of removing the information related to biometrics in EEG, this paper proposes an approach of disguising the identity information using a dummy identity, such that the EEG signal cannot be used for person recognition.
\section{Methods}
\label{cha:methodology}
\subsection{UCI EEG Dataset}
The time-series data consists of EEG signals from two groups of subjects – alcoholic individuals and controls \cite{begleiter1999eeg}. The dataset has three attributes: alcoholism, stimulus condition and subject identity. Each EEG signal contains $64 \text{ channels} \times 256 \text{ samples}$ of data. The data within each subject is split into a training set (70\%), a testing set (20\%) and a validation set (10\%), using the same within-subject train-test-validation splitting method as \cite{li2017targeting,yao2020information}.
\subsection{EEG Data Pre-processing}
The raw time-domain EEG data is challenging to analyse due to its high dimensionality. We can extract prominent features from the frequency bands in the EEG spectrum \cite{mcgrogan1999neural,subasi2005automatic} for dimension reduction. We convert the EEG signals into the frequency domain using the Fast Fourier Transform (FFT), which can capture most characteristics of stationary signals; the short-term EEG signals in this work can be regarded as stationary \cite{tcheslavski2012alcoholism}.
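As an illustration of this step (not the exact implementation used in our experiments), the following Python sketch computes the per-channel power of each frequency band from one $64\times256$ trial via FFT; the sampling rate of 256 Hz and the helper name \texttt{band\_powers} are assumptions of ours.
\begin{verbatim}
import numpy as np

def band_powers(eeg, fs=256, bands=((4, 8), (8, 13), (13, 30))):
    # eeg: one trial of shape (64 channels, 256 samples)
    # returns an array of shape (64, 3): theta/alpha/beta power per channel
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / fs)
    power = np.abs(np.fft.rfft(eeg, axis=1)) ** 2     # power spectrum
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(power[:, mask].sum(axis=1))      # total power in the band
    return np.stack(feats, axis=1)
\end{verbatim}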
\begin{figure}[t]
\centering
\includegraphics[width=0.6\linewidth]{Pictures/EEG2Img.pdf}
\caption[Convert EEG spectrum into RGB EEG images] {Convert EEG spectrum into RGB EEG images \cite{yao2020information}. Features from three frequency bands are extracted and used as red, green and blue channels of an image.}
\label{figure:EEG2img}
\end{figure}
To extract lower-level spectrum features, we adopt an approach proposed by Bashivan et al., which captures both spectral features and spatial information and represents them in images \cite{bashivan2015learning}. As in Fig. \ref{figure:EEG2img}, with the spectrum features of the 64 channels and their corresponding electrode locations projected on a 2D plane, a feature matrix is calculated for each frequency band. We obtain color images by merging the feature matrices from three key frequency bands. As the EEG signals are collected from subjects as evoked by visual stimuli, we
select the mid-range frequency bands, i.e. bands $\theta$ (4-8 Hz), $\alpha$ (8-13 Hz) and $\beta$ (13-30 Hz), which are less noisy and capture the most related information \cite{musha1997feature}.
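A minimal sketch of this conversion is given below; it assumes that the 2D projected electrode coordinates are available (the name \texttt{locs2d} is ours) and uses a simple cubic interpolation onto a square grid instead of the exact interpolation scheme of \cite{bashivan2015learning}.
\begin{verbatim}
import numpy as np
from scipy.interpolate import griddata

def to_rgb_image(band_feats, locs2d, size=32):
    # band_feats: (64, 3) theta/alpha/beta powers, locs2d: (64, 2) projected
    # electrode positions; returns a (size, size, 3) image, one band per channel
    xs = np.linspace(locs2d[:, 0].min(), locs2d[:, 0].max(), size)
    ys = np.linspace(locs2d[:, 1].min(), locs2d[:, 1].max(), size)
    gx, gy = np.meshgrid(xs, ys)
    img = np.zeros((size, size, 3))
    for b in range(3):
        img[:, :, b] = griddata(locs2d, band_feats[:, b], (gx, gy),
                                method='cubic', fill_value=0.0)
    img -= img.min(axis=(0, 1), keepdims=True)        # normalise each channel
    img /= img.max(axis=(0, 1), keepdims=True) + 1e-8
    return img
\end{verbatim}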
\subsection{Disguised EEG Images Generation}
\subsubsection{Obtaining Labelled EEG Images with Dummy Identities}
\label{sec:dummyEEG}
In neuroscience, grand averaging of EEG signals across subjects is a common technique when statistically analysing EEG patterns in certain conditions \cite{delorme2015grand,polich1997eeg,marshall2007infant}. The waveform of the grand mean of EEG signals can be regarded as a representative of a group and used to study their significant characteristics. A similar technique is also used in computer vision (CV) to investigate facial characteristics. E.g., Burt and Perrett generated composite face images by averaging the faces from different age groups, and those average faces can still be correctly recognized as faces in the corresponding age range \cite{burt1995perception}.
We adopt this technique to generate average EEG images, which hold the characteristics of the common features of EEG signals in a certain group while having average biometric information. The averaged biometric information can be regarded as dummy identities. As the EEG dataset has 2 classes for the alcoholism attribute and 5 classes for the stimulus condition attribute, we split the dataset into 10 groups corresponding to the 10 combinations of attributes. Then we take the grand average power of the EEG spectra across several subjects within each group. The average EEG spectra are converted into EEG images (see section 3.2). This process is applied to the training dataset and the average EEG images are labelled as they are obtained within each group.
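The following sketch illustrates how such labelled average images can be produced; the grouping of trials by subject is elided and the group size is a hypothetical parameter of ours.
\begin{verbatim}
import numpy as np
from collections import defaultdict

def dummy_identity_images(images, alco_labels, stim_labels, group_size=5):
    # images: per-trial EEG images; trials are grouped by their
    # (alcoholism, stimulus) labels and averaged in chunks taken from
    # several subjects, so each output keeps the group features but only
    # an averaged ("dummy") identity
    groups = defaultdict(list)
    for img, a, s in zip(images, alco_labels, stim_labels):
        groups[(a, s)].append(img)
    averaged, labels = [], []
    for (a, s), items in groups.items():
        items = np.asarray(items)
        for i in range(0, len(items) - group_size + 1, group_size):
            averaged.append(items[i:i + group_size].mean(axis=0))
            labels.append((a, s))
    return averaged, labels
\end{verbatim}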
\subsubsection{From Original to Disguised}
We train a CycleGAN model \cite{zhu2017unpaired} to generate the disguised EEG images that have the features which are consistent with the corresponding original images, but with dummy identity information instead of real identity information. For the training datasets, the source-domain data are the real EEG images and the target-domain data are the average EEG images with dummy identity obtained in section \ref{sec:dummyEEG}.
The EEG images used in the model are labelled and we want them to be correctly classified to the original labels after translation to the target domain with dummy identities. Therefore, we need further constraints on our model to minimize the information loss in the most interesting features. We apply task and semantic loss in our model, as proposed by Hoffman et al. in their Cycle-consistent Adversarial Domain Adaptation (CyCADA) model \cite{hoffman2017cycada,yao2020simulating}.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{Pictures/CyCADA.pdf}
\caption[The architecture of the image disguise model] {The architecture of the image disguising model with task loss and semantic loss. $G_X$ and $G_Y$ denote the generators with real input images from X and Y domain respectively; $D_X$ and $D_Y$ denote the discriminator that distinguishes whether images are from X and Y domains respectively; $\bm{x}$ and $\bm{y}$ denote input images from X domain and Y domain. $G_Y(G_X(\bm{x}))$ is a reconstructed image in X domain, which should be close to the corresponding real images $\bm{x}$; $C$ denotes the classifier that is used to add a further constraint to the model. For clearer illustration, only one direction of the cycle ($\bm{x}\rightarrow G_X(\bm{x})\rightarrow G_Y(G_X(\bm{x}))$) is shown.}
\label{figure:cycada}
\end{figure}
We add an additional classifier C to our own CycleGAN-based model (Fig. \ref{figure:cycada}) \cite{hoffman2017cycada}. First, the classifier learns to make predictions on the labels of EEG images by minimizing the task loss during training. After the classifier is sufficiently trained to have relatively low task loss (\textless{1.0}), the model can take the semantic consistency into account, which means the label of a disguised EEG $G_X(\bm{x})$ predicted by the classifier should be consistent with the predicted label of the original EEG $\bm{x}$; the same holds for the other generator $G_Y$. The generators should learn to minimize the semantic loss during training.
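A sketch of the two constraints is shown below (PyTorch-style, with hypothetical function names); the classifier $C$ is first trained with the task loss and then kept fixed while the generators minimise the semantic consistency term.
\begin{verbatim}
import torch
import torch.nn.functional as F

def task_loss(classifier, images, labels):
    # supervised loss used to (pre)train the classifier C
    return F.cross_entropy(classifier(images), labels)

def semantic_loss(classifier, real, translated):
    # the frozen classifier's prediction on the original image should be
    # preserved by the translated (disguised) image; the same term is
    # added for both directions of the cycle
    with torch.no_grad():
        pseudo = classifier(real).argmax(dim=1)
    return F.cross_entropy(classifier(translated), pseudo)
\end{verbatim}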
\subsection{Classification Tasks}
To evaluate our model, we need to validate whether the personal identity information has been disguised such that it fails the person recognition task; also, we need to validate whether the information of interest is preserved, which means a disguised EEG image should be correctly classified to its original label. Therefore, we need to train classification models to predict the labels of the EEG data with respect to original subject identities, alcoholism and stimulus conditions.
\subsubsection{ResNet Classifier}
We could use a deep CNN model to extract high-level features from EEG images, which contains the low-level features we obtained from the EEG spectrum. Although deeper networks may give better results with higher-level features extracted, they also bring difficulties to optimization during the training and as a result degrade the performance in practice; this problem can be addressed by deep residual learning \cite{he2016deep}. Thus, we implement ResNet models for the classifications which are essential to the evaluation step. To explore the impact of depth of ResNet models on our classification tasks, we will experiment with 18-layer, 34-layer and 50-layer ResNet models.
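A possible instantiation of these classifiers is sketched below; it relies on the standard torchvision ResNet-18 trained from scratch on the three-channel EEG images, with the output dimension set to the task at hand (the helper names are ours).
\begin{verbatim}
import torch.nn as nn
from torchvision.models import resnet18

def build_classifier(num_classes):
    # 2 classes for alcoholism, 5 for stimulus condition,
    # or one class per subject for identity recognition
    return resnet18(num_classes=num_classes)

def train_epoch(model, loader, optimiser, device="cpu"):
    model.train()
    criterion = nn.CrossEntropyLoss()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimiser.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimiser.step()
\end{verbatim}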
\subsubsection{Joint Training} Since the average EEG images obtained in section \ref{sec:dummyEEG} are labelled data, we make use of them for joint training by combining them with the training dataset when training the classification model for alcoholism detection and stimulus classification tasks. This can add randomness and diversity to the training dataset thus improving the generalization of the classification model \cite{krizhevsky2012imagenet}.
\section{Experiment Results and Discussion}
\label{cha:result}
\subsection{Evaluation Criteria}
We aim for the disguised EEG images to fail the person recognition task while not sacrificing the performance on the alcoholism detection task and the stimulus classification task, and $accuracy = \frac{\text{TP + TN}}{\text{TP + FP + TN + FN}}$ is an important criterion when measuring those results. When classifying the subject identities of the disguised EEG, the accuracy should drop significantly compared with the results for original EEG images using the same classifier, while the accuracy of detecting alcoholism and classifying stimulus conditions of the disguised EEG images should be at the same level as that of the original EEG images.
Both the test dataset and validation dataset are imbalanced in the alcoholism label, so it is biased if accuracy is the only evaluation criterion. Thus, we also use $sensitivity = \frac{\text{TP}}{\text{TP + FN}}$ and $specificity = \frac{\text{TN}}{\text{TN + FP}}$ as additional evaluation criteria for the alcoholism detection task.
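For completeness, a direct way of computing these criteria from the predicted and true binary labels is the following sketch.
\begin{verbatim}
def binary_metrics(pred, truth):
    # pred, truth: sequences of 0/1 labels (1 = alcoholic)
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))
    tn = sum(p == 0 and t == 0 for p, t in zip(pred, truth))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, truth))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, truth))
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return accuracy, sensitivity, specificity
\end{verbatim}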
\subsection{Performance of the Classification Models}
To evaluate the performance of our EEG disguising model, we need to first train a classification model. We experiment with ResNet-based classification models with different numbers of layers. For comparison, we also trained an autoencoder-based classification model from Yao et al. \cite{yao2020information}.
\begin{table}[t]
\centering
\caption{The testing results of classification models. Our ResNet-based model outperforms the autoencoder-based model by 18.87\% in terms of identity accuracy. }
\label{results_cls}
\begin{tabular}{@{}|l !{\vrule width 1pt} c|c|c !{\vrule width 1pt} c !{\vrule width 1pt} c|@{}}
\hline
\multirow{2}{*}{} & \multicolumn{3}{c!{\vrule width 1pt}}{Alcoholism (\%)} & \multicolumn{1}{l!{\vrule width 1pt}}{Stimulus (\%)} & \multicolumn{1}{l|}{Identity (\%)} \\ \cline{2-6}
& Acc. & Sens. & Spec. & Acc. & Acc. \\ \Xhline{1pt}
Yao et al. & 87.84 & 88.16 & 84.42 & 51.67 & 79.53 \\%\hline
ResNet-18 & \textbf{92.49} & \textbf{91.13} & \textbf{93.85} & 61.98 & \textbf{98.40} \\%\hline
ResNet-34 & 91.88 & 90.78 & 92.99 & \textbf{62.69} & 97.26 \\%\hline
ResNet-50 & 87.11 & 83.76 & 90.49 & 60.91 & 90.49 \\
\hline
\end{tabular}
\end{table}
As shown in Table \ref{results_cls}, the ResNet-18 classification model performs well on alcoholism and personal identity recognition, with an accuracy of 92.5\% and 98.4\% respectively. In addition, the model achieves 91.1\% sensitivity and 93.9\% specificity in the alcoholism detection task, which is a balanced result. The testing results indicate that the ResNet-18 model has good generalization when classifying unseen samples. The model predicts the stimulus condition with an accuracy of 62.0\%, which is much higher than chance (20\%) although it may not be optimal. The ResNet-34 classifier achieves similar results (slightly lower) compared to the ResNet-18 model and only the accuracy of stimulus condition prediction task (62.7\%) is slightly higher, which indicates that the deeper network may contribute slightly to this task. However, the performance degrades when we explore deeper networks using a ResNet-50 classifier (Table \ref{results_cls}).
The results in Table \ref{results_cls} show that the ResNet models outperform Yao et al.'s autoencoder-based model in general. With deeper networks and more free parameters, the ResNet models are able to extract enough higher-level features for the classification tasks, especially the personal identity recognition, on which the ResNet models achieve the most improvement. Although the ResNet models benefit from deeper networks, the model with the most layers does not achieve the best results. The performance of the ResNet-50 model drops significantly. It indicates that the model is overly deep and has too many parameters for these EEG-related classification tasks, which causes overfitting.
\subsection{Evaluation on the EEG Disguising Model}
We use trained ResNet-18 models for the personal identity, alcoholism and stimulus condition classification tasks on both the original validation dataset and the corresponding disguised EEGs. In this experiment, we assume that the alcoholism information is of more interest; so during the training, the semantic loss and task loss come from the classifier’s prediction on the alcoholism label.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{Pictures/cls_bar_EEG.pdf}
\caption[Classification results: original EEG vs. disguised EEG] {Classification results: original EEG vs. disguised EEG. The disguised images generated by our method hide about 90\% of the personal identity information and preserve most of the key features used for alcoholism and stimulus prediction. }
\label{figure:cls_bar_EEG}
\end{figure}
As shown in Fig. \ref{figure:cls_bar_EEG}, when classifying the personal identity of the original EEG images, the accuracy is 97.5\%; after those are disguised, the accuracy drops dramatically to 9.2\%. It indicates that the personal identity information in the original EEG is successfully hidden by our EEG disguising model and cannot be recognized by the classifier, which performs well on the original EEG data. For alcoholism detection, the classifier performs well on the original EEG images, achieving 91.3\% sensitivity and 95.4\% specificity; when using the same model to classify the disguised EEG images, the specificity only drops to 91.6\%, while the sensitivity drops to 72.8\%. For the stimulus condition classification task, although the accuracy drops from 61.7\% to 43.4\% when the task is performed on the disguised EEG, it is still much higher than chance (20\%).
Although the decrease in the results of the alcoholism and stimulus classification tasks shows that there is some information loss in terms of the corresponding features, we do not see a sharp fall comparable with that in the results of the identity recognition task. It indicates that most of the alcoholism and stimulus features are preserved such that the classification model can still make similar predictions.
\subsection{Ablation Study on the EEG Disguising Model}
\begin{table}[t]
\centering
\caption[Classification results (\%): disguised EEG with different semantic constraints]{Classification results (\%): disguised EEG with different semantic constraints. (a) Results with original EEG. (b) The 1st row (baseline) shows results without semantic constraints; the 2nd row (+ Alc.) shows results with the semantic constraint on the alcoholism feature; the 3rd row (+ Sti.) shows results with the semantic constraint on the stimulus feature; the 4th row (+ Alc. \& Sti.) shows results with the semantic constraint on both of the alcoholism and stimulus feature.}
\label{results_disguised_eeg}
\begin{tabular}{|c !{\vrule width 1pt} c|c|c|c|}
\multicolumn{5}{l}{(a) Original EEG} \\ \hline
Original EEG & ID Acc. & Alcoholism Sens. & Alcoholism Spec. & Stimulus Acc. \\ \Xhline{1pt}
& 97.46 & 91.29 & 95.44 & 61.66 \\ \hline
\multicolumn{5}{l}{} \\
\multicolumn{5}{l}{(b) Disguised EEG with Different Constraints} \\ \hline
Constraint on & ID Acc. $\downarrow$ & Alcoholism Sens. $\uparrow$ & Alcoholism Spec.$\uparrow$ & Stimulus Acc.$\uparrow$ \\ \Xhline{1pt}
Baseline & 0.48 & 65.17 & 35.41 & 37.79 \\ \hline
+ Alc. & \textbf{9.19} & 72.79 & \textbf{91.56} & 43.35 \\
+ Sti. & 21.19 & 78.91 & 62.38 & \textbf{52.61} \\
+ Alc. \& Sti. & 48.97 & \textbf{93.47} & 64.59 & 50.41 \\ \hline
\end{tabular}
\end{table}
\subsubsection{Ablation: Semantic Constraints on Stimulus Condition Feature} We also use the classifier in our EEG disguising model to impose a semantic constraint associated with the stimulus condition feature on the disguised EEG. Table \ref{results_disguised_eeg} shows that the stimulus classification task on the disguised EEG achieves 52.6\% accuracy, which is significantly higher than the counterpart result of using the classifier for alcoholism feature (43.4\%). It implies that the classifier can effectively preserve more information of interest. Also, the results show that the performance of alcoholism detection task degrades, as the loss of alcoholism information increases without the constraint on it. Further, we find that the accuracy of personal identity task on disguised EEG also increases to 21.2\%, which implies that the constraint may be too strong for the EEG disguising model so that more personal identity information is kept to reduce the semantic loss. A possible reason could be the classifier relies more on the identity information to predict the stimulus condition rather than the actual stimulus features.
\subsubsection{Ablation: Semantic Constraints on both Alcoholism and Stimulus} We explore more restricted semantic constraints on the model by using the classifier to predict both the alcoholism and stimulus labels when calculating semantic loss. The results in Table \ref{results_disguised_eeg} show the accuracy of the stimulus condition classification task (50.4\%) and the sensitivity of the alcoholism task (93.5\%) are higher, compared to the results of the models without semantic constraints on the stimulus or alcoholism features, respectively. However, the specificity of the alcoholism detection task is low, one possible reason is that the classifier may not be able to well handle the multi-label classification problem. The accuracy of personal identity increases as the constraint has become too strong.
\subsubsection{Ablation: No Additional Semantic Constraints} We experiment with the model without the additional classifier that places semantic constraints on the disguised EEG. The results (Table \ref{results_disguised_eeg}) show that the performance degrades on both the alcoholism and the stimulus condition classification task with the disguised EEG. It demonstrates that the semantic constraints are critical in our model to preserve the information of interest. Correspondingly, the accuracy of personal identity recognition (0.48\%) drops significantly compared with the results of the model with the constraints. Without constraints, more identity information in the original EEG is disguised.
\section{Conclusion and Future Work}
Our EEG disguising model can be used to protect personal privacy, by hiding the personal identity information in EEG signals. The results demonstrate that the model is able to disguise 90\% of the personal identity information in EEG signals with dummy identities while preserving most of the key information. In addition, we experiment with ResNet classifiers that can be used to perform different EEG classification tasks. From the results, we find that ResNet models are suitable for complex EEG signals, especially for the personal identity recognition, which requires more parameters and higher-level features to solve.
The information loss during the EEG disguising process should be reduced; we may use a more complex classifier as the constraint. Also, we will validate our model on other EEG datasets to determine experimentally how well the model works in general. In addition, we will explore other techniques to improve the performance of the stimulus condition classification task. As an extension, we will also explore the possibility of decoding EEG signals at the feature level; one hypothetical approach could be gradually filtering out different key features in an EEG signal to see whether we can decode the EEG signal eventually.
\section{Introduction}
We consider the \emph{exploration problem}
with a single mobile entity, called a mobile agent or simply an \emph{agent}, in any undirected, simple, and connected graph $G=(V,E)$. The agent is a finite state machine and it can migrate between two nodes via edges in $E$
at each step. When the agent visits a node $v \in V$,
it can read and update the local memory, called \emph{whiteboard},
of $v$.
The graph is anonymous, that is, the nodes do not have
unique identifiers.
Our goal is to let the agent visit all nodes in the graph
within as small number of steps as possible
by using as small space of agent memory and whiteboards
as possible.
The exploration problem is one of the most fundamental problems
in a computer network with mobile agents
and has been intensively studied in the literature,
to name a few, \cite{PDD+96,PP99,YWI+03,IKY09,MT10,SBN15}.
This problem is practically motivated, for example,
by the situation where a mobile agent has to search for data at unknown nodes in a network
or has to find broken communication links between two nodes.
In this paper, we address the exploration problem in much harder setting: we consider
\emph{self-stabilizing exploration}.
Specifically, we do not assume any specific initial global state
(or \emph{configuration}) of a network.
In other words, when the agent begins an exploration,
(i) the location of the agent in $G$ is arbitrary,
(ii) the state of the agent is arbitrary,
and
(iii) the content of each whiteboard is arbitrary.
The agent has to visit all nodes in $G$
starting from any (possibly inconsistent) configuration.
Generally, we say that an algorithm is self-stabilizing \cite{dijkstra82}
for problem $P$ if it solves $P$ starting from an arbitrary configuration.
A self-stabilizing algorithm tolerates any kind of transient faults
such as temporal memory corruption.
It also tolerates the change of the topology of a graph,
that is, any combination of the addition of nodes and/or edges
and the deletion of nodes and/or edges
unless the resulting graph violates the specification that we assume.
In our case, a self-stabilizing algorithm tolerates any change of the topology
unless the graph is partitioned into two or more connected components.
Therefore, it is both practically and theoretically important
to design self-stabilizing algorithms.
\begin{table*}[t]
\caption{Self-stabilizing graph exploration algorithms for a single agent.}
\label{tbl:algorithms}
\center
\small
\begin{tabular}{c c c c c}
\hline
&Type & Cover Time & Agent Memory & Memory on node $v$\\
\hline
Simple Random Walk & Randomized & $O(\min(mn,mD\log n))$ & 0 & 0\\
\multirow{2}{*}{ \begin{tabular}{c}Biased Random Walk \cite{IKY09}\\ (requires $\Delta \ge \dmax$ )\end{tabular}}&
\multirow{2}{*}{Randomized} &
\multirow{2}{*}{$O(n^2 \dmax \log n)$} & $O(\log \Delta)$ & $O(\delta_v \log \Delta)$\\
& & & $O(\Delta \log \Delta)$ & 0\\
$\pran{c}$ ($c\ge 2$) & Randomized &
$\begin{aligned}
&O(\min(m \lceil n/\max(c-n,n/D)\rceil,\\
&\hspace{2.5cm} m\lceil D/c \rceil \log n))
\end{aligned}$
& $O(\log c)$ & $O(\log \delta_v+\log c )$\\
$\pran{c}$ ($c = \Omega(D)$)
& Randomized & $O(m \log n)$ & $O(\log c)$ & $O(\log \delta_v+\log c )$\\
$\pran{c}$ ($c = n(1+\Omega(1))$) & Randomized & $O(m)$ & $O(\log c)$ & $O(\log \delta_v+\log c )$ \\
\hline
Rotor-router \cite{PDD+96}& Deterministic & $O(mD)$ & 0 & $O(\log \delta_v)$ \\
2-color DFT \cite{MT10}
& Deterministic & $O(mD)$ & $O(1)$ & $O(\log \delta_v)$ \\
$\pdet{k}$ ($k \ge D-1$) & Deterministic & $O(m+nD)$ & $O(\log k)$ & $O(\delta_v + \log k)$ \\
\hline
\end{tabular} \\
\vspace{0.1cm}
The cover times of randomized algorithms are shown in expectation. \\ We use design parameters $c$ and $k$ for algorithms $\pran{c}$ and $\pdet{k}$, while $D$ is the diameter of a graph.
\end{table*}
\subsection{Related Work}
If we are allowed to use randomization,
we can easily solve the self-stabilizing exploration
with a well known strategy called \emph{the simple random walk}.
When the agent visits a node $v \in V$, it simply chooses a node as the next destination uniformly at random
among $N(v)$,
where $N(v)$ is the set of all neighbors of $v$ in $G$.
In other words, it moves to any node $u \in N(v)$ with probability $P_{v,u} = 1/\delta_v$, where $\delta_v = |N(v)|$
is the \emph{degree} of $v$.
It is well known that the agent running this simple algorithm visits all nodes in $G$
within $O(\min(mn,mD\log n))$ steps,
where $n=|V|$, $m=|E|$, and $D$ is the diameter of $G$ \cite{AKJ+79,Mat88}.\footnote{
The expected number of steps until the agent reaches a node $v$ from a node $u$
is called \emph{hitting time}, denoted by $h_G(u,v)$.
By Lemma 3 in \cite{AKJ+79}, $h_G(u,v)$
is bound by $O(m\cdot d(u,v))$ where $d(u,v)$ is the distance
between $u$ and $v$ (\ie the length of the shortest path between them). Therefore, the agent visits all nodes in $G$
within $\sum_{(u,v) \in T} h_G(u,v) = O(mn)$ steps in expectation,
where $T$ is any spanning tree on $G$.
Matthews \cite{Mat88} proves that
the agent visits all nodes in $G$
within $\max_{u,v \in V_G} h_G(u,v) \log n$ steps in expectation.
Together with the above bound (\ie $h_G(u,v) =O(m \cdot d(u,v))$),
it is trivial that the agent visits all nodes
within $O(mD \log n)$ steps.
}
Since the agent is oblivious (\ie the agent does not bring any information at a migration between two nodes) and does not use whiteboards, the simple random walk is obviously a self-stabilizing exploration algorithm.
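As a sketch (in Python, for simulation purposes only), one step of the walk and the resulting cover time can be written as follows; the adjacency-list representation is an assumption of ours.
\begin{verbatim}
import random

def simple_random_walk_cover(adj, start):
    # adj: dict mapping each node to the list of its neighbours;
    # the agent is oblivious, so any starting node (and state) is legal
    unvisited = set(adj) - {start}
    v, steps = start, 0
    while unvisited:
        v = random.choice(adj[v])      # uniform neighbour
        unvisited.discard(v)
        steps += 1
    return steps
\end{verbatim}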
Ikeda, Kubo, and Yamashita \cite{IKY09} improved the cover time
(\ie the number of steps to visit all nodes)
of the simple random walk
by setting the transition probability as
$P'_{v,u} = \delta_u^{-1/2}/\sum_{w \in N(v)}\delta_w^{-1/2}$
for any $u \in N(v)$.
They proved that the cover time of this \emph{biased random walk} is $O(n^2 \log n)$ steps in expectation.
However, we cannot use this result directly in our setting
because the agent must know the degrees of all neighbors of the current node to compute the next destination.
We can implement this random walk, for example, as follows:
every time the agent visits node $v$,
it first obtains $(\delta_u)_{u \in N(v)}$
by visiting all $v$'s neighbors in $2\delta_v$ steps,
and then decides the next destination
according to probability $(P'_{v,u})_{u \in N(v)}$,
which is now computable with $(\delta_u)_{u \in N(v)}$.
However, this implementation increases the cover time
by a factor of at least $\dmin$ and at most $\dmax$,
where $\dmin = \min_{v \in V}\delta_v$ and
$\dmax = \max_{v \in V}\delta_v$.
Whereas $n^2 \dmax \log n > mn$ always holds,
$n^2 \dmin \log n < \min(mn,mD\log n)$ may also hold.
Thus, we cannot determine which random walk has smaller cover time
without detailed analysis.
To bound the space complexity, we must know an upper bound
$\Delta$ on $\dmax$ to implement this random walk.
If the agent stores $(\delta_u)_{u \in N(v)}$ on $v$'s whiteboard,
it uses $O(\log \Delta)$ bits in the agent-memory
and $O(\delta_v \log \Delta)$ bits in the whiteboard of each node $v$.
If the agent stores $(\delta_u)_{u \in N(v)}$
only on the agent-memory,
it uses $O(\Delta \log \Delta)$ bits in the agent-memory.
The algorithm given by Priezzhev, Dhar, Dhar, and Krishnamurthy \cite{PDD+96}, which is nowadays
well known as the \emph{rotor-router},
solves the self-stabilizing exploration
deterministically.
The agent is oblivious, but it uses only $O(\log \delta_v)$
bits in the whiteboard of each node $v \in V$.
The edges $(\{v,u\})_{u \in N(v)}$ are assumed
to be locally labeled by $0,1,\dots,\delta_v-1$ in a node $v$.
The whiteboard of each node $v$ has one variable
$v.\lastport \in \{0,1,\dots,\delta_v-1\}$.
Every time the agent visits a node $v$,
it increases $v.\lastport$ by one modulo $\delta_v$
and moves to the next node via the edge
labeled by the updated value of $v.\lastport$.
This simple algorithm guarantees that
starting from any configuration,
the agent visits all nodes within $O(mD)$ steps \cite{YWI+03}.
Masuzawa and Tixeuil \cite{MT10} also gave a deterministic
self-stabilizing exploration algorithm.
This algorithm itself is designed to solve the gossiping problem
where two or more agents have to share their given information with each other.
However, this algorithm has a mechanism to visit all the nodes
starting from any configuration, which can be seen
as a self-stabilizing exploration algorithm.
The cover time and the space complexity for the whiteboards
of this algorithm are asymptotically the same
as those of the rotor-router,
while it uses a constant number of bits of agent-memory,
unlike oblivious algorithms such as the rotor-router.
If we assume a specific initial configuration,
that is, if we do not require a self-stabilizing solution,
the agent can easily visit all nodes within $2m$ steps
with a simple depth first traversal (DFT).
Panaite and Pelc \cite{PP99} gave a faster algorithm,
whose cover time is $m + 3n$ steps.
They assume that the nodes are labeled by the unique identifiers.
Their algorithm uses $O(m \log n)$ bits in the agent-memory,
while it does not use whiteboards.
Sudo, Baba, Nakamura, Ooshita, Kakugawa, and Masuzawa \cite{SBN15}
gave another implementation of this algorithm:
they removed the assumption of the unique identifiers
and reduced the space complexity on the agent-memory
from $O(m\log n)$ bits to $O(n)$ bits by using $O(n)$ bits in
each whiteboard.
It is worthwhile to mention that
these algorithms \cite{PP99,SBN15}
guarantee the termination of the agent
after exploration is completed,
whereas designing a self-stabilizing exploration algorithm
with termination is impossible.
Self-stabilization and termination
contradict each other by definition:
if an agent-state that yields termination exists,
the agent never completes exploration
when starting exploration with this state.
If no such state exists,
the agent never terminates the exploration.
A few self-stabilizing algorithms were given for mobile agents
to solve problems other than exploration.
Blin, Potop-Butucaru, and Tixeuil \cite{BPT07}
studied the self-stabilizing naming and leader election problem.
Masuzawa and Tixeuil \cite{MT10} gave a self-stabilizing
gossiping algorithm. Ooshita, Datta, and Masuzawa \cite{ODM17}
gave self-stabilizing rendezvous algorithms.
\subsection{Our Contribution}
In this paper, we investigate how small a cover time
we can achieve in a self-stabilizing setting.
One can easily observe that
the cover time is lower bounded by $\Omega(m)$:
any deterministic algorithm requires $\Omega(m)$ steps
and any randomized algorithm requires $\Omega(m)$ steps
in expectation before the agent visits all nodes.
(For completeness, we will prove this lower bound
in Section \ref{sec:lower}.)
Our goal is to give an algorithm
whose cover time is close to this lower bound
with as small complexity of agent-memory and whiteboards
as possible.
We give two self-stabilizing exploration algorithms
$\pran{c}$ and $\pdet{k}$,
where $c$ and $k$ are the design parameters.
The cover times and the space complexities
of the proposed algorithms and the existing algorithms
are summarized in Table \ref{tbl:algorithms}.
Algorithm $\pran{c}$ is a randomized algorithm.
Generally,
the agent visits all nodes
within $\result$
steps in expectation
and uses $O(\log c)$ bits in the agent-memory
and $O(\log \delta + \log c)$ bits of the whiteboard
of each node with degree $\delta$.
Thus, we have a trade-off between the cover time and
the space complexity:
the larger the $c$ we use, the smaller the cover time we obtain.
In particular,
the expected cover time
is $O(m\log n)$ steps if we set $c=\Omega(D)$,
and it becomes optimal (\ie $O(m)$ steps) if we set
$c$ such that $c \ge 2n$.
(Strictly speaking,
any $c = n(1+\Omega(1))$ is enough.)
This means that we require the knowledge of
an upper bound of $n$ to make $\pran{c}$ time-optimal.
Fortunately, this assumption can be ignored
from a practical point of view:
even if $c$ is much larger than $n$,
the overhead will be just an additive term of $\log c$
in the space complexity.
Thus, any large value in $O(\poly(n))$ is enough to obtain
the optimal cover time and the space complexity of
$O(\log n)$ bits both in the agent memory and whiteboards.
Moreover, irrespective of parameter $c \ge 2$,
the cover time is $O(mD)$ steps with probability $1$.
Algorithm $\pdet{k}$ is a deterministic algorithm.
The cover time of $\pdet{k}$ is $O(m+nD)$ steps,
which does not depend on parameter $k$,
while the agent uses $O(\log k)$ bits in the agent-memory
and $O(\delta_v + \log k)$ bits of $v$'s whiteboard
for all $v \in V$.
Thus, we do not have a trade-off between the cover time
and the space complexity.
However, unlike
$\pran{c}$, we require an upper bound on the diameter $D$
of graph $G$, that is, $\pdet{k}$ requires $k \ge D-1$
to achieve self-stabilizing exploration.
If $k < D-1$, the correctness of $\pdet{k}$ is no longer
guaranteed.
However,
the knowledge of an upper bound on $D$
is not a strong assumption because the space complexity
increases only logarithmically in $k$:
we can assign any large $O(\poly(n))$ value for $k$
to satisfy $k \ge D-1$ while keeping the space complexity
of the agent-memory and $v$'s whiteboard
bounded by $O(\log n)$ bits and $O(\delta_v + \log n)$ bits,
respectively.
For example, consider the case that we set $k=2^{500}$.
Then, $\pdet{k}$ can fail only if $n \ge 2^{500}$,
which is too large to consider in practice.
This extremely large value of $k$
increases the memory usage
by only 500 bits in the agent-memory and in each whiteboard.
\section{Preliminaries}
\label{sec:model}
Let $G=(V,E,p)$ be a simple, undirected, and connected graph
where $V$ is the set of nodes and
$E$ is the set of edges.
The edges are locally labeled at each node:
we have a family of functions
$p=(p_v)_{v\in V}$
such that
each $p_v: \{\{v,u\} \mid u \in N(v)\} \to \{0,1,\dots,\delta_v-1\}$ uniquely assigns
a \emph{port number} to every edge incident to node $v$.
Two port numbers $p_u(e)$ and $p_v(e)$ are
independent of each other for edge $e =\{u,v\}\in E$.
The node $u$ neighboring to node $v$ such that
$p_v(\{v,u\}) = q$ is called the \emph{$q$-th neighbor} of $v$
and is denoted by $N(v,q)$.
For any two nodes $u,v$, we define the distance between $u$ and $v$ as the length of a shortest path between them and denote it by $d(u,v)$. We also define the set of the \emph{$i$-hop neighbors} of $v$ as $N_i(v) = \{u \in V \mid d(u,v) \le i\}$.
Note that $N_0(v)=\{v\}$ and $N_1(v)=N(v) \cup \{v\}$.
Throughout this paper, we denote the diameter of $G$ by $D$.
A program or \emph{algorithm} for an agent
is defined as a 3-tuple $\calp=(\phi,\calm,\calw)$,
where $\calm$ is the set of states for the agent,
$\calw=(\calw_k)_{k \in \mathbb{N}}$
is a family such that $\calw_k$ is
the set of states for a node with degree $k$,
and $\phi$ is a function that determines
how the agent updates its state (\ie agent memory) and
the state of the current node (\ie \emph{whiteboard}).
At each time step,
the agent is located at exactly one node
$v \in V$,
and moves through an edge incident to $v$.
Every node $v \in V$ has a whiteboard $w(v) \in \calw_{\delta_v}$,
which the agent can access freely when it visits $v$.
The function $\phi$ is invoked
every time the agent visits a node or when the exploration begins.
Suppose that the agent with state $s \in \calm$
has moved to node $v$ with state $w(v) = x \in \calw_{\delta_v}$
from $u \in N(v)$. Let $\pin=p_v(\{u,v\})$.
The function $\phi$ takes 4-tuple $(\delta_v,\pin,s,x)$ as the input
and returns 3-tuple $(\pout,s',x')$ as the output.
Then, the agent updates its state to $s'$
and $w(v)$ to $x'$, after which it moves via port $\pout$,
that is, it moves to $v'$ such that $\pout = p_v(\{v,v'\})$.
At the beginning of an execution, we let $\pin$ be
an arbitrary integer in $\{0,1,\dots,\delta_v-1\}$, where
$v$ is the node on which the agent is initially located.
Note that if algorithm $\calp$ is a randomized one,
function $\phi$ returns a probability distribution
over $(\pout,s',x')$.
\paragraph*{\textbf{Exploration Problem}}
\noindent
Given a graph $G=(V,E)$, a \emph{configuration} (or a global state)
consists of the location of the agent, the state of the agent
(including $\pin$), and the states of all the nodes in $V$.
Our goal is to let the agent visit all nodes in the graph starting from any configuration.
In other words,
we aim to design a \emph{self-stabilizing exploration algorithm}
for a single agent,
defined as follows.
\begin{definition}
Algorithm $\calp$
is a self-stabilizing exploration algorithm
for a class $\calg$ of graphs
if for any graph $G=(V,E,p) \in \calg$, the agent running $\calp$ on $G$ eventually visits all the nodes in $V$ at least once
starting from any configuration.
\end{definition}
We measure the efficiency of algorithm $\calp$
by three metrics:
\emph{the cover time}, \emph{the agent memory space},
and \emph{the whiteboard memory space}.
All the above metrics are evaluated in the worst-case manner
with respect to graph $G$ and an initial configuration.
The cover time is defined as the number of moves
that the agent makes before it visits all nodes.
If algorithm $\calp$ is a randomized one,
the cover time is evaluated in expectation.
The memory spaces of the agent and the whiteboard on node $v$
are just defined
as $\log_2 |\calm|$ and $\log_2|\calw_{\delta_v}|$,
respectively.
\paragraph*{\textbf{Algorithm Description}}
This paper presents two algorithm descriptions.
For simplicity,
instead of giving a formal 3-tuple $(\phi,\calm,\calw)$,
we specify the set of agent variables,
the set of whiteboard variables,
and the pseudocode of instructions that the agent performs.
In addition to the agent variables,
the agent must convey the program counter of the pseudocode
and the call stack (or the function stack) when it migrates
between nodes to execute an algorithm consistently.
Thus, all the agent variables, the program counter,
and the call stack
constitute the state of the agent.
Conforming to the above formal model,
the state of the agent changes only when it migrates
between nodes.
Therefore, the program counter of each state
is restricted to one of the line numbers
corresponding to the instructions of migration.
For example, in Algorithm \ref{al:pran} explained
in Section \ref{sec:random}, the domain of the program counter
in the agent states are $\{13,17,19\}$.
We can ignore the space for the program counter
and the call stack when evaluating the space complexity
because they require only $O(1)$ bits.
(Our pseudocodes do not use a recursive function.)
Whiteboard variables are stored in the whiteboard $w(v)$
of each node $v$.
All the whiteboard variables in $w(v)$
constitute the state of $w(v)$.
For clarification,
we denote an agent variable $x$ by $\self.x$
and a whiteboard variable $y$ in $w(v)$ by $v.y$.
Throughout this paper,
we call the node on which the agent currently resides
\emph{the current node} and denote it by $\vcur$.
We call the port number of the current node via which
the agent migrates to $\vcur$ \emph{in-port}
and denote it by $\pin$.
As mentioned above, the function $\phi$ can use $\pin$
to compute 3-tuple $(\pout, s', x')$.
In other words, the agent can always access $\pin$
to update its (agent) variables and
the whiteboard variables of the current node
and compute the destination of the next migration.
Note that
$\pin$ is an arbitrary integer in $\{0,1,\dots,\delta_{\vcur}-1\}$
at the beginning of an execution.
\begin{algorithm}
\caption{$\pran{c}$}
\label{al:pran}
\VarAgent{
\\
\varspace $\self.\clr \in \{1,2,\dots,c\}$
}
\aspace
\VarNode{
\\
\varspace $v.\parent \in \{\bot,0,1,\dots,\delta_v-1\}$\\
\varspace $v.\clr \in \{1,2,\dots,c\}$
}
\aspace
\Notation{$\nextr(v,q) = (q + 1) \bmod \delta_{v}$}
\aspace
\While{$\tr$}{
Choose $\self.\clr$ uniformly at random from $\{1,2,\dots,c\} \setminus \{\self.\clr\}$\;
$\vcur.\clr \gets \self.\clr$\;
$\goforward(0)$\;
\While{$\lnot (\vcur.\parent = \bot \land \pin = \delta_{\vcur}-1)$}{
\uIf{$\vcur.\clr \neq \self.\clr$}{
$\vcur.\clr \gets \self.\clr$\; $\vcur.\parent \gets \pin$\;
$\goforward(\nextr(\vcur,\pin))$\;
}
\Else{
\uIf{$\vcur.\parent = \nextr(\vcur,\pin)$}{
$\vcur.\parent \gets \bot$\;
Migrate to $N(\vcur,\nextr(\vcur,\pin))$\;
\tcp*{type-II backtracking}
}
\Else{
$\goforward(\nextr(\vcur,\pin))$\;
}
}
}
}
\Fn{$\goforward(q)$}{
Migrate to node $N(\vcur,q)$\;
\tcp*{forward move}
\If{$\vcur.\clr = \self.\clr$}{
Migrate to node $N(\vcur,\pin)$\;
\tcp*{type-I backtracking}
}
}
\end{algorithm}
\section{Randomized Exploration}
\label{sec:random}
In this section, we give a randomized
self-stabilizing exploration algorithm $\pran{c}$.
The cover time and the space complexity
of this algorithm depend on its design parameter $c \ge 2$.
The cover time is $O(m)$ if $c=n(1+\Omega(1))$ holds,
while it is $O(m \log n)$ if $c = \Omega(D)$,
where $D$ is the diameter of a graph.
It is guaranteed irrespective of $c$ that
the cover time is $O(mD)$ with probability 1.
Algorithm $\pran{c}$ requires $O(\log c)$ bits of agent memory
and $O(\log c + \log \delta_v)$ bits of each node $v \in V$.
If $c=2$, this algorithm becomes deterministic and almost the same as
the exploration algorithm
given by \cite{MT10}.
The idea of algorithm $\pran{c}$ is simple.
The agent tries to traverse all the edges every $2m$ moves according to a Depth-First Traversal (DFT).
However, we have an issue for executing DFT.
When the agent visits a node for the first time,
it has to mark the node as an \emph{already-visited} node.
However,
we must deal with an arbitrary initial configuration
in a self-stabilizing setting,
thus some of the nodes may be marked even at the beginning of an execution.
To circumvent this issue, we use \emph{colors} and let the agent make DFTs repeatedly.
The agent and all nodes maintain
their color in a variable $\clr \in \{1,2,\dots,c\}$.
Every time the agent begins a new DFT,
it chooses a new color $r$ uniformly at random from all the colors except for the current color of the agent.
In the new DFT,
the agent changes the colors of all the visited nodes to $r$.
Thus, the agent can distinguish the visited nodes and
the unvisited nodes with their colors:
it interprets that the nodes colored $r$ are already visited and
the nodes having other colors have not been visited before
in the current DFT.
Of course, the agent may make an incorrect decision
because one or more nodes may be colored $r$
when the agent chooses $r$ for its color at the beginning of a new DFT.
Thus, the agent may not perform a complete DFT.
However, the agent can still make progress on the exploration
to a certain extent in this DFT:
Let $\vst$ be the node at which the agent is located
when it begins the DFT,
let $S$ be the set of the nodes colored $r$ at that time,
and let $R$ be the set of all nodes in the connected component including $\vst$ of the induced subgraph $G[V \setminus S]$;
then it visits all nodes in $R$
and their neighbors.
Hence, as we will see later, the agent visits
all nodes in $V$
by repeating DFTs a small number of times,
which depends on how large $c$ is.
Figure \ref{fig:pran} shows an example of one DFT where
$\vst = v_1$ and $S=\{v_6,v_9\}$. The agent tries to make a DFT
on the graph starting from node $v_1$.
Unfortunately,
whenever the agent visits node $v_6$ or $v_9$, which is colored $r$,
it immediately \emph{backtracks} to the previous node.
This is because it mistakenly \emph{thinks} that it has already visited these nodes even when it visits them for the first time
in the DFT.
Thus, it never visits $v_8$ or $v_{11}$, which is unreachable
from node $v_1$ in the induced subgraph $G[V \setminus S]$.
However, it visits all the other nodes.
Note that the indices of the nodes
are depicted in the figure only for simplicity of explanation;
we do not assume the existence of unique node identifiers.
\begin{figure}
\centering
\includegraphics[width=0.39\linewidth]{pran}
\caption{An example of the trajectory of one DFT}
\label{fig:pran}
\end{figure}
We present the following two lemmas in advance,
which we will prove later
after explaining the details of $\pran{c}$.
\begin{lemma}
\label{lemma:DFTtime}
Starting from any configuration,
the agent running $\pran{c}$ changes its color (\ie begins a new DFT) within $8m+n=O(m)$ steps.
\end{lemma}
\begin{lemma}
\label{lemma:expand}
Suppose now that the agent running $\pran{c}$
changes its color to $r$ and begins a new DFT at node $\vst$.
Let $S \subseteq V$ be the set of the nodes colored $r$ at that time
and let $R$ be the set of the nodes
in the connected component including $\vst$ in the induced subgraph $G[V \setminus S]$.
Then, the agent visits all nodes in $R$
and their neighbors in this DFT.
Moreover, this DFT finishes (\ie the agent changes its color)
at node $\vst$.
\end{lemma}
In the DFT mentioned in Lemma \ref{lemma:expand},
the agent changes the colors of all nodes in $R'=R \cup \bigcup_{v \in R}N(v)$ to $r$.
In the next DFT, the agent
must choose a different color from $r$,
thus it is guaranteed that the agent will visit
all nodes in $R'$ and their neighbors.
(It will visit much more nodes in many cases.)
Thus, the number of the nodes visited in the $i$-th DFT is monotonically increasing with respect to $i$.
Therefore,
we obtain the following corollary.
\begin{corollary}
\label{cor:visitall}
The agent running $\pran{c}$ visits all nodes
before it changes its color $D+1$ times.
\end{corollary}
\begin{theorem}
\label{theorem:pran}
Algorithm $\pran{c}$ is a randomized
self-stabilizing exploration algorithm
for all simple, undirected, and connected graphs.
Irrespective of $c$, the cover time is $O(mD)$ steps
with probability $1$.
The expected cover time is
$\result$ steps.
The agent memory space is $O(\log c)$
and the memory space of each node $v$ is $O(\log c + \log \delta_v)$.
\end{theorem}
\begin{proof}
The first and the second claims of this theorem
immediately follow from Lemma \ref{lemma:DFTtime}
and Corollary \ref{cor:visitall}.
The fourth claim is trivial (see the list of variables
in Algorithm \ref{al:pran}).
In the rest of this proof,
we prove the third claim of the theorem:
the agent visits all nodes within
$\result$ steps in expectation.
Let $\tau$ denote how many times the agent changes its color
before it visits all nodes in $V$.
By Lemma \ref{lemma:DFTtime},
it suffices to show
$\ex[\tau] = O(\lceil n/ \max(c-n,n/D) \rceil)$
and $\ex[\tau] = O(\lceil D/c \rceil \log n)$.
Note that
we can ignore the case $c = 2$ because
these equalities immediately hold by Corollary \ref{cor:visitall}.
First, we prove $\ex[\tau] = O(\lceil n/ \max(c-n,n/D) \rceil)$.
If $n/D \ge c-n$, this equation trivially holds
because we have $n/ \max(c-n,n/D) = D \ge \ex[\tau]$ by
Corollary \ref{cor:visitall}.
Consider the case $c-n \ge n/D \ge 1$.
By Lemma \ref{lemma:expand},
whenever the agent begins a new DFT,
it visits all nodes in $V$ in this DFT if
it chooses a color different from the colors of any node.
It chooses such a color
with probability at least
$1-(n-1)/(c-1) =(c-n)/(c-1)$.
Thus, $(c-1)/(c-n) = O(\lceil n/(c-n)\rceil)$ DFTs are sufficient
to visit all nodes in expectation.
Next, we prove $\ex[\tau] = O(\lceil D/c \rceil \log n)$.
By Lemma \ref{lemma:expand},
the agent changes its color (\ie begins all DFTs)
at the same node in an execution.
We denote this node by $\vst$.
Remember that $N_i(v)$ is the set of $i$-hop neighbors of $v$.
To prove $\ex[\tau] = O(\lceil D/c \rceil \log n)$,
it suffices to show that
for any $x \in \mathbb{N}^{+}$,
if the agent visits all nodes in $N_{x}(\vst)$
in the $i$-th DFT ($i \ge 1$),
it visits all nodes in $N_{x+c}(\vst)$
in the next $O(\log n)$ DFTs in expectation.
Let $u$ be any node in $N_{x+c}(\vst)$.
Then, there exists a path $v_0,v_1,\dots,v_l$
such that $v_0 \in N_{x}(\vst)$, $u=v_l$, and $l \le c$.
Since the agent visits all nodes in $N_{x}(\vst)$
in the $i$-th DFT,
for any $j \ge i$,
the agent and all nodes in $N_{x}(\vst)$ have the same color
when the $j$-th DFT finishes.
Therefore, by Lemma \ref{lemma:expand},
the agent visits $u$ in the $j+1$-th DFT
if it chooses a color different from the colors
of any node $v_1$, $v_2$, \dots, or $v_{l-1}$ at the beginning
of the $j+1$-th DFT.
The agent chooses such a color with probability
$(1-1/(c-1))^{l-1} \ge (1-1/(c-1))^{c-1} \ge (1-1/(c-1)) e^{-1} \ge 1/(2e)$.
Thus, the agent visits $u$ during the next $\lceil 4e \ln n \rceil$
DFTs (after the $i$-th DFT finishes) with probability at least
$1-(1-1/(2e))^{4e\ln n} \ge 1-n^{-2}$.
By the union bound, the agent visits all nodes in $N_{x+c}(\vst)$ during
this period with probability $1-O(1/n)=\Omega(1)$.
Thus, the next $O(\ln n)$ DFTs are sufficient to visit
all nodes in $N_{x+c}(\vst)$ in expectation.
\end{proof}
In the rest of this section, we explain $\pran{c}$ in detail
and prove Lemmas \ref{lemma:DFTtime} and \ref{lemma:expand}.
The pseudocode of $\pran{c}$ is shown in Algorithm \ref{al:pran}. The agent maintains one variable $\self.\clr \in \{1,2,\dots,c\}$, while each node $v$ maintains two variables
$v.\parent \in \{\bot,0,1,2,\dots,\delta_v-1\}$
and $v.\clr \in \{1,2,\dots,c\}$.
As mentioned before,
the agent uses variables $\self.\clr$ and $v.\clr$
to distinguish already-visited nodes and unvisited nodes
in the current DFT. It also uses whiteboard variable
$v.\parent$ to go back to the node at which it began
the current DFT.
Specifically, whenever the agent finds a node having different color from $\self.\clr$, it changes $\vcur.\clr$ to $\self.\clr$ while it simultaneously substitutes $\pin$ for $\vcur.\parent$ (Lines 7,8).
Thus, the agent performs a DFT while constructing a spanning tree on graph $G$. We say that node $u$ is the parent of node $v$ when $u = N(v,v.\parent)$.
In an execution of this algorithm, there are three kinds of agent-migration between nodes:
\begin{itemize}
\item \textbf{Forward Moves}: The agent executes this kind of migration when it tries to find an unvisited node in the current DFT,
that is, a node not colored $\self.\clr$.
The agent makes a forward move only in Line 17 in function
$\goforward()$, which is invoked in Lines 4, 9, and 15.
\item \textbf{Backtracking (Type I)}:
The agent executes this kind of migration when the agent makes a forward move, but the destination node already has color $\self.\clr$. By this migration, the agent backtracks to the previous node at which it made a forward move (Line 19).
\item \textbf{Backtracking (Type II)}:
The agent executes this kind of migration
when it thinks that it has already visited all nodes in $N(\vcur)$.
Specifically, it backtracks to the parent of $\vcur$
when $\vcur.\parent = \nextr(\vcur,\pin)$ (Line 13),
where $\nextr(v,q) = (q + 1) \bmod \delta_{v}$ for all $v \in V$ and $q \in \{0,1,\dots,\delta_v-1\}$.
\end{itemize}
Every time the agent begins a new DFT,
it chooses a new color uniformly at random from $\{1,2,\dots,c\}\setminus \{\self.\clr\}$ (Line 2).
At this time, the starting node of this DFT, say $\vst$, satisfies $\vst.\parent = \bot$ because the while-loop at Lines 5-15 ends if and only if $\vcur.\parent = \bot$ holds.
Thereafter, it performs a DFT with the new color $\self.\clr$
(Lines 3-15).
The agent first changes $\vst.\clr$ to $\self.\clr$
and makes a forward move to node $N(\vst,0)$ (Lines 3-4).
Thereafter, whenever it finds an unvisited node,
it marks the node with color $\self.\clr$ and
keeps on making a forward move (Lines 6-9).
If the destination node of a forward move already has color $\self.\clr$, it makes type-I backtracking.
After a (type-I or type-II) backtracking,
the agent decides whether there are still unvisited neighbors in $N(\vcur)$. The agent can make this decision according to the predicate $\vcur.\parent = \nextr(\vcur,\pin)$ (Line 11) because
it must have made forward moves to
$N(v,(\vcur.\parent+1) \bmod \delta_v), N(v,(\vcur.\parent+2) \bmod \delta_v),\allowbreak \dots, N(v,(\vcur.\parent + \delta_v -1) \bmod \delta_v)$
in this order if this predicate holds.
If this predicate does not hold, it makes a forward move
to node $N(\vcur,\nextr(\vcur,\pin))$ (Line 15).
Otherwise,
it makes type-II backtracking (Line 13).
At this time, the agent simultaneously
updates $\vcur.\parent$ to $\bot$ (Line 12).
This update is necessary
to avoid endless type-II backtracking that may occur without this update in an execution starting from a configuration where $(\{v,v.\parent\})_{v \in V}$ contains a cycle.
The agent eventually goes back to $\vst$
from node $N(\vst,\delta_{\vst}-1)$. Then, it terminates the current DFT (\ie breaks the while-loop at Lines 5-15) and begins a new DFT.
In the rest of this section,
we prove Lemmas \ref{lemma:DFTtime} and \ref{lemma:expand},
the combination of which is sufficient to prove Theorem \ref{theorem:pran} as shown above.
Before proving the lemmas, we show the following simple but useful lemma.
\begin{lemma}
\label{lemma:back}
If the agent running $\pran{c}$ makes a forward move from node $v \in V$
to node $u \in N(v)$,
thereafter no type-II backtracking to $v$ occurs before
the agent makes type-II backtracking from $u$ to $v$.
\end{lemma}
\begin{proof}
By the definition of $\pran{c}$, the set
$(\{v,v.\parent\})_{v\in V}$ always contains
a path from $\vcur$ to $v$ during the period from the forward move to the type-II backtracking from $u$ to $v$.
Therefore, the agent never makes type-II backtracking from any $w \in N(v) \setminus \{u\}$ to $v$ during the period.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemma:DFTtime}]
Let $C$ be any configuration and let $r$ be the color of the agent (\ie $r=\self.\clr$) in $C$.
Let $M_F(v)$, $M_{B1}(v)$, and $M_{B2}(v)$ be the numbers of forward moves,
type-I backtracking, and type-II backtracking
that the agent makes at $v \in V$, respectively,
until it changes $\self.\clr$ from $r$
in the execution starting from $C$.
Whenever the agent makes type-II backtracking at $v$,
it changes $v.\parent$ to $\bot$.
Thus, it never makes type-II backtracking twice at $v$,
that is, $M_{B2}(v) \le 1$.
Next, we bound $M_{F}(v)$.
By Lemma \ref{lemma:back},
once the agent makes a forward move from $v$ to $N(v,i)$,
the following migrations involving $v$ must be
type-II backtracking from $N(v,i)$ to $v$,
the forward move from $v$ to $N(v,i+1)$,
type-II backtracking from $N(v,i+1)$ to $v$,
the forward move from $v$ to $N(v,i+2)$, and so on.
Therefore, the agent makes type-II backtracking at $v$
or observes $v.\parent = \bot \land \pin = \delta_v - 1$
(and changes its color) before it makes forward moves
$\delta_v$ times. Since the agent makes
type-II backtracking at $v$ at most once,
we also have $M_{F}(v) \le 2\delta_v$.
Type-I backtracking occurs only after the agent makes a forward move. Therefore, we have $\sum_{v \in V}M_{B1}(v) \le \sum_{v \in V}M_{F}(v)$.
To conclude, we have $\sum_{v \in V}(M_{F}(v)+M_{B1}(v)+M_{B2}(v)) \le \sum_{v\in V} 2M_{F}(v) + \sum_{v \in V} M_{B2}(v) \le 8m + n$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemma:expand}]
By the definition of $\pran{c}$, the set
$(\{v,v.\parent\})_{v\in V}$ always contains
a path from $\vcur$ to $\vst$.
Therefore, this DFT terminates only at $\vst$.
Thus it suffices to show that the agent
visits all nodes in
$R \cup \bigcup_{v \in R} N(v)$ in this DFT.
Let $u$ be any node in $R \cup \bigcup_{v \in R} N(v)$.
By definition of $R$, there exists a path
$v_0,v_1,v_2,\dots,v_l$ in $G$ where $\vst=v_0$, $u=v_l$ and $v_i \notin S$ for all $i = 1,2,\dots,l-1$.
As mentioned in the proof of Lemma \ref{lemma:DFTtime},
once the agent makes a forward move from $v$ to $N(v,i)$,
the following migrations involving $v$ must be
type-II backtracking from $N(v,i)$ to $v$,
the forward move from $v$ to $N(v,i+1)$,
type-II backtracking from $N(v,i+1)$ to $v$,
the forward move from $v$ to $N(v,i+2)$, and so on.
Moreover, the agent makes the first forward move at $\vst$ to $N(\vst,0)$ at Line 4 and
makes the first forward move at each $v_i$ for $i=1,\dots,l-1$ to $N(v_i,(v_i.\parent + 1) \bmod \delta_{v_i})$ at Line 9
because we have $v_i \notin S$ for $i=0,1,\dots,l-1$.
This means that for each $i=0,1,\dots,l-1$,
the agent makes a forward move exactly once
from $v_i$ to each $u \in N(v_i)$ except for $v_i$'s parent
in the current DFT.
In particular, the agent makes a forward move from $v_{l-1}$
to $v_{l}=u$, thus the agent visits $u$ in this DFT.
\end{proof}
\section{Deterministic Exploration}
\label{sec:det}
The randomized algorithm
$\pran{c}$, which is presented in the previous section,
achieves self-stabilizing exploration in a small number of steps in expectation. The key idea is to repeat DFTs with changing the color of the agent randomly. Unfortunately, we cannot make use of the same idea if we are not allowed to use randomization
and we choose a new color deterministically among $c$ candidates. For example, consider an $(n/2,n/2)$-lollipop graph $G_L=(V_L,E_L)$, which consists of a clique $K_{n/2}$ of $n/2$ nodes and a path $P_{n/2}$ of $n/2$ nodes, connected with a bridge. Let $u_1, u_2, \dots, u_{n/2}$ be the nodes in $P_{n/2}$ such that one endpoint $u_1$ neighbors one node of clique $K_{n/2}$. Suppose that the agent begins a new DFT in a configuration where it is located in the clique, and all the nodes in the clique have the same color. (See Figure \ref{fig:lolli}.)
For $i=1,2,\dots,n/2$, let $c_i$ be the colors that the agent (deterministically) chooses for the $i$-th DFT.
Then, the adversary can assign color $c_i$ to each node $u_i$ in the path in the initial configuration. In this case, the deterministic version of $\pran{c}$ requires $\Omega(n^3)$ steps to visit all the nodes in $G_L$
because the agent visits $u_i$ in the $i$-th DFT but it immediately backtracks without visiting $u_{i+1}$,
and each DFT involves the move through every edge in the clique,
resulting in $\Theta(n^2)$ steps.
\begin{figure}
\centering
\includegraphics[width=0.45\linewidth]{lolli}
\caption{A (6,6)-lollipop graph}
\label{fig:lolli}
\end{figure}
In this section,
we give a deterministic
self-stabilizing exploration algorithm $\pdet{k}$
with design parameter $k \ge D-1$.
We must satisfy $k \ge D-1$;
otherwise, the correctness of this algorithm is no longer guaranteed.
Thus we require the knowledge of
an upper bound of the diameter $D$ of graph $G$.
Unlike $\pran{c}$,
the cover time of this algorithm does not depend on parameter $k$:
the agent visits all nodes within $O(m+nD)$ steps starting from any configuration.
Algorithm $\pdet{k}$ requires $O(\log k)$ bits of agent memory
and $O(\delta_v + \log k)$ bits of each node $v \in V$.
Whereas $\pran{c}$ is based on DFTs,
our deterministic algorithm $\pdet{k}$ is based on Breadth First Traversals (BFTs).
Specifically, the agent tries to make a Breadth First Search (BFS) tree rooted at some node $\vst \in V$.
Each node $v \in V$ maintains whiteboard variables $v.\parent \in \{0,1,2,\dots,\delta_v-1\}$
and $v.\child \in 2^{\{0,1,\dots,\delta_v-1\}}$.
In this section, we consider a directed graph $T=(V_T,A_T)$ where $V_T = V$ and $A_T = \{(u,v) \in V \times V \mid v = N(u,u.\parent) \land \exists q \in v.\child: u = N(v,q)\}$ and we say that
$v$ is the parent of $u$ if $(u,v) \in A_T$.
We aim to reach a configuration where $T$ is a BFS tree rooted at $\vst \in V$, that is, $T$ is a weakly connected digraph with $n-1$ arcs and for any $i=1,2,\dots,D$,
each $u \in N_i(\vst) \setminus N_{i-1}(\vst)$ has exactly one neighbor $v \in N_{i-1}(\vst)$ such that $(u,v) \in A_T$.
(Remember that $N_i(v)$ is the set of $i$-hop neighbors of $v$.)
Before reaching such a configuration, $T$ may not be weakly connected. We denote the weakly connected component of $T$ that includes a node $v \in V$ by $T(v)$. In particular, we are interested in $T(\vst)$.
We say that $T(v)$ is \emph{consistent}
if $\bigcup_{q \in w.\child}N(w,q) \subseteq \{u \in N(w) \mid (u,w) \in T(v)\}$ for every node $w$ in $T(v)$.
In other words, in a consistent component $T(v)$,
if a node $w \in T(v)$ \emph{thinks} that $u$ is one of its children,
$u$ must be actually a child of $w$ in $T(v)$.
The agent maintains a variable $\self.\phase \in \{0,1,2,\dots,k\}$. When $\self.\phase = i$, we say that the agent is in Phase $i$.
Suppose that the agent now begins Phase $0$ at node $\vst \in V$.
In Phase $0$, the agent visits all neighbors of $\vst$
and initializes $\vst.\child$, $u.\parent$, and $u.\child$
for all $u \in N(\vst)$
in such a way that
$T(\vst)$ becomes a spanning tree rooted at $\vst$ on
the induced subgraph $G[N_{1}(\vst)]$.
At the beginning of Phase $i \ge 1$,
we assume $T(\vcur)$ is a spanning tree rooted at $\vst$
in the induced subgraph $G[N_{i}(\vst)]$
and $T(\vcur)$ is consistent.
Then, the agent tries to expand $T(\vst)$ such that
it becomes a spanning tree on $G[N_{i+1}(\vst)]$ at the end of
Phase $i$, by using whiteboard variable $v.\clr \in \{B,W,R\}$
for each $v \in V$, where $B$, $W$, and $R$ indicate \emph{black}, \emph{white}, and
\emph{red}, respectively.
For any $i \ge 1$,
Phase $i$ consists of five sub-phases or \emph{stages},
which are managed by variable $\self.\stage \in \{1,2,3,4,5\}$,
as follows.
\begin{itemize}
\item \textbf{Stage 1}:
The agent circulates tree $T(\vst)$ and colors all nodes in $T(\vst)$ white.
\item \textbf{Stage 2}:
The agent circulates tree $T(\vst)$. Every time it visits a node $v \in N_{i}(\vst)\setminus N_{i-1}(\vst)$, it visits all neighbors of $v$ and checks their colors. For each $u \in N(v)$, the agent changes $u$'s color to red if it is black.
\item \textbf{Stage 3}:
The agent circulates tree $T(\vst)$ and colors all nodes in $T(\vst)$ black.
\item \textbf{Stage 4}:
The agent circulates tree $T(\vst)$. Every time it visits a node $v \in N_{i}(\vst) \setminus N_{i-1}(\vst)$, it visits all neighbors of $v$ and checks their colors. For each $u \in N(v)$, the agent changes $u$'s color to red if its color is white.
\item \textbf{Stage 5}:
The agent circulates tree $T(\vst)$. Every time it visits a node $v \in N_{i}(\vst) \setminus N_{i-1}(\vst)$, it visits all neighbors of $v$ and checks their colors.
The agent incorporates all red $u \in N(v)$ into $T(\vst)$
by assigning $p_u(\{u,v\})$ for $u.\parent$,
initializing $u.\child$ to $\emptyset$,
and adding $p_v(\{v,u\})$ to $v.\child$.
The agent simultaneously changes $u$'s color to black.
\end{itemize}
By these stages, we can incorporate all nodes in $N_{i+1}(\vst) \setminus N_{i}(\vst)$ into $T(\vst)$. Let $S_W$ (resp.~$S_B$, and $S_R$) be the set of nodes in $N_{i+1}(\vst) \setminus N_{i}(\vst)$ whose colors are $W$ (resp.~$B$, and $R$) at the beginning of Phase $i \ge 1$. All nodes in $S_B$ are colored $R$ in Stage 2
and all nodes in $S_W$ are colored $R$ in Stage 4.
All nodes in $S_R$ never change their colors in the first four stages.
Thus, all nodes in $N_{i+1}(\vst) \setminus N_{i}(\vst)$ have
color $R$ at the beginning of Stage 5.
Therefore, all of them are incorporated into $T(\vst)$.
One may think that a node $u \in N_i(\vst)$ may mistakenly become one of $v$'s children for some $v \in N_{i}(\vst)\setminus N_{i-1}(\vst)$ in Stage 5, which may result in partitioning $T(\vst)$ into two or more components.
However, such an event never happens:
all nodes in $N_i(\vst)$ become white in Stage 1
thus never become red in Stage 2; they also become black in Stage 3
thus never become red in Stage 4;
as a result, they are always black during Stage 5.
Importantly, each node $u \in N_{i+1}(\vst) \setminus N_{i}(\vst)$
becomes black immediately after it is added into $T(\vst)$ in Stage 5.
Hence, the agent never changes $u.\parent$ thereafter,
thus the consistency of $T(\vst)$ is preserved
in these five stages.
Therefore, once the agent begins Phase 0 at any node $\vst \in V$,
it constructs a spanning tree on the whole graph and thus visits all nodes
by the end of Phase $D$.
The agent can circulate any tree with at most $n$ nodes
within $2n-2$ steps.
In addition, when the agent visits
a node $v$ in $N_{i}(\vst)\setminus N_{i-1}(\vst)$ in Stages 2, 4, and 5
in Phase $i$, it visits each of $v$'s neighbors and comes back to $v$,
resulting in $3\cdot 2\delta_v=6\delta_v$ migrations.
In total, the agent requires
$D\cdot (2n-2) + \sum_{v \in V}6\delta_v = O(m + nD)$ steps.
However, this is not enough because
our goal is to design a self-stabilizing algorithm;
we must consider an \emph{arbitrary} initial configuration.
At the beginning of an execution, we may have many inconsistencies:
\eg $\self.\phase$ may be much larger than $D$,
there may be no $\vst \in V$ such that
$T(\vst)$ is a spanning tree on the induced subgraph $G[N_{\self.\phase}(\vst)]$ and $T(\vst)$ is consistent, and so on.
Therefore,
we need a mechanism
guaranteeing that the agent begins Phase 0 in every $O(m+nD)$ steps
\footnotemark{}
starting from any (possibly inconsistent) configuration.
In the rest of this section, we will explain algorithm $\pdet{k}$
in detail with the emphasis on how we handle inconsistencies.
\footnotetext{
We have to let the agent begin Phase 0 infinitely often:
in a self-stabilizing setting,
we must consider an arbitrary initial configuration,
thus the agent can never be certain that $T(\vcur)$ is
a consistent spanning tree on $G$.
}
\setcounter{AlgoLine}{0}
\begin{algorithm}
\caption{Main Routine of $\pdet{k}$}
\label{al:pdet_main}
\VarAgent{
\\
\varspace $\self.\phase,\self.\dist \in \{0,1,\dots,k\}$\\
\varspace $\self.\stage \in \{1,2,3,4,5\}$\\
\varspace $\self.\error,\self.\found \in \{\fl,\tr\}$
}
\aspace
\VarNode{
\\
\varspace $v.\parent \in \{0,1,\dots,\delta_v-1\}$ \\
\varspace $v.\child \in 2^{\{0,1,\dots,\delta_v-1\}}$,
$v.\clr \in \{W,B,R\}$ \\
\varspace $v.\dist \in \{0,1,\dots,k\}$,
$v.\port \in \{0,1,\dots,\delta_{v}\}$
}
\aspace
\While{$\tr$}{
\If{$\self.\phase = 0$}{
$\initialize()$; $\self.\stage \gets 1$\;
}
\If{$\self.\phase \ge 1$}{
$\circulate()$\;
$\self.\stage \gets (\self.\stage \bmod 5)+1$\;
}
\uIf{
$
\left(
\begin{aligned}
&\self.\error = \tr \lor (\self.\phase \ge 1 \\
&\land \self.\stage = 1 \land \self.\found = \fl)
\end{aligned}
\right)
$
}{
$\self.\phase \gets 0$; $\self.\error \gets \fl$\;
}
\ElseIf{$\self.\stage = 1$}{
$\self.\phase \gets (\self.\phase + 1) \bmod k$\;
$\self.\found \gets \fl$\;
}
}
\end{algorithm}
\begin{algorithm}
\caption{Functions of $\pdet{k}$ except for $\expand()$}
\label{al:pdet_functions}
\Notation{\\
\varspace $\restchild(v,q) = \{r \in v.\child \mid r \ge q\}$\\
$
\begin{aligned}
&\varspace \nextd(v,q) \\
&\hspace{-5pt}=
\begin{cases}
\min \restchild(v,0) &
\left(
\begin{aligned}
&v.\dist \neq 0 \land q = v.\parent\\
&\land \restchild(v,0) \neq \emptyset
\end{aligned}
\right)
\\
\min \restchild(v,q+1) &
\left(
\begin{aligned}
&(v.\dist = 0 \lor q \neq v.\parent)\\
& \land \restchild(v,q+1) \neq \emptyset
\end{aligned}
\right)
\\
\bot &
\left(
\begin{aligned}
&v.\dist = 0 \\
&\land \restchild(v,q+1) = \emptyset
\end{aligned}
\right)
\\
v.\parent & \text{otherwise}
\end{cases}
\end{aligned}
$\\
\varspace $\colorupdate(a)=
\begin{cases}
W & \self.\stage = 1\\
B & \self.\stage = 3\\
a & \text{otherwise}
\end{cases}$
}
\aspace
\Fn{$\initialize()$}{
$\self.\dist \gets 0$; $\vcur.\dist \gets 0$;
$\vcur.\port \gets 0$\;
$\vcur.\child \gets \{0,1,\dots,\delta_{\vcur}-1\}$\;
\While{$\vcur.\port < \delta_{\vcur}$}{
Migrate to $N(\vcur,\vcur.\port)$\;
$\vcur.\parent \gets \pin$;
$\vcur.\child \gets \emptyset$\;
$\vcur.\dist \gets 1$\;
Migrate to $N(\vcur,\pin)$\;
$\vcur.\port \gets \vcur.\port + 1$\;
}
}
\Fn{$\circulate()$}{
$\vcur.\clr \gets \colorupdate(\vcur.\clr)$\;
$\move(0,+1)$\;
\While{$\nextd(\vcur,\pin) \neq \bot \land \self.\error = \fl $}{
$\vcur.\clr \gets \colorupdate(\vcur.\clr)$\;
\uIf{$\self.\dist = \self.\phase$}{
\If{$\self.\stage \in \{2,4,5\}$}{$\expand()$}
$\move(\vcur.\parent,-1)$\;
}
\Else{
$
\begin{aligned}
\epsilon \gets
\begin{cases}
- 1 &
\begin{aligned}
&\self.\dist > 0 \\
&\land \nextd(\vcur,\pin)=\vcur.\parent
\end{aligned}
\\
+ 1 & \text{otherwise}
\end{cases}
\end{aligned}
$\;
$\move(\nextd(\vcur,\pin),\epsilon)$\;
}
}
}
\Fn{$\move(q,\epsilon)$}{
Migrate to $N(\vcur,q)$\;
$\self.\dist \gets \min(k,\max(0,\self.\dist +\epsilon))$\;
\If{
$
\left(
\begin{aligned}
&\self.\dist \neq \vcur.\dist\\
& \lor (\vcur.\dist > 0 \land \vcur.\parent \in \vcur.\child)\\
&\lor (\epsilon = 1 \land \pin \neq \vcur.\parent)\\
& \lor (\epsilon = -1 \land \pin \notin \vcur.\child)
\end{aligned}
\right)
$
}{
$\self.\error \gets \tr$
}
}
\end{algorithm}
\begin{algorithm*}
\caption{$\expand()$ in $\pdet{k}$}
\label{al:pdet_expand}
\Fn{$\expand()$}{
$\vcur.\port \gets (\vcur.\parent + 1) \bmod \delta_{\vcur}$\;
\While{$\vcur.\port \neq \vcur.\parent$}{
Migrate to $N(\vcur,\vcur.\port)$\;
\uIf{$(\self.\stage = 2 \land \vcur.\clr = B) \lor (\self.\stage = 4 \land \vcur.\clr = W)$}{
$\vcur.\clr \gets R$; Migrate to $N(\vcur,\pin)$\;
}
\uElseIf{$\self.\stage = 5 \land \vcur.\clr = R$}{
$\vcur.\dist \gets \self.\dist+1$;
$\vcur.\parent \gets \pin$; $\vcur.\child \gets \emptyset$;
$\vcur.\clr \gets B$\;
Migrate to $N(\vcur,\pin)$\;
$\vcur.\child \gets \vcur.\child \cup \{\pin\}$;
$\self.\found \gets \tr$\;
}
\ElseIf{$\self.\stage = 5 \land \vcur.\clr = B \land \vcur.\dist < \self.\phase - 1$}{
$\self.\error \gets \tr$\;
Migrate to $N(\vcur,\pin)$\;
}
$\vcur.\port \gets (\vcur.\port + 1) \bmod \delta_{\vcur}$\;
}
}
\end{algorithm*}
Algorithm $\pdet{k}$ is presented in Algorithms \ref{al:pdet_main}, \ref{al:pdet_functions}, and \ref{al:pdet_expand}.
Algorithm \ref{al:pdet_main} gives the list of variables
and the main routine,
while Algorithms \ref{al:pdet_functions}
and \ref{al:pdet_expand} give four subroutines (functions).
In addition to the variables we explained above
(\ie $\self.\phase$, $\self.\stage$, $v.\parent$, $v.\child$, $v.\clr$),
we have three agent variables $\self.\dist \in \{0,1,\dots,k\}$,
$\self.\error \in \{\fl,\tr\}$, and $\self.\found \in \{\fl,\tr\}$
and two whiteboard variables $v.\dist$ and $v.\port \in \{0,1,\dots,\delta_v\}$ for each $v \in V$. We use $\self.\dist$ (resp.~$v.\dist$) to maintain the distance between the current location $\vcur$ (resp.~node $v$) and the node at which the agent begins Phase 0 (\ie reset $\self.\phase$ to zero) for the last time.
We use $\self.\error$ and $\self.\found$ to reset $\self.\phase$ to zero, as we will see later. A whiteboard variable $v.\port$ is just a temporary variable to visit all neighbors of the current node in functions $\initialize()$ and $\expand()$.
The main routine is simply structured.
In Phase 0 (\ie $\self.\phase = 0$),
it executes function $\initialize()$ and sets $\self.\stage$ to $1$
(Lines 2-3).
In $\initialize()$, the agent simply initializes
$\self.\dist$ and $v.\dist$ for all $v \in N_1(\vcur)$
and makes the parent-child relationship
between $\vcur$ and each of its neighbors (Lines 13-20).
As a result, at the end of Phase 0,
$T(\vcur)$ is the tree rooted at $\vcur$
such that every $u \in N(\vcur)$ is a child of $\vcur$
and no node outside $N_1(\vcur)$ is included in $T(\vcur)$.
In Phase $i \ge 1$,
it executes function $\circulate()$
and increments $\self.\stage$ by one;
if $\self.\stage$ was already $5$ before the increment,
it is reset to $1$ (Lines 4-6).
We will explain $\circulate()$ later.
Every time $\self.\stage$ is reset to $1$,
the agent begins the next phase,
that is, increments $\self.\phase$ by one modulo $k$ (Line 10).
The agent knows $k \ge D-1$, however
it may not know the exact diameter $D$
or any $\Theta(D)$ value.
To begin Phase $0$ in every $O(m+nD)$ steps,
the agent always memorizes in $\self.\found$ whether
it added a new node to $T(\vcur)$ in the current phase.
If $\self.\found$ is still $\fl$ at the end of a phase except for Phase 0, the agent resets $\self.\phase$ to zero and begins Phase 0 (Line 8). In addition, the agent always substitutes $\tr$ for $\self.\error$
when it finds any inconsistency in $\circulate()$.
Once $\self.\error = \tr$ holds, the agent
resets $\self.\phase$ to zero and begins Phase 0 immediately after $\circulate()$ finishes (Line 8).
When the agent invokes $\circulate()$ at a node $\vst$ in Phase $i$, it circulates all the nodes in $T_i(\vst)$,
where $T_i(\vst) = T(\vst)[V_i]$ is the subgraph of $T(\vst)$
induced by $V_i \subseteq V$, the set of nodes that are reachable from
$\vst$ within $i$ hops in $T(\vst)$.
If $T_i(\vst)$ is a tree rooted at $\vst$,
the circulation is easily done with a simple rule:
the agent moves via port $0$ in the first step;
thereafter, when the agent visits a node $v$,
it moves to $N(v,\nextd(v,\pin))$ unless $\nextd(v,\pin) = \bot$
(Lines 31 and 32),
where $\nextd(v,q)$ is defined in Algorithm \ref{al:pdet_functions}.
The only exception is when the agent visits a node $v$ with
$v.\dist = \self.\phase$. Then, it simply goes back to the parent node
(Line 29).
The agent makes a move inside the tree by invoking $\move()$,
in which $\self.\dist$ is adequately updated (Line 35).
In Stage 1 (resp.~3), it always changes the color of the visited node
to white (resp.~black).
In Stages 2, 4, and 5, whenever the agent visits a node with
$\dist = \self.\phase$, it tries to add the neighbors
of the current node according to the strategy mentioned above,
by invoking $\expand()$ (Line 28).
In $\move()$, every kind of inconsistency regarding
the parent-child relationship is detected.
Specifically, the agent detects an inconsistency at the migration from $u$ to $v$ if
(i) $\self.\dist \neq v.\dist$,
(ii) $v.\dist > 0$ and $v.\parent \in v.\child$,
(iii) $p_u(\{u,v\}) \in u.\child$ but $v.\parent \neq p_v(\{v,u\})$,
or
(iv) $u.\parent = p_u(\{u,v\})$ but $p_v(\{v,u\}) \notin v.\child$.
If an inconsistency is detected, the agent sets $\tr$ for $\self.\error$ (Line 37).
One can easily observe that $T(\vcur)$ is a tree rooted at some node
if no pair of a node $u \in T(\vcur)$ and a neighbor $v$ with $p_u(\{u,v\}) \in u.\child$ exhibits any of the inconsistencies listed above.
Hence, when the agent invokes $\circulate()$ on a node $\vst$ in Phase $i$,
the agent circulates $T_i(\vst)$ or detects an inconsistency;
thus it returns to the main routine
within $O(n)$ moves, excluding the moves triggered in $\expand()$,
starting from any configuration.
\begin{figure}
\centering
\includegraphics[width=0.45\linewidth]{worstcase}
\caption{A consistent but undesirable tree}
\label{fig:worstcase}
\end{figure}
Function $\expand()$ is a simple implementation of the strategy explained above. However, we need to explain one mechanism to detect another kind of inconsistency. Let $r$ be the root of $T(\vcur)$. (If $T(\vcur)$ is not a rooted tree, the agent detects the inconsistency in $\move()$.) In our strategy, we expect that $T(r)$ is a spanning tree on $G[N_{i}(r)]$ at the beginning of Phase $i\ge 0$. However, we must consider an arbitrary initial configuration; thus, some nodes in $N_{i}(r)$ may be out of $T(r)$. As a result, the agent may need $n$ phases to make a spanning tree on $G$, implying that we require $O(m+n^2)$ steps to begin Phase 0 in the worst case. For example, consider the case that
$T(r)=\{(v_3,v_2),(v_2,v_1),(v_1,v_0)\}$ at the beginning of Phase 3
in the graph shown in Figure \ref{fig:worstcase}.
Then,
edges $\{v_0,v_1\},\{v_0,v_2\},\dots,\{v_0,v_{n-2}\}$ are never used
in the subsequent tree-circulations
since $v_1$ is the only child of $v_0$.
Thus exactly one node is added to $T(r)$ in each phase,
requiring $O(m+n^2)$ steps before beginning Phase 0 in spite of $D=2$.
To circumvent this issue,
we add another rule:
the agent raises the error flag
when it, while executing $\expand()$ in Stage 5, finds a black node with $\dist < \self.\phase - 1$ (Line 49).
This rule guarantees that the agent begins Phase 0
within $O(m+nD)$ steps starting from any configuration.
Moreover, this rule is never harmful in an execution after
the agent begins Phase 0:
this rule is never applicable
because $T(\vcur)$ is always
a shortest path tree on $G[T(\vcur)]$ in the subsequent execution.
We have the following lemmas from the above discussions.
\begin{lemma}
\label{lemma:begins_zero}
Starting from any configuration,
the agent running $\pdet{k}$ begins Phase 0
(\ie changes $\self.\phase$ to 0)
within $O(m+nD)$ steps.
\end{lemma}
\begin{lemma}
\label{lemma:done}
Let $k \ge D-1$.
Once the agent running $\pdet{k}$ begins Phase 0,
it visits all nodes within $O(m+nD)$ steps.
\end{lemma}
Thus, we have the following theorem.
\begin{theorem}
\label{theorem:pdet}
Algorithm $\pdet{k}$ is a deterministic
self-stabilizing exploration algorithm
for all simple, undirected, and connected graphs
with diameter less than or equal to $k+1$.
Irrespective of $k$, the cover time is $O(m+nD)$ steps.
The agent memory space is $O(\log k)$
and the memory space of each node $v$ is $O(\delta_v + \log k)$.
\end{theorem}
\section{Lower Bound}
\label{sec:lower}
By definition, the cover time of any exploration algorithm is
trivially $\Omega(n)$.
In this section, we give a better lower bound:
the cover time of any (possibly randomized) algorithm is $\Omega(m)$.
Specifically, we prove the following theorem.
\begin{theorem}
\label{theorem:lower}
Let $\calp$ be any exploration algorithm.
For any positive integers $n,m$ such that $n \ge 3$ and $n-1 \le m \le n(n+1)/2$,
there exists a simple, undirected, and connected graph $G=(V,E)$
with $|V| = n$ and $|E| = m$
such that
the agent running $\calp$ on $G$
starting from some node in $V$
requires at least $(m-1)/4$ steps
to visit all nodes in $V$ in expectation.
\end{theorem}
\begin{proof}
For simplicity, we assume that $\calp$ is
a randomized algorithm. However, the following proof also
holds without any modification
even if $\calp$ is a deterministic one.
Let $G'=(V',E')$ be any simple, undirected, and connected graph
such that $|V'|=n-1$ and $|E'|=m-1$. There must be such a graph
because $n-1 \ge 2$ and $m-1 \ge (n-1)-1$.
Suppose that the agent runs $\calp$ on $G'$
starting from any node $\vst \in V'$.
Let $X_{u,v}$ be the indicator random variable such that
$X_{u,v}=1$ if the agent traverses edge $\{u,v\}$
(from $u$ to $v$ or from $v$ to $u$)
in the first $(m-1)/2$ steps,
and $X_{u,v}=0$ otherwise.
Clearly, the agent traverses at most $(m-1)/2$ edges
in the first $(m-1)/2$ steps.
Therefore, $\sum_{\{u,v\} \in E'} X_{u,v} \le (m-1)/2$
always holds,
thus $\sum_{\{u,v\} \in E'} \ex[X_{u,v}] = \ex[\sum_{\{u,v\} \in E'} X_{u,v}]\le (m-1)/2$. This yields that there exists
an edge $\{u,v\} \in E'$ such that
$\Pr[X_{u,v}=1] = \ex[X_{u,v}] \le 1/2$.
Let $\{u,v\} \in E'$ such that $\Pr[X_{u,v}=1] \le 1/2$
and define $G=(V,E)$ as the graph that we obtain
by modifying $G'$ as follows:
remove edge $\{u,v\}$,
introduce a new node $w \notin V'$,
and add two edges $\{w,u\}$ and $\{w,v\}$.
Formally, $V=V'\cup \{w\}$ and $E=E' \cup \{\{w,u\},\{w,v\}\}\setminus \{\{u,v\}\}$.
By definition, graph $G$ is simple, undirected, and connected.
Then, the agent running $\calp$
on $G$ starting from node $\vst$
requires at least $(m-1)/4$ steps in expectation
to visit all nodes in $V$
because the agent does not visit node $w$ in the first $(m-1)/2$
steps with probability at least $1/2$.
\end{proof}
\section{Conclusion}
We gave two self-stabilizing graph exploration algorithms.
A randomized algorithm $\pran{c}$ is time-optimal
(\ie its cover time is $O(m)$ steps)
and uses only $O(\log c)$ bits both in
the agent-memory and whiteboards
if $c=n(1+\Omega(1))$.
A deterministic algorithm $\pdet{k}$
correctly works if $k \ge D-1$.
Then, this algorithm lets
the agent visit all nodes within $O(m+nD)$ steps
and uses $O(\log k)$ bits and $O(\delta + \log k)$ bits
for the agent-memory and the whiteboard of each node
with degree $\delta$, respectively.
\bibliographystyle{plain}
\section{Introduction}
A central problem in computer graphics and computer vision is that of acquiring some observations of an object, and then producing photorealistic relit renderings of that object.
Of particular interest are renderings of human faces, which have many practical uses within consumer photography and the visual effects industry, but also serve as a particularly challenging case due to their complexity and the high sensitivity of the human visual system to facial appearance.
A light stage represents an effective solution for this task: by programmatically activating and deactivating several LED lights arranged in a sphere while capturing synchronized images, the light stage acquires a full reflectance field for a \changed{human} subject, which we refer to as a ``one-light-at-a-time'' (OLAT) image set.
Because light is additive, this OLAT scan represents a lighting ``basis'', and the subject can be relit according to some desired environment map by simply projecting that environment map onto the light stage basis~\cite{debevec2000acquiring}.
Though straightforward and theoretically elegant, this classic relighting approach has a critical limitation. The lights on the light stage are usually designed to be small and distant from the subject, so that they are well-approximated as directional light sources. As a consequence, realistic high-frequency effects such as sharp cast shadows and specular highlights are present in the captured OLAT images.
In order to achieve photorealistic relighting results under \emph{all} possible lighting conditions, the lights must be placed closely enough on the sphere of the stage such that shadows and specularities in the captured images of adjacent lights ``move'' by less than one pixel.
However, practical constraints (the cost and size of each light, and the difficulty of powering and synchronizing many lights) discourage the construction of light stages with very high densities of lights. Even if such a high-density light stage could be built, the time to acquire an OLAT increases linearly with the number of lights, and this makes human subjects (which must be stationary during OLAT acquisition) difficult to capture. For these reasons, even the most sophisticated light stages in existence today contain only a few hundred lights that are spaced many degrees apart.
This means that the OLAT scans from a light stage are \emph{undersampled} with respect to the angular sampling of lights, and the rendered images using conventional approaches will likely contain \emph{ghosting}. Attempting to render an image using a ``virtual'' light source that lies in between the real lights of the stage by applying a weighted average on adjacent OLAT images will not produce a soft shadow or a streaking specularity, but will instead produce the superposition of multiple sharp shadows and specular dots (see Fig.~\ref{fig:teaser}b).
This problem can be mitigated by imaging subjects that only exhibit low-frequency reflectance variation, or by performing relighting using only low-frequency environment maps. However, most human subjects have complicated material properties (specularities, scattering, \textit{etc}. ) and real-world environment maps frequently exhibit high-frequency variation (bright light sources at arbitrary locations), which often results in noticeable artifacts as shown in Fig.~\ref{fig:teaser}b.
To this end, we propose a learning based solution for super-resolving the angular resolution of light stage scans \changed{of human faces}. Given an OLAT scan \changed{of a human face} with finite images and the direction of a desired ``virtual'' light, our model predicts a complete high-resolution RGB image that appears to have been lit by a light source from that direction, even though that light is not present in our light stage (see Fig.~\ref{fig:teaser}c).
Our robust solution for ``upsampling'' the number of lights, which we refer to as {\em light stage super-resolution}, can additionally enable the construction of simpler light stages with fewer lights, thereby reducing cost and increasing the frame rate at which subjects can be scanned.
Our algorithm can also produce better rendered images for applications that require light stage data for training, such as portrait relighting or shadow removal. Casual users can then utilize these algorithms on a single cellphone without requiring capture inside a light stage. \changed{Note that we focus only on human face relighting within a light stage. While we believe the methods herein could be applied more broadly, a comprehensive system for general object relighting remains a topic of future work.}
Our algorithm (Sec.~\ref{sec:model}) must work with the inherent aliasing and regularity of the light stage data used for training. We address this by combining the power of deep neural networks with the efficiency and generality of conventional linear interpolation methods. Specifically, we use an active set of closest lights within our network (Sec.~\ref{sec:activeset}) and develop a novel alias-free pooling approach to combine their network activations (Sec.~\ref{sec:pooling}) using a weighting operator guaranteed to be smooth when lights enter or exit the active set.
Our network allows us to \emph{super-resolve} an OLAT scan \changed{of a human face}: we can take our learned model and repeatedly query it with thousands of light directions, and treat the resulting set of synthesized images as though they were acquired by a physically-unconstrained light stage with an unbounded sampling density. As we will demonstrate, these super-resolved ``virtual'' OLAT scans allow us to produce photorealistic renderings of \changed{human faces} with arbitrarily high-frequency illumination content.
\section{Related Work}
The angular undersampling from the light stage relates to much work over the past two decades on a frequency analysis of light transport~\cite{invrend,Imari,Fredo}, and can also be related to analyses of sampling rate in image-based rendering~\cite{plenoptic} for the related problem of view synthesis~\cite{llff}. This problem also bears some similarities to multi-image super-resolution~\cite{milanfar2010super} and angular super-resolution in the light field~\cite{kalantari2016learning,Cheng_2019_CVPR_Workshops}, where aliased observations are combined to produce interpolated results. In this paper, we leverage priors and deep learning to go beyond these sampling limits, upsampling or super-resolving a sparse input light sampling on the light stage to achieve continuous high-frequency relighting.
Recently, many approaches for acquiring a sparse light transport matrix have been developed, including methods based on compressive sensing~\cite{peers2009compressive,Sen3}, kernel Nystrom~\cite{Tong}, optical computing~\cite{Otoole} and neural networks~\cite{ren2013global,Ren2015,kang2018efficient}. However, these methods are not designed for the light stage and are largely orthogonal to our approach. They seek to acquire the transport matrix for a fixed light sampling resolution with a sparse set of patterns, while we seek to take this initial sampling resolution and upsample or super-resolve it to much higher-resolution lighting (and indeed enable continuous high-frequency relighting).
Most recently, \citet{xu2018deep} proposed a deep learning approach for image-based relighting from only five lighting directions, but their method cannot reproduce very accurate shadows. While we do use many more lights, we achieve significantly higher-quality results with accurate shadows.
The general approach of using light stages for image-based relighting stands in contrast to more model-based approaches. Traditionally, instead of super-resolving a light stage scan, one could use that scan as input to a photometric stereo algorithm~\cite{photometric_stereo}, and attempt to recover the normal and the albedo maps of the subject. More advanced techniques were developed to produce a parametric model of the geometry and reflectance for even highly specular objects~\cite{tunwattanapong2013acquiring}. There are also works that focus on recovering a parametric model from a single image~\cite{sfsnetSengupta18}, constructing a volumetric model for view synthesis~\cite{lombardi2018deep}, or even a neural representation of a scene~\cite{tewari2020state}. However, the complicated reflectance and geometry of human subjects is difficult to even parameterize analytically, let alone recover. Though recent progress may enable the accurate capture of human faces using parametric models, there are additional difficulties in capturing a complete portrait due to the complexity of human hair, eyes, ears, etc. Indeed, this complexity has motivated the use of image-based relighting via light stages in the visual effects industry for many years~\cite{tunwattanapong2011practical,debevec2012light}.
Interpolating a reflectance function has also been investigated in the literature. \citet{masselus2004smooth} compare the errors of fitting the sampled reflectance function to various basis functions and conclude that multilevel B-Splines can preserve the most features. More recently, \citet{rainer2019neural} utilize neural networks to compress and interpolate sparsely sampled observations. However, these algorithms interpolate the reflectance function independently on each pixel and do not consider local information in neighboring pixels. Thus, their results are smooth and consistent in the light domain, but might not be consistent in the image domain.
\citet{fuchs2007superresolution} treat the problem as one of light super-resolution, and their approach is the most similar to our work. They use heuristics to decompose the captured images into diffuse and specular layers, and apply optical-flow and level-set algorithms to interpolate highlights and light visibility, respectively. This approach works well on highly reflective objects, but as we will demonstrate, it usually fails on human skin, which contains high-frequency bumps and cannot be well modeled using only diffuse and specular terms.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{image/architecture}
\caption{
A visualization of our model architecture. The encoder of our model $\network_{e}(\cdot)$ takes as input a concatenation of the nearby OLAT images in the active set and their light directions, which are processed by a series of stride-2 conv layers. The resulting encoded activations of these 8 images at each level are then combined using the alias-free pooling described in Section~\ref{sec:pooling}, and skip-connected to the decoder. The decoder $\network_{d}(\cdot)$ takes as input the query light direction $\light$, processes it with fully connected layers and then upsamples it (along with the skip-connected encoder activations), and decodes the image using a series of stride-2 transposed conv layers.
Whether or not a conv or transposed conv changes resolution is indicated by whether or not its edge spans two spatial scales.
}
\label{fig:architecture}
\end{figure*}
In recent years, light stages have also been demonstrated to be invaluable tools for generating training data for use in deep learning tasks~\cite{meka2019deep,guo_relightables:_2019,Sun2019,nestmeyer2019structural}. This enables user-facing effects that do not require acquiring a complete light stage scan of the subject, such as ``portrait relighting'' from a single image~\cite{Sun2019, ApplePortraitMode} or VR experiences~\cite{guo_relightables:_2019}.
These learning-based applications suffer from the same undersampling issue as do conventional uses of light stage data. For example, \citet{Sun2019} observe artifacts when relighting with environment maps that contain high-frequency illumination. We believe our method can provide better training data and significantly improve many of these methods in the future.
\section{Model}
\label{sec:model}
An OLAT scan of a subject captured by a light stage consists of $n$ images, where each image is lit by a single light in the stage. The conventional way to relight the captured subject with an arbitrary light direction is to linearly blend the images captured under nearby lights in the OLAT scan. As shown in Fig.~\ref{fig:teaser}, this often results in ``ghosting'' artifacts on shadows and highlights. The goal of this work is to use machine learning instead of simple linear interpolation to produce higher-quality results.
Our model takes as input a query light direction $\light$ and a complete OLAT scan consisting of a set of paired images and light directions $\{ \Mat{I}_i, {\boldsymbol\ell}_i \}$, and uses a deep neural network $\network$ to obtain the predicted image $\image$,
\begin{equation}
\image\left(\light\right) = \network\left(\left\{ \Mat{I}_i, {\boldsymbol\ell}_i \right\}_{i=1}^{n}, \light \right) .
\label{equ:relight}
\end{equation}
This formalization is broad enough to describe some prior works on learning-based relighting~\cite{xu2018deep,meka2019deep}. While these methods usually operate by training a U-Net~\cite{ronneberger2015u} to map from a {\em sparse} set of input images to an output image, we focus on producing as high-quality as possible rendering results given the {\em complete} OLAT scan.
However, feeding all the captured images into a conventional CNN network is not tractable in terms of speed or memory requirements. In addition, this naive approach seems somewhat excessive for practical applications involving human faces. While complex translucency and interreflection may require multiple lights to reproduce, it is unlikely that \emph{all} images in the OLAT scan are necessary to reconstruct the image for any particular query light direction, especially given that barycentric interpolation requires only three nearby lights to produce a somewhat plausible rendering.
Our work attempts to find an effective and tractable compromise between these two extremes, in which the power of deep neural networks is combined with the efficiency and generality of nearest-neighbor approaches. This is accomplished by a linear blending approach that (like barycentric blending) ensures the output rendering is a smooth function of the input, where the blending is performed on the activations of a neural network's encoding of our input images instead of on the raw pixel intensities of the input images.
Our complete network structure is shown in Fig.~\ref{fig:architecture}.
Given a query light direction $\light$, we identify the $k$ captured images in the OLAT scan whose corresponding light directions are nearest to the query light direction, which we call the \emph{active set} $\mathrm{A}(\light)$.
These OLAT images $\Mat{I}_i$ and their corresponding light directions ${\boldsymbol\ell}_i$ are then each independently processed in parallel by the encoder $\network_{e}(\cdot)$ of our convolutional neural network (or equivalently, they are processed as a single ``batch''), thereby producing a multi-scale set of internal neural network activations that describe all $k$ images.
After that, the set of $k$ activations at each layer of the network is pooled into a single set of activations per layer, using a weighted average in which the weights are a function of the query light and each input light, $\mathrm{W}(\light, {\boldsymbol\ell}_i)$. This weighted average is designed to remove the aliasing introduced by the nearest-neighbor sampling in the active set selection stage.
Together with the query light direction $\light$, these pooled feature maps are then fed into the decoder $\network_{d}(\cdot)$ by means of skip links from each level of the encoder, thereby producing the final predicted image $\image\left(\light\right)$.
Formally, our final image synthesis procedure is:
\begin{equation}
\image\left(\light\right) = \network_{d}\left(\sum_{i\in\mathrm{A}(\light)}\mathrm{W}(\light, {\boldsymbol\ell}_i) \network_{e}\left(\Mat{I}_i, {\boldsymbol\ell}_i\right), \light\right).
\label{equ:decoder}
\end{equation}
This hybrid approach of nearest-neighbor selection and neural network processing allows us to learn a single neural network that produces high-quality results and generalizes well across query light directions and across subjects in our OLAT dataset.
Our active set construction approach is explained in Section~\ref{sec:activeset}, our alias-free pooling is explained in Section~\ref{sec:pooling}, the network architecture is described in Section~\ref{sec:network}, and our progressive training procedure is discussed in Section~\ref{sec:loss}.
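Before detailing these components, the synthesis procedure of Eq.~\ref{equ:decoder} can be sketched schematically as follows; the encoder, decoder, and weighting function are placeholders for the components described in the following subsections, and the multi-scale skip connections are omitted for brevity.
\begin{verbatim}
# Schematic forward pass for the synthesis equation above (simplified).
def synthesize(query_dir, olat_images, olat_dirs, encoder, decoder,
               weight_fn, k=8):
    # 1. Active set: the k stage lights nearest to the query direction.
    active = (olat_dirs @ query_dir).argsort()[-k:]
    # 2. Encode each active OLAT image (and its light direction) independently.
    feats = [encoder(olat_images[i], olat_dirs[i]) for i in active]
    # 3. Alias-free pooling: a weighted average of the encoded activations.
    w = weight_fn(query_dir, [olat_dirs[i] for i in active])
    pooled = sum(wi * fi for wi, fi in zip(w, feats))
    # 4. Decode the pooled activations together with the query direction.
    return decoder(pooled, query_dir)
\end{verbatim}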
\begin{figure}
\centering
\includegraphics[width=\linewidth]{image/dropout.png}
\caption{
The OLAT images taken from a light stage have a uniform hexagonal pattern, which means that the distances between each light and its nearest neighbors are highly regular (a). In contrast, at test time we want to synthesize images corresponding to unseen light directions that do not lie on this hexagonal grid, and whose neighboring distances will therefore be irregular (c). During training we therefore sample a random subset of nearest neighbors for use in the active set of our model (b), which forces the network to adapt to challenging and irregular distributions of neighbor-distances that better match those that will be seen at test time.
}
\label{fig:activeset}
\end{figure}
\subsection{Active Set Selection}
\label{sec:activeset}
Light stages are conventionally constructed by placing lights on a regular hexagonal tessellation of a sphere (with some ``holes'' for cameras and other practical concerns), as shown in Fig.~\ref{fig:activeset}.
As discussed, at test time our model works by identifying the OLAT images and lights that are nearest to the desired query light direction, and averaging their neural activations.
But this natural approach, when combined with the regularity of the sampling of lights in the light stage, presents a number of problems for training our model.
First, we can only supervise our super-resolution model using ``virtual'' lights that exactly coincide with the real lights of the light stage, as these are the only light directions for which we have ground-truth images (this will also be a problem when evaluating our model, as will be discussed in Sec.~\ref{sec:eval}).
Second, this regular hexagonal sampling means that, for any given light in the stage, the distances between it and its neighbors will always exhibit a highly regular pattern (Fig.~\ref{fig:activeset}a). For example, the $6$ nearest neighbors of every point on a hexagonal tiling are guaranteed to have exactly the same distance to that point. In contrast, at test time we would like to be able to produce renderings for query light directions that correspond to arbitrary points on the sphere, and those points will likely have irregular distributions of neighboring lights (Fig.~\ref{fig:activeset}c). This represents a significant deviation between our training data and our test data, and as such we should expect poor generalization at test time if we were to naively train on highly-regular sets of nearest neighbors.
To address this issue, we adopt a different technique for sampling neighbors for use in our active set than what is used during test time. For each training iteration, we first identify a larger set of $m$ nearest neighbors near the query light (which in this case is identical to one of the real lights in the stage), and among them randomly select only $k<m$ neighbors to use in the active set (in practice, we use $m=16$ and $k=8$). As shown in Fig.~\ref{fig:activeset}b, this results in irregular neighbor sampling patterns during training, which simulates our test-time scenario wherein the query light is at a variety of locations in between the real input light sources.
This approach shares a similar motivation as that of dropout~\cite{srivastava2014dropout} in neural networks, in which network activations are randomly set to $0$ during training to prevent overfitting. Here we instead randomly remove input images, which also has the effect of preventing the model from overfitting to the hexagonal pattern of the light stage while training our network, by forcing it to operate on more varied inputs.
Notice that the query light itself is included in the candidate set, to reflect the fact that during test-time the ``virtual'' query light may be next to a real light source. As we will show in Sec.~\ref{sec:eval} and in the supplementary video, this active set selection approach results in a learned model whose synthesized shadows move more smoothly and at a more regular rate than is achieved with a naive nearest-neighbor sampling approach.
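A minimal sketch of this training-time selection (function and variable names are ours):
\begin{verbatim}
# Take the m lights nearest to the training query light (which is itself a
# real stage light and is included among the candidates), then keep a
# random subset of k of them as the active set.
import numpy as np

def sample_active_set(query_dir, light_dirs, m=16, k=8, rng=np.random):
    """query_dir: (3,) unit vector of the training light; light_dirs: (n, 3)."""
    candidates = np.argsort(light_dirs @ query_dir)[-m:]  # m nearest lights
    return rng.choice(candidates, size=k, replace=False)  # random k of them
\end{verbatim}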
\begin{figure}
\centering
\includegraphics[width=\linewidth]{image/trajectory.png}
\caption{Varying the query light direction will cause OLAT images to leave and enter the active set of our model, which introduces aliasing that, if unaddressed, results in jarring temporal artifacts in our renderings. To address this, we use an ``alias-free pooling'' technique to ensure that the network activations of each OLAT image are averaged in a way that suppresses this aliasing. We use a weighted average where the weights are smooth, and are exactly zero at the point where lights enter and leave the active set.
}
\label{fig:weight}
\end{figure}
\subsection{Alias-Free Pooling}
\label{sec:pooling}
A critical component of our model is the design of the skip links from each level of the encoder to its corresponding level in the decoder. This component is responsible for taking the network activations corresponding to the 8 images in our active set and reducing them to one set of activations corresponding to a single output, which will then be decoded into an image. This requires a pooling operator over these 8 images. The pooling operator must be permutation-invariant, as the images in our active set may correspond to any OLAT light direction and may be presented in any order. Standard permutation-invariant pooling operators, such as average-pooling or max-pooling, are not sufficient for our case, because they do not suppress \emph{aliasing}.
As the query light direction moves across the sphere, images will enter and leave the active set of our model, which will cause the network activations within our encoder to change suddenly (see Fig.~\ref{fig:weight}). If we use simple average-pooling or max-pooling, the activations in our decoder will also vary abruptly, resulting in unrealistic flickering artifacts or temporal instability in our output renderings as the light direction varies. \changed{In other words, the point-sampled signal should go through an effective prefiltering process in order to suppress these artifacts.}
The root cause of this problem is that our active set is an aliased observation of the input images, and average- or max-pooling allows this aliasing to persist. We therefore introduce a technique for alias-free pooling to address this issue. We use a weighted average as our pooling operator where the weight of each item in our active set is a continuous function of the query light direction, and where the weight of each item is guaranteed to be zero at the moment it enters or leaves the active set.
We define our weighting function between the query light direction $\light$ and each OLAT light direction ${\boldsymbol\ell}_i$ as follows:
\newcommand{s}{s}
\newcommand{\rawweight}{\widetilde{\mathrm{W}}}
\begin{equation}
\begin{gathered}
\rawweight(\light, {\boldsymbol\ell}_i) = \operatorname{max}\left(0, e^{s\left(\light\cdot{\boldsymbol\ell}_i - 1\right)} - \min_{j \in \mathrm{A}(\light)} e^{s\left(\light\cdot{\boldsymbol\ell}_j - 1\right)}\right), \\
\mathrm{W}(\light, {\boldsymbol\ell}_i) = \frac{\rawweight(\light, {\boldsymbol\ell}_i)}{\sum_j \rawweight(\light, {\boldsymbol\ell}_j)},
\label{equ:weight}
\end{gathered}
\end{equation}
where $s$ is a learnable parameter that adjusts the decay of the weight with respect to the distance and each ${\boldsymbol\ell}$ is a normalized vector in 3D space. During training, parameter $s$ will be automatically adjusted to balance between selecting the nearest neighbor ($s = +\infty$) and taking an unweighted average of all neighbors ($s = 0$).
Our weighting function is an offset spherical Gaussian, similar to the normalized Gaussian distance between the query light's Cartesian coordinates and those of the other lights in our active set, but where we have subtracted out the unnormalized weight corresponding to the most distant light in the active set (and clipped the resulting weights at $0$).
This adaptive truncation is necessary because the lights in the light stage may be spaced irregularly (due to holes in the stage for cameras or other reasons),
which means that a fixed truncation may be too aggressive in setting weights to zero in regions where lights are sampled less frequently. We instead leverage the fact that
when a light exits the active set, a new light will enter it at exactly the same time with exactly the same distance to the query light. This allows us to truncate our Gaussian weights using the maximum distance in the active set, which ensures that lights have a weight of zero as they leave or enter the active set. This results in renderings that change smoothly as we move the query light direction.
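A minimal sketch of the weighting function of Eq.~\ref{equ:weight} (in the real model these weights are applied to network activations; the small epsilon is only a numerical safeguard of ours):
\begin{verbatim}
import numpy as np

def alias_free_weights(query_dir, active_dirs, s):
    """query_dir: (3,); active_dirs: (k, 3) unit light directions of the
    active set; s: learned sharpness parameter."""
    g = np.exp(s * (active_dirs @ query_dir - 1.0))  # offset spherical Gaussian
    raw = np.maximum(0.0, g - g.min())               # farthest light gets weight 0
    return raw / (raw.sum() + 1e-12)                 # normalize (epsilon for safety)
\end{verbatim}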
\subsection{Network Architecture}
\label{sec:network}
The remaining components of our model consist of the conventional building blocks used in constructing convolutional neural networks, and can be seen in Fig.~\ref{fig:architecture}.
The encoder of our network consists of $3\times3$ convolutional neural network blocks (with a stride of 2 so as to reduce resolution by half), each of which is followed by group normalization \cite{Wu_2018_ECCV} and a PReLU~\cite{he2015delving} activation function.
The number of hidden units for each layer begins at $32$ and doubles after each layer, but is clipped at $512$.
The input to our encoder is a set of $8$ RGB input images corresponding to the nearby OLAT images in our active set, each of which has been concatenated with the $xyz$ coordinate of its source light (tiled to every pixel) giving us 8 6-channel input images.
These images are processed along the ``batch'' dimension of our network, and so are treated identically at each level of the encoder.
These 8 images are then pooled down to a single ``image'' (\textit{i}.\textit{e}., a single batch) of activations using the alias-free pooling approach of Section~\ref{sec:pooling}, each of which is concatenated onto the internal activations of the network's decoder.
The decoder of the network begins with a series of fully-connected (aka ``dense'') neural network blocks that take as input the query light direction $\light$, each of which is followed by instance normalization~\cite{ulyanov2016instance} and a PReLU activation function. These activations are then upsampled to $4 \times 4$ and used as the basis of our decoder. Each layer of the decoder consists of a $3 \times 3$ transposed convolutional neural network block (with a stride of 2 so as to double resolution) which is again followed by group normalization and a PReLU activation function. The input to each layer's conv block is a concatenation of the upsampled activations from the previous decoder level, with the pooled activations from the encoder that have been ``skip'' connected from the same spatial scale.
The final activation function before any output image is produced is a sigmoid function, as our images are normalized to $[0, 1]$.
Because our network is fully convolutional~\cite{long2015fully}, it can be evaluated on images of arbitrary resolution, with GPU memory being the only limiting factor.
We train on $512 \times 512$ resolution images for the sake of speed, and evaluate and test on $1024 \times 1024$ resolution images to maximize image quality.
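As a concrete but non-authoritative illustration, one encoder level and the channel schedule described above might be written as follows in tf.keras; the number of levels and the use of Keras' built-in GroupNormalization layer are our assumptions, and the original implementation may differ.
\begin{verbatim}
import tensorflow as tf

def encoder_level(x, channels):
    # Stride-2 3x3 convolution, group normalization, PReLU.
    x = tf.keras.layers.Conv2D(channels, 3, strides=2, padding='same')(x)
    x = tf.keras.layers.GroupNormalization()(x)
    return tf.keras.layers.PReLU(shared_axes=[1, 2])(x)

def encode(x, num_levels=7):
    feats, c = [], 32
    for _ in range(num_levels):
        x = encoder_level(x, c)
        feats.append(x)          # kept for the skip connections to the decoder
        c = min(c * 2, 512)      # channels double per level, capped at 512
    return feats
\end{verbatim}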
\subsection{Loss Functions and Training Strategy}
\label{sec:loss}
We supervise the training of our model using an $L_1$ loss on pixel intensities. Formally, our loss function is:
\begin{equation}
\mathcal{L}_d = \sum_i\left\lVert \Mat{M} \odot \left(\Mat{I}_i - \image\left({\boldsymbol\ell}_i\right)\right) \right\rVert_{1},
\label{eq:lossfun}
\end{equation}
where $\Mat{I}_i$ is the ground-truth image under light $i$, and $\image\left({\boldsymbol\ell}_i\right)$ is our prediction. When computing the loss over the image, we use a precomputed binary mask $\Mat{M}$ to mask out pixels that are known to belong to the background rather than the subject.
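A minimal sketch of this masked loss (tensor shapes are illustrative):
\begin{verbatim}
import tensorflow as tf

def masked_l1_loss(pred, target, mask):
    """pred, target: (B, H, W, 3); mask: (B, H, W, 1), 1 on the subject."""
    return tf.reduce_sum(tf.abs(mask * (pred - target)))
\end{verbatim}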
During training, we construct each training data instance by randomly selecting a human subject in our training dataset and then randomly selecting one OLAT light direction $i$.
The image corresponding to that light $\Mat{I}_i$ will be used as the ground-truth image our model will attempt to reconstruct, and the ``query'' light direction for our model will be the light corresponding to that image ${\boldsymbol\ell}_i$.
We then identify a set of 8 neighboring images/lights to include in our active set using the selection procedure described in Section~\ref{sec:activeset}.
Our only data augmentation is a randomly-positioned $512\times512$ crop of all images in each batch.
Progressive training has been found to be effective for accelerating and stabilizing the training of GANs for high-resolution image synthesis~\cite{karras2017progressive}, and though our model is not a GAN (but is instead a convolutional encoder-decoder architecture with skip connections) we found it to also benefit from a progressive training strategy.
We first inject downsampled image inputs directly into a coarse layer of our encoder and supervise training by imposing a reconstruction loss at a coarse layer of our decoder, resulting in a shallower model that is easier to train. As training proceeds, we add additional convolutional layers to the encoder and decoder, thereby gradually increasing the resolution of our model until we arrive at the complete network and the full image resolution. In total, we train our network for 200,000 iterations, using 8 NVIDIA V100 GPUs, which takes approximately 10 hours. Please see the detailed training procedure in the supplementary material.
Our model is implemented in Tensorflow~\cite{Tensorflow} and trained using Adam~\cite{KingmaB14} with a batch size of 1 (the ``batch'' dimension of our tensors is used to represent the $8$ images in our active set), a learning rate of $10^{-3}$, and default hyperparameter settings ($\beta_1 = 0.9, \beta_2 = 0.999, \epsilon=10^{-7}$).
\section{Evaluation}
\label{sec:eval}
\begin{figure*}
\begin{tabular}{@{}c@{\quad\,\,}c@{\,\,}c@{\,\,}c@{}}
\begin{subfigure}[b]{.23\linewidth}
\centering
\includegraphics[width=\linewidth]{image/steadyshadow/ref.png}
\caption{One rendering, for reference}\label{subfig:movingshadowa}
\end{subfigure}
&
\begin{subfigure}[b]{.2562\linewidth}
\centering
\includegraphics[width=\linewidth]{image/steadyshadow/ours_comp_overlay_theta.png}
\caption{Our model}\label{subfig:movingshadowb}
\end{subfigure}
&
\begin{subfigure}[b]{.23\linewidth}
\centering
\includegraphics[width=\linewidth]{image/steadyshadow/ours_nearest_neighbor_comp_overlay.png}
\caption{Our model w/ naive neighbors}\label{subfig:movingshadowc}
\end{subfigure}
&
\begin{subfigure}[b]{.23\linewidth}
\centering
\includegraphics[width=\linewidth]{image/steadyshadow/ours_ave_pool_comp_overlay.png}
\caption{Our model w/ avg pooling}\label{subfig:movingshadowd}
\end{subfigure}
\end{tabular}
\caption{
A visualization of how our learned model synthesizes renderings in which shadows move smoothly as a function of light direction.
In (a) we show a rendering from our model for some virtual light $\light$ with a horizontal angle of $\theta$, and highlight one image strip that includes a horizontal cast shadow.
In (b) we repeatedly query our model with $\theta$ values that should induce a linear horizontal translation of the shadow boundary in the image plane, and by stacking these image strips we can see this linear trend emerge (highlighted in red).
In (c) and (d) we do the same for ablations of our model that do not have our active-set random selection procedure nor our alias-free pooling, and we see that the resulting shadow boundary does not vary smoothly or linearly.
}
\label{fig:movingshadow}
\end{figure*}
We use the OLAT portrait dataset from~\cite{Sun2019}, which contains 22 subjects with multiple facial expressions captured using a light stage and a 7-camera system.
The light stage consists of 302 LEDs uniformly distributed on a spherical dome, and capturing a subject takes roughly 6 seconds. Each capture produces an OLAT scan of a specific facial expression for each camera, consisting of 302 images, and we treat the OLAT scans from different cameras as independent scans. Because the subject is asked to stay still (and an optical flow algorithm~\cite{Wenger:2005:PRR} is applied to correct small movements), the 302 captured images in each OLAT scan are aligned and differ only in lighting direction. We manually select 4 OLAT scans with a mixture of subjects and views as our validation set, and choose another 16 OLAT scans with good coverage of gender and diverse skin tones as training data. Our 16 training scans cover only 5 of the 7 cameras; the remaining 2 are covered by the validation data. We train our network using all lights from our OLAT data in a canonical global lighting coordinate frame, which allows us to train a single network for all viewpoints in our training data. We train one single model for all subjects in our training dataset, which we found to match the performance of training an individual model for each subject.
Empirically evaluating our model presents a significant challenge: our model is attempting to super-resolve an undersampled scan from a light stage, which means that the only ground-truth that is available for benchmarking is \emph{also} undersampled. In other words, the goal of our model is to accurately synthesize images that correspond to virtual lights in between the real lights of the stage --- but we do not have ground-truth images that correspond to those virtual lights. In addition, the model also needs to generalize to an unseen view and subject.
For these reasons, qualitative results (figures, videos) are preferred, and we encourage readers to view our figures and the accompanying video.
In the quantitative results presented here, we use held-out real images lit by real lights on our light stage as a validation set.
When evaluating one of these validation images, we do not use the active-set selection technique of Section~\ref{sec:activeset}, and instead just sample the $k=8$ nearest neighbors (excluding the validation image itself from the input). Holding out the validation image from the inputs is critical, as otherwise a model could simply reproduce the input image as an error-free output. This held-out validation approach is not ideal, as all such evaluations will follow the same regular sampling pattern of our light stage. This evaluation task is therefore more biased than the real task of predicting images away from the sampling pattern of the light stage.
Selecting an appropriate metric for measuring image reconstruction accuracy for our task is not straightforward.
Conventional image interpolation techniques often result in ghosting artifacts or duplicated highlights, which are perceptually salient but often not penalized heavily by traditional image metrics such as per-pixel RMSE. We therefore evaluate image quality using multiple image metrics: RMSE, the Sobolev $H^1$ norm~\cite{Ravi2}, DSSIM~\cite{SSIM}, and E-LPIPS~\cite{elpips}. RMSE measures pixel-wise error, the $H^1$ norm emphasizes image gradient error, while DSSIM and E-LPIPS approximate an overall perceptual difference between the predicted image and the ground truth. Still, images and videos are preferred for comparison.
\definecolor{Yellow}{rgb}{1,1, 0.6}
\definecolor{Orange}{rgb}{1,0.8, 0.6}
\definecolor{Red}{rgb}{1, 0.6, 0.6}
\begin{table}[t]
\caption{
Here we benchmark our model against prior work and ablations of our model on our validation dataset. We report the arithmetic mean of each metric across the validation set. The top three results of each metric are highlighted in red, orange, and yellow, respectively. While ``Ours w/naive neighbors'' has the lowest error according to this evaluation, ``Our model'' performs better in our \emph{real} test-time scenario where the synthesized light does not lie on a regular hexagonal grid (see text and Fig.~\ref{fig:movingshadow} for details).
\label{table:compare}}
\begin{center}
\begin{tabular}{p{33mm}@{\,\,}|@{\,\,}c@{\quad}c@{\quad}c@{\quad}c}
Algorithm & RMSE & $H^1$ & DSSIM & E-LPIPS \\
\hline
Our model & \cellcolor{Orange} 0.0160 & \cellcolor{Orange} 0.0203 & \cellcolor{Orange} 0.0331 & \cellcolor{Orange} 0.00466 \\
Ours w/naive neighbors & \cellcolor{Red} 0.0156 & \cellcolor{Red} 0.0199 & \cellcolor{Red} 0.0322 & \cellcolor{Red} 0.00449 \\
Ours w/avg-pooling & 0.0203 & 0.0241 & 0.0413 & 0.00579 \\
\hline
Linear blending & \cellcolor{Yellow} 0.0191 & \cellcolor{Yellow} 0.0232 & \cellcolor{Yellow} 0.0366 & 0.00503 \\
\citet{fuchs2007superresolution} & 0.0195 & 0.0258 & 0.0382 & \cellcolor{Yellow} 0.00485\\
Photometric stereo & 0.0284 & 0.0362 & 0.0968 & 0.00895\\
\citet{xu2018deep} &&&& \\
\quad w/ 8 optimal lights & 0.0410 & 0.0437 & 0.1262 & 0.01666\\
\quad w/ adaptive input & 0.0259 & 0.0291 & 0.1156 & 0.00916\\
\citet{meka2019deep} & 0.0505 & 0.0561 & 0.1308 & 0.01482\\
\end{tabular}
\end{center}
\end{table}
\setlength{\tabcolsep}{0pt}
\newcommand{0.10\textwidth}{0.10\textwidth}
\newcommand{\cropwidth}{0.105\textwidth}
\begin{figure*}
\centering
\begin{tabular}[c]{@{}c@{\,\,}c@{\,\,}c@{\,\,}c@{\,\,}c@{\,\,}c@{\,\,}c@{\,\,}c@{\,\,}c@{}}
{\small \Centerstack{(a) Ours\\(full image)}} & {\small (b) Groundtruth} & {\small (c) Ours} & {\small \Centerstack{(d) Linear\\blending}} & {\small \Centerstack{(e) Fuchs\\\textit{et al}. ~\shortcite{fuchs2007superresolution}}} & {\small \Centerstack{(f) Photometric\\stereo}} & {\small \Centerstack{(g) Xu\\ \textit{et al}. ~\shortcite{xu2018deep} w/\\optimal sample}} & {\small \Centerstack{(h) Xu\\\textit{et al}. ~\shortcite{xu2018deep} w/\\adaptive sample}} & {\small \Centerstack{(i) Meka\\\textit{et al}. ~\shortcite{meka2019deep}}}\\
\includegraphics[width=\cropwidth]{image/related/ours_0.png}&
\includegraphics[width=\cropwidth]{image/related/crop_0_0.png}&
\includegraphics[width=\cropwidth]{image/related/crop_0_1.png}&
\includegraphics[width=\cropwidth]{image/related/crop_0_2.png}&
\includegraphics[width=\cropwidth]{image/related/crop_0_3.png}&
\includegraphics[width=\cropwidth]{image/related/crop_0_4.png}&
\includegraphics[width=\cropwidth]{image/related/crop_0_5.png}&
\includegraphics[width=\cropwidth]{image/related/crop_0_6.png}&
\includegraphics[width=\cropwidth]{image/related/crop_0_7.png}\\
\includegraphics[width=\cropwidth]{image/related/ours_1.png}&
\includegraphics[width=\cropwidth]{image/related/crop_1_0.png}&
\includegraphics[width=\cropwidth]{image/related/crop_1_1.png}&
\includegraphics[width=\cropwidth]{image/related/crop_1_2.png}&
\includegraphics[width=\cropwidth]{image/related/crop_1_3.png}&
\includegraphics[width=\cropwidth]{image/related/crop_1_4.png}&
\includegraphics[width=\cropwidth]{image/related/crop_1_5.png}&
\includegraphics[width=\cropwidth]{image/related/crop_1_6.png}&
\includegraphics[width=\cropwidth]{image/related/crop_1_7.png}\\
\includegraphics[width=\cropwidth]{image/related/ours_2.png}&
\includegraphics[width=\cropwidth]{image/related/crop_2_0.png}&
\includegraphics[width=\cropwidth]{image/related/crop_2_1.png}&
\includegraphics[width=\cropwidth]{image/related/crop_2_2.png}&
\includegraphics[width=\cropwidth]{image/related/crop_2_3.png}&
\includegraphics[width=\cropwidth]{image/related/crop_2_4.png}&
\includegraphics[width=\cropwidth]{image/related/crop_2_5.png}&
\includegraphics[width=\cropwidth]{image/related/crop_2_6.png}&
\includegraphics[width=\cropwidth]{image/related/crop_2_7.png}\\
\end{tabular}
\caption{
Here we present a qualitative comparison between our method and other light interpolation algorithms.
Traditional methods (linear blending, \citet{fuchs2007superresolution}, photometric stereo) retain detail but suffer from ghosting artifacts in shadowed regions.
Results from \citet{xu2018deep} and \citet{meka2019deep} exhibit significant oversmoothing and brightness changes.
Our method retains details and synthesizes shadows that resemble the ground truth.
}
\label{fig:related}
\end{figure*}
\setlength{\tabcolsep}{6pt}
\subsection{Ablation Study}
We first evaluate against ablated versions of our model, with results shown in Tab.~\ref{table:compare}.
In the ``Ours w/naive neighbors'' ablation we use the $k=8$ nearest neighbors in our active set during training. This setup leads to a match between our training and validation data, which results in better numerical performance (as shown in Tab.~\ref{table:compare}) but also significant overfitting: this apparent improvement in performance is misleading, because the validation set of our dataset has the same overly-regular sampling as the training set.
During our \emph{real} test-time scenario in which we synthesize with lights that do not lie on the regular hexagonal grid of our light stage, we see this ablated model generalizes poorly. In Fig.~\ref{fig:movingshadow} we visualize the output of our model and ablations of our model as a function of the query light direction. We see that our model is able to synthesize a cast shadow that is a smooth linear function in the image plane of the angle of the query light (after accounting for foreshortening, \textit{etc}. ). Ablations of our technique do not reproduce this linearly-varying shadow, due to the aliasing and overfitting problems described earlier. See the supplemental video for additional visualizations.
In the ``Ours w/avg-pooling'' ablation we replace the alias-free pooling of our model with simple average pooling. As shown in Tab.~\ref{table:compare}, ablating this component reduces performance. But more importantly, ablating this component also causes flickering during our \emph{real} test-time scenario in which we smoothly vary our light source, and this is not reflected in our quantitative evaluation. Because average pooling assigns a non-zero weight to images as they enter and exit our active set, renderings from this model will contain significant temporal instability. See the supplemental video for examples.
\subsection{Related Work Comparison}
We compare our results against related approaches that are capable of solving the relighting problem. The ``Linear blending'' baseline in Tab.~\ref{table:compare} produces competitive results, despite being a very simple algorithm: we simply blend the input images of our light stage according to our alias-free weights.
Because linear blending directly interpolates aligned pixel values, it is often able to retain accurate high-frequency details in flat regions, and this strategy works well for minimizing our error metrics.
However, linear blending produces significant ghosting artifacts in shadows and highlights, as shown in Fig.~\ref{fig:related}. Though these errors are easy to detect visually, they appear to be hard to measure empirically.
We evaluate against the layer-based technique of \citet{fuchs2007superresolution} by decomposing an OLAT into diffuse, specular, and visibility layers, and interpolating the illumination individually for each layer. Although the method works well on specular objects as shown in the original paper, it performs less well on OLATs of human subjects, as shown in Tab.~\ref{table:compare}. This appears to be due to the complex specularities on human skin not being tracked accurately by the optical flow algorithm of \citet{fuchs2007superresolution}. Additionally, the interpolation of the visibility layer sometimes contains artifacts, which results in cast shadows being predicted incorrectly. That being said, the algorithm results in fewer ghosting artifacts than the linear blending algorithm, as shown in Fig.~\ref{fig:related} and as reflected by the E-LPIPS metric.
Using the layer decomposition produced by \citet{fuchs2007superresolution}, we additionally perform photometric stereo on the OLAT data by simple linear regression to estimate a per-pixel albedo image and normal map. Using this normal map and albedo image we can then use Lambertian reflectance to render a new diffuse image corresponding to the query light direction, which we add to the specular layer from~\cite{fuchs2007superresolution} to produce our final rendering.
As shown in Tab.~\ref{table:compare}, this approach underperforms that of \citet{fuchs2007superresolution}, likely due to the reflectance of human faces being non-Lambertian. Additionally, the scattering effect of human hair is poorly modeled in terms of a per-pixel albedo and normal vector. These limiting assumptions result in overly sharpened and incorrect shadow predictions, as shown in Fig.~\ref{fig:related}.
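For reference, a simplified sketch of the Lambertian fit underlying this baseline (operating on the diffuse layer only; function and variable names are ours) is:
\begin{verbatim}
# Per-pixel least-squares photometric stereo under a Lambertian model.
import numpy as np

def lambertian_photometric_stereo(diffuse_olat, light_dirs):
    """diffuse_olat: (n, H, W) grayscale diffuse layer; light_dirs: (n, 3).
    Returns per-pixel unit normals (H, W, 3) and albedo (H, W)."""
    n, H, W = diffuse_olat.shape
    I = diffuse_olat.reshape(n, -1)                     # (n, H*W)
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, H*W): albedo * normal
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / np.maximum(albedo, 1e-8)).T.reshape(H, W, 3)
    return normals, albedo.reshape(H, W)

def render_diffuse(normals, albedo, query_dir):
    shading = np.clip(normals @ query_dir, 0.0, None)   # Lambertian n . l
    return albedo * shading
\end{verbatim}
The per-pixel least-squares fit makes the Lambertian assumption explicit, which is precisely where this baseline breaks down on skin and hair.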
In contrast to this photometric stereo approach and the layer-based approach of \citet{fuchs2007superresolution}, our model does not attempt to factorize the human subject into a predefined reflectance model wherein interpolation can be explicitly performed. Our model is instead trained to identify a latent vector space of network activations in which naive linear interpolation results in accurate non-linearly interpolated images, which results in more accurate renderings.
The technique of \citet{xu2018deep} (retrained on our training data) represents another possible candidate for addressing our problem.
This technique does not natively solve our problem. In order to find the optimal lighting directions for relighting, it requires as input \emph{all} $302$ high-resolution images in each OLAT scan in the first step, which significantly exceeds the memory constraints of modern GPUs. To address this, we first jointly train the Sample-Net and the Relight-Net on our images (downsampled by a factor of 4$\times$ due to memory constraints) to identify 8 optimal directions from the 302 total directions of the light stage.
Using those 8 optimal directions, we then retrain the Relight-Net using the full-resolution images from our training data, as prescribed in \citet{xu2018deep}. Table~\ref{table:compare} shows that this approach works poorly on our task. This may be because this technique is built around 8 fixed input images and is naturally disadvantaged compared to our approach, which is able to use any of the 302 light stage images as input. We therefore also evaluate a variant of \citet{xu2018deep} where we use the same active-set selection approach used by our model to select the images used to train their Relight-Net. By using our active-set selection approach (Sec.~\ref{sec:activeset}) this baseline is able to better reason about local information, which improves performance as shown in Tab.~\ref{table:compare}. However, this baseline still results in flickering artifacts when rendering with moving lights, because (unlike our approach) it is sensitive to the aliasing induced when images leave and enter the active set.
\setlength{\tabcolsep}{1pt}
\renewcommand{0.10\textwidth}{0.175\textwidth}
\renewcommand{\cropwidth}{0.13\textwidth}
\begin{figure*}[!ht]
\centering
\begin{tabular}[t]{ccccccc}
$n = 100$ & $n = 150$ & $n = 200$ & $n = 250$ & $n = 302$ & Groundtruth & Groundtruth (Complete) \\
\includegraphics[width=\cropwidth]{image/sparse/crop_0_ours_0.png}&
\includegraphics[width=\cropwidth]{image/sparse/crop_0_ours_1.png}&
\includegraphics[width=\cropwidth]{image/sparse/crop_0_ours_2.png}&
\includegraphics[width=\cropwidth]{image/sparse/crop_0_ours_3.png}&
\includegraphics[width=\cropwidth]{image/sparse/crop_0_ours_4.png}&
\includegraphics[width=\cropwidth]{image/sparse/crop_0_gt.png} &
\multirow{4}{*}[6.5em]{\includegraphics[trim=100 0 100 0, clip, width=0.10\textwidth]{image/sparse/crop_0_gtfull.png}}\\
\multicolumn{6}{c}{\small (a) Ours} & \\
\includegraphics[width=\cropwidth]{image/sparse/crop_0_linearblend_0.png}&
\includegraphics[width=\cropwidth]{image/sparse/crop_0_linearblend_1.png}&
\includegraphics[width=\cropwidth]{image/sparse/crop_0_linearblend_2.png}&
\includegraphics[width=\cropwidth]{image/sparse/crop_0_linearblend_3.png}&
\includegraphics[width=\cropwidth]{image/sparse/crop_0_linearblend_4.png}&
\includegraphics[width=\cropwidth]{image/sparse/crop_0_gt.png} & \\
\multicolumn{6}{c}{\small (b) Linear Blending} & \\
\end{tabular}
\caption{Here we compare the performance of our model against linear blending as we reduce $n$, the number of lights in our light stage.
As we decrease the number of available lights from $n=302$ to $n=100$, the quality of our model's rendered shadow degrades slowly. Linear blending, in contrast, is unable to produce an accurate rendering even with access to all lights.
}
\label{fig:stagesubsample}
\end{figure*}
\setlength{\tabcolsep}{6pt}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{image/sparse/sparse_plot.pdf}
\caption{
The image quality of relighting algorithms gradually degrades as lights are removed from the light stage. However, our algorithm retains image quality with fewer lights to a much greater extent than naive linear blending.
}
\label{fig:sparse_plot}
\end{figure}
\setlength{\tabcolsep}{1pt}
\renewcommand{0.10\textwidth}{0.219\textwidth}
\renewcommand{\cropwidth}{0.27\textwidth}
\begin{figure*}[!ht]
\centering
\begin{tabular}[t]{cccc}
\makecell{Captured image under light A} & \multicolumn{2}{c}{$\xleftarrow{\hspace*{1cm}}$ Interpolation between captured lights $\xrightarrow{\hspace*{1cm}}$} & \makecell{Captured image under light B} \\
\multirow{8}{*}[2.9em]{\includegraphics[width=0.10\textwidth]{image/pointlight/crop_start.png}}&
\includegraphics[width=\cropwidth]{image/pointlight/crop_ours_0.png}&
\includegraphics[width=\cropwidth]{image/pointlight/crop_ours_1.png}&
\multirow{8}{*}[2.9em]{\includegraphics[width=0.10\textwidth]{image/pointlight/crop_end.png}}\\
& \multicolumn{2}{c}{\small (a) Ours}\\
&
\includegraphics[width=\cropwidth]{image/pointlight/crop_linearblend_0.png}&
\includegraphics[width=\cropwidth]{image/pointlight/crop_linearblend_1.png}&\\
& \multicolumn{2}{c}{\small (b) Linear Blending}\\
&
\includegraphics[width=\cropwidth]{image/pointlight/crop_ibr_0.png}&
\includegraphics[width=\cropwidth]{image/pointlight/crop_ibr_1.png}&\\
& \multicolumn{2}{c}{\small (c) \cite{xu2018deep} w/ adaptive sampling}\\
&
\includegraphics[width=\cropwidth]{image/pointlight/crop_drf_0.png}&
\includegraphics[width=\cropwidth]{image/pointlight/crop_drf_1.png}&\\
& \multicolumn{2}{c}{\small (d) \cite{meka2019deep}}\\
\end{tabular}
\caption{
Here we produce interpolated images corresponding to ``virtual'' lights between two real lights of the light stage.
Our model (a) produces renderings where sharp shadows and accurate highlights move realistically. Linear blending (b) and \citet{xu2018deep} with adaptive sampling (c) result in ghosting artifacts and duplicated highlights. The results from \citet{meka2019deep} (d) contain blurry highlights and shadows with unrealistic motion.
}
\label{fig:interpolation}
\end{figure*}
We also evaluate Deep Reflectance Fields~\cite{meka2019deep} for our task, which is also outperformed by our model. This is likely because their model is specifically designed for fast and approximate video relighting and uses only two images as input, while our model has access to the entire OLAT scan and is designed to prioritize high-quality rendering.
\subsection{Light Stage Subsampling}
An interesting question in light transport acquisition is how many images (light samples) are needed to reconstruct the full light transport function.
To address this question, we present an experiment in which we remove some lights from our training set and use only this subsampled data during training and inference. We reduce the number of lights on the light stage $n$ (while maintaining a uniform distribution on the sphere) to $[250, 200, 150, 100]$, while also changing the number of candidates $m$ and the active set size $k$ to $[14, 12, 10, 8]$ and $[7, 6, 5, 4]$ respectively.
Image quality on the complete validation dataset (with all 302 lights) as a function of the number of subsampled training/input lights is shown in Fig.~\ref{fig:sparse_plot}.
As expected, relighting quality decreases as we remove the lights, but we see that the rendering quality of our method decreases more slowly than that of linear blending.
This can also be observed in Fig.~\ref{fig:stagesubsample}, where we present relit renderings using these subsampled light stages.
We see that removing lights reduces accuracy compared to the ground truth, but that our synthesized shadows remain relatively sharp:
ghosting artifacts only appear when $n=100$.
In comparison, linear blending produces ghosting artifacts near shadow boundaries for all values of $n$.
During test time, our model can also produce accurate shadows and sharp highlights. Please refer to our supplementary video for our qualitative comparison.
\section{Continuous High-Frequency Relighting}\label{sec:app}
A key benefit of our method is the ability to ``super-resolve'' an OLAT scan with virtual lights at a higher resolution than the original light stage data, thereby enabling continuous high-frequency relighting with an essentially continuous lighting distribution (or equivalently, with a light stage whose sampling frequency is unbounded). In this section, we present three applications of this idea.
\paragraph{Precise Directional Light Relighting}
Traditional image-based relighting methods produce accurate results near the observed lights of the stage, but may introduce ghosting effects or inaccurate shadows when no observed light is nearby.
In Fig.~\ref{fig:interpolation} we interpolate images between two lights on the stage. As shown in the second and third rows, neither linear blending nor \citet{xu2018deep} with adaptive sampling produces realistic results; both contain multiple superposed shadows or highlights. The shadows produced by \citet{meka2019deep} are sharp, but do not move smoothly as the light moves. In contrast, our method is able to produce sharp and realistic images for arbitrary light directions: highlights and cast shadows move smoothly as we change the light direction, and our results have comparable sharpness to the (non-interpolated) groundtruth images that are available.
\begin{figure}[!t]
\centering
\begin{subfigure}[b]{.49\linewidth}
\centering
\includegraphics[width=\linewidth]{image/envmap/ours.png}
\caption{With super-resolution}
\end{subfigure}
\begin{subfigure}[b]{.49\linewidth}
\centering
\includegraphics[width=\linewidth]{image/envmap/linear.png}
\caption{Without super-resolution}
\end{subfigure}
\caption{
Our model (a) is able to produce accurate relighting results under high-frequency environments by super-resolving the light stage before performing image-based relighting~\cite{debevec2000acquiring}.
Using the light stage data as-provided (b) results in ghosting.
}
\label{fig:envmap}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=0.99\linewidth]{image/envmap/freq}
\caption{
In the top figure, each blue dot represents a lighting environment. We render a portrait under this environment using both linear-blending and our method, and measure the image difference using SSIM to evaluate the quality gain of our algorithm. The image quality improvement produced by our model becomes more apparent when the environment map has more high-frequency variation. In the bottom figure, we compare the rendered images using our model and linear blending under environment maps with different frequencies. Our model produces similar results to linear blending when the lighting variation is low frequency (left columns). As the lighting variation becomes higher frequency, our model produces better renderings with fewer artifacts and sharper shadows (right columns).
}
\label{fig:freq}
\end{figure}
\paragraph{High Frequency Environment Relighting}
OLAT scans captured from a light stage can be linearly blended to reproduce images that appear to have been captured under a specific environment. The pixel values of the environment map are usually distributed to the nearest or neighboring lights on the light stage for blending. This traditional approach may cause ghosting artifacts in shadows and specularities, due to the finite sampling of light directions on the light stage. Although this ghosting is hardly noticeable when the lighting is low-frequency, it can be significant when the environment contains high frequency lighting, such as the sun in the sky.
These ghosting artifacts can be ameliorated by using our model. Given an environment map, our algorithm can predict the image corresponding to the light direction of each pixel in the environment map. By taking a linear combination of all such images (weighted by their pixel values and solid angles), we are able to produce a rendering that matches the sampling resolution of the environment map.
As shown in Fig.~\ref{fig:envmap}, this approach produces images with sharp shadows and minimal ghosting when given a high-frequency environment, while linear blending does not. In this example, we use an environment resolution of $256 \times 128$, which corresponds to a super-resolved light stage with \num[group-separator={,}]{32768} lights. Please see our video for more environment relighting results.
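As an illustration of this aggregation step, the following Python sketch shows one way the weighted combination could be implemented for an equirectangular environment map. It is only a minimal sketch: \texttt{predict\_olat\_image} is a hypothetical stand-in for our network queried at a single light direction, and the per-pixel weight is simply the pixel radiance multiplied by its solid angle.
\begin{verbatim}
import numpy as np

def relight_with_envmap(predict_olat_image, envmap):
    """Sketch: weighted sum of per-direction renderings over an
    equirectangular environment map (He x We x 3, linear radiance)."""
    He, We, _ = envmap.shape
    rendering = None
    for row in range(He):
        # Polar angle of this row and the solid angle of one envmap pixel.
        theta = (row + 0.5) * np.pi / He
        d_omega = np.sin(theta) * (np.pi / He) * (2.0 * np.pi / We)
        for col in range(We):
            phi = (col + 0.5) * 2.0 * np.pi / We
            direction = np.array([np.sin(theta) * np.cos(phi),
                                  np.cos(theta),
                                  np.sin(theta) * np.sin(phi)])
            weight = envmap[row, col] * d_omega    # per-channel weight
            image = predict_olat_image(direction)  # super-resolved virtual light
            contribution = image * weight[None, None, :]
            rendering = contribution if rendering is None else rendering + contribution
    return rendering
\end{verbatim}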
We now analyze the relationship between the image quality gain from our model and the frequency of the lighting. Specifically, we evaluate for which environments, and at what frequencies, our algorithm will be required for accurate rendering, and conversely how our model performs in low-frequency lighting environments where previous solutions are adequate. For this purpose, we use one OLAT scan, and render it under 380 high quality indoor and outdoor environment maps (environments downloaded from hdrihaven.com) using both our model and linear blending. We then measure the image quality gain from our model by computing the DSSIM value between our rendering and that from linear blending. We measure the frequency of the environmental lighting by decomposing it into spherical harmonics (up to degree 50), and finding the degree below which 90\% of the energy can be recovered.
As shown in Fig.~\ref{fig:freq}, the benefit of using our model becomes larger when the frequency of the environment increases. For low-frequency light (up to degree 15 spherical harmonics), our model produces almost identical results compared to the traditional linear blending method. This is a desired property, showing that our method reduces gracefully to linear blending for low frequency lighting, and thus produces high quality results for any low or high-frequency environment.
As the frequency of the lighting becomes higher, the renderings of our model contain sharper and more accurate shadows without ghosting artifacts. Note that there is some variation among the environment maps, as expected; even a very high-frequency environment could coincidentally have its brightest lights aligned with one of the lights in the light stage, leading to low error for linear blending and comparable results to our method. Nevertheless, the trend in Fig.~\ref{fig:freq} is clear, with many high-frequency environments requiring our algorithm for lighting super-resolution.
According to the plot, we conclude that our model is necessary when the lighting frequency is equal to or greater than about 20, which means more than $21^2 = 441$ basis functions are needed to recover the lighting. This number is of the same order as the number of lights in the stage ($n=302$). This observation agrees with intuition and frequency analysis: if the environment cannot be recovered using the limited lighting basis of the light stage, then light super-resolution is needed to generate new bases in order to accurately render the shadows and highlights.
\setlength{\tabcolsep}{1pt}
\renewcommand{0.10\textwidth}{0.136\textwidth}
\renewcommand{\cropwidth}{0.107\textwidth}
\begin{figure}[!t]
\centering
\begin{tabular}[t]{@{}cccc@{}}
Our full image & \multicolumn{3}{c}{ Increased shadow radius $\xrightarrow{\hspace*{1cm}}$ } \\
\multirow{4}{*}[5.25em]{\includegraphics[trim=50 0 400 0, clip, width=0.10\textwidth]{image/softness/ours.png}}&
\includegraphics[width=\cropwidth]{image/softness/crop_ours_0.png}&
\includegraphics[width=\cropwidth]{image/softness/crop_ours_4.png}&
\includegraphics[width=\cropwidth]{image/softness/crop_ours_8.png}\\
& \multicolumn{3}{c}{\small (a) Our Model}\\
& \includegraphics[width=\cropwidth]{image/softness/crop_linear_0.png}&
\includegraphics[width=\cropwidth]{image/softness/crop_linear_4.png}&
\includegraphics[width=\cropwidth]{image/softness/crop_linear_8.png}\\
& \multicolumn{3}{c}{\small (b) Linear Blending}\\
\end{tabular}
\caption{
Soft shadows can be rendered by synthesizing and averaging images corresponding to directional light sources within some area on the sphere.
Soft shadows rendered by our method (a) are more realistic and contain fewer ghosting artifacts than those rendered using linear blending (b).
}
\label{fig:softness}
\end{figure}
\paragraph{Lighting Softness Control}
Our model's ability to render images under arbitrary light directions also allows us to control the softness of the shadow.
Given a light direction, we can densely synthesize images corresponding to the light directions around it, and average those images together to produce a rendering with realistic soft shadows (the sampling radius of these lights determines the softness of the resulting shadow). As shown in Fig.~\ref{fig:softness}, our model is able to synthesize realistic shadows with controllable softness, which is not possible using traditional linear blending methods.
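A minimal Python sketch of this softness control is shown below, again assuming a hypothetical \texttt{predict\_olat\_image} function that returns the rendering for a unit light direction; the sampling radius controls how soft the resulting shadow appears.
\begin{verbatim}
import numpy as np

def render_soft_shadow(predict_olat_image, center_dir, radius,
                       n_samples=32, seed=0):
    """Average renderings for directions sampled around center_dir."""
    rng = np.random.default_rng(seed)
    center = np.asarray(center_dir, dtype=float)
    center /= np.linalg.norm(center)
    accum = None
    for _ in range(n_samples):
        # Perturb the central direction and re-normalize to the unit sphere.
        direction = center + rng.normal(scale=radius, size=3)
        direction /= np.linalg.norm(direction)
        image = predict_olat_image(direction)
        accum = image if accum is None else accum + image
    return accum / n_samples
\end{verbatim}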
\section{Conclusions and Future Work}
\label{sec:conclude}
The light stage is a crucial tool for enabling the image-based relighting of human subjects in novel environments.
But as we have demonstrated, light stage scans are undersampled with respect to the angle of incident light, which means that synthesizing virtual lights by simply combining images results in ghosting on shadows and specular highlights.
We have presented a learning-based solution for super-resolving light stage scans, thereby allowing us to create a ``virtual'' light stage with a much higher angular lighting resolution, which allows us to render accurate shadows and highlights in high-frequency environment maps.
Our network works by embedding input images from the light stage into a learned space where network activations can then be averaged, and decoding those activations according to some query light direction to reconstruct an image.
In constructing this model, we have identified two critical issues: an overly regular sampling pattern in light stage training data, and aliasing introduced when pooling activations of a set of nearest neighbors. These issues are addressed through our use of a dropout-like supersampling of neighbors in our active set, and our alias-free pooling technique.
By combining ideas from conventional linear interpolation with the expressive power of deep neural networks, our model is able to produce renderings where shadows and highlights move smoothly as a function of the light direction.
This work is by no means the final word for the task of light stage super-resolution or image-based rendering. Approaches similar to ours could be applied to other general light transport acquisition problems, to other physical scanning setups, or to other kinds of objects besides human subjects.
\changed{Though our network can operate on inputs of different image resolutions, GPU memory has been a major bottleneck in applying our approach to images at much higher resolutions, such as 4K. A more memory-efficient approach to light stage super-resolution will be needed for production-level use in the visual effects industry.}
Though we exclusively pursue the one-light-at-a-time light stage scanning approach, alternative patterns where multiple lights are active simultaneously could be explored, which may enable a more sparse light stage design.
Though the undersampling of the light stage is self-evident in our visualizations, it may be interesting to develop a formal theory of this undersampling with respect to materials and camera resolution, so as to understand what degree of undersampling can be tolerated in the limit. We have made a first step in this direction with the graph in Fig.~\ref{fig:freq}.
Finally, it would be interesting to extend our approach to
enable the synthesis of novel viewpoints in addition to lighting directions.
We believe that light stage super-resolution represents an exciting direction for future research, and has the potential to further decrease the time and resource constraints required for reproducing accurate high-frequency relighting effects.
\section{Abstract}
Automatic credit scoring, which assesses the probability of default by loan applicants, plays a vital role in peer-to-peer lending platforms to reduce the risk of lenders. Although it has been demonstrated that dynamic selection techniques are effective for classification tasks, the performance of these techniques for credit scoring has not yet been determined. This study systematically benchmarks different dynamic selection approaches for ensemble learning models on a large and high-dimensional real-life credit scoring data set. The results of this study indicate that dynamic selection techniques are able to boost the performance of ensemble models, especially in imbalanced training environments.
\\ \textbf{Keywords:} Credit Scoring, Peer-to-peer Lending Platform, Dynamic Ensemble Learning
\section{Introduction}
The need for credit scoring goes back to the beginning of borrowing and lending. Lenders often attempt to gather information about loan applicants to distinguish reliable customers from unreliable ones based on the likelihood of being charged-off \citep{louzada2016classification}.
The objective of credit scoring is to assess the probability that a borrower will show some undesirable behavior in the future.
Financial institutions leverage scorecards, which are predictive models developed by classification algorithms to estimate the probability of default (PD) by loan applicants \citep{lessmann2015benchmarking}.
With the emergence of digital technology, social lending, also known as peer-to-peer (P2P) lending, has become an alternative to the traditional loan granting process, in which individuals lend and borrow money through an online platform that connects borrowers to lenders without the intermediation of banks. P2P lending platforms such as Prosper~\footnote{https://www.prosper.com/}, Lending Club~\footnote{https://www.lendingclub.com/}, and Zopa~\footnote{https://www.zopa.com/} ~\citep{malekipirbazari2015risk} can substantially reduce intermediary costs compared to traditional banks due to the lack of a "brick-and-mortar" business approach \citep{teply2019best}.
P2P lending can reduce the processing time and cost for both borrowers and lenders. Borrowers can request micro-loans directly from lenders with a lower interest rate and faster processing time. Lenders can earn higher rates of return with fewer administrative fees compared to traditional savings accounts. In traditional lending markets, banks and financial institutions can use collateral as a tool to enhance lenders' trust in borrowers. Such actions to increase trust between borrowers and lenders cannot be implemented in an online environment. In such a context, automated scoring of customers plays a vital role in credit risk assessment~\citep{emekter2015evaluating}.
With the expansion of financial institutions' loan portfolios as well as the advent of P2P lending platforms, classifying customers based on their PD is crucial to support decision making. In credit scoring literature, it is well established that a minor improvement in the credit scoring accuracy can result in significant future savings~\citep{baesens2003benchmarking}.
As such, various credit scoring models are exploited by banks, financial institutions, and online P2P lending platforms to make informed decisions regarding the borrowers' default risk. Thus, available customer data, such as loan history, demographic information, financial and educational data, is exploited to build a machine learning model which then is used to develop a decision support system to group loan applications into reliable and unreliable ones.
Despite the significance and importance of credit scoring in reducing risks and costs of lenders and financial institutions, the performance of credit scoring models is not yet satisfactory for practical real-life applications due to the following issues. First, the number of available data sets for credit scoring is limited due to the difficulty of obtaining customers' credit data. Therefore, credit scoring studies use different public and private data sets. The public data sets used in credit scoring are often small or contain encrypted variables due to privacy concerns. On the other hand, private data sets cannot be released to the public \citep{louzada2016classification}. In such a context, new classification approaches are evaluated on different data sets and/or settings. The effectiveness of different approaches is unclear due to the lack of a systematic benchmark of various credit scoring models. Hence, the comparison of models trained on different data sets is highly desired.
Second, several statistical techniques and machine learning approaches have been proposed over the years to improve classification performance of credit scoring such as Linear Discriminant Analysis (LDA) \citep{reichert1983examination}, Logistic Regression (LR) \citep{hand2002superscorecards}, Support Vector Machine (SVM) \citep{huang2004credit}, Neural Networks (NN) \citep{bastani2019wide,duan2019financial}, etc. In addition to individual classifiers, lately, there has been much attention to ensemble models that aggregate the classification power of individual classifiers (base classifiers) to improve the final result such as \citep{xia2018novel,he2018novel,yu2018dbn, ala2016classifiers,ala2016new}. One of the challenges of implementing credit scoring models in real-life is that, despite the wide range of proposed classification techniques in credit scoring literature, the most effective techniques for different data sets, especially real-life data sets, have not yet been determined. Therefore, future studies should focus on comparing the abilities of different classification approaches.
Finally, similar to many other real-life classification problems, credit scoring data sets are heavily imbalanced \citep{brown2012experimental, xiao2012dynamic}. The majority of samples in the credit scoring data sets belong to the negative class (i.e., the loans that were fully paid). Therefore, robust algorithms that can handle imbalanced data should be developed to increase the classification performance.
While several machine learning approaches have been employed to improve the performance of credit scoring models, recently, \citet{lessmann2015benchmarking} have demonstrated that multiple classifier systems (also known as ensemble models) are able to outperform individual classifiers.
Multiple Classifier Systems (MCSs) were introduced based on the observation that a single classifier cannot cover all the intrinsic variability of a data set and therefore cannot take advantage of all the available information in it. MCSs use multiple classifiers' decisions to make a more robust and effective model for predicting the class of a sample \citep{britto2014dynamic}. MCSs can be divided into two categories: Static Selection (SS) and Dynamic Selection (DS). In static selection, the strategy to select the best base classifiers is determined on the training set and is then applied to all test samples, regardless of the competency of the base classifiers in the local region surrounding each test sample. In dynamic selection, the most competent classifiers in the local region of the test sample are selected on the fly, based on a competence criterion evaluated for each test sample. Therefore, each test sample is classified by one or a set of classifiers that have high performance in its local region according to the competency criterion used for base classifier selection.
The rationale for dynamic selection strategies is that each base classifier in the pool of classifiers is an expert in a specific local region of the feature space. Therefore, the most competent classifiers should be selected to classify each test sample from the pool of classifiers \citep{cruz2018dynamic}.
The effectiveness of using dynamic selection in classification tasks has been tested by \citet{britto2014dynamic} and \citet{cruz2018dynamic}; in their studies, the performance of DS was evaluated on several benchmarking data sets. \citet{britto2014dynamic} and \citet{cruz2018dynamic} concluded that using DS can boost the performance of a pool of weak classifiers. \citet{cruz2018dynamic} state that because dynamic selection techniques operate locally, the final classification results are not biased towards the majority class, which we validate experimentally in this study.
\citet{lessmann2015benchmarking} used two dynamic selection algorithms in their benchmark to evaluate their performance on credit scoring data sets. \citet{junior2020novel} used four dynamic selection techniques to evaluate the performance of credit scoring. However, to the best of our knowledge, no comprehensive study has been conducted to evaluate other dynamic selection techniques on the credit scoring problem specifically on a real world data set.
In this study, four classifiers, namely support vector machine (SVM), multilayer perceptron (MLP), k nearest neighbors (k-NN), and Gaussian naive Bayes (GNB) are used to construct the pool of classifiers. In addition to individual classifiers, DS techniques are also applied to the random forest (RF) to evaluate their effectiveness to boost the performance of classification. The classifiers are evaluated on the Lending Club data set, which is a real-world data set in the field of credit scoring and social lending. Furthermore, the ability of dynamic selection to classify test samples trained on data sets with different imbalance ratios is evaluated. Five data sets with different imbalance ratios created by under-sampling the majority class, as well as the original data set, are used to investigate the impact of imbalanced data on the robustness of classification performance.
\section{Literature Review}
\citet{louzada2016classification} present a systematic review of 187 papers from 1992 to 2015 on binary classification techniques for the credit scoring problem. The papers in this systematic review are categorized based on seven objectives: proposing a new method, comparing traditional techniques, conceptual discussions, feature selection, literature review, performance measures studies, and others.
With regard to the categories mentioned in the review of \citet{louzada2016classification}, many new DS algorithms have been developed to classify the default of borrowers. For example, \citet{xiao2016ensemble} use supervised clustering to build different ensembles in each cluster. By assigning each test sample to the cluster with the most similar members, they use different sets of classifiers to classify each test sample. Unsupervised clustering integrated with fuzzy assignment is implemented in the work of \citet{zhang2018classifier} in the pool generation and testing phases of the classification procedure. They also use a genetic algorithm to include both diversity and accuracy measures in selecting the best classifiers in the field of credit scoring. \citet{xiao2012dynamic} combine ensemble learning with cost-sensitive learning for imbalanced data sets to develop a framework that, for each test sample, selects the most appropriate classifier(s) to classify customers. \citet{feng2019dynamic} built a dynamic weighted trainable combiner based on Markov chains to model the impact of selecting different classifiers on decreasing the misclassification cost in credit scoring.
Regarding the comparison of different classification techniques for credit scoring, several studies have been carried out: \citet{baesens2003benchmarking} have studied the performance of various classification algorithms such as logistic regression, linear and quadratic discriminant analysis, support vector machines, neural networks, naive Bayes, and nearest neighbour classifiers. Among the applied classifiers, least squares support vector machines and neural networks yielded a good performance on several credit scoring data sets. Following that, \citet{lessmann2015benchmarking} have provided a benchmark of 41 classification algorithms across credit scoring data sets. In addition to the individual classifiers covered by \citet{baesens2003benchmarking}, homogeneous and heterogeneous ensemble models were also included.
As credit scoring data sets are usually highly imbalanced, a comparison of different classifiers applied to five credit scoring data sets was presented by \citet{brown2012experimental}. Logistic regression, least square support vector machine, decision trees, neural networks, k-NN, gradient boosting algorithm and random forests were applied while the imbalance ratio of the data set is increased by undersampling minority classes to evaluate the robustness of classifiers in extreme imbalanced situations.
Apart from studies that focus only on comparing different classification models for credit scoring, papers that propose new classification techniques, such as \citep{xiao2012dynamic, ala2016classifiers,ala2016new, xiao2016ensemble}, compare their methods with basic individual and ensemble methods. The standard procedure in the comparison process of new classification techniques is to compare them with LR, which is considered the industry standard. \citet{lessmann2015benchmarking} argue that LR should not be considered the baseline method in credit scoring because it may lead to a biased decision about the performance of newly proposed models, since random forest, itself a simple algorithm, performs much better on credit scoring data sets.
Considering the comparative studies on the Lending Club data set, \citet{teply2019best} have conducted a comparison study to rank 10 different classifiers to evaluate the performance of different classifiers. According to the ranking, they have concluded that logistic regression and artificial neural networks were placed as the best and second-best classifiers respectively. They have also concluded that classification and regression trees and k-nearest neighbors perform poorly on the Lending club data set.
By comparing RF, SVM, LR, and k-NN, \citet{malekipirbazari2015risk} have shown that RF outperforms other classification methods and can be used as a powerful approach to predict borrowers' status.
\citet{junior2020novel} proposed a modification of k-NN algorithm to be used in the definition of local region in Dynamic selection techniques. In their study they have compared four dynamic selection techniques, namely Local Class Accuracy, Modified Classifier Rank, K-Nearest Oracles Eliminate, and K-Nearest Oracles Union on credit scoring data sets with different pre-processing and pool generation settings. \citet{junior2020novel} have concluded that K-Nearest Oracles Union could achieve the best result compared to the other three DS techniques in credit scoring.
As stated in \citet{cruz2018dynamic}, dynamic selection is an active field of study in machine learning. So far, to the best of our knowledge, only \citet{lessmann2015benchmarking}, \citet{xiao2012dynamic}, and \citet{junior2020novel} have applied a limited number of dynamic selection techniques to credit scoring data sets. Also, as stated in \citet{junior2020novel}, we believe that the ability of DS techniques, especially on real-world data sets, is yet to be examined. In this paper, we apply 14 different DS techniques to different classification algorithms, in both homogeneous and heterogeneous pools of classifiers, across varying degrees of data imbalance.
\section{Methodology}
\subsection{Multiple classifier systems}
To deal with uncertainty and noise in data, various classification techniques have been developed over the years to address the limitations of existing models and improve classification performance. Due to the intrinsic characteristics of different classification models, the samples misclassified by different classifiers do not necessarily overlap. Therefore, different classification models potentially offer complementary information for the classification of different test samples. Thus, fusing multiple classifiers often results in an improvement of classification performance, since each classifier provides complementary information for distinct aspects of a given sample \citep{kittler1998combining}. Therefore, MCS decisions are expected to improve the classification accuracy by combining the decisions of different classifiers trained on the training set \citep{dietterich2000experimental}.
Multiple classifier systems are composed of three phases: I) pool generation II) selection III) combination. In the first phase, a pool of accurate and diverse classifiers is constructed to classify the samples. The need for diverse classifiers comes from the fact that the generated classifiers should show some degree of complementarity. Bagging \citep{breiman1996bagging}, boosting \citep{freund1996experiments}, and random subspace \citep{ho1998random} are among the most commonly used strategies to create the pool of classifiers.
In the second phase, which is an optional phase, the generated classifiers are selected based on a competency measure to classify the unknown samples. There are two types of base classifier selection in the second phase: static and dynamic selection. In static selection, the classifiers’ competence is determined in the training phase by computing the base classifiers' competency based on a selection criterion. After the classifier selection, all the selected base classifiers are employed to classify unknown samples regardless of the individual characteristics of the query sample in the selection of base classifiers.
In dynamic selection, a single or a subset of trained classifiers are selected to classify each unknown sample exclusively regarding its surrounding local region. Based on the number of classifiers selected in DS techniques, they are categorized into two categories: I) Dynamic Classifier Selection (DCS), which selects only the most competent classifier from the pool of classifiers. II) Dynamic Ensemble Selection (DES) which selects a subset of classifiers from the pool.
The third phase of MCSs deals with the aggregation of the decisions made by the selected classifiers. The outputs of the classifiers are aggregated according to a combination rule. One of the most basic combination rules is majority voting (i.e., aggregating the predictions of the base classifiers and choosing the class with the most votes), illustrated in the sketch below.
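As an illustration of this combination step, the short Python sketch below implements plain majority voting over the crisp predictions of the selected classifiers; the classifier objects are assumed to follow the scikit-learn \texttt{predict} interface.
\begin{verbatim}
import numpy as np

def majority_vote(selected_classifiers, X):
    """Combine the selected classifiers' predictions by majority voting."""
    votes = np.stack([clf.predict(X) for clf in selected_classifiers], axis=0)
    predictions = []
    for sample_votes in votes.T:                       # one column per test sample
        labels, counts = np.unique(sample_votes, return_counts=True)
        predictions.append(labels[np.argmax(counts)])  # most voted class label
    return np.array(predictions)
\end{verbatim}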
\subsection{Dynamic Selection}
In dynamic selection, the classification of an unknown sample is composed of the following steps:
First, the dynamic selection set (DSEL), which is a set of labeled samples from the training or validation set, is set aside to determine the region of competence. Here, the region of competence is the local region surrounding the query test sample, which is determined by the most similar or nearest samples from the DSEL.
A test sample's region of competence can be obtained by applying the K-nearest neighbors technique, clustering, or the competence map to the DSEL. The k-nearest neighbors and clustering methods find the nearest or most similar samples in the DSEL to the query sample, which are then used to evaluate the competency of the base classifiers. The competence map uses all samples in the DSEL as the region of competence; a Gaussian potential function is then applied to compute the influence of each DSEL sample on the competency of the classifiers.
Second, the selection criteria to compute the competency of each classifier in the region of competence is determined. These criteria can be calculated by the accuracy of the base classifier, their rank among all the classifiers present in the pool, or probabilistic methods.
Finally, one or a subset of classifiers from the pool are selected based on the competency level of classifiers in the region of competence. To classify the query sample, the final classification result is reached by combining chosen competent classifiers using a voting combination method such as majority voting.
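The three steps above can be summarized in the following Python sketch of a simple dynamic classifier selection scheme. It uses scikit-learn's \texttt{NearestNeighbors} to define the region of competence in the DSEL and local accuracy as the competence criterion; it is meant only to illustrate the workflow, not to reproduce any particular technique from the literature.
\begin{verbatim}
import numpy as np
from sklearn.neighbors import NearestNeighbors

def dcs_predict(pool, X_dsel, y_dsel, x_query, k=7):
    """Sketch of dynamic classifier selection for one query sample."""
    # Step 1: region of competence = the k nearest DSEL samples to the query.
    nn = NearestNeighbors(n_neighbors=k).fit(X_dsel)
    _, idx = nn.kneighbors(x_query.reshape(1, -1))
    X_roc, y_roc = X_dsel[idx[0]], y_dsel[idx[0]]
    # Step 2: competence = accuracy of each classifier in the region.
    competences = [np.mean(clf.predict(X_roc) == y_roc) for clf in pool]
    # Step 3: select the most competent classifier and label the query with it.
    best = int(np.argmax(competences))
    return pool[best].predict(x_query.reshape(1, -1))[0]
\end{verbatim}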
For a comprehensive explanation and review of dynamic classifier selection techniques, we refer to \citet{cruz2018dynamic} and \citet{britto2014dynamic}. Before diving into details, we introduce the mathematical notation used in this paper as summarized in Table \ref{tab:notation_table}:
\begingroup
\renewcommand{\arraystretch}{1.35}
\setlength{\tabcolsep}{6 pt}
\begin{table}[H]
\centering
\caption{The mathematical notation used in this study}
\label{tab:notation_table}
\resizebox{\linewidth}{!}{%
\begin{tabular}{ll}
\toprule
Notation & Description \\
\hline
$C=\{c_1,\dots,c_M\}$ & The pool of classifiers with $M$ base classifiers. \\
$x_j$ & The test sample to be classified. \\
$\theta_j=\{x_1,\dots,x_k\}$ & The region of competence surrounding $x_j$, $x_k$ is one sample belonging to the region of competence. \\
$\Omega=\{\omega_1,\dots,\omega_L\}$ & The set of $L$ classes in the classification problem. \\
$\omega_l$ & The class predicted by classifier $c_i$. \\
$P(\omega_l | x_j, c_i)$ & Posterior probability of classifying $x_j$ as $\omega_l$ by classifier $c_i$. \\
$W_k=\frac{1}{d_k}$ & The weight of $x_k$ computed by its distance from $x_j$, $d_k$ is the distance between the query $x_j$ and $x_k$. \\
$\delta_i,_j$ & The estimated competency of classifier $c_i$ in classification of query $x_j$. \\
$\tilde{x_j}$ & The output profile of the query sample $x_j$. \\
$\phi_j$ & The set of most similar output profiles of the query sample, $\tilde{x_j}$, computed in the decision space. \\
\bottomrule
\end{tabular}
}
\end{table}
\endgroup
\subsection{Dynamic selection techniques}
In this section, we briefly introduce various dynamic selection techniques:
\subsubsection{Modified Classifier Ranking}
In the modified classifier ranking technique by \citet{woods1997combination} and \citet{sabourin1993classifier}, the local accuracy of classifier $c_i$ is estimated by the number of consecutive correctly classified samples in the region of competence of a given test sample. The number of correctly classified samples is considered the rank of the classifier. The classifier with the highest rank in this technique is selected as the most competent classifier to classify the test sample.
\subsubsection{Overall Local Accuracy}
Overall local accuracy (OLA) calculates the competency of the base classifier $c_i$ by computing the percentage of samples in the region of competence that it correctly classifies, as shown in equation \ref{OLA} \citep{woods1997combination}.
\begin{flalign} \label{OLA}
\delta_i,_j =\frac{1}{K}\sum_{k=1}^{K} P(\omega_l | x_k\in \omega_l,c_i)
\end{flalign}
\subsubsection{Local Classifier Accuracy}
Local classifier accuracy (LCA) is proposed as the percentage of the samples in the region of competence that are correctly labeled by classifier $c_i$ with respect to the output class $\omega_l$. $\omega_l$ is the class with maximum probability assigned by classifier $c_i$. Equation \ref{LCA} represents the competence of classifier $i$ in the classification of query sample $x_j$ \citep{woods1997combination}.
\begin{flalign}\label{LCA}
\delta_i,_j =\frac{\sum_{x_k \in \omega_l} P(\omega_l|x_k,c_i)}{\sum_{k=1}^{K} P(\omega_l|x_k,c_i)}
\end{flalign}
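With crisp (label-only) classifier outputs, the LCA criterion reduces to the precision of $c_i$ on the region of competence with respect to the class it assigns to the query. The following Python sketch shows this simplified form; the posterior-based formula above is approximated by 0/1 decisions.
\begin{verbatim}
import numpy as np

def lca_competence(clf, X_roc, y_roc, x_query):
    """Sketch of the LCA criterion using crisp classifier outputs."""
    predicted_class = clf.predict(x_query.reshape(1, -1))[0]
    roc_predictions = clf.predict(X_roc)
    assigned = roc_predictions == predicted_class  # samples given the query's class
    if not np.any(assigned):
        return 0.0
    correct = np.logical_and(assigned, y_roc == predicted_class)
    return correct.sum() / assigned.sum()
\end{verbatim}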
\subsubsection{A Priori}
As presented in equation \ref{apriori}, the selection criterion of the A Priori technique is the probability that classifier $c_i$ correctly classifies the samples in the region of competence \citep{giacinto1999methods}. Here, the probabilities are weighted by the Euclidean distance of $x_k$ from the query sample $x_j$. The classifier with the highest competency is selected if its competence level is significantly better than that of the other base classifiers in the pool. If no classifier has a significantly higher competence level, all classifiers in the pool are combined by majority voting.
\begin{flalign}\label{apriori}
\delta_i,_j =\frac{\sum_{k=1}^{K} P(\omega_l|x_k \in \omega_l,c_i)W_k}{\sum_{k=1}^{K} W_k}
\end{flalign}
\subsubsection{A Posteriori}
A Posteriori technique takes into account $\omega_l$, which is the assigned class to the query sample $x_j$, by classifier $c_i$ as shown in equation \ref{aposteriori} \citep{giacinto1999methods}. Similar to the A Priori technique, the classifier with a significantly higher competency level is chosen to classify the test sample. Otherwise, all classifiers in the pool are combined by majority voting to classify the test sample.
\begin{flalign}\label{aposteriori}
\delta_i,_j =\frac{\sum_{x_k \in \omega_l} P(\omega_l|x_k,c_i)W_k}{\sum_{k=1}^{K} P(\omega_l|x_k,c_i)W_k}
\end{flalign}
\subsubsection{Multiple Classifier Behavior}
A behavior knowledge space (BKS) is a k-dimensional space where each dimension corresponds to the decision of one base classifier \citep{huang1995method}. Based on the behavior knowledge space and the classifier local accuracy, the multiple classifier behavior (MCB) technique computes the output profile of each test sample (i.e., its predicted class labels) along with the output profiles of the samples in the region of competence. The similarity between the output profile of a given test sample and the output profiles of the instances in the region of competence is calculated by $S(\tilde{x_j},\tilde{x_k})$, the similarity measure between $x_j$ and $x_k$, as shown in equations \ref{MCB} and \ref{MCB_2}. Samples with $S(\tilde{x_j},\tilde{x_k}) > \zeta$ are kept in the region of competence, and the other samples in the region are removed. Here, $\zeta$ is a hyper-parameter of the model that specifies the similarity threshold. The competency of each classifier in the pool is calculated as its classification accuracy in the modified region of competence. The classifier with significantly higher performance in the region of competence than the other classifiers is selected to classify $x_j$. If no such classifier exists, all classifiers in the pool are combined using majority voting \citep{giacinto2001dynamic}.
\begin{flalign}\label{MCB}
\text{S(}\tilde{x_j},\tilde{x_k})=\frac{1}{M}\sum_{i=1}^{M} T(x_j,x_k)
\end{flalign}
\begin{flalign}\label{MCB_2}
T(x_j,x_k)=
\begin{cases}
\text{1} \qquad \text{if} \qquad c_i(x_j) = c_i(x_k),\\
\text{0} \qquad \text{if} \qquad c_i(x_j) \neq c_i(x_k).
\end{cases}
\end{flalign}
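The filtering step of MCB can be sketched as follows with crisp classifier outputs: only the region-of-competence samples whose output profile agrees with the query's output profile on more than a fraction $\zeta$ of the base classifiers are kept. This is a simplified illustration of the procedure described above.
\begin{verbatim}
import numpy as np

def mcb_filter_region(pool, x_query, X_roc, y_roc, zeta=0.7):
    """Sketch of the MCB region-of-competence filtering step."""
    query_profile = np.array([clf.predict(x_query.reshape(1, -1))[0]
                              for clf in pool])
    roc_profiles = np.stack([clf.predict(X_roc) for clf in pool], axis=1)  # (K, M)
    similarities = (roc_profiles == query_profile).mean(axis=1)  # S(x_j, x_k)
    keep = similarities > zeta
    return X_roc[keep], y_roc[keep]
\end{verbatim}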
\subsubsection{Modified Local Accuracy}
The hypothesis behind local accuracy is that neighboring elements in the feature space maintain a stronger relationship with each other than more distant elements. Therefore, weighting the impact of the samples in the region of competence is reasonable. \citet{smits2002multiple} proposed the modified local accuracy (MLA), which weights each $x_k$ in the region of competence by its distance to the query $x_j$, as shown in equation~\ref{MLA}. The classifier with the highest competence level is selected to classify the test sample.
\begin{flalign}\label{MLA}
\delta_i,_j =\sum_{k=1}^K P(\omega_l|x_k \in \omega_l, c_i)W_k
\end{flalign}
\subsubsection{DES-Clustering}
In the DES-Clustering technique, the k-means algorithm is used to group the DSEL into k distinct clusters. For each cluster, the classifiers are ranked in descending order of accuracy and ascending order of the double-fault diversity measure, a pairwise diversity measure based on the proportion of samples misclassified by both classifiers. The test sample is assigned to a cluster by measuring its Euclidean distance to the cluster centroids. Finally, the N most accurate and J most diverse classifiers (N $\geq$ J) are selected, and their outputs are combined to classify the test sample \citep{soares2006using}.
\subsubsection{DES-KNN}
Here, the region of competence is determined by applying the k nearest neighbors method on the DSEL. Then, the classifiers in the pool $C=\{c_1,....,c_M\}$, are ranked decreasingly based on accuracy and increasingly based on double-fault diversity measure. The first N classifiers according to accuracy rank and the first J classifiers according to diversity rank (N $\geq$ J) are selected to classify the test sample \citep{soares2006using}. It is worth noting that DES-KNN is different from DES-Clustering with regards to the technique used in determining the region of competence for the given test sample $x_j$.
\subsubsection{K-Nearest Oracles Eliminate}
The Oracle is an abstract model that always selects the classifier that predicts the correct label for a given query, if such a classifier exists \citep{kuncheva2002theoretical}. The Oracle is therefore regarded as a possible upper limit on MCS classification performance. This concept is adopted in K-Nearest Oracles Eliminate (knoraE), which selects all the classifiers that correctly classify every sample in the region of competence, if such classifiers exist \citep{ko2008dynamic}. If no classifier satisfies this criterion, the size of the region of competence is reduced and the search for competent classifiers restarts.
\subsubsection{K-Nearest Oracles Union}
The classifiers that can classify at least one sample in the region of competence are selected in K-Nearest Oracles Union (knoraU) technique. The number of votes classifier $c_i$ has in the voting process is equivalent to the number of correctly classified samples in the region of competence to promote classifiers that participate more in making the correct decision (i.e., assigning the correct label to the query sample). In this manner, an automatic weighting procedure is done in the majority voting process \citep{ko2008dynamic}.
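The two KNORA selection rules can be sketched as follows with crisp classifier outputs; for knoraU, the returned weights give each classifier one vote per correctly classified sample in the region of competence.
\begin{verbatim}
import numpy as np

def knora_select(pool, X_roc, y_roc, mode="union"):
    """Sketch of the knoraE / knoraU selection rules.

    X_roc is assumed to be ordered by increasing distance to the query,
    so that shrinking the region keeps the nearest samples."""
    correct = np.stack([clf.predict(X_roc) == y_roc for clf in pool], axis=0)
    if mode == "eliminate":
        k = X_roc.shape[0]
        # Shrink the region until at least one classifier is perfect on it.
        while k > 0:
            perfect = correct[:, :k].all(axis=1)
            if perfect.any():
                return [c for c, ok in zip(pool, perfect) if ok], None
            k -= 1
        return list(pool), None   # fallback: use the whole pool
    # knoraU: every classifier that hits at least one sample,
    # weighted by its number of correct classifications.
    hits = correct.sum(axis=1)
    selected = [c for c, h in zip(pool, hits) if h > 0]
    return selected, hits[hits > 0]
\end{verbatim}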
\subsubsection{Dynamic Ensemble Selection Performance}
\citet{woloszynski2012measure} introduced the dynamic ensemble selection performance (DESP) technique, which computes the performance of $c_i$ in the region of competence. The competency of classifier $c_i$ is computed as the difference between the accuracy of $c_i$ in the region of competence and the accuracy of a random classifier in that region. The random classifier randomly assigns classes to a sample with equal probabilities, so its performance in the region is $1/L$. Equation \ref{desp} shows the selection criterion of this technique. The base classifiers that classify the samples with a higher accuracy than the random classifier are selected to classify $x_j$.
\begin{flalign}\label{desp}
\delta_i,_j = \hat{P}(c_i|\theta_j)-\frac{1}{L}
\end{flalign}
\subsubsection{k-nearest Output Profiles}
By applying classifier $c_i$ to the test sample and to the DSEL set, the output profiles of the samples are obtained. Next, the similarity between the query output profile, $\tilde{x_j}$, and the output profiles of the DSEL samples is computed. Finally, the K most similar DSEL samples are selected as the region of competence. In the k-nearest output profiles (KNOP) technique, the classifiers that correctly classify the samples in this region of competence are selected to classify the test sample. Similar to knoraU, each classifier gains one vote for each correct classification of a sample in the region of competence \citep{cavalin2012logid}.
\subsubsection{Meta-DES}
A classifier's level of competence determines its selection to classify a test sample in DS. In the DS literature, several competence measures have been proposed. To benefit from the advantages of different competency measures, a meta-learning problem was formulated by \citet{cruz2015meta} as follows:
\begin{itemize}
\item The meta-classes are either ``competent'' (1) or ``incompetent'' (0) to classify $x_j$.
\item Each set of meta-features $f_i$ corresponds to a different criterion for measuring the competency level of a base classifier.
\item The meta-features are encoded into a meta-features vector $v_i,_j$.
\item A meta-classifier $\lambda$ is trained based on the meta-features to predict whether or not the classifier $c_i$ will correctly classify $x_j$. In other words, if classifier $c_i$ is competent to classify the test sample.
\end{itemize}
In this technique, a meta classifier is trained to predict whether a base classifier is competent to classify a test sample. In the meta training stage, the meta-features are extracted from the samples in the training and DSEL sets. The extracted meta-features are then used to train a meta classifier to classify each model in the pool of classifiers as competent or incompetent. The training phase of the Meta-DES technique is represented in Algorithm \ref{metades alg}.
In the generalization stage, meta-features, which are the meta classifier's input, are extracted for query $x_j$. The classifier $c_i$ is then evaluated by the meta classifier to determine whether it should be selected to participate in the classification of the given test sample or not.
\begin{algorithm}[ht]
\SetAlgoLined
\KwIn{$T_{\lambda}$: Training data}
\KwOut{Pool of Classifiers $C=\{c_1,\dots,c_M\}$}
$T_{\lambda}^{*} = \varnothing$\;
\For{$x_{n,train_{\lambda}} \in T_{\lambda}$}{
Compute the consensus of the pool $H(x_{n,train_{\lambda}}, C)$ \\
\uIf{$H(x_{n,train_{\lambda}}, C) < h_{c}$}{
Find the region of competence $\theta_{j}$ of $x_{n,train_{\lambda}}$ using $T_{\lambda}$ \\
Compute the output profile ${\Tilde{x}_{n,train_{\lambda}}}$ of $x_{n,train_{\lambda}}$ \\
Find the $K_p$ similar output profiles $\phi_{j}$ of $x_{n,train_{\lambda}}$ \\
\For{$c_i \in C$}{
$v_{i,n}$ = \textit{MetaFeatureExtraction}($\theta_j,\phi_j,c_i,x_{n,train_{\lambda}}$) \\
\uIf{$c_i$ correctly classifies $x_{n,train_{\lambda}}$}{$\alpha_{i,j}=1$ "$c_i$ is competent to classify $x_{n,train_{\lambda}}$" \\}
\uElse{$\alpha_{i,j}=0$ "$c_i$ is incompetent to classify $x_{n,train_{\lambda}}$"}
$T_{\lambda}^{*} = T_{\lambda}^{*}\bigcup \{v_{i,n}\}$ \\
}
}
}
Divide $T_{\lambda}^{*}$ into 25\% for validation and 75\% for training \\
Train $\lambda$ \\
\Return{The meta-classifier $\lambda$}
\caption{The META-DES Algorithm (Training Phase)}
\label{metades alg}
\end{algorithm}
\section{Experimental Setting}
\subsection{Data set}
In this study we used a real-life data set from Lending Club~\footnote{retrieved from \url{https://www.kaggle.com/wordsforthewise/lending-club}}, which is one of the most popular P2P lending platforms in the financial industry \citep{malekipirbazari2015risk}. In order to apply for a loan, Lending Club users send their request via the website, and interested investors start funding the requested loans based on the information available on the loan requests.
The payback duration of loans provided by Lending Club is either $36$ or $60$ months. As the latest payment made by a borrower was in March 2019, we selected 36-month loans issued before March 2016 to ensure that the loans have had enough time to reach maturity. Out of the $151$ available variables in the original data set, we dropped the variables with missing values in more than half of the samples. After labeling samples marked as ``Charged off'' as class 1 (bad customers) and ``Fully paid'' as class 0 (good customers), we organized the remaining variables based on other studies that have used the Lending Club data set \citep{setiawan2019comparison,teply2019best, serrano2015determinants,malekipirbazari2015risk} and imputed the missing values of continuous variables by mean imputation.
We stored the average value of the variables \textit{fico\_range\_low} and \textit{fico\_range\_high} in the variable \textit{average\_fico} for each data sample. We normalized the continuous variables to the range $[0, 1]$, and as a last step in data preparation we used dummy variables to represent the values of the categorical variables. The final variables and their descriptions are shown in Table~\ref{Variables table}.
We selected loans issued from July 2015 to December 2015 as the training data set and the remainder as the test set. The training set includes $162,235$ samples (approximately $76\%$ of the total data), and the test set contains $50,695$ samples (approximately $24\%$ of the total data) from January and February of 2016.
\begin{longtable}{@{}p{110.0 pt}p{223.0 pt}@{}}
\caption{Variables used in this study.}
\label{Variables table}\\
\toprule
Variables & Description \\* \midrule
\endfirsthead
\multicolumn{2}{c}%
{{\bfseries Table \thetable\ continued from previous page}} \\
\toprule
Variables & Description \\* \midrule
\endhead
\bottomrule
\endfoot
\endlastfoot
loan\_amnt & Amount of the loan applied for by the borrower. \\
acc\_now\_delinq & The number of accounts on which the borrower is now delinquent. \\
int\_rate & Interest Rate on the loan \\
installment & The monthly payment owed by the borrower if the loan originates. \\
annual\_inc & The self-reported annual income provided by the borrower during registration. \\
emp\_length & employment length in years. \\
verification\_status & Indicates if income was verified by LC~\footnote{Lending Club}, not verified, or if the income source was verified \\
dti & The borrower’s total monthly debt payments on the total debt obligations, excluding mortgage and the requested LC loan, divided by the borrower’s self-reported monthly income. \\
delinq\_2yrs & The number of 30+ days past-due incidences of delinquency in the borrower's credit file for the past 2 years \\
average\_fico & The average value of the upper and lower boundary range of the borrower’s FICO at loan origination. \\
inq\_last\_6mths & The number of inquiries in past 6 months (excluding auto and mortgage inquiries). \\
open\_acc & The number of open credit lines in the borrower's credit file. \\
pub\_rec & Number of derogatory public records. \\
revol\_util & Revolving line utilization rate, or the amount of credit the borrower is using relative to all available revolving credit. \\
total\_acc & The total number of credit lines currently in the borrower's credit file. \\
chargeoff\_within\_12\_mths & Number of charge-offs within 12 months. \\
delinq\_amnt & The past-due amount owed for the accounts on which the borrower is now delinquent \\
mort\_acc & Number of mortgage accounts \\
pub\_rec\_bankruptcies & Number of public record bankruptcies \\
tax\_liens & Number of tax liens \\
grade & LC assigned loan grade. The values are: [A,B,C,D,E,F,G] \\
subgrade & LC assigned loan subgrade. The values are: [A1,\dots,A5,\dots,G1,\dots,G5] \\
home\_ownership & The home ownership status provided by the borrower during registration or obtained from the credit report. The values are: RENT, OWN, MORTGAGE \\
purpose & A category provided by the borrower for the loan request such as: Debt consolidation, small business, vacation, etc. \\
initial\_list\_status & The initial listing status of the loan. The values are: Whole, Fractional \\
loan\_status & Current status of the loan (Target variable) \\* \bottomrule
\end{longtable}
\subsection{Evaluation measures}
To evaluate the performance of the techniques studied in this paper, we consider six performance measures: Accuracy, Area under the ROC curve (AUC), H-measure, F-measure, Brier score, and G-mean. These performance measures are often used in credit scoring, and together they cover almost all important aspects of classification model performance. The accuracy evaluates the correctness of categorical predictions, the AUC and H-measure assess discriminatory ability, and the Brier score assesses the accuracy of probability predictions. The G-mean evaluates the balance between the classification results of the positive and negative classes. In the following, we explain how to compute these metrics from the confusion matrix (see Table~\ref{confusion matrix}).
\begin{table}[h]
\centering
\caption{Confusion matrix}
\label{confusion matrix}
\begin{tabular}{llll}
\toprule
& & \multicolumn{2}{c}{Predicted} \\
\cline{3-4}
& & Positive & Negative \\
\hline
Real & Positive & True Positive (TP) & False Negative (FN) \\
& Negative & False Positive (FP) & True Negative (TN) \\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{Accuracy}
Accuracy is the number of correctly classified samples divided by the total number of test samples, as shown in equation \ref{acc}.
\begin{flalign}\label{acc}
\text{Accuracy} = \frac{TP + TN}{TP + FN + FP + TN}
\end{flalign}
\subsubsection{F-measure}
F-measure is the harmonic mean of precision and recall, where precision and recall are computed via equations \ref{precision} and \ref{recall}, respectively.
\begin{flalign}\label{fmeasure}
\text{F-measure} = \frac{2 \times Precision \times Recall}{Precision + Recall}
\end{flalign}
\begin{flalign}\label{precision}
\text{Precision} = \frac{TP}{TP + FP}
\end{flalign}
\begin{flalign}\label{recall}
\text{Recall} = \text{Sensitivity} = \text{TPR} = \frac{TP}{TP + FN}
\end{flalign}
\subsubsection{G-mean}
Geometric mean (G-mean) is a comprehensive evaluation measure based on sensitivity and specificity, as presented in equation \ref{gmean} \citep{kubat1997addressing}.
Sensitivity (also known as recall) measures the proportion of positive samples that are correctly classified, while specificity measures the proportion of negative samples that are correctly classified. Sensitivity and specificity are calculated as shown in equations \ref{recall} and \ref{spec}, respectively.
A higher value of G-mean indicates that the classifier achieves a reasonable balance between the two classes and performs well on the binary classification task.
\begin{flalign}\label{gmean}
\text{G-mean} = \sqrt{Sensitivity \times Specificity}
\end{flalign}
\begin{flalign}\label{spec}
\text{Specificity} = \frac{TN}{TN + FP}
\end{flalign}
\subsubsection{AUC}
The AUC is the area under the Receiver Operating Characteristic (ROC) curve \citep{fawcett2004roc}. The horizontal axis of the ROC curve represents the false-positive rate (1-specificity) and the vertical axis represents the true-positive rate (sensitivity). The true-positive rate is plotted against the false-positive rate at various cut-off values, where each point on the ROC curve corresponds to a pair of TPR and FPR at a certain threshold. The AUC of a classifier equals the probability that a randomly chosen positive case receives a score higher than a randomly chosen negative case, and its value is in the range of [0,1].
\subsubsection{H-measure}
The H-measure metric was proposed by \citet{hand2009measuring}. H-measure uses a pre-specified beta distribution to represent the misclassification cost distribution function.
\subsubsection{Brier score}
The Brier score measures the accuracy of probabilistic predictions. It ranges from 0 (perfect probabilistic prediction) to 1 (poor prediction) and is computed as shown in equation \ref{bs}, where $N$ is the number of samples, $s_i$ is the predicted probability for sample $i$, and $y_i$ is its true label.
\begin{flalign}\label{bs}
\text{Brier score} = \frac{1}{N} \sum_{i=1}^N (s_i - y_i)^2
\end{flalign}
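The following Python sketch computes the above measures with scikit-learn for a vector of predicted default probabilities; the H-measure is omitted here because it is not part of scikit-learn, and the 0.5 decision threshold is only an assumption for illustration.
\begin{verbatim}
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, roc_auc_score,
                             brier_score_loss, confusion_matrix)

def credit_scoring_metrics(y_true, y_prob, threshold=0.5):
    """Sketch: evaluation measures used in this study (H-measure omitted)."""
    y_pred = (y_prob >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return {
        "Accuracy": accuracy_score(y_true, y_pred),
        "F-measure": f1_score(y_true, y_pred),
        "G-mean": np.sqrt(sensitivity * specificity),
        "AUC": roc_auc_score(y_true, y_prob),
        "Brier score": brier_score_loss(y_true, y_prob),
    }
\end{verbatim}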
\subsection{Experiments}
\subsubsection{DS techniques evaluation}\label{experiment1}
Dynamic selection techniques select the most competent classifiers for each query sample in its region of competence. Based on this definition, to create the pools of classifiers, we used bootstrap aggregation (bagging) with k nearest neighbors, support vector machine, Gaussian naive Bayes, and multilayer perceptron as the base classifiers. We also applied DS to random forest due to its high performance in credit scoring classification~\citep{lessmann2015benchmarking}. In addition to homogeneous classifier pools, we also created a heterogeneous pool using all previously mentioned base classifiers as well as random forest. The number of base classifiers in the pool, as well as the number of trees in the random forest, was determined by a grid search (values between 10 and 200). We held aside twenty-five percent of the training set for the DSEL. The region of competence for each test sample was calculated based on the samples in the DSEL. We ran all the experiments using the scikit-learn library in Python to train the base classifiers and the pools, and used the DESlib library in Python by \citet{cruz2018deslib} to implement the DS techniques on the created pools of classifiers. Hyper-parameters of the base classifiers and the DS techniques were set to their default values.
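A minimal sketch of this setup with the DESlib API is shown below on synthetic data; in our experiments the prepared Lending Club features are used instead, and the pool size here is fixed only for illustration rather than chosen by grid search.
\begin{verbatim}
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from deslib.dcs.ola import OLA
from deslib.des.knora_u import KNORAU

# Synthetic stand-in for the prepared credit scoring data.
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.85, 0.15], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)
# Hold out 25% of the training set as the DSEL.
X_fit, X_dsel, y_fit, y_dsel = train_test_split(
    X_train, y_train, test_size=0.25, stratify=y_train, random_state=0)

# Homogeneous pool generated by bagging (here with an MLP base classifier).
pool = BaggingClassifier(MLPClassifier(max_iter=300),
                         n_estimators=20, random_state=0).fit(X_fit, y_fit)

for name, ds in [("OLA", OLA(pool, k=7)), ("KNORA-U", KNORAU(pool, k=7))]:
    ds.fit(X_dsel, y_dsel)      # the region of competence is built on the DSEL
    print(name, ds.score(X_test, y_test))
\end{verbatim}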
\subsubsection{DS performance on imbalanced data}\label{experiment2}
The imbalance ratio of a data set is defined as the number of samples in the majority class divided by the number of minority class samples. Credit scoring data sets often have a high imbalance ratio, which makes classification a challenging task. \citet{cruz2018dynamic} state that since dynamic selection techniques perform locally, the classifiers that are selected to classify each query sample are not affected by all samples in the data set and only take into account the neighborhood of the query sample. This notion suggests that dynamic selection techniques might be robust against imbalanced data sets which we will validate by the following experiments.
To evaluate the robustness of dynamic selection techniques in classifying imbalanced data sets, we modify the imbalance ratio of our data set by under-sampling the majority class. The imbalance ratio of the original training set is $5.8$. We under-sample the majority-class training samples to create data sets with varying degrees of imbalance, from a completely balanced data set (i.e., imbalance ratio = 1) to the original data set. The imbalance ratio and the number of samples in each data set are shown in Table~\ref{IR}.
For each modified data set, the same structure for the pools of classifiers was used, and all the experiments on the original data set were repeated on the modified data sets. The number of classifiers in the pools was set to the value optimized for the original data set, with default hyper-parameter values for each base classifier.
\begin{table}[H]
\centering
\caption{The imbalance ratio and the number of samples in modified data sets created by under-sampling}
\label{IR}
\begin{tabular}{lcc}
\toprule
Imbalance Ratio & \multicolumn{1}{l}{No. majority samples} & \multicolumn{1}{l}{No. minority samples} \\
\hline
1 & 23863 & 23863 \\
2 & 47726 & 23863 \\
3 & 71589 & 23863 \\
4 & 95452 & 23863 \\
5 & 119315 & 23863 \\
5.8 (Original data set) & 138372 & 23863 \\
\bottomrule
\end{tabular}
\end{table}
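The under-sampling procedure used to construct these data sets can be sketched as follows; the labels follow the convention of this study (class 0 is the majority class of fully paid loans), and the random seed is only an illustrative choice.
\begin{verbatim}
import numpy as np

def undersample_majority(X, y, imbalance_ratio, majority_label=0, seed=0):
    """Sketch: under-sample the majority class so that
    (# majority) / (# minority) equals imbalance_ratio."""
    rng = np.random.default_rng(seed)
    minority_idx = np.flatnonzero(y != majority_label)
    majority_idx = np.flatnonzero(y == majority_label)
    n_keep = int(imbalance_ratio * len(minority_idx))
    kept_majority = rng.choice(majority_idx, size=n_keep, replace=False)
    keep = np.concatenate([minority_idx, kept_majority])
    rng.shuffle(keep)
    return X[keep], y[keep]

# For example, an imbalance ratio of 1 yields the fully balanced training set.
\end{verbatim}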
\section{Results and Discussion}
The results of our experiments in section \ref{experiment1} and \ref{experiment2} are represented in Tables 12-15 (the best three DS techniques are in bold across all evaluation measures). Based on the obtained results we discuss our findings regarding the following aspects:
\subsection{On the impact of DS}
The hypothesis behind employing dynamic selection techniques for ensemble models is to increase their classification ability by choosing the most competent classifiers for each test sample. From the results of our experiments on the real-life credit data of Lending Club customers, we observe that DS techniques are able to improve the performance of the pool of classifiers. Table \ref{Table DS discussion} shows the average performance of the top 3 DS techniques for different performance measures compared to the performance of the pool on the original data set. The obtained results demonstrate that DS techniques can boost the performance of ensembles. The improvements are particularly visible in the G-mean and F1 score, which reflect the classification ability of classifiers in the presence of cost-sensitive and imbalanced training sets. This result validates the statement of \citet{cruz2018dynamic} that dynamic selection techniques are good candidates for classifying imbalanced data sets, specifically in the context of highly imbalanced credit scoring problems.
\begin{table}[H]
\centering
\caption{The average performance of top 3 DS techniques compared to the pool of classifiers.}
\label{Table DS discussion}
\resizebox{\linewidth}{!}{%
\begin{tabular}{llccllcc}
\toprule
Classifier & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Evaluation \\measure \end{tabular}} & \begin{tabular}[c]{@{}c@{}}Classifiers' \\pool \end{tabular} & \begin{tabular}[c]{@{}c@{}}Average top 3 \\DS techniques \end{tabular} & Classifier & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Evaluation \\measure \end{tabular}} & \begin{tabular}[c]{@{}c@{}}Classifiers' \\pool \end{tabular} & \begin{tabular}[c]{@{}c@{}}Average top 3 \\DS techniques \end{tabular} \\
\hline
\multirow{6}{*}{\begin{tabular}[c]{@{}l@{}}GNB \\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.7728 & \textbf{0.8212 } & \multirow{6}{*}{\begin{tabular}[c]{@{}l@{}}Heterogeneous\\pool \\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.8459 & \textbf{0.8510 } \\
& AUC & 0.6879 & 0.6763 & & AUC & 0.6813 & 0.6794 \\
& F1 & 0.3171 & 0.2876 & & F1 & 0.1112 & \textbf{0.2144 } \\
& G-mean & 0.5483 & 0.5083 & & G-mean & 0.2527 & \textbf{0.3967 } \\
& H\_measure & 0.1105 & 0.1019 & & H\_measure & 0.1065 & 0.1030 \\
& Brier score & 0.1816 & \textbf{0.1656 } & & Brier score & 0.1231 & \textbf{0.1221 } \\
\multirow{6}{*}{\begin{tabular}[c]{@{}l@{}}RF \\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.8519 & \textbf{0.8519 } & \multirow{6}{*}{\begin{tabular}[c]{@{}l@{}}MLP \\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.8494 & \textbf{ 0.8499} \\
& AUC & 0.6829 & \textbf{0.6837 } & & AUC & 0.6837 & 0.6824 \\
& F1 & 0.0369 & \textbf{0.2144 } & & F1 & 0.0656 & \textbf{ 0.1804} \\
& G-mean & 0.1382 & \textbf{0.4360 } & & G-mean & 0.1879 & \textbf{ 0.3532} \\
& H\_measure & 0.1071 & \textbf{0.1084 } & & H\_measure & 0.1062 & 0.1047 \\
& Brier score & 0.1198 & 0.1204 & & Brier score & 0.1204 & 0.1205 \\
\bottomrule
\end{tabular}
}
\end{table}
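The evaluation measures reported in these tables can be computed with standard tooling. The sketch below assumes binary labels in $\{0,1\}$ with the minority (default) class labelled $1$, and omits the H-measure, which is not part of scikit-learn; these choices are assumptions made for illustration.
\begin{verbatim}
import numpy as np
from sklearn.metrics import (accuracy_score, brier_score_loss, f1_score,
                             recall_score, roc_auc_score)

def evaluate(y_true, y_pred, y_prob):
    # y_pred: hard labels; y_prob: predicted probability of class 1.
    sensitivity = recall_score(y_true, y_pred, pos_label=1)
    specificity = recall_score(y_true, y_pred, pos_label=0)
    return {
        "Acc":    accuracy_score(y_true, y_pred),
        "AUC":    roc_auc_score(y_true, y_prob),
        "F1":     f1_score(y_true, y_pred, pos_label=1),
        "G-mean": np.sqrt(sensitivity * specificity),
        "Brier":  brier_score_loss(y_true, y_prob),
    }
\end{verbatim}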
\subsection{On the impact of DS on imbalanced data sets}
The results of our study show that, in general, accuracy increases as the imbalance ratio of the training data set grows. At the same time, the G-mean measure, which reflects the ability of a classifier to distinguish between the majority and minority classes, decreases. This concurrence indicates that, as classification accuracy increases on more highly imbalanced data sets, the ability to distinguish between the positive (minority) class and the negative (majority) class decreases. We argue that the increase in accuracy is a consequence of encountering more negative (majority) samples, which improves the ability of the algorithms to classify them correctly.
Applying dynamic selection techniques to the pool of classifiers across the different versions of the data set can increase the ability of the ensemble to distinguish between the minority and majority classes. The increase in the F1 measure, G-mean and AUC, along with the improvement in accuracy, indicates that DS techniques are able to improve the performance of the pool, especially on data sets with higher imbalance ratios. Tables \ref{DS IR1 discussion} to \ref{DS IR5 discussion} show the average results of the top 3 DS techniques compared to the performance of the pool of classifiers for the different imbalance ratios.
\begin{table}[H]
\centering
\caption{The average performance of top 3 DS techniques compared to the pool of classifiers in data set (Imbalance Ratio=1).}
\label{DS IR1 discussion}
\resizebox{\linewidth}{!}{%
\begin{tabular}{llccllcc}
\toprule
Classifier & \begin{tabular}[c]{@{}l@{}}Evaluation\\measure \end{tabular} & \multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}Classifiers\\pool \end{tabular}} & \multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}Average top 3\\DS techniques \end{tabular}} & Classifier & \begin{tabular}[c]{@{}l@{}}Evaluation\\measure \end{tabular} & \multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}Classifiers\\pool \end{tabular}} & \multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}Average top 3\\DS techniques \end{tabular}} \\
\hline
\multirow{6}{*}{\begin{tabular}[c]{@{}l@{}}GNB\\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.8251 & \textbf{0.8287} & \multirow{6}{*}{\begin{tabular}[c]{@{}l@{}}Heterogeneous\\pool\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.7406 & \textbf{0.7443} \\
& AUC & 0.6854 & \textbf{0.6870} & & AUC & 0.6856 & 0.6827 \\
& F1 & 0.2298 & \textbf{0.2910} & & F1 & 0.3310 & \textbf{0.3464} \\
& G-mean & 0.4062 & \textbf{0.4978} & & G-mean & 0.5861 & \textbf{0.6373} \\
& H\_measure & 0.1072 & \textbf{0.1098} & & H\_measure & 0.1094 & 0.1071 \\
& Brier score & 0.1587 & 0.1658 & & Brier score & 0.1813 & 0.1856 \\
\multirow{6}{*}{\begin{tabular}[c]{@{}l@{}}RF \\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.6233 & \textbf{0.6265} & \multirow{6}{*}{\begin{tabular}[c]{@{}l@{}}MLP \\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.6225 & 0.6283 \\
& AUC & 0.6934 & 0.6910 & & AUC & 0.6758 & 0.6776 \\
& F1 & 0.3435 & 0.3425 & & F1 & 0.3346 & 0.3338 \\
& G-mean & 0.6397 & 0.6387 & & G-mean & 0.6296 & 0.6284 \\
& H\_measure & 0.1151 & 0.1131 & & H\_measure & 0.0965 & 0.0988 \\
& Brier score & 0.2227 & 0.2250 & & Brier score & 0.2303 & 0.2312 \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{table}[H]
\centering
\caption{The average performance of top 3 DS techniques compared to the pool of classifiers in data set (Imbalance Ratio=2).}
\label{DS IR2 discussion}
\resizebox{\linewidth}{!}{%
\begin{tabular}{llccllcc}
\toprule
Classifier & \begin{tabular}[c]{@{}l@{}}Evaluation\\measure \end{tabular} & \multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}Classifiers\\pool \end{tabular}} & \multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}Average top 3\\DS techniques \end{tabular}} & Classifier & \begin{tabular}[c]{@{}l@{}}Evaluation\\measure \end{tabular} & \multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}Classifiers\\pool \end{tabular}} & \multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}Average top 3\\DS techniques \end{tabular}} \\
\hline
\multirow{6}{*}{\begin{tabular}[c]{@{}l@{}}GNB\\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.7996 & 0.7954 & \multirow{6}{*}{\begin{tabular}[c]{@{}l@{}}Heterogeneous\\pool \\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.8354 & \textbf{0.8487} \\
& AUC & 0.6872 & 0.6824 & & AUC & 0.6820 & 0.6814 \\
& F1 & 0.2797 & \textbf{0.3273} & & F1 & 0.1944 & \textbf{0.2391} \\
& G-mean & 0.4839 & \textbf{0.5908} & & G-mean & 0.3580 & \textbf{0.4347} \\
& H\_measure & 0.1097 & 0.1050 & & H\_measure & 0.1072 & 0.1062 \\
& Brier score & 0.1641 & 0.1760 & & Brier score & 0.1273 & \textbf{0.1231} \\
\multirow{6}{*}{\begin{tabular}[c]{@{}l@{}}RF \\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.8017 & 0.8006 & \multirow{6}{*}{\begin{tabular}[c]{@{}l@{}}MLP \\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.7845 & \textbf{ 0.7859} \\
& AUC & 0.6908 & 0.6893 & & AUC & 0.6812 & 0.6800 \\
& F1 & 0.3028 & 0.3016 & & F1 & 0.2974 & 0.2950 \\
& G-mean & 0.5085 & \textbf{0.5364} & & G-mean & 0.5164 & \textbf{ 0.5392} \\
& H\_measure & 0.1138 & 0.1128 & & H\_measure & 0.1032 & 0.1025 \\
& Brier score & 0.1535 & 0.1446 & & Brier score & 0.1561 & \textbf{ 0.1530} \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{table}[H]
\centering
\caption{The average performance of top 3 DS techniques compared to the pool of classifiers in data set (Imbalance Ratio=3).}
\label{DS IR3 discussion}
\resizebox{\linewidth}{!}{%
\begin{tabular}{llccllcc}
\toprule
Classifier & \begin{tabular}[c]{@{}l@{}}Evaluation\\measure \end{tabular} & \multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}Classifiers\\pool \end{tabular}} & \multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}Average top 3\\DS techniques \end{tabular}} & Classifier & \begin{tabular}[c]{@{}l@{}}Evaluation\\measure \end{tabular} & \multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}Classifiers\\pool \end{tabular}} & \multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}Average top 3\\DS techniques \end{tabular}} \\
\hline
\multirow{6}{*}{\begin{tabular}[c]{@{}l@{}}GNB \\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.7956 & \textbf{0.8121} & \multirow{6}{*}{\begin{tabular}[c]{@{}l@{}}Heterogeneous\\pool \\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.8138 & \textbf{0.8406} \\
& AUC & 0.6870 & 0.6804 & & AUC & 0.6837 & 0.6824 \\
& F1 & 0.2833 & \textbf{0.3263} & & F1 & 0.2762 & 0.2751 \\
& G-mean & 0.4914 & \textbf{0.5858} & & G-mean & 0.4678 & \textbf{0.4897} \\
& H\_measure & 0.1091 & 0.1037 & & H\_measure & 0.1085 & 0.1062 \\
& Brier score & 0.1720 & \textbf{0.1687} & & Brier score & 0.1364 & \textbf{0.1305} \\
\multirow{6}{*}{\begin{tabular}[c]{@{}l@{}}RF \\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.8370 & \textbf{0.8389} & \multirow{6}{*}{\begin{tabular}[c]{@{}l@{}}MLP \\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.8293 & \textbf{ 0.8307} \\
& AUC & 0.6892 & 0.6887 & & AUC & 0.6796 & 0.6786 \\
& F1 & 0.1785 & \textbf{0.2500} & & F1 & 0.2087 & \textbf{ 0.2532} \\
& G-mean & 0.3389 & \textbf{0.5072} & & G-mean & 0.3791 & \textbf{ 0.4805} \\
& H\_measure & 0.1119 & 0.1117 & & H\_measure & 0.1013 & 0.1003 \\
& Brier score & 0.1325 & \textbf{0.1283} & & Brier score & 0.1335 & \textbf{ 0.1321} \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{table}[H]
\centering
\caption{The average performance of top 3 DS techniques compared to the pool of classifiers in data set (Imbalance Ratio=4).}
\label{DS IR4 discussion}
\resizebox{\linewidth}{!}{%
\begin{tabular}{llccllcc}
\toprule
Classifier & \begin{tabular}[c]{@{}l@{}}Evaluation\\measure \end{tabular} & \multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}Classifiers\\pool \end{tabular}} & \multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}Average top 3\\DS techniques \end{tabular}} & Classifier & \begin{tabular}[c]{@{}l@{}}Evaluation\\measure \end{tabular} & \multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}Classifiers\\pool \end{tabular}} & \multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}Average top 3\\DS techniques \end{tabular}} \\
\hline
\multirow{6}{*}{\begin{tabular}[c]{@{}l@{}}GNB\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.7944 & \textbf{0.8117} & \multirow{6}{*}{\begin{tabular}[c]{@{}l@{}}Heterogeneous\\pool \\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.8138 & \textbf{0.8406} \\
& AUC & 0.6874 & 0.6762 & & AUC & 0.6837 & 0.6824 \\
& F1 & 0.2847 & \textbf{0.3266} & & F1 & 0.2762 & 0.2751 \\
& G-mean & 0.4939 & \textbf{0.5848} & & G-mean & 0.4678 & \textbf{0.4897} \\
& H\_measure & 0.1097 & 0.1015 & & H\_measure & 0.1085 & 0.1062 \\
& Brier score & 0.1745 & \textbf{0.1707} & & Brier score & 0.1364 & \textbf{0.1305} \\
\multirow{6}{*}{\begin{tabular}[c]{@{}l@{}}RF \\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.8485 & \textbf{0.8490} & \multirow{6}{*}{\begin{tabular}[c]{@{}l@{}}MLP \\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.8443 & \textbf{ 0.8453} \\
& AUC & 0.6867 & \textbf{0.6871} & & AUC & 0.6835 & 0.6825 \\
& F1 & 0.0992 & \textbf{0.2332} & & F1 & 0.1233 & \textbf{ 0.2226} \\
& G-mean & 0.2355 & \textbf{0.4747} & & G-mean & 0.2687 & \textbf{ 0.4214} \\
& H\_measure & 0.1101 & \textbf{0.1109} & & H\_measure & 0.1058 & 0.1044 \\
& Brier score & 0.1242 & \textbf{0.1240} & & Brier score & 0.1238 & \textbf{ 0.1236} \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{table}[H]
\centering
\caption{The average performance of top 3 DS techniques compared to the pool of classifiers in data set (Imbalance Ratio=5).}
\label{DS IR5 discussion}
\resizebox{\linewidth}{!}{%
\begin{tabular}{llccllcc}
\toprule
Classifier & \begin{tabular}[c]{@{}l@{}}Evaluation\\measure \end{tabular} & \multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}Classifiers\\pool \end{tabular}} & \multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}Average top 3\\DS techniques \end{tabular}} & Classifier & \begin{tabular}[c]{@{}l@{}}Evaluation\\measure \end{tabular} & \multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}Classifiers\\pool \end{tabular}} & \multicolumn{1}{l}{\begin{tabular}[c]{@{}l@{}}Average top 3\\DS techniques \end{tabular}} \\
\hline
\multirow{6}{*}{\begin{tabular}[c]{@{}l@{}}GNB\\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.7121 & \textbf{0.8105} & \multirow{6}{*}{\begin{tabular}[c]{@{}l@{}}Heterogeneous\\pool\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.8438 & \textbf{0.8503} \\
& AUC & 0.6876 & 0.6841 & & AUC & 0.6820 & 0.6796 \\
& F1 & 0.3370 & 0.3184 & & F1 & 0.1423 & \textbf{0.2260} \\
& G-mean & 0.6082 & 0.5641 & & G-mean & 0.2919 & \textbf{0.4148} \\
& H\_measure & 0.1102 & 0.1072 & & H\_measure & 0.1073 & 0.1031 \\
& Brier score & 0.2218 & \textbf{0.1754} & & Brier score & 0.1245 & \textbf{0.1223} \\
\multirow{6}{*}{\begin{tabular}[c]{@{}l@{}}RF \\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.8508 & \textbf{0.8509} & \multirow{6}{*}{\begin{tabular}[c]{@{}l@{}}MLP \\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.8484 & \textbf{ 0.8487} \\
& AUC & 0.6851 & \textbf{0.6855} & & AUC & 0.6849 & 0.6836 \\
& F1 & 0.0462 & \textbf{0.2228} & & F1 & 0.0891 & \textbf{ 0.1943} \\
& G-mean & 0.1556 & \textbf{0.4520} & & G-mean & 0.2222 & \textbf{ 0.3755} \\
& H\_measure & 0.1085 & \textbf{0.1094} & & H\_measure & 0.1075 & 0.1060 \\
& Brier score & 0.1209 & 0.1216 & & Brier score & 0.1210 & \textbf{ 0.1210} \\
\bottomrule
\end{tabular}
}
\end{table}
\subsection{On comparison of DS techniques}
The F1 measure captures the trade-off between precision and recall (i.e., fewer false positives and fewer false negatives), and we therefore use it to compare the different dynamic selection techniques on the original data set. Table \ref{Ranks table} ranks the techniques by their F1 measure. From the ranks of the DS techniques over the ensemble pool of each classifier on the original data set, we observe that the overall top 3 classification techniques are \textit{GNB\_bag}, \textit{GNB\_posteriori}, and \textit{Pool\_posteriori}. It is interesting that more complex DS techniques such as Meta-DES do not perform as well as less complex techniques such as A Posteriori. This may be because Meta-DES is considered most effective for small sample size classification problems \citep{cruz2015meta}, whereas our data set contains more than 160,000 high-dimensional training samples.
\begin{table}[H]
\centering
\caption{Ranks of different techniques based on the F1 measure in the original data set.}
\label{Ranks table}
\resizebox{\linewidth}{!}{%
\begin{tabular}{lcclcc}
\toprule
Classifier & F1 measure & Rank & Classifier & F1 measure & Rank \\
\hline
GNB\_bag & 0.3171 & 1 & Pool & 0.1112 & 34 \\
GNB\_posteriori & 0.2989 & 2 & Pool\_posteriori & 0.2932 & 3 \\
GNB\_priori & 0.2379 & 14 & Pool\_priori & 0.1035 & 35 \\
GNB\_lca & 0.2727 & 8 & Pool\_lca & 0.0493 & 51 \\
GNB\_mcb & 0.2496 & 12 & Pool\_mcb & 0.1200 & 32 \\
GNB\_mla & 0.2727 & 8 & Pool\_mla & 0.0497 & 50 \\
GNB\_ola & 0.2467 & 13 & Pool\_ola & 0.1246 & 31 \\
GNB\_rank & 0.2738 & 7 & Pool\_rank & 0.1825 & 22 \\
GNB\_metades & 0.2264 & 16 & Pool\_metades & 0.0602 & 45 \\
GNB\_descluster & 0.2701 & 11 & Pool\_descluster & 0.0105 & 60 \\
GNB\_desknn & 0.2765 & 6 & Pool\_desknn & 0.0356 & 58 \\
GNB\_desp & 0.2326 & 15 & Pool\_desp & 0.0800 & 36 \\
GNB\_knop & 0.2809 & 5 & Pool\_knop & 0.0533 & 49 \\
GNB\_kne & 0.2720 & 10 & Pool\_kne & 0.1674 & 25 \\
GNB\_knu & 0.2829 & 4 & Pool\_knu & 0.0558 & 47 \\
RF\_indv & 0.0369 & 57 & MLP\_bag & 0.0656 & 40 \\
RF\_posteriori & 0.1469 & 29 & MLP\_posteriori & 0.0478 & 52 \\
RF\_priori & 0.2177 & 17 & MLP\_priori & 0.1135 & 33 \\
RF\_lca & 0.1494 & 27 & MLP\_lca & 0.0657 & 38 \\
RF\_mcb & 0.2129 & 18 & MLP\_mcb & 0.1850 & 21 \\
RF\_mla & 0.1494 & 27 & MLP\_mla & 0.0657 & 38 \\
RF\_ola & 0.2116 & 20 & MLP\_ola & 0.1756 & 24 \\
RF\_rank & 0.2127 & 19 & MLP\_rank & 0.1808 & 23 \\
RF\_metades & 0.0370 & 56 & MLP\_metades & 0.0642 & 41 \\
RF\_descluster & 0.0454 & 53 & MLP\_descluster & 0.0630 & 43 \\
RF\_desknn & 0.0540 & 48 & MLP\_desknn & 0.0620 & 44 \\
RF\_desp & 0.0411 & 54 & MLP\_desp & 0.0694 & 37 \\
RF\_knop & 0.0216 & 59 & MLP\_knop & 0.0596 & 46 \\
RF\_kne & 0.1496 & 26 & MLP\_kne & 0.1438 & 30 \\
RF\_knu & 0.0384 & 55 & MLP\_knu & 0.0631 & 42 \\
\bottomrule
\end{tabular}
}
\end{table}
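As an outline of how the compared techniques can be assembled, the sketch below builds a bagging pool of Gaussian naive Bayes classifiers and wraps it with a few DS techniques, assuming the open-source DESlib package. The class and module names follow its documentation, but the exact import paths, the selection/test split and the variable names should all be treated as illustrative assumptions rather than a description of our exact setup.
\begin{verbatim}
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import f1_score
from sklearn.naive_bayes import GaussianNB
from deslib.dcs import APosteriori, OLA   # dynamic classifier selection
from deslib.des import KNORAU, METADES    # dynamic ensemble selection

# X_train/y_train, X_dsel/y_dsel (selection set) and X_test/y_test
# are assumed to be available from the data preparation step.
pool = BaggingClassifier(GaussianNB(), n_estimators=80).fit(X_train, y_train)

techniques = {"GNB_posteriori": APosteriori(pool), "GNB_ola": OLA(pool),
              "GNB_knu": KNORAU(pool), "GNB_metades": METADES(pool)}
ranking = {}
for name, ds in techniques.items():
    ds.fit(X_dsel, y_dsel)            # defines the regions of competence
    ranking[name] = f1_score(y_test, ds.predict(X_test))
print(sorted(ranking.items(), key=lambda kv: kv[1], reverse=True))
\end{verbatim}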
\section{Conclusion}
In this paper, we investigated the effectiveness of dynamic selection techniques in improving the classification ability of both homogeneous and heterogeneous ensemble models in credit scoring. Dynamic selection techniques are applied to a pool of classifiers and select the most competent classifiers for each test query individually, based on a pre-specified competence measure.
We trained 14 different dynamic selection techniques on a real-life credit scoring data set. In addition, to inspect the robustness of DS techniques in imbalanced environments, we formed different training sets from the original training set by under-sampling the majority class. The DS techniques were then trained on each of these data sets and used to classify the test set.
Based on the results of our experiments, we conclude that dynamic selection techniques are able to boost the performance of ensemble models. DS techniques are mostly effective in increasing the G-mean and F1 measures, which capture the classifiers' ability to learn from imbalanced data sets. Our experiments validate the assumption of \citet{cruz2018dynamic} about the robustness of dynamic selection techniques in dealing with imbalanced data sets.
Based on the results of the F1 measure, we conclude that less complex dynamic selection techniques such as A Posteriori, KNORA-Union and KNORA-Eliminate perform better than more complicated techniques such as Meta-DES and DES Performance, which may be due to the fact that credit scoring involves large and high-dimensional data sets.
The majority of DS techniques use the k-NN algorithm to obtain the region of competence for each test sample; therefore, dynamic selection techniques have higher complexity than the pool of classifiers alone. One limitation of our study is that, due to limited computational power, we used the default hyper-parameter values for both the base classifiers and the DS techniques. We believe that hyper-parameter optimization may improve the classification results of dynamic selection techniques more significantly. Additionally, due to the same computational limitations, we were not able to train pools of support vector machines and k-nearest neighbors classifiers on our real-life data set.
The high complexity of DS techniques may be one of the drawbacks of employing these techniques, especially on high-dimensional data sets. We believe our experiments on a large and high-dimensional data set can serve as a starting point for other studies, both in the context of credit scoring and in other classification tasks with large data sets.
\iffalse
\begin{table}
\centering
\caption{Comparison of results with other comparative papers}
\label{table relevant studies}
\resizebox{\textwidth}{!}{%
\begin{tabular}{|p{100 pt}|p{50 pt}||p{200 pt}|p{50 pt}||p{50 pt}|p{100 pt}|}
\hline
\multirow{2}{*}{classification papers} & \multirow{2}{*}{data set} & \multirow{2}{*}{classification algorithms} & \multicolumn{3}{l|}{performance of the best model} \\ \cline{4-6}
& & & ACC & AUC & Brier \\ \hline
Teply and Polena & LC2012-2013 & logistic regression, artificial neural network, LDA, linear SVM,RF, baysian net, classification and regression tree, k-NN, naïve bayes, RBF SVM, & 0.7913(In LR algorithm) & 0.6979 (In LR algorithm) & 0.1239 (LR) \\ \hline
Malekipirbazari and Aksakalli & LC2012-2014 & RF, k-nn, SVM, logistic regression & 0.78 (RF) & 0.71 (RF) & - \\ \hline
Setiawana et al. & LC2009-2013 & Binary particle swarm optimization SVM, RF, Exteremly Randomized Tree & 0.64 BPSOSVMERT & - & - \\ \hline
present study & LC2015-2016 & RF,k-NN,SVM, GNB, MLP & 0.8520 (RF\_metades) & 0.6879 (GNB\_bag) & 0.1198 (RF\_metades) \\ \hline
\end{tabular}%
}
\end{table}
\fi
\begin{sidewaystable}
\centering
\caption{GNB results (Number of base classifiers=80)}
\label{gnb_results}
\resizebox{\linewidth}{!}{%
\begin{tabular}{clcccccccccccccccc}
\toprule
\multirow{2}{*}{Imbalance Ratio} & \multicolumn{1}{c}{\multirow{2}{*}{Evaluation measure}} & \multicolumn{16}{c}{Classification technique} \\
\cline{3-18}
& \multicolumn{1}{c}{} & GNB\_indv & GNB\_bag & GNB\_posteriori & GNB\_priori & GNB\_lca & GNB\_mcb & GNB\_mla & GNB\_ola & GNB\_rank & GNB\_metades & GNB\_descluster & GNB\_desknn & GNB\_desp & GNB\_knop & GNB\_kne & GNB\_knu \\
\hline
\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}1\\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.8217 & 0.8251 & \textbf{0.8244 } & 0.7981 & \textbf{0.8309 } & 0.7990 & \textbf{0.8309 } & 0.7983 & 0.8033 & 0.8029 & 0.8194 & 0.8222 & 0.8000 & 0.8215 & 0.8039 & 0.8223 \\
& AUC & 0.6866 & 0.6854 & 0.6732 & 0.6834 & 0.6601 & 0.6852 & 0.6601 & 0.6831 & 0.6765 & 0.6848 & \textbf{0.6873 } & \textbf{0.6871 } & \textbf{0.6868 } & 0.6863 & 0.6858 & 0.6864 \\
& F1 & 0.2369 & 0.2298 & 0.2258 & \textbf{0.2888 } & 0.1982 & \textbf{0.2922 } & 0.1982 & \textbf{0.2919 } & 0.2777 & 0.2703 & 0.2524 & 0.2461 & 0.2868 & 0.2438 & 0.2760 & 0.2459 \\
& G-mean & 0.4170 & 0.4062 & 0.4023 & \textbf{0.4956 } & 0.3661 & \textbf{0.4988 } & 0.3661 & \textbf{0.4990 } & 0.4786 & 0.4705 & 0.4363 & 0.4268 & 0.4917 & 0.4248 & 0.4762 & 0.4265 \\
& H\_measure & 0.1073 & 0.1072 & 0.0950 & 0.1051 & 0.0860 & 0.1073 & 0.0860 & 0.1054 & 0.0996 & 0.1063 & \textbf{0.1103 } & \textbf{0.1096 } & \textbf{0.1095 } & 0.1084 & 0.1071 & 0.1089 \\
& Brier score & 0.1754 & 0.1587 & 0.1741 & 0.1940 & 0.1675 & 0.1893 & 0.1675 & 0.1918 & 0.1872 & 0.1818 & \textbf{0.1671 } & 0.1676 & 0.1864 & \textbf{0.1656 } & 0.1840 & \textbf{0.1646 } \\
\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}2\\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.7252 & 0.7996 & 0.7718 & 0.7539 & 0.7234 & 0.7410 & 0.7234 & 0.7392 & 0.6819 & 0.7666 & \textbf{0.7963 } & 0.7785 & 0.7523 & \textbf{0.7943 } & 0.6860 & \textbf{0.7956 } \\
& AUC & 0.6862 & 0.6872 & 0.6305 & 0.6413 & \textbf{0.6824 } & 0.6468 & \textbf{0.6824 } & 0.6613 & 0.6481 & 0.6662 & 0.6606 & 0.6682 & 0.6672 & \textbf{0.6824 } & 0.6495 & 0.6799 \\
& F1 & 0.3356 & 0.2797 & 0.2865 & 0.2966 & \textbf{0.3388 } & 0.3004 & \textbf{0.3388 } & 0.3023 & 0.2894 & 0.2854 & 0.2960 & \textbf{0.3043 } & 0.2965 & 0.2926 & 0.2882 & 0.2923 \\
& G-mean & 0.6002 & 0.4839 & 0.5131 & 0.5369 & \textbf{0.6049 } & 0.5492 & \textbf{0.6049 } & 0.5525 & \textbf{0.5624 } & 0.5153 & 0.5053 & 0.5291 & 0.5378 & 0.5030 & 0.5595 & 0.5017 \\
& H\_measure & 0.1070 & 0.1097 & 0.0705 & 0.0765 & \textbf{0.1047 } & 0.0772 & \textbf{0.1047 } & 0.0867 & 0.0786 & 0.0915 & 0.0927 & 0.0949 & 0.0908 & \textbf{0.1056 } & 0.0787 & 0.1038 \\
& Brier score & 0.2489 & 0.1641 & 0.2252 & 0.2333 & 0.2547 & 0.2323 & 0.2547 & 0.2327 & 0.2820 & 0.2012 & \textbf{0.1818 } & 0.2005 & 0.2256 & \textbf{0.1726 } & 0.2766 & \textbf{0.1735 } \\
\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}3\\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.7228 & 0.7956 & 0.7746 & 0.7946 & 0.7183 & 0.7845 & 0.7183 & 0.7807 & 0.7236 & \textbf{0.8125 } & \textbf{0.8214 } & 0.8008 & 0.7965 & 0.8015 & 0.7268 & \textbf{0.8023 } \\
& AUC & 0.6862 & 0.6870 & 0.6371 & 0.6382 & \textbf{0.6793 } & 0.6447 & \textbf{0.6793 } & 0.6375 & 0.6397 & 0.6748 & 0.6573 & 0.6724 & 0.6673 & 0.6793 & 0.6557 & \textbf{0.6824 } \\
& F1 & 0.3374 & 0.2833 & \textbf{0.3022 } & 0.2725 & \textbf{0.3383 } & 0.2730 & \textbf{0.3383 } & 0.2717 & 0.2881 & 0.2640 & 0.2318 & 0.2776 & 0.2670 & 0.2831 & 0.2867 & 0.2853 \\
& G-mean & 0.6035 & 0.4914 & 0.5294 & 0.4799 & \textbf{0.6069 } & 0.4881 & \textbf{0.6069 } & 0.4894 & \textbf{0.5436 } & 0.4552 & 0.4116 & 0.4807 & 0.4720 & 0.4863 & 0.5404 & 0.4881 \\
& H\_measure & 0.1066 & 0.1091 & 0.0777 & 0.0740 & \textbf{0.1026 } & 0.0762 & \textbf{0.1026 } & 0.0757 & 0.0747 & 0.0996 & 0.0860 & 0.0962 & 0.0907 & \textbf{0.1027 } & 0.0816 & \textbf{0.1059 } \\
& Brier score & 0.2504 & 0.1720 & 0.2220 & 0.1965 & 0.2617 & 0.1942 & 0.2617 & 0.1950 & 0.2439 & \textbf{0.1665 } & \textbf{0.1672 } & 0.1815 & 0.1884 & 0.1727 & 0.2404 & \textbf{0.1725 } \\
\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}4\\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.7369 & 0.7944 & 0.7594 & 0.8062 & 0.7188 & 0.7984 & 0.7188 & 0.7967 & 0.7432 & \textbf{0.8167 } & \textbf{0.8103 } & 0.8014 & \textbf{0.8080 } & 0.8022 & 0.7448 & 0.8014 \\
& AUC & 0.6861 & 0.6874 & 0.6387 & 0.6387 & 0.6695 & 0.6439 & 0.6695 & 0.6361 & 0.6398 & 0.6661 & 0.6537 & \textbf{0.6715 } & 0.6655 & \textbf{0.6778 } & 0.6594 & \textbf{0.6793 } \\
& F1 & 0.3296 & 0.2847 & \textbf{0.3010 } & 0.2596 & \textbf{0.3394 } & 0.2664 & \textbf{0.3394 } & 0.2681 & 0.2858 & 0.2456 & 0.2707 & 0.2865 & 0.2563 & 0.2804 & 0.2829 & 0.2834 \\
& G-mean & 0.5866 & 0.4939 & \textbf{0.5386 } & 0.4556 & \textbf{0.6080 } & 0.4698 & \textbf{0.6080 } & 0.4731 & 0.5304 & 0.4311 & 0.4647 & 0.4903 & 0.4504 & 0.4825 & 0.5260 & 0.4867 \\
& H\_measure & 0.1065 & 0.1097 & 0.0753 & 0.0756 & \textbf{0.0991 } & 0.0776 & \textbf{0.0991 } & 0.0776 & 0.0768 & 0.0934 & 0.0851 & 0.0973 & 0.0913 & \textbf{0.1022 } & 0.0850 & \textbf{0.1033 } \\
& Brier score & 0.2399 & 0.1745 & 0.2368 & 0.1865 & 0.2608 & 0.1853 & 0.2608 & 0.1852 & 0.2271 & \textbf{0.1665 } & 0.1733 & 0.1793 & 0.1791 & \textbf{0.1728 } & 0.2272 & \textbf{0.1730 } \\
\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}5\\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.7290 & 0.7121 & 0.7041 & \textbf{0.8074 } & 0.8034 & 0.8005 & 0.8034 & 0.8036 & 0.7444 & \textbf{0.8144 } & 0.8007 & 0.7940 & \textbf{0.8095 } & 0.7728 & 0.7446 & 0.7681 \\
& AUC & 0.6859 & 0.6876 & 0.6513 & 0.6393 & 0.6674 & 0.6442 & 0.6674 & 0.6642 & 0.6550 & 0.6449 & \textbf{0.6843 } & 0.6790 & 0.6461 & \textbf{0.6830 } & 0.6439 & \textbf{0.6850 } \\
& F1 & 0.3336 & 0.3370 & \textbf{0.3192 } & 0.2635 & 0.2808 & 0.2654 & 0.2808 & 0.2626 & 0.2810 & 0.2504 & 0.2792 & 0.2916 & 0.2600 & \textbf{0.3173 } & 0.2810 & \textbf{0.3187 } \\
& G-mean & 0.5957 & 0.6082 & \textbf{0.5903 } & 0.4590 & 0.4821 & 0.4669 & 0.4821 & 0.4612 & 0.5239 & 0.4383 & 0.4825 & 0.5022 & 0.4533 & \textbf{0.5485 } & 0.5238 & \textbf{0.5535 } \\
& H\_measure & 0.1065 & 0.1102 & 0.0805 & 0.0757 & 0.0930 & 0.0768 & 0.0930 & 0.0894 & 0.0821 & 0.0841 & \textbf{0.1074 } & 0.1034 & 0.0815 & \textbf{0.1063 } & 0.0778 & \textbf{0.1081 } \\
& Brier score & 0.2461 & 0.2218 & 0.2907 & 0.1863 & 0.1922 & 0.1856 & 0.1922 & 0.1853 & 0.2308 & \textbf{0.1697 } & \textbf{0.1792 } & 0.1860 & \textbf{0.1774 } & 0.1876 & 0.2273 & 0.1883 \\
\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}5.8\\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.7422 & 0.7728 & 0.7779 & \textbf{0.8184 } & 0.8094 & 0.8136 & 0.8094 & 0.8138 & 0.7658 & \textbf{0.8254 } & 0.8096 & 0.8070 & \textbf{0.8199 } & 0.8051 & 0.7663 & 0.8030 \\
& AUC & 0.6869 & 0.6879 & 0.6438 & 0.6378 & 0.6516 & 0.6470 & 0.6516 & 0.6521 & 0.6486 & 0.6685 & 0.6515 & \textbf{0.6723 } & 0.6670 & \textbf{0.6777 } & 0.6619 & \textbf{0.6787 } \\
& F1 & 0.3299 & 0.3171 & \textbf{0.2989 } & 0.2379 & 0.2727 & 0.2496 & 0.2727 & 0.2467 & 0.2738 & 0.2264 & 0.2701 & 0.2765 & 0.2326 & \textbf{0.2809 } & 0.2720 & \textbf{0.2829 } \\
& G-mean & 0.5838 & 0.5483 & \textbf{0.5232 } & 0.4210 & 0.4677 & 0.4382 & 0.4677 & 0.4348 & \textbf{0.5021 } & 0.4021 & 0.4647 & 0.4741 & 0.4138 & 0.4807 & \textbf{0.4997 } & 0.4848 \\
& H\_measure & 0.1074 & 0.1105 & 0.0810 & 0.0751 & 0.0851 & 0.0798 & 0.0851 & 0.0830 & 0.0789 & 0.0952 & 0.0855 & \textbf{0.0981 } & 0.0929 & \textbf{0.1035 } & 0.0880 & \textbf{0.1040 } \\
& Brier score & 0.2345 & 0.1816 & 0.2186 & 0.1746 & 0.1875 & 0.1727 & 0.1875 & 0.1741 & 0.2117 & \textbf{0.1589 } & 0.1766 & 0.1760 & \textbf{0.1667 } & \textbf{0.1713 } & 0.2083 & 0.1717 \\
\bottomrule
\end{tabular}
}
\end{sidewaystable}
\begin{sidewaystable}
\centering
\caption{RF results (Number of base estimators=150)}
\label{rf_results}
\resizebox{\linewidth}{!}{%
\begin{tabular}{clccccccccccccccc}
\toprule
\multirow{2}{*}{Imbalance Ratio} & \multicolumn{1}{c}{\multirow{2}{*}{Evaluation measure}} & \multicolumn{15}{c}{Classification technique} \\
\cline{3-17}
& \multicolumn{1}{c}{} & RF\_indv & RF\_posteriori & RF\_priori & RF\_lca & RF\_mcb & RF\_mla & RF\_ola & RF\_rank & RF\_metades & RF\_descluster & RF\_desknn & RF\_desp & RF\_knop & RF\_kne & RF\_knu \\
\hline
\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}1\\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.6233 & 0.5767 & 0.5617 & 0.5676 & 0.5593 & 0.5676 & 0.5618 & 0.5629 & 0.6172 & \textbf{0.6295 } & 0.6166 & \textbf{0.6238 } & 0.6175 & \textbf{0.6263 } & 0.6196 \\
& AUC & 0.6934 & 0.5827 & 0.5565 & 0.5807 & 0.5558 & 0.5807 & 0.5592 & 0.5601 & 0.6832 & 0.6842 & 0.6824 & \textbf{0.6891 } & \textbf{0.6898 } & 0.5792 & \textbf{0.6941 } \\
& F1 & 0.3435 & 0.2931 & 0.2710 & 0.2914 & 0.2706 & 0.2914 & 0.2734 & 0.2741 & 0.3362 & 0.3390 & 0.3354 & \textbf{0.3421 } & \textbf{0.3419 } & 0.2867 & \textbf{0.3434 } \\
& G-mean & 0.6397 & 0.5827 & 0.5564 & 0.5804 & 0.5558 & 0.5804 & 0.5592 & 0.5601 & 0.6317 & 0.6339 & 0.6309 & \textbf{0.6381 } & \textbf{0.6383 } & 0.5724 & \textbf{0.6399 } \\
& H\_measure & 0.1151 & 0.0281 & 0.0132 & 0.0267 & 0.0128 & 0.0267 & 0.0144 & 0.0149 & 0.1049 & 0.1038 & 0.1022 & \textbf{0.1108 } & \textbf{0.1109 } & 0.0305 & \textbf{0.1177 } \\
& Brier score & 0.2227 & 0.4233 & 0.4383 & 0.4324 & 0.4407 & 0.4324 & 0.4382 & 0.4371 & 0.2317 & \textbf{0.2258 } & \textbf{0.2256 } & \textbf{0.2237 } & 0.2581 & 0.3476 & 0.2371 \\
\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}2\\~\\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.8017 & 0.7138 & 0.6539 & 0.7149 & 0.6538 & 0.7149 & 0.6523 & 0.6520 & \textbf{0.7989 } & 0.7966 & 0.7876 & 0.7975 & \textbf{0.8038 } & 0.7339 & \textbf{0.7992 } \\
& AUC & 0.6908 & 0.5711 & 0.5547 & 0.5731 & 0.5515 & 0.5731 & 0.5516 & 0.5530 & 0.6857 & 0.6808 & 0.6801 & \textbf{0.6881 } & \textbf{0.6879 } & 0.5691 & \textbf{0.6919 } \\
& F1 & 0.3028 & 0.2763 & 0.2618 & 0.2788 & 0.2582 & 0.2788 & 0.2585 & 0.2601 & 0.2814 & 0.2950 & \textbf{0.2992 } & \textbf{0.3000 } & 0.2907 & 0.2592 & \textbf{0.3056 } \\
& G-mean & 0.5085 & 0.5338 & \textbf{0.5364 } & \textbf{0.5364 } & 0.5320 & \textbf{0.5364 } & 0.5327 & 0.5348 & 0.4866 & 0.5040 & 0.5161 & 0.5089 & 0.4929 & 0.5032 & 0.5139 \\
& H\_measure & 0.1138 & 0.0274 & 0.0140 & 0.0289 & 0.0125 & 0.0289 & 0.0125 & 0.0131 & 0.1072 & 0.1026 & 0.1012 & \textbf{0.1111 } & \textbf{0.1114 } & 0.0293 & \textbf{0.1159 } \\
& Brier score & 0.1535 & 0.2862 & 0.3461 & 0.2851 & 0.3462 & 0.2851 & 0.3477 & 0.3480 & 0.1493 & 0.1563 & 0.1561 & 0.1538 & \textbf{0.1423 } & 0.2527 & \textbf{0.1422 } \\
\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}3\\~\\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.8370 & 0.7677 & 0.7002 & 0.7691 & 0.6991 & 0.7691 & 0.7008 & 0.6986 & \textbf{0.8394 } & 0.8347 & 0.8301 & 0.8360 & \textbf{0.8408 } & 0.7813 & \textbf{0.8366 } \\
& AUC & 0.6892 & 0.5498 & 0.5504 & 0.5542 & 0.5497 & 0.5542 & 0.5504 & 0.5471 & \textbf{0.6885 } & 0.6785 & 0.6759 & \textbf{0.6860 } & 0.6848 & 0.5568 & \textbf{0.6917 } \\
& F1 & 0.1785 & 0.2346 & \textbf{0.2503 } & 0.2421 & \textbf{0.2496 } & 0.2421 & \textbf{0.2502 } & 0.2461 & 0.1606 & 0.1899 & 0.2044 & 0.1909 & 0.1617 & 0.2300 & 0.1885 \\
& G-mean & 0.3389 & 0.4542 & \textbf{0.5075 } & 0.4623 & \textbf{0.5070 } & 0.4623 & \textbf{0.5071 } & 0.5028 & 0.3166 & 0.3535 & 0.3736 & 0.3536 & 0.3166 & 0.4399 & 0.3504 \\
& H\_measure & 0.1119 & 0.0190 & 0.0137 & 0.0223 & 0.0133 & 0.0223 & 0.0137 & 0.0120 & \textbf{0.1110 } & 0.0996 & 0.0974 & \textbf{0.1088 } & 0.1083 & 0.0307 & \textbf{0.1152 } \\
& Brier score & 0.1325 & 0.2323 & 0.2998 & 0.2309 & 0.3009 & 0.2309 & 0.2992 & 0.3014 & \textbf{0.1306 } & 0.1347 & 0.1349 & 0.1326 & \textbf{0.1288 } & 0.2062 & \textbf{0.1255 } \\
\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}4\\~\\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.8485 & 0.8002 & 0.7262 & 0.8005 & 0.7258 & 0.8005 & 0.7268 & 0.7273 & \textbf{0.8492 } & 0.8453 & 0.8444 & 0.8472 & \textbf{0.8496 } & 0.8050 & \textbf{0.8481 } \\
& AUC & 0.6867 & 0.5434 & 0.5422 & 0.5425 & 0.5407 & 0.5425 & 0.5413 & 0.5434 & \textbf{0.6867 } & 0.6761 & 0.6732 & \textbf{0.6849 } & 0.6815 & 0.5580 & \textbf{0.6898 } \\
& F1 & 0.0992 & 0.2094 & \textbf{0.2332 } & 0.2071 & 0.2309 & 0.2071 & \textbf{0.2316 } & \textbf{0.2348 } & 0.0917 & 0.1089 & 0.1239 & 0.1027 & 0.0750 & 0.1885 & 0.1028 \\
& G-mean & 0.2355 & 0.4025 & \textbf{0.4750 } & 0.3996 & 0.4723 & 0.3996 & \textbf{0.4726 } & \textbf{0.4764 } & 0.2252 & 0.2500 & 0.2694 & 0.2408 & 0.2017 & 0.3744 & 0.2404 \\
& H\_measure & 0.1101 & 0.0200 & 0.0110 & 0.0193 & 0.0102 & 0.0193 & 0.0105 & 0.0116 & \textbf{0.1104 } & 0.0977 & 0.0951 & \textbf{0.1082 } & 0.1060 & 0.0302 & \textbf{0.1141 } \\
& Brier score & 0.1242 & 0.1998 & 0.2738 & 0.1995 & 0.2742 & 0.1995 & 0.2732 & 0.2727 & \textbf{0.1238 } & 0.1262 & 0.1263 & \textbf{0.1243 } & 0.1302 & 0.1805 & \textbf{0.1239 } \\
\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}5\\~\\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.8508 & 0.8148 & 0.7467 & 0.8166 & 0.7451 & 0.8166 & 0.7471 & 0.7460 & \textbf{0.8508 } & 0.8493 & 0.8486 & 0.8506 & \textbf{0.8511 } & 0.8193 & \textbf{0.8508 } \\
& AUC & 0.6851 & 0.5333 & 0.5410 & 0.5378 & 0.5394 & 0.5378 & 0.5376 & 0.5371 & \textbf{0.6851 } & 0.6717 & 0.6710 & \textbf{0.6827 } & 0.6791 & 0.5643 & \textbf{0.6888 } \\
& F1 & 0.0462 & 0.1758 & \textbf{0.2254 } & 0.1862 & \textbf{0.2233 } & 0.1862 & \textbf{0.2196 } & 0.2191 & 0.0452 & 0.0582 & 0.0678 & 0.0549 & 0.0308 & 0.1653 & 0.0500 \\
& G-mean & 0.1556 & 0.3525 & \textbf{0.4550 } & 0.3635 & \textbf{0.4533 } & 0.3635 & \textbf{0.4476 } & 0.4475 & 0.1539 & 0.1764 & 0.1916 & 0.1705 & 0.1261 & 0.3368 & 0.1622 \\
& H\_measure & 0.1085 & 0.0156 & 0.0116 & 0.0196 & 0.0107 & 0.0196 & 0.0099 & 0.0096 & \textbf{0.1086 } & 0.0936 & 0.0925 & \textbf{0.1066 } & 0.1027 & 0.0342 & \textbf{0.1129 } \\
& Brier score & 0.1209 & 0.1852 & 0.2533 & 0.1834 & 0.2549 & 0.1834 & 0.2529 & 0.2540 & 0.1209 & 0.1229 & 0.1230 & 0.1212 & 0.1339 & 0.1648 & 0.1261 \\
\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}5.8\\~\\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.8519 & 0.8262 & 0.7572 & 0.8237 & 0.7567 & 0.8237 & 0.7566 & 0.7563 & \textbf{0.8520 } & 0.8505 & 0.8509 & 0.8518 & \textbf{0.8519 } & 0.8280 & \textbf{0.8519 } \\
& AUC & 0.6829 & 0.5267 & 0.5386 & 0.5267 & 0.5358 & 0.5267 & 0.5351 & 0.5356 & \textbf{0.6829 } & 0.6701 & 0.6698 & \textbf{0.6813 } & 0.6759 & 0.5660 & \textbf{0.6871 } \\
& F1 & 0.0369 & 0.1469 & \textbf{0.2177 } & 0.1494 & \textbf{0.2129 } & 0.1494 & 0.2116 & \textbf{0.2127 } & 0.0370 & 0.0454 & 0.0540 & 0.0411 & 0.0216 & 0.1496 & 0.0384 \\
& G-mean & 0.1382 & 0.3100 & \textbf{0.4398 } & 0.3147 & \textbf{0.4341 } & 0.3147 & 0.4326 & \textbf{0.4341 } & 0.1382 & 0.1543 & 0.1690 & 0.1460 & 0.1050 & 0.3120 & 0.1410 \\
& H\_measure & 0.1071 & 0.0133 & 0.0111 & 0.0126 & 0.0096 & 0.0126 & 0.0093 & 0.0095 & \textbf{0.1072 } & 0.0943 & 0.0940 & \textbf{0.1059 } & 0.1015 & 0.0360 & \textbf{0.1120 } \\
& Brier score & 0.1198 & 0.1738 & 0.2428 & 0.1763 & 0.2433 & 0.1763 & 0.2434 & 0.2437 & \textbf{0.1198 } & \textbf{0.1214 } & 0.1215 & \textbf{0.1200 } & 0.1360 & 0.1562 & 0.1284 \\
\bottomrule
\end{tabular}
}
\end{sidewaystable}
\begin{sidewaystable}
\centering
\caption{MLP results (Number of base estimators=80)}
\label{MLP_results}
\resizebox{\linewidth}{!}{%
\begin{tabular}{clcccccccccccccccc}
\cmidrule[\heavyrulewidth]{1-18}
\multirow{2}{*}{Imbalance Ratio} & \multirow{2}{*}{Evaluation measure} & \multicolumn{16}{c}{Classification technique} \\
\cline{3-18}
& & \multicolumn{1}{l}{MLP\_indv} & \multicolumn{1}{l}{MLP\_bag} & \multicolumn{1}{l}{MLP\_posteriori} & \multicolumn{1}{l}{MLP\_priori} & \multicolumn{1}{l}{MLP\_lca} & \multicolumn{1}{l}{MLP\_mcb} & \multicolumn{1}{l}{MLP\_mla} & \multicolumn{1}{l}{MLP\_ola} & \multicolumn{1}{l}{MLP\_rank} & \multicolumn{1}{l}{MLP\_metades} & \multicolumn{1}{l}{MLP\_descluster} & \multicolumn{1}{l}{MLP\_desknn} & \multicolumn{1}{l}{MLP\_desp} & \multicolumn{1}{l}{MLP\_knop} & \multicolumn{1}{l}{MLP\_kne} & \multicolumn{1}{l}{MLP\_knu} \\
\hline
\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}1\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.6108 & 0.6225 & 0.6065 & 0.6022 & 0.6000 & 0.5846 & 0.6000 & 0.5870 & 0.5831 & 0.6229 & 0.6220 & \textbf{0.6271} & \textbf{0.6327} & \textbf{0.6251} & 0.6195 & 0.6235 \\
& AUC & 0.6367 & 0.6758 & 0.6465 & 0.6378 & 0.6321 & 0.6153 & 0.6321 & 0.6248 & 0.6184 & 0.6714 & \textbf{0.6760} & \textbf{0.6778} & 0.6759 & 0.6744 & 0.6310 & \textbf{0.6791} \\
& F1 & 0.3091 & 0.3346 & 0.3204 & 0.3082 & 0.3099 & 0.2902 & 0.3099 & 0.2969 & 0.2917 & 0.3291 & 0.3323 & \textbf{0.3330} & 0.3295 & \textbf{0.3335} & 0.2978 & \textbf{0.3350} \\
& G-mean & 0.6008 & 0.6296 & 0.6140 & 0.6001 & 0.6022 & 0.5795 & 0.6022 & 0.5873 & 0.5813 & 0.6231 & 0.6268 & \textbf{0.6272} & 0.6224 & \textbf{0.6280} & 0.5866 & \textbf{0.6298} \\
& H\_measure & 0.0617 & 0.0965 & 0.0693 & 0.0621 & 0.0576 & 0.0459 & 0.0576 & 0.0528 & 0.0491 & 0.0922 & 0.0969 & \textbf{0.0974} & \textbf{0.0989} & 0.0947 & 0.0562 & \textbf{0.1002} \\
& Brier score & 0.2493 & 0.2303 & 0.3049 & 0.2849 & 0.2963 & 0.2841 & 0.2963 & 0.2838 & 0.2837 & \textbf{0.2310} & \textbf{0.2315} & \textbf{0.2312} & 0.2326 & 0.2566 & 0.2650 & 0.2531 \\
\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}2\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.7569 & 0.7845 & 0.7820 & 0.7435 & 0.7674 & 0.7024 & 0.7674 & 0.7078 & 0.6996 & 0.7848 & \textbf{0.7849} & 0.7846 & 0.7825 & \textbf{0.7873} & 0.7350 & \textbf{0.7856} \\
& AUC & 0.6626 & 0.6812 & 0.6328 & 0.6379 & 0.6290 & 0.6219 & 0.6290 & 0.6216 & 0.6198 & \textbf{0.6806} & 0.6787 & \textbf{0.6804} & \textbf{0.6789} & 0.6692 & 0.6361 & 0.6778 \\
& F1 & 0.2962 & 0.2974 & 0.2881 & 0.2904 & 0.2915 & 0.2788 & 0.2915 & 0.2764 & 0.2758 & \textbf{0.2937} & \textbf{0.2948} & 0.2909 & 0.2928 & 0.2934 & 0.2675 & \textbf{0.2964} \\
& G-mean & 0.5345 & 0.5164 & 0.5075 & 0.5358 & 0.5220 & \textbf{0.5418} & 0.5220 & \textbf{0.5366} & \textbf{0.5393} & 0.5120 & 0.5132 & 0.5088 & 0.5126 & 0.5096 & 0.5127 & 0.5144 \\
& H\_measure & 0.0840 & 0.1032 & 0.0685 & 0.0654 & 0.0624 & 0.0501 & 0.0624 & 0.0500 & 0.0477 & \textbf{0.1026} & 0.1006 & 0.1008 & \textbf{0.1028} & 0.0953 & 0.0590 & \textbf{0.1022} \\
& Brier score & 0.1684 & 0.1561 & 0.1761 & 0.1847 & 0.1820 & 0.2022 & 0.1820 & 0.2005 & 0.2040 & 0.1561 & \textbf{0.1544} & 0.1558 & 0.1573 & \textbf{0.1525} & 0.1892 & \textbf{0.1521} \\
\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}3\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.8062 & 0.8293 & 0.8259 & 0.7962 & 0.8197 & 0.7604 & 0.8197 & 0.7661 & 0.7568 & 0.8296 & \textbf{0.8299} & 0.8295 & 0.8274 & \textbf{0.8313} & 0.7849 & \textbf{0.8310} \\
& AUC & 0.6544 & 0.6796 & 0.6013 & 0.6237 & 0.6118 & 0.6261 & 0.6117 & 0.6266 & 0.6266 & \textbf{0.6796} & \textbf{0.6777} & \textbf{0.6784} & 0.6772 & 0.6410 & 0.6426 & 0.6576 \\
& F1 & 0.2353 & 0.2087 & 0.2023 & 0.2304 & 0.1915 & \textbf{0.2566} & 0.1915 & \textbf{0.2507} & \textbf{0.2522} & 0.2061 & 0.2018 & 0.2048 & 0.2024 & 0.1990 & 0.2371 & 0.2088 \\
& G-mean & 0.4279 & 0.3791 & 0.3747 & 0.4299 & 0.3672 & \textbf{0.4851} & 0.3672 & \textbf{0.4745} & \textbf{0.4820} & 0.3759 & 0.0994 & 0.3746 & 0.3736 & 0.3665 & 0.4459 & 0.3779 \\
& H\_measure & 0.0773 & 0.1013 & 0.0475 & 0.0548 & 0.0481 & 0.0526 & 0.0481 & 0.0538 & 0.0536 & \textbf{0.1013} & \textbf{0.0994} & 0.0988 & \textbf{0.1003} & 0.0773 & 0.0648 & 0.0889 \\
& Brier score & 0.1437 & 0.1335 & 0.1470 & 0.1535 & 0.1480 & 0.1693 & 0.1480 & 0.1659 & 0.1698 & 0.1335 & \textbf{0.1321} & 0.1331 & 0.1336 & \textbf{0.1327} & 0.1585 & \textbf{0.1314} \\
\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}4\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.8338 & 0.8443 & 0.8426 & 0.8230 & 0.8344 & 0.7954 & 0.8344 & 0.7979 & 0.7931 & 0.8445 & \textbf{0.8446} & 0.8434 & 0.8432 & \textbf{0.8460} & 0.8143 & \textbf{0.8452} \\
& AUC & 0.6588 & 0.6835 & 0.5990 & 0.6256 & 0.6150 & 0.6340 & 0.6150 & 0.6346 & 0.6375 & \textbf{0.6835} & \textbf{0.6821} & \textbf{0.6819} & 0.6798 & 0.6227 & 0.6576 & 0.6369 \\
& F1 & 0.1765 & 0.1233 & 0.1244 & 0.1812 & 0.1419 & \textbf{0.2218} & 0.1419 & \textbf{0.2200} & \textbf{0.2260} & 0.1242 & 0.1186 & 0.1192 & 0.1248 & 0.1135 & 0.1938 & 0.1229 \\
& G-mean & 0.3392 & 0.2687 & 0.2713 & 0.3529 & 0.2982 & \textbf{0.4205} & 0.2982 & \textbf{0.4166} & \textbf{0.4270} & 0.2697 & 0.2627 & 0.2642 & 0.2714 & 0.2555 & 0.3739 & 0.2677 \\
& H\_measure & 0.0822 & 0.1058 & 0.0438 & 0.0575 & 0.0491 & 0.0594 & 0.0491 & 0.0603 & 0.0611 & \textbf{0.1058} & \textbf{0.1039} & 0.1016 & \textbf{0.1036} & 0.0690 & 0.0767 & 0.0784 \\
& Brier score & 0.1305 & 0.1238 & 0.1349 & 0.1387 & 0.1380 & 0.1496 & 0.1380 & 0.1484 & 0.1505 & \textbf{0.1238} & \textbf{0.1231} & \textbf{0.1238} & 0.1242 & 0.1284 & 0.1407 & 0.1273 \\
\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}5\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.8403 & 0.8484 & 0.8472 & 0.8352 & 0.8445 & 0.8140 & 0.8445 & 0.8149 & 0.8105 & \textbf{0.8483} & 0.8480 & 0.8480 & 0.8482 & \textbf{0.8489} & 0.8261 & \textbf{0.8488} \\
& AUC & 0.6591 & 0.6849 & 0.5939 & 0.6203 & 0.6167 & 0.6386 & 0.6168 & 0.6407 & 0.6447 & \textbf{0.6849} & \textbf{0.6833} & \textbf{0.6827} & 0.6818 & 0.5968 & 0.6620 & 0.6141 \\
& F1 & 0.1323 & 0.0891 & 0.0781 & 0.1368 & 0.0988 & \textbf{0.1920} & 0.0988 & \textbf{0.1869} & \textbf{0.2040} & 0.0867 & 0.0813 & 0.0765 & 0.0836 & 0.0789 & 0.1691 & 0.0810 \\
& G-mean & 0.2824 & 0.2222 & 0.2075 & 0.2915 & 0.2374 & \textbf{0.3721} & 0.2374 & \textbf{0.3656} & \textbf{0.3887} & 0.2189 & 0.2116 & 0.2047 & 0.2147 & 0.2077 & 0.3365 & 0.2108 \\
& H\_measure & 0.0824 & 0.1075 & 0.0380 & 0.0549 & 0.0505 & 0.0635 & 0.0505 & 0.0645 & 0.0680 & \textbf{0.1075} & \textbf{0.1056} & 0.1037 & \textbf{0.1048} & 0.0569 & 0.0815 & 0.0634 \\
& Brier score & 0.1268 & 0.1210 & 0.1323 & 0.1333 & 0.1320 & 0.1408 & 0.1320 & 0.1401 & 0.1416 & \textbf{0.1210} & \textbf{0.1209} & \textbf{0.1212} & 0.1214 & 0.1293 & 0.1340 & 0.1283 \\
\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}5.8\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.8398 & 0.8494 & 0.8490 & 0.8391 & 0.8462 & 0.8234 & 0.8462 & 0.8226 & 0.8181 & 0.8494 & \textbf{0.8498} & 0.8491 & 0.8492 & \textbf{0.8499} & 0.8305 & \textbf{0.8499} \\
& AUC & 0.6607 & 0.6837 & 0.5995 & 0.6230 & 0.6090 & 0.6450 & 0.6090 & 0.6391 & 0.6412 & \textbf{0.6837} & \textbf{0.6811} & \textbf{0.6823} & 0.6805 & 0.5894 & 0.6653 & 0.6044 \\
& F1 & 0.1347 & 0.0656 & 0.0478 & 0.1135 & 0.0657 & \textbf{0.1850} & 0.0657 & \textbf{0.1756} & \textbf{0.1808} & 0.0642 & 0.0630 & 0.0620 & 0.0694 & 0.0596 & 0.1438 & 0.0631 \\
& G-mean & 0.2857 & 0.1879 & 0.1592 & 0.2599 & 0.1896 & \textbf{0.3569} & 0.1896 & \textbf{0.3467} & \textbf{0.3560} & 0.1858 & 0.1837 & 0.1826 & 0.1937 & 0.1783 & 0.3033 & 0.1838 \\
& H\_measure & 0.0826 & 0.1062 & 0.0399 & 0.0542 & 0.0436 & 0.0699 & 0.0436 & 0.0648 & 0.0655 & \textbf{0.1062} & \textbf{0.1034} & 0.1030 & \textbf{0.1046} & 0.0490 & 0.0850 & 0.0557 \\
& Brier score & 0.1277 & 0.1204 & 0.1306 & 0.1317 & 0.1308 & 0.1360 & 0.1308 & 0.1363 & 0.1378 & \textbf{0.1204} & \textbf{0.1206} & \textbf{0.1206} & 0.1208 & 0.1300 & 0.1310 & 0.1290 \\
\bottomrule
\end{tabular}
}
\end{sidewaystable}
\begin{sidewaystable}
\centering
\caption{Heterogeneous pool of classifiers results }
\label{het_results}
\resizebox{\linewidth}{!}{%
\begin{tabular}{clccccccccccccccc}
\toprule
\multirow{2}{*}{Imbalance Ratio} & \multicolumn{1}{c}{\multirow{2}{*}{Evaluation measure}} & \multicolumn{15}{c}{Classification technique} \\
\cline{3-17}
& \multicolumn{1}{c}{} & Pool & Pool\_posteriori & Pool\_priori & Pool\_lca & Pool\_mcb & Pool\_mla & Pool\_ola & Pool\_rank & Pool\_metades & Pool\_descluster & Pool\_desknn & Pool\_desp & Pool\_knop & Pool\_kne & Pool\_knu \\
\hline
\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}1\\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.7406 & \textbf{0.8137 } & 0.6692 & 0.6330 & 0.6497 & 0.6330 & 0.6373 & 0.6478 & 0.6637 & 0.6623 & \textbf{0.7307 } & \textbf{0.6885 } & 0.6552 & 0.6755 & 0.6632 \\
& AUC & 0.6856 & 0.5353 & 0.6360 & 0.6654 & 0.6344 & 0.6654 & 0.6485 & 0.6222 & 0.6623 & \textbf{0.6933 } & 0.6713 & 0.6614 & \textbf{0.6736 } & 0.6277 & \textbf{0.6813 } \\
& F1 & 0.3310 & 0.2526 & 0.3274 & 0.3389 & 0.3257 & 0.3389 & 0.3262 & 0.3133 & 0.3349 & \textbf{0.3501 } & 0.3282 & 0.3377 & \textbf{0.3429 } & 0.3211 & \textbf{0.3462 } \\
& G-mean & 0.5861 & 0.4415 & 0.6124 & 0.6333 & 0.6151 & 0.6334 & 0.6179 & 0.6007 & 0.6228 & \textbf{0.6413 } & 0.5884 & 0.6187 & \textbf{0.6343 } & 0.6030 & \textbf{0.6364 } \\
& H\_measure & 0.1094 & 0.0513 & 0.0795 & 0.0928 & 0.0748 & 0.0929 & 0.0800 & 0.0652 & 0.0880 & \textbf{0.1180 } & 0.0972 & 0.0926 & \textbf{0.0974 } & 0.0743 & \textbf{0.1060 } \\
& Brier score & 0.1813 & \textbf{0.1713 } & 0.2152 & 0.2219 & 0.2193 & 0.2219 & 0.2217 & 0.2193 & 0.1998 & 0.2110 & \textbf{0.1941 } & 0.1983 & 0.2011 & 0.2075 & \textbf{0.1914 } \\
\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}2\\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.7773 & 0.7522 & 0.7733 & 0.7975 & 0.7674 & 0.7975 & 0.7685 & 0.7555 & \textbf{0.8016 } & \textbf{0.8248 } & \textbf{0.8142 } & 0.7983 & 0.7995 & 0.7720 & 0.7982 \\
& AUC & 0.6878 & 0.6488 & 0.6598 & 0.6778 & 0.6680 & 0.6779 & 0.6720 & 0.6694 & \textbf{0.6882 } & \textbf{0.6913 } & 0.6812 & \textbf{0.6832 } & 0.6733 & 0.6756 & 0.6764 \\
& F1 & 0.3267 & \textbf{0.3288 } & 0.3030 & 0.2951 & 0.3041 & 0.2953 & 0.2996 & 0.3005 & 0.3031 & 0.2492 & 0.2684 & 0.2968 & \textbf{0.3061 } & 0.3016 & \textbf{0.3067 } \\
& G-mean & 0.5561 & \textbf{0.5763 } & 0.5313 & 0.5033 & \textbf{0.5369 } & 0.5035 & 0.5308 & \textbf{0.5405 } & 0.5089 & 0.4279 & 0.4587 & 0.5046 & 0.5141 & 0.5306 & 0.5159 \\
& H\_measure & 0.1123 & 0.0872 & 0.0900 & 0.1009 & 0.0919 & 0.1010 & 0.0937 & 0.0912 & \textbf{0.1118 } & \textbf{0.1178 } & \textbf{0.1056 } & 0.1055 & 0.1032 & 0.0977 & 0.1043 \\
& Brier score & 0.1543 & 0.2212 & 0.1759 & 0.1584 & 0.1716 & 0.1585 & 0.1655 & 0.1739 & \textbf{0.1537 } & \textbf{0.1425 } & 0.1596 & 0.1597 & 0.1563 & 0.1723 & \textbf{0.1555 } \\
\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}3\\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.8138 & 0.7610 & 0.8110 & \textbf{0.8357 } & 0.8076 & 0.8356 & 0.8092 & 0.7928 & 0.8312 & \textbf{0.8477 } & \textbf{0.8384 } & 0.8278 & 0.8325 & 0.8028 & 0.8318 \\
& AUC & 0.6837 & 0.6674 & 0.6659 & 0.6665 & 0.6684 & 0.6664 & 0.6641 & 0.6652 & \textbf{0.6833 } & \textbf{0.6849 } & 0.6779 & \textbf{0.6791 } & 0.6618 & 0.6719 & 0.6698 \\
& F1 & 0.2762 & \textbf{0.3226 } & 0.2388 & 0.1811 & 0.2433 & 0.1810 & 0.2298 & \textbf{0.2569 } & 0.1945 & 0.0918 & 0.1574 & 0.1995 & 0.1955 & \textbf{0.2459 } & 0.2013 \\
& G-mean & 0.4678 & \textbf{0.5631 } & 0.4282 & 0.3428 & 0.4360 & 0.3428 & 0.4194 & \textbf{0.4632 } & 0.3616 & 0.2261 & 0.3136 & 0.3701 & 0.3617 & \textbf{0.4428 } & 0.3687 \\
& H\_measure & 0.1085 & 0.0952 & 0.0918 & 0.0905 & 0.0918 & 0.0904 & 0.0865 & 0.0867 & \textbf{0.1078 } & \textbf{0.1098 } & 0.1008 & \textbf{0.1010 } & 0.0919 & 0.0932 & 0.0989 \\
& Brier score & 0.1364 & 0.2202 & 0.1514 & \textbf{0.1332 } & 0.1515 & \textbf{0.1333 } & 0.1460 & 0.1603 & 0.1355 & \textbf{0.1250 } & 0.1395 & 0.1424 & 0.1347 & 0.1594 & 0.1337 \\
\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}4\\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.8354 & 0.7806 & 0.8310 & \textbf{0.8474 } & 0.8256 & \textbf{0.8474 } & 0.8284 & 0.8125 & 0.8456 & \textbf{0.8512 } & 0.8469 & 0.8388 & 0.8456 & 0.8172 & 0.8454 \\
& AUC & 0.6820 & 0.6663 & 0.6594 & 0.6612 & 0.6654 & 0.6613 & 0.6643 & 0.6640 & \textbf{0.6819 } & \textbf{0.6839 } & 0.6758 & \textbf{0.6785 } & 0.6506 & 0.6741 & 0.6574 \\
& F1 & 0.1944 & \textbf{0.3079 } & 0.1812 & 0.0836 & 0.1838 & 0.0836 & 0.1771 & \textbf{0.2136 } & 0.1267 & 0.0273 & 0.0812 & 0.1347 & 0.1179 & \textbf{0.1957 } & 0.1211 \\
& G-mean & 0.3580 & \textbf{0.5316 } & 0.3467 & 0.2152 & 0.3540 & 0.2152 & 0.3440 & \textbf{0.3983 } & 0.2720 & 0.1185 & 0.2121 & 0.2864 & 0.2611 & \textbf{0.3741 } & 0.2653 \\
& H\_measure & 0.1072 & 0.0948 & 0.0863 & 0.0855 & 0.0899 & 0.0856 & 0.0881 & 0.0872 & \textbf{0.1071 } & \textbf{0.1104 } & 0.0992 & \textbf{0.1012 } & 0.0832 & 0.0961 & 0.0905 \\
& Brier score & 0.1273 & 0.2022 & 0.1386 & \textbf{0.1247 } & 0.1391 & \textbf{0.1247 } & 0.1362 & 0.1507 & 0.1270 & \textbf{0.1199 } & 0.1288 & 0.1330 & 0.1262 & 0.1493 & 0.1251 \\
\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}5\\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.8438 & 0.7872 & 0.8373 & 0.8496 & 0.8356 & \textbf{0.8496 } & 0.8369 & 0.8193 & 0.8485 & \textbf{0.8519 } & 0.8496 & 0.8443 & 0.8492 & 0.8225 & 0.8491 \\
& AUC & 0.6820 & 0.6681 & 0.6559 & 0.6564 & 0.6626 & 0.6564 & 0.6571 & 0.6589 & \textbf{0.6806 } & \textbf{0.6823 } & 0.6732 & \textbf{0.6759 } & 0.6362 & 0.6725 & 0.6439 \\
& F1 & 0.1423 & \textbf{0.3060 } & 0.1242 & 0.0477 & 0.1403 & 0.0477 & 0.1253 & \textbf{0.1929 } & 0.0729 & 0.0152 & 0.0453 & 0.0904 & 0.0689 & \textbf{0.1791 } & 0.0680 \\
& G-mean & 0.2919 & \textbf{0.5242 } & 0.2746 & 0.1588 & 0.2954 & 0.1588 & 0.2762 & \textbf{0.3693 } & 0.1993 & 0.0878 & 0.1546 & 0.2263 & 0.1930 & \textbf{0.3509 } & 0.1917 \\
& H\_measure & 0.1073 & 0.0958 & 0.0816 & 0.0797 & 0.0878 & 0.0797 & 0.0804 & 0.0814 & \textbf{0.1048 } & \textbf{0.1075 } & 0.0947 & \textbf{0.0970 } & 0.0718 & 0.0944 & 0.0779 \\
& Brier score & 0.1245 & 0.1965 & 0.1331 & 0.1238 & 0.1335 & 0.1238 & 0.1320 & 0.1486 & \textbf{0.1237 } & \textbf{0.1195 } & 0.1254 & 0.1286 & 0.1245 & 0.1467 & \textbf{0.1238 } \\
\multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}5.8\\~\\~\\~\\~\\~\\~\\ \end{tabular}} & Acc & 0.8459 & 0.7968 & 0.8415 & 0.8492 & 0.8383 & 0.8493 & 0.8399 & 0.8232 & 0.8492 & \textbf{0.8518 } & \textbf{0.8505 } & 0.8462 & \textbf{0.8507 } & 0.8267 & 0.8504 \\
& AUC & 0.6813 & 0.6653 & 0.6505 & 0.6582 & 0.6614 & 0.6582 & 0.6619 & 0.6634 & \textbf{0.6805 } & \textbf{0.6815 } & 0.6740 & \textbf{0.6761 } & 0.6363 & 0.6749 & 0.6409 \\
& F1 & 0.1112 & \textbf{0.2932 } & 0.1035 & 0.0493 & 0.1200 & 0.0497 & 0.1246 & \textbf{0.1825 } & 0.0602 & 0.0105 & 0.0356 & 0.0800 & 0.0533 & \textbf{0.1674 } & 0.0558 \\
& G-mean & 0.2527 & \textbf{0.5016 } & 0.2455 & 0.1616 & 0.2687 & 0.1625 & 0.2734 & \textbf{0.3543 } & 0.1797 & 0.0729 & 0.1361 & 0.2108 & 0.1678 & \textbf{0.3340 } & 0.1720 \\
& H\_measure & 0.1065 & 0.0935 & 0.0778 & 0.0807 & 0.0865 & 0.0808 & 0.0853 & 0.0852 & \textbf{0.1048 } & \textbf{0.1065 } & 0.0963 & \textbf{0.0975 } & 0.0704 & 0.0960 & 0.0736 \\
& Brier score & 0.1231 & 0.1868 & 0.1306 & 0.1242 & 0.1316 & 0.1242 & 0.1297 & 0.1460 & \textbf{0.1226 } & \textbf{0.1199 } & 0.1240 & 0.1266 & 0.1244 & 0.1441 & \textbf{0.1239 } \\
\bottomrule
\end{tabular}
}
\end{sidewaystable}
\newpage
\bibliographystyle{apacite}
\section{Introduction}
\label{sec:intro}
Huge orbital angular momenta (OAM) are produced perpendicular to the reaction plane
in non-central high energy heavy ion collisions, and part of such huge OAM are transferred
to the hot and dense matter created in collisions \cite{Liang:2004ph,Liang:2004xn,Voloshin:2004ha,Betz:2007kg,Becattini:2007sr,Gao:2007bc,Huang:2011ru}. Due to the shear of the longitudinal flow, particles with spins can be polarized via the spin-orbit coupling
in particle scatterings \cite{Liang:2004ph,Gao:2007bc,Huang:2011ru,Chen:2008wh}. Such a type of spin polarization with respect to
the reaction plane defined in the global laboratory frame of the collision is called the global polarization and is different from a particle's possible polarization
with respect to its production plane which depends on the particle's momentum \cite{Liang:2004ph}.
The global spin polarization of $\Lambda$ and $\bar{\Lambda}$ has been
measured by the STAR collaboration in Au+Au collisions over a wide range of beam energies, $\sqrt{s_{NN}}=7.7-200$
GeV \cite{STAR:2017ckg,Adam:2018ivw} and by ALICE collaboration in Pb+Pb collisions at 2.76 TeV and 5.02 TeV \cite{Acharya:2019ryw}. The magnitude of the global spin polarization
is about 2\% at 7.7 GeV which decreases to be about 0.3\% at 200 GeV and almost vanishes at LHC energies.
It has been shown that the spin-orbit coupling in microscopic particle scatterings
can lead to the spin-vorticity coupling in a fluid when taking an ensemble average over random
incoming momenta of colliding particles in a locally thermalized fluid \cite{Zhang:2019xya}.
In this way, the spin polarization is linked with the vorticity field in a fluid.
To describe the STAR data on the global polarization of hyperons, the hydrodynamic and
transport models have been used to calculate the vorticity field \cite{Baznat:2013zx,Csernai:2013bqa,Csernai:2014ywa,Becattini:2015ska,Teryaev:2015gxa,Jiang:2016woz,Deng:2016gyh,Ivanov:2017dff,Deng:2020ygd,Li:2017slc,Wei:2018zfb,Shi:2017wpk}.
In hydrodynamic models, the velocity and in turn the vorticity
fields in the fluid can be obtained naturally. In transport
models the phase space evolution of a multi-particle system is described by the Boltzmann transport equation with particle collisions, where the position and momentum of each particle in the system at any time are explicitly known.
To extract the fluid velocity at one space-time point out of randomly distributed momenta in all events,
a suitable coarse-graining method has to be used that can map the transport description into hydrodynamic information~\cite{Jiang:2016woz,Deng:2016gyh}. The vorticity field can then be computed based on the so-obtained fluid velocity. Once the vorticity field is obtained, the
global polarization of hyperons can be calculated from an integral
over the freeze-out hyper-surface, which will be discussed in detail in Sec.~\ref{sec:pola} and Sec.~\ref{sec:pola:num}. The calculations following the above procedure give results on the global polarization that agree with the data \cite{Li:2017slc,Wei:2018zfb,Shi:2017wpk,Karpenko:2016jyx,Xie:2017upb,Sun:2017xhx,Xie:2016fjj}.
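As a simple illustration of such coarse-graining, the toy sketch below averages particle momenta from transport events on a transverse-plane grid and defines the velocity of each cell as its total momentum divided by its total energy. The two-dimensional reduction, the grid parameters and all variable names are assumptions made for brevity and do not correspond to the procedure of any particular transport model.
\begin{verbatim}
import numpy as np

def coarse_grain_velocity(x, y, px, py, E, bins=20, box=10.0):
    # Cell velocity v = sum(p) / sum(E), accumulated over particles
    # collected from many events at a fixed time step.
    rng = [[-box, box], [-box, box]]
    Px, xe, ye = np.histogram2d(x, y, bins=bins, range=rng, weights=px)
    Py, _, _   = np.histogram2d(x, y, bins=bins, range=rng, weights=py)
    Etot, _, _ = np.histogram2d(x, y, bins=bins, range=rng, weights=E)
    vx = np.divide(Px, Etot, out=np.zeros_like(Px), where=Etot > 0)
    vy = np.divide(Py, Etot, out=np.zeros_like(Py), where=Etot > 0)
    return vx, vy, xe, ye
\end{verbatim}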
In this note we will give a brief review of vorticity formation and spin polarization in heavy ion collisions with transport models. We use the Minkowskian metric $g_{\mu\nu}={\rm diag}(1,-1,-1,-1)$ and natural units $k_B=c=\hbar=1$ except for Sec.~\ref{sec:pola} in which $\hbar$ is kept explicitly.
\section{Fluid vorticity}
\label{sec:vort}
\subsection{Non-relativistic case}
\label{subsec:nonrel}
In non-relativistic hydrodynamics, the (kinematic) vorticity is a (pseudo)vector field that describes the local angular velocity of a fluid cell. Mathematically, it is defined as
\begin{eqnarray}
\label{defvor1}
\vec\omega({\vec x},t)=\frac{1}{2}\vec\nabla\times\vec v ({\vec x},t),
\end{eqnarray}
where $\vec v$ is the flow velocity with its three components denoted as $v_i$ ($i=1,2,3$). Sometimes it is also defined without the pre-factor $1/2$ in Eq. (\ref{defvor1}). It can also be written in the tensorial form, $\omega_{ij}=(1/2)(\partial_i v_j-\partial_j v_i)$, so that $\omega_i=(1/2)\epsilon_{ijk}\omega_{jk}$, where $\epsilon_{ijk}$ is the three-dimension anti-symmetric tensor. For an ideal fluid, the flow is governed by the Euler equation which can be written in terms of $\vec\omega$ as,
\begin{eqnarray}
\label{eq:vort}
\frac{\partial\vec \omega}{\partial t}=\vec\nabla\times(\vec v\times\vec\omega).
\end{eqnarray}
This is called the vorticity equation. To arrive at \eq{eq:vort}, we have implicitly assumed the barotropic condition, $\vec\nabla\rho \mathbin{\parallel} \vec\nabla P$, which is satisfied if the pressure $P$ is a function of mass density $\rho$, $P=P(\rho)$. Equation (\ref{eq:vort}) has interesting consequences. Let us define the circulation integral of the velocity field over a loop $l$ co-moving with the fluid,
\begin{equation}
\label{eq:circ}
\Gamma=\oint_l\vec v\cdot d\vec x = 2\int_\Sigma\vec\omega \cdot d\vec\sigma ,
\label{circulation}
\end{equation}
where $\Sigma$ is a surface bounded by $l$ with $d\vec\sigma $ being its infinitesimal area element. Note that the second equality in \eq{circulation} follows from the Stokes theorem. It can be shown from \eq{eq:vort},
\begin{equation}
\frac{d\Gamma}{d\tau}=0,
\end{equation}
with co-moving time derivative $d/d\tau$. This result is called the Helmholtz-Kelvin theorem, which states that the vortex lines move with the fluid. Physically, it is equivalent to angular momentum conservation for a closed fluid filament in the absence of viscosity, as all forces acting on the filament would be normal to it and generate no torque. Another interesting consequence of \eq{eq:vort} is the conservation of the flow helicity \cite{Moffatt:1969,Moreau:1961}
\begin{eqnarray}
\label{eq:heli}
{\cal H}_{\rm f}=\int d^3\vec x\,\vec\omega\cdot\vec v,
\end{eqnarray}
where the integral is over the whole space. Similar to energy, helicity is a quadratic invariant of the Euler equation of an ideal fluid although it is not positive definite.
In the following, we will generalize the notion of the vorticity to relativistic fluids and introduce the relativistic counterpart of the Helmholtz-Kelvin theorem and helicity conservation.
\subsection{Relativistic case}
\label{subsec:rel}
The generalization of vorticity to the relativistic case is not unique, and different definitions can be introduced for different purposes. Here we discuss four types of relativistic vorticity. The first one is called the {\it kinematic vorticity} defined as
\begin{eqnarray}
\label{def:kv}
\omega_{\rm K}^\mu=\frac{1}{2}\epsilon^{\mu\nu\rho\sigma}u_\nu \partial_\rho u_\sigma,
\end{eqnarray}
which is a natural generalization of \eq{defvor1} as its spatial components recover \eq{defvor1} in the non-relativistic limit. In the above, the four-velocity vector is defined by $u^\mu=\gamma(1,\vec v)$ with $\gamma=1/\sqrt{1-\vec v^2}$ being the Lorentz factor. It is more convenient to define the kinematic vorticity tensor,
\begin{eqnarray}
\label{def:kvt}
\omega^{\rm K}_{\mu\nu}=-\frac{1}{2}(\partial_\mu u_\nu-\partial_\nu u_\mu),
\end{eqnarray}
so the kinematic vorticity vector is given by
\begin{equation}
\omega_{\rm K}^\mu=-(1/2)\epsilon^{\mu\nu\rho\sigma}u_\nu\omega^{\rm K}_{\rho\sigma}.
\label{vort-vec}
\end{equation}
Note that the minus sign in \eq{def:kvt} and (\ref{vort-vec}) is just a convention; the vorticity tensor and vector can also be defined without it. In either case (with or without the minus sign), the definition in \eq{def:kv} holds. We note that the relation between the vorticity tensor and vector in \eq{vort-vec} also holds for the other types of vorticity to be discussed below.
The second one is the {\it temperature vorticity} defined as
\begin{eqnarray}
\label{def:tv}
\omega^{\rm T}_{\mu\nu}=-\frac{1}{2}[\partial_\mu (T u_\nu)-\partial_\nu (T u_\mu)],
\end{eqnarray}
where $T$ is the temperature. For ideal neutral fluids, the temperature vorticity is relevant to the relativistic version of the Helmholtz-Kelvin theorem and to helicity conservation~\cite{Becattini:2015ska,Deng:2016gyh}.
For an ideal neutral fluid, we can rewrite the Euler equation as
\begin{eqnarray}
\label{eq:euler}
(\varepsilon+P)\frac{d}{d\tau}u^\mu=\nabla^\mu P,
\end{eqnarray}
with $d/d\tau=u^\mu\partial_\mu$ and $\nabla_\mu=\partial_\mu-u_\mu (d/d\tau)$. With the help of the thermodynamic relation for a neutral fluid, $dP=s\,dT$, the Euler equation (\ref{eq:euler}) can be put into the Carter-Lichnerowicz form,
\begin{eqnarray}
\label{eq:cl}
\omega^{\rm T}_{\mu\nu}u^\nu=0,
\end{eqnarray}
from which the relativistic Helmholtz-Kelvin theorem can be obtained immediately,
\begin{eqnarray}
\label{eq:circ:r2}
\frac{d}{d\tau}\oint T u_\mu dx^\mu =2\oint \omega^{\rm T}_{\mu\nu} u^\mu dx^\nu=0.
\end{eqnarray}
Using \eq{eq:cl} we can also show that the temperature vorticity vector (multiplied by $T$) is conserved,
\begin{eqnarray}
\label{eq:rheli}
\partial_\mu (T\omega _{\rm T}^\mu ) = 4 u^\mu\omega_{\mu\nu}^{\rm T} \omega ^\nu_{\rm T} = 0,
\end{eqnarray}
where $\omega ^\mu_{\rm T}=-(1/2)\epsilon^{\mu\nu\rho\sigma}u_\nu\omega_{\rho\sigma}^{\rm T}$. The conserved charge ${\cal H}_{\rm T}=(1/2)\int d^3\vec x T^2 \gamma^2\vec v\cdot\vec\nabla\times\vec v$ is an extension of the helicity (\ref{eq:heli}) to the relativistic case for an ideal neutral fluid.
The third type is the charged-fluid counterpart of the temperature vorticity which we call the {\it enthalpy vorticity},
\begin{eqnarray}
\label{def:ev}
\omega^{\rm w}_{\mu\nu}=-\frac{1}{2}[\partial_\mu (w u_\nu)-\partial_\nu (w u_\mu)],
\end{eqnarray}
where $w=(\varepsilon+P)/n$ is the enthalpy per particle and $n$ is the charge density. In this case, the Euler equation (\ref{eq:euler}) can be written in the following Carter-Lichnerowicz form
\begin{eqnarray}
\label{eq:cl2}
u^\mu\omega^{\rm w}_{\mu\nu}=\frac{1}{2}T\nabla_\nu(s/n).
\end{eqnarray}
If the flow is isentropic ($s/n$ is constant), we have $u^\mu\omega^{\rm w}_{\mu\nu}=0$, which has the same form as \eq{eq:cl}. Therefore, for an ideal charged fluid with isentropic flow we have a conservation law similar to \eq{eq:circ:r2},
\begin{eqnarray}
\label{eq:circulation-w}
\frac{d}{d\tau}\oint w u_\mu dx^\mu =2\oint \omega^{\rm w}_{\mu\nu} u^\mu dx^\nu=0.
\end{eqnarray}
At the same time, the current $w\omega_{\rm w}^\mu$ is conserved, $\partial_\mu (w\omega_{\rm w}^\mu)=0$, and the corresponding conserved charge is the enthalpy helicity, ${\cal H}_{\rm w}=(1/2)\int d^3\vec x w^2 \gamma^2\vec v\cdot\vec\nabla\times\vec v$~\cite{Deng:2016gyh}.
The fourth vorticity is the {\it thermal vorticity}. It is defined as~\cite{Becattini:2015ska}
\begin{eqnarray}
\label{def:thv}
\omega^{\rm \beta}_{\mu\nu}=-\frac{1}{2}[\partial_\mu (\beta u_\nu)-\partial_\nu (\beta u_\mu)].
\end{eqnarray}
The thermal vorticity has an important property: for a fluid at global equilibrium, the four-vector $\beta_\mu=\beta u_\mu$ is a Killing vector and is given by $\beta_\mu=b_\mu +\omega^{\rm \beta}_{\mu\nu} x^\nu$ with $b_\mu$ and $\omega^{\rm \beta}_{\mu\nu}$ constant. Thus, the thermal vorticity characterizes the global equilibrium of the fluid. In addition, the thermal vorticity is responsible for the local spin polarization of particles in a fluid at global equilibrium, which we will discuss in detail in the next section.
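As a numerical illustration of these definitions, the thermal vorticity tensor in Eq. (\ref{def:thv}) can be evaluated by finite differences of the field $\beta_\mu=u_\mu/T$ on a space-time grid. The Python sketch below uses a toy flow and temperature profile (all numbers are illustrative assumptions, not output of any model quoted in this review); replacing $\beta u_\mu$ by $u_\mu$, $Tu_\mu$ or $wu_\mu$ gives the kinematic, temperature and enthalpy vorticities in the same way.
\begin{verbatim}
import numpy as np

# Space-time grid (t, x, y, z); illustrative sizes only.
Nt, Nx = 8, 32
t = np.linspace(0.5, 4.0, Nt)      # fm
x = np.linspace(-5.0, 5.0, Nx)     # fm
T0, tau_T = 0.3, 6.0               # GeV, fm: toy cooling profile (assumption)
tt, xx, yy, zz = np.meshgrid(t, x, x, x, indexing='ij')
d = [t[1]-t[0]] + [x[1]-x[0]]*3    # grid spacings for (t, x, y, z)

# Toy flow: weak rotation about the y-axis (illustrative, not from data).
Om = 0.05
vx, vy, vz = Om*zz, np.zeros_like(xx), -Om*xx
gamma = 1.0/np.sqrt(1.0 - (vx**2 + vy**2 + vz**2))
u = np.stack([gamma, gamma*vx, gamma*vy, gamma*vz])       # u^mu
T = T0*np.exp(-tt/tau_T)
beta_u = (1.0/T)*u                                        # beta^mu = u^mu / T

# Lower the index with the metric g = diag(+,-,-,-): beta_mu
g = np.array([1.0, -1.0, -1.0, -1.0])
beta_low = g[:, None, None, None, None]*beta_u

# Thermal vorticity, Eq. (def:thv):
# omega^beta_{mu nu} = -(1/2)(d_mu beta_nu - d_nu beta_mu)
omega_beta = np.zeros((4, 4) + tt.shape)
for mu in range(4):
    for nu in range(4):
        d_mu_beta_nu = np.gradient(beta_low[nu], d[mu], axis=mu)
        d_nu_beta_mu = np.gradient(beta_low[mu], d[nu], axis=nu)
        omega_beta[mu, nu] = -0.5*(d_mu_beta_nu - d_nu_beta_mu)

print("typical |omega^beta_{zx}| =", np.abs(omega_beta[3, 1]).mean())
\end{verbatim}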
\section{Spin polarization in a vortical fluid}
\label{sec:pola}
A semi-classical way to describe the space-time evolution of spin
degrees of freedom is through the spin-dependent distribution function.
The quantum theory provides a more rigorous description for the spin
evolution through the Wigner function, a quantum counterpart of the
distribution function. For a relativistic spin-1/2 fermion, one has
to use the covariant Wigner function \cite{Heinz:1983nx,Elze:1986qd,Vasak:1987um,Zhuang:1995pd},
which is a $4\times4$ matrix function of position and momentum. The
covariant Wigner function has become a useful tool for studying the chiral
magnetic and vortical effects and other related phenomena
\cite{Gao:2012ix,Chen:2012ca,Gao:2015zka,Fang:2016vpj,Hidaka:2016yjf,Mueller:2017lzw,Huang:2018wdl,Liu:2018xip}.
The Wigner function carries the same information as the quantum field itself;
the spin information in phase space is therefore fully encoded in the Wigner
function, and the quark polarization can be obtained from its axial-vector component.
The covariant Wigner function for spin-1/2 fermions in an external
electromagnetic field is defined by \cite{Heinz:1983nx,Elze:1986qd,Vasak:1987um,Zhuang:1995pd}
\begin{equation}
W_{\alpha\beta}(x,p)=\frac{1}{(2\pi)^{4}}\int d^{4}ye^{-ip\cdot y}\left\langle \bar{\psi}_{\beta}\left(x+\frac{y}{2}\right)U\left(A;x+\frac{1}{2}y,x-\frac{1}{2}y\right)\psi_{\alpha}\left(x-\frac{y}{2}\right)\right\rangle ,
\end{equation}
where $\psi_{\alpha}$ and $\bar{\psi}_{\beta}$ are the fermionic
field components ($\alpha,\beta=1,2,3,4$ are the spinor indices),
$U(A;x_{2},x_{1})=\exp\left[iQ\int_{x_{1}}^{x_{2}}dx^{\mu}A_{\mu}(x)\right]$
is the gauge link that ensures the gauge invariance of the Wigner function,
with $A_{\mu}$ being the electromagnetic gauge potential, and $\left\langle \hat{O}\right\rangle $
denotes the ensemble average of the operator $\hat{O}$ over thermal
states. As a $4\times4$ complex matrix having 32 real variables,
the Wigner function satisfies $W^{\dagger}=\gamma_{0}W\gamma_{0}$,
which reduces the number of independent variables to 16.
Therefore the Wigner function can be expanded in terms of 16 generators of Clifford
algebra $\{1,\gamma_{5},\gamma^{\mu},\gamma_{5}\gamma^{\mu},\sigma^{\mu\nu}\}$
with $\gamma^{5}\equiv i\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}$
and $\sigma^{\mu\nu}\equiv\frac{i}{2}[\gamma^{\mu},\gamma^{\nu}]$,
\begin{equation}
W=\frac{1}{4}\left(\mathcal{F}+i\gamma^{5}\mathcal{P}+\gamma^{\mu}\mathcal{V}_{\mu}+\gamma^{5}\gamma^{\mu}\mathcal{A}_{\mu}+\frac{1}{2}\sigma^{\mu\nu}\mathcal{S}_{\mu\nu}\right),
\end{equation}
where the coefficients are the scalar ($\mathcal{F}$), pseudoscalar
($\mathcal{P}$), vector ($\mathcal{V}_{\mu}$), axial vector ($\mathcal{A}_{\mu}$)
and tensor ($\mathcal{S}_{\mu\nu}$) components with 1, 1, 4, 4 and
6 independent variables, respectively. Each component of $W$ can be
extracted by multiplying it with the corresponding generator and taking
a trace. These components are all real functions of the phase-space coordinates
and satisfy 32 real equations, 16 of which are redundant.
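The decomposition above can be made concrete with a small numerical check: one builds $W$ from a set of known components and recovers them by tracing against the corresponding generators. The Python sketch below does this for the scalar, pseudoscalar, vector and axial-vector parts (the tensor part is extracted analogously with $\sigma^{\mu\nu}$); the random input numbers are purely illustrative.
\begin{verbatim}
import numpy as np

# Dirac matrices in the Dirac representation.
I2 = np.eye(2)
sig = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]])]
g0 = np.block([[I2, 0*I2], [0*I2, -I2]])
gi = [np.block([[0*I2, s], [-s, 0*I2]]) for s in sig]
gam = [g0] + gi                                    # gamma^mu
g5 = 1j*gam[0] @ gam[1] @ gam[2] @ gam[3]          # gamma_5
sigma = [[0.5j*(gam[m] @ gam[n] - gam[n] @ gam[m]) for n in range(4)]
         for m in range(4)]
metric = np.diag([1., -1., -1., -1.])

# Random real components (illustrative numbers only).
rng = np.random.default_rng(1)
F, P = rng.normal(), rng.normal()
V, A = rng.normal(size=4), rng.normal(size=4)      # V^mu, A^mu (upper index)
S = rng.normal(size=(4, 4)); S = S - S.T           # antisymmetric S^{mu nu}

# Lower indices where the expansion needs V_mu, A_mu, S_{mu nu}.
V_lo, A_lo = metric @ V, metric @ A
S_lo = metric @ S @ metric

# W = (1/4)(F + i g5 P + gamma^mu V_mu + g5 gamma^mu A_mu
#           + (1/2) sigma^{mu nu} S_{mu nu})
W = F*np.eye(4) + 1j*g5*P
for m in range(4):
    W = W + gam[m]*V_lo[m] + g5 @ gam[m]*A_lo[m]
    for n in range(4):
        W = W + 0.5*sigma[m][n]*S_lo[m, n]
W = W/4.0

# Extract each component by tracing against the corresponding generator.
F_out = np.trace(W).real
P_out = (-1j*np.trace(g5 @ W)).real
V_out = np.array([np.trace(gam[m] @ W) for m in range(4)]).real
A_out = np.array([np.trace(gam[m] @ g5 @ W) for m in range(4)]).real
print(np.allclose(F, F_out), np.allclose(P, P_out),
      np.allclose(V, V_out), np.allclose(A, A_out))   # all True
\end{verbatim}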
For massless fermions, the equations for the vector and axial-vector components
decouple from the other components. They can be linearly combined into
right-handed and left-handed vector components; both sectors satisfy
the same set of equations. By solving this set of equations one can derive
the right-handed and left-handed currents, which give the chiral magnetic and vortical effects
in external electromagnetic and vorticity fields
\cite{Gao:2012ix,Chen:2012ca,Gao:2015zka,Hidaka:2016yjf,Huang:2018wdl,Liu:2018xip,Gao:2017gfq,Gao:2018wmr}.
For massive fermions, the equations for the Wigner function components
are all entangled and hard to solve. Fortunately, there is a natural expansion parameter
in these equations, the Planck constant $\hbar$, which sets the
order of the quantum correction. The Wigner function components can thus
be obtained by solving these equations order by order in $\hbar$,
a procedure called the semi-classical expansion
\cite{Weickgenannt:2019dks,Gao:2019znl,Hattori:2019ahi,Wang:2019moi,Liu:2020flb,Sheng:2020oqs,Florkowski:2018ahw,Yang:2020hri,Weickgenannt:2020aaf,Wang:2020pej}.
The Wigner function components at the zero-th order in $\hbar$ are
given by \cite{Weickgenannt:2019dks}
\begin{eqnarray}
\mathcal{F}^{(0)}(x,p) & = & m\delta(p^{2}-m^{2})V^{(0)}(x,p),\nonumber \\
\mathcal{P}^{(0)}(x,p) & = & 0,\nonumber \\
\mathcal{V}_{\mu}^{(0)}(x,p) & = & p_{\mu}\delta(p^{2}-m^{2})V^{(0)}(x,p),\nonumber \\
\mathcal{A}_{\mu}^{(0)}(x,p) & = & mn_{\mu}^{(0)}(x,\mathbf{p})\delta(p^{2}-m^{2})A^{(0)}(x,p),\nonumber \\
\mathcal{S}_{\mu\nu}^{(0)}(x,p) & = & m\Sigma_{\mu\nu}^{(0)}(x,\mathbf{p})\delta(p^{2}-m^{2})A^{(0)}(x,p),\label{eq:zero-th-comp}
\end{eqnarray}
with
\begin{eqnarray}
V^{(0)}(x,p) & \equiv & \frac{2}{(2\pi\hbar)^{3}}\sum_{e,s=\pm}\theta(ep^{0})f_{s}^{(0)e}(x,e\mathbf{p}),\nonumber \\
A^{(0)}(x,p) & \equiv & \frac{2}{(2\pi\hbar)^{3}}\sum_{e,s=\pm}s\theta(ep^{0})f_{s}^{(0)e}(x,e\mathbf{p}),\nonumber \\
n^{(0)\mu}(x,p) & \equiv & \theta(p^{0})n^{+\mu}(x,\mathbf{p})-\theta(-p^{0})n^{-\mu}(x,\mathbf{p}),\nonumber \\
\Sigma_{\mu\nu}^{(0)}(x,p) & = & -\frac{1}{m}\epsilon_{\mu\nu\alpha\beta}p^{\alpha}n^{(0)\beta},\label{eq:v0-a0-n0-sig0}
\end{eqnarray}
where $e=\pm$ denotes particle/antiparticle, $s=\pm$ denotes spin
up/down, and $f_{s}^{(0)e}$ are the distribution functions. In Eq. (\ref{eq:v0-a0-n0-sig0}),
the spin four-vector $n^{(0)\mu}$ is built from $n^{\pm\mu}(x,\mathbf{p})$,
the spin four-vectors for the particle/antiparticle, given by
\begin{eqnarray}
n^{+\mu}(x,\mathbf{p}) & = & \left(\frac{\mathbf{n}^{+}\cdot\mathbf{p}}{m},\mathbf{n}^{+}+\frac{\mathbf{n}^{+}\cdot\mathbf{p}}{m(m+E_{\mathbf{p}})}\mathbf{p}\right),\nonumber \\
n^{-\mu}(x,\mathbf{p}) & = & \left(\frac{\mathbf{n}^{-}\cdot\mathbf{p}}{m},-\mathbf{n}^{-}-\frac{\mathbf{n}^{-}\cdot\mathbf{p}}{m(m+E_{\mathbf{p}})}\mathbf{p}\right),\label{nDef}
\end{eqnarray}
where $\mathbf{n}^{\pm}$ are the spin quantization directions for the particle/antiparticle
in the particle's rest frame. In general, $\mathbf{n}^{+}$ can be
different from $\mathbf{n}^{-}$. We note that $n^{+\mu}(x,\mathbf{p})$
can be obtained by a Lorentz boost from the particle's rest frame
to the lab frame, in which the particle has momentum $\mathbf{p}$,
\begin{eqnarray}
n^{+\mu}(x,\mathbf{p}) & = & \Lambda_{\;\nu}^{\mu}(-\mathbf{v}_{p})n^{+\nu}(\mathbf{0},\mathbf{n}^{+}).\label{eq:n-part}
\end{eqnarray}
Here $\Lambda_{\;\nu}^{\mu}(-\mathbf{v}_{p})$ is the Lorentz transformation
for $\mathbf{v}_{p}=\mathbf{p}/E_{p}$ and $n^{+\nu}(\mathbf{0},\mathbf{n}^{+})=(0,\mathbf{n}^{+})$
is the four-vector of the spin quantization direction in the particle's
rest frame. One can check that $n^{+\mu}(x,\mathbf{p})$ satisfies
$n_{\mu}^{+}n_{+}^{\mu}=-1$ and $n^{+}\cdot p=0$. Similarly $n^{-\mu}(x,\mathbf{p})$
for the antiparticle can be expressed by
\begin{eqnarray}
n^{-\mu}(x,\mathbf{p}) & = & \Lambda_{\;\nu}^{\mu}(\mathbf{v}_{p})n^{-\nu}(\mathbf{0},\mathbf{n}^{-}),\label{eq:n-antipart}
\end{eqnarray}
where $n^{-\nu}(\mathbf{0},\mathbf{n}^{-})=(0,-\mathbf{n}^{-})$.
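The boost relations (\ref{eq:n-part}) and (\ref{eq:n-antipart}) and the explicit form (\ref{nDef}) can be cross-checked numerically. The Python sketch below constructs the boost that takes the rest frame to the lab frame (denoted $\Lambda(-\mathbf{v}_p)$ in the text), applies it to $(0,\mathbf{n}^+)$, and verifies $n^+\cdot n^+=-1$ and $n^+\cdot p=0$; the mass and momentum values are illustrative.
\begin{verbatim}
import numpy as np

def boost_matrix(beta):
    """Lorentz boost for boost velocity beta (3-vector), metric (+,-,-,-).
    Applied to a rest-frame four-vector it gives the lab-frame vector of a
    particle moving with velocity beta."""
    b2 = np.dot(beta, beta)
    if b2 < 1e-14:
        return np.eye(4)
    g = 1.0/np.sqrt(1.0 - b2)
    L = np.eye(4)
    L[0, 0] = g
    L[0, 1:] = L[1:, 0] = g*beta
    L[1:, 1:] += (g - 1.0)*np.outer(beta, beta)/b2
    return L

def minkowski_dot(a, b):
    return a[0]*b[0] - np.dot(a[1:], b[1:])

# Illustrative numbers: Lambda mass and a sample momentum (GeV).
m = 1.1157
p = np.array([0.4, 0.2, 0.8])
E = np.sqrt(m**2 + np.dot(p, p))
n_rest = np.array([0.0, 0.0, 1.0, 0.0])   # (0, n^+) with n^+ along y in the rest frame

# Boost from the rest frame to the lab frame, Eq. (eq:n-part).
vp = p/E
n_lab = boost_matrix(vp) @ n_rest

# The same result from the explicit form, Eq. (nDef).
nvec = n_rest[1:]
n_explicit = np.concatenate(([np.dot(nvec, p)/m],
                             nvec + np.dot(nvec, p)/(m*(m + E))*p))

pmu = np.concatenate(([E], p))
print(np.allclose(n_lab, n_explicit))                           # True
print(minkowski_dot(n_lab, n_lab), minkowski_dot(n_lab, pmu))   # -1, 0
\end{verbatim}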
We see in Eqs. (\ref{eq:zero-th-comp}) and (\ref{eq:v0-a0-n0-sig0}) that
the axial-vector component corresponds to the spin four-vector. We
can rewrite the last line of Eq. (\ref{eq:v0-a0-n0-sig0}) in another
form \cite{Weickgenannt:2019dks}
\begin{equation}
n_{\mu}^{(0)}=-\frac{1}{2m}\epsilon_{\mu\nu\alpha\beta}p^{\nu}\Sigma^{(0)\alpha\beta},
\end{equation}
where $n_{\mu}^{(0)}$ is the Pauli-Lubanski pseudovector and $\Sigma^{(0)\alpha\beta}$
plays the role of a spin angular momentum tensor.
At the first order in $\hbar$, the axial vector component is \cite{Fang:2016vpj,Weickgenannt:2019dks}
\begin{equation}
\mathcal{A}_{\mu}^{(1)}=m\bar{n}_{\mu}^{(1)}\delta(p^{2}-m^{2})+\tilde{F}_{\mu\nu}p^{\nu}V^{(0)}\delta^{\prime}(p^{2}-m^{2}),\label{eq:1st-axial-vector}
\end{equation}
where $\tilde{F}_{\mu\nu}=(1/2)\epsilon_{\mu\nu\alpha\beta}F^{\alpha\beta}$
and
\begin{equation}
\bar{n}_{\mu}^{(1)}\equiv-\frac{1}{2m}\epsilon_{\mu\nu\alpha\beta}p^{\nu}\bar{\Sigma}^{(1)\alpha\beta},\label{eq:n1-sig-1}
\end{equation}
is the first-order on-shell correction to $n_{\mu}^{(0)}A^{(0)}$.
In Eq. (\ref{eq:n1-sig-1}) $\bar{\Sigma}^{(1)\alpha\beta}$ can be
decomposed as
\begin{equation}
\bar{\Sigma}^{(1)\alpha\beta}=\frac{1}{2}\chi^{\alpha\beta}+\Xi^{\alpha\beta},
\end{equation}
where the tensor $\Xi^{\alpha\beta}$ is symmetric and satisfies $p_{\alpha}\Xi^{\alpha\beta}=0$.
The evolution equations for $\chi^{\alpha\beta}$ and for $\Xi^{\alpha\beta}$
are \cite{Weickgenannt:2019dks}
\begin{eqnarray}
p\cdot\nabla^{(0)}\chi_{\mu\nu} & = & 0,\nonumber \\
p\cdot\nabla^{(0)}\Xi_{\mu\nu} & = & F_{\ \mu}^{\alpha}\Xi_{\nu\alpha}-F_{\ \nu}^{\alpha}\Xi_{\mu\alpha},
\end{eqnarray}
where $\nabla^{(0)\mu}\equiv\partial_{x}^{\mu}-F^{\mu\nu}\partial_{p\nu}$.
The component $\chi_{\mu\nu}$ satisfies the constraint
\begin{equation}
p^{\nu}\chi_{\mu\nu}=\nabla_{\mu}^{(0)}V^{(0)}.
\end{equation}
In global equilibrium a special choice of $\chi_{\mu\nu}$ is
\begin{equation}
\chi_{\mu\nu}=-\omega_{\mu\nu}^{\beta}\frac{\partial V^{(0)}}{\partial(\beta p_{0})},
\end{equation}
where $\omega_{\mu\nu}^{\beta}$ is the thermal vorticity tensor (\ref{def:thv}) and
\begin{eqnarray}
V^{(0)} & \equiv & \frac{2}{(2\pi\hbar)^{3}}\sum_{s}\left[\theta(u\cdot p)f_{s}^{(0)+}+\theta(-u\cdot p)f_{s}^{(0)-}\right],\nonumber \\
f_{s}^{(0)\pm} & = & \frac{1}{\exp(\beta u\cdot p\mp\beta\mu_{s})+1}.\label{eq:omega}
\end{eqnarray}
Here $u^{\mu}$ is the flow velocity. The vorticity-dependent part of the
axial-vector component in Eq. (\ref{eq:1st-axial-vector}) then reads \cite{Fang:2016vpj,Weickgenannt:2019dks}
\begin{equation}
\mathcal{A}_{\mu}^{(1)}=\frac{1}{4}\epsilon_{\mu\nu\rho\sigma}p^{\nu}\omega_{\beta}^{\rho\sigma}\frac{\partial V^{(0)}}{\partial(\beta u\cdot p)}\delta(p^{2}-m^{2}).
\end{equation}
We can integrate $\mathcal{A}_{\mu}^{(1)}$ over $p_{0}$ to put
the momentum of the particle/antiparticle on the mass shell.
The average spin per particle (with an additional factor $1/2$ from
the particle's spin) is then given by
\begin{equation}
S_{\mu}^{\pm}=-\frac{1}{8(u\cdot p)}\epsilon_{\mu\nu\rho\sigma}p^{\nu}\omega_{\beta}^{\rho\sigma}(1-f_{\mathrm{FD}}^{\pm}),\label{eq:spin-vector}
\end{equation}
where $f_{\mathrm{FD}}^{\pm}$ is the on-shell Fermi-Dirac distribution
function, obtained from $f_{s}^{(0)\pm}$ by replacing $p_{0}$ with $\pm E_{p}$
($E_{p}\equiv\sqrt{m^{2}+\mathbf{p}^{2}}$) for a particle/antiparticle, respectively. We
can generalize the above equilibrium formula to a hydrodynamic process
with a freeze-out hypersurface $\sigma_{\mu}$ \cite{Fang:2016vpj,Liu:2020flb,Becattini:2013fla};
in this case the average spin per particle is given by
\begin{equation}
S^{\mu}(p)=-\frac{1}{8}\epsilon^{\mu\nu\rho\sigma}p_{\nu}\frac{\int d\sigma_{\lambda}p^{\lambda}\omega_{\rho\sigma}^{\beta}(u\cdot p)^{-1}f_{\mathrm{FD}}(1-f_{\mathrm{FD}})}{\int d\sigma_{\lambda}p^{\lambda}f_{\mathrm{FD}}},
\label{average-spin-vector}
\end{equation}
where we have suppressed the index $\pm$ for the particle/antiparticle
since the formula is valid for both. If the momentum is not large compared
with the particle mass, we have $u\cdot p\approx m$, and Eq. (\ref{average-spin-vector})
reduces to the result in Refs. \cite{Fang:2016vpj,Liu:2020flb,Becattini:2013fla}, which is
widely used in calculating the hadron polarization in heavy ion collisions.
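For orientation, Eq. (\ref{eq:spin-vector}) can be evaluated directly for a single fluid cell. The Python sketch below does so in the Boltzmann limit $1-f_{\mathrm{FD}}\approx 1$ for a cell at rest and a toy thermal vorticity with only a $zx$-component; the Levi-Civita convention ($\epsilon_{0123}=+1$) and all numerical values are illustrative assumptions, so only the structure of the result (a spin vector generated along the $y$-direction) should be taken at face value.
\begin{verbatim}
import numpy as np
from itertools import permutations

# Levi-Civita symbol with the convention epsilon_{0123} = +1 (a convention choice).
eps = np.zeros((4, 4, 4, 4))
for perm in permutations(range(4)):
    sign, lst = 1, list(perm)
    for i in range(4):
        for j in range(i + 1, 4):
            if lst[i] > lst[j]:
                sign = -sign
    eps[perm] = sign

metric = np.diag([1.0, -1.0, -1.0, -1.0])

def spin_vector(p3, m, varpi_upper):
    """Average spin per particle, Eq. (eq:spin-vector), with 1 - f_FD ~ 1,
    for a fluid cell at rest (u = (1,0,0,0)); varpi_upper is the thermal
    vorticity with upper indices."""
    E = np.sqrt(m**2 + np.dot(p3, p3))
    p = np.concatenate(([E], p3))                 # p^mu
    u = np.array([1.0, 0.0, 0.0, 0.0])            # u^mu of the cell
    udotp = p @ metric @ u
    S_lower = -1.0/(8.0*udotp)*np.einsum('mnrs,n,rs->m', eps, p, varpi_upper)
    return metric @ S_lower                       # return S^mu

# Illustrative thermal vorticity: only the zx component is nonzero (toy value).
varpi = np.zeros((4, 4))
varpi[3, 1], varpi[1, 3] = 0.05, -0.05

m_Lambda = 1.1157                                 # GeV
print(spin_vector(np.array([0.3, 0.0, 0.5]), m_Lambda, varpi))
\end{verbatim}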
\section{Vorticity in heavy ion collisions}
\label{sec:vort:num}
There are multiple sources of vorticity in heavy ion collisions.
One source is the global orbital angular momentum (OAM) of the two colliding nuclei in non-central collisions. Geometrically, this OAM is perpendicular to the reaction plane~\footnote{Strictly speaking, this is true only after averaging over many collision events, as the collision geometry itself (and thus the direction of the OAM) suffers from event-by-event fluctuations.}. After the collision, a fraction of the total OAM is retained in the produced quark-gluon matter and induces vorticity. As we will discuss later in this section, in the mid-rapidity region and for $\sqrt{s}$ larger than about 10 GeV, the vorticity generated in this way decreases with increasing beam energy, consistent with the measured global spin polarization of $\Lambda$ and $\bar{\Lambda}$ hyperons. The second source of vorticity is jet-like fluctuations in the fireball, which can induce smoke-loop type vortices around fast-moving particles~\cite{Betz:2007kg}. The direction of such vorticity is not correlated with the reaction plane and thus does not contribute to the global $\Lambda$ polarization; instead, on an event-by-event basis, it generates a near-side longitudinal spin-spin correlation~\cite{Pang:2016igs}. The third source of vorticity is the inhomogeneous expansion of the fireball~\cite{Jiang:2016woz,Wei:2018zfb,Pang:2016igs,Becattini:2017gcx,Xia:2018tes}. In particular, anisotropic flow in the transverse plane can produce a quadrupole pattern of the longitudinal vorticity along the beam direction, while the inhomogeneous transverse expansion can produce a transverse vorticity circling the longitudinal axis. There may be other sources of vorticity as well; for example, the strong magnetic field created by fast-moving spectators may magnetize the quark-gluon matter and potentially lead to a vorticity along the direction of the magnetic field through the Einstein-de Haas effect.
Vorticity formation in high-energy nuclear collisions has been extensively studied in relativistic hydrodynamic models such as ECHO-QGP \cite{Becattini:2015ska}, PICR \cite{Csernai:2013bqa,Csernai:2014ywa}, and CLVisc \cite{Pang:2016igs} in (3+1) dimensions. Using the ECHO-QGP code \cite{DelZanna:2013eua}, different vorticities in relativistic hydrodynamics were studied in the context of directed flow in non-central collisions \cite{Becattini:2015ska}. The evolution of the kinematic vorticity has been calculated with the PICR hydrodynamic code \cite{Csernai:2013bqa}. Using CLVisc \cite{Pang:2012he,Pang:2018zzo} with event-by-event fluctuating initial conditions, the vorticity distributions have been calculated, and a structure of vortex pairing in the transverse plane, due to the convective flow of hot spots in the radial direction, was found to possibly form in high-energy heavy ion collisions.
In this section we will focus on the kinematic and thermal vorticity based on transport models such as the AMPT model, but the discussion will also involve other types of vorticity. Before we go into the details, let us first discuss the setup of numerical simulations for extracting vorticity structures from the AMPT model~\cite{Jiang:2016woz} as well as the HIJING model~\cite{Deng:2016gyh} with partons as basic degrees of freedom.
\subsection{Setup of computation in transport models}
\label{sec:setup}
According to the definitions in Sec.~\ref{sec:vort}, in order to calculate the kinematic and thermal vorticity we first need the velocity field $u^\mu$ (with normalization $u^\mu u_\mu=1$) and the temperature field $T$. A natural way to obtain them is through the energy-momentum tensor $T^{\mu\nu}$, from which the velocity field and the energy density $\varepsilon$ are defined as the time-like eigenvector and the corresponding eigenvalue of $T^{\mu\nu}$, respectively,
\begin{eqnarray}
\label{eq:T:u}
T^{\mu\nu} u_\nu=\varepsilon u^\mu .
\end{eqnarray}
The temperature $T$ can then be determined from $\varepsilon$ as a function of $T$ by assuming local equilibrium. In transport models such as HIJING, AMPT or UrQMD, the position and momentum of each particle are known at any moment. A simple way to determine $T^{\mu\nu}$ as a function of space-time is the coarse-grained method: the whole space-time volume is split into grid cells and the event average of $\sum _i p^\mu_i p^\nu_i/p^0_i$ is computed inside each space-time cell,
\begin{equation}
T^{\mu\nu}(x) = \frac{1}{\Delta x \Delta y \Delta z}\left\langle
\sum _i \frac{p^\mu_i p^\nu_i}{p^0_i} \right\rangle ,
\end{equation}
where $i$ labels the particles inside the cell. The event average cancels the random thermal motion of particles in each space-time cell, so that only the collective motion is kept.
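A minimal version of this coarse-grained procedure, combined with the eigenvalue problem (\ref{eq:T:u}) for extracting $\varepsilon$ and $u^\mu$, is sketched in Python below. The particle list is a toy sample (an isotropic momentum distribution boosted along $x$), and the cell volume, mass and velocity are illustrative assumptions rather than AMPT or HIJING output.
\begin{verbatim}
import numpy as np

metric = np.diag([1.0, -1.0, -1.0, -1.0])

def cell_Tmunu(p_list):
    """Coarse-grained T^{mu nu} of one cell: sum_i p^mu p^nu / p^0 over the
    cell volume; p_list is an (N, 4) array of four-momenta."""
    dV = 1.0**3                                   # assumed cell volume (fm^3)
    return np.einsum('im,in->mn', p_list, p_list / p_list[:, :1]) / dV

def landau_velocity(Tmunu):
    """Solve T^{mu nu} u_nu = eps u^mu, Eq. (eq:T:u), via the mixed tensor."""
    T_mixed = Tmunu @ metric                      # T^mu_nu = T^{mu a} g_{a nu}
    vals, vecs = np.linalg.eig(T_mixed)
    for lam, v in zip(vals.real, vecs.real.T):
        norm = v[0]**2 - np.dot(v[1:], v[1:])
        if lam > 0 and norm > 0:                  # time-like, positive eigenvalue
            u = v/np.sqrt(norm)
            if u[0] < 0:
                u = -u
            return lam, u                         # energy density and u^mu
    raise RuntimeError("no time-like eigenvector found")

# Toy cell: isotropic momenta boosted along x (illustrative numbers only).
rng = np.random.default_rng(0)
m = 0.3                                           # parton mass, GeV (assumption)
p3 = rng.normal(scale=0.4, size=(2000, 3))
E = np.sqrt(m**2 + np.sum(p3**2, axis=1))
p = np.column_stack([E, p3])
vx = 0.5                                          # collective velocity along x
g = 1.0/np.sqrt(1.0 - vx**2)
p_boost = p.copy()
p_boost[:, 0] = g*(p[:, 0] + vx*p[:, 1])
p_boost[:, 1] = g*(p[:, 1] + vx*p[:, 0])

eps_cell, u_cell = landau_velocity(cell_Tmunu(p_boost))
print("energy density ~", eps_cell, " u^mu =", u_cell)
# u^mu comes out close to (gamma, gamma*vx, 0, 0), as expected
\end{verbatim}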
Another way is to introduce a smearing function $\Phi(x,x_i)$ that distributes a physical quantity (such as the momentum) of the $i$-th particle located at $x_i$ in an event. In this way, we can construct a continuous function of that physical quantity~\cite{Deng:2016gyh,Wei:2018zfb}. Physically, the function $\Phi(x, x_i)$ reflects the quantum nature of the particle as a wave packet. With $\Phi(x, x_i)$, the phase-space distribution can be written as
\begin{eqnarray}
\label{eq:dist}
f(x,p)=\frac{1}{\cal N}\sum_{i}(2\pi)^3\delta^{(3)}[\vec p-\vec p_i(t)]\Phi[x,x_i(t)],
\end{eqnarray}
where ${\cal N}=\int d^3\vec x \Phi(x,x_i)$ is a normalization factor. Then the energy-momentum tensor is given by
\begin{eqnarray}
\label{eq:EM}
T^{\mu\nu}(x)=\int\frac{d^3\vec p}{(2\pi)^3}\frac{p^\mu p^\nu}{p^0} f(x,p)=\frac{1}{\cal N}\sum_{i}\frac{p^\mu_i p^\nu_i}{p^0_i}\Phi(x,x_i).
\end{eqnarray}
The choice of the smearing function is important. Here we give two examples.
(a) The $\Delta$ smearing. This is obtained by generalizing the $\delta$ function $\delta^{(3)}[\vec x-\vec x_i(t)]$ (corresponding to zero smearing) to
\begin{eqnarray}
\label{smear:Delt}
\Phi_\Delta[x,x_i(t)]=\delta_\Delta^{(3)}[\vec x-\vec x_i(t)],
\end{eqnarray}
which is $1$ if $|x-x_i(t)|<\Delta x$, $|y-y_i(t)|<\Delta y$, $|z-z_i(t)|<\Delta z$, and $0$ otherwise.
This is just the coarse-grained method discussed earlier in this subsection.
(b) The Gaussian smearing \cite{Deng:2016gyh,Pang:2012he,Hirano:2012kj}. This is given by
\begin{eqnarray}
\label{smear:Gaus}
\Phi_{\rm G}[x,x_i(\tau)]=K \exp\left[-\frac{(x-x_i)^2}{2\sigma_x^2}-\frac{(y-y_i)^2}{2\sigma_y^2}-\frac{(\eta-\eta_i)^2}{2\sigma_\eta^2}\right],
\end{eqnarray}
where we have adopted the Milne coordinates $(\tau, x, y, \eta)$, with $\eta=(1/2)\ln[(t+z)/(t-z)]$ the space-time rapidity and $\tau=\sqrt{t^2-z^2}$ the proper time, instead of Minkowski coordinates; $K$ and $\sigma_{x,y,\eta}$ are parameters that can be determined by fitting experimental data. As a convention for the coordinate system, the $z$-axis is along the beam direction of the projectile, the $x$-axis is along the impact parameter from the target to the projectile nucleus, and the $y$-axis is along
$\hat{\vec z}\times\hat{\vec x}$, see \fig{frame}.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=4.5cm]{frame}
\caption{The coordinate system of a heavy ion collision. Here, `T' is for target and `P' is for projectile.}
\label{frame}
\end{center}
\end{figure}
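A corresponding sketch of the Gaussian smearing (\ref{smear:Gaus}) applied to the energy-momentum tensor (\ref{eq:EM}) is given below in Python. The smearing widths, the toy particle list and the treatment of the normalization (the $\tau$ factor of the Milne measure is dropped) are all illustrative simplifications.
\begin{verbatim}
import numpy as np

# Gaussian smearing widths (illustrative values; in practice tuned to data).
sigma_x = sigma_y = 0.6     # fm
sigma_eta = 0.6
K = 1.0                     # overall constant; cancels against the normalization

def phi_gauss(x, y, eta, xi, yi, etai):
    """Smearing kernel of Eq. (smear:Gaus) in Milne coordinates."""
    return K*np.exp(-(x - xi)**2/(2*sigma_x**2)
                    - (y - yi)**2/(2*sigma_y**2)
                    - (eta - etai)**2/(2*sigma_eta**2))

def smeared_Tmunu(x, y, eta, particles):
    """Energy-momentum tensor at (x, y, eta) from Eq. (eq:EM).
    particles: array of rows (p0, px, py, pz, xi, yi, etai)."""
    p = particles[:, 0:4]
    w = phi_gauss(x, y, eta, particles[:, 4], particles[:, 5], particles[:, 6])
    # N = int dx dy deta Phi for a Gaussian kernel (tau factor dropped here).
    norm = K*(2*np.pi)**1.5*sigma_x*sigma_y*sigma_eta
    return np.einsum('i,im,in->mn', w, p, p/p[:, :1])/norm

# Toy particle list (illustrative numbers only).
rng = np.random.default_rng(2)
Npart = 500
p3 = rng.normal(scale=0.4, size=(Npart, 3))
p0 = np.sqrt(0.3**2 + np.sum(p3**2, axis=1))
pos = rng.normal(scale=2.0, size=(Npart, 3))          # (xi, yi, etai)
parts = np.column_stack([p0, p3, pos])

print(smeared_Tmunu(0.0, 0.0, 0.0, parts))
\end{verbatim}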
\subsection{Results for kinematic vorticity}
\label{sec:num:kv}
The kinematic vorticity (\ref{def:kv}) is a natural extension of the non-relativistic vorticity (\ref{defvor1}), which is a direct measure of the angular velocity of a fluid cell. We will discuss a number of features of the kinematic vorticity (including its non-relativistic counterpart).
\textit{Centrality dependence.}
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=5.5cm]{vort-b-200GeV}
\includegraphics[width=5.5cm]{vort-b-276TeV}
\caption{The $y$-components of the non-relativistic vorticity in Eq. (\ref{defvor1}) and the relativistic kinematic vorticity in Eq. (\ref{def:kv}) at $\tau=\tau_0$ and $\eta=0$ in $200$ GeV Au + Au and $2.76$ TeV Pb + Pb collisions ~\cite{Deng:2016gyh}.}
\label{bdepen}
\end{center}
\end{figure}
It is expected that, for a given collision energy, the total angular momentum of the two colliding nuclei with respect to the collision center increases with centrality or, equivalently, with impact parameter. As a result, the vorticity is expected to increase with centrality as well. This is indeed the case, as shown in \fig{bdepen}, where the average non-relativistic and relativistic vorticities along the $y$-direction, $\langle\overline{\omega}_y\rangle$, at the initial time ($\tau_0=0.4$ fm for $\sqrt{s}=200$ GeV and $\tau_0=0.2$ fm for $\sqrt{s}=2.76$ TeV) and at mid-rapidity are plotted as functions of the impact parameter $b$. The average is taken over both the transverse overlapping region (indicated by the overline on $\omega_y$) and the collision events (indicated by $\langle\cdots\rangle$); see Ref. \cite{Deng:2016gyh} for details. The magnitude of the kinematic vorticity is large: for example, $|\omega_y|$ is about $10^{20}\ {\rm s}^{-1}$ at $b=10$ fm and $\sqrt{s}=200$ GeV, a value surpassing that of any other known fluid. We also notice that the kinematic vorticity begins to decrease with $b$ when $b\gtrsim 2R_A$, with $R_A$ the nuclear radius, reflecting the fact that the two colliding nuclei begin to separate.
\textit{Energy dependence.}
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=5.5cm]{energy-kine}
\caption{The collision energy dependence of the kinematic vorticity at mid-rapidity~\cite{Deng:2016gyh}. }
\label{edepen}
\end{center}
\end{figure}
It is obvious that the total angular momentum of the two colliding nuclei grows with the collision energy $\sqrt{s}$ at fixed impact parameter. Naively, one would then expect a similar energy dependence of the vorticity. However, as shown in \fig{edepen} (and also in \fig{tdepen}), the $y$-component of the kinematic vorticity at mid-rapidity decreases as $\sqrt{s}$ increases. This behavior reflects a relativistic effect in the mid-rapidity region: as $\sqrt{s}$ increases, the two nuclei become more transparent to each other and leave the mid-rapidity region more boost invariant, which supports a lower vorticity. To put it another way: while the total angular momentum of the colliding system increases with the beam energy, the fraction of that angular momentum carried by the fireball at mid-rapidity decreases rapidly with the beam energy~\cite{Jiang:2016woz}. At low energy, the relativistic effect becomes less important and the fireball acquires a considerably larger fraction of the system's angular momentum, leading to a much increased vorticity~\cite{Jiang:2016woz,Deng:2016gyh}. At very low energy, however, the total angular momentum itself is small and the vorticity inevitably becomes small again \cite{Deng:2020ygd}.
\textit{Correlation to the participant plane.}
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=5.5cm]{vort-correl}
\caption{The correlation between the direction of the vorticity $\psi_\omega$ and the second order participant plane $\psi_2$ \cite{Deng:2016gyh}.}
\label{cordepen}
\end{center}
\end{figure}
Geometrically, one expects the direction of the vorticity to be perpendicular to the reaction plane. However, this is true only in the optical limit or after event averaging. In reality, the nucleons in the colliding nuclei are not static, and their positions fluctuate from event to event at the moment of the collision. Such event-by-event fluctuations can smear the direction of the vorticity away from being perfectly perpendicular to the reaction plane. To quantify this effect, one can study the azimuthal-angle correlation between the vorticity and the participant plane (which describes the overlapping region more accurately than the reaction plane), $\langle \cos[2(\psi_\omega-\psi_2)]\rangle$, where $\psi_\omega$ and $\psi_2$ denote the azimuthal angles of the vorticity and of the second-order participant plane, respectively. The result is shown in \fig{cordepen}. The correlation is significantly suppressed in the most central (due to the strong fluctuation in $\psi_\omega$) and most peripheral (due to the strong fluctuation in $\psi_2$) collisions. We note that a similar feature is also observed for magnetic fields~\cite{Bloczynski:2012en,Bloczynski:2013mca}.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=5.45cm]{space}
\includegraphics[width=5.5cm]{helicity}
\caption{The spatial distribution of the non-relativistic vorticity (left panel) and the helicity (right panel) in the transverse plane at $\eta=0$ for RHIC Au + Au collisions at $\sqrt{s}=200$ GeV~\cite{Deng:2016gyh}. See also discussion around \eq{eq:rheli}.}
\label{spadep}
\end{center}
\end{figure}
\textit{Spatial distribution.}
The vorticity is inhomogeneous in the transverse plane (the $x$-$y$ plane in \fig{frame}). As seen in \fig{spadep} (left panel), the non-relativistic vorticity varies more steeply along the $x$ direction, in accordance with the elliptic shape of the overlapping region. The event average of the helicity field $h_T^0\equiv (1/2) T^2 \vec v\cdot\vec\nabla\times\vec v$, as defined below \eq{eq:rheli}, is depicted in \fig{spadep} (right panel). Clearly, the reaction plane separates the region of positive helicity from that of negative helicity, simply because $\langle v_y\rangle$ changes sign across the reaction plane while $\langle\omega_y\rangle$ does not. We note that a similar feature also exists for the electromagnetic helicity $\langle{\vec E\cdot\vec B}\rangle$ in heavy ion collisions~\cite{Deng:2012pc}.
\textit{Time evolution.}
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=6cm]{time}
\caption{The time evolution of the non-relativistic vorticity for different collision energies~\cite{Jiang:2016woz}.}
\label{tdepen}
\end{center}
\end{figure}
In the hot quark-gluon medium, the fluid velocity evolves in time, and so does the vorticity. Understanding the time evolution of the vorticity is also important for understanding vorticity-driven effects such as the spin polarization. The results for the non-relativistic vorticity as a function of time in an AMPT simulation are presented in \fig{tdepen}. At a very early stage,
$-\langle\bar{\omega}_y\rangle$ (in Ref. \cite{Jiang:2016woz} the spatial average is weighted by the moment of inertia) briefly increases with time, probably due to a decrease of the moment of inertia by parton scatterings before the transverse
radial expansion is developed. After reaching a maximum at $\sim 1$ fm, $-\langle\bar{\omega}_y\rangle$ decreases steadily with time because of the system's expansion.
To understand how the system's expansion brings the vorticity down, we can consider the dissipation equation for the non-relativistic vorticity,
\begin{eqnarray}
\label{eq:voreq}
\frac{\partial\vec \omega}{\partial t}=\vec\nabla\times(\vec v\times\vec\omega) +\nu\nabla^2\vec\omega,
\end{eqnarray}
where $\nu=\eta/(\varepsilon+P)=\eta/(sT)$ is the kinematic shear viscosity, with $\eta$ the shear viscosity and $s$ the entropy density. Thus, the change of the vorticity can be driven either by the fluid flow (the first term on the right-hand side) or by viscous damping (the second term on the right-hand side). The ratio of the two terms is characterized by the Reynolds number $Re=UL/\nu$, with $U$ and $L$ the characteristic velocity and system size, respectively. If $Re\ll 1$, the second term dominates and the vorticity is damped by the shear viscosity on a time scale $t_\omega\sim L^2/(4\nu)$. If $Re\gg 1$, the first term in \eq{eq:voreq} dominates and the vortex flux is nearly frozen into the fluid (see the discussion of the Helmholtz-Kelvin theorem in Sec.~\ref{subsec:nonrel}); in this case, the vorticity decreases due to the system's expansion. Consider Au + Au collisions at
$\sqrt{s}=200$ GeV as an example. Taking $U\sim 0.1 - 1$, $L \sim 5$ fm, $T\sim 300$ MeV, and $\eta/s\sim 1/(4\pi)$ for the strongly coupled QGP, we obtain $Re\sim 10 - 100$. Thus, the vorticity decay shown in \fig{tdepen} is mainly due to the system's expansion; see Refs.~\cite{Jiang:2016woz,Deng:2016gyh} for more discussion.
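The Reynolds-number estimate quoted above can be reproduced with a few lines (the mid-range value $U=0.5$ is an arbitrary choice within the quoted interval):
\begin{verbatim}
import numpy as np

hbar_c = 0.1973                  # GeV fm
eta_over_s = 1.0/(4.0*np.pi)     # strongly coupled QGP benchmark
T = 0.300                        # GeV
U = 0.5                          # characteristic flow velocity, illustrative
L = 5.0                          # fm

# Kinematic shear viscosity nu = eta/(sT) = (eta/s)/T, converted to fm.
nu = eta_over_s/T*hbar_c         # ~ 0.05 fm
Re = U*L/nu                      # ~ 50, i.e. within the quoted range 10-100
print("nu =", nu, "fm ;  Re =", Re)
\end{verbatim}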
\subsection{Results for thermal vorticity}
\label{sec:num:thv}
The thermal vorticity (\ref{def:thv}) can be decomposed into the part
proportional to the kinematic vorticity and the part related to temperature gradients,
\begin{eqnarray}
\label{eq:thvsk}
\varpi_{\mu\nu}\equiv\omega^\beta_{\mu\nu}=\beta\omega^{\rm K}_{\mu\nu}+u_{[\nu}\partial_{\mu]}\beta,
\end{eqnarray}
where $[\cdots]$ denotes anti-symmetrization of the indices. Note that in this subsection we use $\varpi$ to denote the thermal vorticity in order to be consistent with the notation widely used in the literature. In many respects the thermal vorticity thus behaves similarly to the kinematic vorticity, but the difference between the two becomes significant when the temperature gradient is large.
\textit{Time evolution.}
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=5cm]{th-t-dep}
\caption{The time evolution of the $zx$-component of the thermal vorticity at spacetime rapidity $\eta=0$ and impact parameter $b=9$ fm for different collision energies. The figure is taken from Ref.~\cite{Wei:2018zfb}.}
\label{th-t-depen}
\end{center}
\end{figure}
In \fig{th-t-depen}, we show the $zx$-component of the thermal vorticity in Au + Au collisions at $\eta=0$, $b=9$ fm and $\sqrt{s}=19.6, 62.4, 200$ GeV. Here, the thermal vorticity is first averaged over the transverse plane (weighted by the energy density and indicated by an overline) and then over collision events (indicated by $\langle\cdots\rangle$). Compared with \fig{tdepen}, except for a very short early time, the time evolution of the thermal vorticity is similar to that of the kinematic vorticity, and so is the energy dependence: both the thermal and the kinematic vorticity decrease with $\sqrt{s}$. This can be understood from the fact that at higher collision energies both terms in \eq{eq:thvsk} become smaller at $\eta=0$, as the two colliding nuclei become more transparent to each other and make the mid-rapidity region more boost invariant.
\textit{Spatial distribution.}
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=5cm]{th-spa-e0}$\;\;\;\;\;\;\;\;\;\;\;$
\includegraphics[width=5cm]{th-spa-r}
\caption{The distribution of the event-averaged thermal vorticity on the transverse plane at $t=0.6$ fm, $\eta=0$ and $\sqrt{s}=19.6$ GeV for Au + Au collisions, averaged over the centrality range 20-50\%. Left panel: arrows represent $\langle\vec\varpi_\perp\rangle=(\langle\varpi_{yz}\rangle, \langle\varpi_{zx}\rangle)$ and colors represent the magnitude of $\langle\varpi_{zx}\rangle$. Right panel: the radial thermal vorticity $\langle\varpi_r\rangle=\hat{\vec r}\cdot\langle\vec\varpi_\perp\rangle$. The figures are taken from Ref.~\cite{Wei:2018zfb}.}
\label{th-spa-1}
\end{center}
\end{figure}
In the left panel of \fig{th-spa-1} we show the spatial distribution of the event-averaged thermal vorticity $\vec\varpi_\perp=(\varpi_{yz}, \varpi_{zx})$ in the transverse plane at $\eta=0$, taking $t=0.6$ fm for Au + Au collisions at $\sqrt{s}=19.6$ GeV as an example. The arrows represent $\langle\vec\varpi_\perp\rangle$ and the colors the magnitude of $\langle\varpi_{zx}\rangle$. Two vorticity loops are visible, associated with the motion of the participant nucleons in the projectile and target nucleus, respectively. The right panel shows the radial component of $\langle\vec\varpi_\perp\rangle$, for which a clear sign separation across the reaction plane is observed.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=4.6cm]{illu-longitudinal2}$\;\;\;\;\;\;\;\;\;\;\;$
\includegraphics[width=5cm]{th-longitudinal}
\caption{Left panel: illustration of an anisotropic expansion of the fireball in the transverse plane in non-central collisions. Such a flow profile represents a positive elliptic flow $v_2$ and a quadrupolar distribution of the longitudinal kinematic vorticity \eq{defvor1} in the transverse plane. Right panel: the longitudinal component of the thermal vorticity distributed in the transverse plane at $t=0.6$ fm, $\eta=0$ and $\sqrt{s}=19.6$ GeV in Au + Au collisions. The results are obtained by averaging over events in $20-50$\% centrality. A remarkable difference between the left and right panel is the sign difference of the longitudinal vorticity in each quadrant. The figure is taken from Ref.~\cite{Wei:2018zfb}.}
\label{th-spa-long}
\end{center}
\end{figure}
As we have already discussed, the sources of vorticity are manifold, and the inhomogeneous expansion of the fireball is a good generator of vorticity. To see this more clearly, let us consider a non-central collision and parameterize its velocity profile at a given moment as
\begin{eqnarray}
\label{expan}
v_r&\sim&\bar{v}_r(r,z)\left[1+2c_r\cos(2\phi)\right],\nonumber\\
v_z&\sim&\bar{v}_z(r,z)\left[1+2c_z\cos(2\phi)\right],\nonumber\\
v_\phi&\sim&2c_\phi \bar{v}_\phi(r,z) \sin(2\phi),
\end{eqnarray}
where $r$, $z$ and $\phi$ are the radial, longitudinal and azimuthal coordinates, respectively, and $c_r$, $c_z$ and $c_\phi$ characterize the eccentricity in $v_r$, $v_z$ and $v_\phi$, respectively. For high-energy collisions, the expansion approximately respects the $z\rightarrow -z$ reflection symmetry, which requires $\bar{v}_r(r,z)=\bar{v}_r (r,-z)$, $\bar{v}_z(r,z)=-\bar{v}_z (r,-z)$, and $\bar{v}_\phi(r,z)=\bar{v}_\phi (r,-z)$. From the velocity profile (\ref{expan}) we then find several interesting features of the non-relativistic kinematic vorticity field, $\vec\omega=(1/2)\vec\nabla\times\vec v$.
First, at mid-rapidity ($\eta=0$ or $z=0$) in a non-central collision, the longitudinal non-relativistic kinematic vorticity $\omega_z$ can be nonzero while the transverse components $\omega_r$ and $\omega_\phi$ vanish. In particular, we have $\omega_z\sim \sin(2\phi)$ at mid-rapidity, featuring a quadrupole distribution as illustrated in the left panel of \fig{th-spa-long}~\footnote{We note that the left panel of \fig{th-spa-long} is only for illustration; the real velocity profile is much more complicated and includes components that contribute a positive $v_2$ but an opposite vortical structure to the one shown in the figure.}. Such a quadrupole structure in the non-relativistic vorticity field is a consequence of the positive elliptic flow $v_2$. Quite similarly, the longitudinal component of the thermal vorticity also shows a quadrupole structure in the transverse plane, as seen in the right panel of \fig{th-spa-long}, where the results for $\varpi_{xy}$ in the transverse plane of Au + Au collisions at $t=0.6$ fm, $\eta=0$ and $\sqrt{s}=19.6$ GeV are presented. Surprisingly, in each quadrant the thermal vorticity $\varpi_{xy}$ has a sign opposite to that of the non-relativistic vorticity $\omega_z$. This means that the contributions from the acceleration and the temperature gradient to the thermal vorticity are large and outweigh that from the velocity gradient.
Second, at finite rapidity, all three components of $\vec\omega$ can be finite, and the transverse vorticity is dominated by the $\phi$ component. The origin of this $\phi$-directed vortex is similar to the onset of the smoke-loop vortex illustrated in \fig{th-spa-eta} (upper-left). More precisely, $\omega_\phi\sim(1/2)[\partial\bar{v}_r/\partial z-\partial\bar{v}_z/\partial r]$ changes sign under the reflection $z\rightarrow -z$ (or $\eta\rightarrow -\eta$); this behavior exists in non-central as well as central collisions. In the positive-rapidity region $\eta>0$, the first term in $\omega_\phi$ is usually negative while the second term is positive, so the direction of the $\phi$-directed vortex depends on the relative strength of the two terms. A similar smoke-loop pattern also exists for the thermal vorticity, see the lower panels of \fig{th-spa-eta}. Its projection onto the reaction plane forms a quadrupole structure in $\varpi_{zx}$, as shown in the upper-right panel of \fig{th-spa-eta}. We will discuss how this intricate local vortical structure is reflected in the spin polarization of $\Lambda$ hyperons in the next section.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=5.8cm]{expansion}$\;\;\;\;$
\includegraphics[width=5cm]{th-quad}
\includegraphics[width=4.8cm]{th-eta-p}$\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;$
\includegraphics[width=4.8cm]{th-eta-m}
\caption{Upper-left: illustration of the smoke-loop type vortices due to the fast longitudinal expansion. Note that a radial expansion that is inhomogeneous in the $z$ or $\eta$ direction results in similar vortices. Upper-right: the distribution of the event-averaged thermal vorticity in the reaction plane (the $x$-$\eta$ plane) for Au + Au collisions at $\sqrt{s}=19.6$ GeV. Lower-left and lower-right: vector plots of the thermal vorticity projected onto the transverse plane at space-time rapidity $\eta = -2, 2$ for Au + Au collisions at $\sqrt{s}=19.6$ GeV, averaged over events in the 20-50\% centrality range. The background color represents the magnitude and sign of $\varpi_{zx}$. The figures are from Ref.~\cite{Wei:2018zfb}.}
\label{th-spa-eta}
\end{center}
\end{figure}
\section{$\Lambda$ polarization in heavy ion collisions}
\label{sec:pola:num}
An important consequence of the vorticity field is that particles with spin become polarized. The detailed mechanism of this spin polarization has been discussed in Sec.~\ref{sec:pola}. In this section, we review numerical simulations based on transport models for the spin polarization of one specific hyperon, $\Lambda$, and its antiparticle, $\bar\Lambda$. The $\Lambda$ hyperon is chosen because its weak decay $\Lambda\rightarrow p +\pi^-$ violates parity, so the daughter proton is emitted preferentially along the spin direction of the $\Lambda$ in its rest frame. More precisely, if $\vec P^*_\Lambda$ is the spin polarization of the $\Lambda$ in its rest frame (hereafter, an asterisk indicates $\Lambda$'s rest frame), the angular distribution of the daughter protons is given by
\begin{eqnarray}
\label{lambdadecay}
\frac{1}{N_p}\frac{d N_p}{d\Omega^*}=\frac{1}{4\pi}\left(1+\alpha\hat{\vec p}^*\cdot\vec P^*_\Lambda\right),
\end{eqnarray}
where $\vec p^*$ is the momentum of the proton in the rest frame of the $\Lambda$ (a hat over a vector denotes the corresponding unit vector), $\Omega^*$ is the solid angle of $\vec p^*$, and $\alpha\approx 0.642\pm 0.013$ is the decay parameter. Experimentally, one can therefore extract $P^*_\Lambda$ by measuring $dN_p/d\Omega^*$ \cite{STAR:2017ckg,Abelev:2007zk,Siddique:2017ddr}. The above discussion applies equally well to $\bar\Lambda$, but with a negative decay parameter $-\alpha$. Our purpose is to discuss the current theoretical understanding of $\vec P^*_\Lambda$ induced by the vorticity in heavy ion collisions. We note that the vorticity-induced spin polarization can also lead to other interesting consequences, such as the spin alignment of vector mesons~\cite{Liang:2004xn,Sheng:2019kmk,Sheng:2020ghv,Xia:2020tyd}, an enhanced yield of hadrons with higher spin~\cite{Taya:2020sej}, spin-spin correlations~\cite{Pang:2016igs}, and the polarization of emitted photons~\cite{Ipp:2007ng}, which, however, will not be discussed here.
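The extraction of the polarization from the decay distribution (\ref{lambdadecay}) can be illustrated with a toy Monte Carlo: sampling the proton emission angle and inverting $\langle\cos\theta^*\rangle=\alpha P/3$, where $\theta^*$ is the angle between the proton momentum and the polarization axis in the $\Lambda$ rest frame. The Python sketch below uses an assumed input polarization of $5\%$; it is a simplified estimator, not the full procedure used by the experiments.
\begin{verbatim}
import numpy as np

alpha = 0.642                 # Lambda decay parameter
P_true = 0.05                 # assumed polarization along a fixed direction
rng = np.random.default_rng(3)

# Sample cos(theta*) of the daughter proton from
# dN/dcos(theta*) = (1/2)(1 + alpha*P*cos(theta*)), Eq. (lambdadecay),
# by rejection sampling.
n_events = 2_000_000
cos_t = np.empty(0)
while cos_t.size < n_events:
    c = rng.uniform(-1.0, 1.0, size=n_events)
    accept = rng.uniform(0.0, 1.0 + alpha*P_true, size=n_events)
    cos_t = np.concatenate([cos_t, c[accept < 1.0 + alpha*P_true*c]])
cos_t = cos_t[:n_events]

# Invert <cos(theta*)> = alpha*P/3 to estimate the polarization.
P_est = 3.0*np.mean(cos_t)/alpha
print("P_true =", P_true, " P_est =", P_est)
\end{verbatim}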
The basic assumption that enables us to link the vorticity to the $\Lambda$ spin polarization is local equilibrium of the spin degrees of freedom, which leads to the following formula for a spin-$s$ fermion with mass $m$ and momentum $p^\mu$ produced at point $x$ \cite{Fang:2016vpj,Liu:2020flb,Becattini:2013fla,Becattini:2016gvu},
\begin{eqnarray}
\label{spin}
S^\mu(x,p)=-\frac{s(s+1)}{6m}(1-n_F)\epsilon^{\mu\nu\rho\sigma} p_\nu \varpi_{\rho\sigma}(x)+O(\varpi)^2,
\end{eqnarray}
where $n_F(p_0)$ is the Fermi-Dirac distribution function with $p_0=\sqrt{\vec p^2+m^2}$ the energy of the fermion. We note that this formula can be shown to hold at global equilibrium, as derived in Sec.~\ref{sec:pola}; here we assume that it also holds at local equilibrium. For $\Lambda$ and $\bar{\Lambda}$ we have $s=1/2$. If the fermion mass is much larger than the temperature, as is the case for $\Lambda$ and $\bar{\Lambda}$ produced in heavy ion collisions at RHIC and LHC energies, we can approximate $1-n_F\approx 1$. Using $S^{*\mu}=(0,{\vec S}^{*})$ to denote the spin vector in the $\Lambda$'s rest frame, the Lorentz transformation from the laboratory frame gives
\begin{eqnarray}
\label{localspin}
{\vec S}^{*}={\vec S}-\frac{{\vec p}\cdot{\vec S}}{p_0(p_0+m)}{\vec p}.
\end{eqnarray}
Finally, the spin polarization of $\Lambda$ in the direction ${\vec n}$ is given by
\begin{eqnarray}
\label{def:pol}
P^*_n= \frac{1}{s}{\vec S}^{*}\cdot{\vec n}.
\end{eqnarray}
In the following, for simplicity, we will use $P_n$ to denote $P^*_n$ if there is no confusion. In a transport model like AMPT, \eqs{spin}{def:pol} are used to calculate the $\Lambda$ polarization.
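The last two steps, Eqs. (\ref{localspin}) and (\ref{def:pol}), amount to a simple kinematic transformation. A minimal Python sketch (with purely illustrative spin and momentum values) reads:
\begin{verbatim}
import numpy as np

def rest_frame_spin(S3, p3, m):
    """Spin vector in the Lambda rest frame, Eq. (localspin)."""
    p0 = np.sqrt(m**2 + np.dot(p3, p3))
    return S3 - np.dot(p3, S3)/(p0*(p0 + m))*p3

def polarization(S3, p3, m, n_hat, s=0.5):
    """Polarization along direction n_hat, Eq. (def:pol): P_n = (1/s) S* . n."""
    return np.dot(rest_frame_spin(S3, p3, m), n_hat)/s

# Illustrative numbers only: a lab-frame spin vector and a Lambda momentum (GeV).
m_Lambda = 1.1157
S_lab = np.array([0.0, 0.012, 0.0])
p = np.array([0.8, 0.3, 1.2])
print(polarization(S_lab, p, m_Lambda, np.array([0.0, 1.0, 0.0])))  # P_y
\end{verbatim}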
\textit{The global polarization.} In the last few years, transport models such as AMPT have been
widely used in the study of the $\Lambda$ polarization, and the results of various groups are
to a large extent consistent with each other. Here, we mainly show the results of Refs.~\cite{Li:2017slc,Wei:2018zfb,Shi:2017wpk}.
In \fig{pol:global}, theoretical results for the spin polarization of $\Lambda$ and $\bar\Lambda$ are compared
with experimental data. The simulations are done for Au + Au collisions in the centrality range $20-50\%$
and rapidity range $|Y|<1$. We see very good agreement between the numerical results and the data, which
gives strong support to the vorticity interpretation of the measured $\Lambda$ polarization.
We have three comments. (1) The simulations include only the polarization caused by the vorticity,
so there is no difference between $\Lambda$ and $\bar\Lambda$ in the calculation.
The data show a difference between $\Lambda$ and $\bar\Lambda$, although
the errors are large. This is not fully understood; a possible source of such a
difference is the magnetic field, because $\Lambda$ and $\bar\Lambda$ have opposite magnetic moments.
(2) The simulation shown in \fig{pol:global} counts only the $\Lambda$ and $\bar\Lambda$
coming from the hadronization of quarks in the AMPT model (the primary or
primordial $\Lambda$ and $\bar\Lambda$). However, a large fraction ($\sim 80\%$) of
the measured $\Lambda$ and $\bar\Lambda$ hyperons come from the decay
of higher-lying hyperons such as $\Sigma^0$, $\Sigma^*$, $\Xi$, and $\Xi^*$.
Such a feed-down contribution to the $\Lambda$ polarization is nevertheless small:
it reduces the spin polarization of primordial $\Lambda$'s by about $10-20\%$ \cite{Xia:2019fjf,Becattini:2019ntv}.
(3) Recently, the HADES Collaboration reported a measurement of the $\Lambda$ polarization
at $\sqrt{s}=2.4$ GeV which shows a nearly vanishing $P_y$~\cite{HADES:2019}. This means that the energy
dependence of $P_y$ at low energies is not monotonic. The AMPT model is not applicable
in such a low-energy region, and it is necessary to use other transport models,
such as UrQMD and IQMD,
to calculate the $\Lambda$ polarization at very low energies~\cite{Deng:2020ygd}.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=5.6cm]{globalpol-Li}
\includegraphics[width=5.5cm]{globalpol-Shi}
\includegraphics[width=5cm]{globalpol-Wei}
\caption{The global $\Lambda$ and $\bar\Lambda$ spin polarization simulated in the AMPT model, compared with the experimental data. Shown is the polarization along the $y$ direction in the 20-50\% centrality range of Au + Au collisions, obtained by different groups; the results are consistent with each other~\cite{Li:2017slc,Shi:2017wpk,Wei:2018zfb}.}
\label{pol:global}
\end{center}
\end{figure}
\textit{Local polarization and polarization harmonics.}
The above analysis concerns the spin polarization integrated over azimuthal angle, rapidity and $p_T$, and is therefore called the global polarization. As we have shown in Sec.~\ref{sec:num:thv}, the thermal vorticity has a nontrivial distribution in coordinate space, in particular the quadrupole structures shown in \fig{th-spa-long} and \fig{th-spa-eta}, leading to a nontrivial spin-polarization distribution in momentum space through Eq. (\ref{average-spin-vector}).
Here, we show the results from Ref.~\cite{Xia:2018tes}. In \fig{pol:local-Xia}, we present the $\Lambda$ spin polarization as a function of the $\Lambda$'s momentum azimuthal angle $\phi_p$ for Au + Au collisions at 200 GeV (left) and Pb + Pb collisions at 2760 GeV (right). As illustrated in \fig{th-spa-eta}, the inhomogeneous expansion of the fireball can generate transverse vorticity loops whose directions are clockwise and counterclockwise in the positive and negative rapidity regions, respectively. As a consequence, the transverse $\Lambda$ spin polarization $(P_x, P_y)$ should have a similar structure. To extract this effect, $P_x$ and $P_y$ are weighted by the sign of the rapidity and then averaged in azimuthal-angle bins. The results, shown in the upper two panels of \fig{pol:local-Xia}, exhibit clean harmonic behaviors, $P_x\,\mathrm{sgn}(Y)\sim\sin(\phi_p)$ and $P_y\,\mathrm{sgn}(Y)\sim-\cos(\phi_p)$, which agree with the direction of the transverse vorticity loops. On the other hand, the longitudinal vorticity has a quadrupole structure in the transverse plane, as shown in \fig{th-spa-long}. Correspondingly, the longitudinal spin polarization $P_z$ shows a $-\sin(2\phi_p)$ behavior, as seen in the lowest panels of \fig{pol:local-Xia}.
\begin{figure}[!htb]
\begin{center}
\includegraphics[height=5cm]{P_vs_phi_200}
\includegraphics[height=5cm]{P_vs_phi_2760}
\caption{The $\Lambda$ spin polarization as functions of $\Lambda$s' momentum azimuthal angle $\phi_p$ in 20--50\% central Au + Au
collisions at 200 GeV (left) and Pb + Pb collisions at 2760 GeV (right)~\cite{Xia:2018tes}.}
\label{pol:local-Xia}
\end{center}
\end{figure}
\fig{pol:local} shows another way to present the local polarization \cite{Wei:2018zfb}. In the left panel, we show the distribution of the transverse spin polarization $P_y$ in the $\phi-Y$ plane, where $\phi$ is the momentum azimuthal angle and $Y$ is the rapidity. Clearly, as the spin-polarization response to the quadrupole structure of the vorticity field shown in the upper-right panel of \fig{th-spa-eta}, $P_y(Y,\phi)$ also shows a quadrupole structure. To characterize this nontrivial $\phi$ dependence of $P_y$ at a given rapidity $Y$, we can decompose $P_y(Y,\phi)$ into a harmonic series,
\begin{eqnarray}
\label{harmo}
P_{y}(Y,\phi)= \frac{1}{2\pi}\frac{dP_{y}}{dY}\{1+2\sum_{n=1}^{\infty}f_{n}\cos[n(\phi-\Phi_{n})]\},
\end{eqnarray}
where $\Phi_n$ defines the $n$-th harmonic plane for the spin, with the corresponding harmonic coefficient $f_n$. The first harmonic coefficient $f_1$, shown in the right panel of \fig{pol:local}, is induced by the vorticity from the collective expansion; it is odd in rapidity and peaks at finite rapidity, in accordance with \fig{th-spa-eta}. The measurement of such a directed flow of the spin polarization would be an indicator of the quadrupole structure in the vorticity field due to the inhomogeneous expansion of the fireball.
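Extracting $dP_y/dY$, $f_n$ and $\Phi_n$ from a sampled $P_y(Y,\phi)$ amounts to a Fourier projection of Eq. (\ref{harmo}). A Python sketch with a synthetic test distribution (all numbers illustrative) is given below.
\begin{verbatim}
import numpy as np

def spin_harmonics(phi, Py, n_max=3):
    """Extract dP_y/dY, f_n and Phi_n of Eq. (harmo) from P_y(phi) sampled on a
    uniform phi grid at fixed rapidity Y (Riemann-sum Fourier projection)."""
    dphi = phi[1] - phi[0]
    dPdY = np.sum(Py)*dphi                       # integral over phi
    fn, Phin = [], []
    for n in range(1, n_max + 1):
        an = np.sum(Py*np.exp(1j*n*phi))*dphi    # = dPdY * f_n * exp(i n Phi_n)
        fn.append(np.abs(an)/dPdY)
        Phin.append(np.angle(an)/n)
    return dPdY, np.array(fn), np.array(Phin)

# Synthetic test: a distribution with a known first harmonic (illustrative numbers).
phi = np.linspace(0.0, 2*np.pi, 400, endpoint=False)
dPdY_true, f1_true, Phi1_true = 0.02, 0.3, 0.5
Py = dPdY_true/(2*np.pi)*(1.0 + 2.0*f1_true*np.cos(phi - Phi1_true))

print(spin_harmonics(phi, Py))   # recovers 0.02, f = (0.3, ~0, ~0), Phi_1 = 0.5
\end{verbatim}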
\begin{figure}[!htb]
\begin{center}
\includegraphics[height=5cm]{quad-py}$\;\;\;\;$
\includegraphics[height=4.6cm]{trans-f1}$\;\;\;\;$
\caption{Left: the polarization $P_y$ on $\phi-Y$ plane which is the spin-polarization response to the upper-right panel of \fig{th-spa-eta} up to the linear order in thermal vorticity according to \eq{spin}. Right: the directed spin flow $f_1$ defined in \eq{harmo} versus rapidity $Y$ \cite{Wei:2018zfb}.}
\label{pol:local}
\end{center}
\end{figure}
\textit{The ``sign problem".} Although the global polarization $P_y$ can be described well by simulations based on the thermal vorticity following Eq. (\ref{average-spin-vector}), such relation fails in describing the azimuthal dependence of $P_y$ at mid-rapidity. In fact, the theoretical calculations, including both the transport-model and hydrodynamic model calculations, found that $P_y(\phi)$ at mid-rapidity grows from $\phi=0$ (i.e., the in-plane direction) to $\phi=\pi/2$ (i.e., the out-of-plane direction) while the experimental data shows an opposite trend, see the left panel of \fig{pol:local2}. In addition, the longitudinal polarization $P_z(\phi)$ at mid-rapidity also has a similar ``sign problem": the theoretical calculations predicted that $P_z(\phi)\sim -\sin (2\phi )$ as shown in Fig. \ref{pol:local-Xia}. This can also be seen in \fig{th-spa-long} as the spin polarization is roughly proportional to the thermal vorticity following Eq. (\ref{average-spin-vector}). However the data show an opposite sign, see the right panel of \fig{pol:local2}. These sign problems challenge the thermal vorticity interpretation of the measured $\Lambda$ polarization and are puzzles at the moment.
Here we have several comments. (1) As we have already discussed, the feed-down decays of other strange baryons constitute a major contribution to the total yield of $\Lambda$ and $\bar\Lambda$. Thus, to bridge the measured spin polarization and the vorticity, we must take into account the feed-down contributions. In addition, a $\Lambda$ hyperon produced from a feed-down decay can have a spin polarization opposite to that of its parent particle in some decay channels, e.g., $\Sigma^0\rightarrow\Lambda+\gamma$. Recently, the feed-down effects have been carefully studied in Refs.~\cite{Xia:2019fjf,Becattini:2019ntv}; although the feed-down contribution suppresses the polarization of primordial $\Lambda$'s, it is not strong enough to resolve the sign problem. (2) At the moment, most theoretical studies are based on Eq. (\ref{average-spin-vector}), which assumes global equilibrium of the spin degrees of freedom. This might not be the case in realistic heavy ion collisions. In a non-equilibrium or near-equilibrium state, the spin polarization is not determined by the thermal vorticity and should be treated as an independent dynamical variable. This requires new theoretical frameworks such as spin hydrodynamics; see, e.g.,~\cite{Florkowski:2017ruc,Hattori:2019lfp}. Recently, there has been progress in developing these new frameworks, and hopefully numerical simulations based on them will give a more accurate description of the $\Lambda$ polarization and insight into the sign problem. (3) There have been theoretical explanations of the sign problem based on the chiral kinetic theory~\cite{Sun:2018bjl,Liu:2019krs}, the blast-wave model~\cite{Adam:2019srw} and hydrodynamics~\cite{Florkowski:2019voj,Wu:2019eyi}. However, they introduce new assumptions, such as the presence of a net chirality~\cite{Sun:2018bjl} or the dominance of the kinematic or T-vorticity in the polarization~\cite{Adam:2019srw,Florkowski:2019voj,Wu:2019eyi}, which need further examination; see Refs.~\cite{Wang:2017jpl,Liu:2020ymh,Huang:2020xyr,Becattini:2020ngo,Gao:2020vbh} for recent reviews.
\begin{figure}[!htb]
\begin{center}
\includegraphics[height=5cm]{azim-py}$\;\;\;\;$
\includegraphics[height=5cm]{star-long}$\;\;\;\;$
\caption{Left: the polarization $P_y$ as a function of the azimuthal angle $\phi$. The red squares are experimental data~\cite{Adam:2018ivw}. Right: the experimental results of longitudinal polarization $P_z(\phi)$ from STAR Collaboration~\cite{Adam:2019srw}. Note that $P_z\sim \langle\cos\theta_p^*\rangle$.}
\label{pol:local2}
\end{center}
\end{figure}
\textit{The magnetic polarization.} Finally, let us discuss one intriguing aspect of the measured global polarization: there is a visible difference between hyperons and anti-hyperons, especially at low beam energies. While the error bars are still too large to unambiguously establish a splitting between $P_{\bar{\Lambda}}$ and $P_\Lambda$, the difference shown by the data is significant enough to warrant a serious investigation into its probable causes. One natural and plausible explanation is a magnetic polarization effect (which distinguishes particles from anti-particles) in addition to the rotational polarization (which is ``blind'' to the particle/anti-particle identity)~\cite{Becattini:2016gvu,Muller:2018ibh,Guo:2019joy,Guo:2019mgh}. Indeed, the hyperon $\Lambda$ and the anti-hyperon $\bar{\Lambda}$ have negative and positive magnetic moments, respectively. When subject to an external magnetic field, the $\bar{\Lambda}$ spin would be preferentially aligned along the field direction while the $\Lambda$ spin would be preferentially aligned against it. This could qualitatively explain the observed splitting $P_{\bar{\Lambda}} > P_\Lambda$, provided that the magnetic field in heavy ion collisions is approximately parallel to the average vorticity and survives long enough, until the freeze-out time.
\begin{figure}[!htb]
\begin{center}
\includegraphics[height=5cm]{fig-mag}$\;\;\;\;$
\includegraphics[height=5cm]{fig-tB}$\;\;\;\;$
\caption{Left: The dependence on magnetic field lifetime parameter $t_B$ of the global polarization signals $P_H$ for hyperons ( $H \to \Lambda$, blue solid curves with filled symbols) and anti-hyperons ( $H \to \bar{\Lambda}$, red dashed curves with open symbols) at beam energy
$\sqrt{s_{NN}}=$19.6 (square), 27 (diamond), 39 (circle) GeV respectively~\cite{Guo:2019joy}.
Right:
The optimal value of the magnetic field lifetime parameter $\tilde{t}_B$ extracted from the polarization splitting $\Delta P$ data for a range of collision beam energies $\sqrt{s_{NN}}$. The results in this plot use the parameterization $eB(t) = eB(0) / [1+(t-t_0)^2/t_B^2]$ (see \cite{Guo:2019joy} for details). The solid curve is from a fitting analysis with the formula $\tilde{t}_B = \frac{A}{\sqrt{s_{NN}}}$. The error bars are converted from the corresponding errors of the experimental data in \cite{STAR:2017ckg,Adam:2018ivw}.}
\label{pol:mag}
\end{center}
\end{figure}
To examine whether this idea may work, quantitative simulations have been carried out recently within the AMPT framework~\cite{Guo:2019joy}. In the presence of both vorticity and a magnetic field, the polarization given in Eq.~(\ref{spin}) should be modified to include both effects as follows:
\begin{eqnarray}
\label{spin-mag}
S^\mu(x,p)=-\frac{s(s+1)}{6m}(1-n_F)\epsilon^{\mu\nu\rho\sigma} p_\nu \left [ \varpi_{\rho\sigma}(x) \mp 2 (eF_{\rho\sigma})\ \mu_\Lambda/T_f \right ] \ ,
\end{eqnarray}
where the $\mp$ sign is for $\Lambda$ and $\bar{\Lambda}$ respectively, while $\mu_\Lambda = 0.613/(2m_N)$ is the absolute value of the hyperon/anti-hyperon magnetic moment, with $m_N=938~{\rm MeV}$ being the nucleon mass. $T_f$ is the local temperature upon the particle's formation. Here we focus on the electromagnetic field component that is most relevant to the global polarization effect, namely $eB_y = eF_{31} = - eF_{13}$ along the out-of-plane direction. By adopting a certain parameterization of the time dependence of the magnetic field with a lifetime parameter $t_B$, one could then investigate how the polarization splitting depends on the $B$ field lifetime. The left panel of Fig.~\ref{pol:mag} shows how the magnetic field lifetime $t_B$ would quantitatively influence the polarization of $\Lambda$ and $\bar{\Lambda}$.
As one can see, with increasing magnetic field lifetime (which means a stronger magnetic field at late times in the collisions), $P_{\bar{\Lambda}}$ steadily increases while $P_\Lambda$ decreases at all collision energies. For a long enough $t_B$, $P_{\bar{\Lambda}}$ eventually becomes larger than $P_\Lambda$. By comparing with the experimentally measured polarization splitting, one could actually extract the optimal value of (or a constraint on) the magnetic field lifetime. Such an analysis is shown in the right panel of Fig.~\ref{pol:mag}. A number of different parameterizations were studied in \cite{Guo:2019joy}, and the overall analysis suggests an empirical formula for the possible magnetic field lifetime: $t_B = A/\sqrt{s_{NN}}$ with $A=115\pm 16\ \rm GeV\cdot fm/c$. Interestingly, this is considerably longer than the expected vacuum magnetic field lifetime without any medium effect, which can be estimated as $t_{vac}\simeq 2R_A/\gamma \simeq 26\ {\rm GeV\cdot fm\cdot c^{-1}}/\sqrt{s_{NN}}$. Such an extended magnetic field lifetime, as indicated by the polarization difference, may imply a considerable role of the medium-generated dynamical magnetic field, especially at low beam energy~\cite{Guo:2019mgh}.
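To get a feeling for the numbers quoted above, the two lifetime estimates and the field parameterization can be compared with a few lines of code. The following Python sketch is purely illustrative: the fitted coefficient $A=115~{\rm GeV\cdot fm}/c$ and the vacuum coefficient $26~{\rm GeV\cdot fm}/c$ are taken from the discussion above, while the values of $eB(0)$ and $t_0$ are arbitrary placeholders, not results of the analysis.

\begin{verbatim}
# Illustrative only: compare the fitted lifetime t_B = A/sqrt(s_NN)
# (A = 115 GeV*fm/c) with the vacuum estimate t_vac ~ 26 GeV*fm/c / sqrt(s_NN),
# and evaluate the parameterization eB(t) = eB(0)/[1 + (t - t0)^2/t_B^2].
# The values of eB(0) and t0 below are placeholders.
A_FIT = 115.0    # GeV*fm/c, fitted coefficient from the splitting data
A_VAC = 26.0     # GeV*fm/c, vacuum estimate 2 R_A / gamma
EB0   = 0.05     # GeV^2, assumed initial field strength (placeholder)
T0    = 0.0      # fm/c, assumed peak time of the field (placeholder)

def eB(t, t_B, eB0=EB0, t0=T0):
    """Parameterized field eB(t) = eB(0)/[1 + (t - t0)^2/t_B^2]."""
    return eB0 / (1.0 + (t - t0)**2 / t_B**2)

for sqrt_s in (19.6, 27.0, 39.0):           # BES energies in GeV
    t_fit, t_vac = A_FIT / sqrt_s, A_VAC / sqrt_s
    ratio = eB(5.0, t_fit) / EB0            # relative field left at t = 5 fm/c
    print(f"sqrt(s_NN)={sqrt_s:5.1f} GeV: t_B(fit)={t_fit:5.2f} fm/c, "
          f"t_vac={t_vac:4.2f} fm/c, eB(5 fm/c)/eB(0)={ratio:.2f}")
\end{verbatim}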
\section{Summary}
\label{sec:sum}
The non-vanishing global spin polarization of $\Lambda$ and $\bar{\Lambda}$ has been
measured by the STAR Collaboration in Au+Au collisions at $\sqrt{s_{NN}}=7.7-200$
GeV. Microscopically, such a global polarization originates from the spin-orbit coupling of particle scatterings
in a fluid with local vorticity. It has been shown that the spin-orbit coupling can lead to
a spin-vorticity coupling when taking an ensemble average over random
incoming momenta of scattering particles in a locally thermalized fluid.
With the spin-vorticity coupling, the local spin polarization can be obtained from
the vorticity field in the fluid. The global polarization is an integration of the local one
over the whole phase space. In this note we review recent progress on the vorticity formation
and spin polarization in heavy ion collisions
with transport models. We present an introduction to the fluid vorticity
in non-relativistic and relativistic hydrodynamics.
We discuss the spin polarization in a vortical fluid in the Wigner function formalism
for massive spin-1/2 fermions, in which we derive the freeze-out formula for the spin polarization
in heavy ion collisions. Then we show results for various properties of the kinematic and thermal vorticity with transport models,
including: the evolution in time and space, the correlation to the participant plane,
the collision energy dependence, etc.
Finally we give a brief overview of recent theoretical results for the spin polarization
of $\Lambda$ and $\bar{\Lambda}$ including the global and local polarization,
the polarization harmonics in azimuthal angles, the sign problem in the longitudinal polarization as well as the polarization difference between particles and anti-particles.
\section*{Acknowledgements}
We thank Wei-Tian Deng, Hui Li, Yu-Chen Liu, Yin Jiang, Zi-Wei Lin, Long-Gang Pang, Shuzhe Shi, De-Xian Wei, Xin-Nian Wang for collaborations and discussions. This work is supported in part by the NSFC Grants No.~11535012, No.~11675041 and No. 11890713, as well as by the NSF Grant No. PHY-1913729 and by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, within the framework of the Beam Energy Scan Theory (BEST) Topical Collaboration.
\section{Introduction}
Arrangements of hyperplanes defined by finite complex reflection groups and sets of points corresponding to the duals of the hyperplanes are a rich source of examples and a testing ground for various conjectures in commutative algebra and algebraic geometry. It is not surprising that they find their place in new paths of research developed within these two branches of mathematics.
In the present note we study the configuration $Z_{60}$ of $60$ points in $\P^3$ determined by the reflection group $G_{31}$ in the Shephard-Todd list \cite{SheTod54}. This configuration has been known for a long time; its origins go back to Felix Klein's doctoral thesis \cite{KleinPhD}. We recall basic properties relevant for our study in Section \ref{sec: history}. In Section \ref{sec: group} we discuss another group acting on the configuration, discovered only recently by Cheltsov and Shramov \cite{CheltsovShramov19}. In fact, their article was a departure point for this work.
Our main results are presented in the two subsequent sections. The first results concern unexpected surfaces admitted by the set $Z_{60}$. The concept of \emph{unexpected hypersurfaces} was first introduced for curves in the ground-breaking article \cite{CHMN} by Cook II, Harbourne, Migliore and Nagel
and generalized to arbitrary dimensions in the subsequent
article \cite{HMNT} by Harbourne, Migliore, Nagel and Teitler. Loosely speaking, a finite set of points $Z$
in a projective space admits an unexpected hypersurface of degree $d$ if a general fat point
or a finite number of general fat points impose fewer conditions on forms
of degree $d$ vanishing at $Z$ than naively expected, see Definition \ref{def:unexpected hypersurface}
for the precise statement. In Theorems \ref{thm: unexpected cone of degree 6} and \ref{thm: unexpected with 3 singularities} we show that $Z_{60}$ admits two different types of unexpected surfaces. One is a cone with a single singular point of multiplicity $6$ and the other is a surface of degree $6$ with three singular points of multiplicities $4, 2$ and $2$. It seems to be the first case where a fixed set of points admits unexpected hypersurfaces of two different kinds: one with a single general point in the spirit of the foundational article \cite{CHMN} and the other with multiple general points in the spirit of \cite{Szp18multi}. This is a new and quite unexpected phenomenon.
Theorem \ref{thm: geproci for Z60} goes in a different direction. Chiantini and Migliore realized that there are sets of points spanning the whole $\P^3$ with the striking property that their general projections to a plane are complete intersections. We say that such a set has the \geproci \, property, see Definition \ref{def geproci}. In their recent work \cite{ChiantiniMigliore19} they construct a series of examples of such sets, which they call grids, as they arise as the intersection points of two families of lines in $\P^3$ organized so that the lines within each family are pairwise disjoint, while each line from one family intersects all lines from the other family. They showed that for a small number of points, grids are the only sets with the \geproci \, property. On the other hand, in the appendix to their work there is an example of a set of $24$ points which has the \geproci \, property but is not a grid. We show that the set $Z_{60}$ behaves in the same way. It is not a grid but it has the \geproci \, property. More precisely, its projection
from a general point in $\P^3$ to $\P^2$ is a complete intersection of curves of degree
$6$ and $10$. This provides in particular a positive answer to Question 7.1 in \cite{ChiantiniMigliore19}.
We work over the field of complex numbers. We show in Proposition \ref{prop:notR} that the configuration studied here does not exist over the reals.
\section{Klein's arrangement $(60_{15})$ -- a historical outline}\label{sec: history}
We will follow a classical construction by F. Klein with a modern glimpse. Let ${\rm Gr}(2,4) = Q$ be the Grassmannian of lines in $\mathbb{P}^{3}$ embedded via the Pl\"ucker embedding as a smooth quadric hypersurface in $\mathbb{P}^{5}$. Let $L = H\cap Q$ with $H$ being a hyperplane in $\mathbb{P}^{5}$; then $L$ is called a \emph{linear line complex}. This object was studied by F. Klein in his PhD thesis, see for instance \cite{KleinPhD}. In particular, using Klein's language, there are six fundamental linear complexes corresponding to the choice of the coordinate hyperplanes $H_i=\left\{x_{i}=0\right\}$ for homogeneous coordinates $x_{0},\ldots, x_{5}$ on $\mathbb{P}^{5}$. The intersections
$$Q \cap H_{i_{1}} \cap H_{i_{2}} \cap H_{i_{3}} \cap H_{i_{4}}$$
with $0 \leq i_{1} < i_{2} < i_{3} < i_{4} \leq 5$ consist of exactly two points each. The lines corresponding to these points form a configuration of
$$2 \cdot \binom{6}{4} = 30 \text{ lines in } \mathbb{P}^{3}.$$
We denote their union by $\mathbb{L}_{30}$. These lines intersect in triples at $60$ points. Klein showed that these points can be divided into $15$ subsets of $4$ points. Each subset determines the vertices of a fundamental tetrahedron. The edges of these tetrahedra are contained in the $30$ lines. The faces and the vertices are all mutually distinct; this gives a $(60_{15})$ configuration of $60$ planes and $60$ vertices: through each vertex there pass exactly $15$ planes, and each plane contains exactly $15$ vertices. Moreover, through each of the $30$ edges $6$ planes pass, and each edge contains $6$ vertices. It is well-known by a result due to Shephard and Todd \cite{SheTod54} that this arrangement is defined by the unitary reflection group denoted there by $G_{11,520}$. In the Orlik-Solomon notation the symbol $\mathcal{A}_{2}^{3}(60)$ is used for Klein's $(60_{15})$-arrangement. Following Hunt's notation for arrangements of planes in $\mathbb{P}^{3}$ \cite{Hunt86}, we denote by $t_{i}$ the number of points where exactly $i\geq 3$ planes from the arrangement intersect, and by $t_{j}(1)$ the number of lines contained in exactly $j\geq 2$ planes of the arrangement. For $\mathcal{A}_{2}^{3}(60)$ we have:
$$t_{15} = 60, \quad t_{6} = 480, \quad t_{4} = 960,$$
$$t_{6}(1) = 30, \quad t_{3}(1) = 320, \quad t_{2}(1) = 360.$$
It is worth noticing that the set $Z_{60}$ and $\mathbb{L}_{30}$ build a $(60_{3}, 30_{6})$ configuration of points and lines.
To the arrangement $\mathcal{A}_{2}^{3}(60)$ we can associate an interesting arrangement of $10$ quadrics -- these are called in the literature \textit{Klein's fundamental quadrics}. It is worth emphasizing that each of the $30$ lines described above is contained in exactly $4$ of the fundamental quadrics. We refer to \cite{CheltsovShramov19} for the table of incidences between the $30$ lines and the $10$ Klein fundamental quadrics.
\section{A finite Heisenberg group and a group of order $80$}
\label{sec: group}
Let $H_{2,2}$ be the subgroup of $\SL_4(\C)$ generated by the following four matrices:
$$
S_1=\left(\begin{array}{cccc}
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1\\
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0
\end{array}\right),\;
S_2=\left(\begin{array}{cccc}
0 & 1 & 0 & 0\\
1 & 0 & 0 & 0\\
0 & 0 & 0 & 1\\
0 & 0 & 1 & 0
\end{array}\right),$$
$$
T_1=\left(\begin{array}{cccc}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & -1 & 0\\
0 & 0 & 0 & -1
\end{array}\right),\;
T_2=\left(\begin{array}{cccc}
1 & 0 & 0 & 0\\
0 & -1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & -1
\end{array}\right).$$
This group has order $32$ (note that all pairs of generators commute except for $S_iT_i=-T_iS_i$ for $i=1,2$), and the center and the commutator subgroup of $H_{2,2}$ are both equal to
$\left\{\jeden, -\jeden\right\}$, where $\jeden$ denotes the identity matrix of size $4$. We call $H_{2,2}$ the (finite) \emph{Heisenberg group}.
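The commutation relations and the order of $H_{2,2}$ can be double-checked by a direct computation; the following Python sketch (independent of the scripts accompanying this note) enumerates the group generated by $S_1,S_2,T_1,T_2$ by closing the set under multiplication.

\begin{verbatim}
# Check: the generators pairwise commute except S_i T_i = -T_i S_i,
# and the group they generate inside SL_4 has order 32.
import numpy as np
from itertools import product

S1 = np.array([[0,0,1,0],[0,0,0,1],[1,0,0,0],[0,1,0,0]])
S2 = np.array([[0,1,0,0],[1,0,0,0],[0,0,0,1],[0,0,1,0]])
T1 = np.diag([1,1,-1,-1])
T2 = np.diag([1,-1,1,-1])
gens = [S1, S2, T1, T2]

for i, Gi in enumerate(gens):
    for j, Gj in enumerate(gens):
        if {i, j} in ({0, 2}, {1, 3}):            # the pairs (S1,T1) and (S2,T2)
            assert np.array_equal(Gi @ Gj, -(Gj @ Gi))
        else:
            assert np.array_equal(Gi @ Gj, Gj @ Gi)

# order of the generated group, by closure (all generators are involutions)
elements = {tuple(np.eye(4, dtype=int).flatten())}
frontier = list(elements)
while frontier:
    new = []
    for e in frontier:
        E = np.array(e).reshape(4, 4)
        for G in gens:
            M = tuple((E @ G).flatten())
            if M not in elements:
                elements.add(M)
                new.append(M)
    frontier = new
print(len(elements))   # expected: 32
\end{verbatim}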
Consider the natural projection $\phi: \SL_{4}(\mathbb{C}) \rightarrow {\rm PGL}_{4}(\mathbb{C})$ and for every group $G \subset \SL_{4}(\mathbb{C})$ we define $\overline{G} = \phi(G)$. Denote by $G_{80}$ the subgroup in $\SL_{4}(\mathbb{C})$ generated by $H_{2,2}$ and the following matrix
$$ T=\frac{1+i}{2}\left(\begin{array}{cccc}
-i & 0 & 0 & i\\
0 & 1 & 1 & 0 \\
1 & 0 & 0 & 1 \\
0 & -i & i & 0
\end{array}\right),$$
and by $\overline{G_{80}}$ the image of $G_{80}$ in ${\rm PGL}_{4}(\mathbb{C})$.
With the notation fixed, we are now in a position to endow the configurations described in Section \ref{sec: history}
with coordinates. Checking all the properties of Klein's arrangement listed in Section \ref{sec: history} thus boils down to elementary linear algebra. First, one can show that Klein's fundamental quadrics are invariant under the action of the group $\overline{G_{80}}$, and this group splits them into two orbits (note that the quadrics are $H_{2,2}$-invariant):
\begin{align*}
\mathcal{Q}_{1} &= x^2 + y^2 + z^2 + w^2,\\
\mathcal{Q}_{2} =T(\mathcal{Q}_{1}) &= xw + zy,\\
\mathcal{Q}_{3} =T^2(\mathcal{Q}_{1}) &= xz + yw,\\
\mathcal{Q}_{4} =T^3(\mathcal{Q}_{1}) &= x^2 + y^2 - z^2 - w^2,\\
\mathcal{Q}_{5} =T^4(\mathcal{Q}_{1})&= x^2 - y^2 - z^2 + w^2,\\
\mathcal{Q}_{6} &= x^2 - y^2 + z^2 - w^2, \\
\mathcal{Q}_{7} =T(\mathcal{Q}_{6}) &= xw - yz, \\
\mathcal{Q}_{8} =T^2(\mathcal{Q}_{6}) &= xy + zw,\\
\mathcal{Q}_{9} =T^3(\mathcal{Q}_{6}) &= xy -zw,\\
\mathcal{Q}_{10} =T^4(\mathcal{Q}_{6}) &= xz - yw.
\end{align*}
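The splitting of the fundamental quadrics into the two orbits displayed above can be verified by substituting $T$ into $\mathcal{Q}_1$ and $\mathcal{Q}_6$ repeatedly. The following Python/SymPy sketch is only a plausibility check, and it assumes that $T$ acts on quadratic forms by the substitution $Q\mapsto Q\bigl(T\cdot(x,y,z,w)^t\bigr)$; with this convention the labelling above is reproduced up to nonzero scalars.

\begin{verbatim}
# Plausibility check: T permutes the ten fundamental quadrics (up to
# nonzero scalars), with orbits {Q1,...,Q5} and {Q6,...,Q10}.
from sympy import symbols, I, Rational, Matrix, expand, div

x, y, z, w = symbols('x y z w')
v = Matrix([x, y, z, w])
T = Rational(1, 2)*(1 + I)*Matrix([[-I, 0, 0, I],
                                   [ 0, 1, 1, 0],
                                   [ 1, 0, 0, 1],
                                   [ 0,-I, I, 0]])

Q = [x**2 + y**2 + z**2 + w**2, x*w + z*y, x*z + y*w,
     x**2 + y**2 - z**2 - w**2, x**2 - y**2 - z**2 + w**2,
     x**2 - y**2 + z**2 - w**2, x*w - y*z, x*y + z*w,
     x*y - z*w, x*z - y*w]

def act(q):
    """Substitute (x,y,z,w) -> T*(x,y,z,w) into the quadratic form q."""
    Tv = T*v
    return expand(q.subs(dict(zip((x, y, z, w), Tv)), simultaneous=True))

def which(q):
    """1-based index of the listed quadric proportional to q (or None)."""
    for k, qk in enumerate(Q, start=1):
        quo, rem = div(q, qk, x, y, z, w)
        if rem == 0 and quo.is_constant():
            return k
    return None

for start in (1, 6):
    orbit, q = [start], Q[start - 1]
    for _ in range(4):
        q = act(q)
        orbit.append(which(q))
    print(orbit)   # expected: [1, 2, 3, 4, 5] and [6, 7, 8, 9, 10]
\end{verbatim}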
Using equations of quadrics and taking into account that each pair of them intersects in $4$ of $30$ Klein's lines, we identify the equations of lines
\[\begin{array}{ll}
\ell_{1} = V(x,y), & \ell_{2} = V(z-w, x-y), \\
\ell_{3} = V(z-w,x+y), &\ell_{4} = V(z +i\cdot w, x +i\cdot y), \\
\ell_{5} = V(z+i \cdot w, x-i\cdot y), & \ell_{6} = V(z+w,x-y), \\
\ell_{7} = V(z+w, x+y), & \ell_{8} = V(z-i \cdot w, x+ i\cdot y), \\
\ell_{9} = V(z - i\cdot w, x -i\cdot y), & \ell_{10} = V(z,x), \\
\ell_{11} = V(y-w,x-z), & \ell_{12} = V(y-w,x+z), \\
\ell_{13} = V(y + i\cdot w, x+ i\cdot z), & \ell_{14} = V(y + i \cdot w, x - i\cdot z), \\
\ell_{15} = V(y+w, x-z), & \ell_{16} = V(y+w, x+z), \\
\ell_{17} = V(y-i \cdot w, x + i \cdot z), & \ell_{18} = V(y - i \cdot w, x - i \cdot z), \\
\ell_{19} = V(w,x), & \ell_{20} = V(y-z, x-w), \\
\ell_{21} = V(y-z, x+w), & \ell_{22} = V(y + i\cdot z, x + i \cdot w), \\
\ell_{23} = V(y + i \cdot z, x-i \cdot w), & \ell_{24} = V (y+z, x-w),\\
\ell_{25} = V(y+z,x+w), & \ell_{26} = V(y - i \cdot z, x+ i \cdot w), \\
\ell_{27} = V(y-i \cdot z, x - i\cdot w), & \ell_{28} = V(z,y), \\
\ell_{29} = V(w,y), & \ell_{30} = V(w,z).
\end{array}\]
Finally, taking intersection points of lines, we identify coordinates of points in the $Z_{60}$ set
\[\begin{array}{lll}
P_{1} = [0:0:1:1] & P_{2} = [0:0:1:i] & P_{3} = [0:0:1:-1] \\
P_{4} = [0:0:1:-i] & P_{5} = [0:1:0:1] & P_{6} = [0:1:0:i] \\
P_{7} = [0:1:0:-1] & P_{8} = [0:1:0:-i] & P_{9} = [0:1:1:0] \\
P_{10} = [0:1:i:0] & P_{11} = [0:1:-1:0] & P_{12} = [0:1:-i:0] \\
P_{13} = [1:0:0:1] & P_{14} = [1:0:0:i] & P_{15} = [1:0:0:-1] \\
P_{16} = [1:0:0:-i] & P_{17} = [1:0:1:0] & P_{18} = [1:0:i:0]\\
P_{19} = [1:0:-1:0] & P_{20} = [1:0:-i:0] & P_{21} = [1:1:0:0]\\
P_{22} = [1:i:0:0] & P_{23} = [1:-1:0:0] & P_{24} = [1:-i:0:0] \\
P_{25} = [1:0:0:0] & P_{26} = [0:1:0:0] & P_{27} = [0:0:1:0]\\
P_{28} =[0:0:0:1] & P_{29} = [1:1:1:1] & P_{30} = [1:1:1:-1]\\
P_{31} = [1:1:-1:1] & P_{32} = [1:1:-1:-1] & P_{33} = [1:-1:1:1]\\
P_{34} = [1:-1:1:-1] & P_{35} = [1:-1:-1:1] & P_{36} = [1:-1:-1:-1] \\
P_{37} = [1:1:i:i] & P_{38} = [1:1:i:-i] & P_{39} = [1:1:-i:i]\\
P_{40} = [1:1:-i:-i] & P_{41} = [1:-1:i:i] & P_{42} = [1:-1:i:-i]\\
P_{43} = [1:-1:-i:i] & P_{44} = [1:-1:-i:-i] & P_{45} = [1:i:1:i]\\
P_{46} = [1:i:1:-i] & P_{47} = [1:-i:1:i] & P_{48} = [1:-i:1:-i] \\
P_{49} = [1:i:-1:i] & P_{50} = [1:i:-1:-i] & P_{51} = [1:-i:-1:i]\\
P_{52} = [1:-i:-1:-i] & P_{53} = [1:i:i:1] & P_{54} = [1:i:-i:1]\\
P_{55} = [1:-i:i:1] & P_{56} = [1:-i:-i:1] & P_{57} = [1:i:i:-1]\\
P_{58} = [1:i:-i:-1] & P_{59} = [1:-i:i:-1] & P_{60} = [1:-i:-i:-1].
\end{array}\]
The following lemma proved in \cite[Lemma 3.18]{CheltsovShramov19} gives useful geometric information about $Z_{60}$ and $\L_{30}$.
\begin{lemma}[Line-point incidences]
\label{lemma:Heseinebrg-lines-points}
The set $Z_{60} \subset \mathbb{P}^{3}$ contains all the intersection points of lines in $\L_{30}$. Moreover, for every point $P\in Z_{60}$ there are exactly three lines from $\L_{30}$ passing through $P$.
\end{lemma}
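The statement of the lemma can also be confirmed by brute force: the $30$ lines can be regenerated from the visible pattern of the list \eqref{eq:30 lines} (the six coordinate lines and three families of eight lines), their pairwise intersections computed, and the incidences counted. The following Python/SymPy sketch is such a check; it is independent of the script \cite{60script} used elsewhere in the paper.

\begin{verbatim}
# Check: the 30 Klein lines meet pairwise in exactly 60 distinct points,
# and each of these points lies on exactly three of the lines.  Lines are
# encoded as pairs of linear forms (coefficient vectors in x, y, z, w);
# the generation below reproduces the explicit list given above.
from sympy import Matrix, I, simplify
from itertools import combinations

units = [1, I, -1, -I]
lines = []
for i, j in combinations(range(4), 2):          # the six coordinate lines
    f1 = [0]*4; f1[i] = 1
    f2 = [0]*4; f2[j] = 1
    lines.append((f1, f2))

def form(pos, coef):                            # linear form with given support
    f = [0]*4
    for p, c in zip(pos, coef):
        f[p] = c
    return f

# V(v_{i1} - a*v_{j1}, v_{i2} - b*v_{j2}) with a a fourth root of unity, b = +-a
for (i1, j1, i2, j2) in [(2, 3, 0, 1), (1, 3, 0, 2), (1, 2, 0, 3)]:
    for a in units:
        for b in (a, -a):
            lines.append((form((i1, j1), (1, -a)), form((i2, j2), (1, -b))))
assert len(lines) == 30

def intersection(L1, L2):
    """Intersection point of two lines of P^3, or None if they are skew."""
    ker = Matrix([L1[0], L1[1], L2[0], L2[1]]).nullspace()
    return ker[0] if len(ker) == 1 else None

def normalize(pt):
    lead = next(c for c in pt if c != 0)
    return tuple(simplify(c/lead) for c in pt)

points = set()
for L1, L2 in combinations(lines, 2):
    pt = intersection(L1, L2)
    if pt is not None:
        points.add(normalize(pt))
print(len(points))                              # expected: 60

def on_line(pt, L):
    return all(sum(c*t for c, t in zip(f, pt)) == 0 for f in L)

print({sum(on_line(pt, L) for L in lines) for pt in points})   # expected: {3}
\end{verbatim}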
Now we turn to algebraic properties of the ideal of points in $Z_{60}$.
\begin{lemma}[Generators of $I(Z_{60})$]\label{lem: 24 generators of J}
The ideal $J=I(Z_{60})$ of points in $Z_{60}$ is generated by $24$ forms of degree $6$.
\end{lemma}
\begin{proof}
We want to show that the following, particularly nice forms, generate $J$, namely
\begin{equation}\label{eq: 24 generators}
\begin{array}{llll}
xy(x^4-y^4) & xz(z^4-x^4) & xw(x^4-w^4) & yz(y^4-z^4)\\
yw(w^4-y^4) & zw(z^4-w^4) & xy(z^4-w^4) & xz(y^4-w^4)\\
xw(y^4-z^4) & yz(x^4-w^4) & yw(x^4-z^4) & zw(x^4-y^4)\\
yw(x^2y^2-z^2w^2) & xw(x^2y^2-z^2w^2) & yz(x^2y^2-z^2w^2) & xz(x^2y^2-z^2w^2)\\
zw(x^2z^2-y^2w^2) & xw(x^2z^2-y^2w^2) & yz(x^2z^2-y^2w^2) & xy(x^2z^2-y^2w^2) \\
zw(y^2z^2-x^2w^2) & yw(y^2z^2-x^2w^2) & xz(y^2z^2-x^2w^2) & xy(y^2z^2-x^2w^2).
\end{array}
\end{equation}
To this end, we study first the diminished set $W=Z_{60}\setminus\{P_{25},P_{26},P_{27},P_{28}\}$,
i.e., the set $Z_{60}$ without the $4$ coordinate points in $\P^3$.
We claim that there is no form of degree $5$ vanishing along $W$. Note that the coordinate points in $\P^3$ lie in pairs on the lines $\ell_1$, $\ell_{10}$, $\ell_{19}$, $\ell_{28}$, $\ell_{29}$ and $\ell_{30}$ from the set $\mathbb{L}_{30}$. In particular, each of the remaining $24$ lines, forming the set $\mathbb{L}_{24}$, still contains $6$ points from the set $W$. Note also that the coordinate points in $\P^3$ are not contained in the quadrics $Q_1$, $Q_4$, $Q_5$, $Q_6$; hence the $12$ lines of $\mathbb{L}_{30}$ contained in each of these quadrics belong to $\mathbb{L}_{24}$. Suppose now that there is a surface $\Omega$ of degree $5$ vanishing along $W$. Then, by B\'ezout's Theorem, $\Omega$ contains all lines in $\mathbb{L}_{24}$. Taking the intersection of $\Omega$ with the irreducible quadric $Q_1$, we identify the $12$ lines contained in $Q_1$ as lying in $\Omega$. Again, by B\'ezout's Theorem, this is possible only if $Q_1$ is a component of $\Omega$. By the same token, the quadrics $Q_4$, $Q_5$ and $Q_6$ are also components of $\Omega$. As this is clearly not possible, we conclude that $\Omega$ does not exist.
Hence the points in $W$ impose independent conditions on forms of degree $5$ in $\P^3$, i.e., we have
\[
H^1(\P^3;\mathcal{O}_{\P^3}(5)\otimes I(W))=0.
\]
By \cite[Theorem 1.8.3]{PAG}, this gives
\[
{\rm reg} (I(W))=6.
\]
It follows that $W$ imposes independent conditions on forms of degree $6$ as well, hence
\[
h^0(\P^3,\mathcal{O}_{\P^3}(6)\otimes I(W))=\binom{9}{3}-56=28.
\]
In addition to generators listed in \eqref{eq: 24 generators} we have the following $4$ generators:
\begin{align*}
g_1 & =2x^2y^2z^2-x^4w^2-y^4w^2-z^4w^2+w^6,\\
g_2 & =2x^2y^2w^2-x^4z^2-y^4z^2-w^4z^2+z^6,\\
g_3 & =2x^2z^2w^2-x^4y^2-z^4y^2-w^4y^2+y^6,\\
g_4 & =2y^2z^2w^2-y^4x^2-z^4x^2-w^4x^2+x^6.
\end{align*}
Now, it is easy to see that requiring vanishing at the $4$ coordinate points kills the above additional generators.
\end{proof}
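As a quick sanity check, one can verify by computer that the $24$ sextics in \eqref{eq: 24 generators} indeed vanish along $Z_{60}$. In the following Python/SymPy sketch (independent of the Singular script \cite{60script}) the points of $Z_{60}$ are generated from their visible pattern and the generators are rebuilt, up to sign, from the structure of the list.

\begin{verbatim}
# Check that the 24 sextics vanish at all 60 points of Z_60.  The points:
# the 4 coordinate points, the 24 points supported on a coordinate line,
# and the 32 points [1:a:b:c] with a, b, c fourth roots of unity and
# (a*b*c)^2 = 1.
from sympy import symbols, I
from itertools import combinations

x, y, z, w = V = symbols('x y z w')
units = [1, I, -1, -I]

points = [tuple(1 if k == i else 0 for k in range(4)) for i in range(4)]
for i, j in combinations(range(4), 2):
    for u in units:
        points.append(tuple(1 if k == i else (u if k == j else 0)
                            for k in range(4)))
points += [(1, a, b, c) for a in units for b in units for c in units
           if (a*b*c)**2 == 1]
assert len(points) == 60

gens = []
for i, j in combinations(range(4), 2):
    s, t = [V[k] for k in range(4) if k not in (i, j)]
    u, v = V[i], V[j]
    gens += [u*v*(u**4 - v**4), u*v*(s**4 - t**4)]
for i in (1, 2, 3):                       # partitions {x, V[i]} | {rest}
    a, b = V[0], V[i]
    c, d = [V[k] for k in range(1, 4) if k != i]
    q = a**2*b**2 - c**2*d**2
    gens += [u*v*q for u in (a, b) for v in (c, d)]
assert len(gens) == 24

for g in gens:
    assert all(g.subs(dict(zip(V, P))) == 0 for P in points)
print("all 24 sextics vanish along Z_60")
\end{verbatim}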
\section{Unexpected hypersurfaces associated with $Z_{60}$}
In the ground-breaking work \cite{CHMN}, Cook II, Harbourne, Migliore and Nagel introduced the concept of unexpected curves. This notion was generalized to arbitrary hypersurfaces in the subsequent article \cite{HMNT} by Harbourne, Migliore, Nagel and Teitler.
\begin{definition}\label{def:unexpected hypersurface}
We say that a reduced set of points $Z\subset\P^N$ \emph{admits an unexpected hypersurface} of degree $d$
if there exists a sequence of non-negative integers $m_1,\ldots,m_s$ such that for general points $P_1,\ldots,P_s$
the zero-dimensional subscheme $P = m_1P_1+\ldots +m_sP_s$ fails to impose independent conditions on forms
of degree $d$ vanishing along $Z$ and the set of such forms is non-empty. In other words, we have
$$h^0(\P^N;\calo_{\P^N}(d)\otimes I(Z)\otimes I(P))>
\max\left\{0, h^0(\P^N;\calo_{\P^N}(d)\otimes I(Z))-\sum_{i=1}^s\binom{N+m_i-1}{N}\right\}.$$
\end{definition}
Following \cite[Definition 2.5]{ChiantiniMigliore19} we introduce also the following notion.
\begin{definition}[Unexpected cone property]
Let $Z$ be a finite set of points in $\P^N$ and let $d$ be a positive integer. We say that $Z$ has the \emph{unexpected cone property} $\calc(d)$, if for a general point $P\in\P^3$, there exists an unexpected (in the sense of Definition \ref{def:unexpected hypersurface}) hypersurface $S_P$ of degree $d$ and multiplicity $d$ at $P$ passing through all points in $Z$.
\end{definition}
\begin{theorem}[Unexpected cone property of $Z_{60}$]\label{thm: unexpected cone of degree 6}
The set $Z_{60}$ has the $\calc(6)$ property. Moreover, the unexpected cone of degree $6$ is unique.
\end{theorem}
\begin{proof}
Let $P=(a:b:c:d)$ be a general point in $\P^3$. Then
\begin{align*}
F = & xy(x^4-y^4)cd(c^4-d^4)+xz(z^4-x^4)bd(b^4-d^4)+xw(x^4-w^4)bc(b^4-c^4)\\
+& yz(y^4-z^4)ad(a^4-d^4)+yw(w^4-y^4)ac(a^4-c^4)+zw(z^4-w^4)ab(a^4-b^4)\\
+& 5xy(z^4-w^4)cd(a^4-b^4)+5xz(y^4-w^4)bd(c^4-a^4)+5xw(y^4-z^4)bc(a^4-d^4)\\
+& 5yz(x^4-w^4)ad(b^4-c^4)+5yw(x^4-z^4)ac(d^4-b^4)+5zw(x^4-y^4)ab(c^4-d^4)\\
+& 10yw(x^2y^2-z^2w^2)ac(c^2d^2-a^2b^2)+10xw(x^2y^2-z^2w^2)bc(a^2b^2-c^2d^2)\\
+& 10yz(x^2y^2-z^2w^2)ad(a^2b^2-c^2d^2)+10xz(x^2y^2-z^2w^2)bd(c^2d^2-a^2b^2)\\
+& 10zw(x^2z^2-y^2w^2)ab(a^2c^2-b^2d^2)+10xw(x^2z^2-y^2w^2)bc(b^2d^2-a^2c^2)\\
+& 10yz(x^2z^2-y^2w^2)ad(b^2d^2-a^2c^2)+10xy(x^2z^2-y^2w^2)cd(a^2c^2-b^2d^2)\\
+& 10zw(y^2z^2-x^2w^2)ab(a^2d^2-b^2c^2)+10yw(y^2z^2-x^2w^2)ac(b^2c^2-a^2d^2)\\
+& 10xz(y^2z^2-x^2w^2)bd(b^2c^2-a^2d^2)+10xy(y^2z^2-x^2w^2)cd(a^2d^2-b^2c^2)
\end{align*}
defines a cone of degree $6$ with the vertex at $P$. Being unexpected cone for $\mathcal{C}(6)$ follows immediately from Lemma \ref{lem: 24 generators of J} since a point of multiplicity $6$ is expected to impose $56$ conditions.
The equation of $F$ has been found by Singular \cite{Singular} and can be verified by the script \cite{60script} accompanying our manuscript. Once the equation is there, the claimed properties can be checked, at least in principle, by hand. However, the highly symmetric form of $F$, with respect to the sets of variables $\left\{x,y,z,w\right\}$ and $\left\{a,b,c,d\right\}$ is not a coincidence. It was established in \cite{HMNT} that the BMSS-duality, observed first in \cite{BMSS}, implies that the equation of $F$, considered as a polynomial in variables $\left\{a,b,c,d\right\}$, describes the tangent cone at $P$ of the surface defined by $F$ in variables $\left\{x,y,z,w\right\}$. Since the set of zeroes of $F$ is a cone with vertex $P$, it is the same cone in both sets of variables.
The property that $F$ is unique follows easily from the fact that for a general $P$ the polynomial $F$ is irreducible (it is a cone over a smooth curve of degree $6$), and any other cone of degree $6$ with vertex at $P$ and multiplicity $6$ would intersect $F$ in $36$ lines. Taking $P$ general, away from the secant variety of $Z_{60}$, i.e., away from the union of lines through pairs of points in $Z_{60}$, these $36$ lines would not be enough to cover the whole set $Z_{60}$.
\end{proof}
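The existence and uniqueness statement can be tested for a concrete (random, hence presumably general) rational point $P$: inside the $24$-dimensional space of sextics vanishing along $Z_{60}$, the sextics that are cones with vertex $P$ are exactly the kernel of the linear map sending a sextic $F$ to the directional derivative $a\,\partial_x F+b\,\partial_y F+c\,\partial_z F+d\,\partial_w F$. The following Python/SymPy sketch computes the rank of this map; a kernel of dimension $1$ corresponds to the unique unexpected cone. The sketch reuses the generator construction from the previous code block and is independent of the script \cite{60script}.

\begin{verbatim}
# Check: within the span of the 24 degree-6 generators of I(Z_60), the
# sextics that are cones with vertex a random rational point P form a
# 1-dimensional space.
from sympy import symbols, Matrix, Poly, Rational, diff
from itertools import combinations
import random

x, y, z, w = V = symbols('x y z w')

def generators():
    gens = []
    for i, j in combinations(range(4), 2):
        s, t = [V[k] for k in range(4) if k not in (i, j)]
        u, v = V[i], V[j]
        gens += [u*v*(u**4 - v**4), u*v*(s**4 - t**4)]
    for i in (1, 2, 3):
        a, b = V[0], V[i]
        c, d = [V[k] for k in range(1, 4) if k != i]
        q = a**2*b**2 - c**2*d**2
        gens += [u*v*q for u in (a, b) for v in (c, d)]
    return gens

gens = generators()
P = [Rational(random.randint(1, 100)) for _ in range(4)]     # a "general" point

# directional derivative along P of each generator (a quintic)
derivs = [sum(P[k]*diff(g, V[k]) for k in range(4)) for g in gens]
dicts = [Poly(d, *V).as_dict() for d in derivs]

# rows: the 56 degree-5 monomials, columns: the 24 generators
monoms5 = sorted(Poly((x + y + z + w)**5, *V).monoms())
M = Matrix([[dct.get(m, 0) for dct in dicts] for m in monoms5])
print(M.shape)               # (56, 24)
print(24 - M.rank())         # expected: 1, the unique unexpected cone
\end{verbatim}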
\begin{theorem}[Unexpected surface with $3$ general points]
\label{thm: unexpected with 3 singularities}
Let $P,Q_1,Q_2$ be general points in $\P^3$. Then there exists a unique surface of degree $6$ vanishing in all points of $Z_{60}$ with a point of multiplicity $4$ at $P$ and multiplicity $2$ at both $Q_1$ and $Q_2$.
\end{theorem}
\begin{proof}
Let $P=(a:b:c:d)$. We consider first sextics vanishing at all the points in $Z_{60}$ and at a general point $P$ to order $4$. We are not able to write them down explicitly because the equations are too complex. In fact, the coefficients in front of the monomials of degree $6$ in the variables $x,y,z,w$ are polynomials of degree $45$ in the variables $a,b,c,d$. This is quite surprising when confronted with the proof of Theorem \ref{thm: unexpected cone of degree 6}.
Our approach is quite standard. We outline it here and refer to our script \cite{60script} for details. We build an interpolation matrix whose columns correspond to the $24$ generators of $J=I(Z_{60})$. In the rows we write down, one by one, the $20$ differential conditions imposed by a point of multiplicity $4$ (equivalently, the vanishing of all partial derivatives of order $3$) and evaluate them at $P$. This gives a $20\times 24$ matrix. Even though many of its coefficients are $0$, the matrix is still too large to reproduce here. Nevertheless it is simple enough that Singular can compute its rank, which is $15$. That means that vanishing at $P$ to order $4$ imposes only $15$ instead of the expected $20$ conditions on the generators of $J$. With this fact established, the remaining part of the proof is easy. We have a linear system of sextics of dimension $24-15=9$, so it allows $2$ singularities $Q_1$ and $Q_2$ anywhere. Interestingly, no symbolic algebra program we asked was able to determine the coefficients of the unique sextic vanishing at $P$ to order $4$ and at $Q_1,Q_2$ to order $2$. We expect that its coefficients, viewed as functions of the coordinates of the singular points, are huge.
\end{proof}
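The rank computation described above can be reproduced with any computer algebra system. The following Python/SymPy sketch builds the $20\times 24$ interpolation matrix for a random rational point $P$, using the vanishing of all third-order partial derivatives as the $20$ conditions for a point of multiplicity $4$; it is independent of the Singular script \cite{60script}.

\begin{verbatim}
# Reproduce the rank computation: the 20 x 24 matrix of third-order
# partials of the 24 generators of I(Z_60), evaluated at a random
# rational point P, is expected to have rank 15.
from sympy import symbols, Matrix, Rational, diff
from itertools import combinations, combinations_with_replacement
import random

x, y, z, w = V = symbols('x y z w')

def generators():
    gens = []
    for i, j in combinations(range(4), 2):
        s, t = [V[k] for k in range(4) if k not in (i, j)]
        u, v = V[i], V[j]
        gens += [u*v*(u**4 - v**4), u*v*(s**4 - t**4)]
    for i in (1, 2, 3):
        a, b = V[0], V[i]
        c, d = [V[k] for k in range(1, 4) if k != i]
        q = a**2*b**2 - c**2*d**2
        gens += [u*v*q for u in (a, b) for v in (c, d)]
    return gens

gens = generators()
P = {V[k]: Rational(random.randint(1, 100)) for k in range(4)}

rows = []
for idx in combinations_with_replacement(range(4), 3):   # 20 third-order partials
    row = []
    for g in gens:
        dg = g
        for k in idx:
            dg = diff(dg, V[k])
        row.append(dg.subs(P))
    rows.append(row)

M = Matrix(rows)
print(M.shape, M.rank())     # expected: (20, 24) and rank 15
\end{verbatim}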
\begin{remark}
Note that unexpected hypersurfaces with multiple singular points seem to be quite rare. The sextics described in Theorem \ref{thm: unexpected with 3 singularities} are the only example we are aware of, apart from a series of examples related to Fermat-type arrangements constructed by the third author in \cite{Szp18multi}.
\end{remark}
\section{Projections of $Z_{60}$}
In this section we study general projections of $Z_{60}$. Our motivation comes from a recent work of Chiantini and Migliore \cite{ChiantiniMigliore19}.
\begin{definition}[\geproci \, property]
\label{def geproci}
We say that a finite set $Z\subset\P^3$ has a \emph{general projection complete intersection} property (\geproci \, in short), if its projection from a general point in $\P^3$ to $\P^2$ is a complete intersection.
\end{definition}
Obvious examples of \geproci\, sets are complete intersections in $\P^3$ with the property that one of the intersecting surfaces is a plane. Thus it is interesting to study non-degenerate (i.e. not contained in a hyperplane) sets $Z$ with the \geproci\, property.
Chiantini and Migliore observed in \cite{ChiantiniMigliore19} that such sets exist.
They distinguished grids as a wide class of sets which enjoy the \geproci\, property.
\begin{definition}[$(a,b)$-grid]
Let $a, b$ be positive integers. A set $Z$ of $ab$ points in $\mathbb{P}^{3}$ is an $(a,b)$-grid if there exist a set of $a$ pairwise skew lines $\ell_{1}, \ldots,\ell_{a}$ and a set of $b$ pairwise skew lines $\ell_{1}', \ldots, \ell_{b}'$ such that
$$Z = \{\ell_{i} \cap \ell_{j}' \, :\, i \in \{1, ..., a\}, j \in \{1, ..., b\}\}.$$
This means in particular that for all $i,j$ the lines $\ell_{i}, \ell_j'$ are different and incident.
\end{definition}
Indeed, projecting lines in $\P^3$ to $\P^2$ from a general point of $\P^3$ results again in lines in $\P^2$. Let $\pi: \P^3\dashrightarrow \P^2$ be such a projection. Then $\pi(Z)$ is the complete intersection of the curves $C=\pi(\ell_1)+\ldots +\pi(\ell_a)$ and $D=\pi(\ell_1')+\ldots +\pi(\ell_b')$.
The Appendix to \cite{ChiantiniMigliore19} contains examples of sets of points in $\P^3$ which are not $(a,b)$-grids, but which have the \geproci\, property. Such sets seem to be extremely rare, which motivated the following problem.
\begin{question}[Chiantini, Migliore, \cite{ChiantiniMigliore19} Question 7.1]
Are there any examples of $ab$ points in $\mathbb{P}^{3}$, other than $ab = 12, 16, 20$ or $24$, that are not $(a, b)$-grids but that have a general projection that is a complete intersection in $\mathbb{P}^{2}$ of type $(a, b)$?
\end{question}
It turns out that the set $Z_{60}$ is not a grid and yet it has the \geproci \, property.
\begin{proposition}
The set $Z_{60}$ is not an $(a,b)$-grid for any choice of $a,b$.
\end{proposition}
\begin{proof}
The only possibilities for the numbers $a$ and $b$ up to their order are:
$$(2,30),\;\; (3,20),\;\; (4,15),\;\; (5,12)\; \mbox{ and }\;(6,10).$$
It is easy to check that each of the distinguished lines
$\ell_1,\ldots,\ell_{30}$ contains $6$ points from $Z_{60}$ and this is the highest number of collinear points in $Z_{60}$. There are additional $320$ lines meeting $Z_{60}$ in $3$ points. In any case, the number of collinear points in $Z_{60}$ is too small to allow the grid structure.
\end{proof}
\begin{theorem}[$Z_{60}$ is \geproci]\label{thm: geproci for Z60}
The set $Z_{60}$ has the \geproci \, property. More precisely, its general projection to $\P^2$ is a complete intersection of curves of degree $6$ and $10$.
\end{theorem}
\begin{proof}
Let $P=(a:b:c:d)$ be a general point in $\P^3$ and let $\pi$ be the rational map
$$\P^3\ni(x:y:z:w)\dashrightarrow (ay-bx:bz-cy:cw-dz)\in\P^2.$$
Then, the cone $F$ associated to $P$ in the proof of Theorem \ref{thm: unexpected cone of degree 6} projects to the following curve of degree $6$ in variables $(s:t:u)$
\begin{align*}
C_6 = & b(a^4-b^4)tu(t^4-u^4)+c(a^4-c^4)su(u^4-s^4)+d(a^4-d^4)st(s^4-t^4)\\
+ & 5b(d^4-c^4)s^4tu+5c(b^4-d^4)st^4u+5d(c^4-b^4)stu^4\\
+ & 10b(a^2d^2-b^2c^2)s^2t^3u+10c(a^2d^2-b^2c^2)s^3t^2u+10d(a^2c^2-b^2d^2)s^3tu^2\\
+ & 10b(b^2d^2-a^2c^2)s^2tu^3+10c(a^2b^2-c^2d^2)st^2u^3+10d(c^2d^2-a^2b^2)st^3u^2.
\end{align*}
By construction, the projection of $Z_{60}$ is contained in $C_6$. Somewhat surprisingly, there is a certain ambiguity in the choice of a curve of degree $10$ cutting out on $C_6$ precisely the set $\pi(Z_{60})$. The most appealing way comes from the geometry of the arrangement of lines $\mathbb{L}_{30}$.
Using explicit equations of lines $\ell_i$ and coordinates of points $P_i$, it is easy to check that there are $6$ ways of choosing $10$ disjoint lines among $\left\{\ell_1,\ldots,\ell_{30}\right\}$ covering the set $Z_{60}$.
These selections are indicated in Table \ref{tab: 10 disjoint lines}.
\begin{tabularx}{\linewidth}{|l|c|c|c|c|c|c|}
\caption{The division of the $30$ lines into $6$ groups of $10$ disjoint lines.}
\label{tab: 10 disjoint lines}
\\
\toprule
\hline
& A & B & C & D & E & F\\
\hline
$\ell_{1}$ & + & + & & & & \\
\hline
$\ell_2$ & & & & & + & + \\
\hline
$\ell_3$ & & & + & + & & \\
\hline
$\ell_4$ & & & + & & + & \\
\hline
$\ell_5$ & & & & + & & + \\
\hline
$\ell_6$ & & & + & + & & \\
\hline
$\ell_7$ & & & & & + & + \\
\hline
$\ell_8$ & & & & + & & + \\
\hline
$\ell_9$ & & & + & & + & \\
\hline
$\ell_{10}$ & & & + & & & + \\
\hline
$\ell_{11}$ & & + & & + & & \\
\hline
$\ell_{12}$ & + & & & & + & \\
\hline
$\ell_{13}$ & + & & & + & & \\
\hline
$\ell_{14}$ & & + & & & + & \\
\hline
$\ell_{15}$ & + & & & & + & \\
\hline
$\ell_{16}$ & & + & & + & & \\
\hline
$\ell_{17}$ & & + & & & + & \\
\hline
$\ell_{18}$ & + & & & + & & \\
\hline
$\ell_{19}$ & & & & + & + & \\
\hline
$\ell_{20}$ & + & & + & & & \\
\hline
$\ell_{21}$ & & + & & & & + \\
\hline
$\ell_{22}$ & & + & + & & & \\
\hline
$\ell_{23}$ & + & & & & & + \\
\hline
$\ell_{24}$ & & + & & & & + \\
\hline
$\ell_{25}$ & + & & + & & & \\
\hline
$\ell_{26}$ & + & & & & & + \\
\hline
$\ell_{27}$ & & + & + & & & \\
\hline
$\ell_{28}$ & & & & + & + & \\
\hline
$\ell_{29}$ & & & + & & & + \\
\hline
$\ell_{30}$ & + & + & & & & \\
\hline
\end{tabularx}
Since the curve $C_6$ is irreducible, the image under the projection of any selection of $10$ disjoint lines out of $\L_{30}$ cuts $C_6$ in exactly $60$ distinct points. It follows that $C_6$ and the lines intersect transversally, and thus the intersection is scheme-theoretic.
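The existence of exactly six selections of $10$ pairwise disjoint lines covering $Z_{60}$, as recorded in Table \ref{tab: 10 disjoint lines}, can also be confirmed by an exhaustive search: since each of the $30$ lines contains exactly $6$ points of $Z_{60}$, any $10$ pairwise skew lines automatically cover $10\cdot 6=60$ distinct points of $Z_{60}$, so it suffices to count $10$-element sets of pairwise skew lines. The following Python/SymPy sketch (independent of \cite{60script}) does this, regenerating the lines as in the earlier incidence check.

\begin{verbatim}
# Count the 10-element sets of pairwise skew lines among the 30 Klein
# lines; the expected answer is 6 (the groups A-F of Table 1).
from sympy import Matrix, I
from itertools import combinations

units = [1, I, -1, -I]
lines = []
for i, j in combinations(range(4), 2):
    f1 = [0]*4; f1[i] = 1
    f2 = [0]*4; f2[j] = 1
    lines.append((f1, f2))

def form(pos, coef):
    f = [0]*4
    for p, c in zip(pos, coef):
        f[p] = c
    return f

for (i1, j1, i2, j2) in [(2, 3, 0, 1), (1, 3, 0, 2), (1, 2, 0, 3)]:
    for a in units:
        for b in (a, -a):
            lines.append((form((i1, j1), (1, -a)), form((i2, j2), (1, -b))))

def meet(L1, L2):
    # two distinct lines of P^3 meet iff their four defining forms are dependent
    return Matrix([L1[0], L1[1], L2[0], L2[1]]).rank() < 4

skew = [[i != j and not meet(lines[i], lines[j]) for j in range(30)]
        for i in range(30)]

selections = []
def extend(chosen, start):
    if len(chosen) == 10:
        selections.append(tuple(chosen))
        return
    for k in range(start, 30):
        if all(skew[k][c] for c in chosen):
            extend(chosen + [k], k + 1)

extend([], 0)
print(len(selections))       # expected: 6
\end{verbatim}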
We complete our considerations providing explicit equations of images $\ell_i'$ of the $30$ lines from~$\mathbb{L}_{30}$.
\begin{equation}\label{eq:30 lines}
\begin{array}{l}
\ell_{1}' = s, \\
\ell_{2}' = (c^2-cd)s+(ac-ad-bc+bd)t+(-ab+b^2)u, \\
\ell_{3}' = (c^2-cd)s+(ac-ad+bc-bd)t+(-ab-b^2)u, \\
\ell_{4}' = (i\cdot cd+c^2)s+(i\cdot(ad+bc)+ac-bd)t+(i\cdot ab-b^2)u, \\
\ell_{5}' = (i\cdot cd+c^2)s+(i\cdot (ad-bc)+ac+bd)t+(i\cdot ab+b^2)u \\
\ell_{6}' = (c^2+cd)s+(ac+ad-bc-bd)t+(ab-b^2)u, \\
\ell_{7}' = (c^2+cd)s+(ac+ad+bc+bd)t+(ab+b^2)u, \\
\ell_{8}' = (-i\cdot cd+c^2)s+(-i\cdot(ad-bc)+ac+bd)t+(-i\cdot ab+b^2)u, \\
\ell_{9}' = (-i\cdot cd+c^2)s+(-i\cdot(ad+bc)+ac-bd)t+(-i\cdot ab-b^2)u, \\
\ell_{10}' = cs+at, \\
\ell_{11}' = (bc-cd)s+(-ad+bc)t+(-ab+bc)u, \\
\ell_{12}' = (bc-cd)s+(-ad-bc)t+(-ab-bc)u, \\
\ell_{13}' = (i\cdot cd+bc)s+i\cdot(ad-bc)t+(i\cdot ab-bc)u, \\
\ell_{14}' = (i\cdot cd+bc)s+i\cdot(ad+bc)t+(i\cdot ab+bc)u, \\
\ell_{15}' = (bc+cd)s+(ad+bc)t+(ab-bc)u, \\
\ell_{16}' = (bc+cd)s+(ad-bc)t+(ab+bc)u, \\
\ell_{17}' = (-i\cdot cd+bc)s-i\cdot(ad+bc)t+(-i\cdot ab+bc)u, \\
\ell_{18}' = (-i\cdot cd+bc)s-i\cdot(ad-bc)t+(-i\cdot ab-bc)u, \\
\ell_{19}' = cds+adt+abu, \\
\ell_{20}' = (bc-c^2)s+(-ac+bd)t+(b^2-bc)u, \\
\ell_{21}' = (bc-c^2)s+(-ac-bd)t+(-b^2+bc)u, \\
\ell_{22}' = (i\cdot c^2+bc)s+i\cdot(ac-bd)t+(-i\cdot b^2+bc)u, \\
\ell_{23}' = (i\cdot c^2+bc)s+i\cdot(ac+bd)t+(i\cdot b^2-bc)u, \\
\ell_{24}' = (bc+c^2)s+(ac+bd)t+(b^2+bc)u,\\
\ell_{25}' = (bc+c^2)s+(ac-bd)t+(-b^2-bc)u, \\
\ell_{26}' = (-i\cdot c^2+bc)s-i\cdot(ac+bd)t+(-i\cdot b^2-bc)u, \\
\ell_{27}' = (-i\cdot c^2+bc)s-i\cdot (ac-bd)t+(i\cdot b^2+bc)u, \\
\ell_{28}' = t, \\
\ell_{29}' = dt+bu, \\
\ell_{30}' = u.
\end{array}
\end{equation}
\end{proof}
\begin{remark}
One might expect that the projection of $10$ disjoint lines in $\L_{30}$ is somehow special. However, it can be checked that the contrary holds: the lines form a star configuration -- they intersect only in pairs, producing $45$ double intersection points. The intersections take place away from the curve $C_6$.
\end{remark}
We derive for completeness the following corollary to Theorem \ref{thm: geproci for Z60}.
\begin{corollary}[Subsets of $Z_{60}$ with the \geproci \, property]\label{cor: Z60 minus lines}
Removing $6$ collinear points from $Z_{60}$ produces a set $Z$ of $54$ points with the \geproci \, property. Its projection is a complete intersection of the curve of degree $6$ and the images of the remaining $9$ lines covering $Z$.
This procedure can be repeated with remaining sets of $6$ collinear points. Thus we get sets of $60, 54, 48, 42, 36, 30$, and $24$ points, which are not $(a,b)$-grids, with the \geproci \, property.
\end{corollary}
\begin{proof}
The only thing to check is that the obtained sets of points are not grids. Removing points of $Z_{60}$ lying on one, two, or three lines from the same fixed group among $A,B,C,D,E$ or $F$ certainly does not lead to any grid, because the maximal number of collinear points in $Z_{60}$ is $6$. Removing the fourth line, we get a set $V_4$ of $36$ points, which could, in principle, be a $(6,6)$-grid. However, a direct check (supported by computer, but also manageable by hand) shows that the only lines with $6$ points come from the group covering $Z_{60}$ selected for the procedure, say $A$. Going down to $30$ points, we obtain a set $V_5$ and we detect only two lines with $5$ points from $V_5$ (these lines come from two different families; for example, if we run the procedure with the family $A$, then they belong to $D$ and $F$). The same lines are the only two lines with $4$ points from $V_6$ when we pass from $V_5$ to $V_6$ by removing another set of $6$ collinear points. However, further removal fails, as noted in Remark \ref{rem:18_is_grid}.
\end{proof}
\begin{remark}\label{rem:18_is_grid}
Taking out collinear points as in Corollary \ref{cor: Z60 minus lines} we can arrive also at a set of $18$ points. Somewhat unexpectedly, such sets turn out to be grids.
\end{remark}
On the other hand, it is interesting to note that whereas the curve $C_6$ of degree $6$ vanishing along $\pi(Z_{60})$ is unique, there are six ways to choose a completely reducible (i.e. splitting into lines) degree $10$ curve cutting out $\pi(Z_{60})$ on $C_6$. The example studied in this work motivates the following definition.
\begin{definition}[Half grid]
Let $a$, $d$ be positive integers. A set $Z$ of $ad$ points in $\P^3$ is an $(a,d)$-half grid if there exists a set of $a$ mutually skew lines $\ell_1,\ldots, \ell_a$ covering $Z$ and a general projection of $Z$ to a plane is a complete intersection of the images of these lines with a (possibly reducible) curve of degree $d$.
\end{definition}
It is clear that $Z_{60}$ is a half grid. Moreover, any grid is a half grid. It is also clear that the points in $Z$ are equidistributed over the lines. Taking $6$ collinear points out of $Z_{60}$ results also in a half grid.
\begin{problem}
Are there any half grids, but not grids in $\P^3$ other than $Z_{60}$ and its subgrids?
\end{problem}
\section{More subsets of $Z_{60}$ with the \geproci \,property}
In the Appendix to \cite{ChiantiniMigliore19} the first non-grid sets of points in $\mathbb{P}^{3}$ with the \geproci \, property were identified. They are subsets of the set of points associated with the $F_{4}$ root system, which we call here $Z_{24}$. This set is a subset of $Z_{60}$. We keep the numbering of the points:
\[\begin{array}{lll}
P_{1} = [0:0:1:1] & P_{3} = [0:0:1:-1] & P_{5} = [0:1:0:1] \\
P_{7} = [0:1:0:-1] & P_{9} = [0:1:1:0] & P_{11} = [0:1:-1:0] \\
P_{13} = [1:0:0:1] & P_{15} = [1:0:0:-1] & P_{17} = [1:0:1:0] \\
P_{19} = [1:0:-1:0] & P_{21} = [1:1:0:0] & P_{23} = [1:-1:0:0] \\
P_{25} = [1:0:0:0] & P_{26} = [0:1:0:0] & P_{27} = [0:0:1:0] \\
P_{28} = [0:0:0:1] & P_{29} = [1:1:1:1] & P_{30} = [1:1:1:-1] \\
P_{31} = [1:1:-1:1] & P_{32} = [1:1:-1:-1] & P_{33} = [1:-1:1:1] \\
P_{34} = [1:-1:1:-1] & P_{35} = [1:-1:-1:1] & P_{36} = [1:-1:-1:-1].
\end{array}\]
As was shown in the Appendix to \cite{ChiantiniMigliore19}, the set $Z_{24}$ is not an $(a,b)$-grid for any $(a,b)$, but its general projection onto $\mathbb{P}^{2}$ is a complete intersection of curves of degree $4$ and $6$. Using our results from the previous sections we would like to describe the geometry of the degree $6$ curves that are crucial for constructing the complete intersection of $24$ points in the plane. First of all, by an easy inspection, we can find a subset $\mathbb{L}_{18}$ of $\mathbb{L}_{30}$ consisting of $18$ lines such that these lines and the set $Z_{24}$ form a $(24_{3},18_{4})$ point-line configuration in $\mathbb{P}^{3}$. The indices of lines in $\L_{18}$ corresponding to those in \eqref{eq:30 lines}, together with the groups from Table \ref{tab: 10 disjoint lines}, are presented below:
\begin{align}\label{eq: 18 lines}
\begin{split}
\L_{18} = &\left\{ \ell_{1} (AB), \ell_{2} (EF), \ell_{3} (CD), \ell_{6} (CD), \ell_{7} (EF), \ell_{10} (CF),\ell_{11} (BD), \ell_{12} (AE), \ell_{15} (AE),\right.\\
& \left.\ell_{16} (BD), \ell_{19} (DE), \ell_{20} (AC), \ell_{21} (BF), \ell_{24} (BF), \ell_{25} (AC), \ell_{28} (DE), \ell_{29} (CF), \ell_{30} (AB)
\right\}.
\end{split}
\end{align}
Choosing lines indexed by the same letter gives a subset of $6$ disjoint lines in $\L_{18}$ covering the set $Z_{24}$.
For example, the lines corresponding to $A$ are
$$\{\ell_1, \ell_{12}, \ell_{15}, \ell_{20}, \ell_{25}, \ell_{30}\}.$$
That the lines are disjoint is clear from the definition of groups $A,B,\ldots,F$. The covering property follows by a direct inspection. This shows, in particular, that $Z_{24}$ is a half grid.
It is important to observe that $Z_{24}$ cannot be obtained from $Z_{60}$ by removing points on disjoint lines. Thus, even though it is a subset of $24$ points in $Z_{60}$, it is not obtainable by the procedure described in Corollary \ref{cor: Z60 minus lines}. Indeed, in each step the procedure diminishes the set $Z_{60}$ by a set of six collinear points. Since initially there are $10$ lines, each containing $6$ points from $Z_{60}$, after each step of the procedure the remaining set is still covered by a certain number of these lines. On the other hand, in $Z_{24}$ there are at most $4$ points on a line.
Indeed, in \eqref{eq: 18 lines} there are already $6$ lines from each family $A,B,\ldots,F$ involved.
\begin{proposition}\label{prop: Z24 is geproci}
The set $Z_{24}$ has the \geproci \, property.
\end{proposition}
\proof
This has already been proved in the Appendix to \cite{ChiantiniMigliore19}. However, now our understanding of the situation is better. The set $Z_{24}$ has the $\calc(4)$ property, as proved in the Appendix, so its image under a general projection is contained in an irreducible curve $\Gamma$ of degree $4$. The points on $\Gamma$ can be cut out either by the images of $6$ disjoint lines, as discussed above, or by an irreducible curve of degree $6$ determined by the cone given in Theorem \ref{thm: unexpected cone of degree 6}.
\hspace*{\fill}\frame{\rule[0pt]{0pt}{6pt}\rule[0pt]{6pt}{0pt}}\endtrivlist
We conclude with the following amusing observation.
\begin{remark}
The residual set $Z_{60}\setminus Z_{24}$ is a $(6,6)$-grid.
\end{remark}
\proof
It can be checked directly that the following two sets of lines
$$\left\{\ell_4,\ell_9,\ell_{14},\ell_{17},\ell_{22},\ell_{27}\right\}\;\;\mbox{ and }\;\;
\left\{\ell_5, \ell_8, \ell_{13}, \ell_{18}, \ell_{23}, \ell_{26}\right\}$$
intersect in the $36$ points of $Z_{60}\setminus Z_{24}$.
\hspace*{\fill}\frame{\rule[0pt]{0pt}{6pt}\rule[0pt]{6pt}{0pt}}\endtrivlist
We conclude by answering an interesting question raised by the referee.
\begin{proposition}\label{prop:notR}
The Klein configuration $Z_{60}$ cannot be realized over the reals.
\end{proposition}
By realizability we mean simply finding a set of $60$ points in $\P^3(\R)$ with the same collinearities and coplanarities as those in $Z_{60}$.
\begin{proof}
The idea is to look at one of the $60$ planes incident to $15$ points in $Z_{60}$. To work with specific points, we choose the plane $w=0$. Then we obtain the configuration indicated in Figure \ref{fig:F}. For the sake of clarity we just indicate the numbers of the points, which are in accordance with those introduced earlier. This configuration is an augmented Fermat configuration, see \cite[Examples 3.4 and 3.5]{Szp19c}.
\definecolor{uuuuuu}{rgb}{0.26,0.26,0.26}
\begin{figure}[h!]
\centering
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1cm,y=1cm]
\clip(-1,-.5) rectangle (7,6);
\draw [line width=2pt,domain=0:14.18] plot(\x,{(-0--5.196152422706633*\x)/3});
\draw [line width=2pt,domain=-4.94:6] plot(\x,{(--31.1769145362398-5.196152422706633*\x)/3});
\draw [line width=2pt,domain=.15:5.85] plot(\x,0.8660254037844393);
\begin{scriptsize}
\draw [fill=uuuuuu] (0.5,0.8660254037844393) circle (4pt);
\draw [fill=uuuuuu] (1,1.7320508075688785) circle (3pt);
\draw (0.6,1.7320508075688785) node {$20$};
\draw [fill=uuuuuu] (1.5,2.5980762113533182) circle (3pt);
\draw (1.1,2.5980762113533182) node {$19$};
\draw [fill=uuuuuu] (2,3.4641016151377584) circle (3pt);
\draw (1.6,3.4641016151377584) node {$18$};
\draw [fill=uuuuuu] (2.5,4.330127018922198) circle (3pt);
\draw (2.1,4.330127018922198) node {$17$};
\draw [fill=uuuuuu] (3.5,4.330127018922196) circle (3pt);
\draw (3.9,4.330127018922196) node {$12$};
\draw [fill=uuuuuu] (4,3.464101615137759) circle (3pt);
\draw (4.4,3.464101615137759) node {$11$};
\draw [fill=uuuuuu] (4.5,2.5980762113533244) circle (3pt);
\draw (4.9,2.5980762113533244) node {$10$};
\draw [fill=uuuuuu] (5,1.7320508075688847) circle (3pt);
\draw (5.4,1.7320508075688847) node {$9$};
\draw [fill=uuuuuu] (5.5,0.8660254037844328) circle (4pt);
\draw [fill=uuuuuu] (1.5,0.8660254037844328) circle (3pt);
\draw [fill=uuuuuu] (2.5,0.8660254037844328) circle (3pt);
\draw [fill=uuuuuu] (3.5,0.8660254037844328) circle (3pt);
\draw [fill=uuuuuu] (4.5,0.8660254037844328) circle (3pt);
\draw [fill=uuuuuu] (3,5.19) circle (4pt);
\draw (3,5.7) node {$27$};
\draw (0.55,0.5) node {$25$};
\draw (1.5,0.5) node {$21$};
\draw (2.5,0.5) node {$22$};
\draw (3.5,0.5) node {$23$};
\draw (4.5,0.5) node {$24$};
\draw (5.45,0.5) node {$26$};
\end{scriptsize}
\end{tikzpicture}
\caption{Points from $Z_{60}$ in the $w=0$ plane}
\label{fig:F}
\end{figure}
Most of the collinearities between points in the set
$$F=\left\{P_9, P_{10}, P_{11}, P_{12}, P_{17}, P_{18} , P_{19}, P_{20}, P_{21}, P_{22}, P_{23}, P_{24}\right\}$$
are not visible in Figure \ref{fig:F}. This is so for a good reason: it is impossible to draw them in the real plane, and this is precisely our argument for the Proposition.
Turning to the details, it is easy to check by direct computations that the following triples of points are collinear
$$9,17,23;\;\; 9,18,24;\;\; 9,19,21;\;\; 9,20,22;\;\; 10,17,22;\;\; 10,18,23;\;\; 10,19,24;\;\; 10,20,21;\;\;$$
$$11,17,21;\;\; 11,18,22;\;\;
11,19,23;\;\; 11,20,24;\;\;
12,17,24;\;\; 12,18,21;\;\;
12,19,22;\;\; 12,20,23.$$
Additionally, the quadruples indicated in Figure \ref{fig:F} are collinear
$$9,10,11,12;\;\; 17,18,19,20;\;\; 21,22,23,24.$$
Hence any line determined by a pair of points in $F$ contains at least one additional point of $F$. This is not possible over the reals due to the dual Sylvester-Gallai Theorem (see \cite[Theorem 1.1]{GreenTao2013}) and we are done.
\end{proof}
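The collinearities used in the proof can be double-checked by a short computation: for the twelve points of $F$, every line spanned by two of them contains a third. The following Python/SymPy sketch verifies this directly.

\begin{verbatim}
# Check that in the plane w = 0 every line through two points of the set
# F (the points P_9,...,P_12, P_17,...,P_24 in coordinates (x:y:z))
# contains at least one more point of F.
from sympy import Matrix, I
from itertools import combinations

units = [1, I, -1, -I]
F = ([(0, 1, u) for u in units]        # P_9  - P_12
     + [(1, 0, u) for u in units]      # P_17 - P_20
     + [(1, u, 0) for u in units])     # P_21 - P_24

def collinear(p, q, r):
    return Matrix([p, q, r]).det() == 0

print(all(any(collinear(p, q, r) for r in F if r not in (p, q))
          for p, q in combinations(F, 2)))      # expected: True
\end{verbatim}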
\paragraph*{Acknowledgement.}
Our work started in the framework of the Research in Pairs program of the Mathematisches Forschungsinstitut Oberwolfach in September 2020. We thank the MFO for providing excellent working conditions despite the COVID-19 crisis.
A preliminary version of our results was presented during Oberwolfach workshop on \emph{Lefschetz Properties in Algebra, Geometry and Combinatorics}. We thank the participants and especially Juan Migliore for very helpful comments expressed during the workshop.
Research of P. Pokora was partially supported by National Science Centre, Poland, Sonata Grant 2018/31/D/ST1/00177. Research of T. Szemberg was partially supported by National Science Centre, Poland, Opus Grant 2019/35/B/ST1/00723.
Research of J. Szpond was partially supported by National Science Centre, Poland, Harmonia Grant 2018/30/M/ST1/00148.
We thank the referee for helpful remarks and suggestions which helped us to better present some parts of the manuscript.
\section{Introduction}
\textbf{Conventions:} (1) A \emph{fibration} means a projective morphism $f:X \to Y$ of varieties such that $f_*\mathcal{O}_X = \mathcal{O}_Y$. Throughout this paper, as $Y$ frequently appears as the base of a fibration, we use $\eta$ and $\bar{\eta}$ to denote the generic point and geometric generic point of $Y$ respectively, and naturally use $X_{\eta}$ and $X_{\bar{\eta}}$ to denote the generic fiber and geometric generic fiber of $f$. For a scheme $Z$ we use $Z^{\mathrm{red}}$ to denote the scheme with the reduced structure of $Z$.
\smallskip
(2) For a morphism $\sigma: Z \to X$, if $D$ is a divisor on $X$, especially when $Z$ is birational to a subvariety of $X$, we often use $D|_Z$ to denote the pullback $\sigma^*D$ for simplicity.
\smallskip
(3) Let $D$ be a Weil divisor on a normal variety $X$ (or a Cartier divisor on an integral variety). We regard $\mathcal{O}_X(D)$ as a subsheaf of the constant sheaf of function field $K(X)$ of $X$ via
$$\mathcal{O}_X(D)_x:=\{f \in K(X)| (\mathrm{div}(f) + D)|_U \geq 0 ~\mathrm{for~some~open~set}~U~\mathrm{containing}~x\}.$$
Let $V \subseteq H^0(X, \mathcal{O}_X(D))$ be a finite dimensional linear subspace. If the linear system $|V|$ defines a birational map, we will simply say that $V$ or $|V|$ is birational.
\smallskip
(4) For two integral divisors $D_1,D_2$ on a normal variety $X$ such that $D_1 \leq D_2$, let $E= D_2-D_1$. We will use $s_E$ to denote a nonzero section of $\mathcal{O}_X(E)$ with zero $E$ (unique up to multiplying with a nonzero constant if $X$ is a projective variety). Then we have a natural inclusion $H^0(X, \mathcal{O}_X(D_1)) \otimes s_E \subseteq H^0(X, \mathcal{O}_X(D_2))$. When $X$ is a projective variety, the natural inclusion map $H^0(X, \mathcal{O}_X(D_1)) \hookrightarrow H^0(X, \mathcal{O}_X(D_2))$ means the map induced by tensoring with $s_E$.
\bigskip
Pluricanonical system $|nK_X|$ plays an important role in the classification of varieties.
For the class of varieties with non-negative Kodaira dimension, it is significant to get a lower bound of $n$ such that
\begin{itemize}
\item
the linear system $|nK_X|\neq \emptyset$ (effective nonvanishing problem) and
\item
the $n$-canonical map defined by $|nK_X|$ is birationally equivalent to the Iitaka fibration (effective Iitaka fibration problem).
\end{itemize}
Here we are only concerned with varieties of general type. For a smooth projective surface $X$ of general type over an algebraically closed field of arbitrary characteristic, it is known that $|5K_X|$ is birational (\cite{SB91,Re88}). In general, over the field of complex numbers $\mathbb{C}$, there exists a number $M(d)$ such that, for any $d$-dimensional smooth projective varieties of general type, if $m \geq M(d)$ then $|mK_X|$ is birational (\cite{HM06, Tak06}), and for threefolds we may take $M(3) = 126$ (\cite{CC10a,CC10b}).
This paper aims to investigate pluricanonical systems of varieties of general type in positive characteristic. As is known to experts, Kawamata-Viehweg vanishing is a key technical tool in the study of the adjoint linear system $|K_X + L|$ in characteristic zero, which enables one to extend a section on the log canonical center to the whole variety. Unfortunately, in positive characteristic, Kawamata-Viehweg vanishing fails for some varieties (\cite{Ra78}). To substitute for the role of this vanishing in positive characteristic, the idea is to combine Fujita vanishing (\cite{Fuj83, Ke03}) with Mumford regularity; then one can apply Frobenius amplitude to produce certain global sections of $K_X + L$ (\cite{Ke08, Sch14}). These sections are called \emph{Frobenius stable sections}, and the corresponding sub-linear system is called a \emph{Frobenius stable adjoint linear system} (Sec. \ref{sec:Frobenius stable sections}). Following this idea, we shall adapt some classical inductive approaches from characteristic zero to the positive characteristic case. We can apply them to treat varieties endowed with certain fibrations, which arise either from pluricanonical maps or from Albanese morphisms. With the help of the Riemann-Roch formula, we are able to prove effective nonvanishing and birationality of Frobenius stable pluricanonical systems on lower dimensional varieties.
\subsection{Effectivity for Linear systems on varieties equipped with certain fibrations}
We briefly recall a classical inductive strategy from characteristic zero as follows. For a smooth projective variety $X$ over an algebraically closed field of characteristic zero, if given a natural number $n_1$ such that $\dim |n_1K_X| \geq 1$, which induces a rational map $f:X \dashrightarrow Y$ with generic fiber $F$, and given a number $n_2$ such that $|n_2K_F|$ defines a birational map of $F$, then one can find a suitably larger number $M(n_1, n_2)$ such that for $m\geq M(n_1, n_2)$ the linear system $|mK_X|$ is birational (\cite{Ch04}). The most important step in carrying out this strategy is to extend sections on a fiber to the whole variety; hence one needs vanishing results and weak positivity of the pushforward of (relative) pluricanonical sheaves (\cite{Ko86}). This approach still works if we consider Frobenius stable adjoint linear systems. We will use the following criterion, which can be seen as a generalization of Keeler's result \cite{Ke08} to the case of higher relative dimension.
\begin{thm}[=Theorem \ref{thm:bir-criterion-for-induction}]\label{thm:intr-bir-crt1}
Let $f:X \to Y$ be a fibration of normal projective varieties over an algebraically closed field $k$ of characteristic $p$, and let $d = \dim Y$.
Let $D$ be a nef and big $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor on $X$, and $H, \tilde{H}$ two $\mathbb{Q}$-Cartier Weil divisors on $Y$ such that $|H|$ defines a generically finite map and $|\tilde{H}|$ is birational.
(i) If $S_{-}^0(X_{\eta}, K_{X_{\eta}} + D|_{X_{\eta}}) \neq 0$ then $S_{-}^0(X, K_X +D + f^*sH)\neq 0$ for any $s \geq d$.
(ii) If $S_{-}^0(X_{\eta}, K_{X_{\eta}} + D|_{X_{\eta}})$ is birational then $S_{-}^0(X, K_X +D + f^*sH)$ is birational for $s \geq d+1$; and if moreover $S_{-}^0(X, K_{X} +D + df^*H - f^*\tilde{H}) \neq 0$ then $S_{-}^0(X, K_X +D + f^*dH)$ is birational.
\end{thm}
About the notions and the assumptions in the theorem above, we make the following remarks. First, we do not require $D$ to be integral, to keep certain flexibility in the applications. Second, $S_{-}^0(X, K_X + D)$ is a subspace of the space of Frobenius stable sections $S^0(X, K_X + \ulcorner D\urcorner)$ (Section \ref{sec:Frobenius stable sections}). The advantage of this subspace is that each section of this type on the generic fiber can be lifted to a global one.
\medskip
Inspired by the idea of continuous global generation (CGG) introduced by Pareschi and Popa \cite{PP03}, we can prove the following theorem, which is used to treat the case of irregular varieties.
\begin{thm}[=Theorem \ref{thm:bir-criterion-irr}]\label{thm:intr-bir-crt2}
Let $X$ be a smooth projective variety over an algebraically closed field $k$ of characteristic $p$, and let $a: X \to A$ be a morphism to an abelian variety. Denote by $f: X \to Y$ the fibration arising from the Stein factorization of $a: X \to A$. Let $D, D_1,D_2$ be three divisors on $X$. Assume that $D$ is nef, big and $\mathbb{Q}$-Cartier.
(i) If $S^0_{-}(X_{\eta}, K_{X_{\eta}} + D_{\eta}) \neq 0$, then for any $\mathcal{P}_{\alpha} \in \mathrm{Pic}^0(A)$, $H^0(X, K_X + \ulcorner D\urcorner + a^*\mathcal{P}_{\alpha}) \neq 0$, and there exists some $\mathcal{P}_{\beta} \in \mathrm{Pic}^0(A)$ such that $S_{-}^0(X, K_X + \ulcorner D\urcorner + a^*\mathcal{P}_{\beta})\neq 0$.
(ii) Assume that $S^0_{-}(X_{\eta}, K_{X_{\eta}} + D_{\eta}) \neq 0$, $D_1$ is integral and for any $\mathcal{P}_{\alpha} \in \mathrm{Pic}^0(A)$, $|D_1 + a^*\mathcal{P}_{\alpha}| \neq \emptyset$. Then for any $\mathcal{P}_{\alpha_0} \in \mathrm{Pic}^0(A)$, $S^0_{-}(X, K_X + D + D_1 + a^*\mathcal{P}_{\alpha_0}) \neq 0$.
(iii) Assume that $S^0_{-}(X_{\eta}, K_{X_{\eta}} + D_{\eta})$ is birational, both $D_1$ and $D_2$ are integral and for any $\mathcal{P}_{\alpha} \in \mathrm{Pic}^0(A)$, $|D_i + a^*\mathcal{P}_{\alpha}| \neq \emptyset$. Then for any $\mathcal{P}_{\alpha_0} \in \mathrm{Pic}^0(A)$, $S^0_{-}(X, K_X + D + D_1+D_2 + a^*\mathcal{P}_{\alpha_0})$ is birational.
(iv) Assume that $S^0_{-}(X_{\eta}, K_{X_{\eta}} + D_{\eta})$ is birational, and $D_1,D_2$ are nef and big $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisors such that $S^0_{-}(X_{\eta}, K_{X_{\eta}} + (D_i)_{\eta}) \neq 0$. Then for any $\mathcal{P}_{\alpha_0} \in \mathrm{Pic}^0(A)$,
$S^0_{-}(X, K_X + D + (K_{X} + \ulcorner D_1\urcorner) + (K_{X} + \ulcorner D_2\urcorner )+ a^*\mathcal{P}_{\alpha_0})$ is birational.
\end{thm}
\subsection{Effectivity of pluricanonical maps of lower dimensional varieties} We explain our strategy as follows. Without loss of generality we consider a minimal threefold $X$ of general type and treat the two cases $q(X) =0$ and $q(X) >0$ separately.
(1) For the first case $q(X) = 0$, we shall apply the first criterion (Theorem \ref{thm:intr-bir-crt1}). It remains to find a number $n_0$ such that $\dim|n_0K_X| \geq 1$. If $p_g(X) >1$ we may take $n_0 =1$. If $p_g(X)=0$ then $\chi(\mathcal{O}_X) \geq 0$. In this case we first apply the Riemann-Roch formula to find a number $n_0$ such that $\chi(X, n_0K_X) \geq 2$, for which we need to prove a Miyaoka-Yau type inequality (Theorem \ref{thm:my-ineq}). Then we show $h^2(X, n_0K_X) = 0$ by using results on bend-and-break, but for an unfortunate technical reason we have to require that $X$ has Gorenstein singularities (Lemma \ref{lem:van-h2}).
(2) For the second case $q(X) >0$: since $X$ has a nontrivial Albanese map, we can apply Theorem \ref{thm:intr-bir-crt2} and argue according to the relative Albanese dimension.
\smallskip
It is worth mentioning that, to do induction, we need to verify the effectivity conditions of Frobenius stable pluricanonical systems $|S^0_{-}(X_{\eta}, K_{X_{\eta}} + nK_{X_{\eta}})|$ on the generic fiber, but in practice, thanks to Theorem \ref{thm:bir-geo-gen}, we may pass to a normal model $Z_{\bar{\eta}}$ of $X^{\mathrm{red}}_{\bar{\eta}}$ and only need to verify the corresponding conditions for $|S^0_{-}(Z_{\bar{\eta}}, K_{Z_{\bar{\eta}}} + nK_{X_{\eta}}|_{Z_{\bar{\eta}}})|$. It is convenient to work with $Z_{\bar{\eta}}$ since it is defined over an algebraically closed field.
In Theorems \ref{thm:eff-curve} and \ref{thm:eff-surface}, we first obtain effectivity results for Frobenius stable adjoint linear systems on curves and surfaces. In particular, we prove that for smooth projective curves of general type, $S^0_{-}(X, K_X + nK_X) \neq 0$ for $n\geq 1$ and $S^0_{-}(X, K_X + nK_X)$ is very ample if $n\geq 2$; for surfaces of general type, the corresponding lower bounds are $4$ and $7$ respectively. With these preparations, we finally prove the following theorem for threefolds.
\begin{thm}\label{thm:eff-3folds}
Let $X$ be a minimal terminal threefold of general type over an algebraically closed field of characteristic $p$.
(1) Assume $q(X) >0$. Then $S^0_{-}(X, K_X + nK_X) \neq 0$ if $n\geq 11$, and $S^0_{-}(X, K_X + nK_X)$ is birational if $n\geq 21$; and if moreover $p>2$, then $S^0_{-}(X, K_X + nK_X) \neq 0$ if $n\geq 9$, and $S^0_{-}(X, K_X + nK_X)$ is birational if $n\geq 17$.
(2) Assume $q(X) =0$ and $X$ has only Gorenstein singularities. Set $n_0(2)=13$, $n_0(3) =10$, $n_0(5) =9$, and $n_0(p) = 8$ if $p \geq 7$. Then
$S^0_{-}(X, K_X + nK_X) \neq 0$ if $n\geq 2n_0(p) + 2$ and $S^0_{-}(X, K_X + nK_X)$ is birational if $n\geq 3n_0(p) + 3$.
\end{thm}
\subsection{Further remarks and questions} $\empty$ \par
(1) For surfaces, the birational lower bound for Frobenius stable pluricanonical systems is very close to the classical one (for pluricanonical systems), and it is possibly optimal. For threefolds of general type in characteristic zero, $|5K_X|$ is birational if $X$ either has $q(X) >0$ or has a Gorenstein minimal model (\cite{CH07, CCZ07}); compared with this result, the bound obtained in Theorem \ref{thm:eff-3folds} seems far from optimal.
\smallskip
(2) Our proof relies on two technical assumptions.
The first one is the existence of minimal models. At the time of writing, (log) minimal model theory in dimension two has been established (\cite{Tan14, Tan18a, Tan20}), and the existence of minimal models in dimension three has been proved when the characteristic $p \geq 5$ (\cite{HX15,Bir16,HW19}). Moreover, when $p>5$, abundance has been proved for minimal threefolds with $q(X) >0$ (\cite{Zha17}), while for $q(X) =0$ only nonvanishing has been proved (\cite{XZ19}).
The second one is the Gorenstein condition on the minimal model when $q(X) =0$. In fact, by our proof, if the minimal model $X$ has rational singularities then we can get a bound depending on the Cartier index of $K_X$. It is proved in \cite{ABL20} that a terminal singularity over an algebraically closed field of characteristic $p >5$ is rational, but when $p\leq 5$ this is not necessarily true.
It is expected that there exists a birational lower bound depending only on the volume of $K_X$, but at present we cannot drop the above two assumptions.
\smallskip
(3) Furthermore, effectivity problems for varieties of intermediate Kodaira dimension are also of great significance. There has been much progress in characteristic zero, and we refer the reader to \cite{BZ16} for recent results and techniques. In characteristic $p$, however, there is so far no such result for threefolds, because of the lack of certain deep results from Hodge theory.
\medskip
This paper is organized as follows. In Section \ref{sec:pre} we introduce the notion of Frobenius stable adjoint linear system and study its behaviour under base change. In Section \ref{sec:main-bir-crt}, we prove the two effectivity criteria in Theorems \ref{thm:intr-bir-crt1} and \ref{thm:intr-bir-crt2}. In Section \ref{sec:dim2} we investigate effectivity of Frobenius stable adjoint linear systems on curves and surfaces. In Section \ref{sec:my-ineq} we prove a Miyaoka-Yau type inequality. In Section \ref{sec:dim3}, we study the Frobenius stable pluricanonical systems of threefolds and prove Theorem \ref{thm:eff-3folds}.
\medskip
{\small \noindent\textit{Acknowledgments.}
The author thanks Prof. Meng Chen and Chen Jiang for useful discussions. This research is partially supported by grant
NSFC (No. 11771260), the Fundamental
Research Funds for Central Universities and the project ``Analysis and Geometry on Bundles'' of Ministry of Science and Technology of the People's Republic of China. }
\section{Preliminaries}\label{sec:pre}
In this section, we will collect some useful technical results and introduce the notion of Frobenius stable adjoint linear system.
\subsection{Fujita vanishing} The key technical tool of this paper is Fujita vanishing, proposed by Fujita \cite{Fuj83}. Here we present a generalized version due to Keeler \cite{Ke03}.
\begin{thm}[{\cite[Theorem 1.5]{Ke03}}]
Let $f: X \rightarrow Y$ be a projective morphism over a Noetherian scheme, $H$ an $f$-ample line bundle and $\mathcal{F}$ a coherent sheaf on $X$. Then there exists a positive integer $N$ such that, for every $n >N$ and every relatively nef line bundle $L$ on $X$
$$R^if_*(\mathcal{F}\otimes H^n \otimes L) = 0~ \mathrm{~if~} i>0.$$
\end{thm}
As a corollary, we get the following version, which will be used frequently in what follows.
\begin{thm}\label{thm:var-fujita-vanishing}
Let $f: X \rightarrow Y$ be a projective morphism of normal varieties, let $H$ be an $f$-ample $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor and $D$ another divisor on $X$, and let $\mathcal{F}$ be a coherent sheaf on $X$. Then there exists a positive integer $N$ such that, for every $n >N$ and every relatively nef line bundle $L$ on $X$
$$R^if_*(\mathcal{F}\otimes \mathcal{O}_X(\ulcorner nH + D\urcorner) \otimes L) = 0~ \mathrm{~if~} i>0.$$
\end{thm}
\subsection{Frobenius stable sections}\label{sec:Frobenius stable sections} Let $K$ be an $F$-finite field of characteristic $p>0$. Let $X$ be a normal projective scheme of finite type over $K$. Let $D$ be a $\mathbb{Q}$-divisor on $X$. Let $F^e: X^e \to X$ denote the $e$-th iterate of the absolute Frobenius map of $X$, which is a finite morphism since $K$ is assumed to be $F$-finite. By duality theory we have the trace map
$Tr^e_X: F^e_*\mathcal{O}_X(K_X) \to \mathcal{O}_X(K_X)$, and after tensoring this map with $\mathcal{O}_X(\ulcorner D\urcorner)$ and taking saturation, the trace map induces a homomorphism of $\mathcal{O}_X$-modules
$$F^{e}_*\mathcal{O}_X(K_X+ p^e\ulcorner D\urcorner)\to \mathcal{O}_X(K_X + \ulcorner D\urcorner).$$
We will simply write $Tr^e_X$ or $Tr^e$, if no confusion occurs, for the various maps induced by the trace map of $F^e$.
Since $\ulcorner p^eD\urcorner \leq p^e\ulcorner D\urcorner$, we have a natural inclusion map $\mathcal{O}_X(K_X+ \ulcorner p^eD\urcorner) \hookrightarrow \mathcal{O}_X(K_X+ p^e\ulcorner D\urcorner)$. Consider the composition map
$$Tr^e: F^{e}_*\mathcal{O}_X(K_X+ \ulcorner p^eD\urcorner) \hookrightarrow F^{e}_*\mathcal{O}_X(K_X+ p^e\ulcorner D\urcorner)\to \mathcal{O}_X(K_X + \ulcorner D\urcorner)$$
and denote
$$S^e(X, K_X + D)= \mathrm{Im}(Tr^e:H^0(X, F^{e}_*\mathcal{O}_X(K_X+ \ulcorner p^eD\urcorner)) \to H^0(X, \mathcal{O}_X(K_X+ \ulcorner D\urcorner))).$$
We have the following factorization
\begin{align*}
Tr^e: F^{e}_*\mathcal{O}_X(K_X+ \ulcorner p^eD\urcorner) &\hookrightarrow F^{e}_*\mathcal{O}_X(K_X+ p\ulcorner p^{e-1}D\urcorner) \\
&\xrightarrow{F^{e-1}_*Tr^1}F^{e-1}_*\mathcal{O}_X(K_X+ \ulcorner p^{e-1}D\urcorner) \xrightarrow{Tr^{e-1}} \mathcal{O}_X(K_X + \ulcorner D\urcorner),
\end{align*}
and then get a natural inclusion $S^e(X, K_X + D) \subseteq S^{e-1}(X, K_X + D)$.
The space of Frobenius stable sections, which forms a $K$-linear space, is defined as
$$S^0(X, K_X + D)= \bigcap_{e>0}S^e(X, K_X + D).$$
If $X$ is regular and $D$ is an integral divisor, then this notion coincides with the usual one (\cite[Sec. 3]{Sch14}).
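Let us note a small but frequently used consequence of the definition: since the spaces $S^e(X, K_X + D)$ form a descending chain of subspaces of the finite dimensional $K$-linear space $H^0(X, \mathcal{O}_X(K_X+ \ulcorner D\urcorner))$, the chain stabilizes, so that
$$S^0(X, K_X + D) = S^e(X, K_X + D) \quad \text{for all sufficiently large } e.$$
This is the sense in which phrases such as ``for $e \gg 0$'' are used below.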
Take another divisor $D' \leq D$. Set $E= \ulcorner D\urcorner - \ulcorner D'\urcorner$. Then we have the following commutative diagram
{\small $$\xymatrix{&F^{e}_*\mathcal{O}_X(K_X+ \ulcorner p^eD'\urcorner) \ar@{^(_->}[r]\ar[d]^{Tr^e} &F^{e}_*\mathcal{O}_X(K_X+ \ulcorner p^eD\urcorner)\ar[d]^{Tr^e}\ar@{^(_->}[r] &F^{e}_*\mathcal{O}_X(K_X+ p^e(\ulcorner D'\urcorner + E))\ar[d]^{Tr^e} \\
&\mathcal{O}_X(K_X+ \ulcorner D'\urcorner) \ar[r]^{\otimes s_E} &\mathcal{O}_X(K_X+ \ulcorner D\urcorner) \ar@{=}[r] &\mathcal{O}_X(K_X+ \ulcorner D'\urcorner + E)}$$}
and conclude $S^e(X, K_X + D')\otimes s_E \subseteq S^e(X, K_X + D)$, which is compatible with the natural inclusion $H^0(X, K_X + \ulcorner D'\urcorner)\otimes s_E \subseteq H^0(X, K_X + \ulcorner D\urcorner)$. In turn, we obtain that
$$S^0(X, K_X + D')\otimes s_E \subseteq S^0(X, K_X + D).$$
\smallskip
Let $\Delta$ be an effective $\mathbb{Q}$-divisor. We define
$$S_{\Delta}^e(X, K_X + D):=S^e(X, K_X + D-\Delta)\otimes s_E \subseteq S^e(X, K_X + D)$$
where $E = \ulcorner D\urcorner - \ulcorner D-\Delta\urcorner$ and denote
\begin{align*}
S_{\Delta}^0(X, K_X + D)=\bigcap_{e\geq0}S_{\Delta}^e(X, K_X + D).
\end{align*}
Obviously for another effective divisor $\Delta'$, if $\Delta' \leq \Delta$ then $S_{\Delta}^0(X, K_X + D) \subseteq S_{\Delta'}^0(X, K_X + D)$.
To extend a section on a certain closed subvariety to the whole variety, it is convenient to work with ample divisors. To this end, we introduce a smaller subspace $S_{-}^0(X, K_X + D) \subseteq S^0(X, K_X + D)$. First assume $D$ is a nef and big $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor, let $\Theta_{D}^{\mathrm{amp}}$ denote the set of effective $\mathbb{Q}$-divisors $\Delta$ such that $D- \Delta$ is an ample $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor, and define this subspace as follows
\begin{equation}\label{def:S0}
S_{-}^0(X, K_X + D)= \bigcap_{\Delta \in \Theta_{D}^{\mathrm{amp}}}\Big(\bigcup_{t\in \mathbb{Q}^+}S_{t\Delta}^0(X, K_X + D)\Big)~\subseteq~ S^0(X, K_X + D).
\end{equation}
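Let us also record the following elementary observation: for $t' \leq t$ we have $t'\Delta \leq t\Delta$ and hence $S_{t\Delta}^0(X, K_X + D) \subseteq S_{t'\Delta}^0(X, K_X + D)$; since all of these spaces sit inside the finite dimensional space $H^0(X, \mathcal{O}_X(K_X + \ulcorner D\urcorner))$, for each fixed $\Delta$ the union appearing in (\ref{def:S0}) is attained at any sufficiently small $t$, that is,
$$\bigcup_{t\in \mathbb{Q}^+}S_{t\Delta}^0(X, K_X + D) = S_{t_0\Delta}^0(X, K_X + D) \quad \text{for}~ 0 < t_0 \ll 1.$$
This observation is used repeatedly when passing between $S_{-}^0$ and the spaces $S_{t\Delta}^0$.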
In general, we let $\mathcal{B}_{D}^{\mathrm{nef}}$ be the set of effective divisors $B$ such that $D-B$ is a nef and big $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor, and define
\begin{equation}\label{def:gS}S_{-}^0(X, K_X + D)= \mathrm{Im}(\sum\limits_{B \in \mathcal{B}_{D}^{\mathrm{nef}}}S_{-}^0(X, K_X + D-B) \hookrightarrow S^0(X, K_X + D)).\end{equation}
In this paper we call the linear system generated by $S_{-}^0(X, K_X + D)$ a \emph{Frobenius stable adjoint linear system}.
\smallskip
\begin{prop}\label{prop:F-stable-section} Let $X,D$ be as above.
(i) For an effective Weil divisor $E$, we have
$$S_{-}^0(X, K_X + D)\otimes s_E \subseteq S_{-}^0(X, K_X + D + E).$$
(ii) Assume $D$ is a nef and big $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor on $X$. Then there exists a closed subvariety $T \subsetneq X$ such that for any $\Delta \in \Theta_{D}^{\mathrm{amp}}$, if $\mathrm{Supp}~ \Delta\supseteq T$ then
$$S_{-}^0(X, K_X + D) = S_{t\Delta}^0(X, K_X + D)$$
for sufficiently small $t >0$.
(iii) Assume that $X$ is regular and $D$ is a $\mathbb{Q}$-divisor on $X$.
(iii-1) If $D$ is ample and there is an integer $a>0$ such that $p^aD$ is integral, then for any nef line bundle $L$ on $X$, there exists a number $N$ independent of $L$ such that for any $e\geq N$
$$S_{-}^0(X, K_X + D + L)=S^0(X, K_X + D + L) = S^{ea}(X, K_X + D + L).$$
(iii-2) If $D$ is nef and big then there exists $\Delta \in \Theta_{D}^{\mathrm{amp}}$ such that, for any nef line bundle $L$ on $X$ and any $t \in \mathbb{Q}^{+}$
$$S_{t\Delta}^0(X, K_X + D + L) \subseteq S_{-}^0(X, K_X + D + L),$$
and as a consequence the equality is attained if $t$ is sufficiently small (not necessarily independent of $L$).
(iv) Assume $D$ is a nef and big $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor on $X$. Let $\sigma: X' \to X$ be a birational morphism from another normal projective variety $X'$ over $K$ and let $E$ be an effective $\sigma$-exceptional $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor on $X'$. Then
(iv-1) the trace map of $\sigma$ induces a natural injective map $Tr_{\sigma}: S_{-}^0(X', K_{X'} + \sigma^*D) \to S_{-}^0(X, K_X + D)$;
(iv-2) if $X$ is $\mathbb{Q}$-factorial and $E' = \ulcorner \sigma^*D + E\urcorner - \ulcorner \sigma^*D\urcorner$, then $S_{-}^0(X', K_{X'} + \sigma^*D)\otimes s_{E'} = S_{-}^0(X', K_{X'} + \sigma^*D + E)$.
\end{prop}
\begin{proof}
(i) By the construction we have that for $\Delta \geq 0$, $S_{\Delta}^0(X, K_X + D) \otimes s_E = S_{\Delta + E}^0(X, K_X + D + E)$. From this fact, the assertion (i) follows from the definition.
\smallskip
(ii) Assume that $D$ is nef and big. Observe that
\begin{itemize}
\item
for $\Delta_1, \Delta_2 \in \Theta_{D}^{\mathrm{amp}}$, if $\mathrm{Supp}~\Delta_1 \subseteq \mathrm{Supp}~\Delta_2$ then
$$\bigcup_{t\in \mathbb{Q}^+}S_{t\Delta_1}^0(X, K_X + D) \supseteq \bigcup_{t\in \mathbb{Q}^+}S_{t\Delta_2}^0(X, K_X + D);$$
\item
since $S_{-}^0(X, K_X + D)$ is a finite dimensional $K$-linear space, in the definition (\ref{def:S0}) the equality can be attained by taking finitely many divisors $\Delta_1, \cdots, \Delta_r \in \Theta_{D}^{\mathrm{amp}}$.
\end{itemize}
Therefore, $T = \mathrm{Supp}~\sum_{i=1}^r \Delta_i$ satisfies our requirement.
\smallskip
(iii) First we apply (iii-1) to show (iii-2). Fix an effective divisor $\Delta$ such that $D - \Delta$ is ample and $\mathrm{Supp}~D \subseteq \mathrm{Supp}~\Delta$. Let $t_n= \frac{1}{n}$. We can find a sequence of $\mathbb{Q}$-divisors $\Delta_n$ such that $t_{n+1}\Delta \leq \Delta_n \leq t_n\Delta$, each $D- \Delta_n$ is ample and the index of each $D- \Delta_n$ is a power of $p$.
Then it follows that
\begin{align*}
&S_{t_n\Delta}^0(X, K_X + L+ D)\cong S^0(X, K_X + L+ D - t_n\Delta)\\
&\hookrightarrow S^0(X, K_X + L+D - \Delta_n) = S_{-}^0(X, K_X +L+ D - \Delta_n) \hookrightarrow S_{-}^0(X, K_X + L + D)
\end{align*}
where the equality on the second row is due to (iii-1). From this, we can conclude (iii-2).
We now prove (iii-1). Let $T$ be a closed subvariety as in (ii).
Take an effective $\mathbb{Q}$-divisor $\Delta$ with $T \subseteq \mathrm{Supp}~\Delta$ and whose coefficients have denominators prime to $p$. Since $X$ is regular, we may replace $\Delta$ with a small multiple such that there exists a large positive integer $g$ for which $(p^g-1)\Delta$ is integral and the trace map $Tr_{\Delta}^g: F^{g}_*\mathcal{O}_X((1-p^g)(K_X + \Delta)) \to \mathcal{O}_X$ is surjective. Let $\mathcal{K}_g$ denote the kernel of $Tr_{\Delta}^g$. Then we have the following exact sequence
$$0 \to \mathcal{K}_g \to F^{g}_*\mathcal{O}_X((1-p^g)(K_X + \Delta)) \to \mathcal{O}_X \to 0.$$
By the assumption we may also assume $p^gD$ is integral. For natural numbers $e,s$, we can deduce the following exact sequences
\begin{align*}(*)_{e,s}:~0 \to F^{(e+s)g}_*(\mathcal{K}_g\otimes \mathcal{O}_X&(K_X+ p^{(e+s)g}(D+L)-(p^{sg}-1)\Delta)) \\
\to F^{(e + s +1)g}_*&\mathcal{O}_X(K_X+ p^{(s+1)g} p^{eg}(D+L) - (p^{(s+1)g}-1)\Delta) \\
&\to F^{(e+s)g}_*\mathcal{O}_X(K_X+ p^{sg} p^{eg}(D+L) -(p^{sg}-1)\Delta) \to 0.
\end{align*}
Since both $D$ and $D-\Delta$ are ample and $L$ is nef, applying Fujita vanishing (Theorem \ref{thm:var-fujita-vanishing}), we can show that
there exists some $e_0>0$ (depending only on $D$ and $\Delta$) such that for any integer $s\geq 0$,
$$H^1(X, F^{(e_0+s)g}_*(\mathcal{K}_g\otimes \mathcal{O}_X(K_X+ p^{(e_0+s)g}(D+L)-(p^{sg}-1)\Delta)))=0.$$
Fix such an $e_0$ and take the cohomology of $(*)_{e_0, s}$ for $s\geq0$. By induction on $s$, we can show that for each $s > 0$ the trace map
\begin{align*}
\eta_{e_0, s}: H^0(X, F^{(e_0+s)g}_*\mathcal{O}_X(K_X+ p^{sg} &p^{e_0g}(D+L) - (p^{sg}-1)\Delta)) \\
&\to H^0(X, F^{e_0g}_*\mathcal{O}_X(K_X+ p^{e_0g}(D+L)))
\end{align*}
is surjective.
From this we conclude that for any $s\geq 0$,
$$S^{(e_0+s)g}(X, K_X+D+L) = S^{e_0g}(X, K_X+D+L) = S^{0}(X, K_X+D+L).$$
Since $p^gD$ is integral, we can take a sufficiently small $t>0$ such that for any integer $s>0$,
$$
\ulcorner p^{(s+e_0)g}(D +L - t\Delta)\urcorner - (p^{sg}p^{e_0g}(D+L) - (p^{sg}-1)\Delta) = \ulcorner(p^{sg}-1-tp^{(s+e_0)g})\Delta \urcorner>0
$$
which gives a natural inclusion
$$\mathcal{O}_X(K_X+ p^{sg} p^{e_0g}(D+L) - (p^{sg}-1)\Delta) \hookrightarrow \mathcal{O}_X(K_X+ \ulcorner p^{(s+e_0)g}((D+L) - t\Delta)\urcorner).$$
From the surjectivity of $\eta_{e_0, s}$, we see that the following trace map is surjective
$$H^0(X, F^{(e_0+s)g}_*\mathcal{O}_X(K_X+ \ulcorner p^{(s+e_0)g}((D+L) - t\Delta)\urcorner)) \to S^0(X, K_X+D+L).$$
Therefore, by the choice of $\Delta$ it follows that $S_{-}^0(X, K_X+D+L) = S^0(X, K_X+D+L)$.
To compute $S^0(X, K_X+D+L)$, we set $\Delta =0$ and $g=a$ in $(*)_{e,s}$. Then by Fujita vanishing there exists a number $N$ independent of $L$ such that for any integer $s\geq 0$,
$$H^1(X, F^{(N+s)a}_*(\mathcal{K}_a\otimes \mathcal{O}_X(K_X+ p^{(N+s)a}(D+L))))=0.$$
We can show that $S^0(X, K_X+D+L) = S^{Na}(X, K_X+D+L)$ by an argument similar to that of the previous paragraph.
\smallskip
(iv) We consider the following commutative diagram
$$\xymatrix{
&(X')^e \ar[r]\ar[d]^{F_{X'}^e} &X^e\ar[d]^{F_X^e} \\
&X' \ar[r]^<<<<<{\sigma} &X
}.$$
Then for $\Delta \geq 0$, since $E$ is $\sigma$-exceptional and $X$ is normal, the following commutative diagram of trace maps makes sense
$$\xymatrix{
&\sigma_*F^e_*\mathcal{O}_{X'}(K_{X'} + \ulcorner p^e\sigma^*(D-\Delta) + p^eE\urcorner) \ar[r]^<<<<<{Tr_{\sigma}}\ar[d]^{Tr_{X'}^e} &F^e_*\mathcal{O}_{X}(K_X + \ulcorner p^e(D-\Delta)\urcorner)\ar[d]^{Tr_{X}^e}\\
&\sigma_*\mathcal{O}_{X'}(K_{X'} + \ulcorner \sigma^*D + E\urcorner) \ar[r]^{Tr_{\sigma}} &\mathcal{O}_{X}(K_{X} + \ulcorner D\urcorner)
}.$$
We get a natural map $Tr_{\sigma}: S_{\sigma^*\Delta}^0(X', K_{X'} + \sigma^*D +E) \to S_{\Delta}^0(X, K_X + D)$ which is injective since $\sigma$ is birational.
For each $\Delta \in \Theta_{D}^{\mathrm{amp}}$, we take an effective divisor $\Delta''$ on $X'$ such that $\sigma^*\Delta + \Delta'' \in \Theta_{\sigma^*D}^{\mathrm{amp}}$. So we can obtain the required map of (iv-1) as follows
\begin{align*}
Tr_{\sigma}: S_{-}^0(X', K_{X'} + \sigma^*D) & \subseteq \bigcap_{\Delta \in \Theta_{D}^{\mathrm{amp}}}~(\bigcup_{t\in \mathbb{Q}^+}S_{t(\sigma^*\Delta + \Delta'')}^0(X', K_{X'} + \sigma^*D))\\
&\subseteq \bigcap_{\Delta \in \Theta_{D}^{\mathrm{amp}}}~(\bigcup_{t\in \mathbb{Q}^+}S_{t\sigma^*\Delta}^0(X', K_{X'} + \sigma^*D)) \\
&\to \bigcap_{\Delta \in \Theta_{D}^{\mathrm{amp}}}~(\bigcup_{t\in \mathbb{Q}^+}S_{t\Delta}^0(X, K_{X} + D)) = S_{-}^0(X, K_X + D).
\end{align*}
Let us assume $X$ is $\mathbb{Q}$-factorial and prove (iv-2). We already know that $S_{-}^0(X', K_{X'} + \sigma^*D)\otimes s_{E'} \subseteq S_{-}^0(X', K_{X'} + \sigma^*D + E)$, so it remains to prove the reverse inclusion. For this we take $\Delta' \in \mathcal{B}_{\sigma^*D +E}^{\mathrm{nef}}$ and let $\Delta = \sigma_*\Delta'$. As $X$ is assumed $\mathbb{Q}$-factorial, $D-\Delta$ is also a $\mathbb{Q}$-Cartier, nef and big divisor. Notice that $(\sigma^*D +E- \Delta')-\sigma^*(D-\Delta)= \sigma^*\Delta +E- \Delta'$ is supported in the exceptional locus and is $\sigma$-nef. By the negativity lemma, we have $\Delta' -E \geq \sigma^*\Delta \geq 0$. It follows that
$$S_{-}^0(X', K_{X'} + \sigma^*D + E - \Delta') \hookrightarrow S_{-}^0(X', K_{X'} + \sigma^*D -\sigma^*\Delta) \hookrightarrow S_{-}^0(X', K_{X'} + \sigma^*D).$$
Then the desired inclusion follows by the definition (\ref{def:gS}).
\end{proof}
\begin{rem}
Let $\sigma: X' \to X$ be a birational morphism of projective normal varieties of general type. If $X$ is $\mathbb{Q}$-factorial, then by Proposition \ref{prop:F-stable-section} (iv) we have a natural inclusion $S_{-}^0(X', K_{X'} + nK_{X'}) \subseteq S_{-}^0(X, K_{X} + nK_X)$, so the birationality of $S_{-}^0(X, K_{X} + nK_X)$ is implied by that of $S_{-}^0(X', K_{X'} + nK_{X'})$. This fact will be frequently used in the sequel.
\end{rem}
\begin{rem}
For simplicity we avoid involving $F$-singularities too much, at the cost that the results of Proposition \ref{prop:F-stable-section} are not stated in full generality.
We refer the interested reader to \cite[Sec. 2]{Sch14} for techniques to treat singularities.
\end{rem}
\subsection{The behaviour of varieties under field extension}\label{sec:bc}
It is known that a variety defined over a non-algebraically closed field may become more singular after an inseparable base field extension. Here we collect some results needed in this paper and refer the reader to \cite[Chap. 3.2.2]{Liu02} for a systematic study of this issue. Let $X$ be an integral projective variety over $K$ such that $H^0(X, \mathcal{O}_X) = K$. Then
\begin{itemize}
\item
The variety $X_{\bar{K}}$ is irreducible, hence $X_{\bar{K}}^{\mathrm{red}}$ is integral; and if $X$ is separable over $\mathrm{Spec}~K$, namely $K(X)/K$ is a separable extension, then $X_{\bar{K}}$ is reduced, hence integral (\cite[Cor. 2.14 (d) Chap. 3]{Liu02}).
\item
If $K$ is an extension of an algebraically closed field $k$ of transcendence degree $\mathrm{tr.deg}_k K = 1$, then $X$ is separable over $\mathrm{Spec}~K$ by \cite[Lemma 7.2]{Ba01}.
\item
If $W_{K^{\frac{1}{p^{\infty}}}} \to X_{K^{\frac{1}{p^{\infty}}}}^{\mathrm{red}}$ is a desingularization, then $W_{K^{\frac{1}{p^{\infty}}}}$ is smooth over $K^{\frac{1}{p^{\infty}}}$. Therefore, there exists a finite purely inseparable extension $K'/K$ such that the desingularization $W_{K'} \to X_{K'}^{\mathrm{red}}$ is smooth over $K'$.
\end{itemize}
\begin{prop}\label{prop:ex-over-surface}
Let $X$ be a projective regular surface over a field $K$ such that $H^0(X, \mathcal{O}_X) = K$. If $\sigma: W_{\bar{K}} \to X_{\bar{K}}^{\mathrm{red}}$ is a desingularization\footnote{Lipman \cite{Lip78} proved that an excellent algebraic surface always admits a desingularization in the strong sense, namely, the resolution is an isomorphism over the regular locus.} then each irreducible component of the exceptional locus of $\sigma$
is a rational curve.
\end{prop}
\begin{proof}
We first take a finite purely inseparable extension $K'/K$ such that the desingularization $\sigma': Y_{K'} \to X_{K'}^{\mathrm{red}}$ is smooth over $K'$. In fact we only need to prove that the exceptional locus of $\sigma'$ is geometrically a union of rational curves.
For sufficiently large $e$, the function field $K(X)$ lies between $K(Y_{K'})^{p^e}$ and $K(Y_{K'})$; this gives a rational map $X \dashrightarrow Y_{K'}^{p^e}$. Let $Y$ be the normalization of $Y_{K'}^{p^e}$ in $K(X)$. Then we have a natural commutative diagram
$$\xymatrix{
&Y_{K'} \ar[r]\ar[d] &Y \ar[d]\\
&X_{K'}^{\mathrm{red}} \ar[r] &X
}$$
such that $Y$ is normal, $Y \to X$ is a projective birational morphism and $Y_{K'} \to Y$ is finite and purely inseparable, hence a homeomorphism. Let $\tilde{Y} \to Y$ be a regular resolution. Since $X$ is regular, a relative minimal model program starting with $\tilde{Y}$ over $X$ must end up with $X$; hence the exceptional locus is a union of curves of arithmetic genus zero (\cite{Sh66}). This is enough to imply the statement.
\end{proof}
\subsection{The behaviour of the canonical bundle under purely inseparable base changes}\label{sec:canonical-bundle-bc}
First recall an important observation due to Tanaka \cite{Tan18}.
Let $K$ be a field of characteristic $p$ and $\bar{K}$ the algebraic closure of $K$. Let $X$ be a normal projective variety over $K$ such that $H^0(X, \mathcal{O}_X) = K$. Assume $X_{\bar{K}}$ is not reduced. By \cite[Lemma 2.3 and 2.4]{Tan18} we have
the following commutative diagram
$$\xymatrix{
&X^{(1)}\ar[rd]_{\sigma'}\ar[d]\ar[rrd]^{\sigma_1} & & \\
&X_{K_1}\ar[r]\ar[d] &X_{K_1'} \ar[r]^{\pi}\ar[d] &X\ar[d] \\
&\mathrm{Spec}~K_1 \ar[r] &\mathrm{Spec}~K_1' \ar[r] &\mathrm{Spec}~K
}$$
where
\begin{itemize}
\item
$K_1'/K$ is a purely inseparable extension of degree $p$ such that $X_{K_1'}$ is integral but not normal;
\item
$\sigma': X^{(1)} \to X_{K_1'}$ is the normalization map, and $K_1= H^0(X^{(1)}, \mathcal{O}_{X^{(1)}})$ is an inseparable extension over $K_1'$.
\end{itemize}
Note that $X_{K_1}$ is not reduced and the natural map $X^{(1)} \to X_{K_1}^{\mathrm{red}}$ coincides with the normalization.
Comparing the dualizing sheaves, we can write that
$$K_{X^{(1)}} \sim \sigma'^*K_{X_{K_1'}} - C \sim \sigma'^*\pi^*K_{X} - C \sim \sigma_1^*K_X - C$$
with the following remarks
\begin{itemize}
\item
$X_{K_1'}$ is Gorenstein in codimension one ($G_1$) and satisfies Serre's condition $S_2$; its dualizing sheaf coincides with $\pi^*\omega_X$ and hence is reflexive, so we may write it in the divisorial form $\mathcal{O}_{X_{K_1'}}(K_{X_{K_1'}})$ and also call $K_{X_{K_1'}}$ the canonical divisor\footnote{On an $S_2$ variety $X$, a rank one reflexive coherent sheaf $\mathcal{L}$ is invertible in codimension one and is isomorphic to $i_{U*}(\mathcal{L}|_U)$ where $U$ is the maximal open subset of $X$ such that $\mathcal{L}|_U$ is invertible.};
\item $C > 0$ arises from the conductor of the normalization (\cite[2.6]{Re94}).
\end{itemize}
We can repeat this process and finally get a finite sequence
$$\xymatrix{&X=X^{(0)}/K &X^{(1)}/K_1 \ar[l] &\cdots\ar[l] &X^{(n-1)}/K_{n-1} \ar[l] &X^{(n)}/K_n \ar[l]}$$
where $X^{(n)}$ is geometrically reduced. Then we can perform a further finite purely inseparable field extension $L/K_n$ such that the normalization $Z_L \to (X^{(n)})_L$ is geometrically normal over $L$. In this way, we obtain a normalization $Z_{\bar{K}} \to X_{\bar{K}}^{\mathrm{red}}$. Denoting by $\sigma: Z_{\bar{K}} \to X$ the natural morphism, there exists an effective Weil divisor $E$ on $Z_{\bar{K}}$, which arises from the conductors in the process of normalization, such that
$$K_{Z_{\bar{K}}} \sim \sigma^*K_X -E.$$
Moreover, by \cite[Theorem 3.16]{Tan19}, we may write $E= (p-1)E'$ for some effective Weil divisor $E'$ on $Z_{\bar{K}}$.
\subsection{Frobenius stable sections on generic fibers and geometric fibers}\label{sec:Frob-stable-section-bc}
Though we originally aim to treat only normal varieties, non-normal varieties appear as intermediate objects when performing inseparable base changes; we package the treatment in the following theorem.
\begin{thm}\label{thm:bir-geo-gen}
Let $K$ be an $F$-finite field and $\bar{K}$ the algebraic closure of $K$. Let $X$ be a normal projective variety over $K$ such that $H^0(X, \mathcal{O}_X) = K$. Let $\sigma: Z_{\bar{K}} \to X_{\bar{K}}^{\mathrm{red}}$ be the normalization map. Let $D$ be a nef and big $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor on $X$, and denote by $\bar{D}$ its pullback via the natural map $Z_{\bar{K}} \to X$. If $S_{*}^0(Z_{\bar{K}}, K_{Z_{\bar{K}}} + \bar{D})$ is birational (resp. nonzero), then so is $S_{*}^0(X, K_X + D)$, where ``$*$'' can be ``$-$'' or an effective divisor $\Delta$ on $X$ (in which case ``$*$'' is understood as $\bar{\Delta}:=\Delta|_{Z_{\bar{K}}}$ on $Z_{\bar{K}}$).
\end{thm}
\begin{proof}
In the following we only consider the case ``$*$''$=$``$-$'' and divide the proof into four steps. If ``$*$'' is taken to be a fixed effective divisor, we only need Steps 1, 3 and 4.
\smallskip
\textbf{Step 1:} Let $K'/K$ be a field extension where $K'$ is also $F$-finite and let $\sigma': X' \to X_{K'}^{\mathrm{red}}$ denote the normalization morphism. For an effective $\mathbb{Q}$-divisor $\Delta$ on $X$, the trace map of $\sigma'$ induces a natural $K'$-linear map $\eta_{K', \Delta}: S_{\Delta'}^0(X', K_{X'} + D') \to S_{\Delta}^0(X, K_X + D)\otimes_K K'$ where $D' = D|_{X'}, \Delta'=\Delta|_{X'}$.
Proof of Step 1.
We consider the following commutative diagram
$$\xymatrix{
&(X')^e\ar[r]^>>>>{\sigma'_e}\ar[d]_{F^e_{X'}} &X^e_{K'}=X^e\otimes_K K' \ar[r]^<<<<<{\pi_e}\ar[d]^{\eta_e} &X^e\ar[d]^{F_X^e} \\
& X'\ar[r]^>>>>>>{\sigma'} &X_{K'}=X\otimes_K K' \ar[r]^<<<<<{\pi} &X
}.$$
Here we remark that both $X_{K'}$ and $X^e_{K'}$ are $G_1$ and $S_2$, and the dualizing sheaves $\mathcal{O}_{X_{K'}}(K_{X_{K'}})$ and $\mathcal{O}_{X^e_{K'}}(K_{X^e_{K'}})$ coincide with the pullback of the dualizing sheaves of $X$ and $X^e$ respectively. First, since $\pi$ is a flat base change, we have the trace map
{\small \begin{align*}
& \eta_{e*}[\mathcal{O}_{X^e_{K'}}(K_{X^e_{K'}} + \pi^{*}_e\ulcorner p^e(D-\Delta)\urcorner)(: =\pi^{*}_e\mathcal{O}_{X^e}(K_{X^e} + \ulcorner p^e(D-\Delta)\urcorner))] \\
&\cong \pi^*F_{X*}^e\mathcal{O}_{X^e}(K_{X^e} + \ulcorner p^e(D-\Delta)\urcorner) \xrightarrow{\pi^*Tr_X^e} \mathcal{O}_{X_{K'}}(K_{X_{K'}} + \pi^*\ulcorner(D-\Delta)\urcorner)(:= \pi^*\mathcal{O}_{X}(K_{X} + \ulcorner (D-\Delta)\urcorner)).
\end{align*}}
For $e\gg0$ the image of the global sections of the above map coincides with $S^0(X, K_X + (D-\Delta))\otimes_K K'$. Second, by $\sigma'^{*}_e\pi^{*}_e\ulcorner p^e(D-\Delta)\urcorner \geq \ulcorner p^e(D'-\Delta')\urcorner$, the trace map
$$
Tr_{\sigma'_e}:~ \sigma'_{e*}(\mathcal{O}_{X'}(K_{X'} + \ulcorner p^e(D'-\Delta')\urcorner))
\to \mathcal{O}_{X^e_{K'}}(K_{X^e_{K'}} + \pi^{*}_e\ulcorner p^e(D-\Delta)\urcorner)
$$
makes sense in codimension one, and it in fact can extend to the whole variety because $\mathcal{O}_{X^e_{K'}}(K_{X^e_{K'}} + \pi^{*}_e\ulcorner p^e(D-\Delta)\urcorner)$ is reflexive. In summary, we obtain the following commutative diagram
$$\xymatrix{
&\eta_{e*}\sigma'_{e*}\mathcal{O}_{X'}(K_{X'} + \ulcorner\ p^e(D'-\Delta')\urcorner) \ar[r]^{Tr_{\sigma'_e}}\ar[d]_{Tr^e_{X'}} &\eta_{e*}\mathcal{O}_{X^e_{K'}}(K_{X^e_{K'}} + \pi^{e*}\ulcorner\ p^e(D-\Delta)\urcorner) \ar[d]_{\pi^*Tr^e_{X}} \\
&\sigma'_* \mathcal{O}_{X'}(K_{X'} + \ulcorner(D'-\Delta')\urcorner)\ar[r]^{Tr_{\sigma'}} &\pi^*\mathcal{O}_{X}(K_{X} + \ulcorner (D-\Delta)\urcorner)~.
}$$
For $e \gg 0$, the map between the images of the global sections of the vertical trace maps is nothing but the desired map $\eta_{K', \Delta}: S_{\Delta'}^0(X', K_{X'} + D') \to S_{\Delta}^0(X, K_X + D)\otimes_K K'$.
\smallskip
\textbf{Step 2:} The trace map $Tr_{\sigma}$ induces a $\bar{K}$-linear map $\eta: S_{-}^0(Z_{\bar{K}}, K_{Z_{\bar{K}}} + \bar{D}) \to S_{-}^0(X, K_{X} + D)\otimes_K \bar{K}$.
Proof of Step 2.
Note that $\sigma^*\Theta_{D}^{\mathrm{amp}} \subseteq \Theta_{\sigma^*D}^{\mathrm{amp}}$. Then the desired map is obtained as follows
\begin{align*}
\eta: S_{-}^0(Z_{\bar{K}}, K_{Z_{\bar{K}}} + \bar{D}) &\subseteq \bigcap_{\Delta \in \Theta_{D}^{\mathrm{amp}}}~(\bigcup_{t\in \mathbb{Q}^+}S_{t\sigma^*\Delta}^0(Z_{\bar{K}}, K_{Z_{\bar{K}}} + \bar{D}))\\
& \xrightarrow{\bigcap_{\Delta \in \Theta_{D}^{\mathrm{amp}}}~(\bigcup_{t\in \mathbb{Q}^+}\eta_{\bar{K}, t\Delta})} \bigcap_{\Delta \in \Theta_{D}^{\mathrm{amp}}}~(\bigcup_{t\in \mathbb{Q}^+}S_{t\Delta}^0(X, K_X + D))\otimes_K\bar{K} \\
& = S_{-}^0(X, K_{X} + D)\otimes_K \bar{K}
\end{align*}
where the map appearing in the second row is from Step 1.
\smallskip
\textbf{Step 3:} We assume that $X$ is geometrically reduced and prove the theorem.
Proof of Step 3.
Since $X_{\bar{K}}$ is reduced, $Z_{\bar{K}} \to X_{\bar{K}}$ is a birational morphism. By Step 2 we have a natural inclusion
$$S_{-}^0(Z_{\bar{K}}, K_{Z_{\bar{K}}} + \bar{D}) \hookrightarrow S_{-}^0(X, K_{X} + D)\otimes_K \bar{K}.$$
Since $S_{-}^0(Z_{\bar{K}}, K_{Z_{\bar{K}}} + \bar{D})$ induces a birational map of $Z_{\bar{K}}$ (resp. is nonzero), $S_{-}^0(X, K_{X} + D)\otimes_K \bar{K}$ induces a birational map of $X_{\bar{K}}$ (resp. is nonzero) too. Then we conclude that $S_{-}^0(X, K_X + D)$ is birational (resp. is nonzero).
\smallskip
\textbf{Step 4:} We assume that $X$ is not geometrically reduced and prove the theorem.
Proof of Step 4.
Recall the following commutative diagram from Section \ref{sec:canonical-bundle-bc}
$$\xymatrix{
&X^{(1)}\ar[rd]\ar[d] & & \\
&X_{K_1}\ar[r]\ar[d] &X_{K_1'} \ar[r]\ar[d] &X\ar[d] \\
&\mathrm{Spec}~K_1 \ar[r] &\mathrm{Spec}~K_1' \ar[r] &\mathrm{Spec}~K.
}$$
As in the previous case the trace map induces an inclusion of $K_1'$-linear spaces
$$S_{-}^0(X^{(1)}, K_{X^{(1)}} + D)\hookrightarrow S_{-}^0(X, K_{X} + D)\otimes_K K'_1.$$
If the $K_1$-linear space $S_{-}^0(X^{(1)}, K_{X^{(1)}} + D)$ defines a birational map of $X^{(1)}$ (resp. is nonzero), then it also defines a birational map of $X_{K_1'}$ as a $K_1'$-linear space, and in turn we can prove that $S_{-}^0(X, K_{X} + D)$ is birational (resp. is nonzero).
If $X^{(1)}$ is already geometrically reduced, then the normalization of $(X^{(1)}_{\bar{K}})^{\mathrm{red}}$ coincides with $Z_{\bar{K}}$, and we conclude the theorem by combining the result of Step 3 with the assertion proved in the previous paragraph.
Otherwise, we can repeat this process and finally get a chain of purely inseparable maps
$$\xymatrix{&X=X^{(0)}/K &X^{(1)}/K_1 \ar[l] &\cdots\ar[l] &X^{(n)}/K_n \ar[l]}$$
such that $X^{(n)}$ is geometrically reduced. We can then complete the proof by induction.
\end{proof}
To compare the behaviour of Frobenius stable adjoint linear systems on general fibers and on the generic fiber, we have the following result.
\begin{cor}\label{cor:bir-general-fiber}
Let $f: X \to Y$ be a fibration of normal varieties over an algebraically closed, uncountable field $k$ of characteristic $p$. Let $D$ be a nef and big $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor on $X$. Let $F$ be a general fiber and denote by $G \to F^{\mathrm{red}}$ the normalization.
Assume that $S_{-}^0(G, K_G+ D|_G)$ is birational (resp. non-zero). Then $S_{-}^0(X_{\eta}, K_{X_{\eta}}+ D|_{X_{\eta}})$ is birational (resp. non-zero).
\end{cor}
\begin{proof}
By Theorem \ref{thm:bir-geo-gen} we only need to prove the analogous assertions for the geometric generic fiber. To relate the general fiber and the geometric generic fiber, we shall use the trace maps of the relative Frobenius maps.
We may assume $f$ is flat and $Y$ is regular. There exists a quasi-finite, flat, purely inseparable base change $Y' \to Y$ such that, if $Z$ denotes the normalization of $(X':=X\times_Y Y')^{\mathrm{red}}$, the geometric generic fiber $Z_{\bar{\eta}}$ of the fibration $g: Z \to Y'$ is normal. We can shrink $Y'$ to assume $g$ is flat and $Y'$ is regular, and if $F = X_y$ for some closed point $y \in Y$ and $y' \in Y'$ is the point over $y$, then $G \cong Z_{y'}$. By Theorem \ref{thm:bir-geo-gen} we only need to prove the assertion for the fibration $g: Z \to Y'$. We consider the following commutative diagram
$$\xymatrix@C=2cm{&Z=Z^e \ar[r]^{F_{Z/Y'}^e}\ar[d]^{g^e}\ar@/^2pc/[rr]|{F_Z^e} &Z_{Y'^e}=Z\times_{Y'} Y'^e \ar[r]^{\pi^e}\ar[d]^{g_e'} & Z\ar[d]^g \\
&Y'\ar@{=}[r] &Y'^e\ar[r]^{F_{Y'}^e} &Y'
}.$$
Fix an effective divisor $\Delta$ on $Z$ and consider the trace map of the relative Frobenius map $F_{Z/Y'}^e$
{\small \begin{align*}
Tr_{Z/Y'}^e:~ & g^e_*\mathcal{O}_{Z^e}(K_{Z^e/Y'^e} + \ulcorner p^e(D-\Delta)\urcorner|_{Z^e}) \\
&\to g'_{e*}(\pi^{e*}\mathcal{O}_Z(K_{Z/Y'} + \ulcorner D\urcorner|_Z)) \cong F_{Y'}^{e*}g_*\mathcal{O}_Z(K_{Z/Y'} + \ulcorner D\urcorner|_Z)
\end{align*}}
where the isomorphism holds because $F_{Y'}^e$ is flat. We denote the image of the above trace map by $S_{\Delta}^eg_*\mathcal{O}_Z(K_{Z/Y'} + D) \subseteq F_{Y'}^{e*}g_*\mathcal{O}_Z(K_{Z/Y'} + \ulcorner D-\Delta\urcorner)$.
Applying Proposition \ref{prop:F-stable-section} (ii), we can find a reduced subvariety $\bar{T}$ on $Z_{\bar{\eta}}$ such that $\mathrm{Supp}~D_{\bar{\eta}} \subseteq \bar{T}$ and for any $\bar{\Delta} \in \Theta_{D_{\bar{\eta}}}^{\mathrm{amp}}$, if $\mathrm{Supp}~\bar{\Delta} = \bar{T}$ then
$$S_{\bar{\Delta}}^0(Z_{\bar{\eta}}, (K_{Z/Y'} + D)|_{Z_{\bar{\eta}}}) \subseteq S_{-}^0(Z_{\bar{\eta}}, (K_{Z/Y'} + D)|_{Z_{\bar{\eta}}}).$$
We can perform some further quasi-finite flat base changes to assume that there exists a subvariety $T \subset Z$ such that $T_{\bar{\eta}} = \bar{T}$, and that each irreducible component of $T$ is flat and separable over $Y'$, hence its restriction to $Z_{\bar{\eta}}$ is still reduced. As a consequence, if $B$ is a divisor supported on $T$ then
$$\ulcorner B\urcorner|_{Z_{\bar{\eta}}}= \ulcorner B|_{Z_{\bar{\eta}}}\urcorner~ \mathrm{and}~\ulcorner B\urcorner|_{G} = \ulcorner B|_{G}\urcorner.$$
From now on we fix an effective $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor $\Delta$ on $Z$ such that $\mathrm{Supp}~\Delta=T$ and that $D|_Z - \Delta$ is ample over $Y'$. By shrinking $Y'$ again, we may assume $\mathrm{Supp}~D \subseteq T$ and for any geometric point $\zeta \in Y'$ and natural number $e$
$$\ulcorner p^e(D-\Delta) \urcorner|_{Z_{\zeta}} = \ulcorner p^e(D-\Delta)|_{Z_{\zeta}} \urcorner,$$
and in turn we obtain a natural $k(\zeta)$-linear map
$$S_{\Delta}^eg_*\mathcal{O}_Z(K_{Z/Y'} + D) \otimes_{\mathcal{O}_{Y'}^{\frac{1}{p^e}}} k(\zeta)^{\frac{1}{p^e}} \to S_{\Delta}^e(Z_{\zeta}, (K_{Z/Y'} + D)|_{Z_{\zeta}}).$$
As $k(\zeta)$ is algebraically closed, we may omit the Frobenius twist if no confusion occurs.
Let $t_n= \frac{1}{n}$. By replacing $\Delta$ with a small multiple, we may assume that for all $n$
$$S_{-}^0(Z_{\bar{\eta}}, (K_{Z/Y'} + D)|_{Z_{\bar{\eta}}}) = S_{t_n\Delta}^0(Z_{\bar{\eta}}, (K_{Z/Y'} + D)|_{Z_{\bar{\eta}}}).$$
Then for each $n$ there exists a positive integer $e_n$ such that for any $e\geq e_n$,
{\small \begin{align}\label{eq:fs-geo-gen}S_{t_n\Delta}^eg_*\mathcal{O}_Z(K_{Z/Y'} + D) \otimes k(\bar{\eta}) \cong S_{t_n\Delta}^e(Z_{\bar{\eta}}, (K_{Z/Y'} + D)|_{Z_{\bar{\eta}}}) = S_{-}^0(Z_{\bar{\eta}}, (K_{Z/Y'} + D)|_{Z_{\bar{\eta}}}).\end{align}}
For each $n$ and $e \geq e_n$, there exists a nonempty open subset $U_{n,e} \subseteq Y'$ such that $S_{t_n\Delta}^eg_*\mathcal{O}_Z(K_{Z/Y'} + D)|_{U_{n,e}}$ is locally free of rank $\dim_{k(\bar{\eta})} S_{-}^0(Z_{\bar{\eta}}, (K_{Z/Y'} + D)|_{Z_{\bar{\eta}}}) $ and for any closed point $y' \in U_{n,e}$
{\small\begin{align}\label{eq:fs-gen}S_{t_n\Delta}^eg_*\mathcal{O}_Z(K_{Z/Y'} + D)\otimes k(y') \cong S_{t_n\Delta}^e(Z_{y'}, (K_{Z/Y'} + D)|_{Z_{y'}}).\end{align}}
Therefore, for each $y' \in \Xi = \bigcap_n \bigcap_e U_{n,e}$, the dimension $\dim_{k(y')}S_{t_n\Delta}^e(Z_{y'}, (K_{Z/Y'} + D)|_{Z_{y'}})$ is independent of $n$ and of $e$ (for $e$ sufficiently large), hence
$$S_{t_n\Delta}^e(Z_{y'}, (K_{Z/Y'} + D)|_{Z_{y'}}) \cong S_{t_n\Delta}^0(Z_{y'}, (K_{Z/Y'} + D)|_{Z_{y'}}) \supseteq S_{-}^0(Z_{y'}, (K_{Z/Y'} + D)|_{Z_{y'}}).$$
Since $k$ is uncountable, the locus $\Xi$ is dense in $Y'$.
If for general $y' \in Y'$, $S_{-}^0(Z_{y'}, (K_{Z/Y'} + D)|_{Z_{y'}})$ is birational, then there is a dense subset $\Xi' \subseteq \Xi$ such that for any $y' \in \Xi'$ and $e\in \mathbb{N}$, $S_{t_n\Delta}^e(Z_{y'}, (K_{Z/Y'} + D)|_{Z_{y'}})$ is birational. From this we conclude by (\ref{eq:fs-gen}) that the relative map
{\small $$Z/Y' \dashrightarrow \mathrm{Proj}_{\mathcal{O}_{Y'}}(\bigoplus_l \mathrm{Sym}^l(S_{\Delta}^eg_*\mathcal{O}_Z(K_{Z/Y'} + D)))$$}
is birational, which, by (\ref{eq:fs-geo-gen}), implies that $S_{-}^0(Z_{\bar{\eta}}, (K_{Z/Y'} + D)|_{Z_{\bar{\eta}}})$ is birational.
The nonvanishing assertion follows similarly.
\end{proof}
\subsection{Birational Criterion}
The following birational criterion is well known to experts, but we provide a detailed proof here for the convenience of the reader.
\begin{thm}\label{thm:bir}
Let $f: X \to Y$ be a dominant morphism of integral varieties over an algebraically closed field $k$, with integral generic fiber $X_{\eta}$. Let $D$ be a Weil divisor on $X$, and let $V \subseteq H^0(X, D)$ be a finite dimensional linear subspace, which is also regarded as a subspace of $H^0(Y, f_*\mathcal{O}_X(D))$. Let $\mathcal{V} \subseteq f_*\mathcal{O}_X(D)$ be the subsheaf generated by $V$. Assume that
(i) $|V||_{X_{\eta}}$ defines a birational map of $X_{\eta}$, \\
and one of the following conditions holds
(ii) there is an open dense subset $Y' \subseteq Y$ such that for any $y \in Y'$, $\mathcal{V}\otimes \mathcal{I}_y$ is globally generated on $Y'$ by
$V_y:=V \bigcap H^0(Y, \mathcal{V}\otimes \mathcal{I}_y)$;
(ii') there exists a linear system $|H|$ on $Y$ and an effective Weil divisor $E$ on $X$ such that $|H|$ defines a birational map of $Y$ and $f^*|H|+ E \subseteq |V|$.\\
Then $|V|$ defines a birational map of $X$.
\end{thm}
\begin{proof}
We first show the theorem under the assumptions (i) and (ii).
We may assume $Y' = \mathrm{Spec}~A$ is affine and restrict ourselves to a nonempty open affine subset $X' = \mathrm{Spec}~B$ with $f(X') \subseteq Y'$. By shrinking $X'$ and choosing a local generator of $\mathcal{O}_X(D)|_{X'}$, we may assume $V|_{X'} = \mathrm{Span}_k\{1, g_1, \cdots, g_r\}$ where $g_1, \cdots, g_r \in B$. The condition (i) means $K(A)(g_1, \cdots, g_r) = K(B)$. By shrinking $X'$ again we may assume
\begin{itemize}
\item
$B= A[g_1, \cdots, g_r]_g$ for some $g \in A[g_1, \cdots, g_r]$.
\end{itemize}
It follows that for $x \in X'$ and $y=f(x) \in Y'$,
\begin{itemize}
\item[(a)]
$X'_y \cong \mathrm{Spec}~A[g_1, \cdots, g_r]_g\otimes_Ak(y) \cong \mathrm{Spec}~k[\bar{g}_1, \cdots, \bar{g}_r]_{\bar{g}}$, thus the ideal sheaf $\mathcal{I}_{X'_y,x}$ is globally generated by $\bar{V} (:= \mathrm{Span}_k\{\bar{g}_1, \cdots, \bar{g}_r\}) \cap \frac{I_x}{I_y\cdot B}$.
\end{itemize}
Define the $A$-module $\mathcal{W} = \mathrm{Span}_A\{1, g_1, \cdots, g_r\} \subset B$, which generates a coherent sheaf $\widetilde{\mathcal{W}}$ on $Y'$ such that $\mathcal{V}|_{Y'} = \widetilde{\mathcal{W}}$. Take a closed point $y \in \mathrm{Spec}~A$, denote by $I_y \subset A$ the ideal of $y$ and set $\mathcal{W}_y = \mathcal{W} \cap I_y \cdot B$. Then
\begin{itemize}
\item[(b)]
we may identify $V_y = V \cap I_y\cdot B$, and by the condition (ii) we have $\mathcal{W}_y = \mathrm{Span}_AV_y$, and as a consequence the ideal $I_{y}\cdot B$ is generated by $V_y$.
\end{itemize}
Then considering the following exact sequence
$$0 \to \mathcal{I}_y\cdot \mathcal{O}_{X',x} \to \mathcal{I}_{X',x} \to \mathcal{I}_{X'_y,x}\to 0$$
by (a) and (b) we can conclude that $\mathcal{I}_{X',x}$ is globally generated by $V \cap I_x$. The above arguments show that $|V||_{X'}$ defines an isomorphism from $X'$ to its image, hence $|V|$ defines a birational map of $X$.
\smallskip
Now assume (i) and (ii'). To apply the above argument, we require moreover that $X' \cap E=\emptyset$. Then by the assumption (ii') we can find an open subset $Y' \subset Y$ such that for any $y\in Y'$, the sheaf $\mathcal{I}_y\cdot \mathcal{O}_{X',x}$ is generated by $V_y$. This, combined with (a), shows the desired statement.
\end{proof}
\section{Linear systems on varieties equipped with certain fibrations}\label{sec:main-bir-crt}
In this section, we work over an algebraically closed field $k$ of characteristic $p>0$. We will prove the nonvanishing and birational criteria in Theorems \ref{thm:intr-bir-crt1} and \ref{thm:intr-bir-crt2}.
\subsection{} We restate the first criterion as follows.
\begin{thm}\label{thm:bir-criterion-for-induction}
Let $f:X \to Y$ be a fibration of normal projective varieties and let $d = \dim Y$.
Let $D$ be a nef and big $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor on $X$, and $H, \tilde{H}$ two $\mathbb{Q}$-Cartier Weil divisors on $Y$ such that $|H|$ defines a generically finite map and $|\tilde{H}|$ is birational.
(i) If $S_{-}^0(X_{\eta}, K_{X_{\eta}} + D|_{X_{\eta}}) \neq 0$ then $S_{-}^0(X, K_X +D + f^*sH)\neq 0$ for any $s \geq d$.
(ii) If $S_{-}^0(X_{\eta}, K_{X_{\eta}} + D|_{X_{\eta}})$ is birational then $S_{-}^0(X, K_X +D + f^*sH)$ is birational for $s \geq d+1$; and if moreover $S_{-}^0(X, K_{X} +D + df^*H - f^*\tilde{H}) \neq 0$ then $S_{-}^0(X, K_X +D + f^*dH)$ is birational.
\end{thm}
\begin{proof}
To simplify the situation, we first blow up $Y$ so that the movable part of $|H|$ is base point free, then blow up $X$ along some locus disjoint from $X_{\eta}$ so that the rational map $X\dashrightarrow Y$ remains a morphism, and replace $H$ with the movable part and $D, \tilde{H}$ with their pull-backs. Note that the assumptions of the theorem still hold under the above birational modifications. From now on, we may assume $|H|$ is base point free.
Next we claim that for each $m \in \mathbb{N}$ there exists an effective divisor $\Delta_m$ on $X$ such that $D-\Delta_m$ is ample, and that $S_{-}^0(X, K_X +D + f^*mH) = S_{\Delta_m}^0(X, K_X +D + f^*mH)$. To prove this claim, we fix an effective divisor $\Delta \in \Theta_D^{\mathrm{amp}}$. By Proposition \ref{prop:F-stable-section},
there exists a divisor $\Delta'_m \in \Theta_{D + f^*mH}^{\mathrm{amp}}$ such that
$S_{-}^0(X, K_X +D + f^*mH) = S_{t\Delta'_m}^0(X, K_X +D + f^*mH)$ for any sufficiently small positive rational number $t$. We may take a sufficiently small $s\in \mathbb{Q}^{+}$ such that $D-\Delta - s\Delta'_m$ is ample. Set $\tilde{\Delta}_m = \Delta + s\Delta'_m$. Then for any $t\in \mathbb{Q}^{+}$,
$$S_{t\tilde{\Delta}_m}^0(X, K_X +D + f^*mH) \subseteq S_{ts\Delta'_m}^0(X, K_X +D + f^*mH) \subseteq S_{-}^0(X, K_X +D + f^*mH).$$
By the definition (\ref{def:S0}) we can take a sufficiently small $t_0\in \mathbb{Q}^{+}$ such that both of the above inclusions become equalities; then we may let $\Delta_m=t_0(\Delta + s\Delta'_m)$.
We fix a sufficiently small rational number $t \ll1$ and may assume that
$$S_{-}^0(X_{\eta}, K_{X_{\eta}} + D|_{X_{\eta}}) \subseteq S_{t\Delta_m|_{X_{\eta}}}^0(X_{\eta}, K_{X_{\eta}} + D|_{X_{\eta}}).$$
By the construction of $\Delta_m$, we only need to prove the following Theorem \ref{thm:bir-criterion-1}.
\end{proof}
\begin{thm}\label{thm:bir-criterion-1}
Let the notation and assumptions be as in Theorem \ref{thm:bir-criterion-for-induction}. Assume moreover that $|H|$ is base point free. Let $\Delta$ be an effective $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor on $X$ such that $D -\Delta$ is ample.
(i) If $S_{\Delta|_{X_{\eta}}}^0(X_{\eta}, K_{X_{\eta}} + D|_{X_{\eta}}) \neq 0$, then $S_{\Delta}^0(X, K_X +D + f^*sH) \neq 0$ for any $s \geq d$.
(ii) If $S_{\Delta|_{X_{\eta}}}^0(X_{\eta}, K_{X_{\eta}} + D|_{X_{\eta}})$ is birational, then $S_{\Delta}^0(X, K_X +D + f^*sH)$ is birational for any $s \geq d+1$, and if in addition $S_{\Delta}^0(X, K_{X} +D + df^*H - f^*\tilde{H}) \neq 0$ then $S_{\Delta}^0(X, K_X +D + f^*dH)$ is birational.
\end{thm}
\begin{proof}
Let $\mu: Y \to Y'$ be the map associated to $|H|$. Then $H'= \mu_*H$ is ample, and the linear system $|H'|$ is base point free. Denote by $f': X \to Y'$ the natural morphism. We fit these varieties into the following commutative diagram
$${\small \xymatrix{&X\ar[d]^{f}\ar[rd]^{f'} &\\
&Y\ar[r]^{\mu} &Y'}}.$$
\smallskip
Let
$$\mathcal{F}_{s}^e= f'_*(F^{e}_*\mathcal{O}_X(K_X + \ulcorner p^e(D-\Delta)\urcorner)) \otimes \mathcal{O}_{Y'}(sH')$$
and
$$\mathcal{V}_{s}^e = \mathrm{Im}[f'_*Tr^e_{\Delta}: \mathcal{F}_{s}^e \to f'_*\mathcal{O}_X(K_X + \ulcorner D\urcorner) \otimes \mathcal{O}_{Y'}(sH')]$$
and
$$V_{s}^e = \mathrm{Im}(Tr^e_{\Delta}: H^0(Y', \mathcal{F}_{s}^e) \to H^0(Y', \mathcal{V}_{s}^e)).$$
For sufficiently large $e$ we have natural isomorphisms
$$\mathcal{V}_{s}^e\otimes k(\eta) \cong S_{\Delta|_{X_{\eta}}}^0(X_{\eta}, K_{X_{\eta}} + D|_{X_{\eta}})~\mathrm{and}~ V_{s}^e \cong V_{s}(:=S_{\Delta}^0(X, K_X +D + f^*sH)).$$
\begin{lem}\label{lem:Mumford-reg}
There exists a positive integer $e_0$ such that for any $e>e_0$,
(a) if $s \geq d$ then the sheaf $\mathcal{V}_{s}^e$ is globally generated by $V_{s}$, and
(b) if $s \geq d+1$ then for any closed point $y' \in Y'$ over which $f'$ is flat, the sheaf $\mathcal{I}_{y'}\cdot\mathcal{V}_{s}^e$ is globally generated by $V_{s,y'}=V_{s}\cap H^0(Y',\mathcal{I}_{y'}\cdot\mathcal{V}_{s}^e)$.
\end{lem}
Granted this lemma, under the assumptions of the theorem, the assertion (i) follows from (a); by applying Theorem \ref{thm:bir}, the first part of the assertion (ii) follows from (b) and the second part follows from (a).
\smallskip
We now prove Lemma \ref{lem:Mumford-reg}. By the construction, we only need to verify that $\mathcal{F}_{s}^e$ (resp. $\mathcal{I}_{y'}\cdot\mathcal{F}_{s}^e$) satisfies Mumford regularity if $s\geq d$ (resp. $s \geq d+1$).
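For the reader's convenience, we recall the standard fact behind this reduction (the classical Castelnuovo--Mumford lemma, stated here for the ample and base point free line bundle $\mathcal{O}_{Y'}(H')$): if $\mathcal{G}$ is a coherent sheaf on $Y'$ such that
$$H^i(Y', \mathcal{G}(-iH')) = 0 \quad \text{for all}~ i>0,$$
then $\mathcal{G}$ is generated by its global sections. Thus it suffices to establish vanishings of this type for $\mathcal{F}_{s}^e$ and $\mathcal{I}_{y'}\cdot\mathcal{F}_{s}^e$, which is what we do below.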
Since $D-\Delta$ is an ample $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor, applying relative Fujita vanishing (Theorem \ref{thm:var-fujita-vanishing}) there exists $e_1$ such that
\begin{itemize}
\item[$\diamondsuit$:]
for any $e\geq e_1$ and $i>0$, $R^if'_*(F^{e}_*\mathcal{O}_X(K_X + \ulcorner p^e(D-\Delta)\urcorner)) =0$.
\end{itemize}
Consider the Leray spectral sequence associated to $R\Gamma\circ Rf'_*(F^{e}_*\mathcal{O}_X(K_X + \ulcorner p^e(D-\Delta)\urcorner))$. Applying Fujita vanishing, we see that there
exists $e_0 \geq e_1$ such that for any $e\geq e_0$, if $s\geq d$ then
$$\clubsuit:~H^i(Y', \mathcal{F}_{s}^e(-jH')) \cong H^i(X, K_X + \ulcorner p^e(D-\Delta)\urcorner + (s-j)p^ef'^*H') = 0 ~\mathrm{for}~j\leq d,~i>0.$$
In particular, the sheaf $\mathcal{F}_{s}^e$ satisfies Mumford regularity if $s\geq d$.
Fix $e\geq e_0$. To prove that $\mathcal{I}_{y'}\cdot\mathcal{F}_{s}^e$ satisfies Mumford regularity, we take $d$ general hypersurfaces $H_1', H_2', \cdots, H_d' \in |H'|$ passing through $y'$.
We may assume that $\mathrm{Supp} (\bigcap_{t=1}^{t=d} H_t')$ consists of finitely many isolated points and is contained in the flat locus of $f'$. Then the Koszul complex gives an exact sequence
$$0 \to \mathcal{O}_{Y'}(-\sum_{t=1}^{t=d}H_t') \to \cdots \to \bigoplus_{t=1}^{t=d}\mathcal{O}_{Y'}(-H_t') \to \mathcal{J} \to 0$$
where $\mathcal{J} \subset \mathcal{O}_{Y'}$ is an ideal sheaf such that $\mathrm{Supp}~\mathcal{O}_{Y'}/\mathcal{J} =\mathrm{Supp} ~(\bigcap_{t=1}^{t=d} H_t')$. By the condition ($\diamondsuit$), we know that
the sheaf $\mathcal{F}_{s}^e$ is locally free around $\mathrm{Supp} (\bigcap_{t=1}^{t=d} H_t')$. Tensoring the above exact sequence with $\mathcal{F}_{s}^e$ induces the following exact sequence
$$(*): 0 \to \mathcal{F}_{s}^e(-\sum_{t=1}^{t=d}H_t') \to \cdots \to \bigoplus_{t=1}^{t=d}\mathcal{F}_{s}^e(-H_t') \to \mathcal{J}\cdot\mathcal{F}_{s}^e \to 0$$
Take the cohomology of $(*)$. By a standard application of the spectral sequence, from the condition $\clubsuit$ we deduce that if $s\geq d+1$ then for $i>0$
$$H^i(Y', \mathcal{J}\cdot\mathcal{F}_{s}^e(-iH')) = 0.$$
Let $\tau = \frac{\mathcal{I}_{y'}}{\mathcal{J}}\cdot\mathcal{F}_{s}^e(-iH')$. We have the following exact sequence
$$0 \to \mathcal{J}\cdot\mathcal{F}_{s}^e(-iH') \to \mathcal{I}_{y'}\cdot\mathcal{F}_{s}^e(-iH') \to \tau \to 0.$$
Since $H^i(Y', \tau)=0$ for each $i >0$, by taking the cohomology of the above sequence, we can show that if $s\geq d+1$, then $$H^i(Y', \mathcal{I}_{y'}\cdot\mathcal{F}_{s}^e(-iH')) = 0 ~\mathrm{for~any}~i>0,$$
that is to say, the sheaf $\mathcal{I}_{y'}\cdot\mathcal{F}_{s}^e$ satisfies Mumford regularity.
\end{proof}
\subsection{} Recall that an \emph{irregular variety} $X$ is a smooth projective variety with $q(X):=\dim \mathrm{Pic}^0(X) >0$. For this kind of variety, we can take advantage of the Albanese map and have the following theorem.
\begin{thm}\label{thm:bir-criterion-irr}
Let $X$ be a smooth projective variety with a morphism $a: X \to A$ to an abelian variety. Denote by $f: X \to Y$ the fibration arising from the Stein factorization of $a: X \to A$. Let $D, D_1,D_2$ be three divisors on $X$. Assume that $D$ is nef, big and $\mathbb{Q}$-Cartier.
(i) If $S^0_{-}(X_{\eta}, K_{X_{\eta}} + D_{\eta}) \neq 0$, then for any $\mathcal{P}_{\alpha} \in \mathrm{Pic}^0(A)$, $H^0(X, K_X + \ulcorner D\urcorner + a^*\mathcal{P}_{\alpha}) \neq 0$, and there exists some $\mathcal{P}_{\beta} \in \mathrm{Pic}^0(A)$ such that $S_{-}^0(X, K_X + \ulcorner D\urcorner + a^*\mathcal{P}_{\beta})\neq 0$.
(ii) Assume that $S^0_{-}(X_{\eta}, K_{X_{\eta}} + D_{\eta}) \neq 0$, $D_1$ is integral and that for any $\mathcal{P}_{\alpha} \in \mathrm{Pic}^0(A)$, $|D_1 + a^*\mathcal{P}_{\alpha}| \neq \emptyset$. Then for any $\mathcal{P}_{\alpha_0} \in \mathrm{Pic}^0(A)$, $S^0_{-}(X, K_X + D + D_1 + a^*\mathcal{P}_{\alpha_0}) \neq 0$.
(iii) Assume that $S^0_{-}(X_{\eta}, K_{X_{\eta}} + D_{\eta})$ is birational, both $D_1$ and $D_2$ are integral and that for any $\mathcal{P}_{\alpha} \in \mathrm{Pic}^0(A)$, $|D_i + a^*\mathcal{P}_{\alpha}| \neq \emptyset$. Then for any $\mathcal{P}_{\alpha_0} \in \mathrm{Pic}^0(A)$, $S^0_{-}(X, K_X + D + D_1+D_2 + a^*\mathcal{P}_{\alpha_0})$ is birational.
(iv) Assume that $S^0_{-}(X_{\eta}, K_{X_{\eta}} + D_{\eta})$ is birational, and that $D_1,D_2$ are nef and big $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisors such that $S^0_{-}(X_{\eta}, K_{X_{\eta}} + (D_i)_{\eta}) \neq 0$. Then for any $\mathcal{P}_{\alpha_0} \in \mathrm{Pic}^0(A)$,
$S^0_{-}(X, K_X + D + (K_{X} + \ulcorner D_1\urcorner) + (K_{X} + \ulcorner D_2\urcorner )+ a^*\mathcal{P}_{\alpha_0})$ is birational.
\end{thm}
In \cite{Zhy14} the author proved that for a smooth projective variety $X$ of maximal Albanese dimension and of general type, if in addition the Albanese map is separable, then $|4K_X|$ is birational. In general, if $K_X$ is big then there exists a nef and big $\mathbb{Q}$-divisor $D \leq K_X$, so we can apply the theorem above and obtain the following result.
\begin{cor}
Let $X$ be a smooth projective variety of maximal Albanese dimension and of general type. Then $S^0_{-}(X, K_X + 5K_{X})$ is birational.
\end{cor}
Before proceeding with the proof, let us recall some notions and results about the Fourier-Mukai transform and generic vanishing sheaves developed by Pareschi and Popa \cite{PP03}.
Let $A$ be an abelian variety of dimension $d$, $\hat{A}= \mathrm{Pic}^0(A)$ and $\mathcal{P}$ the Poincar\'{e} line bundle on
$A \times \hat{A}$. Let $p,q$ denote the projections from $A \times \hat{A}$ to $A, \hat{A}$ respectively. The \emph{Fourier-Mukai transform} $R\Phi_{\mathcal{P}}: D^b(A) \rightarrow D^b(\hat{A})$ w.r.t. $\mathcal{P}$ is defined as
$$R\Phi_{\mathcal{P}}(-) := Rq_*(Lp^*(-)\otimes \mathcal{P})$$
which is a right derived functor. For a coherent sheaf $\mathcal{F}$ on $A$, let
$$D_A(\mathcal{F})= R\mathcal{H}om(\mathcal{F}, \mathcal{O}_A[d])~\mathrm{and}~\widehat{R\Delta(\mathcal{F})} = R\Phi_{\mathcal{P}}(D_A(\mathcal{F})).$$
\smallskip
\begin{defn}\label{defgv}
Given a coherent sheaf $\mathcal{F}$ on $A$, its \emph{$i$-th cohomological support locus} is defined as
$$V^i(\mathcal{F}) := \{\alpha \in \hat{A} \mid h^i(\mathcal{F} \otimes \mathcal{P}_\alpha) > 0\}.$$
The number $gv(\mathcal{F}) := \min_{i>0}\{\mathrm{codim}_{\hat{A}}V^i(\mathcal{F}) - i\}$ is called the \emph{generic vanishing index} of $\mathcal{F}$, and
we say $\mathcal{F}$ is a \emph{GV sheaf} (resp. an \emph{$M$-regular sheaf}) if $gv(\mathcal{F}) \geq 0$ (resp. $>0$). If $V^i(\mathcal{F}) = \emptyset$ for any $i>0$ then we say $\mathcal{F}$ is an \emph{$IT^0$ sheaf}.
We say $\mathcal{F}$ is \emph{continuously globally generated} (CGG) if the sum of the evaluation maps
$$ev_{U,\mathcal{F}} : ~\bigoplus_{\alpha \in U}H^0(\mathcal{F} \otimes \mathcal{P}_\alpha) \otimes \mathcal{P}_\alpha^{-1} \rightarrow \mathcal{F}$$
is surjective for any dense subset $U \subset \mathrm{Pic}^0(A)$.
\end{defn}
\begin{prop}\label{prop:gv-M-reg}
Let $\mathcal{F}$ be a coherent sheaf on $A$.
(i) The sheaf $\mathcal{F}$ is a GV sheaf if and only if $\mathrm{codim}_{\hat{A}}(\mathrm{Supp}R^i\Phi_{\mathcal{P}}(\mathcal{F})) \geq i$ for any $i>0$, and if and only if $\widehat{R\Delta(\mathcal{F})}$ is quasi-isomorphic to a coherent sheaf on $\hat{A}$.
(ii) The sheaf $\mathcal{F}$ is M-regular if and only if $\mathrm{codim}_{\hat{A}}(\mathrm{Supp}R^i\Phi_{\mathcal{P}}(\mathcal{F})) > i$ for any $i>0$, and if and only if $\widehat{R\Delta(\mathcal{F})}$ is quasi-isomorphic to a torsion free coherent sheaf on $\hat{A}$.
\end{prop}
\begin{proof}
We refer the reader to \cite[Sec.2]{PP11} for the proof.
\end{proof}
\begin{lem}[\cite{BLNP}, Lemma 4.6]\label{lem:coker}
Let $\mathcal{F}$ be a GV-sheaf on $A$. Let $L$ be an ample line bundle on $A$. Then,
for all sufficiently large $n \in \mathbb{N}$, and for any subset $T \subseteq \mathrm{Pic}^0(A)$, the Fourier-Mukai transform $\Phi_{\mathcal{P}}$
induces a canonical isomorphism
$$H^0(A, \mathrm{coker}~ev_{T,\mathcal{F}}\otimes L^n) \cong (\mathrm{ker} ~\psi_{T,\mathcal{F}})^{\vee},$$
where $\psi_{T,\mathcal{F}}$ is the natural evaluation map defined as follows
$$\psi_{T,\mathcal{F}}: \mathrm{Hom}(\widehat{L^n},\widehat{R\Delta(\mathcal{F})}) \to \prod_{\alpha \in T}\mathcal{H}om(\widehat{L^n},\widehat{R\Delta(\mathcal{F})}) \otimes k(\alpha).$$
\end{lem}
\begin{cor}\label{cor:generation}
Let $\mathcal{F}$ be a coherent sheaf on $A$.
(i) If $\mathcal{F}$ is GV then there exist $\alpha_1, \alpha_2, \cdots, \alpha_m \in \hat{A}$ such that the evaluation homomorphism
$$\bigoplus_{i=1}^m H^0(A, \mathcal{F}\otimes \mathcal{P}_{\alpha_i}) \otimes \mathcal{P}_{\alpha_i}^{-1} \to \mathcal{F}$$
is surjective.
(ii) If $\mathcal{F}$ is $M$-regular, then it is CGG, in particular for any dense subset $V \subseteq \hat{A}$, there exist $\alpha_1, \alpha_2, \cdots, \alpha_m \in V$ such that the evaluation homomorphism
$$\bigoplus_{i=1}^m H^0(A, \mathcal{F}\otimes \mathcal{P}_{\alpha_i}) \otimes \mathcal{P}_{\alpha_i}^{-1} \to \mathcal{F}$$
is surjective.
\end{cor}
\begin{proof}
(i) Take an ample line bundle $L$ on $A$ and a sufficiently large number $n$ such that $\mathrm{coker}~ev_{\hat{A},\mathcal{F}}\otimes L^n$ is globally generated. As $L^n$ is an $IT^0$ sheaf, $\widehat{L^n}$ is a locally free sheaf on $\hat{A}$. Hence, applying Lemma \ref{lem:coker} with $T=\hat{A}$, we get $\mathrm{ker} ~\psi_{\hat{A},\mathcal{F}} =0$. This implies $H^0(A, (\mathrm{coker}~ev_{\hat{A},\mathcal{F}})\otimes L^n) =0$ by Lemma \ref{lem:coker}, hence $(\mathrm{coker}~ev_{\hat{A},\mathcal{F}})\otimes L^n = 0$, in other words,
$$ev_{\hat{A},\mathcal{F}}: \oplus_{\alpha \in \hat{A}}H^0(A, \mathcal{F} \otimes \mathcal{P}_\alpha) \otimes (\mathcal{P}_\alpha^{-1}) \rightarrow \mathcal{F}$$
is surjective. By Noetherian induction, we can find finitely many $\alpha_1, \alpha_2, \cdots, \alpha_m \in \hat{A}$ satisfying the requirement of (i).
(ii) If $\mathcal{F}$ is $M$-regular, then $\widehat{R\Delta(\mathcal{F})}$ is torsion free by Proposition \ref{prop:gv-M-reg}. Applying Lemma \ref{lem:coker} again by setting $T=V$, since $T$ is dense, we see that $\mathrm{ker} ~\psi_{T,\mathcal{F}} =0$. We can prove (ii) by Noetherian induction as in (i).
\end{proof}
\begin{prop}\label{prop:tensor-of-gv} Let $\mathcal{F}$ and $\mathcal{E}$ be two coherent sheaves on $A$.
If $\mathcal{F}$ is GV and $\mathcal{E}$ is CGG, then the tensor product $\mathcal{F} \otimes \mathcal{E}$ is CGG.
\end{prop}
\begin{proof}
By Corollary \ref{cor:generation}, $\mathcal{F}$ is a quotient of a direct sum of copies of the line bundles $\mathcal{P}_{\alpha_i}^{-1}$ for some $\alpha_1, \alpha_2, \cdots, \alpha_m \in \hat{A}$. Since each $\mathcal{E}\otimes\mathcal{P}_{\alpha_i}^{-1}$ is CGG, the tensor product $\mathcal{F} \otimes \mathcal{E}$, as a quotient of a direct sum of such sheaves, is CGG too.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:bir-criterion-irr}]
Since $D$ is nef and big, by Proposition \ref{prop:F-stable-section} (iii) we can take $\Delta \geq 0$ such that
\begin{itemize}
\item[(a)] $S_{-}^0(X_{\eta}, K_{X_{\eta}} + D|_{X_{\eta}}) \subseteq S_{\Delta}^0(X_{\eta}, K_{X_{\eta}} + D|_{X_{\eta}})$ and
\item[(b)] $D- \Delta$ is ample, $p^g(D-\Delta)$ is integral for some $g$, and $S_{\Delta}^0(X, K_{X} + D + a^*\mathcal{P}_{\alpha})\subseteq S_{-}^0(X, K_{X} + D + a^*\mathcal{P}_{\alpha})$ for any $\alpha \in \mathrm{Pic}^0(A)$.
\end{itemize}
Let
$$\mathcal{F}^e= f_*F^{e}_*\mathcal{O}_X(K_X + \ulcorner p^e(D-\Delta)\urcorner)~\mathrm{and}~\mathcal{V}^e= \mathrm{Im}(f_*Tr^e:\mathcal{F}^e \to f_*\mathcal{O}_X(K_X + \ulcorner D\urcorner)).$$
By (a) we always have
\begin{itemize}
\item[(c)] $S^0_{-}(X_{\eta}, K_{X_{\eta}} + D_{\eta}) \subseteq \mathcal{V}^e \otimes k(\eta)$.
\end{itemize}
Let $\pi: Y \to A$ be the natural morphism such that $a=\pi\circ f$, which is finite. Applying $R(\pi\circ f)_*= Ra_*$ to $F^{e}_*\mathcal{O}_X(K_X + \ulcorner p^e(D-\Delta)\urcorner)$ and considering the induced Leray spectral sequence, by Fujita vanishing (Theorem \ref{thm:var-fujita-vanishing}), we can show that for sufficiently large $e$, $\pi_*\mathcal{F}^e$ is an $IT^0$ sheaf.
From now on we fix a sufficiently divisible integer $e >0$ and assume that
\begin{itemize}
\item[(d)] $S_{\Delta}^0(X, K_{X} + D + a^*\mathcal{P}_{\alpha})=S_{\Delta}^{e}(X, K_{X} + D + a^*\mathcal{P}_{\alpha})$ for any $\mathcal{P}_{\alpha} \in \mathrm{Pic}^0(A)$, which is reasonable by Proposition \ref{prop:F-stable-section} (iii); and
\item[(e)] $\pi_*\mathcal{F}^e$ is an $IT^0$ sheaf.
\end{itemize}
(i) Assume $S^0_{-}(X_{\eta}, K_{X_{\eta}} + D_{\eta})\neq 0$. The assertion (c) implies that $\mathrm{rank}~\mathcal{V}^e >0$. By (e), $\pi_*\mathcal{F}^e$ is CGG, and so is its quotient $\pi_*\mathcal{V}^e$; in particular, for general $\alpha \in \hat{A}$ we have $H^0(A, \pi_*\mathcal{V}^e\otimes \mathcal{P}_{\alpha}) \neq 0$. Combining this with the fact that the set $\{\alpha \in \hat{A}|H^0(A, \pi_*\mathcal{V}^e\otimes \mathcal{P}_{\alpha}) = 0\}$ is open, which is a consequence of the semicontinuity theorem (\cite[Sec. 8.3]{FGA05}), we conclude that $H^0(A, \pi_*\mathcal{V}^e\otimes \mathcal{P}_{\alpha}) \neq 0$ for any $\alpha \in \hat{A}$, and thus $H^0(X, K_X+ \ulcorner D\urcorner + a^*\mathcal{P}_{\alpha})\neq 0$. Again since $\pi_*\mathcal{F}^e$ is CGG, there exists some $\beta \in \hat{A}$ such that the trace map
$$Tr^e: H^0(X, F^{e}_*\mathcal{O}_X(K_X + \ulcorner p^e(D-\Delta)\urcorner)\otimes a^*\mathcal{P}_{\beta}) \cong H^0(A, \pi_*\mathcal{F}^e\otimes \mathcal{P}_{\beta}) \to H^0(A, \pi_*\mathcal{V}^e\otimes \mathcal{P}_{\beta})$$
is a nonzero map, which implies by (b,d) that $S_{-}^0(X, K_{X} + D + a^*\mathcal{P}_{\beta}) \neq 0$.
\smallskip
(ii) Fix $\alpha_0 \in \hat{A}$. By (i) we can take $\beta \in \hat{A}$ such that $S_{-}^0(X, K_{X} + D + a^*\mathcal{P}_{\beta}) \neq 0$. By the assumption we are allowed to take a nonzero section $s \in H^0(X, D_1+ a^*\mathcal{P}_{\alpha_0-\beta})$. Then applying Proposition \ref{prop:F-stable-section} (i), we prove (ii) by
$$S_{-}^0(X, K_{X} + D + a^*\mathcal{P}_{\beta})\otimes s \subseteq S_{-}^0(X, K_{X} + D + D_1 + a^*\mathcal{P}_{\alpha_0}).$$
\smallskip
(iii) Assume $S^0_{-}(X_{\eta}, K_{X_{\eta}} + D_{\eta})$ is birational. There exists a nonempty open subset $X'$ of $X$ such that $f^*\mathcal{V}^e \to \mathcal{O}_X(K_X + \ulcorner D\urcorner)$ is surjective over $X'$, and that for every closed point $x \in X'$, $f^*(\empty_x\mathcal{V}^e) \to \mathcal{I}_x\cdot\mathcal{O}_X(K_X + \ulcorner D\urcorner)$ is surjective over $X'$; here $\empty_x\mathcal{V}^e$ denotes the kernel of the natural map $\mathcal{V}^e \to f_*(k(x)\otimes \mathcal{O}_X(K_X + \ulcorner D\urcorner)) \cong k(f(x))$.
From now on fix a closed point $x \in X'$. We define $\empty_x\mathcal{F}^e$ by the following commutative diagram of exact sequences
{\small $$\xymatrix{
&0\ar[r] &\empty_x\mathcal{F}^e \ar[r]\ar[d]^{\gamma_x} &\mathcal{F}^e \ar[r]\ar[d]^{\gamma} &k(f(x)) \ar[r]\ar@{=}[d] &0\\
&0\ar[r] &\empty_x\mathcal{V}^e \ar[r]&\mathcal{V}^e \ar[r] &k(f(x)) \ar[r] &0}$$}
where $\gamma_x$ is surjective since $\gamma$ is.
\begin{lem}
$\pi_*(\empty_x\mathcal{F}^e)$ is a GV-sheaf.
\end{lem}
\begin{proof}
Since $\pi$ is a finite morphism, the following sequence is exact
$$0 \to \pi_*(\empty_x\mathcal{F}^e) \to \pi_*\mathcal{F}^e \to k(a(x))\cong k \to 0.$$
Tensoring the above sequence with $\mathcal{P}_{\alpha} \in \mathrm{Pic}^0(A)$ and taking cohomology we obtain a long exact sequence, then since $\pi_*\mathcal{F}^e$ is $IT^0$ we deduce that
\begin{itemize}
\item
for~$i\geq 2$, $H^i(A, (\pi_*(\empty_x\mathcal{F}^e))\otimes \mathcal{P}_{\alpha}) =0$~which means $V^i(\pi_*(\empty_x\mathcal{F}^e)) = \emptyset$, and
\item
$H^1(A, \pi_*(\empty_x\mathcal{F}^e)\otimes \mathcal{P}_{\alpha})= \mathrm{coker} (H^0(A, \pi_*\mathcal{F}^e\otimes \mathcal{P}_{\alpha}) \to H^0(A, k(a(x))\otimes \mathcal{P}_{\alpha})\cong k)$.
\end{itemize}
Then since $\pi_*\mathcal{F}^e \to k(a(x))$ is surjective and $\pi_*\mathcal{F}^e$ is CGG, there exists $\alpha \in \hat{A}$ such that the evaluation map $H^0(A, \pi_*\mathcal{F}^e\otimes \mathcal{P}_{\alpha}) \to k(a(x))$ is surjective. Therefore, the closed subset
$$\{\alpha \in \hat{A}|\mathrm{the ~evaluation ~map}~H^0(A, \pi_*\mathcal{F}^e\otimes \mathcal{P}_{\alpha}) \to k~\mathrm{is ~zero}\}$$
has codimension at least one in $\hat{A}$, thus $\pi_*(\empty_x\mathcal{F}^e)$ is a GV-sheaf.
\end{proof}
By Corollary \ref{cor:generation}, there exist $\alpha_1, \alpha_2, \cdots, \alpha_m \in \hat{A}$ such that the evaluation homomorphism
$$\bigoplus_{j=1}^m H^0(A, \pi_*(\empty_x\mathcal{F}^e)\otimes \mathcal{P}_{\alpha_j}) \otimes \mathcal{P}_{\alpha_j}^{-1} \to \pi_*(\empty_x\mathcal{F}^e)$$
is surjective. Notice that for $\alpha \in \mathrm{Pic}^0(A)$, by (d) the image of the composition map
\begin{align*}
H^0(A, \pi_*(\empty_x\mathcal{F}^e)\otimes \mathcal{P}_{\alpha}) &\hookrightarrow H^0(A, \pi_*\mathcal{F}^e\otimes \mathcal{P}_{\alpha}) \\
&\cong H^0(X, \mathcal{O}_X(K_X + \ulcorner p^e(D-\Delta)\urcorner + p^ea^*\mathcal{P}_{\alpha})) \\
&\xrightarrow{Tr^e} H^0(X, \mathcal{O}_X(K_X + \ulcorner D\urcorner + a^*\mathcal{P}_{\alpha}))
\end{align*}
coincides with $\empty_xS^0_{\Delta}(X, K_X + D + a^*\mathcal{P}_{\alpha})$ (the subspace of $S^0_{\Delta}(X, K_X + D + a^*\mathcal{P}_{\alpha})$ consisting of those sections vanishing at $x$), and that
$$a^*(\pi_*(\empty_x\mathcal{F}^e)) \xrightarrow{Tr^e} f^*(\empty_x\mathcal{V}^e) \to \mathcal{I}_x\cdot\mathcal{O}_X(K_X + \ulcorner D\urcorner)$$
is surjective on $X'$. Then we can conclude
$$\bigoplus_{j=1}^m (\empty_xS^0_{\Delta}(X, K_X + D + a^*\mathcal{P}_{\alpha_j})\otimes a^*\mathcal{P}_{-\alpha_j} )\to \mathcal{I}_x\cdot\mathcal{O}_X(K_X + \ulcorner D\urcorner)$$
is surjective on $X'$.
Now fix $\alpha_0 \in \hat{A}$. We can take a nonempty open subset $\hat{V} \subseteq \hat{A}$ such that for $i=1,2$, $\Phi_{\mathcal{P}}(a_*\mathcal{O}_X(D_i))$ is locally free over $\hat{V}$ and for any $\alpha \in \hat{V}$ $$\Phi_{\mathcal{P}}(a_*\mathcal{O}_X(D_i))\otimes k(\alpha) \cong H^0(X, \mathcal{O}_X(D_i)\otimes a^*\mathcal{P}_{\alpha}).$$
Let $T_i = \cap_{\alpha \in \hat{V}} \mathrm{Bs}|D_i + a^*\mathcal{P}_{\alpha}|$ and $X'' = X' \setminus (T_1 \cup T_2)$.
Observe the following fact:
\begin{itemize}
\item[(f)] for a closed point $z \in X''$ the set
$\hat{V}_i=\{\alpha \in \hat{V}| z\notin \mathrm{Bs}|D_i + a^*\mathcal{P}_{\alpha}|\}$
is a nonempty open subset of $\hat{V}$.
\end{itemize}
To prove that $S^0_{-}(X, K_X + \ulcorner D\urcorner + D_1 + D_2 + a^*\mathcal{P}_{\alpha_0})$ is birational, we only need to verify that for any $x \in X''$, $\mathcal{I}_x\cdot\mathcal{O}_X(K_X + \ulcorner D\urcorner + D_1 + D_2 + a^*\mathcal{P}_{\alpha_0})$ is globally generated over $X''$ by $\empty_xS^0_{-}(X, K_X + \ulcorner D\urcorner + D_1 + D_2 + a^*\mathcal{P}_{\alpha_0})$. Fix a closed point $z \in X''$. Then by (f), we know that for general $\beta_1, \cdots, \beta_m \in \hat{V}$,
$$z \notin\mathrm{Bs}|D_1 + a^*\mathcal{P}_{\alpha_0 - \alpha_j - \beta_j}| \cup \mathrm{Bs}|D_2 + a^*\mathcal{P}_{\beta_j}|,~j=1,2, \cdots, m.$$
For any $j =1,2, \cdots, m$, we may take
$$s_j \in H^0(X, D_1 + a^*\mathcal{P}_{\alpha_0 - \alpha_j - \beta_j})~\mathrm{and}~t_j \in H^0(X, D_2 + a^*\mathcal{P}_{\beta_j})$$
such that $s_j(z) \neq 0$ and $t_j(z) \neq 0$. Then
$$\empty_xS^0_{\Delta}(X, K_X + D + a^*\mathcal{P}_{\alpha_j})\otimes s_j\otimes t_j \subseteq~ \empty_xS^0_{\Delta}(X, K_X + D + D_1 +D_2 +a^*\mathcal{P}_{\alpha_0}).$$
It follows that around $z$, $\sum_{j}~\empty_xS^0_{\Delta}(X, K_X + D + a^*\mathcal{P}_{\alpha_j})\otimes s_j\otimes t_j$ generates the sheaf $\mathcal{I}_x\cdot\mathcal{O}_X(K_X + \ulcorner D\urcorner + D_1 + D_2 + a^*\mathcal{P}_{\alpha_0})$. This finishes the proof of assertion (iii).
\smallskip
(iv) Applying the assertion (i), the assumptions on $D_i$ imply that for any $\alpha \in \hat{A}$, $H^0(X, K_X +\ulcorner D_i\urcorner + a^*\mathcal{P}_{\alpha}) \neq 0$. Then applying the assertion (iii) we get (iv).
\end{proof}
\section{Frobenius stable pluricanonical systems on curves and surfaces}\label{sec:dim2}
In this section we will study Frobenius stable pluricanonical systems on curves and surfaces in positive characteristic.
\begin{thm}\label{thm:eff-curve}
Let $X$ be a normal projective curve over an algebraically closed field $k$ of characteristic $p$. Let $D$ be a $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor on $X$. If $\deg D >1$ then $S_{-}^0(X, K_X + D)$ is base point free; and if $\deg D >2$ then $S_{-}^0(X, K_X + D)$ is very ample.
\smallskip
In particular, if $g(X) \geq 2$ then $S_{-}^0(X, K_X + K_X) \neq 0$ and $S_{-}^0(X, K_X + nK_X)$ is very ample for $n\geq 2$.
\end{thm}
\begin{proof} We may replace $D$ with a smaller divisor which still satisfies the conditions of the theorem and whose index is a power of $p$. So by Proposition \ref{prop:F-stable-section} (iii) we only need to prove the assertion for $S^0(X, K_X + D)$.
By direct local computations, we can show that
\begin{itemize}
\item
the trace map $Tr^e: F^e_*\mathcal{O}_X(K_X + \ulcorner p^eD\urcorner) \to \mathcal{O}_X(K_X + \ulcorner D\urcorner)$ is surjective, and for a closed point $x \in X$, the kernel of the composition map
$$F^e_*\mathcal{O}_X(K_X + \ulcorner p^eD\urcorner) \to \mathcal{O}_X(K_X + \ulcorner D\urcorner) \to k(x)\otimes \mathcal{O}_X(K_X + \ulcorner D\urcorner) \cong k(x)$$
is $F^e_*\mathcal{O}_X(K_X + \ulcorner p^eD\urcorner - q_{e,x} x)$ for some integer $q_{e,x} \in [1,p^e]$.
\end{itemize}
It follows that for any closed point $x \in X$, the following commutative diagram of exact sequences holds
{\small$$\xymatrix@C=0.5cm{
&0\ar[r] &F^e_*\mathcal{O}_X(K_X + \ulcorner p^eD\urcorner - q_{e,x}x) \ar[r]\ar[d] &F^e_*\mathcal{O}_X(K_X + \ulcorner p^eD\urcorner - (q_{e,x}-1)x)\ar[r]\ar[d] &k(x) \ar[r]\ar@{=}[d] &0\\
&0\ar[r] &\mathcal{O}_X(K_X + \ulcorner D\urcorner -x) \ar[r]\ar[d] &\mathcal{O}_X(K_X + \ulcorner D\urcorner) \ar[r]\ar[d] &k(x) \ar[r] &0\\
& &0 &0 & &
}$$}
First assume $\deg D >1$. Then for any closed point $x \in X$ and any sufficiently large $e$, $H^1(X, F^e_*\mathcal{O}_X(K_X + \ulcorner p^eD\urcorner - q_{e,x}x)) = 0$. By taking cohomology of the first row of the diagram above, we show that $\mathcal{O}_X(K_X + \ulcorner D\urcorner)$ is globally generated by $S^0(X, K_X + D)$ at $x$. This proves that $S^0(X, K_X + D) \neq 0$ and is base point free.
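One way to see the vanishing used here (a remark for the reader's convenience): since $F^e$ is a finite morphism, $H^1(X, F^e_*\mathcal{O}_X(K_X + \ulcorner p^eD\urcorner - q_{e,x}x)) \cong H^1(X, \mathcal{O}_X(K_X + \ulcorner p^eD\urcorner - q_{e,x}x))$, and
$$\deg(\ulcorner p^eD\urcorner - q_{e,x}x) \geq p^e\deg D - p^e = p^e(\deg D - 1) > 0,$$
so this group vanishes by Serre duality.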
Next assume $\deg D >2$. Fix a closed point $z \in X$ and set $D'= D-z$. By replacing $D$ with $D'$, the argument in the previous paragraph shows that the sheaf $\mathcal{O}_X(K_X + \ulcorner D\urcorner - z)$ is globally generated by $S^0(X, K_X + D-z)$.
From this we conclude that $S^0(X, K_X + D)$ is very ample.
\smallskip
Finally the remaining assertion follows from the fact that $\deg K_X \geq 2$ if $X$ is of general type.
\end{proof}
\begin{thm}\label{thm:eff-surface}
Let $X$ be a smooth surface over an algebraically closed field $k$ of characteristic $p$. Let $a: X \to A$ be the Albanese map (which is trivial if $q(X)=0$), and let $f:X \to Y$ be the fibration induced by the Stein factorization of $a$. Let $D$ be a nef and big Cartier divisor and $D'$ a big $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor on $X$.
\smallskip
Case (i) $q(X) =0$. Let $r$ be a natural number such that $(rD-K_X)\cdot D \geq 0$. Then $S_{-}^0(X, K_X + (r+2)D + D') \neq 0$, and $S_{-}^0(X, K_X + (2r+4)D + D')$ is birational.
\smallskip
Case (ii) $q(X) >0$. Fix $\mathcal{P}_\alpha \in \mathrm{Pic}^0(A)$.
Case (ii-1) $\dim Y =2$. Then $S_{-}^0(X, K_X + D' + (K_X+ D) + a^*\mathcal{P}_\alpha) \neq 0$, and $S_{-}^0(X, K_X + D'+ 2(K_X+ D) + a^*\mathcal{P}_\alpha)$ is birational.
Case (ii-2) $\dim Y =1$. If $\deg D|_{X_{\eta}} \geq 2$, then $S_{-}^0(X, K_X + D + D' + (K_X+ D) + a^*\mathcal{P}_\alpha) \neq 0$, and $S_{-}^0(X, K_X + D + D' + 2(K_X+ D) + a^*\mathcal{P}_\alpha)$ is birational; if $\deg D|_{X_{\eta}} = 1$, then $S_{-}^0(X, K_X + D + D' + (K_X+ 2D)+ a^*\mathcal{P}_\alpha) \neq 0$, and $S_{-}^0(X, K_X + 2D + D' + 2(K_X+ 2D)+ a^*\mathcal{P}_\alpha)$ is birational.
\smallskip
Moreover, if $X$ is of general type then $S_{-}^0(X, K_X + nK_X) \neq 0$ for $n \geq 4$, and $S_{-}^0(X, K_X + nK_X)$ is birational for $n\geq 7$.
\end{thm}
\begin{proof} We may replace $D'$ with a smaller divisor to assume it is nef.
\smallskip
Case (i). In this case we have $h^1(\mathcal{O}_X) - h^2(\mathcal{O}_X) \leq q(X) =0$ (see for example \cite[Remark 9.5.15, 9.5.25]{FGA05}), thus $\chi(\mathcal{O}_X) >0$. And by the assumption $(rD-K_X)\cdot D \geq 0$, we have $h^2(X,(r+1)D) = h^0(X, K_X-(r+1)D)=0$. Applying Riemann-Roch formula we get
$$h^0(X,(r+1)D) \geq \chi(\mathcal{O}_X((r+1)D)) = \frac{((r+1)D-K_X)\cdot (r+1)D}{2} + \chi(\mathcal{O}_X) \geq 2.$$
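As a check on the last inequality (not needed elsewhere): since $(rD-K_X)\cdot D \geq 0$ and $D^2 \geq 1$ ($D$ being nef, big and Cartier on a surface),
$$((r+1)D - K_X)\cdot (r+1)D = (r+1)\big((rD - K_X)\cdot D + D^2\big) \geq (r+1)D^2 \geq 1,$$
and since $\chi(\mathcal{O}_X) \geq 1$ and the Euler characteristic of a line bundle is an integer, the displayed quantity is indeed at least $2$.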
Remark that if we blow up $X$ and replace $D$ with the pullback, it still holds that $(rD-K_X )\cdot D \geq 0$, hence we are allowed to blow up $X$ in the proof. Take a linear system $|V| \subseteq |(r+1)D|$ of dimension one. If necessary blowing up $X$, we may assume $|V|= |H| + E$ where $|H|$ is free of base point and defines a fibration $f: X \to Y=\mathbb{P}^1$. Since
$\dim Y=1$, by results of Sec. \ref{sec:bc}, $X_{\bar{\eta}}$ is integral. Denote by $Z_{\bar{\eta}}$ the normalization of $X_{\bar{\eta}}$. Since $\deg (D + D')|_{Z_{\bar{\eta}}} >1$ and $\deg (2D + D')|_{Z_{\bar{\eta}}} >2$, by Theorem \ref{thm:eff-curve} we know that $S_{-}^0(Z_{\bar{\eta}}, K_{Z_{\bar{\eta}}} + (D + D')|_{Z_{\bar{\eta}}}) \neq 0$ and $S_{-}^0(Z_{\bar{\eta}}, K_{Z_{\bar{\eta}}} + (2D + D')|_{Z_{\bar{\eta}}})$ is birational. Combining Theorems \ref{thm:bir-geo-gen} and \ref{thm:bir-criterion-1}, it follows that $S_{-}^0(X, K_X + H + D+ D') \neq 0$ and $S_{-}^0(X, K_X + 2H + 2D+ D')$ is birational. This statement still holds if $H$ is replaced with $(r+1)D$ by Proposition \ref{prop:F-stable-section} (i).
\smallskip
Case (ii). In this case, we denote by $Z_{\bar{\eta}}$ the normalization of $X_{\bar{\eta}}^{\mathrm{red}}$.
If $\dim Z_{\bar{\eta}} = 0$ then both $S_{-}^0(Z_{\bar{\eta}}, K_{Z_{\bar{\eta}}} + D|_{Z_{\bar{\eta}}})$ and $S_{-}^0(Z_{\bar{\eta}}, K_{Z_{\bar{\eta}}} + D'|_{Z_{\bar{\eta}}})$ are birational. Applying Theorem \ref{thm:bir-criterion-irr} (i), (ii) and (iv), we can show the claimed result in Case (ii-1).
Now assume $\dim Z_{\bar{\eta}} = 1$. We know that $X_{\bar{\eta}}$ is integral, which implies $\deg D|_{X_{\eta}} = \deg D|_{Z_{\bar{\eta}}}$. Applying Theorem \ref{thm:eff-curve} we obtain that
\begin{itemize}
\item
$S_{-}^0(Z_{\bar{\eta}}, K_{Z_{\bar{\eta}}} + 2D|_{Z_{\bar{\eta}}}) \neq 0$, $S_{-}^0(Z_{\bar{\eta}}, K_{Z_{\bar{\eta}}} + (D+D')|_{Z_{\bar{\eta}}}) \neq 0$, and if moreover $\deg D|_{X_{\eta}} \geq 2$ then $S_{-}^0(Z_{\bar{\eta}}, K_{Z_{\bar{\eta}}} + D|_{Z_{\bar{\eta}}}) \neq 0$; and
\item
$S_{-}^0(Z_{\bar{\eta}}, K_{Z_{\bar{\eta}}} + (2D + D')|_{Z_{\bar{\eta}}})$ is birational, and if $\deg D|_{X_{\eta}} \geq 2$ then $S_{-}^0(Z_{\bar{\eta}}, K_{Z_{\bar{\eta}}} + (D + D')|_{Z_{\bar{\eta}}})$ is birational.
\end{itemize}
Theorem \ref{thm:bir-geo-gen} tells that the corresponding results hold true for $X_{\eta}$. Then we can apply Theorem \ref{thm:bir-criterion-irr} (i), (ii) and (iv) to show the assertion in Case (ii-2).
\smallskip
At the end, let us focus on the pluricanonical system. We assume $X$ is of general type and take a minimal model $\bar{X}$, which is endowed with the birational morphism $\rho:X \to \bar{X}$. Set $D = D'=\rho^*K_{\bar{X}}$. Since $K_X \geq \rho^*K_{\bar{X}}$, it is enough to show the corresponding assertions for $S^0(X, K_X + n\rho^*K_{\bar{X}})$. If $q(X) =0$, then applying results of Case (i) we obtain that $S_{-}^0(X, K_X + n\rho^*K_{\bar{X}}) \neq 0$ for $n\geq 4$, and $S_{-}^0(X, K_X + n\rho^*K_{\bar{X}})$ is birational for $n\geq 7$. If $q(X) >0$ and $\dim Y = 2$, then applying results of Case (ii-1) we obtain that $S_{-}^0(X, K_X + n\rho^*K_{\bar{X}}) \neq 0$ for $n\geq 3$, and $S_{-}^0(X, K_X + n\rho^*K_{\bar{X}})$ is birational for $n\geq 5$. If $q(X) >0$ and $\dim Y = 1$, since the arithmetic genus $p_a(X_{\eta}) \geq 2$ which implies $\deg \rho^*K_{\bar{X}}|_{X_{\eta}}= \deg K_X|_{X_{\eta}} \geq 2$, then applying results of Case (ii-2) we obtain that $S_{-}^0(X, K_X + n\rho^*K_{\bar{X}}) \neq 0$ for $n\geq 4$, and $S_{-}^0(X, K_X + n\rho^*K_{\bar{X}})$ is birational for $n\geq 6$.
\end{proof}
\begin{cor}\label{cor:eff-surface}
Let the notation be as in Theorem \ref{thm:eff-surface}. Assume that the Albanese map $a: X \to A$ factors through a birational contraction $\sigma: X \to \bar{X}$ such that $D = \sigma^*\bar{D}$ is the pull back of a nef and big Cartier divisor $\bar{D}$ on $\bar{X}$.
Assume moreover that there exist an integer $r>0$ and an integral effective $\sigma$-exceptional divisor $E$ such that $D''=rD +E - K_X$ is big. Then for any $\mathcal{P}_{\alpha} \in \mathrm{Pic}^0(A)$, $S_{-}^0(X, K_X + (2+r)D + D'+ a^*\mathcal{P}_{\alpha}) \neq 0$, and $S_{-}^0(X, K_X + (4+2r)D + D'+ a^*\mathcal{P}_{\alpha})$ is birational.
\end{cor}
\begin{proof}
By the assumption we have $(rD - K_X)\cdot D =D''\cdot D > 0$, so when $q(X)=0$ we can apply the first case of Theorem \ref{thm:eff-surface}.
From now on we assume $q(X) >0$ and fix $\mathcal{P}_{\alpha} \in \mathrm{Pic}^0(A)$. We only consider the case $\dim X_{\eta} =1$, because the other case is similar and easier.
We claim that for any
$\mathcal{P}_\beta \in \mathrm{Pic}^0(A)$, $H^0(X, (r+1)D + a^*\mathcal{P}_\beta) \neq 0$. To prove this, we take a nef and big divisor $D''_{-}$ such that $D''_{-} \leq D'' = \ulcorner D''\urcorner$. By combining Theorem \ref{thm:eff-curve} with Theorem \ref{thm:bir-geo-gen}, we can show $S_{-}^0(X_{\eta}, K_{X_{\eta}} + (D + D''_{-})_{\eta}) \neq 0$. Then Theorem \ref{thm:bir-criterion-irr} (i) tells that for any
$\mathcal{P}_\beta \in \mathrm{Pic}^0(A)$, $H^0(X, K_X + D + \ulcorner D''_{-} \urcorner+ a^*\mathcal{P}_\beta) \neq 0$, thus $H^0(X, K_X + D + D'' + a^*\mathcal{P}_\beta) \neq 0$, namely, $H^0(X, (r+1)D + E + a^*\mathcal{P}_\beta)\neq 0$. The claimed nonvanishing follows from the projection formula since $E$ is $\sigma$-exceptional.
Applying Theorem \ref{thm:eff-curve} and \ref{thm:bir-geo-gen} again, we have that
\begin{itemize}
\item[(a)] $S_{-}^0(X_{\eta}, K_{X_{\eta}} + (D + D')_{\eta}) \neq 0$; and
\item[(b)] $S_{-}^0(X_{\eta}, K_{X_{\eta}} + (2D + D')_{\eta})$ is birational.
\end{itemize}
Combining these with the claim above, we can prove the two desired assertions by applying Theorem \ref{thm:bir-criterion-irr} (ii) and (iii) respectively.
\end{proof}
To do induction we need to study the pluricanonical systems on the generic fiber, which is defined over a non-algebraically closed field. To study threefolds, we shall need the following result.
\begin{thm}\label{thm:eff-surface-nonclosed-field}
Let $X$ be a minimal regular surface of general type over an $F$-finite field $K$ and $D'$ a big $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor on $X$. Then $S_{-}^0(X, K_X + 4K_X + D') \neq 0$, and $S_{-}^0(X, K_X + 8K_X + D')$ is birational; and if moreover $p>2$ then $S_{-}^0(X, K_X + 3K_X + D') \neq 0$, and $S_{-}^0(X, K_X + 6K_X + D')$ is birational.
\end{thm}
\begin{proof}
Let $Z_{\bar{K}}$ denote the normalization of $X_{\bar{K}}^{\mathrm{red}}$. By results of Section \ref{sec:canonical-bundle-bc} we may write that $K_X|_{Z_{\bar{K}}} = K_{Z_{\bar{K}}} + (p-1)C$ where $C \geq 0$ is a Weil divisor arising from the conductors. Let $\rho: W_{\bar{K}} \to Z_{\bar{K}}$ be a minimal smooth resolution of singularities. Then $\rho^*K_{Z_{\bar{K}}} \geq K_{W_{\bar{K}}}$, and in particular $K_X|_{W_{\bar{K}}} \geq K_{W_{\bar{K}}} + \rho^*(p-1)C$. Applying Theorem \ref{thm:bir-geo-gen} and Proposition \ref{prop:F-stable-section} (iv), we only need to prove the corresponding assertions for $S_{-}(W_{\bar{K}}, K_{W_{\bar{K}}} + (rK_X + D')|_{W_{\bar{K}}})$.
From the construction above, we can write that
\begin{align}\label{eq:***}~K_X|_{W_{\bar{K}}} \sim K_{W_{\bar{K}}} + (p-1)C' + E\end{align}
where $C'$ is an effective integral divisor on $W_{\bar{K}}$ and $E$ is an effective $\rho$-exceptional divisor. It follows that for $r_1, r_2\geq 0$, there is a natural inclusion
$$S_{-}(W_{\bar{K}}, K_{W_{\bar{K}}} + r_1K_{W_{\bar{K}}} + (r_2K_X + D')|_{W_{\bar{K}}}) \hookrightarrow S_{-}(W_{\bar{K}}, K_{W_{\bar{K}}} + ((r_1+r_2)K_X + D')|_{W_{\bar{K}}}).$$
\smallskip
Case (i) $q(W_{\bar{K}}) = 0$. We set $D= K_X|_{W_{\bar{K}}}$ and apply Theorem \ref{thm:eff-surface}, then we obtain that $S_{-}(W_{\bar{K}}, K_{W_{\bar{K}}} + (3K_X + D')|_{W_{\bar{K}}}) \neq 0$ and $S_{-}(W_{\bar{K}}, K_{W_{\bar{K}}} + (6K_X + D')|_{W_{\bar{K}}})$ is birational.
\smallskip
Case (ii) $q(W_{\bar{K}}) \geq 1$. The Albanese map of $W_{\bar{K}}$ induces a fibration $f: W_{\bar{K}} \to Y$ to a normal variety $Y$ over $\bar{K}$ by the Stein factorization.
Case (ii-1) $\dim Y =2$. We set $D= K_X|_{W_{\bar{K}}}$ and apply Theorem \ref{thm:eff-surface}, then we obtain that $S_{-}(W_{\bar{K}}, K_{W_{\bar{K}}} + D'|_{W_{\bar{K}}}+ K_{W_{\bar{K}}} +K_X|_{W_{\bar{K}}}) \neq 0$
and $S_{-}(W_{\bar{K}}, K_{W_{\bar{K}}} + D'|_{W_{\bar{K}}} + 2(K_{W_{\bar{K}}} +K_X|_{W_{\bar{K}}}))$ is birational, which together imply that $S_{-}(W_{\bar{K}}, K_{W_{\bar{K}}} + (2K_X + D')|_{W_{\bar{K}}}) \neq 0$
and that $S_{-}(W_{\bar{K}}, K_{W_{\bar{K}}} + (4K_X + D')|_{W_{\bar{K}}})$ is birational.
Case (ii-2) $\dim Y =1$. We may set $D= K_X|_{W_{\bar{K}}}$ and apply Theorem \ref{thm:eff-surface}, by similar argument we can show $S_{-}(W_{\bar{K}}, K_{W_{\bar{K}}} + (4K_X + D')|_{W_{\bar{K}}}) \neq 0$, and $S_{-}(W_{\bar{K}}, K_{W_{\bar{K}}} + (8K_X + D')|_{W_{\bar{K}}})$ is birational.
\smallskip
For the remaining assertion, we may assume $p>2$ and only need to consider Case (ii-2).
Let $F$ be the normalization of the geometric generic fiber of $f$. Since each irreducible component of $E$ is a rational curve by Proposition \ref{prop:ex-over-surface}, we have $E\cdot F =0$.
Then by the formula (\ref{eq:***}), it follows that
$$\deg K_X|_F = \deg K_{W_{\bar{K}}}|_F + (p-1) C'\cdot F >0.$$
Since the number $\deg K_{W_{\bar{K}}}|_F \geq -2$ and is even, we see that if $p\geq 3$ then $\deg K_X|_F \geq 2$. Therefore, we can complete the proof by applying Theorem \ref{thm:eff-surface} again.
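Unwinding the parity argument, for completeness: if $\deg K_{W_{\bar{K}}}|_F \geq 2$ there is nothing to prove; if $\deg K_{W_{\bar{K}}}|_F = 0$, then $(p-1)C'\cdot F$ is positive and divisible by $p-1 \geq 2$, hence $\geq 2$; and if $\deg K_{W_{\bar{K}}}|_F = -2$, then $(p-1)C'\cdot F > 2$ and is divisible by $p-1 \geq 2$, hence $\geq 4$. In each case $\deg K_X|_F \geq 2$.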
\end{proof}
\section{A Miyaoka-Yau type inequality for minimal threefolds in positive characteristic}\label{sec:my-ineq}
In practice, for a minimal threefold we need to produce enough global sections of $\mathcal{O}_X(nK_X)$, and when applying the Riemann-Roch formula we need to control $c_2(Z) \cdot \rho^*K_X$, where $\rho: Z \to X$ is a smooth resolution. In characteristic zero, for a smooth variety $X$ with pseudo-effective $K_X$, Miyaoka \cite{Miy85} proved that $\Omega_X^1$ is generically semipositive, which implies that $c_2(X)$ is pseudo-effective. However, in characteristic $p>0$ this is not necessarily true; as is known, Raynaud's surfaces provide examples of surfaces of general type with $c_2(X) <0$.
In \cite{XZ19} the authors studied the minimal threefolds in characteristic $p \geq 5$ with $\nu(X, K_X) \leq 2$ and $c_2(Z) \cdot \rho^*K_X < 0$, and proved that $X$ is uniruled and $K_X$ is semiample. When $K_X$ is big, we prove the following result.
\begin{thm}\label{thm:my-ineq}
Let $X$ be a minimal, terminal, projective threefold of general type over an algebraically closed field $k$ of characteristic $p>0$. Let $\rho: Z \to X$ be a smooth resolution. Then
\begin{align}\label{eq:my-ineq} c_2(Z) \cdot \rho^*K_X + AK_X^3 \geq 0\end{align}
where $A = \frac{(54n_0^2 + 9n_0)p^2 + (9n_0 +\frac{3}{2})p}{(p-1)^2}$ and $n_0$ is the Cartier index of $K_X$.
\end{thm}
Before the proof, let us recall some related results. For a smooth minimal surface $X$ of general type over an algebraically closed field $k$, if char $k=0$, then we have the famous Miyaoka-Yau inequality $3c_2(X) - K_X^2 \geq 0$ (\cite{Miy77}); if char $k=p>0$, \cite{GSZ19} proved
\begin{align}\label{eq:my-ineq-sur}c_2(X) + \frac{5}{8}c_1(X)^2 \geq 0.\end{align}
This inequality is implied by Xiao's slope inequality and is sharp! We may regard the inequality (\ref{eq:my-ineq}) as an analogue of (\ref{eq:my-ineq-sur}) in dimension three and call them \emph{Miyaoka-Yau type inequalities}. Our proof follows Miyaoka's approach \cite{Miy85}, so the main ingredients include the stability of the tangent bundle and Bogomolov inequality proved by \cite{Lan04} in positive characteristic.
\smallskip
From now on, we work over an algebraically closed field $k$ of characteristic $p>0$.
\subsection{Stability}
Let $X$ be a smooth projective variety of dimension $n$, $E$ a torsion free coherent sheaf and $D_1, \cdots, D_{n-1}$ nef $\mathbb{R}$-Cartier divisors on $X$. Assume that $D_1\cdots D_{n-1}$ is nontrivial. Recall that the \emph{slope} of $E$ with respect to $(D_1,...,D_{n-1})$ is defined as
$$\mu(E) = \frac{c_1(E)\cdot D_1 \cdots D_{n-1}}{\rk~E},$$
and $E$ is \emph{semistable} (resp. \emph{stable}) w.r.t. $(D_1,...,D_{n-1})$ if for every nontrivial subsheaf $E' \subset E$
$$\mu(E') \leq \mu(E)~\mathrm{(}\mathrm{resp}. <\mathrm{)}.$$
It is well known that $E$ has \emph{Harder-Narasimhan (HN) filtration} (\cite[Sec. I.1.3]{HL10})
$$0 = E_0 \subset E_1 \subset E_2 \subset \cdots \subset E_m = E$$
such that the $F_i=E_i/E_{i-1}$'s are semistable and $\mu(F_i) >\mu(F_{i+1})$.
Denote by $F: X \cong X^1 \to X$ the Frobenius map. Recall that $F^*E$ has a canonical connection $\nabla_{\mathrm{can}}: F^*E \to F^*E\otimes \Omega_{X^1}^1$ which is determined by
$$\mathrm{for}~ a \in \O_{X^1}~\mathrm{and}~e \in E,~~ \nabla_{\mathrm{can}}: a\otimes e \mapsto da \otimes e,$$
and it yields the Cartier equivalence of categories between
the category of quasi-coherent sheaves on $X$ and the category of quasi-coherent
$\mathcal{O}_{X^1}$-modules with integrable $k$-connections, whose $p$-curvature is zero (cf. \cite[Theorem 5.1]{Kat70}).
It is well known that in positive characteristic the stability is not preserved by the Frobenius pullback. We say that $E$ is \emph{strongly semistable} (resp. \emph{strongly stable}) w.r.t. $(D_1,...,D_{n-1})$ if for every integer $k\geq 0$ the pullback $F^{k*}E$ is semistable (resp. stable). We need to consider the Harder-Narasimhan filtrations of the $F^{k*}E$'s. It is worth mentioning that \emph{$E$ satisfies the property ``fdHN''}: there exists some $k_0$ such that for any $k\geq k_0$ the Harder-Narasimhan filtration of $F^{k*}E$ coincides with the pull-back of that of $F^{k_0*}E$ (\cite[Theorem 2.7]{Lan04}).
Recall the invariants
$$L_{\mathrm{max}}(E) = \lim_{k \to \infty} \frac{\mu_{\max}(F^{k*}(E))}{p^k}~\mathrm{and}~L_{\mathrm{min}}(E) = \lim_{k \to \infty} \frac{\mu_{\min}(F^{k*}(E))}{p^k}$$
and denote $\alpha(E)=\max\{L_{\max}(E)-\mu_{\max}(E), ~\mu_{\min}(E)-L_{\min}(E)\}$.
\subsection{The gap between the slopes}
It is more convenient to consider the stability with respect to ample divisors. The natural method is to perturb the $D_i$'s. Let $H$ be an ample divisor and $\epsilon >0$. Consider the stability with respect to $(D_1 + \epsilon H,\cdots, D_{n-1} + \epsilon H)$. Let $\mu_{i_1, \cdots, i_k; ~H}$ denote the slope with respect to $(D_{i_1},...,D_{i_k}, H, \cdots, H)$ and $\mu_{\epsilon H}$ denote the slope with respect to $(D_1 + \epsilon H,\cdots, D_{n-1} + \epsilon H)$. Then
$$\mu_{\epsilon H}= \mu + \sum_{k <n-1}\epsilon^{n-1-k} \sum_{i_1, \cdots, i_k}\mu_{i_1, \cdots, i_k; ~H}.$$
If $\mathcal{G}_0$ is the maximal $\mu$-slope subsheaf of $E$ and $\mathcal{G}_{\epsilon}$ is the maximal $\mu_{\epsilon H}$-slope subsheaf, then
$$\mu_{\epsilon H}(\mathcal{G}_0) \leq \mu_{\epsilon H, \max}(E) = \mu_{\epsilon H}(\mathcal{G}_{\epsilon}).$$
It follows that
{\small\begin{equation}\label{eq:slope-max}
\begin{split}
\mu_{\max}(E) + &\sum_{k <n-1}\epsilon^{n-1-k} \sum_{i_1, \cdots, i_k}\mu_{i_1, \cdots, i_k; ~H}(\mathcal{G}_0) \\
&\leq \mu_{\epsilon H, \max}(E)= \mu_{\epsilon H}(\mathcal{G}_{\epsilon})
\leq \mu_{\max}(E) + \sum_{k <n-1}\epsilon^{n-1-k} \sum_{i_1, \cdots, i_k}\mu_{i_1, \cdots, i_k; ~H,~ \max}(E).
\end{split}
\end{equation}}
Applying the above inequality to the $F^{k*}E$'s, as $E$ satisfies fdHN with respect to the $\mu$- and $\mu_{i_1, \cdots, i_k; ~H}$-slopes, we can show that
$$\lim_{\epsilon \to 0} L_{\epsilon H, \max}(E) = L_{\max}(E).$$
Similarly we can show $\lim_{\epsilon \to 0} L_{\epsilon H, \min}(E) = L_{\min}(E)$.
We generalize the results of \cite[Corollary 6.2]{Lan04} as follows.
\begin{lem}\label{lem:slope}
Let $X$ be a smooth projective variety of dimension $n$, $E$ a torsion free coherent sheaf of rank $r$ and $D_1, \cdots, D_{n-1}$ nef divisors on $X$. Assume that the intersection $D_1\cdot D_2 \cdots D_{n-1}$ is nontrivial and consider the $(D_1, \cdots, D_{n-1})$-slope. Then
(i) $\alpha(E) \leq \frac{r-1}{p}[L_{\mathrm{max}}(\Omega_X^1)]_{+}$ where $[-]_{+}:=\max\{-, 0\}$;
(ii) if $E$ is semistable then
$$L_{\mathrm{max}}(E) - L_{\mathrm{min}}(E) \leq \frac{r-1}{p}[L_{\mathrm{max}}(\Omega_X^1)]_{+};$$
(iii) if $E$ is semistable and has rank two then
$$L_{\mathrm{max}}(E) - L_{\mathrm{min}}(E) \leq \frac{1}{p}[\mu_{\mathrm{max}}(\Omega_X^1)]_{+}.$$
\end{lem}
\begin{proof}
The assertions (i) and (ii) have been proved in \cite[Corollary 6.2]{Lan04} when the $D_i$'s are ample, and in general we can prove them by perturbing the $D_i$'s to be ample divisors and taking the limit.
To prove (iii), we may assume $E$ is not strongly semistable. Since $E$ satisfies fdHN, there exists a minimal $k\geq 1$ such that $\mu_{\max}(F^{k*}E) = p^kL_{\max}(E)$, thus if $E_1^k$ is the maximal destabilizing subsheaf of $F^{k*}E$ then $E_1^k$ cannot descend to a subsheaf of $F^{(k-1)*}E$.
As a consequence, the canonical connection induces a non-trivial $\mathcal{O}_X$-linear homomorphism $E_1^k \to (F^{k*}E/E_1^k) \otimes \Omega_X^1$. Since rank~$F^{k*}E/E_1^k = 1$, we can show the desired inequality by
$$p^kL_{\mathrm{max}}(E) = \mu(E_1^k) \leq \mu_{\max}((F^{k*}E/E_1^k )\otimes \Omega_X^1) = p^kL_{\mathrm{min}}(E) + \mu_{\mathrm{max}}(\Omega_X^1).$$
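To conclude from the displayed chain (a short remark): it gives $\mu_{\mathrm{max}}(\Omega_X^1) \geq p^k\big(L_{\mathrm{max}}(E) - L_{\mathrm{min}}(E)\big) > 0$, so dividing by $p^k$ and using $k \geq 1$ we obtain $L_{\mathrm{max}}(E) - L_{\mathrm{min}}(E) \leq \frac{1}{p^k}\mu_{\mathrm{max}}(\Omega_X^1) \leq \frac{1}{p}[\mu_{\mathrm{max}}(\Omega_X^1)]_{+}$, as claimed.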
\end{proof}
\subsection{Slopes of $1$-foliations}\label{sec:1-fol}
Let $X$ be a smooth variety over an algebraically closed field $k$ with $\mathrm{char}~ k=p >0$. Recall that a \emph{$1$-foliation} is a saturated subsheaf $\mathcal{F} \subset T_X$ which is involutive (i.e., $[\mathcal{F}, \mathcal{F}] \subset \mathcal{F}$) and $p$-closed (i.e., $\xi^p \in \mathcal{F}, \forall \xi \in \mathcal{F}$).
A $1$-foliation $\mathcal{F}$ induces a finite purely inseparable morphism (cf. \cite{Ek87})
$$\pi: X \to Y = X/\mathcal{F} = \mathrm{Spec} (Ann(\mathcal{F}) := \{a \in \O_X| \xi(a) = 0, \forall \xi \in \mathcal{F}\}),$$
and if $\mathcal{F}$ is a locally free subsheaf of $T_X$, then $Y$ is smooth and
$$K_{X} \sim \pi^*K_{Y} + (p-1)\det \mathcal{F}|_{X}.$$
For a subsheaf of $T_X$ to be a $1$-foliation, we have the following criterion.
\begin{lem}\textup{(\cite[Lemma 1.5]{Lan15})}\label{lem:c-fol}
Let $\mathcal{F}$ be a saturated $\O_X$-submodule of $T_X$. If
$$\mathrm{Hom}_X(\wedge^2\mathcal{F}, T_X/\mathcal{F}) = \mathrm{Hom}_X(F_X^*\mathcal{F}, T_X /\mathcal{F}) = 0$$
then $\mathcal{F}$ is a 1-foliation.
\end{lem}
\begin{thm}\textup{(\cite[Theorem 2.1]{Lan15})}\label{rat-curve}
Let $L$ be a nef $\mathbb{R}$-divisor on a normal projective variety $X$. Let $f : C \to X$ be a non-constant morphism
from a smooth projective curve $C$ such that $X$ is smooth along $f(C)$. Let $\mathcal{F} \subseteq T_X$ be a
$1$-foliation, smooth along $f(C)$. Assume that
$$c_1(\mathcal{F})\cdot C > \frac{K_X\cdot C}{p-1}.$$
Then for every $x \in f(C)$ there is a rational curve $B_x \subseteq X$ passing through $x$ such that
$$L \cdot B_x \leq 2\dim X \frac{pL \cdot C}{(p-1)c_1(\mathcal{F})\cdot C-K_X\cdot C}.$$
\end{thm}
\begin{cor}\label{cor:bound}
Let $X$ be a normal projective variety of dimension $d$ and let $A$ be an ample divisor on $X$. Let $\rho: Z \to X$ be a smooth resolution of singularities. Assume that $K_X$ is a nef $\mathbb{Q}$-Cartier divisor with Cartier index $n_0$ and numerical dimension $\nu(K_X) = l$. Set
$$D_1= \cdots = D_{l} = \rho^*K_X,~D_{l+1} = \cdots = D_{d-1} = \rho^*A.$$
Let $\mu$ be the $(D_1, \cdots, D_{d-1})$-slope.
Let $\mathcal{F} \subseteq T_Z$ be a saturated subsheaf.
(1) If $\mathcal{F}$ is a foliation with rank $r$ and $\mu(\mathcal{F}) > 0$
then either the nef dimension $n(K_X) \leq d-1$, or $K_X$ is big and then
$$r\mu(\mathcal{F}) = c_1(\mathcal{F})\cdot(\rho^*K_X)^{d-1} \leq \frac{2pn_0d +1}{p-1}K_X^d.$$
(2) If $d=3$ and $\mu_{\mathrm{max}}(T_Z) = \mu(\mathcal{F}) >0$ then $\mathcal{F}$ is a foliation.
\end{cor}
\begin{proof}
(1) We will mimic the proof of \cite[Lemma 2.10]{XZ19}.
First assume $K_X$ is big. We may assume $A$ is sufficiently ample such that, for any sufficiently divisible positive integer $m$, the divisor $mK_X + A$ is also very ample on $X$ (see \cite{Ke08}). Take a curve $C_m$ from the intersection of $d-1$ general divisors in $\rho^*|mK_X +A|$. We can assume $\rho_*C_m$ is contained in the smooth locus of $X$ and $\mathcal{F}$ is smooth along $C_m$.
Then
\begin{align*}
K_Z \cdot C_m =K_Z \cdot (m\rho^*K_X + \rho^* A)^{d-1} =K_X^d m^{d-1} + o(m^{d-1})
\end{align*}
and
\begin{align*}
c_1(\mathcal{F}) \cdot C_m = c_1(\mathcal{F}) \cdot (m\rho^*K_X + \rho^* A)^{d-1} =c_1(\mathcal{F})\cdot(\rho^*K_X)^{d-1} m^{d-1} + o(m^{d-1}).
\end{align*}
If $c_1(\mathcal{F}) \cdot C_m \leq \frac{K_Z \cdot C_m}{p-1}$ for sufficiently large $m$, then the desired inequality holds. Otherwise, we set $L = \rho^*n_0K_X$ which is big, then we apply Theorem \ref{rat-curve} and obtain that for general closed point $z \in C_m$, there exists a rational curve $B_z$ passing through $z$ such that
{\small \begin{align*}1\leq L \cdot B_z \leq \frac{2dpL \cdot C_m}{(p-1)c_1(\mathcal{F})\cdot C_m- K_Z\cdot C_m}
&= \frac{2dpn_0K_X^d m^{d-1} + o(m^{d-1})}{((p-1)c_1(\mathcal{F})\cdot(\rho^*K_X)^{d-1} - K_X^d) m^{d-1} + o(m^{d-1})}\\
&= \frac{2dpn_0K_X^d}{(p-1)c_1(\mathcal{F})\cdot(\rho^*K_X)^{d-1} - K_X^d} + o(1).\end{align*}}
Letting $m \to \infty$, this forces $(p-1)c_1(\mathcal{F})\cdot(\rho^*K_X)^{d-1} - K_X^d \leq 2dpn_0K_X^d$, which rearranges to the desired inequality.
If $K_X$ is not big, by similar argument as above we can get a family of rational curves $B_z$ passing through a very general point $z$ of $Z$ such that $\rho^*K_X \cdot B_z=0$ (here we use the fact that $L \cdot B_z$ is a nonnegative integer), which means the nef dimension $n(K_X) \leq d-1$.
\smallskip
(2) By the assumptions $\mu(T_Z) \leq 0$ and $\mu(\mathcal{F}) >0$, we have $\mu(T_Z/\mathcal{F}) <0$. By Lemma \ref{lem:c-fol}, to show $\mathcal{F}$ is a $1$-foliation it is enough to verify
$$\mathrm{Hom}_Z(\wedge^2\mathcal{F}, T_Z/\mathcal{F}) = \mathrm{Hom}_Z(F_Z^*\mathcal{F}, T_Z /\mathcal{F}) = 0.$$
It is trivial when $\mathrm{rank}(\mathcal{F}) = 3$. And when $\mathrm{rank}(\mathcal{F}) = 1$, the above two vanishings follow from $\wedge^2\mathcal{F} = 0$ and $ \mu(F_Z^*\mathcal{F}) = p \mu(\mathcal{F}) > \mu(\mathcal{F}) > \mu_{\max} (T_Z /\mathcal{F})$ respectively.
Now assume $\mathrm{rank}(\mathcal{F}) = 2$. Since $\mu(\mathcal{F}) >0$, we have
$\mu(\wedge^2\mathcal{F}) =2 \mu(\mathcal{F}) > \mu(T_Z/\mathcal{F})$,
which implies the first vanishing $\mathrm{Hom}_Z(\wedge^2\mathcal{F}, T_Z/\mathcal{F}) = 0$.
If $F_Z^*\mathcal{F}$ is semistable, then the second vanishing holds. So we may assume $F_Z^*\mathcal{F}$ is not semistable, then the HN-filtration induces an exact sequence
\begin{equation}\label{ex-seq-1}
0\to \mathcal{F}_1 \to F_Z^*\mathcal{F} \to \mathcal{F}_2 \to 0.
\end{equation}
The canonical connection $\nabla_{\mathrm{can}}$ induces a non-trivial $\mathcal{O}_Z$-linear map $\mathcal{F}_1 \to \mathcal{F}_2 \otimes \Omega_Z^1$, which implies that
\begin{equation}\label{eq:5.1}
\mu(\mathcal{F}_1) - \mu(\mathcal{F}_2) \leq \mu_{\max}(\Omega_Z^1) = -\mu(T_Z /\mathcal{F}).
\end{equation}
By the exact sequence (\ref{ex-seq-1}), we have $\mu(\mathcal{F}_1) + \mu(\mathcal{F}_2) = 2p \mu(\mathcal{F})$. This equality minus the inequality (\ref{eq:5.1}) yields that
$$2\mu(\mathcal{F}_2) \geq 2p \mu(\mathcal{F}) + \mu(T_Z/\mathcal{F}) > 2\mu(T_Z/\mathcal{F})$$
which implies $\mathrm{Hom}_Z(\mathcal{F}_1, T_Z /\mathcal{F}) = \mathrm{Hom}_Z(\mathcal{F}_2, T_Z /\mathcal{F}) = 0$. Then we apply $\mathrm{Hom}_Z(-, T_Z /\mathcal{F})$ to the exact sequence (\ref{ex-seq-1}) and obtain the other vanishing
$\mathrm{Hom}_Z(F_Z^*\mathcal{F}, T_Z /\mathcal{F}) = 0$.
\end{proof}
\subsection{Proof of Theorem \ref{thm:my-ineq}} Before the proof let us recall the following results to estimate the discriminant $\Delta(E) = 2r c_2(E) - (r-1)c_1(E)^2$ where $r = \mathrm{rank}(E)$.
\begin{lem}\label{lem:bgmlv}
Let $X$ be a smooth projective variety of dimension $n$, $E$ a torsion free coherent sheaf of rank $r$ and $D_1, \cdots, D_{n-1}$ nef $\mathbb{R}$-Cartier divisors on $X$. Assume that $D_1\cdot D_2\cdots D_{n-1}$ is nontrivial and $F^{l*}E$ has a filtration
$$0 = E_0 \subset E_1 \subset E_2 \subset \cdots \subset E_m = F^{l*}E$$
with $F_i=E_i/E_{i-1}$ being torsion free. Let $r_i = \mathrm{rank}(F_i)$. Then
(1) $\Delta(E)\cdot D_2\cdots D_{n-1} = \frac{1}{p^{2l}}D_2\cdots D_{n-1}\cdot(\sum\frac{r}{r_i}\Delta(F_i) -\sum r_ir_j(\frac{c_1(F_i)}{r_i}-\frac{c_1(F_j)}{r_j})^2)$;
(2) if $\Delta(F_i)\cdot D_2\cdots D_{n-1} \geq 0$ for $i=1,\cdots,m$, then
$$(D_1^2 \cdot D_2\cdots D_{n-1}) \cdot (\Delta(E)\cdot D_2\cdots D_{n-1}) \geq -r^2(\frac{\max_i\{\mu(F_i)\}}{p^l}- \mu(E))(\mu(E) - \frac{\min_i\{\mu(F_i)\}}{p^l});$$
(3) $(D_1^2 \cdot D_2\cdots D_{n-1}) \cdot (\Delta(E)\cdot D_2\cdots D_{n-1})\geq -r^2(L_{\mathrm{max}}(E)- \mu(E))(\mu(E) - L_{\mathrm{min}}(E))$.
\end{lem}
\begin{proof}
We refer the reader to \cite[p. 263]{Lan04} for the proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:my-ineq}] Let $\mu$ denote the slope with respect to $D_1=D_2=\rho^*K_X$. Then
$$\mu(\Omega_Z^1) = -\mu(T_Z) = \frac{K_X^3}{3}.$$
We separate two cases according to the stability of the tangent bundle $T_Z$.
\smallskip
Case (i) $T_Z$ is semistable. By Lemma \ref{lem:slope} we have
$$L_{\mathrm{max}}(\Omega^1_Z) - L_{\mathrm{min}}(\Omega^1_Z) \leq \frac{2}{p}L_{\mathrm{max}}(\Omega^1_Z)$$
thus $L_{\mathrm{min}}(\Omega^1_Z) \geq (1-\frac{2}{p})L_{\mathrm{max}}(\Omega^1_Z) \geq 0$. Then we can show that $$L_{\mathrm{max}}(\Omega^1_Z) \leq 3\mu(\Omega^1_Z) = K_X^3.$$
Applying Lemma \ref{lem:bgmlv} (3), we obtain
$$6c_2(Z) \cdot \rho^*K_X - 2K_X^3 \geq -9\cdot\frac{2}{3}\frac{1}{3}K_X^3= -2K_X^3$$
which is equivalent to that $c_2(Z) \cdot \rho^*K_X \geq 0$.
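To spell out this step (our reading of the computation): roughly speaking, since $L_{\mathrm{min}}(\Omega^1_Z) \geq 0$, summing the normalized slopes of the Harder-Narasimhan factors of $F^{k*}\Omega^1_Z$, all of which are nonnegative, gives $L_{\mathrm{max}}(\Omega^1_Z) \leq 3\mu(\Omega^1_Z) = K_X^3$, hence
$$L_{\mathrm{max}}(\Omega^1_Z) - \mu(\Omega^1_Z) \leq \tfrac{2}{3}K_X^3 \quad\mathrm{and}\quad \mu(\Omega^1_Z) - L_{\mathrm{min}}(\Omega^1_Z) \leq \mu(\Omega^1_Z) = \tfrac{1}{3}K_X^3.$$
Lemma \ref{lem:bgmlv} (3), applied to $E = \Omega^1_Z$ with $D_1 = D_2 = \rho^*K_X$ and writing, as in the display above, $\Delta(\Omega^1_Z)\cdot \rho^*K_X = 6c_2(Z)\cdot\rho^*K_X - 2K_X^3$, then yields
$$K_X^3\cdot\big(6c_2(Z)\cdot\rho^*K_X - 2K_X^3\big) \geq -9\cdot\tfrac{2}{3}K_X^3\cdot\tfrac{1}{3}K_X^3 = -2(K_X^3)^2,$$
and dividing by $K_X^3 > 0$ recovers the displayed inequality.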
\smallskip
Case (ii) $T_Z$ is not semistable. Consider the HN-filtration
$$0 = \mathcal{F}_0 \subsetneqq \mathcal{F}_1 \subsetneqq \cdots \subsetneqq \mathcal{F}_m = T_Z$$
where $2\leq m \leq 3$, and set $F_i = \mathcal{F}_i/\mathcal{F}_{i-1}$ and $r_i =\mathrm{rank}(F_i)$.
Case (ii-1) $m=3$. Then for each $i=1,2,3$, we have $\mathrm{rank}(F_i) =1$, and $\mu(F_1) + \mu(F_2) + \mu(F_3) = -K_X^3$. Applying Corollary \ref{cor:bound}, we always have $\mu(F_1) \leq \frac{6n_0p+1}{p-1}K_X^3$.
If $\mu(F_2) \geq 0$, then $\mu(F_3) = -K_X^3 - (\mu(F_1) + \mu(F_2)) <0$ and $\mu(\wedge^2\mathcal{F}_2) = \mu(F_1) + \mu(F_2) >0$. It follows that
$$\mathrm{Hom}_Z(\wedge^2\mathcal{F}_2, T_Z /\mathcal{F}_2)= \mathrm{Hom}_Z(F^*F_1, T_Z /\mathcal{F}_2) = \mathrm{Hom}_Z(F^*F_2, T_Z /\mathcal{F}_2) = 0.$$
Then we can apply $\mathrm{Hom}_Z(-, T_Z /\mathcal{F}_2)$ to the exact sequence
$$0\to F^*\mathcal{F}_1 \to F^*\mathcal{F}_2 \to F^*F_2 \to 0$$
and obtain $\mathrm{Hom}_Z(F^*\mathcal{F}_2, T_Z /\mathcal{F}_2) = 0$. By Lemma \ref{lem:c-fol}, we see that $\mathcal{F}_2$ is also a foliation, thus $2\mu(\mathcal{F}_2) \leq \frac{6n_0p+1}{p-1}K_X^3$ by Corollary \ref{cor:bound}. Then it follows that
$$\mu(F_3) = -K_X^3 - 2\mu(\mathcal{F}_2) \geq -K_X^3 -\frac{6n_0p+1}{p-1}K_X^3= -\frac{(6n_0+1)p}{p-1}K_X^3.$$
Applying Lemma \ref{lem:bgmlv} (2) we get
{\small \begin{equation}\label{ineq:my-pre}
\begin{split}
6c_2(T_Z)\cdot \rho^*K_X - 2K_X^3 &\geq -9(\mu(F_1)+ \frac{K_X^3}{3})(- \frac{K_X^3}{3} - \mu(F_3))/K_X^3 \\
&\geq -9(\frac{6n_0p+1}{p-1}K_X^3 + \frac{K_X^3}{3})(- \frac{K_X^3}{3}+ \frac{(6n_0+1)p}{p-1}K_X^3)/K_X^3 \\
&= -\frac{(18^2n_0^2 + 54n_0 +2)p^2 + (54n_0 +5)p + 2}{(p-1)^2}K_X^3.
\end{split}
\end{equation}}
This is equivalent to the desired inequality
\begin{equation}\label{ineq:my}c_2(T_Z)\cdot \rho^*K_X + \frac{(54n_0^2 + 9n_0)p^2 + (9n_0 +\frac{3}{2})p}{(p-1)^2}K_X^3 \geq 0.\end{equation}
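For the reader's convenience, the equivalence can be checked as follows: adding $2K_X^3$ to both sides of (\ref{ineq:my-pre}) and writing $2 = \frac{2p^2-4p+2}{(p-1)^2}$, the right hand side becomes $-\frac{(18^2n_0^2 + 54n_0)p^2 + (54n_0+9)p}{(p-1)^2}K_X^3$; dividing both sides by $6$ then gives exactly (\ref{ineq:my}).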
It is worth mentioning that the equality in (\ref{ineq:my}) is attained only when
$$\mu(F_1) = \frac{6n_0p+1}{p-1}K_X^3, \mu(F_2)=0 ~\mathrm{and}~\mu(F_3)=-\frac{(6n_0+1)p}{p-1}K_X^3.$$
If $\mu(F_2) <0$, then
$$\mu(F_3) = -K_X^3 - 2\mu(\mathcal{F}_2) > -K_X^3 - \mu(F_1) \geq (-K_X^3 -\frac{6n_0p+1}{p-1}K_X^3)= -\frac{(6n_0+1)p}{p-1}K_X^3,$$
and it is easy to verify the strict inequality in (\ref{ineq:my}) by the computation (\ref{ineq:my-pre}).
Case (ii-2) $m=2$. In this case $F_1 \cong \mathcal{F}_1$. We claim that
\begin{equation}\label{ineq:F1}r_1\mu(F_1) \leq \frac{6n_0p+1}{p-1}K_X^3.\end{equation} Indeed, if $\mu(F_1) \leq 0$ then this inequality automatically holds; otherwise, we can apply Corollary \ref{cor:bound}.
It follows from this claim that
\begin{equation}\label{ineq:F2}0 > \mu(T_Z) > \mu(F_2) =\frac{ -K_X^3 - r_1\mu(F_1)}{r_2} \geq -\frac{(6n_0+1)p}{r_2(p-1)}K_X^3.\end{equation}
If both $F_1$ and $F_2$ are strongly semistable, then $\Delta(F_i) \cdot \rho^*K_X \geq 0$ holds for each $i=1,2$ (\cite[Thm. 0.1]{Lan04}), so we can apply Lemma \ref{lem:bgmlv} (2) to verify the inequality (\ref{ineq:my}) by the computation (\ref{ineq:my-pre}).
Assume $F_i$ (one of $F_1, F_2$) is not strongly semistable. Then $\mathrm{rank} (F_i) =2$ and by Lemma \ref{lem:slope}
\begin{equation}\label{ineq:gap}
L_{\mathrm{max}}(F_i) - L_{\mathrm{min}}(F_i) \leq \frac{1}{p}[\mu_{\max}(\Omega^1_Z)]_{+}=- \frac{1}{p}\mu(F_2).
\end{equation}
If it is $F_1$ that is not strongly semistable, then $r_1=2,r_2=1$ and
\begin{equation}\label{eq:F_1}L_{\mathrm{max}}(F_1) + L_{\mathrm{min}}(F_1) = 2\mu(F_1)= -K_X^3 - \mu(F_2).\end{equation}
The equation (\ref{ineq:gap}) plus (\ref{eq:F_1}) yields
$$L_{\mathrm{max}}(F_1) \leq \frac{1}{2}\Big(-K_X^3 + (1+\frac{1}{p})(-\mu(F_2))\Big) < \frac{6n_0p+1}{p-1}K_X^3$$
where the ``$<$'' is due to (\ref{ineq:F2}). And similarly, the equation (\ref{eq:F_1}) minus (\ref{ineq:gap})
yields
$$L_{\mathrm{min}}(F_1) \geq \frac{1}{2}\Big(-K_X^3 - (1-\frac{1}{p})(-\mu(F_2))\Big) \geq - 6n_0K_X^3.$$
Then considering the HN-filtration of $F^{l*}F_1$ for some sufficiently large $l$, which gives a refinement of the filtration
$F^{l*}F_1 \subseteq F^{l*}T_Z$, and applying Lemma \ref{lem:bgmlv} (2) we can show
\begin{align*}
6c_2(T_Z)\cdot \rho^*K_X - 2K_X^3 &\geq -9(L_{\mathrm{max}}(F_1) + \frac{K_X^3}{3})(- \frac{K_X^3}{3}- \min\{L_{\mathrm{min}}(F_1), \mu(F_2)\})/K_X^3 \\
&\geq -9(\frac{6n_0p+1}{p-1}K_X^3 + \frac{K_X^3}{3})(- \frac{K_X^3}{3} +\frac{(6n_0+1)p}{p-1}K_X^3)/K_X^3,
\end{align*}
thus the desired inequality (\ref{ineq:my}) follows.
If it is $F_2$ that is not strongly semistable, then
$$L_{\mathrm{max}}(F_2) + L_{\mathrm{min}}(F_2) = 2\mu(F_2) = -K_X^3- \mu(F_1).$$
This equality plus (\ref{ineq:gap}) yields that
$$L_{\mathrm{max}}(F_2) \leq \mu(F_2) + \frac{1}{2p}(-\mu(F_2)) < 0,$$
and thus
$$L_{\mathrm{min}}(F_2) > 2\mu(F_2) \geq -\frac{(6n_0+1)p}{(p-1)}K_X^3.$$
Similarly as in the previous case, we can show
\begin{align*}
6c_2(T_Z)\cdot \rho^*K_X - 2K_X^3 &\geq -9(\max\{\mu(F_1),L_{\mathrm{max}}(F_2)\} + \frac{K_X^3}{3})(- \frac{K_X^3}{3}- L_{\mathrm{min}}(F_2) )/K_X^3 \\
&> -9(\frac{6n_0p+1}{p-1}K_X^3 + \frac{K_X^3}{3})(- \frac{K_X^3}{3} +\frac{(6n_0+1)p}{(p-1)}K_X^3)/K_X^3
\end{align*}
which infers the desired inequality (\ref{ineq:my}).
\end{proof}
\section{Effectivity of pluricanonical maps for 3-folds}\label{sec:dim3}
Let $X$ be a minimal terminal threefold over an algebraically closed field $k$ of characteristic $p>0$. We aim to find natural numbers $M_1, M_2$ such that $S^0_{-}(X, K_X + mK_X) \neq 0$ if $m\geq M_1$, and that $S^0_{-}(X, K_X + mK_X)$ is birational if $m\geq M_2$. We conclude Theorem \ref{thm:eff-3folds} by separating the following two cases.
\subsection{Regular case}
In this case since $h^1(\O_X) - h^2(\O_X) \leq q(X)=0$ (\cite[Remark 9.5.15, 9.5.25]{FGA05}), we have $$\chi(\O_X) = h^0(\O_X) - (h^1(\O_X) - h^2(\O_X)) - h^3(\O_X) \geq h^0(\O_X) - h^3(\O_X).$$ We also assume that $X$ is Gorenstein, in particular it has only rational singularities.
\begin{lem}\label{lem:van-h2} If $(n-1)(p-1)\geq 6$ then $h^2(X, nK_X) =0$. \end{lem}
\begin{proof}We argue by contradiction. Suppose that $h^2(X, nK_X) \neq 0$.
By Serre duality we have $H^1(X, (1-n)K_X) \neq 0$. Applying Fujita vanishing theorem, we can take a sufficiently ample divisor $H$ such that $$H^i(X, -H -mK_X) \cong H^{3-i}(X, H + (m+1)K_X)^{\vee} =0~\ \ \mathrm{for ~any}~i>0~\mathrm{and}~m\geq 0.$$
Since $X$ has at most isolated singularities, we may assume $H$ is a smooth hypersurface contained in the smooth locus of $X$.
Since $K_X|_H$ is nef and big, for any sufficiently large $m$ we have $h^1(\mathcal{O}_H(-mK_X|_H)) =0$ (see \cite[Thm. 4.3]{XZ19}).
By taking cohomology of the following exact sequence
$$0 \to \mathcal{O}_X(-mK_X -H) \to \mathcal{O}_X(-mK_X) \to \mathcal{O}_H(-mK_X|_H) \to 0$$
we get a long exact sequence and can find a number $m_0$ such that for any $m\geq m_0$, $H^1(X, -mK_X) =0$. As a consequence, there exists a natural number $e$ such that the pullback map of Frobenius map
$$F^*: H^1(X, p^e(1-n)K_X) \to H^1(X, p^{e+1}(1-n)K_X)$$
has nontrivial kernel. Applying \cite[Corollary 4.6]{XZ19}\footnote{The assertion is valid if the assumption that $X$ is smooth is replaced by that $K_X$ is Cartier.}, it must hold that $(p-1)p^e(n-1) - 1 \leq 4$. However, this contradicts our assumption.
\end{proof}
Next we prove
\begin{lem}\label{lem:h0} Assume that $h^0(X, \omega_X) \leq 1$. Set $n_0(2)=13, n_0(3) =10, n_0(5) =9$ and $n_0(p) = 8$ if $p \geq 7$. Then $h^0(X, n_0(p) K_X) \geq 15$ for $p=2,3,5$, and $h^0(X, n_0(p) K_X) \geq 10$ for $p \geq 7$.
\end{lem}
\begin{proof}
Since $h^3(\O_X) = h^0(X, \omega_X) \leq 1$, $\chi(\O_X) \geq 0$. Let $\rho: Z \to X$ be a smooth resolution. By Theorem \ref{thm:my-ineq}, we can set $A(2) = 273$, $A(3) = 151$, $A(5) = 103$ and $A(p) = 90$ for $p\geq 7$ to make sure that $c_2(Z) \cdot \rho^*K_X + A(p)K_X^3 \geq 0$.
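Indeed, with $n_0 = 1$ (as $K_X$ is Cartier) the constant in Theorem \ref{thm:my-ineq} becomes $A = \frac{63p^2 + \frac{21}{2}p}{(p-1)^2}$, which equals $273$ for $p=2$, is less than $150$ for $p=3$, less than $102$ for $p=5$, and less than $88$ for every $p \geq 7$ (the expression is decreasing in $p$); so the values $A(p)$ above are admissible.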
Since $R\rho_*\mathcal{O}_Z \cong \mathcal{O}_X$ (as $X$ has only rational singularities), applying the Riemann-Roch formula we have
\begin{align*}
&h^0(X, nK_X) + h^2(X, nK_X) \\
&\geq \chi(Z, \rho^*\mathcal{O}_X(nK_X)) \\
&= \frac{2n^3 - 3n^2}{12}(\rho^*K_X)^3+ \frac{n}{12}(\rho^*K_X)\cdot(K_Z^2 + c_2(Z)) + \chi(\mathcal{O}_Z)\\
& = \frac{2n^3 - 3n^2}{12}K_X^3 + \frac{n}{12}K_X\cdot(K_X^2 + \rho_*c_2(Z)) + \chi(\mathcal{O}_X) \\
& \geq \frac{2n^3 - 3n^2}{12}K_X^3 + \frac{n}{12}K_X\cdot(K_X^2 + \rho_*c_2(Z)) \geq \frac{n(2n^2 - 3n +1-A(p))}{12}K_X^3.
\end{align*}
Note that for $n = n_0(p)$ we have $(n-1)(p-1) \geq 6$, hence $h^2(X, nK_X) =0$ by Lemma \ref{lem:van-h2}. Then we can verify the lemma by direct computations.
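For instance, assuming only $K_X^3 \geq 1$ (which holds since $K_X$ is nef, big and Cartier), the displayed bound gives
$$h^0(X, n_0(p)K_X) \;\geq\; \frac{13\cdot 27}{12},\ \frac{10\cdot 20}{12},\ \frac{9\cdot 33}{12},\ \frac{8\cdot 15}{12} \quad \text{for } p=2,3,5 \text{ and } p\geq 7 \text{ respectively},$$
that is, at least $29$, $16$, $24$ and $10$, which meets the claimed bounds.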
\end{proof}
If $h^3(\mathcal{O}_X) \leq 1$ we let $n_0 = n_0(p)$ as in Lemma \ref{lem:h0}; otherwise we let $n_0=1$. Let $\phi_{n_0}: X \dashrightarrow \mathbb{P}^N$ denote the $n_0$-canonical map. After a suitable blowup $\rho: Z \to X$ we may assume that in the decomposition $|\rho^*(n_0K_X)| = |H_Z| + E$ into the movable part and the fixed part, the movable part $|H_Z|$ has no base point and hence defines a morphism $\psi: Z \to \mathbb{P}^N$. Let $f: Z \to Y$ be the fibration arising from the Stein factorization of $\psi$. Then $H_Z = f^*H$ for an ample and free divisor $H$ on $Y$.
Case (1) $\dim\phi_{n_0} = 3$. Applying Theorem \ref{thm:bir-criterion-for-induction}, we can prove that for any $l >0$, $S_{-}^0(Z, K_Z + f^*3H + l\rho^*K_X) \neq 0$ and $S_{-}^0(Z, K_Z + f^*4H + l\rho^*K_X)$ is birational. Since $n_0\rho^*K_X \geq f^*H$, applying Proposition \ref{prop:F-stable-section} (i) and (iv) we only need to set $M_1 = 3n_0 +1$ and $M_2 = 4n_0 + 1$.
Case (2) $\dim\phi_{n_0} = 2$. Applying Theorem \ref{thm:eff-curve} and Theorem \ref{thm:bir-geo-gen}, since $K_X$ is Cartier, we have that $S_{-}^0(Z_{\eta}, K_{Z_{\eta}} + 2\rho^*K_X) \neq 0$, and $S_{-}^0(Z_{\eta}, K_{Z_{\eta}} + l\rho^*K_X)$ is birational for $l \geq 3$.
It follows by Theorem \ref{thm:bir-criterion-for-induction} that for any $l >0$, $S_{-}^0(Z, K_Z + f^*2H + l\rho^*K_X) \neq 0$ and $S_{-}^0(Z, K_Z + f^*3H + l\rho^*K_X)$ is birational. Then arguing similarly as in Case (1) we may set $M_1 = 2n_0 + 2$ and $M_2 = 3n_0 + 3$.
Case (3) $\dim\phi_{n_0} = 1$. In this case, $Y = \mathbb{P}^1$ since $q(Z) =0$. Since it suffices to show the result after a base field extension we may assume $k$ is uncountable. Let $F$ denote a general fiber of $f$ and let $\bar{F} = \rho_*(F)$. Note that the Weil divisor $\bar{F}$ is Cartier in codimension two, and the divisor $K_{\bar{F}} \sim (K_X + \bar{F})|_{\bar{F}}$ is $\mathbb{Q}$-Cartier and Cartier in codimension one on $\bar{F}$. Let $G$ be a smooth resolution of $F$. By the adjunction formula we can write that
$$K_G \sim K_{\bar{F}}|_G + E_2' - E_1'$$
where $E_1',E_2'$ are effective divisors without common components. Then $E_2'$ must be exceptional over $\bar{F}$, and we can prove that each of its irreducible components is a rational curve by running a minimal model program over $\bar{F}$.
Case (3.1) $h^0(X, \omega_X) \leq 1$. We may assume $H_Z \sim r_0F$ where $r_0 = h^0(X, n_0K_X) -1$. Then by Lemma \ref{lem:h0}, we have that $r_0 > n_0$ and $n_0K_X \geq r_0\bar{F}$. We can write that
$$K_X= \frac{n_0}{r_0}K_X + (1-\frac{n_0}{r_0})K_X \sim_{\mathbb{Q}} \bar{F} + \bar{E} + (1-\frac{n_0}{r_0})K_X$$
where $\bar{E} \geq 0$, and we may assume $\bar{E}$ and $\bar{F}$ have no common component since $|\bar{F}|$ is movable. In turn we have
$$2K_X|_G \sim_{\mathbb{Q}} (K_X + \bar{F} + \bar{E} + (1-\frac{n_0}{r_0})K_X)|_G\sim_{\mathbb{Q}} K_G - E_2' + E_1'+ (\bar{E} + (1-\frac{n_0}{r_0})K_X)|_G.$$
It follows that $(2K_X + E_2')|_G - K_G$ is big. To apply Corollary \ref{cor:eff-surface}, we set $D =K_X|_G$ and $r=2$, then obtain that $S^0_{-}(G, K_G + sK_X|_G) \neq 0$ for any integer $s\geq 5$, and $S^0_{-}(G, K_G + sK_X|_G)$ is birational for any integer $s\geq 9$. Applying Proposition \ref{prop:F-stable-section} (iv) and Corollary \ref{cor:bir-general-fiber}, analogous results hold for $S^0_{-}(Z_{\eta}, K_{Z_{\eta}} + sK_X|_{Z_{\eta}})$. By Theorem \ref{thm:bir-criterion-for-induction} (i) we may set
$M_1 = n_0 + 5 $, and by the second part of (ii) we may set $M_2 = n_0 + 9$.
Case (3.2) $h^0(X, \omega_X) \geq 2$. Then $K_X \geq \bar{F}$. By an argument similar to that of Case (3.1) we first obtain that $(3K_X + E_2')|_G - K_G$ is big, then apply Corollary \ref{cor:eff-surface} to prove $S^0_{-}(G, K_G + sK_X|_G) \neq 0$ for any integer $s\geq 6$, and $S^0_{-}(G, K_G + sK_X|_G)$ is birational for any integer $s\geq 11$. Finally applying Theorem \ref{thm:bir-criterion-for-induction} we may set $M_1 = 7$ and $M_2 = 13$.
\smallskip
Remark that in Case (1), by taking a sub-linear system of $|n_0K_X|$ we can reduce to the situation of Case (2); in practice the bounds of Case (2) are smaller. But we cannot reduce the former two cases to Case (3) because we do not have the relation $r_0 = h^0(X, n_0K_X) -1$ in Case (3.1).
\smallskip
\subsection{Irregular case} Let $a: X \to A$ be the Albanese map, let $\rho: Z \to X$ be a smooth resolution of singularities and let $f: Z \to Y$ be the fibration induced by the Stein factorization of $a\circ \rho: Z \to A$. Denote by $F$ the generic fiber of $f$. According to the relative dimension of $f$ we fall into three cases as follows.
\medskip
Case (1) $\dim F = 0$. We may set $M_1 = 3$ and $M_2=5$ by applying Theorem \ref{thm:bir-criterion-irr} (i,ii) and (iv) respectively.
Case (2) $\dim F = 1$. Since $K_X|_F = K_F$ has degree $\geq 2$, by Theorem \ref{thm:eff-curve} it follows that $S^0_{-}(F, K_F+ K_X|_F) \neq 0$ and $S^0_{-}(F, K_F+ 2K_X|_F)$ is birational. Then applying Theorem \ref{thm:bir-criterion-irr} we may set $M_1 = 1 +(1+1) = 3$ and $M_2=2 + 2 +2=6$.
Case (3) $\dim F = 2$. Since $X$ is smooth in codimension two, the generic fiber $F$ is regular and $K_X|_F \sim K_F$. By Theorem \ref{thm:eff-surface-nonclosed-field}, it follows that $S^0_{-}(F, K_F+ 5K_X|_F) \neq 0$ and $S^0_{-}(F, K_F+ 9K_X|_F)$ is birational; and if $p>2$ then $S^0_{-}(F, K_F+ 4K_X|_F) \neq 0$ and $S^0_{-}(F, K_F+ 7K_X|_F)$ is birational. Applying Theorem \ref{thm:bir-criterion-irr}, we may set $M_1=5+(1+5) = 11$ and $M_2=9+6+6 =21$; and if $p>2$, $M_1=4+(1+4)=9$ and $M_2=7 +5+5=17$.
\bibliographystyle{plain}
\section*{Acknowledgments}
We thank Aaron Snoswell, Nathaniel Du Preez-Wilkinson, Jordan Bishop,
Russell Tsuchida, and Matthew Aitchison
\input{support_ack}
\section*{\centering APPENDIX}
\section{Tables of existing works} \label{sec:table_of_existing}
\tblrefand{tbl:valiter_work}{tbl:politer_work} summarize
the major contributions and experiments of the existing works on
average-reward model-free RL that are based on value- and policy-iteration schemes,
respectively.
\section{Taxonomy for approximation techniques} \label{sec:taxonomy}
We present taxonomies of approximation techniques.
They are for optimal gain $v_g^*$ in value-iteration schemes
(\figref{fig:taxonomy_optgainapprox}),
as well as for those in policy-iteration schemes, namely
gain $v_g^\pi$ (\figref{fig:taxonomy_gain}),
state values $v_b^\pi$ (\figref{fig:taxonomy_stateval}),
and action-related values $q_b^\pi$ and $\adv_b^\pi$ (\figref{fig:taxonomy_actval}).
\input{appendix_problem}
\afterpage{%
\clearpage%
\input{appendix_taxonomy_optgain}
\input{appendix_taxonomy_gain}
\input{appendix_taxonomy_stateval}
\input{appendix_taxonomy_actval}
\clearpage%
}
\afterpage{%
\clearpage%
\input{appendix_tbl_valiterwork}
\clearpage%
}
\afterpage{%
\clearpage%
\input{appendix_tbl_politerwork}
\clearpage%
}
\section{Glossary} \label{sec_glossary}
For the sake of clarity, we include the definition of some critical terminology used in this work.
Note that some terms in the literature are vague and overloaded. %
\paragraph{Online vs offline learning:}
Online RL makes updates only with the experience from current interaction with the environment.
The experience is in the form of $\{ (s, a, s', r')_i \}_{i=1}^\xi$, where
$\xi$ indicates the batch size.
In contrast, offline (batch) RL utilizes previously collected data (from past interaction)
without additional online data collection \citep{levine_2020_offlinerl, fonteneau_2013_brl}.
\paragraph{Incremental vs batch updates:}
Incremental updates refer to per-step (step-wise) updates with batch size $\xi = 1$.
On the other hand, in batch updates, the batch size is typically $\xi \gg 1$.
By batches, we mean full-batches, unless specified otherwise as mini-batches.
\paragraph{Synchronous vs asynchronous updates:}
In synchronous settings, \emph{all} entries of the iterate are updated at each update iteration.
In asynchronous settings, only one (or some, but not all) of the entries is
updated at each update iteration.
Such an updated entry is the one that the agent visits.
\paragraph{On- vs off-policy learning:}
On-policy learning updates the learnable parameters associated with the behaviour policy
based on data collected using the same behaviour policy, which is being executed by the agent.
In other words, the data (experience) for learning is drawn from
the corresponding distribution under a policy that is also the target of learning.
Thus, the terms ``target'' and ``behaviour'' policies refer to the same policy.
On the other hand, off-policy learning updates the learnable parameters associated with
the target policy based on data collected using a different behaviour policy.
Thus, there are two distinct policies, namely target and behaviour policies.
\paragraph{Tabular vs function approximation settings:}
In tabular settings, the value function or the policy is represented by a look-up table,
where every state or every state-action pair has an entry.
This causes scalability problems in terms of memory, as well as
learning rate (too many entries to update).
A function approximator has a set of (learnable) parameters whose
size is typically much less than the number of tabular entries.
Hence, there exists generalization across states and actions.
Note that we may interpret the tabular setting as a special case of
\emph{linear} function approximation,
where there are as many (table-lookup) features as the table size.
\paragraph{Tabular $Q_b$-learning, $Q_\gamma$-learning, and Sarsa:}
\algref{alg:avgrew_qlearn_tabular} shows the general implementation of tabular $Q_b$-learning.
Its discounted-reward counterpart, i.e.~ $Q_\gamma$-learning, can be obtained by
some appropriate changes regarding the use of discount factor $\gamma$ and the removal of $v_g^*$.
In \alglineref{alg:avgrew_qlearn_tabular}{line:deltafn}, the function $\Delta(\cdot)$
encodes the variations of $v_g^*$ approximation techniques discussed in \secref{sec:valiter_tabular}.
Both $Q_b$- and $Q_\gamma$-learning are VI-based and off-policy algorithms.
In contrast, the tabular Sarsa (\algref{alg:avgrew_sarsa_tabular}) is based on PI.
It operates on-policy, although one may argue that it has a slightly off-policy nature.
A minimal computational sketch of the tabular $Q_b$-learning loop is given after the algorithm listings below.
\begin{minipage}[t]{0.45\textwidth}
\vspace{0pt}
\input{appendix_glossary_qlearn}
\end{minipage}%
\hfill
\begin{minipage}[t]{0.45\textwidth}
\vspace{0pt}
\input{appendix_glossary_sarsa}
\end{minipage}
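As a purely illustrative companion to \algref{alg:avgrew_qlearn_tabular}, the following Python sketch shows one possible realization of the tabular $Q_b$-learning loop; the environment interface (\texttt{reset}/\texttt{step}) and the particular gain update (applied only on greedy transitions) are assumptions of this sketch rather than prescriptions of the surveyed works.
\begin{verbatim}
# Illustrative sketch of tabular average-reward Q-learning (Q_b-learning).
# Assumption: `env` exposes reset() -> s and step(a) -> (s_next, r).
import numpy as np

def qb_learning(env, n_states, n_actions, n_steps=100_000,
                alpha=0.1, beta_g=0.01, epsilon=0.1, seed=0):
    rng = np.random.default_rng(seed)
    q = np.zeros((n_states, n_actions))   # relative action-value estimates
    v_g = 0.0                              # estimate of the optimal gain
    s = env.reset()
    for _ in range(n_steps):
        # epsilon-greedy behaviour policy
        if rng.random() < epsilon:
            a = int(rng.integers(n_actions))
        else:
            a = int(np.argmax(q[s]))
        s_next, r = env.step(a)
        greedy = (a == int(np.argmax(q[s])))
        # relative TD error: (r - v_g + max_a' q(s', a')) - q(s, a)
        td = r - v_g + np.max(q[s_next]) - q[s, a]
        q[s, a] += alpha * td
        # one option for the gain update: only on greedy transitions
        if greedy:
            v_g += beta_g * (r + np.max(q[s_next]) - np.max(q[s]) - v_g)
        s = s_next
    return q, v_g
\end{verbatim}
Swapping the gain update for any of the alternatives in \figref{fig:taxonomy_optgainapprox} leaves the rest of the loop unchanged.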
\section{Existing benchmarking environments} \label{sec:problem_spec}
Different average-reward RL methods are often evaluated with different sets of environments,
making comparisons difficult.
Therefore, it is imperative to recapitulate those environments in order to
calibrate the progress, as well as
for a sanity check (before targeting more complex environments).
We compile a variety of environments that are used in existing works.
They are classified into two categories, namely continuing (non-episodic)
and episodic environments.
\input{appendix_problem_continuing}
\input{appendix_problem_episodic}
\subsection{Continuing environments}
Continuing environments are those with no terminating goal state and
hence continue on infinitely.
These environments induce infinite horizon MDPs and are typically geared towards
everlasting real-world domains found in areas such as business,
stock market decision making, manufacturing, power management,
traffic control, server queueing operation and communication networks.
There are various continuing environments used in the average-reward RL literature,
with the more popular ones detailed below.
Other examples include
Preventive Maintenance in Product Inventory \citep{das_1999_smart},
Optimal Portfolio Choice \citep{ormoneit_2001_kbrl},
Obstacle avoidance task \citep{mahadevan_1996_avgrew}, and
Multi-Armed Restless Bandits \citep{avrachenkov_2020_wiql}.
\input{appendix_problem_continuing_symbolic}
\input{appendix_problem_continuing_queue}
\input{appendix_problem_continuing_continuous}
\subsubsection{Continuous States and Actions}
Many environments with applications to real world systems such as robotics can contain
continuous states and actions.
The most common approach to these environments in RL is to discretize the continuous variables;
however, there are RL methods that can be used even if the MDP remains continuous.
Below we describe commonly used continuous environments from within the literature.
\paragraph{Linear Quadratic Regulators:} \mbox{} \\
PI: \cite{kakade_2002_npg}. \\
VI: \emph{None}.
The Linear Quadratic Regulator (LQR) problem is fundamental in RL due to the
simple structure providing a useful tool for assessing different methods.
The system dynamics are $s_{t+1} = \mat{A} s_t + \mat{B} a_t + \epsilon_t$,
where $\epsilon_t$ is uniformly distributed random noise that is i.i.d.\
for each $t \geq 0$.
The cost (negative reward) function for the system is $c(s,a)=s^\intercal \mat{Q} s + a^\intercal \mat{R} a$.
$\mat{A}$, $\mat{B}$, $\mat{Q}$ and $\mat{R}$ are all matrices with proper dimensions.
This problem is viewed as an MDP with
state and action spaces of $\setname{S} = \real{d}$ and $\setname{A} = \real{k}$ respectively.
The optimal policy of LQR is a linear function of the state.
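For illustration only, the following sketch simulates an LQR instance under a fixed linear state-feedback policy and estimates its average cost by a long-run average; the matrices $\mat{A}$, $\mat{B}$, $\mat{Q}$, $\mat{R}$, the feedback gain, and the noise scale are arbitrary choices of this sketch, not taken from any cited work.
\begin{verbatim}
# Sketch: simulate LQR dynamics s' = A s + B a + eps under a linear
# policy a = -K s, and estimate the average cost by a long-run average.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = 0.1 * np.eye(1)
K = np.array([[1.0, 1.5]])     # some stabilizing linear feedback gain

s = np.zeros(2)
total_cost, T = 0.0, 100_000
for t in range(T):
    a = -K @ s
    total_cost += s @ Q @ s + a @ R @ a
    eps = rng.uniform(-0.01, 0.01, size=2)   # i.i.d. uniform noise
    s = A @ s + (B @ a).ravel() + eps
print("estimated average cost:", total_cost / T)
\end{verbatim}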
\paragraph{Swing-up pendulum:} \mbox{} \\
PI: \cite{morimura_2010_sdpg, degris_2012_offpolac, liu_2018_offpol}. \\
VI: \emph{None}.
Swinging a simulated pendulum with the goal of getting it into
and keeping it in the vertical position.
The state of the system is described by the current angle and angular
velocity of the pendulum.
The action is the torque applied at the base of the pendulum
which is restricted to be within a practical limit.
Rewards are received as the cosine of the pendulum's angle, which is
proportional to the height of the pendulum's tip.
\subsubsection{Queuing}
Another commonly used group of environments in RL involve optimization of systems that
involve queuing.
Common examples of queuing environments within the literature include call bandwidth
allocation \citep{marbach_2001_simopt}, and seating on airplanes
\citep{gosavi_2002_smart}.
\paragraph{Server-Access Control Queue:} \mbox{} \\
PI: \cite{sutton_2018_irl}: \page{252}, \cite{devra_2018_dtdl, abbasi-yadkori_2019_politex, abbasi-yadkori_2019_xpert} \\
VI: \cite{wan_2020_avgrew, schneckenreither_2020_avgrew}.
The problem of controlling the access of queuing customers to a limited number of servers.
Each customer belongs to a priority group and the state of the system (MDP) is
described by this priority as well as the number of free servers.
At each time-step, the servers become available with some probability and the agent can
accept or reject service to the first customer in each line.
Accepting a customer gives a reward proportional to the priority
of that customer when their service is complete.
Rejecting a customer always results in a reward of 0.
The goal of this problem is to maximize the reward received by making decisions based on
customer priority and the number of available servers.
\paragraph{Traffic Signal Control:} \mbox{} \\
PI: \cite{prashanth_2011_tlrl, liu_2018_offpol} \\
VI: \cite{prashanth_2011_tlrl}
Maximizing the flow of traffic across a network of junctions through finding the optimal
traffic signal configuration.
The controller periodically receives information about congestion, which gives a state
vector containing the queue length of each lane and the elapsed time since each lane
turned red.
Actions available are the sign configurations of signals which can be turned to green simultaneously.
The cost (negative reward) is the sum of all queue lengths and elapsed times across the whole network.
Queue length is used to reduce congestion and the elapsed time ensures fairness for all lanes.
\subsubsection{Symbolic MDPs}
In order to initially test the performance of RL algorithms, simple environments are
required as a test bed. These environments often serve no practical purpose and
are developed purely for testing. Below we describe three such environments:
$n$-state, $n$-cycle, and $n$-chain.
\paragraph{$n$-state (randomly-constructed) MDPs:} \mbox{} \\
PI: \cite{singh_1994_avgrewrl, kakade_2002_npg, morimura_2008_nnpg, morimura_2009_gnac, bhatnagar_2009_nactr, castro_2010_stsac, matsubara_2010_asspg, morimura_2014_pgrl, wei_2019_avgrew, hao_2020_aapi}. \\
VI: \cite{schwartz_1993_rlearn, singh_1994_avgrewrl, jafarniajahromi_2020_nor}.\\
These are environments in which an MDP with $n$ states is constructed with
randomized transition matrices and rewards. In order to provide context
and meaning, a background story is often provided for these systems such as in the case of
DeepSea \citep{hao_2020_aapi} where the environment is interpreted
as a diver searching for treasure.
One special subclass is the Generic Average Reward Non-stationary Environment Testbed (GARNET),
which was proposed by \cite{bhatnagar_2009_nactr}.
It is parameterized as GARNET($\setsize{S}, \setsize{A}, x,\sigma,\tau$),
where $\setsize{S}$ is the number of states and $\setsize{A}$ is the number of actions,
$\tau$ determines how non-stationary the problem is, and
$x$ is a branching factor that determines how many next states are available
for each state-action pair.
The set of next states is created by a random selection from the state set without replacement.
Probabilities of going to each available next state are uniformly distributed, and the reward
received for each transition is normally distributed with standard deviation $\sigma$.
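A minimal sketch of constructing such a random MDP is given below; it covers only the stationary part of GARNET (the non-stationarity parameter $\tau$ is omitted), and the use of a Dirichlet draw for the transition probabilities is an assumption of this illustration.
\begin{verbatim}
# Sketch: construct a random GARNET-like MDP(|S|, |A|, x, sigma) with
# stationary dynamics (the non-stationarity parameter tau is omitted).
import numpy as np

def make_garnet(n_states, n_actions, branching, sigma, seed=0):
    rng = np.random.default_rng(seed)
    P = np.zeros((n_states, n_actions, n_states))  # transition kernel
    R = np.zeros((n_states, n_actions))            # mean rewards
    for s in range(n_states):
        for a in range(n_actions):
            # pick `branching` distinct next states without replacement
            nxt = rng.choice(n_states, size=branching, replace=False)
            # random distribution over the chosen next states (one choice)
            P[s, a, nxt] = rng.dirichlet(np.ones(branching))
            R[s, a] = rng.normal(0.0, sigma)
    return P, R

P, R = make_garnet(n_states=20, n_actions=4, branching=3, sigma=1.0)
\end{verbatim}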
\paragraph{$n$-cycle (loop, circle) MDPs:} \mbox{} \\
PI: \emph{None}. \\
VI: \cite{schwartz_1993_rlearn, mahadevan_1996_avgrew, yang_2016_csv, wan_2020_avgrew}.\\
This group of problems has $n$ cycles (circles, loops), with the state set
defined by the number of cycles and the number of states in each cycle.
The state transitions are typically deterministic.
At the state in which the loops intersect, the agent must decide which loop to take.
For each state inside the loops, the only available actions are to stay or move to the
next state.
The reward function is set out such that each loop leads to a different reward,
with some rewards being more delayed than others.
\cite{mahadevan_1996_avgrew} contains a 2-cycle problem. A robot receives a reward of
+5 if it chooses the cycle going from home to the printer which has 5 states. It receives
+20 if it chooses the other cycle which leads to the mail room and contains 10 states.
\paragraph{$n$-chain:} \mbox{} \\
PI: \cite{wei_2019_avgrew}. \\
VI: \cite{yang_2016_csv, jafarniajahromi_2020_nor}.
These are environments involving $n$ states arranged in a linear chain-like formation.
The only two available actions are to move forwards or backwards.
Note that the term ``chain'' here does not mean a recurrent class.
All $n$-chain environments have one recurrent class.
\cite{strens_2000_bfrl} presents a more specific type, which is used in the OpenAI Gym.
In this variant, moving forward results in no reward, whereas choosing to go backwards
returns the agent to the beginning and yields a small reward.
Reaching the end of the chain yields a large reward.
The state transitions are not deterministic, with each action having a small possibility
of resulting in the opposite transition.
RiverSwim is another $n$-chain style problem consisting of 6 states in a chain-like formation to
represent positions across a river with a current flowing from right to left.
Each state has two possible actions; swim left with the current or right against
the current. Swimming with the current is always successful (deterministic) but swimming against
it can fail with some probability. The reward function is such that the
optimal policy is swimming to the rightmost state and remaining there.
\subsection{Episodic environments}
An episodic problem has at least one terminal state
(note that multiple types of terminals can be modeled as one terminal state when
the model is concerned only with $r(s, a)$, instead of $r(s, a, s')$).
Once an agent enters the terminal state, the agent-environment interaction terminates.
An episode refers to a sequence of states, actions, and rewards from $t = 0$
until entering the terminal state.
It is typically assumed that the termination eventually occurs in finite time.
\paragraph{Grid-navigation:} \mbox{} \\
PI: \emph{None}. \\
VI: \cite{mahadevan_1996_avgrew}.
\paragraph{Tetris:} \mbox{} \\
PI: \cite{kakade_2002_npg}. \\
VI: \cite{yang_2016_csv}.
\paragraph{Atari Pacman:} \mbox{} \\
PI: \cite{abbasi-yadkori_2019_politex}. \\
VI: \emph{None}.
\section{Preliminaries} \label{sec:backgnd}
We assume that the problem we aim to tackle can be well modeled as
a Markov decision process (MDP) with the following properties.
\begin{itemize}
\item The state set $\setname{S}$ and action set $\setname{A}$ are finite.
All states are fully observable, and all actions $a \in \setname{A}$
are available in every state $s \in \setname{S}$.
The decision-epoch set $\setname{T} = \integer{\ge 0}$ is discrete and infinite.
Thus, we have discrete-time infinite-horizon finite MDPs.
\item The state-transition probability $p(s_{t+1}|s_t, a_t)$ and
the reward function $r(s_t, a_t)$ are both stationary (time-homogeneous, fixed over time).
Here, $r(s_t, a_t) = \E{}{r(s_t, a_t, S_{t+1})}$, and it is uniformly bounded by
a finite constant $r_{\mathrm{max}}$, i.e.~
$|r(s, a)| \le r_{\mathrm{max}} < \infty, \forall (s, a) \in \setname{S} \times \setname{A}$.
\item The MDP is recurrent, and
its optimal policies belong to the stationary policy set $\piset{S}$.
\end{itemize}
Furthermore, every stationary policy $\pi \in \piset{S}$ of an MDP induces
a Markov chain (MC), whose transition (stochastic) matrix is denoted as
$\mat{P}_\pi \in [0, 1]^{\setsize{S} \times \setsize{S}}$.
The $[s, s']$-entry of $\mat{P}_\pi$ indicates the probability of transitioning
from a current state $s$ to the next state $s'$ under a policy $\pi$.
That is, $\mat{P}_\pi[s, s'] = \sum_{a \in \setname{A}} \pi(a|s) p(s'|s, a)$.
The $t$-th power of $\mat{P}_\pi$ gives $\mat{P}_\pi^t$, whose $[s_0, s]$-entry
indicates the probability of being in state $s$ in $t$ timesteps
when starting from $s_0$ and following~$\pi$.
That is, $\mat{P}_\pi^t[s_0, s] = \prob{S_t = s | s_0, \pi} \eqqcolon p_\pi^t(s | s_0)$.
The limiting distribution of $p_\pi^t$ is given by
\begin{equation}
p_\pi^\star(s | s_0)
= \lim_{t_{\mathrm{max}} \to \infty} \frac{1}{t_{\mathrm{max}}} \sum_{t=0}^{t_{\mathrm{max}} - 1} p_\pi^t(s | s_0)
= \lim_{t_{\mathrm{max}} \to \infty} p_\pi^{t_{\mathrm{max}}}(s | s_0),
\quad \forall s \in \setname{S},
\label{equ:pstar_lim}
\end{equation}
where the first limit is proven to exist in finite MDPs, while
the second limit exists whenever the finite MDP is aperiodic
\citep[\app{A.4}]{puterman_1994_mdp}.
This limiting state distribution $p_\pi^\star$ is equivalent to
the unique stationary (time-invariant) state distribution that satisfies
$(\vecb{p}_\pi^\star)^\intercal \mat{P}_\pi = (\vecb{p}_\pi^\star)^\intercal$,
which may be achieved in finite timesteps.
Here, $\vecb{p}_\pi^\star \in [0, 1]^{\setsize{S}}$ is
$p_\pi^\star(s | s_0)$ stacked together for all $s \in \setname{S}$.
The \emph{expected} average reward (also termed the gain) value of a policy $\pi$
is defined for all $s_0 \in \setname{S}$ as
\begin{align}
v_g(\pi, s_0)
& \coloneqq \lim_{t_{\mathrm{max}} \to \infty} \frac{1}{t_{\mathrm{max}}}
\E{S_t, A_t}{\sum_{t=0}^{t_{\mathrm{max}} - 1}
r(S_t, A_t) \Big| S_0 = s_0, \pi}
\label{equ:gain_lim} \\
& = \sum_{s \in \setname{S}}
\Big\{ \lim_{t_{\mathrm{max}} \to \infty} \frac{1}{t_{\mathrm{max}}} \sum_{t=0}^{t_{\mathrm{max}} - 1}
p_\pi^t(s | s_0) \Big\} r_\pi(s)
= \sum_{s \in \setname{S}} p_\pi^\star(s| s_0) r_\pi(s)
\label{equ:gain_prob} \\
& = \lim_{t_{\mathrm{max}} \to \infty} \E{S_{t_{\mathrm{max}}}, A_{t_{\mathrm{max}}}}{
r(S_{t_{\mathrm{max}}}, A_{t_{\mathrm{max}}}) \Big| S_0 = s_0, \pi},
\label{equ:gain_lim2}
\end{align}
where $r_\pi(s) = \sum_{a \in \setname{A}} \pi(a|s)\ r(s, a)$.
The limit in \eqref{equ:gain_lim} exists when the policy $\pi$ is stationary,
and the MDP is finite \citep[\prop{8.1.1}]{puterman_1994_mdp}.
Whenever it exists, the equality in \eqref{equ:gain_prob} follows due to
the existence of limit in \eqref{equ:pstar_lim} and
the validity of interchanging the limit and the expectation.
The equality in \eqref{equ:gain_lim2} holds if its limit exists, for instance
when $\pi$ is stationary and the induced MC is aperiodic
(nonetheless, note that even if the induced MC is periodic,
the limit in \eqref{equ:gain_lim2} exists for certain reward structures,
see \citet[\problem{5.2}]{puterman_1994_mdp}).
In matrix forms, the gain can be expressed as
\begin{equation*}
\vecb{v}_g(\pi)
= \lim_{t_{\mathrm{max}} \to \infty} \frac{1}{t_{\mathrm{max}}} \vecb{v}_{t_{\mathrm{max}}}(\pi)
= \lim_{t_{\mathrm{max}} \to \infty} \frac{1}{t_{\mathrm{max}}} \sum_{t=0}^{t_{\mathrm{max}} - 1} \mat{P}_\pi^t \vecb{r}_\pi
= \mat{P}_\pi^\star \vecb{r}_\pi,
\end{equation*}
where $\vecb{r}_\pi \in \real{\setsize{S}}$ is $r_\pi(s)$ stacked together for all $s \in \setname{S}$.
Notice that the gain involves taking the limit of the average of
the \emph{expected} total reward $\vecb{v}_{t_{\mathrm{max}}}$ from
$t=0$ to $t_{\mathrm{max}} - 1$ for $t_{\mathrm{max}} \to \infty$.
Since unichain MDPs have a single chain (i.e.~ a closed irreducible recurrent class),
the stationary distribution is invariant to initial states.
Therefore, $\mat{P}_\pi^\star$ has identical rows so that the gain is
constant across all initial states.
That is, $v_g(\pi) = \vecb{p}_\pi^\star \cdot \vecb{r}_\pi = v_g(\pi, s_0), \forall s_0 \in \setname{S}$,
hence $\vecb{v}_g(\pi) = v_g(\pi) \cdot \vecb{1}$.
The gain $v_g(\pi)$ can be interpreted as the stationary reward because it
represents the average reward per timestep of a system in its steady-state under~$\pi$.
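To make the above concrete, the following sketch builds $\mat{P}_\pi$ and $\vecb{r}_\pi$ from a randomly generated (purely illustrative) MDP and a fixed policy, computes the stationary distribution as the leading left eigenvector of $\mat{P}_\pi$, and evaluates the gain $v_g(\pi) = \vecb{p}_\pi^\star \cdot \vecb{r}_\pi$.
\begin{verbatim}
# Sketch: compute P_pi, its stationary distribution, and the gain of a
# fixed policy pi for a small randomly generated unichain MDP.
import numpy as np

rng = np.random.default_rng(0)
nS, nA = 5, 3
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # p(s'|s,a)
r = rng.uniform(0, 1, size=(nS, nA))            # r(s,a)
pi = rng.dirichlet(np.ones(nA), size=nS)        # pi(a|s)

P_pi = np.einsum('sa,sat->st', pi, P)           # sum_a pi(a|s) p(s'|s,a)
r_pi = np.einsum('sa,sa->s', pi, r)             # r_pi(s)

# stationary distribution: left eigenvector of P_pi with eigenvalue 1
eigvals, eigvecs = np.linalg.eig(P_pi.T)
p_star = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
p_star = p_star / p_star.sum()

gain = p_star @ r_pi                            # v_g(pi)
print("gain:", gain)
\end{verbatim}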
A policy $\pi^* \in \piset{S}$ is gain-optimal (hereafter, simply called optimal) if
\begin{equation}
v_g(\pi^*, \cancel{s_0}) \ge v_g(\pi, \cancel{s_0}),
\quad \forall \pi \in \piset{S}, \forall s_0 \in \setname{S},
\quad \text{hence}\ \pi^* \in \argmax_{\pi \in \piset{S}} v_g(\pi)
\label{equ:gainopt}
\end{equation}
Such an optimal policy $\pi^*$ is also a greedy (hence, deterministic) policy
in that it selects actions maximizing the RHS of
the average-reward Bellman \emph{optimality} equation
(which is useful for deriving optimal control algorithms) as follows,
\begin{equation}
v_b(\pi^*, s) + v_g(\pi^*) = \max_{a \in \setname{A}} \Big\{ r(s, a)
+ \sum_{s' \in \setname{S}} p(s' | s, a)\ v_b(\pi^*, s') \Big\},
\quad \forall s \in \setname{S},
\label{equ:avgrew_bellman_optim}
\end{equation}
where $v_b$ denotes the bias value.
It is defined for all $\pi \in \piset{S}$ and all $s_0 \in \setname{S}$ as
\begin{align}
v_b(\pi, s_0)
& \coloneqq \lim_{t_{\mathrm{max}} \to \infty} \E{S_t, A_t}{
\sum_{t=0}^{t_{\mathrm{max}} - 1}
\Big( r(S_t, A_t) - v_g(\pi) \Big) \Big| S_0 = s_0, \pi}
\label{equ:bias_lim} \\
& = \lim_{t_{\mathrm{max}} \to \infty} \sum_{t=0}^{t_{\mathrm{max}} - 1}
\sum_{s \in \setname{S}} \Big( p_\pi^t(s|s_0) - p_\pi^\star(s) \Big) r_\pi(s)
\label{equ:bias_prob2} \\
& = \underbrace{\sum_{t=0}^{\tau - 1}
\sum_{s \in \setname{S}} p_\pi^t(s|s_0) r_\pi(s)
}_\text{the expected total reward $v_\tau$}
-\ \tau v_g(\pi)
+ \underbrace{\lim_{t_{\mathrm{max}} \to \infty} \sum_{t=\tau}^{t_{\mathrm{max}} - 1} \sum_{s \in \setname{S}}
\Big( p_\pi^t(s|s_0) - p_\pi^\star(s) \Big) r_\pi(s)
}_\text{approaches 0 as $\tau \to \infty$}
\label{equ:bias_prob3} \\
& = \sum_{s \in \setname{S}} \Big\{
\lim_{t_{\mathrm{max}} \to \infty} \sum_{t=0}^{t_{\mathrm{max}} - 1}
\Big( p_\pi^t(s|s_0) - p_\pi^\star(s) \Big) \Big\} r_\pi(s)
= \sum_{s \in \setname{S}} d_\pi(s|s_0) r_\pi(s),
\label{equ:bias_prob}
\end{align}
where all limits are assumed to exist, and
\eqref{equ:bias_lim} is bounded because of the subtraction of $v_g(\pi)$.
Whenever exchanging the limit and the expectation is valid in \eqref{equ:bias_prob},
$d_\pi(s|s_0)$ represents the $[s_0, s]$-entry of
the non-stochastic deviation matrix
$\mat{D}_\pi
\coloneqq (\mat{I} - \mat{P}_\pi + \mat{P}_\pi^\star)^{-1} (\mat{I} - \mat{P}_\pi^\star)$.
The bias $v_b(\pi, s_0)$ can be interpreted in several ways.
\textbf{Firstly} based on \eqref{equ:bias_lim}, the bias is the expected total difference
between the immediate reward $r(s_t, a_t)$ and the stationary reward $v_g(\pi)$
when a process starts at~$s_0$ and follows~$\pi$.
\textbf{Secondly} from \eqref{equ:bias_prob2}, the bias indicates the difference between
the expected total rewards of two processes under $\pi$: one starts at $s_0$ and
the other at an initial state drawn from $p_\pi^\star$.
Put in another way, it is the difference of the total reward of $\pi$ and
the total reward that would be earned if the reward per timestep were $v_g(\pi)$.
\textbf{Thirdly}, decomposing the timesteps as in \eqref{equ:bias_prob3} yields
$v_\tau(\pi, s_0) \approx v_g(\pi) \tau + v_b(\pi, s_0)$.
This suggests that the bias serves as the intercept of a line around which
the expected total reward $v_\tau$ oscillates, and eventually converges as $\tau$ increases.
Such a line has a slope of the gain value.
For example in an MDP with zero reward absorbing terminal state (whose gain is 0),
the bias equals the expected total reward before the process is absorbed.
\textbf{Lastly}, the deviation factor $(p_\pi^t(s|s_0) - p_\pi^\star(s))$ in \eqref{equ:bias_prob}
is non-zero only before the process reaches its steady-state.
Therefore, the bias indicates the transient performance.
It may be regarded as the ``transient'' reward, whose
values are earned during the transient phase.
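The following self-contained sketch computes the deviation matrix $\mat{D}_\pi$ and the bias $\vecb{v}_b$ for a small illustrative Markov chain, and checks that they satisfy the average-reward Bellman expectation (Poisson) equation; the particular chain and rewards are arbitrary choices of this illustration.
\begin{verbatim}
# Sketch: for a fixed Markov chain (P_pi, r_pi), compute the stationary
# distribution, the gain, the deviation matrix
# D_pi = (I - P_pi + P_star)^{-1} (I - P_star), and the bias v_b.
import numpy as np

P_pi = np.array([[0.5, 0.5, 0.0],
                 [0.1, 0.6, 0.3],
                 [0.2, 0.2, 0.6]])
r_pi = np.array([1.0, 0.0, 2.0])

eigvals, eigvecs = np.linalg.eig(P_pi.T)
p_star = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
p_star /= p_star.sum()
gain = p_star @ r_pi

P_star = np.tile(p_star, (3, 1))                # identical rows (unichain)
I = np.eye(3)
D_pi = np.linalg.inv(I - P_pi + P_star) @ (I - P_star)
v_b = D_pi @ r_pi                               # bias values, one per state

# sanity check: the Poisson equation  v_b + v_g = r_pi + P_pi v_b
assert np.allclose(v_b + gain, r_pi + P_pi @ v_b)
\end{verbatim}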
For any reference states $s_{\mathrm{ref}} \in \setname{S}$, we can define
the (bias) relative value $v_{brel}$ as follows
\begin{equation*}
v_{brel}(\pi, s)
\coloneqq v_b(\pi, s) - v_b(\pi, s_{\mathrm{ref}})
= \lim_{t_{\mathrm{max}} \to \infty} \{ v_{t_{\mathrm{max}}}(\pi, s) - v_{t_{\mathrm{max}}}(\pi, s_{\mathrm{ref}}) \},
\quad \forall \pi \in \piset{S}, \forall s\in \setname{S}.
\end{equation*}
The right-most equality follows from \eqref{equ:bias_lim} or \eqref{equ:bias_prob3},
since the gain is the same from both $s$ and $s_{\mathrm{ref}}$.
It indicates that $v_{brel}$ represents the asymptotic difference
in the expected total reward due to starting from $s$ instead of $s_{\mathrm{ref}}$.
More importantly, the relative value $v_{brel}$ is equal to $v_b$
up to some constant for any $s \in \setname{S}$.
Moreover, $v_{brel}(\pi, s=s_{\mathrm{ref}}) = 0$.
After fixing $s_{\mathrm{ref}}$, therefore, we can substitute $v_{brel}$ for $v_b$ in
\eqref{equ:avgrew_bellman_optim}, then uniquely determine $v_{brel}$ for all states.
Note that \eqref{equ:avgrew_bellman_optim} with $v_b$ is originally
an underdetermined nonlinear system with $\setsize{S}$ equations and
$\setsize{S} + 1$ unknowns (i.e.~ one extra of $v_g$).
In practice, we often resort to $v_{brel}$ and abuse the notation $v_b$ to
also denote the relative value whenever the context is clear.
In a similar fashion, we refer to the bias as the relative/differential value.
One may also refer to the bias as relative values (due to \eqref{equ:bias_prob2}),
average-adjusted values (due to \eqref{equ:bias_lim}), or
potential values (similar to potential energy in physics
whose values differ by a constant \citep[\page{193}]{cao_2007_slo}).
For brevity, we often use the following notations,
$v_g^\pi \coloneqq v_g(\pi) \eqqcolon g^\pi$, and $v_g^* \coloneqq v_g(\pi^*) \eqqcolon g^*$, as well as
$v_b^\pi(s) \coloneqq v_b(\pi, s)$, and $v_b^*(s) \coloneqq v_b(\pi^*, s)$.
Here, $v_b^\pi(s)$ may be read as the relative state value of $s$ under a policy~$\pi$.
\section{Discussion} \label{sec:discuss}
In this section, we discuss several open questions as first steps towards
completing the literature in average-reward model-free RL.
We begin with those of value iteration schemes (\secref{sec:discuss_optimalvalue}),
then of policy iteration schemes focussing on policy evaluation
(\secref{sec:discuss_value}).
Lastly, we also outline several issues that apply to both or beyond
the two aforementioned schemes.
\input{discuss_optimalvalue}
\input{discuss_value}
\input{discuss_beyond}
\subsection{Further open research questions}
In the following passages, relevant works in discounted reward settings are mentioned,
if any, inside the brackets at the end of each part.
\paragraph{On batch settings:}
For on-policy online RL without experience replay buffer,
we pose the following questions.
How to determine a batch size that balances the trade-off between
collecting more samples with the current policy and
updating the policy more often with fewer numbers of samples?
How to apply the concept of backward-view eligibility traces in batch settings?
(cf.~ \cite{harb_2017_etdqn}).
\paragraph{On value- vs policy-iteration schemes:}
As can be observed in \tblrefand{tbl:valiter_work}{tbl:politer_work} (in \appref{sec:table_of_existing}),
there is less work on value- than policy-iteration schemes;
even less when considering only function approximation settings.
Our preliminary experiments suggest that although following value iteration is straightforward
(cf.~ policy iteration with evaluation and improvement steps),
it is more difficult to make it ``work'', especially with function approximation.
One challenge is to specify the proper offset, e.g.~ $v_g^*$ or its estimate,
in RVI-like methods to bound the iterates.
Moreover, \citet[\secc{3.5}]{mahadevan_1996_avgrew} highlighted that
the seminal value-iteration based RL, i.e.~ average reward $Q_b$-learning,
is sensitive to exploration.
We posit these questions.
Is it still worth it to adopt the value iteration scheme after all?
Which scheme is more advantageous in terms of exploration in RL?
How to reconcile both schemes?
(cf.~ \cite{odonoghue_2016_pgq, schulman_2017_pgql, wang_2020_qlearning}).
\paragraph{On distributional (cf. expected-value) perspective:}
There exist few works on distributional views.
\cite{morimura_2010_sdpg} proposed estimating $\nabla \log p^{\vecb{\theta}}(s)$ for obtaining
the gradient estimate $\hat{\nabla} v_g(\vecb{\theta})$ in \eqreff{equ:polgrad},
removing the need to estimate the action value $q_b^{\vecb{\theta}}$.
One important question is how to scale up the distribution (density) estimation for large RL problems.
(cf.~ \cite{bellemare_2017_distrib}).
\paragraph{On MDP modeling:}
The broadest class that can be handled by existing average-reward RL is the unichain MDP;
note that most works assume the more specific class, i.e.~ the recurrent (ergodic) MDP.
To our knowledge, there is still no average reward model-free RL for multichain MDPs
(which is the most general class).
We also desire to apply average-reward RL to continuous state problems,
for which we may benefit from the DP theory on general states, e.g.~ \citet[\ch{8}]{sennott_1998_sdp}.
There are few attempts thus far, for instance,
\citep{yang_2019_elqr}, which is limited to linear quadratic regulator with ergodic cost.
\paragraph{On optimality criteria:}
The average reward optimality is underselective for problems with transient states,
for which we need $(n=0)$-discount (bias) optimality, or even higher $n$-discount optimality.
This underselective-ness motivates the weighted optimality in DP \citep{krass_1992_wmdp}.
In RL, \cite{mahadevan_1996_sensitivedo} developed bias-optimal Q-learning.
In the other extreme, $(n=\infty)$-discount optimality is the most selective criterion.
According to \citet[\thm{10.1.5}]{puterman_1994_mdp}, it is proven to be equivalent to
the Blackwell optimality, which intuitively claims that upon considering sufficiently
far into the future via the Blackwell's discount factor $\gammabw$,
there is no policy better than the Blackwell optimal policy.
Moreover, optimizing the discounted reward does not require any knowledge about
the MDP structure (i.e.~ recurrent, unichain, multichain classification).
Therefore, one of the pressing questions is how to estimate such $\gammabw$ in RL.
\subsection{Approximation for optimal gain and optimal action-values (of optimal policies)}
\label{sec:discuss_optimalvalue}
Two main components of $Q_b$-learning are $\hat{q}_b^*$ and $\hat{v}_g^*$ updates.
The former is typically carried out via either \eqreff{equ:qbstar_update} for tabular,
or \eqreff{equ:qbw_update} for (parametric) function approximation settings.
What remains is determining how to estimate the optimal gain $v_g^*$,
which becomes the main bottleneck for $Q_b$-learning.
As can be seen in \figref{fig:taxonomy_optgainapprox} (\appref{sec:taxonomy}),
there are two classes of $v_g^*$ approximators.
First, approximators that are not SA-based generally need
$s_{\mathrm{ref}}$ and $a_{\mathrm{ref}}$ specification, which is shown to affect the performance
especially in large state and action sets \citep{wan_2020_avgrew, yang_2016_csv}.
On the other hand, SA-based approximators require a dedicated stepsize $\beta_g$,
yielding more complicated 2-timescale $Q_b$-learning.
Furthermore, it is not yet clear whether the approximation for $v_g^*$ should be on-policy,
i.e.~ updating only when a greedy action is executed, at the cost of reduced sample efficiency.
This begs the question of which approach to estimating $v_g^*$ is ``best'' (in which cases).
In discounted reward settings, \cite{hasselt_2010_doubleq} (also, \cite{hasselt_2016_ddqn})
pointed out that the approximation for $\E{}{\max_{a \in \setname{A}} q_\gamma^*(S_{t+1}, a)}$
poses overestimation, which may be non-uniform and not concentrated at states
that are beneficial in terms of exploration.
He proposed instantiating two decoupled approximators such that
\begin{equation*}
\E{S_{t+1}}{\max_{a' \in \setname{A}} q_\gamma^*(S_{t+1}, a')} \approx
\hat{q}_\gamma^*(s_{t+1}, \argmax_{a' \in \setname{A}} \hat{q}_\gamma^*(s_{t+1}, a';
\vecb{w}_q^{(2)}); \vecb{w}_q^{(1)}),
\tag{Double $Q_\gamma$-learning}
\end{equation*}
where $\vecb{w}_q^{(1)}$ and $\vecb{w}_q^{(2)}$ denote their corresponding weights.
This was shown to be successful in reducing the negative effect of overestimation.
In average reward cases, the overestimation of $q_b^*$ becomes more convoluted
due to the involvement of $\hat{v}_g^*$, as shown in \eqreff{equ:avgrew_bellmanopt_q}.
We believe that it is important to extend the idea of double action-value approximators to $Q_b$-learning.
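Purely as a hypothetical illustration of that direction (not an algorithm from the surveyed literature), a double-estimator target for tabular $Q_b$-learning could take the following form.
\begin{verbatim}
# Hypothetical sketch only: a double-estimator target for tabular
# Q_b-learning, mirroring double Q_gamma-learning. This is not an
# established algorithm from the surveyed literature.
import numpy as np

def double_qb_target(q1, q2, s_next, r, v_g_hat):
    # action selected with estimator 1, evaluated with estimator 2
    a_star = int(np.argmax(q1[s_next]))
    return r - v_g_hat + q2[s_next, a_star]

# usage sketch: td = double_qb_target(q1, q2, s_next, r, v_g_hat) - q1[s, a]
#               q1[s, a] += alpha * td  (roles of q1/q2 swapped at random)
\end{verbatim}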
To our knowledge, there is no finite-time convergence analysis for $Q_b$-learning thus far.
There are also very few works on $Q_b$-learning with function approximation.
This is in contrast with its discounted reward counterpart,
e.g.~ sample complexity of $Q_\gamma$-learning with UCB-exploration bonus \citep{wang_2020_qlearning},
as well as deep $Q_\gamma$-learning neural-network (DQN) and its variants
with \emph{non-linear} function approximation \citep{hessel_2018_rainbow}.
\subsection{Approximation for gain, state-, and action-values of any policy}
\label{sec:discuss_value}
\input{discuss_value_gain.tex}
\input{discuss_value_state.tex}
\input{discuss_value_action.tex}
\input{discuss_value_extra.tex}
\section{Introduction}
Reinforcement learning (RL) is one promising approach to
the problem of making sequential decisions under uncertainty.
Such a problem is often formulated as a Markov decision process (MDP)
with a state set~$\setname{S}$, an action set~$\setname{A}$,
a reward set~$\setname{R}$, and a decision-epoch set~$\setname{T}$.
At each decision-epoch (i.e.~ timestep) $t \in \setname{T}$,
a decision maker (henceforth, an agent) is at a state $s_t \in \setname{S}$, and
chooses to then execute an action $a_t \in \setname{A}$.
Consequently, the agent arrives at the next state $s_{t+1}$ and earns an (immediate) reward $r_{t+1}$.
For $ t = 0, 1, \ldots, t_{\mathrm{max}}$ with $t_{\mathrm{max}} \le \infty$, the agent experiences a sequence of
$S_0, A_0, R_1, S_1, A_1, R_2, \ldots, S_{t_{\mathrm{max}}}$.
Here, $S_0$ is drawn from an initial state distribution~$\isd$, whereas
$s_{t+1}$ and $r_{t+1}$ are governed by the environment dynamics,
which is fully specified by
the state-transition probability $p(s' | s, a) \coloneqq \prob{S_{t+1} = s' | S_t = s, A_t = a}$, and
the reward function $r(s, a, s') = \sum_{r \in \setname{R}} \prob{r | s, a, s'} \cdot r$.
The solution to the decision making problem is a mapping from every state to
a probability distribution over the set of available actions in that state.
This mapping is called a policy, i.e.~ $\pi: \setname{S} \times \setname{A} \mapsto [0, 1]$,
where $\pi(a_t | s_t) \coloneqq \prob{A = a_t | S_t = s_t}$.
Thus, solving such a problem amounts to finding the optimal policy, denoted by $\pi^*$.
The basic optimality criterion asserts that a policy with the largest value is optimal.
That is, $v(\pi^*) \ge v(\pi), \forall \pi \in \Pi$,
where the function $v$ measures the value of any policy $\pi$ in the policy set $\Pi$.
There are two major ways to value a policy based on the infinite reward sequence that it generates,
namely the average- and discounted-reward policy value formulations.
They induce the average- and discounted-reward optimality criteria, respectively.
For an examination of their relationship, pros, and cons in RL,
we refer the readers to \citep{dewanto_2021_avgdis}.
Furthermore, RL can be viewed as
simulation-based asynchronous approximate dynamic programming (DP)
\citep[\ch{7}]{gosavi_2015_sborl}.
Particularly in model-free RL, the simulation is deemed expensive because
it corresponds to direct interaction between an agent and its (real) environment.
Model-free RL mitigates not only the curse of dimensionality (inherently as approximate DP methods),
but also the curse of modeling (since no model learning is required).
This is in contrast to model-based RL, where an agent interacts with
the learned model of the environment \citep{dean_2017_lqr, jaksch_2010_ucrl2, tadepalli_1998_hlearning}.
Model-free RL is fundamental in that it encompasses the bare essentials for updating
sequential decision rules through natural agent-environment interaction.
The same update machinery can generally be applied to model-based RL.
In practice, we may always desire a system that runs both model-free and model-based mechanisms,
see e.g.~ the so-called Dyna architecture \citep{silver_2008_dyna2, sutton_1990_dyna}.
This work surveys the existing value- and policy-iteration based
average-reward model-free RL.
We begin by reviewing relevant DP (as the root of RL), before progressing to
the tabular then function approximation settings in RL.
For comparison, the sole existing survey on average-reward RL \citep{mahadevan_1996_avgrew}
covered only the tabular value-iteration based methods.
We present our review in \secrefand{sec:valiter}{sec:politer}, which are accompanied
by concise \tblrefand{tbl:valiter_work}{tbl:politer_work} (in \appref{sec:table_of_existing})
along with literature maps
(\figrefd{fig:taxonomy_optgainapprox}{fig:taxonomy_gain}{fig:taxonomy_stateval}{fig:taxonomy_actval}
in \appref{sec:taxonomy}).
We then discuss the insight and outlook in \secref{sec:discuss}.
Additionally in \appref{sec:problem_spec}, we compile environments
that were used for evaluation by the existing works.
In order to limit the scope, this work focuses on model-free RL
with a single non-hierarchical agent interacting \emph{online} with its environment.
We do not include works that \emph{approximately} optimize the average reward by
introducing a discount factor (hence, the corresponding approximation error), e.g.~
\cite{schneckenreither_2020_avgrew, karimi_2019_nasym, bartlett_2002_polgrad}.
We also choose not to examine RL methods that are based on
linear programming \citep{wang_2017_pilearn, neu_2017_entreg},
and decentralized learning automata \citep{chang_2009_dlfmc, wheeler_1986_dlmc}.
\section{Policy-iteration schemes} \label{sec:politer}
Instead of iterating towards the value of optimal policies, we can iterate
policies directly towards the optimal one.
At each iteration, the current policy iterate is evaluated, then
its value is used to obtain the next policy iterate by taking the greedy action
based on the RHS of the Bellman optimality equation \eqref{equ:avgrew_bellman_optim}.
The latter step differentiates this so-called policy iteration from
naive policy enumeration that evaluates and compares policies as prescribed
in \eqref{equ:gainopt}.
Like in the previous \secref{sec:valiter}, we begin with
the original policy iteration in DP, which is then generalized in~RL.
Afterward, we review average-reward policy gradient methods,
including actor-critic variants, because of their prominence and proven empirical successes.
The last two sections are devoted to approximate policy evaluation, namely
gain and relative value estimations.
\tblref{tbl:politer_work} (in \appref{sec:table_of_existing}) summarizes
existing works on average-reward policy-iteration-based model-free RL.
\input{politer_backgnd}
\input{politer_polgrad}
\input{politer_gainapprox}
\input{politer_vapprox}
\subsection{Foundations} \label{sec:politer_backgnd}
In DP, \cite{howard_1960_dpmp} proposed the first policy iteration algorithm to
obtain gain optimal policies for unichain MDPs.
It proceeds as follows.
\begin{enumerate} [label=\textbf{Step~\arabic{*}:}]
\item Initialize the iteration index $k \gets 0$ and
set the initial policy arbitrarily, $\hat{\pi}^{k=0} \approx \pi^*$.
\item Perform (exact) policy evaluation: \\
Solve the following underdetermined linear system for $v_g^k$ and $\vecb{v}_b^k$,
\begin{equation}
v_b^k(s) + v_g^k = r(s, a) + \sum_{s' \in \setname{S}} p(s' | s, a)\ v_b^k(s'),
\qquad \forall s \in \setname{S}, \text{with}\ a = \hat{\pi}^k(s),
\label{equ:poisson}
\end{equation}
which is called the Bellman policy expectation equation, also
the Poisson equation \citep[\equ{9.1}]{feinberg_2002_hmdp}.
\item Perform (exact) policy improvement: \\
Compute a policy $\hat{\pi}^{k+1}$ by greedy action selection
(analogous to the RHS of \eqref{equ:avgrew_bellman_optim}):
\begin{equation*}
\hat{\pi}^{k+1}(s) \gets \argmax_{a \in \setname{A}} \big[
\underbrace{
r(s, a) + \sum_{s' \in \setname{S}} p(s' | s, a)\ v_b^k(s')
}_\text{$q_b^k(s, a) + v_g^k$}
\big],\
\forall s \in \setname{S}.
\tag{Synchronous updates}
\end{equation*}
\item If stable, i.e.~ $\hat{\pi}^{k+1}(s) = \hat{\pi}^{k}(s), \forall s \in \setname{S}$,
then output $\hat{\pi}^{k+1}$ (which is equivalent to $\pi^*$). \\
Otherwise, increment $k$, then go to Step~2.
\end{enumerate}
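The sketch below is an illustrative implementation of the above steps for a known unichain MDP; the underdetermined Poisson equation in Step~2 is resolved by fixing $v_b(s_{\mathrm{ref}}) = 0$, which is one common normalization and mirrors the relative values $v_{brel}$ of \secref{sec:backgnd}.
\begin{verbatim}
# Sketch: Howard-style policy iteration for a known unichain MDP.
# The Poisson equation is solved with the normalization v_b(s_ref) = 0.
import numpy as np

def policy_iteration(P, r, max_iter=100):
    nS, nA, _ = P.shape
    pi = np.zeros(nS, dtype=int)                  # deterministic policy
    for _ in range(max_iter):
        # --- exact policy evaluation: solve v_b + g = r_pi + P_pi v_b ---
        P_pi = P[np.arange(nS), pi]               # (nS, nS)
        r_pi = r[np.arange(nS), pi]               # (nS,)
        # unknowns x = (v_b(0..nS-1), g), with v_b(s_ref = 0) fixed to 0
        A = np.zeros((nS + 1, nS + 1))
        b = np.zeros(nS + 1)
        A[:nS, :nS] = np.eye(nS) - P_pi
        A[:nS, nS] = 1.0
        b[:nS] = r_pi
        A[nS, 0] = 1.0                            # enforce v_b(s_ref) = 0
        x = np.linalg.solve(A, b)
        v_b, g = x[:nS], x[nS]
        # --- exact policy improvement: greedy w.r.t. r + P v_b ---
        pi_new = np.argmax(r + P @ v_b, axis=1)
        if np.array_equal(pi_new, pi):
            return pi, v_b, g
        pi = pi_new
    return pi, v_b, g
\end{verbatim}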
The above \emph{exact} policy iteration is generalized to
the generalized policy iteration (GPI) for RL \citep[\secc{4.6}]{sutton_2018_irl}.
Such generalization is in the sense of the details of policy evaluation and improvement,
such as approximation (\emph{inexact} evaluation and \emph{inexact} improvement) and
the update granularity, ranging from per timestep (incremental, step-wise) to
per number (batch) of timesteps.
GPI relies on the policy improvement theorem \citep[\page{101}]{sutton_2018_irl}.
It assures that any $\epsilon$-greedy policy with respect to $q_b(\pi)$ is
an improvement over any $\epsilon$-soft policy $\pi$,
i.e.~ any policy whose
$\pi(a|s) \ge \epsilon / \setsize{A}, \forall (s, a) \in \setname{S} \times \setname{A}$.
Moreover, $\epsilon$-greediness is also beneficial for exploration.
There exist several variations on policy improvement that all share
a similar idea to $\epsilon$-greedy, i.e.~
updating the current policy \emph{towards} a greedy policy.
They include
\citet[\equ{5}]{wei_2019_avgrew},
\citet[\equ{4}]{abbasi-yadkori_2019_politex},
\citet[\equ{4.1}]{hao_2020_aapi},
and approximate gradient-ascent updates used in policy gradient methods
(\secref{sec:politer_backgnd_polgrad}).
\subsection{Gain Approximation} \label{sec:politer_gainapprox}
We classify gain approximations into incremental and batch categories.
Incremental methods update their current estimates using information
only from one timestep, hence step-wise updates.
In contrast, batch methods use information from multiple timesteps in a batch.
Every update, therefore, has to wait until the batch is available.
Notationally, we write $g^\pi \coloneqq v_g(\pi)$.
The superscript $\pi$ is often dropped whenever the context is clear.
\input{politer_gainapprox_increment.tex}
\input{politer_gainapprox_batch.tex}
\subsubsection{Batch methods}
In batch settings, there are at least two ways to approximate the gain of a policy,
denoted as $g$.
\textbf{First,} $\hat{g}$ is set to the total rewards \emph{divided} by
the number of timesteps in a batch, as in \eqref{equ:lr_special_sa}, which
also shows its incremental equivalence.
This is used in
\citetext{\citealp[\page{339}]{powell_2011_adp}; \citealp[\secc{2B}]{yu_2009_lspe};
\citealp[\secc{4}]{gosavi_2004_qplearn}, \citealp[\page{319}]{feinberg_2002_hmdp}}.
The work of \citet[\equ{16}]{marbach_2001_simopt} also falls in this category,
but $\hat{g}$ is computed as a weighted average of \emph{all} past rewards from
all batches (regenerative cycles), mixing on- and off-policy samples.
Nonetheless, it is shown to yield lower estimation variance.
\textbf{Second,} based on a similar justification to that of RVI, we can specify
a reference state and action pair.
For instance,
\citet[\equ{6.23}, \page{311}]{cao_2007_slo} proposed $\hat{g} \gets \hat{v}_b^\pi(s_{\mathrm{ref}})$,
whereas
\citet[\app{A}]{lagoudakis_2003_thesis} advocated $\hat{g} \gets \hat{q}_b^\pi(s_{\mathrm{ref}}, a_{\mathrm{ref}})$.
\subsubsection{Incremental methods}
Based on \eqreff{equ:gain_lim2}, we can define an increasing function
$f(g) \coloneqq g - \E{}{r(S, A)}$.
Therefore, the problem of estimating $g$ becomes finding the root of $f(g)$,
by which $f(g) = 0$.
As can be observed, $f(g)$ is unknown because $\E{}{r(S, A)}$ is unknown.
Moreover, we can only obtain noisy observations of $\E{}{r(S, A)}$, namely
$r(s, a) = \E{}{r(S, A)} + \varepsilon$ for some additive error $\varepsilon > 0$.
Thus, the \emph{noisy} observation of $f(\hat{g}_t)$ at iteration $t$ is given by
$\hat{f}(\hat{g}_t) = \hat{g}_t - r(s_t, a_t)$.
Recall that an increasing function satisfies $d f(x)/ dx > 0$ for $x > x^*$
where $x^*$ indicates the root of $f(x)$.
We can solve for the root of $f(g)$ iteratively via
the Robbins-Monro (RM) algorithm as follows, %
\begin{align}
\hat{g}_{t+1}
& = \hat{g}_t - \beta_t \{ \hat{f}(\hat{g}_t) \}
= \hat{g}_t - \beta_t \{ \hat{g}_t - r(s_t, a_t) \}
\tag{for an increasing function $f$} \\
& = \hat{g}_t + \beta_t \{ r(s_t, a_t) - \hat{g}_t \}
= (1 - \beta_t) \hat{g}_t + \beta_t \{ r(s_t, a_t) \},
\label{equ:gain_basic}
\end{align}
where $\beta_t \in (0, 1)$ is a gain approximation step size;
note that setting $\beta_t \ge 1$ violates the RM algorithm since
it makes the coefficient of $\hat{g}_t$ non-positive.
This recursive procedure converges to the root of $f(g)$ with probability~1
under several conditions, including that $\varepsilon_t$ is i.i.d.\ with
$\E{}{\varepsilon_t} = 0$,
that the step size satisfies the standard requirement in SA, and
some other technical conditions \citep[\secc{6.1}]{cao_2007_slo}.
The stochastic approximation technique in \eqreff{equ:gain_basic} is
the most commonly used.
For instance, \citet[\alg{1}]{wu_2020_ttsac},
\cite{heess_2012_acrl}, \citet[\secc{10.7}]{powell_2011_adp},
\citet[\equ{13}]{castro_2010_stsac}, \citet[\equ{21}]{bhatnagar_2009_nac},
\citet[\equ{3.1}]{konda_2003_aca}, \citet[\secc{4}]{marbach_2001_simopt}, and
\citet[\secc{2}]{tsitsiklis_1999_avgtd}.
Furthermore, a learning rate of $\beta_t = 1/(t + 1)$ in \eqreff{equ:gain_basic}
yields
\begin{equation}
\hat{g}_{t+1}
= \frac{t \times \hat{g}_t + r(s_t, a_t)}{t+1}
= \frac{r(s_t, a_t) + r(s_{t - 1}, a_{t - 1}) + \ldots + r(s_0, a_0)}{t + 1},
\label{equ:lr_special_sa}
\end{equation}
which is the average of a noisy gain observation sequence up to $t$.
The ergodic theorem asserts that for such a Markovian sequence, the time average
converges to \eqref{equ:gain_lim2} as $t$ approaches infinity
\citep[\ch{8}]{gray_2009_prob}.
This decaying learning rate is suggested by \citet[\alg{2}]{singh_1994_avgrewrl}.
Additionally, a variant of $\beta_t = 1/(t + 1)^\kappa$ with
a positive constant $\kappa < 1$
is used by \citet[\secc{5.2.1}]{wu_2020_ttsac} for establishing
the finite-time error rate of this gain approximation under non-i.i.d Markovian samples.
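A minimal sketch of this update, assuming a stream of on-policy rewards, is given below; with $\kappa = 1$ it reduces to the running average in \eqreff{equ:lr_special_sa}.
\begin{verbatim}
# Sketch: Robbins-Monro gain estimation from a stream of on-policy rewards.
# With kappa = 1 this is exactly the running average of observed rewards.
def estimate_gain(reward_stream, kappa=1.0):
    g_hat = 0.0
    for t, r in enumerate(reward_stream):
        beta_t = 1.0 / (t + 1) ** kappa   # kappa < 1 gives slower decay
        g_hat += beta_t * (r - g_hat)
    return g_hat

# usage sketch: g_hat = estimate_gain(r for (s, a, r) in trajectory)
\end{verbatim}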
Another iterative approach is based on the Bellman expectation equation \eqref{equ:poisson}
for obtaining the noisy observation of $g$.
That is,
\begin{align}
\hat{g}_{t + 1}
& = (1 - \beta_t) \hat{g}_t
+ \beta_t \big\{
\underbrace{
r(s_t, a_t) + \hat{v}_{b, t}^\pi(s_{t+1}) - \hat{v}_{b, t}^\pi(s_t)
}_\text{$g$ in expectation of $S_{t+1}$ when $\hat{v}_{b, t}^\pi = v_b^\pi$}
\big\} \tag{The variance is reduced, cf.~ \eqreff{equ:gain_basic}} \\
& = \hat{g}_t
+ \beta_t \big\{
\underbrace{
{\color{red}\{}
r(s_t, a_t) - \hat{g}_t + \hat{v}_{b, t}^\pi(s_{t+1})
{\color{red}\}}
- \hat{v}_{b, t}^\pi(s_t)
}_\text{$\hat{\delta}_{v_b, t}^\pi(s_t, a_t, s_{t+1})$}
\big\}.
\label{equ:gain_tdbased}
\end{align}
In comparison to \eqreff{equ:gain_basic}, the preceding update is anticipated to
have lower variance due to the adjustment terms of
$\hat{v}_{b, t}^\pi(s_{t+1}) - \hat{v}_{b, t}^\pi(s_t)$.
This update is used by \citet[\alg{1}]{singh_1994_avgrewrl}, \citet[\secc{3B}]{degris_2012_contact},
and \citet[\page{333}]{sutton_2018_irl}.
An analogous update can be formulated by replacing $v_b^\pi$ with $q_b^\pi$,
which is possible only when the next action $a_{t+1}$ is already available,
as in the differential Sarsa algorithm \citep[\page{251}]{sutton_2018_irl}.
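For completeness, a per-transition sketch of the TD-based gain update in \eqreff{equ:gain_tdbased} is shown below, assuming a current relative state-value estimate is available from some other component.
\begin{verbatim}
# Sketch: TD-based gain update (lower variance than the plain average),
# assuming a current relative state-value estimate is supplied.
def gain_update_td(g_hat, r, v_hat_s, v_hat_s_next, beta):
    delta = r - g_hat + v_hat_s_next - v_hat_s   # relative TD error
    return g_hat + beta * delta

# usage sketch, per transition (s, a, r, s'):
#   g_hat = gain_update_td(g_hat, r, v_hat[s], v_hat[s_next], beta)
\end{verbatim}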
\subsection{Average-reward policy gradient methods} \label{sec:politer_backgnd_polgrad}
In this section, we outline policy gradient methods, which have proven empirical successes
in function approximation settings \citep{agarwal_2019_polgrad, duan_2016_benchrl}.
They require explicit policy parameterization, which enables
\emph{not only} learning appropriate levels of exploration
(either control- or parameter-based
\citetext{\citealp{vemula_2019_xplor}, \citealp[\fig{1}]{miyamae_2010_npgpe}}),
\emph{but also} injection of domain knowledge \citep[\secc{1.3}]{deisenroth_2013_polsearchrob}.
Note that from a sensitivity-based point of view, policy gradient methods belong to
perturbation analysis \citep[\page{18}]{cao_2007_slo}.
In policy gradient methods, a policy $\pi$ is parameterized by a parameter vector
$\vecb{\theta} \in \Theta = \real{\dim(\vecb{\theta})}$,
where $\dim(\vecb{\theta})$ indicates the number of dimensions of $\vecb{\theta}$.
In order to obtain a smooth dependence on~$\vecb{\theta}$ (hence, smooth gradients),
we restrict the policy class to a set of \emph{randomized} stationary policies~$\piset{SR}$.
We further assume that $\pi(\vecb{\theta})$ is twice differentiable with
bounded first and second derivatives.
For discrete actions, one may utilize a categorical distribution
(a special case of Gibbs/Boltzmann distributions) as follows,
\begin{equation*}
\pi(a | s; \vecb{\theta}) \coloneqq
\frac{\exp(\vecb{\theta}^\intercal \vecb{\phi}(s, a))}{
\sum_{a' \in \setname{A}} \exp(\vecb{\theta}^\intercal \vecb{\phi}(s, a'))},
\quad \forall (s, a) \in \setname{S} \times \setname{A},
\tag{soft-max in action preferences}
\end{equation*}
where $\vecb{\phi}(s, a)$ is the feature vector of a state-action pair $(s, a)$
for this policy parameterization
(\citet[\secc{3.1}]{sutton_2018_irl}, \citet[\equ{8}]{bhatnagar_2009_nac}).
Note that the class of parametric policies may not contain the optimal policy because
there are typically fewer parameters than state-action pairs, yielding some approximation error.
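As an illustration, the sketch below implements the soft-max policy with linear features together with its score function $\nabla \log \pi(a|s; \vecb{\theta})$; the particular feature representation is an assumption of the sketch.
\begin{verbatim}
# Sketch: soft-max policy in action preferences with linear features,
# and its score function grad log pi(a|s; theta).
import numpy as np

def softmax_policy(theta, phi_s):
    # phi_s: array of shape (nA, d), where row a is phi(s, a)
    prefs = phi_s @ theta
    prefs -= prefs.max()                     # numerical stability
    p = np.exp(prefs)
    return p / p.sum()

def score(theta, phi_s, a):
    # grad log pi(a|s) = phi(s,a) - sum_a' pi(a'|s) phi(s,a')
    p = softmax_policy(theta, phi_s)
    return phi_s[a] - p @ phi_s
\end{verbatim}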
The policy improvement is based on the following optimization with
$v_g(\vecb{\theta}) \coloneqq v_g(\pi(\vecb{\theta}))$,
\begin{equation}
\vecb{\theta}^* \coloneqq \argmax_{\vecb{\theta} \in \Theta} v_g(\vecb{\theta}),
\quad \text{with iterative update:}\
\vecb{\theta} \gets \vecb{\theta}
+ \alpha \mat{C}^{-1} \nabla v_g(\vecb{\theta}),
\label{equ:polparam_update}
\end{equation}
where $\alpha$ is a positive step length, and
$\mat{C} \in \real{\dim(\vecb{\theta}) \times \dim(\vecb{\theta}) }$ denotes
some preconditioning positive definite matrix.
Based on \eqreff{equ:gain_prob}, we have
\begin{align}
\nabla v_g(\vecb{\theta})
& = \sum_{s \in \setname{S}} \sum_{a \in \setname{A}}
r(s, a)\ \nabla \{ p_{\pi}^\star(s)\ \pi(a|s; \vecb{\theta}) \}
\tag{$\nabla \coloneqq \partdiff{\vecb{\theta}}$} \\
& = \sum_{s \in \setname{S}} \sum_{a \in \setname{A}}
p_{\pi}^\star(s)\ \pi(a|s; \vecb{\theta})\ r(s, a)\
\{ \nabla \log \pi(a|s; \vecb{\theta}) + \nabla \log p_{\pi}^\star(s) \}
\notag \\ %
& = \sum_{s \in \setname{S}} \sum_{a \in \setname{A}}
p_{\pi}^\star(s)\ \pi(a|s; \vecb{\theta})\
\underbrace{q_b^\pi(s, a)}_\text{in lieu of $r(s, a)$}
\nabla \log \pi(a|s; \vecb{\theta})
\tag{does not involve $\nabla \log p_{\pi}^\star(s)$} \\
& = \sum_{s \in \setname{S}} \sum_{a \in \setname{A}} \sum_{s' \in \setname{S}}
p_{\pi}^\star(s)\ \pi(a|s; \vecb{\theta})\ p(s'| s, a)\
v_b^\pi(s')\ \nabla \log \pi(a|s; \vecb{\theta}).
\label{equ:polgrad}
\end{align}
The penultimate equation above is due to the (randomized) policy gradient theorem
(\citet[\thm{1}]{sutton_2000_pgfnapprox}, \citet[\secc{6}]{marbach_2001_simopt},
\citet[\page{28}]{deisenroth_2013_polsearchrob}).
The last equation was proven to be equivalent by \citet[\app{B}]{castro_2010_stsac}.
There exist (at least) two variants of preconditioning matrices $\mat{C}$.
\textbf{First} is through the second derivative
$\mat{C} = - \nabla^2 v_g(\vecb{\theta})$, as well as its approximation,
see \citet[\app{B.1}]{furmston_2016_newtonps},
\cite[\secc{3.3, 5.2}]{morimura_2008_nnpg}, \citet[\equ{6}]{kakade_2002_npg}.
\textbf{Second}, one can use a Riemannian-metric matrix for natural gradients.
It aims to make the update directions invariant to the policy parameterization.
\cite{kakade_2002_npg} first proposed the Fisher information matrix (FIM) as such a matrix,
for which an incremental approximation was suggested by \citet[\equ{26}]{bhatnagar_2009_nac}.
A generalized variant of natural gradients was introduced by
\cite{morimura_2009_gnac, morimura_2008_nnpg}.
In addition, \cite{thomas_2014_genga} derived another generalization that
allows for a positive semidefinite matrix.
Recall that FIM is only guaranteed to be positive semidefinite
(hence, describing a semi-Riemannian manifold);
whereas the natural gradient ascent assumes the function being optimized
has a Riemannian manifold domain. %
In order to obtain more efficient learning with efficient computation,
several works propose using backward-view eligibility traces in policy parameter updates.
The key idea is to keep track of the eligibility of each component of $\vecb{\theta}$
for getting updated whenever a reinforcing event, i.e.~ $q_b^\pi$, occurs.
Given a state-action sample $(s, a)$, the update in \eqref{equ:polparam_update}
becomes
\begin{equation}
\vecb{\theta} \gets \vecb{\theta}
+ \alpha \mat{C}^{-1} q_b^\pi(s, a)\ \vecb{e}_{\vecb{\theta}},
\quad \text{which is carried out after}\
\vecb{e}_{\vecb{\theta}} \gets
\lambda_{\vecb{\theta}} \vecb{e}_{\vecb{\theta}}
+ \underbrace{\nabla \log \pi(a|s; \vecb{\theta})}_\text{the eligibility},
\label{equ:theta_update}
\end{equation}
where $\vecb{e}_{\vecb{\theta}} \in \real{\dim(\vecb{\theta})}$ denotes
the accumulating eligibility vector for $\vecb{\theta}$ and
is initialized to $\vecb{0}$, whereas
$\lambda_{\vecb{\theta}} \in (0, 1)$ denotes the trace decay factor for $\vecb{\theta}$.
This is used by
\citet[\secc{4}]{iwaki_2019_i2nac}, %
\citet[\secc{13.6}]{sutton_2018_irl}, %
\citet[\app{B.1}]{furmston_2016_newtonps}, %
\citet[\secc{III.B}]{degris_2012_contact}, %
\citet[\secc{4}]{matsubara_2010_lrpg}, and %
\citet[\secc{5}]{marbach_2001_simopt}. %
As can be observed in \eqreff{equ:polgrad}, computing the gradient
$\nabla v_g(\vecb{\theta})$ exactly
requires (exact) knowledge of $p_{\pi}^\star$ and $q_b^\pi$,
which are both unknown in RL.
It also requires summation over all states and actions, which becomes an issue
when state and action sets are large.
As a result, we resort to the gradient estimate, which if \emph{unbiased},
leads to \emph{stochastic} gradient ascent.
In order to reduce the variance of gradient estimates, there are two common techniques
based on control variates \citep[\secc{5, 6}]{greensmith_2004_varredgrad}.
\textbf{First} is the baseline control variate, for which the \emph{optimal} baseline is
the one that minimizes the variance.
Choosing $v_b^\pi$ as a baseline \citep[\lmm{2}]{bhatnagar_2009_nac}
yields
\begin{equation}
\underbrace{
q_b^\pi(s, a) - v_b^\pi(s)
}_\text{relative action advantage, $\adv_b^\pi(s, a)$}
= \mathbb{E}_{S'} \Big[
\underbrace{
\Big\{ r(s, a) - v_g^\pi + v_b^\pi(S') \Big\}
- v_b^\pi(s) \big| s, a
}_\text{relative TD, $\delta_{v_b}^\pi(s, a, S')$}
\Big],
\quad \forall (s, a) \in \setname{S} \times \setname{A},
\label{equ:td_vs_adv}
\end{equation}
where $S'$ denotes the next state given the current state~$s$ and action~$a$.
It has been shown that
$\adv_b^\pi(s, a) = \E{S'}{\delta_{v_b}^\pi(s, a, S')}$,
meaning that the TD, i.e.~ $\delta_{v_b}^\pi(s, a, S')$, can be used as
an \emph{unbiased} estimate for the action advantage
\citetext{\citealp[\prop{1}]{iwaki_2019_i2nac}, \citealp[\thm{5}]{castro_2010_stsac},
\citealp[\lmm{3}]{bhatnagar_2009_nac}}.
Hence, we have
\begin{equation*}
\nabla v_g(\vecb{\theta})
= \mathbb{E}_{S, A} \big[
\adv_b^\pi(S, A)\ \nabla \log \pi(A|S; \vecb{\theta}) \big]
\approx \delta_{v_b}^\pi(s, a, s')\ \nabla \log \pi(a|s; \vecb{\theta}),
\end{equation*}
where the expectation over $S$, $A$, and $S'$ is approximated with a single sample $(s, a, s')$.
This yields an unbiased gradient estimate with lower variance than that using
$q_b^\pi$.
Note that in RL, the exact $\adv_b^\pi$ and $v_b^\pi$
(for calculating $\delta_{v_b}^\pi$) should also be approximated.
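As an illustration, a minimal sketch of this single-sample update (in Python; the state values, the gain estimate, and $\nabla \log \pi$ are assumed to be supplied by a critic and the policy, and the function names are ours):
\begin{verbatim}
def relative_td(r, v_s, v_s_next, v_gain):
    # Relative TD error: a single-sample, unbiased estimate of the action
    # advantage adv_b(s, a) when the state values and the gain are exact.
    return r - v_gain + v_s_next - v_s

def pg_step(theta, grad_log_pi, delta, alpha=0.01):
    # Single-sample stochastic gradient ascent on v_g(theta), with the
    # relative TD error delta standing in for the action advantage.
    return theta + alpha * delta * grad_log_pi
\end{verbatim}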
\textbf{Second}, a policy value estimator is set up and often also parameterized by
$\vecb{w} \in \setname{W} = \real{\dim(\vecb{w})}$.
This gives rise to actor-critic methods, where ``actor'' refers to the parameterized policy,
whereas ``critic'' refers to the parameterized value estimator.
The critic can take one of the following three forms,
\begin{enumerate} [label=\roman{*}.]
\item relative state-value estimator $\hat{v}_b^{\pi}(s; \vecb{w}_v)$
for computing the relative TD approximate $\hat{\delta}_{v_b}^\pi$,
\item both relative state- and action-value estimators for
$\hat{\adv}_b^\pi(s, a) \gets
\hat{q}_b^\pi(s, a; \vecb{w}_q) - \hat{v}_b^{\pi}(s; \vecb{w}_v)$,
\item relative action-advantage estimator $\hat{\adv}_b^\pi(s, a; \vecb{w}_{\adv})$,
\end{enumerate}
for all $s \in \setname{S}, a \in \setname{A}$, and
with the corresponding critic parameter vectors $\vecb{w}_v$, $\vecb{w}_q$, and $\vecb{w}_{\adv}$.
This parametric value approximation is reviewed in \secref{sec:politer_valueapprox}.
\subsection{Relative state- and action-value approximation}
\label{sec:politer_valueapprox}
\input{politer_vapprox_tab}
\input{politer_vapprox_fa}
\subsubsection{Methods with function approximation} \label{sec:politer_vapprox_fa}
Here, we review gradient-based approximate policy evaluation that relies on
the gradient of an error function to update its parameter.
Note that there exist approximation techniques based on least squares, e.g.~
LSPE($\lambda$) \citep{yu_2009_lspe},
gLSTD and LSTDc \citep{ueno_2008_lstd}, and
LSTD-Q \citep[\app{A}]{lagoudakis_2003_thesis}.
\input{politer_vapprox_fa_v}
\input{politer_vapprox_fa_qa}
\subsubsection{Tabular methods} \label{sec:politer_vapprox_tab}
\citet[\alg{1, 2}]{singh_1994_avgrewrl} proposed the following incremental TD-based update,
\begin{equation*}
\hat{v}_b^\pi(s) \gets \hat{v}_b^\pi(s)
+ \beta \{
\underbrace{
\big( r(s, a) - \hat{v}_g^\pi + \hat{v}_b^\pi(s') \big) - \hat{v}_b^\pi(s)
}_{\delta_{v_b}^\pi(s, a, s')} \},
\quad \text{for a sample $(s, a, s')$}.
\end{equation*}
A similar update for $\hat{q}_b^\pi$ can be obtained by substituting
$\hat{v}_b^\pi$ with $\hat{q}_b^\pi$ along with a sample for the next action,
as in Sarsa-like algorithms.
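A minimal sketch of these tabular updates (in Python, assuming NumPy-style value tables indexed by states and state-action pairs; the function names are ours):
\begin{verbatim}
def relative_td_v_update(v_hat, v_gain_hat, s, r, s_next, beta):
    # Incremental TD-based update of the relative state-value estimate.
    delta = (r - v_gain_hat + v_hat[s_next]) - v_hat[s]
    v_hat[s] += beta * delta
    return v_hat

def relative_td_q_update(q_hat, v_gain_hat, s, a, r, s_next, a_next, beta):
    # Sarsa-like variant: the same rule applied to an action-value table,
    # using a sample of the next action a_next.
    delta = (r - v_gain_hat + q_hat[s_next, a_next]) - q_hat[s, a]
    q_hat[s, a] += beta * delta
    return q_hat
\end{verbatim}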
In a batch fashion, the relative action value can be approximated as follows,
\begin{equation}
q_b^\pi(s_t, a_t) \approx
\underbrace{
\sum_{\tau = t}^{t_{s_{\mathrm{ref}}}^\pi} \big( r(s_\tau, a_\tau) - \hat{v}_g^\pi \big)
}_\text{an episode return},
\quad \text{using a sample set $\{(s_\tau, a_\tau) \}_{\tau = t}^{t_{s_{\mathrm{ref}}}^\pi}$},
\label{equ:qb_batch_return}
\end{equation}
where $t_{s_{\mathrm{ref}}}^\pi$ denotes the timestep at which $s_{\mathrm{ref}}$ is visited while
following a policy~$\pi$, assuming that $s_{\mathrm{ref}}$ is a recurrent state under all policies.
This is used by \citet[\secc{1}]{sutton_2000_pgfnapprox}, and \citet[\secc{6}]{marbach_2001_simopt}.
Another batch approximation technique is based on the inverse-propensity scoring
\citep[\alg{3}]{wei_2019_avgrew}. %
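A minimal sketch of the batch estimate in \eqref{equ:qb_batch_return} (in Python; we assume the rewards of the segment from timestep~$t$ up to the visit of $s_{\mathrm{ref}}$ have been collected into a list, and the function name is ours):
\begin{verbatim}
def episode_return_estimate(rewards, v_gain_hat):
    # Gain-adjusted return of a trajectory segment that starts at (s_t, a_t)
    # and ends when the reference state is revisited.
    return sum(r - v_gain_hat for r in rewards)
\end{verbatim}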
\section{Value-iteration schemes} \label{sec:valiter}
Based on the Bellman optimality equation \eqref{equ:avgrew_bellman_optim},
we can obtain the optimal policy once we know the optimal state value
$\vecb{v}_b^* \in \real{\setsize{S}}$, which includes knowing the optimal gain $v_g^*$.
Therefore, one approach to optimal control is to (approximately) compute $\vecb{v}_b^*$,
which leads to the value-iteration algorithm in DP.
The value-iteration scheme in RL uses the same principle as its DP counterpart.
However, we approximate the optimal (state-)action value
$\vecb{q}_b^* \in \real{\setsize{S} \setsize{A}}$, instead of $\vecb{v}_b^*$.
In this section, we begin by introducing the foundations of the value-iteration scheme,
showing the progression from DP to RL.
We then cover tabular methods by presenting how to estimate the optimal action values iteratively
along with numerous approaches to estimating the optimal gain.
Lastly, we examine function approximation methods, where
the action value function is parameterized by a weight vector, and
present different techniques of updating the weights.
\tblref{tbl:valiter_work} (in \appref{sec:table_of_existing}) summarizes existing
average reward model-free RL based on the value-iteration scheme.
\input{valiter_backgnd}
\input{valiter_tabular}
\input{valiter_fnapprox}
\subsection{Foundations} \label{sec:valiter_backgnd}
In DP, the basic value iteration (VI) algorithm \emph{iteratively} solves for
the optimal state value $\vecb{v}_b^*$ and outputs an $\varepsilon$-optimal policy;
hence it is deemed exact up to the convergence tolerance $\varepsilon$.
The basic VI algorithm proceeds with the following steps.
\begin{enumerate} [label=\textbf{Step~\arabic{*}:}]
\item Initialize the iteration index $k \gets 0$ and $\hat{v}_b^{k=0}(s) \gets 0$,
where $\hat{v}_b^k(s) \approx v_b^*(s), \forall s \in \setname{S}$. \\
Also select a small positive convergence tolerance $\varepsilon > 0$. \\
Note that $\hat{v}_b^k$ may not correspond to any stationary policy.
\item Perform updates on \emph{all} iterates (termed \emph{synchronous} updates)
as follows,
\begin{equation*}
\hat{v}_b^{k+1}(s) = \B{*}{\hat{v}_b^{k}(s)}, \quad \forall s \in \setname{S},
\tag{Basic VI: synchronous updates}
\end{equation*}
where the average-reward Bellman optimality operator $\mathbb{B}^*$ is defined as
\begin{equation*}
\B{*}{\hat{v}_b^k(s)} \coloneqq \max_{a \in \setname{A}}
\Big[ r(s, a) + \sum_{s' \in \setname{S}} p(s' | s, a)\ \hat{v}_b^k(s') \Big],
\quad \forall s \in \setname{S}.
\end{equation*}
Clearly, $\mathbb{B}^*$ is non-linear (due to the $\max$ operator), and
is derived from \eqref{equ:avgrew_bellman_optim} with $\hat{v}_g^* \gets 0$.
\item If the span seminorm $sp(\hat{\vecb{v}}_b^{k+1} - \hat{\vecb{v}}_b^{k}) > \varepsilon$,
then increment $k$ and go to Step~2. \\
Otherwise, output a greedy policy with respect to the RHS of \eqref{equ:avgrew_bellman_optim};
such a policy is $\varepsilon$-optimal.
Here, $sp(\vecb{v}) \coloneqq \max_{s \in \setname{S}} v(s) - \min_{s' \in \setname{S}} v(s')$,
which measures the range of the components of a vector~$\vecb{v}$.
Therefore, even if the iterate vector keeps changing (e.g.~by a common shift of all its components), its span may remain constant.
\end{enumerate}
The mapping in Step~2 of the VI algorithm is not contractive (because there is no discount factor).
Moreover, the iterates $\hat{\vecb{v}}_b^k$ can grow very large, leading to numerical instability.
Therefore, \cite{white_1963_dpmc} proposed subtracting out the value of
an arbitrary-but-fixed reference state~$s_{\mathrm{ref}}$ at every iteration.
That is,
\begin{equation*}
\hat{v}_b^{k+1}(s) = \B{*}{\hat{v}_b^{k}(s)} - \hat{v}_b^{k}(s_{\mathrm{ref}}),
\quad \forall s \in \setname{S},
\tag{Relative VI: synchronous updates}
\end{equation*}
which results in the so-called \emph{relative} value iteration (RVI) algorithm.
This update yields the same span and the same sequence of maximizing actions as
that of the basic VI algorithm.
Importantly, as $k \to \infty$, the iterate $\hat{v}_b^{k}(s_{\mathrm{ref}})$ converges to $v_g^*$.
The \emph{asynchronous} version of RVI may diverge
\citetext{\citealp[\page{682}]{abounadi_2001_rviqlearn}, \citealp[\page{232}]{gosavi_2015_sborl}}.
As a remedy, \cite{jalali_1990_avgcost} introduced the following update,
\begin{equation*}
\hat{v}_b^{k+1}(s) = \B{*}{\hat{v}_b^{k}(s)} - \hat{v}_g^k,
\quad \forall (s \ne s_{\mathrm{ref}}) \in \setname{S},
\tag{Relative VI: asynchronous updates}
\end{equation*}
where $\hat{v}_g^k$ is the estimate of $v_g^*$ at iteration~$k$, and
$\hat{v}_b^{k}(s_{\mathrm{ref}}) = 0$, for all iterations $k = 0, 1, \ldots$.
It is shown that this asynchronous method converges to produce a gain-optimal policy
in a regenerative process, where there exists a \emph{single} recurrent state
under \emph{all} stationary and deterministic policies.
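For illustration, a minimal sketch of synchronous relative VI for a \emph{known} model (in Python with NumPy; the data layout and the function name are our assumptions, not taken from the cited works):
\begin{verbatim}
import numpy as np

def relative_value_iteration(P, R, s_ref=0, eps=1e-8, max_iter=100_000):
    # P is a list of |S| x |S| transition matrices (one per action) and
    # R is an |S| x |A| expected-reward table.  Subtracting the previous
    # iterate's value at s_ref keeps the iterates bounded; the stopping
    # rule uses the span seminorm of the change.
    n_s, n_a = R.shape
    v = np.zeros(n_s)
    for _ in range(max_iter):
        backup = np.stack([R[:, a] + P[a] @ v for a in range(n_a)], axis=1)
        v_new = backup.max(axis=1) - v[s_ref]
        diff = v_new - v
        v = v_new
        if diff.max() - diff.min() < eps:   # span seminorm sp(v_new - v)
            break
    gain_estimate = v[s_ref]                # approaches v_g^* as k grows
    greedy_policy = backup.argmax(axis=1)
    return v, gain_estimate, greedy_policy
\end{verbatim}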
In RL, the reward function $r(s, a)$ and the transition distribution $p(s'|s, a)$ are unknown.
Therefore, it is convenient to define a relative (state-)action value $\vecb{q}_b(\pi)$
of a policy $\pi$ as follows,
\begin{align}
q_b(\pi, s, a)
& \coloneqq \lim_{t_{\mathrm{max}} \to \infty} \E{S_t, A_t}{\sum_{t=0}^{t_{\mathrm{max}} - 1} \left( r(S_t, A_t)
- v_g^\pi \right) \Big| S_0 = s, A_0 = a, \pi} \notag \\
& = r(s, a) - v_g^\pi + \E{S_1}{v_b^\pi(S_{1})},
\qquad \forall (s, a) \in \setname{S} \times \setname{A},
\label{equ:qb}
\end{align}
where for brevity, we define $q_b^\pi(s, a) \coloneqq q_b(\pi, s, a)$,
as well as $q_b^*(s, a) \coloneqq q_b(\pi^*, s, a)$.
The corresponding average reward Bellman optimality equation is
as follows \citep[\page{549}]{bertsekas_2012_dpoc},
\begin{equation}
q_b^*(s, a) + v_g^* =
r(s, a)
+ \sum_{s' \in \setname{S}} p(s' | s, a)
\underbrace{\max_{a' \in \setname{A}} q_b^*(s', a')}_\text{$v_b^*(s')$},
\quad \forall (s, a) \in \setname{S} \times \setname{A},
\label{equ:avgrew_bellmanopt_q}
\end{equation}
whose RHS is in the same form as the quantity maximized over all actions
in \eqref{equ:avgrew_bellman_optim}.
Therefore, the gain-optimal policy $\pi^*$ can be obtained simply by acting greedily over
the optimal action value; hence, $\pi^*$ is deterministic.
That is,
\begin{equation*}
\pi^*(s) = \argmax_{a \in \setname{A}} \Big[
\underbrace{
r(s, a)
+ \sum_{s' \in \setname{S}} p(s' | s, a) \max_{a' \in \setname{A}} q_b^*(s', a')
- v_g^*
}_{q_b^*(s, a)}
\Big], \qquad \forall s \in \setname{S}.
\end{equation*}
As can be observed, $\vecb{q}_b^*$ combines the effects of $p(s'|s, a)$ and $\vecb{v}_b^*$
without estimating them separately, at the cost of an increased number of
estimated values, since typically $\setsize{S} \times \setsize{A} \gg \setsize{S}$.
The benefit is that action selection via $\vecb{q}_b^*$ does not require
the knowledge of $r(s, a)$ and $p(s'|s, a)$.
Note that $v_g^*$ is invariant to state~$s$ and action~$a$;
hence its involvement in the above maximization has no effect.
Applying the idea of RVI on action values $\vecb{q}_b^*$ yields the following iterate,
\begin{align}
\hat{q}_b^{k+1}(s, a)
& = \B{*}{\hat{q}_b^{k}(s, a)} - \hat{v}_b^{k}(s_{\mathrm{ref}})
\tag{Relative VI on $\vecb{q}_b^*$: asynchronous updates} \\
& = \underbrace{
r(s, a) + \sum_{s' \in \setname{S}} p(s' | s, a)
\max_{a' \in \setname{A}} \hat{q}_b^k(s', a')
}_{\B{*}{\hat{q}_b^{k}(s, a)}}
- \underbrace{
\max_{a'' \in \setname{A}} \hat{q}_b^k(s_{\mathrm{ref}}, a'')
}_\text{can be interpreted as $\hat{v}_g^*$},
\label{equ:rviq_iter}
\end{align}
where $\hat{\vecb{q}}_b^{k}$ denotes the estimate of $\vecb{q}_b^*$ at iteration~$k$,
and $\mathbb{B}^*$ is based on \eqref{equ:avgrew_bellmanopt_q} so it operates on action values.
The iterates of $\hat{v}_b^{k}(s) = \max_{a \in \setname{A}} \hat{q}_b^k(s, a)$
are conjectured to converge to $v_b^*(s)$ for all $s \in \setname{S}$
by \citet[\secc{7.2.3}]{bertsekas_2012_dpoc}.
\subsection{Methods with function approximation}
We focus on \emph{parametric} techniques for $Q_b$-learning with function approximation.
In such cases, the action value is parameterized by
a weight (parameter) vector $\vecb{w} \in \setname{W} = \real{\dim(\vecb{w})}$,
where $\dim(\vecb{w})$ is the number of dimensions of $\vecb{w}$.
That is,
$\hat{q}_b^*(s, a; \vecb{w}) \approx q_b^*(s, a), \forall (s, a) \in \setname{S} \times \setname{A}$.
Note that there also exist non-parametric techniques, for instance,
kernel-based methods \citep{ormoneit_2002_kbrlavg, ormoneit_2001_kbrl}
and those based on state aggregation \citep{ortner_2007_pmet}.
\citet[\page{404}]{bertsekas_1996_neurodp} proposed the following weight update,
\begin{equation}
\vecb{w} \gets \vecb{w}
+ \beta \Big\{
\Big( r(s, a) - \hat{v}_g^*
+ \max_{a' \in \setname{A}} \hat{q}_b^*(s', a'; \vecb{w})
\Big) \nabla \hat{q}_b^*(s, a; \vecb{w})
- \vecb{w}
\Big\}.
\label{equ:qbw_update_bertsekas}
\end{equation}
Particularly, the optimal gain $v_g^*$ is estimated using
\eqreffand{equ:vgstar_sa}{equ:optgain_sa_bertsekas}.
They highlighted that even if $\hat{q}_b^*(\vecb{w})$ is bounded,
$\hat{v}_g^*$ may diverge.
\citet[\equ{9}]{das_1999_smart} updated the weight using temporal difference
(TD, or TD error) as follows,
\begin{equation}
\vecb{w} \gets \vecb{w} + \beta \Big\{
\Big(
\underbrace{
r(s, a) - \hat{v}_g^* + \max_{a' \in \setname{A}} \hat{q}_b^*(s', a'; \vecb{w})
- \hat{q}_b^*(s, a; \vecb{w})
}_\text{relative TD in terms of $\hat{q}_b^*$}
\Big) \nabla \hat{q}_b^*(s, a; \vecb{w})
\Big\},
\label{equ:qbw_update}
\end{equation}
which can be interpreted as the parameterized form of \eqreff{equ:qbstar_update},
and is a \emph{semi-gradient} update, similar to its discounted-reward counterpart
in $Q_\gamma$-learning \citep[\equ{16.3}]{sutton_2018_irl}.
This update is adopted by \citet[\alg{2}]{yang_2016_csv}.
The approximation for $v_g^*$ can be performed in various ways,
as for tabular settings in \secref{sec:valiter_tabular}.
Note that \eqreff{equ:qbw_update} differs from \eqreff{equ:qbw_update_bertsekas}
in the use of TD, affecting the update factor.
In contrast, \citet[\equ{10}]{prashanth_2011_tlrl} leveraged \eqreff{equ:qbstar_update}
in order to suggest the following update,
\begin{equation*}
\vecb{w} \gets \vecb{w} + \beta \Big\{
r(s, a) - \hat{v}_g^* + \max_{a' \in \setname{A}} \hat{q}_b^*(s', a'; \vecb{w}) \Big\},
\quad \text{whose $\hat{v}_g^*$ is updated using \eqreffand{equ:vgstar_sa}{equ:optgain_sa_bertsekas}}.
\end{equation*}
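For illustration, a minimal sketch of the TD-based update \eqreff{equ:qbw_update} for a \emph{linear} approximator $\hat{q}_b^*(s, a; \vecb{w}) = \vecb{w}^\top \vecb{\phi}(s, a)$ (in Python; the feature-vector interface and the simple greedy-step gain update are our assumptions):
\begin{verbatim}
def smart_style_update(w, v_gain_hat, phi_sa, phi_next_greedy, r,
                       beta=0.01, beta_g=0.001, update_gain=True):
    # phi_sa: features of the visited pair (s, a);
    # phi_next_greedy: features of the greedy action at the next state s'.
    delta = r - v_gain_hat + w @ phi_next_greedy - w @ phi_sa
    w = w + beta * delta * phi_sa      # gradient of q_hat w.r.t. w is phi_sa
    if update_gain:                    # one simple choice of gain estimator
        v_gain_hat = v_gain_hat + beta_g * (r - v_gain_hat)
    return w, v_gain_hat
\end{verbatim}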
\subsection{Tabular methods} \label{sec:valiter_tabular}
In model-free RL, the iteration for estimating $\vecb{q}_b^*$ in \eqreff{equ:rviq_iter}
is carried out asynchronously as follows,
\begin{equation}
\hat{q}_b^*(s, a)
\gets \hat{q}_b^*(s, a)
+ \beta \big\{
r(s, a) + \max_{a' \in \setname{A}} \hat{q}_b^*(s', a')
- \hat{v}_g^* -\ \hat{q}_b^*(s, a)
\big\},
\label{equ:qbstar_update}
\end{equation}
where $\beta$ is a positive stepsize, whereas $s$, $a$, and $s'$ denote
the current state, current action, and next state, respectively.
Here, the sum over $s'$ in \eqreff{equ:rviq_iter},
i.e.~ the expectation with respect to $S'$, is approximated by a single sample $s'$.
The stochastic approximation (SA) based update in \eqreff{equ:qbstar_update} is
the essence of $Q_b$-learning \citep[\secc{5}]{schwartz_1993_rlearn}, and of most of its variants.
One exception is that of \citet[\equ{8}]{prashanth_2011_tlrl},
where there is no subtraction of $\hat{q}_b^*(s, a)$.
In order to prevent the iterate $\hat{q}_b^*$ from becoming very large (causing numerical instability),
\citet[\page{702}]{singh_1994_avgrewrl} advocated assigning $\hat{q}_b^*(s_{\mathrm{ref}}, a_{\mathrm{ref}}) \gets 0$,
for arbitrary-but-fixed reference state $s_{\mathrm{ref}}$ and action $a_{\mathrm{ref}}$.
Alternatively, \citet[\page{404}]{bertsekas_1996_neurodp} advised setting $\hat{q}_b^*(s_{\mathrm{ref}}, \cdot) \gets 0$.
Both suggestions seem to follow the heuristic of pinning down a unique solution of
the underdetermined non-linear system of Bellman optimality equations in \eqreff{equ:avgrew_bellman_optim}.
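A minimal sketch of the tabular update \eqreff{equ:qbstar_update} (in Python, assuming a NumPy action-value table; the function name is ours):
\begin{verbatim}
def qb_learning_step(q_hat, v_gain_hat, s, a, r, s_next, beta):
    # q_hat is an |S| x |A| array; v_gain_hat is a scalar estimate of the
    # optimal gain, obtained by any of the estimators discussed next.
    target = r + q_hat[s_next].max() - v_gain_hat
    q_hat[s, a] += beta * (target - q_hat[s, a])
    return q_hat
\end{verbatim}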
There are several ways to approximate the optimal gain $v_g^*$ in \eqreff{equ:qbstar_update},
as summarized in \figref{fig:taxonomy_optgainapprox} (\appref{sec:taxonomy}).
In particular, \citet[\secc{2.2}]{abounadi_2001_rviqlearn} proposed three variants as follows.
\begin{enumerate} [label=\roman{*}.]
\item $\hat{v}_g^* \gets \hat{q}_b^*(s_{\mathrm{ref}}, a_{\mathrm{ref}})$
with reference state $s_{\mathrm{ref}}$ and action $a_{\mathrm{ref}}$.
\cite{yang_2016_csv} argued that properly choosing $s_{\mathrm{ref}}$ can be difficult
in that the choice of $s_{\mathrm{ref}}$ affects the learning performance,
especially when the state set is large.
They proposed setting $\hat{v}_g^* \gets c$ for a constant $c$ from prior knowledge.
Moreover, \cite{wan_2020_avgrew} showed empirically that such a reference
retards learning and causes divergence.
This happens when $(s_{\mathrm{ref}}, a_{\mathrm{ref}})$ is infrequently visited,
e.g.~ being a transient state in unichain MDPs.
\item $\hat{v}_g^* \gets \max_{a' \in \setname{A}} \hat{q}_b^*(s_\mathrm{ref}, a')$,
used by \citet[\equ{8}]{prashanth_2011_tlrl}.
This inherits the same issue regarding $s_{\mathrm{ref}}$ as before.
Whenever the action set is large, the maximization over $\setname{A}$ should
be estimated, yielding another layer of approximation errors.
\item $\hat{v}_g^* \gets
\sum_{(s, a) \in \setname{S} \times \setname{A}} \hat{q}_b^*(s, a) / (\setsize{S} \setsize{A})$,
used by \citet[\equ{11}]{avrachenkov_2020_wiql}.
Averaging all entries of $\hat{\vecb{q}}_b^*$ removes the need for $s_{\mathrm{ref}}$ and $a_{\mathrm{ref}}$.
However, because $\hat{q}_b^*$ is itself an estimate whose accuracy varies
across state-action pairs, the estimate $\hat{v}_g^*$ inherits the averaged approximation error.
The potential issue due to large state and action sets also remains a concern.
\end{enumerate}
Equation \eqreff{equ:qbstar_update}, with one of the three proposals (i--iii) for estimating $v_g^*$
(sketched in code below), constitutes RVI $Q_b$-learning.
Although it operates asynchronously, its convergence is assured under a suitably decreasing stepsize~$\beta$.
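The three estimators above can be sketched as follows (in Python, assuming a NumPy table $\hat{\vecb{q}}_b^*$; the function names are ours):
\begin{verbatim}
def gain_from_reference_pair(q_hat, s_ref, a_ref):
    return q_hat[s_ref, a_ref]      # variant (i): fixed reference pair

def gain_from_reference_state(q_hat, s_ref):
    return q_hat[s_ref].max()       # variant (ii): greedy value at s_ref

def gain_from_average(q_hat):
    return q_hat.mean()             # variant (iii): average of all entries
\end{verbatim}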
The optimal gain $v_g^*$ in \eqreff{equ:qbstar_update} can also be estimated
iteratively via SA as follows,
\begin{equation}
\hat{v}_g^* \gets \hat{v}_g^* + \beta_g \Delta_g,
\quad \text{for some update $\Delta_g$ and a positive stepsize $\beta_g$}.
\label{equ:vgstar_sa}
\end{equation}
Thus, the corresponding $Q_b$-learning becomes 2-timescale SA,
involving both $\beta$ and $\beta_g$.
There exist several variations on $\Delta_g$, as listed below; a minimal code sketch of the first two follows the list.
\begin{enumerate} [label=\roman{*}.]
\item In \citet[\secc{5}]{schwartz_1993_rlearn},
\begin{equation}
\Delta_g \coloneqq r(s, a) +
\underbrace{
\max_{a' \in \setname{A}} \hat{q}_b^*(s', a')
- \max_{a' \in \setname{A}} \hat{q}_b^*(s, a')
}_\text{to minimize the variance of updates}
-\ \hat{v}_g^*,
\quad \text{if}\
\underbrace{a = \argmax_{a' \in \setname{A}} \hat{q}_b^*(s, a')}_\text{a greedy action $a$}.
\label{equ:vgstarhat_schwartz}
\end{equation}
By updating only when greedy actions are executed, the influence of exploratory actions
(which are mostly suboptimal) can be avoided.
\item In \citet[\alg{3}]{singh_1994_avgrewrl}, and \citet[\equ{6}]{wan_2020_avgrew},
\begin{equation*}
\Delta_g \coloneqq \underbrace{
r(s, a)
+ \max_{a' \in \setname{A}} \hat{q}_b^*(s', a')
- \hat{q}_b^*(s, a)
}_\text{$v_g^*$ in expectation of $S'$ when using $q_b^*$
(see \eqreff{equ:avgrew_bellmanopt_q})}
-\ \hat{v}_g^*. %
\tag{To update at every action}
\end{equation*}
Since the equation for $v_g^*$ \eqreff{equ:avgrew_bellmanopt_q} applies to every state-action pair,
it is reasonable to update $\hat{v}_g^*$ for both greedy and exploratory actions as above.
This also implies that information from non-greedy actions is not wasted.
Hence, it is more sample-efficient than that of \eqreff{equ:vgstarhat_schwartz}.
\item In \citet[\alg{4}]{singh_1994_avgrewrl}, and \citet[\equ{8}]{das_1999_smart},
\begin{equation*}
\Delta_g \coloneqq r(s, a) - \hat{v}_g^*,
\quad \text{if $a$ is chosen greedily with $\beta_g$ set to $1/(n_u + 1)$},
\end{equation*}
where $n_u$ denotes the number of $\hat{v}_g^*$ updates so far \eqreff{equ:vgstar_sa}.
This special value of $\beta_g$ makes the estimation equivalent to
the sample average of the rewards received for greedy actions.
\item In \citet[\page{404}]{bertsekas_1996_neurodp}, \citet[\equ{2.9b}]{abounadi_2001_rviqlearn},
and \citet[\page{551}]{bertsekas_2012_dpoc},
\begin{equation}
\Delta_g \coloneqq \underbrace{
\max_{a' \in \setname{A}} \hat{q}_b^*(s_{\mathrm{ref}}, a')
}_{\hat{v}_b^*(s_{\mathrm{ref}})},
\quad \text{for an arbitrary reference state $s_{\mathrm{ref}}$}.
\label{equ:optgain_sa_bertsekas}
\end{equation}
This benefits from having $s_{\mathrm{ref}}$ such that $v_b^*(s_{\mathrm{ref}})$ can be interpreted as $v_g^*$,
while also satisfying the underdetermined system of average-reward Bellman optimality equations.
\end{enumerate}
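A minimal sketch of the SA-based gain update \eqreff{equ:vgstar_sa} and of the first two $\Delta_g$ variants (in Python, assuming a NumPy action-value table; the function names are ours):
\begin{verbatim}
def delta_g_rlearning(q_hat, s, a, r, s_next, v_gain_hat):
    # Variant (i): to be applied only when the executed action a is greedy in s.
    return r + q_hat[s_next].max() - q_hat[s].max() - v_gain_hat

def delta_g_every_step(q_hat, s, a, r, s_next, v_gain_hat):
    # Variant (ii): usable for greedy and exploratory actions alike.
    return r + q_hat[s_next].max() - q_hat[s, a] - v_gain_hat

def gain_sa_update(v_gain_hat, delta_g, beta_g):
    # Generic stochastic-approximation step for the gain estimate.
    return v_gain_hat + beta_g * delta_g
\end{verbatim}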
\chapter[Cat\'egorie des espaces analytiques~: propri\'et\'es]{Cat\'egorie des espaces analytiques~: propri\'et\'es}\label{catan}
Ce chapitre est consacr\'e \`a l'\'etude de la cat\'egorie des espaces analytiques sur un anneau de Banach. Fixons d\`es \`a pr\'esent un anneau de Banach $(\mathcal{A},\nm)$.
Dans la section~\ref{sec:analytification}, nous construisons un foncteur analytification de la cat\'egorie des sch\'emas localement de pr\'esentation finie sur~$\mathcal{A}$ vers celle des espaces $\mathcal{A}$-analytiques. L'ingr\'edient principal est l'existence d'une correspondance bijective entre les morphismes d'un espace $\mathcal{A}$-analytique vers l'espace affine analytique~$\E{n}{\mathcal{A}}$ et les $n$-uplets de fonctions globales sur~$X$. La d\'emonstration de ce r\'esultat repose de fa\c{c}on cruciale sur le th\'eor\`eme de fermeture des id\'eaux~\ref{fermeture}. Aussi devrons-nous souvent, dans cette section et tout au long du chapitre, supposer que~$\mathcal{A}$ est un anneau de base g\'eom\'etrique.
Dans la section~\ref{sec:extensionB}, \'etant donn\'e un anneau de Banach~$\mathcal{B}$ et un morphisme born\'e $\mathcal{A} \to \mathcal{B}$, nous construisons un foncteur~$\wc \ho{\mathcal{A}} \mathcal{B}$ d'extension des scalaires de la cat\'egorie des espaces $\mathcal{A}$-analytiques vers celle des espaces $\mathcal{B}$-analytiques. Dans la section~\ref{sec:produitsfibres}, nous d\'emontrons que la cat\'egorie des espaces $\mathcal{A}$-analytiques admet des produits fibr\'es.
La section~\ref{sec:paspconv} est consacr\'ee aux parties spectralement convexes et, plus pr\'ecis\'ement, \`a la description de l'extension des scalaires d'une partie spectralement convexe et du produit fibr\'e de deux parties spectralement convexes. Nous introduisons les morphismes propres et montrons que les extensions des scalaires en sont des exemples. Nous expliquons \'egalement comment identifier une fibre d'un morphisme \`a un produit fibr\'e.
La section finale~\ref{sec:morphismessepares} est consacr\'ee aux morphismes s\'epar\'es.
\section{Analytification de sch\'emas}
\label{exemple_esp}\label{sec:analytification}
\index{Analytification!|(}
La th\'eorie des sch\'emas peut fournir de nombreux exemples d'espaces analytiques, \emph{via} un foncteur d'analytification. Afin de construire ce foncteur, nous aurons besoin de la description attendue de l'ensemble des morphismes d'un espace analytique quelconque vers un espace affine analytique.
\begin{prop}\label{morphsec}\index{Morphisme analytique!vers un espace affine}
Soit~$f \colon \mathcal{B}\to\mathcal{B}'$ un morphisme born\'e de~$\mathcal{A}$-alg\`ebres de Banach, o\`u $\mathcal{B}'$ est un anneau de base g\'eom\'etrique. Soit~$X$ un espace $\mathcal{B}'$-analytique. L'application
\[\fonction{s_{X}}{\Hom_{\An_{\mathcal{A}},f}(X,\E{n}{\mathcal{B}})}{\Gamma(X,\mathcal{O}_{X})^n}{\varphi}{(\varphi^\sharp(T_1),\dotsc,\varphi^\sharp(T_n))},\]
o\`u $T_{1},\dotsc,T_{n}$ sont les coordonn\'ees sur~$\E{n}{\mathcal{B}}$, est bijective.
En outre, la famille $(s_{X})_{X \in \Ob(\mathcal{B}'-\An)}$ d\'efinit une transformation naturelle entre les foncteurs $\Hom_{\An_\mathcal{A},f}(\wc,\E{n}{\mathcal{B}})$ et $\Gamma(\wc,\mathcal{O})^n$.
\end{prop}
\begin{proof}
Commen\c cons par v\'erifier que la famille $s := (s_{X})$ d\'efinit bien une transformation naturelle. En effet, pour tout morphisme d'espaces $\mathcal{B}'$-analytiques $\psi \colon Y \to X$, on a
\[s_{Y}(\varphi\circ\psi)=(\psi^\sharp(\varphi^\sharp(T_1)),\ldots,\psi^\sharp(\varphi^\sharp(T_n)))=\psi^\sharp(s_{X}(\varphi)).\]
\medbreak
Soit $X$ un espace $\mathcal{B}'$-analytique. Montrons que $s_{X}$ est bijective.
$\bullet$ \textit{Injectivit\'e :} Soient $\varphi \colon X \to \E{n}{\mathcal{B}}$ et $\varphi' \colon X \to \E{n}{\mathcal{B}}$ des morphismes d'espaces analytiques au-dessus de~$f$ tels que l'on ait
\[ (\varphi^\sharp(T_1),\ldots ,\varphi^\sharp(T_n))=(\varphi'^\sharp(T_1),\ldots,\varphi'^\sharp(T_n)). \]
Soit $x\in X$. Commen\c cons par montrer que~$\varphi(x)=\varphi'(x)$. Soit $P \in \mathcal{B}[T_{1},\dotsc,T_{n}]$. Notons $f(P) \in \mathcal{B}'[T_{1},\dotsc,T_{n}]$ le polyn\^ome obtenu en appliquant~$f$ \`a tous les coefficients de~$P$. D'apr\`es la proposition~\ref{prop:propmorphismeaudessusdef}, on a
\begin{align*}
|P(T_{1}(\varphi(x)),\dotsc,T_{n}(\varphi(x)))| &= |f(P)(\varphi^\sharp(T_{1})(x),\dotsc,\varphi^\sharp(T_{n})(x))|\\
& = |f(P)(\varphi'^\sharp(T_{1}),\dotsc,\varphi'^\sharp(T_{n}))(x)|\\
& =|P(T_{1}(\varphi'(x)),\dotsc,T_{n}(\varphi'(x)))|.
\end{align*}
On en d\'eduit que $\varphi(x) = \varphi'(x)$.
Montrons, \`a pr\'esent, que, pour tout $y \in \E{n}{\mathcal{B}}$, les morphismes $\varphi^\sharp \colon \mathcal{O}_{\E{n}{\mathcal{B}},y}\to\varphi_\ast(\mathcal{O}_X)_y$ et $\varphi'^\sharp:\mathcal{O}_{\E{n}{\mathcal{B}},y}\to \varphi'_\ast(\mathcal{O}_X)_y = \varphi_\ast(\mathcal{O}_X)_y$ sont \'egaux.
Soit $y \in \E{n}{\mathcal{B}}$. Soient~$F\in \mathcal{O}_{\E{n}{\mathcal{B}},y}$ et~$V$ un voisinage compact de~$y$ dans~$\E{n}{\mathcal{B}}$ sur lequel~$F$ est d\'efinie. Quitte \`a r\'eduire~$V$, on peut supposer que~$F$ est limite uniforme sur~$V$ d'une suite de fractions rationnelles~$\left(\frac{P_i}{Q_i}\right)_{i\in \ensuremath{\mathbf{N}}}$ sans p\^oles sur~$V$.
Soit~$x\in\varphi^{-1}(\mathring V)$. Montrons que $\varphi^\sharp(F) = \varphi'^\sharp(F)$ dans~$\mathcal{O}_{X,x}$. Par d\'efinition d'espace $\mathcal{B}'$-analytique, $x$ poss\`ede un voisinage~$Z$ qui est un ferm\'e analytique d'un ouvert~$U$ d'un espace affine analytique sur~$\mathcal{B}'$. Notons~$\mathcal{I}$ le faisceau d'id\'eaux coh\'erent sur~$U$ d\'efinissant~$Z$ et $j \colon Z \to U$ l'immersion ferm\'ee correspondante.
La d\'efinition de morphisme assure qu'il existe un voisinage~$U'$ de~$j(x)$ dans~$U$ et un morphisme $\tilde\varphi \colon U' \to \E{n}{\mathcal{B}}$ au-dessus de~$f$ tels que le diagramme
\[\begin{tikzcd}
U' \arrow[r, "\tilde\varphi"] & \E{n}{\mathcal{B}}\\
j^{-1}(U') \arrow[u, "j"] \arrow[ru, "\varphi_{|j^{-1}(U')}"']&
\end{tikzcd}\]
commute. Quitte \`a restreindre~$U'$, on peut supposer qu'il existe \'egalement un morphisme $\tilde\varphi' \colon U' \to \E{n}{\mathcal{B}}$ au-dessus de~$f$ tel que le diagramme
\[\begin{tikzcd}
U' \arrow[r, "\tilde\varphi'"] & \E{n}{\mathcal{B}}\\
j^{-1}(U') \arrow[u, "j"] \arrow[ru, "\varphi'_{|j^{-1}(U')}"']&
\end{tikzcd}\]
commute.
Par la d\'efinition de morphisme entre ouverts d'espaces affines, il existe un voisinage compact~$U''$ de~$x$ dans~$U'$ tel que $\tilde\varphi(U'') \subset V$ et, pour tout $G \in \mathcal{O}_{\E{n}{\mathcal{B}}}(V)$, $\|\tilde{\varphi}^\sharp(G)\|_{U''}\leq\|G\|_{V}$. On en d\'eduit que
\[\lim_{i \to +\infty} \left\|\tilde{\varphi}^\sharp(F) - \tilde{\varphi}^\sharp\left(\frac{P_i}{Q_i}\right) \right\|_{U''} = 0.\]
Quitte \`a restreindre $U''$, on peut supposer que le m\^eme r\'esultat vaut pour~$\tilde\varphi'$, et donc que
\[\lim_{i \to +\infty} \|\tilde{\varphi}^\sharp(F) - \tilde{\varphi}'^\sharp(F)\|_{U''} = 0.\]
Par hypoth\`ese, pour tout $k \in \cn{1}{n}$, $\tilde{\varphi}^\sharp(T_k)-\tilde{\varphi}'^\sharp(T_k)$ appartient \`a~$\mathcal{I}(U'')$. On en d\'eduit que, pour~$i\in\ensuremath{\mathbf{N}}$, $\tilde{\varphi}^\sharp\left(\frac{P_i}{Q_i}\right)-\tilde{\varphi}'^\sharp\left(\frac{P_i}{Q_i}\right)$ appartient \`a~$\mathcal{I}(U'')$. Le corollaire~\ref{limite} assure alors que l'\'el\'ement $\tilde{\varphi}^\sharp(F)-\tilde{\varphi}'^\sharp(F)$ de $\mathcal{O}_{U'',j(x)}$ appartient \`a~$\mathcal{I}_{j(x)}$ et donc que $\varphi^\sharp(F)=\varphi'^\sharp(F)$ dans $\mathcal{O}_{X,x}$.
On a d\'emontr\'e que $\varphi^\sharp=\varphi'^\sharp$, et donc que $\varphi=\varphi'$.
\medbreak
$\bullet$ \textit{Surjectivit\'e :} Soient $F_1,\ldots,F_n\in\mathcal{O}_X(X)$. Puisque l'application~$s_{X}$ est injective, il suffit de v\'erifier que tout point~$x$ de~$X$ poss\`ede un voisinage ouvert~$Z$ tel que $(F_{1},\dotsc,F_{n})$ appartienne \`a l'image de~$s_{Z}$.
Soit $x$ un point de~$X$. Il poss\`ede un voisinage~$Z$ qui est un ferm\'e analytique d'un ouvert~$U$ d'un espace affine analytique sur~$\mathcal{B}'$. Notons $j \colon Z \to U$ l'immersion ferm\'ee correspondante. Quitte \`a restreindre~$Z$ et~$U$, on peut supposer que $F_{1},\dotsc,F_{n} \in \mathcal{O}_{Z}(Z)$ appartiennent \`a l'image du morphisme $j^\sharp \colon \mathcal{O}_U(U)\to\mathcal{O}_Z(Z)$. Choisissons des relev\'es $F'_1,\ldots,F'_n \in\mathcal{O}_{U}(U)$ de ces \'el\'ements.
D'apr\`es l'exemple~\ref{exemple_morphisme}, il existe un morphisme d'espaces $\mathcal{B}'$-analytiques $\psi \colon U \to \E{n}{\mathcal{B}'}$ tel que, pour tout $i \in \cn{1}{n}$, on ait $\psi^\sharp(T_{i}) = F'_{i}$. Posons $\varphi := \tilde f_{n} \circ \psi \circ j$, o\`u $\tilde f_{n} \colon \E{n}{\mathcal{B}'} \to \E{n}{\mathcal{B}}$ est le morphisme d\'efini dans l'exemple~\ref{ex:tildefn}. C'est un morphisme analytique au-dessus de~$f$ qui satisfait $s_{Z}(\varphi) = (F_{1},\dotsc,F_{n})$.
\end{proof}
\begin{coro}\label{cor:factorisationouvert}\index{Morphisme analytique!vers un espace affine}
Soit~$f \colon \mathcal{B}\to\mathcal{B}'$ un morphisme born\'e de~$\mathcal{A}$-alg\`ebres de Banach, o\`u $\mathcal{B}'$ est un anneau de base g\'eom\'etrique. Soient $P_{1},\dotsc,P_{k} \in \mathcal{B}[T_1,\ldots,T_n]$ et $r_{1},s_{1},\dotsc,r_{k},s_{k} \in \ensuremath{\mathbf{R}}$. Posons
\[U:=\bigcap_{i=1}^k\{y\in\E{n}{\mathcal{B}} : r_i<|P_i(y)|<s_i\}.\]
Soit~$X$ un espace $\mathcal{B}'$-analytique. Alors, l'application~$s_{X}$ induit une bijection entre $\Hom_{\An_{\mathcal{A}},f}(X,U)$ et l'ensemble
\[\{(f_{1},\dotsc,f_{n}) \in \mathcal{O}(X)^n :
\forall x\in X, \forall i\in\cn{1}{k},\ r_i<|P_i(f_1,\ldots,f_n)(x)|<s_i.\}\]
\end{coro}
\begin{proof}
Le r\'esultat provient du fait que, pour tout $x\in X$ et tout $i\in \cn{1}{k}$, on a
\begin{align*}
|P_{i}(\varphi(x))| & = |P_{i}(T_{1}(\varphi(x)),\dotsc,T_{n}(\varphi(x)))|\\
& = |P_{i}(\varphi^\sharp(T_{1})(x),\dotsc,\varphi^\sharp(T_{n})(x))|.
\end{align*}
\end{proof}
Nous pouvons maintenant construire un foncteur d'analytification. Nous noterons $\Hom_{\mathcal{A}-\mathrm{loc}}(\wc,\wc)$ l'ensemble des morphismes dans la cat\'egorie des espaces localement $\mathcal{A}$-annel\'es.%
\nomenclature[Kcz]{$\Hom_{\mathcal{A}-\mathrm{loc}}(X,Y)$}{pour $X,Y$ espaces localement $\mathcal{A}$-annel\'es, ensemble des morphismes d'espaces localement $\mathcal{A}$-annel\'es de~$X$ dans~$Y$}
\begin{theo}\label{thm:analytification}\index{Analytification}
Supposons que~$\mathcal{A}$ est un anneau de base g\'eom\'etrique.
Soit~$\mathcal{X}$ un sch\'ema localement de pr\'esentation finie sur~$\mathcal{A}$. Le foncteur
\[\fonction{\Phi_{\mathcal{X}}}{\mathcal{A}-\An}{\mathrm{Ens}}{Y}{\Hom_{\mathcal{A}-\mathrm{loc}}(Y,\mathcal{X})}\]
est repr\'esentable.
\end{theo}
\begin{proof}
Nous allons proc\'eder en plusieurs \'etapes.
\medbreak
$\bullet$ \textit{$\mathcal{X}$ est un espace affine.}
Il existe $n\in \ensuremath{\mathbf{N}}$ tel que $\mathcal{X} = \ensuremath{\mathbf{A}}^n_{\mathcal{A}}$. Notons~$T_{1},\dotsc,T_{n}$ les coordonn\'ees sur~$\ensuremath{\mathbf{A}}^n_{\mathcal{A}}$. Pour tout espace $\mathcal{A}$-analytique~$Y$, l'application
\[\begin{array}{ccc}
\Hom_{\mathcal{A}-\mathrm{loc}}(Y,\ensuremath{\mathbf{A}}^n_{\mathcal{A}}) & \to & \mathcal{O}(Y)^n\\
\varphi & \mapsto & (\varphi^\sharp(T_{1}),\dotsc,\varphi^\sharp(T_{n}))
\end{array}\]
est alors une bijection (\cf~\cite[proposition~1.6.3]{EGAInew}). La proposition~\ref{morphsec} entra\^ine alors que~$\Phi_{\ensuremath{\mathbf{A}}^n_{\mathcal{A}}}$ est repr\'esent\'e par~$\E{n}{\mathcal{A}}$.
\medbreak
$\bullet$ \textit{$\mathcal{X}$ est un sch\'ema affine de pr\'esentation finie sur~$\mathcal{A}$.}
Il existe $n\in \ensuremath{\mathbf{N}}$ et un id\'eal de type fini~$I$ de $\mathcal{O}(\ensuremath{\mathbf{A}}^n_{\mathcal{A}})$ tel que~$\mathcal{X}$ soit le sous-sch\'ema ferm\'e de~$\ensuremath{\mathbf{A}}^n_{\mathcal{A}}$ d\'efini par~$I$. L'id\'eal~$I$ engendre un faisceau d'id\'eaux coh\'erent~$\mathcal{I}$ de~$\E{n}{\mathcal{A}}$. Notons~$X$ le ferm\'e analytique de~$\E{n}{\mathcal{A}}$ d\'efini par~$\mathcal{I}$. Le point pr\'ec\'edent et la proposition~\ref{immersion_ferm\'e} assurent que le foncteur~$\Phi_{\mathcal{X}}$ est repr\'esent\'e par~$X$.
\medbreak
$\bullet$ \textit{$\mathcal{X}$ est un sch\'ema localement de pr\'esentation finie sur~$\mathcal{A}$.}
Le sch\'ema~$\mathcal{X}$ admet un recouvrement ouvert~$\{\mathcal{X}_i\}_{i\in I}$ par des sch\'emas affines de pr\'esentation finie sur~$\mathcal{A}$. Pour chaque $i\in I$, consid\'erons l'espace $\mathcal{A}$-analytique~$X_{i}$ et le morphisme $\rho_{i} \colon X_{i} \to \mathcal{X}_{i}$ associ\'es \`a~$\mathcal{X}_{i}$ par la construction du point pr\'ec\'edent.
Pour tout~$i\in I$ et tout ouvert~$U$ de~$\mathcal{X}_{i}$, on v\'erifie que $\rho_{i}^{-1}(U)$ repr\'esente le foncteur~$\Phi_{U}$. On d\'eduit de cette propri\'et\'e que les~$X_{i}$ se recollent. En effet, on peut identifier les intersections puisqu'elles repr\'esentent un m\^eme foncteur.
On v\'erifie que l'espace $\mathcal{A}$-analytique~$X$ obtenu en recollant les~$X_{i}$ repr\'esente le foncteur~$\Phi_{\mathcal{X}}$.
\end{proof}
\begin{defi}\label{def:analytification}\index{Analytification!d'un schema@d'un sch\'ema|textbf}\index{Analytification!d'un morphisme|textbf}\index{Morphisme analytique!analytifie@analytifi\'e|see{Analytification}}\index{Espace analytique|analytifie@analytifi\'e|see{Analytification}}%
\nomenclature[Kma]{$\mathcal{X}^\an$}{analytifi\'e d'un sch\'ema~$\mathcal{X}$}%
\nomenclature[Kmb]{$\rho_{\mathcal{X}}$}{morphisme canonique $\mathcal{X}^\an\to\mathcal{X}$}%
\nomenclature[Kmc]{$\varphi^\an$}{analytifi\'e d'un morphisme de sch\'emas~$\varphi$}
Supposons que~$\mathcal{A}$ est un anneau de base g\'eom\'etrique.
Soit~$\mathcal{X}$ un sch\'ema localement de pr\'esentation finie sur~$\mathcal{A}$. On appelle \emph{analytification} ou \emph{analytifi\'e} de~$\mathcal{X}$ l'espace $\mathcal{A}$-analytique qui repr\'esente~$\Phi_{\mathcal{X}}$. On le note~$\mathcal{X}^\an$ et on note $\rho_{\mathcal{X}} \colon \mathcal{X}^\an\to\mathcal{X}$ le morphisme d'espaces localement $\mathcal{A}$-annel\'es canoniquement associ\'e.
Soit $\varphi \colon \mathcal{X} \to \mathcal{Y}$ un morphisme entre sch\'emas localement de pr\'esentation finie sur~$\mathcal{A}$. La repr\'esentabilit\'e du foncteur~$\Phi_{\mathcal{Y}}$ assure l'existence d'un unique morphisme d'espace $\mathcal{A}$-analytiques de~$\mathcal{X}^\an$ dans~$\mathcal{Y}^\an$, not\'e~$\varphi^\an$,
faisant commuter le diagramme
\[\begin{tikzcd}
\mathcal{X}^\an \ar[d, "\varphi^\an"] \ar[r, "\rho_{\mathcal{X}}"] & \mathcal{X} \ar[d, "\varphi"]\\
\mathcal{Y}^\an \ar[r, "\rho_{\mathcal{Y}}"] & \mathcal{Y}
\end{tikzcd}.\]
Le morphisme $\varphi^\an \colon \mathcal{X}^\an \to \mathcal{Y}^\an$ est appel\'e \emph{analytification} ou \emph{analytifi\'e} de~$\varphi$.
\end{defi}
\index{Analytification!|)}
\section{Extension des scalaires}\label{sec:extensionB}
\index{Extension des scalaires|(}
Soit $f\colon \mathcal{A} \to \mathcal{B}$ un morphisme d'anneaux de Banach born\'e, o\`u $\mathcal{B}$ est un anneau de base g\'eom\'etrique. Rappelons que, pour tout $n\in \ensuremath{\mathbf{N}}$, nous avons d\'efini un morphisme $\tilde{f}_n \colon \E{n}{\mathcal{B}} \to \E{n}{\mathcal{A}}$ dans l'exemple~\ref{ex:tildefn}.
\begin{defi}\label{def:extensionscalaire}\index{Extension des scalaires|textbf}%
\nomenclature[Kna]{$X_{\mathcal{B}}$}{extension des scalaires d'un espace $\mathcal{A}$-analytique~$X$ \`a un anneau de Banach~$\mathcal{B}$}%
\nomenclature[Knb]{$X \ho{\mathcal{A}} \mathcal{B}$}{extension des scalaires d'un espace $\mathcal{A}$-analytique~$X$ \`a un anneau de Banach~$\mathcal{B}$}%
Soit $X$ un espace $\mathcal{A}$-analytique. On dit que \emph{$X$ s'\'etend \`a~$\mathcal{B}$} si le foncteur
\[\fonction{B_{X,\mathcal{B}}}{\mathcal{B}-\An}{\mathrm{Ens}}{Y}{\Hom_{\An_{\mathcal{A}},f}(Y,X)}\]
est repr\'esentable. Dans ce cas, on appelle \emph{extension des scalaires de~$X$ \`a~$\mathcal{B}$} l'espace $\mathcal{B}$-analytique repr\'esentant~$B_{X,\mathcal{B}}$. On le note~$X_{\mathcal{B}}$ ou~$X \ho{\mathcal{A}} \mathcal{B}$ (par analogie avec le cas classique, \cf~\cite[\S 1.4]{Ber2}).
\end{defi}
Commen\c cons par un lemme g\'en\'eral.
\begin{lemm}\label{lem:extensionBouverts}
Soit $X$ un espace $\mathcal{A}$-analytique qui s'\'etend \`a~$\mathcal{B}$. Notons $\pi_{\mathcal{B}} \colon X_{\mathcal{B}} \to X$ le morphisme canonique.
\begin{enumerate}[i)]
\item Soit~$U$ un ouvert de~$X$. Alors $\pi_{\mathcal{B}}^{-1}(U)$ est l'extension des scalaires de~$U$ \`a~$\mathcal{B}$.
\item Soit~$F$ un ferm\'e de~$X$ d\'efini par un faisceau d'id\'eaux coh\'erent~$\mathcal{I}$. Alors le ferm\'e analytique de~$X_\mathcal{B}$ d\'efini par l'id\'eal $\pi_{\mathcal{B}}^\ast \mathcal{I}$ est l'extension des scalaires de~$F$ \`a~$\mathcal{B}$.
\item Soit $\iota \colon X' \to X$ une immersion (resp. immersion ouverte, resp. immersion ferm\'ee) d'espaces $\mathcal{A}$-analytiques. Alors~$X'$ s'\'etend \`a~$\mathcal{B}$ et le morphisme canonique $X'_{\mathcal{B}} \to X_{\mathcal{B}}$ est une immersion (resp. immersion ouverte, resp. immersion ferm\'ee) d'espaces $\mathcal{B}$-analytiques.
\end{enumerate}
\end{lemm}
\begin{proof}
Le point~i) d\'ecoule directement de la propri\'et\'e universelle de l'extension. Pour d\'emontrer le point~ii), on utilise la proposition~\ref{immersion_ferm\'e}. Le point~iii) se d\'eduit des deux premiers.
\end{proof}
Passons maintenant \`a des r\'esultats d'existence.
\begin{prop}\label{prop:extensionBaffine}\index{Espace affine analytique!extension des scalaires}
Soit~$n\in \ensuremath{\mathbf{N}}$. Alors l'extension des scalaires de~$\E{n}{\mathcal{A}}$ \`a~$\mathcal{B}$ est~$\E{n}{\mathcal{B}}$ avec pour morphisme canonique $\tilde f_{n} \colon \E{n}{\mathcal{B}} \to \E{n}{\mathcal{A}}$.
\end{prop}
\begin{proof}
Le r\'esultat d\'ecoule de la proposition~\ref{morphsec}.
\end{proof}
\begin{theo}\label{thm:extensionB}
Tout espace $\mathcal{A}$-analytique s'\'etend \`a~$\mathcal{B}$.
\end{theo}
\begin{proof}
Soit~$X$ un espace $\mathcal{A}$-analytique. Si~$X$ est un mod\`ele local $\mathcal{A}$-analytique, alors le r\'esultat d\'ecoule de la proposition~\ref{prop:extensionBaffine} et du lemme~\ref{lem:extensionBouverts}. Le cas g\'en\'eral s'en d\'eduit en recouvrant~$X$ par des mod\`eles locaux et en recollant les diff\'erentes extensions.
\end{proof}
\begin{lemm}\label{lem:BB'}
Soit~$X$ un espace $\mathcal{A}$-analytique. Soit $f'\colon \mathcal{B} \to \mathcal{B}'$ un morphisme d'anneaux de Banach born\'e, o\`u $\mathcal{B}'$ est un anneau de base g\'eom\'etrique. Alors on a un isomorphisme canonique
\[X\ho{\mathcal{A}} \mathcal{B}' \xrightarrow[]{\sim} (X \ho{\mathcal{A}} \mathcal{B})\ho{\mathcal{B}} \mathcal{B}'.\]
\end{lemm}
\begin{proof}
Lorsque~$X$ est un espace affine analytique, le r\'esultat d\'ecoule de la proposition~\ref{prop:extensionBaffine}. Celui pour les mod\`eles locaux analytiques s'en d\'eduit par le lemme~\ref{lem:extensionBouverts}, et, finalement, le cas g\'en\'eral, par recollement.
\end{proof}
Il est souvent utile de consid\'erer l'extension des scalaires d'un espace au corps r\'esiduel compl\'et\'e d'un de ses points.
\begin{lemm}\label{lem:morphismexHx}\index{Morphisme analytique!associe a un point@associ\'e \`a un point}
Soient~$X$ un espace $\mathcal{A}$-analytique et $x$ un point de~$X$. Le morphisme $\lambda_{x} \colon \mathcal{M}(\mathcal{H}(x)) \to X$ de l'exemple~\ref{ex:morphismex} induit un morphisme
\[\lambda_{x} \ho{\mathcal{A}} \mathcal{H}(x) \colon \mathcal{M}(\mathcal{H}(x)) \to X \ho{\mathcal{A}} \mathcal{H}(x) \]
qui est une immersion ferm\'ee. Son image~$x'$ est envoy\'ee sur~$x$ par le morphisme $X \ho{\mathcal{A}} \mathcal{H}(x) \to X$ et le morphisme canonique $\mathcal{H}(x) \to \mathcal{H}(x')$ est un isomorphisme isom\'etrique.
\end{lemm}
\begin{proof}
L'image de~$\lambda_{x}$ \'etant le point~$x$, on peut localiser au voisinage de~$x$ pour d\'emontrer le r\'esultat. On peut donc supposer que~$X$ est un ferm\'e analytique d'un ouvert~$U$ d'un espace affine~$\E{n}{\mathcal{A}}$. Notons $T_{1},\dotsc,T_{n}$ les coordonn\'ees sur~$\E{n}{\mathcal{A}}$. Le morphisme~$\lambda_{x}$ est alors d\'efini comme celui associ\'e au morphisme d'\'evaluation $\mathcal{A}[T_{1},\dotsc,T_{n}] \to \mathcal{H}(x)$.
Par extension des scalaires \`a~$\mathcal{H}(x)$, on obtient un diagramme commutatif
\[\begin{tikzcd}
\mathcal{M}(\mathcal{H}(x)) \arrow[r] \arrow[dr]& X \arrow[r] & U \arrow[r] & \E{n}{\mathcal{A}} \\
& X\ho{\mathcal{A}} \mathcal{H}(x) \arrow[r] \arrow[u] & U\ho{\mathcal{A}} \mathcal{H}(x) \arrow[r] \arrow[u] & \E{n}{\mathcal{H}(x)} \arrow[u]
\end{tikzcd}\]
dans lequel le morphisme $\mathcal{M}(\mathcal{H}(x)) \to \E{n}{\mathcal{H}(x)}$ est associ\'e au morphisme
\[ \begin{array}{ccc}
\mathcal{A}[T_{1},\dotsc,T_{n}] & \to & \mathcal{H}(x)\\
T_{i} & \mapsto & T_{i}(x)
\end{array}.\]
En particulier, le morphisme $\mathcal{M}(\mathcal{H}(x)) \to \E{n}{\mathcal{H}(x)}$ est une immersion ferm\'ee d\'efinie par le faisceau d'id\'eaux~$\mathcal{I}$ sur~$\E{n}{\mathcal{H}(x)}$ engendr\'e par $T_{1}-T_{1}(x),\dotsc,T_{n}-T_{n}(x)$. Le morphisme $\mathcal{M}(\mathcal{H}(x)) \to X\ho{\mathcal{A}} \mathcal{H}(x)$ est donc encore une immersion ferm\'ee, d\'efinie par le tir\'e en arri\`ere de~$\mathcal{I}$ sur~$X$.
La derni\`ere partie de l'\'enonc\'e est claire.
\end{proof}
\index{Extension des scalaires|)}
\section{Produits fibr\'es}\label{sec:produitsfibres}
\index{Produit!fibre@fibr\'e|(}
Dans cette section, nous supposerons que $\mathcal{A}$ est un anneau de base g\'eom\'etrique. Nous allons d\'emontrer que la cat\'egorie~$\mathcal{A}-\An$ admet des produits fibr\'es. La strat\'egie adopt\'ee est la m\^eme qu'\`a la section~\ref{sec:extensionB} pour l'extension des scalaires.
\medskip
Commen\c cons par deux lemmes g\'en\'eraux.
\begin{lemm}\label{lem:produitouverts}
Soient $\varphi\colon X \to Z$ et $\psi \colon Y \to Z$ des morphismes d'espaces $\mathcal{A}$-analytiques.
Supposons que le produit fibr\'e de~$X$ et~$Y$ au-dessus de~$Z$ existe dans la cat\'egorie~$\mathcal{A}-\An$. Notons~$P$ ce produit et $p_{X} \colon P \to X$ et $p_{Y}\colon P \to Y$ les deux projections canoniques.
\begin{enumerate}[i)]
\item Soient~$U$ un ouvert de~$X$ et~$V$ un ouvert de~$Y$. Alors $p_{X}^{-1}(U) \cap p_{Y}^{-1}(V)$ est le produit fibr\'e de~$U$ et~$V$ au-dessus de~$Z$ dans~$\mathcal{A}-\An$.
\item Soient~$F$ un ferm\'e de~$X$ d\'efini par un faisceau d'id\'eaux coh\'erent~$\mathcal{I}$ et~$G$ un ferm\'e de~$Y$ d\'efini par un faisceau d'id\'eaux coh\'erent~$\mathcal{J}$. Alors le ferm\'e analytique de~$P$ d\'efini par l'id\'eal $p_{X}^\ast \mathcal{I} + p_{Y}^\ast \mathcal{J}$ est le produit fibr\'e de~$F$ et~$G$ au-dessus de~$Z$ dans~$\mathcal{A}-\An$.
\end{enumerate}
\end{lemm}
\begin{proof}
Le point~i) d\'ecoule directement de la propri\'et\'e universelle du produit. Pour d\'emontrer le point~ii), on utilise la proposition~\ref{immersion_ferm\'e}.
\end{proof}
L'\'enonc\'e qui suit d\'ecoule du pr\'ec\'edent.
\begin{lemm}\label{lem:changementbaseimmersion}
Soit~$Z$ un espace $\mathcal{A}$-analytique. Soient~$X$ et~$Y$ des espaces $\mathcal{A}$-analytiques au-dessus de~$Z$. Supposons qu'il existe un produit fibr\'e~$P$ de~$X$ et~$Y$ au-dessus de~$Z$ dans~$\mathcal{A}-\An$. Soit $\iota \colon X' \to X$ une immersion (resp. immersion ouverte, resp. immersion ferm\'ee) d'espaces $\mathcal{A}$-analytiques au-dessus de~$Z$. Alors, il existe un produit fibr\'e~$P'$ de~$X'$ et~$Y$ au-dessus de~$Z$ dans~$\mathcal{A}-\An$ et le morphisme canonique $P' \to P$ est une immersion (resp. immersion ouverte, resp. immersion ferm\'ee).
\qed
\end{lemm}
Nous allons maintenant d\'emontrer des r\'esultats d'existence de produits.
\begin{prop}\label{prop:produitaffines}\index{Espace affine analytique!produit d'}
Soient $n,m\in \ensuremath{\mathbf{N}}$. Alors $\E{n+m}{\mathcal{A}}$ est le produit de~$\E{n}{\mathcal{A}}$ et~$\E{m}{\mathcal{A}}$ dans la cat\'egorie~$\mathcal{A}-\An$.
\end{prop}
\begin{proof}
Le r\'esultat d\'ecoule de la proposition~\ref{morphsec}.
\end{proof}
Puisque chaque espace $\mathcal{A}$-analytique poss\`ede un morphisme canonique vers~$\mathcal{M}(\mathcal{A})$, le produit dans la cat\'egorie $\mathcal{A}-\An$ co\"incide avec le produit fibr\'e au-dessus de~$\mathcal{M}(\mathcal{A})$. Nous pourrons donc utiliser le lemme~\ref{lem:produitouverts} dans le contexte des produits.
\begin{coro}\label{cor:produitmodeles}\index{Modele local analytique@Mod\`ele local analytique!produit de}
Soient $n,m\in \ensuremath{\mathbf{N}}$. Soient~$X$ un ferm\'e analytique d'un ouvert de~$\E{n}{\mathcal{A}}$ et $Y$~un ferm\'e analytique d'un ouvert de~$\E{m}{\mathcal{A}}$. Alors le produit de~$X$ et~$Y$ existe dans la cat\'egorie~$\mathcal{A}-\An$ et c'est un ferm\'e analytique d'un ouvert de~$\E{n+m}{\mathcal{A}}$.
\qed
\end{coro}
\begin{coro}\label{cor:produit}\index{Produit|textbf}\index{Espace analytique!produit d'|see{Produit}}
La cat\'egorie~$\mathcal{A}-\An$ admet des produits finis.
\end{coro}
\begin{proof}
Il suffit de montrer que le produit de deux espaces existe. Soient~$X$ et~$Y$ des espaces $\mathcal{A}$-analytiques. Si ce sont des mod\`eles locaux $\mathcal{A}$-analytiques, le r\'esultat d\'ecoule du corollaire~\ref{cor:produitmodeles}. Le cas g\'en\'eral s'en d\'eduit en recouvrant les espaces par des mod\`eles locaux et en recollant les diff\'erents produits.
\end{proof}
\begin{nota}%
\nomenclature[Ko]{$X\times_{\mathcal{A}} Y$}{produit d'espaces $\mathcal{A}$-analytiques~$X$ et~$Y$}
Soient~$X$ et~$Y$ des espaces $\mathcal{A}$-analytiques. Nous noterons $X\times_{\mathcal{A}} Y$ leur produit dans la cat\'egorie $\mathcal{A}-\An$.
\end{nota}
L'existence de produits va nous permettre de d\'emontrer l'existence de produits fibr\'es.
\begin{prop}\label{prop:produitfibreZaffine}
Soient $\varphi\colon X \to Z$ et $\psi \colon Y \to Z$ des morphismes d'espaces $\mathcal{A}$-analytiques.
Notons $p_{X} \colon X\times_{\mathcal{A}} Y\to X$ et $p_{Y} \colon X\times_{\mathcal{A}} Y \to Y$ les deux morphismes de projection. Supposons que~$Z$ soit un ferm\'e analytique d'un ouvert de~$\E{n}{\mathcal{A}}$. Notons~$T_{1},\dotsc,T_{n}$ les coordonn\'ees sur~$\E{n}{\mathcal{A}}$ et, pour tout $i\in \cn{1}{n}$, posons
\[ f_{i} := (\varphi\circ p_{X})^\sharp(T_{i}) \textrm{ et } g_{i} := (\psi\circ p_{Y})^\sharp(T_{i})
\textrm{ dans } \mathcal{O}(X\times_{\mathcal{A}} Y) .\]
Alors, le ferm\'e analytique de~$X\times_{\mathcal{A}} Y$ d\'efini par l'id\'eal $(f_{1}-g_{1},\dotsc,f_{n}-g_{n})$ est le produit fibr\'e de~$X$ et~$Y$ au-dessus de~$Z$.
\end{prop}
\begin{proof}
Cela d\'ecoule de la proposition~\ref{immersion_ferm\'e}.
\end{proof}
\begin{theo}\label{produit_fibr\'e}\index{Produit!fibre@fibr\'e|textbf}\index{Espace analytique!produit fibr\'e d'|see{Produit fibr\'e}}
Pour tous morphismes d'espaces $\mathcal{A}$-analytiques $\varphi\colon X \to Z$ et $\psi \colon Y \to Z$, le produit fibr\'e de~$X$ et~$Y$ au-dessus de~$Z$ existe et est naturellement un ferm\'e analytique de~$X\times_{\mathcal{A}} Y$.
En particulier, la cat\'egorie~$\mathcal{A}-\An$ admet des produits fibr\'es finis.
\end{theo}
\begin{proof}
Le r\'esultat s'obtient en recouvrant~$Z$ par des mod\`eles locaux, en appliquant la proposition~\ref{prop:produitfibreZaffine} au-dessus de chacun d'eux, puis en recollant les produits fibr\'es obtenus.
\end{proof}
\begin{nota}%
\nomenclature[Kproduitfibre1]{$X\times_{Z} Y$}{produit fibr\'e d'espaces $\mathcal{A}$-analytiques~$X$ et~$Y$ au-dessus d'un espace $\mathcal{A}$-analytique~$Z$}
Soient $\varphi\colon X \to Z$ et $\psi \colon Y \to Z$ des morphismes d'espaces $\mathcal{A}$-analytiques. Nous noterons $X\times_{Z} Y$ le produit fibr\'e de~$X$ et~$Y$ au-dessus de~$Z$ dans la cat\'egorie $\mathcal{A}-\An$ (en sous-entendant les morphismes~$\varphi$ et~$\psi$).
\end{nota}
En utilisant le corollaire~\ref{cor:produitmodeles}, nous obtenons le r\'esultat suivant.
\begin{coro}\label{cor:produitfibremodeles}\index{Modele local analytique@Mod\`ele local analytique!produit fibr\'e de}
Soient $n,m\in \ensuremath{\mathbf{N}}$. Soient~$X$ un ferm\'e analytique d'un ouvert de~$\E{n}{\mathcal{A}}$ et $Y$~un ferm\'e analytique d'un ouvert de~$\E{m}{\mathcal{A}}$. Soient $\varphi\colon X \to Z$ et $\psi \colon Y \to Z$ des morphismes d'espaces $\mathcal{A}$-analytiques. Alors le produit fibr\'e $X\times_{Z}Y$ est un ferm\'e analytique d'un ouvert de~$\E{n+m}{\mathcal{A}}$.
\qed
\end{coro}
En utilisant les propri\'et\'es universelles, on v\'erifie que le produit fibr\'e commute \`a l'extension des scalaires.
\begin{lemm}\label{lem:produitifbreextensionscalaires}\index{Extension des scalaires}
Soient $\varphi\colon X \to Z$ et $\psi \colon Y \to Z$ des morphismes d'espaces $\mathcal{A}$-analytiques.
Soit $f\colon \mathcal{A} \to \mathcal{B}$ un morphisme d'anneaux de Banach born\'e, o\`u $\mathcal{B}$~est un anneau de base g\'eom\'etrique. Alors, on a un isomorphisme canonique
\[(X\times_{Z}Y)\ho{\mathcal{A}} \mathcal{B} \simeq (X\ho{\mathcal{A}} \mathcal{B})\times_{Z\ho{\mathcal{A}} \mathcal{B}}(Y\ho{\mathcal{A}} \mathcal{B}).\]
\qed
\end{lemm}
En utilisant l'extension des scalaires,
on peut obtenir un \'enonc\'e plus g\'en\'eral d'existence de produits fibr\'es. Il se d\'emontre directement \`a l'aide des r\'esultats d\'ej\`a obtenus.
\begin{prop}\label{prop:produitfibreAnA}\index{Extension des scalaires}
Soit~$\varphi \colon X\to Z$ un morphisme d'espaces~$\mathcal{A}$-analytiques. Soit $f\colon \mathcal{A} \to \mathcal{B}$ un morphisme d'anneaux de Banach born\'e, o\`u $\mathcal{B}$~est un anneau de base g\'eom\'etrique.
Soient~$Y$ un espace~$\mathcal{B}$-analytique et $\psi \colon Y\to Z$ un morphisme d'espaces analytiques au-dessus de~$f$. Alors l'espace $\mathcal{B}$-analytique $(X\ho{\mathcal{A}} \mathcal{B})\times_{Z\ho{\mathcal{A}} \mathcal{B}} Y$ repr\'esente le foncteur qui \`a tout espace analytique~$T$ au-dessus de~$\mathcal{B}$ associe l'ensemble des diagrammes commutatifs d'espaces analytiques au-dessus de~$\mathcal{A}$ de la forme
\[\begin{tikzcd}
T\ar[r]\ar[d]&X\ar[d]\\
Y\ar[r]&Z
\end{tikzcd}.\]
\qed
\end{prop}
Dans le cadre de la proposition pr\'ec\'edente, nous adopterons parfois la notation simplifi\'ee
\[ X \times_{Z} Y := (X\ho{\mathcal{A}} \mathcal{B})\times_{Z\ho{\mathcal{A}} \mathcal{B}} Y.\]
\nomenclature[Kproduitfibre2]{$X\times_{Z} Y$}{produit fibr\'e d'un espace $\mathcal{A}$-analytique~$X$ et d'un espace $\mathcal{B}$-analytique~$Y$ au-dessus d'un espace $\mathcal{A}$-analytique~$Z$}
\medbreak
D\'efinissons maintenant les morphismes diagonaux.
\begin{defi}\label{def:diagonale}\index{Morphisme analytique!diagonal}%
\nomenclature[Kqb]{$\Delta_{X/Y}$}{pour un espace $\mathcal{A}$-analytique~$X$ au-dessus d'un espace $\mathcal{A}$-analytique~$Y$, morphisme diagonal $X\to X\times_{Y} X$}%
\nomenclature[Kqa]{$\Delta_{X}$}{pour un espace $\mathcal{A}$-analytique~$X$, morphisme diagonal $X\to X\times_{\mathcal{A}} X$}%
Soient $X$, $Y$ des espaces $\mathcal{A}$-analytiques et $\varphi \colon X\to Y$ un morphisme. On note $p_{1} \colon X\times_{Y} X \to X$ et $p_{2} \colon X\times_{Y} X \to X$ les deux projections canoniques. On appelle \emph{morphisme diagonal} l'unique morphisme $\Delta_{X/Y} \colon X\to X\times_{Y} X$ qui fait commuter le diagramme
\[\begin{tikzcd}
X \arrow[rd, "\Delta_{X/Y}"] \arrow[rrd, bend left, "\mathrm{id}_{X}"] \arrow[rdd, bend right, "\mathrm{id}_{X}"'] & &\\
&X\times_{Y} X \arrow[r, "p_{2}"] \arrow[d, "p_{1}"']& X \arrow[d, "\varphi"]\\
& X \arrow[r, "\varphi"]& Y
\end{tikzcd}.\]
Dans le cas o\`u $Y = \mathcal{M}(\mathcal{A})$ et~$\varphi$ est le morphisme structural, on pose $\Delta_{X} := \Delta_{X/\mathcal{M}(\mathcal{A})}$.
\end{defi}
\begin{prop}\label{prop:immersiondiagonale}\index{Immersion}\index{Morphisme analytique!diagonal}
Soient $X$, $Y$ des espaces $\mathcal{A}$-analytiques et $\varphi \colon X\to Y$ un morphisme. Alors le morphisme diagonal $\Delta_{X/Y} \colon X\to X\times_{Y} X$ est une immersion.
\end{prop}
\begin{proof}
Posons $\Delta := \Delta_{X/Y}$. D'apr\`es la proposition~\ref{crit\`ere_local_immersion}, il suffit de montrer que tout point~$z$ de~$X\times_{Y} X$ poss\`ede un voisinage~$W$ tel que le morphisme $\Delta^{-1}(W)\to W$ induit par~$\Delta$ soit une immersion ferm\'ee. La question est donc locale sur~$X\times_{Y} X$. D'apr\`es le lemme~\ref{lem:produitouverts}, on peut donc restreindre~$X$ et supposer que c'est un ferm\'e analytique d'un ouvert de~$\E{n}{\mathcal{A}}$. Notons $T_{1},\dotsc,T_{n}$ les coordonn\'ees sur l'espace~$\E{n}{\mathcal{A}}$ et identiquement leur restriction \`a~$X$.
Soit $\mathcal{I}$ le faisceau d'id\'eaux de~$X \times_{Y} X$ engendr\'e par les sections $p_{1}^\ast T_{j} - p_{2}^\ast T_{j}$ avec $j \in\cn{1}{n}$, $F$ le ferm\'e analytique de~$X\times_{Y} X$ qu'il d\'efinit et $\iota \colon F \to X \times_{Y} X$ l'immersion ferm\'ee associ\'ee. Pour tout $j\in \cn{1}{n}$, on a
\begin{align*}
\Delta^\ast (p_{1}^\ast T_{j} - p_{2}^\ast T_{j}) &= \Delta^\ast p_{1}^\ast T_{j} - \Delta^\ast p_{2}^\ast T_{j}\\
&= T_{j} - T_{j}\\
&=0
\end{align*}
donc, d'apr\`es la proposition~\ref{immersion_ferm\'e}, il existe un morphisme $\psi \colon X \to F$ tel que $\Delta= \iota \circ \psi$.
Montrons que~$\psi$ est un isomorphisme, d'inverse $p_{1} \circ \iota$. Cela entra\^inera que~$\Delta$ est une immersion ferm\'ee, comme d\'esir\'e. Remarquons, tout d'abord, que l'on a
\[p_{1} \circ \iota \circ \psi = p_{1} \circ \Delta = \mathrm{id}_{X}\]
par d\'efinition.
Calculons maintenant $\psi \circ p_{1} \circ \iota$. D'apr\`es le corollaire~\ref{cor:produitfibremodeles}, $X\times_{Y} X$ est un ferm\'e analytique d'un ouvert de~$\E{2n}{\mathcal{A}}$. Les fonctions $p_{1}^\ast T_{1},\dotsc, p_{1}^\ast T_{n},p_{2}^\ast T_{1},\dotsc, p_{2}^\ast T_{n}$ forment un syst\`eme de coordonn\'ees sur~$\E{2n}{\mathcal{A}}$. Pour tout $i\in \cn{1}{n}$, on a
\[ (\psi \circ p_{1} \circ \iota)^\ast \iota^\ast p_{1}^\ast T_{i} = \iota^\ast p_{1}^\ast \psi^\ast \iota^\ast p_{1}^\ast T_{i} = \iota^\ast p_{1}^\ast T_{i},\]
et, pour tout $j\in \cn{1}{n}$, on a
\[ (\psi \circ p_{1} \circ \iota)^\ast \iota^\ast p_{2}^\ast T_{j} = \iota^\ast p_{1}^\ast \psi^\ast \iota^\ast p_{2}^\ast T_{j} = \iota^\ast p_{1}^\ast T_{j} = \iota^\ast p_{2}^\ast T_{j},\]
par d\'efinition de~$F$. On d\'eduit alors de la proposition~\ref{morphsec} que $\psi \circ p_{1} \circ \iota = \mathrm{id}_{F}$.
\end{proof}
\index{Produit!fibre@fibr\'e|)}
\section{Parties spectralement convexes}\label{sec:paspconv}
Dans cette section, nous allons \'etudier le comportement des parties spectralement convexes par extension des scalaires et produit fibr\'e.
\medbreak
Commen\c cons par l'extension des scalaires. Soient~$\mathcal{B}$ un anneau de Banach et $f \colon \mathcal{A} \to \mathcal{B}$ un morphisme born\'e. On suppose que~$\mathcal{A}$ et~$\mathcal{B}$ sont des anneaux de base g\'eom\'etriques.
\begin{nota}\index{Norme!tensorielle}\index{Produit tensoriel compl\'et\'e}%
\nomenclature[Bka]{$\nm_{\mathcal{M} \otimes_{\mathcal{A}} \mathcal{N}}$}{pour $\mathcal{M},\mathcal{N}$ des $\mathcal{A}$-alg\`ebres de Banach, semi-norme tensorielle sur $ \mathcal{M} \otimes_{\mathcal{A}} \mathcal{N}$}%
\nomenclature[Bkb]{$\mathcal{M} \ho{\mathcal{A}} \mathcal{N}$}{s\'epar\'e compl\'et\'e de~$\mathcal{M} \otimes_{\mathcal{A}} \mathcal{N}$ pour la semi-norme~$\nm_{\mathcal{M} \otimes_{\mathcal{A}} \mathcal{N}}$}%
\nomenclature[Bkc]{$\mathcal{M} \hosp{\mathcal{A}} \mathcal{N}$}{s\'epar\'e compl\'et\'e de~$\mathcal{M} \otimes_{\mathcal{A}} \mathcal{N}$ pour la semi-norme spectrale associ\'ee \`a~$\nm_{\mathcal{M} \otimes_{\mathcal{A}} \mathcal{N}}$}
Soient~$(\mathcal{M},\nm_{\mathcal{M}})$ et~$(\mathcal{N},\nm_{\mathcal{N}})$ deux $\mathcal{A}$-alg\`ebres de Banach. On d\'efinit une semi-norme~$\nm_{\mathcal{M} \otimes_{\mathcal{A}} \mathcal{N}}$ sur $\mathcal{M} \otimes_{\mathcal{A}} \mathcal{N}$ par la formule suivante~: pour tout $x \in \mathcal{M} \otimes_{\mathcal{A}} \mathcal{N}$,
\[ \|x\|_{\mathcal{M} \otimes_{\mathcal{A}} \mathcal{N}} = \inf\big(\big\{ \max_{i\in I}(\|m_{i}\|_{\mathcal{M}}\, \|n_{i}\|_{\mathcal{N}}), \ x = \sum_{i\in I} m_{i}\otimes n_{i}\big\}\big).\]
On note $\mathcal{M} \ho{\mathcal{A}} \mathcal{N}$ le s\'epar\'e compl\'et\'e de~$\mathcal{M} \otimes_{\mathcal{A}} \mathcal{N}$ pour la semi-norme~$\nm_{\mathcal{M} \otimes_{\mathcal{A}} \mathcal{N}}$ et $\mathcal{M} \hosp{\mathcal{A}} \mathcal{N}$ le s\'epar\'e compl\'et\'e pour la semi-norme spectrale associ\'ee.
\end{nota}
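\`A titre d'illustration de cette d\'efinition, remarquons que, pour un tenseur simple $x = m\otimes n$, avec $m\in\mathcal{M}$ et $n\in\mathcal{N}$, la d\'ecomposition \`a un seul terme fournit imm\'ediatement la majoration
\[ \|m\otimes n\|_{\mathcal{M} \otimes_{\mathcal{A}} \mathcal{N}} \le \|m\|_{\mathcal{M}}\, \|n\|_{\mathcal{N}}.\]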
Rappelons la d\'efinition d'application propre entre espaces topologiques (\cf~\cite[I, \S 10]{BourbakiTG14}).
\index{Application!propre|(}\index{Morphisme analytique!propre|(}
\begin{defi}\index{Application!propre|textbf}\index{Morphisme analytique!propre|textbf}\index{Morphisme!topologique|see{Application}}
Une application $f \colon T\to T'$ entre espaces topologiques est dite \emph{propre} si elle est continue et si, pour tout espace topologique~$T''$, l'application $f \times \mathrm{id}_{T''} \colon T \times T'' \to T' \times T''$ est ferm\'ee.
Un morphisme d'espaces $\mathcal{A}$-analytiques est dit \emph{propre}
si l'application induite entre les espaces topologiques sous-jacents est propre au sens pr\'ec\'edent.
\end{defi}
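Signalons, \`a titre d'exemple classique, que toute application continue d'un espace topologique quasi-compact dans un espace topologique s\'epar\'e est propre (\cf~\cite[I, \S 10]{BourbakiTG14}).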
\begin{rema}\label{rem:proprelocalaubut}
La propret\'e est une notion locale au but, \cf~\cite[I, \S 10, \no 1, proposition~3 b)]{BourbakiTG14}.
\end{rema}
\'Enon\c cons quelques propri\'et\'es des applications propres vis-\`a-vis de la composition.
\begin{lemm}[\protect{\cite[I, \S 10, \no 1, proposition~5]{BourbakiTG14}}]\label{lem:compositionpropre}
Soient $f\colon T \to T'$ et $g\colon T' \to T''$ des applications continues entre espaces topologiques.
\begin{enumerate}[i)]
\item Si $f$ et $g$ sont propres, alors $g\circ f$ est propre.
\item Si $g\circ f$ est propre et $f$ est surjective, alors $g$ est propre.
\item Si $g\circ f$ est propre et $g$ est injective, alors $f$ est propre.
\item Si $g\circ f$ est propre et $T'$ est s\'epar\'e, alors $f$ est propre.
\end{enumerate}
\end{lemm}
La propret\'e peut essentiellement s'exprimer en termes de compacit\'e. Nous disposons en particulier d'un crit\`ere utile pour d\'emontrer la propret\'e.
\begin{lemm}\label{lem:criterepropre}
Soit $f \colon T\to T'$ une application continue entre espaces topologiques.
\begin{enumerate}[i)]
\item Si $f$ est propre, alors pour toute partie quasi-compacte~$K$ de~$T'$, l'image r\'eciproque $f^{-1}(K)$ est quasi-compacte.
\item Si tout point~$t$ de~$T'$ poss\`ede un voisinage compact~$V$ dont l'image r\'eciproque~$f^{-1}(V)$ est compacte, alors $f$~est propre.
\end{enumerate}
\end{lemm}
\begin{proof}
i) d\'ecoule de \cite[I, \S 10, \no 3, proposition~6]{BourbakiTG14}.
ii) d\'ecoule de la remarque~\ref{rem:proprelocalaubut} et de \cite[I, \S 10, \no 2, corollaire~2 du lemme~2]{BourbakiTG14}.
\end{proof}
\index{Application!propre|)}
Rappelons que, pour tout $n\in \ensuremath{\mathbf{N}}$, nous avons d\'efini un morphisme $\tilde{f}_n \colon \E{n}{\mathcal{B}} \to \E{n}{\mathcal{A}}$ dans l'exemple~\ref{ex:tildefn}.
\begin{prop}\label{extension_scalaire_spectral}\index{Partie!spectralement convexe}\index{Produit tensoriel compl\'et\'e}
Soit $n\in \ensuremath{\mathbf{N}}$. Le morphisme~$\tilde{f}_{n}$ est propre.
Soit~$V$ une partie compacte spectralement convexe de~$\E{n}{\mathcal{A}}$. Alors $\tilde{f}_{n}^{-1}(V)$ est compacte et spectralement convexe et le morphisme canonique
\[ \mathcal{B}(V)\hosp{\mathcal{A}} \mathcal{B} \xrightarrow[]{\sim} \mathcal{B}(\tilde{f}_{n}^{-1}(V))\]
est un isomorphisme isom\'etrique.
\end{prop}
\begin{proof}
Notons~$T_{1},\dotsc,T_{n}$ les coordonn\'ees sur~$\E{n}{\mathcal{A}}$ et~$\E{n}{\mathcal{B}}$. Pour tout $r \in \ensuremath{\mathbf{R}}_{>0}$, notons $\overline{D}_{\mathcal{M}(\mathcal{A})}(r)$ et $\overline{D}_{\mathcal{M}(\mathcal{B})}(r)$ les polydisques ferm\'es de polyrayon~$(r,\dotsc,r)$ au-dessus de~$\mathcal{M}(\mathcal{A})$ et~$\mathcal{M}(\mathcal{B})$ respectivement.
Soit~$K$ une partie compacte de~$\E{n}{\mathcal{A}}$. Il existe $r\in \ensuremath{\mathbf{R}}_{>0}$ tel que $K \subset \overline{D}_{\mathcal{M}(\mathcal{A})}(r)$. Or, on a
\[ \tilde{f}_{n}^{-1}(\overline{D}_{\mathcal{M}(\mathcal{A})}(r)) = \overline{D}_{\mathcal{M}(\mathcal{B})}(r), \]
qui est une partie compacte de~$\E{n}{\mathcal{B}}$. Puisque~$\tilde f_{n}$ est continue, $\tilde{f}_{n}^{-1}(K)$ est ferm\'ee dans un compact de~$\E{n}{\mathcal{B}}$, et donc compacte. On d\'eduit alors du lemme~\ref{lem:criterepropre} que~$\tilde f_{n}$ est propre.
Soit~$V$ une partie compacte spectralement convexe de~$\E{n}{\mathcal{A}}$. Le morphisme $\mathcal{A}[T_{1},\dotsc,T_{n}] \to \mathcal{B}[T_{1},\dotsc,T_{n}]$ induit un morphisme $\mathcal{K}(V) \to \mathcal{K}(\tilde{f}_{n}^{-1}(V))$ et un morphisme born\'e d'anneaux de Banach $\mathcal{B}(V) \to \mathcal{B}(\tilde{f}_{n}^{-1}(V))$. Ces morphismes s'ins\'erent dans un diagramme commutatif
\[\begin{tikzcd}
\mathcal{A}[T_{1},\dotsc,T_{n}] \arrow[r] \arrow[d] & \mathcal{B}[T_{1},\dotsc,T_{n}] \arrow[d]\\
\mathcal{B}(V) \arrow[r] & \mathcal{B}(\tilde{f}_{n}^{-1}(V))\\
\end{tikzcd}.\]
Puisque~$V$ est spectralement convexe, l'image de l'application $\mathcal{M}(\mathcal{B}(V)) \to \E{n}{\mathcal{A}}$ est \'egale \`a~$V$. On en d\'eduit que l'image de l'application $\mathcal{M}(\mathcal{B}(\tilde{f}_{n}^{-1}(V))) \to \E{n}{\mathcal{B}}$ est contenue dans~$\tilde{f}_{n}^{-1}(V)$, et donc que $\tilde f_{n}^{-1}(V)$ est spectralement convexe, d'apr\`es la proposition~\ref{crit_spectral}.
Passons maintenant \`a la d\'emonstration de l'isomorphisme. Le morphisme de $\mathcal{A}$-alg\`ebres $\mathcal{A}[T_1,\ldots,T_n]\to\mathcal{B}(V)$ induit un morphisme de $\mathcal{B}$-alg\`ebres $\mathcal{B}[T_1,\ldots,T_n]\to \mathcal{B}(V) \hosp{\mathcal{A}} \mathcal{B}$. Consid\'erons le morphisme induit $\mathcal{M}( \mathcal{B}(V) \hosp{\mathcal{A}} \mathcal{B}) \to \E{n}{\mathcal{B}}$.
D'apr\`es la proposition \ref{crit_spectral_pu}, il suffit de d\'emontrer les deux propri\'et\'es suivantes~:
\begin{enumerate}[i)]
\item l'image du morphisme $\mathcal{M}(\mathcal{B}(V) \hosp{\mathcal{A}} \mathcal{B})\to\E{n}{\mathcal{B}}$ est contenue dans~$\tilde{f}_{n}^{-1}(V)$ ;
\item pour toute~$\mathcal{B}$-alg\`ebre de Banach uniforme~$\mathcal{C}$ et tout morphisme de $\mathcal{B}$-alg\`ebres $\mathcal{B}[T_1,\dotsc,T_n]\to\mathcal{C}$ tel que l'image du morphisme induit $\mathcal{M}(\mathcal{C})\to\E{n}{\mathcal{B}}$ soit contenue dans~$\tilde{f}_{n}^{-1}(V)$, le morphisme~$\mathcal{B}[T_1,\dotsc,T_n]\to\mathcal{C}$ se factorise de mani\`ere unique par $\mathcal{B}(V) \hosp{\mathcal{A}} \mathcal{B}$.
\end{enumerate}
Pour d\'emontrer~i), il suffit de v\'erifier que l'image du morphisme $\mathcal{M}(\mathcal{B}(V) \hosp{\mathcal{A}} \mathcal{B})\to\E{n}{\mathcal{A}}$ est contenue dans~$V$. Or le morphisme de $\mathcal{A}$-alg\`ebres $\mathcal{A}[T_1,\ldots,T_n]\to\mathcal{B}(V) \hosp{\mathcal{A}} \mathcal{B}$
se factorise par~$\mathcal{B}(V)$. Puisque~$V$ est spectralement convexe, le r\'esultat s'ensuit.
D\'emontrons maintenant~ii). Soient~$\mathcal{C}$ une~$\mathcal{B}$-alg\`ebre de Banach uniforme et $\mathcal{B}[T_1,\ldots,T_n]\to\mathcal{C}$ un morphisme de~$\mathcal{B}$-alg\`ebres tel que le morphisme induit $\mathcal{M}(\mathcal{C})\to\E{n}{\mathcal{B}}$ se factorise par~$\tilde{f}_{n}^{-1}(V)$.
Le morphisme~$\mathcal{M}(\mathcal{C})\to\E{n}{\mathcal{A}}$ se factorise alors par~$V$. Puisque~$V$ est spectralement convexe, la proposition~\ref{crit_spectral_pu} assure que le morphisme $\mathcal{A}[T_1,\ldots,T_n]\to\mathcal{C}$ se factorise par~$\mathcal{B}(V)$, et ce de fa\c con unique. En tensorisant par~$\mathcal{B}$ au-dessus de~$\mathcal{A}$, on en d\'eduit une factorisation de~$\mathcal{B}[T_1,\ldots,T_n]\to\mathcal{C}$ par~$\mathcal{B}(V)\otimes_{\mathcal{A}}\mathcal{B}$, elle aussi unique. Puisque l'alg\`ebre~$\mathcal{C}$ est compl\`ete et uniforme, le morphisme $\mathcal{B}(V)\otimes_{\mathcal{A}}\mathcal{B} \to \mathcal{C}$ se factorise de mani\`ere unique par $\mathcal{B}(V) \hosp{\mathcal{A}} \mathcal{B}$, ce qui cl\^ot la d\'emonstration.
\end{proof}
\begin{coro}\label{stabilit\'e_propre_extension_scalaire}\index{Extension des scalaires}
Pour tout espace $\mathcal{A}$-analytique~$X$, le morphisme d'extension des scalaires $X \ho{\mathcal{A}} \mathcal{B} \to X$ est propre.
\end{coro}
\begin{proof}
D'apr\`es le lemme~\ref{lem:criterepropre}, il suffit de montrer que tout point~$x$ de~$X$ poss\`ede un voisinage compact dont l'image r\'eciproque est compacte.
Soit~$x\in X$. Quitte \`a remplacer~$X$ par un voisinage de~$x$, on peut supposer que~$X$ est un sous-espace analytique d'un ouvert~$U$ de l'espace affine~$\E{n}{\mathcal{A}}$. La proposition~\ref{extension_scalaire_spectral} assure que le morphisme $\tilde{f}_{n} \colon \E{n}{\mathcal{B}}\to\E{n}{\mathcal{A}}$ est propre.
D'apr\`es le lemme~\ref{lem:extensionBouverts}, on a $U \ho{\mathcal{A}} \mathcal{B} = \tilde{f}_{n}^{-1}(U)$ et $X \ho{\mathcal{A}} \mathcal{B} = \tilde{f}_{n}^{-1}(X)$. Le r\'esultat s'en d\'eduit.
\end{proof}
\index{Morphisme analytique!propre|)}
On peut, comme de coutume, exprimer la fibre d'un morphisme au-dessus d'un point \`a l'aide d'un produit fibr\'e.
\begin{prop}\label{preimage}\index{Morphisme analytique!fibre d'un}\index{Morphisme analytique!associe a un point@associ\'e \`a un point}\index{Produit!fibre@fibr\'e}
Soient~$\varphi\colon X\to Y$ un morphisme d'espaces~$\mathcal{A}$-analytiques et~$y$ un point de~$Y$. Consid\'erons le morphisme $\lambda_{y} \colon \mathcal{M}(\mathcal{H}(y)) \to Y$ de l'exemple~\ref{ex:morphismex} et le produit fibr\'e
\[X\times_Y \mathcal{M}(\mathcal{H}(y)) := (X \ho{\mathcal{A}} \mathcal{H}(y)) \times_{Y \ho{\mathcal{A}} \mathcal{H}(y)} \mathcal{M}(\mathcal{H}(y))\]
de la proposition~\ref{prop:produitfibreAnA}. Alors, la projection $p_{X} \colon X\times_Y \mathcal{M}(\mathcal{H}(y)) \to X$ induit un hom\'eomorphisme
\[X\times_Y \mathcal{M}(\mathcal{H}(y)) \overset{\sim}{\longrightarrow} \varphi^{-1}(y).\]
\end{prop}
\begin{proof}
Apr\`es extension des scalaires de~$\mathcal{A}$ \`a~$\mathcal{H}(y)$, on obtient le morphisme
\[\varphi_{\mathcal{H}(y)} := \varphi \ho{\mathcal{A}} \mathcal{H}(y) \colon X \ho{\mathcal{A}} \mathcal{H}(y) \to Y \ho{\mathcal{A}} \mathcal{H}(y).\]
D'apr\`es le lemme~\ref{lem:morphismexHx}, le morphisme $\mathcal{M}(\mathcal{H}(y)) \to Y \ho{\mathcal{A}} \mathcal{H}(y)$ induit par~$\lambda_{y}$ est une immersion ferm\'ee et, en notant~$y'$ l'unique point de son image, on a un isomorphisme canonique $\mathcal{H}(y) \xrightarrow[]{\sim}\mathcal{H}(y')$. On d\'eduit alors du lemme~\ref{lem:produitouverts} que le morphisme
\[(X \ho{\mathcal{A}} \mathcal{H}(y)) \times_{Y \ho{\mathcal{A}} \mathcal{H}(y)} \mathcal{M}(\mathcal{H}(y)) \to \varphi_{\mathcal{H}(y)}^{-1}(y')\]
induit par la premi\`ere projection est un hom\'eomorphisme. Il suffit donc de montrer que l'application $\varphi_{\mathcal{H}(y)}^{-1}(y') \to \varphi^{-1}(y)$ induite par le morphisme $\pi_{\mathcal{H}(y)} \colon X \ho{\mathcal{A}} \mathcal{H}(y) \to X$ est un hom\'eomorphisme. Le morphisme $\pi_{\mathcal{H}(y)}$ est continu et propre,
d'apr\`es le corollaire~\ref{stabilit\'e_propre_extension_scalaire}. Par cons\'equent, il suffit de montrer que l'application $\varphi_{\mathcal{H}(y)}^{-1}(y') \to \varphi^{-1}(y)$ est bijective.
Quitte \`a restreindre~$Y$, on peut supposer qu'il s'envoie par une immersion dans un espace affine analytique~$\E{m}{\mathcal{A}}$, et m\^eme qu'il co\"incide avec cet espace affine. Notons $T_{1},\dotsc,T_{m}$ les coordonn\'ees sur~$\E{m}{\mathcal{A}}$.
On peut \'egalement raisonner localement sur~$X$. Quitte \`a restreindre~$X$, on peut supposer que c'est un ferm\'e analytique
d'un ouvert~$U$ d'un espace affine~$\E{n}{\mathcal{A}}$.
On peut \'egalement supposer que le morphisme~$\varphi$ s'\'etend en un morphisme $\tilde \varphi \colon U \to Y = \E{m}{\mathcal{A}}$.
Soit~$x$ un point de~$\varphi^{-1}(y)$ et soit~$V$ un voisinage compact spectralement convexe de~$x$. Soit $z \in V\cap \varphi^{-1}(y)$. Le morphisme
\[\mathcal{A}[T_1,\ldots,T_{n}]\to\mathcal{B}(V) \to \mathcal{H}(z)\]
se factorise par~$\mathcal{H}(y)$ et induit donc un morphisme
\[\sigma_{z} \colon \mathcal{B}(V) \hosp{\mathcal{A}} \mathcal{H}(y) \to \mathcal{H}(z),\]
par la propri\'et\'e universelle du produit tensoriel. Or, d'apr\`es la proposition~\ref{extension_scalaire_spectral}, $\tilde f_{n}^{-1}(V)$ est spectralement convexe et on a un isomorphisme $\mathcal{B}(\tilde f_{n}^{-1}(V)) \simeq \mathcal{B}(V) \hosp{\mathcal{A}} \mathcal{H}(y)$. Par cons\'equent, le morphisme~$\sigma_{z}$ d\'efinit un point~$\sigma(z)$ de~$\tilde f_{n}^{-1}(V)$. On v\'erifie qu'il appartient \`a~$\varphi_{\mathcal{H}(y)}^{-1}(y')$ et que $\pi_{\mathcal{H}(y)}(\sigma(z)) = z$. On v\'erifie \'egalement que, pour tout point~$t$ de $\tilde f_{n}^{-1}(V) \cap \varphi_{\mathcal{H}(y)}^{-1}(y')$, on a $\sigma(\pi_{\mathcal{H}(y)}(t)) = t$. Ceci conclut la preuve.
\end{proof}
Passons maintenant \`a l'\'etude du produit fibr\'e.
\begin{prop}\label{produit_spectralement_convexe}\index{Partie!spectralement convexe}\index{Produit!fibre@fibr\'e}\index{Produit tensoriel compl\'et\'e}
Soient $n,m,N\in\ensuremath{\mathbf{N}}$. Soit~$U'$ (resp.~$V'$, resp.~$W'$) un ouvert de~$\E{n}{\mathcal{A}}$ (resp.~$\E{m}{\mathcal{A}}$, resp.~$\E{N}{\mathcal{A}}$). Soit~$U$ (resp.~$V$, resp.~$W$) un ensemble compact spectralement convexe de~$U'$ (resp.~$V'$, resp.~$W'$). Soient $\varphi \colon U' \to W'$ et $\psi \colon V'\to W'$ des morphismes d'espaces $\mathcal{A}$-analytiques tels que $\varphi(U) \subset W$ et $\psi(V) \subset W$. On a naturellement
\[ U'\times_{W'} V' \subset U'\times_{\mathcal{A}} V' \subset \E{n}{\mathcal{A}}\times_{\mathcal{A}} \E{m}{\mathcal{A}} \simeq \E{n+m}{\mathcal{A}}.\]
Notons $p_{U'} \colon U' \times_{W'} V' \to U'$ et $p_{V'} \colon U' \times_{W'} V' \to V'$ les deux projections.
Alors, l'ensemble
\[ U \times_{W} V := p_{U'}^{-1}(U) \cap p_{V'}^{-1}(V)\]
est une partie compacte spectralement convexe de~$\E{n+m}{\mathcal{A}}$ et le morphisme canonique
\[\mathcal{B}(U) \hosp{\mathcal{B}(W)} \mathcal{B}(V) \xrightarrow[]{\sim} \mathcal{B}(U\times_{W} V)\]
est un isomorphisme isom\'etrique.
\end{prop}
\begin{proof}
La suite d'inclusions suivie d'un isomorphisme provient du th\'eor\`eme~\ref{produit_fibr\'e}, du lemme~\ref{lem:produitouverts} et de la proposition~\ref{prop:produitaffines}.
Notons $T_{1},\dotsc,T_{n}$ (resp. $S_{1},\dotsc,S_{m}$, resp. $Z_{1},\dotsc,Z_{N}$) les coordonn\'ees sur~$\E{n}{\mathcal{A}}$ (resp.~$\E{m}{\mathcal{A}}$, resp.~$\E{N}{\mathcal{A}}$).
Notons $p_{\mathcal{A},U'} \colon U' \times_{\mathcal{A}} V' \to U'$ et $p_{\mathcal{A},V'} \colon U' \times_{\mathcal{A}} V' \to V'$ les deux projections. Posons
\[ U \times_{\mathcal{A}} V := p_{\mathcal{A},U'}^{-1}(U) \cap p_{\mathcal{A},V'}^{-1}(V).\]
Pour tout $i\in \cn{1}{N}$, posons
\[ f_{i} := (\varphi\circ p_{\mathcal{A},U'})^\sharp(Z_{i}) \textrm{ et } g_{i} := (\psi\circ p_{\mathcal{A},V'})^\sharp(Z_{i})
\textrm{ dans } \mathcal{O}(U'\times_{\mathcal{A}} V') .\]
D'apr\`es la proposition~\ref{prop:produitfibreZaffine}, $U'\times_{W'} V' = U'\times_{\E{N}{\mathcal{A}}} V'$ est le ferm\'e analytique de~$U'\times_{\mathcal{A}} V'$ d\'efini par l'id\'eal $(f_{1}-g_{1},\dotsc,f_{N}-g_{N})$.
Montrons, tout d'abord, que $U\times_{W} V$ est compact. Puisque c'est un ferm\'e de~$U\times_{\mathcal{A}} V$, il suffit de montrer que ce dernier l'est. Puisque~$U$ et~$V$ sont compacts, il existe $r_{1},\dotsc,r_{n},s_{1},\dotsc,s_{m}\in \ensuremath{\mathbf{R}}_{>0}$ tels que $U \subset \overline{D}_{\mathcal{M}(\mathcal{A})}(r_{1},\dotsc,r_{n})$ et~$V \subset \overline{D}_{\mathcal{M}(\mathcal{A})}(s_{1},\dotsc,s_{m})$. Alors, $U \times_{\mathcal{A}} V$ est une partie ferm\'ee de~$\E{n+m}{\mathcal{A}}$ qui est contenue dans le compact $\overline{D}_{\mathcal{M}(\mathcal{A})}(r_{1},\dotsc,r_{n},s_{1},\dotsc,s_{m})$. Elle est donc compacte.
Montrons, maintenant, que~$U\times_{W} V$ est spectralement convexe. Le morphisme $\mathcal{A}[T_1,\dotsc,T_n]\to \mathcal{B}(U\times_{W} V)$ s'\'etend en un morphisme born\'e d'alg\`ebres de Banach~$\mathcal{B}(U)\to\mathcal{B}(U\times_{W} V)$. Par cons\'equent, l'image du morphisme $\mathcal{M}(\mathcal{B}(U\times_{W} V)) \to \E{n+m}{\mathcal{A}}$ induit par $\mathcal{A}[T_1,\dotsc,T_n,S_{1},\dotsc,S_{m}]\to \mathcal{B}(U\times_{W} V)$ est contenue dans~$p_{U'}^{-1}(U)$. En raisonnant de fa\c con similaire, on montre que cette image est \'egalement contenue dans~$p_{V'}^{-1}(V)$, et donc dans~$U\times_{\mathcal{A}} V$. En outre, pour tout $i\in \cn{1}{N}$, on a $f_{i} - g_{i} = 0$ sur $U'\times_{W'} V'$ et donc dans $\mathcal{B}(U\times_{W} V)$. On en d\'eduit que l'image du morphisme $\mathcal{M}(\mathcal{B}(U\times_{W} V)) \to \E{n+m}{\mathcal{A}}$ est contenue dans $U\times_{W} V$, ce qui entra\^ine que~$U\times_{W} V$ est spectralement convexe, d'apr\`es la proposition~\ref{spectralement_conv}.
Passons finalement \`a la d\'emonstration de l'isomorphisme. \`A partir des morphismes $\mathcal{A}[T_1,\dotsc,T_n]\to \mathcal{B}(U)$ et $\mathcal{A}[S_1,\dotsc,S_m]\to \mathcal{B}(V)$, on construit un morphisme
\[ \mathcal{A}[T_1,\dotsc,T_n,S_{1},\dotsc,S_{m}] \to \mathcal{B}(U) \hosp{\mathcal{B}(W)} \mathcal{B}(V).\]
Consid\'erons le morphisme induit $\mathcal{M}(\mathcal{B}(U) \hosp{\mathcal{B}(W)} \mathcal{B}(V)) \to \E{n+m}{\mathcal{A}}$. D'apr\`es la proposition~\ref{crit_spectral_pu}, il suffit de d\'emontrer les deux propri\'et\'es suivantes~:
\begin{enumerate}[i)]
\item l'image du morphisme $\mathcal{M}(\mathcal{B}(U) \hosp{\mathcal{B}(W)} \mathcal{B}(V))\to\E{n+m}{\mathcal{A}}$ est contenue dans~$U \times_{W}V$ ;
\item pour toute~$\mathcal{A}$-alg\`ebre de Banach uniforme~$\mathcal{C}$ et tout morphisme de $\mathcal{A}$-alg\`ebres $\mathcal{A}[T_1,\dotsc,T_{n},S_{1},\dotsc,S_{m}]\to\mathcal{C}$ tel que l'image du morphisme induit $\mathcal{M}(\mathcal{C})\to\E{n+m}{\mathcal{A}}$ soit contenue dans~$U \times_{W} V$, le morphisme~$\mathcal{A}[T_1,\dotsc,T_{n},S_{1},\dotsc,S_{m}]\to\mathcal{C}$ se factorise de mani\`ere unique par $\mathcal{B}(U) \hosp{\mathcal{B}(W)} \mathcal{B}(V)$.
\end{enumerate}
On y parvient en raisonnant d'une fa\c con similaire \`a celle adopt\'ee dans la d\'emonstration de la proposition~\ref{extension_scalaire_spectral}.
\end{proof}
\section{Morphismes s\'epar\'es}\label{sec:morphismessepares}
\index{Morphisme analytique!separe@s\'epar\'e|(}
Dans cette section, nous g\'en\'eralisons la notion classique de morphisme s\'epar\'e.
Commen\c cons par un r\'esultat sur les espaces topologiques sous-jacents aux produits fibr\'es d'espaces analytiques.
\begin{prop}\label{produit_fibr\'e_propre}\index{Produit!fibre@fibr\'e}\index{Morphisme analytique!propre}\index{Application!propre}
Soient $\varphi \colon X\to Z$ et $\psi \colon Y\to Z$ des morphismes d'espaces $\mathcal{A}$-analytiques. Notons~$|X|$, $|Y|$ et~$|Z|$ les espaces topologiques sous-jacents \`a~$X$, $Y$ et~$Z$ respectivement. L'application naturelle
\[|X\times_Z Y|\to|X|\times_{|Z|} |Y|\]
est continue, propre et surjective.
\end{prop}
\begin{proof}
L'application $\Pi \colon |X\times_Z Y|\to|X|\times_{|Z|} |Y|$ est continue par construction. Montrons qu'elle est surjective. Soient $x \in X$, $y\in Y$ et $z\in Z$ tels que $z = \varphi(x) = \psi(y)$.
Quitte \`a restreindre les espaces, on peut supposer que~$X$, $Y$, et~$Z$ sont des ferm\'es analytiques d'espaces affines. Notons~$T$ l'ensemble des points de~$X\times_{Z} Y$ qui se projettent respectivement sur~$x$ et~$y$ par les premi\`ere et deuxi\`eme projections. D'apr\`es les exemples~\ref{ex:pointprorationnel} et~\ref{ex:Bpoint} et le th\'eor\`eme~\ref{thm:rationnel}, tout point~$x$ d'un espace affine est spectralement convexe et on a $\mathcal{B}(\{x\}) = \mathcal{H}(x)$. La proposition~\ref{produit_spectralement_convexe} assure donc que~$T$ est en bijection avec $\mathcal{M}(\mathcal{H}(x)\hosp{\mathcal{H}(z)} \mathcal{H}(y))$. Or $\mathcal{H}(x)\hosp{\mathcal{H}(z)} \mathcal{H}(y)$ n'est pas nulle, donc, d'apr\`es le th\'eor\`eme~\ref{th:MAnonvide}, $T$~n'est pas vide. On en d\'eduit que l'application~$\Pi$ est surjective.
Montrons finalement que l'application $\Pi \colon |X\times_Z Y| \to |X|\times_{|Z|} |Y|$ est propre. D'apr\`es le lemme~\ref{lem:criterepropre}, il suffit de montrer que tout point de $|X|\times_{|Z|} |Y|$ poss\`ede un voisinage compact dont l'image r\'eciproque par~$\Pi$ est compacte. Soit~$t$ un point de~$|X|\times_{|Z|} |Y|$. Il existe $x \in X$, $y\in Y$ et $z\in Z$ tels que $z = \varphi(x) = \psi(y)$ et $t$ soit l'image du point $(x,y)$ de $|X|\times |Y|$. Soit~$U$ (resp.~$V$, resp.~$W$) un voisinage de~$x$ dans~$X$ (resp.~$y$ dans~$Y$, resp.~$z$ dans~$Z$) qui est un ferm\'e analytique d'un ouvert~$U'$ (resp.~$V'$, resp.~$W'$) d'un espace affine analytique. On peut supposer que $\varphi(U) \subset W$ et que~$\varphi$ s'\'etend en un morphisme $\tilde{\varphi} \colon U' \to W'$. De m\^eme, on peut supposer que $\psi(V) \subset W$ et que~$\psi$ s'\'etend en un morphisme $\tilde{\psi} \colon V' \to W'$. Soit~$W_{0}$ un voisinage compact de~$z$ dans~$W$ qui est spectralement convexe dans~$W'$. Soit~$U_{0}$ (resp.~$V_{0}$) un voisinage compact de~$x$ dans~$U$ (resp.~$y$ dans~$V$) qui est spectralement convexe dans~$U'$ (resp.~$V'$) et tel que $\varphi(U_{0}) \subset W_{0}$ (resp. $\psi(V_{0}) \subset W_{0}$). Il suffit de montrer que l'image r\'eciproque de $U_{0} \times_{W_{0}} V_{0}$ par~$\Pi$ est compacte. Cela d\'ecoule de la proposition~\ref{produit_spectralement_convexe}.
\end{proof}
\begin{defi}\label{def:separe}\index{Morphisme analytique!separe@s\'epar\'e|textbf}\index{Immersion!fermee@ferm\'ee}\index{Espace analytique!separe@s\'epar\'e|textbf}
Soient $X$, $Y$ des espaces $\mathcal{A}$-analytiques et $\varphi \colon X\to Y$ un morphisme.
On dit que le morphisme~$\varphi$ est \emph{s\'epar\'e} si le morphisme diagonal $\Delta_{X/Y} \colon X \to X\times_{Y} X$ est une immersion ferm\'ee.
On dit que l'espace $\mathcal{A}$-analytique~$X$ est \emph{s\'epar\'e} si son morphisme structural $\pi \colon X \to \mathcal{M}(\mathcal{A})$ est s\'epar\'e.
\end{defi}
\begin{defi}\index{Application!separee@s\'epar\'ee|textbf}
Soient~$S$, $T$ des espaces topologiques. Une application $f \colon S\to T$ est dite \emph{s\'epar\'ee} si l'image du morphisme diagonal $S \to S\times_{T} S$ est ferm\'ee.
\end{defi}
La caract\'erisation suivante se d\'emontre sans difficult\'es.
\begin{lemm}
Soient~$S$, $T$ des espaces topologiques. Une application $f \colon S\to T$ est s\'epar\'ee si, et seulement si, pour tout $t\in T$ et tous $s \ne s' \in f^{-1}(t)$, il existe des voisinages~$U$ de~$s$ et~$U'$ de~$s'$ dans~$S$ tels que $U \cap U' = \emptyset$.
En particulier, si~$S$ est s\'epar\'e, alors toute application $f \colon S \to T$ est s\'epar\'ee.
\qed
\end{lemm}
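Remarquons, par exemple, que toute application injective $f \colon S \to T$ est s\'epar\'ee~: ses fibres contiennent au plus un point et la condition du lemme est alors automatiquement v\'erifi\'ee.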
Les notions de s\'eparation analytique et topologique sont reli\'ees.
\begin{prop}
Soit $\varphi \colon X\to Y$ un morphisme d'espaces $\mathcal{A}$-analytiques. Alors, $\varphi$ est s\'epar\'e si, et seulement si, le morphisme d'espaces topologiques sous-jacent $|\varphi| \colon |X| \to |Y|$ est s\'epar\'e.
\end{prop}
\begin{proof}
Supposons que $\varphi$ est s\'epar\'e. Le morphisme diagonal $\Delta_{X/Y}$ est alors une immersion ferm\'ee. En particulier, son image est ferm\'ee. Consid\'erons le diagramme commutatif
\[\begin{tikzcd}
\vert X \vert \arrow[r, "\vert\Delta_{X/Y}\vert"] \arrow[rd, "\Delta_{\vert X\vert/\vert Y\vert}"'] & \vert X\times_{Y} X\vert \arrow[d, "\Pi"]\\
& \vert X\vert\times_{\vert Y\vert} \vert X\vert
\end{tikzcd}.\]
On a
\[ \Im(\Delta_{|X|/|Y|}) = \Pi( \Im(|\Delta_{X/Y}|)).\]
D'apr\`es la proposition~\ref{produit_fibr\'e_propre}, l'application~$\Pi$ est propre, donc ferm\'ee. Par cons\'equent, $\Im(\Delta_{|X|/|Y|})$ est ferm\'ee.
R\'eciproquement, supposons que~$|\varphi|$ est s\'epar\'e. Nous allons montrer que $\Delta_{X/Y}$ est une immersion ferm\'ee. D'apr\`es la proposition~\ref{prop:immersiondiagonale}, $\Delta_{X/Y}$ est une immersion, donc, d'apr\`es la proposition~\ref{immersion_ferm\'e2}, il suffit de montrer que son image est ferm\'ee.
Soit $t \in (X\times_{Y} X) \setminus \Delta_{X/Y}(X)$. Alors, avec les notations de la d\'efinition~\ref{def:separe}, on a $p_{1}(t) \ne p_{2}(t)$. Puisque $|\varphi|$ est s\'epar\'e, il existe un voisinage ouvert~$U_{1}$ de~$p_{1}(t)$ dans~$X$ et un voisinage ouvert~$U_{2}$ de~$p_{2}(t)$ dans~$X$ tels que $U_{1}\cap U_{2} = \emptyset$. L'ouvert $p_{1}^{-1}(U_{1}) \cap p_{2}^{-1}(U_{2})$ de~$X\times_{Y} X$ contient alors~$t$ sans rencontrer $\Delta_{X/Y}(X)$. On en d\'eduit que $(X\times_{Y} X) \setminus \Delta_{X/Y}(X)$ est ouvert, et donc que $\Delta_{X/Y}(X)$ est ferm\'e.
\end{proof}
Terminons en consid\'erant les graphes de morphismes.
\begin{defi}\index{Morphisme analytique!graphe d'un|textbf}%
\nomenclature[Kr]{$\Gamma_{\varphi}$}{graphe d'un morphisme d'espaces $\mathcal{A}$-analytiques~$\varphi$}%
Soient $X$, $Y$, $Z$ des espaces $\mathcal{A}$-analytiques et $\varphi \colon X\to Y$ un morphisme au-dessus de~$Z$. Notons $p_X \colon X\times_{Z} Y\to X$ et $p_Y \colon X\times_{Z} Y\to Y$ les deux projections naturelles. On appelle \emph{graphe du morphisme~$\varphi$} l'unique morphisme $\Gamma_{\varphi} \colon X \to X\times_{Z} Y$ qui fait commuter le diagramme
\[\begin{tikzcd}
X \arrow[rd, "\Gamma_{\varphi}"] \arrow[rrd, bend left, "\varphi"] \arrow[rdd, bend right, "\mathrm{id}_{X}"'] & &\\
&X\times_{Z} Y \arrow[r, "p_{Y}"] \arrow[d, "p_{X}"']& Y \arrow[d]\\
& X \arrow[r]& Z.
\end{tikzcd}\]
\end{defi}
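Remarquons, par exemple, que pour $Y = X$ et $\varphi = \mathrm{id}_{X}$, le graphe $\Gamma_{\mathrm{id}_{X}}$ n'est autre que le morphisme diagonal $\Delta_{X/Z} \colon X \to X\times_{Z} X$.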
\begin{prop}\label{prop:grapheimmersion}\index{Morphisme analytique!graphe d'un} \index{Immersion}\index{Immersion!fermee@ferm\'ee}
Soient $X$, $Y$, $Z$ des espaces $\mathcal{A}$-analytiques et $\varphi \colon X\to Y$ un morphisme au-dessus de~$Z$. Alors, le morphisme $\Gamma_{\varphi} \colon X \to X\times_{Z} Y$ est une immersion.
Si le morphisme $Y \to Z$ est s\'epar\'e, alors $\Gamma_{\varphi}$ est une immersion ferm\'ee.
\end{prop}
\begin{proof}
On a le diagramme cart\'esien suivant dans la cat\'egorie des espaces $\mathcal{A}$-analytiques~:
\[\begin{tikzcd}
X \arrow[r, "\Gamma_{\varphi}"] \arrow[d, "\varphi"] & X \times_{Z} Y \arrow[d, "{(\mathrm{id}_{X},\varphi)}"] \\
Y \arrow[r, "\Delta_{Y/Z}"] & Y\times_{Z} Y.
\end{tikzcd}\]
D'apr\`es la proposition~\ref{prop:immersiondiagonale}, $\Delta_{Y/Z}$ est une immersion. Si le morphisme $Y \to Z$ est s\'epar\'e, c'est m\^eme une immersion ferm\'ee, par d\'efinition. Le r\'esultat d\'ecoule maintenant du lemme~\ref{lem:changementbaseimmersion}.
\end{proof}
\index{Morphisme analytique!separe@s\'epar\'e|)}
\chapter[Cat\'egorie des espaces analytiques~: d\'efinitions]{Cat\'egorie des espaces analytiques~: d\'efinitions}
\label{def_cat}
Ce chapitre est consacr\'e \`a la construction de la cat\'egorie des espaces analytiques sur un anneau de Banach. Fixons d\`es \`a pr\'esent un anneau de Banach $(\mathcal{A},\nm)$.
Dans la section~\ref{sec:catAan}, nous d\'efinissons les espaces $\mathcal{A}$-analytiques et les morphismes analytiques entre ces espaces. Nous proc\'edons en plusieurs \'etapes en commen\c{c}ant par les ouverts d'espaces affines analytiques sur~$\mathcal{A}$, en poursuivant avec leurs ferm\'es analytiques (appel\'es mod\`eles locaux $\mathcal{A}$-analytiques), avant de traiter le cas g\'en\'eral. La cat\'egorie correspondante est not\'ee~$\mathcal{A}-\An$.
Dans la section~\ref{sec:An_A}, nous d\'efinissons la cat\'egorie~$\An_\mathcal{A}$ des espaces analytiques au-dessus de~$\mathcal{A}$. Plus grosse que la premi\`ere, elle autorise des morphismes entre espaces sur des anneaux de Banach de base diff\'erents (mais n\'eanmoins reli\'es par un morphisme born\'e). Elle nous permettra, dans un chapitre ult\'erieur, d'effectuer des extensions des scalaires ou encore de munir les fibres des morphismes de structures analytiques.
Finalement, dans la section~\ref{sec:immersion}, nous introduisons la notion d'immersion d'espaces $\mathcal{A}$-analytiques et en d\'emontrons quelques propri\'et\'es \'el\'ementaires, analogues \`a celles dont on dispose dans d'autres cadres.
\section[Espaces $\mathcal{A}$-analytiques]{Cat\'egorie des espaces $\mathcal{A}$-analytiques}\label{sec:catAan}
Nous d\'efinissons ici les notions d'espace analytique et de morphisme d'espaces analytiques au-dessus d'un anneau de Banach~$\mathcal{A}$. Ces d\'efinitions ne requi\`erent pas de propri\'et\'es particuli\`eres sur~$\mathcal{A}$.
\subsection{Morphismes entre ouverts d'espaces affines analytiques}
\index{Morphisme!analytique|see{Morphisme analytique}}
\index{Morphisme analytique|(}
\begin{defi}\label{def:morphismeouvertaffine}\index{Morphisme analytique|textbf}
Soient~$U$ et~$V$ des ouverts d'espaces affines analytiques sur~$\mathcal{A}$.
Un \emph{morphisme analytique} de~$U$ dans~$V$ est un morphisme d'espaces localement annel\'es
$\varphi \colon U \to V$
v\'erifiant la condition suivante~: pour toute partie compacte~$U'$ de~$U$ et toute partie compacte~$V'$ de~$V$ telles que $\varphi(U') \subset V'$, le morphisme~$\mathcal{O}_V(V')\to\mathcal{O}_U(U')$ induit par $\varphi^\sharp$ est contractant (\textit{i.e.} pour tout~$f\in\mathcal{O}_V(V')$, on a $\|\varphi^\sharp(f)\|_{U'}\leq\|f\|_{V'}$).
\end{defi}
Nous pouvons caract\'eriser ces morphismes de la fa\c con suivante.
\begin{prop}\label{isomouvert}\index{Morphisme!de corps r\'esiduels}
Soient $U$ et~$V$ des ouverts d'espaces affines analytiques sur~$\mathcal{A}$.
Soit~$\mor{\varphi}: (U,\mathcal{O}_U)\to(V,\mathcal{O}_V)$ un morphisme d'espaces localement annel\'es. Les deux conditions suivantes sont \'equivalentes~:
\begin{enumerate}[i)]
\item le morphisme~$\mor{\varphi}$ est un morphisme analytique ;
\item pour tout $x \in U$, le morphisme~$\varphi^\sharp$ induit un plongement isom\'etrique de corps~$\kappa(\varphi(x))\to\kappa(x)$.
\end{enumerate}
\end{prop}
\begin{proof}
$i) \implies ii)$
Soit~$a\in\kappa(\varphi(x))$. Soit~$f\in\mathcal{O}_{V,\varphi(x)}$ un relev\'e de~$a$. Soit~$V'$ un voisinage compact de~$\varphi(x)$ dans~$V$ sur lequel~$f$ est d\'efinie. Soit~$U'$ un voisinage compact de~$x$ dans~$\varphi^{-1}(V')$. Par hypoth\`ese, on a
\[\|\varphi^\sharp(f)\|_{U'}\leq\|f\|_{V'}\]
et, en particulier,
\[|\varphi^\sharp(f)(x)|\leq\|f\|_{V'}.\]
En outre, on a l'\'egalit\'e
\[ |a| = |f(\varphi(x))|=\lim_{V'\ni \varphi(x)}\|f\|_{V'},\]
o\`u~$V'$ parcourt les voisinages compacts de~$\varphi(x)$ dans~$V$ sur lesquels~$f$ est d\'efini. On en d\'eduit que
\[|\varphi^\sharp(f)(x)|\leq |f(\varphi(x))|,\]
puis que
\[|\varphi^\sharp(f)(x)| = |f(\varphi(x))|\]
en distinguant selon que~$a$ est nul, auquel cas l'\'egalit\'e est \'evidente, ou non nul, auquel cas le r\'esultat d\'ecoule de l'in\'egalit\'e pour~$a^{-1}$.
\medbreak
$ii) \implies i)$
Soient~$x\in U$, $V'$ un voisinage compact de~$\varphi(x)$ dans~$V$ et~$U'$ un voisinage compact de~$x$ dans~$\varphi^{-1}(V')$. Soit $f\in \mathcal{O}_{V}(V')$. Par hypoth\`ese, pour tout~$x'\in U'$, on a
\[|\varphi^\sharp(f)(x')| = |f(\varphi(x'))| \le \|f\|_{V'}.\]
Le r\'esultat s'en d\'eduit.
\end{proof}
Concluons avec un exemple classique et fondamental.
\begin{exem}\label{exemple_morphisme}\index{Morphisme analytique!vers un espace affine}
Soient $n,m \in \ensuremath{\mathbf{N}}$. Soient~$U$ un ouvert de~$\E{m}{\mathcal{A}}$ et $f_{1},\dotsc,f_{n} \in \mathcal{O}(U)$. Nous allons expliquer comment construire un morphisme analytique $\varphi \colon U \to \E{n}{\mathcal{A}}$ \`a partir de ces donn\'ees. Plus pr\'ecis\'ement, si l'on d\'esigne par $T_{1},\dotsc,T_{n}$ les coordonn\'ees sur~$\E{n}{\mathcal{A}}$, le morphisme~$\varphi$ satisfera la condition suivante~:
\[ \forall i\in \cn{1}{n},\ \varphi^\sharp(T_{i})=f_{i}.\]
Commen\c cons par d\'efinir l'application ensembliste sous-jacente. Soit $x\in U$. Notons~$\varphi(x)$ le point de $\E{n}{\mathcal{A}}$ associ\'e \`a la semi-norme multiplicative
\[\renewcommand{\arraystretch}{1.2}
\begin{array}{ccc}
\mathcal{A}[T_1,\ldots,T_n] & \to & \ensuremath{\mathbf{R}}_{\ge 0}\\
P(T_{1},\dotsc,T_{n}) & \mapsto & |P(f_{1}(x),\dotsc,f_{n}(x))|
\end{array}.\]
On d\'efinit ainsi une application $\varphi \colon x \in U \mapsto \varphi(x) \in \E{n}{\mathcal{A}}$.
Sa continuit\'e d\'ecoule du lemme~\ref{lem:evaluationcontinueaffine}.
Il suit \'egalement de la d\'efinition que, pour tout point~$x$ de~$U$, on a un morphisme de corps isom\'etrique
\[ \varphi_{x}^\sharp \colon \mathcal{H}(\varphi(x))\to\mathcal{H}(x).\]
Construisons, \`a pr\'esent, l'application~$\varphi^\sharp$. Soient~$V$ un ouvert de~$\E{n}{\mathcal{A}}$ et $g$ un \'el\'ement de~$\mathcal{O}(V)$. Par d\'efinition, ce dernier est une application
\[g \colon V\to\displaystyle\coprod_{y\in V}\mathcal{H}(y)\]
telle que
\begin{enumerate}[i)]
\item pour tout $y\in V$, on a $g(y)\in\mathcal{H}(y)$~;
\item pour tout~$y\in V$, il existe un voisinage compact~$V_{y}$ de~$y$ dans~$V$ tel que~$g$ soit limite uniforme sur~$V_{y}$ d'une suite de fractions rationnelles sans p\^oles.
\end{enumerate}
Posons
\[\renewcommand{\arraystretch}{1.5}
\fonction{\varphi^\sharp(g)}{\varphi^{-1}(V)}{\displaystyle\coprod_{x\in \varphi^{-1}(V)}\mathcal{H}(x)}{x}{\varphi_{x}^\sharp(g(\varphi(x)))}.\]
Montrons que~$\varphi^\sharp(g)$ est un \'el\'ement de~$\mathcal{O}(\varphi^{-1}(V))$.
Soit~$x\in \varphi^{-1}(V)$. Posons $y :=\varphi(x)$. Il existe un voisinage compact~$V_{y}$ de~$y$ dans~$V$ et une suite de fractions rationnelles $\left(\frac{P_{i}}{Q_{i}}\right)_{i\in\ensuremath{\mathbf{N}}}$ sans p\^oles sur~$V_{y}$ qui converge uniform\'ement vers~$g$ sur~$V_{y}$. Soit~$U_{x}$ un voisinage compact de~$x$ dans~$\varphi^{-1}(V_{y})$. Consid\'erons la suite $\left(\frac{P_i(f_1,\ldots,f_n)}{Q_i(f_1,\ldots,f_n)}\right)_{i\in\ensuremath{\mathbf{N}}}$ d'\'el\'ements de~$\mathcal{O}(U_{x})$. Pour tout $x' \in U_{x}$, on a
\[ \left| \frac{P_i(f_1,\ldots,f_n)(x')}{Q_i(f_1,\ldots,f_n)(x')} - \varphi^\sharp(g)(x')\right| = \left| \frac{P_i(\varphi(x'))}{Q_i(\varphi(x'))} - g(\varphi(x')) \right|.\]
On en d\'eduit que la suite~$\left(\frac{P_i(f_1,\ldots,f_n)}{Q_i(f_1,\ldots,f_n)}\right)_{i\in\ensuremath{\mathbf{N}}}$ converge uniform\'ement vers~$\varphi^\sharp(g)$ sur~$U_{x}$. Par cons\'equent, $\varphi^\sharp(g)$ appartient bien \`a~$\mathcal{O}(\varphi^{-1}(V))$.
Remarquons que, pour tout $x \in \varphi^{-1}(V)$, on a
\begin{equation}\label{eq:phifg} \tag{*}
|\varphi^\sharp(g)(x)| = |g(\varphi(x))|.
\end{equation}
Cette \'egalit\'e entra\^ine que le morphisme $\varphi^\sharp \colon \mathcal{O}_{\varphi(x)}\to\mathcal{O}_{x}$ est un morphisme local d'anneaux locaux. Pour le d\'emontrer, il suffit d'utiliser le fait que, dans chacun de ces anneaux locaux, l'id\'eal maximal est constitu\'e des \'el\'ements qui s'annulent au point consid\'er\'e.
Finalement, nous avons d\'efini un morphisme d'espaces localement annel\'es $\varphi \colon U\to \E{n}{\mathcal{A}}$. D'apr\`es~\eqref{eq:phifg}, c'est un morphisme analytique. Par construction, pour tout $i\in \cn{1}{n}$, on a $\varphi^\sharp(T_{i})=f_{i}$.
\end{exem}
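\`A titre d'illustration, prenons $m = n = 1$, $U = \E{1}{\mathcal{A}}$ et $f_{1} = S^{2} \in \mathcal{O}(\E{1}{\mathcal{A}})$, o\`u $S$ d\'esigne la coordonn\'ee sur la source. La construction pr\'ec\'edente fournit un morphisme analytique $\varphi \colon \E{1}{\mathcal{A}} \to \E{1}{\mathcal{A}}$ v\'erifiant $\varphi^\sharp(T_{1}) = S^{2}$~; au niveau des points, $\varphi(x)$ est le point associ\'e \`a la semi-norme multiplicative $P(T_{1}) \mapsto |P(S(x)^{2})|$ sur~$\mathcal{A}[T_{1}]$.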
\begin{exem}\label{ex:structural}\index{Morphisme!structural|textbf}\index{Projection}
Lorsque $n\le m$ et qu'on applique le r\'esultat de l'exemple~\ref{exemple_morphisme} avec $U = \E{m}{\mathcal{A}}$ et, pour tout $i\in \cn{1}{n}$, $f_{i} = T_{i}$, on obtient un morphisme analytique
\[ \pi_{m,n} \colon \E{m}{\mathcal{A}} \to \E{n}{\mathcal{A}}\]
qui n'est autre que la projection sur les $n$ premi\`eres coordonn\'ees (\cf~d\'efinition~\ref{def:projection}).
Dans le cas particulier o\`u $n=0$, on obtient un morphisme analytique
\[ \pi_{m} \colon \E{m}{\mathcal{A}} \to \mathcal{M}(\mathcal{A}),\]
qui n'est autre que la projection sur la base (\cf~d\'efinition~\ref{def:projection}). On l'appelle \'egalement \emph{morphisme structural}.
\end{exem}
\index{Morphisme analytique|)}
\subsection{Mod\`eles locaux $\mathcal{A}$-analytiques}
\index{Modele local analytique@Mod\`ele local analytique|(}
\begin{defi}\index{Ferme analytique@Ferm\'e analytique|textbf}
Soit~$U$ un ouvert de~$\E{n}{\mathcal{A}}$. Un \emph{ferm\'e analytique} de~$U$ est un quadruplet~$(X,\mathcal{O}_X,j,\mathcal{I})$, o\`u~$\mathcal{I}$ est un faisceau coh\'erent d'id\'eaux de~$\mathcal{O}_U$, $X$~est le support du faisceau~$\mathcal{O}_U/\mathcal{I}$, $j \colon X \to U$ l'inclusion canonique et $\mathcal{O}_X$ le faisceau $j^{-1} (\mathcal{O}_U/\mathcal{I})$. Le couple~$(X,\mathcal{O}_X)$ est un espace localement annel\'e.
\end{defi}
\begin{rema}\label{rem:fermesanalytiques}
Soient~$(X,\mathcal{O}_X,j,\mathcal{I})$ un ferm\'e analytique d'un ouvert~$U$ de~$\E{n}{\mathcal{A}}$ et~$Y$ un ouvert de~$X$. Il existe alors un ouvert~$V$ de~$U$ tel que~$X\cap V=Y$ et $Y$~h\'erite naturellement d'une structure de ferm\'e analytique de~$V$, le faisceau d'id\'eaux associ\'e \'etant~$\mathcal{I}_{|V}$.
\end{rema}
\begin{defi}\index{Immersion!fermee@ferm\'ee|textbf}
Soit~$(X,\mathcal{O}_X,j,\mathcal{I})$ un ferm\'e analytique d'un ouvert~$U$ de~$\E{n}{\mathcal{A}}$. On a un morphisme naturel $j^\sharp \colon \mathcal{O}_{U} \to j_{\ast}\mathcal{O}_{X} = \mathcal{O}_{U}/\mathcal{I}$. Le couple~$\mor{j}$ d\'efinit un morphisme d'espaces localement annel\'es de~$(X,\mathcal{O}_X)$ dans~$(U,\mathcal{O}_U)$ appel\'e \emph{immersion ferm\'ee} de~$X$ dans~$U$.
Ce morphisme est injectif et induit des isomorphismes isom\'etriques entre les corps r\'esiduels.
\end{defi}
\begin{defi}\label{def:modelelocal}\index{Modele local analytique@Mod\`ele local analytique|textbf}\index{Morphisme!structural|textbf}
Un \emph{mod\`ele local $\mathcal{A}$-analytique} est la donn\'ee d'un entier~$n$, d'un ouvert~$U$ de~$\E{n}{\mathcal{A}}$ et d'un ferm\'e analytique~$X$ de~$U$.
Le morphisme $\E{n}{\mathcal{A}} \to \mathcal{M}(\mathcal{A})$ de l'exemple~\ref{ex:structural} induit un morphisme d'espaces localement annel\'es $\pi \colon X \to \mathcal{M}(\mathcal{A})$ appel\'e \emph{morphisme structural}.
\end{defi}
\index{Morphisme analytique|(}
\begin{defi}\label{def:morphismefermeouvert}\index{Morphisme analytique|textbf}
Soient $X$ un ferm\'e analytique d'un ouvert~$U$ d'un espace affine analytique sur~$\mathcal{A}$ et~$Y$ un ferm\'e analytique d'un ouvert~$V$ d'un espace affine analytique sur~$\mathcal{A}$. Notons $j_{U} \colon X \to U$ et $j_{V} \colon Y \to V$ les immersions ferm\'ees associ\'ees.
Un \emph{morphisme analytique} de~$X$ dans~$Y$ est un morphisme d'espaces localement annel\'es
$\varphi \colon X \to Y$
v\'erifiant la condition suivante : pour tout point~$x$ de~$X$, il existe un voisinage ouvert~$U'$ de~$j_{U}(x)$ dans~$U$ et un morphisme analytique
$\tilde \varphi \colon U' \to V$
tels que le diagramme
\[\begin{tikzcd}
U' \arrow[r, "\tilde{\varphi}"] &V\\
j_{U}^{-1}(U') \arrow[u, hook, "j_{U}"] \arrow[r, "\varphi"'] & \arrow[u, hook, "j_{V}"] Y
\end{tikzcd}\]
commute.
\end{defi}
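Remarquons, par exemple, que si~$X$ est un ferm\'e analytique d'un ouvert~$U$ d'un espace affine analytique sur~$\mathcal{A}$, l'immersion ferm\'ee $j_{U} \colon X \to U$, o\`u~$U$ est consid\'er\'e comme ferm\'e analytique de lui-m\^eme d\'efini par le faisceau d'id\'eaux nul, est un morphisme analytique~: dans la condition de la d\'efinition pr\'ec\'edente, il suffit de prendre $U' = U$ et $\tilde\varphi = \mathrm{id}_{U}$.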
\begin{exem}\label{ex:structuralmodele}\index{Morphisme!structural}
Le morphisme structural $\pi \colon X \to \mathcal{M}(\mathcal{A})$ est un morphisme analytique.
\end{exem}
\begin{rema}
Soit $\varphi \colon X \to Y$ un morphisme analytique entre mod\`eles locaux $\mathcal{A}$-analytiques. Soient~$X'$ et~$Y'$ des ouverts de~$X$ et~$Y$ respectivement tels que $\varphi(X') \subset Y'$. Alors, $\varphi$ induit par restriction un morphisme analytique $X' \to Y'$.
\end{rema}
\begin{lemm}\label{lem:localmodele}
La notion de morphisme analytique entre mod\`eles locaux $\mathcal{A}$-analytiques est locale \`a la source et au but.
Soit $\varphi \colon X \to Y$ un morphisme d'espaces localement annel\'es entre mod\`eles locaux $\mathcal{A}$-analytiques. Alors $\varphi$ est un morphisme analytique si, et seulement si, pour tout point $x$ de~$X$ et tout point~$y$ de~$Y$, il existe un voisinage ouvert~$X'$ de~$x$ et un voisinage ouvert~$Y'$ de~$y$ tels que $\varphi(X') \subset Y'$ et le morphisme $X' \to Y'$ induit par~$\varphi$ soit un morphisme analytique.
\qed
\end{lemm}
\begin{lemm}\label{lem:compositionmodele}
Les morphismes compatibles entre mod\`eles locaux $\mathcal{A}$-analytiques se composent.
Soient $\varphi \colon X \to Y$ et $\psi \colon Y \to Z$ des morphismes analytiques entre mod\`eles locaux $\mathcal{A}$-analytiques. Alors $\psi \circ \varphi$ est un morphisme analytique.
\qed
\end{lemm}
\begin{prop}\label{prop:isometrie}\index{Morphisme!de corps r\'esiduels}
Soit $\varphi \colon X \to Y$ un morphisme analytique entre mod\`eles locaux $\mathcal{A}$-analytiques. Pour tout point~$x$ de~$X$, le morphisme~$\varphi^\sharp$ induit un plongement isom\'etrique de corps $\kappa(\varphi(x)) \to \kappa(x)$.
\end{prop}
\begin{proof}
On se ram\`ene imm\'ediatement au cas o\`u~$X$ et~$Y$ sont des ouverts d'espaces affines analytiques. Le r\'esultat d\'ecoule alors de la proposition~\ref{isomouvert}.
\end{proof}
\index{Morphisme analytique|)}
\index{Modele local analytique@Mod\`ele local analytique|)}
\subsection{Espaces $\mathcal{A}$-analytiques}
\index{Espace analytique|(}
\begin{defi}\index{Carte analytique|textbf}\index{Atlas analytique|textbf}
Soit~$X$ un espace localement annel\'e.
Une \emph{carte $\mathcal{A}$-analytique} de~$X$ est la donn\'ee d'un ouvert~$U$ de~$X$, d'un mod\`ele local $\mathcal{A}$-analytique~$Z_{U}$ et d'un isomorphisme d'espaces localement annel\'es~$j_U \colon U\xrightarrow[]{\sim} Z_U$.
Un \emph{atlas $\mathcal{A}$-analytique} de~$X$ est un ensemble $\{(U,Z_{U},j_{U})\}_{U \in \mathcal{U}}$ de cartes $\mathcal{A}$-analytiques de~$X$ tel que l'ensemble $\{U\}_{U\in \mathcal{U}}$ forme un recouvrement ouvert de~$X$ et que, pour tous $U,U' \in \mathcal{U}$, le morphisme
\[j_{U}\circ j_{U'}^{-1} \colon j_{U'}(U\cap U') \to j_{U}(U\cap U')\]
soit un isomorphisme analytique (entre mod\`eles locaux $\mathcal{A}$-analytiques).
Deux atlas $\mathcal{A}$-analytiques sur~$X$ sont dits \emph{compatibles} si leur r\'eunion est encore un atlas $\mathcal{A}$-analytique de~$X$. Cela d\'efinit une relation d'\'equivalence sur l'ensemble des atlas $\mathcal{A}$-analytiques d'un espace localement annel\'e.
\end{defi}
\begin{defi}\index{Espace analytique|textbf}\index{Morphisme!structural|textbf}
Un \emph{espace $\mathcal{A}$-analytique} est un espace localement annel\'e muni d'une classe d'\'equivalence d'atlas $\mathcal{A}$-analytiques.
Les morphismes structuraux des cartes d'un espace $\mathcal{A}$-analytique~$X$ se recollent en un morphisme d'espaces localement annel\'es $\pi \colon X \to \mathcal{M}(\mathcal{A})$, encore appel\'e \emph{morphisme structural}.
\end{defi}
Lorsque nous parlerons d'un atlas d'un espace $\mathcal{A}$-analytique, il s'agira d'un \'el\'ement de cette classe d'\'equivalence.
\begin{rema}\label{remarque_base_espace_analytique}
Soit~$X$ un espace $\mathcal{A}$-analytique. Tout ouvert~$U$ de~$X$ h\'erite par restriction d'une structure d'espace $\mathcal{A}$-analytique.
\end{rema}
Soit~$X$ un espace $\mathcal{A}$-analytique. \`A tout point~$x$ de~$X$, on peut associer un corps r\'esiduel~$\kappa(x)$. En choisissant une carte de~$X$ contenant~$x$, on peut munir~$\kappa(x)$ d'une valeur absolue. D'apr\`es la proposition~\ref{prop:isometrie}, celle-ci ne d\'epend pas du choix de la carte.
\nomenclature[Ka]{$\kappa(x)$}{corps r\'esiduel d'un point~$x$ d'un espace $\mathcal{A}$-analytique}%
\begin{defi}\index{Corps!residuel@r\'esiduel|textbf}\index{Corps!residuel complete@r\'esiduel compl\'et\'e|textbf}\index{Evaluation@\'Evaluation|textbf}%
\nomenclature[Kb]{$\mathcal{H}(x)$}{corps r\'esiduel compl\'et\'e du point~$x$}%
\nomenclature[Kc]{$f(x)$}{\'evaluation en $x$ d'une section $f$ de $\mathcal{O}_{X}$}%
Soit~$X$ un espace $\mathcal{A}$-analytique. Soit~$x \in X$. On munit le corps r\'esiduel~$\kappa(x)$ de la valeur absolue provenant de n'importe quelle carte de~$X$ contenant~$x$. On note~$\mathcal{H}(x)$ le compl\'et\'e correspondant.
Pour tout $f\in \mathcal{O}_{X}(X)$, on note~$f(x)$ l'image de~$f$ dans~$\mathcal{H}(x)$.
\end{defi}
\begin{lemm}\label{lem:evaluationcontinue}\index{Evaluation@\'Evaluation}
Soit~$X$ un espace $\mathcal{A}$-analytique. Pour tout $f\in\mathcal{O}_X(X)$, l'application
\[\fonction{|f|}{X}{\ensuremath{\mathbf{R}}_{\ge 0}}{x}{|f(x)|}\]
est continue.
\end{lemm}
\begin{proof}
On se ram\`ene au cas d'un mod\`ele local, puis au cas d'un espace affine analytique, et on conclut alors par le lemme~\ref{lem:evaluationcontinueaffine}.
\end{proof}
\index{Morphisme analytique|(}
\begin{defi}\label{def:morphismegeneral}\index{Morphisme analytique|textbf}
\nomenclature[Ke]{$\Hom_{\mathcal{A}-\An}(X,Y)$}{pour $X,Y$ espaces $\mathcal{A}$-analytiques, ensemble des morphismes analytiques de~$X$ dans~$Y$}%
Soient~$X$ et~$Y$ deux espaces $\mathcal{A}$-analytiques. Un \emph{morphisme analytique} de~$X$ dans~$Y$ est un morphisme d'espaces localement annel\'es $\varphi \colon X \to Y$ tel que, pour tous atlas~$\mathcal{U}$ de~$X$ et~$\mathcal{V}$ de~$Y$, pour tout $U \in \mathcal{U}$ et $V\in \mathcal{V}$, le morphisme
\[ X_U \cap j_{U}(\varphi^{-1}(V)) \to Y_V\]
induit par $j_V\circ\varphi \circ j_U^{-1}$
soit un morphisme analytique (entre mod\`eles locaux $\mathcal{A}$-analytiques).
On note $\Hom_{\mathcal{A}-\An}(X,Y)$ l'ensemble des morphismes analytiques de~$X$ dans~$Y$.
\end{defi}
\begin{exem}\label{ex:structuralAanalytique}\index{Morphisme!structural}
Le morphisme structural $\pi \colon X \to \mathcal{M}(\mathcal{A})$ est un morphisme analytique.
\end{exem}
\begin{rema}\label{atlas}
Pour montrer qu'un morphisme d'espaces localement annel\'es $\varphi \colon X \to Y$ est un morphisme analytique, il suffit de montrer que la condition de la d\'efinition~\ref{def:morphismegeneral} est satisfaite pour \emph{un} atlas~$\mathcal{U}$ de~$X$ et \emph{un} atlas~$\mathcal{V}$ de~$Y$. Cela provient du fait que la notion de morphisme analytique entre mod\`ele locaux $\mathcal{A}$-analytiques est locale (\cf~lemme~\ref{lem:localmodele}).
\end{rema}
Nous disposons de r\'esultats analogues \`a ceux des lemmes~\ref{lem:localmodele} et~\ref{lem:compositionmodele} et de la proposition~\ref{prop:isometrie}.
\begin{prop}\label{prop:proprietemorphismes}\index{Morphisme!de corps r\'esiduels}
\
\begin{enumerate}[i)]
\item La notion de morphisme analytique d'espaces $\mathcal{A}$-analytiques est locale \`a la source et au but.
\item Les morphismes compatibles d'espaces $\mathcal{A}$-analytiques se composent.
\item Les morphismes d'espaces $\mathcal{A}$-analytiques induisent des plongements isom\'etriques entre les corps r\'esiduels.
\end{enumerate}
\qed
\end{prop}
\begin{defi}\index{Categorie@Cat\'egorie!des espaces A-analytiques@des espaces $\mathcal{A}$-analytiques|textbf}
\nomenclature[Kd]{$\mathcal{A}-\An$}{cat\'egorie des espaces~$\mathcal{A}$-analytiques}%
La \emph{cat\'egorie des espaces $\mathcal{A}$-analytiques} est la cat\'egorie dont les objets sont les espaces~$\mathcal{A}$-analytiques et dont les morphismes sont les morphismes analytiques. Nous noterons~$\mathcal{A}-\An$ cette cat\'egorie.
\end{defi}
\begin{rema}\label{rem:Homfaisceau}
Il est parfois utile de munir la cat\'egorie~$\mathcal{A}-\An$ de la topologie de Grothendieck~$\mathrm{Ouv}$ d\'efinie par les recouvrements ouverts. Le foncteur
\[\fonction{\Hom_{\mathcal{A}-\An}(\wc,Y)}{\mathcal{A}-\An}{\mathrm{Ens}}{X}{\Hom_{\mathcal{A}-\An}(X,Y)},\]
o\`u $\mathrm{Ens}$ d\'esigne la cat\'egorie des ensembles, est alors un faisceau.
\end{rema}
\index{Espace analytique|)}
\index{Morphisme analytique|)}
\section[Espaces analytiques au-dessus de~$\mathcal{A}$]{Cat\'egorie des espaces analytiques au-dessus de~$\mathcal{A}$}\label{sec:An_A}
\index{Espace analytique!au-dessus de~$\mathcal{A}$|(}
Nous d\'efinissons ici une cat\'egorie plus grande que~$\mathcal{A}-\An$. Elle nous permettra de proc\'eder \`a des extensions des scalaires.
\begin{defi}\label{def:espaceaudessusdeA}\index{Espace analytique!au-dessus de~$\mathcal{A}$|textbf}
Un \emph{espace analytique au-dessus de~$\mathcal{A}$} est la donn\'ee d'un morphisme born\'e d'anneaux de Banach~$\mathcal{A}\to\mathcal{B}$ et d'un espace~$\mathcal{B}$-analytique.
\end{defi}
Remarquons que l'espace~$\mathcal{B}$-analytique est alors muni d'une structure d'espace localement~$\mathcal{A}$-annel\'e.
\medbreak
Soit~$f \colon \mathcal{B}\to\mathcal{B}'$ un morphisme born\'e de~$\mathcal{A}$-alg\`ebres de Banach. Notons $\tilde f \colon \mathcal{M}(\mathcal{B}') \to \mathcal{M}(\mathcal{B})$ le morphisme d'espaces localement annel\'es associ\'e.
Nous allons maintenant d\'efinir une notion de morphisme d'espaces analytiques au-dessus de~$f$, allant d'un espace~$\mathcal{B}'$-analytique vers un espace~$\mathcal{B}$-analytique. Nous proc\'ederons de la m\^eme mani\`ere que pour les morphismes d'espaces~$\mathcal{A}$-analytiques en commen\c cant par d\'efinir les morphismes entre ouverts d'espaces affines analytiques.
\index{Morphisme analytique!au-dessus d'un morphisme d'anneaux de Banach|(}
\begin{defi}\label{defi:morphismeaudessusdefouvertaffine}\index{Morphisme analytique!au-dessus d'un morphisme d'anneaux de Banach|textbf}
Soient~$U$ un ouvert d'un espace affine analytique sur~$\mathcal{B}'$ et $V$ un ouvert d'un espace affine analytique sur~$\mathcal{B}$. Un \emph{morphisme analytique de~$U$ dans~$V$ au-dessus de~$f$} est un morphisme d'espaces localement annel\'es $\varphi \colon U\to V$ qui fait commuter le diagramme
\[\begin{tikzcd}
U \arrow[r, "\varphi"] \arrow[d, "\pi"]& V \arrow[d, "\pi"]\\
\mathcal{M}(\mathcal{B}') \arrow[r, "\tilde f"] & \mathcal{M}(\mathcal{B})
\end{tikzcd},\]
o\`u les morphismes verticaux sont les morphismes structuraux,
et qui satisfait la condition suivante~: pour toute partie compacte~$U'$ de~$U$ et toute partie compacte~$V'$ de~$V$ telles que $\varphi(U') \subset V'$, le morphisme $\mathcal{O}_V(V')\to\mathcal{O}_U(U')$ induit par~$\varphi^\sharp$ est contractant (\textit{i.e.} pour tout~$a\in\mathcal{O}_V(V')$, on a $\|\varphi^\sharp(a)\|_{U'}\leq\|a\|_{V'}$).
\end{defi}
\begin{rema}
Dans la situation de la d\'efinition pr\'ec\'edente, le morphisme~$f$ munit~$U$ d'une structure d'espace localement $\mathcal{B}$-annel\'e et le morphisme analytique~$\varphi$ respecte cette structure.
\end{rema}
\begin{exem}\label{ex:tildefn}
\nomenclature[Kk]{$\tilde{f}_{n}$}{morphisme naturel de $\E{n}{\mathcal{B}'}$ dans $\E{n}{\mathcal{B}}$ au-dessus d'un morphisme $f \colon \mathcal{B} \to \mathcal{B}'$}%
Soit~$n\in\ensuremath{\mathbf{N}}$. Le morphisme~$f$ induit un morphisme d'espaces localement annel\'es $\tilde{f}_n \colon \E{n}{\mathcal{B}'} \to \E{n}{\mathcal{B}}$. C'est un morphisme analytique au-dessus de~$f$.
\end{exem}
\begin{defi}\label{defi:morphismeaudessusdeffermeouvert}\index{Morphisme analytique!au-dessus d'un morphisme d'anneaux de Banach|textbf}
Soient $X$ un ferm\'e analytique d'un ouvert~$U$ d'un espace affine analytique sur~$\mathcal{B}'$ et~$Y$ un ferm\'e analytique d'un ouvert~$V$ d'un espace affine analytique sur~$\mathcal{B}$. Notons $j_{U} \colon X \to U$ et $j_{V} \colon Y \to V$ les immersions ferm\'ees associ\'ees.
Un \emph{morphisme analytique de~$X$ dans~$Y$ au-dessus de~$f$} est un morphisme d'espaces localement annel\'es
$\varphi \colon X \to Y$
v\'erifiant la condition suivante : pour tout point~$x$ de~$X$, il existe un voisinage ouvert~$U'$ de~$j_{U}(x)$ dans~$U$ et un morphisme analytique
$\tilde \varphi \colon U' \to V$ au-dessus de~$f$
tels que le diagramme
\[\begin{tikzcd}
U' \arrow[r, "\tilde{\varphi}"] &V\\
j_{U}^{-1}(U') \arrow[u, hook, "j_{U}"] \arrow[r, "\varphi"'] & \arrow[u, hook, "j_{V}"] Y
\end{tikzcd}\]
commute.
\end{defi}
\begin{defi}\label{defi:morphismeaudessusdef}\index{Morphisme analytique!au-dessus d'un morphisme d'anneaux de Banach|textbf}%
\nomenclature[Ki]{$\Hom_{\An_{\mathcal{A}},f}(X,Y)$}{pour $X$ espace $\mathcal{B}'$-analytique, $Y$ espace $\mathcal{B}$-analytique et $f\colon \mathcal{B} \to \mathcal{B}'$ morphisme d'anneaux de Banach, ensemble des morphismes analytiques de~$X$ dans~$Y$ au-dessus de~$f$}%
Soient~$X$ un espace $\mathcal{B}'$-analytique et~$Y$ un espace $\mathcal{B}$-analytique. Un \emph{morphisme analytique de~$X$ dans~$Y$ au-dessus de~$f$} est un morphisme d'espaces localement annel\'es $\varphi \colon X\to Y$ tel que, pour tous atlas~$\mathcal{U}$ de~$X$ et~$\mathcal{V}$ de~$Y$, pour tout $U \in \mathcal{U}$ et $V\in \mathcal{V}$, le morphisme
\[j_V\circ\varphi \circ j_U^{-1} \colon X_U \cap j_{U}(\varphi^{-1}(V)) \to Y_V\]
soit un morphisme analytique au-dessus de~$f$.
On note $\Hom_{\An_{\mathcal{A}},f}(X,Y)$ l'ensemble des morphismes analytiques de~$X$ dans~$Y$ au-dessus de~$f$.
\end{defi}
\begin{rema}
Comme dans le cas des morphismes entre espaces $\mathcal{A}$-analytiques, il suffit de montrer que la condition de la d\'efinition est satisfaite pour \emph{un} atlas~$\mathcal{U}$ de~$X$ et \emph{un} atlas~$\mathcal{V}$ de~$Y$.
\end{rema}
\begin{rema}
Dans la situation de la d\'efinition pr\'ec\'edente, on a un diagramme commutatif
\[\begin{tikzcd}
X \arrow[r, "\varphi"] \arrow[d]& Y \arrow[d]\\
\mathcal{M}(\mathcal{B}') \arrow[r, "\tilde f"] & \mathcal{M}(\mathcal{B})
\end{tikzcd}.\]
En particulier, le morphisme analytique~$\varphi$ respecte les structures d'espaces localement $\mathcal{B}$-annel\'es de~$X$ (induite par le morphisme $f \colon \mathcal{B} \to \mathcal{B}'$) et~$Y$.
\end{rema}
\begin{rema}
Supposons que le morphisme~$f$ est le morphisme identit\'e $\mathrm{id} \colon \mathcal{B} \to \mathcal{B}$ (et donc que $\mathcal{B}' = \mathcal{B}$). La notion de morphisme analytique au-dessus de~$\mathrm{id}$ co\"incide alors avec celle de morphisme analytique entre espaces $\mathcal{B}$-analytiques.
\end{rema}
Les propri\'et\'es des morphismes entre espaces $\mathcal{A}$-analytiques se transposent \emph{mutatis mutandis} \`a notre nouveau cadre.
\begin{prop}\label{prop:propmorphismeaudessusdef}\index{Morphisme!de corps r\'esiduels}
\
\begin{enumerate}[i)]
\item La notion de morphisme analytique au-dessus de~$f$ est locale \`a la source et au but.
\item Les morphismes analytiques au-dessus de~$f$ induisent des plongements isom\'etriques entre les corps r\'esiduels.
\end{enumerate}
\qed
\end{prop}
La propri\'et\'e de composition prend une forme un peu diff\'erente.
\begin{lemm}
Soient $f \colon \mathcal{B}\to\mathcal{B}'$ et $f' \colon \mathcal{B}'\to\mathcal{B}''$ deux morphismes born\'es de~$\mathcal{A}$-alg\`ebres de Banach. Soient $\varphi \colon X\to Y$ un morphisme analytique au-dessus de~$f'$ et $\varphi' \colon Y\to Z$ un morphisme analytique au-dessus de~$f$. Alors le morphisme~$\varphi'\circ\varphi \colon X\to Z$ est un morphisme analytique au-dessus de~$f'\circ f$.
\qed
\end{lemm}
\begin{defi}\index{Categorie@Cat\'egorie!des espaces analytiques au-dessus de~$\mathcal{A}$|textbf}%
\nomenclature[Kj]{$\Hom_{\An_{\mathcal{A}}}(X,Y)$}{pour $X,Y$ espaces analytiques au-dessus de~$\mathcal{A}$, ensemble des morphismes analytiques de $X$ dans $Y$}%
\nomenclature[Kh]{$\An_{\mathcal{A}}$}{cat\'egorie des espaces analytiques au-dessus de~$\mathcal{A}$}%
La \emph{cat\'egorie des espaces analytiques au-dessus de~$\mathcal{A}$} est d\'efinie comme suit. Ses objets sont les espaces analytiques au-dessus de~$\mathcal{A}$. Pour toutes $\mathcal{A}$-alg\`ebres de Banach~$\mathcal{B}$ et~$\mathcal{B}'$, tout espace $\mathcal{B}'$-analytique~$X$ et tout espace $\mathcal{B}$-analytique~$Y$, l'ensemble des morphismes de~$X$ dans~$Y$ est
\[\Hom_{\An_{\mathcal{A}}}(X,Y) := \bigcup_{f} \Hom_{\An_{\mathcal{A}},f}(X,Y),\]
o\`u $f$~parcourt l'ensemble des morphismes born\'es de $\mathcal{A}$-alg\`ebres de Banach de~$\mathcal{B}$ dans~$\mathcal{B}'$.
On note~$\An_{\mathcal{A}}$ la cat\'egorie des espaces analytiques au-dessus de~$\mathcal{A}$.
\end{defi}
\begin{exem}\label{ex:morphismex}\index{Morphisme analytique!associe a un point@associ\'e \`a un point}%
\nomenclature[Kl]{$\gamma_{x}$}{morphisme analytique $\mathcal{M}(\mathcal{H}(x)) \to X$ associ\'e \`a un point $x$ d'un espace $\mathcal{A}$-analytique $X$}%
Soit~$X$ un espace $\mathcal{A}$-analytique. \`A tout point~$x$ de~$X$, on peut associer canoniquement un morphisme $\gamma_{x} \colon \mathcal{M}(\mathcal{H}(x)) \to X$ d'espaces analytiques au-dessus de~$\mathcal{A}$ dont l'image est~$\{x\}$.
Pour le construire, on peut raisonner localement et donc supposer que~$X$ est un ferm\'e analytique d'un ouvert de~$\E{n}{\mathcal{A}}$. Le morphisme $\mathcal{A}[T_{1},\dotsc,T_{n}] \to \mathcal{H}(x)$ d'\'evaluation en~$x$ induit alors le morphisme
\[ \gamma_{x} \colon \mathcal{M}(\mathcal{H}(x)) \to X \hookrightarrow \E{n}{\mathcal{A}}\]
recherch\'e. C'est un morphisme au-dessus du morphisme $\mathcal{A} \to \mathcal{H}(x)$ obtenu en restreignant le morphisme d'\'evaluation en~$x$.
\end{exem}
\index{Morphisme analytique!au-dessus d'un morphisme d'anneaux de Banach|)}
\index{Espace analytique!au-dessus de~$\mathcal{A}$|)}
\section{Immersions d'espaces analytiques}\label{sec:immersion}
\index{Immersion|(}
Revenons maintenant \`a la cat\'egorie des espaces~$\mathcal{A}$-analytiques~$\mathcal{A}-\An$.
\begin{defi}\index{Ferme analytique@Ferm\'e analytique|textbf}
Soit~$(X,\mathcal{O}_{X})$ un espace $\mathcal{A}$-analytique. Un \emph{ferm\'e analytique} de~$X$ est un quadruplet~$(Z,\mathcal{O}_Z,j,\mathcal{I})$, o\`u~$\mathcal{I}$ est un faisceau coh\'erent d'id\'eaux de~$\mathcal{O}_X$, $Z$~le lieu des z\'eros de~$\mathcal{I}$, $j \colon Z \to X$ l'inclusion canonique et $\mathcal{O}_Z$ le faisceau $j^{-1} (\mathcal{O}_X/\mathcal{I})$.
L'espace~$(Z,\mathcal{O}_Z)$ est naturellement muni d'une structure d'espace $\mathcal{A}$-analytique, obtenue en restreignant les cartes des atlas de~$X$.
\end{defi}
\begin{rema}\label{rem:supportfaisceau}\index{Faisceau!annulateur d'un}\index{Faisceau!support d'un}%
\nomenclature[Eb]{$\textrm{Ann}(\mathcal{F})$}{annulateur de~$\mathcal{F}$}%
Soient~$X$ un espace $\mathcal{A}$-analytique et~$\mathcal{F}$ un faisceau coh\'erent sur~$X$. Alors l'annulateur~$\textrm{Ann}(\mathcal{F})$ de~$\mathcal{F}$ est un faisceau d'id\'eaux coh\'erents dont le lieu des z\'eros co\"incide avec le support de~$\mathcal{F}$ (\cf~\cite[A, \S 5]{Gr-Re2}). En particulier, le support de~$\mathcal{F}$ est naturellement muni d'une structure de ferm\'e analytique de~$X$.
\end{rema}
\begin{defi}\index{Immersion|textbf}\index{Immersion!ouverte|textbf}\index{Immersion!fermee@ferm\'ee|textbf}
Soit~$i \colon X\to Y$ un morphisme d'espaces $\mathcal{A}$-analytiques. On dit que~$i$ est
\begin{itemize}
\item une \emph{immersion ouverte} s'il induit un isomorphisme entre~$X$ et un ouvert de~$Y$ (qui a naturellement une structure d'espace analytique d'apr\`es la remarque \ref{remarque_base_espace_analytique}) ;
\item une \emph{immersion ferm\'ee} s'il induit un isomorphisme entre~$X$ et un ferm\'e analytique de~$Y$ ;
\item une \emph{immersion} s'il existe une immersion ouverte~$i_{1}$ et une immersion ferm\'ee~$i_{2}$ telles que $i = i_1\circ i_2$.
\end{itemize}
\end{defi}
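Donnons un exemple \'el\'ementaire, qui d\'ecoule directement des d\'efinitions qui pr\'ec\`edent.
\begin{exem}
Soient $Y$ un espace $\mathcal{A}$-analytique, $U$ un ouvert de~$Y$ et $Z$ un ferm\'e analytique de~$U$. La compos\'ee $Z \to U \to Y$ de l'immersion ferm\'ee $Z \to U$ et de l'immersion ouverte $U \to Y$ est une immersion. Lorsque l'ensemble sous-jacent \`a~$Z$ n'est pas ferm\'e dans~$Y$, ce n'est pas une immersion ferm\'ee (\cf~proposition~\ref{immersion_ferm\'e2}) et, en g\'en\'eral, ce n'est pas non plus une immersion ouverte.
\end{exem}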
\'Enon\c cons quelques propri\'et\'es des immersions.
\begin{lemm}
Soit $i \colon X \to Y$ une immersion ferm\'ee d'espaces $\mathcal{A}$-analytiques. Soit~$\mathcal{I}$ le noyau du morphisme $\mathcal{O}_{Y} \to i_{\ast}\mathcal{O}_{X}$. C'est un faisceau d'id\'eaux coh\'erent. Notons~$Z$ le ferm\'e analytique de~$Y$ qu'il d\'efinit et $j \colon Z \to Y$ le morphisme d'inclusion. Alors, il existe un unique isomorphisme d'espaces $\mathcal{A}$-analytiques $\psi \colon X \to Z$ tel que $i = j \circ \psi$.
\qed
\end{lemm}
\begin{prop}
La compos\'ee d'un nombre fini d'immersions ouvertes (resp. d'immersions ferm\'ees, resp. d'immersions) est une immersion ouverte (resp. une immersion ferm\'ee, resp. une immersion).
\end{prop}
\begin{proof}
Il suffit de d\'emontrer le r\'esultat pour la compos\'ee de deux morphismes. Le r\'esultat pour les immersions ouvertes et ferm\'ees est imm\'ediat.
Soient $i \colon X \to Y$ et $j \colon Y \to Z$ des immersions. On suit la preuve pour le cas des sch\'emas, \cf~\cite[\href{https://stacks.math.columbia.edu/tag/02V0}{lemma~02V0}]{stacks-project}. \'Ecrivons $i = i_1\circ i_2$ et $j = j_1\circ j_2$ o\`u $i_{2} \colon X \to U$ et $j_{2} \colon Y \to V$ sont des immersions ferm\'ees et $i_{1} \colon U \to Y$ et $j_{1} \colon V \to Z$ sont des immersions ouvertes. Il existe un ouvert~$V'$ de~$V$ tel que $U = j_{2}^{-1}(V')$. On peut alors \'ecrire~$j \circ i$ comme la compos\'ee de l'immersion ouverte
\[V' \to V \to Z\]
et de l'immersion ferm\'ee
\[X \to U = j_{2}^{-1}(V') \to V'.\]
\end{proof}
\begin{lemm}\label{lem:immersionfermeelocann}
Soient $\varphi \colon X \to Y$ un morphisme d'espaces $\mathcal{A}$-analytiques et $i \colon Y' \to Y$ une immersion ferm\'ee d'espaces $\mathcal{A}$-analytiques. Le morphisme~$\varphi$ se factorise par~$i$ en tant que morphisme d'espaces $\mathcal{A}$-analytiques si et seulement si il se factorise par~$i$ en tant que morphisme d'espaces localement annel\'es.
\end{lemm}
\begin{proof}
Si~$\varphi$ se factorise par~$i$ en tant que morphisme d'espaces $\mathcal{A}$-analytiques, il se factorise par~$i$ en tant que morphisme d'espaces localement annel\'es, par d\'efinition.
D\'emontrons l'implication r\'eciproque. Supposons qu'il existe un morphisme d'espaces localement annel\'es $\varphi' \colon X \to Y'$ tel que $\varphi = i \circ \varphi'$. On veut montrer que~$\varphi'$ est un morphisme d'espaces analytiques. Par d\'efinition, $Y'$ est isomorphe, dans la cat\'egorie des espaces $\mathcal{A}$-analytiques, \`a un ferm\'e analytique~$Z$ de~$Y$. On peut donc supposer que $Y'=Z$ et que $i \colon Z \to Y$ est le morphisme d'inclusion.
La question \'etant locale, on peut supposer que~$X$ et~$Y$ sont des ferm\'es analytiques d'ouverts~$U$ et~$V$ d'espaces affines et que~$\varphi$ se rel\`eve en un morphisme $\tilde{\varphi} \colon U\to V$. L'espace~$Z$ est alors un ferm\'e analytique de~$V$ et le morphisme~$\tilde{\varphi}$ rel\`eve~$\varphi'$. Le r\'esultat s'ensuit.
\end{proof}
\begin{prop}\label{immersion_ferm\'e}
Soit $\varphi \colon X \to Y$ un morphisme d'espaces $\mathcal{A}$-analytiques.
Soit $i \colon Y' \to Y$ une immersion ferm\'ee d'espaces $\mathcal{A}$-analytiques. Soit~$\mathcal{I}$ un faisceau coh\'erent d'id\'eaux d\'efinissant le ferm\'e analytique $Z := i(Y')$ de~$Y$. Les propositions suivantes sont \'equivalentes~:
\begin{enumerate}[i)]
\item le morphisme~$\varphi$ se factorise par~$i$, en tant que morphisme d'espaces analytiques~;
\item le faisceau~$\varphi^*(\mathcal{I})$ est nul.
\end{enumerate}
En outre, lorsqu'elles sont satisfaites, le morphisme $\varphi' \colon X \to Y'$ tel que $\varphi = i\circ \varphi'$ est unique.
\end{prop}
\begin{proof}
Dans la cat\'egorie des espaces localement annel\'es, le r\'esultat est d\'emontr\'e dans
\cite[\href{https://stacks.math.columbia.edu/tag/01HP}{lemma~01HP}]{stacks-project}. Notre r\'esultat s'en d\'eduit en utilisant le lemme~\ref{lem:immersionfermeelocann}.
\end{proof}
\begin{prop}\label{immersion_ferm\'e2}
Soit~$i \colon X \to Y$ une immersion d'espaces $\mathcal{A}$-analytiques. Les propositions suivantes sont \'equivalentes~:
\begin{enumerate}[i)]
\item le morphisme $i$ est une immersion ferm\'ee ;
\item l'image de~$i$ est ferm\'ee dans~$Y$.
\end{enumerate}
De m\^eme, les propositions suivantes sont \'equivalentes~:
\begin{enumerate}[i')]
\item le morphisme $i$ est une immersion ouverte ;
\item l'image de~$i$ est ouverte dans~$Y$.
\end{enumerate}
\end{prop}
\begin{proof}
L'implication $i) \implies ii)$ est \'evidente. D\'emontrons $ii) \implies i)$. On suit la preuve de~\cite[\href{https://stacks.math.columbia.edu/tag/01IQ}{lemma~01IQ}]{stacks-project} (pour les sch\'emas). Supposons que $i(X)$ est ferm\'ee dans~$Y$. Puisque le morphisme~$i$ est une immersion, il induit un isomorphisme avec son image et il suffit donc de montrer que $i(X)$ est un ferm\'e analytique de~$Y$. Par d\'efinition, il existe un ouvert~$U$ de~$Y$ et un faisceau d'id\'eaux coh\'erent~$\mathcal{I}$ sur~$U$ tel que $i(X)$ soit le ferm\'e analytique de~$U$ associ\'e \`a l'id\'eal~$\mathcal{I}$.
Remarquons que l'on a
\[\mathcal{I}_{|U \setminus i(X)} = \mathcal{O}_{|U \setminus i(X)} = \big(\mathcal{O}_{Y \setminus i(X)}\big)_{|U \setminus i(X)}.\]
On peut donc d\'efinir un faisceau d'id\'eaux coh\'erent~$\mathcal{J}$ en recollant~$\mathcal{I}$ et~$\mathcal{O}_{Y \setminus i(X)}$. Le faisceau~$\mathcal{O}/\mathcal{J}$ est alors support\'e sur~$U$ et sa restriction \`a~$U$ est \'egale \`a~$\mathcal{O}_{U}/\mathcal{I}$. Le r\'esultat s'en d\'eduit.
\medbreak
L'\'equivalence $i') \Longleftrightarrow ii')$ d\'ecoule directement des d\'efinitions.
\end{proof}
\begin{prop}\label{crit\`ere_local_immersion}
Soit $i \colon X \to Y$ un morphisme d'espaces $\mathcal{A}$-analytiques. Les propositions suivantes sont \'equivalentes~:
\begin{enumerate}[i)]
\item le morphisme~$i$ est une immersion~;
\item pour tout~$y\in i(X)$, il existe un voisinage ouvert~$V$ de~$y$ dans~$Y$ tel que le morphisme $i^{-1}(V) \to V$ induit par~$i$ soit une immersion ferm\'ee.
\end{enumerate}
\end{prop}
\begin{proof}
L'implication $i) \implies ii)$ d\'ecoule des d\'efinitions. D\'emontrons $ii) \implies i)$. Supposons que, pour tout $y \in i(X)$, il existe un voisinage~$V_y$ de~$y$ dans~$Y$ tel que~$i$ induise un isomorphisme entre~$i^{-1}(V_y)$ et un ferm\'e analytique~$Z_y$ de~$V_y$ d\'efini par un faisceau d'id\'eaux coh\'erent~$\mathcal{I}^y$.
Posons $U := \bigcup_{y\in i(X)} V_{y}$. C'est un ouvert de~$Y$ sur lequel les faisceaux~$\mathcal{I}^{y}$ se recollent en un faisceau d'id\'eaux coh\'erent~$\mathcal{I}$. On v\'erifie que le morphisme~$i$ induit un isomorphisme entre~$X$ et le ferm\'e analytique de~$Y$ d\'efini par~$\mathcal{I}$.
\end{proof}
\index{Immersion|)}
\chapter{Structure locale des espaces analytiques}\label{chap:structurelocale}
Dans ce chapitre, nous d\'emontrons quelques r\'esultats sur la structure locale des espaces analytiques et en tirons des applications.
Dans la section~\ref{sec:decomposition}, nous d\'emontrons, qu'au voisinage d'un point, un espace analytique peut s'\'ecrire de fa\c{c}on unique comme union de ferm\'es analytiques int\`egres en ce point (\cf~proposition~\ref{decomp}). Il s'agit d'un r\'esultat de d\'ecomposition en composantes irr\'eductibles dans lequel on ne prend en compte que l'espace topologique sous-jacent.
Dans la section~\ref{sec:critereouverture}, nous d\'emontrons un crit\`ere permettant d'assurer qu'un morphisme ouvert en un point reste ouvert au voisinage de ce point (\cf~proposition~\ref{ouvert2}).
La section~\ref{sec:normalisation} contient une variante locale du lemme de normalisation de Noether dans notre contexte (\cf~th\'eor\`eme~\ref{proj}). Elle assure qu'\'etant donn\'e un point en lequel un espace analytique est int\`egre, il existe un voisinage de ce point pouvant s'envoyer par un morphisme fini et ouvert vers un ouvert d'un espace affine.
Dans la section~\ref{sec:dimlocale}, nous d\'efinissons la dimension locale d'un espace analytique (\cf~d\'efinition~\ref{def:dimlocale}). Nous d\'emontrons, au passage, un r\'esultat d'int\'er\^et ind\'ependant~: un morphisme structural plat est ouvert (\cf~proposition~\ref{prop:morphismestructuralouvert}).
Finalement, dans les deux derni\`eres sections, nous comparons certaines propri\'et\'es d'un sch\'ema ou d'un morphisme de sch\'emas \`a celles de son analytifi\'e. Nous commen\c{c}ons, dans la section~\ref{sec:schan1}, par des r\'esultats assez directs~: pour un morphisme, surjectivit\'e, propret\'e et finitude se transf\`erent \`a l'analytifi\'e (\cf~proposition~\ref{stabilit\'e_analytification1}), pour un faisceau, analytification et image directe par un morphisme fini commutent (\cf~proposition~\ref{formule_analytification}).
La section~\ref{sec:schan2} est bas\'ee sur un r\'esultat plus difficile~: la platitude du morphisme d'analytication (\cf~th\'eor\`eme~\ref{platitude_analytification}). Nous la d\'emontrons pour une classe plus restrictive d'anneaux de base, celle des anneaux de Dedekind analytiques (\cf~d\'efinition~\ref{def:aDa}). Nos exemples usuels (corps valu\'es, anneaux d'entiers de corps de nombres, corps hybrides, anneaux de valuation discr\`ete, anneaux de Dedekind trivialement valu\'es) appartiennent \`a cette classe. Elle nous permet notamment de comparer la dimension d'un sch\'ema \`a celle de son analytifi\'e (\cf~th\'eor\`eme~\ref{thm:dimXXan}).
\medbreak
Soit $(\mathcal{A},\nm)$ un anneau de base g\'eom\'etrique.
Posons $B:=\mathcal{M}(\mathcal{A})$.
\section[Composantes irr\'eductibles]{D\'ecomposition locale des espaces analytiques en composantes irr\'eductibles}\label{sec:decomposition}
Dans cette section, nous suivons de pr\`es \cite[\S 4.1.3]{Gr-Re2}. Commen\c cons par d\'efinir les espaces analytiques localement int\`egres \'evoqu\'es dans l'introduction.
\index{Espace analytique!integre@int\`egre|(}
\begin{defi}\index{Espace analytique!integre@int\`egre|textbf}
Soit~$X$ un espace $\mathcal{A}$-analytique. On dit que l'espace~$X$ est \emph{int\`egre en un point}~$x$ de~$X$ si l'anneau local~$\mathcal{O}_{X,x}$ est int\`egre. On dit que l'espace~$X$ est \emph{localement int\`egre} s'il est int\`egre en tout point.
\end{defi}
Pour tout~$n\in\ensuremath{\mathbf{N}}$, l'espace affine~$\E{n}{\mathcal{A}}$ est localement int\`egre, puisque tous ses anneaux locaux sont r\'eguliers, d'apr\`es le th\'eor\`eme~\ref{rigide}.
\index{Espace affine analytique!integre@int\`egre}
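\`A titre d'illustration, donnons un exemple d'espace qui n'est pas int\`egre en un point ; les v\'erifications n'utilisent que le th\'eor\`eme~\ref{rigide}.
\begin{exem}
Notons~$X$ le ferm\'e analytique de~$\E{2}{\mathcal{A}}$, avec coordonn\'ees $T_{1}$ et~$T_{2}$, d\'efini par le polyn\^ome~$T_{1}T_{2}$. Soit~$x$ un point de~$X$ en lequel $T_{1}$ et~$T_{2}$ s'annulent. On v\'erifie que les images de~$T_{1}$ et de~$T_{2}$ dans~$\mathcal{O}_{X,x}$ sont non nulles ; leur produit \'etant nul, l'anneau~$\mathcal{O}_{X,x}$ n'est pas int\`egre et $X$ n'est pas int\`egre en~$x$.
\end{exem}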
\medbreak
Rappelons que l'on note~$\mathcal{I}(X)$ le faisceau d'id\'eaux des fonctions s'annulant en tout point de~$X$, $\sqrt{0}$ celui des \'el\'ements nilpotents de~$\mathcal{O}_{X}$ (\cf~notation~\ref{nota:VI}) et que ces deux faisceaux co\"incident, d'apr\`es le corollaire~\ref{cor:IVJ}.
\index{Espace analytique!irreductible@irr\'eductible|(}
\begin{defi}\index{Espace analytique!irreductible@irr\'eductible|textbf}
Soit~$X$ un espace $\mathcal{A}$-analytique. On dit que l'espace~$X$ est \emph{irr\'eductible en un point} $x$ de~$X$ si~$\mathcal{I}(X)_x$ est un id\'eal premier de~$\mathcal{O}_{X,x}$.
\end{defi}
\begin{lemm}\label{lem:integreirreductible}
Soient~$X$ un espace $\mathcal{A}$-analytique et $x \in X$. Si~$X$ est int\`egre en~$x$, alors $X$ est irr\'eductible en~$x$.
\end{lemm}
\begin{proof}
Supposons que~$X$ est int\`egre en~$x$. Alors, l'anneau~$\mathcal{O}_{X,x}$ est int\`egre et $\mathcal{I}(X)_{x} = (\sqrt{0})_{x} =0$ est un id\'eal premier de~$\mathcal{O}_{X,x}$.
\end{proof}
\begin{rema}
Il existe des espaces analytiques irr\'eductibles en un point mais non int\`egres en ce point. C'est le cas du ferm\'e analytique de~$\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}}$, avec coordonn\'ee~$T$, d\'efini par le polyn\^ome~$T^2$.
\end{rema}
Le r\'esultat qui suit permet cependant d'affirmer qu'un espace analytique irr\'eductible en un point n'est pas tr\`es loin d'\^etre int\`egre en ce point.
\begin{prop}\label{representant}
Soient~$X$ un espace $\mathcal{A}$-analytique et~$x\in X$. Supposons que~$X$ est irr\'eductible en~$x$. Alors, il existe un voisinage~$V$ de~$x$ dans~$X$ et un ferm\'e analytique~$\widetilde V$ de~$V$ de m\^eme ensemble sous-jacent tels que~$\widetilde V$ soit int\`egre en~$x$.
\end{prop}
\begin{proof}
Par hypoth\`ese, l'id\'eal~$\mathcal{I}(X)_x$ de~$\mathcal{O}_{X,x}$ est premier. D'apr\`es le th\'eor\`eme~\ref{rigide}, $\mathcal{O}_{X,x}$ est noeth\'erien, donc~$\mathcal{I}(X)_{x}$ est engendr\'e par un nombre fini d'\'el\'ements $f_{1},\dotsc,f_{n} \in\mathcal{O}_{X,x}$. Soit~$V$ un voisinage de~$x$ dans~$X$ sur lequel tous les~$f_i$ sont d\'efinis. Quitte \`a restreindre~$V$, on peut supposer que les~$f_{i}$ s'annulent en tout point de~$V$.
Notons~$\widetilde V$ le sous-espace analytique de~$V$ d\'efini par le faisceau d'id\'eaux~$\mathcal{J}$ engendr\'e par les~$f_i$. Ensemblistement, on a $\widetilde V = V$. Qui plus est, on a
\[\mathcal{O}_{\widetilde V,x} = \mathcal{O}_{X,x}/\mathcal{J}_x = \mathcal{O}_{X,x}/\mathcal{I}(X)_x,\]
donc $\widetilde V$ est int\`egre en~$x$.
\end{proof}
\index{Espace analytique!irreductible@irr\'eductible|)}
D\'emontrons maintenant un \'enonc\'e local de d\'ecomposition d'un espace analytique en ferm\'es analytiques int\`egres. Commen\c cons par rappeler un r\'esultat d'alg\`ebre commutative. Il se d\'eduit, par exemple de \cite[IV, \S 2, \no 2, th\'eor\`eme~1]{BourbakiAC14} (pour l'existence) et \cite[IV, \S 2, \no 3, proposition~4]{BourbakiAC14} (pour l'unicit\'e).
\begin{theo}\label{noeth}
Soient~$R$ un anneau noeth\'erien et~$I$ un id\'eal de~$R$ tel que~$R/I$ est r\'eduit. Alors, il existe une unique famille finie d'id\'eaux premiers~$\ensuremath{\mathfrak{p}}_1,\dotsc,\ensuremath{\mathfrak{p}}_s$ de~$R$ v\'erifiant, pour tous $i, j \in \cn{1}{s}$ avec $i\ne j$, $\ensuremath{\mathfrak{p}}_i\not\subset \ensuremath{\mathfrak{p}}_j$ et telle que
\[I=\bigcap_{i=1}^s \ensuremath{\mathfrak{p}}_i.\]
\end{theo}
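Illustrons cet \'enonc\'e par un exemple classique.
\begin{exem}
Soient $k$ un corps et $R := k[T_{1},T_{2}]$. L'anneau~$R$ est noeth\'erien et, pour $I := (T_{1}T_{2})$, le quotient~$R/I$ est r\'eduit. La d\'ecomposition fournie par le th\'eor\`eme~\ref{noeth} est alors
\[ I = (T_{1}) \cap (T_{2}),\]
les id\'eaux~$(T_{1})$ et~$(T_{2})$ \'etant premiers et aucun des deux n'\'etant contenu dans l'autre.
\end{exem}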
\begin{lemm}\label{lem:Jintegre}
Soient~$X$ un espace $\mathcal{A}$-analytique et $V$ un ferm\'e analytique de~$X$ d\'efini par un faisceau d'id\'eaux~$\mathcal{J}$ de~$\mathcal{O}_{X}$. Si~$V$ est int\`egre en un point~$x$, alors on a $\mathcal{J}_{x} = \mathcal{I}(V)_{x}$.
\end{lemm}
\begin{proof}
Soit $x\in V$ tel que~$V$ est int\`egre en~$x$. Alors, $\mathcal{J}_{x}$ est un id\'eal premier de~$\mathcal{O}_{X,x}$. D'apr\`es le corollaire~\ref{cor:IVJ}, on a donc $\mathcal{I}(V)_{x} = \sqrt{\mathcal{J}}_{x} = \mathcal{J}_{x}$.
\end{proof}
\begin{prop}\label{decomp}
Soient~$X$ un espace $\mathcal{A}$-analytique et~$x\in X$. Alors, il existe un voisinage ouvert~$V$ de~$x$ et des ferm\'es analytiques $V_{1},\dotsc,V_{s}$ de~$V$ contenant~$x$ et int\`egres en~$x$ v\'erifiant, pour tous $i, j \in \cn{1}{s}$ avec $i\ne j$, $V_{i}\not\subset V_j$ et tels que
\[V=\bigcup_{i=1}^s V_i.\]
De plus, cette d\'ecomposition est unique au sens suivant~: s'il existe un voisinage ouvert~$V'$ de~$x$ et des ferm\'es analytiques $V'_{1},\dotsc,V'_{t}$ de~$V'$ satisfaisant les m\^emes propri\'et\'es, alors on a $s=t$ et il existe un voisinage ouvert~$W$ de~$x$ dans $V\cap V'$ et une permutation~$\sigma$ de $\cn{1}{s}$ tels que
\[\forall i\in \cn{1}{s},\ V_{i} \cap W = V'_{\sigma(i)} \cap W.\]
\end{prop}
\begin{proof}
$\bullet$ \emph{Existence}
Soient $\ensuremath{\mathfrak{p}}_{1},\dotsc,\ensuremath{\mathfrak{p}}_{s}$ les id\'eaux premiers associ\'es \`a l'id\'eal~$\mathcal{I}(X)_{x}$ de~$\mathcal{O}_{X,x}$ par le th\'eor\`eme~\ref{noeth}. Pour tout $i\in \cn{1}{s}$, soient $f_{i,1},\dotsc,f_{i,k_{i}}$ des g\'en\'erateurs de~$\ensuremath{\mathfrak{p}}_{i}$. Soit~$V$ un voisinage ouvert de~$x$ sur lequel tous les~$f_{i,j}$ sont d\'efinis. Pour tout $i\in \cn{1}{s}$, on note~$\mathcal{I}_{i}$ le faisceau d'id\'eaux de~$\mathcal{O}_{V}$ engendr\'e par $f_{i,1},\dotsc,f_{i,k_{i}}$ et $V_{i}$ le ferm\'e analytique de~$V$ qu'il d\'efinit.
Pour tout $i\in \cn{1}{s}$, on a, par d\'efinition, $\mathcal{O}_{V_{i},x} = \mathcal{O}_{X,x}/\mathcal{I}_{i,x} = \mathcal{O}_{X,x}/\ensuremath{\mathfrak{p}}_{i}$, donc $V_{i}$ est int\`egre en~$x$. Pour tous $i,j \in \cn{1}{s}$ avec $i\ne j$, on a $\ensuremath{\mathfrak{p}}_{i} \not\subset \ensuremath{\mathfrak{p}}_{j}$ et on en d\'eduit que $V_{j} \not\subset V_{i}$. Quitte \`a restreindre~$V$, on peut supposer que, pour tout choix d'indices $(j_{1},\dotsc,j_{s})$ avec $j_{i}\in\cn{1}{k_{i}}$ pour tout~$i$, le produit $f_{1,j_{1}}\dotsm f_{s,j_{s}}$, qui appartient \`a $\bigcap_{i=1}^{s}\ensuremath{\mathfrak{p}}_{i} = \mathcal{I}(X)_{x}$, s'annule en tout point de~$V$. Soit $y\in V$. Si, pour tout $i\in\cn{1}{s}$, il existait $j_{i}\in\cn{1}{k_{i}}$ tel que $f_{i,j_{i}}(y)\ne 0$, le produit $f_{1,j_{1}}\dotsm f_{s,j_{s}}$ ne s'annulerait pas en~$y$. Il existe donc $i\in\cn{1}{s}$ tel que $f_{i,1},\dotsc,f_{i,k_{i}}$ s'annulent toutes en~$y$, c'est-\`a-dire $y\in V_{i}$. On a donc $V=\bigcup_{i=1}^s V_i$.
\medbreak
$\bullet$ \emph{Unicit\'e}
Soient $V'$ un voisinage ouvert de~$x$ et $V'_{1},\dotsc,V'_{t}$ v\'erifiant les propri\'et\'es de l'\'enonc\'e. Pour tout $i\in \cn{1}{t}$, notons~$\mathcal{I}'_{i}$ le faisceau d'id\'eaux de~$V'$ d\'efinissant~$V'_{i}$.
D'apr\`es le lemme~\ref{lem:Jintegre}, pour tout $i\in \cn{1}{t}$, on a $\mathcal{I}(V'_{i})_{x} = \mathcal{I}'_{i,x}$ et c'est un id\'eal premier de~$\mathcal{O}_{X,x}$. Pour tous $i,j\in \cn{1}{t}$ avec $i\ne j$, on a $\mathcal{I}(V'_{i})_{x} \not\subset \mathcal{I}(V'_{j})_{x}$. En outre, puisque $V' = \bigcup_{i=1}^t V'_i$, on a $\mathcal{I}(X)_{x} = \bigcap_{i=1}^t \mathcal{I}(V'_{i})_{x}$. Le th\'eor\`eme~\ref{noeth} assure alors que la famille $(\mathcal{I}(V'_{i})_{x})_{1\le i\le t}$ est une permutation de la famille $(\mathcal{I}(V_{i})_{x})_{1\le i\le s}$. Le r\'esultat s'en d\'eduit.
\end{proof}
\begin{defi}\label{def:composanteslocales}\index{Composantes locales|textbf}
Soient~$X$ un espace $\mathcal{A}$-analytique et~$x\in X$. Les ferm\'es analytiques $V_{1},\dotsc,V_{s}$ de la proposition~\ref{decomp} sont appel\'es \emph{composantes locales de~$X$ en~$x$}.
\end{defi}
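Donnons un exemple de composantes locales ; les v\'erifications, imm\'ediates, reposent sur le th\'eor\`eme~\ref{rigide}.
\begin{exem}
Soient~$X$ le ferm\'e analytique de~$\E{2}{\mathcal{A}}$, avec coordonn\'ees $T_{1}$ et~$T_{2}$, d\'efini par~$T_{1}T_{2}$ et $x$ un point de~$X$ en lequel $T_{1}$ et~$T_{2}$ s'annulent. On v\'erifie que l'anneau local~$\mathcal{O}_{X,x}$ est r\'eduit et que ses id\'eaux premiers minimaux sont engendr\'es par les images de~$T_{1}$ et de~$T_{2}$. Les composantes locales de~$X$ en~$x$ sont donc les ferm\'es analytiques d\'efinis, au voisinage de~$x$ dans~$X$, par~$T_{1}$ et par~$T_{2}$ respectivement.
\end{exem}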
\index{Espace analytique!integre@int\`egre|)}
\section{Un crit\`ere d'ouverture}\label{sec:critereouverture}
Dans cette section, nous d\'emontrons un r\'esultat technique permettant d'assurer qu'un morphisme fini et ouvert en un point le reste sur un voisinage. Nous suivons le raisonnement de~\cite[\S 3.3.3]{Gr-Re2}.
\begin{nota}\index{Faisceau!torsion d'un}\index{Faisceau!dual d'un}%
\nomenclature[Ec]{$\mathcal{F}^\mathrm{tors}$}{sous-faisceau des sections de torsion de~$\mathcal{F}$}%
\nomenclature[Ed]{$\mathcal{F}^\vee$}{dual de~$\mathcal{F}$}%
Soient $X$ un espace $\mathcal{A}$-analytique et~$\mathcal{F}$ un faisceau de $\mathcal{O}_{X}$-modules.
On note $\mathcal{F}^\mathrm{tors}$ le sous-faisceau de~$\mathcal{F}$ form\'e des sections de torsion.
On note $\mathcal{F}^\vee := \sHom_{\mathcal{O}_{X}}(\mathcal{F},\mathcal{O}_{X})$ le dual de~$\mathcal{F}$.
\end{nota}
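Donnons un exemple simple de ces deux constructions.
\begin{exem}
Soient $X := \E{1}{\mathcal{A}}$, avec coordonn\'ee~$T$, et $\mathcal{F} := \mathcal{O}_{X}/(T)$. On v\'erifie que, pour tout point~$x$ de~$X$, le germe de~$T$ dans~$\mathcal{O}_{X,x}$ n'est pas nul. Puisque~$T$ annule~$\mathcal{F}$, on a donc $\mathcal{F}^{\mathrm{tors}} = \mathcal{F}$. De m\^eme, pour tout~$x\in X$, l'image de tout morphisme $\mathcal{F}_{x} \to \mathcal{O}_{X,x}$ est annul\'ee par~$T$, donc nulle, l'anneau~$\mathcal{O}_{X,x}$ \'etant int\`egre. On en d\'eduit que $\mathcal{F}^{\vee} = 0$.
\end{exem}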
\begin{lemm}\label{lem:Ftors}\index{Faisceau!coherent@coh\'erent}
Soient $X$ un espace $\mathcal{A}$-analytique localement int\`egre et~$\mathcal{F}$ un faisceau coh\'erent sur~$X$. Alors on a
\[\mathcal{F}^\mathrm{tors} = \sKer(\mathcal{F} \to \mathcal{F}^{\vee\vee}).\]
En particulier, le faisceau~$\mathcal{F}^\mathrm{tors}$ est coh\'erent.
\end{lemm}
\begin{proof}
Il suffit de montrer que, pour tout $x\in X$, on a $\mathcal{F}^\mathrm{tors}_{x} = \textrm{Ker}(\mathcal{F}_{x} \to \mathcal{F}^{\vee\vee}_{x})$.
Soit $x\in X$. Soit $f\in \mathcal{F}^\mathrm{tors}_{x}$. Par d\'efinition, il existe $a\in \mathcal{O}_{X,x} \setminus \{0\}$ tel que $af = 0$ dans~$\mathcal{F}_{x}$. Soit $\varphi \in \mathcal{F}^\vee_{x}$. On a $a \varphi(f) = \varphi(af) = 0$ dans $\mathcal{O}_{X,x}$, donc $\varphi(f) = 0$ car $\mathcal{O}_{X,x}$ est int\`egre. On en d\'eduit que l'image de~$f$ dans~$\mathcal{F}^{\vee\vee}_{x}$ est nulle. Nous avons donc montr\'e que $\mathcal{F}^\mathrm{tors}_{x} \subset \textrm{Ker}(\mathcal{F}_{x} \to \mathcal{F}^{\vee\vee}_{x})$.
Soit $f\in \mathcal{F}_{x} \setminus \mathcal{F}^{\mathrm{tors}}_{x}$. Posons $\mathcal{G} := \mathcal{F}/\mathcal{F}^\mathrm{tors}$. Posons $V_{x} := (\mathcal{O}_{X,x}\setminus \{0\})^{-1} \mathcal{G}_{x}$. C'est un espace vectoriel de dimension finie sur~$\Frac{\mathcal{O}_{X,x}}$ et le morphisme naturel $\mathcal{G}_{x} \to V_{x}$ est injectif. Notons~$f'$ (resp.~$f''$) l'image de~$f$ dans~$\mathcal{G}_{x}$ (resp. $V_{x}$). L'\'el\'ement~$f''$ n'\'etant pas nul, il existe un morphisme $\Frac{\mathcal{O}_{X,x}}$-lin\'eaire $\varphi''_{x} \colon V_{x} \to \Frac{\mathcal{O}_{X,x}}$ tel que $\varphi''_{x}(f'') \ne 0$. Quitte \`a multiplier par un \'el\'ement convenable de~$\mathcal{O}_{X,x}$, on obtient un morphisme $\mathcal{O}_{X,x}$-lin\'eaire $\varphi'_{x} \colon \mathcal{G}_{x} \to \mathcal{O}_{X,x}$ tel que $\varphi'_{x}(f') \ne 0$. Le faisceau~$\mathcal{G}$ \'etant de type fini, le morphisme~$\varphi'_{x}$ est induit par un morphisme $\varphi \colon \mathcal{G} \to \mathcal{O}_{X}$ d\'efini sur un voisinage de~$x$. On en d\'eduit que $f \notin \textrm{Ker}(\mathcal{F}_{x} \to \mathcal{F}^{\vee\vee}_{x})$. Ceci cl\^ot la d\'emonstration.
\end{proof}
Venons-en maintenant au r\'esultat annonc\'e.
\begin{prop}\label{ouvert2}\index{Morphisme analytique!ouvert}\index{Espace analytique!integre@int\`egre}
Soit~$\varphi \colon X\to Y$ un morphisme fini entre espaces~$\mathcal{A}$-analytiques, avec $Y$ localement int\`egre. Soit~$x\in X$. Si~$\varphi$ est ouvert en~$x$ et~$X$ est int\`egre en~$x$, alors il existe un voisinage ouvert~$U$ de~$x$ et un voisinage ouvert~$V$ de~$\varphi(x)$ tels que~$\varphi(U)\subset V$ et le morphisme $\varphi_{U,V} \colon U\to V$ induit par~$\varphi$ soit fini et ouvert.
\end{prop}
\begin{proof}
D'apr\`es le lemme~\ref{lem:voisinagefibre}, quitte \`a restreindre~$X$, on peut supposer que $\varphi^{-1}(\varphi(x)) = \{x\}$. Posons $\mathcal{F} := \varphi_*(\mathcal{O}_X)$. D'apr\`es le th\'eor\`eme~\ref{thm:fini}, c'est un faisceau coh\'erent. D'apr\`es la proposition~\ref{prop:ouvert}, il suffit de montrer qu'il existe un voisinage ouvert~$V$ de~$\varphi(x)$ dans~$Y$ tel que pour tout~$y\in V$, $\mathcal{F}_y$ soit un~$\mathcal{O}_{Y,y}$-module sans torsion. D'apr\`es le lemme~\ref{lem:Ftors}, $\mathcal{F}^\mathrm{tors}$ est coh\'erent. Par cons\'equent, son support est ferm\'e. Il suffit donc de montrer que $\mathcal{F}_{\varphi(x)}$ est un~$\mathcal{O}_{Y,\varphi(x)}$-module sans torsion.
Montrons tout d'abord que le morphisme $\mathcal{O}_{Y,\varphi(x)} \to \mathcal{O}_{X,x}$ induit par~$\varphi^\sharp$ est injectif. Soit~$f\in \mathcal{O}_{Y,\varphi(x)}$ dont l'image~$\varphi^\sharp(f)$ dans~$\mathcal{O}_{X,x}$ est nulle. Il existe un voisinage~$U$ de~$x$ dans~$X$ tel que~$\varphi^\sharp(f)$ est nulle sur~$U$. En utilisant la proposition~\ref{prop:proprietemorphismes}, on en d\'eduit que~$f$ est nulle en tout point de~$\varphi(U)$. Puisque~$\varphi$ est ouvert en~$x$, $f$ appartient donc \`a~$\mathcal{I}(Y)_{\varphi(x)}$, donc~$f$ est nilpotente dans~$\mathcal{O}_{Y,\varphi(x)}$. Or~$Y$ est localement int\`egre donc $f$ est nulle dans~$\mathcal{O}_{Y,\varphi(x)}$.
Puisque $\varphi^{-1}(\varphi(x)) = \{x\}$, on a $\mathcal{F}_{\varphi(x)}\simeq \mathcal{O}_{X,x}$. Or, par hypoth\`ese, $\mathcal{O}_{X,x}$ est int\`egre. Le r\'esultat s'ensuit.
\end{proof}
\section{Normalisation de Noether locale}\label{sec:normalisation}
Le but de cette section est de d\'emontrer un analogue local du lemme de normalisation de Noether, valable en un point en lequel l'espace est int\`egre.
\medbreak
Rappelons la d\'efinition~\ref{def:projection} concernant les projections.
Pour tous $b\in B$ et $l \in \ensuremath{\mathbf{N}}$, notons $0_{b,l} \in \E{l}{\mathcal{A}}$ le point~0 de la fibre~$\pi_{l}^{-1}(b)$. Commen\c cons par un cas particulier.
\begin{lemm}\label{lem:finivoisinagefibre}
Soit~$X$ un espace $\mathcal{A}$-analytique. Notons $\pi \colon X \to B$ le morphisme structural. Soient~$b \in B$, $x \in \pi^{-1}(b)$, $n \in \ensuremath{\mathbf{N}}$, $V_{0}$ un voisinage ouvert de~$0_{b,n}$ dans~$\E{n}{\mathcal{A}}$ et $\psi\colon X \to V_{0}$ un morphisme fini tel que $\psi(x) = 0_{b,n}$. Alors, il existe un voisinage ouvert~$W$ de~$b$ dans~$B$, un voisinage ouvert~$U$ de~$x$ dans~$X$, un entier $l\le n$, un voisinage ouvert~$V$ de~$0_{b,l}$ dans~$\E{l}{\mathcal{A}}$ et un morphisme fini $\varphi\colon U \to V$ tels que
\begin{enumerate}[i)]
\item $\pi(U) = \pi_{l}(V) = W$ et $\pi_{l} \circ \varphi = \pi$ ;
\item $\varphi(U)$ est un voisinage de~$0_{b,l}$ dans~$\pi_{l}^{-1}(b)$.
\end{enumerate}
\end{lemm}
\begin{proof}
D\'emontrons le r\'esultat par r\'ecurrence sur~$n$. Si $n=0$, le r\'esultat est satisfait avec $l=0$ puisque $\pi_{0}^{-1}(b) = \{b\}$.
Supposons que $n\ge 1$ et que le r\'esultat est vrai pour~$n-1$. Si $\psi(X)$ est un voisinage de~$0_{b,n}$ dans~$\pi_{n}^{-1}(b)$, alors $\psi$ satisfait toutes les propri\'et\'es requises. Supposons maintenant que tel n'est pas le cas.
D'apr\`es le corollaire~\ref{cor:imagefini}, $\psi(X)$ est un ferm\'e analytique de~$V_{0}$ et $\psi(X) \cap \pi_{n}^{-1}(b)$ est strictement contenu dans $V_{0} \cap \pi_{n}^{-1}(b)$. On en d\'eduit qu'il existe un voisinage ouvert~$V$ de~$0_{b,n}$ dans~$V_{0}$ et un \'el\'ement~$f$ de~$\mathcal{O}(V)$ qui soit nul sur $V \cap \psi(X)$ mais pas identiquement nul sur $V \cap \pi_{n}^{-1}(b)$. Posons $U := \psi^{-1}(V)$ et $W:= \pi_{n}(V)$. D'apr\`es le corollaire~\ref{cor:projectionouverte}, $W$~est un ouvert de~$B$. D'apr\`es le lemme~\ref{changement_variable}, il existe un isomorphisme $\sigma \colon\pi_{n}^{-1}(W) \to \pi_{n}^{-1}(W)$ tel que $\sigma(0_{b,n}) = 0_{b,n}$ et la restriction de~$\sigma^\sharp(f)$ \`a~$\pi_{n,n-1}^{-1}(0_{b,n-1})$ ne soit pas nulle. Le point~$0_{b,n}$ est alors isol\'e dans $\sigma(\psi(U)) \cap \pi_{n,n-1}^{-1}(0_{b,n-1})$.
Posons $\varphi := \pi_{n,n-1} \circ \sigma \circ \psi$. On a alors $\varphi(x) = 0_{b,n-1}$ et le point~$x$ est isol\'e dans~$\varphi^{-1}(0_{b,n-1})$. D'apr\`es le th\'eor\`eme~\ref{thm:locfini}, le morphisme~$\varphi$ est donc fini en~$x$. Puisqu'il est \`a valeurs dans un ouvert de~$\E{n-1}{\mathcal{A}}$, l'hypoth\`ese de r\'ecurrence permet de conclure.
\end{proof}
\begin{theo}\label{proj}\index{Espace analytique!integre@int\`egre}\index{Theoreme@Th\'eor\`eme!de normalisation de Noether}\index{Lemme!de normalisation de Noether}
Soit~$X$ un espace $\mathcal{A}$-analytique. Notons $\pi \colon X \to B$ le morphisme structural. Soit~$x$ un point de~$X$ en lequel~$\mathcal{O}_{X,x}$ est int\`egre. Posons $b := \pi(x)$ et notons $\pi^\sharp_{b,x} \colon \mathcal{O}_{B,b}\to\mathcal{O}_{X,x}$ le morphisme induit par~$\pi^\sharp$.
Si $\pi^\sharp_{b,x}$ est injectif, alors il existe un voisinage ouvert~$U$ de~$x$ dans~$X$, un entier $n\in \ensuremath{\mathbf{N}}$, un ouvert~$V$ de~$\E{n}{\mathcal{A}}$ et un morphisme fini ouvert $\varphi \colon U\to V$.
Si $\pi^\sharp_{b,x}$ n'est pas injectif, alors il existe un voisinage ouvert~$U$ de~$x$ dans~$X$, un entier $n\in \ensuremath{\mathbf{N}}$, un ouvert~$V$ de~$\E{n}{\mathcal{H}(b)}$ et un morphisme fini ouvert $\varphi \colon U\to V$.
\end{theo}
\begin{proof}
Puisque la question est locale, on peut supposer que~$X$ est un ferm\'e analytique d'un ouvert~$\tilde U$ de~$\E{n}{\mathcal{A}}$. Quitte \`a permuter les variables, nous pouvons supposer qu'il existe $k\in \cn{0}{n}$ tel que $x$ soit rigide \'epais au-dessus de $x_{k} :=\pi_{n,k}(x)$ et que $x_{k}$ soit purement localement transcendant au-dessus de~$b := \pi_{n}(x)$.
Pour tout $m\ge k$, notons $0_{x_{k},m} \in \E{m}{\mathcal{A}}$ le point~0 de la fibre $\pi_{m,k}^{-1}(x_{k}) \simeq \E{m-k}{\mathcal{H}(x_{k})}$. D'apr\`es le lemme~\ref{lem:translation}, il existe un voisinage~$W$ de~$x_{k}$ dans~$\E{k}{\mathcal{A}}$, un voisinage ouvert~$U'$ de~$x$ dans~$\tilde U$, un voisinage ouvert~$V_{0}$ de~$0_{x_{k},n}$ dans~$\E{n}{\mathcal{A}}$ v\'erifiant $\pi_{n,k}(U') = \pi_{n,k}(V_{0}) = W$, ainsi qu'un morphisme fini $\psi \colon U' \to V_{0}$ tels que l'on ait $\psi^{-1}(0_{x_{k},n})=\{x\}$ et $\pi_{n,k} \circ \psi = \pi_{n,k}$.
D'apr\`es le lemme~\ref{lem:finivoisinagefibre} appliqu\'e \`a~$\psi_{|X}$, quitte \`a restreindre~$W$, on peut supposer qu'il existe un voisinage ouvert~$U$ de~$x$ dans~$X$, un entier $l\ge k$, un voisinage ouvert~$V$ de~$0_{x_{k},l}$ dans~$\E{l}{\mathcal{A}}$ et un morphisme fini $\varphi\colon U \to V$ tels que
$\pi_{n,k}(U) = \pi_{l,k}(V) = W$, $\pi_{l,k} \circ \varphi = \pi_{n,k}$ et $\varphi(U)$ est un voisinage de~$0_{x_{k},l}$ dans~$\pi_{l,k}^{-1}(x_{k})$. En outre, en utilisant le lemme~\ref{lem:voisinagefibre}, on montre que, quitte \`a restreindre~$U$, on peut supposer que $\varphi^{-1}(0_{x_{k},l}) = \{x\}$. Posons $y := 0_{x_{k},l}$.
D'apr\`es le th\'eor\`eme~\ref{thm:fini}, le faisceau $\varphi_{\ast} \mathcal{O}_{U}$ est coh\'erent. Notons~$\mathcal{I}$ le noyau du morphisme $\varphi^\sharp \colon \mathcal{O}_{V} \to \varphi_{\ast} \mathcal{O}_{U}$. L'ensemble~$\varphi(U)$ est l'ensemble sous-jacent au ferm\'e analytique de~$V$ d\'efini par~$\mathcal{I}$. Nous consid\'ererons d\'esormais~$\varphi(U)$ comme un espace analytique muni de cette structure.
Remarquons que~$\mathcal{I}_{y}$ est un id\'eal premier de~$\mathcal{O}_{V,y}$. En effet, puisque $\varphi^{-1}(y) = \{x\}$, $\mathcal{I}_{y}$ est le noyau du morphisme $\mathcal{O}_{V,y}\to\mathcal{O}_{U,x}$ induit par~$\varphi^\sharp$. On en d\'eduit que $\mathcal{O}_{\varphi(U),y}\simeq\mathcal{O}_{V,y}/\mathcal{I}_{y}$ s'injecte dans~$\mathcal{O}_{U,x}$, qui est suppos\'e int\`egre. Le r\'esultat s'ensuit.
Nous allons \`a pr\'esent traiter s\'epar\'ement les deux cas pr\'esent\'es dans l'\'enonc\'e.
\smallbreak
$\bullet$ Supposons que le morphisme~$\pi^\sharp_{b,x}$ est injectif.
Puisque~$U$ est int\`egre en~$x$ et que~$V$ est localement int\`egre, d'apr\`es la proposition~\ref{ouvert2}, il suffit de montrer que~$\varphi$ est ouvert en~$x$.
Puisqu'on a $\varphi^{-1}(y) = \{x\}$, d'apr\`es le lemme~\ref{lem:voisinagefibre}, pour tout voisinage~$U'$ de~$x$, $\varphi(U')$ est un voisinage de~$y$ dans~$\varphi(U)$. Il suffit donc de montrer que~$\varphi(U)$ est un voisinage de~$y$ dans~$V$.
Raisonnons par l'absurde et supposons que~$\varphi(U)$ n'est pas un voisinage de~$y$ dans~$V$. Dans ce cas, $\mathcal{I}_{y}$ n'est pas nul. Soit $f \in \mathcal{I}_{y} \setminus\{0\}$.
Supposons que~$\mathcal{O}_{B,b}$ est un corps fort. D'apr\`es le th\'eor\`eme~\ref{rigide}, $\mathcal{O}_{\E{k}{\mathcal{A}},x_k}$ est encore un corps fort. D'apr\`es le lemme~\ref{restriction_fibre}, l'image de~$f$ dans~$\mathcal{O}_{\pi_{l,k}^{-1}(x_k),y}$ n'est pas nulle. Or l'image de~$f$ dans~$\mathcal{O}_{\varphi(U),y}$ est nulle et $\varphi(U)$ est un voisinage de~$y$ dans $\pi_{l,k}^{-1}(x_k)$. On aboutit \`a une contradiction.
Supposons que $\mathcal{O}_{B,b}$ est un anneau de valuation discr\`ete. Fixons-en une uniformisante~$\pi_{b}$. D'apr\`es le th\'eor\`eme~\ref{rigide}, $\mathcal{O}_{\E{k}{\mathcal{A}},x_k}$ est encore un anneau fortement de valuation discr\`ete d'uniformisante~$\pi_{b}$. D'apr\`es le lemme~\ref{restriction_fibre_avd}, il existe $v\in \ensuremath{\mathbf{N}}$ et $g\in\mathcal{O}_{V,y}$ tels que la restriction de~$g$ \`a~$\pi_{l,k}^{-1}( x_{k})$ ne soit pas nulle et $f=\pi_b^v g$ dans~$\mathcal{O}_{V,y}$.
Puisque le morphisme $\pi^\sharp_{b,x}\colon \mathcal{O}_{B,b}\to\mathcal{O}_{X,x}$ est injectif, le morphisme $\mathcal{O}_{B,b}\to\mathcal{O}_{\varphi(U),y} \simeq\mathcal{O}_{V,y}/\mathcal{I}_{y}$ l'est aussi. Par cons\'equent, l'image de~$\pi_{b}$ dans $\mathcal{O}_{V,y}$ n'appartient pas \`a~$\mathcal{I}_{y}$. Puisque~$\mathcal{I}_{y}$ est premier, on en d\'eduit que $g\in \mathcal{I}_{y}$. On en d\'eduit une contradiction, comme pr\'ec\'edemment.
\smallbreak
$\bullet$ Supposons que le morphisme~$\pi^\sharp_{b,x}$ n'est pas injectif.
Ce cas ne peut se produire que lorsque~$\mathcal{O}_{B,b}$ est un anneau de valuation discr\`ete. Fixons une uniformisante~$\pi_{b}$ de~$\mathcal{O}_{B,b}$. Notons~$V_{b}$ le ferm\'e analytique de~$V$ d\'efini par~$\pi_{b}$. Rappelons que le morphisme $\mathcal{O}_{\varphi(U),y} \simeq \mathcal{O}_{V,y}/\mathcal{I}_{y} \to\mathcal{O}_{U,x}$ induit par~$\varphi^\sharp$ est injectif. Puisque $\pi^\sharp_{b,x}\colon \mathcal{O}_{B,b}\to\mathcal{O}_{U,x}$ n'est pas injectif, l'image de~$\pi_{b}$ dans~$\mathcal{O}_{V,y}$ appartient \`a~$\mathcal{I}_{y}$. Par cons\'equent, quitte \`a r\'etr\'ecir~$U$ et~$V$, on peut supposer que~$\varphi(U)$ est un ferm\'e analytique de~$V_b$. Le morphisme~$\varphi$ se factorise donc par un morphisme $\varphi' \colon U\to V_b$. On peut alors appliquer \`a~$\varphi'$ le m\^eme raisonnement que pr\'ec\'edemment, dans le cas des corps forts, pour montrer qu'il est ouvert en~$x$. On conclut alors par la proposition~\ref{ouvert2}.
\end{proof}
\begin{rema}
Dans le cas classique des espaces sur un corps valu\'e, on peut d\'emontrer un r\'esultat similaire \`a celui du th\'eor\`eme~\ref{proj} sans condition sur l'espace~$X$ et construire un morphisme ouvert en~$x$ (qui sera donc ouvert au voisinage de~$x$ si~$X$ est irr\'eductible, par exemple). Notre preuve fournit d'ailleurs ce r\'esultat lorsque~$\mathcal{O}_{B,b}$ est un corps.
En revanche, cet \'enonc\'e plus g\'en\'eral tombe en d\'efaut lorsque~$\mathcal{O}_{B,b}$ est un anneau de valuation discr\`ete, comme on le v\'erifie sur l'exemple du ferm\'e analytique~$X$ de~$\E{1}{\ensuremath{\mathbf{Z}}}$, avec coordonn\'ee~$T$, d\'efini par l'\'equation $2T=0$ et du point $x$ d\'efini par $T(x) = 2(x) = 0$.
\end{rema}
\begin{rema}
L'entier~$n$ pr\'esent dans l'\'enonc\'e du th\'eor\`eme~\ref{proj} ne d\'epend que de~$x$. Pour s'en convaincre, il suffit de remarquer que, dans tous les cas, on a un morphisme fini et ouvert $U \cap \pi^{-1}(b) \to \E{n}{\mathcal{H}(b)}$.
On est alors ramen\'es au cas classique des espaces sur un corps valu\'e complet et on v\'erifie que l'entier~$n$ n'est autre que la dimension de $U \cap \pi^{-1}(b)$ en~$x$.
\end{rema}
\section{Dimension locale d'un espace analytique}\label{sec:dimlocale}
Le but de cette section est de d\'efinir la dimension d'un espace analytique en un point.
Commen\c cons par d\'emontrer un r\'esultat utile d'ouverture des morphismes plats. Nous remercions Mattias Jonsson qui nous a sugg\'er\'e de l'inclure.
\begin{prop}\label{prop:morphismestructuralouvert}\index{Morphisme analytique!plat}\index{Morphisme analytique!ouvert}\index{Morphisme!structural}
Soit~$X$ un espace~$\mathcal{A}$-analytique. Notons $\pi \colon X \to B$ le morphisme structural. Soit $x\in X$. Si le morphisme $\mathcal{O}_{B,\pi(x)} \to \mathcal{O}_{X,x}$ induit par~$\pi^\sharp$ est plat, alors le morphisme~$\pi$ est ouvert au point~$x$.
\end{prop}
\begin{proof}
Supposons que le morphisme $\mathcal{O}_{B,\pi(x)} \to \mathcal{O}_{X,x}$ induit par~$\pi^\sharp$ est plat. Soit~$U$ un voisinage ouvert de~$x$ dans~$X$. Posons $b:=\pi(x)$. Montrons que~$\pi(U)$ est un voisinage de~$b$.
D'apr\`es la proposition~\ref{decomp}, quitte \`a r\'etr\'ecir le voisinage~$U$, on peut supposer qu'il s'\'ecrit sous la forme $U = \bigcup_{i=1}^s U_{i}$, o\`u les~$U_{i}$ sont des ferm\'es analytiques de~$U$ contenant~$x$ et int\`egres en~$x$. Il suffit de montrer qu'il existe $i\in \cn{1}{s}$ tel que $\pi(U_{i})$ soit un voisinage de~$b$.
Montrons, tout d'abord, qu'il existe $i\in \cn{1}{s}$ tel que le morphisme naturel $\mathcal{O}_{B,b}\to\mathcal{O}_{U_i,x}$ soit injectif. Si~$\mathcal{O}_{B,b}$ est un corps, le r\'esultat est \'evident. Supposons donc que~$\mathcal{O}_{B,b}$ est un anneau fortement de valuation discr\`ete. Soit~$\varpi_{b}$ une uniformisante de cet anneau. Il suffit de montrer qu'il existe $i\in \cn{1}{s}$ tel que l'image de~$\varpi_{b}$ dans $\mathcal{O}_{U_i,x}$ ne soit pas nulle. Supposons, par l'absurde, que tel ne soit pas le cas. Alors $\varpi_{b}$ s'annule en tout point d'un voisinage de~$x$ dans~$U$, donc son image dans~$\mathcal{O}_{U,x}$ est nilpotente, d'apr\`es le corollaire~\ref{cor:nilpotent}. Ceci contredit la platitude du morphisme $\mathcal{O}_{B,b}\to\mathcal{O}_{U,x}$.
Soit~$i_0 \in \cn{1}{s}$ tel que le morphisme $\mathcal{O}_{B,b}\to\mathcal{O}_{U_{i_0},x}$ est injectif. Puisque~$U_{i_0}$ est int\`egre en~$x$, le th\'eor\`eme~\ref{proj} assure que, quitte \`a r\'etr\'ecir~$U_{i_0}$, il existe un morphisme fini ouvert $\varphi \colon U_{i_0}\to V$, o\`u~$V$ est un ouvert d'un espace affine analytique~$\E{n}{\mathcal{A}}$. Or, d'apr\`es le corollaire~\ref{cor:projectionouverte}, le morphisme de projection $\E{n}{\mathcal{A}}\to B$ est ouvert. Le r\'esultat s'en d\'eduit.
\end{proof}
Venons-en maintenant aux questions de dimension. Nous nous permettrons d'utiliser la th\'eorie classique de la dimension des espaces analytiques sur un corps valu\'e complet.\index{Dimension!en un point|see{Espace analytique}}\index{Espace analytique!dimension d'un|(}
\begin{defi}\index{Espace analytique!dimension d'un|textbf}%
\nomenclature[Ksa]{$\dim_{x}(X/\mathcal{M}(\mathcal{A}))$}{dimension relative d'un espace $\mathcal{A}$-analytique~$X$ en un point~$x$}%
Soit~$X$ un espace $\mathcal{A}$-analytique. Notons $\pi \colon X \to B$ le morphisme structural. Soit $x\in X$ tel que $X$ soit int\`egre en~$x$. On appelle \emph{dimension relative de~$X$ en~$x$} la quantit\'e
\[ \dim_{x}(X/B) := \dim_{x}\big(X \cap \pi^{-1}(\pi(x))\big).\]
\end{defi}
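Donnons un exemple, qui repose sur la th\'eorie classique de la dimension rappel\'ee ci-dessus.
\begin{exem}
Soient $n\in\ensuremath{\mathbf{N}}$ et $X := \E{n}{\mathcal{A}}$, de morphisme structural~$\pi$ (l'espace~$X$ est int\`egre en tout point, d'apr\`es le th\'eor\`eme~\ref{rigide}). Pour tout point~$x$ de~$X$, la fibre $\pi^{-1}(\pi(x))$ s'identifie \`a l'espace affine $\E{n}{\mathcal{H}(\pi(x))}$ et l'on a donc $\dim_{x}(X/B) = n$.
\end{exem}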
Avant de d\'efinir la dimension locale d'un espace, introduisons encore une nouvelle notion.
\begin{defi}\label{def:vertical}\index{Espace analytique!vertical en un point|textbf}
Soit~$X$ un espace $\mathcal{A}$-analytique. Notons $\pi \colon X \to B$ le morphisme structural. Soit $x\in X$. On dit que l'espace~$X$ est \emph{vertical en~$x$} si~$\pi^{-1}(\pi(x))$ est un voisinage de~$x$ dans~$X$.
\end{defi}
\begin{nota}%
\nomenclature[Ksb]{$v_{x}(X)$}{$=0$ si $X$ est vertical en~$x$, $1$ sinon}%
Soient~$X$ un espace $\mathcal{A}$-analytique et $x\in X$. On pose
\[ v_{x}(X) :=
\begin{cases}
0 \textrm{ si } X \textrm{ est vertical en } x~;\\
1 \textrm{ sinon.}
\end{cases}\]
\end{nota}
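Donnons un exemple d'espace vertical en chacun de ses points, sous les hypoth\`eses indiqu\'ees.
\begin{exem}
Soit~$b$ un point de~$B$ tel que $\mathcal{O}_{B,b}$ soit un anneau de valuation discr\`ete, d'uniformisante~$\varpi_{b}$. Supposons qu'il existe un voisinage ouvert~$V$ de~$b$ sur lequel~$\varpi_{b}$ est d\'efinie et tel que le lieu des z\'eros de~$\varpi_{b}$ dans~$V$ soit le singleton~$\{b\}$. Soit $n\in\ensuremath{\mathbf{N}}$. Le ferm\'e analytique~$X$ de~$\pi_{n}^{-1}(V)$ d\'efini par la fonction $\pi_{n}^\sharp(\varpi_{b})$ a pour ensemble sous-jacent $\pi_{n}^{-1}(b)$ ; il est donc vertical en chacun de ses points.
\end{exem}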
\begin{lemm}\label{lem:vertical}\index{Morphisme analytique!plat}\index{Morphisme!structural}
Soit~$X$ un espace $\mathcal{A}$-analytique. Notons $\pi \colon X \to B$ le morphisme structural. Soit $x\in X$ tel que $X$ soit irr\'eductible en~$x$. Posons $b:= \pi(x)$. Notons $\pi_{b,x}^\sharp \colon \mathcal{O}_{B,b} \to \mathcal{O}_{X,x}$ le morphisme induit par~$\pi^\sharp$.
Si $b$ est ouvert dans~$B$, alors $\mathcal{O}_{B,b}$ est un corps et~$X$ est vertical en~$x$.
Si $b$ n'est pas ouvert dans~$B$, les propositions suivantes sont \'equivalentes~:
\begin{enumerate}[i)]
\item $X$ est vertical en~$x$~;
\item le morphisme $\pi_{b,x}^\sharp$ n'est pas plat~;
\item le morphisme $\pi_{b,x}^\sharp$ n'est pas injectif~;
\item $\mathcal{O}_{B,b}$ est un anneau de valuation discr\`ete dont une (et donc toute) uniformisante est nulle dans~$\mathcal{O}_{X,x}$.
\end{enumerate}
\end{lemm}
\begin{proof}
La partie de l'\'enonc\'e concernant le cas o\`u~$b$ est ouvert dans~$B$ est imm\'ediate. Supposons donc que tel n'est pas le cas.
$i) \implies ii)$ Supposons, par l'absurde, que $\pi_{b,x}^\sharp$ est plat. Alors, d'apr\`es la proposition~\ref{prop:morphismestructuralouvert}, $\pi(\pi^{-1}(b)) = \{b\}$ est un voisinage de~$b$ et on aboutit \`a une contradiction.
$ii) \implies iii)$ L'hypoth\`ese force~$\mathcal{O}_{B,b}$ \`a \^etre un anneau de valuation discr\`ete. Notons~$\varpi_{b}$ une uniformisante. Par hypoth\`ese, il existe un entier~$n\ge 1$ et un \'el\'ement $f$ de~$\mathcal{O}_{X,x}$ non nul tels que $\varpi_{b}^n f = 0$. En d'autres termes, $\varpi_{b}^n f \in \mathcal{I}(X)_{x}$. Or $\mathcal{I}(X)_{x}$ est un id\'eal premier de~$\mathcal{O}_{X,x}$, donc $\varpi_{b}^n =0$ dans~$\mathcal{O}_{X,x}$. On en d\'eduit que $\pi_{b,x}^\sharp$ n'est pas injectif.
$iii) \implies iv)$ C'est imm\'ediat.
$iv) \implies i)$ Soit~$\varpi_{b}$ une uniformisante de~$\mathcal{O}_{B,b}$. Il existe un voisinage ouvert~$V$ de~$b$ sur lequel~$\varpi_{b}$ est d\'efinie et tel que le ferm\'e analytique d\'efini par~$\varpi_{b}$ soit le singleton~$\{b\}$. Puisque l'image de~$\varpi_{b}$ dans~$\mathcal{O}_{X,x}$ est nulle, le ferm\'e analytique d\'efini par~$\varpi_{b}$ dans~$\pi^{-1}(V)$ est un voisinage de~$x$ dans~$X$. Or, ce ferm\'e n'est autre que~$\pi^{-1}(b)$.
\end{proof}
\begin{defi}\label{def:dimlocale}\index{Espace analytique!dimension d'un|textbf}%
\nomenclature[Ksc]{$\dim_{x}(X)$}{dimension de~$X$ en~$x$}%
Soit~$X$ un espace $\mathcal{A}$-analytique. Soit $x\in X$ tel que $X$ soit irr\'eductible en~$x$. On appelle \emph{dimension de~$X$ en~$x$} la quantit\'e
\[ \dim_{x}(X) := \dim_{x}(X/B) + v_{x}(X).\]
Soit~$x\in X$. D\'ecomposons~$X$ au voisinage de~$x$ sous la forme $X = \bigcup_{i=1}^s V_{i}$, o\`u $V_{1},\dotsc,V_{s}$ sont int\`egres en~$x$ comme dans la proposition~\ref{decomp}. On appelle \emph{dimension de~$X$ en~$x$} la quantit\'e
\[ \dim_{x}(X) := \max_{1\le i\le s} \dim_{x}(V_{i}).\]
\end{defi}
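Donnons un exemple de calcul de dimension.
\begin{exem}
Soient $n\in\ensuremath{\mathbf{N}}$, $X := \E{n}{\mathcal{A}}$, de morphisme structural~$\pi$, et $x$ un point de~$X$ tel que $b := \pi(x)$ ne soit pas ouvert dans~$B$ (rappelons que~$X$ est int\`egre en~$x$ d'apr\`es le th\'eor\`eme~\ref{rigide}). La fibre $\pi^{-1}(b)$ s'identifie \`a $\E{n}{\mathcal{H}(b)}$, donc $\dim_{x}(X/B) = n$. D'apr\`es le corollaire~\ref{cor:projectionouverte}, le morphisme~$\pi$ est ouvert, donc $\pi^{-1}(b)$ n'est pas un voisinage de~$x$ dans~$X$ et $v_{x}(X) = 1$. On en d\'eduit que
\[ \dim_{x}(X) = \dim_{x}(X/B) + v_{x}(X) = n+1.\]
\end{exem}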
\index{Espace analytique!dimension d'un|)}
\section[D'un sch\'ema \`a son analytifi\'e~1]{D'un sch\'ema \`a son analytifi\'e~: premi\`eres propri\'et\'es}\label{sec:schan1}
\index{Analytification!|(}
Dans cette section, nous nous int\'eressons aux propri\'et\'es d'un sch\'ema localement de pr\'esentation finie sur~$\mathcal{A}$ qui sont pr\'eserv\'ees lorsque l'on passe \`a son analytifi\'e.
\medbreak
Rappelons qu'\`a tout sch\'ema localement de pr\'esentation finie~$\mathcal{X}$ sur~$\mathcal{A}$, on associe un espace $\mathcal{A}$-analytique~$\mathcal{X}^\an$ et un morphisme d'espaces localement $\mathcal{A}$-annel\'es $\rho_{\mathcal{X}} \colon \mathcal{X}^\an \to \mathcal{X}$ (\cf~d\'efinition~\ref{def:analytification}).
\begin{lemm}\label{lem:Pnancompact}\index{Espace analytique!projectif}
Pour tout $n\in \ensuremath{\mathbf{N}}$, l'espace $\EP{n}{\mathcal{A}}$ est compact.
\end{lemm}
\begin{proof}
Il suit de la construction de l'analytifi\'e (\cf~th\'eor\`eme~\ref{thm:analytification}) que $\EP{n}{\mathcal{A}}$ peut \^etre construit \`a partir d'espaces affines~$\E{n}{\mathcal{A}}$ par la proc\'edure de recollement habituelle. Il peut \'egalement \^etre construit en recollant des disques unit\'es ferm\'es, qui sont compacts. Le r\'esultat s'ensuit.
\end{proof}
\begin{lemm}\label{lem:imageanalytification}\index{Morphisme analytique!image d'un}
Soient $\mathcal{X}$ et~$\mathcal{Y}$ des sch\'emas localement de pr\'esentation finie sur~$\mathcal{A}$ et $\varphi\colon \mathcal{X}\to \mathcal{Y}$ un morphisme de sch\'emas.
Alors, on a
\[ \rho_{\mathcal{Y}}(\varphi^\an(\mathcal{X}^\an)) = \varphi(\mathcal{X}).\]
\end{lemm}
\begin{proof}
Soit~$y\in \mathcal{Y}^{an}$. Munissons~$(\varphi^\an)^{-1}(y)$ de la structure d'espace $\mathcal{H}(y)$-analytique induite par l'hom\'eomorphisme
\[ (\varphi^\an)^{-1}(y) \simeq (\mathcal{X}^\an \ho{\mathcal{A}} \mathcal{H}(y)) \times_{\mathcal{Y}^\an \ho{\mathcal{A}} \mathcal{H}(y)} \mathcal{M}(\mathcal{H}(y))\]
fourni par la proposition~\ref{preimage}. Il d\'ecoule alors des propri\'et\'es universelles (\cf~d\'efinitions~\ref{def:analytification} et~\ref{def:extensionscalaire}) qu'on a un isomorphisme canonique entre espaces $\mathcal{H}(y)$-analytiques
\[(\mathcal{X}_{\rho_Y(y)}\otimes_{\kappa(\rho_Y(y))}\mathcal{H}(y))^{an} \simeq (\varphi^\an)^{-1}(y).\]
En particulier, $\varphi^{-1}(\rho_{\mathcal{Y}}(y)) \simeq \mathcal{X}_{\rho_Y(y)}$ est vide si, et seulement si, $(\varphi^\an)^{-1}(y)$ l'est. Le r\'esultat s'en d\'eduit.
\end{proof}
\begin{prop}\label{stabilit\'e_analytification1}\index{Morphisme analytique!surjectif}\index{Morphisme analytique!propre}\index{Morphisme analytique!fini}\index{Immersion}
Soit $\varphi\colon \mathcal{X}\to \mathcal{Y}$ un morphisme de sch\'emas localement de pr\'esentation finie sur~$\mathcal{A}$. Si le morphisme~$\varphi$ est (1) une immersion (resp. une immersion ferm\'ee, resp. une immersion ouverte), (2) surjectif, (3) propre, (4) fini, alors le morphisme~$\varphi^{an}$ poss\`ede la m\^eme propri\'et\'e.
\end{prop}
\begin{proof}
(1) Les trois propri\'et\'es d\'ecoulent de la construction de l'analytifi\'e (\cf~th\'eor\`eme~\ref{thm:analytification}).
(2) Cette propri\'et\'e d\'ecoule du lemme~\ref{lem:imageanalytification}.
(3) Supposons que~$\varphi$ est propre. D'apr\`es le lemme de Chow (\cf~\cite[corollaire~5.6.2]{EGAII}), il existe un $\mathcal{Y}$-sch\'ema projectif~$\mathcal{X}'$ et un morphisme de $\mathcal{Y}$-sch\'emas $\psi \colon \mathcal{X}'\to \mathcal{X}$ projectif et surjectif. D'apr\`es (2), $\psi^\an$ est surjectif. D'apr\`es le lemme~\ref{lem:compositionpropre}, il suffit donc de montrer le r\'esultat pour les morphismes projectifs.
Supposons que~$\varphi$ est projectif. Puisque la propret\'e est une notion locale au but, quitte \`a r\'etr\'ecir~$\mathcal{X}$ et~$\mathcal{Y}$, on peut supposer que~$\varphi$ s'\'ecrit comme la composition d'une immersion ferm\'ee $\mathcal{X}\to \P^n_\mathcal{A}\times_{\mathcal{A}} \mathcal{Y}$ et de la projection $\P^n_{\mathcal{A}}\times_{\mathcal{A}} \mathcal{Y}\to \mathcal{Y}$. Remarquons qu'il d\'ecoule imm\'ediatement des propri\'et\'es universelles que l'analytification commute au produit. D'apr\`es~(1), $\mathcal{X}^\an\to \EP{n}{\mathcal{A}}\times_{\mathcal{A}} \mathcal{Y}^\an$ est une immersion ferm\'ee, donc un morphisme propre. D'apr\`es le lemme~\ref{lem:Pnancompact}, $\EP{n}{\mathcal{A}}$ est compact, donc la projection $\EP{n}{\mathcal{A}} \to\mathcal{M}(\mathcal{A})$ est propre. D'apr\`es la proposition~\ref{stabilite_fini}, la projection $\EP{n}{\mathcal{A}}\times_{\mathcal{A}} \mathcal{Y}^\an\to \mathcal{Y}^\an$ l'est encore. On conclut en utilisant le fait que la compos\'ee de deux morphismes propres est propre (\cf~lemme~\ref{lem:compositionpropre}).
(4) Supposons que~$\varphi$ est un morphisme fini. Alors, d'apr\`es~(3), $\varphi^{an}$ est propre. Il suffit donc de montrer que les fibres de~$\varphi^\an$ sont finies. Cela d\'ecoule de l'isomorphisme qui figure dans la preuve du lemme~\ref{lem:imageanalytification}.
\end{proof}
\begin{rema}
Il est tr\`es plausible que les implications d\'emontr\'ees \`a la proposition~\ref{stabilit\'e_analytification1} soient en fait des \'equivalences. Nous n'en dirons pas plus ici.
\end{rema}
Nous allons maintenant d\'emontrer des r\'esultats sur les faisceaux.
\begin{defi}\index{Analytification!d'un faisceau|textbf}\index{Faisceau!analytifie@analytifi\'e|see{Analytification}}%
\nomenclature[Ee]{$\mathcal{F}^\an$}{analytifi\'e d'un faisceau coh\'erent~$\mathcal{F}$ sur un sch\'ema}%
Soit~$\mathcal{X}$ un sch\'ema localement de pr\'esentation finie sur~$\mathcal{A}$. Pour tout faisceau de $\mathcal{O}_{\mathcal{X}}$-modules~$\mathcal{F}$, on appelle \emph{analytification} ou \emph{analytifi\'e} de~$\mathcal{F}$ le faisceau de $\mathcal{O}_{\mathcal{X}^\an}$-modules d\'efini par
\[ \mathcal{F}^\an := \rho_{\mathcal{X}}^\ast \mathcal{F}.\]
\end{defi}
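Donnons un exemple imm\'ediat.
\begin{exem}
Pour $\mathcal{F} = \mathcal{O}_{\mathcal{X}}$, on a $\mathcal{F}^\an = \rho_{\mathcal{X}}^\ast \mathcal{O}_{\mathcal{X}} = \mathcal{O}_{\mathcal{X}^\an}$ : l'analytifi\'e du faisceau structural de~$\mathcal{X}$ est le faisceau structural de~$\mathcal{X}^\an$.
\end{exem}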
\begin{lemm}\index{Faisceau!coherent@coh\'erent}
Soit~$\mathcal{X}$ un sch\'ema localement de pr\'esentation finie sur~$\mathcal{A}$. Soit~$\mathcal{F}$ un faisceau de $\mathcal{O}_{\mathcal{X}}$-modules. Si $\mathcal{F}$ est de type fini (resp. coh\'erent), alors~$\mathcal{F}^\an$ est de type fini (resp. coh\'erent).
\end{lemm}
\begin{proof}
Le foncteur $\mathcal{F} \mapsto \mathcal{F}^\an$ est exact \`a droite, donc il pr\'eserve les faisceaux de type fini. D'apr\`es le th\'eor\`eme~\ref{coherent}, le faisceau structural $\mathcal{O}_{\mathcal{X}^\an}$ est coh\'erent, donc les faisceaux coh\'erents sont exactement ceux de pr\'esentation finie. On en d\'eduit qu'ils sont \'egalement pr\'eserv\'es.
\end{proof}
\begin{prop}\label{formule_analytification}\index{Morphisme analytique!fini!image d'un faisceau par un}
Soit~$\varphi \colon \mathcal{X}\to \mathcal{Y}$ un morphisme fini de sch\'emas localement de pr\'esentation finie sur~$\mathcal{A}$. Soit~$\mathcal{F}$ un faisceau de $\mathcal{O}_{\mathcal{X}}$-modules. Alors le morphisme naturel
\[(\varphi_*\mathcal{F})^{an}\to (\varphi^{an})_*\mathcal{F}^{an}\]
est un isomorphisme.
\end{prop}
\begin{proof}
Consid\'erons le diagramme commutatif
\[\begin{tikzcd}
\mathcal{X}^{\an} \ar[d, "\varphi^{\an}"] \ar[r, "\rho_{\mathcal{X}}"] &\mathcal{X} \ar[d,"\varphi"] \\
\mathcal{Y}^\an \ar[r, "\rho_{\mathcal{Y}}"] &\mathcal{Y}.
\end{tikzcd}\]
Par le m\^eme raisonnement que dans la preuve du lemme~\ref{lem:cbflocal}, on v\'erifie qu'il suffit de d\'emontrer l'\'enonc\'e suivant, que l'on appellera \emph{\'enonc\'e local}~: pour tout $u\in \mathcal{X}$ et tout $y\in \mathcal{Y}^\an$ tels que $\varphi(u) = \rho_{\mathcal{Y}}(y)$, le morphisme naturel
\[\mathcal{O}_{\mathcal{X},u}\otimes_{\mathcal{O}_{\mathcal{Y},\varphi(u)}}\mathcal{O}_{\mathcal{Y}^\an,y} \to \prod_{x\in\rho_{\mathcal{X}}^{-1}(u) \cap (\varphi^\an)^{-1}(y)}\mathcal{O}_{\mathcal{X}^\an, x}\]
est un isomorphisme.
La question \'etant locale sur~$\mathcal{Y}$, on peut supposer que~$\mathcal{Y}$ est affine. Puisque~$\varphi$ est fini, $\mathcal{X}$ est \'egalement affine. Il existe donc des $\mathcal{A}$-alg\`ebres~$A$ et~$B$ telles que $\mathcal{X}= \Spec(A)$ et $\mathcal{Y} = \Spec(B)$. En outre, $\varphi^\sharp \colon B \to A$ munit $A$ d'une structure de $B$-module de type fini. Soit~$B[T_1,\dotsc,T_k]\to A$ un morphisme de $B$-alg\`ebres surjectif. Montrons l'\'enonc\'e souhait\'e par r\'ecurrence sur~$k$.
Supposons que~$k=0$. Alors $\varphi$ est une immersion ferm\'ee et il en va de m\^eme pour~$\varphi^\an$, d'apr\`es la proposition \ref{stabilit\'e_analytification1}. Soient $y \in \mathcal{Y}^\an$ et $u\in \mathcal{X}$ tels que $\varphi(u) = \rho_{\mathcal{Y}}(y)$. D'apr\`es le lemme~\ref{lem:imageanalytification}, on a $y\in \varphi^{\an}(\mathcal{X}^\an)$. Notons~$x\in \mathcal{X}^\an$ l'unique point tel que $\varphi^\an(x)=y$. On a alors $\rho_{\mathcal{X}}(x) = u$. Il d\'ecoule de la construction de l'analytifi\'e (\cf~th\'eor\`eme~\ref{thm:analytification}) que si~$\mathcal{X}$ est d\'efini par un faisceau d'id\'eaux~$\mathcal{I}$ de~$\mathcal{Y}$, alors $\mathcal{X}^\an$ est d\'efini par le faisceau d'id\'eaux~$\mathcal{I}\otimes_{\mathcal{O}_{\mathcal{Y}}} \mathcal{O}_{\mathcal{Y}^\an}$. On en d\'eduit que
\[ \mathcal{O}_{\mathcal{X},u} \otimes_{\mathcal{O}_{\mathcal{Y},\rho_{\mathcal{Y}}(y)}} \mathcal{O}_{\mathcal{Y}^\an,y} \simeq \mathcal{O}_{\mathcal{X}^\an,x} ,\]
ce qui n'est autre que l'\'enonc\'e local souhait\'e.
\medbreak
Supposons que $k\ge 1$ et que l'\'enonc\'e est vrai pour~$k-1$. Notons $A_{k-1}$ l'image de $B[T_{1},\dotsc,T_{k-1}]$ par la surjection $B[T_1,\dotsc,T_k]\to A$ et posons $\mathcal{X}_{k-1} := \Spec(A_{k-1})$. L'inclusion~$A_{k-1}\to A$ induit un morphisme de sch\'emas $\psi \colon \mathcal{X}\to \mathcal{X}_{k-1}$ et le morphisme~$\varphi$ se factorise sous la forme $\varphi = \psi' \circ \psi $ avec $\psi' \colon \mathcal{X}_{k-1} \to \mathcal{Y}$. Par hypoth\`ese de r\'ecurrence, $\psi'$ satisfait l'\'enonc\'e local. Il suffit de d\'emontrer que~$\psi$ le satisfait \'egalement.
Par construction, il existe une surjection $A_{k-1}[S]\to A$. Puisque~$A$ est un~$A_{k-1}$-module de type fini, il existe un polyn\^ome unitaire~$P\in A_{k-1}[S]$ tel que cette surjection se factorise par $A_{k-1}[S]/(P)\to A$. Puisque~$B$ est une $\mathcal{A}$-alg\`ebre de pr\'esentation finie, il existe un morphisme de $\mathcal{A}$-alg\`ebres surjectif $\mathcal{A}[S_{1},\dotsc,S_{n}] \to A_{k-1}$. On notera encore~$P$ un rel\`evement unitaire de~$P$ \`a $\mathcal{A}[S_1,\dotsc,S_n][S]$. Soit~$Z_{P}$ le sous-sch\'ema ferm\'e de~$\ensuremath{\mathbf{A}}^{n+1}_{\mathcal{A}}$ d\'efini par~$P$. On a un diagramme commutatif
\[\begin{tikzcd}
\mathcal{X}\ar[r]\ar[d, "\psi"] & Z_{P} \ar[d, "\pi_{n+1,n}"]\\
\mathcal{X}_{k-1}\ar[r]&\ensuremath{\mathbf{A}}^{n}_{\mathcal{A}},
\end{tikzcd}\]
o\`u les morphismes $\mathcal{X}\to Z_{P}$ et~$\mathcal{X}_{k-1}\to \ensuremath{\mathbf{A}}^{n}_{\mathcal{A}}$ sont des immersions ferm\'ees et le morphisme $\pi_{n+1,n} \colon Z_{P}\to\ensuremath{\mathbf{A}}^{n}_{\mathcal{A}}$ est la restriction \`a~$Z_{P}$ de la projection sur les~$n$ premiers facteurs. On v\'erifie qu'il suffit de d\'emontrer l'\'enonc\'e local pour le morphisme~$\pi_{n+1,n}$. Ce dernier d\'ecoule du lemme \ref{cas_particulier_coh\'erence}.
\end{proof}
\section[D'un sch\'ema \`a son analytifi\'e~2]{D'un sch\'ema \`a son analytifi\'e~: platitude et cons\'equences}\label{sec:schan2}
Soit $\mathcal{X}$ un sch\'ema localement de pr\'esentation finie sur~$\mathcal{A}$. L'objet de cette section est de d\'emontrer que le morphisme $\rho_{\mathcal{X}} \colon \mathcal{X}^\an \to \mathcal{X}$ est plat. Il s'agit d'un r\'esultat essentiel pour les applications. En g\'eom\'etrie complexe, il est par exemple utilis\'e dans la preuve du th\'eor\`eme GAGA de J.~P.~Serre (\cf~\cite{GAGA}).
Nous aurons besoin d'imposer des propri\'et\'es suppl\'ementaires sur l'anneau~$\mathcal{A}$, permettant de relier ses localis\'es alg\'ebriques et analytiques. Pour ce faire, nous introduisons une nouvelle d\'efinition.
\begin{defi}\label{def:aDa}\index{Anneau!de Dedekind analytique|textbf}
On dit que $(\mathcal{A},\nm)$ est un \emph{anneau de Dedekind analytique} s'il satisfait les propri\'et\'es suivantes~:
\begin{enumerate}[i)]
\item $\mathcal{A}$ est un anneau de Dedekind~;
\item $\mathcal{A}$ est un anneau de base g\'eom\'etrique~;
\item le morphisme $\rho \colon \mathcal{M}(\mathcal{A}) \to \Spec(\mathcal{A})$ est surjectif~;
\item pour tout $a \in \Spec(\mathcal{A})$,
\begin{enumerate}
\item[$\alpha$)] si $\mathcal{O}_{\Spec(\mathcal{A}),a}$ est un corps, alors pour tout $b\in \rho^{-1}(a)$, $\mathcal{O}_{\mathcal{M}(\mathcal{A}),b}$ est un corps~;
\item[$\beta$)] si $\mathcal{O}_{\Spec(\mathcal{A}),a}$ est un anneau de valuation discr\`ete, alors $\rho^{-1}(a)$ contient un unique point, disons~$b$, $\mathcal{O}_{\mathcal{M}(\mathcal{A}),b}$ est un anneau de valuation discr\`ete et le morphisme $\mathcal{O}_{\Spec(\mathcal{A}),a} \to \mathcal{O}_{\mathcal{M}(\mathcal{A}),b}$ induit par~$\rho^\sharp$ est le morphisme de compl\'etion.
\end{enumerate}
\end{enumerate}
\end{defi}
\begin{exem}\index{Corps!valu\'e}\index{Anneau!des entiers relatifs $\ensuremath{\mathbf{Z}}$}\index{Anneau!des entiers d'un corps de nombres}\index{Corps!hybride}\index{Anneau!de valuation discr\`ete}\index{Anneau!de Dedekind trivialement valu\'e}
Nos exemples usuels \ref{ex:corpsvalue} \`a~\ref{ex:Dedekind}~: les corps valu\'es, l'anneau~$\ensuremath{\mathbf{Z}}$ et les anneaux d'entiers de corps de nombres, les corps hybrides, les anneaux de valuation discr\`ete et les anneaux de Dedekind trivialement valu\'es sont tous des anneaux de Dedekind analytiques.
\end{exem}
\emph{Nous supposerons d\'esormais que $\mathcal{A}$ est un anneau de Dedekind analytique. }
\medbreak
Commen\c cons par un cas particulier du r\'esultat \`a d\'emontrer.
\begin{prop}\label{prop:affineplat}
Soit $n\in \ensuremath{\mathbf{N}}$. Pour tout $x\in \E{n}{\mathcal{A}}$, le morphisme $\mathcal{O}_{\ensuremath{\mathbf{A}}^n_{\mathcal{A}},\rho_{\ensuremath{\mathbf{A}}^n_{\mathcal{A}}}(x)} \to \mathcal{O}_{\E{n}{\mathcal{A}},x}$ induit par~$\rho_{\ensuremath{\mathbf{A}}^n_{\mathcal{A}}}^\sharp$ est plat.
\end{prop}
\begin{proof}
Posons $\rho := \rho_{\ensuremath{\mathbf{A}}^n_{\mathcal{A}}}$. D\'emontrons l'\'enonc\'e par r\'ecurrence sur~$n$.
Supposons que~$n=0$. Soit $x\in \mathcal{M}(\mathcal{A})$. Si $\mathcal{A}_{\rho(x)}$ est un corps, alors le morphisme $\mathcal{O}_{\Spec(\mathcal{A}),\rho(x)} = \mathcal{A}_{\rho(x)} \to \mathcal{O}_{\mathcal{M}(\mathcal{A}),x}$ est plat. Si $\mathcal{A}_{\rho(x)}$ est un anneau de valuation discr\`ete, alors le morphisme $\mathcal{O}_{\Spec(\mathcal{A}),\rho(x)} = \mathcal{A}_{\rho(x)} \to \mathcal{O}_{\mathcal{M}(\mathcal{A}),x}$ est le morphisme de compl\'etion. Il est donc encore plat.
\medbreak
Supposons que $n\ge 1$ et que le r\'esultat vaut pour~$n-1$. Soit~$x\in\E{n}{\mathcal{A}}$. Notons~$\ensuremath{\mathfrak{q}}$ l'id\'eal premier de~$\mathcal{A}$ correspondant \`a la projection de~$\rho(x)$ sur~$\Spec(\mathcal{A})$. Le point~$\rho(x)$ correspond alors \`a un id\'eal premier~$\ensuremath{\mathfrak{p}}_{x}$ de~$\Frac(\mathcal{A}/\ensuremath{\mathfrak{q}})[T_1,\dotsc,T_{n}]$.
Si $\ensuremath{\mathfrak{p}}_{x} = (0)$ et $\ensuremath{\mathfrak{q}} = (0)$, alors $\mathcal{O}_{\ensuremath{\mathbf{A}}^{n}_\mathcal{A},\rho(x)}$ est un corps, donc $\mathcal{O}_{\E{n}{\mathcal{A}},x}$ est plat sur $\mathcal{O}_{\ensuremath{\mathbf{A}}^{n}_\mathcal{A},\rho(x)}$.
Si $\ensuremath{\mathfrak{p}}_{x} = (0)$ et $\ensuremath{\mathfrak{q}} \ne (0)$, alors $\mathcal{O}_{\ensuremath{\mathbf{A}}^{n}_\mathcal{A},\rho(x)}$ est un anneau de valuation discr\`ete d'id\'eal maximal engendr\'e par~$\ensuremath{\mathfrak{q}}$. D'apr\`es le corollaire~\ref{cor:projectionouverte}, le morphisme de projection $\E{n}{\mathcal{A}} \to \mathcal{M}(\mathcal{A})$ est ouvert, donc, pour tout~$m\in \ensuremath{\mathbf{N}}_{\geq 1}$, l'image de~$\ensuremath{\mathfrak{q}}^m$ dans~$\mathcal{O}_{\E{n}{\mathcal{A}},x}$ n'est pas nulle. On en d\'eduit que le morphisme $\mathcal{O}_{\ensuremath{\mathbf{A}}^{n}_\mathcal{A},\rho(x)}\to \mathcal{O}_{\E{n}{\mathcal{A}},x}$ est injectif. Or, d'apr\`es le th\'eor\`eme~\ref{rigide}, $\mathcal{O}_{\E{n}{\mathcal{A}},x}$ est int\`egre. Par cons\'equent, il est sans $\ensuremath{\mathfrak{q}}$-torsion, et donc plat sur~$\mathcal{O}_{\ensuremath{\mathbf{A}}^{n}_\mathcal{A},\rho(x)}$.
Supposons d\'esormais que $\ensuremath{\mathfrak{p}}_{x} \ne (0)$. Quitte \`a permuter les variables, on peut supposer qu'il existe~$k\in\cn{0}{n-1}$ tel que
\[ \ensuremath{\mathfrak{p}}_{x} \cap \Frac(\mathcal{A}/\ensuremath{\mathfrak{q}})[T_1,\dotsc,T_k] = \{0\} \]
et $\kappa(\rho(x))$ soit une extension finie de $\Frac(\mathcal{A}/\ensuremath{\mathfrak{q}})(T_1,\dotsc,T_k)$.
Notons $x_{n-1} \in \E{n-1}{\mathcal{A}}$ la projection de~$x$ sur les $n-1$ premi\`eres coordonn\'ees. Son image canonique $\rho(x_{n-1})$ dans $\ensuremath{\mathbf{A}}^{n-1}_{\mathcal{A}}$ est l'image de~$\rho(x)$ par la projection sur les $n-1$ premi\`eres coordonn\'ees. Par construction, le morphisme naturel $\kappa(\rho(x_{n-1}))[T_{n}]\to \kappa(\rho(x))$ n'est pas injectif. Notons $P \in \kappa(\rho(x_{n-1}))[T_{n}]$ un g\'en\'erateur unitaire de son noyau et~$d$ le degr\'e de ce polyn\^ome. Soit~$\tilde{P}$ un relev\'e de~$P$ \`a $\mathcal{O}_{\ensuremath{\mathbf{A}}^{n-1}_{\mathcal{A}},\rho(x_{n-1})}[T_{n}]$, unitaire de degr\'e~$d$.
Soit $U=\Spec(B)$ un voisinage affine de~$\rho(x_{n-1})$ dans~$\ensuremath{\mathbf{A}}^{n-1}_{\mathcal{A}}$ sur lequel tous les coefficients de~$\tilde{P}$ sont d\'efinis.
Proc\'edons comme dans la preuve du corollaire~\ref{cor:phinormes}. Consid\'erons le sch\'ema~$\ensuremath{\mathbf{A}}^2_{B}$ muni des coordonn\'ees~$T_{n}$ et~$T$. Notons $p_{1} \colon \ensuremath{\mathbf{A}}^2_{B} \to \ensuremath{\mathbf{A}}^1_{B}$ (resp. $p_{2} \colon \ensuremath{\mathbf{A}}^2_{B} \to \ensuremath{\mathbf{A}}^1_{B}$) le morphisme de projection sur~$\ensuremath{\mathbf{A}}^1_{B}$ muni de la coordonn\'ee~$T$ (resp.~$T_{n}$). Notons $\varphi_{\tilde P} \colon \ensuremath{\mathbf{A}}^1_{B} \to \ensuremath{\mathbf{A}}^1_{B}$ le morphisme induit par le morphisme naturel
\[B[T] \to B[T,T_{n}]/(\tilde P(T_{n}) - T) \xrightarrow[]{\sim} B[T_{n}] \]
et $\sigma$ la section de $p_{2}$ induite par le morphisme de $B$-alg\`ebres
\[\begin{array}{ccc}
B[T,T_{n}]&\to&B[T_{n}]\\
T&\mapsto&\tilde{P}(T_{n})\\
T_{n}&\mapsto&T_{n}
\end{array}.\]
On a alors $p_{1} \circ \sigma = p_{2}$.
\[\begin{tikzcd}
& \ensuremath{\mathbf{A}}^2_{B}\ar[ld, "p_{1}"] \ar[rd, "p_2"'] &\\
\ensuremath{\mathbf{A}}^1_{B} &&\ensuremath{\mathbf{A}}^1_{B} \ar[ll, "\varphi_{\tilde P}"'] \ar[ul, bend right, "\sigma"']
\end{tikzcd}\]
Notons $Z$ le ferm\'e de Zariski de~$\ensuremath{\mathbf{A}}^{2}_{B}$ d\'efini par $\tilde P(T_{n})-T$. La projection~$p_{2}$ induit un isomorphisme entre~$Z$ et~$\ensuremath{\mathbf{A}}^1_{B}$ dont l'inverse est la section~$\sigma$.
Le morphisme
\[\fonction{\psi_{\tilde P}}{\mathcal{O}^d_{\ensuremath{\mathbf{A}}^1_B}}{(p_1)_*\mathcal{O}_Z}{(a_0(T),\dotsc,a_{d-1}(T))}{\displaystyle\sum_{i=0}^{d-1}a_i(T)T_{n}^i}\]
est un isomorphisme, dont l'inverse est le reste de la division euclidienne par~$\tilde P(T_{n})-T$. En composant par l'isomorphisme $(p_1)_*\mathcal{O}_Z \to (p_1)_*\sigma_{*}\mathcal{O}_Z = (\varphi_{\tilde P})_{*} \mathcal{O}_{\ensuremath{\mathbf{A}}^1_B}$ induit par~$\sigma^\sharp$, on obtient un isomorphisme
\[\fonction{\psi'}{\mathcal{O}^d_{\ensuremath{\mathbf{A}}^1_B}}{(\varphi_{\tilde P})_{*} \mathcal{O}_{\ensuremath{\mathbf{A}}^1_B}}{(a_0(T),\dotsc,a_{d-1}(T))}{\displaystyle\sum_{i=0}^{d-1}a_i(\tilde P (T_{n}))T_{n}^i}.\]
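\`A titre d'illustration \'el\'ementaire (simple esquisse, sans incidence sur la suite), pour $d = 2$ et $\tilde P(T_{n}) = T_{n}^2 + c$ avec $c\in B$, la division euclidienne donne par exemple
\[ T_{n}^3 = \big(\tilde P(T_{n}) - c\big)\, T_{n}, \quad \text{c'est-\`a-dire} \quad \psi'(0,\, T - c) = T_{n}^3. \]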
Consid\'erons $\E{n}{\mathcal{A}}$ avec coordonn\'ees $(T_{1},\dotsc,T_{n-1},S)$ et $\pi_{n,n-1} \colon \E{n}{\mathcal{A}} \to \E{n-1}{\mathcal{A}}$ la projection sur les $n-1$ premi\`eres coordonn\'ees. Notons $0_{x_{n-1}} \in \E{n}{\mathcal{A}}$ le point~0 de la fibre $\pi_{n,n-1}^{-1}(x_{n-1}) \simeq \E{1}{\mathcal{H}(x_{n-1})}$. Par construction, on a $\varphi_{\tilde P}^{-1}(\rho(0_{x_{n-1}})) = \{\rho(x)\}$ et le morphisme $\psi'$ induit donc un isomorphisme de germes
\[ \mathcal{O}^d_{\ensuremath{\mathbf{A}}^1_B,\rho(0_{x_{n-1}})} \to \big((\varphi_{\tilde{P}})_* \mathcal{O}_{\ensuremath{\mathbf{A}}^1_B}\big)_{\rho(0_{x_{n-1}})}\simeq \mathcal{O}_{\ensuremath{\mathbf{A}}^1_B,\rho(x)}. \]
En analytifiant la construction, on obtient un morphisme de germes
\[ \mathcal{O}^d_{\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}}\times_{\mathcal{A}} U^\an,0_{x_{n-1}}} \to \big((\varphi^\an_{\tilde{P}})_* \mathcal{O}_{\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}}\times_{\mathcal{A}} U^\an}\big)_{0_{x_{n-1}}}, \]
et le corollaire~\ref{cor:phinormes} assure que c'est un isomorphisme.
Or, $x$ est un ant\'ec\'edent de~$0_{x_{n-1}}$ par~$\varphi^{an}_{\tilde{P}}$, donc, d'apr\`es la proposition~\ref{prop:fetoileenbasexact}, $\mathcal{O}_{\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}}\times_{\mathcal{A}} U^{\an},x}$ est un facteur de $\big((\varphi^{\an}_{\tilde{P}})_*\mathcal{O}_{\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}}\times_{\mathcal{A}} U^{\an}}\big)_{0_{x_{n-1}}}$. Par cons\'equent, pour montrer que le morphisme $\mathcal{O}_{\ensuremath{\mathbf{A}}^1_B,\rho(x)}\to\mathcal{O}_{\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}}\times_{\mathcal{A}} U^{\an},x}$ est plat, il suffit de montrer que le morphisme $\mathcal{O}_{\ensuremath{\mathbf{A}}^1_B,\rho(x)}\to \big((\varphi^{\an}_{\tilde{P}})_*\mathcal{O}_{\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}}\times_{\mathcal{A}} U^{\an}}\big)_{0_{x_{n-1}}}$ l'est. Gr\^ace aux isomorphismes obtenus pr\'ec\'edemment, cela revient encore \`a montrer que le morphisme
\[\mathcal{O}_{\ensuremath{\mathbf{A}}^1_B,\rho(0_{x_{n-1}})} \simeq \mathcal{O}_{\ensuremath{\mathbf{A}}^{n}_\mathcal{A},\rho(0_{x_{n-1}})} \to \mathcal{O}_{\E{n}{\mathcal{A}},0_{x_{n-1}}} \simeq \mathcal{O}_{\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}}\times_{\mathcal{A}} U^{\an},0_{x_{n-1}}}\]
est plat.
Pour ce faire, il suffit de montrer que le morphisme induit entre les compl\'et\'es $T_{n}$-adiques est plat. Or, ces compl\'et\'es sont respectivement isomorphes \`a $\mathcal{O}_{\ensuremath{\mathbf{A}}^{n-1}_{\mathcal{A}},\rho(x_{n-1})}\llbracket T_{n}\rrbracket$ et $\mathcal{O}_{\E{n-1}{\mathcal{A}},x_{n-1}}\llbracket T_{n}\rrbracket$ (d'apr\`es la proposition~\ref{prop:disqueglobal} pour le second). L'hypoth\`ese de r\'ecurrence permet de conclure.
\end{proof}
\begin{theo}\label{platitude_analytification}\index{Morphisme analytique!plat}
Soit~$\mathcal{X}$ un sch\'ema localement de pr\'esentation finie sur~$\mathcal{A}$. Alors, pour tout $x\in \mathcal{X}^\an$, le morphisme $\mathcal{O}_{\mathcal{X},\rho_{\mathcal{X}}(x)}\to\mathcal{O}_{\mathcal{X}^{\an},x}$ induit par~$\rho_{\mathcal{X}}$ est plat.
\end{theo}
\begin{proof}
Quitte \`a localiser au voisinage de~$\rho(x)$, on peut supposer que~$\mathcal{X}$ est un sous-sch\'ema ferm\'e d'un espace affine~$\ensuremath{\mathbf{A}}^n_{\mathcal{A}}$. Par construction de l'analytifi\'e (\cf~th\'eor\`eme~\ref{thm:analytification}), on a un isomorphisme
\[\mathcal{O}_{\mathcal{X}^\an,x} \simeq \mathcal{O}_{\mathcal{X},\rho_{\mathcal{X}}(x)} \otimes_{\mathcal{O}_{\ensuremath{\mathbf{A}}^{n}_{\mathcal{A}},\rho_{\mathcal{X}}(x)}} \mathcal{O}_{\E{n}{\mathcal{A}},x}. \]
Or, d'apr\`es la proposition~\ref{prop:affineplat}, $\mathcal{O}_{\E{n}{\mathcal{A}},x}$ est plat sur $\mathcal{O}_{\ensuremath{\mathbf{A}}^{n}_{\mathcal{A}},\rho_{\mathcal{X}}(x)}$. Le r\'esultat s'en d\'eduit.
\end{proof}
\begin{coro}\index{Analytification!d'un faisceau!exactitude du foncteur d'}
Soit~$\mathcal{X}$ un sch\'ema localement de pr\'esentation finie sur~$\mathcal{A}$. Le foncteur qui \`a un faisceau de $\mathcal{O}_{\mathcal{X}}$-modules~$\mathcal{F}$ associe le faisceau de $\mathcal{O}_{\mathcal{X}^\an}$-modules~$\mathcal{F}^\an$ est exact.
\qed
\end{coro}
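Ce corollaire se d\'eduit directement du th\'eor\`eme~\ref{platitude_analytification}~: en admettant la formule $\mathcal{F}^\an_{x} \simeq \mathcal{F}_{\rho_{\mathcal{X}}(x)} \otimes_{\mathcal{O}_{\mathcal{X},\rho_{\mathcal{X}}(x)}} \mathcal{O}_{\mathcal{X}^\an,x}$ (\cf~proposition~\ref{formule_analytification}), l'exactitude se v\'erifie fibre \`a fibre et le produit tensoriel par le module plat $\mathcal{O}_{\mathcal{X}^\an,x}$ pr\'eserve les suites exactes.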
Donnons maintenant un autre \'enonc\'e de stabilit\'e de propri\'et\'es par passage \`a l'analytification.
\begin{prop}\label{stabilit\'e_analytification2}\index{Faisceau!plat}\index{Faisceau!sans torsion}
Soit $\varphi \colon \mathcal{X}\to \mathcal{Y}$ un morphisme fini entre sch\'emas localement de pr\'esentation finie sur~$\mathcal{A}$. Soit~$\mathcal{F}$ un faisceau coh\'erent sur~$\mathcal{X}$ et soit $y\in \mathcal{Y}^\an$.
\begin{enumerate}[i)]
\item Le $\mathcal{O}_{\mathcal{Y},\rho_{\mathcal{Y}}(y)}$-module $(\varphi_*\mathcal{F})_{\rho_{\mathcal{Y}}(y)}$ est sans torsion si, et seulement si, le $\mathcal{O}_{\mathcal{Y}^\an,y}$-module $(\varphi^\an_*\mathcal{F}^\an)_{y}$ est sans torsion.
\item Soit $u\in \varphi^{-1}(\rho_{\mathcal{Y}}(y))$. Le $\mathcal{O}_{\mathcal{Y},\rho_{\mathcal{Y}}(y)}$-module $\mathcal{F}_{u}$ est plat si, et seulement si, pour tout $x\in (\varphi^\an)^{-1}(y) \cap \rho_{\mathcal{X}}^{-1}(u)$, le $\mathcal{O}_{\mathcal{Y}^\an,y}$-module $\mathcal{F}^\an_{x}$ est plat.
\end{enumerate}
\end{prop}
\begin{proof}
Pour all\'eger l'\'ecriture, on notera indiff\'eremment~$\rho$ les deux morphismes~$\rho_{\mathcal{X}}$ et~$\rho_{\mathcal{Y}}$.
i) D'apr\`es le th\'eor\`eme \ref{platitude_analytification}, $\mathcal{O}_{\mathcal{Y}^{an},y}$ est plat sur $\mathcal{O}_{\mathcal{Y},\rho(y)}$. Puisque~$\varphi$ est fini, $(\varphi_*\mathcal{F})_{\rho(y)}$ est un $\mathcal{O}_{\mathcal{Y},\rho(y)}$-module de pr\'esentation finie. D'apr\`es \cite[I, \S 2, \no 10, proposition~11]{BourbakiAC14} et la proposition \ref{formule_analytification}, on a des isomorphismes canoniques
\begin{align*}
&\Hom_{\mathcal{O}_{\mathcal{Y},\rho(y)}}((\varphi_*\mathcal{F})_{\rho(y)},(\varphi_*\mathcal{F})_{\rho(y)}) \otimes_{\mathcal{O}_{\mathcal{Y},\rho(y)}} \mathcal{O}_{\mathcal{Y}^{an},y}\\
\simeq\ &
\Hom_{\mathcal{O}_{\mathcal{Y}^{an},y}}((\varphi_*\mathcal{F})_{\rho(y)}\otimes_{\mathcal{O}_{\mathcal{Y},\rho(y)}} \mathcal{O}_{\mathcal{Y}^{an},y},(\varphi_*\mathcal{F})_{\rho(y)}\otimes_{\mathcal{O}_{\mathcal{Y},\rho(y)}} \mathcal{O}_{\mathcal{Y}^{an},y})\\
\simeq \ &\Hom_{\mathcal{O}_{\mathcal{Y}^{an},y}}((\varphi^{\an}_*\mathcal{F}^{\an})_y,(\varphi^{\an}_*\mathcal{F}^{\an})_y).
\end{align*}
Or, l'id\'eal de torsion de $(\varphi_*\mathcal{F})_{\rho(y)}$ s'identifie au noyau du morphisme
\[\mathcal{O}_{\mathcal{Y},\rho(y)} \to \Hom_{\mathcal{O}_{\mathcal{Y},\rho(y)}}((\varphi_*\mathcal{F})_{\rho(y)},(\varphi_*\mathcal{F})_{\rho(y)})\]
et de m\^eme pour $(\varphi^{\an}_*\mathcal{F}^{\an})_y$. Le r\'esultat d\'ecoule donc de la fid\`ele platitude de $\mathcal{O}_{\mathcal{Y}^{an},y}$ sur $\mathcal{O}_{\mathcal{Y},\rho(y)}$.
ii) Quitte \`a restreindre~$\mathcal{X}$, on peut supposer que $\varphi^{-1}(\varphi(u)) = \{u\}$. Dans ce cas, on a $(\varphi_{*}\mathcal{F})_{\rho(y)} \simeq \mathcal{F}_{u}$ et, d'apr\`es la proposition~\ref{prop:fetoileenbasexact},
\[ (\varphi^\an_{*}\mathcal{F}^\an)_{y} \simeq \prod_{x \in (\varphi^\an)^{-1}(y)} \mathcal{F}^\an_{x}.\]
En outre, d'apr\`es la proposition \ref{formule_analytification}, on a un isomorphisme canonique
\[\mathcal{O}_{\mathcal{Y}^{an},y}\otimes_{\mathcal{O}_{\mathcal{Y},\rho(y)}} (\varphi_*\mathcal{F})_{\rho(y)} \simeq (\varphi^{\an}_*\mathcal{F}^{\an})_y.\]
Le r\'esultat d\'ecoule alors de la fid\`ele platitude de $\mathcal{O}_{\mathcal{Y}^{an},y}$ sur $\mathcal{O}_{\mathcal{Y},\rho(y)}$ par \cite[I, \S 3, \no 3, proposition~6]{BourbakiAC14}.
\end{proof}
\begin{prop}\label{prop:platan}\index{Morphisme analytique!plat}\index{Morphisme!structural}
Soit~$\mathcal{X}$ un sch\'ema localement de pr\'esentation finie sur~$\mathcal{A}$. Notons $\pi \colon \mathcal{X} \to \Spec(\mathcal{A})$ le morphisme structural. Soit $x\in \mathcal{X}^\an$. Le morphisme $\pi$ est plat en~$\rho_{\mathcal{X}}(x)$ si, et seulement si, le morphisme $\pi^\an$ est plat en~$x$.
\end{prop}
\begin{proof}
Posons $\rho := \rho_{\mathcal{X}}$. Si $\mathcal{O}_{\Spec(\mathcal{A}),\pi(\rho(x))}$ est un corps, alors $\mathcal{O}_{\mathcal{M}(\mathcal{A}),\pi^\an(x)}$ l'est aussi et le r\'esultat est imm\'ediat.
Supposons que $\mathcal{O}_{\Spec(\mathcal{A}),\pi(\rho(x))}$ est un anneau de valuation discr\`ete. Fixons-en une uniformisante~$\varpi$. Par hypoth\`ese, $\mathcal{O}_{\mathcal{M}(\mathcal{A}),\pi^\an(x)}$ est encore un anneau de valuation discr\`ete d'uniformisante~$\varpi$. Il suffit de montrer que~$\varpi$ divise~0 dans~$\mathcal{O}_{\mathcal{X},\rho(x)}$ si, et seulement si, $\varpi$ divise~0 dans~$\mathcal{O}_{\mathcal{X}^{\an},x}$, autrement dit, que la multiplication par~$\varpi$ est injective dans~$\mathcal{O}_{\mathcal{X},\rho(x)}$ si, et seulement si, elle l'est dans~$\mathcal{O}_{\mathcal{X}^{\an},x}$. Ce r\'esultat d\'ecoule de la fid\`ele platitude du morphisme $\mathcal{O}_{\mathcal{X},\rho(x)}\to\mathcal{O}_{\mathcal{X}^{\an},x}$ (\cf~th\'eor\`eme~\ref{platitude_analytification}).
\end{proof}
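Rappelons, pour la commodit\'e du lecteur, le crit\`ere classique utilis\'e \`a plusieurs reprises ci-dessus~: si $R$ est un anneau de valuation discr\`ete d'uniformisante~$\varpi$ et $M$ un $R$-module, alors
\[ M \text{ est plat sur } R \iff M \text{ est sans torsion} \iff \big(\forall m\in M,\ \varpi m = 0 \Rightarrow m=0\big).\]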
D\'emontrons finalement un r\'esultat sur la dimension. Commen\c cons par un lemme pr\'eliminaire.
\begin{lemm}\label{lem:finiouvertcomposante}\index{Morphisme analytique!fini}\index{Morphisme analytique!ouvert}\index{Composantes locales}
Soit $\varphi \colon X \to Y$ un morphisme fini et ouvert d'espaces $\mathcal{A}$-analytiques. Soit $x\in X$ et supposons que $Y$~est int\`egre en~$\varphi(x)$. Notons $U_{1},\dotsc,U_{s}$ les composantes locales de~$X$ en~$x$. Alors il existe $i\in \cn{1}{s}$, un voisinage ouvert~$U'_{i}$ de~$x$ dans~$U_{i}$ et un voisinage ouvert~$V$ de~$\varphi(x)$ dans~$Y$ tels que le morphisme $U'_{i} \to V$ induit par~$\varphi$ soit fini et ouvert en~$x$.
\end{lemm}
\begin{proof}
Par d\'efinition des composantes locales, il existe un voisinage ouvert~$U$ de~$x$ dans~$X$ tel que les~$U_{i}$ soient des ferm\'es analytiques de~$U$ dont la r\'eunion recouvre~$U$. Pour $i\in \cn{1}{s}$, notons~$\mathcal{J}_{i}$ l'id\'eal de~$U_{i}$. D'apr\`es les lemmes~\ref{lem:Jintegre} et~\ref{lem:integreirreductible}, $\mathcal{J}_{i,x}$ est un id\'eal premier de~$\mathcal{O}_{X,x}$. D'apr\`es le corollaire~\ref{cor:nilpotent}, on a $\sqrt{(0)} =\bigcap_{i=1}^s \mathcal{J}_{i,x}$.
Remarquons que le morphisme $\varphi^\sharp_{x} \colon \mathcal{O}_{Y,\varphi(x)} \to \mathcal{O}_{X,x}$ induit par~$\varphi^\sharp$ est injectif~: puisque~$\varphi$ est ouvert et que $\mathcal{O}_{Y,\varphi(x)}$ est int\`egre, cela d\'ecoule du corollaire~\ref{cor:nilpotent}. On en d\'eduit que
\[ (0) = (\varphi^\sharp_{x})^{-1}\big(\sqrt{(0)}\big) = \bigcap_{i=1}^s (\varphi^\sharp_{x})^{-1}(\mathcal{J}_{i,x}). \]
D'apr\`es la partie unicit\'e du th\'eor\`eme~\ref{noeth}, il existe $i\in \cn{1}{s}$ tel que $(\varphi^\sharp_{x})^{-1}(\mathcal{J}_{i,x}) = (0)$.
Par construction, le morphisme $\mathcal{O}_{Y,\varphi(x)} \to \mathcal{O}_{U_{i},x}$ induit par~$\varphi^\sharp$ est encore injectif. Quitte \`a restreindre~$U_{i}$, on peut supposer qu'il existe un voisinage ouvert~$V$ de~$\varphi(x)$ dans~$Y$ tel que le morphisme $\psi \colon U_{i} \to V$ induit par~$\varphi$ soit fini. Quitte \`a restreindre encore, on peut supposer que $\psi^{-1}(\psi(x)) = \{x\}$. On a alors un isomorphisme $(\psi_{\ast} \mathcal{O}_{U_{i}})_{\psi(x)} \simeq \mathcal{O}_{U_{i},x}$ et la proposition~\ref{prop:ouvert} assure que~$\psi$ est ouvert en~$x$.
\end{proof}
\begin{theo}\label{thm:dimXXan}\index{Espace analytique!dimension d'un|(}
Supposons que $\dim(\mathcal{A}) = 1$. Soient~$\mathcal{X}$ un sch\'ema localement de pr\'esentation finie sur~$\mathcal{A}$ et~$x$ un point de~$\mathcal{X}^{\an}$. On a $\dim_{x}(\mathcal{X}^\an) = \dim_{\rho_{\mathcal{X}}(x)}(\mathcal{X})$.
\end{theo}
\begin{proof}
Il suffit de d\'emontrer le r\'esultat pour chaque composante irr\'eductible de~$\mathcal{X}$. On peut donc supposer que~$\mathcal{X}$ est irr\'eductible. Quitte \`a le r\'eduire et \`a localiser au voisinage de~$\rho_{\mathcal{X}}(x)$, on peut supposer que~$\mathcal{X}$ est int\`egre et affine. Il existe alors une $\mathcal{A}$-alg\`ebre int\`egre~$A$ telle que $\mathcal{X} = \Spec(A)$. Notons $\pi \colon \mathcal{X} \to \Spec(\mathcal{A})$ le morphisme structural. Distinguons deux cas.
\smallbreak
$\bullet$ Supposons que le morphisme $\pi^\sharp \colon \mathcal{A} \to A$ n'est pas injectif.
Notons~$\ensuremath{\mathfrak{p}}$ le noyau de~$\pi^\sharp$. Par hypoth\`ese, le ferm\'e d\'efini par~$\ensuremath{\mathfrak{p}}$ dans~$\Spec(\mathcal{A})$ et~$\mathcal{M}(\mathcal{A})$ est un singleton. Puisque l'espace~$\mathcal{X}$ et son analytifi\'e se projettent sur ce singleton, on a $v_{x}(\mathcal{X}^\an) =0$ et on est ramen\'e \`a la th\'eorie classique de la dimension sur un corps valu\'e. Le r\'esultat est alors bien connu.
\smallbreak
$\bullet$ Supposons que le morphisme $\pi^\sharp \colon \mathcal{A} \to A$ est injectif.
Alors le morphisme $\pi^\sharp \colon \mathcal{A} \to A$ est plat, puisque~$\mathcal{A}$ est un anneau de Dedekind. En particulier, on a
\[ \dim(A) = \dim (A\otimes_{\mathcal{A}} K) + \dim(\mathcal{A}) = \dim (A\otimes_{\mathcal{A}} K) +1,\]
o\`u $K := \Frac(\mathcal{A})$.
Posons $n := \dim (A\otimes_{\mathcal{A}} K)$.
Le lemme de normalisation de Noether assure qu'il existe un morphisme injectif $K[T_{1},\dotsc,T_{n}] \to A \otimes_{\mathcal{A}} K$ qui fait de~$A \otimes_{\mathcal{A}} K$ un $K[T_{1},\dotsc,T_{n}]$-module fini. On en d\'eduit qu'il existe $f\in \mathcal{A} \setminus\{0\}$ et un morphisme injectif $\mathcal{A}_{f}[T_{1},\dotsc,T_{n}] \to A \otimes_{\mathcal{A}} \mathcal{A}_{f}$ qui fait de~$A \otimes_{\mathcal{A}} \mathcal{A}_{f}$ un $\mathcal{A}_{f}[T_{1},\dotsc,T_{n}]$-module fini. Le morphisme associ\'e $\varphi \colon \Spec(A) \times_{\mathcal{A}} \Spec(\mathcal{A}_f) \to \ensuremath{\mathbf{A}}^n_{\mathcal{A}} \times_{\mathcal{A}} \Spec(\mathcal{A}_f)$ est alors un morphisme fini sans torsion. Notons~$B_{f}$ l'ouvert de $B = \mathcal{M}(\mathcal{A})$ d\'efini par l'in\'egalit\'e $f\ne 0$. D'apr\`es la proposition~\ref{stabilit\'e_analytification2}, le morphisme induit $\varphi^\an \colon \mathcal{X}^\an_{f} := \mathcal{X}^{\an}\times_{\mathcal{A}} B_{f}\to \E{n}{\mathcal{A}} \times_{\mathcal{A}} B_{f}$ est \'egalement fini et sans torsion. D'apr\`es la proposition~\ref{prop:ouvert}, il est ouvert.
Notons $U_{1},\dotsc,U_{s}$ les composantes locales de~$\mathcal{X}^\an_{f}$ en~$x$ et $U := \bigcup_{i=1}^s U_{i}$. Quitte \`a restreindre~$U$ et les~$U_{i}$, on peut supposer qu'il existe un ouvert~$V$ de $\E{n}{\mathcal{A}} \times_{\mathcal{A}} B_{f}$ tel que le morphisme $\psi \colon U \to V$ induit par~$\varphi^\an$ soit fini. La th\'eorie de la dimension sur un corps valu\'e assure que, pour tout $i\in \cn{1}{s}$, on a
\[ \dim_{x}(U_{i}) = \dim_{x}(U_{i}/B) + v_{x}(U_{i}) \le n + 1.\]
En outre, d'apr\`es le lemme~\ref{lem:finiouvertcomposante}, il existe $j\in \cn{1}{s}$ tel que le morphisme $\psi_{j} \colon U_{j} \to V$ induit par~$\psi$ soit fini et ouvert en~$x$. On en d\'eduit que $v_{x}(U_{j}) = 1$ et que $\dim_{x}(U_{j}/B) = n$, donc que $\dim_{x}(U_{j}) = n+1$. Ceci montre finalement que
\[ \dim_{x}(\mathcal{X}^\an) = \dim_{x}(U) = n+1 = \dim(A) = \dim_{\rho_{\mathcal{X}}(x)}(\mathcal{X}).\]
\end{proof}
\begin{rema}
Si $\mathcal{A}$ est un corps valu\'e, le r\'esultat $\dim_{x}(\mathcal{X}^\an) = \dim_{\rho_{\mathcal{X}}(x)}(\mathcal{X})$ vaut encore, par la th\'eorie classique.
En revanche, si~$\mathcal{A}$ est un corps muni d'une norme hybride (\cf~exemple~\ref{ex:corpshybride}), il n'est pas difficile de se convaincre que l'on a toujours $v_{x}(\mathcal{X}^\an) = 1$ et $\dim_{x}(\mathcal{X}^\an) = \dim_{\rho_{\mathcal{X}}(x)}(\mathcal{X}) +1$.
\end{rema}
\index{Espace analytique!dimension d'un|)}
\index{Analytification!|)}
\chapter{\'Etude des morphismes finis}\label{chap:fini}\index{Morphisme analytique!fini|(}
Ce chapitre est consacr\'e aux morphismes finis et \`a leurs applications.
Dans la section~\ref{sec:defpropfini}, nous d\'efinissons les morphismes finis d'espaces analytiques et en \'etablissons quelques propri\'et\'es \'el\'ementaires. Nous nous int\'eressons ensuite aux analogues de certains r\'esultats classiques dans notre contexte.
Dans la section~\ref{sec:imagedirectefinie}, nous montrons que l'image directe d'un faisceau coh\'erent par un morphisme fini est un faisceau coh\'erent (\cf~th\'eor\`eme~\ref{thm:fini}). La preuve fournit \'egalement un crit\`ere de finitude locale~: pour qu'un morphisme soit fini au voisinage d'un point, il faut et il suffit que ce point soit isol\'e dans sa fibre (\cf~th\'eor\`eme~\ref{thm:locfini}).
Dans la section~\ref{sec:stabilitemorphismesfinis}, nous montrons que les morphismes finis et les morphismes propres sont stables par changement de base (\cf~proposition~\ref{stabilite_fini}).
Dans la section~\ref{sec:chgtbasefini}, nous \'etablissons une propri\'et\'e de changement de base fini pour les faisceaux coh\'erents. \'Etant donn\'e un carr\'e cart\'esien correspondant au produit fibr\'e d'un morphisme fini par un morphisme quelconque, les op\'erations de tirer en arri\`ere puis pousser en avant ou de pousser en avant puis tirer en arri\`ere fournissent des faisceaux coh\'erents isomorphes (\cf~th\'eor\`eme~\ref{thm:formule_changement_base}). Le m\^eme r\'esultat vaut en rempla\c{c}ant ci-dessus le morphisme quelconque par une extension des scalaires (\cf~th\'eor\`eme~\ref{thm:formule_extension_scalaire}).
Dans la section~\ref{sec:Ruckert}, nous d\'emontrons une variante du Nullstellensatz, dite Nullstellensatz de R\"uckert (\cf~th\'eor\`eme~\ref{thm:Ruckert} et corollaire~\ref{cor:IVJ}). Elle est utilis\'ee dans la section finale~\ref{sec:ouverture} pour montrer des crit\`eres d'ouverture de morphismes finis, par exemple le fait qu'un morphisme fini et plat vers un espace r\'eduit est ouvert (\cf~corollaire~\ref{plat}).
\medbreak
Fixons~$(\mathcal{A},\nm)$ un anneau de base g\'eom\'etrique. Posons $B:=\mathcal{M}(\mathcal{A})$.
\section{D\'efinition et premi\`eres propri\'et\'es}\label{sec:defpropfini}
Dans cette section, nous d\'efinissons les morphismes finis et rappelons quelques r\'esultats classiques.
\begin{defi}\index{Application!finie|textbf}
Une application $f \colon T \to T'$ entre espaces topologiques est dite \emph{finie} si elle est ferm\'ee et si, pour tout $t\in T'$, la fibre $f^{-1}(t)$ est finie.
\end{defi}
\begin{rema}\label{rem:finilocalaubut}
La finitude est une propri\'et\'e locale au but.
\end{rema}
\begin{lemm}\label{lem:voisinagefibre}\index{Application!fermee@ferm\'ee}
Soit $f \colon T \to T'$ une application ferm\'ee entre espaces topologiques. Soient $t' \in T'$ et~$\mathcal{V}$ une base de voisinages de~$t'$ dans~$T'$. Alors
\[\{ f^{-1}(V) : V \in \mathcal{V}\}\]
est une base de voisinages de~$f^{-1}(t')$ dans~$T$.
\qed
\end{lemm}
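Indiquons bri\`evement un argument possible (esquisse)~: si $W$ est un ouvert de~$T$ contenant $f^{-1}(t')$, alors, $f$ \'etant ferm\'ee, $T'\setminus f(T\setminus W)$ est un voisinage ouvert de~$t'$~; il contient donc un \'el\'ement~$V$ de~$\mathcal{V}$, et l'on a $f^{-1}(V) \subset W$.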
Le r\'esultat suivant est une cons\'equence directe de \cite[I, \S 10, \no 2, th\'eor\`eme~1]{BourbakiTG14}.
\begin{prop}\label{prop:finipropre}\index{Application!propre}
Toute application finie entre espaces topologiques est propre.
\qed
\end{prop}
\'Enon\c cons quelques propri\'et\'es des applications finies vis-\`a-vis de la composition.
\begin{lemm}\label{lem:compositionfini}
Soient $f\colon T \to T'$ et $g\colon T' \to T''$ des applications continues entre espaces topologiques.
\begin{enumerate}[i)]
\item Si $f$ et $g$ sont finies, alors $g\circ f$ est finie.
\item Si $g\circ f$ est finie et $f$ est surjective, alors $g$ est finie.
\item Si $g\circ f$ est finie et $g$ est injective, alors $f$ est finie.
\end{enumerate}
\end{lemm}
\begin{proof}
Les propri\'et\'es de finitude des fibres se v\'erifient ais\'ement. Pour d\'emontrer le caract\`ere ferm\'e, il suffit de d\'emontrer le caract\`ere propre. En utilisant la proposition~\ref{prop:finipropre}, on se ram\`ene \`a d\'emontrer les propri\'et\'es correspondantes pour les applications propres et l'on peut alors conclure par le lemme~\ref{lem:compositionpropre}.
\end{proof}
\begin{defi}\index{Morphisme analytique!fini|textbf}\index{Morphisme analytique!fini!en un point|textbf}
Soit $\varphi \colon X \to Y$ un morphisme d'espaces $\mathcal{A}$-analytiques. On dit que~$\varphi$ est \emph{fini} si l'application induite entre les espaces topologiques sous-jacents est finie.
Pour tout point~$x$ de~$X$, on dit que~$\varphi$ est \emph{fini en~$x$} s'il existe un voisinage~$U$ de~$x$ dans~$X$ et un voisinage~$V$ de~$\varphi(x)$ dans~$Y$ tels que $\varphi(U)\subset V$ et le morphisme $\varphi_{U,V} \colon U\to V$ induit par~$\varphi$ soit fini.
\end{defi}
\begin{exem}\label{ex:fini}\index{Endomorphisme de la droite}
Soit $P \in \mathcal{A}[T]$ un polyn\^ome unitaire non constant. Alors le morphisme $\varphi_{P} \colon \E{1}{\mathcal{A}} \to \E{1}{\mathcal{A}}$ d\'efini par~$P$ (\cf~exemple~\ref{exemple_morphisme}) est fini.
En effet, pour tout $r\in \ensuremath{\mathbf{R}}_{\ge 0}$, l'ensemble
\[\varphi_{P}^{-1}(\overline{D}_{B}(r)) = \overline{D}_{B}(P,r)\]
est compact (\cf~\cite[Corollaire~1.1.12]{A1Z}). Par cons\'equent, d'apr\`es le lemme~\ref{lem:criterepropre}, le morphisme~$\varphi_{P}$ est propre. La finitude des fibres est claire.
\end{exem}
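Notons, \`a titre d'exemple \'el\'ementaire et en admettant que les points de~$\E{1}{\mathcal{A}}$ correspondent \`a des semi-normes multiplicatives, que pour $P = T^d$ avec $d\ge 1$ on a, pour tout $r\in\ensuremath{\mathbf{R}}_{\ge 0}$,
\[ \varphi_{P}^{-1}(\overline{D}_{B}(r)) = \overline{D}_{B}(T^d,r) = \overline{D}_{B}(r^{1/d}). \]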
L'\'enonc\'e suivant se d\'eduit sans difficult\'es du lemme~\ref{lem:voisinagefibre}.
\begin{prop}\label{prop:fetoileenbasexact}
Soit $\varphi \colon X \to Y$ un morphisme fini d'espaces $\mathcal{A}$-analytiques.
\begin{enumerate}[i)]
\item Pour tout faisceau de~$\mathcal{O}_{X}$-modules~$\mathcal{F}$ et tout $y\in Y$, le morphisme naturel
\[(\varphi_{\ast}\mathcal{F})_{y} \to \prod_{x\in \varphi^{-1}(y)} \mathcal{F}_{x}\]
est un isomorphisme de~$\mathcal{O}_{Y,y}$-modules.
\item Le foncteur~$\varphi_{\ast}$ de la cat\'egorie des~$\mathcal{O}_{X}$-modules dans celle des $\mathcal{O}_{Y}$-modules est exact.
\end{enumerate}
\qed
\end{prop}
En utilisant des r\'esolutions flasques, on d\'emontre le r\'esultat suivant (\cf~\cite[I, \S 1, theorem~5]{GR}, par exemple).
\begin{theo}\label{th:Hqfini}
Soit $\varphi \colon X \to Y$ un morphisme fini d'espaces $\mathcal{A}$-analytiques. Pour tout faisceau de~$\mathcal{O}_{X}$-modules~$\mathcal{F}$ et tout $q\in \ensuremath{\mathbf{N}}$, on a un isomorphisme naturel
\[ H^q(X,\mathcal{F}) \simeq H^q(Y,\varphi_{\ast} \mathcal{F}).\]
\qed
\end{theo}
\section{Images directes des faisceaux coh\'erents}\label{sec:imagedirectefinie}
Le but de cette section est de montrer que si~$\varphi$ est un morphisme fini d'espaces analytiques, le foncteur~$\varphi_*$ pr\'eserve les faisceaux coh\'erents. D'un point de vue intuitif, cela signifie que la finitude topologique entra\^ine la finitude alg\'ebrique. Nous d\'emontrerons \'egalement un crit\`ere de finitude locale. Nous suivons ici \cite[\S 3.1]{Gr-Re2}.
Commen\c cons par des cas particuliers. Le premier d\'ecoule directement des d\'efinitions.
\begin{lemm}\label{lem:immersionfermeecoherence}\index{Immersion!fermee@ferm\'ee!image d'un faisceau par une}
Soit $\iota \colon X \to X'$ une immersion ferm\'ee d'espaces $\mathcal{A}$-analytiques. Alors, un faisceau de~$\mathcal{O}_{X}$-modules~$\mathcal{F}$ est coh\'erent si, et seulement si, $\iota_{\ast}\mathcal{F}$ est coh\'erent.
\qed
\end{lemm}
\begin{lemm}\label{cas_particulier_coh\'erence}
Posons $X := \E{1}{\mathcal{A}}$ avec coordonn\'ee~$T$ et notons $\pi \colon X \to B$ le morphisme structural. Soit~$\omega\in\mathcal{A}[T]$ un polyn\^ome unitaire de degr\'e~$d \ge 1$. Notons~$Z$ le ferm\'e analytique d\'efini par~$\omega$. Alors,
\begin{enumerate}[i)]
\item le morphisme~$\pi_{|Z} \colon Z \to B$ est un morphisme fini ;
\item le morphisme
\[\begin{array}{ccc}
\mathcal{O}_B^d & \to & (\pi_{|Z})_{*} \mathcal{O}_{Z}\\
(a_{0},\dotsc,a_{d-1}) & \mapsto & \displaystyle \sum_{i=0}^{d-1} a_{i}\, T^i
\end{array}\]
est un isomorphisme~;
\item pour tout faisceau coh\'erent~$\mathcal{F}$ sur~$Z$, le faisceau~$(\pi_{|Z})_*\mathcal{F}$ est un faisceau coh\'erent sur~$B$.
\end{enumerate}
\end{lemm}
\begin{proof}
i) Puisque~$\omega$ est unitaire, le ferm\'e analytique~$Z$ est compact (\cf~\cite[corollaire~1.1.12]{A1Z}). Par cons\'equent, d'apr\`es le lemme~\ref{lem:criterepropre}, le morphisme~$\pi_{|Z}$ est propre. En outre, pour tout point~$b$ de~$B$, l'image r\'eciproque $\pi_{|Z}^{-1}(b)$ s'identifie \`a l'ensemble des racines du polyn\^ome unitaire~$\omega(b)$, et elle est donc finie. On en d\'eduit que~$\pi_{|Z}$ est fini.
\medbreak
ii) Soit~$b \in B$. Puisque~$\omega$ est unitaire de degr\'e~$d$, le morphisme
\[\begin{array}{ccc}
\mathcal{O}_{B,b}^d & \to & \mathcal{O}_{B,b}[T]/(\omega)\\
(a_{0},\dotsc,a_{d-1}) & \mapsto & \displaystyle \sum_{i=0}^{d-1} a_{i}\, T^i
\end{array}\]
est un isomorphisme. D'apr\`es le corollaire~\ref{cor:weierstrassgeneralise}, on a un isomorphisme naturel $\mathcal{O}_{B,b}[T]/(\omega) \xrightarrow[]{\sim} \prod_{i=1}^t \mathcal{O}_{z_{i}}$, o\`u $z_{1},\dotsc,z_{t}$ sont les z\'eros de~$\omega(b)$ dans~$\pi^{-1}(b)$, autrement dit les points de~$\pi_{|Z}^{-1}(b)$. Le r\'esultat d\'ecoule alors de l'isomorphisme naturel $((\pi_{|Z})_{\ast}\mathcal{O}_{Z})_{b} \xrightarrow[]{\sim} \prod_{i=1}^t \mathcal{O}_{z_{i}}$ de la proposition~\ref{prop:fetoileenbasexact}.
\medbreak
iii) Soit~$\mathcal{F}$ un faisceau coh\'erent sur~$Z$. Soit $b\in B$. Notons $z_{1},\dotsc,z_{t}$ les \'el\'ements de~$\pi_{|Z}^{-1}(b)$. Pour tout $i\in\cn{1}{t}$, on peut trouver un voisinage ouvert~$U_{i}$ de~$z_i$ dans~$Z$ et une suite exacte
\[\mathcal{O}_{U_{i}}^{m_i}\to\mathcal{O}_{U_{i}}^{n_i}\to\mathcal{F}_{|U_{i}}\to 0.\]
D'apr\`es le lemme~\ref{lem:voisinagefibre}, on peut supposer que les~$U_{i}$ sont disjoints et qu'il existe un voisinage ouvert~$V$ de~$b$ tel que $\pi_{|Z}^{-1}(V) = \bigcup_{1\le i\le t} U_{i}$. D'apr\`es la proposition~\ref{prop:fetoileenbasexact}, la suite
\[(\pi_{|Z})_*\mathcal{O}_{U_{i}}^{m_i}\to(\pi_{|Z})_*\mathcal{O}_{U_{i}}^{n_i}\to(\pi_{|Z})_*\mathcal{F}_{|U_{i}} \to 0\]
est encore exacte et, d'apr\`es~ii), le faisceau $(\pi_{|Z})_*\mathcal{O}_{U_{i}}$ est coh\'erent. Le r\'esultat s'ensuit.
\end{proof}
\'Enon\c cons maintenant deux lemmes qui nous seront utiles \`a plusieurs reprises pour effectuer des raisonnements par r\'ecurrence.
\begin{lemm}\label{lem:finiprojection}
Soient~$\varphi \colon X\to Y$ un morphisme d'espaces $\mathcal{A}$-analytiques et~$x \in X$. Posons $y := \varphi(x)$ et supposons que $\varphi^{-1}(y) =\{x\}$.
Alors, il existe un voisinage ouvert~$U$ de~$x$ dans~$X$, un voisinage ouvert~$V$ de~$y$ dans~$Y$ tel que $\varphi^{-1}(V) = U$, des entiers $k,l \in \ensuremath{\mathbf{N}}$, un ouvert~$U'$ de~$\E{k}{\mathcal{A}}$, un ouvert~$V'$ de~$\E{l}{\mathcal{A}}$, des immersions ferm\'ees $\iota_{U} \colon U \to U'\times_{\mathcal{A}} V'$ et $\iota_{V} \colon V \to V'$ tels que, en notant $p_{V'} \colon U'\times_{\mathcal{A}} V' \to V'$ la seconde projection,
on ait $\iota_{V} \circ \varphi_{|U} = p_{V'} \circ \iota_{U}$.
\end{lemm}
\begin{proof}
Par d\'efinition d'espace analytique, le point~$x$ poss\`ede un voisinage ouvert~$U_{0}$ qui peut s'envoyer par une immersion ferm\'ee~$i_{U_{0}}$ dans un ouvert~$U_{0}'$ de~$\E{k}{\mathcal{A}}$ pour un certain $k\in\ensuremath{\mathbf{N}}$. D'apr\`es le lemme~\ref{lem:voisinagefibre}, il existe un voisinage ouvert~$V$ de~$y$ tel que $\varphi^{-1}(V) \subset U_{0}$. Quitte \`a restreindre~$V$, on peut supposer qu'il existe une immersion ferm\'ee~$i_{V}$ dans un ouvert~$V'$ de~$\E{l}{\mathcal{A}}$ pour un certain $l\in\ensuremath{\mathbf{N}}$. Posons $U :=\varphi^{-1}(V)$. Il existe alors une immersion ferm\'ee~$i_{U}$ dans un ouvert~$U'$ de~$\E{k}{\mathcal{A}}$. Notons $\psi \colon U \to V$ le morphisme induit par~$\varphi$.
D'apr\`es la proposition~\ref{prop:grapheimmersion}, le morphisme $\Gamma_{\psi} \colon U \to U\times_{\mathcal{A}} V$ est une immersion ferm\'ee. D'apr\`es le lemme~\ref{lem:produitouverts}, nous avons \'egalement une immersion ferm\'ee $j \colon U \times_{\mathcal{A}} V \to U'\times_{\mathcal{A}} V'$. En posant $\iota_{U} := j \circ \Gamma_{\psi}$ et $\iota_{V} := i_{V}$, on obtient le r\'esultat.
\end{proof}
\begin{lemm}\label{lem:recurrenceAn}
Soient $l,n \in \ensuremath{\mathbf{N}}$ avec $n>l$. Pour $m\in \cn{0}{n}$, notons $\varrho_{m} \colon \E{n}{\mathcal{A}} \to \E{m}{\mathcal{A}}$ le morphisme de projection sur les $m$~derni\`eres coordonn\'ees. Notons~$T$ la premi\`ere coordonn\'ee de~$\E{n}{\mathcal{A}}$.
Soient~$U$ un ferm\'e analytique d'un ouvert~$W'$ de~$\E{n}{\mathcal{A}}$ et $V$ un ferm\'e analytique d'un ouvert~$V'$ de~$\E{l}{\mathcal{A}}$ tels que $\varrho_{l}(U) = V$.
Soient $x \in U$ et $y\in V$ tels que $\varrho_{l}^{-1}(y) \cap U=\{x\}$.
Alors il existe un voisinage ouvert~$W'_{n-1}$ de~$\varrho_{n-1}(x)$ dans~$\varrho_{n-1}(W')$ et un polyn\^ome $\omega \in \mathcal{O}(W'_{n-1})[T]$ unitaire non constant tel que $U \cap \varrho_{n-1}^{-1}(W'_{n-1})$ soit un ferm\'e analytique du ferm\'e analytique~$Z_{\omega}$ de~$\varrho_{n-1}^{-1}(W'_{n-1})$ d\'efini par~$\omega$.
\end{lemm}
\begin{proof}
Posons $x_{n-1}:=\varrho_{n-1}(x)$. Puisque $\varrho_{l}^{-1}(y) \cap U=\{x\}$, on a $\varrho_{n-1}^{-1}(x_{n-1}) \cap U= \{x\}$. Par cons\'equent, il existe $g \in \mathcal{O}_{\E{n}{\mathcal{A}},x}$ appartenant \`a l'id\'eal de~$U$, qui s'annule en~$x$ mais n'est pas identiquement nul sur $\varrho_{n-1}^{-1}(x_{n-1})$.
Quitte \`a restreindre~$W'$, nous pouvons supposer que $g \in \mathcal{O}(W')$ et que~$U$ est contenu dans le lieu d'annulation de~$g$ sur~$W'$. Posons $W'_{n-1} := \varrho_{n-1}(W')$. D'apr\`es le corollaire~\ref{cor:projectionouverte}, c'est un ouvert de~$\E{n-1}{\mathcal{A}}$. D'apr\`es le th\'eor\`eme de pr\'eparation de Weierstra\ss{} \ref{thm:preparationW}, quitte \`a restreindre encore~$W'$, on peut supposer qu'il existe $e\in \mathcal{O}(W')$ inversible et $\omega \in \mathcal{O}(W'_{n-1})[T]$ unitaire tel que $g = e \omega$. Posons $d := \deg(\omega)$. Les propri\'et\'es de~$g$ assurent que $d\ge 1$.
Notons $Z_{\omega}$ le ferm\'e analytique de~$\varrho_{n-1}^{-1}(W'_{n-1})$ d\'efini par~$\omega$. D'apr\`es le lemme \ref{cas_particulier_coh\'erence}, le morphisme $\varrho_{n-1}' \colon Z_{\omega}\to W'_{n-1}$ induit par~$\varrho_{n-1}$ est fini. Par cons\'equent, quitte \`a r\'eduire~$W'_{n-1}$, et remplacer $W'$ par $W' \cap \varrho_{n-1}^{-1}(W'_{n-1})$, on peut supposer que~$Z_{\omega}$ est une r\'eunion finie d'ouverts disjoints et que l'un de ces ouverts est~$Z_{\omega} \cap W'$. Dans cette situation, $Z_{\omega} \cap W'$ est un ferm\'e analytique de~$Z_{\omega}$ et, par cons\'equent, $U$ aussi.
\end{proof}
D\'emontrons un autre cas particulier du r\'esultat, celui o\`u le morphisme provient d'une projection entre espaces affines analytiques.
\begin{lemm}\label{lem:pousseprojection}
Soit~$\varphi \colon X\to Y$ un morphisme d'espaces $\mathcal{A}$-analytiques. Soient $x\in X$ et $y\in Y$ tels que $\varphi^{-1}(y) = \{x\}$.
Supposons qu'il existe des entiers $n,l$ avec $n\ge l$, des ouverts~$W'$ de~$\E{n}{\mathcal{A}}$ et $V'$ de~$\E{l}{\mathcal{A}}$ et des immersions ferm\'ees $\iota_{X} \colon X \to W'$ et $\iota_{Y} \colon Y \to V'$ tels que, en notant $\varrho_{l} \colon \E{n}{\mathcal{A}} \to \E{l}{\mathcal{A}}$ la projection sur les $l$~derni\`eres coordonn\'ees, on ait $\varrho_{l}(W') = V'$ et le diagramme
\[\begin{tikzcd}
X \arrow[d, "\varphi"] \arrow[r, "\iota_{X}"] & W' \arrow[d, "\varrho_{l, |W'}"]\\
Y \arrow[r, "\iota_{Y}"] & V'
\end{tikzcd}\]
commute.
Alors, il existe un voisinage ouvert~$Y'$ de~$y$ dans~$Y$ tel que
\begin{enumerate}[i)]
\item le morphisme $\psi \colon \varphi^{-1}(Y') \to Y'$ induit par~$\varphi$ soit fini~;
\item pour tout faisceau coh\'erent~$\mathcal{F}$ sur~$ \varphi^{-1}(Y')$, le faisceau $\psi_{\ast} \mathcal{F}$ soit coh\'erent.
\end{enumerate}
\end{lemm}
\begin{proof}
D\'emontrons le r\'esultat par r\'ecurrence sur $n \ge l$. Si $n=l$, on a $W'=V'$, donc $\varrho_{l}=\mathrm{id}$ et $\iota_{Y} \circ \varphi = \iota_{X}$. Par cons\'equent, on a $(\iota_{Y})_{\ast} \varphi_{\ast} \mathcal{F} = (\iota_{X})_{\ast} \mathcal{F}$. Le r\'esultat d\'ecoule alors des lemmes~\ref{lem:compositionfini} et~\ref{lem:immersionfermeecoherence}.
Supposons maintenant que $n> l$ et que le r\'esultat est vrai pour~$n-1$. Notons $\varrho_{n-1} \colon \E{n}{\mathcal{A}} \to \E{n-1}{\mathcal{A}}$ la projection sur les $n-1$ derni\`eres coordonn\'ees et~$T$ la premi\`ere coordonn\'ee de~$\E{n}{\mathcal{A}}$. D'apr\`es le lemme~\ref{lem:recurrenceAn}, il existe un voisinage ouvert~$W'_{n-1}$ de~$\varrho_{n-1}(x)$ dans~$\varrho_{n-1}(W')$ et un polyn\^ome $\omega \in \mathcal{O}(W'_{n-1})[T]$ unitaire non constant tel que $X \cap \varrho_{n-1}^{-1}(W'_{n-1})$ soit un ferm\'e analytique du ferm\'e analytique~$Z_{\omega}$ de~$\varrho_{n-1}^{-1}(W'_{n-1})$ d\'efini par~$\omega$. Quitte \`a remplacer $W'$ par $W' \cap \varrho_{n-1}^{-1}(W'_{n-1})$ et les autres espaces en cons\'equence,
on peut supposer que $W'_{n-1} = \varrho_{n-1}(W')$.
Par hypoth\`ese, l'immersion ferm\'ee~$\iota_{X}$ peut se factoriser sous la forme $\iota_{X} = j_{Z_{\omega}} \circ j_{X}$, o\`u $j_{X} \colon X \to Z_{\omega}$ et $j_{Z_{\omega}} \colon Z_{\omega} \to \varrho_{n-1}^{-1}(W'_{n-1})$ sont deux immersions ferm\'ees. On a $\varrho_{n-1} \circ \iota_{X} = \varrho_{n-1,|Z_{\omega}} \circ j_{X}$. D'apr\`es le lemme~\ref{cas_particulier_coh\'erence}, c'est un morphisme fini et, d'apr\`es les lemmes~\ref{lem:immersionfermeecoherence} et~\ref{cas_particulier_coh\'erence}, pour tout faisceau coh\'erent~$\mathcal{F}$ sur~$X$, le faisceau
\[(\varrho_{n-1})_{\ast}(\iota_{X})_{\ast}\mathcal{F} = (\varrho_{n-1,|Z_{\omega}})_{\ast} (j_{X})_{\ast} \mathcal{F}\]
est un faisceau coh\'erent sur~$W'_{n-1}$.
Le raisonnement pr\'ec\'edent appliqu\'e avec le faisceau structural montre que $(\varrho_{n-1})_{\ast}(\iota_{X})_{\ast}\mathcal{O}_{X}$ est coh\'erent. Son support $S := \varrho_{n-1}(\iota_{X}(X))$ est donc un ferm\'e analytique de~$W'_{n-1}$, d'apr\`es la remarque~\ref{rem:supportfaisceau}. En particulier, il est muni d'une structure d'espace $\mathcal{A}$-analytique. Notons $\varrho'_{l} \colon \E{n-1}{\mathcal{A}} \to \E{l}{\mathcal{A}}$ la projection sur les $l$~derni\`eres coordonn\'ees et $\varrho' \colon S \to Y$ le morphisme induit. On a $(\varrho')^{-1}(y) = \{\varrho_{n-1}(x)\}$. Par hypoth\`ese de r\'ecurrence, quitte \`a restreindre~$S$ et~$Y$, on peut supposer que~$\varrho'$ est fini et que, pour tout faisceau coh\'erent~$\mathcal{G}$ sur~$S$, le faisceau $(\varrho')_{\ast} \mathcal{G}$ est coh\'erent. Le morphisme $\varphi = \varrho' \circ \varrho_{n-1} \circ \iota_{X}$ est alors fini et, pour tout faisceau coh\'erent~$\mathcal{F}$ sur~$X$, le faisceau
\[ \varphi_{\ast}\mathcal{F} =
(\varrho')_{\ast}(\varrho_{n-1})_{\ast}(\iota_{X})_{\ast}\mathcal{F}\]
est coh\'erent.
\end{proof}
Nous pouvons maintenant d\'emontrer le r\'esultat d\'esir\'e en toute g\'en\'eralit\'e.
\begin{theo}\label{thm:fini}\index{Morphisme analytique!fini!image d'un faisceau par un}\index{Faisceau!coherent@coh\'erent}
\index{Faisceau!image d'un|see{Immersion ferm\'ee et Morphisme fini}}
Soient~$\varphi \colon X\to Y$ un morphisme fini d'espaces $\mathcal{A}$-analytiques et~$\mathcal{F}$ un faisceau coh\'erent sur~$X$. Alors, le faisceau~$\varphi_*\mathcal{F}$ est coh\'erent.
\end{theo}
\begin{proof}
La coh\'erence \'etant une notion locale, il suffit de montrer le r\'esultat au voisinage de tout point de~$Y$. Soit $y\in Y$.
Si~$y$ n'appartient pas \`a l'image de~$\varphi$, puisque~$\varphi$ est ferm\'e, il existe un voisinage~$V$ de~$y$ tel que~$\varphi^{-1}(V)$ soit vide. Par cons\'equent, $\varphi_*\mathcal{F}$ est nul, et donc coh\'erent, au voisinage de~$y$.
Supposons que~$y$ appartient \`a l'image de~$\varphi$. Notons $x_{1},\dotsc,x_{t}$ ses ant\'ec\'edents par~$\varphi$. Puisque~$\varphi$ est fini, d'apr\`es le lemme~\ref{lem:voisinagefibre}, il existe un voisinage~$V$ de~$y$ tel que $\varphi^{-1}(V)$ puisse s'\'ecrire comme union disjointe finie d'ouverts $\bigsqcup_{i=1}^t U_{i}$, o\`u, pour tout $i\in \cn{1}{t}$, $U_{i}$ contient~$x_{i}$. Pour tout $i\in \cn{1}{t}$, notons $\varphi_{i} \colon U_{i} \to V$ le morphisme induit par~$\varphi$. Il est encore fini.
Il suffit de montrer que, pour tout $i\in \cn{1}{t}$, il existe un voisinage ouvert~$V_{i}$ de~$y$ dans~$V$ tel que, en notant $\psi_{i} \colon \varphi_{i}^{-1}(V_{i}) \to V_{i}$ le morphisme induit par~$\varphi_{i}$, le faisceau $(\psi_{i})_{\ast} \mathcal{F}$ soit coh\'erent. En effet, dans ce cas, on peut choisir un voisinage ouvert~$W$ de~$y$ contenu dans tous les~$V_{i}$ et on a alors
\[ (\varphi_{\ast}\mathcal{F})_{|W} = \prod_{i=1}^t ((\psi_{i})_{\ast}\mathcal{F})_{|W},\]
ce qui permet de conclure.
Soit $i\in \cn{1}{t}$. Puisque~$\varphi_{i}$ est fini, le point~$x_{i}$ est isol\'e dans sa fibre et on peut donc appliquer le lemme~\ref{lem:finiprojection} avec~$\varphi_{i}$ et~$x_{i}$. On se trouve alors dans la situation du lemme~\ref{lem:pousseprojection}, qui permet de conclure.
\end{proof}
\begin{coro}\label{cor:imagefini}\index{Ferme analytique@Ferm\'e analytique}\index{Morphisme analytique!fini!image d'un ferm\'e par un}
Soit~$\varphi \colon X\to Y$ un morphisme fini d'espaces $\mathcal{A}$-analytiques. Alors $\varphi(X)$ est un ferm\'e analytique de~$Y$.
\end{coro}
\begin{proof}
D'apr\`es le th\'eor\`eme~\ref{thm:fini}, le faisceau $\varphi_{*}\mathcal{O}_{X}$ est coh\'erent. D'apr\`es la remarque~\ref{rem:supportfaisceau}, son support, qui n'est autre que~$\varphi(X)$, est un ferm\'e analytique de~$Y$.
\end{proof}
D\'emontrons finalement un crit\`ere de finitude locale.
\begin{theo}\label{thm:locfini}\index{Morphisme analytique!fini!en un point}
Soient~$\varphi \colon X\to Y$ un morphisme d'espaces $\mathcal{A}$-analytiques et~$x \in X$. Le morphisme~$\varphi$ est fini en~$x$ si, et seulement si, le point~$x$ est isol\'e dans sa fibre~$\varphi^{-1}(\varphi(x))$.
\end{theo}
\begin{proof}
Supposons que~$\varphi$ est fini en~$x$. Alors, il existe un voisinage~$U$ de~$x$ dans~$X$ tel que $\varphi^{-1}(\varphi(x)) \cap U$ soit fini. On en d\'eduit que~$x$ est isol\'e dans $\varphi^{-1}(\varphi(x))$.
\medbreak
Supposons que~$x$ est isol\'e dans~$\varphi^{-1}(\varphi(x))$. Quitte \`a restreindre~$X$, on peut supposer que $\varphi^{-1}(\varphi(x)) = \{x\}$. Dans ce cas, le lemme~\ref{lem:finiprojection} s'applique. On se trouve alors dans la situation du lemme~\ref{lem:pousseprojection}, qui permet de conclure.
\end{proof}
\section{Stabilit\'e des morphismes finis et propres}\label{sec:stabilitemorphismesfinis}
Dans cette section, nous d\'emontrons que les morphismes finis et propres sont stables par changement de base.
\begin{prop}\label{stabilite_fini}\index{Morphisme analytique!propre}\index{Extension des scalaires}\index{Produit!fibre@fibr\'e}
Soit~$\varphi \colon X\to Y$ un morphisme d'espaces $\mathcal{A}$-analytiques.
\begin{enumerate}[i)]
\item Soit $f \colon \mathcal{A} \to \mathcal{B}$ un morphisme d'anneaux de Banach born\'e, o\`u~$\mathcal{B}$ est un anneau de base g\'eom\'etrique. Notons $\varphi_{\mathcal{B}} \colon X \ho{\mathcal{A}} \mathcal{B} \to Y\ho{\mathcal{A}} \mathcal{B}$ le morphisme d\'eduit de~$\varphi$ par extension des scalaires \`a~$\mathcal{B}$. Si $\varphi$ est propre (resp. fini), alors $\varphi_{\mathcal{B}}$ est propre (resp. fini).
\item Soit $\psi \colon Z\to Y$ un morphisme d'espaces $\mathcal{A}$-analytiques. Notons $\varphi_{Z} \colon X \times_{Y} Z \to Z$ le morphisme d\'eduit de~$\varphi$ par changement de base \`a~$Z$. Si $\varphi$ est propre (resp. fini), alors $\varphi_{Z}$ est propre (resp. fini).
\end{enumerate}
\end{prop}
\begin{proof}
i) Supposons que~$\varphi$ est propre. La propri\'et\'e \'etant locale au but, d'apr\`es le lemme~\ref{lem:extensionBouverts}, on peut supposer que~$Y$ est un ferm\'e analytique d'un ouvert de~$\E{n}{\mathcal{A}}$. D'apr\`es le lemme~\ref{lem:extensionBouverts} et la proposition~\ref{prop:extensionBaffine}, $Y\ho{\mathcal{A}}\mathcal{B}$ est alors un ferm\'e analytique d'un ouvert de~$\E{n}{\mathcal{B}}$. En particulier, $|Y\ho{\mathcal{A}}\mathcal{B}|$ est s\'epar\'e.
On a un diagramme commutatif
\[\begin{tikzcd}
X \ho{\mathcal{A}}\mathcal{B} \arrow[r, "p_{X}"] \arrow[d, "\varphi_{\mathcal{B}}"] & X \arrow[d, "\varphi"] \\
Y\ho{\mathcal{A}} \mathcal{B} \arrow[r, "p_{Y}"] & Y
\end{tikzcd}.\]
D'apr\`es le corollaire~\ref{stabilit\'e_propre_extension_scalaire}, le morphisme~$p_{X}$ est propre. On d\'eduit alors du lemme~\ref{lem:compositionpropre} que $\varphi \circ p_{X} = p_{Y} \circ \varphi_{\mathcal{B}}$ est propre, puis que $\varphi_{\mathcal{B}}$ est propre, comme d\'esir\'e.
\medbreak
Supposons que~$\varphi$ est fini. D'apr\`es le raisonnement pr\'ec\'edent, $\varphi_{\mathcal{B}}$ est propre, et il suffit donc de montrer que ses fibres sont finies.
Soit $z\in Y\ho{\mathcal{A}} \mathcal{B}$. Posons $y := p_{Y}(z)$. L'exemple~\ref{ex:morphismex} fournit un morphisme $\lambda_{z} \colon \mathcal{M}(\mathcal{H}(z)) \to Y\ho{\mathcal{A}} \mathcal{B}$. D'apr\`es la proposition~\ref{preimage}, il suffit de montrer que l'ensemble sous-jacent \`a l'espace
\[(X\ho{\mathcal{A}} \mathcal{B})\times_{Y\ho{\mathcal{A}} \mathcal{B}} \mathcal{M}(\mathcal{H}(z)) := \big((X\ho{\mathcal{A}} \mathcal{B})\ho{\mathcal{B}} \mathcal{H}(z)\big) \times_{(Y\ho{\mathcal{A}} \mathcal{B}) \ho{\mathcal{B}} \mathcal{H}(z)} \mathcal{M}(\mathcal{H}(z))\]
est fini. Puisque le diagramme
\[\begin{tikzcd}
Y \ho{\mathcal{A}}\mathcal{B} \arrow[r, "p_{Y}"] \arrow[d] & Y \arrow[d] \\
\mathcal{M}(\mathcal{B}) \arrow[r] & \mathcal{M}(\mathcal{A})
\end{tikzcd}\]
commute, en utilisant les lemmes~\ref{lem:BB'} et~\ref{lem:produitifbreextensionscalaires}, on obtient une suite d'isomorphismes
\begin{align*}
&(X\ho{\mathcal{A}} \mathcal{B})\times_{Y\ho{\mathcal{A}} \mathcal{B}} \mathcal{M}(\mathcal{H}(z))\\
\simeq\ & (X\ho{\mathcal{A}} \mathcal{H}(z)) \times_{Y\ho{\mathcal{A}} \mathcal{H}(z)} \mathcal{M}(\mathcal{H}(z))\\
\simeq\ & \big((X\ho{\mathcal{A}} \mathcal{H}(y))\ho{\mathcal{H}(y)} \mathcal{H}(z)\big) \times_{(Y\ho{\mathcal{A}} \mathcal{H}(y)) \ho{\mathcal{H}(y)} \mathcal{H}(z)} \mathcal{M}(\mathcal{H}(z))\\
\simeq\ & \big( (X\ho{\mathcal{A}} \mathcal{H}(y)) \times_{Y\ho{\mathcal{A}} \mathcal{H}(y) } \mathcal{M}(\mathcal{H}(y)) \big) \ho{\mathcal{H}(y)} \mathcal{H}(z).
\end{align*}
D'apr\`es la proposition~\ref{preimage}, l'ensemble sous-jacent \`a $F := (X\ho{\mathcal{A}} \mathcal{H}(y)) \times_{Y\ho{\mathcal{A}} \mathcal{H}(y) } \mathcal{M}(\mathcal{H}(y))$ est hom\'eomorphe \`a~$\varphi^{-1}(y)$. Par cons\'equent, $F$ est fini et la seconde projection $F \to \mathcal{M}(\mathcal{H}(y))$ est un morphisme fini d'espaces $\mathcal{H}(y)$-analytiques, d'apr\`es le th\'eor\`eme~\ref{thm:locfini}.
Pour montrer que $F \ho{\mathcal{H}(y)} \mathcal{H}(z)$ est fini, on peut supposer que~$F$ est r\'eduit \`a un point, disons~$x$. Le th\'eor\`eme~\ref{thm:fini} implique que~$\mathcal{O}_{F,x}$ est un $\mathcal{H}(y)$-espace vectoriel de dimension finie. En particulier, l'extension $\mathcal{H}(x)/\mathcal{H}(y)$ est finie.
L'espace~$F$ \'etant r\'eduit \`a un point, on peut l'identifier \`a un ferm\'e analytique d'un ouvert de~$\E{n}{\mathcal{H}(y)}$. D'apr\`es la proposition~\ref{prop:extensionBaffine} et le lemme~\ref{lem:extensionBouverts}, l'ensemble sous-jacent \`a~$F \ho{\mathcal{H}(y)} \mathcal{H}(z)$ s'identifie alors \`a~$\tilde{f}_{n}^{-1}(F)$. Puisque~$F$ est une partie compacte spectralement convexe de~$\E{n}{\mathcal{H}(y)}$, d'apr\`es la proposition~\ref{extension_scalaire_spectral}, $\tilde{f}_{n}^{-1}(F)$ est une partie compacte spectralement convexe de~$\E{n}{\mathcal{H}(z)}$ et on a
\[ \mathcal{B}(\tilde{f}_{n}^{-1}(F)) \simeq \mathcal{B}(F) \hosp{\mathcal{H}(y)} \mathcal{H}(z) = \mathcal{H}(x) \ho{\mathcal{H}(y)} \mathcal{H}(z).\]
Puisque l'extension $\mathcal{H}(x)/\mathcal{H}(y)$ est finie, $\mathcal{H}(x) \ho{\mathcal{H}(y)} \mathcal{H}(z)$ est une $\mathcal{H}(z)$-alg\`ebre finie et
\[ \tilde{f}_{n}^{-1}(F) = \mathcal{M}(\mathcal{B}(\tilde{f}_{n}^{-1}(F))) \simeq \mathcal{M}(\mathcal{H}(x) \ho{\mathcal{H}(y)} \mathcal{H}(z))\]
est finie, ce qui termine la d\'emonstration.
\medbreak
ii) Supposons que~$\varphi$ est propre, c'est-\`a-dire que l'application entre espaces topologiques $|\varphi| \colon |X| \to |Y|$ est propre. Par d\'efinition, la propret\'e est pr\'eserv\'ee par changement de base, donc l'application $|X| \times_{|Y|} |Z| \to |Z|$ induite par~$|\varphi|$ et~$|\psi|$ est propre. En outre, d'apr\`es la proposition~\ref{produit_fibr\'e_propre}, l'application naturelle $|X\times_YZ|\to|X|\times_{|Y|}|Z|$ est propre. Le lemme~\ref{lem:compositionpropre} assure que la compos\'ee~$|\varphi_{Z}|$ des applications pr\'ec\'edentes est propre.
\medbreak
Supposons que~$\varphi$ est fini. D'apr\`es le raisonnement pr\'ec\'edent, $\varphi_{Z}$ est propre, et il suffit donc de montrer que ses fibres sont finies.
Soit $z\in Z$. Posons $y := \psi(z)$. L'exemple~\ref{ex:morphismex} fournit un morphisme $\lambda_{z} \colon \mathcal{M}(\mathcal{H}(z)) \to Z$. D'apr\`es la proposition~\ref{preimage}, il suffit de montrer que l'ensemble sous-jacent \`a l'espace
\[(X\times_YZ)\times_{Z} \mathcal{M}(\mathcal{H}(z)) := \big((X\times_YZ)\ho{\mathcal{A}} \mathcal{H}(z)\big) \times_{Z\ho{\mathcal{A}} \mathcal{H}(z)} \mathcal{M}(\mathcal{H}(z))\]
est fini. En utilisant le lemme~\ref{lem:produitifbreextensionscalaires}, on obtient la suite d'isomorphismes canoniques
\begin{align*}
& \big((X\times_YZ)\ho{\mathcal{A}} \mathcal{H}(z)\big) \times_{Z\ho{\mathcal{A}} \mathcal{H}(z)} \mathcal{M}(\mathcal{H}(z))\\
\simeq\ & \big((X\ho{\mathcal{A}} \mathcal{H}(z))\times_{Y\ho{\mathcal{A}} \mathcal{H}(z)}(Z\ho{\mathcal{A}} \mathcal{H}(z))\big) \times_{Z\ho{\mathcal{A}} \mathcal{H}(z)} \mathcal{M}(\mathcal{H}(z))\\
\simeq\ & (X\ho{\mathcal{A}} \mathcal{H}(z))\times_{Y\ho{\mathcal{A}} \mathcal{H}(z)} \mathcal{M}(\mathcal{H}(z)).
\end{align*}
D'apr\`es la proposition~\ref{preimage}, ce dernier espace est hom\'eomorphe \`a une fibre du morphisme $\varphi_{\mathcal{H}(z)} \colon X\ho{\mathcal{A}} \mathcal{H}(z) \to Y\ho{\mathcal{A}} \mathcal{H}(z)$ d\'eduit de~$\varphi$ par extension des scalaires \`a~$\mathcal{H}(z)$. D'apr\`es~i), ce morphisme est fini et le r\'esultat s'ensuit.
\end{proof}
\section{Changement de base fini}\label{sec:chgtbasefini}
Soient $\varphi \colon X\to Y$ un morphisme fini d'espaces $\mathcal{A}$-analytiques et $\psi \colon Z\to Y$ un morphisme d'espaces $\mathcal{A}$-analytiques. Consid\'erons le carr\'e cart\'esien
\[\begin{tikzcd}
Z\times_Y X \arrow[r, "\rho"] \arrow[d, "\chi"] & X \arrow[d, "\varphi"] \\
Z \arrow[r, "\psi"] & Y
\end{tikzcd}\ .\]
Le but de cette section est de montrer que, pour tout faisceau de $\mathcal{O}_{X}$-modules~$\mathcal{F}$, le morphisme naturel
\[\psi^*\varphi_*\mathcal{F}\to \chi_*\rho^*\mathcal{F}\]
est un isomorphisme.
\begin{defi}
Soit~$\varphi \colon X\to Y$ un morphisme fini d'espaces $\mathcal{A}$-analytiques. Nous dirons que $\varphi$ \emph{satisfait la propri\'et\'e de changement de base} si, pour tout morphisme d'espaces $\mathcal{A}$-analytiques $\psi \colon Z\to Y$ et tout faisceau de $\mathcal{O}_{X}$-modules~$\mathcal{F}$, le morphisme naturel
\[\psi^*\varphi_*\mathcal{F}\to \chi_*\rho^*\mathcal{F}\]
o\`u~$\chi$ et~$\rho$ sont d\'efinis comme pr\'ec\'edemment, est un isomorphisme.
\end{defi}
\begin{lemm}\label{lem:cbflocal}
Soit $\varphi \colon X\to Y$ un morphisme fini d'espaces $\mathcal{A}$-analytiques. Supposons que, pour tout morphisme d'espaces $\mathcal{A}$-analytiques $\psi \colon Z\to Y$, tout $x\in X$ et tout $z\in Z$ tels que $\varphi(x) = \psi(z)$, le morphisme naturel
\[\mathcal{O}_{X,x}\otimes_{\mathcal{O}_{Y,\varphi(x)}}\mathcal{O}_{Z,z} \to \prod_{t\in\rho^{-1}(x) \cap \chi^{-1}(z)}\mathcal{O}_{Z\times_{Y} X,t}\]
o\`u~$\chi$ et~$\rho$ sont d\'efinis comme pr\'ec\'edemment, est un isomorphisme. Alors, $\varphi$ satisfait la propri\'et\'e de changement de base.
\end{lemm}
\begin{proof}
Soit $\psi \colon Z\to Y$ un morphisme d'espaces $\mathcal{A}$-analytiques. Posons $T := Z\times_{Y} X$. Soit $\mathcal{F}$ un faisceau de $\mathcal{O}_{X}$-modules.
Il suffit de montrer que, pour tout $z\in Z$, le morphisme
\[(\psi^*\varphi_*\mathcal{F})_{z}\to (\chi_*\rho^*\mathcal{F})_{z}\]
est un isomorphisme.
Soit~$z\in Z$. Posons $y:=\psi(z)$. On a des isomorphismes naturels
\[(\psi^*\varphi_*\mathcal{F})_z\simeq\bigg(\prod_{x\in\varphi^{-1}(y)}\mathcal{F}_x\bigg)\otimes_{\mathcal{O}_{Y,y}}\mathcal{O}_{Z,z}\]
et
\[(\chi_*\rho^*\mathcal{F})_z\simeq \prod_{t\in\chi^{-1}(z)} \big(\mathcal{F}_{\rho(t)}\otimes_{\mathcal{O}_{X,\rho(t)}}\mathcal{O}_{T,t}\big).\]
Il suffit donc de montrer que pour tout~$x\in \varphi^{-1}(y)$, le morphisme naturel
\[\mathcal{F}_x\otimes_{\mathcal{O}_{Y,y}}\mathcal{O}_{Z,z} \to \prod_{t\in\rho^{-1}(x) \cap \chi^{-1}(z)}\mathcal{F}_{x}\otimes_{\mathcal{O}_{X,x}}\mathcal{O}_{T,t}\]
est un isomorphisme. Or, ce morphisme s'obtient en tensorisant par~$\mathcal{F}_{x}$ au-dessus de~$\mathcal{O}_{X,x}$ l'isomorphisme fourni par l'hypoth\`ese, le produit en jeu \'etant fini. Le r\'esultat s'en d\'eduit.
\end{proof}
\begin{defi}
Soit $\varphi \colon X\to Y$ un morphisme fini d'espaces $\mathcal{A}$-analytiques et soit $x\in X$. On dit que $\varphi$ \emph{satisfait la propri\'et\'e de changement de base en~$x$} si, pour tout morphisme d'espaces $\mathcal{A}$-analytiques $\psi \colon Z \to Y$ et tout $z\in \psi^{-1}(\varphi(x))$,
le morphisme naturel
\[\mathcal{O}_{X,x}\otimes_{\mathcal{O}_{Y,\varphi(x)}}\mathcal{O}_{Z,z} \to \prod_{t\in\rho^{-1}(x) \cap \chi^{-1}(z)}\mathcal{O}_{Z\times_{Y} X,t},\]
o\`u~$\chi$ et~$\rho$ sont d\'efinis comme pr\'ec\'edemment, est un isomorphisme.
\end{defi}
Commen\c cons par deux lemmes.
\begin{lemm}\label{lem:cbfcomposition}
Soient $\varphi_{1} \colon X_{1}\to X_{2}$ et $\varphi_{2} \colon X_{2} \to X_{3}$ des morphismes finis d'espaces $\mathcal{A}$-analytiques. Soit $x_{1}\in X_{1}$. Si $\varphi_{1}$ et $\varphi_{2}$ satisfont la propri\'et\'e de changement de base respectivement en~$x_{1}$ et~$\varphi_{1}(x_{1})$, alors $\varphi_{2} \circ \varphi_{1}$ satisfait la propri\'et\'e de changement de base en~$x_{1}$.
\qed
\end{lemm}
\begin{lemm}\label{lem:cbfYY'}
Soient $\varphi_{1} \colon X_{1}\to X_{2}$ et $\varphi_{2} \colon X_{2} \to X_{3}$ des morphismes finis d'espaces $\mathcal{A}$-analytiques. Soit~$x\in X_{1}$. Si $\varphi_{2} \circ \varphi_{1}$ satisfait la propri\'et\'e de changement de base en~$x$, alors $\varphi_{1}$ aussi.
\end{lemm}
\begin{proof}
Soit $\psi \colon Z \to X_{2}$ un morphisme d'espaces $\mathcal{A}$-analytiques. On a un isomorphisme canonique
\[Z \times_{X_{2}} X_{1} \xrightarrow[]{\sim} Z \times_{X_{3}} X_{1},\]
o\`u les morphismes d\'efinissant le produit fibr\'e de droite sont $\varphi_{2} \circ \psi$ et $\varphi_{2} \circ \varphi_{1}$. De m\^eme, pour tout $z \in \psi^{-1}(\varphi_{1}(x))$, on a un isomorphisme canonique
\[\mathcal{O}_{X_{1},x} \otimes_{\mathcal{O}_{X_{2},\varphi_{1}(x)}}\mathcal{O}_{Z,z} \xrightarrow[]{\sim} \mathcal{O}_{X_{1},x} \otimes_{\mathcal{O}_{X_{3},\varphi_{2}(\varphi_{1}(x))}}\mathcal{O}_{Z,z}.\]
Le r\'esultat s'en d\'eduit.
\end{proof}
Traitons maintenant deux cas particuliers du r\'esultat.
\begin{lemm}\label{lem:cbfomega}
Soit~$Y$ un espace $\mathcal{A}$-analytique et $\omega\in\mathcal{O}(Y)[T]$ un polyn\^ome unitaire non constant. Consid\'erons~$\E{1}{\mathcal{A}}$ avec coordonn\'ee~$T$ et notons $\pi \colon Y\times_{\mathcal{A}}\E{1}{\mathcal{A}} \to Y$ le morphisme de projection. Notons $Z_{\omega}$ le ferm\'e analytique de~$Y\times_{\mathcal{A}}\E{1}{\mathcal{A}} $ d\'efini par~$\omega$. Soit $x\in Z_{\omega}$ tel que $\pi_{|Z_{\omega}}^{-1}(\pi_{|Z_{\omega}}(x)) = \{x\}$. Alors $\pi_{|Z_{\omega}}$ satisfait la propri\'et\'e de changement de base en~$x$.
\end{lemm}
\begin{proof}
Soit $\psi \colon Z \to Y$ un morphisme d'espaces $\mathcal{A}$-analytiques. Soit $z\in Z$ tel que $\psi(z) = \pi(x)$. Posons $y := \pi(x)$. Consid\'erons le carr\'e cart\'esien
\[\begin{tikzcd}
T := Z\times_Y Z_{\omega} \arrow[r, "\rho"] \arrow[d, "\chi"] & Z_{\omega} \arrow[d, "\pi_{|Z_{\omega}}"] \\
Z \arrow[r, "\psi"] & Y
\end{tikzcd}\ .\]
D'apr\`es le lemme~\ref{lem:produitouverts}, le produit fibr\'e~$T$ s'identifie au ferm\'e analytique de~$Z\times_{\mathcal{A}} \E{1}{\mathcal{A}}$ d\'efini par le polyn\^ome $\psi^\sharp(\omega) \in \mathcal{O}(Z)[T]$. D'apr\`es la proposition~\ref{prop:fetoileenbasexact} et le lemme~\ref{cas_particulier_coh\'erence}, on a des isomorphismes naturels
\[ \prod_{t\in \chi^{-1}(z)} \mathcal{O}_{T,t} \simeq (\chi_*\mathcal{O}_{T})_z \simeq \mathcal{O}_{Z,z}[T]/(\psi^{\sharp}(\omega))\]
et
\[\mathcal{O}_{Z_{\omega},x} \simeq \big((\pi_{|Z_{\omega}})_*\mathcal{O}_{Z_{\omega}}\big)_{y} \simeq \mathcal{O}_{Y,y}[T]/(\omega).\]
On en d\'eduit que $\mathcal{O}_{Z_{\omega},x} \otimes_{\mathcal{O}_{Y,y}} \mathcal{O}_{Z,z} \simeq \prod_{t\in \chi^{-1}(z)} \mathcal{O}_{T,t} $. Puisque $\pi_{|Z_{\omega}}^{-1}(y) = \{x\}$, on a $\chi^{-1}(z) \subset \rho^{-1}(x)$. Le r\'esultat s'en d\'eduit.
\end{proof}
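Notons que la derni\`ere d\'eduction de cette preuve repose sur l'identit\'e \'el\'ementaire d'alg\`ebre commutative suivante : pour tout anneau~$A$, toute $A$-alg\`ebre~$B$ et tout polyn\^ome $\omega \in A[T]$, le morphisme naturel
\[ \big(A[T]/(\omega)\big) \otimes_{A} B \longrightarrow B[T]/(\omega B[T])\]
est un isomorphisme ; on l'applique ici \`a $A = \mathcal{O}_{Y,y}$ et $B = \mathcal{O}_{Z,z}$, l'image de~$\omega$ dans~$B[T]$ \'etant~$\psi^{\sharp}(\omega)$.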
\begin{lemm}\label{lem:cbfprojection}
Soit~$\varphi \colon X\to Y$ un morphisme d'espaces $\mathcal{A}$-analytiques. Soient $x\in X$ et $y\in Y$ tels que $\varphi^{-1}(y) = \{x\}$.
Supposons qu'il existe des entiers $n,l$ avec $n\ge l$, des ouverts~$W'$ de~$\E{n}{\mathcal{A}}$ et $V'$ de~$\E{l}{\mathcal{A}}$ et des immersions ferm\'ees $\iota_{X} \colon X \to W'$ et $\iota_{Y} \colon Y \to V'$ tels que, en notant $\varrho_{l} \colon \E{n}{\mathcal{A}} \to \E{l}{\mathcal{A}}$ la projection sur les $l$~derni\`eres coordonn\'ees, on ait $\varrho_{l}(W') = V'$ et le diagramme
\[\begin{tikzcd}
X \arrow[d, "\varphi"] \arrow[r, "\iota_{X}"] & W' \arrow[d, "\varrho_{l, |W'}"]\\
Y \arrow[r, "\iota_{Y}"] & V'
\end{tikzcd}\]
commute. Alors $\varphi$ satisfait la propri\'et\'e de changement de base en~$x$.
\end{lemm}
\begin{proof}
D\'emontrons par r\'ecurrence sur~$n\ge l$ que $\varphi$ satisfait la propri\'et\'e de changement de base en~$x$.
Si $n=l$, alors $V' = W'$ et $\varrho_{l} = \mathrm{id}$. D'apr\`es le lemme~\ref{lem:cbfYY'}, on peut remplacer~$\varphi$ par sa compos\'ee avec~$\iota_{Y}$ et donc supposer que $Y=V'$ et que $\iota_{Y} =\mathrm{id}$. On a alors $\varphi = \iota_{X}$. Soit $\psi \colon Z\to Y$ un morphisme d'espaces $\mathcal{A}$-analytiques et reprenons les notations du d\'ebut de la section. D'apr\`es le lemme~\ref{lem:produitouverts}, le morphisme~$\chi$ est alors le morphisme d'inclusion d'un ferm\'e analytique d\'efini par le m\^eme id\'eal que~$X$ dans~$W'$. Pour tout $z\in Z$ tel que $\psi(z) = \varphi(x)$, la fibre $\chi^{-1}(z)$ contient donc un unique point, disons~$t$, et nous avons $\mathcal{O}_{T,t} = \mathcal{O}_{Z,z} \otimes_{\mathcal{O}_{W',\varphi(x)}} \mathcal{O}_{X,x}$. Le r\'esultat s'en d\'eduit.
Supposons que $n >l$ et que le r\'esultat est satisfait pour~$n-1$. Notons $\varrho_{n-1} \colon \E{n}{\mathcal{A}} \to \E{n-1}{\mathcal{A}}$ le morphisme de projection sur les $n-1$ derni\`eres coordonn\'ees et~$T$ la premi\`ere coordonn\'ee de~$\E{n}{\mathcal{A}}$. D'apr\`es le lemme~\ref{lem:recurrenceAn}, il existe un voisinage ouvert~$W'_{n-1}$ de~$\varrho_{n-1}(x)$ dans~$\varrho_{n-1}(W')$ et un polyn\^ome $\omega \in \mathcal{O}(W'_{n-1})[T]$ unitaire non constant tel que $X \cap \varrho_{n-1}^{-1}(W'_{n-1})$ soit un ferm\'e analytique du ferm\'e analytique~$Z_{\omega}$ de~$\varrho_{n-1}^{-1}(W'_{n-1})$ d\'efini par~$\omega$. Quitte \`a remplacer $W'$ par $W' \cap \varrho_{n-1}^{-1}(W'_{n-1})$ et les autres espaces en cons\'equence, on peut supposer que $W'_{n-1} = \varrho_{n-1}(W')$.
D'apr\`es le lemme~\ref{cas_particulier_coh\'erence} et le corollaire~\ref{cor:imagefini}, $X_{n-1} := \varrho_{n-1}(X)$ est un ferm\'e analytique de~$W'_{n-1}$. En particulier, on peut le munir d'une structure d'espace $\mathcal{A}$-analytique. Notons $\varrho'_{n-1} \colon X \to X_{n-1}$ le morphisme induit par~$\varrho_{n-1}$. D'apr\`es le lemme~\ref{lem:cbfomega}, $\varrho'_{n-1}$ satisfait la propri\'et\'e de changement de base en~$x$. Notons $\varrho_{n-1,l} \colon \E{n-1}{\mathcal{A}} \to \E{l}{\mathcal{A}}$ le morphisme de projection sur les $l$~derni\`eres coordonn\'ees et $\varrho'_{n-1,l} \colon X_{n-1} \to Y$ le morphisme induit. Par hypoth\`ese de r\'ecurrence, $\varrho'_{n-1,l}$ satisfait la propri\'et\'e de changement de base en~$\varrho_{n-1}(x)$. Le r\'esultat d\'ecoule alors du lemme~\ref{lem:cbfcomposition}.
\end{proof}
D\'emontrons, \`a pr\'esent, le r\'esultat en toute g\'en\'eralit\'e.
\begin{theo}\label{thm:formule_changement_base}\index{Changement de base fini|see{Morphisme analytique fini}}\index{Morphisme analytique!fini!changement de base d'un}\index{Produit!fibre@fibr\'e}
Soient $\varphi \colon X\to Y$ un morphisme fini d'espaces $\mathcal{A}$-analytiques et $\psi \colon Z\to Y$ un morphisme d'espaces $\mathcal{A}$-analytiques. Consid\'erons le carr\'e cart\'esien
\[\begin{tikzcd}
X\times_YZ \arrow[r, "\rho"] \arrow[d, "\chi"] & X \arrow[d, "\varphi"] \\
Z \arrow[r, "\psi"] & Y
\end{tikzcd}\ .\]
Alors, pour tout faisceau de $\mathcal{O}_{X}$-modules~$\mathcal{F}$, le morphisme naturel
\[\psi^*\varphi_*\mathcal{F}\to \chi_*\rho^*\mathcal{F}\]
est un isomorphisme.
\end{theo}
\begin{proof}
D'apr\`es le lemme~\ref{lem:cbflocal}, il suffit de montrer que, pour tout $x\in X$, $\varphi$ satisfait la propri\'et\'e de changement de base en~$x$.
Soit $x\in X$. Puisque la question est maintenant locale en~$x$, quitte \`a restreindre~$X$ et~$Y$, on peut supposer que~$\varphi^{-1}(\varphi(x)) = \{x\}$. On peut alors appliquer le lemme~\ref{lem:finiprojection} pour se retrouver dans la situation du lemme~\ref{lem:cbfprojection}, qui permet de conclure.
\end{proof}
En utilisant les m\^emes arguments, on peut d\'emontrer un r\'esultat similaire en rempla\c cant le changement de base par une extension des scalaires.
\begin{theo}\label{thm:formule_extension_scalaire}\index{Morphisme analytique!fini!changement de base d'un}\index{Extension des scalaires}
Soit $\varphi \colon X\to Y$ un morphisme fini d'espaces $\mathcal{A}$-analytiques. Soit $f\colon \mathcal{A} \to \mathcal{B}$ un morphisme d'anneaux de Banach born\'e, o\`u $\mathcal{B}$ est un anneau de base g\'eom\'etrique. Consid\'erons le carr\'e cart\'esien
\[\begin{tikzcd}
X\ho{\mathcal{A}} \mathcal{B} \arrow[r, "\pi_{X}"] \arrow[d, "\varphi_{\mathcal{B}}"] & X \arrow[d, "\varphi"] \\
Y\ho{\mathcal{A}} \mathcal{B} \arrow[r, "\pi_{Y}"] & Y
\end{tikzcd}\ .\]
Alors, pour tout faisceau de $\mathcal{O}_{X}$-modules~$\mathcal{F}$, le morphisme naturel
\[(\pi_{Y})^*\varphi_*\mathcal{F}\to (\varphi_{\mathcal{B}})_*(\pi_{X})^*\mathcal{F}\]
est un isomorphisme.
\qed
\end{theo}
\section{Nullstellensatz de R\"uckert}\label{sec:Ruckert}
Dans cette section, nous d\'emontrons une version analytique du Nullstellensatz, connue en g\'eom\'etrie analytique complexe sous le nom de Nullstellensatz de R\"uckert.
Dans sa premi\`ere version, il s'agit de montrer que si une fonction s'annule sur le support d'un faisceau coh\'erent, alors une puissance de cette fonction annule le faisceau. Nous suivons ici la strat\'egie de \cite[\S 3.2 et \S 4.1.5]{Gr-Re2}.
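Donnons d\`es \`a pr\'esent un exemple \'el\'ementaire de ce ph\'enom\`ene. Sur la droite $\E{1}{\mathcal{A}}$, de coordonn\'ee~$T$, consid\'erons le faisceau coh\'erent $\mathcal{F} := \mathcal{O}/(T^{2})$ : son support est le ferm\'e analytique d\'efini par~$T$, la fonction $f := T$ s'annule en tout point de ce support et l'on a $f\mathcal{F} \ne 0$ mais $f^{2}\mathcal{F} = 0$ : une puissance de~$f$ est donc en g\'en\'eral n\'ecessaire.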
Commen\c{c}ons par d\'emontrer plusieurs cas particuliers du r\'esultat. Comme de coutume, nous noterons $T_{1},\dotsc,T_{n}$ les coordonn\'ees sur~$\E{n}{\mathcal{A}}$. Rappelons la d\'efinition~\ref{def:projection} concernant les projections.
\begin{lemm}\label{lem:Ruckertloctr}
Soient~$U$ un ouvert de~$\E{n}{\mathcal{A}}$ et~$x$ un point de~$U$.
Supposons que $x$~est purement localement transcendant au-dessus de $b := \pi_{n}(x)$. Soient~$\mathcal{F}$ un faisceau coh\'erent sur~$U$ et $f\in\mathcal{O}_{\E{n}{\mathcal{A}}}(U)$ une fonction analytique nulle en tout point du support de~$\mathcal{F}$. Alors, il existe~$d\in\ensuremath{\mathbf{N}}$ tel que~$f^d\mathcal{F}_x=0$.
\end{lemm}
\begin{proof}
Si l'image de~$f$ dans~$\mathcal{O}_{\E{n}{\mathcal{A}},x}$ est nulle, alors le r\'esultat vaut avec $d=1$. Supposons donc que l'image de~$f$ dans~$\mathcal{O}_{\E{n}{\mathcal{A}},x}$ n'est pas nulle.
D'apr\`es le th\'eor\`eme~\ref{rigide}, si~$\mathcal{O}_{B,b}$ est un corps fort (resp. un anneau fortement de valuation discr\`ete), alors il en va de m\^eme pour $\mathcal{O}_{\E{n}{\mathcal{A}},x}$.
Supposons que $\mathcal{O}_{\E{n}{\mathcal{A}},x}$ est un corps fort. Alors, on a $f(x) \ne 0$, donc $x$ n'appartient pas au support de~$\mathcal{F}$ et on a donc $\mathcal{F}_{x}= 0$.
Supposons que $\mathcal{O}_{\E{n}{\mathcal{A}},x}$ est un anneau fortement de valuation discr\`ete. Choisissons-en une uniformisante~$\varpi_{x}$. La fonction~$f$ peut alors s'\'ecrire de fa\c con unique sous la forme $f = h \varpi_x^l$ dans~$\mathcal{O}_{\E{n}{\mathcal{A}},x}$, o\`u $h$ est un \'el\'ement inversible de~$\mathcal{O}_{\E{n}{\mathcal{A}},x}$ et $l$ un entier non nul.
Par hypoth\`ese, le support de~$\mathcal{F}$ est contenu dans le ferm\'e analytique d\'efini par~$f$. Au voisinage de~$x$, il est donc contenu dans le ferm\'e analytique d\'efini par~$\varpi_{x}$. En particulier, d'apr\`es le corollaire~\ref{cor:projectionouverte}, ce support n'est donc pas un voisinage de~$x$. Or le support de~$\mathcal{F}$ co\"incide avec le lieu des z\'eros de son id\'eal annulateur, qui est coh\'erent. On en d\'eduit qu'il existe $g \in \mathcal{O}_{\E{n}{\mathcal{A}},x}$ non nul tel que $g\mathcal{F}_x$ soit nul. Puisque $\mathcal{O}_{\E{n}{\mathcal{A}},x}$ est un anneau de valuation discr\`ete d'uniformisante~$\varpi_{x}$, on en d\'eduit qu'il existe~$v\in\ensuremath{\mathbf{N}}$ tel que~$\varpi_x^v \mathcal{F}_x = 0$. Soit~$d\in \ensuremath{\mathbf{N}}$ tel que~$dl\geq v$. On a alors
\[f^d\mathcal{F}_x = h^d\varpi_x^{dl}\mathcal{F}_x = h^d\varpi_x^{dl-v}(\varpi_x^v\mathcal{F}_x)=0.\]
\end{proof}
\begin{lemm}\label{lem:Ruckertprojectioncorps}
Soient $k,n \in \ensuremath{\mathbf{N}}$ avec $n>k$.
Soient~$U$ un ouvert de~$\E{n}{\mathcal{A}}$ et $\mathcal{F}$ un faisceau coh\'erent sur~$U$ dont le support est contenu dans le ferm\'e analytique d\'efini par~$T_{k+1}$. Soit~$b \in B$ tel que $\mathcal{O}_{B,b}$ soit un corps fort. Soit $y \in \E{k}{\mathcal{A}}$ tel que $\pi_{k}(y) = b$ et $y$ soit purement localement transcendant au-dessus de~$b$. Notons~$0_{y} \in \E{n}{\mathcal{A}}$ le point~0 de la fibre $\pi_{n,k}^{-1}(y)$. Alors, il existe~$d\in\ensuremath{\mathbf{N}}$ tel que~$T_{k+1}^d \mathcal{F}_{0_{y}}=0$.
\end{lemm}
\begin{proof}
D'apr\`es le th\'eor\`eme~\ref{rigide}, $\mathcal{O}_{\E{k}{\mathcal{A}},y}$ est un corps fort.
Par hypoth\`ese, le support de~$\mathcal{F}$ est inclus dans le ferm\'e analytique de~$\E{n}{\mathcal{A}}$ d\'efini par~$T_{k+1}$. En particulier, le support de~$\mathcal{F}$ n'est pas un voisinage de~$0_{y}$. Or le support de~$\mathcal{F}$ co\"incide avec le lieu des z\'eros de son id\'eal annulateur, qui est coh\'erent. On en d\'eduit qu'il existe $g \in \mathcal{O}_{\E{n}{\mathcal{A}},0_{y}}$ non nul tel que $g\mathcal{F}_{0_{y}}$ soit nul. Quitte \`a restreindre~$U$, on peut supposer que~$g\mathcal{F} = 0$.
Notons~$Z_{g}$ le ferm\'e analytique de~$U$ d\'efini par~$g$ et $j\colon Z_{g} \to U$ l'immersion ferm\'ee associ\'ee. Le faisceau~$\mathcal{F}$ poss\`ede une structure naturelle de $\mathcal{O}_{U}/(g)$-module. Par cons\'equent, on a $j_{\ast}j^\ast\mathcal{F} = \mathcal{F}$.
D\'emontrons maintenant le r\'esultat par r\'ecurrence sur $n \ge k+1$.
\medbreak
$\bullet$ Supposons que $n=k+1$.
D'apr\`es la proposition~\ref{prop:disqueglobal} (appliqu\'ee avec $t=0$), $g$ peut s'\'ecrire sous la forme
\[ g = T_{k+1}^d \sum_{i\ge 0} \alpha_i \, T_{k+1}^i,\]
o\`u les~$\alpha_{i}$ sont des \'el\'ements de~$\mathcal{O}_{\E{k}{\mathcal{A}},y}$ avec $\alpha_{0} \ne 0$. Puisque $\mathcal{O}_{\E{k}{\mathcal{A}},y}$ est un corps fort, $\alpha_{0}$ est inversible et il en va de m\^eme pour $ \sum_{i\ge 0} \alpha_i \, T_{k+1}^i$. L'\'egalit\'e $g\mathcal{F}_{0_{y}} = 0$ entra\^ine donc $T_{k+1}^d \mathcal{F}_{0_{y}} = 0$.
\medbreak
$\bullet$ Supposons que $n > k+1$ et que l'\'enonc\'e est d\'emontr\'e pour $n-1$.
D'apr\`es la proposition~\ref{restriction_fibre}, la restriction de~$g$ \`a~$\pi_{n,k}^{-1}(y)$ n'est pas nulle. Posons $0_{n-1,y} := \pi_{n,n-1}(0_{y})$. D'apr\`es le lemme~\ref{changement_variable}, quitte \`a effectuer un changement de variables, on peut supposer que l'image de~$g$ dans $\mathcal{O}_{\pi_{n,n-1}^{-1}(0_{n-1,y}),0_{y}}$ n'est pas nulle. Par cons\'equent, quitte \`a restreindre~$U$, on peut supposer que $Z_{g} \cap \pi_{n,n-1}^{-1}(0_{n-1,y}) = \{0_{y}\}$.
D'apr\`es le th\'eor\`eme~\ref{thm:locfini}, quitte \`a r\'eduire~$U$, on peut supposer qu'il existe un voisinage ouvert~$V$ de~$0_{n-1,y}$ dans~$\E{n-1}{\mathcal{A}}$ tel que $\pi_{n,n-1}(U) \subset V$ et le morphisme $\pi'_{n,n-1} \colon U \cap Z_{g}\to V$ induit par~$\pi_{n,n-1}$ soit fini. D'apr\`es le th\'eor\`eme~\ref{thm:fini}, $(\pi'_{n,n-1})_*j^\ast\mathcal{F}$ est un faisceau coh\'erent sur~$V$. Son support est contenu dans le ferm\'e analytique d\'efini par~$T_{k+1}$ donc, par hypoth\`ese de r\'ecurrence, il existe~$d\in \ensuremath{\mathbf{N}}$ tel que
\[T_{k+1}^d \big((\pi'_{n,n-1})_*j^\ast\mathcal{F}\big)_{0_{n-1,y}} = T_{k+1}^d j^\ast\mathcal{F}_{0_{y}} = T_{k+1}^d \mathcal{F}_{0_{y}} = 0.\]
\end{proof}
\begin{lemm}\label{lem:Ruckertprojectionavd}
Soient $k,n \in \ensuremath{\mathbf{N}}$ avec $n>k$.
Soient~$U$ un ouvert de~$\E{n}{\mathcal{A}}$ et $\mathcal{F}$ un faisceau coh\'erent sur~$U$ dont le support est contenu dans le ferm\'e analytique d\'efini par~$T_{k+1}$. Soit~$b \in B$ tel que $\mathcal{O}_{B,b}$ soit un anneau fortement de valuation discr\`ete. Soit $y \in \E{k}{\mathcal{A}}$ tel que $\pi_{k}(y) = b$ et $y$ soit purement localement transcendant au-dessus de~$b$. Notons $0_{y} \in \E{n}{\mathcal{A}}$ le point~0 de la fibre $\pi_{n,k}^{-1}(y)$. Alors, il existe~$d\in\ensuremath{\mathbf{N}}$ tel que~$T_{k+1}^d \mathcal{F}_{0_{y}}=0$.
\end{lemm}
\begin{proof}
Notons $\varpi$ une uniformisante de~$\mathcal{O}_{B,b}$. D'apr\`es le th\'eor\`eme~\ref{rigide}, $\mathcal{O}_{\E{k}{\mathcal{A}},y}$ est encore un anneau fortement de valuation discr\`ete d'uniformisante~$\varpi$. Quitte \`a restreindre~$U$, on peut supposer que~$\varpi$ est d\'efinie sur~$U$. Notons~$Z_{\varpi}$ le ferm\'e analytique de~$U$ d\'efini par~$\varpi$ et $j \colon Z_{\varpi} \to U$ l'immersion ferm\'ee associ\'ee.
Il suffit de montrer qu'il existe $v,d\in \ensuremath{\mathbf{N}}$ tels que $\varpi^vT_{k+1}^d\mathcal{F}_{0_{y}}=0$. Si $v=0$, on obtient directement le r\'esultat souhait\'e. Supposons que $v \ge 1$. Quitte \`a restreindre~$U$, on peut supposer que $\varpi^v T_{k+1}^d\mathcal{F}=0$. Le faisceau $\varpi^{v-1}T_{k+1}^d\mathcal{F}$ est alors naturellement muni d'une structure de~$\mathcal{O}_{U}/(\varpi)$-module et on a donc $j_{\ast} j^\ast (\varpi^{v-1}T_{k+1}^d\mathcal{F}) = \varpi^{v-1}T_{k+1}^d\mathcal{F}$.
Or l'anneau $\mathcal{O}_{\E{k}{\mathcal{A}},y}/(\varpi)$ est un corps fort. D'apr\`es le lemme~\ref{lem:Ruckertprojectioncorps}, il existe donc~$d'$ tel que
\[T_{k+1}^{d'} j^\ast (\varpi^{v-1}T_{k+1}^d\mathcal{F})_{0_{y}} = j^\ast (\varpi^{v-1}T_{k+1}^{d+d'}\mathcal{F})_{0_{y}} = \varpi^{v-1}T_{k+1}^{d+d'}\mathcal{F}_{0_{y}} =0.\]
En r\'ep\'etant le proc\'ed\'e, on montre finalement qu'il existe $d''\in \ensuremath{\mathbf{N}}$ tel que $T_{k+1}^{d''}\mathcal{F}_{0_{y}} = 0$, comme attendu.
\medbreak
Passons maintenant \`a la d\'emonstration du r\'esultat. Par hypoth\`ese, le support de~$\mathcal{F}$ est inclus dans le ferm\'e analytique de~$\E{n}{\mathcal{A}}$ d\'efini par~$T_{k+1}$. En particulier, le support de~$\mathcal{F}$ n'est pas un voisinage de~$0_{y}$. Or le support de~$\mathcal{F}$ co\"incide avec le lieu des z\'eros de son id\'eal annulateur, qui est coh\'erent. On en d\'eduit qu'il existe $g \in \mathcal{O}_{\E{n}{\mathcal{A}},0_{y}}$ non nul tel que $g\mathcal{F}_{0_{y}}$ soit nul.
D\'emontrons le r\'esultat par r\'ecurrence sur $n \ge k+1$.
\medbreak
$\bullet$ Supposons que $n=k+1$.
D'apr\`es la proposition~\ref{prop:disqueglobal} (appliqu\'ee avec $t=0$), il existe un voisinage ouvert~$V$ de~$y$ dans~$\E{k}{\mathcal{A}}$ tel que~$g$ puisse s'\'ecrire sous la forme
\[ g = T_{k+1}^d \sum_{i\ge 0} \alpha_i \, T_{k+1}^i,\]
o\`u les~$\alpha_{i}$ sont des \'el\'ements de~$\mathcal{O}(V)$ avec $\alpha_{0} \ne 0$ dans $\mathcal{O}_{\E{k}{\mathcal{A}},y}$. Posons
$\tilde g := \sum_{i\ge 0} \alpha_i \, T_{k+1}^i.$
Si $\alpha_{0}(y) \ne 0$, alors $\tilde{g}$ est inversible dans~$\mathcal{O}_{\E{k+1}{\mathcal{A}},0_{y}}$ et on en d\'eduit que $T_{k+1}^d \mathcal{F}_{0_{y}} = 0$. On supposera donc d\'esormais que~$\alpha_0(y)=0$. D'apr\`es le lemme~\ref{restriction_fibre_avd}, il existe~$v\in\ensuremath{\mathbf{N}}$ et $h \in \mathcal{O}_{\E{k+1}{\mathcal{A}},0_y}$ tels que~$\tilde{g}=\varpi^v h$ dans~$\mathcal{O}_{\E{k+1}{\mathcal{A}},0_y}$ et la restriction de~$h$ \`a $\pi_{k+1,k}^{-1}(y)$ ne soit pas nulle. On a alors
\[ h \varpi^vT_{k+1}^d\mathcal{F}_{0_{y}}=0.\]
Quitte \`a restreindre~$U$, nous pouvons supposer que $h \varpi^vT_{k+1}^d\mathcal{F} = 0$.
Si $h(0_{y}) \ne 0$, alors on a $\varpi^vT_{k+1}^d\mathcal{F}_{0_{y}}=0$. On supposera donc d\'esormais que $h(0_{y}) = 0$. D'apr\`es la proposition~\ref{prop:disqueglobal}, quitte \`a restreindre~$V$, on peut \'ecrire~$h$ sous la forme
\[ h = \sum_{i\ge 0} \beta_i \, T_{k+1}^i,\]
o\`u les~$\beta_{i}$ sont des \'el\'ements de~$\mathcal{O}(V)$. Remarquons que $\beta_{0}(y) = 0$, autrement dit, $\beta_{0}$ est divisible par~$\varpi$. Le support du faisceau $\varpi^vT_{k+1}^d\mathcal{F}$ est contenu dans l'ensemble
\begin{align*}
\{h = 0\} \cap \{T_{k+1} = 0\} & = \{\beta_{0} = 0\} \cap \{T_{k+1} = 0\} \\
&= \{\varpi=0\}\cap\{T_{k+1}=0\}.
\end{align*}
En particulier, le support du faisceau $\varpi^vT_{k+1}^d\mathcal{F}$ est contenu dans~$\{\varpi=0\}$.
Puisque la restriction de~$h$ \`a~$\pi_{k+1,k}^{-1}(y)$ n'est pas nulle, quitte \`a restreindre~$U$, on peut supposer que l'ensemble des z\'eros de~$h$ dans~$\pi_{k+1,k}^{-1}(y)\cap U$ est r\'eduit \`a~$0_{y}$. D'apr\`es le th\'eor\`eme~\ref{thm:locfini}, quitte \`a r\'eduire~$U$ et~$V$, on peut supposer que $\pi_{k+1,k}(U) \subset V$ et que le morphisme $\pi'_{k+1,k} \colon U \cap \{h=0\} \to V$ induit par~$\pi_{k+1,k}$ est fini. D'apr\`es le th\'eor\`eme~\ref{thm:fini}, le faisceau
\[(\pi_{k+1,k})_*(\varpi^v T_{k+1}^d \mathcal{F}) = (\pi'_{k+1,k})_*(\varpi^v T_{k+1}^d \mathcal{F})_{|\{h=0\}}\]
est un faisceau coh\'erent sur~$V$.
Puisque le support de~$\varpi^vT_{k+1}^d\mathcal{F}$ est contenu dans~$\{\varpi=0\}$, celui de~$(\pi_{k+1,k})_*(\varpi^vT_{k+1}^d\mathcal{F})$ l'est aussi. D'apr\`es le lemme~\ref{lem:Ruckertloctr}, il existe~$v'\in\ensuremath{\mathbf{N}}$ tel que
\[\varpi^{v'}(\pi_{k+1,k})_*(\varpi^vT_{k+1}^d\mathcal{F})_{y} = (\pi_{k+1,k})_*(\varpi^{v+v'}T_{k+1}^d\mathcal{F})_{y}=0.\]
On en d\'eduit que $\varpi^{v+v'}T_{k+1}^d\mathcal{F}_{0_{y}}=0$.
\medbreak
$\bullet$ Supposons que $n > k+1$ et que l'\'enonc\'e est d\'emontr\'e pour $n-1$.
D'apr\`es le lemme~\ref{restriction_fibre_avd}, il existe $v\in \ensuremath{\mathbf{N}}$ et $h\in\mathcal{O}_{\E{n}{\mathcal{A}},0_{y}}$ dont la restriction \`a~$\pi_{n,k}^{-1}(y)$ n'est pas nulle tels que $g = \varpi^v h$. Quitte \`a remplacer~$g$ par~$h$ et~$\mathcal{F}$ par~$\varpi^v \mathcal{F}$, on peut donc supposer que la restriction de~$g$ \`a~$\pi_{n,k}^{-1}(y)$ n'est pas nulle. On peut maintenant conclure en reprenant la fin de la d\'emonstration du lemme~\ref{lem:Ruckertprojectioncorps}.
\end{proof}
D\'emontrons maintenant un lemme technique permettant de ramener l'\'etude d'un point rigide \'epais \`a celle du point rationnel~0.
\begin{lemm}\label{lem:translation}\index{Point!rigide epais@rigide \'epais}
Soit~$x\in\E{n}{\mathcal{A}}$.
Soit $k\in \cn{0}{n}$. Posons $x_{k} := \pi_{n,k}(x)$ et notons~$0_{x_{k}}$ le point~0 de la fibre $\pi_{n,k}^{-1}(x_{k}) \simeq \E{n-k}{\mathcal{H}(x_{k})}$.
Supposons que~$x$ est rigide \'epais au-dessus de~$x_{k}$. Alors, il existe un voisinage ouvert~$U_{k}$ de~$x_{k}$ dans~$\E{k}{\mathcal{A}}$, un voisinage ouvert~$U$ de~$x$ dans~$\E{n}{\mathcal{A}}$, un voisinage ouvert~$V$ de~$0_{x_{k}}$ dans~$\E{n}{\mathcal{A}}$ tel que $\pi_{n,k}(U) = \pi_{n,k}(V) = U_{k}$ et un morphisme fini $\varphi \colon U \to V$ tel que l'on ait $\varphi^{-1}(0_{x_k})=\{x\}$ et $\pi_{n,k} \circ \varphi = \pi_{n,k}$.
\end{lemm}
\begin{proof}
Posons~$l:=n-k$. D\'emontrons le r\'esultat par r\'ecurrence sur~$l$. Dans le cas o\`u~$l=0$, c'est imm\'ediat.
Supposons que~$l\ge 1$ et que le r\'esultat est satisfait pour~$l-1$.
Posons $x_{n-1} := \pi_{n-1}(x)$.
Puisque~$x$ est rigide \'epais au-dessus de~$x_k$, il l'est au-dessus de~$x_{n-1}$. Consid\'erons le polyn\^ome minimal~$\mu_{\kappa,x} \in \kappa(x_{n-1})[T_{n}]$ et relevons-le en un polyn\^ome unitaire $M \in\mathcal{O}_{\E{n-1}{\mathcal{A}},x_{n-1}}[T_{n}]$. Soit~$U$ un voisinage compact spectralement convexe de~$x_{n-1}$ sur lequel tous les coefficients de~$M$ sont $\mathcal{B}$-d\'efinis. Notons $\varphi_{M} \colon \E{1}{\mathcal{B}(U)} \to \E{1}{\mathcal{B}(U)}$ le morphisme d'espaces $\mathcal{B}(U)$-analytiques d\'efini par~$M$ (\cf~exemple~\ref{exemple_morphisme}). Identifions $\E{1}{\mathcal{B}(U)}$ \`a~$\pi_{n,n-1}^{-1}(U)$ et notons~$0_{x_{n-1}}$ le point~0 de la fibre $\pi_{n,n-1}^{-1}(x_{n-1})$. Le morphisme~$\varphi_{M}$ est fini, par l'exemple~\ref{ex:fini}, et satisfait $\varphi_{M}^{-1}(0_{x_{n-1}}) = \{x\}$, car $\mu_{\kappa,x}$ est une puissance du polyn\^ome minimal de~$x$ sur~$\mathcal{H}(x_{n-1})$, d'apr\`es le lemme~\ref{lem:polmin}.
Notons~$0_{n-1,x_{k}}$ le point~0 de la fibre $\pi_{n-1,k}^{-1}(x_{k})$.
Par hypoth\`ese de r\'ecurrence, il existe un voisinage ouvert~$U_{k}$ de~$x_{k}$ dans~$\E{k}{\mathcal{A}}$, un voisinage ouvert~$U_{n-1}$ de~$x_{n-1}$ dans~$\E{n-1}{\mathcal{A}}$, un voisinage ouvert~$V_{n-1}$ de~$0_{n-1,x_{k}}$ dans~$\E{n-1}{\mathcal{A}}$ tel que $\pi_{n-1,k}(U_{n-1}) = \pi_{n-1,k}(V_{n-1}) = U_{k}$ et un morphisme fini $\psi \colon U_{n-1} \to V_{n-1}$ tel que l'on ait $\psi^{-1}(0_{n-1,x_k})=\{x_{n-1}\}$ et $\pi_{n-1,k} \circ \psi = \pi_{n-1,k}$. On peut supposer que $U_{n-1} \subset U$.
Notons $\psi_{n} \colon \pi_{n,n-1}^{-1}(U_{n-1}) = U_{n-1} \times_{\mathcal{A}} \ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}} \to V_{n-1} \times_{\mathcal{A}} \ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}} = \pi_{n,n-1}^{-1}(V_{n-1})$ le morphisme d\'eduit de~$\psi$ par le changement de base $\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}} \to \mathcal{M}(\mathcal{A})$. D'apr\`es la proposition~\ref{stabilite_fini}, c'est encore un morphisme fini. Il v\'erifie $\psi_{n}^{-1}(0_{x_{k}}) = \{0_{x_{n-1}}\}$.
Notons $\varphi'_{M} \colon \pi_{n,n-1}^{-1}(U_{n-1}) \to \pi_{n,n-1}^{-1}(U_{n-1})$ le morphisme induit par~$\varphi_{M}$. Le morphisme compos\'e $\psi_{n} \circ \varphi'_{M}$ satisfait alors les propri\'et\'es de l'\'enonc\'e.
\end{proof}
Nous pouvons maintenant d\'emontrer l'analogue du Nullstellensatz de R\"uckert en toute g\'en\'eralit\'e.
\begin{theo}\label{thm:Ruckert}\index{Nullstellensatz}\index{Theoreme@Th\'eor\`eme!Nullstellensatz}\index{Faisceau!support d'un}
Soient $X$ un espace $\mathcal{A}$-analytique, $\mathcal{F}$ un faisceau coh\'erent sur~$X$ et $f\in\mathcal{O}(X)$ une fonction analytique nulle en tout point du support de~$\mathcal{F}$. Alors, pour tout $x\in X$, il existe~$d\in\ensuremath{\mathbf{N}}$ tel que~$f^d\mathcal{F}_x=0$.
\end{theo}
\begin{proof}
Soit $x\in X$. Le r\'esultat \'etant local au voisinage de~$x$, on peut supposer qu'il existe une immersion ferm\'ee $j \colon X \to U$, o\`u $U$ est un ouvert de~$\E{n}{\mathcal{A}}$. Quitte \`a restreindre~$X$ et~$U$, on peut supposer qu'il existe $g\in \mathcal{O}(U)$ tel que $j^\sharp(g) = f$. Le faisceau $j_{\ast} \mathcal{F}$ est coh\'erent, son support est contenu dans le lieu d'annulation de~$g$ et on a $(j_{\ast}\mathcal{F})_{j(x)} = \mathcal{F}_{x}$. On peut donc supposer que~$X$ est un ouvert~$U$ de~$\E{n}{\mathcal{A}}$.
D'apr\`es la remarque~\ref{rem:rigeptrans}, quitte \`a permuter les variables, on peut supposer qu'il existe~$k\in\cn{0}{n}$ tel que $x_k := \pi_{n,k}(x)$ soit purement localement transcendant au-dessus de~$\pi_{n}(x)$ et que~$x$ soit rigide \'epais au-dessus de~$x_k$.
Si $n=k$, alors le r\'esultat d\'ecoule du lemme~\ref{lem:Ruckertloctr}. Supposons donc que $n>k$.
Notons $\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}}(T_{n+1})$ la droite affine analytique au-dessus de~$\mathcal{A}$ munie de la coordonn\'ee~$T_{n+1}$. Notons $F \colon U \to \ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}}(T_{n+1})$ le morphisme associ\'e \`a $f \in \mathcal{O}_{\E{n}{\mathcal{A}}}(U)$ par la proposition~\ref{morphsec}. D'apr\`es la proposition~\ref{prop:grapheimmersion}, le morphisme $\Gamma_{F} \colon U \to U\times_{\mathcal{A}} \ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}}(T_{n+1})$ est une immersion ferm\'ee. Identifions le produit $U \times_{\mathcal{A}} \ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}}(T_{n+1})$ \`a $\pi_{n+1,n}^{-1}(U)$.
Consid\'erons maintenant le faisceau coh\'erent~$(\Gamma_{F})_*\mathcal{F}$. Soit $y \in \pi_{n+1,n}^{-1}(U)$ tel que $((\Gamma_{F})_*\mathcal{F})_y \ne 0$. Alors~$y$ poss\`ede un unique ant\'ec\'edent~$z$ par~$\Gamma_{F}$ et on a
\[ \mathcal{F}_{z} \simeq ((\Gamma_{F})_*\mathcal{F})_y \ne 0.\]
On en d\'eduit que
\[ T_{n+1}(y) = \Gamma_{F}^\sharp(T_{n+1})(z) = f(z) = 0.\]
Par cons\'equent, le support de~$(\Gamma_{F})_*\mathcal{F}$ est contenu dans le ferm\'e analytique de~$\E{n+1}{\mathcal{A}}$ d\'efini par~$T_{n+1}$.
Il suffit maintenant de montrer qu'il existe~$d\in\ensuremath{\mathbf{N}}$ tel que
\[T_{n+1}^d ((\Gamma_{F})_*\mathcal{F})_{\Gamma_{F}(x)} \simeq ((\Gamma_{F})_*(f^d\mathcal{F}))_{\Gamma_{F}(x)} \simeq f^d\mathcal{F}_x =0.\]
Pour la suite du raisonnement, par commodit\'e, \'echangeons les variables~$T_{k+1}$ et~$T_{n+1}$. On a alors $\pi_{n+1,k+1}(\Gamma_{F}(x)) = 0_{k+1,x_{k}}$, o\`u $0_{k+1,x_{k}} \in \E{k+1}{\mathcal{A}}$ d\'esigne le point~0 de la fibre $\pi_{k+1,k}^{-1}(x_{k})$.
Notons $0_{n+1,x_{k}} \in \E{n+1}{\mathcal{A}}$ le point~0 de la fibre $(\pi_{n+1,k})^{-1}(x_{k})$. D'apr\`es le lemme~\ref{lem:translation}, il existe un voisinage~$U'$ de~$\Gamma_{F}(x)$ dans~$\E{n+1}{\mathcal{A}}$, un voisinage ouvert~$V$ de~$0_{n+1,x_{k}}$ dans~$\E{n+1}{\mathcal{A}}$ tel que $\pi_{n+1,k+1}(U') = \pi_{n+1,k+1}(V)$ et un morphisme fini $\varphi \colon U' \to V$ tel que l'on ait $\varphi^{-1}(0_{n+1,x_k})=\{\Gamma_{F}(x)\}$ et $\pi_{n+1,k+1} \circ \varphi = \pi_{n+1,k+1}$.
D'apr\`es le th\'eor\`eme~\ref{thm:fini}, $\varphi_*(\Gamma_{F})_{\ast}\mathcal{F}$ est un faisceau coh\'erent. Puisque~$\varphi$ commute \`a~$\pi_{n+1,k+1}$, on a $\varphi^\sharp(T_{k+1}) = T_{k+1}$. Par cons\'equent, le support de~$\varphi_*(\Gamma_{F})_{\ast}\mathcal{F}$ est contenu dans le ferm\'e analytique de~$\E{n+1}{\mathcal{A}}$ d\'efini par~$T_{k+1}$. En outre, on a
\[(\varphi_*(\Gamma_{F})_{\ast}\mathcal{F})_{\varphi(\Gamma_{F}(x))} \simeq ((\Gamma_{F})_{\ast}\mathcal{F})_{\Gamma_{F}(x)}.\]
Par cons\'equent, on peut supposer que $x = 0_{n+1,x_{k}}$ et que $f = T_{k+1}$. Le r\'esultat d\'ecoule alors des lemmes~\ref{lem:Ruckertprojectioncorps} et~\ref{lem:Ruckertprojectionavd}.
\end{proof}
\begin{coro}\label{cor:nilpotent}\index{Fonction!nilpotente}\index{Nilpotent|see{Fonction nilpotente}}
Soit $X$ un espace $\mathcal{A}$-analytique. Soit~$f$ un \'el\'ement de~$\mathcal{O}(X)$ qui s'annule identiquement sur~$X$. Alors, pour tout $x\in X$, l'image de~$f$ dans~$\mathcal{O}_{X,x}$ est nilpotente.
\qed
\end{coro}
\begin{nota}\label{nota:VI}%
\nomenclature[Eib]{$V(\mathcal{J})$}{lieu des z\'eros d'un faisceau d'id\'eaux coh\'erent~$\mathcal{J}$}%
\nomenclature[Eia]{$\sqrt{\mathcal{J}}$}{radical d'un faisceau d'id\'eaux coh\'erent~$\mathcal{J}$}%
\nomenclature[Eic]{$\mathcal{I}(Z)$}{faisceau d'id\'eaux des fonctions s'annulant en tout point d'un ferm\'e analytique~$Z$ d'un espace $\mathcal{A}$-analytique}%
Soit $X$ un espace $\mathcal{A}$-analytique.
Soit~$\mathcal{J}$ un faisceau d'id\'eaux sur~$X$. On note $V(\mathcal{J})$ le lieu des z\'eros de~$\mathcal{J}$ et $\sqrt{\mathcal{J}}$ le radical de~$\mathcal{J}$.
Soit~$Z$ un ferm\'e analytique de~$X$. On note $\mathcal{I}(Z)$ le faisceau d'id\'eaux des fonctions s'annulant en tout point de~$Z$.
\end{nota}
\begin{coro}\label{cor:IVJ}\index{Nullstellensatz}\index{Theoreme@Th\'eor\`eme!Nullstellensatz}\index{Fonction!nilpotente}\index{Faisceau!radical d'un}\index{Faisceau!lieu des z\'eros d'un}\index{Ferme analytique@Ferm\'e analytique!faisceau associ\'e \`a un}
Soit $X$ un espace $\mathcal{A}$-analytique. Soit~$\mathcal{J}$ un faisceau d'id\'eaux coh\'erent sur~$X$. Alors, on a
\[\mathcal{I}(V(\mathcal{J})) = \sqrt{\mathcal{J}}.\]
En particulier, $\mathcal{I}(X)$ co\"incide avec le faisceau d'id\'eaux~$\sqrt{0}$ form\'e par les \'el\'ements nilpotents.
\end{coro}
\begin{proof}
Soit $x\in X$.
Pour tout $f \in (\sqrt{\mathcal{J}})_{x}$, il existe $d \in \ensuremath{\mathbf{N}}$ tel que $f^d \in \mathcal{J}_{x}$. On en d\'eduit que $f \in \mathcal{I}(V(\mathcal{J}))_{x}$.
Soit $f \in \mathcal{I}(V(\mathcal{J}))_{x}$. Le lieu des z\'eros de~$\mathcal{J}$ s'identifie au support du faisceau $\mathcal{O}/\mathcal{J}$. Par cons\'equent, au voisinage de~$x$, $f$ s'annule sur ce support. D'apr\`es le th\'eor\`eme~\ref{thm:Ruckert}, il existe $d\in \ensuremath{\mathbf{N}}$ tel que $f^d \mathcal{O}_{x}/\mathcal{J}_{x} = 0$, autrement dit, $f^d \in \mathcal{J}_{x}$.
La derni\`ere partie du r\'esultat s'obtient en appliquant la premi\`ere au faisceau $\mathcal{J} = 0$.
\end{proof}
\section{Crit\`eres d'ouverture de morphismes finis}\label{sec:ouverture}
\index{Morphisme analytique!ouvert|(}
Nous allons maintenant utiliser les r\'esultats obtenus dans les sections pr\'ec\'edentes de ce chapitre pour formuler des conditions assurant que les morphismes finis soient ouverts.
\begin{defi}\index{Espace analytique!reduit@r\'eduit|textbf}
Soit~$X$ un espace $\mathcal{A}$-analytique. On dit que l'espace~$X$ est \emph{r\'eduit en un point}~$x$ de~$X$ si l'anneau local~$\mathcal{O}_{X,x}$ est r\'eduit. On dit que l'espace~$X$ est \emph{r\'eduit} s'il est r\'eduit en tout point.
\end{defi}
On peut d'ores et d\'ej\`a remarquer que, pour tout~$n\in\ensuremath{\mathbf{N}}$, l'espace affine analytique~$\E{n}{\mathcal{A}}$ est r\'eduit.
\index{Espace affine analytique!reduit@r\'eduit}
Le r\'esultat suivant est inspir\'e de \cite[3.3.2, Criterion of Openness]{Gr-Re2}.
\begin{prop}\label{prop:ouvert}
Soit~$\varphi \colon X\to Y$ un morphisme fini d'espaces~$\mathcal{A}$-analytiques. Soit $x\in X$. Supposons que~$\mathcal{O}_{Y,\varphi(x)}$ est r\'eduit et que $(\varphi_*\mathcal{O}_X)_{\varphi(x)}$ est un $\mathcal{O}_{Y,\varphi(x)}$-module sans torsion. Alors, le morphisme~$\varphi$ est ouvert en~$x$.
\end{prop}
\begin{proof}
Il suffit de montrer que $\varphi(X)$ contient un voisinage de~$\varphi(x)$. Notons $x_{1} := x, x_{2},\dotsc,x_{n}$ les \'el\'ements de $\varphi^{-1}(\varphi(x))$. D'apr\`es le lemme~\ref{lem:voisinagefibre}, il existe un voisinage ouvert~$V$ de~$\varphi(x)$ tel que~$\varphi^{-1}(V)$ soit une union disjointe d'ouverts $\bigsqcup_{i=1}^n U_i$, avec, pour tout~$i\in\cn{1}{n}$, $x_i \in U_i$.
Pour $i\in\cn{1}{n}$, notons $\varphi_{i} \colon U_i\to V$ le morphisme induit par~$\varphi$. On a
\[(\varphi_{\ast} \mathcal{O}_{X})_{\varphi(x)} = \bigoplus_{i=1}^ n \big((\varphi_{i})_*\mathcal{O}_{U_i}\big)_{\varphi(x)}.\]
Par cons\'equent, pour tout $i\in\cn{1}{n}$, $(\varphi_{i})_*\mathcal{O}_{U_i}$ est coh\'erent, d'apr\`es le th\'eor\`eme~\ref{thm:fini}, et $\big((\varphi_{i})_*\mathcal{O}_{U_i}\big)_{\varphi(x)}$ est sans torsion.
Posons $\mathcal{F} := (\varphi_{1})_*\mathcal{O}_{U_1}$. Son support $\varphi_{1}(U_{1})$ est un ferm\'e analytique de~$V$. Si ce ferm\'e n'est pas un voisinage de~$\varphi(x)$, alors il existe un voisinage~$V'$ de~$\varphi(x)$ dans~$V$ et $f\in \mathcal{O}(V')$ dont l'image dans~$\mathcal{O}_{Y,\varphi(x)}$ n'est pas nulle et qui s'annule identiquement sur $V' \cap \varphi_{1}(U_{1})$. D'apr\`es le th\'eor\`eme~\ref{thm:Ruckert}, il existe~$d\in\ensuremath{\mathbf{N}}$ tel que~$f^d \mathcal{F}_{\varphi(x)}=0$. Puisque~$\mathcal{O}_{Y,\varphi(x)}$ est r\'eduit, l'image de~$f^d$ dans~$\mathcal{O}_{Y,\varphi(x)}$ n'est pas nulle, donc $\mathcal{F}_{\varphi(x)}$ n'est pas sans torsion, et on aboutit \`a une contradiction.
\end{proof}
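L'hypoth\`ese d'absence de torsion ne peut \^etre omise. Consid\'erons par exemple l'immersion ferm\'ee $j \colon Z_{T} \to \E{1}{\mathcal{A}}$ du ferm\'e analytique~$Z_{T}$ d\'efini par la coordonn\'ee~$T$ : c'est un morphisme fini et, pour tout point~$x$ de~$Z_{T}$, le $\mathcal{O}_{\E{1}{\mathcal{A}},x}$-module $(j_{*}\mathcal{O}_{Z_{T}})_{x} \simeq \mathcal{O}_{\E{1}{\mathcal{A}},x}/(T)$ est annul\'e par l'\'el\'ement non nul~$T$, donc n'est pas sans torsion ; de fait, $j$ n'est pas ouvert en~$x$ d\`es que $Z_{T}$ n'est pas un voisinage de~$x$.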
\begin{defi}\index{Morphisme analytique!plat}
Soit~$\varphi \colon X\to Y$ un morphisme d'espaces $\mathcal{A}$-analytiques. On dit que le morphisme~$\varphi$ est \emph{plat en un point} $x$ de~$X$ si, en posant $y := \varphi(x)$, le morphisme d'anneaux $\varphi^\sharp \colon \mathcal{O}_{Y,y}\to(\varphi_*\mathcal{O}_{X})_{y}$ est plat. On dit que le morphisme~$\varphi$ est \emph{plat} s'il est plat en tout point de~$X$.
\end{defi}
\index{Morphisme analytique!fini!et plat|(}
\begin{coro}\label{plat}\index{Endomorphisme de la droite}
Soit $\varphi \colon X\to Y$ un morphisme fini et plat d'espaces $\mathcal{A}$-analytiques et supposons que~$Y$ est r\'eduit. Alors, le morphisme~$\varphi$ est ouvert.
En particulier, pour tout polyn\^ome~$P$ \`a coefficients dans~$\mathcal{A}$, le morphisme $\varphi_{P} \colon \ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}} \to \ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}}$ induit par~$P$ (\cf~exemple~\ref{ex:fini}) est fini et plat, et donc ouvert.
\end{coro}
\begin{proof}
Puisque~$\varphi$ est plat, pour tout point~$y\in Y$, $(\varphi_*\mathcal{O}_X)_y$ est un $\mathcal{O}_{Y,y}$-module libre et donc sans torsion. Le r\'esultat d\'ecoule alors de la proposition \ref{prop:ouvert}.
D\'emontrons la seconde partie du r\'esultat. On a d\'ej\`a vu dans l'exemple~\ref{ex:fini} que le morphisme~$\varphi_{P}$ est fini. Consid\'erons une copie~$X_{T}$ de~$\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}}$ avec coordonn\'ee~$T$ et une autre~$X_{S}$ avec coordonn\'ee~$S$. On peut identifier~$X_{S}$ au ferm\'e analytique de~$\E{2}{\mathcal{A}}$ avec coordonn\'ees $T,S$ d\'efini par $P(S)-T$. Notons $\pi \colon \E{2}{\mathcal{A}} \to X_{T}$ le morphisme de projection. Le morphisme~$\varphi_{P}$ s'identifie alors \`a~$\pi_{|X_{S}}$ et sa platitude d\'ecoule du lemme~\ref{cas_particulier_coh\'erence}.
\end{proof}
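Explicitons un cas simple du second \'enonc\'e, disons $P = T^{2}$. Avec les notations de la d\'emonstration, $X_{S}$ est le ferm\'e analytique de~$\E{2}{\mathcal{A}}$ d\'efini par~$S^{2}-T$ et, pour tout point~$y$ de~$X_{T}$, on dispose, comme dans la d\'emonstration du lemme~\ref{lem:cbfomega}, d'un isomorphisme
\[ \big((\varphi_{P})_{*}\mathcal{O}_{X_{S}}\big)_{y} \simeq \mathcal{O}_{X_{T},y}[S]/(S^{2}-T),\]
qui est un $\mathcal{O}_{X_{T},y}$-module libre de base~$(1,S)$, par division euclidienne par le polyn\^ome unitaire~$S^{2}-T$ ; la platitude de~$\varphi_{P}$ est ainsi manifeste.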
Les morphismes finis et plats sont stables par changement de base.
\begin{prop}\label{changement_base_plat}\index{Produit!fibre@fibr\'e}\index{Extension des scalaires}
Soit $\varphi \colon X \to Y$ un morphisme fini et plat d'espaces $\mathcal{A}$-analytiques.
\begin{enumerate}[i)]
\item Soit $\psi \colon Z \to Y$ un morphisme d'espaces $\mathcal{A}$-analytiques. Alors le morphisme $\varphi_{Z} \colon X\times_{Y} Z \to Z$ obtenu par changement de base \`a~$Z$ est fini et plat.
\item Soit $f\colon \mathcal{A} \to \mathcal{B}$ un morphisme d'anneaux de Banach born\'e, o\`u $\mathcal{B}$ est un anneau de base g\'eom\'etrique. Alors le morphisme $\varphi_{\mathcal{B}} \colon X\ho{\mathcal{A}}\mathcal{B} \to Y\ho{\mathcal{A}}\mathcal{B}$ obtenu par extension des scalaires \`a~$\mathcal{B}$ est fini et plat.
\end{enumerate}
\end{prop}
\begin{proof}
Nous nous contenterons de d\'emontrer le point~i), l'autre s'obtenant de fa\c con similaire. D'apr\`es la proposition~\ref{stabilite_fini}, le morphisme~$\varphi_{Z}$ est fini. Soit $z\in Z$. Montrons que le $\mathcal{O}_{Z,z}$-module $\big((\varphi_{Z})_*\mathcal{O}_{X\times_YZ}\big)_{z}$ est plat.
Notons $\psi_{X} \colon X\times_{Y} Z \to X$ le morphisme d\'eduit de~$\psi$ par changement de base \`a~$X$. D'apr\`es le th\'eor\`eme~\ref{thm:formule_changement_base}, on a un isomorphisme
\[ \psi^*\varphi_*\mathcal{O}_X \xrightarrow[]{\sim} (\varphi_Z)_{\ast}(\psi_{X})^*\mathcal{O}_X, \]
d'o\`u un isomorphisme
\[(\varphi_*\mathcal{O}_X)_{\psi(z)}\otimes_{\mathcal{O}_{Y,\psi(z)}}\mathcal{O}_{Z,z}\xrightarrow[]{\sim} \big((\varphi_{Z})_*\mathcal{O}_{X\times_YZ}\big)_z.\]
Puisque~$\varphi$ est plat, le $\mathcal{O}_{Y,\psi(z)}$-module $(\varphi_*\mathcal{O}_{X})_{\psi(z)}$ est plat. Le r\'esultat s'ensuit.
\end{proof}
Nous pouvons pr\'eciser la conclusion du lemme~\ref{lem:translation}.
\begin{lemm}\label{lem:translation'}\index{Point!rigide epais@rigide \'epais}
Dans le lemme~\ref{lem:translation}, on peut imposer au morphisme $\varphi$ d'\^etre fini et plat, et donc ouvert.
\end{lemm}
\begin{proof}
Il suffit de reprendre la d\'emonstration du lemme~\ref{lem:translation} en utilisant le corollaire~\ref{plat} et le fait que les morphismes finis et plats sont stables par changement de base, d'apr\`es la proposition~\ref{changement_base_plat}, et par composition.
\end{proof}
\index{Morphisme analytique!fini!et plat|)}
\index{Morphisme analytique!ouvert|)}
\index{Morphisme analytique!fini|)}
\chapter*{Introduction}
\`A la fin des ann\'ees 1980, V.~Berkovich a propos\'e, dans l'ouvrage~\cite{Ber1}, une nouvelle approche de la g\'eom\'etrie analytique sur un corps $p$-adique, ou plus g\'en\'eralement ultram\'etrique. Elle a rapidement connu un grand succ\`es et trouv\'e des applications dans des domaines vari\'es des math\'ematiques tels que le programme de Langlands, la dynamique complexe, ou encore la g\'eom\'etrie diophantienne.
L'une des sp\'ecificit\'es des espaces de Berkovich r\'eside dans les excellentes propri\'et\'es topologiques dont ils jouissent, et qu'ils partagent avec les espaces analytiques complexes~: compacit\'e locale, connexit\'e par arcs locale, contractibilit\'e locale, etc. Ces propri\'et\'es sont d'autant plus remarquables que les corps ultram\'etriques sur lesquels ces espaces sont d\'efinis, ne sont, en g\'en\'eral, ni localement compacts, ni localement connexes. L'explication r\'eside dans le fait que les espaces de Berkovich ne contiennent pas uniquement les points \og classiques \fg{} (ceux du corps de base), mais de nombreux autres, que l'on peut, par exemple, utiliser pour tracer des chemins entre les premiers.
\begin{figure}[!h]
\centering
\begin{tikzpicture}
\draw[line width=.001pt] (0,0) -- (0,5.9) ;
\spectrivcentre(0,5.9,295,8,1.5,7) ;
\spectrivcentre(0,4,290,4,.8,12) ;
\spectrivcentre(0,3,270,11,1.3,10) ;
\spectrivcentre(0,1.5,270,6,1.3,14) ;
\def \z{350} ;
\def \l{13} ;
\spectrivcentre(0,4,\z,\l,6,2) ;
\spectrivcentreenplus(0,4,\z,\l,2,1,40,1.2,2) ;
\spectrivcentreenplus(0,4,\z,\l,2,2.5,25,1.2,3) ;
\spectrivcentreenplus(0,4,\z,\l,2,4,18,1.2,4) ;
\spectrivcentreenplus(0,4,\z,\l,2,5.5,14,1.2,5) ;
\def \a{220} ;
\def \b{18} ;
\spectrivcentre(0,4,\a,\b,6,2) ;
\spectrivcentreenplus(0,4,\a,\b,2,1,40,1.2,2) ;
\spectrivcentreenplus(0,4,\a,\b,2,2.5,25,1.2,3) ;
\spectrivcentreenplus(0,4,\a,\b,2,4,18,1.2,4) ;
\spectrivcentreenplus(0,4,\a,\b,2,5.5,14,1.2,5) ;
\spectrivcentre(0,0,270,8,5,3) ;
\spectrivcentre(0,0,270,5,1.2,16) ;
\spectrivcentreenplus(0,0,270,8,3,1.5,25,1.2,3) ;
\spectrivcentreenplus(0,0,270,8,3,3,18,1.2,4) ;
\spectrivcentreenplus(0,0,270,8,3,4.5,14,1.2,5) ;
\end{tikzpicture}
\caption{Une droite de Berkovich traditionnelle.}\label{fig:A1Cp}
\end{figure}
Nous rappellerons la d\'efinition formelle plus tard, mais indiquons, d\`es \`a pr\'esent, que, de fa\c{c}on g\'en\'erale, les points des espaces de Berkovich se pr\'esentent comme des semi-normes multiplicatives sur des anneaux bien choisis. Par exemple, tout point classique donne lieu \`a une semi-norme sur un anneau de polyn\^omes, obtenue en composant l'application d'\'evaluation en ce point avec la valeur absolue sur le corps de base.
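Par exemple, si~$k$ est un corps muni d'une valeur absolue et $z = (z_{1},\dotsc,z_{n}) \in k^{n}$ un point classique, la semi-norme multiplicative associ\'ee sur l'anneau de polyn\^omes $k[T_{1},\dotsc,T_{n}]$ est
\[ P \longmapsto |P(z_{1},\dotsc,z_{n})|.\]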
Un autre aspect particuli\`erement int\'eressant des espaces de Berkovich est la possibilit\'e de les d\'efinir en prenant comme base, non seulement un corps ultram\'etrique complet, mais, de fa\c{c}on bien plus g\'en\'erale, un anneau de Banach quelconque. On peut, par exemple, choisir comme base le corps des nombres complexes~$\ensuremath{\mathbf{C}}$ muni de la valeur absolue usuelle. Les espaces de Berkovich qui en r\'esultent sont alors les espaces analytiques complexes classiques. On peut \'egalement partir de l'anneau des entiers relatifs~$\ensuremath{\mathbf{Z}}$ muni de la valeur absolue usuelle. Comme la description en termes de semi-normes le laisse deviner, les espaces que l'on d\'efinit ainsi se composent \`a la fois de parties $p$-adiques, pour tout nombre premier~$p$, et de parties archim\'ediennes, proches des espaces analytiques complexes. Plus pr\'ecis\'ement, tout espace de Berkovich sur~$\ensuremath{\mathbf{Z}}$ se projette naturellement sur un espace not\'e~$\mathcal{M}(\ensuremath{\mathbf{Z}})$, le spectre de~$\ensuremath{\mathbf{Z}}$ au sens de Berkovich, repr\'esent\'e sur la figure~\ref{fig:MZintro}. La fibre au-dessus d'un point $p$-adique de ce spectre est un espace de Berkovich $p$-adique. La fibre au-dessus d'un point archim\'edien est presque un espace analytique complexe (en r\'ealit\'e, un espace analytique complexe quotient\'e par l'action de la conjugaison). De tels espaces pr\'esentent un int\'er\^et du point de vue arithm\'etique.
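Rappelons bri\`evement la description de ce spectre, qui d\'ecoule du th\'eor\`eme d'Ostrowski : les points de~$\mathcal{M}(\ensuremath{\mathbf{Z}})$ sont la valeur absolue triviale~$\va_{0}$, les puissances~$\va_{\infty}^{\ensuremath{\varepsilon}}$, avec $\ensuremath{\varepsilon} \in \intof{0,1}$, de la valeur absolue archim\'edienne, les puissances~$\va_{p}^{\ensuremath{\varepsilon}}$, avec $\ensuremath{\varepsilon} > 0$, des valeurs absolues $p$-adiques et, pour chaque nombre premier~$p$, la semi-norme qui vaut~$0$ sur~$p\ensuremath{\mathbf{Z}}$ et~$1$ ailleurs. Chaque branche de la figure~\ref{fig:MZintro} correspond \`a l'une de ces familles.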
\begin{figure}[!h]
\centering
\begin{tikzpicture}
\foreach \x [count=\xi] in {-2,-1,...,17}
\draw (0,0) -- ({10*cos(\x*pi/10 r)/\xi},{10*sin(\x*pi/10 r)/\xi}) ;
\foreach \x [count=\xi] in {-2,-1,...,17}
\fill ({10*cos(\x*pi/10 r)/\xi},{10*sin(\x*pi/10 r)/\xi}) circle ({0.07/sqrt(\xi)}) ;
\draw ({2.5*cos(-pi/5 r)},{2.5*sin(-pi/5 r)}) node[above right, rotate = -pi/5 r]{branche archim\'edienne} ;
\draw ({2.5*cos(-pi/10 r)},{2.5*sin(-pi/10 r)-.05}) node[above right, rotate = - pi/10 r]{branche 2-adique} ;
\draw (2.5,0) node[above right]{branche 3-adique} ;
\end{tikzpicture}
\caption{Le spectre de~$\ensuremath{\mathbf{Z}}$ au sens de Berkovich.}\label{fig:MZintro}
\end{figure}
Mentionnons un autre exemple d'anneau de Banach int\'eressant~: le corps~$\ensuremath{\mathbf{C}}^\mathrm{hyb}$, d\'efini comme le corps des nombres complexes~$\ensuremath{\mathbf{C}}$ muni de la norme dite hybride, \'egale au maximum entre la valeur absolue usuelle et la valeur absolue triviale. Les espaces de Berkovich sur~$\ensuremath{\mathbf{C}}^\mathrm{hyb}$ se pr\'esentent comme des familles d'espaces analytiques complexes (au-dessus du corps~$\ensuremath{\mathbf{C}}$ muni de la valeur absolue usuelle~$\va_{\infty}$ ou d'une de ses puissances $\va_{\infty}^\ensuremath{\varepsilon}$ avec $\ensuremath{\varepsilon} \in \intof{0,1}$) qui semblent d\'eg\'en\'erer sur un espace de Berkovich ultram\'etrique (au-dessus du corps~$\ensuremath{\mathbf{C}}$ muni de la valeur absolue triviale~$\va_{0}$), \cf~figure~\ref{fig:A1hybintro}.
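Autrement dit, la norme hybride de $a \in \ensuremath{\mathbf{C}}$ s'\'ecrit
\[ \|a\|_{\mathrm{hyb}} = \max\big(|a|_{0}, |a|_{\infty}\big),\]
o\`u $|a|_{0}$ vaut~$1$ si $a \ne 0$ et~$0$ si $a = 0$, et o\`u $|a|_{\infty}$ d\'esigne le module usuel de~$a$.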
Ces espaces ont \'et\'e introduits par V.~Berkovich pour \'etudier la structure de Hodge mixte limite d'une famille d'espaces complexes dans~\cite{BerW0}. Ils ont, depuis, \'et\'e utilis\'es pour d'autres types de probl\`emes~: comportement asymptotique de formes volumes (\cf~\cite{BJ}), de mesures d'\'equilibre d'endomorphismes (\cf~\cite{FavreEndomorphisms}), de produits de matrices al\'eatoires dans $\textrm{SL}_{2}(\ensuremath{\mathbf{C}})$ (\cf~\cite{DujardinFavreSL2C}), ou encore bornes uniformes de type Manin-Mumford en genre~2 (\cf~\cite{DKY}).
\begin{figure}[!h]
\centering
\begin{tikzpicture}
\draw[thick] (-7,-3) -- (3,-3);
\node at (-7,-3) {$|$};
\node at (3,-3) {$|$};
\node at (3,-3.5) {$\va_{\infty}$};
\node at (-7,-3.5) {$\va_{0}$};
\node at (-2,-3.5) {$\va_{\infty}^\ensuremath{\varepsilon}$};
\node at (-2,-3) {$|$};
\draw (-2,0) circle (1.5cm);
\draw[fill=white] (-2,1.5) circle (0.08cm);
\node at (-2,1.75) {$\infty$};
\draw[fill=black] (-2,-1.5) circle (0.08cm);
\node at (-2,-1.75) {$0$};
\draw (-.5,0) arc (315:225:2.13cm and 1.3cm);
\draw[densely dotted] (-.5,0) arc (45:135:2.13cm and 1.3cm);
\draw (3,0) circle (1.5cm);
\draw[fill=white] (3,1.5) circle (0.08cm);
\node at (3,1.75) {$\infty$};
\draw[fill=black] (3,-1.5) circle (0.08cm);
\node at (3,-1.75) {$0$};
\draw (4.5,0) arc (315:225:2.13cm and 1.3cm);
\draw[densely dotted] (4.5,0) arc (45:135:2.13cm and 1.3cm);
\draw[thick] (-7,-1.5) -- (-7,1.5);
\draw[fill=white] (-7,1.5) circle (0.08cm);
\node at (-7,1.75) {$\infty$};
\draw[fill=black] (-7,-1.5) circle (0.08cm);
\node at (-7,-1.75) {$0$};
\draw[thick] (-8.5,0) -- (-5.5,0);
\draw[fill=black] (-8.5,0) circle (0.04cm);
\draw[fill=black] (-5.5,0) circle (0.04cm);
\draw[thick] (-8.06066,0.43934-1.5) -- (-5.93934,2.56066-1.5);
\draw[fill=black] (-8.06066,0.43934-1.5) circle (0.04cm);
\draw[fill=black] (-5.93934,2.56066-1.5) circle (0.04cm);
\draw[thick] (-8.06066,1.06066) -- (-5.93934,-1.06066);
\draw[fill=black] (-8.06066,1.06066) circle (0.04cm);
\draw[fill=black] (-5.93934,-1.06066) circle (0.04cm);
\draw (-5.61418,0.574025) -- (-8.38582,-0.574025);
\draw[fill=black] (-5.61418,0.574025) circle (0.03cm);
\draw[fill=black] (-8.38582,-0.574025) circle (0.03cm);
\draw (-6.42597,1.38582) -- (-7.54403,-1.38582);
\draw[fill=black] (-6.42597,1.38582) circle (0.03cm);
\draw[fill=black] (-7.54403,-1.38582) circle (0.03cm);
\draw (-7.57403,1.38582) -- (-6.42597,-1.38582);
\draw[fill=black] (-7.57403,1.38582) circle (0.03cm);
\draw[fill=black] (-6.42597,-1.38582) circle (0.03cm);
\draw (-8.38582,0.574025) -- (-5.61418,-0.574025);
\draw[fill=black] (-8.38582,0.574025) circle (0.03cm);
\draw[fill=black] (-5.61418,-0.574025) circle (0.03cm);
\end{tikzpicture}
\caption{La droite affine analytique sur $\ensuremath{\mathbf{C}}^\mathrm{hyb}$.}\label{fig:A1hybintro}
\end{figure}
Bien qu'elle ait \'et\'e formellement introduite d\`es le premier chapitre de~\cite{Ber1}, la th\'eorie des espaces de Berkovich sur un anneau de Banach n'a gu\`ere \'et\'e d\'evelopp\'ee. Le second auteur leur a cependant consacr\'e deux textes. Le premier, \cite{A1Z}, propose une \'etude d\'etaill\'ee de la droite de Berkovich sur~$\ensuremath{\mathbf{Z}}$ selon diff\'erents aspects (alg\'ebriques, topologiques, cohomologiques, etc.). Le second, \cite{EtudeLocale}, traite d'espaces arbitraires sur une certaine classe d'anneaux de Banach (contenant~$\ensuremath{\mathbf{Z}}$ et~$\ensuremath{\mathbf{C}}^\mathrm{hyb}$), mais uniquement d'un point de vue alg\'ebrique local (noeth\'erianit\'e des anneaux locaux, coh\'erence du faisceau structural, etc.).
L'objectif du pr\'esent manuscrit est de combler quelques lacunes de la th\'eorie des espaces de Berkovich sur~$\ensuremath{\mathbf{Z}}$ (ou sur d'autres anneaux de Banach) et d'\'etudier en profondeur certains aspects qui ont, jusqu'ici, \'et\'e laiss\'es de c\^ot\'e, en d\'epit de leur importance. Nos investigations suivent trois axes principaux.
\medbreak
\noindent\textbf{Axe \ding{172}~: D\'efinition de la cat\'egorie des espaces de Berkovich sur un anneau de Banach}\nopagebreak
V.~Berkovich a d\'efini les espaces de Berkovich sur un anneau de Banach d\`es les premi\`eres pages de son ouvrage fondateur~\cite{Ber1}, mais n'a pas introduit de notion de morphisme dans ce contexte. Le premier objectif de ce texte est de combler ce manque. Une fois cette t\^ache effectu\'ee, on dispose de la cat\'egorie des espaces de Berkovich sur un anneau de Banach et il devient possible de consid\'erer et d\'efinir dans un langage ad\'equat diff\'erentes op\'erations d'usage courant~: produits fibr\'es, extension des scalaires (de~$\ensuremath{\mathbf{Z}}$ \`a un anneau d'entiers de corps de nombres, par exemple), analytification de sch\'emas, etc.
Cette \'etude fait l'objet des chapitres~\ref{def_cat} (d\'efinitions et propri\'et\'es g\'en\'erales dans le cadre des espaces de Berkovich sur un anneau de Banach), \ref{analyse}~(compl\'ements techniques sur des normes d'anneaux de Banach) et~\ref{catan} (propri\'et\'es plus fines dans le cadre des espaces de Berkovich sur~$\ensuremath{\mathbf{Z}}$ ou sur des anneaux de Banach aux propri\'et\'es similaires).
\medbreak
\noindent\textbf{Axe \ding{173}~: \'Etude de la topologie des espaces de Berkovich sur~$\ensuremath{\mathbf{Z}}$}\nopagebreak
Les propri\'et\'es topologiques des espaces de Berkovich forment l'un des points saillants de la th\'eorie et rec\`elent nombre d'informations subtiles. Dans~\cite{FirstSteps}, V.~Berkovich explique s'\^etre rendu compte d\`es les premiers instants que la topologie de l'analytifi\'ee d'une courbe elliptique sur~$\ensuremath{\mathbf{C}}_{p}$ (qui est contractile ou homotope \`a un cercle) permettait de retrouver son type de r\'eduction (bonne ou mauvaise, respectivement). Citons \'egalement le travail~\cite{beth} d'A.~Thuillier, qui identifie le type d'homotopie du complexe d'intersection d'un diviseur \`a croisements normaux compactifiant une vari\'et\'e sur un corps parfait \`a celui d'un espace de Berkovich canoniquement associ\'e \`a la vari\'et\'e (montrant ainsi que ce type d'homotopie ne d\'epend pas du diviseur choisi).
En d\'epit de cet int\'er\^et \'evident, les consid\'erations topologiques sont presque totalement absentes de l'\'etude des espaces de Berkovich sur un anneau de Banach. Seuls quelques r\'esultats existent, limit\'es, pour l'essentiel, au cas de la droite affine analytique sur~$\ensuremath{\mathbf{Z}}$ (connexit\'e par arcs locale dans~\cite{A1Z}, contractibilit\'e globale dans~\cite{LS}).
Dans ce texte, nous souhaitons d\'evelopper les outils n\'ecessaires \`a une \'etude topologique syst\'ematique des espaces de Berkovich sur~$\ensuremath{\mathbf{Z}}$ (ou sur des anneaux de Banach aux propri\'et\'es similaires) et obtenir de premiers r\'esultats g\'en\'eraux concrets, telle la connexit\'e par arcs locale.
Cette \'etude fait l'objet des chapitres~\ref{chap:fini} (pr\'eliminaires sur les morphismes finis), \ref{chap:structurelocale} (structure locale des espaces) et~\ref{chap:topo} (connexit\'e par arcs locale et dimension topologique).
\medbreak
\noindent\textbf{Axe~\ding{174}~: \'Etude de la cohomologie des espaces de Berkovich sur~$\ensuremath{\mathbf{Z}}$}\nopagebreak
Comme dans le domaine de la topologie, les aspects cohomologiques des espaces de Berkovich sur un anneau de Banach n'ont \'et\'e abord\'es que dans le cadre de la droite affine analytique sur~$\ensuremath{\mathbf{Z}}$, dans~\cite{A1Z}. Pourtant, les applications de la g\'eom\'etrie analytique, tant complexe que $p$-adique, interviennent souvent par le biais de calculs de dimension d'espaces de cohomologie ou de r\'esultats d'annulation.
Nous souhaitons initier ici l'\'etude de la cohomologie coh\'erente des espaces de Berkovich sur un anneau de Banach en dimension sup\'erieure. La premi\`ere \'etape, indispensable \`a tout calcul, est de disposer d'une vaste classe d'espaces dont les groupes de cohomologie coh\'erente sup\'erieurs soient nuls. Ces espaces seraient alors amen\'es \`a jouer le r\^ole des espaces affino\"ides de la g\'eom\'etrie analytique rigide ou des espaces de Stein de la g\'eom\'etrie analytique complexe.
Cette \'etude fait l'objet du chapitre~\ref{chap:Stein} (et utilise de fa\c{c}on essentielle les r\'esultats techniques du chapitre~\ref{analyse}).
\bigbreak
Nous allons maintenant exposer plus en d\'etails, chapitre par chapitre, le contenu de ce texte. Nous mettrons en lumi\`ere des r\'esultats qui nous semblent importants, tout en en laissant quelques autres de c\^ot\'e. L'introduction de chacun des chapitres contient une description exhaustive de son contenu.
Le but principal de ce texte est de d\'evelopper la th\'eorie des espaces de Berkovich sur~$\ensuremath{\mathbf{Z}}$ ou d'autres anneaux de Banach aux propri\'et\'es similaires (anneaux d'entiers de corps de nombres, corps hybrides comme $\ensuremath{\mathbf{C}}^\mathrm{hyb}$, anneaux de valuation discr\`ete, \cf~exemples~\ref{ex:corpsvalue} \`a~\ref{ex:Dedekind} pour une liste plus compl\`ete). Nous ne sommes pas parvenus \`a isoler un ensemble de conditions naturelles \`a imposer \`a un anneau de Banach pour qu'il entre dans le cadre de notre th\'eorie. Les diff\'erents chapitres de ce texte contiennent donc diff\'erentes d\'efinitions (anneau de base, anneau de base g\'eom\'etrique, anneau de Dedekind analytique, etc.), adapt\'es \`a nos besoins sp\'ecifiques. Ce choix pr\'esente l'avantage de mettre en lumi\`ere les propri\'et\'es utilis\'ees pour d\'emontrer chacun des r\'esultats, et nous esp\'erons \'egalement qu'il permettra de prendre en compte plus facilement les d\'eveloppements ult\'erieurs de la th\'eorie (qui pourraient faire intervenir d'autres anneaux de Banach, dont l'id\'ee nous \'echappe \`a l'heure o\`u nous \'ecrivons).
Insistons sur le fait que tous les r\'esultats que nous d\'emontrons sont valables pour des espaces de Berkovich d\'efinis sur un anneau de Banach qui est l'un de ceux cit\'es en exemple plus haut. Seule l'ultime section~\ref{sec:noetherianite}, sp\'ecifique aux anneaux d'entiers de corps de nombres, fait exception \`a cette r\`egle.
Dans cette introduction, nous nous autoriserons cependant \`a ne mentionner que le cas des espaces de Berkovich sur~$\ensuremath{\mathbf{Z}}$, afin d'all\'eger la r\'edaction.
\medbreak
\noindent\textbf{Chapitre~\ref{chap:rappels}~: Pr\'eliminaires et rappels}\nopagebreak
Soit $\mathcal{A}$ un anneau de Banach. Dans ce chapitre, nous commen\c{c}ons par rappeler en d\'etail la construction des espaces $\mathcal{A}$-analytiques, en suivant~\cite{Ber1}. Nous rappelons ensuite les principaux r\'esultats obtenus par le second auteur dans ses travaux~\cite{A1Z, EtudeLocale}. Nous en profitons pour d\'emontrer quelques r\'esultats techniques (par exemple sur les bases de voisinages des points, \cf~propositions~\ref{prop:basevoisdim1rigide} et~\ref{prop:basevoisdim1}) et introduire quelques outils dont nous nous servirons dans la suite de ce texte.
Un r\'esultat majeur est le th\'eor\`eme de division de Weierstra\ss{} (\cf~\cite[th\'eor\`eme~8.3]{EtudeLocale}),
dont nous rappelons maintenant l'\'enonc\'e. Consid\'erons la droite affine analytique $X := \E{1}{\mathcal{A}}$ (avec coordonn\'ee~$T$) et la projection $\pi \colon X \to \mathcal{M}(\mathcal{A})$.
Soient~$b$ un point de~$\mathcal{M}(\mathcal{A})$ et~$x$ un point de~$\pi^{-1}(b)$. Nous supposerons que~$x$ est rigide dans la fibre $\pi^{-1}(b) \simeq \E{1}{\mathcal{H}(b)}$, o\`u $\mathcal{H}(b)$ d\'esigne le corps r\'esiduel compl\'et\'e du point~$b$. Cela signifie qu'il existe un polyn\^ome irr\'eductible $\mu_{x} \in \mathcal{H}(b)[T]$ qui s'annule exactement en ce point.
\begin{enonce*}{Th\'eor\`eme \ref{weierstrass}}\index{Theoreme de Weierstrass@Th\'eor\`eme de Weierstra\ss!division}
Pla\c{c}ons-nous dans le cadre d\'ecrit ci-dessus. Soit~$G$ un \'el\'ement de l'anneau local~$\mathcal{O}_{X,x}$. Supposons que son image dans l'anneau de valuation discr\`ete~$\mathcal{O}_{\pi^{-1}(b),x}$ n'est pas nulle et notons~$n$ sa valuation.
Alors, pour tout~$F\in\mathcal{O}_{X,x}$, il existe un unique couple~$(Q,R)\in \mathcal{O}_{X,x}^2$ tel que
\begin{enumerate}[i)]
\item $F=QG+R$ ;
\item $R\in\mathcal{O}_{\mathcal{M}(\mathcal{A}),b}[T]$ est un polyn\^ome de degr\'e strictement inf\'erieur \`a~$n\deg(\mu_{x})$.
\end{enumerate}
\end{enonce*}
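Donnons un exemple simple d'utilisation de cet \'enonc\'e. Supposons que~$x$ est le point~$0$ de la fibre, de sorte que $\mu_{x} = T$, et prenons $G = T^{n}$. Si, au voisinage de~$x$, $F$ se d\'eveloppe en s\'erie $F = \sum_{i\ge 0} a_{i}T^{i}$ \`a coefficients dans~$\mathcal{O}_{\mathcal{M}(\mathcal{A}),b}$, la division de Weierstra\ss{} s'\'ecrit
\[ F = \Big(\sum_{i\ge n} a_{i}\,T^{i-n}\Big)\,T^{n} + \sum_{i=0}^{n-1} a_{i}\,T^{i},\]
le reste \'etant la troncature \`a l'ordre~$n$ du d\'eveloppement de~$F$ ; le th\'eor\`eme affirme en outre l'unicit\'e d'une telle \'ecriture.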
Afin d'\^etre exhaustifs, signalons que l'\'enonc\'e du th\'eor\`eme requiert des hypoth\`eses sur le point~$b$ (par exemple de supposer que $b$ est d\'ecent, \cf~d\'efinition~\ref{def:decent}). Ces hypoth\`eses sont toujours v\'erifi\'ees dans la pratique, par exemple lorsque $\mathcal{A} = \ensuremath{\mathbf{Z}}$, c'est-\`a-dire lorsque~$b$ est un point de~$\mathcal{M}(\ensuremath{\mathbf{Z}})$.
Dans~\cite{EtudeLocale}, on tire de nombreuses cons\'equences de ce th\'eor\`eme dans le cadre des espaces de Berkovich sur~$\ensuremath{\mathbf{Z}}$~: noeth\'erianit\'e et excellence des anneaux locaux~$\mathcal{O}_{x}$, coh\'erence du faisceau structural, etc. Ces r\'esultats sont rappel\'es \`a la fin du chapitre.
\medbreak
\noindent\textbf{Chapter~\ref{def_cat}: The category of analytic spaces: definitions}\nopagebreak
Let $\mathcal{A}$ be a Banach ring. The main purpose of this chapter is to define the notion of morphism between two $\mathcal{A}$-analytic spaces. Recall that the definition of an analytic space over~$\mathcal{A}$, like that of a complex analytic space, proceeds in several steps: one starts with the affine analytic spaces, then considers local models, which are analytic closed subsets of open subsets of the former, and finally glues the latter. The definition of a morphism follows the same logic.
Let $U$ and $V$ be open subsets of affine analytic spaces over~$\mathcal{A}$ and let $\varphi \colon U \to V$ be a morphism of ringed spaces. For every compact subset~$U'$ of~$U$ and every compact subset~$V'$ of~$V$ containing~$\varphi(U')$, $\varphi^\sharp$ induces a morphism
\[\varphi_{U',V'}^\sharp \colon \mathcal{O}_{V}(V') \to \mathcal{O}_{U}(U')\]
of normed spaces (endowed with the uniform norms on~$V'$ and~$U'$ respectively). We say that $\varphi\colon U \to V$ is an \emph{analytic morphism} from~$U$ to~$V$ when all the morphisms $\varphi_{U',V'}^\sharp$ above are contracting (\cf~Definition~\ref{def:morphismefermeouvert}).
Let~$X$ and~$Y$ be analytic closed subsets of open subsets of affine analytic spaces over~$\mathcal{A}$. An \emph{analytic morphism} $\varphi \colon X \to Y$ is a morphism of locally ringed spaces that lifts locally to an analytic morphism between open subsets of affine analytic spaces in the previous sense (\cf~Definition~\ref{def:morphismefermeouvert}).
The general definition is obtained by gluing, which we formulate precisely in terms of atlases (\cf~Definition~\ref{def:morphismegeneral}). We thus obtain the \emph{category of $\mathcal{A}$-analytic spaces}, which we denote by~$\mathcal{A}-\An$.
We also define a second category, the \emph{category of analytic spaces over~$\mathcal{A}$}, denoted by~$\An_{\mathcal{A}}$. Its objects are pairs consisting of a bounded morphism of Banach rings $\mathcal{A} \to \mathcal{B}$ and a $\mathcal{B}$-analytic space (\cf~Definition~\ref{def:espaceaudessusdeA}). Its morphisms are defined analogously to those of~$\mathcal{A}-\An$ (\cf~Definitions~\ref{defi:morphismeaudessusdefouvertaffine}, \ref{defi:morphismeaudessusdeffermeouvert} and~\ref{defi:morphismeaudessusdef}). Later in the text, this category will be useful to handle changes of base rings.
Let us stress that, in this chapter, no assumption whatsoever on the Banach ring~$\mathcal{A}$ is needed. We will need to impose some later, when studying the category~$\mathcal{A}-\An$.
\medbreak
\noindent\textbf{Chapter~\ref{analyse}: Some topological results on rings of analytic functions}\nopagebreak
This chapter is probably the most technical of the manuscript. It begins with considerations on certain Banach rings arising as quotients and with results comparing norms (residue norm, uniform norm, etc.) on them. The main consequence is a \emph{normed version of the Weierstra\ss{} division theorem} recalled above. The divisor~$G$ being fixed, it yields a control of the norms of the quotient~$Q$ and of the remainder~$R$ in terms of the norm of the dividend~$F$ (\cf~Theorem~\ref{weierstrassam}).
This theorem is used, at the end of the chapter, to prove a closedness result for ideals of the structure sheaf.
\begin{enonce*}{Corollary~\ref{limite}}\index{Ideal@Id\'eal!ferme@ferm\'e}
Let~$x$ be a point of~$\E{n}{\ensuremath{\mathbf{Z}}}$ and let $I$~be an ideal of~$\mathcal{O}_{x}$. Let $V$ be a compact neighbourhood of~$x$ in~$\E{n}{\ensuremath{\mathbf{Z}}}$ and let $(g_n)_{n\in\ensuremath{\mathbf{N}}}$ be a sequence of elements of~$\mathcal{B}(V)$ converging to an element~$g$ of~$\mathcal{B}(V)$. If, for every $n\in \ensuremath{\mathbf{N}}$, the image of~$g_{n}$ in~$\mathcal{O}_{x}$ belongs to~$I$, then the image of~$g$ in~$\mathcal{O}_{x}$ belongs to~$I$.
\end{enonce*}
Let us specify that the ring~$\mathcal{B}(V)$ appearing here is the completion, with respect to the uniform semi-norm on~$V$, of the ring of rational functions without poles on~$V$ (\cf~Notation~\ref{nota:BV}). For instance, if~$V$ is contained in a disc~$D$, the global functions on~$D$ admit power series expansions (\cf~Proposition~\ref{prop:disqueglobal}) and hence provide elements of~$\mathcal{B}(V)$.
\medbreak
\noindent\textbf{Chapter~\ref{catan}: The category of analytic spaces: properties}\nopagebreak
This chapter contains a study of the category of analytic spaces over~$\ensuremath{\mathbf{Z}}$. The restriction on the base ring allows us to push further the study carried out in Chapter~\ref{def_cat}.
Our first result is the analogue of a classical result that looks innocuous but is fundamental. It asserts the existence of a natural bijection between the global functions on a space and the morphisms from this space to the analytic affine line.
\begin{enonce*}{Proposition~\ref{morphsec}}\index{Morphisme!vers un espace affine}
Let~$X$ be an analytic space over~$\ensuremath{\mathbf{Z}}$. The map
\[\fonctionsp{\Hom_{\ensuremath{\mathbf{Z}}-\An}(X,\E{1}{\ensuremath{\mathbf{Z}}})}{\Gamma(X,\mathcal{O}_{X})}{\varphi}{\varphi^\sharp(T)},\]
where $T$ is the coordinate on~$\E{1}{\ensuremath{\mathbf{Z}}}$, is bijective.
\end{enonce*}
In loose terms, surjectivity means that, given a function~$F$ on~$X$ (corresponding to $\varphi^\sharp(T)$), convergent power series in~$F$ make sense on~$X$ (the series $\sum a_{i} F^i$ corresponding to $\varphi^\sharp(\sum a_{i} T^i)$). Injectivity means that these series are well-defined functions on~$X$. Since~$X$ is locally an analytic closed subset (defined by a coherent sheaf of ideals) of an open subset of an affine analytic space, one sees why the ideal closedness theorem stated above (\cf~Corollary~\ref{limite}) plays a crucial role in its proof.
This result, and its generalisation to finitely many global functions, makes it possible to show that several natural functors are representable, and thus to define, in an adequate way, the analytification of a scheme locally of finite type over~$\ensuremath{\mathbf{Z}}$ (\cf~Theorem~\ref{thm:analytification}), the extension of scalars of an analytic space over~$\ensuremath{\mathbf{Z}}$ to a ring of integers of a number field (\cf~Theorem~\ref{thm:extensionB}), as well as fibre products of analytic spaces over~$\ensuremath{\mathbf{Z}}$ (\cf~Theorem~\ref{produit_fibr\'e}).
\medbreak
\noindent\textbf{Chapter~\ref{chap:fini}: Study of finite morphisms}\nopagebreak
In this chapter, we define and study finite morphisms of analytic spaces. This is an essential notion, since it is through finite morphisms that we will be able to transfer properties from affine analytic spaces to general analytic spaces.
We prove analogues of several classical results on finite morphisms. Here is an emblematic one.
\begin{enonce*}{Theorem \ref{thm:fini}}
Let~$\varphi \colon X\to Y$ be a finite morphism of analytic spaces over~$\ensuremath{\mathbf{Z}}$ and let~$\mathcal{F}$ be a coherent sheaf on~$X$. Then the sheaf~$\varphi_*\mathcal{F}$ is coherent.
\end{enonce*}
This theorem is one of the essential ingredients in the proof of the Nullstellensatz, which we prove in the following form.
\begin{enonce*}{Corollary \ref{cor:IVJ}}
Let $X$ be an analytic space over~$\ensuremath{\mathbf{Z}}$ and let~$\mathcal{J}$ be a coherent sheaf of ideals on~$X$. Denote by $V(\mathcal{J})$ the zero locus of~$\mathcal{J}$ and by $\mathcal{I}(V(\mathcal{J}))$ the sheaf of ideals of functions vanishing at every point of~$V(\mathcal{J})$. Then we have
\[\mathcal{I}(V(\mathcal{J})) = \sqrt{\mathcal{J}}.\]
\end{enonce*}
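For instance (a toy illustration added here), on $X = \E{1}{\ensuremath{\mathbf{Z}}}$ with coordinate~$T$ and $\mathcal{J} = (T^2)$, the locus $V(\mathcal{J})$ is the zero locus of~$T$ and the corollary gives
\[\mathcal{I}(V(\mathcal{J})) = \sqrt{(T^2)} = (T).\]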
\medbreak
\noindent\textbf{Chapter~\ref{chap:structurelocale}: Local structure of analytic spaces}\nopagebreak
In this chapter, we study the local structure of analytic spaces. A version of Noether's normalisation lemma ensures that every integral space is, locally, a finite (ramified) cover of an affine space.
\begin{enonce*}{Theorem~\ref{proj}}
Let~$X$ be an analytic space over~$\ensuremath{\mathbf{Z}}$ and denote by $\pi \colon X \to \mathcal{M}(\ensuremath{\mathbf{Z}})$ the structure morphism. Let~$x$ be a point of~$X$ at which the local ring~$\mathcal{O}_{X,x}$ is an integral domain. Then there exist an open neighbourhood~$U$ of~$x$ in~$X$, an integer $n\in \ensuremath{\mathbf{N}}$, an open subset~$V$ of the affine analytic space~$\E{n}{\ensuremath{\mathbf{Z}}}$ or~$\E{n}{\mathcal{H}(\pi(x))}$, and a finite open morphism $\varphi \colon U\to V$.
\end{enonce*}
On the other hand, Proposition~\ref{decomp} ensures that every analytic space over~$\ensuremath{\mathbf{Z}}$ may be written, locally in a neighbourhood of a point, as a union of analytic closed subsets that are integral at this point. Combining these two results therefore yields a precise local structure result for an arbitrary space.
\smallbreak
In the rest of the chapter, we use this local description to compare the properties of a scheme with those of its analytification. The most important result is the following.
\begin{enonce*}{Theorem \ref{platitude_analytification}}
Let~$\mathcal{X}$ be a scheme locally of finite type over~$\ensuremath{\mathbf{Z}}$ and denote by~$\mathcal{X}^\an$ its analytification. Then the canonical morphism $\rho_{\mathcal{X}} \colon \mathcal{X}^\an \to \mathcal{X}$ is flat.
\end{enonce*}
The analogous statement in complex or ultrametric analytic geometry is fundamental. It appears in particular in the proof of the GAGA theorems (\cf~\cite{GAGA}).
\medbreak
\noindent\textbf{Chapter~\ref{chap:topo}: Topological properties of analytic spaces}\nopagebreak
In this chapter, we finally study the topology of analytic spaces. The main result is the following.
\begin{enonce*}{Theorem \ref{th:cpageneral}}
Every analytic space over~$\ensuremath{\mathbf{Z}}$ is locally path-connected.
\end{enonce*}
As one may expect, we first prove the result for affine analytic spaces. We then deduce the general case from the local structure results of Chapter~\ref{chap:structurelocale}. This second step is not immediate, since a finite cover of a path-connected space need not itself be path-connected (\cf~Remark~\ref{rem:revetementdegre2}). This difficulty leads us to introduce a new notion of general topology, called \emph{elasticity} (\cf~Definition~\ref{def:elastique}), which refines path-connectedness.
\medbreak
\noindent\textbf{Chapter~\ref{chap:Stein}: Stein spaces}\nopagebreak
In the last chapter of this book, we initiate the theory of Stein spaces over a Banach ring. Our main result concerns closed polydiscs.
\begin{enonce*}{Corollaries \ref{cor:thA} and~\ref{cor:thB}}
Let $r_{1},\dotsc,r_{n} \in \ensuremath{\mathbf{R}}_{>0}$ and let~$\mathcal{F}$ be a coherent sheaf on the relative polydisc $X := \overline{D}_{\mathcal{M}(\ensuremath{\mathbf{Z}})}(r_{1},\dotsc,r_{n})$ over~$\ensuremath{\mathbf{Z}}$. Then,
\begin{enumerate}[i)]
\item for every integer $q\ge 1$, we have $H^q(X,\mathcal{F}) = 0$;
\item the sheaf~$\mathcal{F}$ is generated by the set of its global sections~$\mathcal{F}(X)$.
\end{enumerate}
\end{enonce*}
We can then lay the foundations of a theory of overconvergent affinoid spaces (\cf~Definition~\ref{def:affinoide}) enjoying properties similar to those of the classical ultrametric theory. One can define rational, Laurent and Weierstra\ss{} domains in them (\cf~Definition~\ref{def:domaines}), and the previous result yields an analogue of Tate's acyclicity theorem and of Kiehl's theorem (\cf~Theorem~\ref{th:affinoideAB}). In particular, every point of an analytic space over~$\ensuremath{\mathbf{Z}}$ admits a basis of neighbourhoods consisting of overconvergent affinoids, hence in particular of spaces whose higher coherent cohomology vanishes.
In Section~\ref{sec:Bouvert}, we adapt these results (without the global generation property of the sheaf) to an open setting.
\smallbreak
We conclude with an application of the cohomological vanishing results to the noetherianity of rings of \emph{convergent arithmetic series}. Let $n\in \ensuremath{\mathbf{N}}$. For $r_{1},\dotsc,r_{n} \in \intoo{0,1}$, denote by
\[\ensuremath{\mathbf{Z}}\llbracket T_{1},\dotsc,T_{n} \rrbracket_{> (r_{1},\dotsc,r_{n})}\]
the subring of $\ensuremath{\mathbf{Z}}\llbracket T_{1},\dotsc,T_{n} \rrbracket$ consisting of the series that converge in a neighbourhood of the complex polydisc $\overline{D}(r_{1},\dotsc,r_{n})$. D.~Harbater showed in~\cite{HarbaterConvergent} that these rings are noetherian when $n=1$. We take advantage of the cohomological results obtained in this chapter to generalise this result.
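To make the definition concrete (an illustration added here), for $n = 1$ and $r_{1} = 1/2$, the series $\sum_{k\ge 0} T^k = (1-T)^{-1}$ converges on the whole open complex unit disc, hence in a neighbourhood of $\overline{D}(1/2)$, and therefore belongs to $\ensuremath{\mathbf{Z}}\llbracket T \rrbracket_{> 1/2}$; on the other hand, $\sum_{k\ge 0} 2^k T^k$ does not, since its radius of convergence is exactly~$1/2$.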
\begin{enonce*}{Corollary \ref{cor:noetherienconcret}}
For every $n\in \ensuremath{\mathbf{N}}$ and all $r_{1},\dotsc,r_{n} \in \intoo{0,1}$, the ring $\ensuremath{\mathbf{Z}}\llbracket T_{1},\dotsc,T_{n} \rrbracket_{> (r_{1},\dotsc,r_{n})}$
is noetherian.
\end{enonce*}
We actually obtain a refined version of this result that allows coefficients in $\ensuremath{\mathbf{Z}}[1/N]$, for $N\in \ensuremath{\mathbf{N}}_{\ge 1}$, and involves radii of convergence at the $p$-adic places, for $p$ dividing~$N$. Moreover, the ring~$\ensuremath{\mathbf{Z}}$ may be replaced by any ring of integers of a number field.
\bigbreak
\noindent\textbf{Acknowledgements:}\nopagebreak
The first author particularly wishes to thank Antoine Ducros and Fr\'ed\'eric Paugam for many stimulating discussions. Several results of this text came to light during the first author's thesis thanks to their advice and remarks. The second author thanks Dorian Berger for his numerous comments.
The authors benefited from the support of the ANR (project ANR JCJC \og GLOBES\fg{} ANR-12-JS01-0007) and of the ERC (ERC Starting Grant project \og TOSSIBERG \fg{} 637027).
\chapter[Topology of rings of functions]{Some topological results on rings of analytic functions}
\label{analyse}
The main goal of this chapter is to prove a closedness result for ideals of the structure sheaf. It will be essential later on, in the proof of the properties of the category of analytic spaces (\cf~Proposition~\ref{morphsec}, from which Theorem~\ref{thm:analytification} follows) and in order to obtain cohomological vanishing properties on open spaces (\cf~Theorem~\ref{th:Bouvert}).
In Section~\ref{sec:normesquotients}, we prove technical results allowing us to compare norms (residue norm, quotient norm, etc.) on Banach rings. We begin by recalling a few definitions and results from~\cite{A1Z}, and we then adapt them so that they can be applied to other rings.
Section~\ref{sec:divWnormes} contains the technical core of this chapter: the proof of a refined version of the Weierstra\ss{} division theorem~\ref{weierstrass} in which one keeps control of the norms of the remainder and of the quotient.
In Section~\ref{sec:fermeture}, we finally prove a closedness result for the ideals of the structure sheaf or, more generally, for submodules of a finite power of the structure sheaf. To do so, we need to require that the base ring over which we work satisfy certain conditions, which we gather under the name of geometric base ring. Our usual examples (valued fields, rings of integers of number fields, hybrid fields, discrete valuation rings, trivially valued Dedekind rings) satisfy these conditions.
\section{Norms on quotients}\label{sec:normesquotients}
Let $(\mathcal{A},\nm)$ be a Banach ring. Set $B := \mathcal{M}(\mathcal{A})$. Let~$G(S)$ be a monic non-constant polynomial with coefficients in~$\mathcal{A}$:
\[ G = S^d + \sum_{i=0}^{d-1} g_i \, S^i \in\mathcal{A}[S]. \]
In this section, we study several norms and semi-norms with which the quotient $\mathcal{A}[S]/(G(S))$ may be endowed.
\subsection{Generalities}
\index{Norme!comparaison|(}
This section is taken from \cite[\S 5.2]{A1Z}.
We still denote by~$\nm$ the norm on~$\mathcal{A}^d$ defined by
\[ \fonction{\nm}{\mathcal{A}^d}{\ensuremath{\mathbf{R}}_{\ge 0}}{(a_0,\ldots,a_{d-1})}{\max(\|a_{0}\|,\dotsc,\|a_{d-1}\|)}.\]
We have an isomorphism
\[ \fonction{n_{G}}{\mathcal{A}^d}{\mathcal{A}[S]/(G(S))}{(a_0,\ldots,a_{d-1})}{\displaystyle\sum^{d-1}_{i=0} a_i\, S^i}.\]
\begin{defi}\index{Norme!divisorielle|textbf}%
\nomenclature[Bja]{$\nm_{\div}$}{divisorial norm on $\mathcal{A}[S]/(G(S))$, inherited from~$\mathcal{A}^d$ via Euclidean division}%
We call \emph{divisorial norm} on $\mathcal{A}[S]/(G(S))$ the norm~$\nm_{\div}$ defined by
\[ \|F\|_{\div} := \|n_{G}^{-1}(F)\| \]
for $F \in \mathcal{A}[S]/(G(S))$.
\end{defi}
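To fix ideas, here is a small computation (added here as an illustration). Take for~$\mathcal{A}$ the ring~$\ensuremath{\mathbf{Z}}$ endowed with the usual absolute value, $G = S^2 - 2$ and $F$ the class of~$S^3$ in $\mathcal{A}[S]/(G(S))$. Euclidean division by~$G$ gives $S^3 = S\,(S^2-2) + 2S$, hence $n_{G}^{-1}(F) = (0,2)$ and
\[\|F\|_{\div} = \max(|0|,|2|) = 2.\]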
Let us now introduce the residue semi-norms. For $v\in \ensuremath{\mathbf{R}}_{>0}$, consider the norm~$\nm_{v}$ on $\mathcal{A}[S]$ defined by
\[ \|F\|_{v} := \max_{0\le i\le n} (\|a_{i}\|\, v^i) \]
for $F = \displaystyle\sum_{i=0}^n a_{i}\, S^i \in \mathcal{A}[S]$.%
\nomenclature[Bib]{$\nm_{v}$}{$\infty$-norm of radius~$v$ on $\mathcal{A}[S]$}%
\begin{defi}\index{Norme!residuelle@r\'esiduelle|textbf}%
\nomenclature[Bjb]{$\nm_{v,\mathrm{r\acute{e}s}}$}{residue norm on $\mathcal{A}[S]/(G(S))$ induced by~$\nm_{v}$}%
For~$v \in \ensuremath{\mathbf{R}}_{>0}$, we call \emph{residue semi-norm of radius~$v$} the semi-norm $\nm_{v,\mathrm{r\acute{e}s}}$ on $\mathcal{A}[S]/(G(S))$ defined by
\[ \|F\|_{v, \mathrm{r\acute{e}s}} := \inf \big( \|F'\|_{v} \ \colon \ F' \in \mathcal{A}[S],\ F' = F \mod G \big)\]
for $F \in \mathcal{A}[S]/(G(S))$.
\end{defi}
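In the same toy example (added here), with $\mathcal{A} = \ensuremath{\mathbf{Z}}$ endowed with the usual absolute value and $G = S^2-2$, the class of~$S^2$ admits both $S^2$ and $2$ as representatives modulo~$G$, so that, for every $v \in \ensuremath{\mathbf{R}}_{>0}$,
\[\|S^2\|_{v,\mathrm{r\acute{e}s}} \le \min\big(\|S^2\|_{v}, \|2\|_{v}\big) = \min(v^2, 2),\]
which illustrates the infimum over all representatives.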
Let $v_{1}$ be an element of~$\ensuremath{\mathbf{R}}_{>0}$ satisfying the inequality
\begin{equation*}
\sum_{i=0}^{d-1} \|g_i\|\, v_{1}^{i-d} \le \frac{1}{2}.
\end{equation*}
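In the toy example above (added here), where $G = S^2 - 2$ over~$\ensuremath{\mathbf{Z}}$ with the usual absolute value, this inequality reads $2\, v_{1}^{-2} \le 1/2$, so that any $v_{1} \ge 2$ is admissible.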
\begin{prop}[\protect{\cite[th\'eor\`eme~5.2.1, lemmes~5.2.2 et~5.2.3]{A1Z}}]\label{prop:equivalencedivres}\index{Norme!residuelle@r\'esiduelle}\index{Norme!divisorielle}
Let $v \ge v_{1}$. The following properties hold:
\begin{enumerate}[i)]
\item the semi-norm~$\|.\|_{v,\mathrm{r\acute{e}s}}$ on~$\mathcal{A}[S]/(G(S))$ is a norm;
\item the ring~$\mathcal{A}[S]/(G(S))$ endowed with $\nm_{v,\mathrm{r\acute{e}s}}$ is a Banach ring;
\item for every~$F\in\mathcal{A}[S]/(G(S))$ we have the inequalities
\[v^{-d+1}\|F\|_{v,\mathrm{r\acute{e}s}}\leq\|F\|_{\div}\leq 2\|F\|_{v,\mathrm{r\acute{e}s}}.\]
\end{enumerate}
In particular, the norm~$\nm_{\div}$ and the norms~$\nm_{v,\mathrm{r\acute{e}s}}$ for $v\ge v_{1}$ are all equivalent.
\qed
\end{prop}
The constants we give do not appear in the statement of~\cite[th\'eor\`eme 5.2.1]{A1Z}, but its proof provides them.
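As a sanity check in the toy example above (added here): for $G = S^2-2$ over~$\ensuremath{\mathbf{Z}}$, $F$ the class of~$S^3$ and $v = 2$, we have $\|F\|_{\div} = 2$ and $\|F\|_{v,\mathrm{r\acute{e}s}} \le \|2S\|_{v} = 4$; the inequalities of the proposition then force $1 \le \|F\|_{v,\mathrm{r\acute{e}s}} \le 4$, which is indeed consistent.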
\begin{defi}\index{Norme!spectrale|textbf}%
\nomenclature[Bjd]{$\nm_{\sp}$}{spectral semi-norm of $\nm_{v,\mathrm{r\acute{e}s}}$ on $\mathcal{A}[S]/(G(S))$ for $v$ large enough}%
We call \emph{spectral semi-norm} on $\mathcal{A}[S]/(G(S))$ the semi-norm~$\nm_{\sp}$ obtained as the spectral semi-norm of $\nm_{v_{1}, \mathrm{r\acute{e}s}}$. It does not depend on the choice of~$v_{1}$.
\end{defi}
Denote by $Z_{G}$ the Zariski closed subset of~$\E{1}{\mathcal{A}}$ defined by the equation $G=0$. The morphism of Banach rings $\mathcal{A} \to \mathcal{A}[S]/(G(S))$ induces a morphism of locally ringed spaces
\[ \varphi_{G} \colon Z_{G} \to B .\]%
\nomenclature[Jya]{$Z_{G}$}{ferm\'e de Zariski de~$\E{1}{\mathcal{A}}$ d\'efini par $G \in \mathcal{A}[S]$}%
\nomenclature[Jyb]{$\varphi_{G}$}{morphism $Z_{G} \xrightarrow[]{\sim} \mathcal{M}(\mathcal{A})$}%
Let $v_{2}$ be an element of~$\ensuremath{\mathbf{R}}_{>0}$ satisfying the inequality
\begin{equation*}\label{eq:v02}
v_{2} \ge \max_{0\le i\le d-1}(\|g_{i}\|^{1/(d-i)}).
\end{equation*}
\begin{lemm}[\protect{\cite[lemme~5.2.4]{A1Z}}]\label{lem:ZGspectreASG}\index{Norme!residuelle@r\'esiduelle}\index{Norme!uniforme}%
For every $v\ge v_{2}$, the disc~$\overline{D}_{B}(v)$ contains all the roots of~$G$ in~$\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}}$, and the spectrum of the ring~$\mathcal{A}[S]/(G(S))$ endowed with $\nm_{v,\mathrm{r\acute{e}s}}$ identifies with $Z_{G}$.
In particular, by Lemma~\ref{lem:spuni}, the spectral semi-norm~$\nm_{\sp}$ identifies with the uniform semi-norm on~$Z_{G}$.
\qed
\end{lemm}
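In the running toy example (added here), $G = S^2-2$ over~$\ensuremath{\mathbf{Z}}$ with the usual absolute value, one may take $v_{2} = \sqrt 2$, and indeed, in the fibre over the archimedean point, the two roots $\pm\sqrt2$ of~$G$ lie in the closed disc of radius~$\sqrt2$, as predicted by the lemma.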
\subsection{Condition $(\mathcal{B} N_{G})$}
\index{Condition!BNG@$(\mathcal{B} N_{G})$|(}
We introduce here a condition that makes it possible to compare residue norms and spectral norms in the setting of the previous section. It is a technical tool that will be crucial for us in what follows. This section is mainly taken from \cite[\S 5.3]{A1Z} and \cite[\S 6]{EtudeLocale}.
Let us point out a change of notation: the condition~$(N_{G})$ of~\cite{EtudeLocale} will be denoted here by~$(\mathcal{B} N_{G})$, in view of a generalisation in Section~\ref{sec:ONG}.\index{Condition!NG@$(N_{G})$}
\begin{defi}[\protect{\cite[d\'efinition~4.1]{EtudeLocale}}]\index{Condition!BNG@$(\mathcal{B} N_{G})$}\index{Norme!residuelle@r\'esiduelle}\index{Norme!spectrale}
We say that a compact subset~$U$ of~$B$ \emph{satisfies condition~$(\mathcal{B} N_G)$} if it is spectrally convex and if there exists $v_{U,0} \in \ensuremath{\mathbf{R}}_{>0}$ such that, for every $v \ge v_{U,0}$, the semi-norm~$\nm_{U,v,\mathrm{r\acute{e}s}}$ on~$\mathcal{B}(U)[S]/(G(S))$ is equivalent to the spectral semi-norm.
\end{defi}
We may assume that~$v_{U,0}$ satisfies the inequalities
\[ \sum_{i=0}^{d-1} \|g_i\|_{U}\, v_{U,0}^{i-d} \le \frac{1}{2} \textrm{ et } v_{U,0} \ge \max_{0\le i\le d-1}\big(\|g_{i}\|_{U}^{1/(d-i)}\big).\]
We will always place ourselves under this assumption in what follows.
\begin{prop}[\protect{\cite[proposition~5.3.3]{A1Z}}]\index{Norme!residuelle@r\'esiduelle}\index{Norme!divisorielle}\index{Norme!uniforme}\index{Condition!BNG@$(\mathcal{B} N_{G})$}
Let~$U$ be a compact subset of~$B$ satisfying condition~$(\mathcal{B} N_{G})$. Then, for every $v\ge v_{U,0}$,
the norms $\nm_{U,\div}$ and $\nm_{U,v,\mathrm{r\acute{e}s}}$ on~$\mathcal{B}(U)[T]/(G(T))$ are equivalent to the uniform norm on~$\varphi_{G}^{-1}(U)$. Moreover, the natural morphism
\[ \mathcal{B}(U)[T]/(G(T)) \to \mathcal{B}(\varphi_{G}^{-1}(U)), \]
where $\varphi_{G}^{-1}(U)$ is viewed as a subset of~$\E{1}{\mathcal{A}}$, is an isomorphism.
\qed
\end{prop}
We now recall some sufficient conditions for condition~$(\mathcal{B} N_{G})$ to be satisfied. In practice, we will use the first one in the ultrametric part of the space and the second one in the part of characteristic zero.
\begin{prop}[\protect{\cite[corollaire~6.3 et proposition~6.14]{EtudeLocale}}]\label{prop:CNBNG}\index{Condition!BNG@$(\mathcal{B} N_{G})$}
\index{Bord analytique}\index{Resultant@R\'esultant}%
\nomenclature[Jz]{$\mathrm{R\acute{e}s}$}{resultant}%
Let~$U$ be a compact and spectrally convex subset of~$B$. Assume that~$U$ admits an analytic boundary~$\Gamma_{U}$ satisfying one of the following conditions:
\begin{enumerate}
\item $\Gamma_{U}$ is finite and, for every $\gamma \in \Gamma_{U}$, $G(\gamma)$ has no multiple factors;
\item the function $|\mathrm{R\acute{e}s}(G,G')|$, where $\mathrm{R\acute{e}s}(G,G')$ denotes the resultant of the polynomials~$G$ and~$G'$, is bounded below on~$\Gamma_{U}$ by a positive constant.
\end{enumerate}
Then $U$ satisfies condition~$(\mathcal{B} N_{G})$.
\qed
\end{prop}
In the particular case where the polynomial~$G$ has degree~1, condition~$(\mathcal{B} N_{G})$ is always satisfied (as one can prove either directly or with the help of Proposition~\ref{prop:CNBNG}). Using the results of this section, we obtain the following statement. Note that it would be simpler here to prove it directly.
\begin{prop}\label{prop:isodegre1}\index{Norme!divisorielle}\index{Norme!uniforme}
Assume that the polynomial~$G$ has degree~1. Then, for every compact spectrally convex subset~$U$ of~$B$, the norm~$\nm_{U,\div}$ is equivalent to the uniform norm on~$\varphi_{G}^{-1}(U)$ and the natural morphism
\[\mathcal{B}(U) \xrightarrow[]{\sim} \mathcal{B}(U)[T]/(G(T)) \to \mathcal{B}(\varphi_{G}^{-1}(U)) \]
is an admissible isomorphism.
In particular, the natural morphism $Z_{G} \to B$ is an isomorphism of ringed spaces.
\qed
\end{prop}
The previous results apply in the setting of polynomial endomorphisms of the line. \index{Endomorphisme de la droite}\index{Condition!BNPS-T@$(\mathcal{B} N_{P(S)-T})$|(}
Let~$W$ be a compact and spectrally convex subset of~$B$. Let~$P$ be a monic non-constant polynomial with coefficients in~$\mathcal{B}(W)$. The natural morphism
\[\varphi^\sharp_{P} \colon \mathcal{B}(W)[T] \to \mathcal{B}(W)[T,S]/(P(S) - T) \xrightarrow[]{\sim} \mathcal{B}(W)[S]\]%
\nomenclature[Jy]{$\varphi_{P}$}{endomorphism of $\E{1}{\mathcal{A}}$ induced by $P \in \mathcal{A}[T]$}%
induces an endomorphism~$\varphi_{P}$ of~$\E{1}{\mathcal{B}(W)}$. For every compact spectrally convex subset~$U$ of~$\E{1}{\mathcal{B}(W)}$ (with coordinate~$T$), denote by~$Z_{U,P(S)-T}$ the Zariski closed subset of $\E{1}{\mathcal{B}(U)}$ (with coordinate~$S$) defined by the equation $P(S)-T = 0$. By Proposition~\ref{prop:isodegre1}, it identifies with~$\varphi_{P}^{-1}(U)$.
In particular, for all $r,s \in \ensuremath{\mathbf{R}}_{\ge 0}$ such that $0 = r< s$ or $0< r\le s$, if $U = \overline{C}_{W}(r,s)$, then $Z_{U,P(S)-T}$ identifies with $\overline{C}_{W}(P;r,s)$.
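As a simple illustration (added here, with the usual convention that $\overline{D}_{W}(P;r)$ denotes the locus of points above~$W$ where $|P(S)| \le r$), take $P(S) = S^2$. Since the points of the affine line are multiplicative semi-norms, $|P(S)| = |S|^2$ at every point, so that, for every $r \in \ensuremath{\mathbf{R}}_{>0}$,
\[\varphi_{P}^{-1}\big(\overline{D}_{W}(r)\big) = \overline{D}_{W}(S^2;r) = \overline{D}_{W}(r^{1/2}).\]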
\begin{coro}[\protect{\cite[proposition~7.1]{EtudeLocale}}]\label{coro:BNPST}\index{Couronne!algebre@alg\`ebre d'une}\index{Disque!algebre@alg\`ebre d'un}\index{Domaine polynomial!algebre@alg\`ebre d'un}\index{Norme!residuelle@r\'esiduelle}\index{Norme!uniforme}\index{Norme!divisorielle}
Let $W,P,r,s$ be as above.
Assume that $\overline{C}_{W}(r,s)$ satisfies condition $(\mathcal{B} N_{P(S)-T})$. Then, for every $w\ge v_{0}$, the norms $\nm_{\overline{C}_{W}(r,s),\div}$ and $\nm_{\overline{C}_{W}(r,s),w,\mathrm{r\acute{e}s}}$ on~$\mathcal{B}(\overline{C}_{W}(r,s))[S]/(P(S)-T)$ are equivalent
to the uniform norm on $\overline{C}_{W}(P;r,s)$. Moreover, the morphism
\[ \mathcal{B}(\overline{C}_{W}(r,s))[S]/(P(S) - T) \to \mathcal{B}(\overline{C}_{W}(P;r,s))\]
is an isomorphism.
\qed
\end{coro}
From Proposition~\ref{prop:CNBNG} one can deduce conditions ensuring that certain annuli $\overline{C}_{W}(r,s)$ satisfy condition $(\mathcal{B} N_{P(S)-T})$.
\begin{prop}[\protect{\cite[proposition~6.5 et lemme~6.16]{EtudeLocale}}]\label{prop:CNBNPST}
Let $W,P,r,s$ be as above.
\begin{enumerate}[i)]
\item Assume that $W$ is contained in the ultrametric part of~$B$ and admits a finite analytic boundary. Then $\overline{C}_{W}(r,s)$ satisfies condition $(\mathcal{B} N_{P(S)-T})$.
\item Assume that $P(b) \in \mathcal{H}(b)[T]$ is separable. Let $r,r',s,s' \in \ensuremath{\mathbf{R}}_{\ge 0}$ be such that $r \prec r'$ and $0 < s' < s$. Then there exist $r_{1},r_{2},s_{1},s_{2} \in \ensuremath{\mathbf{R}}_{\ge 0}$ satisfying $r \prec r_{1}\prec r_{2} \prec r'$ and $s' < s_{1} < s_{2}< s$, a compact neighbourhood~$V_{0}$ of~$b$ in~$B$ and $\ensuremath{\varepsilon}\in \ensuremath{\mathbf{R}}_{>0}$ such that, for every compact spectrally convex subset~$V$ of~$V_{0}$ containing~$b$, every $u \in[r_{1},r_{2}]$, every $v \in[s_{1},s_{2}]$ and every $Q \in \mathcal{B}(V)[T]$ with $\|Q-P\|_{V,\infty} \le \ensuremath{\varepsilon}$, the annulus $\overline{C}_{V}(u,v)$ satisfies condition $(\mathcal{B} N_{Q(S)-T})$.
\end{enumerate}
\qed
\end{prop}
The statement of \cite[lemme~6.16]{EtudeLocale} is less precise than~ii), but its proof provides this refined version.
\begin{coro}\label{cor:BVfineNPST}\index{Voisinage}%
Let $b\in B$ and let~$x$ be a point of~$\E{1}{\mathcal{A}}$ (with coordinate~$S$) above~$b$. Let~$\mathcal{V}_{b}$ be a fine basis of compact spectrally convex neighbourhoods of~$b$ in~$B$. If~$\mathcal{H}(b)$ has positive characteristic and is trivially valued, assume that every element of~$\mathcal{V}_{b}$ admits a finite analytic boundary. Then $x$ admits a fine basis~$\mathcal{V}_{x}$ of compact spectrally convex neighbourhoods of the form
\[\overline{C}_{V}(P;r,s),\]
with $V \in \mathcal{V}_{b}$, $P \in \mathcal{B}(V)[S]$ monic non-constant and $r,s \in \ensuremath{\mathbf{R}}_{\ge 0}$. Moreover, we may assume that, for all $V,P,r,s$ such that $\overline{C}_{V}(P;r,s)$ belongs to~$\mathcal{V}_{x}$,
\begin{enumerate}[i)]
\item $\overline{C}_{b}(P;r,s)$ is connected;
\item there exist a neighbourhood~$V_{0}$ of~$V$ such that $P \in \mathcal{B}(V_{0})[S]$ and $r_{0},s_{0} \in \ensuremath{\mathbf{R}}_{\ge 0}$ with $r_{0}\prec r$ and $s_{0}>s$ such that, for every $V' \in \mathcal{V}_{b}$ satisfying $V \subset V' \subset V_{0}$ and all $r',s' \in \ensuremath{\mathbf{R}}_{\ge 0}$ satisfying $r_{0}\prec r' \prec r$ and $s_{0}> s'>s$, $\overline{C}_{V'}(P;r',s')$ belongs to~$\mathcal{V}_{x}$;
\item the annulus $\overline{C}_{V}(r,s)$ (with coordinate~$T$) satisfies property~$(\mathcal{B} N_{P(S)-T})$.
\end{enumerate}
\end{coro}
\begin{proof}
Let us begin with a topological remark that will allow us to ensure that the bases of neighbourhoods we construct are fine. Let~$V$ be a compact subset of~$B$, let $P \in \mathcal{O}(V)[S]$ and let $r,s \in \ensuremath{\mathbf{R}}_{\ge 0}$. Let~$U$ be a neighbourhood of $\overline{C}_{V}(P;r,s)$. Then, for every sufficiently small neighbourhood~$V'$ of~$V$ and all $r',s' \in \ensuremath{\mathbf{R}}_{\ge 0}$ with $r'\prec r$ close enough to~$r$ and $s'>s$ close enough to~$s$, we have $\overline{C}_{V'}(P;r',s') \subset U$.
The statement then follows from Propositions~\ref{prop:basevoisdim1rigide} and \ref{prop:CNBNPST} if $x$~is rigid in the fibre above~$b$, and from Propositions~\ref{prop:basevoisdim1} and \ref{prop:CNBNPST} otherwise.
\end{proof}
Let us make the result more precise in the case of thick rigid points.
\begin{coro}\label{cor:BVfineNPSTrigideepais}\index{Voisinage!d'un point rigide \'epais}\index{Point!rigide epais@rigide \'epais!voisinage|see{Voisinage}}%
Let $b\in B$ and let~$x$ be a thick rigid point of~$\E{1}{\mathcal{A}}$ (with coordinate~$S$) above~$b$. Denote by $d$ the degree of $\mu_{\kappa,x} \in \kappa(b)[S]$. Let~$P_{0}$ be a monic lift of degree~$d$ of~$\mu_{\kappa,x}$ in $\mathcal{O}_{B,b}[S]$. Let~$\mathcal{V}_{b}$ be a fine basis of compact spectrally convex neighbourhoods of~$b$ in~$B$ on which~$P_{0}$ is $\mathcal{B}$-defined.
Then $x$ admits a fine basis~$\mathcal{V}_{x}$ of compact spectrally convex neighbourhoods of the form
\[\overline{D}_{V}(P_{0},s),\]
with $V \in \mathcal{V}_{b}$ and $s \in \ensuremath{\mathbf{R}}_{> 0}$. Moreover, we may assume that, for all $V,s$ such that $\overline{D}_{V}(P_{0},s)$ belongs to~$\mathcal{V}_{x}$,
\begin{enumerate}[i)]
\item $\overline{D}_{b}(P_{0},s)$ is connected;
\item there exist a neighbourhood~$V_{0}$ of~$V$ and $s_{0} \in \ensuremath{\mathbf{R}}_{\ge 0}$ with $s_{0}>s$ such that, for every $V' \in \mathcal{V}_{b}$ with $V \subset V' \subset V_{0}$ and every $s' \in \intoo{s,s_{0}}$, $\overline{D}_{V'}(P_{0},s')$ belongs to~$\mathcal{V}_{x}$.
\end{enumerate}
If $\mu_{\kappa,x}$ is separable or if every element of~$\mathcal{V}_{b}$ admits a finite analytic boundary, we may also assume that
\begin{enumerate}
\item[iii)] the disc $\overline{D}_{V}(s)$ (with coordinate~$T$) satisfies property~$(\mathcal{B} N_{P_{0}(S)-T})$.
\end{enumerate}
If $\mathcal{H}(b)$ is ultrametric and nontrivially valued, we may also assume that
\begin{enumerate}
\item[iii')] there exist a monic polynomial $Q \in \mathcal{B}(V)[S]$ of degree~$d$ such that $Q(b) \in \mathcal{H}(b)[T]$ is separable and an open interval~$I$ of~$\ensuremath{\mathbf{R}}_{>0}$ containing~$s$ such that, for every compact spectrally convex neighbourhood~$W$ of~$b$ in~$V$ and every $s' \in I$, we have $\overline{D}_{W}(Q,s') = \overline{D}_{W}(P_{0},s')$ and the disc $\overline{D}_{W}(s)$ (with coordinate~$T$) satisfies property~$(\mathcal{B} N_{Q(S)-T})$.
\end{enumerate}
\end{coro}
\begin{proof}
The first part of the statement, as well as points~i) and~ii), follows from Lemma~\ref{lem:polmin} and Proposition~\ref{prop:basevoisdim1rigide}. Points~iii) and~iii') then follow from Proposition~\ref{prop:CNBNPST}.
\end{proof}
\index{Condition!BNG@$(\mathcal{B} N_{G})$|)}
\index{Condition!BNPS-T@$(\mathcal{B} N_{P(S)-T})$|)}
\subsection{Condition $(\mathcal{O} N_{G})$}\label{sec:ONG}
\index{Condition!ONG@$(\mathcal{O} N_{G})$|(}
In Chapter~\ref{chap:Stein} of this text, devoted to Stein spaces, we will need results similar to those of the previous section with the algebra~$\mathcal{B}(U)$ replaced by $\overline{\mathcal{O}(U)}$. This leads us to modify some definitions. It will also be important no longer to require~$U$ to be spectrally convex.
Let~$U$ be a compact subset of~$B$. The canonical morphism $\varphi_{G} \colon Z_{G} \to B$ induces a morphism
\[\psi_{G,U} \colon \mathcal{O}(U)[T]/(G(T)) \to \mathcal{O}_{Z_{G}}(\varphi_{G}^{-1}(U)).\]%
\nomenclature[Jyc]{$\psi_{G,U}$}{natural morphism $\mathcal{O}(U)[T]/(G(T)) \to \mathcal{O}_{Z_{G}}(\varphi_{G}^{-1}(U))$}%
\begin{defi}\label{def:NG}\index{Condition!ONG@$(\mathcal{O} N_{G})$|textbf}\index{Norme!residuelle@r\'esiduelle}\index{Norme!uniforme}
We say that the compact subset~$U$ satisfies condition~$(\mathcal{O} N_{G})$ if there exists $v_{U,0} \in \ensuremath{\mathbf{R}}_{>0}$ such that, for every $v\ge v_{U,0}$, the residue norm $\nm_{U,v,\mathrm{r\acute{e}s}}$ on $\mathcal{O}_{X}(U)[T]/(G(T))$ is equivalent to $\|\psi_{G,U}(\wc)\|_{\varphi_{G}^{-1}(U)}$.
\end{defi}
We may assume that~$v_{U,0}$ satisfies the inequalities
\[ \sum_{i=0}^{d-1} \|g_i\|_{U}\, v_{U,0}^{i-d} \le \frac{1}{2} \textrm{ et } v_{U,0} \ge \max_{0\le i\le d-1}(\|g_{i}\|_{U}^{1/(d-i)}).\]
\emph{In the remainder of this section, we will always place ourselves under this assumption.}
\begin{lemm}\label{lem:eqnormespectrale}\index{Norme!residuelle@r\'esiduelle}\index{Norme!spectrale}%
Assume that~$U$ satisfies condition~$(\mathcal{O} N_{G})$. Then, for every $v\ge v_{U,0}$, the residue norm $\nm_{U,v,\mathrm{r\acute{e}s}}$ on $\overline{\mathcal{O}(U)}[T]/(G(T))$ is equivalent to its spectral norm.
Moreover, if the canonical morphism $U \to \mathcal{M}(\overline{\mathcal{O}(U)})$ is bijective, this condition is equivalent to condition~$(\mathcal{O} N_{G})$.
\end{lemm}
\begin{proof}
Consider the space $\hat{U} = \mathcal{M}(\overline{\mathcal{O}(U)})$ and the closed subset $\hat Z_{G}$ defined by $G=0$ in $\E{1}{\hat U}$. Denote by $\hat{\varphi}_{G} \colon \hat Z_{G} \to \hat U$ and
\[\hat \psi_{G,U} \colon \overline{\mathcal{O}(U)}[T]/(G(T)) \to \mathcal{O}_{\hat Z_{G}}(\hat Z_{G})\]
the canonical morphisms. We then have a cartesian diagram
\[\begin{tikzcd}
\varphi_{G}^{-1}(U) \arrow[r, hook] \arrow[d] & \hat Z_{G} \arrow[d]\\
U \arrow[r, hook, ] & \hat{U}
\end{tikzcd}.\]
Let~$v\ge v_{U,0}$. For every $f\in \mathcal{O}_{X}(U)[T]/(G(T))$, we have
\[\|\psi_{G,U}(f)\|_{\varphi_{G}^{-1}(U)} \le \|\hat\psi_{G,U}(f)\|_{\hat Z_{G}} \le \|f\|_{U, v, \mathrm{r\acute{e}s}},\]
the last inequality coming from the fact that we have a bounded morphism
\[\overline{\mathcal{O}_{X}(U)}[T] \to \mathcal{O}_{\hat Z_{G}}(\hat Z_{G})\]
since $\overline{D}_{\overline{\mathcal{O}(U)}}(v)$ contains all the roots of~$G(T)$ in~$\E{1}{\overline{\mathcal{O}(U)}}$, by Lemma~\ref{lem:ZGspectreASG}.
Now, again by Lemma~\ref{lem:ZGspectreASG}, $\hat{Z}_{G}$ identifies naturally with the spectrum of the algebra $\overline{\mathcal{O}_{X}(U)}[T]/(G(T))$, so that $\|\hat\psi_{G,U}(\wc)\|_{\hat Z_{G}}$ is nothing but the spectral norm of $\nm_{U, v, \mathrm{r\acute{e}s}}$. The result now follows directly from condition~$(\mathcal{O} N_{G})$.
Under the assumption of the final assertion, the lower horizontal arrow is bijective. We deduce that the upper arrow is bijective as well, which allows us to identify $\|\psi_{G,U}(\wc)\|_{\varphi_{G}^{-1}(U)}$ with the spectral norm of $\nm_{U, v, \mathrm{r\acute{e}s}}$ and hence to conclude.
\end{proof}
\begin{rema}%
For every compact subset~$U$ of~$B$, the ring $\mathcal{B}(U)$ maps isometrically into~$\overline{\mathcal{O}(U)}$. It then follows from Lemma~\ref{lem:eqnormespectrale} that, if $U$ is spectrally convex and satisfies condition~$(\mathcal{O} N_{G})$, it also satisfies condition~$(\mathcal{B} N_{G})$.
\end{rema}
We now go through the statements of \cite[\S 6.1]{EtudeLocale} again, indicating the modifications to be made.
\index{Bord analytique!fort|(}
\begin{defi}[\protect{\cf~\cite[d\'efinition~6.1]{EtudeLocale}}]\index{Bord analytique!fort|textbf}
Let~$U$ be a compact subset of~$X$. We say that a subset~$\Gamma$ of~$U$ is a strong analytic boundary of~$U$ if, for every $f\in\mathcal{O}(U)$, we have
\[\|f\|_{U} = \|f\|_{\Gamma}.\]
\end{defi}
\begin{prop}[\protect{\cf~\cite[proposition~6.2]{EtudeLocale}}]\label{prop:NGunion}%
Assume that there exist subsets $V_{1},\dotsc,V_{r}$ of~$U$ such that
\begin{enumerate}[i)]
\item for every $i\in\cn{1}{r}$, $V_{i}$ is compact and satisfies condition~$(\mathcal{O} N_{G})$;
\item the union $\Gamma_{U} := \bigcup_{1\le i\le r} V_{i}$ is a strong analytic boundary of~$U$.
\end{enumerate}
Then $U$ also satisfies condition~$(\mathcal{O} N_{G})$.
Moreover, if $U$~is decent,
then the subset $\Gamma_{U,G} := \varphi_{G}^{-1}(\Gamma_{U})$ is a strong analytic boundary of $\varphi_{G}^{-1}(U)$.
\end{prop}
\begin{proof}
The proof of the first assertion of~\cite[proposition~6.2]{EtudeLocale} uses condition~$(N_{G})$ (that is, $(\mathcal{B} N_{G})$ in our notation) precisely in the form we stated in Definition~\ref{def:NG}. Only obvious modifications are therefore needed.
Going through the proof of the second assertion of~\cite[proposition~6.2]{EtudeLocale} again, one shows that, for every $f\in \mathcal{O}(U)[T]/(G(T))$, we have
\[\|\psi_{G,U}(f)\|_{\varphi_{G}^{-1}(U)} = \max_{x\in \Gamma_{U,G}} (|\psi_{G,U}(f)(x)|).\]
The decency of~$U$ and the Weierstra\ss{} division theorem~\ref{weierstrass} (in the form of Theorem~\ref{thm:isolemniscate}, adapted to the polynomial~$G$)
ensure that we have an isomorphism
\[\mathcal{O}(U)[T]/(G(T)) \xrightarrow[]{\sim} \mathcal{O}_{Z_{G}}(\varphi_{G}^{-1}(U)).\]
The result follows.
\end{proof}
Corollary~6.3 of~\cite{EtudeLocale} adapts immediately and we omit it. Lemma~6.4 has already been taken up in Lemma~\ref{lem:couronneum}. All the necessary technical results are now at our disposal. We state their consequences without further comment.
\begin{prop}[\protect{\cf~\cite[propositions~6.5 et~6.6]{EtudeLocale}}]\label{prop:bordanfortfini}\index{Domaine polynomial}\index{Condition!ONPS-T@$(\mathcal{O} N_{P(S)-T})$}
Let~$U$ be a compact and decent subset of~$X_{\mathrm{um}}$ admitting a finite strong analytic boundary. Let $P(S) \in \mathcal{O}(U)[S]$ be monic non-constant and let $r,s\in\ensuremath{\mathbf{R}}$ satisfy $0=r<s$ or $0<r\le s$. Then the polynomial domain $\overline{C}_{U}(P;r,s)$ admits a finite strong analytic boundary.
Moreover, for every monic non-constant $Q(T) \in \mathcal{O}(U)[T]$, the polynomial domain $\overline{C}_{U}(Q;r,s)$ satisfies condition $(\mathcal{O} N_{P(S)-T})$.
\qed
\end{prop}
\index{Bord analytique!fort|)}
\begin{defi}[\protect{\cf~\cite[D\'efinition~6.9]{EtudeLocale}}]\index{Point!ultrametrique tres typique@ultram\'etrique tr\`es typique|textbf}
A \emph{point}~$x$ of~$B$ is said to be \emph{very typical ultrametric} if it belongs to the interior of~$B_{\mathrm{um}}$ and admits a fundamental system of compact, spectrally convex neighbourhoods admitting a finite strong analytic boundary.
\end{defi}
\begin{defi}\label{defi:honnete}\index{Point!tres decent@tr\`es d\'ecent|textbf}\index{Partie!tres decente@tr\`es d\'ecente|textbf}
We say that a point~$x$ of an analytic space is \emph{very decent} if (at least) one of the following three conditions is satisfied:
\begin{enumerate}[i)]
\item $\mathcal{H}(x)$ has characteristic zero;
\item the valuation of $\mathcal{H}(x)$ is nontrivial;
\item $x$ is very typical ultrametric.
\end{enumerate}
We say that a subset of an analytic space is very decent when all of its points are.
\end{defi}
\begin{coro}[\protect{\cf~\cite[proposition~6.10]{EtudeLocale}}]%
Let~$b\in B$ be a very typical ultrametric (resp. very decent) point. Let $n\in\ensuremath{\mathbf{N}}$. Then every point of~$\E{n}{\mathcal{A}}$ lying above~$b$ is again very typical ultrametric (resp. very decent).
\qed
\end{coro}
The following corollary is new.
\begin{coro}\label{cor:BVcompactN}\index{Condition!ONPS-T@$(\mathcal{O} N_{P(S)-T})$}
Let~$U$ be a compact subset of~$B_{\mathrm{um}}$ all of whose points are very typical ultrametric. Then the compact subset~$U$ admits a fundamental system of compact neighbourhoods~$\mathcal{V}$ such that, for every $V\in \mathcal{V}$, every monic non-constant $P(S) \in \mathcal{O}(V)[S]$ and all $r,s\in\ensuremath{\mathbf{R}}$ satisfying $0=r<s$ or $0<r\le s$, the annulus $\overline{C}_{V}(r,s)$ satisfies condition~$(\mathcal{O} N_{P(S)-T})$.
\end{coro}
\begin{proof}
Let~$W$ be a neighbourhood of~$U$. Since every point of~$U$ admits a compact neighbourhood in~$W$ with a finite strong analytic boundary, by compactness of~$U$ there exists a compact neighbourhood~$V$ of~$U$ in~$W$ which is a finite union of compact subsets admitting a finite strong analytic boundary. We then conclude using Propositions~\ref{prop:bordanfortfini} and~\ref{prop:NGunion}.
\end{proof}
One can likewise go through the statements of \cite[\S 6.2]{EtudeLocale} again.
\begin{defi}[\protect{\cf~\cite[d\'efinition~6.12]{EtudeLocale}}]\index{Condition!ORG@$(\mathcal{O} R_{G})$|textbf}\index{Bord analytique!fort}\index{Resultant@R\'esultant}
We say that the compact subset~$U$ satisfies condition $(\mathcal{O} R_{G})$ if it admits a strong analytic boundary on which the function $|\mathrm{R\acute{e}s}(G,G')|$ is bounded below by a positive real number.
\end{defi}
\begin{prop}[\protect{\cf~\cite[proposition~6.14]{EtudeLocale}}]\label{prop:RG}%
Every compact subset of~$B$ satisfying condition~$(\mathcal{O} R_{G})$ also satisfies condition~$(\mathcal{O} N_{G})$.
\end{prop}
\index{Condition!ONG@$(\mathcal{O} N_{G})$|)}
\index{Norme!comparaison|)}
\section{Weierstra\ss{} division with control on the norms}\label{sec:divWnormes}
In this section, we prove a refined version of the Weierstra\ss{} division theorem (\cf~Theorem~\ref{weierstrass}) and of the generalised Weierstra\ss{} theorem (\cf~Corollary~\ref{cor:weierstrassgeneralise}), including a control of the norms of the remainder and of the quotient.
Let $(\mathcal{A},\nm)$ be a Banach ring. Set $B := \mathcal{M}(\mathcal{A})$ and $X := \E{1}{\mathcal{A}}$ with coordinate~$S$. Denote by $\pi \colon X \to B$ the projection morphism. For every point~$b$ of~$B$, set $X_{b} := \pi^{-1}(b)$.
Recall that to every rigid point~$x$ of~$\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}}$ one associates a minimal polynomial~$\mu_{x}$ and a degree~$\deg(x)$ (\cf~Definition~\ref{def:rigidedroite}), and that the local ring $\mathcal{O}_{\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}},x}$ at this point is a discrete valuation ring with uniformiser~$\mu_{x}$.
\begin{theo}\label{weierstrassam}\index{Theoreme@Th\'eor\`eme!de division de Weierstra\ss!avec contr\^ole des normes}
Let $b\in B$ and let~$x$ be a rigid point of~$X_{b}$.
Let~$U_{0}$ be a compact neighbourhood of~$x$ in~$X$ and let~$G$ be an element of~$\mathcal{B}(U_{0})$. Assume that its image in $\mathcal{O}_{X_{b},x}$ is nonzero and denote by $n \in \ensuremath{\mathbf{N}}$ the $\mu_{x}$-adic valuation of the latter.
Let~$\mathcal{V}_b$ be a basis of compact spectrally convex neighbourhoods of~$b$ in~$B$.
If $\mathcal{H}(b)$ is trivially valued and $\mu_{x}$ is inseparable, assume moreover that all the elements of~$\mathcal{V}_{b}$ are contained in~$B_{\mathrm{um}}$ and admit a finite analytic boundary. Let~$\eta \in \ensuremath{\mathbf{R}}_{>0}$.
Then there exist a neighbourhood~$V_{0}$ of~$b$ in~$B$, a monic non-constant polynomial~$P_{0}$ with coefficients in~$\mathcal{B}(V_{0})$ such that $\|P_{0}(b) - \mu_{x}\|\le \eta$, and real numbers $s_{1},s_{2},\ensuremath{\varepsilon} \in \ensuremath{\mathbf{R}}_{>0}$ with $s_{1} < s_{2}$ satisfying the following properties: for every element~$V$ of~$\mathcal{V}_{b}$ contained in~$V_{0}$ and every $P_{1} \in \mathcal{B}(V)[S]$ such that $\|P_{1}-P_{0}\|_{V,\infty} \le \ensuremath{\varepsilon}$, the domain $\overline{D}_V(P_{1};s_{2})$ is contained in~$U_{0}$ and, for every $F \in \mathcal{B}(\overline{D}_V(P_{1};s_{2}))$ and every $s \in (s_{1},s_{2})$, there exists a unique pair $(Q,R) \in \mathcal{B}(\overline{D}_V(P_{1};s))^2$ satisfying
\begin{enumerate}[i)]
\item $F = QG+R$ in $\mathcal{B}(\overline{D}_V(P_{1};s))$;
\item $R \in \mathcal{B}(V)[S]$ is a polynomial of degree strictly less than~$n\deg(x)$.
\end{enumerate}
Moreover, for every $s\in (s_{1},s_{2})$, there exists $C_{s} \in \ensuremath{\mathbf{R}}_{>0}$ such that, for every $F \in \mathcal{B}(\overline{D}_V(P_{1};s_{2}))$, with the above notation, we have
\[\|Q\|_{\overline{D}_V(P_{1};s)} \leq C_{s}\, \|F\|_{\overline{D}_V(P_{1};s_{2})} \textrm{ et } \|R\|_{\overline{D}_V(P_{1};s)}\leq C_{s}\,\|F\|_{\overline{D}_V(P_{1};s_{2})}.\]
\end{theo}
\begin{proof}
Set $d:=\deg(x)$.
There exist~$\alpha_0,\ldots,\alpha_{d-1}\in\mathcal{H}(b)$ such that
\[\mu_{x}(S)=S^d+ \sum_{i=0}^{d-1}\alpha_iS^i.\]
Since the $\mu_{x}$-adic valuation of~$G$ equals~$n$, there exists an invertible element~$H$ of~$\mathcal{O}_{X_{b},x}$ such that $G=H \mu_{x}^n$. Up to shrinking~$U_{0}$, we may assume that $H$ and $H^{-1}$ are defined on $U_{0} \cap X_{b}$.
Set $N := \|\mu_{x}\|_{b,\infty}$. Let~$v \in \ensuremath{\mathbf{R}}_{>0}$ be such that
\[(|\alpha_{0}| + 1) v^{-d} + \sum_{i=1}^{d-1}|\alpha_i| v^{i-d} < \frac{1}{2}.\]
Let~$V_{0}, P_{0}, s_{2}, \ensuremath{\varepsilon}$ satisfy the conclusions of Proposition~\ref{prop:basevoisdim1rigide}
applied with $U=U_{0}$. We may assume that $s_{2} \le 1$. If~$\mu_{x}$ is separable or if $\mathcal{H}(b)$ is not trivially valued, we may assume that~$P_{0}$ is separable. Otherwise, by assumption, all the elements of~$\mathcal{V}_{b}$ are contained in~$B_{\mathrm{um}}$ and admit a finite analytic boundary. In all cases, by Proposition~\ref{prop:CNBNPST}, up to shrinking~$V_{0}$, $s_{2}$ and~$\ensuremath{\varepsilon}$, we may assume that there exists $s_{1} \in (0,s_{2})$ such that, for every spectrally convex subset~$V$ of~$V_{0}$ containing~$b$, every $s \in [s_{1},s_{2}]$ and every $Q \in \mathcal{B}(V)[T]$ such that $\|Q-P_{0}\|_{V,\infty} \le \ensuremath{\varepsilon}$, the disc $\overline{D}_{V}(s)$ satisfies condition $(\mathcal{B} N_{Q(S)-T})$.
Write $P_{0}$ in the form $P_{0} = S^d + \sum_{i=0}^{d-1} p_i S^i \in \mathcal{B}(V_{0})[S]$. We may also assume that
\begin{equation}\label{eq1}
\|P_{0}(b) - \mu_{x}\|_{b,\infty} \le \max \left(1, \frac{s_{1}^n \, v^{-(n+1) d + n + 1}}{8 n (N+1)^{n-1}} \right)
\end{equation}
and
\begin{equation*}
(|p_{0}(b)| + 1) v^{-d} + \sum_{i=1}^{d-1} |p_{i}(b)|\, v^{i-d} < \frac12.
\end{equation*}
Consequently, there exists a compact spectrally convex neighbourhood~$V$ of~$b$ contained in~$V_{0}$ on which we have
\begin{equation}\label{eq2}
(\|p_{0}\|_{V} + 1) v^{-d} + \sum_{i=1}^{d-1} \|p_{i}\|_{V}\, v^{i-d} \le \frac12.
\end{equation}
Note also that equation~\eqref{eq1} implies
\begin{equation}\label{eq:LN+2}
\|P_{0}(b)\|_{b,\infty} \le \|P_{0}(b) - \mu_{x}\|_{b,\infty} + \|\mu_{x}\|_{b,\infty} \le N+1.
\end{equation}
Let $s'_{2} \in (s_{1},s_{2})$. The choices made ensure that $\overline{D}_{V}(s_{2})$ satisfies condition $(\mathcal{B} N_{P_{0}(S)-T})$. By Corollary~\ref{coro:BNPST}, the natural morphism
\[ \mathcal{B}(\overline{D}_{V}(s_{2}))[S]/(P_{0}(S) - T) \to \mathcal{B}(\overline{D}_{V}(P_{0};s_{2}))\]
is therefore an isomorphism. Moreover, by Proposition~\ref{prop:restrictionserie}, every element of $\mathcal{B}(\overline{D}_{V}(s_{2}))$ induces by restriction an element of $\mathcal{B}(V)\ensuremath{\langle} |T| \le s'_{2}\ensuremath{\rangle}$. Up to replacing~$s_{2}$ by~$s'_{2}$, we may therefore assume that~$G$ belongs to $\mathcal{B}(V)\ensuremath{\langle} |T| \le s_{2}\ensuremath{\rangle}[S]/(P_{0}(S)-T)$.
Let~$W$ be a compact subset of~$V$ and let $t \in (0,1]$. By Proposition~\ref{prop:equivalencedivres} applied with $\mathcal{A} = \mathcal{B}(W)\ensuremath{\langle} |T| \le t\ensuremath{\rangle}$ and $G(S) = P_{0}(S) -T$, for every $F\in \mathcal{B}(W)\ensuremath{\langle} |T|\le t\ensuremath{\rangle}[S]/(P_{0}(S)-T)$, we have
\begin{equation}\label{eq3}
v^{-d+1} \|F\|_{\mathcal{B}(W)\ensuremath{\langle}|T|\le t\ensuremath{\rangle},v,\mathrm{r\acute{e}s}} \le \|F\|_{\mathcal{B}(W)\ensuremath{\langle}|T|\le t\ensuremath{\rangle},\div}\le 2\|F\|_{\mathcal{B}(W)\ensuremath{\langle}|T|\le t\ensuremath{\rangle},v,\mathrm{r\acute{e}s}}.
\end{equation}
In what follows, we write~$\|.\|_{W,t,v,\mathrm{r\acute{e}s}}$ for the norm~$\|.\|_{\mathcal{B}(W)\langle|T|\leq t\rangle,v,\mathrm{r\acute{e}s}}$ and~$\|.\|_{W,t,\div}$ for the norm~$\|.\|_{\mathcal{B}(W)\langle|T|\leq t\rangle,\div}$.
By Proposition~\ref{prop:disqueglobal} and \cite[corollaire~7.4]{EtudeLocale}, the natural morphisms
\[\underset{t>s_{2}}\colim\;\mathcal{H}(b)\langle|T|\leq t\rangle \to \mathcal{O}_{\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{H}(b)}}}(\overline{D}_{b}(s_{2}))\]
and
\[\mathcal{O}_{\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{H}(b)}}}(\overline{D}_{b}(s_{2}))[S]/(P_{0}(S)-T) \to \mathcal{O}_{\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{H}(b)}}}(\overline{D}_{b}(P_{0};s_{2}))\]
are isomorphisms.
Since $\overline{D}_{b}(P_{0};s_{2}) \subset U_{0}\cap X_{b}$, there exists $t>s_{2}$ such that~$H^{-1}$ admits a representative in $\mathcal{H}(b)\langle|T|\leq t\rangle[S]/(P_{0}(S)-T)$. This representative can itself be approximated to arbitrary precision, for the norm $\nm_{b,t,v,\mathrm{r\acute{e}s}}$, by an element of $\mathcal{B}(W)\langle|T|\leq t\rangle[S]/(P_{0}(S)-T)$ for a sufficiently small compact neighbourhood~$W$ of~$b$. Up to shrinking~$V$, we may therefore assume that there exists an element~$K$ of~$\mathcal{B}(V)\langle|T|\leq s_{2}\rangle[S]/(P_{0}(S)-T)$ such that
\begin{equation}
\|K(b)G(b)-\mu_{x}^n\|_{b,s_{2},v,\mathrm{r\acute{e}s}} < \frac{s_{1}^n}{8v^{d-1}}.
\label{eq7}
\end{equation}
We then have
\begin{equation}
\begin{array}{rcl}
\|\mu_{x}^n-P_{0}(b)^n\|_{b,s_{2},v,\mathrm{r\acute{e}s}}&\leq&\|\mu_{x}-P_{0}(b)\|_{b,s_{2},v,\mathrm{r\acute{e}s}} \, \|\sum_{i=0}^{n-1} \mu_{x}^i P_{0}(b)^{n-1-i}\|_{b,s_{2},v,\mathrm{r\acute{e}s}}\\
&\leq&\|\mu_{x}-P_{0}(b)\|_{b,s_{2},v,\mathrm{r\acute{e}s}}\, n\max(\|\mu_{x}\|_{b,s_{2},v,\mathrm{r\acute{e}s}},\|P_{0}(b)\|_{b,s_{2},v,\mathrm{r\acute{e}s}})^{n-1}\\
&\overset{(\ref{eq3})}\leq&n v^{n(d-1)}\|\mu_{x}-P_{0}(b)\|_{b,s_{2},\div}\,\max(\|\mu_{x}\|_{b,s_{2},\div},\|P_{0}(b)\|_{b,s_{2},\div})^{n-1}\\
&\overset{\eqref{eq:LN+2}}\leq&n v^{n(d-1)}\|\mu_{x}-P_{0}(b)\|_{b,s_{2},\div}\,(N+1)^{n-1}\\
&\overset{\eqref{eq1}}\leq &\frac{s_{1}^nv^{-d+1}}{8}.
\end{array}
\label{eq9}
\end{equation}
We deduce that
\begin{equation}
\|K(b)G(b)-P_{0}(b)^n\|_{b,s_{2},v,\mathrm{r\acute{e}s}}\overset{(\ref{eq7})+(\ref{eq9})}\le \frac{v^{-d+1}s_{1}^n}{4}.
\label{eq10}
\end{equation}
Up to shrinking~$V$ further, we may therefore assume that
\begin{equation}
\|KG-P_{0}^n\|_{V,s_{2},v,\mathrm{r\acute{e}s}}\leq\frac{v^{-d+1}s_{1}^n}{4}.
\label{eq12}
\end{equation}
Let $s\in (s_{1},s_{2})$. Every element~$\varphi$ of $\mathcal{B}(V)\langle |T|\leq s\rangle [S]/(P_{0}(S)-T)$ admits a unique representative of the form
\[\varphi=\sum_{i=0}^{d-1}(\alpha_i(\varphi)T^n+\beta_i(\varphi))S^i,\]
where the~$\alpha_i(\varphi)$ are elements of~$\mathcal{B}(V)\langle|T|\leq s\rangle$ and the~$\beta_i(\varphi)$ are elements of~$\mathcal{B}(V)[T]$ of degree strictly less than~$n$. Set
\[\alpha(\varphi) := \sum_{i=0}^{d-1}\alpha_i(\varphi)S^i \text{ et } \beta(\varphi) := \sum_{i=0}^{d-1} \beta_i(\varphi)S^i.\]
We then have
\[\varphi=\alpha(\varphi)T^n+\beta(\varphi)\]
as well as the following two inequalities:
\begin{equation}\label{eq:alphabeta}
\|\alpha(\varphi)\|_{V,s,\div}\leq s^{-n}\|\varphi\|_{V,s,\div}\text{ et }\|\beta(\varphi)\|_{V,s,\div}\leq \|\varphi\|_{V,s,\div}.
\end{equation}
Note that, in $\mathcal{B}(V)\langle |T|\leq s\rangle [S]/(P_{0}(S)-T)$, we may write~$\beta(\varphi)$ as a polynomial in~$S$ of degree at most~$nd-1$. Conversely, every polynomial in~$S$ of degree strictly less than~$nd$ may be written in the form $\sum_{i=0}^{d-1} b_i S^i$ where the~$b_{i}$ are polynomials in~$T$ of degree strictly less than~$n$.
Let us now consider the endomorphism
\[\fonction{A_{s}}{\mathcal{B}(V)\langle|T|\leq s\rangle[S]/(P_{0}(S)-T)}{\mathcal{B}(V)\langle|T|\leq s\rangle[S]/(P_{0}(S)-T)}{\varphi}{\alpha(\varphi)KG+\beta(\varphi)}.\]
For every~$\varphi\in \mathcal{B}(V)\langle|T|\leq s\rangle[S]/(P_{0}(S)-T)$, we have
\begin{equation}
\begin{array}{rcl}
\|A_{s}(\varphi)-\varphi\|_{V,s,v,\mathrm{r\acute{e}s}}&=&\|\alpha(\varphi)(KG-T^n)\|_{V,s,v,\mathrm{r\acute{e}s}}\\
&\leq&\|\alpha(\varphi)\|_{V,s,v,\mathrm{r\acute{e}s}}\, \|KG-P_{0}^n\|_{V,s,v,\mathrm{r\acute{e}s}}\\
&\overset{(\ref{eq3})+(\ref{eq:alphabeta})}\leq&2v^{d-1}s^{-n}\,\|KG-P_{0}^n\|_{V,s,v,\mathrm{r\acute{e}s}}\, \|\varphi\|_{V,s,v,\mathrm{r\acute{e}s}}.
\end{array}
\label{eq14}
\end{equation}
We thus deduce from inequality (\ref{eq12}) that the norm of the operator~$A_{s}-\mathrm{Id}$ on $\mathcal{B}(V)\langle|T|\leq s\rangle[S]/(P_{0}(S)-T)$ is at most $\frac12 (s_{1}/s)^n \le 1/2 < 1$. In particular, $A_{s}$~is an isomorphism of Banach spaces, its inverse being given by the Neumann series $\sum_{k\ge 0}(\mathrm{Id}-A_{s})^k$; in particular,
\begin{equation}\label{eq:normeAs-1}
\|A_{s}^{-1}\| \le 2.
\end{equation}
The choices made ensure that $\overline{D}_{V}(s_{2})$ satisfies condition $(\mathcal{B} N_{P_{0}(S)-T})$. By Corollary~\ref{coro:BNPST}, the natural morphism
\[ \mathcal{B}(\overline{D}_{V}(s_{2}))[S]/(P_{0}(S) - T) \to \mathcal{B}(\overline{D}_{V}(P_{0};s_{2}))\]
is an isomorphism, and there exist $w\ge v$ and $C' \in \ensuremath{\mathbf{R}}_{>0}$ such that, for every $F \in \mathcal{B}(\overline{D}_{V}(s_{2}))[S]/(P_{0}(S)-T)$, we have
\begin{equation*}
\|F\|_{\overline{D}_{V}(s_{2}),w,\mathrm{r\acute{e}s}}\leq C' \|F\|_{\overline{D}_{V}(P_{0};s_{2})}
\end{equation*}
and hence
\begin{equation*}
\|F\|_{\overline{D}_{V}(s_{2}),\div}\leq C'' \|F\|_{\overline{D}_{V}(P_{0};s_{2})}
\end{equation*}
for some constant $C'' \in \ensuremath{\mathbf{R}}_{>0}$, by Proposition~\ref{prop:equivalencedivres}. By Proposition~\ref{prop:restrictionserie}, every element~$f$ of~$\mathcal{B}(\overline{D}_{V}(s_{2}))$ naturally induces by restriction an element of $\mathcal{B}(V)\ensuremath{\langle} |T|\le s\ensuremath{\rangle}$ and we have
\begin{equation*}
\|f\|_{V,s} \le \frac{s_{2}}{s_{2}-s}\, \|f\|_{\overline{D}_{V}(s_{2})}
\end{equation*}
We deduce that every element~$F$ of $\mathcal{B}(\overline{D}_{V}(s_{2}))[S]/(P_{0}(S) - T)$ naturally induces by restriction an element of $\mathcal{B}(V)\ensuremath{\langle} |T|\le s\ensuremath{\rangle}[S]/(P_{0}(S)-T)$, and that we have
\begin{equation}\label{eq4}
\|F\|_{V,s,\div}\leq C'' \max \left(1, \big(\frac{s_{2}}{s_{2}-s}\big)^{d-1} \right) \|F\|_{\overline{D}_{V}(P_{0};s_{2})}.
\end{equation}
Soit $F \in \overline{D}_{V}(s_{2})$. Les \'el\'ements $Q' := \alpha(A_{s}^{-1}(F)) K$ et $R' := \beta(A_{s}^{-1}(F))$ de $\mathcal{B}(V)\langle|T|\leq s\rangle[S]/(P_{0}(S)-T)$ satisfont alors l'\'egalit\'e $F = QG+R$ dans $\mathcal{B}(V)\langle|T|\leq s\rangle[S]/(P_{0}(S)-T)$. En outre, en combinant \eqref{eq:alphabeta}, \eqref{eq:normeAs-1} et~\eqref{eq4}, on montre qu'il existe une constante $C'_{s} \in \ensuremath{\mathbf{R}}_{>0}$, ind\'ependante de~$F$, telle que
\[\|Q'\|_{V,s,\div} \leq C'_{s}\, \|F\|_{\overline{D}_V(P_{0};s_{2})} \textrm{ et } \|R'\|_{V,s,\div}\leq C'\,\|F\|_{\overline{D}_V(P_{0};s_{2})}.\]
Les choix effectu\'es assurent que $\overline{D}_{V}(s)$ satisfait la condition $(\mathcal{B} N_{P_{0}(S)-T})$. On en d\'eduit que les images respectives~$Q$ et~$R$ de~$Q'$ et~$R'$ dans $\mathcal{B}(\overline{D}_{V}(s))[S]/(P_{0}(S)-T) \simeq \mathcal{B}(\overline{D}_{V}(P_{0};s))$ satisfont les conditions de l'\'enonc\'e, y compris la partie sur l'\'egalit\'e des normes, pour une constante $C_{s} \in \ensuremath{\mathbf{R}}_{>0}$ bien choisie.
Il reste \`a d\'emontrer l'unicit\'e. Pour cela, soient deux \'el\'ements~$Q_{1}$ et~$R_{1}$ de $\mathcal{B}(\overline{D}_{V}(P_{0};s))$ satisfaisant les conditions de l'\'enonc\'e. On peut les consid\'erer dans $\mathcal{B}(\overline{D}_{V}(s))[S]/(P_{0}(S)-T)$ et donc dans $\mathcal{B}(V)\ensuremath{\langle}|T|\le s'\ensuremath{\rangle}[S]/(P_{0}(S)-T)$ pour $s' \in (s_{1},s)$. La bijectivit\'e de l'op\'erateur~$A_{s'}$ assure que les images de~$Q_{1}$ et~$R_{1}$ dans $\mathcal{B}(V)\ensuremath{\langle}|T|\le s'\ensuremath{\rangle}[S]/(P_{0}(S)-T)$ co\"incident avec celles de~$Q$ et~$R$. On peut donc identifier les d\'eveloppements \`a coefficients dans~$\mathcal{B}(V)$ de repr\'esentants de degr\'e inf\'erieur \`a~$d-1$ et en d\'eduire que $Q_{1}=Q$ et $R_{1}=R$.
\end{proof}
\begin{rema}\label{rem:kappaseparable}
Il d\'ecoule de la preuve que, si $\mu_{x}$ est \`a coefficients dans~$\kappa(b)$ et est s\'eparable, on peut choisir $P_{0} = \mu_{x}$.
\end{rema}
D\'emontrons \`a pr\'esent quelques cons\'equences de ce r\'esultat. Soit $G(T) \in \mathcal{A}[T]$ un polyn\^ome unitaire de degr\'e~$d \ge 1$. Fixons un point~$b$ de~$B$.
Supposons que $G(b)(T)$ est une puissance d'un polyn\^ome irr\'eductible sur~$\mathcal{H}(b)[T]$. Le lieu d'annulation de~$G(b)$ d\'etermine alors un unique point rigide~$x$ de~$X_{b}$ et il existe un entier $n\in \ensuremath{\mathbf{N}}$ tel que $G(b) = \mu_{x}^n$. Posons $d:=\deg(x)$.
\begin{coro}\label{cor:divisionGirrednormes}
Pla\c cons-nous dans le cadre pr\'ec\'edent. Si~$\mathcal{H}(b)$ est trivialement valu\'e et~$\mu_{x}$ ins\'eparable, supposons que $b$~est ultram\'etrique typique.
Soit~$U$ un voisinage de~$x$ dans~$X$. Alors, il existe un voisinage compact spectralement convexe~$V$ de~$b$ dans~$B$, un polyn\^ome $G_{0} \in \mathcal{B}(V)[T]$ unitaire non constant et des nombres r\'eels $s_{1},s_{2} \in \ensuremath{\mathbf{R}}_{>0}$ avec $s_{1} < s_{2}$ tels que l'on ait $\overline{D}_{V}(G_{0};s_{2}) \subset U$ et, en posant, pour $s\in [s_{1},s_{2}]$,
\[\psi_{s} \colon
\begin{array}{ccc}
\mathcal{B}(V)^{nd} &\to& \mathcal{B}(\overline{D}_{V}(G_{0};s)),\\
(a_{0},\dotsc,a_{nd-1}) & \mapsto & \sum_{i=0}^{nd-1} a_{i}\, T^i
\end{array}\]
les propri\'et\'es suivantes soient v\'erifi\'ees~:
\begin{enumerate}[i)]
\item il existe $K \in \ensuremath{\mathbf{R}}_{>0}$ tel que, pour tout $A \in \mathcal{B}(V)^{nd}$ et tout $s\in [s_{1},s_{2}]$, on ait $\|\psi_{s}(A)\|_{\overline{D}_{V}(G_{0};s)} \le K\, \|A\|_{V,\infty}$;
\item pour tout $s\in (s_{1},s_{2})$ et tout $B \in \mathcal{B}(\overline{D}_{V}(G_{0};s_{2}))$, il existe un unique $A \in \mathcal{B}(V)^{nd}$ tel que $\psi_{s}(A) = B \mod G$. En outre, il existe $K'_{s} \in \ensuremath{\mathbf{R}}_{>0}$ (ind\'ependant de~$B$) tel que $\|A\|_{V,\infty} \le K'_{s} \, \|B\|_{\overline{D}_{V}(G_{0};s_{2})}$.
\end{enumerate}
En particulier, le morphisme naturel
\[\mathcal{O}_{b}[T]/(G(T)) \to \mathcal{O}_{x}/(G(T))\]
est un isomorphisme.
\end{coro}
\begin{proof}
Soit~$U_{0}$ un voisinage compact de~$x$ dans~$U$ tel que $G \in \mathcal{B}(U_{0})$. Soit~$\mathcal{V}_b$ une base de voisinages compacts spectralement convexes de~$b$ dans~$B$. Si $\mathcal{H}(b)$ est trivialement valu\'e et $\mu_{x}$ ins\'eparable, supposons de plus que tous les \'el\'ements de~$\mathcal{V}_{b}$ sont inclus dans~$B_{\mathrm{um}}$ et poss\`edent un bord analytique fini.
$\bullet$ Supposons que $x\ne 0$.
Posons $d:=\deg(x)$ et \'ecrivons $\mu_{x}(T) = T^d + \sum_{i=0}^{d-1} a_{i} T^i$. Posons $I := \{i \in \cn{0}{d-1} : a_{i} \ne 0\}$. Puisque $x\ne 0$, on a $I \ne \emptyset$. Soit $\eta \in \intoo{0,\min_{i \in I}(|a_{i}|)}$.
Le th\'eor\`eme~\ref{weierstrassam} appliqu\'e avec $x$, $G$ et~$\eta$ fournit alors un voisinage~$V_{0}$ de~$b$ dans~$B$, un polyn\^ome $P_{0} \in \mathcal{B}(V_{0})[T]$ unitaire non constant et des nombres r\'eels $s_{1},s_{2},\ensuremath{\varepsilon} \in \ensuremath{\mathbf{R}}_{>0}$. Puisque $\|P_{0}(b) - \mu_{x}\| \le \eta$, $P_{0}(b)$ n'est pas une puissance de~$T$. Soit $V$ un \'el\'ement de~$\mathcal{V}_{b}$ contenu dans~$V_{0}$. Soit $G_{0} \in \mathcal{B}(V)[T]$ tel que $\|G_{0} - P_{0}\|_{V,\infty} \le \ensuremath{\varepsilon}$. Quitte \`a restreindre~$V$, on peut supposer que, pour tout $b'\in V$, $G_{0}(b')$ n'est pas une puissance de~$T$. La propri\'et\'e~i) est imm\'ediate avec $K = nd\,\max\big(1, \|T\|_{\overline{D}_{V}(G_{0};s_{2})}^{nd-1}\big)$. La propri\'et\'e~ii) d\'ecoule de la majoration de la norme du reste dans la conclusion du th\'eor\`eme~\ref{weierstrassam} et du corollaire~\ref{coro:minorationDPsA}.
$\bullet$ Supposons que $x=0$.
D'apr\`es la remarque~\ref{rem:kappaseparable}, on peut choisir $P_{0} = T$ dans l'\'enonc\'e du th\'eor\`eme~\ref{weierstrassam}. On choisit \'egalement $G_{0}=T$. Le m\^eme raisonnement que pr\'ec\'edemment s'applique alors, en utilisant la remarque~\ref{rem:Tn} au lieu du corollaire~\ref{coro:minorationDPsA}.
\end{proof}
Des arguments standards permettent de d\'emontrer un r\'esultat similaire sans hypoth\`ese sur~$G(b)(T)$.
Nous renvoyons \`a la preuve de \cite[th\'eor\`eme~5.5.3]{A1Z} pour les d\'etails (dans un cadre o\`u les normes ne sont pas prises en compte).
Notons $G(b)(T) = \prod_{j=1}^t h_{j}(T)^{n_{j}}$ la d\'ecomposition en produit de polyn\^omes irr\'eductibles et unitaires de~$G(b)(T)$ dans~$\mathcal{H}(b)[T]$. D'apr\`es~\cite[corollaire~5.4]{EtudeLocale}, il existe $H_{1},\dotsc,H_{t} \in \mathcal{O}_{B,b}[T]$ unitaires tels que
\begin{enumerate}[i)]
\item $G = \prod_{j=1}^t H_{j}$ dans $\mathcal{O}_{B,b}[T]$;
\item pour tout $j\in \cn{1}{t}$, on a $H_{j}(b) = h_{j}^{n_{j}}$.
\end{enumerate}
Il s'agit de la condition~$(D_{G})$ de \cite[d\'efinition~4.6]{EtudeLocale}, elle-m\^eme li\'ee \`a la condition $(I_{G})$ de \cite[d\'efinition~5.3.5]{A1Z} qui est pr\'esente dans l'\'enonc\'e de \cite[th\'eor\`eme~5.5.3]{A1Z}. Le r\'esultat pr\'ec\'edent est cons\'equence de l'hens\'elianit\'e du corps~$\kappa(b)$ (\cf~\cite[th\'eor\`eme~5.2]{EtudeLocale}).
Pour tout $j\in \cn{1}{t}$, notons $z_{j}$ le point rigide de~$X_{b}$ d\'etermin\'e par l'annulation de~$h_{j}$.
\begin{coro}\label{cor:divisionGnormes}\index{Theoreme@Th\'eor\`eme!de division de Weierstra\ss!g\'en\'eralis\'e avec contr\^ole des normes}
Pla\c cons-nous dans le cadre pr\'ec\'edent.
Si~$\mathcal{H}(b)$ est trivialement valu\'e et l'un des~$h_{j}$ est ins\'eparable, supposons que $b$~est ultram\'etrique typique.
Pour tout $j \in \cn{1}{t}$, soit~$U_{j}$ un voisinage de~$z_{j}$ dans~$X$ sur lequel $H_{j}$~est d\'efini. Alors, il existe un voisinage compact spectralement convexe~$V$ de~$b$ dans~$B$, des polyn\^omes $H_{1,0},\dotsc,H_{t,0} \in \mathcal{B}(V)[T]$ unitaires non constants et des nombres r\'eels $s_{1,1},s_{1,2},\dotsc,s_{t,1},s_{t,2} \in \ensuremath{\mathbf{R}}_{>0}$ avec $s_{j,1} < s_{j,2}$ pour tout~$j$ tels qu'on ait $\overline{D}_{V}(H_{j,0};s_{j,2}) \subset U_{j}$ pour tout~$j$ et, en posant, pour $s = (s_{1},\dotsc,s_{t})\in \prod_{j=1}^t [s_{j,1},s_{j,2}]$,
\[\psi_{s} \colon
\begin{array}{ccc}
\mathcal{B}(V)^d &\to& \prod_{j=1}^t \mathcal{B}(\overline{D}_{V}(H_{j,0};s_{j})),\\
(a_{0},\dotsc,a_{d-1}) & \mapsto & (\sum_{i=0}^{d-1} a_{i}\, T^i,\dotsc,\sum_{i=0}^{d-1} a_{i}\, T^i)
\end{array}\]
les propri\'et\'es suivantes soient v\'erifi\'ees~:
\begin{enumerate}[i)]
\item il existe $K \in \ensuremath{\mathbf{R}}_{>0}$ tel que, pour tout $A \in \mathcal{B}(V)^d$ et tout $s \in \prod_{j=1}^t [s_{j,1},s_{j,2}]$, on ait
\[\|\psi_{s}(A)\|_{s} \le K\, \|A\|_{V,\infty},\]
o\`u $\nm_{s}$ d\'esigne la norme infini sur $\prod_{j=1}^t\mathcal{B}(\overline{D}_{V}(H_{j,0};s_{j}))$~;
\item pour tout $s \in \prod_{j=1}^t (s_{j,1},s_{j,2})$ et tout $B \in \prod_{j=1}^t \mathcal{B}(\overline{D}_{V}(H_{j,0};s_{j,2}))$, il existe un unique $A \in \mathcal{B}(V)^d$ tel que $\psi_{s}(A) = B$ dans $\prod_{j=1}^t \mathcal{B}(\overline{D}_{V}(H_{j,0};s_{j}))/(H_{j})$. En outre, il existe $K'_{s} \in \ensuremath{\mathbf{R}}_{>0}$ (ind\'ependant de~$B$) tel que
\[\|A\|_{V,\infty} \le K'_{s} \, \|B\|_{s_{2}},\]
o\`u $\nm_{s_{2}}$ d\'esigne la norme infini sur $\prod_{j=1}^t \mathcal{B}(\overline{D}_{V}(H_{j,0};s_{j,2}))$.
\end{enumerate}
En particulier, le morphisme naturel
\[\mathcal{O}_{b}[T]/(G(T)) \to \prod_{j=1}^t\mathcal{O}_{z_{j}}/(H_{j})\]
est un isomorphisme.
\end{coro}
Appliquons finalement ces r\'esultats au cas d'un endomorphisme polynomial de~$X$, de fa\c con \`a obtenir une version norm\'ee du th\'eor\`eme~\ref{thm:isolemniscate}. Soit $P\in \mathcal{A}[T]$ un polyn\^ome unitaire de degr\'e $d\ge 1$. Le morphisme
\[\begin{array}{ccc}
\mathcal{A}[T] &\to& \mathcal{A}[T]\\
T & \mapsto & P(T)
\end{array}\]
induit un morphisme $\varphi_{P} \colon X\to X$. \index{Endomorphisme de la droite}
Remarquons que si $U$ et $W$ sont des parties compactes de~$X$ telles que $\varphi_{P}(U) \subset W$, alors nous avons un morphisme naturel $\mathcal{B}(W) \to \mathcal{B}(U)$ envoyant~$T$ sur~$P(T)$.
\begin{coro}\label{cor:phinormes} \index{Endomorphisme de la droite}
Pla\c cons-nous dans le cadre pr\'ec\'edent. Si~$\mathcal{H}(b)$ est de caract\'eristique non nulle et trivialement valu\'e, supposons que $b$~est ultram\'etrique typique.
Soit~$y \in X$. Notons $\varphi_{P}^{-1}(y) = \{x_{1},\dotsc,x_{t}\}$. Soit~$W$ un voisinage de~$y$ dans~$X$. Pour $j\in \cn{1}{t}$, soit $U_{j}$ un voisinage de~$x_{j}$ dans~$\varphi_{P}^{-1}(W)$. Alors, il existe un voisinage compact spectralement convexe~$V$ de~$y$ dans~$W$ et, pour tout $j\in \cn{1}{t}$, un voisinage spectralement convexe~$U'_{j}$ de~$x_{j}$ dans $\varphi_{P}^{-1}(V) \cap U_{j}$ tels que,
en d\'efinissant
\[\chi_{W} \colon
\begin{array}{ccc}
\mathcal{B}(W)^d &\to& \mathcal{B}(U_{1}) \times \dotsb \times \mathcal{B}(U_{t})\\
(a_{0}(T),\dotsc,a_{d-1}(T)) & \mapsto & \displaystyle \big(\sum_{i=0}^{d-1} \varphi_{P}^\sharp(a_{i})\, T^i,\dotsc,\sum_{i=0}^{d-1} \varphi_{P}^\sharp(a_{i})\, T^i\big)
\end{array}\]
et $\chi_{V} \colon \mathcal{B}(V)^d \to \prod_{j=1}^t \mathcal{B}(U'_{j})$ par la m\^eme formule,
les propri\'et\'es suivantes soient v\'erifi\'ees~:
\begin{enumerate}[i)]
\item il existe $K \in \ensuremath{\mathbf{R}}_{>0}$ tel que, pour tout $A \in \mathcal{B}(W)^d$, on ait $\|\chi_{W}(A)\|_{U} \le K\, \|A\|_{W,\infty}$ et, pour tout $A \in \mathcal{B}(V)^d$, on ait $\|\chi_{V}(A)\|_{U'} \le K\, \|A\|_{V,\infty}$, o\`u~$\nm_{U}$ et~$\nm_{U'}$ d\'esignent respectivement les normes infini sur $\prod_{i=1}^t \mathcal{B}(U_{i})$ et $\prod_{i=1}^t \mathcal{B}(U_{i}')$~;
\item il existe $K' \in \ensuremath{\mathbf{R}}_{>0}$ tel que, pour tout $B \in \prod_{j=1}^t \mathcal{B}(U_{j})$, il existe un unique $A \in \mathcal{B}(V)^d$ tel que $\chi_{V}(A) = B$ (dans $\prod_{j=1}^t \mathcal{B}(U'_{j})$) et $\|A\|_{V,\infty} \le K' \, \|B\|_{U}$.
\end{enumerate}
En particulier, le morphisme $\chi \colon \mathcal{O}_{y}^d \to \prod_{j=1}^t \mathcal{O}_{x_{j}}$ d\'efini par la m\^eme formule que~$\chi_{W}$ est un isomorphisme.
\end{coro}
\begin{proof}
On se ram\`ene au corollaire~\ref{cor:divisionGnormes} en \'ecrivant comme d'habitude le morphisme~$\varphi_{P}$ comme celui provenant de la composition
\[\mathcal{A}[T] \to \mathcal{A}[T,S]/(P(S) - T) \xrightarrow[]{\sim} \mathcal{A}[S].\]
Notons $p_{1}, p_{2}$ les projections de~$\E{2}{\mathcal{A}}$ sur $\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}}$. Notons~$\sigma$ la section de~$p_{2}$ envoyant $x$ sur $(P(x),x)$.
\[\begin{tikzcd}
& \E{2}{\mathcal{A}}\ar[ld, "p_{1}"] \ar[rd, "p_2"'] &\\
\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}} &&\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}} \ar[ll, "\varphi_{P}"'] \ar[ul, bend right, "\sigma"']
\end{tikzcd}\]
Notons $Z$ le ferm\'e de Zariski de~$\E{2}{\mathcal{A}}$ d\'efini par l'\'equation $P(S)-T = 0$. La projection~$p_{2}$ induit un isomorphisme entre~$Z$ et~$\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}}$ dont l'inverse est la section~$\sigma$.
D'apr\`es la proposition~\ref{prop:typiqueAn}, si~$b$ est ultram\'etrique typique, il en va de m\^eme pour~$y$ et les~$\sigma(x_{j})$. Nous pouvons donc appliquer le corollaire~\ref{cor:divisionGnormes} en rempla\c cant $\mathcal{A}$ par $\mathcal{B}(V_{0})$, pour un voisinage compact spectralement convexe~$V_{0}$ de~$y$ dans~$W$, $T$ par~$S$, $G$ par $P(S)-T$ et chaque $U_{j}$ par un voisinage de~$\sigma(x_{j})$ dans $p_{2}^{-1}(U_{j})$ sur lequel~$H_{j}$ est d\'efini. On obtient l'\'enonc\'e d\'esir\'e en posant $U'_{j} = p_{2}(\overline{D}_{V}(H_{j,0},s_{j}))$ et en utilisant le fait que tirer en arri\`ere une fonction (ici par~$\sigma$ ou~$p_{2}$) diminue sa norme.
\end{proof}
\section{Fermeture des id\'eaux du faisceau structural}\label{sec:fermeture}
Dans cette section, nous d\'emontrons un r\'esultat de fermeture pour les faisceaux d'id\'eaux du faisceau structural d'un espace analytique.
Nous commencerons par \'etudier quelques cas particuliers~: celui de l'id\'eal nul et celui de l'id\'eal engendr\'e par une puissance d'une uniformisante (en un point situ\'e au-dessus d'un point dont l'anneau local est fortement de valuation discr\`ete).
Soit $(\mathcal{A},\nm)$ un anneau de Banach. Posons $B := \mathcal{M}(\mathcal{A})$.
\begin{lemm}\label{fortement_de_valuation_discr\`ete}
\index{Anneau!fortement de valuation discrete@fortement de valuation discr\`ete}
\index{Ideal@Id\'eal!B-fortement de type fini@$\mathcal{B}$-fortement de type fini}
Soient~$b \in B$. Supposons que~$\mathcal{O}_{B,b}$ est un anneau fortement de valuation discr\`ete relativement \`a une base fine~$\mathcal{V}$ de voisinages compacts spectralement convexes de~$b$ dans~$B$. Soit~$\varpi_{b}$ une uniformisante de~$\mathcal{O}_{B,b}$. Pour tout~$v\in\ensuremath{\mathbf{N}}$, l'id\'eal~$(\varpi_b^v)\subset\mathcal{O}_{B,b}$ est~$\mathcal{B}$-fortement de type fini relativement \`a~$\mathcal{V}$.
\end{lemm}
\begin{proof}
Nous allons d\'emontrer le r\'esultat par r\'ecurrence sur~$v$. Le cas $v=0$ est imm\'ediat.
Supposons avoir d\'emontr\'e le r\'esultat pour $v\ge 0$. Soient~$U$ un voisinage compact de~$b$ et~$V$ un \'el\'ement de~$\mathcal{V}$ tel que $V \subset \mathring U$. Puisque~$\mathcal{V}$ est une base fine de voisinages, il existe un \'el\'ement~$V_1$ de~$\mathcal{V}$ tel que $V_{1} \subset \mathring U$ et $V \subset \mathring V_1$. Soit $f \in \mathcal{B}(U)$ dont l'image dans~$\mathcal{O}_{B,b}$ appartient \`a l'id\'eal~$(\varpi_b^{v+1})$.
Par d\'efinition d'anneau fortement de valuation discr\`ete, il existe $g \in \mathcal{B}(V_1)$ tel que $f = g \varpi_b$ sur~$\mathcal{B}(V_1)$ et $\|g\|_{V_1}\leq K_{V_1,U}\,\|f\|_U$, o\`u~$K_{V_1,U}$ est la constante intervenant dans la d\'efinition d'anneau fortement de valuation discr\`ete. De plus, puisque~$\mathcal{O}_{B,b}$ est int\`egre, le germe au voisinage de~$b$ correspondant \`a~$g$ est un multiple de~$\varpi_b^{v}$.
Par hypoth\`ese de r\'ecurrence, il existe $h \in \mathcal{B}(V)$ telle que~$h\varpi_b^{v}=g$ dans~$\mathcal{B}(V)$ et $\|h\|_{V}\leq K_{V,V_1,v} \,\|g\|_{V_1}$ o\`u~$K_{V,V_1,v}$ est la constante intervenant dans la d\'efinition d'id\'eal~$\mathcal{B}$-fortement de type fini de~$(\varpi_b^{v})$. Ainsi, on a l'\'egalit\'e~$h\varpi_{b}^{v + 1}=f$ dans~$\mathcal{B}(V)$ et l'in\'egalit\'e~$\|h\|_{V}\leq K_{V,V_1,v}\, K_{V_1,U}\,\|f\|_U$.
\end{proof}
\begin{lemm}\label{division_anneau_valuation_discr\`ete}
Soit~$b$ un point d\'ecent de~$B$.
Supposons que $\mathcal{O}_{B,b}$ est un anneau fortement de valuation discr\`ete. Soit~$\varpi_b$ une uniformisante de $\mathcal{O}_{B,b}$. Soient~$x$ un point rigide \'epais de~$\E{n}{\mathcal{A}}$ au-dessus de~$b$ et $v\in \ensuremath{\mathbf{N}}$.
Alors, pour tout voisinage compact~$V$ de~$x$ dans~$\E{n}{\mathcal{A}}$, il existe un voisinage compact spectralement convexe~$V'$ de~$x$ dans~$\mathring{V}$ et une constante~$K_{V',V,v} \in \ensuremath{\mathbf{R}}_{>0}$ tels que, pour tout $f \in \mathcal{B}(V)$ dont l'image dans $\mathcal{O}_{\ensuremath{\mathbf{A}}^n_\mathcal{A},x}$ appartient \`a $(\varpi_b^v)$, il existe $g \in \mathcal{B}(V')$ tel que
\begin{enumerate}[i)]
\item $f=\varpi_b^vg$ dans $\mathcal{B}(V')$ ;
\item $\|g\|_{V'}\leq K_{V',V,v}\,\|f\|_V$.
\end{enumerate}
\end{lemm}
\begin{proof}
Nous allons d\'emontrer ce r\'esultat par r\'ecurrence sur~$n$. Le cas~$n=0$ est donn\'e par le lemme \ref{fortement_de_valuation_discr\`ete}.
Soit~$n\ge 1$ et supposons que le r\'esultat est satisfait pour~$\E{n-1}{\mathcal{A}}$. Notons~$x_{n-1}$ la projection de~$x$ sur ses~$n-1$ premi\`eres coordonn\'ees.
Commen\c cons par d\'emontrer l'\'enonc\'e dans le cas o\`u~$x$ est le point~0 au-dessus du point~$x_{n-1}$, que nous noterons~$0_{x_{n-1}}$. Soient~$V$ un voisinage compact de~$x$ dans~$\E{n}{\mathcal{A}}$ et $f \in \mathcal{B}(V)$ dont l'image dans $\mathcal{O}_{0_{x_{n-1}}}$ appartienne \`a $(\varpi_b^v)$. Il existe $g\in\mathcal{O}_{0_{x_{n-1}}}$ tel que~$f=\varpi_b^vg$ dans~$\mathcal{O}_{0_{x_{n-1}}}$.
Le point~$0_{x_{n-1}}$ admet une base de voisinages de la forme~$\overline{D}_W(t)$, o\`u~$W$ parcourt une base de voisinages de~$x_{n-1}$. Soient~$W$ un voisinage compact spectralement convexe de~$x_{n-1}$ et~$t \in \ensuremath{\mathbf{R}}_{>0}$ tels que $\overline{D}_W(t)$ soit contenu dans~$\mathring{V}$, les fonctions~$f$, $g$ et~$\varpi_b$ soient~$\mathcal{B}$-d\'efinies sur~$\overline{D}_W(t)$ et $f=\varpi_b^vg$ dans~$\mathcal{B}(\overline{D}_W(t))$. Soit $s \in \intoo{0,t}$. D'apr\`es la proposition~\ref{prop:restrictionserie}, on a
\[\|f\|_{W,s}\leq \frac{t}{t-s}\, \|f\|_{\overline{D}_W(t)} \textrm{ et }\|g\|_{W,s}\leq \frac{t}{t-s}\,\|g\|_{\overline{D}_W(t)},\]
o\`u les normes \`a gauche des in\'egalit\'es sont les normes de~$f$ et~$g$ vues comme des \'el\'ements $\sum_{i\ge0}\alpha_iT^i$ et $\sum_{i\ge0}\beta_iT^i$ de $\mathcal{B}(W)\langle |T|\leq s\rangle$.
Nous avons
\[\sum_{i\ge0}\alpha_iT^i=\varpi_b^v \sum_{i\ge0}\beta_iT^i \textrm{ dans } \mathcal{B}(W)\langle |T|\leq s\rangle,\]
ce qui implique que, pour tout~$i \ge 0$,~$\alpha_i=\varpi_b^v\beta_i$.
Par hypoth\`ese de r\'ecurrence, il existe un voisinage spectralement convexe~$W'$ de~$x_{n-1}$ dans~$\mathring{W}$, une constante $K_{W',W,v} \in \ensuremath{\mathbf{R}}_{>0}$ et, pour tout $i\ge 0$, un \'el\'ement~$\beta_{i,v}$ de~$\mathcal{B}(W')$ tels que
\begin{enumerate}[i)]
\item pour tout $i\ge 0$, $\|\beta_{i,v}\|_{W'}\leq K_{W',W,v}\, \|\alpha_i\|_W$ ;
\item $\sum_{i\ge0}\alpha_iT^i=\varpi_b^{v}\big(\sum_{i\ge0}\beta_{i,v}\, T^i\big)$ dans~$\mathcal{B}(W')\langle |T|\leq s\rangle$.
\end{enumerate}
Le voisinage $V' := \overline{D}_{W'}(s)$ satisfait donc les propri\'et\'es de l'\'enonc\'e.
\smallbreak
Passons maintenant au cas o\`u $x$ est un point rigide \'epais quelconque au-dessus de~$x_{n-1}$. Soit~$P(S)\in\kappa(x_{n-1})[S]$ le polyn\^ome minimal du point~$x$ au-dessus de~$x_{n-1}$. On note~$\tilde{P}\in\mathcal{O}_{x_{n-1}}[S]$ un rel\`evement unitaire de~$P$.
Soit~$\tilde{U}$ un voisinage compact spectralement convexe de~$x_{n-1}$ tel que chacun des coefficients de~$\tilde{P}$ soit~$\mathcal{B}$-d\'efini sur~$\tilde{U}$. Notons $\varphi_P \colon \E{1}{\mathcal{B}(\tilde U)}\to \E{1}{\mathcal{B}(\tilde{U})}$ le morphisme d'espaces analytiques associ\'e au polyn\^ome~$\tilde P\in\mathcal{B}(\tilde{U})[T]$. Il envoie le point~$x$ sur le point~$0_{x_{n-1}}$ et nous avons $\varphi_{P}^{-1}(0_{x_{n-1}}) = \{x\}$. Le r\'esultat se d\'eduit alors du cas du point~$0_{x_{n-1}}$ et du corollaire~\ref{cor:phinormes}.
\end{proof}
La proposition suivante montre que, sous certaines conditions, la propri\'et\'e que l'id\'eal nul de l'anneau local en un point soit~$\mathcal{B}$-fortement de type fini se transf\`ere aux points de la fibre.
\begin{prop}\label{prolongement_purement_localement_transcendant}
Soit~$b \in B$. Soit~$\mathcal{V}_{b}$ une base fine de voisinages compacts spectralement convexes de~$b$ dans~$B$. Si~$\mathcal{H}(b)$ est de caract\'eristique non nulle et trivialement valu\'e, supposons que tout \'el\'ement de~$\mathcal{V}_{b}$ poss\`ede un bord analytique fini. Supposons que~$\mathcal{O}_{B,b}$ soit un corps fort ou un anneau fortement de valuation discr\`ete relativement \`a~$\mathcal{V}_b$ et que l'id\'eal nul de~$\mathcal{O}_{B,b}$ soit $\mathcal{B}$-fortement de type fini relativement \`a~$\mathcal{V}_b$.
Soit~$x$ un point de~$\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}}$ au-dessus de~$b$. Alors, il existe une base fine de voisinages compacts spectralement convexes de~$x$ dans~$\E{1}{\mathcal{A}}$ relativement \`a laquelle l'id\'eal nul de~$\mathcal{O}_{\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}},x}$ est $\mathcal{B}$-fortement de type fini.
\end{prop}
\begin{proof}
Nous supposerons que~$\mathcal{O}_{B,b}$ est un anneau fortement de valuation discr\`ete d'uniformisante~$\varpi_{b}$.
Le cas des corps forts se traite de mani\`ere similaire (et plus simple).
Soit~$\mathcal{V}_{x}$ une base de voisinages de~$x$ satisfaisant les conclusions du corollaire~\ref{cor:BVfineNPST} i), ii). Soit~$U$ un voisinage compact de~$x$ et $f$ un \'el\'ement de~$\mathcal{B}(U)$ dont l'image dans~$\mathcal{O}_{x}$ est nulle. Soit~$\overline{C}_V(P,r,s)$ un \'el\'ement de~$\mathcal{V}_x$ contenu dans~$\mathring U$. Les propri\'et\'es de~$\mathcal{V}_{x}$ assurent qu'il existe $r',s'\in \ensuremath{\mathbf{R}}_{\ge 0}$ avec $r'\prec r$ et $s'>s$ et un \'el\'ement~$V'$ de~$\mathcal{V}_{b}$ contenant~$V$ tels que $\overline{C}_{V'}(P,r',s')$ appartienne \`a~$\mathcal{V}_{x}$ et que l'on ait
\[\overline{C}_{V}(P,r,s)\subset\mathring{\overbrace{\overline{C}_{V'}(P,r',s')}}\subset \overline{C}_{V'}(P,r',s')\subset\mathring U.\]
On va montrer que~$f$ est nulle sur~$\overline{C}_V(P,r,s)$. Puisque $\overline{C}_{b}(P,r',s')$ est connexe, le principe du prolongement analytique assure que~$f$ y est nulle. D'apr\`es \cite[corollaire~9.8]{EtudeLocale}, $f$ est multiple de~$\varpi_{b}$ sur~$\mathcal{B}(\overline{C}_{V'}(P,r',s'))$. En r\'ep\'etant cet argument (en partant d'un $\overline{C}_{V''}(P,r'',s'')$ l\'eg\`erement plus grand), on montre que, pour tout~$n\in\ensuremath{\mathbf{N}}$, $f$~est multiple de~$\varpi_{b}^n$ dans~$\mathcal{B}(\overline{C}_{V'}(P,r',s'))$. La seconde partie de l'\'enonc\'e de \cite[corollaire~9.8]{EtudeLocale} assure alors que $f$ est nulle au voisinage de~$\overline{C}_{b}(P,r',s')$.
Le morphisme naturel
\[\mathcal{B}(\overline{C}_{V'}(r',s'))[S]/(P(S)-T)\to\mathcal{B}(\overline{C}_{V'}(P;r',s'))\]
est un isomorphisme. En utilisant la proposition~\ref{prop:restrictionserie}, on en d\'eduit un morphisme injectif naturel
\[\mathcal{B}(\overline{C}_{V'}(P,r',s'))\to \mathcal{B}(V')\langle r\leq|T|\leq s\rangle[S]/(P(S)-T).\]
L'image de~$f$ par ce morphisme poss\`ede un repr\'esentant de la forme $\sum_{i=0}^{d-1}\alpha_iS^i$, o\`u~$d$ est le degr\'e du polyn\^ome~$P$ et, pour tout $i \in \cn{0}{d-1}$, $\alpha_i = \sum_{k=-\infty}^{+\infty}a_{i,k}T^k$ est un \'el\'ement de~$\mathcal{B}(V')\langle r\leq|T|\leq s\rangle$. Puisque~$f$ est nulle au voisinage de~$\overline{C}_{b}(P,r',s')$, pour tous $i \in \cn{0}{d-1}$ et $k \in \ensuremath{\mathbf{Z}}$, $a_{i,k}$ est nul au voisinage de~$b$, et donc sur~$V'$ car l'id\'eal nul de~$\mathcal{O}_{B,b}$ est $\mathcal{B}$-fortement de type fini relativement \`a~$\mathcal{V}_b$. Le r\'esultat s'ensuit.
\end{proof}
On peut d\'emontrer un \'enonc\'e similaire sans hypoth\`eses sur~$\mathcal{O}_{B,b}$ dans le cas des points rigides \'epais.
\begin{prop}\label{prolongement_rigide}
Soit~$b \in B$. Soit~$\mathcal{V}_{b}$ une base fine de voisinages compacts spectralement convexes de~$b$ dans~$B$. Supposons que l'id\'eal nul de~$\mathcal{O}_{B,b}$ soit $\mathcal{B}$-fortement de type fini relativement \`a~$\mathcal{V}_b$.
Soit~$x$ un point de~$\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}}$ rigide \'epais au-dessus de~$b$. Si~$\mu_{\kappa,x}$ est ins\'eparable et si $\mathcal{H}(b)$ est trivialement valu\'e, supposons que tout \'el\'ement de~$\mathcal{V}_{b}$ poss\`ede un bord analytique fini. Alors, il existe une base fine de voisinages compacts spectralement convexes de~$x$ dans~$\E{1}{\mathcal{A}}$ relativement \`a laquelle l'id\'eal nul de~$\mathcal{O}_{\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}},x}$ est $\mathcal{B}$-fortement de type fini.
\end{prop}
\begin{proof}
Posons $d :=\deg(\mu_{\kappa,x})$. Choisissons un relev\'e unitaire~$P_{0}$ de~$\mu_{\kappa,x}$ dans~$\mathcal{O}_{B,b}[T]$. On va distinguer deux cas.
\medbreak
$\bullet$ Supposons que $\mu_{\kappa,x}$ est s\'eparable ou que tout \'el\'ement de~$\mathcal{V}_{b}$ poss\`ede un bord analytique fini.
Soit~$\mathcal{V}_{x}$ une base de voisinages de~$x$ satisfaisant les conclusions du corollaire~\ref{cor:BVfineNPSTrigideepais} iii).
Soient~$U$ un voisinage compact de~$x$, $\overline{D}_V(P_{0},s)$ un \'el\'ement de~$\mathcal{V}_x$ inclus dans~$\mathring U$ et $f$ un \'el\'ement de $\mathcal{B}(U)$ dont l'image dans~$\mathcal{O}_x$ est nulle.
Les propri\'et\'es de~$\mathcal{V}_x$ assurent que le morphisme naturel
\[\mathcal{B}(V)\langle |T|\leq s\rangle[S]/(P_{0}(S)-T) \to \mathcal{B}(\overline{D}_{V}(P_{0},s))\]
est un isomorphisme. L'ant\'ec\'edent de~$f$ poss\`ede un unique repr\'esentant de la forme $\sum_{i=0}^{d-1}\alpha_iS^i$, avec $\alpha_i = \sum_{k=0}^{+\infty}a_{i,k}T^k\in\mathcal{B}(V)\langle|T|\leq s\rangle$.
Pour tous~$i$ et~$k$, le germe de~$a_{i,k}$ en~$b$ est nul. En effet, puisque le germe de~$f$ en~$x$ est nul, il existe~$V'\subset V$ et~$s'\leq s$ tels que $\overline{D}_{V'}(P_{0},s')$ appartienne \`a~$\mathcal{V}_{x}$ et $f$~soit nulle sur $\overline{D}_{V'}(P_{0},s')$. Le r\'esultat d\'ecoule alors du fait que le morphisme naturel
\[\mathcal{B}(V')\langle |T|\leq s'\rangle[S]/(P_{0}(S)-T) \to \mathcal{B}(\overline{D}_{V'}(P_{0},s'))\]
est un isomorphisme.
Puisque l'id\'eal nul de~$\mathcal{O}_{B,b}$ est $\mathcal{B}$-fortement de type fini relativement \`a~$\mathcal{V}_b$, pour tous~$i$ et~$k$, $a_{i,k}$ est nul sur~$V$. Le r\'esultat s'ensuit.
\medbreak
$\bullet$ Supposons que~$\mathcal{H}(b)$ est ultram\'etrique et non trivialement valu\'e.
Soit~$\mathcal{V}_{x}$ une base de voisinages de~$x$ satisfaisant les conclusions du corollaire~\ref{cor:BVfineNPSTrigideepais} iii'). Soient~$U$ un voisinage compact de~$x$, $\overline{D}_V(P_{0},s)$ un \'el\'ement de~$\mathcal{V}_x$ inclus dans~$\mathring U$ et $f$ un \'el\'ement de $\mathcal{B}(U)$ dont l'image dans~$\mathcal{O}_x$ est nulle. Par hypoth\`ese, il existe $Q \in \mathcal{B}(V)[S]$ unitaire de degr\'e~$d$ et un intervalle ouvert~$I$ de~$\ensuremath{\mathbf{R}}_{>0}$ contenant~$s$ tels que, pour tout voisinage compact spectralement convexe~$V'$ de~$b$ dans~$V$ et tout $s' \in I$, on ait $\overline{D}_{V'}(Q,s') = \overline{D}_{V'}(P_{0},s')$ et le disque $\overline{D}_{V'}(s')$ (avec coordonn\'ee~$T$) satisfasse la propri\'et\'e~$(\mathcal{B} N_{Q(S)-T})$.
Puisque~$\mathcal{H}(b)$ n'est pas trivialement valu\'e, on peut trouver un polyn\^ome \`a coefficients dans~$\kappa(b)$ unitaire de degr\'e~$d$ et s\'eparable qui soit arbitrairement proche de~$P_{0}(b)$. Par cons\'equent, il existe un voisinage compact spectralement convexe~$V'$ de~$b$ dans~$V$, $Q' \in \mathcal{B}(V')[S]$ unitaire de degr\'e~$d$ tel que $Q'(b)$ soit s\'eparable et $s_{0}, s_{1} \in \ensuremath{\mathbf{R}}_{>0}$ v\'erifiant $s_{0} < s < s_{1}$ tels que, pour tout $s' \in \intff{s_{0},s_{1}}$, on ait $\overline{D}_{V'}(Q',s') = \overline{D}_{V'}(P_{0},s')$. On peut \'egalement supposer que $f$ est nulle sur $\overline{D}_{V'}(Q',s_{0})$, que $\overline{D}_{V'}(Q',s_{1})$ soit contenu dans~$U$ et que $s_{1}\in I$. D'apr\`es la proposition~\ref{prop:CNBNPST}, ii), on peut supposer que $\overline{D}_{V'}(s_{0})$ et $\overline{D}_{V'}(s_{1})$ satisfont la condition $(\mathcal{B} N_{Q'(S)-T})$.
En utilisant le m\^eme argument que dans le cas pr\'ec\'edent, on montre que l'image de~$f$ dans $\overline{D}_{V'}(Q',s_{1})$ est nulle. Par cons\'equent, l'image de~$f$ dans $\overline{D}_{V'}(Q',s) = \overline{D}_{V'}(P_{0},s) = \overline{D}_{V'}(Q,s)$ est nulle. En r\'eutilisant le m\^eme argument, on montre que l'image de~$f$ dans $\overline{D}_{V}(Q,s) = \overline{D}_{V}(P_{0},s)$ est nulle.
\end{proof}
Nous allons maintenant synth\'etiser les propositions \ref{prolongement_purement_localement_transcendant} et \ref{prolongement_rigide} dans le r\'esultat suivant.
\begin{prop}\label{prolongement}\index{Ideal@Id\'eal!B-fortement de type fini@$\mathcal{B}$-fortement de type fini}
Soit~$b$ un point d\'ecent de~$B$.
Supposons que l'id\'eal nul de~$\mathcal{O}_{B,b}$ est $\mathcal{B}$-fortement de type fini. Alors, pour tout point~$x$ de~$\E{n}{\mathcal{A}}$ au-dessus de~$b$, l'id\'eal nul de~$\mathcal{O}_{\E{n}{\mathcal{A}},x}$ est $\mathcal{B}$-fortement de type fini.
\end{prop}
\begin{proof}
Soit~$x$ un point de~$\E{n}{\mathcal{A}}$ au-dessus de~$b$. Quitte \`a \'echanger les coordonn\'ees, on peut supposer qu'il existe~$k\in \cn{0}{n}$ tel que la projection~$x_{k}$ de~$x$ sur les~$k$ premi\`eres coordonn\'ees soit purement localement transcendante au-dessus de~$b$ et tel que~$x$ soit rigide \'epais au-dessus de~$x_k$. Rappelons que, d'apr\`es la proposition~\ref{prop:typiqueAn}, si~$b$ est ultram\'etrique typique, alors tous les points au-dessus de~$b$ le sont \'egalement.
On montre tout d'abord que le point~$x_{k}$ satisfait les conclusions de l'\'enonc\'e \`a l'aide d'une r\'ecurrence utilisant la proposition~\ref{prolongement_purement_localement_transcendant}. On passe de~$x_{k}$ \`a~$x$ par une r\'ecurrence utilisant la proposition~\ref{prolongement_rigide}.
\end{proof}
Rappelons un r\'esultat technique qui nous sera utile.
\begin{lemm}[\protect{\cite[lemme~9.14]{EtudeLocale}}]\label{restriction_fibre_avd}\index{Anneau!fortement de valuation discrete@fortement de valuation discr\`ete}\index{Point!rigide epais@rigide \'epais}
Soit~$b$ un point d\'ecent de~$B$ tel que~$\mathcal{O}_{B,b}$ soit un anneau fortement de valuation discr\`ete d'uniformisante~$\varpi_{b}$. Soit $x\in \E{n}{\mathcal{A}}$ un point rigide \'epais au-dessus de~$b$. Alors, pour tout \'el\'ement non nul~$f$ de~$\mathcal{O}_{\E{n}{\mathcal{A}},x}$, il existe un unique couple $(v,g) \in \ensuremath{\mathbf{N}} \times \mathcal{O}_{\E{n}{\mathcal{A}},x}$ v\'erifiant les propri\'et\'es suivantes~:
\begin{enumerate}[i)]
\item $f = \varpi_{b}^v g$ dans~$\mathcal{O}_{\E{n}{\mathcal{A}},x}$~;
\item la restriction de~$g$ \`a~$\E{n}{\mathcal{H}(b)}$ n'est pas nulle.
\end{enumerate}
\qed
\end{lemm}
Adaptons maintenant ce r\'esultat (et sa preuve) au cas d'un corps fort.
\begin{lemm}\label{restriction_fibre}\index{Corps!fort}\index{Point!rigide epais@rigide \'epais}
Soit~$b$ un point d\'ecent de~$B$ tel que~$\mathcal{O}_{B,b}$ soit un corps fort. Soit $x\in \E{n}{\mathcal{A}}$ un point rigide \'epais au-dessus de~$b$. Alors, pour tout \'el\'ement non nul~$f$ de~$\mathcal{O}_{\E{n}{\mathcal{A}},x}$, la restriction de~$f$ \`a~$\E{n}{\mathcal{H}(b)}$ n'est pas nulle.
\end{lemm}
\begin{proof}
Notons $T_{1},\dotsc,T_{n}$ les coordonn\'ees sur~$\E{n}{\mathcal{A}}$. Quitte \`a les \'echanger, on peut supposer que la projection~$x_{k}$ de~$x$ sur les~$k$ premi\`eres coordonn\'ees est purement localement transcendante au-dessus de~$b$ et que~$x$ est rigide \'epais au-dessus de~$x_k$. On va d\'emontrer l'\'enonc\'e par r\'ecurrence sur l'entier~$n-k$.
Si~$n-k=0$, alors~$x$ est purement localement transcendant sur~$b$ et, d'apr\`es le th\'eor\`eme~\ref{rigide}, $\mathcal{O}_{\ensuremath{\mathbf{A}}^n_\mathcal{A},x}$ est un corps. Par cons\'equent, si~$f$ est non nul dans~$\mathcal{O}_{\E{n}{\mathcal{A}},x}$, alors $f(x)$ est non nul. Le r\'esultat s'ensuit.
Soit $l\in \ensuremath{\mathbf{N}}$ et supposons avoir d\'emontr\'e le r\'esultat pour $n-k = l$. Supposons que $n-k = l+1$. Notons~$x_{n-1}$ la projection de~$x$ sur les~$n-1$ premi\`eres coordonn\'ees. Par hypoth\`ese, $x$ est rigide \'epais au-dessus de~$x_{n-1}$. Consid\'erons son polyn\^ome minimal $\mu_{\kappa,x} \in \kappa(x_{n-1})[T_{n}]$ sur~$\kappa(x_{n-1})$ et choisissons $P \in \mathcal{O}_{\E{n-1}{\mathcal{A}},x_{n-1}}[T_n]$ un relev\'e unitaire de ce polyn\^ome. Notons $0_{x_{n-1}}$ le point de~$\E{n}{\mathcal{A}}$ au-dessus de~$x_{n-1}$ d\'efini par $T_{n} = 0$. D'apr\`es le th\'eor\`eme~\ref{thm:isolemniscate} et le lemme~\ref{lem:polmin}, le morphisme naturel
\[ \mathcal{O}_{\E{n}{\mathcal{A}},0_{x_{n-1}}}[S]/(P(S)-T_n)\to \mathcal{O}_{\E{n}{\mathcal{A}},x}, \]
est un isomorphisme. Il suffit donc de d\'emontrer le r\'esultat pour~$0_{x_{n-1}}$. Or, d'apr\`es la proposition~\ref{prop:disqueglobal}, tout \'el\'ement de~$\mathcal{O}_{\E{n}{\mathcal{A}},0_{x_{n-1}}}$ poss\`ede un d\'eveloppement en s\'erie enti\`ere \`a coefficients dans~$\mathcal{O}_{\E{n-1}{\mathcal{A}},x_{n-1}}$. On conclut alors par l'hypoth\`ese de r\'ecurrence.
\end{proof}
\`A l'aide du th\'eor\`eme de division de Weierstra\ss{} avec contr\^ole sur les normes que nous avons d\'emontr\'e dans la partie pr\'ec\'edente (\cf~th\'eor\`eme~\ref{weierstrassam}), nous allons pouvoir montrer l'\'enonc\'e de fermeture des id\'eaux souhait\'e. Pour ce faire, nous allons nous placer dans un cadre l\'eg\`erement plus restrictif que celui des anneaux de base de la d\'efinition~\ref{def:basique}.
\begin{defi}\index{Anneau!de base!geometrique@g\'eom\'etrique|textbf}
Soit~$(\mathcal{A},\nm)$ un anneau de Banach. On dit que~$\mathcal{A}$ est un \emph{anneau de base g\'eom\'etrique}
si les conditions suivantes sont satisfaites~:
\begin{enumerate}[i)]
\item $\mathcal{A}$ est un anneau de base tel que, pour tout $b\in B$, les \'el\'ements de~$\mathcal{V}_{b}$ puissent \^etre choisis d'int\'erieur connexe~;
\item $\mathcal{M}(\mathcal{A})$ satisfait le principe du prolongement analytique.
\end{enumerate}
\end{defi}
\begin{exem}\index{Corps!valu\'e}\index{Anneau!des entiers relatifs $\ensuremath{\mathbf{Z}}$}\index{Anneau!des entiers d'un corps de nombres}\index{Corps!hybride}\index{Anneau!de valuation discr\`ete}\index{Anneau!de Dedekind trivialement valu\'e}
Nos exemples usuels \ref{ex:corpsvalue} \`a~\ref{ex:Dedekind}~: les corps valu\'es, l'anneau~$\ensuremath{\mathbf{Z}}$ et les anneaux d'entiers de corps de nombres, les corps hybrides, les anneaux de valuation discr\`ete et les anneaux de Dedekind trivialement valu\'es sont tous des anneaux de base g\'eom\'etriques.
\end{exem}
\begin{lemm}\label{lem:basegeometriqueideaux}\index{Ideal@Id\'eal!B-fortement de type fini@$\mathcal{B}$-fortement de type fini}
Soit~$\mathcal{A}$ un anneau de base g\'eom\'etrique. Alors, pour tout $b\in B$, tout id\'eal de~$\mathcal{O}_{B,b}$ est~$\mathcal{B}$-fortement de type fini relativement \`a~$\mathcal{V}_{b}$.
\end{lemm}
\begin{proof}
Soit~$b\in B$. Supposons que $\mathcal{O}_{B,b}$ est un corps fort. Les id\'eaux de~$\mathcal{O}_{B,b}$ sont $\mathcal{O}_{B,b}$ et~$(0)$. Par hypoth\`ese, il existe une base de voisinages compacts et spectralement convexes~$\mathcal{V}_{b}$ de~$b$ telle que 0~engendre $\mathcal{B}$-fortement l'id\'eal~$(0)$ relativement \`a~$\mathcal{V}_{b}$. Il est imm\'ediat que 1~engendre $\mathcal{B}$-fortement l'id\'eal~$\mathcal{O}_{B,b}$ relativement \`a~$\mathcal{V}_{b}$.
Supposons que $\mathcal{O}_{B,b}$ est un anneau fortement de valuation discr\`ete. Fixons-en une uniformisante~$\varpi$. Les id\'eaux de~$\mathcal{O}_{B,b}$ sont $\mathcal{O}_{B,b}$, $(0)$ et les $(\varpi^n)$ pour $n\in \ensuremath{\mathbf{N}}_{\ge 1}$. Par hypoth\`ese, il existe une base de voisinages compacts, spectralement convexes et d'int\'erieur connexe~$\mathcal{V}_{b}$ de~$b$ telle que $\varpi$~engendre $\mathcal{B}$-fortement l'id\'eal~$(\varpi)$ relativement \`a~$\mathcal{V}_{b}$. D'apr\`es le lemme~\ref{fortement_de_valuation_discr\`ete}, pour tout $n\ge 2$, $\varpi^n$~engendre $\mathcal{B}$-fortement l'id\'eal~$(\varpi^n)$ relativement \`a~$\mathcal{V}_{b}$. Comme pr\'ec\'edemment, il est \'evident que 1~engendre $\mathcal{B}$-fortement l'id\'eal~$\mathcal{O}_{B,b}$ relativement \`a~$\mathcal{V}_{b}$. Il reste \`a traiter le cas de l'id\'eal nul. Puisque les \'el\'ements de~$\mathcal{V}_{b}$ sont d'int\'erieur connexe, le fait que 0~engendre $\mathcal{B}$-fortement l'id\'eal~$(0)$ relativement \`a~$\mathcal{V}_{b}$ d\'ecoule du principe du prolongement analytique.
\end{proof}
Le r\'esultat qui suit g\'en\'eralise \cite[th\'eor\`eme~6.6.19]{A1Z} en dimension quelconque. La preuve que nous proposons est tr\`es largement inspir\'ee de celle de \cite[theorem II.D.2]{Gu-Ro}.
Pour $x \in \E{n}{\mathcal{A}}$, $g=(g_1,\ldots,g_p)\in\mathcal{O}_{x}^p$ et~$V$ voisinage compact de~$x$ dans~$\E{n}{\mathcal{A}}$ sur lequel~$g$ est d\'efinie, on pose
\[\|g\|_V := \max_{1\le i\le p} (\|g_{i}\|_{V}).\]
\begin{theo}\label{fermeture}\index{Ideal@Id\'eal!ferme@ferm\'e}\index{Sous-module|see{Id\'eal}}
Supposons que~$\mathcal{A}$ est un anneau de base g\'eom\'etrique. Soient~$x$ un point de~$\E{n}{\mathcal{A}}$, $p \ge 1$ un entier et $M$~un sous-module de~$\mathcal{O}^p_{x}$ engendr\'e par des \'el\'ements $f_1,\dotsc,f_l$.
Alors, pour tout voisinage compact~$V$ de~$x$ dans~$\E{n}{\mathcal{A}}$, il existe un voisinage compact spectralement convexe~$V'$ de~$x$ dans~$V$ sur lequel les~$f_{i}$ sont $\mathcal{B}$-d\'efinis et une constante $K_{V',V} \in \ensuremath{\mathbf{R}}_{>0}$ v\'erifiant la propri\'et\'e suivante~: pour tout \'el\'ement~$f$ de~$\mathcal{B}(V)^p$ dont l'image dans~$\mathcal{O}_{x}^p$ appartient \`a~$M$, il existe~$a_1,\dotsc,a_{l}\in\mathcal{B}(V')$ tels que
\begin{enumerate}[i)]
\item $\displaystyle f = \sum_{i=1}^l a_i f_i$ dans~$\mathcal{B}(V')^p$ ;
\item pour tout $i\in \cn{1}{l}$, on a $\|a_i\|_{V'}\leq K_{V',V}\,\|f\|_V$.
\end{enumerate}
\end{theo}
\begin{proof}
Remarquons tout d'abord que si l'\'enonc\'e vaut pour l'entier $p=1$ et pour un entier $p = p_{0}\ge 1$ donn\'e, alors il vaut encore pour $p=p_{0}+1$. En effet, consid\'erons un sous-module~$M$ de~$\mathcal{O}^{p_{0}+1}_{x}$ engendr\'e par des \'el\'ements $f_1,\ldots,f_l\in \mathcal{O}^{p_{0}+1}_{x}$. Notons $f'_1,\ldots,f'_l \in \mathcal{O}_{x}^{p_{0}}$ les projections respectives de $f_1,\ldots,f_l$ sur les $p_{0}$~premiers facteurs et~$M'$ le sous-module de~$\mathcal{O}^{p_{0}}_{x}$ engendr\'e par $f'_{1},\dotsc,f'_{l}$. Le noyau $M''$ du morphisme $M \to M'$ s'identifie alors \`a un sous-module de~$\mathcal{O}_{x}$. Le r\'esultat pour le module~$M$ d\'ecoule ais\'ement de celui pour~$M'$ et~$M''$.
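Indiquons, \`a toutes fins utiles, une fa\c con de mener cette r\'eduction~: si $f \in \mathcal{B}(V)^{p_{0}+1}$ a son image dans~$M$, le r\'esultat appliqu\'e \`a~$M'$ fournit, avec contr\^ole des normes, des \'el\'ements $a_{1},\dotsc,a_{l}$ tels que les $p_{0}$~premi\`eres composantes de $f - \sum_{i=1}^l a_{i} f_{i}$ soient nulles~; la derni\`ere composante de cet \'el\'ement d\'efinit un germe appartenant \`a~$M''$, auquel on applique le cas $p=1$ pour un syst\`eme fini de g\'en\'erateurs de~$M''$, que l'on r\'e\'ecrit ensuite en fonction des~$f_{i}$~; le contr\^ole des normes s'obtient par in\'egalit\'e triangulaire.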
Une r\'ecurrence imm\'ediate montre qu'il suffit de traiter le cas o\`u $p=1$, cadre que l'on adopte d\'esormais.
Notons~$b$ la projection de~$x$ sur $B = \mathcal{M}(\mathcal{A})$. Quitte \`a \'echanger les coordonn\'ees, on peut supposer qu'il existe~$k\in\cn{0}{n}$ tel que la projection~$x_k$ de~$x$ sur les~$k$ premi\`eres coordonn\'ees soit purement localement transcendante au-dessus de~$b$ et que~$x$ soit rigide \'epais au-dessus de~$x_k$. D\'emontrons le r\'esultat par r\'ecurrence sur l'entier~$n-k$.
\medbreak
\noindent$\bullet$ \textit{Initialisation de la r\'ecurrence~: $n-k=0$. }
Dans ce cas, d'apr\`es le th\'eor\`eme~\ref{rigide}, l'anneau local~$\mathcal{O}_{x}$ est un corps fort ou un anneau fortement de valuation discr\`ete. Soit~$V$ un voisinage compact de~$x$ dans~$\E{n}{\mathcal{A}}$.
Supposons tout d'abord que~$\mathcal{O}_{x}$ est un corps fort. Si tous les~$f_{i}$ sont nuls, le r\'esultat d\'ecoule de la d\'efinition de corps fort. Sinon, nous pouvons supposer que $f_{1} \ne 0$. Il existe alors un voisinage spectralement convexe~$V'$ de~$x$ dans~$V$ sur lequel $f_{1}$ et~$f_{1}^{-1}$ sont $\mathcal{B}$-d\'efinies. Pour tout $f\in \mathcal{B}(V)$, nous avons alors $f = (f f_{1}^{-1}) \, f_{1}$ dans $\mathcal{B}(V')$ et $\|f f_{1}^{-1}\|_{V'} \le \|f_{1}^{-1}\|_{V'} \, \|f\|_{V}$, ce qui d\'emontre l'\'enonc\'e.
Supposons maintenant que~$\mathcal{O}_{x}$ est un anneau fortement de valuation discr\`ete. Si $M=0$, le r\'esultat d\'ecoule de la proposition~\ref{prolongement}. Si $M = \mathcal{O}_{x}$, l'un des~$f_{i}$ est inversible et le r\'esultat s'obtient comme dans le cas o\`u $\mathcal{O}_{x}$~est un corps fort. Sinon, on peut se ramener au cas o\`u $l=1$ et $f_{1}$ est une puissance non triviale d'une uniformisante de~$\mathcal{O}_{x}$. Le r\'esultat se d\'eduit alors du lemme~\ref{fortement_de_valuation_discr\`ete}.
\medbreak
\noindent$\bullet$ \textit{\'Etape de r\'ecurrence.}
Supposons que le r\'esultat soit satisfait lorsque $n-k=N \ge 0$ (pour $p=1$ et donc pour tout $p\in \ensuremath{\mathbf{N}}_{\ge 1}$ d'apr\`es la remarque pr\'eliminaire) et d\'emontrons-le dans le cas o\`u $n-k=N+1$.
\medbreak
\noindent$-$ \textit{Cas o\`u la restriction de~$f_1$ \`a~$\E{n-k}{\mathscr{H}(x_k)}$ n'est pas nulle}
Soit~$x_{n-1}$ la projection de~$x$ sur les~$n-1$ premi\`eres coordonn\'ees. Puisque $n-k\ge 1$, le point~$x$ est rigide \'epais au-dessus de~$x_{n-1}$. Notons~$P\in\kappa(x_{n-1})[T]$ son polyn\^ome minimal. Soit~$W$ un voisinage spectralement convexe de~$x_{n-1}$ dans~$\E{n-1}{\mathcal{A}}$ sur lequel les coefficients de~$P$ sont $\mathcal{B}$-d\'efinis. Puisque l'\'enonc\'e est local, on peut se placer dans~$\E{1}{\mathcal{B}(W)}$ pour le d\'emontrer.
Soit~$\varphi_P \colon \E{1}{\mathcal{B}(W)} \to \E{1}{\mathcal{B}(W)}$ le morphisme induit par~$P$. En notant $0_{x_{n-1}}$ le point de~$\E{1}{\mathcal{B}(W)}$ appartenant \`a la fibre au-dessus de~$x_{n-1}$ et \'egal \`a~0 dans cette fibre, on a $\varphi_{P}^{-1}(0_{x_{n-1}}) = \{x\}$. Le corollaire~\ref{cor:phinormes} permet de ramener la d\'emonstration du r\'esultat pour le point~$x$ (pour $p=1$) \`a celui pour le point~$0_{x_{n-1}}$ (pour $p$ \'egal au degr\'e de~$P$, et donc pour $p=1$ d'apr\`es la remarque pr\'eliminaire). On peut donc supposer que $x= 0_{x_{n-1}}$.
Par hypoth\`ese, la restriction de~$f_1$ \`a~$\E{n-k}{\mathscr{H}(x_k)}$ n'est pas nulle. Le lemme~\ref{changement_variable} assure que, quitte \`a effectuer un changement de variables, on peut supposer que la restriction de~$f_1$ \`a~$\E{1}{\mathscr{H}(x_{n-1})}$ n'est pas nulle. Notons~$T$ la derni\`ere coordonn\'ee sur~$\E{n}{\mathcal{A}}$. Alors $\mathcal{O}_{\E{1}{\mathscr{H}(x_{n-1})},x}$ est un anneau de valuation discr\`ete d'uniformisante~$T$. Notons~$v$ la valuation de~$f_{1}$.
D'apr\`es le th\'eor\`eme de division de Weierstra\ss{} \ref{weierstrassam} (la version sans normes \cite[th\'eor\`eme 8.3]{EtudeLocale} suffirait), tout \'el\'ement~$f$ de~$\mathcal{O}_{x}$ peut s'\'ecrire de fa\c con unique sous la forme $f = f' f_{1} + \tilde f$ avec $f' \in \mathcal{O}_{x}$ et $\tilde f \in \mathcal{O}_{x_{n-1}}[T]$ de degr\'e inf\'erieur \`a~$v-1$.
Consid\'erons le $\mathcal{O}_{x_{n-1}}$-module $\tilde M := \mathcal{O}_{x_{n-1}}[T]_{\leq v-1}\cap M$, autrement dit le module des restes de~$M$ dans la division de Weierstra\ss{} par~$f_{1}$.
Par noetherianit\'e, $\tilde M$ est de type fini. Fixons des g\'en\'erateurs $\ti g_{1},\dotsc, \ti g_{m}$. Pour tout $i\in\cn{1}{m}$, il existe $a_{i,1},\dotsc,a_{i,l} \in \mathcal{O}_{x}$ tels que $\ti g_{i} = \sum_{j=1}^l a_{i,j}f_j$ dans~$\mathcal{O}_{x}$.
Soit~$V$ un voisinage compact de~$x$ dans~$\E{n}{\mathcal{A}}$. D'apr\`es le th\'eor\`eme de division de Weierstra\ss{} \ref{weierstrassam}, il existe un voisinage compact~$W$ de~$x_{n-1}$ dans~$\E{n-1}{\mathcal{A}}$, un polyn\^ome $P \in \mathcal{B}(W)[T]$ et des nombres r\'eels $s,K \in \ensuremath{\mathbf{R}}_{>0}$ tels que $\overline{D}_{W}(P;s)$ soit contenu dans~$V$ et, pour tout \'el\'ement~$f$ de~$\mathcal{B}(V)$, les propri\'et\'es suivantes soient satisfaites~: $f'$ est $\mathcal{B}$-d\'efini sur $\overline{D}_{W}(P;s)$, les coefficients de $\ti f$ sont $\mathcal{B}$-d\'efinis sur~$W$ et on a
\[ \|f'\|_{\overline{D}_{W}(P;s)} \le K\, \|f\|_{V} \textrm{ et } \|\ti f\|_{\overline{D}_{W}(P;s)} \le K\, \|f\|_{V}.\]
Puisque le point~$x$ est le point~$0$ au-dessus de~$x_{n-1}$, il poss\`ede une base de voisinages de la forme $\overline{D}_{W}(T;s)$ et on peut donc supposer que~$P = T$. On peut en outre supposer que tous les~$a_{i,j}$ sont $\mathcal{B}$-d\'efinis sur $\overline{D}_{W}(T;s)$.
Pour toute partie compacte~$U$ de~$\E{n-1}{\mathcal{A}}$ et tout nombre r\'eel $t\in \ensuremath{\mathbf{R}}_{>0}$, consid\'erons l'isomorphisme
\[\fonction{\psi}{\mathcal{B}(U)^{v}}{\mathcal{B}(U)[T]_{\le v-1}}{(h_{0},\dotsc,h_{v-1})}{\displaystyle\sum_{i=0}^{v-1} h_{i} T^i}.\]
Pour tous $h\in \mathcal{B}(U)^{v}$ et $k\in \mathcal{B}(U)[T]_{\le v-1}$, on a
\[\|\psi(h)\|_{\overline{D}_{U}(T;t)} \le v \max(1,t^{v-1}) \|h\|_{U}\]
et
\[\|\psi^{-1}(k)\|_{U} \le \max(1,t^{1-v}) \, \|k\|_{\overline{D}_{U}(T;t)}.\]
La premi\`ere in\'egalit\'e est imm\'ediate. Pour la seconde, on se ram\`ene \`a une situation sur un corps valu\'e complet, auquel cas elle est cons\'equence de la description explicite des normes des disques dans le cas ultram\'etrique ou de la formule de Cauchy dans le cas archim\'edien (\cf~\cite[lemme~2.1.2]{A1Z}).
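\`A titre de v\'erification, et pour la commodit\'e du lecteur, indiquons le calcul terme \`a terme qui donne la premi\`ere in\'egalit\'e~: puisque $|T| \le t$ sur~$\overline{D}_{U}(T;t)$, on a, pour tout $h \in \mathcal{B}(U)^{v}$,
\[\Big\|\sum_{i=0}^{v-1} h_{i}\, T^i\Big\|_{\overline{D}_{U}(T;t)} \le \sum_{i=0}^{v-1} \|h_{i}\|_{U}\, t^{i} \le \Big(\sum_{i=0}^{v-1} t^{i}\Big)\, \|h\|_{U} \le v \max(1,t^{v-1})\, \|h\|_{U}.\]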
L'hypoth\`ese de r\'ecurrence appliqu\'ee au module~$\ti M$, que l'on identifie \`a un sous-module de~$\mathcal{O}_{x_{n-1}}^{v}$ \textit{via} la restriction de~$\psi$, et \`a~$W$ fournit un voisinage~$W'$ de~$x_{n-1}$ dans~$\E{n-1}{\mathcal{A}}$ et une constante~$K'$.
On peut maintenant d\'emontrer le r\'esultat. Soit $f \in \mathcal{B}(V)$ dont l'image dans~$\mathcal{O}_{x}$ appartient \`a~$M$. On peut l'\'ecrire sous la forme $f = f' f_{1} + \sum_{j=1}^m b_{j} \ti g_{j}$ avec $f' \in \mathcal{B}(\overline{D}_{W}(T;s))$, $b_{1},\dotsc,b_{m} \in \mathcal{B}(W')$,
\[\|f'\|_{\overline{D}_{W}(T;s)} \le K \|f\|_{V}\]
et, pour tout $j\in \cn{1}{m}$,
\[\|b_{j}\|_{W'} \le K'\, K \max(1,s^{1-v})\, \|f\|_{V}.\]
On conclut alors en r\'e\'ecrivant~$f$ sous la forme
\[f = \big( f' + \sum_{j=1}^m b_{j} a_{j,1}\big) f_{1} + \sum_{i=2}^l \big(\sum_{j=1}^m b_{j} a_{j,i}\big) f_{i}\]
et en choisissant $V' := \overline{D}_{W'}(T;s)$.
\medbreak
\noindent~$-$ \textit{Cas g\'en\'eral}
Si~$M=0$, le r\'esultat d\'ecoule de la proposition~\ref{prolongement}. On peut donc supposer que $M\ne 0$, et m\^eme que tous les~$f_{i}$ sont non nuls.
Si~$\mathcal{O}_{b}$ est un corps fort, alors le lemme~\ref{restriction_fibre} assure que la restriction de~$f_1$ \`a~$\E{n-k}{\mathscr{H}(x_k)}$ n'est pas nulle et on se ram\`ene au cas pr\'ec\'edent. On peut donc supposer que~$\mathcal{O}_{b}$ est un anneau fortement de valuation discr\`ete. Soit~$\varpi_{b}$ une uniformisante de~$\mathcal{O}_{b}$.
D'apr\`es le lemme~\ref{restriction_fibre_avd}, pour tout $i\in \cn{1}{l}$, il existe $v_{i}\in \ensuremath{\mathbf{N}}$ et $g_{i} \in \mathcal{O}_{x}$ dont la restriction \`a~$\E{n-k}{\mathscr{H}(x_k)}$ n'est pas nulle tels que $f_{i} = \varpi_{b}^{v_{i}} g_{i}$ dans~$\mathcal{O}_{x}$. Quitte \`a permuter les~$f_{i}$, on peut supposer que~$v_{1}$ est le minimum des~$v_{i}$. Le lemme~\ref{division_anneau_valuation_discr\`ete} permet de d\'eduire le r\'esultat pour l'id\'eal~$M$ de~$\mathcal{O}_{x}$ engendr\'e par les~$f_{i}$ du r\'esultat pour l'id\'eal engendr\'e par les $\varpi_{b}^{-v_{1}}f_{i}$. Pour ce dernier, la restriction de la fonction $\varpi_{b}^{-v_{1}}f_{1} = g_{1}$ \`a~$\E{n-k}{\mathscr{H}(x_k)}$ n'est pas nulle et on se ram\`ene de nouveau au cas pr\'ec\'edent.
\end{proof}
\begin{coro}\label{limite}\index{Ideal@Id\'eal!ferme@ferm\'e}
Supposons que~$\mathcal{A}$ est un anneau de base g\'eom\'etrique. Soient~$x$ un point de~$\E{n}{\mathcal{A}}$, $p\ge 1$ un entier et $M$~un sous-module de~$\mathcal{O}^p_{x}$. Soient $V$ un voisinage compact de~$x$ dans~$\E{n}{\mathcal{A}}$ et $(g_n)_{n\in\ensuremath{\mathbf{N}}}$ une suite d'\'el\'ements de~$\mathcal{B}(V)^p$ qui converge vers un \'el\'ement~$g$ de~$\mathcal{B}(V)^p$. Si, pour tout $n\in \ensuremath{\mathbf{N}}$, l'image de~$g_{n}$ dans~$\mathcal{O}_{x}^p$ appartient \`a~$M$, alors l'image de~$g$ dans~$\mathcal{O}_{x}^p$ appartient \`a~$M$.
\end{coro}
\begin{proof}
D'apr\`es le th\'eor\`eme~\ref{rigide}, $\mathcal{O}_{x}$ est noeth\'erien, donc $M$~est de type fini. Soient $f_{1},\dotsc,f_{l}$ des g\'en\'erateurs de~$M$. D'apr\`es le th\'eor\`eme~\ref{fermeture}, il existe un voisinage compact~$V'$ de~$x$ dans~$V$ sur lequel les~$f_{i}$ sont $\mathcal{B}$-d\'efinis et une constante $K \in \ensuremath{\mathbf{R}}_{>0}$ v\'erifiant la propri\'et\'e suivante~: pour tout \'el\'ement~$f$ de~$\mathcal{B}(V)^p$ dont l'image dans~$\mathcal{O}_{x}^p$ appartient \`a~$M$, il existe~$a_1,\dotsc,a_{l}\in\mathcal{B}(V')$ tels que
\begin{enumerate}[i)]
\item $\displaystyle f = \sum_{i=1}^l a_i f_i$ dans~$\mathcal{B}(V')^p$ ;
\item pour tout $i\in \cn{1}{l}$, on a $\|a_i\|_{V'}\leq K\,\|f\|_V$.
\end{enumerate}
Soit~$(g_{n_k})_{k\in\ensuremath{\mathbf{N}}}$ une sous-suite de~$(g_n)_{n\in\ensuremath{\mathbf{N}}}$ telle que, pour tout~$k\in\ensuremath{\mathbf{N}}$, on ait l'in\'egalit\'e~$\|g_{n_{k+1}}-g_{n_k}\|_V\leq 1/2^k$. Pour tout $k\in \ensuremath{\mathbf{N}}$, il existe $a_{1,k},\dotsc,a_{l,k}\in\mathcal{B}(V')$ tels que
\begin{enumerate}[i)]
\item $\displaystyle g_{n_{k+1}} - g_{n_{k}} = \sum_{i=1}^l a_{i,k} f_i$ dans~$\mathcal{B}(V')^p$ ;
\item pour tout $i\in \cn{1}{l}$, on a $\|a_{i,k}\|_{V'}\leq \frac{K}{2^k}$.
\end{enumerate}
Pour tout $i \in \cn{1}{l}$, la suite $(\sum_{j=0}^k a_{i,j})_{k\in \ensuremath{\mathbf{N}}}$ est de Cauchy, et donc convergente. Notons~$s_{i} \in \mathcal{B}(V')$ sa limite. Par sommation t\'elescopique, on a alors
\[g = g_{n_{0}} + \sum_{i=1}^l s_{i} f_i \textrm{ dans } \mathcal{B}(V')^p.\]
Le r\'esultat s'ensuit.
\end{proof}
\chapter[Pr\'eliminaires et rappels]{Pr\'eliminaires et rappels}\label{chap:rappels}
Dans ce chapitre, nous pr\'esentons les bases de la g\'eom\'etrie analytique sur un anneau de Banach telle que d\'evelopp\'ee par V.~Berkovich dans le premier chapitre de son ouvrage fondateur~\cite{Ber1}. Nous rappelons \'egalement quelques r\'esultats locaux d\'emontr\'es par le second auteur dans~\cite{A1Z} et~\cite{EtudeLocale}.
Dans la section~\ref{sec:spectreanalytique}, nous introduisons la d\'efinition de spectre analytique d'un anneau de Banach. Nous en donnons plusieurs exemples fondamentaux, qui nous guideront dans toute la suite de ce texte~: cas des corps valu\'es, de l'anneau des entiers relatifs~$\ensuremath{\mathbf{Z}}$, des anneaux d'entiers de corps de nombres, des corps hybrides, des anneaux de valuation discr\`ete et des anneaux de Dedekind. Nous introduisons ensuite, dans la section~\ref{sec:espaceaffineanalytique}, la d\'efinition d'espace affine analytique sur un anneau de Banach. Il s'agit d'un espace localement annel\'e. Les sections de son faisceau structural sont des fonctions que nous qualifierons d'analytiques.
Le reste du chapitre est constitu\'e, pour la majeure partie, de rappels sur les travaux men\'es par le second auteur dans~\cite{A1Z} et~\cite{EtudeLocale}. Nous commen\c{c}ons par des d\'efinitions et r\'esultats techniques utiles. Dans la section~\ref{sec:spconvexe}, nous introduisons les parties spectralement convexes. Il s'agit de parties qui peuvent \^etre naturellement identifi\'ees \`a des spectres analytiques d'anneaux de Banach et nous les utiliserons constamment dans la suite. La section~\ref{sec:dcl} est consacr\'ee aux disques, couronnes et domaines polynomiaux. Nous y rappelons des r\'esultats sur les fonctions globales sur les disques et les couronnes qui, comme on s'y attend, peuvent s'exprimer \`a l'aide de s\'eries convergentes. La section~\ref{description_locale} contient une description pr\'ecise de bases de voisinages des points de la droite affine analytique sur un anneau de Banach.
La section finale~\ref{sec:resultatslocaux} est plus pouss\'ee. Nous y pr\'esentons quelques outils techniques essentiels pour la suite de notre \'etude, tels les th\'eor\`emes de division et de pr\'eparation de Weierstra\ss. Ils sont valables pour des anneaux de Banach qualifi\'es de basiques, dont nous rappelons la d\'efinition. Indiquons que tous les exemples mentionn\'es plus haut sont basiques. Afin d'obtenir des r\'esultats locaux pr\'ecis, nous distinguons diff\'erents types de points~: points rigides, rigides \'epais ou localement transcendants. Nous concluons en rappelant les r\'esultats principaux de~\cite{EtudeLocale}~: noeth\'erianit\'e et r\'egularit\'e des anneaux locaux, prolongement analytique, coh\'erence du faisceau structural.
\section{Spectre analytique d'un anneau de Banach}\label{sec:spectreanalytique}
\begin{defi}\index{Norme|textbf}
Une \emph{norme} sur un anneau~$\mathcal{A}$ est une application
\[ \nm_{\mathcal{A}} \colon \mathcal{A} \longrightarrow \ensuremath{\mathbf{R}}_{\ge 0}\]
v\'erifiant les propri\'et\'es suivantes~:
\begin{enumerate}[i)]
\item $\forall a,b\in \mathcal{A}$, $\|a+b\|_{\mathcal{A}} \le \|a\|_{\mathcal{A}} + \|b\|_{\mathcal{A}}$ ;
\item $\forall a\in \mathcal{A}$, $\|-a\|_{\mathcal{A}} = \|a\|_{\mathcal{A}}$ ;
\item $\forall a \in \mathcal{A}$, $\|a\|_{\mathcal{A}} = 0 \Longleftrightarrow a=0$.
\end{enumerate}
\end{defi}
\begin{defi}\index{Norme!sous-multiplicative|textbf}
Une norme~$\nm_{\mathcal{A}}$ sur un anneau~$\mathcal{A}$ est dite \emph{sous-multiplicative} si, pour tout ensemble fini~$I$ et toute famille $(a_{i})_{i\in I}$ d'\'el\'ements de~$\mathcal{A}$, on a
\[ \big\| \prod_{i\in I} a_{i} \big\|_{\mathcal{A}} \le \prod_{i\in I} \|a_{i}\|_{\mathcal{A}}.\]
\end{defi}
\begin{rema}
Soit~$\mathcal{A}$ un anneau. Si $\mathcal{A} = 0$, l'unique norme d\'efinie sur~$\mathcal{A}$ est la norme nulle et elle est sous-multiplicative.
Supposons que~$\mathcal{A} \ne 0$. Soit~$\nm_{\mathcal{A}}$ une norme sous-multiplicative sur~$\mathcal{A}$. En appliquant la formule de la d\'efinition avec $I = \emptyset$, on obtient $\|1\|_{\mathcal{A}} \le 1$. En l'appliquant avec $I = \{0,1\}$ et $a_{0} = a_{1} = 1$, on obtient $\|1\|_{\mathcal{A}} \le \|1\|_{\mathcal{A}}^2$, d'o\`u l'on tire, puisque $\|1\|_{\mathcal{A}} \ne 0$ (car $1 \ne 0$ dans~$\mathcal{A}$), $\|1\|_{\mathcal{A}} \ge 1$, puis $\|1\|_{\mathcal{A}} = 1$.
\end{rema}
\begin{defi}\index{Anneau!de Banach|textbf}
Un \emph{anneau de Banach} est un couple $(\mathcal{A},\nm_{\mathcal{A}})$ o\`u $\mathcal{A}$ est un anneau et $\nm_{\mathcal{A}}$ une norme sous-multiplicative sur~$\mathcal{A}$ pour laquelle il est complet.
\end{defi}
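Donnons d\`es \`a pr\'esent un exemple \'el\'ementaire, qui sera repris dans la liste d'exemples ci-dessous~: l'anneau~$\ensuremath{\mathbf{Z}}$, muni de la valeur absolue usuelle, est un anneau de Banach. En effet, cette valeur absolue est multiplicative, donc sous-multiplicative, et toute suite de Cauchy d'entiers est stationnaire, donc convergente.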
Soit~$(\mathcal{A},\|.\|_\mathcal{A})$ un anneau de Banach.
\begin{defi}[\protect{\cite[\S 1.2]{Ber1}}]%
\index{Spectre analytique|textbf}%
\nomenclature[Fa]{$\mathcal{M}(\mathcal{A})$}{spectre analytique de~$\mathcal{A}$}
Le \emph{spectre analytique} de~$(\mathcal{A},\|.\|_\mathcal{A})$ est l'ensemble~$\mathcal{M}(\mathcal{A})$ des semi-normes multiplicatives born\'ees sur~$\mathcal{A}$, c'est-\`a-dire l'ensemble des applications
\[\va \colon \mathcal{A} \to \ensuremath{\mathbf{R}}_{\ge 0}\]
v\'erifiant les propri\'et\'es suivantes~:
\begin{enumerate}[i)]
\item $|0| = 0$ et $|1| = 1$;
\item $\forall a,b \in \mathcal{A},\ |a+b| \le |a| + |b|$ ;
\item $\forall a,b \in \mathcal{A},\ |ab| = |a| \, |b|$ ;
\item $\forall a\in \mathcal{A},\ |a| \le \|a\|_{\mathcal{A}}$.
\end{enumerate}
On munit~$\mathcal{M}(\mathcal{A})$ de la topologie de la convergence simple, c'est-\`a-dire la topologie la plus grossi\`ere telle que, pour tout $a\in \mathcal{A}$, l'application
\[\fonction{a}{\mathcal{M}(\mathcal{A})}{\ensuremath{\mathbf{R}}}{|.|}{|a|}\]
soit continue.
\end{defi}
\begin{theo}[\protect{\cite[theorem~1.2.1]{Ber1}}]\label{th:MAnonvide}
Si $\mathcal{A} \ne 0$, alors $\mathcal{M}(\mathcal{A})$ est compact et non vide.
\qed
\end{theo}
On pense g\'en\'eralement aux \'el\'ements de~$\mathcal{M}(\mathcal{A})$ comme \`a des points d'un espace et on les note~$x$, $y$, etc. On note alors~$\va_{x}$, $\va_{y}$, etc. les semi-normes sur~$\mathcal{A}$ associ\'ees.%
\nomenclature[Fb]{$\va_{x}$}{semi-norme sur~$\mathcal{A}$ associ\'ee au point~$x$ de~$\mathcal{M}(\mathcal{A})$}
\begin{defi}\index{Corps!residuel complete@r\'esiduel compl\'et\'e|textbf}
Soit $x\in \mathcal{M}(\mathcal{A})$. L'ensemble
\[\textrm{Ker}(x) := \{a \in \mathcal{A} \,\colon |a|_{x} =0\}\]
est un id\'eal premier de~$\mathcal{A}$.%
\nomenclature[Fc]{$\textrm{Ker}(x)$}{noyau de~$\va_{x}$}
La semi-norme~$\va_{x}$ induit une valeur absolue sur l'anneau int\`egre $\mathcal{A}/\textrm{Ker}(x)$ et son corps de fractions $\Frac(\mathcal{A}/\textrm{Ker}(x))$. On appelle \emph{corps r\'esiduel compl\'et\'e} de~$x$ le compl\'et\'e
\[ \mathcal{H}(x) := \widehat{\Frac(\mathcal{A}/\textrm{Ker}(x))}.\]
\nomenclature[Fd]{$\mathcal{H}(x)$}{corps r\'esiduel compl\'et\'e de~$x$}
On notera simplement~$\va$ la valeur absolue induite sur~$\mathcal{H}(x)$, cela ne pr\^etant pas \`a confusion.
\end{defi}
Pour tout $x\in \mathcal{M}(\mathcal{A})$, on a, par construction, un morphisme naturel \index{Evaluation@\'Evaluation|textbf}
\[ \mathrm{ev}_{x} \colon \mathcal{A} \to \mathcal{H}(x).\]
\nomenclature[Fe]{$\mathrm{ev}_{x}$}{morphisme d'\'evaluation en~$x$}
Pour $a \in \mathcal{A}$, on pose
\[ a(x) := \mathrm{ev}_{x}(a) \in \mathcal{H}(x).\]
\nomenclature[Ff]{$a(x)$}{\'evaluation en~$x$ d'un \'el\'ement~$a$ de~$\mathcal{A}$}
On a alors
\[ |a(x)| = |a|_{x}.\]
\begin{defi}\index{Norme!spectrale|textbf}\index{Norme!uniforme|textbf}\index{Semi-norme|see{Norme}}\index{Anneau!uniforme|textbf}
On d\'efinit la \emph{semi-norme spectrale}~$\nm_{\mathcal{A},\sp}$ sur~$\mathcal{A}$ associ\'ee \`a~$\nm_{\mathcal{A}}$ par
\[ \forall a\in \mathcal{A},\ \|a\|_{\mathcal{A},\sp} := \inf_{n \ge 1} (\|a^n\|_{\mathcal{A}}^{1/n}).\]
On dit que~$\nm_{\mathcal{A}}$ ou $(\mathcal{A},\nm_{\mathcal{A}})$ est \emph{uniforme} si $\nm_{\mathcal{A}} = \nm_{\mathcal{A},\sp}$.
\end{defi}
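Donnons, \`a titre indicatif, un exemple \'el\'ementaire montrant que la semi-norme spectrale n'est pas une norme en g\'en\'eral. Munissons l'anneau $\ensuremath{\mathbf{C}}[\tau]/(\tau^2)$ de la norme d\'efinie par $\|a+b\tau\| := |a|_{\infty} + |b|_{\infty}$. C'est une norme sous-multiplicative pour laquelle cet anneau est complet et, puisque $\tau^2 = 0$, on a
\[ \inf_{n\ge 1}(\|\tau^n\|^{1/n}) = 0 < 1 = \|\tau\|~;\]
en particulier, cet anneau de Banach n'est pas uniforme.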
\begin{lemm}[\protect{\cite[theorem~1.3.1]{Ber1}}]\label{lem:spuni}
Pour tout $a\in \mathcal{A}$, on a
\[ \|a\|_{\mathcal{A},\sp} = \max_{x\in \mathcal{M}(\mathcal{A})} (|a(x)|).\]
\qed
\end{lemm}
Nous allons maintenant dresser une liste d'exemples \`a laquelle nous nous r\'ef\'ererons constamment par la suite. Ce sont eux qui motivent le d\'eveloppement de la th\'eorie. Introduisons, au pr\'ealable, quelques notations.
\begin{nota}\index{Valeur absolue triviale|textbf}%
\nomenclature[Ba]{$\va_{\infty}$}{Valeur absolue usuelle sur $\ensuremath{\mathbf{Z}}$, $\ensuremath{\mathbf{Q}}$, $\ensuremath{\mathbf{R}}$, $\ensuremath{\mathbf{C}}$, etc.}%
\nomenclature[Bb]{$\va_{p}$}{Valeur absolue $p$-adique normalis\'ee sur $\ensuremath{\mathbf{Z}}$, $\ensuremath{\mathbf{Q}}$, $\ensuremath{\mathbf{Q}}_{p}$, $\ensuremath{\mathbf{C}}_{p}$, etc.}%
\nomenclature[Bc]{$\va_{0}$}{Valeur absolue triviale sur un anneau}
On note~$\va_{\infty}$ la valeur absolue usuelle sur~$\ensuremath{\mathbf{C}}$. On note identiquement sa restriction aux sous-anneaux de~$\ensuremath{\mathbf{C}}$~: $\ensuremath{\mathbf{R}}$, $\ensuremath{\mathbf{Q}}$, $\ensuremath{\mathbf{Z}}$, etc.
Pour tout nombre premier~$p$, on note~$\va_{p}$ la valeur absolue $p$-adique sur~$\ensuremath{\mathbf{Q}}_{p}$ normalis\'ee par la condition $|p|_{p} = p^{-1}$. On note identiquement sa restriction aux sous-anneaux de~$\ensuremath{\mathbf{Q}}_{p}$.
Pour tout anneau~$A$, on note~$\va_{0}$ la \emph{valeur absolue triviale} sur~$A$~:
\[\fonction{\va_{0}}{A}{\ensuremath{\mathbf{R}}_{\ge 0}}{a}{\begin{cases} 0 \textrm{ si } a=0~;\\ 1 \textrm{ sinon.}\end{cases}}\]
\end{nota}
Ajoutons un lemme technique utile.
\begin{lemm}\label{lem:critcomvalabs}
Soient $k$ un corps et $\va$ et $\va'$ deux valeurs absolues sur~$k$. Supposons que~$\va'$ n'est pas triviale et que
\[\forall a\in k,\ |a|' > 1 \implies |a| \ge |a|'.\]
Alors il existe $\ensuremath{\varepsilon}\in\intof{0,1}$ tel que $\va'=\va^\ensuremath{\varepsilon}$.
\end{lemm}
\begin{proof}
Puisque~$\va'$ n'est pas triviale, il existe $a \in k$ tel que $|a|'>1$. Par hypoth\`ese, on a $|a| \ge |a|'$, donc il existe $\ensuremath{\varepsilon}\in\intof{0,1}$ tel que $|a|' = |a|^\ensuremath{\varepsilon}$.
Soit $b\in k^*$. Soient $n\in\ensuremath{\mathbf{Z}}$ et $m\in\ensuremath{\mathbf{N}}_{\ge 1}$ tels que $|b|' > (|a|')^{n/m}$. On a alors $|b^m/a^n|' >1$, donc $|b^m/a^n| >1$, puis $|b| > |a|^{n/m}$ et finalement $|b|^\ensuremath{\varepsilon} > (|a|')^{n/m}$. En utilisant la densit\'e de~$\ensuremath{\mathbf{Q}}$ dans~$\ensuremath{\mathbf{R}}$, on en d\'eduit que $|b|^\ensuremath{\varepsilon} \ge |b|'$. En appliquant ce r\'esultat \`a l'inverse de~$b$, on obtient $|b|^\ensuremath{\varepsilon} \le |b|'$, d'o\`u $|b|^\ensuremath{\varepsilon} = |b|'$.
\end{proof}
\begin{exem}\label{ex:corpsvalue}\index{Corps!valu\'e|textbf}\index{Spectre analytique!exemples|(
Soit~$k$ un corps et~$\va$ une valeur absolue (archim\'edienne ou non) pour laquelle il est complet. Alors $(k,\va)$ est un anneau de Banach et son spectre analytique~$\mathcal{M}(k)$ est r\'eduit \`a un point, correspondant \`a~$\va$.
\end{exem}
\begin{exem}\label{ex:Z}\index{Anneau!des entiers relatifs $\ensuremath{\mathbf{Z}}$|textbf}
L'anneau des entiers relatifs~$\ensuremath{\mathbf{Z}}$ muni de la valeur absolue usuelle~$\va_{\infty}$ est un anneau de Banach. La description de son spectre analytique d\'ecoule du th\'eor\`eme d'Ostrowski. Nous l'explicitons ci-dessous.
Le spectre~$\mathcal{M}(\ensuremath{\mathbf{Z}})$ rev\^et la forme d'une \'etoile~: il poss\`ede un point central~$a_{0}$, qui correspond \`a la valeur absolue triviale~$\va_{0}$, et une infinit\'e de branches, index\'ees par les nombres premiers et la place infinie.%
\nomenclature[Ga]{$a_{0}$}{point de~$\mathcal{M}(\ensuremath{\mathbf{Z}})$ associ\'e \`a la valeur absolue triviale~$\va_{0}$}
Soit~$p$ un nombre premier. D\'efinissons la semi-norme
\[\fonction{\va_{p}^{+\infty}}{\ensuremath{\mathbf{Z}}}{\ensuremath{\mathbf{R}}_{\ge 0}}{n}{\begin{cases} 0 \textrm{ si } n\in p\ensuremath{\mathbf{Z}}~;\\ 1 \textrm{ sinon.}\end{cases}}\]
Pour tout $\ensuremath{\varepsilon}\in\intof{0,+\infty}$, notons~$a_{p}^\ensuremath{\varepsilon}$ le point de~$\mathcal{M}(\ensuremath{\mathbf{Z}})$ associ\'e \`a~$\va_{p}^\ensuremath{\varepsilon}$. Posons $a_{p}^0:= a_{0}$. L'application $\ensuremath{\varepsilon}\in\intff{0,+\infty}\mapsto a_p^\ensuremath{\varepsilon}$ r\'ealise alors un hom\'eomorphisme sur la branche associ\'ee \`a~$p$.%
\nomenclature[Gb]{$a_{p}^\ensuremath{\varepsilon}$}{point de~$\mathcal{M}(\ensuremath{\mathbf{Z}})$ associ\'e \`a la valeur absolue~$\va_{p}^\ensuremath{\varepsilon}$}
Pour tout $\ensuremath{\varepsilon}\in\intof{0,1}$, notons~$a_{\infty}^\ensuremath{\varepsilon}$ le point de~$\mathcal{M}(\ensuremath{\mathbf{Z}})$ associ\'e \`a~$\va_{\infty}^\ensuremath{\varepsilon}$. Posons $a_{\infty}^0:= a_{0}$. L'application $\ensuremath{\varepsilon}\in\intff{0,1}\mapsto a_{\infty}^\ensuremath{\varepsilon}$ r\'ealise alors un hom\'eomorphisme sur la branche associ\'ee \`a la place infinie.%
\nomenclature[Gc]{$a_{\infty}^\ensuremath{\varepsilon}$}{point de~$\mathcal{M}(\ensuremath{\mathbf{Z}})$ associ\'e \`a la valeur absolue~$\va_{\infty}^\ensuremath{\varepsilon}$}
Indiquons \'egalement que tout voisinage de~$a_{0}$ contient enti\`erement toutes les branches \`a l'exception d'un nombre fini.
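Illustrons cette param\'etrisation par un calcul imm\'ediat~: on a par exemple
\[ |12(a_{0})| = 1,\qquad |12(a_{2}^{\ensuremath{\varepsilon}})| = |12|_{2}^{\ensuremath{\varepsilon}} = 2^{-2\ensuremath{\varepsilon}} \textrm{ pour } \ensuremath{\varepsilon}\in\intoo{0,+\infty},\qquad |12(a_{2}^{+\infty})| = 0\]
et $|12(a_{\infty}^{\ensuremath{\varepsilon}})| = 12^{\ensuremath{\varepsilon}}$ pour $\ensuremath{\varepsilon}\in\intof{0,1}$.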
La figure~\ref{fig:MZ} contient une repr\'esentation d'un plongement (non canonique) de~$\mathcal{M}(\ensuremath{\mathbf{Z}})$ dans~$\ensuremath{\mathbf{R}}^2$ respectant la topologie.
\begin{figure}[!h]
\centering
\begin{tikzpicture}
\foreach \x [count=\xi] in {-2,-1,...,17}
\draw (0,0) -- ({10*cos(\x*pi/10 r)/\xi},{10*sin(\x*pi/10 r)/\xi}) ;
\foreach \x [count=\xi] in {-2,-1,...,17}
\fill ({10*cos(\x*pi/10 r)/\xi},{10*sin(\x*pi/10 r)/\xi}) circle ({0.07/sqrt(\xi)}) ;
\draw ({10.5*cos(-pi/5 r)},{10.5*sin(-pi/5 r)}) node{$a_{\infty}$} ;
\fill ({5.5*cos(-pi/5 r)},{5.5*sin(-pi/5 r)}) circle (0.07) ;
\draw ({5.5*cos(-pi/5 r)},{5.5*sin(-pi/5 r)-.1}) node[below]{$a_{\infty}^\ensuremath{\varepsilon}$} ;
\draw ({11*cos(-pi/10 r)/2+.1},{11*sin(-pi/10 r)/2}) node{$a_2^{+\infty}$} ;
\fill ({2.75*cos(-pi/10 r)},{2.75*sin(-pi/10 r)}) circle ({0.07/sqrt(2)}) ;
\draw ({2.75*cos(-pi/10 r)},{2.75*sin(-pi/10 r)-.05}) node[below]{$a_2^\ensuremath{\varepsilon}$} ;
\draw ({12*cos(pi/5 r)/5+.1},{12*sin(pi/5 r)/5+.1}) node{$a_p^{+\infty}$} ;
\end{tikzpicture}
\caption{Le spectre de Berkovich $\mathcal{M}(\ensuremath{\mathbf{Z}})$.}\label{fig:MZ}
\end{figure}
\medbreak
Les parties compactes et connexes de~$\mathcal{M}(\ensuremath{\mathbf{Z}})$ peuvent \'egalement s'exprimer comme des spectres. Plus pr\'ecis\'ement, soit~$V$ une partie compacte et connexe de~$\mathcal{M}(\ensuremath{\mathbf{Z}})$. Notons $S_{V}$ l'ensemble des \'el\'ements de~$\ensuremath{\mathbf{Z}}$ qui ne s'annulent pas sur~$V$. Posons $\mathcal{K}(V) := S_{V}^{-1} \, \ensuremath{\mathbf{Z}}$ et notons~$\mathcal{B}(V)$ le s\'epar\'e compl\'et\'e de~$\mathcal{K}(V)$ pour la semi-norme uniforme~$\nm_{V}$ sur~$V$. (Nous retrouverons ces d\'efinitions plus tard dans un cadre plus g\'en\'eral, \cf~d\'efinition~\ref{def:fracrat} et notation~\ref{nota:BV}). Le spectre $\mathcal{M}(\mathcal{B}(V))$ s'identifie alors naturellement \`a~$V$, d'apr\`es \cite[proposition~3.1.16]{A1Z} (ce qui signifie que le compact~$V$ est spectralement convexe au sens de la d\'efinition~\ref{def:spconvexe}). \index{Partie!spectralement convexe}
Pour proposer un exemple plus concret, consid\'erons la partie de~$\mathcal{M}(\ensuremath{\mathbf{Z}})$ d\'efinie par
\begin{align*}
E &:= \Big\{ x\in \mathcal{M}(\ensuremath{\mathbf{Z}}) : \forall p\in \mathcal{P},\ |p(x)| \ge \frac1p\Big\}\\
& = \intff{a_{0},a_{\infty}} \cup \bigcup_{p \in \mathcal{P}} \intff{a_{0},a_{p}},
\end{align*}
o\`u $\mathcal{P}$ d\'esigne l'ensemble des nombres premiers. On a alors $S_{E} = \ensuremath{\mathbf{Z}} \setminus \{0\}$, $\mathcal{K}(E) = \ensuremath{\mathbf{Q}}$, $\nm_{E} = \max \{ \va_{\infty}, \va_{p} : p\in \mathcal{P}\}$ et $\mathcal{B}(E) = \ensuremath{\mathbf{Q}}$.
L'ensemble~$E$ est repr\'esent\'e \`a la figure~\ref{fig:compactMZ}. Il nous semble m\'eriter consid\'eration et poss\'eder des propri\'et\'es int\'eressantes. On peut, par exemple, remarquer que la s\'erie exponentielle est d\'efinie et convergente dans un disque relatif de rayon strictement positif au-dessus de~$E$.
\begin{figure}[h!]
\centering
\begin{tikzpicture}
\draw (0,0) -- ({10*cos(-2*pi/10 r)},{10*sin(-2*pi/10 r)}) ;
\foreach \x [count=\xi] in {-1,0,...,17}
\draw (0,0) -- ({5.5*cos(\x*pi/10 r)/(\xi+1)},{5.5*sin(\x*pi/10 r)/(\xi+1)}) ;
\foreach \x [count=\xi] in {-1,0,...,17}
\draw [dotted] ({5.5*cos(\x*pi/10 r)/(\xi+1)},{5.5*sin(\x*pi/10 r)/(\xi+1)}) -- ({10*cos(\x*pi/10 r)/(\xi+1)},{10*sin(\x*pi/10 r)/(\xi+1)}) ;
\foreach \x [count=\xi] in {-1,0,...,17}
\fill ({5.5*cos(\x*pi/10 r)/(\xi+1)},{5.5*sin(\x*pi/10 r)/(\xi+1)}) circle ({0.07/sqrt(\xi)}) ;
\draw ({10.5*cos(-pi/5 r)-.35},{10.5*sin(-pi/5 r)+.05}) node{$a_{\infty}$} ;
\fill ({10*cos(-pi/5 r)},{10*sin(-pi/5 r)}) circle (0.07) ;
\fill ({2.75*cos(-pi/10 r)},{2.75*sin(-pi/10 r)}) circle ({0.07/sqrt(2)}) ;
\draw ({2.75*cos(-pi/10 r)},{2.75*sin(-pi/10 r)}) node[below]{$a_2$} ;
\draw ({6*cos(pi/7 r)/5},{6*sin(pi/7 r)/5}) node{\tiny $a_p$} ;
\end{tikzpicture}
\caption{Le compact~$E$ de $\mathcal{M}(\ensuremath{\mathbf{Z}})$.}\label{fig:compactMZ}
\end{figure}
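Esquissons, \`a titre indicatif, une justification de cette derni\`ere affirmation. Pour tout $n\in\ensuremath{\mathbf{N}}$ et tout point~$b$ de~$E$, l'entier~$n!$ ne s'annule pas en~$b$ et l'on a
\[ |n!(b)|^{-1} \le \max_{p\in\mathcal{P}} \big(p^{v_{p}(n!)}\big) \le \max_{p\in\mathcal{P}} \big(p^{n/(p-1)}\big) \le 2^{n},\]
o\`u $v_{p}$ d\'esigne la valuation $p$-adique, les cas du point central et des points archim\'ediens \'etant imm\'ediats. Pour tout $r<1/2$, la s\'erie $\sum_{n\ge 0} T^{n}/n!$ converge donc normalement sur le disque relatif ferm\'e de rayon~$r$ au-dessus de~$E$.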
\end{exem}
\begin{exem}\label{ex:cdn}\index{Anneau!des entiers d'un corps de nombres|textbf}
Soit~$K$ un corps de nombres. Notons~$\Sigma_{\infty}$ l'ensemble des places infinies de~$K$, vues comme classes de conjugaison de plongements complexes. Alors, l'anneau des entiers~$A$ de~$K$ muni de la norme
\[\nm_{A} := \max_{\sigma \in \Sigma_{\infty}} (|\sigma(\wc)|_{\infty}) \]
est un anneau de Banach. Les descriptions et r\'esultats de l'exemple~\ref{ex:Z} valent encore dans ce cadre, \textit{mutatis mutandis}.
En ce qui concerne le spectre~$\mathcal{M}(A)$, par exemple, il poss\`ede un point central~$a_{0}$, correspondant \`a la valeur absolue triviale, et une infinit\'e de branches qui en jaillissent~: une pour chaque id\'eal maximal~$\ensuremath{\mathfrak{m}}$ de~$A$ (dont les points seront not\'es~$a_{\ensuremath{\mathfrak{m}}}^\ensuremath{\varepsilon}$ avec $\ensuremath{\varepsilon} \in [0,+\infty]$) et une pour chaque place infinie~$\sigma$ de~$K$ (dont les points seront not\'es~$a_{\sigma}^\ensuremath{\varepsilon}$ avec $\ensuremath{\varepsilon} \in [0,1]$). Pour plus de pr\'ecisions, nous renvoyons le lecteur \`a~\cite[\S 3.1]{A1Z}.%
\nomenclature[Ha]{$a_{0}$}{point de $\mathcal{M}(A)$ associ\'e \`a la valeur absolue triviale~$\va_{0}$}%
\nomenclature[Hb]{$a_{\ensuremath{\mathfrak{m}}}^\ensuremath{\varepsilon}$}{point de $\mathcal{M}(A)$ associ\'e \`a une valeur absolue $\ensuremath{\mathfrak{m}}$-adique, o\`u $\ensuremath{\mathfrak{m}}$ est un id\'eal maximal de~$A$}%
\nomenclature[Hc]{$a_{\sigma}^\ensuremath{\varepsilon}$}{point de $\mathcal{M}(A)$ associ\'e \`a une valeur absolue $\sigma$-adique, o\`u $\sigma$ est une place infinie de~$K$}
\end{exem}
\begin{exem}\label{ex:corpshybride}%
\index{Corps!hybride|textbf}%
\nomenclature[Bd]{$k^\mathrm{hyb}$}{corps $k$ muni de la norme hybride}
Soit~$k$ un corps muni d'une valeur absolue non triviale~$\va$ (pour laquelle il n'est pas n\'ecessairement complet).
Posons $\nm := \max(\va,\va_{0})$. Alors $(k,\nm)$ est un anneau de Banach. Nous l'appellerons \emph{corps hybride} associ\'e \`a~$k$ et le noterons~$k^\mathrm{hyb}$.
L'application
\[\ensuremath{\varepsilon}\in[0,1]\mapsto \va^\ensuremath{\varepsilon}\in\mathcal{M}(k^\mathrm{hyb})\]
est un hom\'eomorphisme.
Le seul point qui n'est pas imm\'ediat est la surjectivit\'e. Soit~$x$ un point de~$\mathcal{M}(k^\mathrm{hyb})$ associ\'e \`a une valeur absolue~$\va_{x}$ non triviale. Par d\'efinition du spectre, pour tout $a\in k$, on a $|a|_{x} \le |a|$. Le lemme~\ref{lem:critcomvalabs} assure alors que~$x$ appartient bien \`a l'image de l'application ci-dessus.
Cet exemple appara\^it notamment lorsque l'on souhaite \'etudier des d\'eg\'en\'erescences de familles de vari\'et\'es sur~$k$. Il a \'et\'e introduit par V.~Berkovich dans~\cite{BerW0} pour $k=\ensuremath{\mathbf{C}}$.
\end{exem}
\begin{exem}\label{ex:avd}\index{Anneau!de valuation discr\`ete|textbf}
Soit $R$ un anneau de valuation discr\`ete complet et~$v$ la valuation associ\'ee. Notons~$\ensuremath{\mathfrak{m}}$ l'id\'eal maximal de~$R$. Soit $r \in \intoo{0,1}$ et posons $\va := r^{v(\wc)}$. Alors $(R,\va)$ est un anneau de Banach.
D\'efinissons la semi-norme
\[\fonction{\va^{+\infty}}{R}{\ensuremath{\mathbf{R}}_{\ge 0}}{a}{\begin{cases} 0 \textrm{ si } a \in \ensuremath{\mathfrak{m}}~;\\ 1 \textrm{ sinon.}\end{cases}}\]
L'application
\[\ensuremath{\varepsilon}\in[1,+\infty]\mapsto\va^\ensuremath{\varepsilon}\in\mathcal{M}(R)\]
est alors un hom\'eomorphisme.
Le seul point qui n'est pas imm\'ediat est la surjectivit\'e. Soit $x\in\mathcal{M}(R)$. Le noyau~$\textrm{Ker}(x)$ de la semi-norme~$\va_{x}$ associ\'ee \`a~$x$ est un id\'eal premier de~$R$. Il est donc \'egal soit \`a~$(0)$, soit \`a~$\ensuremath{\mathfrak{m}}$.
Supposons que $\textrm{Ker}(x) = (0)$. Alors la semi-norme~$\va_{x}$ est une valeur absolue et elle s'\'etend par multiplicativit\'e en une valeur absolue sur~$\Frac(R)$. Soit $f\in \Frac(R)$ tel que $|f| >1$. Alors, on a $f^{-1} \in R$, donc $|f^{-1}|_{x} \le |f^{-1}|$, d'o\`u $|f|_{x} \ge |f|$. Le lemme~\ref{lem:critcomvalabs} assure alors qu'il existe~$\eta \in \intof{0,1}$ tel que $\va = \va_{x}^{\eta}$.
Supposons que $\textrm{Ker}(x) = \ensuremath{\mathfrak{m}}$. Alors $R/\textrm{Ker}(x)$ est un corps valu\'e dont la valeur absolue est born\'ee par~1. Cette derni\`ere est donc triviale. On en d\'eduit que $\va_{x} = \va^{+\infty}$. Ceci conclut la preuve.
\end{exem}
\begin{exem}\label{ex:Dedekind}\index{Anneau!de Dedekind trivialement valu\'e|textbf}
Soit $R$ un anneau de Dedekind. Munissons-le de la valeur absolue triviale~$\va_{0}$. Alors $(R,\va_{0})$ est un anneau de Banach. Nous le noterons~$R^\mathrm{triv}$.%
\nomenclature[Bca]{$R^\mathrm{triv}$}{Anneau $R$ muni de la valeur absolue triviale}
Soit~$\ensuremath{\mathfrak{p}}$ un id\'eal premier non nul de~$R$. Notons~$v_\ensuremath{\mathfrak{p}}$ la valuation associ\'ee \`a cet id\'eal. Pour tout $r \in \intoo{0,1}$, posons $\va_{\ensuremath{\mathfrak{p}},r} := r^{v_\ensuremath{\mathfrak{p}}(\wc)}$. D\'efinissons \'egalement la semi-norme
\[\fonction{\va_{\ensuremath{\mathfrak{p}},0}}{R}{\ensuremath{\mathbf{R}}_{\ge 0}}{a}{\begin{cases} 0 \textrm{ si } a \in \ensuremath{\mathfrak{p}}~;\\ 1 \textrm{ sinon.}\end{cases}}\]
On a une application injective $\lambda_{\ensuremath{\mathfrak{p}}} \colon r \in \intfo{0,1} \mapsto \va_{\ensuremath{\mathfrak{p}},r}\in\mathcal{M}(R^{\mathrm{triv}})$.
Soit $s \in \intoo{0,1}$. Posons $V_{\ensuremath{\mathfrak{p}},s} := \{ x \in \mathcal{M}(R^\mathrm{triv}) : \forall a \in \ensuremath{\mathfrak{p}}, |a(x)| \le s\}$. Notons~$R_{\ensuremath{\mathfrak{p}},s}$ le compl\'et\'e de~$R_{\ensuremath{\mathfrak{p}}}$ par rapport \`a la valeur absolue~$\va_{\ensuremath{\mathfrak{p}},s}$. Le morphisme born\'e naturel $R\to R_{\ensuremath{\mathfrak{p}},s}$ induit un hom\'eomorphisme $\mu_{\ensuremath{\mathfrak{p}},s} \colon \mathcal{M}(R_{\ensuremath{\mathfrak{p}},s}) \to V_{\ensuremath{\mathfrak{p}},s}$. Seule la surjectivit\'e requiert une preuve. Elle repose sur le fait que toute semi-norme associ\'ee \`a un point de~$V_{\ensuremath{\mathfrak{p}},s}$ est born\'ee par $\va_{\ensuremath{\mathfrak{p}},s}$ et s'\'etend donc \`a~$R_{\ensuremath{\mathfrak{p}},s}$. En utilisant l'exemple~\ref{ex:avd}, on en d\'eduit que l'application~$\lambda_{\ensuremath{\mathfrak{p}}}$ induit un hom\'eomorphisme entre~$[0,s]$ et~$V_{\ensuremath{\mathfrak{p}},s}$.
Posons $U_{\ensuremath{\mathfrak{p}}} := \{ x \in \mathcal{M}(R^\mathrm{triv}) : \forall a \in \ensuremath{\mathfrak{p}}, |a(x)| < 1\}$. Puisque l'id\'eal~$\ensuremath{\mathfrak{p}}$ est de type fini, on a $U_{\ensuremath{\mathfrak{p}}} = \bigcup_{s \in \intoo{0,1}} V_{\ensuremath{\mathfrak{p}},s}$. Le r\'esultat pr\'ec\'edent montre que~$\lambda_{\ensuremath{\mathfrak{p}}}$ induit un hom\'eomorphisme entre $\intfo{0,1}$ et~$U_{\ensuremath{\mathfrak{p}}}$.
Notons $\lambda \colon \bigsqcup_{\ensuremath{\mathfrak{p}} \ne (0)} \intfo{0,1}\to \mathcal{M}(R^\mathrm{triv})$ l'application induite par les~$\lambda_{\ensuremath{\mathfrak{p}}}$. C'est un hom\'eomorphisme sur son image.
Pour tout point~$x$ de~$\mathcal{M}(R^\mathrm{triv})$, $\{a \in R : |a(x)| < 1\}$ est un id\'eal premier de~$R$, non nul si $x$ ne correspond pas \`a~$\va_{0}$. Par cons\'equent, l'image de~$\lambda$ est \'egale \`a $\mathcal{M}(R^\mathrm{triv}) \setminus \{\va_{0}\}$. Puisque $\mathcal{M}(R^\mathrm{triv})$ est compact, il s'ensuit que $\mathcal{M}(R^\mathrm{triv})$ est hom\'eomorphe au compactifi\'e d'Alexandrov de~$\bigsqcup_{\ensuremath{\mathfrak{p}} \ne (0)} \intfo{0,1}$, le point~$\va_{0}$ correspondant au point \`a l'infini.
Dans le cas o\`u $R=\ensuremath{\mathbf{Z}}$, on peut identifier le spectre de $\ensuremath{\mathbf{Z}}^{\mathrm{triv}}$ \`a une partie du spectre de $\ensuremath{\mathbf{Z}}$, comme repr\'esent\'e sur la figure~\ref{fig:MZtriv}. Cette identification reste valable dans le cas o\`u~$R$ est un anneau d'entiers de corps de nombres.
\begin{figure}[!h]
\centering
\begin{tikzpicture}
\draw [dotted] (0,0) -- ({10*cos(-2*pi/10 r)},{10*sin(-2*pi/10 r)}) ;
\foreach \x [count=\xi] in {-1,0,...,17}
\draw (0,0) -- ({10*cos(\x*pi/10 r)/(\xi+1)},{10*sin(\x*pi/10 r)/(\xi+1)}) ;
\foreach \x [count=\xi] in {-1,0,...,17}
\fill ({10*cos(\x*pi/10 r)/(\xi+1)},{10*sin(\x*pi/10 r)/(\xi+1)}) circle ({0.07/sqrt(\xi)}) ;
\draw ({10.5*cos(-pi/5 r)},{10.5*sin(-pi/5 r)}) node{$a_{\infty}$} ;
\draw ({11*cos(-pi/10 r)/2},{11*sin(-pi/10 r)/2}) node{$a_2^{+\infty}$} ;
\draw ({12*cos(pi/5 r)/5+.1},{12*sin(pi/5 r)/5}) node{$a_p^{+\infty}$} ;
\end{tikzpicture}
\caption{Le spectre de Berkovich $\mathcal{M}(\ensuremath{\mathbf{Z}}^\mathrm{triv})$.}\label{fig:MZtriv}
\end{figure}
\end{exem}
\index{Spectre analytique!exemples|)}
Nous allons maintenant formuler des d\'efinitions abstraites inspir\'ees par les caract\'eristiques topologiques des exemples pr\'ec\'edents.
\begin{nota}\label{not:Ix}%
\nomenclature[Fg]{$x^\ensuremath{\varepsilon}$}{point associ\'e \`a la semi-norme~$\va_{x}^\ensuremath{\varepsilon}$ sur~$\mathcal{A}$}%
\nomenclature[Fh]{$I_{x}$}{ensemble des $\ensuremath{\varepsilon} \in \ensuremath{\mathbf{R}}_{>0}$ pour lesquels $x^\ensuremath{\varepsilon}$ est d\'efini}
Soit $x\in \mathcal{M}(\mathcal{A})$. On note
\[ I_{x} := \{ \ensuremath{\varepsilon} \in \ensuremath{\mathbf{R}}_{>0} : \va_{x}^\ensuremath{\varepsilon} \in \mathcal{M}(\mathcal{A})\},\]
c'est-\`a-dire l'ensemble des nombres r\'eels $\ensuremath{\varepsilon}>0$ tels que l'application $\va_{x}^\ensuremath{\varepsilon} \colon \mathcal{A} \to \ensuremath{\mathbf{R}}_{\ge 0}$ soit une semi-norme multiplicative born\'ee sur~$\mathcal{A}$. L'ensemble~$I_{x}$ est un intervalle de~$\ensuremath{\mathbf{R}}_{>0}$.
Pour $\ensuremath{\varepsilon} \in I_{x}$, on note~$x^\ensuremath{\varepsilon}$ le point de~$\mathcal{M}(\mathcal{A})$ associ\'e \`a~$\va_{x}^\ensuremath{\varepsilon}$.
\end{nota}
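\`A titre d'exemple, pour $\mathcal{A} = \ensuremath{\mathbf{Z}}$ muni de~$\va_{\infty}$ (\cf~exemple~\ref{ex:Z}), on a
\[ I_{a_{p}^{1}} = \ensuremath{\mathbf{R}}_{>0} \textrm{ pour tout nombre premier~} p \quad\textrm{et}\quad I_{a_{\infty}^{1}} = \intof{0,1}~;\]
ces intervalles d\'ependent donc du point consid\'er\'e.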
\begin{defi}\label{def:lineaire}\index{Partie!lineaire@lin\'eaire|textbf}\index{Spectre analytique!lineaire@lin\'eaire|textbf}
Une partie~$V$ de~$\mathcal{M}(\mathcal{A})$ est dite \emph{lin\'eaire} s'il existe une partie~$V^{\partial}$ de~$V$, un point~$x$ de~$V$ et un intervalle $I \subset I_{x}$ non r\'eduit \`a un point satisfaisant les propri\'et\'es suivantes~:
\begin{enumerate}[i)]
\item $V^{\partial}$ est contenue dans le bord de~$V$ et $\sharp V^{\partial}\le 2$~;
\item l'application
\[\begin{array}{ccc}
I & \longrightarrow & V\setminus V^{\partial}\\
\ensuremath{\varepsilon} & \longmapsto & x^\ensuremath{\varepsilon}
\end{array}\]
est un hom\'eomorphisme.
\end{enumerate}
Le spectre~$\mathcal{M}(\mathcal{A})$ est dit \emph{lin\'eaire} s'il est une partie lin\'eaire de lui-m\^eme.
\end{defi}
\begin{exem}\index{Corps!hybride}\index{Anneau!de valuation discr\`ete}
Les spectres des corps hybrides (\cf~exemple~\ref{ex:corpshybride}) et des anneaux de valuation discr\`ete (\cf~exemple~\ref{ex:avd}) sont lin\'eaires.
\end{exem}
\begin{defi}\label{def:ostrowski}\index{Partie!d'Ostrowski|textbf}\index{Spectre analytique!d'Ostrowski|textbf}
Une partie~$V$ de~$\mathcal{M}(\mathcal{A})$ est dite \emph{d'Ostrowski} s'il existe un point $a_{0}$ de~$V$, un ensemble d\'enombrable non vide (\'eventuellement fini)~$\Sigma$ et une famille $(V_{\sigma})_{\sigma\in \Sigma}$ de parties disjointes de~$V\setminus \{a_{0}\}$ satisfaisant les propri\'et\'es suivantes~:
\begin{enumerate}[i)]
\item $V = \bigcup_{\sigma \in \Sigma} V_{\sigma} \cup \{a_{0}\}$~;
\item pour tout $\sigma\in \Sigma$, $V_{\sigma}$ et $V_{\sigma} \cup \{a_{0}\}$ sont des parties lin\'eaires de~$\mathcal{M}(\mathcal{A})$~;
\item l'ensemble des parties de la forme
\[\bigcup_{\sigma \in \Sigma'} V'_{\sigma} \cup \bigcup_{\sigma \in \Sigma\setminus\Sigma'} V_{\sigma},\]
o\`u $\Sigma'$ est un sous-ensemble fini de~$\Sigma$ et, pour tout $\sigma \in \Sigma'$, $V'_{\sigma}$ est un voisinage de~$a_{0}$ dans~$V_{\sigma} \cup \{a_{0}\}$, est une base de voisinages de~$a_{0}$ dans~$V$.
\end{enumerate}
Le spectre~$\mathcal{M}(\mathcal{A})$ est dit \emph{d'Ostrowski} s'il est une partie d'Ostrowski de lui-m\^eme.
\end{defi}
\begin{exem}\index{Anneau!des entiers relatifs $\ensuremath{\mathbf{Z}}$}\index{Anneau!des entiers d'un corps de nombres}\index{Anneau!de Dedekind trivialement valu\'e}
Le spectre de~$\ensuremath{\mathbf{Z}}$ et les spectres des anneaux d'entiers de corps de nombres (\cf~exemple~\ref{ex:Z}) sont d'Ostrowski, ainsi que les spectres des anneaux de Dedekind trivialement valu\'es (\cf~exemple~\ref{ex:Dedekind}).
\end{exem}
\section{Espace affine analytique sur un anneau de Banach}\label{sec:espaceaffineanalytique}
On peut adapter la d\'efinition de spectre analytique pour d\'efinir des espaces affines analytiques. Soit~$(\mathcal{A},\|.\|_\mathcal{A})$ un anneau de Banach et soit~$n\in\ensuremath{\mathbf{N}}$.
\begin{defi}[\protect{\cite[definition~1.5.1]{Ber1}}]\index{Espace analytique!affine|see{Espace affine analytique}}\index{Espace affine analytique|textbf}
L'\emph{espace affine analytique} de dimension~$n$ sur~$(\mathcal{A},\|.\|_\mathcal{A})$ est l'ensemble $\E{n}{\mathcal{A}}$ des semi-normes multiplicatives sur $\mathcal{A}[T_{1},\dotsc,T_{n}]$ born\'ees sur~$\mathcal{A}$, c'est-\`a-dire l'ensemble des applications
\[\va \colon \mathcal{A}[T_{1},\dotsc,T_{n}] \to \ensuremath{\mathbf{R}}_{\ge 0}\]
v\'erifiant les propri\'et\'es suivantes~:
\begin{enumerate}[i)]
\item $|0| = 0$ et $|1| = 1$;
\item $\forall P,Q \in \mathcal{A}[T_{1},\dotsc,T_{n}],\ |P+Q| \le |P| + |Q|$ ;
\item $\forall P,Q \in \mathcal{A}[T_{1},\dotsc,T_{n}],\ |PQ| = |P| \, |Q|$ ;
\item $\forall a\in \mathcal{A},\ |a| \le \|a\|_{\mathcal{A}}$.
\end{enumerate}%
\nomenclature[Ia]{$\E{n}{\mathcal{A}}$}{espace affine analytique de dimension~$n$ sur~$\mathcal{A}$}%
\nomenclature[J]{$\E{1}{\mathcal{A}}$}{droite affine analytique sur un anneau de Banach~$\mathcal{A}$}
On munit $\E{n}{\mathcal{A}}$ de la topologie de la convergence simple.
\end{defi}
Comme dans le cas de~$\mathcal{M}(\mathcal{A})$, pour tout point~$x$ de~$\E{n}{\mathcal{A}}$, on note $\va_{x}$ la semi-norme sur $\mathcal{A}[T_{1},\dotsc,T_{n}]$ associ\'ee. On d\'efinit \'egalement un corps r\'esiduel compl\'et\'e~$\mathcal{H}(x)$ et un morphisme d'\'evaluation
\[ \mathrm{ev}_{x} \colon \mathcal{A}[T_{1},\dotsc,T_{n}] \to \mathcal{H}(x).\]
\index{Corps!residuel complete@r\'esiduel compl\'et\'e|textbf}\index{Evaluation@\'Evaluation|textbf}
Pour $f \in \mathcal{A}[T_{1},\dotsc,T_{n}]$, on pose
\[ f(x) := \mathrm{ev}_{x}(f) \in \mathcal{H}(x).\]%
\nomenclature[Ib]{$\va_{x}$}{semi-norme sur~$\mathcal{A}[T_{1},\dotsc,T_{n}]$ associ\'ee au point~$x$ de~$\E{n}{\mathcal{A}}$}
\nomenclature[Iba]{$\mathcal{H}(x)$}{corps r\'esiduel compl\'et\'e de~$x$}%
\nomenclature[Ieva]{$\mathrm{ev}_{x}$}{morphisme d'\'evaluation en~$x$ sur $\mathcal{A}[T_{1},\dotsc,T_{n}]$}%
\nomenclature[Ifxa]{$f(x)$}{\'evaluation en~$x$ d'un \'el\'ement~$f$ de~$\mathcal{A}[T_{1},\dotsc,T_{n}]$}
\begin{defi}\index{Partie!ultrametrique@ultram\'etrique|textbf}\index{Partie!archimedienne@archim\'edienne|textbf}
Soit~$V$ une partie de~$\E{n}{\mathcal{A}}$. On appelle \emph{partie ultram\'etrique} de~$V$ l'ensemble
\[V_{\mathrm{um}} := \{x \in V :\va_{x} \textrm{ est ultram\'etrique}\}.\]
On appelle \emph{partie archim\'edienne} de~$V$ l'ensemble
\[V_{\mathrm{arc}} := \{x \in V :\va_{x} \textrm{ est archim\'edienne}\}.\]
\end{defi}
\nomenclature[Ik]{$V_{\mathrm{um}}$}{partie ultram\'etrique d'une partie $V$ de $\E{n}{\mathcal{A}}$}
\nomenclature[Il]{$V_{\mathrm{arc}}$}{partie archim\'edienne de $V$}
\begin{rema}\label{rem:umfermee}
Un r\'esultat classique assure que
\[V_{\mathrm{um}} \ = \{x\in V:|2(x)|\le 1\}.\]
En particulier, $V_{\mathrm{um}}$ et~$V_{\mathrm{arc}}$ sont respectivement ferm\'ee et ouverte dans~$V$.
\end{rema}
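Rappelons, \`a titre indicatif, l'argument classique. Si~$\va_{x}$ est ultram\'etrique, alors $|2(x)| = |(1+1)(x)| \le 1$. R\'eciproquement, supposons que $|2(x)| \le 1$. En \'ecrivant les entiers en base~$2$, on obtient $|m(x)| \le 1 + \log_{2}(m)$ pour tout $m\ge 1$, puis, en appliquant cette in\'egalit\'e \`a~$m^{j}$ et en faisant tendre~$j$ vers l'infini, $|m(x)| \le 1$. Pour $f,g \in \mathcal{A}[T_{1},\dotsc,T_{n}]$ et tout $j\ge 1$, on a alors
\[ |(f+g)(x)|^{j} = |(f+g)^{j}(x)| \le (j+1)\, \max\big(|f(x)|,|g(x)|\big)^{j},\]
et l'on conclut en faisant tendre~$j$ vers l'infini.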
Les possibilit\'es pour les corps r\'esiduels compl\'et\'es en les points archim\'ediens sont tr\`es limit\'ees. Il s'agit l\`a encore d'un r\'esultat classique (\cf~\cite[VI, \S 6, \no 6, th\'eor\`eme~2]{BourbakiAC57} par exemple). Rappelons que l'on note~$\va_{\infty}$ la valeur absolue usuelle sur~$\ensuremath{\mathbf{R}}$ ou~$\ensuremath{\mathbf{C}}$.
\begin{theo}\label{th:vaarchimedienne}\index{Corps!residuel complete@r\'esiduel compl\'et\'e!archimedien@archim\'edien}
Soit $(k,\va)$ un corps valu\'e archim\'edien complet. Alors, il existe un corps $K$ \'egal \`a~$\ensuremath{\mathbf{R}}$ ou~$\ensuremath{\mathbf{C}}$, un isomorphisme $j \colon k \xrightarrow[]{\sim} K$ et un nombre r\'eel $\ensuremath{\varepsilon} \in \intof{0,1}$ tels que
\[ \forall a \in k,\ |a| = |j(a)|_{\infty}^\ensuremath{\varepsilon}.\]
\end{theo}
\begin{nota}\label{not:epsilon}
\nomenclature[Ig]{$\ensuremath{\varepsilon}(x)$}{pour $x$ archim\'edien, \'el\'ement de~$\intof{0,1}$ tel que $\va_{x} = \va_{\infty}^{\ensuremath{\varepsilon}(x)}$}
Pour tout $x \in (\E{n}{\mathcal{A}})_{\mathrm{arc}}$, on note $\ensuremath{\varepsilon}(x)$ l'unique \'el\'ement de~$\intof{0,1}$ tel que la valeur absolue canonique sur~$\mathcal{H}(x)$ s'identifie \`a~$\va_{\infty}^{\ensuremath{\varepsilon}(x)}$ \textit{via} l'isomorphisme entre~$\mathcal{H}(x)$ et~$\ensuremath{\mathbf{R}}$ ou~$\ensuremath{\mathbf{C}}$ fourni par le th\'eor\`eme~\ref{th:vaarchimedienne}.
\end{nota}
\begin{lemm}\label{lem:epscontinue}
L'application $\ensuremath{\varepsilon} \colon (\E{n}{\mathcal{A}})_{\mathrm{arc}} \longrightarrow \intof{0,1}$ est continue.
\end{lemm}
\begin{proof}
Pour tout $x \in (\E{n}{\mathcal{A}})_{\mathrm{arc}}$, on a $\ensuremath{\varepsilon}(x) = \log(|2(x)|)/\log(2)$. Or, l'application~$|2|$ est continue, par d\'efinition de la topologie. Le r\'esultat s'ensuit.
\end{proof}
Rappelons que les espaces affines analytiques sur les corps valu\'es archim\'ediens complets sont tr\`es proches des espaces analytiques complexes usuels (\cf~\cite[\S 1.5.4]{Ber1}).
\begin{lemm}\label{lem:AnC}\index{Espace affine analytique!sur un corps archim\'edien}
Munissons~$\ensuremath{\mathbf{C}}$ et~$\ensuremath{\mathbf{R}}$ d'une puissance de la valeur absolue usuelle. On a alors des isomorphismes d'espaces localement annel\'es naturels
\[ \E{n}{\ensuremath{\mathbf{C}}} \simeq \ensuremath{\mathbf{C}}^n \textrm{ et } \E{n}{\ensuremath{\mathbf{R}}} \simeq \ensuremath{\mathbf{C}}^n/\Gal(\ensuremath{\mathbf{C}}/\ensuremath{\mathbf{R}}).\]
\qed
\end{lemm}
\medbreak
Nous allons maintenant munir~$\E{n}{\mathcal{A}}$ d'un faisceau structural.
\begin{nota}\index{Norme!uniforme}
\nomenclature[Im]{$\nm_{V}$}{semi-norme uniforme sur une partie compacte~$V$ de~$\E{n}{\mathcal{A}}$}
Pour toute partie compacte~$V$ de~$\E{n}{\mathcal{A}}$, on note~$\nm_{V}$ la \emph{semi-norme uniforme} sur~$V$, c'est-\`a-dire la semi-norme d\'efinie par
\[\forall P \in \mathcal{A}[T_{1},\dotsc,T_{n}],\ \|P\|_{V} = \max_{x\in V}(|P(x)|).\]
\end{nota}
\begin{defi}\label{def:fracrat}\index{Fraction rationnelle sans p\^oles|textbf}
\nomenclature[In]{$\mathcal{K}(V)$}{fractions rationnelles sans p\^oles sur~$V$}
Pour toute partie compacte~$V$ de~$\E{n}{\mathcal{A}}$, posons
\[ S_{V} := \{P \in \mathcal{A}[T_{1},\dotsc,T_{n}] : \forall x \in V, |P(x)| \ne 0\}.\]
Les \'el\'ements de l'anneau
\[ \mathcal{K}(V) := S_{V}^{-1} \mathcal{A}[T_{1},\dotsc,T_{n}]\]
sont appel\'es \emph{fractions rationnelles sans p\^oles sur~$V$}.
\end{defi}
La semi-norme~$\nm_{V}$ sur $\mathcal{A}[T_{1},\dotsc,T_{n}]$ s'\'etend en une semi-norme sur~$\mathcal{K}(V)$.
Pour tout $x\in V$, l'application d'\'evaluation $\mathrm{ev}_{x} \colon \mathcal{A}[T_{1},\dotsc,T_{n}] \to \mathcal{H}(x)$ s'\'etend \`a~$\mathcal{K}(V)$. Pour $f\in \mathcal{K}(V)$, on pose $f(x) := \mathrm{ev}_{x}(f) \in \mathcal{H}(x)$.\index{Evaluation@\'Evaluation|textbf}%
\nomenclature[Ievb]{}{morphisme d'\'evaluation en~$x$ sur $\mathcal{K}(V)$}%
\nomenclature[Ifxb]{}{\'evaluation en $x$ d'un \'el\'ement $f$ de $\mathcal{K}(V)$}
\begin{defi}[\protect{\cite[definition~1.5.1]{Ber1}}]\index{Espace affine analytique!faisceau structural|textbf}
Le \emph{faisceau structural} sur~$\E{n}{\mathcal{A}}$ est le faisceau qui \`a tout ouvert~$U$ de~$\E{n}{\mathcal{A}}$ associe l'ensemble des applications
\[ f \colon U \to \bigsqcup_{x\in U} \mathcal{H}(x)\]
v\'erifiant les propri\'et\'es suivantes~:
\begin{enumerate}[i)]
\item pour tout $x\in U$, on a $f(x) \in \mathcal{H}(x)$ ;
\item $f$ est localement limite uniforme de fractions rationnelles sans p\^oles.
\end{enumerate}
Nous noterons ce faisceau $\mathcal{O}_{\E{n}{\mathcal{A}}}$ ou simplement~$\mathcal{O}$.%
\nomenclature[Io]{$\mathcal{O}_{\E{n}{\mathcal{A}}}$}{faisceau structural sur $\E{n}{\mathcal{A}}$}%
\nomenclature[Ifxc]{}{\'evaluation en~$x$ d'une section~$f$ de~$\mathcal{O}_{\E{n}{\mathcal{A}}}$}
\end{defi}
La condition~ii) peut se r\'e\'ecrire pr\'ecis\'ement de la fa\c{c}on suivante~: pour tout point~$x$ de~$U$, il existe un voisinage compact~$V$ de~$x$ dans~$U$ et une suite $(R_{i})_{i \ge 0}$ d'\'el\'ements de~$\mathcal{K}(V)$ qui converge vers~$f_{|V}$ pour~$\nm_{V}$.
\begin{rema}\label{rem:gammaR}%
\nomenclature[Is]{$\gamma_{\ensuremath{\mathbf{R}}}$}{pour $U$ ouvert de la partie archim\'edienne de~$\E{n}{\mathcal{A}}$, morphisme canonique $\ensuremath{\mathbf{R}} \to \mathcal{O}(U)$}
Soit~$U$ un ouvert de~$\E{n}{\mathcal{A}}$ contenu dans la partie archim\'edienne. On a alors un morphisme injectif canonique
\[\gamma_{\ensuremath{\mathbf{R}}} \colon \ensuremath{\mathbf{R}} \to \mathcal{O}(U).\]
En effet, le morphisme $\ensuremath{\mathbf{Z}} \to \mathcal{A}$ induit un morphisme $\ensuremath{\mathbf{Z}} \to \mathcal{O}(U)$ et on v\'erifie que ce dernier s'\'etend \`a~$\ensuremath{\mathbf{Q}}$ puis \`a~$\ensuremath{\mathbf{R}}$.
\end{rema}
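Esquissons, \`a titre indicatif, cette v\'erification. Pour tout $x\in U$, le corps~$\mathcal{H}(x)$ est isomorphe \`a~$\ensuremath{\mathbf{R}}$ ou~$\ensuremath{\mathbf{C}}$ (\cf~th\'eor\`eme~\ref{th:vaarchimedienne}), donc aucun entier non nul ne s'annule sur~$U$, ce qui fournit le morphisme $\ensuremath{\mathbf{Q}} \to \mathcal{O}(U)$. Soit~$V$ une partie compacte de~$U$. D'apr\`es le lemme~\ref{lem:epscontinue}, la fonction~$\ensuremath{\varepsilon}$ atteint sur~$V$ un minimum~$\ensuremath{\varepsilon}_{V} >0$ et, pour $q,q'\in \ensuremath{\mathbf{Q}}$ v\'erifiant $|q-q'|_{\infty}\le 1$, on a
\[ \|q-q'\|_{V} \le |q-q'|_{\infty}^{\ensuremath{\varepsilon}_{V}}.\]
Toute suite de nombres rationnels convergeant vers un r\'eel donn\'e converge donc uniform\'ement sur~$V$, ce qui permet d'\'etendre le morphisme \`a~$\ensuremath{\mathbf{R}}$.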
\begin{lemm}\label{lem:evaluationcontinueaffine}
Pour tout ouvert~$U$ de~$\E{n}{\mathcal{A}}$ et tout \'el\'ement~$f$ de~$\mathcal{O}(U)$, l'application
\[\fonction{|f|}{U}{\ensuremath{\mathbf{R}}_{\ge 0}}{x}{|f(x)|}\]
est continue.
\end{lemm}
\begin{proof}
Si~$f$ est un polyn\^ome, le r\'esultat d\'ecoule de la d\'efinition de la topologie. On en d\'eduit le r\'esultat pour les fractions rationnelles sans p\^oles, puis pour des fonctions arbitraires, la continuit\'e \'etant pr\'eserv\'ee par limite uniforme.
\end{proof}
\begin{lemm}\index{Corps!residuel@r\'esiduel|textbf}%
\nomenclature[Ioa]{$\mathcal{O}_{x}$}{anneau local en un point~$x$ de~$\E{n}{\mathcal{A}}$}%
\nomenclature[Iob]{$\ensuremath{\mathfrak{m}}_{x}$}{id\'eal maximal de $\mathcal{O}_{x}$}%
\nomenclature[Ioc]{$\kappa(x)$}{$ = \mathcal{O}_{x}/\ensuremath{\mathfrak{m}}_{x}$, corps r\'esiduel de $x$}%
Pour tout point~$x$ de~$\E{n}{\mathcal{A}}$, le germe~$\mathcal{O}_{x}$ est un anneau local. Son id\'eal maximal~$\ensuremath{\mathfrak{m}}_{x}$ est constitu\'e des fonctions qui s'annulent au point~$x$. Le corps $\kappa(x) := \mathcal{O}_{x}/\ensuremath{\mathfrak{m}}_{x}$ est un sous-corps dense de~$\mathcal{H}(x)$.
\qed
\end{lemm}
\begin{defi}\label{def:projection}\index{Projection|textbf}%
\nomenclature[Iaproj1]{$\pi_{n}$}{projection $\pi_{n} \colon \E{n}{\mathcal{A}}\to\mathcal{M}(\mathcal{A})$}%
\nomenclature[Iaproj2]{$\pi_{n,m}$}{projection $\pi_{n,m} \colon \E{n}{\mathcal{A}}\to \E{m}{\mathcal{A}}$ sur les $m$~premi\`eres coordonn\'ees}%
On appelle \emph{projection sur la base} le morphisme d'espaces localement annel\'es
\[\pi_{n} \colon \E{n}{\mathcal{A}}\to\mathcal{M}(\mathcal{A})\]
induit par le morphisme $\mathcal{A} \to \mathcal{A}[T_{1},\dotsc,T_{n}]$.
Plus g\'en\'eralement, pour tout $m\in \cn{0}{n}$, on appelle \emph{projection sur les $m$~premi\`eres coordonn\'ees} le morphisme d'espaces localement annel\'es
\[\pi_{n,m} \colon \E{n}{\mathcal{A}}\to \E{m}{\mathcal{A}}\]
induit par le morphisme $\mathcal{A}[T_{1},\dotsc,T_{m}] \to \mathcal{A}[T_{1},\dotsc,T_{n}]$.
\end{defi}
On verra plus loin que les projections sont des morphismes analytiques (\cf~exemple~\ref{ex:structural}).
\begin{lemm}\index{Projection!fibre d'une}
Pour tout $m\in \cn{0}{n}$ et tout $y \in \E{m}{\mathcal{A}}$, on a un hom\'eomorphisme naturel
\[ \E{n-m}{\mathcal{H}(y)} \xrightarrow[]{\sim} \pi_{n,m}^{-1}(y). \]
\qed
\end{lemm}
\begin{rema}
La construction des espaces affines est fonctorielle en~$\mathcal{A}$. Tout morphisme born\'e d'anneaux de Banach $\mathcal{A} \to \mathcal{B}$ induit un morphisme d'anneaux $\mathcal{A}[T_{1},\dotsc,T_{n}] \to \mathcal{B}[T_{1},\dotsc,T_{n}]$ et un morphisme d'espaces annel\'es $\E{n}{\mathcal{B}} \to \E{n}{\mathcal{A}}$.
\end{rema}
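\`A titre d'exemple, l'inclusion $(\ensuremath{\mathbf{Z}},\va_{\infty}) \to (\ensuremath{\mathbf{Q}}_{p},\va_{p})$ est un morphisme born\'e d'anneaux de Banach, puisque
\[ \forall a\in\ensuremath{\mathbf{Z}}\setminus\{0\},\ |a|_{p} \le 1 \le |a|_{\infty}~;\]
elle induit donc un morphisme d'espaces annel\'es $\E{n}{\ensuremath{\mathbf{Q}}_{p}} \to \E{n}{\ensuremath{\mathbf{Z}}}$.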
Terminons par un r\'esultat topologique qui nous sera utile dans la suite. Un cas particulier figure dans~\cite[proposition~3.4.1]{A1Z} et la d\'emonstration s'\'etend sans peine au cas g\'en\'eral.
\begin{nota}%
\nomenclature[Ih]{$x^\ensuremath{\varepsilon}$}{point associ\'e \`a la semi-norme $\va_{x}^\ensuremath{\varepsilon}$}%
Soit $b\in \mathcal{M}(\mathcal{A})$. Pour tout $x\in \pi_{n}^{-1}(b)$ et tout $\ensuremath{\varepsilon} \in I_{b}$, l'application~$\va_{x}^\ensuremath{\varepsilon}$ est une semi-norme multiplicative sur $\mathcal{A}[T_{1},\dotsc,T_{n}]$ born\'ee sur~$\mathcal{A}$. On note~$x^\ensuremath{\varepsilon}$ le point de~$\E{n}{\mathcal{A}}$ associ\'e.
\end{nota}
\begin{lemm}\label{lem:flot}
Soit $b\in \mathcal{M}(\mathcal{A})$. L'application
\[{\renewcommand{\arraystretch}{1.3}\begin{array}{ccc}
\pi_{n}^{-1}(b) \times I_{b} & \longrightarrow & \E{n}{\mathcal{A}}\\
(x,\ensuremath{\varepsilon}) & \longmapsto & x^\ensuremath{\varepsilon}
\end{array}}\]
r\'ealise un hom\'eomorphisme sur son image.
\qed
\end{lemm}
\section{Parties rationnelles et spectralement convexes}\label{sec:spconvexe}
Soit~$\mathcal{A}$ un anneau de Banach et soit $n\in \ensuremath{\mathbf{N}}$.
\begin{defi}\index{Partie!rationnelle|textbf}\index{Partie!pro-rationnelle|textbf}
On dit qu'une partie compacte~$V$ de~$\E{n}{\mathcal{A}}$ est \emph{rationnelle} s'il existe $P_{1},\dotsc,P_{p},Q \in \mathcal{A}[T_{1},\dotsc,T_{n}]$ ne s'annulant pas simultan\'ement sur $\E{n}{\mathcal{A}}$ et $r_{1},\dotsc,r_{p} \in \ensuremath{\mathbf{R}}_{>0}$ tels que
\[ V = \bigcap_{1\le i\le p} \{x\in \E{n}{\mathcal{A}}: |P_i(x)|\le r_i \, |Q(x)|\} .\]
On dit qu'une partie compacte~$V$ de~$\E{n}{\mathcal{A}}$ est \emph{pro-rationnelle} si elle est intersection de parties compactes rationnelles.
\end{defi}
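\`A titre d'exemple, pour tout $t\in\ensuremath{\mathbf{R}}_{>0}$, la partie
\[ \{x\in \E{1}{\mathcal{A}} : |T(x)|\le t\}\]
est compacte (cela se v\'erifie \`a l'aide du th\'eor\`eme de Tychonoff, comme pour~$\mathcal{M}(\mathcal{A})$) et rationnelle~: il suffit de prendre $p=1$, $P_{1} = T$, $Q=1$ et $r_{1} = t$ dans la d\'efinition ci-dessus.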
\begin{exem}\label{ex:pointprorationnel}
Tout point de~$\E{n}{\mathcal{A}}$ est une partie pro-rationnelle. En effet, pour tout $x \in \E{n}{\mathcal{A}}$, on a
\[ \{x\} = \bigcap_{P \in \mathcal{A}[T_{1},\dotsc,T_{n}]} \{y \in \E{n}{\mathcal{A}} : |P(y)| = |P(x)|\}.\]
\end{exem}
Un calcul classique (\cf~\cite[proposition~7.2.3/7]{BGR}) montre qu'une intersection finie de parties compactes rationnelles est encore compacte rationnelle. Il suit alors de la d\'efinition de la topologie de~$\E{n}{\mathcal{A}}$ que tout point poss\`ede une base de voisinages form\'ee de parties compactes rationnelles.
\begin{nota}\label{nota:BV}
\nomenclature[Ina]{$\mathcal{B}(V)$}{s\'epar\'e compl\'et\'e de~$\mathcal{K}(V)$ pour $\nm_{V}$}
Pour toute partie compacte~$V$ de~$\E{n}{\mathcal{A}}$, on note~$\mathcal{B}(V)$ le s\'epar\'e compl\'et\'e de~$\mathcal{K}(V)$ pour la semi-norme uniforme~$\nm_{V}$ sur~$V$.
C'est un anneau de Banach uniforme.
\end{nota}
\begin{exem}\label{ex:Bpoint}
Pour tout point~$x$ de~$\E{n}{\mathcal{A}}$, on a $\mathcal{B}(\{x\}) = \mathcal{H}(x)$.
\end{exem}
\begin{lemm}\label{lem:factorisationBV}
Soit~$V$ une partie compacte de~$\E{n}{\mathcal{A}}$. Alors, pour toute $\mathcal{A}$-alg\`ebre de Banach uniforme~$\mathcal{B}$ et tout morphisme de~$\mathcal{A}$-alg\`ebres $f \colon \mathcal{A}[T_1,\ldots,T_n]\to\mathcal{B}$ tel que l'image du morphisme induit $\mathcal{M}(\mathcal{B})\to\E{n}{\mathcal{A}}$ soit contenue dans~$V$, il existe un unique morphisme de $\mathcal{A}$-alg\`ebres born\'e $\mathcal{B}(V) \to \mathcal{B}$ qui fasse commuter le diagramme
\[\begin{tikzcd}
\mathcal{A}[T_{1},\dotsc,T_{n}] \arrow[d] \arrow[r, "f"] & \mathcal{B}. \\
\mathcal{B}(V) \arrow[ru]
\end{tikzcd}\]
\end{lemm}
\begin{proof}
Soient~$\mathcal{B}$ une $\mathcal{A}$-alg\`ebre de Banach uniforme, $f \colon \mathcal{A}[T_1,\ldots,T_n]\to\mathcal{B}$ un morphisme de~$\mathcal{A}$-alg\`ebres et supposons que l'image du morphisme induit $\varphi \colon \mathcal{M}(\mathcal{B})\to\E{n}{\mathcal{A}}$ est contenue dans~$V$.
Soit $P \in S_{V}$. Pour tout $y\in \mathcal{M}(\mathcal{B})$, on a $|f(P)(y)| = |P(\varphi(y))| \ne 0$ car $\varphi(y) \in V$. On en d\'eduit que $f(P)$ est inversible dans~$\mathcal{B}$, d'apr\`es \cite[theorem~1.2.1]{Ber1}. Par cons\'equent, $f$ s'\'etend en un morphisme
\[f' \colon \mathcal{K}(V) = S_{V}^{-1} \mathcal{A}[T_{1},\dotsc,T_{n}] \to \mathcal{B},\]
et ce de fa\c{c}on n\'ecessairement unique.
Soient $P \in \mathcal{A}[T_{1},\dotsc,T_{n}]$ et $Q\in S_{V}$. Pour tout $y\in \mathcal{M}(\mathcal{B})$, on a
\[ \left|f'\left(\frac P Q\right)(y)\right| = \frac{|f(P)(y)|}{|f(Q)(y)|} = \frac{|P(\varphi(y))|}{|Q(\varphi(y))|} =\left|\frac P Q(\varphi(y))\right|. \]
Puisque~$\mathcal{B}$ est uniforme et que l'image de~$\varphi$ est contenue dans~$V$, on en d\'eduit que $\|f'(P/Q)\| \le \|P/Q\|_{V}$.
Par cons\'equent, le morphisme~$f'$ se prolonge en un morphisme born\'e $\mathcal{B}(V) \to \mathcal{B}$, et ce de fa\c{c}on unique.
\end{proof}
Remarquons que, pour toute partie compacte~$V$ de~$\E{n}{\mathcal{A}}$, le morphisme naturel $\mathcal{A}[T_{1},\dotsc,T_{n}] \to \mathcal{B}(V)$ induit un morphisme d'espaces annel\'es $\varphi_{V} \colon \mathcal{M}(\mathcal{B}(V))\to \E{n}{\mathcal{A}}$.
\begin{nota}\index{Interieur@Int\'erieur|textbf}
\nomenclature[D]{$\mathring V$}{int\'erieur d'une partie $V$ d'un espace topologique}
Soient $T$ un espace topologique et $V$ une partie de~$T$. On note~$\mathring V$ l'int\'erieur de~$V$ dans~$T$.
\end{nota}
\begin{theo}[\protect{\cite[th\'eor\`eme~1.2.11]{A1Z}}]\label{thm:rationnel}
Soit~$V$ une partie compacte pro-rationnelle de~$\E{n}{\mathcal{A}}$. Alors le morphisme $\varphi_{V}$ induit
\begin{enumerate}[i)]
\item un hom\'eomorphisme $\mathcal{M}(\mathcal{B}(V)) \xrightarrow[]{\sim} V$ ;
\item un isomorphisme d'espaces annel\'es $\varphi_{V}^{-1}(\mathring V) \xrightarrow[]{\sim} \mathring V$.
\end{enumerate}
\qed
\end{theo}
\begin{defi}\label{def:spconvexe}\index{Partie!spectralement convexe|textbf}
On dit qu'une partie compacte~$V$ de~$\E{n}{\mathcal{A}}$ est \emph{spectralement convexe} si elle satisfait les conclusions du th\'eor\`eme~\ref{thm:rationnel}.
\end{defi}
\begin{prop}[\protect{\cite[remarque~1.2.13]{A1Z}}]\label{crit_spectral}
Une partie compacte~$V$ de~$\E{n}{\mathcal{A}}$ est spectralement convexe si, et seulement si, l'image de~$\varphi_{V}$ est contenue dans~$V$.
\qed
\end{prop}
\begin{coro}
L'ensemble des parties compactes spectralement convexes de~$\E{n}{\mathcal{A}}$ est stable par intersection.
\qed
\end{coro}
\`A l'aide du lemme~\ref{lem:factorisationBV}, nous en d\'eduisons une autre caract\'erisation des parties spectralement convexes.
\begin{prop}\label{crit_spectral_pu}
Soit~$V$ une partie compacte de~$\E{n}{\mathcal{A}}$. Les propositions suivantes sont \'equivalentes~:
\begin{enumerate}[i)]
\item $V$ est spectralement convexe~;
\item pour toute $\mathcal{A}$-alg\`ebre de Banach uniforme~$\mathcal{B}$ et tout morphisme de~$\mathcal{A}$-alg\`ebres $f \colon \mathcal{A}[T_1,\ldots,T_n]\to\mathcal{B}$, l'image du morphisme induit $\mathcal{M}(\mathcal{B})\to\E{n}{\mathcal{A}}$ est contenue dans~$V$ si, et seulement si, il existe un unique morphisme de $\mathcal{A}$-alg\`ebres born\'e $\mathcal{B}(V) \to \mathcal{B}$ qui fasse commuter le diagramme
\[\begin{tikzcd}
\mathcal{A}[T_{1},\dotsc,T_{n}] \arrow[d] \arrow[r, "f"] & \mathcal{B}. \\
\mathcal{B}(V) \arrow[ru]
\end{tikzcd}\]
\end{enumerate}
\qed
\end{prop}
\begin{rema}\label{spectral}\label{spectralement_conv}\index{Projection!fibre d'une}
Soit $m\in \cn{0}{n}$ et consid\'erons le morphisme de projection sur les $m$~premi\`eres coordonn\'ees $\pi_{n,m} \colon \E{n}{\mathcal{A}} \to \E{m}{\mathcal{A}}$ (\cf~d\'efinition~\ref{def:projection}). Soit~$V$ un ensemble compact spectralement convexe de~$\E{m}{\mathcal{A}}$. D'apr\`es \cite[proposition~1.2.15]{A1Z}, on a un hom\'eomorphisme naturel
\[\E{n-m}{\mathcal{B}(V)} \xrightarrow[]{\sim} \pi_{n,m}^{-1}(V)\]
et m\^eme un isomorphisme d'espaces annel\'es au-dessus de~$\mathring V$.
Ce r\'esultat se r\'ev\`ele tr\`es utile dans les d\'emonstrations par r\'ecurrence sur la dimension. Par exemple, si l'on souhaite \'etudier une propri\'et\'e locale en un point~$x$ de~$\E{n}{\mathcal{A}}$, il permet de se ramener au cas d'un point $x'$ de $\E{1}{\mathcal{B}}$, o\`u $\mathcal{B}$ est un anneau de Banach de la forme~$\mathcal{B}(V)$ pour un voisinage compact spectralement convexe~$V$ de~$\pi_{n,n-1}(x)$ dans $\E{n-1}{\mathcal{A}}$.
\end{rema}
\section{Disques, couronnes et domaines polynomiaux}\label{sec:dcl}
Soit~$\mathcal{A}$ un anneau de Banach. Posons $B :=\mathcal{M}(\mathcal{A})$ et $X := \E{1}{\mathcal{A}}$, avec coordonn\'ee~$T$. Notons $\pi \colon X \to B$ la projection sur la base.
\begin{nota}\index{Disque|textbf}\index{Couronne|textbf}\index{Domaine polynomial|textbf}%
\nomenclature[Jb]{$D_{V}(P,t)$}{domaine polynomial ouvert relatif au-dessus de $V$}%
\nomenclature[Jba]{$\overline{D}_{V}(P,t)$}{domaine polynomial ferm\'e relatif au-dessus de $V$}%
\nomenclature[Jc]{$C_{V}(P,s,t)$}{domaine polynomial ouvert relatif au-dessus de $V$}%
\nomenclature[Jd]{$\overline{C}_{V}(P,s,t)$}{domaine polynomial ferm\'e relatif au-dessus de $V$}%
Soit~$V$ une partie de~$B$. Soient $P\in \mathcal{A}[T]$ et $s,t\in \ensuremath{\mathbf{R}}$. On pose
\begin{align*}
D_{V}(P,t) &:= \{x\in X : \pi(x)\in V, |P(x)| < t\}~;\\
\overline{D}_{V}(P,t) &:= \{x\in X : \pi(x)\in V, |P(x)| \le t\}~;\\
C_{V}(P,s,t) &:= \{x\in X : \pi(x)\in V, s < |P(x)| < t\}~;\\
\overline{C}_{V}(P,s,t) &:= \{x\in X : \pi(x)\in V, s \le |P(x)| \le t\}.
\end{align*}
Une partie de la forme $D_{V}(P,t)$ ou $C_{V}(P,s,t)$ sera appel\'ee \emph{domaine polynomial ouvert relatif au-dessus de~$V$}. Une partie de la forme $\overline D_{V}(P,t)$ ou $\overline C_{V}(P,s,t)$ sera appel\'ee \emph{domaine polynomial ferm\'e relatif au-dessus de~$V$}.
Lorsque $P = T$, on s'autorise \`a supprimer le~$P$ de la notation et \`a \'ecrire simplement $D_{V}(t)$, $\overline{D}_{V}(t)$, $C_{V}(s,t)$ ou $\overline{C}_{V}(s,t)$. On parlera de \emph{disque relatif} dans les deux premiers cas et de \emph{couronne relative} dans les deux derniers.
\nomenclature[Je]{$D_{V}(t)$}{disque ouvert relatif au-dessus de $V$}
\nomenclature[Jf]{$\overline{D}_{V}(t)$}{disque ferm\'e relatif au-dessus de $V$}
\nomenclature[Jg]{$C_{V}(s,t)$}{couronne ouverte relative au-dessus de $V$}
\nomenclature[Jh]{$\overline{C}_{V}(s,t)$}{couronne ferm\'ee relative au-dessus de $V$}
\end{nota}
Dans tout le texte, nous nous autoriserons \`a parler de sections d'un faisceau sur une partie qui n'est pas n\'ecessairement ouverte en consid\'erant les sections surconvergentes.\index{Surconvergence|see{Faisceau, Fonction et Espace affino\"ide}}
\begin{nota}\index{Faisceau!surconvergent}
\nomenclature[Ira]{$\mathcal{O}(V)$}{fonctions surconvergentes sur un compact~$V$ de~$\E{n}{\mathcal{A}}$}
\nomenclature[Ea]{$\mathcal{F}(V)$}{sections d'un faisceau coh\'erent~$\mathcal{F}$ surconvergentes sur une partie~$V$}
Pour tout espace topologique~$T$, tout faisceau~$\mathcal{F}$ sur~$T$ et toute partie~$V$ de~$T$, on pose
\[\mathcal{F}(V) := \colim_{U \supset V} \mathcal{F}(U),\]
o\`u $U$ parcourt l'ensemble des voisinages ouverts de~$V$ dans~$T$.
\end{nota}
\subsection{Alg\`ebres de disques et de couronnes}
\index{Disque!algebre@alg\`ebre d'un|(}
\index{Couronne!algebre@alg\`ebre d'une|(}
\begin{nota}\label{nota:ATt}
Soit~$\mathcal{C}$ une alg\`ebre norm\'ee.
Soit $t \in \ensuremath{\mathbf{R}}_{>0}$. On note $\mathcal{C}\ensuremath{\langle} |T| \le t\ensuremath{\rangle}$ l'alg\`ebre constitu\'ee des s\'eries de la forme
\[\sum_{n\in \ensuremath{\mathbf{N}}} a_{n}\, T^n \in \mathcal{C}[\![T]\!]\]
telles que la s\'erie $\sum_{n\in \ensuremath{\mathbf{N}}} \|a_{n}\|\, t^n$ converge et on la munit de la norme d\'efinie par
\[\left\| \sum_{n\in \ensuremath{\mathbf{N}}} a_{n}\, T^n\right\|_{t} := \sum_{n\in \ensuremath{\mathbf{N}}} \|a_{n}\|\, t^n.\]
Si~$\mathcal{C}$ est compl\`ete, $\mathcal{C}\ensuremath{\langle} |T| \le t\ensuremath{\rangle}$ l'est aussi.
Soient $s,t \in \ensuremath{\mathbf{R}}_{>0}$ tels que $s \le t$. On note $\mathcal{C}\ensuremath{\langle} s \le |T| \le t\ensuremath{\rangle}$ l'alg\`ebre constitu\'ee des s\'eries de la forme
\[\sum_{n\in \ensuremath{\mathbf{Z}}} a_{n}\, T^n \in \mathcal{C}[\![T,T^{-1}]\!]\]
telles que la famille $(\|a_{n}\|\, \max(s^n,t^n))_{n\in \ensuremath{\mathbf{Z}}}$ soit sommable et on la munit de la norme d\'efinie par
\[\left\| \sum_{n\in \ensuremath{\mathbf{Z}}} a_{n}\, T^n\right\|_{s,t} := \sum_{n\in \ensuremath{\mathbf{Z}}} \|a_{n}\|\, \max(s^n,t^n).\]
Si~$\mathcal{C}$ est compl\`ete, $\mathcal{C}\ensuremath{\langle} s\le |T| \le t\ensuremath{\rangle}$ l'est aussi.
Par commodit\'e, on pose $\mathcal{C}\ensuremath{\langle} 0 \le |T| \le t\ensuremath{\rangle} := \mathcal{C}\ensuremath{\langle} |T| \le t\ensuremath{\rangle}$ et $\nm_{0,t} := \nm_{t}$.%
\nomenclature[Ca]{$\mathcal{A}\ensuremath{\langle} \vert T\vert \le t \ensuremath{\rangle}$}{s\'eries \`a coefficients dans~$\mathcal{A}$ normalement convergentes sur $\overline{D}(t)$}%
\nomenclature[Bha]{$\nm_{t}$}{norme 1 sur $\mathcal{A}\ensuremath{\langle} \vert T\vert \le t\ensuremath{\rangle}$}%
\nomenclature[Cb]{$\mathcal{A}\ensuremath{\langle} s \le \vert T\vert \le t\ensuremath{\rangle}$}{s\'eries de Laurent \`a coefficients dans~$\mathcal{A}$ normalement convergentes sur $\overline{C}(s,t)$}%
\nomenclature[Bhb]{$\nm_{s,t}$}{norme 1 sur $\mathcal{A}\ensuremath{\langle} s\le \vert T\vert \le t\ensuremath{\rangle}$}%
\end{nota}
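Soient $s,t\in\ensuremath{\mathbf{R}}_{>0}$ tels que $s\le t$. Remarquons, \`a titre indicatif, que tout \'el\'ement $f = \sum_{n\in\ensuremath{\mathbf{Z}}} a_{n}\, T^{n}$ de $\mathcal{A}\ensuremath{\langle} s \le |T| \le t\ensuremath{\rangle}$ s'\'evalue en tout point~$x$ de~$\overline{C}(s,t)$, la s\'erie $\sum_{n\in\ensuremath{\mathbf{Z}}} a_{n}(x)\, T(x)^{n}$ convergeant dans~$\mathcal{H}(x)$, et que l'on a
\[ \forall x\in \overline{C}(s,t),\ |f(x)| \le \sum_{n\in\ensuremath{\mathbf{Z}}} \|a_{n}\|\, \max(s^{n},t^{n}) = \|f\|_{s,t}.\]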
\begin{prop}[\protect{\cite[proposition~2.1.1]{A1Z}}]\index{Norme!spectrale}\index{Norme!uniforme}\index{Norme!comparaison}
Supposons que~$\mathcal{A}$ est uniforme. Pour $s,t \in \ensuremath{\mathbf{R}}_{\ge 0}$ tels que $0= s <t$ ou $0<s\le t$, on a un isomorphisme canonique
\[\mathcal{M}(\mathcal{A}\ensuremath{\langle} s \le |T| \le t\ensuremath{\rangle}) \xrightarrow[]{\sim} \overline{C}(s,t).\]
En particulier, la norme spectrale de~$\nm_{s,t}$ s'identifie avec la norme uniforme sur $\overline{C}(s,t)$.
\qed
\end{prop}
Pour \'eviter d'avoir \`a distinguer dans chaque \'enonc\'e les disques et les couronnes, introduisons une notation.
\begin{nota}%
\nomenclature[Ja]{$s\prec t$}{relation entre $s,t \in \ensuremath{\mathbf{R}}_{\ge0}$ satisfaite si $s=0$ ou $s <t$}
Pour $s,t \in \ensuremath{\mathbf{R}}_{\ge0}$, on note
\[s \prec t \textrm{ si } s =0 \textrm{ ou } s< t.\]
\end{nota}
\begin{prop}[\protect{\cite[lemme~2.1.2 et proposition~2.1.3]{A1Z}}]\label{prop:restrictionserie}\index{Norme!comparaison}
\index{Algebre@Alg\`ebre!d'un disque|see{Disque}}\index{Algebre@Alg\`ebre!d'une couronne|see{Couronne}}\index{Algebre@Alg\`ebre!d'un domaine polynomial|see{Domaine polynomial}}
Supposons que~$\mathcal{A}$ est uniforme.
Soient $s,u \in \ensuremath{\mathbf{R}}_{\ge 0}$ et $t,v \in \ensuremath{\mathbf{R}}_{>0}$ tels que $s\prec u \le v < t$. Soit $f = \sum_{n\in \ensuremath{\mathbf{Z}}} a_{n}\, T^n \in \mathcal{A}\ensuremath{\langle} s \le |T| \le t\ensuremath{\rangle}$. Alors, on a
\[\forall n\in \ensuremath{\mathbf{Z}},\ \|a_{n}\| \max(u^n,v^n) \le \|f\|_{\overline{C}(s,t)}\]
et
\[ \|f\|_{u,v} \le \left( \frac{s}{u-s} + \frac{t}{t-v}\right) \|f\|_{\overline{C}(s,t)},\]
avec la convention que $s/(u-s) = 0$ lorsque $s=0$.
En particulier, on a un morphisme injectif naturel
\[\mathcal{B}(\overline{C}(s,t)) \to \mathcal{A}\ensuremath{\langle} u\le |T| \le v\ensuremath{\rangle}.\]
\end{prop}
Rappelons un r\'esultat permettant de d\'ecrire les fonctions au voisinage des disques.
\begin{prop}[\protect{\cite[corollaire~2.8]{EtudeLocale}}]\label{prop:disqueglobal}
Soit $V$ une partie de~$B$. Soit $t \in \ensuremath{\mathbf{R}}_{\ge 0}$. Alors, le morphisme naturel
\[\colim_{W\supset V,v > t} \mathcal{O}(W)\ensuremath{\langle} |T|\le v\ensuremath{\rangle} \to \mathcal{O}(\overline{D}_{V}(t)),\]
o\`u~$W$ parcourt l'ensemble des voisinages compacts de~$V$ dans~$B$, est un isomorphisme.
\qed
\end{prop}
On peut obtenir un analogue de ce r\'esultat pour les couronnes ultram\'etriques.
\begin{prop}
Soit $b \in B_{\mathrm{um}}$. Soient $s,t \in \ensuremath{\mathbf{R}}_{> 0}$ tels que $s\le t$. Alors, le morphisme naturel
\[\colim_{V\ni b, u\prec s \le t< v} \mathcal{B}(V)\ensuremath{\langle} u\le |T|\le v\ensuremath{\rangle} \to \mathcal{O}(\overline{C}_{b}(s,t)),\]
o\`u~$V$ parcourt l'ensemble des voisinages compacts de~$b$ dans~$B$, est un isomorphisme.
\end{prop}
\begin{proof}
L'injectivit\'e \'etant claire, il suffit de d\'emontrer la surjectivit\'e. Soit $f \in \mathcal{O}(\overline{C}_{b}(s,t))$. On peut supposer qu'il existe un voisinage ouvert~$U$ de~$b$ dans~$B$ et des nombres r\'eels $s_{1},t_{1}\in \ensuremath{\mathbf{R}}_{>0}$ v\'erifiant $s_{1}< s \le t < t_{1}$ tels que $f\in \mathcal{O}(\overline{C}_{U}(s_{1},t_{1}))$.
Notons~$z$ l'unique point du bord de Shilov de la couronne $\overline{C}_{b}(t,t)$. Par d\'efinition du faisceau structural, il existe un voisinage compact~$W$ de~$z$ dans~$\overline{C}_{U}(s_{1},t_{1})$ et un \'el\'ement~$g$ de~$\mathcal{B}(W)$ co\"incidant avec~$f$ sur~$W$.
L'int\'erieur de~$W$ contient une couronne de la forme $\overline{C}_{V}(t',t')$, o\`u~$V$ est un voisinage compact spectralement convexe de~$b$ dans~$U$ et $t'$ un nombre r\'eel v\'erifiant $t< t' <t_{1}$. D'apr\`es \cite[proposition~2.4]{EtudeLocale}, il existe un \'el\'ement~$h$ de $\mathcal{B}(V)\ensuremath{\langle} |T| = t'\ensuremath{\rangle}$ qui co\"incide avec~$g$ sur $\overline{C}_{V}(t',t')$. On peut \'ecrire
\[h = \sum_{n\in\ensuremath{\mathbf{Z}}} a_{n}\, T^n \in \mathcal{B}(V)[\![ T,T^{-1} ]\!],\]
o\`u la famille $(\|a_{n}\|_{V}\, {t'}^n)_{n\in \ensuremath{\mathbf{Z}}}$ est sommable.
Soit~$b' \in V$ et raisonnons dans l'espace $\pi^{-1}(b') \simeq \E{1}{\mathcal{H}(b')}$. Puisque la restriction de~$f$ appartient \`a $\mathcal{O}(\overline{C}_{b'}(s_{1},t_{1}))$, elle est d\'eveloppable en s\'erie de Laurent sur~$\mathcal{H}(b')$ et, par le principe du prolongement analytique, son d\'eveloppement n'est autre que
\[h(b') = \sum_{n\in\ensuremath{\mathbf{Z}}} a_{n}(b')\, T^n \in \mathcal{H}(b')[\![ T , T^{-1}]\!].\]
Soient $s_{2},t_{2} \in \ensuremath{\mathbf{R}}_{>0}$ tels que $s_{1}< s_{2}< s \le t < t_{2} < t_{1}$. La proposition~\ref{prop:restrictionserie} assure alors que, pour tout $n\in \ensuremath{\mathbf{Z}}$, on a
\[|a_{n}(b')| \max(s_{2}^n,t_{2}^n)\le \left( \frac{s_{1}}{s_{2}-s_{1}} + \frac{t_{1}}{t_{1}-t_{2}} \right) \|f\|_{\overline{C}_{b'}(s_{1},t_{1})},\]
et donc
\[\|a_{n}\|_{V} \max(s_{2}^n,t_{2}^n)\le \left( \frac{s_{1}}{s_{2}-s_{1}} + \frac{t_{1}}{t_{1}-t_{2}} \right) \|f\|_{\overline{C}_{V}(s_{1},t_{1})}.\]
On en d\'eduit que~$f$ appartient \`a l'image de $\mathcal{B}(V)\ensuremath{\langle} s_{3}\le |T|\le t_{3}\ensuremath{\rangle}$ pour tous $s_{3},t_{3} \in \ensuremath{\mathbf{R}}_{>0} $ tels que $s_{2}< s_{3}< s \le t < t_{3} < t_{2}$.
\end{proof}
\begin{rema}
Un r\'esultat du m\^eme type vaut certainement encore dans le cas archim\'edien. Par exemple, pour d\'emontrer que tout \'el\'ement de $\mathcal{O}(\overline{C}_{b}(s,t))$ poss\`ede un d\'eveloppement \`a valeurs dans~$\mathcal{O}_{b}$, une strat\'egie naturelle consisterait \`a exprimer chaque coefficient du d\'eveloppement \`a l'aide de la fonction~$f$, autrement dit \`a \'ecrire une formule des r\'esidus en chaque fibre, et \`a montrer que le r\'esultat est une fonction analytique du param\`etre sur la base. Nous n'avons pas souhait\'e pousser plus loin ces consid\'erations, n'ayant pas l'utilit\'e du r\'esultat.
\end{rema}
En appliquant la proposition pr\'ec\'edente fibre \`a fibre et en recollant, on obtient le r\'esultat suivant.
\begin{coro}\label{cor:couronneglobale}
Soit~$V$ une partie de~$B_{\mathrm{um}}$. Soient $s,t\in\ensuremath{\mathbf{R}}_{> 0}$ tels que $s\le t$. Alors le morphisme naturel
\[\colim_{W\supset V,u\prec s \le t < v} \mathcal{O}(W)\ensuremath{\langle} u\le |T|\le v\ensuremath{\rangle} \to \mathcal{O}(\overline{C}_{V}(s,t)),\]
o\`u~$W$ parcourt l'ensemble des voisinages compacts de~$V$ dans~$B$, est un isomorphisme.
\end{coro}
Dans le cadre ultram\'etrique, on peut \'egalement obtenir des r\'esultats sur les compl\'et\'es des anneaux de sections globales sur les couronnes. Introduisons au pr\'ealable quelques d\'efinitions sp\'ecifiques.
\begin{nota}%
\nomenclature[Cc]{$\mathcal{A}\{ \vert T\vert \le t\}$}{pour $\mathcal{A}$ ultram\'etrique, s\'eries \`a coefficients dans~$\mathcal{A}$ convergentes sur $\overline{D}(t)$}
\nomenclature[Bhc]{$\nm_{t}$}{norme $\infty$ sur $\mathcal{A}\{ \vert T\vert \le t\}$}
\nomenclature[Cd]{$\mathcal{A}\{ s \le \vert T\vert \le t\}$}{pour $\mathcal{A}$ ultram\'etrique, s\'eries de Laurent \`a coefficients dans~$\mathcal{A}$ convergentes sur $\overline{C}(s,t)$}
\nomenclature[Bhd]{$\nm_{s,t}$}{norme $\infty$ sur $\mathcal{A}\{ s\le \vert T\vert \le t\}$}
Supposons que la norme sur~$\mathcal{A}$ est ultram\'etrique.
Soit $t \in \ensuremath{\mathbf{R}}_{>0}$. On note $\mathcal{A}\{ |T| \le t\}$ l'alg\`ebre constitu\'ee des s\'eries de la forme
\[\sum_{n\in \ensuremath{\mathbf{N}}} a_{n}\, T^n \in \mathcal{A}[\![T]\!]\]
telles que $\lim_{n \to +\infty} \|a_{n}\|\, t^n = 0$.
Elle est compl\`ete pour la norme d\'efinie par
\[\left\| \sum_{n\in \ensuremath{\mathbf{N}}} a_{n}\, T^n\right\|_{t} := \max_{n\in \ensuremath{\mathbf{N}}} \|a_{n}\|\, t^n.\]
Soient $s,t \in \ensuremath{\mathbf{R}}_{>0}$ tels que $s \le t$. On note $\mathcal{A}\{ s \le |T| \le t\}$ l'alg\`ebre constitu\'ee des s\'eries de la forme
\[\sum_{n\in \ensuremath{\mathbf{Z}}} a_{n}\, T^n \in \mathcal{A}[\![T,T^{-1}]\!]\]
telles que $\lim_{n \to +\infty} \|a_{n}\|\, t^n = \lim_{n \to -\infty} \|a_{n}\|\, s^n = 0$.
Elle est compl\`ete pour la norme d\'efinie par
\[\left\| \sum_{n\in \ensuremath{\mathbf{Z}}} a_{n}\, T^n\right\|_{s,t} := \max_{n\in \ensuremath{\mathbf{Z}}} \|a_{n}\|\, \max(s^n,t^n).\]
Par commodit\'e, on pose $\mathcal{A}\{ 0 \le |T| \le t\} := \mathcal{A}\{ |T| \le t\}$ et $\nm_{0,t} := \nm_{t}$.
\end{nota}
\begin{nota}
\nomenclature[Irb]{$\overline{\mathcal{O}(V)}$}{s\'epar\'e compl\'et\'e de~$\mathcal{O}(V)$ pour $\nm_{V}$}
Pour toute partie compacte~$W$ de~$B$ ou de~$X$, on note $\overline{\mathcal{O}(W)}$ le s\'epar\'e compl\'et\'e de~$\mathcal{O}(W)$ pour la semi-norme uniforme~$\nm_{W}$ sur~$W$.
\end{nota}
\begin{lemm}\label{lem:couronneum}
Soit~$V$ une partie compacte de~$B_{\mathrm{um}}$. Soient $s,t\in\ensuremath{\mathbf{R}}_{\ge 0}$ tels que $0=s<t$ ou $0<s\le t$. Le morphisme naturel
\[\mathcal{O}(V)[T] \to \mathcal{O}(\overline{C}_{V}(s,t))\]
induit un isomorphisme isom\'etrique
\[\overline{\mathcal{O}(V)}\{s\le |T| \le t\} \xrightarrow[]{\sim} \overline{\mathcal{O}(\overline{C}_{V}(s,t))}.\]
\end{lemm}
\begin{proof}
On se contentera de traiter le cas o\`u $s>0$. Soit $f \in \mathcal{O}(\overline{C}_{V}(s,t))$. D'apr\`es le corollaire~\ref{cor:couronneglobale}, $f$~peut s'\'ecrire sous la forme d'une s\'erie $\sum_{n\in\ensuremath{\mathbf{Z}}} a_{n}\,T^n$ satisfaisant les conditions de convergence ad\'equates.
La norme uniforme de~$f$ sur $\overline{C}_{V}(s,t)$ est la borne sup\'erieure des normes uniformes de~$f$ sur les fibres au-dessus des points de~$V$. Puisque ces points sont ultram\'etriques, on en d\'eduit que
\[\|f\|_{\overline{C}_{V}(s,t)} = \sup_{b\in V} \bigl(\max_{n\in \ensuremath{\mathbf{Z}}} \bigl(|a_{n}(b)|\, \max(s^n,t^n)\bigr)\bigr).\]
Puisque~$V$ est compacte, $\overline{C}_{V}(s,t)$ l'est aussi, donc la borne sup\'erieure pr\'ec\'edente est un maximum~:
\begin{align*}
\|f\|_{\overline{C}_{V}(s,t)} &= \max_{b\in V} \bigl(\max_{n\in \ensuremath{\mathbf{Z}}} \bigl(|a_{n}(b)|\, \max(s^n,t^n)\bigr)\bigr)\\
&= \max_{n\in \ensuremath{\mathbf{Z}}} \bigl(\|a_{n}\|_{V}\, \max(s^n,t^n)\bigr).
\end{align*}
Cette derni\`ere norme n'est autre que la norme sur $\overline{\mathcal{O}(V)} \{ s\le |T|\le t\}$, par d\'efinition. L'expression du compl\'et\'e de $\mathcal{O}(\overline{C}_{V}(s,t))$
s'en d\'eduit.
\end{proof}
\index{Disque!algebre@alg\`ebre d'un|)}
\index{Couronne!algebre@alg\`ebre d'une|)}
\subsection{Normes sur les domaines polynomiaux}
Dans cette section, nous d\'emontrons un r\'esultat technique sur la norme des fonctions sur les domaines polynomiaux.
\begin{lemm}\label{lem:minorationetaar}
Soit $(k,\va)$ un corps valu\'e ultram\'etrique complet.
Soient $\alpha\in k$ et $r \in \ensuremath{\mathbf{R}}_{>0}$. Pour tout $Q = \sum_{j=0}^d b_{j} T^j \in k[T]$, on a
\[\max_{0\le j\le d} (|b_{j}|) \le \max\left(\frac{|\alpha|}{r}, \frac1 r, 1\right)^d \, |Q(\eta_{\alpha,r})| .\]
\end{lemm}
\begin{proof}
Traitons tout d'abord le cas d'un polyn\^ome de la forme $T-b$ avec $b\in k$. On a
\begin{align*}
\max(|b|, 1) &\le \max (|\alpha|, |b-\alpha|, 1)\\
& \le\max\left(\frac{|\alpha|}{r}, \frac1 r, 1\right) \cdot \max(|b-\alpha|,r)\\
& \le \max\left(\frac{|\alpha|}{r}, \frac1 r, 1\right) \cdot |(T-b)(\eta_{\alpha,r})|.
\end{align*}
Posons $C := \max(\frac{|\alpha|}{r}, \frac1 r, 1)$.
Soit $Q = \sum_{j=0}^d b_{j} T^j \in k[T]$. Pour d\'emontrer l'in\'egalit\'e souhait\'ee, on peut remplacer~$k$ par une extension valu\'ee compl\`ete et donc supposer que~$k$ est alg\'ebriquement clos. On peut \'egalement supposer que $b_{d}\ne 0$ (car $C\ge 1$). Il existe alors $\beta_{1},\dotsc,\beta_{d} \in k$ tels que
\[Q = b_{d} \prod_{i=1}^d (T-\beta_{i}).\]
Il d\'ecoule alors des relations coefficients/racines et de l'in\'egalit\'e triangulaire ultram\'etrique que l'on a
\[ \max_{0\le j\le d} (|b_{j}|) \le |b_{d}|\, \max_{I \subset \cn{1}{d}} \big( \prod_{i\in I} |\beta_{i}| \big) \le |b_{d}|\, \prod_{i=1}^d \max(|\beta_{i}|, 1).\]
En utilisant l'in\'egalit\'e d\'emontr\'ee au d\'ebut de la preuve, on obtient
\[ \max_{0\le j\le d} (|b_{j}|) \le |b_{d}| C^d \prod_{i=1}^d |(T-\beta_{i})(\eta_{\alpha,r})| \le C^d |Q(\eta_{\alpha,r})|.\]
\end{proof}
\begin{prop}\label{prop:minorationDPsk}\index{Domaine polynomial!norme sur un|(}\index{Norme!sur un domaine polynomial|see{Domaine polynomial}}
Soit $(k,\va)$ un corps valu\'e ultram\'etrique complet. Soient $p\in \ensuremath{\mathbf{N}}_{\ge 1}$ et $P = \sum_{i=0}^p a_{i} T^i \in k[T]$ un polyn\^ome de degr\'e~$p$ diff\'erent de~$T^p$. Soit $s \in \ensuremath{\mathbf{R}}_{>0}$. Posons
\[\sigma := \max_{0\le i\le p-1} \left(\big| \frac{a_{i}}{a_{p}}\big|^{\frac{1}{p-i}}\right) >0 \]
et $\rho := \min ( \sigma , s \,\sigma^{1-p})$. Pour tout $Q = \sum_{j=0}^d b_{j} T^j \in k[T]$, nous avons
\[\max_{0\le j\le d} (|b_{j}|) \le \max\left(\frac{\sigma}{\rho}, \frac1 \rho\right)^d \, \|Q\|_{\overline{D}(P;s)}.\]
\end{prop}
\begin{proof}
On peut supposer que~$k$ est alg\'ebriquement clos. On peut \'egalement supposer que~$P$ est unitaire. Il existe alors $\alpha_{1},\dotsc,\alpha_{p} \in k$ tels que
\[P = \prod_{i=1}^p (T-\alpha_{i}).\]
Quitte \`a r\'eordonner les~$\alpha_{i}$, on peut supposer que $|\alpha_{p}| = \max_{1\le i\le p} (|\alpha_{i}|)$. D'apr\`es \cite[proposition~3.1.2/1]{BGR} (ou la th\'eorie des polygones de Newton), on a
\[|\alpha_{p}| = \max_{0\le i\le p-1} (|a_{i}|^{1/(p-i)}) = \sigma > 0.\]
Pour tout $r \in (0,|\alpha_{p}|]$, on a
\[ |P(\eta_{\alpha_{p},r})| = r \, \prod_{i=1}^{p-1} \max(|\alpha_{i} - \alpha_{p}|, r) \le r |\alpha_{p}|^{p-1}. \]
En particulier, pour $\rho = \min ( |\alpha_{p}| , s |\alpha_{p}|^{1-p})$, on a $\eta_{\alpha_{p},\rho} \in \overline{D}(P;s)$ et donc $ |Q(\eta_{\alpha_{p},\rho})| \le \|Q\|_{\overline{D}(P;s)}$.
D'apr\`es le lemme~\ref{lem:minorationetaar}, on a
\[ \max_{0\le j\le d} (|b_{j}|) \le \max\left(\frac{|\alpha_{p}|}{\rho}, \frac1 {\rho}, 1\right)^d \, |Q(\eta_{\alpha_{p},\rho})|\]
et le r\'esultat s'ensuit.
\end{proof}
\begin{coro}\label{coro:minorationDPsA}
Supposons que~$\mathcal{A}$ est uniforme.
Soient $p\in \ensuremath{\mathbf{N}}_{\ge 1}$ et $P = \sum_{i=0}^p a_{i} T^i \in \mathcal{A}[T]$ tel que, pour tout $b\in \mathcal{M}(\mathcal{A})$, $P(b) \in \mathcal{H}(b)[T]$ soit un polyn\^ome de degr\'e~$p$ diff\'erent de~$T^p$. Soit $s \in \ensuremath{\mathbf{R}}_{>0}$. Alors, il existe $K_{P} \in \ensuremath{\mathbf{R}}_{>0}$ tel que, pour tout $Q = \sum_{j=0}^d b_{j} T^j \in \mathcal{A}[T]$, on ait
\[\max_{0\le j\le d} (\|b_{j}\|) \le K_{P}^d \, \|Q\|_{\overline{D}(P;s)}.\]
\qed
\end{coro}
\begin{rema}\label{rem:Tn}
Si~$P=T^p$, on a $\overline{D}(T^p;s) = \overline{D}(s^{1/p})$ et le r\'esultat reste valable d'apr\`es la proposition~\ref{prop:restrictionserie}.
\end{rema}
\index{Domaine polynomial!norme sur un|)}
\section{Bases de voisinages}
\label{description_locale}
Soit~$\mathcal{A}$ un anneau de Banach. Posons $B :=\mathcal{M}(\mathcal{A})$ et $X := \ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}}$, avec coordonn\'ee~$T$. Notons $\pi \colon X \to B$ la projection sur la base. Dans cette section, nous donnons une description pr\'ecise de bases de voisinages des points de~$X$.
\medbreak
Commen\c{c}ons par rappeler des r\'esultats connus sur les bases de voisinages des points de la droite analytique sur un corps valu\'e ultram\'etrique complet.
\begin{defi}\label{def:rigidedroite}\index{Point!rigide|textbf}\index{Point!rigide!polynome minimal@polyn\^ome minimal|textbf}\index{Point!rigide!degre@degr\'e|textbf}
\nomenclature[Jq]{$\mu_{x}$}{polyn\^ome minimal d'un point rigide~$x$ de~$\E{1}{k}$ (dans $k[T]$)}
\nomenclature[Jqa]{$\deg(x)$}{degr\'e de $\mu_{x}$}
Soit $(k,\va)$ un corps valu\'e complet. On dit qu'un point~$x$ de~$\E{1}{k}$ est \emph{rigide} si l'extension $\mathcal{H}(x)/k$ est finie.
Dans ce cas, il existe un unique polyn\^ome irr\'eductible unitaire \`a coefficients dans~$k$ qui s'annule en~$x$. On l'appelle \emph{polyn\^ome minimal} de~$x$ et on le note~$\mu_{x}$.
On appelle \emph{degr\'e} de~$x$ son degr\'e et on le note~$\deg(x)$.
\end{defi}
\begin{lemm}\label{lem:bv23}\index{Voisinage!d'un point de type~2 ou~3}
Soit $(k,\va)$ un corps valu\'e ultram\'etrique complet. Soit~$x$ un point de type~2 ou~3 de~$\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}}$. Notons~$\mathcal{E}_{x}$ l'ensemble des composantes connexes relativement compactes de $\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}}\setminus\{x\}$. Pour tout \'el\'ement~$C$ de~$\mathcal{E}_{x}$, choisissons un point rigide~$x_{C}$ dans~$C$.
Pour toute partie finie~$F$ de~$\mathcal{E}_{x}$, posons $P_{F} := \prod_{C\in F} \mu_{x_C}$.
Alors, les ensembles de la forme
\[\overline{C}(P_{F},|P_{F}(x)| -\ensuremath{\varepsilon}, |P_{F}(x)| +\ensuremath{\varepsilon}),\]
o\`u~$F$ est un sous-ensemble fini de~$\mathcal{E}_{x}$ et~$\ensuremath{\varepsilon}$ un nombre r\'eel strictement positif,
sont une base de voisinages connexes de~$x$ dans~$\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}}$.
\end{lemm}
\begin{proof}
Le r\'esultat d\'ecoule du fait que tout voisinage de~$x$ contient enti\`erement toutes les composantes connexes de $\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}} \setminus \{x\}$ \`a l'exception \'eventuelle d'un nombre fini d'entre elles et d'arguments bas\'es sur le graphe de variation de~$|P_{F}|$.
\end{proof}
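Illustrons ce lemme dans le cas o\`u~$k$ est alg\'ebriquement clos et o\`u $x = \eta_{0,1}$ est le point de Gauss. Les \'el\'ements de~$\mathcal{E}_{x}$ sont alors les disques ouverts de rayon~1 centr\'es en les \'el\'ements de~$k$ de valeur absolue au plus~1, deux tels disques co\"incidant lorsque leurs centres ont m\^eme image dans le corps r\'esiduel. En choisissant pour~$x_{C}$ le centre~$\alpha_{C}$ de chaque disque, on a $\mu_{x_{C}} = T - \alpha_{C}$ et $|P_{F}(x)| = 1$, et l'on obtient comme base de voisinages de~$x$ les couronnes
\[\overline{C}\Bigl(\prod_{C \in F}(T-\alpha_{C}),\, 1-\ensuremath{\varepsilon},\, 1+\ensuremath{\varepsilon}\Bigr),\]
o\`u $F$ parcourt les parties finies de~$\mathcal{E}_{x}$ et $\ensuremath{\varepsilon} \in \intoo{0,1}$.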
\begin{lemm}\label{lem:bv14}\index{Voisinage!d'un point de type~1 ou~4}\index{Base de voisinages|see{Voisinage}}
Soit $(k,\va)$ un corps valu\'e ultram\'etrique complet. Tout point~$x$ de type~1 ou~4 de~$\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}}$ poss\`ede un syst\`eme fondamental de voisinages connexes de la forme
\[\overline{D}(P,|P(x)| + \ensuremath{\varepsilon}),\]
o\`u $P\in k[T]$ est un polyn\^ome irr\'eductible et~$\ensuremath{\varepsilon}$ un nombre r\'eel strictement positif.
\end{lemm}
\begin{proof}
Soit~$U$ un voisinage de~$x$ dans~$\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}}$.
Soit~$\bar k$ une cl\^oture alg\'ebrique de~$k$ et notons~$\hat{\bar{k}}$ son compl\'et\'e. Notons $\pi \colon \E{1}{\hat{\bar{k}}} \to \E{1}{k}$ le morphisme de projection. Soit $x' \in \pi^{-1}(x)$. Le point~$x'$ est un point de type~1 ou~4 de~$\E{1}{\hat{\bar{k}}}$. Puisque~$\hat{\bar{k}}$ est alg\'ebriquement clos, le singleton~$\{x'\}$ est l'intersection d'une famille de disques ferm\'es de rayons strictement positifs. On en d\'eduit qu'il existe $\alpha \in \hat{\bar{k}}$ et $r_{1}>0$ tels que $\overline{D}(\alpha,r_{1})$ contienne~$x'$ et soit contenu dans~$\pi^{-1}(U)$. Puisque~$\bar k$ est dense dans~$\hat{\bar{k}}$, on peut supposer que $\alpha\in \bar k$. Notons $P \in k[T]$ son polyn\^ome minimal. Il existe alors $s_{1}>0$ tel que $\pi(\overline{D}(\alpha,r_{1})) = \overline{D}(P,s_{1})$ et on a
\[ x \in \overline{D}(P,s_{1}) \subset U.\]
Le r\'esultat s'en d\'eduit, en remarquant que les ensembles de la forme~$\overline{D}(P,s_{1})$ sont toujours connexes (\cf~lemme~\ref{lem:DQsconnexe}).
\end{proof}
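Signalons, à titre d'illustration, le cas d'un point rationnel~$x$, associé à un élément $\alpha \in k$~: on peut alors prendre $P = T-\alpha$ et, puisque $|P(x)| = 0$, on retrouve les disques
\[\overline{D}(T-\alpha,\ensuremath{\varepsilon}) = \overline{D}(\alpha,\ensuremath{\varepsilon}), \quad \ensuremath{\varepsilon} \in \ensuremath{\mathbf{R}}_{>0},\]
comme base de voisinages de~$x$ dans~$\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}}$.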
Pour passer \`a des bases plus g\'en\'erales que des corps valu\'es, nous aurons besoin de quelques r\'esultats techniques.
\begin{nota}%
\nomenclature[Bia]{$\nm_{\infty}$}{norme $\infty$ sur $\mathcal{A}[S]$}
Soit $(\mathcal{C},\nm)$ un anneau de Banach. On note~$\nm_{\infty}$ la norme sur l'anneau des polyn\^omes~$\mathcal{C}[T]$ d\'efinie par
\[ \| P(T)\|_{\infty} = \max_{0\le i\le d} (\|a_{i}\|)\]
pour tout $P(T) = \sum_{i=0}^d a_{i}\, T^i \in \mathcal{C}[T]$.
\end{nota}
\begin{lemm}\label{lem:borneT}
Soit $(k,\va)$ un corps valu\'e complet. Soit $P \in k[T]$ unitaire de degr\'e $d\ge 1$. Alors, pour tout $x\in \ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}}$, on a
\[ |T(x)| \le |P(x)| + d\, \|P\|_{\infty}.\]
\end{lemm}
\begin{proof}
Soit $x\in \ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}}$. \'Ecrivons $P(T) = \sum_{i=0}^d a_{i}\, T^i \in k[T]$. Supposons, par l'absurde, que l'on ait $|T(x)| > |P(x)| + d\, \|P\|_{\infty}$.
Puisque $P$~est unitaire, on a, en particulier, $|T(x)| \ge 1$, d'o\`u
\[\big|\sum_{i=0}^{d-1} a_{i}(x) \, T(x)^i\big| \le d\, \|P\|_{\infty} \, |T(x)|^{d-1}.\]
On en d\'eduit que
\begin{align*}
|P(x)| &= \big| T(x)^d + \sum_{i=0}^{d-1} a_{i}(x) \, T(x)^i \big|\\
& \ge |T(x)|^d - d\, \|P\|_{\infty} \, |T(x)|^{d-1}\\
& \ge |T(x)|^{d-1} \, (|T(x)| - d\, \|P\|_{\infty})\\
& > |P(x)|,
\end{align*}
et l'on aboutit ainsi \`a la contradiction souhait\'ee.
\end{proof}
\begin{lemm}\label{lem:DQsconnexe}\index{Domaine polynomial!connexe}
Soit $(k,\va)$ un corps valu\'e complet. Soit~$Q_{0} \in k[T]$ un polyn\^ome de degr\'e $d\ge 1$ qui est une puissance d'un polyn\^ome irr\'eductible. Soit $r > 0$.
Alors, il existe $\ensuremath{\varepsilon}>0$ tel que, pour tout $Q \in k[T]$ unitaire de degr\'e~$d$ tel que $\|Q-Q_{0}\|_{\infty} \le \ensuremath{\varepsilon}$ et tout $s \ge r$, $\overline{D}(Q,s)$ soit connexe.
\end{lemm}
\begin{proof}
Soit $Q\in k[T]$. Le principe du maximum assure que toute composante connexe de $\overline{D}(Q,s)$ contient un z\'ero de~$Q$. Par cons\'equent, pour montrer que $\overline{D}(Q,s)$ est connexe, il suffit de montrer que l'ensemble des z\'eros de~$Q$ est contenu dans une partie connexe de~$\overline{D}(Q,s)$.
Par hypoth\`ese, le polyn\^ome~$Q_{0}$ poss\`ede un seul z\'ero (\'eventuellement multiple) dans~$\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}}$. Le raisonnement pr\'ec\'edent assure que, pour tout $t>0$, $\overline{D}(Q_{0},t)$ est connexe.
Soit $t \in \intoo{0,r}$. Par continuit\'e des racines, il existe $\ensuremath{\varepsilon}>0$ tel que, pour tout $Q \in k[T]$ unitaire de degr\'e~$d$ tel que $\|Q-Q_{0}\|_{\infty} \le \ensuremath{\varepsilon}$, tous les z\'eros de~$Q$ dans~$\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}}$ soient contenus dans $\overline{D}(Q_{0},t)$. Quitte \`a diminuer~$\ensuremath{\varepsilon}$, on peut, en outre, supposer que $\overline{D}(Q,r) \supset \overline{D}(Q_{0},t)$. La connexit\'e de $\overline{D}(Q,r)$, et de $\overline{D}(Q,s)$ pour tout $s\ge r$, s'ensuit.
\end{proof}
Venons-en maintenant aux descriptions annonc\'ees des bases de voisinages des points de $X = \E{1}{\mathcal{A}}$. Rappelons que, si $b$ est un point de $B = \mathcal{M}(\mathcal{A})$, la fibre $X_{b} := \pi^{-1}(b)$ s'identifie \`a~$\E{1}{\mathcal{H}(b)}$.
\begin{prop}\label{prop:basevoisdim1rigide}\index{Voisinage!d'un point rigide}
Soient $b\in B$, $x$ un point rigide de $X_{b}$ et $U$ un voisinage de~$x$ dans~$X$. Notons $\mu_{x} \in \mathcal{H}(b)[T]$ le polyn\^ome minimal de~$x$.
Soit~$\mathcal{V}_{b}$ une famille de voisinages compacts de~$b$ dans~$B$.
Alors, il existe un voisinage spectralement convexe~$V_{0}$ de~$b$ dans~$B$, un polyn\^ome~$P_{0}$ unitaire non constant \`a coefficients dans~$\mathcal{B}(V_{0})$
et des nombres r\'eels $s_{1},s_{2},\ensuremath{\varepsilon} \in \ensuremath{\mathbf{R}}_{> 0}$ avec $s_{1} < s_{2}$ v\'erifiant les propri\'et\'es suivantes~: pour tout \'el\'ement~$V$ de~$\mathcal{V}_{b}$ contenu dans~$V_{0}$, pour tout $P \in \mathcal{B}(V)[T]$ unitaire de m\^eme degr\'e que~$P_{0}$ tel que $\|P - P_{0}\|_{V,\infty} \le \ensuremath{\varepsilon}$ et pour tout $s \in [s_{1},s_{2}]$, on a
\[x \in \overline{D}_V(P,s) \subset U\]
et $\overline{D}_b(P,s)$ est connexe.
En outre,
\begin{enumerate}[i)]
\item si~$\mathcal{H}(b)$ est de valuation non triviale ou si~$\mu_{x}$ est s\'eparable, on peut imposer que $P_{0}(b)$ soit s\'eparable~;
\item s'il existe $\delta \ge 1$ tel que $\mu_{x}^\delta \in \kappa(b)[T]$, on peut choisir pour $P_{0}$ un relev\'e fix\'e de $\mu_{x}^\delta$ \`a~$\mathcal{O}_{B,b}[T]$.\footnote{Nous ne pr\'etendons pas qu'il soit possible d'imposer simultan\'ement les conclusions de~i) et~ii).}
\end{enumerate}
\end{prop}
\begin{proof}
On peut supposer que $U$ est ouvert. Soit $e \ge 1$. Posons $Q_{0} := \mu_{x}^e$. Le sous-ensemble de la fibre~$X_{b}$ d\'efini par l'\'equation $Q_{0} = 0$ est alors r\'eduit au point~$x$. Notons~$d$ le degr\'e de~$Q_{0}$. Posons $M := \|Q_{0}\|_{b,\infty}$ et $R := 1+d(M+1)$.
L'ensemble $\overline{D}_{b}(0,R) \setminus U$ est un compact sur lequel~$|Q_{0}|$ est born\'ee inf\'erieurement par une constante $m_{0}>0$.
Soit $m \in \intoo{0,\min(m_{0},1)}$. On d\'eduit du lemme~\ref{lem:borneT} que l'on a $\overline{D}_{b}(Q_{0},m) \subset U$.
Puisque~$\kappa(b)$ est dense dans~$\mathcal{H}(b)$, il existe un polyn\^ome~$P_{0}$ unitaire de degr\'e~$d$ \`a coefficients dans~$\mathcal{O}_{b}$ tel que l'on ait
\[ \|P_{0}(b) - Q_{0}\|_{b,\infty} < \min \Big( \frac 1 2, \frac m 5 \, \big(\sum_{i=0}^{d-1} R^i \big)^{-1} \Big).\]
On a alors $\|P_{0}\|_{b,\infty} \le M+1/2$ donc, d'apr\`es le lemme~\ref{lem:borneT}, $\overline{D}_{b}(P_{0},1) \subset \overline{D}_{b}(0,R)$. On en d\'eduit alors que $\overline{D}_{b}(P_{0}, 4m/5) \subset \overline{D}_{b}(Q_{0},m) \subset U$.
En outre, $\overline{D}_{b}(P_{0},m/5)$ contient~$x$.
Il existe un voisinage compact spectralement convexe~$W$ de~$b$ dans~$B$ tel que $P_{0}$ soit \`a coefficients dans~$\mathcal{B}(W)$ et satisfasse $\|P_{0}\|_{W,\infty} \le M+3/4$. L'ensemble $\overline{D}_{W}(P_{0}, 4m/5) \setminus U$ est compact et sa projection sur~$B$ est un compact~$L$ qui ne contient pas le point~$b$. Soit~$V_{0}$ un voisinage compact spectralement convexe de~$b$ dans~$B \setminus L$. Par construction, on a $\overline{D}_{V_{0}}(P_{0}, 4m/5) \subset U$.
Soit $\ensuremath{\varepsilon} \in \intoo{0,1/4}$ tel que $\ensuremath{\varepsilon} \sum_{i=0}^{d-1} R^i < m/5$. Soient~$V$ une partie compacte de~$V_{0}$ et $P \in \mathcal{B}(V)[T]$ un polyn\^ome unitaire de m\^eme degr\'e que~$P_{0}$ tel que $\|P - P_{0}\|_{V,\infty} \le \ensuremath{\varepsilon}$. D'apr\`es le lemme~\ref{lem:borneT}, on a alors $\overline{D}_{V}(P,1) \subset \overline{D}_{V}(0,R)$. Posons $s_{1} := 2m/5$ et $s_{2} := 3m/5$. Pour tout $s \in [s_{1},s_{2}]$, on a alors $\overline{D}_{V}(P, s) \subset \overline{D}_{V}(P_{0},4m/5) \subset U$ et $\overline{D}_{V}(P, s) \supset \overline{D}_{V}(P_{0}, m/5) \ni x$. La premi\`ere partie du r\'esultat s'ensuit. Le lemme~\ref{lem:DQsconnexe} (utilis\'e avec $k = \mathcal{H}(b)$ et $r=s_{1}$) permet d'assurer la connexit\'e de $\overline{D}_{b}(P, s)$, quitte \`a diminuer~$\ensuremath{\varepsilon}$.
\medbreak
Int\'eressons-nous maintenant aux conditions suppl\'ementaires que nous pouvons imposer.
$i$) Si $\mu_{x}$ est s\'eparable, on peut choisir~$Q_{0} = \mu_{x}$ et la condition est satisfaite. Si $\mathcal{H}(b)$ est de valuation non triviale, tout polyn\^ome \`a coefficients dans~$\mathcal{H}(b)$ peut \^etre approch\'e autant qu'on le souhaite par un polyn\^ome s\'eparable \`a coefficients dans~$\mathcal{H}(b)$, et m\^eme~$\kappa(b)$. Le r\'esultat s'en d\'eduit.
$ii$) S'il existe $\delta \ge 1$ tel que $\mu_{x}^\delta \in \kappa(b)[T]$, il suffit de choisir $e = \delta$ au d\'ebut de la preuve.
\end{proof}
\begin{prop}\label{prop:basevoisdim1}\index{Voisinage}
Soient $b\in B$, $x$ un point non rigide de~$X_{b}$ et $U$ un voisinage de~$x$ dans~$X$. Soit~$\mathcal{V}_{b}$ une famille de voisinages compacts de~$b$ dans~$B$. Alors, il existe un voisinage spectralement convexe~$V_{0}$ de~$b$ dans~$B$, un polyn\^ome~$P_{0}$ unitaire non constant \`a coefficients dans~$\mathcal{B}(V_{0})$ et des nombres r\'eels $r_{1},r_{2},s_{1},s_{2},\ensuremath{\varepsilon} \in \ensuremath{\mathbf{R}}_{> 0}$ avec $r_{2} < r_{1} < s_{1} < s_{2}$ v\'erifiant les propri\'et\'es suivantes~: pour tout \'el\'ement~$V$ de~$\mathcal{V}_{b}$ contenu dans~$V_{0}$, pour tout $P \in \mathcal{B}(V_{0})[T]$ tel que $\|P - P_{0}\|_{V_{0},\infty} \le \ensuremath{\varepsilon}$, pour tous $r,s \in \ensuremath{\mathbf{R}}_{> 0}$ tels que $r_{2} \le r \le r_{1}$ et $s_{1} \le s\le s_{2}$, on a
\[x \in \overline{C}_V(P,r,s) \subset U\]
et $\overline{C}_b(P,r,s)$ est connexe.
En outre, si~$\mathcal{H}(b)$ est de valuation non triviale, on peut imposer que~$P_{0}(b)$ soit s\'eparable.
\end{prop}
\begin{proof}
Remarquons que les hypoth\`eses entra\^inent que le point~$b$ est ultram\'etrique. Le r\'esultat d\'ecoule alors des lemmes~\ref{lem:bv23} et~\ref{lem:bv14}
appliqu\'es avec $k = \mathcal{H}(b)$ et d'arguments d'approximation similaires \`a ceux de la preuve de la proposition~\ref{prop:basevoisdim1rigide}. La d\'emonstration de la connexit\'e de~$\overline{C}_{b}(P,r,s)$ repose sur l'in\'egalit\'e triangulaire ultram\'etrique, qui assure que~$\overline{C}_{b}(P,r,s)$ reste inchang\'e si l'on remplace~$P$ par un polyn\^ome suffisamment proche.
\end{proof}
\begin{coro}[\protect{\cite[corollaire~6.8]{EtudeLocale}}]\label{cor:projectionouverte}\index{Projection!ouverte}
Soient $n,m \in \ensuremath{\mathbf{N}}$ avec $n\ge m$. La projection $\pi_{n,m} \colon \E{n}{\mathcal{A}} \to \E{m}{\mathcal{A}}$ sur les $m$ premi\`eres coordonn\'ees est ouverte.
\end{coro}
\begin{proof}
La remarque~\ref{spectralement_conv} permet de se ramener au cas o\`u~$m=0$ et~$n=1$. Le r\'esultat d\'ecoule alors des propositions~\ref{prop:basevoisdim1rigide} et~\ref{prop:basevoisdim1}.
\end{proof}
\section{R\'esultats locaux}\label{sec:resultatslocaux}
Soit~$\mathcal{A}$ un anneau de Banach et soit~$n\in \ensuremath{\mathbf{N}}$.
Dans cette section, nous rappelons quelques r\'esultats locaux sur~$\E{n}{\mathcal{A}}$ obtenus par le second auteur dans~\cite{EtudeLocale}.
Posons $B := \mathcal{M}(\mathcal{A})$.
\subsection{Th\'eor\`eme de division de Weierstra\ss}
En g\'eom\'etrie analytique classique, les th\'eor\`emes de division et de pr\'eparation de Weierstra\ss{} sont des outils fondamentaux. Ils interviennent notamment de fa\c{c}on essentielle dans la d\'emonstration de la coh\'erence du faisceau structural. Le second auteur a d\'emontr\'e des versions de ces th\'eor\`emes pour les espaces de Berkovich dans~\cite{EtudeLocale}.
Dans cette section, nous nous consacrons au th\'eor\`eme de division. Nous reviendrons plus tard sur le th\'eor\`eme de pr\'eparation (\cf{} th\'eor\`eme~\ref{thm:preparationW}). Commen\c{c}ons par rappeler quelques d\'efinitions.
\begin{defi}\index{Bord analytique|textbf}
Soit~$V$ un ensemble compact spectralement convexe de~$\E{n}{\mathcal{A}}$. On dit qu'une partie ferm\'ee~$\Gamma$ de~$V$ est un \emph{bord analytique} de~$V$ si, pour tout~$f\in\mathcal{B}(V)$, on a
\[\|f\|_\Gamma=\|f\|_V.\]
\end{defi}
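Par exemple, lorsque~$\mathcal{A}$ est un corps valu\'e ultram\'etrique complet~$k$ et $V = \overline{D}(r)$ est un disque ferm\'e de rayon $r>0$ de~$\E{1}{k}$, le principe du maximum assure que le singleton $\{\eta_{0,r}\}$ est un bord analytique de~$V$. Pour un polyn\^ome $f = \sum_{n} a_{n}\, T^n$, cela se lit directement sur la formule
\[\|f\|_{\overline{D}(r)} = \max_{n} \bigl(|a_{n}|\, r^n\bigr) = |f(\eta_{0,r})|.\]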
\begin{defi}\label{def:decent}\index{Point!ultrametrique typique@ultram\'etrique typique|textbf}\index{Point!decent@d\'ecent|textbf}\index{Partie!ultrametrique typique@ultram\'etrique typique|textbf}\index{Partie!decente@d\'ecente|textbf}
Un point $x$ de $\E{n}{\mathcal{A}}$ est dit \emph{ultram\'etrique typique} s'il appartient \`a l'int\'erieur de la partie ultram\'etrique de~$\E{n}{\mathcal{A}}$ et s'il poss\`ede une base de voisinages form\'ee d'ensembles compacts et spectralement convexes poss\'edant un bord analytique fini.
Un point $x$ de $\E{n}{\mathcal{A}}$ est dit \emph{d\'ecent} si l'une (au moins) des trois conditions suivantes est satisfaite~:
\begin{enumerate}[i)]
\item $\mathcal{H}(x)$ est de caract\'eristique nulle~;
\item $\mathcal{H}(x)$ est de valuation non triviale~;
\item $x$ est ultram\'etrique typique.
\end{enumerate}
Une partie de $\E{n}{\mathcal{A}}$ est dite ultram\'etrique typique (resp. d\'ecente) lorsque tous ses points le sont.
\end{defi}
\begin{prop}[\protect{\cite[proposition~6.10]{EtudeLocale}}]\label{prop:typiqueAn}
Soit~$b\in B$ un point ultram\'etrique typique (resp. d\'ecent). Alors, tout point de~$\E{n}{\mathcal{A}}$ situ\'e au-dessus de~$b$ est encore ultram\'etrique typique (resp. d\'ecent).
\qed
\end{prop}
Posons $X := \ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}}$, avec coordonn\'ee~$S$.
Notons $\pi \colon X \to B$ la projection sur la base. Rappelons que si $b$ est un point de~$B$, la fibre $X_{b} := \pi^{-1}(b)$ s'identifie \`a~$\E{1}{\mathcal{H}(b)}$.
Rappelons \'egalement que, si $x$ est un point rigide de~$X_{b}$, alors l'anneau local $\mathcal{O}_{X_{b},x}$ est un anneau de valuation discr\`ete d'uniformisante~$\mu_{x}$ (\cf~d\'efinition~\ref{def:rigidedroite} pour la notation).
\begin{theo}[Division de Weierstra\ss, \protect{\cite[th\'eor\`eme~8.3]{EtudeLocale}}]\label{weierstrass}\index{Theoreme@Th\'eor\`eme!de division de Weierstra\ss}
Soit $b$ un point de~$B$.
Soit~$x$ un point rigide de~$X_{b}$. Si~$\mu_{x}$ est ins\'eparable et si $b$ est trivialement valu\'e, supposons que~$b$ est ultram\'etrique typique.
Soit~$G$ un \'el\'ement de l'anneau local~$\mathcal{O}_{X,x}$. Supposons que son image dans l'anneau de valuation discr\`ete~$\mathcal{O}_{X_{b},x}$ n'est pas nulle et notons~$n$ sa valuation.
Alors, pour tout~$F\in\mathcal{O}_{X,x}$, il existe un unique couple~$(Q,R)\in \mathcal{O}_{X,x}^2$ tel que
\begin{enumerate}[i)]
\item $F=QG+R$ ;
\item $R\in\mathcal{O}_{B,b}[S]$ est un polyn\^ome de degr\'e strictement inf\'erieur \`a~$n\deg(x)$.
\end{enumerate}
\qed
\end{theo}
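Donnons un exemple très simple pour fixer les idées. Si $x$ est le point de~$X_{b}$ défini par $S = 0$, de sorte que $\mu_{x} = S$ et $\deg(x) = 1$, et si $G = S$, le théorème affirme que tout $F \in \mathcal{O}_{X,x}$ s'écrit de manière unique sous la forme
\[F = Q\, S + r, \quad \textrm{avec } Q \in \mathcal{O}_{X,x} \textrm{ et } r \in \mathcal{O}_{B,b},\]
autrement dit que l'on dispose d'une décomposition $\mathcal{O}_{X,x} = \mathcal{O}_{B,b} \oplus S\,\mathcal{O}_{X,x}$ en tant que $\mathcal{O}_{B,b}$-modules.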
On peut \'etendre ce th\'eor\`eme de fa\c con \`a diviser simultan\'ement en plusieurs points de la fibre. On renvoie \`a la preuve de \cite[th\'eor\`eme~5.5.3]{A1Z} pour les d\'etails\footnote{Dans cette r\'ef\'erence, le polyn\^ome~$G$ est suppos\'e d'une forme particuli\`ere, mais cela n'est pas utilis\'e dans la preuve. La condition~$(I_{G})$ peut \^etre remplac\'ee par la condition~$(D_{G})$ et \cite[corollaire~5.4]{EtudeLocale} assure qu'elle est toujours satisfaite. La condition~$(S)$ n'est pr\'esente que pour assurer que le th\'eor\`eme de division de Weierstra\ss{} s'applique.}.
\begin{coro}\label{cor:weierstrassgeneralise}\index{Theoreme@Th\'eor\`eme!de division de Weierstra\ss!g\'en\'eralis\'e}
Soit $b$ un point d\'ecent de~$B$. Soit~$G$ un polyn\^ome unitaire de degr\'e~$d$ \`a coefficients dans~$\mathcal{O}_{B,b}$. Notons~$\{z_1,\dotsc,z_{t}\}$ l'ensemble des z\'eros de~$G$ dans~$X_{b}$.
Alors, pour tout~$(f_1,\dotsc,f_t)\in\prod_{i=1}^t \mathcal{O}_{X,z_i}$, il existe un unique \'el\'ement~$(r,q_1,\dotsc,q_t)$ de $\mathcal{O}_{B,b}[S]\times\prod_{i=1}^t\mathcal{O}_{X,z_i}$ v\'erifiant les propri\'et\'es suivantes~:
\begin{enumerate}[i)]
\item pour tout~$i\in\cn{1}{t}$, on a~$f_i = q_i G + r$ dans~$\mathcal{O}_{X,z_i}$~;
\item le polyn\^ome~$r$ est de degr\'e strictement inf\'erieur \`a~$d$.
\end{enumerate}
\qed
\end{coro}
On d\'emontrera plus loin une version raffin\'ee de cet \'enonc\'e permettant de contr\^oler les normes du reste et du quotient (\cf~th\'eor\`eme~\ref{weierstrassam} et corollaire~\ref{cor:divisionGnormes}).
\medbreak
Le th\'eor\`eme de division de Weierstra\ss{} a de nombreuses cons\'equences importantes, par exemple pour l'\'etude des morphismes finis. En voici une. Soit $P(S) \in \mathcal{A}[S]$ unitaire non constant. Le morphisme naturel
\[\mathcal{A}[T] \to \mathcal{A}[T,S]/(P(S) -T) \xrightarrow[]{\sim} \mathcal{A}[S]\]
induit un morphisme $\varphi_{P} \colon \ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}} \to \ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}}$, o\`u nous notons~$S$ (resp.~$T$) la coordonn\'ee sur l'espace de d\'epart (resp. d'arriv\'ee). Soit~$V$ une partie de~$B$. Soient $s,t\in\ensuremath{\mathbf{R}}$ tels que $0=s<t$ ou $0<s\le t$. Nous avons alors $\varphi_{P}^{-1}(\overline{C}_{V}(s,t)) = \overline{C}_{V}(P,s,t)$.
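Par exemple, pour $P(S) = S^2$, la multiplicativité des semi-normes associées aux points donne
\[\varphi_{P}^{-1}(\overline{C}_{V}(s,t)) = \overline{C}_{V}(S^2,s,t) = \overline{C}_{V}(s^{1/2},t^{1/2}),\]
ce qui est cohérent avec la remarque~\ref{rem:Tn}.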
\begin{theo}[\protect{\cite[th\'eor\`eme~8.8]{EtudeLocale}}]\label{thm:isolemniscate}\index{Domaine polynomial!algebre@alg\`ebre d'un|(}
Dans la situation pr\'ec\'edente, si $V$~est d\'ecente, alors le morphisme naturel
\[\mathcal{O}(\overline{C}_{V}(s,t))[S]/(P(S)-T) \to \mathcal{O}(\overline{C}_{V}(P,s,t))\]
est un isomorphisme.
\qed
\end{theo}
On en d\'eduit une version du corollaire~\ref{cor:couronneglobale} pour les domaines polynomiaux.
\begin{coro}\label{cor:lemniscateglobale}
Dans la situation pr\'ec\'edente, si~$V$ est ultram\'etrique et d\'ecente, alors, le morphisme naturel
\[\colim_{W\supset V,u\prec s \le t < v} \mathcal{O}(W)\ensuremath{\langle} u\le |T|\le v\ensuremath{\rangle}[S]/(P(S)-T) \to \mathcal{O}(\overline{C}_{V}(P,s,t)),\]
o\`u~$W$ d\'ecrit l'ensemble des voisinages compacts de~$V$ dans~$B$ sur lesquels les coefficients de~$P$ sont d\'efinis, est un isomorphisme.
\qed
\end{coro}
On va maintenant modifier l'\'enonc\'e pr\'ec\'edent de fa\c{c}on \`a faire intervenir des anneaux de Banach dans la limite inductive, ce qui se r\'ev\`elera utile par la suite.
\begin{lemm}\label{lem:isoBVcompacts}\index{Fonction!surconvergente}
Soient~$V$ une partie compacte de~$X$ et~$\mathcal{V}$ une base de voisinages compacts de~$V$ dans~$X$. Alors, pour tout $W\in\mathcal{V}$, le morphisme de restriction induit un morphisme
\[\varphi_{W} \colon \overline{\mathcal{O}(W)} \to \mathcal{O}(V).\]
Le morphisme naturel
\[\colim_{W\in \mathcal{V}} \overline{\mathcal{O}(W)} \to \mathcal{O}(V)\]
qui s'en d\'eduit est un isomorphisme.
\end{lemm}
\begin{proof}
Par d\'efinition, les sections du faisceau structural sur un ouvert~$U$ sont localement des limites uniformes de fractions rationnelles sans p\^oles. On en d\'eduit que le morphisme $\varphi_{W} \colon \overline{\mathcal{O}(W)} \to \mathcal{O}(V)$ est bien d\'efini. Le r\'esultat final est imm\'ediat.
\end{proof}
\begin{coro}\label{cor:lemniscateglobaleBanach}
Dans la situation pr\'ec\'edente, si~$V$ est ultram\'etrique et d\'ecente, alors, le morphisme naturel
\[\colim_{W\supset V,u\prec s \le t < v} \overline{\mathcal{O}(W)}\ensuremath{\langle} u\le |T|\le v\ensuremath{\rangle}[S]/(P(S)-T) \to \mathcal{O}(\overline{C}_{V}(P,s,t)),\]
o\`u~$W$ d\'ecrit l'ensemble des voisinages compacts de~$V$ dans~$B$ sur lesquels les coefficients de~$P$ sont d\'efinis, est un isomorphisme.
\qed
\end{coro}
\index{Domaine polynomial!algebre@alg\`ebre d'un|)}
\subsection{Changement de variables}
\label{comparaison_entre_les_normes}
Dans cette section, nous d\'emontrons un r\'esultat de changement de variables tr\`es utile en pratique. \'Etant donn\'e une fonction non nulle sur un espace affine, il permet d'assurer que celle-ci reste non nulle lorsque l'on sp\'ecialise toutes les variables sauf une, ce qui permet notamment d'appliquer le th\'eor\`eme de division de Weierstra\ss~\ref{weierstrass}. Dans le cadre des espaces sur un corps valu\'e ultram\'etrique complet, on retrouve le fait qu'une fonction non nulle peut \^etre transform\'ee en une fonction distingu\'ee par rapport \`a une variable, par le biais d'un automorphisme (\cf~\cite[proposition~5.2.4/1]{BGR}). Notre r\'esultat figure d\'ej\`a dans \cite[lemme~9.15]{EtudeLocale}, mais l'auteur n'en a r\'edig\'e qu'une preuve succincte. Nous en proposons ici une d\'emonstration compl\`ete et diff\'erente.
\medbreak
Pour $n\in \ensuremath{\mathbf{N}}$, notons $T_{1},\dotsc,T_{n}$ les coordonn\'ees de~$\E{n}{\mathcal{A}}$.
Pour $\ensuremath{{\boldsymbol{u}}} = (u_1,\dotsc,u_{n-1}) \in(\ensuremath{\mathbf{N}}_{\ge 1})^{n-1}$, notons $\psi_{\ensuremath{{\boldsymbol{u}}}} \colon \E{n}{\mathcal{A}} \to \E{n}{\mathcal{A}}$ l'automorphisme d'espaces localement annel\'es induit par le changement de variables
\[\left\lbrace
\begin{array}{rcl}
T_1 &\longmapsto& T_1+T_n^{u_1}~;\\
&\vdots&\\
T_{n-1}&\longmapsto &T_{n-1}+T_n^{u_{n-1}}~;\\
T_n&\longmapsto& T_n.
\end{array}
\right.\]
\begin{lemm}\label{changement_variable}\index{Changement de variables}
Soit~$b$ un point de~$B$. Soient~$n\in\ensuremath{\mathbf{N}}$ et $x$ un point rigide de~$\pi_{n}^{-1}(b)$. Soit~$f$ un \'el\'ement de~$\mathcal{O}_{\E{n}{\mathcal{A}},x}$ dont la restriction \`a~$\pi_{n}^{-1}(b)$ n'est pas nulle. Alors, il existe $\ensuremath{{\boldsymbol{u}}} \in (\ensuremath{\mathbf{N}}_{\ge 1})^{n-1}$ tel que la restriction de $\psi_{\ensuremath{{\boldsymbol{u}}}}^\#(f)$ \`a $\pi_{n,n-1}^{-1}(\pi_{n,n-1}(\psi_{\ensuremath{{\boldsymbol{u}}}}^{-1}(x)))$ ne soit pas nulle.
\end{lemm}
\begin{proof}
Puisque le r\'esultat ne concerne que la restriction de la fonction~$f$ \`a~$\pi_{n}^{-1}(b)$, nous pouvons remplacer~$\mathcal{A}$ par~$\mathcal{H}(b)$, et donc supposer que~$\mathcal{A}$ est un corps valu\'e complet. Nous le noterons d\'esormais~$K$.
Remarquons qu'il suffit de d\'emontrer le r\'esultat apr\`es extension des scalaires. Nous pouvons donc supposer que~$K$ est alg\'ebriquement clos. Le point~$x$ s'identifie alors \`a un \'el\'ement $(\alpha_{1},\dotsc,\alpha_{n})$ de~$K^n$.
Pour tout $n\in \ensuremath{\mathbf{N}}$, d\'efinissons maintenant un id\'eal~$I_{n}$ de~$\mathcal{O}_{\E{n}{K},x}$. Posons $I_{0} := (0)$, $I_{1} := (0)$ et, pour tout $n\ge 2$,
\[I_{n} := \bigcap_{\ensuremath{{\boldsymbol{u}}} \in (\ensuremath{\mathbf{N}}_{\ge 1})^{n-1}} \big(T_1-\alpha_1-T_n^{u_{1}}+\alpha_{n}^{u_1},\dotsc,T_{n-1}-\alpha_{n-1}-T_n^{u_{n-1}} + \alpha_{n}^{u_{n-1}}\big).\]
Rappelons que, d'apr\`es la proposition~\ref{prop:disqueglobal}, l'anneau local~$\mathcal{O}_{\E{n}{K},x}$ s'identifie \`a un sous-anneau de $K\llbracket T_1-\alpha_1,\dotsc,T_n - \alpha_{n}\rrbracket$.
Nous allons montrer que $I_{n} = (0)$ par r\'ecurrence sur~$n$. Si $n$ vaut~0 ou~1, le r\'esultat vaut par d\'efinition. Soit $n\in \ensuremath{\mathbf{N}}$ avec $n\ge 2$ et supposons avoir d\'emontr\'e que $I_{n-1} = (0)$.
Fixons~$u_{n-1}\in\ensuremath{\mathbf{N}}_{\ge 1}$. Puisque $T_{n} - \alpha_{n}$ divise $T_{n}^{u_{n-1}} - \alpha_{n}^{u_{n-1}}$, on a un isomorphisme naturel
\begin{align*}
& K\llbracket T_1-\alpha_1,\dotsc,T_n-\alpha_n\rrbracket /(T_{n-1}-\alpha_{n-1} - T_{n}^{u_{n-1}} + \alpha_{n}^{u_{n-1}})\\
\simeq \ & K\llbracket T_1-\alpha_1,\dotsc,T_{n-2}-\alpha_{n-2},T_{n}-\alpha_{n}\rrbracket.
\end{align*}
L'image de~$I_{n}$ par cet isomorphisme s'identifie \`a~$I_{n-1}$ (en effectuant le changement de variable $T_{n} \mapsto T_{n-1} - \alpha_{n-1} +\alpha_{n}$), et est donc nulle, par hypoth\`ese de r\'ecurrence. On en d\'eduit que~$I_{n}$ est contenu dans l'id\'eal de $ K\llbracket T_1-\alpha_1,\dotsc,T_n-\alpha_n\rrbracket$ engendr\'e par $T_{n-1}-\alpha_{n-1} - T_{n}^{u_{n-1}} + \alpha_{n}^{u_{n-1}}$.
Puisque le r\'esultat pr\'ec\'edent vaut pour tout $u_{n-1} \in \ensuremath{\mathbf{N}}_{\ge 1}$, que tous les $T_{n-1}-\alpha_{n-1}-T_{n}^{u_{n-1}} + \alpha_{n}^{u_{n-1}}$ sont irr\'eductibles et non associ\'es, et que $K\llbracket T_1-\alpha_1,\dotsc,T_n-\alpha_n\rrbracket$ est factoriel, on en d\'eduit que $I_{n}=0$.
On peut maintenant conclure. Puisque~$f$ n'est pas nulle, il existe $\ensuremath{{\boldsymbol{u}}} = (u_1,\dotsc,u_{n-1})\in (\ensuremath{\mathbf{N}}_{\ge 1})^{n-1}$ tel que~$f$ n'appartienne pas \`a l'id\'eal
\[(T_1-\alpha_1- T_n^{u_{1}} + \alpha_{n}^{u_1},\dotsc,T_{n-1}-\alpha_{n-1}- T_{n}^{u_{n-1}} + \alpha_{n}^{u_{n-1}})\]
de $\mathcal{O}_{\E{n}{K},x}$. On en d\'eduit que $\psi_{\ensuremath{{\boldsymbol{u}}}}^\sharp(f)$
n'appartient pas \`a l'id\'eal
\[(T_1-\alpha_1 + \alpha_{n}^{u_1},\dotsc,T_{n-1}-\alpha_{n-1} + \alpha_{n}^{u_{n-1}})\]
de $\mathcal{O}_{\E{n}{K},\psi_{\ensuremath{{\boldsymbol{u}}}}^{-1}(x)}$, ce qui signifie exactement que la restriction de~$\psi_{\ensuremath{{\boldsymbol{u}}}}^\sharp(f)$ \`a la fibre $\pi_{n,n-1}^{-1}(\pi_{n,n-1}(\psi_{\ensuremath{{\boldsymbol{u}}}}^{-1}(x)))$ n'est pas nulle au voisinage de~$\psi_{\ensuremath{{\boldsymbol{u}}}}^{-1}(x)$.
\end{proof}
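Illustrons le lemme sur un exemple élémentaire. Supposons que~$\mathcal{A}$ soit un corps valué complet, que $n = 2$ et que~$x$ soit le point rigide de coordonnées $(0,0)$. La fonction $f = T_{1}$ n'est pas nulle, mais s'annule identiquement sur la fibre $\pi_{2,1}^{-1}(\pi_{2,1}(x))$, d'équation $T_{1} = 0$. En revanche, pour tout $u_{1}\ge 1$, on a
\[\psi_{(u_{1})}^{\#}(f) = T_{1} + T_{2}^{u_{1}} \quad\textrm{et}\quad \psi_{(u_{1})}^{-1}(x) = x,\]
et la restriction de $\psi_{(u_{1})}^{\#}(f)$ à la fibre $\pi_{2,1}^{-1}(\pi_{2,1}(x))$ vaut $T_{2}^{u_{1}}$, qui n'est pas nulle.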
\subsection{Anneaux de Banach de base} Nous introduisons ici une classe d'anneaux de Banach sur lesquels nous travaillerons dans la suite.
Posons $X:= \E{n}{\mathcal{A}}$. Remarquons que, par d\'efinition du faisceau structural, pour tout point~$x$ de~$X$, on a un isomorphisme
\[\colim_{V \ni x} \mathcal{B}(V) \xrightarrow[]{\sim} \mathcal{O}_{x},\]
o\`u $V$ parcourt l'ensemble des voisinages compacts de~$x$.
\begin{defi}\index{B-definie@$\mathcal{B}$-d\'efinie|see{Fonction}}\index{Fonction!B-definie@$\mathcal{B}$-d\'efinie|textbf}
Soient~$x\in X$ et $V$ un voisinage compact de~$x$. On dit qu'un \'el\'ement~$f$ de~$\mathcal{O}_{x}$ est \emph{$\mathcal{B}$-d\'efini} sur~$V$ s'il appartient \`a l'image du morphisme naturel $\mathcal{B}(V) \to \mathcal{O}_{x}$.
\end{defi}
\begin{defi}\index{Base de voisinages!fine|textbf}
Soient~$T$ un espace topologique, $t$ un point de~$T$ et $\mathcal{U}$ une base de voisinages de~$t$. On dit que~$\mathcal{U}$ est une \emph{base de voisinages fine} si elle contient une base de voisinages de chacun de ses \'el\'ements.
\end{defi}
\begin{defi}[\protect{\cite[d\'efinition~9.1]{EtudeLocale}}]\index{Ideal@Id\'eal!B-fortement de type fini@$\mathcal{B}$-fortement de type fini|textbf}\index{Ideal@Id\'eal!$\mathcal{B}$-syst\`eme de g\'en\'erateurs forts|textbf}
Soient~$x$ un point de~$X$ et~$\mathcal{V}$ une base de voisinages fine de~$x$ form\'ee d'ensembles compacts et spectralement convexes. On dit qu'un id\'eal~$I$ de~$\mathcal{O}_{x}$ est \emph{$\mathcal{B}$-fortement de type fini} relativement \`a~$\mathcal{V}$ s'il existe~$f_1,\dotsc,f_p$ appartenant \`a~$I$ tels que
\begin{enumerate}[i)]
\item pour tous~$V\in\mathcal{V}$ et~$i\in \{1,\dotsc,p\}$, $f_i$ est~$\mathcal{B}$-d\'efini sur~$V$.
\item pour tout voisinage compact~$U$ de~$x$, il existe une famille $(K_{V,U})_{V\in\mathcal{V}}$ de~$\ensuremath{\mathbf{R}}_{>0}$ telle que, pour tout \'el\'ement~$f$ de~$I$ qui est $\mathcal{B}$-d\'efini sur~$U$ et tout \'el\'ement~$V$ de~$\mathcal{V}$ contenu dans~$\mathring U$, il existe des \'el\'ements~$a_1,\ldots,a_p$ de~$\mathcal{B}(V)$ tels que
\[\begin{cases}
f=a_1f_1+\cdots+a_pf_p \textrm{ dans }\mathcal{B}(V)~;\\
\forall i\in \{1,\dotsc,p\}, \|a_i\|_V\leq K_{V,U}\, \|f\|_U.
\end{cases}\]
\end{enumerate}
Une famille~$(f_1,\ldots,f_p)$ v\'erifiant les propri\'et\'es pr\'ec\'edentes est appel\'ee \emph{$\mathcal{B}$-syst\`eme de g\'en\'erateurs fort} de l'id\'eal~$I$ relativement \`a~$\mathcal{V}$. On dit \'egalement qu'elle \emph{engendre $\mathcal{B}$-fortement} l'id\'eal~$I$ relativement \`a~$\mathcal{V}$.
\end{defi}
\begin{defi}[\protect{\cite[d\'efinition~9.3]{EtudeLocale}}]\index{Anneau!fortement regulier@fortement r\'egulier|textbf}\index{Anneau!fortement de valuation discrete@fortement de valuation discr\`ete|textbf}\index{Corps!fort|textbf}
Soient~$x$ un point de~$X$ et~$\mathcal{V}$ une base de voisinages fine de~$x$ form\'ee d'ensembles compacts et spectralement convexes. Soit $d\in \ensuremath{\mathbf{N}}$. On dit que l'anneau local~$\mathcal{O}_{x}$ est \emph{fortement r\'egulier} de dimension~$d$ relativement \`a~$\mathcal{V}$ si
\begin{enumerate}[i)]
\item $\mathcal{O}_{x}$ est noeth\'erien de dimension de Krull~$d$~;
\item il existe des \'el\'ements $f_1,\dotsc,f_d$ de~$\ensuremath{\mathfrak{m}}_x$ tels que $(f_1,\ldots,f_d)$ engendre~$\mathcal{B}$-fortement l'id\'eal~$\ensuremath{\mathfrak{m}}_x$ relativement \`a~$\mathcal{V}$.
\end{enumerate}
On dit que l'anneau local~$\mathcal{O}_{x}$ est un \emph{corps fort} (resp. \emph{fortement de valuation discr\`ete}) s'il est fortement r\'egulier de dimension~0 (resp.~1).
\end{defi}
La propri\'et\'e d'\^etre un corps fort s'apparente \`a une propri\'et\'e de prolongement analytique. En effet, la d\'efinition requiert qu'une fonction $\mathcal{B}$-d\'efinie sur un voisinage compact~$U$ de~$x$ et nulle au voisinage de~$x$ soit nulle sur tout voisinage de~$x$ appartenant \`a~$\mathcal{V}$ et inclus dans~$\mathring U$. R\'eciproquement, si~$X$ satisfait le principe du prolongement analytique et si tous les voisinages appartenant \`a~$\mathcal{V}$ sont connexes, le fait que l'anneau~$\mathcal{O}_{x}$ soit un corps implique que c'est un corps fort pour la base de voisinages~$\mathcal{V}$.
\begin{defi}[\protect{\cite[d\'efinition~9.5]{EtudeLocale}}]\label{def:basique}\index{Anneau!de base|textbf}\index{Anneau!basique|see{de base}}
On dit que l'anneau de Banach~$\mathcal{A}$ est \emph{de base}
ou \emph{basique}
si tout point~$b$ de $B = \mathcal{M}(\mathcal{A})$ poss\`ede une base de voisinages compacts et spectralement convexes~$\mathcal{V}_b$ satisfaisant les propri\'et\'es suivantes~:
\begin{enumerate}[i)]
\item si $\mathcal{H}(b)$ est de caract\'eristique non nulle et trivialement valu\'e, alors tout \'el\'ement de~$\mathcal{V}_{b}$ est contenu dans $B_{\mathrm{um}}$ et poss\`ede un bord analytique fini~;
\item l'anneau local~$\mathcal{O}_{B,b}$ est un corps fort ou un anneau fortement de valuation discr\`ete relativement \`a~$\mathcal{V}_{b}$.
\end{enumerate}
La condition~i) entra\^ine que tout point de~$B$ est d\'ecent.
\end{defi}
\begin{exem}\label{ex:basique}\index{Anneau!de base}
\index{Corps!valu\'e}\index{Corps!hybride}\index{Anneau!des entiers relatifs $\ensuremath{\mathbf{Z}}$}\index{Anneau!des entiers d'un corps de nombres}\index{Anneau!de valuation discr\`ete}\index{Anneau!de Dedekind trivialement valu\'e}
Les anneaux donn\'es dans les exemples \ref{ex:corpsvalue} \`a~\ref{ex:Dedekind} plus haut (corps valu\'es, anneau~$\ensuremath{\mathbf{Z}}$ et anneaux d'entiers de corps de nombres, corps hybrides, anneaux de valuation discr\`ete, anneaux de Dedekind trivialement valu\'es) sont tous des anneaux de base. On renvoie \`a \cite[\S 3.1]{A1Z} pour des d\'etails dans le cas des anneaux d'entiers de corps de nombres.
\end{exem}
\subsection{Anneaux locaux et faisceau structural}
Dans cette section, nous \'enon\c cons les principaux r\'esultats obtenus dans~\cite{EtudeLocale}. Posons $X:= \E{n}{\mathcal{A}}$, avec coordonn\'ees $T_{1},\dotsc,T_{n}$.
\subsubsection{Types de points}
Il sera utile de distinguer diff\'erents types de points dans les espaces affines.
\begin{defi}\index{Point!rigide epais@rigide \'epais|textbf}\index{Point!rigide epais@rigide \'epais|(}\index{Point!localement transcendant|textbf}\index{Point!purement localement transcendant|textbf}
Soient $b$ un point de~$B$ et $x$ un point de~$X$ au-dessus de~$b$. On dit que le point~$x$ est \emph{rigide \'epais} si $T_1(x),\ldots,T_n(x)$ sont alg\'ebriques sur~$\kappa(b)$. Dans le cas contraire, on dit que le point~$x$ est \emph{localement transcendant}.
On dit que le point~$x$ est \emph{purement localement transcendant} si, pour tout $i \in \{1,\dotsc,n\}$, $\pi_{n,i}(x)$ est localement transcendant au-dessus de $\pi_{i,i-1}(\pi_{n,i}(x)) = \pi_{n,i-1}(x)$.
\end{defi}
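Par exemple, lorsque~$\mathcal{A}$ est un corps valu\'e ultram\'etrique complet~$k$ et $n = 1$ (on a alors $\kappa(b) = k$), les points rigides de~$\E{1}{k}$ sont rigides \'epais, tandis que le point de Gauss~$\eta_{0,1}$ est localement transcendant, puisque $T_{1}(\eta_{0,1})$ est transcendant sur~$k$.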
Dans~\cite[d\'efinition~8.1]{EtudeLocale}, un point est dit rigide \'epais lorsque~$\kappa(x)$ est une extension finie de~$\kappa(b)$, mais c'est en r\'ealit\'e la condition que nous avons \'enonc\'ee qui est utilis\'ee (sous la forme de l'existence d'un polyn\^ome non nul \`a coefficients dans~$\mathcal{O}_{b}$ dont l'image dans $\kappa(x)$ est nulle).
\begin{prop}\label{rigide_\'epais}
Soient $b$ un point d\'ecent de~$B$ et $x$ un point de~$X$ au-dessus de~$b$. Alors, $x$ est rigide \'epais au-dessus de~$b$ si, et seulement si, $\kappa(x)$ est une extension finie de~$\kappa(b)$.
En particulier, la notion de point rigide \'epais est ind\'ependante du choix des coordonn\'ees $T_{1},\dotsc,T_{n}$.
Soit $m\in \cn{0}{n}$. Alors $x$ est rigide \'epais au-dessus de~$b$ si, et seulement si, $x$ est rigide \'epais au-dessus de~$\pi_{n,m}(x)$ et $\pi_{n,m}(x)$~est rigide \'epais au-dessus de~$b$.
\end{prop}
\begin{proof}
Si $\kappa(x)/\kappa(b)$ est finie, alors $x$ est rigide \'epais. R\'eciproquement, supposons que~$x$ est rigide \'epais. Le cas $n=0$ est trivial et une r\'ecurrence permet ensuite de se ramener au cas $n=1$. Posons $T := T_{1}$. Par hypoth\`ese, il existe un polyn\^ome $G \in \mathcal{O}_{b}[T]$ non nul dont l'image dans~$\kappa(x)$ est nulle. Le th\'eor\`eme de division de Weierstra\ss~\ref{weierstrass} assure alors que~$\kappa(x)$ est engendr\'e, en tant que $\kappa(b)$-espace vectoriel, par un nombre fini de puissances de~$T(x)$. Le r\'esultat s'ensuit.
\end{proof}
\begin{rema}\index{Point!purement localement transcendant}
L'analogue de la proposition~\ref{rigide_\'epais} tombe en d\'efaut pour les points localement transcendants, m\^eme dans le cas o\`u l'anneau de base est un corps valu\'e. On renvoie \`a \cite[\S 5.2.7]{TemkinTranscendence} pour un contre-exemple d\^u \`a M.~Temkin, inspir\'e par un r\'esultat de M.~Matignon et M.~Reversat dans~\cite{MatignonReversatSousCorpsFermes}.
\end{rema}
\begin{rema}\label{rem:rigeptrans}\index{Point!rigide epais@rigide \'epais}\index{Point!purement localement transcendant}
Soient $b$ un point de~$B$ et $x$ un point de~$X$ au-dessus de~$b$.
Alors, quitte \`a permuter les coordonn\'ees $T_{1},\dotsc,T_{n}$, il existe $m_{x} \in\cn{0}{n}$ tel que les conditions suivantes soient satisfaites~:
\begin{enumerate}[i)]
\item $\pi_{n,m_{x}}(x)$ est purement localement transcendant au-dessus de~$\pi_{n}(x)$~;
\item $x$ est rigide \'epais au-dessus de~$\pi_{n,m_{x}}(x)$.
\end{enumerate}
\end{rema}
\subsubsection{Points rigides \'epais d'une droite relative}
Dans cette section, nous supposerons que $n=1$. Nous noterons $T=T_{1}$. Soient~$b$ un point de~$B$ et $x$ un point de~$X$ au-dessus de~$b$. Notons~$e$ l'exposant caract\'eristique de~$\kappa(b)$.
\begin{nota}
\index{Point!rigide epais@rigide \'epais!polynome minimal@polyn\^ome minimal|textbf}\index{Point!rigide epais@rigide \'epais!polynome minimal@polyn\^ome minimal|(}%
\nomenclature[Jr]{$\mu_{\kappa,x}$}{polyn\^ome minimal \'epais d'un point rigide \'epais~$x$ de $\E{1}{\mathcal{A}}$ (dans $\kappa(b)[T]$ si $x$ est au-dessus de $b \in \mathcal{M}(\mathcal{A})$)}
Supposons que~$x$ est un point rigide \'epais. On appelle \emph{polyn\^ome minimal \'epais} de~$x$ le polyn\^ome minimal de~$T(x)$ sur~$\kappa(b)$ et on le note $\mu_{\kappa,x}(T)$.
\end{nota}
Rappelons que l'on note $\mu_{x}(T)$ le polyn\^ome minimal de~$T(x)$ sur~$\mathcal{H}(b)$ (\cf~d\'efinition~\ref{def:rigidedroite}). Dans~\cite[d\'efinition~8.2]{EtudeLocale}, on affirme que $\mu_{\kappa,x} = \mu_{x}$, mais cet \'enonc\'e est incorrect.
En revanche, on peut d\'emontrer que le polyn\^ome minimal sur~$\kappa(b)$ est une puissance de celui sur~$\mathcal{H}(b)$.
\begin{lemm}\label{lem:PTp}
Soit~$K$ un corps de caract\'eristique~$p>0$. Soient $P \in K[X]$ un polyn\^ome irr\'eductible s\'eparable et $s\in \ensuremath{\mathbf{N}}$. Alors il existe un polyn\^ome irr\'eductible $Q \in K[X]$ et $r \in \ensuremath{\mathbf{N}}$ tels que
\[P(T^{p^s}) = Q(T)^{p^r}.\]
\end{lemm}
\begin{proof}
On peut supposer que~$P$ est unitaire. Soit~$\bar K$ une cl\^oture alg\'ebrique de~$K$. Par hypoth\`ese, il existe des \'el\'ements $\alpha_{1},\dotsc,\alpha_{d} \in \bar K$, deux \`a deux distincts, tels que l'on ait
\[P(T) = \prod_{i=1}^d (T-\alpha_{i}) \textrm{ dans } \bar K[T].\]
Pour tout $i \in \{1,\dotsc,d\}$, $\alpha_{i}$ poss\`ede une unique racine $p^s$-\`eme dans~$\bar K$. Notons-la~$\alpha_{i}^{1/p^s}$. On a alors
\[P(T^{p^s}) = \prod_{i=1}^d (T^{p^s}-\alpha_{i}) = \prod_{i=1}^d (T-\alpha_{i}^{1/p^s})^{p^s} \textrm{ dans } \bar K[T].\]
Notons~$Q(T)$ le polyn\^ome minimal de~$\alpha_{1}^{1/p^s}$ sur~$K$.
Soit~$R(T)$ un facteur irr\'eductible de~$P(T^{p^s})$ dans~$K[T]$. Il existe $i \in \{1,\dotsc,d\}$ tel que $R(\alpha_{i}^{1/p^s}) = 0$. En posant $S(T) = R(T^{p^s})$, on a $S(\alpha_{i}) = 0$. Puisque~$P(T)$ est irr\'eductible, $P(T)$ divise~$S(T)$ et on a donc $S(\alpha_{1})=0$. On en d\'eduit que $R(\alpha_{1}^{1/p^s})=0$, donc que $Q(T)$ divise $R(T)$, puis que $R(T) = Q(T)$.
On a montr\'e que tous les facteurs irr\'eductibles de~$P(T^{p^s})$ sont \'egaux \`a~$Q(T)$. Par cons\'equent, il existe $m\in \ensuremath{\mathbf{N}}$ tel que $P(T^{p^s}) = Q(T)^m$. En consid\'erant les d\'ecompositions des polyn\^omes dans~$\bar K[T]$, on montre que~$m$ est n\'ecessairement une puissance de~$p$.
\end{proof}
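Donnons deux exemples, avec $K = \mathbf{F}_{p}(t)$ et $s = 1$, montrant que les deux cas $r = 0$ et $r > 0$ se produisent. Pour $P(X) = X - t$, le polyn\^ome
\[P(T^{p}) = T^{p} - t\]
est irr\'eductible, et l'on a donc $Q = T^{p}-t$ et $r = 0$. Pour $P(X) = X - t^{p}$, on a
\[P(T^{p}) = T^{p} - t^{p} = (T-t)^{p},\]
et l'on a donc $Q = T - t$ et $r = 1$.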
\begin{lemm}\label{lem:polmin}\index{Point!rigide!polynome minimal@polyn\^ome minimal}\index{Point!rigide epais@rigide \'epais!polynome minimal@polyn\^ome minimal}
Supposons que~$x$ est un point rigide \'epais. Alors il existe $r\in \ensuremath{\mathbf{N}}$ tel que
\[\mu_{\kappa,x}(T) = \mu_{x}(T)^{e^r}.\]
\end{lemm}
\begin{proof}
On suit ici la preuve de \cite[corollaire~5.4]{EtudeLocale}.
Il existe un polyn\^ome irr\'eductible et s\'eparable $P \in \kappa(b)[T]$ et un entier $s\in\ensuremath{\mathbf{N}}$ tels que $\mu_{\kappa,x}(T) = P(T^{e^s})$.
D'apr\`es~\cite[th\'eor\`eme~5.2]{EtudeLocale}, le corps~$\kappa(b)$ est hens\'elien, donc, d'apr\`es \cite[VI, \S 8, Exercices 14a et 12b]{BourbakiAC57} ou \cite[Proposition~2.4.1]{Ber2}, le polyn\^ome~$P$ reste irr\'eductible dans~$\mathcal{H}(b)[T]$. D'apr\`es le lemme~\ref{lem:PTp}, il existe un polyn\^ome irr\'eductible $Q \in \mathcal{H}(b)[T]$ et un entier $r \in \ensuremath{\mathbf{N}}$ tels que $P(T^{e^s}) = Q(T)^{e^r}$. Le r\'esultat s'ensuit.
\end{proof}
\begin{defi}\index{Point!rigide epais@rigide \'epais!multiplicite@multiplicit\'e|textbf}
Avec les notations du lemme~\ref{lem:polmin}, on appelle \emph{multiplicit\'e du point~$x$} l'entier
\[\delta(x) := e^r \in\ensuremath{\mathbf{N}}^{\ast}.\]
\end{defi}
\index{Point!rigide epais@rigide \'epais!polynome minimal@polyn\^ome minimal|)}
\index{Point!rigide epais@rigide \'epais|)}
Nous pouvons maintenant \'enoncer un th\'eor\`eme de pr\'eparation de Weierstra\ss{} g\'en\'eralisant \cite[th\'eor\`eme~8.6]{EtudeLocale} et valable pour tout point rigide \'epais d'une droite relative.
\begin{theo}[Pr\'eparation de Weierstra\ss]\label{thm:preparationW}\index{Theoreme@Th\'eor\`eme!de pr\'eparation de Weierstra\ss}
Supposons que~$x$ est un point rigide \'epais. Si~$\mu_{x}$ est ins\'eparable et si $b$ est trivialement valu\'e, supposons que~$b$ est ultram\'etrique typique. Soit~$G$ un \'el\'ement de l'anneau local $\mathcal{O}_{X,x}$. Supposons que son image dans~$\mathcal{O}_{X_{b},x}$ n'est pas nulle et notons~$n$ sa valuation $\mu_{x}$-adique. Alors $n$ est multiple de~$\delta(x)$ et il existe un unique couple $(\Omega,E) \in \mathcal{O}_{X,x}^2$ tel que
\begin{enumerate}[i)]
\item $\Omega \in \mathcal{O}_{B,b}[T]$ est un polyn\^ome unitaire de degr\'e $n \deg(\mu_{x})$ v\'erifiant $\Omega(b)(T) = \mu_{\kappa,x}(T)^{n/\delta(x)}$ dans $\kappa(b)[T]$ ;
\item $E$ est inversible dans $\mathcal{O}_{X,x}$ ;
\item $G = \Omega E$.
\end{enumerate}
\end{theo}
\begin{proof}
Posons $d := \deg(\mu_{x})$ et $\delta := \delta(x)$. Le polyn\^ome~$\mu_{\kappa,x}$ est alors de degr\'e $d\delta$. Choisissons un relev\'e~$M$ de~$\mu_{\kappa,x}$ unitaire de degr\'e~$d\delta$ dans~$\mathcal{O}_{B,b}[T]$. Lorsque l'on parlera de valuation, il s'agira de la valuation $\mu_{x}$-adique.
Effectuons la division euclidienne de~$n$ par~$\delta$~: il existe $a\in \ensuremath{\mathbf{N}}$ et $c\in \cn{0}{\delta-1}$ tels que $n = a \delta + c$. D'apr\`es le th\'eor\`eme de division de Weierstra\ss~\ref{weierstrass} appliqu\'e \`a~$G$ et~$M^a$, il existe $Q \in \mathcal{O}_{X,x}$ et $R\in \mathcal{O}_{B,b}[T]$ de degr\'e strictement inf\'erieur \`a~$ad\delta$ tels que $G = QM^a + R$. Puisque le degr\'e de~$R$ est strictement inf\'erieur \`a~$ad\delta$, sa valuation est soit infinie, soit strictement inf\'erieure \`a~$a\delta$. Puisqu'elle ne peut \^etre strictement inf\'erieure aux valuations de~$G$ et de~$QM^a$, on en d\'eduit qu'elle est infinie. En d'autres termes, on a $R(b) = 0$.
Le raisonnement qui pr\'ec\`ede montre que la valuation de~$Q$ est \'egale \`a~$c$. Supposons, par l'absurde, que $c>0$. En appliquant le th\'eor\`eme de division de Weierstra\ss{} \`a~$Q$ et \`a~$M$, on obtient une \'egalit\'e $Q = Q''M + R''$, o\`u $R''\in \mathcal{O}_{B,b}[T]$ est un polyn\^ome de degr\'e strictement inf\'erieur \`a $d\delta = \deg(\mu_{\kappa,x})$. La valuation de~$Q''M$ \'etant sup\'erieure ou \'egale \`a~$\delta > c$, celle de~$R''$ est \'egale \`a~$c$. Le polyn\^ome $R''(b) \in \kappa(b)[T]$ est donc non nul et s'annule en~$x$, ce qui contredit la minimalit\'e du degr\'e de~$\mu_{\kappa,x}$. On en d\'eduit que $c=0$, autrement dit que~$\delta$ divise~$n$ (et que~$Q$ est inversible dans~$\mathcal{O}_{X,x}$).
En appliquant, \`a pr\'esent, le th\'eor\`eme de division de Weierstra\ss{} \`a~$M^a$ et~$G$, on obtient une \'egalit\'e de la forme $M^a = Q'G+R'$. En raisonnant sur les valuations comme pr\'ecedemment, on montre que~$Q'$ est inversible dans~$\mathcal{O}_{X,x}$ et que~$R'(b)=0$. Les fonctions $\Omega := M^a - R'$ et $E := Q'^{-1}$ satisfont alors les propri\'et\'es de l'\'enonc\'e.
Il reste \`a d\'emontrer l'unicit\'e du couple~$(\Omega,E)$ satisfaisant les propri\'et\'es de l'\'enonc\'e. Partant d'un tel couple, $R := \Omega - M^{n/\delta}$ est un \'el\'ement de~$\mathcal{O}_{B,b}[T]$ de degr\'e strictement inf\'erieur \`a~$nd$ et on a $M^{n/\delta} = E^{-1}G - R$. L'unicit\'e du couple~$(\Omega,E)$ se d\'eduit donc de l'\'enonc\'e d'unicit\'e dans le th\'eor\`eme de division de Weierstra\ss.
\end{proof}
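Dans le cas particulier o\`u~$\mathcal{A}$ est un corps valu\'e complet~$k$, o\`u~$x$ est le point de~$X_{b}$ d\'efini par $T = 0$ et o\`u $\mu_{x} = \mu_{\kappa,x} = T$ (donc $\delta(x) = 1$), on retrouve l'\'enonc\'e classique~: tout germe~$G$ non nul d'ordre d'annulation~$n$ en~$x$ se factorise de mani\`ere unique sous la forme
\[G = T^{n}\, E, \quad \textrm{avec } E \textrm{ inversible dans } \mathcal{O}_{X,x},\]
puisque l'on a ici $\Omega = T^{n}$, l'anneau $\mathcal{O}_{B,b} = \kappa(b) = k$ \'etant un corps.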
\subsubsection{R\'esultats}
Revenons au cas o\`u $X = \E{n}{\mathcal{A}}$, avec $n\in \ensuremath{\mathbf{N}}$ arbitraire. Le r\'esultat qui suit permet de comprendre l'anneau local en un point de~$X$ en fonction de celui en sa projection sur la base~$B$. C'est une combinaison de \cite[corollaire~9.11 et th\'eor\`emes 9.17 et~9.18]{EtudeLocale}.
\begin{theo}\label{rigide}\index{Anneau!noetherien@noeth\'erien}\index{Anneau!fortement regulier@fortement r\'egulier}\index{Anneau!fortement de valuation discrete@fortement de valuation discr\`ete}\index{Corps!fort}
Supposons que~$\mathcal{A}$ est basique. Soient~$b$ un point de~$B$ et~$x$ un point de~$X$ au-dessus de~$b$.
Alors l'anneau local~$\mathcal{O}_{\E{n}{\mathcal{A}},x}$ est noeth\'erien et fortement r\'egulier de dimension
\[\dim(\mathcal{O}_{X,x})=\dim(\mathcal{O}_{B,b})+n-m_{x},\]
o\`u $m_{x}$ est d\'efini comme dans la remarque~\ref{rem:rigeptrans}.
Si~$x$ est purement localement transcendant au-dessus de~$b$ (c'est-\`a-dire si $m_{x}=n$) et si $\mathcal{O}_{B,b}$ est un corps fort (resp. un anneau fortement de valuation discr\`ete d'uniformisante~$\varpi_b$), alors~$\mathcal{O}_{X,x}$ est un corps fort (resp. un anneau fortement de valuation discr\`ete d'uniformisante~$\varpi_b$).
\qed
\end{theo}
Int\'eressons-nous, \`a pr\'esent, au principe du prolongement analytique. Nous adopterons la d\'efinition suivante, qui reste maniable en l'absence de connexit\'e locale.
\begin{defi}[\protect{\cite[d\'efinition~11.1]{EtudeLocale}}]\label{def:prolongementanalytique}\index{Prolongement analytique|textbf}\index{Prolongement analytique|(}
Soit $S$ un espace localement annel\'e. Soit $s\in S$. On dit que \emph{$S$ satisfait le principe du prolongement analytique en~$s$} si, pour tout ouvert~$U$ de~$S$ contenant~$s$ et tout \'el\'ement~$f$ de~$\mathcal{O}_{S}(U)$ dont l'image dans~$\mathcal{O}_{S,s}$ n'est pas nulle, il existe un voisinage~$V$ de~$s$ dans~$U$ tel que, pour tout $t\in V$, l'image de~$f$ dans~$\mathcal{O}_{S,t}$ n'est pas nulle.
On dit que \emph{$S$ satisfait le principe du prolongement analytique} s'il le satisfait en tout point.
\end{defi}
\begin{rema}
Soit~$S$ un espace localement annel\'e qui satisfait le principe du prolongement analytique au sens de la d\'efinition~\ref{def:prolongementanalytique}. Alors il satisfait \'egalement la version classique de ce principe. Plus pr\'ecis\'ement, soit~$U$ un ouvert connexe de~$S$ et $f$~un \'el\'ement de~$\mathcal{O}_{S}(U)$. S'il existe $s\in U$ tel que l'image de~$f$ dans~$\mathcal{O}_{S,s}$ est nulle, alors $f$ est nulle dans~$\mathcal{O}_{S}(U)$.
\end{rema}
\begin{rema}\label{rem:prolongementanalytiquecorpsavd}
Soit $S$ un espace localement annel\'e et soit $s\in S$.
Si $\mathcal{O}_{S,s}$ est un corps, alors $S$ satisfait le principe du prolongement analytique en~$s$.
Si $\mathcal{O}_{S,s}$ est un anneau de valuation discr\`ete d'uniformisante~$\pi$, alors $S$ satisfait le principe du prolongement analytique en~$s$ si, et seulement si, il existe un voisinage ouvert~$U$ de~$s$ dans~$S$ tel que, pour tout $t\in U \setminus\{s\}$, l'image de~$\pi$ dans~$\mathcal{O}_{S,t}$ ne soit pas nulle.
\end{rema}
\begin{exem}\index{Prolongement analytique}
Si~$\mathcal{A}$ est l'un des anneaux cit\'es dans les exemples \ref{ex:corpsvalue} \`a~\ref{ex:Dedekind}, alors $\mathcal{M}(\mathcal{A})$ satisfait le principe du prolongement analytique.
\end{exem}
\begin{prop}[\protect{\cite[corollaire~11.5]{EtudeLocale}}]\label{prop:prolongementanalytique}\index{Prolongement analytique}
Supposons que~$\mathcal{A}$ est basique. Si~$\mathcal{M}(\mathcal{A})$ satisfait le principe du prolongement analytique, alors il en va de m\^eme pour~$\E{n}{\mathcal{A}}$.
\qed
\end{prop}
\index{Prolongement analytique|)}
\'Enon\c cons finalement un r\'esultat de coh\'erence. Il sera utilis\'e \`a de nombreuses reprises dans ce travail.
\begin{theo}[\protect{\cite[th\'eor\`eme~11.9]{EtudeLocale}}]\label{coherent}\index{Faisceau!coherent@coh\'erent}
Supposons que~$\mathcal{A}$ est basique et que~$\mathcal{M}(\mathcal{A})$ satisfait le principe du prolongement analytique. Alors, le faisceau structural de~$\E{n}{\mathcal{A}}$ est coh\'erent.
\end{theo}
\chapter{Espaces de Stein}\label{chap:Stein}
Le but de ce chapitre est d'initier la th\'eorie des espaces de Stein sur un anneau de Banach en dimension sup\'erieure (le cas de la dimension~1 ayant d\'ej\`a fait l'objet de~\cite[chapitre~6]{A1Z}). Plus pr\'ecis\'ement, nous exhibons une famille d'espaces satisfaisant les conclusions des th\'eor\`emes~A et~B de Cartan~: la famille des polydisques ferm\'es (et de leurs ferm\'es analytiques). Rappelons que le th\'eor\`eme~A stipule que tout faisceau coh\'erent est engendr\'e par ses sections globales et le th\'eor\`eme~B que la cohomologie de tout faisceau coh\'erent est nulle en degr\'e strictement positif.\index{Theoreme@Th\'eor\`eme!A}\index{Theoreme@Th\'eor\`eme!B}
Dans la section~\ref{sec:CousinRunge}, nous d\'efinissons les notions de syst\`emes de Cousin et de Runge (\cf~d\'efinitions~\ref{def:Cousin} et~\ref{def:Runge}). Deux compacts \'etant donn\'es, il s'agit d'imposer des conditions reliant les fonctions sur l'intersection \`a des fonctions sur chacun des compacts. Dans ce cadre, nous d\'emontrons que si un faisceau est globalement engendr\'e sur chacun des compacts, il l'est encore sur leur r\'eunion (\cf~corollaire~\ref{cor:K-K+A}). Nous introduisons ensuite la notion, assez lourde, d'arbre de Cousin-Runge (\cf~d\'efinition~\ref{def:arbreCR}). C'est un outil technique permettant de construire des syst\`emes de Cousin-Runge dans un disque relatif \`a partir de syst\`emes de Cousin-Runge sur la base. Il nous sera utile pour effectuer des raisonnements par r\'ecurrence sur la dimension.
Dans la section~\ref{sec:AB}, nous d\'emontrons les th\'eor\`emes~A et~B sur les disques ferm\'es relatifs (\cf~corollaires~\ref{cor:thA} et~\ref{cor:thB}). Pour ce faire, nous nous pla\c{c}ons sur une base que nous appelons de Stein (\cf~d\'efinition~\ref{def:basedeStein}). Les spectres de nos anneaux de Banach habituels (corps valu\'es, anneaux d'entiers de corps de nombres, corps hybrides, anneaux de valuation discr\`ete, anneaux de Dedekind trivialement valu\'es) en sont des exemples.
Dans la section~\ref{sec:affinoides}, nous d\'efinissons les espaces affino\"ides surconvergents (\cf~d\'efinition~\ref{def:affinoide}), par analogie avec la g\'eom\'etrie analytique rigide. De nombreuses propri\'et\'es restent valables dans notre cadre. Les exemples classiques (domaines de Weierstra\ss, domaines de Laurent et domaines rationnels) s'adaptent sans peine (\cf~d\'efinition~\ref{def:domaines} et proposition~\ref{prop:domaines}). Nous d\'emontrons \'egalement des analogues du th\'eor\`eme d'acyclicit\'e de Tate et du th\'eor\`eme de Kiehl (\cf~th\'eor\`emes~\ref{th:sectionsglobalesaffinoide} et~\ref{th:affinoideAB}). Il s'agit de cons\'equences assez directes de nos th\'eor\`emes~A et~B sur des polydisques qu'il nous semble utile d'\'enoncer explicitement pour renforcer le parall\`ele avec la th\'eorie ultram\'etrique classique.
Dans la section~\ref{sec:Bouvert}, nous d\'emontrons que les disques ouverts relatifs (et les espaces qui s'en d\'eduisent) satisfont le th\'eor\`eme~B (\cf~th\'eor\`eme~\ref{th:Bouvert}). Nous suivons une strat\'egie classique~: exhaustion par des disques ferm\'es relatifs et passage \`a la limite. Le th\'eor\`eme de fermeture des id\'eaux~\ref{fermeture} y joue un r\^ole essentiel.
Dans la section finale~\ref{sec:noetherianite}, nous appliquons le r\'esultat d'annulation cohomologique sur les disques ferm\'es \`a l'\'etude des s\'eries arithm\'etiques convergentes, c'est-\`a-dire de s\'eries \`a coefficients entiers qui convergent sur un polydisque complexe. Dans~\cite{HarbaterConvergent}, D.~Harbater a d\'emontr\'e que certains anneaux naturels de s\'eries arithm\'etiques convergentes en une variable sont noeth\'eriens. Nous \'etendons son r\'esultat \`a des s\'eries en un nombre quelconque de variables (\cf~corollaire~\ref{cor:noetherienconcret}).
\medbreak
Soit $(\mathcal{A},\nm)$ un anneau de base g\'eom\'etrique. On pose $B := \mathcal{M}(\mathcal{A})$. Pr\'ecisons que l'hypoth\`ese sur l'anneau ne sera que tr\`es peu utilis\'ee. Toute la section~\ref{sec:CousinRunge} reste, par exemple, valable pour un anneau de Banach arbitraire. L'hypoth\`ese interviendra lorsque nous aurons besoin de la coh\'erence du faisceau structural, et uniquement pour cette raison. Nous avons cependant pr\'ef\'er\'e nous placer dans ce cadre plus restrictif d\`es le d\'ebut de fa\c{c}on \`a ne pas alourdir la r\'edaction.
\medbreak
Dans ce chapitre, nous travaillerons souvent avec des parties compactes d'espaces $\mathcal{A}$-analytiques. Elles seront munies du \emph{faisceau structural surconvergent}, sans que nous le pr\'ecisions d\'esormais. Cela signifie que, si $X$ est un espace $\mathcal{A}$-analytique et $V$ une partie compacte de~$X$, nous munirons~$V$ du faisceau d'anneaux $\mathcal{O}_{V} := j_{V}^{-1}\mathcal{O}_{X}$, o\`u $j_{V} \colon V \to X$ d\'esigne l'inclusion.
\index{Faisceau!surconvergent}
\section{Syst\`emes de Cousin-Runge}\label{sec:CousinRunge}
\subsection{G\'en\'eralit\'es}
\index{Systeme@Syst\`eme|(}
Soit~$X$ un espace $\mathcal{A}$-analytique.
Soient~$K^-$ et~$K^+$ deux parties compactes de~$X$. Posons $L := K^- \cap K^+$ et $M := K^- \cup K^+$. Dans cette section, nous introduisons diff\'erentes notions permettant de relier les sections globales d'un faisceau coh\'erent sur~$K^-$ et~$K^+$ \`a celles sur~$M$. Elles sont inspir\'ees de celles introduites dans \cite[\S 6.2.1]{A1Z}, tout en \'etant parfois diff\'erentes, de fa\c con \`a prendre en compte le cadre plus g\'en\'eral \'etudi\'e ici.%
\nomenclature[La]{$K^-$, $K^+$}{compacts d'un espace $\mathcal{A}$-analytique}%
\nomenclature[Lb]{$L$}{intersection $K^-\cap K^+$ des compacts $K^-$ et $K^+$}%
\nomenclature[Lc]{$M$}{union $K^-\cup K^+$ des compacts $K^-$ et $K^+$}%
Nous travaillerons ici, sans plus le pr\'eciser, avec la cat\'egorie des anneaux de Banach dont les objets sont les anneaux de Banach et les morphismes sont les morphismes born\'es.
\begin{defi}\label{def:systemedeBanachfin}\index{Systeme@Syst\`eme!de Banach|textbf}\index{Systeme@Syst\`eme!de Banach fort|textbf}%
\nomenclature[Ld]{$(\mathcal{B}_{m}^-,\nm_{m}^-)$}{famille d'espaces de Banach associ\'ee \`a~$K^-$ dans un syst\`eme de Banach}%
\nomenclature[Le]{$(\mathcal{B}_{m}^+,\nm_{m}^+)$}{famille d'espaces de Banach associ\'ee \`a~$K^+$ dans un syst\`eme de Banach}%
\nomenclature[Lf]{$(\mathcal{C}_{m},\nm_{m})$}{famille d'espaces de Banach associ\'ee \`a~$L$ dans un syst\`eme de Banach}%
Un \emph{syst\`eme de Banach} associ\'e \`a~$(K^-,K^+)$ est la donn\'ee de syst\`emes inductifs d'anneaux de Banach
\begin{align*}
\mathcal{B}^- &= ((\mathcal{B}_{m}^-,\nm_{m}^-)_{m\in \ensuremath{\mathbf{N}}},(\varphi^-_{m',m})_{m' \ge m\, \in \ensuremath{\mathbf{N}}}),\\
\mathcal{B}^+ &= ((\mathcal{B}_{m}^+,\nm_{m}^+)_{m\in \ensuremath{\mathbf{N}}},(\varphi^+_{m',m})_{m' \ge m\, \in \ensuremath{\mathbf{N}}}),\\
\mathcal{C} &= ((\mathcal{C}_{m},\nm_{m})_{m\in \ensuremath{\mathbf{N}}},(\varphi_{m',m})_{m' \ge m\, \in \ensuremath{\mathbf{N}}})
\end{align*}
et de morphismes
\[(\psi^-_{m})_{m\in\ensuremath{\mathbf{N}}} \colon \mathcal{B}^- \to \mathcal{C}, \ (\psi^+_{m})_{m\in\ensuremath{\mathbf{N}}} \colon \mathcal{B}^+ \to \mathcal{C}\]
et
\[(\rho^-_{m})_{m\in\ensuremath{\mathbf{N}}} \colon \mathcal{B}^- \to \mathcal{O}(K^-), \ (\rho^+_{m})_{m\in\ensuremath{\mathbf{N}}} \colon \mathcal{B}^+ \to \mathcal{O}(K^+), \ (\rho_{m})_{m\in\ensuremath{\mathbf{N}}} \colon \mathcal{C} \to \mathcal{O}(L)\]
tels que le morphisme
\[ \colim_{m\in \ensuremath{\mathbf{N}}} \mathcal{C}_{m} \to \mathcal{O}(L) \]
induit par les~$\rho_{m}$ soit un isomorphisme d'anneaux.
On dit que le syst\`eme de Banach est \emph{fort} si, de plus, pour tout voisinage compact~$V$ de~$L$, il existe $m_{V} \in\ensuremath{\mathbf{N}}$ satisfaisant la propri\'et\'e suivante~: pour tout $m\ge m_{V}$, il existe $C_{m} \in \ensuremath{\mathbf{R}}$ tel que, pour tout $f\in \mathscr{O}(V)$, il existe $g\in \mathcal{C}_{m}$ v\'erifiant
\[\begin{cases}
f = \rho_{m}(g) \textrm{ dans } \mathscr{O}(L)~;\\
\|g\|_{m} \le C_{m}\, \|f\|_{V}.
\end{cases}\]
\end{defi}
Dans la suite, nous nous permettrons d'utiliser implicitement les morphismes $\psi^-_{m}, \psi^+_{m}, \rho^-_{m}, \rho^+_{m}, \rho_{m}$, sans que cela ne soit source de confusions.
\begin{rema}
La d\'efinition de syst\`eme de Banach reprend~\cite[d\'efinition~6.2.1]{A1Z}. La d\'efinition de syst\`eme de Banach fort est nouvelle.
\end{rema}
\begin{defi}\label{def:Cousin}\index{Systeme@Syst\`eme!de Cousin|textbf}
Un \emph{syst\`eme de Cousin} associ\'e \`a~$(K^-,K^+)$ est un syst\`eme de Banach associ\'e \`a~$(K^-,K^+)$ pour lequel il existe $D \in\ensuremath{\mathbf{R}}$ satisfaisant la propri\'et\'e suivante~: pour tous $m\in\ensuremath{\mathbf{N}}$ et $f \in \mathcal{C}_{m}$, il existe $f^-\in\mathcal{B}_{m}^-$ et $f^+\in\mathcal{B}_{m}^+$ tels que
\begin{enumerate}[i)]
\item $f = f^- + f^+$ dans $\mathcal{C}_{m}$~;
\item $\|f^-\|^-_{m} \le D\, \|f\|_{m}$~;
\item $\|f^+\|^+_{m} \le D\, \|f\|_{m}$.
\end{enumerate}
\end{defi}
\begin{rema}
La d\'efinition de syst\`eme de Cousin reprend~\cite[d\'efinition~6.2.2]{A1Z}.
\end{rema}
Donnons quelques exemples simples pour illustrer la d\'efinition.
\begin{exem}\label{ex:ZCousinum}\index{Anneau!des entiers relatifs $\ensuremath{\mathbf{Z}}$}
Consid\'erons le cas o\`u $\mathcal{A} = \ensuremath{\mathbf{Z}}$ et reprenons les notations de l'exemple~\ref{ex:Z}.
Soit $q$ un nombre premier et soit $\alpha \in \ensuremath{\mathbf{R}}_{>0}$. Posons $K^- := [a_{q}^\alpha,a_{q}^{+\infty}]$ et $K^+ := \mathcal{M}(\ensuremath{\mathbf{Z}}) \setminus \intof{a_{q}^\alpha,a_{q}^{+\infty}}$. On a $K^-\cap K^+ = \{a_{q}^\alpha\}$ et $K^-\cup K^+ = \mathcal{M}(\ensuremath{\mathbf{Z}})$. Cette situation est repr\'esent\'ee \`a la figure~\ref{fig:K-K+}.
\begin{figure}[!h]
\centering
\begin{tikzpicture}
\foreach \x [count=\xi] in {-2,-1,...,17}
\draw (0,0) -- ({10*cos(\x*pi/10 r)/\xi},{10*sin(\x*pi/10 r)/\xi}) ;
\foreach \x [count=\xi] in {-2,-1,...,17}
\fill ({10*cos(\x*pi/10 r)/\xi},{10*sin(\x*pi/10 r)/\xi}) circle ({0.07/sqrt(\xi)}) ;
\fill (2.9,0) circle ({0.07/sqrt(3)}) ;
\draw (2.6,-0.3) node{$a_q^\alpha$} ;
\draw (2.9,0) to[out=100,in=0] (0,1.7);
\draw (0,1.7) to[out=180,in=90] (-1.4,0);
\draw (-1.4,0) to[out=270,in=135] (-0.8,-1);
\draw (-0.8,-1) to[out=-45,in=160] (8,-6.4);
\draw (8,-6.4) to[out=-20,in=-70] (8.7,-5.7);
\draw (8.7,-5.7) to[out=110,in=-45] (5.5,-1.5);
\draw (5.5,-1.5) to[out=135,in=-65] (2.9,0);
\draw (2.9,0) to[out=45,in=90] (3.7,0);
\draw (3.7,0) to[out=-90,in=-45] (2.9,0);
\draw (4.1,0.4) node{$K^-$} ;
\draw (-.8,-1.6) node{$K^+$} ;
\end{tikzpicture}
\caption{Un syst\`eme de Cousin de $\mathcal{M}(\ensuremath{\mathbf{Z}})$.}\label{fig:K-K+}
\end{figure}
Pour tout $m\in \ensuremath{\mathbf{N}}$, posons
\[\begin{cases}
(\mathcal{B}^-_{m},\nm^-_{m}) := (\mathcal{B}(K^-),\nm_{K^-}) = (\ensuremath{\mathbf{Z}}_{q}, \va_{q}^\alpha)~;\\[2pt]
(\mathcal{B}^+_{m},\nm^+_{m}) := (\mathcal{B}(K^+),\nm_{K^+}) = \big(\ensuremath{\mathbf{Z}}\big[\frac1q\big],\max(\va_{q}^\alpha,\va_{\infty})\big)~;\\[2pt]
(\mathcal{C}_{m},\nm_{m}) := (\mathcal{H}(a_{q}^\alpha), \va_{a_{q}^\alpha}) = (\ensuremath{\mathbf{Q}}_{q},\va_{q}^\alpha).
\end{cases}\]
Soit $f\in \mathcal{H}(a_{q}^\alpha) = \ensuremath{\mathbf{Q}}_{q}$. Supposons que $f\ne 0$ et \'ecrivons-le sous la forme $f = \sum_{i\ge i_{0}} a_{i} \, q^i$ avec $i_{0} \in \ensuremath{\mathbf{Z}}$, $a_{i_{0}} \ne 0$ et, pour tout $i \ge i_{0}$, $a_{i} \in \cn{0}{q-1}$.
Si $i_{0} \ge 0$, on pose
\[ f^- := f \textrm{ et } f^+ := 0.\]
On a alors $f = f^-+f^+$, $\|f^-\|_{K^-} = |f|_{q}^\alpha$ et $\|f^+\|_{K^+} = 0$.
Si $i_{0} < 0$, on a $|f|_{q}^\alpha >1$ et on pose
\[ f^- := \sum_{i\ge 0} a_{i} \, q^i \textrm{ et } f^+ := \sum_{i_{0}\le i <0} a_{i} \, q^i.\]
On a alors $f = f^-+f^+$, $\|f^-\|_{K^-} \le 1$ et $\|f^+\|_{K^+} \le \|f\|_{m}$, puisque $|f^+|_{\infty} \le \sum_{i<0} (q-1)\, q^{i} = 1$ et $|f^+|_{q} = |f|_{q}$.
On a donc bien d\'efini un syst\`eme de Cousin associ\'e \`a $(K^-,K^+)$.
\end{exem}
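\`A titre d'illustration num\'erique, prenons $q=2$, $\alpha = 1$ et $f = \frac{7}{4} \in \ensuremath{\mathbf{Q}}_{2}$. Le d\'eveloppement $2$-adique de~$f$ est $f = 2^{-2} + 2^{-1} + 1$, donc $i_{0} = -2 < 0$ et la construction pr\'ec\'edente fournit $f^- = 1 \in \ensuremath{\mathbf{Z}}_{2}$ et $f^+ = 2^{-2} + 2^{-1} = \frac{3}{4} \in \ensuremath{\mathbf{Z}}\big[\frac{1}{2}\big]$. On v\'erifie directement que $f = f^- + f^+$, que $\|f^-\|_{K^-} = 1$ et que $\|f^+\|_{K^+} = \max\big(|f^+|_{2}, |f^+|_{\infty}\big) = \max\big(4, \frac{3}{4}\big) = 4 = \|f\|_{m}$.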
\begin{exem}\label{ex:ZCousinarc}\index{Anneau!des entiers relatifs $\ensuremath{\mathbf{Z}}$}
Consid\'erons le cas o\`u $\mathcal{A} = \ensuremath{\mathbf{Z}}$ et reprenons les notations de l'exemple~\ref{ex:Z}.
Soit $\beta \in \intoo{0,1}$. Posons $K^- := [a_{\infty}^\beta,a_{\infty}]$ et $K^+ := \mathcal{M}(\ensuremath{\mathbf{Z}}) \setminus \intof{a_{\infty}^\beta,a_{\infty}}$. On a $K^-\cap K^+ = \{a_{\infty}^\beta\}$ et $K^-\cup K^+ = \mathcal{M}(\ensuremath{\mathbf{Z}})$.
Pour tout $m\in \ensuremath{\mathbf{N}}$, posons
\[\begin{cases}
(\mathcal{B}^-_{m},\nm^-_{m}) := (\mathcal{B}(K^-),\nm_{K^-}) = (\ensuremath{\mathbf{R}}, \max(\va_{\infty}^\beta,\va_{\infty}))~;\\
(\mathcal{B}^+_{m},\nm^+_{m}) := (\mathcal{B}(K^+),\nm_{K^+}) = (\ensuremath{\mathbf{Z}},\va_{\infty}^\beta)~;\\
(\mathcal{C}_{m},\nm_{m}) := (\mathcal{H}(a_{\infty}^\beta), \va_{a_{\infty}^\beta}) = (\ensuremath{\mathbf{R}},\va_{\infty}^\beta).
\end{cases}\]
Soit $f\in \mathcal{H}(a_{\infty}^\beta) = \ensuremath{\mathbf{R}}$.
Si $|f|_{\infty} \le 1$, on pose
\[ f^- := f \textrm{ et } f^+ := 0.\]
On a alors $f = f^-+f^+$, $\|f^-\|_{K^-} = |f|_{\infty}^\beta$ et $\|f^+\|_{K^+} = 0$.
Si $|f|_{\infty} > 1$, il existe $n\in \ensuremath{\mathbf{Z}}$ tel que $|n|_{\infty} \le |f|_{\infty}$ et $|f-n|_{\infty} < 1$ et on pose
\[ f^- := f-n \textrm{ et } f^+ := n.\]
On a alors $f = f^-+f^+$, $\|f^-\|_{K^-} \le 1$ et $\|f^+\|_{K^+} = |n|_{\infty}^\beta \le |f|_{\infty}^\beta$.
On a donc bien d\'efini un syst\`eme de Cousin associ\'e \`a $(K^-,K^+)$.
\end{exem}
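Par exemple, pour $\beta = \frac{1}{2}$ et $f = \frac{37}{10} \in \ensuremath{\mathbf{R}}$, on a $|f|_{\infty} > 1$ ; en choisissant $n = 3$, on obtient $f^- = \frac{7}{10}$ et $f^+ = 3$, d'o\`u $f = f^- + f^+$, $\|f^-\|_{K^-} = \max\big((\frac{7}{10})^{1/2}, \frac{7}{10}\big) \le 1$ et $\|f^+\|_{K^+} = 3^{1/2} \le |f|_{\infty}^{1/2}$.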
Les exemples pr\'ec\'edents se g\'en\'eralisent.
\begin{exem}\label{ex:Cousin}%
\index{Corps!valu\'e}
\index{Anneau!des entiers d'un corps de nombres}\index{Corps!hybride}\index{Anneau!de valuation discr\`ete}\index{Anneau!de Dedekind trivialement valu\'e}
Soit $\mathcal{A}$ l'un de nos anneaux de Banach usuels~: corps valu\'e, anneau d'entiers de corps de nombres, corps hybride, anneau de valuation discr\`ete, anneau de Dedekind trivialement valu\'e (\cf~exemples~\ref{ex:corpsvalue} \`a~\ref{ex:Dedekind}).
Soit~$a$ un point de~$\mathcal{M}(\mathcal{A})$ non associ\'e \`a la valuation triviale sur~$\mathcal{A}$. Dans ce cas, il existe deux compacts~$K^-$ et~$K^+$ de~$\mathcal{M}(\mathcal{A})$ tels que $K^- \cap K^+ = \{a\}$ et $K^- \cup K^+ = \mathcal{M}(\mathcal{A})$. (Si $\mathcal{M}(\mathcal{A}) \setminus \{a\}$ poss\`ede deux composantes connexes, ce qui est le cas g\'en\'eral, $K^-$ et~$K^+$ sont les adh\'erences de ces composantes connexes.)
On obtient alors un syst\`eme de Cousin fort associ\'e \`a~$(K^-,K^+)$ en posant, pour tout $m\in \ensuremath{\mathbf{N}}$, $(\mathcal{B}^-_{m},\nm^-_{m}) := (\mathcal{B}(K^-),\nm_{K^-})$, $(\mathcal{B}^+_{m},\nm^+_{m}) := (\mathcal{B}(K^+),\nm_{K^+})$ et $(\mathcal{C}_{m},\nm_{m}) := (\mathcal{H}(a),\va_{a})$. Le seul cas qui n'est pas imm\'ediat est celui des anneaux d'entiers de corps de nombres. On peut le d\'eduire du th\'eor\`eme des unit\'es de Dirichlet (\cf~\cite[lemme~6.3.2]{A1Z}).
\end{exem}
\begin{defi}\label{def:Runge}\index{Systeme@Syst\`eme!de Runge|textbf}
Un \emph{syst\`eme de Runge} associ\'e \`a~$(K^-,K^+)$ est un syst\`eme de Banach associ\'e \`a~$(K^-,K^+)$ tel que, pour tous $m \in \ensuremath{\mathbf{N}}$, $\ensuremath{\varepsilon} >0$, $p\in\ensuremath{\mathbf{N}}^\ast$, $s_{1},\dotsc,s_{p}\in \mathcal{C}_{m}$, la propri\'et\'e suivante soit v\'erifi\'ee~: pour tout $\sigma\in \{-,+\}$, il existe $f \in \mathcal{B}_{m}^\sigma$ inversible et $s'_{1},\dotsc,s'_{p}\in \mathcal{B}_{m}^{-\sigma}$ tels que
\begin{enumerate}[i)]
\item $\lim_{m'\to +\infty} \|f\|_{m'} \, \|f^{-1}\|_{m'} = 1$~;
\item pour tout $i\in\cn{1}{p}$, on ait
\[\|fs_{i} - s'_{i}\|_{m} < \ensuremath{\varepsilon} \|f\|_{m}.\]
\end{enumerate}
\end{defi}
\begin{rema}\label{rem:CR}
Dans le cadre de la d\'efinition pr\'ec\'edente, pour $m$ assez grand (d\'ependant de~$f$), on a
\[\|fs_{i} - s'_{i}\|_{m} \, \|f^{-1}\|_{m} < \ensuremath{\varepsilon}.\]
\end{rema}
Nous utiliserons souvent des combinaisons des d\'efinitions pr\'ec\'edentes dont le sens est clair~: syst\`eme de Cousin fort, syst\`eme de Cousin-Runge, syst\`eme de Cousin-Runge fort, etc.
\begin{rema}
Dans~\cite{A1Z} figure uniquement la d\'efinition de syst\`eme de Cousin-Runge (\cf~\cite[d\'efinition~6.2.8]{A1Z}). Elle est proche de celle que nous proposons ici, mais plus compliqu\'ee.
\end{rema}
Reprenons les exemples de syst\`emes de Banach vus plus haut.
\begin{exem}\label{ex:ZCousinRungeum}\index{Anneau!des entiers relatifs $\ensuremath{\mathbf{Z}}$}
Reprenons l'exemple~\ref{ex:ZCousinum}. Soit $\ensuremath{\varepsilon} \in \ensuremath{\mathbf{R}}_{>0}$ et soient $s_{1},\dotsc,s_{p} \in \mathcal{H}(a_{q}^\alpha) = \ensuremath{\mathbf{Q}}_{q}$.
Consid\'erons le cas $\sigma = -$. Puisque $\ensuremath{\mathbf{Z}}[1/q]$ est dense dans~$\ensuremath{\mathbf{Q}}_{q}$, il existe $s'_{1},\dotsc,s'_{p} \in \ensuremath{\mathbf{Z}}[1/q]$ tels que, pour tout $i\in \cn{1}{p}$, on ait $|s_{i} - s'_{i}|_{q}^\alpha < \ensuremath{\varepsilon}$. Le r\'esultat vaut donc avec $f=1$.
Consid\'erons le cas $\sigma = +$. Il existe $N \in \ensuremath{\mathbf{N}}$ tel que, pour tout $i\in \cn{1}{p}$, on ait $q^N s_{i} \in \ensuremath{\mathbf{Z}}_{q}$. Le r\'esultat vaut donc avec $f=q^N$ (qui est inversible dans~$\ensuremath{\mathbf{Z}}[1/q]$) et, pour tout $i\in \cn{1}{p}$, $s'_{i} = q^N s_{i}$.
On en d\'eduit que le syst\`eme de l'exemple~\ref{ex:ZCousinum} est de Runge.
\end{exem}
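Donnons un exemple num\'erique dans le cas $\sigma = +$ : pour $q=2$, $p=1$ et $s_{1} = \frac{5}{8} \in \ensuremath{\mathbf{Q}}_{2}$, on peut prendre $N=3$, $f = 2^{3}$ et $s'_{1} = 2^{3} s_{1} = 5 \in \ensuremath{\mathbf{Z}}_{2}$ ; on a alors $fs_{1} - s'_{1} = 0$ et, pour tout $m'\in\ensuremath{\mathbf{N}}$, $\|f\|_{m'}\, \|f^{-1}\|_{m'} = |8|_{2}^{\alpha}\, |8^{-1}|_{2}^{\alpha} = 1$.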
\begin{exem}\label{ex:ZCousinRungearc}\index{Anneau!des entiers relatifs $\ensuremath{\mathbf{Z}}$}
Reprenons l'exemple~\ref{ex:ZCousinarc}. Soit $\ensuremath{\varepsilon} \in \ensuremath{\mathbf{R}}_{>0}$ et soient $s_{1},\dotsc,s_{p} \in \mathcal{H}(a_{\infty}^\beta) = \ensuremath{\mathbf{R}}$.
Consid\'erons le cas $\sigma = +$. Les anneaux sous-jacents \`a~$\mathcal{H}(a_{\infty}^\beta)$ et $\mathcal{B}(K^-)$ co\"incident et le r\'esultat vaut donc avec $f=1$ et, pour tout $i\in \cn{1}{p}$, $s'_{i} = s_{i}$.
Consid\'erons le cas $\sigma = -$. Il existe $M \in \ensuremath{\mathbf{N}}_{\ge 1}$ tel que $\ensuremath{\varepsilon} \ge 1/M^\beta$. Pour tout $i\in \cn{1}{p}$, il existe $s'_{i} \in \ensuremath{\mathbf{Z}}$ tel que $|M s_{i} - s'_{i}|_{\infty} < 1$. Le r\'esultat vaut alors avec $f=M$ (qui est inversible dans~$\ensuremath{\mathbf{R}}$).
On en d\'eduit que le syst\`eme de l'exemple~\ref{ex:ZCousinarc} est de Runge.
\end{exem}
\begin{exem}\label{ex:CousinRunge}%
\index{Corps!valu\'e}
\index{Anneau!des entiers d'un corps de nombres}\index{Corps!hybride}\index{Anneau!de valuation discr\`ete}\index{Anneau!de Dedekind trivialement valu\'e}
Les syst\`emes de Cousin de l'exemple~\ref{ex:Cousin} sont des syst\`emes de Cousin-Runge forts. Comme pr\'ec\'edemment, le seul cas difficile est celui des anneaux d'entiers de corps de nombres. On peut le d\'eduire du th\'eor\`eme d'approximation forte (\cf~\cite[lemme~6.3.3]{A1Z}).
\end{exem}
\begin{lemm}\label{lem:CRrelatif}\index{Systeme@Syst\`eme!sur un disque}\index{Disque!systeme@syst\`eme sur un|see{Syst\`eme}}
Soient $r_{1},\dotsc,r_{n} \in \ensuremath{\mathbf{R}}_{>0}$. Notons $\pi \colon \overline{D}_{X}(r_{1},\dotsc,r_{n}) \to X$ le morphisme de projection. Soient~$K^-$ et~$K^+$ des parties compactes de~$X$. S'il existe un syst\`eme de Banach (resp. de Banach fort, resp. de Cousin, resp. de Runge) associ\'e \`a $(K^-,K^+)$, alors il existe un syst\`eme de Banach (resp. de Banach fort, resp. de Cousin, resp. de Runge) associ\'e \`a $(\pi^{-1}(K^-),\pi^{-1}(K^+))$.
\end{lemm}
\begin{proof}
Une r\'ecurrence imm\'ediate montre qu'il suffit de traiter le cas $n=1$. Posons $r=r_{1}$. Identifions $\pi^{-1}(K^+)$ \`a $\overline{D}_{K^+}(r)$, $\pi^{-1}(K^-)$ \`a $\overline{D}_{K^-}(r)$ et $\pi^{-1}(L)$ \`a $\overline{D}_{L}(r)$.
Supposons tout d'abord qu'il existe un syst\`eme de Banach~$\Omega$ associ\'e au couple $(K^-,K^+)$. Soit $(R_{m})_{m\in\ensuremath{\mathbf{N}}}$ une suite d\'ecroissante d'\'el\'ements de $\intof{r,+\infty}$ de limite~$r$. Pour $m \in \ensuremath{\mathbf{N}}$, posons $\bar{\mathcal{B}}_{m}^- = \mathcal{B}_{m}^-\ensuremath{\langle} |T|\le R_{m}\ensuremath{\rangle}$, $\bar{\mathcal{B}}_{m}^+ = \mathcal{B}_{m}^+\ensuremath{\langle} |T|\le R_{m}\ensuremath{\rangle}$, $\bar{\mathcal{C}}_{m} = \mathcal{C}_{m}\ensuremath{\langle} |T|\le R_{m}\ensuremath{\rangle}$. Avec les morphismes \'evidents, on obtient un syst\`eme de Banach~$\bar \Omega$ associ\'e au couple $(\pi^{-1}(K^-),\pi^{-1}(K^+))$. En effet, les autres propri\'et\'es \'etant imm\'ediates, il suffit de v\'erifier que le morphisme
\[\colim_{m\in\ensuremath{\mathbf{N}}} \bar\mathcal{C}_{m} \to \mathcal{O}(\pi^{-1}(L))\]
est un isomorphisme, ce qui d\'ecoule de la proposition~\ref{prop:disqueglobal}.
\smallbreak
Supposons, maintenant, que~$\Omega$ est un syst\`eme de Banach fort. Soit~$V$ un voisinage compact de~$\overline{D}_{L}(r)$. On peut supposer qu'il est de la forme $\overline{D}_{W}(s)$, o\`u~$W$ est un voisinage compact de~$L$ et~$s$ un nombre r\'eel strictement sup\'erieur \`a~$r$. Soit~$m_{W}$ un entier v\'erifiant les propri\'et\'es de la d\'efinition de syst\`eme de Banach fort pour le voisinage compact~$W$ de~$L$. Soit~$m_{V} \ge m_{W}$ tel que $R_{m_{V}} < s$. Soient $m\ge m_{V}$ et $f\in\mathscr{O}(\overline{D}_{W}(s))$. D'apr\`es la proposition~\ref{prop:disqueglobal}, on peut \'ecrire~$f$ sous la forme
\[f = \sum_{n\ge 0} a_{n}\, T^n \in \mathscr{O}(W)\llbracket T \rrbracket,\]
o\`u la s\'erie $\sum_{n\ge 0} \|a_{n}\|_{W}\, s^n$ converge. Pour tout $n\in\ensuremath{\mathbf{N}}$, il existe $b_{n} \in \mathcal{C}_{m}$ tel que
\[\begin{cases}
a_{n} = b_{n} \textrm{ dans } \mathscr{O}(L)~;\\
\|b_{n}\|_{m} \le C_{m}\, \|a_{n}\|_{W}.
\end{cases}\]
Consid\'erons la s\'erie
\[g = \sum_{n\ge 0} b_{n}\, T^n \in \mathcal{C}_{m}\llbracket T \rrbracket.\]
Elle appartient \`a $\mathcal{C}_{m}\ensuremath{\langle} |T|\le s\ensuremath{\rangle}$, donc \`a $\mathcal{C}_{m}\ensuremath{\langle} |T|\le R_{m}\ensuremath{\rangle}$, et son image dans $\mathcal{O}(\overline{D}_{L}(r))$ co\"{\i}ncide avec celle de~$f$. D'apr\`es la proposition~\ref{prop:restrictionserie}, on a
\begin{align*}
\|g\|_{\mathcal{C}_{m}\ensuremath{\langle} |T|\le R_{m}\ensuremath{\rangle}} &= \sum_{n\ge 0} \|b_{n}\|_{m}\, R_{m}^n\\
&\le C_{m}\, \sum_{n\ge 0} \|a_{n}\|_{W}\, R_{m}^n\\
&\le C_{m} \frac{s}{s-R_{m}}\, \|f\|_{\overline{D}_{W}(s)}.
\end{align*}
Ceci montre que~$\bar\Omega$ est un syst\`eme de Banach fort.
\smallbreak
Supposons, maintenant, que~$\Omega$ est un syst\`eme de Cousin. Par hypoth\`ese, il existe $D\in \ensuremath{\mathbf{R}}$ tel que, pour tous $m\in\ensuremath{\mathbf{N}}$ et $a\in \mathcal{C}_{m}$, il existe $a^-\in \mathcal{B}_{m}^-$ et $a^+\in \mathcal{B}_{m}^+$ tels que
\[\begin{cases}
a = a^- + a^+ \textrm{ dans } \mathcal{C}_{m}~;\\
\|a^-\|^-_{m} \le D\, \|a\|_{m}~;\\
\|a^+\|^+_{m} \le D\, \|a\|_{m}.
\end{cases}\]
Soient $m\in\ensuremath{\mathbf{N}}$ et $f \in \bar\mathcal{C}_{m} = \mathcal{C}_{m}\ensuremath{\langle} |T|\le R_{m}\ensuremath{\rangle}$. On peut l'\'ecrire sous la forme
\[f = \sum_{n\ge 0} a_{n}\, T^n.\]
Les s\'eries
\[f^- := \sum_{n\ge 0} a_{n}^-\, T^n \textrm{ et } f^+ := \sum_{n\ge 0} a_{n}^+\, T^n\]
d\'efinissent alors respectivement des \'el\'ements de~$\bar\mathcal{B}_{m}^-$ et~$\bar\mathcal{B}_{m}^+$ tels que
\[\begin{cases}
f = f^- + f^+ \textrm{ dans } \bar\mathcal{C}_{m}~;\\
\|f^-\|^-_{m} \le D\, \|f\|_{m}~;\\
\|f^+\|^+_{m} \le D\, \|f\|_{m}.
\end{cases}\]
Ceci montre que~$\bar\Omega$ est un syst\`eme de Cousin.
\smallbreak
Supposons, finalement, que~$\Omega$ est un syst\`eme de Runge. Soient $m\in\ensuremath{\mathbf{N}}$, $\ensuremath{\varepsilon}>0$ et $\sigma\in\{-,+\}$. Soient $p\in\ensuremath{\mathbf{N}}^\ast$ et $s_{1},\dotsc,s_{p}\in \bar\mathcal{C}_{m}$.
Pour tout $i\in\cn{1}{p}$, il existe
\[ s'_{i} = \sum_{n=0}^{d_{i}} a'_{i,n} \, T^n \in \mathcal{C}_{m}[T] \]
tel que $\|s'_{i}-s_{i}\|_{\bar{\mathcal{C}}_{m}} < \ensuremath{\varepsilon}/2$. On peut supposer que les~$d_{i}$ sont tous \'egaux. Notons~$d$ leur valeur commune. Posons $C = \sum_{n=0}^d R_{m}^n$. D'apr\`es la propri\'et\'e de Runge pour~$\mathcal{C}_{m}$, il existe $f\in \mathcal{B}_{m}^\sigma$ inversible et, pour tout $(i,n) \in \cn{1}{p} \times \cn{0}{d}$, $a''_{i,n} \in \mathcal{B}_{m}^{-\sigma}$ tels que
\begin{enumerate}[i)]
\item $\lim_{m'\to +\infty} \|f\|_{m'} \, \|f^{-1}\|_{m'} = 1$~;
\item pour tout $(i,n) \in \cn{1}{p} \times \cn{0}{d}$, on ait
\[\|fa'_{i,n} - a''_{i,n}\|_{m} < \ensuremath{\varepsilon}/(2C)\, \|f\|_{m}.\]
\end{enumerate}
Pour tout $i\in\cn{1}{p}$, posons
\[ s''_{i} = \sum_{n=0}^{d} a''_{i,n} \, T^n \in \mathcal{B}_{m}^{-\sigma}[T]. \]
On a alors
\[ \|fs_{i} - s''_{i}\|_{\bar{\mathcal{C}}_{m}} \le \|f(s_{i} - s'_{i})\|_{\bar{\mathcal{C}}_{m}}+ \|fs'_{i} - s''_{i}\|_{\bar{\mathcal{C}}_{m}} \le \ensuremath{\varepsilon} \|f\|_{\bar{\mathcal{C}}_{m}}.\]
De plus, pour tout $m'\ge m$ et tout \'el\'ement~$g$ de~$\mathcal{C}_{m}$, on a $\|g\|_{\bar{\mathcal{C}}_{m'}} = \|g\|_{m'}$. On en d\'eduit que
\[\lim_{m'\to +\infty} \|f\|_{\bar{\mathcal{C}}_{m'}} \, \|f^{-1}\|_{\bar{\mathcal{C}}_{m'}} = 1.\]
Ceci montre que~$\bar\Omega$ est un syst\`eme de Runge.
\end{proof}
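Signalons une cons\'equence imm\'ediate : la construction de la preuve pr\'ec\'edente pr\'eservant simultan\'ement les propri\'et\'es de Cousin, de Runge et de syst\`eme fort, on d\'eduit de l'exemple~\ref{ex:CousinRunge} que, pour tous $r_{1},\dotsc,r_{n}\in\ensuremath{\mathbf{R}}_{>0}$, il existe un syst\`eme de Cousin-Runge fort associ\'e au couple $(\overline{D}_{K^-}(r_{1},\dotsc,r_{n}),\overline{D}_{K^+}(r_{1},\dotsc,r_{n}))$, o\`u $K^-$ et~$K^+$ d\'esignent les compacts de~$\mathcal{M}(\mathcal{A})$ de l'exemple~\ref{ex:Cousin}.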
Rappelons que, dans le cadre des syst\`emes de Cousin, on dispose de l'analogue du lemme de Cartan.
\begin{theo}[\protect{\cite[th\'eor\`eme~6.2.7]{A1Z}}]\label{thm:lemmeCartan}\index{Lemme!de Cartan}\index{Theoreme@Th\'eor\`eme!de Cartan}
Soit $\Omega = (\mathcal{B}^-_{m},\mathcal{B}^+_{m},\mathcal{C}_{m})_{m\in\ensuremath{\mathbf{N}}}$ un syst\`eme de Cousin.
Alors, il existe $\varepsilon>0$ tel que, pour tous $m\in\ensuremath{\mathbf{N}}$, $q\in\ensuremath{\mathbf{N}}^\ast$ et $A\in M_{q}(\mathcal{C}_{m})$ v\'erifiant $\|A-I\|_{m} < \varepsilon$, il existe $C^- \in GL_{q}(\mathcal{B}_{m}^-)$ et $C^+ \in GL_{q}(\mathcal{B}_{m}^+)$ telles que
\[\begin{cases}
A = C^-\, C^+ \textrm{ dans } GL_{q}(\mathcal{C}_{m})~;\\
\|C^--I\|_{m}^- \le 4 D\, \|A-I\|_{m}~;\\
\|C^+-I\|_{m}^+ \le 4 D\, \|A-I\|_{m},\\
\end{cases}\]
o\`u~$D$ est la constante apparaissant dans la d\'efinition de syst\`eme de Cousin.
\qed
\end{theo}
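\`A titre d'illustration, pla\c{c}ons-nous dans le syst\`eme de Cousin de l'exemple~\ref{ex:ZCousinum} et consid\'erons des matrices de taille~$1$. Si $A = (a) \in M_{1}(\ensuremath{\mathbf{Q}}_{q})$ v\'erifie $\|A-I\|_{m} = |a-1|_{q}^\alpha < 1$, alors $|a|_{q} = 1$, donc $a \in \ensuremath{\mathbf{Z}}_{q}^{\times}$, et l'on dispose d'une d\'ecomposition explicite $A = C^-\, C^+$ avec $C^- = (a) \in GL_{1}(\ensuremath{\mathbf{Z}}_{q})$ et $C^+ = (1) \in GL_{1}\big(\ensuremath{\mathbf{Z}}\big[\frac{1}{q}\big]\big)$ ; on a alors $\|C^--I\|^-_{m} = \|A-I\|_{m}$ et $\|C^+-I\|^+_{m} = 0$.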
Nous souhaitons maintenant d\'emontrer que, pour un faisceau coh\'erent sur $M = K^-\cup K^+$, la propri\'et\'e d'\^etre globalement engendr\'e passe de~$K^-$ et~$K^+$ \`a~$M$. Le lemme qui suit est une version l\'eg\`erement modifi\'ee de \cite[lemme~6.2.9]{A1Z}, dont la preuve reste essentiellement inchang\'ee.
\begin{lemm}\label{lem:matriceapprochee}
Soit $\Omega = (\mathcal{B}^-_{m},\mathcal{B}^+_{m},\mathcal{C}_{m})_{m\in\ensuremath{\mathbf{N}}}$ un syst\`eme de Cousin associ\'e \`a $(K^-,K^+)$. Soient~$\mathcal{F}$ un faisceau de $\mathcal{O}_{M}$-modules, $p,q\in \ensuremath{\mathbf{N}}^\ast$, $T^- \in \mathcal{F}(K^-)^p$, $T^+ \in \mathcal{F}(K^+)^q$, $m\in\ensuremath{\mathbf{N}}$,
$U\in M_{p,q}(\mathcal{C}_{m})$ et $V \in M_{q,p}(\mathcal{C}_{m})$
tels qu'on ait
\[\begin{cases}
T^- = U\,T^+ \textrm{ dans } \mathcal{F}(L)^p~;\\
T^+=V\,T^- \textrm{ dans } \mathcal{F}(L)^q.
\end{cases}\]
Soit $\ensuremath{\varepsilon}>0$ satisfaisant la conclusion du th\'eor\`eme~\ref{thm:lemmeCartan}. Supposons qu'il existe $\bar U \in M_{p,q}(\mathcal{B}_{m}^+)$ tel que
\[ \|\bar U - U\|_{m}\, \|V\|_{m} < \varepsilon.\]
Alors, il existe $S^-\in \mathcal{F}(M)^p$ et $A^- \in GL_{p}(\mathcal{O}(K^-))$ tels que
\[ S^- = A^-\, T^- \textrm{ dans } \mathcal{F}(K^-)^p.\]
\end{lemm}
\begin{proof}
Posons $A = I + (\bar U -U) V \in M_{p}(\mathcal{C}_{m})$. Nous avons alors $A\, T^- = \bar U\, T^+$ dans $\mathcal{F}(L)^p$.
Nous avons \'egalement $\|A-I\|_{m} < \ensuremath{\varepsilon}$, donc, par hypoth\`ese, il existe $C^- \in GL_{p}(\mathcal{B}_{m}^-)$ et $C^+ \in GL_{p}(\mathcal{B}_{m}^+)$ telles que $A = C^+\, C^-$ dans $GL_{p}(\mathcal{C}_{m})$. On a alors
\[C^-\, T^- = (C^+)^{-1}\, A\, T^- = (C^+)^{-1}\, \bar U\, T^+.\]
On peut donc d\'efinir un \'el\'ement~$S^-$ de~$\mathcal{F}(M)^p$ par
\[\begin{cases}
S^- = C^-\, T^- \textrm{ sur } K^-~;\\
S^- = (C^+)^{-1}\, \bar U\, T^+ \textrm{ sur } K^+.
\end{cases}\]
Il v\'erifie la propri\'et\'e requise avec la matrice~$A^-$ qui est l'image de~$C^-$ dans $GL_{p}(\mathcal{O}(K^-))$.
\end{proof}
Nous en d\'eduisons un analogue de \cite[th\'eor\`eme~6.2.10]{A1Z}.
\begin{theo}\index{Faisceau!globalement engendr\'e}
Soit $\Omega = (\mathcal{B}^-_{m},\mathcal{B}^+_{m},\mathcal{C}_{m})_{m\in\ensuremath{\mathbf{N}}}$ un syst\`eme de Cousin-Runge associ\'e \`a $(K^-,K^+)$. Soit~$\mathcal{F}$ un faisceau de $\mathcal{O}_{M}$-modules. Supposons qu'il existe deux entiers~$p$ et~$q$, une famille $(t_{1}^-,\dotsc,t_{p}^-)$ d'\'el\'ements de~$\mathcal{F}(K^-)$ et une famille $(t_{1}^+,\dotsc,t_{q}^+)$ d'\'el\'ements de~$\mathcal{F}(K^+)$ dont les restrictions \`a~$L$ engendrent le m\^eme sous-$\mathcal{O}(L)$-module de~$\mathcal{F}(L)$. Alors, il existe $s_{1}^-,\dotsc,s_{p}^-,s_{1}^+,\dotsc,s_{q}^+ \in \mathcal{F}(M)$, $A^- \in GL_{p}(\mathcal{O}(K^-))$ et $A^+ \in GL_{q}(\mathcal{O}(K^+))$ tels que
\[\begin{pmatrix} s_{1}^-\\ \vdots \\ s_{p}^- \end{pmatrix} = A^- \begin{pmatrix} t_{1}^-\\ \vdots \\ t_{p}^- \end{pmatrix} \textrm{ dans } \mathcal{F}(K^-)^p\]
et
\[\begin{pmatrix} s_{1}^+\\ \vdots \\ s_{q}^+ \end{pmatrix} = A^+ \begin{pmatrix} t_{1}^+\\ \vdots \\ t_{q}^+ \end{pmatrix} \textrm{ dans } \mathcal{F}(K^+)^q.\]
En outre, on peut choisir $s_{1}^-,\dotsc,s_{p}^-,s_{1}^+,\dotsc,s_{q}^+$ de fa\c{c}on que, pour tout compact~$K^0$ contenu dans~$K^-$ (resp.~$K^+$) tel que $(t_{1}^-,\dotsc,t_{p}^-)_{|K^0}$ engendre le $\mathcal{O}(K^0)$-module $\mathcal{F}(K^0)$ (resp. $(t_{1}^+,\dotsc,t_{q}^+)_{|K^0}$ engendre le $\mathcal{O}(K^0)$-module $\mathcal{F}(K^0)$), la famille $(s_{1}^-,\dotsc,s_{p}^-,s_{1}^+,\dotsc,s_{q}^+)_{|K^0}$ engendre encore le $\mathcal{O}(K^0)$-module $\mathcal{F}(K^0)$.
\end{theo}
\begin{proof}
Posons
\[T^- = \begin{pmatrix} t_{1}^-\\ \vdots \\ t_{p}^- \end{pmatrix} \textrm{ et } T^+ = \begin{pmatrix} t_{1}^+\\ \vdots \\ t_{q}^+ \end{pmatrix}.\]
Par hypoth\`ese, il existe $m\in\ensuremath{\mathbf{N}}$, $U = (u_{a,i}) \in M_{p,q}(\mathcal{C}_{m})$ et $V = (v_{b,j}) \in M_{q,p}(\mathcal{C}_{m})$ tels que l'on ait
\[\begin{cases}
T^- = U\,T^+ \textrm{ dans } \mathcal{F}(L)^p~;\\
T^+=V\,T^- \textrm{ dans } \mathcal{F}(L)^q.
\end{cases}\]
Consid\'erons un nombre r\'eel~$\ensuremath{\varepsilon}>0$ satisfaisant la conclusion du th\'eor\`eme~\ref{thm:lemmeCartan}. D'apr\`es la remarque~\ref{rem:CR}, il existe un entier $m' \ge m$, un \'el\'ement inversible~$f$ de~$\mathcal{B}_{m'}^-$ et des \'el\'ements $\bar{u}_{a,i}$ de~$\mathcal{B}_{m'}^+$, pour $(a,i) \in \cn{1}{p}\times\cn{1}{q}$, tels que, pour tous $(a,i) \in \cn{1}{p}\times\cn{1}{q}$ et $(b,j) \in \cn{1}{q}\times\cn{1}{p}$, on ait
\[\|f u_{a,i} - \bar{u}_{a,i}\|_{m'}\, \|f^{-1} v_{b,j}\|_{m'} \le \|f u_{a,i} - \bar{u}_{a,i}\|_{m'}\, \|f^{-1}\|_{m'}\, \| v_{b,j}\|_{m'} < \ensuremath{\varepsilon}.\]
On a alors
\[\begin{cases}
fT^- = (fU)\, T^+ \textrm{ dans } \mathcal{F}(L)^p~;\\
T^+ = (f^{-1}V)\, (fT^-) \textrm{ dans } \mathcal{F}(L)^q
\end{cases}\]
et, en posant $\bar U = (\bar{u}_{a,i}) \in M_{p,q}(\mathcal{B}_{m'}^+)$,
\[\|\bar U - fU\|_{m'}\, \|f^{-1}V\|_{m'} < \ensuremath{\varepsilon}.\]
On se trouve donc sous les hypoth\`eses du lemme~\ref{lem:matriceapprochee}. On en d\'eduit qu'il existe $S^- \in \mathcal{F}(M)^p$ et $A^- \in GL_{p}(\mathcal{O}(K^-))$ tels que
\[S^- = A^- \, f T^- \textrm{ dans } \mathcal{F}(K^-)^p.\]
Le premier r\'esultat s'en d\'eduit en utilisant le fait que~$f$ est inversible sur~$K^-$. Le r\'esultat pour~$K^+$ se d\'emontre identiquement.
La remarque finale d\'ecoule de la construction et de l'inversibilit\'e des matrices~$A^-$ et~$A^+$.
\end{proof}
Nous en d\'eduisons finalement un analogue raffin\'e et corrig\'e de \cite[corollaire~6.2.11]{A1Z}. (L'hypoth\`ese de surjectivit\'e sur~$L$ a malencontreusement \'et\'e omise dans l'\'enonc\'e original.)
\begin{coro}\label{cor:K-K+A}\index{Faisceau!globalement engendr\'e}
Supposons qu'il existe un syst\`eme de Cousin-Runge fort associ\'e \`a $(K^-,K^+)$. Soit~$\mathcal{F}$ un faisceau de $\mathcal{O}_{M}$-modules. Supposons qu'il existe des morphismes surjectifs $\varphi^- \colon \mathcal{O}^{p}_{K^-} \to \mathcal{F}_{|K^-}$ et $\varphi^+ \colon \mathcal{O}^{q}_{K^+} \to \mathcal{F}_{|K^+}$ tels que les morphismes induits $\mathcal{O}^{p}(L) \to \mathcal{F}(L)$ et $\mathcal{O}^{q}(L) \to \mathcal{F}(L)$ soient surjectifs. Alors, $\mathcal{F}$~est globalement engendr\'e sur~$M$.
Plus pr\'ecis\'ement, il existe un morphisme surjectif $\varphi \colon \mathcal{O}^{p+q} \to \mathcal{F}$ tel que, pour tout compact~$K^0$ contenu dans~$K^-$ (resp.~$K^+$) tel que le morphisme induit $\mathcal{O}^{p}(K^0) \to \mathcal{F}(K^0)$ (resp. $\mathcal{O}^{q}(K^0) \to \mathcal{F}(K^0)$) soit surjectif, le morphisme induit $\mathcal{O}^{p+q}(K^0) \to \mathcal{F}(K^0)$ soit surjectif.
\qed
\end{coro}
\index{Systeme@Syst\`eme|)}
\subsection{Alg\`ebres de domaines polynomiaux}
Nous pr\'esentons ici l'exemple principal de syst\`eme de Cousin-Runge fort que nous utiliserons dans la suite de ce texte.
Posons $X := \E{1}{\mathcal{A}}$ et notons $\pi \colon X \to B$ le morphisme structural. Pour toute partie~$V$ de~$B$, on note $X_{V} := \pi^{-1}(V)$.
\begin{prop}\label{prop:systemeCR}\index{Systeme@Syst\`eme!sur un domaine polynomial}\index{Domaine polynomial!systeme@syst\`eme sur un|see{Syst\`eme}}\index{Systeme@Syst\`eme!de Runge}\index{Systeme@Syst\`eme!de Cousin}\index{Condition!ONPS-T@$(\mathcal{O} N_{P(S)-T})$}
Soit~$V$ une partie compacte et d\'ecente de~$B_{\mathrm{um}}$.
Soit $P \in \mathcal{O}(V)[T]$ unitaire non constant. Soient $r,s\in\ensuremath{\mathbf{R}}$ tels que $0<r\le s$. Soient~$K^-$ et~$K^+$ deux parties compactes de~$X_{V}$ telles que
\[\begin{cases}
K^- \subset \{x\in X_{V} : |P(x)|\le s\}\ ;\\
K^+ \subset \{x\in X_{V} : |P(x)|\ge r\}\ ;\\
K^- \cap K^+ = \{x\in X_{V} : r \le |P(x)| \le s\}.
\end{cases}\]
Alors, il existe un syst\`eme de Cousin associ\'e \`a $(K^-,K^+)$. Si $r=s$, alors il existe un syst\`eme de Cousin-Runge associ\'e \`a $(K^-,K^+)$.
Supposons, en outre, qu'il existe une suite d\'ecroissante $(W_{m})_{m\in \ensuremath{\mathbf{N}}}$ de parties compactes de~$B$ sur lesquelles les coefficients de~$P$ sont d\'efinis et formant une base de voisinages de~$V$, une suite $(u_{m})_{m\in\ensuremath{\mathbf{N}}}$ d'\'el\'ements de~$\intoo{0,r}$ strictement croissante de limite~$r$ et une suite $(v_{m})_{m\in\ensuremath{\mathbf{N}}}$ d'\'el\'ements de~$\intoo{s,+\infty}$ strictement d\'ecroissante de limite~$s$ telles que, pour tout $m\in\ensuremath{\mathbf{N}}$, le compact $\overline{C}_{W_{m}}(u_{m},v_{m})$ satisfasse la condition $(\mathcal{O} N_{P(S)-T})$. Alors, il existe un syst\`eme de Banach fort (et donc de Cousin fort, et m\^eme de Cousin-Runge fort si $s=r$) associ\'e \`a $(K^-,K^+)$.
\end{prop}
\begin{proof}
Puisque~$K^+$ est compact, il existe un nombre r\'eel $t > s$ qui majore la fonction~$|P|$ sur~$K^+$. Nous pouvons supposer que $K^+ = \overline{C}_{V}(P;r,t)$ et $K^- = \overline{C}_{V}(P;s)$. Posons $L := K^-\cap K^+ = \overline{C}_{V}(P;r,s)$.
Soient $(W_{m})_{m\in \ensuremath{\mathbf{N}}}$ une suite d\'ecroissante de parties compactes de~$B$ sur lesquelles les coefficients de~$P$ sont d\'efinis et formant une base de voisinages de~$V$, $(u_{m})_{m\in\ensuremath{\mathbf{N}}}$ une suite d'\'el\'ements de~$\intoo{0,r}$ strictement croissante de limite~$r$ et $(v_{m})_{m\in\ensuremath{\mathbf{N}}}$ une suite d'\'el\'ements de~$\intoo{s,+\infty}$ strictement d\'ecroissante de limite~$s$.
D'apr\`es la proposition~\ref{prop:equivalencedivres} appliqu\'ee avec $\mathcal{A} = \overline{\mathcal{O}(W_{0})}\ensuremath{\langle} |T|\le t\ensuremath{\rangle}$, il existe $w \in \ensuremath{\mathbf{R}}_{>0}$ tel que, pour tout entier~$m$, la semi-norme r\'esiduelle $\nm_{\overline{\mathcal{O}(W_{m})}\ensuremath{\langle}|T|\le v_{m}\ensuremath{\rangle},w,\mathrm{r\acute{e}s}}$ d\'efinie sur $\overline{\mathcal{O}(W_{m})}\ensuremath{\langle}|T|\le v_{m}\ensuremath{\rangle}[S]/(P(S)-T)$ soit une norme pour laquelle cet espace est complet. On notera $(\mathcal{B}_{m}^-,\nm_{m}^-)$ cet anneau de Banach. Les m\^emes propri\'et\'es valent pour $\overline{\mathcal{O}(W_{m})} \ensuremath{\langle} u_{m} \le |T|\le t\ensuremath{\rangle}[S]/(P(S)-T)$, que l'on notera $(\mathcal{B}_{m}^+,\nm_{m}^+)$, et $\overline{\mathcal{O}(W_{m})}\ensuremath{\langle} u_{m}\le |T|\le v_{m}\ensuremath{\rangle}[S]/(P(S)-T)$, que l'on notera $(\mathcal{C}_{m},\nm_{m})$. On peut supposer que le m\^eme nombre r\'eel~$w$ convient pour tous ces anneaux.
Avec les morphismes \'evidents, on vient de d\'efinir un syst\`eme de Banach~$\Omega$ associ\'e \`a $(K^-,K^+)$. Le seul point d\'elicat \`a v\'erifier est le fait que le morphisme
\[\colim_{m\in\ensuremath{\mathbf{N}}} \mathcal{C}_{m} \to \mathcal{O}(L)\]
soit un isomorphisme. C'est l'objet du corollaire~\ref{cor:lemniscateglobaleBanach}.
\smallbreak
D\'emontrons maintenant que~$\Omega$ est un syst\`eme de Cousin. Soient~$m\in \ensuremath{\mathbf{N}}$ et $f\in \mathcal{C}_{m}$. On peut supposer que $f\ne 0$. Il existe alors un entier $d \in \ensuremath{\mathbf{N}}$ et un \'el\'ement $F = \sum_{i=0}^d a_{i} \, S^i$ de $\overline{\mathcal{O}(W_{m})}\ensuremath{\langle} u_{m}\le |T|\le v_{m}\ensuremath{\rangle}[S]$ relevant~$f$ et tel que $\|F\|_{\overline{\mathcal{O}(W_{m})}\ensuremath{\langle} u_{m}\le |T|\le v_{m}\ensuremath{\rangle},w} \le 2 \|f\|_{m}$. En utilisant la description explicite de $\overline{\mathcal{O}(W_{m})}\ensuremath{\langle} u_{m}\le |T|\le v_{m}\ensuremath{\rangle}$ comme anneau de s\'eries de Laurent, on montre que, pour tout $i\in \cn{0}{d}$, il existe $a_{i}^- \in \overline{\mathcal{O}(W_{m})}\ensuremath{\langle} |T|\le v_{m}\ensuremath{\rangle}$ et $a_{i}^+ \in \overline{\mathcal{O}(W_{m})}\ensuremath{\langle} u_{m}\le |T|\le t\ensuremath{\rangle}$ tels que
\[\begin{cases}
a_{i} = a_{i}^-+a_{i}^+ \textrm{ dans } \overline{\mathcal{O}(W_{m})}\ensuremath{\langle} u_{m}\le |T|\le v_{m}\ensuremath{\rangle}\ ;\\[2pt]
\|a_{i}^-\|_{\overline{\mathcal{O}(W_{m})}\ensuremath{\langle} |T|\le v_{m}\ensuremath{\rangle}} \le \|a_{i}\|_{\overline{\mathcal{O}(W_{m})}\ensuremath{\langle} u_{m}\le |T|\le v_{m}\ensuremath{\rangle}}\ ;\\[2pt]
\|a_{i}^+\|_{\overline{\mathcal{O}(W_{m})}\ensuremath{\langle} u_{m}\le |T|\le t\ensuremath{\rangle}} \le \|a_{i}\|_{\overline{\mathcal{O}(W_{m})}\ensuremath{\langle} u_{m}\le |T|\le v_{m}\ensuremath{\rangle}}.
\end{cases}\]
En posant $F^- = \sum_{i=0}^d a_{i}^- \,S^i \in\overline{\mathcal{O}(W_{m})}\ensuremath{\langle} |T|\le v_{m}\ensuremath{\rangle}[S]$ et $F^+ = \sum_{i=0}^d a_{i}^+ \,S^i \in \overline{\mathcal{O}(W_{m})}\ensuremath{\langle} u_{m}\le |T|\le t\ensuremath{\rangle}[S]$ et en notant~$f^-$ et~$f^+$ les classes r\'esiduelles respectives modulo $P(S)-T$, on obtient le r\'esultat d\'esir\'e avec la constante~$D=2$ (ind\'ependante de~$m$).
\smallbreak
Supposons que $r=s$ et d\'emontrons que~$\Omega$ est un syst\`eme de Runge. Soient $m\in\ensuremath{\mathbf{N}}$ et $\ensuremath{\varepsilon}>0$. Soient $p\in\ensuremath{\mathbf{N}}^\ast$ et $s_{1},\dotsc,s_{p} \in \mathcal{C}_{m}$. Pour tout $i\in\cn{1}{p}$, choisissons un \'el\'ement $S_{i} = \sum_{k=0}^d A_{i,k}\, S^k$ de $\overline{\mathcal{O}(W_{m})}\ensuremath{\langle} u_{m}\le |T|\le v_{m}\ensuremath{\rangle}[S]$ qui rel\`eve~$s_{i}$. Remarquons qu'il est loisible de supposer l'entier~$d$ ind\'ependant de~$i$.
Commen\c{c}ons par traiter le cas $\sigma=-$. Pour tous $i\in\cn{1}{p}$ et $k\in\cn{0}{d}$, il existe $A'_{i,k} \in \overline{\mathcal{O}(W_{m})}[T,T^{-1}]$ tel que
\[ \|A_{i,k} - A'_{i,k}\|_{\overline{\mathcal{O}(W_{m})}\ensuremath{\langle} u_{m}\le |T|\le v_{m}\ensuremath{\rangle}} \, w^k < \ensuremath{\varepsilon}/(d+1).\]
Pour tout $i\in\cn{1}{p}$, on pose $S'_{i} = \sum_{k=0}^d A'_{i,k}\, S^k$ et on a alors
\[ \| S_{i} - S'_{i}\|_{\overline{\mathcal{O}(W_{m})}\ensuremath{\langle} u_{m}\le |T|\le v_{m}\ensuremath{\rangle},w} < \ensuremath{\varepsilon}.\]
Tout \'el\'ement~$S'_{i}$ se prolonge en un \'el\'ement de $\overline{\mathcal{O}(W_{m})}\ensuremath{\langle} u_{m}\le |T|\le t\ensuremath{\rangle}[S]$ et d\'efinit donc un \'el\'ement~$s'_{i}$ de $\mathcal{B}_{m}^+$ par passage au quotient. La propri\'et\'e d'approximation de l'\'enonc\'e est alors v\'erifi\'ee avec $f=1$. La condition
\[\lim_{m'\to\infty} \|f\|_{m'} \, \|f^{-1}\|_{m'} = 1\]
est \'evidemment v\'erifi\'ee aussi.
Traitons maintenant le cas $\sigma=+$. Pour tous $i\in\cn{1}{p}$ et $k\in\cn{0}{d}$, notons
\[A_{i,k} = \sum_{l\in\ensuremath{\mathbf{Z}}} a_{i,k}^l \,T^l \in \overline{\mathcal{O}(W_{m})}\ensuremath{\langle} u_{m}\le |T|\le v_{m}\ensuremath{\rangle}.\]
Il existe un entier $l_{0}\le 0$ tel que, pour tous $i\in\cn{1}{p}$ et $k\in\cn{0}{d}$, on ait
\[\sum_{l\le l_{0} - 1} \|a_{i,k}^l\|_{W_{m}}\, \max(u_{m}^l,v_{m}^l) < \frac{\ensuremath{\varepsilon}}{(d+1) w^k}.\]
Posons $F = T^{-l_{0}} \in \overline{\mathcal{O}(W_{m})}\ensuremath{\langle} u_{m}\le |T|\le t\ensuremath{\rangle}[S]$. C'est un \'el\'ement inversible. Notons~$f$ son image dans~$\mathcal{B}_{m}^+$.
Pour $i\in\cn{1}{p}$ et $k\in\cn{0}{d}$, posons
\[A'_{i,k} = \sum_{l\ge l_{0}} a_{i,k}^l\, T^l.\]
On a
\[F A'_{i,k} = \sum_{l\ge 0} a_{i,k}^{l+l_{0}}\, T^l \in \overline{\mathcal{O}(W_{m})}\ensuremath{\langle} |T|\le v_{m}\ensuremath{\rangle}.\]
Pour $i\in\cn{1}{p}$, posons
\[S'_{i} = \sum_{k=0}^d F A'_{i,k}\, S^k \in \overline{\mathcal{O}(W_{m})}\ensuremath{\langle} |T|\le v_{m}\ensuremath{\rangle}[S]\]
et notons~$s'_{i}$ son image dans~$\mathcal{B}_{m}^-$. Pour tout $i\in\cn{1}{p}$, on a
\begin{align*}
\|f s_{i}-s'_{i}\|_{m} & \le \|f\|_{m}\, \|s_{i} - f^{-1}s'_{i}\|_{m}\\
& \le \|f\|_{m} \, \left\| \sum_{k=0}^d (A_{i,k} - A'_{i,k}) \, S^{k}\right\|_{\overline{\mathcal{O}(W_{m})}\ensuremath{\langle} u_{m}\le |T|\le v_{m}\ensuremath{\rangle},w} \\
& <\ensuremath{\varepsilon}\, \|f\|_{m}.
\end{align*}
Finalement, pour tout $m' \ge m$, on a
\begin{align*}
\|f\|_{m'}\, \| f^{-1}\|_{m'} &\le \|T^{-l_{0}}\|_{\overline{\mathcal{O}(W_{m'})}\ensuremath{\langle} u_{m'}\le |T|\le v_{m'}\ensuremath{\rangle}} \, \|T^{l_{0}}\|_{\overline{\mathcal{O}(W_{m'})}\ensuremath{\langle} u_{m'}\le |T|\le v_{m'}\ensuremath{\rangle}}\\
& \le \left(\frac{v_{m'}}{u_{m'}}\right)^{|l_{0}|}.
\end{align*}
Cette derni\`ere quantit\'e tend vers $(s/r)^{|l_{0}|} = 1$ lorsque~$m'$ tend vers l'infini (et c'est le seul endroit o\`u l'\'egalit\'e $s=r$ est utilis\'ee).
\smallbreak
Finalement, supposons que, pour tout $m\in\ensuremath{\mathbf{N}}$, le compact $\overline{C}_{W_{m}}(u_{m},v_{m})$ satisfait la condition $(\mathcal{O} N_{P(S)-T})$ et d\'emontrons que~$\Omega$ est un syst\`eme de Banach fort.
Soit~$U$ un voisinage compact de~$L$. Il contient un voisinage de la forme $\overline{C}_{W_{m_{0}}}(P;u_{m_{0}},v_{m_{0}})$. D'apr\`es la condition~$(\mathcal{O} N_{P(S)-T})$, il existe $A\in\ensuremath{\mathbf{R}}$ tel que, pour tout $h\in\mathcal{O}(\overline{C}_{W_{m_{0}}}(u_{m_{0}},v_{m_{0}}))[S]/(P(S)-T)$, on ait
\[\|h\|_{\mathcal{O}(\overline{C}_{W_{m_{0}}}(u_{m_{0}},v_{m_{0}})),w,\mathrm{r\acute{e}s}} \le A\, \|h\|_{\overline{C}_{W_{m_{0}}}(P;u_{m_{0}},v_{m_{0}}) } \le A\, \| h\|_{U}.\]
Soit $m\ge m_{0}+1$. Alors, d'apr\`es la proposition~\ref{prop:restrictionserie}, pour tout $k\in\overline{\mathcal{O}(W_{m_{0}})}\ensuremath{\langle} u_{m_{0}}\le|T|\le v_{m_{0}}\ensuremath{\rangle}$, on a
\[\|k\|_{\overline{\mathcal{O}(W_{m})}\ensuremath{\langle} u_{m}\le|T|\le v_{m}\ensuremath{\rangle}} \le \left(\frac{u_{m_{0}}}{u_{m}-u_{m_{0}}} + \frac{v_{m_{0}}}{v_{m_{0}}-v_{m}} \right) \, \|k\|_{\mathcal{O}(\overline{C}_{W_{m_{0}}}(u_{m_{0}},v_{m_{0}}))}.\]
Soit $f\in \mathcal{O}(U)$. D'apr\`es le corollaire~\ref{cor:lemniscateglobale}, il existe un \'el\'ement~$g$ de $\overline{\mathcal{O}(W_{m})}\ensuremath{\langle} u_{m}\le|T|\le v_{m}\ensuremath{\rangle}[S]/(P(S)-T)$ dont l'image dans~$\mathcal{O}(L)$ co\"{\i}ncide avec celle de~$f$. Les in\'egalit\'es pr\'ec\'edentes montrent par ailleurs que l'on a
\begin{align*}
\|g\|_{\overline{\mathcal{O}(W_{m})}\ensuremath{\langle} u_{m}\le|T|\le v_{m}\ensuremath{\rangle},w,\mathrm{r\acute{e}s}} &\le \left(\frac{u_{m_{0}}}{u_{m}-u_{m_{0}}} + \frac{v_{m_{0}}}{v_{m_{0}}-v_{m}} \right)\, \|g\|_{\mathcal{O}(\overline{C}_{W_{m_{0}}}(u_{m_{0}},v_{m_{0}})),w,\mathrm{r\acute{e}s}}\\
&\le A \left(\frac{u_{m_{0}}}{u_{m}-u_{m_{0}}} + \frac{v_{m_{0}}}{v_{m_{0}}-v_{m}} \right)\, \|f\|_{U}.
\end{align*}
Le r\'esultat s'ensuit.
\end{proof}
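Le cas le plus simple est celui du polyn\^ome $P = T$ avec $r = s$ : en prenant $K^- = \overline{D}_{V}(s)$ et $K^+ = \overline{C}_{V}(s,t)$, avec $t\ge s$, la proposition~\ref{prop:systemeCR} fournit un syst\`eme de Cousin-Runge qui permet de d\'ecouper le disque relatif $\overline{D}_{V}(t)$ en le disque $\overline{D}_{V}(s)$ et la couronne $\overline{C}_{V}(s,t)$, d'intersection le cercle relatif $\{x\in X_{V} : |T(x)| = s\}$.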
Dans la proposition~\ref{prop:systemeCR}, la difficult\'e r\'eside, bien s\^ur, dans la v\'erification de la condition suppl\'ementaire finale portant sur la propri\'et\'e $(\mathcal{O} N_{P(S)-T})$. Remarquons que le corollaire~\ref{cor:BVcompactN} fournit un moyen d'y parvenir dans le cas o\`u tous les points du compact~$V$ sont ultram\'etriques tr\`es typiques.
Mentionnons \'egalement le r\'esultat suivant, particuli\`erement adapt\'e au voisinage de points dont le corps r\'esiduel est de caract\'eristique nulle ou de valuation non triviale.
\begin{lemm}\label{lem:conditionscar0}\index{Condition!ONPS-T@$(\mathcal{O} N_{P(S)-T})$}\index{Resultant@R\'esultant}
Soient~$V$ une partie compacte de~$B_{\mathrm{um}}$ et $P \in \mathcal{O}(V)[T]$ unitaire non constant. Soient $r,s \in \ensuremath{\mathbf{R}}$ tels que $0<r\le s$. Supposons que le polyn\^ome $R(T) = \mathrm{R\acute{e}s}_{S}(P(S)-T,P'(S))$ ne s'annule ni sur $\overline{C}_{V}(r,r)$, ni sur $\overline{C}_{V}(s,s)$.
Alors, il existe un voisinage~$U$ de~$V$ sur lequel les coefficients de~$P$ sont d\'efinis et des nombres r\'eels $r_{0}\in\intoo{0,r}$ et $s_{0}\in\intoo{s,+\infty}$ tels que, pour toute partie compacte~$W$ de~$U$ contenant~$V$, tout $u\in\intff{r_{0},r}$ et tout $v\in\intff{s,s_{0}}$, le compact $\overline{C}_{W}(u,v)$ satisfasse la condition $(\mathcal{O} N_{P(S)-T})$.
En particulier, la condition finale de la proposition~\ref{prop:systemeCR} est satisfaite.
\end{lemm}
\begin{proof}
Soit~$U$ un voisinage de~$V$ sur lequel les coefficients de~$P$ sont d\'efinis. Consid\'erons le polyn\^ome $R(T) = \mathrm{R\acute{e}s}_{S}(P(S)-T,P'(S)) \in \mathscr{O}(U)[T]$. Par hypoth\`ese, il ne s'annule ni sur $\overline{C}_{V}(r,r)$, ni sur $\overline{C}_{V}(s,s)$. Quitte \`a restreindre~$U$, on peut supposer qu'il ne s'annule pas non plus sur $\overline{C}_{U}(r,r)$ et $\overline{C}_{U}(s,s)$, et m\^eme sur $\overline{C}_{U}(u,u)$ et $\overline{C}_{U}(v,v)$, pour tous $u \in [r_{0},r]$ et $v \in [s,s_{0}]$ avec $r_{0}<r$ et $s_{0}>s$ bien choisis. Le r\'esultat d\'ecoule alors de la proposition~\ref{prop:RG}.
\end{proof}
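Donnons un exemple \'el\'ementaire : pour $P(S) = S^{2}$, on a $R(T) = \mathrm{R\acute{e}s}_{S}(S^{2}-T,2S) = -4\,T$. Si la fonction~$2$ ne s'annule en aucun point de~$V$, ce polyn\^ome ne s'annule donc ni sur $\overline{C}_{V}(r,r)$ ni sur $\overline{C}_{V}(s,s)$, quels que soient $0<r\le s$, et le lemme~\ref{lem:conditionscar0} s'applique \`a~$P$.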
\subsection{Arbres de Cousin-Runge}
\index{Arbre|(}
Soit~$X$ un espace $\mathcal{A}$-analytique.
Nous introduisons maintenant une nouvelle notion qui sera utile pour effectuer des r\'ecurrences sur la dimension.
\begin{defi}\label{def:arbreCR}\index{Arbre!binaire de compacts|textbf}\index{Arbre!assez d'|textbf}\index{Arbre!a intersections simples@\`a intersections simples|textbf}\index{Arbre!de Cousin|textbf}\index{Arbre!adapt\'e \`a un recouvrement|textbf}%
\nomenclature[Ra]{$\mathcal{T}$}{arbre binaire de compacts}%
\nomenclature[Rb]{$K(v)$}{compact \'etiquetant un n\oe ud~$v$ de~$\mathcal{T}$}%
\nomenclature[Rc]{$\mathcal{F}(\mathcal{T})$}{famille des compacts associ\'es aux feuilles de~$\mathcal{T}$}%
Soit~$V$ une partie compacte de~$X$. On appelle \emph{arbre binaire de compacts} sur~$V$ un arbre~$\mathcal{T}$ dont chaque n{\oe}ud~$v$ est \'etiquet\'e par un compact~$K(v)$ et qui v\'erifie les propri\'et\'es suivantes~:
\begin{enumerate}[i)]
\item la racine~$r$ de~$\mathcal{T}$ est \'etiquet\'ee par~$K(r) = V$~;
\item tout n{\oe}ud~$v$ de~$\mathcal{T}$ qui n'est pas une feuille poss\`ede deux fils~$v^-$ et~$v^+$ et on a
\[K(v) = K(v^-) \cup K(v^+).\]
\end{enumerate}
En particulier, la famille~$\mathcal{F}(\mathcal{T})$ des compacts associ\'es aux feuilles de~$\mathcal{T}$ est un recouvrement de~$V$. \'Etant donn\'e un recouvrement~$\mathcal{U}$ de~$V$, on dit que l'\emph{arbre}~$\mathcal{T}$ est \emph{adapt\'e} \`a~$\mathcal{U}$ si~$\mathcal{F}(\mathcal{T})$ est plus fin que~$\mathcal{U}$.
On appelle \emph{arbre de Cousin} (resp. Cousin-Runge, etc.) sur~$V$ tout arbre binaire de compacts sur~$V$ qui v\'erifie la propri\'et\'e suivante~:
\begin{enumerate}
\item[iii)] pour tout n{\oe}ud interne~$v$, il existe un syst\`eme de Cousin (resp. Cousin-Runge, etc.) associ\'e \`a $(K(v^-),K(v^+))$.
\end{enumerate}
Un arbre binaire de compacts~$\mathcal{T}$ sur~$V$ est dit \emph{\`a intersections simples} si, pour tout n{\oe}ud~$v$ de~$\mathcal{T}$ qui n'est pas une feuille, il existe une feuille~$f(v^-)$ du sous-arbre issu de~$v^-$ et une feuille~$f(v^+)$ du sous-arbre issu de~$v^+$ telles que
\[K(v^-)\cap K(v^+) = K(f(v^-)) \cap K(f(v^+)).\]
On dit que le compact~$V$ de~$X$ poss\`ede \emph{assez d'arbres} de Cousin (resp. Cousin-Runge, etc.) si, pour tout recouvrement ouvert~$\mathcal{U}$ de~$V$, il existe un arbre de Cousin (resp. Cousin-Runge, etc.) sur~$V$ adapt\'e \`a~$\mathcal{U}$.
\end{defi}
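\`A titre d'illustration, reprenons les compacts $K^-$ et~$K^+$ de $\mathcal{M}(\mathcal{A})$ de l'exemple~\ref{ex:Cousin}. L'arbre binaire de compacts sur $\mathcal{M}(\mathcal{A})$ dont la racine est \'etiquet\'ee par $\mathcal{M}(\mathcal{A})$ et dont les deux feuilles sont \'etiquet\'ees par~$K^-$ et~$K^+$ est, d'apr\`es l'exemple~\ref{ex:CousinRunge}, un arbre de Cousin-Runge fort ; il est \`a intersections simples, les sous-arbres issus des deux fils \'etant r\'eduits \`a des feuilles.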
\begin{prop}\label{prop:CRimpliqueA}\index{Faisceau!globalement engendr\'e}
Soit~$V$ une partie compacte de~$X$. Soient $r_{1},\dotsc,r_{n} \in \ensuremath{\mathbf{R}}_{>0}$. Notons $\pi \colon \overline{D}_{X}(r_{1},\dotsc,r_{n}) \to X$ le morphisme de projection. Soit~$\mathcal{F}$ un faisceau de $\mathcal{O}_{\pi^{-1}(V)}$-modules de type fini. Supposons qu'il existe un arbre de Cousin-Runge fort \`a intersections simples~$\mathcal{T}$ sur~$V$ et, pour toute feuille~$f$ de~$\mathcal{T}$, un morphisme surjectif $\varphi_{f} \colon \mathcal{O}_{\pi^{-1}(K(f))}^{n_{f}} \to \mathcal{F}_{|\pi^{-1}(K(f))}$ v\'erifiant la propri\'et\'e suivante~: pour toute feuille~$g$ de~$\mathcal{T}$, le morphisme $\mathcal{O}^{n_{f}}(\pi^{-1}(K(f)\cap K(g))) \to \mathcal{F}(\pi^{-1}(K(f)\cap K(g)))$ induit par~$\varphi_{f}$ est surjectif.
Alors, le faisceau $\mathcal{F}$ est globalement engendr\'e sur~$\pi^{-1}(V)$.
\end{prop}
\begin{proof}
Modifions l'arbre binaire de compacts~$\mathcal{T}$ en rempla\c{c}ant chaque \'etiquette~$K$ par $\pi^{-1}(K)$. On obtient un arbre binaire de compacts~$\mathcal{T}'$ sur~$\pi^{-1}(V)$. Il est encore \`a intersections simples, et \'egalement de Cousin-Runge fort, d'apr\`es le lemme~\ref{lem:CRrelatif}. Pour tout n\oe ud~$v'$ de~$\mathcal{T}'$, notons~$K'(v')$ son \'etiquette.
On souhaite, \`a pr\'esent, d\'emontrer que, pour tout n\oe ud~$v'$ de~$\mathcal{T}'$, il existe un entier~$n_{v'}$ et un morphisme surjectif $\varphi_{v'} \colon \mathcal{O}_{K'(v')}^{n_{v'}} \to \mathcal{F}_{|K'(v')}$ qui induit un morphisme surjectif $\mathcal{O}^{n_{v'}}(K'(f')\cap K'(g')) \to \mathcal{F}(K'(f')\cap K'(g'))$, pour toute feuille~$f'$ de~$\mathcal{T}'$ issue de~$v'$ et toute feuille~$g'$ de~$\mathcal{T}'$. Il suffit, pour ce faire, d'appliquer le corollaire~\ref{cor:K-K+A} de fa\c{c}on r\'ep\'et\'ee en remontant dans l'arbre, l'hypoth\`ese de surjectivit\'e globale sur les intersections \'etant assur\'ee par la condition d'intersections simples sur~$\mathcal{T}'$.
En particulier, lorsque~$v'$ est la racine de~$\mathcal{T}'$, d'\'etiquette~$\pi^{-1}(V)$, on obtient le r\'esultat souhait\'e.
\end{proof}
\subsection{Au-dessus d'un corps ultram\'etrique}
Dans cette section, nous fixons un corps valu\'e ultram\'etrique complet $(k,\va)$. Nous allons construire des arbres de Cousin-Runge sur certaines parties de la droite analytique~$\E{1}{k}$.
\begin{defi}\index{Voisinage!elementaire@\'el\'ementaire|textbf}\index{Recouvrement elementaire@Recouvrement \'el\'ementaire|textbf}
Soit $x\in \ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}}$ un point de type~2 ou~3. On dit qu'un ouvert~$U$ de~$\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}}$ est un \emph{voisinage \'el\'ementaire} de~$x$ dans~$\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}}$ s'il existe un nombre r\'eel $\ensuremath{\varepsilon} >0$, un ensemble fini~$F$ de composantes connexes relativement compactes de~$\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}}\setminus\{x\}$ et, pour tout \'el\'ement~$C$ de~$F$, un point rigide~$x_{C}$ de~$C$ tels que
\[U = C(P_{F},|P_{F}(x)| - \ensuremath{\varepsilon}, |P_{F}(x)| + \ensuremath{\varepsilon}),\]
o\`u $P_{F} := \prod_{C\in F} \mu_{x_{C}}$.
Soit $x\in \ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}}$ un point de type~1 ou~4. On dit qu'un ouvert~$U$ de~$\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}}$ est un \emph{voisinage \'el\'ementaire} de~$x$ dans~$\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}}$ s'il existe un nombre r\'eel $\ensuremath{\varepsilon} >0$ et un polyn\^ome irr\'eductible $P\in k[T]$ tels que
\[U = D(P,|P(x)|+ \ensuremath{\varepsilon}).\]
On dit qu'un recouvrement ouvert~$\mathcal{U}$ de~$\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}}$ est un \emph{recouvrement \'el\'ementaire} si tout \'el\'ement~$U$ de~$\mathcal{U}$ est voisinage \'el\'ementaire de l'un de ses points.
\end{defi}
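Par exemple, supposons que~$k$ est de valuation non triviale et consid\'erons le point de Gauss~$x$ du disque unit\'e ferm\'e, qui est de type~2 et v\'erifie $|T(x)| = 1$. En prenant pour~$F$ la seule composante connexe de $\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}}\setminus\{x\}$ contenant le point rigide~$0$, avec $x_{C} = 0$, on a $P_{F} = T$ et l'on obtient comme voisinage \'el\'ementaire de~$x$ la couronne ouverte $\{y \in \ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}} : 1-\ensuremath{\varepsilon} < |T(y)| < 1+\ensuremath{\varepsilon}\}$.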
\begin{rema}\label{rem:raffinementelementaire}\index{Voisinage!elementaire@\'el\'ementaire}
Les lemmes~\ref{lem:bv23} et~\ref{lem:bv14} assurent que tout point de~$\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}}$ poss\`ede une base de voisinages \'el\'ementaires, et donc que tout recouvrement de~$\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}}$ peut \^etre raffin\'e en un recouvrement \'el\'ementaire.
\end{rema}
\begin{defi}\label{defi:arbreelementaire}\index{Arbre!binaire de compacts!elementaire@\'el\'ementaire|textbf}\index{Arbre!bien decoupe@bien d\'ecoup\'e|textbf}%
\nomenclature[Rd]{$G_{v}$}{ensemble des n{\oe}uds~$w\ne v$ situ\'es entre la racine et~$v$ tels que~$v$ soit \`a gauche de~$w$}%
\nomenclature[Re]{$D_{v}$}{ensemble des n{\oe}uds~$w\ne v$ situ\'es entre la racine et~$v$ tels que~$v$ soit \`a droite de~$w$}%
\nomenclature[Rf]{$\mathcal{T}(\mathcal{E})$}{arbre binaire de compacts \'el\'ementaire}%
Soient~$X$ un espace analytique et~$V$ une partie compacte de~$X$. Soient $P\in\mathcal{O}(V)[T]$ et $a>0$.
Soit~$\mathcal{T}$ un arbre binaire. Pour tout n{\oe}ud $v$ de~$\mathcal{T}$, notons~$G_{v}$ (resp.~$D_{v}$) l'ensemble des n{\oe}uds~$w\ne v$ situ\'es entre la racine et~$v$ et tels que~$v$ soit \`a gauche (resp. \`a droite) de~$w$, c'est-\`a-dire dans le sous-arbre issu du fils gauche (resp. droit), \cf~figure~\ref{fig:GvDv}.
\begin{figure}[!h]
\centering
\begin{tikzpicture}
\draw (-3.2,4.3) node[above right]{$\mathcal{T}$} ;
\draw (-1.6,5) -- (-2.6,3.3);
\draw (-1.6,5)-- (-0.8,3.3);
\draw (-2.6,3.3)-- (-4,1.8);
\draw (-2.6,3.3)-- (-1.6,1.8);
\draw (-1.6,1.8)-- (-2.5,0.4);
\draw (-1.6,1.8)-- (-0.9,0.3);
\draw (-3,0.3) node[anchor=north west] {$v$};
\draw (2,3.6) node[anchor=north west] {$G_v$};
\draw (-5.7,3.6) node[anchor=north west] {$D_v$};
\draw [->] (1.8,3.2) to[out=120,in=-10] (-1.2,4.9);
\draw [->] (1.8,3.2) to[out=230,in=10] (-1.3,1.8);
\draw [->] (-4.8,3.3) to[out=20,in=160] (-3,3.3);
\fill (-1.6,5) circle (2pt) ;
\fill (-2.6,3.3) circle (2pt) ;
\fill (-0.8,3.3) circle (2pt) ;
\fill (-4,1.8) circle (2pt) ;
\fill (-1.6,1.8) circle (2pt) ;
\fill (-2.5,0.4) circle (2pt) ;
\fill (-0.9,0.3) circle (2pt) ;
\end{tikzpicture}
\caption{Sous-ensembles $G_{v}$ et $D_{v}$.}\label{fig:GvDv}
\end{figure}
Soit $\mathcal{E} = (P_{v},s_{v})_{v\in \mathcal{I}(\mathcal{T})}$ une famille d'\'el\'ements de $\mathcal{O}(V)[T] \times \ensuremath{\mathbf{R}}_{>0}$ ou $\mathcal{B}(V)[T] \times \ensuremath{\mathbf{R}}_{>0}$ index\'ee par l'ensemble~$\mathcal{I}(\mathcal{T})$ des n{\oe}uds internes de~$\mathcal{T}$. On d\'efinit alors un \emph{arbre binaire de compacts} $\mathcal{T}(\mathcal{E})$ sur~$\overline{D}_{V}(P;a)$ dit \emph{\'el\'ementaire} en \'etiquetant tout n{\oe}ud~$v$ de~$\mathcal{T}$ par
\begin{align*}
K(v) &= \overline{D}_{V}(P;a) \cap \bigcap_{w\in G_{v}} \{x\in \E{1}{V} : |P_{w}(x)| \le s_{w}\}\\
&\qquad \cap \bigcap_{w\in D_{v}} \{x\in \E{1}{V} : |P_{w}(x)| \ge s_{w}\}.
\end{align*}
On dit que l'\emph{arbre} $\mathcal{T}(\mathcal{E})$ est \emph{bien d\'ecoup\'e} si, pour tous n{\oe}uds internes $v \ne v'$, on a
\[\{x\in \E{1}{V} : |P_{v}(x)| = s_{v}\} \cap \{x\in \E{1}{V} : |P_{v'}(x)| = s_{v'}\} = \emptyset.\]
\end{defi}
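Donnons un exemple simple au-dessus d'un point. Prenons $V = \mathcal{M}(k)$, $P = T$ et $a = 1$. Soit~$\mathcal{T}$ l'arbre binaire form\'e d'une racine~$v_{0}$ et de deux feuilles, et soit~$\mathcal{E}$ la famille r\'eduite \`a $(P_{v_{0}},s_{v_{0}}) = (T,\frac{1}{2})$. L'arbre \'el\'ementaire $\mathcal{T}(\mathcal{E})$ \'etiquette alors la racine par $\overline{D}(T,1)$, la feuille gauche par $\overline{D}(T,1) \cap \{x\in \E{1}{V} : |T(x)|\le \frac{1}{2}\} = \overline{D}(T,\frac{1}{2})$ et la feuille droite par $\overline{C}(T;\frac{1}{2},1)$ ; il est bien d\'ecoup\'e, la condition \`a v\'erifier, qui porte sur des couples de n{\oe}uds internes distincts, \'etant ici vide.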
\begin{rema}\label{rem:biendecoupe}
Pour tout n{\oe}ud interne~$v$, on a
\[\begin{cases}
K(v^-) = K(v) \cap \{x\in \E{1}{V} : |P_{v}(x)|\le s_{v}\}\ ;\\
K(v^+) = K(v) \cap \{x\in \E{1}{V} : |P_{v}(x)|\ge s_{v}\}.
\end{cases}\]
En particulier, si~$\mathcal{T}(\mathcal{E})$ est bien d\'ecoup\'e, on a
\[\begin{cases}
K(v^-) = \{x\in \E{1}{V} : |P_{v}(x)|\le s_{v}\}\ ;\\
K(v^-) \cap K(v^+) = \{x\in \E{1}{V} : |P_{v}(x)| = s_{v}\}.
\end{cases}\]
Nous retrouvons ainsi une partie des hypoth\`eses de la proposition~\ref{prop:systemeCR}.
Remarquons encore que, si~$\mathcal{T}(\mathcal{E})$ est bien d\'ecoup\'e, alors, pour tout n\oe ud interne~$v$, on a
\[ \{x\in \E{1}{V} : |P_{v}(x)| = s_{v}\} = K(f^d(v^-)) \cap K(f^d(v^+)),\]
o\`u $f^d(v^-)$ (resp. $f^d(v^+)$) d\'esigne la feuille la plus \`a droite du sous-arbre issu de~$v^-$ (resp. $v^+$). En particulier, $\mathcal{T}(\mathcal{E})$ est \`a intersections simples. On peut \'egalement v\'erifier que, pour toutes feuilles $f\ne g$ de~$\mathcal{T}(\mathcal{E})$, l'intersection $K(f) \cap K(g)$ est soit vide, soit de la forme $\{x\in \E{1}{V} : |P_{v}(x)| = s_{v}\}$ pour un certain n\oe ud interne~$v$.
\end{rema}
\begin{defi}\label{defi:etoile}\index{Etoile@\'Etoile|textbf}
On dit qu'une partie~$E$ de~$\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}}$ est une \emph{\'etoile} s'il existe un polyn\^ome irr\'eductible $P\in k[T]$, un nombre r\'eel~$a >0$ et une famille finie~$\mathcal{C}$ de composantes connexes de $\overline{D}(P,a) \setminus \{\eta\}$, o\`u~$\eta$ d\'esigne l'unique point du bord de Shilov de~$\overline{D}(P,a)$, tels que
\[E = \overline{D}(P,a) \setminus \bigcup_{C \in \mathcal{C}} C.\]
\end{defi}
\begin{exem}
Les disques et les couronnes, et plus g\'en\'eralement les parties de la forme $\overline{D}(P,s)$ et $\overline{C}(P;s,s)$, avec $P \in k[T]$ irr\'eductible et $s \in \ensuremath{\mathbf{R}}_{>0}$, sont des \'etoiles.
\end{exem}
\begin{prop}\label{prop:CRcorps}\index{Etoile@\'Etoile!arbre sur une|see{Arbre}}\index{Arbre!sur une \'etoile}\index{Resultant@R\'esultant}
Soit~$E \subset \ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}}$ une \'etoile. Alors, pour toute famille d'ouverts~$\mathcal{U}$ de~$\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}}$ recouvrant~$E$, il existe un arbre binaire~$\mathcal{T}$ et une famille $\mathcal{E} = (P_{v},s_{v})_{v\in \mathcal{I}(\mathcal{T})}$ d'\'el\'ements de $k[T]\times\ensuremath{\mathbf{R}}_{>0}$, index\'ee par l'ensemble des n{\oe}uds internes de~$\mathcal{T}$ et dont les polyn\^omes~$P_{v}$ sont irr\'eductibles, tels que l'arbre binaire de compacts \'el\'ementaire $\mathcal{T}(\mathcal{E})$ sur~$E$ soit bien d\'ecoup\'e, adapt\'e \`a~$\mathcal{U}$ et de Cousin-Runge fort.
En outre, si~$k$ est parfait ou de valuation non triviale, on peut supposer que, pour tout n{\oe}ud interne~$v$ de~$\mathcal{T}$, le polyn\^ome $R_{v}(T) = \mathrm{R\acute{e}s}_{S}(P_{v}(S)-T,P_{v}'(S))$ ne s'annule pas sur $\overline{C}(s_{v},s_{v})$.
\end{prop}
\begin{proof}
Par hypoth\`ese, il existe un polyn\^ome irr\'eductible $P\in k[T]$, un nombre r\'eel~$a >0$ et une famille finie~$\mathcal{C}$ de composantes connexes de $\overline{D}(P,a) \setminus \{\eta\}$, o\`u~$\eta$ d\'esigne l'unique point du bord de Shilov de~$\overline{D}(P,a)$, tels que
\[E = \overline{D}(P,a) \setminus \bigcup_{C \in \mathcal{C}} C.\]
Soit~$\mathcal{U}$ une famille d'ouverts de~$\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}}$ recouvrant $E$.
D'apr\`es les lemmes~\ref{lem:bv23} et~\ref{lem:bv14}, on peut supposer que tout \'el\'ement de~$\mathcal{U}$ est un voisinage \'el\'ementaire d'un point de~$E$. En outre, par compacit\'e de~$E$, on peut supposer que~$\mathcal{U}$ est fini, de cardinal~$n \ge 1$. D\'emontrons le r\'esultat pour de tels recouvrements par r\'ecurrence sur~$n$.
Traitons tout d'abord le cas o\`u $n=1$. Notons~$\mathcal{T}$ l'arbre binaire r\'eduit \`a sa racine. En consid\'erant la famille vide~$\mathcal{E}$, on obtient un arbre binaire de compacts~$\mathcal{T}(\mathcal{E})$ sur~$E$ r\'eduit \`a sa racine, \'etiquet\'ee par~$E$. Les conditions de l'\'enonc\'e sont satisfaites.
Supposons d\'esormais que~$n\ge 2$ et que le r\'esultat est d\'emontr\'e pour les recouvrements \'el\'ementaires de cardinal inf\'erieur \`a~$n-1$. Soit un \'el\'ement~$U$ de~$\mathcal{U}$ contenant~$\eta$. C'est un voisinage \'el\'ementaire d'un point~$x$ de~$E$.
Si~$x$ est de type~1 ou~4, alors tout voisinage \'el\'ementaire de~$x$ contenant~$\eta$ contient $E$ tout entier, et nous sommes donc ramen\'es au cas $n=1$ d\'ej\`a trait\'e.
Nous pouvons donc supposer que~$x$ est de type~2 ou~3. Il existe alors un ensemble $F = \{C_{1},\dots,C_{p}\}$ de composantes connexes born\'ees de $\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}} \setminus\{x\}$, pour tout $i\in\cn{1}{p}$, un polyn\^ome irr\'eductible~$P_{i}$ s'annulant en un point de~$C_{i}$ et deux nombres r\'eels $r,s$ avec $r < |P_{F}(x)| < s$ tels que $U = \mathring{D}(P_{F};r,s)$, o\`u $P_{F} = \prod_{1\le i\le p} P_{i}$. Remarquons que, puisque $U$ contient~$\eta$, on a
\[U \cap E = \{y\in \ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}} : r < |P_{F}(y)| \le |P_{F}(\eta)|\} \cap E.\]
Si $r<0$, alors $\mathring{D}(P_{F};r,s) = \mathring{D}(P_{F},s)$ contient~$E$, et nous sommes de nouveau ramen\'es au cas $n=1$. Nous pouvons donc supposer que $r>0$.
Soit $i\in \cn{1}{p}$. Le compact $C_{i} \setminus U$ est de la forme $\overline{D}(P_{i};a_{i})$ et il est recouvert par $\mathcal{U}\setminus\{U\}$. Il existe $c_{i} >a_{i}$ tel que $\overline{D}(P_{i};c_{i})$ soit encore contenu dans~$C_{i}$ et recouvert par $\mathcal{U}\setminus\{U\}$.
Si $k$ est parfait, le polyn\^ome~$P_{i}$ est s\'eparable. Si~$k$ est de valuation non triviale, on peut perturber l\'eg\`erement les coefficients de~$P_{i}$ de fa\c{c}on \`a le rendre s\'eparable sans changer les disques $\overline{D}(P_{i};b_{i})$ pour $b_{i} \in [a_{i},c_{i}]$. Alors, le polyn\^ome $R_{i}(T) = \mathrm{R\acute{e}s}_{S}(P_{i}(S)-T,P'_{i}(S))$ n'est pas nul. Quitte \`a diminuer~$c_{i}$, on peut supposer qu'il ne s'annule pas sur $\overline{C}(b_{i},b_{i})$, pour tout $b_{i} \in \intoo{a_{i},c_{i}}$.
D'apr\`es l'hypoth\`ese de r\'ecurrence appliqu\'ee \`a~$\overline{D}(P_{i},a_{i})$, il existe un arbre binaire~$\mathcal{T}_{i}$ et une famille $\mathcal{E}_{i} = (P_{v},s_{v})_{v\in \mathcal{I}(\mathcal{T}_{i})}$ d'\'el\'ements de $k[T]\times\ensuremath{\mathbf{R}}_{>0}$ tels que l'arbre binaire de compacts $\mathcal{T}_{i}(\mathcal{E}_{i})$ sur $\overline{D}(P_{i},a_{i})$ satisfasse les propri\'et\'es de l'\'enonc\'e (y compris la propri\'et\'e finale si~$k$ est parfait ou de valuation non triviale). Quitte \`a diminuer~$c_{i}$, on peut supposer que, pour tout $b_{i} \in \intff{a_{i},c_{i}}$, l'arbre binaire de compacts $\mathcal{T}_{i}(\mathcal{E}_{i})$ sur $\overline{D}(P_{i},b_{i})$ satisfait encore ces propri\'et\'es. Remarquons que, pour tout $b_{i} \in \intof{a_{i},c_{i}}$ et tout n\oe ud interne~$v$ de~$\mathcal{T}_{i}$, on a
\begin{equation}\label{eq:biendecoupe}\tag{a}
\{x\in \E{1}{V} : |P_{v}(x)| = s_{v}\} \cap \{x\in \E{1}{V} : |P_{i}(x)| = b_{i}\} = \emptyset.
\end{equation}
Rappelons que $U = \mathring{D}(P_{F};r,s)$. Il existe donc $r' \in \intoo{r,|P_{F}(x)|}$ tel que, en posant
\[V := \{y\in \ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}} : r' \le |P_{F}(y)| \le |P_{F}(\eta)|\} \cap E \subset U,\]
on ait, pour tout~$i \in \cn{1}{p}$, $C_{i} \setminus V \subset \mathring{D}(P_{i};c_{i})$.
Soit $i\in\cn{1}{p}$. Il existe $b_{i} \in \intoo{a_{i},c_{i}}$ tel que l'on ait
\begin{equation}\label{eq:intersection}\tag{b}
\overline{D}(P_{i};b_{i}) \cap V = \{y\in\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}} : |P_{i}(y)| = b_{i}\}.
\end{equation}
On a alors
\begin{equation}\label{eq:V}\tag{c}
V = E \cap \bigcap_{1\le i\le p} \{y\in\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{k}} : |P_{i}(y)| \ge b_{i}\}.
\end{equation}
D\'efinissons maintenant un arbre binaire de compacts sur~$E$. Commen\c{c}ons par consid\'erer l'arbre \og peigne \fg{} \`a $2p+1$ n{\oe}uds avec les dents \`a gauche, \cf~figure~\ref{fig:peigne}.
Pour l'obtenir, on part d'un arbre r\'eduit \`a sa racine et on effectue~$p$ fois l'op\'eration qui consiste \`a ajouter un fils gauche et un fils droit \`a la feuille la plus \`a droite. Notons~$\mathcal{T}_{0}$ cet arbre. Ses n{\oe}uds internes forment un ensemble totalement ordonn\'e $v_{0} \ge \dotsb \ge v_{p-1}$, o\`u~$v_{0}$ d\'esigne la racine et~$v_{p-1}$ le p\`ere des feuilles les plus basses.
\begin{figure}[!h]
\centering
\begin{tikzpicture}
\foreach \r in {0,...,1}
{\draw ({(\r)*cos(30)},-{(\r)*sin(30)}) -- ({(\r+1)*cos(30)},-{(\r+1)*sin(30)}) ;}
\foreach \r in {0,...,2}
{\fill ({(\r)*cos(30)},-{(\r)*sin(30)}) circle (2pt) ;
\draw ({(\r)*cos(30)},-{(\r)*sin(30)}) node[above right]{$v_{\r}$} ;
\draw ({(\r)*cos(30)},-{(\r)*sin(30)}) -- ({(\r)*cos(30) - cos(60)},-{(\r)*sin(30)-sin(60)});
\fill ({(\r)*cos(30) - cos(60)},-{(\r)*sin(30)-sin(60)}) circle (2pt) ;
\draw ({(\r)*cos(30) - cos(60)},-{(\r)*sin(30)-sin(60)}) node[below left]{$v_{\r}^-$} ;
}
\draw[dashed] ({(2)*cos(30)},-{(2)*sin(30)}) -- ({(2+2)*cos(30)},-{(2+2)*sin(30)}) ;
\foreach \r in {4}
{\fill ({(\r)*cos(30)},-{(\r)*sin(30)}) circle (2pt) ;
\draw ({(\r)*cos(30)},-{(\r)*sin(30)}) -- ({(\r+1)*cos(30)},-{(\r+1)*sin(30)}) ;
\draw ({(\r+1)*cos(30)},-{(\r+1)*sin(30)}) node[above right]{$v_{p-1}^+$} ;
\draw ({(\r)*cos(30)},-{(\r)*sin(30)}) -- ({(\r)*cos(30) - cos(60)},-{(\r)*sin(30)-sin(60)});
\fill ({(\r)*cos(30) - cos(60)},-{(\r)*sin(30)-sin(60)}) circle (2pt) ;
\draw ({(\r)*cos(30) - cos(60)},-{(\r)*sin(30)-sin(60)}) node[below left]{$v_{p-1}^-$} ;
}
\fill ({(5)*cos(30)},-{(5)*sin(30)}) circle (2pt) ;
\draw ({(4)*cos(30)},-{(4)*sin(30)}) node[above right]{$v_{p-1}$} ;
\draw (-0.5,-3.3) node[above right]{$\mathcal{T}_{0}$} ;
\end{tikzpicture}
\caption{Un arbre peigne \`a $2p+1$ n\oe uds.}\label{fig:peigne}
\end{figure}
Consid\'erons maintenant la famille $\mathcal{E}_{0} = (P_{i},b_{i})_{1\le i\le p}$ et l'arbre binaire de compacts \'el\'ementaire $\mathcal{T}_{0}(\mathcal{E}_{0})$ sur~$E$ qui lui est associ\'e. Ses feuilles sont $v_{0}^-$ (\'etiquet\'ee par $\overline{D}(P_{1},b_{1})$), \dots, $v_{p-1}^-$ (\'etiquet\'ee par $\overline{D}(P_{p},b_{p})$) et~$v_{p-1}^+$ (\'etiquet\'ee par~$V$, d'apr\`es~\eqref{eq:V}). Puisque, pour tout $i \in \{1,\dotsc,p\}$, $\overline{D}(P_{i},b_{i})$ est contenu dans~$C_{i}$, $\mathcal{T}_{0}(\mathcal{E}_{0})$ est bien d\'ecoup\'e.
D'apr\`es~\eqref{eq:intersection}, toutes les intersections $K(v^-)\cap K(v^+)$ sont de la forme $\overline{C}(P_{i};b_{i},b_{i})$, avec $i\in \cn{1}{p}$. On d\'eduit alors de la proposition~\ref{prop:systemeCR} que~$\mathcal{T}_{0}(\mathcal{E}_{0})$ est un arbre de Cousin-Runge fort sur~$E$.
Construisons maintenant un autre arbre binaire de compacts~$\mathcal{T}$ sur~$E$ \`a partir de~$\mathcal{T}_{0}(\mathcal{E}_{0})$ en rempla\c{c}ant, pour tout $i\in\cn{1}{p}$, la feuille~$v_{i-1}^-$, \'etiquet\'ee par $\overline{D}(P_{i},b_{i})$, par l'arbre~$\mathcal{T}_{i}(\mathcal{E}_{i})$. L'arbre~$\mathcal{T}$ est encore un arbre de Cousin-Runge fort sur~$E$. On v\'erifie ais\'ement que c'est un arbre \'el\'ementaire, associ\'e \`a la famille obtenue en r\'eunissant~$\mathcal{E}_{0}$ et les~$\mathcal{E}_{i}$, pour $i\in\cn{1}{p}$.
L'arbre~$\mathcal{T}$ est adapt\'e \`a~$\mathcal{U}$ car chacune de ses feuilles est soit une feuille de l'un des~$\mathcal{T}_{i}(\mathcal{E}_{i})$, pour $i\in \cn{1}{p}$, soit~$V$. Le fait qu'il soit bien d\'ecoup\'e d\'ecoule du fait que $\mathcal{T}_{0}(\mathcal{E}_{0})$ est bien d\'ecoup\'e et de~\eqref{eq:biendecoupe}.
\end{proof}
\subsection{Au-dessus d'une base ultram\'etrique} Soit $X$ un espace $\mathcal{A}$-analytique.
Nous souhaitons montrer que la propri\'et\'e de poss\'eder assez d'arbres de Cousin-Runge se transf\`ere d'une partie de l'espace~$X$ \`a un disque ferm\'e relatif au-dessus de cette partie.
\begin{coro}\label{cor:CREx}\index{Arbre!sur une \'etoile}\index{Resultant@R\'esultant}
Soit~$x$ un point d\'ecent (resp. tr\`es d\'ecent) de~$X_{\mathrm{um}}$. Posons $\ensuremath{\mathbf{A}}^1_{X} := X \times_{\mathcal{A}} \ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}}$ et notons $\pi \colon \ensuremath{\mathbf{A}}^1_{X} \to X$ le morphisme de projection. Soit~$E$ une \'etoile de~$\pi^{-1}(x)$. Alors, pour toute famille d'ouverts~$\mathcal{U}$ de~$\ensuremath{\mathbf{A}}^1_{X}$ recouvrant~$E$, il existe un arbre binaire~$\mathcal{T}$ et une famille $\mathcal{E} = (P_{v},s_{v})_{v\in \mathcal{I}(\mathcal{T})}$ d'\'el\'ements de $\mathcal{O}_{x}[T]\times\ensuremath{\mathbf{R}}_{>0}$ index\'ee par l'ensemble des n{\oe}uds internes de~$\mathcal{T}$ tels que l'arbre binaire de compacts \'el\'ementaire $\mathcal{T}(\mathcal{E})$ sur~$E$ soit bien d\'ecoup\'e, adapt\'e \`a~$\mathcal{U}$ et de Cousin-Runge (resp. de Cousin-Runge fort). De plus, si $f \ne g$ sont des feuilles de~$\mathcal{T}$, alors $K(f) \cap K(g)$ est soit vide, soit une \'etoile de~$\pi^{-1}(x)$.
En outre, si~$\mathcal{H}(x)$ est parfait ou de valuation non triviale, on peut supposer que, pour tout n{\oe}ud interne~$v$ de~$\mathcal{T}$, le polyn\^ome $R_{v}(T) = \mathrm{R\acute{e}s}_{S}(P_{v}(S)-T,P_{v}'(S))$ ne s'annule pas sur $\overline{C}(s_{v},s_{v})$.
\end{coro}
\begin{proof}
D'apr\`es la proposition~\ref{prop:CRcorps} appliqu\'ee \`a~$E$, il existe un arbre binaire~$\mathcal{T}$ et une famille $\mathcal{E} = (P_{v},s_{v})_{v\in \mathcal{I}(\mathcal{T})}$ d'\'el\'ements de $\mathcal{H}(x)[T]\times\ensuremath{\mathbf{R}}_{>0}$, les $P_{v}$ \'etant irr\'eductibles, tels que l'arbre binaire de compacts \'el\'ementaire $\mathcal{T}(\mathcal{E})$ sur~$E$ soit bien d\'ecoup\'e et adapt\'e \`a~$\mathcal{U}$. Si~$\mathcal{H}(x)$ est parfait ou de valuation non triviale, on peut supposer que, pour tout n{\oe}ud interne~$v$ de~$\mathcal{T}$, le polyn\^ome $R_{v}(T) = \mathrm{R\acute{e}s}_{S}(P_{v}(S)-T,P_{v}'(S))$ ne s'annule pas sur $\overline{C}(s_{v},s_{v})$.
L'image de~$\mathcal{O}_{X,x}$ dans~$\mathcal{H}(x)$ \'etant dense, on peut modifier les polyn\^omes~$P_{v}$, o\`u~$v$ est un n{\oe}ud interne de~$\mathcal{T}$, de fa\c{c}on que leurs coefficients appartiennent \`a~$\mathcal{O}_{X,x}$ sans modifier les ensembles $\{y\in \E{1}{\mathcal{H}(x)} : |P_{v}(y)| \bowtie s_{v}\}$, avec $\bowtie\, \in \{\le, \ge, =\}$. Les propri\'et\'es de l'arbre~$\mathcal{T}(\mathcal{E})$ sont alors pr\'eserv\'ees. Puisque les \'etiquettes n'ont pas \'et\'e modifi\'ees, les intersections des \'etiquettes de feuilles distinctes sont encore soit vides, soit des \'etoiles (\cf~remarque~\ref{rem:biendecoupe}). Si~$\mathcal{H}(x)$ est parfait ou de valuation non triviale, on peut \'egalement supposer que la condition de non-annulation du r\'esultant \'enonc\'ee ci-dessus reste satisfaite.
Pour tout n{\oe}ud interne~$v$ de~$\mathcal{T}(\mathcal{E})$, la proposition~\ref{prop:systemeCR} assure qu'il existe un syst\`eme de Cousin-Runge associ\'e \`a $(K(v^-),K(v^+))$. Si~$x$ est tr\`es d\'ecent, il s'agit m\^eme d'un syst\`eme de Cousin-Runge fort, d'apr\`es le corollaire~\ref{cor:BVcompactN} si~$x$ est tr\`es typique et le lemme~\ref{lem:conditionscar0} sinon.
\end{proof}
\begin{coro}\label{cor:CRfibreum}\index{Arbre!sur un disque}\index{Disque!arbre sur un|see{Arbre}}
Soit~$V$ une partie compacte et d\'ecente de~$X_{\mathrm{um}}$ et soient $r_{1},\dotsc,r_{n} \in \ensuremath{\mathbf{R}}_{>0}$. Si~$V$ poss\`ede assez d'arbres de Cousin (resp. de Cousin-Runge), alors il en va de m\^eme pour $\overline{D}_{V}(r_{1},\dotsc,r_{n})$.
\end{coro}
\begin{proof}
En raisonnant par r\'ecurrence, on se ram\`ene imm\'ediatement au cas o\`u~$n=1$. Notons $r=r_{1}$. Supposons que~$V$ poss\`ede assez d'arbres de Cousin (resp. de Cousin-Runge). Soit~$\mathcal{U}$ un recouvrement ouvert de~$\overline{D}_{V}(r)$.
Soit~$x$ un point de~$V$. D'apr\`es le corollaire~\ref{cor:CREx} appliqu\'e \`a $\overline{D}_{x}(r)$, il existe un arbre binaire~$\mathcal{T}_{x}$ et une famille $\mathcal{E}_{x} = (P_{v},s_{v})_{v\in \mathcal{I}(\mathcal{T}_{x})}$ d'\'el\'ements de $\mathcal{O}_{x}[T]\times\ensuremath{\mathbf{R}}_{>0}$ tels que l'arbre binaire de compacts \'el\'ementaire $\mathcal{T}_{x}(\mathcal{E}_{x})$ sur~$\overline{D}_{x}(r)$ soit bien d\'ecoup\'e et adapt\'e \`a~$\mathcal{U}$.
Consid\'erons un voisinage ouvert~$V_{x}$ de~$x$ dans~$V$ sur lequel les coefficients des polyn\^omes~$P_{v}$ sont d\'efinis. Soit~$W$ un voisinage compact de~$x$ dans~$V_{x}$.
Pour tout n\oe ud interne~$v$ de~$\mathcal{T}_{x}$, on notera encore~$P_{v}$ un repr\'esentant de~$P_{v}$ dans $\mathcal{O}(W)[T]$. Consid\'erons la famille $\mathcal{E}_{W} = (P_{v},s_{v})_{v\in \mathcal{I}(\mathcal{T}_{x})}$ de $\mathcal{O}(W)[T]\times\ensuremath{\mathbf{R}}_{>0}$ et l'arbre de compacts \'el\'ementaire $\mathcal{T}_{x}(\mathcal{E}_{W})$ associ\'e sur $\overline{D}_{W}(r)$. Puisque~$\mathcal{T}_{x}(\mathcal{E}_{x})$ est bien d\'ecoup\'e et adapt\'e \`a~$\mathcal{U}$, quitte \`a restreindre~$V_{x}$, on peut \'egalement supposer que $\mathcal{T}_{x}(\mathcal{E}_{W})$ est bien d\'ecoup\'e et adapt\'e \`a~$\mathcal{U}$.
En outre, pour tout n{\oe}ud interne~$v$ de~$\mathcal{T}_{x}(\mathcal{E}_{W})$, d'apr\`es la remarque~\ref{rem:biendecoupe} et la proposition~\ref{prop:systemeCR}, il existe un syst\`eme de Cousin (resp. Cousin-Runge) associ\'e \`a $(K(v^-),K(v^+))$.
La famille $\mathcal{V} = (V_{x})_{x\in V}$ forme un recouvrement ouvert de~$V$. Puisque~$V$ poss\`ede assez d'arbres de Cousin (resp. Cousin-Runge), il existe un arbre de Cousin (resp. Cousin-Runge) $\mathcal{T}$ sur~$V$ adapt\'e \`a~$\mathcal{V}$. Consid\'erons l'arbre binaire de compacts~$\mathcal{T}'$ obtenu en rempla\c{c}ant dans~$\mathcal{T}$ chaque \'etiquette~$K$ par l'\'etiquette $\overline{D}_{K}(r)$. D'apr\`es le lemme~\ref{lem:CRrelatif}, $\mathcal{T}'$ est encore un arbre de Cousin (resp. de Cousin-Runge).
Finalement, pour toute feuille~$f$ de~$\mathcal{T}$, consid\'erons un point~$x_{f}$ de~$V$ tel que $K(f) \subset V_{x_{f}}$ et rempla\c{c}ons la feuille correspondante de~$\mathcal{T}'$ par l'arbre~$\mathcal{T}_{x_{f}}(\mathcal{E}_{K(f)})$. Le r\'esultat est un arbre de Cousin (resp. de Cousin-Runge) sur~$\overline{D}_{V}(r)$ adapt\'e \`a~$\mathcal{U}$.
\end{proof}
\begin{coro}\label{cor:CRfortfibreum}\index{Arbre!sur un disque}
Soit~$V$ une partie compacte de~$X_{\mathrm{um}}$ et soient $r_{1},\dotsc,r_{n} \in \ensuremath{\mathbf{R}}_{>0}$. Supposons que l'une des conditions suivantes est satisfaite~:
\begin{enumerate}[i)]
\item tous les points de~$V$ sont ultram\'etriques tr\`es typiques~;
\item en tout point~$b$ de~$V$, le corps r\'esiduel compl\'et\'e~$\mathcal{H}(b)$ est de caract\'eristique nulle ou de valuation non triviale.
\end{enumerate}
Si~$V$ poss\`ede assez d'arbres de Cousin forts (resp. de Cousin-Runge forts), alors il en va de m\^eme pour $\overline{D}_{V}(r_{1},\dotsc,r_{n})$.
\end{coro}
\begin{proof}
Reprenons le d\'ebut de la d\'emonstration du corollaire~\ref{cor:CRfibreum} pour construire des arbres binaires de compacts~$\mathcal{T}_{x}(\mathcal{E}_{x})$ et~$\mathcal{T}_{x}(\mathcal{E}_{W})$.
Soit~$v$ un n\oe ud interne de~$\mathcal{T}_{x}(\mathcal{E}_{W})$. Pour poursuivre, on a besoin d'un syst\`eme de Cousin fort ou Cousin-Runge fort associ\'e \`a $(K(v^-),K(v^+))$. Dans le cas~i), son existence d\'ecoule de la remarque~\ref{rem:biendecoupe}, de la proposition~\ref{prop:systemeCR} et du corollaire~\ref{cor:BVcompactN}.
Dans le cas~ii), remarquons tout d'abord que l'assertion finale du corollaire~\ref{cor:CREx} permet de supposer que le polyn\^ome $R_{v}(T) = \mathrm{R\acute{e}s}_{S}(P_{v}(S)-T,P_{v}'(S))$ ne s'annule pas sur~$\overline{C}_{x}(s_{v},s_{v})$. Quitte \`a restreindre~$V_{x}$, on peut supposer qu'il ne s'annule pas non plus sur~$\overline{C}_{W}(s_{v},s_{v})$. L'existence d'un syst\`eme de Cousin fort ou de Cousin-Runge fort d\'ecoule alors de la remarque~\ref{rem:biendecoupe}, de la proposition~\ref{prop:systemeCR} et du lemme~\ref{lem:conditionscar0}.
On peut alors conclure en reprenant la fin de la d\'emonstration du corollaire~\ref{cor:CRfibreum}.
\end{proof}
\index{Arbre|)}
\section{Th\'eor\`emes A et B sur les disques ferm\'es relatifs}\label{sec:AB}
Dans cette section, nous d\'emontrons que les disques ferm\'es relatifs sur des bases convenables satisfont les th\'eor\`emes~A et~B. Nous commencerons par traiter le cas o\`u la base est un point, archim\'edien d'abord, ultram\'etrique ensuite, avant de traiter le cas d'une base plus grande.
\subsection{Fibres archim\'ediennes}
Commen\c{c}ons par \'etablir les th\'eor\`emes~A et~B pour les fibres archim\'ediennes de disques de Berkovich. Ils d\'ecoulent assez directement de r\'esultats classiques dans le cadre des espaces analytiques complexes.
\begin{prop}\label{prop:C}
Munissons le corps~$\ensuremath{\mathbf{C}}$ de la valeur absolue~$\va_{\infty}^\ensuremath{\varepsilon}$, avec $\ensuremath{\varepsilon}\in\intof{0,1}$. Soient $r_{1},\dotsc,r_{n}>0$ et~$\mathcal{F}$ un faisceau coh\'erent sur $\overline{D}_{\ensuremath{\mathbf{C}}}(r_{1},\dotsc,r_{n})$. Alors, $\mathcal{F}$ est globalement engendr\'e et, pour tout $q\ge 1$, on a
$H^q(\overline{D}_{\ensuremath{\mathbf{C}}}(r_{1},\dotsc,r_{n}),\mathcal{F})=0$.
\end{prop}
\begin{proof}
Le fait que la valeur absolue consid\'er\'ee soit une puissance de la valeur absolue usuelle, \'eventuellement diff\'erente de celle-ci, n'influe pas sur le r\'esultat et on peut supposer que $\ensuremath{\varepsilon}=1$, ce qui nous ram\`ene au cas des espaces analytiques complexes usuels. Il est alors bien connu que les disques ferm\'es sont des espaces de Stein et qu'ils v\'erifient les propri\'et\'es de l'\'enonc\'e (\cf~\cite[III, \S 3.2]{GR}).
\end{proof}
Nous allons maintenant nous int\'eresser aux espaces analytiques sur~$\ensuremath{\mathbf{R}}$ (muni de la valeur absolue~$\va^\ensuremath{\varepsilon}$, avec $\ensuremath{\varepsilon}\in\intof{0,1}$) au sens de V.~Berkovich. Rappelons que l'extension des scalaires $\ensuremath{\mathbf{R}} \to \ensuremath{\mathbf{C}}$ permet d'associer \`a tout espace de Berkovich~$Y$ sur~$\ensuremath{\mathbf{R}}$ un espace analytique complexe~$Y_{\ensuremath{\mathbf{C}}}$ et un morphisme $\pi \colon Y_{\ensuremath{\mathbf{C}}} \to Y$, qui n'est autre que le quotient par la conjugaison complexe (\cf~lemme~\ref{lem:AnC}). En particulier, pour tout point~$y$ de~$Y$, la fibre $\pi^{-1}(y)$ contient un point si $\mathscr{H}(y)\simeq \ensuremath{\mathbf{R}}$ et deux points conjugu\'es si $\mathscr{H}(y)\simeq \ensuremath{\mathbf{C}}$.
L'ingr\'edient cl\'e pour passer de~$\ensuremath{\mathbf{C}}$ \`a~$\ensuremath{\mathbf{R}}$ est le r\'esultat suivant, d\^u \`a Q.~Liu (\cf~\cite[proposition~2]{LiuContre-Exemple}) et \'enonc\'e sous cette forme dans \cite[th\'eor\`eme~6.11]{A1Z} (\`a une hypoth\`ese manquante pr\`es).
\begin{lemm}\label{lem:Liu}\index{Theoreme@Th\'eor\`eme!B}
Soit $\varphi \colon Z \to Y$ un morphisme surjectif d'espaces analytiques. Supposons que l'espace~$Z$ et les ferm\'es de Zariski de~$Y$ de support diff\'erent de~$Y$ satisfont le th\'eor\`eme~B. Supposons \'egalement qu'il existe un point~$y$ tel que~$\{y\}$ soit l'espace topologique sous-jacent \`a un ferm\'e analytique de~$Y$ et que la fibre $(\varphi_{\ast}\mathscr{O}_{Z})_{y}$ soit un $\mathscr{O}_{Y,y}$-module libre de rang fini. Alors, l'espace~$Y$ satisfait le th\'eor\`eme~B.
\qed
\end{lemm}
\begin{prop}\label{prop:R}
Consid\'erons le corps~$\ensuremath{\mathbf{R}}$ muni de la valeur absolue~$\va_{\infty}^\ensuremath{\varepsilon}$, avec $\ensuremath{\varepsilon}\in\intof{0,1}$. Soient $r_{1},\dotsc,r_{n}>0$ et~$\mathcal{F}$ un faisceau coh\'erent sur $\overline{D}_{\ensuremath{\mathbf{R}}}(r_{1},\dotsc,r_{n})$. Alors, $\mathcal{F}$ est globalement engendr\'e et, pour tout $q\ge 1$, on a
$H^q(\overline{D}_{\ensuremath{\mathbf{R}}}(r_{1},\dotsc,r_{n}),\mathcal{F})=0$.
\end{prop}
\begin{proof}
Munissons $\overline{D}_{\ensuremath{\mathbf{R}}}(r_{1},\dotsc,r_{n})$ et $\overline{D}_{\ensuremath{\mathbf{C}}}(r_{1},\dotsc,r_{n})$ de la topologie dont les ferm\'es sont les ferm\'es analytiques. D'apr\`es \cite[proposition I,7]{Frisch}, l'espace topologique $\overline{D}_{\ensuremath{\mathbf{C}}}(r_{1},\dotsc,r_{n})$ est noeth\'erien et il en va donc de m\^eme de l'espace $\overline{D}_{\ensuremath{\mathbf{R}}}(r_{1},\dotsc,r_{n})$ sur lequel il se surjecte contin\^ument.
Supposons, par l'absurde, qu'il existe un ferm\'e analytique de $\overline{D}_{\ensuremath{\mathbf{R}}}(r_{1},\dotsc,r_{n})$ ne satisfaisant pas le th\'eor\`eme~B. Par noeth\'erianit\'e, il en existe un qui soit minimal pour cette propri\'et\'e~; notons-le~$Y$. L'espace~$Y_{\ensuremath{\mathbf{C}}}$ est un ferm\'e analytique de $\overline{D}_{\ensuremath{\mathbf{C}}}(r_{1},\dotsc,r_{n})$ et satisfait donc le th\'eor\`eme~B, d'apr\`es la proposition~\ref{prop:C} (appliqu\'ee aux images directes sur $\overline{D}_{\ensuremath{\mathbf{C}}}(r_{1},\dotsc,r_{n})$ des faisceaux coh\'erents sur~$Y_{\ensuremath{\mathbf{C}}}$)~; par minimalit\'e de~$Y$, ses ferm\'es de Zariski de support diff\'erent de~$Y$ le satisfont \'egalement. L'espace~$Y_{\ensuremath{\mathbf{C}}}$ est n\'ecessairement de dimension strictement positive et l'espace~$Y$ contient donc un point~$y$ dont le corps r\'esiduel est isomorphe \`a~$\ensuremath{\mathbf{C}}$. Le morphisme $\pi \colon Y_{\ensuremath{\mathbf{C}}} \to Y$ d\'efinit alors un isomorphisme d'un voisinage de chacun des deux ant\'ec\'edents de~$y$ vers un voisinage de~$y$. Par cons\'equent, la fibre $(\pi_{\ast}\mathscr{O}_{Y_{\ensuremath{\mathbf{C}}}})_{y}$ est un $\mathscr{O}_{Y,y}$-module libre de rang deux. D'apr\`es le lemme~\ref{lem:Liu}, $Y$ satisfait le th\'eor\`eme~B et nous aboutissons \`a une contradiction.
Il est par ailleurs classique que, sur un espace qui satisfait le th\'eor\`eme~B et dont tout point est l'espace topologique sous-jacent \`a un ferm\'e analytique, tout faisceau coh\'erent est globalement engendr\'e (\cf~\cite[IV, \S 1, theorem~2]{GR} ou~\cite[th\'eor\`eme~6.1.9]{A1Z} -- l\`a encore, il manque une hypoth\`ese).
\end{proof}
Soit $K$ le corps $\ensuremath{\mathbf{R}}$ ou~$\ensuremath{\mathbf{C}}$ muni de la norme $\nm_{\mathrm{hyb}} = \max(\va_{0},\va_{\infty})$, o\`u~$\va_{0}$ d\'esigne la valeur absolue triviale et~$\va_{\infty}$ la valeur absolue usuelle. Rappelons que le spectre~$\mathcal{M}(K)$ est constitu\'e des points associ\'es aux valeurs absolues~$\va_{\infty}^\ensuremath{\varepsilon}$ pour $\ensuremath{\varepsilon}\in[0,1]$ (en identifiant~$\va_{\infty}^0$ \`a~$\va_{0}$), \cf~exemple~\ref{ex:corpshybride}.
\begin{prop}\label{prop:fibrearc}
Soit $K$ le corps $\ensuremath{\mathbf{R}}$ ou~$\ensuremath{\mathbf{C}}$ muni de la norme $\nm_{\mathrm{hyb}}$. Posons $X=\mathcal{M}(K)$. Soit~$b$ un point de~$X$ associ\'e \`a une valeur absolue archim\'edienne (c'est-\`a-dire $\va_{\infty}^\ensuremath{\varepsilon}$, avec $\ensuremath{\varepsilon}\in\intof{0,1}$). Soient $r_{1},\dotsc,r_{n} \in \ensuremath{\mathbf{R}}_{>0}$ et $\mathcal{F}$~un faisceau coh\'erent sur $\overline{D}_{b}(r_{1},\dotsc,r_{n})$. Alors, $\mathcal{F}$ est globalement engendr\'e et, pour tout $q\ge 1$, on a
$H^q(\overline{D}_{b}(r_{1},\dotsc,r_{n}),\mathcal{F})=0$.
\end{prop}
\begin{proof}
D'apr\`es \cite[proposition~3.4.6]{A1Z}, l'inclusion
\[j_{b} \colon \overline{D}_{b}(r_{1},\dotsc,r_{n}) \hookrightarrow \overline{D}_{X}(r_{1},\dotsc,r_{n})\]
de la fibre dans l'espace total induit un isomorphisme d'espaces annel\'es
\[\bigl(\overline{D}_{b}(r_{1},\dotsc,r_{n}), j_{b}^{-1} \mathscr{O}_{\overline{D}_{X}(r_{1},\dotsc,r_{n})}\bigr) \xrightarrow[]{\sim} (\overline{D}_{b}(r_{1},\dotsc,r_{n}),\mathscr{O}_{\overline{D}_{b}(r_{1},\dotsc,r_{n})}).\]
Il suffit donc de d\'emontrer les r\'esultats sur l'espace de gauche, qui est un espace analytique sur un corps valu\'e. Les propositions~\ref{prop:C} et~\ref{prop:R} permettent alors de conclure.
\end{proof}
\subsection{Fibres ultram\'etriques}
Soit~$X$ un espace $\mathcal{A}$-analytique.
\begin{prop}\label{prop:H1}
Soient $r_{1},\dotsc,r_{n} \in \ensuremath{\mathbf{R}}_{>0}$, $\overline{D}_{X}(r_{1},\dotsc,r_{n})$ le disque relatif et $\pi \colon \overline{D}_{X}(r_{1},\dotsc,r_{n}) \to X$ le morphisme de projection. Soit~$V$ une partie compacte de~$X$.
Supposons que~$V$ poss\`ede assez d'arbres de Cousin et que, pour tout point~$x$ de~$V$, on a $H^1(\pi^{-1}(x),\mathcal{O})=0$. Alors, on a $H^{1}(\pi^{-1}(V),\mathcal{O}) = 0$.
En particulier, si~$V$ poss\`ede assez d'arbres de Cousin, on a $H^1(V,\mathcal{O})=0$.
\end{prop}
\begin{proof}
Soit
\[0 \to \mathcal{O} \to \mathcal{I}_{0} \xrightarrow[]{d} \mathcal{I}_{1} \xrightarrow[]{d} \dotsc\]
une r\'esolution injective du faisceau~$\mathcal{O}$ sur~$\pi^{-1}(V)$. Soit $\alpha\in \mathcal{I}_{1}(\pi^{-1}(V))$ tel que $d\alpha=0$.
Soit~$x$ un point de~$V$. Par hypoth\`ese, nous avons $H^{1}(\pi^{-1}(x),\mathcal{O}) = 0$, donc il existe un \'el\'ement $\beta_{x} \in \mathcal{I}_{0}(\pi^{-1}(x))$ tel que $d \beta_{x} = \alpha_{|\pi^{-1}(x)}$. Par d\'efinition, il existe un voisinage~$U_{x}$ de~$\pi^{-1}(x)$ et un \'el\'ement $\gamma_{x}$ de $\mathcal{I}_{0}(U_{x})$ dont la restriction \`a~$\pi^{-1}(x)$ est \'egale \`a~$\beta_{x}$. En outre, $d \gamma_{x}$ est \'egale \`a~$\alpha$ sur un voisinage de~$\pi^{-1}(x)$, que l'on peut supposer \^etre \'egal \`a~$U_{x}$. En d'autres termes, l'image de~$\alpha$ dans $H^{1}(U_{x},\mathcal{O})$ est nulle. Puisque~$\pi$ est propre, on peut supposer que~$U_{x}$ est de la forme $\pi^{-1}(V_{x})$, o\`u~$V_{x}$ est un voisinage ouvert de~$x$ dans~$X$.
Consid\'erons le recouvrement ouvert $\mathcal{V} = (V_{x})_{x\in V}$ de~$V$. Par hypoth\`ese, il existe un arbre de Cousin~$\mathcal{T}$ sur~$V$ adapt\'e \`a~$\mathcal{V}$. Par construction, pour toute feuille~$f$ de~$\mathcal{T}$, l'image de~$\alpha$ dans $H^1(\pi^{-1}(K(f)),\mathcal{O})$ est nulle.
En remontant pas \`a pas dans l'arbre, on d\'emontre plus g\'en\'eralement que, pour tout n{\oe}ud~$v$ de~$\mathcal{T}$, l'image de~$\alpha$ dans $H^1(\pi^{-1}(K(v)),\mathcal{O})$ est nulle. En effet, supposons le r\'esultat d\'emontr\'e pour les deux fils~$v^-$ et~$v^+$ d'un n{\oe}ud~$v$ de~$\mathcal{T}$. Par hypoth\`ese, il existe un syst\`eme de Cousin associ\'e \`a $(K(v^-),K(v^+))$. En particulier, le morphisme
\[\begin{array}{rcl}
\mathcal{O}(K(v^-))\times \mathcal{O}(K(v^+)) &\to & \mathcal{O}(K(v^-) \cap K(v^+))\\
(f^-,f^+) & \mapsto & f^--f^+
\end{array} \]
est surjectif et il en va donc de m\^eme du morphisme
\[\begin{array}{rcl}
\mathcal{O}(\pi^{-1}(K(v^-)))\times \mathcal{O}(\pi^{-1}(K(v^+))) &\to & \mathcal{O}(\pi^{-1}(K(v^-)) \cap \pi^{-1}(K(v^+)))\\
(f^-,f^+) & \mapsto & f^--f^+
\end{array},\]
d'apr\`es le lemme~\ref{lem:CRrelatif}. Le r\'esultat d\'ecoule alors de la suite exacte longue de Mayer-Vietoris
\[\hspace{-170pt}
\begin{tikzcd}
\dotsb \longrightarrow H^{0}(\pi^{-1}(K(v^-)),\mathcal{O}) \oplus H^{0}(\pi^{-1}(K(v^+)),\mathcal{O})
\ar[out=0, in=180, looseness=2]{d}\\
H^0(\pi^{-1}(K(v^-)) \cap \pi^{-1}(K(v^+)),\mathcal{O}) \longrightarrow H^{1}(\pi^{-1}(K(v)),\mathcal{O}) \ar[out=0, in=180, looseness=2]{d}\\
H^{1}(\pi^{-1}(K(v^-)),\mathcal{O}) \oplus H^{1}(\pi^{-1}(K(v^+)),\mathcal{O}) \longrightarrow \dotsb
\end{tikzcd}\]
Puisque la racine de~$\mathcal{T}$ est \'etiquet\'ee par le compact~$V$, ceci d\'emontre que l'image de~$\alpha$ dans $H^1(\pi^{-1}(V),\mathcal{O})$ est nulle.
\smallbreak
La partie finale du r\'esultat d\'ecoule du cas~$n=0$ en remarquant que, pour tout point~$x$ de~$V$, on a $H^1(\{x\},\mathcal{O})=0$.
\end{proof}
\begin{lemm}\label{lem:Hq}
Soient $r_{1},\dotsc,r_{n} \in \ensuremath{\mathbf{R}}_{>0}$, $\overline{D}_{X}(r_{1},\dotsc,r_{n})$ le disque relatif et $\pi \colon \overline{D}_{X}(r_{1},\dotsc,r_{n}) \to X$ le morphisme de projection. Soient~$V$ une partie compacte de~$X$ et~$\mathcal{F}$ un faisceau de groupes ab\'eliens sur~$\pi^{-1}(V)$. Soit $q\ge 1$.
Supposons que~$V$ est de dimension de recouvrement inf\'erieure \`a~1 et que, pour tout point~$x$ de~$V$, on a $H^q(\pi^{-1}(x),\mathcal{F})=H^{q+1}(\pi^{-1}(x),\mathcal{F})=0$. Alors, on a $H^{q+1}(\pi^{-1}(V),\mathcal{F}) = 0$.
\end{lemm}
\begin{proof}
Consid\'erons la suite spectrale de Leray
\[ E_{2}^{p,p'} = H^p(V,R^{p'} \pi_{\ast} \mathcal{F}) \implies H^{p+p'}(\pi^{-1}(V),\mathcal{F}). \]
Puisque~$V$ est de dimension de recouvrement inf\'erieure \`a~1, pour tout faisceau de groupes ab\'eliens~$\mathcal{G}$ sur~$V$ et tout $p'\ge 2$, on a $H^{p'}(V,\mathcal{G})=0$. En particulier, seules les deux premi\`eres colonnes de la deuxi\`eme page de la suite spectrale peuvent contenir des termes non nuls. En outre, par hypoth\`ese, on a $R^q \pi_{\ast}\mathcal{F} = R^{q+1} \pi_{\ast}\mathcal{F} = 0$. Le r\'esultat s'en d\'eduit.
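Pour fixer les id\'ees, les seuls termes de la deuxi\`eme page susceptibles de contribuer \`a $H^{q+1}(\pi^{-1}(V),\mathcal{F})$ sont
\[ E_{2}^{0,q+1} = H^{0}(V,R^{q+1}\pi_{\ast}\mathcal{F}) = 0 \quad\text{et}\quad E_{2}^{1,q} = H^{1}(V,R^{q}\pi_{\ast}\mathcal{F}) = 0,\]
puisque $E_{2}^{p,p'} = 0$ d\`es que $p\ge 2$~; tous les gradu\'es de $H^{q+1}(\pi^{-1}(V),\mathcal{F})$ sont donc nuls.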
\end{proof}
\begin{coro}\label{cor:HqO}
Soit~$x$ un point d\'ecent de~$X_{\mathrm{um}}$. Alors, pour tous $r_{1},\dotsc,r_{n} \in \ensuremath{\mathbf{R}}_{>0}$ et tout $q\ge 1$, on a
\[H^q(\overline{D}_{x}(r_{1},\dotsc,r_{n}),\mathcal{O}) = 0.\]
\end{coro}
\begin{proof}
D\'emontrons le r\'esultat par r\'ecurrence sur~$n$. Pour $n=0$, on a bien $H^q(\{x\},\mathscr{O}) = 0$ pour tout $q\ge 1$.
Supposons que le r\'esultat est vrai pour les disques de dimension~$n$ et d\'emontrons-le pour la dimension~$n+1$. Soient $r_{1},\dotsc,r_{n+1} \in \ensuremath{\mathbf{R}}_{>0}$.
D'apr\`es le corollaire~\ref{cor:CRfibreum}, le disque relatif~$\overline{D}_{x}(r_{1},\dotsc,r_{n+1})$ poss\`ede assez d'arbres de Cousin. D'apr\`es la proposition~\ref{prop:H1}, on a donc $H^1(\overline{D}_{x}(r_{1},\dotsc,r_{n+1}),\mathcal{O})=0$.
Consid\'erons le morphisme de projection sur la derni\`ere coordonn\'ee $\pi \colon \overline{D}_{x}(r_{1},\dotsc,r_{n+1}) \to \overline{D}_{x}(r_{n+1})$. Par hypoth\`ese de r\'ecurrence, pour tout point~$y$ de $\overline{D}_{x}(r_{n+1})$ et tout $q\ge 1$, on a $H^q(\pi^{-1}(y),\mathcal{O})=0$. Le lemme~\ref{lem:Hq} assure alors que, pour tout $q\ge 2$, on a $H^q(\overline{D}_{x}(r_{1},\dotsc,r_{n+1}),\mathcal{O})=0$.
\end{proof}
\subsection{Cas g\'en\'eral}
Soit~$X$ un espace $\mathcal{A}$-analytique.
Nous souhaitons montrer que les disques ferm\'es relatifs au-dessus de certaines parties de~$X$ satisfont les th\'eor\`emes~A et~B. D\'efinissons maintenant les parties sur lesquelles nous allons travailler.
\begin{defi}\label{def:basedeStein}\index{Base de Stein|textbf}
Une \emph{base de Stein} de~$X$ est une partie compacte~$V$ de~$X$ qui satisfait les conditions suivantes~:
\begin{enumerate}[i)]
\item $V$ est de dimension de recouvrement inf\'erieure \`a~1~;
\item tout point ultram\'etrique de~$V$ est d\'ecent~;
\item tout point archim\'edien de~$V$ est isol\'e ou poss\`ede un voisinage lin\'eaire dans~$V$~;
\item tout polydisque ferm\'e relatif sur~$V$ est de dimension de recouvrement finie~;
\item pour tout recouvrement ouvert~$\mathcal{U}$ de~$V$, il existe un arbre de Cousin-Runge fort \`a intersections simples~$\mathcal{T}_{\mathcal{U}}$ sur~$V$ adapt\'e \`a~$\mathcal{U}$ tel que, pour toutes feuilles $f\ne g$ de~$\mathcal{T}_{\mathcal{U}}$, le compact $K(f)\cap K(g)$ poss\`ede assez d'arbres de Cousin.
\end{enumerate}
\end{defi}
\begin{exem}\label{ex:basedeStein}
\index{Corps!valu\'e}\index{Anneau!des entiers relatifs $\ensuremath{\mathbf{Z}}$}\index{Anneau!des entiers d'un corps de nombres}\index{Corps!hybride}\index{Anneau!de valuation discr\`ete}\index{Anneau!de Dedekind trivialement valu\'e}
Soit $\mathcal{A}$ l'un de nos anneaux de Banach usuels~: corps valu\'es, anneaux d'entiers de corps de nombres, corps hybrides, anneaux de valuation discr\`ete, anneaux de Dedekind trivialement valu\'es (\cf~exemples~\ref{ex:corpsvalue} \`a~\ref{ex:Dedekind}). Alors, toute partie compacte et connexe de~$\mathcal{M}(\mathcal{A})$ est une base de Stein.
En effet, soient~$V$ une telle partie et~$\mathcal{U}$ un recouvrement ouvert de~$V$. En utilisant la description explicite de~$\mathcal{M}(\mathcal{A})$, on montre que l'on peut raffiner~$\mathcal{U}$ en un recouvrement compact fini~$\mathcal{V}$ tel que l'intersection de deux parties quelconques de~$\mathcal{V}$ soit vide ou r\'eduite \`a un point non associ\'e \`a la valuation triviale sur~$\mathcal{A}$. On peut alors associer un syst\`eme de Cousin-Runge fort par restriction, \`a partir du proc\'ed\'e d\'ecrit dans les exemples~\ref{ex:Cousin} et~\ref{ex:CousinRunge}. Le r\'esultat s'en d\'eduit.
\end{exem}
\begin{lemm}\label{lem:Dxrlineaire}
Soient $x$ un point tr\`es d\'ecent de~$X_{\mathrm{um}}$ et $r\in \ensuremath{\mathbf{R}}_{>0}$. Alors, la partie compacte $\overline{D}_{x}(r)$ de~$\ensuremath{\mathbf{A}}^1_{X}$ est une base de Stein.
\end{lemm}
\begin{proof}
Les points~i) et~iv) de la d\'efinition~\ref{def:basedeStein} d\'ecoulent du th\'eor\`eme~\ref{th:dimensioncorps}. Le point~ii) suit de la proposition~\ref{prop:typiqueAn}. Le point~iii) est trivialement v\'erifi\'e.
D\'emontrons le point~v). Posons $V := \overline{D}_{x}(r)$ et soit~$\mathcal{U}$ un recouvrement ouvert de~$V$. D'apr\`es le corollaire~\ref{cor:CREx}, il existe un arbre binaire de compacts \'el\'ementaire~$\mathcal{T}_{\mathcal{U}}$ sur~$V$ qui soit bien d\'ecoup\'e, adapt\'e \`a~$\mathcal{U}$ et de Cousin-Runge fort. D'apr\`es la remarque~\ref{rem:biendecoupe}, il est \`a intersections simples. Par construction, pour toutes feuilles $f\ne g$ de~$\mathcal{T}_{\mathcal{U}}$, le compact $K(f)\cap K(g)$ est soit vide, soit une \'etoile, et poss\`ede donc assez d'arbres de Cousin, d'apr\`es le corollaire~\ref{cor:CREx}.
\end{proof}
\begin{coro}\label{cor:HqVO}
Soit~$V$ une base de Stein de~$X$. Alors, pour tous $r_{1},\dotsc,r_{n} \in \ensuremath{\mathbf{R}}_{>0}$ et tout $q\ge 1$, on a
\[H^q(\overline{D}_{V}(r_{1},\dotsc,r_{n}),\mathcal{O}) = 0.\]
\end{coro}
\begin{proof}
Soient $r_{1},\dotsc,r_{n} \in \ensuremath{\mathbf{R}}_{>0}$, $\overline{D}_{X}(r_{1},\dotsc,r_{n})$ le disque relatif et $\pi \colon \overline{D}_{X}(r_{1},\dotsc,r_{n}) \to X$ le morphisme de projection. D'apr\`es la proposition~\ref{prop:fibrearc} et le corollaire~\ref{cor:HqO}, pour tout point~$x$ de~$V$, on a $H^q(\overline{D}_{x}(r_{1},\dotsc,r_{n}),\mathcal{O}) = 0$. Le r\'esultat d\'ecoule alors de la proposition~\ref{prop:H1} et du lemme~\ref{lem:Hq}.
\end{proof}
Rappelons que, d'apr\`es le th\'eor\`eme~\ref{th:dimensionetoile}, si~$V$ est une partie lin\'eaire ou d'Ostrowski de~$X$, alors, tout disque ferm\'e relatif sur~$V$ est de dimension de recouvrement finie.
\begin{coro}\label{cor:HqVFresolution}
Soit~$V$ une base de Stein de~$X$. Soient $r_{1},\dotsc,r_{n} \in \ensuremath{\mathbf{R}}_{>0}$ et~$\mathcal{F}$ un faisceau de $\mathcal{O}$-modules sur $\overline{D}_{V}(r_{1},\dotsc,r_{n})$. Supposons qu'il existe une r\'esolution libre
\[\mathcal{O}^{N_{d}} \to \mathcal{O}^{N_{d-1}} \to \dotsb \to \mathcal{O}^{N_{1}} \to \mathcal{F} \to 0,\]
o\`u $d := \dimr(\overline{D}_{V}(r_{1},\dotsc,r_{n}))$. Alors, pour tout $q\ge 1$, on a
\[H^q(\overline{D}_{V}(r_{1},\dotsc,r_{n}),\mathcal{F}) = 0.\]
\end{coro}
\begin{proof}
Pour $m \in \cn{1}{d}$, notons $\mathcal{R}_{m}$ le noyau du $m$-i\`eme morphisme de la r\'esolution, c'est-\`a-dire le noyau de $\mathcal{O}^{N_{1}} \to \mathcal{F}$ si $m=1$ et celui de $\mathcal{O}^{N_{m}} \to \mathcal{O}^{N_{m-1}}$ si $m\ge 2$. Posons \'egalement $\mathcal{R}_{0} := \mathcal{F}$. Par r\'ecurrence sur~$m$, en utilisant les suites exactes courtes $0 \to \mathcal{R}_{m} \to \mathcal{O}^{N_{m}} \to \mathcal{R}_{m-1} \to 0$, les suites exactes longues de cohomologie associ\'ees et le corollaire~\ref{cor:HqVO}, on d\'emontre que, pour tout $m \in \cn{1}{d}$ et tout $q\ge 1$, on a
\[H^{q}(\overline{D}_{V}(r_{1},\dotsc,r_{n}),\mathcal{F}) = H^{q+m}(\overline{D}_{V}(r_{1},\dotsc,r_{n}),\mathcal{R}_{m}).\]
Pour $m=d$, ces groupes sont nuls car~$\overline{D}_{V}(r_{1},\dotsc,r_{n})$ est de dimension de recouvrement~$d$.
\end{proof}
\begin{prop}\label{prop:thArecurrence}
Soient $r_{1},\dotsc,r_{n} \in \ensuremath{\mathbf{R}}_{>0}$, $\overline{D}_{X}(r_{1},\dotsc,r_{n})$ le disque relatif et $\pi \colon \overline{D}_{X}(r_{1},\dotsc,r_{n}) \to X$ le morphisme de projection. Soit~$V$ une base de Stein de~$X$.
Supposons que, pour tout point~$x$ de~$V$, tout faisceau coh\'erent sur~$\pi^{-1}(x)$ soit globalement engendr\'e. Alors, tout faisceau coh\'erent sur~$\pi^{-1}(V)$ est globalement engendr\'e.
\end{prop}
\begin{proof}
Soit~$\mathcal{F}$ un faisceau coh\'erent sur~$\pi^{-1}(V)$.
Soit~$x$ un point de~$V$. Par hypoth\`ese, il existe un morphisme de $\mathcal{O}_{\pi^{-1}(x)}$-modules surjectif $\varphi_{x} \colon \mathcal{O}_{\pi^{-1}(x)}^{N_{x}} \to \mathcal{F}_{|\pi^{-1}(x)}$. Ce dernier s'\'etend en un morphisme surjectif $\psi_{x} \colon \mathcal{O}^{N_{x}} \to \mathcal{F}$ d\'efini sur un voisinage de~$\pi^{-1}(x)$, que l'on peut supposer de la forme~$\pi^{-1}(V_{x})$, o\`u~$V_{x}$ est un voisinage ouvert de~$x$ dans~$V$. Notons~$\mathcal{R}$ le noyau de~$\psi_{x}$. Le faisceau~$\mathcal{O}$ est coh\'erent d'apr\`es le th\'eor\`eme~\ref{coherent}, le faisceau~$\mathcal{F}$ l'est aussi par hypoth\`ese, donc le faisceau~$\mathcal{R}$ l'est \'egalement.
En appliquant de fa\c{c}on r\'ep\'et\'ee l'hypoth\`ese de l'\'enonc\'e, on montre que le faisceau~$\mathcal{R}_{|\pi^{-1}(x)}$ poss\`ede une r\'esolution libre
\[\mathcal{O}_{\pi^{-1}(x)}^{N_{d}} \to \mathcal{O}_{\pi^{-1}(x)}^{N_{d-1}} \to \dotsb \to \mathcal{O}_{\pi^{-1}(x)}^{N_{1}} \to \mathcal{R}_{|\pi^{-1}(x)} \to 0,\]
o\`u~$d$ est la dimension de recouvrement de~$\pi^{-1}(V)$. Cette r\'esolution s'\'etend en une r\'esolution de~$\mathcal{R}$ sur un voisinage de~$\pi^{-1}(x)$, que l'on peut supposer de la forme~$\pi^{-1}(V'_{x})$, o\`u~$V'_{x}$ est un voisinage ouvert de~$x$ dans~$V_{x}$.
D'apr\`es le corollaire~\ref{cor:HqVFresolution}, pour toute partie compacte~$W$ de~$V'_{x}$ qui poss\`ede assez d'arbres de Cousin, nous avons $H^1(\pi^{-1}(W),\mathcal{R})=0$. On en d\'eduit que le morphisme $\mathcal{O}^{N_{x}}(\pi^{-1}(W)) \to \mathcal{F}(\pi^{-1}(W))$ induit par~$\psi_{x}$ est surjectif.
Consid\'erons le recouvrement~$\mathcal{V}$ de~$V$ d\'efini par les~$V'_{x}$. Par hypoth\`ese, il existe un arbre de Cousin-Runge fort \`a intersections simples~$\mathcal{T}$ adapt\'e \`a~$\mathcal{V}$ tel que, pour toutes feuilles $f \ne g \in \mathcal{T}$, le compact $K(f)\cap K(g)$ poss\`ede assez d'arbres de Cousin. Soit~$f$ une feuille de~$\mathcal{T}$. Il existe un point~$x$ de~$V$ tel que~$V'_{x}$ contienne~$K(f)$. Pour toute feuille~$g$ de~$\mathcal{T}$, le compact $K(f)\cap K(g)$ poss\`ede assez d'arbres de Cousin. Le raisonnement ci-dessus montre alors que le morphisme $\mathcal{O}^{N_{x}}(\pi^{-1}(K(f)\cap K(g))) \to \mathcal{F}(\pi^{-1}(K(f)\cap K(g)))$ induit par~$\psi_{x}$ est surjectif. Le r\'esultat d\'ecoule alors de la proposition~\ref{prop:CRimpliqueA}.
\end{proof}
\begin{coro}\label{cor:thAum}
Soient~$x$ un point tr\`es d\'ecent de~$X_{\mathrm{um}}$ et $r_{1},\dotsc,r_{n} \in \ensuremath{\mathbf{R}}_{>0}$. Alors, tout faisceau coh\'erent sur $\overline{D}_{x}(r_{1},\dotsc,r_{n})$ est globalement engendr\'e.
\end{coro}
\begin{proof}
D\'emontrons le r\'esultat par r\'ecurrence sur~$n$. Pour~$n=0$, le r\'esultat est \'evident.
Supposons que le r\'esultat est vrai pour les disques de dimension~$n$ et d\'emontrons-le pour la dimension~$n+1$. Soient $r_{1},\dotsc,r_{n+1} \in \ensuremath{\mathbf{R}}_{>0}$. Soit~$\mathcal{F}$ un faisceau coh\'erent sur $\overline{D}_{x}(r_{1},\dotsc,r_{n+1})$. Consid\'erons le morphisme de projection sur la derni\`ere coordonn\'ee $\pi \colon \overline{D}_{x}(r_{1},\dotsc,r_{n+1}) \to \overline{D}_{x}(r_{n+1})$. Par hypoth\`ese de r\'ecurrence, pour tout point~$y$ de $\overline{D}_{x}(r_{n+1})$, le faisceau~$\mathcal{F}$ est globalement engendr\'e sur~$\pi^{-1}(y)$. En outre, d'apr\`es le lemme~\ref{lem:Dxrlineaire}, le disque $\overline{D}_{x}(r_{n+1})$ est une base de Stein. On conclut par la proposition~\ref{prop:thArecurrence}.
\end{proof}
\begin{coro}\label{cor:thA}\index{Faisceau!globalement engendr\'e}\index{Theoreme@Th\'eor\`eme!A}
Soient $r_{1},\dotsc,r_{n} \in \ensuremath{\mathbf{R}}_{>0}$ et~$V$ une base de Stein de~$X$. Alors, tout faisceau coh\'erent sur~$\overline{D}_{V}(r_{1},\dotsc,r_{n})$ est globalement engendr\'e.
\end{coro}
\begin{proof}
Notons $\pi \colon \overline{D}_{X}(r_{1},\dotsc,r_{n}) \to X$ le morphisme de projection. Soit~$\mathcal{F}$ un faisceau coh\'erent sur~$\pi^{-1}(V)$. Pour tout point~$x$ de~$V$, le faisceau~$\mathcal{F}$ est globalement engendr\'e sur~$\pi^{-1}(x)$. Cela d\'ecoule de la proposition~\ref{prop:fibrearc} si~$x$ est archim\'edien et du corollaire~\ref{cor:thAum} si~$x$ est ultram\'etrique. On conclut par la proposition~\ref{prop:thArecurrence}.
\end{proof}
\begin{coro}\label{cor:thB}\index{Theoreme@Th\'eor\`eme!B}
Soient $r_{1},\dotsc,r_{n} \in \ensuremath{\mathbf{R}}_{>0}$ et~$V$ une base de Stein de~$X$. Alors, pour tout faisceau coh\'erent~$\mathcal{F}$ sur~$\overline{D}_{V}(r_{1},\dotsc,r_{n})$ et tout $q\ge 1$, on a $H^q(\overline{D}_{V}(r_{1},\dotsc,r_{n}),\mathcal{F})=0$.
\end{coro}
\begin{proof}
Soit~$\mathcal{F}$ un faisceau coh\'erent sur~$\overline{D}_{V}(r_{1},\dotsc,r_{n})$. En appliquant de fa\c con r\'ep\'et\'ee le corollaire~\ref{cor:thA}, on montre que le faisceau~$\mathcal{F}$ poss\`ede des r\'esolutions libres de longueur arbitraire. Le r\'esultat d\'ecoule alors du corollaire~\ref{cor:HqVFresolution}.
\end{proof}
\section{Affino\"ides surconvergents}\label{sec:affinoides}
Dans cette section, nous introduisons les espaces affino\"ides surconvergents et montrons qu'ils satisfont des propri\'et\'es analogues \`a celles des affino\"ides de la g\'eom\'etrie rigide, telles que le th\'eor\`eme d'acyclicit\'e de Tate et le th\'eor\`eme de Kiehl.
\medbreak
Rappelons que toute partie compacte~$V$ de~$\E{n}{\mathcal{A}}$ est munie de son faisceau de fonctions surconvergentes, c'est-\`a-dire du faisceau $\mathcal{O}_{V} := j_{V}^{-1}(\mathcal{O}_{\E{n}{\mathcal{A}}})$, o\`u $j_{V} \colon V \to \E{n}{\mathcal{A}}$ est l'inclusion. Par exemple, si $V$ est un polydisque ferm\'e $\overline{D}_{B}(r_{1},\dotsc,r_{n})$, chaque section globale converge sur un disque $\overline{D}_{B}(r'_{1},\dotsc,r'_{n})$ avec $r'_{i} >r_{i}$ pour tout $i \in \cn{1}{n}$.
\index{Faisceau!surconvergent}\index{Fonction!surconvergente}
\begin{defi}\label{def:polydisquenondegenere}\index{Disque!ferme non degenere@ferm\'e non d\'eg\'en\'er\'e}
Un espace de la forme $\overline{D}_{B}(r_{1},\dotsc,r_{n})$ avec $n\in \ensuremath{\mathbf{N}}$ et $r_{1},\dotsc,r_{n} \in \ensuremath{\mathbf{R}}_{>0}$ est appel\'e \emph{polydisque ferm\'e non d\'eg\'en\'er\'e sur~$B$}.
\end{defi}
\begin{defi}\label{def:affinoide}\index{Espace affino\"ide!surconvergent|textbf}
On dit qu'un espace $\mathcal{A}$-analytique~$X$ est \emph{$\mathcal{A}$-affino\"ide surconvergent} s'il existe un polydisque~$\overline{D}$ ferm\'e non d\'eg\'en\'er\'e sur~$B$ et un faisceau d'id\'eaux coh\'erent~$\mathcal{I}$ sur $\overline{D}$ tel que~$X$ soit isomorphe au ferm\'e analytique de~$\overline{D}$ d\'efini par~$\mathcal{I}$.
\end{defi}
En g\'eom\'etrie analytique rigide, on dispose de proc\'ed\'es classiques de constructions de domaines affino\"ides. Nous en proposons ici l'analogue.
\begin{defi}\label{def:domaines}\index{Domaine!de Weierstra\ss|textbf}\index{Domaine!de Laurent|textbf}\index{Domaine!rationnel|textbf}
Soient~$X$ un espace $\mathcal{A}$-analytique et~$Y$ une partie de~$X$.
On dit que $Y$ est un \emph{domaine de Weierstra\ss} de~$X$ s'il existe $p\in \ensuremath{\mathbf{N}}$, $f_{1},\dotsc,f_{p} \in \mathcal{O}(X)$ et $r_{1},\dotsc,r_{p} \in \ensuremath{\mathbf{R}}_{>0}$ tels que
\[ Y = \{x\in X : \forall i\in \cn{1}{p},\ |f_{i}(x)| \le r_{i}\}.\]
On dit que $Y$ est un \emph{domaine de Laurent} de~$X$ s'il existe $p,q\in \ensuremath{\mathbf{N}}$, $f_{1},\dotsc,f_{p},g_{1},\dotsc,g_{q} \in \mathcal{O}(X)$ et $r_{1},\dotsc,r_{p},s_{1},\dotsc,s_{q} \in \ensuremath{\mathbf{R}}_{>0}$ tels que
\[ Y = \{x\in X : \forall i\in \cn{1}{p},\ |f_{i}(x)| \le r_{i},\ \forall j\in \cn{1}{q},\ |g_{j}(x)| \ge s_{j}\}.\]
On dit que $Y$ est un \emph{domaine rationnel} de~$X$ s'il existe $p\in \ensuremath{\mathbf{N}}$, $f_{1},\dotsc,f_{p},g \in \mathcal{O}(X)$ sans z\'eros communs sur~$X$ et $r_{1},\dotsc,r_{p} \in \ensuremath{\mathbf{R}}_{>0}$ tels que
\[ Y = \{x\in X : \forall i\in \cn{1}{p},\ |f_{i}(x)| \le r_{i} |g(x)|\}.\]
\end{defi}
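Donnons quelques exemples simples, \`a titre d'illustration. Prenons pour~$X$ le disque relatif $\overline{D}_{B}(1)$, de coordonn\'ee~$T$, et soient $r, s \in \intof{0,1}$. Alors
\[ \{x\in X : |T(x)| \le r\} = \overline{D}_{B}(r)\]
est un domaine de Weierstra\ss{} de~$X$,
\[ \{x\in X : |T(x)| \ge s\}\]
est un domaine de Laurent de~$X$ (c'est la couronne ferm\'ee relative $\{x : s \le |T(x)| \le 1\}$) et, les fonctions $T$ et $T-1$ n'ayant pas de z\'ero commun sur~$X$,
\[ \{x\in X : |T(x)| \le |(T-1)(x)|\}\]
est un domaine rationnel de~$X$.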
\begin{prop}\label{prop:domaines}
Soit~$X$ un espace $\mathcal{A}$-affino\"ide surconvergent. Les domaines de Weierstra\ss, les domaines de Laurent et les domaines rationnels de~$X$ sont des espaces $\mathcal{A}$-affino\"ides surconvergents.
\end{prop}
\begin{proof}
Par hypoth\`ese, il existe un polydisque~$E$ ferm\'e non d\'eg\'en\'er\'e sur~$B$ et un faisceau d'id\'eaux coh\'erent~$\mathcal{I}$ sur $E$ tel que~$X$ soit isomorphe au ferm\'e analytique de~$E$ d\'efini par~$\mathcal{I}$.
Soit~$Y$ un domaine de Weierstra\ss{} de~$X$. Alors, il existe $p\in \ensuremath{\mathbf{N}}$, $f_{1},\dotsc,f_{p} \in \mathcal{O}(X)$ et $r_{1},\dotsc,r_{p} \in \ensuremath{\mathbf{R}}_{>0}$ tels que
\[ Y = \{x\in X : \forall i\in \cn{1}{p},\ |f_{i}(x)| \le r_{i}\}.\]
Posons $E' := \overline{D}_{E}(r_{1},\dotsc,r_{p})$ avec coordonn\'ees $T_{1},\dotsc,T_{p}$. C'est encore un polydisque ferm\'e non d\'eg\'en\'er\'e sur~$B$. Notons $\pi \colon E' \to E$ la projection. Le faisceau d'id\'eaux
\[\mathcal{I}' := \pi^\ast \mathcal{I} + (T_{1}-f_{1},\dotsc,T_{p}-f_{p})\, \mathcal{O}_{E'}\]
est coh\'erent sur~$E'$ et la projection~$\pi$ induit un isomorphisme entre le ferm\'e analytique de~$E'$ d\'efini par~$\mathcal{I}'$ et~$Y$ (dont l'inverse est le morphisme associ\'e \`a $(f_{1},\dotsc,f_{p})$ par l'application bijective de la proposition~\ref{morphsec}). On en d\'eduit que $Y$ est un espace $\mathcal{A}$-affino\"ide surconvergent.
Le cas des domaines de Laurent se traite de fa\c con similaire. Si~$Y$ est un domaine de Laurent donn\'e comme dans la d\'efinition~\ref{def:domaines}, on reprend la preuve pr\'ec\'edente en consid\'erant, cette fois-ci, le disque $E' := \overline{D}_{E}(r_{1},\dotsc,r_{p},s_{1}^{-1},\dotsc,s_{q}^{-1})$ avec coordonn\'ees $T_{1},\dotsc,T_{p},S_{1},\dotsc,S_{q}$ et le faisceau d'id\'eaux
\[\mathcal{I}' := \pi^\ast \mathcal{I} + (T_{1}-f_{1},\dotsc,T_{p}-f_{p},S_{1}g_{1}-1,\dotsc,S_{q}g_{q}-1)\, \mathcal{O}_{E'}.\]
Remarquons que les fonctions $g_{1},\dotsc,g_{q}$ sont inversibles sur~$Y$. On peut donc consid\'erer le morphisme de source~$Y$ associ\'e \`a $(f_{1},\dotsc,f_{p},g_{1}^{-1},\dotsc,g_{q}^{-1})$. Son image s'identifie au ferm\'e analytique~$Z'$ de~$E'$ d\'efini par~$\mathcal{I}'$ et il fournit un inverse au morphisme $Z' \to Y$ induit par la projection $\pi\colon E' \to E$.
La m\^eme strat\'egie s'applique encore pour les domaines rationnels. Si~$Y$ est un domaine rationnel donn\'e comme dans la d\'efinition~\ref{def:domaines}, on reprend la preuve en consid\'erant, cette fois-ci, le disque $E' := \overline{D}_{E}(r_{1},\dotsc,r_{p})$ avec coordonn\'ees $T_{1},\dotsc,T_{p}$ et le faisceau d'id\'eaux
\[\mathcal{I}' := \pi^\ast \mathcal{I} + (gT_{1}-f_{1},\dotsc,gT_{p}-f_{p})\, \mathcal{O}_{E'}.\]
Puisque les fonctions $f_{1},\dotsc,f_{p},g$ sont suppos\'ees sans z\'eros communs sur~$X$, la fonction~$g$ ne peut s'annuler sur~$Y$ et elle y est donc inversible. On peut donc consid\'erer le morphisme de source~$Y$ associ\'e \`a $(f_{1}\,g^{-1},\dotsc,f_{p}\,g^{-1})$. Son image s'identifie au ferm\'e analytique~$Z'$ de~$E'$ d\'efini par~$\mathcal{I}'$ et il fournit un inverse au morphisme $Z' \to Y$ induit par la projection $\pi\colon E' \to E$.
\end{proof}
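Illustrons la construction utilis\'ee pour les domaines de Weierstra\ss{} sur un exemple \'el\'ementaire. Prenons $X = E = \overline{D}_{B}(1)$, de coordonn\'ee~$T$ (avec $\mathcal{I} = 0$), $p = 1$, $f_{1} = T^{2}$ et $r_{1} = s \in \ensuremath{\mathbf{R}}_{>0}$. Le domaine de Weierstra\ss{}
\[ Y = \{x\in X : |T(x)^{2}| \le s\}\]
s'identifie alors au ferm\'e analytique du polydisque $E' = \overline{D}_{B}(1,s)$, de coordonn\'ees $T$ et $T_{1}$, d\'efini par le faisceau d'id\'eaux $(T_{1}-T^{2})\,\mathcal{O}_{E'}$, l'isomorphisme \'etant induit par la projection et son inverse par le morphisme associ\'e \`a $f_{1} = T^{2}$.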
\begin{coro}\label{cor:BVaffinoide}\index{Voisinage}
Soit~$X$ un espace $\mathcal{A}$-analytique. Tout point de~$X$ poss\`ede une base de voisinages form\'ee d'espaces $\mathcal{A}$-affino\"ides surconvergents.
\end{coro}
\begin{proof}
Soit~$x \in X$. Le r\'esultat \`a d\'emontrer \'etant local, on peut supposer que~$X$ est un ferm\'e analytique d'un ouvert~$U$ d'un espace affine analytique~$\E{n}{\mathcal{A}}$. Puisque tout ferm\'e analytique d'un espace $\mathcal{A}$-affino\"ide surconvergent est encore $\mathcal{A}$-affino\"ide surconvergent, il suffit de montrer le r\'esultat pour~$U$, et donc pour~$\E{n}{\mathcal{A}}$. Or, par d\'efinition de la topologie de~$\E{n}{\mathcal{A}}$, les domaines de Laurent forment une base de voisinages de tout point. Le r\'esultat d\'ecoule donc de la proposition~\ref{prop:domaines}.
\end{proof}
Passons maintenant aux analogues des r\'esultats classiques sur les affino\"ides.
\begin{theo}\label{th:sectionsglobalesaffinoide}\index{Theoreme@Th\'eor\`eme!d'acyclicit\'e de Tate}
Soient $r_{1},\dotsc,r_{n} \in \ensuremath{\mathbf{R}}_{>0}$ et $f_{1},\dotsc,f_{m} \in \mathcal{O}_{\E{n}{\mathcal{A}}}(\overline{D}_{B}(r_{1},\dotsc,r_{n}))$.
Le ferm\'e analytique~$X$ de $\overline{D}_{B}(r_{1},\dotsc,r_{n})$ d\'efini par $f_{1},\dotsc,f_{m}$ est un espace $\mathcal{A}$-affino\"ide surconvergent et on a un isomorphisme canonique
\[ \mathcal{O}_{\E{n}{\mathcal{A}}}(\overline{D}_{B}(r_{1},\dotsc,r_{n}))/(f_{1},\dotsc,f_{m}) \xrightarrow[]{\sim} \mathcal{O}_{X}(X).\]
\end{theo}
\begin{proof}
Posons $\overline{D} := \overline{D}_{B}(r_{1},\dotsc,r_{n})$. Notons $\mathcal{I}$ le faisceau d'id\'eaux coh\'erent de~$\overline{D}$ engendr\'e par~$(f_{1},\dotsc,f_{m})$. D'apr\`es le corollaire~\ref{cor:thB}, on a $H^1(\overline{D},\mathcal{I}) =0$. On en d\'eduit que
\[\mathcal{O}_{X}(X) = (\mathcal{O}/\mathcal{I})(\overline{D}) \simeq \mathcal{O}(\overline{D})/\mathcal{I}(\overline{D}).\]
Le morphisme de faisceaux
\[\fonction{\varphi}{\mathcal{O}^m}{\mathcal{O}}{(a_{1},\dotsc,a_{m})}{\sum_{i=1}^m a_{i} f_{i}}\]
a pour image~$\mathcal{I}$. Son noyau~$\mathcal{K}$ est un faisceau coh\'erent sur~$\overline{D}$. La suite exacte courte $0 \to \mathcal{K} \to \mathcal{O}^m \to \mathcal{I} \to 0$ fournit une suite exacte longue
\[ \dotsc \longrightarrow \mathcal{O}(\overline{D})^m \xrightarrow[]{\varphi(\overline{D})} \mathcal{I}(\overline{D}) \longrightarrow H^1(\overline{D},\mathcal{K}) \longrightarrow \dotsc\]
D'apr\`es le corollaire~\ref{cor:thB}, on a $H^1(\overline{D},\mathcal{K}) =0$, d'o\`u l'on d\'eduit que $\mathcal{I}(\overline{D}) = (f_{1},\dotsc,f_{m})\, \mathcal{O}(\overline{D})$. Cela conclut la preuve.
\end{proof}
Le r\'esultat pr\'ec\'edent peut \^etre vu comme une partie du th\'eor\`eme d'acyclicit\'e de Tate, affirmant que les sections globales sur un affino\"ide surconvergent sont celles auxquelles on s'attend d'apr\`es sa d\'efinition.
\begin{theo}\label{th:affinoideAB}\index{Theoreme@Th\'eor\`eme!d'acyclicit\'e de Tate}\index{Theoreme@Th\'eor\`eme!de Kiehl}\index{Theoreme@Th\'eor\`eme!A}\index{Theoreme@Th\'eor\`eme!B}
Soient~$X$ un espace $\mathcal{A}$-affino\"ide surconvergent et~$\mathcal{F}$ un faisceau coh\'erent sur~$X$. Alors,
\begin{enumerate}[i)]
\item pour tout entier $q\ge 1$, on a $H^q(X,\mathcal{F}) = 0$~;
\item le faisceau~$\mathcal{F}$ est engendr\'e par l'ensemble de ses sections globales~$\mathcal{F}(X)$.
\end{enumerate}
\end{theo}
\begin{proof}
Par hypoth\`ese, il existe un polydisque~$\overline{D}$ ferm\'e non d\'eg\'en\'er\'e sur~$B$ et un faisceau d'id\'eaux coh\'erent~$\mathcal{I}$ sur $\overline{D}$ tel que~$X$ soit isomorphe au ferm\'e analytique de~$\overline{D}$ d\'efini par~$\mathcal{I}$. Notons $j \colon X \to \overline{D}$ l'inclusion.
D'apr\`es la proposition~\ref{prop:fetoileenbasexact}, le foncteur~$j_{\ast}$ est exact et on en d\'eduit que $j_{\ast}\mathcal{F}$ est coh\'erent. Le th\'eor\`eme~\ref{th:Hqfini} permet alors de se ramener \`a d\'emontrer les r\'esultats sur~$\overline{D}$. Le point~i) d\'ecoule alors du corollaire~\ref{cor:thB} et le point~ii) du corollaire~\ref{cor:thA}.
\end{proof}
Le point~i) correspond \`a la seconde partie du th\'eor\`eme d'acyclicit\'e de Tate et le point~ii) au th\'eor\`eme de Kiehl (sans l'hypoth\`ese de finitude sur le module des sections globales).
\section{Th\'eor\`eme B sur les disques ouverts}\label{sec:Bouvert}
Pour d\'emontrer des r\'esultats d'annulation cohomologique sur des ouverts \`a partir de r\'esultats sur des ferm\'es, nous nous inspirerons de la strat\'egie utilis\'ee dans le cas complexe, et en particulier de la notion d'exhaustion de Stein, \cf~\cite[chapter~IV]{GR}. Signalons que le second auteur a d\'ej\`a mis en {\oe}uvre une strat\'egie similaire dans le cas de la droite affine dans~\cite[\S 6.6]{A1Z}.
Les changements \`a apporter dans notre cadre sont mineurs. Pour faciliter la lecture, nous allons cependant proc\'eder \`a quelques rappels et indiquer les grandes lignes du raisonnement.
\begin{defi}\index{Exhaustion|textbf}
Soit~$T$ un espace topologique. Une \emph{exhaustion de~$T$} est une suite de parties compactes $(K_{m})_{m\ge 0}$ de~$T$ telle que, pour tout $m\ge0$, $K_{m}$ est inclus dans l'int\'erieur de~$K_{m+1}$ et
\[T = \bigcup_{m\ge 0} K_{m} .\]
\end{defi}
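Donnons l'exemple qui nous servira dans la suite de cette section. Soient $t_{1},\dotsc,t_{n} \in \ensuremath{\mathbf{R}}_{>0}\cup\{+\infty\}$ et, pour tout $i\in\cn{1}{n}$, $(t_{i,m})_{m\ge 0}$ une suite strictement croissante d'\'el\'ements de~$\ensuremath{\mathbf{R}}_{>0}$ tendant vers~$t_{i}$. Alors la suite de parties compactes $(\overline{D}_{B}(t_{1,m},\dotsc,t_{n,m}))_{m\ge 0}$ est une exhaustion du disque ouvert relatif $D_{B}(t_{1},\dotsc,t_{n})$~:
\[ D_{B}(t_{1},\dotsc,t_{n}) = \bigcup_{m\ge 0} \overline{D}_{B}(t_{1,m},\dotsc,t_{n,m}),\]
chaque $\overline{D}_{B}(t_{1,m},\dotsc,t_{n,m})$ \'etant contenu dans l'int\'erieur de $\overline{D}_{B}(t_{1,m+1},\dotsc,t_{n,m+1})$ puisque $t_{i,m} < t_{i,m+1}$ pour tout~$i$.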
\begin{theo}[\protect{\cite[IV, \S 1, theorem~4]{GR}}]\index{Theoreme@Th\'eor\`eme!B}
Soient~$T$ un espace topologique et $(K_{m})_{m\ge 0}$ une exhaustion de~$T$. Soient~$\mathcal{F}$ un faisceau de groupes ab\'eliens sur~$T$ et $q\ge 2$ un entier. Supposons que, pour tout $m\ge 0$, on a
\[ H^{q-1}(K_{m},\mathcal{F}) = H^{q}(K_{m},\mathcal{F}) = 0.\]
Alors, on a
\[ H^q(T,\mathcal{F}) = 0.\]
\qed
\end{theo}
Ce r\'esultat ne pr\'esente pas de difficult\'e particuli\`ere. On voit cependant que l'annulation du~$H^1$ reste hors d'atteinte. Pour l'obtenir, on utilise une notion plus fine qui fait intervenir des conditions topologiques sur les espaces de sections.
\begin{defi}[\protect{\cite[IV, \S 1, definition~6]{GR}}]\index{Exhaustion!de Stein|textbf}
Soient~$T$ un espace topologique et~$\mathcal{F}$ un faisceau de groupes ab\'eliens sur~$T$. Une \emph{exhaustion de Stein de~$T$ relativement \`a~$\mathcal{F}$} est une exhaustion $(K_{m})_{m\ge 0}$ de~$T$ telle que
\[ \textrm{pour tous } m\ge 0, q\ge 1, \ H^q(K_{m},\mathcal{F})=0\]
et, pour tout $m\ge0$, il existe une semi-norme~$\nm_{m}$ sur~$\mathcal{F}(K_{m})$ v\'erifiant les propri\'et\'es suivantes~:
\begin{enumerate}[i)]
\item l'image du morphisme de restriction $\mathcal{F}(T) \to \mathcal{F}(K_{m})$ est dense pour~$\nm_{m}$~;
\item le morphisme de restriction $(\mathcal{F}(K_{m+1}),\nm_{m+1}) \to (\mathcal{F}(K_{m}),\nm_{m})$ est born\'e~;
\item le morphisme de restriction $(\mathcal{F}(K_{m+1}),\nm_{m+1}) \to (\mathcal{F}(K_{m}),\nm_{m})$ envoie toute suite de Cauchy sur une suite convergente~;
\item tout \'el\'ement~$s$ de~$\mathcal{F}(K_{m+1})$ tel que $\|s\|_{m+1}=0$ est nul sur~$K_{m}$.
\end{enumerate}
\end{defi}
\begin{theo}[\protect{\cite[IV, \S 1, theorem~7]{GR}}]\label{thm:exhaustionStein}\index{Theoreme@Th\'eor\`eme!B}
Soient~$T$ un espace topologique et~$\mathcal{F}$ un faisceau de groupes ab\'eliens sur~$T$. Soit $(K_{n})_{n\ge 0}$ une exhaustion de Stein de~$T$ relativement \`a~$\mathcal{F}$. Alors, on a
\[ \textrm{pour tout } q\ge 1,\ H^q(T,\mathcal{F}) = 0.\]
\qed
\end{theo}
Nous allons maintenant d\'emontrer l'annulation des groupes de cohomologie sup\'erieurs des disques ouverts relatifs en exhibant des exhaustions de Stein. Nous suivrons \cite[\S 6.6]{A1Z} de pr\`es. La succession de lemmes est identique et nous nous contenterons de d\'emontrer ceux qui demandent des modifications. Notons qu'on utilise ici de fa\c con cruciale le r\'esultat de fermeture des modules locaux (\cf~th\'eor\`eme~\ref{fermeture}).
Soient $t_{1},\dotsc,t_{n} \in \ensuremath{\mathbf{R}}_{>0} \cup \{+\infty\}$. Posons $D := D_{B}(t_{1},\dotsc,t_{n})$. Soit~$\mathcal{F}$ un faisceau coh\'erent sur~$D$.
Pour $i\in \cn{1}{n}$, fixons une suite strictement croissante $(t_{i,m})_{m\ge 0}$ d'\'el\'ements de~$\ensuremath{\mathbf{R}}_{>0}$ qui tend vers~$t_{i}$.
Soit $m\in \ensuremath{\mathbf{N}}$. Posons $\overline{D}_{m} := \overline{D}_{B}(t_{1,m},\dotsc,t_{n,m})$. D'apr\`es le corollaire~\ref{cor:thA}, $\mathcal{F}$ est globalement engendr\'e sur~$\overline{D}_{m}$, donc il existe $l_{m} \in \ensuremath{\mathbf{N}}$ et un morphisme surjectif $\alpha_{m} \colon \mathcal{O}_{\overline{D}_{m}}^{l_{m}} \to \mathcal{F}_{|\overline{D}_{m}}$. D'apr\`es le corollaire~\ref{cor:thB}, le morphisme induit
\[ \ensuremath{\varepsilon}_{m} \colon \mathcal{O}(\overline{D}_{m})^{l_{m}} \to \mathcal{F}(\overline{D}_{m})\]
est surjectif.
Munissons $\mathcal{O}(\overline{D}_{m})^{l_{m}}$ de la norme d\'efinie par le maximum des normes des coordonn\'ees. On la note~$\nm_{\overline{D}_{m}}$. On d\'efinit alors une semi-norme~$\nm_{m}$ sur~$\mathcal{F}(\overline{D}_{m})$ par
\[\renewcommand{\arraystretch}{1.2}\fonction{\nm_{m}}{\mathcal{F}(\overline{D}_{m})}{\ensuremath{\mathbf{R}}_{\ge0}}{s}{\inf(\{\|t\|_{\overline{D}_{m}} : t\in \ensuremath{\varepsilon}_{m}^{-1}(s)\})}.\]
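Autrement dit, $\nm_{m}$ est la semi-norme quotient de $\nm_{\overline{D}_{m}}$ par le morphisme surjectif~$\ensuremath{\varepsilon}_{m}$~: pour tout $t \in \mathcal{O}(\overline{D}_{m})^{l_{m}}$, on a
\[ \|\ensuremath{\varepsilon}_{m}(t)\|_{m} \le \|t\|_{\overline{D}_{m}}.\]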
Consid\'erons les morphismes de restriction
\[r_{m} \colon (\mathcal{O}(\overline{D}_{m+1})^{l_{m+1}},\nm_{\overline{D}_{m+1}}) \longrightarrow (\mathcal{O}(\overline{D}_{m})^{l_{m+1}},\nm_{\overline{D}_{m}})\]
et
\[\rho_{m} \colon (\mathcal{F}(\overline{D}_{m+1}),\nm_{m+1}) \longrightarrow (\mathcal{F}(\overline{D}_{m}),\nm_{m}).\]
Le morphisme~$r_{m}$ est born\'e.
D'apr\`es le corollaire~\ref{cor:thB}, le morphisme $\alpha_{m+1} \colon \mathcal{O}_{\overline{D}_{m+1}}^{l_{m+1}} \to \mathcal{F}_{|\overline{D}_{m+1}}$ induit un morphisme surjectif
\[ \ensuremath{\varepsilon}'_{m} \colon \mathcal{O}(\overline{D}_{m})^{l_{m+1}} \to \mathcal{F}(\overline{D}_{m}).\]
On d\'efinit alors une nouvelle semi-norme~$\nm'_{m}$ sur~$\mathcal{F}(\overline{D}_{m})$ par
\[\renewcommand{\arraystretch}{1.2}\fonction{\nm'_{m}}{\mathcal{F}(\overline{D}_{m})}{\ensuremath{\mathbf{R}}_{\ge0}}{s}{\inf(\{\|t\|_{\overline{D}_{m}} : t\in {\ensuremath{\varepsilon}'_{m}}^{-1}(s)\})}.\]
Consid\'erons maintenant le morphisme
\[ \sigma_{m} \colon (\mathcal{F}(\overline{D}_{m}),\nm'_{m}) \longrightarrow (\mathcal{F}(\overline{D}_{m}),\nm_{m})\]
qui induit l'identit\'e sur~$\mathcal{F}(\overline{D}_{m})$. Il est born\'e.
\begin{lemm}[\protect{\cite[lemme~6.6.21]{A1Z}}]
Il existe un morphisme born\'e~$\eta_{m}$ qui fait commuter le diagramme
\[\begin{tikzcd}
(\mathcal{O}(\overline{D}_{m})^{l_{m+1}}, \nm_{\overline{D}_{m}}) \ar[r, "\ensuremath{\varepsilon}'_{m}"] \ar[d, "\eta_{m}"]& (\mathcal{F}(\overline{D}_{m}), \nm'_{m}) \ar[d, "\sigma_{m}"]\\
(\mathcal{O}(\overline{D}_{m})^{l_{m+1}}, \nm_{\overline{D}_{m}}) \ar[r, "\ensuremath{\varepsilon}_{m}"] & (\mathcal{F}(\overline{D}_{m}), \nm_{m})
\end{tikzcd}.\]
\qed
\end{lemm}
On a maintenant un diagramme commutatif
\[\begin{tikzcd}
(\mathcal{O}(\overline{D}_{m+1})^{l_{m+1}}, \nm_{\overline{D}_{m+1}}) \ar[r, "\ensuremath{\varepsilon}_{m+1}"] \ar[d, "r_{m}"]& (\mathcal{F}(\overline{D}_{m+1}), \nm_{m+1}) \ar[d, "\wc_{|\overline{D}_{m}}"] \ar[dd, out = -30, in= 30, bend left= 70, "\rho_{m}"]\\
(\mathcal{O}(\overline{D}_{m})^{l_{m+1}}, \nm_{\overline{D}_{m}}) \ar[r, "\ensuremath{\varepsilon}'_{m}"] \ar[d, "\eta_{m}"]& (\mathcal{F}(\overline{D}_{m}), \nm'_{m}) \ar[d, "\sigma_{m}"]\\
(\mathcal{O}(\overline{D}_{m})^{l_{m+1}}, \nm_{\overline{D}_{m}}) \ar[r, "\ensuremath{\varepsilon}_{m}"] & (\mathcal{F}(\overline{D}_{m}), \nm_{m})
\end{tikzcd}\]
\begin{lemm}[\protect{\cite[lemme~6.6.22]{A1Z}}]
Le morphisme $\rho_{m}$ est born\'e.
\qed
\end{lemm}
Nous reprenons maintenant \cite[lemme~6.6.23]{A1Z} en apportant les modifications n\'ecessaires \`a sa preuve.
\begin{lemm}
Le morphisme~$\rho_{m}$ est d'image dense.
\end{lemm}
\begin{proof}
Puisque le morphisme~$\sigma_{m}$ est surjectif et born\'e, il suffit de montrer que l'image du morphisme de restriction $\mathcal{F}(\overline{D}_{m+1}) \to \mathcal{F}(\overline{D}_{m})$ est dense pour la norme~$\nm'_{m}$.
Soit $s \in \mathcal{F}(\overline{D}_{m})$. Soit $\delta>0$. Puisque~$\ensuremath{\varepsilon}'_{m}$ est surjectif, il existe $t\in \mathcal{O}(\overline{D}_{m})^{l_{m+1}}$ tel que $\ensuremath{\varepsilon}'_{m}(t)=s$. En utilisant la proposition~\ref{prop:disqueglobal} et une r\'ecurrence facile sur~$n$, on montre que $\mathcal{A}[T_{1},\dotsc,T_{n}]$ est dense dans $\mathcal{O}(\overline{D}_{m})$. On en d\'eduit que $\mathcal{O}(\overline{D}_{m+1})$ est dense dans $\mathcal{O}(\overline{D}_{m})$ et donc qu'il existe $t' \in \mathcal{O}(\overline{D}_{m+1})^{l_{m+1}}$ tel que $\|r_{m}(t') -t\|_{\overline{D}_{m}} \le \delta$. On a alors $\|\ensuremath{\varepsilon}_{m+1}(t')_{|\overline{D}_{m}} - s\|'_{m} \le \delta$.
\end{proof}
Nous reprenons de m\^eme \cite[lemme~6.6.24]{A1Z}.
\begin{lemm}
Soit $s\in \mathcal{F}(\overline{D}_{m+1})$ tel que $\|s\|_{m+1} = 0$. Alors, $s$ est nulle sur l'int\'erieur de~$\overline{D}_{m+1}$ et, en particulier, $s_{|\overline{D}_{m}} = 0$.
\end{lemm}
\begin{proof}
Soit $t\in \ensuremath{\varepsilon}_{m+1}^{-1}(s)$. Par hypoth\`ese, il existe une suite~$(t_{j})_{j\ge 0}$ d'\'el\'ements de~$\textrm{Ker}(\ensuremath{\varepsilon}_{m+1})$ telle que $\lim_{j \to \infty} \|t-t_{j}\|_{\overline{D}_{m+1}}= 0$. En d'autres termes, $(t_{j})_{j\ge 0}$ converge uniform\'ement vers~$t$ sur~$\overline{D}_{m+1}$.
En utilisant de nouveau la proposition~\ref{prop:disqueglobal} et une r\'ecurrence sur~$n$, on montre que, pour tout $j\ge 0$, on a $t_{j} \in \mathcal{B}(\overline{D}_{m+1})$. D'apr\`es le th\'eor\`eme~\ref{fermeture}, pour tout point~$x$ dans l'int\'erieur de~$\overline{D}_{m+1}$, on a $t \in \sKer(\alpha_{m+1})_{x}$, donc l'image de $\ensuremath{\varepsilon}_{m+1}(t)$ dans~$\mathcal{F}_{x}$ est nulle. On en d\'eduit que~$s$ est nulle sur l'int\'erieur de~$\overline{D}_{m+1}$, et donc sur~$\overline{D}_{m}$.
\end{proof}
En utilisant les r\'esultats qui pr\'ec\'edent, les autres lemmes s'adaptent maintenant sans changements.
\begin{lemm}[\protect{\cite[lemme~6.6.25]{A1Z}}]
Soit $(s_{k})_{k\ge 0}$ une suite d'\'el\'ements de~$\mathcal{F}(\overline{D}_{m+1})$ qui est de Cauchy pour la semi-norme~$\nm_{m+1}$. Alors, il existe un \'el\'ement~$s$ de~$\mathcal{F}(\overline{D}_{m})$ tel que la suite $(\rho_{m}(s_{k}))_{k\ge 0}$ converge vers~$s$ pour la semi-norme~$\nm_{m}$.
De plus, si $s' \in \mathcal{F}(\overline{D}_{m})$ est une limite de la suite $(\rho_{m}(s_{k}))_{k\ge 0}$, elle co\"incide avec~$s$ sur l'int\'erieur de~$\overline{D}_{m}$.
\qed
\end{lemm}
\begin{lemm}[\protect{\cite[lemme~6.6.25]{A1Z}}]
L'image du morphisme de restriction $\mathcal{F}(D) \to \mathcal{F}(\overline{D}_{m})$ est dense pour la semi-norme~$\nm_{m}$.
\qed
\end{lemm}
Nous disposons maintenant de toutes les propri\'et\'es constitutives d'une exhaustion de Stein.
\begin{prop}\index{Exhaustion!de Stein}
La suite $(\overline{D}_{m})_{m\ge 0}$ est une exhaustion de Stein de~$D$ relativement \`a~$\mathcal{F}$.
\qed
\end{prop}
On peut alors conclure par le th\'eor\`eme~\ref{thm:exhaustionStein}.
\begin{theo}\label{th:Bouvert}\index{Theoreme@Th\'eor\`eme!B}
Soient $t_{1},\dotsc,t_{n} \in \ensuremath{\mathbf{R}}_{>0} \cup \{+\infty\}$. Pour tout faisceau coh\'erent~$\mathcal{F}$ sur~$D_{B}(t_{1},\dotsc,t_{n})$ et tout entier $q\ge 1$, on a
\[H^q(D_{B}(t_{1},\dotsc,t_{n}),\mathcal{F})=0.\]
\qed
\end{theo}
De la m\^eme fa\c con que les propri\'et\'es des affino\"ides surconvergents se d\'eduisent de celles des disques ferm\'es, du th\'eor\`eme~\ref{th:Bouvert} d\'ecoulent des r\'esultats d'annulation cohomologique pour d'autres espaces. Nous introduisons un peu de terminologie de fa\c con \`a pouvoir les \'enoncer plus commod\'ement.
\begin{defi}\label{def:polydisqueouvertgeneralise}\index{Disque!ouvert g\'en\'eralis\'e}
Un espace de la forme $D_{B}(t_{1},\dotsc,t_{n})$ avec $n\in \ensuremath{\mathbf{N}}$ et $t_{1},\dotsc,t_{n} \in \ensuremath{\mathbf{R}}_{>0}\cup \{+\infty\}$ est appel\'e \emph{polydisque ouvert g\'en\'eralis\'e sur~$B$}.
\end{defi}
\begin{exem}
Pour tout $n\in \ensuremath{\mathbf{N}}$, l'espace affine analytique~$\E{n}{\mathcal{A}}$ est un polydisque ouvert g\'en\'eralis\'e.
\end{exem}
\begin{defi}\label{def:affinoideouvert}\index{Espace affino\"ide!ouvert|textbf}
On dit qu'un espace $\mathcal{A}$-analytique~$X$ est \emph{$\mathcal{A}$-affino\"ide ouvert} s'il existe un polydisque ouvert g\'en\'eralis\'e~$D$ sur~$B$ et un faisceau d'id\'eaux coh\'erent~$\mathcal{I}$ sur~$D$ tel que~$X$ soit isomorphe au ferm\'e analytique de~$D$ d\'efini par~$\mathcal{I}$.
\end{defi}
\begin{exem}
Soit~$\mathcal{X}$ un sch\'ema affine de pr\'esentation finie sur~$\mathcal{A}$. Alors son analytifi\'e~$\mathcal{X}^\an$ est un espace $\mathcal{A}$-affino\"ide ouvert, par construction (\cf~th\'eor\`eme~\ref{thm:analytification} et sa preuve).
\end{exem}
\begin{defi}\label{def:domainesouverts}\index{Domaine!de Weierstra\ss!ouvert|textbf}\index{Domaine!de Laurent!ouvert|textbf}\index{Domaine!rationnel!ouvert|textbf}
Soient~$X$ un espace $\mathcal{A}$-analytique et~$Y$ une partie de~$X$.
On dit que $Y$ est un \emph{domaine de Weierstra\ss{} ouvert} de~$X$ s'il existe $p\in \ensuremath{\mathbf{N}}$, $f_{1},\dotsc,f_{p} \in \mathcal{O}(X)$ et $r_{1},\dotsc,r_{p} \in \ensuremath{\mathbf{R}}_{>0}$ tels que
\[ Y = \{x\in X : \forall i\in \cn{1}{p},\ |f_{i}(x)| < r_{i}\}.\]
On dit que $Y$ est un \emph{domaine de Laurent ouvert} de~$X$ s'il existe $p,q\in \ensuremath{\mathbf{N}}$, $f_{1},\dotsc,f_{p},g_{1},\dotsc,g_{q} \in \mathcal{O}(X)$ et $r_{1},\dotsc,r_{p},s_{1},\dotsc,s_{q} \in \ensuremath{\mathbf{R}}_{>0}$ tels que
\[ Y = \{x\in X : \forall i\in \cn{1}{p},\ |f_{i}(x)| < r_{i},\ \forall j\in \cn{1}{q},\ |g_{j}(x)| > s_{j}\}.\]
On dit que $Y$ est un \emph{domaine rationnel ouvert} de~$X$ s'il existe $p\in \ensuremath{\mathbf{N}}$, $f_{1},\dotsc,f_{p},g \in \mathcal{O}(X)$ sans z\'eros communs sur~$X$ et $r_{1},\dotsc,r_{p} \in \ensuremath{\mathbf{R}}_{>0}$ tels que
\[ Y = \{x\in X : \forall i\in \cn{1}{p},\ |f_{i}(x)| < r_{i} |g(x)|\}.\]
\end{defi}
\begin{exem}
Une polycouronne est un domaine rationnel ouvert d'un espace affine analytique.
\end{exem}
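V\'erifions-le, \`a titre d'illustration, en une variable et pour des rayons $0 < s < t$~: dans~$\E{1}{\mathcal{A}}$, on a
\[ \{x : s < |T(x)| < t\} = \{x : |T^2(x)| < t\,|T(x)| \textrm{ et } |1(x)| < s^{-1}\,|T(x)|\},\]
et les fonctions $T^2$, $1$ et $T$ n'ont pas de z\'ero commun, la fonction constante~$1$ ne s'annulant pas.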
Les deux r\'esultats qui suivent se d\'emontrent respectivement comme la proposition~\ref{prop:domaines} et le corollaire~\ref{cor:BVaffinoide}.
\begin{prop}\label{prop:domainesouverts}
Soit~$X$ un espace $\mathcal{A}$-affino\"ide ouvert. Les domaines de Weierstra\ss, les domaines de Laurent et les domaines rationnels de~$X$ sont des espaces $\mathcal{A}$-affino\"ides ouverts.
\qed
\end{prop}
\begin{coro}\label{cor:BVaffinoideouvert}\index{Voisinage}
Soit~$X$ un espace $\mathcal{A}$-analytique. Tout point de~$X$ poss\`ede une base de voisinages form\'ee d'espaces $\mathcal{A}$-affino\"ides ouverts.
\qed
\end{coro}
En suivant la m\^eme strat\'egie que dans la preuve du th\'eor\`eme~\ref{th:affinoideAB}, on d\'eduit le r\'esultat suivant du th\'eor\`eme~\ref{th:Bouvert}.
\begin{coro}\index{Theoreme@Th\'eor\`eme!B}
Soit~$X$ un espace $\mathcal{A}$-affino\"ide ouvert. Pour tout faisceau coh\'erent~$\mathcal{F}$ sur~$X$ et tout entier $q\ge 1$, on a $H^q(X,\mathcal{F}) = 0$.
\qed
\end{coro}
\section[Noeth\'erianit\'e]{Noeth\'erianit\'e d'anneaux de s\'eries arithm\'etiques convergentes}\label{sec:noetherianite}
\index{Anneau!noetherien@noeth\'erien|(}
\index{Series arithmetiques convergentes@S\'eries arithm\'etiques convergentes|(}
Dans cette section finale, nous proposons une application de nos r\'esultats aux anneaux de s\'eries arithm\'etiques convergentes. Ceux-ci ont \'et\'e introduits par D.~Harbater dans le cadre de son \'etude du probl\`eme inverse de Galois (\cf~\cite{HarbaterConvergent,HarbaterGaloisCovers}). Un exemple typique est
\[ \ensuremath{\mathbf{Z}}_{r^+}\llbracket T\rrbracket := \{f \in \ensuremath{\mathbf{Z}}\llbracket T\rrbracket : R_{\infty}(f) >r \},\]%
\nomenclature[Ck]{$\ensuremath{\mathbf{Z}}_{r^+}\llbracket T\rrbracket$}{sous-anneau de $\ensuremath{\mathbf{Z}}\llbracket T\rrbracket$ form\'e des s\'eries~$f$ telles que $R_{\infty}(f)>r$}%
o\`u~$r$ est un nombre r\'eel positif et $R_{\infty}(f)$%
\nomenclature[Cja]{$R_{\infty}(f)$}{rayon de convergence complexe d'une s\'erie~$f$}%
d\'esigne le rayon de convergence complexe de la s\'erie~$f$, c'est-\`a-dire son rayon de convergence en tant que fonction analytique complexe ($\ensuremath{\mathbf{C}}$ \'etant muni de la valeur absolue usuelle).
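Pour fixer les id\'ees, la s\'erie g\'eom\'etrique $\sum_{n\ge 0} T^n = (1-T)^{-1}$ v\'erifie $R_{\infty} = 1$ et appartient donc \`a $\ensuremath{\mathbf{Z}}_{r^+}\llbracket T\rrbracket$ pour tout $r\in \intfo{0,1}$, tandis que la s\'erie $\sum_{n\ge 0} n!\, T^n$, dont le rayon de convergence complexe est nul, n'appartient \`a aucun de ces anneaux.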
Nous n'\'etudierons pas ici les propri\'et\'es galoisiennes de ces anneaux. Nous renvoyons le lecteur int\'eress\'e par une approche g\'eom\'etrique de ces questions \`a l'article~\cite{Raccord}, dans lequel figure une preuve, bas\'ee sur la construction de rev\^etements de disques analytiques sur~$\ensuremath{\mathbf{Z}}$, du fait que tout groupe fini peut se r\'ealiser comme groupe de Galois d'une extension r\'eguli\`ere de $\ensuremath{\mathbf{Z}}_{r^+}\llbracket T\rrbracket$, lorsque $r \in \intfo{0,1}$.
En ce qui concerne les aspects alg\'ebriques des anneaux de s\'eries arithm\'etiques convergentes, on dispose du r\'esultat suivant.
\begin{theo}[\protect{\cite[theorem~1.8]{HarbaterConvergent}}]\label{th:Harbaternoetherien}
Pour tout $r\in \intfo{0,1}$, l'anneau $\ensuremath{\mathbf{Z}}_{r^+}\llbracket T\rrbracket$ est noeth\'erien, r\'egulier, factoriel et de dimension~2.
\qed
\end{theo}
Nous allons montrer que la propri\'et\'e de noeth\'erianit\'e s'\'etend aux anneaux de s\'eries arithm\'etiques convergentes en plusieurs variables.
La preuve du th\'eor\`eme~\ref{th:Harbaternoetherien} que propose D.~Harbater est de nature tr\`es alg\'ebrique. La noeth\'erianit\'e, par exemple, est obtenue en exhibant, pour chaque id\'eal premier, une famille g\'en\'eratrice explicite. Une telle strat\'egie semble difficile \`a g\'en\'eraliser.
Notre approche est, au contraire, g\'eom\'etrique. Elle s'inspire de celle adopt\'ee par J.~Frisch dans le cadre d'anneaux de fonctions analytiques complexes.
\index{Disque!algebre@alg\`ebre d'un!noeth\'erianit\'e de l'|(}
\begin{theo}[\protect{\cite[th\'eor\`eme (I, 9)]{Frisch}}]\label{th:noetheriencomplexe}
Soit~$A$ une partie compacte semi-analytique et de Stein d'un espace analytique complexe~$X$. Alors, l'anneau $\mathcal{O}(A)$ est noeth\'erien.
En particulier, pour tous $n\in \ensuremath{\mathbf{N}}$ et $\ensuremath{{\boldsymbol{r}}} \in \ensuremath{\mathbf{R}}_{\ge 0}^n$, l'anneau $\mathcal{O}(\overline{D}_{\ensuremath{\mathbf{C}}}(\ensuremath{{\boldsymbol{r}}}))$ est noeth\'erien.
\qed
\end{theo}
Signalons que Y.-T. Siu a, par la suite, obtenu une d\'emonstration simplifi\'ee d'un r\'esultat un peu plus g\'en\'eral dans~\cite{SiuNoetherianness}. Pour le cas particulier des polydisques, K.~Langmann a propos\'e une preuve plus courte dans~\cite{LangmannFrisch}. Nous nous sommes grandement inspir\'es de cette derni\`ere.
Il n'est gu\`ere difficile de passer du r\'esultat sur~$\ensuremath{\mathbf{C}}$ \`a un r\'esultat valable sur tout corps valu\'e archim\'edien complet.
\begin{coro}\label{th:noetherienarchimedien}
Soit $(k,\va)$ un corps valu\'e complet archim\'edien. Soient $n\in \ensuremath{\mathbf{N}}$ et $\ensuremath{{\boldsymbol{r}}} \in \ensuremath{\mathbf{R}}_{\ge0}^n$. Alors l'anneau $\mathcal{O}(\overline{D}_{k}(\ensuremath{{\boldsymbol{r}}}))$ est noeth\'erien.
\end{coro}
\begin{proof}
D'apr\`es le th\'eor\`eme~\ref{th:vaarchimedienne}, il existe $\ensuremath{\varepsilon} \in \intof{0,1}$ tel que $(k,\va)$ soit isom\'etriquement isomorphe \`a $(\ensuremath{\mathbf{R}},\va_{\infty}^\ensuremath{\varepsilon})$ ou $(\ensuremath{\mathbf{C}},\va_{\infty}^\ensuremath{\varepsilon})$. Traitons s\'epar\'ement ces deux cas.
$\bullet$ Supposons que $(k,\va) = (\ensuremath{\mathbf{C}},\va_{\infty}^\ensuremath{\varepsilon})$. L'\'el\'evation \`a la puissance~$1/\ensuremath{\varepsilon}$ induit un isomorphisme entre $\mathcal{O}(\overline{D}_{k}(\ensuremath{{\boldsymbol{r}}}))$ et $\mathcal{O}(\overline{D}_{\ensuremath{\mathbf{C}}}(\ensuremath{{\boldsymbol{r}}}^{1/\ensuremath{\varepsilon}}))$, o\`u~$\ensuremath{\mathbf{C}}$ est muni de la valeur absolue usuelle~$\va_{\infty}$ (\cf~\cite[proposition~1.3.10]{A1Z} pour un r\'esultat permettant de comparer chacun de ces anneaux \`a l'anneau d'un disque relatif sur l'espace hybride~$\mathcal{M}(\ensuremath{\mathbf{C}}^\mathrm{hyb}) \setminus \{\va_{0}\}$). Le r\'esultat d\'ecoule alors du th\'eor\`eme~\ref{th:noetheriencomplexe}.
$\bullet$ Supposons que $(k,\va) = (\ensuremath{\mathbf{R}},\va_{\infty}^\ensuremath{\varepsilon})$. Posons $(k',\va) = (\ensuremath{\mathbf{C}},\va_{\infty}^\ensuremath{\varepsilon})$. Pour toute s\'erie $h = \sum_{\ensuremath{{\boldsymbol{i}}}\ge 0} a_{\ensuremath{{\boldsymbol{i}}}}\, \ensuremath{{\boldsymbol{T}}}^\ensuremath{{\boldsymbol{i}}} \in \ensuremath{\mathbf{C}}\llbracket \ensuremath{{\boldsymbol{T}}}\rrbracket$, on pose $\bar h := \sum_{\ensuremath{{\boldsymbol{i}}}\ge 0} \bar a_{\ensuremath{{\boldsymbol{i}}}}\, \ensuremath{{\boldsymbol{T}}}^\ensuremath{{\boldsymbol{i}}} \in \ensuremath{\mathbf{C}}\llbracket \ensuremath{{\boldsymbol{T}}}\rrbracket$. Remarquons que, pour tout \'el\'ement~$h$ de~$\mathcal{O}(\overline{D}_{k'}(\ensuremath{{\boldsymbol{r}}}))$, $\bar h$ appartient \`a~$\mathcal{O}(\overline{D}_{k'}(\ensuremath{{\boldsymbol{r}}}))$ et $h+\bar h$ \`a~$\mathcal{O}(\overline{D}_{k}(\ensuremath{{\boldsymbol{r}}}))$.
Soit $(f_{m})_{m\ge 0}$ une suite d'\'el\'ements de $\mathcal{O}(\overline{D}_{k}(\ensuremath{{\boldsymbol{r}}}))$. D'apr\`es le cas pr\'ec\'edent, l'anneau~$\mathcal{O}(\overline{D}_{k'}(\ensuremath{{\boldsymbol{r}}}))$ est noeth\'erien, donc il existe $M\ge 0$ tel que, pour tout $m> M$, on ait $f_{m} \in (f_{0},\dotsc,f_{M}) \, \mathcal{O}(\overline{D}_{k'}(\ensuremath{{\boldsymbol{r}}}))$. Soit $m> M$. Il existe $h_{0},\dotsc,h_{M} \in \mathcal{O}(\overline{D}_{k'}(\ensuremath{{\boldsymbol{r}}}))$ tels que $f_{m} = \sum_{j=0}^M h_{j}f_{j}$. On a
\[ f_{m} = \bar f_{m} = \sum_{j=0}^M \bar h_{j}f_{j} = \sum_{j=0}^M \frac{h_{j} + \bar h_{j}}2\, f_{j},\]
donc $f_{m} \in (f_{0},\dotsc,f_{M}) \, \mathcal{O}(\overline{D}_{k}(\ensuremath{{\boldsymbol{r}}}))$. On en d\'eduit que $\mathcal{O}(\overline{D}_{k}(\ensuremath{{\boldsymbol{r}}}))$ est noeth\'erien.
\end{proof}
Un r\'esultat similaire vaut \'egalement dans le cas ultram\'etrique. Le cas du polydisque unit\'e a \'et\'e trait\'e par E.~Grosse--Kl\"onne dans~\cite[1.4]{GrosseKlonne}. Un r\'esultat g\'en\'eral proche de celui de J.~Frisch et de Y.-T.~Siu a \'et\'e obtenu par le second auteur dans~\cite[th\'eor\`eme~A.5]{PoineauConnexite}. Pour \'eviter d'introduire des notions techniques suppl\'ementaires, nous nous contenterons d'en \'enoncer la cons\'equence suivante.
\begin{theo}\label{th:noetherienultrametrique}
Soit $(k,\va)$ un corps valu\'e complet ultram\'etrique. Soient $n\in \ensuremath{\mathbf{N}}$ et $\ensuremath{{\boldsymbol{r}}} \in \ensuremath{\mathbf{R}}_{\ge0}^n$. Alors l'anneau $\mathcal{O}(\overline{D}_{k}(\ensuremath{{\boldsymbol{r}}}))$ est noeth\'erien.
\qed
\end{theo}
Soit~$K$ un corps de nombres et $A$~son anneau d'entiers. Notons~$\Sigma_{f}$ l'ensemble des places finies de~$K$ (vues comme des id\'eaux maximaux de~$A$) et $\Sigma_{\infty}$ l'ensemble des places infinies de~$K$ (vues comme des classes de conjugaison de plongements complexes). Posons $\Sigma :=\Sigma_{f} \cup \Sigma_{\infty}$. On utilisera les notations de l'exemple~\ref{ex:cdn} pour les points de~$\mathcal{M}(A)$.
Soit~$\Sigma_{f,0}$ un sous-ensemble fini de~$\Sigma_{f}$. Posons $\Sigma_{0} := \Sigma_{f,0} \cup \Sigma_{\infty}$. Pour tout $\sigma \in \Sigma_{f,0}$, fixons $\ensuremath{\varepsilon}_{\sigma} \in \intoo{0,+\infty}$ et, pour tout $\sigma \in \Sigma_{\infty}$, fixons $\ensuremath{\varepsilon}_{\sigma} \in \intof{0,1}$. Posons
\[ V := \bigcup_{\sigma \in \Sigma_{0}} \{a_{\sigma}^\ensuremath{\varepsilon} : \ensuremath{\varepsilon} \in [0,\ensuremath{\varepsilon}_{\sigma}]\} \cup \bigcup_{\sigma \in \Sigma \setminus \Sigma_{0}} \{a_{\sigma}^\ensuremath{\varepsilon} : \ensuremath{\varepsilon} \in [0,+\infty]\}.\]
Notons~$A[1/\Sigma_{0}]$ l'ensemble des \'el\'ements de~$K$ qui sont entiers pour toutes les places de $\Sigma_{f} \setminus \Sigma_{f,0}$. D\'ecrivons maintenant les fonctions globales sur~$V$.
\begin{lemm}\label{lem:AOV}
L'application $A \to \mathcal{O}(V)$ induit un isomorphisme $A[1/\Sigma_{0}] \xrightarrow[]{\sim} \mathcal{O}(V)$ et on a $\nm_{V} = \max_{\sigma \in \Sigma_{0}} (\va_{\sigma}^{\ensuremath{\varepsilon}_{\sigma}})$.
\end{lemm}
\begin{proof}
C'est une cons\'equence des descriptions explicites d\'emontr\'ees dans \cite[\S 3.1.2]{A1Z}.
\end{proof}
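Illustrons ce lemme dans un cas simple.
\begin{exem}
Pour $K = \ensuremath{\mathbf{Q}}$ et $\Sigma_{f,0} = \{2,3\}$, on a $\mathcal{O}(V) \simeq \ensuremath{\mathbf{Z}}[1/6]$ et, pour tout $f \in \ensuremath{\mathbf{Z}}[1/6]$, $\|f\|_{V} = \max\big(|f|_{2}^{\ensuremath{\varepsilon}_{2}}, |f|_{3}^{\ensuremath{\varepsilon}_{3}}, |f|_{\infty}^{\ensuremath{\varepsilon}_{\infty}}\big)$. Par exemple, $\|1/6\|_{V} = \max(2^{\ensuremath{\varepsilon}_{2}}, 3^{\ensuremath{\varepsilon}_{3}})$.
\end{exem}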
Passons maintenant aux fonctions sur des polydisques relatifs.
\begin{nota}
Pour $n\in\ensuremath{\mathbf{N}}$ et $\ensuremath{{\boldsymbol{r}}} = (r_{1},\dotsc,r_{n}), \ensuremath{{\boldsymbol{s}}} = (s_{1},\dotsc,s_{n}) \in \ensuremath{\mathbf{R}}_{\ge 0}^n$, on note $\ensuremath{{\boldsymbol{r}}} < \ensuremath{{\boldsymbol{s}}}$ si, pour tout $i\in \cn{1}{n}$, $r_{i} < s_{i}$.%
\nomenclature[Itbb]{$\ensuremath{{\boldsymbol{r}}} < \ensuremath{{\boldsymbol{s}}}$}{relation satisfaite si $r_{i} < s_{i}$ pour tout $i\in \cn{1}{n}$}
\end{nota}
\begin{lemm}\label{lem:descriptionseriesarithmetiques}\index{Disque!fonction sur un}\index{Fonction!sur un disque}
Soient $n\in\ensuremath{\mathbf{N}}$ et $\ensuremath{{\boldsymbol{r}}} \in \ensuremath{\mathbf{R}}_{\ge 0}^n$. L'anneau $\mathcal{O}(\overline{D}_{V}(\ensuremath{{\boldsymbol{r}}}))$ est constitu\'e des s\'eries
\[ f = \sum_{\ensuremath{{\boldsymbol{i}}} \ge 0} f_{\ensuremath{{\boldsymbol{i}}}}\, \ensuremath{{\boldsymbol{T}}}^\ensuremath{{\boldsymbol{i}}} \in \mathcal{A}\Big[\frac1{\Sigma_{0}}\Big]\llbracket \ensuremath{{\boldsymbol{T}}} \rrbracket\]
v\'erifiant la condition suivante~:
\[\exists \ensuremath{{\boldsymbol{s}}} > \ensuremath{{\boldsymbol{r}}}, \forall \sigma \in \Sigma_{0},\ \lim_{\ensuremath{{\boldsymbol{i}}} \to \infty} |f_{\ensuremath{{\boldsymbol{i}}}}|_{\sigma}^{\ensuremath{\varepsilon}_{\sigma}}\, \ensuremath{{\boldsymbol{s}}}^\ensuremath{{\boldsymbol{i}}} =0.\]
\end{lemm}
\begin{proof}
Pour $\beta\in\ensuremath{\mathbf{R}}_{>1}$, posons
\[ V_{\beta} := \bigcup_{\sigma \in \Sigma_{0}} \{a_{\sigma}^\ensuremath{\varepsilon} : \ensuremath{\varepsilon} \in [0,\beta \, \ensuremath{\varepsilon}_{\sigma}]\} \cup \bigcup_{\sigma \in \Sigma \setminus \Sigma_{0}} \{a_{\sigma}^\ensuremath{\varepsilon} : \ensuremath{\varepsilon} \in [0,+\infty]\}.\]
La famille $(V_{\beta})_{\beta >1}$ forme une base de voisinages compacts de~$V$ dans~$\mathcal{M}(\mathcal{A})$.
Le r\'esultat d\'ecoule alors de la proposition~\ref{prop:disqueglobal} et du lemme~\ref{lem:AOV} en remarquant que la condition $\lim_{\ensuremath{{\boldsymbol{i}}} \to \infty} |f_{\ensuremath{{\boldsymbol{i}}}}|_{\sigma}^{\beta \, \ensuremath{\varepsilon}_{\sigma}}\, \ensuremath{{\boldsymbol{s}}}^\ensuremath{{\boldsymbol{i}}} =0$ peut se retraduire par $\lim_{\ensuremath{{\boldsymbol{i}}} \to \infty} |f_{\ensuremath{{\boldsymbol{i}}}}|_{\sigma}^{ \ensuremath{\varepsilon}_{\sigma}}\, (\ensuremath{{\boldsymbol{s}}}^{1/\beta})^\ensuremath{{\boldsymbol{i}}} =0$.
\end{proof}
Ajoutons un r\'esultat technique utile sur les fonctions sur des disques relatifs contenus dans une branche.
\begin{lemm}\label{lem:restrictionbrancheiso}\index{Disque!fonctions sur un}
Soit $\sigma \in \Sigma$. Soient $u, v \in \intoo{0,+\infty}$ avec $u\le v$. Si $\sigma\in \Sigma_{\infty}$, supposons que $v\le 1$. Posons $W := \{a_{\sigma}^\ensuremath{\varepsilon} : \ensuremath{\varepsilon} \in [u,v]\}$. Soient~$n\in \ensuremath{\mathbf{N}}$ et $\ensuremath{{\boldsymbol{r}}} \in \intfo{0,1}^n$. Alors, le morphisme de restriction $\mathcal{O}(\overline{D}_{W}(\ensuremath{{\boldsymbol{r}}})) \to \mathcal{O}(\overline{D}_{\mathcal{H}(a_{\sigma}^v)}(\ensuremath{{\boldsymbol{r}}}))$ est un isomorphisme.
En particulier, l'anneau $\mathcal{O}(\overline{D}_{W}(\ensuremath{{\boldsymbol{r}}}))$ est noeth\'erien.
\end{lemm}
\begin{proof}
Notons $P := \{(u',v') \in \ensuremath{\mathbf{R}}_{>0}^2 : 0< u'<u\le v<v'\}$. Pour $(u',v') \in P$, posons $W_{u',v'} := \{a_{\sigma}^\ensuremath{\varepsilon} : \ensuremath{\varepsilon} \in [u',v']\}$. On a $\mathcal{O}(W_{u',v'}) = \hat{K}_{\sigma}$ et $\nm_{W_{u',v'}} = \max(\va_{\sigma}^{u'},\va_{\sigma}^{v'})$.
La famille $(W_{u',v'})_{(u',v') \in P}$ forme une base de voisinages compacts de~$W$ dans~$\mathcal{M}(\mathcal{A})$. On d\'eduit alors de la proposition~\ref{prop:disqueglobal} que l'anneau $\mathcal{O}(\overline{D}_{W}(\ensuremath{{\boldsymbol{r}}}))$ est constitu\'e des s\'eries $f = \sum_{\ensuremath{{\boldsymbol{i}}} \ge 0} f_{\ensuremath{{\boldsymbol{i}}}}\, \ensuremath{{\boldsymbol{T}}}^\ensuremath{{\boldsymbol{i}}} \in \hat{K}_{\sigma}\llbracket \ensuremath{{\boldsymbol{T}}} \rrbracket$
v\'erifiant la condition suivante~:
\[ \exists \ensuremath{{\boldsymbol{s}}} > \ensuremath{{\boldsymbol{r}}}, \ \lim_{\ensuremath{{\boldsymbol{i}}} \to \infty} \max(|f_{\ensuremath{{\boldsymbol{i}}}}|_{\sigma}^{u}\, \ensuremath{{\boldsymbol{s}}}^\ensuremath{{\boldsymbol{i}}} , |f_{\ensuremath{{\boldsymbol{i}}}}|_{\sigma}^{v}\, \ensuremath{{\boldsymbol{s}}}^\ensuremath{{\boldsymbol{i}}} )=0,\]
ou encore
\[ \exists \ensuremath{{\boldsymbol{s}}} > \ensuremath{{\boldsymbol{r}}}, \ \lim_{\ensuremath{{\boldsymbol{i}}} \to \infty} \max(|f_{\ensuremath{{\boldsymbol{i}}}}|_{\sigma}\, \ensuremath{{\boldsymbol{s}}}^{\ensuremath{{\boldsymbol{i}}} /u} , |f_{\ensuremath{{\boldsymbol{i}}}}|_{\sigma}\, \ensuremath{{\boldsymbol{s}}}^{\ensuremath{{\boldsymbol{i}}}/v})=0.\]
Or $\ensuremath{{\boldsymbol{r}}} \in \intfo{0,1}^n$ et, pour tout $\ensuremath{{\boldsymbol{s}}} \in \intfo{0,1}^n$, on a $\ensuremath{{\boldsymbol{s}}}^{1/u} \le \ensuremath{{\boldsymbol{s}}}^{1/v}$. La condition est donc encore \'equivalente \`a
\[ \exists \ensuremath{{\boldsymbol{s}}} > \ensuremath{{\boldsymbol{r}}}, \ \lim_{\ensuremath{{\boldsymbol{i}}} \to \infty} |f_{\ensuremath{{\boldsymbol{i}}}}|_{\sigma}\, \ensuremath{{\boldsymbol{s}}}^{\ensuremath{{\boldsymbol{i}}}/v}=0.\]
On en d\'eduit que $\mathcal{O}(\overline{D}_{W}(\ensuremath{{\boldsymbol{r}}})) = \mathcal{O}(\overline{D}_{\mathcal{H}(a_{\sigma}^v)}(\ensuremath{{\boldsymbol{r}}}))$.
La derni\`ere partie du r\'esultat d\'ecoule des th\'eor\`emes~\ref{th:noetherienarchimedien} et~\ref{th:noetherienultrametrique}.
\end{proof}
Nous utiliserons \'egalement un r\'esultat de noeth\'erianit\'e pour les faisceaux qui s'\'enonce sous la forme suivante.
\begin{prop}\label{prop:faisceaunoetherien}\index{Faisceau!noetherien@noeth\'erien}
Soit $X$ un espace $\mathcal{A}$-analytique. Soient~$\mathcal{F}$ un faisceau coh\'erent sur~$X$ et~$(\mathcal{F}_{m})_{m\ge 0}$ une suite croissante de sous-faisceaux coh\'erents de~$\mathcal{F}$. Alors, tout point de~$X$ poss\`ede un voisinage sur lequel la suite $(\mathcal{F}_{m})_{m\ge 0}$ stationne.
\qed
\end{prop}
Dans le cadre surconvergent qui est le n\^otre, la difficult\'e pour appliquer ce r\'esultat r\'eside dans le fait qu'il requiert que tous les faisceaux soient d\'efinis sur un m\^eme ouvert.
\begin{theo}\label{th:noetherien}
Soient $n\in \ensuremath{\mathbf{N}}$ et $\ensuremath{{\boldsymbol{r}}} \in \intfo{0,1}^n$. Alors l'anneau $\mathcal{O}(\overline{D}_{V}(\ensuremath{{\boldsymbol{r}}}))$ est noeth\'erien.
\end{theo}
\begin{proof}
Soit $(f_{m})_{m\ge 0}$ une suite d'\'el\'ements de $\mathcal{O}(\overline{D}_{V}(\ensuremath{{\boldsymbol{r}}}))$. Pour tout $m\ge 0$, notons~$\mathcal{I}_{m}$ le faisceau d'id\'eaux engendr\'e par~$f_{0},\dotsc,f_{m}$. Insistons sur le fait qu'il est \textit{a priori} d\'efini sur un voisinage de $\overline{D}_{V}(\ensuremath{{\boldsymbol{r}}})$ qui d\'epend de~$m$.
Soit $\sigma \in \Sigma_{0}$. Posons $W_{\sigma} := \{a_{\sigma}^\ensuremath{\varepsilon} : \ensuremath{\varepsilon} \in [\ensuremath{\varepsilon}_{\sigma}/3,\ensuremath{\varepsilon}_{\sigma}]\}$. D'apr\`es le lemme~\ref{lem:restrictionbrancheiso}, l'anneau $\mathcal{O}(\overline{D}_{W_{\sigma}}(\ensuremath{{\boldsymbol{r}}}))$ est noeth\'erien.
Il existe donc~$m_{\sigma}$ tel que, pour tout $m\ge m_{\sigma}$, on ait
\[(f_{0},\dotsc,f_{m}) \, \mathcal{O}(\overline{D}_{W_{\sigma}}(\ensuremath{{\boldsymbol{r}}})) = (f_{0},\dotsc,f_{m_{\sigma}}) \, \mathcal{O}(\overline{D}_{W_{\sigma}}(\ensuremath{{\boldsymbol{r}}})).\]
D'apr\`es le corollaire~\ref{cor:thA}, pour tout $m\ge m_{\sigma}$ et tout $x\in \overline{D}_{W_{\sigma}}(\ensuremath{{\boldsymbol{r}}})$, on a donc $(\mathcal{I}_{m})_{x} = (\mathcal{I}_{m_{\sigma}})_{x}$.
Pour tout $\alpha\in \intoo{0,1}$, posons
\begin{align*}
V_{\alpha} &= \{b^\alpha : b \in V\}\\
& = \bigcup_{\sigma\in\Sigma_{0}} \{a_{\sigma}^\ensuremath{\varepsilon} : \ensuremath{\varepsilon} \in \intff{0, \alpha \, \ensuremath{\varepsilon}_{\sigma}}\} \cup \bigcup_{\sigma\in\Sigma_{f} \setminus \Sigma_{f,0}} \{a_{\sigma}^\ensuremath{\varepsilon} : \ensuremath{\varepsilon} \in \intff{0, +\infty}\}.
\end{align*}
Il suit du lemme~\ref{lem:descriptionseriesarithmetiques} que l'on a $\mathcal{O}(\overline{D}_{V_{\alpha}}(\ensuremath{{\boldsymbol{r}}}^{\alpha})) = \mathcal{O}(\overline{D}_{V}(\ensuremath{{\boldsymbol{r}}}))$. En particulier, toute fonction analytique d\'efinie au voisinage de~$\overline{D}_{V}(\ensuremath{{\boldsymbol{r}}})$ est encore d\'efinie au voisinage de $\overline{D}_{V_{1/2}}(\ensuremath{{\boldsymbol{r}}}^{1/2})$. Notons~$U_{1/2}$ l'int\'erieur de ce dernier polydisque dans~$\E{n}{\mathcal{A}}$. Puisque~$\ensuremath{{\boldsymbol{r}}} \in \intfo{0,1}^n$, il contient $\overline{D}_{V_{1/3}}(\ensuremath{{\boldsymbol{r}}})$. On d\'eduit alors de la proposition~\ref{prop:faisceaunoetherien} qu'il existe $m' \ge0$ tel que, pour tout $m\ge m'$ et tout $x\in \overline{D}_{V_{1/3}}(\ensuremath{{\boldsymbol{r}}})$, on ait $(\mathcal{I}_{m})_{x} = (\mathcal{I}_{m'})_{x}$.
Posons $M := \max\big(m', \max_{\sigma\in \Sigma_{0}}(m_{\sigma})\big)$. Soit $m\ge M$. Pour tout $x\in \overline{D}_{V}(\ensuremath{{\boldsymbol{r}}})$, on a alors $(\mathcal{I}_{m})_{x} = (\mathcal{I}_{M})_{x}$. En d'autres termes, le morphisme de faisceaux
\[\fonction{\varphi}{\mathcal{O}^{M+1}}{\mathcal{I}_{m}}{(g_{0},\dotsc,g_{M})}{\displaystyle\sum_{i=0}^M g_{i} f_{i}}\]
est surjectif sur~$\overline{D}_{V}(\ensuremath{{\boldsymbol{r}}})$. Son noyau~$\mathcal{K}$ est un faisceau coh\'erent. De la suite exacte courte $ 0 \to \mathcal{K} \to \mathcal{O}^{M+1} \to \mathcal{I}_{m} \to 0$, on tire une suite exacte longue
\[ \dotsc \longrightarrow \mathcal{O}(\overline{D}_{V}(\ensuremath{{\boldsymbol{r}}}))^{M+1} \xrightarrow[]{\varphi(\overline{D}_{V}(\ensuremath{{\boldsymbol{r}}}))} \mathcal{I}_{m}(\overline{D}_{V}(\ensuremath{{\boldsymbol{r}}})) \longrightarrow H^1(\overline{D}_{V}(\ensuremath{{\boldsymbol{r}}}),\mathcal{K}) \longrightarrow \dotsc\]
Or, d'apr\`es le corollaire~\ref{cor:thB}, on a $H^1(\overline{D}_{V}(\ensuremath{{\boldsymbol{r}}}),\mathcal{K}) =0$, d'o\`u il d\'ecoule que $f_{m} \in (f_{0},\dotsc,f_{M})\, \mathcal{O}(\overline{D}_{V}(\ensuremath{{\boldsymbol{r}}}))$. On en d\'eduit que l'anneau $\mathcal{O}(\overline{D}_{V}(\ensuremath{{\boldsymbol{r}}}))$ est noeth\'erien.
\end{proof}
\index{Disque!algebre@alg\`ebre d'un!noeth\'erianit\'e de l'|)}
Traduisons ce r\'esultat en termes plus concrets.
\begin{nota}%
\nomenclature[Cjb]{$R_{\sigma}(f)$}{rayon de convergence d'une s\'erie~$f$ en une place~$\sigma$ d'un corps de nombres}%
Soit $n\in \ensuremath{\mathbf{N}}$. Soit $f \in \mathcal{A}[1/\Sigma_{0}]\llbracket \ensuremath{{\boldsymbol{T}}}\rrbracket$. Pour tout $\sigma\in \Sigma_{0}$ et tout $\ensuremath{{\boldsymbol{r}}} \in \ensuremath{\mathbf{R}}_{\ge 0}^n$, on \'ecrit $R_{\sigma}(f) > \ensuremath{{\boldsymbol{r}}}$ si l'image de~$f$ dans $\hat{K}_{\sigma}\llbracket \ensuremath{{\boldsymbol{T}}}\rrbracket$ converge au voisinage du disque $D_{\hat{K}_{\sigma}}(\ensuremath{{\boldsymbol{r}}})$, le corps~$\hat{K}_{\sigma}$ \'etant muni de la valeur absolue~$\va_{\sigma}$.
\end{nota}
\begin{coro}\label{cor:noetherienconcret}
Soient $n\in \ensuremath{\mathbf{N}}$ et $\ensuremath{{\boldsymbol{r}}}\in \intfo{0,1}^n$. Pour tout $\sigma\in \Sigma_{0}$, soit $\alpha_{\sigma} \in \ensuremath{\mathbf{R}}_{>0}$. Alors, le sous-anneau de $\mathcal{A}[1/\Sigma_{0}]\llbracket \ensuremath{{\boldsymbol{T}}}\rrbracket$ constitu\'e des s\'eries~$f$ telles que
\[ \forall \sigma\in \Sigma_{0},\ R_{\sigma}(f) > \ensuremath{{\boldsymbol{r}}}^{\alpha_{\sigma}}\]
est noeth\'erien.
\end{coro}
\begin{proof}
Quitte \`a multiplier les~$\alpha_{\sigma}$ par une constante~$\gamma >0$ et remplacer~$\ensuremath{{\boldsymbol{r}}}$ par~$\ensuremath{{\boldsymbol{r}}}^{1/\gamma}$, on peut supposer que, pour tout $\sigma \in \Sigma_{0}$, on a $\alpha_{\sigma} \in \intoo{0,1}$. Posons
\[ V := \bigcup_{\sigma \in \Sigma_{0}} \{a_{\sigma}^\ensuremath{\varepsilon} : \ensuremath{\varepsilon} \in [0,\alpha_{\sigma}]\} \cup \bigcup_{\sigma \in \Sigma \setminus \Sigma_{0}} \{a_{\sigma}^\ensuremath{\varepsilon} : \ensuremath{\varepsilon} \in [0,+\infty]\}.\]
D'apr\`es le lemme~\ref{lem:descriptionseriesarithmetiques}, l'anneau d\'ecrit dans l'\'enonc\'e n'est autre que~$\mathcal{O}(\overline{D}_{V}(\ensuremath{{\boldsymbol{r}}}))$. Le th\'eor\`eme~\ref{th:noetherien} assure qu'il est noeth\'erien.
\end{proof}
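Signalons le cas particulier suivant, qui fait le lien avec le r\'esultat de D.~Harbater.
\begin{rema}
Pour $K = \ensuremath{\mathbf{Q}}$, $n = 1$, $\Sigma_{f,0} = \emptyset$ et $\alpha_{\infty} = 1$, l'anneau du corollaire~\ref{cor:noetherienconcret} n'est autre que $\ensuremath{\mathbf{Z}}_{r^+}\llbracket T\rrbracket$. On retrouve ainsi la noeth\'erianit\'e \'enonc\'ee au th\'eor\`eme~\ref{th:Harbaternoetherien}.
\end{rema}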
\index{Anneau!noetherien@noeth\'erien|)}
\index{Series arithmetiques convergentes@S\'eries arithm\'etiques convergentes|)}
\chapter{Propri\'et\'es topologiques des espaces analytiques}\label{chap:topo}
Dans ce chapitre, nous \'etudions certaines propri\'et\'es topologiques des espaces analytiques.
La section~\ref{sec:elastique} traite de topologie g\'en\'erale. Nous y introduisons la notion d'espace \'elastique, qui renforce celle d'espace connexe par arcs (\cf~d\'efinition~\ref{def:elastique}).
Dans la section~\ref{sec:cpa}, nous d\'emontrons que les espaces analytiques sont localement connexes par arcs (\cf~th\'eor\`eme~\ref{th:cpageneral}). Comme on s'y attend, nous \'etudions d'abord les espaces affines analytiques, cas dans lequel nous pouvons exhiber des sections explicites de la projection (\cf~lemme~\ref{lem:section}). Le cas g\'en\'eral s'y r\'eduit, par normalisation de Noether.
Pour finir, dans la section~\ref{sec:dimtop}, nous calculons la dimension topologique, plus pr\'ecis\'ement la dimension de recouvrement, des espaces affines analytiques, ainsi que des disques (\cf~th\'eor\`eme~\ref{th:dimensionetoile}). Le calcul de la dimension des fibres est classique, mais, les espaces que nous consid\'erons n'\'etant, en g\'en\'eral, pas m\'etrisables, le passage \`a l'espace total requiert de la prudence.
\medbreak
Soit~$(\mathcal{A},\nm)$ un anneau de base g\'eom\'etrique. Posons $B:=\mathcal{M}(\mathcal{A})$.
\section{Espaces \'elastiques}\label{sec:elastique}
\index{Espace topologique!elastique@\'elastique|(}
Dans cette section, nous regroupons quelques r\'esultats de topologie g\'en\'erale qui vont nous \^etre utiles pour montrer que les espaces affines, puis les espaces analytiques quelconques, sont localement connexes par arcs. Nous introduisons notamment la notion d'espace \'elastique.
\medbreak
Commen\c cons par montrer un r\'esultat de mod\'eration du nombre de composantes connexes.
\begin{prop}\label{prop:top}\index{Espace topologique!connexe}\index{Application!finie et ouverte}
Soit $f \colon X \to Y$ une application finie et ouverte entre espaces topologiques. Supposons que~$Y$ est connexe. Alors $X$ poss\`ede un nombre fini de composantes connexes et la restriction de~$f$ \`a chacune d'elles est surjective.
\end{prop}
\begin{proof}
Soit~$x\in X$. Notons~$\mathcal{O}\mathcal{F}_x$ l'ensemble des parties ouvertes et ferm\'ees de~$X$ qui contiennent~$x$ et posons $OF_x:= \bigcap_{U\in\mathcal{O}\mathcal{F}_x} U$. Montrons que
\[\bigcap_{U\in\mathcal{O}\mathcal{F}_x} f(U)=f(OF_x).\]
L'inclusion $f(OF_x) \subset \bigcap_{U\in\mathcal{O}\mathcal{F}_x} f(U)$ est \'evidente. D\'emontrons l'inclusion r\'eciproque. Soit $y \in \bigcap_{U\in\mathcal{O}\mathcal{F}_x} f(U)$. Notons $x_{1},\dotsc,x_{n}$ les \'el\'ements de~$f^{-1}(y)$. Supposons, par l'absurde, que pour tout~$i\in\cn{1}{n}$, il existe~$U_i\in \mathcal{O}\mathcal{F}_x$ tel que~$x_i\notin U_i$. On a alors $(\bigcap_{i=1}^n U_i)\cap f^{-1}(y)= \emptyset$. Or, toute intersection finie de parties ouvertes et ferm\'ees est encore ouverte et ferm\'ee, donc $\bigcap_{i=1}^nU_{i} \in \mathcal{O}\mathcal{F}_x$, ce qui est absurde car~$y$ appartient \`a~$\bigcap_{U\in\mathcal{O}\mathcal{F}_x} f(U)$. Par cons\'equent, il existe~$i\in\cn{1}{n}$ tel que~$x_{i}$ appartienne \`a tous les \'el\'ements de~$\mathcal{O}\mathcal{F}_{x}$ et donc \`a~$OF_{x}$. On en d\'eduit que~$y$ appartient \`a~$f\left(OF_x\right)$.
Pour tout~$U\in\mathcal{O}\mathcal{F}_x$, $f(U)$ est ouvert, ferm\'e et non vide, donc \'egal \`a~$Y$, puisque~$Y$ est connexe. Ainsi, par ce qui pr\'ec\`ede, on en d\'eduit que $f(OF_x)=Y$.
\medbreak
Montrons maintenant que les ensembles~$OF_x$ forment une partition de~$X$. Soient $x,x' \in X$.
Supposons que $x' \notin OF_{x}$. Alors il existe un ouvert ferm\'e~$U$ contenant~$x$ tel que~$x'$ n'appartienne pas \`a~$U$. Dans ce cas~$X\setminus U$ est un ouvert ferm\'e qui contient~$x'$ et ne contient pas~$x$. On en d\'eduit que $OF_x\cap OF_{x'} = \emptyset$.
Supposons que $x' \in OF_{x}$. Alors tout ouvert ferm\'e qui contient~$x$ contient~$x'$ donc $OF_{x'} \subset OF_x$. D'apr\`es ce qui pr\'ec\`ede, cela implique que $x \in OF_{x'}$, et finalement que $OF_x = OF_{x'}$. La propri\'et\'e annonc\'ee s'ensuit.
\medbreak
Soit~$y\in Y$. Pour tout $x\in X$, on a $f(OF_x)=Y$, donc il existe $z\in f^{-1}(y)$ tel que $z \in OF_{x}$, ce qui entra\^ine $OF_{z} = OF_{x}$. On en d\'eduit l'\'egalit\'e
\[ X = \bigsqcup_{z\in f^{-1}(y)} OF_{z}.\]
Soit $z\in f^{-1}(y)$. L'ensemble~$OF_{z}$ est intersection de ferm\'es, donc ferm\'e. En \'ecrivant~$OF_{z}$ comme le compl\'ementaire de la r\'eunion (finie) des~$OF_{z'}$, pour $z' \in f^{-1}(y) \setminus \{z\}$, on montre que~$OF_{z}$ est ouvert.
Montrons que~$OF_{z}$ est connexe, ce qui permettra de conclure. Soient~$A$ et~$B$ des ouverts disjoints de~$OF_{z}$ de r\'eunion \'egale \`a~$OF_{z}$. On peut supposer que $z\in A$. Dans ce cas, on a $A \in \mathcal{O}\mathcal{F}_{z}$, donc $OF_{z} \subset A$, et finalement $OF_{z} = A$. Le r\'esultat s'ensuit.
\end{proof}
Nous souhaiterions disposer d'un r\'esultat analogue au pr\'ec\'edent mais concernant cette fois-ci la connexit\'e par arcs. Pour l'obtenir, nous aurons besoin d'imposer une condition plus forte sur l'espace d'arriv\'ee. Nous l'introduisons ici.
\begin{defi}\label{def:elastique}\index{Espace topologique!elastique@\'elastique|textbf}\index{Chemin!elastique@\'elastique|textbf}
Soit~$X$ un espace topologique.
Soient $x,y \in X$ et~$\ell$ un chemin reliant~$x$ \`a~$y$. Le chemin~$\ell$ est dit \emph{\'elastique en~$y$} si, pour tout voisinage~$U$ de~$\ell$, il existe un voisinage~$V$ de~$y$ tel que, pour tout~$y'\in V$, il existe un chemin de~$x$ \`a~$y'$ contenu dans~$U$. Le chemin~$\ell$ est dit \emph{\'elastique} s'il est \'elastique en~$x$ et en~$y$.
L'espace~$X$ est dit \emph{\'elastique} si, pour tous $x,y \in X$, il existe un chemin \'elastique reliant~$x$ \`a~$y$.
\end{defi}
\begin{rema}\label{rem:definitionelastique}
\begin{enumerate}[i)]
\item Pour montrer qu'un espace~$X$ est \'elastique, il suffit de montrer qu'il est connexe par arcs et que, pour tout point~$x$ de~$X$, il existe un chemin d'origine~$x$ qui est \'elastique en~$x$. La preuve consiste en un simple argument bas\'e sur la concat\'enation des chemins.
\item Tout espace connexe et localement connexe par arcs est \'elastique. De m\^eme, tout espace contractile est \'elastique. La notion d'espace \'elastique est cependant plus g\'en\'erale. En effet, il existe des espaces connexes et localement connexes par arcs mais non contractiles (par exemple le cercle) ainsi que des espaces contractiles non localement connexes par arcs (par exemple la r\'eunion des droites du plan r\'eel de pente rationnelle passant par~0).
\end{enumerate}
\end{rema}
\begin{exem}\label{exemple_non_\'elastique}
L'exemple qui suit montre que les notions d'espace \'elastique et d'espace connexe par arcs sont distinctes.
Notons~$G_{\sin(1/x)}$ le sous-ensemble de~$\ensuremath{\mathbf{R}}^2$ constitu\'e du graphe de~$\sin(1/x)$ sur~$\intof{0,1/\pi}$. Consid\'erons l'espace topologique~$X$ form\'e de la r\'eunion de l'adh\'erence $\overline{G_{\sin(1/x)}}$ de~$G_{\sin(1/x)}$ dans~$\ensuremath{\mathbf{R}}^2$ et d'un chemin~$\ell_{0}$ reliant le point $y_{0}$ de coordonn\'ees $(0,-1)$ au point~$x_{0}$ de coordonn\'ees $(1/\pi,0)$, de sorte que $\ell_{0} \cap \overline{G_{\sin(1/x)}} =\{x_{0},y_{0}\}$ (\cf~figure~\ref{fig:elastiquevscpa}). Cet espace est connexe par arcs.
\begin{figure}[!h]
\centering
\labellist
\pinlabel~$\bullet$ at 3.5 27
\pinlabel~$y_{0}$ at -6 27
\pinlabel~$\bullet$ at 71 47
\pinlabel~$x_0$ at 79 47
\pinlabel~$\ell_0$ at 40 0
\endlabellist
\includegraphics[scale=1.5]{nonelastique1}
\caption{Un espace connexe par arcs non \'elastique.}
\label{fig:elastiquevscpa}
\end{figure}
Soit~$\ell$ un chemin reliant~$x_{0}$ \`a~$y_{0}$. Tout voisinage de~$y_{0}$ contient un point de $G_{\sin(1/x)}$ avec une abscisse strictement positive tr\`es petite. Un tel point ne peut \^etre reli\'e \`a~$x_{0}$ dans aucun voisinage assez petit du chemin~$\ell$ (\cf~figure~\ref{fig:voisinagechemin}).
\pgfdeclareimage[interpolate=true,height=4cm]{nonelastique2}{nonelastique2}
\begin{figure}[!h]
\centering
\labellist
\pinlabel~$\bullet$ at 1 21
\pinlabel~$y_{0}$ at -6 21
\pinlabel~$\bullet$ at 69 39.5
\pinlabel~$x_0$ at 77 40
\endlabellist
\includegraphics[scale=1.5]{nonelastique2}
\caption{Un voisinage d'un chemin de~$x_{0}$ \`a~$y_{0}$.}
\label{fig:voisinagechemin}
\end{figure}
Par cons\'equent, le chemin~$\ell$ n'est pas \'elastique en~$y_{0}$ et l'espace topologique~$X$ n'est donc pas \'elastique.
\end{exem}
La proposition suivante reprend \cite[lemma~3.2.5]{Ber1}. Nous en rappelons la d\'emonstration pour apporter quelques pr\'ecisions.
\begin{prop}\label{chemin}\index{Espace topologique!connexe par arcs}\index{Application!finie et ouverte}
Soit~$f \colon X \to Y$ une application finie et ouverte entre espaces topologiques. Supposons que~$X$ et~$Y$ sont connexes, que~$X$ est s\'epar\'e et que~$Y$ est \'elastique. Alors $X$~est connexe par arcs.
\end{prop}
\begin{proof}
Soit~$\ell$ un chemin de~$Y$. Soit~$\mathcal{U}$ une base d'ouverts de~$\ell$ qui engendre sa topologie. Nous pouvons supposer que~$\mathcal{U}$ est d\'enombrable. Pour tout $U \in \mathcal{U}$, l'application $f^{-1}(U) \to U$ induite par~$f$ est encore finie et ouverte. La proposition~\ref{prop:top} assure alors que~$f^{-1}(U)$ a un nombre fini de composantes connexes. Celles-ci sont donc ouvertes. Notons~$\mathcal{C}_{U}$ l'ensemble des composantes connexes de~$f^{-1}(U)$. L'ensemble $\mathcal{C} := \bigcup_{U\in \mathcal{U}} \mathcal{C}_{U}$ est alors d\'enombrable et, d'apr\`es le lemme~\ref{lem:voisinagefibre}, il engendre la topologie de~$f^{-1}(\ell)$.
D'apr\`es le lemme~\ref{lem:criterepropre}, $f^{-1}(\ell)$ est compact. D'apr\`es \cite[IX, \S 2, \no 9, proposition~16]{BourbakiTG510}
il est donc m\'etrisable. Puisque les \'el\'ements de~$\mathcal{C}$ sont connexes, $f^{-1}(\ell)$ est localement connexe. D'apr\`es \cite[III, \S 2, \no 4, corollaire~2 de la proposition~11]{BourbakiTA14},
$f^{-1}(\ell)$ est donc localement connexe par arcs. En particulier, les composantes connexes de~$f^{-1}(\ell)$ sont compactes et connexes par arcs.
\medbreak
Pour d\'emontrer le r\'esultat, il suffit de montrer que les composantes connexes par arcs de~$X$ sont ouvertes. Soit $x\in X$. Posons $y := f(x)$. Soit $y_{0} \in Y$. Soit~$\ell$ un chemin reliant~$y_{0}$ \`a~$y$ qui soit \'elastique en~$y$. D'apr\`es le raisonnement pr\'ec\'edent, $f^{-1}(\ell)$ poss\`ede un nombre fini de composantes connexes, qui sont compactes et connexes par arcs. Notons-les $\Sigma_{1},\dotsc,\Sigma_{n}$. On peut supposer que $x \in \Sigma_{1}$.
Puisque~$X$ est s\'epar\'e, on peut trouver des voisinages $W_{1},\dotsc,W_{n}$ respectivement de $\Sigma_{1},\dotsc,\Sigma_{n}$ qui soient deux \`a deux disjoints. D'apr\`es le lemme~\ref{lem:voisinagefibre}, il existe un voisinage~$U$ de~$\ell$ tel que $f^{-1}(U) \subset \bigsqcup_{i=1}^n W_{i}$.
Soit~$V$ un voisinage ouvert de~$y$ dans~$U$ tel que, pour tout $y' \in V$, il existe un chemin reliant~$y_{0}$ \`a~$y'$ contenu dans~$U$. Alors $f^{-1}(V) \cap W_{1}$ est inclus dans la composante connexe par arcs de~$x$. En effet, soit $x' \in f^{-1}(V) \cap W_{1}$. Il existe un chemin~$\ell'$ reliant~$y_{0}$ \`a~$y':=f(x')$ contenu dans~$U$. Notons~$\Sigma'$ la composante connexe de~$f^{-1}(\ell')$ contenant~$x'$. Elle est connexe par arcs et contenue dans $f^{-1}(U)$, donc dans $\bigsqcup_{i=1}^n W_{i}$, donc dans~$W_{1}$ puisqu'elle rencontre~$W_{1}$. On a donc
\[ \Sigma' \cap f^{-1}(y_{0}) \subset W_{1} \cap f^{-1}(y_{0}) = \Sigma_{1} \cap f^{-1}(y_{0}) .\]
On en d\'eduit que $\Sigma' \cup \Sigma_{1}$ est connexe par arcs. Il existe donc un chemin reliant~$x$ \`a~$x'$. Le r\'esultat s'en d\'eduit.
\end{proof}
\begin{rema}\label{rem:revetementdegre2}
Il existe un espace topologique connexe par arcs (mais non \'elastique) admettant un rev\^etement de degr\'e~$2$ connexe mais non connexe par arcs. On peut, par exemple, consid\'erer le rev\^etement de l'espace d\'efini \`a l'exemple \ref{exemple_non_\'elastique} repr\'esent\'e \`a la figure~\ref{fig:revetementcpa}.
\begin{figure}[!h]
\centering
\includegraphics[scale=2.2]{nonelastique3}
\caption{Rev\^etement de degr\'e 2 connexe mais non connexe par arcs d'un espace connexe par arcs non \'elastique.}
\label{fig:revetementcpa}
\end{figure}
Cela permet de justifier l'introduction de la notion d'espace \'elastique.
\end{rema}
Dans le cas o\`u l'espace au but est seulement suppos\'e connexe par arcs, on dispose cependant du r\'esultat suivant.
\begin{prop}\label{prop:revetementcpa}\index{Espace topologique!connexe par arcs}\index{Application!finie et ouverte}
Soit~$f \colon X \to Y$ une application finie et ouverte entre espaces topologiques. Supposons que~$X$ et~$Y$ sont s\'epar\'es et que~$Y$ est connexe par arcs. Alors $X$ poss\`ede un nombre fini de composantes connexes par arcs et la restriction de~$f$ \`a chacune d'elles est surjective.
\end{prop}
\begin{proof}
Soient~$y_{1},y_{2} \in Y$. Par hypoth\`ese, il existe un chemin~$\ell$ reliant~$y_{1}$ et~$y_{2}$. Puisque~$Y$ est s\'epar\'e, d'apr\`es~\cite[III, \S 2, \no 9, proposition~18]{BourbakiTA14},
on peut supposer que~$\ell$ est hom\'eomorphe au segment~$[0,1]$ et, en particulier, \'elastique. Le morphisme $f^{-1}(\ell) \to \ell$ induit par~$f$ est encore fini et ouvert. D'apr\`es la proposition~\ref{prop:top} et la proposition~\ref{chemin}, $f^{-1}(\ell)$ a un nombre fini de composantes connexes et celles-ci se surjectent sur~$\ell$ par~$f$ et sont connexes par arcs.
On en d\'eduit imm\'ediatement que les composantes connexes par arcs de~$X$ se surjectent sur~$Y$ par~$f$. Puisque les fibres de~$f$ sont finies, il ne peut y en avoir qu'un nombre fini.
\end{proof}
\begin{rema}
L'exemple de la remarque~\ref{rem:revetementdegre2} montre que les composantes connexes par arcs de~$X$ peuvent n'\^etre ni ferm\'ees ni ouvertes.
\end{rema}
Nous allons maintenant \'enoncer deux crit\`eres permettant de montrer l'\'elasticit\'e de certains espaces topologiques.
\begin{lemm}\label{lem:crit_elast}
Soient~$X$ un espace topologique et~$W$ un ouvert de~$X$. Supposons que
\begin{enumerate}[i)]
\item $W$ et~$F:=X\setminus W$ sont \'elastiques ;
\item l'ensemble $\partial F:=F\setminus \mathring F$ n'est pas vide ;
\item tout point~$x$ de~$\partial F$ admet une base de voisinages~$\mathcal{U}_x$ dans~$X$ telle que pour tout $U\in\mathcal{U}_x$ et tout $y\in W\cap U$, il existe un chemin trac\'e sur~$U$ reliant~$y$ \`a un point de~$F$.
\end{enumerate}
Alors l'espace topologique~$X$ est \'elastique.
\end{lemm}
\begin{proof}
Soit~$x \in X$. D'apr\`es la remarque~\ref{rem:definitionelastique}, il suffit de montrer qu'il existe un chemin d'origine~$x$ qui est \'elastique en~$x$. Si $x\in W$, la propri\'et\'e d\'ecoule du fait que~$W$ est ouvert dans~$X$ et \'elastique. Il en va de m\^eme si $x\in \mathring F$. On peut donc supposer que $x\in \partial F$.
Choisissons un chemin~$\ell$ de~$F$ d'origine~$x$ et \'elastique en~$x$ en tant que chemin de~$F$. Notons $x_{0}\in F$ son autre extr\'emit\'e.
Soit~$V$ un voisinage de~$\ell$ dans~$X$. Le point~$x$ poss\`ede un voisinage~$U'$ dans~$V$ tel que, pour tout~$x'\in U'\cap F$, il existe un chemin reliant~$x'$ \`a~$x_{0}$ dans~$V$. Soit~$U$ un voisinage de~$x$ dans~$U'$ appartenant \`a~$\mathcal{U}_{x}$. Soit~$y \in U$. Si $y\in F$, on a d\'ej\`a vu qu'il existe un chemin reliant~$y$ \`a~$x_0$ dans~$V$. Si~$y\in W$, alors il existe un chemin dans~$U$ reliant~$y$ \`a un point~$x'$ de~$U\cap F$. Il existe alors un chemin reliant~$x'$ \`a~$x_0$ dans~$V$, et donc un chemin reliant~$y$ \`a~$x_0$ dans~$V$. Ceci montre que~$\ell$ est \'elastique en~$x$ en tant que chemin de~$X$.
\end{proof}
\begin{lemm}\label{lem:crit_elast'}
Soient~$X$ un espace topologique et~$(X_i)_{i\in\ensuremath{\mathbf{N}}}$ une suite croissante (pour l'inclusion) de sous-espaces topologiques tels que~$\bigcup_{i\in\ensuremath{\mathbf{N}}}\mathring X_i=X$. Si, pour tout~$i\in\ensuremath{\mathbf{N}}$, l'espace~$X_i$ est \'elastique, alors l'espace~$X$ est lui aussi \'elastique.
\qed
\end{lemm}
\index{Espace topologique!elastique@\'elastique|)}
\section[Connexit\'e par arcs locale]{Connexit\'e par arcs locale}\label{sec:cpa}
Dans cette section, nous nous int\'eressons \`a la connexit\'e par arcs et la connexit\'e par arcs locale des espaces analytiques. Nous commencerons par le cas des espaces affines.
\medbreak
Soit $n\in \ensuremath{\mathbf{N}}$. Notons $\pi \colon \E{n}{\mathcal{A}} \to B = \mathcal{M}(\mathcal{A})$ le morphisme de projection. Fixons des coordonn\'ees $T_{1},\dotsc,T_{n}$ sur $\E{n}{\mathcal{A}}$. Posons $\ensuremath{{\boldsymbol{T}}} := (T_{1},\dotsc,T_{n})$.%
\nomenclature[Ita]{$\ensuremath{{\boldsymbol{T}}}$}{$= (T_{1},\dotsc,T_{n})$, $n$-uplet de coordonn\'ees}%
\begin{nota}%
\nomenclature[Itb]{$\ensuremath{{\boldsymbol{r}}}$}{$= (r_{1},\dotsc,r_{n}) \in \ensuremath{\mathbf{R}}_{\ge 0}^n$}%
\nomenclature[Itc]{$\ensuremath{{\boldsymbol{i}}}$}{$= (i_{1},\dotsc,i_{n}) \in \ensuremath{\mathbf{N}}^n$}%
\nomenclature[Itd]{$\ensuremath{{\boldsymbol{r}}}^\ensuremath{{\boldsymbol{i}}}$}{$= \prod_{m=1}^n r_{m}^{i_{m}}$}%
Soit $A$ un anneau. Pour tout $\ensuremath{{\boldsymbol{a}}} = (a_{1},\dotsc,a_{n}) \in A^n$ et tout $\ensuremath{{\boldsymbol{i}}} = (i_{1},\dotsc,i_{n}) \in \ensuremath{\mathbf{N}}^n$, on pose
\[ \ensuremath{{\boldsymbol{a}}}^\ensuremath{{\boldsymbol{i}}} := \prod_{m=1}^n a_{m}^{i_{m}},\]
avec la convention que $a^0 = 1$, pour tout $a\in A$.
\end{nota}
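Par exemple, pour $n = 2$, $\ensuremath{{\boldsymbol{a}}} = (a_{1},a_{2})$ et $\ensuremath{{\boldsymbol{i}}} = (2,1)$, on a $\ensuremath{{\boldsymbol{a}}}^{\ensuremath{{\boldsymbol{i}}}} = a_{1}^{2}\, a_{2}$.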
\begin{nota}%
\nomenclature[Iua]{$\eta_{\ensuremath{{\boldsymbol{r}}}}$}{sur un corps valu\'e ultram\'etrique complet, unique point du bord de Shilov du polydisque centr\'e en~0 de polyrayon $\ensuremath{{\boldsymbol{r}}}$}%
Soit $(k,\va)$ un corps valu\'e ultram\'etrique complet. Notons $\ensuremath{{\boldsymbol{T}}} := (T_{1},\dotsc,T_{n})$ des coordonn\'ees sur~$\E{n}{k}$. Soit $\ensuremath{{\boldsymbol{r}}} = (r_1,\dotsc,r_n) \in \ensuremath{\mathbf{R}}_{\ge0}^n$. On note~$\eta_{\ensuremath{{\boldsymbol{r}}}}$ le point de~$\E{n}{k}$ associ\'e \`a la semi-norme multiplicative
\[{\renewcommand{\arraystretch}{1.3}\begin{array}{ccc}
k[\ensuremath{{\boldsymbol{T}}}] & \to & \ensuremath{\mathbf{R}}_{\ge 0}\\
\displaystyle \sum_{\ensuremath{{\boldsymbol{i}}} \ge 0} a_{\ensuremath{{\boldsymbol{i}}}}\, \ensuremath{{\boldsymbol{T}}}^{\ensuremath{{\boldsymbol{i}}}} & \mapsto & \max_{\ensuremath{{\boldsymbol{i}}} \ge 0} ( |a_{\ensuremath{{\boldsymbol{i}}}}| \, \ensuremath{{\boldsymbol{r}}}^{\ensuremath{{\boldsymbol{i}}}} ).
\end{array}}\]
Soit $b\in B_{\mathrm{um}}$. On note $\eta_{b,\ensuremath{{\boldsymbol{r}}}} \in \E{n}{\mathcal{A}}$ le point~$\eta_{\ensuremath{{\boldsymbol{r}}}}$ de la fibre $\pi^{-1}(b) \simeq \E{n}{\mathcal{H}(b)}$.
\end{nota}
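\`A titre d'illustration, pour $n = 1$ et $P = T^{2} + a \in k[T]$, on a $|P(\eta_{r})| = \max(|a|, r^{2})$~; en particulier, $\eta_{0}$ est le point rationnel~$0$ de~$\E{1}{k}$.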
Rappelons que nous avons introduit une fonction $\ensuremath{\varepsilon} \colon B_{\mathrm{arc}} \to \intof{0,1}$ (\cf~notation~\ref{not:epsilon}).
\begin{nota}
\nomenclature[Isz]{$\rho_{b}$}{pour $b$ point archim\'edien de~$\mathcal{M}(\mathcal{A})$, plongement canonique $\ensuremath{\mathbf{R}}^n \to \pi_{n}^{-1}(b)$}%
Soit $b\in B_{\mathrm{arc}}$. D'apr\`es le th\'eor\`eme~\ref{th:vaarchimedienne} et le lemme~\ref{lem:AnC}, la fibre $\pi^{-1}(b)$ s'identifie \`a~$\ensuremath{\mathbf{C}}^n$ si $\mathcal{H}(b) = \ensuremath{\mathbf{C}}$ et \`a $\ensuremath{\mathbf{C}}^n/\Gal(\ensuremath{\mathbf{C}}/\ensuremath{\mathbf{R}})$ si $\mathcal{H}(b) = \ensuremath{\mathbf{R}}$ et, dans tous les cas, on a un plongement canonique
\[ \rho_{b} \colon \ensuremath{\mathbf{R}}^n \to \pi^{-1}(b).\]
\end{nota}
\begin{lemm}\label{lem:section}\index{Projection!section d'une}%
\nomenclature[Iuc]{$\sigma_{\ensuremath{{\boldsymbol{r}}}}$}{une section de la projection~$\pi_{n}$}%
Soit $\ensuremath{{\boldsymbol{r}}} = (r_1,\dotsc,r_n) \in \ensuremath{\mathbf{R}}_{\ge0}^n$. L'application
\[\fonction{\sigma_{\ensuremath{{\boldsymbol{r}}}}}{B}{\E{n}{\mathcal{A}}}{b}{
\begin{cases}
\eta_{b,\ensuremath{{\boldsymbol{r}}}} &\textrm{si } b\in B_{\mathrm{um}}~;\\
\rho_{b}(\ensuremath{{\boldsymbol{r}}}^{1/\ensuremath{\varepsilon}(b)}) &\textrm{si } b\in B_{\mathrm{arc}}.
\end{cases}}\]
est une section de la projection $\pi \colon \E{n}{\mathcal{A}} \to B$. Sa restriction \`a~$B_{\mathrm{um}}$ et sa restriction \`a~$B_{\mathrm{arc}}$ sont continues.
Notons $S := \{ i \in \cn{1}{n} : r_{i} \ne 0\}$. Soit~$b_{0} \in B$ tel que la famille $(r_{i})_{i\in S}$ soit libre dans le $\ensuremath{\mathbf{Q}}$-espace vectoriel $\ensuremath{\mathbf{R}}_{>0}/|\mathcal{H}(b_{0})^\times|^\ensuremath{\mathbf{Q}}$. Alors $\sigma_{\ensuremath{{\boldsymbol{r}}}}$ est continue en~$b_{0}$.
\end{lemm}
\begin{proof}
Il d\'ecoule directement de la d\'efinition de~$\sigma_{\ensuremath{{\boldsymbol{r}}}}$ que c'est une section de~$\pi$.
Rappelons que, par d\'efinition de la topologie de~$\E{n}{\mathcal{A}}$, pour montrer que la restriction de~$\sigma_{\ensuremath{{\boldsymbol{r}}}}$ \`a une partie~$U$ de~$B$ est continue, il suffit de montrer que, pour tout $P \in \mathcal{A}[\ensuremath{{\boldsymbol{T}}}]$, l'application
\[ b\in U \mapsto |P(\sigma_{\ensuremath{{\boldsymbol{r}}}}(b))| \in \ensuremath{\mathbf{R}}_{\ge 0}\]
est continue. Soit $P = \sum_{\ensuremath{{\boldsymbol{i}}} \ge 0} a_{\ensuremath{{\boldsymbol{i}}}}\, \ensuremath{{\boldsymbol{T}}}^{\ensuremath{{\boldsymbol{i}}}}\in \mathcal{A}[\ensuremath{{\boldsymbol{T}}}]$.
Commen\c cons par traiter le cas de la restriction \`a~$B_{\mathrm{um}}$. Pour tout $b\in B_{\mathrm{um}}$, on a
\[ |P(\sigma_{\ensuremath{{\boldsymbol{r}}}}(b))| = \max_{\ensuremath{{\boldsymbol{i}}} \ge0} (|a_{\ensuremath{{\boldsymbol{i}}}}(b)| \, \ensuremath{{\boldsymbol{r}}}^\ensuremath{{\boldsymbol{i}}}).\]
Le r\'esultat d\'ecoule donc de la continuit\'e des fonctions~$|a_{\ensuremath{{\boldsymbol{i}}}}|$ (qui sont en nombre fini).
\'Etudions maintenant la restriction \`a~$B_{\mathrm{arc}}$. Rappelons que~$B_{\mathrm{arc}}$ est un ouvert de~$B$, d'apr\`es la remarque~\ref{rem:umfermee}. Soit~$b \in B_{\mathrm{arc}}$. Pour tout $b'\in B$, on a
\begin{align*}
& \big| |P(\sigma_{\ensuremath{{\boldsymbol{r}}}}(b'))| -|P(\sigma_{\ensuremath{{\boldsymbol{r}}}}(b))| \big|\\
=\ & \big| |P(\rho_{b'}(\ensuremath{{\boldsymbol{r}}}^{1/\ensuremath{\varepsilon}(b')}))| - |P(\rho_{b}(\ensuremath{{\boldsymbol{r}}}^{1/\ensuremath{\varepsilon}(b)}))| \big|\\
\le\ & \big| |P(\rho_{b'}(\ensuremath{{\boldsymbol{r}}}^{1/\ensuremath{\varepsilon}(b')}))| - |P(\rho_{b'}(\ensuremath{{\boldsymbol{r}}}^{1/\ensuremath{\varepsilon}(b)}))| \big| + \big| |P(\rho_{b'}(\ensuremath{{\boldsymbol{r}}}^{1/\ensuremath{\varepsilon}(b)}))| - |P(\rho_{b}(\ensuremath{{\boldsymbol{r}}}^{1/\ensuremath{\varepsilon}(b)}))| \big|.
\end{align*}
\'Etudions d'abord le second terme de la somme. Rappelons que, d'apr\`es la remarque~\ref{rem:gammaR}, on a une injection canonique $\gamma_{\ensuremath{\mathbf{R}}} \colon \ensuremath{\mathbf{R}} \to \mathcal{O}(B_{\mathrm{arc}})$. Pour tout $b'\in B$, on a
\[\big| |P(\rho_{b'}(\ensuremath{{\boldsymbol{r}}}^{1/\ensuremath{\varepsilon}(b')}))| - |P(\rho_{b'}(\ensuremath{{\boldsymbol{r}}}^{1/\ensuremath{\varepsilon}(b)}))| \big|
= \big| |P(\gamma_{\ensuremath{\mathbf{R}}}(\ensuremath{{\boldsymbol{r}}}^{1/\ensuremath{\varepsilon}(b)}))(b')| - |P(\gamma_{\ensuremath{\mathbf{R}}}(\ensuremath{{\boldsymbol{r}}}^{1/\ensuremath{\varepsilon}(b)}))(b)| \big|\]
et cette quantit\'e tend donc vers~0 quand~$b'$ tend vers~$b$, d'apr\`es le lemme~\ref{lem:evaluationcontinueaffine}.
\'Etudions maintenant le premier terme de la somme. Il existe un voisinage~$V$ de~$b$ dans~$B_{\mathrm{arc}}$ et une constante $M\in \ensuremath{\mathbf{R}}$ tels que
\[ \forall \ensuremath{{\boldsymbol{i}}} \ge 0, \forall b' \in V,\ |a_{\ensuremath{{\boldsymbol{i}}}}(b')| \le M.\]
Soit $\alpha \in \ensuremath{\mathbf{R}}_{>0}$. D'apr\`es le lemme~\ref{lem:epscontinue}, la fonction~$\ensuremath{\varepsilon}$ est continue. Quitte \`a restreindre~$V$, on peut donc supposer que
\[ \forall b' \in V,\ \forall \ensuremath{{\boldsymbol{i}}} \textrm{ tel que } a_{\ensuremath{{\boldsymbol{i}}}} \ne 0,\ |\ensuremath{{\boldsymbol{r}}}^{\ensuremath{{\boldsymbol{i}}}/\ensuremath{\varepsilon}(b')} - \ensuremath{{\boldsymbol{r}}}^{\ensuremath{{\boldsymbol{i}}}/\ensuremath{\varepsilon}(b)}| \le \frac\alpha M.\]
On a alors
\begin{align*}
\big| |P(\rho_{b'}(\ensuremath{{\boldsymbol{r}}}^{1/\ensuremath{\varepsilon}(b')}))| - |P(\rho_{b'}(\ensuremath{{\boldsymbol{r}}}^{1/\ensuremath{\varepsilon}(b)}))| \big|
&\le \big| \sum_{\ensuremath{{\boldsymbol{i}}} \ge 0} a_{\ensuremath{{\boldsymbol{i}}}}(b')\, (\ensuremath{{\boldsymbol{r}}}^{\ensuremath{{\boldsymbol{i}}}/\ensuremath{\varepsilon}(b')} - \ensuremath{{\boldsymbol{r}}}^{\ensuremath{{\boldsymbol{i}}}/\ensuremath{\varepsilon}(b)})\big|\\
& \le d M \frac\alpha M \le d\alpha,
\end{align*}
o\`u~$d$ est le nombre de coefficients non nuls de~$P$.
On en d\'eduit que $|P(\sigma_{\ensuremath{{\boldsymbol{r}}}}(\wc))|$ est continue en~$b$, puis que~$\sigma_{\ensuremath{{\boldsymbol{r}}}}$ est continue sur~$B_{\mathrm{arc}}$.
\medbreak
D\'emontrons maintenant la derni\`ere partie du r\'esultat. L'hypoth\`ese de libert\'e entra\^ine que l'on a
\[ \{ x \in \pi^{-1}(b_{0}) : \forall i \in \cn{1}{n},\ |T_{i}(x)| = r_{i} \} = \{\sigma_{\ensuremath{{\boldsymbol{r}}}}(b_{0})\}.\]
Soit~$U$ un voisinage de~$\sigma_{\ensuremath{{\boldsymbol{r}}}}(b_{0})$. D'apr\`es l'\'egalit\'e pr\'ec\'edente (\cf~\cite[lemme~2.4.1]{A1Z} pour les d\'etails), il existe un voisinage~$V$ de~$b_{0}$ dans~$B$ et $s_{1},t_{1},\dotsc,s_{n},t_{n} \in \ensuremath{\mathbf{R}}_{\ge0}$ avec, pour tout $i\in \cn{1}{n}$, $s_{i} \prec r_{i} < t_{i}$, tels que
\[ \{ x \in \pi^{-1}(V) : \forall i \in \cn{1}{n},\ s_{i} \prec |T_{i}(x)| < t_{i} \} \subset U.\]
Or, par d\'efinition, pour tout $b\in B$ et tout $i\in \cn{1}{n}$, on a $|T_{i}(\sigma_{\ensuremath{{\boldsymbol{r}}}}(b))| = r_{i}$.
Par cons\'equent, $\sigma_{\ensuremath{{\boldsymbol{r}}}}^{-1}(U)$ contient~$V$. On en d\'eduit que~$\sigma_{\ensuremath{{\boldsymbol{r}}}}$ est continue en~$b_{0}$.
\end{proof}
Introduisons une d\'efinition qui nous permettra de tirer profit du lemme~\ref{lem:section}.
\begin{defi}\label{def:peumixte}\index{Chemin!pur|textbf}\index{Partie!peu mixte|textbf}
Soit $\varphi \colon [0,1] \to B$ une application continue. Le chemin correspondant \`a~$\varphi$ est dit \emph{pur} si $\varphi(\intoo{0,1})$ est enti\`erement contenu soit dans~$U_{\mathrm{um}}$ soit dans~$U_{\mathrm{arc}}$.
Une partie~$U$ de $B=\mathcal{M}(\mathcal{A})$ est dite \emph{peu mixte} si
\begin{enumerate}[i)]
\item pour tout $b \in \partial U_{\mathrm{um}} = \partial U_{\mathrm{arc}}$, le $\ensuremath{\mathbf{Q}}$-espace vectoriel $|\mathcal{H}(b)^\times|^\ensuremath{\mathbf{Q}}$ est de codimension infinie dans~$\ensuremath{\mathbf{R}}_{>0}$~;
\item pour tous $x,y$ dans la m\^eme composante connexe par arcs de~$U$, il existe un chemin reliant~$x$ \`a~$y$ qui est concat\'enation d'un nombre fini de chemins purs.
\end{enumerate}
\end{defi}
\begin{rema}\label{rem:peumixte}
Le point~ii) de la d\'efinition de partie peu mixte est satisfait d\`es que $\partial U_{\mathrm{um}} = \partial U_{\mathrm{arc}}$ est discret.
\end{rema}
\begin{exem}\label{ex:peumixte}\index{Corps!valu\'e}\index{Anneau!des entiers relatifs $\ensuremath{\mathbf{Z}}$}\index{Anneau!des entiers d'un corps de nombres}\index{Corps!hybride}\index{Anneau!de valuation discr\`ete}\index{Anneau!de Dedekind trivialement valu\'e}
Si l'anneau de Banach~$\mathcal{A}$ est l'un de nos exemples habituels (corps valu\'e, $\ensuremath{\mathbf{Z}}$ ou anneau d'entiers de corps de nombres, corps hybride, anneau de valuation discr\`ete, anneau de Dedekind trivialement valu\'e, \cf~exemples~\ref{ex:corpsvalue} \`a~\ref{ex:Dedekind}), alors toute partie de~$\mathcal{M}(\mathcal{A})$ est peu mixte.
\end{exem}
Passons maintenant aux r\'esultats de connexit\'e. Nous allons, dans un premier temps, montrer que les polycouronnes relatives dont la base est connexe par arcs sont connexes par arcs. Nous utiliserons ensuite ce r\'esultat pour montrer que ces polycouronnes sont m\^eme \'elastiques. Finalement, nous en d\'eduirons que tout point d'un espace affine analytique admet une base de voisinages connexes par arcs.
Introduisons une notation pour les polycouronnes relatives.
\begin{nota}%
\nomenclature[Iva]{$D_{U}(\ensuremath{{\boldsymbol{t}}})$}{polydisque ouvert relatif au-dessus de~$U$}%
\nomenclature[Ivb]{$\overline{D}_{U}(\ensuremath{{\boldsymbol{t}}})$}{polydisque ferm\'e relatif au-dessus de~$U$}%
\nomenclature[Ivc]{$C_{U}(\ensuremath{{\boldsymbol{s}}},\ensuremath{{\boldsymbol{t}}})$}{polycouronne ouverte relative au-dessus de~$U$}%
\nomenclature[Ivd]{$\overline{C}_{U}(\ensuremath{{\boldsymbol{s}}},\ensuremath{{\boldsymbol{t}}})$}{polycouronne ferm\'ee relative au-dessus de~$U$}%
Soit~$U$ une partie de~$B$. Soient $\ensuremath{{\boldsymbol{s}}}=(s_1,\ldots,s_n), \ensuremath{{\boldsymbol{t}}}=(t_1,\ldots,t_n) \in \ensuremath{\mathbf{R}}^n$. On pose
\begin{align*}
D_{U}(\ensuremath{{\boldsymbol{t}}}) &:= \{x\in X : \pi(x)\in U,\ \forall i\in\cn{1}{n}, |T_{i}(x)| < t_{i}\}~;\\
\overline{D}_{U}(\ensuremath{{\boldsymbol{t}}}) &:= \{x\in X : \pi(x)\in U,\ \forall i\in\cn{1}{n}, |T_{i}(x)| \le t_{i}\}~;\\
C_{U}(\ensuremath{{\boldsymbol{s}}},\ensuremath{{\boldsymbol{t}}}) &:= \{x\in X : \pi(x)\in U,\ \forall i\in\cn{1}{n}, s_{i} < |T_{i}(x)| < t_{i}\}~;\\
\overline{C}_{U}(\ensuremath{{\boldsymbol{s}}},\ensuremath{{\boldsymbol{t}}}) &:= \{x\in X : \pi(x)\in U,\ \forall i\in\cn{1}{n}, s_{i} \le |T_{i}(x)| \le t_{i}\}.
\end{align*}
\end{nota}
\begin{lemm}\label{lem:cpasurMk}
Soit $(k,\va)$ un corps valu\'e ultram\'etrique complet. Soient $\ensuremath{{\boldsymbol{s}}}=(s_1,\ldots,s_n), \ensuremath{{\boldsymbol{t}}}=(t_1,\ldots,t_n) \in \ensuremath{\mathbf{R}}^n$.
Alors $C_{\mathcal{M}(k)}(\ensuremath{{\boldsymbol{s}}},\ensuremath{{\boldsymbol{t}}})$ est connexe par arcs.
\end{lemm}
\begin{proof}
Nous allons distinguer trois cas, selon le type du corps~$k$ consid\'er\'e.
\smallbreak
$\bullet$ Supposons que~$k=\ensuremath{\mathbf{C}}$.
Alors, d'apr\`es le lemme~\ref{lem:AnC}, $\E{n}{k}$ est isomorphe \`a~$\ensuremath{\mathbf{C}}^n$. Par cons\'equent, $C_{\mathcal{M}(k)}(\ensuremath{{\boldsymbol{s}}},\ensuremath{{\boldsymbol{t}}})$ est une polycouronne complexe. Elle est donc connexe par arcs.
\smallbreak
$\bullet$ Supposons que~$k=\ensuremath{\mathbf{R}}$.
Alors, d'apr\`es le lemme~\ref{lem:AnC}, $\E{n}{k}$ est hom\'eomorphe \`a~$\ensuremath{\mathbf{C}}^n/\Gal(\ensuremath{\mathbf{C}}/\ensuremath{\mathbf{R}})$. Par cons\'equent, $C_{\mathcal{M}(k)}(\ensuremath{{\boldsymbol{s}}},\ensuremath{{\boldsymbol{t}}})$ est l'image d'une polycouronne complexe par une application continue. Elle est donc \'egalement connexe par arcs.
\smallbreak
$\bullet$ Supposons que~$k$ est ultram\'etrique.
On renvoie \`a~\cite[corollary~3.2.3]{Ber1}. Dans cette r\'ef\'erence, V.~Berkovich d\'emontre la connexit\'e par arcs du polydisque unit\'e, mais le m\^eme raisonnement fonctionne pour une polycouronne ferm\'ee quelconque. Le cas d'une polycouronne ouverte s'en d\'eduit en l'\'ecrivant comme union croissante de polycouronnes ferm\'ees.
\end{proof}
\begin{prop}\label{connexe}\index{Disque!connexe par arcs}\index{Couronne!connexe par arcs}\index{Espace affine analytique!connexe par arcs}
Soit~$U$ une partie connexe par arcs de~$B$. Supposons que~$U$ est peu mixte. Soient $\ensuremath{{\boldsymbol{s}}}=(s_1,\ldots,s_n), \ensuremath{{\boldsymbol{t}}}=(t_1,\ldots,t_n) \in \ensuremath{\mathbf{R}}^n$.
Alors $C_U(\ensuremath{{\boldsymbol{s}}},\ensuremath{{\boldsymbol{t}}})$ est connexe par arcs.
En particulier, $\pi^{-1}(U)$ est connexe par arcs.
\end{prop}
\begin{proof}
On peut supposer que, pour tout $i\in \cn{1}{n}$, on a $s_{i} < t_{i}$ et $t_{i}>0$. En effet, si tel n'est pas le cas, $C_{U}(\ensuremath{{\boldsymbol{s}}},\ensuremath{{\boldsymbol{t}}})$ est vide et le r\'esultat est imm\'ediat.
Soient $x,y \in C_U(\ensuremath{{\boldsymbol{s}}},\ensuremath{{\boldsymbol{t}}})$. Puisque~$U$ est connexe par arcs, il existe un chemin dans~$U$ reliant~$\pi(x)$ \`a~$\pi(y)$. Puisque~$U$ est peu mixte, on peut trouver un tel chemin~$\ell$ qui s'\'ecrive comme concat\'enation d'un nombre fini de chemins purs $\ell_{1},\dotsc,\ell_{m}$.
Soit $j \in \cn{1}{m}$. Soit $\varphi_{j} \colon [0,1] \to U$ l'application continue correspondant au chemin~$\ell_{j}$. Posons $a_{j} := \varphi_{j}(0)$ et $b_{j} := \varphi_{j}(1)$. Montrons que la fibre $\pi^{-1}(a_{j})$ peut \^etre reli\'ee par un chemin \`a toute fibre au-dessus d'un point de $\varphi_{j}(\intoo{0,1})$. Distinguons trois cas.
$\bullet$ Supposons que $\varphi_{j}(\intoo{0,1}) \subset U_{\mathrm{um}}$.
Alors $a_{j} \in U_{\mathrm{um}}$. Soit $\ensuremath{{\boldsymbol{r}}} = (r_{1},\dotsc,r_{n}) \in \ensuremath{\mathbf{R}}_{>0}^n$ tel que, pour tout $i\in\cn{1}{n}$, on ait $s_{i} < r_{i} < t_{i}$. D'apr\`es le lemme~\ref{lem:section}, la restriction de la section~$\sigma_{\ensuremath{{\boldsymbol{r}}}}$ \`a~$\varphi_{j}(\intfo{0,1})$ est continue. Le r\'esultat s'ensuit.
$\bullet$ Supposons que $\varphi_{j}(\intoo{0,1}) \subset U_{\mathrm{arc}}$ et que $a_{j} \in U_{\mathrm{arc}}$.
On conclut par le m\^eme raisonnement que dans le cas pr\'ec\'edent.
$\bullet$ Supposons que $\varphi_{j}(\intoo{0,1}) \subset U_{\mathrm{arc}}$ et que $a_{j} \in U_{\mathrm{um}}$.
Alors $a_{j} \in \partial U_{\mathrm{arc}}$. Par hypoth\`ese, il existe $\ensuremath{{\boldsymbol{r}}} = (r_{1},\dotsc,r_{n}) \in \ensuremath{\mathbf{R}}_{>0}^n$ tel que, pour tout $i\in\cn{1}{n}$, on ait $s_{i} < r_{i} < t_{i}$, et $\ensuremath{{\boldsymbol{r}}}$ soit une famille libre de $\ensuremath{\mathbf{R}}_{>0}/|\mathcal{H}(a_{j})^\times|^\ensuremath{\mathbf{Q}}$. D'apr\`es le lemme~\ref{lem:section}, la restriction de la section~$\sigma_{\ensuremath{{\boldsymbol{r}}}}$ \`a~$\varphi_{j}(\intfo{0,1})$ est encore continue. Le r\'esultat s'ensuit.
On a montr\'e que la fibre $\pi^{-1}(a_{j})$ peut \^etre reli\'ee par un chemin \`a toute fibre au-dessus d'un point de $\varphi_{j}(\intoo{0,1})$. Le m\^eme raisonnement s'applique en rempla\c cant~$a_{j}$ par~$b_{j}$ et il en d\'ecoule que les fibres $\pi^{-1}(a_{j})$ et~$\pi^{-1}(b_{j})$ peuvent \^etre reli\'ees par un chemin. On en d\'eduit que les fibres $\pi^{-1}(\pi(x))$ et~$\pi^{-1}(\pi(y))$ peuvent \^etre reli\'ees par un chemin.
Pour d\'emontrer le r\'esultat, il suffit maintenant de d\'emontrer que les fibres de~$\pi$ en les points de~$U$ sont connexes par arcs. Or, pour tout $b\in U$, on a un hom\'eomorphisme $\pi^{-1}(b) \simeq \E{n}{\mathcal{H}(b)}$, et le r\'esultat d\'ecoule donc du lemme~\ref{lem:cpasurMk}.
D\'emontrons maintenant la derni\`ere partie de l'\'enonc\'e. Posons $\ensuremath{{\boldsymbol{s}}} := (-1,\dotsc,-1)$. On obtient le r\'esultat en \'ecrivant $\pi^{-1}(U)$ comme union croissante de polydisques ouverts $D_{U}(\ensuremath{{\boldsymbol{t}}}) = C_{U}(\ensuremath{{\boldsymbol{s}}},\ensuremath{{\boldsymbol{t}}})$ et en utilisant la premi\`ere partie.
\end{proof}
Nous allons maintenant d\'eduire du r\'esultat pr\'ec\'edent la connexit\'e locale de~$\E{n}{\mathcal{A}}$.
\begin{prop}\label{connexit\'e_fini_ouvert}\index{Voisinage}\index{Morphisme analytique!fini!et plat}\index{Couronne}
Supposons que~$B$ est localement connexe par arcs. Soit $x \in \E{n}{\mathcal{A}}$. Le point~$x$ admet une base de voisinages ouverts~$\mathcal{V}$ telle que, pour tout~$V\in\mathcal{V}$, il existe un ouvert connexe par arcs~$U$ de~$B$, $\ensuremath{{\boldsymbol{s}}}=(s_1,\dotsc,s_n), \ensuremath{{\boldsymbol{t}}}=(t_1,\ldots,t_n) \in \ensuremath{\mathbf{R}}^n$ et un morphisme fini et plat $\varphi \colon V\to C_U(\ensuremath{{\boldsymbol{s}}},\ensuremath{{\boldsymbol{t}}})$.
\end{prop}
\begin{proof}
Nous allons d\'emontrer l'\'enonc\'e par r\'ecurrence sur~$n$. Si $n=0$, le r\'esultat d\'ecoule directement de la connexit\'e par arcs locale de~$B$.
Soit $n\ge 1$ et supposons avoir d\'emontr\'e le r\'esultat pour~$n-1$. Soit $x\in \E{n}{\mathcal{A}}$. Notons $\pi_{n-1} \colon \E{n}{\mathcal{A}} \to \E{n-1}{\mathcal{A}}$ la projection sur les $n-1$ premi\`eres coordonn\'ees et posons $x_{n-1} := \pi_{n-1}(x)$. Soit~$\mathcal{V}_{n-1}$ une base de voisinages ouverts de~$x_{n-1}$ dans~$\E{n-1}{\mathcal{A}}$ satisfaisant les conditions de l'\'enonc\'e. Notons~$T$ la derni\`ere coordonn\'ee de~$\E{n}{\mathcal{A}}$.
Soit~$W$ un voisinage de~$x$. D'apr\`es les propositions~\ref{prop:basevoisdim1rigide} et~\ref{prop:basevoisdim1}, il existe $V \in \mathcal{V}_{n-1}$, $s,t \in \ensuremath{\mathbf{R}}$ et $P \in \mathcal{O}_{\E{n-1}{\mathcal{A}}}(V)[T]$ tels que $C_{V}(P,s,t) \subset W$. Puisque $V \in \mathcal{V}_{n-1}$, il existe un ouvert connexe par arcs~$U$ de~$B$, $\ensuremath{{\boldsymbol{s}}}'=(s_1,\dotsc,s_{n-1}), \ensuremath{{\boldsymbol{t}}}'=(t_1,\ldots,t_{n-1}) \in \ensuremath{\mathbf{R}}^{n-1}$ et un morphisme fini et plat $\varphi_{n-1} \colon V\to C_U(\ensuremath{{\boldsymbol{s}}}',\ensuremath{{\boldsymbol{t}}}')$. D'apr\`es le lemme~\ref{lem:produitouverts} et la proposition~\ref{prop:produitaffines}, $\pi_{n-1}^{-1}(V)$ et $\pi_{n-1}^{-1}(C_U(\ensuremath{{\boldsymbol{s}}}',\ensuremath{{\boldsymbol{t}}}'))$ s'identifient respectivement aux produits $V\times_{\mathcal{A}}\ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}}$ et $C_U(\ensuremath{{\boldsymbol{s}}}',\ensuremath{{\boldsymbol{t}}}')\times_{\mathcal{A}} \ensuremath{\mathbf{A}^{1,\mathrm{an}}_{\mathcal{A}}}$. La proposition~\ref{changement_base_plat} assure donc que le morphisme
\[\varphi_{n} \colon \pi_{n-1}^{-1}(V) \to \pi_{n-1}^{-1}(C_U(\ensuremath{{\boldsymbol{s}}}',\ensuremath{{\boldsymbol{t}}}'))\]
d\'eduit de~$\varphi_{n-1}$ par changement de base est fini et plat. Posons $\ensuremath{{\boldsymbol{s}}} := (\ensuremath{{\boldsymbol{s}}}',s)$ et $\ensuremath{{\boldsymbol{t}}}:=(\ensuremath{{\boldsymbol{t}}}',t)$. Les propri\'et\'es de finitude et de platitude \'etant locales au but, le morphisme $\varphi'_{n} \colon C_V(s,t)\to C_U(\ensuremath{{\boldsymbol{s}}},\ensuremath{{\boldsymbol{t}}})$ induit par~$\varphi_{n}$ est encore fini et plat.
D'apr\`es la proposition~\ref{plat}, le morphisme $\varphi_{P} \colon \pi_{n-1}^{-1}(V) \to \pi_{n-1}^{-1}(V)$ induit par~$P$ est fini et plat. Sa restriction $\varphi'_{P} \colon C_V(P,s,t)\to C_V(s,t)$ l'est donc \'egalement. Ceci montre que le morphisme $\varphi := \varphi'_{n} \circ \varphi'_{P}$ satisfait les conditions de l'\'enonc\'e.
\end{proof}
En combinant les propositions~\ref{connexit\'e_fini_ouvert} et~\ref{connexe}, le corollaire~\ref{plat} et la proposition~\ref{prop:top}, on obtient le r\'esultat suivant.
\begin{coro}
Supposons que~$B$ est localement connexe par arcs et peu mixte. Alors, l'espace $\E{n}{\mathcal{A}}$ est localement connexe.
\qed
\end{coro}
Nous allons maintenant montrer que l'ensemble~$C_U(\ensuremath{{\boldsymbol{s}}},\ensuremath{{\boldsymbol{t}}})$ est \'elastique.
\begin{prop}\label{elast_partiel1}
Supposons que~$B_{\mathrm{um}}$ est localement connexe par arcs. Soit~$U$ un ouvert connexe de~$B_{\mathrm{um}}$. Soient $\ensuremath{{\boldsymbol{s}}}=(s_1,\ldots,s_n), \ensuremath{{\boldsymbol{t}}}=(t_1,\ldots,t_n) \in \ensuremath{\mathbf{R}}^n$. Alors, la polycouronne~$C_U(\ensuremath{{\boldsymbol{s}}},\ensuremath{{\boldsymbol{t}}})$ est \'elastique.
\end{prop}
\begin{proof}
Montrons, par r\'ecurrence sur~$n$, que, pour tout ouvert connexe~$U$ de~$B_{\mathrm{um}}$, et tous $\ensuremath{{\boldsymbol{s}}}=(s_1,\ldots,s_n), \ensuremath{{\boldsymbol{t}}}=(t_1,\ldots,t_n) \in \ensuremath{\mathbf{R}}^n$, la polycouronne $C_U(\ensuremath{{\boldsymbol{s}}},\ensuremath{{\boldsymbol{t}}})$ est \'elastique.
Pour $n=0$, le r\'esultat est vrai car tout ouvert connexe de~$B_{\mathrm{um}}$ est connexe par arcs et localement connexe par arcs, donc \'elastique, d'apr\`es la remarque~\ref{rem:definitionelastique}.
Soit $n\ge 1$ et supposons que le r\'esultat est vrai pour~$n-1$. Soit~$U$ un ouvert connexe de~$B_{\mathrm{um}}$. Soient $\ensuremath{{\boldsymbol{s}}}=(s_1,\ldots,s_n), \ensuremath{{\boldsymbol{t}}}=(t_1,\ldots,t_n) \in \ensuremath{\mathbf{R}}^n$. Posons $\ensuremath{{\boldsymbol{s}}}' := (s_1,\ldots,s_{n-1}), \ensuremath{{\boldsymbol{t}}}' := (t_1,\ldots,t_{n-1})$ et $U' := C_{U}(\ensuremath{{\boldsymbol{s}}}',\ensuremath{{\boldsymbol{t}}}')$. On peut supposer que, pour tout $i\in \cn{1}{n}$, on a $s_{i} < t_{i}$ et $t_{i}>0$. D'apr\`es le lemme~\ref{lem:crit_elast'}, il suffit de montrer que, pour tous $s'_{n},t'_{n} \in \ensuremath{\mathbf{R}}$ avec $s_{n} < s'_{n} < t'_{n} < t_{n}$ et $t'_{n}>0$, l'espace $\overline{D}_{U'}(s'_{n},t'_{n})$ est \'elastique.
Soient $s'_{n},t'_{n} \in \ensuremath{\mathbf{R}}$ avec $s_{n} < s'_{n} < t'_{n} < t_{n}$ et $t'_{n}>0$. Soit $x \in \overline{D}_{U'}(s'_{n},t'_{n})$. Notons~$y$ sa projection sur~$U'$.
Montrons qu'il existe un chemin reliant~$x$ \`a~$\eta_{y,t'_{n}}$ dans $\overline{D}_{U'}(s'_{n},t'_{n})$ qui est \'elastique en~$x$. Cela suffira pour conclure, d'apr\`es la remarque~\ref{rem:definitionelastique}.
Fixons une coordonn\'ee~$T$ sur $\overline{D}_{U'}(s'_{n},t'_{n})$. Elle induit une coordonn\'ee sur $\overline{D}_{y}(s'_{n},t'_{n})$ que l'on note identiquement. D\'efinissons un ordre partiel~$\le$ sur $\overline{D}_{y}(s'_{n},t'_{n})$ de la fa\c con suivante~: pour tous $z_{1},z_{2} \in \overline{D}_{y}(s'_{n},t'_{n})$, on a $z_{1} \le z_{2}$ si
\[\forall P \in \mathcal{H}(y)[T],\ |P(z_{1})| \le |P(z_{2})|.\]
Puisque~$\mathcal{O}_{U',y}$ est dense dans~$\mathcal{H}(y)$, on peut remplacer~$\mathcal{H}(y)$ par~$\mathcal{O}_{U',y}$ dans cette d\'efinition. Le point~$\eta_{y,t'_{n}}$ est le plus grand point de $\overline{D}_{y}(s'_{n},t'_{n})$ pour l'ordre partiel~$\le$. En outre, l'ensemble
\[ \{z \in \overline{D}_{y}(s'_{n},t'_{n}) : z \ge x\}\]
est hom\'eomorphe \`a un intervalle ferm\'e dont les extr\'emit\'es s'envoient sur~$x$ et~$\eta_{y,t'_{n}}$. Identifions-le \`a un chemin~$\ell$ reliant~$\eta_{y,t'_{n}}$ \`a~$x$ et montrons qu'il est \'elastique en~$x$.
Soit~$U''$ un voisinage compact de~$y$ dans~$U'$. Pour tout $\ensuremath{\varepsilon} \in \ensuremath{\mathbf{R}}_{>0}$, tout voisinage~$W$ de~$y$ dans~$U''$ et tout $P\in\mathcal{O}(W)[T]$, posons
\[L_{\ensuremath{\varepsilon},W,P}:=\{z\in \overline{D}_{W}(s'_n,t'_n) : |P(z)| \ge |P(x)|-\ensuremath{\varepsilon}\}.\]
C'est un voisinage de~$\ell$ dans $\overline{D}_{U''}(s'_n,t'_n)$, compact si~$W$ est compact. Remarquons que l'on a
\[ \ell = \bigcap_{\ensuremath{\varepsilon}>0} \bigcap_{W \ni y}\bigcap_{P\in\mathcal{O}(W)[T]} L_{\ensuremath{\varepsilon},W,P},\]
o\`u $W$ d\'ecrit l'ensemble des voisinages compacts de~$y$ dans~$U'$.
Soit~$V$ un voisinage ouvert de~$\ell$ dans~$\overline{D}_{U''}(s'_n,t'_n)$. Alors $\overline{D}_{U''}(s'_n,t'_n) \setminus V$ est compact et on en d\'eduit qu'il existe un ensemble fini~$I$ et, pour chaque $i\in I$, un nombre r\'eel~$\ensuremath{\varepsilon}_{i} >0$, un voisinage compact~$W_{i}$ de~$y$ dans~$U''$ et un polyn\^ome $P_{i} \in \mathcal{O}(W_{i})[T]$ tels que
\[ \bigcap_{i\in I} L_{\ensuremath{\varepsilon}_{i},W_{i},P_{i}} \subset V.\]
En utilisant la proposition \ref{connexit\'e_fini_ouvert}, le corollaire~\ref{plat}, l'hypoth\`ese de r\'ecurrence et la proposition~\ref{chemin}, on montre que $\bigcap_{i\in I} W_{i}$ contient un voisinage connexe par arcs~$W_{0}$ de~$y$. On a encore
\[L := \bigcap_{i\in I} L_{\ensuremath{\varepsilon}_{i},W_{0},P_{i}} \subset V.\]
Montrons que tout point~$x'$ de~$L$ peut \^etre reli\'e \`a~$\eta_{y,t'_{n}}$ par un chemin contenu dans~$V$. Soit $x'\in L$. Notons~$y'$ sa projection sur~$U'$. L'ensemble
\[\ell_{x'} := \{z \in \overline{D}_{y'}(s'_{n},t'_{n}) : z \ge x'\}\]
s'identifie \`a un chemin reliant~$x'$ \`a~$\eta_{y',t'_{n}}$ et ce chemin est contenu dans~$L$. Puisque~$W_{0}$ est connexe par arcs, il existe un chemin~$\ell_{y'}$ dans~$W_{0}$ reliant~$y'$ \`a~$y$. Son image $\sigma_{t'_{n}}(\ell_{y'})$ est un chemin reliant~$\eta_{y',t'_{n}}$ \`a~$\eta_{y,t'_{n}}$ et contenu dans~$L$. Le r\'esultat s'ensuit.
\end{proof}
\begin{prop}\label{elast_partiel2}
Supposons que~$B_{\mathrm{arc}}$ est localement connexe par arcs. Soit~$U$ un ouvert connexe de~$B_{\mathrm{arc}}$. Soient $\ensuremath{{\boldsymbol{s}}}=(s_1,\ldots,s_n), \ensuremath{{\boldsymbol{t}}}=(t_1,\ldots,t_n) \in \ensuremath{\mathbf{R}}^n$. Alors, la polycouronne~$C_U(\ensuremath{{\boldsymbol{s}}},\ensuremath{{\boldsymbol{t}}})$ est \'elastique.
\end{prop}
\begin{proof}
Nous suivrons une strat\'egie similaire \`a celle de la preuve de la proposition~\ref{elast_partiel1}. Montrons, par r\'ecurrence sur~$n$, que, pour tout ouvert connexe~$U$ de~$B_{\mathrm{arc}}$, et tous $\ensuremath{{\boldsymbol{s}}}=(s_1,\ldots,s_n), \ensuremath{{\boldsymbol{t}}}=(t_1,\ldots,t_n) \in \ensuremath{\mathbf{R}}^n$, la polycouronne $C_U(\ensuremath{{\boldsymbol{s}}},\ensuremath{{\boldsymbol{t}}})$ est \'elastique.
Pour $n=0$, le r\'esultat est vrai car tout ouvert connexe de~$B_{\mathrm{arc}}$ est connexe par arcs et localement connexe par arcs, donc \'elastique, d'apr\`es la remarque~\ref{rem:definitionelastique}.
Soit $n\ge 1$ et supposons que le r\'esultat est vrai pour~$n-1$. Soit~$U$ un ouvert connexe de~$B_{\mathrm{arc}}$. Soient $\ensuremath{{\boldsymbol{s}}}=(s_1,\ldots,s_n), \ensuremath{{\boldsymbol{t}}}=(t_1,\ldots,t_n) \in \ensuremath{\mathbf{R}}^n$. On peut supposer que, pour tout $i\in \cn{1}{n}$, on a $s_{i} < t_{i}$ et $t_{i}>0$. Posons $\ensuremath{{\boldsymbol{s}}}' := (s_1,\ldots,s_{n-1}), \ensuremath{{\boldsymbol{t}}}' := (t_1,\ldots,t_{n-1})$ et $U' := C_{U}(\ensuremath{{\boldsymbol{s}}}',\ensuremath{{\boldsymbol{t}}}')$. D'apr\`es le lemme~\ref{lem:crit_elast'}, il suffit de montrer que, pour tous $s'_{n},t'_{n} \in \ensuremath{\mathbf{R}}$ avec $s_{n} < s'_{n} < t'_{n} < t_{n}$ et $t'_{n}>0$, l'espace $\overline{D}_{U'}(s'_{n},t'_{n})$ est \'elastique.
Soient $s'_{n},t'_{n} \in \ensuremath{\mathbf{R}}$ avec $s_{n} < s'_{n} < t'_{n} < t_{n}$ et $t'_{n}>0$. Notons $\pi \colon \overline{D}_{U'}(s'_{n},t'_{n}) \to U'$ le morphisme de projection. Rappelons que, pour tout $r\in [s'_{n},t'_{n}] \cap \ensuremath{\mathbf{R}}_{\ge 0}$, nous en avons d\'efini une section continue~$\sigma_{r}$ au lemme~\ref{lem:section}.
Fixons une coordonn\'ee~$T$ sur $\overline{D}_{U'}(s'_{n},t'_{n})$. Soit $x \in \overline{D}_{U'}(s'_{n},t'_{n})$. Posons $y := \pi(x)$. L'ensemble
\[\{z \in \pi^{-1}(y) : |T(z)| = |T(x)|\} \]
est hom\'eomorphe \`a un cercle si $\mathcal{H}(y) = \ensuremath{\mathbf{C}}$ et \`a un demi-cercle si $\mathcal{H}(y) = \ensuremath{\mathbf{R}}$. C'est l'ensemble sous-jacent \`a un chemin~$\ell_{x}$ reliant~$x$ \`a~$\sigma_{|T(x)|}(y)$.
Soit~$U''$ un voisinage compact de~$y$ dans~$U'$. Pour tout $\ensuremath{\varepsilon} \in \ensuremath{\mathbf{R}}_{>0}$ et tout voisinage~$W$ de~$y$ dans~$U'$, posons
\[L_{\ensuremath{\varepsilon},W}:=\big\{z\in \overline{D}_{W}(s'_n,t'_n) : \big||T(z)| - |T(x)|\big| \le \ensuremath{\varepsilon}\big\}.\]
C'est un voisinage de~$\ell_{x}$ dans $\overline{D}_{U''}(s'_n,t'_n)$, compact si~$W$ est compact.
Soit~$V$ un voisinage ouvert de~$\ell_{x}$ dans~$\overline{D}_{U''}(s'_n,t'_n)$. Par le m\^eme raisonnement que dans la preuve de la proposition~\ref{elast_partiel1}, on montre qu'il existe un nombre r\'eel $\ensuremath{\varepsilon} >0$ et un voisinage connexe par arcs~$W$ de~$y$ dans~$U''$ tels que
\[ L_{\ensuremath{\varepsilon},W} \subset V.\]
Montrons que tout point~$x'$ de~$L_{\ensuremath{\varepsilon},W}$ peut \^etre reli\'e \`a~$\sigma_{|T(x)|}(y)$ par un chemin contenu dans~$V$. Soit $x'\in L_{\ensuremath{\varepsilon},W}$. Notons~$y'$ sa projection sur~$W$. Puisque l'ensemble
\[\{z \in \pi^{-1}(y') : |T(z)| = |T(x')|\} \]
est contenu dans~$L_{\ensuremath{\varepsilon},W}$, il existe un chemin reliant~$x'$ \`a~$\sigma_{|T(x')|}(y')$ contenu dans~$L_{\ensuremath{\varepsilon},W}$. On peut \'egalement relier $\sigma_{|T(x')|}(y')$ \`a $\sigma_{|T(x)|}(y')$ dans~$L_{\ensuremath{\varepsilon},W}$, puis $\sigma_{|T(x)|}(y')$ \`a $\sigma_{|T(x)|}(y)$, toujours dans~$L_{\ensuremath{\varepsilon},W}$, gr\^ace \`a la continuit\'e de la section~$\sigma_{|T(x)|}$. Ceci termine la d\'emonstration.
\end{proof}
\begin{prop}\label{elast}\index{Couronne!elastique@\'elastique}\index{Disque!elastique@\'elastique}
Supposons que~$B$ et~$B_{\mathrm{um}}$ sont localement connexes par arcs et que $B$~est peu mixte.
Soit~$U$ un ouvert connexe de~$B$. Soient $\ensuremath{{\boldsymbol{s}}}=(s_1,\ldots,s_n), \ensuremath{{\boldsymbol{t}}}=(t_1,\ldots,t_n) \in \ensuremath{\mathbf{R}}^n$. Alors la polycouronne $C_U(\ensuremath{{\boldsymbol{s}}},\ensuremath{{\boldsymbol{t}}})$ est \'elastique.
\end{prop}
\begin{proof}
D'apr\`es les propositions~\ref{elast_partiel1} et~\ref{elast_partiel2}, les polycouronnes $C_{U_{\mathrm{arc}}}(\ensuremath{{\boldsymbol{s}}},\ensuremath{{\boldsymbol{t}}})$ et $C_{U_{\mathrm{um}}}(\ensuremath{{\boldsymbol{s}}},\ensuremath{{\boldsymbol{t}}})$ sont \'elastiques. D'apr\`es le lemme~\ref{lem:crit_elast} appliqu\'e avec $W=C_{U_{\mathrm{arc}}}(\ensuremath{{\boldsymbol{s}}},\ensuremath{{\boldsymbol{t}}})$ et $F=C_{U_{\mathrm{um}}}(\ensuremath{{\boldsymbol{s}}},\ensuremath{{\boldsymbol{t}}})$, il suffit de montrer que tout point $x$ de $\partial C_{U_{\mathrm{um}}}(\ensuremath{{\boldsymbol{s}}},\ensuremath{{\boldsymbol{t}}})$ admet une base de voisinages~$\mathcal{V}_x$ dans~$C_U(\ensuremath{{\boldsymbol{s}}},\ensuremath{{\boldsymbol{t}}})$ telle que, pour tout $V \in \mathcal{V}_x$ et tout $y\in V\cap C_{U_\mathrm{arc}}(\ensuremath{{\boldsymbol{s}}},\ensuremath{{\boldsymbol{t}}})$, il existe un chemin dans~$V$ reliant~$y$ \`a un point de~$C_{U_{\mathrm{um}}}(\ensuremath{{\boldsymbol{s}}},\ensuremath{{\boldsymbol{t}}})$.
Soit $x \in \partial C_{U_{\mathrm{um}}}(\ensuremath{{\boldsymbol{s}}},\ensuremath{{\boldsymbol{t}}})$. D'apr\`es la proposition~\ref{connexit\'e_fini_ouvert}, $x$ admet une base de voisinages dans~$C_U(\ensuremath{{\boldsymbol{s}}},\ensuremath{{\boldsymbol{t}}})$ form\'ee d'ouverts~$V$ pour lesquels il existe un ouvert connexe~$U'$ de~$U$, $\ensuremath{{\boldsymbol{s}}}'=(s'_1,\dotsc,s'_n), \ensuremath{{\boldsymbol{t}}}'=(t'_1,\dotsc,t'_n) \in \ensuremath{\mathbf{R}}_{\ge0}^n$ avec $s'_{i}<t'_{i}$ pour tout $i\in\cn{1}{n}$ et un morphisme fini et plat $\varphi \colon V\to C_{U'}(\ensuremath{{\boldsymbol{s}}}',\ensuremath{{\boldsymbol{t}}}')$. D'apr\`es le corollaire~\ref{plat}, $\varphi$ est ouvert.
Montrons qu'un tel~$V$ satisfait la propri\'et\'e requise. D'apr\`es la proposition~\ref{connexe}, $C_{U'}(\ensuremath{{\boldsymbol{s}}}',\ensuremath{{\boldsymbol{t}}}')$ est connexe par arcs donc, d'apr\`es la proposition~\ref{prop:revetementcpa}, chaque composante connexe par arcs de~$V$ se surjecte sur $C_{U'}(\ensuremath{{\boldsymbol{s}}}',\ensuremath{{\boldsymbol{t}}}')$. En particulier, tout point de~$V$ peut \^etre reli\'e \`a un point de $V_{\mathrm{um}}$.
\end{proof}
\begin{theo}\label{arbre}\index{Espace affine analytique!localement connexe par arcs}
Supposons que~$B$ et~$B_{\mathrm{um}}$ sont localement connexes par arcs et que $B$~est peu mixte. Alors, l'espace analytique~$\E{n}{\mathcal{A}}$ est localement connexe par arcs.
\end{theo}
\begin{proof}
Soient~$x\in\E{n}{\mathcal{A}}$ et~$W$ un voisinage de~$x$ dans~$\E{n}{\mathcal{A}}$. D'apr\`es la proposition \ref{connexit\'e_fini_ouvert}, il existe un voisinage~$V$ de~$x$ dans~$W$ pour lequel il existe un ouvert connexe~$U$ de~$B$, $\ensuremath{{\boldsymbol{s}}}=(s_1,\dotsc,s_n), \ensuremath{{\boldsymbol{t}}}=(t_1,\dotsc,t_n) \in \ensuremath{\mathbf{R}}_{\ge0}^n$ avec $s_{i}<t_{i}$ pour tout $i\in\cn{1}{n}$ et un morphisme fini et plat $\varphi \colon V\to C_{U}(\ensuremath{{\boldsymbol{s}}},\ensuremath{{\boldsymbol{t}}})$. D'apr\`es le corollaire~\ref{plat}, $\varphi$ est ouvert. Il suit des propositions~\ref{connexe} et~\ref{prop:top} que la composante connexe~$V_{x}$ de~$V$ contenant~$x$ est ouverte et se surjecte sur $C_{U}(\ensuremath{{\boldsymbol{s}}},\ensuremath{{\boldsymbol{t}}})$ par~$\varphi$. D'apr\`es les propositions~\ref{elast} et~\ref{chemin}, $V_{x}$ est connexe par arcs. Le r\'esultat s'ensuit.
\end{proof}
Nous g\'en\'eralisons finalement le r\'esultat de connexit\'e par arcs locale \`a des espaces analytiques quelconques.
\begin{theo}\label{th:cpageneral}\index{Espace analytique!localement connexe par arcs}
Supposons que~$B$ et~$B_{\mathrm{um}}$ sont localement connexes par arcs et que $B$~est peu mixte. Alors tout espace $\mathcal{A}$-analytique est localement connexe par arcs.
\end{theo}
\begin{proof}
Soit $X$ un espace $\mathcal{A}$-analytique et soit $x\in X$. La proposition \ref{decomp} assure qu'il existe un voisinage ouvert~$V$ de~$x$ dans~$X$ et des ferm\'es analytiques $V_1,\dotsc,V_{s}$ de~$V$ contenant~$x$ et int\`egres en~$x$ tels que $V=\bigcup_{i=1}^s V_i$. Il suffit de montrer que, pour tout $i\in\cn{1}{s}$, $x$ admet une base de voisinages connexes par arcs dans chacun des~$V_{i}$. Il suffit donc de traiter le cas d'un espace int\`egre en~$x$. Par cons\'equent, on supposera d\'esormais que~$X$ est int\`egre en~$x$.
Soit~$U$ un voisinage ouvert de~$x$ dans~$X$. On peut supposer qu'il est s\'epar\'e. Le th\'eor\`eme \ref{proj} assure que, quitte \`a r\'etr\'ecir~$U$, on peut supposer qu'il existe un morphisme fini ouvert $\varphi \colon U \to V$, o\`u~$V$ est un ouvert de $\E{n}{\mathcal{A}}$ ou de~$\E{n}{\mathcal{H}(b)}$ pour un certain $b\in B$. D'apr\`es le th\'eor\`eme~\ref{arbre}, $\E{n}{\mathcal{A}}$ et~$\E{n}{\mathcal{H}(b)}$ sont localement connexes par arcs. Quitte \`a r\'etr\'ecir~$V$ (et~$U$ en cons\'equence), on peut donc supposer que~$V$ est connexe par arcs et localement connexe par arcs. La remarque~\ref{rem:definitionelastique} assure que~$V$ est \'egalement \'elastique.
D'apr\`es les propositions~\ref{prop:top} et~\ref{chemin}, $U$ poss\`ede un nombre fini de composantes connexes et celles-ci sont connexes par arcs. On en d\'eduit le r\'esultat souhait\'e.
\end{proof}
\section[Dimension topologique]{Dimension topologique}\label{sec:dimtop}
Dans cette section, nous calculons la dimension topologique des espaces affines et des disques sur des bases convenables.
Rappelons tout d'abord la d\'efinition de dimension de recouvrement. On rappelle qu'un espace topologique~$T$ est dit \emph{normal} si deux ferm\'es disjoints de~$T$ sont toujours contenus dans deux ouverts disjoints. Par exemple, un espace s\'epar\'e et paracompact est normal.
\index{Espace topologique!normal|textbf}
\index{Espace topologique!dimension d'un|(}\index{Dimension!topologique|see{Espace topologique}}
\begin{defi}[\protect{\cite[definitions~1.6.6, 1.6.7]{Eng1}}]\index{Espace topologique!dimension d'un|textbf}%
\nomenclature[D]{$\dimr$}{dimension de recouvrement d'un espace topologique}%
Soit~$T$ un espace topologique normal non vide.
\begin{enumerate}[i)]
\item Soit~$\mathcal{U}$ un ensemble non vide de parties de~$T$. On appelle \emph{ordre de~$\mathcal{U}$} la borne sup\'erieure de l'ensemble des entiers~$n$ tels qu'il existe $n+1$ \'el\'ements distincts de~$\mathcal{U}$ d'intersection non vide.
\item On appelle \emph{dimension de recouvrement de~$T$} la borne sup\'erieure de l'ensemble des entiers~$n$ tels que tout recouvrement ouvert fini de~$T$ admette un raffinement fini d'ordre inf\'erieur ou \'egal \`a~$n$. On la note $\dimr(T) \in \ensuremath{\mathbf{N}} \cup \{\infty\}$.
\end{enumerate}
\end{defi}
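\`A titre d'illustration, rappelons deux exemples classiques~: pour le pav\'e $[0,1]^n$ muni de sa topologie usuelle, le th\'eor\`eme de Lebesgue donne $\dimr([0,1]^n) = n$~; par ailleurs, tout espace discret non vide est de dimension de recouvrement nulle, puisque tout recouvrement ouvert fini y admet un raffinement par des singletons, qui est d'ordre~$0$.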
Dans la suite du texte, la dimension d'un espace sera toujours \`a prendre au sens de la dimension de recouvrement, m\^eme si nous omettrons parfois cette pr\'ecision.
\begin{rema}\index{Espace topologique!m\'etrisable}\index{Espace affine analytique!m\'etrisable}
La notion de dimension se comporte bien dans le cas des espaces m\'etrisables et s\'eparables (au sens o\`u ils poss\`edent un sous-ensemble d\'enombrable dense). Les espaces que nous consid\'erons ne jouissent malheureusement pas tous de ces propri\'et\'es. Par exemple, la droite analytique sur le corps~$\ensuremath{\mathbf{C}}$ muni de la valuation triviale n'est ni m\'etrisable, ni s\'eparable. Nous devrons donc manipuler la notion de dimension avec pr\'ecaution.
Signalons cependant que, si~$\mathcal{A}$ poss\`ede un sous-ensemble d\'enombrable dense (par exemple, si $\mathcal{A}$ est~$\ensuremath{\mathbf{Z}}$ ou un anneau d'entiers de corps de nombres), alors tout espace affine analytique sur~$\mathcal{A}$ est m\'etrisable et s\'eparable (\cf~\cite[th\'eor\`eme~3.5.1]{A1Z} et sa preuve).
\end{rema}
Rappelons quelques propri\'et\'es de la notion de dimension de recouvrement sur les espaces topologiques normaux.
\begin{theo}[\protect{\cite[theorem~3.1.3]{Eng1}}]\label{dimension_sous-espace}
Soient~$T$ un espace topologique normal et~$F$ une partie ferm\'ee de~$T$. Alors, on a
\[\dimr(F)\le \dimr(T).\]
\qed
\end{theo}
\begin{theo}[\protect{\cite[theorem~3.1.8]{Eng1}}]\label{dimension_union_d\'enombrable}
Soient~$T$ un espace topologique normal et~$(F_i)_{i\in I}$ une famille d\'enombrable de parties ferm\'ees de~$T$ telle que $T=\bigcup_{i\in I} F_i$.
Alors, on a
\[\dimr(T)\leq \sup_{i\in I} (\dimr(F_{i})).\]
\qed
\end{theo}
\begin{theo}[\protect{\cite[theorem~3.2.13]{Eng1}}]\label{th:produit}
Soient~$T$ et~$S$ des espaces topologiques compacts non vides. Alors, on a
\[\dimr(T\times S)\le \dimr(T) + \dimr(S).\]
\qed
\end{theo}
\index{Espace topologique!dimension d'un|)}
\index{Espace affine analytique!dimension d'un|(}\index{Disque!dimension d'un|(}
Rappelons le r\'esultat suivant pour les espaces analytiques sur un corps valu\'e complet.
\begin{theo}\label{th:dimensioncorps}
Soit $(k,\va)$ un corps valu\'e complet. Soient $n\in \ensuremath{\mathbf{N}}$ et $\ensuremath{{\boldsymbol{r}}} \in \ensuremath{\mathbf{R}}_{>0}^n$. Alors, on a
\begin{align*}
\dimr(\E{n}{k}) & = \dimr(\overline{D}_{k}(\ensuremath{{\boldsymbol{r}}})) = \dimr(D_{k}(\ensuremath{{\boldsymbol{r}}}))\\
& = \begin{cases}
2n & \textrm{ si } (k,\va) \textrm{ est archim\'edien~;}\\
n & \textrm{ si } (k,\va) \textrm{ est ultram\'etrique.}
\end{cases}
\end{align*}
En outre, chacun des espaces $\E{n}{k}$, $\overline{D}_{k}(\ensuremath{{\boldsymbol{r}}})$ et~$D_{k}(\ensuremath{{\boldsymbol{r}}})$ contient un pav\'e compact de m\^eme dimension.
\end{theo}
\begin{proof}
Si $(k,\va)$ est archim\'edien, alors, d'apr\`es le th\'eor\`eme~\ref{th:vaarchimedienne} et le lemme~\ref{lem:AnC}, $\E{n}{k}$ est hom\'eomorphe \`a~$\ensuremath{\mathbf{C}}^n$ ou $\ensuremath{\mathbf{C}}^n/\Gal(\ensuremath{\mathbf{C}}/\ensuremath{\mathbf{R}})$. Le r\'esultat s'en d\'eduit.
Supposons que $(k,\va)$ est ultram\'etrique. D'apr\`es~\cite[corollary~3.2.8]{Ber1}, on a $\dimr(\E{n}{k}) = n$.
D'apr\`es le th\'eor\`eme~\ref{dimension_sous-espace}, on a donc $\dimr(\overline{D}_{k}(\ensuremath{{\boldsymbol{r}}})) \le n$. Notons $\ensuremath{{\boldsymbol{r}}} = (r_{1},\dotsc,r_{n})$. Pour tout $(t_{1},\dotsc,t_{n}) \in \ensuremath{\mathbf{R}}_{>0}^n$, notons $\eta_{t_{1},\dotsc,t_{n}}$ l'unique point du bord de Shilov du disque $\overline{D}_{k}(t_{1},\dotsc,t_{n})$. Pour tout $i\in \cn{1}{n}$, soit $s_{i} \in \intoo{0,r_{i}}$. L'application
\[ \begin{array}{cccc}
\varphi \colon & \displaystyle\prod_{i=1}^n [s_{i},r_{i}] & \longrightarrow & \overline{D}_{k}(r_{1},\dotsc,r_{n})\\
& (t_{1},\dotsc,t_{n}) & \longmapsto & \eta_{t_{1},\dotsc,t_{n}}
\end{array}\]
r\'ealise un hom\'eomorphisme du pav\'e compact $\prod_{i=1}^n [s_{i},r_{i}]$ sur un ferm\'e de~$\overline{D}_{k}(r_{1},\dotsc,r_{n})$. D'apr\`es le th\'eor\`eme~\ref{dimension_sous-espace}, on a donc
\[ \dimr(\overline{D}_{k}(\ensuremath{{\boldsymbol{r}}})) \ge \dimr \Big(\prod_{i=1}^n [s_{i},r_{i}] \Big) = n.\]
Le r\'esultat s'ensuit.
Puisque le polydisque ouvert~$D_{k}(\ensuremath{{\boldsymbol{r}}})$ contient un polydisque ferm\'e et peut s'\'ecrire comme union d\'enombrable de polydisques ferm\'es, le r\'esultat pr\'ec\'edent, joint aux th\'eor\`emes~\ref{dimension_sous-espace} et~\ref{dimension_union_d\'enombrable}, entra\^ine que $\dimr(D_{k}(\ensuremath{{\boldsymbol{r}}})) = n$.
\end{proof}
En nous basant sur ce r\'esultat, nous allons pouvoir calculer la dimension de recouvrement d'espaces affines sur des bases plus g\'en\'erales. Nous utiliserons les notions de partie lin\'eaire et de partie d'Ostrowski introduites pr\'ec\'edemment (\cf~d\'efinitions~\ref{def:lineaire} et~\ref{def:ostrowski}).
\begin{theo}\label{th:dimensionetoile}\index{Partie!lineaire@lin\'eaire}\index{Partie!d'Ostrowski}
Soit~$V$ une partie lin\'eaire ou d'Ostrowski de~$B$. Soient $n\in \ensuremath{\mathbf{N}}$ et $\ensuremath{{\boldsymbol{r}}} \in \ensuremath{\mathbf{R}}_{>0}^n$. Notons $\pi \colon \E{n}{\mathcal{A}} \to B$ la projection canonique. Alors, on a
\begin{align*}
\dimr(\pi^{-1}(V)) & = \dimr(\overline{D}_{V}(\ensuremath{{\boldsymbol{r}}})) = \dimr(D_{V}(\ensuremath{{\boldsymbol{r}}}))\\
& = \begin{cases}
2n+1 & \textrm{ si } V \cap B^\mathrm{arc} \ne \emptyset~;\\
n+1 & \textrm{ si } V \subset B^\mathrm{um}.
\end{cases}
\end{align*}
En outre, chacun des espaces $\pi^{-1}(V)$, $\overline{D}_{V}(\ensuremath{{\boldsymbol{r}}})$ et~$D_{V}(\ensuremath{{\boldsymbol{r}}})$ contient un pav\'e compact de m\^eme dimension.
\end{theo}
\begin{proof}
Remarquons qu'il suffit de d\'emontrer le r\'esultat pour les polydisques ferm\'es. Ceci d\'ecoule des th\'eor\`emes~\ref{dimension_sous-espace} et~\ref{dimension_union_d\'enombrable}, puisque les espaces affines et les polydisques ouverts contiennent de tels polydisques ferm\'es et s'\'ecrivent comme unions d\'enombrables de tels polydisques.
Posons
\[ d =
\begin{cases}
2n & \textrm{ si } V \cap B^\mathrm{arc} \ne \emptyset~;\\
n & \textrm{ si } V \subset B^\mathrm{um}.
\end{cases}
\]
$\bullet$ Supposons que $V$ est lin\'eaire.
Par d\'efinition, il existe une partie~$V^{\partial}$ de~$V$, un point~$b$ de~$V$ et un intervalle $I \subset I_{b}$ non r\'eduit \`a un point satisfaisant les propri\'et\'es suivantes~:
\begin{enumerate}[i)]
\item $V^{\partial}$ est contenue dans le bord de~$V$ et $\sharp V^{\partial}\le 2$~;
\item l'application
\[\begin{array}{ccc}
I & \longrightarrow & V\setminus V^{\partial}\\
\ensuremath{\varepsilon} & \longmapsto & b^\ensuremath{\varepsilon}
\end{array}\]
est un hom\'eomorphisme.
\end{enumerate}
Supposons que $I$ est un segment et que $V^\partial = \emptyset$. Alors, d'apr\`es le lemme~\ref{lem:flot}, l'application
\[\fonction{\Phi}{\pi^{-1}(b) \times I}{\pi^{-1}(V)}{(x,\ensuremath{\varepsilon})}{x^\ensuremath{\varepsilon}}\]
est un hom\'eomorphisme.
Soit $\ensuremath{{\boldsymbol{s}}} = (s_{1},\dotsc,s_{n}) \in \ensuremath{\mathbf{R}}_{>0}^n$. On a alors
\[ \Phi( \overline{D}_{b}(\ensuremath{{\boldsymbol{s}}}) \times I) = \bigcup_{\ensuremath{\varepsilon} \in I} \overline{D}_{b^\ensuremath{\varepsilon}}(s_{1}^\ensuremath{\varepsilon},\dotsc,s_{n}^\ensuremath{\varepsilon}).\]
On peut choisir~$\ensuremath{{\boldsymbol{s}}}$ de fa\c con que $\Phi( \overline{D}_{b}(\ensuremath{{\boldsymbol{s}}}) \times I) \subset \overline{D}_{V}(\ensuremath{{\boldsymbol{r}}})$. D'apr\`es le th\'eor\`eme~\ref{th:dimensioncorps}, $\overline{D}_{b}(\ensuremath{{\boldsymbol{s}}})$ contient un pav\'e compact de dimension~$d$, donc $\overline{D}_{b}(\ensuremath{{\boldsymbol{s}}}) \times I$ contient un pav\'e compact de dimension~$d+1$. D'apr\`es le th\'eor\`eme~\ref{dimension_sous-espace}, on en d\'eduit que $\dimr(\overline{D}_{V}(\ensuremath{{\boldsymbol{r}}})) \ge d+1$.
On peut \'egalement choisir~$\ensuremath{{\boldsymbol{s}}}$ de fa\c con que $\overline{D}_{V}(\ensuremath{{\boldsymbol{r}}}) \subset \Phi( \overline{D}_{b}(\ensuremath{{\boldsymbol{s}}}) \times I)$. D'apr\`es les th\'eor\`emes~\ref{th:dimensioncorps}, \ref{th:produit} et~\ref{dimension_sous-espace}, on a alors $\dimr(\overline{D}_{V}(\ensuremath{{\boldsymbol{r}}})) \le d+1$. Le r\'esultat s'ensuit.
\medbreak
Traitons maintenant le cas g\'en\'eral. Posons $F := \bigsqcup_{c \in V^\partial} \overline{D}_{c}(\ensuremath{{\boldsymbol{r}}})$. D'apr\`es le th\'eor\`eme~\ref{th:dimensioncorps}, $F$ est soit vide, soit de dimension~$d$.
\'Ecrivons l'intervalle~$I$ comme une r\'eunion d\'enombrable $\bigcup_{n\in\ensuremath{\mathbf{N}}} I_{n}$, o\`u les~$I_{n}$ sont des segments non r\'eduits \`a un point. Pour tout $n\in \ensuremath{\mathbf{N}}$, posons $V_{n} := \{b^\ensuremath{\varepsilon} : \ensuremath{\varepsilon} \in I_{n}\}$, $\overline{D}_{n} := F \cup \overline{D}_{V_{n}}(\ensuremath{{\boldsymbol{r}}})$. D'apr\`es le cas pr\'ec\'edent, $\overline{D}_{n}$ est de dimension~$d+1$ et contient un pav\'e de dimension~$d+1$. On en d\'eduit que $\overline{D}_{V}(\ensuremath{{\boldsymbol{r}}})$ contient un pav\'e de dimension~$d+1$ et que $\dimr(\overline{D}_{V}(\ensuremath{{\boldsymbol{r}}})) \ge d+1$, d'apr\`es le th\'eor\`eme~\ref{dimension_sous-espace}.
Pour $n\in\ensuremath{\mathbf{N}}$, posons $\overline{D}'_{n} := F \sqcup \overline{D}_{n}$. On a $\dimr(\overline{D}'_{n}) = d+1$. Puisque, pour tout $n\in \ensuremath{\mathbf{N}}$, $\overline{D}'_{n}$ est ferm\'e dans~$\overline{D}_{V}(\ensuremath{{\boldsymbol{r}}})$ et que $\overline{D}_{V}(\ensuremath{{\boldsymbol{r}}}) = \bigcup_{n\in \ensuremath{\mathbf{N}}} \overline{D}'_{n}$, le th\'eor\`eme~\ref{dimension_union_d\'enombrable} assure que $\dimr(\overline{D}_{V}(\ensuremath{{\boldsymbol{r}}})) \le d+1$. Le r\'esultat s'ensuit.
\medbreak
$\bullet$ Supposons que $V$ est d'Ostrowski.
Par hypoth\`ese, il existe un point $a_{0}$ de~$V$, un ensemble d\'enombrable non vide~$\Sigma$ et une famille $(V_{\sigma})_{\sigma\in \Sigma}$ de parties disjointes de~$V\setminus \{a_{0}\}$ satisfaisant les propri\'et\'es suivantes~:
\begin{enumerate}[i)]
\item $V = \bigcup_{\sigma \in \Sigma} V_{\sigma} \cup \{a_{0}\}$~;
\item pour tout $\sigma\in \Sigma$, $V_{\sigma}$ et $V_{\sigma} \cup \{a_{0}\}$ sont des parties lin\'eaires de~$\mathcal{M}(\mathcal{A})$~;
\item l'ensemble des parties de la forme
\[\bigcup_{\sigma \in \Sigma'} V'_{\sigma} \cup \bigcup_{\sigma \in \Sigma\setminus\Sigma'} V_{\sigma},\]
o\`u $\Sigma'$ est un sous-ensemble fini de~$\Sigma$ et, pour tout $\sigma \in \Sigma'$, $V'_{\sigma}$ est un voisinage de~$a_{0}$ dans~$V_{\sigma} \cup \{a_{0}\}$, est une base de voisinages de~$a_{0}$ dans~$V$.
\end{enumerate}
Soit $\sigma\in \Sigma$. Si $V \cap B^\mathrm{arc} \ne \emptyset$, on peut supposer que $V_{\sigma} \cap B^\mathrm{arc} \ne \emptyset$. La premi\`ere partie de la preuve assure que~$\overline{D}_{V_{\sigma}}(\ensuremath{{\boldsymbol{r}}})$ contient un pav\'e compact de dimension~$d+1$. Il en va donc de m\^eme pour~$\overline{D}_{V}(\ensuremath{{\boldsymbol{r}}})$. En particulier, d'apr\`es le th\'eor\`eme~\ref{dimension_sous-espace}, on a $\dimr(\overline{D}_{V}(\ensuremath{{\boldsymbol{r}}})) \ge d+1$.
\medbreak
D\'emontrons l'in\'egalit\'e r\'eciproque. Nous supposerons que~$\Sigma$ est infini et l'identifierons \`a~$\ensuremath{\mathbf{N}}$. Le cas fini se traite de fa\c{c}on similaire.
Pour tout $n \in \ensuremath{\mathbf{N}}$, $V_{n}$ est lin\'eaire, donc il existe une partie~$V_{n}^{\partial}$ de~$V$, un point~$b_{n}$ de~$V$ et un intervalle $I_{n} \subset I_{b_{n}}$ non r\'eduit \`a un point satisfaisant les propri\'et\'es suivantes~:
\begin{enumerate}[i)]
\item $V_{n}^{\partial}$ est contenue dans le bord de~$V_{n}$ et $\sharp V_{n}^{\partial}\le 2$~;
\item l'application
\[\begin{array}{cccc}
p_{n} \colon &I_{n} & \longrightarrow & V_{n}\setminus V_{n}^{\partial}\\
&\ensuremath{\varepsilon} & \longmapsto & b_{n}^\ensuremath{\varepsilon}
\end{array}\]
est un hom\'eomorphisme.
\end{enumerate}
\'Ecrivons~$I_{n}$ comme une r\'eunion d\'enombrable $\bigcup_{m\in \ensuremath{\mathbf{N}}} I_{n,m}$, o\`u les~$I_{n,m}$ sont des segments non r\'eduits \`a un point.
Pour tout $n\in \ensuremath{\mathbf{N}}$, posons
\[ W_{n} := \{a_{0}\} \sqcup \bigsqcup_{m=0}^n \big(p_{m}(I_{m,n}) \sqcup V_{m}^\partial\big).\]
C'est une partie ferm\'ee de~$V$ qui est union disjointe finie de points et de parties lin\'eaires. D'apr\`es le th\'eor\`eme~\ref{th:dimensioncorps} et la premi\`ere partie de la preuve, on a $\dimr(\overline{D}_{W_{n}}(\ensuremath{{\boldsymbol{r}}})) \le d+1$. Or on a $\overline{D}_{V}(\ensuremath{{\boldsymbol{r}}}) = \bigcup_{n\in \ensuremath{\mathbf{N}}} \overline{D}_{W_{n}}(\ensuremath{{\boldsymbol{r}}})$, donc, d'apr\`es le th\'eor\`eme~\ref{dimension_union_d\'enombrable}, $\dimr(\overline{D}_{V}(\ensuremath{{\boldsymbol{r}}})) \le d+1$. Ceci conclut la preuve.
\end{proof}
\index{Espace affine analytique!dimension d'un|)}\index{Disque!dimension d'un|)}
\section{Introduction}
The field of fractional quantum Hall effect~\cite{Tsui82} (FQHE) has been the birthplace for a web of spectacular phenomena, exotic emergent particles, and nontrivial states, all arising as a result of the interaction between electrons. The FQHE is a rare example of a strongly correlated state for which we not only have a qualitative understanding of a large part of the prominent phenomenology but have achieved a detailed microscopic description that is quantitatively accurate~\cite{Jain07, Halperin20}. Nonetheless, the origin of a few experimentally observed states remains unsettled. This article aims to report on our theoretical investigations of one such state, namely the FQHE state at filling factor $\nu=1/2$ observed in wide quantum wells (WQWs)~\cite{Suen92, Suen92b, Suen94b, Luhman08, Shabani09a, Shabani09b, Shabani13, Liu14d, Hasdemir15, Mueed16, Drichko19}, the origin of which has been a topic of debate ever since its discovery. There are two motivations for our study. First, this observation is in stark contrast to the state at half-filling in narrow quantum wells, which is established to be a Fermi sea of composite fermions (CFs)~\cite{Halperin93, Jain07, Halperin20}. The FQHE thus arises due to changes in the interaction arising from finite quantum well width, and thus constitutes an important challenge for our quantitative understanding of the FQHE. Second, the physical origin of the observed state can be potentially very interesting.
A promising two-component state is the Halperin $(3,3,1)$ state~\cite{Halperin83}, which can be relevant because a very WQW behaves as a two-component system. [There is little doubt that the $1/2$ FQHE observed in {\it real} double-layer systems~\cite{Eisenstein92}, observed at the same time as the $1/2$ state in WQWs, is the two-component Halperin $(3,3,1)$ state~\cite{Faugno20}. Our focus in this article is on WQWs, not double layer systems.] However, another promising candidate is the one-component Pfaffian state, which is a paired state of composite fermions~\cite{Moore91, Read00}. This state is believed to be responsible for the FQHE at $\nu=5/2$~\cite{Willett87}, and one can ask if the changes in the inter-electron interaction due to finite width may stabilize this state at $\nu=1/2$ as well. The Halperin $(3,3,1)$ state supports Abelian quasiparticles, whereas the Pfaffian is believed to support non-Abelian quasiparticles. The latter has motivated many interesting theoretical
and experimental studies of the $5/2$ FQHE. If the 1/2 state in WQWs turns out to be the Pfaffian state, that would provide another venue where non-Abelian quasiparticles may be investigated.
While the $1/2$ FQHE in WQWs has often been interpreted in terms of the $(3,3,1)$ state, arguments can also be given in favor of a one-component state. We provide here a summary of experimental results and their implications for the nature of the state:
\begin{itemize}
\item In a double layer system, which consists of two layers separated by a distance $d$, the situation is relatively clear~\cite{Park98,Papic10,Faugno20,Scarola01b, Scarola02b,Chakraborty87,Yoshioka89,He91,He93}. For zero layer separation, the two-component system of spin-polarized electrons is formally equivalent to a single layer system of spinful electrons with zero Zeeman splitting. Here the state is a layer singlet Fermi sea of composite fermions~\cite{Park98,Balram15c,Balram17}. In variational calculations~\cite{Scarola01b, Scarola02b,Faugno20} this state survives in the range $d/l_B\lesssim1$. The $(3,3,1)$ state is predicted to occur for layer separations $1\lesssim d/l_B \lesssim 3$~\cite{Scarola01b,Faugno20}, in general agreement with experiments. For layer separations $d/l_B \gtrsim 3$ two uncoupled CF Fermi seas (CFFSs) are formed, one in each layer, with composite fermions now binding four vortices~\cite{Scarola01b,Faugno20}. In contrast, the 1/2 FQHE in WQWs is seen when the width is approximately $2.6 - 8$ $l_B$~\cite{Suen92,Suen92b,Suen92c,Suen94b, Shayegan96,Manoharan96,Shabani09a,Shabani09b,Shabani13,Yang14b,Hasdemir15, Mueed16,Liu14d}. Although not conclusive, this points against the two-component $(3,3,1)$ state.
\item For quantum well widths and densities where the 1/2 FQHE is observed in WQWs, the behavior of FQHE states surrounding it is often consistent with single layer physics. In particular, the standard Jain sequences $n/(2n\pm 1)$~\cite{Jain89} are observed. Recently, Mueed {\it et al.}~\cite{Mueed15} have directly measured, from commensurability oscillations, the Fermi wave vector of composite fermions in the vicinity of filling factor $1/2$ and found that the Fermi sea is a one-component state. The fact that the states in the immediate vicinity of $\nu=1/2$ are one-component states makes it plausible that the $\nu=1/2$ FQHE also has a one-component origin. If not, it would be important to understand what is special about $\nu=1/2$ that makes a two-component state favorable.
\item A phase diagram has been constructed as a function of the filling factor and $\Delta_{\rm SAS}$, the gap between the symmetric and antisymmetric subbands~\cite{Manoharan96}. The island of the 1/2 FQHE state straddles the boundary where many nearby FQHE states make a transition from a one-component state to an insulator, presumably a double layer crystal. However, the 1/2 FQHE island is contiguous, i.e., it is either all one-component or all two-component.
\item The effect of asymmetry in the charge distribution is complex but worth mentioning here. An early work on an 80 nm wide QW by Suen {\it et al.}~\cite{Suen92b, Suen94b} reported a monotonic decrease in the strength of the FQHE at $\nu=1/2$ as the charge distribution is made asymmetric, with the FQHE state disappearing at approximately 10\% imbalance. This may arise from either a two-component nature or complicated changes in the effective interaction. Subsequently, Shabani {\it et al.}~\cite{Shabani09a, Shabani09b} found that in a 55 nm quantum well an asymmetry of the charge distribution favors FQHE at $1/2$. This suggests a one-component nature of the FQHE here. Numerical studies also show that in such asymmetric quantum wells around certain widths the one-component Pfaffian wave function has a large overlap with the ground state, although the $(3,3,1)$ state is also competitive~\cite{Thiebaut14, Liu14d, Peterson10}.
\end{itemize}
We next briefly review the theoretical studies of the 1/2 FQHE in WQWs and also provide a summary of the main results arising from the present study. In particular, we indicate how the theoretical phase diagram is sensitive to the various assumptions that go into the calculation.
The problem has been addressed by exact diagonalization (ED)~\cite{Papic10,He93,Storni10,Peterson10,Balram20}. ED can often deal with only very small systems and is thus not likely to capture the thermodynamic behavior. This is especially the case for WQWs, for which the width may become comparable to the available lateral dimension of the system. The energy orderings of states are often seen to change as the system size increases.
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./torus_VMC_Phase_boundary_all.pdf}
\caption{The phase diagram of states at $\nu=1/2$ obtained by the VMC method as a function of the quantum well width $W$ and the carrier density. The transverse wave function is assumed to have the form obtained from LDA at zero magnetic field. Both one-component and two-component states are included. The following states are seen to occur: the one-component CFFS state (red), the $(3,3,1)$ state (green), and the state with two uncoupled $1/4$ CFFSs, labeled $1/4+1/4$ CFFS (yellow). The region where experiments find an incompressible state~\cite{Shabani09b} is indicated by light dashed lines. For a given width, the uncertainty of the calculated transition densities is approximately $1\times 10^{10} \text{cm}^{-2}$. The overall phase boundary is obtained by smoothly joining the transition points at $W=50,60,70,80$ nm. The subband gap determined by LDA is used to determine the total energies of the two-component states.
}
\label{VMC_PHASE_DIAGRAM_2}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./2dDMC_Phase_boundary_all.pdf}
\caption{The phase diagram of states as a function of the quantum well width $W$ and the carrier density obtained from a 2D DMC calculation, which incorporates finite width corrections by using an LDA interaction derived at zero magnetic field. This figure shows how the phase diagram in Fig.~\ref{VMC_PHASE_DIAGRAM_2} changes upon Landau level mixing. The region where experiments find an incompressible state~\cite{Shabani09b} is indicated by light dashed grey lines. For a given width, the uncertainty of the calculated transition densities is about $2\times 10^{10} \text{cm}^{-2}$.}
\label{2D_DMC_BOUNDARY_ALL}
\end{figure}
This issue has also been investigated by variational Monte Carlo (VMC)~\cite{Biddle13,Papic09, Scarola10,Thiebaut14,Thiebaut15,Faugno20}. During the course of this work, we have determined the phase diagram of $\nu=1/2$ in a WQW using the VMC method, shown in Fig.~\ref{VMC_PHASE_DIAGRAM_2}. The $(3,3,1)$ state is stabilized in a part of the phase diagram that qualitatively agrees with experiments. The phase boundary between the one-component CFFS and the $(3,3,1)$ state is consistent with earlier calculations~\cite{Thiebaut15}.
However, the VMC calculations make the following assumptions. (i) The effect of finite width is incorporated through a transverse wave function for electrons, which modifies the interactions between them (see Eq.\,\ref{V_eff}). The transverse wave function is evaluated in the local density approximation (LDA) at zero magnetic field~\cite{Park99b}, and it is assumed that it remains unaltered at a strong perpendicular magnetic field. Given that the nature of the transverse wave function depends on the state that the electrons form in two dimensions (for example, at zero magnetic field LDA assumes a Fermi sea state of electrons), one may wonder to what extent this assumption is valid. (ii) The phase boundary between the one- and two-component states depends sensitively on $\Delta_{\rm SAS}$, i.e. the gap between the symmetric and antisymmetric subbands. One uncritically uses its value obtained at zero magnetic field. However, this gap is typically very large compared to the Coulomb energy differences between the competing states, and even a few percent change in $\Delta_{\rm SAS}$ can substantially shift the phase boundaries.
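To make assumption (i) concrete, the following minimal sketch illustrates the standard finite-width construction in which the 3D Coulomb interaction is averaged over the transverse charge density. Since Eq.\,\ref{V_eff} is defined elsewhere in the text, the formula coded here, the square-well transverse wave function, and the 60 nm width are illustrative stand-ins rather than the actual LDA input of our calculations.
\begin{verbatim}
import numpy as np

def v_eff(r, xi, w):
    """Finite-width effective 2D interaction (in units of e^2/epsilon):
        V_eff(r) = int dw1 dw2 |xi(w1)|^2 |xi(w2)|^2 / sqrt(r^2 + (w1-w2)^2),
    where xi(w) is the transverse wave function sampled on the grid w.
    The form of xi is an input (e.g. from LDA); it is not computed here."""
    rho = np.abs(xi) ** 2
    rho /= np.trapz(rho, w)                    # normalize transverse density
    w1, w2 = np.meshgrid(w, w, indexing="ij")
    kernel = 1.0 / np.sqrt(r ** 2 + (w1 - w2) ** 2)
    integrand = rho[:, None] * rho[None, :] * kernel
    return np.trapz(np.trapz(integrand, w, axis=1), w, axis=0)

# Illustration only: an infinite-square-well ground state as a stand-in for
# the LDA transverse wave function, for a hypothetical 60 nm well.
W = 60.0                                        # well width in nm
w = np.linspace(0.0, W, 401)
xi = np.sqrt(2.0 / W) * np.sin(np.pi * w / W)
for r in (5.0, 10.0, 20.0):                     # in-plane separations in nm
    print(r, v_eff(r, xi, w))
\end{verbatim}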
The VMC calculation also does not incorporate the effect of Landau level mixing (LLM) directly. We have further investigated the role of LLM within the VMC method through a two-dimensional (2D) fixed-phase diffusion Monte Carlo (DMC) method developed by Ortiz, Ceperley and Martin~\cite{Ortiz93, Melik-Alaverdian97}, which itself is a generalization of the standard DMC method~\cite{Foulkes01} to find ground states in the presence of broken time-reversal symmetry. In this method, we allow for LLM for electrons interacting with the effective interaction derived from LDA at zero magnetic field. We refer to this as ``2D-DMC.'' We find that, at this level of approximation, the phase diagram is substantially altered and neither the $(3,3,1)$ nor the Pfaffian state is stabilized for a significant range of parameters (see Fig.~\ref{2D_DMC_BOUNDARY_ALL}). However, a conceptual difficulty with this method is an uncontrolled double-counting, because mixing with higher bands has already been incorporated through the modification of the transverse wave function, which, in a sense, is akin to LLM at a finite magnetic field. (At finite magnetic fields, it is LLM that leads to a modification of the form of the transverse wave function.) This study nonetheless shows the importance of LLM, indicating that the results from neither VMC nor 2D-DMC are fully reliable.
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./3dDMC_Phase_boundary_all_strict2L.pdf}
\caption{The phase diagram of states determined by 3D-DMC as a function of the quantum well width $W$ and the carrier density. Here both finite width and Landau level mixing are included in a DMC calculation directly in the presence of a magnetic field. The red region is the single-component CFFS state and the yellow region marks the $1/4+1/4$ CFFS state. In the purple region, the energies of the single-component CFFS and the single-component Pfaffian states are equal within numerical uncertainty. The uncertainty of the transition density from the one-component state to the $1/4+1/4$ CFFS at each width is approximately $5\times 10^{10}\text{cm}^{-2}$. The region where experiments find an incompressible state~\cite{Shabani09b} is indicated by light dashed grey lines.}
\label{3D_PHASE_2}
\end{figure}
The primary motivation of our work is to develop a technique that circumvents some of the above issues and treats finite width and LLM effects directly at a large magnetic field. Specifically, we use a three-dimensional (3D) version of the fixed phase DMC method, referred to below as ``3D-DMC," or simply as ``DMC." The most important advantage of the 3D-DMC method is that it directly gives the ground state energy (as well as the form of the transverse wave function) at a high magnetic field, automatically including the effects of finite width and LLM. No reference is made to zero magnetic field in our calculation. Of course, this method also makes an approximation, namely the choice of fixed phase, and all of our conclusions are subject to the validity of our choice of the phase. (We use the accurate lowest Landau level wave functions to fix the phase, which has been found to give good agreement with experiments in the past~\cite{Zhang16, Zhao18, Ma20}.) There are other practical difficulties with our method. One is that the required computation time does not allow treatment of very large systems; we have studied systems with up to about 25 particles. The second is that it does not allow treatment of two-component states with non-zero $\Delta_{\rm SAS}$. For two-component states, we assume that $\Delta_{\rm SAS}=0$, i.e. the wave function strictly vanishes at the center. This should be a decent approximation for sufficiently large widths and densities where $\Delta_{\rm SAS}$ is small.
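As background, the toy sketch below illustrates the generic drift-diffusion-branching structure of an importance-sampled DMC run, here for a one-dimensional harmonic oscillator. It is meant only to convey the principle of the method; the fixed-phase 2D and 3D algorithms used in this work are described in Sec.\,\ref{sec_dmc_basics}, and the trial state, time step, and walker number below are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Toy DMC for a 1D harmonic oscillator (hbar = m = omega = 1); illustrates
# drift, diffusion, and branching only -- not the fixed-phase algorithm.
alpha = 0.8                                   # parameter of trial state exp(-alpha x^2/2)
def local_energy(x):                          # E_L = (H psi_T)/psi_T
    return 0.5 * alpha + 0.5 * (1.0 - alpha**2) * x**2
def drift(x):                                 # grad ln |psi_T|
    return -alpha * x

n_walkers, n_steps, dtau = 2000, 2000, 0.01
x = rng.normal(size=n_walkers)                # initial walker positions
e_trial = local_energy(x).mean()
energies = []
for step in range(n_steps):
    x = x + drift(x) * dtau + rng.normal(scale=np.sqrt(dtau), size=n_walkers)
    w = np.exp(-dtau * (local_energy(x) - e_trial))
    p = w / w.sum()                           # branching: resample by weight
    x = rng.choice(x, size=n_walkers, p=p)
    e_trial = local_energy(x).mean()          # crude mixed-estimator update
    if step > n_steps // 2:
        energies.append(e_trial)
print("DMC estimate:", np.mean(energies), "(exact ground-state energy: 0.5)")
\end{verbatim}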
The phase diagram obtained from 3D-DMC calculations is shown in Fig.~\ref{3D_PHASE_2}. The light purple region shows the part of the phase diagram where the energies of the one-component CFFS and the one-component Pfaffian states are so close that we cannot distinguish between them within numerical uncertainty [although both of these energies are lower than the energy of the two-component $(3,3,1)$ state]. Given that experiments show an incompressible state here, we believe that the one-component Pfaffian state is the most likely possibility. Nonetheless, in light of the approximations made in the calculation, a definitive confirmation can come only from experiments, and we hope that our study will motivate further experimental studies of this state.
We have also studied several other candidate wave functions at $\nu=1/2$ but found them not to be relevant for the issue at hand.
Additionally, our 3D-DMC study yields the form of the transverse wave function directly in the presence of a high perpendicular magnetic field. Here, the double-layer nature of the ground state for large widths or densities arises due to LLM. We find that, surprisingly, the form of the transverse wave function of the lowest symmetric band is not particularly sensitive to the nature of the 2D state; we find very similar forms for $\nu=1$, $1/3$, and $1/5$, as discussed later. Furthermore, also surprisingly, we find that the transverse wave function obtained from our 3D-DMC is close to that obtained from LDA at zero magnetic field. Nonetheless, the phase diagram obtained with the 3D-DMC method is very different from that obtained from VMC.
A recent work~\cite{Zhu16} has concluded that switching on tunneling in a bilayer favors the Pfaffian state. We note, however, that the quantum well model considered in Ref.~\cite{Zhu16} is different from ours.
The plan of our paper is as follows. In Sec.\,\ref{sec_cf_states}, we briefly review the fundamentals of the FQHE on a torus and give explicit forms of the wave functions that are involved in our calculation. In Sec.\,\ref{sec_vmc} we report our VMC study of the problem. We next introduce the general principles of the DMC method in Sec.\,\ref{sec_dmc_basics}. After that, we present our 2D-DMC and 3D-DMC investigations individually. We conclude with a discussion of our results; more technical details can be found in the appendices.
All calculations are performed in the torus geometry, except those presented in Appendix~\ref{VMC_SPHERE_SEC}. Throughout this work, we assume parameters appropriate for GaAs, with dielectric constant $\epsilon=12.6$ and band mass $m=0.067 m_e$, where $m_e$ is the electron mass in vacuum. The magnetic length is denoted $l_B=\sqrt{\hbar c/eB}$ where $B$ is the magnetic field.
\section{Relevant states at half filling}\label{sec_cf_states}
We shall include in our study several different states at filling factor $\nu=1/2$, which we now list. We primarily use the torus geometry for our study, because the CFFS can be constructed on a torus with explicit wave vector configuration. (On the sphere one must approach the CFFS by taking the limit $n\to \infty$ for Jain states at $\nu=\frac{n}{2n+1}$~\cite{Rezayi94,Balram15b,Balram17}, which requires going to very large systems that are not accessible to DMC.) We also give the VMC results in the spherical geometry in Appendix~\ref{VMC_SPHERE_SEC} for comparison.
We start this section by reviewing some basics of FQHE on a torus\cite{Pu17, Bernevig12, Haldane85b, Haldane85, Greiter16}.
\subsection{Basics of FQHE on a torus}
We start by formulating the single-particle orbitals and we use them to construct the many-body wave functions. We map a torus to a parallelogram with quasi-periodic boundary conditions in the complex plane. The two edges of the parallelogram are given by $L$ and $L\tau$ in the complex plane, where $\tau$ is a complex number representing the modular parameter of the torus. We will take $L$ to be real. (We also use the symbol $\uptau$, with a different font, for the imaginary time in the introduction of the DMC algorithm; this should not cause any confusion, given that the two appear in very different contexts.) The location $\vec{z}=(x,y)$ of a particle in the complex plane is represented by the complex number $z=x+i y$. Later when we include the transverse dimension, the displacement vector in 3D space is labeled by $\vec{r}=(x, y, w)$. To make the quasi-periodic boundary conditions in $L$ and $L\tau$ directions compatible, the number of flux quanta through the torus, $N_\phi=BL^2{\rm Im}[\tau]/\phi_0$, must be an integer, where $\phi_0=hc/e$ is a single flux quantum. We will work with the symmetric gauge $\vec{A}=(B/2)(y, -x, 0)$, which corresponds to a uniform magnetic field $\vec{B}=-B\hat{z}$ perpendicular to the surface of the torus. For simplicity, we choose a square torus with $\tau=i$. The magnetic translation operator is given by
\begin{align}
t\left(\vec{\xi}\right)=e^{-\frac{i}{2l_B^2} \hat{z}\cdot (\vec{\xi}\times \vec z)}T\left(\vec{\xi} \right)
\label{}
\end{align}
where $T\left(\vec{\xi}\right)$ is the usual translation operator.
The single-particle orbitals satisfy the quasi-periodic boundary conditions: \begin{equation}
\begin{aligned}
&t\left(L\right) \psi\left(z\right)=e^{i\phi_1}\psi\left(z\right)\\
&t\left(L\tau\right) \psi\left(z\right)=e^{i\phi_\tau}\psi\left(z\right)
\end{aligned}
\label{PBC_single}
\end{equation}
where the phases $\phi_1$ and $\phi_\tau$ are the periodic boundary phases which define the Hilbert space. We have chosen $\phi_1=\phi_\tau=0$ because for our purpose, the calculation of the energy is independent of the choice of these phases.
In general, the single-particle orbitals in the lowest Landau level (LLL) in the symmetric gauge can be written as\cite{Greiter16, Pu17}
\begin{equation}
\psi^{(n)}\left(z\right)=e^{\frac{z^2-|z|^2}{4 l_B^2}} f^{(n)}(z)
\end{equation}
where $f\left(z\right)$ satisfies
\begin{equation}
\begin{aligned}
\frac{T\left(L\right)f\left(z\right)}{f\left(z\right)}=\frac{f\left(z+L\right)}{f\left(z\right)}&=1\\
\frac{T\left(L\tau\right)f\left(z\right)}{f\left(z\right)}=\frac{f\left(z+L\tau\right)}{f\left(z\right)}
&=e^{-i \pi N_\phi(2z/L+\tau)}
\end{aligned}
\label{PBC_eigenstate}
\end{equation}
The solutions to Eq. \ref{PBC_eigenstate} are given by\cite{Greiter16}
\begin{equation}
\begin{aligned}
f^{(n)}\left(z\right) &= e^{i k^{(n)} z} \prod_{s=1}^{N_\phi}\theta\left( z/L-w_s^{(n)}|\tau\right)\\
k^{(n)} &=\frac{-\pi N_\phi+2\pi n}{L}\\
w_s^{(n)} &=\frac{1}{2\pi N_\phi}\left[-\pi N_\phi(2-\tau)-2\pi n \tau+\pi+2\pi(s-1)\right]
\end{aligned}
\end{equation}
where $\theta\left( z|\tau\right)$ is the odd Jacobi theta function\cite{Mumford07} (see Appendix~\ref{theta_function_definition} for its definition and properties). Here $n=0,1,2,\cdots, N_\phi-1$; the $w_s^{(n)} L$ give the positions of the zeros; and $k^{(n)}$ is a real number labeling the eigenvalues of the magnetic translation $t(L/N_\phi)$:
\begin{equation}
\label{T1}
t\left(L/N_\phi\right)\psi^{(k)}(z,\bar{z}) =e^{i\frac{2\pi k}{N_\phi}}\psi^{(k)}(z,\bar{z}).
\end{equation}
Starting from single-particle wave functions, one can construct many-body wave functions that preserve the quasi-periodic boundary conditions. In general, the many-body wave function at filling $p/q$, where $p$ and $q$ are co-primes, has a $q$ fold center-of-mass (CM) degeneracy\cite{Haldane85b}. The Laughlin wave function at $\nu=1/m$ is given by\cite{Haldane85, Greiter16, Haldane85b}
\begin{equation}
\begin{aligned}
\label{Laughlin_wf}
\Psi_{1/m}^{(n)}(\{z_i\})=e^{\sum_i\frac{z_i^2-|z_i|^2 }{4l_B^2}} F_\frac{1}{m}^{(n)}\left( Z\right)\prod_{i<j}\left[\theta\left(\frac{z_i-z_j}{L}|\tau\right)\right]^{m}
\end{aligned}
\end{equation}
where $F_\frac{1}{m}^{(n)}(Z)$ describes the CM part with $Z=\sum_{i=1}^N z_i$:
\begin{equation}
\begin{aligned}
F_\frac{1}{m}^{(n)}(Z)=&e^{i K^{(n)}Z}\prod_{s=1}^{m} \theta\left(Z/L-W_s^{(n)}|\tau\right),\\
K^{(n)}
=&(-\pi N_\phi+2 \pi n)/L\\
W_s^{(n)}
=&\frac{ N_\phi \tau-N_\phi-2n \tau-(m-1)+2(s-1)}{2m}
\end{aligned}
\end{equation}
where $n=0, 1, 2,\dots,m-1$ labels the $m$-fold CM degeneracy\cite{Haldane85,Greiter16}. In the special case $m=1$, Eq.~\ref{Laughlin_wf} gives the wave function $\Psi_1$ for filled LLL. For the filled LLL wave function, we drop the superscript $n$ for $F_1(Z)$, since $n$ can take only one value $n=0$.
The Jain state at $\nu=\frac{s}{2ps+1}$ is constructed as
\begin{equation}
\begin{aligned}
\Psi_{s\over 2ps+1}=\mathcal{P}_\text{LLL}\Psi_s\Psi_1^{2p}
\end{aligned} \label{Jainwf}
\end{equation}
where $\Psi_s$ stands for the wave function of electrons filling the lowest $s$ LLs, $\Psi_1^{2p}$ attaches $2p$ vortices to each electron to composite-fermionize it, and $\mathcal{P}_\text{LLL}$ projects the wave function into the LLL. This form is valid for both the spherical and the torus geometries. On the torus, the wave function in Eq.~\ref{Jainwf} does not have a well-defined CM momentum, but $2ps+1$ degenerate CM eigenstates can be constructed as discussed by Pu {\it et al.}\cite{Pu17}, who also show how the LLL projection can be conveniently accomplished for the Jain states in the torus geometry.
\subsection{One-component CFFS state}
An important state for our purposes is the one-component CFFS. As mentioned above, this state is favored in narrow quantum wells.
The construction of the CFFS wave function at $\nu=1/2p$ in the torus geometry is accomplished by attaching $2p$ flux quanta to an electron Fermi sea state and projecting it into the LLL\cite{Rezayi94,Shao15,Geraedts18,Wang19,Pu18}:
\begin{equation}
\Psi_\text{CFFS, 1/2p}\left(\left\{ z_i\right\}\right)=\mathcal{P}_\text{LLL} \Psi_{FS} \Psi_1^{2p}
\end{equation}
where $\Psi_\text{FS}={\rm det}[e^{i\vec{k}_n\cdot\vec{r}_i}]$ stands for the Fermi sea wave function. It can be projected into the LLL to produce
\begin{equation}
\begin{aligned}
& \Psi_\text{CFFS, 1/2p}\left(\left\{ z_i\right\}\right)
=e^{\frac{\sum_i z_i^2-|z_i|^2}{4l_B^2}} F_1\left( Z+i\ell_B^2K\right)^{2p} \\
&\times \det{\left[G_{k_n}\left(z_m\right)\right]} \left[\prod_{i<j}\theta\left( \frac{z_i-z_j}{L}|i\right)\right]^{2p-2}
\end{aligned}
\label{CFFS WF}
\end{equation}
where
\begin{equation}
\begin{aligned}
G_{k_n}\left(z_m\right)&=e^{-\frac{k_n l_B^2}{4}(k_n+2\bar{k}_n)}e^{\frac{i}{2}(\bar{k}_n+k_n)z_m}\cdot\\ &\cdot\prod_{j, j\neq m}\theta\left(\frac{z_m+2pik_n l_B^2-z_j}{L}|i\right).
\end{aligned}
\end{equation}
Here the $k_n$ denote the magnetic momenta occupied by the CFFS, with the CM momentum given by $K=\sum_n k_n$. The empirical rule is that the configuration of $k_n$'s that produces the ground state is as compact as possible, i.e., it minimizes $\sum_n\left(k_n-K/N\right)^2$, as illustrated by the sketch below. More details can be found in Refs.~\onlinecite{Rezayi94,Pu18,Fremling18,Pu20b}.
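To make the compactness criterion concrete, the following minimal Python sketch performs a brute-force search over a small pool of lattice momenta; it is purely illustrative (it is not the procedure used in our production code), and it assumes that the momenta are quantized in integer multiples of $2\pi/L$ on the square torus. The function name and parameters are hypothetical.
\begin{verbatim}
import itertools
import numpy as np

def compact_fermi_sea(N, half_span=4):
    # Brute-force search for the most "compact" set of N lattice momenta
    # k = (2*pi/L)*(nx, ny), minimizing sum_n |k_n - K/N|^2 with K = sum_n k_n.
    # Units of 2*pi/L are used throughout, so L drops out.  Practical only
    # for small N; meant only to illustrate the criterion.
    grid = [np.array(p, dtype=float)
            for p in itertools.product(range(-half_span, half_span + 1), repeat=2)]
    grid.sort(key=lambda k: k @ k)          # keep a pool of the smallest momenta
    pool = grid[:2 * N]
    best, best_cost = None, np.inf
    for subset in itertools.combinations(pool, N):
        ks = np.array(subset)
        K = ks.sum(axis=0)
        cost = ((ks - K / N) ** 2).sum()
        if cost < best_cost:
            best, best_cost = ks, cost
    return best, best_cost

ks, cost = compact_fermi_sea(N=9)
print(cost)
print(ks)
\end{verbatim}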
\subsection{Pfaffian state}
Three distinct Pfaffian wave functions on the torus are given by \cite{Greiter91,Greiter92a}
\begin{equation}
\label{Pfaffian_wfn}
\begin{aligned}
&\Psi_\text{Pf, 1/2}\left( \left\{ z_i\right\}\right)\\
=&Pf\left( M_{ij} \right)F_1^2\left(Z\right) \prod_{i<j}\theta^2\left(\frac{z_i-z_j}{L}|i\right) e^{\frac{\sum_i z_i^2-|z_i|^2}{4l_B^2}}.
\end{aligned}
\end{equation}
Here $Pf\left(M_{ij}\right)$ is the Pfaffian of the matrix $M_{ij}=\frac{\theta_a\left(\frac{z_i-z_j}{L}|i\right)}{\theta_1\left(\frac{z_i-z_j}{L}|i\right)}$, and the choices $a=2, 3, 4$ produce three distinct Pfaffian wave functions.
The definition of $\theta_a\left(z|\tau\right)$ can be found in Appendix\,\ref{theta_function_definition}. These three states are degenerate for a three-body Hamiltonian for which the Pfaffian state is exact, and are believed to become degenerate for the Coulomb interaction in the thermodynamic limit\cite{Peterson08}. Our calculations also show that the energy differences between them are negligible: (1) in the VMC calculation the difference is much smaller than the difference between the Pfaffian state and the CFFS; and (2) in the DMC calculation the energy differences are smaller than the statistical uncertainty (see Appendix\,\ref{PF_DEGENERACY}). For these reasons, and given the limits of our computational resources, we choose $a=2$ below.
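For readers who wish to evaluate Pfaffians numerically, a minimal Python sketch is given below. It computes the Pfaffian of a generic antisymmetric matrix by expansion along the first row and checks the result against $\mathrm{Pf}(A)^2=\det(A)$; it is a generic illustration, not the routine we use for the matrices $M_{ij}$ above, and the function name is hypothetical.
\begin{verbatim}
import numpy as np

def pfaffian(A):
    # Pfaffian of an even-dimensional antisymmetric matrix by recursive
    # expansion along the first row.  Factorial scaling: fine for the small
    # matrices used in checks, not for production Monte Carlo runs.
    n = A.shape[0]
    if n == 0:
        return 1.0
    if n % 2:
        return 0.0
    total = 0.0
    for j in range(1, n):
        minor = np.delete(np.delete(A, (0, j), axis=0), (0, j), axis=1)
        sign = (-1.0) ** (j + 1)     # column j is the (j+1)-th in 1-based counting
        total = total + sign * A[0, j] * pfaffian(minor)
    return total

# sanity check against Pf(A)^2 = det(A) for a random antisymmetric matrix
M = np.random.randn(6, 6)
A = M - M.T
assert np.isclose(pfaffian(A) ** 2, np.linalg.det(A))
\end{verbatim}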
\subsection{Uncoupled $1/4+1/4$ two-component CFFS state}
In the limit of very wide quantum wells, we expect the system to form two uncoupled $1/4$ CFFSs, a state referred to as the $1/4+1/4$ CFFS. The wave function of this two-component state is the product of two $1/4$ CFFS wave functions of the form defined in Eq.\,\ref{CFFS WF}:
\begin{equation}
\Psi_\text{CFFS, 1/4+1/4}=
\Psi_\text{CFFS, 1/4}\left(\left\{ z_i\right\}\right) \Psi_\text{CFFS, 1/4}\left(\left\{ z_{[j]}\right\}\right)
\end{equation}
where $i=1, 2, \dots, N_e/2$ denote the electrons belonging to the first layer and $[j]\equiv N_e/2+j=N_e/2+1, N_e/2+2,\dots,N_e$ denote the electrons belonging to the second layer.
\subsection{The pseudo-spin singlet CFFS states}
We also consider the pseudo-spin singlet CFFS state, which is compressible and is constructed by attaching flux quanta to the pseudo-spin-singlet Fermi sea wave function. Here the term ``pseudo-spin'' refers to the layer index. The pseudo-spin singlet CFFS state has interlayer correlations, in contrast to the $1/4+1/4$ CFFS state. Its wave function is obtained by simply replacing the determinant in the wave function of the pseudo-spin polarized $1/2$ CFFS, Eq.\,\ref{CFFS WF}, by the product of the determinants of the two pseudo-spins\cite{Hossain20a}:
\begin{equation}
\begin{aligned}
\det \left[G_{k_n}\left( z_m \right)\right] \to \det \left[G_{k_n}\left( z_i \right)\right] \det \left[ G_{k_l}\left( z_{[j]} \right)\right]
\end{aligned}
\end{equation}
where $i=1, 2, \dots, N_e/2$ and $[j]=N_e/2+1, N_e/2+2,\dots,N_e$ denote the electrons belonging to the two pseudo-spin components. The Jastrow factor remains the same as in Eq.\,\ref{CFFS WF}, which includes both intra-layer and inter-layer correlations. To ensure that the state is a singlet, one also needs to make the momentum distribution identical for both pseudo-spins.
\subsection{The Halperin $(3,3,1)$ state}
The Halperin $(3,3,1)$ state reads
\begin{equation}
\begin{aligned}
&\Psi_\text{$(3,3,1)$}\left( \left\{ z_i\right\} \right)=\\
&e^{\frac{\sum_i z_i^2-|z_i|^2}{4l_B^2}} F_{(3,3,1)}\left(Z\right)\prod_{1\leq i<j\leq N_e/2}\theta^3\left( \frac{z_i-z_j}{L}|i\right)\cdot\\
\cdot&\prod_{N_e/2<[i]<[j]\leq N_e}\theta^3\left( \frac{z_{[i]}-z_{[j]}}{L}|i\right) \prod_{\substack{1\leq i\leq N_e/2,\\N_e/2<[j]\leq N_e}}\theta\left( \frac{z_i-z_{[j]}}{L}|i\right).
\end{aligned}
\end{equation}
Here
\begin{equation}
\begin{aligned}
F_{(3,3,1)}(Z)&=F^{(0)}_\frac{1}{2}\left(Z_L\right)F^{(0)}_\frac{1}{2}\left(Z_R\right)F_1\left(Z\right)
\end{aligned}
\end{equation}
where $Z_L=\sum_{i=1}^{N_e/2}z_i$, $Z_R=\sum_{[j]=N_e/2+1}^{N_e}z_{[j]}$, and $Z= Z_L+Z_R$ (here $L$ and $R$ denote the left and right layers).
\section{VMC calculation of the phase diagram}\label{sec_vmc}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./LDA_n_1_5_10_30_w80.pdf}
\caption{The transverse density profile for the lowest (red) and the first excited (blue) subbands in the quantum well of width $W=80$ nm calculated by LDA.}
\label{LDA_band}
\end{figure}
We shall model the confinement potential as a quantum well of infinite depth and width $W$. In some circumstances we also consider a finite depth, but in general this does not cause any significant difference, because the GaAs quantum wells we discuss in this article (and also those in experiments) are very deep. The problem is modeled via a VMC calculation which uses an effective two-dimensional interaction, defined as follows:
\begin{equation}
V_{\text{eff}}\left( \vec{r}\right)=\frac{e^2}{\epsilon}\int dw_1\int dw_2 \frac{|\psi(w_1)|^2|\psi(w_2)|^2}{\sqrt{|\vec{r}|^2+(w_1-w_2)^2}}.
\label{V_eff}
\end{equation}
Here $\psi(w)$ is the transverse wave function, $w$ is the transverse coordinate, and $\vec{r}$ is a two-dimensional vector. In the simplest approximation, the subband wave functions are taken as the single-particle solutions of a quantum well problem $\psi_S(w)=\sqrt{\frac{2}{W}}\cos\left(\frac{\pi w}{W}\right)$ and $\psi_A(w)=\sqrt{\frac{2}{W}}\sin\left(\frac{2\pi w}{W}\right)$, where $S$ and $A$ refer to symmetric and antisymmetric. In this approximation the subband gap is $\Delta_{\rm SAS}=\frac{3\pi^2}{2}\frac{\hbar^2}{mW^2}$, where $m$ is the band mass of the electron. A better approximation for $\psi(w)$ is obtained by LDA at zero magnetic field, where one assumes a Fermi liquid state in the 2D plane~\cite{Park99b,Faugno19}. In this and the next section that introduces the 2D fixed-phase DMC, we use the LDA form for $\psi(w)$. We denote the lowest two subbands as $\psi_S$ and $\psi_A$, in which $S$ represents the symmetric subband and $A$ represents the anti-symmetric subband. The typical LDA density profiles of the lowest two subbands are shown in Fig.~\ref{LDA_band}. Before going further, let us discuss how the occupation of the subbands changes as one tunes the subband gap. When $\Delta_{\rm SAS}$ is much larger than the Fermi energy, as is the case for either very small $W$ or small densities, only the lowest subband is occupied. In the limit when the lowest two bands are approximately degenerate ($\Delta_{\rm SAS}\approx 0$), which happens at large $W$ or at large densities, two-component states are possible, where the two components are linear combinations of the two subbands. Because the system tends to form two layers at large widths, we choose the left-right bases as (Fig.~\ref{LDA_LR_COMPONENT}):
\begin{equation}
\begin{aligned}
\psi_L=\frac{1}{\sqrt{2}}(\psi_S +\psi_A)\\
\psi_R=\frac{1}{\sqrt{2}}(\psi_S -\psi_A)\\
\end{aligned}
\label{vmc_bases}
\end{equation}
More generally, we can choose $\psi_\theta=\frac{1}{\sqrt{2}}(\psi_S +e^{i \theta}\psi_A)$ and $\psi'_\theta=\frac{1}{\sqrt{2}}(\psi_S -e^{-i \theta}\psi_A)$. However, because the system becomes a bilayer for sufficiently wide quantum wells or large densities, we expect that $\theta=0$ will produce the lowest energy.
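As a rough check on the energy scales involved, the single-particle estimate $\Delta_{\rm SAS}=\frac{3\pi^2}{2}\frac{\hbar^2}{mW^2}$ quoted above can be evaluated with a few lines of Python, as sketched below. This is only an order-of-magnitude estimate and is not the LDA value of $\Delta_{\rm SAS}$ actually used in our phase diagrams, which also contains interaction effects.
\begin{verbatim}
import numpy as np

# Single-particle estimate Delta_SAS = 3*pi^2*hbar^2/(2*m*W^2) for an
# infinitely deep well, with the GaAs band mass m = 0.067 m_e used in the text.
hbar = 1.054571817e-34          # J s
m_e  = 9.1093837015e-31         # kg
eV   = 1.602176634e-19          # J
m, W = 0.067 * m_e, 80e-9       # band mass and an 80 nm well

delta_sas = 3 * np.pi**2 * hbar**2 / (2 * m * W**2)
print(delta_sas / eV * 1e3, "meV")   # roughly 2.6 meV for W = 80 nm
\end{verbatim}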
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./LDA_n_1_5_10_30_w80_LR.pdf}
\caption{The density profiles of the left (blue) and right (red) bases in the quantum well of $W=80$ nm calculated by LDA.}
\label{LDA_LR_COMPONENT}
\end{figure}
In analogy with Eq.\,\ref{V_eff}, we define the effective interactions as follows. For one-component states, only the lowest symmetric subband is used to define the effective interaction, whereas for two-component states both the intra-component and the inter-component interactions are needed:
\begin{equation}
\begin{aligned}
V_{\text{SS}}\left( \vec r\right)=\frac{e^2}{\epsilon}\int dw_1\int dw_2 \frac{\rho_\text{S}(w_1)\rho_\text{S}(w_2)}{\sqrt{|\vec{r}|^2+(w_1-w_2)^2}}\\
V_{\text{LL}}\left( \vec r\right)=\frac{e^2}{\epsilon}\int dw_1\int dw_2 \frac{\rho_\text{L}(w_1)\rho_\text{L}(w_2)}{\sqrt{|\vec{r}|^2+(w_1-w_2)^2}}\\
V_{\text{RR}}\left( \vec r\right)=\frac{e^2}{\epsilon}\int dw_1\int dw_2 \frac{\rho_\text{R}(w_1)\rho_\text{R}(w_2)}{\sqrt{|\vec{r}|^2+(w_1-w_2)^2}}\\
V_{\text{LR}}\left( \vec r\right)=\frac{e^2}{\epsilon}\int dw_1\int dw_2 \frac{\rho_\text{L}(w_1)\rho_\text{R}(w_2)}{\sqrt{|\vec{r}|^2+(w_1-w_2)^2}}\\
\end{aligned}
\label{V_eff_explicit}
\end{equation}
The densities are defined as $\rho_\text{S}=\left|\psi_S\right|^2$ and $\rho_\text{L,R}=\left|\psi_\text{L,R}\right|^2$.
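For concreteness, the following Python sketch evaluates an interaction of the form of Eq.~\ref{V_eff} by direct quadrature. For simplicity it uses the single-particle infinite-well subband $\psi_S(w)=\sqrt{2/W}\cos(\pi w/W)$ rather than the LDA subbands actually used in our calculation, and it sets the prefactor $e^2/\epsilon$ to unity; the function name and grid size are illustrative.
\begin{verbatim}
import numpy as np
from scipy import integrate

def v_eff(r, W, n_grid=201):
    # Effective 2D interaction of the form of Eq. (V_eff), evaluated by
    # Simpson quadrature for the lowest infinite-well subband
    # psi_S(w) = sqrt(2/W) cos(pi w / W).  r and W share the same length
    # unit; the prefactor e^2/epsilon is set to one.
    w = np.linspace(-W / 2, W / 2, n_grid)
    rho = (2.0 / W) * np.cos(np.pi * w / W) ** 2        # |psi_S(w)|^2
    w1, w2 = np.meshgrid(w, w, indexing="ij")
    integrand = np.outer(rho, rho) / np.sqrt(r**2 + (w1 - w2) ** 2)
    return integrate.simpson(integrate.simpson(integrand, x=w, axis=1), x=w)

# the softening relative to the pure Coulomb 1/r is strongest at short distance
for r in (0.5, 1.0, 5.0):
    print(r, v_eff(r, W=4.0), 1.0 / r)
\end{verbatim}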
We mention here two caveats. First, we consider states for which either only the lowest subband is occupied, or the two lowest subbands are equally occupied. All of our trial wave functions, namely the single-component CFFS, the single-component Pfaffian, the pseudo-spin singlet CFFS, the uncoupled $1/4+1/4$ CFFS, and the $(3,3,1)$ state, satisfy this requirement. In principle, we could also consider a partially polarized CFFS, which would have an unequal occupation of the two subbands, but we have not done so because it significantly increases the computational difficulty. All other states considered here cannot be partially polarized. Second, the value of $\Delta_{\rm SAS}$ is relevant for transitions from a single-component to a two-component state. $\Delta_{\rm SAS}$ is typically very large compared to the Coulomb energy differences between the relevant states. We determine the value of $\Delta_{\rm SAS}$ from the LDA calculation (Fig.~\ref{Delta_SAS}).
The energies of one-component states relative to the CFFS are shown in Fig.~\ref{VMC_E_1}.
As one can see, the CFFS remains the lowest energy state for all parameters, although the Pfaffian comes as close as $0.001 \frac{e^2}{\epsilon l_B}$ at densities greater than $2\times 10^{11} \text{cm}^{-2}$. This conclusion is also supported by exact diagonalization studies of finite systems in the spherical geometry. In Appendix~\ref{ED_Ajit}, we show the overlaps of the exact ground state of the LDA interaction with the one-component CFFS and the Pfaffian states, and find that in the entire region of parameter space that we considered, the one-component CFFS always has a very high overlap with the exact ground state and is thus superior to the Pfaffian.
[Note: We have also performed the energy comparison in the spherical geometry, where we see a different result, namely that the Pfaffian state has lower energy in the thermodynamic limit for some parts of the phase diagram. We believe that the torus results are more reliable because the thermodynamic extrapolation on the sphere is less accurate for finite widths. See Appendix\,\ref{VMC_SPHERE_SEC} for further discussion.]
The energies of the two-component states, namely the Pseudo-spin singlet CFFS, $(3,3,1)$ and the uncoupled $1/4+1/4$ CFFS, relative to the $(3,3,1)$ are shown in Fig.~\ref{VMC_E_2}.
A transition from the singlet CFFS to the Halperin $(3,3,1)$ occurs at very low densities, followed by a second transition into the uncoupled $1/4+1/4$ CFFS (Fig.~\ref{VMC_E_2}). This behavior is similar to that found in earlier VMC calculations on the zero-width bilayer systems\cite{Scarola01b}. The phase diagrams for one and two-component states separately are shown in Fig.~\ref{VMC_PHASE_DIAGRAM_1}.
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./LDA_Subband_Ec.pdf}
\caption{Subband gap $\Delta_{\rm SAS}$ calculated in LDA for various quantum well widths as a function of the density.}
\label{Delta_SAS}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./Torus_VMC_FW_onelayers.pdf}
\caption{The VMC calculation of the energy difference per particle between the Pfaffian and the one-component CFFS state in the thermodynamic limit. The well widths are shown on the plots.}
\label{VMC_E_1}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./Torus_VMC_FW_bilayers.pdf}
\caption{The VMC calculation of the energy per particle of the $1/4+1/4$ CFFS state and the singlet CFFS state relative to the $(3,3,1)$ state in the thermodynamic limit. The well widths are shown on the plots. The statistical errors are smaller than the symbol sizes.}
\label{VMC_E_2}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./Torus_VMC_FW_all_states.pdf}
\caption{The VMC calculation of the energy per particle of the one-component CFFS state, the Pfaffian state, the $1/4+1/4$ CFFS state, and the singlet CFFS state in the thermodynamic limit. All energies are measured relative to the energy of the $(3, 3, 1)$ state. The well widths are shown on the plots. The energies of the one-component states change rapidly relative to the $(3, 3, 1)$ state due to the $\Delta_{\rm SAS}$ component. The statistical errors are smaller than the symbol sizes.}
\label{VMC_E_3}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./torus_VMC_Phase_boundary_1L.pdf}
\includegraphics[width=\columnwidth]{./torus_VMC_Phase_boundary_2L.pdf}
\caption{(a) The phase diagram of one component states, including CFFS (red) and the Pfaffian state (purple). The Pfaffian state is not stabilized for the parameters considered. (b) The phase diagram of two component states, including the $(3, 3, 1)$ state (green), pseudo-spin singlet (blue), and $1/4+1/4$ CFFS state (yellow). The region where experiments find an incompressible state~\cite{Shabani09b} is indicated by light dashed grey lines. At each width, the uncertainty of the transition density is about $1\times 10^{10} \text{cm}^{-2}$. The overall phase boundary is obtained by smoothly joining the transition points at $W=50,60,70,80$ nm.}
\label{VMC_PHASE_DIAGRAM_1}
\end{figure}
Fig.~\ref{VMC_E_3} shows the energies of all states. We add $\frac{1}{2} \Delta_\text{SAS}$ to the energy per particle for each two-component state, because half of all particles occupy the second subband. We quote all energies relative to the $(3,3,1)$ state in Fig.~\ref{VMC_E_3}, which yields the phase diagram in Fig.~\ref{VMC_PHASE_DIAGRAM_2}. In general, one can see that the ground state of the system is a one-component state when the carrier density is low, and that the system makes a transition into a two-component state at high density. This result is qualitatively consistent with the experiments~\cite{Shabani09b, Suen92, Luhman08}, and favors the possibility that the observed incompressible state is the $(3,3,1)$ state.
There are significant differences, however. As noted earlier, the lower phase boundary is very sensitive to $\Delta_{\rm SAS}$. However, the upper theoretical phase boundary ought to be more reliable, and its deviation from the experimental phase boundary is thus significant. We note, however, that the calculation, so far, does not include LLM or disorder.
\section{Fixed-phase Diffusion Monte Carlo Method}\label{sec_dmc_basics}
In the following sections, we will use the fixed-phase DMC method to evaluate the phase diagram. DMC is a standard stochastic Monte Carlo method designed to obtain the ground state of the many-body Schr\"{o}dinger equation\cite{Reynolds82, Foulkes01}.
Upon rotation to imaginary time ($t\to -i \uptau$), the Schr\"odinger equation takes the form
\begin{equation}
\begin{aligned}
-\hbar \partial_\uptau \Psi\left(\vec{R}, \uptau\right)= \left( H\left(\vec{R}\right)-E_T\right)\Psi\left(\vec{R}, \uptau\right)
\end{aligned}
\end{equation}
where $\vec{R}=\left( \vec{r}_1, \vec{r}_2, \dots, \vec{r}_{N_e}\right)$ is the collective coordinate of the system and $E_T$ is a constant energy offset. When $\Psi\left(\vec{R}, \uptau\right)$ is real and non-negative, one can interpret the above equation as a diffusion equation, with $\Psi\left(\vec{R}, \uptau\right)$ interpreted as the density distribution of randomly moving walkers. The energy offset $E_T$
controls the population of random walkers. Starting from an initial trial wave function $\Psi_T$, as the walkers diffuse stochastically, the distribution gradually converges to a stable distribution that represents the ground state (provided $\Psi_T$ has a non-zero overlap with the ground state).
More details can be found in Refs.~\onlinecite{Foulkes01} and \onlinecite{Mitas98}.
The applicability of the DMC method relies on the assumption that the ground state is real and non-negative. However, this condition is not satisfied in a system with broken time-reversal symmetry, which is the case in the presence of a magnetic field. To overcome this difficulty, the fixed-phase DMC method has been proposed\cite{Ortiz93, Melik-Alaverdian97}. The key idea is to write the wave function as
\begin{equation}
\Psi(\vec{R})= \left| \Psi(\vec{R}) \right | \exp \left[ i \phi(\vec{R})\right]
\label{amp_phase}
\end{equation}
and determine, by the DMC method, the $\left| \Psi(\vec{R})\right |$ that gives the lowest energy for a fixed phase $\phi(\vec{R})$. This amounts to solving the Schr\"odinger equation
\begin{equation}
\begin{aligned}
&H_\text{DMC} \left| \Psi\left(\vec{R}, \uptau\right)\right|=\\
& \left( -\sum_{i=1}^N \frac{\hbar^2\nabla_i^2}{2m} +V_{\text{DMC}}\left(\vec{R}\right)-E_T\right) \left|\Psi\left(\vec{R}, \uptau\right)\right|=E \left|\Psi\left(\vec{R}, \uptau\right)\right|\\
\end{aligned}
\label{eqn_amp}
\end{equation}
with
\begin{equation}
V_{\text{DMC}}\left( \vec{R} \right)=V\left( \vec{R} \right)+\frac{1}{2 m}\sum_{i=1}^N
\left[ \hbar \nabla_i \phi \left(\vec{R}\right)+\frac{e}{c} {\mathbf A}\left({\mathbf r }_i\right) \right]^2 \label{Vdmc}.
\end{equation}
The diffusion equation is often efficiently solved by an importance sampling method. The so-called guiding function is defined as
\begin{equation}
f\left( \vec{R}, \uptau \right)=\left|\Psi_T \left( \vec{R}\right) \right| \left|\Psi \left( \vec{R}, \uptau\right) \right|
\end{equation}
where $\Psi_T$ is the trial wave function.
Instead of solving Eq.\,\ref{eqn_amp} directly, one solves the equivalent equation for the guiding function:
\begin{equation}
\begin{aligned}
-\hbar\partial_{\uptau}f({\bf R},\uptau)
&=-\frac{\hbar^2}{2m}\nabla^2 f({\bf R},\uptau)+\frac{\hbar^2}{m}\nabla \cdot \left({\bf v}_D f({\bf R},\uptau)\right)\\
&+\left(E_L({\bf R})-E_T\right) f({\bf R},\uptau)
\end{aligned}
\label{equiv_schrodingerr}
\end{equation}
where $\nabla=\left(\nabla_1, \nabla_2, \dots, \nabla_N\right)$ is the $dN$-dimensional (in $d$ space dimensions) gradient operator, $\vec{v}_D\left(\vec{R}\right)$ is the $dN$-dimensional drift velocity defined by
\begin{equation}
\vec{v}_D\left(\vec{R}\right)=\nabla \ln \left|\Psi_T \left(\vec{R}\right)\right|,
\end{equation}
and
\begin{equation}
E_L\left(\vec{R}\right)=|\Psi_T|^{-1}H_\text{DMC} |\Psi_T|
\end{equation}
is the local energy. We give their explicit forms in Appendix~\ref{DMC_algorithm} based on Ref.~[\onlinecite{Ortiz93}].
The accuracy of the DMC energy depends on the choice of the phase $\phi(\vec{R})$. In this paper, the DMC trial wave functions are the candidate wave functions described earlier. (In the case of 3D-DMC, these also include the transverse wave function.)
Each trial wave function identifies a specific phase $\phi_T$, and the DMC algorithm then produces the lowest energy state for that choice of phase.
We stress that the DMC calculation automatically includes LLM. In fact, it is a non-perturbative method for treating LLM, which has been shown in past studies to give rather accurate results\cite{Guclu05a,Ortiz93,Melik-Alaverdian95, Bolton96, Melik-Alaverdian97, Melik-Alaverdian99, Zhao18, Zhang16, Hossain20}.
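To make the algorithm concrete, a highly simplified Python sketch of one importance-sampled drift-diffusion-branching step is shown below. It sets $\hbar=m=1$, omits the accept/reject step and other refinements used in actual fixed-phase DMC codes, and takes the drift velocity and local energy as user-supplied functions; the function and argument names are hypothetical.
\begin{verbatim}
import numpy as np

def dmc_step(walkers, d_tau, drift, local_energy, e_trial, rng):
    # One schematic importance-sampled DMC step (hbar = m = 1):
    #   walkers      : array (n_walkers, dN) of configurations R
    #   drift(R)     : grad ln|Psi_T(R)|        (drift velocity v_D)
    #   local_energy : E_L(R) = |Psi_T|^{-1} H_DMC |Psi_T|
    #   e_trial      : trial energy E_T controlling the population
    new_walkers = []
    for R in walkers:
        # drift + diffusion: R' = R + v_D(R) d_tau + Gaussian of variance d_tau
        R_new = R + drift(R) * d_tau + rng.normal(scale=np.sqrt(d_tau), size=R.shape)
        # branching weight from the local energy, symmetrized between R and R'
        w = np.exp(-d_tau * (0.5 * (local_energy(R) + local_energy(R_new)) - e_trial))
        n_copies = int(w + rng.random())     # stochastic rounding of the weight
        new_walkers.extend([R_new.copy()] * n_copies)
    return np.array(new_walkers)
\end{verbatim}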
\section{2D fixed-phase DMC study with effective interaction}\label{sec_2d_dmc}
We implement a 2D fixed-phase DMC study of the problem where we obtain the lowest energy using DMC while setting $V(\vec{R})$ in Eq.~\ref{Vdmc} to $V_{\rm eff}(\vec{R})$ introduced in Eq.~\ref{V_eff}. This allows for LLM in a model where electrons confined to 2D are interacting via $V_{\rm eff}(\vec{R})$. As noted above, the phase is fixed by the trial wave functions described above.
As shown in Fig.~\ref{DMC_2d_1}, the comparison between the one-component CFFS and the Pfaffian state is very similar to that from the VMC calculation, and no transition occurs into the Pfaffian state (Fig.~\ref{2D_DMC_BOUNDARY}(a)). Meanwhile, the result for two-component states is quite different from the VMC result (Fig.~\ref{DMC_2d_2}). We find that the uncoupled $1/4+1/4$ CFFS is very efficient in lowering its energy in the presence of LLM. In contrast to the VMC result, the system makes a transition from the pseudo-spin singlet CFFS directly into the $1/4+1/4$ CFFS state for most parameters (Fig.~\ref{2D_DMC_BOUNDARY}(b)). Only for very large widths and low densities do we find a small region of the $(3,3,1)$ state.
When both one-component and two-component states are considered, the resulting phase diagram is shown in Fig.~\ref{2D_DMC_BOUNDARY_ALL}. The one-component CFFS makes a transition into the uncoupled two-component $1/4+1/4$ CFFS without going through an incompressible state, except in a small region where the well width is large. We note here that the thermodynamic extrapolation of the 2D-DMC energies is less linear for the two-component states, which leads to a larger uncertainty, of about $2\times 10^{10} \text{cm}^{-2}$, in the density at which the phase transition occurs.
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./Torus_2d_DMC_FW_onelayers.pdf}
\caption{2D-DMC calculation of the energy difference per particle between the one-component CFFS and the Pfaffian state in the thermodynamic limit.}
\label{DMC_2d_1}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./Torus_2d_DMC_FW_bilayers.pdf}
\caption{2D-DMC calculation of the energy per particle of the $1/4+1/4$ CFFS and the singlet CFFS state in the thermodynamic limit relative to the $(3,3,1)$ state. The well widths are indicated on the plots.}
\label{DMC_2d_2}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./Torus_2d_DMC_FW_all.pdf}
\caption{2D-DMC calculation of the energy per particle of the one-component CFFS state, the Pfaffian state, the $1/4+1/4$ CFFS and the singlet CFFS state relative to the $(3, 3, 1)$ state in the thermodynamic limit. The well widths are labeled on the plots. The energies include contribution from $\Delta_{\rm SAS}$.}
\label{DMC_2d_3}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./2dDMC_Phase_boundary_1L.pdf}
\includegraphics[width=\columnwidth]{./2dDMC_Phase_boundary_2L.pdf}
\caption{(a) The phase diagram of one-component states obtained by 2D-DMC on the torus. (b) The phase diagram of two-component states. The states considered are the CFFS state (red), the $(3, 3, 1)$ state (green), the singlet CFFS state (blue), the $1/4+1/4$ CFFS state (yellow), and the Pfaffian state (purple). In the lower panel, the uncertainty of the transition density from the singlet CFFS state to the $(3,3,1)$ state is about $1\times 10^{10} \text{cm}^{-2}$, while the transition density from the $(3, 3, 1)$ or singlet CFFS to the $1/4+1/4$ CFFS has an uncertainty of about $2\times 10^{10} \text{cm}^{-2}$. The region where experiments find an incompressible state~\cite{Shabani09b} is indicated by light dashed grey lines. The overall phase boundary is obtained by smoothly joining the transition points at $W=50,60,70,80$ nm.}
\label{2D_DMC_BOUNDARY}
\end{figure}
\section{3D fixed-phase DMC study of the $1/2$ FQHE}\label{sec_dmc_3d}
The transverse trial wave function for the one-component states is chosen to be:
\begin{equation}
\begin{aligned}
\Psi_\text{trans}\left( \left\{ w_i\right\}\right)&=\prod_{i=1}^{N_e}\psi_S(w_i)\\
&=\prod_{i=1}^{N_e} \left[\cos\left( \frac{\pi w_i}{W} \right)-\alpha \cos\left( \frac{3 \pi w_i}{W} \right)\right],
\end{aligned}
\label{trans_wf_sing}
\end{equation}
where $W$ is the width of the quantum well and $\alpha$ is a parameter introduced to improve the speed of convergence. Empirically, we find the program to be most efficient and stable when $\alpha$ is tuned from $0.2$ to $0.8$ as the well width ranges from $2 l_B$ to $10 l_B$. However, one should keep in mind that the choice of $\alpha$ is a technical matter; as long as the number of iterations is large enough, any choice of $\alpha$ leads to the same result, because the fixed-phase DMC solves for the lowest energy state within a given phase sector independently of the initial wave function.
For two-component states, before coming to 3D-DMC,
it is necessary to address a significant difficulty. In general, one needs to evaluate the energy expectation of a given two-component state [e.g., the Halperin $(3,3,1)$ state] by fully anti-symmetrizing the wave function. For two-component states in a single layer with real spin, the Coulomb interaction does not depend on the spin index, all the cross-terms produced by anti-symmetrization vanish, and one can treat the two components as two sets of distinguishable particles, which greatly simplifies the calculation. This, however, is not true for the present case, since the Coulomb interaction explicitly depends on the transverse coordinates and the cross-terms are nonzero. Here, one must include all the permutation terms to fully anti-symmetrize the wave function. This is impractical for systems with more than 10 particles because there are $\frac{N_e!}{(N_e/2)! (N_e/2)!}$ inter-component permutations (see the illustration after the following equations). A special case is when the two transverse bases have no overlap. In this case, all cross-terms vanish and one can calculate the energy expectation as if the two components were two distinguishable sets of particles. We therefore take the transverse trial wave functions of the two components to be strictly spatially separated, i.e., one basis function is strictly confined to the left half of the quantum well while the other is confined to the right half. In other words, our basis is given by:
\begin{equation}
\Psi_\text{trans} \left( \left\{ w_i, w_{[j]}\right\}\right)=\prod_{i=1}^{N_e/2} \prod_{\left[j\right]=N_e/2+1}^{N_e} \psi_L\left(w_i \right) \psi_R \left( w_{[j]}\right)
\label{trans_wf_left}
\end{equation}
where
\begin{equation}
\begin{aligned}
\psi_L\left( w_i \right)&=\left\{\begin{array}{ll}
-\sin(\frac{2\pi w_i}{W}), & \text{if } -W/2<w_i<0\\
0, & \text{if } 0\leqslant w_i<W/2
\end{array}
\right.\\
\psi_R\left( w_{[j]} \right)&=\left\{\begin{array}{ll}
0, & \text{if } -W/2<w_{[j]}<0\\
\sin(\frac{2\pi w_{[j]}}{W}), & \text{if } 0\leqslant w_{[j]}< W/2
\end{array}
\right.
\end{aligned}
\end{equation}
represent the left and right components.
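The combinatorial growth mentioned above can be tabulated directly; the short snippet below simply counts the $\binom{N_e}{N_e/2}$ inter-component permutation terms as a function of $N_e$, which is why full antisymmetrization is avoided beyond about 10 particles.
\begin{verbatim}
from math import comb

# Number of inter-component permutation terms, C(N_e, N_e/2), required by a
# full antisymmetrization of a two-component state.  Each term costs a
# separate wave-function evaluation per Monte Carlo sample.
for n_e in (4, 8, 10, 16, 24):
    print(n_e, comb(n_e, n_e // 2))
\end{verbatim}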
The 3D trial wave function for the $(3,3,1)$ state is constructed as:
\begin{equation}
\Psi_{(3,3,1)}^{\text{3D}}\left({\mathbf{R}}\right) =\Psi_{(3,3,1)}\left( \left\{ z_i, z_{[j]}\right\}\right) \Psi_\text{trans} \left( \left\{ w_i, w_{[j]}\right\}\right),
\end{equation}
The other two-component states are constructed similarly, with the in-plane part replaced by the corresponding wave functions.
In Appendix~\ref{Full_Antisymmtrized} we test the regime of validity of our approximation (that the right and left components are non-overlapping) for a system of four particles, for which we can implement full antisymmetrization. We find that our approximation becomes excellent near the upper phase boundary in Fig.~\ref{3D_PHASE_2}.
\subsection{Transverse density profile evaluated by 3D-DMC}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./LDA_W50_W80.pdf}
\caption{The transverse density calculated by LDA, which assumes a finite depth quantum well with the realistic parameters of the GaAs. The carrier densities are shown in units of $10^{10}\text{cm}^{-2}$. The area under each profile is normalized to unity.}
\label{LDA_density}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./3d_dmc_density_other_W80_Ne16.pdf}
\caption{3D-DMC calculation of transverse densities for different FQHE states. The calculations are performed for $N_e=16$ and quantum well width $W=80 \text{nm}$. The legend shows the carrier densities corresponding to each color, measured in units of $10^{10}\text{cm}^{-2}$}
\label{DMC_density_others}
\end{figure}
It is essential to quantitatively understand how the transverse distribution of electrons evolves as the well width increases. We first show the transverse density calculated by LDA (Fig.~\ref{LDA_density}). The LDA package\cite{Martin20} uses realistic parameters with a finite well depth, and the transverse density extends outside the well by about $3\sim 4$ nm or less on each side.
This justifies our infinite-depth approximation. (In principle, our method can also deal with a finite-depth quantum well, but technically that makes the form of the transverse trial wave function and the local energy more complicated.) In our approach, we implement the 3D fixed-phase DMC and explicitly calculate the transverse density profiles of the different candidate states.
Let us first consider one component states. Fig.~\ref{FINITE_SIZE_DEN} in Appendix~\ref{transverse density} shows that the transverse density for the CFFS is insensitive to the system size. We have found similar behavior at other filling factors. Hence, we believe that the density profiles shown in our work represent the thermodynamic limit.
We have also studied the transverse profile for several filling factors, e.g. for $\nu=1$, 1/3, 1/5 FQHE states. As shown in Fig.~\ref{DMC_density_others}, we find that the transverse densities are not sensitive to the filling factor.
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./3d_dmc_density_W50_1L_Ne16.pdf}
\includegraphics[width=\columnwidth]{./3d_dmc_density_W80_1L_Ne16.pdf}
\caption{Transverse densities for the one-component CFFS and the Pfaffian state calculated by 3D-DMC. The legend shows the carrier density in units of $\mathrm{10^{10}\, cm^{-2}}$. The results are shown for quantum well widths $W=50$ nm (a) and $W=80$ nm (b). At each width, the density profiles of the one-component CFFS and the Pfaffian state are shown individually in the upper two panels of (a) and (b). The lowest panel shows the difference between the two densities, $\rho_\text{CFFS}-\rho_\text{Pfaf}$; the scale on the left corresponds to the lowest plot, and the rest are shifted up by 0.0025 units; only 1 out of every 5 calculated data points is shown for clarity. The system size is $N_e=16$.}
\label{Density_1L}
\end{figure}
In Fig.\,\ref{Density_1L} we show the transverse density for the one-component CFFS and Pfaffian states at $\nu=1/2$ for well widths of $50$ nm and $80$ nm, with the areal density ranging from $n=1\times10^{10}\,\mathrm{cm^{-2}}$ to $3\times 10^{11}\,\mathrm{cm^{-2}}$. The other widths we consider in this article lie between $50$ nm and $80$ nm, and the corresponding transverse density profiles are similar (not shown). As one can see, the system becomes more and more two-component-like with increasing carrier density.
If one compares the 3D-DMC results with the LDA results, one can see that the two methods give very similar predictions, although the two ``humps," which indicate the onset of bilayer-like physics, appear at somewhat smaller densities in the LDA results.
We next show in Fig.~\ref{Density_2L} the transverse density profiles for two-component states, assuming that the density vanishes at the center point (for reasons discussed above). The transverse wave function is insensitive to the state in 2D, and, as expected, the system becomes more bilayer-like with increasing carrier density.
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./3d_dmc_density_strict2L_Ne8.pdf}
\caption{Transverse density of the left-component obtained by 3D-DMC for two-component states. The right component is analogous. The legend shows the carrier density in units of $\mathrm{10^{10} cm^{-2}}$. The system size is $N_e=8$.}
\label{Density_2L}
\end{figure}
\subsection{Energy calculation and phase diagram by 3D-DMC}
In this section, we present our calculation of the energy expectation values of the different states considered in this article. We first show in Fig.~\ref{3D_1L_energy} the energy comparison between the one-component CFFS and the Pfaffian state. At each well width, the energy of the Pfaffian state approaches that of the CFFS as the carrier density increases. In fact, their difference becomes so small that it is comparable to the statistical error, and we are not able to determine which one is lower. Among the two-component states, the energies are shown in Fig.~\ref{3D_strict2L_energy}. As the density increases, the system first makes a transition from the pseudo-spin singlet CFFS state into the Halperin $(3,3,1)$ state, and finally into the uncoupled $1/4+1/4$ CFFS state. This is qualitatively similar to the behavior found in the VMC calculation. The resulting phase diagrams for the one- and two-component states (separately) are shown in Figs.~\ref{3D_PHASE_1}(a) and (b).
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./3d_DMC_1L_e.pdf}
\caption{The energy difference between the Pfaffian state and the one-component CFFS state in the thermodynamic limit as a function of the carrier density at different well widths calculated by 3D-DMC in the torus geometry.}
\label{3D_1L_energy}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./3d_DMC_2L_strict2_e.pdf}
\caption{The energy of several two-component states relative to the Halperin $(3, 3, 1)$ state in the thermodynamic limit as functions of the carrier density at different well widths calculated by 3D-DMC in the torus geometry.}
\label{3D_strict2L_energy}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./3dDMC_Phase_boundary_1L.pdf}
\includegraphics[width=\columnwidth]{./3dDMC_Phase_boundary_2L_strict.pdf}
\caption{
(a) The phase diagram of one-component states obtained by 3D-DMC in the toroidal geometry. Above the dashed boundary, the uncertainty is greater than the energy difference between the one-component CFFS and the Pfaffian state, so the ground state there is either the one-component CFFS or the Pfaffian state (purple). (b) The phase diagram of two-component states. The states considered are the CFFS state (red), the $(3, 3, 1)$ state (green), the singlet CFFS state (blue), and the $1/4+1/4$ CFFS state (yellow). In the lower panel, the uncertainty of the transition density at each width is about $2\times10^{10}\text{cm}^{-2}$. The region where experiments find an incompressible state~\cite{Shabani09b} is indicated by light dashed grey lines. The overall phase boundary is obtained by smoothly joining the transition points at $W=50,60,70,80$ nm.}
\label{3D_PHASE_1}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./3d_DMC_all_e.pdf}
\caption{The energy of all the states calculated by 3D-DMC on the torus in the thermodynamic limit. The energy of each state is measured relative to that of the $(3,3,1)$ state.}
\label{3D_1L_strict2L_energy}
\end{figure}
Because the 3D-DMC automatically includes $\Delta_{\rm SAS}$, we can directly compare all the states. Their energies are given in Fig.~\ref{3D_1L_strict2L_energy}, and the resulting phase diagram is shown in Fig.~\ref{3D_PHASE_2}. This phase diagram is different from that found by VMC or 2D-DMC and suggests that the experimentally observed incompressible state is likely to be the one-component Pfaffian state.
\section{Discussion}
This work concerns the nature of the FQHE at $\nu=1/2$ in wide quantum wells. We have evaluated the phase diagram of states at $\nu=1/2$ as a function of the quantum well width and the carrier density at three different levels of approximation.
Figure~\ref{VMC_PHASE_DIAGRAM_2} shows the phase diagram obtained by a variational Monte Carlo calculation. In this calculation, we evaluate an effective 2D interaction with the help of a transverse wave function calculated by LDA at zero magnetic field. A shortcoming of this method is the assumption that the transverse wave function and $\Delta_{\rm SAS}$ evaluated at zero magnetic field remain valid at finite magnetic fields as well. This is of particular concern for the phase boundaries separating one- and two-component states, because these phase boundaries depend sensitively on $\Delta_{\rm SAS}$, which is a relatively large energy, and also a rapidly varying function of the quantum well width and density. For that reason, in Fig.~\ref{VMC_PHASE_DIAGRAM_2} the phase boundary separating the $(3,3,1)$ and $1/4+1/4$ CFFS states is more reliable than that separating the one-component CFFS and the $(3,3,1)$ states.
The VMC calculation also does not include the effect of LLM directly. Figure~\ref{2D_DMC_BOUNDARY_ALL} shows the phase diagram obtained when LLM is included, within a fixed-phase DMC calculation, for electrons interacting via the effective 2D interaction.
The principal result of the present work is given in Fig.~\ref{3D_PHASE_2}, which is the phase diagram obtained from the 3D fixed-phase DMC. This method produces the ground state energy directly at a finite magnetic field, including, in principle, the effects of finite width and LLM. The resulting phase diagram suggests, although does not prove, that the incompressible state observed in experiments is the one-component Pfaffian state.
A technical difficulty of the 3D-DMC method is that for two-component states we must assume that the transverse wave function vanishes at the center of the quantum well. One may question if this affects comparisons between one- and two-component states. Fortunately, this is an excellent approximation near the upper phase boundary of Fig.~\ref{3D_PHASE_2}, which separates the single component ``CFFS/Pfaffian" state from the two-component ``$1/4+1/4$ CFFS" state. That gives us some degree of confidence that the transition from the two-component $1/4+1/4$ CFFS occurs into the one-component Pfaffian state.
Nonetheless, a definitive confirmation must await further experimental studies. In particular, thermal Hall measurements, which have shown half-quantized value at $5/2$~\cite{Banerjee18}, can convincingly reveal whether the FQHE state here has a non-Abelian origin.
We also note that we do not consider the anti-Pfaffian state, which is the hole partner of the Pfaffian state~\cite{Levin07, Lee07, Balram18}. These two are degenerate in energy in the absence of LLM, but LLM is expected to select one of them. We have not investigated this issue here, both because the anti-Pfaffian is harder to deal with numerically, and because the energy differences are expected to be small compared to the Monte Carlo uncertainty.
Before ending, we list other assumptions made in our study. We do not consider the crystal phase. Previous theoretical studies of possible states in an ideal bilayer~\cite{Faugno18, Faugno20} (i.e., two 2D layers separated by a distance $d$) did not find any crystal states, but a crystal may occur in wide quantum wells~\cite{Thiebaut15}. Such a crystal might be responsible for the fact that the experiments see an insulator on either side of the FQHE state, rather than the compressible $1/4+1/4$ CFFS state. Of course, an alternative possibility is that disorder, omitted in our study, may turn the $1/4+1/4$ CFFS into an insulator. Experimental studies in better quality samples can clarify the situation.
\textit{Acknowledgement}: We thank Mansour Shayegan for many insightful discussions. The work was made possible by financial support from the U.S. Department of Energy under Award No. DE-SC0005042. The VMC and DMC calculations were performed using Advanced CyberInfrastructure computational resources provided by The Institute for CyberScience at The Pennsylvania State University. A. C. B. acknowledges the Science and Engineering Research Board (SERB) of the Department of Science and Technology (DST) for funding support via the Start-up Grant SRG/2020/000154. Computational portions of exact diagonalization research work were conducted using the Nandadevi supercomputer, which is maintained and supported by the Institute of Mathematical Science's High-Performance Computing Center. Some of the numerical diagonalizations were performed using the DiagHam package, for which we are grateful to its authors.
\begin{appendix}
\counterwithin{figure}{section}
\section{VMC results from the spherical geometry}
\label{VMC_SPHERE_SEC}
All of the above calculations have been performed in the torus geometry. In this section, we present results from our VMC calculations in the spherical geometry. The energy extrapolations are shown in Fig.~\ref{VMC_extrap_CFFS_sphere} for the one-component CFFS, Fig.~\ref{VMC_extrap_Pfaffian_sphere} for the Pfaffian state, Fig.~\ref{VMC_extrap_331_sphere} for the $(3,3,1)$ state, Fig.~\ref{VMC_extrap_2CFFS_sphere} for the $1/4+1/4$ CFFS, and Fig.~\ref{VMC_extrap_singlet_sphere} for the pseudo-spin singlet CFFS. Figs.~\ref{sphere_VMC_FW_1} and \ref{sphere_VMC_FW_2} depict the energies as a function of density for several quantum well widths. The resulting phase diagrams within the one-component and the two-component regimes are shown in Fig.~\ref{sphere_VMC_PT_1}. While the phase diagram of two-component states is almost identical to that on the torus, the phase diagram of one-component states is different: in particular, a phase transition occurs from the one-component CFFS to the Pfaffian state at sufficiently large densities. The final phase diagram, shown in Fig.~\ref{sphere_VMC_PT_2}, is similar to, but slightly different from, the VMC phase diagram obtained in the torus geometry, shown in the main text.
We believe that the results from the torus geometry are more reliable for the following reasons. (i) As one can see, the thermodynamic extrapolations in the spherical geometry are not as linear as those in the torus geometry, and thus entail greater uncertainty in the thermodynamic limit. This is because the finite width effect is only considered in the calculation of the electron-electron repulsion, whereas the electron-background and background-background interactions are chosen, for simplicity of the calculation, to be the same as those for a zero-width well. (The form of the background-background interactions in the spherical geometry can be found in the appendix of Ref.~\onlinecite{Jain07}, while in the torus geometry, the electron-background and background-background interactions are included through an Ewald summation which assumes the same form for all interactions.) (ii) The torus geometry is better suited to the CFFS states. While one can directly construct the CFFS on the torus by attaching flux quanta to the electron Fermi sea for any particle number, on the sphere one must work with the Jain states at filling factor $\nu=\frac{n}{2n+1}$ and take the limit $n\to \infty$ to obtain the energy of the CFFS. Alternatively, one can consider systems with zero effective flux and take the thermodynamic limit~\cite{Rezayi94,Balram15b,Balram17}. The filled shell systems occur at particle numbers $N_e=4,9,16,25,36, \dots$. However, due to the complexity of the wave functions, we cannot go beyond $N_e=36$ in VMC. This size limitation makes the energy comparisons less reliable. (iii) Finally, when it comes to DMC, very few CFFS systems are accessible in the spherical geometry, making thermodynamic extrapolations even more unreliable. For that reason, we have not performed DMC calculations in the spherical geometry.
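For completeness, we note that a thermodynamic extrapolation of the sort referred to above amounts to a linear fit of the per-particle energy against $1/N$ and reading off the intercept. The schematic sketch below uses synthetic numbers generated from an assumed linear form, not data from this work, purely to illustrate the fitting step; the assumed $1/N$ dependence and the numerical values are illustrative only.
\begin{verbatim}
import numpy as np

# Schematic thermodynamic extrapolation: fit E(N) = E_inf + c/N and read off
# the intercept.  The numbers below are synthetic, generated from the assumed
# linear form itself; they are NOT data from this work.
E_inf_true, c_true = -0.46, -0.15
N = np.array([16.0, 25.0, 36.0, 49.0])
E_N = E_inf_true + c_true / N

slope, intercept = np.polyfit(1.0 / N, E_N, 1)
print("extrapolated E_inf =", intercept)     # recovers E_inf_true = -0.46
\end{verbatim}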
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./2d_sphere_VMC_extrap_1L_CFFS.pdf}
\caption{Finite-size extrapolation of the energy for the one-component CFFS state for different widths and carrier densities. The calculation is done by VMC on the sphere. The well widths are shown on the plots.}
\label{VMC_extrap_CFFS_sphere}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./2d_sphere_VMC_extrap_1L_Pfaf.pdf}
\caption{Finite-size extrapolation of the energy for the Pfaffian state for different widths and carrier densities. The calculation is done by VMC on the sphere. The well widths are shown on the plots.}
\label{VMC_extrap_Pfaffian_sphere}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./2d_sphere_VMC_extrap_2L_hp331.pdf}
\caption{Finite-size extrapolation of the energy for the $(3,3,1)$ state for different widths and carrier densities. The calculation is done by VMC on the sphere. The well widths are shown on the plots.}
\label{VMC_extrap_331_sphere}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./2d_sphere_VMC_extrap_2L_biCFFS.pdf}
\caption{Finite-size extrapolation of the energy for the $1/4+1/4$ CFFS state for different widths and carrier densities. The calculation is done by VMC on the sphere. The well widths are shown on the plots.}
\label{VMC_extrap_2CFFS_sphere}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./2d_sphere_VMC_extrap_2L_singlet.pdf}
\caption{Finite-size extrapolation of the energy for the pseudo-spin singlet CFFS state for different widths and carrier densities. The calculation is done by VMC on the sphere. The well widths are shown on the plots.}
\label{VMC_extrap_singlet_sphere}
\end{figure}
\begin{figure}[H]
\includegraphics[width=0.95\columnwidth]{./2d_VMC_Sphere_1L_E.pdf}
\includegraphics[width=0.95\columnwidth]{./2d_VMC_Sphere_2L_E.pdf}
\caption{The VMC energies of different states relative to either the CFFS state or the $(3,3,1)$ state, as labeled in each figure. All energies are thermodynamic limits evaluated on the sphere. The statistical errors are smaller than the symbol sizes.}
\label{sphere_VMC_FW_1}
\end{figure}
\begin{figure}[H]
\includegraphics[width=0.95\columnwidth]{./2d_VMC_Sphere_ALL_E.pdf}
\caption{The VMC energies of all states relative to the $(3,3,1)$ state. All energies are evaluated on the sphere, and represent the thermodynamic limit. An offset of $\frac{1}{2}\Delta_{\rm SAS}$ per particle is included for the two-component states. The statistical errors are smaller than the symbol sizes.}
\label{sphere_VMC_FW_2}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./VMC_Sphere_PB_1L.pdf}
\includegraphics[width=\columnwidth]{./VMC_Sphere_PB_2L.pdf}
\caption{(a) The phase diagram of one-component states. (b) The phase diagram of two component states. The phase boundaries are obtained from VMC calculation in the spherical geometry.
The region where experiments find an incompressible state~\cite{Shabani09b} is indicated by light dashed grey lines. The uncertainty in the density at the transition point is approximately $1\times 10^{10} \text{cm}^{-2}$.
\label{sphere_VMC_PT_1}
\end{figure}
\begin{figure}[H]
\includegraphics[width=0.95\columnwidth]{./VMC_phase_diagram.pdf}
\caption{The full phase diagram of states at half filling, calculated by VMC method on the sphere. The uncertainty in the density at the transition point is approximately $1\times 10^{10} \text{cm}^{-2}$. The region where experiments find an incompressible state~\cite{Shabani09b} is indicated by light dashed grey lines.}
\label{sphere_VMC_PT_2}
\end{figure}
\section{Jacobi $\theta$ function and its periodicity}\label{theta_function_definition}
Here we list the definition and properties of the Jacobi $\theta$ function, following the conventions in the textbook by David Mumford~\cite{Mumford07}. In general, the $\theta$ function is defined as
\begin{equation}
\begin{aligned}
&\theta_{a,b}(z|\tau)\\
&=\sum_{n=-\infty}^{+\infty}\exp\left[\pi i(n+a)^2 \tau+2\pi i(n+a)(z+b)\right],
\end{aligned}
\end{equation}
which satisfies the periodicity properties:
\begin{equation}
\begin{aligned}
&\theta_{a,b}(z+1|\tau)=e^{2\pi a i}\theta_{a,b}(z|\tau)\\
\end{aligned}
\end{equation}
and
\begin{equation}
\begin{aligned}
&\theta_{a,b}(z+\tau|\tau)\\
&=\exp\left[-\pi i \tau-2\pi i(z+b)\right]\theta_{a,b}(z|\tau)
\end{aligned}
\end{equation}
For simplicity of notation, we have dropped the subscripts and defined $\theta_{1/2, 1/2}\left(z|\tau\right)=\theta\left(z|\tau\right)$ in the main text.
The other three Jacobi theta functions for the Pfaffian states on a torus are defined as follows:
\begin{equation}
\begin{aligned}
\theta_2\left(z|\tau\right)&=\theta\left(z+1/2|\tau\right)\\
\theta_3\left(z|\tau\right)&=e^{i\pi \tau/4}e^{i\pi z}\theta\left(z+1/2+\tau/2|\tau\right)\\
\theta_4\left(z|\tau\right)&=e^{i\pi \tau/4}e^{i\pi z}\theta\left(z+\tau/2|\tau\right)\\
\end{aligned}
\end{equation}
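As an illustration of how rapidly this definition converges, the following minimal Python sketch (not part of the calculations reported here; the truncation cutoff is an arbitrary choice) evaluates $\theta_{a,b}(z|\tau)$ by a truncated sum and numerically verifies the first periodicity property quoted above.
\begin{verbatim}
import cmath

def theta(z, tau, a=0.5, b=0.5, n_cut=50):
    """Truncated sum for theta_{a,b}(z|tau); requires Im(tau) > 0."""
    s = 0j
    for n in range(-n_cut, n_cut + 1):
        s += cmath.exp(1j * cmath.pi * (n + a) ** 2 * tau
                       + 2j * cmath.pi * (n + a) * (z + b))
    return s

# numerical check of theta(z+1|tau) = exp(2*pi*i*a) * theta(z|tau)
z, tau = 0.3 + 0.1j, 1j          # tau = i corresponds to a square torus
lhs = theta(z + 1.0, tau)
rhs = cmath.exp(2j * cmath.pi * 0.5) * theta(z, tau)
print(abs(lhs - rhs))            # ~ 1e-15
\end{verbatim}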
\section{Quasi-degeneracy of the Pfaffian state on the torus}
\label{PF_DEGENERACY}
We have given in Eq.\,\ref{Pfaffian_wfn} the explicit form for three Pfaffian wave functions, called Pfaffian (1), Pfaffian (2), and Pfaffian (3), which correspond to the choices $a=2$, 3 and 4, respectively. These are not related by CM translation, and as a result, have different Coulomb energy expectation values for finite systems. We have calculated the thermodynamic limits for the energies of these three states by the VMC method. We present the extrapolations of the VMC energies of the Pfaffian (2) and Pfaffian (3) in Figs.~\ref{Pf_degeneracy_1} and \ref{Pf_degeneracy_2} for various quantum well widths and densities. We compare these energies with the energy of the Pfaffian (1) (Fig.~\ref{VMC_EXTRAP_PFAF}) in Fig.~\ref{Pf_degeneracy_3}. At the lowest density of $n=10^{10} \text{cm}^{-2}$, the energy differences can be on the order of $\sim 0.008\pm0.003 e^2/\epsilon l_B$, which is approximately $1/4$ of the energy difference between the one-component CFFS and Pfaffian (1). As the carrier density increases, the differences between the various Pfaffian wave functions quickly drop to $\sim 0.0001 e^2/\epsilon l_B$. (Peterson {\it et al.}\cite{Peterson08} have also found similar behavior as a function of the well width in their ED studies.) Around the transition densities found in experiments, the difference is smaller than the uncertainty of either 2D-DMC or 3D-DMC, which is generally of the order of $0.001 e^2/\epsilon l_B$. We therefore conclude that, at least for this work, the choice of $\theta_a(z)$ in the Pfaffian does not affect our results, and we have used Pfaffian (1) with $a=2$ in our calculations.
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./Pf2_VMC.pdf}
\caption{The energy of the Pfaffian (2) state [in which $\theta_a(z)$ is chosen to be $\theta_3(z)$] as a function of $1/N_e$. Each energy is obtained by VMC method with the effective interaction defined in Eq.\,\ref{V_eff_explicit}.}
\label{Pf_degeneracy_1}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./Pf3_VMC.pdf}
\caption{The energy of the Pfaffian (3) state [in which $\theta_a(z)$ is chosen to be $\theta_4(z)$] as a function of $1/N_e$. Each energy is obtained by VMC method with the effective interaction defined in Eq.\,\ref{V_eff_explicit}.}
\label{Pf_degeneracy_2}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./Pf_degenracy.pdf}
\caption{Comparison of the VMC thermodynamic energies of the three Pfaffian states.}
\label{Pf_degeneracy_3}
\end{figure}
\section{Exact diagonalization studies for the LDA interaction}
\label{ED_Ajit}
In the main article, we found that, within the single component states, VMC with the LDA interaction (without LLM) supports the CFFS state in the entire parameter range considered. In this section, we present results obtained from exact diagonalization for the LDA interaction. Before we go on to the states at $\nu=1/2$, we first show that the $1/3$ Laughlin, and the $2/5$ and $3/7$ Jain states are robust to the effects of finite-width and density changes in the LLL. Using the pseudopotentials of the interaction obtained from the finite-width LDA discussed above [with parameters $W=18-70 \text{nm}$ and $n=1\times 10^{10}-30\times 10^{10}$ cm$^{-2}$], we obtain the exact ground states at $1/3$, $2/5$, and $3/7$ in the LLL at the $1/3$ Laughlin, $2/5$ Jain, and $3/7$ Jain fluxes, respectively. All our calculations are carried out for a system of $N_e=12$ electrons which is the largest system for which the $2/5$ and $3/7$ Jain states (obtained by a brute-force projection to the LLL) have been constructed in the Fock space~\cite{Yang19a}.
We also evaluate the charge and neutral gaps for the same system of $N_e=12$ electrons using exact diagonalization. The neutral gap is defined as the difference in energies of the two lowest-lying states of the system of $N_e$ electrons at the flux $2Q_{gs}$ corresponding to the incompressible ground state. The charge gap is defined as $\Delta_{c} = [E(2Q_{gs}+1)+E(2Q_{gs}-1)-2E(2Q_{gs})]/n_{q}$, where $E(2Q)$ is the background-subtracted~\cite{Balram20} ground-state energy of $N_e$ electrons at flux $2Q$, and $n_{q}$ is the number of quasiparticles (quasiholes) created by the removal (insertion) of a single flux quantum in the ground state. The charge gap measures the energy required to create a far-separated quasiparticle-quasihole pair in the ground state. The value of $n_{q}$ is one, two, and three for the $1/3$ Laughlin, $2/5$ Jain, and $3/7$ Jain states, respectively.
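For concreteness, the gap definitions above can be evaluated directly from a table of ground-state energies; the short Python sketch below uses made-up placeholder numbers (not data from this work) purely to illustrate the formulas.
\begin{verbatim}
def neutral_gap(energies_at_2Q_gs):
    """Difference of the two lowest-lying energies at the ground-state flux."""
    e0, e1 = sorted(energies_at_2Q_gs)[:2]
    return e1 - e0

def charge_gap(E, two_q_gs, n_q):
    """E maps flux 2Q -> background-subtracted ground-state energy."""
    return (E[two_q_gs + 1] + E[two_q_gs - 1] - 2.0 * E[two_q_gs]) / n_q

# example with hypothetical numbers (in units of e^2 / (epsilon l_B))
E = {20: -4.302, 21: -4.250, 22: -4.190}
print(charge_gap(E, 21, n_q=1))   # n_q = 1 for the 1/3 Laughlin state
\end{verbatim}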
The results for the overlaps and gaps obtained from exact diagonalization using the LDA pseudopotentials at $\nu=1/3$, $2/5$ and $3/7$ are shown in Fig.~\ref{fig: Laughlin_Jain_overlaps_finite_width_LDA} (note that the scales on different plots are different). We find that the $1/3$ Laughlin, $2/5$ and $3/7$ Jain states provide a near-perfect representation of the exact ground state at all widths and densities considered. Furthermore, these states support robust charge and neutral gaps, which indicates that they are stable to perturbations in the interaction arising from finite-width corrections and density variations. These results are consistent with the experimental observation of incompressible states at $1/3$, $2/5$, and $3/7$ in wide quantum wells~\cite{Shabani09b}.
\begin{figure*}[htpb]
\begin{center}
\includegraphics[width=0.32\textwidth]{./CF13_overlap.pdf}
\includegraphics[width=0.32\textwidth]{./CF25_overlap.pdf}
\includegraphics[width=0.32\textwidth]{./CF37_overlap.pdf}\\
\vspace{0.3cm}
\includegraphics[width=0.32\textwidth]{./CF13_neutral_gap.pdf}
\includegraphics[width=0.32\textwidth]{./CF25_neutral_gap.pdf}
\includegraphics[width=0.32\textwidth]{./CF37_neutral_gap.pdf} \\
\vspace{0.3cm}
\includegraphics[width=0.32\textwidth]{./CF13_charge_gap.pdf}
\includegraphics[width=0.32\textwidth]{./CF25_charge_gap.pdf}
\includegraphics[width=0.32\textwidth]{./CF37_charge_gap.pdf}
\end{center}
\caption{
Overlaps with the exact lowest Landau level ground state [top panels (a), (b), and (c)], neutral gaps [middle panels (d), (e), and (f)] and charge gaps [bottom panels (g), (h), and (i)] in the spherical geometry for the $\nu=1/3$ Laughlin [left panels (a), (d), and (g)], $\nu=2/5$ Jain [center panels (b), (e), and (h)], and $\nu=3/7$ Jain [right panels (c), (f), and (i)] states evaluated using the pseudopotentials of the finite-width interaction obtained using a local density approximation (LDA). All the panels are for $N_{e}=12$ electrons.}
\label{fig: Laughlin_Jain_overlaps_finite_width_LDA}
\end{figure*}
We next consider the 1/2 state and evaluate its charge and neutral gaps as well as its overlaps with the Moore-Read Pfaffian wave function as a function of the width and density. Here we consider the three systems of $N_e=14,~16$ and $18$ electrons that do not alias with any of the Jain states~\cite{Scarola02b}. The overlap maps shown in Fig.~\ref{fig: MR_Pfaffian_overlaps_finite_width_LDA} indicate that the overlap of the Pfaffian state with the exact ground state increases with increasing width and density and reaches a value comparable to the overlap of the Pfaffian wave function with the 5/2 Coulomb ground state~\cite{Balram20}. We next look at the neutral and charge gaps ($n_{q}=2$) of the 1/2 Moore-Read Pfaffian state. These results, shown in Fig.~\ref{fig: MR_Pfaffian_overlaps_finite_width_LDA}, suggest that the 1/2 Moore-Read Pfaffian state does not consistently, i.e. for all values of $N_e$, support a robust charge or neutral gap. For the systems of $N_e=14$ and $N_e=16$ electrons, we find that the charge gap is negative for most widths and densities, which indicates that the 1/2 Moore-Read Pfaffian state is not stabilized for these interactions. Even for the system of $N_e=18$ electrons, where the charge gaps are positive, the 1/2 Moore-Read Pfaffian state has a gap that is an order of magnitude lower than that of the Laughlin and Jain states. Thus, we conclude that the LDA interaction (without LLM) does not stabilize the 1/2 Moore-Read Pfaffian state in the LLL.
\begin{figure*}[htpb]
\begin{center}
\includegraphics[width=0.32\textwidth]{./Pf_Overlap_N14.pdf}
\includegraphics[width=0.32\textwidth]{./Pf_Overlap_N16.pdf}
\includegraphics[width=0.32\textwidth]{./Pf_Overlap_N18.pdf} \\
\vspace{0.3cm}
\includegraphics[width=0.32\textwidth]{./Pf_neutral_gap_N14.pdf}
\includegraphics[width=0.32\textwidth]{./Pf_neutral_gap_N16.pdf}
\includegraphics[width=0.32\textwidth]{./Pf_neutral_gap_N18.pdf}\\
\vspace{0.3cm}
\includegraphics[width=0.32\textwidth]{./Pf_charge_gap_N14.pdf}
\includegraphics[width=0.32\textwidth]{./Pf_charge_gap_N16.pdf}
\includegraphics[width=0.32\textwidth]{./Pf_charge_gap_N18.pdf}
\end{center}
\caption{Overlaps of the $\nu=1/2$ Moore-Read Pfaffian state with the exact lowest Landau level ground state [top panels (a), (b), and (c)], neutral gaps [middle panels (d), (e), and (f)] and charge gaps [bottom panels (g), (h), and (i)] in the spherical geometry evaluated using the pseudopotentials of the finite-width interaction obtained using a local density approximation (LDA). The left, center and right panels correspond to $N_{e}=14$ [panels (a), (d), and (g)],~$16$ [panels (b), (e), and (h)] and $18$ [panels (c), (f), and (i)] respectively.}
\label{fig: MR_Pfaffian_overlaps_finite_width_LDA}
\end{figure*}
Finally, we turn to the CFFS state at $\nu=1/2$ and consider its overlap with the exact ground state. For this purpose, we consider the exact zero-width LLL Coulomb ground state of $N_e=14$ electrons at $2Q=2N_e-3$, since this system has a uniform ($L=0$) ground state. We take this ground state to represent the CFFS state and calculate its overlap with the exact LDA ground state as a function of width and density. These overlaps are shown in Fig.~\ref{fig: CFFS_overlaps_finite_width_LDA} and are essentially unity in the entire parameter space we have considered. (For comparison, the overlap of the Moore-Read Pfaffian state with the exact zero-width LLL Coulomb ground state for this system size is $0.72$~\cite{Balram20}.)
\begin{figure}[htpb]
\begin{center}
\includegraphics[width=0.47\textwidth]{./CFFS_Overlap_N14.pdf}
\end{center}
\caption{Overlaps of the composite fermion Fermi sea (zero-width Coulomb ground state in the lowest Landau level [see text]) with the ground state of the finite-width LDA interaction for $N_{e}=14$ electrons at flux $2Q=25$. The overlap is essentially unity in the entire range of widths and densities considered.}
\label{fig: CFFS_overlaps_finite_width_LDA}
\end{figure}
To summarize, our exact diagonalization results are consistent with the VMC results given in the main article. In the entire parameter range that we explored, the CFFS has almost unit overlap with the exact ground state. Thus the CFFS state is favored over the Moore-Read Pfaffian state for all the LDA interactions that we have looked at in the absence of LLM.
\section{Additional details on the diffusion Monte Carlo}
\label{DMC_algorithm}
The fixed-phase DMC, which is a generalization of the standard DMC method~[\onlinecite{Mitas98, Foulkes01}], was developed in Ref.~[\onlinecite{Ortiz93}] and also described in Refs.~[\onlinecite{Zhang16, Zhao18}]. The method we use in this paper is based on these articles. Here we give some details that are specific to our work.
We use parameters appropriate for Gallium Arsenide. We express lengths in units of $l_B$ and energies in units of $\frac{e^2}{\epsilon l_B}$. The local energy for a 2D system is simply $E_L(\vec{R})=\frac{N_e}{2\kappa}+V_\text{Ewald}(\vec{R})$ and for a 3D system an extra term $\sum_i{E}_\text{trans}\left( w_i \right)$ is introduced due to the transverse degree of freedom:
\begin{equation}
E_L\left(\vec R \right)=\frac{N_e}{2 \kappa}+V_\text{Ewald}(\vec{R})+\sum_i{E}_\text{trans}\left( w_i \right)
\end{equation}
where $N_e/2\kappa$ is the cyclotron energy for $N_e$ particles in the initial trial state. $V_\text{Ewald}(\vec{R})$ is the Coulomb interaction extended periodically in the x-y plane; it satisfies open boundary conditions in the transverse dimension as appropriate for our 3D quantum wells (for 2D systems we simply set all $w_i$'s to be $0$). Its explicit form is given below in Appendix~\ref{Ewald_V}.
The transverse local energy of a one-component state is given by:
\begin{equation}
\begin{aligned}
&E_\text{trans}\left( w \right)\\
=&\begin{cases}\frac{1}{\kappa} \frac{\pi^2}{2 W^2} \left(9-\frac{8}{1+\alpha-2\alpha \cos(2\pi w/W)}\right), &| w|<W/2\\
\infty, &| w|\geq W/2
\end{cases}
\end{aligned}
\end{equation}
For two-component states, the energies for the left-layer and right-layer are as follows:
\begin{equation}
\begin{aligned}
&E_\text{trans}^L\left( w\right)=\begin{cases}\frac{1}{\kappa} \frac{\beta (2 W-\beta w)}{2 W^2 w},&-W/2< w<0\\
\infty, & w\geqslant0
\end{cases}\\
&E_\text{trans}^R\left( w\right)=\begin{cases}\frac{1}{\kappa} \frac{\beta \left[2 W-\beta (W-w)\right]}{2 W^2 (W-w)},&0< w<W/2\\
\infty, & w\leqslant 0
\end{cases}
\end{aligned}
\end{equation}
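As a hedged illustration (not the code used in this work), the one-component transverse local energy above can be implemented as follows, with $\kappa$, $\alpha$, and $W$ treated as caller-supplied parameters and the hard-wall region returned as an infinite value.
\begin{verbatim}
import math

def e_trans_one_component(w, W, kappa, alpha):
    """Transverse local energy of the one-component state
    (hard wall at |w| = W/2)."""
    if abs(w) >= 0.5 * W:
        return math.inf
    return (1.0 / kappa) * (math.pi ** 2 / (2.0 * W ** 2)) * (
        9.0 - 8.0 / (1.0 + alpha
                     - 2.0 * alpha * math.cos(2.0 * math.pi * w / W))
    )
\end{verbatim}
In a DMC step this quantity would be summed over the transverse coordinates $w_i$ of all particles and added to $N_e/2\kappa + V_\text{Ewald}$, as in the local-energy expression above.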
We use the mixed estimator method\cite{Foulkes01} to calculate the ground state energy.
\section{Transverse distribution of fully antisymmetrized two-component states}
In the main text, we make the approximation that the two transverse basis wave functions of the two-component states do not overlap, i.e. they are located entirely in either the left or the right half of the quantum well. The approximation becomes quantitatively valid when the well width or the density is very large, in which case both the lowest symmetric and antisymmetric subbands have vanishing density at the center, and their linear combinations form the left- and right-layer bases. This approximation simplifies the calculation because
the system's energy can be evaluated without doing an antisymmetrization over all particles.
In this section, we test the dependence of the transverse density on the well-width and the carrier density numerically with fully-antisymmetrized wave functions in 3D space and ascertain to what extent the system can be approximated with two non-overlapping bases. Because the number of permutations increases rapidly with the system size, and because one does not have analytical ways to simplify the calculation of the drift velocity in the 3D-DMC, we estimate that the study of a system with more than 8-10 particles is out of our reach.
Fortunately, we have found that the system's transverse distribution is largely insensitive to the size of the system and the type of the in-plane wave function. Therefore we study a 4-particle system with its in-plane wave function given by the $(3, 3, 1)$ state. We choose transverse wave functions that are not strictly orthogonal, i.e. they incorporate a small tunneling between the two layers. Specifically, we choose
\begin{equation}
\begin{aligned}
\psi_L\left( w_i \right)&=\frac{w_i}{W} \exp{\left[-8 \frac{w_i}{W}\right]}\\
\psi_R\left( w_i \right)&=(1-\frac{w_i}{W}) \exp{\left[-8\left(1-\frac{w_i}{W}\right)\right]}\\
\end{aligned}
\end{equation}
Here we have shifted the quantum well's location to the range $[0, W]$ for simplicity.
We do not enforce the central density to be zero; as a result, whether the system is a well-defined bilayer is determined by the diffusion process itself.
The bases chosen here are not strictly orthogonal but they are still linearly independent. If the final distribution breaks into two well-separated density lobes, then it indicates that the system can be treated as a two-component state. Conversely, if the final distribution is not well separated, then one should not treat the system as a two-component state. [This is the reason why we call the state the $(3, 3, 1)$-like state rather than the $(3, 3, 1)$ state in the caption of Fig.~\ref{Full_Antisymmtrized}.] This also offers an estimation of the width and density beyond which the system can be treated as a two-component state. Our 3D-DMC results for the density are shown in Fig.~\ref{Full_Antisymmtrized}. As one can see, the system is only well-separated and has negligible density in the center when $n\gtrsim 2\times 10^{11} \text{cm}^{-2}$ for $W=70 \text{nm}$ and $n\gtrsim 1\times 10^{11} \text{cm}^{-2}$ for $W=80 \text{nm}$. Recalling that in the main text we show a phase transition from a one-component state to a two-component state occurring around $n=2.2\times10^{11} \text{cm}^{-2}$ for $W=70 \text{nm}$ and $n=1.5\times10^{11} \text{cm}^{-2}$ for $W=80 \text{nm}$, this calculation of the fully-antisymmetrized state justifies our approximation in the main text.
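As a quick, illustrative check (not part of the DMC calculation itself), the normalized overlap of $\psi_L$ and $\psi_R$ defined above can be evaluated numerically; it is small but nonzero (roughly $0.1$), confirming that the bases are linearly independent but not strictly orthogonal.
\begin{verbatim}
import math

def psi_L(w, W=1.0):
    return (w / W) * math.exp(-8.0 * w / W)

def psi_R(w, W=1.0):
    return (1.0 - w / W) * math.exp(-8.0 * (1.0 - w / W))

def inner(f, g, n=20000, W=1.0):
    """Simple Riemann-sum inner product on [0, W]."""
    dw = W / n
    return sum(f(k * dw) * g(k * dw) for k in range(n + 1)) * dw

overlap = inner(psi_L, psi_R) / math.sqrt(inner(psi_L, psi_L)
                                          * inner(psi_R, psi_R))
print(overlap)   # roughly 0.1: small tunneling between the two layers
\end{verbatim}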
\begin{figure}
\includegraphics[width=\columnwidth]{./Full_Antisymmetrized_W70.pdf}
\includegraphics[width=\columnwidth]{./Full_Antisymmetrized_W80.pdf}
\caption{The transverse density profiles of the $(3,3,1)$-like state for a 4-particle system with well widths $W=70 \text{nm}$ (top) and $W=80 \text{nm}$ (bottom). The legend shows the carrier density in units of $10^{10} \text{cm}^{-2}$.}
\label{Full_Antisymmtrized}
\end{figure}
\section{Thermodynamic extrapolations of energy}
The phase diagrams in the main text are obtained by comparing the energies of different states in the thermodynamic limit. For completeness, we show the extrapolations of the energies of various states calculated by either VMC, 2D-DMC, or 3D-DMC in this section.
Figs.~\ref{VMC_EXTRAP_CFFS}-\ref{VMC_EXTRAP_SINGLET} show the energy extrapolation for the VMC calculation;
Figs.~\ref{2D_DMC_EXTRAP_CFFS}-\ref{2D_DMC_EXTRAP_SINGLET} show the energy extrapolation for the 2D-DMC calculation; and
Figs.~\ref{3D_DMC_EXTRAP_CFFS}-\ref{3D_DMC_EXTRAP_SINGLET} show the energy extrapolation for the 3D-DMC calculation.
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./2d_torus_VMC_extrap_1L_CFFS.pdf}
\caption{The VMC energy of the one-component CFFS as a function of $1/N_e$.}
\label{VMC_EXTRAP_CFFS}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./2d_torus_VMC_extrap_1L_Pfaf.pdf}
\caption{The VMC energy of the one-component Pfaffian as a function of $1/N_e$.}
\label{VMC_EXTRAP_PFAF}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./2d_torus_VMC_extrap_2L_hp331.pdf}
\caption{The VMC energy of the two-component $(3, 3, 1)$ as a function of $1/N_e$.}
\label{VMC_EXTRAP_331}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./2d_torus_VMC_extrap_2L_biCFFS.pdf}
\caption{The VMC energy of the two-component $1/4+1/4$ CFFS as a function of $1/N_e$.}
\label{VMC_EXTRAP_BICFFS}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./2d_torus_VMC_extrap_2L_singlet.pdf}
\caption{The VMC energy of the two-component pseudo-spin singlet CFFS as a function of $1/N_e$.}
\label{VMC_EXTRAP_SINGLET}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./2d_DMC_extrap_1L_CFFS.pdf}
\caption{2D-DMC energy of the one-component CFFS as a function of $1/N_e$.}
\label{2D_DMC_EXTRAP_CFFS}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./2d_DMC_extrap_1L_Pfaffian.pdf}
\caption{2D-DMC energy of the one-component Pfaffian as a function of $1/N_e$.}
\label{2D_DMC_EXTRAP_PFAF}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./2d_DMC_extrap_2L_hp331.pdf}
\caption{2D-DMC energy of the two-component $(3, 3, 1)$ as a function of $1/N_e$.}
\label{2D_DMC_EXTRAP_331}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./2d_DMC_extrap_2L_biCFFS.pdf}
\caption{2D-DMC energy of the two-component $1/4+1/4$ CFFS as a function of $1/N_e$.}
\label{2D_DMC_EXTRAP_BICFFS}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./2d_DMC_extrap_2L_singlet.pdf}
\caption{2D-DMC energy of the two-component pseudo-spin singlet CFFS as a function of $1/N_e$.}
\label{2D_DMC_EXTRAP_SINGLET}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./3D_DMC_E_CFFS.pdf}
\caption{3D-DMC energy of the one-component CFFS as a function of $1/N_e$.}
\label{3D_DMC_EXTRAP_CFFS}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./3D_DMC_E_PFAF.pdf}
\caption{3D-DMC energy of the one-component Pfaffian as a function of $1/N_e$.}
\label{3D_DMC_EXTRAP_PFAF}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./3D_DMC_E_331.pdf}
\caption{3D-DMC energy of the two-component $(3, 3, 1)$ as a function of $1/N_e$.}
\label{3D_DMC_EXTRAP_331}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./3D_DMC_E_BICFFS.pdf}
\caption{3D-DMC energy of the two-component $1/4+1/4$ CFFS as a function of $1/N_e$.}
\label{3D_DMC_EXTRAP_BICFFS}
\end{figure}
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./3D_DMC_E_SINGLET.pdf}
\caption{3D-DMC energy of the two-component pseudo-spin singlet CFFS as a function of $1/N_e$.}
\label{3D_DMC_EXTRAP_SINGLET}
\end{figure}
\section{The system size dependence of transverse density}
\label{transverse density}
The profiles of the transverse density for the CFFS, obtained from our 3D-DMC calculation, are shown in Fig.~\ref{FINITE_SIZE_DEN} for several system sizes. These show that the 3D-DMC transverse density has negligible dependence on the system size. This conclusion also applies to other states considered in this paper. We, therefore, believe that the various transverse density profiles shown in this article represent the thermodynamic limit.
\begin{figure}[H]
\includegraphics[width=\columnwidth]{./CFFS_finite_size.pdf}
\caption{The transverse density profiles of the CFFS state for several particle numbers. They are identical within the statistical uncertainty. The different colors are for different densities, following the same color scheme as in Fig.~\ref{Density_1L}.}
\label{FINITE_SIZE_DEN}
\end{figure}
\section{Periodic Coulomb interaction and Ewald Summation}
\label{Ewald_V}
On the torus, we must work with a periodic version of the Coulomb interaction. A naive strategy is to extend the normal Coulomb potential periodically. Although this approach is theoretically possible, it is impractical because of slow convergence.
The Ewald-summation method overcomes this difficulty.
The idea is to split the Coulomb interaction into a short-ranged part and a long-ranged part. The short-ranged part can be summed quickly in real space; the long-ranged part becomes short-ranged in momentum space and hence can be summed conveniently there.
We follow Yeh's approach~\cite{Yeh99a}, in which a generalized Ewald summation that includes the transverse dimension with an open boundary is explicitly formulated:
\begin{widetext}
\begin{equation}
\begin{aligned}
V_\text{Ewald}=&\frac{1}{2}\sideset{}{'}\sum_{i,j=1}^{N_e} \sum_{|\vec{m}|=0}^{\infty}q_i q_j \frac{\text{erfc}(\alpha|\vec r_{ij}+\vec m|)}{|\vec r_{ij}+\vec m|}+\frac{\pi}{2A}\sum_{i,j=1}^{N_e}\sum_{\vec h\neq 0}q_i q_j\frac{\cos(\vec h\cdot \vec r_{ij})}{h}\\
&\times \left\{ \exp{(h z_{ij})}\text{erfc}(\alpha z_{ij} +\frac{h}{2\alpha})+\exp{(-h z_{ij})}\text{erfc}(-\alpha z_{ij}+\frac{h}{2\alpha}) \right\}\\&-\frac{\pi}{A} \sum_{i=1}^{N_e}\sum_{j=1}^{N_e}q_i q_j\left\{ z_{ij} \text{erf}(\alpha z_{ij})+\frac{1}{\alpha\sqrt{\pi}}\exp(-\alpha^2z_{ij}^2)\right\}-\frac{\alpha}{\sqrt{\pi}}\sum_{i=1}^{N_e}q_i^2
\end{aligned} \label{Ewaldsum}
\end{equation}
\end{widetext}
The prime on the summation $\sideset{}{'}\sum_{i,j=1}^{N_e}$ is to remind us that terms with $i=j$ are included only for $\vec{m}\neq 0$.
It is worth noting that this definition of the interaction properly includes the charge-neutrality condition, i.e. it contains the electron-electron and background-background repulsion as well as the electron-background attraction. To be more explicit, the omission of the $\vec{h}=0$ term from the summation and the last term of Eq.~\ref{Ewaldsum} are due to the electron-background and background-background interactions. We refer the reader to Refs.~[\onlinecite{Heyes97}] and [\onlinecite{Parry75}],
for a thorough discussion of the technical aspects of this method.
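To make the structure of Eq.~\ref{Ewaldsum} concrete, the following partial Python sketch (illustrative only, with an arbitrary image cutoff and a square in-plane cell of side $L$) evaluates the real-space erfc-screened term and the self-energy term; the reciprocal-space and $z$-dependent terms would be added analogously.
\begin{verbatim}
import math

def ewald_real_space(r, q, L, alpha, m_cut=3):
    """Real-space term of the quasi-2D Ewald sum plus the self-energy term.

    r     : list of (x, y, z) positions, periodic in x and y with period L
    q     : list of charges
    alpha : Ewald splitting parameter
    m_cut : cutoff on the number of periodic images (illustrative choice)
    """
    n = len(r)
    v = 0.0
    for i in range(n):
        for j in range(n):
            for mx in range(-m_cut, m_cut + 1):
                for my in range(-m_cut, m_cut + 1):
                    # the primed sum: i = j contributes only for m != 0
                    if i == j and mx == 0 and my == 0:
                        continue
                    dx = r[i][0] - r[j][0] + mx * L
                    dy = r[i][1] - r[j][1] + my * L
                    dz = r[i][2] - r[j][2]
                    d = math.sqrt(dx * dx + dy * dy + dz * dz)
                    v += 0.5 * q[i] * q[j] * math.erfc(alpha * d) / d
    # last term of the equation above: the self-energy correction
    v -= alpha / math.sqrt(math.pi) * sum(qi * qi for qi in q)
    return v
\end{verbatim}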
\end{appendix}
\section{Introduction}
\label{sec:introduction}
In financial markets, the price movement is related not only to good or bad news about the underlying company but also to people's understanding of the information and their trading preferences\cite{stauffer1,lin1}. For a homogeneous population, the same beliefs and strategies may lead to the same buying-selling actions, which may further lead to herding and crowding effects\cite{zhang1,zhang2}. For a heterogeneous population, different beliefs and strategies may lead to different buying-selling actions, which may further lead to slow changes in prices\cite{medo1,chau1,zhong1,wiesinger1,sasidevan1,martino1,wong1}. Understanding the effects of homogeneous and heterogeneous populations on the evolution of the stock market is quite important for risk management and the construction of an efficient market\cite{manrique1,cao1, burkholz1,zhang3,wawrzyniak1,biswas1,xie1}.
In the study of the effects of individual preferences on the evolutionary dynamics, a variety of agent-based models have been borrowed to model people's social and economic behaviors\cite{schweitzer1,hadzibeganovic1,hadzibeganovic2,gao1,hart2,lo1,lo2,hod1}, among which the minority game and the majority game are mainly used to simulate people's trading behaviors and to explore the possible strategic and psychological effects\cite{zhang4,challet2,challet3,challet4,challet5,challet6}. Based on the minority game, the effects of response time on the evolution of stock prices have been investigated\cite{mosetti1}: the delayed response to the historical information leads to a lag effect in price fluctuations. Based on the majority game, the effects of imitation on the evolution of stock prices have been investigated\cite{alfi1,martino2}: buying and selling stocks according to most people's choices leads to the occurrence of herding effects in the stock market. Based on the mixed minority game, the majority game and the dollar-game\cite{challet1}, the effects of sophisticated individuals on the evolution of stock prices have been investigated. Different kinds of trader-trader interactions lead to the typical stylized facts in the stock markets\cite{gabaix1,jiang1,bianconi1,galla2,barato1,marsili1}.
In the study of the effects of investment strategies on the evolution of stock prices, the pair pattern strategy and the reference point strategy are two typical strategies proposed by Zhang et al.\cite{zhang5,zhang6,ren1}. With pair pattern strategies, people buy and sell stocks frequently, which leads to the occurrence of a power-law return distribution similar to that in real stock markets\cite{mantegna1,mantegna2,cizeau1,gopikrishnan1,plerou1}. Different from the pair pattern strategies, which depend on the history of price movement, the reference point strategy is a myopic strategy that depends on people's subjective cognition and provides an anchor to simplify complex decision-making processes\cite{baker1,shi1}. In real markets, the reference point strategies help us complete transactions in a simplified way.
In this paper, we incorporate the investors with pair pattern strategies and reference point strategies into a trading model. The role of heterogeneities in investment strategies and risk tolerance in the evolution of stock prices is investigated. The following are our main findings.
(1) The heterogeneities in investment strategies and risk tolerance effectively suppress the price fluctuations. When nearly all the investors have similar investment strategies or risk tolerance, the price exhibits large fluctuations. In a market with heterogeneous investment strategies and heterogeneous risk tolerance, the price fluctuation becomes moderate.
(2) The competition between different investment strategies is related to the coexistence of different strategies. Compared with a strategy that leads to large price fluctuations, a strategy that leads to a stable price is more competitive.
(3) The coexistence of investors with pair pattern strategies and reference point strategies makes some of the investors earn more and others lose more; over-frequent trading is disadvantageous for the investors who want to earn more.
The paper is organized as follows. In section 2, the agent-based model with pair pattern strategies and reference point strategies is introduced. In section 3, the numerical results are presented. In section 4, a theoretical analysis is given. In section 5, the conclusions are drawn.
\section{The model}
\label{sec:model}
In the present model, there are two kinds of investors: the investors with pair pattern strategies and the investors with reference point strategies. In the following, we introduce the characteristics of different strategies, the updating of strategies and the evolution of stock prices respectively.
\subsection{\label{subsec:levelA}Pair pattern strategies and reference point strategies}
In the present model, there are two kinds of strategies, pair pattern strategies and reference point strategies, which are used to make one's buying and selling decisions.
The pair pattern strategy space consists of a series of buying-and-selling strategy pairs. A buying strategy or a selling strategy consists of an $M$-bit-long binary array. For example, if an individual i's buying strategy is $S_i^{buy}=(110)$ and his selling strategy is $S_i^{sell}=(101)$, then, facing the latest history of $M=3$ price changes, (rise-rise-drop), individual i buys a stock on condition that the number of stocks in his hand is below the maximum value of $K_{max}$. Facing the latest history of $M=3$ price changes, (rise-drop-rise), individual i sells a stock on condition that the number of stocks in his hand is above the minimum value of $K_{min}$. Otherwise, individual i does nothing.
The reference point strategy space consists of a series of expected prices, called reference points $P^{ref}$ in the present model. If the latest price $P$ is lower than an individual j's reference point, $P<P_j^{ref}$, individual j buys a stock with probability $\frac{P_j^{ref}-P}{P}$ on condition that the number of stocks in his hand is below the maximum value of $K_{max}$. If the latest price is higher than individual j's reference point, $P>P_j^{ref}$, individual j sells a stock with probability $\frac{P-P_j^{ref}}{P}$ on condition that the number of stocks in his hand is above the minimum value of $K_{min}$. Or else, individual j does nothing.
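A schematic Python sketch of these two decision rules is given below; it is illustrative only, with $K_{max}=1$ and $K_{min}=-1$ as in our simulations, and with the price history and the strategies encoded as tuples of 0/1 (drop/rise) entries.
\begin{verbatim}
import random

def pair_pattern_action(history, s_buy, s_sell, holdings, k_max=1, k_min=-1):
    """history, s_buy, s_sell: tuples of 0/1 (drop/rise) of length M."""
    if history == s_buy and holdings < k_max:
        return +1                      # buy one stock
    if history == s_sell and holdings > k_min:
        return -1                      # sell one stock
    return 0                           # do nothing

def reference_point_action(price, p_ref, holdings, k_max=1, k_min=-1):
    """Probabilistic buy/sell around the reference point p_ref."""
    if price < p_ref and holdings < k_max:
        return +1 if random.random() < (p_ref - price) / price else 0
    if price > p_ref and holdings > k_min:
        return -1 if random.random() < (price - p_ref) / price else 0
    return 0
\end{verbatim}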
\subsection{\label{subsec:levelB}Evolution of investment strategies}
An individual i's pair pattern strategy evolves as follows. Initially, individual i randomly chooses $n_S$ strategies from the pair pattern strategy space. At each time step, he gives each strategy a virtual score, which is updated continuously as if the strategy were being used. He makes his buying or selling decision according to the strategy with the highest score.
An individual j's reference point strategy evolves as follows. Initially, an individual j randomly chooses a gene $g_j$ from the range of $g_j\in[0,g^{max}]$, which is kept with no change in the evolutionary process. He randomly chooses a strategy $P^{exp}$ from the range of $P^{exp}\in [\bar Pe^{\frac{-\alpha g_j}{N}},\bar Pe^{\frac{\alpha g_j}{N}}]$, in which $\bar P$ is the averaged price in the latest $\Delta t$ steps and $\alpha$ is a pre-given constant. At each time step, he makes his buying or selling decision according to this strategy. If his strategy deviates from the range of $P^{exp}\in [\bar Pe^{\frac{-\alpha g_j}{N}},\bar Pe^{\frac{\alpha g_j}{N}}]$, he randomly chooses a new strategy $P^{exp}$ from the range of $P^{exp}\in [\bar Pe^{\frac{-\alpha g_j}{N}},\bar Pe^{\frac{\alpha g_j}{N}}]$. Otherwise, he keeps his strategy. From the above evolutionary mechanism we know that, in the present model, a large gene means that an individual has strong risk tolerance and is likely to change his strategy less frequently. A small gene means that an individual has weak risk tolerance and is likely to change his strategy more frequently.
\subsection{\label{subsec:levelC}Evolution of stock prices}
After all the individuals have made their decisions to buy, sell, or do nothing, i.e. $a_i=+1$, $-1$ or $0$, the price is updated according to the equation
\begin{equation}
P(t)=P(t-1)e^{\frac{\alpha A}{N}},
\end{equation}
in which $A=\sum^N_{i=1} a_i$ and $\alpha$ is a pre-given constant.
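For clarity, the price update above amounts to the following one-line computation, assuming the actions $a_i$ of all investors at the current step have been collected (the default $\alpha=10$ matches the value used in our simulations).
\begin{verbatim}
import math

def update_price(p_prev, actions, alpha=10.0):
    """actions: list of a_i in {+1, -1, 0}; N is the total population."""
    A = sum(actions)
    N = len(actions)
    return p_prev * math.exp(alpha * A / N)
\end{verbatim}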
\section{Simulation results and discussions}
\label{sec:results}
In numerical simulations, we firstly examine whether the proposed model can reproduce the stylized facts in real financial markets. Secondly, the coupled effects of pair pattern strategies and reference point strategies on price fluctuations are investigated extensively.
\subsection{\label{subsec:level}Reproduction of the stylized facts}
In the study of real financial markets, the distribution of price returns and the autocorrelation of price returns are usually investigated, which reflect the characteristics of price movement. For a random price movement, the distribution of price returns is close to a Poisson distribution. For an autocorrelated price movement, the distribution of price returns is close to a power-law distribution\cite{gu1,gu2}. Some studies have shown that a power-law distribution and long-range autocorrelation exist in real financial markets. We firstly examine whether the present model can reproduce the characteristics of price movement in real financial markets or not.
\begin{figure}
\includegraphics[width=12cm]{fig1
\caption{\label{fig:epsart}(a)The distribution of price returns $R$ for the ratio of investors with reference point strategies $\rho_{ref}$=0 and the memory size $M$=3 (circles), 5(squares); (b) the distribution of absolute price returns $\vert R \vert$ for $\rho_{ref}$=0 and $M$=3 (circles), 5(squares); (c)The distribution of price returns $R$ for $M$=3 and $\rho_{ref}$=0 (circles), 0.5 (squares); (d) the distribution of absolute price returns $\vert R \vert$ for $M$=3 and $\rho_{ref}$=0 (circles), 0.5 (squares). Other parameters are: the total population $N=5000$, the number of strategies for each investor with pair pattern strategies $n_s=2$, the averaged time $\Delta$ =10, the maximum gene for the investors with reference point strategies $g^{max}$=5000, maximum and minimum stocks for each investor $K_{max}$=1 and $K_{min}$=-1, constant $\alpha$=10.}
\end{figure}
Figure 1 (a) and (b) show that, as all the investors adopt pair pattern strategies, the distribution of price returns resembles a power law, the tail of which satisfies $P(\vert R\vert)\sim \vert R \vert^{-\gamma}$. An increase in memory size $M$ leads to a decrease in the exponent $\gamma$. For $M=3$, $\gamma\sim 4.8$. For $M=5$, $\gamma\sim 4$.
Figure 1 (c) and (d) show that, for a given memory size $M$=3, the distribution of price returns is closely related to the ratio of the investors with reference point strategies $\rho_{ref}$. An increase in $\rho_{ref}$ leads to an increase in the exponent $\gamma$. For $\rho_{ref}$=0, $\gamma\sim 4.8$. For $\rho_{ref}$=0.5, $\gamma\sim 8$.
Such results indicate that the distribution of price returns in the present model is closely related to the memory size $M$ and the ratio of the investors with reference point strategies $\rho_{ref}$. Compared with the situation where both $M$ and $\rho_{ref}$ are small, a large $M$ leads to a broader distribution of price returns, whereas an intermediate $\rho_{ref}$ leads to a narrower one.
\begin{figure}
\includegraphics[width=12cm]{fig2
\caption{\label{fig:epsart}(a)DFA of returns for the ratio of investors with reference point strategies $\rho_{ref}$=0 and the memory size $M$=3 (circles), 5(squares); (b) DFA of absolute returns for $\rho_{ref}$=0 and $M$=3 (circles), 5(squares); (c)DFA of returns for $M$=3 and $\rho_{ref}$=0 (circles), 0.5 (squares); (d) DFA of absolute returns for $M$=3 and $\rho_{ref}$=0 (circles), 0.5 (squares). Other parameters are: total population $N=5000$, number of strategies for each investor with pair pattern strategy $n_s=2$, the averaged time $\Delta$ =10, maximum gene value for the investors with reference point strategy $g^{max}$=5000, maximum and minimum number of stocks for each investor $K_{max}$=1 and $K_{min}$=-1, constant $\alpha$=10.}
\end{figure}
The long-range correlations are usually examined with the detrended fluctuation analysis (DFA). The root mean square of the detrended series $F$ satisfies $F(S)\sim S^h$, in which $S$ is the length of the local windows and $h$ is the Hurst exponent. The value of $h$ reflects the moving patterns of price returns. For $h<0.5$, the price returns are anti-correlated. For $h=0.5$, the price returns follow a random walk. For $h>0.5$, the price returns exhibit long-range correlations.
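For reference, a compact sketch of the DFA procedure is given below: the return series is integrated, divided into windows of length $S$, linearly detrended in each window, and the exponent $h$ is obtained from a fit of $\log F$ versus $\log S$. (This is an illustrative implementation, not necessarily identical to the one used to produce our figures.)
\begin{verbatim}
import numpy as np

def dfa(returns, window_sizes):
    y = np.cumsum(returns - np.mean(returns))    # integrated (profile) series
    f = []
    for s in window_sizes:
        n_win = len(y) // s
        rms = []
        for k in range(n_win):
            seg = y[k * s:(k + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear trend
            rms.append(np.mean((seg - trend) ** 2))
        f.append(np.sqrt(np.mean(rms)))
    h = np.polyfit(np.log(window_sizes), np.log(f), 1)[0]  # Hurst exponent
    return np.array(f), h
\end{verbatim}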
Figure 2 (a) and (b) show that, for a given ratio of the investors with reference point strategies $\rho_{ref}$=0, the root mean square of the detrended series $F$ is closely related to the memory size $M$. An increase in $M$ leads to an overall decrease in $F$ within the whole range of $S\ge 2$.
Figure 2 (c) and (d) show that, for a given memory size $M$=3, the root mean square of the detrended series $F$ is closely related to the ratio of the investors with reference point strategies $\rho_{ref}$. An increase in $\rho_{ref}$ leads to an overall decrease in $F$ within the whole range of $S\ge 2$. Comparing the results in figure 2(a)-(d), we find that the values of the Hurst exponent $h$ are nearly the same for different $M$ and $\rho_{ref}$, namely $h\sim 1$.
Such results indicate that the DFA of returns and the DFA of absolute returns are closely related to the memory size $M$ and the ratio of the investors with reference point strategies $\rho_{ref}$. Compared with the situation where there is a small $M$ and a small $\rho_{ref}$, a large $M$ and an intermediate $\rho_{ref}$ lead to an overall decrease in $F$. There exist long-range autocorrelations of price returns for different $M$ and $\rho_{ref}$.
\subsection{\label{subsec:level}Competition between pair pattern strategies and reference point strategies}
In the present model, we are especially concerned about how the heterogeneities in investment strategies affect the price fluctuations. In the following, we firstly examine how the coupling of the memory size of the investors with pair pattern strategies $M$, the maximum gene of the investors with reference point strategies $g^{max}$ and the ratio of the investors with reference point strategies $\rho_{ref}$ affects the time-dependent behaviors of the stock prices.
\begin{figure}
\includegraphics[width=13cm]{fig3
\caption{\label{fig:epsart}The time-dependent price $P$ (a)for the ratio of the investors with reference point strategies $\rho_{ref}$=0 and the memory size $M=2$ (slashes), 5 (slash dotted lines); (b)for $\rho_{ref}$=1 and the maximum gene of the investors with reference point strategies $g^{max}$=100 (circles),1000 (squares); (c) for $M=3$, $g^{max}=1000$ and $\rho_{ref}$=0.1(slashes), 0.5(slash dotted lines). Other parameters are: total population $N=1000$, the number of strategies for each investor with pair pattern strategies $n_s$=2, the averaged time $\Delta$ =10, maximum and minimum stocks for each investor $K_{max}$=1 and $K_{min}$=-1, constant $\alpha$=10.}
\end{figure}
Figure 3 (a) shows that, as all the investors adopt pair pattern strategies, the price movement is closely related to the memory size $M$. For $M$=2, the price has a large fluctuation. For $M=5$, the price fluctuation becomes mild. In the present model, a large $M$ implies that the investors with pair pattern strategies have more candidate investment strategies to choose from. Given a large $M$ and $n_s=2$, it is quite possible that all the investors have different investment strategies. The results in figure 3 (a) imply that the heterogeneities in pair pattern strategies suppress the price fluctuations.
Figure 3 (b) shows that, as all the investors adopt reference point strategies, the price movement is closely related to the maximum gene $g^{max}$. For $g^{max}=100$, the price has a zigzag fluctuation. For $g^{max}=1000$, the price becomes stable. In the present model, a large $g^{max}$ implies that the genes of the investors with reference point strategies scatter about a large range of $0<g<g^{max}$. Given a large $g^{max}\sim N$, it is quite possible that all the investors have different genes. The results in figure 3 (b) imply that the heterogeneities in genes suppress the price fluctuations.
Figure 3 (c) shows that, as the investors with pair pattern strategies coexist with the investors with reference point strategies, the price movement is closely related to the ratio of the investors with pair pattern strategies and reference point strategies. An increase in the ratio of the investors with reference point strategies leads to a decrease in the price fluctuations. Compared with the situations in figure 3 (a) and (b), we find that a large $M$ coupled with a large $g^{max}$ can inhibit not only the occurrence of large price fluctuations but also the occurrence of no-trading states.
Such results indicate that the heterogeneities in investment strategies and individual genes have a great impact on the evolution of prices. A homogeneous population promotes price fluctuations while a heterogeneous population suppresses price fluctuations. In the heterogeneous environment where the investors with pair pattern strategies coexist with the investors with reference point strategies, the pair pattern strategies drive the system away from the equilibrium while the reference point strategies draw the system back to the equilibrium. Heterogeneities in the present model are beneficial for stabilizing the stock prices.
\begin{figure}
\includegraphics[width=10cm]{fig4
\caption{\label{fig:epsart}The standard deviation of stock prices $\sigma_P$ (a) as a function of the memory size $M$ for the maximum gene of the investors with reference point strategies $g^{max}=1000$ and the ratio of the investors with reference point strategies $\rho_{ref}$=0 (circles), 0.8 (squares); (b) as a function of $g^{max}$ for $M=3$ and $\rho_{ref}$=1 (circles), 0.8 (squares). Other parameters are: total population $N=1000$, number of strategies for each investor with pair pattern strategy $n_s$=2, the averaged time $\Delta$ =10, maximum and minimum stocks for each investor $K_{max}$=1 and $K_{min}$=-1, constant $\alpha$=10. Final data are obtained by averaging over 100 runs and $10^4$ times after $10^5$ relaxation times in each run.}
\end{figure}
In order to get a clear view of the relationship between the price fluctuations and the heterogeneities in pair pattern strategies and individual genes, in figure 4 (a) and (b) we plot the standard deviation of stock prices $\sigma_P$ as a function of the memory size $M$ and the maximum gene $g^{max}$ respectively.
Figure 4 (a) shows that, as all the investors adopt pair pattern strategies, the price fluctuations are determined by the memory size $M$. As $M$ increases from $M=2$ to $M=10$, the standard deviation of stock prices $\sigma_P$ decreases from $\sigma_P\sim 22$ to $\sigma_P\sim 0.1$. An increase in the ratio of the investors with reference point strategies $\rho_{ref}$ leads to an overall decrease of $\sigma_P$ within the whole range of $2\le M\le 10$. An increase in $\rho_{ref}$ does not affect the changing tendency of $\sigma_P$ vs $M$.
Figure 4 (b) shows that, as all the investors adopt reference point strategies, the price fluctuations are determined by the maximum gene $g^{max}$. There exist two critical points $g^{max}_{c1}\sim 500$ and $g^{max}_{c2}\sim 1100$. As $g^{max}$ increases from $g^{max}=1$ to $g^{max}=500$, $\sigma_P$ stays at a fixed value of $\sigma_P\sim 2\times 10^5$. As $g^{max}$ increases from $g^{max}=500$ to $g^{max}=1100$, $\sigma_P$ drops quickly from $\sigma_P\sim 2\times 10^5$ to $\sigma_P\sim 0.01$. As $g^{max}$ increases from $g^{max}=1100$ to $g^{max}=1800$, $\sigma_P$ remains at $\sigma_P\sim 0.01$. A decrease in the ratio of the investors with reference point strategies $\rho_{ref}$ leads to a decrease in the critical points $g^{max}_{c1}$ and $g^{max}_{c2}$. Within the range of $1\le g^{max}\le g^{max}_{c2}$, a decrease in $\rho_{ref}$ leads to an overall decrease of $\sigma_P$. Within the range of $g^{max}>g^{max}_{c2}$, a decrease in $\rho_{ref}$ leads to an overall increase of $\sigma_P$. A decrease in $\rho_{ref}$ does not affect the changing tendency of $\sigma_P$ vs $g^{max}$.
Such results indicate that both heterogeneous investment strategies and heterogeneous individual genes have a great impact on the price movement. For the system with either pair pattern strategies or reference point strategies, the heterogeneity suppresses the price fluctuations. For the system with the coexistence of pair pattern strategies and reference point strategies, the price fluctuations may be promoted within some parameter ranges and suppressed within others.
\begin{figure}
\includegraphics[width=10cm]{fig5
\caption{\label{fig:epsart}The predictability $H$ of stock prices (a)as a function of the memory size $M$ for the maximum gene of the investors with reference point strategies $g^{max}=1000$ and the ratio of the investors with reference point strategies $\rho_{ref}$=0 (circles), 0.8 (squares); (b)as a function of $g^{max}$ for $M$=3 and $\rho_{ref}$=1 (circles), 0.8 (squares). Other parameters are: total population $N=1000$, number of strategies for each investor with pair pattern strategies $n_s$=2, the averaged time $\Delta$ =10, maximum and minimum stocks for each investor $K_{max}$=1 and $K_{min}$=-1, constant $\alpha$=10. Final data are obtained by averaging over 100 runs and $10^4$ times after $10^5$ relaxation times in each run.}
\end{figure}
In a predictable market, people usually trade according to the historical information, which may lead to common behaviors. In the present model, even if people have the same historical information, their heterogeneous strategies make it quite possible that such common behaviors are inhibited. In order to examine whether the reduction of price fluctuations in the present model results from the reduction of the predictability of price movement, in figure 5 (a) and (b) we plot the predictability of stock prices $H$ as a function of the memory size $M$ and the maximum gene $g^{max}$ respectively.
Figure 5 (a) shows that, as all the investors adopt pair pattern strategies, the predictability of stock prices $H$ is closely related to the memory size $M$. As $M$ increases from $M=2$ to $M=10$, $H$ decreases from $H\sim 6.8$ to $H\sim 0.05$. An increase in the ratio of the investors with reference point strategies $\rho_{ref}$ leads to an overall decrease in $H$ within the whole range of $2\le M\le10$. An increase in $\rho_{ref}$ does not affect the changing tendency of $H$ vs $M$. Comparing the results in figure 4 (a) with the results in figure 5 (a), we find that a more predictable market has a larger price fluctuation.
Figure 5 (b) shows that, as all the investors adopt reference point strategies, the predictability of stock prices $H$ is closely related to the maximum gene $g^{max}$. There exist two critical points $g^{max}_{c1}\sim 500$ and $g^{max}_{c2}\sim 1100$. As $g^{max}$ increases from $g^{max}=1$ to $g^{max}=500$, $H$ stays at a fixed value of $H\sim 1\times 10^5$. As $g^{max}$ increases from $g^{max}=500$ to $g^{max}=1100$, $H$ drops quickly from $H\sim 1\times 10^5$ to $H\sim 0.002$. As $g^{max}$ increases from $g^{max}=1100$ to $g^{max}=1800$, $H$ remains at $H\sim 0.002$. A decrease in the ratio of the investors with reference point strategies $\rho_{ref}$ leads to a decrease in the critical points $g^{max}_{c1}$ and $g^{max}_{c2}$. Within the range of $1\le g^{max}\le g^{max}_{c2}$, a decrease in $\rho_{ref}$ leads to an overall decrease of $H$. Within the range of $g^{max}>g^{max}_{c2}$, a decrease in $\rho_{ref}$ leads to an overall increase of $H$. A decrease in $\rho_{ref}$ does not affect the changing tendency of $H$ vs $g^{max}$. Comparing the results in figure 4 (b) with the results in figure 5 (b), we find that a more predictable market has a larger price fluctuation.
Such results indicate that the predictability of price movement is closely related to the heterogeneities in investment strategies and individual genes. For the system with either pair pattern strategies or reference point strategies, the heterogeneities are quite likely to make the market unpredictable, similar to the situation in an efficient market. For the system with the coexistence of pair pattern strategies and reference point strategies, the price movement may become more predictable within some parameter ranges and more unpredictable within others.
\begin{figure}
\includegraphics[width=10cm]{fig6
\caption{\label{fig:epsart}(a)The averaged wealth of the investors with pair pattern strategies $\bar W_{pair}$ as a function of the memory size $M$ for the maximum gene of the investors with reference point strategies $g^{max}=1000$ and the ratio of the investors with reference point strategies $\rho_{ref}$=0 (circles), 0.8 (squares); (b) $\bar W_{pair}$ as a function of $g^{max}$ for $M=3$ and $\rho_{ref}$= 0.8 (squares); (c)the averaged wealth of the investors with reference point strategies $\bar W_{ref}$ as a function of $M$ for $g^{max}=1000$ and $\rho_{ref}$= 0.8 (squares); (d) $\bar W_{ref}$ as a function of $g^{max}$ for $M=3$ and $\rho_{ref}$=1 (circles), 0.8 (squares). Other parameters are: total population $N=1000$,the number of strategies for each investor with pair pattern strategies $n_s$=2, the averaged time $\Delta$ =10, maximum and minimum stocks for each investor $K_{max}$=1 and $K_{min}$=-1, constant $\alpha$=10. Final data are obtained by averaging over 100 runs and $10^4$ times after $10^5$ relaxation times in each run.}
\end{figure}
In order to find a competitive strategy in the present model, in figure 6 (a) - (d) we plot the averaged wealth $\bar W$ of the investors with pair pattern strategies and reference point strategies respectively.
Figure 6 (a) and (c) show that, as all the investors adopt pair pattern strategies, the wealth of the investors with pair pattern strategies $\bar W_{pair}$ is closely related to the memory size $M$. There exists a critical point $M_c=4$. As $M$ increases from $M=2$ to $M=4$, $\bar W_{pair}$ increases quickly from $\bar W_{pair}\sim -2.4\times 10^4$ to $\bar W_{pair}\sim -479$. As $M$ increases from $M=4$ to $M=10$, $\bar W_{pair}$ increases slowly from $\bar W_{pair}\sim -479$ to $\bar W_{pair}\sim -6.3$. An increase in the ratio of the investors with reference point strategies does not affect the critical point $M_c=4$. As $M$ increases from $M=2$ to $M=4$, $\bar W_{pair}$ increases from $\bar W_{pair}\sim -3.2\times 10^4$ to $\bar W_{pair}\sim -224$ while the wealth of the investors with reference point strategies $\bar W_{ref}$ firstly increases from $\bar W_{ref}\sim -367$ to $\bar W_{ref}\sim 191$ and then drops from $\bar W_{ref}\sim 191$ to $\bar W_{ref}\sim -1$. As $M$ increases from $M=4$ to $M=10$, $\bar W_{pair}$ increases from $\bar W_{pair}\sim -224$ to $\bar W_{pair}\sim -20$ while $\bar W_{ref}$ decreases from $\bar W_{ref}\sim -1$ to $\bar W_{ref}\sim -16$. Within the range of $M\le M_c$, the coexistence of pair pattern strategies and reference point strategies is beneficial for the investors with reference point strategies.
Figure 6 (b) and (d) show that, as all the investors adopt reference point strategies, the wealth of the investors with reference point strategies $\bar W_{ref}$ is closely related to the maximum gene $g^{max}$. There exist two critical points $g^{max}_{c1}\sim 500$ and $g^{max}_{c2}\sim 700$. Within the range of $g^{max}<500$, $\bar W_{ref}$ fluctuates within the range of $\bar W_{ref}\sim -7\times 10^9$. As $g^{max}$ increases from $g^{max}=500$ to $g^{max}=700$, $\bar W_{ref}$ increases from $\bar W_{ref}\sim -7\times 10^9$ to $\bar W_{ref}\sim -3\times 10^7$. As $g^{max}$ increases from $g^{max}=700$ to $g^{max}=1800$, $\bar W_{ref}$ increases from $\bar W_{ref}\sim -3\times 10^7$ to $\bar W_{ref}\sim -3.8$.
A decrease in the ratio of the investors with reference point strategies $\rho_{ref}$ leads to a decrease in the critical points $g^{max}_{c1}$ and $g^{max}_{c2}$. For $\rho_{ref}$=0.8, as $g^{max}$ increases from $g^{max}=1$ to $g^{max}=300$, the wealth of the investors with pair pattern strategies $\bar W_{pair}$ fluctuates around $\bar W_{pair}\sim 8\times 10^7$ while the wealth of the investors with reference point strategies $\bar W_{ref}$ fluctuate within the range of $\bar W_{ref}\sim -8\times 10^8$. As $g^{max}$ increases from $g^{max}=300$ to $g^{max}=500$, $\bar W_{pair}$ decreases from $\bar W_{pair}\sim 8\times 10^7$ to $\bar W_{pair}\sim -5\times 10^4$ while $\bar W_{ref}$ increases from $\bar W_{ref}\sim -8\times 10^8$ to $\bar W_{ref}\sim -2\times 10^5$. As $g^{max}$ increases from $g^{max}=500$ to $g^{max}=1800$, $\bar W_{pair}$ increases from $\bar W_{pair}\sim -5\times 10^4$ to $\bar W_{pair}\sim -3\times 10^3$ while $\bar W_{ref}$ increases from $\bar W_{pair}\sim -2\times 10^5$ to $\bar W_{pair}\sim -2\times 10^3$. Within the range of $g^{max}\le g^{max}_{c2}$, the coexistence of pair pattern strategies and reference point strategies is beneficial for the investors with pair pattern strategies.
Comparing the results in Figure 6 (a) and (c) with the results in Figure 6 (b) and (d), we find that the coexistence of the investors with pair pattern strategies and reference point strategies is not always good for both sides. In some cases the investors with reference point strategies may defeat the investors with pair pattern strategies and in other cases the investors with pair pattern strategies may defeat the investors with reference point strategies.
\section{Theoretical analysis}
\label{sec:analysis}
\subsection{\label{subsec:levelA} Relationship between price fluctuations and heterogeneities in pair pattern strategies and risk tolerance}
\begin{figure}
\includegraphics[width=13cm]{fig7
\caption{\label{fig:epsart}The time-dependent price $P$ for (a) $\rho_{ref}$=0 and $M$=1(circles), 3(squares); (b) $\rho_{ref}$=0.5, $g^{max}=100$ and $M$=1(circles), 3(squares); (c) $\rho_{ref}$=0.5, $g^{max}=1000$ and $M$=1(circles), 3(squares); (d) $\rho_{ref}$=1 and $g^{max}$=100(circles), 1000(squares). Other parameters are: total population $N=1000$, number of strategies for each investor with pair pattern strategy $n_s$=2, the averaged time $\Delta$ =10, maximum and minimum number of stocks for each investor $K_{max}$=1 and $K_{min}$=-1, constant $\alpha$=10.}
\end{figure}
The price fluctuation is determined by the difference in the number of investors buying and selling the stocks, which satisfies the equation
\begin{equation}
\ln\frac{P(t)}{P(t-1)}=\frac{\alpha \Delta N}{N}.
\end{equation}
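As an illustration (not part of the model specification), the price update implied by this equation can be written as a short Python sketch; the function and variable names here are ours.
\begin{verbatim}
import numpy as np

def update_price(P_prev, delta_N, N, alpha=10.0):
    # ln(P(t)/P(t-1)) = alpha * delta_N / N, with delta_N the
    # difference between the number of buyers and sellers.
    return P_prev * np.exp(alpha * delta_N / N)

# Example: N = 1000 investors and a net imbalance of 3 extra
# buyers raise the price by about 3%.
P = update_price(100.0, delta_N=3, N=1000)
\end{verbatim}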
On condition that all the investors adopt pair pattern strategies, for a given population $N$, the heterogeneity in pair pattern strategies is determined by the memory size $M$. For $M=1$, the total number of pair pattern strategies is $n_{pair}=2^M=2$, i.e. $s_1$=(0,1) and $s_2$=(1,0), which means that, facing an increase in the latest price, $s_1$ tells the investors to sell and $s_2$ tells the investors to buy. Facing a decrease in the latest price, $s_1$ tells the investors to buy and $s_2$ tells the investors to sell. Therefore, facing a typical change in the latest price, the difference in the number of investors buying and selling the stocks should be
\begin{equation}
\mid\Delta N\mid=\mid N_{s_1}-N_{s_2} \mid,
\end{equation}
in which $N_{s_1}$ and $N_{s_2}$ are the numbers of investors buying and selling the stocks, respectively.
For quite a large $M$, e.g. $M=10$, the number of combinations of the $M$ latest price changes is $2^M=1024$. The total number of pair pattern strategies is $n_{pair}=C_{1024}^1C_{1023}^1=1024\times 1023$. For a given $N=1000$, facing a typical combination of the latest $M$ price changes, the difference in the number of investors buying and selling the stocks should be
\begin{equation}
\mid\Delta N\mid\sim 1.
\end{equation}
On condition that all the investors adopt reference point strategies, for a given population $N$, the heterogeneity in reference point strategies is determined by the maximum value of risk tolerance $g^{max}$. Facing the latest price $P(t-1)$, the difference in the number of investors buying and selling the stocks should be
\begin{equation}
\mid\Delta N\mid=\mid N_{P^{ref}>P}-N_{P^{ref}<P} \mid,
\end{equation}
in which $N_{P^{ref}>P}$ and $N_{P^{ref}<P}$ are the numbers of investors buying and selling the stocks, respectively.
For quite a small $g^{max}$, e.g. $g^{max}=1$, all the reference points are within a small range
\begin{equation}
\frac {\bar P}{e}\le P^{ref}\le e\bar P,
\end{equation}
in which $\bar P$ is the average value of the stock prices in the latest $\Delta t$ steps. For quite a large $g^{max}$, e.g. $g^{max}=N$, all the reference points are within a wide range
\begin{equation}
\frac {\bar P}{e^N}\le P^{ref}\le e^N\bar P.
\end{equation}
Given a typical $\Delta N=N_{P^{ref}>P}-N_{P^{ref}<P}=2$, for $g^{max}=1$, $\Delta N$ is likely to become $\Delta N=N_{P^{ref}>P}-N_{P^{ref}<P}=-2$ in the next step because nearly all the investors' reference points lie within a small range around $P(t-1)$. For $g^{max}=N$, $\Delta N$ is likely to become $\Delta N=N_{P^{ref}>P}-N_{P^{ref}<P}=1$ or $\Delta N=N_{P^{ref}>P}-N_{P^{ref}<P}=0$ in the next step because the investors' reference points are scattered over a wide range around $P(t-1)$.
On condition that the investors with pair pattern strategies coexist with the investors with reference point strategies, the price fluctuation is determined by the coupling of the heterogeneities in investment strategies and risk tolerance. In a heterogeneous population, the existence of the investors with pair pattern strategies helps keep the investors with reference point strategies away from a no-trading state, while the existence of the investors with reference point strategies helps keep the investors with pair pattern strategies away from large fluctuations. The price has the characteristics of a random walk that slowly fluctuates around an equilibrium state.
The above analyses indicate that, for a small $M$ and a small $g^{max}$, a zigzag price fluctuation is more likely to occur. For a large $M$ and a large $g^{max}$, a slow price fluctuation resembling a random walk is more likely to occur. The theoretical analysis is in accordance with the simulation data in Figure 7.
\subsection{\label{subsec:levelB} Competition between pair pattern strategies and reference point strategies}
The wealth of the investors with pair pattern strategies and reference point strategies is determined by the difference between the buying price and the selling price.
\begin{equation}
\bar W=\sum (P_{sell}-P_{buy}).
\end{equation}
The price change $\Delta P=P(t)-P(t-1)=\sum a_{pair}+ \sum a_{ref}$. If $\vert\sum a_{pair}\vert\gg\vert\sum a_{ref}\vert$, the price change is determined by the investors with pair pattern strategies. If $\vert\sum a_{pair}\vert\ll\vert\sum a_{ref}\vert$, the price change is determined by the investors with reference point strategies.
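To make the wealth bookkeeping concrete, the following is a minimal Python sketch of the realized wealth $\bar W=\sum (P_{sell}-P_{buy})$ over a sequence of completed trades; the function name and the example prices are illustrative and not taken from our simulations.
\begin{verbatim}
def realized_wealth(buy_prices, sell_prices):
    # W = sum over round-trip trades of (selling price - buying price).
    return sum(ps - pb for pb, ps in zip(buy_prices, sell_prices))

# Buying high and selling low yields negative wealth:
W = realized_wealth(buy_prices=[102.0, 105.0],
                    sell_prices=[101.0, 103.0])   # W = -3.0
\end{verbatim}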
\begin{figure}
\includegraphics[width=13cm]{fig8}
\caption{\label{fig:epsart}The averaged wealth of the investors with pair pattern strategies (circles) and reference point strategies (squares) for (a) $M=1$, $g^{max}=100$; (b) $M=1$, $g^{max}=1000$; (c) $M=3$, $g^{max}=100$; (d) $M=3$, $g^{max}=1000$. Other parameters are: total population $N=1000$, the ratio of the investors with reference point strategies $\rho_{ref}=0.5$, number of strategies for each investor with pair pattern strategies $n_s=2$, the averaging time $\Delta=10$, maximum and minimum number of stocks for each investor $K_{max}=1$ and $K_{min}=-1$, constant $\alpha=10$. Final data are obtained by averaging over 100 runs and $10^4$ time steps after $10^5$ relaxation steps in each run.}
\end{figure}
On condition that only the investors with pair pattern strategies exist, facing a typical combination of the $M$ latest price changes, if there are more buyers than sellers, the price increases. More investors buy the stock at the higher price than sell at the higher price. Facing another typical combination of the $M$ latest price changes, if there are more sellers than buyers, the price decreases. More investors sell the stock at the lower price than buy at the lower price. Therefore, buying high and selling low lead to the negative wealth of the investors with pair pattern strategies.
On condition that only the investors with reference point strategies exist, facing the latest price $P(t-1)$, if there are more buyers than sellers, the price increases. More investors buy the stock at the higher price than sell at the higher price. Facing the latest price $P(t-1)$, if there are more sellers than buyers, the price decreases. More investors sell the stock at the lower price than buy at the lower price. Therefore, buying high and selling low lead to the negative wealth of the investors with reference point strategies.
On condition that the investors with pair pattern strategies coexist with the investors with reference point strategies, the price movement is determined by the strategy governing the moving trend of the price, no matter whether the strategy is pair pattern strategy or reference point strategy. The investors in the minority side gain more than the investors in the majority side.
For a small memory size $M$ and a small maximum gene $g^{max}$, the moving pattern is closely related to the ratio of the investors with pair pattern strategies and reference point strategies. For a small and an intermediate $\rho_{ref}$, because a small $g^{max}$ has a greater impact on the price movement than a small $M$ and a large $\rho_{pair}$ has a greater impact on the price movement than a small $\rho_{ref}$, the moving trend of the stock prices is governed by both the investors with pair pattern strategies and reference point strategies. The wealth of the investors with pair pattern strategies is similar to the wealth of the investors with reference point strategies. For a large $\rho_{ref}$, the price movement is governed by the investors with reference point strategies, which means that, if the investors with reference point strategies buy more, the price increases. If the investors with reference point strategies sell more, the price decreases. The majority choice of the investors with reference point strategies determines the moving trend of the price. Therefore, buying high and selling low lead to the negative wealth of the investors with reference point strategies. For the investors with pair pattern strategies, if their buying and selling behaviors are similar to those of the investors with reference point strategies, buying high and selling low lead to their negative wealth. If their buying and selling behaviors differ from those of the investors with reference point strategies, buying low and selling high lead to their positive wealth. Therefore, compared with the wealth of the investors with reference point strategies, the wealth of the investors with pair pattern strategies is likely to be positive.
For a small memory size $M$ and a large maximum gene $g^{max}$, because a large $g^{max}$ has little impact on the price movement, the moving trend of the price is determined by the investors with pair pattern strategies. If the investors with pair pattern strategies buy more, the price increases. If the investors with pair pattern strategies sell more, the price decreases. Therefore, buying high and selling low lead to the negative wealth of the investors with pair pattern strategies. For the investors with reference point strategies, if their buying and selling behaviors are similar to those of the investors with pair pattern strategies, buying high and selling low lead to their negative wealth. If their buying and selling behaviors differ from those of the investors with pair pattern strategies, buying low and selling high lead to their positive wealth. Therefore, compared with the wealth of the investors with pair pattern strategies, the wealth of the investors with reference point strategies is likely to be positive.
For an intermediate memory size $M$ and a small maximum gene $g^{max}$, similar to the situation with a small memory size $M$ and a small maximum gene $g^{max}$, the moving pattern is closely related to the ratio of the investors with pair pattern strategies and reference point strategies $\rho_{ref}$. For a small and an intermediate $\rho_{ref}$, because a small $g^{max}$ has a greater impact on the price movement than an intermediate $M$ and a large $\rho_{pair}$ has a greater impact on the price movement than a small $\rho_{ref}$, the moving trend of the price is governed by both the investors with pair pattern strategies and reference point strategies. The wealth of the investors with pair pattern strategies is similar to the wealth of the investors with reference point strategies. For a large $\rho_{ref}$, the price movement is governed by the investors with reference point strategies, which means that, if the investors with reference point strategies buy more, the price increases. If the investors with reference point strategies sell more, the price decreases. The majority choice of the investors with reference point strategies determines the moving trend of the price. Therefore, buying high and selling low lead to the negative wealth of the investors with reference point strategies. For the investors with pair pattern strategies, if their buying and selling behaviors are similar to those of the investors with reference point strategies, buying high and selling low lead to their negative wealth. If their buying and selling behaviors differ from those of the investors with reference point strategies, buying low and selling high lead to their positive wealth. Therefore, compared with the wealth of the investors with reference point strategies, the wealth of the investors with pair pattern strategies is likely to be positive.
For an intermediate memory size $M$ and a large maximum gene $g^{max}$, similar to the situation with a small memory size $M$ and a large maximum gene $g^{max}$, because a large $g^{max}$ has little impact on the price movement, the moving trend of the price is determined by the investors with pair pattern strategies. If the investors with pair pattern strategies buy more, the price increases. If the investors with pair pattern strategies sell more, the price decreases. Therefore, buying high and selling low lead to the negative wealth of the investors with pair pattern strategies. For the investors with reference point strategies, if their buying and selling behaviors are similar to those of the investors with pair pattern strategies, buying high and selling low lead to their negative wealth. If their buying and selling behaviors differ from those of the investors with pair pattern strategies, buying low and selling high lead to their positive wealth. Therefore, compared with the wealth of the investors with pair pattern strategies, the wealth of the investors with reference point strategies is likely to be positive.
The above analyses indicate that the strategy that drives the system far away from the equilibrium loses more while the strategy that draws the system back to the equilibrium gains more. The theoretical analysis is in accordance with the simulation data in Figure 8.
\section{Summary}
\label{sec:summary}
In an efficient market, the price movement fully reflects good or bad news about the related companies. However, in real societies, in the face of common information, the price movement often depends upon the characteristics of the investors. Homogeneous and heterogeneous populations have entirely different effects on the price movement.
By incorporating pair pattern strategies and reference point strategies into a trading model, we have investigated the coupled effects of heterogeneous investment strategies and heterogeneous risk tolerance on price movement. In the stock market flooded with the investors with pair pattern strategies, homogeneous investment strategies lead to the occurrence of large price fluctuations. An increase in the heterogeneity in investment strategies leads to a decrease in price fluctuations. In the stock market flooded with the investors with reference point strategies, the dispersion of individual genes determines the characteristics of price fluctuations. When the investors have similar genes, large price fluctuations are likely to occur. When the individual genes are well diversified, the price fluctuations can be effectively suppressed, but the market is likely to become stuck in a no-trading state. The coexistence of pair pattern strategies and reference point strategies not only helps the stock market refrain from large fluctuations but also keeps it from being stuck in a no-trading state. In the stock market flooded with the investors with heterogeneous investment strategies and heterogeneous individual genes, the investors with pair pattern strategies push the stock market away from the equilibrium while the investors with reference point strategies pull the stock market back to the equilibrium.
The role of heterogeneities in price movement is an important issue for understanding the evolutionary dynamics of financial markets. In the future, people's heterogeneous cognitive behaviors will be further considered in the investigation of the evolutionary dynamics of socioeconomic systems. What the differences between a homogeneous environment and a heterogeneous environment are, and how these differences affect the evolution of socioeconomic systems, are questions of particular interest to us.
\section*{Acknowledgments}
This work is supported by the Social Science Foundation of Zhejiang Province, the National Social Science Foundation of China (Grant No. 20BJL147), the Humanities and Social Sciences Fund sponsored by the Ministry of Education of China (Grant Nos. 19YJAZH120, 17YJAZH067), and the National Natural Science Foundation of China (Grant Nos. 71371165, 11865009, 71871094, 71631005, 71773105). We thank Professor Yi-Cheng Zhang for his suggestions and discussions about the reference point mechanisms.
\bibliographystyle{model1-num-names}
\section{INTRODUCTION}
Planning and control with guarantees on safety and reachability for systems with unknown dynamics has long been sought-after in the robotics and control community. Model-based optimal control can achieve this if the dynamics are precisely modeled, but modeling assumptions inevitably break down when applied to real physical systems due to unmodeled effects from friction, slip, flexing, etc. To account for this gap, data-driven machine learning methods and robust control seek to sidestep the need to precisely model the dynamics \textit{a priori}. While robust control can provide strong guarantees when the unmodeled component of the dynamics is small and satisfies strong structural assumptions \cite{zhou1998essentials}, such methods require an accurate prior which may not be readily available. In contrast, machine learning methods are flexible but often lack formal guarantees, precluding their use in safety-critical applications.
For instance, small perturbations from training data cause drastically poor and costly predictions in stock prices and power consumption \cite{mode2020adversarial}. Since even small perturbations from the training distribution can yield untrustworthy results, applying AI systems to predict dynamics can lead to unsafe, unpredictable behavior.%
To address this gap, we propose a method for planning with learned dynamics which yields probabilistic guarantees on safety, reachability, and goal invariance in execution on the true system. Our core insight is that we can determine where a learned model can be trusted for planning using the Lipschitz constant of the error (the difference between the true and learned dynamics), which also informs how well the training data covers the task-relevant domain. Under the assumption of deterministic true dynamics, we can plan trajectories in this trusted domain with strong safety guarantees for an important class of learned dynamical systems.
Specifically, with a Lipschitz constant, we can bound the difference in dynamics between a novel point (that our model was not trained on) and a training point. Since the bound grows with the distance to training points, we can naturally define a domain where the model can be trusted as the set of points within a certain distance to training points.
Conversely, to obtain a small bound over a desired domain, it is necessary to have good training data coverage in the task-relevant domain. At a high level, to obtain a small bound on the error in a domain, we want to have good coverage over the domain and regularity of the learned model via the Lipschitz constant of the error.
Our safety and reachability guarantees ultimately rely on an overestimate of the smallest Lipschitz constant. To find an estimate that exceeds the smallest Lipschitz constant with a given probability $\rho$, we use a statistical approach based on Extreme Value Theory \cite{de2007extreme} and validate its result with a Kolmogorov-Smirnov goodness-of-fit test \cite{degroot2013probability}. If the test validates our estimate, we can choose a confidence interval \cite{degroot2013probability} with an upper bound that overestimates the true Lipschitz constant with probability $\rho$. Our method requires the estimation of three Lipschitz constants, translating to system safety and reachability guarantees which hold with a probability of at least $\rho^3$. This guarantee is fairly strong as it holds for all time, unlike many methods offering probabilistic guarantees on a per trajectory or episode basis \cite{akametalu2014reachability} \cite{berkenkamp2017safe} \cite{van2011lqg}.
If the learned dynamics have at least as many controls as states and are control-affine
(note we do not assume the true dynamics are also control-affine), then we also determine conditions for the existence of a feedback controller that tightly tracks the planned trajectory in execution under the true dynamics.
The tight tracking error bound yields favorable properties for our planner and controller: if we have a valid Lipschitz constant estimate for a sufficiently-accurate learned model, 1) we guarantee safety if no obstacle is within the tracking error of the trajectory, 2) we guarantee we can reach the goal within a small tolerance, and 3) if we can assert a feedback law that keeps the system at the goal exists, then the closed-loop system is guaranteed to remain in a small region around the goal. In this paper, we assume the learned dynamics are control-affine, deterministic, and have at least as many controls as states (such as a robotic arm under velocity control), the true dynamics are deterministic, and that independent samples of the true dynamics can be taken in the domain of interest.
Our contributions are:
\begin{enumerate}
\item A method to bound error between two general dynamics functions in a domain by using a Lipschitz constant
\item A condition for uncertain control-affine systems that guarantees the existence of a feasible feedback law
\item A planner that probabilistically guarantees safety and closed-loop stability-like properties about the goal for learned dynamics with as many controls as states
\item Evaluation on a 7DOF Kuka arm and a 6D quadrotor
\end{enumerate}
\section{RELATED WORK}
Prior work has used data coverage and Lipschitz constant regularization to ensure properties of a learned function. \cite{dean2020robust} shows that a linearization of a nonlinear state estimator generalizes with bounded error by estimating the maximum slope (related to the Lipschitz constant) of the error. \cite{robey2020learning} learns a control barrier function (CBF) from data, ensuring its validity through Lipschitz constant regularity and by checking the CBF conditions at a finite set of points.
In contrast, we apply and extend these ideas to plan with learned dynamics, using the Lipschitz constant of the error dynamics to provide safety, reachability, and stability-like guarantees.
More broadly, our work is related to methods for planning and control of unknown dynamics with performance guarantees. A traditional approach is robust control, which assumes a good prior on the true dynamics and that unmodeled components are tightly bounded in some set \cite{zhou1998essentials}.
Robust control has been applied in model predictive control \cite{aswani2013provably} and Hamilton-Jacobi (HJ) reachability analysis \cite{mitchell2005time}.
While these methods have strong guarantees, the assumption that the unmodeled dynamics can be tightly bounded
requires an accurate prior whereas our method does not, by actively keeping both planned and executed trajectories in a domain where the model can be trusted without \textit{a priori} knowledge.
Other methods use Gaussian Processes (GPs) to estimate the mean and covariance of the dynamics, providing probabilistic bounds on safety and reachability. For example, \cite{koller2018learning} probabilistically bounds the reachable set of a fixed horizon trajectory.
Similarly \cite{akametalu2014reachability} explores the environment while ensuring (with some probability) safety via HJ reachability analysis.
In many contexts, GPs are used to derive confidence bounds that can provide probabilistic safety guarantees \cite{berkenkamp2016safe} \cite{berkenkamp2017safe}. These methods can model dynamics with stochasticity, which our method cannot handle. However, the GP-based methods are incapable of long-horizon planning due to the
unbounded growth of the covariance ellipse unless a known feedback controller exists. Our method does not require any prior controller, and
we can plan trajectories of arbitrary length without unbounded growth of the reachable set.
Other work performs long-horizon planning with learned models without safety or reachability guarantees. \cite{ichter2019robot} plans in learned latent spaces.
\cite{guzzi2020path} estimates the confidence that a controller can move between states to guide planning.
\cite{mcconachie2020learning} learns when a reduced-order model can be used.
Unlike \cite{ichter2019robot}-\cite{mcconachie2020learning}, our method provides safety and reachability guarantees.
Our safety guarantees rely on proving the existence of a stabilizing feedback controller in execution, like LQR-trees \cite{tedrake2009lqr}, funnel libraries \cite{MajumdarT17}, and LQG-MP \cite{van2011lqg}. Unlike these approaches, our method requires no \textit{a priori} model and can prove a feedback law exists without structural assumptions on the true dynamics (e.g. that they are polynomial).
\section{PRELIMINARIES}
Let $f: \mathcal{X} \times \mathcal{U} \rightarrow \mathcal{X}$ be the true unknown discrete-time dynamics where $\mathcal{X}$ is the state space and $\mathcal{U}$ is the control space, which we assume are deterministic. We define $g: \mathcal{X} \times \mathcal{U} \rightarrow \mathcal{X}$ to be an approximation of the true dynamics that is control-affine and therefore can be written as follows
\begin{equation}\label{eq:g}
g(x,u) = g_0(x) + g_1(x) u.
\end{equation}
In this paper, we represent the approximate dynamics with a neural network, though our method is agnostic to the structure of the model and how it is derived.
Let $\S = \{(x_i,u_i,f(x_i,u_i))\}_{i=1}^N$ be the training data for $g$, and let $\Psi = \{(x_j,u_j,f(x_j,u_j))\}_{j=1}^M$ be another set of samples collected near $\S$ that will be used to estimate the Lipschitz constant. We use $\bar\cdot$ to refer to data points from $\S$ or $\Psi$. We place no assumption on how $\S$ is obtained; any appropriate method (uniform sampling, perturbations from expert trajectories, etc.) may be employed%
, although we require independent and identically distributed (i.i.d.) samples for $\Psi$. A single state-control pair is written as $(x,u)$. With some abuse of notation, we write $(\bar{x},\bar{u}) \in \S$ if $(\bar{x},\bar{u}) = (x_i, u_i)$ for some $1 \leq i \leq N$ (similarly for $\Psi$).
A Lipschitz constant bounds how much outputs change with respect to a change in the inputs. For some function $h$, a Lipschitz constant over a domain $\mathcal{Z}$ is any number $L$ such that for all $z_1, z_2 \in \mathcal{Z}$
\begin{equation}\label{eq:lipschitz_def}
\| h(z_1) - h(z_2) \| \leq L \| z_1 - z_2 \|
\end{equation}
Norms $\|\cdot\|$ are always the 2-norm or induced 2-norm. We define $L_{f-g}$, $L_{g_0}$, and $L_{g_1}$ as the smallest Lipschitz constants of the error $f - g$, $g_0$, and $g_1$. The input to $f-g$ is a state-control pair $(x,u)$ and its output is a state. For $g_0$, both the input and output are a state. For $g_1$, its input is a state and its output is a $\texttt{dim}(\mathcal{X}) \times \texttt{dim}(\mathcal{U})$ matrix where \texttt{dim}$(\cdot)$ is the dimension of the space. A ball $\mathcal{B}_r(x)$ of radius $r$ about a point $x$ is defined as the set $\{y \enspace | \enspace \|y - x\| < r\}$, also referred to as a $r$-ball about $x$. We suppose the state space $\mathcal{X}$ is partitioned into safe $\X_{\textrm{safe}}$ and unsafe $\X_{\textrm{unsafe}}$ sets (e.g., the states in collision with an obstacle).
The method consists of two major components. First, we determine a trusted domain $D \subseteq \mathcal{X} \times \mathcal{U}$ and estimate the Lipschitz constants. Second, we use $D$ to find a path to the goal satisfying our safety and reachability requirements.
\begin{problem}\label{prob:D}
Given a learned model $g$, unknown dynamics $f$, and datasets $\Psi$ and $\S$, determine the trusted domain $D$ where $\|f(x,u)-g(x,u)\| \leq \epsilon$, for some $\epsilon > 0$. Additionally determine the Lipschitz constants $L_{f-g}$, $L_{g_0}$, and $L_{g_1}$ in $D$.
\end{problem}
\begin{problem}\label{prob:path}
Given control-affine $g$, unknown $f$, start $x_I$, goal $x_G$, goal tolerance $\lambda$, $D$, $L_{f-g}$, $L_{g_0}$, $L_{g_1}$, and $\X_{\textrm{unsafe}}$, plan a trajectory $(x_0,\ldots,x_K)$, $(u_0,\ldots,u_{K-1})$ such that $x_0 = x_I$, $x_{k+1} = g(x_k,u_k)$, $K < \infty$, and $\|x_K - x_G\| \leq \lambda$. Additionally, under the true dynamics $f$, guarantee that closed loop execution does not enter $\X_{\textrm{unsafe}}$, converges to $\mathcal{B}_{\epsilon+\lambda}(x_G)$, and remains in $\mathcal{B}_{\epsilon+\lambda}(x_G)$ after reaching $x_K$.
\end{problem}
\section{METHOD}
Secs. \ref{sec:domain} - \ref{sec:estimating_lip} and \ref{sec:planning} - \ref{sec:alg} cover our approaches to Probs. \ref{prob:D} and \ref{prob:path}, respectively. In Sec. \ref{sec:domain}, we show how $L_{f-g}$
can establish a trusted domain and how $L_{f-g}$ can be estimated in Sec. \ref{sec:estimating_lip}. In Sec. \ref{sec:planning}, we design a planner that ensures safety, that the system remains in the trusted domain, and that a feedback law maintaining minimal tracking error exists.
We present the full algorithm in Sec. \ref{sec:alg}.
\subsection{The trusted domain}\label{sec:domain}
For many systems, we are only interested in a task-relevant domain, and it is often impossible to collect data everywhere in state space, especially for high-dimensional systems. Hence, it is natural that our learned model is only accurate near training data. With a Lipschitz constant of the error, we can precisely define how accurate the learned dynamics are in a domain constructed from the training data.
We note this derivation can also be done for systems without the control-affine assumption on the learned dynamics, and thus it can still be useful for determining where a broader class of learned models can be trusted. However, removing the control-affine structure makes controller synthesis much more difficult, and is the subject of future work.
Consider a single training point $(\bar{x},\bar{u})$ and a novel point $(x,u)$. We derive a bound on the error between the true and estimated dynamics at $(x,u)$ using the triangle inequality and Lipschitz constant of the error:
\begin{equation}
\begin{aligned}
\|f(&x,u) - g(x,u)\| \\
&= \|f(x,u) - g(x,u) - f(\bar{x},\bar{u}) + g(\bar{x},\bar{u}) \\
& \qquad + f(\bar{x},\bar{u}) - g(\bar{x},\bar{u})\| \\
&\leq L_{f-g} \|(x,u) - (\bar{x},\bar{u})\| + \|f(\bar{x},\bar{u}) - g(\bar{x},\bar{u})\|.
\end{aligned}
\end{equation}
The above relation describes the error at a novel point, but we can also generalize to any domain $D$. Define $b_T$ to be the dispersion \cite{lavalle2006planning} of $\S \cap D$ in $D$
and define $e_T$ to be the maximum training error of the learned model. Explicitly,
\begin{equation}
b_T \doteq \underset{(x,u) \in D}{\sup} \quad \underset{(\bar{x},\bar{u}) \in \S \cap D}{\min} \quad \|(x,u) - (\bar{x},\bar{u})\|
\end{equation}\begin{equation}
e_T \doteq \underset{(\bar{x},\bar{u}) \in \S \cap D}{\max} \quad \|f(\bar{x},\bar{u}) - g(\bar{x},\bar{u})\|
\end{equation}
Then, we can uniformly bound the error across the entire set $D$ to yield a simple and exact relation between $f$ and $g$.
\begin{equation}\label{eq:epsilon}
\epsilon \doteq L_{f-g} b_T + e_T
\end{equation}
\begin{equation}\label{eq:nonauto_dyn}
\forall (x,u) \in D \quad f(x,u) = g(x,u) + \delta, \quad \|\delta\| \leq \epsilon
\end{equation}
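As a concrete illustration of how $b_T$, $e_T$, and $\epsilon$ from \eqref{eq:epsilon} could be computed, the following is a minimal Python sketch; it approximates the supremum in the dispersion with a finite set of samples from $D$, and the names are illustrative rather than part of our implementation.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def error_bound(train_zu, train_err, domain_samples, L_fg):
    # train_zu:       (N, d) training pairs (x, u) in S intersect D
    # train_err:      (N,)   values ||f(x,u) - g(x,u)|| at those pairs
    # domain_samples: (M, d) points covering D (approximates the sup)
    # L_fg:           estimated Lipschitz constant of f - g over D
    tree = cKDTree(train_zu)
    b_T = tree.query(domain_samples)[0].max()  # approximate dispersion
    e_T = train_err.max()                      # max training error
    return L_fg * b_T + e_T                    # epsilon
\end{verbatim}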
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/bt_et_e.pdf}
\caption{An example with $f(x) = x'$, $\texttt{dim}(\mathcal{X}) = 1$, and $\texttt{dim}(\mathcal{U}) = 0$. True dynamics: yellow; learned linear dynamics: orange; $\S$: green crosses; $\Psi$: light blue crosses; domain $D$: interval $[-1, 2]$, bordered in black. Here, $b_T = 0.3633$ (purple) and $e_T = 0.1161$ (blue). The Lipschitz constant of the error is $L_{f-g} = 0.1919$, yielding $\epsilon = 0.1859$. We can use this bound to ensure the difference between the learned and true dynamics is no more than $\epsilon$ in $D$ (shaded orange area). Note $L_{f-g}$ can be larger outside of $D$.}
\label{fig:bt_et_e}
\end{figure}
See Fig. \ref{fig:bt_et_e} for an example of these quantities. For the remainder of the method, we select $D$ to be the union of $r$-balls about a subset of the training data $\S_D \subset \S$:
\begin{equation}\label{eq:def_D}
D = \bigcup_{(\bar{x},\bar{u}) \in \S_D} \, \mathcal{B}_{r}(\bar{x},\bar{u})
\end{equation}
In the next section we discuss selection of $\S_D$, its role in estimating $L_{f-g}$, and how $r$ is selected.
\subsection{Estimating the Lipschitz constant}\label{sec:estimating_lip}
For \eqref{eq:nonauto_dyn} to hold over all of $D$, we require that $L_{f-g}$ is a Lipschitz constant for the error. We use results from Extreme Value Theory to obtain an estimate $\hat{L}_{f-g}$ that overestimates $L_{f-g}$, i.e. $\hat{L}_{f-g} \ge L_{f-g}$, with a user-defined probability $\rho$.
We build on \cite{weibull}-\cite{weng2018evaluating}, which find an estimate $\hat{L}_h$ of the Lipschitz constant $L_h$ for a function $h(z)$ over a domain $\mathcal{Z}$ by estimating the location parameter $\gamma$ of a three-parameter reverse Weibull distribution, which for a random variable $W$ has the cumulative distribution function (CDF) $$F_W(w) = \begin{cases}
\exp(-(\frac{\gamma - w}{\alpha})^\beta), & \textrm{if } w < \gamma \\
1, & \textrm{if } w \ge \gamma.
\end{cases}$$Here, the location parameter $\gamma$ is the upper limit on the support of the distribution, and $\alpha$ and $\beta$ are the scale and shape parameters, respectively. Consider the random variable described by the maximum slope taken over $N_L$ pairs of i.i.d. samples $\{(z_1^i,z_2^i)\}_{i=1}^{N_L}$ from $\mathcal{Z}$, i.e. $s = \max_i \frac{\Vert h(z_1^i) - h(z_2^i) \Vert}{\Vert z_1^i - z_2^i \Vert}$. From the Fisher-Tippett-Gnedenko Theorem \cite{de2007extreme}, $s$ follows one of the Frechet, reverse Weibull, or Gumbel distributions in the limit as $N_L$ approaches infinity. If $s$ follows the reverse Weibull distribution, which we validate in our results using the Kolmogorov-Smirnov (KS) goodness-of-fit test \cite{degroot2013probability} with a significance value of 0.05 (the same threshold used in \cite{weng2018evaluating}), then $L_h$ is finite and equals $\gamma$. We estimate $L_h$ using the location parameter $\hat\gamma$ of a reverse Weibull distribution fit via maximum likelihood to $N_S$ samples of $s$. Finally, we compute a confidence interval $c = \Phi^{-1}(\rho) \xi$ on $\hat\gamma$. Here $\xi$ is the standard error of the fit $\hat\gamma$, which correlates with the quality of the fit, and $\Phi(\cdot)$ is the standard normal CDF \cite{degroot2013probability}. We select the upper end of the confidence interval as our estimate $\hat{L}_h = \hat\gamma + c$, which overestimates $L_h$ with probability $\rho$. Note that increasing $\rho$ increases $c$, improving the safety probability at the cost of loosening $\hat{L}_h$, which can make planning more conservative. We also note that this probability is valid in the limit as $N_L$ approaches infinity, due to the Fisher-Tippett-Gnedenko theorem making claims only on the asymptotic distribution. We summarize the estimation method in Alg. \ref{alg:lipschitz}.
\begin{algorithm}\label{alg:lipschitz}\small\DontPrintSemicolon
\KwIn{$N_S$, $N_L$, $\rho$}
\For{$j = 1, \ldots, N_S$}{
sample $\{(z_1^{i,j}, z_2^{i,j})\}_{i=1}^{N_L}$ uniformly in $\mathcal{Z}$ \\
compute $s_j = \max_i \Vert h(z_1^{i,j}) - h(z_2^{i,j}) \Vert/\Vert z_1^{i,j} - z_2^{i,j} \Vert$\\}
fit reverse Weibull to $\{s_j\}$ to obtain $\hat{\gamma}$ and standard error $\xi$ \\
validate fit using KS test with significance level 0.05 \\
\textbf{if} validated \Return $\hat{L}_h = \hat{\gamma} + \Phi^{-1}(\rho)\xi$ \textbf{else} \Return failure
\caption{Lipschitz estimation for $h(z)$ over $\mathcal{Z}$}
\end{algorithm}
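For concreteness, a minimal Python sketch of Alg.~\ref{alg:lipschitz} is given below, using SciPy's reverse Weibull distribution (\texttt{weibull\_max}) and KS test. SciPy's maximum-likelihood fit does not report a standard error for the location parameter, so this sketch approximates $\xi$ with a simple bootstrap; that substitution, and all names, are ours rather than part of the original implementation.
\begin{verbatim}
import numpy as np
from scipy import stats

def estimate_lipschitz(h, sample_z, n_s=50, n_l=200, rho=0.975, seed=0):
    # h:        maps an (n, d) array of points to an (n, m) array
    # sample_z: callable returning n i.i.d. domain samples, shape (n, d)
    rng = np.random.default_rng(seed)
    maxima = []
    for _ in range(n_s):
        z1, z2 = sample_z(n_l), sample_z(n_l)
        slopes = np.linalg.norm(h(z1) - h(z2), axis=1) / \
                 np.linalg.norm(z1 - z2, axis=1)
        maxima.append(slopes.max())
    maxima = np.asarray(maxima)
    c, loc, scale = stats.weibull_max.fit(maxima)        # loc is gamma-hat
    _, p = stats.kstest(maxima, 'weibull_max', args=(c, loc, scale))
    if p < 0.05:                                          # fit rejected
        return None
    # Standard error of loc via bootstrap (scipy does not report it).
    boot = [stats.weibull_max.fit(rng.choice(maxima, maxima.size))[1]
            for _ in range(200)]
    xi = np.std(boot)
    return loc + stats.norm.ppf(rho) * xi
\end{verbatim}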
We wish to choose $D$ to be large enough for planning while also keeping $L_{f-g}$ small. To achieve this, we use a filtering procedure to reduce the impact of outliers in $\S$. Let $\mu$ and $\sigma$ be the mean and standard deviation of the error over $\S$. Then, let $\S_D = \{(\bar{x},\bar{u}) \in \S \enspace | \enspace \|f(\bar{x},\bar{u}) - g(\bar{x},\bar{u})\| \leq \mu + a \sigma\}$ where $a$ is a user-defined parameter. Then, we run Alg. \ref{alg:select_D} in order to grow $D$. This method works by proposing values of $r$, estimating $L_{f-g}$, and increasing $r$ until $r > \epsilon$ or $L_{f-g} \geq 1$. Finding $D$ with $r > \epsilon$ and $L_{f-g} < 1$ is useful for planning (described further in Sec. \ref{sec:planning}, see \eqref{eq:select_bt}). Note that, in Euclidean spaces, $r \geq b_T$. If no filtering is done, $r = b_T$, since no point in $D$ is further than a distance $r$ from $\S_D$ and the furthest any point in $D$ can lie from a point in $\S_D$ is $r$; however, filtering shrinks $D$ and thus decreases the dispersion, making it possible that $r > b_T$.
The parameter $a$ should be chosen to balance the size of $D$ against the magnitude of $L_{f-g}$, which we tune heuristically.
This filtering lets us exclude regions where our learned model is less accurate, yielding smaller $e_T$. Note that filtering does not affect the i.i.d. property of the samples needed for Alg. \ref{alg:lipschitz}; it only applies a mask to the domain. We also note that Alg. \ref{alg:select_D} returns a minimum value for $r$, but a larger $r$ can be chosen as long as $L_{f-g}$ is estimated with Alg. \ref{alg:lipschitz}. A larger $r$ makes planning easier by expanding the trusted domain.
\begin{algorithm}\label{alg:select_D}\small\DontPrintSemicolon
\KwIn{$\mu$, $\sigma$, $a$, $S_D$, $\Psi$, $\alpha > 0$}
$r \leftarrow \mu + a \sigma$ \\
\While{True}{
construct $D$ using equation \eqref{eq:def_D} \\
estimate $L_{f-g}$ using Alg. \ref{alg:lipschitz} and $\Psi$ \\
calculate $\epsilon$ using equation \eqref{eq:epsilon} \\
\lIf{$L_{f-g} \geq 1$}{\Return failure} \label{line:term_cond_fail}
\lIf{$r > \epsilon$}{\Return $r$ and $D$} \label{line:term_cond}
\lElse{$r \leftarrow \epsilon + \alpha \quad$ \texttt{\ //\enspace $\alpha$ is a small constant}}}
\caption{Selecting $r$ and $D$}
\end{algorithm}
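A minimal Python sketch of Alg.~\ref{alg:select_D} follows. The callables \texttt{estimate\_L} and \texttt{max\_train\_err}, which re-estimate $L_{f-g}$ via Alg.~\ref{alg:lipschitz} and recompute $e_T$ for the $D$ induced by a given $r$, are assumed to be supplied, and using $r$ in place of $b_T$ is a conservative simplification since $r \geq b_T$.
\begin{verbatim}
def select_r_and_D(mu, sigma, a, estimate_L, max_train_err, alpha=1e-3):
    # estimate_L(r):    Lipschitz estimate of f - g over D of radius r
    # max_train_err(r): e_T over S_D for that D
    r = mu + a * sigma
    while True:
        L = estimate_L(r)
        eps = L * r + max_train_err(r)   # conservative: b_T <= r
        if L >= 1.0:
            return None                  # failure
        if r > eps:
            return r, eps
        r = eps + alpha
\end{verbatim}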
While we never explicitly address the assumption that the true dynamics are deterministic, the estimated Lipschitz constant may be unbounded in the stochastic case, such as when two samples have the same inputs but different outputs due to noise, causing a division by 0 in line 3 of Alg. \ref{alg:lipschitz}.
$L_{g_0}$ and $L_{g_1}$ may also be estimated with Alg. \ref{alg:lipschitz}, which we employ in the results. Alternatively, \cite{fazlyab2019efficient} can give tight upper bounds on the Lipschitz constant of neural networks, though it could not scale to the networks used in our results. Other approaches \cite{JordanD20} improve scalability at the cost of looser Lipschitz upper bounds, and will be examined in the future.
\subsection{Planning}\label{sec:planning}
We want to plan a trajectory from start $x_I$ to goal $x_G$ using the learned dynamics while remaining in $\X_{\textrm{safe}}$ in execution. We constrain the system to stay inside $D$, as model accuracy may degrade outside of the trusted domain.
We develop a planner similar to a kinodynamic RRT \cite{lavalle2001randomized}, growing a search tree $\mathcal{T}$ by sampling controls that steer towards novel states until we reach the goal. If a path is found, then we can ensure the goal is reachable with safety guarantees.
\subsubsection{Staying inside $D$}\label{sec:in_D}
To remain inside the set $D$, we introduce another set $D_\epsilon := D \ominus \mathcal{B}_\epsilon(0)$, which is the Minkowski difference between $D$ and a ball of radius $\epsilon$.
Every point in $D_\epsilon$ is at least a distance of $\epsilon$ from any point in the complement of $D$. Since the learned dynamics differs from the true dynamics by at most $\epsilon$ in $D$, controlling to a point in $D_\epsilon$ under the learned dynamics ensures the system remains within $D$ under the true dynamics (see Fig. \ref{fig:d_eps}).
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{figures/d_eps.pdf}
\caption{Visualizing $D$ (boundary in black), $D_\epsilon$ (yellow), and $D^c$ (complement of $D$). Each point in $D_\epsilon$ is at least $\epsilon$ distance away from $D^c$. If the system is controlled to a point in $D_\epsilon$ from anywhere in $D$ under the learned dynamics, then it remains in $D$ under the true dynamics.} \label{fig:d_eps}
\end{figure}
How do we determine if a query point $(x,u)$ is inside of $D_\epsilon$? Since we define $D$ to be a union of balls \eqref{eq:def_D}, it would suffice to find a subset of training points $\mathcal{W} \subset S_D$ such that the union of $r$-balls about the training points completely covers an $\epsilon$-ball about $(x,u)$. Explicitly,
\begin{equation}\label{eq:coverage}
\bigcup_{(\bar{x},\bar{u}) \in \mathcal{W}} \, \mathcal{B}_{r}(\bar{x},\bar{u}) \supset \mathcal{B}_\epsilon(x,u)
\end{equation}
In general, checking \eqref{eq:coverage} is difficult, but if $L_{f-g} < 1$ and
\begin{equation}\label{eq:select_bt}
r > \frac{e_T}{1 - L_{f-g}},
\end{equation}
\noindent then only one training point within a distance $r - \epsilon$ is needed to ensure a query point is in $D_\epsilon$ (see Fig. \ref{fig:small_l}). Note by Alg. \ref{alg:select_D} lines \ref{line:term_cond_fail}-\ref{line:term_cond}, either \eqref{eq:select_bt} is guaranteed or $L_{f-g} \geq 1$, in which case we return failure.
\begin{lem}
If $L_{f-g} < 1$ and $r$ is selected according to equation \eqref{eq:select_bt}, then a point $(x,u)$ is in $D_\epsilon$ if there exists $(\bar{x},\bar{u}) \in \S_D$ such that $\|(x,u) - (\bar{x},\bar{u})\| \leq r - \epsilon$.
\end{lem}
\begin{proof}To prove, note we can rearrange terms in equation \eqref{eq:select_bt} to get $r > L_{f-g} r + e_T \ge \epsilon$. If there exists $(\bar{x},\bar{u}) \in S_D$ such that $\|(x,u) - (\bar{x},\bar{u})\| \leq r - \epsilon$, then $\mathcal{B}_\epsilon(x,u) \subset \mathcal{B}_{r}(\bar{x},\bar{u}) \subset D$ since no point in $\mathcal{B}_\epsilon(x,u)$ is further than $r$ distance from $(\bar{x},\bar{u})$. Since $\mathcal{B}_\epsilon(x,u) \subset D$, $(x,u)$ is at least $\epsilon$ distance from any point in $D^c$ and therefore $(x,u) \in D_\epsilon$.
\end{proof}
In order to ensure $L_{f-g} < 1$, since it is derived from the training data and learned model, we must train a learned model that is sufficiently accurate (i.e. low error on $\S \cup \Psi$). In our experiments, it was enough to minimize mean squared error over the training set to learn models with this property.
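Under these conditions, the membership test is cheap: checking $(x,u)\in D_\epsilon$ reduces to a single nearest-neighbor query against $\S_D$. A minimal Python sketch using a k-d tree is shown below; the class and variable names are ours.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

class TrustedDomain:
    # Valid when L_{f-g} < 1 and r > e_T / (1 - L_{f-g}): a query (x, u)
    # is in D_epsilon iff some training pair lies within r - epsilon.
    def __init__(self, S_D, r, eps):
        self.tree = cKDTree(S_D)      # S_D: (N, dim_x + dim_u) array
        self.thresh = r - eps

    def contains(self, x, u):
        dist, _ = self.tree.query(np.concatenate([x, u]))
        return dist <= self.thresh
\end{verbatim}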
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figures/d_fastcheck_lr.pdf}
\caption{Illustrating the advantage of $L_{f-g} < 1$ and $r$ selected according to \eqref{eq:select_bt}. An $\epsilon$-ball about a query point is shown in black; $r$-balls about training data are shown in blue. \textbf{Left}: $L_{f-g} > 1$, therefore requiring many training points to cover an $\epsilon$-ball about the query point. \textbf{Right}: $L_{f-g} < 1$ and $r$ is selected according to \eqref{eq:select_bt}. Under these conditions, only one training point within a $r - \epsilon$ distance ensures an $\epsilon$-ball about the $(x,u)$ is entirely in $D$, ensuring that the query point is in $D_\epsilon$.} \label{fig:small_l}
\end{figure}
To ensure that the resulting trajectory remains in $D_\epsilon$, we ensure that corresponding pairs of state and control lie in $D_\epsilon$ at each step. In growing the search tree $\mathcal{T}$, we break down this requirement into two separate checks, the first of which optimistically adds states to the search tree and the second that requires pairs of states and controls to lie in $D_\epsilon$. To illustrate, suppose we sample a new configuration $x_\textrm{new}$ and grow the tree from some $x$ to $x_\textrm{new}$. At this point, when sampling a control $u$ to steer from $x$ to $x_\textrm{new}$ we enforce that $(x,u) \in D_\epsilon$ (see line \ref{line:dist_check} in Alg. \ref{alg:rrt}). However, how do we know the resulting state, $x' = g(x,u)$, will lie in $D_\epsilon$? Since $x'$ is a state and not a state-control pair, the above question is not well defined. Instead, we perform an optimistic check in adding $x'$ which requires that there exists some $\hat{u}$ such that $(x',\hat{u}) \in D_\epsilon$ (see line \ref{line:opt_check2} in Alg. \ref{alg:rrt}). In turn, when growing the search tree from $x'$ to some other sampled point $x'_\textrm{new}$ we ensure that the pair of state and newly sampled control $u'$ lies in $D_\epsilon$, i.e. $(x',u') \in D_\epsilon$.
\subsubsection{One step feedback law}\label{sec:feedback}
To prevent drift in execution, we also seek to ensure the trajectory planned with RRT can be tracked with minimal error. One key requirement to guarantee a feedback law exists is that the system is sufficiently actuated under the learned dynamics. This requires that $\texttt{dim}(\mathcal{U}) \geq \texttt{dim}(\mathcal{X})$. The check for sufficient actuation is done on a per state basis and can be done as we grow $\mathcal{T}$. This feedback law ensures that, under the learned dynamics, we can return to a planned trajectory in exactly one step.
\begin{figure}
\centering
\includegraphics[width=0.42\textwidth]{figures/rollout.pdf}
\caption{The one-step feedback law: plan with the learned dynamics (dashed black); rollout with the true dynamics (blue); prediction with the learned dynamics using the feedback law (red).
At each point, we use \eqref{eq:pert_lls} to find a feedback control $\tilde{u}_k$ so $x_{k+1} = g(\tilde{x}_k, \tilde{u}_k)$. We arrive within $\epsilon$ of the next state under the true dynamics. This repeats until we reach the goal.} \label{fig:one_step}
\end{figure}
Suppose we are executing a trajectory $(x_0, \ldots, x_K)$ with corresponding control $(u_0, \ldots, u_{K-1})$ planned with the learned dynamics, and the system is currently at $x_{k-1}$. Under the learned dynamics, the plan is to move to $x_{k} = g(x_{k-1},u_{k-1})$, but, under the true dynamics, the system will end up at some $\tilde{x}_{k} = f(x_{k-1},u_{k-1})$ which is no more than an $\epsilon$ distance from $x_{k}$. Our goal is to find an input $\tilde{u}_{k}$ such that $x_{k+1} = g(\tilde{x}_k, \tilde{u}_k)$. If this one-step feedback law exists for all $1 \leq k \leq K-1$, it ensures the executed trajectory stays within $\epsilon$ distance of the planned trajectory (see Fig. \ref{fig:one_step}).
With Lipschitz constants $L_{g_0}$ and $L_{g_1}$, we can bound how much the learned dynamics varies in the $\epsilon$-ball about $x_k$.
\begin{equation}\label{eq:g_bound}\small
\forall \tilde{x}_k \in \mathcal{B}_\epsilon(x_k) \enspace g(\tilde{x}_k,u) = g_0(x_k) + \Delta_0 + (g_1(x_k) + \Delta_1)u
\end{equation}
\noindent where $\|\Delta_0\| \leq L_{g_0}\epsilon$ and $\|\Delta_1\| \leq L_{g_1}\epsilon$. With \eqref{eq:g_bound}, the existence of $\tilde{u}_k$ is informed by a perturbed linear equation:
\begin{equation}\label{eq:pert_lls}
\begin{aligned}
x_{k+1} &= g(\tilde{x}_k,\tilde{u}_k), \quad \tilde{x}_k \in \mathcal{B}_\epsilon(x_k) \\
& \hspace{-25pt} \Rightarrow x_{k+1} = g_0(x_k) + \Delta_0 + (g_1(x_k) + \Delta_1)\tilde{u}_k \\
& \hspace{-25pt} \Rightarrow A\tilde{u}_k = b \\
\text{with} \enspace A &= g_1(x_k) + \Delta_1 \enspace \text{and} \enspace b = x_{k+1} - g_0(x_k) - \Delta_0
\end{aligned}\hspace{-15pt}
\end{equation}
Prior to execution, we seek to answer two questions: when does $\tilde{u}_k$ exist and does $\tilde{u}_k$ lie in the control space $\mathcal{U}$ (for instance in the presence of box constraints)? Results from the literature \cite{lotstedt1983perturbation} give a bound on the difference between the nominal solution $u_k$ and perturbed solution $\tilde{u}_k$,
\begin{equation}
\Vert u_k - \tilde{u}_k \Vert \le \frac{\Vert g_1(x_k)^+\Vert(\Vert \Delta_1 \Vert \Vert u_k \Vert + \Vert \Delta_0 \Vert)}{1 - \Vert g_1(x_k)^+ \Vert \Vert \Delta_1 \Vert} \doteq u_\textrm{pert},
\end{equation}
where $g_1(x_k)^+$ is the pseudo-inverse of $g_1(x_k)$ (in general $g_1(x_k)$ is not square). We can use this bound to ensure that $\tilde{u}_k$ is guaranteed to lie in $\mathcal{U}$ by enforcing that $u_k + u_\textrm{pert} \mathbf{1}_\infty \subseteq \mathcal{U}$, where $\mathbf{1}_\infty$ is the unit infinity-norm ball.
Furthermore, $A$ may become singular if $1 - \|g_1(x_k)^+\| \, \|\Delta_1\| \leq 0$. In this case, $\tilde{u}_k$ is not guaranteed to exist.
If $\tilde{u}_k$ exists and satisfies the control constraints for all $1 \leq k \leq K-1$, then we ensure that the system will track the path up to an $\epsilon$ error under the one-step feedback law.
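As an illustration, the following Python sketch computes the one-step feedback control by solving the nominal linear system with the learned model and evaluates the perturbation bound $u_\textrm{pert}$ above; \texttt{lstsq} returns the minimum-norm solution when $g_1(x_k)$ has more columns than rows, and the function names are ours.
\begin{verbatim}
import numpy as np

def one_step_feedback(g0, g1, x_next, x_tilde):
    # Solve g(x_tilde, u) = x_next for u with the learned
    # control-affine model g(x, u) = g0(x) + g1(x) u.
    A = g1(x_tilde)                      # dim_x by dim_u matrix
    b = x_next - g0(x_tilde)
    return np.linalg.lstsq(A, b, rcond=None)[0]

def feedback_perturbation_bound(g1_xk, u_k, L_g0, L_g1, eps):
    # Bound u_pert on ||u_k - u_tilde_k||; returns None when the
    # denominator is nonpositive (A may become singular).
    pinv_norm = np.linalg.norm(np.linalg.pinv(g1_xk), 2)
    d0, d1 = L_g0 * eps, L_g1 * eps
    denom = 1.0 - pinv_norm * d1
    if denom <= 0.0:
        return None
    return pinv_norm * (d1 * np.linalg.norm(u_k) + d0) / denom
\end{verbatim}
In planning, the returned bound can be used to verify $u_k + u_\textrm{pert} \mathbf{1}_\infty \subseteq \mathcal{U}$ before accepting an edge.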
In planning, we add the existence of a valid one step feedback law as a check when growing the search tree. Formally:
\begin{theorem}\label{th:tracking}
For trajectory $(x_0, \ldots, x_K)$ and $(u_0, \ldots u_{K-1})$, if the solution to the perturbed linear equation \eqref{eq:pert_lls}, $\tilde{u}_k$, exists for all $k \in \{1, \ldots, K-1\}$, then under the true dynamics $\|\tilde{x}_k - x_k\| \leq \epsilon$ for all $k$, given $L_{f-g}$, $L_{g_0}$, and $L_{g_1}$ are each an overestimate of the true Lipschitz constant of $f-g$, $g_0$, and $g_1$, respectively.
\end{theorem}
\begin{proof}Proof by induction. For the induction step, assume $\|\tilde{x}_k - x_k\| \leq \epsilon$ for some $k$. Since $\tilde{x}_k \in \mathcal{B}_\epsilon(x_k)$, the perturbed linear equation \eqref{eq:pert_lls} is valid. If a solution exists, then $x_{k+1} = g(\tilde{x}_k, \tilde{u}_k)$ and $\|f(\tilde{x}_k,\tilde{u}_k) - x_{k+1}\| \leq \epsilon$. This satisfies the induction step. For the base case, we have $g(x_0,u_0) = x_1$ and $\|f(x_0,u_0) - x_1\| \leq \epsilon$. Thus, for all $k$, $\|\tilde{x}_k - x_k\| \leq \epsilon$
\end{proof}
\subsubsection{Ensuring safety and invariance about the goal}\label{sec:safety_stability}
Since it is guaranteed by Thm. \ref{th:tracking} that $\|\tilde{x} - x_k\| \leq \epsilon$, we check that $\mathcal{B}_\epsilon(x_k) \subset \X_{\textrm{safe}}$ for each $x_k$ on the path to ensure safety.
The exact nature of this check depends on the system and definition of $\X_{\textrm{unsafe}}$. For example, in our experiments on quadrotor, the state includes the quadrotor's position in $\mathbb{R}^3$ and $\X_{\textrm{unsafe}}$ is defined by unions of boxes in $\mathbb{R}^3$. By defining a bounding sphere that completely contains the quadrotor, we can verify a path is safe via sphere-box intersection. With the Kuka arm, we randomly sample joint configurations in an $\epsilon$-ball about states, transform the joint configurations via forward kinematics, and check collisions in workspace.
While this method is not guaranteed to validate the entire ball around a state, in practice no collisions resulted from execution of plans. Another approach computes a free-space bubble \cite{quinlan1994real} around a given state $x$ and checks if it contains $\mathcal{B}_\epsilon(x)$; however, this is known to be conservative.
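For the box obstacles used in our quadrotor experiments, the per-state check amounts to a sphere--box intersection test on a sphere inflated by $\epsilon$ plus the robot's bounding radius. A minimal Python sketch follows; the function names and obstacle representation are illustrative.
\begin{verbatim}
import numpy as np

def sphere_box_collision(center, radius, box_min, box_max):
    # True if a sphere intersects an axis-aligned box.
    closest = np.clip(center, box_min, box_max)
    return np.linalg.norm(center - closest) <= radius

def ball_is_safe(p, eps, robot_radius, boxes):
    # Conservative check that B_eps(p), swept by the robot's bounding
    # sphere, stays clear of every box obstacle (lo, hi).
    return not any(sphere_box_collision(p, eps + robot_radius, lo, hi)
                   for lo, hi in boxes)
\end{verbatim}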
To stay near the goal after executing the trajectory, we use the same perturbed linear equation to ensure the existence of a one-step feedback law. Here, rather than checking the next state along the trajectory is reachable from the previous, we check that the final state is reachable from itself, i.e. $x_K$ is reachable from $x_K$. Similar to the arguments above, we can repeatedly execute the feedback law to ensure the system remains in an $(\epsilon + \lambda)$-ball about the goal. Formally, we have:
\begin{theorem}
If the solution, denoted $u_\textrm{st}$, to the perturbed linear equation exists for $A = g_1(x_K) + \Delta_1$ and $b = x_K - g_0(x_K) - \Delta_0$ for all $x \in \mathcal{B}_\epsilon(x_K)$, then the closed loop system will remain in $\mathcal{B}_{\epsilon+\lambda}(x_G)$, given $L_{f-g}$, $L_{g_0}$, and $L_{g_1}$ are each an overestimate of the true Lipschitz constant of $f-g$, $g_0$, and $g_1$, respectively.
\end{theorem}
\begin{proof}
By Thm. \ref{th:tracking}, $\|\tilde{x}_K - x_K\| \leq \epsilon$. Thus, if the solution to the perturbed linear equation with $A = g_1(x_K) + \Delta_1$ and $b = x_K - g_0(x_K) - \Delta_0$ exists and is valid then $g(\tilde{x}_K,u_\textrm{st}) = x_K$ and $\|f(\tilde{x}_K,u_\textrm{st}) - x_K\| \leq \epsilon$. Since $\|x_K - x_G\| \leq \lambda$, the system remains in $\mathcal{B}_{\epsilon + \lambda}(x_K)$ by the triangle inequality
\end{proof}
To close, we note that the overall safety and invariance probability of our method is $\rho^3$, arising from our need to estimate three Lipschitz constants: $L_{f-g}$, $L_{g_0}$, and $L_{g_1}$. Given independent samples for overestimating each constant with probability $\rho$ via Alg. \ref{alg:lipschitz}, the overall correctness probability is the product of the correctness of each constant, i.e. $\rho^3$.
\subsection{Algorithm}\label{sec:alg}
We present our full method, \textbf{L}earned \textbf{M}odels in \textbf{T}rusted \textbf{D}omains (LMTD-RRT), in Alg. \ref{alg:rrt}. In practice, we implemented \texttt{SampleState} and \texttt{SampleControl} in two different ways: uniform sampling and perturbations from training data. Sampling perturbations (up to a norm of $r - \epsilon$) does not exclude valid $(x,u)$ pairs since all points in $D_\epsilon$ lie within $r - \epsilon$ from a training point, and, in cases where $D_\epsilon$ is a relatively small volume, can yield a faster search. However, it also biases samples near regions where training data is more dense. We define the set $\S_\mathcal{X} = \{\bar{x} \enspace | \enspace \exists \bar{u} \enspace \text{s.t.} \enspace (\bar{x},\bar{u}) \in \S_D\}$ to describe the optimistic check described in Sec. \ref{sec:in_D}.
\texttt{NN} finds the nearest neighbor and \texttt{OneStep} checks that a valid feedback exists as described in Sec. \ref{sec:feedback}. \texttt{Model} evaluates the learned dynamics and \texttt{InCollision} checks if an $\epsilon$-ball is in $\X_{\textrm{safe}}$ as described in Sec. \ref{sec:safety_stability}.
\begin{algorithm}\label{alg:rrt}\small
\KwIn{$x_I$, $x_G$, $S_\mathcal{X}$, $S_D$, $r$, $\epsilon$, $\lambda$, $N_\textrm{samples}$, goal\_bias}
\SetKwFunction{SampleState}{SampleState}
\SetKwFunction{SampleControl}{SampleControl}
\SetKwFunction{NearestNeighbor}{NN}
\SetKwFunction{SteerInDEpsilon}{SteerInDEpsilon}
\SetKwFunction{OneStepReachable}{OneStep}
\SetKwFunction{Model}{Model}
\SetKwFunction{ConstructPath}{ConstructPath}
\SetKwFunction{InCollision}{InCollision}
$\mathcal{T} \leftarrow \{x_I\}$ \\
\While{\upshape True}{
\While{\upshape $\neg$sampled} {
$x_{\textrm{new}} \leftarrow $ \SampleState{\upshape goal\_bias} \\
\If{\upshape $\|x_{\textrm{new}} - $ \NearestNeighbor{\upshape $S_\mathcal{X}$, $x_{\textrm{new}}$}$\| \leq r - \epsilon$}{\label{line:opt_check}
sampled $\leftarrow$ True
}
}
$x_{\textrm{near}} \leftarrow $ \NearestNeighbor{\upshape $\mathcal{T}$, $x_{\textrm{new}}$} \\
$i \leftarrow 0, u_{\textrm{best}} \leftarrow \emptyset, x_{\textrm{best}} \leftarrow \emptyset, d \leftarrow \infty$ \\
\While{\upshape $i < N_\textrm{samples}$}{
$u \leftarrow$ \SampleControl{} \\
\If{$\| (x_{\textrm{near}}, u) - $\NearestNeighbor{\upshape $S_D$, $(x_{\textrm{near}}, u)$}$\| \leq r - \epsilon$}{\label{line:dist_check}
$x_{\textrm{next}} \leftarrow$ \Model{\upshape $x_{\textrm{near}}$, $u$} \\
\If{\upshape \OneStepReachable{$x_{\textrm{near}}$, $u$, $x_{\textrm{next}}$} $\land$\label{line:one_step} \\
$\quad \|x_{\textrm{next}} - $ \NearestNeighbor{\upshape $S_\mathcal{X}$, $x_{\textrm{next}}$}$\| \leq r - \epsilon$ $\land$ \label{line:opt_check2} \\
$\quad \|x_{\textrm{next}} - x_G\| < d$ $\land$ \\
$\quad \neg$\InCollision{$x_{\textrm{next}}, \epsilon$}}{
$u_{\textrm{best}} \leftarrow u$, $x_{\textrm{best}} \leftarrow x_{\textrm{next}}$ \\
$d \leftarrow \|x_{\textrm{next}} - x_G\|$ \\
}
}
$i \leftarrow i + 1$
}
\If{\upshape $u_{\textrm{best}}$}{
$\mathcal{T} \leftarrow \mathcal{T} \cup \{x_{\textrm{best}}\}$ \\
\If{\upshape $\|x_{\textrm{best}} - x_G \| \leq $ $\lambda$}{
return \ConstructPath{\upshape $\mathcal{T}$, $x_{\textrm{best}}$}
}
}
}
\caption{LMTD-RRT}
\end{algorithm}
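As an illustration of the perturbation-based variant of \texttt{SampleState} described above, the following Python sketch draws a random training state and perturbs it by at most $r-\epsilon$; sampling a direction and a radius independently is a simple heuristic rather than a uniform distribution over the ball, and the names are ours. \texttt{SampleControl} can be implemented analogously over $(x,u)$ pairs in $\S_D$.
\begin{verbatim}
import numpy as np

def sample_state_near_data(S_X, r, eps, rng):
    # Perturb a random training state by at most r - eps so the sample
    # stays within reach of the trusted domain.
    x_bar = S_X[rng.integers(len(S_X))]
    direction = rng.normal(size=x_bar.shape)
    direction *= rng.uniform(0.0, r - eps) / np.linalg.norm(direction)
    return x_bar + direction
\end{verbatim}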
Once a plan has been computed, it can be executed in closed-loop with Alg. \ref{alg:rollout}. \texttt{ModelG0} and \texttt{ModelG1} evaluate $g_0$ and $g_1$ of the learned model. \texttt{SolveLE} solves the linear equation and \texttt{Dynamics} executes the true dynamics $f$.
\begin{algorithm}\label{alg:rollout}\small
\KwIn{$\{x_{k}\}_{k=0}^K$, $\{u_{k}\}_{k=0}^{K-1}$}
\SetKwFunction{SolveLLS}{SolveLE}
\SetKwFunction{Dynamics}{Dynamics}
\SetKwFunction{ModelF}{ModelG0}
\SetKwFunction{ModelG}{ModelG1}
$\tilde{x}_0 \leftarrow x_0$ \\
\For{$k = 0, \ldots, K-1$} {
$b \leftarrow x_{k+1} - $ \ModelF{\upshape $\tilde{x}_{k}$}, $A \leftarrow $\ModelG{\upshape $\tilde{x}_{k}$} \\
$\tilde{u}_{k} \leftarrow $\SolveLLS{$A$, $b$} \\
$\tilde{x}_{k+1} \leftarrow$ \Dynamics{\upshape $\tilde{x}_{k}$, $\tilde{u}_{k}$} \\
}
\caption{LMTD-Execute}
\end{algorithm}
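The feedback step in Alg. \ref{alg:rollout} reduces to solving a linear system in the control at every step. A minimal Python sketch is shown below; \texttt{model\_g0}, \texttt{model\_g1}, and \texttt{dynamics} stand in for the learned model components $g_0$, $g_1$ and the true dynamics $f$, and a least-squares solve is used, which coincides with solving the linear equation when $g_1$ is square and invertible.
\begin{verbatim}
import numpy as np

def lmtd_execute(x_plan, model_g0, model_g1, dynamics):
    """Track the planned states x_plan[0..K] in closed loop (sketch)."""
    x_tilde, u_tilde = [x_plan[0]], []
    for k in range(len(x_plan) - 1):
        b = x_plan[k + 1] - model_g0(x_tilde[k])    # desired next state minus drift
        A = model_g1(x_tilde[k])                    # learned input matrix
        u_k = np.linalg.lstsq(A, b, rcond=None)[0]  # solve A u = b
        u_tilde.append(u_k)
        x_tilde.append(dynamics(x_tilde[k], u_k))   # propagate the true system
    return np.array(x_tilde), np.array(u_tilde)
\end{verbatim}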
\section{RESULTS}
We present results on 1) a 2D system to illustrate the need for remaining near the trusted domain, 2) a 6D quadrotor to show scaling to higher-dimensional systems, and 3) a 7DOF Kuka arm simulated in Mujoco \cite{todorov2012mujoco} to show scaling to complex dynamics that are not available in closed form. Using $\rho = 0.975$, we plan with LMTD-RRT and roll out the plans in open loop (no computation of $\tilde{u}_k$) and closed loop (Alg. \ref{alg:rollout}). We compare with a na\"ive kinodynamic RRT that skips the checks on lines \ref{line:opt_check}, \ref{line:dist_check}, \ref{line:one_step}-\ref{line:opt_check2} of Alg. \ref{alg:rrt} in both open and closed loop.
See the video for experiment visualizations.
\subsection{2D Sinusoidal Model}
To aid in visualization, we demonstrate LMTD-RRT on a 2D system with dynamics $f(x,u) = f_0(x) + f_1(x)u$:
\begin{equation*}\small
f_0(x) = \begin{bmatrix} x \\ y \end{bmatrix} + \Delta T \begin{bmatrix} 3 \sin\big(0.3(x + 4.5)\big) \big\vert \sin\big(0.3(y + 4.5)\big) \big\vert \\
3 \sin\big(0.3(y + 4.5)\big) \big\vert \sin\big(0.3(x + 4.5)\big) \big\vert \end{bmatrix}
\end{equation*}
\begin{equation*}\small
f_1(x) = \Delta T \begin{bmatrix} 1 + 0.05\cos(y) & 0 \\
0 & 1 + 0.05\sin(x) \end{bmatrix}
\end{equation*}
\noindent where $\Delta T = 0.2$. We are given 9000 training points $(x_i, u_i, f(x_i,u_i))$, where $x_i$ is drawn uniformly from an `L'-shaped subset of $\mathcal{X}$ (see Fig. \ref{fig:sinusoid}) and $u_i$ is drawn uniformly from $\mathcal{U} = [-1, 1]^2$. $g_0(x)$ and $g_1(x)$ are modeled with separate neural networks with one hidden layer of sizes 128 and 512, respectively. We select $a=3$ in Alg. \ref{alg:select_D}. 1000 more samples are used to estimate $L_{f-g}$ via Alg. \ref{alg:lipschitz}, which we validate with a KS test with a $p$ value of $0.56$, far above the $0.05$ significance threshold. We obtain $\hat\gamma = 0.117$ and $c = 6.85\times 10^{-4}$, giving $\epsilon = 0.215$ over $D$.
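For reference, these dynamics can be implemented directly; the following Python sketch (our code, with \texttt{DT} for $\Delta T = 0.2$ and the state stored as \texttt{[x, y]}) reproduces $f(x,u)=f_0(x)+f_1(x)u$.
\begin{verbatim}
import numpy as np

DT = 0.2

def f0(state):
    s = lambda v: np.sin(0.3 * (v + 4.5))
    x, y = state
    return state + DT * np.array([3 * s(x) * abs(s(y)),
                                  3 * s(y) * abs(s(x))])

def f1(state):
    x, y = state
    return DT * np.diag([1 + 0.05 * np.cos(y), 1 + 0.05 * np.sin(x)])

def f(state, u):
    return f0(state) + f1(state) @ u
\end{verbatim}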
See Fig. \ref{fig:sinusoid} for examples of the nominal, open-loop, and closed-loop trajectories planned with LMTD-RRT and a na\"ive kinodynamic RRT. The plan computed with LMTD-RRT remains in regions where we can trust the learned model (i.e. within $D_\epsilon$) and the closed-loop execution of the trajectory converges to $\mathcal{B}_{\epsilon+\lambda}(x_G)$. In contrast, both the open-loop and closed-loop execution of the na\"ive RRT plan diverge.
We provide statistics in Table \ref{table:stats_sinusoid} of maximum $\ell_2$ tracking error $\max_{i \in \{1, \ldots, T\}}\Vert \tilde{x}_i - x_i\Vert$ and final $\ell_2$ distance to the goal $\Vert \tilde{x}_T - x_G\Vert_2$ for both the open loop (OL) and closed loop (CL) variants, averaged over 70 random start/goal states. To give the baseline an advantage, we fix the start/goal states and plan with na\"ive RRT using two different dynamics models: 1) the same learned dynamics model used in LMTD-RRT and 2) a learned dynamics model with the same hyperparameters trained on the full dataset ($10^4$ datapoints), and report the statistics on the minimum of the two errors. The worst case tracking error for the plan computed with LMTD-RRT was $0.199$, which is within the guaranteed tracking error bound of $\epsilon = 0.215$, while despite the data advantage, plans computed with na\"ive RRT suffer from higher tracking error. Average planning times for LMTD-RRT and na\"ive RRT are 4.5 and 17 seconds, respectively.
Overall, this suggests that planning with LMTD-RRT avoids regions where model error may lead to poor tracking, unlike planning with a na\"ive RRT.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/sinusoid_fig_lipnew_KS_comb3.pdf}
\caption{2D sinusoidal dynamics. The LMTD-RRT plan (magenta) stays in $D$ and ensures a valid feedback law exists at each step. The plan can be tracked within $\epsilon$ under closed loop control (cyan). If feedback is not applied, the system drifts to the edge of the trusted domain, exits, and diverges (green). The na\"ive RRT plan (brown) does not consider $D$, and does not reach the goal under closed loop (grey) or open loop (red) control.
}
\label{fig:sinusoid}
\end{figure}
\begin{table}\centering
\begin{tabular}{ c | c | c }
& LMTD-RRT & Na\"ive kino. RRT \\\hline
\cellcolor{lightgray!50!} \hspace{-7pt}Max. trck. err. (CL)\hspace{-5pt} & \cellcolor{lightgray!50!} 0.099 $\pm$ 0.036 (0.199)& \cellcolor{lightgray!50!} 8.746 $\pm$ 4.195 (15.21) \\
\hspace{-7pt}Goal error (CL)\hspace{-5pt} & 0.039 $\pm$ 0.020 (0.113)& 7.855 $\pm$ 3.851 (14.78) \\
\cellcolor{lightgray!50!} \hspace{-7pt}Max. trck. err. (OL)\hspace{-5pt} & \cellcolor{lightgray!50!} 12.84 $\pm$ 4.444 (20.92)& \cellcolor{lightgray!50!} 10.39 $\pm$ 1.962 (15.31)\\
\hspace{-7pt}Goal error (OL)\hspace{-5pt} & 12.40 $\pm$ 4.576 (20.92)& 10.12 $\pm$ 1.762 (15.20)\\
\end{tabular}
\caption{Sinusoid errors in closed loop (CL) and open loop (OL). \\ Mean $\pm$ standard deviation (worst case).}
\label{table:stats_sinusoid}
\end{table}
\subsection{6D Quadrotor Model}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/quad_comb_maxstats_all_newlip_flat.pdf}
\caption{Quadrotor tracking example. The trajectory planned with LMTD-RRT (magenta) is tracked in closed loop (blue) and reaches the goal. The open loop (green) also converges near the goal, but not as close as the closed loop. The na\"ive RRT produces a plan (brown) that leaves the trusted domain. Thus, both the open (red) and closed (light blue) loop rapidly diverge.}
\label{fig:quadrotor}
\end{figure}
We evaluate our method on 6-dimensional fully-actuated quadrotor dynamics \cite{quadrotor} with state $x = [\chi, y, z, \phi, \theta, \psi]^\top$, where $f(x,u) = f_0(x) + f_1(x)u$, $f_0(x) = x$ and $f_1(x) = $
\begin{equation*}\footnotesize
\Delta T \begin{bmatrix} c_\theta c_\psi &\hspace{-6pt} -c_\phi s_\psi + c_\psi s_\phi s_\theta & \hspace{-7pt}s_\psi s_\phi + c_\phi c_\psi s_\theta & \hspace{-5pt}0 & \hspace{-3pt}0 & \hspace{-3pt}0 \\
c_\theta s_\psi &\hspace{-6pt} c_\phi c_\psi + s_\phi s_\psi s_\theta & \hspace{-7pt}-c_\psi s_\phi + c_\phi s_\psi s_\theta & \hspace{-5pt}0 & \hspace{-3pt}0 & \hspace{-3pt}0 \\
-s_\theta &\hspace{-6pt} c_\theta s_\phi & \hspace{-7pt}c_\phi c_\theta & \hspace{-5pt}0 & \hspace{-3pt}0 & \hspace{-3pt}0 \\
0 &\hspace{-5pt} 0 & 0 & \hspace{-5pt}1 & \hspace{-3pt}s_\phi t_\theta & \hspace{-3pt}c_\phi t_\theta \\
0 &\hspace{-5pt} 0 & 0 & \hspace{-5pt}0 & \hspace{-3pt}c_\phi & \hspace{-3pt} -s_\phi \\
0 &\hspace{-5pt} 0 & 0 & \hspace{-5pt}0 & \hspace{-3pt}s_\phi /c_\theta & \hspace{-3pt}c_\phi /c_\theta
\end{bmatrix},
\end{equation*}
\noindent where $\Delta T = 0.1$ and $s_{(\cdot)}$, $c_{(\cdot)}$, and $t_{(\cdot)}$ are short for $\sin(\cdot)$, $\cos(\cdot)$, and $\tan(\cdot)$ respectively. We are given $9 \times 10^6$ training data tuples $(x_i, u_i, f(x_i, u_i))$, where $x_i$, $u_i$ are generated with Halton sampling over $[-1,1]^3 \times [-\frac{\pi}{20},\frac{\pi}{20}]^3$ and $[-1,1]^6$, respectively (data is collected near hover). As $f_0(x)$ is a simple integrator term, we assume it is known and we set $g_0(x) = x$, while $g_1(x)$ is learned with a neural network with one hidden layer of size 4000. We select $a=6$ in Alg. \ref{alg:select_D}. We use $10^6$ more samples in Alg. \ref{alg:lipschitz} to estimate $L_{f-g}$, and conduct a KS test resulting in a $p$-value of $0.43 \gg 0.05$. We obtain $\hat\gamma = 0.205$, $c = 0.011$, and $\epsilon = 0.134$.
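For completeness, the control matrix above can be coded as follows (a sketch under the state ordering $[\chi, y, z, \phi, \theta, \psi]$; the upper-left block is the body-to-world rotation and the lower-right block is the standard Euler-rate transform).
\begin{verbatim}
import numpy as np

DT = 0.1

def f1(x):
    phi, theta, psi = x[3], x[4], x[5]
    s, c, t = np.sin, np.cos, np.tan
    R = np.array([
        [c(theta)*c(psi),
         -c(phi)*s(psi) + c(psi)*s(phi)*s(theta),
         s(psi)*s(phi) + c(phi)*c(psi)*s(theta)],
        [c(theta)*s(psi),
         c(phi)*c(psi) + s(phi)*s(psi)*s(theta),
         -c(psi)*s(phi) + c(phi)*s(psi)*s(theta)],
        [-s(theta), c(theta)*s(phi), c(phi)*c(theta)]])
    E = np.array([
        [1.0, s(phi)*t(theta),  c(phi)*t(theta)],
        [0.0, c(phi),          -s(phi)],
        [0.0, s(phi)/c(theta),  c(phi)/c(theta)]])
    out = np.zeros((6, 6))
    out[:3, :3], out[3:, 3:] = R, E
    return DT * out
\end{verbatim}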
See Fig. \ref{fig:quadrotor} for examples of the planned, open-loop, and closed-loop trajectories planned with LMTD-RRT and a na\"ive RRT.
The trajectory planned with LMTD-RRT
remains close to the training data, and the closed-loop system tracks the planned path with $\epsilon$-accuracy converging to $\mathcal{B}_{\epsilon+\lambda}(x_G)$.
We note that using the feedback controller to track trajectories planned with na\"ive RRT tends to worsen the tracking error, implying our learned model is highly inaccurate outside of the domain. We provide statistics in Table \ref{table:stats_quadrotor} for maximum tracking error and distance to goal, averaged over 100 random start/goal states. The worst case closed-loop tracking error for trajectories planned with LMTD-RRT is $0.011$, again much smaller than $\epsilon$. As with the 2D example, we give the baseline an advantage in computing tracking error statistics by reporting the minimum of the two errors when planning with 1) the same model used in LMTD-RRT and 2) a model trained on the full dataset ($10^7$ points). Despite the data advantage, the plans computed using na\"ive RRT have much higher tracking error. Average planning times for our unoptimized code are 100 sec. for LMTD-RRT and 15 min. for na\"ive RRT, suggesting that sampling focused near the training data can improve planning efficiency.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/quad_comb_newlip.pdf}
\caption{\textbf{Left}: Quadrotor obstacle (red) avoidance. Example plans (green, blue, black), tracking error bound $\epsilon$ overlaid (light blue). Closed-loop trajectories remain in the tubes, converging to the goal without colliding. \textbf{Right}: Na\"ive RRT plan (pink) fails to be tracked (cyan) and collides (red dots).}
\label{fig:quadrotor_obs}
\end{figure}
We also evaluate LMTD-RRT on an obstacle avoidance problem (Fig. \ref{fig:quadrotor_obs}). We perform collision checking as described in Sec. \ref{sec:safety_stability}. As the tracking error tubes (of radius $\epsilon = 0.134$) centered around the nominal trajectories never intersect with any obstacles, we can guarantee that the system never collides in execution. Empirically, in running Alg. \ref{alg:rrt} over 500 random seeds to obtain different nominal paths, the closed-loop trajectory never collides. In contrast, the na\"ive RRT plan fails to be tracked and collides (Fig. \ref{fig:quadrotor_obs}, right).
\begin{table}\centering
\begin{tabular}{ c | c | c }
& LMTD-RRT & Na\"ive kino. RRT \\\hline
\cellcolor{lightgray!50!} \hspace{-7pt}Max. trck. err. (CL)\hspace{-5pt} & \cellcolor{lightgray!50!} 0.003 $\pm$ 0.001 (0.008)& \cellcolor{lightgray!50!} 10.59 $\pm$ 16.75 (153.26) \\
\hspace{-7pt}Goal error (CL)\hspace{-5pt} & 0.001 $\pm$ 0.001 (0.004)& 8.247 $\pm$ 8.434 (46.150) \\
\cellcolor{lightgray!50!} \hspace{-7pt}Max. trck. err. (OL)\hspace{-5pt} & \cellcolor{lightgray!50!} 0.020 $\pm$ 0.007 (0.040)& \cellcolor{lightgray!50!} 4.289 $\pm$ 2.340 (12.986)\\
\hspace{-7pt}Goal error (OL)\hspace{-5pt} & 0.019 $\pm$ 0.008 (0.040)& 3.265 $\pm$ 2.028 (11.670)\\
\end{tabular}
\caption{Quadrotor errors (no obstacles) in closed loop (CL) and \\ open loop (OL). Mean $\pm$ standard deviation (worst case).}
\label{table:stats_quadrotor}
\end{table}
\subsection{7DOF Kuka Arm in Mujoco}
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{figures/arm_comb_v3_newlip2.png}
\caption{Planning to move a 7DOF arm from below to above a table. Trajectory-tracking time-lapse (time increases from left to right). Red (nominal), green (closed loop), blue (open loop). \textbf{Top}: LMTD-RRT (red, green, blue overlap due to tight tracking). \textbf{Bottom}: Na\"ive RRT (poor tracking causes collision).}
\label{fig:arm}
\end{figure*}
We evaluate our method on a 7DOF Kuka iiwa arm simulated in Mujoco \cite{todorov2012mujoco} using a Kuka model from \cite{githubkuka}. We train two models using different datasets, one for evaluating tracking error without the presence of obstacles (Table \ref{table:stats_kuka}), and the other for obstacle avoidance. For both models, $g_0(x)$ is again set to be $x$ while $g_1(x)$ is learned with a neural network with one hidden layer of size 4000. For the results in Table \ref{table:stats_kuka}, we are provided 2475 training data tuples, which are collected by recording continuous state-control trajectories from an expert and evaluating $f(x,u)$ on the trajectories and on random state-control perturbations locally around the trajectories. We select $a=5$ in Alg. \ref{alg:select_D}. 275 more samples are used in Alg. \ref{alg:lipschitz} to estimate $L_{f-g}$, validated with a KS test with a $p$ value of $0.58 \gg 0.05$. We obtain $\hat\gamma = 0.087$ and $c = 0.001$, leading to $\epsilon = 0.111$. In Table \ref{table:stats_kuka}, we provide statistics on maximum tracking error and distance to goal under plans with LMTD-RRT and the na\"ive RRT baseline (with a model trained on the full dataset of 2750 points), averaged over 25 runs of each method. Notably, closed-loop tracking of plans found with LMTD-RRT has the lowest error, with a worst case error much smaller than $\epsilon = 0.111$. Planning takes on average 1.552 and 0.167 sec. for LMTD-RRT and na\"ive RRT, respectively. We suspect the na\"ive RRT exploits poor dynamics outside of $D$, expediting planning.
\begin{table}\centering
\begin{tabular}{ c | c | c }
& LMTD-RRT & Na\"ive kino. RRT \\\hline
\cellcolor{lightgray!50!} \hspace{-7pt}Max. trck. err. (CL)\hspace{-5pt} & \cellcolor{lightgray!50!} 0.010 $\pm$ 0.010 (0.038)& \cellcolor{lightgray!50!} 0.090 $\pm$ 0.250 (1.265) \\
\hspace{-7pt}Goal error (CL)\hspace{-5pt} & 0.004 $\pm$ 0.004 (0.018) & 0.082 $\pm$ 0.251 (1.265) \\
\cellcolor{lightgray!50!} \hspace{-7pt}Max. trck. err. (OL)\hspace{-5pt} & \cellcolor{lightgray!50!} 0.036 $\pm$ 0.041 (0.142) & \cellcolor{lightgray!50!} 0.076 $\pm$ 0.084 (0.282)\\
\hspace{-7pt}Goal error (OL)\hspace{-5pt} & 0.035 $\pm$ 0.040 (0.140) & 0.071 $\pm$ 0.075 (0.225)\\
\end{tabular}
\caption{7DOF arm errors (no obstacles) in closed loop (CL) and \\ open loop (OL). Mean $\pm$ standard deviation (worst case).}
\label{table:stats_kuka}
\end{table}
For the obstacle avoidance example (Fig. \ref{fig:arm}), we are provided 15266 datapoints, which again take the form of continuous trajectories plus perturbations. We select $a=8$ in Alg. \ref{alg:select_D}, and use 1696 more points to estimate $L_{f-g}$ using Alg. \ref{alg:lipschitz}, which we validate with a KS test with a $p$ value of $0.37 \gg 0.05$. We obtain $\hat\gamma = 0.156$ and $c = 0.010$, leading to $\epsilon = 0.111$. In planning, as described in Sec. \ref{sec:safety_stability}, we perform collision checking by randomly sampling configurations in an $\epsilon$-ball about each point along the trajectory. Though this collision checker is not guaranteed to detect collision, in running LMTD-RRT over 20 random seeds, we did not observe collisions in execution for any of the 20 plans, and the arm safely reaches the goal. Over these trajectories, the worst case tracking error is $0.107$, which remains within $\epsilon = 0.111$. One such plan computed by LMTD-RRT, together with the corresponding open-loop and closed-loop tracking trajectories, is shown in the top row of Fig. \ref{fig:arm}. The three trajectories nearly overlap exactly due to small tracking error. In contrast, the na\"ive RRT plan cannot be accurately tracked, even with closed-loop control, due to planning outside of the trusted domain, causing the executed trajectories to diverge and collide with the table.
\section{DISCUSSION AND CONCLUSION}
We present a method to bound the difference between learned and true dynamics in a given domain and derive conditions that guarantee a one-step feedback law exists. We combine these two properties to design a planner that can guarantee safety, goal reachability, and that the closed-loop system remains in a small region about the goal.
While the method presented has strong guarantees, it also has limitations which are interesting targets for future work. First, the true dynamics are assumed to be deterministic. Handling stochastic dynamics may be possible by estimating the Lipschitz constant of the mean dynamics while also appropriately modeling the noise. Second, the actuation requirement limits the systems to which this method can be applied.
For systems with $\texttt{dim}(\mathcal{U}) < \texttt{dim}(\mathcal{X})$, it may be possible to construct a similar feedback law that guarantees the learned dynamics will lie within a tolerance of planned states which, in turn, could still give strong guarantees on safety and reachability.
\bibliographystyle{IEEEtran}
\section{Introduction}
Consider a weighted undirected random graph consisting of vertices $i\in\{1,...,n\}$ and edge weights given by real-valued random variables $(X_{ij})_{i,j\in\{1,...,n\}, i< j}$. Suppose $X_{ij}$ are identically distributed with density $f(\cdot)=f_{X_{12}}(\cdot)$. The dyadic kernel density estimator (KDE) of \cite{GrahamNiuPowell2019}, evaluated at a design point $x\in \mathrm{supp}(X_{12})$, is defined as
\begin{align*}
\hat f_{n}(x)= {n\choose 2}^{-1}\sum_{1\le i< j \le n}\frac{1}{h_n}K\left(\frac{x-X_{ij}}{h_n}\right).
\end{align*}
\par
Applications of dyadic data are numerous in different fields of the sciences and social sciences, such as bilateral trade, animal migrations, refugee diasporas, transportation networks between different cities, friendship networks between individuals, intermediate product sales between firms, research and development partnerships across different organizations, electricity grids across different states, and social networks between legislators, to list a few. See, e.g. \cite{Graham2019Handbook} for a recent and comprehensive review. This paper investigates the asymptotic properties of and proposes new, improved inference methods for dyadic KDE with both complete and incomplete data.\\
\par
The main contributions of this paper are two-fold. First, we establish uniform convergence rates for dyadic KDE under general conditions. These rates are new; no uniform rate is previously known for KDE under dyadic sampling in the literature. Second, we develop inference methods by adapting a modified version of the jackknife empirical likelihood (JEL) proposed by \cite{MatsushitaOtsu2020}. The proposed modified JEL (mJEL) procedure is shown to be asymptotically valid regardless of the presence of dyadic clustering. In practice, dyadic clustered data are often incomplete or contain missing values. We extend our modified JEL procedure to cover the practically relevant case of incomplete data under the missing at random assumption.
Extensive simulation studies show that this modified JEL inference procedure does not suffer from the unreliable finite sample performance of the analytic variance estimator, delivers precise coverage probabilities even under modest sample sizes, and is robust against both dyadic clustering and i.i.d. sampling for both complete and incomplete dyadic data. Finally, we illustrate the method by studying the distribution of congestion across different flight routes using data from the United States Department of Transportation (US DOT).\\
\par
In an important recent work, \cite{GrahamNiuPowell2019} first propose and examine a
nonparametric density estimator for dyadic random variables.
Focusing on pointwise asymptotic behaviors, they demonstrate that asymptotic normality at a fixed design point can be established at a parametric rate with respect to the number of vertices, a unique feature of dyadic KDE. This implies that its asymptotics are robust to a range of bandwidth rates. They also point out that degeneracy that leads to a non-Gaussian limit, such as the situation pointed out in \cite{Menzel2020}, does not occur for dyadic KDE under some mild bandwidth conditions. For inference, they propose an analytic variance estimator that corresponds to the one proposed by \cite{FafchampsGubert2007} (henceforth FG) under parametric models and establish its asymptotic validity. {
In their simulation, they discover that the FG variance estimator often leads to imprecise coverage with moderate sample sizes.
Our proposed modified JEL procedure for dyadic KDE complements the findings of \cite{GrahamNiuPowell2019} by providing an alternative that has improved finite sample performance.\\
\par
}
In practice, the presence of missing edges is an important feature of network datasets, and a researcher working with network data often observes incomplete sets of edges. To our knowledge, none of the existing theoretical work in the literature that focuses on the asymptotics of dyadic KDE has tackled this issue. Under the missing at random assumption, we extend our theory for modified JEL to cover this practically relevant situation.
\par
Ever since the seminal papers of \cite{Owen1988,Owen1990}, empirical likelihood has been extensively studied in the literature. For a textbook treatment of classical theory, see \cite{Owen200Book}. Some recent developments under conventional asymptotics include \cite{HjortMcKeagueVan_Keilegom2009AoS} and \cite{BravoEscancianoVan_Keilegom2020AoS}, and many more. See \cite{chen2009review} for a recent review. Jackknife empirical likelihood was first proposed in the seminal work of
\cite{JingYuanZhang2009} for parametric $U$-statistics and has since been studied by \cite{GongPengQi2010JMA}, \cite{PengOiVan_Keilegom2012}, \cite{Peng2012CanJS}, \cite{WangPengQi2013Sinica}, \cite{ZhangZhao2013JMA}, \cite{MatsushitaOtsu2018JER} and \cite{ChenTabri2020}, to list a few, for different applications. Most of the aforementioned works focus on actual $U$-statistics under i.i.d. sampling. \cite{MatsushitaOtsu2020} demonstrate that JEL can not only be modified for certain unconventional asymptotics of i.i.d. observations, but can also be applied to study edge probabilities for both sparse and dense dyadic networks studied in \cite{BickelChenLevina2011}. This is achieved by incorporating the bias-correction idea of \cite{Hinkley1978}, \cite{EfronStein1981}. In a context where controlling bias and spikes - from the presence of the bandwidth in kernel estimation - presents extra challenges, our modified JEL for dyadic KDE provides a nonparametric counterpart to the network asymptotic results in \cite{MatsushitaOtsu2020}. \\
\par
{
Theoretically speaking, the modified JEL is closely related to the growing literature that utilizes
the idea of accounting for the contributions from both linear and quadratic terms in the Hoeffding-type decomposition of the statistics.
This idea emerges in the literature that studies
``small-bandwidth asymptotics,'' in the context of density-weighted average derivatives by \cite{CattaneoCrumpJansson2014bootstrap,CattaneoCrumpJansson2014ET} and \cite{CattaneoCrumpJansson2013JASA}, for the sake of robustness over a wider range of bandwidths. This idea has since been explored in various other contexts such as an inference problem in partially linear models with many regressors in \cite{cattaneo2018alternative,cattaneo2018inference}, as well as the various other examples explored in \cite{MatsushitaOtsu2020}.}\\
\par
Our results also complement the uniform convergence rate results for kernel type estimators under different dependence settings studied in \cite{EinmahlMason2000}, \cite{EinmahlMason2005}, \cite{DonyMason2008}, \cite{Hansen2008ET}, \cite{Kristensen2009ET} and so forth. Theoretically, the $\sqrt{n}$-uniform rate of dyadic KDE shares common underlying features with the kernel density estimator of \cite{EscancianoJacho-Chavez2012}, as well as the residual distribution estimator proposed in \cite{AkritasVan_Keilegom2001}, both under i.i.d. sampling.\\
\par
{
In a seminal work, \cite{FafchampsGubert2007} proposed a dyadic robust estimator for regression models, which has since become the benchmark for inference under dyadic clustering. Related works include \cite{FrankSnijders1994}, \cite{SnijdersBorgatti1999}, \cite{AronowSamiiAssenova2015}, and \cite{CameronMiller2014} in different contexts.
Asymptotics for statistics of dyadic random arrays are studied by \cite{DDG2020} under a general empirical processes setting and by \cite{ChiangKatoSasaki2020} in high-dimensions. The former show the validity of a modified pigeonhole bootstrap (cf. \cite{Mccullagh2000} and \cite{Owen2007}) while the latter utilize a multiplier bootstrap, both under a non-degeneracy condition. On the other hand, for two-way separately exchangeable arrays and linear test-statistics, \cite{Menzel2020} proposed a conservative bootstrap that is valid uniformly even under potentially non-Gaussian degeneracy scenarios.
In this paper, we only need to focus on the non-degenerate and Gaussian degenerate cases since the non-Gaussian degeneracy scenario is not a concern for dyadic KDE; \cite{GrahamNiuPowell2019} pointed out that non-Gaussian degeneracy can be ruled out under mild bandwidth conditions.\\
\par
More recently, \cite{cattaneo2022} study uniform convergence rates and uniform inference for dyadic KDE. Their uniform convergence rate results further improve upon ours by allowing uniformity over the whole support for compactly supported
data and enabling boundary-adaptive estimation. They further derive the minimax rate of uniform convergence for density estimation with dyadic
data and show that the dyadic KDE is, under appropriate conditions, minimax-optimal. For inference, they obtain uniform distribution theory and provide bias-corrected $t$-statistics-based uniform confidence bands. Their methods differ from the ones we propose, which are based on jackknife empirical likelihood.}
\subsection{Notation and Organization}
Let $|A|$ be the cardinality of a finite set $A$. Denote the uniform distribution on $[0,1]$ as $U[0,1]$. For $g:\calS\to \R$ and $\calX\subseteq \calS$, denote $\|g\|_{\calX}=\sup_{x\in\calX}|g(x)|$. Write $a_n\lesssim b_n$ if there exists a constant $C>0$ independent of $n$ such that $a_n\le Cb_n$. Throughout the paper, the asymptotics should be understood as taking $n\to \infty$.\\
\par
The rest of this paper is organized as follows. In Section \ref{sec:model_estimator}, we introduce our model and the dyadic KDE estimator. Our main theoretical results of uniform convergence rates and the validity of different JEL statistics for both complete and incomplete dyadic random arrays are then introduced in Section \ref{sec:theory}. Section \ref{sec:simulation} contains the simulation studies while the empirical application can be found in Section \ref{sec:application}. A practical guideline on bandwidth choice and proofs of all the theoretical results are contained in the Supplementary Appendix.
\section{Model and estimator}\label{sec:model_estimator}
Consider a weighted undirected random graph.
Let $I_{n} = \{ (i,j) : 1 \le i < j \le n\}$ and $I_{\infty} = \bigcup_{n=2}^{\infty} I_{n}$. The random graph consists of an array $( X_{ij})_{(i,j) \in I_{\infty}}$ of real-valued random variables that is generated by
\begin{align}
X_{ij} = \mathfrak f (U_i,U_j ,U_{\{i,j\}}),\quad (U_i)_{i\in\mathbb{N}}, \:(U_{\{i,j\}})_{i,j\in \mathbb{N}} \stackrel{i.i.d.}{\sim} U[0,1]\label{eq:Aldous-Hoover}
\end{align}
for some Borel measurable map $\mathfrak{f}:[0,1]^3\to \R$ that is symmetric in the first two arguments. Note that $U_i$ and $U_{\{\iota,\jmath\}}$ are independent for all $i\in\mathbb{N}$, $(\iota,\jmath)\in I_{\infty}$. While (\ref{eq:Aldous-Hoover}) may seem to be a specific structural assumption, it is implied by the following low-level condition of joint exchangeability via the Aldous-Hoover-Kallenberg representation, see e.g. \citet[Theorem 7.22]{Kallenberg2006}.
\begin{definition}[Joint exchangeability]\label{a:JE}
An array $(X_{ij})_{(i,j) \in I_{\infty}}$ is said to be jointly exchangeable if
for any permutation $\pi$ of $\mathbb{N}$, the arrays $(X_{ij})_{(i,j) \in I_{\infty}}$ and $(X_{\pi(i)\pi(j)})_{(i,j) \in I_{\infty}}$ are identically distributed.
\end{definition}
\begin{remark}[Identical distribution]
Under this setting, $(X_{ij})_{(i,j)\in I_{\infty}}$ are identically distributed.
\end{remark}
Suppose that the researcher observes $(X_{ij})_{(i,j)\in I_{n}}$ for $n\ge 2$. The object of interest is the density function $f(\cdot)=f_{X_{12}}(\cdot)$ of $X_{12}$. We assume that such a density function exists and is well-defined, although some low-level sufficient conditions can be obtained.
For a fixed design point $x\in\mathrm{supp}(X_{12})$ of interest, the dyadic KDE for $\theta=f(x)$ is defined as
\begin{align}
\hat \theta=\hat f_{n}(x)= {n\choose 2}^{-1}\sum_{i=1}^{n-1}\sum_{j= i+1}^n\frac{1}{h_n}K\left(\frac{x-X_{ij}}{h_n}\right)=\frac{1}{|I_{n}|}\sum_{{(i,j)\in I_{n}}} K_{ij,n},\label{eq:KDE}
\end{align}
where $h_n>0$ is an $n$-dependent bandwidth, $K_{ij,n}:=h_n^{-1}K((x-X_{ij})/h_n)$ and $K(\cdot)$ is a kernel function. We will be more specific about the requirements for $h_n$ and $K$ in the following Section.
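A direct implementation of (\ref{eq:KDE}) is straightforward. The following Python sketch (ours, for illustration; a Gaussian kernel is used purely as an example) evaluates $\hat f_n(x)$ from the upper triangle of an $n\times n$ array of observations.
\begin{verbatim}
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

def dyadic_kde(x, X, h, kernel=gaussian_kernel):
    """X is an (n, n) array; only entries X[i, j] with i < j are used."""
    n = X.shape[0]
    iu = np.triu_indices(n, k=1)         # the index set I_n
    return np.mean(kernel((x - X[iu]) / h) / h)
\end{verbatim}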
\section{Theoretical results}\label{sec:theory}
\subsection{Uniform convergence rates}
{ In the rest of the paper, denote $f_{X_{12}|U_1}$ for the conditional density function of $X_{12}$ given $U_1$ and $f_{X_{12}|U_1,U_2}$ for the conditional density function of $X_{12}$ given $U_1,U_2$. In addition, define the shorthand notations $f'(\cdot)=\partial f(\cdot)/\partial x$, $f''(\cdot)=\partial^2 f(\cdot)/\partial x^2$, $f_{X_{12}|U_1}'(\cdot|u)=\partial f_{X_{12}|U_1}(\cdot|u)/\partial x$, $f_{X_{12}|U_1}''(\cdot|u)=\partial^2 f_{X_{12}|U_1}(\cdot|u)/\partial x^2$. Similarly, $f_{X_{12}|U_1,U_2}'(\cdot|u_1,u_2)=\partial f_{X_{12}|U_1,U_2}(\cdot|u_1,u_2)/\partial x$, and $f_{X_{12}|U_1,U_2}''(\cdot|u_1,u_2)=\partial^2 f_{X_{12}|U_1,U_2}(\cdot|u_1,u_2)/\partial x^2$.} We first study the uniform convergence rates of dyadic KDE under the following assumptions.
\begin{assumption}[Uniform convergence rates]\label{a:rates}
Suppose that { $\calX \subsetneq \mathrm{supp}(X_{12})$} is a compact interval, and suppose the following are satisfied:
\begin{enumerate}[(i)]
\item We observe $(X_{ij})_{(i,j)\in I_n}$, a subset of $( X_{ij})_{(i,j) \in I_{\infty}}$ that is generated following (\ref{eq:Aldous-Hoover}) with $\Var(X_{12})>0$.
\item In an open neighborhood of $\calX$, $f(\cdot)$ exists and is at least twice continuously differentiable and $\|f''\|_\calX =O(1)$, $f_{X_{12}|U_1}(\cdot|U)$ exists and is almost surely at least twice continuously differentiable, $\|f_{X_{12}|U_1}''\|_{\calX\times [0,1]} =O(1)$, $f_{X_{12}|U_1,U_2}(\cdot|U_1,U_2)$ exists and is almost surely continuously differentiable and $\|f_{X_{12}|U_1,U_2}'\|_{\calX\times [0,1]^2} =O(1)$.
\item The kernel function $K$ is symmetric, right (or left) continuous, non-negative, of bounded variation, and satisfies $\|K\|_\infty=O(1)$, $\int K(u)du=1$, { $\int uK(u)du=0$}, and $0<\int u^2 K(u)du<\infty$.
\item The sequence of bandwidths satisfies $h_n\to 0$ and { $nh_n\to \infty$}.
\end{enumerate}
\end{assumption}
{ The support condition in Assumption \ref{a:rates} restricts the scope to be on a compact interval $\mathcal X$ that is strictly contained in the support of $X_{12}$. Assumption \ref{a:rates}(i) requires the observations to be generated following the dyadic structure discussed in Section \ref{sec:model_estimator}. Assumption \ref{a:rates}(ii) assumes existence of the conditional densities conditional on vertex-specific latent shocks and imposes standard smoothness conditions on the unknown density and conditional densities. Assumption \ref{a:rates}(iii) assumes a smooth second-order kernel. Finally, Assumption \ref{a:rates}(iv) requires the sequence of bandwidths to be converging to zero in an adequate range.
}
\begin{theorem}[Uniform convergence rates for dyadic KDE]\label{thm:unif_rate}
Suppose Assumption \ref{a:rates} holds, then with probability at least $1-o(1)$,
\begin{align*}
\sup_{x\in \calX}\left|\hat f_n(x)-f(x)\right|\lesssim h_n^2
+\frac{1}{n^{1/2}}
\end{align*}
if $\inf_{x\in\calX}\Var\left(f_{X_{12}|U_1}(x|U_1)\right)\ge\underline L>0$ for some $\underline L>0$ (called the non-degenerate case). Otherwise (called the degenerate case), with probability at least $1-o(1)$,
\begin{align*}
\sup_{x\in \calX}\left|\hat f_n(x)-f(x)\right|\lesssim h_n^2 +\frac{1}{n}
+\sqrt{\frac{\log(1/h_n)}{n^2h_n}}.
\end{align*}
\end{theorem}
A proof can be found in Section D.1 of the Supplementary Appendix.
\begin{remark}[Convergence rates]
{ Theorem \ref{thm:unif_rate} shows the uniform convergence rates under both non-degenerate and degenerate scenarios.
In both cases, the bias is $O(h_n^2)$. In case of degeneracy, the uniform rate coincides with the independent sampling situation with sample size $2{n\choose 2}$. { Besides the bias term}, the component $\sqrt{\frac{\log(1/h_n)}{n^2h_n}}$ is the part in which the kernel plays a role (in adding the factor of $\frac{\log(1/h_n)}{h_n}$). In the non-degenerate case, the stochastic component is $O_p(n^{-1/2})$ because its leading term consists of a sample average of $\{\E[h_n^{-1 }K((x-X_{ij})/h_n)|U_i]\}_{i=1}^n$, and these elements are well-behaved under our assumptions.
These results agree with the observations made in \cite{GrahamNiuPowell2019} for the pointwise behaviors of the dyadic KDE. In a related paper, \cite{graham2021minimax} consider nonparametric regression with dyadic data and establish uniform convergence rates for a Nadaraya-Watson type estimator under a different setting. More explicitly, their rates are nonparametric rates since the regressors considered are vertex-specific.
}
\end{remark}
\begin{remark}[Recent improvement on the uniform convergence rates]\label{rem:recent}
The recent paper of \cite{cattaneo2022} further improved upon the result by obtaining uniform convergence rates for dyadic KDE under a more general setting. Specifically, their uniformity is over the whole support for
compactly supported
data by carefully addressing the boundary issues, while Theorem \ref{thm:unif_rate} is established on a compact interval strictly contained in the support of $X_{12}$.
They further show that dyadic KDE can be minimax optimal under some regularity conditions.
\end{remark}
\subsection{Inference for complete dyadic data with JEL and modified JEL}
We now introduce JEL-based inference procedures for dyadic KDE.
Denote the leave-one-out and leave-two-out index sets $I_n^{(i)}=\{(\iota,\jmath)\in I_n: \iota,\jmath\not\in \{i\}\}$, $I_n^{(i,j)}=\{(\iota,\jmath)\in I_n: \iota,\jmath\not\in\{ i,j\}\}$.
{ For a given $\theta$, define
\begin{align*}
&\hat \theta={n\choose 2}^{-1}\sum_{(i,j)\in I_{n}} \frac{1}{h_n}K\left(\frac{x-{X_{ij}}}{h_n}\right),\quad \hat \theta^{(i)}={n-1\choose 2}^{-1}\sum_{(k,l)\in I_{n}^{(i)}}\frac{1}{h_n} K\left(\frac{x-X_{kl}}{h_n}\right), \: i=1,...,n,\\
&\hat \theta^{(i,j)}={n-2\choose 2}^{-1}\sum_{(k,l)\in I_{n}^{(i,j)}}\frac{1}{h_n} K\left(\frac{x-X_{kl}}{h_n}\right),\quad 1\le i<j\le n,
\end{align*}
and subsequently $S(\theta)=\hat \theta-\theta$, $S^{(i)}(\theta)=\hat \theta^{(i)}-\theta$, and $S^{(i,j)}(\theta)=\hat \theta^{(i,j)}-\theta$.
}
Furthermore, define the pseudo true value for JEL as
\begin{align*}
V_i(\theta)=nS(\theta)-(n-1) S^{(i)}(\theta).
\end{align*}
For modified JEL, define
\begin{align}
&Q_{ij}=\frac{n-3}{n-1}\left[nS(\theta)-(n-1)\{S^{(i)}(\theta)+S^{(j)}(\theta)\}+(n-2)S^{(i,j)}(\theta)\right],\quad i<j,\nonumber\\
&\Gamma^2 = \frac{1}{n}\sum_{i=1}^n V_i(\theta)^2,\qquad \Gamma_m^2= \frac{1}{n}\sum_{i=1}^n V_i(\hat \theta)^2 - \frac{1}{n}\sum_{i=1}^{n-1}\sum_{j = i+1}^n Q_{ij}^2 \label{eq:jack_var}.
\end{align}
For each value of $\theta$, define the pseudo true value for modified JEL by
\begin{align*}
V_i^m(\theta)=V_i(\hat\theta)-\Gamma \Gamma_m^{-1}\{V_i(\hat\theta)-V_i(\theta)\}.
\end{align*}
Now, define the modified JEL function for $\theta$ by
\begin{align*}
\ell^m (\theta)&=-2\sup_{w_1,...,w_n} \sum_{i=1}^n \log(n w_i),\\
&\text{s.t.}\quad w_i\ge 0, \quad \sum_{i=1}^n w_i=1,\quad \sum_{i=1}^n w_i V_i^m (\theta)=0,
\end{align*}
and $\ell(\theta)$ is defined analogously with $V_i^m$ replaced by $V_i$.
The Lagrangian dual problems are
\begin{align*}
\ell(\theta)=2\sup_\lambda \sum_{i=1}^n \log( 1+\lambda V_i(\theta) ),\qquad \ell^m(\theta)=2\sup_\lambda \sum_{i=1}^n \log( 1+\lambda V^m_i(\theta) ).
\end{align*}
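To make the construction concrete, the following Python sketch (ours, not optimized) computes $V_i(\theta)$, $V_i^m(\theta)$, $\Gamma^2$, and $\Gamma_m^2$ from the kernel values $K_{ij,n}$, and evaluates the dual criterion $2\sup_\lambda\sum_i\log(1+\lambda V_i)$ numerically; it assumes the pseudo true values take both signs so that the dual problem has an interior solution.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def jel_quantities(Kmat, theta):
    """Kmat[i, j] holds K_{ij,n} for i < j."""
    n = Kmat.shape[0]
    iu = np.triu_indices(n, k=1)
    total, n_pairs = Kmat[iu].sum(), len(iu[0])
    theta_hat = total / n_pairs
    row = np.zeros(n)                     # sum of K over pairs containing i
    for i, j in zip(*iu):
        row[i] += Kmat[i, j]
        row[j] += Kmat[i, j]
    th_i = (total - row) / (0.5 * (n - 1) * (n - 2))        # leave-one-out
    V = n * (theta_hat - theta) - (n - 1) * (th_i - theta)  # V_i(theta)
    V_hat = -(n - 1) * (th_i - theta_hat)  # V_i(theta_hat); S(theta_hat) = 0
    Q2 = 0.0
    for i, j in zip(*iu):   # theta cancels in Q_ij, so plug-in values suffice
        th_ij = (total - row[i] - row[j] + Kmat[i, j]) \
                / (0.5 * (n - 2) * (n - 3))
        Q = (n - 3) / (n - 1) * (n * theta_hat
                                 - (n - 1) * (th_i[i] + th_i[j])
                                 + (n - 2) * th_ij)
        Q2 += Q ** 2
    Gamma2 = np.mean(V ** 2)
    Gamma2_m = np.mean(V_hat ** 2) - Q2 / n
    Vm = V_hat - np.sqrt(Gamma2 / Gamma2_m) * (V_hat - V)   # V_i^m(theta)
    return V, Vm, theta_hat, Gamma2, Gamma2_m

def el_loglik(V):
    """2 sup_lambda sum_i log(1 + lambda V_i) over the feasible interval."""
    lo = max(-1.0 / v for v in V if v > 0) + 1e-10
    hi = min(-1.0 / v for v in V if v < 0) - 1e-10
    deriv = lambda lam: np.sum(V / (1 + lam * V))  # strictly decreasing
    lam = brentq(deriv, lo, hi)
    return 2.0 * np.sum(np.log(1 + lam * V))
\end{verbatim}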
{ We say $f$ is non-degenerate at $x$ if $\Var\left(f_{X_{12}|U_1}(x|U_1)\right)\ge\underline L>0$, which means that the vertex-specific shock $U_1$ affects the conditional density $f_{X_{12}|U_1}$. Before stating the next result, which characterizes the asymptotics of JEL and modified JEL for inference, we make the following assumptions.}
\begin{assumption}[JEL {and modified JEL} for complete data]\label{a:JEL_complete}
Suppose that
\begin{enumerate}[(i)]
\item $f(x)>0$ at the design point of interest $x$ { which lies in $\calX\subsetneq\mathrm{supp}(X_{12})$}.
\item $K$ has bounded support, $nh_n^2\to \infty$, and { $nh_n^{5/2}\to 0$}.
\end{enumerate}
\end{assumption}
{
Assumption \ref{a:JEL_complete} requires the design point $x$ to be an interior point of the support of $X_{12}$. Assumption \ref{a:JEL_complete} (ii) imposes constraints on the convergence rate of the sequence of bandwidths. Note that the lower bound $nh_n^2\to \infty $ is a sufficient condition for Lemma 2 in the appendix, which is subsequently used for bounding the linearization errors for the JEL functions $\ell$ and $\ell^m$. This is specific to empirical likelihood-based methods and is not required by procedures such as the Wald test with FG variance estimator proposed in \cite{GrahamNiuPowell2019}. The restriction $nh_n^{5/2}\to 0$ here is used only in the degenerate case and can be relaxed to $nh_n^4\to 0$ in the non-degenerate case.
Note that the bandwidth conditions accommodate the MSE optimal rate of $h_n=O(n^{-2/5})$ if the researcher is only concerned with the non-degenerate case.}
\begin{theorem}[Wilks' theorem for { JEL and} modified JEL for dyadic KDE with complete data]\label{thm:asymptotic_dist}
Suppose Assumptions \ref{a:rates} and \ref{a:JEL_complete} are satisfied, then
\begin{align*}
\ell^m(\theta)\stackrel{d}{\to} \chi_1^2.
\end{align*}
In addition, {
\begin{align*}
\ell(\theta)\stackrel{d}{\to}\begin{cases}
\chi_1^2, \text{ if $f$ is non-degenerate at $x$,}\\
\frac{1}{2}\chi_1^2, \text{ if $f$ is degenerate at $x$.}
\end{cases}
\end{align*}}
\end{theorem}
A proof can be found in Section D.2 of the Supplementary Appendix.
\begin{remark}[Asymptotic pivotality]\label{rem:pivotal}
Theorem \ref{thm:asymptotic_dist} shows that the modified JEL is pivotal regardless of whether $f$ is degenerate or not, while JEL is asymptotically pivotal only if $f$ is non-degenerate at $x$. Theorem \ref{thm:asymptotic_dist} implies that one can construct an approximate $1-\alpha$ confidence interval {
\begin{align*}
\mathcal R_\alpha=\{\theta: \ell^\bullet(\theta)\le c_\alpha \}
\end{align*}
for $\ell^\bullet\in\{\ell,\ell^m\}$},
where $c_\alpha$ is such that $\lim_{n\to \infty}P(\theta\in \mathcal R_\alpha)=P(\chi_1^2\le c_\alpha)=1-\alpha$.
\end{remark}
{
\begin{remark}[Conservatism of JEL]\label{rem:JEL}
An implication of Theorem \ref{thm:asymptotic_dist} is that under degeneracy, JEL is asymptotically conservative. Thus the tests and confidence intervals based on JEL are, although not always asymptotically precise, still asymptotically valid. On the other hand, although the mJEL is asymptotically precise, the simulations in Section \ref{sec:simulation} show that it tends to be oversized when the sample size is small. As such, JEL, with its simpler implementation, remains a practical option, especially if one prefers to be more conservative with size control.
\end{remark}
}
{
\begin{remark}[Uniform inference]\label{rem:uniform_inference}
Theorem \ref{thm:asymptotic_dist} focuses on pointwise inference. In \cite{cattaneo2022}, the authors pioneer uniform inference using strong approximation and boundary-adaptive kernels. They further develop a robust bias-correction procedure based on the bias-correction idea of \cite{calonico2018effect}. Their results are based on $t$-statistics. How to generalize an empirical likelihood-type procedure to provide uniform inference for dyadic KDE remains an open question.
\end{remark}
}
\begin{remark}[Modified jackknife variance estimator]\label{rem:jack_var}
A direct implication of the proof of Theorem \ref{thm:asymptotic_dist} is the consistency of $\Gamma_m^2$ in (\ref{eq:jack_var}), a bias-corrected
jackknife variance estimator of \cite{EfronStein1981} adapted to our context.
Based on this variance estimator, one can construct an alternative approximate $1-\alpha$ confidence interval
for $\theta$ as $\left[\hat{\theta}\pm { n^{-1/2}}z_{\alpha/2}\Gamma_m\right]$, where $z_{\alpha/2}$ is the $(1-\alpha/2)$-th quantile of the standard normal random variable. This is formalized in the following corollary.
\end{remark}
\begin{corollary}[Asymptotic normality with modified jackknife variance estimator]\label{thm:jk_var}
Suppose Assumptions \ref{a:rates} and \ref{a:JEL_complete} are satisfied, then
\begin{align*}
\sqrt{n}\Gamma_m^{-1}(\hat\theta - \theta)\stackrel{d}{\to} N(0,1).
\end{align*}
\end{corollary}
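In code, the interval in Remark \ref{rem:jack_var} is immediate once $\hat\theta$ and $\Gamma_m^2$ have been computed (e.g., as \texttt{theta\_hat} and \texttt{Gamma2\_m} in the sketch above, with \texttt{n} the number of vertices):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

alpha = 0.05
z = norm.ppf(1 - alpha / 2)
half_width = z * np.sqrt(Gamma2_m / n)
ci = (theta_hat - half_width, theta_hat + half_width)
\end{verbatim}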
\subsection{Inference for incomplete dyadic data with JEL and modified JEL}\label{sec:incomplete_data}
Let us now consider dyadic KDE with randomly missing incomplete data.
Suppose the researcher observes
\begin{align*}
X_{ij}^*=Z_{ij}X_{ij},
\end{align*}
where the random variables $Z_{ij}\stackrel{d}{=}\text{Bernoulli}(p_n)$ that determine whether each edge is observed are i.i.d. and assumed to be independent of $(X_{ij})_{(i,j)\in I_{n}}$ for some unknown probability of observation $p_n\in(0,1)$ { which may, but need not, be fixed in $n$.} Define $\hat N=\hat p_n {n\choose 2}$, $\hat N_1=\hat p_n {n-1\choose 2}$, and $\hat N_2=\hat p_n {n-2\choose 2}$, with $\hat p_n={n\choose 2}^{-1}\sum_{(i,j)\in I_{n}} Z_{ij}$ being an estimate for $p_n$. Now let $\mathbb I_n=\{(i,j)\in I_{n}:Z_{ij}=1\}$, $\mathbb I_n^{(k)}=\{(i,j)\in I_{n}:Z_{ij}=1, i,j\ne k\}$ and $\mathbb I_n^{(k,\ell)}=\{(i,j)\in I_{n}:Z_{ij}=1, i,j\not\in\{ k,\ell\}\}$. { For a fixed $\theta=f(x)$, define
the incomplete dyadic KDE estimator and its leave-out counterparts by
\begin{align*}
&\hat \theta_{\text{inc}}=\frac{1}{\hat N}\sum_{(i,j)\in \mathbb I_n} \frac{1}{h_n}K\left(\frac{x-{X_{ij}}}{h_n}\right),\quad \hat \theta_{\text{inc}}^{(i)}=\frac{1}{\hat N_1}\sum_{(k,l)\in\mathbb I_n^{(i)}}\frac{1}{h_n} K\left(\frac{x-X_{kl}}{h_n}\right),\: i=1,...,n,\\
&\hat \theta_{\text{inc}}^{(i,j)}=\frac{1}{\hat N_2}\sum_{(k,l)\in \mathbb I_n^{(i,j)}}\frac{1}{h_n} K\left(\frac{x-X_{kl}}{h_n}\right),\quad 1\le i<j\le n,
\end{align*}
and the JEL pseudo true value for incomplete dyadic data by
\begin{align*}
\hat V_i(\theta)=n\hat S(\theta)-(n-1) \hat S^{(i)}(\theta),
\end{align*}
where $\hat S(\theta)=\hat \theta_{\text{inc}}-\theta$, $\hat S^{(i)}(\theta)=\hat \theta_{\text{inc}}^{(i)}-\theta$.}
In addition, for each $i< j$, define
\begin{align}
&\hat S^{(i,j)}(\theta)=\hat \theta_{\text{inc}}^{(i,j)}-\theta,\nonumber\\
&\hat Q_{ij}=\frac{n-3}{n-1}\left[n\hat S(\theta)-(n-1)\{\hat S^{(i)}(\theta)+\hat S^{(j)}(\theta)\}+(n-2)\hat S^{(i,j)}(\theta)\right],\nonumber\\
&\hat\Gamma^2 = \frac{1}{n}\sum_{i=1}^n \hat V_i(\theta)^2,\qquad \hat\Gamma^2_m= \frac{1}{n}\sum_{i=1}^n \hat V_i(\hat \theta_{\text{inc}})^2 - \frac{1}{n}\sum_{i=1}^{n-1}\sum_{j=i+1}^n \hat Q_{ij}^2. \nonumber
\end{align}
For each $\theta$, define the modified JEL pseudo true value for incomplete dyadic data by
\begin{align*}
\hat V_i^m(\theta)=\hat V_i(\hat\theta_{\text{inc}})-\hat \Gamma \hat \Gamma_m^{-1}\{\hat V_i(\hat\theta_{\text{inc}})- \hat V_i(\theta)\}.
\end{align*}
Now the modified JEL function for incomplete dyadic data for $\theta$ can be defined by
\begin{align*}
\hat\ell^m(\theta)&=-2\sup_{w_1,...,w_n} \sum_{i=1}^n \log(n w_i),\\
&\text{s.t.}\quad w_i\ge 0, \quad \sum_{i=1}^n w_i=1,\quad \sum_{i=1}^n w_i \hat V_i^m (\theta)=0.
\end{align*}
and $\hat\ell(\theta)$ is defined analogously with $\hat V_i^m$ replaced by $\hat V_i$.
The Lagrangian dual problems are now
\begin{align*}
\hat\ell(\theta)=2\sup_\lambda \sum_{i=1}^n \log( 1+\lambda \hat V_i(\theta) ),\qquad
\hat\ell^m(\theta)=2\sup_\lambda \sum_{i=1}^n \log( 1+\lambda \hat V^m_i(\theta) ).
\end{align*}
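Relative to the complete-data case, the only change is that sums run over observed edges and are normalized by $\hat N$, $\hat N_1$, $\hat N_2$. A Python sketch of the point estimate $\hat\theta_{\text{inc}}$ is:
\begin{verbatim}
import numpy as np

def dyadic_kde_incomplete(x, X, Z, h, kernel):
    """Z[i, j] = 1 if edge (i, j) is observed; only i < j entries are used."""
    n = X.shape[0]
    iu = np.triu_indices(n, k=1)
    obs = Z[iu].astype(bool)
    p_hat = obs.mean()
    N_hat = p_hat * len(iu[0])            # \hat p_n * binom(n, 2)
    K_vals = kernel((x - X[iu][obs]) / h) / h
    return K_vals.sum() / N_hat
\end{verbatim}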
\begin{assumption}[JEL and modified JEL for incomplete data]\label{a:JEL_incomplete}
Suppose $Z_{ij}\stackrel{d}{=}\text{Bernoulli}(p_n)$ are i.i.d. and independent of $(X_{ij})_{(i,j)\in I_{n}}$ for some unknown sequence $p_n\in(0,1)$ with $nh_n p_n \to \infty$.
\end{assumption}
Assumption \ref{a:JEL_incomplete} imposes the random missing structure on the data and establishes a lower bound on the rate of $p_n$, the probability of observing an edge, to ensure the presence of enough data to establish asymptotic theory. Note that this assumption accommodates sequences of $p_n$ that converge to zero slowly enough. { The condition $nh_n p_n\to \infty$ ensures that the effective sample size increases asymptotically. }
The following result is an incomplete data counterpart of Theorem \ref{thm:asymptotic_dist}.
\begin{theorem}[Wilks' theorem for { JEL and} modified JEL for dyadic KDE with incomplete data]\label{thm:asymptotic_dist_incomplete}
Suppose Assumptions \ref{a:rates}, \ref{a:JEL_complete} and \ref{a:JEL_incomplete} are satisfied, then
\begin{align*}
\hat\ell^m(\theta)\stackrel{d}{\to} \chi_1^2.
\end{align*}
In addition, if $f$ is non-degenerate at $x$ or $p_n\to 0$, then
\begin{align*}
\hat\ell(\theta)\stackrel{d}{\to}\chi_1^2.
\end{align*}
\end{theorem}
A proof can be found in Section D.3 of the Supplementary Appendix.
{
Theorem \ref{thm:asymptotic_dist_incomplete} states the asymptotic distributions of the proposed JEL and modified JEL statistics for incomplete data. As in Theorem \ref{thm:asymptotic_dist}, the modified JEL is asymptotically pivotal regardless of the asymptotic regime. However, with incomplete data, the asymptotic pivotality of the proposed JEL statistic happens not only under nondegeneracy but also when the proportion of observed data goes to zero slowly asymptotically. { In such a scenario, the term that consists of the randomness induced by the missing process (which is conditionally independent) dominates the original leading asymptotic term, which is potentially not pivotal. } Note that the construction of confidence intervals in Remark \ref{rem:pivotal} can be adapted with { $\ell^\bullet\in\{\hat \ell^m,\hat \ell\}$}. }
\section{Simulation studies}\label{sec:simulation}
We consider four sets of data generating processes (DGPs) in our simulations. The first two follow
\begin{align*}
X_{ij}=\beta U_i U_j+U_{\{i,j\}}
\end{align*}
with $U_{\{i,j\}}\stackrel{i.i.d.}{\sim}N(0,1)$ and $U_i=-1$ with probability $1/3$ and $U_i=1$ otherwise. We set $\beta\in\{0,1\}$, where $\beta=1$ is the sufficiently non-degenerate DGP considered in \cite{GrahamNiuPowell2019}. This DGP has density:
\begin{align*}
f(x)=\frac{5}{9}\phi(x-1)+\frac{4}{9}\phi(x+1),
\end{align*}
where $\phi$ is the density of a standard normal random variable. Meanwhile, $\beta=0$ corresponds to the degenerate case where the true DGP is i.i.d. standard normal with density $f=\phi$. { The last two sets of DGPs are the incomplete data counterparts of the first two, with the same DGPs except that not all of the edges are observed following the set-up in Section \ref{sec:incomplete_data}. For these cases, we set $p_n=0.5$.}
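These DGPs can be simulated directly; the following Python sketch (ours) generates one dyadic sample, with \texttt{beta = 0} giving the degenerate case and \texttt{p\_n < 1} giving incomplete data.
\begin{verbatim}
import numpy as np

def simulate_dgp(n, beta=1.0, p_n=1.0, seed=0):
    rng = np.random.default_rng(seed)
    U = np.where(rng.random(n) < 1 / 3, -1.0, 1.0)   # vertex-specific shocks
    X = np.zeros((n, n))
    Z = np.zeros((n, n), dtype=int)
    for i in range(n - 1):
        for j in range(i + 1, n):
            X[i, j] = beta * U[i] * U[j] + rng.standard_normal()
            Z[i, j] = int(rng.random() < p_n)        # 1 if the edge is observed
    return X, Z
\end{verbatim}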
We utilize the rule of thumb bandwidths $h_n^{S}$ and $h_n^{S,\text{inc}}$ proposed in (A.1) and (A.2) in Section A of the Supplementary Appendix, respectively. For complete dyadic data, we consider five alternative inference methods that are theoretically robust to dyadic clustering: (i) Wald statistic with FG variance estimator for dyadic KDE proposed in \citet[Equation (22)]{GrahamNiuPowell2019} (FG), (ii) Wald statistic with the leading term of FG (lFG), (iii) jackknife empirical likelihood (JEL), (iv) Wald statistic with modified jackknife variance estimator (mJK) from Remark \ref{rem:jack_var}, and (v) modified jackknife empirical likelihood (mJEL). Note that (iii)-(v) are studied in this paper. { For incomplete dyadic data, we consider the incomplete counterparts of (iii) and (v) from Section \ref{sec:incomplete_data}.}
Throughout the simulation studies, we set the design point $x=1.675$, consistent with the simulation study in \cite{GrahamNiuPowell2019}. Each simulation is iterated $5,000$ times.\\
\par
Tables \ref{table:dyadic_cov_prob}, \ref{table:iid_cov_prob}, \ref{table:incomplete_dyadic}, and \ref{table:incomplete_iid} show the simulation results under these different settings.
In general, the simulation supports our theoretical findings. { For complete data, under both the dyadic and i.i.d. DGPs, both mJEL and mJK-based confidence intervals enjoy close to nominal coverage rates even with moderate sample sizes, while the FG estimator is oversized with small sample sizes and converges to the nominal size when $n$ is large. The mJK-based intervals are slightly more oversized than mJEL-based intervals for small sample sizes but they perform similarly when $n$ grows larger. The original JEL and lFG estimators are conservative but asymptotically valid under the dyadic DGP. Consistent with Remark 5, the JEL estimator is always severely conservative under the i.i.d. DGP, as is the lFG estimator. The behaviors of JEL and mJEL-based intervals under incomplete data are similar to those we observed with complete data; JEL is conservative but asymptotically converging to the correct size in the non-degenerate case while mJEL remains quite precise under both degenerate and non-degenerate DGPs.}
\section{Empirical application: Airport congestion}\label{sec:application}
Flight delays caused by airport congestion are a familiar experience to the flying public. Aside from the frustration experienced by affected passengers, these delays are an important cause of fuel wastage and result in the release of criteria pollutants such as nitrogen oxides, carbon monoxide and sulfur oxides. As discussed in \cite{SchlenkerWalker2016}, these releases harm the health of people living near airports. To mitigate these harms, it is important to understand the distribution of congestion across routes, or flights between pairs of airports.\\
\par
In this empirical application, we apply the dyadic KDE and JEL and modified JEL procedure to understand the distribution of airport congestion, as measured by taxi time, across pairs of airports in the United States. The taxi time of a flight is defined as the sum of taxi-out time (the time between the plane leaving its parking position and taking off) and taxi-in time (the time between the plane landing and parking). In this context, the edges $X_{ij}$ and $X_{jk}$ represent flights between airport $i$ and airport $j$ and flights between airport $j$ and airport $k$ respectively; the fact that flights share a common airport $j$ induces dependence between $X_{ij}$ and $X_{jk}$. Therefore, this is a natural setting to apply dyadic KDE and modified JEL.\\
\par
Data on flight taxi times are obtained from the US DOT Bureau of Transportation Statistics (BTS) \textit{Reporting Carrier On-Time Performance} dataset, which compiles data on US domestic non-stop flights from all carriers which earn at least 0.5\% of scheduled domestic passenger revenues. From the flight-level data for June 2020, we collapse taxi time into various summary statistics (mean, 95th percentile, maximum) for each origin-destination airport pair, with the summary statistics being taken over flights in both directions so that the resulting network is undirected. Noting that a missing edge means that there were no non-stop flights between the vertices the edge connects, we apply the KDE, JEL, and modified JEL to the resulting network. For simplicity, the sample is restricted to the 100 largest airports by number of departing flights.\\
\par
The histograms, dyadic KDE estimates, pointwise JEL and pointwise modified JEL 95\% confidence intervals for the mean, 95th percentile and maximum taxi times between airport pairs in June 2020 are shown in Figure 1A-1C. The intervals are obtained by numerically inverting the test statistic at each design point (each whole minute in the range of each statistic). { Since both the JEL and modified JEL confidence intervals in Figure 1 are pointwise, a comparison between JEL and mJEL can only be made for each of the finitely many points at which the confidence intervals were calculated. Note that the design points are the same for the JEL and mJEL graphs, making a point-by-point comparison possible.}\\
\par
For applied researchers, the results demonstrate how studying only the mean can be incomplete; unlike the mean, the 95th percentile and (particularly) maximum travel times exhibit positive skewness. Routes with taxi times lying in the positively skewed region cause the most problems, but this distinction is lost when only studying means. For all these statistics, the modified JEL procedure which we propose provides relatively precise estimates even under small sample sizes (100 vertices) and a large proportion of missing edges (72\% missing).
\section{Conclusion}
This paper studies the asymptotic properties of dyadic kernel density estimation. We establish uniform convergence rates under general conditions. For inference, we propose a modified jackknife empirical likelihood procedure which is valid regardless of degeneracy of the underlying DGP. We further extend the results to cover incomplete or missing at random dyadic data. Simulation studies show robust finite sample performance of the modified JEL inference for dyadic KDE in different settings. We illustrate the method via an application to airport congestion.\\
\par
Despite our focus on density estimation, the inference approach we take can be extended to cover the local regression as in \cite{graham2021minimax}, the local polynomial density estimation of \cite{CattaneoMaJansson2020JASA} or local polynomial distributional regression of \cite{CattaneoMaJansson2020wp} under dyadic data. Another potential direction is to study incomplete data under unconfoundedness or other non-random missing models.
{ Finally, in light of the recent paper of \cite{cattaneo2022} which pioneers uniform inference for dyadic KDE using Wald statistics, it would be interesting to consider ways to adapt modified JEL for uniform inference as well. }
Investigating the modified JEL under these assumptions provides interesting directions for future research.
\clearpage
\section{Introduction}
Question Answering (QA) systems enable natural language platforms to interact with Knowledge Bases. These QA systems return direct and more specific answers to the questions asked. In recent years, with the increase in the popularity and the use of knowledge bases (KB) such as Google's Knowledge Graph \cite{singhal2012introducing}, YAGO \cite{10.1145/1242572.1242667}, DBPedia \cite{lehmann2015dbpedia}, and Freebase \cite{bollacker2008freebase}, people are more interested in seeking effective methods to access these knowledge bases. Most such knowledge bases adopt the Resource Description Framework (RDF) as their data format and contain billions of SPO (subject, predicate, object) triples \cite{klyne2009resource}.
There are several languages designed for querying such large KBs, including SPARQL \cite{seaborne2008sparql}, Xcerpt \cite{10.1007/978-3-540-27775-0_33}, and RQL \cite{inproceedings}. However, learning these languages adds a limitation, as one needs to be familiar with the query language, its syntax, its semantics, and the ontology of the knowledge base.
By contrast, KB-QA, which takes natural language as its query, is a more user-friendly solution and has become a research focus in recent years \cite{unger2014introduction}. There are two main research streams for the task of QA: Semantic Parsing-based systems (SP-based) \cite{zettlemoyer2009learning}, \cite{zettlemoyer2012learning,cui2017kbqa}, and Information Retrieval-based systems (IR-based) \cite{yao2014information, hao2017end},\cite{ bordes2014question, bordes2015large}. SP-based methods address the QA problem by constructing a semantic parser that converts the natural language question into structured expressions such as logical forms \cite{alshawi1989logical} and then runs the query on the knowledge base to obtain the answer. SP-based methods consist of three modules: (1) entity linking, which recognizes all entity mentions in a question and links each mention to an entity in the KB; (2) predicate mapping, which finds candidate predicates in the KB for the question; (3) answer selection, which converts the candidate entity-predicate pairs into a query statement and queries the knowledge base to obtain the answer. IR-based methods \cite{yao2014information, hao2017end},\cite{ bordes2014question, bordes2015large} focus on mapping answers and questions into the same embedding space, where one can query any KB independently of its schema, without requiring any grammar or lexicon. IR-based methods are more flexible and require less supervision compared with SP-based approaches \cite{zettlemoyer2009learning},\cite{zettlemoyer2012learning,cui2017kbqa}.
With the advancement of embedding techniques that can capture semantics, deep neural networks have been applied in several areas of Natural Language Processing (NLP) \cite{10.1007/978-3-642-77189-7_5} and Natural Language Understanding (NLU) \cite{winograd1983language}. In the field of KB-QA, under the IR-based umbrella, embedding-based approaches \cite{bordes2014open} \cite{hao-etal-2017-end} have been proposed. However, these approaches face two limitations. First, these models encode different components separately without learning a representation of the whole KB; hence, they are not able to capture compositional semantics from a global perspective. Second, the performance of Long Short Term Memory \cite{sak2014long} (LSTM) networks, Bi-directional LSTMs \cite{10.5555/1986079.1986220} (BiLSTMs), and Convolutional Neural Networks \cite{fukushima1988neocognitron} (CNNs) is heavily dependent on large training data sets, which are often not available in practice. In recent years, language models \cite{devlin2018bert}\cite{radford2019language}\cite{radford2018improving}\cite{peters2018deep} pre-trained on large-scale unlabeled corpora have shown their advantage in mining prior linguistic knowledge automatically, which indicates a possible way to deal with the above problems.
In comparison to previous embedding-based approaches, we focus on exploiting pre-trained language models for the IR-based KB-QA task. We use BERT \cite{devlin2018bert} pre-trained language model embeddings to encode our question and candidate answer contexts.
We exploit a multi-head attention encoder based on a Convolutional Neural Network (CNN) encoder \cite{DBLP:journals/corr/VaswaniSPUJGKP17} \cite{kim2014convolutional} to fine-tune the BERT pre-trained embeddings specifically for the KB-QA task. Since we use a language model trained on the English Wikipedia and BooksCorpus \cite{devlin2018bert} for our question and KB representations, the Out-Of-Vocabulary (OOV) problem is largely mitigated. The interrelationships between the questions and the underlying KB are stored as the context for our model. This context is used to filter the candidate answers by applying cross attention between the asked question and the KB.
The contributions of this paper are summarized as follows:
(1) we propose a method called \textit{Language Model based Knowledge Base Question Answering} (LM-KBQA), which exploits the BERT pre-trained language model embeddings \cite{devlin2018bert} and thus eliminates the need for Recurrent Neural Network (RNN) architectures (LSTM, Bi-LSTM) to capture contextual representations; (2) the method is based on a CNN encoder with a self multi-head attention mechanism to fine-tune the BERT embeddings for the KB-QA task; (3) our results demonstrate the effectiveness of the proposed approach on the open Web-Questions dataset \cite{berant-etal-2013-semantic}.
\section{Related Work}
Over the past few years there has been growing research on KB-QA, shaping an interactive paradigm. This enables users to take advantage of the expressive power of web-scale knowledge while hiding its complexity behind an easy-to-use interface. At the same time, the abundance of information has led to a heterogeneous data landscape where QA systems struggle to keep up with the quantity, variety, and veracity of the underlying knowledge. In general, the most popular methods for KB-QA can be divided into two classes: SP-based and IR-based.
Semantic parsing (SP) based approaches focus on constructing a semantic parse tree or an equivalent query structure that represents the semantic meaning of the question. For the logical representation of natural language questions, many methods have been implemented, such as query graphs \cite{yih-etal-2014-semantic,yih-etal-2015-semantic} or RDF query languages \cite{cui2017kbqa}\cite{hu2017answering}.
Information retrieval (IR) based systems try to obtain the target answer directly from the question and the KB, without explicitly considering the internal query structure. Various methods \cite{yao2014information, bordes2015large, dong-etal-2015-question,xu-etal-2016-question} have been proposed to select candidate answers and rank results.
The authors of \cite{bordes2014open} were the pioneers in using embedding-based models to solve the KB-QA problem. The queries and KB triples were represented as vectors in a low-dimensional vector space, and cosine similarity was then applied to find the most accurate answer. A Bag-Of-Words (BOW) methodology \cite{harris1954distributional} was used to generate one vector for all the query answers; training was pairwise, with negative samples randomly extracted from the KB. The authors of \cite{bordes2014question} improved this work with sub-graph embeddings, which are capable of answering more complex questions. Their main motivation was to include the maximum amount of information within the answer context, which encodes the surrounding sub-graph of the KB (e.g., answer path and context). The proposed sub-graph embeddings accommodate all the entities and relations that are connected to the answer entity, and the resulting vector is again obtained with BOW \cite{harris1954distributional} representations. In follow-up work \cite{bordes2015large}\cite{jain2016question}, memory networks \cite{weston2014memory} were used to store the candidate set and could be accessed iteratively to mimic multi-hop reasoning. Unlike the above methods, which mainly use a bag-of-words (BOW) approach to encode questions and KB contexts, \cite{dong-etal-2015-question,hao-etal-2017-end} apply more advanced network modules (e.g., CNNs and LSTMs). The approach proposed in \cite{yih-etal-2014-semantic} focuses on single-relation queries; the KB-QA task is divided into two stages, where first the topic entity of the question is found and then the rest of the question is matched against relations using CNNs. The authors of \cite{dong-etal-2015-question} represent questions with respect to the diverse features of answers using three columns of CNNs. The authors of \cite{xu-etal-2016-hybrid,xu-etal-2016-question} proposed multi-channel CNNs to extract the relations of KB triples while exploiting the free text of Wikipedia.
Most embedding-based approaches encode the question and the KB contexts independently. In NLP, \cite{bahdanau2014neural} were the first to apply an attention model. By jointly learning to align and translate, they enhanced the encoder-decoder Neural Machine Translation (NMT) framework; they argued that representing the source sentence by a single fixed vector is unreasonable and proposed a soft alignment method, which can be understood as an attention mechanism. \cite{dong-etal-2015-question} represented questions using three CNNs with different parameters when dealing with different answer contexts, including the answer path, answer context and answer type. With this inspiration, \cite{hao-etal-2017-end} proposed a cross-attention mechanism to encode questions according to various candidate answer aspects. Furthermore, \cite{chen2019bidirectional} proposed bidirectional attention, similar to that applied in machine reading comprehension \cite{xiong2016dynamic}\cite{seo2016bidirectional}\cite{wang2016machine}, by modeling the interactions between questions and KB contexts. We take a stride forward by incorporating BERT \cite{devlin2018bert} pre-trained language model embeddings to encode our question and candidate answer contexts from the KB. We fine-tune BERT for the KB-QA problem with a multi-head attention mechanism \cite{DBLP:journals/corr/VaswaniSPUJGKP17} based on a Convolutional Neural Network (CNN) \cite{kim2014convolutional} encoder. Our proposed architecture also uses a bi-directional cross-attention mechanism between the asked question and the KB answer contexts \cite{chen2019bidirectional}.
\section{Overview}
The goal of the KB-QA task can be formalized as follows: given a natural language question $q$, the model should return an entity set $A$ as the answer. The general architecture of a KB-QA system is illustrated in Figure 1. First, the candidate entity of the question is identified; then the candidate answers are generated from the Freebase \cite{bollacker2008freebase} knowledge base.
The questions are then encoded using the pre-trained BERT model \cite{devlin2018bert}, which is fine-tuned with a CNN encoder with multi-head attention \cite{DBLP:journals/corr/VaswaniSPUJGKP17} for relation understanding. Then, cross-attention neural networks \cite{hao-etal-2017-end} are employed to represent the question under the influence of the candidate answer aspects (e.g., entity types, relation paths) and vice versa. Finally, the similarity score between the question and each corresponding candidate answer is calculated, and the candidates with the highest score are selected as the final answer.
\begin{figure}[ht]
\centerline{\includegraphics[width=8cm, height=9cm]{fig1-2.png}}
\caption{General Architecture of a KB-QA.}
\label{fig}
\end{figure}
We refer to Freebase\cite{bollacker2008freebase} as our knowledge base. It has more than 3 billion facts, and is used as the supporting knowledge base for many Question Answering systems. In Freebase, the facts are represented as subject-predicate-object triples (s,p,o). Following is an example triple in Freebase: (/m/01428, /language/human\textunderscore language/countries\textunderscore spoken\textunderscore in, /m/03\textunderscore r3) which relates to the fact that the language spoken in the country of Jamaica is Jamaican English where "/m/03\textunderscore r3" denotes the country Jamaica while "/m/01428" denotes the language Jamaican English, and "/language/human\textunderscore language/countries\textunderscore spoken\textunderscore in" is a relationship between the Freebase entities.
\section{Our Approach}
\subsection{Candidate Generation}
Ideally, all entities in Freebase should be considered as candidate answers, but in practice this is computationally expensive and can be avoided. We use the Freebase API \cite{bollacker2008freebase} to find a named entity for each question $q$, which acts as the primary (topic) entity for the given question. For reference, the Freebase API resolves 86\% of the questions in the open Web-Questions dataset when the top-1 accuracy criterion \cite{yao2014information} is used.
For example, if the question ``what language Jamaican people speak?'' is passed through the Freebase API, it returns ``Jamaica'' as the main entity. After the named entity is identified with the Freebase API, we gather all the other entities that are directly associated with the named entity within 2 hops. These entities form the candidate set.
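As an illustration of this step, the following Python sketch collects every entity within two hops of the topic entity from a set of (subject, predicate, object) triples. The \texttt{triples} input, the hop limit and all identifiers are hypothetical and only meant to make the procedure concrete; this is not the exact code used in our system, which relies on the Freebase API and data dump.
\begin{verbatim}
from collections import defaultdict

def build_adjacency(triples):
    # triples: iterable of (subject, predicate, object) facts
    adj = defaultdict(set)
    for s, p, o in triples:
        adj[s].add(o)
        adj[o].add(s)
    return adj

def candidate_set(topic_entity, adj, hops=2):
    # Breadth-first expansion up to `hops` hops around the topic entity.
    frontier, seen = {topic_entity}, {topic_entity}
    for _ in range(hops):
        frontier = {n for e in frontier for n in adj[e]} - seen
        seen |= frontier
    return seen - {topic_entity}
\end{verbatim}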
\subsection{Question Representation}
Initially, we need to collect a representation for each word in the question; these representations contain all the information of the question. The question $Q$ is represented as $Q = (x_1, x_2, \ldots, x_n)$, where $x_i$ stands for the $i$-th word. The input natural language question $Q = \{ q_i \}_{i=1}^{|Q|}$ is thus represented as a sequence of word embeddings $q_i$, obtained from the BERT pre-trained language model and fine-tuned using an encoder layer.
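As a minimal sketch (assuming the Hugging Face \texttt{transformers} library; exact APIs may differ across versions), the contextual representations of the question tokens can be obtained as follows. This only illustrates the embedding step, not our full pipeline.
\begin{verbatim}
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

question = "what language jamaican people speak"
inputs = tokenizer(question, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One contextual vector per word-piece token: (1, seq_len, 768).
question_embeddings = outputs.last_hidden_state
\end{verbatim}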
\subsection{Context Representation}
For each candidate set from the KB, we generate a context for the question focusing on three aspects: the answer type, the answer path, and the answer context.
The answer type contains entity type information and helps narrow down the entities while ranking the answers. If a question uses the word ``who'', candidate answers that refer to a person are more likely to be correct. The answer path is a sequence of relations from a candidate to the named entity. For example, "/language/human\textunderscore language/countries\textunderscore spoken\textunderscore in" is a relation path between Jamaica and Jamaican English, which is stored as [human, language, countries, spoken, in] in Freebase.
The answer context is defined as the surrounding entities (e.g., neighbour nodes) of the candidate, which help to answer questions with constraints. We only consider the context nodes that overlap with the asked question. In particular, for each context node (i.e., a sequence of words) of a candidate, we first compute the longest common sub-sequence between the node and the question. If some common sub-sequence exists between the question and a neighbour entity, we store it as the answer context, which helps in answering questions with multiple entities. All this information is stored as the context, fed into the pre-trained BERT model to generate embeddings, and fine-tuned using an encoder layer.
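The overlap test above can be implemented with a standard longest-common-sub-sequence routine over word tokens. The following sketch is illustrative only; the helper names and the minimum-overlap threshold are our own assumptions.
\begin{verbatim}
def lcs_length(a, b):
    # Classic dynamic programming over two token sequences.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

def overlapping_context_nodes(question_tokens, context_nodes, min_overlap=1):
    # Keep only the neighbour nodes sharing a common sub-sequence with the question.
    return [node for node in context_nodes
            if lcs_length(question_tokens, node.split()) >= min_overlap]
\end{verbatim}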
\subsection{BERT Language Model}
BERT is a multi-layer bidirectional transformer encoder. Its input is a sequence of sub-word (WordPiece) tokens, either a single sentence or a pair of sentences separated by the special token [SEP]. Each input token representation is the sum of a token embedding, a segment embedding, and a position embedding. The first token of every sequence is always a special classification symbol ([CLS]), whose final hidden state is used for classification tasks. After fine-tuning, the pre-trained BERT \cite{devlin2018bert} representations have been used in several natural language processing tasks.
\subsubsection{Notation}
For an input sequence of word or sub-word tokens $X = (x_1, \ldots, x_n)$, BERT trains an encoder that produces a contextualized representation for each token: $\mathbf{x}_1, \ldots, \mathbf{x}_n = \mathrm{enc}(x_1, \ldots, x_n)$. Since the encoder is implemented with a deep transformer, positional embeddings $p_1, \ldots, p_n$ are used to mark the absolute position of each token in the input sequence.
\subsubsection{Masked Language Modeling (MLM)}
Masked Language Modeling (MLM), or the ``Cloze test'', is used to predict missing tokens from their placeholders in a given sequence: a subset of tokens $Y \subseteq X$ is sampled and substituted with placeholder tokens. In BERT's MLM implementation, $Y$ accounts for 15\% of the tokens in $X$; of those, 80\% are replaced with [MASK], 10\% are replaced with a random token (according to the unigram distribution), and 10\% are kept unchanged. The objective is to predict the original tokens in $Y$ from the modified input. BERT selects each token in $Y$ independently at random.
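For concreteness, the masking scheme can be sketched as follows (vocabulary handling and sub-word details are simplified, and the function names are hypothetical).
\begin{verbatim}
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", mask_prob=0.15):
    # Returns the corrupted sequence and the positions whose originals
    # the model must predict.
    corrupted, targets = list(tokens), {}
    for i, tok in enumerate(tokens):
        if random.random() < mask_prob:
            targets[i] = tok
            r = random.random()
            if r < 0.8:
                corrupted[i] = mask_token            # 80%: replace with [MASK]
            elif r < 0.9:
                corrupted[i] = random.choice(vocab)  # 10%: random token
            # remaining 10%: keep the original token unchanged
    return corrupted, targets
\end{verbatim}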
\subsubsection{Next Sentence Prediction (NSP)}
Next Sentence Prediction (NSP) predicts whether $X_B$ is the immediate continuation of $X_A$, given two sequences $X_A$ and $X_B$ as input. BERT first takes $X_A$ from the corpus, and then either reads $X_B$ from where $X_A$ terminated or randomly samples $X_B$ from a different point in the corpus. A special token [SEP] is used to separate the two sequences, and a special token [CLS] is prepended to the pair to form the input. The hidden state of [CLS] is used to decide whether $X_B$ actually follows $X_A$ in the corpus.
\subsection{Embedding Layer}
To embed the question and the KB context we use the pre-trained BERT \cite{devlin2018bert} language model. Without the need for a recurrent architecture, BERT uses positional embeddings to encode word sequences. The BERT embeddings are produced by unsupervised pre-training based on Masked Language Modeling (MLM) and Next Sentence Prediction (NSP), and they contain bidirectional attention from the MLM component of BERT.
The BERT model accepts input in one of two formats. If we have a context of [location, language, human, language, countries, spoken, in, Jamaican] and a question [what does Jamaican people speak?], we can encode them either as a single paired input or as two separate inputs to the BERT model, as shown below.
[CLS] [question] [SEP] [context], or
[CLS] [question]
[CLS] [context]
\begin{figure*}[ht]
\includegraphics[width=1\textwidth]{fig2.png}
\caption{BERT Embedding Layer}
\label{fig}
\end{figure*}
We provide the context and the query together, with the query first followed by the context, as shown in the figure above. Since BERT is trained with the next sentence prediction objective, we believe this formulation provides richer query and context embeddings. The maximum lengths of the context and the question are 96 and 18 tokens, respectively. We use the concatenation of all 12 layers of the BERT model as our embedding layer.
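A minimal sketch of this embedding step, assuming the Hugging Face \texttt{transformers} library (padding and truncation details are simplified, and the code is illustrative rather than our exact implementation), is:
\begin{verbatim}
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased",
                                  output_hidden_states=True)

question = "what does jamaican people speak"
context = "location language human language countries spoken in jamaican"

# Sentence-pair encoding: [CLS] question [SEP] context [SEP]
inputs = tokenizer(question, context, return_tensors="pt",
                   truncation=True, max_length=18 + 96)
with torch.no_grad():
    hidden_states = model(**inputs).hidden_states  # embeddings + 12 layers

# Concatenate the 12 encoder layers (dropping the input embedding layer).
embedding = torch.cat(hidden_states[1:], dim=-1)   # (1, seq_len, 12 * 768)
\end{verbatim}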
\subsection{Encoder Layer}
The encoder layer is a stack of the following layers:
[CNN-layer + self-attention-layer + feed-forward-layer]
Similar to \cite{seo2016bidirectional}, the input of this layer at each position is $[c, a, c \odot a, c \odot b]$, where $a$ and $b$ are, respectively, a row of the attention matrices $A$ and $B$. For the self-attention layer, we adopt the multi-head attention mechanism defined in \cite{DBLP:journals/corr/VaswaniSPUJGKP17}, which is an improvement over the basic attention mechanism.
\begin{equation}
\mathrm{head}_i = \mathrm{Attention}(QW_i^Q, KW_i^K, VW_i^V)
\end{equation}
\begin{equation}
\mathrm{MultiHead}(Q,K,V) = \mathrm{Concat}(\mathrm{head}_1,\ldots, \mathrm{head}_h)
\end{equation}
\begin{equation}
\mathrm{SelfMultiHead}(X) = \mathrm{MultiHead}(X, X, X)
\end{equation}
These three equations describe the formation of the self multi-head attention mechanism. The matrices $W$ are weight matrices: $Q$, $K$ and $V$ are each multiplied by their corresponding weight matrix before entering the attention function. Repeating this process $h$ (the number of heads) times and concatenating the results yields a new matrix that reflects the relationship between $Q$ and $V$. In particular, in the self multi-head attention mechanism, which looks for internal connections within the words, we set $Q = K = V = X$, where $X$ is the word vector matrix.
The multi-head attention mechanism helps the model learn relevant information about the words in different representation sub-spaces, while the self-attention mechanism can extract dependencies between words. As the name suggests, the self multi-head attention mechanism integrates the benefits of both and creates a context vector for each word; we therefore do not need additional information to obtain a matrix that reflects the contextual relationship between the current word and the other words in the sequence. Each of these basic operations (CNN / self-attention / feed-forward) is placed inside a residual block.
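A minimal PyTorch sketch of one such encoder block is given below (assuming a recent PyTorch version). The hidden size, number of heads, kernel size and the use of layer normalization are our own illustrative assumptions rather than the exact hyper-parameters of our model.
\begin{verbatim}
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    # One [CNN -> self multi-head attention -> feed-forward] block,
    # with each operation wrapped in a residual connection.
    def __init__(self, dim=128, heads=8, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel_size, padding=kernel_size // 2)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))
        self.norm1, self.norm2, self.norm3 = (nn.LayerNorm(dim)
                                              for _ in range(3))

    def forward(self, x):                      # x: (batch, seq_len, dim)
        y = self.conv(x.transpose(1, 2)).transpose(1, 2)
        x = self.norm1(x + y)                  # residual around the convolution
        y, _ = self.attn(x, x, x)              # Q = K = V = X (self-attention)
        x = self.norm2(x + y)                  # residual around attention
        return self.norm3(x + self.ffn(x))     # residual around feed-forward
\end{verbatim}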
\subsection{Attention Layer}
We use a focused context-query attention layer on top of the pre-trained BERT embeddings, identical to that of the QANet \cite{yu2018qanet} model. Such attention modules are standard in other KB-QA models such as \cite{hao-etal-2017-end}\cite{chen2019bidirectional}.
Let $C$ denote the context from the KB and $Q$ the natural language question. The context-to-query attention is constructed as follows: (1) compute the similarity between each pair of context and query words, which yields a similarity matrix $S \in \mathbb{R}^{n \times m}$; (2) normalize each row of $S$ by applying the softmax function, obtaining $\bar{S}$; (3) compute the context-to-query attention $A = \bar{S} \cdot Q^T \in \mathbb{R}^{n \times d}$.
The similarity function used here is the tri-linear function of \cite{seo2016bidirectional}:
$$f(q, c) = W_0[q, c, q \odot c].$$
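A sketch of this attention computation, with context vectors stored as rows of $C$ and query vectors as rows of $Q$ (a convention chosen for readability; the weight vector $W_0$ is learned), could look as follows.
\begin{verbatim}
import torch
import torch.nn.functional as F

def trilinear_similarity(C, Q, w):
    # C: (n, d) context vectors, Q: (m, d) query vectors, w: (3 * d,) weights.
    n, m, d = C.size(0), Q.size(0), C.size(1)
    c = C.unsqueeze(1).expand(n, m, d)            # broadcast context rows
    q = Q.unsqueeze(0).expand(n, m, d)            # broadcast query rows
    return torch.cat([q, c, q * c], dim=-1) @ w   # f(q, c) = W0 [q, c, q * c]

def context_to_query_attention(C, Q, w):
    S = trilinear_similarity(C, Q, w)             # (n, m) similarity matrix
    A = F.softmax(S, dim=1) @ Q                   # row-wise softmax, then attend
    return A                                      # (n, d) attended query vectors
\end{verbatim}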
\section{Experiments}
\subsection{Dataset}\label{AA}
We use the Web-Questions\cite{berant-etal-2013-semantic} dataset. It was built using the Google Suggest API to obtain questions that begin with a wh-word and contain exactly one entity. Specifically, the authors queried the question excluding the entity, the phrase before the entity, or the phrase after it. Each query generates 5 candidate questions, which are added to the queue. This process was iterated until 1 million questions were visited; a random 100K of them were submitted to Amazon Mechanical Turk (AMT).
The AMT task asked the workers to answer each question using only the Freebase page of the question's entity, or otherwise mark it as unanswerable by Freebase. The answer was restricted to be one of the possible entities, values, or lists of entities on that page. Thus, this combination of the Web-Questions dataset with the Freebase KB is used as the baseline.
\subsection{Comparison with the State of the Art}
To assess the proposed methodology, experiments were conducted on the Freebase KB and the Web-Questions \cite{berant-etal-2013-semantic} dataset. Freebase \cite{bollacker2008freebase} is a large-scale KB that organizes general facts as subject-predicate-object triples. It has 41M non-numeric entities with 19K unique properties and 596M assertions. The Web-Questions \cite{berant-etal-2013-semantic} dataset has 3,778 question-answer pairs for training and 2,032 for testing. The questions are gathered from the Google Suggest API, and the answers are labelled manually on Amazon Mechanical Turk. All of the answers are from the Freebase KB. We use 80\% of the training data as the training set and 20\% as the validation set. The evaluation metric is the F1 score, and the average result is calculated with the script of \cite{berant-etal-2013-semantic}.
\subsubsection{Settings}
For KB-QA training, we use the pre-trained BERT base uncased embeddings. During tokenization, the BERT \cite{devlin2018bert} code uses a word-piece algorithm to split words into sub-words, and all less frequent words are split into two or more sub-words. The vocabulary size of BERT is 30,522. We adopt the delexicalization strategy of \cite{chen2019bidirectional}: for each question, candidate entity mentions belonging to the date, ordinal, or number types are replaced with their type, and the same is applied to the answer context from the KB text if the overlap belongs to one of these types. This ensures that the query matches the answer context in the embedding space. The dropout rate for both the question and the answer encoder is set to 0.3. The batch size is set to 4, and the answer module threshold is set to 0.7 to allow multiple answers for questions with a list of answers. We use the Adam optimizer \cite{kingma2014adam} to train the model. The initial learning rate is set to 0.01, and it is reduced by a factor of 10 if no improvement is observed on the validation set for 3 successive epochs. The hyper-parameters are tuned on the validation set.
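The optimizer and learning-rate schedule described above can be sketched as follows; the model, data loaders and the \texttt{train\_one\_epoch}/\texttt{evaluate} helpers are hypothetical placeholders.
\begin{verbatim}
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="max", factor=0.1, patience=3)  # 10x cut after 3 stale epochs

for epoch in range(num_epochs):
    train_one_epoch(model, train_loader, optimizer)  # hypothetical helper
    val_f1 = evaluate(model, val_loader)             # hypothetical helper
    scheduler.step(val_f1)                           # track validation F1
\end{verbatim}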
\subsubsection{Results}
In this section, we compare the performance of our method with other IR-based approaches. The results are shown in Table 1. Our method (LM-KBQA) obtains an F1 score of 52.7 on Web-Questions using the topic entity predicted by the Freebase API. As Table 1 shows, our method achieves results that are better than or competitive with the state of the art. This demonstrates the effectiveness of our idea of using BERT pre-trained language model embeddings for the problem of Question Answering over Knowledge Bases.
\begin{table}[ht]
\defTable{Table}
\centering
\begin{tabular}{|p{5cm}|p{1.5cm}|}
\hline
\textbf{Methods} & \textbf{Avg F1} \\ [0.5ex]
\hline
Bordes 2014b\cite{bordes2014open} &29.7 \\
Bordes 2014a\cite{bordes2014question} &39.2 \\
Yang 2014\cite{yang-etal-2014-joint} &41.3 \\
Dong 2015\cite{dong-etal-2015-question} &40.8 \\
Bordes 2015\cite{bordes2015large} &42.2 \\
Xu 2016 \cite{xu-etal-2016-question}&42.2 \\
Hao 2017\cite{hao-etal-2017-end} &42.2 \\
Chen 2019\cite{chen2019bidirectional}&49.7\\[1ex]
Chen 2019\cite{chen2019bidirectional} topic entity predictor &51.8\\[1ex]
\hline
Our Approach &52.7\\[1ex]
\hline
\end{tabular}
\caption{Evaluation results on Web-Questions}
\label{table:default}
\end{table}
It is important to note that our proposed approach is based on a CNN architecture and only depends on the training data (it does not depend on Wikipedia text \cite{xu-etal-2016-hybrid}). \cite{bordes2014open} apply the BOW method to obtain a single vector for both questions and answers. \cite{bordes2014question} further improve this work by proposing the concept of sub-graph embeddings: besides the answer path, the sub-graph contains all the entities and relations connected to the answer entity, and the final vector is also obtained with a bag-of-words strategy. \cite{yang-etal-2014-joint} follows the SP-based manner, but uses embeddings to map entities and relations into KB resources, so that the question can be converted into logical forms. \cite{dong-etal-2015-question} use three columns of Convolutional Neural Networks (CNNs) to represent questions corresponding to three aspects of the answers, namely the answer context, the answer path and the answer type. \cite{bordes2015large} put KB-QA into the memory networks framework \cite{sukhbaatar2015end} and achieve the state-of-the-art performance among end-to-end methods. Our approach incorporates pre-trained BERT language model embeddings fine-tuned for the KB-QA task using an encoder architecture with multi-head attention. \cite{bordes2014question,bordes2015large,bordes2014open} all utilize the BOW model to represent the questions, while ours takes advantage of pre-trained language model embeddings. Also note that \cite{bordes2015large} uses additional training data such as Reverb \cite{fader-etal-2011-identifying} and their original Simple-Questions dataset.
The method proposed in \cite{dong-etal-2015-question} employs three fixed CNNs to represent questions. \cite{hao-etal-2017-end} implemented the mutual influence between the representation of questions and the corresponding answer aspects. \cite{chen2019bidirectional} further enhanced the result by using memory networks \cite{weston2014memory} to control the mutual influence between the representation of questions and the corresponding answer aspects. The approach in \cite{chen2019bidirectional} achieved an F1 score of 0.518 using a custom topic entity predictor, while an F1 score of 0.497 was achieved via the Freebase Search API. The methods of \cite{yih-etal-2016-value, yih-etal-2015-semantic} have higher F1 scores than other methods; their approach is able to address more questions with constraints and aggregations. However, their methods apply a number of manually designed rules and features, which come from observations on the training question set. This is a manual process which reduces the versatility of their approach. Integrated systems like \cite{xu-etal-2016-hybrid,xu-etal-2016-question}, which are based on semantic parsing, generate higher F1 scores by leveraging Wikipedia free text as external knowledge. Therefore these systems are not directly comparable to ours.
\section{Conclusion}
In this paper, we focused on using a pre-trained language model for the KB-QA task. We used \textit{BERT base uncased} for the initial experiments, and further fine-tuned these embeddings with a two-way attention mechanism, from the knowledge base to the asked question and from the asked question to the knowledge base answer aspects. Our method is based on a simple CNN architecture with a multi-head attention mechanism to represent the asked question dynamically along multiple aspects. Our experimental results show the effectiveness of BERT pre-trained language model embeddings for this task.
In the future, we would like to explore other language models, including GPT-2, RoBERTa, Transformer-XL and XLNet. We will also investigate answering even more complex questions over knowledge bases.
\section{Acknowledgements}
We would like to thank Hugging Face \cite{Wolf2019HuggingFacesTS} for their open-source code for using pre-trained BERT embeddings in downstream tasks with PyTorch, BAMnet \cite{chen2019bidirectional} for the open-source code to process the Freebase knowledge base dump, and Web-Questions \cite{berant-etal-2013-semantic} for the training and test datasets and the F1 metric script used for comparison.
\medskip
\bibliographystyle{plain}
\section{Introduction} \label{sec1}
The study of minimal free resolutions of monomial ideals and their powers is an interesting and active topic of mathematics which employs methods of combinatorics in an algebraic context. One of the main results in this area was obtained by Fr${\rm \ddot{o}}$berg \cite[Theorem 1]{f}, who characterized all quadratic squarefree monomial ideals which have a linear resolution. Herzog, Hibi and Zheng \cite{hhz} proved that a quadratic squarefree monomial ideal $I$ has a linear resolution if and only if every power of $I$ has a linear resolution. It is also known \cite{ht} that polymatroidal ideals have a linear resolution. On the other hand, the powers of polymatroidal ideals are again polymatroidal (see \cite{hh}) and hence, they have a linear resolution. However, it is not in general true that if a monomial ideal $I$ has a linear resolution, then its powers have the same property. Examples of this kind were provided by Terai (see \cite[Remark 3]{c}) and Sturmfels \cite{s1}.
In this paper, we focus on powers of unmixed squarefree monomial ideals of height two. These ideals are naturally associated to simple graphs and are called cover ideals. The reason for this naming is that the cover ideal $J(G)$ of a graph $G$ is minimally generated by squarefree monomials corresponding to the minimal vertex covers of $G$ (see Section \ref{sec2} for the precise definition of cover ideals). Herzog and Hibi \cite[Theorem 3.6]{hh''} proved that if $G$ is a bipartite graph such that $J(G)$ has a linear resolution, then $J(G)^k$ has a linear resolution, for each integer $k\geq 1$. By \cite[Corollary 2.6]{grv}, for every bipartite graph $G$, we have $J(G)^{(k)}=J(G)^k$, where $J(G)^{(k)}$ denotes the $k$th symbolic power of $J(G)$. Thus, the result of Herzog and Hibi essentially says that if $G$ is a bipartite graph such that $J(G)$ has a linear resolution, then $J(G)^{(k)}$ has a linear resolution, for every integer $k\geq 1$. In
\cite[Theorem 3.6]{s3}, we generalized this result to very well-covered graphs. In the same paper, we posed the following question.
\begin{ques} [\cite{s3}, Page 105 and \cite{s8}, Question 3.10] \label{qwellres}
Let $G$ be a very well-covered graph and suppose that $J(G)^{(k)}$ has a linear resolution, for some integer $k\geq 2$. Is it true that $J(G)$ has a linear resolution?
\end{ques}
In \cite[Corollary 3.7]{s3}, we proved that Question \ref{qwellres} has a positive answer for the special class of bipartite graphs. As the first main result of this paper, we give a positive answer to this question, for any very well-covered graph (see Proposition \ref{linwell}). Indeed, we prove a stronger result. In Theorem \ref{main}, we characterize all graphs $G$ with the property that $J(G)^{(k)}$ has a linear resolution for some integer $k\geq 2$. It turns out that $J(G)^{(k)}$ has a linear resolution for some (equivalently, for all) integer $k\geq 2$ if and only if $G$ is a very well-covered graph such that $J(G)$ has a linear resolution.
For a homogenous ideal $I$, let ${\rm reg}(I)$ denote the Castelnuovo--Mumford regularity of $I$. The next main result of this paper is motivated by a result due to Mart${\rm\acute{i}}$nez-Bernal et al. \cite{mmvv}. In fact, it is shown in \cite[Corollary 5.3]{mmvv} that for any bipartite graph $G$ and for every integer $k\geq 1$, the inequality$${\rm reg}(J(G)^k)\leq {\rm reg}(J(G)^{k+1})$$holds. As we mentioned above, for any bipartite graph $G$ and for each integer $k\geq 1$, we have $J(G)^{(k)}=J(G)^k$. Hence, the above inequality means that
\[
\begin{array}{rl}
{\rm reg}(J(G)^{(k)})\leq {\rm reg}(J(G)^{(k+1)}),
\end{array} \tag{$\dagger$} \label{dag}
\]
when $G$ is a bipartite graph. In Theorem \ref{main2}, we prove that inequality \ref{dag} holds for any arbitrary graph $G$, generalizing \cite[Corollary 5.3]{mmvv}.
For a monomial ideal $I$, let ${\rm deg}(I)$ denote the maximum degree of minimal monomial generators of $I$. As the last result of this paper, we study ${\rm deg}(J(G)^{(k)})$. In Corollary \ref{deguc}, we compute ${\rm deg}(J(G)^{(k)})$ when $G$ is either an unmixed or a claw-free graph. As a consequence, we see that for any such graph, ${\rm deg}(J(G)^{(k)})$ is a linear function of $k$. Note that in general, ${\rm deg}(J(G)^{(k)})$ is not a linear function (even eventually), as shown by Dung et al. \cite[Theorem 5.15]{dhnt}.
\section{Preliminaries} \label{sec2}
In this section, we provide the definitions and basic facts which will be used in the next sections.
Let $G$ be a simple graph with vertex set $V(G)=\big\{x_1, \ldots,
x_n\big\}$ and edge set $E(G)$. For a vertex $x_i$, the {\it neighbor set} of $x_i$ is $N_G(x_i)=\{x_j\mid \{x_i, x_j\}\in E(G)\}$ and we set $N_G[x_i]=N_G(x_i)\cup \{x_i\}$ and call it the {\it closed neighborhood} of $x_i$. For a subset $F\subseteq V(G)$, we set $N_G[F]=\cup_{x_i\in F}N_G[x_i]$. For every subset $A\subset V(G)$, the graph $G\setminus A$ is the graph with vertex set $V(G\setminus A)=V(G)\setminus A$ and edge set $E(G\setminus A)=\{e\in E(G)\mid e\cap A=\emptyset\}$. A subgraph $H$ of $G$ is called induced provided that two vertices of $H$ are adjacent if and only if they are adjacent in $G$. The graph $G$ is {\it bipartite} if there exists a partition $V(G)=A\cup B$ such
that each edge of $G$ is of the form $\{x_i,x_j\}$ with $x_i\in A$ and $x_j\in B$. If moreover, every vertex of $A$ is adjacent to every vertex of $B$, then we say that $G$ is a {\it complete bipartite} graph and denote it by $K_{a,b}$, where $a=|A|$ and $b=|B|$. The graph $K_{1,3}$ is called a {\it claw} and the graph $G$ is said to be {\it claw--free} if it has no claw as an induced subgraph. A subset $W$ of $V(G)$ is called an {\it independent subset} of $G$ if there are no edges among the vertices of $W$. An independent subset $W$ of $G$ is a {\it maximal independent subset}, if $W\cup\{x\}$ is not an independent subset of $G$, for every vertex $x\in V(G)\setminus W$. A subset $C$ of $V(G)$ is called a {\it vertex cover} of $G$ if every edge of $G$ is incident to at least one vertex of $C$. A vertex cover $C$ is called a {\it minimal vertex cover} of $G$ if no proper subset of $C$ is a vertex cover of $G$. Note that $C$ is a minimal
vertex cover if and only if $V(G)\setminus C$ is a maximal independent set. The graph $G$ is called {\it unmixed} if all
maximal independent subsets of $G$ have the same number of elements. A graph $G$ without isolated vertices is {\it very well-covered} if $n$ is an even number and every maximal independent subset of $G$ has cardinality $n/2$. In particular, any unmixed bipartite graph without isolated vertices is a very well-covered graph. The following result from \cite{crt} determines the structure of very well-covered graphs.
\begin{prop} \label{strwell} \cite[Proposition 2.3]{crt}
Let $G$ be a very well-covered graph with $2h$ vertices. Then the vertices of $G$ can be labeled as $V(G)=\{x_1, \ldots, x_h, y_1, \ldots, y_h\}$ such that the following conditions are satisfied.
\begin{itemize}
\item[(i)] $X=\{x_1, \ldots, x_h\}$ is a minimal vertex cover of $G$ and $Y=\{y_1, \ldots, y_h\}$ is a maximal independent subset of $G$.
\item[(ii)] $\{x_i, y_i\}\in E(G)$, for each integer $i=1, \ldots, h$.
\item[(iii)] If $\{z_i, x_j\}$, $\{y_j, x_k\}\in E(G)$, then $\{z_i, x_k\}\in E(G)$
for distinct indices $i$, $j$ and $k$ and for $z_i\in \{ x_i, y_i\}$.
\item[(iv)] If $\{x_i, y_j\} \in E(G)$, then $\{x_i, x_j\} \notin E(G)$.
\end{itemize}
\end{prop}
A {\it simplicial complex} $\Delta$ on the set of vertices $V(\Delta)=\{x_1,
\ldots, x_n\}$ is a collection of subsets of $V(\Delta)$ which is closed under
taking subsets; that is, if $F \in \Delta$ and $F'\subseteq F$, then also
$F'\in\Delta$. Every element $F\in\Delta$ is called a {\it face} of
$\Delta$, and its {\it dimension} is defined to be $\dim F=|F|-1$. The {\it dimension} of
$\Delta$ which is denoted by $\dim\Delta$, is $d-1$, where $d
=\max\{|F|\mid F\in\Delta\}$. A {\it facet} of $\Delta$ is a maximal face
of $\Delta$ with respect to inclusion. We say that $\Delta$ is {\it pure} if all facets
of $\Delta$ have the same dimension. A pure simplicial complex $\Delta$ is {\it strongly connected}
if for every pair of facets $F, G\in \Delta$, there is a sequence of facets $F=F_0, F_1, \ldots, F_m=G$ such that $\dim(F_i\cap F_{i+1})=\dim\Delta-1$,
for every integer $i$ with $0\leq i\leq m-1$. The {\it link} of $\Delta$ with respect to a face $F \in \Delta$ is the simplicial complex$${\rm lk_{\Delta}}F=\{G
\subseteq V(\Delta)\setminus F\mid G\cup F\in \Delta\}.$$
Let $S=\mathbb{K}[x_1, \dots, x_n]$ be the polynomial ring in $n$ variables over a field $\mathbb{K}$. For every
subset $F\subseteq V(\Delta)$, we set ${\it x}_F=\prod_{x_i\in F}x_i$. The {\it
Stanley--Reisner ideal of the simplicial complex $\Delta$ over $\mathbb{K}$} is the ideal $I_{
\Delta}$ of $S$ which is generated by those squarefree monomials $x_F$ with
$F\notin\Delta$. The {\it Stanley--Reisner ring of $\Delta$ over $\mathbb{K}$}, denoted by $\mathbb
{K}[\Delta]$, is defined to be $\mathbb{K} [\Delta]=S/I_{\Delta}$. The simplicial complex $\Delta$ is called {\it Cohen-Macaulay}, if its Stanley--Reisner ring is a Cohen-Macaulay ring.
Let $G$ be a graph. The {\it independence simplicial complex} of $G$ is defined by
$$\Delta(G)=\{A\subseteq V(G)\mid A \,\, \mbox{is an independent subset of}\,\,
G\}.$$It is easy to see that the Stanley--Reisner ideal of $\Delta(G)$ is the {\it edge ideal} of $G$ which is defined as$$I(G)=\big(x_ix_j\mid \{x_i, x_j\}\in E(G)\big)\subset S.$$ A graph $G$ is a {\it Cohen-Macaulay graph}, if $\Delta(G)$ is a Cohen-Macaulay simplicial complex. It is well-known that every Cohen-Macaulay graph is unmixed. The following result from \cite{mmcrty} provides a characterization of Cohen-Macaulay very well-covered graphs.
\begin{prop} \label{cmwell} \cite[Lemma 3.1]{mmcrty}
Let $G$ be a very well-covered graph with $2h$ vertices. Then $G$ is a Cohen-Macaulay graph if and only if the vertices of $G$ can be labeled as $V(G)=\{x_1, \ldots, x_h, y_1, \ldots, y_h\}$ such that conditions (i)-(iv) of Proposition \ref{strwell} are satisfied and moreover, $i\leq j$ whenever $\{x_i,y_j\}\in E(G)$.
\end{prop}
The Alexander dual of the edge ideal of $G$ in $S$, i.e., the
ideal $$J(G)=I(G)^{\vee}=\bigcap_{\{x_i,x_j\}\in E(G)}(x_i,x_j),$$ is called the
{\it cover ideal} of $G$. It is
well-known and easy to check that $J(G)$ is minimally generated by the monomials $\prod_{x_i\in C}x_i$, where $C$ is a minimal vertex cover of $G$. We refer to \cite{s8} for a survey about homological and combinatorial properties of powers of cover ideals.
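To illustrate this description with a small example, let $C_4$ be the $4$-cycle with edges $\{x_1,x_2\}$, $\{x_2,x_3\}$, $\{x_3,x_4\}$ and $\{x_4,x_1\}$. Its minimal vertex covers are $\{x_1,x_3\}$ and $\{x_2,x_4\}$, and therefore$$J(C_4)=(x_1,x_2)\cap(x_2,x_3)\cap(x_3,x_4)\cap(x_4,x_1)=(x_1x_3, \ x_2x_4).$$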
\begin{dfn}
Let $I$ be an ideal of $S$ and let ${\rm Min}(I)$ be the set of minimal primes of $I$. For every integer $k\geq 0$, the $k$th {\it symbolic power} of $I$,
denoted by $I^{(k)}$, is defined to be$$I^{(k)}=\bigcap_{\frak{p}\in {\rm Min}(I)} {\rm Ker}(S\rightarrow (S/I^k)_{\frak{p}}).$$
\end{dfn}
Let $I$ be a squarefree monomial ideal in $S$ and suppose that $I$ has the irredundant
primary decomposition $$I=\frak{p}_1\cap\ldots\cap\frak{p}_r,$$ where every
$\frak{p}_i$ is an ideal of $S$ generated by a subset of the variables of
$S$. It follows from \cite[Proposition 1.4.4]{hh} that for every integer $k\geq 0$, $$I^{(k)}=\frak{p}_1^k\cap\ldots\cap
\frak{p}_r^k.$$In particular, for every graph $G$ and for any integer $k\geq 0$, we have$$J(G)^{(k)}=\bigcap_{\{x_i,x_j\}\in E(G)}(x_i,x_j)^k.$$
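To illustrate the difference between ordinary and symbolic powers of cover ideals, let $C_5$ be the $5$-cycle with edges $\{x_1,x_2\}, \{x_2,x_3\}, \{x_3,x_4\}, \{x_4,x_5\}, \{x_5,x_1\}$. Every minimal vertex cover of $C_5$ has three vertices, so $J(C_5)$ is generated in degree $3$ and $J(C_5)^2$ is generated in degree $6$. On the other hand, the monomial $x_1x_2x_3x_4x_5$ belongs to $(x_i,x_j)^2$ for every edge $\{x_i,x_j\}$ of $C_5$, and hence$$x_1x_2x_3x_4x_5\in J(C_5)^{(2)}\setminus J(C_5)^2.$$In particular, the inclusion $J(G)^k\subseteq J(G)^{(k)}$ can be strict.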
Assume that $M$ is a graded $S$-module and let
$$0 \longrightarrow \cdots \longrightarrow \bigoplus_{j}S(-j)^{\beta_{1,j}(M)} \longrightarrow \bigoplus_{j}S(-j)^{\beta_{0,j}(M)} \longrightarrow M \longrightarrow 0$$be the minimal graded free resolution of $M$.
The integers $\beta_{i,j}(M)$ are called the graded Betti numbers of $M$. The Castelnuovo--Mumford regularity (or simply, regularity) of $M$, denoted by ${\rm reg}(M)$, is defined as follows:$${\rm reg}(M)=\max\{j-i|\ \beta_{i,j}(M)\neq0\}.$$Moreover,$${\rm pd}(M)=\max\{i|\ \beta_{i,j}(M)\neq 0, {\rm \ for \ some} \ j\}$$is called the {\it projective dimension} of $M$.
A homogeneous ideal $I$ is said to have a {\it linear resolution}, if for some integer $d$, $\beta_{i,i+t}(I)=0$
for all $i$ and for every integer $t\neq d$. It is clear from the definition that if an ideal $I$ has a linear resolution, then all the minimal generators of $I$ have the same degree. It follows from the Eagon--Reiner theorem \cite[Theorem 8.1.9]{hh} that a graph $G$ is Cohen-Macaulay if and only if its cover ideal $J(G)$ has a linear resolution. Let $I$ be a homogeneous ideal which is generated in a single degree $d$. We say that $I$ has a {\it linear presentation}, if $\beta_{1,1+t}(I)=0$ for each integer $t\neq d$. It is obvious that $I$ has a linear presentation if it has a linear resolution.
Let $I$ be a monomial ideal of
$S=\mathbb{K}[x_1,\ldots,x_n]$ with minimal generators $u_1,\ldots,u_m$,
where $u_j=\prod_{i=1}^{n}x_i^{a_{i,j}}$, $1\leq j\leq m$. For every $i$
with $1\leq i\leq n$, let$$a_i:=\max\{a_{i,j}\mid 1\leq j\leq m\},$$and
suppose that $$T=\mathbb{K}[x_{1,1},x_{1,2},\ldots,x_{1,a_1},x_{2,1},
x_{2,2},\ldots,x_{2,a_2},\ldots,x_{n,1},x_{n,2},\ldots,x_{n,a_n}]$$ is a
polynomial ring over the field $\mathbb{K}$. Let $I^{{\rm pol}}$ be the squarefree
monomial ideal of $T$ with minimal generators $u_1^{{\rm pol}},\ldots,u_m^{{\rm pol}}$, where
$u_j^{{\rm pol}}=\prod_{i=1}^{n}\prod_{k=1}^{a_{i,j}}x_{i,k}$, $1\leq j\leq m$. The
monomial $u_j^{{\rm pol}}$ is called the {\it polarization} of $u_j$, and the ideal $I^{{\rm pol}}$
is called the {\it polarization} of $I$. It is known that $\beta_{i,j}(I)=\beta_{i,j}(I^{{\rm pol}})$, for each pair of integers $i$ and $j$ (see e.g., \cite[Corollary 1.6.3]{hh}).
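For example, for the ideal $I=(x_1^2x_2, x_1x_2^2)$ of $\mathbb{K}[x_1,x_2]$ we have $a_1=a_2=2$, and the polarization of $I$ is the squarefree monomial ideal$$I^{{\rm pol}}=(x_{1,1}x_{1,2}x_{2,1}, \ x_{1,1}x_{2,1}x_{2,2})$$of $T=\mathbb{K}[x_{1,1},x_{1,2},x_{2,1},x_{2,2}]$.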
\section{Symbolic powers with linear resolution} \label{sec3}
Our goal in this section is to characterize all graphs $G$ with the property that $J(G)^{(k)}$ has a linear resolution for some integer $k\geq 2$. To achieve this goal, we first give a positive answer to Question \ref{qwellres}.
\begin{prop} \label{linwell}
Let $G$ be a very well-covered graph and suppose that $J(G)^{(k)}$ has a linear resolution, for some integer $k\geq 1$. Then $J(G)$ has a linear resolution.
\end{prop}
\begin{proof}
Using Proposition \ref{strwell}, the vertices of $G$ can be labeled as$$V(G)=\{x_1, \ldots, x_h, y_1, \ldots, y_h\}$$such that the following conditions are satisfied.
\begin{itemize}
\item[(i)] $X:=\{x_1, \ldots, x_h\}$ is a minimal vertex cover of $G$ and $Y:=\{y_1, \ldots, y_h\}$ is a maximal independent subset of $G$.
\item[(ii)] $\{x_i, y_i\}\in E(G)$, for each integer $i=1, \ldots, h$.
\item[(iii)] If $\{z_i, x_j\}$, $\{y_j, x_k\}\in E(G)$, then $\{z_i, x_k\}\in E(G)$
for distinct indices $i$, $j$ and $k$ and for $z_i\in \{ x_i, y_i\}$.
\item[(iv)] If $\{x_i, y_j\} \in E(G)$, then $\{x_i, x_j\} \notin E(G)$.
\end{itemize}
We know from \cite[Lemma 3.4]{s3} that $(J(G)^{(k)})^{{\rm pol}}$ is the cover ideal of a graph $G_k$ with vertex set $$V(G_k)=\big\{x_{i,p}, y_{i,p}\mid 1\leq i\leq h \ {\rm and} \ 1\leq p\leq k\big\},$$ and edge set
\begin{align*}
E(G_k) & =\big\{\{x_{i,p}, x_{j,q}\}\mid \{x_i, x_j\}\in E(G) \ {\rm and} \ p+q\leq k+1\big\}\\
& \cup \big\{\{x_{i,p}, y_{j,q}\}\mid \{x_i, y_j\}\in E(G) \ {\rm and} \ p+q\leq k+1\big\}.
\end{align*}
It follows from the assumption and \cite[Corollary 1.6.3]{hh} that $J(G_k)$ has a linear resolution. As a consequence, \cite[Theorem 8.1.9]{hh} implies that $G_k$ is a Cohen--Macaulay graph. Note that$$F=\{y_{i,j}\mid 1\leq i\leq h \ {\rm and} \ 2\leq j\leq k\}$$is an independent subset of $G_k$. It is obvious that$$N_{G_k}[F]=F\cup\{x_{i,j}\mid 1\leq i\leq h \ {\rm and} \ 1\leq j\leq k-1\}.$$Thus, $G_k\setminus N_{G_k}[F]$ is isomorphic to the bipartite graph which is obtained from $G$ by deleting the edges whose endpoints are in $X$. We denote this new graph by $G'$. In other words, $G'$ is the graph with vertex set $V(G')=V(G)$ and edge set$$E(G')=\big\{\{x_i, y_j\} \mid 1\leq i,j\leq h \ {\rm and} \ \{x_i, y_j\}\in E(G)\big\}.$$It follows that ${\rm lk}_{\Delta(G_k)}F=\Delta(G')$. Since $G_k$ is Cohen--Macaulay, we conclude that $G'$ is Cohen--Macaulay too. Therefore, $G'$ is a very well-covered graph. Using \cite[Lemma 3.5]{crt}, there exists a permutation $\sigma : [h]\rightarrow [h]$ with the property that $\sigma(i)\leq \sigma(j)$ whenever $\{x_{\sigma(i)}, y_{\sigma(j)}\}\in E(G)$. One can easily see that the graph $G$ with labeling $x_{\sigma(1)}, \ldots, x_{\sigma(h)}, y_{\sigma(1)}, \ldots, y_{\sigma(h)}$ of its vertices also satisfies conditions (i)-(iv) of Proposition \ref{strwell}. Hence, using Proposition \ref{cmwell}, we conclude that $G$ is a Cohen-Macaulay graph. As a consequence, \cite[Theorem 8.1.9]{hh} implies that $J(G)$ has a linear resolution.
\end{proof}
As we mentioned in Section \ref{sec2}, if a homogeneous ideal $I$ has a linear resolution, then it must be generated in a single degree. Thus, in order to characterize the graphs $G$ with the property that $J(G)^{(k)}$ has a linear resolution, we should first determine the graphs $G$ such that $J(G)^{(k)}$ is generated in a single degree. This will be done in the next lemma. We recall that for a monomial ideal $I$, the unique set of minimal monomial generators of $I$ is denoted by $G(I)$.
\begin{lem} \label{singdeg}
Let $G$ be a graph without isolated vertices and assume that $J(G)^{(k)}$ is generated in a single degree, for some integer $k\geq 2$. Then $G$ is a very well-covered graph.
\end{lem}
\begin{proof}
Assume that $V(G)=\{x_1, \ldots, x_n\}$ and let $u$ be a monomial in the set of minimal monomial generators of $J(G)$. Note that $u^k\in J(G)^k\subseteq J(G)^{(k)}$. As $u\in G(J(G))$, for every integer $1\leq i\leq n$, there exists a vertex $x_j\in N_G(x_i)$ such that $u/x_i\notin (x_i, x_j)$. Hence,$$u^k/x_i=(u/x_i)^kx_i^{k-1}\notin (x_i, x_j)^k.$$In particular, for every integer $i$ with $1\leq i\leq n$ we have $u^k/x_i\notin J(G)^{(k)}$. Therefore, $u^k$ belongs to the set of minimal monomial generators of $J(G)^{(k)}$. As $J(G)^{(k)}$ is generated in a single degree, we deduce that$${\rm deg}(J(G)^{(k)})={\rm deg}(u^k)=k{\rm deg}(u).$$Since the above equalities hold for every monomial $u\in G(J(G))$, we conclude that $J(G)$ is generated in a single degree. Hence, $G$ is an unmixed graph. Let $d$ denote the degree of minimal monomial generators of $J(G)$. It follows that$${\rm deg}(J(G)^{(k)})=kd.$$
By \cite{gv} (see also \cite[Theorem 0.1]{crt}), we have $d\geq n/2$. Thus, to complete the proof, we must show that $d\leq n/2$. Set $v:=x_1\ldots x_n$. We divide the rest of the proof in two cases.
\vspace{0.3cm}
{\bf Case 1.} Assume that $k=2m$ is an even integer. It is clear that $v^m\in J(G)^{(k)}$. This implies that$$mn={\rm deg}(v^m)\geq {\rm deg}(J(G)^{(k)})=kd=2md.$$Therefore, $n\geq 2d$.
\vspace{0.3cm}
{\bf Case 2.} Assume that $k=2m+1$ is an odd integer. As above, let $u$ be a monomial in the set of minimal monomial generators of $J(G)$. Obviously, $uv^m\in J(G)^{(k)}$. Hence,$$d+mn={\rm deg}(uv^m)\geq {\rm deg}(J(G)^{(k)})=kd=(2m+1)d$$which means that $n\geq 2d$.
\end{proof}
It is obvious from the definitions that the condition of having a linear resolution is stronger than having a linear presentation. However, we will see in Proposition \ref{linpre} that these two concepts are equivalent for symbolic powers of cover ideals of very well-covered graphs. In order to prove this proposition, we need to recall the notion of Serre's condition.
Let $M$ be a nonzero finitely generated $S$-module and let $r\geq 1$ be a positive integer. Then $M$ is said to satisfy the {\it Serre's condition} ($S_r$), if for every prime ideal $\frak{p}$ of $S$, the inequality$${\rm depth}\ M_{\frak{p}}\geq \min\{r,\dim M_{\frak
{p}}\}$$holds true. We say that a simplicial complex $\Delta$ satisfies the Serre's condition ($S_r$) if its Stanley--Reisner ring satisfies the same condition. We refer to \cite{psty} for a survey on simplicial complexes satisfying the Serre's condition.
\begin{prop} \label{linpre}
Let $G$ be a very well-covered graph and assume that $k$ is a positive integer. Then $J(G)^{(k)}$ has a linear resolution if and only if $J(G)^{(k)}$ has a linear presentation.
\end{prop}
\begin{proof}
The "only if" part is obvious. Thus, we only prove the "if" part. So suppose that $J(G)^{(k)}$ has a linear presentation. By \cite[Corollary 1.6.3]{hh}, the ideal $(J(G)^{(k)})^{{\rm pol}}$ has a linear presentation. Using \cite[Proposition 3.1 and Lemma 3.4]{s3}, there is a very well-covered graph $G_k$ with $(J(G)^{(k)})^{{\rm pol}}=J(G_k)$. As $J(G_k)$ has a linear presentation, it follows from \cite[Corollary 3.7]{y} that $\Delta(G_k)$ satisfies the Serre's condition $(S_2)$. It is well-known that every simplicial complex satisfying the Serre's condition $(S_2)$ is strongly connected (see e.g. \cite[Page 2]{mt'}). In particular, $\Delta(G_k)$ is a strongly connected simplicial complex. Hence, we conclude from \cite[Theorem 0.2]{crt} that $G_k$ is a Cohen-Macaulay graph. Therefore, \cite[Theorem 8.1.9]{hh} implies that $J(G_k)$ has a linear resolution. Consequently, it follows from \cite[Corollary 1.6.3]{hh} that $J(G)^{(k)}$ has a linear resolution.
\end{proof}
We are now ready to prove the main result of this section.
\begin{thm} \label{main}
For any graph $G$ without isolated vertices, the following conditions are equivalent.
\begin{itemize}
\item[(i)] $J(G)^{(k)}$ has a linear resolution, for every integer $k\geq 1$.
\item[(ii)] $J(G)^{(k)}$ has a linear resolution, for some integer $k\geq 2$.
\item[(iii)] $J(G)^{(k)}$ has a linear presentation, for every integer $k\geq 1$.
\item[(iv)] $J(G)^{(k)}$ has a linear presentation, for some integer $k\geq 2$.
\item[(v)] $G$ is a Cohen-Macaulay very well-covered graph.
\end{itemize}
\end{thm}
\begin{proof}
If $G$ satisfies either of the conditions (i)-(iv), then $J(G)$ is generated in a single degree. Hence, using Lemma \ref{singdeg}, we deduce that $G$ is a very well-covered graph. Thus, the equivalences (i)$\Leftrightarrow$(iii) and (ii)$\Leftrightarrow$(iv) follow from Proposition \ref{linpre}. The implication (i)$\Rightarrow$(ii) is trivial. By \cite[Theorem 8.1.9]{hh} and \cite[Theorem 3.6]{s3}, (v) implies (i). To prove (ii)$\Rightarrow$ (v), as we mentioned above, it follows from (ii) and Lemma \ref{singdeg} that $G$ is a very well-covered graph. Then using Proposition \ref{linwell} and \cite[Theorem 8.1.9]{hh}, we conclude that $G$ is a Cohen-Macaulay graph.
\end{proof}
\begin{rems}
\begin{enumerate}
\item Assume that $I$ is a squarefree monomial ideal. In general, it is not true that if $I^{(k)}$ has a linear resolution, for some integer $k\geq 2$, then $I$ has the same property. For instance, let $I=(x_1x_2, x_2x_3, x_3x_4, x_4x_5, x_1x_5)$ be the edge ideal of the $5$-cycle graph. One can easily check that $I^{(2)}$ has a linear resolution but $I$ does not.
\item Assume that $I$ is a squarefree monomial ideal. In general, it is not true that if $I^{(2)}$ has a linear resolution, then $I^{(k)}$ has the same property for every integer $k\geq 2$. For example, let $I$ be the same ideal as above. Then $I^{(2)}$ has a linear resolution. On the other hand, $x_1^3x_2^3$ and $x_1x_2x_3x_4x_5$ belong to the set of minimal monomial generators of $I^{(3)}$. In particular, $I^{(3)}$ is not generated in a single degree. Hence, it does not have a linear resolution.
\end{enumerate}
\end{rems}
\section{Regularity of symbolic powers} \label{sec4}
In this section, we study the regularity function of symbolic powers of cover ideals of graphs. Mart${\rm\acute{i}}$nez-Bernal et al. \cite[Corollary 5.3]{mmvv} proved that the regularity of powers of cover ideals of every bipartite graph is a nondecreasing function. In Theorem \ref{main2}, we generalize this result by showing that the sequence $\big({\rm reg}(J(G)^{(k)})\big)_{k=1}^{\infty}$ is nondecreasing, for any arbitrary graph $G$.
\begin{thm} \label{main2}
Let $G$ be a graph. Then for every integer $k\geq 1$, we have$${\rm reg}(S/J(G)^{(k)})\leq {\rm reg}(S/J(G)^{(k+1)}).$$
\end{thm}
\begin{proof}
Without loss of generality, we may suppose that $G$ has no isolated vertex. Let $V(G)=\{x_1, \ldots, x_n\}$ be the vertex set of $G$. Consider the squarefree monomial ideals $(J(G)^{(k)})^{{\rm pol}}\subseteq T_k$ and $(J(G)^{(k+1)})^{{\rm pol}}\subseteq T_{k+1}$, where$$T_r=\mathbb{K}[x_{i,p}\mid 1\leq i\leq n, 1\leq p\leq r],$$for each $r=k, k+1$. It follows from \cite[Corollary 1.6.3]{hh} that$${\rm reg}\big(T_r/(J(G)^{(r)})^{{\rm pol}}\big)={\rm reg}\big(S/J(G)^{(r)}\big).$$As a consequence, it is enough to prove that$${\rm reg}\big(T_k/(J(G)^{(k)})^{{\rm pol}}\big)\leq {\rm reg}\big(T_{k+1}/(J(G)^{(k+1)})^{{\rm pol}}\big).$$
By \cite[Lemma 3.4]{s3}, for $r=k, k+1$, the ideal $(J(G)^{(r)})^{{\rm pol}}$ is the cover ideal of a graph $G_r$, with vertex set$$V(G_r)=\{x_{i,p}\mid 1\leq i\leq n, 1\leq p\leq r\}$$and edge set$$E(G_r)=\{\{x_{i,p}, x_{j,q}\}\mid \{x_i, x_j\}\in E(G) \ {\rm and} \ p+q\leq r+1\}.$$Set$$W:=\{x_{i, \lfloor\frac{k+1}{2}\rfloor +1} \mid 1\leq i\leq n\}\subseteq V(G_{k+1}),$$and consider the map $\varphi : V(G_k)\rightarrow V(G_{k+1}\setminus W)$ which is defined as follows.
$$\varphi(x_{i,j}) =
\left\{
\begin{array}{ll}
x_{i,j} & \mbox{if } 1\leq j\leq \lfloor\frac{k+1}{2}\rfloor \\
x_{i,j+1} & \mbox{if } j\geq \lfloor\frac{k+1}{2}\rfloor+1
\end{array}
\right.$$
It is easy to see that $\varphi$ is an isomorphism between $G_k$ and $G_{k+1}\setminus W$. Hence, $G_k$ is an induced subgraph of $G_{k+1}$. It follows from \cite[Proposition 4.1.1]{j} that$${\rm pd}(I(G_k))\leq {\rm pd}(I(G_{k+1})).$$Using Terai's theorem \cite[Proposition 8.1.10]{hh}, we deduce that$${\rm reg}\big(T_k/J(G_k)\big)\leq {\rm reg}\big(T_{k+1}/J(G_{k+1})\big)$$which means that$${\rm reg}\big(T_k/(J(G)^{(k)})^{{\rm pol}}\big)\leq {\rm reg}\big(T_{k+1}/(J(G)^{(k+1)})^{{\rm pol}}\big).$$This completes the proof.
\end{proof}
\section{Maximum degree of generators of symbolic powers} \label{sec5}
As the last result of this paper, we study the maximum degree of generators of symbolic powers of cover ideals. Thanks to \cite[Theorem 5.15]{dhnt}, we know that ${\rm deg}(J(G)^{(k)})$ is not necessarily a linear function (even eventually). However, we will see in Theorem \ref{deg} and Corollary \ref{deguc} that ${\rm deg}(J(G)^{(k)})$ is a linear function of $k$ for interesting classes of graphs. In order to prove these results, we need the following lemma.
\begin{lem} \label{del}
Let $G$ be a graph with vertex set $V(G)=\{x_1, \ldots, x_n\}$. Assume that $x\in V(G)$ is a vertex of $G$. Set $u:=\prod_{x_i\in N_G(x)}x_i$ and $J'=J(G\setminus N_G[x])S$. Then $J(G)^{(k)}+(x)=u^kJ'^{(k)}+(x)$.
\end{lem}
\begin{proof}
We have
\begin{align*}
& J(G)^{(k)}+(x)=\bigcap_{\{x_i,x_j\}\in E(G)}(x_i,x_j)^k+(x)=\bigcap_{\{x_i,x_j\}\in E(G)}\big((x_i,x_j)^k+(x)\big)\\ & =\bigg(\bigcap_{x_i\in N_G(x)}\big((x_i,x)^k+(x)\big)\bigg) \cap \bigg(\bigcap_{\substack{\{x_i,x_j\}\in E(G) \\ x\notin \{x_i,x_j\}}}\big((x_i,x_j)^k+(x)\big)\bigg)\\ & =\bigg(\bigcap_{x_i\in N_G(x)}\big((x_i)^k+(x)\big)\bigg) \cap \bigg(\bigcap_{\substack{\{x_i,x_j\}\in E(G) \\ x\notin \{x_i,x_j\}}}\big((x_i,x_j)^k+(x)\big)\bigg).
\end{align*}
Note that for every vertex $x_i\in N_G(x)$ and for every edge $\{x_i,x_j\}\in E(G)$, we have $(x_i)^k+(x)\subseteq (x_i,x_j)^k+(x)$. Thus, it follows from the above equalities that
\begin{align*}
& J(G)^{(k)}+(x)=\bigg(\bigcap_{x_i\in N_G(x)}\big((x_i)^k+(x)\big)\bigg) \cap \bigg(\bigcap_{\{x_i,x_j\}\in E(G\setminus N_G[x])}\big((x_i,x_j)^k+(x)\big)\bigg)\\ & =\bigg(\bigcap_{x_i\in N_G(x)}(x_i)^k \cap \bigcap_{\{x_i,x_j\}\in E(G\setminus N_G[x])}(x_i,x_j)^k\bigg)+(x)\\ & =\big(u^k\cap J'^{(k)}\big)+(x)=u^kJ'^{(k)}+(x).
\end{align*}
\end{proof}
We are now ready to prove the last result of this paper.
\begin{thm} \label{deg}
Let $\mathcal{H}$ be a family of graphs which satisfies the following conditions.
\begin{itemize}
\item[(i)] For every graph $G\in \mathcal{H}$ and every vertex $x\in V(G)$, the graph $G\setminus N_G[x]$ belongs to $\mathcal{H}$.
\item[(ii)] If $G\in \mathcal{H}$ has no isolated vertex, then it admits a minimal vertex cover with cardinality at least $\frac{|V(G)|}{2}$.
\end{itemize}
Then for every graph $G\in \mathcal{H}$ and every integer $k\geq 1$, we have$${\rm deg}(J(G)^{(k)})=k{\rm deg}(J(G)).$$
\end{thm}
\begin{proof}
By replacing $\mathcal{H}$ with $\mathcal{H}\cup\{K_2\}$, we may assume that $K_2\in \mathcal{H}$. Let $m$ be the number of edges of $G$. We use induction on $k+m$. For $k=1$, there is nothing to prove. Therefore, assume that $k\geq 2$. Suppose $m=1$ and let $\{x_1, x_2\}$ be the unique edge of $G$. Then $J(G)=(x_1,x_2)$. Hence, ${\rm deg}(J(G))=1$ and ${\rm deg}(J(G)^{(k)})=k$. Thus, assume that $m\geq 2$. Without loss of generality, we may suppose that $G$ has no isolated vertex. By \cite[Lemma 3.1]{s7}, we have ${\rm deg}(J(G)^{(k)})\geq k{\rm deg}(J(G))$. Therefore, it is enough to prove that for every monomial $w\in G(J(G)^{(k)})$, the inequality ${\rm deg}(w)\leq k{\rm deg}(J(G))$ holds. Assume that $V(G)=\{x_1, \ldots, x_n\}$ and set $v:=x_1\ldots x_n$. We consider the following two cases.
\vspace{0.3cm}
{\bf Case 1.} Suppose $v$ divides $w$. Then $w/v$ belongs to the set of minimal monomial generators of $(J(G)^{(k)}:v)$. We know from \cite[Lemma 3.4]{s6} that $$(J(G)^{(k)}:v)=J(G)^{(k-2)}.$$Hence, using the induction hypothesis, we deduce that$${\rm deg}(w/v)\leq (k-2){\rm deg}(J(G)).$$Consequently,$${\rm deg}(w)\leq (k-2){\rm deg}(J(G))+n.$$ On the other hand, by assumption we have $n\leq 2{\rm deg}(J(G))$. Thus, the above inequality implies that ${\rm deg}(w)\leq k{\rm deg}(J(G))$.
\vspace{0.3cm}
{\bf Case 2.} Suppose $v$ does not divide $w$. Then there is a variable, say $x_j$ which does not divide $w$. Set $u:=\prod_{x_i\in N_G(x_j)}x_i$. We conclude from Lemma \ref{del} that $w$ belongs to the set of minimal monomial generators of $u^kJ(G\setminus N_G[x_j])^{(k)}$. In other words, $u^k$ divides $w$, and$$w/u^k\in G(J(G\setminus N_G[x_j])^{(k)}).$$It follows from the induction hypothesis that
\[
\begin{array}{rl}
{\rm deg}(w)-k{\rm deg}_G(x_j)={\rm deg}(w)-k{\rm deg}(u)\leq k{\rm deg}(J(G\setminus N_G[x_j])).
\end{array} \tag{1} \label{1}
\]
On the other hand, if $C$ is a minimal vertex cover of $G\setminus N_G[x_j]$, then $C\cup N_G(x_j)$ is a minimal vertex cover of $G$. Therefore,
\[
\begin{array}{rl}
{\rm deg}(J(G\setminus N_G[x_j]))+{\rm deg}_G(x_j)\leq {\rm deg}(J(G)).
\end{array} \tag{2} \label{2}
\]
Using (\ref{1}) and (\ref{2}), we deduce that$${\rm deg}(w)\leq k{\rm deg}(J(G\setminus N_G[x_j]))+k{\rm deg}_G(x_j)\leq k{\rm deg}(J(G)).$$
\end{proof}
As a consequence of Theorem \ref{deg}, we obtain the following corollary.
\begin{cor} \label{deguc}
Let $G$ be a graph which is either unmixed or claw-free. Then for every integer $k\geq 1$, we have$${\rm deg}(J(G)^{(k)})=k{\rm deg}(J(G)).$$In particular, ${\rm deg}(J(G)^{(k)})$ is a linear function of $k$.
\end{cor}
\begin{proof}
By \cite{gv} (see also \cite[Theorem 0.1]{crt}), every unmixed graph $G$ without isolated vertices has a minimal vertex cover with cardinality at least $\frac{|V(G)|}{2}$. Also, we know from the proof of \cite[Theorem 3.7]{s7} that every claw-free graph $G$ without isolated vertices has a minimal vertex cover with cardinality at least $\frac{|V(G)|}{2}$. Let $\mathcal{H}$ denote the class of graphs which are either unmixed or claw-free. Then the assertion follows from Theorem \ref{deg}.
\end{proof}
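As a small illustration of Corollary \ref{deguc} (a direct check, independent of the results above), consider the triangle $K_3$ on the vertices $x_1,x_2,x_3$, which is both unmixed and claw-free. Here $J(K_3)=(x_1x_2,\, x_1x_3,\, x_2x_3)$, so ${\rm deg}(J(K_3))=2$, and one can verify that$$J(K_3)^{(2)}=(x_1,x_2)^2\cap (x_1,x_3)^2\cap (x_2,x_3)^2=(x_1x_2x_3)+J(K_3)^2,$$whose minimal monomial generators are $x_1x_2x_3$, $x_1^2x_2^2$, $x_1^2x_3^2$ and $x_2^2x_3^2$. Their maximum degree is $4=2\,{\rm deg}(J(K_3))$, as predicted, even though the symbolic power is strictly larger than the ordinary power $J(K_3)^2$.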
\section{Image Signal Processor}
\label{sec:ISP}
An image signal processor (ISP) is a specialized image processor in digital cameras. The goal of the ISP in a phone camera is to take the raw sensor data from the camera, and transform it to a user-visible image. Common stages of an ISP pipeline include color correction, lens correction, demosaicing and noise reduction~\cite{buckler2017reconfiguring}.
Many mobile device producers utilize third-party ISP chips for at least part of their ISP pipelines. This implies that different phones, even if they have similar cameras, might produce very different photos. In fact, it has been reported that phones from the same vendor might have different ISPs for the same model~\cite{phones-different-countries,phones-different-vendor}. Moreover, ISP pipelines in mobile devices have become more complex and might make different decisions on a photo based on the environment and the object being photographed~\cite{apple-isp,google-isp}. The phone vendor typically does not provide access to raw sensor data before the ISP chips, and so even when phones allow access to raw photos, those photos might have gone through some ISP pipeline stages.
Past work has shown that changes to the ISP can have a significant effect on deep learning efficacy~\cite{buckler2017reconfiguring, liu2015ultra}. Therefore, we would like to characterize how differences in phone ISPs contribute to instability. To the best of our knowledge, no phone provides direct access to its full ISP pipeline. For this reason, following prior work~\cite{buckler2017reconfiguring}, we used ImageMagick and Adobe Photoshop as simulated software ISPs.
We converted the raw photos taken from the iPhone and Samsung phones, using a software pipeline. We then evaluated the classification on the resulting uncompressed PNG. The results are shown in Table~\ref{tab:isp}.
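As a rough illustration of this software conversion step (a minimal sketch only; the file names and the use of \texttt{rawpy}'s default demosaicing are assumptions, not our exact ImageMagick/Photoshop settings), the raw-to-PNG pipeline can be approximated in Python as follows:
\begin{verbatim}
import rawpy
from PIL import Image

def raw_to_png(dng_path, png_path):
    # Decode the raw sensor data and apply a fixed, software-only
    # demosaic/white-balance step, bypassing the phone's own ISP.
    with rawpy.imread(dng_path) as raw:
        rgb = raw.postprocess()  # HxWx3 uint8 array
    # Save losslessly so no new compression artifacts are introduced.
    Image.fromarray(rgb).save(png_path, format="PNG")

raw_to_png("purse154_samsung.dng", "purse154_samsung.png")
\end{verbatim}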
\begin{table}[h]
\caption{Accuracy and instability for images converted with ImageMagick or Adobe Photoshop}
\label{tab:isp}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{ll}
\toprule
metric & result \\
\midrule
Adobe Accuracy & 49.96\% \\
ImageMagick Accuracy & 54.75\% \\
Instability & 14.11\%\\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
We can see that different ISPs resulted in 14\% instability, implying that a large part of the instability we saw in the end-to-end experiment may be attributed to ISP differences. Later, in \S\ref{sec:using-raw}, we will examine the potential of using raw images to overcome ISP and compression differences.
\section{Processor and OS}
\label{sec:OS}
In this section, we investigate two other potential causes for instability: (1) OS differences between the phones might affect how images are loaded and operations are scheduled; and (2) hardware differences might affect floating point calculations and instruction scheduling at inference time.
In order to test these two potential sources of instability, instead of taking images with the different phones and observing the differences, we run an experiment with a pre-defined set of photos, where we simply try to conduct inference on different phone models.
To this end we wrote a simple app that loads a subset of the Caltech101 dataset~\cite{caltech101} and runs classification on a subset of the images using MobileNetV2 trained on ImageNet. We tested our app on 5 phones, shown in Table~\ref{tab:firebase-phone-details}, using the Firebase Test Lab~\cite{firebase}, a service that allows developers to test their apps on different phones.
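The app itself targets the mobile ML runtimes; the following Python sketch only mirrors the classification step it performs (ImageNet-pretrained MobileNetV2 on a single image), with an illustrative file path:
\begin{verbatim}
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)

model = MobileNetV2(weights="imagenet")

def classify(path):
    # Resize to the 224x224 input expected by MobileNetV2.
    img = tf.keras.preprocessing.image.load_img(path,
                                                target_size=(224, 224))
    x = tf.keras.preprocessing.image.img_to_array(img)
    x = preprocess_input(np.expand_dims(x, axis=0))
    # Return the top-1 (class name, confidence) pair.
    _, label, score = decode_predictions(model.predict(x), top=1)[0][0]
    return label, float(score)

print(classify("caltech101/airplanes/image_0001.jpg"))
\end{verbatim}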
\begin{table}[h]
\caption{Phones used in the Firebase Test experiment.}
\label{tab:firebase-phone-details}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{ll}
\toprule
Phone & SoC \\
\midrule
Samsung Galaxy Note8 & Exynos 9 Octa 8895 \\
Huawei Mate RS & HiSilicon KIRIN 970 \\
Pixel 2 & Snapdragon 835 \\
Sony XZ3 & Snapdragon 845 \\
Xiaomi MI 8 Pro & Helio G90T (MT6785T) \\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
During our experiment we observed very little instability. In fact, the only two phones to produce any different predictions or confidences were the Huawei and Xiaomi phones. Both Xiaomi and Huawei produced the same exact confidences and predictions, and the rest of the phone models (Samsung, Pixel and Sony) produced consistent predictions as well, but different from the Xiaomi and Huawei phones. These differences resulted in a small amount of instability: $0.64\%$.
We suspect the reason for the difference in prediction was due to differences in the OS's handling of JPEG decoding, rather than differences in hardware. To confirm this, we looked at the MD5 hash of the loaded JPEG images, and indeed Huawei and Xiaomi produced different MD5 hashes than the rest of the phones. We further confirmed this by running our experiment on PNG images; when running on PNG images we detected no instability across all phones.
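A simplified re-creation of this check (not the app code itself; the file name is illustrative) hashes the decoded pixel buffer rather than the file bytes, so the result reflects the platform's JPEG decoder output:
\begin{verbatim}
import hashlib
import numpy as np
from PIL import Image

def decoded_md5(path):
    # Hash the decoded RGB pixels, not the raw file bytes; two
    # platforms with identical decoders yield identical hashes.
    pixels = np.asarray(Image.open(path).convert("RGB"))
    return hashlib.md5(pixels.tobytes()).hexdigest()

print(decoded_md5("caltech101/airplanes/image_0001.jpg"))
\end{verbatim}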
Therefore, we conclude that the processor and OS are not a major source of instability in our experimental setup.
\section{Background}
In this section, we provide background for why inference is increasingly being conducted on edge devices, and introduce and define the model instability problem.
\subsection{Inference on Edge Devices}
While much of model training occurs on servers or datacenters, there is an increasing trend of pushing inference to edge devices~\cite{FB-ML-inference-on-edge,lee2019device}. The advantages of conducting inference on edge devices over a centralized location are manifold. First, it removes the need to transfer the input data over the network. Second, it can provide lower latency. Third, users may not always be in settings where they have a stable connection. Fourth, it can ensure the input data collected on the edge remains private.
There are many examples of real-time applications that conduct inference on the edge, including augmenting images taken from a camera~\cite{FB-ML-inference-on-edge,ignatov2017dslr}, localizing and classifying objects~\cite{speed-accuracy}, monitoring vital health signals~\cite{health-monitoring} and recognizing speech for transcription or translation~\cite{speech-recog-edge}.
However, running inference on different edge devices, each with its own hardware and software, as well as different sensors with which they capture input data, creates very heterogeneous environments. These different environments lead the model to behave differently, even on seemingly identical inputs.
\subsection{Model Instability}
While the traditional ML metrics, of accuracy, precision and recall are extremely important, they do not capture how well a model performs \emph{across environments}. For this purpose, we define a new metric, called \textbf{model instability}, which denotes when a model conducts inference in different environments, and returns significantly different results on near-identical inputs~\cite{StabilityTraining}.
Photography always introduces some random noise during image acquisition from the sensor~\cite{boncelet2009image}. Due to this, models have some degree of instability, even when they are run on exactly the same edge device. To illustrate this, we run the following simple experiment. Figure~\ref{fig:burst} depicts two photos of a water bottle from the same Samsung Galaxy S10. The photos were taken a couple of seconds apart using the Android Debug Bridge, without touching the phone or changing its location. Both images seem identical to the naked eye. However, when we run MobileNetV2~\cite{mobilenetv2} on both photos, the model returns the ``bubble'' class (incorrect) for the left image, while it predicts the ``water bottle'' class (correct) for the center image. The image on the right shows the pixels where the difference between the images is higher than 5\%. This experiment shows that even on the same phone, two photos taken within a very short timespan may lead to different predictions.
In order to quantify the measure of instability of a model, we define a prediction as \emph{unstable} if in at least one environment it returns a correct class, and in at least one other environment it returns a clearly incorrect class. We do not compare the predictions between different environments in the case where all the predictions are incorrect, because it is difficult to say whether a particular classification is more ``incorrect'' than another.
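A minimal sketch of this metric in Python (the data layout is an assumption: \texttt{preds[e][i]} holds environment $e$'s predicted label for input $i$, and \texttt{truth[i]} the set of accepted labels):
\begin{verbatim}
def instability(preds, truth):
    # A prediction is unstable if at least one environment is correct
    # and at least one other is incorrect on the same input.
    unstable = 0
    for i in range(len(truth)):
        outcomes = {preds[e][i] in truth[i] for e in preds}
        if outcomes == {True, False}:
            unstable += 1
    return unstable / len(truth)
\end{verbatim}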
Much of prior work focused on making models robust to adversaries~\cite{kurakin2016adversarial, bastani2016measuring, cisse2017parseval}. In contrast, our focus in this work is on making models robust to naturally occurring variations in model output due to different processors, software and sensors.
\section{Experiment Overview}
This section details our data collection, experimental setup and the goals of our experiments.
\subsection{Data Collection}
\label{sec:data-collection}
In order to run our experiments we collected images representing a subset of 5 classes from ImageNet~\cite{imagenet}: water bottle, beer bottle, wine bottle, purse and backpack. The images were a mix of images scraped from Flickr, images downloaded from Amazon and Amazon Prime Now, and photos we took ourselves. We collected a total of 1,537 images.
\subsection{Experimental Setup}
\begin{figure}[t!]
\centering
\subfigure[end-to-end experiment]{\includegraphics[width=0.47\textwidth, height=3.5cm]{images/system_design.JPG}}
\subfigure[iPhone XR]{\includegraphics[width=0.15\textwidth, height=2.1cm]{images/misc/backpack0_iphone.jpg}} \hspace{10mm}
\subfigure[Galaxy S10]{\includegraphics[width=0.15\textwidth, height=2.1cm]{images/misc/backpack0_Samsung.jpg}}
\vspace{-2.8pt}
\caption{(a) Illustration of experimental setup. We wrote two open-source applications: one for Android and one for iOS. Each phone sends an HTTP request to a server to display an image. After the image is presented on the monitor, the photo is taken by the phone.
(b) Example of an iPhone photo incorrectly classified as ``pillow''. (c) Example of Samsung Galaxy S10 photo correctly classified as ``backpack''.\label{fig:system-design}}
\end{figure}
Our goal is to eliminate, as much as possible, any source of variability external to the internal characteristics of the phone ({e.g. } lighting, position, the object being photographed).
To this end, we designed the following controlled experimental setup. We placed 5 phones on a camera mount in front of a computer screen in a closed room with light-blocking curtains.
The phones took a series of photos of objects presented on the computer screen from the same angle. The objects and photos projected on the screen were taken from our collected dataset described previously (\S\ref{sec:data-collection}). Each phone was presented identical photos on the screen.
\begin{table}[h]
\caption{Phones models used in experiments.\label{tab:phone-details}}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{ll}
\toprule
Phone & Model \\
\midrule
Samsung Galaxy S10 & SM-G973U1 \\
LG K10 LTE & K425 \\
HTC Desire 10 Lifestyle & DESIRE 10 \\
Motorola Moto G5 & XT1670 \\
iPhone XR & A1984 \\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
To ensure the phone and computer screen are synchronized, the time at which the photos are taken from the phone is controlled by apps, which we wrote for both iOS and Android\footnote{We will release the code for both apps on Github.}. Figure~\ref{fig:system-design} depicts the process: the app communicates with the computer screen and determines which photo is currently displayed on the screen. The screen displays the photo, and then the app takes a picture of the screen from the device.
We repeated this sequence on each of the distinct objects, on 5 different angles (left, center-left, center, center-right and right) with fixed heights. In total, we took 68,125 photos.
Throughout the experiments presented in the paper, we evaluate the performance of a single MobileNetV2~\cite{mobilenetv2} model with fixed weights, on the photos taken by the different phones. The model weights were pre-trained on ImageNet.
The list of the phones used in our experiments is available in Table~\ref{tab:phone-details}.
To evaluate the results, we verified whether images were classified correctly or not by hand. For some predictions, there can be more than one possible correct label. For instance, ``wine bottle'' and ``red wine'' overlap in ImageNet, so for a bottle of red wine we accepted both "wine bottle" and "red wine". If an image contained more than one object, we only considered the object that is clearly in the foreground of the image.
\subsection{Goals}
We run four sets of experiments, presented in the next sections: (a) end-to-end experiments, whose goal is to measure the end-to-end instability of models using the same mobile devices and across devices (\S\ref{sec:end-to-end}); (b) measuring the effect of image compression (\S\ref{sec:compression}); (c) measuring the effect of different ISPs (\S\ref{sec:ISP}); and (d) measuring the effect of the device's operating system and processor (\S\ref{sec:OS}).
\section{Image Compression}
\label{sec:compression}
We now analyze the source and degree of instability within each phone, first focusing on compression. In this section, we analyze
only the raw photos taken in the end-to-end experiment on the iPhone and Samsung phone. The phones use two different compression schemes: Samsung uses JPEG and the iPhone uses HEIF. In order to isolate the effect of the ISP, we always use ImageMagick to compress and convert the photo to different formats.
\subsection{Compression Quality}
We compress the photos to JPEG with 3 different qualities: 100, 85 and 50. We used the compression parameters suggested by Google for machine learning~\cite{google-optimize-images}.
\begin{table}[t!]
\caption{Accuracy and image size for different JPEG compression qualities.}
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{llll}
\toprule
Metric & JPEG 100 & JPEG 85 & JPEG 50 \\
\midrule
Avg. Size [MB] & 3.05 & 0.65 & 0.25 \\
Accuracy & 54.0\% & 54.3\% & 54.5\% \\
\midrule
Instability & \multicolumn{3}{c}{7.6\%} \\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\label{tab:compression-quality}
\end{table}
Table~\ref{tab:compression-quality} shows the accuracy using different compression qualities. The results suggest that using Google's recommended compression parameters leads to very small differences in accuracy across compression qualities. In fact, in our experiment a higher compression ratio surprisingly yielded better accuracy. Yet the instability between the different qualities is 7.6\%.
Once again, despite the small difference in accuracy across the compression qualities, there is a noticeable instability.
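The re-compression step itself is straightforward; a minimal sketch with Pillow (the source file name is illustrative, and the quality values match Table~\ref{tab:compression-quality}):
\begin{verbatim}
from PIL import Image

SOURCE = "wine_bottle142.png"  # losslessly converted raw photo
img = Image.open(SOURCE).convert("RGB")
for quality in (100, 85, 50):
    # Re-encode the same pixels at different JPEG quality settings;
    # each output is then classified separately to measure instability.
    img.save(f"wine_bottle142_q{quality}.jpg", format="JPEG",
             quality=quality)
\end{verbatim}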
\begin{figure*}[t]
\begin{multicols}{4}%
\noindent%
{%
\subfigure[HEIF: backpack]{\includegraphics[width=0.23\textwidth, height=1.7cm]{images/compression/backpack/iPhone_XR-2-backpack296-heic.jpg}}\\%
\subfigure[JPEG: bonnet]{\includegraphics[width=0.23\textwidth, height=1.7cm]{images/compression/backpack/iPhone_XR-2-backpack296-jpg.jpg}}\\%
\subfigure[PNG: bonnet]{\includegraphics[width=0.23\textwidth, height=1.7cm]{images/compression/backpack/iPhone_XR-2-backpack296-png.jpg}}\\%
}%
{%
\subfigure[HEIF: wine bottle]{\includegraphics[width=0.23\textwidth, height=1.7cm]{images/compression/wine_bottle/iPhone_XR-2-wine_bottle142-heic.jpg}}\\%
\subfigure[JPEG: wine bottle]{\includegraphics[width=0.23\textwidth, height=1.7cm]{images/compression/wine_bottle/iPhone_XR-2-wine_bottle142-jpg.jpg}}\\%
\subfigure[PNG: beer bottle]{\includegraphics[width=0.23\textwidth, height=1.7cm]{images/compression/wine_bottle/iPhone_XR-2-wine_bottle142-png.jpg}}\\%
}%
{%
\subfigure[HEIF: safety pin]{\includegraphics[width=0.23\textwidth, height=1.7cm]{images/compression/purse/Samsung_Galaxy_S10-3-purse154-heic.jpg}}\\%
\subfigure[JPEG: purse]{\includegraphics[width=0.23\textwidth, height=1.7cm]{images/compression/purse/Samsung_Galaxy_S10-3-purse154-jpg.jpg}}\\%
\subfigure[PNG: stethoscope]{\includegraphics[width=0.23\textwidth, height=1.7cm]{images/compression/purse/Samsung_Galaxy_S10-3-purse154-png.jpg}}\\%
}%
{%
\subfigure[JPEG-q100: beer bottle]{\includegraphics[width=0.23\textwidth, height=1.7cm]{images/compression/JPG-compression/iPhone_XR-3-beer_bottle167-quality100.jpg}}\\%
\subfigure[JPEG-q85: lighter]{\includegraphics[width=0.23\textwidth,height=1.7cm]{images/compression/JPG-compression/iPhone_XR-3-beer_bottle167-quality85.jpg}}\\%
\subfigure[JPEG-q50: lighter]{\includegraphics[width=0.23\textwidth, height=1.7cm]{images/compression/JPG-compression/iPhone_XR-3-beer_bottle167-quality50.jpg}}%
}%
\end{multicols}
\vspace{-2.8pt}
\caption{Example of photos that cause instability due to different compression formats.\label{fig:compression-format-example}}
\end{figure*}
\subsection{Different Compression Formats}
We repeat this experiment but this time we compress the raw images into different formats: JPEG, PNG, WebP and HEIF. Each format uses its default compression parameters.
\begin{table}[h]
\caption{Image size, accuracy and instability for different compression formats.}
\label{tab:compression-format}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lllll}
\toprule
Metric & JPEG & PNG & WebP & HEIF \\
\midrule
Avg. Size [MB] & 1.54 & 6.49 & 0.29 & 0.57\\
Accuracy & 53.9\% & 53.9\% & 55.2\% & 54.4\% \\
\midrule
Instability & \multicolumn{4}{c}{9.66\%} \\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
In Table~\ref{tab:compression-format} we can see that, as before, the different compression formats led to small differences in accuracy. Yet the instability across formats is 9.66\%. Figure~\ref{fig:compression-format-example} shows a few examples of instability across the different compression formats. Notably, the images are almost identical to the naked eye, but produce different model predictions.
\section{Conclusions}
This work examines the source of variation of model inference across different devices. We show that accuracy is a poor metric to account for this variation, and propose a new metric, instability, which measures the percentage of nearly-identical inputs producing divergent outputs. We demonstrate that for classification, different compression formats and ISPs account for a significant source of instability. We also propose a technique to fine-tune models to reduce the instability due to different devices.
We believe further exploring other sources of instability is an important topic for future work. Other potential sources, which are beyond the scope of our paper, include variations in cameras and lenses, lighting and visibility conditions.
\section{End-to-end Instability}
\label{sec:end-to-end}
\begin{figure}[t]
\centering
\subfigure[Accuracy by phone model.]{\label{fig:old-experiment} \includegraphics[width=0.47\columnwidth]{plots/total_accuracy.pdf}}\hfill
\centering
\subfigure[Instability by class.]{\label{fig:old-experiment-instability}
\includegraphics[width=0.47\columnwidth]{plots/phone_inconsistancy.pdf}}
\centering
\subfigure[Instability by experiment angle. ]{\label{fig:old-experiment-variation}
\includegraphics[width=0.47\columnwidth]{plots/inconsistancy_across_experiment.pdf}}\hfill
\centering
\subfigure[Instability over repeat photos of the same object with same phone.]{\label{fig:old-experiment-self-instability}
\includegraphics[width=0.47\columnwidth]{plots/self_inconsistancy.pdf}}
\vspace{-2.8pt}
\caption{End-to-end accuracy and instability. Accuracy does not capture the variability in predictions across different models.\label{fig:end-to-end-overview}}
\end{figure}
In our first set of experiments, we seek to quantify the amount of instability across classification tasks within the same phone model and across phone models, and to evaluate whether accuracy is a sufficient metric for capturing the variability in classification across edge devices.
\subsection{Accuracy vs. Instability\label{sec:acc-instability}}
We first evaluated the accuracy of our model on all 5 phones. The results, presented in Figure~\ref{fig:old-experiment}, show that accuracy is generally similar across all phone photos, ranging between $59\%$ and $64\%$.
The instability across all models is depicted in Figure~\ref{fig:old-experiment-instability}. Instability is measured as the percentage of photos where at least one of the phones was correct ({i.e. } classified the right class) and at least one of the phones was incorrect, when taking a photo of the same image on the computer screen. The results show that while the accuracy remains relatively stable across the different phones, there is a high degree of instability: for most of the classes about 15\% of the images yield at least one correct and one incorrect classification.
The results also demonstrate a large degree of variance in the instability between the different classes; some of them are more prone to instability.
Importantly, \emph{this variation is not captured by the accuracy of each class}.
Figure~\ref{fig:old-experiment-variation} plots the variation in instability across the five angles, from left to right, and shows that instability does vary somewhat based on the angle of the image.
Figure~\ref{fig:old-experiment-self-instability} shows that there is instability even across experiments ({i.e. } different angles) within the same phone model. However, this instability is much lower than the instability across different models.
Based on the results of the experiments, we can conclude that accuracy is not a good metric to capture the variability of how well models perform across different devices.
\begin{figure}[h]
\centering
\subfigure[Stable images.]{
\includegraphics[width=0.45\columnwidth]{plots/inconsistant_image_confidence_stable.pdf} \label{fig:stable-images}}\hfill
\subfigure[Unstable photos.]{
\includegraphics[width=0.45\columnwidth]{plots/inconsistant_image_confidence_unstable.pdf} \label{fig:unstable-images}}
\vspace{-2.8pt}
\caption{Prediction score for stable and unstable images.}
\label{fig:app-experimet-confidence}
\end{figure}
\subsection{Model Confidence}
We evaluate the relationship between instability and the model's confidence in its prediction. Figure~\ref{fig:stable-images} shows the distribution of the prediction scores for stable photos ({i.e. } those where all phones were correct or all were incorrect). We can see there is a clear correlation between the score and whether the model was correct or not.
Figure~\ref{fig:unstable-images} shows that for unstable predictions ({i.e. } those where one phone was correct and one was incorrect), the prediction confidence of correct classifications tends to be almost identical to the confidence of the incorrect classifications. This implies that for most unstable images, the model has low confidence whereby even a small amount of noise can cause it to change its prediction.
Nevertheless, there is a noticeable group of outlier photos for which the model has very high confidence with one phone but low confidence with the other phone.
\section{Introduction}
Machine learning (ML) models are increasingly being deployed on a vast array of devices, including a wide variety of computers, servers, phones, cameras and other embedded devices~\cite{FB-ML-inference-on-edge}. The increase in the diversity of edge devices, and their bounded computational and memory capabilities, has led to extensive research on optimizing ML models for real-time inference on the edge.
However, prior work primarily focuses on the model's properties rather than on how each device introduces variations to the input. They tend to evaluate the models on public, well-known and hand-labeled datasets. Yet these datasets do not necessarily represent the transformations performed by these devices on the input to the model~\cite{DBLP:conf/icml/RechtRSS19,biased-datasets}. The wide variety of sensors and hardware, different geographic locations \cite{phones-different-countries} and processing pipelines all affect the input data and the efficacy of the ML model.
To demonstrate how even small real-world variations to the input can affect the output, in Figure~\ref{fig:burst} we show an example where pictures of the same object taken by the same phone, one second apart, from a fixed position (without moving the phone), produce different classification results on the same model. Such divergence occurs even more frequently when the same model is run on different devices. In order to understand how the model will perform when run on many different devices, it is important to ensure consistent model accuracy, and create representative training and evaluation datasets that will be robust to variations across devices and sensors.
To capture this variability, we introduce the notion of \emph{instability}, which denotes the probability that the same model outputs different contradictory predictions on nearly identical input when run on different edge devices.
Running models on a wide array of edge devices can create many different opportunities for instability. Some of the instability is caused by changes to the inputs to the model, for example by the usage of different sensors on different devices ({e.g. } camera lenses), applying different transformations on the raw captured data ({e.g. } image processing pipelines), and saving the data in different formats ({e.g. } compression). Further instability may be introduced by hardware on the device ({e.g. } GPUs handling of floating points) or the operating system ({e.g. } the OS stores pictures in a particular format).
Prior work on model robustness has either focused on simulating the sources of instability~\cite{buckler2017reconfiguring}, or on making models robust to adversaries~\cite{kurakin2016adversarial, bastani2016measuring, cisse2017parseval}. While adversarial learning is important for model robustness, the vast majority of instability in many applications is not caused by adversaries, but rather by normal variations in the device's hardware and software. Therefore, we lack a solid understanding of the \emph{sources of instability in real-world edge devices}.
In this work, we conduct the first systematic characterization of the causes and degree of variance in convolutional neural network (CNN) computer vision model efficacy across existing mobile devices. We conduct four sets of experiments, in which we try to isolate different sources of variance, and evaluate the degree of instability they cause. In each experiment below, the same model is used while we vary the input or the operating conditions and the outputs are compared:
\begin{denseenum}
\item \textbf{End-to-end instability across phones:} We evaluate end-to-end instability across 5 mobile devices in a lab environment, in which the phones' position and lighting conditions are tightly controlled.
\item \textbf{Image compression algorithm:} We evaluate the effect of image compression techniques, by compressing a raw image using different algorithms.
\item \textbf{Image signal processor (ISP):} We estimate the effect of using different ISPs, by comparing raw converted images using ImageMagick and Adobe Photoshop.
\item \textbf{Processor and operating system:} We evaluate the effect of the device's OS and hardware by running inference on the same set of input images across 5 phones.
\end{denseenum}
Our characterization leaves us with several takeaways. First, we demonstrate that accuracy fails to account for the lack of consistency of predictions across different devices, and motivate the need to focus on minimizing instability as an objective function. Second, we show that instability across devices is significant; between 14-17\% of images classified by MobileNetV2 have divergent predictions (both correct and incorrect) in at least two phones. Third, we show that a significant source of instability is due to variable image compression and ISPs. Finally, we do not find evidence that the devices' processors or operating system are a significant source of instability.
While the focus of this work is on systematically characterizing the sources of instability, we also provide a preliminary analysis of mitigation techniques. First, we explore whether fine-tuning the training process by augmenting the training dataset with noise or with photos taken from multiple different phone models would make the model more robust. Inspired by prior work on noise robustness~\cite{StabilityTraining}, we show that augmenting the training data set with such ``noise'' can reduce the instability by up to about 75\%. Second, once a model is trained, instability can be further reduced if the model either operates on raw images (when feasible) or displays additional classification results ({e.g. } the top three results instead of the top one result). Our future work will focus on a more systematic development of instability mitigation techniques to handle edge variance.
\begin{figure}[t!]
\includegraphics[width=0.15\textwidth]{images/burst_mode/water_bottle_bubble.jpg}
\includegraphics[width=0.15\textwidth]{images/burst_mode/water_bottle_water_bottle.jpg}
\includegraphics[width=0.15\textwidth]{images/burst_mode/water_bottle_difference.jpg}
\vspace{-0.25cm}
\caption{Two photos taken one second apart with Samsung Galaxy S10 in a controlled environment, where the phone was not touched between the two shots. MobileNetV2 assigns the incorrect label ``bubble'' on the left image and the correct label ``water bottle'' on the middle image. The right image shows the pixel difference between both photos. The red dots represent the pixels within a range higher than 5\%. There is a very small pixel difference between the images, yet it is sufficient to cause instability.}
\label{fig:burst}
\end{figure}
The paper makes the following contributions:
\begin{denseitemize}
\item{\textbf{Instability:}} We motivate and propose instability, a new metric for evaluating the variability of model predictions across different edge devices.
\item{\textbf{Characterization:}} We present the first comprehensive characterization of the end-to-end degree and potential sources of instability on real-world mobile devices.
\item{\textbf{Mitigation:}} We propose and evaluate three potential mitigation strategies for instability.
\item{\textbf{Stability Training for Devices:}} We adapt a prior approach to make computer vision models robust to random noise~\cite{StabilityTraining} for reducing instability across devices.
\end{denseitemize}
\section{Related Work}
Deploying deep learning on edge devices is a widely researched topic. Due to the limited resources of the edge devices, prior work focused on different techniques to make DNN models smaller and faster. MobileNet~\cite{MobileNet, mobilenetv2} and SqueezeNet~\cite{iandola2016squeezenet} are examples of models optimized to run on resource-limited devices. Another way to improve the size and performance of the models is by using techniques such as quantization~\cite{DBLP:journals/corr/HanMD15, BinarizedNN, Ternary-Weight-Networks, DBLP:conf/cvpr/JacobKCZTHAK18} and pruning~\cite{Optimal-Brain-Damage,DBLP:conf/nips/HassibiS92, DBLP:conf/nips/HanPTD15,DBLP:journals/corr/HanMD15}. However, these prior works do not consider the variability of running the model across different devices.
Prior work from Facebook~\cite{FB-ML-inference-on-edge} presents the challenges of deploying machine learning algorithms on heterogeneous environments, such as mobile phones. They focus on the hardware challenges of accelerating ML on the edge, while maintaining performance, accuracy and high user experience. Their work is complementary to ours, as it mostly focuses on the latency and throughput of the model.
Past work examined the effects of image signal processors (ISPs) on deep learning accuracy. Buckler {et al.\xspace}~\cite{buckler2017reconfiguring} evaluated the effects of different stages of the ISP on accuracy and power usage, by creating a reversible ISP pipeline simulation tool. Using this analysis they created a low-power ISP for deep learning. Liu {et al.\xspace}~\cite{liu2015ultra} observe that perceived quality of an image is not correlated with accuracy of a CNN. Based on this insight they create a low-power ISP. Schwartz {et al.\xspace}~\cite{DeepISP} built an end-to-end ISP using a CNN by learning a mapping between raw sensor data and ISP processed images. To summarize, this set of works attempts to design a new ISP pipeline for deep learning. In contrast, our work treats ISPs as a black box, as they are largely outside the control of the developer and not consistent across devices.
There is a large body of work on deep learning robustness~\cite{nguyen2015deep, szegedy2013intriguing}. Akhtar {et al.\xspace} survey the different papers in this area~\cite{survay}. The majority of work on DL robustness focuses on noise from adversarial examples, which are images specifically designed to produce incorrect results in models~\cite{kurakin2016adversarial, bastani2016measuring, cisse2017parseval}.
In contrast to work on adversarial robustness, DeepXplore~\cite{deepxplore} and DeepTest~\cite{deeptest} focus on how to train models to be more robust to environmental noise that may arise while driving, such as foggy road conditions. In our work we show that the device itself might add instability to the model. DeepCorrect~\cite{deepcorrect} focuses on reducing Gaussian noise created during image acquisition. Meanwhile, our work focuses on differences in noise between different devices. Dodge {et al.\xspace}~\cite{dodge2016understanding} study how models are affected by random Gaussian noise. Our fine-tuning model is inspired by prior work on stability training~\cite{StabilityTraining}, which tries to make models more robust to small perturbations in input data. We expand the work by adding a noise model designed to simulate differences between phones. We further show how to utilize stability training with minimal data collection.
\section{Mechanisms to Reduce Instability}
In this section, we propose and evaluate three different approaches to reduce instability when running a model on different devices:
\begin{denseenum}
\item Fine-tuning the model using stability loss to be more robust to input noise.
\item Reducing noise in the input by using raw images, and applying consistent ISP and compression.
\item Modifying the prediction task to account for instability, {e.g. } using the top 3 predicted classes.
\end{denseenum}
\subsection{Stability Training}
The main approach we investigated for reducing instability is fine-tuning the model on images taken by the phone. However, fine-tuning to a specific phone model introduces some challenges. A naive approach would be to train on photos taken from every phone the model might run on. However, there are thousands of constantly changing phone models, and developers often cannot anticipate which devices will run their model.
\begin{figure}[h]
\centering
\subfigure[Gaussian noise generator]{\label{fig:gaussian-stability}%
\includegraphics[width=0.235\textwidth]{images/stability/stability_model_gaussian.pdf}}\hfill
\subfigure[Two images as inputs]{\label{fig:two-images-stability}%
\raisebox{0.7cm}{%
\includegraphics[width=0.235\textwidth]{images/stability/stability_model_two_images.pdf}}}
\vspace{-2.8pt}
\caption{Examples of different versions of the fine-tuned stability model. We tested different versions of noise generation and stability loss. In the embedding distance stability loss an extra dense layer is added to the model.}
\label{fig:stability-model}
\end{figure}
\begin{table*}[t!]
\caption{Instability between iPhone and Samsung photos with fine-tuned MobileNetV2 with different stability losses and noise generation schemes. The hyper parameters for each training scheme are also included.}
\label{tab:stability-training}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\subfigure[Embedding distance loss.]{
\begin{tabular}{ccc}
\toprule
Noise & Hyper Parameters & Instability \\
\midrule
Two Images & $\alpha=0.001$ & 3.91\% \\
Subsample & $\alpha=0.001$ $\#images=10$ & 4.22\% \\
Distortion & $\alpha=0.01$ & 5.12\% \\
Gaussian & $\alpha=0.001$ $\sigma^{2}=0.04$ & 5.12\% \\
No Noise & n/a & 7.22\% \\
\bottomrule
\end{tabular}\label{tab:embedding-instability}}
\subfigure[Relative entropy loss.]{
\begin{tabular}{ccc}
\toprule
Noise & Hyper Parameters & Instability \\
\midrule
Two Images & $\alpha=0.01$ & 6.32\% \\
Subsample & $\alpha=0.01$ $\#images=10$ & 5.72\% \\
Distortion & $\alpha=0.1$ & 4.52\% \\
Gaussian & $\alpha=1$ $\sigma^{2}=0.025$ & 4.82\% \\
No Noise & n/a & 6.62\% \\
\bottomrule
\end{tabular}\label{tab:kl-instability}}
\end{sc}
\end{small}
\end{center}
\label{tab:stability-results}
\end{table*}
\begin{figure*}
\centering
\subfigure[Embedding distance loss]{%
\label{fig:embedding-pr}%
\includegraphics[width=0.45\textwidth]{plots/pr_curve_embedding_stability.pdf}}
\subfigure[Relative entropy loss]{%
\label{fig:kl-stab-pr}%
\includegraphics[width=0.45\textwidth]{plots/pr_curve_kl_stability.pdf}}
\vspace{-0.25cm}
\caption{Precision-recall curves for the different stability fine-tuning schemes, tested on Samsung and iPhone phones. Stability training not only reduces instability but slightly increases accuracy.}
\label{fig:pr-curves}
\end{figure*}
Therefore, an alternative solution is to use techniques that make the model more robust to noise.
Robust machine learning is a well-researched area~\cite{NIPS2017_6821,nguyen2015deep,szegedy2013intriguing,kurakin2016adversarial,bastani2016measuring,cisse2017parseval,tanay2016boundary,survay}, but most papers deal with robustness to adversarial examples hand-crafted to change model predictions, rather than with non-adversarial noise. We are inspired by prior work by Zheng {et al.\xspace}~\cite{StabilityTraining} on stability training, a technique for improving model accuracy under environmental noise. We further develop the stability training idea to reduce instability caused by different devices.
In stability training, during training each training image is complemented by an augmented image, which is generated by adding uncorrelated Gaussian pixel noise, {i.e. } if $k$ is the pixel index for image $x$, and $\sigma^{2}$ is the standard deviation of the noise, we generate $x'$ such that:
\setlength{\belowdisplayskip}{1mm} \setlength{\belowdisplayshortskip}{1mm}
\setlength{\abovedisplayskip}{1mm} \setlength{\abovedisplayshortskip}{1mm}
$$x'_{k} = x_{k} + \epsilon, \epsilon \sim \mathcal{N}(0,\sigma^2)$$
Because Gaussian noise is not representative of the differences between two phone images, we tried another version of noise generation: simulating the noise introduced by different phone ISPs. Our simulated phone noise randomly distorts different aspects of the training image: the hue, contrast, brightness, saturation and JPEG compression quality.
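A minimal sketch of this distortion-based noise generator using TensorFlow image ops (the specific distortion ranges below are illustrative assumptions; in our experiments the ranges were treated as tunable settings):
\begin{verbatim}
import tensorflow as tf

def distort(image):
    # Simulated ISP noise: randomly perturb hue, contrast, brightness,
    # saturation and JPEG quality of a float image in [0, 1].
    x = tf.image.random_hue(image, max_delta=0.05)
    x = tf.image.random_contrast(x, lower=0.9, upper=1.1)
    x = tf.image.random_brightness(x, max_delta=0.05)
    x = tf.image.random_saturation(x, lower=0.9, upper=1.1)
    x = tf.image.random_jpeg_quality(x, min_jpeg_quality=50,
                                     max_jpeg_quality=100)
    return tf.clip_by_value(x, 0.0, 1.0)
\end{verbatim}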
The model is trained with an augmented loss function, where $\theta$ are the trainable model weights, and $\alpha$ is an adjustable hyper parameter:
$$L(x,x',\theta) = L_{0}(x,\theta) + \alpha L_{s}(x,x',\theta)$$
$L_{0}$ is the regular classification cross entropy loss, {i.e. } for the predicted label vector $y$ and true label vector $\hat{y}$:
$$L_{0}(x,\theta) = \scalebox{0.75}[1.0]{\( - \)}\sum\hat{y}_{j}Log(P(y_{j}|x,\theta))$$
The stability loss $L_{s}(x,x',\theta)$ can come in one of two forms:
\begin{denseitemize}
\item The relative entropy (Kullback–Leibler divergence) over the model prediction of the input image against those of the noisy image:
$$L_{s}(x,x',\theta) = \scalebox{0.75}[1.0]{\( - \)}\sum P(y_{j}|x,\theta)Log\Big(\frac{P(y_{j}|x',\theta)}{P(y_{j}|x,\theta)}\Big)$$
\item The Euclidean distance between the embedding layer (the input to the last fully-connected layer of the model) between the input image and the noisy image:
$$L_{s}(x,x',\theta) = \|f(x,\theta) - f(x',\theta)\|_{2}$$
\end{denseitemize}
We implemented the stability training technique~\cite{StabilityTraining} using Keras~\cite{keras} with a Tensorflow 2.3.0 backend~\cite{abadi2016tensorflow}. We tried different variations of the stability training model to evaluate which approach would reduce instability the most. An illustration of some of the different versions of stability models we tried can be seen in Figure~\ref{fig:stability-model}. We train using a Tesla K80 GPU.
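For concreteness, the relative entropy variant of the stability loss can be expressed in Keras roughly as follows (a sketch under TF 2.x, not our exact training code; \texttt{alpha} is the hyper parameter $\alpha$ above):
\begin{verbatim}
import tensorflow as tf

cce = tf.keras.losses.CategoricalCrossentropy()
kld = tf.keras.losses.KLDivergence()

def stability_loss(model, x, x_noisy, y_true, alpha):
    p_clean = model(x, training=True)        # P(y | x)
    p_noisy = model(x_noisy, training=True)  # P(y | x')
    l0 = cce(y_true, p_clean)    # classification loss on the clean image
    ls = kld(p_clean, p_noisy)   # KL(P(y|x) || P(y|x'))
    return l0 + alpha * ls
\end{verbatim}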
We compare our noise generation models to a version of the stability training in which the noisy image is not automatically generated but rather supplied as a separate input to the model. For instance, if we are training on images from the end-to-end experiment taken by the Samsung phone, we can supply the equivalent images from the iPhone as the noisy version of the images.
\begin{figure*}[ht!]
\centering
\subfigure[JPEG vs. raw converted images on Samsung and iPhone.]{\label{fig:app-experiment-inconsistancy-a}%
\includegraphics[width=0.3\textwidth]{plots/inconsistancy_raw_vs_jpg.pdf}}
\subfigure[JPEG vs. raw converted images on Samsung and iPhone, broken by class.]{\label{fig:app-experiment-inconsistancy-b}%
\includegraphics[width=0.3\textwidth]{plots/inconsistancy_raw_vs_jpg_by_label.pdf}}
\subfigure[Accuracy of JPEG vs. raw converted images on Samsung and iPhone.]{\label{fig:app-experiment-inconsistancy-c}%
\includegraphics[width=0.3\textwidth]{plots/accuracy_raw_vs_jpg.pdf}}
\vspace{-2.8pt}
\caption{Comparing JPEG and raw photos which where converted to PNG on iPhone and Samsung phones.}
\end{figure*}
In our final stability training experiment we similarly train on two images, one from Samsung and one from iPhone, but this time we limit the number of images we collect per class for iPhone. This version simulates how many images a developer would need to collect to adjust their model to a new phone. For example, if we have a dataset from Samsung phones, it may be possible to augment it by collecting a limited number of photos per class ({e.g. } 10) from an iPhone to make the model more stable. We treat the number of images per class as a hyper parameter. This version is similar to the two-image version of our model, but for every image from Samsung, instead of supplying the corresponding iPhone image to the model, we pick one image from a small subset of iPhone images for the same class.
We trained a model for each version of noise generation with each version of stability loss on Samsung photos taken from the end-to-end experiment. For the versions of the training that require noisy input images we use the photos from the iPhone. For a base model we use a MobileNetV2 pre-trained on ImageNet. We test the result on photos taken both from Samsung and iPhone. We found our hyper parameters for the models using grid search. We compare all the stability trained model with regular fine-tuning of the base model on the Samsung phone without a stability loss. To evaluate the embedding distance loss we added one extra fully-connected layer to the base model.
Table~\ref{tab:stability-results} presents the instability results across all noise generation schemes and both stability losses. We can see from the results that fine-tuning without stability training (denoted as ``No noise'') reduces instability the least. The results also demonstrate that including images from the iPhone for every image from the Samsung phone, and using embedding distance loss reduces instability the most. Yet, collecting many photos from each phone is not realistic.
Fortunately, augmenting the model with a modest number of photos per class from an iPhone (10 photos for each class), and using embedding distance loss resulted in fairly similar instability to collecting the entire dataset from an iPhone. We get 4.22\% instability with 10 photos per class, vs. 3.91\% with the entire dataset.
This solution though still requires calibration photos from each new phone we encounter. If that is not possible, using an image distortion noise with a relative entropy loss still provides a significantly lower instability (4.52\%) than the baseline, and requires no new data collection.
Figure~\ref{fig:pr-curves} contains the precision-recall graphs for all models trained under different noise generation schemes and stability losses. Interestingly, stability training has a small accuracy benefit as well as a benefit to instability. The two fine-tuning modes that augment the Samsung photos with iPhone photos provide the highest accuracy benefit.
\subsection{Using Raw Images in Inference\label{sec:using-raw}}
Our second approach is to test whether using raw images might reduce the level of instability, by comparing instability between JPEG and raw images. We repeated our end-to-end experiment but this time have each phone take both a compressed JPEG image and a raw DNG image, which is then converted to PNG. This limited the experiment to only the iPhone and Samsung phones, since they are the only phones capable of taking raw images. The conversion is done in a consistent manner for both phones using ImageMagick~\cite{imagemagick}, to eliminate any differences that may arise from using different ISPs between the phones.
Figure~\ref{fig:app-experiment-inconsistancy-a} and Figure~\ref{fig:app-experiment-inconsistancy-b} compare the instability between photos of the same object on both phones using MobileNetV2. The results demonstrate that using raw images does indeed slightly reduce the instability, both in the aggregate and in every class independently, but not by a significant amount.
Figure~\ref{fig:app-experiment-inconsistancy-c} compares the accuracy of JPEG vs. raw converted images. This experiment interestingly shows that instability and accuracy are not necessarily correlated. While results on photos taken from both phones had similar accuracy, with iPhone photos producing only slightly better results, the instability was much higher than the accuracy difference. Additionally, using converted raw images did not lead to significant changes in accuracy.
Utilizing raw images and a consistent conversion pipeline results in an average 11.5\% improvement in instability. Consistently throughout all experiments, utilizing raw images outperformed using the phone's pipeline.
However, utilizing raw images did not eliminate instability completely. This implies, as described in \S\ref{sec:ISP}, that even in phones with raw image access, it is not always clear at what stage of the pipeline we get the raw image from. In addition, utilizing raw images requires hardware support from the devices. For instance, in our end-to-end experiments only two of the five devices were able to take raw images.
\subsection{Simplifying the Classification Task}
The last approach to reducing instability is to modify the prediction itself. For many models, such as recommendation systems or document search, it might be good enough for the correct prediction to be in one of the top $n$ predictions, rather than being the top result. To test this approach, using the same experimental setup we used before, we test the accuracy of having the correct classification appear in one of the top three results (Figure~\ref{fig:top3-accuracy}). Unsurprisingly we achieve a higher accuracy. Similarly, in Figure~\ref{fig:top3-inconsistancy} we can see that instability is also improved when we use top three results. Both accuracy and instability are improved by about $30\%$. Unfortunately, such task simplification is not viable for every application, and even for those where it is feasible, requiring the user to sift through additional possible classification results may degrade the user experience.
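Checking whether the correct class appears among the top three predictions requires only a small change to the evaluation (a sketch; \texttt{probs} stands for the model's softmax output vector):
\begin{verbatim}
import numpy as np

def topk_correct(probs, true_label, k=3):
    # Indices of the k highest-scoring classes.
    topk = np.argsort(probs)[::-1][:k]
    return true_label in topk
\end{verbatim}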
\begin{figure}[h]
\centering
\subfigure[Top 3 prediction accuracy for end-to-end experiment.]{
\includegraphics[width=0.47\columnwidth]{plots/top3_accuracy_raw_vs_jpg.pdf}
\label{fig:top3-accuracy}}
\hfill
\subfigure[Top 3 prediction instability for end-to-end experiment.]{
\centering
\includegraphics[width=0.47\columnwidth]{plots/top3_inconsistancy_raw_vs_jpg.pdf}
\label{fig:top3-inconsistancy}}
\vspace{-2.8pt}
\caption{Accuracy and instability of a model that displays the top three classifications compared to only the top one.}
\end{figure}
\section{Takeaways From the Experiments}
To summarize, the end-to-end experiment demonstrated there is significant instability in model prediction generated by mobile phones even when they are taking photos of the same object in the same environmental conditions. Our experimental results show that there can be 14\%-17\% instability overall and even higher for individual classes.
We examined potential root causes of the instability. From our analysis it seems that most of the instability can be explained by the different ways mobile phone process images. We have seen that compression differences can cause about 5-10\% instability, and ISP pipeline differences may cause about 14\%. We saw very little instability caused by running different OSes or processors.
In the next section, we evaluate how to mitigate instability across devices.
\section{Introduction}
A \defn{Markov chain} is a model that describes transitions between states in a state space according to certain probabilistic
rules. The defining characteristic of a Markov chain is that the transition from one state to another only depends on the current
state and the elapsed time, but not on how the system arrived there. In other words, a Markov chain is ``memoryless''.
Markov chains have an abundance of applications, from data analysis and population dynamics to traffic models.
For a Markov chain, the \defn{stationary distribution} $\Psi$ is the long-term limiting distribution. Mathematically
speaking, it is the eigenvector of the transition matrix $T$ of the Markov chain with eigenvalue one. That is
\[
T \Psi = \Psi.
\]
An important question is how quickly does the Markov chain converge to the stationary distribution.
In Markov chain theory, distance is usually the total variation distance or half the $L^1$-norm in classical analysis.
If $\Omega$ is the state space, the total variation distance between two probability distributions $\nu$ and $\mu$ is
defined as
\[
\|\nu - \mu\| = \max_{A \subseteq \Omega} |\nu(A) - \mu(A)|.
\]
For a given small $\epsilon>0$, the \defn{mixing time} $t_\mathsf{mix}$ is the smallest $t$ such that
\[
\| T^t \nu - \Psi \| \leqslant \epsilon,
\]
independent of the initial distribution $\nu$.
In seminal work of Bidigare, Hanlon and Rockmore~\cite{BHR.1999}, which was continued by Diaconis, Brown,
Athanasiadis, Bj\"orner, Chung and Graham, amongst others~\cite{BrownDiaconis.1998,BBD.1999,Brown.2000,Bjorner.2008,
Bjorner.2009,Athanasiadis.Diaconis.2010,ChungGraham.2012,Saliola.2012}, the special family of semigroups, now known as
\emph{left regular bands} first studied by Sch\"utzenberger~\cite{Schuetzenberger.1947} in the forties, was applied to
random walks or Markov chains on hyperplane arrangements. In his 1998 ICM lecture~\cite{Diaconis.icm.1998}, Diaconis
discussed these developments. In Section~4.1, entitled \emph{What is the ultimate generalization?}, he asks
how far the semigroup techniques can be taken.
Every finite state Markov chain $\mathcal{M}$ has a random letter representation, that is, a representation of a semigroup
$S$ acting on the left on the state space $\Omega$. See for example~\cite[Proposition 1.5]{LevinPeres.2017}
and~\cite[Theorem 2.3]{ASST.2015}. In this setting, there is a transition $s \stackrel{a}{\longrightarrow} s'$ with probability
$0\leqslant x_a\leqslant 1$, where $s, s'\in \Omega$, $a\in S$ and $s'=a.s$ is the action of $a$ on the state $s$.
It is enough to consider the semigroup $S$ generated by the elements $a$ with $x_a>0$, called the generating set $A$.
For example, the Markov chain with state space $\Omega =\{1,2\}$ and transition diagram
\begin{equation}
\label{equation.markov linear}
\raisebox{-1cm}{
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=3cm,
semithick]
\tikzstyle{every state}=[fill=red,draw=none,text=white]
\node[state] (I) {$\mathbf{1}$};
\node[state] (A) [right of=I] {$\mathbf{2}$};
\path (I) edge [bend left] node {$2,3$} (A)
edge [loop left] node {$1$} (I)
(A) edge[bend left] node {$1,3$} (I)
edge [loop right] node {$2$} (A);
\end{tikzpicture}}
\end{equation}
can be associated to the semigroup with right Cayley graph depicted in Figure~\ref{figure.right cayley linear}.
The conceptual reason why a Markov chain described using the \textit{left} action of a semigroup can be analyzed using the
\textit{right} Cayley graph is that if time goes left (due to the left action), then coupling from the past corresponds to
right multiplication. The transition matrix in this Markov chain is
\[
T = \begin{pmatrix}
x_1 & x_1+x_3\\
x_2+x_3 & x_2
\end{pmatrix}.
\]
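For this small chain, the stationary distribution and the mixing time can be computed numerically (a sketch with NumPy; the probabilities $x_1, x_2, x_3$ are illustrative and must sum to one):
\begin{verbatim}
import numpy as np

x1, x2, x3 = 0.5, 0.3, 0.2            # illustrative letter probabilities
T = np.array([[x1,      x1 + x3],
              [x2 + x3, x2     ]])    # column-stochastic transition matrix

# Stationary distribution: eigenvector of T for the eigenvalue 1.
vals, vecs = np.linalg.eig(T)
psi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
psi /= psi.sum()

def mixing_time(T, psi, nu, eps=1e-4):
    # Smallest t with total variation distance ||T^t nu - psi|| <= eps
    # (checked here for a single initial distribution nu).
    t, dist = 0, nu
    while 0.5 * np.abs(dist - psi).sum() > eps:
        dist = T @ dist
        t += 1
    return t

print(psi, mixing_time(T, psi, np.array([1.0, 0.0])))
\end{verbatim}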
\begin{figure}[t]
\begin{center}
\begin{tikzpicture}[auto]
\node (A) at (0, 0) {$\mathbbm{1}$};
\node (B) at (-3,-5) {$1$};
\node(C) at (3,-5) {$2$};
\node(D) at (0,-2) {$\bullet$};
\node(E) at (0,-4) {$\bullet$};
\path (A) edge[->,thick, blue, bend right = 40] node[midway,left] {$1$} (B)
edge[->,thick, blue, bend left = 40] node[midway,right] {$2$} (C);
\draw[edge,thick,blue] (A) -- (D) node[midway, left] {$3$};
\draw[edge,thick,blue] (D) -- (B) node[midway, above] {$2$};
\draw[edge,thick,blue] (D) -- (C) node[midway, above] {$1$};
\draw[edge,thick,blue] (E) -- (B) node[midway, below] {$1$};
\draw[edge,thick,blue] (E) -- (C) node[midway, below] {$2$};
\path (D) edge[->,thick, bend right = 40] node[midway,left] {$3$} (E);
\path (E) edge[->,thick, bend right = 40] node[midway,right] {$3$} (D);
\path (B) edge[->,thick, loop left] node {$1,2,3$} (B);
\path (C) edge[->,thick, loop right] node {$1,2,3$} (C);
\end{tikzpicture}
\end{center}
\caption{\label{figure.right cayley linear} The right Cayley graph $\mathsf{RCay}(S,A)$ of the semigroup that
gives the Markov chain in~\eqref{equation.markov linear} with generators $A=\{1,2,3\}$.}
\end{figure}
In the pursuit of finding Diaconis' ultimate generalization~\cite{Diaconis.icm.1998}, the arguments in Brown and
Diaconis~\cite{BrownDiaconis.1998} were generalized to Markov chains for $\mathscr{R}$-trivial
semigroups~\cite{ASST.2015}. In~\cite{RhodesSchilling.2019,RhodesSchilling.2019a}, the current authors developed
a general theory for computing the stationary distribution for any finite Markov chain. The theory uses semigroup methods
such as the Karnofsky--Rhodes and McCammond expansion of a semigroup. These expansions give rise to loop graphs
which immediately yield Kleene expressions for all paths from the root of the graph to elements in the minimal ideal of
the semigroup. The Kleene expressions in turn give rational expressions for the stationary distribution.
In this paper we apply the findings of~\cite{RhodesSchilling.2019,RhodesSchilling.2019a} to study upper bounds on the mixing
time of the Markov chain. In particular, Theorems~\ref{theorem.main} and~\ref{theorem.main1} provide upper bounds
for the mixing time directly from the rational expression of the stationary distribution in the case when the minimal ideal
of the semigroup is left zero. This general theory is applied to specific examples (Tsetlin library, edge flipping on a line Markov chain,
and a new Markov chain on linear extensions) in Section~\ref{section.examples}.
The paper is organized as follows. In Section~\ref{section.mixing time}, we develop the main theory.
In Section~\ref{section.mixing truncation}, we present our main theorems regarding the upper bounds
on the mixing time (see Theorems~\ref{theorem.main} and~\ref{theorem.main1}).
We discuss the relation to the Shannon entropy in Section~\ref{section.shannon}.
In Section~\ref{section.decreasing statistics}, we refine bounds on the mixing time using certain statistics
that were developed in~\cite{ASST.2015,ASST.2015a}. In Section~\ref{section.syntactic}, we consider semigroups
syntactic at zero. In particular, we prove in Theorem~\ref{theorem.syntactic} that the upper bounds on the mixing time do not change by
replacing the semigroup by its syntactic image. In Section~\ref{section.languages}, we relate observations on mixing
time to $d$-testable languages. Finally, in Section~\ref{section.examples} we consider specific examples such as the
Tsetlin library~\cite{Tsetlin.1963}, edge flipping on a line~\cite{BrownDiaconis.1998,ChungGraham.2012}, and a new Markov
chain on linear extensions of a poset with $n$ vertices, which is inspired by but different from the promotion Markov
chain~\cite{AyyerKleeSchilling.2014}. This new Markov chain has a mixing time of $O(n \log n)$, as
compared to the $O(n^3 \log n)$ mixing time of the model of Bubley and Dyer~\cite{BubleyDyer.1999}.
\subsection*{Acknowledgments}
We are grateful to Arvind Ayyer, Darij Grinberg, John Hunter, Stuart Margolis, Igor Pak, Dan Romik,
Eric Severson, Benjamin Steinberg, and Andrew Waldron for discussions.
The last author was partially supported by NSF grants DMS--1760329, DMS--1764153, and DMS--205335.
This material is based upon work supported by the Swedish Research Council under
grant no. 2016-06596 while the author was in residence at Institut
Mittag-Leffler in Djursholm, Sweden during Spring 2020.
An extended abstract of this paper has appeared in the proceedings for FPSAC 2021~\cite{RS.2021}.
\section{Mixing time}
\label{section.mixing time}
Let $T$ be the \defn{transition matrix} of a finite Markov chain. Assuming that the Markov chain is \defn{ergodic}
(meaning that it is irreducible and aperiodic), by the Perron--Frobenius Theorem there exists a unique \defn{stationary
distribution} $\Psi$ and $T^t \nu$ converges to $\Psi$ as $t\to \infty$ for any initial state $\nu$. A Markov chain is irreducible
if the graph of the Markov chain is strongly connected. It is aperiodic if the gcd of the cycle lengths in the graph of the Markov
chain is one. In fact, the stationary distribution is the right eigenvector of $T$ with eigenvalue one
\[
T \Psi = \Psi.
\]
The \defn{mixing time} measures how quickly the Markov chain converges to the stationary distribution.
For a given small $\epsilon>0$, $t_\mathsf{mix}$ is the smallest $t$ such that
\[
\| T^t \nu - \Psi \| \leqslant \epsilon.
\]
We begin this section by reviewing methods to compute upper bounds on mixing times in Section~\ref{section.upper bound}
(see in particular Theorem~\ref{theorem.ASST}), relations between ideals and semaphore codes and how this relates to mixing time
in Section~\ref{section.semaphore}, and the Markov and Chernoff inequalities to bound mixing time in Sections~\ref{section.markov}
and~\ref{section.chernoff}. The semigroup methods of~\cite{RhodesSchilling.2019,RhodesSchilling.2019a} to compute rational expressions
of the stationary distribution of a Markov chain in terms of the probabilities $x_a$ for the generators $a\in A$ of the semigroup
are reviewed in Section~\ref{section.rational}. Our main new results for the upper bounds of the mixing times in terms of truncations
of the rational expressions of the stationary distribution (Theorem~\ref{theorem.main}) and using a Cauchy--Euler operator (Theorem~\ref{theorem.main1})
are stated in Section~\ref{section.mixing truncation}. In Section~\ref{section.shannon} we discuss the relation between Shannon entropy and
mixing time. Sections~\ref{section.decreasing statistics}--\ref{section.languages} are devoted to new results in special settings, for example
for monoids which are syntactic at zero (Theorem~\ref{theorem.syntactic}) and $d$-testable languages (Remark~\ref{remark.ideal containment}).
\subsection{Upper bound}
\label{section.upper bound}
Brown and Diaconis~\cite{BrownDiaconis.1998} (see also~\cite[Theorem 0]{Brown.2000}) showed, for Markov chains
associated to left regular bands, that the total variational distance from stationarity after $t$ steps is bounded above
by the probability $\mathsf{Pr}(\tau> t)$, where $\tau$ is the first time that the walk hits a certain ideal. The arguments in
Brown and Diaconis~\cite{BrownDiaconis.1998} can be generalized to arbitrary finite Markov chains (not just those
related to left regular bands). To state the details, we need some more notation.
Let $\mathcal{M}(S,A)$ be a finite state Markov chain with state space $\Omega$ and transition matrix $T$ associated to the
semigroup $S$ with generators $A$ with probabilities $0<x_a\leqslant 1$ for $a\in A$.
A two-sided \defn{ideal} $I$ (or ideal for short) is a subset $I \subseteq S$ such that $u I v \subseteq I$ for all
$u,v \in S^{\mathbbm{1}}$, where $S^{\mathbbm{1}}$ is the semigroup $S$ with identity $\mathbbm{1}$ added (even if
$S$ already contains an identity). If $I,J$ are ideals of $S$, then $IJ \subseteq I \cap J$, so that $I \cap J \neq \emptyset$.
Hence every finite semigroup has a unique nonempty minimal ideal denoted $K(S)$.
Assume that the minimal ideal $K(S)$ is \defn{left zero}, that is, $xy=x$ for all $x,y\in K(S)$.
This assumption implies that the Markov chain on the minimal ideal (given by the left action) is ergodic.
Let $\tau$ be the random variable which is the time that the random walk is absorbed into the minimal ideal $K(S)$.
\begin{theorem} \cite{ASST.2015}
\label{theorem.ASST}
Let $S$ be a finite semigroup whose minimal ideal $K(S)$ is a left zero semigroup and let $T$ be the transition
matrix of the associated Markov chain. Then
\[
\| T^t \nu - \Psi \| \leqslant \mathsf{Pr}(\tau>t).
\]
\end{theorem}
\begin{proof}
By~\cite[Corollary 3.5(3)]{ASST.2015}, we have
\[
\| T^t \nu - \Psi \| \leqslant P^{\star t}(S \setminus K(S)),
\]
where $P^{\star n}$ denotes the $n$-th convolution power of $P$.
By~\cite[Eq. (4.6)]{ASST.2015}, the right hand side equals $\mathsf{Pr}(\tau>t)$.
\end{proof}
\subsection{Ideals and semaphore codes}
\label{section.semaphore}
Let $A$ be a finite alphabet, $A^+$ the set of all nonempty words in the alphabet $A$, and $A^\star$ the set of all
words in the alphabet $A$, including the empty word.
As shown in~\cite{RSS.2016}, ideals in $A^+$ are in bijection with semaphore codes~\cite{BPR.2010}.
A \defn{prefix code} is a subset of $A^+$ such that all elements are incomparable in prefix order (meaning that no
element is the prefix of any other element of the code).
A \defn{semaphore code} $\mathcal{S}$ is a prefix code such that $A \mathcal{S} \subseteq \mathcal{S} A^\star$.
There is a natural left action on a semaphore code. If $u \in \mathcal{S} \subseteq A^+$ and $a \in A$, then $a u$
has a prefix in $\mathcal{S}$, and since $\mathcal{S}$ is a prefix code this prefix is unique. The left action $a.u$ is the prefix of $a u$ that is in
$\mathcal{S}$. Assigning probability $0 \leqslant x_a \leqslant 1$ to $a\in A$, the left action on a semaphore code
$\mathcal{S}$ defines a Markov chain with a countable state space $\mathcal{S}$.
The bijection between ideals $I \subseteq A^+$ and semaphore codes $\mathcal{S}$ over $A$ is given as follows
(see~\cite[Proposition 4.3]{RSS.2016}).
If $u = a_1 a_2 \ldots a_j \in I \subseteq A^+$, find the (necessarily unique) index $1\leqslant i \leqslant j$
such that $a_1 \ldots a_{i-1} \not \in I$, but $a_1 \ldots a_i \in I$. Then $a_1 \ldots a_i$ is a code word and the set of all
such words forms the semaphore code $\mathcal{S}$. Conversely, given a semaphore code
$\mathcal{S}$, the corresponding ideal is $\mathcal{S} A^\star$.
In this setting, $\tau$ can be interpreted as the random variable given by the length of the semaphore code words.
Indeed, let $\mathcal{S}$ be a semaphore code and $I$ the ideal under the bijection described above. A semaphore code
word $s = s_1 s_2 \ldots s_\ell$ has the property that $s\in I$, but $s_1 s_2 \ldots s_{\ell-1} \not \in I$, so the length $\ell$
is precisely the first time the walk hits the ideal.
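The following short Python sketch (our own illustration) makes the cutting procedure concrete; the ideal used here is
hypothetical, namely all words over $A=\{a,b\}$ containing the factor $aa$.
\begin{verbatim}
def in_ideal(word):
    # Hypothetical ideal: all words over {a, b} containing the factor 'aa'.
    return 'aa' in word

def semaphore_word(word):
    """Return the unique prefix of `word` lying in the semaphore code, i.e. the
    prefix at which `word` first enters the ideal; its length is the value of tau."""
    for i in range(1, len(word) + 1):
        if in_ideal(word[:i]):
            return word[:i]
    return None    # the word has not yet entered the ideal

print(semaphore_word('babaab'))    # 'babaa', so tau = 5 for this trajectory
\end{verbatim}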
Next we discuss two ways to approximate $\mathsf{Pr}(\tau>t)$ using Markov's and Chernoff's inequality.
\subsection{Markov's inequality}
\label{section.markov}
By Markov's inequality (see for example~\cite{LevinPeres.2017,DevroyeLugosi.2001}), we have
\begin{equation}
\label{equation.Markov inequality}
\mathsf{Pr}(\tau>t) \leqslant \frac{E[\tau]}{t+1},
\end{equation}
where $E[\tau]$ is the expected value for $\tau$, the first time the walk hits the ideal. We have
\begin{equation}
\label{equation.Etau}
E[\tau] = \sum_{a=1}^\infty \mathsf{Pr}(\tau \geqslant a).
\end{equation}
\subsection{Chernoff's inequality}
\label{section.chernoff}
Chernoff's inequality uses the moment generating function combined with Markov's
inequality~\eqref{equation.Markov inequality} to give an upper bound on $\mathsf{Pr}(\tau\geqslant t)$. More precisely,
\[
\mathsf{Pr}(\tau\geqslant t) = \mathsf{Pr}(e^{s\tau} \geqslant e^{st}) \qquad \text{for $s>0$.}
\]
Hence by Markov's inequality~\eqref{equation.Markov inequality}
\[
\mathsf{Pr}(\tau \geqslant t) \leqslant \frac{E[e^{s\tau}]}{e^{st}}
\]
and since this is true for all $s>0$
\[
\mathsf{Pr}(\tau \geqslant t) \leqslant \min_{s>0}\left\{ \frac{E[e^{s\tau}]}{e^{st}} \right\}.
\]
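As a quick numerical illustration (our own, not part of the theory), the following Python sketch compares the exact tail,
the Markov bound and the Chernoff bound for a $\tau$ whose law is that of the single loop graph in
Example~\ref{example.single loop} below, where $\mathsf{Pr}(\tau\geqslant t)=p^{t-2}$.
\begin{verbatim}
import numpy as np

# Illustration with placeholder values: tau - 2 is geometric with parameter 1 - p,
# so Pr(tau >= t) = p**(t - 2) for t >= 2 and E[tau] = 1 + 1/(1 - p).
p, t = 0.5, 12
E_tau = 1 + 1/(1 - p)
markov = E_tau / t                                    # Pr(tau >= t) <= E[tau]/t
s = np.linspace(1e-3, -np.log(p) - 1e-3, 1000)        # need p*e^s < 1
mgf = np.exp(2*s) * (1 - p) / (1 - p*np.exp(s))       # E[e^{s tau}]
chernoff = np.min(mgf * np.exp(-s*t))                 # minimize over s > 0
exact = p**(t - 2)
print(exact, chernoff, markov)                        # exact <= chernoff <= markov
\end{verbatim}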
\subsection{Rational expressions for stationary distributions}
\label{section.rational}
Let $\mathcal{M}(S,A)$ be the Markov chain associated to the finite semigroup $S$ with generators in $A$.
Assume that its minimal ideal $K(S)$ is left zero, so that $K(S)$ can be taken as the state space $\Omega$ of the
Markov chain. Denote by $\mathcal{S}(S,A)$ the semaphore code associated to $K(S)$ (see
Section~\ref{section.semaphore}). For a word $s \in A^+$, we denote by $[s]_S$ the image of the word in the alphabet
$A$ in $S$. The following theorem is stated in~\cite[Corollaries 2.23 \& 2.28]{RhodesSchilling.2019}.
\begin{theorem} \cite{RhodesSchilling.2019}
\label{theorem.stationary}
If $K(S)$ is left zero, the stationary distribution of the Markov chain $\mathcal{M}(S,A)$ labeled by $w \in K(S)$
is given by
\begin{equation}
\label{equation.Psiw}
\Psi_w(x_1,\ldots,x_n) = \sum_{\stackrel{s \in \mathcal{S}(S,A)}{[s]_S=w}}
\; \prod_{a\in s} x_a.
\end{equation}
\end{theorem}
In~\cite{RhodesSchilling.2019,RhodesSchilling.2019a}, we developed a strategy using loop graphs to compute
the expressions in Theorem~\ref{theorem.stationary} as rational functions in the probabilities $x_a$ for $a\in A$.
This is done in several steps:
\begin{enumerate}
\item We used the McCammond and Karnofsky--Rhodes expansion $\mathsf{Mc}\circ \mathsf{KR}(S,A)$ of the right
Cayley graph $\mathsf{RCay}(S,A)$ of the semigroup $S$ with generators $A$. In this paper we do not require the details
of these definitions, except that the right Cayley graph as well as its expansions are rooted graphs with root $\mathbbm{1}$.
The Karnofsky--Rhodes expansion is another right Cayley graph, whereas the McCammond expansion is only an
automaton. For the precise definition of the Karnofsky--Rhodes expansion, we refer the reader
to~\cite[Definition 4.15]{MRS.2011}, \cite[Section 3.4]{MSS.2015}, \cite[Section 2.4]{RhodesSchilling.2019a},
and~\cite[Section 2]{RSS.2020}. For the definition of the McCammond expansion, we refer the reader
to~\cite[Section 2.7]{MRS.2011} and~\cite[Section 2.5]{RhodesSchilling.2019a}. The Markov chain $\mathcal{M}(S,A)$
is a \defn{lumping}~\cite{LevinPeres.2017} of the Markov chains associated to the expansions.
\item The stationary distributions of the Markov chains associated to the expansions can be expressed using
\defn{loop graphs} $G$, see~\cite{RhodesSchilling.2019a}. A loop graph is a straight line path from $\mathbbm{1}$ to an
endpoint $s$ with directed loops of any finite length attached recursively to any vertex (besides $\mathbbm{1}$ and $s$).
In this way~\cite[Theorem 1.4]{RhodesSchilling.2019a}
\begin{equation}
\label{equation.lumping}
\Psi_w(x_1,\ldots,x_n) = \sum_G \Psi_G(x_1,\ldots,x_n),
\end{equation}
where the sum is over certain loop graphs $G$ with end point $s$ such that $[s]_S=w$.
Here~\cite[Definition 1.3]{RhodesSchilling.2019a}
\begin{equation}
\label{equation.PsiG}
\Psi_G(x_1,\ldots,x_n) = \sum_p \; \prod_{a \in p} x_a,
\end{equation}
where the sum is over all paths $p$ in $G$ starting at $\mathbbm{1}$ and ending in $s$.
\item There is a \defn{Kleene expression} for the set of all paths from $\mathbbm{1}$ to $s$ in $G$. The Kleene expression
immediately yields a rational expression for the stationary distribution $\Psi_G(x_1,\ldots,x_n)$ and hence
$\Psi_w(x_1,\ldots,x_n)$ by~\eqref{equation.lumping}.
\end{enumerate}
\begin{remark}
\label{remark.series expansion}
An important property of the above construction is that in the series expansion of the rational expression for
$\Psi_w(x_1,\ldots,x_n)$ (resp. $\Psi_G(x_1,\ldots,x_n)$) the total degree of each term corresponds to the length of
the underlying semaphore code word in~\eqref{equation.Psiw} (resp. the underlying path in $G$ in~\eqref{equation.PsiG}).
\end{remark}
\subsection{Mixing time via truncation of Kleene expressions}
\label{section.mixing truncation}
As stated in Theorem~\ref{theorem.ASST}, $\mathsf{Pr}(\tau \geqslant t)$ provides an upper bound on the mixing time
in the setting that $K(S)$ is left zero. As discussed in Section~\ref{section.semaphore}, $\tau$ can be interpreted
as the random variable given by the length of the semaphore code words or paths in the loop graph. To compute
$\mathsf{Pr}(\tau \geqslant t)$, one needs to compute the sum of probabilities of all paths of length weakly greater than $t$.
By Remark~\ref{remark.series expansion}, the length of the paths is given by the total degree in the probability variables
$x_1,\ldots,x_n$ for the generators $a_1,\ldots,a_n$ of the semigroup $S$.
Hence we obtain $\mathsf{Pr}(\tau\geqslant t)$ by truncating the rational function for the stationary distribution to total degree
weakly bigger than $t$.
Let $\Psi^{\geqslant t}_w(x_1,\ldots,x_n)$ be the truncation of the formal power series associated to the rational function
$\Psi_w(x_1,\ldots,x_n)$ to terms of degree weakly bigger than $t$ and let $\Psi_w^{<t}(x_1,\ldots,x_n)$ be the truncation
of the formal power series associated to the rational function $\Psi_w(x_1,\ldots,x_n)$ to terms of degree strictly smaller
than $t$. Note that
\[
\Psi_w(x_1,\ldots,x_n) = \Psi_w^{<t}(x_1,\ldots,x_n) + \Psi^{\geqslant t}_w(x_1,\ldots,x_n).
\]
\begin{theorem}
\label{theorem.main}
Suppose the Markov chain satisfies the conditions of Theorem~\ref{theorem.ASST}.
If $\Psi_w(x_1,\ldots,x_n)$ is represented by a rational function such that each term of degree $\ell$ in its formal power
series expansion corresponds to a semaphore code word $s$ of length $\ell$ with $[s]_S=w$, we have
\[
\mathsf{Pr}_w(\tau \geqslant t) = \frac{\Psi^{\geqslant t}_w(x_1,\ldots,x_n)}{\Psi_w(x_1,\ldots,x_n)}
= 1 - \frac{\Psi_w^{<t}(x_1,\ldots,x_n)}{\Psi_w(x_1,\ldots,x_n)}.
\]
\end{theorem}
For each $w\in K(S)$, we can also give an explicit formula for the expected number of steps $E_w[\tau]$ it takes to reach
the endpoint of $w$ using the Cauchy--Euler operator.
\begin{theorem}
\label{theorem.main1}
Suppose the Markov chain satisfies the conditions of Theorem~\ref{theorem.ASST}.
If $\Psi_w(x_1,\ldots,x_n)$ is represented by a rational function such that each term of degree $\ell$ in its formal power
series expansion corresponds to a semaphore code word $s$ of length $\ell$ with $[s]_S=w$, we have
\[
E_w[\tau] = \left( \sum_{i=1}^n x_i \frac{\partial}{\partial x_i} \right) \ln \Psi_w(x_1,\ldots,x_n).
\]
\end{theorem}
\begin{remark}
Note that when applying Theorems~\ref{theorem.main} and~\ref{theorem.main1}, the rational expression for
$\Psi_w(x_1,\ldots,x_n)$ must not be simplified using the relation $x_1+\cdots+x_n=1$, since this would destroy the
grading by total degree.
\end{remark}
\begin{proof}[Proof of Theorem~\ref{theorem.main1}]
Let the formal power series expansion of the rational function $\Psi_w(x_1,\ldots,x_n)$ be as follows
\[
\Psi_w(x_1,\ldots,x_n) = \sum_{m_1,\ldots,m_n\geqslant 0} c_{m_1,\ldots,m_n} x_1^{m_1}\cdots x_n^{m_n}.
\]
Then formally
\begin{multline*}
\left( \sum_{i=1}^n x_i \frac{\partial}{\partial x_i} \right) \ln \Psi_w(x_1,\ldots,x_n)
= \frac{\left( \sum_{i=1}^n x_i \frac{\partial}{\partial x_i} \right) \Psi_w(x_1,\ldots,x_n)}{\Psi_w(x_1,\ldots,x_n)}\\
= \frac{\sum_{m_1,\ldots,m_n\geqslant 0} c_{m_1,\ldots,m_n}(m_1+\cdots+m_n) x_1^{m_1} \cdots x_n^{m_n}}
{\sum_{m_1,\ldots,m_n\geqslant 0} c_{m_1,\ldots,m_n} x_1^{m_1} \cdots x_n^{m_n}}.
\end{multline*}
Note that a term $x_1^{m_1} \cdots x_n^{m_n}$ of degree $m_1+\cdots+m_n$ corresponds to a semaphore code word
of length $m_1+\cdots+m_n$. Hence $c_{m_1,\ldots,m_n} (m_1+\cdots+m_n)
x_1^{m_1}\cdots x_n^{m_n}/ \Psi_w(x_1,\ldots,x_n)$
is the length of the path times the probability of having taken a path with $m_i$ steps along the $i$-th generator.
The sum over all such terms is precisely $E_w[\tau]$.
\end{proof}
\begin{remark}
Let $\mathsf{Pr}_G(\tau \geqslant t)$ be the probability that the length of the paths in the loop graph $G$ from
$\mathbbm{1}$ to the end point $s$ is weakly bigger than $t$. Then by an argument analogous to the one above, we also have
\begin{equation}
\label{equation.PrG}
\mathsf{Pr}_G(\tau \geqslant t) = \frac{\Psi^{\geqslant t}_G(x_1,\ldots,x_n)}{\Psi_G(x_1,\ldots,x_n)}
= 1 - \frac{\Psi_G^{<t}(x_1,\ldots,x_n)}{\Psi_G(x_1,\ldots,x_n)}
\end{equation}
and
\begin{equation}
\label{equation.EG}
E_G[\tau] = \left( \sum_{i=1}^n x_i \frac{\partial}{\partial x_i} \right) \ln \Psi_G(x_1,\ldots,x_n).
\end{equation}
\end{remark}
\begin{example}[Single loop]
\label{example.single loop}
Suppose the path in the loop graph $G$ from $\mathbbm{1}$ to the ideal is a straight line with a single loop
\begin{center}
\begin{tikzpicture}[auto]
\node (A) at (0, 0) {$\mathbbm{1}$};
\node (B) at (1.5,0) {$r$};
\node(C) at (3,0) {$s$};
\draw[edge,thick] (A) -- (B);
\draw[edge,thick] (B) -- (C);
\path (B) edge [thick,loop] (B);
\end{tikzpicture}
\end{center}
where the loop is taken with probability $p$ and the step to the ideal $r\to s$ with probability $1-p$. Then the probability that
one starts at $\mathbbm{1}$ and hits the element $s$ in the ideal in precisely $t$ steps is
\[
\mathsf{Pr}_G(\tau=t) = (1-p)p^{t-2} \qquad \text{for $t\geqslant 2$.}
\]
Hence
\begin{equation}
\label{equation.Pr example}
\mathsf{Pr}_G(\tau \geqslant t) = \sum_{j=t}^\infty \mathsf{Pr}_G(\tau=j)
= (1-p) p^{t-2} \sum_{j=0}^\infty p^j
= (1-p) p^{t-2} \frac{1}{1-p} = p^{t-2} \quad \text{for $t\geqslant 2$.}
\end{equation}
The expectation value is
\begin{equation}
\label{equation.E example}
E_G[\tau] = \sum_{t=1}^\infty \mathsf{Pr}_G(\tau\geqslant t) = 1+ \sum_{t=2}^\infty p^{t-2} = 1+ \frac{1}{1-p}.
\end{equation}
Indeed by Markov's inequality
\[
tp^{t-2} \leqslant 1+\frac{1}{1-p} \quad \text{for all $t\geqslant 2$.}
\]
Now let us use~\eqref{equation.PrG} to compute $\mathsf{Pr}_G(\tau\geqslant t)$. Suppose that the
step $\mathbbm{1}\to r$ is labelled by the generator $1$, the loop from $r$ to $r$ is labelled $2$, and the step
$r\to s$ is labelled $3$. Then the Kleene expression for the paths from $\mathbbm{1}$ to $s$ is
\[
12^\star 3.
\]
Let the probability for generator $i$ be $x_i$ for $i\in \{1,2,3\}$. Then by~\cite{RhodesSchilling.2019}
\[
\Psi_G(x_1,x_2,x_3) = \frac{x_1x_3}{1-x_2} = x_1 x_3 \sum_{j=0}^\infty x_2^j.
\]
By~\eqref{equation.PrG}, we obtain $\mathsf{Pr}_G(\tau\geqslant t)=1$ for $t=0,1$ and
\[
\mathsf{Pr}_G(\tau\geqslant t) = \frac{x_1 x_3 \sum_{j=t-2}^\infty x_2^j}{x_1 x_3 \sum_{j=0}^\infty x_2^j}
= x_2^{t-2} \qquad \text{for $t\geqslant 2$.}
\]
This agrees with~\eqref{equation.Pr example}, where $x_2=p$.
Next let us use~\eqref{equation.EG} to compute $E_G[\tau]$
\[
E_G[\tau] = \left(x_1\frac{\partial}{\partial x_1} + x_2\frac{\partial}{\partial x_2} + x_3\frac{\partial}{\partial x_3}\right)
\ln \Psi_G(x_1,x_2,x_3)
= 2 + \frac{x_2}{1-x_2} = 1+\frac{1}{1-x_2},
\]
which agrees with~\eqref{equation.E example} when $x_2=p$.
\end{example}
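The computations in Example~\ref{example.single loop} can also be carried out symbolically. The following Python/SymPy
sketch (our own illustration) checks the Cauchy--Euler formula and the truncation formula on the rational function
$\Psi_G(x_1,x_2,x_3)=x_1x_3/(1-x_2)$; the auxiliary variable $q$ keeps track of the total degree.
\begin{verbatim}
import sympy as sp

x1, x2, x3, q = sp.symbols('x1 x2 x3 q', positive=True)
Psi = x1*x3/(1 - x2)                 # Psi_G for the Kleene expression 1 2* 3

# Cauchy--Euler operator applied to ln(Psi) gives E_G[tau].
E_tau = sum(x*sp.diff(sp.log(Psi), x) for x in (x1, x2, x3))
print(sp.simplify(E_tau))            # equals 1 + 1/(1 - x2)

# Truncation of the series expansion of Psi gives Pr_G(tau >= t).
t = 5
graded = Psi.subs({x1: q*x1, x2: q*x2, x3: q*x3})
Psi_below_t = sp.series(graded, q, 0, t).removeO().subs(q, 1)   # degree < t part
Pr_ge_t = sp.simplify(1 - Psi_below_t/Psi)
print(sp.simplify(Pr_ge_t - x2**(t - 2)))                       # 0, as expected
\end{verbatim}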
\subsection{Shannon entropy and exponential bounds}
\label{section.shannon}
It turns out that the mixing time has close ties to information theory and in particular Shannon's entropy.
See~\cite{Rioul.2018} and~\cite[Chapter 3]{Gray.2011} as references on information theory.
Let $X$ be a random variable with probability distribution $p(x)$.
The amount of information of an elementary event $x$ is $\log \frac{1}{p(x)}$.
Therefore, the average amount of information about $X$ is given by the expected value,
known as \defn{Shannon's entropy}
\begin{equation}
H(X) = E[\log \frac{1}{p}] = \sum_{x \in X} p(x) \log \frac{1}{p(x)}.
\end{equation}
Shannon's entropy features in the \defn{asymptotic equipartition property} or \defn{entropy ergodic theorem},
which can be stated as follows~\cite{Shannon.1948} (see also~\cite{Rioul.2018}). Let ${\bf x}=(x_1,\ldots,x_t)$ be a long
sequence of independent and identically distributed outcomes with probability distribution $p(x)$.
By the independence, $p({\bf x})$ is given by the product
\[
p({\bf x}) = p(x_1) p(x_2) \cdots p(x_t) = \prod_{x \in X} p(x)^{t(x)},
\]
where $t(x)$ is the number of $x_i$ equal to $x$. Since $t$ is large, by the law of large numbers
\[
\frac{t(x)}{t} \approx p(x),
\]
which implies
\begin{equation}
p({\bf x}) \approx \Bigl( \prod_{x \in X} p(x)^{p(x)} \Bigr)^t = e^{-t H(X)}.
\end{equation}
In other words, for very large (but fixed) $t$, the value of the probability of a given ``typical'' sequence
${\bf x} = (x_1, x_2, \ldots, x_t)$ is likely to be close to the constant $e^{-t H(X)}$.
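The approximation can be observed numerically; the following Python sketch (our own illustration, with a made-up
distribution) compares $\frac{1}{t}\log p({\bf x})$ for a sampled sequence with $-H(X)$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.5, 0.3, 0.2])          # made-up distribution of X on three symbols
H = -(p * np.log(p)).sum()             # Shannon entropy (in nats)

t = 10_000
x = rng.choice(len(p), size=t, p=p)    # i.i.d. sample (x_1, ..., x_t)
log_prob = np.log(p[x]).sum()          # log p(x)
print(log_prob / t, -H)                # both are close to -H(X) for large t
\end{verbatim}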
The precise formulation of the asymptotic equipartition property is the \defn{Shannon--McMillan--Breiman
Theorem}~\cite{Shannon.1948, McMillam.1953, Breiman.1957} (see
also~\cite[Chapter 4]{Gray.2011}). Applied to $P^{\star t}(S \setminus K(S))$ in Theorem~\ref{theorem.ASST},
this gives an exponential bound on $\| T^t \nu - \Psi \|$. In probability, this is also known as
the \defn{Convergence Theorem} (see~\cite[Theorem 4.9]{LevinPeres.2017}).
\begin{theorem}[Convergence Theorem]
Suppose $T$ is the transition matrix of an ergodic Markov chain with stationary distribution $\Psi$.
Then there exist constants $\alpha \in (0,1)$ and $C>0$ such that
\[
\| T^t \nu - \Psi \| \leqslant C \alpha^t.
\]
\end{theorem}
A concept related to entropy is the \defn{entropy rate}. It is defined as the rate of information innovation
\[
H' = \lim_{t\to \infty} H(X_t \mid X_{t-1},\ldots, X_1).
\]
When $X_i$ is stationary, the entropy rate is equal to the average entropy per symbol
\[
\overline{H} = \lim_{t\to \infty} \frac{H(X_1,\ldots,X_t)}{t},
\]
that is $H' = \overline{H}$.
Since an ergodic Markov chain has a unique stationary distribution $\Psi$, the entropy rate is independent of the initial
distribution. If the Markov chain is defined on the finite (or countable) state space $\Omega$, then
\[
H' = - \sum_{s,s'\in \Omega} T_{s,s'} \Psi_{s'} \log(T_{s,s'}).
\]
A simple consequence of this definition is that indeed a stochastic process with independent and identically distributed
random variables has an entropy rate that is the same as the entropy of any individual member of the process.
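For the two-state chain of the introduction, the entropy rate can be evaluated directly from this formula. The following
Python sketch is our own illustration, with the placeholder values $x_1=0.5$, $x_2=0.3$, $x_3=0.2$.
\begin{verbatim}
import numpy as np

# Column-stochastic transition matrix of the two-state chain from the introduction,
# with placeholder probabilities x_1, x_2, x_3 = 0.5, 0.3, 0.2.
T = np.array([[0.5, 0.7],
              [0.5, 0.3]])
evals, evecs = np.linalg.eig(T)
Psi = np.real(evecs[:, np.argmax(np.real(evals))])
Psi = Psi / Psi.sum()                  # stationary distribution, T Psi = Psi
H_rate = -sum(T[s, t] * Psi[t] * np.log(T[s, t])
              for s in range(2) for t in range(2) if T[s, t] > 0)
print(H_rate)                          # entropy rate in nats
\end{verbatim}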
\subsection{Mixing time via decreasing statistics}
\label{section.decreasing statistics}
In~\cite{ASST.2015,ASST.2015a}, a technique was developed for an upper bound on the mixing time
using a decreasing statistic on the semigroup underlying the Markov chain.
\begin{lemma} \cite[Lemma 3.6]{ASST.2015}
\label{lemma.statisticbound}
Let $\mathcal M$ be an irreducible Markov chain associated to the semigroup $S$ and probability distribution
$0\leqslant p(s) \leqslant 1$ for $s\in S$. We assume that $\{s \in S \mid p(s)>0\}$ generates $S$.
Let $\Psi$ be the stationary distribution and $f\colon S\to \mathbb N$ be a function,
called a \defn{statistic}, such that:
\begin{enumerate}
\item $f(ss')\leqslant f(s)$ for all $s,s'\in S$;
\item if $f(s)>0$, then there exists $s' \in S$ with $p(s')>0$ such that $f(ss')<f(s)$;
\item $f(s)=0$ if and only if $s \in K(S)$.
\end{enumerate}
Then if $p=\min\{p(s) \mid s \in S, p(s)>0\}$ and $L=f(\mathbbm{1})$, we have that
\[
\|T^t\nu -\Psi\|_{TV} \leqslant \sum_{i=0}^{L-1} {t\choose i}p^i(1-p)^{t-i}
\leqslant \exp\left(-\frac{(tp-(L-1))^2}{2tp}\right)\,,
\]
for any probability distribution $\nu$ on $S$, where the last inequality holds as long as $t\geqslant (L-1)/p$.
\end{lemma}
The bound
\[
\sum_{i=0}^{L-1} {t\choose i}p^i(1-p)^{t-i} \leqslant \exp\left(-\frac{(tp-(L-1))^2}{2tp}\right)
\]
works well for $p$ close to $\frac{1}{2}$. A better bound for $0<\frac{L-1}{t}<p$ is given by~\cite{AG.1989}
\[
\sum_{i=0}^{L-1} {t\choose i}p^i(1-p)^{t-i} \leqslant \exp\left( -t\; D\Bigl(\frac{L-1}{t} \;\Big\|\; p\Bigr) \right),
\]
where
\[
D(a\; \| \;p) = a \log \frac{a}{p} + (1-a) \log\frac{1-a}{1-p}.
\]
This can be rewritten as
\[
\sum_{i=0}^{L-1} {t\choose i}p^i(1-p)^{t-i} \leqslant \left( \frac{p}{a} \right)^{ta} \left( \frac{1-p}{1-a} \right)^{t(1-a)},
\]
where $a=\frac{L-1}{t}$.
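The following Python sketch (our own illustration with made-up values of $p$, $L$ and $t$) compares the exact binomial
tail with the two exponential bounds.
\begin{verbatim}
import math

p, L, t = 0.3, 5, 60                      # made-up values with 0 < (L-1)/t < p
a = (L - 1) / t
tail = sum(math.comb(t, i) * p**i * (1 - p)**(t - i) for i in range(L))
hoeffding = math.exp(-(t*p - (L - 1))**2 / (2*t*p))
entropy_bound = (p/a)**(t*a) * ((1 - p)/(1 - a))**(t*(1 - a))
print(tail, entropy_bound, hoeffding)     # here tail <= entropy_bound <= hoeffding
\end{verbatim}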
\subsection{Syntactic at $0$}
\label{section.syntactic}
Syntactic monoids were introduced in mathematics and computer science as the smallest monoid that
recognizes a given formal language, see for example~\cite{Straubing.1994}. Here we develop this idea in the context
of the mixing time.
Recall that for a semigroup $S$, we denote by $S^{\mathbbm{1}}$ the semigroup $S$ with a new identity
$\mathbbm{1}$ adjoined (even if an identity already exists).
\begin{definition}
Let $S$ be a semigroup with zero $0$. Define the congruence $\equiv$ on $S$ by setting, for $s_1,s_2 \in S$,
\begin{equation}
\label{equation.congruence}
s_1 \equiv s_2 \qquad \text{if and only if} \qquad
\Bigl( \text{for any $x, y \in S^{\mathbbm{1}}$} \quad x s_1 y = 0 \Longleftrightarrow x s_2 y =0 \Bigr).
\end{equation}
Then $S$ is called \defn{syntactic at zero} if the congruence~\eqref{equation.congruence} has singleton classes,
that is,
\[
S/\equiv \quad \cong \quad S.
\]
We call $S/\equiv$ the \defn{syntactic image} of $S$, which is syntactic at zero. In other words, the syntactic
semigroup associated to $S$ is the smallest image under the homomorphism $f \colon S \to S/\equiv$ such that
$f^{-1}(0)=0$.
\end{definition}
\begin{example}
\label{example.min}
Consider the semigroup $S=\{0,1,2,\ldots,n\}$, where multiplication is taking the minimum. The $\equiv$-classes
are given by $\{1,2,\ldots,n\}$ and $\{0\}$. Hence, the syntactic semigroup $S/\equiv$ associated to $S$ is
isomorphic to $\{0,1\}$ with multiplication being minimum.
\end{example}
\begin{example}
\label{example.rees}
The \defn{Rees matrix semigroup} $(S;I,I';P)$ is indexed by a semigroup $S$, two non-empty sets $I$ and $I'$, and
a matrix $P$ indexed by $I$ and $I'$ with entries $p_{i',i}\in S$ (see for example~\cite[Section 3.4]{RhodesSchilling.2019}).
It is the set $I\times S\times I'$ with multiplication
\[
(i,s,i')(j,t,j')=(i,sp_{i',j} t,j').
\]
The \defn{Rees matrix semigroup with zero} $(S;I,I';P)^\square$ is the set $I\times S\times I' \cup \{\square\}$,
where the entries in $P$ are in $S\cup \{\square\}$, with multiplication
\[
(i,s,i')(j,t,j')=\begin{cases}
(i,sp_{i',j} t,j') & \text{if $p_{i',j} \neq \square$,}\\
\square & \text{otherwise.}
\end{cases}
\]
Then the syntactic image of $(S;I,I';P)^\square$ is isomorphic to $(\{1\};\tilde{I},\tilde{I}';\tilde{P})$, where
$\tilde{P}$ is a matrix of $0$ and $1$ without equal rows or columns.
\end{example}
It turns out that we can replace a semigroup with zero with its syntactic image without changing the upper bound on
the mixing time of the underlying Markov chain, but the stationary distribution can change.
\begin{theorem}
\label{theorem.syntactic}
Let $(S,A)$ be a finite semigroup $S$ with zero and generators $A$, whose minimal ideal $K(S)$ is a left zero semigroup.
Then the Markov chains associated to $(S,A)$ and $(S/\equiv,f(A))$ have the same upper bound $\mathsf{Pr}(\tau>t)$ on the mixing time.
\end{theorem}
\begin{remark}\mbox{}
\label{remark.syntactic}
\begin{enumerate}
\item
If the probability associated to the generator $a\in A$ is $x_a$, then the probability associated to the generator
$b \in f(A)$ is $\sum_{a \in f^{-1}(b)} x_a$.
\item
Note that the stationary distributions of the Markov chains associated to $(S,A)$ and \newline $(S/\equiv,f(A))$ may differ.
\end{enumerate}
\end{remark}
\begin{proof}[Proof of Theorem~\ref{theorem.syntactic}]
Let $\mathcal{S}$ be the semaphore code corresponding to the ideal $K(S)$. Then for a codeword $s\in \mathcal{S}$,
$f(s)$ is a codeword in the semaphore code corresponding to $K(S/\equiv)$. If the probabilities match up as in
Remark~\ref{remark.syntactic}, the random variable $\tau$ matches and hence the upper bound on the mixing time determined from
$\mathsf{Pr}(\tau>t)$ matches.
\end{proof}
Theorem~\ref{theorem.syntactic} is powerful in the sense that the upper bound on the mixing time for Markov chains with potentially
complicated stationary distributions can be deduced from those for small semigroups which are syntactic at zero.
\begin{example}
\label{example.semaphore}
Let us continue with Example~\ref{example.min}. The semigroup $(S,A)$ with $S=\{0,1\}$, $A=\{a,b\}$ and $a=0, b=1$ is
syntactic. The minimal ideal $K(S)$ is $A^\star a A^\star$ and the semaphore code is $\mathcal{S}=b^\star a
=\{b^ja \mid j\geqslant 0\}$. The left action on $\mathcal{S}$ is given by
\begin{align*}
a \cdot b^j a &= a && \text{(reset to $a$),}\\
b \cdot b^j a &= b^{j+1}a && \text{(free),}
\end{align*}
with stationary distribution
\[
\Psi_{b^ja} = x_b^j x_a \qquad \text{for $j\geqslant 0$.}
\]
Note that
\[
E[\tau] = \sum_{j=0}^\infty (j+1) x_b^j x_a
= x_a \frac{\partial}{\partial x_b} \left( \sum_{j=0}^\infty x_b^{j+1} \right)
= x_a \frac{\partial}{\partial x_b} \frac{x_b}{1-x_b}
= \frac{x_a}{(1-x_b)^2} = \frac{1}{x_a}.
\]
In contrast, let us compute
\[
\mathsf{Pr}(\tau>t) = \sum_{j=t}^\infty x_b^j x_a
= \frac{x_a x_b^{t}}{1-x_b} = x_b^{t}.
\]
Indeed $\mathsf{Pr}(\tau>t) \leqslant \frac{E[\tau]}{t+1}$ as in Example~\ref{example.single loop}.
\end{example}
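A quick Monte Carlo check of these formulas (our own illustration, with a placeholder value of $x_a$) is the following
Python sketch.
\begin{verbatim}
import random

xa, trials = 0.3, 200_000
rng = random.Random(0)
samples = []
for _ in range(trials):
    tau = 1
    while rng.random() >= xa:          # keep drawing the letter b until the first a
        tau += 1
    samples.append(tau)
print(sum(samples) / trials, 1 / xa)                        # E[tau] = 1/x_a
t = 5
print(sum(s > t for s in samples) / trials, (1 - xa)**t)    # Pr(tau > t) = x_b^t
\end{verbatim}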
\begin{example}
We can amend Example~\ref{example.semaphore} by imposing the relation $b^w = b^{w+1}$, which makes the
semigroup generated by $a$ and $b$ finite and aperiodic. Using the methods in~\cite{RhodesSchilling.2019} (or comparing the in-flow with the
out-flow), the stationary distribution can be derived to be
\[
\begin{split}
\Psi_{b^ja} &= x_b^j x_a \qquad \text{for $0\leqslant j < w$,}\\
\Psi_{b^wa} &= \frac{x_a x_b^w}{1-x_b}.
\end{split}
\]
The associated syntactic semigroup is $(\{0,1\},A)$, which means by Theorem~\ref{theorem.syntactic} that the upper bound on the mixing
time is unchanged, even though the stationary distribution is different.
\end{example}
\begin{example}
Let $(S,A)$ be an arbitrary finite semigroup with generators $A=\{a_1,\ldots,a_k\}$ (with or without zero). Let $S^\square$
be the semigroup with a zero $\square$ adjoined. Then
\[
\left( S^\square / \equiv \right) = (\{ \square,1\},A\cup \{ \square \}).
\]
In this setting the stationary distribution can be complicated, however the upper bound on the mixing time is trivial by
Theorem~\ref{theorem.syntactic}
\[
\mathsf{Pr}(\tau>t) = (1-x_\square)^t.
\]
\end{example}
\begin{example}
Consider the Rees matrix semigroup $S=B(2)$ of~\cite[Example 3.3]{RhodesSchilling.2019} with generators $A=\{a,b\}$, where
$a=(1,2)$ and $b=(2,1)$. The minimal ideal $K(S)$ is $A^\star \{aa,bb\} A^\star$ with semaphore code
\[
\mathcal{S} = \{(ab)^\star aa, (ba)^\star bb, b(ab)^\star aa, a(ba)^\star bb\}.
\]
The left action on $\mathcal{S}$ is given by
\begin{align*}
a &\cdot (ab)^j aa = aa && \text{(reset),}\\
a &\cdot (ba)^j bb = a(ba)^j bb && \text{(free)},\\
a &\cdot b(ab)^j aa = (ab)^{j+1} aa && \text{(free)},\\
a &\cdot a(ba)^j bb = aa && \text{(reset),}
\end{align*}
and similarly with $a$ and $b$ interchanged. Note that
\begin{align*}
\mathsf{Pr}(\tau>2k) &= \sum_{j=k}^\infty (x_a^2 + x_b^2 + x_a + x_b) (x_ax_b)^j
= \frac{(x_a x_b)^k(x_a^2 + x_b^2 + 1)}{1-x_a x_b} = 2(x_a x_b)^k,\\
\mathsf{Pr}(\tau>2k+1) &= \sum_{j=k}^\infty (x_a^2 + x_b^2 + x_a^2 x_b + x_b^2 x_a) (x_ax_b)^j
= \frac{(x_a x_b)^k(x_a^2 + x_b^2 + x_a x_b)}{1-x_a x_b} = (x_a x_b)^k,
\end{align*}
which by Theorem~\ref{theorem.ASST} gives an upper bound on the mixing time.
\end{example}
\begin{example}
\label{example.ress aa}
Consider the Rees matrix semigroup (see Example~\ref{example.rees}) with $I=I'=\{1,2\}$, $S=\{0,1\}$,
\[
P = \begin{pmatrix} 1&1\\ 0&1 \end{pmatrix},
\]
and generators $A=\{a,b\}$ with $a=(1,1,2)$ and $b=(2,1,1)$. The minimal ideal is $A^\star aa A^\star$ with
semaphore code $\mathcal{S} = b^\star (abb^\star)^\star aa$. The left action on $\mathcal{S}$ is given by
\[
\begin{split}
a \cdot b^j \left(\prod_{k=1}^\ell abb^{e_k} \right) aa &= \begin{cases} ab^j \left(\prod_{k=1}^\ell abb^{e_k} \right) aa
& \text{if $j>0$ (free),} \\
aa & \text{if $j=0$ (reset),} \end{cases}\\
b \cdot b^j \left(\prod_{k=1}^\ell abb^{e_k} \right) aa &= b^{j+1} \left(\prod_{k=1}^\ell abb^{e_k} \right) aa
\qquad \text{(free).}
\end{split}
\]
In this case, the bound on the mixing time is given by
\[
\mathsf{Pr}(\tau>k) = x_a^2 \sum_{j \geqslant k-1} \sum_{i=0}^{\lfloor \frac{j}{2} \rfloor} \binom{j-i}{i} x_a^i x_b^{j-i}.
\]
\end{example}
\subsection{Ideals and $d$-testable languages}
\label{section.languages}
As we have seen, ideals are important in the study of Markov chains in the context of semigroups. In addition, ideals are
closely related to semaphore codes.
For $j=1,2$, let $(S_j,A)$ be a semigroup with zero, both with the same generating set $A$, and let $I_j$ be the ideal of strings
in $A^+$ that are zero in $(S_j,A)$. Let $\mathcal{S}_j$ for $j=1,2$ be the semaphore code associated to the
ideal $I_j$. Recall that through the left action of $A^+$ on $\mathcal{S}_j$ we have two Markov chains.
\begin{remark}[\defn{Ideal principle}]
\label{remark.ideal containment}
If $I_1 \subseteq I_2$, the upper bound on the mixing time of the Markov chain associated to $\mathcal{S}_2$ is smaller or equal to the
upper bound on the mixing time of the Markov chain associated to $\mathcal{S}_1$.
\end{remark}
Remark~\ref{remark.ideal containment} is true since by~\cite[Corollary 3.5(3)]{ASST.2015} the mixing time is bounded
above by $P^{\star t}(S \setminus K(S))$ (see Theorem~\ref{theorem.ASST}). If $I_1\subseteq I_2$, we hence have
\[
P^{\star t}(S_2 \setminus I_2) \leqslant P^{\star t}(S_1 \setminus I_1),
\]
since $I_j$ consists of all words in $A^+$ which are zero in $S_j$.
By Remark~\ref{remark.ideal containment} we want to study Markov chains with the smallest ideals as they have the
worst mixing time. To this end, we will study the \defn{complete lattice of ideals} of $A^\star$. All ideals (including
$\emptyset$) of $A^\star$ form a complete lattice under union and intersection.
\begin{lemma}
\label{lemma.descending chain}
Every nonempty ideal $I$ has a \defn{descending chain}
\[
I \supset A^\star t_1 A^\star \supset A^\star t_2 A^\star \supset \cdots \supset A^\star t_k A^\star \supset
\cdots.
\]
\end{lemma}
\begin{proof}
Since $I\neq \emptyset$, there exists an element $t_1\in I$. The unique smallest length element in $A^\star t_1 A^\star$
is of length $|t_1|$. Choose $t_2 \in A^\star t_1 A^\star$ with $|t_2|>|t_1|$. Then $A^\star t_1 A^\star \supset
A^\star t_2 A^\star$ and repeat.
\end{proof}
Some ideals $I$ have an infinite \defn{ascending chain}
\[
I \subset I_1 \subset I_2 \subset \cdots
\]
and some do not. Let $A=\{a,b\}$. Then $A^\star \setminus \{a\}$, for example, does not have an infinite ascending chain.
On the other hand (compare also with Example~\ref{example.ress aa})
\[
A^\star aa A^\star \subset A^\star aa A^\star \cup A^\star aba A^\star \subset \cdots \subset
\bigcup_{j=0}^k A^\star ab^j a A^\star \subset \cdots
\]
does.
Every ideal $I \subseteq A^+$ has a unique set of minimal generators, namely all $t=a_1 a_2 \cdots a_{\ell-1} a_\ell \in I$
such that $a_1\cdots a_{\ell-1} \not \in I$ and $a_2 \cdots a_\ell \not \in I$. Hence by Lemma~\ref{lemma.descending chain},
the smallest ideals are of the form $A^\star t A^\star$, where $|t|$ is big. Since by Remark~\ref{remark.ideal containment}
smaller ideals have worse upper bounds on the mixing times, we would like to analyze ideals of the form $A^\star t A^\star$, where $|t|$ is large.
This is related to $d$-testable languages, which are ideals of the form $\bigcup_{i=1}^n A^\star t_i A^\star$ for finitely
many words $t_i$, see~\cite{Zalcstein.1972}.
Let $t\in A^+$. The minimal automaton $\mathsf{Test}(t)$ accepting the language $A^\star t A^\star$ for
$t=a_1a_2\ldots a_\ell$ is given as follows. There are $\ell+1$ states: $\mathbbm{1}, a_1, a_1 a_2,\ldots, a_1 \ldots
a_{\ell-1}, a_1\ldots a_\ell \equiv 0$.
We have $q \stackrel{a}{\longrightarrow} qa$ if both $q$ and $qa$ are prefixes of $t$ and otherwise
$q \stackrel{a}{\longrightarrow} \mathbbm{1}$.
Using~\cite[Definition 3.5]{RhodesSchilling.2019a}, $\mathsf{Test}(t)$ can be transformed into a loop graph with loops
labeled by words $w\in A^+$ such that $|w| \leqslant \ell$, $w=w_1\cdots w_k$ is not a prefix of $t$, but $w_1\cdots
w_{k-1}$ is a prefix of $t$. Let us denote the set of all such words by $W_t$.
Hence the Kleene expression for the paths in $\mathsf{Test}(t)$ is $\left(\cup_{w \in W_t} \{w\}\right)^\star t$, so that
the stationary distribution is
\[
\Psi_t = \frac{x_{a_1} \cdots x_{a_\ell}}{1-\sum_{w\in W_t} \prod_{a\in w} x_a}.
\]
By Theorem~\ref{theorem.main1} we obtain
\[
E_t[\tau] = \ell + \frac{\sum_{w\in W_t} |w| \prod_{a\in w} x_a}{1-\sum_{w\in W_t} \prod_{a\in w} x_a},
\]
which gives an upper bound on the mixing time using the Markov inequality~\eqref{equation.Markov inequality}.
Theorem~\ref{theorem.main} can also be used to obtain an upper bound on the mixing time using the series expansion of
$\Psi_t$.
\begin{remark}
The loop graphs in~\cite{RhodesSchilling.2019a} are not allowed to have loops at vertex $\mathbbm{1}$. Here we do
allow loops at $\mathbbm{1}$. To remedy the situation, one could rename $\mathbbm{1}$ by $1$ and have an edge
with probability 1 from $\mathbbm{1}$ to $1$.
\end{remark}
\begin{example}
Let $A=\{a,b\}$ and $t=aba$. Then $\mathsf{Test}(t)$ can be depicted by
\begin{center}
\begin{tikzpicture}[auto]
\node (I) at (0, 0) {$\mathbbm{1}$};
\node (a) at (2,0) {$a$};
\node (ab) at (4,0) {$ab$};
\node (aba) at (6,0) {$aba=0$};
\draw[edge,blue,thick] (I) -- (a) node[midway, above] {$a$\;};
\draw[edge,blue,thick] (a) -- (ab) node[midway, above] {$b$\;};
\draw[edge,blue,thick] (ab) -- (aba) node[midway, above] {$a$\;};
\path (a) edge[->,thick, bend left=30] node[midway,above] {$a$} (I);
\path (ab) edge[->,thick, bend left=30] node[midway,below] {$b$} (I);
\path
(I) edge [loop left,thick] node {$b$} (I)
(aba) edge [loop right, thick] node {$a,b$} (aba);
\end{tikzpicture}
\end{center}
The corresponding loop graph is
\begin{center}
\begin{tikzpicture}[auto]
\node (I) at (0, 0) {$\mathbbm{1}$};
\node (a) at (2,0) {$a$};
\node (ab) at (4,0) {$ab$};
\node (aba) at (6,0) {$aba$};
\node (dot) at (0,1) {$\bullet$};
\node (dot1) at (0.5,-1) {$\bullet$};
\node (dot2) at (-0.5,-1) {$\bullet$};
\draw[edge,blue,thick] (I) -- (a) node[midway, above] {$a$\;};
\draw[edge,blue,thick] (a) -- (ab) node[midway, above] {$b$\;};
\draw[edge,blue,thick] (ab) -- (aba) node[midway, above] {$a$\;};
\path
(I) edge [loop left,thick] node {$b$} (I)
(I) edge[->, thick, bend right=40] node[midway,right]{$a$} (dot)
(dot) edge[->, thick, bend right=40] node[midway,left]{$a$} (I)
(I) edge[->, thick, bend left=40] node[midway,right]{$a$} (dot1)
(dot1) edge[->, thick, bend left=40] node[midway,below]{$b$} (dot2)
(dot2) edge[->, thick, bend left=40] node[midway,left]{$b$} (I);
\end{tikzpicture}
\end{center}
Hence the stationary distribution is
\[
\Psi_{aba} = \frac{x_a^2 x_b}{1-x_b-x_a^2-x_ax_b^2}.
\]
By Theorem~\ref{theorem.main1}, this gives
\[
E_t[\tau] = 3 + \frac{x_b + 2 x_a^2 + 3 x_a x_b^2}{1-x_b-x_a^2-x_a x_b^2}.
\]
\end{example}
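As a sanity check (our own illustration, with placeholder probabilities), the following Python sketch simulates the walk on
$\mathsf{Test}(aba)$ exactly as described above, returning to $\mathbbm{1}$ whenever the next letter does not extend the
current prefix of $aba$, and compares the average absorption time with the formula for $E_t[\tau]$.
\begin{verbatim}
import random

def simulate_test_aba(xa, trials=200_000, seed=1):
    """Average number of letters until the walk on Test(aba) reaches aba = 0;
    a letter that does not extend the current prefix of 'aba' resets to 1."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        prefix, steps = '', 0
        while prefix != 'aba':
            letter = 'a' if rng.random() < xa else 'b'
            steps += 1
            prefix = prefix + letter if 'aba'.startswith(prefix + letter) else ''
        total += steps
    return total / trials

xa = 0.5
xb = 1 - xa
formula = 3 + (xb + 2*xa**2 + 3*xa*xb**2) / (1 - xb - xa**2 - xa*xb**2)
print(simulate_test_aba(xa), formula)    # both are close to 14 for xa = 1/2
\end{verbatim}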
\begin{example}
Now let us take $A=\{a,b\}$ and $t=a^\ell$. In this case $W_t = \{a^kb \mid 0\leqslant k<\ell\}$ and hence
\[
\Psi_t = \frac{x_a^\ell}{1-\sum_{k=0}^{\ell-1} x_a^k x_b}
\]
with an upper bound for the mixing time given by
\[
E_t[\tau] = \ell + \frac{\sum_{k=0}^{\ell-1} (k+1) x_a^k x_b}{1-\sum_{k=0}^{\ell-1} x_a^k x_b}
\]
using~\eqref{equation.Markov inequality}.
\end{example}
\section{Examples}
\label{section.examples}
In this section, we analyze the mixing time of several examples using the methods developed in Section~\ref{section.mixing time}.
In Section~\ref{section.tsetlin} we derive upper bounds for the mixing time of the famous Tsetlin library~\cite{Tsetlin.1963}
and in Section~\ref{section.edge flipping} for edge flipping on a line~\cite{ChungGraham.2012}. In Section~\ref{section.promotion},
we provide a new Markov chain on linear extensions of a poset with $n$ vertices, inspired by but different from the promotion Markov
chain of Ayyer, Klee and the last author. The mixing time of this Markov chain is $O(n \log n)$ (Theorem~\ref{theorem.expected new}).
\subsection{The Tsetlin library}
\label{section.tsetlin}
The Tsetlin library~\cite{Tsetlin.1963} is a Markov chain whose states are all permutations $S_n$ of $n$ books (on a shelf).
Given $\pi \in S_n$, construct $\pi' \in S_n$ from $\pi$ by removing book $a$ from the shelf and inserting
it to the front. In this case write $\pi \stackrel{a}{\longrightarrow} \pi'$.
Let $0< x_a\leqslant 1$ be probabilities for each $1\leqslant a \leqslant n$ such that $\sum_{a=1}^n x_a = 1$.
In the Tsetlin library Markov chain, we transition $\pi \stackrel{a}{\longrightarrow} \pi'$ with probability $x_a$.
The stationary distribution for the Tsetlin library was derived by Hendricks~\cite{Hendricks.1972, Hendricks.1973}
and Fill~\cite{Fill.1996}
\begin{equation}
\label{equation.psi Tsetlin}
\Psi_\pi = \prod_{i=1}^n \frac{x_{\pi_i}}{1-\sum_{j=1}^{i-1} x_{\pi_j}} \qquad \text{for all $\pi \in S_n$.}
\end{equation}
The stationary distribution was derived using right Cayley graphs and their Karnofsky--Rhodes and McCammond
expansions in~\cite[Section 3.1]{RhodesSchilling.2019}.
Consider the semigroup $P(n)$, which consists of the set of all non-empty subsets of $\{1,2,\ldots,n\}$. Multiplication in
$P(n)$ is union of sets. We pick as generators $A=[n]:=\{1,2,\ldots,n\}$. Then the right Cayley graph $\mathsf{RCay}(P(n),[n])$
is the Boolean poset with $\mathbbm{1}$ as root. The right Cayley graph for $P(3)$ is depicted in Figure~\ref{figure.P3}.
Except for the loops at a given vertex, all edges are transitional. Hence $\mathsf{Mc} \circ \mathsf{KR}(P(n),[n])
= \mathsf{KR}(P(n),[n])$ is a tree with leaves given by the permutations $S_n$ of $[n]$.
The case $n=3$ is depicted in Figure~\ref{figure.Mc P3}.
\begin{figure}[t]
\begin{tikzpicture}[auto]
\node (I) at (0, 0) {$\mathbbm{1}$};
\node (A) at (-3,-1.5) {$\{1\}$};
\node(B) at (0,-1.5) {$\{2\}$};
\node(C) at (3,-1.5) {$\{3\}$};
\node(D) at (-3,-3) {$\{1,2\}$};
\node(E) at (0,-3) {$\{1,3\}$};
\node(F) at (3,-3) {$\{2,3\}$};
\node(G) at (0,-4.5) {$\{1,2,3\}$};
\draw[edge,blue,thick] (I) -- (A) node[midway, left] {$1$\;};
\draw[edge,blue,thick] (I) -- (B) node[midway, right] {$2$};
\draw[edge,blue,thick] (I) -- (C) node[midway, right] {\;$3$};
\draw[edge,blue,thick] (A) -- (D) node[midway,left] {$2$};
\draw[edge,blue,thick] (A) -- (E) node[midway,above] {$3$};
\draw[edge,blue,thick] (B) -- (D) node[midway,below] {$1$};
\draw[edge,blue,thick] (C) -- (F) node[midway,right] {$2$};
\draw[edge,blue,thick] (B) -- (F) node[midway,below] {$3$};
\draw[edge,blue,thick] (C) -- (E) node[midway,above] {$1$};
\draw[edge,blue,thick] (D) -- (G) node[midway,below] {$3$};
\draw[edge,blue,thick] (E) -- (G) node[midway,right] {$2$};
\draw[edge,blue,thick] (F) -- (G) node[midway,right] {\;$1$};
\path
(A) edge [loop left] node {$1$} (A)
(B) edge [loop right] node {$2$} (B)
(C) edge [loop right] node {$3$} (C)
(D) edge [loop left] node {$1,2$} (D)
(E) edge [loop right] node {$1,3$} (E)
(F) edge [loop right] node {$2,3$} (F)
(G) edge [loop right] node {$1,2,3$} (G);
\end{tikzpicture}
\caption{The right Cayley graph $\mathsf{RCay}(S,A)$ with $S=P(3)$ and $A=\{1,2,3\}$. Transition edges are drawn in blue.
\label{figure.P3}}
\end{figure}
\begin{figure}[t]
\begin{tikzpicture}[auto]
\node (I) at (0, 0) {$\mathbbm{1}$};
\node (A) at (-4,-1.5) {$1$};
\node(B) at (0,-1.5) {$2$};
\node(C) at (4,-1.5) {$3$};
\node(D) at (-5,-3) {$12$};
\node(E) at (-3,-3) {$13$};
\node(F) at (-1,-3) {$21$};
\node(G) at (1,-3) {$23$};
\node(H) at (3,-3) {$31$};
\node(J) at (5,-3) {$32$};
\node(DD) at (-5,-4.5) {$123$};
\node(EE) at (-3,-4.5) {$132$};
\node(FF) at (-1,-4.5) {$213$};
\node(GG) at (1,-4.5) {$231$};
\node(HH) at (3,-4.5) {$312$};
\node(JJ) at (5,-4.5) {$321$};
\draw[edge,blue,thick] (I) -- (A) node[midway, left] {$1$\;\;};
\draw[edge,blue,thick] (I) -- (B) node[midway, right] {$2$};
\draw[edge,blue,thick] (I) -- (C) node[midway, right] {\;\;$3$};
\draw[edge,blue,thick] (A) -- (D) node[midway, left] {$2$};
\draw[edge,blue,thick] (A) -- (E) node[midway, right] {$3$};
\draw[edge,blue,thick] (B) -- (F) node[midway, left] {$1$};
\draw[edge,blue,thick] (B) -- (G) node[midway, right] {$3$};
\draw[edge,blue,thick] (C) -- (H) node[midway, left] {$1$};
\draw[edge,blue,thick] (C) -- (J) node[midway, right] {$3$};
\draw[edge,blue,thick] (D) -- (DD) node[midway, left] {$3$};
\draw[edge,blue,thick] (E) -- (EE) node[midway, left] {$2$};
\draw[edge,blue,thick] (F) -- (FF) node[midway, left] {$3$};
\draw[edge,blue,thick] (G) -- (GG) node[midway, right] {$1$};
\draw[edge,blue,thick] (H) -- (HH) node[midway, right] {$2$};
\draw[edge,blue,thick] (J) -- (JJ) node[midway, right] {$1$};
\path
(A) edge [loop left, red, dashed,thick] node {$1$} (A)
(B) edge [loop right, red, dashed, thick] node {$2$} (B)
(C) edge [loop right, red, dashed,thick] node {$3$} (C)
(D) edge [loop left, red, dashed,thick] node {$1,2$} (D)
(E) edge [loop left, red, dashed,thick] node {$1,3$} (E)
(F) edge [loop left, red, dashed,thick] node {$1,2$} (F)
(G) edge [loop right, red, dashed,thick] node {$2,3$} (G)
(H) edge [loop right, red, dashed,thick] node {$1,3$} (H)
(J) edge [loop right, red, dashed,thick] node {$2,3$} (J)
(DD) edge [loop below, red, dashed,thick] node {$1,2,3$} (DD)
(EE) edge [loop below, red, dashed,thick] node {$1,2,3$} (EE)
(FF) edge [loop below, red, dashed,thick] node {$1,2,3$} (FF)
(GG) edge [loop below, red, dashed,thick] node {$1,2,3$} (GG)
(HH) edge [loop below, red, dashed,thick] node {$1,2,3$} (HH)
(JJ) edge [loop below, red, dashed,thick] node {$1,2,3$} (JJ);
\end{tikzpicture}
\caption{$\mathsf{Mc} \circ \mathsf{KR}(P(3),[3]) = \mathsf{KR}(P(3),[3])$, which is the
Karnofsky--Rhodes expansion of the right Cayley graph of Figure~\ref{figure.P3}.
\label{figure.Mc P3}}
\end{figure}
To obtain an upper bound on the mixing time, we compute $E[\tau]$ from the Karnofsky--Rhodes expansion
of the right Cayley graph. The ideal consists of the leaves of the tree $\mathsf{KR}(P(n),[n])$, which are
labeled by permutations in $S_n$. Recall that $E[\tau]$ can be computed via~\eqref{equation.Etau}.
Any path from $\mathbbm{1}$ to the ideal is of length at least $n$. Hence $\mathsf{Pr}(\tau\geqslant t)=1$
for $1\leqslant t\leqslant n$.
Now for concreteness consider the loop graph $G$ associated to the path from $\mathbbm{1}$ to $12\ldots n$
in $\mathsf{Mc}\circ \mathsf{KR}(P(n),[n])$. The contributions of the loops can be treated in a similar fashion to
Example~\ref{example.single loop}. The Kleene expression for all paths from $\mathbbm{1}$ to $12\ldots n$ is given by
\[
1 1^\star 2 \{1,2\}^\star 3 \{1,2,3\}^\star \ldots \{1,2,\ldots,n-1\}^\star n.
\]
Hence we obtain (compare with~\eqref{equation.psi Tsetlin})
\[
\Psi_G(x_1,\ldots,x_n) = \frac{x_1 \cdots x_n}{(1-x_1)(1-x_1-x_2) \cdots (1-x_1-\cdots-x_{n-1})}
\]
and by Theorem~\ref{theorem.main1}
\begin{equation}
\label{equation.EG Tsetlin}
E_G[\tau]=n + \frac{x_1}{1-x_1} + \frac{x_1+x_2}{1-x_1-x_2} + \cdots + \frac{x_1+\cdots + x_{n-1}}{1-x_1-\cdots-x_{n-1}},
\end{equation}
which can also be checked directly. If $x_i=\frac{1}{n}$ for all $1\leqslant i\leqslant n$, we hence have
\begin{equation}
\label{equation.EG Tsetlin n}
E_G[\tau] = n + \frac{1}{n-1} + \frac{2}{n-2}+\cdots + \frac{n-1}{1} = n \left( \sum_{i=1}^n \frac{1}{i} \right).
\end{equation}
The last equality can be proved by induction on $n$. It is well-known that the sequence
$t_n = \sum_{i=1}^n \frac{1}{i} - \ln(n)$ decreases to the Euler--Mascheroni constant $\gamma$ as $n\to \infty$;
in particular $\sum_{i=1}^n \frac{1}{i} \leqslant \ln(n) + \gamma + \frac{1}{2n}$.
Therefore
\[
E[\tau]=E_G[\tau] \leqslant n \ln(n) +n \gamma + \tfrac{1}{2}
\]
and by~\eqref{equation.Markov inequality}
\[
\| T^t \nu - \Psi \| \leqslant \frac{n \ln(n) +n \gamma + \tfrac{1}{2}}{t+1}.
\]
Nestoridi~\cite{Nestoridi.2019} has proven upper and lower bounds for the mixing time in separation distance. Pike~\cite{Pike.2013}
has discussed the eigenfunctions of the transition matrix. Note that, given the rational expression of the stationary
distribution~\eqref{equation.psi Tsetlin}, our methods work for general weights $x_i$. Truncating the degree of the expansion
of the stationary distribution~\eqref{equation.psi Tsetlin} gives a precise expression for an upper bound of the mixing time by
Theorem~\ref{theorem.main}.
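The expression~\eqref{equation.EG Tsetlin} is easy to evaluate for any choice of weights. The following Python sketch
(our own illustration) computes it exactly with rational arithmetic and confirms~\eqref{equation.EG Tsetlin n} for uniform
weights.
\begin{verbatim}
from fractions import Fraction

def E_tau_tsetlin(x):
    """E_G[tau] = n + sum_k (x_1+...+x_k)/(1-x_1-...-x_k) for k = 1, ..., n-1."""
    n, partial, total = len(x), Fraction(0), Fraction(len(x))
    for k in range(n - 1):
        partial += x[k]
        total += partial / (1 - partial)
    return total

n = 6
uniform = [Fraction(1, n)] * n
harmonic = n * sum(Fraction(1, i) for i in range(1, n + 1))
print(E_tau_tsetlin(uniform), harmonic)   # both equal n * (1 + 1/2 + ... + 1/n)
\end{verbatim}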
\subsection{Edge flipping on a line}
\label{section.edge flipping}
In~\cite[Section 3.2]{RhodesSchilling.2019}, we treated the Markov chain obtained by edge flipping on a line
using the semigroup methods of~\cite{RhodesSchilling.2019}. Take a line with $n+1$ vertices.
Each vertex can either be $0$ or $1$. So the state space is $\Omega=\{0,1\}^{n+1}$ of size $2^{n+1}$.
Pick edge $i$ for $1\leqslant i \leqslant n$ (between vertices $i$ and $i+1$) with probability $x_i$. Then with
probability $\frac{1}{2}$ make the adjacent vertices both 0 (respectively both 1). Let us call this Markov chain
$\mathcal{M}$. This Markov chain arises from a Boolean hyperplane arrangement~\cite{BHR.1999}; its stationary distribution was
derived in~\cite{BrownDiaconis.1998} and it was also analyzed in~\cite{ChungGraham.2012}.
In~\cite[Section 3.2]{RhodesSchilling.2019}, we analyzed the stationary distribution in a similar fashion to the
Tsetlin library by considering the semigroup $P^{\pm}(n)$, which is the set of signed subsets of $[n]$. That is, take a
subset of $[n]$ and in addition associate to each letter a sign $+$ or $-$. Right multiplication of such a subset $X$ by
a generator $x\in [\pm n] := \{\pm 1, \ldots, \pm n\}$ adds $x$ to $X$ if neither $x$ nor $-x$ is in $X$, and
otherwise leaves $X$ unchanged. The minimal ideal in the Karnofsky--Rhodes expansion of this monoid is the set of signed
permutations $S_n^\pm$. In the Markov chain on the minimal ideal, we transition from $\pi \stackrel{a}{\longrightarrow} \pi'$
with probability $y_a$ for $a\in [\pm n]$, where $\pi'$ is obtained from $\pi$ by prepending $a$ to $\pi$ and removing the
letter $a$ or $-a$ from $\pi$. The stationary distribution associated to $\pi \in S^\pm_n$ was computed to be
\begin{equation}
\label{equation.psi signed}
\Psi^{\mathsf{KR}(P^\pm(n),[\pm n])}_\pi = \prod_{i=1}^n \frac{y_{\pi_i}}{1-\sum_{j=1}^{i-1} (y_{\pi_j} + y_{-\pi_j})}.
\end{equation}
The stationary distribution for a word $s \in \Omega$ for the Markov chain $\mathcal{M}$ is a lumping (or sum)
of the $\Psi^{\mathsf{KR}(P^\pm(n),[\pm n])}_\pi$ in~\eqref{equation.psi signed}. By the same analysis as in
Section~\ref{section.tsetlin} the mixing time for $\mathcal{M}$ is of order $O(n \ln(n))$.
\subsection{Promotion Markov chain}
\label{section.promotion}
Let $P$ be a \defn{partially ordered set}, also known as a \defn{poset}, on $n$ elements with partial order $\preccurlyeq$.
A partial order must be reflexive ($a \preccurlyeq a$ for all $a\in P$), antisymmetric ($a\preccurlyeq b$ and $b \preccurlyeq a$
implies $a=b$ for $a,b\in P$), and transitive ($a\preccurlyeq b$ and $b\preccurlyeq c$ implies $a\preccurlyeq c$
for $a,b,c\in P$). We assume that the elements of $P$ are labeled by integers in $[n]:=\{1,2,\ldots,n\}$ such that
if $i,j \in P$ with $i \preccurlyeq j$ then $i \leqslant j$ as integers.
Let $\mathcal{L}:=\mathcal{L}(P)$ be the set of \defn{linear extensions} of $P$ defined as
\[
\mathcal{L}(P) = \{ \pi \in S_{n} \mid i \prec j \text{ in $P$ } \implies \pi^{-1}_{i} < \pi^{-1}_{j} \text{ as integers} \}.
\]
In computer science, linear extensions are also known as \defn{topological sortings} \cite{Knuth-volume1.1997,
Knuth-volume3.1998}. Computing the number of linear extensions is an important problem for
real world applications~\cite{KarzanovKhachiyan.1991}. For example, it relates to sorting algorithms.
Suppose one wants to schedule a sequence of tasks based on their dependencies. Specifying that
a certain task has to come before another task gives rise to a partial order. A linear extension gives a total order
in which to perform the jobs.
In social sciences, linear extensions are used in voting procedures~\cite{FishburnGehrlein.1975,Ackermanetal.2013},
where voters rank the candidates according to specified traits (views on foreign policy, views on domestic policy, etc.).
A recursive formula for the number of linear extensions for a given poset $P$ was given in~\cite{EHS.1989}.
Brightwell and Winkler~\cite{BrightwellWinkler.1991} showed that counting the number
of linear extensions is $\# P$-complete. Bubley and Dyer~\cite{BubleyDyer.1999} provided an algorithm to (almost)
uniformly sample the set of linear extensions of a finite poset of size $n$ with mixing time $O(n^3 \log n)$.
In~\cite{AyyerKleeSchilling.2014}, the promotion Markov chain was introduced, which is a random walk on the
linear extensions of a finite poset $P$. Here we discuss a variant of the promotion Markov chain which
has mixing time of order $O(n \log n)$.
\subsubsection{The model}
We now explain the promotion Markov chain introduced in~\cite{AyyerKleeSchilling.2014}.
For a given poset $P$ with $n$ vertices, the state space of the \defn{promotion Markov chain} is the set of
linear extensions $\mathcal{L}(P)$. For $\pi,\pi'\in \mathcal{L}(P)$, we transition
$\pi \stackrel{\partial_j}{\longrightarrow} \pi'$ with probability $x_{\pi_j}$ if $\pi'=\partial_j \pi$, where $\partial_j$
is the promotion operator. The promotion operator is defined in terms of more elementary operators $\tau_i$
($1\leqslant i<n$) which appeared in~\cite{Haiman.1992, MalvenutoReutenauer.1994, Stanley.2009} and
were used explicitly to count linear extensions in~\cite{EHS.1989}.
Let $\pi=\pi_1 \ldots \pi_n \in \mathcal{L}(P)$ be a linear extension of $P$ in one-line notation. Then
\begin{equation}
\label{equation.tau}
\tau_i \pi = \begin{cases}
\pi_1 \ldots \pi_{i-1} \pi_{i+1} \pi_i \ldots \pi_n & \text{if $\pi_i$ and $\pi_{i+1}$ are not comparable in $P$,}\\
\pi_1 \ldots \pi_n & \text{otherwise.} \end{cases}
\end{equation}
In other words, $\tau_i$ acts non-trivially on a linear extension if interchanging entries $\pi_i$ and $\pi_{i+1}$ yields another
linear extension. Then the \defn{promotion operator} on $\mathcal{L}(P)$ is defined as
\begin{equation}
\label{equation.promotion tau}
\partial_j = \tau_1 \tau_2 \cdots \tau_{j-1}.
\end{equation}
Note that we use a different convention here to~\cite{AyyerKleeSchilling.2014}, where $\partial_j = \tau_j \tau_{j+1}
\cdots \tau_{n-1}$. Our convention here is compatible with the conventions for the Tsetlin library as in
Section~\ref{section.tsetlin}, where we moved letters to the front of the word rather than the end of the word.
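To make the definitions concrete, the following Python sketch (our own illustration, not part of the model) implements
$\tau_i$ and $\partial_j$ for the poset of Example~\ref{example.promotion} below; we read the product
in~\eqref{equation.promotion tau} with the rightmost factor $\tau_{j-1}$ applied first.
\begin{verbatim}
from itertools import permutations

covers = {(1, 4), (2, 4), (2, 3)}         # covering relations of the poset below
elements = sorted({v for edge in covers for v in edge})

def less(a, b):
    """a < b in the poset: transitive closure of the covering relations."""
    return (a, b) in covers or any((a, c) in covers and less(c, b) for c in elements)

def comparable(a, b):
    return less(a, b) or less(b, a)

def tau(i, pi):
    """Interchange pi_i and pi_{i+1} (1-based positions) if they are incomparable."""
    pi = list(pi)
    if not comparable(pi[i - 1], pi[i]):
        pi[i - 1], pi[i] = pi[i], pi[i - 1]
    return tuple(pi)

def promotion(j, pi):
    """partial_j = tau_1 tau_2 ... tau_{j-1}, rightmost factor applied first."""
    for i in range(j - 1, 0, -1):
        pi = tau(i, pi)
    return pi

linear_extensions = [pi for pi in permutations(elements)
                     if all(pi.index(a) < pi.index(b) for (a, b) in covers)]
print(linear_extensions)                  # the five linear extensions of the poset
print(promotion(4, (2, 3, 1, 4)))         # partial_4 applied to 2314
\end{verbatim}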
\begin{example}
\label{example.promotion}
Let $P$ be the poset on four vertices defined by its covering relations $\{ (1,4), (2,4), (2,3) \}$. Then its Hasse diagram
is the following:
\setlength{\unitlength}{1mm}
\begin{center}
\begin{picture}(20, 20)
\put(10,4){\circle*{1}}
\put(20,4){\circle*{1}}
\put(9,0){1}
\put(19,0){2}
\put(10,14){\circle*{1}}
\put(20,14){\circle*{1}}
\put(9,16){4}
\put(19,16){3}
\put(10,4){\line(0,1){10}}
\put(20,4){\line(0,1){10}}
\put(10,14){\line(1,-1){10}}
\end{picture}
\end{center}
This poset has five linear extensions
\begin{equation}
\label{equation.LP example}
\mathcal{L}(P) = \{ 1234, 1243, 2134, 2143, 2314 \}.
\end{equation}
The promotion Markov chain for $P$ is depicted in Figure~\ref{figure.promotion}, where the vertices are the
linear extensions and an arrow labelled by $i$ from $\pi$ to $\pi'$ indicates that $\pi' = \partial_i \pi$.
\begin{figure}
\begin{tikzpicture}[>=latex,line join=bevel,]
\node (node_4) at (181.42bp,92.892bp) [draw,draw=none] {$\mathtt{2143}$};
\node (node_3) at (66.421bp,241.89bp) [draw,draw=none] {$\mathtt{2134}$};
\node (node_2) at (28.421bp,167.89bp) [draw,draw=none] {$\mathtt{1234}$};
\node (node_1) at (79.421bp,17.892bp) [draw,draw=none] {$\mathtt{2314}$};
\node (node_0) at (174.42bp,315.89bp) [draw,draw=none] {$\mathtt{1243}$};
\draw [green,->] (node_0) ..controls (124.95bp,307.28bp) and (88.338bp,299.05bp) .. (78.421bp,288.89bp) .. controls (71.149bp,281.44bp) and (68.102bp,270.23bp) .. (node_3);
\definecolor{strokecol}{rgb}{0.0,0.0,0.0};
\pgfsetstrokecolor{strokecol}
\draw (87.421bp,278.89bp) node {$3$};
\draw [green,->] (node_2) ..controls (44.272bp,181.29bp) and (50.494bp,187.84bp) .. (54.421bp,194.89bp) .. controls (59.205bp,203.48bp) and (62.121bp,214.01bp) .. (node_3);
\draw (71.421bp,204.89bp) node {$3$};
\draw [red,->] (node_2) ..controls (26.228bp,186.32bp) and (25.919bp,202.77bp) .. (32.421bp,214.89bp) .. controls (34.833bp,219.39bp) and (38.346bp,223.39bp) .. (node_3);
\draw (41.421bp,204.89bp) node {$2$};
\draw [blue,->] (node_2) ..controls (62.285bp,174.98bp) and (70.421bp,172.74bp) .. (70.421bp,167.89bp) .. controls (70.421bp,164.94bp) and (67.4bp,162.95bp) .. (node_2);
\draw (79.421bp,167.89bp) node {$1$};
\draw [blue,->] (node_3) ..controls (28.997bp,234.17bp) and (13.572bp,227.41bp) .. (5.4214bp,214.89bp) .. controls (-0.87737bp,205.22bp) and (4.7469bp,193.56bp) .. (node_2);
\draw (14.421bp,204.89bp) node {$1$};
\draw [red,->] (node_3) ..controls (100.28bp,248.98bp) and (108.42bp,246.74bp) .. (108.42bp,241.89bp) .. controls (108.42bp,238.94bp) and (105.4bp,236.95bp) .. (node_3);
\draw (117.42bp,241.89bp) node {$2$};
\draw [red,->] (node_4) ..controls (215.28bp,100.37bp) and (223.42bp,98.013bp) .. (223.42bp,92.892bp) .. controls (223.42bp,89.772bp) and (220.4bp,87.676bp) .. (node_4);
\draw (232.42bp,92.892bp) node {$2$};
\draw [yellow,->] (node_3) ..controls (89.622bp,262.35bp) and (113.03bp,282.62bp) .. (122.42bp,288.89bp) .. controls (129.96bp,293.93bp) and (138.55bp,298.73bp) .. (node_0);
\draw (131.42bp,278.89bp) node {$4$};
\draw [yellow,->] (node_1) ..controls (48.18bp,30.992bp) and (37.016bp,37.219bp) .. (28.421bp,44.892bp) .. controls (13.156bp,58.521bp) and (8.4693bp,63.343bp) .. (2.4214bp,82.892bp) .. controls (-4.7462bp,106.06bp) and (6.6849bp,132.82bp) .. (node_2);
\draw (11.421bp,92.892bp) node {$4$};
\draw [blue,->] (node_1) ..controls (68.389bp,50.34bp) and (45.835bp,116.67bp) .. (node_2);
\draw (66.421bp,92.892bp) node {$1$};
\draw [yellow,->] (node_0) ..controls (197.91bp,302.81bp) and (205.3bp,296.61bp) .. (209.42bp,288.89bp) .. controls (224.95bp,259.82bp) and (213.31bp,247.84bp) .. (214.42bp,214.89bp) .. controls (215.83bp,173.08bp) and (226.73bp,158.98bp) .. (209.42bp,120.89bp) .. controls (207.42bp,116.48bp) and (204.39bp,112.37bp) .. (node_4);
\draw (224.42bp,204.89bp) node {$4$};
\draw [red,->] (node_0) ..controls (208.64bp,304.72bp) and (219.69bp,298.33bp) .. (226.42bp,288.89bp) .. controls (245.73bp,261.83bp) and (234.97bp,248.05bp) .. (237.42bp,214.89bp) .. controls (240.52bp,172.94bp) and (249.66bp,155.95bp) .. (226.42bp,120.89bp) .. controls (222.75bp,115.36bp) and (217.57bp,110.72bp) .. (node_4);
\draw (247.42bp,204.89bp) node {$2$};
\draw [green,->] (node_3) ..controls (76.728bp,228.11bp) and (81.228bp,221.36bp) .. (84.421bp,214.89bp) .. controls (92.327bp,198.89bp) and (94.493bp,194.5bp) .. (97.421bp,176.89bp) .. controls (105.83bp,126.35bp) and (92.778bp,66.104bp) .. (node_1);
\draw (108.42bp,130.89bp) node {$3$};
\draw [green,->] (node_4) ..controls (164.21bp,74.14bp) and (147.23bp,56.919bp) .. (130.42bp,44.892bp) .. controls (123.35bp,39.836bp) and (115.22bp,35.107bp) .. (node_1);
\draw (162.42bp,54.892bp) node {$3$};
\draw [green,->] (node_1) ..controls (113.28bp,21.436bp) and (121.42bp,20.318bp) .. (121.42bp,17.892bp) .. controls (121.42bp,16.414bp) and (118.4bp,15.422bp) .. (node_1);
\draw (130.42bp,17.892bp) node {$3$};
\draw [red,->] (node_1) ..controls (109.88bp,40.928bp) and (139.42bp,38.088bp) .. (139.42bp,17.892bp) .. controls (139.42bp,0.45816bp) and (117.41bp,-4.0419bp) .. (node_1);
\draw (148.42bp,17.892bp) node {$2$};
\draw [yellow,->] (node_4) ..controls (172.62bp,118.86bp) and (159.86bp,159.53bp) .. (155.42bp,194.89bp) .. controls (150.85bp,231.32bp) and (161.09bp,273.65bp) .. (node_0);
\draw (164.42bp,204.89bp) node {$4$};
\draw [blue,->] (node_4) ..controls (181.59bp,111.37bp) and (181.65bp,127.23bp) .. (181.42bp,140.89bp) .. controls (180.31bp,206.7bp) and (179.96bp,223.17bp) .. (176.42bp,288.89bp) .. controls (176.28bp,291.6bp) and (176.09bp,294.46bp) .. (node_0);
\draw (188.42bp,204.89bp) node {$1$};
\draw [blue,->] (node_0) ..controls (208.28bp,322.98bp) and (216.42bp,320.74bp) .. (216.42bp,315.89bp) .. controls (216.42bp,312.94bp) and (213.4bp,310.95bp) .. (node_0);
\draw (225.42bp,315.89bp) node {$1$};
\draw [yellow,->] (node_2) ..controls (47.275bp,147.21bp) and (67.3bp,126.09bp) .. (76.421bp,120.89bp) .. controls (98.258bp,108.46bp) and (125.8bp,101.38bp) .. (node_4);
\draw (85.421bp,130.89bp) node {$4$};
\end{tikzpicture}
\caption{The promotion Markov chain digraph for the poset in Example~\ref{example.promotion}.
\label{figure.promotion}}
\end{figure}
We may represent the promotion operator $\partial_i$ by a $|\mathcal{L}(P)| \times |\mathcal{L}(P)|$-dimensional matrix,
where the entry in row $k$ and column $j$ is 1 if the $j$-th linear extension in~\eqref{equation.LP example} is mapped
to the $k$-th linear extension in~\eqref{equation.LP example} under $\partial_i$, and all other entries are zero.
For example, $\partial_1$ is represented by the matrix
\[
\begin{pmatrix}
1&0&1&0&1\\
0&1&0&1&0\\
0&0&0&0&0\\
0&0&0&0&0\\
0&0&0&0&0
\end{pmatrix}.
\]
The right Cayley graph of the monoid generated by the matrices for the promotion operators
$\partial_1,\partial_2,\partial_3,\partial_4$ is depicted in Figure~\ref{figure.right Cayley example}.
The vertices in the right Cayley graph are labeled by reduced words in the generators.
For example $[1,4,1]$ stands for the element $\partial_1 \partial_4 \partial_1$.
\begin{figure}
\rotatebox{270}{\scalebox{0.5}{
\begin{tikzpicture}[>=latex,line join=bevel,]
\node (node_13) at (419.0bp,170.6bp) [draw,draw=none] {$[4, 4]$};
\node (node_14) at (59.0bp,19.599bp) [draw,draw=none] {$[1, 3]$};
\node (node_18) at (1105.0bp,19.599bp) [draw,draw=none] {$[3, 2]$};
\node (node_19) at (1027.0bp,94.599bp) [draw,draw=none] {$[3, 4]$};
\node (node_9) at (419.0bp,322.6bp) [draw,draw=none] {$[4]$};
\node (node_8) at (709.0bp,94.599bp) [draw,draw=none] {$[2, 4, 3, 4]$};
\node (node_7) at (611.0bp,19.599bp) [draw,draw=none] {$[4, 1]$};
\node (node_6) at (175.0bp,170.6bp) [draw,draw=none] {$[4, 3]$};
\node (node_5) at (154.0bp,94.599bp) [draw,draw=none] {$[4, 3, 4]$};
\node (node_4) at (751.0bp,170.6bp) [draw,draw=none] {$[2, 4, 3]$};
\node (node_3) at (253.0bp,246.6bp) [draw,draw=none] {$[1, 4]$};
\node (node_2) at (726.0bp,322.6bp) [draw,draw=none] {$[2, 1]$};
\node (node_1) at (353.0bp,94.599bp) [draw,draw=none] {$[4, 4, 4]$};
\node (node_0) at (585.0bp,473.6bp) [draw,draw=none] {$[]$};
\node (node_17) at (677.0bp,246.6bp) [draw,draw=none] {$[2, 4]$};
\node (node_11) at (154.0bp,322.6bp) [draw,draw=none] {$[1]$};
\node (node_16) at (698.0bp,398.6bp) [draw,draw=none] {$[2]$};
\node (node_10) at (917.0bp,19.599bp) [draw,draw=none] {$[3, 1]$};
\node (node_15) at (266.0bp,19.599bp) [draw,draw=none] {$[1, 4, 1]$};
\node (node_12) at (1005.0bp,170.6bp) [draw,draw=none] {$[3]$};
\draw [black,->] (node_16) ..controls (705.6bp,377.97bp) and (713.58bp,356.32bp) .. (node_2);
\definecolor{strokecol}{rgb}{0.0,0.0,0.0};
\pgfsetstrokecolor{strokecol}
\draw (724.0bp,360.6bp) node {$1$};
\draw [black,->] (node_9) ..controls (375.94bp,313.87bp) and (243.2bp,285.34bp) .. (213.0bp,256.6bp) .. controls (193.72bp,238.26bp) and (183.59bp,208.4bp) .. (node_6);
\draw (222.0bp,246.6bp) node {$3$};
\draw [black,->] (node_2) ..controls (748.1bp,326.79bp) and (757.0bp,325.72bp) .. (757.0bp,322.6bp) .. controls (757.0bp,320.7bp) and (753.69bp,319.56bp) .. (node_2);
\draw (766.0bp,322.6bp) node {$2$};
\draw [black,->] (node_2) ..controls (750.87bp,345.88bp) and (775.0bp,343.01bp) .. (775.0bp,322.6bp) .. controls (775.0bp,305.38bp) and (757.82bp,300.64bp) .. (node_2);
\draw (784.0bp,322.6bp) node {$1$};
\draw [black,->] (node_5) ..controls (148.66bp,75.604bp) and (146.02bp,57.778bp) .. (155.0bp,46.599bp) .. controls (156.33bp,44.942bp) and (205.66bp,33.395bp) .. (node_15);
\draw (164.0bp,56.599bp) node {$3$};
\draw [black,->] (node_5) ..controls (166.8bp,75.428bp) and (180.39bp,57.511bp) .. (196.0bp,46.599bp) .. controls (208.68bp,37.739bp) and (224.65bp,31.289bp) .. (node_15);
\draw (205.0bp,56.599bp) node {$2$};
\draw [black,->] (node_16) ..controls (693.69bp,377.55bp) and (688.97bp,353.35bp) .. (686.0bp,332.6bp) .. controls (682.69bp,309.44bp) and (680.07bp,282.6bp) .. (node_17);
\draw (695.0bp,322.6bp) node {$4$};
\draw [black,->] (node_3) ..controls (275.1bp,254.98bp) and (284.0bp,252.83bp) .. (284.0bp,246.6bp) .. controls (284.0bp,242.8bp) and (280.69bp,240.52bp) .. (node_3);
\draw (293.0bp,246.6bp) node {$2$};
\draw [black,->] (node_12) ..controls (1001.1bp,151.94bp) and (999.34bp,135.63bp) .. (1004.0bp,122.6bp) .. controls (1005.4bp,118.58bp) and (1007.7bp,114.73bp) .. (node_19);
\draw (1013.0bp,132.6bp) node {$4$};
\draw [black,->] (node_11) ..controls (177.61bp,304.48bp) and (211.86bp,278.18bp) .. (node_3);
\draw (224.0bp,284.6bp) node {$4$};
\draw [black,->] (node_17) ..controls (655.34bp,235.03bp) and (644.76bp,227.7bp) .. (638.0bp,218.6bp) .. controls (616.71bp,189.94bp) and (618.0bp,177.79bp) .. (612.0bp,142.6bp) .. controls (605.77bp,106.07bp) and (607.53bp,62.552bp) .. (node_7);
\draw (621.0bp,132.6bp) node {$4$};
\draw [black,->] (node_17) ..controls (668.37bp,209.45bp) and (646.68bp,119.22bp) .. (622.0bp,46.599bp) .. controls (621.02bp,43.724bp) and (619.9bp,40.713bp) .. (node_7);
\draw (659.0bp,132.6bp) node {$1$};
\draw [black,->] (node_8) ..controls (700.13bp,113.51bp) and (694.9bp,130.36bp) .. (702.0bp,142.6bp) .. controls (706.79bp,150.85bp) and (715.02bp,156.87bp) .. (node_4);
\draw (711.0bp,132.6bp) node {$4$};
\draw [black,->] (node_4) ..controls (763.82bp,156.27bp) and (770.28bp,149.03bp) .. (776.0bp,142.6bp) .. controls (813.87bp,100.01bp) and (815.55bp,80.982bp) .. (861.0bp,46.599bp) .. controls (871.05bp,39.0bp) and (883.57bp,32.718bp) .. (node_10);
\draw (829.0bp,94.599bp) node {$3$};
\draw [black,->] (node_4) ..controls (776.39bp,156.71bp) and (788.14bp,149.71bp) .. (798.0bp,142.6bp) .. controls (841.98bp,110.88bp) and (883.72bp,61.884bp) .. (node_10);
\draw (870.0bp,94.599bp) node {$2$};
\draw [black,->] (node_3) ..controls (236.33bp,232.57bp) and (228.04bp,225.33bp) .. (221.0bp,218.6bp) .. controls (210.29bp,208.36bp) and (198.75bp,196.32bp) .. (node_6);
\draw (230.0bp,208.6bp) node {$3$};
\draw [black,->] (node_8) ..controls (735.55bp,75.161bp) and (762.92bp,56.841bp) .. (789.0bp,46.599bp) .. controls (824.15bp,32.8bp) and (867.63bp,25.536bp) .. (node_10);
\draw (798.0bp,56.599bp) node {$1$};
\draw [black,->] (node_11) ..controls (113.73bp,311.95bp) and (0.0bp,279.4bp) .. (0.0bp,246.6bp) .. controls (0.0bp,246.6bp) and (0.0bp,246.6bp) .. (0.0bp,94.599bp) .. controls (0.0bp,69.488bp) and (20.685bp,47.819bp) .. (node_14);
\draw (9.0bp,170.6bp) node {$3$};
\draw [black,->] (node_1) ..controls (331.63bp,80.792bp) and (321.44bp,73.682bp) .. (313.0bp,66.599bp) .. controls (301.65bp,57.077bp) and (289.86bp,45.284bp) .. (node_15);
\draw (322.0bp,56.599bp) node {$3$};
\draw [black,->] (node_1) ..controls (348.38bp,75.451bp) and (342.35bp,57.546bp) .. (331.0bp,46.599bp) .. controls (320.8bp,36.759bp) and (306.5bp,30.293bp) .. (node_15);
\draw (352.0bp,56.599bp) node {$2$};
\draw [black,->] (node_5) ..controls (132.86bp,80.498bp) and (122.68bp,73.384bp) .. (114.0bp,66.599bp) .. controls (103.33bp,58.259bp) and (101.29bp,55.412bp) .. (91.0bp,46.599bp) .. controls (86.515bp,42.757bp) and (81.653bp,38.634bp) .. (node_14);
\draw (123.0bp,56.599bp) node {$1$};
\draw [black,->] (node_14) ..controls (81.1bp,21.584bp) and (90.0bp,21.076bp) .. (90.0bp,19.599bp) .. controls (90.0bp,18.7bp) and (86.695bp,18.16bp) .. (node_14);
\draw (99.0bp,19.599bp) node {$4$};
\draw [black,->] (node_14) ..controls (85.173bp,40.277bp) and (108.0bp,37.363bp) .. (108.0bp,19.599bp) .. controls (108.0bp,4.8888bp) and (92.345bp,0.36183bp) .. (node_14);
\draw (117.0bp,19.599bp) node {$3$};
\draw [black,->] (node_14) ..controls (93.013bp,42.635bp) and (126.0bp,39.795bp) .. (126.0bp,19.599bp) .. controls (126.0bp,1.9285bp) and (100.74bp,-2.4547bp) .. (node_14);
\draw (135.0bp,19.599bp) node {$2$};
\draw [black,->] (node_14) ..controls (101.0bp,44.922bp) and (144.0bp,42.03bp) .. (144.0bp,19.599bp) .. controls (144.0bp,-0.64079bp) and (108.99bp,-4.9717bp) .. (node_14);
\draw (153.0bp,19.599bp) node {$1$};
\draw [black,->] (node_12) ..controls (1022.6bp,162.3bp) and (1038.0bp,153.71bp) .. (1048.0bp,142.6bp) .. controls (1076.1bp,111.4bp) and (1092.7bp,63.91bp) .. (node_18);
\draw (1092.0bp,94.599bp) node {$3$};
\draw [black,->] (node_12) ..controls (1034.0bp,162.45bp) and (1087.7bp,143.56bp) .. (1105.0bp,104.6bp) .. controls (1114.4bp,83.374bp) and (1112.3bp,56.11bp) .. (node_18);
\draw (1119.0bp,94.599bp) node {$2$};
\draw [black,->] (node_2) ..controls (712.55bp,301.74bp) and (698.2bp,279.48bp) .. (node_17);
\draw (716.0bp,284.6bp) node {$4$};
\draw [black,->] (node_9) ..controls (419.0bp,290.06bp) and (419.0bp,222.32bp) .. (node_13);
\draw (428.0bp,246.6bp) node {$4$};
\draw [black,->] (node_0) ..controls (533.46bp,455.54bp) and (247.05bp,355.2bp) .. (node_11);
\draw (407.0bp,398.6bp) node {$1$};
\draw [black,->] (node_13) ..controls (430.94bp,156.39bp) and (436.89bp,149.15bp) .. (442.0bp,142.6bp) .. controls (474.63bp,100.75bp) and (469.24bp,76.626bp) .. (513.0bp,46.599bp) .. controls (535.55bp,31.124bp) and (566.72bp,24.49bp) .. (node_7);
\draw (485.0bp,94.599bp) node {$3$};
\draw [black,->] (node_13) ..controls (441.02bp,157.63bp) and (452.57bp,150.21bp) .. (462.0bp,142.6bp) .. controls (509.21bp,104.52bp) and (509.97bp,82.313bp) .. (559.0bp,46.599bp) .. controls (568.15bp,39.932bp) and (579.18bp,33.957bp) .. (node_7);
\draw (526.0bp,94.599bp) node {$2$};
\draw [black,->] (node_5) ..controls (138.16bp,113.14bp) and (128.13bp,129.4bp) .. (135.0bp,142.6bp) .. controls (138.96bp,150.21bp) and (146.05bp,156.18bp) .. (node_6);
\draw (144.0bp,132.6bp) node {$4$};
\draw [black,->] (node_19) ..controls (1027.2bp,113.15bp) and (1026.4bp,129.42bp) .. (1022.0bp,142.6bp) .. controls (1020.8bp,146.16bp) and (1019.1bp,149.74bp) .. (node_12);
\draw (1035.0bp,132.6bp) node {$4$};
\draw [black,->] (node_16) ..controls (739.35bp,390.05bp) and (861.68bp,360.9bp) .. (935.0bp,294.6bp) .. controls (943.81bp,286.63bp) and (978.85bp,220.73bp) .. (node_12);
\draw (957.0bp,284.6bp) node {$3$};
\draw [black,->] (node_12) ..controls (989.61bp,141.75bp) and (961.14bp,89.147bp) .. (935.0bp,46.599bp) .. controls (933.05bp,43.431bp) and (930.91bp,40.102bp) .. (node_10);
\draw (977.0bp,94.599bp) node {$1$};
\draw [black,->] (node_18) ..controls (1127.1bp,21.584bp) and (1136.0bp,21.076bp) .. (1136.0bp,19.599bp) .. controls (1136.0bp,18.7bp) and (1132.7bp,18.16bp) .. (node_18);
\draw (1145.0bp,19.599bp) node {$4$};
\draw [black,->] (node_18) ..controls (1131.2bp,40.277bp) and (1154.0bp,37.363bp) .. (1154.0bp,19.599bp) .. controls (1154.0bp,4.8888bp) and (1138.3bp,0.36183bp) .. (node_18);
\draw (1163.0bp,19.599bp) node {$3$};
\draw [black,->] (node_18) ..controls (1139.0bp,42.635bp) and (1172.0bp,39.795bp) .. (1172.0bp,19.599bp) .. controls (1172.0bp,1.9285bp) and (1146.7bp,-2.4547bp) .. (node_18);
\draw (1181.0bp,19.599bp) node {$2$};
\draw [black,->] (node_18) ..controls (1147.0bp,44.922bp) and (1190.0bp,42.03bp) .. (1190.0bp,19.599bp) .. controls (1190.0bp,-0.64079bp) and (1155.0bp,-4.9717bp) .. (node_18);
\draw (1199.0bp,19.599bp) node {$1$};
\draw [black,->] (node_17) ..controls (699.1bp,254.98bp) and (708.0bp,252.83bp) .. (708.0bp,246.6bp) .. controls (708.0bp,242.8bp) and (704.69bp,240.52bp) .. (node_17);
\draw (717.0bp,246.6bp) node {$2$};
\draw [black,->] (node_6) ..controls (169.33bp,150.08bp) and (163.43bp,128.73bp) .. (node_5);
\draw (175.0bp,132.6bp) node {$4$};
\draw [black,->] (node_1) ..controls (343.64bp,113.51bp) and (337.89bp,130.8bp) .. (346.0bp,142.6bp) .. controls (351.9bp,151.19bp) and (376.99bp,159.55bp) .. (node_13);
\draw (355.0bp,132.6bp) node {$4$};
\draw [black,->] (node_6) ..controls (143.14bp,166.6bp) and (109.16bp,159.77bp) .. (86.0bp,142.6bp) .. controls (69.559bp,130.42bp) and (66.163bp,124.11bp) .. (60.0bp,104.6bp) .. controls (53.073bp,82.666bp) and (54.21bp,55.933bp) .. (node_14);
\draw (69.0bp,94.599bp) node {$3$};
\draw [black,->] (node_6) ..controls (151.76bp,159.91bp) and (138.38bp,152.26bp) .. (129.0bp,142.6bp) .. controls (98.447bp,111.13bp) and (76.469bp,63.749bp) .. (node_14);
\draw (109.0bp,94.599bp) node {$2$};
\draw [black,->] (node_9) ..controls (386.62bp,307.77bp) and (312.71bp,273.93bp) .. (node_3);
\draw (364.0bp,284.6bp) node {$2$};
\draw [black,->] (node_0) ..controls (607.9bp,458.4bp) and (655.57bp,426.76bp) .. (node_16);
\draw (663.0bp,436.6bp) node {$2$};
\draw [black,->] (node_3) ..controls (252.23bp,217.6bp) and (251.32bp,165.79bp) .. (254.0bp,122.6bp) .. controls (255.85bp,92.884bp) and (260.31bp,58.578bp) .. (node_15);
\draw (263.0bp,132.6bp) node {$4$};
\draw [black,->] (node_3) ..controls (260.8bp,232.67bp) and (264.2bp,225.44bp) .. (266.0bp,218.6bp) .. controls (281.13bp,160.97bp) and (273.4bp,144.08bp) .. (270.0bp,84.599bp) .. controls (269.11bp,69.059bp) and (268.02bp,51.435bp) .. (node_15);
\draw (283.0bp,132.6bp) node {$1$};
\draw [black,->] (node_13) ..controls (401.33bp,156.61bp) and (393.33bp,149.61bp) .. (387.0bp,142.6bp) .. controls (378.38bp,133.05bp) and (370.01bp,121.27bp) .. (node_1);
\draw (396.0bp,132.6bp) node {$4$};
\draw [black,->] (node_4) ..controls (742.74bp,156.3bp) and (738.59bp,149.06bp) .. (735.0bp,142.6bp) .. controls (729.38bp,132.5bp) and (723.22bp,121.14bp) .. (node_8);
\draw (744.0bp,132.6bp) node {$4$};
\draw [black,->] (node_0) ..controls (643.51bp,469.51bp) and (1005.0bp,442.58bp) .. (1005.0bp,398.6bp) .. controls (1005.0bp,398.6bp) and (1005.0bp,398.6bp) .. (1005.0bp,246.6bp) .. controls (1005.0bp,227.25bp) and (1005.0bp,205.14bp) .. (node_12);
\draw (1014.0bp,322.6bp) node {$3$};
\draw [black,->] (node_19) ..controls (1041.5bp,75.968bp) and (1055.4bp,59.249bp) .. (1069.0bp,46.599bp) .. controls (1073.6bp,42.311bp) and (1078.9bp,38.043bp) .. (node_18);
\draw (1078.0bp,56.599bp) node {$1$};
\draw [black,->] (node_9) ..controls (450.93bp,272.21bp) and (564.09bp,93.631bp) .. (node_7);
\draw (529.0bp,170.6bp) node {$1$};
\draw [black,->] (node_15) ..controls (293.54bp,21.478bp) and (302.0bp,20.935bp) .. (302.0bp,19.599bp) .. controls (302.0bp,18.785bp) and (298.86bp,18.266bp) .. (node_15);
\draw (311.0bp,19.599bp) node {$4$};
\draw [black,->] (node_15) ..controls (294.61bp,40.304bp) and (320.0bp,37.446bp) .. (320.0bp,19.599bp) .. controls (320.0bp,4.5412bp) and (301.92bp,0.15358bp) .. (node_15);
\draw (329.0bp,19.599bp) node {$3$};
\draw [black,->] (node_15) ..controls (302.55bp,42.635bp) and (338.0bp,39.795bp) .. (338.0bp,19.599bp) .. controls (338.0bp,1.7707bp) and (310.37bp,-2.5319bp) .. (node_15);
\draw (347.0bp,19.599bp) node {$2$};
\draw [black,->] (node_15) ..controls (310.47bp,44.922bp) and (356.0bp,42.03bp) .. (356.0bp,19.599bp) .. controls (356.0bp,-0.72841bp) and (318.61bp,-5.0089bp) .. (node_15);
\draw (365.0bp,19.599bp) node {$1$};
\draw [black,->] (node_4) ..controls (762.3bp,137.01bp) and (784.82bp,64.154bp) .. (770.0bp,46.599bp) .. controls (761.34bp,36.339bp) and (676.35bp,26.293bp) .. (node_7);
\draw (782.0bp,94.599bp) node {$1$};
\draw [black,->] (node_19) ..controls (1022.7bp,75.085bp) and (1016.9bp,56.988bp) .. (1005.0bp,46.599bp) .. controls (986.93bp,30.77bp) and (959.74bp,24.221bp) .. (node_10);
\draw (1027.0bp,56.599bp) node {$3$};
\draw [black,->] (node_19) ..controls (1004.7bp,82.058bp) and (992.65bp,74.542bp) .. (983.0bp,66.599bp) .. controls (973.54bp,58.806bp) and (973.64bp,54.177bp) .. (964.0bp,46.599bp) .. controls (956.45bp,40.664bp) and (947.45bp,35.169bp) .. (node_10);
\draw (992.0bp,56.599bp) node {$2$};
\draw [black,->] (node_17) ..controls (697.64bp,225.4bp) and (720.22bp,202.21bp) .. (node_4);
\draw (731.0bp,208.6bp) node {$3$};
\draw [black,->] (node_2) ..controls (782.82bp,312.84bp) and (921.0bp,285.75bp) .. (921.0bp,246.6bp) .. controls (921.0bp,246.6bp) and (921.0bp,246.6bp) .. (921.0bp,94.599bp) .. controls (921.0bp,75.48bp) and (919.69bp,53.659bp) .. (node_10);
\draw (930.0bp,170.6bp) node {$3$};
\draw [black,->] (node_16) ..controls (714.65bp,407.43bp) and (724.0bp,405.58bp) .. (724.0bp,398.6bp) .. controls (724.0bp,394.46bp) and (720.7bp,392.12bp) .. (node_16);
\draw (733.0bp,398.6bp) node {$2$};
\draw [black,->] (node_10) ..controls (939.1bp,21.584bp) and (948.0bp,21.076bp) .. (948.0bp,19.599bp) .. controls (948.0bp,18.7bp) and (944.69bp,18.16bp) .. (node_10);
\draw (957.0bp,19.599bp) node {$4$};
\draw [black,->] (node_10) ..controls (943.17bp,40.277bp) and (966.0bp,37.363bp) .. (966.0bp,19.599bp) .. controls (966.0bp,4.8888bp) and (950.35bp,0.36183bp) .. (node_10);
\draw (975.0bp,19.599bp) node {$3$};
\draw [black,->] (node_10) ..controls (951.01bp,42.635bp) and (984.0bp,39.795bp) .. (984.0bp,19.599bp) .. controls (984.0bp,1.9285bp) and (958.74bp,-2.4547bp) .. (node_10);
\draw (993.0bp,19.599bp) node {$2$};
\draw [black,->] (node_10) ..controls (959.0bp,44.922bp) and (1002.0bp,42.03bp) .. (1002.0bp,19.599bp) .. controls (1002.0bp,-0.64079bp) and (966.99bp,-4.9717bp) .. (node_10);
\draw (1011.0bp,19.599bp) node {$1$};
\draw [black,->] (node_6) ..controls (191.62bp,137.6bp) and (227.26bp,67.644bp) .. (242.0bp,46.599bp) .. controls (244.59bp,42.901bp) and (247.62bp,39.147bp) .. (node_15);
\draw (228.0bp,94.599bp) node {$1$};
\draw [black,->] (node_0) ..controls (554.9bp,446.22bp) and (470.11bp,369.09bp) .. (node_9);
\draw (522.0bp,398.6bp) node {$4$};
\draw [black,->] (node_7) ..controls (633.1bp,21.584bp) and (642.0bp,21.076bp) .. (642.0bp,19.599bp) .. controls (642.0bp,18.7bp) and (638.69bp,18.16bp) .. (node_7);
\draw (651.0bp,19.599bp) node {$4$};
\draw [black,->] (node_7) ..controls (637.17bp,40.277bp) and (660.0bp,37.363bp) .. (660.0bp,19.599bp) .. controls (660.0bp,4.8888bp) and (644.35bp,0.36183bp) .. (node_7);
\draw (669.0bp,19.599bp) node {$3$};
\draw [black,->] (node_7) ..controls (645.01bp,42.635bp) and (678.0bp,39.795bp) .. (678.0bp,19.599bp) .. controls (678.0bp,1.9285bp) and (652.74bp,-2.4547bp) .. (node_7);
\draw (687.0bp,19.599bp) node {$2$};
\draw [black,->] (node_7) ..controls (653.0bp,44.922bp) and (696.0bp,42.03bp) .. (696.0bp,19.599bp) .. controls (696.0bp,-0.64079bp) and (660.99bp,-4.9717bp) .. (node_7);
\draw (705.0bp,19.599bp) node {$1$};
\draw [black,->] (node_11) ..controls (170.65bp,327.01bp) and (180.0bp,326.09bp) .. (180.0bp,322.6bp) .. controls (180.0bp,320.53bp) and (176.7bp,319.36bp) .. (node_11);
\draw (189.0bp,322.6bp) node {$2$};
\draw [black,->] (node_11) ..controls (176.34bp,345.88bp) and (198.0bp,343.01bp) .. (198.0bp,322.6bp) .. controls (198.0bp,305.7bp) and (183.14bp,300.82bp) .. (node_11);
\draw (207.0bp,322.6bp) node {$1$};
\draw [black,->] (node_13) ..controls (428.3bp,139.95bp) and (442.23bp,80.016bp) .. (413.0bp,46.599bp) .. controls (397.77bp,29.195bp) and (332.49bp,22.937bp) .. (node_15);
\draw (439.0bp,94.599bp) node {$1$};
\draw [black,->] (node_8) ..controls (705.01bp,75.249bp) and (699.47bp,57.239bp) .. (688.0bp,46.599bp) .. controls (673.18bp,32.849bp) and (650.94bp,26.038bp) .. (node_7);
\draw (710.0bp,56.599bp) node {$3$};
\draw [black,->] (node_8) ..controls (685.51bp,80.894bp) and (674.76bp,73.894bp) .. (666.0bp,66.599bp) .. controls (656.58bp,58.756bp) and (656.16bp,54.751bp) .. (647.0bp,46.599bp) .. controls (642.21bp,42.34bp) and (636.8bp,38.04bp) .. (node_7);
\draw (675.0bp,56.599bp) node {$2$};
\draw [black,->] (node_1) ..controls (381.98bp,75.031bp) and (412.34bp,56.367bp) .. (441.0bp,46.599bp) .. controls (491.57bp,29.366bp) and (554.42bp,23.034bp) .. (node_7);
\draw (450.0bp,56.599bp) node {$1$};
\end{tikzpicture}
}}
\caption{Right Cayley graph for the promotion Markov chain of Example~\ref{example.promotion}.
\label{figure.right Cayley example}}
\end{figure}
\end{example}
We prove some useful properties of the right Cayley graph of the semigroup $S$ generated by $\partial_i$ for
$1\leqslant i \leqslant n$.
\begin{proposition}
\label{proposition.length reduced}
Any element in $K(S)$ can be written as $\partial_{w_1} \cdots \partial_{w_{n-1}}$, where
$w_1,\ldots,w_{n-1} \in \{1,2,\ldots,n\}$ are distinct. In particular, the length of any reduced word for the elements
in $K(S)$ is less than $n$.
\end{proposition}
\begin{proof}
Each element in $K(S)$ corresponds to a linear extension in $\mathcal{L}(P)$. For a given $\pi \in \mathcal{L}(P)$,
we now construct a word $w_1 \ldots w_{n-1}$ with distinct letters such that $\pi=\partial_{w_1} \cdots \partial_{w_{n-1}} \pi'$
for all $\pi' \in \mathcal{L}(P)$. In particular, this means that $\partial_{w_1} \cdots \partial_{w_{n-1}} \in K(S)$.
Write $\pi = \pi_1 \ldots \pi_n$ in one-line notation and set $\pi^{(1)} = \pi$.
Construct $\pi^{(m+1)}$ from $\pi^{(m)}$ for $1\leqslant m < n$ as follows.
Set $i^{(m)}_1=1$ and then recursively find the smallest $i^{(m)}_{j+1}>i^{(m)}_j$
such that $\pi^{(m)}_{i^{(m)}_j} \prec \pi^{(m)}_{i^{(m)}_{j+1}}$ if possible. If there is no such
$i^{(m)}_{j+1}$, set $k^{(m)}=j$. Define $w_m=\pi^{(m)}_{i^{(m)}_{k^{(m)}}}$. Next construct $\pi^{(m+1)}$ from $\pi^{(m)}$
by removing $\pi^{(m)}_1$ and replacing $\pi^{(m)}_{i^{(m)}_j}$ by $\pi^{(m)}_{i^{(m)}_{j-1}}$ for
$2\leqslant j \leqslant k^{(m)}$.
Next we show that $\pi = \partial_{w_1} \cdots \partial_{w_{n-1}} \pi'$ for any $\pi'\in \mathcal{L}(P)$, proving that
$\partial_{w_1} \cdots \partial_{w_{n-1}} \in K(S)$ corresponding to the linear extension $\pi$. We will do so by
induction on $n$. For $n=2$, $P$ is either the antichain with vertex 1 incomparable to vertex 2, or the chain in which 2 is bigger than 1.
In the first case, there are two linear extensions $\pi=12$ or $21$. The algorithm determines $w=\pi_1$ and indeed
$\partial_{\pi_1}(12)=\partial_{\pi_1}(21)=\pi_1\pi_2=\pi$. In the second case, there is only one linear extension
$\pi=12$ and the algorithm determines $w=2$. Indeed $\partial_2(12)=12$.
Now assume by induction that the algorithm works for posets with strictly less than $n$ vertices.
In particular, for $\pi^{(2)}$ from the algorithm $\pi^{(2)} = \partial_{w_2}\cdots \partial_{w_{n-1}} \pi'$
for any linear extension $\pi'$ of the poset $P'$ obtained from $P$ by deleting the vertex $w_1$.
Also, by induction $w_2,\ldots,w_{n-1}$ are distinct and different from $w_1$. Note that $w_1$
is a maximal element in $P$. Hence for any linear extension $\pi'$ of $P$, we have that
$\partial_{w_2}\cdots \partial_{w_{n-1}} \pi'$ is a linear extension of $P$ such that removing the letter $w_1$
results in $\pi^{(2)}$. Let $\sigma \in \mathcal{L}(P)$ be such a linear extension, that is, $\sigma \setminus w_1 = \pi^{(2)}$.
Consider the saturated chain $\pi_1=a_1 \prec a_2 \prec \cdots \prec a_k = w_1$ in $P$ from $\pi_1$ to $w_1$.
Such a chain exists by the definition of $w_1$.
In $\pi^{(2)}$ and hence also in $\sigma$ the letter $a_{k-1}$ is the rightmost letter that is covered in $P$ by $w_1$.
This is because, by the algorithm used to construct $\pi^{(2)}$, the letter $a_{k-1}$ replaced the letter $a_k=w_1$ in $\pi$.
In $\sigma$, the letter $w_1$ must sit to the right of the letter $a_{k-1}$ since $a_{k-1} \prec w_1$.
Hence, when acting with $\partial_{w_1}$ on $\sigma$, the letter $w_1$ interchanges with all letters to its left
until it reaches the letter $a_{k-1}$. By the action of $\tau_i$ as in~\eqref{equation.tau}, the letter $w_1$ will stay
in the position where $a_{k-1}$ was in $\sigma$ and then the letter $a_{k-1}$ starts moving left. The letter $a_{k-2}$
is the rightmost letter in $\sigma$ that is covered by $a_{k-1}$ in $P$, again by the definition of the algorithm.
The letter $a_{k-1}$ replaces the letter $a_{k-2}$ and $a_{k-2}$ starts moving left and so on. Finally, the letter
$a_1=\pi_1$ moves into first position. Hence $\partial_{w_1} \sigma = \pi$. This proves the claim.
\end{proof}
\begin{example}
Take the poset from Example~\ref{example.promotion} and the linear extension $\pi = 1243$. Set $\pi^{(1)}=\pi$.
The first sequence of increasing entries in $\pi^{(1)}$ is given by the underlined entries
\[
\textcolor{darkred}{\underline{1}}2 \textcolor{darkred}{\underline{4}}3.
\]
Hence $w_1=4$ and $\pi^{(2)}=213$. The next sequence of increasing entries is given by
\[
\textcolor{darkred}{\underline{2}} 1 \textcolor{darkred}{\underline{3}}.
\]
Hence $w_2=3$ and $\pi^{(3)}=12$. Next we find the increasing sequence $\textcolor{darkred}{\underline{1}}2$, so
that $w_3=1$. Indeed, comparing with Figure~\ref{figure.right Cayley example}, we see that
\[
\partial_4 \partial_3 \partial_1 = \partial_1 \partial_4 \partial_1
\]
is in $K(S)$.
Note that the above algorithm does not always give a shortest path to the ideal in the right Cayley graph.
For example, if $\pi=2143$ the algorithm gives
\[
\textcolor{darkred}{\underline{2}}1\textcolor{darkred}{\underline{4}} 3 \to
\textcolor{darkred}{\underline{1}}23 \to
\textcolor{darkred}{\underline{2}}\textcolor{darkred}{\underline{3}} \to
2,
\]
so that $w_1 w_2 w_3 = 413$. From Figure~\ref{figure.right Cayley example}, we see that
$\partial_4 \partial_1 \partial_3 = \partial_4 \partial_1$ is in $K(S)$.
\end{example}
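The word $w_1 \ldots w_{n-1}$ constructed in the proof of Proposition~\ref{proposition.length reduced} is easy to compute. The following Python sketch (an added illustration; the strict order relation of $P$ is passed as a set of pairs, and for this particular poset the covering relations are already transitively closed) follows the greedy-chain construction and reproduces the two words computed in the example above.
\begin{verbatim}
def ideal_word(pi, less):
    # Build w_1 ... w_{n-1}: repeatedly follow the greedy increasing
    # chain (in P) starting at the first entry, record its top element,
    # then contract the word as in the proof.
    pi = list(pi)
    word = []
    while len(pi) > 1:
        chain = [0]                               # positions i_1 < i_2 < ...
        for pos in range(1, len(pi)):
            if (pi[chain[-1]], pi[pos]) in less:  # chain top precedes pi_pos
                chain.append(pos)
        word.append(pi[chain[-1]])                # w_m = top of the chain
        for j in range(len(chain) - 1, 0, -1):    # shift chain entries up
            pi[chain[j]] = pi[chain[j - 1]]
        pi = pi[1:]                               # drop the first entry
    return word

less = {(1, 4), (2, 4), (2, 3)}        # strict order of the running example
print(ideal_word([1, 2, 4, 3], less))  # [4, 3, 1], as computed above
print(ideal_word([2, 1, 4, 3], less))  # [4, 1, 3], as computed above
\end{verbatim}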
The (unnormalized) stationary distribution of the promotion Markov chain was computed
in~\cite[Theorem 4.5]{AyyerKleeSchilling.2014}. Recall that our conventions are different
from~\cite{AyyerKleeSchilling.2014}.
\begin{theorem}{\cite[Theorem 4.5]{AyyerKleeSchilling.2014}}
\label{theorem.stationary promotion}
The (unnormalized) stationary distribution for the promotion Markov chain $\Psi_\pi$ for
$\pi \in \mathcal{L}(P)$ for a finite poset $P$ with $n=|P|$ is given by
\begin{equation}
\label{equation.stationary linear extension}
\Psi_\pi = \prod_{i=1}^n \frac{1}{1-(x_{\pi_1} + \cdots + x_{\pi_{i-1}})}.
\end{equation}
\end{theorem}
Despite the fact that, by Proposition~\ref{proposition.length reduced}, the right Cayley graph is shallow in the sense
that each vertex is at most $n-1$ steps away from the minimal ideal, and despite the existence of an explicit formula for the stationary
distribution, this is not enough to give a tight bound on the mixing time. The reason is that the expression for
$\Psi_\pi$ does not have the property required in Theorems~\ref{theorem.main} and~\ref{theorem.main1}
that each term of degree $\ell$ in its formal power sum expansion corresponds to a semaphore code word $s$ of
length $\ell$. Furthermore, the $\mathscr{R}$-classes (or strongly connected components) in the right Cayley graph
can become very big, especially when $P$ has a maximal element. This makes it hard to analyze the mixing time for the
promotion Markov chain in general. Here we propose a new Markov chain on linear extensions of a poset
which gives rise to an $\mathscr{R}$-trivial semigroup (where all strongly connected components have size one).
\subsubsection{A variant of the promotion Markov chain}
As before let $P$ be a poset with $n$ elements and $\mathcal{L}(P)$ the set of linear extensions of $P$.
Denote by $\mathcal{W}(P)$ the set of subwords of linear extensions in $\mathcal{L}(P)$ and set $A=[n]$.
We define a semigroup on $\mathcal{W}(P)$ as follows. Let $w \in \mathcal{W}(P)$ and $a\in A$. Then define
\begin{equation}
\label{equation.product}
wa = \begin{cases} w & \text{if $a\in w$,}\\
\mathsf{straight}(wa) & \text{if $a \not \in w$.}
\end{cases}
\end{equation}
Here $\mathsf{straight}(wa)$ is defined as follows. If $wa$ is a subword of a linear extension of $P$, then
$\mathsf{straight}(wa)=wa$. If not, write $w=w_1\ldots w_k$ and find the largest $1\leqslant j_1\leqslant k$
such that $a \prec w_{j_1}$ in $P$. Interchange $w_{j_1}$ and $a$. Repeat by finding the largest $1\leqslant j_2 <j_1$
such that $a \prec w_{j_2}$. Interchange $w_{j_2}$ and $a$. Repeat until no further element bigger than $a$ exists to the
left. The result is $\mathsf{straight}(wa)$.
\begin{example}
Take the poset $P$ of Example~\ref{example.promotion}, $w=234 \in \mathcal{W}(P)$, and $a=1$. We have
$1 \prec 4$, so $j_1=3$. Both $2$ and $3$ are incomparable to $1$, so we find $\mathsf{straight}(wa) = 2314
\in \mathcal{L}(P)$.
\end{example}
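The straightening procedure and the product \eqref{equation.product} admit an equally short implementation. The Python sketch below (added for illustration; the names are ours) appends $a$ and repeatedly interchanges it with the rightmost letter to its left that lies above it in $P$, stopping when no such letter remains; it reproduces the example just given.
\begin{verbatim}
def straighten(w, a, less):
    # straight(wa): append a, then repeatedly interchange it with the
    # rightmost letter to its left that lies above a in P.
    w = list(w) + [a]
    pos = len(w) - 1                      # current position of a
    while True:
        js = [j for j in range(pos) if (a, w[j]) in less]
        if not js:
            return tuple(w)
        j = max(js)
        w[j], w[pos] = w[pos], w[j]       # interchange a and w_j
        pos = j

def product(w, a, less):
    # Equation (product): absorb a if it already occurs in w,
    # otherwise straighten.
    return tuple(w) if a in w else straighten(w, a, less)

less = {(1, 4), (2, 4), (2, 3)}           # strict order of the running example
print(product((2, 3, 4), 1, less))        # (2, 3, 1, 4), as in the example
\end{verbatim}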
\begin{lemma}
\label{lemma.straighten}
Let $a\in A$ and $w\in \mathcal{W}(P)$ such that $a\not \in w$. Then $\mathsf{straight}(wa) \in \mathcal{W}(P)$.
\end{lemma}
\begin{proof}
Since $j_1$ is largest such that $a \prec w_{j_1}$, either $w_j \prec a$ or $w_j$ and $a$ are incomparable for
$j_1<j\leqslant k$. If $w_j \prec a$, then by transitivity we find that $w_j \prec w_{j_1}$, which contradicts the fact that
$w \in \mathcal{W}(P)$. Hence $a$ is incomparable with $w_j$ for all $j_1<j\leqslant k$.
Suppose $w_{j_1} \prec w_j$ for some $j_1<j\leqslant k$. Then again by transitivity, we have $a\prec w_j$.
This contradicts the maximality of $j_1$. Hence $w_{j_1}$ is incomparable to $w_j$ for all $j_1<j\leqslant k$.
Therefore $a w_{j_1+1} \cdots w_k w_{j_1} \in \mathcal{W}(P)$. Repeating similar arguments for the next segments
(interchanging $a$ with $w_{j_2}$ etc), we find $\mathsf{straight}(wa) \in \mathcal{W}(P)$.
\end{proof}
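As a brute-force sanity check of the lemma (an addition for illustration, reusing \texttt{straighten} from the sketch above), one can enumerate $\mathcal{W}(P)$ for the running example as the words with distinct letters that respect the order of $P$ and verify that straightening never leaves this set.
\begin{verbatim}
from itertools import permutations

less = {(1, 4), (2, 4), (2, 3)}           # strict order of the running example

def consistent(word):
    # No letter occurs before a letter that lies below it in P.
    return all((word[j], word[i]) not in less
               for i in range(len(word)) for j in range(i + 1, len(word)))

# W(P): words with distinct letters that are subwords of linear extensions.
WP = {w for k in range(1, 5) for w in permutations((1, 2, 3, 4), k)
      if consistent(w)}

assert all(straighten(w, a, less) in WP
           for w in WP for a in (1, 2, 3, 4) if a not in w)
print(len(WP), "words in W(P); straightening stays inside W(P)")
\end{verbatim}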
\begin{proposition}
The set $\mathcal{W}(P)$ together with the product defined in~\eqref{equation.product} forms a semigroup.
\end{proposition}
\begin{proof}
Note that by the proof of Lemma~\ref{lemma.straighten}, the letters in between any letters that are interchanged
by the product are incomparable to the interchanged letters. By transitivity, if there are three letters that are interchanged,
say $w_i \ldots w_j \ldots w_k$ with $w_k \prec w_j \prec w_i$, it does not matter in which order this is done: the end
result is $w_k \ldots w_j \ldots w_i$. This proves that the product is associative and hence $\mathcal{W}(P)$ is a
semigroup with the product in~\eqref{equation.product}.
\end{proof}
Let us now define $(\mathcal{W}(P),A)$ to be the semigroup with product~\eqref{equation.product} and generators
$A=[n]$.
\begin{theorem}
The semigroup $(\mathcal{W}(P),A)$ is $\mathscr{R}$-trivial.
\end{theorem}
\begin{proof}
In the product, the length of the word can either stay the same or increase. When the length stays the same, the word does
not change. This proves that $(\mathcal{W}(P),A)$ is $\mathscr{R}$-trivial.
\end{proof}
\begin{example}
The right Cayley graph of $(\mathcal{W}(P),A)$ for the poset of Example~\ref{example.promotion} is given in Figure~\ref{figure.poset}.
\end{example}
\begin{figure}
\begin{center}
\scalebox{0.25}{
\input{figure-cayley_graph}
}
\end{center}
\caption{The right Cayley graph of $(\mathcal{W}(P),A)$ for the poset of Example~\ref{example.promotion}.
\label{figure.poset}}
\end{figure}
Note that the minimal ideal of $(\mathcal{W}(P),A)$ is the set of linear extensions $\mathcal{L}(P)$ of the poset $P$.
Let $\mathcal{M}(\mathcal{W}(P),A)$ be the Markov chain on $\mathcal{L}(P)$ induced by the semigroup
$(\mathcal{W}(P),A)$. More precisely, we transition from $\pi \in \mathcal{L}(P)$ to $a\pi \in \mathcal{L}(P)$ with
probability $x_a$.
\begin{proposition}
$\mathcal{M}(\mathcal{W}(P),A)$ is ergodic.
\end{proposition}
\begin{proof}
Note that $\pi \pi'=\pi$ for all $\pi,\pi'\in\mathcal{L}(P)$. Hence the graph of the Markov chain is strongly connected
and the chain is irreducible. Furthermore, if $\pi = \pi_1\ldots \pi_n \in \mathcal{L}(P)$, then $\pi_1 \pi = \pi$, which
means the Markov chain is aperiodic.
\end{proof}
The stationary distribution for $\mathcal{M}(\mathcal{W}(P),A)$ is given by
\[
\Psi_\pi = \sum_{\substack{\sigma \in S_n\\ [\sigma]_{{\mathcal{W}(P)}} = \pi}}
\left( \prod_{i=1}^n \frac{x_{\sigma_i}}{1-\sum_{j=1}^{i-1} x_{\sigma_j}} \right)
\qquad \text{for all $\pi \in \mathcal{L}(P)$.}
\]
\begin{theorem}
\label{theorem.expected new}
The expected value $E[\tau]$ for $\mathcal{M}(\mathcal{W}(P),A)$ is bounded above by $n \ln(n) + n \gamma$.
\end{theorem}
\begin{proof}
For a word $w \in \mathcal{W}(P)$, its length $|w|=k$ is bounded by $0\leqslant k \leqslant n$.
For a word of length $|w|=k$, there are $n-k$ transition arrows in $\mathsf{RCay}(\mathcal{W}(P),A)$
originating at $w$, given by all the letters that do not appear in $w$. Hence by the same arguments as for the
Tsetlin library $E[\tau] \leqslant n \ln(n) + n \gamma$.
\end{proof}
\begin{remark}
Note that the Markov chain $\mathcal{M}(\mathcal{W}(P),A)$ is not identical to the promotion Markov chain.
For example, left multiplication by 4 on $2143$ in $(\mathcal{W}(P),\{1,2,3,4\})$ for the poset in
Example~\ref{example.promotion} yields $2143$, whereas we see from Figure~\ref{figure.promotion}
that in the promotion Markov chain $2143$ goes to $1243$ under $\partial_4$. The full Markov chain transition
diagram is given in Figure~\ref{figure.markov new}.
\end{remark}
Theorem~\ref{theorem.expected new} shows that the mixing time for $\mathcal{M}(\mathcal{W}(P),[n])$ is of order
$O(n \log n)$. Of course, this does not take the computational complexity of computing the
product~\eqref{equation.product} into account. For a word of length $k$, this involves up to $k$ swaps.
\begin{figure}
\begin{tikzpicture}[>=latex,line join=bevel,]
\node (node_0) at (179.0bp,309.5bp) [draw,draw=none] {$\mathtt{(1, 2, 4, 3)}$};
\node (node_1) at (78.0bp,235.5bp) [draw,draw=none] {$\mathtt{(2, 1, 4, 3)}$};
\node (node_2) at (24.0bp,160.5bp) [draw,draw=none] {$\mathtt{(2, 3, 1, 4)}$};
\node (node_3) at (133.0bp,84.5bp) [draw,draw=none] {$\mathtt{(2, 1, 3, 4)}$};
\node (node_4) at (133.0bp,9.5bp) [draw,draw=none] {$\mathtt{(1, 2, 3, 4)}$};
\draw [yellow,->] (node_0) ..controls (212.86bp,313.04bp) and (221.0bp,311.93bp) .. (221.0bp,309.5bp) .. controls (221.0bp,308.02bp) and (217.98bp,307.03bp) .. (node_0);
\definecolor{strokecol}{rgb}{0.0,0.0,0.0};
\pgfsetstrokecolor{strokecol}
\draw (230.0bp,309.5bp) node {$4$};
\draw [blue,->] (node_0) ..controls (209.46bp,332.54bp) and (239.0bp,329.7bp) .. (239.0bp,309.5bp) .. controls (239.0bp,292.07bp) and (216.98bp,287.57bp) .. (node_0);
\draw (248.0bp,309.5bp) node {$1$};
\draw [red,->] (node_0) ..controls (142.83bp,297.53bp) and (128.36bp,291.03bp) .. (117.0bp,282.5bp) .. controls (106.08bp,274.29bp) and (96.274bp,262.44bp) .. (node_1);
\draw (126.0bp,272.5bp) node {$2$};
\draw [green,->] (node_0) ..controls (180.69bp,288.18bp) and (182.38bp,264.62bp) .. (183.0bp,244.5bp) .. controls (184.86bp,184.06bp) and (174.31bp,167.87bp) .. (150.0bp,112.5bp) .. controls (148.4bp,108.85bp) and (146.4bp,105.1bp) .. (node_3);
\draw (191.0bp,198.5bp) node {$3$};
\draw [blue,->] (node_1) ..controls (111.86bp,248.66bp) and (124.65bp,254.95bp) .. (135.0bp,262.5bp) .. controls (146.87bp,271.16bp) and (158.16bp,283.33bp) .. (node_0);
\draw (164.0bp,272.5bp) node {$1$};
\draw [yellow,->] (node_1) ..controls (111.86bp,239.04bp) and (120.0bp,237.93bp) .. (120.0bp,235.5bp) .. controls (120.0bp,234.02bp) and (116.98bp,233.03bp) .. (node_1);
\draw (129.0bp,235.5bp) node {$4$};
\draw [red,->] (node_1) ..controls (108.46bp,258.54bp) and (138.0bp,255.7bp) .. (138.0bp,235.5bp) .. controls (138.0bp,218.07bp) and (115.98bp,213.57bp) .. (node_1);
\draw (147.0bp,235.5bp) node {$2$};
\draw [green,->] (node_1) ..controls (63.098bp,214.8bp) and (47.065bp,192.54bp) .. (node_2);
\draw (67.0bp,198.5bp) node {$3$};
\draw [green,->] (node_2) ..controls (57.864bp,164.24bp) and (66.0bp,163.06bp) .. (66.0bp,160.5bp) .. controls (66.0bp,158.94bp) and (62.979bp,157.89bp) .. (node_2);
\draw (75.0bp,160.5bp) node {$3$};
\draw [red,->] (node_2) ..controls (54.459bp,183.78bp) and (84.0bp,180.91bp) .. (84.0bp,160.5bp) .. controls (84.0bp,142.8bp) and (61.785bp,138.29bp) .. (node_2);
\draw (93.0bp,160.5bp) node {$2$};
\draw [yellow,->] (node_2) ..controls (39.356bp,141.3bp) and (54.749bp,123.86bp) .. (71.0bp,112.5bp) .. controls (79.712bp,106.41bp) and (89.962bp,101.14bp) .. (node_3);
\draw (80.0bp,122.5bp) node {$4$};
\draw [blue,->] (node_2) ..controls (39.577bp,130.73bp) and (69.902bp,75.769bp) .. (104.0bp,36.5bp) .. controls (107.43bp,32.552bp) and (111.38bp,28.569bp) .. (node_4);
\draw (83.0bp,84.5bp) node {$1$};
\draw [yellow,->] (node_3) ..controls (136.5bp,109.12bp) and (139.38bp,143.9bp) .. (129.0bp,170.5bp) .. controls (121.56bp,189.57bp) and (106.56bp,207.58bp) .. (node_1);
\draw (143.0bp,160.5bp) node {$4$};
\draw [green,->] (node_3) ..controls (118.93bp,103.85bp) and (104.67bp,121.37bp) .. (89.0bp,132.5bp) .. controls (79.491bp,139.26bp) and (68.095bp,144.85bp) .. (node_2);
\draw (119.0bp,122.5bp) node {$3$};
\draw [red,->] (node_3) ..controls (166.86bp,91.982bp) and (175.0bp,89.621bp) .. (175.0bp,84.5bp) .. controls (175.0bp,81.379bp) and (171.98bp,79.284bp) .. (node_3);
\draw (184.0bp,84.5bp) node {$2$};
\draw [blue,->] (node_3) ..controls (133.0bp,64.368bp) and (133.0bp,43.594bp) .. (node_4);
\draw (142.0bp,46.5bp) node {$1$};
\draw [yellow,->] (node_4) ..controls (190.83bp,21.837bp) and (248.0bp,41.476bp) .. (248.0bp,84.5bp) .. controls (248.0bp,235.5bp) and (248.0bp,235.5bp) .. (248.0bp,235.5bp) .. controls (248.0bp,262.0bp) and (224.03bp,282.97bp) .. (node_0);
\draw (257.0bp,160.5bp) node {$4$};
\draw [green,->] (node_4) ..controls (145.33bp,23.178bp) and (149.85bp,29.724bp) .. (152.0bp,36.5bp) .. controls (154.68bp,44.974bp) and (154.61bp,48.003bp) .. (152.0bp,56.5bp) .. controls (150.83bp,60.319bp) and (148.94bp,64.084bp) .. (node_3);
\draw (162.0bp,46.5bp) node {$3$};
\draw [red,->] (node_4) ..controls (118.42bp,23.304bp) and (113.43bp,29.664bp) .. (111.0bp,36.5bp) .. controls (108.02bp,44.875bp) and (108.09bp,48.099bp) .. (111.0bp,56.5bp) .. controls (112.37bp,60.462bp) and (114.56bp,64.29bp) .. (node_3);
\draw (120.0bp,46.5bp) node {$2$};
\draw [blue,->] (node_4) ..controls (166.86bp,16.588bp) and (175.0bp,14.352bp) .. (175.0bp,9.5bp) .. controls (175.0bp,6.5436bp) and (171.98bp,4.5583bp) .. (node_4);
\draw (184.0bp,9.5bp) node {$1$};
\end{tikzpicture}
\caption{The Markov chain $\mathcal{M}(\mathcal{W}(P),[4])$ for the poset of Example~\ref{example.promotion}.
\label{figure.markov new}}
\end{figure}
\bibliographystyle{plain}
\section{Introduction}\label{sec1}\setcounter{equation}{0}
The convective Brinkman-Forchheimer (CBF) equations describe the motion of an incompressible viscous fluid through a rigid, homogeneous, isotropic, porous medium. Let us first provide a mathematical formulation of the CBF equations. Let $\mathcal{O}\subset\mathbb{R}^n$ ($n=2,3$) be a bounded domain with a smooth boundary $\partial\mathcal{O}$. Let $\mathbf{X}_t(x) \in \mathbb{R}^n$ denote the velocity field at time $t\in[0,T]$ and position $x\in\mathcal{O}$, $p_t(x)\in\mathbb{R}$ the pressure field, and $\mathbf{f}_t(x)\in\mathbb{R}^n$ an external forcing. The CBF equations are given by
\begin{equation}\label{1}
\left\{
\begin{aligned}
\frac{\partial \mathbf{X}_t}{\partial t}-\mu \Delta\mathbf{X}_t+(\mathbf{X}_t\cdot\nabla)\mathbf{X}_t+\alpha\mathbf{X}_t+\beta|\mathbf{X}_t|^{r-1}\mathbf{X}_t+\nabla p_t&=\mathbf{f}_t, \ \text{ in } \ \mathcal{O}\times(0,T), \\ \nabla\cdot\mathbf{X}_t&=0, \ \text{ in } \ \mathcal{O}\times(0,T), \\
\mathbf{X}_t&=\mathbf{0},\ \text{ on } \ \partial\mathcal{O}\times(0,T), \\
\mathbf{X}_0&=\mathbf{x}, \ \text{ in } \ \mathcal{O},
\end{aligned}
\right.
\end{equation}
where the constant $\mu>0$ represents the Brinkman coefficient (effective viscosity), $\alpha>0$ stands for the Darcy coefficient (permeability of the porous medium) and $\beta>0$ denotes the Forchheimer coefficient (proportional to the porosity of the material). In order to obtain the uniqueness of the pressure $p$, one can also impose the condition $ \int_{\mathcal{O}}p_t(x)\d x=0, $ for $t\in (0,T)$. The absorption exponent satisfies $r\in[1,\infty)$, and the case $r=3$ is known as the critical exponent. Note that for the case $\alpha=\beta=0$, we recover the classical 3D Navier-Stokes equations (see \cite{GGP,OAL,JCR3,Te,Te1}, etc). The works \cite{SNA,CLF,KT2,MTM7}, etc discuss the global solvability results (existence and uniqueness of weak as well as strong solutions) for the deterministic CBF equations in bounded domains. As in the case of the classical 3D Navier-Stokes equations, the existence of a unique global strong solution of the CBF equations \eqref{1} for $n=3$ and $r\in[1,3)$ is an open problem.
In the stochastic counterpart, the existence and uniqueness of strong solutions to the stochastic 3D tamed Navier-Stokes equations on bounded domains with Dirichlet boundary conditions is established in \cite{MRTZ}. The authors in \cite{LHGH1} obtained the existence of martingale solutions for the stochastic 3D Navier-Stokes equations with nonlinear damping. The existence of a pathwise unique strong solution satisfying the energy equality (It\^o's formula) to the stochastic convective Brinkman-Forchheimer (SCBF) equations perturbed by multiplicative Gaussian noise is proved in \cite{MTM8}. The author exploited the monotonicity and hemicontinuity properties of the linear and nonlinear operators as well as a stochastic generalization of the Minty-Browder technique in the proofs. The It\^o formula (energy equality) is established by using the fact that functions defined on smooth bounded domains can be approximated by elements of eigenspaces of linear operators (e.g., the Laplacian or the Stokes operator) in such a way that the approximations are bounded and converge in both Sobolev and Lebesgue spaces simultaneously (such a construction is available in \cite{CLF}). Making use of the exponential stability of strong solutions, the existence of a unique ergodic and strongly mixing invariant measure for the SCBF equations subject to multiplicative Gaussian noise is also established in \cite{MTM8}. The works \cite{HBAM,ZBGD,WLMR,WL,MRXZ1,MRTZ1}, etc discuss various results on the stochastic tamed 3D Navier-Stokes equations and related models on periodic domains as well as on the whole space.
Multiscale systems involve slow and fast components in mathematical models, and they have a wide range of applications in areas like signal processing, climate dynamics, material science, molecular dynamics, mathematical finance, fluid dynamics, etc. The theory of the averaging principle for multiscale systems has been well developed over the past several years and has extensive applications in science, technology and engineering (cf. \cite{RBJE,WEBE,FW1,EHVK,EAMEL,MMMC,FWTT}, etc and references therein). The averaging principle proposes that the slow component of a slow-fast system may be approximated by a simpler system obtained by averaging over the fast motion. For deterministic systems, an averaging principle was first investigated by Bogoliubov and Mitropolsky in \cite{NNYA}, and for stochastic differential equations, an averaging principle was first studied by Khasminskii in \cite{RZK}. Several works in the literature develop the theory of averaging principles for stochastic partial differential equations (cf. \cite{CEB,CEB1,SC,SC1,SC2,SC3,ZDXS,HFJD,HFJL,HFLW,HFLW2,HFLW1,WWAJ,JXJL}, etc and references therein). An averaging principle in which the slow component is the two dimensional stochastic Navier-Stokes equations and the fast component is a stochastic reaction-diffusion equation, using the classical Khasminskii approach based on time discretization, is established in \cite{SLXS}. The authors in \cite{WLMRX} established a strong averaging principle for slow-fast stochastic partial differential equations with locally monotone coefficients, which includes systems like the stochastic porous medium equation, the stochastic Burgers type equation, the stochastic $p$-Laplace equation and the stochastic 2D Navier-Stokes equation, etc. A strong averaging principle for the stochastic convective Brinkman-Forchheimer (SCBF) equations perturbed by Gaussian noise, in which the fast time scale component is governed by a stochastic reaction-diffusion equation with damping driven by multiplicative Gaussian noise, has been obtained in \cite{MTM11}.
The theory of large deviations, which concerns the asymptotic behavior of remote tails of sequences of probability distributions (cf. \cite{FW,Va}), is one of the classical areas in probability theory with numerous deep developments and a variety of applications in fields like queuing theory, statistics, finance, engineering, etc. The theory of large deviations explains the probabilities of rare events that are exponentially small as a function of some parameter. In the case of stochastic differential equations, this parameter can be regarded as the amplitude of the noise perturbing a dynamical system. Wentzell-Freidlin type large deviation estimates for a class of infinite dimensional stochastic differential equations are developed in the works \cite{BD1,chow,DaZ,KX}, etc. Large deviation principles for the 2D stochastic Navier-Stokes equations driven by Gaussian noise are established in the works \cite{ICAM,MGo,SSSP}, etc. A Wentzell-Freidlin type large deviation principle for the stochastic tamed 3D Navier-Stokes equations driven by multiplicative Gaussian noise in the whole space or on a torus is established in \cite{MRTZ1}. Small time large deviation principles for the stochastic 3D tamed Navier-Stokes equations in bounded domains are established in \cite{RWW}. A large deviation principle for the 3D tamed Navier-Stokes equations driven by multiplicative L\'evy noise in periodic domains is established in \cite{HGHL}. The author in \cite{MTM10} obtained the Wentzell-Freidlin large deviation principle for the two and three dimensional SCBF equations in bounded domains using a weak convergence approach developed by Budhiraja and Dupuis (see \cite{BD1,BD2}). The large deviations for short time as well as the exponential estimates on certain exit times associated with the solution trajectory of the SCBF equations are also established in the same work. It seems to the author that some of the LDP results available in the literature for the 3D stochastic tamed Navier-Stokes equations in bounded domains are not valid due to the technical difficulty described in the works \cite{CLF,KT2,MTM7}, etc.
The large deviation theory for multi-scale systems is also well studied in the literature (see for instance, \cite{PDKS,FW,HJK,RKLP,RLi,TJL,LPo,AAP,KSp,AYV1}, etc and references therein). A large deviation principle for a class of stochastic reaction-diffusion partial differential equations with slow-fast components is derived in the work \cite{WWAJ1}. The authors in \cite{WHMS} studied a large deviation principle for a system of stochastic reaction-diffusion equations with a separation of fast and slow components, and small noise in the slow component, by using the weak convergence method in infinite dimensions. A Wentzell-Freidlin type large deviation principle for stochastic partial differential equations with slow and fast time-scales, where the slow component is a one-dimensional stochastic Burgers' equation with small noise and the fast component is a stochastic reaction-diffusion equation, is established in \cite{XSRW}. In this work, we establish a Wentzell-Freidlin type large deviation principle for the two-time-scale stochastic partial differential equations, where the slow component is the SCBF equations in two and three dimensional bounded domains ($r\in[1,\infty),$ for $n=2$ and $r\in[3,\infty),$ for $n=3$ with $2\beta\mu>1$ for $r=3$) perturbed by small multiplicative Gaussian noise and the fast component is a stochastic reaction-diffusion equation with damping. We use a variational method (based on a weak convergence approach) developed by Budhiraja and Dupuis (see \cite{BD1,BD2}), Khasminskii's time discretization approach and stopping time arguments to obtain the LDP for the two-time-scale SCBF equations. Furthermore, we remark that the results obtained in this work are also true for the two dimensional stochastic Navier-Stokes equations.
The main aim of this work is to study a Wentzell-Freidlin type large deviation principle (LDP) for the following two-time-scale stochastic convective Brinkman-Forchheimer (SCBF) equations in two and three dimensional bounded domains. The two-time-scale SCBF equations are given by
\begin{equation}\label{1p1}
\left\{\begin{aligned}
\d\mathbf{X}^{\varepsilon,\delta}_t&=[\mu\Delta\mathbf{X}^{\varepsilon,\delta}_t-(\mathbf{X}^{\varepsilon,\delta}_t\cdot\nabla)\mathbf{X}^{\varepsilon,\delta}_t-\alpha\mathbf{X}^{\varepsilon,\delta}_t-\beta|\mathbf{X}^{\varepsilon,\delta}_t|^{r-1}\mathbf{X}^{\varepsilon,\delta}_t+\mathrm{F}(\mathbf{X}^{\varepsilon,\delta}_t,\mathbf{Y}^{\varepsilon,\delta}_t)]\d t\\&\quad-\nabla p^{\varepsilon,\delta}_t\d t+{\sqrt{\varepsilon}}\sigma_1(\mathbf{X}^{\varepsilon,\delta}_t)\mathrm{Q}_1^{1/2}\d\mathrm{W}_t,\\
\d \mathbf{Y}^{\varepsilon,\delta}_t&=\frac{1}{\delta}[\mu\Delta \mathbf{Y}^{\varepsilon,\delta}_t-\alpha \mathbf{Y}^{\varepsilon,\delta}_t-\beta|\mathbf{Y}^{\varepsilon,\delta}_t|^{r-1}\mathbf{Y}^{\varepsilon,\delta}_t+\mathrm{G}(\mathbf{X}^{\varepsilon,\delta}_t,\mathbf{Y}^{\varepsilon,\delta}_t)]\d t\\&\quad+\frac{1}{\sqrt{\delta}}\sigma_2(\mathbf{X}^{\varepsilon,\delta}_t,\mathbf{Y}^{\varepsilon,\delta}_t)\mathrm{Q}_2^{1/2}\d\mathrm{W}_t,\\
\nabla\cdot\mathbf{X}^{\varepsilon,\delta}_t&=0,\ \nabla\cdot\mathbf{Y}^{\varepsilon,\delta}_t=0,\\
\mathbf{X}^{\varepsilon,\delta}_t\big|_{\partial\mathcal{O}}&=\mathbf{Y}^{\varepsilon,\delta}_t\big|_{\partial\mathcal{O}}=\mathbf{0},\\
\mathbf{X}^{\varepsilon,\delta}_0&=\mathbf{x},\ \mathbf{Y}^{\varepsilon,\delta}_0=\mathbf{y},
\end{aligned}
\right.
\end{equation}
for $t\in(0,T)$, where $\varepsilon>0$, $\delta=\delta(\varepsilon)>0$ (with $\delta\to 0$ and $\frac{\delta}{\varepsilon}\to 0$ as $\varepsilon\to 0$) are small parameters describing the ratio of the time scales of the slow component $\mathbf{X}^{\varepsilon,\delta}_t$ and the fast component $\mathbf{Y}^{\varepsilon,\delta}_t$, $\mathrm{F},\mathrm{G},\sigma_1,\sigma_2$ are appropriate functions, and $\mathrm{W}_t$ is a Hilbert space valued standard cylindrical Wiener process on a complete probability space $(\Omega,\mathscr{F},\mathbb{P})$ with filtration $\{\mathscr{F}_t\}_{t\geq 0}$. The system \eqref{1p1} can be considered as stochastic convective Brinkman-Forchheimer equations, whose drift coefficient is coupled with a stochastic perturbation $\mathbf{Y}^{\varepsilon,\delta}_t$, which can be considered as the dramatically varying temperature (with damping) in the system.
The rest of the paper is organized as follows. In the next section, we provide some functional spaces as well as the hypotheses satisfied by the functions $\mathrm{F},\mathrm{G},\sigma_1,\sigma_2$ needed to obtain the global solvability of the system \eqref{1p1}. An abstract formulation of the two-time-scale SCBF system \eqref{1p1} is given in section \ref{sec4}, where we also discuss the existence and uniqueness of a pathwise strong solution to the system \eqref{1p1} (Theorem \ref{exis}). A Wentzell-Freidlin type large deviation principle for the two-time-scale SCBF system is established in section \ref{se4} (Theorem \ref{thm4.14}). The results are obtained by using a variational method (based on a weak convergence approach) developed by Budhiraja and Dupuis, Khasminskii's time discretization approach and stopping time arguments (Theorems \ref{compact} and \ref{weak}). Moreover, we deduce that the results obtained in this work are also true for the two dimensional stochastic Navier-Stokes equations (Remark \ref{rem4.22}).
\section{Mathematical Formulation}\label{sec2}\setcounter{equation}{0}
In this section, we describe the necessary function spaces and the hypotheses satisfied by the functions $\mathrm{F},\mathrm{G}$ and the noise coefficients $\sigma_1,\sigma_2$ needed to obtain the global solvability results for the coupled SCBF equations \eqref{1p1}.
\subsection{Function spaces}\label{sub2.1} Let $\mathrm{C}_0^{\infty}(\mathcal{O};\mathbb{R}^n)$ denote the space of all infinitely differentiable functions ($\mathbb{R}^n$-valued) with compact support in $\mathcal{O}\subset\mathbb{R}^n$. We define
\begin{align*}
\mathcal{V}&:=\{\mathbf{X}\in\mathrm{C}_0^{\infty}(\mathcal{O},\mathbb{R}^n):\nabla\cdot\mathbf{X}=0\},\\
\mathbb{H}&:=\text{the closure of }\ \mathcal{V} \ \text{ in the Lebesgue space } \mathbb{L}^2(\mathcal{O})=\mathrm{L}^2(\mathcal{O};\mathbb{R}^n),\\
\mathbb{V}&:=\text{the closure of }\ \mathcal{V} \ \text{ in the Sobolev space } \mathbb{H}_0^1(\mathcal{O})=\mathrm{H}_0^1(\mathcal{O};\mathbb{R}^n),\\
\widetilde{\mathbb{L}}^{p}&:=\text{the closure of }\ \mathcal{V} \ \text{ in the Lebesgue space } \mathbb{L}^p(\mathcal{O})=\mathrm{L}^p(\mathcal{O};\mathbb{R}^n),
\end{align*}
for $p\in(2,\infty)$. Then under some smoothness assumptions on the boundary, we characterize the spaces $\mathbb{H}$, $\mathbb{V}$ and $\widetilde{\mathbb{L}}^p$ as
$
\mathbb{H}=\{\mathbf{X}\in\mathbb{L}^2(\mathcal{O}):\nabla\cdot\mathbf{X}=0,\mathbf{X}\cdot\mathbf{n}\big|_{\partial\mathcal{O}}=0\}$, with norm $\|\mathbf{X}\|_{\mathbb{H}}^2:=\int_{\mathcal{O}}|\mathbf{X}(x)|^2\d x,
$
where $\mathbf{n}$ is the outward normal to $\partial\mathcal{O}$,
$
\mathbb{V}=\{\mathbf{X}\in\mathbb{H}_0^1(\mathcal{O}):\nabla\cdot\mathbf{X}=0\},$ with norm $ \|\mathbf{X}\|_{\mathbb{V}}^2:=\int_{\mathcal{O}}|\nabla\mathbf{X}(x)|^2\d x,
$ and $\widetilde{\mathbb{L}}^p=\{\mathbf{X}\in\mathbb{L}^p(\mathcal{O}):\nabla\cdot\mathbf{X}=0, \mathbf{X}\cdot\mathbf{n}\big|_{\partial\mathcal{O}}=0\},$ with norm $\|\mathbf{X}\|_{\widetilde{\mathbb{L}}^p}^p=\int_{\mathcal{O}}|\mathbf{X}(x)|^p\d x$, respectively.
Let $(\cdot,\cdot)$ denote the inner product in the Hilbert space $\mathbb{H}$ and let $\langle \cdot,\cdot\rangle $ denote the induced duality between the spaces $\mathbb{V}$ and its dual $\mathbb{V}'$ as well as $\widetilde{\mathbb{L}}^p$ and its dual $\widetilde{\mathbb{L}}^{p'}$, where $\frac{1}{p}+\frac{1}{p'}=1$. Note that $\mathbb{H}$ can be identified with its dual $\mathbb{H}'$. We endow the space $\mathbb{V}\cap\widetilde{\mathbb{L}}^{p}$ with the norm $\|\mathbf{X}\|_{\mathbb{V}}+\|\mathbf{X}\|_{\widetilde{\mathbb{L}}^{p}},$ for $\mathbf{X}\in\mathbb{V}\cap\widetilde{\mathbb{L}}^p$ and its dual $\mathbb{V}'+\widetilde{\mathbb{L}}^{p'}$ with the norm $$\inf\left\{\max\left(\|\mathbf{Y}_1\|_{\mathbb{V}'},\|\mathbf{Y}_2\|_{\widetilde{\mathbb{L}}^{p'}}\right):\mathbf{Y}=\mathbf{Y}_1+\mathbf{Y}_2, \ \mathbf{Y}_1\in\mathbb{V}', \ \mathbf{Y}_2\in\widetilde{\mathbb{L}}^{p'}\right\}.$$ Furthermore, we have the continuous embedding $\mathbb{V}\cap\widetilde{\mathbb{L}}^p\hookrightarrow\mathbb{H}\hookrightarrow\mathbb{V}'+\widetilde{\mathbb{L}}^{p'}$.
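For the reader's convenience, let us sketch why the second embedding holds (we use only H\"older's inequality on the bounded domain $\mathcal{O}$ and the identification $\mathbb{H}\cong\mathbb{H}'$ mentioned above). Any $\mathbf{Y}\in\mathbb{H}$ acts on $\mathbb{V}\cap\widetilde{\mathbb{L}}^p$ through the inner product and
\begin{align*}
|(\mathbf{Y},\mathbf{X})|\leq\|\mathbf{Y}\|_{\mathbb{H}}\|\mathbf{X}\|_{\mathbb{H}}\leq|\mathcal{O}|^{\frac{p-2}{2p}}\|\mathbf{Y}\|_{\mathbb{H}}\|\mathbf{X}\|_{\widetilde{\mathbb{L}}^{p}}\leq|\mathcal{O}|^{\frac{p-2}{2p}}\|\mathbf{Y}\|_{\mathbb{H}}\left(\|\mathbf{X}\|_{\mathbb{V}}+\|\mathbf{X}\|_{\widetilde{\mathbb{L}}^{p}}\right),
\end{align*}
for all $\mathbf{X}\in\mathbb{V}\cap\widetilde{\mathbb{L}}^p$, so that $\mathbf{Y}$ defines a continuous linear functional on $\mathbb{V}\cap\widetilde{\mathbb{L}}^p$ and hence, by the standard duality between intersections and sums of Banach spaces, an element of $\mathbb{V}'+\widetilde{\mathbb{L}}^{p'}$ with $\|\mathbf{Y}\|_{\mathbb{V}'+\widetilde{\mathbb{L}}^{p'}}\leq|\mathcal{O}|^{\frac{p-2}{2p}}\|\mathbf{Y}\|_{\mathbb{H}}$. The first embedding follows from the same H\"older estimate, since $\|\mathbf{X}\|_{\mathbb{H}}\leq|\mathcal{O}|^{\frac{p-2}{2p}}\|\mathbf{X}\|_{\widetilde{\mathbb{L}}^p}$.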
\subsection{Linear operator}\label{sub2.2}
Let us define
\begin{equation*}
\left\{
\begin{aligned}
\mathrm{A}\mathbf{X}:&=-\mathrm{P}_{\mathbb{H}}\Delta\mathbf{X},\;\mathbf{X}\in\mathrm{D}(\mathrm{A}),\\ \mathrm{D}(\mathrm{A}):&=\mathbb{V}\cap\mathbb{H}^{2}(\mathcal{O}).
\end{aligned}
\right.
\end{equation*}
It can be easily seen that the operator $\mathrm{A}$ is a non-negative self-adjoint operator in $\mathbb{H}$ with $\mathbb{V}=\mathrm{D}(\mathrm{A}^{1/2})$ and \begin{align}\label{2.7a}\langle \mathrm{A}\mathbf{X},\mathbf{X}\rangle =\|\mathbf{X}\|_{\mathbb{V}}^2,\ \textrm{ for all }\ \mathbf{X}\in\mathbb{V}, \ \text{ so that }\ \|\mathrm{A}\mathbf{X}\|_{\mathbb{V}'}\leq \|\mathbf{X}\|_{\mathbb{V}}.\end{align}
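For completeness, let us indicate how the bound on $\|\mathrm{A}\mathbf{X}\|_{\mathbb{V}'}$ in \eqref{2.7a} can be obtained (here $\mathrm{A}$ is understood through its extension from $\mathbb{V}$ to $\mathbb{V}'$ defined by $\langle\mathrm{A}\mathbf{X},\mathbf{Y}\rangle:=(\nabla\mathbf{X},\nabla\mathbf{Y})$, for $\mathbf{X},\mathbf{Y}\in\mathbb{V}$). By the Cauchy-Schwarz inequality,
\begin{align*}
|\langle\mathrm{A}\mathbf{X},\mathbf{Y}\rangle|=|(\nabla\mathbf{X},\nabla\mathbf{Y})|\leq\|\mathbf{X}\|_{\mathbb{V}}\|\mathbf{Y}\|_{\mathbb{V}},\ \text{ for all }\ \mathbf{Y}\in\mathbb{V},
\end{align*}
so that $\|\mathrm{A}\mathbf{X}\|_{\mathbb{V}'}=\sup\limits_{\|\mathbf{Y}\|_{\mathbb{V}}\leq1}|\langle\mathrm{A}\mathbf{X},\mathbf{Y}\rangle|\leq\|\mathbf{X}\|_{\mathbb{V}}$, while the choice $\mathbf{Y}=\mathbf{X}$ recovers the first identity in \eqref{2.7a}.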
For a bounded domain $\mathcal{O}$, the operator $\mathrm{A}$ is invertible and its inverse $\mathrm{A}^{-1}$ is bounded, self-adjoint and compact in $\mathbb{H}$. Thus, using the spectral theorem, the spectrum of $\mathrm{A}$ consists of an infinite sequence of eigenvalues $0< \lambda_1\leq \lambda_2\leq\ldots\leq \lambda_k\leq \ldots,$ with $\lambda_k\to\infty$ as $k\to\infty$.
\iffalse
The behavior of these eigenvalues is well known in the literature (for example see Theorem 2.2, Corollary 2.2, \cite{AAI} and for asymptotic behavior, see \cite{PCCF,Te1,AAI1}, etc). For all $k\geq 1$, we have
\begin{align}\label{643}
\lambda_k\geq \widetilde{C}k^{2/n}, \ \text{ where }\
\widetilde{C}=\frac{n}{2+n}\left(\frac{(2\pi)^n}{\omega_n(n-1)|\mathcal{O}|}\right)^{2/n}, \ \omega_n= \pi^{n/2}\Gamma(1+n/2),
\end{align} and $|\mathcal{O}|$ is the $n$-dimensional Lebesgue measure of $\mathcal{O}$. For $n=2$, we get $\widetilde{C}=\frac{4\sqrt{\pi}}{|\mathcal{O}|}$ and for $n=3$, we find $\widetilde{C}=\frac{3^{5/3}\pi^{4/3}k^{2/3}}{5|\mathcal{O}|^{2/3}}$.
\fi
Moreover, there exists an orthonormal basis $\{e_k\}_{k=1}^{\infty} $ of $\mathbb{H}$ consisting of eigenvectors of $\mathrm{A}$ such that $\mathrm{A} e_k =\lambda_ke_k$, for all $ k\in\mathbb{N}$. Any $\mathbf{X}\in\mathbb{H}$ can be expressed as $\mathbf{X}=\sum_{k=1}^{\infty}\langle\mathbf{X},e_k\rangle e_k$ and, for $\mathbf{X}\in\mathrm{D}(\mathrm{A})$, $\mathrm{A}\mathbf{X}=\sum_{k=1}^{\infty}\lambda_k\langle\mathbf{X},e_k\rangle e_k$. Thus, it is immediate that
\begin{align}\label{poin}
\|\nabla\mathbf{X}\|_{\mathbb{H}}^2=\langle \mathrm{A}\mathbf{X},\mathbf{X}\rangle =\sum_{k=1}^{\infty}\lambda_k|\langle \mathbf{X},e_k\rangle|^2\geq \lambda_1\sum_{k=1}^{\infty}|\langle\mathbf{X},e_k\rangle|^2=\lambda_1\|\mathbf{X}\|_{\mathbb{H}}^2,
\end{align}
which is the \emph{Poincar\'e inequality}.
\iffalse
In this work, we also need the fractional powers of $\mathrm{A}$. For $\mathbf{X}\in \mathbb{H}$ and $\alpha>0,$ we define
$\mathrm{A}^\alpha \mathbf{X}=\sum_{k=1}^\infty \lambda_k^\alpha (\mathbf{X},e_k) e_k, \ \mathbf{X}\in\mathrm{D}(\mathrm{A}^\alpha), $ where $\mathrm{D}(\mathrm{A}^\alpha)=\left\{\mathbf{X}\in \mathbb{H}:\sum_{k=1}^\infty \lambda_k^{2\alpha}|(\mathbf{X},e_k)|^2<+\infty\right\}.$
Here $\mathrm{D}(\mathrm{A}^\alpha)$ is equipped with the norm
\begin{equation} \label{fn}
\|\mathrm{A}^\alpha \mathbf{X}\|_{\mathbb{H}}=\left(\sum_{k=1}^\infty \lambda_k^{2\alpha}|(\mathbf{X},e_k)|^2\right)^{1/2}.
\end{equation}
It can be easily seen that $\mathrm{D}(\mathrm{A}^0)=\mathbb{H},$ $\mathrm{D}(\mathrm{A}^{1/2})=\mathbb{V}$. We set $\mathbb{V}_\alpha= \mathrm{D}(\mathrm{A}^{\alpha/2})$ with $\|\mathbf{X}\|_{\mathbb{V}_{\alpha}} =\|\mathrm{A}^{\alpha/2} \mathbf{X}\|_{\mathbb{H}}.$ Using Rellich-Kondrachov compactness embedding theorem, we know that for any $0\leq s_1<s_2,$ the embedding $\mathrm{D}(\mathrm{A}^{s_2})\subset \mathrm{D}(\mathrm{A}^{s_1})$ is also compact. Let us denote by $\mathrm{D}\left(\mathrm{A}^{-\alpha}\right)$, the dual space of $\mathrm{D}\left(\mathrm{A}^{\alpha}\right)$ and we have the following dense and continuous inclusions, for $\alpha>1/2$,
$$\mathrm{D}\left(\mathrm{A}^{\alpha}\right)\subset\mathbb{V}\subset\mathbb{H}\equiv\mathbb{H}'\subset\mathbb{V}'\subset\mathrm{D}\left(\mathrm{A}^{-\alpha}\right).$$ For negative powers, we also define $\left(\mathbf{X},\mathbf{Y}\right)_{\mathbb{V}_{-\alpha}}=\left(\mathrm{A}^{-\alpha/2}\mathbf{X},\mathrm{A}^{-\alpha/2}\mathbf{Y}\right)$ and $\|\mathbf{X}\|_{\mathbb{V}_{-\alpha}}=\|\mathrm{A}^{-\alpha/2}\mathbf{X}\|_{\mathbb{H}}$.
\fi
\iffalse
Applying H\"older's inequality on the expression \eqref{fn}, one can obtain the following interpolation estimate:
\begin{equation}\label{ie}
\|\mathrm{A}^{s}\mathbf{X}\|_{\mathbb{H}}\leq \|\mathrm{A}^{s_1}\mathbf{X}\|_{\mathbb{H}}^\theta\|\mathrm{A}^{s_2}\mathbf{X}\|^{1-\theta}_{\mathbb{H}},
\end{equation}
for any real $s_1\leq s\leq s_2$ and $\theta$ is given by $s=s_1\theta+s_2(1-\theta).$ Let us denote by $\mathrm{D}\left(\mathrm{A}^{-\alpha}\right)$, the dual space of $\mathrm{D}\left(\mathrm{A}^{\alpha}\right)$ and we have the following dense and continuous inclusions, for $\alpha>1/2$,
$$\mathrm{D}\left(\mathrm{A}^{\alpha}\right)\subset\mathbb{V}\subset\mathbb{H}\equiv\mathbb{H}'\subset\mathbb{V}'\subset\mathrm{D}\left(\mathrm{A}^{-\alpha}\right).$$ For negative powers, we also define $\left(\mathbf{X},\mathbf{Y}\right)_{\mathbb{V}_{-\alpha}}=\left(\mathrm{A}^{-\alpha/2}\mathbf{X},\mathrm{A}^{-\alpha/2}\mathbf{Y}\right)$ and $\|\mathbf{X}\|_{\mathbb{V}_{-\alpha}}=\|\mathrm{A}^{-\alpha/2}\mathbf{X}\|_{\mathbb{H}}$.
\begin{remark}
1. An integration by parts and boundary conditions yields
\begin{align}\label{24}
\langle \Delta\mathbf{X},|\mathbf{X}|^{r-2}\mathbf{X}\rangle&=\sum_{i,j=1}^n\int_{\mathcal{O}}\frac{\partial^2\mathbf{X}_j(x)}{\partial x_i^2}|\mathbf{X}(x)|^{r-2}\mathbf{X}_j(x)\d x\nonumber\\&=\sum_{i,j=1}^n\int_{\partial\mathcal{O}}\frac{\partial\mathbf{X}_j(x)}{\partial x_i}|\mathbf{X}(x)|^{r-2}\mathbf{X}_j(x)\mathbf{n}_i(x)\d x\nonumber\\&\quad-\sum_{i,j=1}^n\int_{\mathcal{O}}\frac{\partial \mathbf{X}_j(x)}{\partial x_i}\frac{\partial}{\partial x_i}(|\mathbf{X}(x)|^{r-2})\mathbf{X}_j(x)\d x-\sum_{i,j=1}^n\int_{\mathcal{O}}\left(\frac{\partial \mathbf{X}_j(x)}{\partial x_i}\right)^2|\mathbf{X}(x)|^{r-2}\d x\nonumber\\&=-(r-1)\sum_{i,j=1}^n\int_{\mathcal{O}}\left(\frac{\partial \mathbf{X}_j(x)}{\partial x_i}\right)^2|\mathbf{X}(x)|^{r-2}\d x=-(r-1)\||\mathbf{X}|^{\frac{r-2}{2}}\nabla\mathbf{X}\|_{\mathbb{L}^2}^2,
\end{align}
for $r\geq 2$.
2. Note that
\begin{align}
|\nabla(|\mathbf{X}|^{\frac{r}{2}})|&=|\nabla((|\mathbf{X}|^2)^{\frac{r}{4}})|=\sum_{k=1}^n\left|\frac{\partial}{\partial x_i}(|\mathbf{X}_k|^2)^{\frac{r}{4}}\right|=\frac{r}{2}\sum_{k,j=1}^n\left||\mathbf{X}_k|^{\frac{r}{2}-2}\mathbf{X}_j\frac{\partial}{\partial x_i}\mathbf{X}_j\right|\leq \frac{r}{2}|\mathbf{X}|^{\frac{r-2}{2}}|\nabla\mathbf{X}|,
\end{align}
and thus
$
\|\nabla(|\mathbf{X}|^{\frac{r}{2}})\|_{\mathbb{L}^2}\leq \frac{r}{2}\||\mathbf{X}|^{\frac{r-2}{2}}\nabla\mathbf{X}\|_{\mathbb{L}^2}.
$
Applying Poincar\'e inequality, we further have
\begin{align}\label{2p6}
\|\mathbf{X}\|_{\mathbb{L}^r}^r\leq\||\mathbf{X}|^{\frac{r}{2}}\|_{\mathbb{H}}^2\leq\frac{1}{\lambda_1}\|\nabla(|\mathbf{X}|^{\frac{r}{2}})\|_{\mathbb{H}}^2\leq\frac{r^2}{4\lambda_1}\||\mathbf{X}|^{\frac{r-2}{2}}\nabla\mathbf{X}\|_{\mathbb{L}^2}^2,
\end{align}
for all $\mathbf{X}\in\mathbb{V}\cap\mathbb{L}^r(\mathcal{O})$.
\end{remark}
It should be noted that, in this work, we are not using the Gagliardo-Nirenberg, Ladyzhenskaya or Agmon's inequalities. Thus, the results obtained in this work are true for $2\leq n\leq 4$ in bounded domains (see the discussions above \eqref{333}) and $n\geq 2$ in periodic domains.
\fi
\iffalse
\begin{lemma}[Gagliardo-Nirenberg inequality, Theorem 2.1, \cite{LN}] \label{gni}
Let $\mathcal{O}\subset\mathbb{R}^n$ and $\mathbf{X}\in\mathrm{W}_0^{1,p}(\mathcal{O};\mathbb{R}^n),p\geq 1$. Then, for any fixed number $q,r\geq 1$, there exists a constant $C>0$ depending only on $n,p,q$ such that
\begin{align}\label{gg}
\|\mathbf{X}\|_{\mathbb{L}^r}\leq C\|\nabla\mathbf{X}\|_{\mathbb{L}^p}^{\eta}\|\mathbf{X}\|_{\mathbb{L}^q}^{1-\eta},\ \eta\in[0,1],
\end{align}
where the numbers $p, q, r$ and $\eta$ are connected by the relation
$$\eta=\left(\frac{1}{q}-\frac{1}{r}\right)\left(\frac{1}{n}-\frac{1}{p}+\frac{1}{q}\right)^{-1}.$$
\end{lemma}
\fi
\iffalse
For the case $n=2$, taking $p=q=2$ in \eqref{gg}, we get $\eta=1-\frac{2}{r}$ and hence
\begin{align}\label{2.8}
\|\mathbf{X}\|_{\mathbb{L}^r}\leq C\|\nabla\mathbf{X}\|_{\mathbb{H}}^{1-\frac{2}{r}}\|\mathbf{X}\|_{\mathbb{H}}^{\frac{2}{r}},
\end{align}
for all $r\in[2,\infty)$. For the case $n=3$, substituting $p=q=2$ in \eqref{gg}, we obtain $\eta=\frac{3}{2}-\frac{3}{r}$ and thus
\begin{align}\label{2p11}
\|\mathbf{X}\|_{\mathbb{L}^r}\leq C\|\nabla\mathbf{X}\|_{\mathbb{H}}^{\frac{3}{2}-\frac{3}{r}}\|\mathbf{X}\|_{\mathbb{H}}^{-\frac{1}{2}+\frac{3}{r}},
\end{align}
for $r\in[2,6]$.
\begin{lemma}[Ladyzhenskaya inequality]
For $\mathbf{X}\in\ \mathrm{C}_0^{\infty}(\mathcal{O};\mathbb{R}^n), n = 2, 3$, there exists a constant $C$ such that
\begin{align}
\|\mathbf{X}\|_{\mathbb{L}^4(\mathcal{O})}\leq C^{1/4}\|\mathbf{X}\|^{1-\frac{n}{4}}_{\mathbb{L}^2(\mathcal{O})}\|\nabla\mathbf{X}\|^{\frac{n}{4}}_{\mathbb{L}^2(\mathcal{O})},\text{ for } n=2,3,
\end{align}
where $C=2,4$ for $n=2,3$ respectively.
\end{lemma}
\fi
\subsection{Bilinear operator}
Let us define the \emph{trilinear form} $b(\cdot,\cdot,\cdot):\mathbb{V}\times\mathbb{V}\times\mathbb{V}\to\mathbb{R}$ by $$b(\mathbf{X},\mathbf{Y},\mathbf{Z})=\int_{\mathcal{O}}(\mathbf{X}(x)\cdot\nabla)\mathbf{Y}(x)\cdot\mathbf{Z}(x)\d x=\sum_{i,j=1}^n\int_{\mathcal{O}}\mathbf{X}_i(x)\frac{\partial \mathbf{Y}_j(x)}{\partial x_i}\mathbf{Z}_j(x)\d x.$$ If $\mathbf{X}, \mathbf{Y}$ are such that the linear map $b(\mathbf{X}, \mathbf{Y}, \cdot) $ is continuous on $\mathbb{V}$, the corresponding element of $\mathbb{V}'$ is denoted by $\mathrm{B}(\mathbf{X}, \mathbf{Y})$. We also denote $\mathrm{B}(\mathbf{X}) = \mathrm{B}(\mathbf{X}, \mathbf{X})=\mathrm{P}_{\mathbb{H}}(\mathbf{X}\cdot\nabla)\mathbf{X}$.
An integration by parts yields
\begin{equation}\label{b0}
\left\{
\begin{aligned}
b(\mathbf{X},\mathbf{Y},\mathbf{Y}) &= 0,\text{ for all }\mathbf{X},\mathbf{Y} \in\mathbb{V},\\
b(\mathbf{X},\mathbf{Y},\mathbf{Z}) &= -b(\mathbf{X},\mathbf{Z},\mathbf{Y}),\text{ for all }\mathbf{X},\mathbf{Y},\mathbf{Z}\in \mathbb{V}.
\end{aligned}
\right.\end{equation}
An application of H\"older's inequality to the trilinear form yields
\begin{align*}
|b(\mathbf{X},\mathbf{Y},\mathbf{Z})|=|b(\mathbf{X},\mathbf{Z},\mathbf{Y})|\leq \|\mathbf{X}\|_{\widetilde{\mathbb{L}}^{r+1}}\|\mathbf{Y}\|_{\widetilde{\mathbb{L}}^{\frac{2(r+1)}{r-1}}}\|\mathbf{Z}\|_{\mathbb{V}},
\end{align*}
for all $\mathbf{X}\in\mathbb{V}\cap\widetilde{\mathbb{L}}^{r+1}$, $\mathbf{Y}\in\mathbb{V}\cap\widetilde{\mathbb{L}}^{\frac{2(r+1)}{r-1}}$ and $\mathbf{Z}\in\mathbb{V}$, so that we get
\begin{align}\label{2p9}
\|\mathrm{B}(\mathbf{X},\mathbf{Y})\|_{\mathbb{V}'}\leq \|\mathbf{X}\|_{\widetilde{\mathbb{L}}^{r+1}}\|\mathbf{Y}\|_{\widetilde{\mathbb{L}}^{\frac{2(r+1)}{r-1}}}.
\end{align}
Hence, the trilinear map $b : \mathbb{V}\times\mathbb{V}\times\mathbb{V}\to \mathbb{R}$ has a unique extension to a bounded trilinear map from $(\mathbb{V}\cap\widetilde{\mathbb{L}}^{r+1})\times(\mathbb{V}\cap\widetilde{\mathbb{L}}^{\frac{2(r+1)}{r-1}})\times\mathbb{V}$ to $\mathbb{R}$. It can also be seen that $\mathrm{B}$ maps $ \mathbb{V}\cap\widetilde{\mathbb{L}}^{r+1}$ into $\mathbb{V}'+\widetilde{\mathbb{L}}^{\frac{r+1}{r}}$ and using interpolation inequality, we get
\begin{align}\label{212}
\left|\langle \mathrm{B}(\mathbf{X},\mathbf{X}),\mathbf{Y}\rangle \right|=\left|b(\mathbf{X},\mathbf{Y},\mathbf{X})\right|\leq \|\mathbf{X}\|_{\widetilde{\mathbb{L}}^{r+1}}\|\mathbf{X}\|_{\widetilde{\mathbb{L}}^{\frac{2(r+1)}{r-1}}}\|\mathbf{Y}\|_{\mathbb{V}}\leq\|\mathbf{X}\|_{\widetilde{\mathbb{L}}^{r+1}}^{\frac{r+1}{r-1}}\|\mathbf{X}\|_{\mathbb{H}}^{\frac{r-3}{r-1}}\|\mathbf{Y}\|_{\mathbb{V}},
\end{align}
for all $\mathbf{Y}\in\mathbb{V}\cap\widetilde{\mathbb{L}}^{r+1}$. Thus, we have
\begin{align}\label{2.9a}
\|\mathrm{B}(\mathbf{X})\|_{\mathbb{V}'+\widetilde{\mathbb{L}}^{\frac{r+1}{r}}}\leq\|\mathbf{X}\|_{\widetilde{\mathbb{L}}^{r+1}}^{\frac{r+1}{r-1}}\|\mathbf{X}\|_{\mathbb{H}}^{\frac{r-3}{r-1}},
\end{align}
for $r\geq 3$.
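The interpolation inequality used in \eqref{212} is, written out explicitly (a standard Lebesgue interpolation, valid for $r\geq 3$; the exponents may be checked directly),
\begin{align*}
\|\mathbf{X}\|_{\widetilde{\mathbb{L}}^{\frac{2(r+1)}{r-1}}}\leq\|\mathbf{X}\|_{\mathbb{H}}^{\frac{r-3}{r-1}}\|\mathbf{X}\|_{\widetilde{\mathbb{L}}^{r+1}}^{\frac{2}{r-1}},\ \text{ since }\ \frac{r-1}{2(r+1)}=\frac{1}{2}\cdot\frac{r-3}{r-1}+\frac{1}{r+1}\cdot\frac{2}{r-1},
\end{align*}
and multiplying this estimate by $\|\mathbf{X}\|_{\widetilde{\mathbb{L}}^{r+1}}$ produces the powers $\frac{r+1}{r-1}$ and $\frac{r-3}{r-1}$ appearing in \eqref{212} and \eqref{2.9a}.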
\iffalse
Note that $\mathrm{B}$ also maps $\widetilde{\mathbb{L}}^6\cap \mathbb{H}$ into $\mathbb{V}'$ and
\begin{align*}
\left|\langle \mathrm{B}(\mathbf{X},\mathbf{X}),\mathbf{Y}\rangle \right|=\left|b(\mathbf{X},\mathbf{Y},\mathbf{X})\right|\leq \|\mathbf{X}\|_{\widetilde{\mathbb{L}}^3}\|\mathbf{X}\|_{\widetilde{\mathbb{L}}^6}\|\nabla\mathbf{Y}\|_{\mathbb{H}}
\end{align*}
so that
\begin{align}
\|\mathrm{B}(\mathbf{X},\mathbf{X})\|_{\mathbb{V}'}\leq\|\mathbf{X}\|_{\widetilde{\mathbb{L}}^3}\|\mathbf{X}\|_{\widetilde{\mathbb{L}}^6}\leq C\|\mathbf{X}\|_{\mathbb{H}}^{1/2}\|\mathbf{X}\|_{\mathbb{V}}^{3/2},
\end{align}
once again, we get an estimate similar to \eqref{2.9a}.
Using \eqref{2p9}, for $\mathbf{X},\mathbf{Y}\in\mathbb{V}\cap\widetilde{\mathbb{L}}^{r+1}$, we also have
\begin{align}\label{lip}
\|\mathrm{B}(\mathbf{X})-\mathrm{B}(\mathbf{Y})\|_{\mathbb{V}'+\widetilde{\mathbb{L}}^{\frac{r+1}{r}}}&\leq \|\mathrm{B}(\mathbf{X}-\mathbf{Y},\mathbf{X})\|_{\mathbb{V}'}+\|\mathrm{B}(\mathbf{Y},\mathbf{X}-\mathbf{Y})\|_{\mathbb{V}'}\nonumber\\&\leq \left(\|\mathbf{X}\|_{\widetilde{\mathbb{L}}^{\frac{2(r+1)}{r-1}}}+\|\mathbf{Y}\|_{\widetilde{\mathbb{L}}^{\frac{2(r+1)}{r-1}}}\right)\|\mathbf{X}-\mathbf{Y}\|_{\widetilde{\mathbb{L}}^{r+1}}\nonumber\\&\leq \left(\|\mathbf{X}\|_{\mathbb{H}}^{\frac{r-3}{r-1}}\|\mathbf{X}\|_{\widetilde{\mathbb{L}}^{r+1}}^{\frac{2}{r-1}}+\|\mathbf{Y}\|_{\mathbb{H}}^{\frac{r-3}{r-1}}\|\mathbf{Y}\|_{\widetilde{\mathbb{L}}^{r+1}}^{\frac{2}{r-1}}\right)\|\mathbf{X}-\mathbf{Y}\|_{\widetilde{\mathbb{L}}^{r+1}},
\end{align}
for $r>3$, by using the interpolation inequality. For $r=3$, a calculation similar to \eqref{lip} yields
\begin{align}
\|\mathrm{B}(\mathbf{X})-\mathrm{B}(\mathbf{Y})\|_{\mathbb{V}'+\widetilde{\mathbb{L}}^{\frac{4}{3}}}&\leq \left(\|\mathbf{X}\|_{\widetilde{\mathbb{L}}^{4}}+\|\mathbf{Y}\|_{\widetilde{\mathbb{L}}^{4}}\right)\|\mathbf{X}-\mathbf{Y}\|_{\widetilde{\mathbb{L}}^{4}},
\end{align}
hence $\mathrm{B}(\cdot):\mathbb{V}\cap\widetilde{\mathbb{L}}^{4}\to\mathbb{V}'+\widetilde{\mathbb{L}}^{\frac{4}{3}}$ is a locally Lipschitz operator.
\fi
For $n=2$ and $r\in[1,3]$, using H\"older's and Ladyzhenskaya's inequalities, we obtain
\begin{align*}
|\langle\mathrm{B}(\mathbf{X},\mathbf{Y}),\mathbf{Z}\rangle|=|\langle\mathrm{B}(\mathbf{X},\mathbf{Z}),\mathbf{Y}\rangle|\leq\|\mathbf{X}\|_{\widetilde{\mathbb{L}}^4}\|\mathbf{Y}\|_{\widetilde{\mathbb{L}}^4}\|\mathbf{Z}\|_{\mathbb{V}},
\end{align*}
for all $\mathbf{X},\mathbf{Y}\in\widetilde{\mathbb{L}}^4$ and $\mathbf{Z}\in\mathbb{V}$, so that we get $\|\mathrm{B}(\mathbf{X},\mathbf{Y})\|_{\mathbb{V}'}\leq\|\mathbf{X}\|_{\widetilde{\mathbb{L}}^4}\|\mathbf{Y}\|_{\widetilde{\mathbb{L}}^4}$. Furthermore, we have $$\|\mathrm{B}(\mathbf{X},\mathbf{X})\|_{\mathbb{V}'}\leq\|\mathbf{X}\|_{\widetilde{\mathbb{L}}^4}^2\leq\sqrt{2}\|\mathbf{X}\|_{\mathbb{H}}\|\mathbf{X}\|_{\mathbb{V}}\leq\sqrt{\frac{2}{\lambda_1}}\|\mathbf{X}\|_{\mathbb{V}}^2,$$ for all $\mathbf{X}\in\mathbb{V}$.
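The two dimensional Ladyzhenskaya inequality used above, $\|\mathbf{X}\|_{\widetilde{\mathbb{L}}^4}^2\leq\sqrt{2}\,\|\mathbf{X}\|_{\mathbb{H}}\|\mathbf{X}\|_{\mathbb{V}}$, can be sketched as follows for a smooth compactly supported scalar function $u$ (extended by zero outside $\mathcal{O}$; for vector fields one may apply the same argument to $|\mathbf{X}|$, using $|\nabla|\mathbf{X}||\leq|\nabla\mathbf{X}|$ a.e.):
\begin{align*}
\int_{\mathbb{R}^2}|u|^4\d x\leq\left(2\int_{\mathbb{R}^2}|u||\partial_{x_1}u|\d x\right)\left(2\int_{\mathbb{R}^2}|u||\partial_{x_2}u|\d x\right)\leq4\|u\|^2_{\mathrm{L}^2}\|\partial_{x_1}u\|_{\mathrm{L}^2}\|\partial_{x_2}u\|_{\mathrm{L}^2}\leq2\|u\|_{\mathrm{L}^2}^2\|\nabla u\|_{\mathrm{L}^2}^2,
\end{align*}
where the first step follows from $|u(x_1,x_2)|^2\leq2\int_{\mathbb{R}}|u\,\partial_{y_1}u|\d y_1$ (and its analogue in the second variable) together with Fubini's theorem, the second step from the Cauchy-Schwarz inequality and the last one from Young's inequality.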
\iffalse
For $r\geq 2$, using an integration by parts, boundary condition and divergence free condition, we estimate $\langle \mathrm{B}(\mathbf{X}),|\mathbf{X}|^{r-2}\mathbf{X}\rangle$ as
\begin{align}\label{2.12}
\langle \mathrm{B}(\mathbf{X}),|\mathbf{X}|^{r-2}\mathbf{X}\rangle&=\langle (\mathbf{X}\cdot\nabla)\mathbf{X},|\mathbf{X}|^{r-2}\mathbf{X}\rangle=\sum_{i,j,k=1}^n\int_{\mathcal{O}}\mathbf{X}_i(x)\frac{\partial \mathbf{X}_j(x)}{\partial x_i}|\mathbf{X}_k(x)|^{r-2}\mathbf{X}_j(x)\d x\nonumber\\&=\frac{1}{2}\int_{\mathcal{O}}\mathbf{X}_i(x)|\mathbf{X}_k(x)|^{r-2}\frac{\partial\mathbf{X}_j^2(x)}{\partial x_i}\d x\nonumber\\&=\frac{1}{2}\sum_{i,j,k=1}^n\bigg\{\int_{\partial\mathcal{O}}\mathbf{X}_i(x)|\mathbf{X}_k(x)|^{r-2}\mathbf{X}_j^2(x)\mathbf{n}_i(x)\d x-\int_{\mathcal{O}}\frac{\partial\mathbf{X}_i(x)}{\partial x_i}|\mathbf{X}_k(x)|^{r-2}\mathbf{X}_j^2(x)\d x \nonumber\\&\qquad-\int_{\mathcal{O}}\frac{\partial|\mathbf{X}_k(x)|^{r-2}}{\partial x_i}\mathbf{X}_i(x)\mathbf{X}_j^2(x)\d x\bigg\}\nonumber\\&=-\frac{1}{2}\sum_{i=1}^n\int_{\mathcal{O}}\frac{\partial|\mathbf{X}(x)|^{r-2}}{\partial x_i}|\mathbf{X}(x)|^2\mathbf{X}_i(x)\d x\nonumber\\&=-\frac{(r-2)}{2}\sum_{i,j=1}^n\int_{\mathcal{O}}|\mathbf{X}(x)|^{r-2}\mathbf{X}_i(x)\frac{\partial\mathbf{X}_j(x)}{\partial x_i}\mathbf{X}_j(x)\d x\nonumber\\&=-\frac{(r-2)}{2}\langle \mathrm{B}(\mathbf{X}),|\mathbf{X}|^{r-2}\mathbf{X}\rangle,
\end{align}
so that we get $\langle \mathrm{B}(\mathbf{X}),|\mathbf{X}|^{r-2}\mathbf{X}\rangle=0$.
\fi
\subsection{Nonlinear operator}
Let us now consider the operator $\mathcal{C}(\mathbf{X}):=\mathrm{P}_{\mathbb{H}}(|\mathbf{X}|^{r-1}\mathbf{X})$. It is immediate that $\langle\mathcal{C}(\mathbf{X}),\mathbf{X}\rangle =\|\mathbf{X}\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}$.
\iffalse
and the map $\mathcal{C}(\cdot):\widetilde{\mathbb{L}}^{r+1}\to\widetilde{\mathbb{L}}^{\frac{r+1}{r}}$ is Fr\'echet differentiable with Fr\'echet derivative $\mathcal{C}'(\mathbf{X})\mathbf{Y}=r\mathrm{P}_{\mathbb{H}}(|\mathbf{X}|^{r-1}\mathbf{Y}),$ for $\mathbf{Y}\in\widetilde{\mathbb{L}}^{r+1}$. For $\mathbf{X},\mathbf{Y}\in\widetilde{\mathbb{L}}^{r+1}$, using Taylor's formula, we have
\begin{align}\label{213}
\langle \mathrm{P}_{\mathbb{H}}(|\mathbf{X}|^{r-1}\mathbf{X})-\mathrm{P}_{\mathbb{H}}(|\mathbf{Y}|^{r-1}\mathbf{Y}),\mathbf{Z}\rangle&\leq \|(|\mathbf{X}|^{r-1}\mathbf{X})-(|\mathbf{Y}|^{r-1}\mathbf{Y})\|_{\widetilde{\mathbb{L}}^{\frac{r+1}{r}}}\|\mathbf{Z}\|_{\widetilde{\mathbb{L}}^{r+1}}\nonumber\\&\leq \sup_{0<\theta<1}r\|(\mathbf{X}-\mathbf{Y})|\theta\mathbf{X}+(1-\theta)\mathbf{Y}|^{r-1}\|_{\widetilde{\mathbb{L}}^{{\frac{r+1}{r}}}}\|\mathbf{Z}\|_{\widetilde{\mathbb{L}}^{r+1}}\nonumber\\&\leq \sup_{0<\theta<1} r\|\theta\mathbf{X}+(1-\theta)\mathbf{Y}\|_{\widetilde{\mathbb{L}}^{r+1}}^{r-1}\|\mathbf{X}-\mathbf{Y}\|_{\widetilde{\mathbb{L}}^{r+1}}\|\mathbf{Z}\|_{\widetilde{\mathbb{L}}^{r+1}}\nonumber\\&\leq r\left(\|\mathbf{X}\|_{\widetilde{\mathbb{L}}^{r+1}}+\|\mathbf{Y}\|_{\widetilde{\mathbb{L}}^{r+1}}\right)^{r-1}\|\mathbf{X}-\mathbf{Y}\|_{\widetilde{\mathbb{L}}^{r+1}}\|\mathbf{Z}\|_{\widetilde{\mathbb{L}}^{r+1}},
\end{align}
for all $\mathbf{X},\mathbf{Y},\mathbf{Z}\in\widetilde{\mathbb{L}}^{r+1}$.
Thus the operator $\mathcal{C}(\cdot):\widetilde{\mathbb{L}}^{r+1}\to\widetilde{\mathbb{L}}^{\frac{r+1}{r}}$ is locally Lipschitz.
\fi
\iffalse
\begin{align}\label{2p13}
&\langle \mathrm{P}_{\mathbb{H}}(\mathbf{X}|\mathbf{X}|^{r-1})-\mathrm{P}_{\mathbb{H}}(\mathbf{Y}|\mathbf{Y}|^{r-1}),\mathbf{X}-\mathbf{Y}\rangle\nonumber\\&= r\langle (\mathbf{X}-\mathbf{Y})|\theta\mathbf{X}+(1-\theta)\mathbf{Y}|^{r-1},\mathbf{X}-\mathbf{Y}\rangle \nonumber\\&=r\int_{\mathcal{O}} |\theta\mathbf{X}(x)+(1-\theta)\mathbf{Y}(x)|^{r-1}|\mathbf{X}(x)-\mathbf{Y}(x)|^2\d x\geq 0,
\end{align}
for all $\mathbf{X},\mathbf{Y}\in\mathbb{V}\cap\widetilde{\mathbb{L}}^{r+1}$. The above estimate can be proved in the following way also:
\fi
For any $r\in[1,\infty)$, we have
\begin{align}\label{2pp11}
&\langle \mathrm{P}_{\mathbb{H}}(\mathbf{X}|\mathbf{X}|^{r-1})-\mathrm{P}_{\mathbb{H}}(\mathbf{Y}|\mathbf{Y}|^{r-1}),\mathbf{X}-\mathbf{Y}\rangle\nonumber\\&=\int_{\mathcal{O}}\left(\mathbf{X}(x)|\mathbf{X}(x)|^{r-1}-\mathbf{Y}(x)|\mathbf{Y}(x)|^{r-1}\right)\cdot(\mathbf{X}(x)-\mathbf{Y}(x))\d x\nonumber\\&=\int_{\mathcal{O}}\left(|\mathbf{X}(x)|^{r+1}-|\mathbf{X}(x)|^{r-1}\mathbf{X}(x)\cdot\mathbf{Y}(x)-|\mathbf{Y}(x)|^{r-1}\mathbf{X}(x)\cdot\mathbf{Y}(x)+|\mathbf{Y}(x)|^{r+1}\right)\d x\nonumber\\&\geq\int_{\mathcal{O}}\left(|\mathbf{X}(x)|^{r+1}-|\mathbf{X}(x)|^{r}|\mathbf{Y}(x)|-|\mathbf{Y}(x)|^{r}|\mathbf{X}(x)|+|\mathbf{Y}(x)|^{r+1}\right)\d x\nonumber\\&=\int_{\mathcal{O}}\left(|\mathbf{X}(x)|^r-|\mathbf{Y}(x)|^r\right)(|\mathbf{X}(x)|-|\mathbf{Y}(x)|)\d x\geq 0.
\end{align}
Furthermore, we find
\begin{align}\label{224}
&\langle\mathrm{P}_{\mathbb{H}}(\mathbf{X}|\mathbf{X}|^{r-1})-\mathrm{P}_{\mathbb{H}}(\mathbf{Y}|\mathbf{Y}|^{r-1}),\mathbf{X}-\mathbf{Y}\rangle\nonumber\\&=\langle|\mathbf{X}|^{r-1},|\mathbf{X}-\mathbf{Y}|^2\rangle+\langle|\mathbf{Y}|^{r-1},|\mathbf{X}-\mathbf{Y}|^2\rangle+\langle\mathbf{Y}|\mathbf{X}|^{r-1}-\mathbf{X}|\mathbf{Y}|^{r-1},\mathbf{X}-\mathbf{Y}\rangle\nonumber\\&=\||\mathbf{X}|^{\frac{r-1}{2}}(\mathbf{X}-\mathbf{Y})\|_{\mathbb{H}}^2+\||\mathbf{Y}|^{\frac{r-1}{2}}(\mathbf{X}-\mathbf{Y})\|_{\mathbb{H}}^2\nonumber\\&\quad+\langle\mathbf{X}\cdot\mathbf{Y},|\mathbf{X}|^{r-1}+|\mathbf{Y}|^{r-1}\rangle-\langle|\mathbf{X}|^2,|\mathbf{Y}|^{r-1}\rangle-\langle|\mathbf{Y}|^2,|\mathbf{X}|^{r-1}\rangle.
\end{align}
The last three terms in \eqref{224} can be handled by expanding $|\mathbf{X}(x)-\mathbf{Y}(x)|^2$ and rearranging, which gives
\begin{align*}
&\langle\mathbf{X}\cdot\mathbf{Y},|\mathbf{X}|^{r-1}+|\mathbf{Y}|^{r-1}\rangle-\langle|\mathbf{X}|^2,|\mathbf{Y}|^{r-1}\rangle-\langle|\mathbf{Y}|^2,|\mathbf{X}|^{r-1}\rangle\nonumber\\&=-\frac{1}{2}\||\mathbf{X}|^{\frac{r-1}{2}}(\mathbf{X}-\mathbf{Y})\|_{\mathbb{H}}^2-\frac{1}{2}\||\mathbf{Y}|^{\frac{r-1}{2}}(\mathbf{X}-\mathbf{Y})\|_{\mathbb{H}}^2+\frac{1}{2}\langle\left(|\mathbf{X}|^{r-1}-|\mathbf{Y}|^{r-1}\right),\left(|\mathbf{X}|^2-|\mathbf{Y}|^2\right)\rangle \nonumber\\&\geq -\frac{1}{2}\||\mathbf{X}|^{\frac{r-1}{2}}(\mathbf{X}-\mathbf{Y})\|_{\mathbb{H}}^2-\frac{1}{2}\||\mathbf{Y}|^{\frac{r-1}{2}}(\mathbf{X}-\mathbf{Y})\|_{\mathbb{H}}^2.
\end{align*}
From \eqref{224}, we finally have
\begin{align}\label{2.23}
&\langle\mathrm{P}_{\mathbb{H}}(\mathbf{X}|\mathbf{X}|^{r-1})-\mathrm{P}_{\mathbb{H}}(\mathbf{Y}|\mathbf{Y}|^{r-1}),\mathbf{X}-\mathbf{Y}\rangle\geq \frac{1}{2}\||\mathbf{X}|^{\frac{r-1}{2}}(\mathbf{X}-\mathbf{Y})\|_{\mathbb{H}}^2+\frac{1}{2}\||\mathbf{Y}|^{\frac{r-1}{2}}(\mathbf{X}-\mathbf{Y})\|_{\mathbb{H}}^2\geq 0,
\end{align}
for $r\geq 1$. It is important to note that
\begin{align}\label{a215}
\|\mathbf{X}-\mathbf{Y}\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}&=\int_{\mathcal{O}}|\mathbf{X}(x)-\mathbf{Y}(x)|^{r-1}|\mathbf{X}(x)-\mathbf{Y}(x)|^{2}\d x\nonumber\\&\leq 2^{r-2}\int_{\mathcal{O}}(|\mathbf{X}(x)|^{r-1}+|\mathbf{Y}(x)|^{r-1})|\mathbf{X}(x)-\mathbf{Y}(x)|^{2}\d x\nonumber\\&\leq 2^{r-2}\||\mathbf{X}|^{\frac{r-1}{2}}(\mathbf{X}-\mathbf{Y})\|_{\mathbb{L}^2}^2+2^{r-2}\||\mathbf{Y}|^{\frac{r-1}{2}}(\mathbf{X}-\mathbf{Y})\|_{\mathbb{L}^2}^2.
\end{align}
Combining \eqref{2.23} and \eqref{a215}, we obtain
\begin{align}\label{214}
\langle\mathcal{C}(\mathbf{X})-\mathcal{C}(\mathbf{Y}),\mathbf{X}-\mathbf{Y}\rangle\geq\frac{1}{2^{r-1}}\|\mathbf{X}-\mathbf{Y}\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1},
\end{align}
for $r\geq 1$.
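The first inequality in \eqref{a215} uses the elementary estimate $|\mathbf{X}(x)-\mathbf{Y}(x)|^{r-1}\leq2^{r-2}\left(|\mathbf{X}(x)|^{r-1}+|\mathbf{Y}(x)|^{r-1}\right)$, which, for instance for $r\geq2$, is a consequence of the convexity of the map $t\mapsto t^{r-1}$ on $[0,\infty)$: for all $a,b\geq0$,
\begin{align*}
(a+b)^{r-1}=2^{r-1}\left(\frac{a+b}{2}\right)^{r-1}\leq2^{r-1}\cdot\frac{a^{r-1}+b^{r-1}}{2}=2^{r-2}\left(a^{r-1}+b^{r-1}\right),
\end{align*}
applied with $a=|\mathbf{X}(x)|$, $b=|\mathbf{Y}(x)|$, together with the triangle inequality $|\mathbf{X}(x)-\mathbf{Y}(x)|\leq|\mathbf{X}(x)|+|\mathbf{Y}(x)|$.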
\iffalse
For the critical case $r=3$, we have
\begin{align}\label{2p23}
&\langle \mathrm{P}_{\mathbb{H}}(\mathbf{X}|\mathbf{X}|^{2})-\mathrm{P}_{\mathbb{H}}(\mathbf{Y}|\mathbf{Y}|^{2}),\mathbf{X}-\mathbf{Y}\rangle\nonumber\\&=\langle \mathbf{X}|\mathbf{X}|^2-\mathbf{Y}|\mathbf{Y}|^2,\mathbf{X}-\mathbf{Y}\rangle =\langle\mathbf{X}(|\mathbf{X}|^2-|\mathbf{Y}|^2)+(\mathbf{X}-\mathbf{Y})|\mathbf{Y}|^2,\mathbf{X}-\mathbf{Y}\rangle \nonumber\\&=\|\mathbf{Y}(\mathbf{X}-\mathbf{Y})\|_{\mathbb{H}}^2+\langle\mathbf{X}\cdot(\mathbf{X}-\mathbf{Y}),(\mathbf{X}+\mathbf{Y})\cdot(\mathbf{X}-\mathbf{Y})\rangle \nonumber\\&=\|\mathbf{Y}(\mathbf{X}-\mathbf{Y})\|_{\mathbb{H}}^2+\|\mathbf{X}\cdot(\mathbf{X}-\mathbf{Y})\|_{\mathbb{H}}^2+\langle\mathbf{X}\cdot(\mathbf{X}-\mathbf{Y}),\mathbf{Y}\cdot(\mathbf{X}-\mathbf{Y})\rangle \nonumber\\&\geq \|\mathbf{Y}(\mathbf{X}-\mathbf{Y})\|_{\mathbb{H}}^2+\|\mathbf{X}\cdot(\mathbf{X}-\mathbf{Y})\|_{\mathbb{H}}^2-\|\mathbf{X}\cdot(\mathbf{X}-\mathbf{Y})\|_{\mathbb{H}}\|\mathbf{Y}(\mathbf{X}-\mathbf{Y})\|_{\mathbb{H}}\nonumber\\&\geq\frac{1}{2}\|\mathbf{Y}(\mathbf{X}-\mathbf{Y})\|_{\mathbb{H}}^2+\frac{1}{2}\|\mathbf{X}\cdot(\mathbf{X}-\mathbf{Y})\|_{\mathbb{H}}^2\geq \frac{1}{2}\|\mathbf{Y}(\mathbf{X}-\mathbf{Y})\|_{\mathbb{H}}^2\geq 0,
\end{align}
where we used Young's inequality.
Applying integration by parts, we get
\begin{align}\label{219}
(\mathcal{C}(\mathbf{X}),\mathrm{A}\mathbf{X})&=\sum_{i,j=1}^n\int_{\mathcal{O}}|\mathbf{X}(x)|^{r-1}\mathbf{X}_j(x)\frac{\partial^2\mathbf{X}_j(x)}{\partial x_i^2}\d x\nonumber\\&= -\sum_{i,j=1}^n\int_{\mathcal{O}}|\mathbf{X}(x)|^{r-1}\left(\frac{\partial\mathbf{X}_j(x)}{\partial x_i}\right)^2\d x-\sum_{i,j=1}^n\int_{\mathcal{O}}\frac{\partial}{\partial x_i}|\mathbf{X}(x)|^{r-1}\mathbf{X}_j(x)\frac{\partial\mathbf{X}_j(x)}{\partial x_i}\d x\nonumber\\&=-(r-1)\sum_{i,j=1}^n\int_{\mathcal{O}}|\mathbf{X}(x)|^{r-1}\left(\frac{\partial\mathbf{X}_j(x)}{\partial x_i}\right)^2\d x=-(r-1)\||\mathbf{X}|^{\frac{r-1}{2}}|\nabla\mathbf{X}|\|_{\mathbb{H}}^2.
\end{align}
\fi
\subsection{Monotonicity}
In this subsection, we discuss the monotonicity and hemicontinuity properties of the linear and nonlinear operators.
\begin{definition}[\cite{VB}]
Let $\mathbb{X}$ be a Banach space and let $\mathbb{X}^{'}$ be its topological dual.
An operator $\mathrm{G}:\mathrm{D}\rightarrow
\mathbb{X}^{'},$ $\mathrm{D}=\mathrm{D}(\mathrm{G})\subset \mathbb{X}$ is said to be
\emph{monotone} if
$$\langle\mathrm{G}(x)-\mathrm{G}(y),x-y\rangle\geq
0,\ \text{ for all } \ x,y\in \mathrm{D}.$$ \iffalse
The operator $\mathrm{G}(\cdot)$ is \emph{maximal monotone} if there is no monotone operator that properly contains it, that is, if for $x\in\mathbb{X}$ and $w\in\mathbb{X}'$, the inequality $\langle w-\mathrm{G}(x),x-y\rangle\geq 0$, for all $y\in\mathbb{X}$ implies $w=\mathrm{G}(x)$.
\fi
The operator $\mathrm{G}(\cdot)$ is said to be \emph{hemicontinuous}, if for all $x, y\in\mathbb{X}$ and $w\in\mathbb{X}',$ $$\lim_{\lambda\to 0}\langle\mathrm{G}(x+\lambda y),w\rangle=\langle\mathrm{G}(x),w\rangle.$$
The operator $\mathrm{G}(\cdot)$ is called \emph{demicontinuous}, if for every $y\in\mathbb{X}$, the functional $x \mapsto\langle \mathrm{G}(x), y\rangle$ is continuous on $\mathrm{D}$, or in other words, $x_k\to x$ in $\mathbb{X}$ implies $\mathrm{G}(x_k)\xrightarrow{w}\mathrm{G}(x)$ in $\mathbb{X}'$. Clearly, demicontinuity implies hemicontinuity.
\end{definition}
\begin{lemma}[Theorem 2.2, \cite{MTM7}]\label{thm2.2}
Let $\mathbf{X},\mathbf{Y}\in\mathbb{V}\cap\widetilde{\mathbb{L}}^{r+1}$, for $r>3$. Then, for the operator $\mathrm{G}(\mathbf{X})=\mu \mathrm{A}\mathbf{X}+\mathrm{B}(\mathbf{X})+\beta\mathcal{C}(\mathbf{X})$, we have
\begin{align}\label{fe}
\langle\mathrm{G}(\mathbf{X})-\mathrm{G}(\mathbf{Y}),\mathbf{X}-\mathbf{Y}\rangle+\eta\|\mathbf{X}-\mathbf{Y}\|_{\mathbb{H}}^2\geq 0,
\end{align}
where $\eta=\frac{r-3}{2\mu(r-1)}\left(\frac{2}{\beta\mu (r-1)}\right)^{\frac{2}{r-3}}.$ That is, the operator $\mathrm{G}+\eta\mathrm{I}$ is a monotone operator from $\mathbb{V}\cap\widetilde{\mathbb{L}}^{r+1}$ to $\mathbb{V}'+\widetilde{\mathbb{L}}^{\frac{r+1}{r}}$.
\end{lemma}
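Although we refer to \cite{MTM7} for the complete proof, let us briefly sketch how the constant $\eta$ arises. Setting $\mathbf{w}=\mathbf{X}-\mathbf{Y}$ and using \eqref{b0}, the only term in $\langle\mathrm{G}(\mathbf{X})-\mathrm{G}(\mathbf{Y}),\mathbf{w}\rangle$ that has to be absorbed is $|\langle\mathrm{B}(\mathbf{w},\mathbf{Y}),\mathbf{w}\rangle|\leq\|\mathbf{w}\|_{\mathbb{V}}\||\mathbf{Y}|\mathbf{w}\|_{\mathbb{H}}\leq\frac{\mu}{2}\|\mathbf{w}\|_{\mathbb{V}}^2+\frac{1}{2\mu}\||\mathbf{Y}|\mathbf{w}\|_{\mathbb{H}}^2$. An application of H\"older's inequality with exponents $\frac{r-1}{2}$ and $\frac{r-1}{r-3}$, followed by Young's inequality, then yields
\begin{align*}
\frac{1}{2\mu}\int_{\mathcal{O}}|\mathbf{Y}(x)|^2|\mathbf{w}(x)|^2\d x&\leq\frac{1}{2\mu}\left(\int_{\mathcal{O}}|\mathbf{Y}(x)|^{r-1}|\mathbf{w}(x)|^{2}\d x\right)^{\frac{2}{r-1}}\left(\int_{\mathcal{O}}|\mathbf{w}(x)|^{2}\d x\right)^{\frac{r-3}{r-1}}\\&\leq\frac{\beta}{2}\||\mathbf{Y}|^{\frac{r-1}{2}}\mathbf{w}\|_{\mathbb{H}}^2+\frac{r-3}{2\mu(r-1)}\left(\frac{2}{\beta\mu(r-1)}\right)^{\frac{2}{r-3}}\|\mathbf{w}\|_{\mathbb{H}}^2,
\end{align*}
and the first term on the right hand side is dominated by the lower bound \eqref{2.23} for $\beta\langle\mathcal{C}(\mathbf{X})-\mathcal{C}(\mathbf{Y}),\mathbf{w}\rangle$, which leads to \eqref{fe}.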
\begin{lemma}[Theorem 2.3, \cite{MTM7}]\label{thm2.3}
For the critical case $r=3$ with $2\beta\mu \geq 1$, the operator $\mathrm{G}(\cdot):\mathbb{V}\cap\widetilde{\mathbb{L}}^{r+1}\to \mathbb{V}'+\widetilde{\mathbb{L}}^{\frac{r+1}{r}}$ is globally monotone, that is, for all $\mathbf{X},\mathbf{Y}\in\mathbb{V}$, we have
\begin{align}\label{218}\langle\mathrm{G}(\mathbf{X})-\mathrm{G}(\mathbf{Y}),\mathbf{X}-\mathbf{Y}\rangle\geq 0.\end{align}
\end{lemma}
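In the critical case, the same decomposition shows where the condition $2\beta\mu\geq1$ enters (again we only sketch the estimate): for $\mathbf{w}=\mathbf{X}-\mathbf{Y}$, the Cauchy-Schwarz and Young inequalities give
\begin{align*}
|\langle\mathrm{B}(\mathbf{X})-\mathrm{B}(\mathbf{Y}),\mathbf{w}\rangle|=|\langle\mathrm{B}(\mathbf{w},\mathbf{Y}),\mathbf{w}\rangle|\leq\|\mathbf{w}\|_{\mathbb{V}}\||\mathbf{Y}|\mathbf{w}\|_{\mathbb{H}}\leq\mu\|\mathbf{w}\|_{\mathbb{V}}^2+\frac{1}{4\mu}\||\mathbf{Y}|\mathbf{w}\|_{\mathbb{H}}^2,
\end{align*}
so that, combining this with the term $\mu\|\mathbf{w}\|_{\mathbb{V}}^2$ coming from $\mu\mathrm{A}$ and with the lower bound \eqref{2.23} for $\beta\mathcal{C}$, we obtain
\begin{align*}
\langle\mathrm{G}(\mathbf{X})-\mathrm{G}(\mathbf{Y}),\mathbf{w}\rangle\geq\left(\frac{\beta}{2}-\frac{1}{4\mu}\right)\||\mathbf{Y}|\mathbf{w}\|_{\mathbb{H}}^2\geq0,\ \text{ whenever }\ 2\beta\mu\geq1.
\end{align*}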
\begin{lemma}[Remark 2.4, \cite{MTM7}]
Let $n=2$, $r\in[1,3]$ and $\mathbf{X},\mathbf{Y}\in\mathbb{V}$. Then, for the operator $\mathrm{G}(\mathbf{X})=\mu \mathrm{A}\mathbf{X}+\mathrm{B}(\mathbf{X})+\beta\mathcal{C}(\mathbf{X})$, we have
\begin{align}\label{fe2}
\langle\mathrm{G}(\mathbf{X})-\mathrm{G}(\mathbf{Y}),\mathbf{X}-\mathbf{Y}\rangle+ \frac{27}{32\mu ^3}N^4\|\mathbf{X}-\mathbf{Y}\|_{\mathbb{H}}^2\geq 0,
\end{align}
for all $\mathbf{Y}\in{\mathbb{B}}_N$, where ${\mathbb{B}}_N$ is an $\widetilde{\mathbb{L}}^4$-ball of radius $N$, that is,
$
{\mathbb{B}}_N:=\big\{\mathbf{z}\in\widetilde{\mathbb{L}}^4:\|\mathbf{z}\|_{\widetilde{\mathbb{L}}^4}\leq N\big\}.
$
\end{lemma}
\begin{lemma}[Lemma 2.5, \cite{MTM7}]\label{lem2.8}
The operator $\mathrm{G}:\mathbb{V}\cap\widetilde{\mathbb{L}}^{r+1}\to \mathbb{V}'+\widetilde{\mathbb{L}}^{\frac{r+1}{r}}$ is demicontinuous.
\end{lemma}
\iffalse
\subsection{Abstract formulation and weak solution} We take the Helmholtz-Hodge orthogonal projection $\mathrm{P}_{\mathbb{H}}$ in \eqref{1} to obtain the abstract formulation for $t\in(0,T)$ as:
\begin{equation}\label{kvf}
\left\{
\begin{aligned}
\frac{\d\mathbf{u}(t)}{\d t}+\mu \mathrm{A}\mathbf{u}(t)+\mathrm{B}(\mathbf{u}(t))+\beta\mathcal{C}(\mathbf{u}(t))&=\mathbf{f}(t),\\
\mathbf{u}(0)&=\mathbf{u}_0\in\mathbb{H}\cap\widetilde{\mathbb{L}}^{r+1},
\end{aligned}
\right.
\end{equation}
for $r\geq 3$, where $\mathbf{f}\in\mathrm{L}^2(0,T;\mathbb{V}')$. Strictly speaking one should use $\mathrm{P}_{\mathbb{H}}\mathbf{f}$ instead of $\mathbf{f}$, for simplicity, we use $\mathbf{f}$. Let us now provide the definition of \emph{weak solution} of the system \eqref{kvf} for $r\geq 3$.
\begin{definition}\label{weakd}
For $r\geq 3$, a function $$\mathbf{u}\in\mathrm{C}([0,T];\mathbb{H})\cap\mathrm{L}^2(0,T;\mathbb{V})\cap\mathrm{L}^{r+1}(0,T;\widetilde{\mathbb{L}}^{r+1}),$$ with $\partial_t\mathbf{u}\in\mathrm{L}^{2}(0,T;\mathbb{V}')+\mathrm{L}^{\frac{r+1}{r}}(0,T;\mathbb{L}^{\frac{r+1}{r}}),$ is called a \emph{weak solution} to the system (\ref{kvf}), if for $\mathbf{f}\in\mathrm{L}^2(0,T;\mathbb{V}')$, $\mathbf{u}_0\in\mathbb{H}$ and $\mathbf{v}\in\mathbb{V}\cap\widetilde{\mathbb{L}}^{r+1}$, $\mathbf{u}(\cdot)$ satisfies:
\begin{equation}\label{3.13}
\left\{
\begin{aligned}
\langle\partial_t\mathbf{u}(t)+\mu \mathrm{A}\mathbf{u}(t)+\mathrm{B}(\mathbf{u}(t))+\beta\mathcal{C}(\mathbf{u}(t)),\mathbf{v}\rangle&=\langle\mathbf{f}(t),\mathbf{v}\rangle,\\
\lim_{t\downarrow 0}\int_{\mathcal{O}}\mathbf{u}(t)\mathbf{v}\d x&=\int_{\mathcal{O}}\mathbf{u}_0\mathbf{v}\d x,
\end{aligned}
\right.
\end{equation}
and the energy equality:
\begin{align}\label{237}
\frac{1}{2}\frac{\d}{\d t}\|\mathbf{u}(t)\|^2_{\mathbb{H}}+\mu \|\mathbf{u}(t)\|^2_{\mathbb{V}}+\beta\|\mathbf{u}(t)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}=\langle \mathbf{f}(t),\mathbf{u}(t)\rangle.
\end{align}
\end{definition}
\iffalse
\begin{definition}
For $r\in[2,3)$, a function $$\mathbf{u}\in\mathrm{C}([0,T];\mathbb{H})\cap\mathrm{L}^{\infty}(0,T;\widetilde{\mathbb{L}}^4)\cap\mathrm{L}^2(0,T;\mathbb{V})\cap\mathrm{L}^{r+3}(0,T;\widetilde{\mathbb{L}}^{r+3}),$$ with $\partial_t\mathbf{u}\in\mathrm{L}^{2}(0,T;\mathbb{V}')+\mathrm{L}^{\frac{r+1}{r}}(0,T;\mathbb{L}^{\frac{r+1}{r}}),$ is called a \emph{weak solution} to the system (\ref{kvf}), if for $\mathbf{f}\in\mathrm{L}^2(0,T;\mathbb{V}')\cap\mathrm{L}^{4}(0,T;\widetilde{\mathbb{L}}^{4})$, $\mathbf{u}_0\in\mathbb{H}\cap\widetilde{\mathbb{L}}^4$ and $\mathbf{v}\in\mathbb{V}\cap\widetilde{\mathbb{L}}^{r+1}$, $\mathbf{u}(\cdot)$ satisfies \eqref{3.13}
and the energy equalities \eqref{237} and
\begin{align}\label{238}
\frac{1}{4}\frac{\d}{\d t}\|\mathbf{u}(t)\|_{\widetilde{\mathbb{L}}^{4}}^{4}+3\mu \|\mathbf{u}(t)\nabla\mathbf{u}(t)\|_{\mathbb{H}}^2+\beta\|\mathbf{u}(t)\|_{\widetilde{\mathbb{L}}^{r+3}}^{r+3}=\langle \mathbf{f}(t),|\mathbf{u}(t)|^{2}\mathbf{u}(t)\rangle.
\end{align}
\end{definition}
\begin{definition}
For $r\in[3,5)$, a function $$\mathbf{u}\in\mathrm{C}([0,T];\mathbb{H})\cap\mathrm{L}^{\infty}(0,T;\widetilde{\mathbb{L}}^{7-r})\cap\mathrm{L}^2(0,T;\mathbb{V})\cap\mathrm{L}^{6}(0,T;\widetilde{\mathbb{L}}^{6}),$$ with $\partial_t\mathbf{u}\in\mathrm{L}^{2}(0,T;\mathbb{V}')+\mathrm{L}^{\frac{r+1}{r}}(0,T;\mathbb{L}^{\frac{r+1}{r}}),$ is called a \emph{weak solution} to the system (\ref{kvf}), if for $\mathbf{f}\in\mathrm{L}^2(0,T;\mathbb{V}')\cap\mathrm{L}^{7-r}(0,T;\widetilde{\mathbb{L}}^{7-r})$, $\mathbf{u}_0\in\mathbb{H}\cap\widetilde{\mathbb{L}}^{7-r}$ and $\mathbf{v}\in\mathbb{V}\cap\widetilde{\mathbb{L}}^{r+1}$, $\mathbf{u}(\cdot)$ satisfies \eqref{3.13}
and the energy equalities \eqref{237} and
\begin{align}\label{2.38}
\frac{1}{7-r}\frac{\d}{\d t}\|\mathbf{u}(t)\|_{\widetilde{\mathbb{L}}^{7-r}}^{7-r}+(6-r)\mu \||\mathbf{u}(t)|^{\frac{5-r}{2}}\nabla\mathbf{u}(t)\|_{\mathbb{H}}^2+\beta\|\mathbf{u}(t)\|_{\widetilde{\mathbb{L}}^{6}}^{6}=\langle \mathbf{f}(t),|\mathbf{u}(t)|^{5-r}\mathbf{u}(t)\rangle.
\end{align}
\end{definition}
\fi
\subsection{Energy estimates}\label{sec2.3}
Let
$\{e_1,\ldots,e_n,\ldots\}$ be a complete orthonormal system in
$\mathbb{H}$ belonging to $\mathbb{V}$ and let $\mathbb{H}_n$ be the
$n$-dimensional subspace of $\mathbb{H}$. Let $\mathrm{P}_n$ denote
the orthogonal projection of $\mathbb{V}'$ to $\mathbb{H}_n$, that is, $\mathrm{P}_n\mathbf{x}=\sum_{i=1}^n\langle \mathbf{x},e_i\rangle e_i$. Since every element $\mathbf{x}\in\mathbb{H}$ induces a functional $\mathbf{x}^*\in\mathbb{V}'$ by the formula $\langle \mathbf{x}^*,\mathbf{y}\rangle =(\mathbf{x},\mathbf{y})$, $\mathbf{y}\in\mathbb{V}$, the restriction $\mathrm{P}_n\big|_{\mathbb{H}}$, the orthogonal projection of $\mathbb{H}$ onto $\mathbb{H}_n$, is given by $\mathrm{P}_n\mathbf{x}=\sum_{i=1}^n\left(\mathbf{x},e_i\right)e_i$. Hence, in particular, $\mathrm{P}_n$ is the orthogonal projection from $\mathbb{H}$ onto $\text{span}\{e_1,\ldots,e_n\}$. We define $\mathrm{B}^n(\mathbf{u}^n)=\mathrm{P}_n\mathrm{B}(\mathbf{u}^n)$, $\mathcal{C}^n(\mathbf{u}^n)=\mathrm{P}_n\mathcal{C}(\mathbf{u}^n)$
and $\mathbf{f}^n=\mathrm{P}_n\mathbf{f}$.
With the above setting, let us now consider the following system of ODEs:
\begin{equation}\label{411}
\left\{
\begin{aligned}
\left(\partial_t\mathbf{u}^n(t),\mathbf{v}\right)&=-(\mu \mathrm{A}\mathbf{u}^n(t)+\mathrm{B}^n(\mathbf{u}^n(t))+\mathcal{C}^n(\mathbf{u}^n(t)),\mathbf{v})+(\mathbf{f}^n(t),\mathbf{v}),\\
(\mathbf{u}^n(0),\mathbf{v})&=(\mathbf{u}_0^n,\mathbf{v}),
\end{aligned}
\right.
\end{equation}
with $\mathbf{u}_0^n=\mathrm{P}_n\mathbf{u}_0,$ for all $\mathbf{v}\in\mathbb{H}_n$.
Since $\mathrm{B}^n(\cdot)$ and $\mathcal{C}^n(\cdot)$ are locally Lipschitz (see \eqref{lip} and \eqref{213}), the system \eqref{411} has a unique local solution $\mathbf{u}^n\in\mathrm{C}([0,T^*];\mathbb{H}_n)$ (existence follows from Carath\'eodory's existence theorem and uniqueness is immediate from the local Lipschitz property). Let us now establish the energy estimates satisfied by the system \eqref{411}. Note that the energy estimate is true for any $r\in[1,\infty)$.
\begin{proposition}\label{prop4.5}
Let $\mathbf{u}_0\in \mathbb{H}$ and $\mathbf{f}\in\mathrm{L}^2(0,T;\mathbb{V}')$ be given. Then, for $r\in[1,\infty)$, we have
\begin{align}\label{energy1}
\sup_{t\in[0,T]}\|\mathbf{u}^n(t)\|_{\mathbb{H}}^2+\mu \int_0^T\|\mathbf{u}^n(t)\|_{\mathbb{V}}^2\d t+2\beta\int_0^T\|\mathbf{u}^n(t)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}\d t\leq \|\mathbf{u}_0\|_{\mathbb{H}}^2+\frac{1}{\mu }\int_0^T\|\mathbf{f}(t)\|_{\mathbb{V}'}^2\d t.
\end{align}
\end{proposition}
\begin{proof}
Taking the inner product of the system \eqref{411} with $\mathbf{u}^n(\cdot)$, we obtain
\begin{align*}
&\frac{1}{2}\frac{\d}{\d t}\|\mathbf{u}^n(t)\|_{\mathbb{H}}^2+\mu \|\mathbf{u}^n(t)\|_{\mathbb{V}}^2+\beta\|\mathbf{u}^n(t)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}=(\mathbf{f}^n(t),\mathbf{u}^n(t)),
\end{align*}
where we performed an integration by parts and used the fact that $(\mathrm{B}(\mathbf{u}^n),\mathbf{u}^n)=0$. Integrating the above equality from $0$ to $t$, we find
\begin{align}\label{415}
&\|\mathbf{u}^n(t)\|_{\mathbb{H}}^2+2\mu \int_0^t\|\mathbf{u}^n(s)\|_{\mathbb{V}}^2\d s+2\beta\int_0^t\|\mathbf{u}^n(s)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}\d s\nonumber\\&= \|\mathbf{u}^n_0\|_{\mathbb{H}}^2+2 \int_0^t(\mathbf{f}^n(s),\mathbf{u}^n(s))\d s.
\end{align}
Using Cauchy-Schwarz and Young's inequalities, we estimate $|(\mathbf{f}^n,\mathbf{u}^n)|$ as
\begin{align*}
|(\mathbf{f}^n,\mathbf{u}^n)|\leq \|\mathbf{f}^n\|_{\mathbb{V}'}\|\mathbf{u}^n\|_{\mathbb{V}}\leq \frac{1}{2\mu }\|\mathbf{f}^n\|_{\mathbb{V}'}^2+\frac{\mu }{2}\|\mathbf{u}^n\|_{\mathbb{V}}^2.
\end{align*}
Thus, using the facts that $\|\mathbf{u}_0^n\|_{\mathbb{H}}=\|\mathrm{P}_n\mathbf{u}_0\|_{\mathbb{H}}\leq \|\mathbf{u}_0\|_{\mathbb{H}}$ and $\|\mathbf{f}^n\|_{\mathbb{V}'}\leq \|\mathbf{f}\|_{\mathbb{V}'}$ in \eqref{415}, we infer that
\begin{align}\label{417}
&\|\mathbf{u}^n(t)\|_{\mathbb{H}}^2+\mu \int_0^t\|\mathbf{u}^n(s)\|_{\mathbb{V}}^2\d s+2\beta\int_0^t\|\mathbf{u}^n(s)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}\d s\leq \|\mathbf{u}_0\|_{\mathbb{H}}^2+\frac{1}{\mu }\int_0^t\|\mathbf{f}(s)\|_{\mathbb{V}'}^2\d s,
\end{align}
for all $t\in[0,T]$ and \eqref{energy1} follows.
\end{proof}
\iffalse
\begin{proposition}\label{prop45}
Let $\mathbf{u}_0\in \mathbb{H}\cap\widetilde{\mathbb{L}}^{4}$ and $\mathbf{f}\in\mathrm{L}^2(0,T;\mathbb{V}')\cap\mathrm{L}^{4}(0,T;\widetilde{\mathbb{L}}^{4})$ be given. Then, for $r\in[2,3)$, we have
\begin{align}\label{energy2}
&\sup_{t\in[0,T]}\|\mathbf{u}^n(t)\|_{\widetilde{\mathbb{L}}^{4}}^{4}+6\mu \int_0^T\||\mathbf{u}^n(t)|\nabla\mathbf{u}^n(t)\|_{\mathbb{H}}^2\d t+4\beta \int_0^T\|\mathbf{u}^n(t)\|_{\widetilde{\mathbb{L}}^{r+3}}^{r+3}\d t\nonumber\\&\leq\|\mathbf{u}_0\|_{\widetilde{\mathbb{L}}^{4}}^{4}+\frac{8}{\mu ^3\lambda_1^3}\int_0^T\|\mathbf{f}(t)\|_{\widetilde{\mathbb{L}}^{4}}^{4}\d t.
\end{align}
Let $\mathbf{u}_0\in \mathbb{H}\cap\widetilde{\mathbb{L}}^{7-r}$ and $\mathbf{f}\in\mathrm{L}^2(0,T;\mathbb{V}')\cap\mathrm{L}^{7-r}(0,T;\widetilde{\mathbb{L}}^{7-r})$ be given. Then, for $r\in[3,5)$, we have
\begin{align}\label{energy2a}
&\sup_{t\in[0,T]}\|\mathbf{u}^n(t)\|_{\widetilde{\mathbb{L}}^{7-r}}^{7-r}+\frac{\mu (7-r)(6-r)}{2}\int_0^T\||\mathbf{u}^n(t)|^{\frac{5-r}{2}}\nabla\mathbf{u}^n(t)\|_{\mathbb{H}}^2\d t+4\beta \int_0^T\|\mathbf{u}^n(t)\|_{\widetilde{\mathbb{L}}^{6}}^{6}\d t\nonumber\\&\leq\|\mathbf{u}_0\|_{\widetilde{\mathbb{L}}^{7-r}}^{7-r}+\frac{(7-r)^{6-r}}{(2\mu \lambda_1)^{6-r}}\int_0^T\|\mathbf{f}(t)\|_{\widetilde{\mathbb{L}}^{7-r}}^{7-r}\d t.
\end{align}
\end{proposition}
\begin{proof}
For $p\geq 2$, taking inner product with $|\mathbf{u}^n(\cdot)|^{p-2}\mathbf{u}^n(\cdot)$ to the system \eqref{411}, we obtain
\begin{align}\label{239}
\frac{1}{p}\frac{\d}{\d t}\|\mathbf{u}^n(t)\|_{\widetilde{\mathbb{L}}^p}^p+\mu (p-1)\||\mathbf{u}^n(t)|^{\frac{p-2}{2}}\nabla\mathbf{u}^n(t)\|_{\mathbb{H}}^2+\beta\|\mathbf{u}^n(t)\|_{\widetilde{\mathbb{L}}^{r+p-1}}^{r+p-1}=( \mathbf{f}^n(t),|\mathbf{u}^n(t)|^{p-2}\mathbf{u}^n(t)),
\end{align}
where we used \eqref{24} and \eqref{2.12} also. Once again using H\"older's and Young's inequalities, we estimate $|(\mathbf{f}^n,|\mathbf{u}^n|^{p-2}\mathbf{u}^n)|$ as
\begin{align}\label{240}
|(\mathbf{f}^n,|\mathbf{u}^n|^{p-2}\mathbf{u}^n)|&\leq\|\mathbf{f}^n\|_{\widetilde{\mathbb{L}}^p}\|\mathbf{u}^n\|_{\widetilde{\mathbb{L}}^{p}}^{p-1}\leq \frac{2\mu (p-1)\lambda_1}{p^2}\|\mathbf{u}^n\|_{\widetilde{\mathbb{L}}^p}^p+\frac{p^{p-2}}{(2\mu \lambda_1)^{p-1}}\|\mathbf{f}^n\|_{\widetilde{\mathbb{L}}^p}^p\nonumber\\&\leq\frac{\mu (p-1)}{2}\||\mathbf{u}^n|^{\frac{p-2}{2}}\nabla\mathbf{u}\|_{\mathbb{H}}^2+\frac{p^{p-2}}{(2\mu \lambda_1)^{p-1}}\|\mathbf{f}\|_{\widetilde{\mathbb{L}}^p}^p,
\end{align}
where we used \eqref{2p6} also. Using \eqref{240} in \eqref{239} and the integrating from $0$ to $t$, we find
\begin{align}
&\|\mathbf{u}^n(t)\|_{\widetilde{\mathbb{L}}^p}^p+\frac{\mu p(p-1)}{2}\int_0^t\||\mathbf{u}^n(s)|^{\frac{p-2}{2}}\nabla\mathbf{u}^n(s)\|_{\mathbb{H}}^2\d s+\beta p\int_0^t\|\mathbf{u}^n(s)\|_{\widetilde{\mathbb{L}}^{r+p-1}}^{r+p-1}\d s\nonumber\\&\leq\|\mathbf{u}_0\|_{\widetilde{\mathbb{L}}^p}^p+\frac{p^{p-1}}{(2\mu \lambda_1)^{p-1}}\int_0^t\|\mathbf{f}(s)\|_{\widetilde{\mathbb{L}}^p}^p\d s,
\end{align}
for all $t\in[0,T]$. Hence the estimate \eqref{energy2} follows by taking $p=4$ and the estimate \eqref{energy2a} follows by taking $p=7-r$.
\end{proof}
\fi
\subsection{Global existence and uniqueness} Let us now show that the system \eqref{kvf} has a unique weak solution by making use of the Galerkin approximation technique and the energy estimates obtained in the previous subsection. The monotonicity property of the linear and nonlinear operators obtained in Lemma \ref{thm2.2}, and the Minty-Browder technique are used to obtain the global solvability results.
\begin{theorem}\label{main}
Let $\mathbf{u}_0\in \mathbb{H}$ and $\mathbf{f}\in\mathrm{L}^{2}(0,T;\mathbb{V}')$ be given. Then there exists a unique weak solution to the system (\ref{kvf}) satisfying $$\mathbf{u}\in\mathrm{C}([0,T];\mathbb{H})\cap\mathrm{L}^2(0,T;\mathbb{V})\cap\mathrm{L}^{r+1}(0,T;\widetilde{\mathbb{L}}^{r+1}),$$ for $r>3$.
\end{theorem}
\begin{proof}
\noindent\textbf{Part (1). Existence:} We prove the existence of a weak solution to the system \eqref{kvf} in the following steps.
\vskip 0.1in
\noindent\textbf{Step (i):} \emph{Finite-dimensional (Galerkin) approximation of the system (\ref{kvf}):}
Let $\{e_1,e_2, \ldots\}$ be a complete orthonormal basis in $\mathbb{H}$ belonging to $\mathbb{V}$ as discussed in section \ref{sec2.3}. Let $\mathbb{H}_n$ denote the $n$-dimensional subspace of $\mathbb{H}$. We consider the following system of ODEs in $\mathbb{H}_n$:
\begin{equation}\label{420}
\left\{
\begin{aligned}
\frac{\d}{\d t}(\mathbf{u}^n(t),\mathbf{v})+(\mu \mathrm{A}\mathbf{u}^n(t)+\mathrm{B}^n(\mathbf{u}^n(t))+\beta\mathcal{C}^n(\mathbf{u}^n(t)),\mathbf{v})&=(\mathbf{f}^n(t),\mathbf{v}),\\
(\mathbf{u}^n(0),\mathbf{v})&=(\mathbf{u}_0^n,\mathbf{v}),
\end{aligned}
\right.
\end{equation}
for any $\mathbf{v} \in\mathbb{H}_n$. As in the previous section, we define the operator $\mathrm{G}(\mathbf{u}):=\mu \mathrm{A}\mathbf{u}+\mathrm{B}(\mathbf{u})+\beta\mathcal{C}(\mathbf{u}).$ Then the system \eqref{420} can be rewritten as
\begin{equation}\label{421}
\left\{
\begin{aligned}
\frac{\d}{\d t}(\mathbf{u}^n(t),\mathbf{v})+(\mathrm{G}(\mathbf{u}^n(t)),\mathbf{v})&=(\mathbf{f}^n(t),\mathbf{v}), \\
(\mathbf{u}^n(0),\mathbf{v})&=(\mathbf{u}_0^n,\mathbf{v}),
\end{aligned}
\right.
\end{equation}
for any $\mathbf{v}\in\mathbb{H}_n$. Taking $\mathbf{u}^n(\cdot)$ as the test function in \eqref{421}, we derive the following energy equality:
\begin{align}\label{422}
& \|\mathbf{u}^n(t)\|_{\mathbb{H}}^2+2\int_0^t(\mathrm{G}(\mathbf{u}^n(s))-\mathbf{f}^n(s),\mathbf{u}^n(s))\d s=\|\mathbf{u}^n(0)\|_{\mathbb{H}}^2,
\end{align}
for any $t\in(0,T)$. It can also be shown easily that
\begin{align}\label{eqn42}
&e^{-2\eta t}\|\mathbf{u}^n(t)\|^2_{\mathbb{H}} +2\int_0^te^{-2\eta s}(\mathrm{G}(\mathbf{u}^n(s))-\mathbf{f}^n(s)+\eta\mathbf{u}^n(s),\mathbf{u}^n(s))\d
s=\|\mathbf{u}^n(0)\|_{\mathbb{H}}^2,
\end{align}
for any $t\in(0,T]$, where $\eta=\frac{r-3}{r-1}\left(\frac{2}{\beta\mu (r-1)}\right)^{\frac{2}{r-3}}$.
\vskip 0.1in
\noindent\textbf{Step (ii):} \emph{Weak convergence of the sequences $\mathbf{u}^n(\cdot)$ and $\mathrm{G}(\mathbf{u}^n(\cdot))$:} We know that the dual of $\mathrm{L}^1(0,T;\mathbb{H})$ is $\mathrm{L}^{\infty}(0,T;\mathbb{H})$ (that is, $\left(\mathrm{L}^1(0,T;\mathbb{H})\right)' \cong \mathrm{L}^{\infty}(0,T;\mathbb{H})$) and that the space $\mathrm{L}^1(0,T;\mathbb{H})$ is separable. Moreover, the spaces $\mathrm{L}^2(0,T;\mathbb{V})$ and $\mathrm{L}^{r+1}(0,T;\widetilde{\mathbb{L}}^{r+1})$ are reflexive. Making use of the energy estimates in Proposition \ref{prop4.5} and the Banach-Alaoglu theorem (or Helly's theorem), we can extract subsequences $\{\mathbf{u}^{n_k}(\cdot)\}$ and $\{\mathrm{G}(\mathbf{u}^{n_k}(\cdot))\}$ such that (for notational convenience, we use $\{\mathbf{u}^{n}(\cdot)\}$ and $\{\mathrm{G}(\mathbf{u}^{n}(\cdot))\}$)
\begin{equation}\label{3.20}
\left\{
\begin{aligned}
\mathbf{u}^n&\xrightarrow{w^*}\mathbf{u}\ \text{ in }\ \mathrm{L}^{\infty}(0,T;\mathbb{H}),\\
\mathbf{u}^n&\xrightarrow{w}\mathbf{u}\ \text{ in }\ \mathrm{L}^{2}(0,T;\mathbb{V}),\\
\mathbf{u}^n&\xrightarrow{w}\mathbf{u}\ \text{ in }\ \mathrm{L}^{r+1}(0,T;\widetilde{\mathbb{L}}^{r+1}),\\
\mathrm{G}(\mathbf{u}^n)&\xrightarrow{w}\mathrm{G}_0\ \text{ in }\ \mathrm{L}^{2}(0,T;\mathbb{V}')+\mathrm{L}^{\frac{r+1}{r}}(0,T;\widetilde{\mathbb{L}}^{\frac{r+1}{r}}).
\end{aligned}
\right.\end{equation}
Note that for $r\geq 3$, the final convergence in (\ref{3.20}) can be justified using H\"older's inequality, the interpolation inequality and \eqref{energy1} as follows:
\begin{align}\label{428}
&\left|\int_0^T\langle\mathrm{G}(\mathbf{u}^n(t)),\mathbf{v}(t)\rangle\d t\right|\nonumber\\&\leq\mu \left|\int_0^T(\nabla\mathbf{u}^n(t),\nabla\mathbf{v}(t))\d t\right|+\left|\int_0^T\langle \mathrm{B}(\mathbf{u}^n(t),\mathbf{v}(t)),\mathbf{u}^n(t)\rangle\d t\right|\nonumber\\&\quad+\beta\left|\int_0^T\langle|\mathbf{u}^n(t)|^{r-1}\mathbf{u}^n(t),\mathbf{v}(t)\rangle\d t\right|\nonumber\\&\leq\mu \int_0^T\|\nabla\mathbf{u}^n(t)\|_{\mathbb{H}}\|\nabla\mathbf{v}(t)\|_{\mathbb{H}}\d t+\int_0^T\|\mathbf{u}^n(t)\|_{\widetilde{\mathbb{L}}^{r+1}}\|\mathbf{u}^n(t)\|_{\widetilde{\mathbb{L}}^{\frac{2(r+1)}{r-1}}}\|\mathbf{v}(t)\|_{\mathbb{V}}\d t\nonumber\\&\quad+\beta\int_0^T\|\mathbf{u}^n(t)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r}\|\mathbf{v}(t)\|_{\widetilde{\mathbb{L}}^{r+1}}\d t\nonumber\\&\leq \left[\mu \left(\int_0^T\|\nabla\mathbf{u}^n(t)\|_{\mathbb{H}}^2\d t\right)^{1/2}+\left(\int_0^T\|\mathbf{u}^n(t)\|_{\widetilde{\mathbb{L}}^{r+1}}^{\frac{2(r+1)}{r-1}}\|\mathbf{u}^n(t)\|_{\mathbb{H}}^{\frac{2(r-3)}{r-1}}\d t\right)^{1/2}\right]\left(\int_0^T\|\mathbf{v}(t)\|_{\mathbb{V}}^2\d t\right)^{1/2}\nonumber\\&\quad+\beta\left(\int_0^T\|\mathbf{u}^n(t)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}\d t\right)^{\frac{r}{r+1}}\left(\int_0^T\|\mathbf{v}(t)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}\d t\right)^{\frac{1}{r+1}}\nonumber\\&\leq \left[\mu \left(\int_0^T\|\mathbf{u}^n(t)\|_{\mathbb{V}}^2\d t\right)^{1/2}+\left(\int_0^T\|\mathbf{u}^n(t)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}\d t\right)^{\frac{1}{r-1}}\left(\int_0^T\|\mathbf{u}^n(t)\|_{\mathbb{H}}^2\d t\right)^{\frac{r-3}{2(r-1)}}\right]\left(\int_0^T\|\mathbf{v}(t)\|_{\mathbb{V}}^2\d t\right)^{1/2}\nonumber\\&\quad+\beta\left(\int_0^T\|\mathbf{u}^n(t)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}\d t\right)^{\frac{r}{r+1}}\left(\int_0^T\|\mathbf{v}(t)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}\d t\right)^{\frac{1}{r+1}}\nonumber\\&\leq C(\|\mathbf{u}_0\|_{\mathbb{H}},\mu ,T,\beta,\|\mathbf{f}\|_{\mathrm{L}^2(0,T;\mathbb{V}')})\left[\left(\int_0^T\|\mathbf{v}(t)\|_{\mathbb{V}}^2\d t\right)^{1/2}+\left(\int_0^T\|\mathbf{v}(t)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}\d t\right)^{\frac{1}{r+1}}\right],
\end{align}
for all $\mathbf{v}\in\mathrm{L}^2(0,T;\mathbb{V})\cap\mathrm{L}^{r+1}(0,T;\widetilde{\mathbb{L}}^{r+1})$. Note also that
\begin{align*}
&\left|\int_0^T\langle\partial_t\mathbf{u}^n(t),\mathbf{v}(t)\rangle\d t\right|\nonumber\\&\leq \left|\int_0^T\langle\mathrm{G}(\mathbf{u}^n(t)),\mathbf{v}(t)\rangle\d t\right|+\left|\int_0^T\langle\mathbf{f}(t),\mathbf{v}(t)\rangle\d t\right|\nonumber\\&\leq C(\|\mathbf{u}_0\|_{\mathbb{H}},\mu ,T,\beta,\|\mathbf{f}\|_{\mathrm{L}^2(0,T;\mathbb{V}')})\left[\left(\int_0^T\|\mathbf{v}(t)\|_{\mathbb{V}}^2\d t\right)^{1/2}+\left(\int_0^T\|\mathbf{v}(t)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}\d t\right)^{\frac{1}{r+1}}\right]\nonumber\\&\quad +\left(\int_0^T\|\mathbf{f}(t)\|_{\mathbb{V}'}^2\d t\right)^{1/2}\left(\int_0^T\|\mathbf{v}(t)\|_{\mathbb{V}}^2\d t\right)^{1/2},
\end{align*}
for all $\mathbf{v}\in\mathrm{L}^2(0,T;\mathbb{V})\cap\mathrm{L}^{r+1}(0,T;\widetilde{\mathbb{L}}^{r+1})$. Thus, it is immediate that
\begin{align} \partial_t\mathbf{u}^n\xrightarrow{w}\partial_t\mathbf{u}\ \text{ in } \ \mathrm{L}^2(0,T;\mathbb{V}')+\mathrm{L}^{\frac{r+1}{r}}(0,T;\widetilde{\mathbb{L}}^{\frac{r+1}{r}}).
\end{align} Note that $\mathbf{f}^n\to\mathbf{f}$ in $\mathrm{L}^2(0,T;\mathbb{V}')$. Passing to the limit $n\to\infty$ in (\ref{421}), the limit $\mathbf{u}(\cdot)$ satisfies:
\begin{equation}\label{eqn43}
\left\{
\begin{aligned}
\frac{\d\mathbf{u}(t)}{\d t}+\mathrm{G}_0(t)&=\mathbf{f}(t), \ \textrm{ in
}\ \mathrm{L}^2(0,T;\mathbb{V}')+\mathrm{L}^{\frac{r+1}{r}}(0,T;\widetilde{\mathbb{L}}^{\frac{r+1}{r}}),\\
\mathbf{u}(0)&=\mathbf{u}_0.
\end{aligned}
\right.
\end{equation}
The above system (\ref{eqn43}) satisfies the following energy equality:
\begin{align}\label{eqn44}
\|\mathbf{u}(t)\|_{\mathbb{H}}^2+2\int_0^t\langle \mathrm{G}_0(s)-\mathbf{f}(s),\mathbf{u}(s)\rangle \d
s=\|\mathbf{u}(0)\|_{\mathbb{H}}^2,
\end{align}
for any $t\in (0,T]$. Moreover, we have the following estimate:
\begin{align}\label{eqn45}
&e^{-2\eta t}\|\mathbf{u}(t)\|_{\mathbb{H}}^2+2\int_0^te^{-2\eta t}\langle \mathrm{G}_0(s)-\mathbf{f}(s)+\eta\mathbf{u}(s),\mathbf{u}(s)\rangle \d
s=\|\mathbf{u}(0)\|_{\mathbb{H}}^2,
\end{align}
for any $t\in(0,T]$. Recall that $\mathbf{u}^n(0)=\mathrm{P}_n\mathbf{u}(0)$, and hence the initial value $\mathbf{u}^n(0)$ converges to $\mathbf{u}(0)$ strongly in $\mathbb{H}$, that is, we have
\begin{align}\label{eqn46}
\lim_{n\to\infty}\|\mathbf{u}^n(0)-\mathbf{u}(0)\|_{\mathbb{H}}=0.
\end{align}
\vskip 0.1in
\noindent\textbf{Step (iii).} \emph{Minty-Browder technique:} For any
$\mathbf{v}\in\mathrm{L}^{\infty}(0,T;\mathbb{H}_m)$ with
$m<n$, using the local monotonicity property (see \eqref{fe}), we obtain
\begin{align}\label{eqn48}
&\int_0^{T}e^{-2\eta t}\left\{\langle \mathrm{G}(\mathbf{v}(t))-\mathrm{G}(\mathbf{u}^n(t)),\mathbf{v}(t)-\mathbf{u}^n(t)\rangle
+\eta\left(\mathbf{v}(t)-\mathbf{u}^n(t),\mathbf{v}(t)-\mathbf{u}^n(t)\right)\right\}\d
t\geq 0.
\end{align}
Applying the energy equality (\ref{eqn42}) in \eqref{eqn48}, we find
\begin{align}\label{eqn49}
&\int_0^{T}e^{-2\eta t}\langle \mathrm{G}(\mathbf{v}(t))+\eta\mathbf{v}(t),\mathbf{v}(t)-\mathbf{u}^n(t)\rangle \d
t\nonumber\\&\geq
\int_0^{T}e^{-2\eta t}\langle \mathrm{G}(\mathbf{u}^n(t))+\eta\mathbf{u}^n(t),\mathbf{v}(t)-\mathbf{u}^n(t)\rangle \d
t\nonumber\\&=\int_0^{T}e^{-2\eta t}
\langle \mathrm{G}(\mathbf{u}^n(t))+\eta\mathbf{u}^n(t),\mathbf{v}(t)\rangle \d t
\nonumber\\&\quad+\frac{1}{2}\Big(e^{-2\eta T}\|\mathbf{u}^n(T)\|_{\mathbb{H}}^2-\|\mathbf{u}^n(0)\|_{\mathbb{H}}^2\Big)
-\int_0^{T}e^{-2\eta t}\langle \mathbf{f}^n(t),\mathbf{u}^n(t)\rangle \d t.
\end{align}
Taking limit infimum on both sides of (\ref{eqn49}), we further have
\begin{align}\label{eqn52}
&\int_0^{T}e^{-2\eta t}\langle \mathrm{G}(\mathbf{v}(t))+\eta\mathbf{v}(t),\mathbf{v}(t)-\mathbf{u}(t)\rangle \d
t\nonumber\\&\geq \int_0^{T}e^{-2\eta t}
\langle \mathrm{G}_0(t)+\eta\mathbf{u}(t),\mathbf{v}(t)\rangle \d
t\nonumber\\&\quad+\frac{1}{2}\liminf_{n\to\infty}\Big(e^{-2\eta T}\|\mathbf{u}^n(T)\|_{\mathbb{H}}^2-\|\mathbf{u}^n(0)\|_{\mathbb{H}}^2\Big)
-\int_0^{T}e^{-2\eta t}\langle \mathbf{f}(t),\mathbf{u}(t)\rangle \d t.
\end{align}
Making use of the lower semicontinuity property of the $\mathbb{H}$-norm, and the strong convergence of the initial data (see \eqref{eqn46}), we get
\begin{align}\label{eqn53}
&\liminf_{n\to\infty}\Big\{e^{-2\eta T}\|\mathbf{u}^n(T)\|_{\mathbb{H}}^2-\|\mathbf{u}^n(0)\|_{\mathbb{H}}^2\Big\}\geq
e^{-2\eta T}\|\mathbf{u}(T)\|^2_{\mathbb{H}}-\|\mathbf{u}(0)\|^2_{\mathbb{H}}.
\end{align}
Applying (\ref{eqn53}) and the energy equality (\ref{eqn45}) in (\ref{eqn52}), we deduce that
\begin{align}\label{eqn55}
&\int_0^{T}e^{-2\eta t}\langle \mathrm{G}(\mathbf{v}(t))+\eta\mathbf{v}(t),\mathbf{v}(t)-\mathbf{u}(t)\rangle \d
t\nonumber\\&\geq \int_0^{T}e^{-2\eta t}
\langle \mathrm{G}_0(t)+\eta\mathbf{u}(t),\mathbf{v}(t)\rangle \d
t+\frac{1}{2}\Big( e^{-2\eta T}\|\mathbf{u}(T)\|^2_{\mathbb{H}}-\|\mathbf{u}(0)\|^2_{\mathbb{H}}\Big)
-\int_0^{T}e^{-2\eta t}\langle \mathbf{f}(t),\mathbf{u}(t)\rangle \d
t\nonumber\\&=\int_0^{T}e^{-2\eta t}
\langle \mathrm{G}_0(t)+\eta\mathbf{u}(t),\mathbf{v}(t)\rangle \d
t-\int_0^{T}e^{-2\eta t}\langle \mathrm{G}_0(t)+\eta\mathbf{u}(t),\mathbf{u}(t)\rangle \d
t\nonumber\\&=\int_0^{T}e^{-2\eta t}
\langle \mathrm{G}_0(t)+\eta\mathbf{u}(t),\mathbf{v}(t)-\mathbf{u}(t)\rangle \d t.
\end{align}
Note that the estimate (\ref{eqn55}) holds true for any
$\mathbf{v}\in\mathrm{L}^{\infty}(0,T;\mathbb{H}_m)$, $m\in\mathbb{N}$, since the inequality given in (\ref{eqn55}) is
independent of both $m$ and $n$. Using a density
argument, one can show that the inequality (\ref{eqn55}) remains true for any
$$\mathbf{v}\in\mathrm{L}^{\infty}(0,T;\mathbb{H})\cap\mathrm{L}^2(0,T;\mathbb{V})\cap\mathrm{L}^{r+1}(0, T ; \widetilde{\mathbb{L}}^{r+1}).$$ In fact, for any
$\mathbf{v}\in\mathrm{L}^{\infty}(0,T;\mathbb{H})\cap\mathrm{L}^2(0,T;\mathbb{V})\cap\mathrm{L}^{r+1}(0, T ; \widetilde{\mathbb{L}}^{r+1}),$ there exists a sequence $\{\mathbf{v}_m\}_{m\in\mathbb{N}}$, with $\mathbf{v}_m\in\mathrm{L}^{\infty}(0,T;\mathbb{H}_m)$, converging strongly to $\mathbf{v}$ in $\mathrm{L}^{\infty}(0,T;\mathbb{H})\cap\mathrm{L}^2(0,T;\mathbb{V})\cap\mathrm{L}^{r+1}(0, T ; \widetilde{\mathbb{L}}^{r+1}),$ and each $\mathbf{v}_m$ satisfies the inequality in (\ref{eqn55}).
Taking $\mathbf{v}=\mathbf{u}+\lambda\mathbf{w}$, $\lambda>0$, where
$\mathbf{w}\in\mathrm{L}^{\infty}(0,T;\mathbb{H})\cap\mathrm{L}^2(0,T;\mathbb{V})\cap\mathrm{L}^{r+1}(0, T ; \widetilde{\mathbb{L}}^{r+1}),$ and
substituting for $\mathbf{v}$ in (\ref{eqn55}), we obtain
\begin{align}\label{eqn56}
\int_0^{T}e^{-2\eta t}\langle \mathrm{G}(\mathbf{u}(t)+\lambda\mathbf{w}(t))-\mathrm{G}_0(t)+\eta\lambda\mathbf{w}(t),\lambda\mathbf{w}(t)\rangle \d
t\geq 0.
\end{align}
Dividing the inequality in (\ref{eqn56}) by $\lambda$, using the
hemicontinuity property of the operator $\mathrm{G}(\cdot)$ (see Lemma \ref{lem2.8}), and then passing to the limit $\lambda\to 0$, we get
\begin{align}\label{eqn60}
\int_0^{T}e^{-2\eta t}\langle \mathrm{G}(\mathbf{u}(t))-\mathrm{G}_0(t),\mathbf{w}(t)\rangle \d
t\geq 0,
\end{align}
for any
$\mathbf{w}\in\mathrm{L}^{\infty}(0,T;\mathbb{H})\cap\mathrm{L}^2(0,T;\mathbb{V})\cap\mathrm{L}^{r+1}(0, T ; \widetilde{\mathbb{L}}^{r+1}).$
It should be noted that the term
${\int_0^{T}}e^{-2\eta t} \eta\lambda\langle\mathbf{w}(t),\mathbf{w}(t)\rangle \d
t$ in \eqref{eqn56} tends to $0$ as $\lambda\to0$.
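In more detail, dividing \eqref{eqn56} by $\lambda>0$ gives
\begin{align*}
\int_0^{T}e^{-2\eta t}\langle \mathrm{G}(\mathbf{u}(t)+\lambda\mathbf{w}(t))-\mathrm{G}_0(t)+\eta\lambda\mathbf{w}(t),\mathbf{w}(t)\rangle \d t\geq 0,
\end{align*}
and the hemicontinuity of $\mathrm{G}(\cdot)$ allows one to pass to the limit as $\lambda\to 0$ in the first term of the integrand, which yields \eqref{eqn60}.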
Finally, since \eqref{eqn60} holds for every such $\mathbf{w}$, replacing $\mathbf{w}$ by $-\mathbf{w}$ gives the reverse inequality, and we deduce that $\mathrm{G}(\mathbf{u}(t))=\mathrm{G}_0(t)$ for a.e. $t\in[0,T]$. Moreover, $\mathbf{u}(\cdot)$ satisfies
\begin{align}\label{energy3}
\sup_{t\in[0,T]}\|\mathbf{u}(t)\|_{\mathbb{H}}^2+\mu \int_0^T\|\mathbf{u}(t)\|_{\mathbb{V}}^2\d t+2\beta\int_0^T\|\mathbf{u}(t)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}\d t\leq \|\mathbf{u}_0\|_{\mathbb{H}}^2+\frac{1}{\mu }\int_0^T\|\mathbf{f}(t)\|_{\mathbb{V}'}^2\d t.
\end{align}
Hence, we get $$\mathbf{u}\in\mathrm{L}^{\infty}(0,T;\mathbb{H})\cap\mathrm{L}^2(0,T;\mathbb{V})\cap\mathrm{L}^{r+1}(0, T ; \widetilde{\mathbb{L}}^{r+1}),$$ with $\partial_t\mathbf{u}\in\mathrm{L}^2(0,T;\mathbb{V}')+\mathrm{L}^{\frac{r+1}{r}}(0,T;\widetilde{\mathbb{L}}^{\frac{r+1}{r}})$.
Since $\mathbf{u}\in\mathrm{L}^{2}(0,T;\mathbb{V})\cap\mathrm{L}^{r+1}(0,T;\widetilde{\mathbb{L}}^{r+1})$ and $\partial_t\mathbf{u}\in\mathrm{L}^2(0,T;\mathbb{V}')+\mathrm{L}^{\frac{r+1}{r}}(0,T;\widetilde{\mathbb{L}}^{\frac{r+1}{r}})$, an application of Theorem 3, Section 5.9.2, \cite{Evans} yields $\mathbf{u}\in \mathrm{C}([0,T];\mathbb{H})$.
Thus
$\mathbf{u}(\cdot)$ is a weak solution of the system (\ref{kvf}) with $\mathbf{u}(0)=\mathbf{u}_0$.
\vskip 0.2cm
\noindent\textbf{Part (2). Uniqueness:} Let $\mathbf{u}_1(\cdot)$ and $\mathbf{u}_2(\cdot)$ be two weak solutions of the system (\ref{kvf}) satisfying \eqref{energy3}. Then $\mathbf{u}_1(\cdot)-\mathbf{u}_2(\cdot)$ satisfies:
\begin{align}\label{346}
&\|\mathbf{u}_1(t)-\mathbf{u}_2(t)\|_{\mathbb{H}}^2+2\mu \int_0^t\|\mathbf{u}_1(s)-\mathbf{u}_2(s)\|_{\mathbb{V}}^2\d s\nonumber\\&=\|\mathbf{u}_1(0)-\mathbf{u}_2(0)\|_{\mathbb{H}}^2-2\int_0^t\langle\mathrm{B}(\mathbf{u}_1(s))-\mathrm{B}(\mathbf{u}_2(s)),\mathbf{u}_1(s)-\mathbf{u}_2(s)\rangle\d s\nonumber\\&\quad -2\beta\int_0^t\langle\mathcal{C}(\mathbf{u}_1(s))-\mathcal{C}(\mathbf{u}_2(s)),\mathbf{u}_1(s)-\mathbf{u}_2(s)\rangle\d s.
\end{align}
Note that $\langle\mathrm{B}(\mathbf{u}_1)-\mathrm{B}(\mathbf{u}_2),\mathbf{u}_1-\mathbf{u}_2\rangle=\langle\mathrm{B}(\mathbf{u}_1-\mathbf{u}_2,\mathbf{u}_2),\mathbf{u}_1-\mathbf{u}_2\rangle,$ since $\langle\mathrm{B}(\mathbf{u}_1,\mathbf{u}_1-\mathbf{u}_2),\mathbf{u}_1-\mathbf{u}_2\rangle=0$.
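For completeness, the identity follows from the bilinearity of $\mathrm{B}(\cdot,\cdot)$, namely
\begin{align*}
\mathrm{B}(\mathbf{u}_1)-\mathrm{B}(\mathbf{u}_2)=\mathrm{B}(\mathbf{u}_1,\mathbf{u}_1-\mathbf{u}_2)+\mathrm{B}(\mathbf{u}_1-\mathbf{u}_2,\mathbf{u}_2),
\end{align*}
and the first term on the right-hand side vanishes when paired with $\mathbf{u}_1-\mathbf{u}_2$, by the orthogonality property recalled above.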
An estimate similar to \eqref{2.30} yields
\begin{align}\label{347}
&|\langle\mathrm{B}(\mathbf{u}_1-\mathbf{u}_2,\mathbf{u}_2),\mathbf{u}_1-\mathbf{u}_2\rangle|\leq\frac{\mu }{2}\|\mathbf{u}_1-\mathbf{u}_2\|_{\mathbb{V}}^2+\frac{\beta}{2}\||\mathbf{u}_2|^{\frac{r-1}{2}}(\mathbf{u}_1-\mathbf{u}_2)\|_{\mathbb{H}}^2+\eta\|\mathbf{u}_1-\mathbf{u}_2\|_{\mathbb{H}}^2,
\end{align}
for $r>3$. A calculation similar to \eqref{2.23} gives
\begin{align}\label{261}
&\beta\langle\mathcal{C}(\mathbf{u}_1)-\mathcal{C}(\mathbf{u}_2),\mathbf{u}_1-\mathbf{u}_2\rangle\geq \frac{\beta}{2}\||\mathbf{u}_2|^{\frac{r-1}{2}}(\mathbf{u}_1-\mathbf{u}_2)\|_{\mathbb{H}}^2.
\end{align}
Using (\ref{347}) and (\ref{261}) in \eqref{346}, we arrive at
\begin{align}\label{3.18}
&\|\mathbf{u}_1(t)-\mathbf{u}_2(t)\|_{\mathbb{H}}^2+\mu \int_0^t\|\mathbf{u}_1(s)-\mathbf{u}_2(s)\|_{\mathbb{V}}^2\d s\nonumber\\&\leq\|\mathbf{u}_1(0)-\mathbf{u}_2(0)\|_{\mathbb{H}}^2+2\eta\int_0^t\|\mathbf{u}_1(s)-\mathbf{u}_2(s)\|_{\mathbb{H}}^2\d s.
\end{align}
Applying Gronwall's inequality in (\ref{3.18}), we get
\begin{align}\label{269}
\|\mathbf{u}_1(t)-\mathbf{u}_2(t)\|_{\mathbb{H}}^2&\leq \|\mathbf{u}_1(0)-\mathbf{u}_2(0)\|_{\mathbb{H}}^2e^{2\eta T},
\end{align}
and hence the uniqueness follows by taking $\mathbf{u}_1(0)=\mathbf{u}_2(0)$ in \eqref{269}.
\end{proof}
Let us now sketch the proof of the existence and uniqueness of a weak solution for the system \eqref{kvf} in the critical case $r=3$, for $2\beta\mu \geq 1$. This condition means that when both the viscosity of the fluid and the porosity of the porous medium are sufficiently large, the corresponding flow stays bounded and regular.
\begin{theorem}\label{main1}
Let $\mathbf{u}_0\in \mathbb{H}$ and $\mathbf{f}\in\mathrm{L}^{2}(0,T;\mathbb{V}')$ be given. Then, for $2\beta\mu \geq 1$, there exists a unique weak solution to the system (\ref{kvf}) satisfying $$\mathbf{u}\in\mathrm{C}([0,T];\mathbb{H})\cap\mathrm{L}^2(0,T;\mathbb{V})\cap\mathrm{L}^{4}(0,T;\widetilde{\mathbb{L}}^{4}).$$
\end{theorem}
\begin{proof}
One can proceed along the same lines as in the proof of Theorem \ref{main}. In the critical case $r=3$, note that the operator $\mathrm{G}(\cdot)$ is monotone (see \eqref{218}), and hence we only provide a short proof in this case. Using the convergences given in \eqref{3.20}, we take the limit supremum in \eqref{422} to find
\begin{align}\label{263}
&\limsup_{n\to\infty}\int_0^t(\mathrm{G}(\mathbf{u}^n(s)),\mathbf{u}^n(s))\d s\nonumber\\&=\limsup_{n\to\infty}\left\{\frac{1}{2}\left[\|\mathbf{u}^n(0)\|_{\mathbb{H}}^2-\|\mathbf{u}^n(t)\|_{\mathbb{H}}^2\right]+\int_0^t(\mathbf{f}^n(s),\mathbf{u}^n(s))\d s\right\}\nonumber\\&\leq \frac{1}{2}\left[\limsup_{n\to\infty}\|\mathbf{u}^n(0)\|_{\mathbb{H}}^2-\liminf_{n\to\infty}\|\mathbf{u}^n(t)\|_{\mathbb{H}}^2\right]+\limsup_{n\to\infty}\int_0^t(\mathbf{f}^n(s),\mathbf{u}^n(s))\d s\nonumber\\&\leq\frac{1}{2}\left[\|\mathbf{u}_0\|_{\mathbb{H}}^2-\|\mathbf{u}(t)\|_{\mathbb{H}}^2\right]+\int_0^t\langle\mathbf{f}(s),\mathbf{u}(s)\rangle\d s=\int_0^t\langle \mathrm{G}_0(s),\mathbf{u}(s)\rangle \d
s,
\end{align}
for all $t\in(0,T]$, where we used \eqref{eqn44}, the weak lower-semicontinuity property of the $\mathbb{H}$-norm, and strong convergence of the initial data and forcing. Since $\mathrm{G}(\cdot)$ is a monotone operator, from \eqref{218}, we also have
\begin{align*}
\int_0^t\langle\mathrm{G}(\mathbf{u}^n(s))-\mathrm{G}(\mathbf{v}(s)),\mathbf{u}^n(s)-\mathbf{v}(s)\rangle\d s\geq 0.
\end{align*}
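Writing out the left-hand side of the above inequality (a routine expansion, recorded here for readability), we have
\begin{align*}
\int_0^t\langle\mathrm{G}(\mathbf{u}^n(s))-\mathrm{G}(\mathbf{v}(s)),\mathbf{u}^n(s)-\mathbf{v}(s)\rangle\d s
&=\int_0^t(\mathrm{G}(\mathbf{u}^n(s)),\mathbf{u}^n(s))\d s-\int_0^t\langle\mathrm{G}(\mathbf{u}^n(s)),\mathbf{v}(s)\rangle\d s\\
&\quad-\int_0^t\langle\mathrm{G}(\mathbf{v}(s)),\mathbf{u}^n(s)-\mathbf{v}(s)\rangle\d s,
\end{align*}
where the last two integrals pass to the limit by the weak convergences of $\mathbf{u}^n$ and $\mathrm{G}(\mathbf{u}^n)$ to $\mathbf{u}$ and $\mathrm{G}_0$, respectively (cf. \eqref{3.20}).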
Taking limit supremum on both sides and using \eqref{263}, we further find
\begin{align}
\int_0^t\langle \mathrm{G}_0(s)-\mathrm{G}(\mathbf{v}(s)),\mathbf{u}(s)-\mathbf{v}(s)\rangle \d
s\geq 0.
\end{align}
Taking $\mathbf{u}-\mathbf{v}=\lambda\mathbf{w}$, for $\lambda>0$, dividing by $\lambda$, exploiting the hemicontinuity property of the operator $\mathrm{G}(\cdot)$, and then letting $\lambda\to 0$, we finally obtain $\mathrm{G}_0(t)=\mathrm{G}(\mathbf{u}(t))$. Uniqueness follows by using the estimate \eqref{232} instead of \eqref{347} and arguing as in Part (2) of the proof of Theorem \ref{main}.
\end{proof}
\begin{remark}
1. For $n=2$ and $r=3$, one can prove Theorem \ref{main1} for any $\beta,\mu >0$ by exploiting the local monotonicity property (see \eqref{fe2}) and a localized version of the Minty-Browder technique (see, for instance, \cite{BMS,MTM} for more details).
2. Note that Theorems \ref{main} and \ref{main1} do not use compactness arguments, and hence the results obtained in both theorems remain valid in unbounded domains, including Poincar\'e domains and the whole space $\mathbb{R}^n$ (with the function spaces modified appropriately).
\end{remark}
\iffalse
\begin{remark}
1. For $2\leq r< 3$ along with $\mathbf{u}_0\in \mathbb{H}\cap\widetilde{\mathbb{L}}^{4}$ and $\mathbf{f}\in\mathrm{L}^2(0,T;\mathbb{V}')\cap\mathrm{L}^{4}(0,T;\widetilde{\mathbb{L}}^{4}),$ the results of the Theorem
\ref{main} hold true with
$$\mathbf{u}\in\mathrm{C}([0,T];\mathbb{H})\cap\mathrm{L}^{\infty}(0,T;\widetilde{\mathbb{L}}^{4})\cap\mathrm{L}^2(0,T;\mathbb{V})\cap\mathrm{L}^{r+3}(0,T;\widetilde{\mathbb{L}}^{r+3}),$$ and
\begin{align}\label{energy4}
&\sup_{t\in[0,T]}\|\mathbf{u}(t)\|_{\widetilde{\mathbb{L}}^{4}}^{4}+6\mu \int_0^T\||\mathbf{u}(t)|\nabla\mathbf{u}(t)\|_{\mathbb{H}}^2\d t+4\beta \int_0^T\|\mathbf{u}(t)\|_{\widetilde{\mathbb{L}}^{r+3}}^{r+3}\d t\nonumber\\&\leq\|\mathbf{u}_0\|_{\widetilde{\mathbb{L}}^{4}}^{4}+\frac{8}{\mu ^3\lambda_1^3}\int_0^T\|\mathbf{f}(t)\|_{\widetilde{\mathbb{L}}^{4}}^{4}\d t.
\end{align}
Let us now discuss the major differences in the proof of the Theorem \ref{main} for $2\leq r<3$. One has to use energy estimate given in the Proposition \ref{prop45}, which provides two more convergences in \eqref{3.20}, namely
\begin{align}
\mathbf{u}^n&\xrightarrow{w^*}\mathbf{u}\ \text{ in }\ \mathrm{L}^{\infty}(0,T;\widetilde{\mathbb{L}}^4),\\
\mathbf{u}^n&\xrightarrow{w}\mathbf{u}\ \text{ in }\ \mathrm{L}^{r+3}(0,T;\widetilde{\mathbb{L}}^{r+3}).
\end{align}
One has to define $\eta t$ given in \eqref{eqn47} as $ \eta t:= 2\left(\frac{7}{4\mu }\right)^7\int_0^t\|\mathbf{v}(s)\|_{\widetilde{\mathbb{L}}^4}^8\d s,$ and use the local monotonicity result \eqref{233} given in the Remark \ref{rem2.7} in place of \eqref{eqn48}. The function $\eta t$ can be controlled as
\begin{align*}
\eta t\leq 2\left(\frac{7}{4\mu }\right)^7\sup_{t\in[0,T]}\|\mathbf{v}(t)\|_{\widetilde{\mathbb{L}}^4}^4\int_0^T\|\mathbf{v}(t)\|_{\widetilde{\mathbb{L}}^4}^4\d t.
\end{align*}
In the uniqueness part, one has to replace \eqref{347} with \eqref{2.28} and the inequality \eqref{269} becomes
\begin{align}\label{2.69}
\|\mathbf{u}_1(t)-\mathbf{u}_2(t)\|_{\mathbb{H}}^2&\leq \|\mathbf{u}_1(0)-\mathbf{u}_2(0)\|_{\mathbb{H}}^2\exp\left(4\left(\frac{7}{4\mu }\right)^7\sup_{t\in[0,T]}\|\mathbf{u}_2(t)\|_{\widetilde{\mathbb{L}}^4}^4\int_0^T\|\mathbf{u}_2(t)\|_{\widetilde{\mathbb{L}}^4}^4\d t\right),
\end{align}
and hence the uniqueness follows.
2. For $3\leq r< 5$, if we take $\mathbf{u}_0\in \mathbb{H}\cap\widetilde{\mathbb{L}}^{7-r}$ and $\mathbf{f}\in\mathrm{L}^2(0,T;\mathbb{V}')\cap\mathrm{L}^{7-r}(0,T;\widetilde{\mathbb{L}}^{7-r}),$ then one can prove in a similar way as in the Theorem
\ref{main} that
$$\mathbf{u}\in\mathrm{C}([0,T];\mathbb{H})\cap\mathrm{L}^{\infty}(0,T;\widetilde{\mathbb{L}}^{7-r})\cap\mathrm{L}^2(0,T;\mathbb{V})\cap\mathrm{L}^{6}(0,T;\widetilde{\mathbb{L}}^{6}).$$
\end{remark}
\fi
For the regularity of the weak solution, due to technical difficulties, we consider the problem in periodic domains or on the whole space $\mathbb{R}^n$. The major difficulty in working with bounded domains is that $\mathrm{P}_{\mathbb{H}}(|\mathbf{u}|^{r-1}\mathbf{u})$ need not vanish on the boundary, and $\mathrm{P}_{\mathbb{H}}$ and $-\Delta$ do not commute. In periodic domains or on the whole space $\mathbb{R}^n$, the operators $\mathrm{P}_{\mathbb{H}}$ and $-\Delta$ commute. The following result is available in the literature (see, for instance, \cite{KWH}), and for the sake of completeness, we provide a proof in the case of periodic domains.
\begin{theorem}[Regularity]
Let $\mathcal{O}$ be a periodic domain. For $\mathbf{u}_0\in\mathbb{V}$ and $\mathbf{f}\in\mathrm{L}^2(0,T;\mathbb{H})$, any weak solution $\mathbf{u}(\cdot)$ to the system \eqref{kvf} with $r>2$ satisfies the following regularity: $$\mathbf{u}\in\mathrm{C}([0,T];\mathbb{V})\cap\mathrm{L}^2(0,T;\mathrm{D}(\mathrm{A})),$$ and hence $\mathbf{u}(\cdot)$ is a \emph{strong solution} to the system \eqref{kvf}.
For $r=3$, $\mathbf{u}(\cdot)$ is a \emph{strong solution} to the system \eqref{kvf} with $\mathbf{u}\in\mathrm{C}([0,T];\mathbb{V})\cap\mathrm{L}^2(0,T;\mathrm{D}(\mathrm{A})),$ provided $2\beta\mu \geq 1$.
\end{theorem}
\begin{proof}
Taking the inner product of the first equation in \eqref{kvf} with $\mathrm{A}\mathbf{u}(\cdot)$, we find
\begin{align}\label{273}
\frac{1}{2}\frac{\d}{\d t}\|\mathbf{u}(t)\|_{\mathbb{V}}^2+\mu \|\mathrm{A}\mathbf{u}(t)\|_{\mathbb{H}}^2&=-(\mathrm{B}(\mathbf{u}(t)),\mathrm{A}\mathbf{u}(t))-\beta(\mathcal{C}(\mathbf{u}(t)),\mathrm{A}\mathbf{u}(t))-(\mathbf{f}(t),\mathrm{A}\mathbf{u}(t)).
\end{align}
We estimate the first term on the right-hand side of the equality \eqref{273} using H\"older's and Young's inequalities as
\begin{align}\label{275}
|(\mathrm{B}(\mathbf{u}),\mathrm{A}\mathbf{u})|&\leq\||\mathbf{u}||\nabla\mathbf{u}|\|_{\mathbb{H}}\|\mathrm{A}\mathbf{u}\|_{\mathbb{H}}\leq\frac{\mu }{4}\|\mathrm{A}\mathbf{u}\|_{\mathbb{H}}^2+\frac{1}{\mu }\||\mathbf{u}||\nabla\mathbf{u}|\|_{\mathbb{H}}^2.
\end{align}
For $r>3$, we estimate the final term from \eqref{275} using H\"older's and Young's inequalities as
\begin{align*}
& \int_{\mathcal{O}}|\mathbf{u}(x)|^2|\nabla\mathbf{u}(x)|^2\d x\nonumber\\&=\int_{\mathcal{O}}|\mathbf{u}(x)|^2|\nabla\mathbf{u}(x)|^{\frac{4}{r-1}}|\nabla\mathbf{u}(x)|^{\frac{2(r-3)}{r-1}}\d x\nonumber\\&\leq\left(\int_{\mathcal{O}}|\mathbf{u}(x)|^{r-1}|\nabla\mathbf{u}(x)|^2\d x\right)^{\frac{2}{r-1}}\left(\int_{\mathcal{O}}|\nabla\mathbf{u}(x)|^2\d x\right)^{\frac{r-3}{r-1}}\nonumber\\&\leq{\beta\mu }\left(\int_{\mathcal{O}}|\mathbf{u}(x)|^{r-1}|\nabla\mathbf{u}(x)|^2\d x\right)+\frac{r-3}{r-1}\left(\frac{2}{\beta\mu (r-1)}\right)^{\frac{2}{r-3}}\left(\int_{\mathcal{O}}|\nabla\mathbf{u}(x)|^2\d x\right).
\end{align*}
Using Cauchy-Schwarz and Young's inequalities, we estimate $|(\mathbf{f},\mathrm{A}\mathbf{u})|$ as
\begin{align}\label{276}
|(\mathbf{f},\mathrm{A}\mathbf{u})|\leq\|\mathbf{f}\|_{\mathbb{H}}\|\mathrm{A}\mathbf{u}\|_{\mathbb{H}}\leq\frac{\mu }{4}\|\mathrm{A}\mathbf{u}\|_{\mathbb{H}}^2+\frac{1}{\mu }\|\mathbf{f}\|_{\mathbb{H}}^2.
\end{align}
Substituting \eqref{275}, \eqref{219} and \eqref{276} in \eqref{273}, and then integrating from $0$ to $t$, we obtain
\begin{align}\label{277}
& \|\mathbf{u}(t)\|_{\mathbb{V}}^2+\mu \int_0^t\|\mathrm{A}\mathbf{u}(s)\|_{\mathbb{H}}^2\d s+\beta\int_0^t\||\mathbf{u}(s)|^{\frac{r-1}{2}}|\nabla\mathbf{u}(s)|\|_{\mathbb{H}}^2\d s\nonumber\\&\leq\|\mathbf{u}_0\|_{\mathbb{V}}^2+\frac{2}{\mu }\int_0^t\|\mathbf{f}(s)\|_{\mathbb{H}}^2\d s+\frac{2(r-3)}{\mu (r-1)}\left(\frac{2}{\beta\mu (r-1)}\right)^{\frac{2}{r-3}}\int_0^t\|\mathbf{u}(s)\|_{\mathbb{V}}^2\d s,
\end{align}
for all $t\in[0,T]$. An application of Gronwall's inequality gives
\begin{align}\label{2pp77}
& \|\mathbf{u}(t)\|_{\mathbb{V}}^2\leq\left\{\|\mathbf{u}_0\|_{\mathbb{V}}^2+\frac{2}{\mu }\int_0^T\|\mathbf{f}(t)\|_{\mathbb{H}}^2\d t\right\}\exp\left\{\frac{2(r-3)}{\mu (r-1)}\left(\frac{2}{\beta\mu (r-1)}\right)^{\frac{2}{r-3}}T\right\},
\end{align}
for all $t\in[0,T]$. Thus, it is immediate that
\begin{align}\label{2p77}
&\sup_{t\in[0,T]} \|\mathbf{u}(t)\|_{\mathbb{V}}^2+\mu \int_0^T\|\mathrm{A}\mathbf{u}(t)\|_{\mathbb{H}}^2\d t+\beta\int_0^T\||\mathbf{u}(t)|^{\frac{r-1}{2}}|\nabla\mathbf{u}(t)|\|_{\mathbb{H}}^2\d t\nonumber\\&\leq \left\{\|\mathbf{u}_0\|_{\mathbb{V}}^2+\frac{2}{\mu }\int_0^T\|\mathbf{f}(t)\|_{\mathbb{H}}^2\d t\right\}\exp\left\{\frac{4(r-3)}{\mu (r-1)}\left(\frac{2}{\beta\mu (r-1)}\right)^{\frac{2}{r-3}}T\right\}.
\end{align}
\iffalse
For $r\geq 6$, one can easily obtain from the above estimate that
\begin{align}\label{2.77}
&\sup_{t\in[0,T]} \|\mathbf{u}(t)\|_{\mathbb{V}}^2+\mu \int_0^T\|\mathrm{A}\mathbf{u}(t)\|_{\mathbb{H}}^2\d t+2\beta(r-1)\int_0^T\||\mathbf{u}(t)|^{\frac{r-1}{2}}|\nabla\mathbf{u}(t)|\|_{\mathbb{H}}^2\d t\nonumber\\&\leq\|\mathbf{u}_0\|_{\mathbb{V}}^2+\frac{2}{\mu }\int_0^T\|\mathbf{f}(t)\|_{\mathbb{H}}^2\d t\nonumber\\&\quad+T^{\frac{r-6}{r-2}}\frac{2C(r-2)}{4(r+1)}\left(\frac{3(r+2)}{\mu (r+1)}\right)^{\frac{3(r+2)}{r-2}}\sup_{t\in[0,T]}\|\mathbf{u}(t)\|_{\mathbb{H}}^2\left(\int_0^T\|\mathbf{u}(t)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}\d t\right)^{\frac{4}{r-2}}\nonumber\\&\leq C(\|\mathbf{u}_0\|_{\mathbb{V}},T,\mu ,\beta,\|\mathbf{f}\|_{\mathrm{L}^2(0,T;\mathbb{H})}).
\end{align}
For $r\geq 5$ (in fact by taking $r=5$ in \eqref{275}), the inequality \eqref{277} can be rewritten as
\begin{align}\label{2.88}
& \|\mathbf{u}(t)\|_{\mathbb{V}}^2+\mu \int_0^t\|\mathrm{A}\mathbf{u}(s)\|_{\mathbb{H}}^2\d s+2\beta(r-1)\int_0^t\||\mathbf{u}(s)|^{\frac{r-1}{2}}|\nabla\mathbf{u}(s)|\|_{\mathbb{H}}^2\d s\nonumber\\&\leq\|\mathbf{u}_0\|_{\mathbb{V}}^2+\frac{2}{\mu }\int_0^t\|\mathbf{f}(s)\|_{\mathbb{H}}^2\d s+\frac{C}{4}\left(\frac{7}{2\mu }\right)^7\int_0^t\|\mathbf{u}(s)\|_{\mathbb{H}}^2\|\mathbf{u}(s)\|_{\widetilde{\mathbb{L}}^{6}}^{6}\|\mathbf{u}(s)\|_{\mathbb{V}}^2\d s.
\end{align}
An application of Gronwall's inequality in \eqref{2.88} yields
\begin{align}\label{2.89}
\|\mathbf{u}(t)\|_{\mathbb{V}}^2\leq\left(\|\mathbf{u}_0\|_{\mathbb{V}}^2+\frac{2}{\mu }\int_0^t\|\mathbf{f}(s)\|_{\mathbb{H}}^2\d s\right)\exp\left\{\frac{C}{4}\left(\frac{7}{2\mu }\right)^7\sup_{t\in[0,T]}\|\mathbf{u}(t)\|_{\mathbb{H}}^2\int_0^T\|\mathbf{u}(t)\|_{\widetilde{\mathbb{L}}^6}^6\d t\right\},
\end{align}
for all $t\in[0,T]$. Thus, we have
\begin{align}\label{2.90}
&\sup_{t\in[0,T]} \|\mathbf{u}(t)\|_{\mathbb{V}}^2+\mu \int_0^T\|\mathrm{A}\mathbf{u}(t)\|_{\mathbb{H}}^2\d t+\frac{8\lambda_1\beta(r-1)}{r^2}\int_0^T\||\mathbf{u}(t)|^{\frac{r-1}{2}}|\nabla\mathbf{u}(t)|\|_{\mathbb{H}}^2\d s\nonumber\\&\leq\left(\|\mathbf{u}_0\|_{\mathbb{V}}^2+\frac{2}{\mu }\int_0^t\|\mathbf{f}(s)\|_{\mathbb{H}}^2\d s\right)\exp\left\{\frac{C}{2}\left(\frac{7}{2\mu }\right)^7\sup_{t\in[0,T]}\|\mathbf{u}(t)\|_{\mathbb{H}}^2\int_0^T\|\mathbf{u}(t)\|_{\widetilde{\mathbb{L}}^6}^6\d t\right\}\nonumber\\&\leq C(\|\mathbf{u}_0\|_{\mathbb{V}},T,\mu ,\beta,\|\mathbf{f}\|_{\mathrm{L}^2(0,T;\mathbb{H})}),
\end{align}
where we used \eqref{2p6}.
\fi
For $r=3$, we estimate $|(\mathrm{B}(\mathbf{u}),\mathrm{A}\mathbf{u})|$ as
\begin{align}\label{291}
|(\mathrm{B}(\mathbf{u}),\mathrm{A}\mathbf{u})|&\leq\|(\mathbf{u}\cdot\nabla)\mathbf{u}\|_{\mathbb{H}}\|\mathrm{A}\mathbf{u}\|_{\mathbb{H}}\leq\frac{1}{4\theta}\|\mathrm{A}\mathbf{u}\|_{\mathbb{H}}^2+\theta\||\mathbf{u}||\nabla\mathbf{u}|\|_{\mathbb{H}}^2.
\end{align}
A calculation similar to \eqref{277} gives
\begin{align}\label{292}
& \|\mathbf{u}(t)\|_{\mathbb{V}}^2+\left(\mu -\frac{1}{2\theta}\right)\int_0^t\|\mathrm{A}\mathbf{u}(s)\|_{\mathbb{H}}^2\d s+2\left(\beta-\theta\right)\int_0^t\||\mathbf{u}(s)||\nabla\mathbf{u}(s)|\|_{\mathbb{H}}^2\d s\nonumber\\&\leq\|\mathbf{u}_0\|_{\mathbb{V}}^2+\frac{1}{\mu }\int_0^t\|\mathbf{f}(s)\|_{\mathbb{H}}^2\d s.
\end{align}
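Here $\theta>0$ is the auxiliary constant introduced through Young's inequality in \eqref{291}; both coefficients on the left-hand side of \eqref{292} are nonnegative provided
\begin{align*}
\frac{1}{2\mu}\leq\theta\leq\beta,
\end{align*}
and such a choice of $\theta$ is possible precisely when $2\beta\mu\geq 1$.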
For $2\beta\mu \geq 1$, it is immediate that
\begin{align}\label{2.94}
&\sup_{t\in[0,T]} \|\mathbf{u}(t)\|_{\mathbb{V}}^2+\left(\mu -\frac{1}{2\theta}\right)\int_0^T\|\mathrm{A}\mathbf{u}(t)\|_{\mathbb{H}}^2\d t+\left(\beta-\theta\right)\int_0^T\||\mathbf{u}(t)||\nabla\mathbf{u}(t)|\|_{\mathbb{H}}^2\d t\nonumber\\&\leq \left\{\|\mathbf{u}_0\|_{\mathbb{V}}^2+\frac{2}{\mu }\int_0^T\|\mathbf{f}(t)\|_{\mathbb{H}}^2\d t\right\}.
\end{align}
Taking the inner product of the first equation in \eqref{kvf} with $\partial_t\mathbf{u}(\cdot)$, we obtain
\begin{align}\label{295}
\|\partial_t\mathbf{u}(t)\|_{\mathbb{H}}^2+\frac{\mu }{2}\frac{\d}{\d t}\|\nabla\mathbf{u}(t)\|_{\mathbb{H}}^2+(\mathrm{B}(\mathbf{u}(t)),\partial_t\mathbf{u}(t))+\beta(\mathcal{C}(\mathbf{u}(t)),\partial_t\mathbf{u}(t))=(\mathbf{f}(t),\partial_t\mathbf{u}(t)).
\end{align}
It can be easily seen that
\begin{align*}
(\mathcal{C}(\mathbf{u}(t)),\partial_t\mathbf{u}(t))=(\mathrm{P}_{\mathbb{H}}(|\mathbf{u}(t)|^{r-1}\mathbf{u}(t)),\partial_t\mathbf{u}(t))=\frac{1}{r+1}\frac{\d}{\d t}\|\mathbf{u}(t)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}.
\end{align*}
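Indeed, since $\mathrm{P}_{\mathbb{H}}$ is self-adjoint and $\partial_t\mathbf{u}(t)\in\mathbb{H}$, a pointwise computation gives (a short verification, recorded for completeness)
\begin{align*}
(\mathrm{P}_{\mathbb{H}}(|\mathbf{u}|^{r-1}\mathbf{u}),\partial_t\mathbf{u})=\int_{\mathcal{O}}|\mathbf{u}(x,t)|^{r-1}\mathbf{u}(x,t)\cdot\partial_t\mathbf{u}(x,t)\d x=\frac{1}{r+1}\int_{\mathcal{O}}\partial_t|\mathbf{u}(x,t)|^{r+1}\d x.
\end{align*}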
We estimate the terms $|(\mathrm{B}(\mathbf{u}),\partial_t\mathbf{u})|$ and $|(\mathbf{f},\partial_t\mathbf{u})|$ using H\"older's and Young's inequalities as
\begin{align*}
|(\mathrm{B}(\mathbf{u}),\partial_t\mathbf{u})|&\leq\|(\mathbf{u}\cdot\nabla)\mathbf{u}\|_{\mathbb{H}}\|\partial_t\mathbf{u}\|_{\mathbb{H}}\leq\frac{1}{4}\|\partial_t\mathbf{u}\|_{\mathbb{H}}^2+\||\mathbf{u}||\nabla\mathbf{u}|\|_{\mathbb{H}}^2, \\
|(\mathbf{f},\partial_t\mathbf{u})|&\leq\|\mathbf{f}\|_{\mathbb{H}}\|\partial_t\mathbf{u}\|_{\mathbb{H}}\leq\frac{1}{4}\|\partial_t\mathbf{u}\|_{\mathbb{H}}^2+\|\mathbf{f}\|_{\mathbb{H}}^2.
\end{align*}
From \eqref{295}, we have
\begin{align}\label{296}
& \frac{1}{2}\|\partial_t\mathbf{u}(t)\|_{\mathbb{H}}^2+\frac{\beta}{r+1}\frac{\d}{\d t}\|\mathbf{u}(t)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}+\frac{\mu }{2}\frac{\d}{\d t}\|\mathbf{u}(t)\|_{\mathbb{V}}^2\leq\|\mathbf{f}(t)\|_{\mathbb{H}}^2+\||\mathbf{u}(t)||\nabla\mathbf{u}(t)|\|_{\mathbb{H}}^2.
\end{align}
Integrating the above inequality from $0$ to $t$, we further have
\begin{align}\label{297}
&\frac{\beta}{r+1} \|\mathbf{u}(t)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}+\frac{\mu }{2}\|\mathbf{u}(t)\|_{\mathbb{V}}^2+\frac{1}{2}\int_0^t\|\partial_t\mathbf{u}(s)\|_{\mathbb{H}}^2\d s\nonumber\\&\leq \frac{\beta}{r+1}\|\mathbf{u}_0\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}+\frac{\mu }{2}\|\mathbf{u}_0\|_{\mathbb{V}}^2+\int_0^t\|\mathbf{f}(s)\|_{\mathbb{H}}^2\d s+\int_0^t\||\mathbf{u}(s)||\nabla\mathbf{u}(s)|\|_{\mathbb{H}}^2\d s,
\end{align}
for all $t\in[0,T]$. Note that the final term on the right-hand side of \eqref{297} is bounded for all $r\geq 3$ (see \eqref{2.90} and \eqref{2.94}).
Let us now consider
\begin{align}
&\int_0^T\|\partial_t\mathbf{u}(t)\|_{\mathbb{H}}^2\d t\nonumber\\&\leq 4\mu ^2\int_0^T\|\mathrm{A}\mathbf{u}(t)\|_{\mathbb{H}}^2\d t+4\int_0^T\|\mathrm{B}(\mathbf{u}(t))\|_{\mathbb{H}}^2\d t+4\beta^2\int_0^T\|\mathcal{C}(\mathbf{u}(t))\|_{\mathbb{H}}^2\d t+4\int_0^T\|\mathbf{f}(t)\|_{\mathbb{H}}^2\d t\nonumber\\&\leq 4\mu ^2\int_0^T\|\mathrm{A}\mathbf{u}(t)\|_{\mathbb{H}}^2\d t+4\sup_{t\in[0,T]}\|\mathbf{u}(t)\|_{\mathbb{H}}^{1/2}\sup_{t\in[0,T]}\|\mathbf{u}(t)\|_{\mathbb{V}}^2\int_0^T\|\mathrm{A}\mathbf{u}(t)\|_{\mathbb{H}}^{3/2}\d t\nonumber\\&\quad+4\beta^2\sup_{t\in[0,T]}\|\mathbf{u}(t)\|_{\mathbb{V}}^{\frac{6(r-1)}{(r+7)}}\int_0^T\|\mathbf{u}(t)\|_{\widetilde{\mathbb{L}}^{r+1}}^{\frac{2(r+3)(r+1)}{(r+7)}}\d t+4\int_0^T\|\mathbf{f}(t)\|_{\mathbb{H}}^2\d t,
\end{align}
and hence $\partial_t\mathbf{u}\in\mathrm{L}^2(0,T;\mathbb{H})$.
\end{proof}
Note that \begin{align}\label{280}x^2\leq 1+x^{r-1},\ \text{ for all }\ x\geq 0\ \text{ and }\ r\geq 3.\end{align} Indeed, $x^2\leq 1$ for $x\leq 1$, while $x^2\leq x^{r-1}$ for $x\geq 1$ since $r\geq 3$. Using this and the energy estimates \eqref{energy3} and \eqref{energy4}, we estimate $\||\mathbf{u}||\nabla\mathbf{u}|\|_{\mathrm{L}^2(0,T;\mathbb{H})}^2$ as
\begin{align}\label{278}
& \int_0^T\int_{\mathcal{O}}|\mathbf{u}(x,t)|^2|\nabla\mathbf{u}(x,t)|^2\d x\d t\nonumber\\&\leq \int_0^T\int_{\mathcal{O}}|\nabla\mathbf{u}(x,t)|^2\d x\d t+\int_0^T\int_{\mathcal{O}}|\mathbf{u}(x,t)|^{r-1}|\nabla\mathbf{u}(x,t)|^2\d x\d t\\&\leq \frac{1}{\mu }\left(\|\mathbf{u}_0\|_{\mathbb{H}}^2+\frac{1}{\mu \lambda_1}\int_0^T\|\mathbf{f}(t)\|_{\mathbb{H}}^2\d t\right)\nonumber\\&\quad+\frac{2}{\mu r(r+1)}\left(\|\mathbf{u}_0\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}+\frac{(r+1)^{r+1}}{(2\mu \lambda_1)^{r+1}}\int_0^T\|\mathbf{f}(t)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}\d t\right). \nonumber
\end{align}
Substituting \eqref{278} in \eqref{277}, we arrive at
\begin{align}
&\sup_{t\in[0,T]} \|\mathbf{u}(t)\|_{\mathbb{V}}^2+\mu \int_0^T\|\mathrm{A}\mathbf{u}(t)\|_{\mathbb{H}}^2\d t+2\beta(r-1)\int_0^T\||\mathbf{u}(t)|^{\frac{r-1}{2}}|\nabla\mathbf{u}(t)|\|_{\mathbb{H}}^2\d t\nonumber\\&\leq\|\mathbf{u}_0\|_{\mathbb{V}}^2+\frac{2}{\mu }\int_0^T\|\mathbf{f}(t)\|_{\mathbb{H}}^2\d t+\frac{2}{\mu ^2}\left(\|\mathbf{u}_0\|_{\mathbb{H}}^2+\frac{1}{\mu \lambda_1}\int_0^T\|\mathbf{f}(t)\|_{\mathbb{H}}^2\d t\right)\nonumber\\&\quad+\frac{4}{\mu ^2 r(r+1)}\left(\|\mathbf{u}_0\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}+\frac{(r+1)^{r+1}}{(2\mu \lambda_1)^{r+1}}\int_0^T\|\mathbf{f}(t)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}\d t\right).
\end{align}
Hence, we get $\mathbf{u}\in\mathrm{C}([0,T];\mathbb{V})\cap\mathrm{L}^2(0,T;\mathrm{D}(\mathrm{A}))$.
Let us define $\mathbf{v}(\cdot)=\partial_t\mathbf{u}(\cdot)$. Then differentiating \eqref{kvf} with respect to $t$, we find
\begin{align}
\partial_t\mathbf{v}(t)=-\mu \mathrm{A}\mathbf{v}(t)-\mathrm{B}(\mathbf{v}(t),\mathbf{u}(t))-\mathrm{B}(\mathbf{u}(t),\mathbf{v}(t))-\beta r\widetilde{\mathcal{C}}(\mathbf{u}(t))\mathbf{v}(t)+\mathbf{f}_t(t),
\end{align}
where $\widetilde{\mathcal{C}}(\mathbf{u})\mathbf{v}=\mathrm{P}_{\mathbb{H}}|\mathbf{u}|^{r-1}\mathbf{v}$ and $\mathbf{v}(0)=-\mu \mathrm{A}\mathbf{u}_0-\mathrm{B}(\mathbf{u}_0)-\beta\mathcal{C}(\mathbf{u}_0)+\mathbf{f}(0)$. Taking inner product with $\mathbf{v}(\cdot)$, we get
\begin{align}\label{2.96}
\frac{1}{2}\frac{\d}{\d t}\|\mathbf{v}(t)\|_{\mathbb{H}}^2+\mu \|\nabla\mathbf{v}(t)\|_{\mathbb{H}}^2+\beta r(\widetilde{\mathcal{C}}(\mathbf{u}(t))\mathbf{v}(t),\mathbf{v}(t))&=-(\mathrm{B}(\mathbf{v}(t),\mathbf{u}(t)),\mathbf{v}(t))+(\mathbf{f}_t(t),\mathbf{v}(t))
\end{align}
Note that $(\widetilde{\mathcal{C}}(\mathbf{u})\mathbf{v},\mathbf{v})\geq\||\mathbf{u}|^{\frac{r-1}{2}}|\mathbf{v}|\|_{\mathbb{H}}^2\geq 0$. A calculation similar to \eqref{bes} gives
\begin{align}
|(\mathrm{B}(\mathbf{v},\mathbf{u}),\mathbf{v})|\leq \frac{\mu }{4}\|\mathbf{v}\|_{\mathbb{V}}^2+\frac{C(r-2)}{2(r+1)}\left(\frac{2(r+4)}{\mu (r+1)}\right)^{\frac{r+4}{r-2}}\|\mathbf{u}\|_{\widetilde{\mathbb{L}}^{r+1}}^{\frac{2(r+1)}{r-2}}\|\mathbf{v}\|_{\mathbb{H}}^2,
\end{align}
for $r\geq 4$. Using H\"older's and Young's inequalities, we estimate $|(\mathbf{f}_t,\mathbf{v})|$ as
\begin{align}
|(\mathbf{f}_t,\mathbf{v})|&\leq\|\mathbf{f}_t\|_{\mathbb{V}'}\|\mathbf{v}\|_{\mathbb{V}}\leq\frac{1}{\mu }\|\mathbf{f}_t\|_{\mathbb{V}'}^2+\frac{\mu }{4}\|\mathbf{v}\|_{\mathbb{V}}^2.
\end{align}
Thus from \eqref{2.96}, we obtain
\begin{align}\label{299}
& \|\mathbf{v}(t)\|_{\mathbb{H}}^2+\mu \int_0^t\|\mathbf{v}(s)\|_{\mathbb{V}}^2\d s\nonumber\\&\leq\|\mathbf{v}(0)\|_{\mathbb{H}}^2+\frac{2}{\mu }\int_0^t\|\mathbf{f}_t(s)\|_{\mathbb{V}'}^2\d s+\frac{C(r-2)}{(r+1)}\left(\frac{2(r+4)}{\mu (r+1)}\right)^{\frac{r+4}{r-2}}\int_0^t\|\mathbf{u}(s)\|_{\widetilde{\mathbb{L}}^{r+1}}^{\frac{2(r+1)}{r-2}}\|\mathbf{v}(s)\|_{\mathbb{H}}^2\d s.
\end{align}
An application of Gronwall's inequality in \eqref{299} yields
\begin{align}
\|\mathbf{v}(t)\|_{\mathbb{H}}^2\leq\left\{\|\mathbf{v}(0)\|_{\mathbb{H}}^2+\frac{2}{\mu }\int_0^T\|\mathbf{f}_t(t)\|_{\mathbb{V}'}^2\d t\right\}\exp\left\{\frac{C(r-2)}{(r+1)}\left(\frac{2(r+4)}{\mu (r+1)}\right)^{\frac{r+4}{r-2}}\int_0^T\|\mathbf{u}(t)\|_{\widetilde{\mathbb{L}}^{r+1}}^{\frac{2(r+1)}{r-2}}\d t\right\},
\end{align}
for all $t\in[0,T]$. Thus, from \eqref{299}, it is immediate that
\begin{align}
& \sup_{t\in[0,T]} \|\mathbf{v}(t)\|_{\mathbb{H}}^2+\mu \int_0^T\|\mathbf{v}(t)\|_{\mathbb{V}}^2\d t\nonumber\\&\leq \left\{\|\mathbf{v}(0)\|_{\mathbb{H}}^2+\frac{2}{\mu }\int_0^T\|\mathbf{f}_t(t)\|_{\mathbb{V}'}^2\d t\right\}\exp\left\{\frac{CT^{\frac{r-2}{r-4}}(r-2)}{(r+1)}\left(\frac{2(r+4)}{\mu (r+1)}\right)^{\frac{r-4}{r-2}}\left(\int_0^T\|\mathbf{u}(t)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}\d t\right)^{\frac{2}{r-2}}\right\}.
\end{align}
For $r=3$, we estimate $|(\mathrm{B}(\mathbf{v},\mathbf{u}),\mathbf{v})|$ as
\begin{align}
|(\mathrm{B}(\mathbf{v},\mathbf{u}),\mathbf{v})|\leq\|\mathbf{v}\|_{\mathbb{V}}\||\mathbf{v}||\mathbf{u}|\|_{\mathbb{H}}\leq\frac{\mu }{4}\|\mathbf{v}\|_{\mathbb{V}}^2+\frac{2}{\mu }\||\mathbf{u}||\mathbf{v}|\|_{\mathbb{H}}^2.
\end{align}
Thus, for $\beta\mu \geq 1$, from \eqref{2.96}, we obtain
\begin{align}
& \|\mathbf{v}(t)\|_{\mathbb{H}}^2+\mu \int_0^t\|\mathbf{v}(s)\|_{\mathbb{V}}^2\d s+2\left(\beta-\frac{1}{\mu }\right)\int_0^t\||\mathbf{u}(s)||\mathbf{v}(s)|\|_{\mathbb{H}}^2\d s\nonumber\\&\leq \|\mathbf{v}(0)\|_{\mathbb{H}}^2+\frac{2}{\mu }\int_0^t\|\mathbf{f}_t(s)\|_{\mathbb{V}'}^2\d s,
\end{align}
for all $t\in[0,T]$.
\fi
\iffalse
\begin{remark}\label{rem2.12}
1. If we take $\mathbf{u}_0\in\mathbb{V}\cap\widetilde{\mathbb{L}}^{r+1}$ and $\mathbf{f}\in\mathrm{L}^2(0,T;\mathbb{H})\cap\mathrm{L}^{r+1}(0,T;\widetilde{\mathbb{L}}^{r+1})$, then one can obtain the following regularity of the weak solution to the problem \eqref{kvf}. In this case, $\mathbf{u}(\cdot)$ is the unique strong solution to the system \eqref{kvf}.
2. Let us now discuss about the inviscid limit as $\beta\to 0$ in \eqref{kvf}. We know that there exists a weak solution $\mathbf{v}\in\mathrm{C}([0,T];\mathbb{H})\cap\mathrm{L}^2(0,T;\mathbb{V})$ to the 3D Navier-Stokes system:
\begin{equation}\label{285}
\left\{
\begin{aligned}
\frac{\d\mathbf{v}(t)}{\d t}+\mu \mathrm{A}\mathbf{v}(t)+\mathrm{B}(\mathbf{v}(t))&=\mathbf{f}(t),\\
\mathbf{u}(0)&=\mathbf{u}_0\in\mathbb{H},
\end{aligned}
\right.
\end{equation}
for $\mathbf{f}\in\mathrm{L}^2(0,T;\mathbb{V}')$. Let us take $\mathbf{u}_0\in\mathbb{H}\cap\widetilde{\mathbb{L}}^{r+1}$ and $\mathbf{f}\in\mathrm{L}^2(0,T;\mathbb{V}')\cap\mathrm{L}^{r+1}(0,T;\widetilde{\mathbb{L}}^{r+1})$. Then, a weak solution $\mathbf{v}(\cdot)$ to the $3D$ Navier-Stokes equation satisfies $\mathbf{v}\in\mathrm{C}([0,T];\mathbb{H})\cap\mathrm{L}^{\infty}(0,T;\widetilde{\mathbb{L}}^{r+1})\cap\mathrm{L}^2(0,T;\mathbb{V})$. Let us define $\mathbf{w}=\mathbf{u}-\mathbf{v}$. Then $\mathbf{w}$ satisfies
\begin{equation}\label{286}
\left\{
\begin{aligned}
\frac{\d\mathbf{w}(t)}{\d t}+\mu \mathrm{A}\mathbf{w}(t)+\mathrm{B}(\mathbf{u}(t))-\mathrm{B}(\mathbf{v}(t))&=-\beta\mathrm{P}_{\mathbb{H}}(|\mathbf{u}(t)|^{r-1}\mathbf{u}(t)),\\
\mathbf{w}(0)&=\mathbf{0}.
\end{aligned}
\right.
\end{equation}
Taking inner product with $\mathbf{w}(\cdot)$ in \eqref{286}, we obtain
\begin{align}\label{287}
\frac{1}{2}\frac{\d}{\d t}\|\mathbf{w}(t)\|_{\mathbb{H}}^2+\mu \|\nabla\mathbf{w}(t)\|_{\mathbb{H}}^2&=\langle\mathrm{B}(\mathbf{u}(t))-\mathrm{B}(\mathbf{v}(t)),\mathbf{w}(t)\rangle-\beta\langle|\mathbf{u}(t)|^{r-1}\mathbf{u}(t),\mathbf{w}(t)\rangle.
\end{align}
A calculation similar to the estimate \eqref{bes} implies
\begin{align}\label{288}
|\langle\mathrm{B}(\mathbf{u})-\mathrm{B}(\mathbf{v}),\mathbf{w}\rangle|&\leq \frac{\mu }{2}\|\mathbf{w}\|_{\mathbb{V}}^2+2\left(\frac{7}{4\mu }\right)^7\|\mathbf{u}\|_{\widetilde{\mathbb{L}}^4}^8\|\mathbf{w}\|_{\mathbb{H}}^2.
\end{align}
The final term from the equality \eqref{287} can be estimated using H\"older's and Young's inequalities as
\begin{align}\label{289}
\beta|\langle|\mathbf{u}|^{r-1}\mathbf{u},\mathbf{w}\rangle|\leq\beta\||\mathbf{u}|^r\|_{\mathbb{H}}\|\mathbf{w}\|_{\mathbb{H}}\leq\frac{\beta}{2}\|\mathbf{w}\|_{\mathbb{H}}^2+\frac{\beta}{2}\|\mathbf{u}\|_{\widetilde{\mathbb{L}}^{2r}}^{2r}.
\end{align}
Using \eqref{288} and \eqref{289} in \eqref{287} and then integrating from $0$ to $t$, we arrive at
\begin{align}\label{290}
&\|\mathbf{w}(t)\|_{\mathbb{H}}^2+\mu \int_0^t\|\nabla\mathbf{w}(s)\|_{\mathbb{H}}^2\d s\nonumber\\&\leq 4\left(\frac{7}{4\mu }\right)^7\int_0^t\|\mathbf{u}(s)\|_{\widetilde{\mathbb{L}}^4}^8\|\mathbf{w}(s)\|_{\mathbb{H}}^2\d s+\beta\int_0^t\|\mathbf{w}(s)\|_{\mathbb{H}}^2\d s+\beta\int_0^t\|\mathbf{u}(s)\|_{\widetilde{\mathbb{L}}^{2r}}^{2r}\d s.
\end{align}
Applying Gronwall's inequality in \eqref{290}, we find
\begin{align}
\|\mathbf{w}(t)\|_{\mathbb{H}}^2\leq \beta\int_0^t\|\mathbf{u}(s)\|_{\widetilde{\mathbb{L}}^{2r}}^{2r}\d se^{\beta t}\exp\left\{4\left(\frac{7}{4\mu }\right)^7\sup_{s\in[0,t]}\|\mathbf{u}(s)\|_{\widetilde{\mathbb{L}}^4}^4\int_0^t\|\mathbf{u}(s)\|_{\widetilde{\mathbb{L}}^4}^4\d s\right\},
\end{align}
and hence the weak solution $\mathbf{u}(\cdot)$ of the system \eqref{kvf} (see \eqref{energy3} and \eqref{energy4}) converges to a weak solution of the 3D Navier-Stokes system \eqref{286} as $\beta\to 0$.
\end{remark}
\begin{remark}\label{rem2.13}
1. For $n=2$ and $r\in[2,\infty)$, from the estimate \eqref{2.8}, we have
\begin{align}
\|\mathbf{u}\|_{\widetilde{\mathbb{L}}^{2r}}\leq C\|\mathbf{u}\|_{\mathbb{V}}^{1-\frac{1}{r}}\|\mathbf{u}\|_{\mathbb{H}}^{\frac{1}{r}}\leq C\|\mathbf{u}\|_{\mathbb{V}},
\end{align}
for all $\mathbf{u}\in\mathbb{V}\cap\widetilde{\mathbb{L}}^{2r}$. Thus, a function $$\mathbf{u}\in\mathrm{C}([0,T];\mathbb{H})\cap\mathrm{L}^2(0,T;\mathbb{V})\cap\mathrm{L}^{r+1}(0,T;\widetilde{\mathbb{L}}^{r+1}),$$ for $r\geq 2$ with $\partial_t\mathbf{u}\in\mathrm{L}^{2}(0,T;\mathbb{V}'),$ is called a \emph{weak solution} to the system (\ref{kvf}), if for $\mathbf{f}\in\mathrm{L}^2(0,T;\mathbb{V}')$, $\mathbf{u}_0\in\mathbb{H}$ and $\mathbf{v}\in\mathbb{V}$, $\mathbf{u}(\cdot)$ satisfies:
\begin{equation}\label{272}
\left\{
\begin{aligned}
\langle\partial_t\mathbf{u}(t)+\mu \mathrm{A}\mathbf{u}(t)+\mathrm{B}(\mathbf{u}(t))+\beta\mathcal{C}(\mathbf{u}(t)),\mathbf{v}\rangle&=\langle\mathbf{f}(t),\mathbf{v}\rangle,\\
\lim_{t\downarrow 0}\int_{\mathcal{O}}\mathbf{u}(t)\mathbf{v}\d x&=\int_{\mathcal{O}}\mathbf{u}_0\mathbf{v}\d x,
\end{aligned}
\right.
\end{equation}
and the energy equality:
\begin{align}
\frac{1}{2}\frac{\d}{\d t}\|\mathbf{u}(t)\|^2_{\mathbb{H}}+\mu \|\mathbf{u}(t)\|^2_{\mathbb{V}}+\beta\|\mathbf{u}(t)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}=\langle \mathbf{f}(t),\mathbf{u}(t)\rangle.
\end{align}
The existence and uniqueness of a weak solution for the system \eqref{kvf} in the two dimensional case can be proved in a similar way as in Theorem \ref{main} by exploiting the local monotonicity result obtained in \eqref{fes2}. In the proof of Theorem \ref{main}, one has to take $\eta t$ in \eqref{eqn47} as $\eta t=\frac{27}{32\mu ^3}\int_0^t\|\mathbf{v}(s)\|_{\widetilde{\mathbb{L}}^4}^4\d s$ and the estimate \eqref{347} has to be replaced with an estimate similar to \eqref{221}.
2. For the case $n=2$, for $\mathbf{u}_0\in\mathbb{V}$ and $\mathbf{f}\in\mathrm{L}^2(0,T;\mathbb{H})$, one can obtain the strong solution to the system \eqref{kvf} in a similar way as in Remark \ref{rem2.12}. One needs to replace the estimate \eqref{275} by
\begin{align}
|(\mathrm{B}(\mathbf{u}),\mathrm{A}\mathbf{u})|&\leq\|\mathbf{u}\|_{\widetilde{\mathbb{L}}^4}\|\nabla\mathbf{u}\|_{\widetilde{\mathbb{L}}^4}\|\mathrm{A}\mathbf{u}\|_{\mathbb{H}}\leq 2^{1/2}\|\mathbf{u}\|_{\mathbb{H}}^{1/2}\|\mathbf{u}\|_{\mathbb{V}}\|\mathrm{A}\mathbf{u}\|_{\mathbb{H}}^{3/2}\nonumber\\&\leq\frac{\mu }{4}\|\mathrm{A}\mathbf{u}\|_{\mathbb{H}}^2+\frac{27}{4\mu ^3}\|\mathbf{u}\|_{\mathbb{H}}^2\|\mathbf{u}\|_{\mathbb{V}}^4,
\end{align}
where we used H\"older's, Ladyzhenskaya and Young's inequalities. A calculation similar to \eqref{277} yields
\begin{align}\label{281}
& \|\mathbf{u}(t)\|_{\mathbb{V}}^2+\mu \int_0^t\|\mathrm{A}\mathbf{u}(s)\|_{\mathbb{H}}^2\d s+2\beta(r-1)\int_0^t\||\mathbf{u}(s)|^{\frac{r-1}{2}}|\nabla\mathbf{u}(s)|\|_{\mathbb{H}}^2\d s\nonumber\\&\leq\|\mathbf{u}_0\|_{\mathbb{V}}^2+\frac{2}{\mu }\int_0^t\|\mathbf{f}(s)\|_{\mathbb{H}}^2\d s+\frac{27}{2\mu ^3}\int_0^t\|\mathbf{u}(s)\|_{\mathbb{H}}^2\|\mathbf{u}(s)\|_{\mathbb{V}}^4\d s.
\end{align}
An application of Gronwall's inequality in \eqref{281} gives
\begin{align}\label{282}
& \|\mathbf{u}(t)\|_{\mathbb{V}}^2+\mu \int_0^t\|\mathrm{A}\mathbf{u}(s)\|_{\mathbb{H}}^2\d s+2\beta(r-1)\int_0^t\||\mathbf{u}(s)|^{\frac{r-1}{2}}|\nabla\mathbf{u}(s)|\|_{\mathbb{H}}^2\d s\nonumber\\&\leq\left(\|\mathbf{u}_0\|_{\mathbb{V}}^2+\frac{2}{\mu }\int_0^t\|\mathbf{f}(s)\|_{\mathbb{H}}^2\d s\right)\exp\left(\frac{27}{2\mu ^3}\int_0^t\|\mathbf{u}(s)\|_{\mathbb{H}}^2\|\mathbf{u}(s)\|_{\mathbb{V}}^2\d s\right).
\end{align}
Using the energy estimate \eqref{energy3}, we get
\begin{align}
\int_0^T\|\mathbf{u}(t)\|_{\mathbb{H}}^2\|\mathbf{u}(t)\|_{\mathbb{V}}^2\d t&\leq\sup_{t\in[0,T]}\|\mathbf{u}(t)\|_{\mathbb{H}}^2\int_0^T\|\mathbf{u}(t)\|_{\mathbb{V}}^2\d t\nonumber\\&\leq \left(\|\mathbf{u}_0\|_{\mathbb{H}}^2+\frac{1}{\mu }\int_0^T\|\mathbf{f}(t)\|_{\mathbb{V}'}^2\d t\right)^2.
\end{align}
Thus, from \eqref{282}, we finally arrive at
\begin{align}
&\sup_{t\in[0,T]} \|\mathbf{u}(t)\|_{\mathbb{V}}^2+\mu \int_0^T\|\mathrm{A}\mathbf{u}(t)\|_{\mathbb{H}}^2\d t+2\beta(r-1)\int_0^T\||\mathbf{u}(t)|^{\frac{r-1}{2}}|\nabla\mathbf{u}(t)|\|_{\mathbb{H}}^2\d t\nonumber\\&\leq\left(\|\mathbf{u}_0\|_{\mathbb{V}}^2+\frac{2}{\mu }\int_0^t\|\mathbf{f}(s)\|_{\mathbb{H}}^2\d s\right)\exp\left\{\frac{27}{2\mu ^3}\left(\|\mathbf{u}_0\|_{\mathbb{H}}^2+\frac{1}{\mu \lambda_1}\int_0^T\|\mathbf{f}(t)\|_{\mathbb{H}}^2\d t\right)^2\right\},
\end{align}
and hence we obtain $\mathbf{u}\in\mathrm{C}([0,T];\mathbb{V})\cap\mathrm{L}^2(0,T;\mathrm{D}(\mathrm{A}))$.
\end{remark}
\begin{remark}
Let us now show the existence of a global attractor for the Navier-Stokes-Brinkman-Forchheimer equations in two dimensional bounded domains. Here we assume that the external forcing $\mathbf{f}$ appearing in \eqref{kvf} is independent of $t$. We know that there exists a unique weak solution for the system \eqref{kvf} and the solution can be represented through a one parameter family of continuous semigroup. Using the Remark \ref{rem2.13}, we can define a continuous semigroup $\{\mathrm{S}(t)\}_{t\geq 0}$ in $\mathbb{H}$ by
\begin{align}
\mathrm{S}(t)\mathbf{u}_0= \mathbf{u}(t), \ t\geq 0,
\end{align}
where $\mathbf{u}(\cdot)$ is the unique weak solution of the system \eqref{kvf} with $\mathbf{f}(t)=\mathbf{f}\in\mathbb{H}$ and $\mathbf{u}(0)=\mathbf{u}_0\in\mathbb{H}$.
Taking inner product with $\mathbf{u}(\cdot)$ to the first equation in \eqref{kvf} to find
\begin{align}\label{293}
\frac{1}{2}\frac{\d}{\d t}\|\mathbf{u}(t)\|_{\mathbb{H}}^2+\mu \|\nabla\mathbf{u}(t)\|_{\mathbb{H}}^2+\beta\|\mathbf{u}(t)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}&=(\mathbf{f},\mathbf{u}(t))\leq\|\mathbf{f}\|_{\mathbb{V}'}\|\mathbf{u}(t)\|_{\mathbb{V}}\nonumber\\&\leq\frac{\mu }{2}\|\mathbf{u}(t)\|_{\mathbb{V}}^2+\frac{1}{2\mu }\|\mathbf{f}\|_{\mathbb{V}'}^2.
\end{align}
Thus, using Poincar\'e inequality in \eqref{293}, we further have
\begin{align}\label{294}
\frac{\d}{\d t}\|\mathbf{u}(t)\|_{\mathbb{H}}^2+\mu \lambda_1\|\mathbf{u}(t)\|_{\mathbb{H}}^2+2\beta\|\mathbf{u}(t)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}\leq\frac{1}{\mu }\|\mathbf{f}\|_{\mathbb{V}'}^2.
\end{align}
Applying Gronwall's inequality in \eqref{294}, we get
\begin{align}
\|\mathbf{u}(t)\|^2_{\mathbb{H}}\leq e^{-\mu \lambda_1t}\|\mathbf{u}_0\|^2_{\mathbb{H}}+\frac{1}{\mu ^2\lambda_1^2}\|\mathbf{f}\|_{\mathbb{H}}^2.
\end{align}
Taking limit as $t\to\infty$, we find
\begin{align}\label{est}
\limsup_{t\to\infty}\|\mathbf{u}(t)\|_{\mathbb{H}}^2\leq \frac{1}{\mu ^2\lambda_1^2}\|\mathbf{f}\|_{\mathbb{H}}^2.
\end{align}
The inequality \eqref{est} implies that the semigroup $\mathrm{S}(t):\mathbb{H}\to\mathbb{H}$, $t\geq 0$ associated with the weak solution to the problem \eqref{kvf} has an absorbing ball
\begin{align}\label{3.1}
\mathcal{B}_1:=\left\{\mathbf{v}\in\mathbb{H}:\|\mathbf{v}\|_{\mathbb{H}}\leq M_1\equiv \frac{1}{\mu \lambda_1}\|\mathbf{f}\|_{\mathbb{H}}\right\}.
\end{align}
That is, given a bounded set $\mathrm{B}\subset\mathbb{H}$, there exists an entering time $t_\mathrm{B}>0$ such that $\mathrm{S}(t)\mathrm{B}\subset\mathcal{B}_1$, for all $t\geq t_\mathrm{B}$.
Hence, the following uniform estimate is valid:
\begin{align}\label{3.8}
\|\mathbf{u}(t)\|_{\mathbb{H}}\leq M_1, \ \text{ where }\ M_1= \frac{1}{\mu \lambda_1}\|\mathbf{f}\|_{\mathbb{H}},
\end{align}
for $t$ large enough ($t\gg 1$) depending on the initial data. Integrating the inequality \eqref{294} from $t$ to $t+\tau$, for $\tau>0$, we obtain
\begin{align}\label{316}
\mu \int_t^{t+\tau}\|\mathbf{u}(s)\|_{\mathbb{V}}^2\d s+2\beta\int_t^{t+\tau}\|\mathbf{u}(s)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}\d s&\leq \|\mathbf{u}(t)\|_{\mathbb{H}}^2+\frac{\tau}{\mu \lambda_1}\|\mathbf{f}\|_{\mathbb{H}}^2\nonumber\\&\leq \frac{1}{\mu \lambda_1}\left(\frac{1}{\mu \lambda_1}+\tau\right)\|\mathbf{f}\|_{\mathbb{H}}^2,
\end{align}
for $t\geq t_{\mathrm{B}}$. Moreover, we have
\begin{align}
\limsup\limits_{T\to\infty}\int_0^{T}\|\mathbf{u}(t)\|_{\mathbb{V}}^2\d t\leq\frac{\|\mathbf{f}\|_{\mathbb{H}}^2}{\mu ^2\lambda_1},
\end{align}
and
\begin{align}
\limsup\limits_{T\to\infty}\int_0^{T}\|\mathbf{u}(t)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}\d t\leq\frac{\|\mathbf{f}\|_{\mathbb{H}}^2}{2\mu \beta\lambda_1}.
\end{align}
Taking inner product with $\mathrm{A}\mathbf{u}(\cdot)$ to the first equation in \eqref{kvf}, we obtain
\begin{align}\label{206}
&\frac{1}{2}\frac{\d}{\d t}\|\mathbf{u}(t)\|_{\mathbb{V}}^2+\mu \|\mathrm{A}\mathbf{u}(t)\|_{\mathbb{H}}^2+\beta(r-1)\||\mathbf{u}(t)|^{\frac{r-1}{2}}|\nabla\mathbf{u}(t)|\|_{\mathbb{H}}^2=(\mathbf{f},\mathrm{A}\mathbf{u}(t))-(\mathrm{B}(\mathbf{u}(t)),\mathrm{A}\mathbf{u}(t)).
\end{align}
In order to obtain an absorbing ball in $\mathbb{V}$, for $r\geq 5$, we avoid the uniform Gronwall's Lemma used for the 2D Navier-Stokes equations in \cite{Te2}. In this case, we make use of the double integration trick used in \cite{JCR1}. We use Cauchy-Schwarz, H\"older's, Gagliardo-Nirenberg and Young's inequalities to estimate $(\mathrm{B}(\mathbf{u}),\mathrm{A}\mathbf{u})$ as
\begin{align}\label{207}
|(\mathrm{B}(\mathbf{u}),\mathrm{A}\mathbf{u})|&\leq \|\mathrm{B}(\mathbf{u})\|_{\mathbb{H}}\|\mathrm{A}\mathbf{u}\|_{\mathbb{H}}\leq \|\mathbf{u}\|_{\widetilde{\mathbb{L}}^{r+1}}\|\nabla\mathbf{u}\|_{\widetilde{\mathbb{L}}^{\frac{2(r+1)}{r-1}}}\|\mathrm{A}\mathbf{u}\|_{\mathbb{H}}\nonumber\\&\leq C\|\mathbf{u}\|_{\widetilde{\mathbb{L}}^{r+1}}^{\frac{2(r+1)}{r+3}}\|\mathrm{A}\mathbf{u}\|_{\mathbb{H}}^{\frac{r+7}{r+3}}\nonumber\\&\leq \frac{\mu }{4}\|\mathrm{A}\mathbf{u}\|_{\mathbb{H}}^2+C\left(\frac{r-1}{2(r+3)}\right)\left(\frac{2(r+7)}{\mu (r+3)}\right)^{\frac{r+7}{r-1}}\|\mathbf{u}\|_{\widetilde{\mathbb{L}}^{r+1}}^{\frac{4(r+1)}{r-1}}.
\end{align}
Using the Cauchy-Schwarz inequality and Young's inequality, we estimate $|(\mathbf{f},\mathrm{A}\mathbf{u})|$ as
\begin{align}\label{208}
|(\mathbf{f},\mathrm{A}\mathbf{u})|&\leq\|\mathbf{f}\|_{\mathbb{H}}\|\mathrm{A}\mathbf{u}\|_{\mathbb{H}}\leq\frac{\mu }{4}\|\mathrm{A}\mathbf{u}\|_{\mathbb{H}}^2+\frac{1}{\mu }\|\mathbf{f}\|_{\mathbb{H}}^2.
\end{align}
Combining \eqref{207} and \eqref{208} and then substituting it in \eqref{206}, we get
\begin{align}\label{209}
&\frac{\d}{\d t}\|\mathbf{u}(t)\|_{\mathbb{V}}^2+\mu \|\mathrm{A}\mathbf{u}(t)\|_{\mathbb{H}}^2+2\beta(r-1)\||\mathbf{u}(t)|^{\frac{r-1}{2}}|\nabla\mathbf{u}(t)|\|_{\mathbb{H}}^2\nonumber\\&\leq \frac{2}{\mu }\|\mathbf{f}\|_{\mathbb{H}}^2+C\left(\frac{r-1}{r+3}\right)\left(\frac{2(r+7)}{\mu (r+3)}\right)^{\frac{r+7}{r-1}}\|\mathbf{u}\|_{\widetilde{\mathbb{L}}^{r+1}}^{\frac{4(r+1)}{r-1}}.
\end{align}
Dropping the terms $\frac{\mu }{\lambda_1}\|\mathbf{u}(t)\|_{\mathbb{V}}^2+2\beta(r-1)\||\mathbf{u}(t)|^{\frac{r-1}{2}}|\nabla\mathbf{u}(t)|\|_{\mathbb{H}}^2$ from \eqref{209} and then integrating between $s$ and $t+1$, with $t\leq s<t+1$, we obtain
\begin{align}\label{210}
\|\mathbf{u}(t+1)\|_{\mathbb{V}}^2&\leq\|\mathbf{u}(s)\|_{\mathbb{V}}^2+\frac{2}{\mu }\|\mathbf{f}\|_{\mathbb{H}}^2+C\left(\frac{r-1}{r+3}\right)\left(\frac{2(r+7)}{\mu (r+3)}\right)^{\frac{r+7}{r-1}}\int_s^{t+1}\|\mathbf{u}(k)\|_{\widetilde{\mathbb{L}}^{r+1}}^{\frac{4(r+1)}{r-1}}\d k\nonumber\\&\leq \|\mathbf{u}(s)\|_{\mathbb{V}}^2+\frac{2}{\mu }\|\mathbf{f}\|_{\mathbb{H}}^2+C\left(\frac{r-1}{r+3}\right)\left(\frac{2(r+7)}{\mu (r+3)}\right)^{\frac{r+7}{r-1}}\left(\int_t^{t+1}\|\mathbf{u}(k)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}\d k\right)^{\frac{4}{r-1}}\nonumber\\&\leq \|\mathbf{u}(s)\|_{\mathbb{V}}^2+\frac{2}{\mu }\|\mathbf{f}\|_{\mathbb{H}}^2+C\left(\frac{r-1}{r+3}\right)\left(\frac{2(r+7)}{\mu (r+3)}\right)^{\frac{r+7}{r-1}}\left(\frac{1}{2\beta\mu \lambda_1}\left(\frac{1}{\mu \lambda_1}+1\right)\|\mathbf{f}\|_{\mathbb{H}}^2\right)^{\frac{4}{r-1}},
\end{align}
for $r\geq 5$, where we used \eqref{316} for $t\geq t_{\mathrm{B}}$. Integrating $s$ between $t$ and $t+1$ in \eqref{210} and then using \eqref{316}, we find
\begin{align}
&\|\mathbf{u}(t+1)\|_{\mathbb{V}}^2\nonumber\\&\leq \int_{t}^{t+1}\|\mathbf{u}(s)\|_{\mathbb{V}}^2\d s+\frac{2}{\mu }\|\mathbf{f}\|_{\mathbb{H}}^2+C\left(\frac{r-1}{r+3}\right)\left(\frac{2(r+7)}{\mu (r+3)}\right)^{\frac{r+7}{r-1}}\left(\frac{1}{2\beta\mu \lambda_1}\left(\frac{1}{\mu \lambda_1}+1\right)\|\mathbf{f}\|_{\mathbb{H}}^2\right)^{\frac{4}{r-1}}\nonumber\\&\leq \frac{1}{\mu ^2\lambda_1}\left(\frac{1}{\mu \lambda_1}+1\right)\|\mathbf{f}\|_{\mathbb{H}}^2+\frac{2}{\mu }\|\mathbf{f}\|_{\mathbb{H}}^2+C\left(\frac{r-1}{r+3}\right)\left(\frac{2(r+7)}{\mu (r+3)}\right)^{\frac{r+7}{r-1}}\left(\frac{1}{2\beta\mu \lambda_1}\left(\frac{1}{\mu \lambda_1}+1\right)\|\mathbf{f}\|_{\mathbb{H}}^2\right)^{\frac{4}{r-1}},
\end{align}
for $t\geq t_{\mathrm{B}}$. Let us now integrate \eqref{209} from $t$ to $t+\tau$ to obtain
\begin{align}
&\|\mathbf{u}(t+\tau)\|_{\mathbb{V}}^2+\mu \int_t^{t+\tau}\|\mathrm{A}\mathbf{u}(s)\|_{\mathbb{H}}^2\d s+2\beta(r-1)\int_t^{t+\tau}\||\mathbf{u}(s)|^{\frac{r-1}{2}}|\nabla\mathbf{u}(s)|\|_{\mathbb{H}}^2\d s\nonumber\\&\leq\|\mathbf{u}(t)\|_{\mathbb{V}}^2+ \frac{2\tau}{\mu }\|\mathbf{f}\|_{\mathbb{H}}^2+C\tau^{\frac{r-5}{r-1}}\left(\frac{r-1}{r+3}\right)\left(\frac{2(r+7)}{\mu (r+3)}\right)^{\frac{r+7}{r-1}}\left(\int_t^{t+\tau}\|\mathbf{u}(s)\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}\d s\right)^{\frac{4}{r-1}}\nonumber\\&\leq \frac{1}{\mu ^2\lambda_1}\left(\frac{1}{\mu \lambda_1}+1\right)\|\mathbf{f}\|_{\mathbb{H}}^2+\frac{2(1+\tau)}{\mu }\|\mathbf{f}\|_{\mathbb{H}}^2\nonumber\\&\quad+C\left(1+\tau^{\frac{r-5}{r-1}}\right)\left(\frac{r-1}{r+3}\right)\left(\frac{2(r+7)}{\mu (r+3)}\right)^{\frac{r+7}{r-1}}\left(\frac{1}{2\beta\mu \lambda_1}\left(\frac{1}{\mu \lambda_1}+\tau\right)\|\mathbf{f}\|_{\mathbb{H}}^2\right)^{\frac{4}{r-1}},
\end{align}
for all $t\geq t_{\mathrm{B}}+1$.
For $r\in[2,5]$ (in fact for all $r\in[2,\infty)$), one can use the uniform Gronwall's lemma (Lemma III. 1.1, \cite{Te2}) to obtain the absorbing ball in $\mathbb{V}$. For this, the estimate \eqref{207} needs to be replaced by
\begin{align}
|(\mathrm{B}(\mathbf{u}),\mathrm{A}\mathbf{u})|&\leq \|\mathrm{B}(\mathbf{u})\|_{\mathbb{H}}\|\mathrm{A}\mathbf{u}\|_{\mathbb{H}}\leq C\|\mathbf{u}\|_{\widetilde{\mathbb{L}}^{\infty}}\|\nabla\mathbf{u}\|_{\mathbb{H}}\|\mathrm{A}\mathbf{u}\|_{\mathbb{H}}\nonumber\\&\leq C\|\mathbf{u}\|_{\mathbb{H}}^{1/2}\|\mathbf{u}\|_{\mathbb{V}}\|\mathrm{A}\mathbf{u}\|^{3/2}\leq\frac{\mu }{4}\|\mathrm{A}\mathbf{u}\|_{\mathbb{H}}^2+\frac{27 C}{4\mu ^3}\|\mathbf{u}\|_{\mathbb{H}}^2\|\mathbf{u}\|_{\mathbb{V}}^4,
\end{align}
where we used Agmon's inequality. Thus, the estimate \eqref{209} becomes
\begin{align}\label{216}
&\frac{\d}{\d t}\|\mathbf{u}(t)\|_{\mathbb{V}}^2+\mu \lambda_1\|\mathbf{u}(t)\|_{\mathbb{V}}^2\leq \frac{2}{\mu }\|\mathbf{f}\|_{\mathbb{H}}^2+\frac{27 C}{2\mu ^3}\|\mathbf{u}(t)\|_{\mathbb{H}}^2\|\mathbf{u}(t)\|_{\mathbb{V}}^4.
\end{align}
where we used the fact that $\|\mathbf{u}\|_{\mathbb{V}}^2\leq \frac{1}{\lambda_1}\|\mathrm{A}\mathbf{u}\|_{\mathbb{H}}^2$, for all $\mathbf{u}\in\mathrm{D}(\mathrm{A})$. Applying uniform Gronwall's lemma in \eqref{216}, we further have
\begin{align}\label{217}
\|\mathbf{u}(t)\|_{\mathbb{V}}^2\leq \left(\frac{a_3}{\tau}+a_2\right)e^{a_1}, \ \text{ for }\ t\geq t_{\mathrm{B}}+\tau,
\end{align}
where
\begin{align*}
a_1=\frac{27 CM_1^2}{2\mu ^3}a_3, \ a_2=\frac{2\tau}{\mu }\|\mathbf{f}\|_{\mathbb{H}}^2, \ a_3= \frac{1}{\mu ^2\lambda_1}\left(\frac{1}{\mu \lambda_1}+\tau\right)\|\mathbf{f}\|_{\mathbb{H}}^2.
\end{align*}
Let us fix $\tau>0$ and denote the right-hand side of \eqref{217} by $M_2$. Then, we can conclude that there exists an absorbing ball $\mathcal{B}_2$ of $\mathbb{V}$ for the semigroup $\mathrm{S}(t)$. Moreover, if $\mathcal{B}$ is any bounded subset of $\mathbb{H}$, then $\mathrm{S}(t)\mathcal{B}\subset\mathcal{B}_2$ for $t\geq t_{\mathrm{B}}+\tau$. This shows the existence of an absorbing set in $\mathbb{V}$ and also that the operators $\mathrm{S}(t)$ are uniformly compact. Thus, the conditions of Theorem I.1.12, \cite{Te2} are satisfied and we have the following theorem:
\end{remark}
\begin{theorem}
The dynamical system associated with the 2D Navier-Stokes-Brinkman-Forchheimer equations \eqref{kvf} possesses an attractor $\mathscr{A}$ that is compact, connected and maximal in $\mathbb{H}$. Moreover, $\mathscr{A}$ attracts the bounded sets of $\mathbb{H}$ and $\mathscr{A}$ is also maximal among the functional invariant sets bounded in $\mathbb{H}$.
\end{theorem}
\fi
\section{Stochastic Coupled Convective Brinkman-Forchheimer Equations}\label{sec4}\setcounter{equation}{0}
Let $(\Omega,\mathscr{F},\mathbb{P})$ be a complete probability space equipped with an increasing family of sub-sigma fields $\{\mathscr{F}_t\}_{0\leq t\leq T}$ of $\mathscr{F}$ satisfying:
\begin{enumerate}
\item [(i)] $\mathscr{F}_0$ contains all elements $F\in\mathscr{F}$ with $\mathbb{P}(F)=0$,
\item [(ii)] $\mathscr{F}_t=\mathscr{F}_{t+}=\bigcap\limits_{s>t}\mathscr{F}_s,$ for $0\leq t\leq T$.
\end{enumerate} In this section, we consider the stochastic coupled convective Brinkman-Forchheimer equations perturbed by multiplicative Gaussian noise. Applying the orthogonal projection $\mathrm{P}_{\mathbb{H}}$ to the first two equations in \eqref{1p1}, we obtain
\begin{equation}\label{3.6}
\left\{
\begin{aligned}
\d \mathbf{X}^{\varepsilon,\delta}_t&=-[\mu\mathrm{A} \mathbf{X}^{\varepsilon,\delta}_t+\mathrm{B}(\mathbf{X}^{\varepsilon,\delta}_t)+\alpha\mathbf{X}^{\varepsilon,\delta}_t+\beta\mathcal{C}(\mathbf{X}^{\varepsilon,\delta}_t)-\mathrm{F}(\mathbf{X}^{\varepsilon,\delta}_t,\mathbf{Y}^{\varepsilon,\delta}_t)]\d t+\sqrt{\varepsilon}\sigma_1(\mathbf{X}^{\varepsilon,\delta}_t)\mathrm{Q}_1^{1/2}\d\mathrm{W}_t,\\
\d \mathbf{Y}^{\varepsilon,\delta}_t&=-\frac{1}{\delta}[\mu\mathrm{A} \mathbf{Y}^{\varepsilon,\delta}_t+\alpha\mathbf{Y}^{\varepsilon,\delta}_t+\beta\mathcal{C}(\mathbf{Y}_{t}^{\varepsilon,\delta})-\mathrm{G}(\mathbf{X}^{\varepsilon,\delta}_t,\mathbf{Y}^{\varepsilon,\delta}_t)]\d t+\frac{1}{\sqrt{\delta}}\sigma_2(\mathbf{X}^{\varepsilon,\delta}_t,\mathbf{Y}^{\varepsilon,\delta}_t)\mathrm{Q}_2^{1/2}\d\mathrm{W}_t,\\
\mathbf{X}^{\varepsilon,\delta}_0&=\mathbf{x},\ \mathbf{Y}^{\varepsilon,\delta}_0=\mathbf{y},
\end{aligned}
\right.
\end{equation}
where $\mathrm{Q}_i$, $i=1,2$, are positive symmetric trace-class operators in $\mathbb{H}$ and $\mathrm{W}_t$ is an $\mathbb{H}$-valued standard cylindrical Wiener process. Strictly speaking, one has to use $\mathrm{P}_{\mathbb{H}}\mathrm{F}$, $\mathrm{P}_{\mathbb{H}}\mathrm{G}$, $\mathrm{P}_{\mathbb{H}}\sigma_1$ and $\mathrm{P}_{\mathbb{H}}\sigma_2$ instead of $\mathrm{F}$, $\mathrm{G}$, $\sigma_1$ and $\sigma_2$ in \eqref{3.6}; for notational simplicity we keep the same symbols. Since $\mathrm{Q}_1$ and $\mathrm{Q}_2$ are trace-class operators, the embedding of $\mathrm{Q}_i^{1/2}\mathbb{H}$ in $\mathbb{H}$ is Hilbert-Schmidt, for $i=1,2$. Let $\mathcal{L}(\mathbb{H};\mathbb{H})$ denote the space of all bounded linear operators on $\mathbb{H}$ and $\mathcal{L}_2(\mathbb{H};\mathbb{H})$ denote the space of all Hilbert-Schmidt operators from $\mathbb{H}$ to $\mathbb{H}$. The space $\mathcal{L}_2(\mathbb{H};\mathbb{H})$ is a Hilbert space equipped with the norm $ \left\|\Psi\right\|^2_{\mathcal{L}_2}=\mathop{\mathrm{Tr}}\left(\Psi\Psi^*\right)=\sum_{k=1}^{\infty}\|\Psi^*e_k\|_{\mathbb{H}}^2$ and the inner product $(\Psi,\Phi)_{\mathcal{L}_2}=\mathop{\mathrm{Tr}}(\Psi\Phi^*)=\sum_{k=1}^{\infty}(\Phi^*e_k,\Psi^*e_k)$. For more details, the interested reader is referred to \cite{DaZ}.
We impose the following assumptions on $\mathrm{F},\mathrm{G},\sigma_1$ and $\sigma_2$ to obtain our main results (see also \cite{MTM11,XSRW}).
\begin{assumption}\label{ass3.6}
The functions $\mathrm{F},\mathrm{G}:\mathbb{H}\times\mathbb{H}\to\mathbb{H}$, $\sigma_1\mathrm{Q}_1^{1/2}:\mathbb{H}\to\mathcal{L}_{2}(\mathbb{H};\mathbb{H})$ and $\sigma_2\mathrm{Q}_2^{1/2}:\mathbb{H}\times\mathbb{H}\to\mathcal{L}_{2}(\mathbb{H};\mathbb{H})$ satisfy the following conditions:
\begin{itemize}
\item [(A1)] The functions $\mathrm{F},\mathrm{G},\sigma_1,\sigma_2$ are Lipschitz continuous, that is, there exist positive constants $C,L_{\mathrm{G}}$ and $L_{\sigma_2}$ such that for any $\mathbf{x}_1,\mathbf{x}_2,\mathbf{y}_1,\mathbf{y}_2\in\mathbb{H}$, we have
\begin{align*}
\|\mathrm{F}(\mathbf{x}_1,\mathbf{y}_1)-\mathrm{F}(\mathbf{x}_2,\mathbf{y}_2)\|_{\mathbb{H}}&\leq C(\|\mathbf{x}_1-\mathbf{x}_2\|_{\mathbb{H}}+\|\mathbf{y}_1-\mathbf{y}_2\|_{\mathbb{H}}),\\
\|\mathrm{G}(\mathbf{x}_1,\mathbf{y}_1)-\mathrm{G}(\mathbf{x}_2,\mathbf{y}_2)\|_{\mathbb{H}}&\leq C\|\mathbf{x}_1-\mathbf{x}_2\|_{\mathbb{H}}+L_{\mathrm{G}}\|\mathbf{y}_1-\mathbf{y}_2\|_{\mathbb{H}},\\
\|[\sigma_1(\mathbf{x}_1)-\sigma_1(\mathbf{x}_2)]\mathrm{Q}_1^{1/2}\|_{\mathcal{L}_2}&\leq C\|\mathbf{x}_1-\mathbf{x}_2\|_{\mathbb{H}},\\
\|[\sigma_2(\mathbf{x}_1,\mathbf{y}_1)-\sigma_2(\mathbf{x}_2,\mathbf{y}_2)]\mathrm{Q}_2^{1/2}\|_{\mathcal{L}_2}&\leq C\|\mathbf{x}_1-\mathbf{x}_2\|_{\mathbb{H}}+L_{\sigma_2}\|\mathbf{y}_1-\mathbf{y}_2\|_{\mathbb{H}}.
\end{align*}
\item [(A2)] The function $\sigma_2$ grows linearly in $\mathbf{x}$, but is bounded in $\mathbf{y}$, that is, there exists a constant $C>0$ such that
\begin{align*}
\sup_{\mathbf{y}\in\mathbb{H}} \|\sigma_2(\mathbf{x},\mathbf{y})\mathrm{Q}_2^{1/2}\|_{\mathcal{L}_2}\leq C(1+\|\mathbf{x}\|_{\mathbb{H}}), \ \text{ for all }\ \mathbf{x}\in\mathbb{H}.
\end{align*}
\item [(A3)] The Brinkman coefficient $\mu>0$, Darcy coefficient $\alpha>0$, the smallest eigenvalue $\lambda_1$ of the Stokes operator and the Lipschitz constants $L_{\mathrm{G}}$ and $L_{\sigma_2}$ satisfy $$\mu\lambda_1+2\alpha-2L_{\mathrm{G}}-2L_{\sigma_2}^2>0.$$
\item [(A4)] $\lim\limits_{\varepsilon\downarrow 0}\delta(\varepsilon)=0$ and $\lim\limits_{\varepsilon\downarrow 0}\frac{\delta}{\varepsilon}=0$.
\end{itemize}
\end{assumption}
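Assumption (A3) can be read as a strong dissipativity condition on the fast variable. The following elementary computation, recorded here only as a heuristic, uses the Poincar\'e inequality $\langle\mathrm{A}\mathbf{y},\mathbf{y}\rangle\geq\lambda_1\|\mathbf{y}\|_{\mathbb{H}}^2$, the monotonicity of $\mathcal{C}(\cdot)$ and the Lipschitz bounds in (A1): for $\mathbf{y}_1,\mathbf{y}_2\in\mathrm{D}(\mathrm{A})\cap\widetilde{\mathbb{L}}^{r+1}$ and any fixed $\mathbf{x}\in\mathbb{H}$,
\begin{align*}
&-\langle\mu\mathrm{A}(\mathbf{y}_1-\mathbf{y}_2)+\alpha(\mathbf{y}_1-\mathbf{y}_2)+\beta(\mathcal{C}(\mathbf{y}_1)-\mathcal{C}(\mathbf{y}_2))-(\mathrm{G}(\mathbf{x},\mathbf{y}_1)-\mathrm{G}(\mathbf{x},\mathbf{y}_2)),\mathbf{y}_1-\mathbf{y}_2\rangle\nonumber\\
&\quad+\frac{1}{2}\|[\sigma_2(\mathbf{x},\mathbf{y}_1)-\sigma_2(\mathbf{x},\mathbf{y}_2)]\mathrm{Q}_2^{1/2}\|_{\mathcal{L}_2}^2
\leq-\Big(\mu\lambda_1+\alpha-L_{\mathrm{G}}-\frac{L_{\sigma_2}^2}{2}\Big)\|\mathbf{y}_1-\mathbf{y}_2\|_{\mathbb{H}}^2,
\end{align*}
and (A3) guarantees that the constant in the bracket is strictly positive, so that the drift of the equation for $\mathbf{Y}^{\varepsilon,\delta}$, together with its noise coefficient, is strictly dissipative in $\mathbb{H}$, uniformly in $\mathbf{x}$.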
\iffalse
Let $\mathrm{W}(\cdot)$ be an $\mathbb{H}$-valued cylindrical Wiener process. Indeed, for an orthonormal basis
$\{e_j(x)\}_{j=1}^{\infty}$ in $\mathbb{H}$, $\mathrm{W}(\cdot)$ can be
represented as
\begin{align}
\mathrm{W}(t)=\sum_{n=1}^{\infty}\beta_n(t)e_n,
\end{align}
where $\{\beta_n\}_{n=1}^{\infty}$ is a sequence of mutually independent real Brownian motions in a fixed probability space $(\Omega,\mathscr{F},\mathbb{P})$ adapted to a filtration $\{\mathscr{F}_t\}_{t\geq 0}$. A bounded linear operator
$\Phi:\mathbb{H}\to\mathbb{X}$, $p\in[2,\infty)$ is a $\gamma$-radonifying
operator in $\mathbb{X}$ such that
\begin{align}
\Phi \d\mathrm{W}(t)=\sum_{n=1}^{\infty}\Phi e_n(x)\d\beta_n(t).
\end{align}
For a sequence of independent standard Gaussian random variables $\{\gamma_j\}_{j=1}^{\infty}$ on a probability space $(\widetilde{\Omega},\widetilde{\mathscr{F}},\widetilde{\mathbb{P}})$ (we use the notation $(\Omega,\mathscr{F},\mathbb{P})$ for the probability space on which our process is defined), the operator
$\Phi$ is said to be \emph{$\gamma$-radonifying} if the Gaussian
series ${\sum_{j=1}^{\infty}}\gamma_j\Phi e_j$
converges in $\mathrm{L}^2(\widetilde{\Omega};\mathbb{X})$. The number
\begin{align}
\|\Phi\|_{\gamma(\mathbb{H},\mathbb{X})}:=\left(\widetilde{\mathbb{E}}\left\|\sum_{n=1}^{\infty}\gamma_n\Phi
e_n\right\|^2_{\mathbb{X}}\right)^{\frac{1}{2}},
\end{align}
independent of the sequence $\{\gamma_n\}_{n=1}^{\infty}$ and the
basis $\{e_n\}_{n=1}^{\infty}$, defines a norm on the space
$\gamma(\mathbb{H},\mathbb{X})$ of all $\gamma$-radonifying operators from $\mathbb{H}$
into $\mathbb{X}$. For $\mathbb{X}=\mathbb{H}$, $\gamma(\mathbb{H},\mathbb{X})$ is the space of all \emph{Hilbert-Schmidt
operator} on $\mathbb{H}$ and we denote it by $\mathcal{L}_{\mathrm{Q}}$. For more details on the $\gamma$-radonifying operators, the interested readers are referred to see \cite{N10}.
Let $\Phi\in\gamma(\mathbb{H},\mathbb{X})$ and $\mathcal{K}\in\mathcal{L}(\mathbb{X},\mathcal{L}(\mathbb{X},\mathbb{R}))=\mathcal{L}(\mathbb{X},\mathbb{X}')$ be given. Then using Lemma 2.3, \cite{ZBJM}, we know that the sum $$\mathrm{Tr}_{\Phi}\mathcal{K}:=\sum_{n=1}^{\infty}\langle \mathcal{K}\Phi e_n,\Phi e_n\rangle_{\mathbb{X}',\mathbb{X}} $$ converges in $\mathbb{X}$ and does not depend on the choice of the orthornomal basis. Furthermore, we have
\begin{align}\label{35}
|\mathrm{Tr}_{\Phi}\mathcal{K}|\leq\|\mathcal{K}\|_{\mathcal{L}(\mathbb{X},\mathbb{X}')}\|\Phi\|_{\gamma(\mathbb{H},\mathbb{X})}^2.
\end{align}
Whenever, $\mathbb{X}=\mathbb{H}$, we use the symbol $\|\Phi\|_{\mathcal{L}_{\mathrm{Q}}}^2=\text{Tr}(\Phi^*\Phi)=\sum_{n=1}^{\infty}\|\Phi e_n\|_{\mathbb{H}}^2$. The inner product in $\mathcal{L}_{\mathrm{Q}}$ is given by $(\Phi,\Psi)_{\mathcal{L}_{\mathrm{Q}}}=\sum_{n=1}^{\infty}(\Phi e_n,\Psi e_n)$.
Next we state the It\^o formula given in Theorem 2.4, \cite{ZBJM} (see \cite{JMAM} also). A function $f:[0,T]\times\mathbb{X}\to\mathbb{Y}$ is said to be of class $\mathrm{C}^{1,2}$ if it is differentiable in the first variable and twice Fr\'echet differentiable in the second variable and the functions $f,\mathrm{D}_1f,\mathrm{D}_2f$ and $\mathrm{D}_{2}^2f$ are continuous on $[0,T]\times\mathbb{X}$. Here $\mathrm{D}_1f$ and $\mathrm{D}_2f$ are the derivatives with respect to the first and second variable, respectively.
\begin{theorem}[It\^o's formula]
Let $\mathbb{X}$ and $\mathbb{Y}$ be UMD (unconditional martingale difference) Banach spaces. Assume that $f:[0,T]\times\mathbb{X}\to\mathbb{Y}$ is of class $\mathrm{C}^{1,2}$. Let $\Phi:[0,T]\times\Omega\to\mathcal{L}(\mathbb{H},\mathbb{X})$ be an $\mathbb{H}$-strongly measurable and adapted process which is stochastically integrable with respect to $\mathrm{W}$ and assume that the paths of $\Phi$ belong to $\mathrm{L}^2(0,T;\gamma(\mathbb{H},\mathbb{X})),$ $\mathbb{P}$-a.s. Let $\psi:[0,T]\times\Omega\to\mathbb{X}$ be strongly measurable and adapted with paths in $\mathrm{L}^1(0,T;\mathbb{X})$, $\mathbb{P}$-a.s. Let $\xi:\Omega\to\mathbb{X}$ be strongly $\mathscr{F}_0$-measurable. Let us define $\zeta:[0,T]\times\Omega\to\mathbb{X}$ by $$\zeta=\xi+\int_0^{\cdot}\psi(s)\d s+\int_0^{\cdot}\Phi(s)\d\mathrm{W}(s).$$ Then $s\mapsto D_2f(s,\zeta(s))\Phi(s)$ is stochastically integrable and for all $t\in[0,T]$, we have
\begin{align*}
f(t,\zeta(t))&=f(0,\zeta(0))+\int_0^t\mathrm{D}_1f(s,\zeta(s))\d s+\int_0^t\mathrm{D}_2f(s,\zeta(s))\psi(s)\d s\nonumber\\&\quad+\int_0^t\mathrm{D}_2f(s,\zeta(s))\Phi(s)\d\mathrm{W}(s)+\frac{1}{2}\int_0^t\mathrm{Tr}_{\Phi(s)}\left(\mathrm{D}_2^2f(s,\zeta(s))\right)\d s,
\end{align*}
$\mathbb{P}$-a.s.
\end{theorem}
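For orientation, we record a standard special case of the above formula; this is a routine computation and not a statement taken from \cite{ZBJM}. Let $\mathbb{X}=\mathbb{Y}=\mathbb{H}$ be a Hilbert space and $f(t,x)=\|x\|_{\mathbb{H}}^2$, so that $\mathrm{D}_1f=0$, $\mathrm{D}_2f(t,x)h=2(x,h)$ and $\mathrm{D}_2^2f(t,x)[h,k]=2(h,k)$, whence $\mathrm{Tr}_{\Phi(s)}\big(\mathrm{D}_2^2f(s,\zeta(s))\big)=2\sum_{n=1}^{\infty}\|\Phi(s)e_n\|_{\mathbb{H}}^2=2\|\Phi(s)\|_{\gamma(\mathbb{H},\mathbb{H})}^2$. The It\^o formula then reduces to the energy equality
\begin{align*}
\|\zeta(t)\|_{\mathbb{H}}^2=\|\xi\|_{\mathbb{H}}^2+2\int_0^t(\zeta(s),\psi(s))\d s+2\int_0^t(\zeta(s),\Phi(s)\d\mathrm{W}(s))+\int_0^t\|\Phi(s)\|_{\gamma(\mathbb{H},\mathbb{H})}^2\d s,
\end{align*}
$\mathbb{P}$-a.s., which is the form in which the formula is invoked for the energy equalities appearing later in this section.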
Note that all Hilbert spaces and $\widetilde{\mathbb{L}}^p$ for $1<p<\infty$ are UMD Banach spaces. Moreover, they are $2$-smooth Banach spaces (see \cite{ZB}). Let $\mathbb{H}$ be a Hilbert space and $\mathbb{X}$ be a UMD Banach space.
\begin{definition}
A stochastic process $\{\mathrm{W}(t)\}_{0\leq
t\leq T}$ is said to be an \emph{$\mathbb{H}$-valued $\mathscr{F}_t$-adapted
Wiener process} with covariance operator $\mathrm{Q}$ if
\begin{enumerate}
\item [$(i)$] for each non-zero $h\in \mathbb{H}$, $|\mathrm{Q}^{1/2}h|^{-1} (\mathrm{W}(t), h)$ is a standard one dimensional Wiener process,
\item [$(ii)$] for any $h\in \mathbb{H}, (\mathrm{W}(t), h)$ is a martingale adapted to $\mathscr{F}_t$.
\end{enumerate}
\end{definition}
The stochastic process $\{\mathrm{W}(t) : 0\leq t\leq T\}$ is an $\mathbb{H}$-valued Wiener process with covariance $\mathrm{Q}$ if and only if for arbitrary $t$, the process $\mathrm{W}(t)$ can be expressed as $\mathrm{W}(t,x) =\sum_{k=1}^{\infty}\sqrt{\mu_k}e_k(x)\upbeta_k(t)$, where $\upbeta_{k}(t),k\in\mathbb{N}$ are independent one dimensional Brownian motions on $(\Omega,\mathscr{F},\mathbb{P})$ and $\{e_k \}_{k=1}^{\infty}$ are the orthonormal basis functions of $\mathbb{H}$ such that $\mathrm{Q} e_k=\mu_k e_k$. If $\mathrm{W}(\cdot)$ is an $\mathbb{H}$-valued Wiener process with covariance operator $\mathrm{Q}$ with $\mathop{\mathrm{Tr}} \mathrm{Q}=\sum_{k=1}^{\infty} \mu_k< +\infty$, then $\mathrm{W}(\cdot)$ is a Gaussian process on $\mathbb{H}$ and $ \mathbb{E}[\mathrm{W}(t)] = 0,$ $\textrm{Cov} [\mathrm{W}(t)] = t\mathrm{Q},$ $t\geq 0.$ The space $\mathbb{H}_0=\mathrm{Q}^{1/2}\mathbb{H}$ is a Hilbert space equipped with the inner product $(\cdot, \cdot)_0$, $$(\mathbf{u}, \mathbf{v})_0 =\sum_{k=1}^{\infty}\frac{1}{\mu_k}(\mathbf{u},e_k)(\mathbf{v},e_k)= (\mathrm{Q}^{-1/2}\mathbf{u}, \mathrm{Q}^{-1/2}\mathbf{v}),\ \text{ for all } \ \mathbf{u}, \mathbf{v}\in \mathbb{H}_0,$$ where $\mathrm{Q}^{-1/2}$ is the pseudo-inverse of $\mathrm{Q}^{1/2}$.
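As a quick consistency check (a routine computation, not taken from the references), the series representation together with the independence of the $\upbeta_k$ gives
\begin{align*}
\mathbb{E}\left[\|\mathrm{W}(t)\|_{\mathbb{H}}^2\right]=\sum_{k=1}^{\infty}\mu_k\,\mathbb{E}\left[\upbeta_k(t)^2\right]=t\sum_{k=1}^{\infty}\mu_k=t\mathop{\mathrm{Tr}}\mathrm{Q},
\end{align*}
which is consistent with $\textrm{Cov}[\mathrm{W}(t)]=t\mathrm{Q}$ and indicates why the trace class assumption on $\mathrm{Q}$ is needed for $\mathrm{W}(t)$ to take values in $\mathbb{H}$.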
Let $\mathcal{L}(\mathbb{H})$ denote the space of all bounded linear operators on $\mathbb{H}$ and $\mathcal{L}_{\mathrm{Q}}:=\mathcal{L}_{\mathrm{Q}}(\mathbb{H})$ denote the space of all Hilbert-Schmidt operators from $\mathbb{H}_0:=\mathrm{Q}^{1/2}\mathbb{H}$ to $\mathbb{H}$. Since $\mathrm{Q}$ is a trace class operator, the embedding of $\mathbb{H}_0$ in $\mathbb{H}$ is Hilbert-Schmidt and the space $\mathcal{L}_{\mathrm{Q}}$ is a Hilbert space equipped with the norm $ \left\|\Phi\right\|^2_{\mathcal{L}_{\mathrm{Q}}}=\mathop{\mathrm{Tr}}\left(\Phi {\mathrm{Q}}\Phi^*\right)=\sum_{k=1}^{\infty}\| {\mathrm{Q}}^{1/2}\Phi^*e_k\|_{\mathbb{H}}^2 $ and inner product $ \left(\Phi,\Psi\right)_{\mathcal{L}_{\mathrm{Q}}}=\mathop{\mathrm{Tr}}\left(\Phi {\mathrm{Q}}\Psi^*\right)=\sum_{k=1}^{\infty}\left({\mathrm{Q}}^{1/2}\Psi^*e_k,{\mathrm{Q}}^{1/2}\Phi^*e_k\right) $. For more details, the interested reader is referred to \cite{DaZ}.
\begin{hypothesis}\label{hyp}
The noise coefficient $\Phi(\cdot,\cdot)$ satisfies:
\begin{itemize}
\item [(H.1)] The function $\Phi\in\mathrm{C}([0,T]\times\mathbb{V};\mathcal{L}_{\mathrm{Q}}(\mathbb{H}))$.
\item[(H.2)] (Growth condition)
There exists a positive
constant $K$ such that for all $t\in[0,T]$ and $\mathbf{u}\in\mathbb{H}$,
\begin{equation*}
\|\Phi(t, \mathbf{u})\|^{2}_{\mathcal{L}_{\mathrm{Q}}} \leq K\left(1 +\|\mathbf{u}\|_{\mathbb{H}}^{2}\right),
\end{equation*}
\item[(H.3)] (Lipschitz condition)
There exists a positive constant $L$ such that for any $t\in[0,T]$ and all $\mathbf{u}_1,\mathbf{u}_2\in\mathbb{H}$,
\begin{align*}
\|\Phi(t,\mathbf{u}_1) - \Phi(t,\mathbf{u}_2)\|^2_{\mathcal{L}_{\mathrm{Q}}}\leq L\|\mathbf{u}_1 - \mathbf{u}_2\|_{\mathbb{H}}^2.
\end{align*}
\end{itemize}
\end{hypothesis}
\fi
Let us now provide the definition of a global strong solution, in the probabilistic sense, to the system (\ref{3.6}), as well as the notion of pathwise uniqueness.
\begin{definition}[Global strong solution]
Let $(\mathbf{x},\mathbf{y})\in\mathbb{H}\times\mathbb{H}$ be given. An $\mathbb{H}\times\mathbb{H}$-valued $(\mathscr{F}_t)_{t\geq 0}$-adapted stochastic process $(\mathbf{X}_{t}^{\varepsilon,\delta},\mathbf{Y}_{t}^{\varepsilon,\delta})$ is called a \emph{strong solution} to the system (\ref{3.6}) if the following conditions are satisfied:
\begin{enumerate}
\item [(i)] the process \begin{align*}(\mathbf{X}^{\varepsilon,\delta},\mathbf{Y}^{\varepsilon,\delta})&\in\mathrm{L}^2(\Omega;\mathrm{L}^{\infty}(0,T;\mathbb{H})\cap\mathrm{L}^2(0,T;\mathbb{V}))\cap\mathrm{L}^{r+1}(\Omega;\mathrm{L}^{r+1}(0,T;\widetilde{\mathbb{L}}^{r+1}))\nonumber\\&\quad\times\mathrm{L}^2(\Omega;\mathrm{L}^{\infty}(0,T;\mathbb{H})\cap\mathrm{L}^2(0,T;\mathbb{V}))\cap\mathrm{L}^{r+1}(\Omega;\mathrm{L}^{r+1}(0,T;\widetilde{\mathbb{L}}^{r+1}))\end{align*} and $(\mathbf{X}_t^{\varepsilon,\delta},\mathbf{Y}_t^{\varepsilon,\delta})$ has a $(\mathbb{V}\cap\widetilde{\mathbb{L}}^{r+1})\times(\mathbb{V}\cap\widetilde{\mathbb{L}}^{r+1})$-valued modification, which is progressively measurable with continuous paths in $\mathbb{H}\times\mathbb{H}$ and \begin{align*}(\mathbf{X}^{\varepsilon,\delta},\mathbf{Y}^{\varepsilon,\delta})&\in\mathrm{C}([0,T];\mathbb{H})\cap\mathrm{L}^2(0,T;\mathbb{V})\cap\mathrm{L}^{r+1}(0,T;\widetilde{\mathbb{L}}^{r+1})\nonumber\\&\quad\times\mathrm{C}([0,T];\mathbb{H})\cap\mathrm{L}^2(0,T;\mathbb{V})\cap\mathrm{L}^{r+1}(0,T;\widetilde{\mathbb{L}}^{r+1}), \ \mathbb{P}\text{-a.s.}\end{align*}
\item [(ii)] the following equality holds for every $t\in [0, T ]$, as an element of $(\mathbb{V}'+\widetilde\mathbb{L}^{\frac{r+1}{r}})\times(\mathbb{V}'+\widetilde\mathbb{L}^{\frac{r+1}{r}}),$ $\mathbb{P}$-a.s.:
\begin{equation}\label{4.4}
\left\{
\begin{aligned}
\mathbf{X}^{\varepsilon,\delta}_t&=\mathbf{x}-\int_0^t[\mu\mathrm{A} \mathbf{X}^{\varepsilon,\delta}_s+\alpha\mathbf{X}^{\varepsilon,\delta}_s+\mathrm{B}(\mathbf{X}^{\varepsilon,\delta}_s)+\beta\mathcal{C}(\mathbf{X}^{\varepsilon,\delta}_s)-\mathrm{F}(\mathbf{X}^{\varepsilon,\delta}_s,\mathbf{Y}^{\varepsilon,\delta}_s)]\d s\\&\quad+\sqrt{\varepsilon}\int_0^t\sigma_1(\mathbf{X}^{\varepsilon,\delta}_s)\mathrm{Q}_1^{1/2}\d\mathrm{W}_s,\\
\mathbf{Y}^{\varepsilon,\delta}_t&=\mathbf{y}-\frac{1}{\delta}\int_0^t[\mu\mathrm{A} \mathbf{Y}^{\varepsilon,\delta}_s+\alpha\mathbf{Y}^{\varepsilon,\delta}_s+\beta\mathcal{C}(\mathbf{Y}_{s}^{\varepsilon,\delta})-\mathrm{G}(\mathbf{X}^{\varepsilon,\delta}_s,\mathbf{Y}^{\varepsilon,\delta}_s)]\d s\\&\quad+\frac{1}{\sqrt{\delta}}\int_0^t\sigma_2(\mathbf{X}^{\varepsilon,\delta}_s,\mathbf{Y}^{\varepsilon,\delta}_s)\mathrm{Q}_2^{1/2}\d\mathrm{W}_s,
\end{aligned}
\right.
\end{equation}
\item [(iii)] the following It\^o formula (energy equality) holds true:
\begin{align}\label{3p3}
& \| \mathbf{X}^{\varepsilon,\delta}_t\|_{\mathbb{H}}^2+\|\mathbf{Y}^{\varepsilon,\delta}_t\|_{\mathbb{H}}^2+2\mu \int_0^t\left(\| \mathbf{X}^{\varepsilon,\delta}_s\|_{\mathbb{V}}^2+\frac{1}{\delta}\|\mathbf{Y}^{\varepsilon,\delta}_s\|_{\mathbb{V}}^2\right)\d s+2\alpha \int_0^t\left(\| \mathbf{X}^{\varepsilon,\delta}_s\|_{\mathbb{H}}^2+\frac{1}{\delta}\|\mathbf{Y}^{\varepsilon,\delta}_s\|_{\mathbb{H}}^2\right)\d s\nonumber\\&\quad +2\beta\int_0^t\left(\|\mathbf{X}^{\varepsilon,\delta}_s\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}+\frac{1}{\delta}\|\mathbf{Y}^{\varepsilon,\delta}_s\|_{\widetilde{\mathbb{L}}^{r+1}}^{r+1}\right)\d s\nonumber\\&=\|\mathbf{x}\|_{\mathbb{H}}^2+\|\mathbf{y}\|_{\mathbb{H}}^2+2\int_0^t(\mathrm{F}(\mathbf{X}^{\varepsilon,\delta}_s,\mathbf{Y}^{\varepsilon,\delta}_s),\mathbf{X}^{\varepsilon,\delta}_s)\d s+\frac{2}{\delta}\int_0^t(\mathrm{G}(\mathbf{X}^{\varepsilon,\delta}_s,\mathbf{Y}^{\varepsilon,\delta}_s),\mathbf{Y}^{\varepsilon,\delta}_s)\d s\nonumber\\&\quad+\int_0^t\left(\varepsilon\|\sigma_1(\mathbf{X}^{\varepsilon,\delta}_s)\mathrm{Q}_1^{1/2}\|_{\mathcal{L}_2}^2+\frac{1}{\delta}\|\sigma_2(\mathbf{X}^{\varepsilon,\delta}_s,\mathbf{Y}^{\varepsilon,\delta}_s)\mathrm{Q}_2^{1/2}\|_{\mathcal{L}_2}^2\right)\d s\nonumber\\&\quad+2\int_0^t(\sqrt{\varepsilon}\sigma_1(\mathbf{X}^{\varepsilon,\delta}_s)\mathrm{Q}_1^{1/2}\d\mathrm{W}_s,\mathbf{X}^{\varepsilon,\delta}_s)+\frac{2}{\sqrt{\delta}}\int_0^t(\sigma_2(\mathbf{X}^{\varepsilon,\delta}_s,\mathbf{Y}^{\varepsilon,\delta}_s)\mathrm{Q}_2^{1/2}\d\mathrm{W}_s,\mathbf{Y}^{\varepsilon,\delta}_s),
\end{align}
for all $t\in(0,T)$, $\mathbb{P}$-a.s.
\end{enumerate}
\end{definition}
An alternative version of condition (\ref{4.4}) is to require that for any $(\mathrm{U},\mathrm{V})\in(\mathbb{V}\cap\widetilde{\mathbb{L}}^{r+1})\times(\mathbb{V}\cap\widetilde{\mathbb{L}}^{r+1})$, $\mathbb{P}$-a.s.:
\begin{equation}\label{4.5}
\left\{
\begin{aligned}
(\mathbf{X}^{\varepsilon,\delta}_t,\mathrm{U})&=(\mathbf{x},\mathrm{U})-\int_0^t\langle\mu\mathrm{A} \mathbf{X}^{\varepsilon,\delta}_s+\mathrm{B}(\mathbf{X}^{\varepsilon,\delta}_s)+\alpha\mathbf{X}^{\varepsilon,\delta}_s+\beta\mathcal{C}(\mathbf{X}^{\varepsilon,\delta}_s)-\mathrm{F}(\mathbf{X}^{\varepsilon,\delta}_s,\mathbf{Y}^{\varepsilon,\delta}_s),\mathrm{U}\rangle\d s\nonumber\\&\quad+\int_0^t(\sqrt{\varepsilon}\sigma_1(\mathbf{X}^{\varepsilon,\delta}_s)\mathrm{Q}_1^{1/2}\d\mathrm{W}_s,\mathrm{U}),\\
(\mathbf{Y}^{\varepsilon,\delta}_t,\mathrm{V})&=(\mathbf{y},\mathrm{V})-\frac{1}{\delta}\int_0^t\langle\mu\mathrm{A} \mathbf{Y}^{\varepsilon,\delta}_s+\alpha\mathbf{Y}^{\varepsilon,\delta}_s+\beta\mathcal{C}(\mathbf{Y}_{s}^{\varepsilon,\delta})-\mathrm{G}(\mathbf{X}^{\varepsilon,\delta}_s,\mathbf{Y}^{\varepsilon,\delta}_s),\mathrm{V}\rangle\d s\nonumber\\&\quad+\frac{1}{\sqrt{\delta}}\int_0^t(\sigma_2(\mathbf{X}^{\varepsilon,\delta}_s,\mathbf{Y}^{\varepsilon,\delta}_s)\mathrm{Q}_2^{1/2}\d\mathrm{W}_s,\mathrm{V}),
\end{aligned}
\right.
\end{equation}
for every $t\in[0,T]$.
\begin{definition}
A strong solution $(\mathbf{X}^{\varepsilon,\delta}_t,\mathbf{Y}^{\varepsilon,\delta}_t)$ to (\ref{3.6}) is called a
\emph{pathwise unique strong solution} if, whenever
$(\widetilde\mathbf{X}^{\varepsilon,\delta}_t,\widetilde\mathbf{Y}^{\varepsilon,\delta}_t)$ is another strong
solution, we have $$\mathbb{P}\Big\{\omega\in\Omega: (\mathbf{X}^{\varepsilon,\delta}_t,\mathbf{Y}^{\varepsilon,\delta}_t)=(\widetilde\mathbf{X}^{\varepsilon,\delta}_t,\widetilde\mathbf{Y}^{\varepsilon,\delta}_t),\ \text{ for all }\ t\in[0,T]\Big\}=1.$$
\end{definition}
\begin{remark}\label{rem3.4}
For $n=2$ and $r\in[1,3]$, making use of the Gagliardo-Nirenberg interpolation inequality, we know that $\mathrm{C}([0,T];\mathbb{H})\cap\mathrm{L}^2(0,T;\mathbb{V})\subset\mathrm{L}^{r+1}(0,T;\widetilde\mathbb{L}^{r+1})$, and hence we get $\mathrm{C}([0,T];\mathbb{H})\cap\mathrm{L}^2(0,T;\mathbb{V})\cap\mathrm{L}^{r+1}(0,T;\widetilde{\mathbb{L}}^{r+1})=\mathrm{C}([0,T];\mathbb{H})\cap\mathrm{L}^2(0,T;\mathbb{V})$.
\end{remark}
\subsection{Global strong solution} In this subsection, we discuss the existence and uniqueness of a strong solution to the system \eqref{3.6}.
For convenience, we make use of the following simplified notations. Let us define $\mathscr{H}:=\mathbb{H}\times\mathbb{H}$. For any $\mathrm{U}=(\mathbf{x}_1,\mathbf{x}_2),\mathrm{V}=(\mathbf{y}_1,\mathbf{y}_2)\in\mathscr{H},$ we denote the inner product and norm on this Hilbert space by
\begin{align}
(\mathrm{U},\mathrm{V})=(\mathbf{x}_1,\mathbf{y}_1)+(\mathbf{x}_2,\mathbf{y}_2), \ \|\mathrm{U}\|_{\mathscr{H}}=\sqrt{(\mathrm{U},\mathrm{U})}=\sqrt{\|\mathbf{x}_1\|_{\mathbb{H}}^2+\|\mathbf{x}_2\|_{\mathbb{H}}^2}.
\end{align}
In a similar way, we define $\mathscr{V}:=\mathbb{V}\times\mathbb{V}$. The inner product and norm on this Hilbert space are defined by
\begin{align}
(\mathrm{U},\mathrm{V})_{\mathscr{V}}=(\nabla\mathbf{x}_1,\nabla\mathbf{y}_1)+(\nabla\mathbf{x}_2,\nabla\mathbf{y}_2), \ \|\mathrm{U}\|_{\mathscr{V}}=\sqrt{(\mathrm{U},\mathrm{U})_{\mathscr{V}}}=\sqrt{\|\nabla\mathbf{x}_1\|_{\mathbb{H}}^2+\|\nabla\mathbf{x}_2\|_{\mathbb{H}}^2},
\end{align}
for all $\mathrm{U},\mathrm{V}\in\mathscr{V}$. We denote $\mathscr{V}'$ as the dual of $\mathscr{V}$. We define the space $\widetilde{\mathfrak{L}}^{r+1}:=\widetilde{\mathbb{L}}^{r+1}\times\widetilde{\mathbb{L}}^{r+1}$ with the norm given by
$$\|\mathrm{U}\|_{\widetilde{\mathfrak{L}}^{r+1}}=\left\{\|\mathbf{x}_1\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}+\|\mathbf{x}_2\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\right\}^{\frac{1}{r+1}},$$ for all $\mathrm{U}\in\widetilde{\mathfrak{L}}^{r+1}$. We represent the duality pairing between $\mathscr{V}$ and its dual $\mathscr{V}'$, $\mathfrak{L}^{r+1}$ and its dual $\mathfrak{L}^{\frac{r+1}{r}}$, and $\mathscr{V}\cap\mathfrak{L}^{r+1}$ and its dual $\mathscr{V}'+\mathfrak{L}^{\frac{r+1}{r}}$ as $\langle\cdot,\cdot\rangle$. Note that we have the Gelfand triple $\mathscr{V}\cap\mathfrak{L}^{r+1} \subset\mathscr{H}\subset\mathscr{V}'+\mathfrak{L}^{\frac{r+1}{r}}$. Let
$\mathrm{Q}=(\mathrm{Q}_1,\mathrm{Q}_2)$ be a positive symmetric trace class operator on $\mathscr{H}$. Let us now rewrite the system \eqref{3.6} for $\mathbf{Z}^{\varepsilon,\delta}_{t}=(\mathbf{X}^{\varepsilon,\delta}_{t},\mathbf{Y}^{\varepsilon,\delta}_{t})$ as
\begin{equation}\label{3p6}
\left\{
\begin{aligned}
\d\mathbf{Z}^{\varepsilon,\delta}_{t}&=-\left[\mu\widetilde\mathrm{A}\mathbf{Z}^{\varepsilon,\delta}_{t}+\widetilde\mathrm{F}(\mathbf{Z}^{\varepsilon,\delta}_{t})\right]\d t+\widetilde\sigma(\mathbf{Z}^{\varepsilon,\delta}_{t})\mathrm{Q}^{1/2}\d\mathrm{W}_t, \\
\mathbf{Z}^{\varepsilon,\delta}_0&=(\mathbf{x},\mathbf{y})\in\mathscr{H},
\end{aligned}
\right.
\end{equation}
where
\begin{align*}
\widetilde\mathrm{A}\mathbf{Z}^{\varepsilon,\delta}&=\left(\mathrm{A}\mathbf{X}^{\varepsilon,\delta},\frac{1}{\delta}\mathrm{A}\mathbf{Y}^{\varepsilon,\delta}\right), \\
\widetilde\mathrm{F}(\mathbf{Z}^{\varepsilon,\delta})&=\left(\mathrm{B}(\mathbf{X}^{\varepsilon,\delta})+\alpha\mathbf{X}^{\varepsilon,\delta}+\beta\mathcal{C}(\mathbf{X}^{\varepsilon,\delta})-\mathrm{F}(\mathbf{X}^{\varepsilon,\delta},\mathbf{Y}^{\varepsilon,\delta}),\frac{\alpha}{\delta}\mathbf{Y}^{\varepsilon,\delta}+\frac{\beta}{\delta}\mathcal{C}(\mathbf{Y}^{\varepsilon,\delta})-\frac{1}{\delta}\mathrm{G}(\mathbf{X}^{\varepsilon,\delta},\mathbf{Y}^{\varepsilon,\delta})\right), \\
\widetilde\sigma(\mathbf{Z}^{\varepsilon,\delta})&=\left(\sqrt{\varepsilon}\sigma_1(\mathbf{X}^{\varepsilon,\delta}),\frac{1}{\sqrt{\delta}}\sigma_2(\mathbf{X}^{\varepsilon,\delta},\mathbf{Y}^{\varepsilon,\delta})\right).
\end{align*}
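With these definitions, the abstract system \eqref{3p6} is simply a compact way of writing \eqref{4.4}: reading off the first component (a direct unpacking, recorded only for the reader's convenience) gives
\begin{align*}
\d\mathbf{X}^{\varepsilon,\delta}_t=-\left[\mu\mathrm{A}\mathbf{X}^{\varepsilon,\delta}_t+\mathrm{B}(\mathbf{X}^{\varepsilon,\delta}_t)+\alpha\mathbf{X}^{\varepsilon,\delta}_t+\beta\mathcal{C}(\mathbf{X}^{\varepsilon,\delta}_t)-\mathrm{F}(\mathbf{X}^{\varepsilon,\delta}_t,\mathbf{Y}^{\varepsilon,\delta}_t)\right]\d t+\sqrt{\varepsilon}\sigma_1(\mathbf{X}^{\varepsilon,\delta}_t)\mathrm{Q}_1^{1/2}\d\mathrm{W}_t,
\end{align*}
and the second component similarly recovers the equation for $\mathbf{Y}^{\varepsilon,\delta}_t$ with the factors $\frac{1}{\delta}$ and $\frac{1}{\sqrt{\delta}}$.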
Note that the mappings $\widetilde{\mathrm{A}}:\mathscr{V}\to\mathscr{V}'$ and $\widetilde{\mathrm{F}}:\mathscr{V}\cap\mathfrak{L}^{r+1}\to\mathscr{V}'+\mathfrak{L}^{\frac{r+1}{r}}$ are well defined. It can easily be seen that $\widetilde\sigma\mathrm{Q}^{1/2}$ maps $\mathscr{H}$ into $\mathcal{L}_{2}(\mathscr{H};\mathscr{H})$, the space of all Hilbert-Schmidt operators from $\mathscr{H}$ to $\mathscr{H}$, equipped with the norm
\begin{align}
\|\widetilde\sigma(\mathbf{z})\mathrm{Q}^{1/2}\|_{\mathcal{L}_2}=\sqrt{\varepsilon\|\sigma_1(\mathbf{x})\mathrm{Q}_1^{1/2}\|_{\mathcal{L}_2}^2+\frac{1}{\delta}\|\sigma_2(\mathbf{x},\mathbf{y})\mathrm{Q}_2^{1/2}\|_{\mathcal{L}_2}^2}, \ \text{ for }\ \mathbf{z}=(\mathbf{x},\mathbf{y})\in\mathscr{H}.
\end{align}
The following theorem on the existence and uniqueness of a strong solution to the system \eqref{3p6} can be proved in a similar way as in Theorem 3.4, \cite{MTM11}.
\begin{theorem}[Theorem 3.4, \cite{MTM11}]\label{exis}
Let $(\mathbf{x},\mathbf{y})\in \mathscr{H}$ be given. Then for $n=2$, $r\in[1,\infty)$ and $n=3$, $r\in [3,\infty)$ ($2\beta\mu\geq 1$, for $r=3$), there exists a \emph{pathwise unique strong solution} $\mathbf{Z}^{\varepsilon,\delta}$ to the system (\ref{3p6}) such that \begin{align*}\mathbf{Z}^{\varepsilon,\delta}&\in\mathrm{L}^2(\Omega;\mathrm{L}^{\infty}(0,T;\mathscr{H})\cap\mathrm{L}^2(0,T;\mathscr{V}))\cap\mathrm{L}^{r+1}(\Omega;\mathrm{L}^{r+1}(0,T;\widetilde{\mathfrak{L}}^{r+1})),\end{align*} and $\mathbf{Z}^{\varepsilon,\delta}$ has a modification with continuous trajectories in $\mathscr{H}$ satisfying $\mathbf{Z}^{\varepsilon,\delta}\in\mathrm{C}([0,T];\mathscr{H})\cap\mathrm{L}^2(0,T;\mathscr{V})\cap\mathrm{L}^{r+1}(0,T;\widetilde{\mathfrak{L}}^{r+1})$, $\mathbb{P}$-a.s.
\end{theorem}
\iffalse
\begin{proof}
In order to complete the existence proof, we need to show the monotonicity and hemicontinuity properties of the operator $\mu\widetilde{\mathrm{A}}+\widetilde{\mathrm{F}}$.
\iffalse
For $\mathrm{U}=(\mathbf{u},\mathbf{v})\in\mathscr{V}\cap\mathfrak{L}^{r+1}$, it can be easily seen that
\begin{align}
\langle\mu \widetilde{\mathrm{A}}\mathrm{U}+\widetilde{\mathrm{F}}(\mathrm{U}),\mathrm{U}\rangle &=\mu\langle\mathrm{A}\mathbf{u},\mathbf{u}\rangle+\frac{\mu}{\varepsilon}\langle\mathrm{A}\mathbf{v},\mathbf{v}\rangle +\langle \mathrm{B}(\mathbf{u}),\mathbf{u}\rangle+\beta\langle\mathcal{C}(\mathbf{u}),\mathbf{u}\rangle\nonumber\\&\quad-(f(\mathbf{u},\mathbf{v}),\mathbf{u})-\frac{1}{\varepsilon}(G(\mathbf{u},\mathbf{v}),\mathbf{v})\nonumber\\&\geq \mu\|\mathbf{u}\|_{\mathbb{V}}^2+\frac{\mu}{\varepsilon}\|\mathbf{v}\|_{\mathbb{V}}^2+\beta\|\mathbf{u}\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}-\|f(\mathbf{u},\mathbf{v})\|_{\mathbb{H}}\|\mathbf{u}\|_{\mathbb{H}}-\frac{1}{\varepsilon}\|G(\mathbf{u},\mathbf{v})\|_{\mathbb{H}}\|\mathbf{v}\|_{\mathbb{H}}\nonumber\\&\geq \mu\|\mathbf{u}\|_{\mathbb{V}}^2+\frac{\mu}{\varepsilon}\|\mathbf{v}\|_{\mathbb{V}}^2+\beta\|\mathbf{u}\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}-C(1+\|\mathbf{u}\|_{\mathbb{H}}+\|\mathbf{v}\|_{\mathbb{H}})\|\mathbf{u}\|_{\mathbb{H}}\nonumber\\&\quad-\frac{1}{\varepsilon}(C(1+\|\mathbf{u}\|_{\mathbb{H}})+L_{\mathrm{G}}\|\mathbf{v}\|_{\mathbb{H}})\|\mathbf{v}\|_{\mathbb{H}}\nonumber\\&\geq\frac{\mu}{2}\|\mathbf{u}\|_{\mathbb{V}}^2+\frac{\mu}{2\varepsilon}\|\mathbf{v}\|_{\mathbb{V}}^2+\beta\|\mathbf{u}\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}-\frac{C}{\lambda_1}\left(1+\frac{1}{\varepsilon}\right)(1+\|\mathbf{u}\|_{\mathbb{H}}^2)\nonumber\\&\quad-\frac{1}{\lambda_1}\left(C+\frac{L_{\mathrm{G}}^2}{\varepsilon}\right)\|\mathbf{v}\|_{\mathbb{H}}^2\nonumber\\&\geq\left[\frac{\mu}{2}-\frac{C}{\lambda_1^2}\left(1+\frac{1}{\varepsilon}\right)\right]\|\mathbf{u}\|_{\mathbb{V}}^2+\left[\frac{\mu}{2\varepsilon}-\frac{1}{\lambda_1^2}\left(C+\frac{L_{\mathrm{G}}^2}{\varepsilon}\right)\right]\|\mathbf{v}\|_{\mathbb{V}}^2-\frac{C}{\lambda_1}\left(1+\frac{1}{\varepsilon}\right),
\end{align}
and hence the coercivity of the operator follows, by choosing an $\varepsilon$ sufficiently small such that $ \frac{\mu\varepsilon}{2}>\max\left\{\frac{C}{\lambda_1^2}\left(\varepsilon+1\right),\frac{1}{\lambda_1^2}\left(C\varepsilon^2+{L_{\mathrm{G}}^2}\right)\right\}$.
\fi
\vskip 0.1 cm
\textbf{Step 1:} \emph{$\mu\widetilde{\mathrm{A}}+\widetilde{\mathrm{F}}:\mathscr{V}\cap\mathfrak{L}^{r+1}\to\mathscr{V}'+\mathfrak{L}^{\frac{r+1}{r}}$}. For $n=2,3$, $r\in[3,\infty)$ and $\mathrm{U}=(\mathbf{x},\mathbf{y})\in\mathscr{V}\cap\mathfrak{L}^{r+1}$, we first note that
\begin{align}\label{3.11}
\|\mu\widetilde{\mathrm{A}}\mathrm{U}+\widetilde{\mathrm{F}}(\mathrm{U})\|_{\mathscr{V}'+\mathfrak{L}^{\frac{r+1}{r}}}&\leq\mu\|\mathrm{A}\mathbf{x}\|_{\mathbb{V}'}+\frac{\mu}{\varepsilon}\|\mathrm{A}\mathbf{y}\|_{\mathbb{V}'}+\|\mathrm{B}(\mathbf{x})\|_{\mathbb{V}'+\widetilde\mathbb{L}^{\frac{r+1}{r}}}+\beta\|\mathcal{C}(\mathbf{x})\|_{\widetilde\mathbb{L}^{\frac{r+1}{r}}}\nonumber\\&\quad+\frac{\beta}{\varepsilon}\|\mathcal{C}(\mathbf{y})\|_{\widetilde\mathbb{L}^{\frac{r+1}{r}}}+\|f(\mathbf{x},\mathbf{y})\|_{\mathscr{V}'}+\frac{1}{\varepsilon}\|G(\mathbf{x},\mathbf{y})\|_{\mathscr{V}'}\nonumber\\&\leq \mu\|\mathbf{x}\|_{\mathbb{V}}+\frac{\mu}{\varepsilon}\|\mathbf{y}\|_{\mathbb{V}}+\|\mathbf{x}\|_{\widetilde\mathbb{L}^{r+1}}^{\frac{r+1}{r-1}}\|\mathbf{x}\|_{\mathbb{H}}^{\frac{r-3}{r-1}}+\beta\|\mathbf{x}\|_{\widetilde\mathbb{L}^{r+1}}^r+\frac{\beta}{\varepsilon}\|\mathbf{y}\|_{\widetilde\mathbb{L}^{r+1}}^r\nonumber\\&\quad+\frac{C}{\lambda_1}(1+\|\mathbf{x}\|_{\mathbb{H}}+\|\mathbf{y}\|_{\mathbb{H}})+\frac{1}{\lambda_1\varepsilon}(C(1+\|\mathbf{x}\|_{\mathbb{H}})+L_{\mathrm{G}}\|\mathbf{y}\|_{\mathbb{H}})\nonumber\\&\leq\mu\max\left\{1,\frac{1}{\varepsilon}\right\}\|\mathrm{U}\|_{\mathscr{V}}+\|\mathrm{U}\|_{\mathfrak{L}^{r+1}}^{\frac{r+1}{r-1}}\|\mathrm{U}\|_{\mathbb{H}}^{\frac{r-3}{r-1}}+\beta\max\left\{1,\frac{1}{\varepsilon}\right\}\|\mathrm{U}\|_{\mathfrak{L}^{r+1}}^r\nonumber\\&\quad+\frac{C}{\lambda_1}\left(1+\frac{1}{\varepsilon}\right)\left(1+\|\mathrm{U}\|_{\mathbb{H}}\right),
\end{align}
and hence $\mu\widetilde{\mathrm{A}}+\widetilde{\mathrm{F}}:\mathscr{V}\cap\mathfrak{L}^{r+1}\to\mathscr{V}'+\mathfrak{L}^{\frac{r+1}{r}}$. For $n=2$ and $r\in[1,3]$, one can estimate $\|\mathrm{B}(\mathbf{u})\|_{\mathbb{V}'}\leq\|\mathbf{u}\|_{\widetilde\mathbb{L}^4}^2\leq\sqrt{2}\|\mathbf{u}\|_{\mathbb{H}}\|\mathbf{u}\|_{\mathbb{V}}$, by using H\"older's and Ladyzhenskaya's inequalities, and an estimate similar to \eqref{3.11} follows.
\vskip 0.1 cm
\textbf{Step 2:} \emph{Monotonicity property of the operator $\mu\widetilde{\mathrm{A}}+\widetilde{\mathrm{F}}$}. First, we consider the case $n=2$ and $r\in[1,3]$. For $\mathrm{U}=(\mathbf{x}_1,\mathbf{y}_1)$ and $\mathrm{V}=(\mathbf{x}_2,\mathbf{y}_2)$, we have
\begin{align}\label{3p12}
&\langle\mu\widetilde\mathrm{A}(\mathrm{U}-\mathrm{V})+\widetilde{\mathrm{F}}(\mathrm{U})-\widetilde{\mathrm{F}}(\mathrm{V}),\mathrm{U}-\mathrm{V}\rangle \nonumber\\&= \langle\mu\mathrm{A}(\mathbf{x}_1-\mathbf{x}_2),\mathbf{x}_1-\mathbf{x}_2\rangle +\frac{\mu}{\varepsilon}\langle\mathrm{A}(\mathbf{y}_1-\mathbf{y}_2),\mathbf{y}_1-\mathbf{y}_2\rangle +\langle\mathrm{B}(\mathbf{x}_1)-\mathrm{B}(\mathbf{x}_2),\mathbf{x}_1-\mathbf{x}_2\rangle \nonumber\\&\quad+\beta\langle\mathcal{C}(\mathbf{x}_1)-\mathcal{C}(\mathbf{x}_2),\mathbf{x}_1-\mathbf{x}_2\rangle+\frac{\beta}{\varepsilon}\langle\mathcal{C}(\mathbf{y}_1)-\mathcal{C}(\mathbf{y}_2),\mathbf{y}_1-\mathbf{y}_2\rangle\nonumber\\&\quad-(f(\mathbf{x}_1,\mathbf{y}_1)-f(\mathbf{x}_2,\mathbf{y}_2),\mathbf{x}_1-\mathbf{x}_2)-\frac{1}{\varepsilon}(G(\mathbf{x}_1,\mathbf{y}_1)-G(\mathbf{x}_2,\mathbf{y}_2),\mathbf{y}_1-\mathbf{y}_2)\nonumber\\&=\mu\|\mathbf{x}_1-\mathbf{x}_2\|_{\mathbb{V}}^2+\frac{\mu}{\varepsilon}\|\mathbf{y}_1-\mathbf{y}_2\|_{\mathbb{V}}^2+\langle\mathrm{B}(\mathbf{x}_1-\mathbf{x}_2,\mathbf{x}_2),\mathbf{x}_1-\mathbf{x}_2\rangle \nonumber\\&\quad+\beta\langle\mathcal{C}(\mathbf{x}_1)-\mathcal{C}(\mathbf{x}_2),\mathbf{x}_1-\mathbf{x}_2\rangle+\frac{\beta}{\varepsilon}\langle\mathcal{C}(\mathbf{y}_1)-\mathcal{C}(\mathbf{y}_2),\mathbf{y}_1-\mathbf{y}_2\rangle\nonumber\\&\quad-(f(\mathbf{x}_1,\mathbf{y}_1)-f(\mathbf{x}_2,\mathbf{y}_2),\mathbf{x}_1-\mathbf{x}_2)-\frac{1}{\varepsilon}(G(\mathbf{x}_1,\mathbf{y}_1)-G(\mathbf{x}_2,\mathbf{y}_2),\mathbf{y}_1-\mathbf{y}_2)\nonumber\\&\geq \mu\|\mathbf{x}_1-\mathbf{x}_2\|_{\mathbb{V}}^2+\frac{\mu}{\varepsilon}\|\mathbf{y}_1-\mathbf{y}_2\|_{\mathbb{V}}^2-\|\mathbf{x}_2\|_{\widetilde\mathbb{L}^4}\|\mathbf{x}_1-\mathbf{x}_2\|_{\mathbb{V}}\|\mathbf{x}_1-\mathbf{x}_2\|_{\widetilde\mathbb{L}^4}\nonumber\\&\quad+\frac{\beta}{2^{r-1}}\|\mathbf{x}_1-\mathbf{x}_2\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}+\frac{\beta}{2^{r-1}\varepsilon}\|\mathbf{y}_1-\mathbf{y}_2\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}-\|f(\mathbf{x}_1,\mathbf{y}_1)-f(\mathbf{x}_2,\mathbf{y}_2)\|_{\mathbb{H}}\|\mathbf{x}_1-\mathbf{x}_2\|_{\mathbb{H}}\nonumber\\&\quad-\frac{1}{\varepsilon}\|G(\mathbf{x}_1,\mathbf{y}_1)-G(\mathbf{x}_2,\mathbf{y}_2)\|_{\mathbb{H}}\|\mathbf{y}_1-\mathbf{y}_2\|_{\mathbb{H}}\nonumber\\&\geq\frac{\mu}{2}\|\mathbf{x}_1-\mathbf{x}_2\|_{\mathbb{V}}^2+\frac{\mu}{\varepsilon}\|\mathbf{y}_1-\mathbf{y}_2\|_{\mathbb{V}}^2+\frac{\beta}{2^{r-1}}\|\mathbf{x}_1-\mathbf{x}_2\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}+\frac{\beta}{2^{r-1}\varepsilon}\|\mathbf{y}_1-\mathbf{y}_2\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\nonumber\\&\quad-\frac{27}{32\mu^3}\|\mathbf{x}_2\|_{\widetilde\mathbb{L}^4}^4\|\mathbf{x}_1-\mathbf{x}_2\|_{\mathbb{H}}^2- C\left(1+\frac{1}{\varepsilon}\right)\left(\|\mathbf{x}_1-\mathbf{x}_2\|_{\mathbb{H}}^2+\|\mathbf{y}_1-\mathbf{y}_2\|_{\mathbb{H}}^2\right)-\frac{L_{\mathrm{G}}}{\varepsilon}\|\mathbf{y}_1-\mathbf{y}_2\|_{\mathbb{H}}^2,
\end{align}
where we used the Assumption \ref{ass3.6} (A1), \eqref{2.23}, \eqref{a215}, H\"older's, Ladyzhenskaya's and Young's inequalities. From the above relation, it is clear that
\begin{align}\label{3p13}
\langle\mu\widetilde\mathrm{A}(\mathrm{U}-\mathrm{V})+\widetilde{\mathrm{F}}(\mathrm{U})-\widetilde{\mathrm{F}}(\mathrm{V}),\mathrm{U}-\mathrm{V}\rangle+\left[\frac{27}{32\mu^3}\|\mathrm{V}\|_{\widetilde{\mathfrak{L}}^4}^4+C\left(1+\frac{1}{\varepsilon}\right)+\frac{L_{\mathrm{G}}}{\varepsilon}\right]\|\mathrm{U}-\mathrm{V}\|_{\mathscr{H}}^2\geq 0,
\end{align}
for all $\mathrm{U},\mathrm{V}\in\mathscr{V}\cap\mathfrak{L}^{r+1}=\mathscr{V}$, for $n=2$ and $r\in[1,3]$. Now, for an $\widetilde{\mathfrak{L}}^4$-ball in $\mathscr{V}$, that is, for $\mathrm{V}\in\mathcal{B}_R$, where $\mathcal{B}_R:=\left\{\mathrm{Z}\in\mathscr{V}:\|\mathrm{Z}\|_{\widetilde{\mathfrak{L}}^4}\leq R \right\}$, we get from \eqref{3p13} that
\begin{align}\label{3p14}
\langle\mu\widetilde\mathrm{A}(\mathrm{U}-\mathrm{V})+\widetilde{\mathrm{F}}(\mathrm{U})-\widetilde{\mathrm{F}}(\mathrm{V}),\mathrm{U}-\mathrm{V}\rangle+\left[\frac{27R^4}{32\mu^3}+C\left(1+\frac{1}{\varepsilon}\right)+\frac{L_{\mathrm{G}}}{\varepsilon}\right]\|\mathrm{U}-\mathrm{V}\|_{\mathscr{H}}^2\geq 0,
\end{align}
which implies that the operator $\mu\widetilde{\mathrm{A}}+\widetilde{\mathrm{F}}:\mathscr{V}\to\mathscr{V}'$ is locally monotone. Furthermore, we get
\begin{align}
&\langle\mu\widetilde\mathrm{A}(\mathrm{U}-\mathrm{V})+\widetilde{\mathrm{F}}(\mathrm{U})-\widetilde{\mathrm{F}}(\mathrm{V}),\mathrm{U}-\mathrm{V}\rangle+\left[\frac{27R^4}{32\mu^3}+C\left(1+\frac{1}{\varepsilon}\right)+\frac{L_{\mathrm{G}}}{\varepsilon}+L_{\sigma_2}^2\right]\|\mathrm{U}-\mathrm{V}\|_{\mathscr{H}}^2\nonumber\\&\geq \|\sigma_1(\mathbf{x}_1)-\sigma_1(\mathbf{x}_2)\|_{\mathcal{L}_{\mathrm{Q}_1}}^2+ \|\sigma_2(\mathbf{x}_1,\mathbf{y}_1)-\sigma_2(\mathbf{x}_2,\mathbf{y}_2)\|_{\mathcal{L}_{\mathrm{Q}_2}}^2\geq 0.
\end{align}
Let us now consider the case $n=2,3$ and $r\in(3,\infty)$. From \eqref{2.23}, we easily have
\begin{align}\label{2a27}
\beta \langle\mathcal{C}(\mathbf{x}_1)-\mathcal{C}(\mathbf{x}_2),\mathbf{x}_1-\mathbf{x}_2\rangle \geq \frac{\beta}{2}\||\mathbf{x}_2|^{\frac{r-1}{2}}(\mathbf{x}_1-\mathbf{x}_2)\|_{\mathbb{H}}^2.
\end{align} Using H\"older's and Young's inequalities, we estimate $|\langle\mathrm{B}(\mathbf{x}_1-\mathbf{x}_2,\mathbf{x}_1-\mathbf{x}_2),\mathbf{x}_2\rangle|$ as
\begin{align}\label{2a28}
|\langle\mathrm{B}(\mathbf{x}_1-\mathbf{x}_2,\mathbf{x}_1-\mathbf{x}_2),\mathbf{x}_2\rangle|&\leq\|\mathbf{x}_1-\mathbf{x}_2\|_{\mathbb{V}}\|\mathbf{x}_2(\mathbf{x}_1-\mathbf{x}_2)\|_{\mathbb{H}}\nonumber\\&\leq\frac{\mu }{2}\|\mathbf{x}_1-\mathbf{x}_2\|_{\mathbb{V}}^2+\frac{1}{2\mu }\|\mathbf{x}_2(\mathbf{x}_1-\mathbf{x}_2)\|_{\mathbb{H}}^2.
\end{align}
We take the term $\|\mathbf{x}_2(\mathbf{x}_1-\mathbf{x}_2)\|_{\mathbb{H}}^2$ from \eqref{2a28} and use H\"older's and Young's inequalities to estimate it as (see \cite{KWH} also)
\begin{align}\label{2a29}
&\int_{\mathcal{O}}|\mathbf{x}_2(x)|^2|\mathbf{x}_1(x)-\mathbf{x}_2(x)|^2\d x\nonumber\\&=\int_{\mathcal{O}}|\mathbf{x}_2(x)|^2|\mathbf{x}_1(x)-\mathbf{x}_2(x)|^{\frac{4}{r-1}}|\mathbf{x}_1(x)-\mathbf{x}_2(x)|^{\frac{2(r-3)}{r-1}}\d x\nonumber\\&\leq\left(\int_{\mathcal{O}}|\mathbf{x}_2(x)|^{r-1}|\mathbf{x}_1(x)-\mathbf{x}_2(x)|^2\d x\right)^{\frac{2}{r-1}}\left(\int_{\mathcal{O}}|\mathbf{x}_1(x)-\mathbf{x}_2(x)|^2\d x\right)^{\frac{r-3}{r-1}}\nonumber\\&\leq{\beta\mu }\left(\int_{\mathcal{O}}|\mathbf{x}_2(x)|^{r-1}|\mathbf{x}_1(x)-\mathbf{x}_2(x)|^2\d x\right)+\frac{r-3}{r-1}\left(\frac{2}{\beta\mu (r-1)}\right)^{\frac{2}{r-3}}\left(\int_{\mathcal{O}}|\mathbf{x}_1(x)-\mathbf{x}_2(x)|^2\d x\right),
\end{align}
for $r>3$. Using \eqref{2a29} in \eqref{2a28}, we find
\begin{align}\label{2a30}
&|\langle\mathrm{B}(\mathbf{x}_1-\mathbf{x}_2,\mathbf{x}_1-\mathbf{x}_2),\mathbf{x}_2\rangle|\nonumber\\&\leq\frac{\mu }{2}\|\mathbf{x}_1-\mathbf{x}_2\|_{\mathbb{V}}^2+\frac{\beta}{2}\||\mathbf{x}_2|^{\frac{r-1}{2}}(\mathbf{x}_1-\mathbf{x}_2)\|_{\mathbb{H}}^2+\frac{r-3}{2\mu(r-1)}\left(\frac{2}{\beta\mu (r-1)}\right)^{\frac{2}{r-3}}\|\mathbf{x}_1-\mathbf{x}_2\|_{\mathbb{H}}^2.
\end{align}
Combining \eqref{3p12}, \eqref{2a27} and \eqref{2a30}, we get
\begin{align}
&\langle\mu\widetilde\mathrm{A}(\mathrm{U}-\mathrm{V})+\widetilde{\mathrm{F}}(\mathrm{U})-\widetilde{\mathrm{F}}(\mathrm{V}),\mathrm{U}-\mathrm{V}\rangle\nonumber\\&\quad+\left[\frac{r-3}{2\mu(r-1)}\left(\frac{2}{\beta\mu (r-1)}\right)^{\frac{2}{(r-3)}}+C\left(1+\frac{1}{\varepsilon}\right)+\frac{L_{\mathrm{G}}}{\varepsilon}\right]\|\mathrm{U}-\mathrm{V}\|_{\mathscr{H}}^2\geq 0,
\end{align}
for $r>3$ and the monotonicity of the operator $\mu\widetilde{\mathrm{A}}+\widetilde{\mathrm{F}}:\mathscr{V}\cap\mathfrak{L}^{r+1}\to\mathscr{V}'+\mathfrak{L}^{\frac{r+1}{r}}$ follows.
Now, for $n=3$ and $r=3$, from \eqref{2.23}, we have
\begin{align}\label{2a31}
\beta\langle\mathcal{C}(\mathbf{x}_1)-\mathcal{C}(\mathbf{x}_2),\mathbf{x}_1-\mathbf{x}_2\rangle\geq\frac{\beta}{2}\|\mathbf{x}_2(\mathbf{x}_1-\mathbf{x}_2)\|_{\mathbb{H}}^2.
\end{align}
We estimate the term $|\langle\mathrm{B}(\mathbf{x}_1-\mathbf{x}_2,\mathbf{x}_1-\mathbf{x}_2),\mathbf{x}_2\rangle|$ using H\"older's and Young's inequalities as
\begin{align}\label{2a32}
|\langle\mathrm{B}(\mathbf{x}_1-\mathbf{x}_2,\mathbf{x}_1-\mathbf{x}_2),\mathbf{x}_2\rangle|&\leq\|\mathbf{x}_2(\mathbf{x}_1-\mathbf{x}_2)\|_{\mathbb{H}}\|\mathbf{x}_1-\mathbf{x}_2\|_{\mathbb{V}}\nonumber\\& \leq\mu \|\mathbf{x}_1-\mathbf{x}_2\|_{\mathbb{V}}^2+\frac{1}{4\mu }\|\mathbf{x}_2(\mathbf{x}_1-\mathbf{x}_2)\|_{\mathbb{H}}^2.
\end{align}
Combining \eqref{3p12}, \eqref{2a31} and \eqref{2a32}, we obtain
\begin{align}
&\langle\mu\widetilde\mathrm{A}(\mathrm{U}-\mathrm{V})+\widetilde{\mathrm{F}}(\mathrm{U})-\widetilde{\mathrm{F}}(\mathrm{V}),\mathrm{U}-\mathrm{V}\rangle+\left[C\left(1+\frac{1}{\varepsilon}\right)+\frac{L_{\mathrm{G}}}{\varepsilon}\right]\|\mathrm{U}-\mathrm{V}\|_{\mathscr{H}}^2\nonumber\\&\geq\frac{1}{2}\left(\beta-\frac{1}{2\mu }\right)\|\mathbf{x}_2(\mathbf{x}_1-\mathbf{x}_2)\|_{\mathbb{H}}^2\geq 0,
\end{align}
provided $2\beta\mu \geq 1$ and the monotonicity of the operator $\mu\widetilde{\mathrm{A}}+\widetilde{\mathrm{F}}:\mathscr{V}\cap\mathfrak{L}^{4}\to\mathscr{V}'+\mathfrak{L}^{\frac{4}{3}}$ follows.
\vskip 0.1 cm
\textbf{Step 3:} \emph{Hemicontinuity property of the operator $\mu\widetilde{\mathrm{A}}+\widetilde{\mathrm{F}}$}. Let us now show that the operator $\mu\widetilde{\mathrm{A}}+\widetilde{\mathrm{F}}:\mathscr{V}\cap\mathfrak{L}^{r+1}\to\mathscr{V}'+\mathfrak{L}^{\frac{r+1}{r}}$ is hemicontinuous. We first consider the case $n=2,3$ and $r\in[3,\infty)$. Let us take a sequence $\mathrm{U}^n\to \mathrm{U}$ in $\mathscr{V}\cap\mathfrak{L}^{r+1}$, that is, $\|\mathrm{U}^n-\mathrm{U}\|_{\widetilde{\mathfrak{L}}^{r+1}}+\|\mathrm{U}^n-\mathrm{U}\|_{\mathscr{V}}\to 0$, as $n\to\infty$. Therefore, we know that $\|\mathbf{x}^n-\mathbf{x}\|_{\widetilde\mathbb{L}^{r+1}}+\|\mathbf{x}^n-\mathbf{x}\|_{\mathbb{V}}+\|\mathbf{y}^n-\mathbf{y}\|_{\widetilde\mathbb{L}^{r+1}}+\|\mathbf{y}^n-\mathbf{y}\|_{\mathbb{V}}\to 0 $ as $n\to\infty$, for $\mathrm{U}^n=(\mathbf{x}^n,\mathbf{y}^n),\mathrm{U}=(\mathbf{x},\mathbf{y})\in\mathscr{V}\cap\mathfrak{L}^{r+1}$. For any $\mathrm{V}=(\widetilde\mathbf{x},\widetilde\mathbf{y})\in\mathscr{V}\cap\mathfrak{L}^{r+1}$, we consider
\begin{align}\label{2p14}
&\langle\mu\widetilde\mathrm{A}(\mathrm{U}^n-\mathrm{U})+\widetilde{\mathrm{F}}(\mathrm{U}^n)-\widetilde{\mathrm{F}}(\mathrm{U}),\mathrm{V}\rangle\nonumber\\&= \langle\mu\mathrm{A}(\mathbf{x}^n-\mathbf{x}),\widetilde\mathbf{x}\rangle +\frac{\mu}{\varepsilon}\langle\mathrm{A}(\mathbf{y}^n-\mathbf{y}),\widetilde\mathbf{y}\rangle +\langle\mathrm{B}(\mathbf{x}^n)-\mathrm{B}(\mathbf{x}),\widetilde\mathbf{x}\rangle +\beta\langle\mathcal{C}(\mathbf{x}^n)-\mathcal{C}(\mathbf{x}),\widetilde\mathbf{x}\rangle\nonumber\\&\quad+\frac{\beta}{\varepsilon}\langle\mathcal{C}(\mathbf{y}^n)-\mathcal{C}(\mathbf{y}),\widetilde\mathbf{y}\rangle-(f(\mathbf{x}^n,\mathbf{y}^n)-f(\mathbf{x},\mathbf{y}),\widetilde\mathbf{x})-\frac{1}{\varepsilon}(G(\mathbf{x}^n,\mathbf{y}^n)-G(\mathbf{x},\mathbf{y}),\widetilde\mathbf{y}).
\end{align}
Let us take $\langle\mu\mathrm{A}(\mathbf{x}^n-\mathbf{x}),\widetilde\mathbf{x}\rangle $ from \eqref{2p14} and estimate it as
\begin{align}
|\langle\mu\mathrm{A}(\mathbf{x}^n-\mathbf{x}),\widetilde\mathbf{x}\rangle| =|(\nabla(\mathbf{x}^n-\mathbf{x}),\nabla\widetilde\mathbf{x})|\leq\|\mathbf{x}^n-\mathbf{x}\|_{\mathbb{V}}\|\widetilde\mathbf{x}\|_{\mathbb{V}}\to 0, \ \text{ as } \ n\to\infty,
\end{align}
since $\mathbf{x}^n\to \mathbf{x}$ in $\mathbb{V}$. Similarly, we get $ |\langle\mu\mathrm{A}(\mathbf{y}^n-\mathbf{y}),\widetilde\mathbf{y}\rangle| \to 0$ as $n\to\infty$. We estimate the term $\langle\mathrm{B}(\mathbf{x}^n)-\mathrm{B}(\mathbf{x}),\widetilde\mathbf{x}\rangle$ from \eqref{2p14} using H\"older's inequality as
\begin{align}\label{325}
|\langle\mathrm{B}(\mathbf{x}^n)-\mathrm{B}(\mathbf{x}),\widetilde\mathbf{x}\rangle|&=|\langle\mathrm{B}(\mathbf{x}^n,\mathbf{x}^n-\mathbf{x}),\widetilde\mathbf{x}\rangle+\langle\mathrm{B}(\mathbf{x}^n-\mathbf{x},\mathbf{x}),\widetilde\mathbf{x}\rangle|\nonumber\\&
\leq|\langle\mathrm{B}(\mathbf{x}^n,\widetilde\mathbf{x}),\mathbf{x}^n-\mathbf{x}\rangle|+|\langle\mathrm{B}(\mathbf{x}^n-\mathbf{x},\widetilde\mathbf{x}),\mathbf{x}\rangle|\nonumber\\&\leq\left(\|\mathbf{x}^n\|_{\widetilde{\mathbb{L}}^{\frac{2(r+1)}{r-1}}}+\|\mathbf{x}\|_{\widetilde{\mathbb{L}}^{\frac{2(r+1)}{r-1}}}\right)\|\mathbf{x}^n-\mathbf{x}\|_{\widetilde{\mathbb{L}}^{r+1}}\|\widetilde\mathbf{x}\|_{\mathbb{V}}\nonumber\\&\leq \left(\|\mathbf{x}^n\|_{\mathbb{H}}^{\frac{r-3}{r-1}}\|\mathbf{x}^n\|_{\widetilde{\mathbb{L}}^{r+1}}^{\frac{2}{r-1}}+\|\mathbf{x}\|_{\mathbb{H}}^{\frac{r-3}{r-1}}\|\mathbf{x}\|_{\widetilde{\mathbb{L}}^{r+1}}^{\frac{2}{r-1}}\right)\|\mathbf{x}^n-\mathbf{x}\|_{\widetilde{\mathbb{L}}^{r+1}}\|\widetilde\mathbf{x}\|_{\mathbb{V}}\nonumber\\& \to 0, \ \text{ as } \ n\to\infty,
\end{align}
since $\mathbf{x}^n\to\mathbf{x}$ in $\widetilde\mathbb{L}^{r+1}$ and $\mathbf{x}^n,\mathbf{x}\in\mathbb{V}\cap\widetilde\mathbb{L}^{r+1}$. We estimate the term $\langle \mathcal{C}(\mathbf{x}^n)-\mathcal{C}(\mathbf{x}),\widetilde\mathbf{x}\rangle$ from \eqref{214} using Taylor's formula and H\"older's inequality as (in fact, it is true for all $r\geq 1$)
\begin{align}
|\langle \mathcal{C}(\mathbf{x}^n)-\mathcal{C}(\mathbf{x}),\widetilde\mathbf{x}\rangle|&\leq \sup_{0<\theta<1}r\|(\mathbf{x}^n-\mathbf{x})|\theta\mathbf{x}^n+(1-\theta)\mathbf{x}|^{r-1}\|_{\widetilde{\mathbb{L}}^{\frac{r+1}{r}}}\|\widetilde\mathbf{x}\|_{\widetilde{\mathbb{L}}^{r+1}}\nonumber\\&\leq r\|\mathbf{x}^n-\mathbf{x}\|_{\widetilde{\mathbb{L}}^{r+1}}\left(\|\mathbf{x}^n\|_{\widetilde{\mathbb{L}}^{r+1}}+\|\mathbf{x}\|_{\widetilde{\mathbb{L}}^{r+1}}\right)^{r-1}\|\widetilde\mathbf{x}\|_{\widetilde{\mathbb{L}}^{r+1}}\to 0,
\end{align}
as $n\to\infty$, since $\mathbf{x}^n\to\mathbf{x}$ in $\widetilde{\mathbb{L}}^{r+1}$ and $\mathbf{x}^n,\mathbf{x}\in\mathbb{V}\cap\widetilde{\mathbb{L}}^{r+1}$. Similarly, we obtain
\begin{align}
|\langle \mathcal{C}(\mathbf{y}^n)-\mathcal{C}(\mathbf{y}),\widetilde\mathbf{y}\rangle|\to 0,
\end{align}
as $n\to\infty$.
Using the Assumption \ref{ass3.6} (A1), we get
\begin{align}
|(f(\mathbf{x}^n,\mathbf{y}^n)-f(\mathbf{x},\mathbf{y}),\widetilde\mathbf{x})|&\leq\|f(\mathbf{x}^n,\mathbf{y}^n)-f(\mathbf{x},\mathbf{y})\|_{\mathbb{H}}\|\widetilde\mathbf{x}\|_{\mathbb{H}}\nonumber\\&\leq C(\|\mathbf{x}^n-\mathbf{x}\|_{\mathbb{H}}+\|\mathbf{y}^n-\mathbf{y}\|_{\mathbb{H}})\|\widetilde\mathbf{x}\|_{\mathbb{H}}\to 0, \ \text{ as } \ n\to\infty,
\end{align}
since $\mathbf{x}^n\to\mathbf{x}$ in $\mathbb{H}$ and $\mathbf{y}^n\to\mathbf{y}$ in $\mathbb{H}$. Once again making use of the Assumption \ref{ass3.6} (A.1), we find
\begin{align}
|(G(\mathbf{x}^n,\mathbf{y}^n)-G(\mathbf{x},\mathbf{y}),\widetilde\mathbf{y})|\leq C\|\mathbf{x}^n-\mathbf{x}\|_{\mathbb{H}}+L_{\mathrm{G}}\|\mathbf{y}^n-\mathbf{y}\|_{\mathbb{H}}\to 0, \ \text{ as } \ n\to\infty,
\end{align}
since $\mathbf{x}^n\to\mathbf{x}$ in $\mathbb{H}$ and $\mathbf{y}^n\to\mathbf{y}$ in $\mathbb{H}$. From the above convergences, it is immediate that $\langle\mu\widetilde\mathrm{A}(\mathrm{U}^n-\mathrm{U})+\widetilde{\mathrm{F}}(\mathrm{U}^n)-\widetilde{\mathrm{F}}(\mathrm{U}),\mathrm{V}\rangle\to 0$, for all $\mathrm{V}\in\mathscr{V}\cap\mathfrak{L}^{r+1}$.
Hence the operator $\mu\widetilde{\mathrm{A}}+\widetilde{\mathrm{F}}:\mathscr{V}\cap\mathfrak{L}^{r+1}\to\mathscr{V}'+\mathfrak{L}^{\frac{r+1}{r}}$ is demicontinuous, which implies that the operator $\mu\widetilde{\mathrm{A}}+\widetilde{\mathrm{F}}$ is hemicontinuous also.
For $n=2$ and $r\in[1,3]$, we need to show the convergence \eqref{325} only. It can easily be seen that
\begin{align}
|\langle\mathrm{B}(\mathbf{x}^n)-\mathrm{B}(\mathbf{x}),\widetilde\mathbf{x}\rangle|
&\leq|\langle\mathrm{B}(\mathbf{x}^n,\widetilde\mathbf{x}),\mathbf{x}^n-\mathbf{x}\rangle|+|\langle\mathrm{B}(\mathbf{x}^n-\mathbf{x},\widetilde\mathbf{x}),\mathbf{x}\rangle|\nonumber\\&\leq\left(\|\mathbf{x}^n\|_{\widetilde\mathbb{L}^4}+\|\mathbf{x}\|_{\widetilde\mathbb{L}^4}\right)\|\widetilde\mathbf{x}\|_{\mathbb{V}}\|\mathbf{x}^n-\mathbf{x}\|_{\widetilde\mathbb{L}^4} \to 0, \ \text{ as } \ n\to\infty,
\end{align}
since $\mathbf{x}^n\to\mathbf{x}$ in $\widetilde\mathbb{L}^{4}$ and $\mathbf{x}^n,\mathbf{x}\in\mathbb{V}$.
Now, proceeding similarly as in the proof of Theorem 3.7, \cite{MTM8}, we obtain the existence and uniqueness of a strong solution to the system \eqref{3.6} by using the above properties and a stochastic generalization of the Minty-Browder technique. For the case $n=2$ and $r\in[1,3],$ one can use similar techniques to those in the works \cite{ICAM,MTM6,SSSP}, etc., where a localized version of the stochastic generalization of the Minty-Browder technique is used in the proofs for the 2D stochastic Navier-Stokes equations and related models perturbed by Gaussian noise.
\end{proof}
\fi
\section{Large Deviation Principle}\label{se4}\setcounter{equation}{0}
In this section, we establish a Wentzell-Freidlin (see \cite{FW}) type large deviation principle for the two-time-scale SCBF equations \eqref{3.6} using the well-known results of Varadhan as well as Bryc (see \cite{Va,DZ}) and Budhiraja-Dupuis (see \cite{BD1}). A Wentzell-Freidlin type large deviation principle for the two-time-scale one-dimensional stochastic Burgers equation is established in \cite{XSRW}. Interested readers are referred to \cite{MTM10} (LDP for the 2D and 3D SCBF equations), \cite{SSSP} (LDP for the 2D stochastic Navier-Stokes equations), \cite{ICAM} (LDP for some 2D hydrodynamic systems) and \cite{MTM6} (LDP for the 2D Oldroyd fluids) for applications of such methods to various hydrodynamic models.
Let $(\Omega,\mathscr{F},\mathbb{P})$ be a probability space with an increasing family $\{\mathscr{F}_t\}_{t\geq 0}$ of the sub $\sigma$-fields of $\mathscr{F}$ satisfying the usual conditions. We consider the following two-time-scale SCBF system:
\begin{equation}\label{2.18a}
\left\{
\begin{aligned}
\d \mathbf{X}^{\varepsilon,\delta}_t&=-[\mu\mathrm{A} \mathbf{X}^{\varepsilon,\delta}_t+\mathrm{B}(\mathbf{X}^{\varepsilon,\delta}_t)+\alpha\mathbf{X}^{\varepsilon,\delta}_t+\beta\mathcal{C}(\mathbf{X}^{\varepsilon,\delta}_t)-\mathrm{F}(\mathbf{X}^{\varepsilon,\delta}_t,\mathbf{Y}^{\varepsilon,\delta}_t)]\d t+\sqrt{\varepsilon}\sigma_1(\mathbf{X}^{\varepsilon,\delta}_t)\mathrm{Q}_1^{1/2}\d\mathrm{W}_t,\\
\d \mathbf{Y}^{\varepsilon,\delta}_t&=-\frac{1}{\delta}[\mu\mathrm{A} \mathbf{Y}^{\varepsilon,\delta}_t+\alpha\mathbf{Y}^{\varepsilon,\delta}_t+\beta\mathcal{C}(\mathbf{Y}_{t}^{\varepsilon,\delta})-\mathrm{G}(\mathbf{X}^{\varepsilon,\delta}_t,\mathbf{Y}^{\varepsilon,\delta}_t)]\d t+\frac{1}{\sqrt{\delta}}\sigma_2(\mathbf{X}^{\varepsilon,\delta}_t,\mathbf{Y}^{\varepsilon,\delta}_t)\mathrm{Q}_2^{1/2}\d\mathrm{W}_t,\\
\mathbf{X}^{\varepsilon,\delta}_0&=\mathbf{x},\ \mathbf{Y}^{\varepsilon,\delta}_0=\mathbf{y},
\end{aligned}
\right.
\end{equation}
for some fixed point $(\mathbf{x},\mathbf{y})$ in $\mathbb{H}\times\mathbb{H}$. From Theorem \ref{exis} (see Theorem 3.4, \cite{MTM11} also), it is known that the system \eqref{2.18a} has a unique pathwise strong solution $(\mathbf{X}^{\varepsilon,\delta}_t,\mathbf{Y}^{\varepsilon,\delta}_t)$ with $\mathscr{F}_t$-adapted paths (that is, for any $t\in[0,T]$ and $x\in\mathcal{O}$, $(\mathbf{X}^{\varepsilon,\delta}_t(x),\mathbf{Y}^{\varepsilon,\delta}_t(x))$ is $\mathscr{F}_t$-measurable) in $\mathrm{C}([0,T];\mathbb{H})\cap \mathrm{L}^2(0,T;\mathbb{V})\cap\mathrm{L}^{r+1}(0,T;\widetilde\mathbb{L}^{r+1})\times \mathrm{C}([0,T];\mathbb{H})\cap \mathrm{L}^2(0,T;\mathbb{V})\cap\mathrm{L}^{r+1}(0,T;\widetilde\mathbb{L}^{r+1}),\ \mathbb{P}\text{-a.s.}$ Moreover, such a strong solution satisfies the energy equality (It\^o's formula) given in \eqref{3p3}.
As the parameter $\varepsilon\downarrow 0$, the slow component $\mathbf{X}^{\varepsilon,\delta}_t$ of (\ref{2.18a}) tends to the solution of the following deterministic averaged system:
\begin{equation}\label{2.19}
\left\{
\begin{aligned}
\d\bar{\mathbf{X}}_t&=-[\mu\mathrm{A}\bar{\mathbf{X}}_t+\mathrm{B}(\bar{\mathbf{X}}_t)+\alpha\bar{\mathbf{X}}_t+\beta\mathcal{C}(\bar{\mathbf{X}}_t)]\d t+\bar{\mathrm{F}}(\bar{\mathbf{X}}_t)\d t, \\ \bar{\mathbf{X}}_0&=\mathbf{x},
\end{aligned}\right.
\end{equation}
with the average
$$
\bar{\mathrm{F}}(\mathbf{x})=\int_{\mathbb{H}}\mathrm{F}(\mathbf{x},\mathbf{y})\nu^{\mathbf{x}}(\d\mathbf{y}), \ \mathbf{x}\in\mathbb{H},
$$ and $\nu^{\mathbf{x}}$ is the unique invariant distribution of the transition semigroup for the frozen system:
\begin{equation}\label{2p19}
\left\{
\begin{aligned}
\d\mathbf{Y}_t&=-[\mu\mathrm{A}\mathbf{Y}_t+\alpha\mathbf{Y}_t+\beta\mathcal{C}(\mathbf{Y}_t)-\mathrm{G}(\mathbf{x},\mathbf{Y}_t)]\d t+\sigma_2(\mathbf{x},\mathbf{Y}_t)\mathrm{Q}_2^{1/2}\d\bar{\mathrm{W}}_t,\\
\mathbf{Y}_0&=\mathbf{y},
\end{aligned}\right.
\end{equation}
where $\bar{\mathrm{W}}_t$ is a standard cylindrical Wiener process, which is independent of $\mathrm{W}_{t}$.
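To illustrate the averaging (this special case is hypothetical and is not assumed anywhere in this paper), suppose the coupling were affine in the fast variable, say $\mathrm{F}(\mathbf{x},\mathbf{y})=\mathrm{F}_0(\mathbf{x})+\Lambda\mathbf{y}$ for some $\mathrm{F}_0:\mathbb{H}\to\mathbb{H}$ and a bounded linear operator $\Lambda$ on $\mathbb{H}$. Then, provided the first moment of $\nu^{\mathbf{x}}$ is finite,
$$\bar{\mathrm{F}}(\mathbf{x})=\mathrm{F}_0(\mathbf{x})+\Lambda\int_{\mathbb{H}}\mathbf{y}\,\nu^{\mathbf{x}}(\d\mathbf{y}),$$
so the averaged drift only sees the mean of the invariant distribution of the frozen equation; for a general Lipschitz $\mathrm{F}$, the full measure $\nu^{\mathbf{x}}$ enters.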
Note that the system \eqref{2.19} is a Lipschitz perturbation of the CBF equations (see \eqref{3p93} below). Using similar techniques as in Theorem 3.4, \cite{MTM7} (see \cite{CLF} also), one can show that the system (\ref{2.19}) has a unique weak solution in the Leray-Hopf sense, satisfying the energy equality:
\begin{align*}
&\|\bar{\mathbf{X}}_t\|_{\mathbb{H}}^2+2\mu\int_0^t\|\bar{\mathbf{X}}_s\|_{\mathbb{V}}^2\d s+2\alpha\int_0^t\|\bar{\mathbf{X}}_s\|_{\mathbb{H}}^2\d s+2\beta\int_0^t\|\bar{\mathbf{X}}_s\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d s\nonumber\\&=\|\mathbf{x}\|_{\mathbb{H}}^2+2\int_0^t(\bar{\mathrm{F}}(\bar{\mathbf{X}}_s),\bar{\mathbf{X}}_s)\d s,
\end{align*}
for all $t\in[0,T]$ in the Polish space $\mathrm{C}([0,T];\mathbb{H})\cap \mathrm{L}^2(0,T;\mathbb{V})\cap\mathrm{L}^{r+1}(0,T;\widetilde\mathbb{L}^{r+1})$. The strong averaging principle (Theorem 1.1, \cite{MTM11}) states that for any initial values $\mathbf{x},\mathbf{y}\in\mathbb{H}$, $p\geq 1$ and $T>0$, we have
\begin{align}\label{3.126}
\lim_{\varepsilon\to 0} \mathbb{E}\left(\sup_{t\in[0,T]}\|\mathbf{X}^{\varepsilon,\delta}_t-\bar\mathbf{X}_t\|_{\mathbb{H}}^{2p}\right)=0.
\end{align}
In this section, we investigate the large deviations of $\mathbf{X}^{\varepsilon,\delta}_t$ from the deterministic solution $\bar{\mathbf{X}}_t$, as $\varepsilon\downarrow 0$.
\subsection{Frozen equation} The frozen equation associated with the fast motion for fixed slow component $\mathbf{x}\in\mathbb{H}$ is given by
\begin{equation}\label{3.74}
\left\{
\begin{aligned}
\d\mathbf{Y}_t&=-[\mu\mathrm{A}\mathbf{Y}_t+\alpha\mathbf{Y}_t+\beta\mathcal{C}(\mathbf{Y}_t)-\mathrm{G}(\mathbf{x},\mathbf{Y}_t)]\d t+\sigma_2(\mathbf{x},\mathbf{Y}_t)\mathrm{Q}_2^{1/2}\d\bar{\mathrm{W}}_t,\\
\mathbf{Y}_0&=\mathbf{y},
\end{aligned}\right.
\end{equation}
where $\bar{\mathrm{W}}_t$ is a cylindrical Wiener process in $\mathbb{H}$, which is independent of $\mathrm{W}_{t}$. From the Assumption \ref{ass3.6}, we know that $\mathrm{G}(\mathbf{x},\cdot)$ and $\sigma_2(\mathbf{x},\cdot)$ are Lipschitz continuous. Thus, one can show that for any fixed $\mathbf{x}\in\mathbb{H}$ and any initial data $\mathbf{y}\in\mathbb{H}$, there exists a unique strong solution $\mathbf{Y}_{t}^{\mathbf{x},\mathbf{y}}\in\mathrm{L}^2(\Omega;\mathrm{L}^{\infty}(0,T;\mathbb{H})\cap\mathrm{L}^2(0,T;\mathbb{V}))\cap\mathrm{L}^{r+1}(\Omega;\mathrm{L}^{r+1}(0,T;\widetilde\mathbb{L}^{r+1}))$ to the system \eqref{3.74}, which has a modification with continuous paths in $\mathrm{C}([0,T];\mathbb{H})\cap\mathrm{L}^2(0,T;\mathbb{V})\cap\mathrm{L}^{r+1}(0,T;\widetilde\mathbb{L}^{r+1})$, $\mathbb{P}$-a.s. A proof of this result can be obtained in the same way as in Theorem 3.7, \cite{MTM8}, by making use of the monotonicity property of the linear and nonlinear operators (see Lemmas \ref{thm2.2}-\ref{lem2.8}) as well as a stochastic generalization of the Minty-Browder technique (localized version for the case $n=2$ and $r\in[1,3]$). Furthermore, the strong solution satisfies the following infinite dimensional It\^o formula (energy equality):
\begin{align}
&\|\mathbf{Y}_t^{\mathbf{x},\mathbf{y}}\|_{\mathbb{H}}^2+2\mu\int_0^t\|\mathbf{Y}_s^{\mathbf{x},\mathbf{y}}\|_{\mathbb{V}}^2\d s+2\alpha\int_0^t\|\mathbf{Y}_s^{\mathbf{x},\mathbf{y}}\|_{\mathbb{H}}^2\d s+2\beta\int_0^t\|\mathbf{Y}_s^{\mathbf{x},\mathbf{y}}\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d s\nonumber\\&=\|\mathbf{y}\|_{\mathbb{H}}^2+2\int_0^t(\mathrm{G}(\mathbf{x},\mathbf{Y}_s^{\mathbf{x},\mathbf{y}}),\mathbf{Y}_s^{\mathbf{x},\mathbf{y}})\d s+\int_0^t\|\sigma_2(\mathbf{x},\mathbf{Y}_s^{\mathbf{x},\mathbf{y}})\mathrm{Q}_2^{1/2}\|_{\mathcal{L}_2}^2\d s+2\int_0^t(\sigma_2(\mathbf{x},\mathbf{Y}_s^{\mathbf{x},\mathbf{y}})\mathrm{Q}_2^{1/2}\d\bar\mathrm{W}_s,\mathbf{Y}_s^{\mathbf{x},\mathbf{y}}),
\end{align}
$\mathbb{P}\text{-a.s.,}$ for all $t\in[0,T]$. Let $\mathrm{P}_t^{\mathbf{x}}$ be the transition semigroup associated with the process $\mathbf{Y}_t^{\mathbf{x},\mathbf{y}}$, that is, for any bounded measurable function $\varphi$ on $\mathbb{H}$, we have
\begin{align}
\mathrm{P}_t^{\mathbf{x}}\varphi(\mathbf{y})=\mathbb{E}\left[\varphi(\mathbf{Y}_t^{\mathbf{x},\mathbf{y}})\right], \ \mathbf{y}\in\mathbb{H} \ \text{ and }\ t>0.
\end{align}
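Although not restated here, pathwise uniqueness of the solution to \eqref{3.74} yields, in the usual way, the Markov property of $\mathbf{Y}_t^{\mathbf{x},\mathbf{y}}$, so that $\{\mathrm{P}_t^{\mathbf{x}}\}_{t\geq 0}$ is indeed a semigroup:
\begin{align*}
\mathrm{P}_{t+s}^{\mathbf{x}}\varphi(\mathbf{y})=\mathrm{P}_t^{\mathbf{x}}\big(\mathrm{P}_s^{\mathbf{x}}\varphi\big)(\mathbf{y}), \ \text{ for all }\ s,t\geq 0.
\end{align*}
This standard fact is used implicitly when speaking of invariant measures for $\{\mathrm{P}_t^{\mathbf{x}}\}_{t\geq 0}$ below.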
For the system \eqref{3.74}, the following result is available in the work \cite{MTM11}.
\begin{proposition}[Proposition 4.4, \cite{MTM11}]\label{prop3.12}
For any given $\mathbf{x},\mathbf{y}\in\mathbb{H}$, there exists a unique invariant measure for the system \eqref{3.74}. Furthermore, there exists a constant $C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}}}>0$ and $\zeta>0$ such that for any Lipschitz function $\varphi:\mathbb{H}\to\mathbb{R}$, we have
\begin{align}\label{393}
\left|\mathrm{P}_t^{\mathbf{x}}\varphi(\mathbf{y})-\int_{\mathbb{H}}\varphi(\mathbf{z})\nu^{\mathbf{x}}(\d\mathbf{z})\right|\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}}}(1+\|\mathbf{x}\|_{\mathbb{H}}+\|\mathbf{y}\|_{\mathbb{H}})e^{-\frac{\zeta t}{2}}\|\varphi\|_{\mathrm{Lip}(\mathbb{H})},
\end{align}
where $\zeta=2\mu\lambda_1+2\alpha-2L_{\mathrm{G}}-L_{\sigma_2}^2>0$ and $ \|\varphi\|_{\mathrm{Lip}(\mathbb{H})}=\sup\limits_{\mathbf{x},\mathbf{y}\in\mathbb{H}}\frac{|\varphi(\mathbf{x})-\varphi(\mathbf{y})|}{\|\mathbf{x}-\mathbf{y}\|_{\mathbb{H}}}$.
\end{proposition}
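A routine consequence of \eqref{393}, recorded only for orientation (it is a direct integration of the bound and not a statement from \cite{MTM11}), is the following estimate on time averages: for any Lipschitz $\varphi:\mathbb{H}\to\mathbb{R}$ and $T>0$,
\begin{align*}
\left|\frac{1}{T}\int_0^T\mathrm{P}_t^{\mathbf{x}}\varphi(\mathbf{y})\d t-\int_{\mathbb{H}}\varphi(\mathbf{z})\nu^{\mathbf{x}}(\d\mathbf{z})\right|\leq\frac{2C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}}}}{\zeta T}(1+\|\mathbf{x}\|_{\mathbb{H}}+\|\mathbf{y}\|_{\mathbb{H}})\|\varphi\|_{\mathrm{Lip}(\mathbb{H})},
\end{align*}
since $\int_0^Te^{-\frac{\zeta t}{2}}\d t\leq\frac{2}{\zeta}$. Estimates of this type are what make the exponential mixing rate $\zeta$ useful in the averaging and large deviation arguments below.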
\iffalse
\begin{proof}
An application of the infinite dimensional It\^o formula to the process $\|\mathbf{Y}_{t}^{\mathbf{x},\mathbf{y}}\|_{\mathbb{H}}^2$ yields
\begin{align}\label{3.76}
&\|\mathbf{Y}_t^{\mathbf{x},\mathbf{y}}\|_{\mathbb{H}}^2+2\mu\int_0^t\|\mathbf{Y}_s^{\mathbf{x},\mathbf{y}}\|_{\mathbb{V}}^2\d s+2\beta\int_0^t\|\mathbf{Y}_s^{\mathbf{x},\mathbf{y}}\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d s\nonumber\\&=\|\mathbf{y}\|_{\mathbb{H}}^2+2\int_0^t(\mathrm{G}(\mathbf{x},\mathbf{Y}_s^{\mathbf{x},\mathbf{y}}),\mathbf{Y}_s^{\mathbf{x},\mathbf{y}})\d s+2\int_0^t(\sigma_2(\mathbf{x},\mathbf{Y}_s)\d\bar{\mathrm{W}}_s^{\mathrm{Q}_2},\mathbf{Y}_s^{\mathbf{x},\mathbf{y}})+\int_0^t\|\sigma_2(\mathbf{x},\mathbf{Y}_s^{\mathbf{x},\mathbf{y}})\|_{\mathcal{L}_{\mathrm{Q}_2}}^2\d s,
\end{align}
$\mathbb{P}$-a.s., for all $t\in[0,T]$. Taking expectation in \eqref{3.76}, we find
\begin{align}\label{3.77}
\mathbb{E}\left[\|\mathbf{Y}_t^{\mathbf{x},\mathbf{y}}\|_{\mathbb{H}}^2\right]&=\|\mathbf{y}\|_{\mathbb{H}}^2-2\mu\mathbb{E}\left[\int_0^t\|\mathbf{Y}_s^{\mathbf{x},\mathbf{y}}\|_{\mathbb{V}}^2\d s\right]-2\beta\mathbb{E}\left[\int_0^t\|\mathbf{Y}_s^{\mathbf{x},\mathbf{y}}\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d s\right]\nonumber\\&\quad+2\mathbb{E}\left[\int_0^t(\mathrm{G}(\mathbf{x},\mathbf{Y}_s^{\mathbf{x},\mathbf{y}}),\mathbf{Y}_s^{\mathbf{x},\mathbf{y}})\d s\right]+\mathbb{E}\left[\int_0^t\|\sigma_2(\mathbf{x},\mathbf{Y}_s^{\mathbf{x},\mathbf{y}})\|_{\mathcal{L}_{\mathrm{Q}_2}}^2\d s\right],
\end{align}
for all $t\in[0,T]$. Thus, we have
\begin{align}\label{3.78}
\frac{\d}{\d t}\mathbb{E}\left[\|\mathbf{Y}_t^{\mathbf{x},\mathbf{y}}\|_{\mathbb{H}}^2\right]&= -2\mu\mathbb{E}\left[\|\mathbf{Y}_t^{\mathbf{x},\mathbf{y}}\|_{\mathbb{V}}^2\right]-2\beta\mathbb{E}\left[\|\mathbf{Y}_t^{\mathbf{x},\mathbf{y}}\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\right]+2\mathbb{E}\left[(\mathrm{G}(\mathbf{x},\mathbf{Y}_t^{\mathbf{x},\mathbf{y}}),\mathbf{Y}_t^{\mathbf{x},\mathbf{y}})\right]\nonumber\\&\quad+\mathbb{E}\left[\|\sigma_2(\mathbf{x},\mathbf{Y}_t^{\mathbf{x},\mathbf{y}})\|_{\mathcal{L}_{\mathrm{Q}_2}}^2\right] \nonumber\\&\leq -(\mu\lambda_1-2L_{\mathrm{G}})\mathbb{E}\left[\|\mathbf{Y}_t^{\mathbf{x},\mathbf{y}}\|_{\mathbb{H}}^2\right]+\frac{C}{\mu\lambda_1}(1+\|\mathbf{x}\|_{\mathbb{H}}^2),
\end{align}
where we used the Assumption \ref{ass3.6} (A1) and (A2). For $\gamma=\mu\lambda_1-2L_{\mathrm{G}}>0$, an application of Gronwall's inequality in \eqref{3.78} yields
\begin{align}\label{3.79}
\mathbb{E}\left[\|\mathbf{Y}_t^{\mathbf{x},\mathbf{y}}\|_{\mathbb{H}}^2\right]\leq e^{-\gamma t}\|\mathbf{y}\|_{\mathbb{H}}^2+ \frac{C}{\mu\lambda_1\gamma}(1+\|\mathbf{x}\|_{\mathbb{H}}^2)\leq C_{\mu,\lambda_1,L_{\mathrm{G}}}(1+\|\mathbf{x}\|_{\mathbb{H}}^2+e^{-\gamma t}\|\mathbf{y}\|_{\mathbb{H}}^2),
\end{align}
for all $t\in[0,T]$. Furthermore, from \eqref{3.77}, we also obtain
\begin{align}\label{380}
&2\mu\mathbb{E}\left[\int_0^t\|\mathbf{Y}_s^{\mathbf{x},\mathbf{y}}\|_{\mathbb{V}}^2\d s\right]\nonumber\\&\leq\|\mathbf{y}\|_{\mathbb{H}}^2+C_{L_{\mathrm{G}},L_{\sigma_2}}(1+\|\mathbf{x}\|_{\mathbb{H}}^2)t+(C_{\mu,\lambda_1,L_{\mathrm{G}}}+2L_{\mathrm{G}}+L_{\sigma_2}^2)\mathbb{E}\left[\int_0^t\|\mathbf{Y}_s^{\mathbf{x},\mathbf{y}}\|_{\mathbb{H}}^2\d s\right]\nonumber\\&\leq \|\mathbf{y}\|_{\mathbb{H}}^2+C_{L_{\mathrm{G}},L_{\sigma_2}}(1+\|\mathbf{x}\|_{\mathbb{H}}^2)t+(C_{\mu,\lambda_1,L_{\mathrm{G}}}+2L_{\mathrm{G}}+L_{\sigma_2}^2)\int_0^t(1+\|\mathbf{x}\|_{\mathbb{H}}^2+e^{-\gamma s}\|\mathbf{y}\|_{\mathbb{H}}^2)\d s\nonumber\\&\leq C_{\mu,\lambda_1,L_{\mathrm{G}},L_{\sigma_2}}(\|\mathbf{y}\|_{\mathbb{H}}^2+(1+\|\mathbf{x}\|_{\mathbb{H}}^2)t),
\end{align}
for $t\geq 0$.
Let us denote by $\mathcal{M}(\mathbb{H})$ as the space of all probability measures defined on $(\mathbb{H},\mathscr{B}(\mathbb{H}))$. For any $n\in\mathbb{N}$, we define the Krylov-Bogoliubov measure as
\begin{align*}
\nu_n^{\mathbf{x}}:=\frac{1}{n}\int_{\mathbf{0}}^n\delta_{\mathbf{0}}\mathrm{P}_t^{\mathbf{x}}\d t, \ n\geq 1,
\end{align*}
where $\delta_{\mathbf{0}}$ is the Dirac measure at $\mathbf{0}$. Then, it can be easily seen that $\nu_n^{\mathbf{x}}$ is a probability measure such that for any $\varphi\in\mathscr{B}(\mathbb{H})$, we have $$\int_{\mathbb{H}}\varphi\d\nu_n^{\mathbf{x}}=\frac{1}{n}\int_0^n\mathrm{P}_t^{\mathbf{x}}\varphi(\mathbf{0})\d t.$$
We know that the embedding of $\mathbb{V}\subset\mathbb{H}$ is compact and hence for any $R\in(0,\infty)$, the set $K_R:=\{\mathbf{x}\in\mathbb{H}:\|\mathbf{x}\|_{\mathbb{V}}\leq R\}$ is relatively compact in $\mathbb{H}$. Thus, using Markov's inequality, it is immediate from \eqref{380} that
\begin{align}
\nu_n^{\mathbf{x}}(K_R^c)&=\frac{1}{n}\int_0^n\mathbb{P}\left\{\|\mathbf{Y}_0^{\mathbf{x},\mathbf{0}}\|_{\mathbb{H}}>R\right\}\d s\leq \frac{1}{nR^2}\int_0^n\mathbb{E}\left[\|\mathbf{Y}_s^{\mathbf{x},\mathbf{0}}\|_{\mathbb{V}}^2\right]\d s\nonumber\\&\leq \frac{C_{\mu,\lambda_1,L_{\mathrm{G}},L_{\sigma_2}}\|\mathbf{y}\|_{\mathbb{H}}^2}{n_0 R^2\mu}+ \frac{C_{\mu,\lambda_1,L_{\mathrm{G}},L_{\sigma_2}}}{R^2\mu}(1+\|\mathbf{x}\|_{\mathbb{H}}^2), \ \text{ for all }\ n\geq n_0, \nonumber\\&\to 0\ \text{ as } \ R\to\infty,
\end{align}
which implies that the sequence $\{\nu_n^{\mathbf{x}}\}_{n\geq 1}$ is tight. Hence, using Prokhorov's theorem, there exists a probability measure $\nu^{\mathbf{x}}$ and a subsequence $\{\nu_{n_k}^{\mathbf{x}}\}_{k\in\mathbb{N}}$ such that $\nu_{n_k}^{\mathbf{x}} \xrightarrow{w}\nu^{\mathbf{x}}$, as $k\to\infty$. One can easily show that $\nu^{\mathbf{x}}$ is an invariant probability measure for $\{\mathrm{P}_t^{\mathbf{x}}\}_{t\geq 0}$. Thus, by the Krylov-Bogoliubov theorem (or by a result of Chow and Khasminskii, see \cite{CHKH}) $\nu^{\mathbf{x}}$ results to be an invariant measure for the transition semigroup $(\mathrm{P}_t^{\mathbf{x}})_{t\geq 0}$, defined by $\mathrm{P}_t^{\mathbf{x}}\varphi(\mathbf{x})=\mathbb{E}\left[\varphi(\mathbf{Y}_t^{\mathbf{x},\mathbf{y}})\right],$ for all $\varphi\in\mathrm{C}_b(\mathbb{H})$.
Let us now prove the uniqueness of invariant measure. For $\mathbf{y}_1,\mathbf{y}_2\in\mathbb{H}$, we have
\begin{equation}\label{3.80}
\left\{
\begin{aligned}
\d\mathbf{V}_t&=-\{\mu\mathrm{A}\mathbf{V}_t+\beta[\mathcal{C}(\mathbf{Y}_{t}^{\mathbf{x},\mathbf{y}_1})-\mathcal{C}(\mathbf{Y}_{t}^{\mathbf{x},\mathbf{y}_2})]-[G(\mathbf{x},\mathbf{Y}_t^{\mathbf{x},\mathbf{y}_1})-G(\mathbf{x},\mathbf{Y}_t^{\mathbf{x},\mathbf{y}_2})]\}\d t\\&\quad+[\sigma_2(\mathbf{x},\mathbf{Y}_t^{\mathbf{x},\mathbf{y}_1})-\sigma_2(\mathbf{x},\mathbf{Y}_t^{\mathbf{x},\mathbf{y}_2})]\d\bar{\mathrm{W}}_t^{\mathrm{Q}_2},\\
\mathbf{V}_0&=\mathbf{y}_1-\mathbf{y}_2,
\end{aligned}\right.
\end{equation}
where $\mathbf{V}_{t}=\mathbf{Y}_{t}^{\mathbf{x},\mathbf{y}_1}-\mathbf{Y}_{t}^{\mathbf{x},\mathbf{y}_2}$. An application of the infinite dimensional It\^o formula to the process $\|\mathbf{V}_{t}\|_{\mathbb{H}}^2$ yields
\begin{align}
\mathbb{E}\left[\|\mathbf{V}_{t}\|_{\mathbb{H}}^2\right]&=\|\mathbf{V}_{0}\|_{\mathbb{H}}^2-2\mu\mathbb{E}\left[\int_0^t\|\mathbf{V}_{s}\|_{\mathbb{V}}^2\d s\right]-2\beta\mathbb{E}\left[\int_0^t\langle\mathcal{C}(\mathbf{Y}_{s}^{\mathbf{x},\mathbf{y}_1})-\mathcal{C}(\mathbf{Y}_{s}^{\mathbf{x},\mathbf{y}_2}),\mathbf{V}_{s}\rangle\d s\right]\nonumber\\&\quad+2\mathbb{E}\left[\int_0^t(G(\mathbf{x},\mathbf{Y}_s^{\mathbf{x},\mathbf{y}_1})-G(\mathbf{x},\mathbf{Y}_s^{\mathbf{x},\mathbf{y}_2}),\mathbf{V}_s)\d s\right]\nonumber\\&\quad+\mathbb{E}\left[\int_0^t\|\sigma_2(\mathbf{x},\mathbf{Y}_s^{\mathbf{x},\mathbf{y}_1})-\sigma_2(\mathbf{x},\mathbf{Y}_s^{\mathbf{x},\mathbf{y}_2})\|_{\mathcal{L}_{\mathrm{Q}_2}}^2\d s\right].
\end{align}
Thus, using the Assumption \ref{ass3.6} (A1)-(A2), and \eqref{214}, it is immediate that
\begin{align}\label{3.82}
\frac{\d}{\d t}\mathbb{E}\left[\|\mathbf{V}_{t}\|_{\mathbb{H}}^2\right]&=-2\mu\mathbb{E}\left[\|\mathbf{V}_{t}\|_{\mathbb{V}}^2\right]-2\beta\mathbb{E}\left[\langle\mathcal{C}(\mathbf{Y}_{t}^{\mathbf{x},\mathbf{y}_1})-\mathcal{C}(\mathbf{Y}_{t}^{\mathbf{x},\mathbf{y}_2}),\mathbf{V}_{t}\rangle\right]\nonumber\\&\quad+2\mathbb{E}\left[(G(\mathbf{x},\mathbf{Y}_t^{\mathbf{x},\mathbf{y}_1})-G(\mathbf{x},\mathbf{Y}_t^{\mathbf{x},\mathbf{y}_2}),\mathbf{V}_t)\right]\nonumber\\&\quad+\mathbb{E}\left[\|\sigma_2(\mathbf{x},\mathbf{Y}_t^{\mathbf{x},\mathbf{y}_1})-\sigma_2(\mathbf{x},\mathbf{Y}_t^{\mathbf{x},\mathbf{y}_2})\|_{\mathcal{L}_{\mathrm{Q}_2}}^2\right]\nonumber\\&\leq-\left(2\mu\lambda_1-2L_{\mathrm{G}}-L_{\sigma_2}^2\right)\mathbb{E}\left[\|\mathbf{V}_{t}\|_{\mathbb{H}}^2\right]-\frac{\beta}{2^{r-2}}\mathbb{E}\left[\|\mathbf{V}_{t}\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\right]\nonumber\\&\leq-\left(2\mu\lambda_1-2L_{\mathrm{G}}-L_{\sigma_2}^2\right)\mathbb{E}\left[\|\mathbf{V}_{t}\|_{\mathbb{H}}^2\right].
\end{align}
An application of Gronwall's inequality in \eqref{3.82} gives the following exponential stability result:
\begin{align}\label{385}
\sup_{\mathbf{x}\in\mathbb{H}}\mathbb{E}\left[\|\mathbf{Y}_t^{\mathbf{x},\mathbf{y}_1}-\mathbf{Y}_t^{\mathbf{x},\mathbf{y}_2}\|_{\mathbb{H}}^2\right]\leq e^{-\zeta t}\|\mathbf{y}_1-\mathbf{y}_2\|_{\mathbb{H}}^2, \ \text{ for all } \ t\geq 0,
\end{align}
where $\zeta=2\mu\lambda_1-2L_{\mathrm{G}}-L_{\sigma_2}^2>0$. Furthermore, using the invariance of $\nu^{\mathbf{x}}$ and \eqref{3.79}, we obtain
\begin{align*}
\int_{\mathbb{H}}\|\mathbf{y}\|_{\mathbb{H}}^2\nu^{\mathbf{x}}(\d\mathbf{y})=\int_{\mathbb{H}}\mathbb{E}\left[\|\mathbf{Y}_t^{\mathbf{x},\mathbf{y}}\|_{\mathbb{H}}^2\right]\nu^{\mathbf{x}}(\d\mathbf{y})\leq {C_{\mu,\lambda_1,L_{\mathrm{G}}}}(1+\|\mathbf{x}\|_{\mathbb{H}}^2)+e^{-\gamma t} \int_{\mathbb{H}}\|\mathbf{y}\|_{\mathbb{H}}^2\nu^{\mathbf{x}}(\d\mathbf{y}),
\end{align*}
for any $t>0$. Choosing $t$ large enough so that $e^{-\gamma t}\leq\frac{1}{2}$ and rearranging, we obtain
\begin{align}\label{386}
\int_{\mathbb{H}}\|\mathbf{y}\|_{\mathbb{H}}^2\nu^{\mathbf{x}}(\d\mathbf{y})\leq{C_{\mu,\lambda_1,L_{\mathrm{G}}}}(1+\|\mathbf{x}\|_{\mathbb{H}}^2).
\end{align}
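More precisely, the inequality above rearranges (provided the moment on the left hand side is finite, which can be justified by a standard truncation argument) to
\begin{align*}
\left(1-e^{-\gamma t}\right)\int_{\mathbb{H}}\|\mathbf{y}\|_{\mathbb{H}}^2\nu^{\mathbf{x}}(\d\mathbf{y})\leq {C_{\mu,\lambda_1,L_{\mathrm{G}}}}(1+\|\mathbf{x}\|_{\mathbb{H}}^2),
\end{align*}
and for $t\geq\frac{\log 2}{\gamma}$ we have $1-e^{-\gamma t}\geq\frac{1}{2}$, so that \eqref{386} holds after relabeling the constant.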
Thus, for any Lipschitz function $\varphi:\mathbb{H}\to\mathbb{R}$, using the invariance of $\nu^{\mathbf{x}}$, \eqref{385} and \eqref{386}, we have
\begin{align}
\left|\mathrm{P}_t^{\mathbf{x}}\varphi(\mathbf{y})-\int_{\mathbb{H}}\varphi(\mathbf{z})\nu^{\mathbf{x}}(\d\mathbf{z})\right|&=\left|\int_{\mathbb{H}}\mathbb{E}\left[\varphi(\mathbf{Y}_t^{\mathbf{x},\mathbf{y}})-\varphi(\mathbf{Y}_t^{\mathbf{x},\mathbf{z}})\right]\nu^{\mathbf{x}}(\d\mathbf{z})\right|\nonumber\\&\leq \|\varphi\|_{\mathrm{Lip}}\int_{\mathbb{H}}\mathbb{E}\left[\|\mathbf{Y}_t^{\mathbf{x},\mathbf{y}}-\mathbf{Y}_t^{\mathbf{x},\mathbf{z}}\|_{\mathbb{H}}\right]\nu^{\mathbf{x}}(\d\mathbf{z})\nonumber\\&\leq \|\varphi\|_{\mathrm{Lip}}\int_{\mathbb{H}}\left(\mathbb{E}\left[\|\mathbf{Y}_t^{\mathbf{x},\mathbf{y}}-\mathbf{Y}_t^{\mathbf{x},\mathbf{z}}\|_{\mathbb{H}}^2\right]\right)^{1/2}\nu^{\mathbf{x}}(\d\mathbf{z})\nonumber\\&\leq \|\varphi\|_{\mathrm{Lip}}e^{-\frac{\zeta t}{2}}\int_{\mathbb{H}}\|\mathbf{y}-\mathbf{z}\|_{\mathbb{H}}\nu^{\mathbf{x}}(\d\mathbf{z})\nonumber\\&\leq C_{\mu,\lambda_1,L_{\mathrm{G}}} \|\varphi\|_{\mathrm{Lip}}(1+\|\mathbf{x}\|_{\mathbb{H}}+\|\mathbf{y}\|_{\mathbb{H}})e^{-\frac{\zeta t}{2}},
\end{align}
and by the density of $\text{Lip}(\mathbb{H})$ in $\mathrm{C}_b (\mathbb{H})$, we obtain the above result for every $\varphi\in \mathrm{C}_b (\mathbb{H})$. In particular, the uniqueness of invariant measure $\nu^{\mathbf{x}}$ follows.
\end{proof}
\fi
The interested reader is referred to \cite{GDJZ,ADe,FFBM,MHJC}, etc., for more details on invariant measures and ergodicity for infinite-dimensional dynamical systems and the stochastic Navier-Stokes equations. We need the following lemma in the sequel.
\begin{lemma}
There exists a constant $C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2}}>0$ such that for any $\mathbf{x}_1,\mathbf{x}_2,\mathbf{y}\in\mathbb{H}$, we have
\begin{align}\label{394}
\sup_{t\geq 0}\mathbb{E}\left[\|\mathbf{Y}_t^{\mathbf{x}_1,\mathbf{y}}-\mathbf{Y}_t^{\mathbf{x}_2,\mathbf{y}}\|_{\mathbb{H}}^2\right]\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2}}\|\mathbf{x}_1-\mathbf{x}_2\|_{\mathbb{H}}^2.
\end{align}
\end{lemma}
\begin{proof}
We know that $\mathbf{W}_{t}:=\mathbf{Y}_{t}^{\mathbf{x}_1,\mathbf{y}}-\mathbf{Y}_{t}^{\mathbf{x}_2,\mathbf{y}}$ satisfies the following system:
\begin{equation}
\left\{
\begin{aligned}
\d\mathbf{W}_t&=-[\mu\mathrm{A}\mathbf{W}_t+\alpha\mathbf{W}_t+\beta(\mathcal{C}(\mathbf{Y}_{t}^{\mathbf{x}_1,\mathbf{y}})-\mathcal{C}(\mathbf{Y}_{t}^{\mathbf{x}_2,\mathbf{y}}))]\d t+[\mathrm{G}(\mathbf{x}_1,\mathbf{Y}_t^{\mathbf{x}_1,\mathbf{y}})-\mathrm{G}(\mathbf{x}_2,\mathbf{Y}_t^{\mathbf{x}_2,\mathbf{y}})]\d t\\&\quad +[\sigma_2(\mathbf{x}_1,\mathbf{Y}_t^{\mathbf{x}_1,\mathbf{y}})-\sigma_2(\mathbf{x}_2,\mathbf{Y}_t^{\mathbf{x}_2,\mathbf{y}})]\mathrm{Q}_2^{1/2}\d\bar{\mathrm{W}}_t,\\
\mathbf{W}_0&=\mathbf{0}.
\end{aligned}\right.
\end{equation}
Applying the infinite dimensional It\^o formula to the process $\|\mathbf{W}_t\|_{\mathbb{H}}^2$, we find
\begin{align}\label{4p10}
& \|\mathbf{W}_t\|_{\mathbb{H}}^2+2\mu\int_0^t\|\mathbf{W}_s\|_{\mathbb{V}}^2\d s+2\alpha\int_0^t\|\mathbf{W}_s\|_{\mathbb{H}}^2\d s+2\beta\int_0^t\langle\mathcal{C}(\mathbf{Y}_{s}^{\mathbf{x}_1,\mathbf{y}})-\mathcal{C}(\mathbf{Y}_{s}^{\mathbf{x}_2,\mathbf{y}}),\mathbf{W}_s\rangle\d s\nonumber\\&\leq 2\int_0^t([\mathrm{G}(\mathbf{x}_1,\mathbf{Y}_s^{\mathbf{x}_1,\mathbf{y}})-\mathrm{G}(\mathbf{x}_2,\mathbf{Y}_s^{\mathbf{x}_2,\mathbf{y}})],\mathbf{W}_s)\d s\nonumber\\&\quad+\int_0^t\|[\sigma_2(\mathbf{x}_1,\mathbf{Y}_s^{\mathbf{x}_1,\mathbf{y}})-\sigma_2(\mathbf{x}_2,\mathbf{Y}_s^{\mathbf{x}_2,\mathbf{y}})]\mathrm{Q}_2^{1/2}\|_{\mathcal{L}_2}^2\d s\nonumber\\&\quad+2\int_0^t([\sigma_2(\mathbf{x}_1,\mathbf{Y}_s^{\mathbf{x}_1,\mathbf{y}})-\sigma_2(\mathbf{x}_2,\mathbf{Y}_s^{\mathbf{x}_2,\mathbf{y}})]\mathrm{Q}_2^{1/2}\d\bar{\mathrm{W}}_s,\mathbf{W}_s), \ \mathbb{P}\text{-a.s.},
\end{align}
for all $t\in[0,T]$. Taking expectation in \eqref{4p10} and using the fact that the final term appearing on the right hand side of \eqref{4p10} is a martingale, we get
\begin{align}
& \mathbb{E}\left[\|\mathbf{W}_t\|_{\mathbb{H}}^2\right]+2\mu\mathbb{E}\left[\int_0^t\|\mathbf{W}_s\|_{\mathbb{V}}^2\d s\right]+2\alpha\mathbb{E}\left[\int_0^t\|\mathbf{W}_s\|_{\mathbb{H}}^2\d s\right]\nonumber\\&=-2\beta\mathbb{E}\left[\int_0^t\langle\mathcal{C}(\mathbf{Y}_{s}^{\mathbf{x}_1,\mathbf{y}})-\mathcal{C}(\mathbf{Y}_{s}^{\mathbf{x}_2,\mathbf{y}}),\mathbf{W}_s\rangle\d s\right]\nonumber\\&\quad+2\mathbb{E}\left[\int_0^t([\mathrm{G}(\mathbf{x}_1,\mathbf{Y}_s^{\mathbf{x}_1,\mathbf{y}})-\mathrm{G}(\mathbf{x}_2,\mathbf{Y}_s^{\mathbf{x}_2,\mathbf{y}})],\mathbf{W}_s)\d s\right]\nonumber\\&\quad+\mathbb{E}\left[\int_0^t\|[\sigma_2(\mathbf{x}_1,\mathbf{Y}_s^{\mathbf{x}_1,\mathbf{y}})-\sigma_2(\mathbf{x}_2,\mathbf{Y}_s^{\mathbf{x}_2,\mathbf{y}})]\mathrm{Q}_2^{1/2}\|_{\mathcal{L}_2}^2\d s\right],
\end{align}
so that using the Assumption \ref{ass3.6} (A1) and \eqref{214}, we have
\begin{align}
\frac{\d}{\d t} \mathbb{E}\left[\|\mathbf{W}_t\|_{\mathbb{H}}^2\right]&=-2\mu\mathbb{E}\left[\|\mathbf{W}_t\|_{\mathbb{V}}^2\right]-2\alpha\mathbb{E}\left[\|\mathbf{W}_t\|_{\mathbb{H}}^2\right]-2\beta\mathbb{E}\left[\langle\mathcal{C}(\mathbf{Y}_{t}^{\mathbf{x}_1,\mathbf{y}})-\mathcal{C}(\mathbf{Y}_{t}^{\mathbf{x}_2,\mathbf{y}}),\mathbf{W}_t\rangle\right]\nonumber\\&\quad+2\mathbb{E}\left[([\mathrm{G}(\mathbf{x}_1,\mathbf{Y}_t^{\mathbf{x}_1,\mathbf{y}})-\mathrm{G}(\mathbf{x}_2,\mathbf{Y}_t^{\mathbf{x}_2,\mathbf{y}})],\mathbf{W}_t)\right]\nonumber\\&\quad+\mathbb{E}\left[\|[\sigma_2(\mathbf{x}_1,\mathbf{Y}_t^{\mathbf{x}_1,\mathbf{y}})-\sigma_2(\mathbf{x}_2,\mathbf{Y}_t^{\mathbf{x}_2,\mathbf{y}})]\mathrm{Q}_2^{1/2}\|_{\mathcal{L}_2}^2\right]\nonumber\\&\leq-(2\mu\lambda_1+2\alpha)\mathbb{E}\left[\|\mathbf{W}_t\|_{\mathbb{H}}^2\right]-\frac{\beta}{2^{r-2}}\mathbb{E}\left[\|\mathbf{W}_t\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\right]\nonumber\\&\quad+2\mathbb{E}\left[C\|\mathbf{x}_1-\mathbf{x}_2\|_{\mathbb{H}}\|\mathbf{W}_t\|_{\mathbb{H}}+L_{\mathrm{G}}\|\mathbf{W}_t\|_{\mathbb{H}}^2\right]\nonumber\\&\quad+\mathbb{E}\left[\left(C\|\mathbf{x}_1-\mathbf{x}_2\|_{\mathbb{H}}+L_{\sigma_2}\|\mathbf{W}_t\|_{\mathbb{H}}\right)^2\right]\nonumber\\&\leq -\left(\mu\lambda_1+2\alpha-2L_{\mathrm{G}}-2L_{\sigma_2}^2\right)\mathbb{E}\left[\|\mathbf{W}_t\|_{\mathbb{H}}^2\right]+C\left(1+\frac{1}{\mu\lambda_1}\right)\|\mathbf{x}_1-\mathbf{x}_2\|_{\mathbb{H}}^2,
\end{align}
for a.e. $t\in[0,T]$. Using the variation of constants formula, we further have
\begin{align}
\mathbb{E}\left[\|\mathbf{W}_t\|_{\mathbb{H}}^2\right]\leq C\left(1+\frac{1}{\mu\lambda_1}\right)\|\mathbf{x}_1-\mathbf{x}_2\|_{\mathbb{H}}^2\int_0^te^{-\xi(t-s)}\d s\leq \frac{C}{\xi}\left(1+\frac{1}{\mu\lambda_1}\right)\|\mathbf{x}_1-\mathbf{x}_2\|_{\mathbb{H}}^2,
\end{align}
for any $t>0$, where $\xi=\mu\lambda_1+2\alpha-2L_{\mathrm{G}}-2L_{\sigma_2}^2>0$ and the estimate \eqref{394} follows.
\end{proof}
\subsection{Preliminaries}
In this subsection, we provide some preliminaries regarding the large deviation principle (LDP). Let us denote by $\mathscr{E}$ a complete separable metric space (Polish space) with the Borel $\sigma$-field $\mathscr{B}(\mathscr{E})$.
\begin{definition}
A function $\mathrm{I} : \mathscr{E}\rightarrow [0, \infty]$ is called a \emph{rate function} if $\mathrm{I}$ is lower semicontinuous. A rate function $\mathrm{I}$ is called a \emph{good rate function,} if for arbitrary $M \in [0, \infty)$, the level set $\mathcal{K}_M = \big\{x\in\mathscr{E}: \mathrm{I}(x)\leq M\big\}$ is compact in $\mathscr{E}$.
\end{definition}
\begin{definition}[Large deviation principle]\label{LDP} Let $\mathrm{I}$ be a rate function defined on $\mathscr{E}$. A family $\big\{\mathrm{X}^{\varepsilon}: \varepsilon
> 0\big\}$ of $\mathscr{E}$-valued random elements is said to satisfy \emph{the large deviation principle} on $\mathscr{E}$ with rate function $\mathrm{I}$, if the following two conditions hold:
\begin{enumerate}
\item[(i)] (Large deviation upper bound) For each closed set $\mathrm{F}\subset \mathscr{E}$:
$$ \limsup_{\varepsilon\rightarrow 0} \varepsilon\log \mathbb{P}\left(\mathrm{X}^{\varepsilon}\in\mathrm{F}\right) \leq -\inf_{x\in \mathrm{F}} \mathrm{I}(x).$$
\item[(ii)] (Large deviation lower bound) For each open set $\mathrm{G}\subset \mathscr{E}$:
$$ \liminf_{\varepsilon\rightarrow 0}\varepsilon \log \mathbb{P}(\mathrm{X}^{\varepsilon}\in\mathrm{G}) \geq -\inf_{x\in \mathrm{G}} \mathrm{I}(x).$$
\end{enumerate}
\end{definition}
\begin{definition}
Let $\mathrm{I}$ be a rate function on $\mathscr{E}$. A family $\big\{\mathrm{X}^{\varepsilon} :\varepsilon > 0\big\}$ of $\mathscr{E}$-valued random elements is said to satisfy the \emph{Laplace principle} on $\mathscr{E}$ with rate function $\mathrm{I},$ if for each real-valued, bounded and continuous function $h$ defined on $\mathscr{E}$, that is, for $h\in\mathrm{C}_b(\mathscr{E})$,
\begin{equation}\label{LP}
\lim_{\varepsilon \rightarrow 0} {\varepsilon }\log
\mathbb{E}\left\{\exp\left[-
\frac{1}{\varepsilon}h(\mathrm{X}^{\varepsilon})\right]\right\} = -\inf_{x
\in \mathscr{E}} \big\{h(x) + \mathrm{I}(x)\big\}.
\end{equation}
\end{definition}
\begin{lemma}[Varadhan's Lemma, \cite{Va}]\label{VL}
Let $\mathscr{E}$ be a Polish space and $\{\mathrm{X}^{\varepsilon}: \varepsilon > 0\}$ be a family of $\mathscr{E}$-valued random elements satisfying LDP with rate function $\mathrm{I}$. Then $\{\mathrm{X}^{\varepsilon}: \varepsilon > 0\}$ satisfies the Laplace principle on $\mathscr{E}$ with the same rate function $\mathrm{I}$.
\end{lemma}
\begin{lemma}[Bryc's Lemma, \cite{DZ}]\label{BL}
The Laplace principle implies the LDP with the same rate function.
\end{lemma}
It should be noted that Varadhan's Lemma together with Bryc's converse of Varadhan's Lemma state that for Polish space valued random elements, the Laplace principle and the large deviation principle are equivalent.
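As a simple one-dimensional illustration of these notions (not used in the sequel), consider $\mathrm{X}^{\varepsilon}=\sqrt{\varepsilon}\,Z$, where $Z$ is a standard real Gaussian random variable. Then $\{\mathrm{X}^{\varepsilon}:\varepsilon>0\}$ satisfies the LDP on $\mathbb{R}$ with the good rate function $\mathrm{I}(x)=\frac{x^2}{2}$, and the Laplace principle reads
\begin{align*}
\lim_{\varepsilon\to0}\varepsilon\log\mathbb{E}\left[\exp\left(-\frac{1}{\varepsilon}h\big(\sqrt{\varepsilon}\,Z\big)\right)\right]=-\inf_{x\in\mathbb{R}}\left\{h(x)+\frac{x^2}{2}\right\}, \ \text{ for all } \ h\in\mathrm{C}_b(\mathbb{R}),
\end{align*}
which can be checked directly by applying the Laplace method to the Gaussian density.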
\subsection{Functional setting and Budhiraja-Dupuis LDP}\label{sec4.2}
In this subsection, we set up the notation and terminology needed to state the large deviations result of Budhiraja and Dupuis \cite{BD1} for Polish space valued random elements. Let us define $$\mathcal{A}:=\left\{\mathbb{H}\text{-valued }\{\mathscr{F}_t\}\text{-predictable processes }h\text{ such that }\int_0^T\|h(s)\|_{\mathbb{H}}^2\d s<+\infty,\ \mathbb{P}\text{-a.s.}\right\}, $$ and $$\mathcal{S}_M: = \left\{h\in\mathrm{L}^2(0,T;\mathbb{H}): \int_0^T \|h(s)\|_{\mathbb{H}}^2\d s \leq M\right\}.$$ It is known from \cite{BD2} that the space $\mathcal{S}_M$ is a compact metric space under the metric $\widetilde{d}(\mathbf{u},\mathbf{v})=\sum\limits_{j=1}^{\infty}\frac{1}{2^j}\left|\int_0^T(\mathbf{u}(s)-\mathbf{v}(s),\widetilde{e}_j(s))\d s\right|$, where $\{\widetilde{e}_j\}_{j=1}^{\infty}$ is an orthonormal basis of $\mathrm{L}^2(0,T;\mathbb{H})$. Since every compact metric space is complete, the set $\mathcal{S}_M$ endowed with the weak topology obtained from the metric $\widetilde{d}$ is a Polish space. Let us now define $$\mathcal{A}_M=\big\{h\in\mathcal{A}: h(\omega)\in \mathcal{S}_M, \ \mathbb{P}\text{-a.s.}\big\}.$$ Next, we state an important lemma regarding the convergence of the sequence $\int_0^{\cdot}h_n(s)\d s$, which is useful in proving compactness as well as weak convergence results.
\begin{lemma}[Lemma 3.2, \cite{BD1}]
Let $\{h_n\}$ be a sequence of elements from $\mathcal{A}_M,$ for some $0<M <+\infty$. Let the sequence $\{h_n\}$ converge in distribution to $h$ with respect to the weak topology on $\mathrm{L}^2(0,T;\mathbb{H})$. Then $\int_0^{\cdot}h_n(s)\d s$ converges in distribution as $\mathrm{C}([0,T];\mathbb{H})$-valued processes to $\int_0^{\cdot}h(s)\d s$ as $n\to\infty$.
\end{lemma} Let $\mathscr{E}$ denote a Polish space, and for $\varepsilon>0$, let $\mathcal{G}^{\varepsilon} :\mathrm{C}([0,T];\mathbb{H})\to\mathscr{E}$ be a measurable map. Let us define $$\mathrm{X}^{\varepsilon} =\mathcal{G}^{\varepsilon}(\mathrm{W}(\cdot)).$$ We are interested in the large deviation principle for $\mathrm{X}^{\varepsilon}$ as $\varepsilon\to 0$.
\begin{hypothesis}\label{hyp1}
There exists a measurable map $\mathcal{G}^0 : \mathrm{C} ([0, T ] ;\mathbb{H} )\to\mathscr{ E}$ such that the following
hold:
\begin{enumerate}
\item [(i)] Let $\{h^{\varepsilon} :\varepsilon>0\}\subset \mathcal{A}_{M},$ for some $M<+\infty$. Let $h^{\varepsilon}$ converge in distribution as an $\mathcal{S}_M$-valued random element to $h$ as $\varepsilon\to0$. Then $\mathcal{G}^{\varepsilon}(\mathrm{W}(\cdot)+\frac{1}{\sqrt{\varepsilon}}\int_0^{\cdot}h^{\varepsilon}(s)\d s)$ converges in distribution to $\mathcal{G}^0(\int_0^{\cdot}h(s)\d s)$ as $\varepsilon\to0$.
\item [(ii)] For every $M<+\infty$, the set $$\mathcal{K}_M=\left\{\mathcal{G}^0\left(\int_0^{\cdot}h(s)\d s\right):h\in \mathcal{S}_M\right\}$$ is a compact subset of $\mathscr{E}$.
\end{enumerate}
\end{hypothesis}
For each $f\in\mathscr{E}$, we define
\begin{align}\label{rate}
\mathrm{I}(f):=\inf_{\left\{h\in\mathrm{L}^2(0,T;\mathbb{H}):f=\mathcal{G}^0\left(\int_0^{\cdot}h(s)\d s\right)\right\}}\left\{\frac{1}{2}\int_0^T\|h(s)\|_{\mathbb{H}}^2\d s\right\},
\end{align}
where the infimum over an empty set is taken to be $+\infty$. Next, we state an important result due to Budhiraja and Dupuis \cite{BD1}.
\begin{theorem}[Budhiraja-Dupuis principle, Theorem 4.4, \cite{BD1}]\label{BD}
Let $\mathrm{X}^{\varepsilon}= \mathcal{G}^{\varepsilon}(\mathrm{W}(\cdot))$. If $\{\mathcal{G}^{\varepsilon}\}$ satisfies the Hypothesis \ref{hyp1}, then the family $\{\mathrm{X}^{\varepsilon}:\varepsilon>0\}$ satisfies the Laplace principle in $\mathscr{E}$ with rate function $\mathrm{I}$ given by (\ref{rate}).
\end{theorem}
It should be noted that Hypothesis \ref{hyp1} (i) is a statement on the weak convergence of a certain family of random variables and is at the core of weak convergence approach to the study of large deviations. Hypothesis \ref{hyp1} (ii) says that the level sets of the rate function are compact.
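In applications, the random variable $\mathcal{G}^{\varepsilon}\big(\mathrm{W}(\cdot)+\frac{1}{\sqrt{\varepsilon}}\int_0^{\cdot}h^{\varepsilon}(s)\d s\big)$ appearing in Hypothesis \ref{hyp1} (i) is typically identified, via Girsanov's theorem, with the solution of the original stochastic equation perturbed by an additional control drift. In our setting this identification is expected to take the form
\begin{align*}
\mathcal{G}^{\varepsilon}\Big(\mathrm{W}(\cdot)+\frac{1}{\sqrt{\varepsilon}}\int_0^{\cdot}h^{\varepsilon}(s)\d s\Big)=\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}},
\end{align*}
where $\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}$ denotes the slow component of the stochastic controlled system \eqref{5.4z} introduced in the weak convergence analysis below.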
\subsection{LDP for SCBF equations} Let us recall that the system (\ref{2.18a})
has an $\mathscr{F}_t$-adapted pathwise unique strong solution $(\mathbf{X}^{\varepsilon,\delta}_t,\mathbf{Y}^{\varepsilon,\delta}_t)$ in the Polish space $$\mathrm{C}([0,T];\mathbb{H})\cap\mathrm{L}^2(0,T;\mathbb{V})\cap\mathrm{L}^{r+1}(0,T;\widetilde\mathbb{L}^{r+1})\times \mathrm{C}([0,T];\mathbb{H})\cap\mathrm{L}^2(0,T;\mathbb{V})\cap\mathrm{L}^{r+1}(0,T;\widetilde\mathbb{L}^{r+1}), \ \mathbb{P}\text{-a.s.}$$ The solution to the first equation in (\ref{2.18a}), denoted by $\mathbf{X}^{\varepsilon,\delta}_{\cdot}$, can be written as $\mathcal{G}^{\varepsilon}(\mathrm{W} (\cdot))$, for a Borel measurable function $\mathcal{G}^{\varepsilon} : \mathrm{C}([0, T ]; \mathbb{H})\to \mathscr{E},$ where $\mathscr{E}=\mathrm{C}([0,T];\mathbb{H})\cap\mathrm{L}^2(0,T;\mathbb{V})\cap\mathrm{L}^{r+1}(0,T;\widetilde\mathbb{L}^{r+1})$ (see Corollary 4.2, Chapter X, \cite{VF} for the 2D Navier-Stokes equations; see also \cite{BD1}). Our main goal is to verify that such a $\mathcal{G}^{\varepsilon}$ satisfies the Hypothesis \ref{hyp1}. Then, applying the Theorem \ref{BD}, the LDP for $\big\{\mathbf{X}^{\varepsilon,\delta} : \varepsilon > 0\big\}$ in $\mathscr{E}$ can be established. Let us now state our main result on the Wentzell-Freidlin type large deviation principle for the system \eqref{3.6} ($r\in[1,\infty)$ for $n=2$, and $r\in[3,\infty)$ for $n=3$, with $2\beta\mu>1$ for $r=3$).
\begin{theorem}\label{thm4.14}
Under the Assumption \ref{ass3.6}, $\{\mathbf{X}^{\varepsilon,\delta}:\varepsilon>0\}$ obeys an LDP on $\mathrm{C}([0, T ]; \mathbb{H} ) \cap \mathrm{L}^2(0, T ; \mathbb{V} )\cap\mathrm{L}^{r+1}(0,T;\widetilde\mathbb{L}^{r+1})$ with the rate function $\mathrm{I}$ defined in \eqref{rate}.
\iffalse
More precisely, it holds that:
\begin{enumerate}
\item [(i)] For each closed subset $\mathrm{F}$ of $\mathrm{C}([0,T];\mathbb{H})\cap\mathrm{L}^2(0,T;\mathbb{V})\cap\mathrm{L}^{r+1}(0,T;\widetilde\mathbb{L}^{r+1}),$
\begin{align*}
\limsup_{\varepsilon\to0}\varepsilon\log\mathbb{P}\left(\mathbf{X}^{\varepsilon,\delta}-\bar{\mathbf{X}}\in\mathrm{F}\right)\leq -\inf_{x\in\mathrm{F}}\mathrm{I}(x).
\end{align*}
\item [(ii)] For each open subset $\mathrm{G}$ of $\mathrm{C}([0,T];\mathbb{H})\cap\mathrm{L}^2(0,T;\mathbb{V})\cap\mathrm{L}^{r+1}(0,T;\widetilde\mathbb{L}^{r+1}),$
\begin{align*}
\liminf_{\varepsilon\to0}\varepsilon\log\mathbb{P}\left(\mathbf{X}^{\varepsilon,\delta}-\bar{\mathbf{X}}\in\mathrm{G}\right)\geq -\inf_{x\in\mathrm{G}}\mathrm{I}(x).
\end{align*}
\end{enumerate}
\fi
\end{theorem}
The LDP for $\big\{\mathbf{X}^{\varepsilon,\delta} : \varepsilon> 0\big\}$ in $\mathscr{E}$ (Theorem \ref{thm4.14}) is proved in the following way. We show the well-posedness of certain controlled deterministic and controlled stochastic equations in $\mathscr{E}$. These results help us to prove the two main results on the compactness of the level sets and weak convergence of the stochastic controlled equation, which verifies the Hypothesis \ref{hyp1}. We exploit the classical Khasminskii approach based on time discretization in the weak convergence part of the Hypothesis \ref{hyp1}.
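Once the identification of $\mathcal{G}^0$ with the solution map of the controlled deterministic equation \eqref{5.4y} below is in place, the rate function \eqref{rate} takes, at least formally, the explicit form
\begin{align*}
\mathrm{I}(f)=\inf\left\{\frac{1}{2}\int_0^T\|h(t)\|_{\mathbb{H}}^2\d t: h\in\mathrm{L}^2(0,T;\mathbb{H}), \ f=\bar{\mathbf{X}}^{h}\right\},
\end{align*}
where $\bar{\mathbf{X}}^{h}$ is the unique weak solution of \eqref{5.4y} and the infimum over an empty set is $+\infty$.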
\subsection{Compactness} Let us first verify the Hypothesis \ref{hyp1} (ii) on compactness.
\begin{theorem}\label{thm5.9}
Let $h\in\mathrm{L}^2(0,T;\mathbb{H})$ and the Assumption \ref{ass3.6} be satisfied. Then the following deterministic control system:
\begin{equation}\label{5.4y}
\left\{
\begin{aligned}
\d\bar{\mathbf{X}}^{h}_t&=-[\mu\mathrm{A}\bar{\mathbf{X}}^{h}_t+\mathrm{B}(\bar{\mathbf{X}}^{h}_t)+\alpha\bar{\mathbf{X}}^{h}_t+\beta\mathcal{C}(\bar{\mathbf{X}}^{h}_t)]\d t+\bar{\mathrm{F}}(\bar{\mathbf{X}}^{h}_t)\d t+\sigma_1(\bar{\mathbf{X}}^{h}_t)\mathrm{Q}_1^{1/2}h_t\d t, \\ \bar{\mathbf{X}}^{h}_0&=\mathbf{x},
\end{aligned}\right.
\end{equation}
has a \emph{unique weak solution} in $\mathrm{C}([0,T];\mathbb{H})\cap\mathrm{L}^2(0,T;\mathbb{V})\cap\mathrm{L}^{r+1}(0,T;\widetilde\mathbb{L}^{r+1})$, and
\begin{align}\label{5.5y}
& \sup_{h\in \mathcal{S}_M}\left\{\sup_{t\in[0,T]}\|\bar{\mathbf{X}}^{h}_t\|_{\mathbb{H}}^2+\mu\int_0^T\|\bar{\mathbf{X}}^{h}_t\|_{\mathbb{V}}^2\d t+\alpha\int_0^T\|\bar{\mathbf{X}}^{h}_t\|_{\mathbb{H}}^2\d t+\beta\int_0^T\|\bar{\mathbf{X}}^{h}_t\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d t\right\}\nonumber\\&\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},M,T}\left(1+\|\mathbf{x}\|_{\mathbb{H}}^2\right).
\end{align}
\end{theorem}
\begin{proof}
From the Assumption \ref{ass3.6}, we know that the operator $\sigma_1(\cdot)\mathrm{Q}_1^{1/2}$ is Lipschitz. Since $\mathrm{F}(\cdot,\cdot)$ is Lipschitz, one can show that $\bar{\mathrm{F}}(\cdot)$ is Lipschitz in the following way. Using the Assumption \ref{ass3.6} (A1), \eqref{393} and \eqref{394}, we have
\begin{align*}
&\|\bar{\mathrm{F}}(\mathbf{x}_1)-\bar{\mathrm{F}}(\mathbf{x}_2)\|_{\mathbb{H}}\nonumber\\&=\left\|\int_{\mathbb{H}}\mathrm{F}(\mathbf{x}_1,\mathbf{z})\mu^{\mathbf{x}_1}(\d\mathbf{z})-\int_{\mathbb{H}}\mathrm{F}(\mathbf{x}_2,\mathbf{z})\mu^{\mathbf{x}_2}(\d\mathbf{z})\right\|_{\mathbb{H}}\nonumber\\&\leq\left\|\int_{\mathbb{H}}\mathrm{F}(\mathbf{x}_1,\mathbf{z})\mu^{\mathbf{x}_1}(\d\mathbf{z})-\mathbb{E}\left[\mathrm{F}(\mathbf{x}_1,\mathbf{Y}_t^{\mathbf{x}_1,\mathbf{y}})\right]\right\|_{\mathbb{H}}+\left\|\mathbb{E}\left[\mathrm{F}(\mathbf{x}_2,\mathbf{Y}_t^{\mathbf{x}_2,\mathbf{y}})\right]-\int_{\mathbb{H}}\mathrm{F}(\mathbf{x}_2,\mathbf{z})\mu^{\mathbf{x}_2}(\d\mathbf{z})\right\|_{\mathbb{H}}\nonumber\\&\quad+\|\mathbb{E}\left[\mathrm{F}(\mathbf{x}_1,\mathbf{Y}_t^{\mathbf{x}_1,\mathbf{y}})\right]-\mathbb{E}\left[\mathrm{F}(\mathbf{x}_2,\mathbf{Y}_t^{\mathbf{x}_2,\mathbf{y}})\right]\|_{\mathbb{H}}\nonumber\\&\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}}}(1+\|\mathbf{x}_1\|_{\mathbb{H}}+\|\mathbf{x}_2\|_{\mathbb{H}}+\|\mathbf{y}\|_{\mathbb{H}})e^{-\frac{\zeta t}{2}}+C\left(\|\mathbf{x}_1-\mathbf{x}_2\|_{\mathbb{H}}+\mathbb{E}\left[\|\mathbf{Y}_t^{\mathbf{x}_1,\mathbf{y}}-\mathbf{Y}_t^{\mathbf{x}_2,\mathbf{y}}\|_{\mathbb{H}}\right]\right)\nonumber\\&\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}}}(1+\|\mathbf{x}_1\|_{\mathbb{H}}+\|\mathbf{x}_2\|_{\mathbb{H}}+\|\mathbf{y}\|_{\mathbb{H}})e^{-\frac{\zeta t}{2}}+C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2}}\|\mathbf{x}_1-\mathbf{x}_2\|_{\mathbb{H}}.
\end{align*}
Taking $t\to\infty$, we get
\begin{align}\label{3p93}
&\|\bar{\mathrm{F}}(\mathbf{x}_1)-\bar{\mathrm{F}}(\mathbf{x}_2)\|_{\mathbb{H}}\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2}}\|\mathbf{x}_1-\mathbf{x}_2\|_{\mathbb{H}}.
\end{align}
Thus, it is immediate that the system \eqref{5.4y} is a Lipschitz perturbation of the CBF equations. The existence and uniqueness of a weak solution in the Leray-Hopf sense (satisfying the energy equality) of the system (\ref{5.4y}) can be proved using the monotonicity as well as demicontinuity properties of the linear and nonlinear operators and the Minty-Browder technique, as in Theorem 3.4, \cite{MTM7}. Thus, we only need to establish \eqref{5.5y}. Taking the inner product of the first equation in \eqref{5.4y} with $\bar{\mathbf{X}}^{h}_t$, we find
\begin{align}\label{6.7}
&\frac{1}{2}\frac{\d}{\d t}\|\bar{\mathbf{X}}^{h}_t\|_{\mathbb{H}}^2+\mu\|\bar{\mathbf{X}}^{h}_t\|_{\mathbb{V}}^2+\alpha\|\bar{\mathbf{X}}^{h}_t\|_{\mathbb{H}}^2+\beta\|\bar{\mathbf{X}}^{h}_t\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}=(\bar{\mathrm{F}}(\bar{\mathbf{X}}^{h}_t)+\sigma_1(\bar{\mathbf{X}}^{h}_t)\mathrm{Q}_1^{1/2}h_t,\bar{\mathbf{X}}^{h}_t),
\end{align}
since $\langle\mathrm{B}(\bar{\mathbf{X}}^{h}_t),\bar{\mathbf{X}}^{h}_t\rangle=0$. Using the Cauchy-Schwarz inequality, \eqref{3p93} and Young's inequality, we estimate $(\bar{\mathrm{F}}(\bar{\mathbf{X}}^{h}),\bar{\mathbf{X}}^{h})$ as
\begin{align}\label{410}
|(\bar{\mathrm{F}}(\bar{\mathbf{X}}^{h}),\bar{\mathbf{X}}^{h})|&\leq\|\bar{\mathrm{F}}(\bar{\mathbf{X}}^{h})\|_{\mathbb{H}}\|\bar{\mathbf{X}}^{h}\|_{\mathbb{H}}\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2}}(1+\|\bar{\mathbf{X}}^{h}\|_{\mathbb{H}})\|\bar{\mathbf{X}}^{h}\|_{\mathbb{H}}\nonumber\\&\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2}}(1+\|\bar{\mathbf{X}}^{h}\|_{\mathbb{H}}^2).
\end{align}
Using the Cauchy-Schwarz and H\"older inequalities, and Assumption \ref{ass3.6} (A1), we estimate $(\sigma_1(\bar{\mathbf{X}}^{h})\mathrm{Q}_1^{1/2}h,\bar{\mathbf{X}}^{h})$ as
\begin{align}\label{6.8}
|(\sigma_1(\bar{\mathbf{X}}^{h})\mathrm{Q}_1^{1/2}h,\bar{\mathbf{X}}^{h})|&\leq\|\sigma_1(\bar{\mathbf{X}}^{h})\mathrm{Q}_1^{1/2}h\|_{\mathbb{H}}\|\bar{\mathbf{X}}^{h}\|_{\mathbb{H}}\leq\|\sigma_1(\bar{\mathbf{X}}^{h})\mathrm{Q}_1^{1/2}\|_{\mathcal{L}_2}\|h\|_{\mathbb{H}}\|\bar{\mathbf{X}}^{h}\|_{\mathbb{H}}\nonumber\\&\leq\frac{1}{2}\|\bar{\mathbf{X}}^{h}\|_{\mathbb{H}}^2+\frac{1}{2}\|\sigma_1(\bar{\mathbf{X}}^{h})\mathrm{Q}_1^{1/2}\|_{\mathcal{L}_2}^2\|h\|_{\mathbb{H}}^2\nonumber\\&\leq\frac{1}{2}\|\bar{\mathbf{X}}^{h}\|_{\mathbb{H}}^2+C\|h\|_{\mathbb{H}}^2+C\|\bar{\mathbf{X}}^{h}\|^2_{\mathbb{H}}\|h\|_{\mathbb{H}}^2.
\end{align}
Substituting \eqref{410} and \eqref{6.8} in \eqref{6.7}, we obtain
\begin{align}\label{6.9}
&\|\bar{\mathbf{X}}^{h}_t\|_{\mathbb{H}}^2+2\mu\int_0^t\|\bar{\mathbf{X}}^{h}_s\|_{\mathbb{V}}^2\d s+2\alpha\int_0^t\|\bar{\mathbf{X}}^{h}_s\|_{\mathbb{H}}^2\d s+2\beta\int_0^t\|\bar{\mathbf{X}}^{h}_s\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d s\nonumber\\&\leq\|\mathbf{x}\|_{\mathbb{H}}^2+C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2}}t+C\int_0^t\|h_s\|_{\mathbb{H}}^2\d s+C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2}}\int_0^t\|\bar{\mathbf{X}}^{h}_s\|_{\mathbb{H}}^2\d s\nonumber\\&\quad+C\int_0^t\|\bar{\mathbf{X}}^{h}_s\|^2_{\mathbb{H}}\|h_s\|_{\mathbb{H}}^2\d s.
\end{align}
Applying Gronwall's inequality in \eqref{6.9}, we get
\begin{align}
\|\bar{\mathbf{X}}^{h}_t\|_{\mathbb{H}}^2\leq\left(\|\mathbf{x}\|_{\mathbb{H}}^2+C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2}}T+C\int_0^T\|h_t\|_{\mathbb{H}}^2\d t\right)\exp\left(C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2}}T+C\int_0^T\|h_t\|_{\mathbb{H}}^2\d t\right),
\end{align}
for all $t\in[0,T]$. Thus, taking $h\in \mathcal{S}_M$, we finally obtain \eqref{5.5y}.
\end{proof}
\begin{lemma}\label{lem3.13}
For any $\mathbf{x}\in\mathbb{H}$, $T>0$ and $\Delta>0$ small enough, there exists a constant $C_{\mu,\alpha,\beta,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},M,T}>0$ such that
\begin{align}\label{3.87}
\sup_{h\in\mathcal{S}_M}\int_0^T\|\bar\mathbf{X}^h_t-\bar\mathbf{X}^h_{t(\Delta)}\|_{\mathbb{H}}^2\d t \leq C_{\mu,\alpha,\beta,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},M,T}\Delta\left(1+\|\mathbf{x}\|_{\mathbb{H}}^{\ell}\right),
\end{align}
where $\ell=3$, for $n=2$, $r\in[1,3)$, and $\ell=2$, for $n=2,3$, $r\in[3,\infty)$ ($2\beta\mu> 1,$ for $n=r=3$). Here, $t(\Delta):=\left[\frac{t}{\Delta}\right]\Delta$ and $[s]$ stands for the largest integer which is less than or equal to $s$.
\end{lemma}
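Note that, by definition, $t-\Delta< t(\Delta)\leq t$ for every $t\geq\Delta$, so that $\bar{\mathbf{X}}^h_{t(\Delta)}$ samples the trajectory at a grid point at distance at most $\Delta$ from $t$; for instance, with $\Delta=0.1$ and $t=0.57$ one has $\left[\frac{t}{\Delta}\right]=5$ and $t(\Delta)=0.5$.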
\begin{proof}
Using \eqref{5.5y}, it can be easily seen that
\begin{align}\label{388}
\int_0^{T}\|\bar\mathbf{X}^h_t-\bar\mathbf{X}^h_{t(\Delta)}\|_{\mathbb{H}}^2\d t&\leq \int_0^{\Delta}\|\bar\mathbf{X}^h_t-\mathbf{x}\|_{\mathbb{H}}^2\d t+ \int_{\Delta}^T\|\bar\mathbf{X}^h_t-\bar\mathbf{X}^h_{t(\Delta)}\|_{\mathbb{H}}^2\d t\nonumber\\&\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},M,T}\left(1+\|\mathbf{x}\|_{\mathbb{H}}^2\right)\Delta+ 2\int_{\Delta}^T\|\bar\mathbf{X}^h_t-\bar\mathbf{X}^h_{t-\Delta}\|_{\mathbb{H}}^2\d t\nonumber\\&\quad+2 \int_{\Delta}^T\|\bar\mathbf{X}^h_{t(\Delta)}-\bar\mathbf{X}^h_{t-\Delta}\|_{\mathbb{H}}^2\d t.
\end{align}
Let us first estimate the second term on the right hand side of the inequality \eqref{388}. Note that $\bar\mathbf{X}^h_r-\bar\mathbf{X}^h_{t-\Delta}$, for $r\in[t-\Delta,t]$, satisfies the following energy equality:
\begin{align}\label{4p26}
\|\bar\mathbf{X}^h_t-\bar\mathbf{X}^h_{t-\Delta}\|_{\mathbb{H}}^2&=-2\mu\int_{t-\Delta}^t\langle\mathrm{A}\bar\mathbf{X}^h_s,\bar\mathbf{X}^h_s-\bar\mathbf{X}^h_{t-\Delta}\rangle\d s-2\alpha\int_{t-\Delta}^t(\bar\mathbf{X}^h_s,\bar\mathbf{X}^h_s-\bar\mathbf{X}^h_{t-\Delta})\d s\nonumber\\&\quad-2\int_{t-\Delta}^t\langle\mathrm{B}(\bar\mathbf{X}^h_s),\bar\mathbf{X}^h_s-\bar\mathbf{X}^h_{t-\Delta}\rangle\d s-2\beta\int_{t-\Delta}^t\langle\mathcal{C}(\bar\mathbf{X}^h_s),\bar\mathbf{X}^h_s-\bar\mathbf{X}^h_{t-\Delta}\rangle\d s\nonumber\\&\quad-2\int_{t-\Delta}^t(\bar\mathrm{F}(\bar\mathbf{X}^h_s),\bar\mathbf{X}^h_s-\bar\mathbf{X}^h_{t-\Delta})\d s-2\int_{t-\Delta}^t(\sigma_1(\bar\mathbf{X}^h_s)\mathrm{Q}_1^{1/2}h_s,\bar\mathbf{X}^h_s-\bar\mathbf{X}^h_{t-\Delta})\d s\nonumber\\&=:\sum_{i=1}^6I_i(t).
\end{align}
Making use of an integration by parts, H\"older's inequality, Fubini's Theorem and \eqref{5.5y}, we estimate $\int_{\Delta}^T|I_1(t)|\d t$ as
\begin{align}\label{4p27}
\int_{\Delta}^T|I_1(t)|\d t&\leq 2\mu\int_{\Delta}^T\int_{t-\Delta}^t\|\bar\mathbf{X}^h_s\|_{\mathbb{V}}\|\bar\mathbf{X}^h_s-\bar\mathbf{X}^h_{t-\Delta}\|_{\mathbb{V}}\d s\d t\nonumber\\&\leq 2\mu\left(\int_{\Delta}^T\int_{t-\Delta}^t\|\bar\mathbf{X}^h_s\|_{\mathbb{V}}^2\d s\d t\right)^{1/2} \left(\int_{\Delta}^T\int_{t-\Delta}^t\|\bar\mathbf{X}^h_s-\bar\mathbf{X}^h_{t-\Delta}\|_{\mathbb{V}}^2\d s\d t\right)^{1/2}\nonumber\\&\leq 2\mu\left(\Delta\int_{0}^T\|\bar\mathbf{X}^h_t\|_{\mathbb{V}}^2\d t\right)^{1/2}\left(2\Delta\int_{0}^T\|\bar\mathbf{X}^h_t\|_{\mathbb{V}}^2\d t\right)^{1/2}\nonumber\\&\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},M,T}\Delta\left(1+\|\mathbf{x}\|_{\mathbb{H}}^2\right).
\end{align}
Similarly, we estimate the term $\int_{\Delta}^T|I_2(t)|\d t$ as
\begin{align}
\int_{\Delta}^T|I_2(t)|\d t&\leq C\alpha T\Delta \sup_{t\in[0,T]}\|\bar\mathbf{X}^h_t\|_{\mathbb{H}}^2\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},M,T}\Delta\left(1+\|\mathbf{x}\|_{\mathbb{H}}^2\right).
\end{align}
For $n=2$ and $r\in[1,3)$, using H\"older's and Ladyzhenskaya's inequalities, Fubini's Theorem and \eqref{5.5y}, we estimate $\int_{\Delta}^T|I_3(t)|\d t$ as
\begin{align}\label{4p29}
\int_{\Delta}^T|I_3(t)|\d t&\leq 2\int_{\Delta}^T\int_{t-\Delta}^t\|\bar\mathbf{X}^h_s\|_{\widetilde\mathbb{L}^4}^2\|\bar\mathbf{X}^h_s-\bar\mathbf{X}^h_{t-\Delta}\|_{\mathbb{V}}\d s\d t\nonumber\\&\leq 2\sqrt{2}\left(\int_{\Delta}^T\int_{t-\Delta}^t\|\bar\mathbf{X}^h_s\|_{\mathbb{H}}^2\|\bar\mathbf{X}^h_s\|_{\mathbb{V}}^2\d s\d t\right)^{1/2}\left(\int_{\Delta}^T\int_{t-\Delta}^t\|\bar\mathbf{X}^h_s-\bar\mathbf{X}^h_{t-\Delta}\|_{\mathbb{V}}^2\d s\d t\right)^{1/2}\nonumber\\&\leq 2\sqrt{2}\left(\Delta\sup_{t\in[0,T]}\|\bar\mathbf{X}^h_t\|_{\mathbb{H}}^2\int_0^T\|\bar\mathbf{X}^h_t\|_{\mathbb{V}}^2\d t\right)^{1/2} \left(2\Delta\int_{0}^T\|\bar\mathbf{X}^h_t\|_{\mathbb{V}}^2\d t\right)^{1/2}\nonumber\\&\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},M,T}\Delta\left(1+\|\mathbf{x}\|_{\mathbb{H}}^3\right).
\end{align}
For $n=2,3$ and $r\geq 3$ (take $2\beta\mu>1$, for $r=3$), we estimate $\int_{\Delta}^T|I_3(t)|\d t$ using H\"older's inequality, interpolation inequality and \eqref{5.5y} as
\begin{align}\label{4p30}
\int_{\Delta}^T|I_3(t)|\d t&\leq 2\int_{\Delta}^T\int_{t-\Delta}^t\|\bar\mathbf{X}^h_s\|_{\widetilde\mathbb{L}^{r+1}}\|\bar\mathbf{X}^h_s\|_{\widetilde\mathbb{L}^{\frac{2(r+1)}{r-1}}}\|\bar\mathbf{X}^h_s-\bar\mathbf{X}^h_{t-\Delta}\|_{\mathbb{V}}\d s\d t\nonumber\\&\leq 2\int_{\Delta}^T\int_{t-\Delta}^t\|\bar\mathbf{X}^h_s\|_{\mathbb{H}}^{\frac{r-3}{r-1}}\|\bar\mathbf{X}^h_s\|_{\widetilde\mathbb{L}^{r+1}}^{\frac{r+1}{r-1}}\|\bar\mathbf{X}^h_s-\bar\mathbf{X}^h_{t-\Delta}\|_{\mathbb{V}}\d s\d t \nonumber\\&\leq 2\left(\int_{\Delta}^T\int_{t-\Delta}^t\|\bar\mathbf{X}^h_s\|_{\mathbb{H}}^{\frac{2(r-3)}{r-1}}\|\bar\mathbf{X}^h_s\|_{\widetilde\mathbb{L}^{r+1}}^{\frac{2(r+1)}{r-1}}\d s\d t\right)^{1/2}\left(\int_{\Delta}^T\int_{t-\Delta}^t\|\bar\mathbf{X}^h_s-\bar\mathbf{X}^h_{t-\Delta}\|_{\mathbb{V}}^2\d s\d t\right)^{1/2}\nonumber\\&\leq 2\left(\Delta\int_{0}^T\|\bar\mathbf{X}^h_t\|_{\mathbb{H}}^{\frac{2(r-3)}{r-1}}\|\bar\mathbf{X}^h_t\|_{\widetilde\mathbb{L}^{r+1}}^{\frac{2(r+1)}{r-1}}\d t\right)^{1/2}\left(2\Delta\int_{0}^T\|\bar\mathbf{X}^h_t\|_{\mathbb{V}}^2\d t\right)^{1/2}\nonumber\\&\leq 2\Delta T^{\frac{r-3}{2(r-1)}}\left(\sup_{t\in[0,T]}\|\bar\mathbf{X}^h_t\|_{\mathbb{H}}^{r-3}\int_{0}^T\|\bar\mathbf{X}^h_t\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d t\right)^{\frac{1}{r-1}}\left(2\int_{0}^T\|\bar\mathbf{X}^h_t\|_{\mathbb{V}}^2\d t\right)^{1/2}\nonumber\\&\leq C_{\mu,\alpha,\beta,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},M,T}\Delta\left(1+\|\mathbf{x}\|_{\mathbb{H}}^2\right).
\end{align}
Once again using H\"older's inequality, Fubini's Theorem and \eqref{5.5y}, we estimate the term $\int_{\Delta}^T|I_4(t)|\d t$ as
\begin{align}\label{4p31}
\int_{\Delta}^T|I_4(t)|\d t&\leq 2\beta\int_{\Delta}^T\int_{t-\Delta}^t\|\bar\mathbf{X}^h_s\|_{\widetilde\mathbb{L}^{r+1}}^r\|\bar\mathbf{X}^h_s-\bar\mathbf{X}^h_{t-\Delta}\|_{\widetilde\mathbb{L}^{r+1}}\d s\d t\nonumber\\&\leq 2\beta\left(\Delta\int_{0}^T\|\bar\mathbf{X}^h_t\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d t\right)^{\frac{r}{r+1}}\left(2^{r}\Delta\int_0^T\|\bar\mathbf{X}^h_t\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d t\right)^{\frac{1}{r+1}}\nonumber\\&\leq C_{\mu,\alpha,\beta,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},M,T}\Delta\left(1+\|\mathbf{x}\|_{\mathbb{H}}^2\right).
\end{align}
We estimate $\int_{\Delta}^T|I_5(t)|\d t$ using the Assumption \ref{ass3.6} (A1) and \eqref{5.5y} as
\begin{align}\label{4p32}
\int_{\Delta}^T|I_5(t)|\d t&\leq 2\int_{\Delta}^T\int_{t-\Delta}^t\|\bar\mathrm{F}(\bar\mathbf{X}^h_s)\|_{\mathbb{H}}\|\bar\mathbf{X}^h_s-\bar\mathbf{X}^h_{t-\Delta}\|_{\mathbb{H}}\d s\d t\nonumber\\&\leq 2\left(\int_{\Delta}^T\int_{t-\Delta}^t\|\bar\mathrm{F}(\bar\mathbf{X}^h_s)\|_{\mathbb{H}}^2\d s\d t\right)^{1/2}\left(\int_{\Delta}^T\int_{t-\Delta}^t\|\bar\mathbf{X}^h_s-\bar\mathbf{X}^h_{t-\Delta}\|_{\mathbb{H}}^2\d s\d t\right)^{1/2}\nonumber\\&\leq C\left(\Delta\int_{0}^T(1+\|\bar\mathbf{X}^h_t\|_{\mathbb{H}}^2)\d t\right)^{1/2}\left(2\Delta\int_{0}^T\|\bar\mathbf{X}^h_t\|_{\mathbb{H}}^2\d t\right)^{1/2}\nonumber\\&\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},M,T}\Delta(1+\|\mathbf{x}\|_{\mathbb{H}}^2).
\end{align}
Similarly, we estimate the term $\int_{\Delta}^T|I_6(t)|\d t$ as
\begin{align}\label{433}
\int_{\Delta}^T|I_6(t)|\d t&\leq 2\int_{\Delta}^T\int_{t-\Delta}^t\|\sigma_1(\bar\mathbf{X}^h_s)\mathrm{Q}_1^{1/2}h_s\|_{\mathbb{H}}\|\bar\mathbf{X}^h_s-\bar\mathbf{X}^h_{t-\Delta}\|_{\mathbb{H}}\d s\d t\nonumber\\&\leq 2\left(\int_{\Delta}^T\int_{t-\Delta}^t\|\sigma_1(\bar\mathbf{X}^h_s)\mathrm{Q}_1^{1/2}\|_{\mathcal{L}_2}^2\|h_s\|_{\mathbb{H}}^2\d s\d t\right)^{1/2}\left(\int_{\Delta}^T\int_{t-\Delta}^t\|\bar\mathbf{X}^h_s-\bar\mathbf{X}^h_{t-\Delta}\|_{\mathbb{H}}^2\d s\d t\right)^{1/2}\nonumber\\&\leq C\left(\Delta\sup_{t\in[0,T]}(1+\|\bar\mathbf{X}^h_t\|_{\mathbb{H}}^2)\int_0^T\|h_t\|_{\mathbb{H}}^2\d t\right)^{1/2} \left(2\Delta\int_{0}^T\|\bar\mathbf{X}^h_t\|_{\mathbb{H}}^2\d t\right)^{1/2}\nonumber\\&\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},M,T}\Delta(1+\|\mathbf{x}\|_{\mathbb{H}}^2),
\end{align}
since $h\in\mathcal{S}_M$. The final term on the right hand side of \eqref{388} can be estimated in exactly the same way, since $t-\Delta<t(\Delta)\leq t$. Combining \eqref{388} and \eqref{4p26}-\eqref{433}, we obtain the required result \eqref{3.87}.
\end{proof}
We are now in a position to verify the Hypothesis \ref{hyp1} (ii).
\begin{theorem}[Compactness]\label{compact}
Let $M <+\infty$ be a fixed positive number. Let $$\mathcal{K}_M :=\big\{ \bar{\mathbf{X}}^h \in\mathrm{C}([0,T];\mathbb{H})\cap \mathrm{L}^2(0,T;\mathbb{V} )\cap\mathrm{L}^{r+1}(0,T;\widetilde\mathbb{L}^{r+1}):h\in \mathcal{S}_M\big\},$$
where $\bar{\mathbf{X}}_t^h$ is the unique Leray-Hopf weak solution of the deterministic controlled equation (\ref{5.4y}) with $\bar{\mathbf{X}}_0^h= \mathbf{x} \in\mathbb{H}$, and $\mathscr{E} = \mathrm{C}([0,T];\mathbb{H})\cap\mathrm{L}^2(0,T;\mathbb{V})\cap\mathrm{L}^{r+1}(0,T;\widetilde\mathbb{L}^{r+1})$. Then $\mathcal{K}_M$ is compact in $\mathscr{E}$.
\end{theorem}
\begin{proof}
Let us consider a sequence $\{\bar{\mathbf{X}}^{h_n}\}$ in $\mathcal{K}_M$, where $\bar{\mathbf{X}}^{h_n}$ corresponds to the solution of (\ref{5.4y}) with control $h_n \in \mathcal{S}_M$ in place of $h$, that is,
\begin{equation}\label{5.20z}
\left\{
\begin{aligned}
\d\bar{\mathbf{X}}_t^{h_n}&=-[\mu\mathrm{A}\bar{\mathbf{X}}_t^{h_n}+\mathrm{B}(\bar{\mathbf{X}}_t^{h_n})+\alpha\bar{\mathbf{X}}_t^{h_n}+\beta\mathcal{C}(\bar{\mathbf{X}}_t^{h_n})]\d t+\bar{\mathrm{F}}(\bar{\mathbf{X}}_t^{h_n})\d t+\sigma_1(\bar{\mathbf{X}}_t^{h_n})\mathrm{Q}_1^{1/2}h_t^n\d t, \\ \bar{\mathbf{X}}_0^{h_n}&=\mathbf{x}\in\mathbb{H}.
\end{aligned}
\right.
\end{equation}
Then, by using the weak compactness of $\mathcal{S}_M$, there exists a subsequence of $\{h_n\}$, (still denoted by $\{h_n\}$), which converges weakly to $ h\in \mathcal{S}_M$ in $\mathrm{L}^2(0, T ; \mathbb{H})$. Using the estimates (\ref{5.5y}), we obtain
\begin{equation}\label{5.29}
\left\{
\begin{aligned}
\bar{\mathbf{X}}_t^{h_n}&\xrightarrow{w^*}\bar{\mathbf{X}}_t^{h}\ \text{ in }\ \mathrm{L}^{\infty}(0,T;\mathbb{H}),\\
\bar{\mathbf{X}}_t^{h_n}&\xrightarrow{w}\bar{\mathbf{X}}_t^{h}\ \text{ in }\ \mathrm{L}^2(0,T;\mathbb{V}),\\
\bar{\mathbf{X}}_t^{h_n}&\xrightarrow{w}\bar{\mathbf{X}}_t^{h}\ \text{ in }\ \mathrm{L}^{r+1}(0,T;\widetilde\mathbb{L}^{r+1}).
\end{aligned}
\right.
\end{equation}
Using the monotonicity property of linear and nonlinear operators, and Minty-Browder technique (see \cite{MTM7}), one can establish that $\bar{\mathbf{X}}_t^{h}$ is the unique weak solution of the system (\ref{5.4y}). In order to prove $\mathcal{K}_M$ is compact, we need to show that $\bar{\mathbf{X}}_t^{h_n}\to\bar{\mathbf{X}}_t^{h}$ in $\mathscr{E}$ as $n\to\infty$. In other words, it is required to show that
\begin{align}\label{5.30z}
\sup_{t\in[0,T]}\|\bar{\mathbf{X}}_t^{h_n}-\bar{\mathbf{X}}_t^{h}\|_{\mathbb{H}}^2+\int_0^T\|\bar{\mathbf{X}}_t^{h_n}-\bar{\mathbf{X}}_t^{h}\|_{\mathbb{V}}^2\d t+\int_0^T\|\bar{\mathbf{X}}_t^{h_n}-\bar{\mathbf{X}}_t^{h}\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d t\to 0,
\end{align}
as $n\to\infty$. Recall that for the system (\ref{5.4y}), the energy estimate given in (\ref{5.5y}) holds true. Let us now define $\mathbf{W}_t^{h_n,h}:=\bar{\mathbf{X}}_t^{h_n}-\bar{\mathbf{X}}_t^{h}$, so that $\mathbf{W}_t^{h_n,h}$ satisfies:
\begin{equation}\label{5.22z}
\left\{
\begin{aligned}
\d\mathbf{W}_t^{h_n,h}&=-\left[\mu\mathrm{A}\mathbf{W}_t^{h_n,h}+(\mathrm{B}(\bar{\mathbf{X}}_t^{h_n})-\mathrm{B}(\bar{\mathbf{X}}_t^{h}))+\alpha\mathbf{W}_t^{h_n,h}+\beta(\mathcal{C}(\bar{\mathbf{X}}_t^{h_n})-\mathcal{C}(\bar{\mathbf{X}}_t^{h}))\right]\d t\\&\quad +\left[\bar{\mathrm{F}}(\bar{\mathbf{X}}_t^{h_n})-\bar{\mathrm{F}}(\bar{\mathbf{X}}_t^{h})\right]\d t +\left[\sigma_1(\bar{\mathbf{X}}_t^{h_n})\mathrm{Q}_1^{1/2}h_t^n-\sigma_1(\bar{\mathbf{X}}_t^{h})\mathrm{Q}_1^{1/2}h_t\right]\d t,\\
\mathbf{W}_0^{h_n,h}&=\mathbf{0}.
\end{aligned}
\right.\end{equation}
Taking the inner product of the first equation in (\ref{5.22z}) with $\mathbf{W}_t^{h_n,h}$, we get
\begin{align}\label{5.23z}
&\|\mathbf{W}_t^{h_n,h}\|_{\mathbb{H}}^2+2\mu\int_0^t\|\mathbf{W}_s^{h_n,h}\|_{\mathbb{V}}^2\d s+2\alpha\int_0^t\|\mathbf{W}_s^{h_n,h}\|_{\mathbb{H}}^2\d s\nonumber\\&\quad+2\beta\int_0^t\langle\mathcal{C}(\bar{\mathbf{X}}_s^{h_n})-\mathcal{C}(\bar{\mathbf{X}}_s^{h}),\mathbf{W}_s^{h_n,h}\rangle\d s\nonumber\\&=-2\int_0^t\langle\mathrm{B}(\bar{\mathbf{X}}_s^{h_n})-\mathrm{B}(\bar{\mathbf{X}}_s^{h}),\mathbf{W}_s^{h_n,h}\rangle\d s+2\int_0^t(\bar{\mathrm{F}}(\bar{\mathbf{X}}_s^{h_n})-\bar{\mathrm{F}}(\bar{\mathbf{X}}_s^{h}),\mathbf{W}_s^{h_n,h})\d s\nonumber\\&\quad+2\int_0^t(\sigma_1(\bar{\mathbf{X}}_s^{h_n})\mathrm{Q}_1^{1/2}h_s^n-\sigma_1(\bar{\mathbf{X}}_s^{h})\mathrm{Q}_1^{1/2}h_s,\mathbf{W}_s^{h_n,h})\d s.
\end{align}
Note that $\langle\mathrm{B}(\bar{\mathbf{X}}^{h_n},\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}),\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}\rangle=0$, which implies that
\begin{align}\label{6.28}
& \langle \mathrm{B}(\bar{\mathbf{X}}^{h_n})-\mathrm{B}(\bar{\mathbf{X}}^{h}),\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}\rangle \nonumber\\&=\langle\mathrm{B}(\bar{\mathbf{X}}^{h_n},\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}),\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}\rangle +\langle \mathrm{B}(\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h},\bar{\mathbf{X}}^{h}),\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}\rangle \nonumber\\&=\langle \mathrm{B}(\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h},\bar{\mathbf{X}}^{h}),\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}\rangle=-\langle \mathrm{B}(\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h},\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}),\bar{\mathbf{X}}^{h}\rangle.
\end{align}
For $n=2$ and $r\in[1,3]$, making use of H\"older's, Ladyzhenskaya's and Young's inequalities, we estimate $2|\langle\mathrm{B}(\bar{\mathbf{X}}^{h_n})-\mathrm{B}(\bar{\mathbf{X}}^{h}),\mathbf{W}^{h_n,h}\rangle|$ as
\begin{align}\label{419}
2|\langle\mathrm{B}(\bar{\mathbf{X}}^{h_n})-\mathrm{B}(\bar{\mathbf{X}}^{h}),\mathbf{W}^{h_n,h}\rangle|&= 2|\langle\mathrm{B}(\mathbf{W}^{h_n,h},\bar{\mathbf{X}}^{h}),\mathbf{W}^{h_n,h}\rangle |\leq 2\|\bar{\mathbf{X}}^{h}\|_{\mathbb{V}}\|\mathbf{W}^{h_n,h}\|_{\widetilde\mathbb{L}^4}^2\nonumber\\&\leq 2\sqrt{2}\|\bar{\mathbf{X}}^{h}\|_{\mathbb{V}}\|\mathbf{W}^{h_n,h}\|_{\mathbb{H}}\|\mathbf{W}^{h_n,h}\|_{\mathbb{V}}\nonumber\\&\leq\mu\|\mathbf{W}^{h_n,h}\|_{\mathbb{V}}^2+\frac{2}{\mu}\|\bar{\mathbf{X}}^{h}\|_{\mathbb{V}}^2\|\mathbf{W}^{h_n,h}\|_{\mathbb{H}}^2.
\end{align}
Using \eqref{2.23} and \eqref{a215}, we know that
\begin{align}
-2\beta\langle\mathcal{C}(\bar{\mathbf{X}}^{h_n})-\mathcal{C}(\bar{\mathbf{X}}^{h}),\mathbf{W}^{h_n,h}\rangle\leq-\frac{\beta}{2^{r-2}}\|\mathbf{W}^{h_n,h}\|_{\widetilde\mathbb{L}^{r+1}}^{r+1},
\end{align}
for $r\in[1,\infty)$. Using \eqref{3p93}, H\"older's and Young's inequalities, we estimate $2(\bar{\mathrm{F}}(\bar{\mathbf{X}}^{h_n})-\bar{\mathrm{F}}(\bar{\mathbf{X}}^{h}),\mathbf{W}^{h_n,h})$ as
\begin{align}\label{4.21}
2(\bar{\mathrm{F}}(\bar{\mathbf{X}}^{h_n})-\bar{\mathrm{F}}(\bar{\mathbf{X}}^{h}),\mathbf{W}^{h_n,h})&\leq 2\|\bar{\mathrm{F}}(\bar{\mathbf{X}}^{h_n})-\bar{\mathrm{F}}(\bar{\mathbf{X}}^{h})\|_{\mathbb{H}}\|\mathbf{W}^{h_n,h}\|_{\mathbb{H}}\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2}}\|\mathbf{W}^{h_n,h}\|_{\mathbb{H}}^2.
\end{align}
Combining \eqref{419}-\eqref{4.21} and substituting it in \eqref{5.23z}, we obtain
\begin{align}\label{4.22}
&\|\mathbf{W}_t^{h_n,h}\|_{\mathbb{H}}^2+\mu\int_0^t\|\mathbf{W}_s^{h_n,h}\|_{\mathbb{V}}^2\d s+2\alpha\int_0^t\|\mathbf{W}_s^{h_n,h}\|_{\mathbb{H}}^2\d s+\frac{\beta}{2^{r-2}}\int_0^t\|\mathbf{W}^{h_n,h}_s\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d s\nonumber\\&\leq \frac{2}{\mu}\int_0^t\|\bar{\mathbf{X}}^{h}_s\|_{\mathbb{V}}^2\|\mathbf{W}^{h_n,h}_s\|_{\mathbb{H}}^2\d s+C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2}}\int_0^t\|\mathbf{W}^{h_n,h}_s\|_{\mathbb{H}}^2\d s+I_1+I_2,
\end{align}
where
\begin{align*}
I_1&=2\int_0^t((\sigma_1(\bar{\mathbf{X}}_s^{h_n})-\sigma_1(\bar{\mathbf{X}}_s^{h}))\mathrm{Q}_1^{1/2}h_s^n,\mathbf{W}_s^{h_n,h})\d s,\\
I_2&=2\int_0^t(\sigma_1(\bar{\mathbf{X}}_s^{h})\mathrm{Q}_1^{1/2}(h_s^n-h_s),\mathbf{W}_s^{h_n,h})\d s.
\end{align*}
Using the Cauchy-Schwarz inequality, H\"older's and Young's inequalities, and the Assumption \ref{ass3.6} (A1), we estimate $I_1$ as
\begin{align}\label{634}
I_1 &\leq 2\int_0^t|((\sigma_1(\bar{\mathbf{X}}_s^{h_n})-\sigma_1(\bar{\mathbf{X}}_s^{h}))\mathrm{Q}_1^{1/2}h_s^n,\mathbf{W}_s^{h_n,h})|\d s\nonumber\\&\leq 2\int_0^t\|(\sigma_1(\bar{\mathbf{X}}_s^{h_n})-\sigma_1(\bar{\mathbf{X}}_s^{h}))\mathrm{Q}_1^{1/2}\|_{\mathcal{L}_2}\|h_s^n\|_{\mathbb{H}}\|\mathbf{W}_s^{h_n,h}\|_{\mathbb{H}}\d s\leq C\int_0^t\|h_s^n\|_{\mathbb{H}}\|\mathbf{W}_s^{h_n,h}\|_{\mathbb{H}}^2\d s\nonumber\\&\leq C\int_0^t\|\mathbf{W}_s^{h_n,h}\|_{\mathbb{H}}^2\d s+C\int_0^t\|h_s^n\|_{\mathbb{H}}^2\|\mathbf{W}_s^{h_n,h}\|_{\mathbb{H}}^2\d s.
\end{align}
Making use of the Cauchy-Schwarz inequality and Young's inequality, we estimate $I_2$ as
\begin{align}\label{635}
I_2&\leq 2\int_0^t\|\sigma_1(\bar{\mathbf{X}}_s^{h})\mathrm{Q}_1^{1/2}(h_s^n-h_s)\|_{\mathbb{H}}\|\mathbf{W}_s^{h_n,h}\|_{\mathbb{H}}\d s\nonumber\\&\leq \int_0^t\|\mathbf{W}_s^{h_n,h}\|_{\mathbb{H}}^2\d s+\int_0^t\|\sigma_1(\bar{\mathbf{X}}_s^{h})\mathrm{Q}_1^{1/2}(h_s^n-h_s)\|_{\mathbb{H}}^2\d s.
\end{align}
Thus, it is immediate from \eqref{4.22} that
\begin{align}\label{425}
&\|\mathbf{W}_t^{h_n,h}\|_{\mathbb{H}}^2+\mu\int_0^t\|\mathbf{W}_s^{h_n,h}\|_{\mathbb{V}}^2\d s+2\alpha\int_0^t\|\mathbf{W}_s^{h_n,h}\|_{\mathbb{H}}^2\d s+\frac{\beta}{2^{r-2}}\int_0^t\|\mathbf{W}^{h_n,h}_s\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d s\nonumber\\&\leq \frac{2}{\mu}\int_0^t\|\bar{\mathbf{X}}^{h}_s\|_{\mathbb{V}}^2\|\mathbf{W}^{h_n,h}_s\|_{\mathbb{H}}^2\d s+C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2}}\int_0^t\|\mathbf{W}^{h_n,h}_s\|_{\mathbb{H}}^2\d s+C\int_0^t\|h_s^n\|_{\mathbb{H}}^2\|\mathbf{W}_s^{h_n,h}\|_{\mathbb{H}}^2\d s\nonumber\\&\quad+\int_0^t\|\sigma_1(\bar{\mathbf{X}}_s^{h})\mathrm{Q}_1^{1/2}(h_s^n-h_s)\|_{\mathbb{H}}^2\d s,
\end{align}
for all $t\in[0,T]$. An application of Gronwall's inequality in \eqref{425} gives
\begin{align}\label{426}
&\sup_{t\in[0,T]}\|\mathbf{W}_t^{h_n,h}\|_{\mathbb{H}}^2+\mu\int_0^T\|\mathbf{W}_t^{h_n,h}\|_{\mathbb{V}}^2\d t+2\alpha\int_0^T\|\mathbf{W}_t^{h_n,h}\|_{\mathbb{H}}^2\d t+\frac{\beta}{2^{r-2}}\int_0^T\|\mathbf{W}^{h_n,h}_t\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d t\nonumber\\&\leq \left(\int_0^T\|\sigma_1(\bar{\mathbf{X}}_t^{h})\mathrm{Q}_1^{1/2}(h_t^n-h_t)\|_{\mathbb{H}}^2\d t\right)\exp\left\{\frac{2}{\mu}\int_0^T\|\bar{\mathbf{X}}^{h}_t\|_{\mathbb{V}}^2\d t+\int_0^T\|h_t^n\|_{\mathbb{H}}^2\d t\right\}e^{C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2}}T}\nonumber\\&\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},M,T,\|\mathbf{x}\|_{\mathbb{H}}}\left(\int_0^T\|\sigma_1(\bar{\mathbf{X}}_t^{h})\mathrm{Q}_1^{1/2}(h_t^n-h_t)\|_{\mathbb{H}}^2\d t\right),
\end{align}
since $h_n\in\mathcal{S}_M$ and the process $\bar{\mathbf{X}}_t^h$ satisfies the energy estimate \eqref{5.5y}. It should be noted that the operator $\sigma_1(\cdot)\mathrm{Q}_1^{1/2}$ is Hilbert-Schmidt in $\mathbb{H}$, and hence it is a compact operator on $\mathbb{H}$. Furthermore, we know that a compact operator maps weakly convergent sequences into strongly convergent sequences. Since $\{h_n\}$ converges weakly to $h\in \mathcal{S}_M$ in $\mathrm{L}^2(0, T ; \mathbb{H})$, we infer that $$\int_0^T\|\sigma_1(\bar{\mathbf{X}}_t^{h})\mathrm{Q}_1^{1/2}(h_t^n-h_t)\|_{\mathbb{H}}^2\d t\to 0 \ \text{ as } \ n\to\infty.$$ Thus, from \eqref{426}, we obtain
\begin{align}
&\sup_{t\in[0,T]}\|\mathbf{W}_t^{h_n,h}\|_{\mathbb{H}}^2+\mu\int_0^T\|\mathbf{W}_t^{h_n,h}\|_{\mathbb{V}}^2\d t+\frac{\beta}{2^{r-2}}\int_0^T\|\mathbf{W}^{h_n,h}_t\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d t\to 0\ \text{ as }\ n\ \to \infty,
\end{align}
which concludes the proof for $n=2$ and $r\in[1,3]$.
For $n=2,3$ and $r\in(3,\infty)$, from \eqref{2.23}, we easily have
\begin{align}\label{6.27}
\beta \langle\mathcal{C}(\bar{\mathbf{X}}^{h_n})-\mathcal{C}(\bar{\mathbf{X}}^{h}),\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}\rangle \geq \frac{\beta}{2}\||\bar{\mathbf{X}}^{h_n}|^{\frac{r-1}{2}}(\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h})\|_{\mathbb{H}}^2+\frac{\beta}{2}\||\bar{\mathbf{X}}^{h}|^{\frac{r-1}{2}}(\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h})\|_{\mathbb{H}}^2.
\end{align}
Using \eqref{6.28}, H\"older's and Young's inequalities, we estimate $|\langle\mathrm{B}(\bar{\mathbf{X}}^{h_n})-\mathrm{B}(\bar{\mathbf{X}}^{h}),\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}\rangle|$ as
\begin{align}\label{6.29}
|\langle\mathrm{B}(\bar{\mathbf{X}}^{h_n})-\mathrm{B}(\bar{\mathbf{X}}^{h}),\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}\rangle|&= |\langle\mathrm{B}(\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h},\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}),\bar{\mathbf{X}}^{h}\rangle|\nonumber\\&\leq\|\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}\|_{\mathbb{V}}\|\bar{\mathbf{X}}^{h}(\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h})\|_{\mathbb{H}}\nonumber\\&\leq\frac{\mu }{2}\|\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}\|_{\mathbb{V}}^2+\frac{1}{2\mu }\|\bar{\mathbf{X}}^{h}(\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h})\|_{\mathbb{H}}^2.
\end{align}
We take the term $\|\bar{\mathbf{X}}^{h}(\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h})\|_{\mathbb{H}}^2$ from \eqref{6.29} and use H\"older's and Young's inequalities to estimate it as (see \cite{KWH} also)
\begin{align}\label{6.30}
&\int_{\mathcal{O}}|\bar{\mathbf{X}}^{h}(x)|^2|\bar{\mathbf{X}}^{h_n}(x)-\bar{\mathbf{X}}^{h}(x)|^2\d x\nonumber\\&=\int_{\mathcal{O}}|\bar{\mathbf{X}}^{h}(x)|^2|\bar{\mathbf{X}}^{h_n}(x)-\bar{\mathbf{X}}^{h}(x)|^{\frac{4}{r-1}}|\bar{\mathbf{X}}^{h_n}(x)-\bar{\mathbf{X}}^{h}(x)|^{\frac{2(r-3)}{r-1}}\d x\nonumber\\&\leq\left(\int_{\mathcal{O}}|\bar{\mathbf{X}}^{h}(x)|^{r-1}|\bar{\mathbf{X}}^{h_n}(x)-\bar{\mathbf{X}}^{h}(x)|^2\d x\right)^{\frac{2}{r-1}}\left(\int_{\mathcal{O}}|\bar{\mathbf{X}}^{h_n}(x)-\bar{\mathbf{X}}^{h}(x)|^2\d x\right)^{\frac{r-3}{r-1}}\nonumber\\&\leq\frac{\beta\mu }{2}\left(\int_{\mathcal{O}}|\bar{\mathbf{X}}^{h}(x)|^{r-1}|\bar{\mathbf{X}}^{h_n}(x)-\bar{\mathbf{X}}^{h}(x)|^2\d x\right)\nonumber\\&\quad+\frac{r-3}{r-1}\left(\frac{4}{\beta\mu (r-1)}\right)^{\frac{2}{r-3}}\left(\int_{\mathcal{O}}|\bar{\mathbf{X}}^{h_n}(x)-\bar{\mathbf{X}}^{h}(x)|^2\d x\right),
\end{align}
for $r>3$. Combining \eqref{6.27} and \eqref{6.30}, we find
\begin{align}\label{630}
&\beta\langle\mathcal{C}(\bar{\mathbf{X}}^{h_n})-\mathcal{C}(\bar{\mathbf{X}}^{h}),\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}\rangle+\langle\mathrm{B}(\bar{\mathbf{X}}^{h_n})-\mathrm{B}(\bar{\mathbf{X}}^{h}),\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}\rangle\nonumber\\&\geq\frac{\beta}{2}\||\bar{\mathbf{X}}^{h_n}|^{\frac{r-1}{2}}(\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h})\|_{\mathbb{H}}^2+\frac{\beta}{4}\||\bar{\mathbf{X}}^{h}|^{\frac{r-1}{2}}(\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h})\|_{\mathbb{H}}^2\nonumber\\&\quad-\frac{r-3}{2\mu(r-1)}\left(\frac{4}{\beta\mu (r-1)}\right)^{\frac{2}{r-3}}\left(\int_{\mathcal{O}}|\bar{\mathbf{X}}^{h_n}(x)-\bar{\mathbf{X}}^{h}(x)|^2\d x\right)-\frac{\mu}{2} \|\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}\|_{\mathbb{V}}^2.
\end{align}
Using \eqref{a215}, we obtain
\begin{align}
\frac{2^{2-r}\beta}{4}\|\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\leq\frac{\beta}{4}\||\bar{\mathbf{X}}^{h_n}|^{\frac{r-1}{2}}(\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h})\|_{\mathbb{L}^2}^2+\frac{\beta}{4}\||\bar{\mathbf{X}}^{h}|^{\frac{r-1}{2}}(\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h})\|_{\mathbb{L}^2}^2.
\end{align}
Thus, from \eqref{630}, it is immediate that
\begin{align}\label{622}
&\beta\langle\mathcal{C}(\bar{\mathbf{X}}^{h_n})-\mathcal{C}(\bar{\mathbf{X}}^{h}),\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}\rangle+\langle\mathrm{B}(\bar{\mathbf{X}}^{h_n})-\mathrm{B}(\bar{\mathbf{X}}^{h}),\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}\rangle\nonumber\\&\geq \frac{\beta}{2^r}\|\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}-\frac{\widetilde\eta}{2}\|\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}\|_{\mathbb{H}}^2-\frac{\mu}{2} \|\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}\|_{\mathbb{V}}^2,
\end{align}
where \begin{align}\label{623}\widetilde\eta=\frac{r-3}{\mu(r-1)}\left(\frac{4}{\beta\mu (r-1)}\right)^{\frac{2}{r-3}}.\end{align}
Using \eqref{4.21}, \eqref{622}, \eqref{634} and \eqref{635} in \eqref{5.23z}, we get
\begin{align}
&\|\mathbf{W}_t^{h_n,h}\|_{\mathbb{H}}^2+\mu\int_0^t\|\mathbf{W}_s^{h_n,h}\|_{\mathbb{V}}^2\d s+2\alpha\int_0^t\|\mathbf{W}_s^{h_n,h}\|_{\mathbb{H}}^2\d s+\frac{\beta}{2^{r-1}}\int_0^t\|\mathbf{W}^{h_n,h}_s\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d s\nonumber\\&\leq \widetilde\eta\int_0^t\|\mathbf{W}^{h_n,h}_s\|_{\mathbb{H}}^2\d s+C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2}}\int_0^t\|\mathbf{W}^{h_n,h}_s\|_{\mathbb{H}}^2\d s+C\int_0^t\|h_s^n\|_{\mathbb{H}}^2\|\mathbf{W}_s^{h_n,h}\|_{\mathbb{H}}^2\d s\nonumber\\&\quad+\int_0^t\|\sigma_1(\bar{\mathbf{X}}_s^{h})\mathrm{Q}_1^{1/2}(h_s^n-h_s)\|_{\mathbb{H}}^2\d s,
\end{align}
for all $t\in[0,T]$. Then, proceeding similarly as in the previous case, we arrive at the required result \eqref{5.30z}.
For the case $n=r=3$, from \eqref{2.23}, we find
\begin{align}\label{6.33}
\beta\langle\mathcal{C}(\bar{\mathbf{X}}^{h_n})-\mathcal{C}(\bar{\mathbf{X}}^{h}),\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}\rangle\geq\frac{\beta}{2}\|\bar{\mathbf{X}}^{h_n}(\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h})\|_{\mathbb{H}}^2+\frac{\beta}{2}\|\bar{\mathbf{X}}^{h}(\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h})\|_{\mathbb{H}}^2.
\end{align}
We estimate $|\langle\mathrm{B}(\bar{\mathbf{X}}^{h_n})-\mathrm{B}(\bar{\mathbf{X}}^{h}),\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}\rangle|$ using H\"older's and Young's inequalities as
\begin{align}\label{6.34}
|\langle\mathrm{B}(\bar{\mathbf{X}}^{h_n})-\mathrm{B}(\bar{\mathbf{X}}^{h}),\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}\rangle|&= |\langle\mathrm{B}(\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h},\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}),\bar{\mathbf{X}}^{h}\rangle|\nonumber\\&\leq\|\bar{\mathbf{X}}^{h}(\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h})\|_{\mathbb{H}}\|\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}\|_{\mathbb{V}} \nonumber\\&\leq\frac{\theta\mu}{2} \|\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}\|_{\mathbb{V}}^2+\frac{1}{2\theta\mu }\|\bar{\mathbf{X}}^{h}(\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h})\|_{\mathbb{H}}^2.
\end{align}
Combining \eqref{6.33} and \eqref{6.34}, we obtain
\begin{align}\label{632}
&\mu\langle\mathrm{A}(\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}),\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}\rangle+ \beta\langle\mathcal{C}(\bar{\mathbf{X}}^{h_n})-\mathcal{C}(\bar{\mathbf{X}}^{h}),\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}\rangle\nonumber\\&\quad+\langle\mathrm{B}(\bar{\mathbf{X}}^{h_n})-\mathrm{B}(\bar{\mathbf{X}}^{h}),\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}\rangle\nonumber\\&\geq\mu\left(1-\frac{\theta}{2}\right) \|\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}\|_{\mathbb{V}}^2+ \frac{\beta}{2}\|\bar{\mathbf{X}}^{h_n}(\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h})\|_{\mathbb{H}}^2+\frac{1}{2}\left(\beta-\frac{1}{\theta\mu}\right)\|\bar{\mathbf{X}}^{h}(\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h})\|_{\mathbb{H}}^2\nonumber\\&\geq\mu\left(1-\frac{\theta}{2}\right) \|\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}\|_{\mathbb{V}}^2+ \frac{1}{2}\left(\beta-\frac{1}{\theta\mu}\right)\|\bar{\mathbf{X}}^{h_n}-\bar{\mathbf{X}}^{h}\|_{\widetilde\mathbb{L}^4}^4,
\end{align}
for $\frac{1}{\beta\mu}<\theta<2$. Thus, we infer that
\begin{align}\label{642}
&\|\mathbf{W}_t^{h_n,h}\|_{\mathbb{H}}^2+\mu(2-\theta)\int_0^t\|\mathbf{W}_s^{h_n,h}\|_{\mathbb{V}}^2\d s+\left(\beta-\frac{1}{\theta\mu}\right)\int_0^t\|\mathbf{W}_s^{h_n,h}\|_{\widetilde\mathbb{L}^{4}}^{4}\d s\nonumber\\&\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2}}\int_0^t\|\mathbf{W}^{h_n,h}_s\|_{\mathbb{H}}^2\d s+C\int_0^t\|h_s^n\|_{\mathbb{H}}^2\|\mathbf{W}_s^{h_n,h}\|_{\mathbb{H}}^2\d s+\int_0^t\|\sigma_1(\bar{\mathbf{X}}_s^{h})\mathrm{Q}_1^{1/2}(h_s^n-h_s)\|_{\mathbb{H}}^2\d s.
\end{align}
Hence, for $2\beta\mu >1$, arguing similarly as in the previous cases, we finally obtain the required result \eqref{5.30z}.
\end{proof}
\subsection{Weak convergence}
Let us now verify the Hypothesis \ref{hyp1} (i) on weak convergence. We first establish the existence and uniqueness of solutions to the following stochastic controlled SCBF equations.
\begin{theorem}\label{thm5.10}
For any $h\in\mathcal{A}_M$, $0<M<+\infty$, under the Assumption \ref{ass3.6}, the stochastic control problem:
\begin{equation}\label{5.4z}
\left\{
\begin{aligned}
\d \mathbf{X}^{\varepsilon,\delta,h}_t&=-[\mu\mathrm{A} \mathbf{X}^{\varepsilon,\delta,h}_t+\mathrm{B}(\mathbf{X}^{\varepsilon,\delta,h}_t)+\alpha\mathbf{X}^{\varepsilon,\delta,h}_t+\beta\mathcal{C}(\mathbf{X}^{\varepsilon,\delta,h}_t)-\mathrm{F}(\mathbf{X}^{\varepsilon,\delta,h}_t,\mathbf{Y}^{\varepsilon,\delta,h}_t)]\d t\\&\quad+\sigma_1(\mathbf{X}^{\varepsilon,\delta,h}_t)\mathrm{Q}_1^{1/2}h_t\d t+\sqrt{\varepsilon}\sigma_1(\mathbf{X}^{\varepsilon,\delta,h}_t)\mathrm{Q}_1^{1/2}\d\mathrm{W}_t,\\
\d \mathbf{Y}^{\varepsilon,\delta,h}_t&=-\frac{1}{\delta}[\mu\mathrm{A} \mathbf{Y}^{\varepsilon,\delta,h}_t+\alpha\mathbf{Y}^{\varepsilon,\delta,h}_t+\beta\mathcal{C}(\mathbf{Y}_{t}^{\varepsilon,\delta,h})-\mathrm{G}(\mathbf{X}^{\varepsilon,\delta,h}_t,\mathbf{Y}^{\varepsilon,\delta,h}_t)]\d t\\&\quad+\frac{1}{\sqrt{\delta\varepsilon}}\sigma_2(\mathbf{X}^{\varepsilon,\delta,h}_t,\mathbf{Y}^{\varepsilon,\delta,h}_t)\mathrm{Q}_2^{1/2}h_t\d t+\frac{1}{\sqrt{\delta}}\sigma_2(\mathbf{X}^{\varepsilon,\delta,h}_t,\mathbf{Y}^{\varepsilon,\delta,h}_t)\mathrm{Q}_2^{1/2}\d\mathrm{W}_t,\\
\mathbf{X}^{\varepsilon,\delta,h}_0&=\mathbf{x},\ \mathbf{Y}^{\varepsilon,\delta,h}_0=\mathbf{y},
\end{aligned}
\right.
\end{equation}
has a \emph{pathwise unique strong solution} $(\mathbf{X}^{\varepsilon,\delta,h},\mathbf{Y}^{\varepsilon,\delta,h})\in\mathrm{L}^{2}(\Omega;\mathscr{E})\times\mathrm{L}^2(\Omega;\mathscr{E})$, where $\mathscr{E}=\mathrm{C}([0,T];\mathbb{H})\cap\mathrm{L}^2(0,T;\mathbb{V})\cap\mathrm{L}^{r+1}(0,T;\widetilde\mathbb{L}^{r+1})$ with $\mathscr{F}_t$-adapted paths in $\mathscr{E}\times\mathscr{E}$, $\mathbb{P}$-a.s. Furthermore, for all $\varepsilon,\delta\in(0,1)$ with $\frac{\delta}{\varepsilon}\leq\frac{1}{C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},M}},$ we have
\begin{align}\label{5.5z}
&\mathbb{E}\left[\sup_{t\in[0,T]}\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^{2}+\mu \int_0^T\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{V}}^{2}\d t+\beta \int_0^T\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d t\right] \nonumber\\&\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},M,T}(1+\|\mathbf{x}\|_{\mathbb{H}}^2+\|\mathbf{y}\|_{\mathbb{H}}^2),
\end{align}
and
\begin{align}\label{443}
\mathbb{E}\left[\int_0^T\|\mathbf{Y}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^{2}\d t\right]\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},M,T}(1+\|\mathbf{x}\|_{\mathbb{H}}^2+\|\mathbf{y}\|_{\mathbb{H}}^2).
\end{align}
\end{theorem}
\begin{proof}
The existence and uniqueness of a pathwise strong solution to the system \eqref{5.4z} satisfying the energy equality can be obtained similarly to Theorem 3.4, \cite{MTM11} (see also Theorem 3.7, \cite{MTM8}), by using the monotonicity and demicontinuity properties of the linear and nonlinear operators together with a stochastic generalization of the Minty-Browder technique.
Let us now prove the uniform energy estimates \eqref{5.5z} and \eqref{443}. An application of the infinite dimensional It\^o formula to the process $\|\mathbf{Y}^{\varepsilon,\delta,h}_{t}\|_{\mathbb{H}}^2$ yields (cf. \cite{MTM8})
\begin{align}\label{3.9}
\|\mathbf{Y}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^2&=\|\mathbf{y}\|_{\mathbb{H}}^2-\frac{2\mu}{\delta}\int_0^t\|\mathbf{Y}^{\varepsilon,\delta,h}_s\|_{\mathbb{V}}^2\d s-\frac{2\alpha}{\delta}\int_0^t\|\mathbf{Y}^{\varepsilon,\delta,h}_s\|_{\mathbb{H}}^2\d s-\frac{2\beta}{\delta}\int_0^t\|\mathbf{Y}^{\varepsilon,\delta,h}_s\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d s\nonumber\\&\quad+\frac{2}{\delta}\int_0^t(\mathrm{G}(\mathbf{X}^{\varepsilon,\delta,h}_s,\mathbf{Y}^{\varepsilon,\delta,h}_s),\mathbf{Y}^{\varepsilon,\delta,h}_s)\d s+\frac{2}{\sqrt{\delta\varepsilon}}\int_0^t(\sigma_2(\mathbf{X}^{\varepsilon,\delta,h}_s,\mathbf{Y}^{\varepsilon,\delta,h}_s)\mathrm{Q}_2^{1/2}h_s,\mathbf{Y}^{\varepsilon,\delta,h}_s)\d s\nonumber\\&\quad+\frac{1}{\delta}\int_0^t\|\sigma_2(\mathbf{X}^{\varepsilon,\delta,h}_s,\mathbf{Y}^{\varepsilon,\delta,h}_s)\mathrm{Q}_2^{1/2}\|_{\mathcal{L}_2}^2\d s+\frac{2}{\sqrt{\delta}}\int_0^t(\sigma_2(\mathbf{X}^{\varepsilon,\delta,h}_s,\mathbf{Y}^{\varepsilon,\delta,h}_s)\mathrm{Q}_2^{1/2}\d\mathrm{W}_s,\mathbf{Y}^{\varepsilon,\delta,h}_s),
\end{align}
for all $t\in[0,T]$, $\mathbb{P}$-a.s. Taking expectation in \eqref{3.9} and using the fact that the final term appearing in \eqref{3.9} is a martingale, we obtain
\begin{align}
&\mathbb{E}\left[\|\mathbf{Y}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^{2}\right]\nonumber\\&=\|\mathbf{y}\|_{\mathbb{H}}^{2}-\frac{2\mu}{\delta}\mathbb{E}\left[\int_0^t \|\mathbf{Y}^{\varepsilon,\delta,h}_s\|_{\mathbb{V}}^2\d s\right]-\frac{2\alpha}{\delta}\mathbb{E}\left[\int_0^t \|\mathbf{Y}^{\varepsilon,\delta,h}_s\|_{\mathbb{H}}^2\d s\right]-\frac{2\beta}{\delta}\mathbb{E}\left[\int_0^t \|\mathbf{Y}^{\varepsilon,\delta,h}_s\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d s\right]\nonumber\\&\quad+\frac{2}{\delta}\mathbb{E}\left[\int_0^t(\mathrm{G}(\mathbf{X}^{\varepsilon,\delta,h}_s,\mathbf{Y}^{\varepsilon,\delta,h}_s),\mathbf{Y}^{\varepsilon,\delta,h}_s)\d s\right]+\frac{2}{\sqrt{\delta\varepsilon}}\mathbb{E}\left[\int_0^t(\sigma_2(\mathbf{X}^{\varepsilon,\delta,h}_s,\mathbf{Y}^{\varepsilon,\delta,h}_s)\mathrm{Q}_2^{1/2}h_s,\mathbf{Y}^{\varepsilon,\delta,h}_s)\d s\right]\nonumber\\&\quad+\frac{1}{\delta}\mathbb{E}\left[\int_0^t\|\sigma_2(\mathbf{X}^{\varepsilon,\delta,h}_s,\mathbf{Y}^{\varepsilon,\delta,h}_s)\mathrm{Q}_2^{1/2}\|_{\mathcal{L}_2}^2\d s\right],
\end{align}
for all $t\in[0,T]$. Thus, it is immediate that
\begin{align}\label{312}
\frac{\d}{\d t} \mathbb{E}\left[\|\mathbf{Y}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^{2}\right]&=-\frac{2\mu}{\delta}\mathbb{E}\left[ \|\mathbf{Y}^{\varepsilon,\delta,h}_t\|_{\mathbb{V}}^2\right]-\frac{2\alpha}{\delta}\mathbb{E}\left[ \|\mathbf{Y}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^2\right]-\frac{2\beta}{\delta}\mathbb{E}\left[ \|\mathbf{Y}^{\varepsilon,\delta,h}_t\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\right]\nonumber\\&\quad+\frac{2}{\delta}\mathbb{E}\left[(\mathrm{G}(\mathbf{X}^{\varepsilon,\delta,h}_t,\mathbf{Y}^{\varepsilon,\delta,h}_t),\mathbf{Y}^{\varepsilon,\delta,h}_t)\right]+\frac{2}{\sqrt{\delta\varepsilon}}\mathbb{E}\left[(\sigma_2(\mathbf{X}^{\varepsilon,\delta,h}_t,\mathbf{Y}^{\varepsilon,\delta,h}_t)\mathrm{Q}_2^{1/2}h_t,\mathbf{Y}^{\varepsilon,\delta,h}_t)\right]\nonumber\\&\quad+\frac{1}{\delta}\mathbb{E}\left[\|\sigma_2(\mathbf{X}^{\varepsilon,\delta,h}_t,\mathbf{Y}^{\varepsilon,\delta,h}_t)\mathrm{Q}_2^{1/2}\|_{\mathcal{L}_2}^2\right],
\end{align}
for a.e. $t\in[0,T]$. Using the Assumption \ref{ass3.6} (A1), the Cauchy-Schwarz inequality and Young's inequality, we get
\begin{align}\label{313}
\frac{2}{\delta}(\mathrm{G}(\mathbf{X}^{\varepsilon,\delta,h},\mathbf{Y}^{\varepsilon,\delta,h}),\mathbf{Y}^{\varepsilon,\delta,h})&\leq \frac{2}{\delta}\|\mathrm{G}(\mathbf{X}^{\varepsilon,\delta,h},\mathbf{Y}^{\varepsilon,\delta,h})\|_{\mathbb{H}}\|\mathbf{Y}^{\varepsilon,\delta,h}\|_{\mathbb{H}}\nonumber\\&\leq \frac{2}{\delta} \left(\|\mathrm{G}(\mathbf{0},\mathbf{0})\|_{\mathbb{H}}+C\|\mathbf{X}^{\varepsilon,\delta,h}\|_{\mathbb{H}}+L_{\mathrm{G}}\|\mathbf{Y}^{\varepsilon,\delta,h}\|_{\mathbb{H}}\right)\|\mathbf{Y}^{\varepsilon,\delta,h}\|_{\mathbb{H}}\nonumber\\&\leq\left(\frac{\mu\lambda_1}{2\delta}+\frac{2L_{\mathrm{G}}}{\delta}\right)\|\mathbf{Y}^{\varepsilon,\delta,h}\|_{\mathbb{H}}^{2}+\frac{C}{\mu\lambda_1\delta}\|\mathbf{X}^{\varepsilon,\delta,h}\|_{\mathbb{H}}^{2}+\frac{C}{\mu\lambda_1\delta}.
\end{align}
Using the Cauchy-Schwarz inequality, Assumption \ref{ass3.6} (A2) and Young's inequality, we estimate the term $\frac{2}{\sqrt{\delta\varepsilon}}(\sigma_2(\mathbf{X}^{\varepsilon,\delta,h},\mathbf{Y}^{\varepsilon,\delta,h})h,\mathbf{Y}^{\varepsilon,\delta,h})$ as
\begin{align}
\frac{2}{\sqrt{\delta\varepsilon}}(\sigma_2(\mathbf{X}^{\varepsilon,\delta,h},\mathbf{Y}^{\varepsilon,\delta,h})\mathrm{Q}_2^{1/2}h,\mathbf{Y}^{\varepsilon,\delta,h})&\leq \frac{2}{\sqrt{\delta\varepsilon}}\|\sigma_2(\mathbf{X}^{\varepsilon,\delta,h},\mathbf{Y}^{\varepsilon,\delta,h})\mathrm{Q}_2^{1/2}\|_{\mathcal{L}_2}\|h\|_{\mathbb{H}}\|\mathbf{Y}^{\varepsilon,\delta,h}\|_{\mathbb{H}}\nonumber\\&\leq \frac{C}{\sqrt{\delta\varepsilon}}(1+\|\mathbf{X}^{\varepsilon,\delta,h}\|_{\mathbb{H}})\|h\|_{\mathbb{H}}\|\mathbf{Y}^{\varepsilon,\delta,h}\|_{\mathbb{H}}\nonumber\\&\leq\frac{\mu\lambda_1}{4\delta}\|\mathbf{Y}^{\varepsilon,\delta,h}\|_{\mathbb{H}}^{2} +\frac{C}{\mu\lambda_1\varepsilon}\left(1+\|\mathbf{X}^{\varepsilon,\delta,h}\|_{\mathbb{H}}^{2}\right)\|h\|_{\mathbb{H}}^2.
\end{align}
Once again, using the Assumption \ref{ass3.6} (A2) and Young's inequality, we obtain
\begin{align}\label{315}
\frac{1}{\delta}\|\sigma_2(\mathbf{X}^{\varepsilon,\delta,h},\mathbf{Y}^{\varepsilon,\delta,h})\mathrm{Q}_2^{1/2}\|_{\mathcal{L}_2}^2&\leq \frac{C}{\delta}(1+\|\mathbf{X}^{\varepsilon,\delta,h}\|_{\mathbb{H}}^2)\leq \frac{\mu\lambda_1}{4\delta}\|\mathbf{Y}^{\varepsilon,\delta,h}\|_{\mathbb{H}}^{2}+\frac{C}{\mu\lambda_1\delta}\|\mathbf{X}^{\varepsilon,\delta,h}\|_{\mathbb{H}}^{2p}+\frac{C}{\mu\lambda_1\delta}.
\end{align}
Combining \eqref{313}-\eqref{315} and using them in \eqref{312}, we find
\begin{align}\label{3.16}
\frac{\d}{\d t} \mathbb{E}\left[\|\mathbf{Y}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^{2}\right]&\leq-\frac{1}{\delta}\left(\mu{\lambda_1}+2\alpha-2L_{\mathrm{G}}\right)\mathbb{E}\left[ \|\mathbf{Y}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^{2}\right]+\frac{C}{\mu\lambda_1\delta}\mathbb{E}\left[\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^{2}\right]+\frac{C}{\mu\lambda_1\delta}\nonumber\\&\quad+\frac{C}{\mu\lambda_1\varepsilon}\mathbb{E}\left[\left(1+\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^{2}\right)\|h_t\|_{\mathbb{H}}^2\right],
\end{align}
for a.e. $t\in[0,T]$. By the Assumption \ref{ass3.6} (A3), we know that $\mu\lambda_1+2\alpha>2L_{\mathrm{G}}$, and an application of the variation of constants formula gives
\begin{align}\label{3.17}
\mathbb{E}\left[\|\mathbf{Y}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^{2}\right]&\leq\|\mathbf{y}\|_{\mathbb{H}}^{2}e^{-\frac{\gamma t}{\delta}}+\frac{C}{\mu\lambda_1\delta}\int_0^te^{-\frac{\gamma(t-s)}{\delta}}\left(1+\mathbb{E}\left[\|\mathbf{X}^{\varepsilon,\delta,h}_s\|_{\mathbb{H}}^{2}\right]\right)\d s\nonumber\\&\quad+\frac{C}{\mu\lambda_1\varepsilon}\int_0^te^{-\frac{\gamma(t-s)}{\delta}}\mathbb{E}\left[\left(1+\|\mathbf{X}^{\varepsilon,\delta,h}_s\|_{\mathbb{H}}^{2}\right)\|h_s\|_{\mathbb{H}}^2\right]\d s,
\end{align}
for all $t\in[0,T]$, where $\gamma=\left(\mu{\lambda_1}+2\alpha-2L_{\mathrm{G}}\right)$. Integrating the above inequality from $0$ to $T$ and interchanging the order of integration by Fubini's theorem, we also get
\begin{align}\label{452}
\mathbb{E}\left[\int_0^T\|\mathbf{Y}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^{2}\d t\right]&\leq\|\mathbf{y}\|_{\mathbb{H}}^{2}\int_0^Te^{-\frac{\gamma t}{\delta}}\d t+\frac{C}{\mu\lambda_1\delta}\int_0^T\int_0^te^{-\frac{\gamma(t-s)}{\delta}}\left(1+\mathbb{E}\left[\|\mathbf{X}^{\varepsilon,\delta,h}_s\|_{\mathbb{H}}^{2}\right]\right)\d s\d t\nonumber\\&\quad+\frac{C}{\mu\lambda_1\varepsilon}\mathbb{E}\left[\sup_{t\in[0,T]}\left(1+\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^{2}\right)\int_0^T\int_0^te^{-\frac{\gamma(t-s)}{\delta}}\|h_s\|_{\mathbb{H}}^2\d s\d t\right]\nonumber\\&\leq\frac{\delta}{\gamma}\|\mathbf{y}\|_{\mathbb{H}}^{2}+\frac{C}{\mu\lambda_1\gamma}\mathbb{E}\left[\int_0^T\left(1+\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^{2}\right)\d t\right]\nonumber\\&\quad+\frac{C\delta}{\mu\lambda_1\gamma\varepsilon}\mathbb{E}\left[\sup_{t\in[0,T]}\left(1+\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^{2}\right)\int_0^T\|h_t\|_{\mathbb{H}}^2\d t\right]\nonumber\\&\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},T}(1+\|\mathbf{y}\|_{\mathbb{H}}^2)+C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},T}\mathbb{E}\left[\int_0^T\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^{2}\d t\right]\nonumber\\&\quad+C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},M}\left(\frac{\delta}{\varepsilon}\right)\left[T+\mathbb{E}\left(\sup_{t\in[0,T]}\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^{2}\right)\right],
\end{align}
since $\delta\in(0,1)$.
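For the reader's convenience, we record the elementary comparison form of the variation of constants formula used above to pass from \eqref{3.16} to \eqref{3.17}: if $\varphi:[0,T]\to[0,\infty)$ is absolutely continuous and satisfies $\frac{\d}{\d t}\varphi(t)\leq-\frac{\gamma}{\delta}\varphi(t)+g(t)$, for a.e. $t\in[0,T]$, with $g\geq0$ integrable, then
\begin{align*}
\varphi(t)\leq\varphi(0)e^{-\frac{\gamma t}{\delta}}+\int_0^te^{-\frac{\gamma(t-s)}{\delta}}g(s)\d s, \ \text{ for all }\ t\in[0,T].
\end{align*}
We apply this with $\varphi(t)=\mathbb{E}\left[\|\mathbf{Y}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^{2}\right]$ and $g(t)$ collecting the remaining terms on the right hand side of \eqref{3.16}.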
Let us now obtain the energy estimates satisfied by the process $\mathbf{X}^{\varepsilon,\delta,h}_{t}$. Applying the infinite dimensional It\^o formula to the process $\|\mathbf{X}^{\varepsilon,\delta,h}_{t}\|_{\mathbb{H}}^2$ (see \cite{MTM8}), we find
\begin{align}\label{3.19}
\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^2&=\|\mathbf{x}\|_{\mathbb{H}}^2-2\mu\int_0^t\|\mathbf{X}^{\varepsilon,\delta,h}_s\|_{\mathbb{V}}^2\d s-2\alpha\int_0^t\|\mathbf{X}^{\varepsilon,\delta,h}_s\|_{\mathbb{H}}^2\d s-2\beta\int_0^t\|\mathbf{X}^{\varepsilon,\delta,h}_s\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d s\nonumber\\&\quad+2\int_0^t(\mathrm{F}(\mathbf{X}^{\varepsilon,\delta,h}_s,\mathbf{Y}^{\varepsilon,\delta,h}_s),\mathbf{X}^{\varepsilon,\delta,h}_s)\d s+2\int_0^t(\sigma_1(\mathbf{X}^{\varepsilon,\delta,h}_s)\mathrm{Q}_1^{1/2}h_s,\mathbf{X}^{\varepsilon,\delta,h}_s)\d s\nonumber\\&\quad+\varepsilon\int_0^t\|\sigma_1(\mathbf{X}^{\varepsilon,\delta,h}_s)\mathrm{Q}_1^{1/2}\|_{\mathcal{L}_2}^2\d s+2\sqrt{\varepsilon}\int_0^t(\sigma_1(\mathbf{X}^{\varepsilon,\delta,h}_s)\mathrm{Q}_1^{1/2}\d\mathrm{W}_s,\mathbf{X}^{\varepsilon,\delta,h}_s),
\end{align}
for all $t\in[0,T]$, $\mathbb{P}$-a.s.
Taking supremum over $[0,T]$ and then taking expectation in \eqref{3.19}, we obtain
\begin{align}\label{320}
&\mathbb{E}\left[\sup_{t\in[0,T]}\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^{2}+2\mu \int_0^T\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{V}}^{2}\d t+2\alpha \int_0^T\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^{2}\d t+2\beta \int_0^T\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d t\right]\nonumber\\&\leq \|\mathbf{x}\|_{\mathbb{H}}^{2}+2\mathbb{E}\left[\int_0^T|(\mathrm{F}(\mathbf{X}^{\varepsilon,\delta,h}_t,\mathbf{Y}^{\varepsilon,\delta,h}_t),\mathbf{X}^{\varepsilon,\delta,h}_t)|\d t\right]+2\mathbb{E}\left[\int_0^T|(\sigma_1(\mathbf{X}^{\varepsilon,\delta,h}_t)\mathrm{Q}_1^{1/2}h_t,\mathbf{X}^{\varepsilon,\delta,h}_t)|\d t\right]\nonumber\\&\quad +\varepsilon\mathbb{E}\left[\int_0^T\|\sigma_1(\mathbf{X}^{\varepsilon,\delta,h}_t)\mathrm{Q}_1^{1/2}\|_{\mathcal{L}_2}^2\d t\right]+2\sqrt{\varepsilon}\mathbb{E}\left[\sup_{t\in[0,T]}\left|\int_0^t(\sigma_1(\mathbf{X}^{\varepsilon,\delta,h}_s)\mathrm{Q}_1^{1/2}\d\mathrm{W}_s,\mathbf{X}^{\varepsilon,\delta,h}_s)\right|\right]\nonumber\\&\leq \|\mathbf{x}\|_{\mathbb{H}}^{2}+CT+C\mathbb{E}\left[\int_0^T\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^{2}\d t\right]+C\mathbb{E}\left[\int_0^T\|\mathbf{Y}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^{2}\d t\right]\nonumber\\&\quad+2\mathbb{E}\left[\int_0^T|(\sigma_1(\mathbf{X}^{\varepsilon,\delta,h}_t)\mathrm{Q}_1^{1/2}h_t,\mathbf{X}^{\varepsilon,\delta,h}_t)|\d t\right]+2\sqrt{\varepsilon}\mathbb{E}\left[\sup_{t\in[0,T]}\left|\int_0^t(\sigma_1(\mathbf{X}^{\varepsilon,\delta,h}_s)\mathrm{Q}_1^{1/2}\d\mathrm{W}_s,\mathbf{X}^{\varepsilon,\delta,h}_s)\right|\right],
\end{align}
where we used calculations similar to \eqref{313}-\eqref{315}. We estimate the penultimate term from the right hand side of the inequality \eqref{320} using the Cauchy-Schwarz inequality, H\"older's and Young's inequalities as
\begin{align}\label{5.9z}
&2\mathbb{E}\left[\int_0^T|(\sigma_1(\mathbf{X}^{\varepsilon,\delta,h}_t)\mathrm{Q}_1^{1/2}h_t,\mathbf{X}^{\varepsilon,\delta,h}_t)|\d t\right]\nonumber\\&\leq 2\mathbb{E}\left[\int_0^{T}\|\sigma_1(\mathbf{X}^{\varepsilon,\delta,h}_t)\mathrm{Q}_1^{1/2}\|_{\mathcal{L}_2}\|h_t\|_{\mathbb{H}}\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}\d t\right]\nonumber\\&\leq 2\mathbb{E}\left[\sup_{t\in[0,T]}\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}\int_0^{T}\|\sigma_1(\mathbf{X}^{\varepsilon,\delta,h}_t)\mathrm{Q}_1^{1/2}\|_{\mathcal{L}_2}\|h_t\|_{\mathbb{H}}\d t\right]\nonumber\\&\leq \frac{1}{4}\mathbb{E}\left[\sup_{t\in[0,T]}\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^2\right]+4\mathbb{E}\left(\int_0^{T}\|\sigma_1(\mathbf{X}^{\varepsilon,\delta,h}_t)\mathrm{Q}_1^{1/2}\|_{\mathcal{L}_2}\|h_t\|_{\mathbb{H}}\d t\right)^2\nonumber\\&\leq\frac{1}{4}\mathbb{E}\left[\sup_{t\in[0,T]}\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^2\right]+4 \mathbb{E}\left[\left(\int_0^{T}\|\sigma_1(\mathbf{X}^{\varepsilon,\delta,h}_t)\mathrm{Q}_1^{1/2}\|_{\mathcal{L}_2}^2\d t\right)\left(\int_0^{T}\|h_t\|_{\mathbb{H}}^2\d t\right)\right]\nonumber\\&\leq \frac{1}{4}\mathbb{E}\left[\sup_{t\in[0,T]}\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^2\right]+4M \mathbb{E}\left[\int_0^{T}\|\sigma_1(\mathbf{X}^{\varepsilon,\delta,h}_t)\mathrm{Q}_1^{1/2}\|_{\mathcal{L}_2}^2\d t\right]\nonumber\\&\leq \frac{1}{4}\mathbb{E}\left[\sup_{t\in[0,T]}\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^2\right]+CMT+CM\mathbb{E}\left[\int_0^T\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^2\d t\right],
\end{align}
where we used the fact that $h\in\mathcal{A}_M$. Using the Burkholder-Davis-Gundy inequality (see Theorem 1, \cite{BD} for the case $p=1$, Theorem 1.1, \cite{DLB} for the best constant, and \cite{CMMR} for the BDG inequality in infinite dimensions), we estimate the final term on the right hand side of the inequality \eqref{320} as
\begin{align}\label{3.21}
&2\sqrt{\varepsilon}\mathbb{E}\left[\sup_{t\in[0,T]}\left|\int_0^t(\sigma_1(\mathbf{X}^{\varepsilon,\delta,h}_s)\mathrm{Q}_1^{1/2}\d\mathrm{W}_s,\mathbf{X}^{\varepsilon,\delta,h}_s)\right|\right]\nonumber\\&\leq C\sqrt{\varepsilon}\mathbb{E}\left[\left(\int_0^T\|\sigma_1(\mathbf{X}^{\varepsilon,\delta,h}_t)\mathrm{Q}_1^{1/2}\|_{\mathcal{L}_2}^2\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^2\d t\right)^{1/2}\right]\nonumber\\&\leq C\sqrt{\varepsilon}\mathbb{E}\left[\sup_{t\in[0,T]}\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}\left(\int_0^T\|\sigma_1(\mathbf{X}^{\varepsilon,\delta,h}_t)\mathrm{Q}_1^{1/2}\|_{\mathcal{L}_2}^2\d t\right)^{1/2}\right]\nonumber\\&\leq\frac{1}{4}\mathbb{E}\left[\sup_{t\in[0,T]}\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^{2}\right]+C\varepsilon\mathbb{E}\left[\int_0^T\|\sigma_1(\mathbf{X}^{\varepsilon,\delta,h}_t)\mathrm{Q}_1^{1/2}\|_{\mathcal{L}_2}^2\d t\right]\nonumber\\&\leq\frac{1}{4}\mathbb{E}\left[\sup_{t\in[0,T]}\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^{2}\right]+C\varepsilon\mathbb{E}\left[\int_0^T\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^{2p}\d t\right]+C\varepsilon T,
\end{align}
where we used the Assumption \ref{ass3.6} (A1). Using \eqref{452}, \eqref{5.9z} and \eqref{3.21} in \eqref{320}, we deduce that
\begin{align}\label{322}
&\mathbb{E}\left[\sup_{t\in[0,T]}\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^{2}+4\mu \int_0^T\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{V}}^{2}\d t+4\alpha \int_0^T\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^{2}\d t+4\beta \int_0^T\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d t\right] \nonumber\\&\leq 2\|\mathbf{x}\|_{\mathbb{H}}^{2}+C(1+M+\varepsilon)T+C(1+M+\varepsilon)\mathbb{E}\left[\int_0^T\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^{2}\d t\right]+C\mathbb{E}\left[\int_0^T\|\mathbf{Y}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^{2}\d t\right]\nonumber\\&\leq 2\|\mathbf{x}\|_{\mathbb{H}}^{2}+C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},T}(1+\|\mathbf{y}\|_{\mathbb{H}}^2)+C_MT+C_M\mathbb{E}\left[\int_0^T\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^{2}\d t\right]\nonumber\\&\quad+C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},T}\mathbb{E}\left[\int_0^T\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^{2}\d t\right]+C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},M}\left(\frac{\delta}{\varepsilon}\right)\left[T+\mathbb{E}\left(\sup_{t\in[0,T]}\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^{2}\right)\right],
\end{align}
since $\varepsilon,\delta\in(0,1)$. By the Assumption \ref{ass3.6} (A4), one can choose $\frac{\delta}{\varepsilon}<\frac{1}{C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},M}}$, so that
\begin{align}\label{3.22}
&\mathbb{E}\left[\sup_{t\in[0,T]}\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^{2}\right]\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},M,T}\left\{1+\|\mathbf{x}\|_{\mathbb{H}}^2+\|\mathbf{y}\|_{\mathbb{H}}^2+\mathbb{E}\left[\int_0^T\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^{2}\d t\right]\right\}.
\end{align}
An application of Gronwall's inequality in \eqref{3.22} implies
\begin{align}\label{3.23}
\mathbb{E}\left[\sup_{t\in[0,T]}\|\mathbf{X}^{\varepsilon,\delta,h}_t\|_{\mathbb{H}}^{2}\right]\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},M,T}\left(1+\|\mathbf{x}\|_{\mathbb{H}}^2+\|\mathbf{y}\|_{\mathbb{H}}^2\right).
\end{align}
Substitution of \eqref{3.23} in \eqref{322} yields the estimate \eqref{5.5z}. Using \eqref{3.23} in \eqref{452}, we finally obtain \eqref{443}.
\end{proof}
Let us now consider the process $(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}},\mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}})$ which solves the following stochastic control system:
\begin{equation}\label{442}
\left\{
\begin{aligned}
\d \mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_t&=-[\mu\mathrm{A} \mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_t+\mathrm{B}(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_t)+\alpha\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_t+\beta\mathcal{C}(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_t)-\mathrm{F}(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_t,\mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}}_t)]\d t\\&\quad+\sigma_1(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_t)\mathrm{Q}_1^{1/2}h_t^{\varepsilon}\d t+\sqrt{\varepsilon}\sigma_1(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_t)\mathrm{Q}_1^{1/2}\d\mathrm{W}_t,\\
\d \mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}}_t&=-\frac{1}{\delta}[\mu\mathrm{A} \mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}}_t+\alpha\mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}}_t+\beta\mathcal{C}(\mathbf{Y}_{t}^{\varepsilon,\delta,h^{\varepsilon}})-\mathrm{G}(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_t,\mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}}_t)]\d t\\&\quad+\frac{1}{\sqrt{\delta\varepsilon}}\sigma_2(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_t,\mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}}_t)\mathrm{Q}_2^{1/2}h_t^{\varepsilon}\d t+\frac{1}{\sqrt{\delta}}\sigma_2(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_t,\mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}}_t)\mathrm{Q}_2^{1/2}\d\mathrm{W}_t,\\
\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_0&=\mathbf{x},\ \mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}}_0=\mathbf{y}.
\end{aligned}
\right.
\end{equation}
Using Theorem \ref{thm5.10}, we know that the system \eqref{442} has a pathwise unique strong solution $(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}},\mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}})$ with paths in $\mathscr{E}\times\mathscr{E},$ $\mathbb{P}\text{-a.s.}$, where $\mathscr{E}=\mathrm{C}([0,T];\mathbb{H})\cap\mathrm{L}^2(0,T;\mathbb{V})\cap\mathrm{L}^{r+1}(0,T;\widetilde\mathbb{L}^{r+1}).$ Since
\begin{align*}
\mathbb{E}\left(\exp\left\{-\frac{1}{\sqrt{\varepsilon}}\int_0^T(h^{\varepsilon}_t,\d\mathrm{W}_t)-\frac{1}{2\varepsilon}\int_0^T\|h^{\varepsilon}_t\|_{\mathbb{H}}^2\d t \right\}\right)=1,
\end{align*}
(this identity holds, for instance, by Novikov's condition, since $\int_0^T\|h^{\varepsilon}_t\|_{\mathbb{H}}^2\d t\leq M$, $\mathbb{P}$-a.s., for $h^{\varepsilon}\in\mathcal{A}_M$), the measure $\widehat{\mathbb{P}}$ defined by
\begin{align*}
\d\widehat{\mathbb{P}}(\omega)=\exp\left\{-\frac{1}{\sqrt{\varepsilon}}\int_0^T(h^{\varepsilon}_t,\d\mathrm{W}_t)-\frac{1}{2\varepsilon}\int_0^T\|h^{\varepsilon}_t\|_{\mathbb{H}}^2\d t\right\}\d{\mathbb{P}}(\omega)
\end{align*}
is a probability measure on $(\Omega,\mathscr{F})$. Moreover, $\widehat{\mathbb{P}}$ and $\mathbb{P}$ are mutually absolutely continuous, and by Girsanov's theorem (Theorem 10.14, \cite{DZ}; Appendix, \cite{GDFF}), the process
\begin{align*}
\widehat{\mathrm{W}}_t:=\mathrm{W}_t+\frac{1}{\sqrt{\varepsilon}}\int_0^th^{\varepsilon}_s\d s, \ t\in[0,T],
\end{align*}
is a cylindrical Wiener process with respect to $\{\mathscr{F}_t\}_{t\geq 0}$ on the probability space $(\Omega,\mathscr{F},\widehat{\mathbb{P}})$. Thus, we know that (\cite{BD,MRBS}) $$(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{\cdot},\mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}}_{\cdot})=\left(\mathcal{G}^{\varepsilon}\left(\mathrm{W}_{\cdot} +\frac{1}{\sqrt{\varepsilon}}\int_0^{\cdot}h^{\varepsilon}_s\d s\right),\mathcal{G}^{\delta}\left(\mathrm{W}_{\cdot} +\frac{1}{\sqrt{\varepsilon}}\int_0^{\cdot}h^{\varepsilon}_s\d s\right)\right)$$ is the unique strong solution of \eqref{2.18a} with $\mathrm{W}_{\cdot}$ replaced by $\widehat{\mathrm{W}}_{\cdot}$, on $(\Omega,\mathscr{F},\{\mathscr{F}_t\}_{t\geq 0},\widehat{\mathbb{P}})$. Moreover, the system \eqref{2.18a} with $\widehat{\mathrm{W}}_{\cdot}$ is the same as the system \eqref{442}, and since $\widehat{\mathbb{P}}$ and $\mathbb{P}$ are mutually absolutely continuous, we further find that $(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{\cdot},\mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}}_{\cdot})$ is the unique strong solution of \eqref{442} on $(\Omega,\mathscr{F},\{\mathscr{F}_t\}_{t\geq 0},{\mathbb{P}})$. Thus, the solution of the first equation in \eqref{442} is represented as
$$\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{\cdot}=\mathcal{G}^{\varepsilon}\left(\mathrm{W}_{\cdot} +\frac{1}{\sqrt{\varepsilon}}\int_0^{\cdot}h^{\varepsilon}_s\d s\right).$$
Since our approach is based on the Khasminskii time discretization, we need the following lemma. A similar result is obtained for the stochastic 2D Navier-Stokes equations in \cite{SLXS}, for the stochastic Burgers equation in \cite{XSRW}, and for the SCBF equations in \cite{MTM11}. Since the system \eqref{442} consists of controlled SCBF equations, we provide a proof here.
Let us first define a sequence of stopping times $\{\tau_R^{\varepsilon}\}$ as
\begin{align}\label{stop}
\tau_R^{\varepsilon}:=\inf\left\{t\geq 0:\|\mathbf{X}_t^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}> R\right\},
\end{align}
for any $\varepsilon,R>0$.
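Note that, by the definition of $\tau_R^{\varepsilon}$, we have $\|\mathbf{X}_t^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}\leq R$ for all $t<\tau_R^{\varepsilon}$; this is the bound exploited through the indicator $\chi_{\{t\leq\tau^{\varepsilon}_R\}}$ in the proof of the following lemma.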
\begin{lemma}\label{lem3.8}
For any $\mathbf{x},\mathbf{y}\in\mathbb{H}$, $T>0$, $\varepsilon,\Delta>0$ small enough, there exists a constant $C_{\mu,\alpha,\beta,\lambda_1,L_{\mathrm{G}},M,T,R}>0$ such that
\begin{align}\label{3.24}
\mathbb{E}\left[\int_0^{T\wedge\tau_R^{\e}}\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{t(\Delta)}\|_{\mathbb{H}}^2\d t \right]\leq C_{\mu,\alpha,\beta,\lambda_1,L_{\mathrm{G}},M,T,R}\Delta^{1/2}(1+\|\mathbf{x}\|_{\mathbb{H}}^2+\|\mathbf{y}\|_{\mathbb{H}}^2),
\end{align}
where $t(\Delta):=\left[\frac{t}{\Delta}\right]\Delta$ and $[s]$ stands for the largest integer less than or equal to $s$.
\end{lemma}
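Note that $t(\Delta)\leq t<t(\Delta)+\Delta$, so that $0\leq t-t(\Delta)<\Delta$ and $t-\Delta<t(\Delta)$ for every $t\in[0,T]$; for instance, if $\Delta=0.1$, then $t(\Delta)=0.3$ for every $t\in[0.3,0.4)$.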
\begin{proof}
A straightforward calculation gives
\begin{align}\label{3.25}
& \mathbb{E}\left[\int_0^{T\wedge\tau_R^{\e}}\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{t(\Delta)}\|_{\mathbb{H}}^2\d t\right] \nonumber\\&\leq \mathbb{E}\left[\int_0^{\Delta}\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{x}\|_{\mathbb{H}}^2\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d t\right]+ \mathbb{E}\left[\int_{\Delta}^T\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{t(\Delta)}\|_{\mathbb{H}}^2\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d t\right]\nonumber\\&\leq C_{R}\left(1+\|\mathbf{x}\|_{\mathbb{H}}^{2}\right)\Delta+ 2\mathbb{E}\left[\int_{\Delta}^T\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{t-\Delta}\|_{\mathbb{H}}^2\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d t\right]\nonumber\\&\quad+2 \mathbb{E}\left[\int_{\Delta}^T\|\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{t(\Delta)}-\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{t-\Delta}\|_{\mathbb{H}}^2\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d t\right].
\end{align}
Let us first estimate the second term on the right hand side of the inequality \eqref{3.25}. Applying the infinite dimensional It\^o formula to the process $\mathrm{Z}_r=\|\mathbf{X}_{r}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{t-\Delta}\|_{\mathbb{H}}^2$ over the interval $[t-\Delta,t]$, we find
\begin{align}
&\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{t-\Delta}\|_{\mathbb{H}}^2\nonumber\\&=-2\mu\int_{t-\Delta}^t\langle\mathrm{A}\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}},\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{t-\Delta}\rangle\d s-2\alpha\int_{t-\Delta}^t(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}},\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{t-\Delta})\d s\nonumber\\&\quad-2\int_{t-\Delta}^t\langle\mathrm{B}(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}),\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{t-\Delta}\rangle\d s-2\beta\int_{t-\Delta}^t\langle\mathcal{C}(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}),\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}_{t-\Delta}^{\varepsilon,\delta,h^{\varepsilon}}\rangle\d s\nonumber\\&\quad+2\int_{t-\Delta}^t(\mathrm{F}(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}},\mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}}_s),\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}_{t-\Delta}^{\varepsilon,\delta,h^{\varepsilon}})\d s+2\int_{t-\Delta}^t(\sigma_1(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_s)\mathrm{Q}_1^{1/2}h_s^{\varepsilon},\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}
-\mathbf{X}_{t-\Delta}^{\varepsilon,\delta,h^{\varepsilon}})\d s\nonumber\\&\quad+\varepsilon\int_{t-\Delta}^t\|\sigma_1(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})\mathrm{Q}_1^{1/2}\|_{\mathcal{L}_2}^2\d s+2\sqrt{\varepsilon}\int_{t-\Delta}^t(\sigma_1(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})\mathrm{Q}_1^{1/2}\d\mathrm{W}_s,\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}_{t-\Delta}^{\varepsilon,\delta,h^{\varepsilon}})\nonumber\\&=:\sum_{k=1}^8I_k(t).
\end{align}
Using an integration by parts, H\"older's inequality, Fubini's Theorem and \eqref{5.5z}, we estimate $\mathbb{E}\left(\int_{\Delta}^T|I_1(t)|\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d t\right)$ as (see \eqref{4p27} also)
\begin{align}\label{327}
& \mathbb{E}\left( \int_{\Delta}^T|I_1(t)|\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d t\right)\nonumber\\&\leq 2\mu\left[\mathbb{E}\left(\int_{\Delta}^T\int_{t-\Delta}^t\|\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{V}}^2\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d s\d t\right)\right]^{1/2} \left[\mathbb{E}\left(\int_{\Delta}^T\int_{t-\Delta}^t\|\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}_{t-\Delta}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{V}}^2\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d s\d t\right)\right]^{1/2}\nonumber\\&\leq 2\mu\left[\Delta\mathbb{E}\left(\int_{0}^T\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{V}}^2\d t\right)\right]^{1/2}\left[2\Delta\mathbb{E}\left(\int_{0}^T\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{V}}^2\d t\right)\right]^{1/2}\nonumber\\&\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},T}\Delta\left(1+\|\mathbf{x}\|_{\mathbb{H}}^2+\|\mathbf{y}\|_{\mathbb{H}}^2\right).
\end{align}
Similarly, we estimate $\mathbb{E}\left(\int_{\Delta}^T|I_2(t)|\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d t\right)$ as
\begin{align}
\mathbb{E}\left(\int_{\Delta}^T|I_2(t)|\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d t\right)&\leq C\alpha\Delta T\mathbb{E}\left(\sup_{t\in[0,T]}\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}^2\right)\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},T}\Delta\left(1+\|\mathbf{x}\|_{\mathbb{H}}^2+\|\mathbf{y}\|_{\mathbb{H}}^2\right).
\end{align}
For $n=2$ and $r\in[1,3)$, using H\"older's and Ladyzhenskaya's inequalities, Fubini's Theorem and \eqref{5.5z}, we estimate $\mathbb{E}\left(\int_{\Delta}^T|I_3(t)|\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d t\right)$ as (see \eqref{4p29} also)
\begin{align}
& \mathbb{E}\left(\int_{\Delta}^T|I_3(t)|\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d t\right)\nonumber\\&\leq 2\sqrt{2}\left[\mathbb{E}\left(\int_{\Delta}^T\int_{t-\Delta}^t\|\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}^2\|\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{V}}^2\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d s\d t\right)\right]^{1/2}\nonumber\\&\quad\times\left[\mathbb{E}\left(\int_{\Delta}^T\int_{t-\Delta}^t\|\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}_{t-\Delta}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{V}}^2\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d s\d t\right)\right]^{1/2}\nonumber\\&\leq 2\sqrt{2}\left[\Delta\mathbb{E}\left(\int_0^T\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}^2\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{V}}^2\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d t\right)\right]^{1/2} \left[2\Delta\mathbb{E}\left(\int_{0}^T\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{V}}^2\d t\right)\right]^{1/2}\nonumber\\&\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},T,R}\Delta\left(1+\|\mathbf{x}\|_{\mathbb{H}}^2+\|\mathbf{y}\|_{\mathbb{H}}^2\right).
\end{align}
For $n=2,3$ and $r\geq 3$ (take $2\beta\mu>1$, for $r=3$), we estimate $\mathbb{E}\left(\int_{\Delta}^T|I_3(t)|\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d t\right)$ using H\"older's inequality, interpolation inequality and \eqref{5.5z} as (see \eqref{4p30} also)
\begin{align}
& \mathbb{E}\left(\int_{\Delta}^T|I_3(t)|\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d t\right) \nonumber\\&\leq 2\left[\mathbb{E}\left(\int_{\Delta}^T\int_{t-\Delta}^t\|\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}^{\frac{2(r-3)}{r-1}}\|\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\widetilde\mathbb{L}^{r+1}}^{\frac{2(r+1)}{r-1}}\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d s\d t\right)\right]^{1/2}\nonumber\\&\quad\times\left[\mathbb{E}\left(\int_{\Delta}^T\int_{t-\Delta}^t\|\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}_{t-\Delta}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{V}}^2\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d s\d t\right)\right]^{1/2}\nonumber\\&\leq 2\left[\Delta\mathbb{E}\left(\int_{0}^T\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}^{\frac{2(r-3)}{r-1}}\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\widetilde\mathbb{L}^{r+1}}^{\frac{2(r+1)}{r-1}}\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d t\right)\right]^{1/2}\left[2\Delta\mathbb{E}\left(\int_{0}^T\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{V}}^2\d t\right)\right]^{1/2}\nonumber\\&\leq 2\Delta T^{\frac{r-3}{2(r-1)}}\left[\mathbb{E}\left(\int_{0}^T\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}^{r-3}\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d t\right)\right]^{\frac{1}{r-1}}\left[2\mathbb{E}\left(\int_{0}^T\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{V}}^2\d t\right)\right]^{1/2}\nonumber\\&\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},T,R}\Delta\left(1+\|\mathbf{x}\|_{\mathbb{H}}^2+\|\mathbf{y}\|_{\mathbb{H}}^2\right)^{\frac{r+1}{2(r-1)}}\nonumber\\&\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},T,R}\Delta\left(1+\|\mathbf{x}\|_{\mathbb{H}}^2+\|\mathbf{y}\|_{\mathbb{H}}^2\right),
\end{align}
since $\frac{r+1}{2(r-1)}\leq 1$, for all $r\geq 3$. Once again using H\"older's inequality, Fubini's Theorem and \eqref{5.5z}, we estimate $\mathbb{E}\left(\int_{\Delta}^T|I_4(t)|\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d t\right)$ as (see \eqref{4p31} also)
\begin{align}
& \mathbb{E}\left(\int_{\Delta}^T|I_4(t)|\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d t\right)\nonumber\\&\leq 2\beta\mathbb{E}\left(\int_{\Delta}^T\int_{t-\Delta}^t\|\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\widetilde\mathbb{L}^{r+1}}^r\|\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}_{t-\Delta}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\widetilde\mathbb{L}^{r+1}}\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d s\d t\right)\nonumber\\&\leq 2\beta\left[\Delta\mathbb{E}\left(\int_{0}^T\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d t\right)\right]^{\frac{r}{r+1}}\left[2^{r}\Delta\mathbb{E}\left(\int_0^T\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d t\right)\right]^{\frac{1}{r+1}}\nonumber\\&\leq C_{\mu,\alpha,\beta,\lambda_1,L_{\mathrm{G}},T}\Delta(1+\|\mathbf{x}\|_{\mathbb{H}}^2+\|\mathbf{y}\|_{\mathbb{H}}^2).
\end{align}
We estimate $\mathbb{E}\left(\int_{\Delta}^T|I_5(t)|\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d t\right)$ using the Assumption \ref{ass3.6} (A1), \eqref{5.5z} and \eqref{443} as (see \eqref{4p32} also)
\begin{align}
& \mathbb{E}\left(\int_{\Delta}^T|I_5(t)|\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d t\right)
\nonumber\\&\leq 2\left[\mathbb{E}\left(\int_{\Delta}^T\int_{t-\Delta}^t\|\mathrm{F}(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}},\mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}}_s)\|_{\mathbb{H}}^2\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d s\d t\right)\right]^{1/2}\nonumber\\&\quad\times\left[\mathbb{E}\left(\int_{\Delta}^T\int_{t-\Delta}^t\|\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}_{t-\Delta}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}^2\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d s\d t\right)\right]^{1/2}\nonumber\\&\leq C\left[\Delta\mathbb{E}\left(\int_{0}^T(1+\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}^2+\|\mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}}_t\|_{\mathbb{H}}^2)\d t\right)\right]^{1/2}\left[2\Delta\mathbb{E}\left(\int_{0}^T\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}^2\d t\right)\right]^{1/2}\nonumber\\&\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},T}\Delta(1+\|\mathbf{x}\|_{\mathbb{H}}^2+\|\mathbf{y}\|_{\mathbb{H}}^2).
\end{align}
The term $\mathbb{E}\left(\int_{\Delta}^T|I_6(t)|\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d t\right)$ can be estimated using the Assumption \ref{ass3.6} (A1) and \eqref{5.5z} as (see \eqref{433} also)
\begin{align}
&\mathbb{E}\left(\int_{\Delta}^T|I_6(t)|\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d t\right)\nonumber\\&\leq2\left[\mathbb{E}\left(\int_{\Delta}^T\int_{t-\Delta}^t\|\sigma_1(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})\mathrm{Q}_1^{1/2}\|_{\mathcal{L}_2}^2\|h^{\varepsilon}_s\|_{\mathbb{H}}^2\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d s\d t\right)\right]^{1/2}\nonumber\\&\quad\times\left[\mathbb{E}\left(\int_{\Delta}^T\int_{t-\Delta}^t\|\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}_{t-\Delta}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}^2\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d s\d t\right)\right]^{1/2}\nonumber\\&\leq C\left[\Delta\mathbb{E}\left(\int_{0}^T(1+\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}^2)\|h^{\varepsilon}_t\|_{\mathbb{H}}^2\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d t\right)\right]^{1/2}\left[2\Delta\mathbb{E}\left(\int_{0}^T\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}^2\d t\right)\right]^{1/2}\nonumber\\&\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},M,T,R}\Delta(1+\|\mathbf{x}\|_{\mathbb{H}}^2+\|\mathbf{y}\|_{\mathbb{H}}^2).
\end{align}
Once again using the Assumption \ref{ass3.6} (A1) and \eqref{5.5z}, we estimate $\mathbb{E}\left(\int_{\Delta}^T|I_7(t)|\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d t\right)$ as
\begin{align}
&\mathbb{E}\left(\int_{\Delta}^T|I_7(t)|\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d t\right)\nonumber\\&\leq C\varepsilon\mathbb{E}\left(\int_{\Delta}^T\int_{t-\Delta}^t(1+\|\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}^2)\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d s\d t\right)\leq C\varepsilon T\Delta \mathbb{E}\left[1+\sup_{t\in[0,T]}\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}^2\right]\nonumber\\&\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},T}\Delta(1+\|\mathbf{x}\|_{\mathbb{H}}^2+\|\mathbf{y}\|_{\mathbb{H}}^2),
\end{align}
since $\varepsilon\in(0,1)$. Finally, using the Burkholder-Davis-Gundy inequality, the Assumption \ref{ass3.6} (A1), Fubini's theorem and \eqref{5.5z}, we estimate $\mathbb{E}\left(\int_{\Delta}^T|I_8(t)|\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d t\right)$ as
\begin{align}\label{333}
& \mathbb{E}\left(\int_{\Delta}^T|I_8(t)|\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d t\right)\nonumber\\&\leq C\int_{\Delta}^T\mathbb{E}\left[\left(\int_{t-\Delta}^t\|\sigma_1(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})\mathrm{Q}_1^{1/2}\|_{\mathcal{L}_2}^2\|\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}_{t-\Delta}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}^2\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d s\right)^{1/2}\right]\d t\nonumber\\&\leq CT^{1/2}\left[\mathbb{E}\left(\int_{\Delta}^T\int_{t-\Delta}^{t}\left(1+\|\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}^2\right)\|\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}_{t-\Delta}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}^2\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d s\d t\right)\right]^{1/2}\nonumber\\&\leq C_RT\Delta^{1/2}\left[\mathbb{E}\left(1+\sup_{t\in[0,T]}\|\mathbf{X}_t^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}^2\right)\right]^{1/2}\nonumber\\&\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},T,R}\Delta^{1/2}(1+\|\mathbf{x}\|_{\mathbb{H}}^2+\|\mathbf{y}\|_{\mathbb{H}}^2).
\end{align}
Combining \eqref{327}-\eqref{333}, we deduce that
\begin{align}\label{334}
\mathbb{E}\left[\int_{\Delta}^T\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{t-\Delta}\|_{\mathbb{H}}^2\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d t\right]\leq C_{\mu,\alpha,\beta,\lambda_1,L_{\mathrm{G}},M,T,R}\Delta^{1/2}(1+\|\mathbf{x}\|_{\mathbb{H}}^{2}+\|\mathbf{y}\|_{\mathbb{H}}^{2}).
\end{align}
A similar argument leads to
\begin{align}\label{335}
\mathbb{E}\left[\int_{\Delta}^T\|\mathbf{X}_{t(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{t-\Delta}\|_{\mathbb{H}}^2\chi_{\{t\leq\tau^{\varepsilon}_R\}}\d t\right]\leq C_{\mu,\alpha,\beta,\lambda_1,L_{\mathrm{G}},M,T,R}\Delta^{1/2}(1+\|\mathbf{x}\|_{\mathbb{H}}^{2}+\|\mathbf{y}\|_{\mathbb{H}}^{2}).
\end{align}
Combining \eqref{3.25}, \eqref{334} and \eqref{335}, we obtain the required result \eqref{3.24}.
\end{proof}
\subsection{Estimates of the auxiliary process $\widehat{\mathbf{Y}}^{\varepsilon,\delta}_t$} We use the method proposed by Khasminskii in \cite{RZK} to obtain estimates for an auxiliary process. We introduce the auxiliary process $\widehat{\mathbf{Y}}^{\varepsilon,\delta}_t\in\mathbb{H}$ (see \eqref{3.37} below) and divide the interval $[0,T]$ into subintervals of size $\Delta$, where $\Delta$ is a fixed positive number which depends on $\delta$ and will be chosen later. Let us construct the process $\widehat{\mathbf{Y}}^{\varepsilon,\delta}_t$ with the initial value $\widehat{\mathbf{Y}}^{\varepsilon,\delta}_0=\mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}}_0=\mathbf{y}$, and, for any integer $k\geq 0$ and $t\in[k\Delta,\min\{(k+1)\Delta,T\}]$, as
\begin{align}\label{3.37}
\widehat{\mathbf{Y}}^{\varepsilon,\delta}_t&=\widehat{\mathbf{Y}}^{\varepsilon,\delta}_{k\Delta}-\frac{\mu}{\delta}\int_{k\Delta}^t\mathrm{A}\widehat{\mathbf{Y}}^{\varepsilon,\delta}_s\d s-\frac{\alpha}{\delta}\int_{k\Delta}^t\widehat{\mathbf{Y}}^{\varepsilon,\delta}_s\d s-\frac{\beta}{\delta}\int_{k\Delta}^t\mathcal{C}(\widehat{\mathbf{Y}}^{\varepsilon,\delta}_s)\d s+\frac{1}{\delta}\int_{k\Delta}^t\mathrm{G}(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{k\Delta},\widehat{\mathbf{Y}}^{\varepsilon,\delta}_s)\d s\nonumber\\&\quad+\frac{1}{\sqrt{\delta}}\int_{k\Delta}^t\sigma_2(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{k\Delta},\widehat{\mathbf{Y}}^{\varepsilon,\delta}_s)\mathrm{Q}_2^{1/2}\d\mathrm{W}_s, \ \mathbb{P}\text{-a.s.},
\end{align}
which is equivalent to
\begin{equation}\label{338}
\left\{
\begin{aligned}
\d\widehat{\mathbf{Y}}^{\varepsilon,\delta}_t&=-\frac{1}{\delta}\left[\mu\mathrm{A}\widehat{\mathbf{Y}}^{\varepsilon,\delta}_t+\alpha\widehat{\mathbf{Y}}^{\varepsilon,\delta}_t+\beta\mathcal{C}(\widehat{\mathbf{Y}}^{\varepsilon,\delta}_t)-\mathrm{G}(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{t(\Delta)},\widehat{\mathbf{Y}}^{\varepsilon,\delta}_t)\right]\d t\\&\quad+\frac{1}{\sqrt{\delta}}\sigma_2(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{t(\Delta)},\widehat{\mathbf{Y}}^{\varepsilon,\delta}_t)\mathrm{Q}_2^{1/2}\d\mathrm{W}_t,\\
\widehat{\mathbf{Y}}^{\varepsilon,\delta}_0&=\mathbf{y}.
\end{aligned}\right.
\end{equation}
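Note that, on each subinterval $[k\Delta,\min\{(k+1)\Delta,T\}]$, the auxiliary fast process $\widehat{\mathbf{Y}}^{\varepsilon,\delta}_t$ depends on the slow motion only through the frozen value $\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{k\Delta}$, and, in contrast to \eqref{442}, the control $h^{\varepsilon}$ does not appear in \eqref{338}.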
\iffalse
We define the process $\widehat\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}$ as
\begin{align}\label{339}
\widehat{\mathbf{X}}^{\varepsilon}_t&=\mathbf{x}-\mu\int_0^t\mathrm{A} \widehat\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}\d s-\int_0^t\mathrm{B}(\widehat\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})\d s-\beta\int_0^t\mathcal{C}(\widehat\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})\d s+\int_0^tf(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{s(\Delta)},\widehat\mathbf{Y}_{s}^{\varepsilon,\delta})\d s\nonumber\\&\quad+\int_0^t\sigma_1(\widehat\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})\d\mathrm{W}_s^{\mathrm{Q}_1}, \ \mathbb{P}\text{-a.s.},
\end{align}
for $t\in[0,T]$. Note that on each interval, the fast component $\widehat{\mathbf{Y}}^{\varepsilon,\delta}_t$ does not depend on the slow component $\widehat{\mathbf{X}}^{\varepsilon}_t$, but only on the value of the process $\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}$ at the first point of the interval.
\fi
The following energy estimate satisfied by $\widehat{\mathbf{Y}}^{\varepsilon,\delta}_t$ can be proved in the same way as in Theorem \ref{thm5.10} (see also Lemma 4.2, \cite{MTM11}).
\begin{lemma}\label{lem3.9}
For any $\mathbf{x},\mathbf{y}\in\mathbb{H}$, $T>0$ and $\varepsilon,\delta\in(0,1)$, there exists a constant $C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},M,T}>0$ such that the strong solution $\widehat{\mathbf{Y}}^{\varepsilon,\delta}_{t}$ to the system \eqref{338} satisfies:
\iffalse
\begin{align}\label{340}
& \mathbb{E}\left[\sup_{t\in[0,T]}\|\widehat\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}^{2}+2\mu\int_0^T\|\widehat\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{V}}^2\d t+2\beta \int_0^T\|\widehat\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d t\right]\nonumber\\&\leq C_{T}\left(1+\|\mathbf{x}\|_{\mathbb{H}}^{2}+\|\mathbf{y}\|_{\mathbb{H}}^{2}\right),
\end{align}
and
\fi
\begin{align}\label{341}
\sup_{t\in[0,T]} \mathbb{E}\left[\|\widehat\mathbf{Y}^{\varepsilon,\delta}_t\|_{\mathbb{H}}^{2}\right]\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},M,T}\left(1+\|\mathbf{x}\|_{\mathbb{H}}^{2}+\|\mathbf{y}\|_{\mathbb{H}}^{2}\right).
\end{align}
\end{lemma}
Our next aim is to establish an estimate on the difference between the processes $\mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}}_t$ and $\widehat\mathbf{Y}^{\varepsilon,\delta}_t$.
\begin{lemma}\label{lem3.10}
For any $\mathbf{x},\mathbf{y}\in\mathbb{H}$, $T>0$ and $\varepsilon,\delta\in(0,1)$, there exists a constant $C_{\mu,\alpha,\beta,\lambda_1,L_{\mathrm{G}},M,T}>0$ such that
\begin{align}\label{3.42}
\mathbb{E}\left(\int_0^{T}\|\mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}}_t-\widehat\mathbf{Y}^{\varepsilon,\delta}_t\|_{\mathbb{H}}^2\d t\right)\leq C_{\mu,\alpha,\beta,\lambda_1,L_{\mathrm{G}},M,T}(1+\|\mathbf{x}\|_{\mathbb{H}}^2+\|\mathbf{y}\|_{\mathbb{H}}^2)\left[\left(\frac{\delta}{\varepsilon}\right)+\Delta^{1/2}\right].
\end{align}
\end{lemma}
\begin{proof}
Let us define $\mathbf{U}^{\varepsilon}_t:=\mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}}_t-\widehat\mathbf{Y}^{\varepsilon,\delta}_t$. Then $\mathbf{U}^{\varepsilon}_{t}$ satisfies the following It\^o stochastic differential:
\begin{equation}
\left\{
\begin{aligned}
\d\mathbf{U}^{\varepsilon}_t&=-\frac{1}{\delta}\left[\mu\mathrm{A}\mathbf{U}^{\varepsilon}_t+\alpha\mathbf{U}^{\varepsilon}_t+\beta(\mathcal{C}(\mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}}_t)-\mathcal{C}(\widehat\mathbf{Y}^{\varepsilon,\delta}_t))\right]\d t\\&\quad+\frac{1}{\delta}\left[(\mathrm{G}(\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}},\mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}}_t)-\mathrm{G}(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{t(\Delta)},\widehat{\mathbf{Y}}^{\varepsilon,\delta}_t))\right]\d t\\&\quad+\frac{1}{\sqrt{\delta\varepsilon}}\sigma_2(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_t,\mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}}_t)\mathrm{Q}_2^{1/2}h_t^{\varepsilon}\d t \\&\quad+\frac{1}{\sqrt{\delta}}\left[\sigma_2(\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}},{\mathbf{Y}}^{\varepsilon,\delta,h^{\varepsilon}}_t)-\sigma_2(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{t(\Delta)},\widehat{\mathbf{Y}}^{\varepsilon,\delta}_t)\right]\mathrm{Q}_2^{1/2}\d\mathrm{W}_t,\\
\mathbf{U}^{\varepsilon}_0&=\mathbf{0}.
\end{aligned}
\right.
\end{equation}
An application of the infinite dimensional It\^o formula to the process $\|\mathbf{U}^{\varepsilon}_{t}\|_{\mathbb{H}}^2$ yields
\begin{align}\label{344}
\|\mathbf{U}^{\varepsilon}_t\|_{\mathbb{H}}^2&=-\frac{2\mu}{\delta}\int_0^t\|\mathbf{U}^{\varepsilon}_s\|_{\mathbb{V}}^2\d s-\frac{2\alpha}{\delta}\int_0^t\|\mathbf{U}^{\varepsilon}_s\|_{\mathbb{H}}^2\d s-\frac{2\beta}{\delta}\int_0^t\langle\mathcal{C}(\mathbf{Y}_{s}^{\varepsilon,\delta,h^{\varepsilon}})-\mathcal{C}(\widehat\mathbf{Y}_{s}^{\varepsilon,\delta}),\mathbf{U}^{\varepsilon}_s\rangle\d s\nonumber\\&\quad+\frac{2}{\delta}\int_0^t(\mathrm{G}(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}},{\mathbf{Y}}^{\varepsilon,\delta,h^{\varepsilon}}_s)-\mathrm{G}(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{s(\Delta)},\widehat{\mathbf{Y}}^{\varepsilon,\delta}_s),\mathbf{U}^{\varepsilon}_s)\d s\nonumber\\&\quad+\frac{2}{\sqrt{\delta\varepsilon}}\int_0^t(\sigma_2(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_s,\mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}}_s)\mathrm{Q}_2^{1/2}h_s^{\varepsilon},\mathbf{U}^{\varepsilon}_s)\d s\nonumber\\&\quad+\frac{2}{\sqrt{\delta}}\int_0^t([\sigma_2(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}},{\mathbf{Y}}^{\varepsilon,\delta,h^{\varepsilon}}_s)-\sigma_2(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{s(\Delta)},\widehat{\mathbf{Y}}^{\varepsilon,\delta}_s)]\mathrm{Q}_2^{1/2}\d\mathrm{W}_s,\mathbf{U}^{\varepsilon}_s)\nonumber\\&\quad+\frac{1}{\delta}\int_0^t\|[\sigma_2(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}},{\mathbf{Y}}^{\varepsilon,\delta,h^{\varepsilon}}_s)-\sigma_2(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{s(\Delta)},\widehat{\mathbf{Y}}^{\varepsilon,\delta}_s)]\mathrm{Q}_2^{1/2}\|_{\mathcal{L}_2}^2\d s, \ \mathbb{P}\text{-a.s.},
\end{align}
for all $t\in[0,T]$. Taking expectation in \eqref{344}, we obtain
\begin{align}
\mathbb{E}\left[ \|\mathbf{U}^{\varepsilon}_t\|_{\mathbb{H}}^2\right]&=-\frac{2\mu}{\delta}\int_0^t\mathbb{E}\left[\|\mathbf{U}^{\varepsilon}_s\|_{\mathbb{V}}^2\right]\d s-\frac{2\alpha}{\delta}\int_0^t\mathbb{E}\left[\|\mathbf{U}^{\varepsilon}_s\|_{\mathbb{H}}^2\right]\d s\nonumber\\&\quad-\frac{2\beta}{\delta}\mathbb{E}\left[\int_0^t\langle\mathcal{C}(\mathbf{Y}_{s}^{\varepsilon,\delta,h^{\varepsilon}})-\mathcal{C}(\widehat\mathbf{Y}_{s}^{\varepsilon,\delta}),\mathbf{U}^{\varepsilon}_s\rangle\d s\right]\nonumber\\&\quad+\frac{2}{\delta}\int_0^t\mathbb{E}\left[(\mathrm{G}(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}},{\mathbf{Y}}^{\varepsilon,\delta,h^{\varepsilon}}_s)-\mathrm{G}(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{s(\Delta)},\widehat{\mathbf{Y}}^{\varepsilon,\delta}_s),\mathbf{U}^{\varepsilon}_s)\right]\d s\nonumber\\&\quad+\frac{2}{\sqrt{\delta\varepsilon}}\int_0^t\mathbb{E}\left[(\sigma_2(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_s,\mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}}_s)\mathrm{Q}_2^{1/2}h_s^{\varepsilon},\mathbf{U}^{\varepsilon}_s)\right]\d s\nonumber\\&\quad+\frac{1}{\delta}\int_0^t\mathbb{E}\left[\|[\sigma_2(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}},{\mathbf{Y}}^{\varepsilon,\delta,h^{\varepsilon}}_s)-\sigma_2(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{s(\Delta)},\widehat{\mathbf{Y}}^{\varepsilon,\delta}_s)]\mathrm{Q}_2^{1/2}\|_{\mathcal{L}_2}^2\right]\d s.
\end{align}
Thus, it is immediate that
\begin{align}\label{3.46}
\frac{\d}{\d t} \mathbb{E}\left[ \|\mathbf{U}^{\varepsilon}_t\|_{\mathbb{H}}^2\right]&=-\frac{2\mu}{\delta}\mathbb{E}\left[\|\mathbf{U}^{\varepsilon}_t\|_{\mathbb{V}}^2\right]-\frac{2\alpha}{\delta}\mathbb{E}\left[\|\mathbf{U}^{\varepsilon}_t\|_{\mathbb{H}}^2\right]-\frac{2\beta}{\delta}\mathbb{E}\left[\langle\mathcal{C}(\mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}}_t)-\mathcal{C}(\widehat\mathbf{Y}^{\varepsilon,\delta}_t),\mathbf{U}^{\varepsilon}_t\rangle\right]\nonumber\\&\quad+\frac{2}{\delta}\mathbb{E}\left[(\mathrm{G}(\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}},{\mathbf{Y}}^{\varepsilon,\delta,h^{\varepsilon}}_t)-\mathrm{G}(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{t(\Delta)},\widehat{\mathbf{Y}}^{\varepsilon,\delta}_t),\mathbf{U}^{\varepsilon}_t)\right]\nonumber\\&\quad+\frac{2}{\sqrt{\delta\varepsilon}}\mathbb{E}\left[(\sigma_2(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_t,\mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}}_t)\mathrm{Q}_2^{1/2}h_t^{\varepsilon},\mathbf{U}^{\varepsilon}_t)\right]\nonumber\\&\quad+\frac{1}{\delta}\mathbb{E}\left[\|[\sigma_2(\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}},{\mathbf{Y}}^{\varepsilon,\delta,h^{\varepsilon}}_t)-\sigma_2(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{t(\Delta)},\widehat{\mathbf{Y}}^{\varepsilon,\delta}_t)]\mathrm{Q}_2^{1/2}\|_{\mathcal{L}_2}^2\right],
\end{align}
for a.e. $t\in[0,T]$. Applying \eqref{214}, we find
\begin{align}\label{3p47}
&-\frac{2\beta}{\delta}\langle\mathcal{C}(\mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}}_t)-\mathcal{C}(\widehat\mathbf{Y}^{\varepsilon,\delta}_t),\mathbf{U}^{\varepsilon}_t\rangle\leq-\frac{\beta}{2^{r-2}\delta}\|\mathbf{U}^{\varepsilon}_t\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}.
\end{align}
Using the Assumption \ref{ass3.6} (A1), we get
\begin{align}\label{3.47}
&\frac{2}{\delta}(\mathrm{G}(\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}},\mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}}_t)-\mathrm{G}(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{t(\Delta)},\widehat{\mathbf{Y}}^{\varepsilon,\delta}_t),\mathbf{U}^{\varepsilon}_t)\nonumber\\&\leq \frac{2}{\delta}\|\mathrm{G}(\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}},\mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}}_t)-\mathrm{G}(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{t(\Delta)},\widehat{\mathbf{Y}}^{\varepsilon,\delta}_t)\|_{\mathbb{H}}\|\mathbf{U}^{\varepsilon}_t\|_{\mathbb{H}}\nonumber\\&\leq \frac{C}{\delta}\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{t(\Delta)}\|_{\mathbb{H}}\|\mathbf{U}^{\varepsilon}_t\|_{\mathbb{H}}+\frac{2L_{\mathrm{G}}}{\delta}\|\mathbf{U}^{\varepsilon}_t\|_{\mathbb{H}}^2\nonumber\\&\leq\left(\frac{\mu\lambda_1}{2\delta}+\frac{2L_{\mathrm{G}}}{\delta}\right)\|\mathbf{U}^{\varepsilon}_t\|_{\mathbb{H}}^2+\frac{C}{\mu\lambda_1\delta}\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{t(\Delta)}\|_{\mathbb{H}}^2.
\end{align}
Making use of the Assumption \ref{ass3.6} (A2), H\"older's and Young's inequalities, we have
\begin{align}
&\frac{2}{\sqrt{\delta\varepsilon}}(\sigma_2(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_t,\mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}}_t)\mathrm{Q}_2^{1/2}h_t^{\varepsilon},\mathbf{U}^{\varepsilon}_t)\nonumber\\&\leq\frac{2}{\sqrt{\delta\varepsilon}}\|\sigma_2(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_t,\mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}}_t)\mathrm{Q}_2^{1/2}\|_{\mathcal{L}_2}\|h_t^{\varepsilon}\|_{\mathbb{H}}\|\mathbf{U}^{\varepsilon}_t\|_{\mathbb{H}}\nonumber\\&\leq \frac{\mu\lambda_1}{2\delta}\|\mathbf{U}^{\varepsilon}_t\|_{\mathbb{H}}^2+\frac{C}{\mu\lambda_1\varepsilon}\|\sigma_2(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_t,\mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}}_t)\mathrm{Q}_2^{1/2}\|_{\mathcal{L}_2}^2\|h_t^{\varepsilon}\|_{\mathbb{H}}^2\nonumber\\&\leq \frac{\mu\lambda_1}{2\delta}\|\mathbf{U}^{\varepsilon}_t\|_{\mathbb{H}}^2+\frac{C}{\mu\lambda_1\varepsilon}(1+\|\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_t\|_{\mathbb{H}}^2)\|h_t^{\varepsilon}\|_{\mathbb{H}}^2.
\end{align}
Similarly, using the Assumption \ref{ass3.6} (A1), we obtain
\begin{align}\label{3.48}
\frac{1}{\delta} \|[\sigma_2(\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}},{\mathbf{Y}}^{\varepsilon,\delta,h^{\varepsilon}}_t)-\sigma_2(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{t(\Delta)},\widehat{\mathbf{Y}}^{\varepsilon,\delta}_t)]\mathrm{Q}_2^{1/2}\|_{\mathcal{L}_2}^2&\leq \frac{C}{\delta}\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{t(\Delta)}\|_{\mathbb{H}}^2+\frac{2L_{\sigma_2}^2}{\delta}\|\mathbf{U}^{\varepsilon}_t\|_{\mathbb{H}}^2.
\end{align}
Combining \eqref{3p47}-\eqref{3.48} and substituting them in \eqref{3.46}, we deduce that
\begin{align}\label{3.49}
\frac{\d}{\d t} \mathbb{E}\left[ \|\mathbf{U}^{\varepsilon}_t\|_{\mathbb{H}}^2\right]&\leq-\frac{1}{\delta}(\mu\lambda_1+2\alpha-2L_{\mathrm{G}}-2L_{\sigma_2}^2)\mathbb{E}\left[\|\mathbf{U}^{\varepsilon}_t\|_{\mathbb{H}}^2\right]+\frac{C}{\mu\lambda_1\varepsilon}\mathbb{E}\left[(1+\|\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_t\|_{\mathbb{H}}^2)\|h_t^{\varepsilon}\|_{\mathbb{H}}^2\right]\nonumber\\&\quad+\frac{C}{\delta}\left(1+\frac{1}{\mu\lambda_1}\right)\mathbb{E}\left[\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{t(\Delta)}\|_{\mathbb{H}}^2\right].
\end{align}
Using the Assumption \ref{ass3.6} (A3) and the variation of constants formula, we infer from \eqref{3.49} that
\begin{align}
\mathbb{E}\left[\|\mathbf{U}^{\varepsilon}_t\|_{\mathbb{H}}^2\right]&\leq \frac{C}{\mu\lambda_1\varepsilon}\int_0^te^{-\frac{\kappa}{\delta}(t-s)}\mathbb{E}\left[(1+\|\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_s\|_{\mathbb{H}}^2)\|h_s^{\varepsilon}\|_{\mathbb{H}}^2\right]\d s \nonumber\\&\quad+\frac{C}{\delta}\left(1+\frac{1}{\mu\lambda_1}\right)\int_0^te^{-\frac{\kappa}{\delta}(t-s)}\mathbb{E}\left[\|\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{s(\Delta)}\|_{\mathbb{H}}^2\right]\d s,
\end{align}
where $\kappa=\mu\lambda_1+2\alpha-2L_{\mathrm{G}}-2L_{\sigma_2}^2>0$. Applying Fubini's theorem, for any $T>0$, we have
\begin{align}
\mathbb{E}\left[\int_0^{T}\|\mathbf{U}^{\varepsilon}_t\|_{\mathbb{H}}^2\d t\right]&\leq \frac{C}{\mu\lambda_1\varepsilon}\int_0^{T}\int_0^te^{-\frac{\kappa}{\delta}(t-s)}\mathbb{E}\left[(1+\|\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_s\|_{\mathbb{H}}^2)\|h_s^{\varepsilon}\|_{\mathbb{H}}^2\right]\d s\d t\nonumber\\&\quad+ \frac{C}{\delta}\left(1+\frac{1}{\mu\lambda_1}\right)\int_0^{T}\int_0^te^{-\frac{\kappa}{\delta}(t-s)}\mathbb{E}\left[\|\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{s(\Delta)}\|_{\mathbb{H}}^2\right]\d s\d t\nonumber\\&=\frac{C}{\mu\lambda_1\varepsilon}\mathbb{E}\left[\int_0^{T}(1+\|\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_s\|_{\mathbb{H}}^2)\|h_s^{\varepsilon}\|_{\mathbb{H}}^2\int_s^Te^{-\frac{\kappa}{\delta}(t-s)}\d t\d s\right]\nonumber\\&\quad+\frac{C}{\delta}\left(1+\frac{1}{\mu\lambda_1}\right)\mathbb{E}\left[\int_0^T\|\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{s(\Delta)}\|_{\mathbb{H}}^2\left(\int_s^Te^{-\frac{\kappa}{\delta}(t-s)}\d t\right)\d s\right]\nonumber\\&\leq\frac{C}{\mu\lambda_1\kappa}\left(\frac{\delta}{\varepsilon}\right)\mathbb{E}\left[\sup_{t\in[0,T]}(1+\|\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_t\|_{\mathbb{H}}^2)\int_0^{T}\|h_t^{\varepsilon}\|_{\mathbb{H}}^2\d t\right]\nonumber\\&\quad+\frac{C}{\kappa}\left(1+\frac{1}{\mu\lambda_1}\right)\mathbb{E}\left[\int_0^T\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{t(\Delta)}\|_{\mathbb{H}}^2\d t\right]\nonumber\\&\leq C_{\mu,\alpha,\beta,\lambda_1,L_{\mathrm{G}},M,T}(1+\|\mathbf{x}\|_{\mathbb{H}}^2+\|\mathbf{y}\|_{\mathbb{H}}^2)\left[\left(\frac{\delta}{\varepsilon}\right)+\Delta^{1/2}\right],
\end{align}
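where, in the penultimate inequality, we used the elementary bound (recorded here for the reader's convenience)
\begin{align*}
\int_s^Te^{-\frac{\kappa}{\delta}(t-s)}\d t=\frac{\delta}{\kappa}\left(1-e^{-\frac{\kappa}{\delta}(T-s)}\right)\leq\frac{\delta}{\kappa},
\end{align*}
so that the singular prefactors $\frac{C}{\mu\lambda_1\varepsilon}$ and $\frac{C}{\delta}\left(1+\frac{1}{\mu\lambda_1}\right)$ turn into $\frac{C}{\mu\lambda_1\kappa}\left(\frac{\delta}{\varepsilon}\right)$ and $\frac{C}{\kappa}\left(1+\frac{1}{\mu\lambda_1}\right)$, respectively.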
In the final step we used \eqref{5.5z} and Lemma \ref{lem3.8} (see \eqref{3.24}). This completes the proof.
\end{proof}
\begin{remark}
It can easily be seen from the stochastic differential equations corresponding to $\mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}}_t$ and $\widehat\mathbf{Y}^{\varepsilon,\delta}_t$ (see \eqref{442} and \eqref{338}) that the control term involving $h^{\varepsilon}$ present in the equation for $\mathbf{Y}^{\varepsilon,\delta,h^{\varepsilon}}_t$ is absent from the equation for $\widehat\mathbf{Y}^{\varepsilon,\delta}_t$. From Lemma \ref{lem3.10}, we infer that this additional control term has no effect as $\varepsilon\to 0$, owing to Assumption \ref{ass3.6} (A4).
\end{remark}
\iffalse
Let us now establish an estimate on the difference between the processes $\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}$ and $\widehat\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}$. For $n=2$ and $r\in[1,3]$, we need to construct a stopping time to obtain an estimate on $\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}-\widehat\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}^2$. For fixed $\varepsilon\in(0,1)$ and $R>0$, we define
\begin{align}
\tau^{\varepsilon}_R:=\inf_{t\geq 0}\left\{t:\int_0^t\|\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{V}}^2\d s\geq R\right\}.
\end{align}
We see in the next Lemma that such a stopping time is not needed for $r\in(3,\infty)$, for $n=2$ and $r\in[3,\infty)$, for $n=3$ ($2\beta\mu\geq 1,$ for $r=3$).
\begin{lemma}\label{lem3.11}
For any $\mathbf{x},\mathbf{y}\in\mathbb{H}$, $T>0$ and $\varepsilon\in(0,1)$, the following estimate holds:
\begin{align}\label{3.53}
\mathbb{E}\left[\sup_{t\in[0,T\wedge\tau_R^{\varepsilon}]}\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}-\widehat{\mathbf{X}}^{\varepsilon}_t\|_{\mathbb{H}}^2\right]\leq C_{R,T}\left(1+\|\mathbf{x}\|_{\mathbb{H}}^3+\|\mathbf{y}\|_{\mathbb{H}}^3\right)\delta^{1/2},
\end{align}
for $n=2$ and $r\in[1,3]$, and
\begin{align}\label{3.54}
\mathbb{E}\left[\sup_{t\in[0,T]}\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}-\widehat{\mathbf{X}}^{\varepsilon}_t\|_{\mathbb{H}}^2\right]\leq C_{T}\left(1+\|\mathbf{x}\|_{\mathbb{H}}^2+\|\mathbf{y}\|_{\mathbb{H}}^2\right)\delta^{1/2},
\end{align}
for $n=2$, $r\in(3,\infty)$ and $n=3$, $r\in[3,\infty)$ ($2\beta\mu\geq 1,$ for $r=3$).
\end{lemma}
\begin{proof}
Let us define $\mathbf{Z}^{\varepsilon,\delta}_{t}:=\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}-\widehat\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}$. Then, $\mathbf{Z}^{\varepsilon,\delta}_{t}$ satisfies the following It\^o stochastic differential:
\begin{equation}
\left\{
\begin{aligned}
\d\mathbf{Z}^{\varepsilon,\delta}_{t}&=-[\mu\mathrm{A}\mathbf{Z}^{\varepsilon,\delta}_{t}+(\mathrm{B}(\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}})-\mathrm{B}(\widehat\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}))+\beta(\mathcal{C}(\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}})-\mathcal{C}(\widehat\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}))]\d t\\&\quad+[f(\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}},\mathbf{Y}_{t}^{\varepsilon,\delta,h^{\varepsilon}})-f(\mathbf{X}_{t(\Delta)}^{\varepsilon},\widehat\mathbf{Y}_{t}^{\varepsilon,\delta,h^{\varepsilon}})]\d t+[\sigma_1(\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}})-\sigma_1(\widehat\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}})]\d\mathrm{W}_t^{\mathrm{Q}_1},\\
\mathbf{Z}^{\varepsilon,\delta}_{0}&=\mathbf{0}.
\end{aligned}\right.
\end{equation}
We first consider the case $n=2$ and $r\in[1,3]$. An application of the infinite dimensional It\^o's formula to the process $\|\mathbf{Z}^{\varepsilon,\delta}_{t}\|_{\mathbb{H}}^2$ yields
\begin{align}\label{3.56}
\|\mathbf{Z}^{\varepsilon,\delta}_{t}\|_{\mathbb{H}}^2&=-2\mu\int_0^t\|\mathbf{Z}^{\varepsilon,\delta}_{s}\|_{\mathbb{V}}^2\d s-2\int_0^t\langle(\mathrm{B}(\mathbf{X}_{s}^{\varepsilon,\delta})-\mathrm{B}(\widehat\mathbf{X}_{s}^{\varepsilon,\delta})),\mathbf{Z}^{\varepsilon,\delta}_s\rangle\d s\nonumber\\&\quad-2\beta\int_0^t\langle\mathcal{C}(\mathbf{X}_{s}^{\varepsilon,\delta})-\mathcal{C}(\widehat\mathbf{X}_{s}^{\varepsilon,\delta}),\mathbf{Z}^{\varepsilon,\delta}_s\rangle\d s+2\int_0^t(f(\mathbf{X}_{s}^{\varepsilon,\delta},\mathbf{Y}_{s}^{\varepsilon,\delta})-f(\mathbf{X}_{s(\Delta)}^{\varepsilon},\widehat\mathbf{Y}_{s}^{\varepsilon,\delta}),\mathbf{Z}^{\varepsilon,\delta}_s)\d s\nonumber\\&\quad+\int_0^t\|\sigma_1(\mathbf{X}_{s}^{\varepsilon,\delta})-\sigma_1(\widehat\mathbf{X}_{s}^{\varepsilon,\delta})\|_{\mathcal{L}_{\mathrm{Q}_1}}^2\d s+2\int_0^t([\sigma_1(\mathbf{X}_{s}^{\varepsilon,\delta})-\sigma_1(\widehat\mathbf{X}_{s}^{\varepsilon,\delta})]\d\mathrm{W}_t^{\mathrm{Q}_1},\mathbf{Z}^{\varepsilon,\delta}_s), \ \mathbb{P}\text{-a.s.},
\end{align}
for all $t\in[0,T]$. Using H\"older's, Ladyzhenskaya's and Young's inequalities, we estimate $2|\langle(\mathrm{B}(\mathbf{X}_{s}^{\varepsilon,\delta})-\mathrm{B}(\widehat\mathbf{X}_{s}^{\varepsilon,\delta})),\mathbf{Z}^{\varepsilon,\delta}_s\rangle|$ as
\begin{align}\label{3.57}
2| \langle(\mathrm{B}(\mathbf{X}_{s}^{\varepsilon,\delta})-\mathrm{B}(\widehat\mathbf{X}_{s}^{\varepsilon,\delta})),\mathbf{Z}^{\varepsilon,\delta}_s\rangle|&= 2|\langle\mathrm{B}(\mathbf{Z}^{\varepsilon,\delta}_{s},\mathbf{X}_{s}^{\varepsilon,\delta}),\mathbf{Z}^{\varepsilon,\delta}_{s}\rangle |\leq 2\|\mathbf{X}_{s}^{\varepsilon,\delta}\|_{\mathbb{V}}\|\mathbf{Z}^{\varepsilon,\delta}_{s}\|_{\widetilde\mathbb{L}^4}^2\leq 2\sqrt{2}\|\mathbf{X}_{s}^{\varepsilon,\delta}\|_{\mathbb{V}}\|\mathbf{Z}^{\varepsilon,\delta}_{s}\|_{\mathbb{H}}\|\mathbf{Z}^{\varepsilon,\delta}_{s}\|_{\mathbb{V}}\nonumber\\&\leq\mu\|\mathbf{Z}^{\varepsilon,\delta}_{s}\|_{\mathbb{V}}^2+\frac{2}{\mu}\|\mathbf{X}_{s}^{\varepsilon}\|_{\mathbb{V}}^2\|\mathbf{Z}^{\varepsilon,\delta}_s\|_{\mathbb{H}}^2.
\end{align}
Using \eqref{2.23} and \eqref{a215}, we know that
\begin{align}
-2\beta\langle\mathcal{C}(\mathbf{X}_{s}^{\varepsilon,\delta})-\mathcal{C}(\widehat\mathbf{X}_{s}^{\varepsilon,\delta}),\mathbf{Z}^{\varepsilon,\delta}_s\rangle\leq-\frac{\beta}{2^{r-2}}\|\mathbf{Z}^{\varepsilon,\delta}_s\|_{\widetilde\mathbb{L}^{r+1}}^{r+1},
\end{align}
for $r\in[1,\infty)$. Using the Assumption \ref{ass3.6} (A1), we have
\begin{align}\label{3.59}
|(f(\mathbf{X}_{s}^{\varepsilon,\delta},\mathbf{Y}_{s}^{\varepsilon,\delta})-f(\mathbf{X}_{s(\Delta)}^{\varepsilon},\widehat\mathbf{Y}_{s}^{\varepsilon,\delta}),\mathbf{Z}^{\varepsilon,\delta}_s)|&\leq\|f(\mathbf{X}_{s}^{\varepsilon,\delta},\mathbf{Y}_{s}^{\varepsilon,\delta})-f(\mathbf{X}_{s(\Delta)}^{\varepsilon},\widehat\mathbf{Y}_{s}^{\varepsilon,\delta})\|_{\mathbb{H}}\|\mathbf{Z}^{\varepsilon,\delta}_s\|_{\mathbb{H}}\nonumber\\&\leq C\left(\|\mathbf{X}_{s}^{\varepsilon,\delta}-\mathbf{X}_{s(\Delta)}^{\varepsilon}\|_{\mathbb{H}}+\|\mathbf{Y}_{s}^{\varepsilon,\delta}-\widehat\mathbf{Y}_{s}^{\varepsilon,\delta}\|_{\mathbb{H}}\right)\|\mathbf{Z}^{\varepsilon,\delta}_s\|_{\mathbb{H}}\nonumber\\&\leq\frac{1}{2}\|\mathbf{Z}^{\varepsilon,\delta}_s\|_{\mathbb{H}}^2+C\|\mathbf{X}_{s}^{\varepsilon,\delta}-\mathbf{X}_{s(\Delta)}^{\varepsilon}\|_{\mathbb{H}}^2+C\|\mathbf{Y}_{s}^{\varepsilon,\delta}-\widehat\mathbf{Y}_{s}^{\varepsilon,\delta}\|_{\mathbb{H}}^2.
\end{align}
Once again an application of the Assumption \ref{ass3.6} (A1) yields
\begin{align}
\|\sigma_1(\mathbf{X}_{s}^{\varepsilon,\delta})-\sigma_1(\widehat\mathbf{X}_{s}^{\varepsilon,\delta})\|_{\mathcal{L}_{\mathrm{Q}_1}}^2\leq C\|\mathbf{Z}^{\varepsilon,\delta}_s\|_{\mathbb{H}}^2.
\end{align}
Combining \eqref{3.57}-\eqref{3.59}, using the resulting bounds in \eqref{3.56} and then taking the supremum over time from $0$ to $T\wedge\tau_R^{\varepsilon}$, we find
\begin{align}\label{3.61}
& \sup_{t\in[0,T\wedge\tau_R^{\varepsilon}]}\|\mathbf{Z}^{\varepsilon,\delta}_{t}\|_{\mathbb{H}}^2+\mu\int_0^{T\wedge\tau_R^{\varepsilon}}\|\mathbf{Z}^{\varepsilon,\delta}_s\|_{\mathbb{V}}^2\d s+\frac{\beta}{2^{r-2}}\int_0^{T\wedge\tau_R^{\varepsilon}}\|\mathbf{Z}^{\varepsilon,\delta}_{s}\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d s\nonumber\\&\leq \frac{2}{\mu}\int_0^{T\wedge\tau_R^{\varepsilon}}\|\mathbf{X}_{s}^{\varepsilon}\|_{\mathbb{V}}^2\|\mathbf{Z}^{\varepsilon,\delta}_s\|_{\mathbb{H}}^2\d s+C\int_0^{T\wedge\tau_R^{\varepsilon}}\|\mathbf{Z}^{\varepsilon,\delta}_s\|_{\mathbb{H}}^2\d s+C\int_0^{T\wedge\tau_R^{\varepsilon}}\|\mathbf{X}_{s}^{\varepsilon,\delta}-\mathbf{X}_{s(\Delta)}^{\varepsilon}\|_{\mathbb{H}}^2\d s\nonumber\\&\quad+C\int_0^{T\wedge\tau_R^{\varepsilon}}\|\mathbf{Y}_{s}^{\varepsilon,\delta}-\widehat\mathbf{Y}_{s}^{\varepsilon,\delta}\|_{\mathbb{H}}^2\d s+2\sup_{t\in[0,T\wedge\tau_R^{\varepsilon}]}\left|\int_0^t([\sigma_1(\mathbf{X}_{s}^{\varepsilon,\delta})-\sigma_1(\widehat\mathbf{X}_{s}^{\varepsilon,\delta})]\d\mathrm{W}_t^{\mathrm{Q}_1},\mathbf{Z}^{\varepsilon,\delta}_s)\right|.
\end{align}
An application of Gronwall's inequality in \eqref{3.61} yields
\begin{align}\label{3.62}
\sup_{t\in[0,T\wedge\tau_R^{\varepsilon}]}\|\mathbf{Z}^{\varepsilon,\delta}_{t}\|_{\mathbb{H}}^2&\leq \bigg\{C\int_0^{T\wedge\tau_R^{\varepsilon}}\|\mathbf{X}_{s}^{\varepsilon,\delta}-\mathbf{X}_{s(\Delta)}^{\varepsilon}\|_{\mathbb{H}}^2\d s+C\int_0^{T\wedge\tau_R^{\varepsilon}}\|\mathbf{Y}_{s}^{\varepsilon,\delta}-\widehat\mathbf{Y}_{s}^{\varepsilon,\delta}\|_{\mathbb{H}}^2\d s\nonumber\\&\qquad+2\sup_{t\in[0,T\wedge\tau_R^{\varepsilon}]}\left|\int_0^t([\sigma_1(\mathbf{X}_{s}^{\varepsilon,\delta})-\sigma_1(\widehat\mathbf{X}_{s}^{\varepsilon,\delta})]\d\mathrm{W}_t^{\mathrm{Q}_1},\mathbf{Z}^{\varepsilon,\delta}_s)\right|\bigg\}e^{CT}\nonumber\\&\quad\times \exp\left(\frac{2}{\mu}\int_0^{T\wedge\tau_R^{\varepsilon}}\|\mathbf{X}_{s}^{\varepsilon}\|_{\mathbb{V}}^2\d s\right)\nonumber\\&\leq C_{R,T}\bigg\{\int_0^{T\wedge\tau_R^{\varepsilon}}\|\mathbf{X}_{s}^{\varepsilon,\delta}-\mathbf{X}_{s(\Delta)}^{\varepsilon}\|_{\mathbb{H}}^2\d s+\int_0^{T\wedge\tau_R^{\varepsilon}}\|\mathbf{Y}_{s}^{\varepsilon,\delta}-\widehat\mathbf{Y}_{s}^{\varepsilon,\delta}\|_{\mathbb{H}}^2\d s\nonumber\\&\quad+2\sup_{t\in[0,T\wedge\tau_R^{\varepsilon}]}\left|\int_0^t([\sigma_1(\mathbf{X}_{s}^{\varepsilon,\delta})-\sigma_1(\widehat\mathbf{X}_{s}^{\varepsilon,\delta})]\d\mathrm{W}_t^{\mathrm{Q}_1},\mathbf{Z}^{\varepsilon,\delta}_s)\right|\bigg\}.
\end{align}
Using the Burkholder-Davis-Gundy, H\"older and Young inequalities, and the Assumption \ref{ass3.6} (A1), we estimate the final term from the right hand side of the inequality \eqref{3.61} as
\begin{align}\label{3.63}
&2\mathbb{E}\left[\sup_{t\in[0,T\wedge\tau_R^{\varepsilon}]}\left|\int_0^t([\sigma_1(\mathbf{X}_{s}^{\varepsilon,\delta})-\sigma_1(\widehat\mathbf{X}_{s}^{\varepsilon,\delta})]\d\mathrm{W}_t^{\mathrm{Q}_1},\mathbf{Z}^{\varepsilon,\delta}_s)\right|\right]\nonumber\\&\leq C\mathbb{E}\left[\int_0^{T\wedge\tau_R^{\varepsilon}}\|\sigma_1(\mathbf{X}_{s}^{\varepsilon,\delta})-\sigma_1(\widehat\mathbf{X}_{s}^{\varepsilon,\delta})\|_{\mathcal{L}_{\mathrm{Q}_1}}^2\|\mathbf{Z}^{\varepsilon,\delta}_s\|_{\mathbb{H}}^2\d s\right]^{1/2}\nonumber\\&\leq C\mathbb{E}\left[\sup_{t\in[0,T\wedge\tau_R^{\varepsilon}]}\|\mathbf{Z}^{\varepsilon,\delta}_s\|_{\mathbb{H}}\left(\int_0^{T\wedge\tau_R^{\varepsilon}}\|\sigma_1(\mathbf{X}_{s}^{\varepsilon,\delta})-\sigma_1(\widehat\mathbf{X}_{s}^{\varepsilon,\delta})\|_{\mathcal{L}_{\mathrm{Q}_1}}^2\d s\right)^{1/2}\right]\nonumber\\&\leq\frac{1}{2}\mathbb{E}\left[\sup_{t\in[0,T\wedge\tau_R^{\varepsilon}]}\|\mathbf{Z}^{\varepsilon,\delta}_s\|_{\mathbb{H}}^2\right]+C\mathbb{E}\left[\int_0^{T\wedge\tau_R^{\varepsilon}}\|\mathbf{Z}^{\varepsilon,\delta}_s\|_{\mathbb{H}}^2\d s\right].
\end{align}
Taking expectation in \eqref{3.63} and then using \eqref{3.62}, \eqref{3.53} and \eqref{3.42}, we get
\begin{align}
\mathbb{E}\left[\sup_{t\in[0,T\wedge\tau_R^{\varepsilon}]}\|\mathbf{Z}^{\varepsilon,\delta}_{t}\|_{\mathbb{H}}^2\right]&\leq C_{R,T}\bigg\{C\mathbb{E}\left[\int_0^{T}\|\mathbf{X}_{s}^{\varepsilon,\delta}-\mathbf{X}_{s(\Delta)}^{\varepsilon}\|_{\mathbb{H}}^2\d s\right]+C\mathbb{E}\left[\int_0^{T}\|\mathbf{Y}_{s}^{\varepsilon,\delta}-\widehat\mathbf{Y}_{s}^{\varepsilon,\delta}\|_{\mathbb{H}}^2\d s\right]\nonumber\\&\quad+C_{R,T}\mathbb{E}\left[\int_0^{T\wedge\tau_R^{\varepsilon}}\|\mathbf{Z}^{\varepsilon,\delta}_s\|_{\mathbb{H}}^2\d s\right]\bigg\}\nonumber\\&\leq C_{R,T}\left(1+\|\mathbf{x}\|_{\mathbb{H}}^3+\|\mathbf{y}\|_{\mathbb{H}}^3\right)\delta^{1/2}+C_{R,T}\int_0^{T}\mathbb{E}\left[\sup_{t\in[0,T\wedge\tau_R^{\varepsilon}]}\|\mathbf{Z}^{\varepsilon,\delta}_{t}\|_{\mathbb{H}}^2\right]\d t.
\end{align}
An application of Gronwall's inequality easily gives
\begin{align}\label{3.65}
\mathbb{E}\left[\sup_{t\in[0,T\wedge\tau_R^{\varepsilon}]}\|\mathbf{Z}^{\varepsilon,\delta}_{t}\|_{\mathbb{H}}^2\right]\leq C_{R,T}\left(1+\|\mathbf{x}\|_{\mathbb{H}}^3+\|\mathbf{y}\|_{\mathbb{H}}^3\right)\delta^{1/2},
\end{align}
which completes the proof for $n=2$ and $r\in[1,3]$.
Let us now discuss the case $n=3$ and $r\in(3,\infty)$. We just need to estimate the term $2|\langle(\mathrm{B}(\mathbf{X}_{s}^{\varepsilon,\delta})-\mathrm{B}(\widehat\mathbf{X}_{s}^{\varepsilon,\delta})),\mathbf{Z}^{\varepsilon,\delta}_s\rangle|$ only to get the required result.
From the estimate \eqref{2.23}, we easily have
\begin{align}\label{2.27}
-2 \beta \langle\mathcal{C}(\mathbf{X}_{s}^{\varepsilon,\delta})-\mathcal{C}(\widehat\mathbf{X}_{s}^{\varepsilon,\delta}),\mathbf{Z}^{\varepsilon,\delta}_{s}\rangle \leq- \beta\||\mathbf{X}_{s}^{\varepsilon,\delta}|^{\frac{r-1}{2}}\mathbf{Z}^{\varepsilon,\delta}_{s}\|_{\mathbb{H}}^2.
\end{align}
Using H\"older's and Young's inequalities, we estimate the term $2|\langle(\mathrm{B}(\mathbf{X}_{s}^{\varepsilon,\delta})-\mathrm{B}(\widehat\mathbf{X}_{s}^{\varepsilon,\delta})),\mathbf{Z}^{\varepsilon,\delta}_s\rangle|=2|\langle\mathrm{B}(\mathbf{Z}^{\varepsilon,\delta}_{s},\mathbf{X}_{s}^{\varepsilon,\delta}),\mathbf{Z}^{\varepsilon,\delta}_{s}\rangle |$ as
\begin{align}\label{2p28}
2|\langle\mathrm{B}(\mathbf{Z}^{\varepsilon,\delta}_{s},\mathbf{X}_{s}^{\varepsilon,\delta}),\mathbf{Z}^{\varepsilon,\delta}_{s}\rangle |&\leq 2\|\mathbf{Z}^{\varepsilon,\delta}_{s}\|_{\mathbb{V}}\|\mathbf{X}_{s}^{\varepsilon,\delta}\mathbf{Z}^{\varepsilon,\delta}_{s}\|_{\mathbb{H}}\leq\mu\|\mathbf{Z}^{\varepsilon,\delta}_{s}\|_{\mathbb{V}}^2+\frac{1}{\mu }\|\mathbf{X}_{s}^{\varepsilon,\delta}\mathbf{Z}^{\varepsilon,\delta}_{s}\|_{\mathbb{H}}^2.
\end{align}
We take the term $\|\mathbf{X}_{s}^{\varepsilon,\delta}\mathbf{Z}^{\varepsilon,\delta}_{s}\|_{\mathbb{H}}^2$ from \eqref{2p28} and use H\"older's and Young's inequalities to estimate it as (see \cite{KWH} also)
\begin{align}\label{2.29}
\|\mathbf{X}_{s}^{\varepsilon,\delta}\mathbf{Z}^{\varepsilon,\delta}_{s}\|_{\mathbb{H}}^2 &=\int_{\mathcal{O}}|\mathbf{X}_{s}^{\varepsilon,\delta}(x)|^2|\mathbf{Z}^{\varepsilon,\delta}_{s}(x)|^2\d x\nonumber\\&=\int_{\mathcal{O}}|\mathbf{X}_{s}^{\varepsilon,\delta}(x)|^2|\mathbf{Z}^{\varepsilon,\delta}_{s}(x)|^{\frac{4}{r-1}}|\mathbf{Z}^{\varepsilon,\delta}_{s}(x)|^{\frac{2(r-3)}{r-1}}\d x\nonumber\\&\leq\left(\int_{\mathcal{O}}|\mathbf{X}_{s}^{\varepsilon,\delta}(x)|^{r-1}|\mathbf{Z}^{\varepsilon,\delta}_{s}(x)|^2\d x\right)^{\frac{2}{r-1}}\left(\int_{\mathcal{O}}|\mathbf{Z}^{\varepsilon,\delta}_{s}(x)|^2\d x\right)^{\frac{r-3}{r-1}}\nonumber\\&\leq{\beta\mu }\left(\int_{\mathcal{O}}|\mathbf{X}_{s}^{\varepsilon,\delta}(x)|^{r-1}|\mathbf{Z}^{\varepsilon,\delta}_{s}(x)|^2\d x\right)+\frac{r-3}{r-1}\left(\frac{2}{\beta\mu (r-1)}\right)^{\frac{2}{r-3}}\left(\int_{\mathcal{O}}|\mathbf{Z}^{\varepsilon,\delta}_{s}(x)|^2\d x\right),
\end{align}
for $r>3$. Combining \eqref{2.27}, \eqref{2p28} and \eqref{2.29}, we obtain
\begin{align}\label{2.30}
&-2 \beta \langle\mathcal{C}(\mathbf{X}_{s}^{\varepsilon,\delta})-\mathcal{C}(\widehat\mathbf{X}_{s}^{\varepsilon,\delta}),\mathbf{Z}^{\varepsilon,\delta}_{s}\rangle-2\langle\mathrm{B}(\mathbf{Z}^{\varepsilon,\delta}_{s},\mathbf{X}_{s}^{\varepsilon,\delta}),\mathbf{Z}^{\varepsilon,\delta}_{s}\rangle\leq \frac{r-3}{\mu(r-1)}\left(\frac{2}{\beta\mu (r-1)}\right)^{\frac{2}{r-3}}\|\mathbf{Z}^{\varepsilon,\delta}_s\|_{\mathbb{H}}^2.
\end{align}
Thus a calculation similar to the estimate \eqref{3.65} yields
\begin{align}
\mathbb{E}\left[\sup_{t\in[0,T]}\|\mathbf{Z}^{\varepsilon,\delta}_{t}\|_{\mathbb{H}}^2\right]\leq C_{T}\left(1+\|\mathbf{x}\|_{\mathbb{H}}^2+\|\mathbf{y}\|_{\mathbb{H}}^2\right)\delta^{1/2},
\end{align}
and the estimate \eqref{3.54} follows.
For $r=3$, from \eqref{2.23}, we have
\begin{align}\label{231}
-2 \beta \langle\mathcal{C}(\mathbf{X}_{s}^{\varepsilon,\delta})-\mathcal{C}(\widehat\mathbf{X}_{s}^{\varepsilon,\delta}),\mathbf{Z}^{\varepsilon,\delta}_{s}\rangle \leq- \beta\|\mathbf{X}_{s}^{\varepsilon,\delta}\mathbf{Z}^{\varepsilon,\delta}_{s}\|_{\mathbb{H}}^2,
\end{align}
and a calculation similar to \eqref{2p28} gives
\begin{align}\label{3.72}
2|\langle\mathrm{B}(\mathbf{Z}^{\varepsilon,\delta}_{s},\mathbf{X}_{s}^{\varepsilon,\delta}),\mathbf{Z}^{\varepsilon,\delta}_{s}\rangle |&\leq 2\|\mathbf{Z}^{\varepsilon,\delta}_{s}\|_{\mathbb{V}}\|\mathbf{X}_{s}^{\varepsilon,\delta}\mathbf{Z}^{\varepsilon,\delta}_{s}\|_{\mathbb{H}}\leq 2\mu\|\mathbf{Z}^{\varepsilon,\delta}_{s}\|_{\mathbb{V}}^2+\frac{1}{2\mu }\|\mathbf{X}_{s}^{\varepsilon,\delta}\mathbf{Z}^{\varepsilon,\delta}_{s}\|_{\mathbb{H}}^2.
\end{align}
Combining \eqref{231} and \eqref{3.72}, we obtain
\begin{align}
&-2 \beta \langle\mathcal{C}(\mathbf{X}_{s}^{\varepsilon,\delta})-\mathcal{C}(\widehat\mathbf{X}_{s}^{\varepsilon,\delta}),\mathbf{Z}^{\varepsilon,\delta}_{s}\rangle-2\langle\mathrm{B}(\mathbf{Z}^{\varepsilon,\delta}_{s},\mathbf{X}_{s}^{\varepsilon,\delta}),\mathbf{Z}^{\varepsilon,\delta}_{s}\rangle\leq2\mu\|\mathbf{Z}^{\varepsilon,\delta}_{s}\|_{\mathbb{V}}^2 -\left(\beta-\frac{1}{2\mu }\right)\|\mathbf{X}_{s}^{\varepsilon,\delta}\mathbf{Z}^{\varepsilon,\delta}_{s}\|_{\mathbb{H}}^2,
\end{align}
and the estimate \eqref{3.54} follows for $2\beta\mu\geq 1$.
\end{proof}
\fi
Our next aim is to establish an estimate for the process ${\mathbf{X}}^{\varepsilon,\delta,h^{\varepsilon}}_{t}-\bar{\mathbf{X}}_{t}^h$. For $n=2$ and $r\in[1,3]$, we need to construct another stopping time to obtain an estimate for $\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}-\bar\mathbf{X}_t^h\|_{\mathbb{H}}^2$. For fixed $\varepsilon\in(0,1)$ and $R>0$, we define
\begin{align}\label{stop1}
\widetilde\tau^{\varepsilon}_R:=\inf_{t\geq 0}\left\{t:\sup_{s\in[0,t]}\|\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}+\int_0^t\|\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{V}}^2\d s> R\right\}.
\end{align}
It is clear that $\widetilde\tau_R^{\varepsilon}(\omega)\leq\tau_R^{\varepsilon}(\omega)$, for all $\omega\in\Omega$. We will see in the next lemma that such a stopping time is not needed for $n=2$, $r\in(3,\infty)$, and for $n=3$, $r\in[3,\infty)$ ($2\beta\mu> 1,$ for $r=n=3$).
\begin{lemma}\label{lem3.14}
For any $\mathbf{x},\mathbf{y}\in\mathbb{H}$, $T>0$ and $\varepsilon\in(0,1)$, the following estimate holds:
\begin{align}\label{389}
& \mathbb{E}\left[ \sup_{t\in[0,T\wedge\widetilde\tau_R^{\varepsilon}]} \|\mathbf{Z}^{\varepsilon}_{t}\|_{\mathbb{H}}^2+\mu\int_0^{T\wedge\widetilde\tau_R^{\varepsilon}}\|\mathbf{Z}^{\varepsilon}_t\|_{\mathbb{V}}^2\d t+\frac{\beta}{2^{r-2}}\int_0^{T\wedge\widetilde\tau_R^{\varepsilon}}\|\mathbf{Z}^{\varepsilon}_t\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d t\right]\nonumber\\&\leq C_{\mu,\alpha,\beta,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},M,T,R}(1+\|\mathbf{x}\|_{\mathbb{H}}^3+\|\mathbf{y}\|_{\mathbb{H}}^3)\left[\varepsilon^2+\left(\frac{\delta}{\varepsilon}\right)+\delta^{1/8}\right]\nonumber\\&\quad+\mathbb{E}\left[\int_0^{T}\|\sigma_1(\bar\mathbf{X}^{h}_t)\mathrm{Q}_1^{1/2}(h^{\varepsilon}_t-h_t)\|_{\mathbb{H}}^2\d t\right],
\end{align}
for $n=2$ and $r\in[1,3]$, and
\begin{align}\label{390}
& \mathbb{E}\left[\sup_{t\in[0,T\wedge\tau_R^{\varepsilon}]} \|\mathbf{Z}^{\varepsilon}_{t}\|_{\mathbb{H}}^2+\mu\int_0^{T\wedge\tau_R^{\varepsilon}}\|\mathbf{Z}^{\varepsilon}_t\|_{\mathbb{V}}^2\d t+\frac{\beta}{2^{r-2}}\int_0^{T\wedge\tau_R^{\varepsilon}}\|\mathbf{Z}^{\varepsilon}_t\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d t\right]\nonumber\\&\leq C_{\mu,\alpha,\beta,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},M,T,R}(1+\|\mathbf{x}\|_{\mathbb{H}}^2+\|\mathbf{y}\|_{\mathbb{H}}^2)\left[\varepsilon^2+\left(\frac{\delta}{\varepsilon}\right)+\delta^{1/8}\right]\nonumber\\&\quad+\mathbb{E}\left[\int_0^{T}\|\sigma_1(\bar\mathbf{X}^{h}_t)\mathrm{Q}_1^{1/2}(h^{\varepsilon}_t-h_t)\|_{\mathbb{H}}^2\d t\right],
\end{align}
for $n=2$, $r\in(3,\infty)$ and $n=3$, $r\in[3,\infty)$ ($2\beta\mu> 1,$ for $r=3$). Here, $\tau_R^{\varepsilon}$ and $\widetilde\tau_R^{\varepsilon}$ are stopping times defined in \eqref{stop} and \eqref{stop1}, respectively.
\end{lemma}
\begin{proof}
Let us denote $\mathbf{Z}^{\varepsilon}_{t}:=\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}-\bar{\mathbf{X}}_{t}^h$, where $(\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}},\mathbf{Y}_{t}^{\varepsilon,\delta,h^{\varepsilon}})$ is the unique strong solution of the system \eqref{5.4z} and $\bar{\mathbf{X}}_{t}^h$ is the unique weak solution of the system \eqref{5.4y}. Then $\mathbf{Z}^{\varepsilon}_{t}$ satisfies the following It\^o stochastic differential equation:
\begin{equation}\label{3.91}
\left\{
\begin{aligned}
\d\mathbf{Z}^{\varepsilon}_{t}&=-[\mu\mathrm{A}\mathbf{Z}^{\varepsilon}_{t}+\alpha\mathbf{Z}^{\varepsilon}_{t}+(\mathrm{B}(\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}})-\mathrm{B}(\bar\mathbf{X}^{h}_t))+\beta(\mathcal{C}(\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}})-\mathcal{C}(\bar\mathbf{X}^{h}_t))]\d t\\&\quad+[\mathrm{F}(\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}},\mathbf{Y}_{t}^{\varepsilon,\delta,h^{\varepsilon}})-\bar \mathrm{F}(\bar\mathbf{X}^{h}_{t})]\d t+[\sigma_1(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_t)\mathrm{Q}_1^{1/2}h^{\varepsilon}_t-\sigma_1(\bar\mathbf{X}^{h}_t)\mathrm{Q}_1^{1/2}h_t]\d t\\&\quad+\sqrt{\varepsilon}\sigma_1(\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}})\mathrm{Q}_1^{1/2}\d\mathrm{W}_t,\\
\mathbf{Z}^{\varepsilon}_{0}&=\mathbf{0}.
\end{aligned}\right.
\end{equation}
\vskip 2 mm
\noindent \textbf{Case 1: $n=2$ and $r\in[1,3]$.} An application of the infinite dimensional It\^o formula to the process $\|\mathbf{Z}^{\varepsilon}_{t}\|_{\mathbb{H}}^2$ yields
\begin{align}\label{3.92}
\|\mathbf{Z}^{\varepsilon}_{t}\|_{\mathbb{H}}^2&=-2 \mu\int_0^t\|\mathbf{Z}^{\varepsilon}_s\|_{\mathbb{V}}^2\d s-2\alpha\int_0^t\|\mathbf{Z}^{\varepsilon}_s\|_{\mathbb{H}}^2\d s-2\int_0^t\langle (\mathrm{B}(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})-\mathrm{B}(\bar{\mathbf{X}}^{h}_s)),\mathbf{Z}^{\varepsilon}_s\rangle\d s\nonumber\\&\quad-2\beta\int_0^t\langle\mathcal{C}(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})-\mathcal{C}(\bar{\mathbf{X}}^{h}_s),\mathbf{Z}^{\varepsilon}_s\rangle\d s+2\int_0^t(\mathrm{F}(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}},\mathbf{Y}_{s}^{\varepsilon,\delta,h^{\varepsilon}})-\bar\mathrm{F}(\bar{\mathbf{X}}^{h}_{s}),\mathbf{Z}^{\varepsilon}_s)\d s\nonumber\\&\quad+2\int_0^t([\sigma_1(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_s)\mathrm{Q}_1^{1/2}h^{\varepsilon}_s-\sigma_1(\bar\mathbf{X}^{h}_s)\mathrm{Q}_1^{1/2}h_s],\mathbf{Z}^{\varepsilon}_s)\d s+\varepsilon\int_0^t\|\sigma_1(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})\mathrm{Q}_1^{1/2}\|_{\mathcal{L}_2}^2\d s\nonumber\\&\quad+2\sqrt{\varepsilon}\int_0^t(\sigma_1(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})\mathrm{Q}_1^{1/2}\d\mathrm{W}_s,\mathbf{Z}^{\varepsilon}_s)\nonumber\\&=- 2 \mu\int_0^t\|\mathbf{Z}^{\varepsilon}_s\|_{\mathbb{V}}^2\d s- 2 \alpha\int_0^t\|\mathbf{Z}^{\varepsilon}_s\|_{\mathbb{H}}^2\d s-2\int_0^t\langle (\mathrm{B}(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})-\mathrm{B}(\bar{\mathbf{X}}^{h}_s)),\mathbf{Z}^{\varepsilon}_s\rangle\d s\nonumber\\&\quad-2\beta\int_0^t\langle\mathcal{C}(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})-\mathcal{C}(\bar{\mathbf{X}}^{h}_s),\mathbf{Z}^{\varepsilon}_s\rangle\d s+2\int_0^t(\bar \mathrm{F}(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})-\bar \mathrm{F}(\bar{\mathbf{X}}^{h}_{s}),\mathbf{Z}^{\varepsilon}_s)\d s\nonumber\\&\quad+2\int_0^t(\mathrm{F}(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}},\mathbf{Y}_{s}^{\varepsilon,\delta,h^{\varepsilon}})-\bar \mathrm{F}(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})-\mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}},\widehat\mathbf{Y}_{s}^{\varepsilon,\delta})+\bar \mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}}),\mathbf{Z}^{\varepsilon}_s)\d s\nonumber\\&\quad+2\int_0^t(\mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}},\widehat\mathbf{Y}_{s}^{\varepsilon,\delta})-\bar \mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}}),\mathbf{Z}^{\varepsilon}_s-\mathbf{Z}^{\varepsilon}_{s(\Delta)})\d s\nonumber\\&\quad+2\int_0^t(\mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}},\widehat\mathbf{Y}_{s}^{\varepsilon,\delta})-\bar \mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}}),\mathbf{Z}^{\varepsilon}_{s(\Delta)})\d s \nonumber\\&\quad+2\int_0^t([\sigma_1(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_s)\mathrm{Q}_1^{1/2}h^{\varepsilon}_s-\sigma_1(\bar\mathbf{X}^{h}_s)\mathrm{Q}_1^{1/2}h_s],\mathbf{Z}^{\varepsilon}_s)\d s\nonumber\\&\quad+\varepsilon\int_0^t\|\sigma_1(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})\mathrm{Q}_1^{1/2}\|_{\mathcal{L}_2}^2\d s+2\sqrt{\varepsilon}\int_0^t(\sigma_1(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})\mathrm{Q}_1^{1/2}\d\mathrm{W}_s,\mathbf{Z}^{\varepsilon}_s),\ \mathbb{P}\text{-a.s.},
\end{align}
for all $t\in[0,T]$. Using H\"older's, Ladyzhenskaya's and Young's inequalities, we estimate $2|\langle(\mathrm{B}(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})-\mathrm{B}(\bar{\mathbf{X}}^{h}_s)),\mathbf{Z}^{\varepsilon}_s\rangle|$ as
\begin{align}\label{3.93}
2| \langle(\mathrm{B}(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})-\mathrm{B}(\bar{\mathbf{X}}^{h}_s)),\mathbf{Z}^{\varepsilon}_s\rangle|&= 2|\langle\mathrm{B}(\mathbf{Z}^{\varepsilon}_{s},\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}),\mathbf{Z}^{\varepsilon}_{s}\rangle |\leq 2\|\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{V}}\|\mathbf{Z}^{\varepsilon}_{s}\|_{\widetilde\mathbb{L}^4}^2\nonumber\\&\leq 2\sqrt{2}\|\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{V}}\|\mathbf{Z}^{\varepsilon}_{s}\|_{\mathbb{H}}\|\mathbf{Z}^{\varepsilon}_{s}\|_{\mathbb{V}}\nonumber\\&\leq\mu\|\mathbf{Z}^{\varepsilon}_{s}\|_{\mathbb{V}}^2+\frac{2}{\mu}\|\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{V}}^2\|\mathbf{Z}^{\varepsilon}_s\|_{\mathbb{H}}^2.
\end{align}
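For the reader's convenience, we record the two elementary facts used in the last two steps above, namely the two dimensional Ladyzhenskaya interpolation inequality and Young's inequality, in the form
\begin{align*}
\|\mathbf{u}\|_{\widetilde\mathbb{L}^4}^2\leq\sqrt{2}\,\|\mathbf{u}\|_{\mathbb{H}}\|\mathbf{u}\|_{\mathbb{V}} \quad\text{and}\quad 2ab\leq\mu a^2+\frac{b^2}{\mu}, \ \text{ for all } \ a,b\geq 0,
\end{align*}
the latter being applied with $a=\|\mathbf{Z}^{\varepsilon}_{s}\|_{\mathbb{V}}$ and $b=\sqrt{2}\,\|\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{V}}\|\mathbf{Z}^{\varepsilon}_{s}\|_{\mathbb{H}}$.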
Using \eqref{2.23} and \eqref{a215}, we know that
\begin{align}
-2\beta\langle\mathcal{C}(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})-\mathcal{C}(\bar{\mathbf{X}}^{h}_s),\mathbf{Z}^{\varepsilon}_s\rangle\leq-\frac{\beta}{2^{r-2}}\|\mathbf{Z}^{\varepsilon}_s\|_{\widetilde\mathbb{L}^{r+1}}^{r+1},
\end{align}
for $r\in[1,\infty)$. Using \eqref{3p93}, H\"older's and Young's inequalities, we estimate $2(\bar \mathrm{F}(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})-\bar \mathrm{F}(\bar{\mathbf{X}}^{h}_{s}),\mathbf{Z}^{\varepsilon}_s)$ as
\begin{align}
2(\bar \mathrm{F}(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})-\bar \mathrm{F}(\bar{\mathbf{X}}^{h}_{s}),\mathbf{Z}^{\varepsilon}_s)&\leq 2\|\bar \mathrm{F}(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})-\bar \mathrm{F}(\bar{\mathbf{X}}^{h}_{s})\|_{\mathbb{H}}\|\mathbf{Z}^{\varepsilon}_s\|_{\mathbb{H}}\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2}}\|\mathbf{Z}^{\varepsilon}_{s}\|_{\mathbb{H}}^2.
\end{align}
Similarly, we estimate $2(\mathrm{F}(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}},\mathbf{Y}_{s}^{\varepsilon,\delta,h^{\varepsilon}})-\bar \mathrm{F}(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})-\mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}},\widehat\mathbf{Y}_{s}^{\varepsilon,\delta})+\bar \mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}}),\mathbf{Z}^{\varepsilon}_s)$ as
\begin{align}
&2(\mathrm{F}(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}},\mathbf{Y}_{s}^{\varepsilon,\delta,h^{\varepsilon}})-\bar \mathrm{F}(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})-\mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}},\widehat\mathbf{Y}_{s}^{\varepsilon,\delta})+\bar \mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}}),\mathbf{Z}^{\varepsilon}_s)\nonumber\\&\leq 2\left(\|\mathrm{F}(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}},\mathbf{Y}_{s}^{\varepsilon,\delta,h^{\varepsilon}})-\mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}},\widehat\mathbf{Y}_{s}^{\varepsilon,\delta})\|_{\mathbb{H}}+\|\bar \mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}})-\bar \mathrm{F}(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})\|_{\mathbb{H}}\right)\|\mathbf{Z}^{\varepsilon}_s\|_{\mathbb{H}}\nonumber\\&\leq\|\mathbf{Z}^{\varepsilon}_s\|_{\mathbb{H}}^2+C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2}}(\|\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}^2+\|\mathbf{Y}_{s}^{\varepsilon,\delta,h^{\varepsilon}}-\widehat\mathbf{Y}_{s}^{\varepsilon,\delta}\|_{\mathbb{H}}^2).
\end{align}
Using Assumption \ref{ass3.6} (A1) together with H\"older's and Young's inequalities, we estimate the term $2\int_0^t(\mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}},\widehat\mathbf{Y}_{s}^{\varepsilon,\delta})-\bar \mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}}),\mathbf{Z}^{\varepsilon}_s-\mathbf{Z}^{\varepsilon}_{s(\Delta)})\d s$ as
\begin{align}\label{3.97}
&2\int_0^t(\mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}},\widehat\mathbf{Y}_{s}^{\varepsilon,\delta})-\bar \mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}}),\mathbf{Z}^{\varepsilon}_s-\mathbf{Z}^{\varepsilon}_{s(\Delta)})\d s\nonumber\\&\leq 2\int_0^t \|\mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}},\widehat\mathbf{Y}_{s}^{\varepsilon,\delta})-\bar \mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}})\|_{\mathbb{H}}\|\mathbf{Z}^{\varepsilon}_s-\mathbf{Z}^{\varepsilon}_{s(\Delta)}\|_{\mathbb{H}}\d s\nonumber\\&\leq 4\left(\int_0^t(\|\mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}},\widehat\mathbf{Y}_{s}^{\varepsilon,\delta})\|_{\mathbb{H}}^2+\|\bar \mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}})\|_{\mathbb{H}}^2)\d s\right)^{1/2}\nonumber\\&\quad\times\left(\int_0^t(\|\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{s}-\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{s(\Delta)}\|_{\mathbb{H}}^2+\|\bar{\mathbf{X}}^{h}_s-\bar{\mathbf{X}}^{h}_{s(\Delta)}\|_{\mathbb{H}}^2)\d s\right)^{1/2}\nonumber\\&\leq C\left(\int_0^t(1+\|\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}^2+\|\widehat\mathbf{Y}_{s}^{\varepsilon,\delta}\|_{\mathbb{H}}^2)\d s\right)^{1/2}\left(\int_0^t(\|\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{s}-\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{s(\Delta)}\|_{\mathbb{H}}^2+\|\bar{\mathbf{X}}^{h}_s-\bar{\mathbf{X}}^{h}_{s(\Delta)}\|_{\mathbb{H}}^2)\d s\right)^{1/2}.
\end{align}
Using the Cauchy-Schwarz and Young inequalities together with Assumption \ref{ass3.6} (A1), we estimate the term $2\int_0^t([\sigma_1(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_s)\mathrm{Q}_1^{1/2}h^{\varepsilon}_s-\sigma_1(\bar\mathbf{X}^{h}_s)\mathrm{Q}_1^{1/2}h_s],\mathbf{Z}^{\varepsilon}_s)\d s$ appearing on the right hand side of \eqref{3.92} as
\begin{align}\label{456}
&2\int_0^t([\sigma_1(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_s)\mathrm{Q}_1^{1/2}h^{\varepsilon}_s-\sigma_1(\bar\mathbf{X}^{h}_s)\mathrm{Q}_1^{1/2}h_s],\mathbf{Z}^{\varepsilon}_s)\d s \nonumber\\& \leq 2\int_0^{t}|([\sigma_1(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_s)-\sigma_1(\bar\mathbf{X}^{h}_s)]\mathrm{Q}_1^{1/2}h^{\varepsilon}_s,\mathbf{Z}^{\varepsilon}_s)|\d s +2\int_0^{t}|(\sigma_1(\bar\mathbf{X}^{h}_s)\mathrm{Q}_1^{1/2}(h^{\varepsilon}_s-h_s),\mathbf{Z}^{\varepsilon}_s)|\d s \nonumber\\&\leq 2\int_0^{t}\|[\sigma_1(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_s)-\sigma_1(\bar\mathbf{X}^{h}_s)]\mathrm{Q}_1^{1/2}\|_{\mathcal{L}_2}\|h^{\varepsilon}_s\|_{\mathbb{H}}\|\mathbf{Z}^{\varepsilon}_s\|_{\mathbb{H}}\d s \nonumber\\&\quad+2 \int_0^{t}\|\sigma_1(\bar\mathbf{X}^{h}_s)\mathrm{Q}_1^{1/2}(h^{\varepsilon}_s-h_s)\|_{\mathbb{H}}\|\mathbf{Z}^{\varepsilon}_s\|_{\mathbb{H}}\d s \nonumber\\&\leq C\int_0^{t}\left(1+\|h^{\varepsilon}_s\|_{\mathbb{H}}^2\right)\|\mathbf{Z}^{\varepsilon}_s\|_{\mathbb{H}}^2\d s +\int_0^{t}\|\mathbf{Z}^{\varepsilon}_s\|_{\mathbb{H}}^2\d s+\int_0^{t}\|\sigma_1(\bar\mathbf{X}^{h}_s)\mathrm{Q}_1^{1/2}(h^{\varepsilon}_s-h_s)\|_{\mathbb{H}}^2\d s.
\end{align}
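We briefly record how the final step above is obtained (this is only a restatement of the standard argument). By Assumption \ref{ass3.6} (A1), $\|[\sigma_1(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_s)-\sigma_1(\bar\mathbf{X}^{h}_s)]\mathrm{Q}_1^{1/2}\|_{\mathcal{L}_2}\leq C\|\mathbf{Z}^{\varepsilon}_s\|_{\mathbb{H}}$, so that
\begin{align*}
2\|[\sigma_1(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_s)-\sigma_1(\bar\mathbf{X}^{h}_s)]\mathrm{Q}_1^{1/2}\|_{\mathcal{L}_2}\|h^{\varepsilon}_s\|_{\mathbb{H}}\|\mathbf{Z}^{\varepsilon}_s\|_{\mathbb{H}}\leq 2C\|h^{\varepsilon}_s\|_{\mathbb{H}}\|\mathbf{Z}^{\varepsilon}_s\|_{\mathbb{H}}^2\leq C\left(1+\|h^{\varepsilon}_s\|_{\mathbb{H}}^2\right)\|\mathbf{Z}^{\varepsilon}_s\|_{\mathbb{H}}^2,
\end{align*}
by Young's inequality, while the remaining term is handled by $2ab\leq a^2+b^2$ with $a=\|\mathbf{Z}^{\varepsilon}_s\|_{\mathbb{H}}$ and $b=\|\sigma_1(\bar\mathbf{X}^{h}_s)\mathrm{Q}_1^{1/2}(h^{\varepsilon}_s-h_s)\|_{\mathbb{H}}$.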
Making use of Assumption \ref{ass3.6} (A1), we easily see that
\begin{align}\label{457}
\int_0^{t}\|\sigma_1(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})\mathrm{Q}_1^{1/2}\|_{\mathcal{L}_2}^2\d s&\leq 2\int_0^{t} \|[\sigma_1(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})-\sigma_1(\bar\mathbf{X}_{s}^h)]\mathrm{Q}_1^{1/2}\|_{\mathcal{L}_2}^2\d s+2 \int_0^{t} \|\sigma_1(\bar\mathbf{X}_{s}^h)\mathrm{Q}_1^{1/2}\|_{\mathcal{L}_2}^2\d s \nonumber\\&\leq \frac{C}{\varepsilon}\int_0^{t}\|\mathbf{Z}^{\varepsilon}_s\|_{\mathbb{H}}^2\d s+C\varepsilon\int_0^{t}\left(1+\|\bar\mathbf{X}_{s}^h\|_{\mathbb{H}}^2\right)\d s.
\end{align}
Combining \eqref{3.93}-\eqref{457} and substituting the resulting bounds in \eqref{3.92}, we obtain
\begin{align}\label{3.99}
& \|\mathbf{Z}^{\varepsilon}_{t}\|_{\mathbb{H}}^2+\mu\int_0^t\|\mathbf{Z}^{\varepsilon}_s\|_{\mathbb{V}}^2\d s+\frac{\beta}{2^{r-2}}\int_0^t\|\mathbf{Z}^{\varepsilon}_s\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d s\nonumber\\&\leq \frac{2}{\mu}\int_0^t\|\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{V}}^2\|\mathbf{Z}^{\varepsilon}_s\|_{\mathbb{H}}^2\d s+C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2}}\int_0^t\|\mathbf{Z}^{\varepsilon}_s\|_{\mathbb{H}}^2\d s\nonumber\\&\quad+C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2}}\int_0^t\|\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}^2\d s+C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2}}\int_0^t\|\mathbf{Y}_{s}^{\varepsilon,\delta,h^{\varepsilon}}-\widehat\mathbf{Y}_{s}^{\varepsilon,\delta}\|_{\mathbb{H}}^2\d s\nonumber\\&\quad+C\int_0^{t}\|h^{\varepsilon}_s\|_{\mathbb{H}}^2\|\mathbf{Z}^{\varepsilon}_s\|_{\mathbb{H}}^2\d s +\int_0^{t}\|\sigma_1(\bar\mathbf{X}^{h}_s)\mathrm{Q}_1^{1/2}(h^{\varepsilon}_s-h_s)\|_{\mathbb{H}}^2\d s+C\varepsilon^2\int_0^{t}\left(1+\|\bar\mathbf{X}_{s}^h\|_{\mathbb{H}}^2\right)\d s\nonumber\\&\quad+C\left(\int_0^t(1+\|\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}^2+\|\widehat\mathbf{Y}_{s}^{\varepsilon,\delta}\|_{\mathbb{H}}^2)\d s\right)^{1/2}\left(\int_0^t(\|\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{s}-\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{s(\Delta)}\|_{\mathbb{H}}^2+\|\bar{\mathbf{X}}^{h}_s-\bar{\mathbf{X}}^{h}_{s(\Delta)}\|_{\mathbb{H}}^2)\d s\right)^{1/2} \nonumber\\&\quad +2\int_0^t(\mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}},\widehat\mathbf{Y}_{s}^{\varepsilon,\delta})-\bar \mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}}),\mathbf{Z}^{\varepsilon}_{s(\Delta)})\d s\nonumber\\&\quad+2\sqrt{\varepsilon}\int_0^t([\sigma_1(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})-\sigma_1(\bar{\mathbf{X}}^{h}_s)]\mathrm{Q}_1^{1/2}\d\mathrm{W}_s,\mathbf{Z}^{\varepsilon}_s), \ \mathbb{P}\text{-a.s.,}
\end{align}
for all $t\in[0,T]$. An application of Gronwall's inequality in \eqref{3.99} gives
\begin{align}\label{3100}
& \sup_{t\in[0,T\wedge\widetilde\tau_R^{\varepsilon}]} \|\mathbf{Z}^{\varepsilon}_{t}\|_{\mathbb{H}}^2+\mu\int_0^{T\wedge\widetilde\tau_R^{\varepsilon}}\|\mathbf{Z}^{\varepsilon}_t\|_{\mathbb{V}}^2\d t+2\alpha\int_0^{T\wedge\widetilde\tau_R^{\varepsilon}}\|\mathbf{Z}^{\varepsilon}_t\|_{\mathbb{H}}^2\d t+\frac{\beta}{2^{r-2}}\int_0^{T\wedge\widetilde\tau_R^{\varepsilon}}\|\mathbf{Z}^{\varepsilon}_t\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d t\nonumber\\&\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},T}\Bigg\{\int_0^{T\wedge\widetilde\tau_R^{\varepsilon}}\|\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}^2\d s+\int_0^{T\wedge\widetilde\tau_R^{\varepsilon}}\|\mathbf{Y}_{s}^{\varepsilon,\delta,h^{\varepsilon}}-\widehat\mathbf{Y}_{s}^{\varepsilon,\delta}\|_{\mathbb{H}}^2\d s\nonumber\\&\quad +\int_0^{T\wedge\widetilde\tau_R^{\varepsilon}}\|\sigma_1(\bar\mathbf{X}^{h}_t)\mathrm{Q}_1^{1/2}(h^{\varepsilon}_t-h_t)\|_{\mathbb{H}}^2\d t+\varepsilon^2\int_0^{T\wedge\widetilde\tau_R^{\varepsilon}}\left(1+\|\bar\mathbf{X}_{t}^h\|_{\mathbb{H}}^2\right)\d t\nonumber\\&\quad+\left(\int_0^{T\wedge\widetilde\tau_R^{\varepsilon}}(1+\|\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}^2+\|\widehat\mathbf{Y}_{s}^{\varepsilon,\delta}\|_{\mathbb{H}}^2)\d s\right)^{1/2}\nonumber\\&\qquad\times\left(\int_0^{T\wedge\widetilde\tau_R^{\varepsilon}}(\|\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{s}-\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{s(\Delta)}\|_{\mathbb{H}}^2+\|\bar{\mathbf{X}}^{h}_s-\bar{\mathbf{X}}^{h}_{s(\Delta)}\|_{\mathbb{H}}^2)\d s\right)^{1/2}\nonumber\\&\quad+ \sup_{t\in[0,T\wedge\widetilde\tau_R^{\varepsilon}]}\left|\int_0^t(\mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}},\widehat\mathbf{Y}_{s}^{\varepsilon,\delta})-\bar \mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}}),\mathbf{Z}^{\varepsilon}_{s(\Delta)})\d s\right|\nonumber\\&\quad+\sqrt{\varepsilon} \sup_{t\in[0,T\wedge\widetilde\tau_R^{\varepsilon}]}\left|\int_0^t([\sigma_1(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})-\sigma_1(\bar{\mathbf{X}}^{h}_s)]\mathrm{Q}_1^{1/2}\d\mathrm{W}_s,\mathbf{Z}^{\varepsilon}_s)\right|\Bigg\}\nonumber\\&\quad\times \exp\left(C\int_0^{T\wedge\widetilde\tau_R^{\varepsilon}}\|h^{\varepsilon}_t\|_{\mathbb{H}}^2\d t\right)\exp\left(\frac{2}{\mu}\int_0^{T\wedge\widetilde\tau_R^{\varepsilon}}\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{V}}^2\d t\right)\nonumber\\&\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},M,T,R}\bigg\{\int_0^{T\wedge\widetilde\tau_R^{\varepsilon}}\|\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}^2\d s+\int_0^{T}\|\mathbf{Y}_{s}^{\varepsilon,\delta,h^{\varepsilon}}-\widehat\mathbf{Y}_{s}^{\varepsilon,\delta}\|_{\mathbb{H}}^2\d s\nonumber\\&\quad +\int_0^{T}\|\sigma_1(\bar\mathbf{X}^{h}_t)\mathrm{Q}_1^{1/2}(h^{\varepsilon}_t-h_t)\|_{\mathbb{H}}^2\d t+\varepsilon^2\int_0^{T}\left(1+\|\bar\mathbf{X}_{t}^h\|_{\mathbb{H}}^2\right)\d t\nonumber\\&\quad+\left(\int_0^{T}(1+\|\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}^2+\|\widehat\mathbf{Y}_{s}^{\varepsilon,\delta}\|_{\mathbb{H}}^2)\d s\right)^{1/2}\nonumber\\&\qquad\times\left(\int_0^{T\wedge\widetilde\tau_R^{\varepsilon}}\|\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{s}-\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{s(\Delta)}\|_{\mathbb{H}}^2\d 
s+\int_0^T\|\bar{\mathbf{X}}^{h}_s-\bar{\mathbf{X}}^{h}_{s(\Delta)}\|_{\mathbb{H}}^2\d s\right)^{1/2}\nonumber\\&\quad+ \sup_{t\in[0,T\wedge\widetilde\tau_R^{\varepsilon}]}\left|\int_0^t(\mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}},\widehat\mathbf{Y}_{s}^{\varepsilon,\delta})-\bar \mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}}),\mathbf{Z}^{\varepsilon}_{s(\Delta)})\d s\right|\nonumber\\&\quad+\sqrt{\varepsilon} \sup_{t\in[0,T\wedge\widetilde\tau_R^{\varepsilon}]}\left|\int_0^t([\sigma_1(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})-\sigma_1(\bar{\mathbf{X}}^{h}_s)]\mathrm{Q}_1^{1/2}\d\mathrm{W}_s,\mathbf{Z}^{\varepsilon}_s)\right|\bigg\}, \ \mathbb{P}\text{-a.s.},
\end{align}
since $h^{\varepsilon}\in\mathcal{A}_M$; here we have also used the definition of the stopping time given in \eqref{stop}. Taking expectation on both sides of \eqref{3100} and then using Theorem \ref{thm5.9} and Lemmas \ref{lem3.13}-\ref{lem3.8} (see \eqref{5.5y}, \eqref{3.87}, \eqref{3.24} and \eqref{3.42}), we obtain
\begin{align}\label{3101}
& \mathbb{E}\left[ \sup_{t\in[0,T\wedge\widetilde\tau_R^{\varepsilon}]} \|\mathbf{Z}^{\varepsilon}_{t}\|_{\mathbb{H}}^2+\mu\int_0^{T\wedge\widetilde\tau_R^{\varepsilon}}\|\mathbf{Z}^{\varepsilon}_t\|_{\mathbb{V}}^2\d t+\frac{\beta}{2^{r-2}}\int_0^{T\wedge\widetilde\tau_R^{\varepsilon}}\|\mathbf{Z}^{\varepsilon}_t\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d t\right]\nonumber\\&\leq C_{\mu,\alpha,\beta,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},M,T,R}\Bigg\{\left(1+\|\mathbf{x}\|_{\mathbb{H}}^3+\|\mathbf{y}\|_{\mathbb{H}}^3\right)\left[\left(\frac{\delta}{\varepsilon}\right)+\Delta^{1/4}\right]\nonumber\\&\quad+\mathbb{E}\left[\int_0^{T}\|\sigma_1(\bar\mathbf{X}^{h}_t)\mathrm{Q}_1^{1/2}(h^{\varepsilon}_t-h_t)\|_{\mathbb{H}}^2\d t\right]+\varepsilon^2T\left(1+\sup_{t\in[0,T]}\|\bar\mathbf{X}_{t}^h\|_{\mathbb{H}}^2\right)\nonumber\\&\quad+\mathbb{E}\left[\sup_{t\in[0,T\wedge\widetilde\tau_R^{\varepsilon}]}\left|\int_0^t(\mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}},\widehat\mathbf{Y}_{s}^{\varepsilon,\delta})-\bar \mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}}),\mathbf{Z}^{\varepsilon}_{s(\Delta)})\d s\right|\right]\nonumber\\&\quad+\mathbb{E}\left[\sup_{t\in[0,T\wedge\widetilde\tau_R^{\varepsilon}]}\left|\int_0^t([\sigma_1(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})-\sigma_1(\bar{\mathbf{X}}^{h}_s)]\mathrm{Q}_1^{1/2}\d\mathrm{W}_s,\mathbf{Z}^{\varepsilon}_s)\right|\right]\Bigg\}.
\end{align}
Using the Burkholder-Davis-Gundy inequality and Assumption \ref{ass3.6} (A1), we estimate the final term on the right hand side of \eqref{3101} as
\begin{align}\label{3102}
&C_{\mu,\alpha,\beta,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},M,T,R}\mathbb{E}\left[\sup_{t\in[0,T\wedge\widetilde\tau_R^{\varepsilon}]}\left|\int_0^t([\sigma_1(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})-\sigma_1(\bar{\mathbf{X}}^{h}_s)]\mathrm{Q}_1^{1/2}\d\mathrm{W}_s,\mathbf{Z}^{\varepsilon}_s)\right|\right]\nonumber\\&\leq C_{\mu,\alpha,\beta,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},M,T,R} \mathbb{E}\left[\int_0^{T\wedge\widetilde\tau_R^{\varepsilon}}\|[\sigma_1(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})-\sigma_1(\bar{\mathbf{X}}^{h}_s)]\mathrm{Q}_1^{1/2}\|_{\mathcal{L}_2}^2\|\mathbf{Z}^{\varepsilon}_s\|_{\mathbb{H}}^2\d s\right]^{1/2}\nonumber\\&\leq C_{\mu,\alpha,\beta,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},M,T,R}\mathbb{E}\left[\sup_{s\in[0,T\wedge\widetilde\tau_R^{\varepsilon}]}\|\mathbf{Z}^{\varepsilon}_s\|_{\mathbb{H}}\left(\int_0^{T\wedge\widetilde\tau_R^{\varepsilon}}\|[\sigma_1(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})-\sigma_1(\bar{\mathbf{X}}^{h}_s)]\mathrm{Q}_1^{1/2}\|_{\mathcal{L}_2}^2\d s\right)^{1/2}\right]\nonumber\\&\leq\frac{1}{2}\mathbb{E}\left[\sup_{t\in[0,T\wedge\widetilde\tau_R^{\varepsilon}]}\|\mathbf{Z}^{\varepsilon}_t\|_{\mathbb{H}}^2\right]+C_{\mu,\alpha,\beta,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},M,T,R}\mathbb{E}\left[\int_0^{T\wedge\widetilde\tau_R^{\varepsilon}}\|\mathbf{Z}^{\varepsilon}_t\|_{\mathbb{H}}^2\d t\right].
\end{align}
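The last step above is obtained by first applying the Lipschitz bound from Assumption \ref{ass3.6} (A1), $\|[\sigma_1(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})-\sigma_1(\bar{\mathbf{X}}^{h}_s)]\mathrm{Q}_1^{1/2}\|_{\mathcal{L}_2}^2\leq C\|\mathbf{Z}^{\varepsilon}_s\|_{\mathbb{H}}^2$, inside the time integral, and then Young's inequality (a short justification, with $C$ denoting a generic constant):
\begin{align*}
C\sup_{s\in[0,T\wedge\widetilde\tau_R^{\varepsilon}]}\|\mathbf{Z}^{\varepsilon}_s\|_{\mathbb{H}}\left(\int_0^{T\wedge\widetilde\tau_R^{\varepsilon}}\|\mathbf{Z}^{\varepsilon}_s\|_{\mathbb{H}}^2\d s\right)^{1/2}\leq\frac{1}{2}\sup_{s\in[0,T\wedge\widetilde\tau_R^{\varepsilon}]}\|\mathbf{Z}^{\varepsilon}_s\|_{\mathbb{H}}^2+\frac{C^2}{2}\int_0^{T\wedge\widetilde\tau_R^{\varepsilon}}\|\mathbf{Z}^{\varepsilon}_s\|_{\mathbb{H}}^2\d s,
\end{align*}
so that the supremum can be absorbed into the left hand side of \eqref{3101}.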
Using \eqref{3102} in \eqref{3101}, we get
\begin{align}\label{3103}
& \mathbb{E}\left[ \sup_{t\in[0,T\wedge\widetilde\tau_R^{\varepsilon}]} \|\mathbf{Z}^{\varepsilon}_{t}\|_{\mathbb{H}}^2+\mu\int_0^{T\wedge\widetilde\tau_R^{\varepsilon}}\|\mathbf{Z}^{\varepsilon}_t\|_{\mathbb{V}}^2\d t+\frac{\beta}{2^{r-2}}\int_0^{T\wedge\widetilde\tau_R^{\varepsilon}}\|\mathbf{Z}^{\varepsilon}_t\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d t\right]\nonumber\\&\leq C_{\mu,\alpha,\beta,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},M,T,R}\left(1+\|\mathbf{x}\|_{\mathbb{H}}^3+\|\mathbf{y}\|_{\mathbb{H}}^3\right)\left[\left(\frac{\delta}{\varepsilon}\right)+\Delta^{1/4}\right]\nonumber\\&\quad+C_{\mu,\alpha,\beta,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},M,T,R}\mathbb{E}\left[\int_0^{T\wedge\widetilde\tau_R^{\varepsilon}}\|\mathbf{Z}^{\varepsilon}_s\|_{\mathbb{H}}^2\d s\right]\nonumber\\&\quad+\mathbb{E}\left[\int_0^{T}\|\sigma_1(\bar\mathbf{X}^{h}_t)\mathrm{Q}_1^{1/2}(h^{\varepsilon}_t-h_t)\|_{\mathbb{H}}^2\d t\right]+\varepsilon^2T\left(1+\sup_{t\in[0,T]}\|\bar\mathbf{X}_{t}^h\|_{\mathbb{H}}^2\right)\nonumber\\&\quad+C_{\mu,\alpha,\beta,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},M,T,R}\mathbb{E}\left[\sup_{t\in[0,T\wedge\widetilde\tau_R^{\varepsilon}]}\left|\int_0^t(\mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}},\widehat\mathbf{Y}_{s}^{\varepsilon,\delta})-\bar \mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}}),\mathbf{Z}^{\varepsilon}_{s(\Delta)})\d s\right|\right].
\end{align}
An application of Gronwall's inequality in \eqref{3103} yields
\begin{align}\label{3104}
& \mathbb{E}\left[ \sup_{t\in[0,T\wedge\widetilde\tau_R^{\varepsilon}]} \|\mathbf{Z}^{\varepsilon}_{t}\|_{\mathbb{H}}^2+\mu\int_0^{T\wedge\widetilde\tau_R^{\varepsilon}}\|\mathbf{Z}^{\varepsilon}_t\|_{\mathbb{V}}^2\d t+\frac{\beta}{2^{r-2}}\int_0^{T\wedge\widetilde\tau_R^{\varepsilon}}\|\mathbf{Z}^{\varepsilon}_t\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d t\right]\nonumber\\&\leq C_{\mu,\alpha,\beta,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},M,T,R}\left(1+\|\mathbf{x}\|_{\mathbb{H}}^3+\|\mathbf{y}\|_{\mathbb{H}}^3\right)\left[\left(\frac{\delta}{\varepsilon}\right)+\Delta^{1/4}\right]\nonumber\\&\quad+\mathbb{E}\left[\int_0^{T}\|\sigma_1(\bar\mathbf{X}^{h}_t)\mathrm{Q}_1^{1/2}(h^{\varepsilon}_t-h_t)\|_{\mathbb{H}}^2\d t\right]+\varepsilon^2T\left(1+\sup_{t\in[0,T]}\|\bar\mathbf{X}_{t}^h\|_{\mathbb{H}}^2\right)\nonumber\\&\quad+C_{\mu,\alpha,\beta,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},M,T,R}\mathbb{E}\left[\sup_{t\in[0,T]}|I(t)|\right],
\end{align}
where $I(t)=\int_0^t(\mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}},\widehat\mathbf{Y}_{s}^{\varepsilon,\delta})-\bar \mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}}),\mathbf{Z}^{\varepsilon}_{s(\Delta)})\d s$.
Let us now estimate the final term on the right hand side of \eqref{3104}. We follow arguments similar to those given in Lemma 3.8 of \cite{SLXS} and Lemma 4.6 of \cite{MTM11} to obtain the required result. For the sake of completeness, we provide a proof here. Note that
\begin{align}\label{3105}
|I(t)|&= \left|\sum_{k=0}^{[t/\Delta]-1}\int_{k\Delta}^{(k+1)\Delta}(\mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}},\widehat\mathbf{Y}_{s}^{\varepsilon,\delta})-\bar \mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}),\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{k\Delta}-\bar{\mathbf{X}}^{h}_{s(\Delta)})\d s\right.\nonumber\\&\quad+\left.\int_{t(\Delta)}^t(\mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}},\widehat\mathbf{Y}_{s}^{\varepsilon,\delta})-\bar \mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}}),\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{s(\Delta)}-\bar{\mathbf{X}}^{h}_{s(\Delta)})\d s\right|=:|I_1(t)+I_2(t)|.
\end{align}
Using Assumption \ref{ass3.6} (A1), we estimate $\mathbb{E}\left[\sup\limits_{t\in[0,T]}|I_2(t)|\right]$ as
\begin{align}\label{4129}
& \mathbb{E}\left[\sup\limits_{t\in[0,T]}|I_2(t)|\right]\nonumber\\&\leq\mathbb{E}\left[\sup\limits_{t\in[0,T]}\int_{t(\Delta)}^t\|\mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}},\widehat\mathbf{Y}_{s}^{\varepsilon,\delta})-\bar \mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}})\|_{\mathbb{H}}\|\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{s(\Delta)}-\bar{\mathbf{X}}^{h}_{s(\Delta)}\|_{\mathbb{H}}\d s\right]\nonumber\\&\leq C\left[\mathbb{E}\left(\sup_{t\in[0,T]}\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}-\bar{\mathbf{X}}_t^{h}\|_{\mathbb{H}}^2\right)\right]^{1/2}\left[\mathbb{E}\left(\sup_{t\in[0,T]}\left|\int_{t(\Delta)}^t\left(1+\|\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{s(\Delta)}\|_{\mathbb{H}}+\|\widehat\mathbf{Y}_{s}^{\varepsilon,\delta}\|_{\mathbb{H}}\right)\d s\right|^2\right)\right]^{1/2}\nonumber\\&\leq C\Delta^{1/2} \left[\mathbb{E}\left(\sup_{t\in[0,T]}(\|\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}^2+\|\bar{\mathbf{X}}_t^{h}\|_{\mathbb{H}}^2)\right)\right]^{1/2}\left[\mathbb{E}\left(\int_0^T\left(1+\|\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{s(\Delta)}\|_{\mathbb{H}}^2+\|\widehat\mathbf{Y}_{s}^{\varepsilon,\delta}\|_{\mathbb{H}}^2\right)\d s\right)\right]^{1/2}\nonumber\\&\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},M,T}(1+\|\mathbf{x}\|_{\mathbb{H}}^2+\|\mathbf{y}\|_{\mathbb{H}}^2)\Delta^{1/2},
\end{align}
where we used the Cauchy-Schwarz inequality in time (which produces the factor $\Delta^{1/2}$) together with \eqref{5.5y} and \eqref{5.5z}. Next, we estimate the term $\mathbb{E}\left[\sup\limits_{t\in[0,T]}|I_1(t)|\right]$ as
\begin{align}\label{3107}
& \mathbb{E}\left[\sup\limits_{t\in[0,T]}|I_1(t)|\right]\nonumber\\&\leq \mathbb{E}\left[\sum_{k=0}^{[T/\Delta]-1}\left|\int_{k\Delta}^{(k+1)\Delta}(\mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}},\widehat\mathbf{Y}_{s}^{\varepsilon,\delta})-\bar \mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}),\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{k\Delta}-\bar{\mathbf{X}}^{h}_{k\Delta})\d s\right|\right]\nonumber\\&\leq\left[\frac{T}{\Delta}\right]\max_{0\leq k\leq [T/\Delta]-1}\mathbb{E}\left[\left|\int_{k\Delta}^{(k+1)\Delta}(\mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}},\widehat\mathbf{Y}_{s}^{\varepsilon,\delta})-\bar \mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}),\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{k\Delta}-\bar{\mathbf{X}}^{h}_{k\Delta})\d s\right|\right]\nonumber\\&\leq\frac{C_T}{\Delta}\max_{0\leq k\leq [T/\Delta]-1}\left[\mathbb{E}\left(\|\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{k\Delta}-\bar{\mathbf{X}}^{h}_{k\Delta}\|_{\mathbb{H}}^2\right)\right]^{1/2}\left[\mathbb{E}\left\|\int_{k\Delta}^{(k+1)\Delta}\mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}},\widehat\mathbf{Y}_{s}^{\varepsilon,\delta})-\bar \mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}})\d s\right\|_{\mathbb{H}}^2\right]^{1/2}\nonumber\\&\leq \frac{C_T\delta}{\Delta}\max_{0\leq k\leq [T/\Delta]-1}\left[\mathbb{E}\left(\|\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{k\Delta}-\bar{\mathbf{X}}^{h}_{k\Delta}\|_{\mathbb{H}}^2\right)\right]^{1/2}\left[\mathbb{E}\left\|\int_{0}^{\frac{\Delta}{\delta}}\mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}},\widehat\mathbf{Y}_{s\delta+k\Delta}^{\varepsilon,\delta})-\bar \mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}})\d s\right\|_{\mathbb{H}}^2\right]^{1/2}\nonumber\\&\leq\frac{C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},M,T}(1+\|\mathbf{x}\|_{\mathbb{H}}+\|\mathbf{y}\|_{\mathbb{H}})\delta}{\Delta}\max_{0\leq k\leq [T/\Delta]-1}\left[\int_0^{\frac{\Delta}{\delta}}\int_r^{\frac{\Delta}{\delta}}\Phi_k(s,r)\d s\d r\right]^{1/2},
\end{align}
where for any $0\leq r\leq s\leq \frac{\Delta}{\delta}$,
\begin{align}\label{3108}
\Phi_k(s,r):=\mathbb{E}\left[(\mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}},\widehat\mathbf{Y}_{s\delta+k\Delta}^{\varepsilon,\delta})-\bar \mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}),\mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}},\widehat\mathbf{Y}_{r\delta+k\Delta}^{\varepsilon,\delta})-\bar \mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}))\right].
\end{align}
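The double integral of $\Phi_k$ in the last line of \eqref{3107} arises from expanding the square of the time integral and using the symmetry of the integrand; writing, for brevity, $g(s):=\mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}},\widehat\mathbf{Y}_{s\delta+k\Delta}^{\varepsilon,\delta})-\bar \mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}})$, we have
\begin{align*}
\mathbb{E}\left\|\int_{0}^{\frac{\Delta}{\delta}}g(s)\d s\right\|_{\mathbb{H}}^2=\int_0^{\frac{\Delta}{\delta}}\int_0^{\frac{\Delta}{\delta}}\mathbb{E}\left[(g(s),g(r))\right]\d s\d r\leq 2\int_0^{\frac{\Delta}{\delta}}\int_r^{\frac{\Delta}{\delta}}\Phi_k(s,r)\d s\d r.
\end{align*}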
Now, for any $\varepsilon>0$, $s>0$ and $\mathscr{F}_s$-measurable $\mathbb{H}$-valued random variables $\mathbf{X}$ and $\mathbf{Y}$, let $\{\widetilde\mathbf{Y}_t^{\varepsilon,s,\mathbf{X},\mathbf{Y}}\}_{t\geq s}$ be the unique strong solution of the following It\^o stochastic differential equation:
\begin{equation}
\left\{
\begin{aligned}
\d\widetilde\mathbf{Y}_t&=-\frac{1}{\delta}[\mu\mathrm{A}\widetilde\mathbf{Y}_t+\alpha\widetilde\mathbf{Y}_t+\beta\mathcal{C}(\widetilde\mathbf{Y}_t)-\mathrm{G}(\mathbf{X},\widetilde\mathbf{Y}_t)]\d t+\frac{1}{\sqrt{\delta}}\sigma_2(\mathbf{X},\widetilde\mathbf{Y}_t)\mathrm{Q}_2^{1/2}\d\mathrm{W}_t,\\
\widetilde\mathbf{Y}_s&=\mathbf{Y}.
\end{aligned}
\right.
\end{equation}
Then, from the construction of the process $\widehat\mathbf{Y}_{t}^{\varepsilon,\delta}$ (see \eqref{3.37}), for any $t\in[k\Delta,(k+1)\Delta]$ with $k\in\mathbb{N}$, we have
\begin{align}
\widehat\mathbf{Y}_{t}^{\varepsilon,\delta}=\widetilde\mathbf{Y}_t^{\varepsilon,k\Delta,\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}},\widehat\mathbf{Y}_{k\Delta}^{\varepsilon,\delta}}, \ \mathbb{P}\text{-a.s.}
\end{align}
Using this fact in \eqref{3108}, we infer that
\begin{align}
\Phi_k(s,r)&=\mathbb{E}\left[(\mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}},\widetilde\mathbf{Y}_{s\delta+k\Delta}^{\varepsilon,k\Delta,\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}},\widehat\mathbf{Y}_{k\Delta}^{\varepsilon,\delta}})-\bar \mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}),\mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}},\widetilde\mathbf{Y}_{r\delta+k\Delta}^{\varepsilon,k\Delta,\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}},\widehat\mathbf{Y}_{k\Delta}^{\varepsilon,\delta}})-\bar \mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}))\right]\nonumber\\&=\int_{\Omega}\mathbb{E}\left[(\mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}},\widetilde\mathbf{Y}_{s\delta+k\Delta}^{\varepsilon,k\Delta,\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}},\widehat\mathbf{Y}_{k\Delta}^{\varepsilon,\delta}})-\bar \mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}),\right.\nonumber\\&\qquad\qquad\left.\mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}},\widetilde\mathbf{Y}_{r\delta+k\Delta}^{\varepsilon,k\Delta,\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}},\widehat\mathbf{Y}_{k\Delta}^{\varepsilon,\delta}})-\bar \mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}))\Big|{\mathscr{F}_{k\Delta}}\right]\d\mathbb{P}(\omega)\nonumber\\&=\int_{\Omega}\mathbb{E}\left[(\mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}(\omega),\widetilde\mathbf{Y}_{s\delta+k\Delta}^{\varepsilon,k\Delta,\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}(\omega),\widehat\mathbf{Y}_{k\Delta}^{\varepsilon,\delta}(\omega)})-\bar \mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}(\omega)),\right.\nonumber\\&\qquad\qquad\left.\mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}(\omega),\widetilde\mathbf{Y}_{r\delta+k\Delta}^{\varepsilon,k\Delta,\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}(\omega),\widehat\mathbf{Y}_{k\Delta}^{\varepsilon,\delta}(\omega)})-\bar \mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}(\omega)))\right]\mathbb{P}(\d\omega),
\end{align}
where in the final step we used the fact that the processes $\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}$ and $\widehat{\mathbf{Y}}_{k\Delta}^{\varepsilon,\delta}$ are $\mathscr{F}_{k\Delta}$-measurable, whereas the process $\{\widetilde\mathbf{Y}^{\varepsilon,k\Delta,\mathbf{x},\mathbf{y}}_{s\delta+k\Delta}\}_{s\geq 0}$ is independent of $\mathscr{F}_{k\Delta}$, for any fixed $(\mathbf{x},\mathbf{y})\in\mathbb{H}\times\mathbb{H}$. Moreover, from the definition of the process $\widetilde\mathbf{Y}^{\varepsilon,k\Delta,\mathbf{x},\mathbf{y}}_{s\delta+k\Delta}$, we obtain
\begin{align}\label{3112}
\widetilde\mathbf{Y}^{\varepsilon,k\Delta,\mathbf{x},\mathbf{y}}_{s\delta+k\Delta}&= \mathbf{y}-\frac{\mu}{\delta}\int_{k\Delta}^{s\delta+k\Delta}\mathrm{A}\widetilde{\mathbf{Y}}^{\varepsilon,k\Delta,\mathbf{x},\mathbf{y}}_r\d r-\frac{\alpha}{\delta}\int_{k\Delta}^{s\delta+k\Delta}\widetilde{\mathbf{Y}}^{\varepsilon,k\Delta,\mathbf{x},\mathbf{y}}_r\d r-\frac{\beta}{\delta}\int_{k\Delta}^{s\delta+k\Delta}\mathcal{C}(\widetilde{\mathbf{Y}}^{\varepsilon,k\Delta,\mathbf{x},\mathbf{y}}_r)\d r\nonumber\\&\quad+\frac{1}{\delta}\int_{k\Delta}^{s\delta+k\Delta}\mathrm{G}(\mathbf{x},\widetilde{\mathbf{Y}}^{\varepsilon,k\Delta,\mathbf{x},\mathbf{y}}_r)\d r+\frac{1}{\sqrt{\delta}}\int_{k\Delta}^{s\delta+k\Delta}\sigma_2(\mathbf{x},\widetilde{\mathbf{Y}}^{\varepsilon,k\Delta,\mathbf{x},\mathbf{y}}_r)\mathrm{Q}_2^{1/2}\d\mathrm{W}_r\nonumber\\&=\mathbf{y}-\frac{\mu}{\delta}\int_{0}^{s\delta}\mathrm{A}\widetilde{\mathbf{Y}}^{\varepsilon,k\Delta,\mathbf{x},\mathbf{y}}_{r+k\Delta}\d r-\frac{\alpha}{\delta}\int_{0}^{s\delta}\widetilde{\mathbf{Y}}^{\varepsilon,k\Delta,\mathbf{x},\mathbf{y}}_{r+k\Delta}\d r-\frac{\beta}{\delta}\int_{0}^{s\delta}\mathcal{C}(\widetilde{\mathbf{Y}}^{\varepsilon,k\Delta,\mathbf{x},\mathbf{y}}_{r+k\Delta})\d r\nonumber\\&\quad+\frac{1}{\delta}\int_{0}^{s\delta}\mathrm{G}(\mathbf{x},\widetilde{\mathbf{Y}}^{\varepsilon,k\Delta,\mathbf{x},\mathbf{y}}_{r+k\Delta})\d r+\frac{1}{\sqrt{\delta}}\int_{0}^{s\delta}\sigma_2(\mathbf{x},\widetilde{\mathbf{Y}}^{\varepsilon,k\Delta,\mathbf{x},\mathbf{y}}_{r+k\Delta})\mathrm{Q}_2^{1/2}\d\mathrm{W}^{k\Delta}_r\nonumber\\&=\mathbf{y}-\mu\int_{0}^{s}\mathrm{A}\widetilde{\mathbf{Y}}^{\varepsilon,k\Delta,\mathbf{x},\mathbf{y}}_{r\delta+k\Delta}\d r-\alpha\int_{0}^{s}\widetilde{\mathbf{Y}}^{\varepsilon,k\Delta,\mathbf{x},\mathbf{y}}_{r\delta+k\Delta}\d r-\beta\int_{0}^{s}\mathcal{C}(\widetilde{\mathbf{Y}}^{\varepsilon,k\Delta,\mathbf{x},\mathbf{y}}_{r\delta+k\Delta})\d r\nonumber\\&\quad+\int_{0}^{s}\mathrm{G}(\mathbf{x},\widetilde{\mathbf{Y}}^{\varepsilon,k\Delta,\mathbf{x},\mathbf{y}}_{r\delta+k\Delta})\d r+\int_{0}^{s}\sigma_2(\mathbf{x},\widetilde{\mathbf{Y}}^{\varepsilon,k\Delta,\mathbf{x},\mathbf{y}}_{r\delta+k\Delta})\mathrm{Q}_2^{1/2}\d\widehat\mathrm{W}^{k\Delta}_r,
\end{align}
where $\{\mathrm{W}_r^{k\Delta}:=\mathrm{W}_{r+k\Delta}-\mathrm{W}_{k\Delta}\}_{r\geq 0}$ is the shifted version of $\mathrm{W}_r$ and $\{\widehat\mathrm{W}^{k\Delta}_r:=\frac{1}{\sqrt{\delta}}\mathrm{W}_{r\delta}^{k\Delta}\}_{r\geq 0}$ is its time-rescaled version. Using the uniqueness of the strong solutions of \eqref{3.74} and \eqref{3112}, we infer that
\begin{align}
\mathscr{L}\left(\{\widetilde\mathbf{Y}^{\varepsilon,k\Delta,\mathbf{x},\mathbf{y}}_{s\delta+k\Delta}\}_{0\leq s\leq\frac{\Delta}{\delta}}\right)=\mathscr{L}\left(\{\mathbf{Y}^{\mathbf{x},\mathbf{y}}_{s}\}_{0\leq s\leq\frac{\Delta}{\delta}}\right),
\end{align}
where $\mathscr{L}(\cdot)$ denotes the law (distribution) of a process. Using the Markov property, Proposition \ref{prop3.12}, and the estimates \eqref{5.5z} and \eqref{341}, we estimate $\Phi_{k}(\cdot,\cdot)$ as
\begin{align}\label{3114}
\Phi_k(s,r)&=\int_{\Omega}\widetilde\mathbb{E}\left[(\mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}(\omega),\mathbf{Y}_{s}^{\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}(\omega),\widehat\mathbf{Y}_{k\Delta}^{\varepsilon,\delta}(\omega)})-\bar \mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}(\omega)),\right.\nonumber\\&\qquad\left.\mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}(\omega),\mathbf{Y}_{r}^{\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}(\omega),\widehat\mathbf{Y}_{k\Delta}^{\varepsilon,\delta}(\omega)})-\bar \mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}(\omega)))\right]\mathbb{P}(\d\omega)\nonumber\\&=\int_{\Omega}\int_{\widetilde{\Omega}}\Big(\widetilde\mathbb{E}\Big[\mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}(\omega),\mathbf{Y}_{s-r}^{\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}(\omega),\mathbf{z}})-\bar \mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}(\omega))\Big],\nonumber\\&\qquad \mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}(\omega),\mathbf{z})-\bar \mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}(\omega))\Big)\Big|_{\mathbf{z}=\mathbf{Y}_{r}^{\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}(\omega),\widehat\mathbf{Y}_{k\Delta}^{\varepsilon,\delta}(\omega)}(\widetilde\omega)}\widetilde{\mathbb{P}}(\d\widetilde{\omega})\mathbb{P}(\d\omega)\nonumber\\&\leq \int_{\Omega}\int_{\widetilde{\Omega}}\left\|\widetilde\mathbb{E}\Big[\mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}(\omega),\mathbf{Y}_{s-r}^{\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}(\omega),\mathbf{z}})-\bar \mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}(\omega))\Big]\right\|_{\mathbb{H}}\nonumber\\&\qquad\times \|\mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}(\omega),\mathbf{z})-\bar \mathrm{F}(\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}(\omega))\|_{\mathbb{H}}\Big|_{\mathbf{z}=\mathbf{Y}_{r}^{\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}(\omega),\widehat\mathbf{Y}_{k\Delta}^{\varepsilon,\delta}(\omega)}(\widetilde\omega)}\widetilde{\mathbb{P}}(\d\widetilde{\omega})\mathbb{P}(\d\omega)\nonumber\\&\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}}}\int_{\Omega}\int_{\widetilde{\Omega}}\left[1+\|\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}(\omega)\|_{\mathbb{H}}+\|\mathbf{Y}_{r}^{\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}(\omega),\widehat\mathbf{Y}_{k\Delta}^{\varepsilon,\delta}(\omega)}(\widetilde\omega)\|_{\mathbb{H}}\right]e^{-\frac{(s-r)\zeta}{2}}\nonumber\\&\qquad \times\left[1+\|\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}(\omega)\|_{\mathbb{H}}+\|\mathbf{Y}_{r}^{\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}(\omega),\widehat\mathbf{Y}_{k\Delta}^{\varepsilon,\delta}(\omega)}(\widetilde\omega)\|_{\mathbb{H}}\right]\widetilde{\mathbb{P}}(\d\widetilde{\omega})\mathbb{P}(\d\omega)\nonumber\\&\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}}}\int_{\Omega}\left(1+\|\mathbf{X}_{k\Delta}^{\varepsilon,\delta,h^{\varepsilon}}(\omega)\|_{\mathbb{H}}^2+\|\widehat\mathbf{Y}_{k\Delta}^{\varepsilon,\delta}(\omega)\|_{\mathbb{H}}^2\right)\mathbb{P}(\d\omega)e^{-\frac{(s-r)\zeta}{2}}\nonumber\\&\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},M,T}(1+\|\mathbf{x}\|_{\mathbb{H}}^2+\|\mathbf{y}\|_{\mathbb{H}}^2)e^{-\frac{(s-r)\zeta}{2}}.
\end{align}
Using \eqref{3114} in \eqref{3107}, we obtain
\begin{align}\label{3115}
\mathbb{E}\left[\sup\limits_{t\in[0,T]}|I_1(t)|\right]&\leq\frac{C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},M,T}(1+\|\mathbf{x}\|_{\mathbb{H}}^2+\|\mathbf{y}\|_{\mathbb{H}}^2)\delta}{\Delta}\left[\int_0^{\frac{\Delta}{\delta}}\int_r^{\frac{\Delta}{\delta}}e^{-\frac{(s-r)\zeta}{2}}\d s\d r\right]^{1/2}\nonumber\\&={C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},M,T}(1+\|\mathbf{x}\|_{\mathbb{H}}^2+\|\mathbf{y}\|_{\mathbb{H}}^2)}\frac{\delta}{\Delta}\left(\frac{2}{\zeta}\right)^{1/2}\left[\frac{\Delta}{\delta}+\frac{2}{\zeta}\left(e^{-\frac{\Delta\zeta}{2\delta}}-1\right)\right]^{1/2}\nonumber\\&\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},M,T}(1+\|\mathbf{x}\|_{\mathbb{H}}^2+\|\mathbf{y}\|_{\mathbb{H}}^2)\left[\left(\frac{\delta}{\Delta}\right)^{1/2}+\frac{\delta}{\Delta}\right].
\end{align}
Combining the estimates \eqref{4129} and \eqref{3115}, we get
\begin{align}\label{3116}
\mathbb{E}\left[\sup\limits_{t\in[0,T\wedge\widetilde\tau_R^{\varepsilon}]}|I(t)|\right]&\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},M,T}(1+\|\mathbf{x}\|_{\mathbb{H}}^2+\|\mathbf{y}\|_{\mathbb{H}}^2)\left[\left(\frac{\delta}{\Delta}\right)^{1/2}+\frac{\delta}{\Delta}+\Delta^{1/2}\right].
\end{align}
Using \eqref{3116} in \eqref{3104}, we finally find
\begin{align}\label{3z25}
& \mathbb{E}\left[ \sup_{t\in[0,T\wedge\widetilde\tau_R^{\varepsilon}]} \|\mathbf{Z}^{\varepsilon}_{t}\|_{\mathbb{H}}^2+\mu\int_0^{T\wedge\widetilde\tau_R^{\varepsilon}}\|\mathbf{Z}^{\varepsilon}_t\|_{\mathbb{V}}^2\d t+\frac{\beta}{2^{r-2}}\int_0^{T\wedge\widetilde\tau_R^{\varepsilon}}\|\mathbf{Z}^{\varepsilon}_t\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d t\right]\nonumber\\&\leq C_{\mu,\alpha,\beta,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},M,T,R}(1+\|\mathbf{x}\|_{\mathbb{H}}^3+\|\mathbf{y}\|_{\mathbb{H}}^3)\left[\left(\frac{\delta}{\varepsilon}\right)+\Delta^{1/4}\right]\nonumber\\&\quad+\mathbb{E}\left[\int_0^{T}\|\sigma_1(\bar\mathbf{X}^{h}_t)\mathrm{Q}_1^{1/2}(h^{\varepsilon}_t-h_t)\|_{\mathbb{H}}^2\d t\right]+\varepsilon^2T\left(1+\sup_{t\in[0,T]}\|\bar\mathbf{X}_{t}^h\|_{\mathbb{H}}^2\right)\nonumber\\&\quad+C_{\mu,\alpha,\beta,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},T}(1+\|\mathbf{x}\|_{\mathbb{H}}^2+\|\mathbf{y}\|_{\mathbb{H}}^2)\left[\left(\frac{\delta}{\Delta}\right)^{1/2}+\frac{\delta}{\Delta}+\Delta^{1/2}\right]\nonumber\\&\leq C_{R,\mu,\alpha,\beta,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},T}(1+\|\mathbf{x}\|_{\mathbb{H}}^3+\|\mathbf{y}\|_{\mathbb{H}}^3)\left[\varepsilon^2+\left(\frac{\delta}{\varepsilon}\right)+\left(\frac{\delta}{\Delta}\right)^{1/2}+\Delta^{1/4}\right]\nonumber\\&\quad+\mathbb{E}\left[\int_0^{T}\|\sigma_1(\bar\mathbf{X}^{h}_t)\mathrm{Q}_1^{1/2}(h^{\varepsilon}_t-h_t)\|_{\mathbb{H}}^2\d t\right],
\end{align}
and by choosing $\Delta=\delta^{1/2}$, we obtain \eqref{389}; indeed, for this choice $\left(\frac{\delta}{\Delta}\right)^{1/2}=\delta^{1/4}$ and $\Delta^{1/4}=\delta^{1/8}$, so that the right-hand side of \eqref{3z25} is of order $\varepsilon^2+\frac{\delta}{\varepsilon}+\delta^{1/8}$ (since $\delta^{1/4}\leq\delta^{1/8}$ for $\delta\leq 1$).
\vskip 2 mm
\noindent \textbf{Case 2: $n=2,3$ and $r\in(3,\infty)$.} Let us now discuss the case $n=3$ and $r\in(3,\infty)$. We only need to estimate the term $2|\langle(\mathrm{B}(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}})-\mathrm{B}(\bar{\mathbf{X}}^{h})),\mathbf{Z}^{\varepsilon}\rangle|$ to obtain the required result. From the estimate \eqref{2.23}, we easily have
\begin{align}\label{2z27}
-2 \beta \langle\mathcal{C}(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}})-\mathcal{C}(\bar{\mathbf{X}}^{h}),\mathbf{Z}^{\varepsilon}\rangle \leq- \beta\||\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}|^{\frac{r-1}{2}}\mathbf{Z}^{\varepsilon}\|_{\mathbb{H}}^2- \beta\||\bar{\mathbf{X}}^{h}|^{\frac{r-1}{2}}\mathbf{Z}^{\varepsilon}\|_{\mathbb{H}}^2.
\end{align}
Using H\"older's and Young's inequalities, we estimate the term $2|\langle(\mathrm{B}(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}})-\mathrm{B}(\bar{\mathbf{X}}^{h})),\mathbf{Z}^{\varepsilon}\rangle|=2|\langle\mathrm{B}(\mathbf{Z}^{\varepsilon},\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}),\mathbf{Z}^{\varepsilon}\rangle |$ as
\begin{align}\label{2z28}
2|\langle\mathrm{B}(\mathbf{Z}^{\varepsilon},\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}),\mathbf{Z}^{\varepsilon}\rangle |&\leq 2\|\mathbf{Z}^{\varepsilon}\|_{\mathbb{V}}\|\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}\mathbf{Z}^{\varepsilon}\|_{\mathbb{H}}\leq\mu\|\mathbf{Z}^{\varepsilon}\|_{\mathbb{V}}^2+\frac{1}{\mu }\|\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}\mathbf{Z}^{\varepsilon}\|_{\mathbb{H}}^2.
\end{align}
We take the term $\|\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}\mathbf{Z}^{\varepsilon}\|_{\mathbb{H}}^2$ from \eqref{2z28} and perform a calculation similar to that in \eqref{6.30} to deduce
\begin{align}\label{2z29}
\|\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}\mathbf{Z}^{\varepsilon}\|_{\mathbb{H}}^2\leq\frac{\beta\mu}{2}\left(\int_{\mathcal{O}}|\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}(x)|^{r-1}|\mathbf{Z}^{\varepsilon}(x)|^2\d x\right)+\frac{r-3}{r-1}\left(\frac{4}{\beta\mu (r-1)}\right)^{\frac{2}{r-3}}\left(\int_{\mathcal{O}}|\mathbf{Z}^{\varepsilon}(x)|^2\d x\right),
\end{align}
for $r>3$. Combining \eqref{2z27}, \eqref{2z28} and \eqref{2z29}, we obtain
\begin{align}\label{2z30}
&-2 \beta \langle\mathcal{C}(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}})-\mathcal{C}(\bar{\mathbf{X}}^{h}),\mathbf{Z}^{\varepsilon}\rangle-2\langle\mathrm{B}(\mathbf{Z}^{\varepsilon},\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}),\mathbf{Z}^{\varepsilon}\rangle\nonumber\\&\leq -\frac{\beta}{2}\||\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}|^{\frac{r-1}{2}}\mathbf{Z}^{\varepsilon}\|_{\mathbb{H}}^2- \frac{\beta}{2}\||\bar{\mathbf{X}}^{h}|^{\frac{r-1}{2}}\mathbf{Z}^{\varepsilon}\|_{\mathbb{H}}^2+ \frac{r-3}{\mu(r-1)}\left(\frac{4}{\beta\mu (r-1)}\right)^{\frac{2}{r-3}}\|\mathbf{Z}^{\varepsilon}\|_{\mathbb{H}}^2.
\end{align}
Thus a calculation similar to the estimate \eqref{3.99} yields
\begin{align}
& \|\mathbf{Z}^{\varepsilon}_{t}\|_{\mathbb{H}}^2+\mu\int_0^t\|\mathbf{Z}^{\varepsilon}_s\|_{\mathbb{V}}^2\d s+\frac{\beta}{2^{r-1}}\int_0^t\|\mathbf{Z}^{\varepsilon}_s\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d s\nonumber\\&\leq \left[\frac{r-3}{\mu(r-1)}\left(\frac{4}{\beta\mu (r-1)}\right)^{\frac{4}{r-3}}+C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2}}\right]\int_0^t\|\mathbf{Z}^{\varepsilon}_s\|_{\mathbb{H}}^2\d s\nonumber\\&\quad+C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2}}\int_0^t\|\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}-\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}^2\d s+C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2}}\int_0^t\|\mathbf{Y}_{s}^{\varepsilon,\delta,h^{\varepsilon}}-\widehat\mathbf{Y}_{s}^{\varepsilon,\delta}\|_{\mathbb{H}}^2\d s\nonumber\\&\quad+C\int_0^{t}\|h^{\varepsilon}_s\|_{\mathbb{H}}^2\|\mathbf{Z}^{\varepsilon}_s\|_{\mathbb{H}}^2\d s +\int_0^{t}\|\sigma_1(\bar\mathbf{X}^{h}_s)\mathrm{Q}_1^{1/2}(h^{\varepsilon}_s-h_s)\|_{\mathbb{H}}^2\d s+C\varepsilon^2\int_0^{t}\left(1+\|\bar\mathbf{X}_{s}^h\|_{\mathbb{H}}^2\right)\d s\nonumber\\&\quad+C\left(\int_0^t(1+\|\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{H}}^2+\|\widehat\mathbf{Y}_{s}^{\varepsilon,\delta})\|_{\mathbb{H}}^2)\d s\right)^{1/2}\left(\int_0^t(\|\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{s}-\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}_{s(\Delta)}\|_{\mathbb{H}}^2+\|\bar{\mathbf{X}}^{h}_s-\bar{\mathbf{X}}^{h}_{s(\Delta)}\|_{\mathbb{H}}^2)\d s\right)^{1/2} \nonumber\\&\quad +2\int_0^t(\mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}},\widehat\mathbf{Y}_{s}^{\varepsilon,\delta})-\bar \mathrm{F}(\mathbf{X}_{s(\Delta)}^{\varepsilon,\delta,h^{\varepsilon}}),\mathbf{Z}^{\varepsilon}_{s(\Delta)})\d s\nonumber\\&\quad+2\sqrt{\varepsilon}\int_0^t([\sigma_1(\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}})-\sigma_1(\bar{\mathbf{X}}^{h}_s)]\mathrm{Q}_1^{1/2}\d\mathrm{W}_s,\mathbf{Z}^{\varepsilon}_s),
\end{align}
for all $t\in[0,T]$, $\mathbb{P}$-a.s. Applying Gronwall's inequality and then using a calculation similar to \eqref{3z25} yields the estimate \eqref{390} for the case $n=2,3$ and $r\in(3,\infty)$.
\vskip 2 mm
\noindent \textbf{Case 3: $n=r=3$ and $2\beta\mu>1$.}
For $n=r=3$, from \eqref{2.23}, we have
\begin{align}\label{2z31}
-2 \beta \langle\mathcal{C}(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}})-\mathcal{C}(\bar{\mathbf{X}}^{h}),\mathbf{Z}^{\varepsilon}\rangle \leq- \beta\|\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}\mathbf{Z}^{\varepsilon}\|_{\mathbb{H}}^2- \beta\|\bar{\mathbf{X}}^{h}\mathbf{Z}^{\varepsilon}\|_{\mathbb{H}}^2,
\end{align}
and a calculation similar to \eqref{2z28} gives
\begin{align}\label{3z72}
2|\langle\mathrm{B}(\mathbf{Z}^{\varepsilon},\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}),\mathbf{Z}^{\varepsilon}\rangle |&\leq 2\|\mathbf{Z}^{\varepsilon}\|_{\mathbb{V}}\|\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}\mathbf{Z}^{\varepsilon}\|_{\mathbb{H}}\leq \theta\mu\|\mathbf{Z}^{\varepsilon}\|_{\mathbb{V}}^2+\frac{1}{\theta\mu }\|\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}\mathbf{Z}^{\varepsilon}\|_{\mathbb{H}}^2.
\end{align}
Combining \eqref{2z31} and \eqref{3z72}, we obtain
\begin{align}\label{3125}
&-2\mu\langle\mathrm{A}\mathbf{Z}^{\varepsilon},\mathbf{Z}^{\varepsilon}\rangle-2 \beta \langle\mathcal{C}(\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}})-\mathcal{C}(\bar{\mathbf{X}}^{h}),\mathbf{Z}^{\varepsilon}\rangle-2\langle\mathrm{B}(\mathbf{Z}^{\varepsilon},\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}),\mathbf{Z}^{\varepsilon}\rangle\nonumber\\&\leq (2-\theta)\mu\|\mathbf{Z}^{\varepsilon}\|_{\mathbb{V}}^2 -\left(\beta-\frac{1}{\theta\mu }\right)\|\mathbf{X}^{\varepsilon,\delta,h^{\varepsilon}}\mathbf{Z}^{\varepsilon}\|_{\mathbb{H}}^2- \beta\|\bar{\mathbf{X}}^{h}\mathbf{Z}^{\varepsilon}\|_{\mathbb{H}}^2,
\end{align}
for $\frac{1}{\beta\mu}<\theta<2$, and hence the estimate \eqref{390} follows for $2\beta\mu> 1$.
Note that, by making use of the estimates \eqref{2z30} and \eqref{3125}, we do not need the stopping time defined in \eqref{stop1} to obtain the required estimate \eqref{390}.
\end{proof}
Let us now establish the weak convergence result. The well-known Skorokhod representation theorem (see \cite{AVS}) states that if $\mu_n$, $n=1,2,\ldots,$ and $\mu_0$ are probability measures on a complete separable metric space (Polish space) such that $\mu_n\xrightarrow{w}\mu_0$, as $n \to \infty$, then there exist a probability space $(\widetilde{\Omega},\widetilde{\mathscr{F}},\widetilde{\mathbb{P}})$ and a sequence of measurable random elements $\mathrm{X}_n$ such that $\mathrm{X}_n\to\mathrm{X}_0$, $\widetilde{\mathbb{P}}$-a.s., and $\mathrm{X}_n$ has the distribution $\mu_n$, for $n = 0, 1, 2, \ldots$ ($\mathrm{X}_n\sim\mu_n$), that is, the law of $\mathrm{X}_n$ is $\mu_n$. We use this representation theorem in the next theorem.
\begin{theorem}[Weak convergence]\label{weak}
Let $\big\{h^{\varepsilon} : \varepsilon > 0\big\}\subset \mathcal{A}_M$ converge in distribution to $h$ with respect to the weak topology on $\mathrm{L}^2(0,T;\mathbb{H})$. Then $ \mathcal{G}^{\varepsilon}\left(\mathrm{W}_{\cdot} +\frac{1}{\sqrt{\varepsilon}}\int_0^{\cdot}h^{\varepsilon}_s\d s\right)$ converges in distribution to $\mathcal{G}^0\left(\int_0^{\cdot}h_s\d s\right)$ in $\mathscr{E}$, as $\varepsilon\to0$.
\end{theorem}
\begin{proof}
Let $\{h^{\varepsilon}\}$ converge to $h$ in distribution as random elements taking values in $\mathcal{S}_M$, where $\mathcal{S}_M$ is equipped with the weak topology.
Since $\mathcal{A}_M$ is Polish (see Section \ref{sec4.2} and \cite{BD2}) and $\big\{h^{\varepsilon} : \varepsilon > 0\big\}\subset \mathcal{A}_M$ converges in distribution to $h$ with respect to the weak topology on $\mathrm{L}^2(0,T;\mathbb{H})$, the Skorokhod representation theorem can be used to construct a filtered probability space $(\widetilde{\Omega},\widetilde{\mathscr{F}},(\widetilde{\mathscr{F}}_t)_{0\leq t\leq T},\widetilde{\mathbb{P}})$ and processes $(\widetilde h^{\varepsilon},\widetilde h, \widetilde\mathrm{W}^{\varepsilon})$ such that the distribution of $(\widetilde h^{\varepsilon}, \widetilde\mathrm{W}^{\varepsilon})$
is the same as that of $(h^{\varepsilon}, \mathrm{W}^{\varepsilon})$, and $\widetilde h^{\varepsilon}\to \widetilde h$, $\widetilde{\mathbb{P}}$-a.s., in the weak topology of $\mathcal{S}_M$. Thus
$\int_0^{t} \widetilde h^{\varepsilon}(s)\d s\to\int_0^{t} \widetilde h(s)\d s$ weakly in $\mathbb{H}$, $\widetilde{\mathbb{P}}$-a.s., for all $t\in[0,T]$. In the sequel, without loss of
generality, we write $(\Omega,\mathscr{F},\mathbb{P})$ for the probability space and $(h^{\varepsilon},h,\mathrm{W})$ for the processes, though, strictly speaking, one should write $(\widetilde{\Omega},\widetilde{\mathscr{F}},\widetilde{\mathbb{P}})$ and $(\widetilde h^{\varepsilon}, \widetilde h, \widetilde{\mathrm{W}}^{\varepsilon})$, respectively.
Let us define $\mathbf{Z}^{\varepsilon}_t:=\mathbf{X}_{t}^{\varepsilon,\delta,h^{\varepsilon}}-\bar{\mathbf{X}}_{t}^h$, where $\mathbf{Z}^{\varepsilon}_t$ satisfies the stochastic differential equation given in \eqref{3.91}. We first prove the theorem for $n=2$ and $r\in[1,3]$. Let $\widetilde\tau_R^{\varepsilon}$ be the stopping time defined in \eqref{stop1}. Then, for any $\eta>0$, by using Markov's inequality, we have
\begin{align}\label{3130}
&\mathbb{P}\left\{\left(\sup_{t\in[0,T]}\|\mathbf{Z}^{\varepsilon}_t\|_{\mathbb{H}}^2+\int_0^T\|\mathbf{Z}^{\varepsilon}_t\|_{\mathbb{V}}^2\d t+\int_0^T\|\mathbf{Z}^{\varepsilon}_t\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d t\right)^{1/2}>\eta\right\}\nonumber\\&\leq\frac{1}{\eta}\mathbb{E}\left[\left(\sup_{t\in[0,T]}\|\mathbf{Z}^{\varepsilon}_t\|_{\mathbb{H}}^2+\int_0^T\|\mathbf{Z}^{\varepsilon}_t\|_{\mathbb{V}}^2\d t+\int_0^T\|\mathbf{Z}^{\varepsilon}_t\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d t\right)^{1/2}\right]\nonumber\\&=\frac{1}{\eta} \mathbb{E}\left[\left(\sup_{t\in[0,T]}\|\mathbf{Z}^{\varepsilon}_t\|_{\mathbb{H}}^2+\int_0^T\|\mathbf{Z}^{\varepsilon}_t\|_{\mathbb{V}}^2\d t+\int_0^T\|\mathbf{Z}^{\varepsilon}_t\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d t\right)^{1/2}\chi_{\{T\leq\widetilde\tau_R^{\varepsilon}\}}\right]\nonumber\\&\quad+\frac{1}{\eta}\mathbb{E}\left[\left(\sup_{t\in[0,T]}\|\mathbf{Z}^{\varepsilon}_t\|_{\mathbb{H}}^2+\int_0^T\|\mathbf{Z}^{\varepsilon}_t\|_{\mathbb{V}}^2\d t+\int_0^T\|\mathbf{Z}^{\varepsilon}_t\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d t\right)^{1/2}\chi_{\{T>\widetilde\tau_R^{\varepsilon}\}}\right],
\end{align}
where $\chi_{\{\cdot\}}$ denotes the indicator function. From Lemma \ref{lem3.14} (see \eqref{389}), we obtain
\begin{align}\label{3131}
& \mathbb{E}\left[\left(\sup_{t\in[0,T]}\|\mathbf{Z}^{\varepsilon}_t\|_{\mathbb{H}}^2+\int_0^T\|\mathbf{Z}^{\varepsilon}_t\|_{\mathbb{V}}^2\d t+\int_0^T\|\mathbf{Z}^{\varepsilon}_t\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d t\right)^{1/2}\chi_{\{T\leq\widetilde\tau_R^{\varepsilon}\}}\right]\nonumber\\&\leq \mathbb{E}\left[\left(\sup_{t\in[0,T]}\|\mathbf{Z}^{\varepsilon}_t\|_{\mathbb{H}}^2+\int_0^T\|\mathbf{Z}^{\varepsilon}_t\|_{\mathbb{V}}^2\d t+\int_0^T\|\mathbf{Z}^{\varepsilon}_t\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d t\right)\chi_{\{T\leq\widetilde\tau_R^{\varepsilon}\}}\right]^{1/2}\nonumber\\&\leq \bigg\{C_{\mu,\alpha,\beta,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},M,T,R}(1+\|\mathbf{x}\|_{\mathbb{H}}^3+\|\mathbf{y}\|_{\mathbb{H}}^3)\left[\varepsilon^2+\left(\frac{\delta}{\varepsilon}\right)+\delta^{1/8}\right]\nonumber\\&\quad+\mathbb{E}\left[\int_0^{T}\|\sigma_1(\bar\mathbf{X}^{h}_t)\mathrm{Q}_1^{1/2}(h^{\varepsilon}_t-h_t)\|_{\mathbb{H}}^2\d t\right]\bigg\}^{1/2}.
\end{align}
Using H\"older's and Markov’s inequalities, \eqref{5.5y} and \eqref{5.5z}, we estimate the second term from the right hand side of the inequality \eqref{3130} as
\begin{align}\label{3132}
&\mathbb{E}\left[\left(\sup_{t\in[0,T]}\|\mathbf{Z}^{\varepsilon}_t\|_{\mathbb{H}}^2+\int_0^T\|\mathbf{Z}^{\varepsilon}_t\|_{\mathbb{V}}^2\d t+\int_0^T\|\mathbf{Z}^{\varepsilon}_t\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d t\right)^{1/2}\chi_{\{T>\widetilde\tau_R^{\varepsilon}\}}\right]\nonumber\\&\leq \left[\mathbb{E}\left(\sup_{t\in[0,T]}\|\mathbf{Z}^{\varepsilon}_t\|_{\mathbb{H}}^2+\int_0^T\|\mathbf{Z}^{\varepsilon}_t\|_{\mathbb{V}}^2\d t+\int_0^T\|\mathbf{Z}^{\varepsilon}_t\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d t\right)\right]^{1/2}[\mathbb{P}(T>\widetilde\tau_R^{\varepsilon})]^{1/2}\nonumber\\&\leq\frac{C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},M,T}(1+\|\mathbf{x}\|_{\mathbb{H}}^2+\|\mathbf{y}\|_{\mathbb{H}}^2)^{1/2}}{\sqrt{R}}\left[\mathbb{E}\left(\int_0^T\|\mathbf{X}_{s}^{\varepsilon,\delta,h^{\varepsilon}}\|_{\mathbb{V}}^2\d s\right)\right]^{1/2}\nonumber\\&\leq \frac{C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},M,T}(1+\|\mathbf{x}\|_{\mathbb{H}}^2+\|\mathbf{y}\|_{\mathbb{H}}^2)}{\sqrt{R}}.
\end{align}
Combining \eqref{3131}-\eqref{3132} and substituting them in \eqref{3130}, we find
\begin{align}\label{3133}
&\mathbb{P}\left\{\left(\sup_{t\in[0,T]}\|\mathbf{Z}^{\varepsilon}_t\|_{\mathbb{H}}^2+\int_0^T\|\mathbf{Z}^{\varepsilon}_t\|_{\mathbb{V}}^2\d t+\int_0^T\|\mathbf{Z}^{\varepsilon}_t\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d t\right)^{1/2}>\eta\right\}\nonumber\\&\leq \frac{1}{\eta}\bigg\{C_{\mu,\alpha,\beta,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},M,T,R}(1+\|\mathbf{x}\|_{\mathbb{H}}^3+\|\mathbf{y}\|_{\mathbb{H}}^3)\left[\varepsilon^2+\left(\frac{\delta}{\varepsilon}\right)+\delta^{1/8}\right]\nonumber\\&\quad+\mathbb{E}\left[\int_0^{T}\|\sigma_1(\bar\mathbf{X}^{h}_t)\mathrm{Q}_1^{1/2}(h^{\varepsilon}_t-h_t)\|_{\mathbb{H}}^2\d t\right]\bigg\}^{1/2}+\frac{C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},M,T}(1+\|\mathbf{x}\|_{\mathbb{H}}^2+\|\mathbf{y}\|_{\mathbb{H}}^2)}{\eta\sqrt{R}}.
\end{align}
Once again, we use the fact that compact operators map weakly convergent sequences into strongly convergent sequences. Since $\sigma_1(\cdot)$ is compact and $\big\{h^{\varepsilon} : \varepsilon > 0\big\}\subset \mathcal{A}_M$ converges in distribution to $h$ with respect to the weak topology on $\mathrm{L}^2(0,T;\mathbb{H})$, we get
\begin{align*}
\int_0^{T}\|\sigma_1(\bar\mathbf{X}^{h}_t)\mathrm{Q}_1^{1/2}(h^{\varepsilon}_t-h_t)\|_{\mathbb{H}}^2\d t\to 0, \ \text{ as }\ \varepsilon\to 0,\ \mathbb{P}\text{-a.s.}
\end{align*}
Furthermore, we have
\begin{align*}
\mathbb{E}\left[\int_0^{T}\|\sigma_1(\bar\mathbf{X}^{h}_t)\mathrm{Q}_1^{1/2}(h^{\varepsilon}_t-h_t)\|_{\mathbb{H}}^2\d t\right]&\leq\mathbb{E}\left[\int_0^T\|\sigma_1(\bar\mathbf{X}^{h}_t)\mathrm{Q}_1^{1/2}\|_{\mathcal{L}_2}^2\|h^{\varepsilon}_t-h_t\|_{\mathbb{H}}^2\d t\right]\nonumber\\&\leq C\sup_{t\in[0,T]}(1+\|\bar\mathbf{X}^{h}_t\|_{\mathbb{H}}^2)\mathbb{E}\left[\int_0^T\|h_t^{\varepsilon}\|_{\mathbb{H}}^2\d t+\int_0^T\|h_t\|_{\mathbb{H}}^2\d t\right]\nonumber\\&\leq C_{\mu,\alpha,\lambda_1,L_{\mathrm{G}},L_{\sigma_2},M,T}\left(1+\|\mathbf{x}\|_{\mathbb{H}}^2\right),
\end{align*}
where we used \eqref{5.5y} and the fact that $h^{\varepsilon} \in\mathcal{A}_M$ and $h\in\mathcal{S}_M$. Thus, an application of the dominated convergence theorem gives
\begin{align}\label{461}
\mathbb{E}\left[\int_0^{T}\|\sigma_1(\bar\mathbf{X}^{h}_t)\mathrm{Q}_1^{1/2}(h^{\varepsilon}_t-h_t)\|_{\mathbb{H}}^2\d t\right]\to 0, \ \text{ as }\ \varepsilon\to 0.
\end{align}
Letting $\varepsilon\to 0$ and then $R\to\infty$ in \eqref{3133}, we obtain
\begin{align}\label{3134}
&\lim_{\varepsilon\to 0}\mathbb{P}\left\{\left(\sup_{t\in[0,T]}\|\mathbf{Z}^{\varepsilon}_t\|_{\mathbb{H}}^2+\int_0^T\|\mathbf{Z}^{\varepsilon}_t\|_{\mathbb{V}}^2\d t+\int_0^T\|\mathbf{Z}^{\varepsilon}_t\|_{\widetilde\mathbb{L}^{r+1}}^{r+1}\d t\right)^{1/2}>\eta\right\}=0,
\end{align}
for all $\eta>0$, which completes the proof for the case $n=2$ and $r\in[1,3]$.
For $n=2$, $r\in(3,\infty)$ and $n=3$, $r\in[3,\infty)$ ($2\beta\mu> 1,$ for $r=3$), one has to use the stopping time $\tau^{\varepsilon}_R$ defined in \eqref{stop} for the estimate in \eqref{3130} and then use the estimate \eqref{390} to obtain the convergence given in \eqref{3134}.
\end{proof}
\begin{remark}\label{rem4.22}
Note that the LDP for $\{\mathbf{X}^{\varepsilon,\delta}\}$ proved in Theorem \ref{thm4.14} holds true for the case $n=2$ and $r\in[1,3]$ with $\alpha=\beta=0$. Therefore, the results obtained in this work are valid for the 2D Navier-Stokes equations as well. In this case, the state space becomes $\mathscr{E}=\mathrm{C}([0,T];\mathbb{H})\cap\mathrm{L}^2(0,T;\mathbb{V})$ (see Remark \ref{rem3.4}).
\end{remark}
\medskip\noindent
{\bf Acknowledgments:} M. T. Mohan would like to thank the Department of Science and Technology (DST), India for Innovation in Science Pursuit for Inspired Research (INSPIRE) Faculty Award (IFA17-MA110).
\section{Communication-Efficient Training with Hybrid Parallelism}
We start, in Section \ref{sec:dct_dp}, by proposing a Dynamic Communication Thresholding (DCT) technique for DP (DCT-DP).
DCT-DP is inspired by existing theoretical works such as \citep{stich_sparsified_sgd} and \cite{alistarh2018_conv_sparse}.
It sparsifies the gradient in each iteration before sending it over the wire, and it intelligently chooses the threshold for sparsification to reduce the computational overhead introduced due to compression and decompression.
Then, in Section \ref{sec:dct_mp}, we propose DCT-MP, a novel thresholding scheme for sparsification of activations and gradients during forward and backward passes, respectively, to reduce communication during MP.
\subsection{DCT-DP: Reducing communication for Data Parallelism}\label{sec:dct_dp}
During DP, as illustrated in Fig. \ref{fig:hybrid_illus} (left), we compress the gradient, $W_{grad}$, from trainers to the parameter server to improve the communication bottleneck.
Our compression algorithm, DCT-DP, is inspired by previous works which focus on data-parallel training for alleviating communication bottlenecks, and in particular the works of \citet{stich_sparsified_sgd, alistarh2018_conv_sparse}, where error feedback is employed along with sparsification to correct the error in the gradient direction due to compression.
Such schemes find a top-K threshold by sorting the gradient vector, and they use the threshold to sparsify the gradients by keeping only the top-K entries.
However, they focus on proving theoretical convergence guarantees, and they do not show improvements in end-to-end times for training neural networks.
In our experiments, we observed that the overhead of allocating memory to copy the gradient (with its size easily scaling into the millions) and sorting the resultant copy to find the top-K threshold in each iteration is expensive enough to wipe out any improvements in end-to-end training time in real-world systems (see Sec. \ref{sec:exps_prod} for details).
Hence, such gradient compression schemes, in their most basic form, cannot be employed directly to obtain the promised gains in training efficiency. However, we take advantage of the following observation to reduce the overhead introduced by compression.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{thres_for_w_grad.pdf}
\caption{ Top-K threshold for various levels of sparsity during the gradient compression for DCT-DP. We see that the top-K thresholds, for different sparsity levels, do not deviate much from the mean. Thus, updating the threshold only every $L (> 1)$ iterations can help reduce the overhead of sorting to find the top-K threshold.}
\label{fig:thres_for_w_grad}
\end{figure}
In Fig. \ref{fig:thres_for_w_grad}, we plot the top-K thresholds for various levels of sparsity for the Deep Learning Recommendation Model (DLRM) \citep{dlrm} with the Criteo Ad Kaggle Dataset for one of the Fully Connected (FC) layers (see Sec. \ref{sec:exps_dlrm} for details on the training process). We see that the threshold value increases as the sparsity increases, which is expected. More importantly, we note that given a sparsity factor, the threshold value does not vary much across iterations. For example, for $95\%$ sparsity, the threshold deviates by at most $26\%$ around its running mean. Thus, even for reasonably large compression factors, updating the threshold every iteration is excessive.
Inspired by this observation, we update the threshold only once every $L$ iterations (where $L$ is generally in the thousands) while compressing the gradient of the parameters, $W_{grad}$, for each DNN layer. We refer to $L$ as the threshold life-span. As we observe in our experiments (see Sec. \ref{sec:exps_prod}), we can compress the gradients to as much as $99\%$ sparsity with $L=1000$ for each layer using top-K sparsification and error correction without any loss in performance. Our algorithm is illustrated in Fig. \ref{fig:wta_dp} and detailed steps are provided in Algorithm \ref{alg:wta_dp}. Throughout this paper, the function $\mathbb{I}(\cdot)$ denotes the indicator function, and the symbols $\lfloor\cdot\rfloor$ and $\odot$ denote the integer floor and the element-wise product of two matrices, respectively.
Note that each trainer consists of multiple workers, and each worker compresses the gradients layer-wise using sparsification before communication (see Fig. \ref{fig:hybrid_illus} for an illustration, where each trainer consists of 3 workers). This is unlike existing works (e.g. \citet{stich_sparsified_sgd, sketched-SGD}) where the gradient vectors of all the model parameters are combined and compressed together. However, the theoretical guarantees on the convergence of the algorithm still hold and can be trivially extended to our case. This is because, for any threshold $\tau > 0$, the compressed gradient satisfies the contraction property (Definition 2.1 in \citet{stich_sparsified_sgd}). Hence, DCT-DP achieves the same rate of convergence as Stochastic Gradient Descent (SGD) without compression (see Theorem 2.4 in \citet{stich_sparsified_sgd}).
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{wta_dp.pdf}
\caption{ An illustration of DCT-DP.
First, $W_{grad} \in \mathbb{R}^N$ (which already incorporates the error from the previous iteration) is compressed using a threshold $\tau$ to obtain the sparse vector $\widehat W_{grad}$.
Then, the error is calculated as $E_{grad} = W_{grad} - \widehat W_{grad}$ to be used in the next iteration to correct the error in the gradient direction.}
\label{fig:wta_dp}
\end{figure}
\begin{algorithm}[t]
\caption{DCT-DP: Communication-Efficient Data Parallelism}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} Sparsity factor $\eta$ ($0 < \eta \leq 1$), Threshold life-span $L$, Iteration number $k$, Gradient of the DNN layer $W_{grad} \in \mathbb{R}^N$, Error $E_{grad} \in \mathbb{R}^N$, and Threshold $\tau$ (from iteration $k-1$)
\STATE {\bf Error Feedback}: $W_{grad} = W_{grad} + E_{grad}$
\IF{$L$ divides $k$}
\STATE $[w_1, w_2, \cdots, w_N]$ = $Sort(|W_{grad}|)$
\STATE Assign $\tau = w_{\lfloor N\times \eta \rfloor}$
\ELSE
\STATE Use $\tau$ from iteration $k-1$
\ENDIF
\STATE Compute mask $M = \mathbb{I}(|W_{grad}| \geq \tau)$
\STATE Compute compressed gradient $\widehat W_{grad} = W_{grad} \odot M$
\STATE Compute error $E_{grad} = W_{grad} - \widehat W_{grad}$
\STATE Send $\widehat W_{grad}$ to the parameter server which updates the model
\end{algorithmic}
\label{alg:wta_dp}
\end{algorithm}
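For concreteness, the following is a minimal PyTorch-style sketch of Algorithm \ref{alg:wta_dp} for the gradient of a single DNN layer; the class and variable names are illustrative, and the sketch omits systems-level details such as serialization and the actual communication with the parameter server.
\begin{verbatim}
import torch

class DCTDPCompressor:
    """Top-K gradient sparsification with error feedback; the threshold
    is refreshed only once every `life_span` iterations (Algorithm 1)."""

    def __init__(self, eta=0.99, life_span=1000):
        self.eta = eta              # sparsity factor
        self.life_span = life_span  # threshold life-span L
        self.error = None           # error feedback buffer E_grad
        self.tau = None             # current threshold
        self.step = 0

    def compress(self, w_grad):
        g = w_grad.detach().flatten().clone()
        if self.error is not None:
            g += self.error                        # error feedback
        if self.step % self.life_span == 0:        # refresh tau every L steps
            k = max(1, int(g.numel() * self.eta))
            self.tau = torch.sort(g.abs()).values[k - 1]
        self.step += 1
        mask = (g.abs() >= self.tau).to(g.dtype)
        g_hat = g * mask                           # sparse gradient to be sent
        self.error = g - g_hat                     # residual kept locally
        return g_hat.view_as(w_grad)
\end{verbatim}
In practice, one such compressor (error buffer and threshold) is maintained per layer on each worker, and the returned sparse gradient is the quantity communicated to the parameter server.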
\subsection{DCT-MP: Reducing communication for Model Parallelism}\label{sec:dct_mp}
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{wta_mp2.pdf}
\caption{ An illustration of DCT-MP. During the forward pass, we sparsify and compress the activations, say $X_{act}$, corresponding to one data sample, using the mask $\mathbb{I}(|X_{act}| \geq \tau)$, which is generated based on the threshold $\tau$. During the backward pass, the same mask is used to compress the gradients and selectively train neurons.}
\label{fig:wta_mp}
\end{figure}
Training of large-scale DNNs is often regarded with pessimism due to its associated training latency (multiple days/weeks).
However, training such large-scale models can be a ``blessing in disguise'' from a communication-efficiency point of view.
For such models, with billions of parameters in each layer, only a few of the neurons are activated during the forward pass, potentially allowing us to compress these activations by a factor of $20\times$ or more with no loss in model performance.
This idea of training only a subset of neurons every iteration based on their activation values stems from several existing observations \cite{makhzani_k_sparse2013,makhzani2015_wta_autoencoder,slide}.
In fact, in works such as dropout \cite{dropout} and adaptive dropout \cite{adaptive_dropout}, the authors have shown that selective sparsification can improve the generalization performance due to implicit regularization \citep{Mah12}.
With such a scheme, we also observe gains in generalization performance on top of communication efficiency (see experiments in Section \ref{sec:exps}).
Motivated by this, we propose a sparsification scheme where the neurons compete with each other in every iteration during DNN training, and the ones with the largest (absolute) value of activations are selected.
Thus, for a given training sample, DCT-MP selects only a few neurons (say $\sim$5\%) during the forward pass that are generally sufficient to represent the entire information for that training sample. We next describe DCT-MP in more detail.
{\bf Algorithm.}
Let the mini-batch size be $B$ and the number of output features before the model split be $d$. Thus, the activation and gradient matrices ($X_{act}$ and $X_{grad}$, respectively) lie in $\mathbb{R}^{B\times d}$. Based on the idea that each example activates only a subset of neurons, we keep, in each row, only the fraction $1-\eta$ of entries that are largest in absolute value, where $\eta$ is the sparsity factor. Thus, for the $i$-th row of $X_{act}$, say $X_{act,i}$, we select a threshold $\tau_i$ which is greater than $d\times\eta$ of the values in $X_{act,i}$, and the mask for the $i$-th data sample is thus calculated as $\mathbb{I}(|X_{act,i}| \geq \tau_i)$. The same mask is then used to compress the entities $X_{act,i}$ and $X_{grad,i}$ during the forward and backward passes, respectively, for all $i\in \{1,2, \cdots, B\}$. Thus, the training for each mini-batch happens only on the relevant neurons corresponding to each sample in the training data.
In Fig. \ref{fig:wta_mp}, we illustrate the compression using DCT-MP when the mini-batch size is one. Detailed steps for a general mini-batch size $B$ are provided in Algorithm \ref{alg:wta_mp}.
\begin{algorithm}[tb]
\caption{DCT-MP: Communication-Efficient Model Parallelism}
\label{alg:wta_mp}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} Sparsity factor $\eta$ ($0 < \eta \leq 1$),
{\underline{Forward Pass}}:
\STATE {\bfseries Input:} Activation matrix $X_{act} = [X_{act,i}]_{i=1}^{B} \in \mathbb{R}^{B\times d}$
\STATE Define the mask, $M = [~]$
\FOR{$i=1$ {\bfseries to} $B$}
\STATE $[x_1, x_2, \cdots, x_d]$ = $Sort(|X_{act,i}|)$
\STATE Define $\tau_i = x_{\lfloor d\times \eta \rfloor}$
\STATE $m_i = \mathbb{I}(|X_{act,i}| \geq \tau_i)$
\STATE $M = [M; ~m_i]$
\ENDFOR
\STATE Compute the sparse matrix $X_{act}\odot M$
\STATE Send $X_{act}\odot M$ and the mask $M$ across the network
{\underline{Backward Pass}}:
\STATE {\bfseries Input:} Gradient matrix $X_{grad} \in \mathbb{R}^{B\times d}$
\STATE Compute the sparse matrix $X_{grad}\odot M$
\STATE Send $X_{grad}\odot M$ across the network
\end{algorithmic}
\end{algorithm}
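As an illustration, the following is a minimal PyTorch-style sketch of Algorithm \ref{alg:wta_mp}, written as a custom autograd function so that the mask computed during the forward pass is automatically reused to sparsify the gradient during the backward pass; the names are illustrative, and the actual transfer of the sparse matrices and the mask between workers is not shown.
\begin{verbatim}
import torch

class DCTMPSplit(torch.autograd.Function):
    """Row-wise top-K sparsification of the activations at a model split."""

    @staticmethod
    def forward(ctx, x_act, eta):
        # x_act: (B, d) activations; eta: sparsity factor in (0, 1)
        d = x_act.shape[1]
        k = max(1, int(d * (1.0 - eta)))       # entries kept per row
        # per-row threshold = k-th largest absolute activation
        tau = torch.topk(x_act.abs(), k, dim=1).values[:, -1:]
        mask = (x_act.abs() >= tau).to(x_act.dtype)
        ctx.save_for_backward(mask)
        return x_act * mask   # sparse X_act (and the mask) go to the next worker

    @staticmethod
    def backward(ctx, x_grad):
        (mask,) = ctx.saved_tensors
        return x_grad * mask, None  # same mask sparsifies X_grad; no grad for eta
\end{verbatim}
At a split point, the output of the last layer before the split is simply passed through \texttt{DCTMPSplit.apply(x\_act, eta)} before being communicated.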
\begin{figure}[t]
\centering
\includegraphics[width=0.44\textwidth]{thres_values_for_MP2.pdf}
\caption{ Top-K threshold for various levels of sparsity for the cases when compression using DCT-MP is applied and when it is not applied. The top-K thresholds decrease significantly when DCT-MP is applied. Thus, DCT-MP induces sparsity in neuron activations. This is possibly the reason for its improved generalization performance.}
\label{fig:thres_values_for_MP}
\end{figure}
{\bf DCT-MP Promotes Sparsity in Model Activations.}
In Fig. \ref{fig:thres_values_for_MP},
we plot the mean, $\frac{1}{B}\sum_{i=1}^B \tau_i$, of threshold vector $\tau = [\tau_1,\tau_2, \cdots, \tau_B]$ with respect to the number of iterations for the DLRM model with the Criteo Ad Kaggle Dataset. The threshold is calculated for activations after one of the fully connected layers (see Sec. \ref{sec:exps_dlrm} for details on the experimental setup). The mean of the threshold is calculated for different sparsity levels ($75\%, 90\%$ and $95\%$) for the two cases when sparsification using DCT-MP is applied (dotted lines) and when it is not applied (solid lines). Thus, the solid lines correspond to a single training run where we are simply measuring the mean of top-K threshold values without actually sparsifying the activations sent across the wire. The dotted lines with different sparsification levels correspond to different training runs where the stated sparsification is actually applied to the activations (and gradients) that are sent across the wire.
We observe that, as the training progresses, the top-K thresholds decrease significantly faster for the case when DCT-MP is applied. A decrease in the top-K threshold corresponds to the activations getting sparser (maybe approximately) as the training progresses. Thus, DCT-MP induces sparsity in activations while training, which is exploited for communication efficiency. An important advantage of such sparsity-inducing regularization is the improved generalization performance of the model, as shown in our experiments in Sec. \ref{sec:exps}. Our conjectured explanation for why sparsity helps in improving the generalization error is based on the performance of existing popular schemes.
This includes dropout (see Fig. 8, \citet{dropout}) and Rectified Linear Units (ReLU) (see Fig. 3, \citet{relu}), which themselves introduce sparsity in model activations, as well as implementations of implicit sparsity based methods in scalable algorithms for graph analysis \citep{SRFM16_VLDB,FGM17_IEEE}.
{\bf Analysis of DCT-MP.}
To provide further insights into DCT-MP, we prove that the stochastic gradient obtained with Algorithm \ref{alg:wta_mp} is equal, in expectation, to the stochastic gradient obtained in a network without any communication thresholding. More details of this unbiased estimation, including a formal statement of the theorem and its proof, are provided in Appendix \ref{app:analysis_dct_mp}.
{\bf Comparison with Dropout.} Dropout and DCT-MP are similar in essence as they both selectively train neurons. However, the two schemes are different: both in the goals they try to achieve, and in the mechanisms they use. Furthermore, they can be used complementarily. Here are the main differences between the two schemes. First, Dropout drops neurons randomly, while DCT-MP keeps only the most relevant neurons for each training sample. Second, for Dropout, going beyond 50\% sparsity results in accuracy loss, but DCT-MP achieves up to 95\% sparsification. Third, Dropout is applied to every parameter layer, but DCT-MP is applied only to the layers before the model split.
\section{Conclusions}
Inspired by the fact that communication is increasingly becoming the bottleneck for large-scale training, we proposed two practical algorithms, DCT-DP and DCT-MP, to reduce the communication bottleneck during data and model parallelism, respectively, for fast training of DNN models. DCT-DP and DCT-MP improve end-to-end training time by sparsifying the matrices to be sent across the wire by appropriately selecting a sparsification threshold. We empirically evaluated the proposed algorithms on publicly-available as well as industry-scale models and datasets. We show a reduction in communication for MP and DP by up to $20\times$ and $100\times$, respectively, without any loss in performance.
Further, the end-to-end training time reduces by $37\%$ for production models. Moreover, our algorithms reduce the network bandwidth utilization by half and almost double the CPU utilization, shifting the training bottleneck from communication to computation.
\section{Introduction}
\begin{figure*}[t]
\centering
\includegraphics[scale=0.25]{hybrid_illus2.pdf}
\caption{ Distributed DNN training with hybrid training which uses both DP (left) and MP (right) for greater parallelization gains.
During DP, multiple trainers process several mini-batches of data in parallel.
During MP, one copy of the model is processed by one trainer which in turn is comprised of multiple workers.}
\label{fig:hybrid_illus}
\end{figure*}
Data Parallelism (DP), in which each (of many) trainers stores a replica of the entire model, is a popular parallelization paradigm for the training of very large Deep Neural Networks (DNNs)
\citep{dean2012large, DP_scaling}.
At the beginning of each training iteration, each worker processes a subset of the entire training data with a predetermined batch size, and then synchronizes the model parameters at the end of the iteration.
DP has experienced widespread deployment for state-of-the-art industrial applications, but it is now facing two major challenges.
The first challenge is that a large batch size is needed to fully exploit the ever-increasing compute power of training nodes.
This turns out to be difficult.
Both theoretical and empirical evidence suggests that going beyond a certain batch size for training DNNs results in loss in generalization performance (e.g., see \citep{keskar,LBnotEmp:2018shallue,openai,Hoffer17,LBempGolmant:2018,MM18_TR,ma2018_sgd_large_batch,staleness_async_sgd}).
Despite active research on restoring generalization performance when the batch size is large \citep{priya2017,imagenet4mins:2018,lars,adabatch,smith2017,post-local-sgd,swap, yao2018hessian}, these methods either are specific to certain models and/or datasets, require extensive hyperparameter tuning, or can at best increase the maximum batch size by a small factor.
The second challenge is replicating an entire DNN model on each worker, which is becoming an increasingly infeasible proposition. This is
due to increasing model complexity and parameters in domains such as, but not limited to, natural language processing and recommendation systems (e.g., see \citep{bert,outrageously_large_dnn,dense_conv_nets, dlrm}), coupled with the saturation of single machine memory and compute power due to trends such as the ending of Moore's law \citep{moore_law1, moore_law2}.
For these reasons, Model Parallelism (MP) has gained significant traction, both from the industry and the research community, as an alternative parallelization paradigm \citep{pipedream, gpipe,mp_multi_gpu,ElasticPipe,xpipe_async_mp,strads}.
In its purest form, the entire network during MP is partitioned into a number of sub-networks equal to the number of workers. While this form can accommodate a larger network than DP, it fails to capitalize on the largest batch size that is allowable before generalization performance degrades.
Hybrid Parallelism (HP)---that employs both DP and MP---is a natural next step, an idea that was arguably first introduced in \cite{dean2012large}, and more recently exploited further for large-scale DNN training \citep{HT_google_2018,zaharia_hybrid_parallelism,hybrid_horizontal_vertical,hetpipe,optCNN, gholami2018integrated}.
An illustration of hybrid training that uses MP to distribute the model across workers and DP to process multiple batches of training data at once is provided in Fig. \ref{fig:hybrid_illus}.
Here, each partition of the network for MP is replicated in a group of workers, each processing the entire batch for that sub-network in question. Currently, hybrid training is employed in training a subset of large-scale recommendation models in production at Facebook.
The scaling of model size and batch size by HP has now progressed to the next bottleneck: communication bandwidth \cite{pipedream}.
This bottleneck exists in two crucial places.
First, for MP, activation values and gradient information need to be communicated from one sub-network to the next during forward and backward propagation.
Second, for DP, gradients of the same sub-network but for different sub-batches need to be communicated, regardless of the exact operations that follow.
This depends on the specific communication protocol (centralized versus decentralized reduction) or the algorithm (synchronous versus asynchronous updates).
To compound the problem, increasing the batch size to fully exploit DP increases the communication of activations and gradients in MP, the sizes of which are directly proportional to the batch size.
Additionally, in asynchronous training, increasing the batch size exacerbates the stale gradient problem due to an increase in the time interval between a worker receiving the model and sending the gradient \cite{revisiting_sync_sgd}.
In short, the benefits of communication reduction are many.
\textbf{Dynamic Communication Thresholding.}
We propose a Dynamic Communication Thresholding (DCT) framework for communication efficient training for HP.
DCT incorporates two algorithms, DCT-DP and DCT-MP, to alleviate communication congestion for DP and MP, respectively.
Our algorithms filter the entities to be communicated through a simple hard-thresholding function, eliminating the need to pass many of them over the communication fabric.
We propose practical methods to compute the thresholds to reduce the computational overhead of compression.
Our thresholding technique is versatile, as it applies to different communication primitives in DP for the gradients, to different pipelining methods in MP (e.g., GPipe \citep{gpipe}, PipeDream \citep{pipedream}), and to different applications such as recommendation systems and natural language processing models.
While thresholding communication introduces errors, we apply (previously known) error compensation technique as well as a model consistency adjustment method (we developed) to mitigate the effect of the error in compression.
Consequently, despite significant communication thresholding, model accuracy does not degrade, and in fact it often improves.
We apply DCT to large-scale state-of-the-art recommendation models in production with real-world datasets as well as publicly available models and datasets.
We observe that the communication costs are reduced by factors of up to $20\times$ for MP and $100\times$ for DP. Further, end-to-end training time for large-scale training is cut by as much as 37\% for industry-scale models in production.
Further, applying DCT reduces the network utilization from 94.2\% to 49.3\% and increases the overall CPU utilization from 48.7\% to 91.1\%, shifting the bottleneck of model training from communication to computation in such systems.
\textbf{Related Work.}
Due to the use of large clusters with powerful machines to train complex DNNs (e.g. BERT-Large \citep{bert} with 340M parameters), the distributed training workloads are becoming increasingly communication bound. For this reason, numerous compression schemes have been proposed in the past several years for the data parallel setting (see \citep{DP_survey} for a comprehensive survey).
These compression schemes come in various forms, such as the following:
(i) Quantization, where the number of bits per entry of the communicated vector is reduced (e.g., \citep{alistarh2017_qsgd, EF_signSGD_karimireddy, terngrad});
(ii) Sparsification, where only a few entries of the communicated vector are sent (e.g., \citep{stich_sparsified_sgd,alistarh2018_conv_sparse,aji2017sparse,deep_grad_comp_lin, sun2019sparse, wangni2018sparse});
(iii) Statistical techniques such as Randomized Sketching (e.g., \citep{sketchml,sketched-SGD}); and
(iv) Low-rank approximation, which decomposes the vector into low-rank components before communication (e.g., \cite{lowrank_atomo,lowrank_powersgd,lowrank_gradzip,lowrank_gradiveq}).
When it comes to performance on real-world systems, many of these existing schemes have one or more of the following shortcomings.
(i) Focus is mostly on a theoretical analysis of schemes based on restricted assumptions, such as convexity and synchronous SGD.
(ii) The empirical evaluation ignores the cost of compression and decompression which, in many cases, deprives them of any savings due to communication.
(iii) Comparison of convergence with respect to baseline is reported, while the number of epochs (or iterations) and the actual training time is ignored.
For instance, in Fig. 1 in \cite{DP_survey}, the authors compare the compression scheme in \citep{sketchml} with a baseline without compression.
They observe that, although the convergence with respect to the number of epochs is unaffected by compression, it takes almost twice the time for training to converge, rendering the scheme worse than no compression.
We also observed in our experiments that for sparsification using top-K sparsity \cite{stich_sparsified_sgd,alistarh2018_conv_sparse}, the overhead of copying and sorting the large vectors ends up taking more time than the gains obtained due to communication reduction.
(See Fig. \ref{fig:DCT-DP-Prod} in Sec. \ref{sec:exps_prod} for details.)
In this paper, {\bf we propose practical schemes for communication reduction during DP, and we show performance improvements in terms of the end-to-end DNN training times}, with performance similar to, or in some cases better than, the baseline algorithms as implemented in industry.
For the MP case, existing works target the scheduling of communication of entities across the network to improve the efficiency of training DNNs \citep{lee2014_MP_sch, DES_MP}. However,
to the best of our knowledge, {\bf this is the first work that targets communication reduction for MP by compressing the entities (i.e., activations and gradients) that are sent across the network}.
As such, it can be applied on top of existing training efficiency schemes, such as communication scheduling \citep{lee2014_MP_sch, DES_MP} and Pipelining \citep{pipedream, gpipe, pipemare, xpipe_async_mp} for MP. As illustrated in Fig. \ref{fig:hybrid_illus} (right), communication is a major bottleneck for MP-based training since the activations are communicated from (say) worker 1 to worker 2 during the forward pass and the gradients are then communicated from worker 2 to worker 1 during the backward pass (similar communication happens between workers 2 and 3).
However, we further observed that naively applying compression schemes, such as sparsification, quantization and sketching, to the activations and gradients either do not achieve high enough compression rates to be practical, or the degradation in model performance is beyond an acceptable level.
(See Appendix \ref{app:sketch} for details on such negative results.)
In the next section, we describe our algorithms for communication efficiency during parallelization, for both the MP and DP primitives of the DNN training.
In particular,
we discuss DCT-DP (in Section \ref{sec:dct_dp}) and explain our gradient reduction technique for DP that requires minimal computational overhead for compression; and then we discuss DCT-MP (in Section \ref{sec:dct_mp}), a flexible thresholding framework with theoretical support for our design.
Then, Section \ref{sec:exps} reports our findings from a diverse set of experiments and demonstrates the advantages of using DCT-DP and DCT-MP for training large-scale models for both publicly available and production models.
\section{Empirical Results}
\label{sec:exps}
In this section, we investigate DCT-MP and DCT-DP for three different experimental setups. In subsections \ref{sec:exps_dlrm} and \ref{sec:exps_nlp}, we evaluate the performance of DCT-MP on the Deep Learning Recommendation Model (DLRM) and a Natural Language Processing (NLP) model, respectively, for different levels of compression and different number of MP workers. The models and datasets are publicly available. We show that high compression factors can be obtained (up to $\sim$$95\%$) with DCT-MP along with small improvements in model performance.
We further evaluate DCT-DP on the DLRM model in subsection \ref{sec:exps_dlrm} and see no loss in performance with up to $98\%$ sparsity.
Finally, we evaluate the performance of DCT-DP and DCT-MP on large-scale recommendation models that are trained with hybrid parallelism in production systems.
We show that the deployed algorithm reduces the training time by $37\%$ for such production-scale models without any performance loss.
Further, in all our experiments, we tried to show at least one negative result that would provide insights into the scalability of DCT. For instance, as the number of workers for MP (i.e., model splits) increases, the compression factor with DCT-MP decreases (e.g., Tables \ref{table:MP_DLRM}, \ref{table:DP_MP_DLRM}, \ref{table:DP_MP_NLP} and \ref{table:prod_mp}).
\subsection{Experiments on the DLRM Model}
\label{sec:exps_dlrm}
{\bf Experimental Setup.}
For these experiments, we use the DLRM model from \citep{dlrm}. In this model, the dense features are first processed by a Multilayer Perceptron (MLP) with four layers, where each layer contains a Fully Connected (FC) layer followed by a Rectified Linear Unit (ReLU). Then, there is a feature interaction between the processed dense and sparse features, which goes through a second MLP with four layers (the last layer has Sigmoid instead of ReLU as the non-linearity) to produce the final output.
In our experiments, the embedding dimension for sparse features was kept at 16, and the output dimensions of the four FC layers in the first MLP are 512, 256, 64 and 16, respectively. Similarly, for the second MLP, the output dimensions for the fours FC layers are 512, 256, 128 and 1, respectively.%
\footnote{See the Criteo Kaggle benchmark for further details on the training process: https://github.com/facebookresearch/dlrm}
Training and testing sets comprise of 6 days and one day, respectively, of the Criteo Ad Kaggle dataset.%
\footnote{https://labs.criteo.com/2014/02/kaggle-display-advertising-challenge-dataset/}
Fig. \ref{fig:dlrm_illus} provides an illustration of MP with the DLRM model. The shaded area in blue shows a sample partition for MP.
In our simulations, we consider up to two splits of the DLRM model. The first split is after two layers in the first MLP, and the second split is after two layers in the second MLP. Our goal is to reduce communication across different workers (both during the forward and backward passes). This is a typical setup in MP training where workers 1, 2, and 3 can be the different pipes of a single trainer (e.g., see \citet{gpipe}).
For all our experiments, the data shuffle remains constant across different training runs.
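For reference, the following is a minimal sketch of the two MLP stacks and the split points used in our MP experiments; the layer widths follow the dimensions stated above, while \texttt{num\_dense\_features}, \texttt{interaction\_dim}, and the module names are illustrative placeholders.
\begin{verbatim}
import torch.nn as nn

def bottom_mlp(num_dense_features):
    # First MLP (512-256-64-16); the first MP split is after two FC+ReLU layers.
    before_split = nn.Sequential(nn.Linear(num_dense_features, 512), nn.ReLU(),
                                 nn.Linear(512, 256), nn.ReLU())
    after_split = nn.Sequential(nn.Linear(256, 64), nn.ReLU(),
                                nn.Linear(64, 16), nn.ReLU())
    return before_split, after_split

def top_mlp(interaction_dim):
    # Second MLP (512-256-128-1); the second MP split is after two layers.
    before_split = nn.Sequential(nn.Linear(interaction_dim, 512), nn.ReLU(),
                                 nn.Linear(512, 256), nn.ReLU())
    after_split = nn.Sequential(nn.Linear(256, 128), nn.ReLU(),
                                nn.Linear(128, 1), nn.Sigmoid())
    return before_split, after_split
\end{verbatim}
DCT-MP is applied to the activations (and the corresponding gradients) that cross each split boundary, i.e., to the outputs of \texttt{before\_split} in the sketch above.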
\begin{figure*}[t]
\centering
\includegraphics[width=0.85\textwidth]{dlrm_new}
\caption{ An illustration of model parallelism with DLRM. The entities that are sent across the network are shown in red. $X_{act}$ and $X_{grad}$ are communicated during MP, and $W_{grad}$ is communicated during DP. The shaded area in blue represents a sample model partitioning for MP. In this case, three workers are working on one copy of the model during MP and comprise a trainer.}
\label{fig:dlrm_illus}
\end{figure*}
In Fig. \ref{fig:dlrm_illus}, we mark the three entities that are sent across the network which we compress to alleviate communication costs in distributed DNN training. $X_{act}$ and $X_{grad}$ are the activation and gradient matrices sent across the network during the forward pass and backward passes, respectively. The third entity that can be compressed is the parameter gradient (shown as $W_{grad}$) that is sent from Workers 1, 2, and 3 to the parameter server.
This keeps a central copy of weights and updates it regularly through the gradients received from different workers.
{
\begin{table}[ht]
\caption{ DCT-MP on the DLRM model: Train and Test Loss and Accuracy for multiple sparsity ratios (denoted by $\eta$) and different settings for MP.}
\label{table:MP_DLRM}
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lccccr}
\toprule
\makecell{$\eta$} &
\makecell{MP\\Workers} &
\multicolumn{2}{c}{
\makecell{Train\\Loss \phantom{iii} Acc (\%)}} &
\multicolumn{2}{c}{
\makecell{Test\\Loss \phantom{iii} Acc (\%)}} \\
\midrule
0\% & -- & 0.4477 & 79.23 & 0.4538 & 78.78\\
\cmidrule{1-6}
75\% & 2 & 0.4473 & 79.29 & 0.4532 & 78.81\\
90\% & 2 & 0.4472 & 79.28 & 0.4530 & 78.81\\
{\bf 95\%} & 2 & 0.4473 & 79.24 & \bf 0.4534 & 78.80\\
98\% & 2 & 0.4505 & 79.07 & 0.4562 & 78.61\\
\cmidrule{1-6}
75\% & 3 & 0.4482 & 79.19 & 0.4536 & 78.79\\
\bf 90\% & 3 & 0.4479 & 79.24 & \bf 0.4537 & 78.78\\
95\% & 3 & 0.4495 & 79.18 & 0.4546 & 78.72\\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\end{table}
}
In Table \ref{table:MP_DLRM}, we show the cross-entropy loss \cite{DL_book} and accuracy with the DLRM model on the training and testing data samples. A sparsity factor ($\eta$) of $0\%$ denotes the baseline with no compression.
We consider two settings for MP: one split (that is, two MP workers) and two splits (that is, three MP workers).
{\bf MP with two workers (one split).}
In rows 2--5 of Table \ref{table:MP_DLRM}, we consider one split in the model (or MP with two workers) in the first MLP after two layers. We see that even with $95\%$ sparsity (that is, $20\times$ compression) on $X_{act}$ (and $X_{grad}$) sent across the network, we are able to perform better than the baseline with no compression, in terms of both train and test loss (highlighted in bold). However, we see a tangible loss in performance when the sparsity is further increased to $98\%$.
{\bf MP with three workers (two splits).}
In rows 6--8 of Table~\ref{table:MP_DLRM}, we consider MP with 3 workers, where the two model splits are in the first and second MLP, as shown in Fig. \ref{fig:dlrm_illus}. Note that, in the case of two splits, compressing the entities that are sent across the network by up to $90\%$ does not affect the test accuracy, and the result is still better than the baseline with no compression. However, increasing the sparsity factor to $95\%$ is too ambitious for the two-split case, and it increases the test loss by $0.18\%$. Further increasing the number of splits results in a greater performance loss, and the performance is worse than the baseline even for $75\%$ sparsity.
\begin{remark}
We emphasize that, for all the experiments in this paper, the locations of the MP splits were not tuned as hyperparameters. Instead, we inserted splits after randomly chosen FC layers, or after the ReLU following the FC layer when one exists. The advantage of inserting a split after a ReLU layer is that the activation matrix is $50\%$ sparse on average, resulting in higher compression rates for DCT-MP.
\end{remark}
\begin{table}[ht]
\caption{ DCT-DP on the DLRM model: Train and Test Loss and Accuracy for various levels of sparsity.}
\label{table:DP_DLRM}
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lcccr}
\toprule
\makecell{Sparsity \\ Factor} &
\multicolumn{2}{c}{
\makecell{Train\\Loss \phantom{iii} Acc (\%)}} &
\multicolumn{2}{c}{
\makecell{Test\\Loss \phantom{iii} Acc (\%)}} \\
\midrule
Baseline & 0.4477 & 79.23 & 0.4538 & 78.78 \\
\cmidrule{1-5}
75\% & 0.4478 & 79.23 & 0.4534 & 78.81\\
90\% & 0.4478 & 79.22 & 0.4536 & 78.79\\
95\% & 0.4479 & 79.25 & 0.4538 & 78.79\\
\bf 98\% & 0.4478 & 79.23 & \bf 0.4537 & 78.80\\
99.5\% & 0.4482 & 79.20 & 0.4547 & 78.75\\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\end{table}
{\bf DCT-DP with the DLRM Model.}
In Table \ref{table:DP_DLRM}, we illustrate the performance of DCT-DP on DLRM by compressing the gradients of the parameters of all the 8 FC layers while they are sent across the wire to the parameter server.
The parameter server then updates the model parameters using the compressed gradient. We use error feedback \cite{EF_signSGD_karimireddy} to compensate for the error in gradient compression by feeding it back to the gradients in the next iteration. In general, DCT-DP enjoys higher compression rates due to the use of error compensation schemes and the fact that the error in one layer does not propagate to the other layers, unlike in the case of MP compression. Compression up to $98\%$ sparsity does not show any loss in performance. However, further compressing to $99.5\%$ sparsity increases the test loss by $0.20\%$.
\begin{table}[ht]
\caption{ Compression using DCT-DP and DCT-MP on the DLRM model: Train and Test Loss and Accuracy with two MP splits (that is, three workers for MP).}
\label{table:DP_MP_DLRM}
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lcccr}
\toprule
\makecell{Sparsity \\ Factor} &
\multicolumn{2}{c}{
\makecell{Train\\Loss \phantom{iii} Acc (\%)}} &
\multicolumn{2}{c}{
\makecell{Test\\Loss \phantom{iii} Acc (\%)}} \\
\midrule
Baseline & 0.4477 & 79.23 & 0.4538 & 78.78 \\
\cmidrule{1-5}
75\% & 0.4480 & 79.23 & 0.4535 & 78.81\\
\bf 90\% & 0.4481 & 79.26 & \bf 0.4537 & 78.78\\
95\% & 0.4492 & 79.19 & 0.4548 & 78.70\\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\end{table}
{\bf Communication-Efficient Hybrid Training.}
Next, we apply compression to $W_{grad}$ for the 8 FC layers (in the DP case) and to $X_{act}$ (and $X_{grad}$) for two splits (in the MP case) and present our results in Table \ref{table:DP_MP_DLRM}. We see that compression up to $90\%$ sparsity (both during DP and MP) does not affect the performance, but the test loss increases by $0.22\%$ when the sparsity factor is increased to $95\%$.
\subsection{Experiments on a Translation Model}
\label{sec:exps_nlp}
For our experiments with DCT-MP, we next consider the Transformer translation model as an application of NLP using DNNs. We train over the IWSLT'14 German to English dataset \citep{iwslt}.
The setup and hyperparameters were directly borrowed from the fairseq NLP Library \cite{fairseq}.
The model used was borrowed from \cite{vaswani2017attention}, where both the encoder and the decoder have 6 layers, each of which uses a fully connected Feed-Forward Network (FFN) with input and output dimensionality of 512 and inner-layer dimensionality of 1024.%
\footnote{For further details on the translation model, dataset preprocessing and the hyperparameters used, see https://github.com/pytorch/fairseq/tree/master/examples/translation}
We report the training and testing losses and the BLEU scores after 50 epochs of training.
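As a point of reference for the split locations discussed next, the following is a minimal PyTorch-style sketch of one such FFN sub-layer with the dimensions quoted above. The class and variable names are ours, and the comment only marks where a DCT-MP split would sit (after the inner ReLU); this is not how fairseq implements the model.

\begin{verbatim}
import torch
import torch.nn as nn

class FFN(nn.Module):
    def __init__(self, d_model=512, d_inner=1024):
        super().__init__()
        self.fc1 = nn.Linear(d_model, d_inner)
        self.fc2 = nn.Linear(d_inner, d_model)

    def forward(self, x):
        h = torch.relu(self.fc1(x))
        # a DCT-MP split here would transmit only the top-K entries of h (and reuse
        # the same mask on the gradient of h during the backward pass)
        return self.fc2(h)

print(FFN()(torch.randn(10, 512)).shape)   # torch.Size([10, 512])
\end{verbatim}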
Our results with DCT-MP on the translation model are described in Table \ref{table:DP_MP_NLP}. We consider three training scenarios: two MP workers (one split), three MP workers (two splits), and five MP workers (four splits). For the case with one split, we inserted the DCT-MP operator after the ReLU in the FFN of the fifth encoder layer. For the two-splits case, we additionally inserted the DCT-MP operator after the ReLU in the FFN of the fifth decoder layer. For the four-splits case, we further added two splits after the ReLU in the third FFN of both the encoder and the decoder. For each scenario, we show the best performing sparsity factor in bold.
We emphasize that no hyperparameter tuning was performed in choosing the splits, and we observed in our experiments that using DCT-MP after an FC layer or a ReLU layer improves the generalization performance, possibly due to (implicitly) added regularization (as illustrated in Fig. \ref{fig:thres_values_for_MP}).
Note that we can add more MP splits for the NLP model compared to the DLRM model since the model is significantly deeper (and thus less susceptible to changes in outputs of a few layers) with larger FC layers (thus allowing for greater sparsity).
This shows that DCT-MP is more beneficial for wider and/or deeper models (that is, typical setups where MP is used).
\begin{table}[ht]
\caption{ DCT-MP on a translation model with IWSLT'14 dataset: Train and Test Losses and BLEU scores for various levels of sparsity and different splits for MP.}
\label{table:DP_MP_NLP}
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lcccr}
\toprule
\makecell{Sparsity \\ Factor} &
\makecell{MP \\ Workers} &
\makecell{Train\\Loss } &
\makecell{Test\\Loss} &
\makecell{BLEU \\Score}
\\
\midrule
Baseline & -- & 3.150 & 3.883 & 35.17 \\
\cmidrule{1-5}
90\% & 2 & 3.159 & 3.879 & 35.23\\
\bf 95\% & 2 & 3.157 & \bf 3.882 & 35.18\\
\cmidrule{1-5}
90\% & 3 & 3.151 & 3.881 & 35.22\\
\bf 95\% & 3 & 3.148 & \bf 3.882 & 35.19\\
\cmidrule{1-5}
\bf 90\% & 5 & 3.157 & \bf 3.882 & 35.20\\
95\% & 5 & 3.188 & 3.890 & 35.15\\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\end{table}
In this subsection, we do not consider DCT-DP since similar schemes have been evaluated for NLP models in existing works such as \cite{DP_survey} and \cite{aji2017sparse}.
In the next subsection, we evaluate DCT-MP and DCT-DP on large-scale recommendation models for end-to-end training times and overall model performance.
\subsection{Large-Scale Recommendation System}
\label{sec:exps_prod}
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[width=\textwidth]{qps_l_perc.pdf}
\caption{
Training time improvements for different values of $L$.
}
\label{fig:qps_l_perc}
\end{subfigure}
~
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[width=\textwidth]{train_test_L.pdf}
\caption{
Train and test error improvements for different values of $L$.
}
\label{fig:train_test_L}
\end{subfigure}
~
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[width=\textwidth]{qps_eta_perc.pdf}
\caption{Training time improvements for various sparsity factors.
}
\label{fig:qps_eta_perc}
\end{subfigure}
~
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[width=\textwidth]{train_test_eta.pdf}
\caption{Train and test error for various sparsity factors.}
\label{fig:train_test_eta}
\end{subfigure}
\caption{
{DCT-DP on Large-Scale Recommendation Models. Figures (a) and (b) show the training time and loss improvements, respectively, over baseline for different values of the threshold life-span, $L$, for a sparsity level of $95\%$. Figures (c) and (d) show the same statistics for various levels of sparsity for $L=1000$.}
}
\label{fig:DCT-DP-Prod}
\end{figure*}
We present our results for a real-world, large-scale recommendation system that employs HP for parallelization on a click-through rate prediction task. We employ DCT-MP and DCT-DP to reduce the network bandwidth usage in these systems.
{\bf Experimental Setup.}
We leverage a distributed data-parallel asynchronous training system with multiple trainers to train a recommendation model. Each trainer in the DP setup may consist of one or more workers that use MP (see Fig. \ref{fig:hybrid_illus} for an illustration). Typically, the model is split into 10 or more parts and fine-grained parallelism is employed for high throughput. Hence, the worker machines suffer from very high communication cost for both MP and DP.
The batch sizes are usually in the range of 100-1000, but they are employed with hogwild threads (see \citet{hogwild}) to increase the throughput of the system, further exacerbating the communication cost problem.
The recommendation model considered in this section takes multiple days to train with general-purpose CPU machines. All the workers and parameter servers run on Intel 18-core 2GHz processors with 12.5Gbit Ethernet. The hardware configurations are identical and consistent across all the experiments.
We train using 7B training examples and evaluate the model on 0.5B examples.
For quantifying model performance, we report the cross-entropy loss from the classification task. We compare relative cross-entropy loss and end-to-end training times of the proposed techniques with respect to a baseline model without communication compression.
{\bf DCT-DP with Large-Scale Recommendation Model.}
Figure \ref{fig:DCT-DP-Prod} shows the results of applying DCT-DP on the large-scale recommendation model.
In Figure \ref{fig:qps_l_perc}, we plot the improvements in end-to-end training times when DCT-DP is applied to compress the parameter gradients, $W_{grad}$, that are sent to the parameter server. Here, we keep the sparsity level constant at $95\%$ and vary the threshold life-span $L$ (the interval after which the top-K threshold is updated). We note that compression with $L=1$ takes $11\%$ more time than the baseline with no compression. This is due to the cost of the copy-and-sort routine which computes the top-K threshold.\footnote{Note that $L=1$ represents the scheme proposed in popular works such as \cite{stich_sparsified_sgd} and \cite{alistarh2018_conv_sparse}. Thus, naively implementing existing schemes for top-K sparsification might not always yield the expected gains in production. However, as we observe later, simply updating the threshold only once every thousand iterations can improve the training time by $25\%$ without any loss in performance.} Increasing $L$ to $1000$ trains the model $23\%$ faster, and further increasing it to $10000$ does not provide any additional gain. Figure \ref{fig:train_test_L} illustrates that, for different values of $L$, the train and test losses are within $0.01\%$ of the baseline~performance.
Fig. \ref{fig:qps_eta_perc} shows the improvement in training time for various levels of sparsity when the threshold life span is kept constant at $L=1000$. We observe the general trend that when the sparsity is increased, the training time improves.
Overall, we are able to compress the gradients to sparsity factors of up to $99.5\%$ without any loss in train and test performance (as noted from Fig. \ref{fig:train_test_eta}). However, we do not see significant improvements in training time beyond the sparsity level of $95\%$, possibly because the message size is small enough to not hurt bandwidth usage, and the only cost remaining is the fixed latency cost associated with sending any message, irrespective of its size.
\begin{remark}
We observe that error feedback works very well in this asynchronous data-parallel training paradigm with a large number of hogwild threads.
Note that this should not be expected a priori, since existing works prove convergence guarantees only for synchronous SGD settings.
An implementation detail that helped was sharing the error feedback buffer between the multiple threads. However, this can lead to a fast-growing error magnitude in the buffer, which in turn leads to stale updates. To avoid this, we drain the error feedback buffer stochastically every $1$ million iterations.
\end{remark}
{\bf DCT-MP with Large-Scale Recommendation Model.}
We employ DCT-MP to compress the entities sent through the network during MP for communication efficiency. DCT-MP is applied across the 12 splits of the model after the ReLU layer. Our results are summarized in Table \ref{table:prod_mp}.
We show improvement in training and test losses%
\footnote{Positive numbers imply better performance.}
in columns 2 and 3, respectively, and the improvements in end-to-end training times in column 4 for various levels of sparsity.
We observe that the training performance slightly degrades with DCT-MP on large-scale models.
However, the test performance improves up to sparsity levels of $90\%$, with a $14\%$ improvement in end-to-end training time. Increasing the sparsity level to $95\%$ degrades the test performance by $0.121\%$. Note that we can further improve the performance of DCT-MP by identifying the layers whose activations are sensitive to sparsification and avoiding compressing them during DCT-MP (or changing the location of the split). However, such selectivity in choosing layers for DCT-MP is beyond the scope of this paper.
\begin{table}[t]
\caption{ DCT-MP on a large-scale recommender model}
\label{table:prod_mp}
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lccr}
\toprule
\makecell{Sparsity \\ Factor} &
\multicolumn{2}{c}{
\makecell{Loss Improvement (\%)\\Train \phantom{iiiiiiii} Test}} &
\makecell{Time \\Gain (\%)}
\\
\midrule
Baseline & 0.000\% & 0.000\% & 0.000\% \\
75\% & -0.006\% & 0.023\% & 7.04\%\\
\bf 90\% & -0.021\% & \bf 0.016\% & \bf 13.95\%\\
95\% & -0.070\% & -0.121\% & 14.43\%\\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\end{table}
{\bf Communication-Efficient Hybrid Training.}
Next, we apply both DCT-DP and DCT-MP for communication reduction during hybrid training of a large-scale recommendation model. Inspired by our previous results, we chose the sparsity levels as $90\%$ and $99\%$ for DCT-MP and DCT-DP (with $L=1000$), respectively. We observe a $37.1\%$ reduction in end-to-end training time, with train and test loss within $0.01\%$ of the baseline model that does no compression.
Further, before applying DCT, we observed that the network utilization was high (94.2\%) and the CPU utilization was low (48.7\%), implying that communication is a bottleneck. However, after applying DCT, CPU utilization increased to 91.1\% and network utilization decreased to 49.3\%, implying that DCT shifted the bottleneck from communication to computation in production models.
\section{Statistical Estimation of top-K Threshold with Laplacian Fitting}
\section{Schemes That Do Not Work}
\label{app:sketch}
We saw in Sec. \ref{sec:exps} that activations during the forward pass and gradients during the backward pass can be compressed by large factors (up to $20\times$) using DCT-MP. This is due to selecting and training only the most relevant neurons corresponding to a given training sample. In this section, we present some negative results with other methods to compress the activation and gradient matrices during the forward and backward passes, respectively.
{\bf Gaussian Sketching for Activation Compression.}
Here, we use a Gaussian sketching scheme to compress the activations going forward. In Randomized Numerical Linear Algebra (RandNLA), the idea of sketching is to represent a large matrix by a smaller proxy that can be further used for matrix operations such as matrix multiplication, least squares regression, and low-rank approximation \cite{woodruff_now}. The sketched version of a matrix $A$ is given by $A\times S$, where $S$ is a random sketching matrix (e.g., all entries of $S$ are sampled i.i.d. from an appropriately scaled Gaussian distribution).
In Table \ref{table:MP_gaussian_sketching}, we compress the activations during the forward pass using Gaussian sketching.
Unlike the DCT-MP algorithm, we do not compress the gradients during the backward pass.
The aim is to identify if a low-rank structure exists in the activation matrix that can be used to compress the activation matrix in general.
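Concretely, the compression evaluated here is of the following form; the function name and the $1/\sqrt{k}$ scaling are our own choices for this sketch, not a prescription from the cited work.

\begin{verbatim}
import numpy as np

def gaussian_sketch(A, compression):
    """Sketch the B x d activation matrix A down to B x k, with k ~ (1 - compression) * d."""
    B, d = A.shape
    k = max(1, int(round(d * (1.0 - compression))))
    S = np.random.randn(d, k) / np.sqrt(k)   # Gaussian sketching matrix
    return A @ S, S                          # the sketch and the matrix needed to use it

A = np.random.randn(128, 512)                # stand-in activation matrix
A_sketched, S = gaussian_sketch(A, 0.90)     # 90% compression
print(A_sketched.shape)                      # (128, 51)
\end{verbatim}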
\begin{table}[h]
\caption{Compressing the activation matrix during MP using Gaussian sketching does not yield good results.}
\label{table:MP_gaussian_sketching}
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lcccr}
\toprule
\makecell{Compression \\ Factor} &
\multicolumn{2}{c}{
\makecell{Train\\Loss \phantom{iii} Acc (\%)}} &
\multicolumn{2}{c}{
\makecell{Test\\Loss \phantom{iii} Acc (\%)}} \\
\midrule
Baseline & 0.4477 & 79.23 & 0.4538 & 78.78\\
50\% & 0.4569 & 78.72 & 0.4618 & 78.37\\
75\% & 0.4610 & 78.53 & 0.4656 & 78.12\\
90\% & 0.4685 & 77.95 & 0.4721 & 77.78\\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
As seen in Table \ref{table:MP_gaussian_sketching}, sketching techniques directly borrowed from RandNLA do not perform well in this setting.
This is likely because such schemes were designed to cater to operations such as low-rank approximation, where the matrices to be compressed are generally well-approximated by low-rank matrices.
For instance, Gaussian sketching has seen success in approximate least squares regression and low-rank matrix approximation \cite{woodruff_now}.
This suggests that the activation matrix for DNNs, in general, does not reside in a subspace that is sufficiently low-rank to be meaningfully used for compression.
{\bf Top-K Thresholding for Gradient Compression.}
We saw in Sec. \ref{sec:exps} that the parameter gradients (illustrated as $W_{grad}$ in Fig. \ref{fig:hybrid_illus}) can be compressed by large factors without any loss in accuracy when used with appropriate error compensation. However, the same is not true for the gradients with respect to hidden neurons (illustrated as $X_{grad}$ in Fig. \ref{fig:hybrid_illus}) that are sent across the network during the backward pass in MP. This can be seen from our results in Table \ref{table:MP_grad_topK}, where we apply gradient compression using top-K thresholding with error feedback. Further, we observed that training without error feedback can cause~divergence.
\begin{table}[h]
\caption{Compressing the gradient matrix during backward pass in MP using top-K sparsification does not yield good results.}
\label{table:MP_grad_topK}
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lcccr}
\toprule
\makecell{Compression \\ Factor} &
\multicolumn{2}{c}{
\makecell{Train\\Loss \phantom{iii} Acc (\%)}} &
\multicolumn{2}{c}{
\makecell{Test\\Loss \phantom{iii} Acc (\%)}} \\
\midrule
Baseline & 0.4477 & 79.23 & 0.4538 & 78.78\\
50\% & 0.4495 & 79.07 & 0.4561 & 78.62\\
75\% & 0.4516 & 78.95 & 0.4588 & 78.48\\
90\% & 0.4701 & 77.76 & 0.4789 & 77.13\\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
Our hypothesis is that compressing the gradients of the hidden neurons by top-K thresholding performs poorly because the compression error propagates all the way to the initial layers.
To illustrate this, consider a deep network in which several vector-valued functions $A(\cdot), B(\cdot), C(\cdot), \cdots, L(\cdot)$ are composed in a chain, that is, $A \rightarrow B \rightarrow C \rightarrow \cdots \rightarrow K \rightarrow L$.
Algebraically, the loss looks like $L(A) = L(K(\cdots C(B(A))\cdots))$.
Then, the gradient of the loss with respect to $A$ is given by the multiplication of the Jacobians, that is, $J_L(A) = J_L(K) \times \cdots \times J_C(B) \times J_B(A)$.
(Here, $J_L(A)$ denotes the gradient of $L$ with respect to $A$.)
If we change any of the Jacobians in between (that is, compress the gradient $X_{grad}$ with respect to hidden neurons), then the error is propagated all the way to the initial layers of the network.
Even adding error feedback to the compression process does not recover the lost accuracy.
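The following small PyTorch experiment illustrates the point: sparsifying the gradient that flows into an intermediate activation (the analogue of compressing $X_{grad}$ at an MP boundary) perturbs the gradients of all upstream parameters, since the perturbation multiplies every Jacobian to its left. The helper and variable names are ours, and the two-layer network is a toy stand-in, not one of the models used in our experiments.

\begin{verbatim}
import torch

def topk_sparsify(g, keep=0.1):
    k = max(1, int(g.numel() * keep))
    thresh = g.abs().flatten().kthvalue(g.numel() - k + 1).values
    return g * (g.abs() >= thresh)

torch.manual_seed(0)
W1 = torch.randn(20, 20, requires_grad=True)
W2 = torch.randn(20, 20, requires_grad=True)
x = torch.randn(20)

def run(sparsify_hidden_grad):
    h = torch.tanh(W1 @ x)
    if sparsify_hidden_grad:
        # emulate compressing X_grad at the split: modify the gradient w.r.t. h
        h.register_hook(lambda g: topk_sparsify(g))
    return torch.tanh(W2 @ h).sum()

run(False).backward()
g_ref = W1.grad.clone(); W1.grad = None; W2.grad = None
run(True).backward()
print((W1.grad - g_ref).norm() / g_ref.norm())   # nonzero: upstream gradient is distorted
\end{verbatim}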
\section{Analysis of DCT-MP (Algorithm \ref{alg:wta_mp})}
\label{app:analysis_dct_mp}
Unlike compression of gradients, where the error introduced can be effectively corrected, compressing activations introduces errors that are propagated to the downstream of the network.
If the thresholds $\{\tau_i\}_{i=1}^B$ are fixed for all training iterations, then Algorithm \ref{alg:wta_mp} simply performs SGD on a modified network, one in which an additional layer $\tilde{X}_{act,i} = X_{act,i} \odot \mathbb{I}(|X_{act,i}| \geq \tau_i)$
is inserted right after the $\{X_{act,i}\}_{i=1}^B$ are obtained.
Extra analysis is needed when $\{\tau_i\}$ are dynamic.
Consider a particular SGD iteration $k$, and further annotate the thresholds as $\{\tau_i^{(k)}\}$ to indicate their dependence on the stochastic mini-batch being chosen at iteration $k$.
Let $\bar{\tau}^{(k)} = \mathbb E_i(\tau_i^{(k)})$ be the average of these thresholds over the entire dataset.
Denote by $\bar{L}$ the loss function of the network with this batch-independent thresholding $\tilde{X}_{act,i} = X_{act,i} \odot \mathbb{I}(|X_{act,i}| \geq \bar\tau^{(k)})$ inserted, and let $L_k$ be the loss function of the dynamically thresholded network on batch $k$.
Standard SGD assumptions say that, for a randomly chosen batch $i$, $\mathbb E_i(\partial L_i/\partial \bm{\theta}) = \partial L/\partial \bm{\theta}$, with $\bm{\theta}$ being the network parameters.
However, since each $L_i$ corresponds to a different threshold, it is unclear what $L$ should be on the right-hand side of this expression.
In the following theorem, we show that $\mathbb E_i(\partial L_i / \partial \bm{\theta}) = \partial \bar{L}/\partial \bm{\theta}$, where $L_i$ is batch-$i$ loss function of the \emph{dynamically} thresholded network in Algorithm \ref{alg:wta_mp}.
\begin{theorem}
\label{thm:unbias}
Consider a 2-worker MP network where the activations from Worker 1 to Worker 2 are thresholded as in Algorithm \ref{alg:wta_mp}.
Let $L_i$ be the associated loss function for data point $i$, where $i=1,2,\ldots,N$, and $N$ be the total number of training samples.
Let $\bar{\tau} = \mathbb{E}_i(\tau_i)$ be the expectation of $\tau_i$ over the entire data set at a particular training iteration $j$ (the subscript $j$ in $\tau_i$ and $\bar{\tau}$ is omitted for simplicity).
Let $\bar{L}_i$ be the loss function corresponding to the threshold $\bar{\tau}$.
If
\begin{equation}\label{eq:assumption}
\mathbb{E}_i[X_{act,i} \odot \mathbb{I}(|X_{act,i}| \ge \tau_i) ]
= \mathbb{E}_i[X_{act,i} \odot \mathbb{I}(|X_{act,i}| \ge \bar{\tau}) ],
\end{equation}
then, up to first order
\[
\mathbb{E}_i\left(\frac{\partial L_i}{\partial \bm{\theta}}\right) =
\mathbb{E}_i\left(\frac{\partial \bar{L}_i}{\partial \bm{\theta}}\right) =
\frac{\partial \bar{L}}{\partial \bm{\theta}}
\]
where $\bm{\theta}$ is the parameter of the network.
\end{theorem}
\begin{remark}
The theorem statement implies that the training step on the dynamically thresholded network is equivalent to that of training the network with just one threshold $\bar{\tau}$. This holds as long as $\bar \tau$ and $\tau_i$ are sufficiently close and the mean of activations around $\tau_i$ is the same as their mean around $\bar\tau$ (formalized by assumption (\ref{eq:assumption})).
\end{remark}
\begin{proof}
To establish the proof, let the dynamically thresholded network be represented as follows, starting with a random data point $(X_i^{(0)},y_i)$ (where $X_i^{(0)}$ is the input and $y_i$ the corresponding label):
\begin{eqnarray*}
X_i^{(k)} & = & {\mathcal N}^{(k)}(\bm{\theta}^{(k)},X_i^{(k-1)}), \quad k = 1,2, \ldots, m \\
X_i^{(m+1)} & = & X_i^{(m)} \cdot \mathbb{I}(|X_i^{(m)}| \ge \tau_i) \\
X_i^{(k)} & = & {\mathcal N}^{(k)}(\bm{\theta}^{(k)},X_i^{(k-1)}), \quad k = m+2, m+3, \ldots, K\\
L_i &= & \ell(X_i^{(K)},y_i).
\end{eqnarray*}
Here, the MP split is inserted after the $m$-th layer, and the $k$-th layer with parameters $\bm{\theta}^{(k)}$ is represented as ${\mathcal N}^{(k)}(\bm{\theta}^{(k)}, \cdot)$. For a pure activation layer ${\mathcal N}^{(k)}$, $\bm{\theta}^{(k)}$ is simply the empty set. For the network with a static threshold~$\bar{\tau}$:
\begin{eqnarray*}
\bar{X}_i^{(m+1)} & = & X_i^{(m)} \cdot \mathbb{I}(|X_i^{(m)}| \ge \bar{\tau}) \\
\bar{X}_i^{(k)} & = & {\mathcal N}^{(k)}(\bm{\theta}^{(k)},\bar{X}_i^{(k-1)}), \quad k = m+2, m+3, \ldots, K\\
\bar{L}_i &= & \ell(\bar{X}_i^{(K)},y_i).
\end{eqnarray*}
By assumption (\ref{eq:assumption}), we have
$\mathbb{E}_i(X_i^{(m+1)}) = \mathbb{E}_i(\bar{X}_i^{(m+1)})$.
Therefore, by Taylor's expansion, we have
\[
\mathbb{E}_i(X_i^{(k)}) = \mathbb{E}_i(\bar{X}_i^{(k)}), \quad k = m+2, m+3, \ldots, K,
\]
up to first order, by expanding each ${\mathcal N}^{(k)}(\bm{\theta}^{(k)},\bar{X}_i^{(k-1)})$ to first order. For example, for a linear layer, we have
\begin{eqnarray*}
\mathbb{E}_i(X_i^{(k)}) & = & \mathbb{E}_i( {\mathcal N}^{(k)}(\bm{\theta}^{(k)},X_i^{(k-1)}) )\\
& = & \mathbb{E}_i( W^{(k)} X_i^{(k-1)} + b^{(k)} ) \\
& = & W^{(k)} \mathbb{E}_i(X_i^{(k-1)}) + b^{(k)} \\
& = & W^{(k)} \mathbb{E}_i(\bar{X}_i^{(k-1)}) + b^{(k)} \;=\; \mathbb{E}_i(\bar{X}_i^{(k)}) .
\end{eqnarray*}
Similarly, for a general activation function $\sigma(\cdot)$, we have
\begin{align*}
& \mathbb{E}_i(\sigma(X_i^{(k)})) = \mathbb{E}_i(\sigma(\bar{X}_i^{(k)} + (X_i^{(k)}-\bar{X}_i^{(k)}) )) \\
& = \mathbb{E}_i(\sigma(\bar{X}_i^{(k)}) + \sigma'(\bar{X}_i^{(k)})\cdot (X_i^{(k)}-\bar{X}_i^{(k)})) + \hbox{second order terms} \\
& = \mathbb{E}_i(\sigma(\bar{X}_i^{(k)})) + \sigma'(\bar{X}_i^{(k)}) \mathbb{E}_i(X_i^{(k)}-\bar{X}_i^{(k)}) \\
& = \mathbb{E}_i(\sigma(\bar{X}_i^{(k)})) = \mathbb{E}_i(\bar{X}_i^{(k+1)}),
\end{align*}
where we have ignored the second-order terms beyond the second step.
Let $\Delta_i^{(k)} = X_i^{(k)} - \bar{X}_i^{(k)}$, so that, up to first order, $\mathbb{E}_i(\Delta_i^{(k)}) = \mathbf{0}$. Consider the gradient $\frac{\partial L_i}{\partial \bm{\theta}}$ as a function of $\Delta_i^{(k)}$:
\begin{equation*}
\frac{\partial L_i}{\partial \bm{\theta}}(\Delta_i^{(m+1)},\Delta_i^{(m+2)},\ldots,\Delta_i^{(K)})
= \frac{\partial \bar{L}_i}{\partial \bm{\theta}} +
\sum_{k=m+1}^K \left(\frac{\partial^2 \bar{L}_i}{\partial \Delta^{(k)} \partial \bm{\theta}} \right) \cdot \Delta_i^{(k)}.
\end{equation*}
Thus, taking expectations and again working to first order,
\[
\mathbb{E}_i\left( \frac{\partial L_i}{\partial \bm{\theta}} \right) =
\mathbb{E}_i\left( \frac{\partial \bar{L}_i}{\partial \bm{\theta}} \right),
\]
which proves the desired result.
\end{proof}
\section{Communication-Efficient Training with Hybrid Parallelism}
We start, in Section \ref{sec:dct_dp}, by proposing a Dynamic Communication Thresholding (DCT) technique for DP (DCT-DP).
DCT-DP is inspired by existing theoretical works such as \citep{stich_sparsified_sgd} and \cite{alistarh2018_conv_sparse}.
It sparsifies the gradient in each iteration before sending it over the wire, and it intelligently chooses the threshold for sparsification to reduce the computational overhead introduced due to compression and decompression.
Then, in Section \ref{sec:dct_mp}, we propose DCT-MP, a novel thresholding scheme for sparsification of activations and gradients during forward and backward passes, respectively, to reduce communication during MP.
\subsection{DCT-DP: Reducing communication for Data Parallelism}\label{sec:dct_dp}
During DP, as illustrated in Fig. \ref{fig:hybrid_illus} (left), we compress the gradient, $W_{grad}$, from trainers to the parameter server to improve the communication bottleneck.
Our compression algorithm, DCT-DP, is inspired by previous works which focus on data-parallel training for alleviating communication bottlenecks, and in particular the works of \citep{stich_sparsified_sgd, alistarh2018_conv_sparse}, where error feedback is employed along with sparsification to correct the error in gradient direction due to compression.
Such schemes find a top-K threshold by sorting the gradient vector, and they use the threshold to sparsify the gradients by keeping only the top-K entries.
However, they focus on proving theoretical convergence guarantees, and they do not show improvements in end-to-end times for training neural networks.
In our experiments, we observed that the overhead of allocating memory to copy the gradient (whose size easily scales into the millions) and sorting the resultant copy to find the top-K threshold in each iteration is expensive enough to negate any improvements in end-to-end training time in real-world systems (see Sec. \ref{sec:exps_prod} for details).
Hence, such gradient compression schemes, in their most basic form, cannot be employed directly to obtain the promised gains in training efficiency. However, we take advantage of the following observation to reduce the overhead introduced by compression.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{thres_for_w_grad.pdf}
\caption{ Top-K threshold for various levels of sparsity during the gradient compression for DCT-DP. We see that the top-K thresholds, for different sparsity levels, do not deviate much from the mean. Thus, updating the threshold only every $L (> 1)$ iterations can help reduce the overhead of sorting to find the top-K threshold.}
\label{fig:thres_for_w_grad}
\end{figure}
In Fig. \ref{fig:thres_for_w_grad}, we plot the top-K thresholds for various levels of sparsity for the Deep Learning Recommendation Model (DLRM) \citep{dlrm} with the Criteo Ad Kaggle Dataset for one of the Fully Connected (FC) layers (see Sec. \ref{sec:exps_dlrm} for details on the training process). We see that the threshold value increases as the sparsity increases, which is expected. More importantly, we note that given a sparsity factor, the threshold value does not vary much across iterations. For example, for $95\%$ sparsity, the threshold deviates by at most $26\%$ around its running mean. Thus, even for reasonably large compression factors, updating the threshold every iteration is excessive.
Inspired by this observation, we update the threshold only once every $L$ iterations (where $L$ is generally in thousands) while compressing the gradient of the parameters, $W_{grad}$, for each DNN layer. We refer to $L$ as the threshold life-span. As we observe in our experiments (see Sec. \ref{sec:exps_prod}), we can compress the gradients by as much as $99\%$ sparsity with $L=1000$ for each layer using top-K sparsification and error correction without any loss in performance. Our algorithm is illustrated in Fig. \ref{fig:wta_dp} and detailed steps are provided in Algorithm \ref{alg:wta_dp}. Throughout this paper, the function $\mathbb{I}(\cdot)$ denotes the indicator function, and the symbols $\lfloor\cdot\rfloor$ and $\odot$ denote the integer floor and element-wise product of two matrices, respectively.
Note that each trainer consists of multiple workers, and each worker compresses its gradients layer-wise using sparsification before communication (see Fig. \ref{fig:hybrid_illus} for an illustration, where each trainer consists of 3 workers). This is unlike existing works (e.g., \citet{stich_sparsified_sgd, sketched-SGD}), where the gradient vectors of all the model parameters are combined and compressed together. However, the theoretical guarantees on the convergence of the algorithm still hold and can be trivially extended to our case. This is because, for any threshold $\tau > 0$, the compressed gradient satisfies the contraction property (Definition 2.1 in \citet{stich_sparsified_sgd}). Hence, DCT-DP enjoys the same rate of convergence as Stochastic Gradient Descent (SGD) without compression (see Theorem 2.4 in \citet{stich_sparsified_sgd}).
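For reference, that contraction property (restated here in our notation, with $k$ denoting the number of entries kept out of $N$) requires that a compression operator $\mathrm{comp}(\cdot)$ satisfy
\[
\mathbb{E}\left\| x - \mathrm{comp}(x)\right\|^2 \;\leq\; \left(1 - \frac{k}{N}\right) \left\| x \right\|^2 \quad \mbox{for all } x \in \mathbb{R}^N .
\]
In particular, keeping the $k$ entries of largest magnitude satisfies this bound deterministically, since the $N-k$ discarded entries are the smallest ones and therefore carry at most a $(1-k/N)$ fraction of $\|x\|^2$.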
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{wta_dp.pdf}
\caption{ An illustration of DCT-DP.
First, $W_{grad} \in \mathbb{R}^N$ (which already incorporates error from the previous iteration) is compressed using a threshold $\tau$ to obtain the sparse vector $\widehat W_{grad}$.
Then, the error is calculated as $E_{grad} = W_{grad} - \widehat W_{grad}$ to be used in the next iteration to correct the error in gradient direction.}
\label{fig:wta_dp}
\end{figure}
\begin{algorithm}[t]
\caption{DCT-DP: Communication-Efficient Data Parallelism}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} Sparsity factor $\eta$ ($0 < \eta \leq 1$), Threshold life-span $L$, Iteration number $k$, Gradient of the DNN layer $W_{grad} \in \mathbb{R}^N$, Error $E_{grad} \in \mathbb{R}^N$, and Threshold $\tau$ (from iteration $k-1$)
\STATE {\bf Error Feedback}: $W_{grad} = W_{grad} + E_{grad}$
\IF{$L$ divides $k$}
\STATE $[w_1, w_2, \cdots, w_N]$ = $Sort(|W_{grad}|)$
\STATE Assign $\tau = w_{\lfloor N\times \eta \rfloor}$
\ELSE
\STATE Use $\tau$ from iteration $k-1$
\ENDIF
\STATE Compute mask $M = \mathbb{I}(|W_{grad}| \geq \tau)$
\STATE Compute compressed gradient $\widehat W_{grad} = W_{grad} \odot M$
\STATE Compute error $E_{grad} = W_{grad} - \widehat W_{grad}$
\STATE Send $\widehat W_{grad}$ to the parameter server which updates the model
\end{algorithmic}
\label{alg:wta_dp}
\end{algorithm}
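A minimal, single-process sketch of Algorithm \ref{alg:wta_dp} in PyTorch-style code is given below. The class and variable names are ours; in an actual deployment this logic would run per layer inside each worker before $\widehat W_{grad}$ is shipped to the parameter server.

\begin{verbatim}
import torch

class DCTDPCompressor:
    def __init__(self, eta=0.99, life_span=1000):
        self.eta = eta            # sparsity factor
        self.L = life_span        # threshold life-span
        self.err = None           # error-feedback buffer E_grad
        self.tau = None           # cached top-K threshold

    def compress(self, w_grad, step):
        if self.err is None:
            self.err = torch.zeros_like(w_grad)
        w_grad = w_grad + self.err                       # error feedback
        if self.tau is None or step % self.L == 0:       # refresh threshold every L steps
            k = max(1, int(w_grad.numel() * self.eta))
            self.tau = w_grad.abs().flatten().kthvalue(k).values
        compressed = w_grad * (w_grad.abs() >= self.tau).to(w_grad.dtype)
        self.err = w_grad - compressed                   # residual kept for next iteration
        return compressed                                # sent to the parameter server

compressor = DCTDPCompressor(eta=0.99, life_span=1000)
g = torch.randn(1_000_000)
print(compressor.compress(g, step=0).count_nonzero())   # roughly 1% of entries survive
\end{verbatim}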
\subsection{DCT-MP: Reducing communication for Model Parallelism}\label{sec:dct_mp}
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{wta_mp2.pdf}
\caption{ An illustration of DCT-MP. During the forward pass, we sparsify and compress the activations, say $X_{act}$, corresponding to one data sample using the mask $\mathbb{I}(|X_{act}| \geq \tau)$, which is generated based on the threshold $\tau$. During the backward pass, the same mask is used to compress the gradients and selectively train neurons.}
\label{fig:wta_mp}
\end{figure}
Training of large-scale DNNs is often regarded with pessimism due to its associated training latency (multiple days/weeks).
However, training such large-scale models can be a ``blessing in disguise'' from a communication-efficiency point of view.
For such models, with billions of parameters in each layer, only a few of the neurons are activated during the forward pass, potentially allowing us to compress these activations by a factor of $20\times$ or more with no loss in model performance.
This idea of training only a subset of neurons in every iteration based on their activation values stems from several existing observations \cite{makhzani_k_sparse2013,makhzani2015_wta_autoencoder,slide}.
In fact, in works such as dropout \cite{dropout} and adaptive dropout \cite{adaptive_dropout}, the authors have shown that selective sparsification can improve the generalization performance due to implicit regularization \citep{Mah12}.
With such a scheme, we also observe gains in generalization performance on top of communication efficiency (see experiments in Section \ref{sec:exps}).
Motivated by this, we propose a sparsification scheme where the neurons compete with each other in every iteration during DNN training, and the ones with the largest (absolute) value of activations are selected.
Thus, for a given training sample, DCT-MP selects only a few neurons (say $\sim$5\%) during the forward pass that are generally sufficient to represent the entire information for that training sample. We next describe DCT-MP in more detail.
{\bf Algorithm.}
Let the mini-batch size be $B$ and the number of output features before the model split be $d$. Thus, the activation and gradient matrices ($X_{act}$ and $X_{grad}$, respectively) lie in $\mathbb{R}^{B\times d}$. Based on the idea that each example activates only a subset of neurons, we keep, in each row, only the entries that are largest in absolute value, dropping a fraction $\eta$ of them. Thus, for the $i$-th row of $X_{act}$, say $X_{act,i}$, we select a threshold $\tau_i$ that exceeds (in absolute value) $d\times\eta$ of the entries of $X_{act,i}$, and the mask for the $i$-th data sample is computed as $\mathbb{I}(|X_{act,i}| \geq \tau_i)$. The same mask is then used to compress the entities $X_{act,i}$ and $X_{grad,i}$ during the forward and backward passes, respectively, for all $i\in \{1,2, \cdots, B\}$. Thus, training for each mini-batch happens only on the relevant neurons corresponding to each sample in the training data.
In Fig. \ref{fig:wta_mp}, we illustrate the compression using DCT-MP when the mini-batch size is one. Detailed steps for a general mini-batch size $B$ are provided in Algorithm \ref{alg:wta_mp}.
\begin{algorithm}[tb]
\caption{DCT-MP: Communication-Efficient Model Parallelism}
\label{alg:wta_mp}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} Sparsity factor $\eta$ ($0 < \eta \leq 1$),
{\underline{Forward Pass}}:
\STATE {\bfseries Input:} Activation matrix $X_{act} = [X_{act,i}]_{i=1}^{B} \in \mathbb{R}^{B\times d}$
\STATE Define the mask, $M = [~]$
\FOR{$i=1$ {\bfseries to} $B$}
\STATE $[x_1, x_2, \cdots, x_d]$ = $Sort(|X_{act,i}|)$
\STATE Define $\tau_i = x_{\lfloor d\times \eta \rfloor}$
\STATE $m_i = \mathbb{I}(|X_{act,i}| \geq \tau_i)$
\STATE $M = [M; ~m_i]$
\ENDFOR
\STATE Compute the sparse matrix $X_{act}\odot M$
\STATE Send $X_{act}\odot M$ and the mask $M$ across the network
{\underline{Backward Pass}}:
\STATE {\bfseries Input:} Gradient matrix $X_{grad} \in \mathbb{R}^{B\times d}$
\STATE Compute the sparse matrix $X_{grad}\odot M$
\STATE Send $X_{grad}\odot M$ across the network
\end{algorithmic}
\end{algorithm}
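The per-row masking of Algorithm \ref{alg:wta_mp} can be sketched in a few lines of PyTorch-style code, as shown below. The function name is ours; in an MP deployment, the sparse $X_{act}\odot M$ (together with the mask) is what actually crosses the network, and the same mask is reused for $X_{grad}$ on the way back.

\begin{verbatim}
import torch

def dct_mp_mask(x_act, eta=0.95):
    """Row-wise mask keeping roughly a (1 - eta) fraction of the largest-|.| entries."""
    B, d = x_act.shape
    k = max(1, int(d * eta))
    tau = x_act.abs().kthvalue(k, dim=1, keepdim=True).values   # per-row thresholds
    return (x_act.abs() >= tau).to(x_act.dtype)

x_act = torch.randn(4, 1024)
M = dct_mp_mask(x_act, eta=0.95)
x_sent = x_act * M                  # forward pass: sparse activations sent downstream
x_grad = torch.randn_like(x_act)    # gradient w.r.t. the activations (from downstream)
g_sent = x_grad * M                 # backward pass: the same mask is reused on X_grad
print(M.sum(dim=1))                 # roughly 5% of the 1024 entries kept in each row
\end{verbatim}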
\begin{figure}[t]
\centering
\includegraphics[width=0.44\textwidth]{thres_values_for_MP2.pdf}
\caption{ Top-K threshold for various levels of sparsity for the cases when compression using DCT-MP is applied and when it is not applied. The top-K thresholds decrease significantly when DCT-MP is applied. Thus, DCT-MP induces sparsity in neuron activations. This is possibly the reason for its improved generalization performance.}
\label{fig:thres_values_for_MP}
\end{figure}
{\bf DCT-MP Promotes Sparsity in Model Activations.}
In Fig. \ref{fig:thres_values_for_MP},
we plot the mean, $\frac{1}{B}\sum_{i=1}^B \tau_i$, of the threshold vector $\tau = [\tau_1,\tau_2, \cdots, \tau_B]$ as a function of the number of training iterations for the DLRM model with the Criteo Ad Kaggle Dataset. The threshold is calculated for the activations after one of the fully connected layers (see Sec. \ref{sec:exps_dlrm} for details on the experimental setup). The mean of the threshold is calculated for different sparsity levels ($75\%, 90\%$ and $95\%$) for the two cases when sparsification using DCT-MP is applied (dotted lines) and when it is not applied (solid lines). Thus, the solid lines correspond to a single training run where we simply measure the mean of the top-K threshold values without actually sparsifying the activations sent across the wire. The dotted lines with different sparsification levels correspond to different training runs where the stated sparsification is actually applied to the activations (and gradients) that are sent across the wire.
We observe that, as the training progresses, the top-K thresholds decrease significantly faster when DCT-MP is applied. A decrease in the top-K threshold corresponds to the activations getting sparser (at least approximately) as the training progresses. Thus, DCT-MP induces sparsity in activations while training, which is exploited for communication efficiency. An important advantage of such sparsity-inducing regularization is the improved generalization performance of the model, as shown in our experiments in Sec. \ref{sec:exps}. Our conjectured explanation for why sparsity helps in improving the generalization error is based on the performance of existing popular schemes.
This includes dropout (see Fig. 8, \citet{dropout}) and Rectified Linear Units (ReLU) (see Fig. 3, \citet{relu}), which themselves introduce sparsity in model activations, as well as implementations of implicit sparsity based methods in scalable algorithms for graph analysis \citep{SRFM16_VLDB,FGM17_IEEE}.
{\bf Analysis of DCT-MP.}
To provide further insights into DCT-MP, we prove that the stochastic gradient obtained with Algorithm \ref{alg:wta_mp} is equal, in expectation, to the stochastic gradient obtained in a network without any communication thresholding. More details of this unbiased estimation, including a formal statement of the theorem and its proof, are provided in Appendix \ref{app:analysis_dct_mp}.
{\bf Comparison with Dropout.} Dropout and DCT-MP are similar in essence, as they both selectively train neurons. However, the two schemes are different, both in the goals they try to achieve and in the mechanisms they use. Furthermore, they can be used complementarily. Here are the main differences between the two schemes. First, Dropout drops neurons randomly, while DCT-MP keeps only the most relevant neurons for each training sample. Second, for Dropout, going beyond 50\% sparsity results in accuracy loss, but DCT-MP achieves up to 95\% sparsification. Third, Dropout is applied to every parameter layer, but DCT-MP is applied only to the layers before the model split.
\section{Conclusions}
Inspired by the fact that communication is increasingly becoming the bottleneck for large-scale training, we proposed two practical algorithms, DCT-DP and DCT-MP, to reduce the communication bottleneck during data and model parallelism, respectively, for fast training of DNN models. DCT-DP and DCT-MP improve end-to-end training time by sparsifying the matrices to be sent across the wire by appropriately selecting a sparsification threshold. We empirically evaluated the proposed algorithms on publicly-available as well as industry-scale models and datasets. We show a reduction in communication for MP and DP by up to $20\times$ and $100\times$, respectively, without any loss in performance.
The end-to-end training time is reduced by $37\%$ for production models. Moreover, our algorithms reduce the network bandwidth utilization by half and almost double the CPU utilization, shifting the training bottleneck from communication to computation.
\section{Empirical Results}
\label{sec:exps}
In this section, we investigate DCT-MP and DCT-DP for three different experimental setups. In subsections \ref{sec:exps_dlrm} and \ref{sec:exps_nlp}, we evaluate the performance of DCT-MP on the Deep Learning Recommendation Model (DLRM) and a Natural Language Processing (NLP) model, respectively, for different levels of compression and different number of MP workers. The models and datasets are publicly available. We show that high compression factors can be obtained (up to $\sim$$95\%$) with DCT-MP along with small improvements in model performance.
We further evaluate DCT-DP on the DLRM model in subsection \ref{sec:exps_dlrm} and see no loss in performance with up to $98\%$ sparsity.
Finally, we evaluate the performance of DCT-DP and DCT-MP on large-scale recommendation models that are trained with hybrid parallelism in production systems.
We show that the deployed algorithm reduces the training time by $37\%$ for such production-scale models without any performance loss.
Further, in all our experiments, we tried to show at least one negative result that would provide insights into the scalability of DCT. For instance, as the number of workers for MP (i.e., model splits) increases, the compression factor with DCT-MP decreases (e.g., Tables \ref{table:MP_DLRM}, \ref{table:DP_MP_DLRM}, \ref{table:DP_MP_NLP} and \ref{table:prod_mp}).
\subsection{Experiments on the DLRM Model}
\label{sec:exps_dlrm}
{\bf Experimental Setup.}
For these experiments, we use the DLRM model from \citep{dlrm}. In this model, the dense features are first processed by a Multilayer Perceptron (MLP) with four layers, where each layer contains a Fully Connected (FC) layer followed by a Rectified Linear Unit (ReLU). Then, there is a feature interaction between the processed dense and sparse features, which goes through a second MLP with four layers (the last layer has Sigmoid instead of ReLU as the non-linearity) to produce the final output.
In our experiments, the embedding dimension for sparse features was kept at 16, and the output dimensions of the four FC layers in the first MLP are 512, 256, 64 and 16, respectively. Similarly, for the second MLP, the output dimensions for the four FC layers are 512, 256, 128 and 1, respectively.%
\footnote{See the Criteo Kaggle benchmark for further details on the training process: https://github.com/facebookresearch/dlrm}
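For concreteness, the following is a minimal PyTorch-style sketch of the two MLPs with the dimensions stated above. The dense-feature input dimension ($13$ for the Criteo Kaggle data) and the input dimension of the second MLP after the feature interaction are assumptions made only for illustration; they are not specified in the description above.
\begin{verbatim}
import torch.nn as nn

def mlp(dims, last_sigmoid=False):
    # FC layers with ReLU after each; Sigmoid after the last one if requested.
    layers = []
    for i in range(len(dims) - 1):
        layers.append(nn.Linear(dims[i], dims[i + 1]))
        is_last = (i == len(dims) - 2)
        layers.append(nn.Sigmoid() if (is_last and last_sigmoid) else nn.ReLU())
    return nn.Sequential(*layers)

# Bottom MLP for dense features (input dim 13 is an assumption for Criteo Kaggle).
bot_mlp = mlp([13, 512, 256, 64, 16])
# Top MLP after the feature interaction (input dim 367 is a placeholder, not from the text).
top_mlp = mlp([367, 512, 256, 128, 1], last_sigmoid=True)
\end{verbatim}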
Training and testing sets comprise 6 days and one day, respectively, of the Criteo Ad Kaggle dataset.%
\footnote{https://labs.criteo.com/2014/02/kaggle-display-advertising-challenge-dataset/}
Fig. \ref{fig:dlrm_illus} provides an illustration of MP with the DLRM model. The shaded area in blue shows a sample partition for MP.
In our simulations, we consider up to two splits of the DLRM model. The first split is after two layers in the first MLP, and the second split is after two layers in the second MLP. Our goal is to reduce communication across different workers (both during the forward and backward passes). This is a typical setup in MP training, where workers 1, 2, and 3 can be the different pipes of a single trainer (e.g., see \citet{gpipe}).
For all our experiments, the data shuffle remains constant across different training runs.
\begin{figure*}[t]
\centering
\includegraphics[width=0.85\textwidth]{dlrm_new}
\caption{ An illustration of model parallelism with DLRM. The entities that are sent across the network are shown in red. $X_{act}$ and $X_{grad}$ are communicated during MP, and $W_{grad}$ is communicated during DP. The shaded area in blue represents a sample model partitioning for MP. In this case, three workers are working on one copy of the model during MP and comprise a trainer.}
\label{fig:dlrm_illus}
\end{figure*}
In Fig. \ref{fig:dlrm_illus}, we mark the three entities that are sent across the network and which we compress to alleviate communication costs in distributed DNN training. $X_{act}$ and $X_{grad}$ are the activation and gradient matrices sent across the network during the forward and backward passes, respectively. The third entity that can be compressed is the parameter gradient (shown as $W_{grad}$) that is sent from Workers 1, 2, and 3 to the parameter server.
This keeps a central copy of weights and updates it regularly through the gradients received from different workers.
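To make these compression points concrete, the sketch below shows one way to insert a compression operator at an MP split boundary: a custom autograd function that hard-thresholds $X_{act}$ in the forward pass and $X_{grad}$ in the backward pass. The plain top-$K$ magnitude threshold used here is only for illustration and is not the exact threshold-selection rule of DCT-MP.
\begin{verbatim}
import torch

def keep_topk(t, sparsity):
    # Keep the (1 - sparsity) fraction of entries with largest magnitude; zero the rest.
    k = max(1, int(t.numel() * (1.0 - sparsity)))
    threshold = t.abs().flatten().kthvalue(t.numel() - k + 1).values
    return t * (t.abs() >= threshold)

class SparsifyActGrad(torch.autograd.Function):
    # Sparsify activations on the forward pass and their gradients on the backward pass.
    @staticmethod
    def forward(ctx, x, sparsity):
        ctx.sparsity = sparsity
        return keep_topk(x, sparsity)

    @staticmethod
    def backward(ctx, grad_out):
        return keep_topk(grad_out, ctx.sparsity), None

# Usage at the split boundary, e.g. with 95% sparsity:
# x = SparsifyActGrad.apply(x, 0.95)
\end{verbatim}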
{
\begin{table}[ht]
\caption{ DCT-MP on the DLRM model: Train and Test Loss and Accuracy for multiple sparsity ratios (denoted by $\eta$) and different settings for MP.}
\label{table:MP_DLRM}
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lccccr}
\toprule
\makecell{$\eta$} &
\makecell{MP\\Workers} &
\multicolumn{2}{c}{
\makecell{Train\\Loss \phantom{iii} Acc (\%)}} &
\multicolumn{2}{c}{
\makecell{Test\\Loss \phantom{iii} Acc (\%)}} \\
\midrule
0\% & -- & 0.4477 & 79.23 & 0.4538 & 78.78\\
\cmidrule{1-6}
75\% & 2 & 0.4473 & 79.29 & 0.4532 & 78.81\\
90\% & 2 & 0.4472 & 79.28 & 0.4530 & 78.81\\
{\bf 95\%} & 2 & 0.4473 & 79.24 & \bf 0.4534 & 78.80\\
98\% & 2 & 0.4505 & 79.07 & 0.4562 & 78.61\\
\cmidrule{1-6}
75\% & 3 & 0.4482 & 79.19 & 0.4536 & 78.79\\
\bf 90\% & 3 & 0.4479 & 79.24 & \bf 0.4537 & 78.78\\
95\% & 3 & 0.4495 & 79.18 & 0.4546 & 78.72\\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\end{table}
}
In Table \ref{table:MP_DLRM}, we show the cross-entropy loss \cite{DL_book} and accuracy with the DLRM model on the training and testing data samples. A sparsity factor ($\eta$) of $0\%$ denotes the baseline with no compression.
We consider two settings for MP: one split (that is, 2 MP workers); and two splits (or three workers for MP).
{\bf MP with two workers (one split).}
In rows 2-5 in Table \ref{table:MP_DLRM}, we consider one split in the model (or MP with two workers) in the first MLP after two layers. We see that even with $95\%$ sparsity (that is, $20\times$ compression) on $X_{act}$ (and $X_{grad}$) sent across the network, we are able to perform better than the baseline (with no compression), both in terms of train and test loss (highlighted in bold). However, we see a tangible loss in performance when the sparsity is further increased to $98\%$.
{\bf MP with three workers (two splits).}
In rows 6-8 in Table~\ref{table:MP_DLRM}, we consider MP with 3 workers, where the two model splits are in the first and second MLP, as shown in Fig. \ref{fig:dlrm_illus}. Note that, in the case of two splits, compressing the entities that are sent across the network by up to $90\%$ does not affect the test accuracy, and it is still better than the baseline with no compression. However, increasing the sparsity factor to $95\%$ is too ambitious for the two split case, and it increases the test loss by $0.18\%$. Further increasing the number of splits results in a greater performance loss, and the performance is worse than baseline for even $75\%$ sparsity.
\begin{remark}
We emphasize that for all the experiments in this paper, the locations of the splits for MP were not tuned as hyperparameters. Instead, we inserted splits after randomly chosen FC layers, or after the ReLU following the FC layer if it exists. The advantage of inserting a split after a ReLU layer is that the activation matrix is $50\%$ sparse on average, resulting in higher compression rates for DCT-MP.
\end{remark}
\begin{table}[ht]
\caption{ DCT-DP on the DLRM model: Train and Test Loss and Accuracy for various levels of sparsity.}
\label{table:DP_DLRM}
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lcccr}
\toprule
\makecell{Sparsity \\ Factor} &
\multicolumn{2}{c}{
\makecell{Train\\Loss \phantom{iii} Acc (\%)}} &
\multicolumn{2}{c}{
\makecell{Test\\Loss \phantom{iii} Acc (\%)}} \\
\midrule
Baseline & 0.4477 & 79.23 & 0.4538 & 78.78 \\
\cmidrule{1-5}
75\% & 0.4478 & 79.23 & 0.4534 & 78.81\\
90\% & 0.4478 & 79.22 & 0.4536 & 78.79\\
95\% & 0.4479 & 79.25 & 0.4538 & 78.79\\
\bf 98\% & 0.4478 & 79.23 & \bf 0.4537 & 78.80\\
99.5\% & 0.4482 & 79.20 & 0.4547 & 78.75\\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\end{table}
{\bf DP with the DLRM Model.}
In Table \ref{table:DP_DLRM}, we illustrate the performance of DCT-DP on DLRM by compressing the gradients of the parameters of all the 8 FC layers while they are sent across the wire to the parameter server.
The parameter server then updates the model parameters using the compressed gradient. We use error feedback \cite{EF_signSGD_karimireddy} to compensate for the error in gradient compression by feeding it back to the gradients in the next iteration. In general, DCT-DP enjoys higher compression rates due to the use of error compensation schemes and the fact that the error in one layer does not propagate to the other layers, unlike in the case of MP compression. Compression up to $98\%$ sparsity does not show any loss in performance. However, further compressing to $99.5\%$ sparsity increases the test loss by $0.20\%$.
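A minimal sketch of this compress-with-error-feedback step for a single gradient tensor is given below. The fixed sparsity target and the single-tensor interface are simplifying assumptions for illustration, not a description of the production implementation.
\begin{verbatim}
import torch

class ErrorFeedbackCompressor:
    # Top-K gradient sparsification with a local error feedback buffer (illustrative).
    def __init__(self, sparsity=0.98):
        self.sparsity = sparsity
        self.residual = None  # error feedback buffer

    def compress(self, grad):
        if self.residual is None:
            self.residual = torch.zeros_like(grad)
        corrected = grad + self.residual        # add back the previously dropped error
        k = max(1, int(corrected.numel() * (1.0 - self.sparsity)))
        threshold = corrected.abs().flatten().kthvalue(corrected.numel() - k + 1).values
        compressed = corrected * (corrected.abs() >= threshold)
        self.residual = corrected - compressed  # store the dropped part for the next iteration
        return compressed                       # sparse gradient sent to the parameter server
\end{verbatim}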
\begin{table}[ht]
\caption{ Compression using DCT-DP and DCT-MP on the DLRM model: Train and Test Loss and Accuracy with two MP splits (that is, three workers for MP).}
\label{table:DP_MP_DLRM}
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lcccr}
\toprule
\makecell{Sparsity \\ Factor} &
\multicolumn{2}{c}{
\makecell{Train\\Loss \phantom{iii} Acc (\%)}} &
\multicolumn{2}{c}{
\makecell{Test\\Loss \phantom{iii} Acc (\%)}} \\
\midrule
Baseline & 0.4477 & 79.23 & 0.4538 & 78.78 \\
\cmidrule{1-5}
75\% & 0.4480 & 79.23 & 0.4535 & 78.81\\
\bf 90\% & 0.4481 & 79.26 & \bf 0.4537 & 78.78\\
95\% & 0.4492 & 79.19 & 0.4548 & 78.70\\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\end{table}
{\bf Communication-efficient Hybrid Training.}
Next, we apply compression to $W_{grad}$ for the 8 FC layers (in the DP case) and to $X_{act}$ (and $X_{grad}$) for two splits (in the MP case) and present our results in Table \ref{table:DP_MP_DLRM}. We see that compression up to $90\%$ sparsity (both during DP and MP) does not affect the performance, but the test loss increases by $0.22\%$ when the sparsity factor is increased to $95\%$.
\subsection{Experiments on a Translation Model}
\label{sec:exps_nlp}
For our experiments with DCT-MP, we next consider the Transformer translation model as an application of NLP using DNNs. We train over the IWSLT'14 German to English dataset \citep{iwslt}.
The setup and hyperparameters were directly borrowed from the fairseq NLP Library \cite{fairseq}.
The model follows \cite{vaswani2017attention}, where both the encoder and decoder have 6 layers, each of which uses a fully connected Feed-Forward Network (FFN) with input and output dimensionality of 512 and inner layer dimensionality of 1024.%
\footnote{For further details on the translation model, dataset preprocessing and the hyperparameters used, see https://github.com/pytorch/fairseq/tree/master/examples/translation}
We report the training and testing losses and the BLEU scores after 50 epochs of training.
Our results with DCT-MP on the translation model are described in Table \ref{table:DP_MP_NLP}. We consider three training scenarios: two MP workers (with one split), three MP workers (with two splits), and five MP workers (with four splits). For the case with one split, we inserted the DCT-MP operator after the ReLU operator in the FFN of the fifth encoder layer. For the two-splits case, we additionally inserted the DCT-MP operator after the ReLU operator in the FFN of the fifth decoder layer. For the four-splits case, we further added two splits after the ReLU operator in the third FFN of both the encoder and the decoder. For each scenario, we show the best performing sparsity factor in bold.
We emphasize that no hyperparameter tuning was performed in choosing the splits, and we observed in our experiments that using DCT-MP after an FC layer or a ReLU layer improves the generalization performance, possibly due to (implicitly) added regularization (as illustrated in Fig. \ref{fig:thres_values_for_MP}).
Note that we can add more MP splits for the NLP model compared to the DLRM model since the model is significantly deeper (and thus less susceptible to changes in outputs of a few layers) with larger FC layers (thus allowing for greater sparsity).
This shows that DCT-MP is more beneficial for wider and/or deeper models (that is, typical setups where MP is used).
\begin{table}[ht]
\caption{ DCT-MP on a translation model with IWSLT'14 dataset: Train and Test Losses and BLEU scores for various levels of sparsity and different splits for MP.}
\label{table:DP_MP_NLP}
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lcccr}
\toprule
\makecell{Sparsity \\ Factor} &
\makecell{MP \\ Workers} &
\makecell{Train\\Loss } &
\makecell{Test\\Loss} &
\makecell{BLEU \\Score}
\\
\midrule
Baseline & -- & 3.150 & 3.883 & 35.17 \\
\cmidrule{1-5}
90\% & 2 & 3.159 & 3.879 & 35.23\\
\bf 95\% & 2 & 3.157 & \bf 3.882 & 35.18\\
\cmidrule{1-5}
90\% & 3 & 3.151 & 3.881 & 35.22\\
\bf 95\% & 3 & 3.148 & \bf 3.882 & 35.19\\
\cmidrule{1-5}
\bf 90\% & 5 & 3.157 & \bf 3.882 & 35.20\\
95\% & 5 & 3.188 & 3.890 & 35.15\\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\end{table}
In this subsection, we do not consider DCT-DP since similar schemes have been evaluated for NLP models in existing works such as \cite{DP_survey} and \cite{aji2017sparse}.
In the next subsection, we evaluate DCT-MP and DCT-DP on large-scale recommendation models for end-to-end training times and overall model performance.
\subsection{Large-Scale Recommendation System}
\label{sec:exps_prod}
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[width=\textwidth]{qps_l_perc.pdf}
\caption{
Training time improvements for different values of $L$.
}
\label{fig:qps_l_perc}
\end{subfigure}
~
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[width=\textwidth]{train_test_L.pdf}
\caption{
Train and test error improvements for different values of $L$.
}
\label{fig:train_test_L}
\end{subfigure}
~
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[width=\textwidth]{qps_eta_perc.pdf}
\caption{Training time improvements for various sparsity factors.
}
\label{fig:qps_eta_perc}
\end{subfigure}
~
\begin{subfigure}[t]{0.25\textwidth}
\centering
\includegraphics[width=\textwidth]{train_test_eta.pdf}
\caption{Train and test error for various sparsity factors.}
\label{fig:train_test_eta}
\end{subfigure}
\caption{
{DCT-DP on Large-Scale Recommendation Models. Figures (a) and (b) show the training time and loss improvements, respectively, over baseline for different values of the threshold life-span, $L$, for a sparsity level of $95\%$. Figures (c) and (d) show the same statistics for various levels of sparsity for $L=1000$.}
}
\label{fig:DCT-DP-Prod}
\end{figure*}
We present our results for a real-world, large-scale recommendation system that employs HP for parallelization on a click-through-rate prediction task. We employ DCT-MP and DCT-DP to reduce the network bandwidth usage in these systems.
{\bf Experimental Setup.}
We leverage a distributed data-parallel asynchronous training system with multiple trainers to train a recommendation model. Each trainer in the DP setup may consist of one or more workers that use MP (see Fig. \ref{fig:hybrid_illus} for an illustration). Typically, the model is split into 10 or more parts and fine-grained parallelism is employed for high throughput. Hence, the worker machines suffer from very high communication cost for both MP and DP.
The batch sizes are usually in the range of 100-1000, but they are employed with hogwild threads (see \citet{hogwild}) to increase the throughput of the system, further exacerbating the communication cost problem.
The recommendation model considered in this section takes multiple days to train with general-purpose CPU machines. All the workers and parameter servers run on Intel 18-core 2GHz processors with 12.5Gbit Ethernet. The hardware configurations are identical and consistent across all the experiments.
We train using 7B training examples and evaluate the model on 0.5B examples.
For quantifying model performance, we report the cross-entropy loss from the classification task. We compare relative cross-entropy loss and end-to-end training times of the proposed techniques with respect to a baseline model without communication compression.
{\bf DCT-DP with Large-Scale Recommendation Model.}
Figure \ref{fig:DCT-DP-Prod} shows the results of applying DCT-DP on the large-scale recommendation model.
In Figure \ref{fig:qps_l_perc}, we plot the improvements in end-to-end training times when DCT-DP is applied to compress the parameter gradients, $W_{grad}$, that are sent to the parameter server. Here, we keep the sparsity level constant at $95\%$ and vary the threshold life-span $L$ (the interval after which the top-K threshold is updated). We note that compression with $L=1$ takes $11\%$ more time than the baseline with no compression. This is due to the cost of the copy-and-sort routine which computes the top-K threshold.\footnote{Note that $L=1$ represents the scheme proposed in popular works such as \cite{stich_sparsified_sgd} and \cite{alistarh2018_conv_sparse}. Thus, naively implementing existing schemes for top-K sparsification might not always yield expected gains in production. However, as we observe later, simply updating $L$ every thousand iterations can improve the training time by $25\%$ without any loss in performance.} Increasing $L$ to $1000$ trains the model $23\%$ faster, and further increasing it to $10000$ does not provide any additional gain. Figure \ref{fig:train_test_L} illustrates that for different values of $L$, the train and test losses are within $0.01\%$ of the baseline performance.
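The role of the threshold life-span can be summarized by the following sketch, which recomputes the top-K threshold only once every $L$ iterations and reuses the cached value in between. The sparsity target and the single-tensor interface are again simplifying assumptions for illustration.
\begin{verbatim}
import torch

class CachedThresholdSparsifier:
    # Recompute the top-K threshold every L iterations; reuse the cached value in between.
    def __init__(self, sparsity=0.95, life_span=1000):
        self.sparsity = sparsity
        self.life_span = life_span  # L: iterations between threshold refreshes
        self.threshold = None
        self.step = 0

    def __call__(self, grad):
        if self.threshold is None or self.step % self.life_span == 0:
            # The expensive copy-and-sort (here, kthvalue) runs only once every L steps.
            k = max(1, int(grad.numel() * (1.0 - self.sparsity)))
            self.threshold = grad.abs().flatten().kthvalue(grad.numel() - k + 1).values
        self.step += 1
        return grad * (grad.abs() >= self.threshold)
\end{verbatim}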
Fig. \ref{fig:qps_eta_perc} shows the improvement in training time for various levels of sparsity when the threshold life span is kept constant at $L=1000$. We observe the general trend that when the sparsity is increased, the training time improves.
Overall, we are able to compress the gradients to sparsity factors of up to $99.5\%$ without any loss in train and test performance (as noted from Fig. \ref{fig:train_test_eta}). However, we do not see significant improvements in training time beyond the sparsity level of $95\%$, possibly because the message size is small enough to not hurt bandwidth usage, and the only cost remaining is the fixed latency cost associated with sending any message, irrespective of its size.
\begin{remark}
We observe that error feedback works very well in this asynchronous data-parallel training paradigm with a larger number of hogwild threads.
Note that this should not be expected, since existing works prove convergence guarantees only for synchronous SGD settings.
An implementation detail that helped was sharing the error feedback buffer between the multiple threads. However, this can lead to a fast-growing error magnitude in the buffer, leading to stale updates. To avoid this, we drain the error feedback buffer stochastically every $1$ million iterations (a rough sketch of this draining step is given after the remark).
\end{remark}
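As a rough sketch of the draining step only, one can reset the shared buffer with a small probability at every update, so that drains happen on average once per $10^6$ iterations; reading ``stochastically every $1$ million iterations'' as an independent per-step coin flip is our assumption.
\begin{verbatim}
import random

def maybe_drain(error_buffer, drain_prob=1e-6):
    # Zero the shared error-feedback buffer with small probability per update,
    # so drains occur roughly once every 1/drain_prob iterations (assumed reading
    # of "stochastically every 1 million iterations").
    if random.random() < drain_prob:
        error_buffer.zero_()
\end{verbatim}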
{\bf DCT-MP with Large-Scale Recommendation Model.}
We employ DCT-MP to compress the entities sent through the network during MP for communication efficiency. DCT-MP is applied across the 12 splits of the model after the ReLU layer. Our results are summarized in Table \ref{table:prod_mp}.
We show improvement in training and test losses%
\footnote{Positive numbers imply better performance.}
in columns 2 and 3, respectively, and the improvements in end-to-end training times in column 4 for various levels of sparsity.
We observe that the training performance slightly degrades with DCT-MP on large-scale models.
However, the test performance improves up to sparsity levels of $90\%$, with a $14\%$ improvement in end-to-end training time. Increasing the sparsity level to $95\%$ degrades the test performance by $0.121\%$. Note that we can further improve the performance of DCT-MP by identifying the layers whose activations are sensitive to sparsification and avoiding compressing them during DCT-MP (or changing the location of the split). However, such selectivity in choosing layers for DCT-MP is beyond the scope of this paper.
\begin{table}[t]
\caption{ DCT-MP on a large-scale recommender model}
\label{table:prod_mp}
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lccr}
\toprule
\makecell{Sparsity \\ Factor} &
\multicolumn{2}{c}{
\makecell{Loss Improvement (\%)\\Train \phantom{iiiiiiii} Test}} &
\makecell{Time \\Gain (\%)}
\\
\midrule
Baseline & 0.000\% & 0.000\% & 0.000\% \\
75\% & -0.006\% & 0.023\% & 7.04\%\\
\bf 90\% & -0.021\% & \bf 0.016\% & \bf 13.95\%\\
95\% & -0.070\% & -0.121\% & 14.43\%\\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\end{table}
{\bf Communication-Efficient Hybrid training.}
Next, we apply both DCT-DP and DCT-MP for communication reduction during hybrid training of a large-scale recommendation model. Inspired by our previous results, we chose the sparsity levels as $90\%$ and $99\%$ for DCT-MP and DCT-DP (with $L=1000$), respectively. We observe a $37.1\%$ reduction in end-to-end training time, with train and test loss within $0.01\%$ of the baseline model that does no compression.
Further, before applying DCT, we observed that the network utilization was high (94.2\%) and the CPU utilization was low (48.7\%), implying that communication is a bottleneck. However, after applying DCT, CPU utilization increased to 91.1\% and network utilization decreased to 49.3\%, implying that DCT shifted the bottleneck from communication to computation in production models.
\section{Introduction}
\begin{figure*}[t]
\centering
\includegraphics[scale=0.25]{hybrid_illus2.pdf}
\caption{ Distributed DNN training with hybrid training which uses both DP (left) and MP (right) for greater parallelization gains.
During DP, multiple trainers process several mini-batches of data in parallel.
During MP, one copy of the model is processed by one trainer which in turn is comprised of multiple workers.}
\label{fig:hybrid_illus}
\end{figure*}
Data Parallelism (DP), in which each (of many) trainers stores a replica of the entire model, is a popular parallelization paradigm for the training of very large Deep Neural Networks (DNNs)
\citep{dean2012large, DP_scaling}.
At each training iteration, every worker processes a subset of the entire training data with a predetermined batch size, and then synchronizes the model parameters at the end of the iteration.
DP has experienced widespread deployment for state-of-the-art industrial applications, but it is now facing two major challenges.
The first challenge is that large batch size is needed to exploit fully the ever-increasing compute power of training nodes.
This turns out to be difficult.
Both theoretical and empirical evidence suggests that going beyond a certain batch size for training DNNs results in loss in generalization performance (e.g., see \citep{keskar,LBnotEmp:2018shallue,openai,Hoffer17,LBempGolmant:2018,MM18_TR,ma2018_sgd_large_batch,staleness_async_sgd}).
Despite active research on restoring generalization performance when the batch size is large \citep{priya2017,imagenet4mins:2018,lars,adabatch,smith2017,post-local-sgd,swap, yao2018hessian}, these methods either are specific to certain models and/or datasets, require extensive hyperparameter tuning, or can at best increase the maximum batch size by a small factor.
The second challenge is replicating an entire DNN model on each worker, which is becoming an increasingly infeasible proposition. This is
due to increasing model complexity and parameters in domains such as, but not limited to, natural language processing and recommendation systems (e.g., see \citep{bert,outrageously_large_dnn,dense_conv_nets, dlrm}), coupled with the saturation of single machine memory and compute power due to trends such as the ending of Moore's law \citep{moore_law1, moore_law2}.
For these reasons, Model Parallelism (MP) has gained significant traction, both from the industry and the research community, as an alternative parallelization paradigm \citep{pipedream, gpipe,mp_multi_gpu,ElasticPipe,xpipe_async_mp,strads}.
In its purest form, the entire network during MP is partitioned into a number of sub-networks equal to the number of workers. While this form can accommodate a larger network than DP, it fails to capitalize on the largest batch size that is allowable before generalization performance degrades.
Hybrid Parallelism (HP)---that employs both DP and MP---is a natural next step, an idea that was arguably first introduced in \cite{dean2012large}, and more recently exploited further for large-scale DNN training \citep{HT_google_2018,zaharia_hybrid_parallelism,hybrid_horizontal_vertical,hetpipe,optCNN, gholami2018integrated}.
An illustration of hybrid training that uses MP to distribute the model across workers and DP to process multiple batches of training data at once is provided in Fig. \ref{fig:hybrid_illus}.
Here, each partition of the network for MP is replicated in a group of workers, each processing the entire batch for that sub-network in question. Currently, hybrid training is employed in training a subset of large-scale recommendation models in production at Facebook.
The scaling of model size and batch size by HP has now progressed to the next bottleneck: communication bandwidth \cite{pipedream}.
This bottleneck exists in two crucial places.
First, for MP, activation values and gradient information need to be communicated from one sub-network to the next during forward and backward propagation.
Second, for DP, gradients of the same sub-network but for different sub-batches need to be communicated, regardless of the exact operations that follow, which depend on the specific communication protocol (centralized versus decentralized reduction) and on the algorithm (synchronous versus asynchronous updates).
To compound the problem, increasing the batch size to fully exploit DP increases the communication of activations and gradients in MP, the sizes of which are directly proportional to the batch size.
Additionally, in asynchronous training, increasing the batch size exacerbates the stale-gradient problem due to an increase in the time interval between a worker receiving the model and sending the gradient \cite{revisiting_sync_sgd}.
In short, the benefits of communication reduction are many.
\textbf{Dynamic Communication Thresholding.}
We propose a Dynamic Communication Thresholding (DCT) framework for communication efficient training for HP.
DCT incorporates two algorithms, DCT-DP and DCT-MP, to alleviate communication congestion for DP and MP, respectively.
Our algorithms filter the entities to be communicated through a simple hard-thresholding function, eliminating the need to pass many of them over the communication fabric.
We propose practical methods to compute the thresholds to reduce the computational overhead of compression.
Our thresholding technique is versatile, as it applies to different communication primitives in DP for the gradients, to different pipelining methods in MP (e.g., GPipe \citep{gpipe}, PipeDream \citep{pipedream}), and to different applications such as recommendation systems and natural language processing models.
While thresholding communication introduces errors, we apply a previously known error compensation technique, as well as a model consistency adjustment method that we developed, to mitigate the effect of the compression error.
Consequently, despite significant communication thresholding, model accuracy does not degrade, and in fact it often improves.
We apply DCT to large-scale state-of-the-art recommendation models in production with real-world datasets as well as publicly available models and datasets.
We observe that the communication costs are reduced by factors of up to $20\times$ for MP and $100\times$ for DP. Further, end-to-end training time for large-scale training is cut by as much as 37\% for industry-scale models in production.
Further, applying DCT reduces the network utilization from 94.2\% to 49.3\% and increases the overall CPU utilization from 48.7\% to 91.1\%, shifting the bottleneck of model training from communication to computation in such systems.
\textbf{Related Work.}
Due to the use of large clusters with powerful machines to train complex DNNs (e.g. BERT-Large \citep{bert} with 340M parameters), the distributed training workloads are becoming increasingly communication bound. For this reason, numerous compression schemes have been proposed in the past several years for the data parallel setting (see \citep{DP_survey} for a comprehensive survey).
These compression schemes come in various forms, such as the following:
(i) Quantization, where the number of bits per entry of the communicated vector is reduced (e.g., \citep{alistarh2017_qsgd, EF_signSGD_karimireddy, terngrad});
(ii) Sparsification, where only a few entries of the communicated vector are sent (e.g., \citep{stich_sparsified_sgd,alistarh2018_conv_sparse,aji2017sparse,deep_grad_comp_lin, sun2019sparse, wangni2018sparse});
(iii) Statistical techniques such as Randomized Sketching (e.g., \citep{sketchml,sketched-SGD}); and
(iv) Low-rank approximation, which decomposes the vector into low-rank components before communication (e.g., \cite{lowrank_atomo,lowrank_powersgd,lowrank_gradzip,lowrank_gradiveq}).
When it comes to performance on real-world systems, many of these existing schemes have one or more of the following shortcomings.
(i) Focus is mostly on a theoretical analysis of schemes based on restricted assumptions, such as convexity and synchronous SGD.
(ii) The empirical evaluation ignores the cost of compression and decompression which, in many cases, deprives them of any savings due to communication.
(iii) Comparison of convergence with respect to baseline is reported, while the number of epochs (or iterations) and the actual training time is ignored.
For instance, in Fig. 1 in \cite{DP_survey}, the authors compare the compression scheme in \citep{sketchml} with a baseline without compression.
They observe that, although the convergence with respect to the number of epochs is unaffected due to compression, it takes almost twice the time for training to converge, rendering the scheme worse than no compression.
We also observed in our experiments that for sparsification using top-K sparsity \cite{stich_sparsified_sgd,alistarh2018_conv_sparse}, the overhead of copying and sorting the large vectors ends up taking more time than the gains obtained due to communication reduction.
(See Fig. \ref{fig:DCT-DP-Prod} in Sec. \ref{sec:exps_prod} for details.)
In this paper, {\bf we propose practical schemes for communication reduction during DP, and we show performance improvements in terms of the end-to-end DNN training times}, with performance similar to, or in some cases better than, the baseline algorithms as implemented in industry.
For the MP case, existing works target the scheduling of communication of entities across the network to improve the efficiency of training DNNs \citep{lee2014_MP_sch, DES_MP}. However,
to the best of our knowledge, {\bf this is the first work that targets communication reduction for MP by compressing the entities (i.e., activations and gradients) that are sent across the network}.
As such, it can be applied on top of existing training efficiency schemes, such as communication scheduling \citep{lee2014_MP_sch, DES_MP} and Pipelining \citep{pipedream, gpipe, pipemare, xpipe_async_mp} for MP. As illustrated in Fig. \ref{fig:hybrid_illus} (right), communication is a major bottleneck for MP-based training since the activations are communicated from (say) worker 1 to worker 2 during the forward pass and the gradients are then communicated from worker 2 to worker 1 during the backward pass (similar communication happens between workers 2 and 3).
However, we further observed that naively applying compression schemes, such as sparsification, quantization and sketching, to the activations and gradients either does not achieve compression rates high enough to be practical, or degrades the model performance beyond an acceptable level.
(See Appendix \ref{app:sketch} for details on such negative results.)
In the next section, we describe our algorithms for communication efficiency during parallelization, for both the MP and DP primitives of the DNN training.
In particular,
we discuss DCT-DP (in Section \ref{sec:dct_dp}) and explain our gradient reduction technique for DP that requires minimal computational overhead for compression; and then we discuss DCT-MP (in Section \ref{sec:dct_mp}), a flexible thresholding framework with theoretical support for our design.
Then, Section \ref{sec:exps} reports our findings from a diverse set of experiments and demonstrates the advantages of using DCT-DP and DCT-MP for training large-scale models for both publicly available and production models.
\end{document}
\section{Unitriangular Matrices}
\label{section on unitriangular matrices}
Let $n$ be a positive integer and $R$ a commutative unital ring.
The set $\dbU_n(R)$ of all unitriangular (i.e., unipotent and upper-triangular) $(n+1)\times(n+1)$-matrices over $R$ forms a group in the standard way.
The additive group $R^+$ of $R$ embeds as a central subgroup of $\dbU_n(R)$ via the map $a\mapsto I+aE_{1,n+1}$, where $I$ is the unit matrix and $E_{ij}$ denotes the matrix with $1$ at entry $(i,j)$, and $0$ elsewhere.
Let
\[
\bar\dbU_n(R)=\dbU_n(R)/R^+.
\]
We may consider its elements as unitriangular $(n+1)\times(n+1)$-matrices, with the $(1,n+1)$-entry omitted.
Then the multiplication in $\bar\dbU$ is carried out similarly to the usual multiplication of upper-triangular matrices.
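For instance, for $n=3$ an element of $\bar\dbU_3(R)$ is determined by the entries at the positions $(1,2),(1,3),(2,3),(2,4),(3,4)$, and the product of two such elements $(m_{ij})$, $(m'_{ij})$ is computed exactly as for genuine unitriangular matrices, with the $(1,4)$-entry discarded:
\[
\begin{aligned}
(mm')_{i,i+1}&=m_{i,i+1}+m'_{i,i+1} \quad (i=1,2,3),\\
(mm')_{13}&=m_{13}+m'_{13}+m_{12}m'_{23}, \qquad
(mm')_{24}=m_{24}+m'_{24}+m_{23}m'_{34}.
\end{aligned}
\]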
We abbreviate $\dbU=\dbU_n(R)$ and $\bar\dbU=\bar\dbU_n(R)$, and let $\pi\colon \dbU\to\bar\dbU$ be the projection map.
Thus there is a central extension
\begin{equation}
\label{central extension}
0\to R^+\to\dbU\xrightarrow{\pi}\bar\dbU\to1.
\end{equation}
For $1\leq i<j\leq n+1$, let $\pr_{ij}\colon\dbU\to R$ be the projection on the $(i,j)$-entry.
When $(i,j)\neq(1,n+1)$ it induces a projection map $\pr_{ij}\colon\bar\dbU\to R$.
We note that $\pr_{i,i+1}\colon\dbU\to R^+$, $i=1,2\nek n$, are group homomorphisms.
Given $1\leq a\leq b\leq n+1$ with $r=b-a$, the restriction to rows $a\leq i\leq b$ and columns $a\leq j\leq b$ gives a homomorphism $\dbU\to\dbU_r(R)$.
From now on we further assume that the ring $R$ is finite.
Let $\bar\dbU$ act on $R^+$ trivially, and consider the cohomology groups $H^l(\bar\dbU,R^+)$, $l=1,2$.
The projections $\pr_{i,i+1}$, $i=1,2\nek n$, may be viewed as elements of $H^1(\bar\dbU,R^+)=\Hom(\bar\dbU,R^+)$.
The extension (\ref{central extension}) corresponds under the Schreier correspondence to an element $\alp=\alp_{n,R}$ of $H^2(\bar\dbU,R^+)$ with the trivial action \cite{NeukirchSchmidtWingberg}*{Th.\ 1.2.4}.
Consider the $2$-cochain
\[
c=\sum_{k=2}^n\pr_{1k}\cup\pr_{k,n+1}\colon \bar\dbU\times\bar\dbU\to R^+, \quad
((m_{ij}),(m'_{ij}))\mapsto\sum_{k=2}^nm_{1k}m'_{k,n+1}.
\]
It is straightforward to verify that it is a 2-cocycle.
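Indeed, for $x,y,z\in\bar\dbU$, expanding $(xy)_{1k}=\sum_{j=1}^{k}x_{1j}y_{jk}$ and $(yz)_{k,n+1}=\sum_{j=k}^{n+1}y_{kj}z_{j,n+1}$, and using that the diagonal entries equal $1$, one finds that both sides of the cocycle identity $c(y,z)+c(x,yz)=c(xy,z)+c(x,y)$ are equal to
\[
\sum_{k=2}^{n} y_{1k}z_{k,n+1}+\sum_{k=2}^{n} x_{1k}z_{k,n+1}
+\sum_{2\le k<j\le n} x_{1k}y_{kj}z_{j,n+1}+\sum_{k=2}^{n} x_{1k}y_{k,n+1}.
\]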
\begin{prop}
\label{decomposition of alpha}
The cohomology class of $c$ is $\alp$.
\end{prop}
\begin{proof}
Since $c$ is a $2$-cocycle, $R^+\times \bar\dbU$ is a group with respect to the multiplication
\[
(r,(m_{ij}))*(r',(m'_{ij}))=(r+r'+c((m_{ij}),(m'_{ij})),(m_{ij})(m'_{ij})).
\]
We obtain a central extension
\begin{equation}
\label{central extension 2}
0\to R^+\to R^+\times \bar\dbU\to \bar\dbU\to1,
\end{equation}
which corresponds to the cohomology class of $c$ in $H^2(\dbU,R^+)$ (see the proof of \cite{NeukirchSchmidtWingberg}*{Th.\ 1.2.4}).
It is therefore enough to show that the central extensions (\ref{central extension}) and (\ref{central extension 2}) are equivalent.
To this end consider the map $h=\pr_{1,n+1}\times\pi\colon\dbU\to R^+\times\bar\dbU$.
It is bijective, and commutes with the epimorphisms onto $\bar \dbU$ in both extensions.
It remains to show that $h$ is a group homomorphism.
Indeed, for $(m_{ij}),(m'_{ij})\in\dbU$ we have
\[
\sum_{k=1}^{n+1}m_{1k}m'_{k,n+1}
=m_{1,n+1}+m'_{1,n+1}+c(\pi((m_{ij})),\pi((m'_{ij}))),
\]
which implies that
\[
\begin{split}
&h((m_{ij})(m'_{ij}))=\Bigl(\sum_{k=1}^{n+1}m_{1k}m'_{k,n+1},\pi((m_{ij})(m'_{ij}))\Bigr)\\
&=(m_{1,n+1},\pi((m_{ij})))*(m'_{1,n+1},\pi((m'_{ij})))
=h((m_{ij}))*h((m'_{ij})),
\end{split}
\]
as desired.
\end{proof}
\section{pullbacks}
\label{section on pullbacks}
Let $G$ be a profinite group acting trivially on $R^+$.
Given a continuous homomorphism $\bar\rho\colon G\to\bar\dbU$, we abbreviate $\bar\rho_{ij}=\pr_{ij}\circ\bar\rho$.
Then $\bar\rho_{i,i+1}\colon G\to R^+$, $i=1,2\nek n$, are homomorphisms.
We say that $\bar\rho$ is \textsl{central} if its image is contained in $\Center(\bar\dbU)$.
If $\bar\rho,\bar\rho'\colon G\to \bar\dbU$ are continuous homomorphisms with $\bar\rho'$ central, then the product map $\bar\rho\cdot\bar\rho'\colon G\to\bar\dbU$, $g\mapsto\bar\rho(g)\bar\rho'(g)$, is also a continuous homomorphism.
Given a continuous homomorphism $\bar\rho\colon G\to\bar\dbU$, let
\[
\bar\rho^*\colon H^2(\bar\dbU,R^+)\to H^2(G,R^+)
\]
be the functorially induced homomorphism.
Thus $\bar\rho^*\alp$ is the pullback of $\alp=\alp_{n,R}$ to $H^2(G,R^+)$.
It corresponds to the central extension
\[
0\to R^+\to \dbU\times_{\bar\dbU}G\to G\to1,
\]
where $\dbU\times_{\bar\dbU}G$ is the fiber product of $\dbU$ and $G$ with respect to the projection $\pi\colon \dbU\to\bar \dbU$ and to $\bar\rho$.
We have $\bar\rho^*\alp=0$ if and only if the latter extension splits, or alternatively, there is a continuous homomorphism $\rho\colon G\to\dbU$ making the following diagram commutative \cite{Hoechsmann68}*{1.1}:
\begin{equation}
\label{embedding problem}
\xymatrix{&&&G\ar[d]^{\bar\rho}\ar@{-->}[dl]_{\rho}&\\
0\ar[r]& R^+\ar[r]& \dbU\ar[r] &\bar\dbU\ar[r]&1.
}
\end{equation}
If $\bar\dbV$ is a subgroup of $\bar\dbU$ and $M=\bar\rho\inv(\bar\dbV)$, then there is a commutative square
\begin{equation}
\label{res commutes with pullbacks}
\xymatrix{
H^2(\bar\dbU,R^+)\ar[r]^{\Res_{\bar\dbV}}\ar[d]_{\bar\rho^*} & H^2(\bar\dbV,R^+)\ar[d]^{(\Res_M\bar\rho)^*} \\
H^2(G,R^+)\ar[r]^{\Res_M}& H^2(M,R^+).
}
\end{equation}
Dwyer \cite{Dwyer75} shows in the discrete context the following connection with Massey products (see e.g., \cite{Efrat14}*{\S8} for the profinite context):
Given $\chi_1\nek\chi_n\in H^1(G,R^+)$, where $n\geq2$, the Massey product $\langle\chi_1\nek\chi_n\rangle\subseteq H^2(G,R^+)$ consists of all pullbacks $\bar\rho^*\alp$ with $\bar\rho\colon G\to\bar\dbU$ a continuous homomorphism such that $\chi_i=\bar\rho_{i,i+1}$, $i=1,2\nek n$.
In particular:
\begin{enumerate}
\item[(i)]
$\langle\chi_1\nek\chi_n\rangle\neq\emptyset$ if and only if there is a continuous homomorphism $\bar\rho\colon G\to\bar\dbU$ with $\chi_i=\bar\rho_{i,i+1}$, $i=1,2\nek n$;
\item[(ii)]
$0\in \langle\chi_1\nek\chi_n\rangle$ if and only if there is a continuous homomorphism $\bar\rho\colon G\to\bar\dbU$ with $\chi_i=\bar\rho_{i,i+1}$, $i=1,2\nek n$, and $\bar\rho^*\alp=0$;
i.e., there is a continuous homomorphism $\rho\colon G\to\dbU$ with $\chi_i=\rho_{i,i+1}$, $i=1,2\nek n$.
\end{enumerate}
\begin{rem}
\label{remark on cup products}
\rm
If all of $\bar\rho_{1k}$, $\bar\rho_{k,n+1}$, $k=2\nek n$, are homomorphisms, then they can be identified as elements of $H^1(\bar\dbU,R^+)$.
Hence Proposition \ref{decomposition of alpha} gives rise to a decomposition $\bar\rho^*\alp=\sum_{k=2}^n\bar\rho_{1k}\cup\bar\rho_{k,n+1}$ in $H^2(G,R^+)$.
For example, for $n=2$, the maps $\bar\rho_{12},\bar\rho_{23}\colon G\to R^+$ are homomorphisms, so $\bar\rho^*\alp=\bar\rho_{12}\cup\bar\rho_{23}$.
More generally, one has:
\end{rem}
\begin{lem}
\label{vanishing of cup products}
Suppose that $n\geq3$ and let $\bar\rho\colon G\to\bar\dbU$ be a continuous homomorphism.
Then $\bar\rho_{i-1,i}\cup\bar\rho_{i,i+1}=0$ in $H^2(G,R^+)$, $i=2\nek n$.
\end{lem}
\begin{proof}
Let $2\leq i \leq n$.
Recall that the projection to rows and columns $i-1,i,i+1$ is a homomorphism $\dbU\to\dbU_2(R)$.
By composing it with $\bar\rho$ we obtain that the maps
\[
\nu=\begin{bmatrix}1&\bar\rho_{i-1,i}&\bar\rho_{i-1,i+1}\\
0&1&\bar\rho_{i,i+1}\\
0&0&1\end{bmatrix}
\colon G\to\dbU_2(R), \
\bar\nu=\begin{bmatrix}1&\bar\rho_{i-1,i}&\\
0&1&\bar\rho_{i,i+1}\\
0&0&1\end{bmatrix}
\colon G\to\bar\dbU_2(R)
\]
are homomorphisms.
Clearly, $\nu$ solves the embedding problem (\ref{embedding problem}) for $n=2$ and $\bar\nu$, so $\bar\nu^*\alp_{2,R}=0$.
By Remark \ref{remark on cup products}, $\bar\rho_{i-1,i}\cup\bar\rho_{i,i+1}=0$.
\end{proof}
As another example, we have the following method to modify the pullbacks $\bar\rho^*\alp$:
\begin{lem}
\label{modified homomorphism}
Suppose that $n\geq3$.
Let $\bar\rho\colon G\to\bar\dbU$ be a continuous homomorphism, let $\lam,\lam'\colon G\to R^+$ be continuous maps, and let
\[
\bar\rho'=\begin{bmatrix}1&\bar\rho_{12}&\cdots&\bar\rho_{1,n-1}&\bar\rho_{1n}+\lam&\\
&1&\bar\rho_{23}&\cdots&\bar\rho_{2n}&\bar\rho_{2,n+1}+\lam'\\
&&1&\cdots&&\bar\rho_{3n}\\
&&&\ddots&&\vdots\\
&&&&1&\bar\rho_{n-1,n}\\
&&&&&1 \end{bmatrix}
\colon G\to\bar\dbU.
\]
Then $\bar\rho'$ is a homomorphism if and only if $\lam,\lam'$ are homomorphisms.
Moreover, in this case
\[
(\bar\rho')^*\alp=\bar\rho^*\alp+\lam\cup\bar\rho_{n,n+1}+\bar\rho_{12}\cup\lam'.
\]
\end{lem}
\begin{proof}
If $\lam,\lam'$ are homomorphisms, then the maps
\[
I+\lam E_{1n}, I+\lam' E_{2,n+1}\colon G\to\bar\dbU
\]
are central homomorphisms.
By the previous comments,
\[
\bar\rho'=\bar\rho\cdot(I+\lam E_{1n})\cdot(I+\lam' E_{2,n+1})\colon G\to\bar\dbU
\]
is a continuous homomorphism.
Conversely, if $\bar\rho$, $\bar\rho'$ are homomorphisms, then for $g,h\in G$ we have
\[
\begin{split}
\lam(gh)&=(\bar\rho'(g)\bar\rho'(h))_{1n}-(\bar\rho(g)\bar\rho(h))_{1n}\\
&=\bigl(\bar\rho'_{1n}(g)+\sum_{k=2}^{n-1}\bar\rho_{1k}(g)\bar\rho_{kn}(h)+\bar\rho'_{1n}(h)\bigr)\\
&\quad -\bigl(\bar\rho_{1n}(g)+\sum_{k=2}^{n-1}\bar\rho_{1k}(g)\bar\rho_{kn}(h)+\bar\rho_{1n}(h)\bigr)\\
&=\lam(g)+\lam(h),
\end{split}
\]
and similarly for $\lam'$.
The last assertion follows from Proposition \ref{decomposition of alpha}.
\end{proof}
For $n\geq3$ the projections $\bar\rho_{1k}$, $3\leq k\leq n$, and $\bar\rho_{k,n+1}$, $2\leq k\leq n-1$, need not be homomorphisms, so Proposition \ref{decomposition of alpha} does not yield a cohomological decomposition of $\bar\rho^*\alp$ as a sum of cup products.
Our main result circumvents this difficulty when $n=3$ and $G=G_F$ is the absolute Galois group of a field containing a root of unity of order $m$.
A key argument in the proof will be based on the following analysis:
\begin{exam}
\label{restriction to V}
\rm
Suppose that $n=3$, and let $\dbV$, $\bar\dbV$ be the kernels of $\pr_{12}$, considered as subgroups of $\dbU=\dbU_3(R)$, $\bar\dbU=\bar\dbU_3(R)$, respectively.
The restrictions $\Res_{\bar\dbV}(\pr_{13}),\Res_{\bar\dbV}(\pr_{34})\colon\bar\dbV\to R^+$ are homomorphisms, and may be viewed as elements of $H^1(\bar\dbV,R^+)$.
Since $\Res_{\bar\dbV}(\pr_{12})=0$, the $2$-cocycle $c$ and $\pr_{13}\cup\pr_{34}$ have the same restriction to $\bar\dbV$.
Let $\iota\colon \dbV\to\dbU$, $\bar\iota\colon \bar\dbV\to\bar\dbU$ be the inclusion maps, so $\iota^*=\Res_\dbV$ and $\bar\iota^*=\Res_{\bar\dbV}$.
There is a commutative diagram of central extensions
\[
\xymatrix{
0\ar[r] &R^+\ar@{=}[d]\ar[r]&\dbV\ar[r]\ar@{_{(}->}[d]^{\iota}&\bar\dbV\ar[r]\ar@{_{(}->}[d]^{\bar\iota}&1\\
0\ar[r]&R^+\ar[r]&\dbU\ar[r]^{\pi}&\bar\dbU\ar[r]&1.
}
\]
We have $\dbV=\dbU\times_{\bar\dbU}\bar\dbV$.
By the previous comments, the upper extension in the diagram corresponds to $\Res_{\bar\dbV}(\alp)\in H^2(\bar\dbV,R^+)$, which by
Proposition \ref{decomposition of alpha}, is $\Res_{\bar\dbV}(\pr_{13})\cup\Res_{\bar\dbV}(\pr_{34})$.
Now let $G$ be a profinite group, let $\bar\rho\colon G\to\bar\dbU$ be a continuous homomorphism, and set $M_1=\bar\rho\inv(\bar\dbV)$.
We note that $\Res_{M_1}(\bar\rho_{13})$ and $\Res_{M_1}(\bar\rho_{34})$ are homomorphisms, considered as elements of $H^1(M_1,R^+)$.
By (\ref{res commutes with pullbacks}),
\begin{equation}
\label{Res M1 of pullback}
\begin{split}
\Res_{M_1}(\bar\rho^*\alp)
&=(\Res_{M_1}\bar\rho)^*(\Res_{\bar\dbV}(\alp))\\
&=(\Res_{M_1}\bar\rho)^*(\Res_{\bar\dbV}(\pr_{13}))\cup(\Res_{M_1}\bar\rho)^*(\Res_{\bar\dbV}(\pr_{34}))\\
&=\Res_{M_1}(\bar\rho_{13})\cup\Res_{M_1}(\bar\rho_{34}).
\end{split}
\end{equation}
\end{exam}
\section{Restriction of homomorphisms}
\label{section on restrictions of homomorphisms}
In this section we focus on the case $n=2$, so $\dbU=\dbU_2(R)$.
Let $G$ be a profinite group.
\begin{prop}
\label{psi a homomorphism and commutators}
Suppose that $G=M_1M_2$, where $M_1,M_2$ are closed subgroups of $G$, and $M_1$ is normal in $G$.
Let
\[
\chi_1\colon G\to R^+, \ \chi_2\colon G\to R^+, \ \omega_1\colon M_1\to R^+, \ \omega_2\colon M_2\to R^+
\]
be continuous homomorphisms such that $\omega_1,\omega_2$ are trivial on $M_1\cap M_2$.
\begin{enumerate}
\item[(a)]
There is a well-defined map $\psi\colon G\to \dbU$, given by
\[
\psi(g)=\begin{bmatrix}
1&\chi_1(g)&\omega_1(g_1)+\omega_2(g_2)\\
0&1&\chi_2(g)\\
0&0&1
\end{bmatrix}
\]
where $g_1\in M_1$, $g_2\in M_2$, and $g=g_1g_2$.
\item[(b)]
The map $\psi$ is a homomorphism if and only if for every $g,g'\in G$ with decompositions $g=g_1g_2$ and $g'=g'_1g'_2$, where $g_1,g'_1\in M_1$ and $g_2,g'_2\in M_2$, one has
\[
\omega_1(g_2g'_1g_2\inv)-\omega_1(g'_1)=\chi_1(g)\chi_2(g').
\]
\end{enumerate}
\end{prop}
\begin{proof}
(a) \quad
Since $\omega_1,\omega_2$ are trivial on $M_1\cap M_2$, the map $g\mapsto \omega_1(g_1)+\omega_2(g_2)$ is well-defined.
Hence $\psi$ is well-defined.
\medskip
(b) \quad
Take $g=g_1g_2$ and $g'=g'_1g'_2$ as in (b).
Then $gg'=g_1(g_2g'_1g_2\inv)g_2g'_2$, where $g_1(g_2g'_1g_2\inv)\in M_1$ and $g_2g'_2\in M_2$.
Hence
\[
\psi(gg')=\begin{bmatrix}
1&\chi_1(gg')&\omega_1(g_1(g_2g'_1g_2\inv))+\omega_2(g_2g'_2)\\
0&1&\chi_2(gg')\\
0&0&1
\end{bmatrix}.
\]
On the other hand
\[
\begin{split}
&\psi(g)\psi(g')=\begin{bmatrix}
1&\chi_1(g)&\omega_1(g_1)+\omega_2(g_2)\\
0&1&\chi_2(g)\\
0&0&1
\end{bmatrix}
\begin{bmatrix}
1&\chi_1(g')&\omega_1(g'_1)+\omega_2(g'_2)\\
0&1&\chi_2(g')\\
0&0&1
\end{bmatrix}\\
&=\begin{bmatrix}
1&\chi_1(g)+\chi_1(g')&\omega_1(g_1)+\omega_2(g_2)+\omega_1(g'_1)+\omega_2(g'_2)
+\chi_1(g)\chi_2(g')\\
0&1&\chi_2(g)+\chi_2(g')\\
0&0&1
\end{bmatrix}.
\end{split}
\]
The assertion now follows from the fact that $\chi_1,\chi_2,\omega_1,\omega_2$ are homomorphisms.
\end{proof}
We now apply this Proposition to the following special case:
Take $R=\dbZ/m$ for an integer $m\geq2$.
Let $\chi_1,\chi_2\in H^1(G,\dbZ/m)$ and $M_1=\Ker(\chi_1)$.
Assume further that $\sig_1\in G$ satisfies $\chi_1(\sig_1)=1$ and $\chi_2(\sig_1)=0$.
Note that $G=M_1\langle\sig_1\rangle={\cdot \hspace{-10pt}\bigcup}_{i=0}^{m-1}M_1\sig_1^i$.
The group $G$ acts on $H^1(M_1,\dbZ/m)$ by $(g\omega)(h)=\omega(ghg\inv)$ for $\omega\in H^1(M_1,\dbZ/m)$, $g\in G$, and $h\in M_1$.
This makes $H^1(M_1,\dbZ/m)$ a module over the group ring $(\dbZ/m)[G]$.
Let $\omega\colon M_1\to \dbZ/m$ be a continuous homomorphism which is trivial on $\sig_1^m$.
Given $z\in \dbZ/m$ we define a \textsl{map} $\psi_z\colon G\to\dbU$ by
\[
\psi_z(h\sig_1^i)=\begin{bmatrix}1&i&\omega(h)+iz\\
0&1&\chi_2(h)\\0&0&1\end{bmatrix},
\]
where $h\in M_1$ and $0\leq i\leq m-1$.
\begin{prop}
\label{rho mu a homomorphism}
The following conditions are equivalent:
\begin{enumerate}
\item[(a)]
There exists a continuous homomorphism $\rho\colon G\to\dbU$ such that $\rho_{12}=\chi_1$, $\rho_{23}=\chi_2$, and $\omega=\Res_{M_1}(\rho_{13})$.
\item[(b)]
The map $\psi_z\colon G\to\dbU$ is a homomorphism for some $z\in \dbZ/m$;
\item[(c)]
The map $\psi_z\colon G\to\dbU$ is a homomorphism for every $z\in \dbZ/m$;
\item[(d)]
$(\sig_1-1)\omega=\Res_{M_1}(\chi_2)$.
\end{enumerate}
Moreover, every homomorphism as in (a) is of the form $\psi_z$ for some $z\in \dbZ/m$.
\end{prop}
\begin{proof}
Consider the previous setup with
\[
M_2=\langle\sig_1\rangle,\ \omega_1=\omega, \ \omega_2(\sig_1^i)=iz.
\]
Note that $\omega_1,\omega_2$ are trivial on $M_1\cap\langle\sig_1\rangle=\langle\sig_1^m\rangle$.
\medskip
(a)$\Rightarrow$(b): \quad
For $\rho$ as in (a) we set $z=\rho_{13}(\sig_1)\in \dbZ/m$.
Given $g=h\sig_1^i\in G$ with $h\in M_1$ and $0\leq i\leq m-1$, we have
\[
\rho(g)=\rho(h)\rho(\sig_1^i)
=\begin{bmatrix}1&0&\omega(h)\\0&1&\chi_2(h)\\0&0&1\end{bmatrix}
\begin{bmatrix}1&i&iz\\0&1&0\\0&0&1\end{bmatrix}
=\begin{bmatrix}1&i&\omega(h)+iz\\0&1&\chi_2(h)\\0&0&1\end{bmatrix}.
\]
Thus $\rho=\psi_z$.
This proves (b), as well as the last assertion.
\medskip
(c)$\Rightarrow$(a): \quad
We take $\rho=\psi_0$.
\medskip
(b)$\Rightarrow$(d)$\Rightarrow$(c): \quad
Let $z\in\dbZ/m$.
By Proposition \ref{psi a homomorphism and commutators}, $\psi_z$ is a homomorphism if and only if for every $g=h\sig_1^i$ and $g'=h'\sig_1^{i'}$, with $h,h'\in M_1$ and $0\leq i,i'\leq m-1$, one has
\[
\omega(\sig_1^ih'\sig_1^{-i})-\omega(h')=i\chi_2(h').
\]
By induction on $i$, this is equivalent to $\omega(\sig_1h'\sig_1\inv)-\omega(h')=\chi_2(h')$ for every $h'\in M_1$, which is (d).
\end{proof}
\section{Kummer formations}
\label{section on Kummer formations}
We recall from \cite{EfratMatzri17} the following terminology and setup.
Let $G$ be a profinite group and let $A$ be a discrete $G$-module.
For a closed normal subgroup $M$ of $G$ let $A^M$ be the submodule of $G$ fixed by $M$.
There is an induced $G/M$-action on $A^M$.
For open normal subgroups $M\leq M'$ of $G$ let $N_{M'/M}\colon A^M\to A^{M'}$ be the trace homomorphism $a\mapsto \sum_\sig\sig a$, where $\sig$ ranges over a system of representatives for the cosets of $M'$ modulo $M$.
Let $I_{M'/M}$ be the subgroup of $A^M$ consisting of all elements of the form $\sig a-a$ with $\sig\in M'$ and $a\in A^M$.
Note that $N_{M'/M}(I_{M'/M})=\{0\}$.
One sets
\[
\hat H^{-1}(M'/M,A^M)=\Ker(N_{M'/M})/I_{M'/M}
\]
(compare \cite{NeukirchSchmidtWingberg}*{Ch.\ I, \S7}).
When $M'/M$ is cyclic with generator $\sig M$, the subgroup $I_{M'/M}$ consists of all elements $\sig a-a$, with $a\in A^M$ (since $\sig^k-1=(\sig-1)\sum_{i=0}^{k-1}\sig^i$).
Then $\hat H^{-1}(M'/M,A^M)\isom H^1(M'/M,A^M)$ \cite{NeukirchSchmidtWingberg}*{Prop.\ 1.7.1}.
We fix again an integer $m\geq2$.
\begin{defin}
\label{Kummer formations}
\rm
An \textsl{$m$-Kummer formation} $(G,A,\{\kappa_M\}_M)$ consists of a profinite group $G$, a discrete $G$-module $A$, and for each open normal subgroup $M$ of $G$, a $G$-equivariant epimorphism $\kappa_M\colon A^M\to H^1(M,\dbZ/m)$ such that for every such $M$ the following conditions hold:
\begin{enumerate}
\item[(KF1)]
For every $\chi\in H^1(M,\dbZ/m)$, there is an exact sequence
\[
\begin{split}
H^1(\Ker(\chi),\dbZ/m)\xrightarrow{\Cor_M}H^1(M,\dbZ/m)&\xrightarrow{\cup\chi}H^2(M,\dbZ/m) \\
&\xrightarrow{\Res_{\Ker(\chi)}} H^2(\Ker(\chi),\dbZ/m);
\end{split}
\]
\item[(KF2)]
$\Ker(\kappa_M)=mA^M$;
\item[(KF3)]
For every open normal subgroup $M'$ of $G$ such that $M\leq M'$, there are commutative squares
\[
\xymatrix{
A^M\ar[r]^{\kappa_M\qquad} & H^1(M,\dbZ/m)&&A^M\ar[r]^{\kappa_M\qquad}\ar[d]_{N_{M'/M}} & H^1(M,\dbZ/m)\ar[d]^{\Cor_{M'}} \\
A^{M'}\ar@{^{(}->}[u]\ar[r]^{\kappa_{M'}\qquad} & H^1(M',\dbZ/m)\ar[u]_{\Res_M},&&A^{M'}\ar[r]^{\kappa_{M'}\qquad} & H^1(M',\dbZ/m);
}
\]
\item[(KF4)]
For every open normal subgroup $M'$ of $G$ such that $M\leq M'$ and $M'/M$ is cyclic of order $m$ one has $\hat H^{-1}(M'/M,A^M)=0$.
\end{enumerate}
\end{defin}
\begin{exam}
\label{fields give Kummer formations}
\rm
Extending \cite{EfratMatzri17}*{Example 5.2} (which assumes that $m=p$ is prime), we observe that for a field $F$ of characteristic not dividing $m$ and containing the group $\mu_m$ of $m$th roots of unity, there is an $m$-Kummer formation $(G_F,F_\sep^\times, \{\kappa_M\}_M)$, where $G_F=\Gal(F_\sep/F)$ is the absolute Galois group of $F$, and for every finite Galois extension $E$ of $F$ with $M=G_E$ the map $\kappa_M\colon E^\times\to H^1(G_E,\mu_m)$ is the \textsl{Kummer homomorphism}.
Thus $\kappa_M\colon H^0(G_E,F_\sep^\times)\to H^1(G_E,\mu_m)$ is the connecting homomorphism arising from the short exact sequence of $G_E$-modules
$1\to\mu_m\to F_\sep^\times\xrightarrow{m}F_\sep^\times\to1$, and is therefore $G_F$-equivariant.
Condition (KF1) corresponds to the isomorphism $E^\times/N_{L/E}L^\times\xrightarrow{\sim}\Br(L/E)$ for a cyclic extension $L/E$ \cite{Draxl83}*{p.\ 73}.
Condition (KF2) follows from the exact sequence $E^\times\xrightarrow{m}E^\times\xrightarrow{\kappa_M}H^1(G_E,\mu_m)$.
Condition (KF3) follows from the commutativity of connecting homomorphisms with restrictions and corestrictions.
For (KF4) use the isomorphism $\hat H^{-1}(M'/M,A^M)\isom H^1(M'/M,A^M)$ for $M'/M$ cyclic (noted above) and Hilbert's Theorem 90.
\end{exam}
\begin{rems}
\label{norms commute with inclusions}
\rm
(1) \quad
Let $M_1,M_2$ be open subgroups of $G$.
Then the following square commutes:
\[
\xymatrix{
A^{M_1}\ar@{^{(}->}[r]\ar[d]_{N_{M_1M_2/M_1}}&A^{M_1\cap M_2}\ar[d]^{N_{M_2/M_1\cap M_2}}\\
A^{M_1M_2}\ar@{^{(}->}[r]& A^{M_2}.
}
\]
Indeed, a system of representatives for the cosets of $M_2/(M_1\cap M_2)$ is also a system of representatives for the cosets of $M_1M_2/M_1$.
\medskip
(2) \quad
For normal open subgroups $M\leq M'$ of $G$ and for $\sig\in G$, one has a commutative square
\[
\xymatrix{
A^M\ar[d]_{N_{M'/M}}\ar[r]^{\sig}& A^M\ar[d]^{N_{M'/M}}\\
A^{M'}\ar[r]^{\sig}&A^{M'}.
}
\]
Here we may replace $\sig$ by any element of the group ring $\dbZ[G]$.
\end{rems}
\section{Hilbert 90}
\label{section on Hilbert 90}
Let $(G,A,\{\kappa_M\}_M)$ be an $m$-Kummer formation.
Let $M<M'$ be open subgroups of $G$ with $M'/M$ cyclic of order $m$.
Choose $\sig\in M'$ such that $M'=\langle M,\sig\rangle$.
Let $N_{M'/M}\inv(A^G)$ be the inverse image of $A^G(\subseteq A^{M'})$ under the map $N_{M'/M}\colon A^M\to A^{M'}$.
In view of (KF4), there is a commutative diagram with exact rows
\[
\xymatrix{
A^M\ar[r]^{\sig-1\quad\ }&N_{M'/M}\inv(A^G)\ar[r]^{\quad\ N_{M'/M}}&A^G&\\
&A^G\ar[r]^{m}\ar@{^{(}->}[u]&mA^G\ar[r]\ar@{^{(}->}[u]&0.
}
\]
We deduce the following variant of Hilbert's Theorem 90 for Kummer formations:
\begin{prop}
\label{exact sequence}
In the above setup, there is an induced exact sequence
\[
A^M\xrightarrow{\sig-1} N_{M'/M}\inv(A^G)/A^G\xrightarrow{\bar N_{M'/M}} A^G/mA^G.
\]
\end{prop}
We say that open normal subgroups $M_1,M_3$ of $G$ are \textsl{$m$-independent} if $G/M_1\isom G/M_3\isom\dbZ/m$ and the natural map $G/(M_1\cap M_3)\to(G/M_1)\times (G/M_3)$ is an isomorphism
(the unnatural indexing is due to its application in the next section).
For $\chi_1,\chi_3\in H^1(G,\dbZ/m)$, we note that $M_i=\Ker(\chi_i)$, $i=1,3$, satisfy this condition if and only if $\chi_1,\chi_3$ are $\dbZ/m$-linearly independent.
The next proposition is the key technical step in our proof.
\begin{prop}
\label{key prop}
Let $M_1,M_3$ be open normal subgroups of $G$ which are $m$-independent, and set $M=M_1\cap M_3$.
Let $y_i\in A^{M_i}$, $i=1,3$, satisfy $\kappa_G(N_{G/M_1}(y_1))=\kappa_G(N_{G/M_3}(y_3))$.
Let $\sig_1\in M_3$ satisfy $G=\langle M_1,\sig_1\rangle$.
Then there exists $t\in A^M$ such that
\[
(\sig_1-1)N_{M_1/M}t\equiv N_{G/M_1}y_1\pmod{mA^{M_1}}.
\]
\end{prop}
\begin{proof}
We choose $\sig_3\in M_1$ such that $G=\langle M_3,\sig_3\rangle$, and denote $M'=\langle M,\sig_1\sig_3\rangle$.
Since $M_1,M_3$ are $m$-independent, $(M':M)=m$ and
\[
G=M_1M_3=M_1M'=M_3M', \quad M=M_1\cap M'=M_3\cap M' .
\]
By Remark \ref{norms commute with inclusions}(1), the following diagram commutes:
\[
\xymatrix{
A^{M_1}\ar@{^{(}->}[r]\ar[d]_{N_{G/M_1}}&A^M\ar[d]^{N_{M'/M}}
&A^{M_3}\ar@{_{(}->}[l]\ar[d]^{N_{G/M_3}}\\
A^G\ar@{^{(}->}[r]& A^{M'}&
A^G\ar@{_{(}->}[l].
}
\]
Hence
\begin{equation}
\label{norms and kappa}
N_{M'/M}(y_3-y_1)=N_{G/M_3}y_3-N_{G/M_1}y_1\in A^G\ (\subseteq A^{M_1}),
\end{equation}
so $y_3-y_1\in N_{M'/M}\inv(A^G)$, and
\[
\kappa_G(N_{M'/M}(y_3-y_1))=\kappa_G(N_{G/M_3}y_3)-\kappa_G(N_{G/M_1}y_1)=0.
\]
By (KF2),
\begin{equation}
\label{norm in mAG}
N_{M'/M}(y_3-y_1)\in mA^G.
\end{equation}
Therefore Proposition \ref{exact sequence} (with $\sig=\sig_1\sig_3$) yields $t\in A^M$ and $b\in A^G$ such that
\[
(\sig_1\sig_3-1)t=y_3-y_1+b.
\]
As $\sig_3\in M_1$, Remark \ref{norms commute with inclusions} gives commutative squares
\[
\xymatrix{
A^M\ar[r]^{\sig_1\sig_3-1}\ar[d]_{N_{M_1/M}}&A^M\ar[d]^{N_{M_1/M}}&& A^{M_3}\ar@{^{(}->}[r]\ar[d]_{N_{G/M_3}}& A^M\ar[d]^{N_{M_1/M}}\\
A^{M_1}\ar[r]^{\sig_1-1}&A^{M_1},&& A^G\ar@{^{(}->}[r]& A^{M_1}.
}
\]
Considering $y_1,y_3$ as elements of $A^M$, we obtain
\[
\begin{split}
&(\sig_1-1)N_{M_1/M}t
=N_{M_1/M}((\sig_1\sig_3-1)t)\\
=&N_{M_1/M}(y_3)-N_{M_1/M}(y_1)+N_{M_1/M}(b)\\
=&N_{G/M_3}(y_3)-my_1+mb\equiv N_{G/M_3}(y_3)
\equiv N_{G/M_1}(y_1)\pmod{mA^{M_1}},
\end{split}
\]
where in the last step we used (\ref{norms and kappa}) and (\ref{norm in mAG}).
\end{proof}
We now interpret the previous proposition in cohomological terms.
\begin{cor}
\label{cor to key prop}
Let $\bar\rho\colon G\to\bar\dbU_3(\dbZ/m)$ be a continuous homomorphism.
Assume that $M_i=\Ker(\bar\rho_{i,i+1})$, $i=1,3$, are $m$-independent, and set $M=M_1\cap M_3$.
Let $\sig_1\in M_3$ satisfy $G=\langle M_1,\sig_1\rangle$.
Then there exists $\omega\in H^1(M_1,\dbZ/m) $ such that
\[
\omega\in\Cor_{M_1}H^1(M,\dbZ/m) , \quad (\sig_1-1)\omega=\Res_{M_1}(\bar\rho_{23}).
\]
\end{cor}
\begin{proof}
By Lemma \ref{vanishing of cup products}, $\bar\rho_{12}\cup\bar\rho_{23}=0=\bar\rho_{23}\cup\bar\rho_{34}$.
For both $i=1,3$, (KF1) implies that $\bar\rho_{23}$ is in the image of $\Cor_G\colon H^1(M_i,\dbZ/m)\to H^1(G,\dbZ/m)$.
Hence there exists $y_i\in A^{M_i}$ with $\Cor_G(\kappa_{M_i}(y_i))=\bar\rho_{23}$.
By (KF3), $\kappa_G(N_{G/M_i}y_i)=\bar\rho_{23}$.
Proposition \ref{key prop} yields $t\in A^M$ such that
\[
(\sig_1-1)N_{M_1/M}t\equiv N_{G/M_1}y_1\pmod{mA^{M_1}}.
\]
Let $\omega=\Cor_{M_1}(\kappa_M(t))$.
By (KF3) and Remark \ref{norms commute with inclusions}(2),
\[
\begin{split}
(\sig_1-1)\omega&=(\sig_1-1)\kappa_{M_1}(N_{M_1/M}(t))
=\kappa_{M_1}((\sig_1-1)N_{M_1/M}(t)) \\
&=\kappa_{M_1}(N_{G/M_1}(y_1))
=\Res_{M_1}\kappa_G(N_{G/M_1}(y_1))=\Res_{M_1}(\bar\rho_{23}).
\end{split}
\]
\end{proof}
\section{The main result}
\label{section on the main result}
In this section we take $n=3$, and set as before $R=\dbZ/m$ for $m\geq2$.
Then $\alp$ is the cohomology element corresponding to the sequence
\[
0\to \dbZ/m\to \dbU_3(\dbZ/m)\to\bar\dbU_3(\dbZ/m)\to1.
\]
Let $G$ be a profinite group, and let $\bar\rho\colon G\to\bar\dbU_3(\dbZ/m)$ be a continuous homomorphism.
For $i=1,3$ we set $M_i=\Ker(\bar\rho_{i,i+1})$, and $M=M_1\cap M_3$.
We recall that $\omega=\Res_{M_1}(\bar\rho_{13})\colon M_1\to \dbZ/m$ is a homomorphism, and view it as an element of $H^1(M_1,\dbZ/m)$.
\begin{prop}
\label{equivalence Cor and modified rho}
Assume that (KF1) holds for every open normal subgroup $M$ of $G$.
The following conditions are equivalent:
\begin{enumerate}
\item[(a)]
$\Res_{M_1}(\bar\rho^*\alp)=0$;
\item[(b)]
There exists a continuous homomorphism $\lam\colon G\to \dbZ/m$ such that $(\bar\rho(I-\lam E_{24}))^*\alp=0$;
\item[(c)]
$\omega\in\Cor_{M_1}(H^1(M,\dbZ/m))$.
\end{enumerate}
\end{prop}
\begin{proof}
(a)$\Leftrightarrow$(b): \quad
For every $\lam\in H^1(G,\dbZ/m)$, Lemma \ref{modified homomorphism} shows that
\[
\bar\rho'=\bar\rho(I-\lam E_{24})=\begin{bmatrix}1&\bar\rho_{12}&\bar\rho_{13}&\\0&1&\bar\rho_{23}&\bar\rho_{24}-\lam\\0&0&1&\bar\rho_{34}\\0&0&0&1\end{bmatrix}
\colon G\to\bar\dbU_3(\dbZ/m)
\]
is a continuous homomorphism, and $(\bar\rho')^*\alp=\bar\rho^*\alp-\bar\rho_{12}\cup\lam$.
Now by (KF1), $\Res_{M_1}(\bar\rho^*\alp)=0$ if and only if $\bar\rho^*\alp=\bar\rho_{12}\cup\lam$ for some $\lam\in H^1(G,\dbZ/m)$.
\medskip
(a)$\Leftrightarrow$(c): \quad
By (\ref{Res M1 of pullback}),
$\Res_{M_1}(\bar\rho^*\alp)=\omega\cup\Res_{M_1}(\bar\rho_{34})$.
We now apply (KF1) for the group $M_1$ and the homomorphism $\Res_{M_1}(\bar\rho_{34})$.
\end{proof}
\medskip
We now prove the Main Theorem in the more general context of $m$-Kummer formations.
The statement in terms of Massey products given in the Introduction follows from the statement below, in view of the interpretation of Massey product elements as pullbacks $\bar\rho^*\alp$, given in \S\ref{section on pullbacks}, and by Example \ref{fields give Kummer formations}.
\begin{thm}
Let $G$ be the underlying group of an $m$-Kummer formation.
Let $\chi_1,\chi_2,\chi_3\in H^1(G,\dbZ/m)$, where $\chi_1,\chi_3$ are $\dbZ/m$-linearly independent.
If there exists a continuous homomorphism $\bar\rho\colon G\to\bar\dbU_3(\dbZ/m)$ with $\chi_i=\bar\rho_{i,i+1}$, $i=1,2,3$, then there exists such a homomorphism for which $\bar\rho^*\alp=0$.
\end{thm}
\begin{proof}
Let $\bar\rho\colon G\to\bar\dbU_3(\dbZ/m)$ be a continuous homomorphism.
Let $M_1,M_3$ be as above, and choose $\sig_1\in G$ such that $G=M_1\langle\sig_1\rangle=\bigcup_{i=0}^{m-1}M_1\sig_1^i$.
Corollary \ref{cor to key prop} yields $\omega\in H^1(M_1,\dbZ/m)$ such that
\[
\omega\in\Cor_{M_1}(H^1(M,\dbZ/m)), \qquad
(\sig_1-1)\omega=\Res_{M_1}(\chi_2).
\]
Proposition \ref{rho mu a homomorphism} yields a continuous map $\bar\rho'_{13}\colon G\to\dbZ/m$ such that
\[
\begin{bmatrix}
1&\bar\rho_{12}&\bar\rho'_{13}\\
0&1&\bar\rho_{23}\\
0&0&1
\end{bmatrix}
\colon G\to\dbU_2(\dbZ/m)
\]
is a homomorphism, and $\Res_{M_1}(\bar\rho'_{13})=\omega$.
It extends to a continuous homomorphism
\[
\bar\rho'=\begin{bmatrix}
1&\bar\rho_{12}&\bar\rho'_{13}&\\
0&1&\bar\rho_{23}&\bar\rho_{24}\\
0&0&1&\bar\rho_{34}\\
0&0&0&1
\end{bmatrix}
\colon G\to\bar\dbU_3(\dbZ/m).
\]
By Lemma \ref{modified homomorphism}, $\lam_{13}=\bar\rho_{13}-\bar\rho'_{13}\colon G\to \dbZ/m$ is a homomorphism.
Note that $\bar\rho'=\bar\rho(I-\lam_{13}E_{13})$.
Proposition \ref{equivalence Cor and modified rho} yields $\lam_{24}\in H^1(G,\dbZ/m)$ such that $(\bar\rho'(I-\lam_{24}E_{24}))^*\alp=0$.
Therefore
\[
\bar\rho(I-\lam_{13}E_{13})(I-\lam_{24}E_{24})
=\begin{bmatrix}
1&\bar\rho_{12}&\bar\rho_{13}-\lam_{13}&\\
0&1&\bar\rho_{23}&\bar\rho_{24}-\lam_{24}\\
0&0&1&\bar\rho_{34}\\
0&0&0&1
\end{bmatrix}
\colon G\to\bar\dbU_3(\dbZ/m)
\]
is a homomorphism as required.
\end{proof}
\end{document}
\section{Introduction}
\label{sec:intro}
The $k$-SUM problem is a parameterized version of the classical subset sum problem.
Given a collection of $m$ integers $a_1,\ldots,a_m$, the $k$-SUM problem asks if there is some subset of cardinality $k$ that sums to zero.\footnote{This is the homogeneous version of $k$-SUM. One could also define the inhomogeneous version where the instance consists also of a target integer $t$, and the goal is to produce a subset of $k$ elements that sums to $t$. In the worst-case world, the two versions are equivalent.}
This problem (especially for $k=3$, but more generally for arbitrary constant $k$)
has been influential in computational geometry,
where reductions from $\ksum{k}$ have been used to show the conditional
hardness of a large class of problems~\cite{GOClassProblems95,GOClassProblems12}.
More generally it has been used in computational complexity, where it has formed
the basis for several fine-grained hardness results~\cite{P10,AV14,VW13,KPP16}. We refer the reader to the extensive
survey of Vassilevska-Williams~\cite{WilFinegrainedQuestions18} for an exposition of this line of work. The $k$-SUM problem has also been extensively studied in the cryptanalysis community (see, e.g., \cite{Wagner02,BKW03,BCJ11}).
We know two very different algorithms for $k$-SUM: a meet-in-the-middle algorithm that achieves run-time $O(m^{\lceil k/2\rceil})$~\cite{HS74}, and dynamic programming or FFT-based algorithms that achieve run-time $\widetilde{O}(um)$~\cite{Bellman}, where $u$ is the largest absolute value of the integers $a_i$. (A sequence of recent works~\cite{KX17,Bringmann17,ABJTW19,JW19} improves the latter to $\widetilde{O}(u+m)$.) Note that the latter algorithms outperform the former when $u \ll m^{\lceil k/2\rceil}$, in what is sometimes called the {\em dense regime} of parameters, a point that we will come back to shortly.
In terms of hardness results for $k$-SUM, the work of P\v{a}tra\c{s}cu and Williams~\cite{PW10} shows that an algorithm that solves the problem in time $m^{o(k)}$ for all $m$ will give us better algorithms for
SAT, in particular refuting the exponential time hypothesis (ETH). The recent work of Abboud, Bringmann, Hermelin and Shabtay~\cite{ABHS19} shows that a $u^{1-\epsilon}$-time algorithm (for any constant $\epsilon>0$) would refute the strong exponential-time hypothesis (SETH). So, we know that the two algorithms described above are essentially optimal, at least in the worst case.
\paragraph{Average-case Hardness.}
The focus of this work is the natural average-case version of $k$-SUM where the problem instance $a_1,\ldots,a_m$ is chosen independently and uniformly at random from an interval $[-u,u]$. We call this the {\em average-case $\ksum{k}$ problem}. In this setting, \emph{deciding} whether a $\ksum{k}$ solution exists is in many cases trivial. In particular, if ${m \choose k} \ll u$ then a union bound argument shows that the probability of a solution existing approaches $0$. We refer to this as the \emph{sparse} regime of the problem. In contrast, if ${m \choose k}$ is sufficiently larger than $u$, then a hashing argument guarantees the existence of many solutions, with high probability over the instance. As already mentioned above, we refer to this as the \emph{dense} regime.
Notwithstanding this triviality, we notice that in the dense regime one could still consider the {\em search} problem of \emph{finding} a $\ksum{k}$ solution. The search problem seems to retain its hardness even in the dense setting and is the focus of our work. Since we consider the search version of the problem, we also refer to the dense regime as the \emph{total} regime, as the associated search problem has a solution with high probability.
The average-case total problem is {\em not} quite as hard as the worst-case version (at least assuming SETH), since (slight variants of) Wagner's generalized birthday algorithm~\cite{Wagner02} and the Blum-Kalai-Wasserman algorithm~\cite{BKW03} show how to solve this problem in time $u^{O(1/\log k)}$. This contrasts with the $u^{1-\varepsilon}$ lower bound of~\cite{ABHS19} in the worst case. (The BKW/Wagner algorithm was originally stated in a slightly different setting, so we restate it in Section~\ref{sec:BKW}.) This leaves the question of \emph{how much easier} the average-case is compared to the worst-case. Given that the lower-bounds from the worst-case setting are not a barrier here, it is a-priori unclear what is the best running time in this setting. Can we improve on \cite{BKW03,Wagner02}?
\subsection{Our Results}
In this work we characterize the hardness of average-case $\ksum{k}$ in the total regime by presenting a (conditional) lower bound that matches the $u^{O(1/\log k)}$ upper bound described above, up to the hidden constant in the exponent.
In more detail, our main result shows that average-case $\ksum{k}$ is indeed hard to solve, under the assumption that \emph{worst-case} lattice problems are hard to approximate. We thus introduce a new family of techniques into the study of the hardness of the $\ksum{k}$ problem. Concretely, this lower-bound shows that a $u^{o(1/\log k)}$-time algorithm for average-case $k$-SUM (in the dense regime) implies a $2^{o(n)}$-time $n^{1+\epsilon}$-approximation algorithm for the shortest independent vectors problem (SIVP) over an $n$-dimensional lattice, a lattice problem for which the best known algorithms run in time $2^{\Omega(n)}$~\cite{ALNSSlideReduction20,ACNoteConcrete19}. Improving this state of affairs, in particular finding a $2^{o(n)}$-time algorithm for SIVP, would have major consequences in lattice-based cryptography both in theory and in practice~\cite{NIST,APSConcreteHardness15, ADTSPostquantumKey16}.
We also note in the appendix that some of the connections between $\ksum{k}$ and geometric problems from \cite{GOClassProblems95,GOClassProblems12} carry over to the dense setting as well. This shows an interesting (and not previously known, as far as we could find) connection between approximate short vectors in lattices, and computational geometry.
\def\mathbf{x}{\mathbf{x}}
\subsection{Our Techniques}
The starting point of our reduction is the well-known worst-case to average-case reductions in the lattice world, pioneered by Ajtai~\cite{AjtGeneratingHard96,MRWorstcaseAveragecase07,GPVTrapdoorsHard08,GINX16}. These reductions show that the approximate shortest independent vectors (SIVP) problem, a standard problem in the lattice world, is at least as hard in the worst-case as a certain problem called short integer solutions (SIS) on the average. The definition of lattices and the approximate shortest vector problem is not crucial for the current discussion, however we note that the best algorithms on $n$-dimensional lattices that compute any $\mathrm{poly}(n)$-approximation to SIVP run in time $2^{\Omega(n)}$.
(We refer the curious reader to, e.g., \cite{MRWorstcaseAveragecase07,PeiDecadeLattice16, ALNSSlideReduction20}, Section~\ref{sec:lattices}, and the references therein for more background on lattices and lattice problems.)
In the (one-dimensional) {\em average-case} SIS problem with parameters $m,Q$ and $\beta$, one is given random integers $a_1,\ldots,a_m \in \ensuremath{\mathbb{Z}}_Q$ and the goal is to find a {\em non-zero} integer linear combination $\mathbf{x} = (x_1,\ldots,x_m) \in \ensuremath{\mathbb{Z}}^m$ such that $\sum_{i\in [m]} a_ix_i = 0 \pmod{Q}$ and $\mathbf{x}$ is short, namely $||\mathbf{x}||_1 \leq \beta$. Thus, this is exactly the modular subset sum problem (i.e.\ subset sum over the group $\ensuremath{\mathbb{Z}}_Q$), except with weights larger than $1$. The parameters of the problem live in the dense/total regime where such solutions are guaranteed to exist with high probability.
The worst-case to average-case reductions state that an average-case SIS solver for a sufficiently large $Q$, namely $Q = (\beta n)^{\Omega(n)}$, gives us an $\widetilde{O}(\sqrt{n\log m}\cdot \beta)$-approximate algorithm for SIVP. (We refer the reader to Theorem~\ref{thm:GINX} for a more precise statement.)
Our main technical contribution is an average-case to average-case reduction from the SIS problem to the $k$-SUM problem. We show this by exhibiting a reduction from SIS to modular $k$-SUM (i.e.\ $\ksum{k}$ over the group $\ensuremath{\mathbb{Z}}_Q$), and one from modular $k$-SUM to $k$-SUM. The latter is easy to see: indeed, if you have a $k$-subset that sums to $0$, it also sums to $0 \pmod{Q}$ for any $Q$.
Henceforth in this discussion, when we say $k$-SUM, we will mean modular $k$-SUM.
To reduce from SIS with parameters $m,Q,\beta$ to modular $k$-SUM on $m$ numbers over $\ensuremath{\mathbb{Z}}_Q$, we start with a simple, seemingly {\em trivial}, idea. SIS and $k$-SUM are so similar that perhaps one could simply run the $k$-SUM algorithm on the SIS instance. Unfortunately, this fails. For a $k$-SUM solution to exist, $m$ has to be at least roughly $Q^{1/k} = n^{\Omega(n/k)}$. But, this could only possibly give us an approximate-SIVP algorithm that runs in time $n^{\Omega(n/k)}$ (where we are most interested in constant $k$), since the reduction from SIVP has to at least write down the $m$ samples. This is a meaningless outcome since,
as we discussed before, there are algorithms for approximate SIVP that run in time $2^{O(n)}$.
Fortunately, ideas from the BKW algorithm~\cite{BKW03} for subset sum (and the closely related algorithm from \cite{Wagner02} for $k$-SUM) come to our rescue. We will start with SIS modulo $Q = q^L$ for some $q$ and $L$ that we will choose later. (\cite{GINX16} showed that worst-case to average-case reductions work for any sufficiently large $Q$, including $Q = q^L$.)
The BKW algorithm iteratively produces subsets that sum to $0$ modulo $q^i$ for $i=1,\ldots,L$, finally producing SIS solutions modulo $Q$. To begin with, observe that for a $k$-subset-sum to exist modulo $q$, it suffices that $m \approx q^{1/k} \ll Q^{1/k}$, potentially getting us out of the conundrum from before. In particular, we will set $q \approx 2^{n\log k}$, $L \approx \log n/\log k$, therefore $Q = q^L \approx n^n$ as needed. We will also set $m \approx q^{\epsilon/\log k} \approx 2^{\epsilon n}$ for a large enough $\epsilon$ so that solutions exist (since $m^k \gg q$). Furthermore, a $k$-SUM algorithm mod $q$ that performs better than BKW/Wagner, that is, runs in time $q^{o(1/\log k)} = 2^{o(n)}$, {\em is} potentially useful to us.
With this ray of optimism, let us assume that we can run the $k$-SUM algorithm many times to get several, $m$ many, subsets $S_j$ that sum to $0$ modulo $q$. (We will return to, and remove, this unrealistic assumption soon.)
That is,
$$ b_j := \sum_{i \in S_j} a_i = 0 \pmod{q}$$
The BKW/Wagner approach would then be to use the $(b_1,\ldots,b_m)$ to generate $(c_1,\ldots,c_m)$ that are $0 \pmod{q^2}$, and so on. Note that $c_i$ are a linear combination of $a_1,\ldots,a_m$ with weight $k^2$. At the end of the iterations, we will obtain at least one linear combination of $(a_1,\ldots,a_m)$ of weight $\beta = k^L$ that sums to $0$ modulo $q^L = Q$, solving SIS. (We also need to make sure that this is a non-zero linear combination, which follows since the coefficients of all intermediate linear combinations are positive.)
This would finish the reduction, except that we need to remove our unrealistic assumption that we can use the $k$-SUM oracle to get many $k$-subsets of $(a_1,\ldots,a_m)$ that sum to $0$. For one, the assumption is unrealistic because if we feed the $k$-SUM oracle with the same $(a_1,\ldots,a_m) \pmod{q}$ twice, we will likely get the same $k$-SUM solution.
On the other hand, using a fresh random instance for every invocation of the $k$-SUM oracle will require $m$ to be too large (essentially returning to the trivial idea above). A natural idea is to observe that each $k$-SUM solution touches a very small part of the instance. Therefore, one could hope to first receive a $k$-SUM $a_{i_1}+\ldots+a_{i_k}$ from the oracle, and in the next iteration, use as input $\{a_1,\ldots, a_m\} \setminus \{a_{i_1},\ldots,a_{i_k}\}$, which is nearly as large as the original set. Unfortunately, continuing like this cannot work. The distributions of the successive instances that we feed to the oracle will no longer be uniform, and even worse, the oracle itself can choose which elements to remove from our set. A suitable malicious oracle can therefore prevent us from obtaining many $k$-SUMs in this way, even if the oracle has high success probability on uniform input.
Instead, our key idea is rather simple, namely to resort to randomization.
Given an instance $(a_1,\ldots,a_m) \in \ensuremath{\mathbb{Z}}_q^m$, we compute many random subset sums to generate $(a_1',\ldots,a_{m}') \in \ensuremath{\mathbb{Z}}_q^{m}$. That is, we choose $k$-subsets $T'_i \subseteq [m]$ and let
$$ a_i' = \sum_{j \in T'_i} a_j \pmod{q}$$
Since $q \gg m^{1/k}$, the leftover hash lemma~\cite{ILL89} tells us that the $a_i'$ are (statistically close to) uniformly random mod $q$. Furthermore, a $k$-subset sum of $(a_1',\ldots,a_m')$ will give us a $k^2$-subset sum of $(a_1,\ldots,a_m)$ that sums to $0 \pmod{q}$. To obtain a new subset sum of $(a_1,\ldots,a_m)$, simply run this process again choosing fresh subsets $T_i''$ to generate $(a_1'',\ldots,a_m'')$; and so on. Eventually, this will give us a $\beta = k^{2L}$ weight solution to SIS, which is a quadratic factor worse than before, but good enough for us. (We are glossing over an important technical detail here, which is how we ensure that the resulting subset sums yield uniformly random independent elements in $q\ensuremath{\mathbb{Z}}/q^2\ensuremath{\mathbb{Z}}$.)
To finish the analysis of the reduction, observe that it calls the $k$-SUM oracle $\approx mL$ times. Assuming the oracle runs in time $q^{o(1/\log k)}$, this gives us a $2^{o(n)}$-time algorithm for approximate SIVP. The approximation factor is $\widetilde{O}(\sqrt{n\log m}\cdot \beta) \approx n^{3}$. (In the sequel, we achieve $n^{1+o(1)}$ by a careful choice of parameters.)
Interestingly, our reduction re-imagines the BKW/Wagner {\em algorithm as a reduction} from the SIS problem to $k$-SUM, where the algorithm itself is achieved (in retrospect) by plugging in the trivial algorithm for $\ksum{2}$. Of course, eventually the algorithm ends up being much simpler than the reduction (in particular, there is no need for re-randomization) since we don't need to account for ``malicious'' $\ksum{k}$ solvers.
\subsection{Open Problems and Future Directions}
Our work introduces the powerful toolkit of lattice problems into the field of average-case fine-grained complexity, and raises several natural directions for further research.
First is the question of whether a result analogous to what we show holds in the sparse/planted regime as well. A possible theorem here would rule out an $m^{o(k)}$-time algorithm for $k$-SUM, assuming the hardness of lattice problems. To the best of our knowledge, in the sparse/planted regime it is not known whether the average-case problem is easier than the worst-case as in the dense regime.
Second is the question of whether we can obtain average-case hardness of $k$-SUM for concrete small constants~$k$, perhaps even $k=3$. Our hardness result is asymptotic in $k$.
Third is the question of whether we can show the average-case hardness of natural distributions over {\em combinatorial and computational-geometric} problems, given their connection to $k$-SUM. In this vein, we show a simple reduction to (perhaps not the most natural distribution on) the $(Q,m,d)$-plane problem in Appendix~\ref{apx:geometric}, but we believe much more can be said. More generally, now that we have shown average-case hardness of $k$-SUM, it is natural to try to reduce average-case $k$-SUM to other natural average-case problems.
\subsection{Other Related Works}
There are now quite a few works that study average-case fine-grained hardness of problems in $P$. We mention a few. First, Ball, Rosen, Sabin, and Vasudevan~\cite{BallRSV17} showed a reduction from SAT to an (average-case) variant of the orthogonal vectors problem. They demonstrated that sub-quadratic algorithms for their problem will refute SETH.
There is also a sequence of works on the average-case hardness of counting $k$-cliques. The work of Goldreich and Rothblum~\cite{GoldreichR18,GR20} shows worst-case to average-case {\em self}-reductions for the problem of counting $k$-cliques (and other problems in $P$). Boix-Adser{\`a}, Brennan, and Bresler proved the same result for $G_{n,p}$~\cite{boix-adseraAverageCaseComplexityCounting2019}, and Hirahara and Shimizu recently showed that it is even hard to count the number of $k$-cliques with even a small probability of success~\cite{hiraharaNearlyOptimalAverageCase2020}. In contrast, our reductions go from the worst-case of one problem (SIVP) to the average-case of another ($k$-SUM). We find it a fascinating problem to show a worst-case to average-case {\em self}-reduction for $k$-SUM.
Dalirrooyfard, Lincoln, and Vassilevska Williams recently proved fine-grained average-case hardness for many different~\cite{DLW20} problems in $P$ under various complexity-theoretic assumptions. In particular, they show fine-grained average-case hardness of counting the number of solutions of a ``factored'' variant of $k$-SUM assuming SETH. In contrast, we show fine-grained average-case hardness of the standard $k$-SUM problem under a non-standard assumption.
For the lattice expert, we remark that if one unwraps our reduction from SIVP to SIS and then to $k$-SUM, we obtain a structure that is superficially similar to \cite{MPHardnessSIS13}. However, in their setting, they do not need to reuse samples and therefore do not need the re-randomization technique, which is the key new idea in this work.
\subsection{Organization of the Paper}
Section~\ref{sec:variants} describes the modular variant of $k$-SUM as well as the standard $k$-SUM (over the integers), shows their totality on average, and reductions between them. For completeness, we describe the BKW/Wagner algorithm in Section~\ref{sec:BKW}. We remark that while the standard descriptions of the algorithm refer to finite groups, we need one additional trick (namely, Lemma~\ref{lem:Qimpliesu}) to obtain the algorithm over the integers. Finally, our main result, the worst-case to average-case reduction is described in Section~\ref{sec:wcac}. The connection to computational geometry is provided in Appendix~\ref{apx:geometric}.
\section{Preliminaries}
We write $\log$ for the logarithm base two and $\ln$ for the natural logarithm. We write $\binom{m}{k} := \frac{m!}{(m-k)!k!}$ for the binomial coefficient.
\subsection{Probability}
We make little to no distinction between random variables and their associated distributions.
For two random variables $X, Y$ over some set $S$, we write $\Delta(X,Y) := \sum_{z \in S} |\Pr[X = z] - \Pr[Y=z]|$ for the statistical distance between $X$ and $Y$. For a finite set $S$, we write $U_S$ for the uniform distribution over $S$.
Recall that a set of functions $\mathcal{H} \subseteq \{h : X \to Y\}$ is a \emph{universal family of hash functions} from $X$ to $Y$ if for any distinct $x,x' \in X$
\[
\Pr_{h \sim \mathcal{H}}[h(x) = h(x')] \leq 1/|Y|
\; .
\]
\begin{lemma}[Leftover hash lemma]
If $\mathcal{H}$ is a universal family of hash functions from $X$ to $Y$, then
\[
\Pr[\Delta(h(U_X), U_Y) \geq \beta] \leq \beta
\; ,
\]
where the probability is over a random choice of $h \sim \mathcal{H}$ and $\beta := (|Y|/|X|)^{1/4}$.
\end{lemma}
\begin{lemma}
For any positive integers $Q,m$, let $\mathcal{H}$ be the family of hash functions from $\{0,1\}^m$ to $\ensuremath{\mathbb{Z}}_Q$ given by $h_{\vec{a}}(\vec{x}) = \langle \vec{a},\vec{x} \rangle \bmod Q$ for all $\vec{a} \in \ensuremath{\mathbb{Z}}_Q^{m}$. Then, $\mathcal{H}$ is a universal family of hash functions.
\end{lemma}
\begin{proof}
Let $\vec{x}, \vec{y} \in \{0,1\}^m$ be distinct vectors, and suppose without loss of generality that $x_1 =1$ and $y_1 = 0$. We write $\vec{a}' \in \ensuremath{\mathbb{Z}}_Q^{m-1}$ for the vector obtained by removing the first coordinate from $\vec{a}$ and ${a}_1$ for the first coordinate itself. Similarly, we write $\vec{x}', \vec{y}' \in \{0,1\}^{m - 1}$ for the vectors $\vec{x}, \vec{y}$ with their first coordinate removed. Then,
\[
\Pr[\langle \vec{a},\vec{x}\rangle = \langle\vec{a},\vec{y} \rangle\bmod Q] = \Pr[\langle\vec{a}', \vec{x}'\rangle + {a}_1 = \langle\vec{a}', \vec{y}'\rangle \bmod Q] = \Pr[{a}_1 = \langle\vec{a}',\vec{y}' - \vec{x}'\rangle \bmod Q] = 1/Q
\; ,
\]
where the probability is over the random choice of $\vec{a} \in \ensuremath{\mathbb{Z}}_Q^m$. The last equality follows from the fact that ${a}_1 \in \ensuremath{\mathbb{Z}}_Q$ is uniformly random and independent of $\vec{a}'$.
\end{proof}
\begin{corollary}
\label{cor:LHL_A_1}
For any positive integers $Q,m$ and any subset $X \subseteq \{0,1\}^m$,
\[
\Pr_{\vec{a} \sim \ensuremath{\mathbb{Z}}_Q^{m}}[\Delta(\langle \vec{a}, U_X \rangle \bmod Q, U_{\ensuremath{\mathbb{Z}}_Q}) \geq \beta] \leq \beta
\; ,
\]
where $\beta := (Q/|X|)^{1/4}$.
\end{corollary}
\begin{corollary}
\label{cor:LHL_A}
Let $\vec{a} := ({a}_1,\ldots, {a}_M) \in \ensuremath{\mathbb{Z}}_Q^{M}$ be sampled uniformly at random, and let $S_1,\ldots, S_{M'} \subset [M]$ be sampled independently and uniformly at random with $|S_i| = t$. Let ${c}_i := \sum_{j \in S_i} {a}_j \bmod Q$. Then, $(\vec{a}, \vec{c}) := ({a}_1,\ldots, {a}_M, {c}_1,\ldots, {c}_{M'})$ is within statistical distance $\delta$ of a uniformly random element in $\ensuremath{\mathbb{Z}}_Q^{M+M'}$, where
\[
\delta := (M'+1)\cdot Q^{1/4} \cdot \binom{M}{t}^{-1/4} \leq (M' + 1) \cdot \bigg(\frac{Qt^t}{M^t}\bigg)^{1/4}
\; .
\]
\end{corollary}
\begin{proof}
%
Let $X_t := \{\vec{x} \in \{0,1\}^M \ : \ \|\vec{x}\|_1 = t\}$, and notice that $|X_t| = \binom{M}{t}$.
Call $\vec{a}$ good if $\Delta(\langle\vec{a},U_{X_t}\rangle \bmod Q, U_{\ensuremath{\mathbb{Z}}_Q}) \leq \beta := Q^{1/4} /|X_t|^{1/4}$.
From Corollary~\ref{cor:LHL_A_1}, we see that $\vec{a}$ is good except with probability at most $\beta$.
Finally, notice that the ${c}_i$ are distributed exactly as independent samples from $\langle\vec{a}, U_{X_t}\rangle$. Therefore, if $\vec{a}$ is good, each of the ${c}_i$ is within statistical distance $\beta$ of an independent uniform sample. The result then follows from the union bound.
\end{proof}
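Our reduction in Section~\ref{sec:wcac} uses Corollary~\ref{cor:LHL_A} operationally: given a list of group elements, it repeatedly forms sums of random $t$-subsets to obtain fresh, nearly uniform elements. The following Python snippet is a minimal sketch of this re-randomization step; the function name and parameters are ours and serve only as illustration.
\begin{verbatim}
import random

def rerandomize(a, Q, t, num_outputs):
    # a: list of integers modulo Q (the elements a_1, ..., a_M).
    # Returns num_outputs pairs (c_i, S_i), where S_i is a uniformly random
    # t-subset of the indices and c_i = sum_{j in S_i} a_j mod Q.
    # By Corollary LHL_A, the c_i are close to uniform when binom(M, t) >> Q.
    M = len(a)
    outputs = []
    for _ in range(num_outputs):
        S = sorted(random.sample(range(M), t))
        c = sum(a[j] for j in S) % Q
        outputs.append((c, S))
    return outputs
\end{verbatim}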
\subsection{Hitting probabilities}
\begin{definition}
For $\vec{a} := (a_1,\ldots, a_M) \in \ensuremath{\mathbb{Z}}_Q^{M}$, $\vec{c} := (c_1,\ldots, c_{M'}) \in \ensuremath{\mathbb{Z}}_Q^{M'}$, $I \subset [M]$, $J \subset [M']$, and a positive integer $t$, the $t$-\emph{hitting probability} of $\vec{a}$, $\vec{c}$, $I$, and $J$ is defined as follows. For each $j \in J$, sample a uniformly random $S_j \in \binom{[M]}{t}$ with $\sum_{i \in S_j} a_i = c_j$. (If no such $S_j$ exists, then we define the hitting probability to be $1$.) The hitting probability is then
\[
p_{\vec{a}, \vec{c}, I, J,t} := \Pr[\exists\; j\in J \text{ such that } S_j \cap I \neq \emptyset \text{ or } \exists\; \text{distinct } j,j'\in J \text{ such that } S_j \cap S_{j'} \neq \emptyset]
\; .
\]
\end{definition}
\begin{lemma}
\label{lem:hitting}
For any positive integers $Q,M,t$ and $0 < \varepsilon < 1$,
\[
\Pr_{\vec{a} \sim \ensuremath{\mathbb{Z}}_Q^{M}, c \sim \ensuremath{\mathbb{Z}}_Q}\Big[p_{\vec{a}, c,t} \geq \frac{1+\varepsilon}{1-\varepsilon} \cdot \frac{t}{M}\Big] \leq \frac{4Q^{1/4}}{\varepsilon \cdot \binom{M-1}{t-1}^{1/4}}
\; .
\]
where
\[
p_{\vec{a}, c, t} := p_{\vec{a}, c, \{1\}, \{1\},t}
\; .
\]
\end{lemma}
\begin{proof}
We have
\[
p_{\vec{a},c,t} = \Pr_{\vec{x} \sim X_t}[x_1 = 1\ |\ \langle \vec{a}, \vec{x} \rangle = c \bmod Q]
\; ,
\]
where $X_t := \{\vec{x} \in \{0,1\}^M \ : \ \|\vec{x}\|_1 = t\}$. Therefore,
\begin{align*}
p_{\vec{a},c,t}
&= \Pr_{\vec{x} \sim X_t}[x_1 = 1] \cdot \Pr_{\vec{x} \sim X_t}[\langle \vec{a}, \vec{x} \rangle = c \bmod Q\ | \ x_1 = 1]/\Pr_{\vec{x} \sim X_t}[\langle \vec{a}, \vec{x} \rangle = c \bmod Q]\\
&= \frac{t}{M} \cdot \Pr_{\vec{x}' \sim X_{t-1}'}[\langle \vec{a}_{-1}, \vec{x}'\rangle = c - a_1 \bmod Q]/\Pr_{\vec{x} \sim X_t}[\langle \vec{a}, \vec{x} \rangle = c \bmod Q]
\; ,
\end{align*}
where $\vec{a}_{-1}$ is $\vec{a}$ with its first coordinate removed and $X_{t-1}' := \{\vec{x} \in \{0,1\}^{M-1} \ : \ \|\vec{x}\|_1 = t-1\}$.
So, let
\[
p_{\vec{a}, c} := \Pr_{\vec{x} \in X_t}[ \langle \vec{a}, \vec{x} \rangle = c \bmod Q ]
\; ,
\]
and
\[
p_{\vec{a}, c}' := \Pr_{\vec{x}' \in X_{t-1}'}[ \langle \vec{a}_{-1}, \vec{x}'\rangle = c - a_1 \bmod Q ]
\; .
\]
As in the proof of Corollary~\ref{cor:LHL_A}, we see that
\begin{equation}
\label{eq:good}
\sum_{c \in \ensuremath{\mathbb{Z}}_Q} |p_{\vec{a},c}- 1/Q| = \Delta(\langle \vec{a}, U_{X_t}\rangle \bmod Q, U_{\ensuremath{\mathbb{Z}}_Q}) \leq Q^{1/4}/\binom{M}{t}^{1/4}
\end{equation}
except with probability at most $Q^{1/4}/\binom{M}{t}^{1/4}$ over $\vec{a}$. Similarly,
\begin{equation}
\label{eq:very_good}
\sum_{c\in \ensuremath{\mathbb{Z}}_Q} |p_{\vec{a},c}'- 1/Q| = \Delta(\langle \vec{a}_{-1}, U_{X_{t-1}'}\rangle \bmod Q, U_{\ensuremath{\mathbb{Z}}_Q}) \leq Q^{1/4}/\binom{M-1}{t-1}^{1/4}
\end{equation}
except with probability at most $Q^{1/4}/\binom{M-1}{t-1}^{1/4}$ over $\vec{a}$.
So, suppose that $\vec{a}$ satisfies Eq.~\eqref{eq:good} and Eq.~\eqref{eq:very_good}. Then, by Markov's inequality,
\[
\Pr_{c \in \ensuremath{\mathbb{Z}}_Q}[p_{\vec{a}, c} \leq (1-\varepsilon)/Q] \leq \frac{Q^{1/4}}{\varepsilon \cdot \binom{M}{t}^{1/4}}
\]
for any $0 < \varepsilon < 1$, and similarly,
\[
\Pr_{c \in \ensuremath{\mathbb{Z}}_Q}[p_{\vec{a}, c}' \geq (1+\varepsilon)/Q] \leq \frac{Q^{1/4}}{\varepsilon \cdot \binom{M-1}{t-1}^{1/4}}
\; .
\]
Therefore, for such $\vec{a}$,
\[
\Pr[p_{\vec{a},c,t} \geq (1+\varepsilon)t /((1-\varepsilon)M)] \leq \frac{2Q^{1/4}}{\varepsilon \cdot \binom{M-1}{t-1}^{1/4}}
\; .
\]
The result then follows by union bound.
\end{proof}
By repeated applications of union bound, we derive the following corollary.
\begin{corollary}
\label{cor:hitting_prob}
For any positive integers $Q,M,M',t, v, v'$ and $0 < \varepsilon < 1$, let $\vec{a} \sim \ensuremath{\mathbb{Z}}_Q^{M}$ and $\vec{c} \sim \ensuremath{\mathbb{Z}}_Q^{M'}$ be sampled uniformly at random. Then,
\[
p_{\vec{a}, \vec{c},I,J,t} \leq (v+tv') \cdot v' \cdot \frac{1+\varepsilon}{1-\varepsilon} \cdot \frac{t}{M}
\]
for all $I \in \binom{[M]}{\leq v},\; J \in \binom{[M']}{\leq v'}$ except with probability at most
\[
4 M M'\frac{Q^{1/4}}{\varepsilon \cdot \binom{M-1}{t-1}^{1/4}}
\; .
\]
\end{corollary}
\begin{proof}
Let $\eta := \max_{i,j} p_{\vec{a}, \vec{c}, \{i\}, \{j\},t}$.
By union bound, for any set $I$, we have
\[
p_{\vec{a},\vec{c}, I, \{j\},t} \leq \sum_{i\in I} p_{\vec{a},\vec{c},\{i\},\{j\}, t} \leq |I| \cdot \eta
\; .
\]
Fix some set $J$. Let $S_j$ be as in the definition of the hitting probability, and let $I_{-j} := I \cup \bigcup_{j' \in J \setminus \{j\}} S_{j'}$. Notice that $|I_{-j}| \leq |I| + t|J|$.
Then,
\[
p_{\vec{a}, \vec{c}, I, J,t} \leq \sum_{j \in J} p_{\vec{a},\vec{c}, I_{-j}, \{j\},t} \leq (|I|+t|J|) \cdot |J|\cdot \eta
\; .
\]
Finally, by union bound and Lemma~\ref{lem:hitting}, we have
\[
\eta \leq \frac{1+\varepsilon}{1-\varepsilon} \cdot \frac{t}{M}
\]
except with probability at most
\[
4 M M'\frac{Q^{1/4}}{\varepsilon \cdot \binom{M-1}{t-1}^{1/4}}
\;.
\]
The result follows.
\end{proof}
\subsection{Lattices and Lattice Problems}
\label{sec:lattices}
\begin{definition}[Shortest Independent Vectors Problem]
\label{def:sivp}
For an approximation factor $\gamma := \gamma(n) \geq 1$,
$\gamma$-SIVP is the search problem defined as follows. Given a lattice~$\mathcal{L} \subset \ensuremath{\mathbb{R}}^n$, output~$n$ linearly
independent lattice vectors which all have length at most
$\gamma(n)$ times the minimum possible, $\lambda_n(\mathcal{L})$.
\end{definition}
\def\mathbf{x}{\mathbf{x}}
\def\mathsf{SIS}{\mathsf{SIS}}
\begin{definition}[Short Integer Solutions]
\label{def:sis}
For integers $m,Q,\beta$, the (average-case) short integer solutions problem $\mathsf{SIS}(m,Q,\beta)$ is defined by $m$ integers $a_1,\ldots,a_m$ drawn uniformly at random and independently from $\ensuremath{\mathbb{Z}}_Q$, and the goal is to come up with a {\em non-zero} vector $\mathbf{x} = (x_1,\ldots,x_m)$ where $$\sum_{i\in [m]} x_i a_i = 0 \pmod{Q} \hspace{.2in} \mbox{and}\hspace{.2in} ||\mathbf{x}||_1 := \sum_{i=1}^m |x_i| \leq \beta$$
\end{definition}
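Verifying a candidate SIS solution is straightforward; the following Python helper (our own naming, included only for concreteness) checks the conditions in the definition.
\begin{verbatim}
def is_sis_solution(a, x, Q, beta):
    # a: the instance a_1, ..., a_m; x: a candidate integer vector of the same length.
    # Checks that x is non-zero, that sum_i x_i * a_i = 0 mod Q,
    # and that the l1 norm of x is at most beta.
    assert len(a) == len(x)
    nonzero = any(xi != 0 for xi in x)
    sums_to_zero = sum(xi * ai for xi, ai in zip(x, a)) % Q == 0
    short = sum(abs(xi) for xi in x) <= beta
    return nonzero and sums_to_zero and short
\end{verbatim}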
Following the seminal work of Ajtai~\cite{AjtGeneratingHard96}, there have been several works that show how to solve the {\em worst-case} $\gamma$-SIVP problem given an algorithm for the {\em average-case} SIS problem. We will use the most recent one due to Gama et al.~\cite{GINX16} (specialized to the case of cyclic groups for simplicity).
\begin{theorem}[Worst-Case to Average-Case Reduction for SIS~\cite{MRWorstcaseAveragecase07,GINX16}]\label{thm:GINX}
Let $n,Q, \beta \in \mathbb{N}$ where $Q = (\beta n)^{\Omega(n)}$. If there is an algorithm for the {\em average-case} SIS problem $\mathsf{SIS}(m,Q,\beta)$ over $\ensuremath{\mathbb{Z}}_Q$ that runs in time $T$, then there is an $(m+T)\cdot \mathsf{poly}(n)$-time algorithm for {\em worst-case} %
$\widetilde{O}(\sqrt{n \log m}\cdot \beta)$-SIVP on any $n$-dimensional lattice $L$.
\end{theorem}
\section{Variants of Average-case \texorpdfstring{$k$}{k}-SUM: Totality and Reductions}\label{sec:variants}
We define two variants of average-case $k$-SUM, one over the integers (which is the standard version of $k$-SUM) and one over the finite group $\ensuremath{\mathbb{Z}}_Q$ of integers modulo $Q$. We show that the hardness of the two problems is tied together, which will allow us to use the modular version for our results down the line.
\begin{definition}[Average-case $k$-SUM]
For positive integers $m, k \geq 2$ and $u \geq 1$, the {\em average-case} $\kSUMG{k}{u}{m}$ problem is the search problem defined as follows. The input is $a_1,\ldots, a_m \in [-u,u]$ chosen uniformly and independently at random, and the goal is to
find $k$ distinct elements $a_{i_1},\ldots, a_{i_k}$ with $a_{i_1} + \cdots + a_{i_k} = 0$.
\end{definition}
We define the modular version of the problem where the instance consists of numbers chosen at random from the finite additive group $\ensuremath{\mathbb{Z}}_Q$ of numbers modulo $Q$. This will appear as an intermediate problem in our algorithm in Section~\ref{sec:BKW} and our worst-case to average-case reduction in Section~\ref{sec:wcac}.
\begin{definition}[Average-case Modular $k$-SUM]
For integers $m, k \geq 2$ and integer modulus $Q \ge 2$, the average-case {$\kSUMG{k}{\ensuremath{\mathbb{Z}}_Q}{m}$} problem is the search problem defined as follows. The input is $a_1,\ldots, a_m \sim$ {$\ensuremath{\mathbb{Z}}_Q$} chosen uniformly and independently at random, and the goal is to
find $k$ distinct elements $a_{i_1},\ldots, a_{i_k}$ with $a_{i_1} + \cdots + a_{i_k} = 0$ {$\pmod{Q}$}.
\end{definition}
We highlight the distinction in our notation for the two problems. The former (non-modular version) is denoted $\kSUMG{k}{u}{m}$ (the first parameter is the bound $u$ on the absolute value of the elements), whereas the latter is denoted {$\kSUMG{k}{\ensuremath{\mathbb{Z}}_Q}{m}$} (the first parameter indicates the group on which the problem is defined). The second parameter always refers to the number of elements in the instance.
We now show that the modular problem is total when $\binom{m}{k} \gtrsim Q$ and is unlikely to have a solution when $\binom{m}{k} \lesssim Q$.
\begin{lemma}\label{lem:totalmod}
If $a_1,\ldots, a_m \sim \ensuremath{\mathbb{Z}}_Q$ are sampled uniformly at random, and $E_k$ is the event that there exist distinct indices $i_1,\ldots, i_k$ with $a_{i_1} + \cdots + a_{i_k} = 0 \pmod{Q}$, then
\[
1-Q/\binom{m}{k} \leq \Pr[E_k] \leq \binom{m}{k}/Q
\; .
\]
\end{lemma}
\begin{proof}
Notice that for fixed indices $i_1,\ldots, i_k$, the probability that $a_{i_1} + \cdots + a_{i_k} = 0$ is exactly $1/Q$. The upper bound then follows from a union bound over all $\binom{m}{k}$ $k$-tuples of indices. Furthermore, notice that for distinct indices $i_1,\ldots, i_k$ and distinct indices $j_1,\ldots, j_k$, the event that $a_{i_1} + \cdots + a_{i_k} = 0 \pmod{Q}$ is independent of the event that $a_{j_1} + \cdots + a_{j_k} = 0 \pmod{Q}$ as long as $\{i_1,\ldots, i_k\} \neq \{j_1,\ldots, j_k\}$. The lower bound then follows from Chebyshev's inequality.
\end{proof}
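As a sanity check, the dichotomy of Lemma~\ref{lem:totalmod} is easy to observe empirically. The Python sketch below (with illustrative parameters of our choosing) estimates $\Pr[E_k]$ by brute force over random instances; the estimate is close to $1$ when $\binom{m}{k} \gg Q$ and close to $0$ when $\binom{m}{k} \ll Q$.
\begin{verbatim}
import random
from itertools import combinations

def has_modular_ksum(a, k, Q):
    # Brute-force check: does some k-subset of a sum to 0 mod Q?
    return any(sum(S) % Q == 0 for S in combinations(a, k))

def estimate_prob(m, k, Q, trials=200):
    # Monte Carlo estimate of Pr[E_k] over uniform instances in Z_Q^m.
    hits = 0
    for _ in range(trials):
        a = [random.randrange(Q) for _ in range(m)]
        hits += has_modular_ksum(a, k, Q)
    return hits / trials

# binom(30, 3) = 4060 >> 257 (dense), while binom(8, 3) = 56 << 10007 (sparse).
print(estimate_prob(30, 3, 257), estimate_prob(8, 3, 10007))
\end{verbatim}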
\begin{lemma}
If $a_1,\ldots, a_m \sim [-u,u]$ are sampled uniformly at random, and $E_k$ is the event that there exist distinct indices $i_1,\ldots, i_k$ with $a_{i_1} + \cdots + a_{i_k} = 0$, then
\[
1- e^{-\alpha} \leq \Pr[E_k] \leq \binom{m}{k}/(2u+1)
\; ,
\]
where
\[
\alpha := \frac{1}{4k+2} \cdot \Big\lfloor \frac{m}{k(20u+10)^{1/k}} \Big\rfloor \approx m/(k^2 u^{1/k})
\; .
\]
\end{lemma}
\begin{proof}
The upper bound follows immediately from the upper bound in Lemma~\ref{lem:totalmod} together with the observation that elements that sum to zero over the integers must sum to zero modulo $Q := 2u+1$ as well.
Let $m' := k (10Q)^{1/k}$. Let $E_k'$ be the event that there exist distinct indices $i_1,\ldots, i_k \leq m'$ with $a_{i_1} + \cdots + a_{i_k} = 0$. Notice that
\[
\Pr[E_k] \geq 1- (1-\Pr[E_k'])^{\floor{m/m'}} \geq 1 - \exp(-\floor{m/m'}\Pr[E_k'])
\; .
\]
So, it suffices to show that
\[
\Pr[E_k'] \geq \frac{1-Q/\binom{m'}{k}}{2k+1} \geq \frac{1}{4k+2}
\; .
\]
By Lemma~\ref{lem:totalmod}, we know that with probability at least $1-Q/\binom{m'}{k}$, there exists a $k$-SUM that sums to zero modulo $Q$ in the first $m'$ elements. I.e., $a_{i_1} + \cdots + a_{i_k} = \ell Q$ for some $\ell \in \{-k,-k+1,\ldots, k-1,k\}$ and $i_1,\ldots, i_k \leq m'$. We wish to argue that $\ell = 0$ is at least as likely as $\ell = i$ for any $i$.
Let $p(k',s) := \Pr[a_1 + \cdots + a_{k'} = s]$ for integers $k',s$. Notice that for $s \geq 0$, we have
\begin{align*}
p(k'+1,s) - p(k'+1,s+1)
&= \big(p(k',s-u) - p(k', s+u+1)\big)/(2u+1) \\
&= \big(p(k', |s-u|) - p(k',s+u+1)\big)/(2u+1)
\; ,
\end{align*}
where the second equality uses the symmetry $p(k',y) = p(k',-y)$. Since $0 \leq |s-u| \leq s+u+1$, it then follows from a simple induction argument that $p(k,s+1) \leq p(k,s)$ for all $s \geq 0$. In particular, $p(k,\ell Q) \leq p(k,0)$ for any $\ell$.
Therefore, letting $E_{k,Q}'$ be the event that the first $m'$ elements contain a $k$-SUM modulo $Q$, we have
\begin{align*}
\Pr[E_k']
&\geq \Pr[E_{k,Q}'] \cdot \Pr[a_{i_1} + \cdots + a_{i_k} = 0 \ | \ a_{i_1} + \cdots + a_{i_k} = 0 \pmod{Q}]\\
&\geq \frac{\Pr[E_{k,Q}']}{2k+1}
\; .
\end{align*}
Finally, by Lemma~\ref{lem:totalmod}, we have
\[
\Pr[E_{k,Q}'] \geq 1-Q/\binom{m'}{k} \geq 1/2
\; ,
\]
as needed.
\end{proof}
\subsection{From \texorpdfstring{$k$}{k}-SUM to Modular \texorpdfstring{$k$}{k}-SUM and Back}
We first show that an algorithm for the modular $k$-SUM problem gives us an algorithm for the $k$-SUM problem. A consequence of this is that when we describe the algorithm for $k$-SUM in Section~\ref{sec:BKW}, we will focus on the modular variant.
\begin{lemma}\label{lem:Qimpliesu}
Let $u$ be a positive integer and let $Q=2u+1$. If there is an algorithm for the $\kSUMG{k}{\ensuremath{\mathbb{Z}}_Q}{m}$ that runs in time $T$ and succeeds with probability $p$, then there is an algorithm for $\kSUMG{2k}{u}{2m}$ that runs in time $O(T)$ and succeeds with probability at least $p^2/k$.
\end{lemma}
\begin{proof}
Let $\mathcal{A}$ be the purported algorithm for $\kSUMG{k}{\ensuremath{\mathbb{Z}}_Q}{m}$.
The algorithm for $\kSUMG{2k}{u}{2m}$ receives $2m$ integers $a_1,\ldots,a_{2m}$ in the range $[-u, u]$ and works as follows. We use the natural embedding to associate elements in $\ensuremath{\mathbb{Z}}_Q$ with elements in $[-u, u]$, so we may think of $a_1,\ldots,a_{2m}$ also as elements in $\ensuremath{\mathbb{Z}}_Q$ (simply by considering their coset modulo $Q$).
\begin{itemize}
\item Run $\mathcal{A}$ on $a_1,\ldots,a_m$ to obtain a $k$-subset $S_1$. If $\mathcal{A}$ does not succeed, then fail.
\item Run $\mathcal{A}$ on $-a_{m+1},\ldots,-a_{2m}$ to obtain a $k$-subset $S_2$. If $\mathcal{A}$ does not succeed, then fail.
\item If $\sum_{i\in S_1} a_i = -\sum_{i \in S_2} a_i$, output $S_1\cup S_2$ as the $2k$-subset. Fail otherwise.
\end{itemize}
It is clear that the run-time is $O(T)$ and that if the algorithm does not fail then it indeed outputs a valid $2k$-sum. It suffices to bound the probability that the algorithm succeeds.
Since the first two steps run $\mathcal{A}$ on independent and identically distributed input, we can deduce that the probability that both succeed is $p^2$, and in the case that both succeed, their output satisfies
$$ \sum_{i\in S_1} a_i = \alpha_1 Q \hspace{.1in} \mbox{and} \hspace{.1in} -\sum_{i\in S_2} a_i = \alpha_2 Q $$
for some integers $\alpha_1,\alpha_2 \in (-k/2,k/2)$, which are independent and identically distributed random variables. The probability that $\alpha_1 = \alpha_2$ is therefore at least $1/k$, since the collision probability of a random variable is at least the inverse of its support size. If this happens, then $\sum_{i\in S_1} a_i = -\sum_{i\in S_2} a_i$ and the algorithm succeeds. Thus, we conclude that our algorithm succeeds with probability at least $p^2/k$.
\end{proof}
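For concreteness, here is a minimal Python sketch of this wrapper. The oracle \texttt{modular\_ksum} is the assumed solver for $\kSUMG{k}{\ensuremath{\mathbb{Z}}_Q}{m}$, receiving $m$ residues and returning $k$ distinct indices of a solution (or \texttt{None}); all names are ours.
\begin{verbatim}
def two_k_sum(a, k, u, modular_ksum):
    # a: 2m integers in [-u, u]; tries to output a 2k-subset summing to 0
    # over the integers, following the reduction in the proof above.
    Q = 2 * u + 1
    m = len(a) // 2
    S1 = modular_ksum([x % Q for x in a[:m]], k, Q)
    S2 = modular_ksum([(-x) % Q for x in a[m:]], k, Q)
    if S1 is None or S2 is None:
        return None
    S2 = [m + j for j in S2]  # shift to indices into the second half of a
    if sum(a[i] for i in S1) == -sum(a[i] for i in S2):
        return sorted(S1 + S2)  # the union is a valid 2k-SUM over the integers
    return None
\end{verbatim}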
Finally, we show a proof in the other direction. Namely, that an algorithm for the $k$-SUM problem gives us an algorithm for the modular $k$-SUM problem. We will use this when we describe the worst-case to average-case reduction in Section~\ref{sec:wcac}.
\begin{lemma}\label{lem:mod-to-z}
For $m \geq k\cdot u^{2/k}$, if there is an algorithm for $\kSUMG{k}{u}{m}$ that runs in time $T$ and succeeds with probability $p$, then there is an algorithm for $\kSUMG{k}{\ensuremath{\mathbb{Z}}_{2u+1}}{m}$ that runs in time $T$ and succeeds with probability $p$.
\end{lemma}
\begin{proof}
Let $\mathcal{A}$ be the purported algorithm for $\kSUMG{k}{u}{m}$.
The algorithm for $\kSUMG{k}{\ensuremath{\mathbb{Z}}_{2u+1}}{m}$ receives $m$ integers $a_1,\ldots,a_{m} \in \mathbb{Z}_{2u+1}$ and works as follows.
As before, identify $\ensuremath{\mathbb{Z}}_{2u+1}$ with the interval $[-u,u]$ and run $\mathcal{A}$ on $a_1,\ldots,a_m$. The resulting instance is distributed exactly as a uniform $\kSUMG{k}{u}{m}$ instance (and the condition $m \geq k\cdot u^{2/k}$ ensures that it lies in the total regime), so with probability $p$ the algorithm $\mathcal{A}$ outputs a $k$-subset $S$ such that $\sum_{i\in S} a_i = 0$ over the integers. In particular, $\sum_{i\in S} a_i = 0 \pmod{2u+1}$, so $S$ is a modular $k$-SUM solution as well.
\end{proof}
\section{The \texorpdfstring{$u^{O(1/\log k)}$-time}{faster} Algorithm for Average-case \texorpdfstring{$k$}{k}-SUM}
\label{sec:BKW}
In this section, we describe a variant of the Blum-Kalai-Wasserman algorithm~\cite{BKW03} for the average-case $k$-SUM problem that runs in time $u^{O(1/\log k)}$.
\begin{theorem}
There is a $\widetilde{O}(2^\ell q^2)$-time algorithm that solves average-case $\kSUMG{2^\ell}{\ensuremath{\mathbb{Z}}_{q^\ell}}{m}$ for $m = \widetilde{\Theta}(2^\ell q^2)$.
\end{theorem}
\begin{proof}
On input $a_1,\ldots, a_m \in \ensuremath{\mathbb{Z}}_{q^\ell}$ with $m := 1000\ell^2 q^2 2^\ell \log q = \widetilde{\Theta}(2^\ell q^2)$, the algorithm behaves as follows. Let $L_1 := (a_1,\ldots, a_m)$.
For $i = 1,\ldots, \ell$, the algorithm groups the elements in $L_i$ according to their value modulo $q^i$. It then greedily matches them into $m_{i+1}$ disjoint pairs $(a,b)$ with $a + b = 0 \bmod q^i$. It sets $L_{i+1}$ to be the list of sums of these pairs (and records the indices of the $2^i$ input elements that sum to $a+b$). If at any point the algorithm fails to find such pairs, it simply fails; otherwise, the algorithm outputs the elements $a_{i_1}, \cdots , a_{i_{2^\ell}}$ satisfying $\sum a_{i_j} = 0 \bmod q^\ell$ found in the last step.
The running time of the algorithm is clearly $\mathrm{poly}(\ell, \log q, \log m) m$ as claimed. To prove correctness, we need to show that at each step the algorithm is likely to succeed in populating the list $L_{i+1}$ with at least $m_{i+1}$ elements, where $m_i := (\ell^2-i^2)/\ell^2 \cdot m/2^{i-1}$, since clearly the algorithm outputs a valid $2^\ell$-SUM in this case.
Suppose that the algorithm succeeds up to the point where it populates $L_i$. Let $L_i = (b_1,\ldots, b_{m_i})$, and for $h \in [m_i]$ let $b_h' := (b_h/q^{i-1}) \bmod q$, where the division by $q^{i-1}$ is possible because $b_h = 0 \bmod q^{i-1}$ by assumption. Notice that the $b_h'$ are independent and uniformly random. For $j \in \ensuremath{\mathbb{Z}}_q$, let $c_j := |\{h \ : \ b_h' = j \bmod q\}|$. Notice that the algorithm successfully populates $L_{i+1}$ if and only if
\[
\sum_{j\in\ensuremath{\mathbb{Z}}_q} \min\{ c_j, c_{-j} \}/2 \geq m_{i+1}
\; .
\]
By the Chernoff-Hoeffding bound, we have that
\[
\Pr\big[c_j < m_i/q - 10\sqrt{m_i \log m_i}\big] \leq 1/m_i^2
\]
It follows that
\[
\sum_j \min\{ c_j, c_{-j} \}/2 \geq q \min \{ c_j\}/2 \geq m_i/2 - 5q \sqrt{m_i \log m_i} \geq m_{i+1}
\]
except with probability at most $1/m_i$. By union bound, we see that the algorithm succeeds in populating every list except with probability at most $\sum 1/m_i \ll 1/10$, as needed.
\end{proof}
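For concreteness, the following Python sketch implements the pairing procedure from the proof (it is ours and purely illustrative: it ignores the exact list sizes $m_i$ and constants from the analysis, and simply matches greedily). Each element carries the set of original indices it aggregates, so that the final output is a set of $2^\ell$ indices whose elements sum to $0$ modulo $q^\ell$.
\begin{verbatim}
from collections import defaultdict

def bkw_ksum(a, q, ell):
    # a: list of elements of Z_{q^ell}.  Tries to return 2^ell distinct indices
    # whose elements sum to 0 mod q^ell, by pairing elements that cancel modulo
    # successively higher powers of q.  Returns None on failure.
    Q = q ** ell
    level = [(x % Q, frozenset([i])) for i, x in enumerate(a)]
    for i in range(1, ell + 1):
        buckets = defaultdict(list)
        for val, idx in level:
            buckets[val % (q ** i)].append((val, idx))
        nxt = []
        for r in list(buckets):
            s = (-r) % (q ** i)
            if s not in buckets:
                continue
            # greedily pair an element of residue r with one of residue -r mod q^i
            while buckets[r] and buckets[s] and not (r == s and len(buckets[r]) < 2):
                v1, i1 = buckets[r].pop()
                v2, i2 = buckets[s].pop()
                nxt.append(((v1 + v2) % Q, i1 | i2))
        if not nxt:
            return None
        level = nxt
    return sorted(level[0][1])  # any level-ell element sums to 0 mod q^ell
\end{verbatim}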
Combining this with Lemma~\ref{lem:Qimpliesu} (the reduction from $k$-SUM to modular $k$-SUM), we obtain the following corollary.
\begin{corollary}
For $u = (q^\ell-1)/2$ for odd $q$ and $k = 2^{\ell+1}$, there is a $u^{O(1/\log k)}$-time algorithm for $\kSUMG{k}{u}{m}$ for $m = u^{\Theta(1/\log k)}$.
\end{corollary}
\section{From Worst-case Lattice Problems to Average-case \texorpdfstring{$k$}{k}-SUM}
\label{sec:simple}\label{sec:wcac}
In this section, we describe our main result, namely a worst-case to average-case reduction for $k$-SUM. We state the theorem below.
\begin{theorem}\label{thm:mainthm-red}
Let $k,m,u,n$ be positive integers, and $0 < \varepsilon < \varepsilon'$ where $$u = k^{2(1+\varepsilon')cn/\varepsilon'} \mbox{ and }
m = u^{\varepsilon /(2\log k)}~$$
for some universal constant $c > 0$.
If there is an algorithm for average-case $\kSUMG{k}{u}{m}$ that runs in time $T_{\mathsf{kSUM}} = T_{\mathsf{kSUM}}(k,u,m)$, then there is an algorithm for the worst-case $n^{1+\varepsilon'}$-approximate shortest independent vectors problem (SIVP) that runs in time $2^{O(\varepsilon n/\varepsilon' + \log n)} \cdot T_{\mathsf{kSUM}}$.
\end{theorem}
When we say that a $k$-SUM algorithm succeeds, we mean that it outputs a $k$-subset of the input that sums to $0$ with probability $1-\delta$ for some tiny $\delta$. This can be achieved starting from an algorithm that succeeds with (some small) probability $p$ by repeating, at the expense of a multiplicative factor of $1/p\cdot \log(1/\delta)$ in the run-time. We ignore such issues for this exposition, and assume that the algorithm outputs a $k$-sum with probability $1-\delta$ for a tiny $\delta$.
Before we proceed to the proof, a few remarks on the parameters of Theorem~\ref{thm:mainthm-red} are in order. First, note that the parameter settings imply that $m^k \gg u$, therefore putting us in the total regime of parameters for $k$-SUM. Secondly, setting $\epsilon'=100$ (say), we get the following consequence: if there is a $k$-SUM algorithm that, on input $m = u^{\epsilon/(2\log k)}$ numbers, runs in time roughly $m$, then we have an $n^{101}$-approximate SIVP algorithm that runs in time $\approx 2^{\epsilon c n}$. Now, $\epsilon$ is the ``knob'' that one can turn to make the SIVP algorithm run faster, assuming a correspondingly fast $k$-SUM algorithm that works with a correspondingly smaller instance.
\begin{proof}
The theorem follows from the following observations:
\begin{itemize}
\item
First, by Theorem~\ref{thm:GINX}, there is a reduction from $\widetilde{O}(\sqrt{n\log m'}\cdot \beta)$-approximate SIVP to $\mathsf{SIS}(m',Q,\beta)$, where we take $m' := \ceil{k^{10cn/(k \varepsilon')} n^{10}}$. The reduction produces SIS instances over $\ensuremath{\mathbb{Z}}_Q$ where $Q \geq (\beta n)^{cn}$ for some constant $c$, and works as long as the SIS algorithm produces solutions of $\ell_1$ norm at most $\beta$. If the SIS algorithm runs in time $T_{\mathsf{SIS}} = T_{\mathsf{SIS}}(m',Q,\beta)$, the SIVP algorithm runs in time $(m'+T_{\mathsf{SIS}})\cdot \mathsf{poly}(n)$. We take $\beta := n^{\varepsilon'}$.
\item
Second, as our main technical contribution, we show in Lemma~\ref{lem:main} how to reduce SIS to $k$-SUM. Note that Theorem~\ref{thm:GINX} gives us the freedom to pick $Q$, as long as it is sufficiently large. We will set $Q = q^r$ where $r = \floor{\varepsilon' \log n/(2 \log k)}$ for a prime $q \approx u \approx (\beta n)^{cn/r} \approx k^{2(1+\varepsilon') cn/\varepsilon'}$.
Now, Lemma~\ref{lem:main} (with $k = t$) shows a reduction from $\mathsf{SIS}(m',Q,\beta)$ to $\kSUMG{k}{\ensuremath{\mathbb{Z}}_q}{m}$ (provided that $m' \gg q^{1/k} k^{4r/k} m^{4/k}$, which holds in this case).
The reduction produces a SIS solution with $\ell_1$ norm bounded by $k^{2r} \leq \beta$.
The running time of the resulting algorithm is $$rm'(m \cdot \mathrm{poly}(k,\log q) + 10T_{\mathsf{kSUM}}) \approx 2^{O(\varepsilon n/\varepsilon' + \log n)} \cdot T_{\mathsf{kSUM}}~.$$
%
\item
Finally, by Lemma~\ref{lem:mod-to-z}, we know that modular $k$-SUM over $\ensuremath{\mathbb{Z}}_q$ can be reduced to $k$-SUM over the integers in the interval $[-u,u]$ for $u \approx q$ with essentially no overhead.
\end{itemize}
This finishes the proof.
\end{proof}
The following lemma shows our main reduction from SIS to $k$-SUM. In particular, taking $k = t$, $m' \gg (m^4q)^{1/k} \cdot (10k)^{4r}$ (so that $\delta$ is small), and $m \gg q^{1/k}$ (so that $\kSUMG{k}{\ensuremath{\mathbb{Z}}_q}{m}$ is total) gives a roughly $rmm'$-time reduction from SIS over $\ensuremath{\mathbb{Z}}_{q^r}$ to $k$-SUM over $\ensuremath{\mathbb{Z}}_q$ with high success probability.
\begin{lemma}
\label{lem:main}
Let $m,m', k,r,t$ be positive integers and $q > (tk)^r$ a prime, and let $Q = q^{r}$. If there is an algorithm that solves (average-case) $\kSUMG{k}{\ensuremath{\mathbb{Z}}_q}{m}$ in time $T$ with success probability $p$, then there is an algorithm that solves $\mathsf{SIS}(m',Q,\beta)$ in time $r\cdot m' (m \cdot \mathrm{poly}(k,t,\log q)+10T)/p$ with success probability at least $1 - \delta$ and produces a solution with $\ell_1$ norm $\beta \leq (tk)^{r}$, where
\[
\delta := \frac{100 rm (m')^2}{p} \cdot \frac{q^{1/4}}{(m'/(10tk)^{2r+1})^{t/4}}
\; .
\]
\end{lemma}
\begin{proof}
At a high level, the idea is to run a variant of the Blum-Kalai-Wasserman~\cite{BKW03} algorithm where in each iteration, we call a $k$-SUM oracle. In particular, on input $a_1,\ldots,a_{m'}$, the algorithm operates as follows.
\begin{itemize}
\item
In the beginning of the $i^{\mathsf{th}}$ iteration, the algorithm starts with a sequence of $$m_i := \ceil{m'/(10 t^2 k^2)^{i-1}}$$ numbers $a_{i,1},\ldots,a_{i,m_i}$. The invariant is that $a_{i,j} = 0 \pmod{q^{i-1}}$ for all $j$. It then generates disjoint $S_{i,1},\ldots, S_{i,m_{i+1}} \subseteq [m_i]$ such that $|S_{i,\ell}| \leq kt$ and $\sum_{j \in S_{i,\ell}} a_{i,j} =0 \pmod{q^i}$, in a way that we will describe below.
As the base case, for $i=1$, $a_{1,j} = a_j$, the input itself, and the invariant is vacuous.
\item In the $i^{\mathsf{th}}$ iteration, we apply the re-randomization lemma (Corollary~\ref{cor:LHL_A}), computing subsets of $t$ randomly chosen elements from $a_{i,1},\ldots, a_{i,m_i}$, to generate $m_i^* := 10m \ceil{m_{i+1}/p}$ numbers $c_{i,1},\ldots,c_{i,m_i^*}$.
\item Let $d_{i,j} = c_{i,j}/q^{i-1} \pmod{q} \in \ensuremath{\mathbb{Z}}_q$. Note that this is well-defined because each $c_{i,j} = 0 \pmod{q^{i-1}}$.
\item Divide the $d_{i,j}$ into $10\ceil{m_{i+1}/p}$ disjoint blocks of $m$ elements each, and set $\ell = 1$. For each block, feed the block to the $k$-SUM algorithm to obtain $d_{i,j_1},\ldots, d_{i,j_k}$. This yields corresponding subsets $S_{1}^*,\ldots, S_{k}^* \in \binom{[m_i]}{t}$ such that $\sum_{j \in S_x^*} a_{i,j}/q^{i-1} = d_{i,j_x} \pmod{q}$. If $d_{i,j_1} + \cdots + d_{i,j_k} = 0 \pmod{q}$ and the sets $S_1^*,\ldots, S_{k}^*,S_{i,1},\ldots, S_{i,\ell-1}$ are pairwise disjoint, then set $S_{i,\ell} := \bigcup S_x^*$ and increment $\ell$.
\item If $\ell \leq m_{i+1}$, the algorithm fails. Otherwise, take $a_{i+1,\ell} := \sum_{j \in S_{i,\ell}} a_{i,j}$ for $\ell = 1,\ldots, m_{i+1}$.
\item At the end of the $r^{\mathsf{th}}$ iteration, we obtain a $(kt)^{r}$-subset of the $a_1,\ldots,a_{m'}$ that sums to $0 \pmod{Q}$.
\end{itemize}
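For intuition only, the following toy sketch (Python, on tiny parameters) illustrates the loop structure of the reduction: one base-$q$ digit of the combined values is zeroed out per iteration by a call to a $k$-SUM oracle, here replaced by brute force. It deliberately omits the re-randomization step of Corollary~\ref{cor:LHL_A}, the disjointness bookkeeping over the sets $S_{i,\ell}$, and the success-probability analysis, so it should not be read as the reduction itself.
\begin{verbatim}
# Illustrative toy version (not the paper's exact reduction): a BKW-style
# loop that repeatedly calls a brute-force k-SUM oracle mod q to zero out
# one base-q digit per iteration.
import itertools, random

def ksum_oracle_mod_q(values, k, q):
    # Brute-force stand-in for the k-SUM oracle: indices of k entries
    # summing to 0 mod q, or None if no such subset exists.
    for idx in itertools.combinations(range(len(values)), k):
        if sum(values[i] for i in idx) % q == 0:
            return idx
    return None

def toy_bkw_reduction(a, q, r, k):
    # Each pool entry is (value mod Q, tuple of original input indices).
    Q = q ** r
    pool = [(x % Q, (i,)) for i, x in enumerate(a)]
    for i in range(1, r + 1):
        # Invariant: every pooled value is divisible by q**(i-1).
        new_pool, remaining = [], list(pool)
        while len(remaining) >= k:
            digits = [(v // q ** (i - 1)) % q for v, _ in remaining]
            hit = ksum_oracle_mod_q(digits, k, q)
            if hit is None:
                break
            val = sum(remaining[j][0] for j in hit) % Q   # divisible by q**i
            idxs = sum((remaining[j][1] for j in hit), ())
            new_pool.append((val, idxs))
            remaining = [x for j, x in enumerate(remaining) if j not in hit]
        pool = new_pool
        if not pool:
            return None
    return pool[0][1]   # about k**r original indices summing to 0 mod Q

random.seed(0)
q, r, k, m = 5, 2, 3, 60
inputs = [random.randrange(q ** r) for _ in range(m)]
sol = toy_bkw_reduction(inputs, q, r, k)
if sol is not None:
    assert sum(inputs[i] for i in sol) % (q ** r) == 0
    print("zero combination of size", len(sol))
\end{verbatim}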
We now analyze the correctness, run-time and the quality of output of this reduction.
The reduction calls the $k$-SUM oracle $\sum m_i^*/m \leq 20rm'/p$ times. The rest of the operations take $r mm' \mathrm{poly}(k,t,\log q)/p$ time for a total of $r\cdot m'( m \mathrm{poly}(k,t,\log q)+10T)/p$ time, as claimed. Furthermore, the $\ell_1$ norm of the solution is $\beta \leq (tk)^r$, as claimed.
Finally, we show that the algorithm succeeds with the claimed probability. Since the sets $S_{i,\ell}$ are disjoint and do not depend on $a_{i,j} - (a_{i,j} \bmod q^i)$, it follows from a simple induction argument that at each step the $a_{i,j}$ are uniformly random and independent elements from $q^{i-1}\ensuremath{\mathbb{Z}}/q^r\ensuremath{\mathbb{Z}}$.
Therefore, by Corollary~\ref{cor:LHL_A}, the statistical distance of the collection of all $d_{i,j}$ (for a given $i$) from uniformly random variables that are independent of the $a_{i,j}$ is $\delta_i \leq (m_i^*+1)\cdot (\frac{qt^t}{m_i^t})^{1/4}$. In total, the statistical distance of all samples from uniform is then at most $\sum \delta_i < \delta/3$ for our choice of parameters. So, up to statistical distance $\delta/3$, we can treat the $d_{i,j}$ as uniformly random and independent elements.
It remains to show that, assuming that the $d_{i,j}$ are uniformly random and independent, then we will find \emph{disjoint} sets $S_{i,1},\ldots, S_{i,m_{i+1}}$ with $\sum_{j \in S_{i,\ell}} d_{i,j} = 0 \pmod{q}$ at each step except with probability at most $2\delta/3$.
Let $\vec{b}_{i} := (a_{i,1}/q^{i-1} \bmod q,\ldots, a_{i,m_i}/q^{i-1} \bmod q)$. By Corollary~\ref{cor:hitting_prob}, we have
\[
p_{\vec{b}_i, \vec{d}_i, I,J,t} \leq 10t^2 k^2 m_{i+1}/m_i \leq 1/2
\]
for all $I \in \binom{[m_i]}{\leq v}$ and $J \in \binom{[m_i^*]}{\leq v}$ except with probability at most $10 m_i m_i^* q^{1/4}/\binom{m_i-1}{t-1}^{1/4} < \delta/3$ for $v := tk m_{i+1} \geq |T|$, $v' := k$, and $\varepsilon := 1/2$.
So, suppose this holds.
Notice that $\Pr[d_{i,j_1} + \cdots + d_{i,j_k} = 0 \pmod{q}] = p$ by definition. And, conditioned on $d_{i,j_1},\ldots, d_{i,j_k}$, the $S_{x}^*$ are independent and uniformly random subject to the constraint that $\sum_{j \in S_x^*} a_{i,j}/q^{i-1} = d_{i,j_x} \pmod{q}$. Therefore, the probability that $S_1^*,\ldots, S_k^*, I := S_{i,1} \cup \cdots \cup S_{i,\ell-1}$ are pairwise disjoint in this case is exactly
\[
p_{\vec{b}_i, \vec{d}_i, I,J,t} \leq 1/2
\; ,
\]
where $J := \{j_1,\ldots, j_k\}$. So, each time we call the oracle, we increment $\ell$ with probability at least $p/2$. It follows from the Chernoff-Hoeffding bound that we increment $\ell$ at least $m_{i+1}$ times except with probability at most $e^{-m_{i+1}/100}\ll \delta/3$.
Putting everything together, we see that the algorithm fails with probability at most $\delta$, as claimed.
\end{proof}
\section{Conclusions and Future Work}
Throughout the construction of this artist-focused interface for creating visualizations of 3D multivariate data, we have broken apart the conceptual components of existing visualization software and reassembled them into a new whole, leveraging artistic metaphor and language to present the data visualization design process in a new way.
Like prior work in building visualization interfaces that support artists in their creative methodologies, the interface emphasizes rapid exploration of design alternatives, and because it is built on top of the Artifact-Based Rendering engine, it also enables artists to bring color, texture, line, and form from their existing art practices to the world of data-intensive science.
Our future work includes continuing to explore new visual encoding styles, which we plan to add to the interface as new plates. We are also excited by the potential of tangible user interfaces that might create an even tighter connection and seamless workflow between the physical task of creating in the studio and incorporating those creations into data-driven 3D renderings.
\section{Related Work}
Our work is inspired by the long tradition of artists engaging with and interpreting scientific data, and it contributes to literature in the interdisciplinary research areas of visual design for science and creativity support tools.
\subsection{Artists Engaging with Scientific Data}
From Leonardo da Vinci to the present VISAP program, there is a long history of artists engaging with scientific concepts and data. The examples pictured in this paper deal specifically with climate science, a common theme in contemporary art-sci-tech work.
Artists who provide a human context to environmental issues are widespread. InfoWhelm, a recent publication by Houser, surveys contemporary art and literature addressing climate and the environment~\cite{Houser}. Pioneered by artists including Polli, Anadol, Miebach, West, Vensa, Viegas and Wattenburg, just to name a few, the field has spurred university programs across the United States, including the University of New Mexico, the University of Oregon, Arizona State University, and UCLA ~\cite{polli2005atmospherics,Mielbach,west2015dataremix,Viegas2007,Samsel2013ArtSciTech}. Other artists working directly from actively studied scientific data have created site-specific interactive and sculptural installations~\cite{swackhamer2017weather,singh2018orbacles}, translated topographic data to ice sculptures of glaciers that make the viewer gasp~\cite{sengal2015glacier}, and much more. Rather than striving to produce an ``unbiased'' or ``sterile'' account of scientific data, artists often help us understand and interpret the human connection to the data, creating data visualizations that can successfully interweave both data {\em and} emotion~\cite{wang2019emotional,Carpendale-2018-art-connection}.
\subsection{Visual Design for Science}
Artists also have a rich history of contributing to the field of data visualization specifically, teaming with scientists and other data stakeholders to present data more clearly. Cox outlined what artists can contribute to science, specifically highlighting the powerful difference that artist-designed colormaps can make as well as artists' usefulness in depicting high-dimensional spaces~\cite{cox1988using}. Many visualization techniques have drawn inspiration from art (e.g., use of color~\cite{lum2002nprVolRend}, stippling and line rendering~\cite{lu2003stipple, almeraj2009pencilLines}, narrative forms~\cite{bach2017emerging}). Several guidelines that improve visualization clarity, engagement, and impact come directly from design theory and principles~\cite{Ware_2012,FunctionalArt}. Numerous programs~\cite{Processing}, dashboards and entire companies have been created to enable this effort. Most closely related to our work, powerful visualization tools, systems, and user interfaces have been designed specifically to support the role of artists in data visualization, notably~\cite{schroeder2010drawing,schroeder2015visualization,guo2011wysiwyg,bruckner2005volumeshop,keefe2008scientific, Processing}.
Since it is built upon Artifact-Based Rendering (ABR)~\cite{johnson2019artifact}, our interface supports the visualization of 3D multivariate scientific datasets using artist-created media such as glyphs, colormaps, lines, and textures. This is a defining aspect of our approach, since this technique is already so closely tied to traditional artistic practice and leverages real-world artistic skill. The interface also includes features found useful in other digital visualization interfaces for artists. For example, we follow an approach similar to ColorMoves~\cite{samsel2018colormoves} to support building, editing, and tweaking custom dataset-specific colormaps in real time.
In general, the interface supports a mode of working that is fast, iterative, and visual. Interfaces to powerful 3D visualization packages, such as ParaView~\cite{ahrens2005paraview} and VisIT~\cite{HPV:VisIt}, are primarily designed for scientists and technologists. The workflows they provide elegantly succeed at defining a highly adjustable logical progression starting from data, moving through a processing pipeline, and finally rendering an image. However, the approach does not necessarily align with the creative workflows of visual artists and designers. Instead, we seek a workflow that includes a similar level of control over appearance but is more in the style of fluid, sketch-based user interfaces. Artists have previously used sketch-based user interfaces to prototype 3D visualizations~\cite{keefe2005artistic,keefe2008scientific}, create custom illustrations of 2D fluid flows~\cite{schroeder2010drawing}, sketch free-form glyphs~\cite{schroeder2015visualization,isenberg2008interactive}, and create multi-layered animated 2D visualizations of a variety of datasets~\cite{schroeder2015visualization}. In our interface, sketching and working with many other forms of physical media happen primarily outside of the digital interface but, in the style of Buxton's work with ``sketching user experiences''~\cite{buxton2010sketching}, working with the interface can feel like sketching in the same way that it enables a similar rapid, fluid style of visual exploration.
\subsection{Creativity Support Tools}
Tools that enable rapid visual experimentation like sketching are excellent examples of a broader category of user interfaces known as creativity support tools~\cite{shneiderman2000creating}. Visual creativity is often aided by digital sketch pads, tablets, and other interfaces that enable artists and designers to work directly with their hands, leveraging real-world, physical skills. Our approach is almost an extreme version of this, leveraging artists' physical real-world work by digitizing it. Some artists have commented that this enables greater expression and personalization, since, for all the success of digital tools, there is a reason artists continue to work with physical media. Creativity has also been linked to user interfaces that involve visual exemplars~\cite{terry2002sideViews} and fun, which is often encouraged through subtle animations and metaphors~\cite{shneiderman2004designing}. Our work contributes to the growing interdisciplinary literature on creativity support tools with a new specific focus on artists in visualization.
\subsection{Interpretation and Extended Use}
\begin{figure}[tbh]
\centerline{\includegraphics[width=\columnwidth]{TEXT_Sections/pics/colormapping.png}}
\caption{The interface includes the ability for artists to upload and/or create new colormaps via the Color Loom applet using the ABR technique. While this provides limitless options, the artists felt strongly that they needed to be able to control the distribution of the hues across the data in order to create contrast and visual distinction within the imagery. Thus, an internal colormapping tool was added to the interface. On the right is the color interface enabling one to control hue distributions.}
\label{fig:colormap-interface}
\end{figure}
\begin{figure}[tbh]
\centering
\includegraphics[width=\columnwidth]{TEXT_Sections/pics/Background3b.jpg}
\caption{Intuition might dictate that the water in an Antarctic visualization should be blue (left). However, in practice with complex multivariate data, such as the six water masses mixing shown in this figure, using blue for the ocean can quickly become overwhelming. One solution is to try a more neutral color (middle and right). The subtle differences between the middle and right images are something easily accomplished in the ABR interface, enabling fine-tuned control of visualizations.}
\label{fig:background}
\end{figure}
Francesca Samsel, an artist in the Sculpting Vis Collective, is a co-designer of the interface and has worked with it in various stages of development to create works like those pictured in Figures~\ref{fig:teaser} and~\ref{fig:science_connection}, which build so clearly upon her art practice, for example the form, line quality, metaphor, and color of the Osmosis series (Figure~\ref{fig:texture}).
In this section, we report both on Samsel's own observations working with the interface and her reflections on the feedback received from other artists. Together, these coalesce around three major themes.
\subsubsection{Scope and fidelity of visual vocabulary}
The visual variation exhibited by artists was significant because the workflow
presented by the interface allows each individual to contribute their artistic
vision to the visualization design. In many cases, this is accomplished by
enabling artists to utilize elements that they have previously created, such as
chine coll\'{e} and custom colors discussed earlier. Samsel shows how the same
can be accomplished with hand-sculpted clay forms to create custom glyph
vocabularies (Figure~\ref{fig:glyphs}). Relative to typical scientific
visualization software, the scope of visual variation that is possible is much
larger, similar to working in a studio. So, subtle variations in visualization
design can be explored
to discover combinations that work together.
Additionally, since the interface is focused on enabling an interactive design
process, artists can make minute changes to the visual inputs, and often these
have a profound effect on the resulting visualization.
Figure~\ref{fig:background} demonstrates how an artist with the ability to fine-tune the color can apply color-contrast principles to improve the clarity of the visualization.
We see it as a mark of a good interface that this ability to fine-tune the visualization correlates with artistic skill. The ability to render an object without thinking is the root of visual invention; it is the means by which artists push their work and vocabulary forward. The idea is to leave the critic outside the room and trust in the process. The key in the arts is to gain enough technical skill that your hands flow freely, and then, when you enter the studio, to turn off the linear, rational decision-making process and follow the work itself as it guides you to the next step.
\subsubsection{Rapid iteration and stimulation of artistic imagination}
The scope and fidelity of the visual vocabulary combined with the speed at which one can explore visual alternatives seems to lead to a tool that stimulates the artistic imagination. ``This is fun!'' -- both of the artists seeing the interface for the first time expressed enjoyment in the process. Artists can get into the creative zone, that special mental place where artistic magic happens, even while they are working with 3D multivariate scientific datasets.
Any tool living up to an artist's specifications must easily support iteration in order to facilitate work in the creative zone. While Figures~\ref{fig:glyphs},~\ref{fig:colormap-interface}, and~\ref{fig:background} demonstrate the impact of shape and color on our ability to distinguish between variables, Figure~\ref{fig:science_connection} demonstrates that engagement is not at cross purposes with scientific needs. The iterations shown here were accomplished in under a minute by our experienced artist. It is this flexibility and rapid iteration that enables artistic discovery. The key is to remove the barriers to achieving this creative state so that artists can use the skills built up over a lifetime of experience to add their voices and visions to the multidisciplinary efforts to wrangle increasingly large and complex scientific data.
Science and art have a common thread in that many scientific breakthroughs, like artistic breakthroughs, happen when intense thought, contemplation, and experimentation meet the subconscious and the accidental, underscoring the need for artistic expression in our society and for science.
\subsubsection{Limitations}
We have described some feedback from artists on the occasion of their first
introduction to the interface, and it is important to note that this is not a
system to be mastered in a single session. The scope of the vocabulary feels
limitless, which is a good thing; artists are used to starting with limitless
possibilities and narrowing to the essential. However, the possibilities here
are even more expansive than sitting down in front of a canvas and paints; it is
more like walking into a new studio with paint, clay, printing press, video
software, and more. Artists tell us that they know they will need more time to
experiment before they can take full advantage of the tool. Similar to artists
learning a new medium, we fully expect that as artists spend more time with the
interface, the resulting variety in visualizations will expand as well.
We also wish to be clear that, for all of the accessibility benefits presented by this interface, it does not address the major challenge of data wrangling. For the data pictured in this paper, we have relied upon ParaView to accomplish that task. By structuring the design interface and rendering engine as a modular system that can connect to existing tools like ParaView via network sockets, we are able to read data from this tool that scientists already use for analysis and have a communication strategy that can be mimicked with other scientist-facing data analysis tools. However, this does not address the problem that, for many actively studied datasets, the initial task of bringing data into such tools can be a daunting challenge.
\section{Introduction}
The arts and humanities are crucial in formulating, interpreting, and expressing challenging problems and ideas, including those that are the subject of scientific inquiry like climate science, public health, ethical engagement with technology and more.
However, our current ``data-intensive paradigm''~\cite{jim-gray-fourth-paradigm} makes it increasingly difficult for artists, humanists, and others to engage deeply with such problems and ideas since engaging with the underlying multidimensional data increasingly requires a core background in data science or computing.
Our team has formed a design collective, the Sculpting Visualizations
Collective. The ethos of the Collective derives from a desire to create more
enchanting visualizations, as well as improving the data-intensive tools these
visualizations are built on. Our team does this by bringing the knowledge and
experience of artists and designers into all facets of the visualization
process, from the theory behind a visualization tool to its application, design,
and the final products that it enables.
In this paper, we discuss the specific challenge of designing a user interface
for artists to create visualizations of actively studied, modern 3D datasets.
For those who wish to engage with the big, data-driven questions of our day, the
vision is to welcome that engagement in multiple modes, both as part of the
artists' own art practice and/or as part of interdisciplinary collaborative
efforts to better understand and communicate science. We want the artists
working in this space to be able to work as artists. The design user interface
elegantly mirrors the ethos of our Collective by drawing from the tradition of
printmaking. In doing so, it creates a more intuitive, enchanting way for
artists to easily and quickly iterate through many visual possibilities derived
from their work. The artist-centered design process that the interface enables
allows for artists to create visualizations that enchant users with their
surprising, yet intuitive design choices that spark imagination and curiosity.
Our design approach is grounded in an embrace of traditional, physical art processes. We ask, what if we could make the process of providing the technical specification for a data visualization feel like it fits within the artist's studio, something that is synergistic with creative visual processes based on experimentation and ``working on the whole'' rather than the style of ``linear thinking'' and ``efficiency'' that is often more closely associated with computing and big data?
Our technical approach makes use of the Artifact-Based Rendering (ABR) technique presented in last year's IEEE VIS technical track~\cite{johnson2019artifact}. That technique makes it possible to render 3D visualizations using visual elements created by digitally scanning real-world artifacts (e.g., 2D paintings, drawings, ink washes, and prints; 3D clay, shaved wax, arranged objects). The software places these visual building blocks in the 3D visual space of the data and morphs, recolors, and otherwise adapts them in response to data. For example, working with this technique, artists can populate a virtual sea with hand-sculpted clay glyphs, coloring each one according to the ocean temperature, to help clarify for scientists five types of water masses critical to understanding ice sheet melt rates (Figure~\ref{fig:teaser}). Artists have already used ABR to produce visualizations that exhibit a unique, decidedly hand-crafted style~\cite{samsel-visap-last-year}. However, until now this process has required a programmer to act as a guide, helping to set up appropriate links from data objects to computer graphics render objects and to interpret the visual meaning of various parameters exposed by the computer graphics shader programs. Our technical goal is to create the interface needed to generate such a complete and correct rendering specification that the ABR engine can use to drive its computer graphics shader programs.
The interface to weave together the creative artistic process and the technical data-driven 3D rendering engine was designed and implemented over the course of approximately 8 months by an interdisciplinary collaborative team (the co-authors) who collectively bring expertise from the disciplines of visual art, environmental humanities, and computer science. The team is separated by distance, and it helped us to kick off the design project with an in-person ``hunker week,'' which was then followed up with months of implementation and iterative refinement supported by two regular team video conferences per week. Our work began by understanding related work and then moved to building a common language, brainstorming, implementing, and iteratively refining the interface.
\section{Insights from Artists}
This section discusses the experience and insights from artists on our team who have had a chance to work with the interface in detail as well as feedback artists who have just started the process of designing visualizations with the interface. We begin by describing design sessions we facilitated with two practicing artists as an introduction to the tool.
We cast these two introduction / design sessions, simply, as a chance to work creatively with a dataset scientists are currently using to understand the biogeochemistry of the Gulf of Mexico~\cite{wolfram2015diagnosing}. These data include surface layers for the ocean and land, streamlines for the ocean currents representing the direction of eddy curvature (clockwise or anticlockwise), and concentrations for nitrates and chlorophyll. The sessions were conducted remotely so as to respect the social distancing necessitated by the ongoing pandemic. The software ran on a local computer with the artists connecting remotely over a video conferencing link. In one case, that link supported having the artist take over control of the local computer and use the interface with her own mouse. In the other, we acted as the artist's hands on the interface, sharing the screen over video and following her directions.
\subsection{Vis Design Session 1}
The first artist, Deborra Stewart-Pettengill, works with many forms of traditional artistic media and had no experience working with 3D scientific data prior to this session. Her current art processes include printmaking with chine-coll\'{e}, an intricate technique that involves adding color and form to a print by using the press to apply shapes of thin cutout paper to the print. She was interested in experimenting with her abstracted natural and organic forms on a data visualization after she was contacted by a member of the Sculpting Vis Collective.
After some introduction to the project and discussion to find common ground, we realized we were all curious to see how the line quality she and the master printmakers at Wingate Studio achieve with chine-coll\'{e} (Figure~\ref{fig:artist1_gulf}A) might translate into a digital data visualization. So, she digitized some of this work using her smartphone camera, and we used the ABR's Infinite Line and Texture Mapper applets~\cite{johnson2019artifact} to prepare the results for attaching to data.
``I'd like to start with something neutral in the terrain,'' Stewart-Pettengill stated at the onset of our session; together, we assigned a generic brown colormap and left it for the time being. Later, toward the end of the session, she returned to this color; ``the one thing I want to change is the color of the land.''
The colormap was modified to tease out the most varied part of the elevation data -- ``That's looking a lot better, the peachiness of it looks good with the blue-violet water.''
Stewart-Pettengill was enthusiastic about including her own chine-coll\'{e} work
-- when she first saw the 8 chine-coll\'{e} textures in her palette, she immediately recognized them: ``Oh
yeah, there are my strokes! Cool!'' Later, she applied these to the streamlines depicting ocean
currents and experimented with adjusting line width and coloration
(Figure~\ref{fig:artist1_gulf}B,C).
In fact, she repeatedly came back to the streamline pattern and colormap throughout the process, particularly once she began assigning encodings to the chlorophyll and nitrate glyph layers.
We will return to the importance of bringing one's own work from the studio into the new project of visualizing data; this emerged as a consistent theme.
Another important lesson from the introductory session with Stewart-Pettengill was the need for visual experimentation. She is ``just so used to experimenting with things'' as she works, and reflected that the interface enabled a similar process during her session. To provide a sense of the scope of experimentation that is possible, we noticed that in addition to the 8 line textures mentioned earlier, she also worked with over 10 glyphs and more than 20 colormaps for the ocean current streamlines, the nitrates, and the ocean floor during this first-ever design session.
\begin{figure}[tbh]
\centering
\includegraphics[width=\columnwidth]{TEXT_Sections/pics/chine-colle.png}
\caption{The top image shows chine-coll\'{e} etchings by Samsel, from a series entitled Osmosis. Details from the etching are converted into textures usable in ABR visualizations, shown in the bottom row alongside textures from Stewart-Pettengill.}
\label{fig:texture}
\end{figure}
\subsection{Vis Design Session 2}
The second artist, Stephanie Zeller, had prior experience working with 3D scientific visualizations including custom colormap creation and has worked with the Sculpting Vis Collective previously. Zeller is trained in traditional media with a focus on painting. Her work of late examines the viewer's relationship with the digital environment through traditional media, especially in terms of data consumption and analysis. Zeller is also a freelance writer and uses a breadth of digital software to augment her work in data journalism through both digital illustration and information visualization.
Of her session, Zeller remarked ``As I worked, I was coming first from an
artistic angle, closely followed by a scientific angle. I wanted it to be both
aesthetically appealing and to also maintain the ability to distinguish between
variables, which requires a fair amount of stepping back and moving in again.''
She spent a short time experimenting with colormaps for the ocean floor once a texture was applied, then moved on to working on the terrain colormap. She clearly judged the terrain as critical to the composition, spending more than 20 minutes to fine-tune a brown/tan hued colormap to work well specifically for these data. She later decided to take the design in a new direction, shifting to a green/white colormap to achieve suitable contrast with the rest of the visualization elements (particularly the streamlines) and to not overpower the main subject, which is the visualization in the ocean.
The design process was clearly iterative, with each design decision coming as a
reaction to what was seen. At one point, Zeller decided that there was ``too
much green'' in the visualization, and swapped out the old olive chlorophyll glyphs
for blue ones instead. Later, she returned to the colormaps for the land and the
streamlines, making fine adjustments to achieve a balance between them.
Similar to Stewart-Pettengill, Zeller was also keen to incorporate her own artwork in the visualization; in her case through color. During this introductory session, she worked with more than 50 different colormaps, editing 18 of them relative to the data histogram, and creating 2 more from scratch. She felt that many of the colormaps given in the curated library didn't resonate with her style and created new ones adapted from her own work in data journalism, which can be seen on the ocean current streamlines (Figure~\ref{fig:artist2_gulf}).
In her own paintings, Zeller often builds very large surfaces (8 feet or more in
length), and works with details up close.
Every so often, she steps far enough back to see the entire piece and understand
the color relationships in the part she just worked on, and make an informed
next move on that part.
Building visualizations with the interface was appealing to her because she felt
she could address design concerns from up close and far away using her familiar
processes.
This was reflected in her usage of the interface -- when choosing colors and
glyphs, she started with a detail-oriented view, quickly moved to a more global
perspective, and then made changes in the visualization design based on the
color interactions between variables both close up and far away.
Zeller felt that for similar reasons, it was intuitive to design for both a
close-up 3D perspective and a bird's-eye 2D perspective using the interface.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{TEXT_Sections/pics/artist2_2.jpg}
\caption{Stephanie Zeller's process for designing a visualization of the Gulf of Mexico. (1) shows an early step in the visualization, (2) includes progress on the streamlines and terrain colormap, and (3) shows the final result including several custom colormaps designed by Zeller.}
\label{fig:artist2_gulf}
\end{figure}
\begin{figure*}[tbh]
\centering
\includegraphics[width=\textwidth]{TEXT_Sections/pics/SciConnection.jpg}
\caption{This visualization of water masses underneath the Ronne-Filchner Ice Sheet shows: five ocean masses, their locations and temperature; two directions of eddy flow; the ocean floor depth; and the topography of the Antarctic. Here we are illustrating the power of including artists in the process. In order to render seven overlapping variables in a 3D simulation, the visualization uses distinguishable glyphs and hues. The visualization shown here provides scientific value, as scientists have not previously seen the movement of these water masses, which are critical to predicting the melt rate of the ice on the underside of the Antarctic ice sheets. Data: E3SM, BER, DOE.}
\label{fig:science_connection}
\end{figure*}
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{TEXT_Sections/pics/4Gly.jpg}
\caption{Detail in the glyph selection process. The yellow glyphs in the left column (A and C) compare a disk versus a spherical glyph. On the right, the green glyphs compare a triangular shape (B) versus an elongated form (D). While these are subtle shifts, they provide critical contrast when working on large, complex data.}
\label{fig:glyphs}
\end{figure}
\section{The Visualization Design Interface}
We begin the discussion of the user interface design by reflecting on why this is a hard problem to solve. There are many reasons for this, but one of the most important is the disconnect between what artists perceive as the visual object they wish to operate on and how the underlying computer graphics software organizes its data objects and rendering objects. These various objects often have complex relationships; so, designing an interface that fits the artists' cognitive process is not as simple as making a diagram of the software organization and adding buttons and sliders to control the parameters.
Stepping back during a design session to look at a data visualization, an artist might think, ``let's see what happens if we change those glyphs to show water temperature instead of salinity'' (Figure~\ref{fig:ex1}).
\begin{figure}[!ht]
\centerline{\includegraphics[width=\columnwidth]{TEXT_Sections/pics/before-after-data.png}}
\caption{``Let's see what happens if we change those glyphs to show water temperature instead of salinity.''}
\label{fig:ex1}
\end{figure}
Notice how, to the visual artist, this thought is organized conceptually around making a change to a visual object. ``Those glyphs'' are perhaps the most visually dominant aspect of the image, and the goal is to make what seems like a small visual edit to the colormap.
From the standpoint of the software, this operation actually involves three classes of data that need to be managed in concert: 1. Data Variables--temperature and salinity are both data variables; we might also call them data fields in this case because for any (x,y,z) point within the bounds of the data, we can look up the water temperature at that point. 2. A Dataset--both these variables belong to a set of data that are somehow related, in this case spatially; we might say we are visualizing ``the Gulf of Mexico dataset.'' 3. Data Objects--``those glyphs'' are yet another class of data. They exist within the same dataset, but they are not exactly data variables nor are they data fields. We could say that they are derived from data field(s) in the sense that the glyphs are generated by sampling into another variable of the data field such as phytoplankton concentration or the nitrate concentration. In the software, these are known by the rather nondescript term, ``data object.''
Generally, we cannot picture data variables directly, but we can use them to modify a data object. For example, we might modify the glyph size or vary the color in response to values of a data variable from the same dataset. This leads to rules, such as: 1) data variables and data objects both belong to datasets, and 2) you can't draw a data variable directly, but you can draw a data object and then modify some of its properties based on a data variable, as long as they belong to the same dataset. During our ``hunker week,'' it took our team multiple days of deep discussion to believe we were finally on the same page with the nuanced relationships between data variables, data objects, and datasets, which aspects of these are precalculated and available to the user and which are not, and where data are stored -- inside a dataset, in a data object, or both. We therefore realized a key design goal of the interface should be to naturally encode such rules and avoid the need for multi-day user training sessions.
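As an illustration of how such rules might be encoded in software, consider the following sketch (the class and method names are hypothetical and are not the actual ABR API):
\begin{verbatim}
# Hypothetical sketch (not ABR's actual API): one way to encode the
# dataset / data-variable / data-object rules described above.
from dataclasses import dataclass, field

@dataclass
class Dataset:
    name: str

@dataclass
class DataVariable:          # e.g., "Temperature", "Salinity"
    name: str
    dataset: Dataset

@dataclass
class DataObject:            # "key data", e.g., glyph sample points
    name: str
    dataset: Dataset
    encodings: dict = field(default_factory=dict)

    def encode(self, visual_property: str, variable: DataVariable):
        # Rule: a data object can only be modified by a variable
        # belonging to the same dataset.
        if variable.dataset is not self.dataset:
            raise ValueError("variable and data object must share a dataset")
        self.encodings[visual_property] = variable

gulf = Dataset("Gulf of Mexico")
temperature = DataVariable("Temperature", gulf)
salinity = DataVariable("Salinity", gulf)
glyphs = DataObject("chlorophyll-points", gulf)

glyphs.encode("color", salinity)       # initial design
glyphs.encode("color", temperature)    # "show temperature instead of salinity"
print(glyphs.encodings["color"].name)  # -> Temperature
\end{verbatim}
Checking the ``same dataset'' rule at the moment a link is made mirrors the design goal above: the rules are enforced by the structure itself rather than by user training.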
The following request highlights another challenge, ``Now, change these flow lines so that they are drawn using this long thin clay form I sculpted yesterday rather than the current textured ribbons'' (Figure~\ref{fig:ex2}).
\begin{figure}[!ht]
\centerline{\includegraphics[width=\columnwidth]{TEXT_Sections/pics/before-after-ribbons.png}}
\caption{``Now, change the flow lines so that they are drawn using this long thin clay form I sculpted yesterday rather than the current textured ribbons.''}
\label{fig:ex2}
\end{figure}
\begin{figure}[t]
\centerline{\includegraphics[width=\columnwidth]{TEXT_Sections/pics/gui-mockup.pdf}}
\caption{Digital mockup of the three-panel interface. The center Composition Panel is where artists link together data, stored on the left panel, and visual elements, stored on the right.}
\label{fig:GUImockup}
\end{figure}
Again, from the artist's standpoint, this request is completely natural. It operates on the visual element of the ``flow lines.'' On the surface, it does not ask to change that object in any fundamental way--they are still the same lines. The request is just to change the visual representation. Unfortunately, to the computer graphics programmer, this must be interpreted as a big change because the efficient approach to drawing these lines is completely different in the two cases. The differences, even requiring different triangle mesh geometries and different shader programs, are significant enough that most graphics programmers would take the approach, ``OK, if you want to make that change, then I will just delete the old ribbon render objects I created for the lines earlier and start over, making a new set of lines that use my instanced mesh shader program to create something that looks visually like a line but is really many copies of your clay mesh.'' Creating a user interface that embraces the language and creative visual design processes of artists while also preserving the full power of the computer graphics and efficiencies needed to render large datasets in VR is a serious challenge.
\subsection{The Language of Printmaking}
One of the joyful struggles of interdisciplinary collaboration is breaking down the traditions, processes, and assumptions of our various disciplines in order to find the intersection points that we {\em know} are there but can be so difficult to articulate clearly. We often use metaphor to assist with building a common language in such conversations. So, during the ``hunker week'' that kicked off our interface development effort, we searched for metaphors that might help to translate the most complex requirements of the 3D graphics data rendering engine into a language that fits with artistic traditions.
Printmaking emerged as one of the most useful metaphors. In intaglio printmaking, artists first create a design by carving or etching into a matrix, such as a metal plate. Ink is applied to the plate, filling in the recessed design. Then, the print is pulled (the design is transferred to paper) by running the paper and plate through a press. A single transfer in this style is called an impression, but prints routinely combine several layers of impressions from multiple different plates. An edition of identical prints can be produced by repeating the process with the same inks, plates, and ordering, or, an infinite variety of new prints can be created by combining plates in new ways, inking them differently, or even adjusting pressure on the press. There are many additional variations to the process including that wood, metal, stone, and linoleum can all be used as matrices. However, one amusing constant seems to be that if you ask a printmaker, ``do you have any old plates in your studio?'' You will often get a smile and the answer, ``yes, how did you know my closet is absolutely overflowing with metal plates, wood blocks, and ink!''
There are some useful connections between printmaking and data visualization. Like a library of reusable computer graphics algorithms that can draw points, lines, or surfaces in different colors and locations based on the data sent to them, that closet full of plates provides a reusable collection of design elements that can be reinked and rearranged on the page in countless ways. Waiting in the closet until they are needed, these plates (rendering algorithms) are only brought to life when they are loaded with ink (data). It is true that printmaking notably departs from 3D computer graphics in that the result is typically a 2D image; however, printmaking is also the one traditional, physical art form where the concept of building up a complete composition from a series of separate layers is absolutely obvious. In fact, the technical steps required to set up and pull each layer can be quite complex and time consuming. So, many printmakers naturally think in terms of layers and procedures that are quite reminiscent of the way multiple data objects (streamlines, surfaces, points) are sent in serial through the computer graphics pipeline and ultimately superimposed to form a complete 3D visualization.
\subsection{Early Ideation and Sketching}
The concept of reusable plates that can be inked with data forms the foundation for the interface. When combined with data, each plate forms a ``data impression,'' a visual layer that becomes part of the overall composition.
Another high-level concept is the notion of data and visual art/design meeting
in the middle to create a visualization. Figure~\ref{fig:GUImockup} shows an
early design mockup exploring this concept. Notice the 3-panel design with the
``Data Sources'' on the left, the ``Design Library'' on the right, and the ``Composition'' panel in the middle. Adding data impressions to the composition
requires bringing together elements from both the left and the right. Most
traditional 3D data visualization pipelines start from the data and proceed in a
linear fashion through a pipeline until an image is rendered at the end. In an
effort to better support design that places a priority of visual decisions, this
interface makes it possible to start with the design, pulling colors, forms, and
textures into the composition before linking them with data.
The links between data and visuals could be made by drawing lines between input and output ports to form a pipeline in the style of Data Explorer~\cite{greg-ibm-data-explorer}, but we designed an alternative inspired by the block-based visual programming tools often used to teach programming~\cite{pasternak2017blockly,resnick2009scratch}. This puzzle piece metaphor has the great advantage of visually encoding the difficult-to-explain rules mentioned in the earlier discussion of ``why designing this interface is a hard problem.''
The puzzle connectors are designed to be abstract enough to work as interface elements, but specific enough to evoke the sense of the item they are representing.
For example, the point data connector looks like bubbles coming out of the puzzle piece, and the colormap connector looks like a painter's palette.
Also worth mentioning are the action icons located throughout the interface, which use the Material Design icon pack~\cite{materialDesign}, a popular choice among smartphone and web apps alike.
\subsection{Data Impressions}
\begin{figure*}[htbp]
\centering
\includegraphics[width=\textwidth]{TEXT_Sections/pics/figicons22.png}
\caption{(A and D): The iconography used in the interface, showing the types of Key Data, Data Variables, and Visual Elements available to the artist in the interface. (B): Example of a glyph plate ``Leafy Chlorophyll'' which is registered with the ``chlorophyll-points'' density-based point sampling and has the ``Temperature'' scalar data variable encoded with a colormap from the design palette, as well as a ``drum'' glyph. The dark gray slots indicate entries that have not yet been assigned by the artist and remain at the default values shown in brackets. Other types of plates (ribbons and surfaces) are also shown. (C): 20 examples of Visual Elements available to the artist in the interface, including Colormaps, Glyphs, Lines, and Textures.}
\label{fig:icons_and_plates}
\end{figure*}
The complete, current visual language of plates, data, and visual elements is shown in Figure~\ref{fig:icons_and_plates}. Let us describe how artists work with these building blocks to design a visualization composed of multiple data impressions.
There are several ways to create a data impression, but a typical approach starts with selecting a plate from the design palette in the right panel, also known as the ``studio closet.'' This collection of plates is expandable. Whenever a new data-driven rendering algorithm is added to ABR, we add a new plate type. The pattern on each plate is an example of the visual style it can create, but this is just one example. The plate will produce something different depending on how it is inked.
Since the rendering is 3D, the plate needs to know where its pattern should be
applied in space. In the underlying technical system, this information comes
from the ``data object'' discussed earlier, which is, in our experience, one of
the most challenging technical aspects to explain. Our interface addresses this
by, again, using metaphor. Printmakers are familiar with the need to align or
register a pattern, and they often use registration or ``key'' plates to
accomplish this. Thus, the interface presents data objects as ``key'' data.
Key data are necessary. There is no way to draw data variables without providing
some spatial registration, so every plate includes at least one puzzle-piece
slot for key data. Notice, also, that the slot types vary based on the different
icons for registering the plate's pattern to 3D locations in space (points), 3D
curves (lines), and forms (surfaces). Figure~\ref{fig:icons_and_plates} shows
just the plates available in our current implementation. Going forward, we
expect our team will always include computer graphics researchers and artists
working together to develop new plates and corresponding graphics algorithms.
A specific example of a ``Leafy Chlorophyll'' data impression designed by an artist is shown in Figure~\ref{fig:icons_and_plates}. As we see from the shape of the puzzle piece, this plate only works when registered to point-style key data. In the Gulf of Mexico Biogeochemistry dataset pictured in several figures of this paper, there are multiple options associated with this type of key data, including ``Chlorophyll Concentration'' and ``Nitrate Concentration.'' In this case, the artist has registered the plate to ``Chlorophyll Concentration.'' We often use ``Concentration'' to signify a density-based sampling, which means the pattern will be sparse in areas of low concentration and dense in areas of high concentration.
Beyond the spatial registration provided by key data, all of the plates we have created to date also include additional settings to further customize the data impression. The interface presents these in a collapsible list attached to the bottom of each plate. When a plate is placed in the center Composition Panel, the list automatically opens up to show all available options, as shown in the ``Leafy Chlorophyll'' example.
All of the settings in this list are optional, and we require all plates to have
reasonable defaults. So, as soon as the plate is registered with key data, a 3D
visual will appear in ABR, which is usually set up to run side-by-side on a
second monitor and/or in an attached VR/AR headset. This enables the artist to
explore the visual results in real-time 3D and react with visual changes.
Several settings have been adjusted in the ``Leafy Chlorophyll'' example by
attaching data variable puzzle pieces from the left data panel and visual design
puzzle pieces from the right design panel.
Returning to some of the technical challenges that must be overcome in the interface design, note the separation in this interface between key data and data variables. With the settings shown in Figure~\ref{fig:icons_and_plates}, the ``Leafy Chlorophyll'' data impression produces a visualization like the one in Figure~\ref{fig:ex1} {\em Temperature}. Making the design change described in that Before--After example is trivial. The artist must only replace the ``Salinity'' puzzle piece in the ``Color Variable'' slot with the ``Temperature'' puzzle piece from the scalar variables section of the data palette on the left panel of the interface.
Beyond this quick switch to a different data variable, it is also possible for the artist to decide that the way the pattern is distributed in space is not working. Perhaps the combination of leafy glyphs works perfectly with the background color scheme and it makes sense to include these somewhere in the composition, but the chlorophyll concentration data happens to be very unevenly distributed and the leafy glyphs are all clustering together in a way that is not very readable. It is possible in this case to keep all of the existing data and visual settings the same but swap in different key data, ``Nitrate Concentration,'' for example. This will have the effect of retaining all of the plate's visual style but re-registering the pattern to a different 3D spatial distribution.
Let us consider the other Before--After picture mentioned earlier. Recall that the design edit illustrated in Figure~\ref{fig:ex2} is a drastic change from the standpoint of the computer graphics algorithm that should be used, even though artists think of it as a visual change to the same set of lines. This example requires more manipulation of the interface to achieve, but all of the underlying complexity is hidden behind the metaphors and iconography. The {\em Ribbons} image was created using a textured ribbon plate where artists can provide several styles of texture, including one used here to give the ribbons their patterned edge. The ribbons are also colored according to data. To use a sculpted clay form to depict the lines instead, the artist must first notice that the ribbon plate does not include a slot for glyph key data--it is impossible to ink this plate with glyphs because the underlying software approach to the two styles of line is so different. The solution is to ``go back to the closet'' and find a different plate that is more suited to glyphs. Once they place this new plate in the composition panel, they can move all the important repeated elements (key data, data variables, color) from the original plate to the new one.
The composition panel holds all of the data impressions created for the visualization. Artists often reposition these within the panel to organize the space, and the panel itself can be panned if the artist has more layers than they have screen space. Drawing inspiration from digital image manipulation software, each data impression also includes buttons to hide, collapse, expand, or delete the impression.
When an artist saves their work, the placement of each layer in the composition panel persists between sessions.
\subsection{Importing and Editing Visual Assets}
Artists seldom work in a vacuum, and the interface embraces this concept by
enabling artists to incorporate any visual elements stored in the public
Sculpting Vis Library~\cite{SculptingVisLibrary} in a way that feels magical.
The Sculpting Vis library is simply loaded in one web browser window while the
design interface runs in another. Then, any of the glyphs, colormaps, and
textures in the library can be included simply by clicking on their thumbnail images in the
library window and dragging these into the design interface browser window.
This automatically adds them to the current working palette (right panel) and
triggers the connected instance of ABR to download the original raw 3D model
files, image data, etc. so that these elements may be used for 3D computer
graphics rendering.
The library itself is rapidly expanding and currently contains a selection of glyphs, colormaps, lines, and textures curated by artists in the Sculpting Vis Collective. Individuals may also upload new visual elements to the library via the ABR applets~\cite{johnson2019artifact}.
Rather than working to create generic, generalizable color maps, glyphs, and textures, one of the benefits of this design interface is that it makes it increasingly practical to create data-specific visualizations, that is, visualizations that include color maps, glyphs, and other elements that are fine-tuned for the particular data at hand. To this end, the interface includes a data-specific colormap editor inspired by the powerful ColorMoves tool~\cite{samsel2018colormoves}.
Double-clicking the color map's thumbnail icon in the interface launches the colormap editor, which shows the colormap on top of the data histogram for the variable that the colormap is attached to. This allows the artist to tune the colormap relative to the actual data values. The interface includes features to add, subtract, and adjust the colormap's control points.
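A minimal sketch of the underlying idea, assuming a piecewise-linear interpolation between control points (the actual editor offers richer controls, so this is illustrative only), is:
\begin{verbatim}
# Hypothetical sketch: a colormap as (position, RGB) control points,
# applied after normalizing values against the data range, so moving a
# control point redistributes hues relative to the data histogram.
def make_colormap(control_points):
    pts = sorted(control_points)                      # (t, (r, g, b)), t in [0, 1]
    def cmap(value, vmin, vmax):
        t = (value - vmin) / (vmax - vmin) if vmax > vmin else 0.0
        t = min(max(t, 0.0), 1.0)
        for (t0, c0), (t1, c1) in zip(pts, pts[1:]):
            if t0 <= t <= t1:
                w = (t - t0) / (t1 - t0) if t1 > t0 else 0.0
                return tuple(a + w * (b - a) for a, b in zip(c0, c1))
        return pts[-1][1]
    return cmap

# A brown/tan terrain map; dragging the middle point toward 0.8 would
# concentrate the tan hues in the upper end of the elevation data.
terrain = make_colormap([(0.0, (0.35, 0.25, 0.15)),
                         (0.5, (0.65, 0.50, 0.35)),
                         (1.0, (0.90, 0.85, 0.75))])
print(terrain(1200.0, vmin=0.0, vmax=3000.0))
\end{verbatim}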
\begin{figure*}[htbp]
\centering
\includegraphics[width=\textwidth]{TEXT_Sections/pics/deb.pdf}
\caption{Deborra Stewart-Pettengill works with master printmakers at Wingate Studio (A) to realize her chine-coll\'{e} designs. She commonly works with patterns, which have been digitized to form streamlines in the Gulf of Mexico visualization (B) and a texture on the land (C). Image (A) Copyright 2020 Wingate Studio; used with permission.}
\label{fig:artist1_gulf}
\end{figure*}
\subsection{Implementation}
To support a multitude of devices including desktop and laptop computers, tablets, and large-format touch screen displays, the interface is built for web browsers using a combination of Django, JavaScript, jQuery UI, and other libraries like Data Driven Documents~\cite{zhu2013d3}.
The puzzle piece connections are implemented with snapping so that a connection is made whenever a correct puzzle piece is dropped near, even if not precisely on, a valid slot. If the slot is already occupied with another piece, it is replaced, and if the piece does not match the slot, the connection is refused. In this way, the implementation makes it impossible to drop a ``Glyph'' visual element into a ``Colormap'' slot in a data impression, and it is impossible to drop a scalar variable into a vector variable slot.
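The validation behavior described above can be sketched with simple type tags on slots and pieces (the names below are hypothetical, not the implementation's actual identifiers):
\begin{verbatim}
# Hypothetical sketch: a dropped piece is either snapped in, swapped in
# for the current occupant, or refused when the types do not match.
SLOT_TYPES = {"key_data_points", "key_data_lines", "key_data_surfaces",
              "scalar_variable", "vector_variable",
              "colormap", "glyph", "line_texture", "surface_texture"}

class Slot:
    def __init__(self, accepts):
        assert accepts in SLOT_TYPES
        self.accepts = accepts
        self.piece = None           # currently connected puzzle piece

    def drop(self, piece_type, piece):
        if piece_type != self.accepts:
            return False            # connection refused: wrong shape
        self.piece = piece          # snap in, replacing any occupant
        return True

color_slot = Slot("colormap")
print(color_slot.drop("glyph", "drum"))          # False: glyph into colormap slot
print(color_slot.drop("colormap", "peach-tan"))  # True: snaps into place
\end{verbatim}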
To support diverse applications, the system is implemented using a modular structure, where the web-based design interface, the ABR 3D rendering engine, and the data engine are three distinct sub-systems that connect to each other using network sockets.
This means, for example, that the data can be hosted on a supercomputer, the graphics can be rendered on a machine with a powerful graphics card, and a designer can craft a visualization on a tablet while interactively monitoring their design modifications on a laptop with a remote viewer.
This opens up possibilities for more artists to become involved in the visualization design process by drastically reducing the hardware requirements for building visualizations of large scientific datasets.
We have extended ParaView~\cite{ahrens2005paraview} to act as a data server within this framework. To create the figures shown in this paper, we used ParaView to prepare datasets from two supercomputer climate simulations actively studied by our collaborators: Biogeochemistry in the Gulf of Mexico~\cite{wolfram2015diagnosing,Ringler_ea13om,Petersen_ea15om} and Sea Ice Climate data~\cite{petersen2019antarctic}. The design interface does not support data wrangling itself; filtering data, deriving new data variables, and similar operations can be done in ParaView before or during a design session. This is a limitation in the sense that we do not expect artists to have the knowledge to do this level of data wrangling. However, after data have been loaded into ParaView once and saved, the design interface makes reloading these data for design work simple through its Load and Save functionality.
The three modules are connected as follows: The geometric representation of the
data is sent from the data server (ParaView) to the rendering engine (ABR),
which pre-processes and optimizes the data for rendering. The ABR engine
connects to a Python server which delivers the interface to the artist. As the
artist makes changes to the visual styles on the interface, messages are sent
via WebSocket to the engine, which updates the encodings that are rendered
accordingly. Graphics from the engine can optionally be rendered to a depth
texture~\cite{luke2002semotus}, and sent to any connected remote viewers. VR
headsets can also be connected directly to the engine, which renders the
visualization graphics at 120+ frames per second. The scale of the
visualizations in VR is initialized to a table-scale default, but can be
adjusted via bimanual interactions from the user. Graphics can also be easily
exported as PNG images, which is useful for artists using an ABR-created
visualization as a part of a transmedia piece.
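As a rough illustration of this message flow, the sketch below shows how a style change might be pushed from the design interface to the engine over a WebSocket. The endpoint and message schema shown here are assumptions for illustration only; they are not the actual ABR protocol.
\begin{verbatim}
# Hypothetical sketch of a design-interface update sent over a WebSocket.
# The URI and JSON schema are illustrative assumptions, not the ABR protocol.
import asyncio
import json
import websockets  # third-party package: pip install websockets

async def send_style_update(uri, impression_id, slot, value):
    async with websockets.connect(uri) as ws:
        message = {
            "type": "update_encoding",
            "impression": impression_id,  # which data impression to modify
            "slot": slot,                 # e.g. "colormap" or "glyph"
            "value": value,               # identifier of the visual element
        }
        await ws.send(json.dumps(message))
        return await ws.recv()            # engine acknowledgment, if any

# Example (assumed endpoint):
# asyncio.run(send_style_update("ws://localhost:8000/abr",
#                               "gulf-temperature", "colormap", "warm-blue"))
\end{verbatim}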
\section{}
A finite graph $G$ is called an $h$-{\it expander} for
$$
h = \inf_{S \subset V_G : \mbox{ } 0 < |S| < |G|/2} {|\partial S|
\over |S|},
$$
where $V_{G}$ are the vertices of $G$ and $\partial S$ is the outer vertex boundary of $S$.
Let $G$ be a graph with degrees bounded from above by $d$ and expansion $h$.
\begin{conjecture} There is a function $f(h, d) >0$ so that $G$ contains a spanning subgraph which is an $f(h,d)$-expander with girth proportional to the diameter, where the proportionality ratio between girth and diameter is bounded below by a strictly positive function of the degree bound and the expansion of $G$ only.
\end{conjecture}
As a first step, can one show the existence of a spanning expander whose girth goes to infinity with the size of $G$?
\medskip
Note that we do not require the spanning high-girth sub-expander to be regular.
\medskip
We do not know the conjecture even under the further assumption that the expanders are Cayley graphs.
\medskip
Are there approachable high dimensional formulations of the conjecture for the several variants of high dimensional expanders \cite{L}?
\medskip
For the standard definitions and background see \cite{HLW}.
\section{Motivation and examples}
\begin{enumerate}
\item
In \cite{BS} it was shown that infinite graphs with expansion greater than or equal to $1$ admit a spanning tree with expansion greater than or equal to $1$. This is an infinite variant of the conjecture for sufficiently large expansion. For finite expanders with expansion at least $1$, it implies that there are balls of order diameter with expanding spanning trees. Maybe using the Lov\'asz local lemma one can glue these to get a spanning subexpander?
\medskip
\item
One could try a random construction, trimming short cycles.
Under some further assumptions, Bernoulli percolation already gives a giant component which is rather tree-like locally.
By adapting Theorem 4 of \cite{pyond} one can conclude that if $p$ is such that
$$
\rho(G)d_{G}p < 1,
$$
where $\rho(G)$ is the spectral radius and $d_{G}$ the degree of $G$, and $p$-Bernoulli bond percolation admits a giant component, then there will be a unique giant component (\cite{abs})
which locally, up to order diameter, has the structure of an infinite percolation cluster in the non-uniqueness regime, admitting dense trifurcation vertices. For any expander $G$, if one takes a sufficiently large power of $G$ the percolation requirements will hold \cite{PS}.
\medskip
Show that if $G^2$ satisfies the conjecture, then so does $G$.
For trimming of percolation clusters to get a subgraph with positive expansion in the context of infinite vertex transitive graphs see \cite{BLS}.
\medskip
\item
One specific challenge is products of
graphs $G \times G$, where $G$
is an expander, e.g. a random regular graph or a Ramanujan graph.
\medskip
As a very partial positive indication note that
${\mathrm{SL}}(m,\mathbb Z/p^n\mathbb Z)\times {\mathrm{SL}}(m,\mathbb Z/p^n\mathbb Z)$, $m>1$, where $p$ is prime and $n$ goes to infinity, admits Cayley graphs with girth going to infinity.
Using Corollary 1.4 from \cite{BG} we find a dense, free, finitely generated subgroup $\Lambda \leq {\mathrm{SL}}(m,\mathbb Z_p)\times {\mathrm{SL}}(m,\mathbb Z_p)$. Let $S\subset \Lambda$ be a finite set that freely generates $\Lambda$.
Since $\Lambda$ is dense, for each $n$ the image of $S$ generates ${\mathrm{SL}}(m,\mathbb Z/p^n\mathbb Z)\times {\mathrm{SL}}(m,\mathbb Z/p^n\mathbb Z)$. Consider the sequence of Cayley graphs
$$
Cay({\mathrm{SL}}(m,\mathbb Z/p^n\mathbb Z)\times {\mathrm{SL}}(m,\mathbb Z/p^n\mathbb Z), S).
$$
The girth of this sequence tends to infinity, as otherwise there would be a finite length relation on elements of $S$ that holds modulo $p^n$ for every $n$. But then the same relation would have to hold in the inverse limit, which is ${\mathrm{SL}}(m,\mathbb Z_p)\times {\mathrm{SL}}(m,\mathbb Z_p)$. This contradicts the freeness.
\medskip
\item Let $S\subset {\mathrm{SL}}(2,\mathbb Z)$ be a finite symmetric set generating a Zariski dense subgroup. By the work of Bourgain and Gamburd \cite{BoGa} it is known that the sequence of graphs $G_p:={\mathrm{Cay}}({\mathrm{SL}}(2,\mathbb Z/p\mathbb Z), S)$ is an expander sequence as $p$ varies among sufficiently big primes. Let $\Gamma$ be the subgroup generated by $S$. If $S$ is free then the argument from the previous example shows that the graphs $G_p$ have large girth. The conjecture predicts that even if that is not the case, we should be able to find spanning subgraphs of $G_p$ of large girth which are still good expanders. In this example we can do something slightly weaker. We can find a sequence of expanders $H_p$ on the same vertex set as $G_p$, of girth proportional to $\log p$, such that every pair of vertices in $H_p$ connected by an edge is of distance at most $O(1)$ in $G_p$. To find such graphs take a finite power of $S$ that contains two elements, say $a$ and $b$, that generate a free Zariski dense subgroup. This is always possible by the Tits alternative. Put $H_p:={\mathrm{Cay}}({\mathrm{SL}}(2,\mathbb Z/p\mathbb Z), \{a,b,a^{-1},b^{-1}\}).$ By \cite{BoGa} $H_p$ is an expander sequence. Since $\langle a,b\rangle$ is free, the girth must go to infinity. Indeed, any relation on $a,b$ that holds modulo infinitely many primes would have to hold in ${\mathrm{SL}}(2,\mathbb Z)$, which contradicts the freeness.
\item
Let us end by recalling an old conjecture: there is no sequence of finite bounded-degree graphs growing in size to infinity such that all the induced balls in all the graphs in the sequence are uniform expanders.
For a related progress see \cite{FV}.
\end{enumerate}
\section{INTRODUCTION}
Physical experiments are frequently expensive and impractical to perform. The growth in computing power during modern times offers an alternative to carrying out such experiments in the real world. Due to the high cost of resources that come with conducting physical experiments, less expensive computer simulators are used to represent phenomena, such as dynamic traffic patterns of a metropolitan intersection, hydrological behaviors of an ecosystem, the spread behaviour of a wildfire, formation patterns of galaxies, and so on. The applications of computer simulation models span a variety of sectors including ecology, medicine, engineering, industrial experiments, nuclear research, manufacturing, climatology, and astronomy. {Complex computer experiments, although much less expensive than physical experiments, are often still computationally expensive, and cost-efficient surrogate simulators are called for.}
In this paper we focus on the calibration of expensive-to-evaluate dynamic computer simulators. That is, the simulator outputs are time-series and the objective is to solve the inverse problem (also sometimes called the calibration problem), which attempts to find the input parameters of the simulator that generate either an exact or a close approximation of a pre-specified target. The application that motivated this study comes from a rainfall-runoff measurement model, called the Matlab-Simulink model, which predicts the rate of runoff and sediment yield (Duncan et al., 2013). Here, the objective is to find the input parameters of the Matlab-Simulink model that generate simulator outputs as close as possible to the real data collected from a watershed at the Bioconversion center at the University of Georgia, Athens, USA.
Since computer experiments tend to be complex and have high dimensional inputs, direct attempt to solve the inverse problem using simulator outputs alone is infeasible. Thus, less computationally expensive surrogates such as the Gaussian Process models are fitted using training data of $n$ observations $(x_1, y_1), \dots, (x_n, y_n)$ to emulate computer models. During the past few years, the inverse problem for expensive to evaluate complex computer models with \emph{scalar-valued responses} has been given extensive focus (e.g. Bingham et al., 2014; Picheny et al., 2013; Ranjan et al., 2008). In contrast, approaches to solve the calibration problem for \emph{dynamic computer models} have been less studied. Vernon et al. (2010) selected a handful of time-points from the target response series (called discretization point set (DPS)) and then developed a batch-sequential approach called the history matching (HM) to simultaneously solve multiple scalar-valued inverse problems which would approximate the underlying dynamic inverse problem, but the method required too many simulator runs. Recently, Bhattacharjee et al. (2019) proposed a modification which required fewer simulator runs for the calibration of hydrological simulators. Ranjan et al. (2016) suggested a scalarization approach to efficiently minimize $\|g(x)-g_0\|$ via the expected improvement based sequential strategy developed by Jones et al. (1998), where $g(x)=\{g(x,t_j), j=1,2,...,L\}$ is the simulator output for the input $x$ and $g_0=\{g_0(t_j), j=1,2,...,L\}$ is the target response. Zhang et al. (2019) used a singular value decomposition based Gaussian process model to fit a surrogate for the dynamic simulator and then generalized the expected improvement approach of Jones et al. (1998) for minimizing $\|g(x)-g_0\|$.
Both Vernon et al. (2010) and Bhattacharjee et al. (2019) used an ad hoc method (or a subjective expert opinion) for choosing the DPS, and then used an implausibility criterion (similar in spirit to the improvement function) to sort through the input space and find the inverse solution. In this paper, we propose using cubic spline smoothing of the target series to systematically construct the DPS as the optimal knots identified in a sequential manner inspired by the forward variable selection method. This method allows us to identify both the size and positions of the DPS and divide the predetermined budget accordingly. Subsequently, we recommend solving the scalar-valued inverse problems iteratively using the contour estimation method developed by Ranjan et al. (2008). Finally, the inverse solution of the underlying dynamic simulator is obtained by taking the intersection of the solutions from the scalar-valued inverse problems.
The remaining sections are outlined as follows. Section 2 reviews the concepts integral to the proposed method and review the competing modified history matching (HM) approach proposed by Bhattacharjee et al. (2019). Section 3 provides the elements of the proposed multiple scalar contour estimation (MSCE) method along with thorough implementation details of the key steps. In Section 4, we establish the superiority of the proposed method via three test functions based simulator. Section~5 discusses the real motivating Matlab-Simulink application. We provide some concluding remarks in Section 6.
\section{REVIEW OF EXISTING METHODOLOGY}
In this section, the existing methods that set precedence for this paper are reviewed. We examine the use of a GP model as a surrogate to a deterministic simulator and the use of the expected improvement criterion for choosing the follow-up trials in the sequential design framework. We also review the modified history matching approach of Bhattacharjee et al. (2019) and the use of a discretization-point-set (DPS) to approximate the inverse solution for dynamic computer simulators.
\subsection{Gaussian Process Models}
Deterministic computer model simulators of complex phenomena are often computationally expensive to evaluate, and hence the emulation via a statistical surrogate becomes much more practical. A useful surrogate for this purpose is the Gaussian Process (GP) model (Sacks et al., 1989). For a set of input-output combinations, a stationary GP model, called the ordinary Kriging, assumes:
\begin{eqnarray*}
y(x_i) = \mu + Z(x_i), \hspace{0.5in} i = 1, \dots, n,
\end{eqnarray*}
where $\mu$ is the mean and $Z(x_i)$ is a Gaussian Process with $E(Z(x_i)) = 0$ and a covariance structure of $Cov(Z(x_i), Z(x_j)) = \sigma^2R(\theta; x_i, x_j)$. There are several popular choices of $R(\cdot,\cdot)$. The power-exponential correlation structure will have the $(i,j)^{\mbox{th}}$ term $R_{ij}(\theta)$ as:
\begin{eqnarray}\label{eq:1}
R_{ij}(\theta) = \prod_{k = 1}^{d} \exp\Bigg\{-\theta_k \mid x_{ik} - x_{jk} \mid ^{p_k} \Bigg\} \hspace{0.2in} \mbox{for all } i,j,
\end{eqnarray}
where $0 < p_k \leq 2$ are smoothness parameters and $\theta_k$ measure correlation strength. In this paper, we assume power exponential correlation with $p_k = 1.95$ (for numerical stability and smoothness). The best linear unbiased predictor for the response at any unsampled point $x^*$ is given by:
\begin{eqnarray*}
\hat{y}(x^*) = \hat{\mu} + r(x^*)^T R_n^{-1}(y-1_n \hat{\mu}),
\end{eqnarray*}
where $r(x^*) = \Big[ \mbox{corr} (z(x^*), z(x_1)), \dots, \mbox{corr} (z(x^*), z(x_n))\Big]^{\tt T}$, $R_n$ is the $n\times n$ correlation matrix with elements $R_{ij}$ (as seen in Equation (\ref{eq:1})), and the prediction uncertainty is quantified by
\begin{eqnarray}\label{eq:2}
s^2(x^*) = \hat{\sigma}^2 \bigg(1-r(x^*)^T R_n^{-1}r(x^*) \bigg).
\end{eqnarray}
The flexibility of the correlation structure makes the GP model a popular surrogate for complex computer models. Throughout this paper, the \textit{R} package {\tt GPfit} (MacDonald et al., 2015) is used to fit GP models.
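For concreteness, the following Python sketch implements the ordinary kriging equations above for given correlation parameters $\theta$; it is only a minimal illustration (the {\tt GPfit} package additionally estimates $\theta$ by maximum likelihood and uses a more careful parameterization and nugget handling).
\begin{verbatim}
import numpy as np

def corr_matrix(X1, X2, theta, p=1.95):
    """Power-exponential correlation between two sets of inputs (rows)."""
    theta = np.asarray(theta, dtype=float)
    D = np.abs(X1[:, None, :] - X2[None, :, :]) ** p         # (n1, n2, d)
    return np.exp(-np.tensordot(D, theta, axes=([2], [0])))  # (n1, n2)

def gp_predict(X, y, xstar, theta, nugget=1e-8):
    """Ordinary kriging predictor y_hat(x*) and variance s^2(x*)."""
    n = len(y)
    R = corr_matrix(X, X, theta) + nugget * np.eye(n)  # small nugget for stability
    Rinv = np.linalg.inv(R)
    one = np.ones(n)
    mu = (one @ Rinv @ y) / (one @ Rinv @ one)         # estimated mean
    sigma2 = (y - mu) @ Rinv @ (y - mu) / n            # estimated process variance
    r = corr_matrix(np.atleast_2d(xstar), X, theta).ravel()
    y_hat = mu + r @ Rinv @ (y - mu)
    s2 = sigma2 * max(1.0 - r @ Rinv @ r, 0.0)
    return y_hat, s2
\end{verbatim}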
\subsection{Sequential Design}
For finding optimal solutions of an inverse problem for computationally intensive computer simulators, sequential designs have been proven to outperform the competing popular approaches, e.g., Ranjan et al. (2008), Ranjan et al. (2016), and Zhang et al. (2019). The basic framework remains the same as in the global optimization method developed by Jones et al. (1998), which consists of two components: finding a good initial design and then sequentially choosing the follow-up trial locations for simulation runs.
In computer experiments, space-filling designs are popular choices for the initial design. Suppose we have $d$ input variables; a space-filling design, such as a maximum projection Latin hypercube design (Joseph et al., 2015), is used to create a training input set of initial size $n_{0}$ from the scaled input space $[0,1]^{d}$. The corresponding responses are generated by evaluating the simulator at each input of the training set. A surrogate model is then fitted to the simulator responses and the inputs. Then, a sequential design criterion such as expected improvement (EI) is evaluated using the GP model over the entire input space to find the input, $x_{new}$, that leads to the greatest expected improvement (see Jones et al. (1998) and Bingham et al. (2014) for details). Practically, we evaluate EI over a dense, randomly generated space-filling test set of large size $M$ in $[0,1]^{d}$. The point $x_{new}$ and the corresponding true simulator response are augmented to the training set. The surrogate (GP model) is refitted to the new training set. This iterative process, of optimizing EI to choose $x_{new}$ and refitting the surrogate to the augmented data, is repeated until the total budget of $N$ points is exhausted. The optimal inverse solution is then extracted from the final fit.
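A skeleton of this loop is sketched below in Python. The surrogate fit and the EI criterion are passed in as plug-ins (for example, a GP fit as in Section~2.1 and the contour EI of Section~2.3), and a plain Latin hypercube from {\tt scipy} stands in for the maximum projection design; this is a schematic outline rather than the exact implementation used in the paper.
\begin{verbatim}
import numpy as np
from scipy.stats import qmc  # Latin hypercube sampling (scipy >= 1.7)

def sequential_design(simulator, d, n0, N, fit_surrogate,
                      expected_improvement, M=5000, seed=1):
    """Generic EI-based sequential design loop with plug-in surrogate and EI."""
    X = qmc.LatinHypercube(d=d, seed=seed).random(n0)  # initial design in [0,1]^d
    y = np.array([simulator(x) for x in X])
    while len(y) < N:
        fit = fit_surrogate(X, y)                      # e.g. a GP model
        Xtest = qmc.LatinHypercube(d=d).random(M)      # dense candidate set
        ei = expected_improvement(fit, Xtest)
        x_new = Xtest[np.argmax(ei)]                   # follow-up point
        X = np.vstack([X, x_new])
        y = np.append(y, simulator(x_new))             # one new simulator run
    return X, y
\end{verbatim}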
\subsection{Expected Improvement Criterion for Contour Estimation}
Jones et al. (1998) proposed the first EI criterion for global optimization for scalar-valued simulators. In the same spirit, Ranjan et al. (2008) developed an EI criterion for estimating the inputs that lead to a pre-specified scalar target response. The corresponding improvement function is given by
\begin{equation*}\label{Eq.EI_contour1}
I(x^{*}) = \epsilon^{2}(x^{*}) - \mbox{min}\Big[ \{ y(x^{*}) - a\}^{2}, \epsilon^{2}(x^{*}) \Big].
\end{equation*}
where $\epsilon(x^{*}) = \alpha s(x^{*})$ for a positive constant $\alpha$ (e.g., $\alpha = 0.67$, corresponding to 50\% confidence under approximate normality), $s(x^*)$ is defined in (\ref{eq:2}), and $a$ is the pre-specified target response. Hence, the EI value (which is simply the expected value of the improvement function) is:
\begin{eqnarray}\label{Eq.EI_contour2}
E[I(x^{*})] =& \Big[\epsilon^{2}(x^{*}) - \{ \hat{y}(x^{*}) - a \}^{2} \Big] \Big\{\Phi(u_{2}) - \Phi(u_{1})\Big\} \nonumber \\
&+ s^{2}(x^{*})\Big[ \{u_{2} \phi(u_{2}) - u_{1} \phi(u_{1})\} - \{\Phi(u_{2}) - \Phi(u_{1})\}\Big] \nonumber \\
&+ 2\Big\{ \hat{y}(x^{*}) - a \Big\}s(x^{*}) \Big\{\phi(u_{2}) - \phi(u_{1})\Big\},
\end{eqnarray}
where $u_{1} = [a - \hat{y}(x^{*}) - \epsilon(x^{*})] / s(x^{*})$, and $u_{2} = [a - \hat{y}(x^{*}) + \epsilon(x^{*})] / s(x^{*})$, and $\Phi$, $\phi$ are the cumulative distribution function and probability density function of the standard normal distribution for input $x^{*}$.
The EI based selection strategy for follow-up trials became extremely popular because different components of EI focus on two separate aspects: (1) exploration of the input space, which prevents getting stuck at a local optimum, and (2) exploitation of the areas of interest for a more precise estimate of the objective.
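The EI criterion in equation~(\ref{Eq.EI_contour2}) is straightforward to evaluate; a minimal Python transcription (vectorized over candidate points) is given below as an illustration.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def ei_contour(y_hat, s, a, alpha=0.67):
    """Expected improvement for contour estimation at target level a.

    y_hat, s : GP predictions and standard errors at the candidate points
    """
    y_hat, s = np.asarray(y_hat, float), np.asarray(s, float)
    eps = alpha * s
    u1 = (a - y_hat - eps) / s
    u2 = (a - y_hat + eps) / s
    dPhi = norm.cdf(u2) - norm.cdf(u1)
    t1 = (eps ** 2 - (y_hat - a) ** 2) * dPhi
    t2 = s ** 2 * ((u2 * norm.pdf(u2) - u1 * norm.pdf(u1)) - dPhi)
    t3 = 2.0 * (y_hat - a) * s * (norm.pdf(u2) - norm.pdf(u1))
    return t1 + t2 + t3
\end{verbatim}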
\subsection{Modified History Matching for the Inverse Problem} \label{hm_via_sd}
Recall that $g(x) = \{g(x, t_j), j = 1, \dots, L\}$ denotes the time-series valued simulator response with the input scaled in $[0,1]^d$. The aim of the inverse problem (or calibration problem) is to find $x$ such that $g(x)$ resembles the target response $g_0 = \{g_0(t_j), j = 1, \dots, L\}$. To this end, a history matching approach was first proposed by Vernon et al. (2010) and was subsequently modified by Bhattacharjee et al. (2019) to solve the inverse problem for dynamic simulators. One of the key modifications in Bhattacharjee et al. (2019) was to use a small space-filling Maximin Latin hypercube design (Johnson et al., 1990) as compared to a large Latin hypercube design for fitting the initial GP surrogate.
Although the simulator output is a time-series of length $L$, the history matching approach selects a handful of time-points, which is referred to as the discretization-point-set (DPS) and has size $k\ll L$. The approach uses the simulator output at only the DPS time-points $(t_1^*, t_2^*, \dots, t_{k}^*)$ to solve the underlying inverse problem. That is, the reduced objective is to find $x$ such that $g(x, t_j^*) = g_0(t_j^*)$ for all $j = 1, \dots, k$. As a result, it is important for the DPS to be representative of the simulator response.
The history matching uses a multi-stage design approach to augment the training set. At each stage, a criterion called the implausibility function is evaluated over a large test set from the input space ($\chi$), and then the design points are said to be implausible if the criterion value exceeds a certain pre-determined cutoff. Subsequently, the plausible points, $\{ x \in \chi : IM_{max}(x) \leq c\}$, are augmented to the training set. For each $j = 1, \dots, k$, the implausibility criterion is defined as
\begin{equation*}
IM_{j}(x) = \frac{\mid\hat{g}(x, t_j^*) - g_0(t_j^*)\mid}{s_{t_j^*}(x)},
\end{equation*}
where $\hat{g}(x, t_j^*)$ is the predicted response derived from the GP surrogate corresponding to the simulator response at time point $t_j^*$ and $s_{t_j^*}(x)$ is the associated uncertainty. Design points are deemed implausible if $IM_{max}(x) > c$, where $c$ is the pre-determined cutoff chosen in an ad hoc manner and
\begin{eqnarray*}
IM_{max}(x) = \mbox{max}\{IM_{1}(x), IM_{2}(x), \dots, IM_{k}(x)\}.
\end{eqnarray*}
Following each stage of training data augmentation, the GP surrogate is updated. This iterative process of updating the surrogate fit and selecting the plausible points continues until no additional points are to be added from the test set. At the end of the procedure, the best approximate inverse solution is extracted from the training set or from the final GP surrogate.
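The implausibility screening itself is a simple computation once the $k$ surrogates are available; a schematic Python version is shown below.
\begin{verbatim}
import numpy as np

def implausibility(pred_mean, pred_sd, target):
    """IM_j(x) for every candidate x and every DPS time point j.

    pred_mean, pred_sd : arrays of shape (n_candidates, k) from the k surrogates
    target             : length-k vector of targets g_0(t_j^*)
    """
    return np.abs(pred_mean - target) / pred_sd

def plausible_points(X, pred_mean, pred_sd, target, c=3.0):
    """Keep candidates whose maximum implausibility does not exceed the cutoff c."""
    im_max = implausibility(pred_mean, pred_sd, target).max(axis=1)
    return X[im_max <= c]
\end{verbatim}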
\section{Proposed Methodology}
We propose to solve this dynamic computer simulator inverse problem via multiple scalar-valued inverse problems under a limited budget constraint. Similar to the (modified) history matching method, we discretize the simulator response at a DPS of size $k (\ll L)$ that aims to capture the important features of the time-series response. Subsequently, we solve the $k$ scalar-valued inverse problems using the contour estimation method developed by Ranjan et al. (2008) discussed in Sections~2.2 and 2.3. Finally, the intersection of these $k$ sets of inverse solutions is taken as the optimal solution for the original inverse problem for dynamic computer simulator.
There are several parts of the proposed methodology that require detailed discussion. First, we present the construction of the DPS. Both the history matching approach by Vernon et al. (2010) and the modified history matching technique by Bhattacharjee et al. (2019) choose the DPS rather subjectively, and do not follow any systematic algorithm. Here we propose an algorithmic approach for choosing the DPS.
We propose fitting cubic splines to the target response series and then using the set of knot locations as the DPS. Undoubtedly, finding the optimal number and locations of knots is a challenging problem in spline regression. We suggest an iterative but greedy approach for constructing the DPS. The idea is similar to the construction of a regression tree, where the split-points are essentially the knot locations. That is, we start with no knots, and find the best location for the first knot by minimizing the overall mean squared error of the spline regression fit. The optimal location for the second knot is found by fixing the first knot location. Continuing further in this manner, the search for the optimal location of the $j$-th knot assumes that the optimal locations of the previous $j-1$ knots are known. Finally, the optimal number of knots is found using something similar to a scree plot, where we plot the mean squared error against the number of knots and identify the elbow of the plot. For implementation, the R package {\tt splines} is called upon for this purpose, while the command {\tt bs()} is used for constructing B-spline basis functions in the linear model environment.
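The greedy search is easy to implement. The paper's implementation relies on {\tt bs()} within R's linear model framework; the Python sketch below is a rough analogue that fits a least-squares cubic spline with fixed interior knots, intended only to illustrate the forward-selection idea rather than to reproduce the exact R code.
\begin{verbatim}
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def greedy_knots(t, g0, max_knots=10):
    """Greedy forward selection of cubic-spline knots minimizing the MSE.

    t, g0 : time grid and target series; returns knot indices and the MSE path.
    """
    knots, mse_path = [], []
    candidates = range(2, len(t) - 2)      # keep knots away from the endpoints
    for _ in range(max_knots):
        best = None
        for j in candidates:
            if j in knots:
                continue
            interior = np.sort(t[np.array(knots + [j], dtype=int)])
            try:
                spl = LSQUnivariateSpline(t, g0, interior, k=3)
            except ValueError:             # invalid knot configuration, skip
                continue
            mse = np.mean((spl(t) - g0) ** 2)
            if best is None or mse < best[1]:
                best = (j, mse)
        if best is None:
            break
        knots.append(best[0])
        mse_path.append(best[1])
    return knots, mse_path
\end{verbatim}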
We quickly illustrate the details by applying it to a test function. Suppose the simulator outputs are generated via Easom function (Michalewicz, 1996), defined as,
\begin{equation*}
g(x,t_j) = \cos(x_1)\cos(x_2) \exp\big\{-(x_1-\pi t_j)^2-(x_2-\pi)^2\big\},
\end{equation*}
where $t_j$ are $L$ equidistant time points in $[0, 1]$ for $j = 1, \dots, L = 200$, and the input space is scaled to $(x_1, x_2) \in [0, 1]^2$. We select the target response $g_0$ corresponding to the input set $x_0 = (0.8, 0.2)$. Here the objective is to solve the inverse problem by calibrating $x=(x_1,x_2)$ such that $g(x,t_j) \approx g_0(t_j)$ for all $j$.
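For reference, this test simulator and the target series can be coded directly from the displayed equation.
\begin{verbatim}
import numpy as np

t_grid = np.linspace(0.0, 1.0, 200)   # L = 200 equidistant time points

def easom_series(x, t=t_grid):
    """Modified Easom test simulator g(x, t) on the full time grid."""
    x1, x2 = x
    return (np.cos(x1) * np.cos(x2)
            * np.exp(-(x1 - np.pi * t) ** 2 - (x2 - np.pi) ** 2))

g0 = easom_series((0.8, 0.2))         # target response for the inverse problem
\end{verbatim}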
The first element of the DPS is obtained by minimizing the mean squared error of the cubic spline fit to the target response, with each of the 200 possible time points tried as the sole knot. We found the optimal first knot at time point $t_1^*=145$. Keeping the knot at time point $t_1^*=145$ fixed, we repeated the process and found the second knot at time point $t_2^*=37$. The process continued, and the locations of the first ten optimal knots are $\{145, 37, 132, 47, 120, 55, 113, 63, 104, 174\}$ (see Figure~1).
\begin{figure}[h!]\centering
\includegraphics[scale=0.75]{easom_full_knots.pdf}
\caption{Easom function: The vertical dashed lines depict the ordered positioning of optimal knots for fitting cubic splines to the time-series.}
\end{figure}
In Figure~1, we have added 10 knots; however, in reality, the required number of knots may be different. Following the idea of the scree plot from principal component analysis, we look at the mean squared error (MSE) versus the number of knots, and try to find an ``elbow'' in the plot to determine the optimal number of knots. The idea is to find the point in the ``MSE vs.\ the number of knots'' function where the second derivative reaches a positive value. This allows for a good fit while maintaining the efficiency of the knots used. Figure~2 shows the corresponding ``MSE vs.\ the number of knots'' plot for the Easom function. In this case, the elbow cutoff is $3$. That is, the required discretization-point-set (DPS) for this time series response would be $\{145, 37, 132\}$.
\begin{figure}[h!]\centering
\includegraphics[scale=0.75]{sequential_easom_mse.pdf}
\caption{Easom function: ``Mean squared error versus the number of knots'' for 10 knots for spline regression, added sequentially one at a time.}
\end{figure}
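One simple way to operationalize this elbow rule, sketched below, is to scan the discrete second differences of the MSE sequence returned by the knot search; the exact offset convention is a choice made here for illustration.
\begin{verbatim}
import numpy as np

def elbow_cutoff(mse_path):
    """Number of knots at the 'elbow' of the MSE-vs-number-of-knots curve.

    Assumes mse_path[j] is the MSE with j+1 knots; returns the knot count at
    the center of the first positive second difference (a heuristic choice).
    """
    mse = np.asarray(mse_path, dtype=float)
    second_diff = np.diff(mse, n=2)            # discrete second derivative
    idx = np.flatnonzero(second_diff > 0)
    return int(idx[0]) + 2 if idx.size else len(mse)
\end{verbatim}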
After finding a reasonable DPS, we sequentially solve $k$ scalar-valued inverse problems via the contour estimation approach developed by Ranjan et al. (2008). Suppose our budget for simulator runs is $N$. The process starts by choosing an initial design of size $n_0 (<N)$ from the input space $[0,1]^d$. We use a space-filling, maximum projection Latin hypercube design (Joseph et al., 2015) for the initial design. The remainder of the budget, $N-n_0$, is equally distributed for estimating the $k$ scalar-valued inverse solutions. That is, the first inverse problem would estimate $S_1(x) = \{x: g(x,t_1^*)=g_0(t_1^*)\}$ using the $n_0$-point initial design and $(N-n_0)/k$ follow-up trials chosen sequentially by maximizing the EI criterion based on the GP surrogate, given by equation~(\ref{Eq.EI_contour2}) in Section~2.3. The augmented data are then treated as the initial training set for the second scalar-valued inverse problem. Thus, one would estimate $S_j(x) = \{x: g(x,t_j^*)=g_0(t_j^*)\}$ using the initial training set of size $n_0 + (j-1)(N-n_0)/k$, obtained after solving the previous $j-1$ scalar-valued inverse problems, and $(N-n_0)/k$ sequential trials via EI optimization.
For the Easom function, since the DPS is of size three, we need to solve three scalar-valued inverse problems. We set a total training size budget of $N=50$ points and an initial design of size $n_0 = 15$. The budget of follow-up points, $N - n_0 = 35$, is divided approximately evenly for the three inverse problems. When computing the EI criterion, we set $\alpha = 0.67$, which corresponds to a $50\%$ confidence interval under normality. Furthermore, since the input space is only the two-dimensional unit square, we use $5000$-point random Latin hypercube designs for maximizing the EI criterion when sequentially adding follow-up trials. Figure~3 shows the three estimated contours along with selected follow-up points corresponding to $t_1^*=145$ (in red), $t_2^*=37$ (in green) and $t_3^* = 132$ (in blue).
\begin{figure}[h!]\centering
\includegraphics[scale=0.7]{easom_seq_design.pdf}
\caption{Easom function: Training data is depicted by dots and the estimated contours are shown by solid curves. The black dots correspond to the initial design, whereas red, green, and blue dots represent the follow-up points obtained via EI optimization for the three scalar-valued contour estimation at $DPS =(145,37,132)$, respectively.}
\end{figure}
From Figure~3, it is clear that for the first contour estimation, more follow-up points focus on global exploration for better overall understanding of the process as compared to local search for accuracy enhancement of the contour estimate.
The intersection of the $k$ scalar-valued inverse problem solutions at the time points of the discretization-point-set (DPS) is used as the inverse solution of the underlying dynamic simulator. For the Easom function example, it is clear from Figure~3 that the three contours intersect at a common point, which is the true inverse solution $x_0=(0.8, 0.2)$. To implement this, the final GP surrogate fitted after the last iteration is used to extract $x_{opt}$. Instead of requiring an exact match, we accept the inverse solutions as $S_i = \{x^{*}: |\hat{g}(x^*, t_{i}^*) - g_{0}(t_i^*)| < \epsilon\}$, for each time point $t_i^*$ in the DPS. This accounts for the round-off errors and other approximations made during the implementation. The tolerance $\epsilon$ has to be chosen subjectively to accurately estimate the inverse solution set. Assuming that the solution exists and $\cap_{i=1}^kS_i$ represents a single contiguous region, one can find $x_{opt} = \arg\min\{\|g(x)-g_0\|, x \in \cap_{i=1}^kS_i\}$. For the Easom function example, we set $\epsilon = 10^{-5}$, and the final inverse solution obtained is $x_{opt} = (0.8188, 0.2029)$. Figure 4 shows that the simulator response at $x_{opt}$ is virtually indistinguishable from the target response.
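A schematic version of this final extraction step is given below; here the full response series may come from the simulator itself or from the fitted surrogates, and the snippet assumes the intersection is non-empty.
\begin{verbatim}
import numpy as np

def extract_inverse_solution(Xtest, pred_dps, target_dps, response_series,
                             g0, eps=1e-5):
    """Intersect the k scalar solution sets and return the best point in them.

    Xtest           : (M, d) dense candidate inputs
    pred_dps        : (M, k) surrogate predictions at the DPS time points
    target_dps      : length-k vector of target values g_0(t_i^*)
    response_series : callable returning the full series g(x, .) for an input x
    """
    in_all = np.all(np.abs(pred_dps - target_dps) < eps, axis=1)  # cap of S_i
    candidates = Xtest[in_all]                 # assumed non-empty here
    discrepancy = [np.linalg.norm(response_series(x) - g0) for x in candidates]
    return candidates[int(np.argmin(discrepancy))]
\end{verbatim}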
\begin{figure}[h!]\centering
\includegraphics[scale=0.8]{easom_result.pdf}
\caption{Easom function: The target response is shown by the solid black line and the simulator response at $x_{opt}$ is shown by the dashed blue curve.}
\end{figure}
\noindent We now summarize the key steps of the proposed approach in Algorithm~1.
{\small
\begin{algorithm}[t!]
\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}
\DontPrintSemicolon
\Input{(1) Input parameters: $n_0, d, L, N$ \\ (2) Target response: $\{g_0(t_j),j=1,...,L\}$\\ (3) Tolerance: $\epsilon$ \\ (4) Dynamic computer simulator: $\{g(x,t_j), j=1,...,L\}$}
%
\Output{(1) Final training set: $\texttt{xx}_{N\times d}$ and $\texttt{yy}_{L\times N}$\\ (2) Estimated inverse solution: $\texttt{x}_{opt}$}
\hrule
Construct a DPS of size $k (\ll L)$ that would capture the important features of the target time-series response, say, $(t_1^*, t_2^*, ..., t_k^*)$. See Section~3 for the proposed spline based methodology. \;
%
Choose $n_0$ points in $[0,1]^d$ using maximin Latin hypercube design. Obtain the corresponding simulator response matrix $Y_{L\times n_0}$.\;
%
\For{$j = 1, \ldots, {k}$}{
Use scalar-valued contour estimation method by Ranjan et al. (2008) to estimate $S_j(x)=\{x\in \chi \ : \ |g(x,t_j^*) - g_0(t_j^*)| < \epsilon \}$. Assume the size of initial design is $n_0 + (j-1)\cdot (N-n_0)/k$, whereas $(N-n_0)/k$ follow-up trials are added sequentially one at-a-time as per the EI criterion in Section~2.3. \;
%
Augment the follow-up points to the initial design for the next scalar-valued inverse problem.\;
}
Extract the final inverse solution as $\cap_{j=1}^{k} S_j(x)$.\;
\caption{Multiple scalar-valued contour estimation approach}
\end{algorithm}
}
\section{Simulation studies}
In this section, we use three different test simulators to compare the performance of the proposed method with the modified history matching algorithm. Since the number of simulator runs cannot be fixed beforehand in the latter approach, we implement the proposed multiple scalar-valued contour estimation method at two settings: using a prefixed limited budget, and using a budget matching that of the competing method. For performance comparison between the two methods, we use three popular goodness-of-fit measures, $R^2$, RMSE, and normD, defined below; a small computational sketch of these measures follows the list. The objective is to maximize $R^2$ and to minimize RMSE and normD.
\begin{itemize}
\item Root mean squared error
\begin{eqnarray*}
RMSE = \left(\frac{1}{L} \sum_{j=1}^{L} \mid g(\hat{x}_{opt}, t_j) - g_0(t_j) \mid ^2 \right)^{1/2}.
\end{eqnarray*}
\item Coefficient of determination $R^2$ of the simple linear regression model fitted to the estimated inverse solution and the target response, i.e., $R^2$ of the following linear regression model:
\begin{eqnarray*}
g_0(t_j) = g(\hat{x}_{opt}, t_j) + \delta_j, j = 1, 2, \dots, L,
\end{eqnarray*}
with the assumption of i.i.d. errors $\delta_j$.
\item Normalized discrepancy (on log-scale), between the simulator response at the estimated inverse solution and the target response
\begin{eqnarray*}
normD=\log\left(\frac{\left\|{g_0}-{g}\left(\hat{{x}}^{*}\right)\right\|_{2}^{2}} {\left\|{g_0}-\bar{g}_0 1_{L}\right\|_{2}^{2}}\right)
\end{eqnarray*}
where $\bar{g}_0 = \sum_{t=1}^{L}g_0(t)/L$ and $1_{L}$ is an L-dimension vector of ones. Note that $1-\exp(normD)$, also referred to as Nash–Sutcliffe Efficiency (Nash and Sutcliffe, 1970), is a popular goodness of fit measure in hydrology literature.
\end{itemize}
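These three measures can be computed directly from the target series and the simulator response at the estimated solution; a small sketch follows, where $R^2$ is computed as the squared correlation between the two series (one common convention; the exact regression-based definition above may differ slightly).
\begin{verbatim}
import numpy as np

def fit_measures(g_hat, g0):
    """RMSE, R^2, and log-normalized discrepancy (normD) between the simulator
    response at the estimated inverse solution (g_hat) and the target (g0)."""
    g_hat, g0 = np.asarray(g_hat, float), np.asarray(g0, float)
    rmse = np.sqrt(np.mean((g_hat - g0) ** 2))
    r2 = np.corrcoef(g_hat, g0)[0, 1] ** 2
    normd = np.log(np.sum((g0 - g_hat) ** 2) / np.sum((g0 - g0.mean()) ** 2))
    return rmse, r2, normd
\end{verbatim}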
\subsection{Example 1: Easom Function (Michalewicz, 1996) continued}
We begin by revisiting the example discussed in Section 3 to compare the inverse solutions arrived at by the two methods. The modified history matching approach used a budget size of $N = 230$ when using a predetermined cutoff of $c = 0.5$ (for implausibility measure). Thus, the proposed multiple scalar-valued contour estimation (MSCE) was implemented in two scenarios: a budget of the matching size, and the original budget size of $N = 50$.
\begin{table}[h!]
\centering
\caption{Easom Function: Performance comparison of the proposed MSCE method with $N=50$ and $N=230$, and the modified history matching (HM) method which required $N=230$ simulator runs. }
\medskip
\begin{tabular}{ccccc}
\hline
\bf{Methods} & \bf{Total Budget} & \bf{RMSE} & \bf{R$^2$} & \bf{normD}\\
\hline
HM & $N = 230$ & $2.46\times10^{-7}$ & 0.999 & $3.157\times10^{-5}$\\
MSCE & $N = 50$ & $3.10\times10^{-7}$ & 0.999 & $4.993\times10^{-5}$\\
MSCE & $N = 230$ & $1.48\times10^{-7}$ & 0.999 & $1.137\times10^{-5}$\\
\hline
\end{tabular}
\label{Tab: easom1}
\end{table}
\begin{figure}[h!]\centering
\begin{tabular}{lll}
\includegraphics[scale=0.3]{easom_comp1.pdf} &
\includegraphics[scale=0.3]{easom_comp2.pdf} &
\includegraphics[scale=0.3]{easom_comp4.pdf}
\end{tabular}
\caption{Easom Function: Performance comparison between modified HM (gray), MSCE with $N = 50$ (brown), and MSCE with $N = 230$ (blue).}
\label{Fig: easom1}
\end{figure}
From Table~\ref{Tab: easom1} and Figure~\ref{Fig: easom1}, we see that the proposed MSCE method using a budget size of $N = 230$ is clearly the most favoured approach. As compared to the modified HM approach (Bhattacharjee et al. 2019), the proposed method shows an improvement margin of $(2.46 - 1.48)/1.48 \times 100\% \approx 66\%$ per RMSE, and $(3.157- 1.137)/1.137 \times 100\% \approx 178\%$ according to log-normalized discrepancy.
\subsection{Example 2: Harari and Steinberg, 2014}
{\rm
In another example of applying the proposed MSCE approach to solve the inverse problem, we use a more complex test function by Harari and Steinberg (2014) with the three-dimensional input space scaled to unit cube, i.e., $x = (x_1, x_2, x_3)^T \in [0,1]^3$. The simulator model output is generated as:
\begin{eqnarray*}
y_t(x)= \exp(3x_1t + t)\times \cos(6x_2t+ 2t - 8x_3 - 6),
\end{eqnarray*}
where time $t \in [0,1]$ is on a 200-point equidistant grid. The target time-series response for the calibration problem corresponds to $x_0 = (0.522, 0.950, 0.427)^T$ (as in Example~4 of Zhang et al., 2019).
Using the spline based knot selection method presented in Section~3, we identified the DPS to be $\{118, 26, 95\}$. For the proposed MSCE method, an initial training set of size $n_0 = 18$ generated via a MaxPro Latin hypercube design and a total budget of $N = 66$ were used. The modified history matching method, implemented with a predetermined cutoff of $c = 3$, required a budget of $N = 93$; thus the proposed method was also run with the matching budget while holding $n_0$ constant. Figure~\ref{Fig:hs_a} depicts the placement of the DPS and the time series responses for the inverse solutions derived from the methods.
\begin{figure}[h!]
\begin{center}
\begin{tabular}{cc}
\includegraphics[scale=.6]{hs_a.pdf}
\end{tabular}
\caption{Harari and Steinberg (2014): The target series corresponding to $x_0 = (0.522, 0.950, 0.427)^T$ is shown by the black curve, and DPS is displayed by the dotted vertical lines. The time series responses corresponding to the estimated solution via modified HM method, MSCE method with $N=66$, and MSCE method with $N=93$ are shown by the red, blue, and green lines, respectively.}
\label{Fig:hs_a}
\end{center}
\end{figure}
Table~\ref{Tab: hs1} and Figure~\ref{Fig: hs1} present the performance comparison of the two methods for this example.
\begin{table}[h!]
\centering
\caption{Harari and Steinberg (2014): Performance comparisons of the proposed MSCE using $N=66$ and $N=93$, and the modified HM method - which required 93 simulator runs.}
\medskip
\begin{tabular}{llccc}
\hline
\bf{Methods} & \bf{Total Budget} & \bf{RMSE} & \bf{R$^2$} & \bf{normD}\\
\hline
HM & $N = 93$ & 0.209 & 0.997 & 0.00317\\
MSCE & $N = 66$ & 0.246 & 0.996 & 0.00440\\
MSCE & $N = 93$ & 0.134 & 0.999 & 0.00132\\
\hline
\end{tabular}
\label{Tab: hs1}
\end{table}
\begin{figure}[h!]
\centering
\begin{tabular}{lll}
\includegraphics[scale=0.3]{hs_comp1.pdf} &
\includegraphics[scale=0.3]{hs_comp2.pdf} &
\includegraphics[scale=0.3]{hs_comp4.pdf}
\end{tabular}
\vspace{-1cm}
\caption{Harari and Steinberg (2014): Performance comparison between modified HM (with $N=93$ points) (in gray), MSCE with $N = 66$ (brown), and MSCE with $N = 93$ (blue).}
\label{Fig: hs1}
\end{figure}
From Table~\ref{Tab: hs1} and Figure~\ref{Fig: hs1} we see that the proposed MSCE method at the default budget of $N=66$ performs slightly worse than the modified HM method, however, the proposed method outperforms the modified HM method in all three goodness of fit measurements when the budget matches. In particular, we see the proposed method using $N=93$ outperform the modified HM method by a significant margin of $(0.209 - 0.134)/0.134 \times 100\% \approx 56\%$ according to RMSE, and an even greater margin of $(0.00317 - 0.00132)/0.00132 \times 100\% \approx 140\%$ according to log-normalized discrepancy.
}
\subsection{Example 3: Bliznyuk et al., 2008}
{\rm
We now use a five-dimensional environmental model by Bliznyuk et al. (2008) which simulates a pollutant spill caused by a chemical accident. Here, the input space is $x = (x_1, x_2, x_3, x_4, x_5)^T \in [7,13] \times [0.02, 0.12] \times [0.01, 3] \times [30.01, 30.304] \times [0,3]$, and the simulator outputs are generated as:
%
\begin{eqnarray*}
y_t(x) &=& \frac{x_1}{\sqrt{x_2t}} \exp\bigg(\frac{-x_5^2}{4x_2t}\bigg) + \frac{x_1}{\sqrt{x_2(t - x_4)}}\exp\bigg(\frac{-(x_5-x_3)^2}{4x_2(t - x_4)}\bigg)I(x_4<t)
\end{eqnarray*}
%
where $t \in [35.3, 95]$ is defined by 200 equidistant points. The true target time-series response corresponds to $x_0 = (9.640, 0.059, 1.445, 30.277, 2.520)^T$ (as in Example~5 of Zhang et al., 2019). For the purpose of HM procedure, the input space is scaled such that $x \in [0,1]^5$.
By optimizing the cubic spline knots on the target response, the DPS of size $k=4$ is chosen at $\{30, 7, 61, 14\}$. An initial training set of size $n_0 = 30$ and a total budget of $N=120$ are used for implementing the proposed MSCE method. Since the HM method (with $c = 0.7$) required $N=269$ simulator runs, the proposed method has also been implemented using $N=269$ for the calibration problem. Figure~\ref{Fig:bliz_a} shows the target response, the DPS, and the estimated inverse solutions obtained by the three procedures.
\begin{figure}[h!]
\begin{center}
\begin{tabular}{cc}
\includegraphics[scale=0.6]{bliz_a.pdf}
\end{tabular}
\caption{Bliznyuk et al. (2008): Target series is shown by the black curve, and DPS of size $k = 4$ is displayed by the dotted vertical lines. The estimated inverse solutions obtained via the modified HM method, MSCE method with $N=120$ runs, and MSCE method with $N=269$ runs are shown by the red, blue, and green lines respectively.}
\label{Fig:bliz_a}
\end{center}
\end{figure}
}
From Figure~\ref{Fig:bliz_a}, it is clearly visible that the HM algorithm leads to inferior inverse solution as compared to the proposed MSCE approaches. Table~\ref{Tab: bliz1} and Figure~\ref{Fig: bliz1} compare the goodness of fit measures for this five-dimensional example.
\begin{table}[h!]
\centering
\caption{Bliznyuk et al. (2008): Performance comparison of the proposed MSCE method using $N=120$ and $N=269$, and the modified HM method with $N=269$ simulator runs.}
\medskip
\begin{tabular}{llccccccc}
\hline
\bf{Methods} &&& \bf{Total Budget} && \bf{RMSE} & \bf{R$^2$} & \bf{normD}\\
\hline
modified history matching &&& $N = 269$ && 0.129 & 0.999 & 0.0152\\
multiple scalar contour estimation &&& $N = 120$ && 0.0221 & 0.999 & 0.000447\\
multiple scalar contour estimation &&& $N = 269$ && 0.0194 & 0.999 & 0.000346\\
\hline
\end{tabular}
\label{Tab: bliz1}
\end{table}
\begin{figure}[h!]
\centering
\begin{tabular}{ccc}
\includegraphics[scale=0.3]{bliz_comp1.pdf} &
\includegraphics[scale=0.3]{bliz_comp2.pdf} &
\includegraphics[scale=0.3]{bliz_comp4.pdf}
\end{tabular}
\vspace{-1cm}
\caption{Bliznyuk et al. (2008): Performance comparison plots for modified HM algorithm with $N = 269$ (gray), MSCE with $N = 120$ (brown), and MSCE with $N = 269$ (blue).}
\label{Fig: bliz1}
\end{figure}
Table~\ref{Tab: bliz1} and Figure~\ref{Fig: bliz1} show that the proposed multiple scalar contour estimation (MSCE) method with both budget levels ($N=120, N=269$) significantly outperform the modified history matching approach by massive margins.
\section{Real Application: Rainfall-Runoff Example}
Bhattacharjee et al. (2019) motivated their modified HM approach using hydrological models. The actual computer simulator - the Matlab-Simulink model introduced by Duncan et al. (2013) - studies the rainfall-runoff relationship for the windrow composting pad. The following four input parameters are identified to have the most significant influence on the output: depth of surface, depth of sub-surface, and two coefficients of the saturated hydraulic conductivity ($K_{sat1}$ and $K_{sat2}$). Interested readers can see Duncan et al. (2013) for further details on the Matlab-Simulink model.
For the inverse problem, the target response is the rainfall-runoff data ($g_0$) observed from the Bioconversion center at the University of Georgia, Athens, USA (Bhattacharjee et al. 2019). Figure~\ref{ms_random} depicts the observed target response and a few random outputs from the Matlab-Simulink model.
\begin{figure}[h!]
\begin{center}
\begin{tabular}{cc}
\includegraphics[scale=0.7]{ms_random_outputs.pdf}
\end{tabular}
\caption{Matlab-Simulink model: Observed data $(g_0)$ with a few random simulator outputs.}
\label{ms_random}
\end{center}
\end{figure}
Using the spline-based knots selection method discussed in Section~3, we found the discretization time points as $DPS = \{4557, 3359, 4702, 4085\}$. For the proposed approach, we used an initial training set with $n_0 = 40$ points generated using a Latin hypercube design and a total budget of $N= 120$. However, since the modified HM method exhausted a total budget of $N = 461$ using a predetermined cutoff of $c = 3$ (as in Bhattercharjee et al., 2019), the budget size of $N = 461$ was also made available for the proposed method. Figure~\ref{Figs:ms_a} shows the positioning of DPS as well as the estimated time-series responses for the three inverse solutions with respect to the observed runoff data.
\begin{figure}[h!]
\begin{center}
\begin{tabular}{cc}
\includegraphics[scale=0.7]{ms_a.pdf}
\end{tabular}
\caption{Matlab-Simulink model: Target time series response is shown by the black curve, and the DPS is being displayed by the dotted vertical lines. The estimated inverse solution corresponding to the modified HM method, MSCE method using $N=120$, and MSCE method using $N=461$ are displayed by the red, blue, and green lines respectively.}
\label{Figs:ms_a}
\end{center}
\end{figure}
Table~\ref{Tab: ms1} and Figure~\ref{Fig: ms1} show that the MSCE approach using $N=120$ outperformed the modified HM approach by small margins of $(55.580 - 54.215)/54.215 \times 100\% \approx 2.5\%$ per RMSE, and $(0.0841 - 0.0801)/0.0801 \times 100\% \approx 5\%$ per log-normalized discrepancy. When the proposed method matches the budget size used by the modified HM method at $N = 461$, the margin of improvement increases to $(55.580 - 53.585)/53.585 \times 100\% \approx 3.7\%$ per RMSE, and $(0.0841 - 0.0782)/0.0782 \times 100\% \approx 7.5\%$ per log-normalized discrepancy.
\begin{table}[h!]
\centering
\caption{Matlab-Simulink model: Goodness-of-fit comparisons of the proposed MSCE method using $N=120$ and $N=461$, and the modified HM method (with $N=461$ runs).}
\medskip
\begin{tabular}{ccccc}
\hline
\bf{Methods} & \bf{Total Budget} & \bf{RMSE} & \bf{R$^2$} & \bf{normD}\\
\hline
HM & $N = 461$ & 55.580 & 0.926 & 0.0841\\
MSCE & $N = 120$ & 54.215 & 0.934 & 0.0801\\
MSCE & $N = 461$ & 53.585 & 0.934 & 0.0782\\
\hline
\end{tabular}
\label{Tab: ms1}
\end{table}
\begin{figure}[h!]
\centering
\begin{tabular}{ccc}
\includegraphics[scale=0.3]{ms_comp1.pdf} &
\includegraphics[scale=0.3]{ms_comp2.pdf} &
\includegraphics[scale=0.3]{ms_comp4.pdf}
\end{tabular}
\vspace{-1cm}
\caption{Matlab-Simulink model: Pictorial representation of performance comparison between modified HM with $N = 461$ (gray), MSCE with $N = 120$ (brown), and MSCE with $N = 461$ (blue).}
\label{Fig: ms1}
\end{figure}
\section{Concluding Remarks}
In this paper, we have proposed a scalarized approach to solving the inverse problem for dynamic computer simulators by first carefully selecting a handful of time-points based on the target response series (called the DPS), and then solving multiple scalar-valued inverse problems at the DPS using the popular sequential expected improvement approach developed by Ranjan et al. (2008). The final inverse solution for the underlying dynamic simulator is the intersection of all scalarized inverse solutions. We have also proposed a cubic spline based method for systematically finding the DPS (discretization point set). Based on our investigation using several test functions and a real-life hydrological simulator, it is clear that the proposed approach outperforms the competing modified history matching algorithm by modest to significant margins. It has also been demonstrated that the proposed MSCE approach shows better performance than the modified HM algorithm even with a much lower simulator run budget. This is a two-fold advantage: in the accuracy of the inverse solution and in the number of simulator runs used.
There are a few important remarks worth mentioning. First, when finding an optimal DPS using the spline-based technique, we followed a greedy ``forward variable selection'' type approach and identified one best knot at a time. Perhaps a ``best selection'' type approach might lead to a better solution; however, it was abandoned due to the vast (seemingly impractical) computational cost of finding the best DPS. Second, when solving the scalar-valued inverse problem at the $i$-th element of the DPS, it is important to note that the size of the initial design is $n_0 + (i-1)\cdot (N-n_0)/k$ and the budget of follow-up points is $(N-n_0)/k$. Thus, it is tempting to believe that the inverse solutions at the first or second element of the DPS are perhaps less accurate as compared to (say) the fourth or fifth time point in the DPS. Based on our preliminary simulation study, we found no significant improvement in accuracy by changing the order of the DPS for solving the scalarized inverse problems. On a related note, the distribution of the total budget $N$ for solving the $k$ scalar-valued inverse problems can also be further investigated. As per our preliminary simulation study, the results were not significantly different.
\section*{References}\label{Ref}
\begin{enumerate}
\item
Bhattacharjee, N. V., Ranjan, P., Mandal, A., \& Tollner, E. W. (2019). A history matching approach for calibrating hydrological models. \emph{Environmental and Ecological Statistics, 26}(1), 87-105.
\item
Bingham, D., Ranjan, P., \& Welch, W. J. (2014). Design of computer experiments for optimization, estimation of function contours, and related objectives. \emph{Statistics in Action: A Canadian Outlook, 109.}
\item
Bliznyuk, N., Ruppert, D., Shoemaker, C., Regis, R., Wild, S., \& Mugunthan, P. (2008). Bayesian calibration and uncertainty analysis for computationally expensive models using optimization and radial basis function approximation. \emph{Journal of Computational and Graphical Statistics, 17}(2), 270-294.
\item
Duncan O, Tollner E, Ssegane H (2013) An instantaneous unit hydrograph for estimating runoff from windrow composting pads. \emph{Appl Eng Agric 29}(2):209–223
\item
Franke, R. (1979). A critical comparison of some methods for interpolation of scattered data. \emph{Naval Postgraduate School Report NPS53-79-003, Monterey, CA.}
\item
Harari, O., \& Steinberg, D. M. (2014). Convex combination of Gaussian processes for Bayesian analysis of deterministic computer experiments. \emph{Technometrics, 56}(4), 443-454.
\item
Johnson, M. E., Moore, L. M., \& Ylvisaker, D. (1990). Minimax and maximin distance designs. \emph{Journal of statistical planning and inference, 26}(2), 131-148.
\item
Jones, D. R., M. Schonlau, and W. J. Welch (1998). Efficient global optimization of expensive black-box functions. \emph{Journal of Global Optimization 13}(4), 455–492.
\item
Joseph, V. R., Gul, E., \& Ba, S. (2015). Maximum projection designs for computer experiments. \emph{Biometrika, 102}(2), 371-380.
\item
MacDonald, B., Ranjan, P., \& Chipman, H. (2015). GPfit: An R package for fitting a Gaussian process model to deterministic simulator outputs. \emph{Journal of Statistical Software, 64}(i12).
\item
Michalewicz, Z. (1996), \emph{Genetic Algorithms + Data Structures = Evolution Programs}, Springer-Verlag, Berlin/Heidelberg/New York.
\item
Nash, J. E., \& Sutcliffe, J. V. (1970). River flow forecasting through conceptual models part I—A discussion of principles. \emph{Journal of hydrology, 10}(3), 282-290.
\item
Picheny, V., Ginsbourger, D., Richet, Y., \& Caplin, G. (2013). Quantile-based optimization of noisy computer experiments with tunable precision. \emph{Technometrics, 55}(1), 2-13.
\item
Ranjan, P., Bingham, D., \& Michailidis, G. (2008). Sequential experiment design for contour estimation from complex computer codes. \emph{Technometrics, 50}(4), 527-541.
\item
Ranjan, P., Thomas, M., Teismann, H., \& Mukhoti, S. (2016). Inverse problem for a time-series valued computer simulator via scalarization. \emph{Open Journal of Statistics, 6}(3), 528-544.
\item
Sacks, J., Welch, W. J., Mitchell, T. J., \& Wynn, H. P. (1989). Design and analysis of computer experiments. \emph{Statistical science}, 409-423.
\item
Vernon, I., Goldstein, M., \& Bower, R. G. (2010). Galaxy formation: a Bayesian uncertainty analysis. \emph{Bayesian analysis, 5}(4), 619-669.
\item
Zhang, R., Lin, C. D., \& Ranjan, P. (2018). A sequential design approach for calibrating a dynamic population growth model. \emph{arXiv preprint arXiv:1811.00153.}
\end{enumerate}
\end{document}
\section{Introduction} \label{1}
While it is believed that supermassive black holes (SMBH, with masses $M_{BH} \simeq$ 10$^6$ $-$ 10$^9$ \ensuremath{\mathrm{M}_{\odot}}) are ubiquitous in the centers of massive galaxies at the present epoch, the rate of incidence of (S)MBH in dwarf galaxies with stellar masses M$_{\star} \lesssim$ 10$^{9.5}$ \ensuremath{\mathrm{M}_{\odot}}\ (roughly that of the Large Magellanic Cloud) is not well determined. The direct detection of SMBH in dwarf galaxies based on the stellar and gas dynamics within the gravitational sphere of influence of the SMBH is extremely challenging, although there have been recent efforts producing promising results \citep[][]{Nguyen2018,Nguyen2019}. Nevertheless, recent studies have revealed active galactic nuclei (AGN) in dwarf galaxies through diagnostics in the optical \citep[e.g.][]{Greene2007,Dong2012, Reines2013,Moran2014,Diecky2019,He2_10_2020,Mezcua2020}, near and mid-infrared \citep[e.g.][]{Sartori2015,Hood2017,Riffel2020}, X-rays \citep[e.g.][]{Pardo2016,Mezcua2018}, as well as from optical variability \citep[e.g.][]{Baldassare2018}, opening a new window for systematic studies of (S)MBH in dwarfs \citep[see][for a recent review]{Greene2019}.
There is a general consensus that feedback processes likely play a vital role in the evolution of dwarf galaxies, given their shallow potential well \citep[e.g.][]{Veilleux2005,Veilleux2020}. Stellar processes have long been considered the main source of feedback in dwarf galaxies \citep[e.g.][]{Larson1974,Veilleux2005,Heckman2017,Martin2018}. However, it is still debated whether such stellar feedback is effective enough to reproduce the properties of the dwarf galaxies we see today \citep[e.g.][]{Garrison-Kimmel2013}. Given the growing number of AGN detected in dwarf galaxies, it is also important to consider the possible impact of AGN feedback. Few studies have explored this issue systematically. Plausible evidence of star formation quenching induced by AGN feedback in dwarf galaxies has been reported by \citet{Penny2018}. \citet{Bradford2018} have also found that the global HI content may be lower in dwarf galaxies with AGN, perhaps due to AGN feedback. In addition, radio observations have revealed radio jets in dwarf galaxies that are as powerful as those observed in more massive systems \citep{Mezcua2019b}. From the theoretical perspective, analytic analyses from \citet{Silk2017} and \citet{Dashyan2018} have pointed out the possibly significant effects of AGN feedback in dwarfs. New simulations by \citet{Koudmani2019,Koudmani2020} suggest that AGN boost the energetics of outflows in dwarf galaxies.
Powerful, kpc-scale outflows triggered by luminous AGN have been regarded as strong observational evidence of on-going AGN feedback \citep[e.g.][]{RupkeVeilleux2011,RupkeVeilleux2013a,RupkeVeilleux2013b,RupkeVeilleux2015,Rupke2017, Liu2013a,Liu2013b,Harrison2014,Westmoquette2013,RamosAlmeida2019}, which may impact even the circumgalactic medium \citep[e.g.][]{Veilleux2014,Lau2018,Liu2019}. It is thus interesting to explore if similar outflows can be found in dwarf galaxies with AGN. Recently, \citet{ManzanoKing2019} have observed a sample of 29 dwarf galaxies with AGN using Keck LRIS long-slit spectroscopy. Spatially extended (up to $\sim$2 kpc in radius), rapid outflows (median velocity offsets $\lesssim$180 km s$^{-1}$, 80-percentile widths W$_{80}$\ $\lesssim$ 1600 km s$^{-1}$) have been discovered in a third of the sources from the sample, suggesting that AGN feedback may be significant in these dwarf galaxies. More recently, a parsec-scale radio jet was reported in one of the targets with a reported outflow, adding evidence for AGN feedback in these dwarf galaxies \citep{Yang2020}. However, while the results from the long-slit spectra are tantalizing, they do not capture the two-dimensional morphology of the outflows. Integral field spectroscopy (IFS) that provides full two-dimensional coverage with high spatial resolution is needed to map the outflows and fully quantify the true impact of these outflows on the dwarf hosts.
In this paper, we analyze newly obtained IFS data of eight dwarf galaxies with AGN showing the fastest and brightest outflowing gas in the sample studied by \citet{ManzanoKing2019}. The eight targets were observed with Keck/KCWI, and two of the targets were also observed with Gemini/GMOS. This paper is organized as follows. In Section \ref{2}, the data sets, physical properties of the targets measured from the IFS and ancillary data, and reduction procedures are described. The analysis techniques adopted in this paper are described in Section \ref{3}. The main results are presented in Section \ref{5} and detailed in Appendix \ref{4}. The implications of these results are discussed in Section \ref{6}, and the conclusions are summarized in Section \ref{7}. Throughout the paper, we assume a $\Lambda$CDM cosmology with $H_0$ = 69.3 km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\rm m}$ = 0.287, and $\Omega_{\rm \Lambda} = 0.713$ \citep{wmap2013}.
\section{Sample, Observations, \& Data Reduction} \label{2}
\subsection{Sample} \label{21}
We observed 8 out of the 29 dwarf galaxies with AGN studied in \citet{ManzanoKing2019}. The 29 sources were originally selected from samples of dwarf galaxies with AGN in the recent literature, based on Baldwin, Phillips \& Terlevich and Veilleux \& Osterbrock 1987 \citep[hereafter BPT and VO87, respectively;][]{bpt,Veilleux1987} line ratio diagrams \citep{Reines2013,Moran2014} and mid-infrared diagnostics \citep{Sartori2015}. The reader is referred to \citet{ManzanoKing2019} for more details.
All targets are confirmed to host AGN based on the AGN-like line ratios measured from the Keck/LRIS long-slit spectra extracted from the central 1\arcsec\ region. Many of the targets show further evidence of hosting AGN, including i) the detection of strong He~{\sc ii} $\lambda$4686\ and [Ne~{\sc v}] $\lambda$3426\ emission in the Keck LRIS long-slit spectra and KCWI spectra, and ii) the detection of coronal emission lines in the near-infrared spectra of these objects (Bohn et al.\ 2020, in prep.). In addition, the highly ionized [Fe~{\sc x}] $\lambda$6375\ line (I.P.$=$233.6 eV) is detected within the central 0.6\arcsec\ of target J0906$+$56\ based on the GMOS Integral Field Unit (IFU) spectra reported here, and targets J0906$+$56\ and J0954$+$47\ also show hard X-ray emission originating from AGN activity \citep{Baldassare2017}. The basic physical properties of the 8 targets in our sample, including those from the NASA-Sloan Atlas\footnote{\nsaurl} (NSA), are summarized in Table \ref{tab:targets}.
\begin{deluxetable*}{cccc ccccc}[!htb]
\tablecolumns{9}
\tablecaption{Properties of the Targets\label{tab:targets}}
\tablehead{ \colhead{Name} & \colhead{Short Name} & \colhead{Redshift} & \colhead{log(M$_{\rm stellar}$/\ensuremath{\mathrm{M}_{\odot}})} &
\colhead{R$_{50}$} &
\colhead{log(L$_{[OIII]}$)} & \colhead{C$_{bol}$} & \colhead{log(L$_{AGN}$)} & \colhead{SFR} \\
\colhead{(1)} & \colhead{(2)} & \colhead{(3)} & \colhead{(4)} & \colhead{(5)} & \colhead{(6)} & \colhead{(7)} & \colhead{(8)} & \colhead{(9)}
}
\startdata
SDSS J010005.94$-$011059.0 & J0100$-$01 & 0.0517 & 9.47 & 1.2 & 40.96$^{+0.03}_{-0.04}$ & 142 & 43.5 & $<$0.6 \\
SDSS J081145.29$+$232825.7 & J0811$+$23 & 0.0159 & 9.02 & 0.6 & 39.63$^{+0.05}_{-0.06}$ & 87 & 42.0 & $<$0.01 \\
SDSS J084025.54$+$181858.9 & J0840$+$18 & 0.0151 & 9.28 & 1.0 & 39.96$^{+0.02}_{-0.02}$ & 87 & 42.0 & $<$0.01 \\
SDSS J084234.51$+$031930.7 & J0842$+$03 & 0.0291 & 9.34 & 1.0 & 40.51$^{+0.03}_{-0.03}$ & 142 & 43.1 & $<$0.3 \\
SDSS J090613.75$+$561015.5 & J0906$+$56 & 0.0467 & 9.36 & 1.5 & 41.15$^{+0.01}_{-0.01}$ & 142 & 43.7 & $<$0.3 \\
SDSS J095418.16$+$471725.1& J0954$+$47 & 0.0327 & 9.12 & 2.0 & 41.36$^{+0.02}_{-0.02}$ & 142 & 43.9 & $<$0.3 \\
SDSS J100551.19$+$125740.6 & J1005$+$12 & 0.00938 & 9.97 & 1.0 & 40.20$^{+0.05}_{-0.06}$ & 142 & 43.2 & $<$0.1 \\
SDSS J100935.66$+$265648.9 & J1009$+$26 & 0.0145 & 8.77 & 0.7 & 40.48$^{+0.01}_{-0.01}$ & 142 & 43.0 & $<$0.1 \\
\enddata
\tablecomments{Column (1): SDSS name of the target; Column (2): Short name of the target used in this paper; Column (3): Redshift of the target measured from the stellar fit to the spectrum integrated over the KCWI data cube; Column (4): Stellar mass from the NSA; Column (5): Half-light radius from the NSA, in units of kpc;
Column (6): Total [O~{\sc iii}] $\lambda$5007\ luminosity based on the observed total [O~{\sc iii}] $\lambda$5007\ fluxes within the field of view of the KCWI data without extinction correction, in units of erg s$^{-1}$; Column (7): [O III]-to-bolometric luminosity correction factor adopted from \citet{Lamastra2009};
Column (8): Bolometric AGN luminosity, based on the extinction-corrected [O~III] luminosity, in units of erg s$^{-1}$; Column (9): Upper limit on the star formation rate based on the extinction-corrected [O~{\sc ii}] $\lambda$$\lambda$3726,3729\ flux from the KCWI data, in units of {M$_{\sun}$ yr$^{-1}$}. Here we assume that 1$/$3 of the [O~{\sc ii}] $\lambda$$\lambda$3726,3729\ emission is from the star formation activity, following \citet{Ho2005}.}
\end{deluxetable*}
\subsection{Observations} \label{22}
\subsubsection{GMOS Observations} \label{221}
J0906$+$56\ and J0842$+$03\ were observed through Gemini fast-turnaround (FT) programs GN-2019A-FT-109 and GS-2019A-FT-105 (PI S.\ Veilleux). The GMOS IFU \citep{GMOS,GMOSS} data were taken on 2019-04-04 and 2019-04-05 at Gemini-N for J0906$+$56, and on 2019-04-28 and 2019-04-29 at Gemini-S for J0842$+$03. The GMOS IFU 1-slit, B600 mode was used for both targets, and the spectral resolution was $\sim$100 {km s$^{-1}$}\ FWHM at 4610 \AA. The field of view of this GMOS setup is 3.5\arcsec$\times$5\arcsec. The details of the observations are summarized in Table \ref{tab:obs}.
\begin{deluxetable*}{ccccc ccccc}
\tablecolumns{10}
\tabletypesize{\scriptsize}
\tablecaption{Summary of Observations\label{tab:obs}}
\tablehead{\colhead{Name} & \colhead{Telescope/Instrument} & \colhead{Dates} & \colhead{Grating(Slicer)} & \colhead{t$_{exp}$} & \colhead{PSF} & \colhead{Range} & \colhead{PA} & \colhead{FOV} & \colhead{5-$\sigma$ detection} \\
& & & & & & & & & \colhead{limit ($\times$10$^{-17}$)}
}
\colnumbers
\startdata
J0100$-$01 & Keck/KCWI & 2020-01-30 & BL(Small) & 1200$+$600 & 1.2\arcsec & 3500--5500 \AA & 51.0 & 8\arcsec$\times$20\arcsec & 9 \\
J0811$+$23 & Keck/KCWI & 2020-01-30 & BL(Medium) & 4$\times$1200 & 1.2\arcsec & 3500--5500 \AA & 0.0 & 16\arcsec$\times$20\arcsec & 1 \\
J0840$+$18 & Keck/KCWI & 2020-01-30 & BL(Medium) & 3$\times$1200 & 1.2\arcsec & 3500--5500 \AA & 101.0 & 16\arcsec$\times$20\arcsec & 1 \\
J0842$+$03 & Gemini/GMOS & 2019-04-28,29 & B600 & 8$\times$1125 & 0.55\arcsec & 3750--7070 \AA & 122.0 & 3.5\arcsec$\times$5\arcsec & 1 \\
J0842$+$03 & Keck/KCWI & 2020-01-31 & BL(Small) & 2$\times$1200 & 0.9\arcsec & 3500--5500 \AA & 290.0 & 8\arcsec$\times$20\arcsec & 1 \\
J0906$+$56 & Gemini/GMOS & 2019-04-04,05 & B600 & 8$\times$1155 & 0.6\arcsec & 3880--7200 \AA \tablenotemark{a} & 273.0 & 3.5\arcsec$\times$5\arcsec & 3 \\
J0906$+$56 & Keck/KCWI & 2020-01-31 & BL(Small) & 2$\times$1200+280 & 0.9\arcsec & 3500--5500 \AA & 0.0 & 8\arcsec$\times$20\arcsec & 2 \\
J0954$+$47 & Keck/KCWI & 2020-01-30 & BL(Small) & 5$\times$1200 & 1.2\arcsec & 3500--5500 \AA & 0.0 & 8\arcsec$\times$20\arcsec & 2 \\
J1005$+$12 & Keck/KCWI & 2020-01-30 & BL(Small) & 6$\times$600 & 1.2\arcsec & 3500--5500 \AA & 60.0 & 8\arcsec$\times$20\arcsec & 3 \\
J1009$+$26 & Keck/KCWI & 2020-01-30 & BL(Small) & 7$\times$600 & 1.2\arcsec & 3500--5500 \AA & 45.5 & 8\arcsec$\times$20\arcsec & 5
\enddata
\tablecomments{Column (1): Short name of the target; Column (2): Telescope and instrument used for the observations; Column (3): Date of the observation; Column (4): Grating adopted in the observation, with the slicer configuration of the corresponding KCWI observation shown in parentheses; Column (5): Exposure time of the observation in seconds; Column (6): FWHM of the PSF measured from the acquisition image (GMOS data) or IFU observation of the spectrophotometric standard star (KCWI data); Column (7): Spectral coverage of the data set; Column (8): Position angle of the IFU in degrees measured East of North; Column (9): Full field of view of the IFU; Column (10): 5-$\sigma$ detection limit for an [O~{\sc iii}] $\lambda$5007\ emission line with FWHM of 1000 {km s$^{-1}$}, in units of erg cm$^{-2}$ s$^{-1}$ arcsec$^{-2}$. The typical uncertainty of the listed values is $\sim$30\%.}
\tablenotetext{a}{The data with wavelength shorter than 5000 \AA\ were discarded in the analysis due to the low S/N.}
\end{deluxetable*}
We measured the point spread function (PSF) of the IFS data by fitting single 2-D Gaussian profiles to bright stars in the acquisition images of each target. The mean values of the measured FWHM (0.60\arcsec\ for J0906$+$56\ and 0.55\arcsec\ for J0842$+$03) were used as the empirical Gaussian PSF for the IFS data. Whether these PSF are a good approximation for our analysis can be checked by comparing the PSF of the acquisition images of the standard stars with those of the IFS frames on the stars themselves. We find that the former is more extended than the latter: the average FWHM of the PSF in the acquisition images is $\sim$90\% larger than that in the IFS frames in arcseconds, although it is only $\sim$15\% larger in units of the image pixel size. This suggests that the FWHM of the PSF determined from the acquisition images overestimates that of the science observations. Thus, the use of PSF measurements derived from the acquisition images in our analysis conservatively overestimates the true size of the PSF in the IFS observations of our targets.
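As an illustration of this type of measurement, the Python sketch below fits a single 2-D Gaussian to a star cut-out with astropy and converts the best-fit widths to a FWHM in arcseconds. The cut-out size and pixel scale are placeholder values, not parameters taken from our reduction.
\begin{verbatim}
# A minimal sketch of the PSF measurement: fit a 2-D Gaussian to a bright
# star in an acquisition image and convert sigma to FWHM. The box size and
# pixel scale below are illustrative assumptions, not the actual values.
import numpy as np
from astropy.modeling import models, fitting

def measure_psf_fwhm(image, x0, y0, box=15, pixscale=0.08):
    """Return the mean FWHM (arcsec) of a 2-D Gaussian fit to a star.

    image    : 2-D numpy array (acquisition image)
    x0, y0   : approximate integer pixel coordinates of the star
    box      : half-width of the fitting box in pixels (assumed value)
    pixscale : pixel scale in arcsec/pixel (assumed value)
    """
    cut = image[y0 - box:y0 + box, x0 - box:x0 + box]
    yy, xx = np.mgrid[:cut.shape[0], :cut.shape[1]]
    init = models.Gaussian2D(amplitude=cut.max(), x_mean=box, y_mean=box,
                             x_stddev=2.0, y_stddev=2.0)
    fitter = fitting.LevMarLSQFitter()
    best = fitter(init, xx, yy, cut)
    sigma = 0.5 * (best.x_stddev.value + best.y_stddev.value)
    return 2.3548 * sigma * pixscale   # FWHM = 2 sqrt(2 ln 2) sigma
\end{verbatim}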
\subsubsection{KCWI Data} \label{222}
All targets were observed with KCWI \citep{KCWI} through Keck program 2019-U217 (PI G.\ Canalizo) on 2020-01-31 and 2020-02-01, using the BL grating. J0811$+$23\ and J0840$+$18\ were observed with the medium-slicer setup (spectral resolution $\sim$160 {km s$^{-1}$}\ FWHM at 4550 \AA), while the others were observed with the small-slicer setup (spectral resolution $\sim$80 {km s$^{-1}$}\ FWHM at 4550 \AA). The details of the observations are summarized in Table \ref{tab:obs}.
We measured the PSF of these IFU observations from the observations of spectrophotometric standard stars taken before, in between, and after the on-target observations, by fitting single 2-D Gaussian profiles to narrow-band images (5000--5100 \AA) of the standard stars reconstructed from the data cubes. For one of the targets, J0842$+$03, a nearby bright star fell in the field of view and was thus observed simultaneously with the target in one science exposure. The same 2-D Gaussian fit was applied to it and the results were compared with the other PSF measurements. For each night, all individual PSF measurements described above broadly agree with each other, and the median FWHM of the best-fit Gaussian profiles was adopted as the FWHM of the PSF for further analysis. Note that we do not have PSF measurements taken at the same time as the on-target science observations; therefore, the variations in the size of the actual PSF may be larger than suggested above. This possibility is supported by the variation of the DIMM seeing measured by the Mauna Kea Weather Center\footnote{\dimmurl}, which ranged from 0.4\arcsec\ to 0.8\arcsec\ over the two observing nights.
\subsection{Data Reduction} \label{23}
\subsubsection{GMOS Data} \label{231}
Both GMOS data sets were reduced with the standard Gemini Pyraf package (v1.14), supplemented by scripts from the IFSRED library \citep{ifsred}. We followed the standard processes listed in the GMOS data reduction manual, except that we did not apply scattered light removal to the science frames. This was because i) there were no clear features indicative of scattered light in the raw data, and ii) the attempt to apply scattered light removal led to significant and unphysical wiggles in the extracted spectra.
The final data cubes were generated by combining the individual exposures of each target using the script IFSR\_MOSAIC from the IFSRED library. The wavelength solutions were further verified by checking the sky emission lines (mainly [O~{\sc i}] $\lambda$5577, and also the weaker [O~{\sc i}] $\lambda$6300\ and [O~{\sc i}] $\lambda$6364). For J0906$+$56, the differences between the measured line centers of the sky emission and the reference values are between $-$10 {km s$^{-1}$}\ and 10 {km s$^{-1}$}. The differences are randomly distributed across the data cube and no pattern is seen. Therefore, no further correction was applied to the wavelength calibration.
However, for target J0842$+$03, shifts of up to $\sim$5 \AA\ between the measured and reference line centers of the sky line [O~{\sc i}] $\lambda$5577\ were seen. The arc exposure for this target was taken eleven days after the science observations, perhaps explaining these large shifts. Additional corrections were therefore applied to the wavelength solutions: i) for each exposure, the zero-point shifts of the spectra were corrected using the sky emission line [O~{\sc i}] $\lambda$5577; ii) for the final combined data cube, small ($\lesssim$ 0.8 \AA), wavelength-dependent shifts in the wavelength solution were further corrected by adding shifts $\Delta(\lambda)$, where $\Delta(\lambda/{\mbox{\normalfont\AA}})= 0.0016(\lambda/{\mbox{\normalfont\AA}}) - 9.06$ is the best linear fit to the shifts between the measured line centers and the expected ones calculated from the emission-line redshift determined from the Keck/LRIS spectrum (Manzano-King, private communication). The strong optical emission lines [O~{\sc iii}] $\lambda$5007, \mbox {H$\alpha$}, [N~{\sc ii}] $\lambda$$\lambda$6548,6583, and [S~{\sc ii}] $\lambda$$\lambda$6716,6731\ were included in the fit. We further required that $\Delta(\lambda_{[O~I] \lambda5577}/{\mbox{\normalfont\AA}})$ = 0, i.e., zero shift at the wavelength of the sky emission line [O~{\sc i}] $\lambda$5577. The residuals of the best fit are $\lesssim$ 0.15 \AA\ in general.
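For clarity, the wavelength-dependent part of this correction amounts to a one-line function of the wavelength grid. The Python sketch below simply applies the quoted linear relation; it is a schematic transcription, not the actual reduction script.
\begin{verbatim}
# A minimal sketch of the wavelength-dependent correction applied to the
# J0842+03 GMOS cube, using the linear relation quoted in the text.
import numpy as np

def corrected_wavelength(wave):
    """Apply Delta(lambda) = 0.0016*lambda - 9.06 (Angstrom) to a grid."""
    wave = np.asarray(wave, dtype=float)
    delta = 0.0016 * wave - 9.06      # best-fit linear shift in Angstrom
    return wave + delta

# With these (rounded) coefficients the shift at the [O I] 5577 sky line
# is ~ -0.14 A, consistent with zero within the ~0.15 A residuals above.
\end{verbatim}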
\subsubsection{KCWI Data} \label{232}
The KCWI data sets were reduced with the KCWI data reduction pipeline and the IFSRED library. We followed the standard processes listed in the KCWI data reduction manual\footnote{\kcwiurl} for all targets. The data cubes generated from individual exposures were resampled to 0.15\arcsec\ $\times$ 0.15\arcsec\ (small-slicer setup) or 0.29\arcsec\ $\times$ 0.29\arcsec\ square spaxels (medium-slicer setup) using IFSR\_KCWIRESAMPLE. The resampled data cubes of the same target were then combined into a single data cube using IFSR\_MOSAIC.
\section{Analysis} \label{3}
\subsection{Voronoi Binning} \label{31}
The data cubes were mildly spatially binned using the Voronoi binning method \citep{voronoi}. As our aim is to characterize the broad, blueshifted components of the emission lines (especially [O~{\sc iii}] $\lambda$5007) which trace the outflows, we binned the data cubes according to the signal-to-noise ratio (S/N) of the blue wing of the [O~{\sc iii}] $\lambda$5007\ emission line (calculated in a target-specific, 200 {km s$^{-1}$}-wide velocity window). Spaxels with a blue-wing S/N of less than 1 were excluded from the binning, and each final spatial bin was required to reach a minimum S/N of 3.
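A schematic version of this binning step, assuming the public Python implementation (vorbin) of the algorithm of \citet{voronoi}, is given below. The input arrays (spaxel coordinates and blue-wing signal and noise) are assumed to have been computed beforehand, and their names are illustrative.
\begin{verbatim}
# A minimal sketch of the spatial binning on the S/N of the [O III] 5007
# blue wing, assuming the public vorbin package. Inputs are 1-D arrays
# with one entry per spaxel; names are illustrative.
from vorbin.voronoi_2d_binning import voronoi_2d_binning

def bin_blue_wing(x, y, wing_signal, wing_noise, target_sn=3.0):
    """Voronoi-bin spaxels on the blue-wing S/N of [O III] 5007.

    x, y        : spaxel coordinates
    wing_signal : blue-wing flux summed in the 200 km/s window
    wing_noise  : corresponding 1-sigma noise
    """
    good = wing_signal / wing_noise >= 1.0     # drop spaxels with S/N < 1
    out = voronoi_2d_binning(x[good], y[good], wing_signal[good],
                             wing_noise[good], target_sn,
                             plot=False, quiet=True)
    bin_num = out[0]                           # bin index of each good spaxel
    return good, bin_num
\end{verbatim}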
\begin{figure*}[!htb]
\begin{minipage}[t]{0.5\textwidth}
\includegraphics[width=\textwidth]{f1a.pdf}
\end{minipage}
\begin{minipage}[t]{0.5\textwidth}
\includegraphics[width=\textwidth]{f1b.pdf}
\end{minipage}
\caption{Examples of fits to the [O~{\sc iii}] $\lambda$$\lambda$4959,5007\ line profiles for J0842$+$03\ using (left panel) one Gaussian component and (right panel) two Gaussian components. In each panel, the top spectrum in black is the observed data, while the solid red curve is the best fit model and the dashed curves represent the individual Gaussian components (C1, C2). The residuals after subtraction of the best-fit models from the data are shown in solid black curve at the bottom, and the y$=$0 line is shown in red.
\label{fig:J0842o3fit}}
\end{figure*}
\begin{figure*}[!htb]
\begin{minipage}[t]{0.5\textwidth}
\includegraphics[width=\textwidth]{f2a.pdf}
\end{minipage}
\begin{minipage}[t]{0.5\textwidth}
\includegraphics[width=\textwidth]{f2b.pdf}
\end{minipage}
\caption{Examples of fits to the [O~{\sc iii}] $\lambda$$\lambda$4959,5007\ line profiles for J1005$+$12\ using (left panel) two Gaussian components and (right panel) three Gaussian components. The presentation of the data, fits, and residuals is the same as in Fig. \ref{fig:J0842o3fit}.
\label{fig:J1005o3fit}}
\end{figure*}
\subsection{Spectral Fits} \label{32}
The spectral fits utilized IDL library IFSFIT \citep{ifsfit}, supplemented by customized python scripts.
\subsubsection{Fits to the [O~{\sc iii}] $\lambda$$\lambda$4959,5007\ Emission} \label{321}
The [O~{\sc iii}] $\lambda$$\lambda$4959,5007\ line emission from our targets shows the strongest blueshifted wings among all of the emission line tracers of the ionized outflows. In addition, the absence of other strong emission and absorption features in the vicinity of [O~{\sc iii}] $\lambda$$\lambda$4959,5007\ makes the faint [O~III] wing components easier to analyze. In order to capture the faintest signal from the outflows traced by these faint emission-line wings, we started by fitting the [O~{\sc iii}] $\lambda$$\lambda$4959,5007\ line emission alone. With the emission lines masked out, the stellar continuum was fit using the public software pPXF \citep{ppxf} with 0.5$\times$ solar metallicity stellar population synthesis (SPS) models from \citet{Delgado2005}. Polynomials of order up to 4 were added to account for any non-stellar continua.
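For reference, the continuum fit can be reproduced schematically with the public Python version of pPXF. The sketch below assumes that the logarithmically rebinned spectrum, the SPS templates, the noise spectrum, the velocity scale, and the emission-line pixel mask have already been prepared; the variable names are illustrative.
\begin{verbatim}
# A minimal sketch of the stellar continuum fit, assuming the public
# Python implementation of pPXF. All inputs are assumed to be prepared
# beforehand (log-rebinned spectrum, templates, noise, velscale, mask).
from ppxf.ppxf import ppxf

def fit_continuum(galaxy, noise, templates, velscale, goodpixels):
    """Fit the stellar continuum of one binned spectrum with pPXF.

    galaxy, noise : log-rebinned spectrum and 1-sigma noise (1-D arrays)
    templates     : 2-D array of log-rebinned SPS templates
    velscale      : velocity scale of the spectrum in km/s
    goodpixels    : indices of pixels free of emission lines
    """
    start = [0.0, 100.0]          # initial (v, sigma) guess in km/s
    pp = ppxf(templates, galaxy, noise, velscale, start,
              goodpixels=goodpixels, degree=4, moments=2)
    return pp.bestfit             # best-fit continuum model
\end{verbatim}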
The continuum-subtracted [O~{\sc iii}] $\lambda$$\lambda$4959,5007\ emission lines were then fitted with multiple Gaussian components using the IDL library MPFIT \citep{mpfit}. The line centers and line widths of the corresponding Gaussian components of both lines were tied together, and only the amplitudes were allowed to change freely. We did not fix the relative amplitude ratios of the doublet so that a fit was allowed when a Gaussian component was only detected in [O~{\sc iii}] $\lambda$5007\ but not in [O~{\sc iii}] $\lambda$4959. We checked the flux ratios of the doublet from the best-fit results afterwards when applicable and found that they were very close to the theoretical expectation (within 2\%). We allowed a maximum of three Gaussian components in the fits, and the required number of components in each spaxel was determined by a combination of software automation and visual inspection:
An additional component was added to the best-fit model only when 1) it was broader than the spectral resolution; 2) it had a S/N $>$ 2; and 3) it was not so broad that it could not be robustly distinguished from the continuum (i.e., the peak S/N per spectral channel was required to be greater than 1.5 when the line width W$_{80}$\ was greater than 800 {km s$^{-1}$}). The best-fit parameters from the continuum and emission line fits were adopted as initial parameters for a second fit to check for convergence.
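The structure of these constrained fits can be sketched in Python with the lmfit package (in place of the IDL MPFIT routine actually used): the centers and widths of the $\lambda$4959 components are tied to those of the $\lambda$5007 components through the rest-wavelength ratio, while the amplitudes remain free, as described above. The component and parameter names below are illustrative.
\begin{verbatim}
# A minimal sketch of the tied multi-Gaussian [O III] fit, using lmfit
# instead of the IDL MPFIT routine used in the actual analysis.
from lmfit.models import GaussianModel

RATIO = 4958.91 / 5006.84   # rest-wavelength ratio of the [O III] doublet

def build_o3_model(n_comp):
    """Return an lmfit model of n_comp tied Gaussian pairs (n_comp <= 3)."""
    model = None
    for i in range(1, n_comp + 1):
        pair = (GaussianModel(prefix=f'c{i}_o5007_')
                + GaussianModel(prefix=f'c{i}_o4959_'))
        model = pair if model is None else model + pair
    params = model.make_params()
    for i in range(1, n_comp + 1):
        # Tie center and width of 4959 to 5007; amplitudes stay free.
        params[f'c{i}_o4959_center'].set(expr=f'c{i}_o5007_center * {RATIO}')
        params[f'c{i}_o4959_sigma'].set(expr=f'c{i}_o5007_sigma * {RATIO}')
    return model, params

# Usage (wave, flux: continuum-subtracted spectrum; initial guesses needed):
#   model, params = build_o3_model(2)
#   params['c1_o5007_center'].set(value=5100.0)
#   result = model.fit(flux, params, x=wave)
\end{verbatim}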
In order to check how the uncertainties on the fit to the stellar continuum might affect the results on the [O~{\sc iii}] $\lambda$$\lambda$4959,5007\ emission lines, we also tried fitting the continuum with a straight line through the continuum-only windows adjacent to the [O~{\sc iii}] $\lambda$$\lambda$4959,5007\ emission lines. The differences of the best-fit parameters of the [O~{\sc iii}] $\lambda$$\lambda$4959,5007\ emission lines between the two continuum fitting schemes were on average less than 2\%, indicating that the best-fit results were not sensitive to the choice of continuum fitting function in most cases.
Examples of the multi-Gaussian fits, using the KCWI spectra of targets J0842$+$03\ and J1005$+$12, are shown in Figs.\ \ref{fig:J0842o3fit} and \ref{fig:J1005o3fit}, respectively.
For J0842$+$03, a model with one Gaussian component cannot fit the spectra well ($\chi_{\nu}$\ $>>$ 1). Two Gaussian components, the narrower C1 component and the broader C2 component, are enough to describe the [O~{\sc iii}] $\lambda$$\lambda$4959,5007\ emission profiles. For J1005$+$12, neither a model with one Gaussian component nor one with two Gaussian components can fit the [O~{\sc iii}] $\lambda$$\lambda$4959,5007\ profiles well ($\chi_{\nu}$\ $>>$ 1 and $\chi_{\nu}$\ $=$ 3.36, respectively). Three Gaussian components are needed to properly fit the [O~{\sc iii}] $\lambda$$\lambda$4959,5007\ line emission: the narrowest component (C1), the intermediate-width component (C2), and the broadest component (C3). For the rest of the paper, we name the individual velocity components following the same rule, i.e., the C1, C2, and C3 components are defined by their increasing line widths.
The results from these fits are discussed in detail in Appendix \ref{4} and summarized in Section \ref{5}.
\subsubsection{Emission Line Fits to the Full Spectral Range} \label{322}
Emission line fits to the full spectral range were also carried out where all of the strong emission lines (\mbox {H$\alpha$}, \mbox {H$\beta$}, [O~{\sc iii}] $\lambda$$\lambda$4959,5007, [N~{\sc ii}] $\lambda$$\lambda$6548,6583, [S~{\sc ii}] $\lambda$$\lambda$6716,6731, and [O~{\sc i}] $\lambda$6300\ in the GMOS data, \mbox {H$\beta$}, {H$\gamma$}, [O~{\sc ii}] $\lambda$$\lambda$3726,3729, [Ne~{\sc iii}] $\lambda$3869, and [O~{\sc iii}] $\lambda$$\lambda$4959,5007\ in the KCWI data) were fit simultaneously. The continuum-subtracted spectra obtained from Section \ref{321} were adopted for these fits. Following the routine adopted for the fit of the [O~{\sc iii}] $\lambda$$\lambda$4959,5007\ emission lines alone, all of the emission lines were fitted with multiple Gaussian components, where the line centers and widths of the corresponding Gaussian components for each line were tied together. For each target, the maximum number of Gaussian components used in the fit was determined from the best fits of [O~{\sc iii}] $\lambda$$\lambda$4959,5007\ emission described in Section \ref{321}. Based on the best-fit results obtained above, we did not detect additional, distinct broad hydrogen Balmer line emission that can be attributed to a genuine broad-line-region (BLR) in any of the eight targets.
\subsection{Non-Parametric Measurements of the Emission Line Profiles} \label{33}
\begin{figure}[!htb]
\plotone{f3.pdf}
\caption{Example of a line profile illustrating the various non-parametric kinematic parameters used in this paper. The vertical dashed lines mark the locations of {v$_{10}$}, {v$_{50}$}, and {v$_{90}$}\ for the mock emission line profile shown in the figure. W$_{80}$\ is the line width between {v$_{90}$}\ and {v$_{10}$}.}
\label{fig:nonpar}
\end{figure}
Non-parametric line profile measurements were utilized to describe the gas kinematics for both the individual Gaussian components and the overall line profiles. The details are described below, with a short numerical sketch following the list, and an example is shown in Fig. \ref{fig:nonpar}.
i. {v$_{10}$}\ and {v$_{90}$}\ are the velocities at the 10th and 90th percentiles of the total flux, respectively, calculated starting from the red side of the line.
ii. W$_{80}$\ is the line width defined to encompass 80 percent of the total flux such that W$_{80}$={v$_{10}$} $-$ {v$_{90}$}.
iii. {v$_{50}$}\ is the median velocity, the velocity at the 50th percentile of the total flux.
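The following Python sketch illustrates how these quantities can be computed from a line profile sampled on a velocity grid; it is a schematic implementation of the definitions above, not the exact code used in our analysis.
\begin{verbatim}
# A minimal sketch of the non-parametric measurements v10, v50, v90, W80.
import numpy as np

def nonparametric_velocities(vel, flux):
    """Return v10, v50, v90 (km/s) and W80 of an emission-line profile.

    vel  : 1-D velocity grid in km/s, monotonically increasing
    flux : non-negative line profile sampled on vel (arbitrary units)
    """
    cdf = np.cumsum(flux) / np.sum(flux)   # cumulative flux from the blue side
    v10 = np.interp(0.90, cdf, vel)        # 10th percentile from the red side
    v50 = np.interp(0.50, cdf, vel)        # median velocity
    v90 = np.interp(0.10, cdf, vel)        # 90th percentile from the red side
    w80 = v10 - v90                        # W80 = v10 - v90
    return v10, v50, v90, w80
\end{verbatim}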
\subsection{AGN Luminosities} \label{34}
The bolometric AGN luminosities (L$_{AGN}$) of our targets were calculated from the extinction-corrected [O~{\sc iii}] $\lambda$5007\ luminosities integrated over the entire IFS data cubes (L$_{[O~{\sc III}]}$)\footnote{Based on the [O II]/[O III] vs [O III]/\mbox {H$\beta$}\ diagrams drawn from the KCWI data, at least $\sim$90\% of the spaxels show AGN-like line ratios in each target. Consistently, all of our targets show AGN-like line ratios in the BPT and VO87 diagrams based on the Keck/LRIS spectra extracted from the central 1\arcsec\ box regions. Moreover, for targets J0842$+$03\ and J0906$+$56, where the BPT and VO87 diagrams can be derived from the GMOS IFU data, we find that the spaxels with AGN-like line ratios contribute at least $\sim$95\% of the [O III] flux. Overall, the [O III] luminosities integrated over the entire data cubes are thus at most slight overestimates of the [O III] luminosities originating from the AGN.}. The extinction correction was determined from the Balmer decrement based on the spatially-integrated spectrum, assuming an intrinsic \mbox {H$\alpha$}/\mbox {H$\beta$}\ ratio of 2.87\footnote{While studies have shown that the intrinsic H$\alpha$/H$\beta$\ ratio of AGN is 3.1 \citep[][]{Osterbrock2006}, we adopt the value 2.87 since (1) the intrinsic Balmer line ratios of AGN in these dwarf galaxies are poorly constrained due to a lack of dedicated studies; (2) in Section \ref{63}, we will compare our results on the outflow energetics with those from previous studies \citep[e.g.][]{Harrison2014,Rupke2017} that adopted the value 2.87. Nevertheless, if we adopt instead an intrinsic H$\alpha$/H$\beta$\ value of 3.1 in our calculations, the derived AGN luminosity will only decrease by $\sim$0.1 dex for our targets.} for the GMOS data, or an intrinsic \mbox {H$\beta$}/{H$\gamma$}\ ratio of 2.13 for the KCWI data \citep[Case B, T=10$^4$ K;][]{Osterbrock2006}, and the \citet{ccm89} extinction curve with $R_V$ $=$ 3.1. For J0100$-$01\ and J0811$+$23, where {H$\gamma$}\ is too weak to be measured robustly, the Balmer decrement was determined from the \mbox {H$\alpha$}/\mbox {H$\beta$}\ ratio measured from the SDSS spectra. We adopted the empirical bolometric correction factors of \citet{Lamastra2009}: L$_{AGN}$ $=$ 142 L$_{[O~{\sc III}]}$\ and L$_{AGN}$ $=$ 87 L$_{[O~{\sc III}]}$\ for 40 $<$ log(L$_{[O~{\sc III}]}$) $<$ 42 and 38 $<$ log(L$_{[O~{\sc III}]}$) $<$ 40 in cgs units, respectively. Note that the AGN luminosities calculated here may be affected by relatively large systematic errors, since the intrinsic Balmer line ratio, the shape of the extinction curve, and the L$_{[O~{\sc III}]}$\ to L$_{AGN}$\ correction factor in systems like our targets are uncertain. The observed L$_{[O~{\sc III}]}$\ and derived L$_{AGN}$\ are summarized in Table \ref{tab:targets}.
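As a worked illustration of this calculation for the H$\alpha$/H$\beta$ case, the Python sketch below derives E(B$-$V) from the Balmer decrement, corrects the observed [O~{\sc iii}] luminosity, and applies the bolometric correction. The extinction coefficients $k(\lambda)$ are approximate CCM ($R_V=3.1$) values assumed for the sketch, not values quoted in the text; the H$\beta$/H$\gamma$ case used for the KCWI data is analogous.
\begin{verbatim}
# A worked sketch of the bolometric AGN luminosity estimate described
# above (Halpha/Hbeta case). The k(lambda) values are approximate CCM89
# (R_V = 3.1) coefficients and are assumptions of this sketch.
import numpy as np

K_HBETA, K_HALPHA, K_O3 = 3.61, 2.53, 3.47   # approximate CCM89 k(lambda)

def agn_luminosity(l_o3_obs, halpha_hbeta_obs,
                   intrinsic_ratio=2.87, c_bol=142.0):
    """Return extinction-corrected L([O III]) and bolometric L_AGN (erg/s).

    l_o3_obs         : observed [O III] 5007 luminosity in erg/s
    halpha_hbeta_obs : observed Halpha/Hbeta flux ratio
    intrinsic_ratio  : assumed intrinsic Balmer decrement (2.87, Case B)
    c_bol            : [O III]-to-bolometric factor (87 or 142; Lamastra+09)
    """
    ebv = 2.5 / (K_HBETA - K_HALPHA) * np.log10(halpha_hbeta_obs
                                                / intrinsic_ratio)
    ebv = max(ebv, 0.0)                  # no negative extinction
    a_o3 = K_O3 * ebv                    # extinction at 5007 A in mag
    l_o3_corr = l_o3_obs * 10 ** (0.4 * a_o3)
    return l_o3_corr, c_bol * l_o3_corr
\end{verbatim}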
\subsection{Upper Limits on the Star Formation Rates} \label{35}
Robust star formation rate (SFR) measurements of our targets cannot be obtained due to the lack of sensitive far-infrared data: none of the targets is detected in the IRAS or AKARI all-sky surveys. An order-of-magnitude estimate of the SFR for our targets can be derived by dividing the stellar mass by the Hubble time, assuming a constant star formation rate. For a stellar mass of log($M_{\rm stellar}$/\ensuremath{\mathrm{M}_{\odot}}) $=$ 9.5, this gives a SFR on the order of 0.2 {M$_{\sun}$ yr$^{-1}$}, an order of magnitude lower than the upper limits derived from the far-infrared data.
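For reference, this order-of-magnitude estimate amounts to the following simple division (with an assumed Hubble time of 13.8 Gyr):
\begin{verbatim}
# Order-of-magnitude SFR check: stellar mass divided by the Hubble time,
# for log(M*/Msun) = 9.5 as quoted in the text.
m_star = 10 ** 9.5        # stellar mass in Msun
t_hubble = 13.8e9         # Hubble time in yr (assumed value)
sfr = m_star / t_hubble
print(round(sfr, 2))      # ~0.23 Msun/yr, i.e. of order 0.2
\end{verbatim}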
Star formation rates may also be estimated from the [O~{\sc ii}] $\lambda$$\lambda$3726,3729\ luminosities (L$_{[O~{\sc II}]}$) in AGN \citep[e.g.][]{Ho2005}. The derived SFR are in principle upper limits on the intrinsic SFR since the AGN contributes to the [O~{\sc ii}] $\lambda$$\lambda$3726,3729\ fluxes. Adopting equation (10) of \citet{Kewley2004}, we follow the same recipe as \citet[][]{Ho2005}, in which 1/3 of the [O~II] emission comes from star formation activity. L$_{[O~{\sc II}]}$\ was measured from the spatially-integrated KCWI spectra and was corrected for extinction in the same way as L$_{[O~{\sc III}]}$. The gas-phase metallicity of the targets adopted in these calculations was assumed to be solar (this is based on our ionization diagnostics in Section \ref{53}; given that the [O~{\sc ii}] $\lambda$$\lambda$3726,3729\ flux is dominated by the nuclear region and contaminated by AGN emission, a metallicity higher than the prediction from the stellar mass--metallicity relation is not surprising). These results are summarized in Table \ref{tab:targets}. If we instead use 0.5 $\times$ solar (LMC-like) metallicity \citep[e.g.][]{LMCmetal1999} in the calculations, the upper limits on the SFR become $\sim$20\%\ lower. Therefore, the upper limits recorded in Table \ref{tab:targets} are conservatively high.
To assess the upper limits on the SFR derived above, we have also compared them to the median SFR listed in the MPA-JHU DR7 catalog based on SDSS data \citep{Brinchmann2004}. One caveat of the SFR from the MPA-JHU DR7 catalog is that it misclassifies 6 of the 8 targets studied here as starburst/star-forming galaxies; for these 6 targets, there could therefore be significant systematic errors in the cataloged SFR. Moreover, even for the two targets classified as AGN (J0811$+$23\ and J1005$+$12), the treatment of AGN contamination might still introduce systematic errors in the SFR measurements. Nevertheless, from the comparison we find that (a) the median SFR measured within the SDSS fibers in the MPA-JHU catalog are all below our [O II]-based upper limits except for J0811$+$23\ (SFR $\simeq$ 0.02 {M$_{\sun}$ yr$^{-1}$}\ from the fiber SFR in the catalog vs SFR $<$ 0.01 {M$_{\sun}$ yr$^{-1}$}\ from our [O II] data); (b) even if we consider the total SFR (corrected for fiber loss) listed in the MPA-JHU catalog, only three targets show clearly higher SFR in the catalog than our [O II]-based upper limits (the largest difference is seen for J0906$+$56: total SFR $\simeq$ 0.74 {M$_{\sun}$ yr$^{-1}$}\ in the catalog vs SFR $<$ 0.3 {M$_{\sun}$ yr$^{-1}$}\ from our data), while the SFR of J0842$+$03\ in the catalog is only 1/10 of the upper limit measured from our [O II] data. These differences are likely caused by the fact that the AGN emission in these targets is not modelled properly in the MPA-JHU catalog. In general, our [O II]-based upper limits are not systematically lower than the values from the MPA-JHU catalog.
\section{Outflows Detected in the Sample} \label{5}
The main results from our analysis of the IFS data are summarized in this section. The target-specific maps of the [O~{\sc iii}] $\lambda$5007\ flux and kinematics, globally and for each velocity component, the stellar kinematics, and the radial profiles of the fluxes of the individual velocity components are discussed in Appendix \ref{4} (Fig. \ref{fig:o3map1}--\ref{fig:radial6}). In addition, line ratio maps and spatially resolved BPT and VO87 diagrams are shown for J0842$+$03\ (Fig. \ref{fig:J0842bpt1} and \ref{fig:J0842bpt2}) and J0906$+$56\ (Fig. \ref{fig:J0906bpt2} and \ref{fig:J0906bpt3}). In all cases, the systemic velocities of our targets are determined from the stellar velocities measured from the spectra integrated over the whole KCWI data cubes.
\begin{deluxetable*}{ccccrrrrrrr}[!htb]
\tablecolumns{11}
\tablecaption{Kinematic Properties of the Targets\label{tab:kinematics}}
\tablehead{
\colhead{Name} & \colhead{N$_{comp}$} & \colhead{Component} & \colhead{Data Set} & \colhead{Median {v$_{50}$}} & \colhead{Min {v$_{50}$}} & \colhead{Max {v$_{50}$}} & \colhead{Median W$_{80}$} & \colhead{Max W$_{80}$} & \colhead{{v$_{50}$}, int.} & \colhead{W$_{80}$, int.} \\
& & & & \colhead{[{km s$^{-1}$}]} & \colhead{[{km s$^{-1}$}]} & \colhead{[{km s$^{-1}$}]} & \colhead{[{km s$^{-1}$}]} & \colhead{[{km s$^{-1}$}]} & \colhead{[{km s$^{-1}$}]} & \colhead{[{km s$^{-1}$}]} \\
\colhead{(1)} & \colhead{(2)} & \colhead{(3)} & \colhead{(4)} & \colhead{(5)} & \colhead{(6)} & \colhead{(7)} & \colhead{(8)} & \colhead{(9)} & \colhead{(10)} & \colhead{(11)}
}
\startdata
\hline
J0100$-$01 & 2 & C1 & KCWI & $-$20 & $-$60 & 0 & 120 & 210 & ... & ... \\
& & C2 & KCWI & $-$40 & $-$240 & 50 & 310 & 650 & ... & ... \\
& & Total & KCWI & $-$20 & $-$130 & 0 & 220 & 440 & $-$20 & 150 \\
\\
J0811$+$23 & 1 & C1 & KCWI & $-$40 & $-$60 & $-$20 & 140 & 220 & $-$40 & 150 \\
\\
J0840$+$18 & 1 & C1 & KCWI & $-$10 & $-$30 & 20 & 50 & 130 & $-$10 & 50 \\
\\
J0842$+$03 & 2 & C1 & GMOS & $-$80 & $-$110 & $-$20 & 130 & 250 & ... & ... \\
& & C2 & GMOS & $-$160 & $-$220 & $-$110 & 500 & 650 & ... & ... \\
& & Total & GMOS & $-$110 & $-$150 & $-$80 & 400 & 520 & $-$120 & 420 \\
& & C1 & KCWI & $-$30 & $-$60 & 10 & 150 & 220 & ... & ... \\
& & C2 & KCWI & $-$110 & $-$160 & $-$40 & 500 & 750 & ... & ... \\
& & Total & KCWI & $-$70 & $-$110 & $-$20 & 400 & 700 & $-$60 & 320 \\
\\
J0906$+$56 & 3 & C1 & GMOS & $-$10 & $-$30 & 30 & 30\tablenotemark{a} & 30\tablenotemark{a} & ... & ... \\
& & C2 & GMOS & 30 & $-$10 & 60 & 350 & 410 & ... & ... \\
& & C3 & GMOS & $-$50 & $-$100 & 40 & 920 & 1200 & ... & ... \\
& & Total & GMOS & 0 & $-$20 & 20 & 550 & 650 & 10 & 570 \\
& & C1 & KCWI & $-$10 & $-$50 & 50 & 110 & 140 & ... & ... \\
& & C2 & KCWI & 60 & 30 & 90 & 430 & 680 & ... & ... \\
& & C3 & KCWI & $-$70 & $-$150 & 10 & 980 & 1250 & ... & ... \\
& & Total & KCWI & 10 & $-$50 & 50 & 520 & 670 & 20 & 420 \\
\\
J0954$+$47 & 3 & C1 & KCWI & 10 & 0 & 20 & 70 & 100 & ... & ... \\
& & C2 & KCWI & 0 & $-$70 & 20 & 260 & 430 & ... & ... \\
& & C3 & KCWI & $-$60 & $-$80 & 0 & 730 & 1100 & ... & ... \\
& & Total & KCWI & 0 & $-$10 & 10 & 240 & 530 & 0 & 220 \\
\\
J1005$+$12 & 3 & C1 & KCWI & $-$20 & $-$40 & 10 & 80 & 120 & ... & ... \\
& & C2 & KCWI & $-$30 & $-$100 & 50 & 440 & 710 & ... & ... \\
& & C3 & KCWI & $-$140 & $-$200 & $-$60 & 730 & 1200 & ... & ... \\
& & Total & KCWI & $-$30 & $-$60 & 10 & 300 & 680 & $-$30 & 260 \\
\\
J1009$+$26 & 2 & C1 & KCWI & $-$10 & $-$30 & 0 & 80 & 100 & ... & ... \\
& & C2 & KCWI & $-$20 & $-$60 & 40 & 210 & 480 & ... & ... \\
& & Total & KCWI & $-$10 & $-$50 & 10 & 90 & 150 & $-$20 & 90
\enddata
\tablecomments{Column (1): Short name of the target; Column (2): Number of velocity components required by the best-fit results from Section \ref{32}; Column (3): Individual velocity components (C1, C2, C3) and overall emission line profiles (Total) from the best fits; Column (4): Instrument used for the observations; Columns (5)-(7): Median, minimum, and maximum values of {v$_{50}$}\ measured across the whole data cube. The spaxels with the highest and lowest 5\%\ of {v$_{50}$}\ are excluded in the calculations. The values listed are rounded to the nearest 10 {km s$^{-1}$}; Columns (8)--(9): Median and maximum values of W$_{80}$\ measured across the whole data cube. The spaxels with the highest and lowest 5\%\ of W$_{80}$\ are excluded in the calculations. The values listed are rounded to the nearest 10 {km s$^{-1}$}; Columns (10)--(11): {v$_{50}$}\ and W$_{80}$\ of the overall emission line profiles from the spatially-integrated spectra of the whole data cubes. The values listed are rounded to the nearest 10 {km s$^{-1}$}.}
\tablenotetext{a}{Compared with the KCWI data, the GMOS data have poorer spectral resolution (FWHM $\simeq$ 100 {km s$^{-1}$}\ vs 80 {km s$^{-1}$}) and shallower depth (see Table \ref{tab:obs}). The significantly smaller line width of the C1 component measured in the GMOS data is thus most likely due to the decomposition of the emission line profile being less well constrained in the GMOS data. Therefore, for this target, we adopt the KCWI-based line width measurements of the C1 component as the fiducial values in our analysis.}
\end{deluxetable*}
\begin{figure*}[!htb]
\epsscale{1.25}
\plotone{f4.pdf}
\caption{Median velocity ({v$_{50}$}) maps (in units of {km s$^{-1}$}) for the velocity components of the [O~{\sc iii}] $\lambda$5007\ emission showing evidence of outflows in the seven targets with detected outflows. An overview of these outflow components is presented in Section \ref{51} and the detailed analyses of these components in individual targets are presented in Appendix \ref{4}. The name of the target and the corresponding velocity component is noted at the bottom of each panel: the components showing strong evidence for outflows are labelled in black and marked with asterisks, whereas those with relatively more uncertain origins are labelled in red, as stated in Section \ref{51} and discussed in detail in Appendix \ref{4}. The color scale of each panel is set to be the same as that of the corresponding target-specific map in Appendix \ref{4}, except for those of the C2 and C1 components of J0842$+$03, where the color scales are centered on 0 {km s$^{-1}$}\ instead. The black cross in each panel denotes the spaxel where the peak of the total [O~{\sc iii}] $\lambda$5007\ emission line flux falls.}
\label{fig:snapshot1}
\end{figure*}
\begin{figure*}[!htb]
\epsscale{1.25}
\plotone{f5.pdf}
\caption{Same as Fig. \ref{fig:snapshot1} but for line width W$_{80}$\ (in units of {km s$^{-1}$}).}
\label{fig:snapshot2}
\end{figure*}
\subsection{Gas Kinematics across Our Sample} \label{51}
The gas kinematic properties of the galaxies in our sample are summarized in Table \ref{tab:kinematics}. This includes basic statistics (min, max, median) on {v$_{50}$}\ and W$_{80}$\ for individual velocity components and the entire [O~{\sc iii}] $\lambda$5007\ line emission across the data cubes, as well as measurements of {v$_{50}$}\ and W$_{80}$\ from the spatially-integrated spectra.
Overall, we find that the number of velocity components needed to adequately fit the emission line profiles is three for J0906$+$56, J0954$+$47, and J1005$+$12, two for J0100$-$01, J0842$+$03, and J1009$+$26, and one for J0811$+$23\ and J0840$+$18.
The kinematic properties of the C3 components in J0906$+$56, J0954$+$47, and J1005$+$12, and the C2 components in J0100$-$01\ and J0842$+$03, show strong evidence for outflows since they are very broad and/or significantly blueshifted with respect to the stellar velocity field derived from the same data (their names are shown in black and marked with asterisks in Fig.\ \ref{fig:snapshot1} and \ref{fig:snapshot2}). The kinematic properties of the C2 components in J0906$+$56, J0954$+$47, and J1005$+$12, as well as the C1 component in J0842$+$03, also suggest that they are at least part of, or affected by, the outflows in these systems. In addition, given the peculiar kinematics of the C2 component in J1009$+$26\ and the C1 component in J0811$+$23\ relative to that of the stellar component, we argue in Appendix \ref{4} that they also likely represent outflowing gas in these objects. These last two groups of velocity components have relatively more ambiguous origins than the first group, so their names are shown in red in Fig.\ \ref{fig:snapshot1} and \ref{fig:snapshot2} to distinguish them from the first group. In the following discussion, we associate all of these velocity components with the outflows in these seven objects and refer to them as outflow components by default. In the end, only J0840$+$18\ does not show any sign of outflowing gas in our IFS data, so it is omitted from the following discussion of the outflows, except when mentioned explicitly.
The kinematic properties of the outflows in these seven targets span a relatively large range in line width and median velocity. Quantitatively, the maxima of W$_{80}$\ range from $\sim$220 {km s$^{-1}$}\ to $\sim$1200 {km s$^{-1}$}, and the minima of {v$_{50}$}\ range from $\sim$30 {km s$^{-1}$}\ to $\sim$$-$240 {km s$^{-1}$}\ based on our IFU data. The apparent morphology of these outflow-tracing components is in general symmetric with respect to the galaxy center, except for J0100$-$01\ and J1009$+$26, which show biconical morphologies in projection. In addition, significant non-radial velocity gradients/structures are also seen for the outflow components of targets J0811$+$23\ and J0842$+$03, as well as the C2 components of targets J0954$+$47\ and J1005$+$12\ (see Fig. \ref{fig:snapshot1} and \ref{fig:snapshot2} for snapshots of the {v$_{50}$}\ and W$_{80}$\ maps of these components, and Appendix \ref{4} for additional target-specific flux and kinematic maps).
\subsection{Spatial Extents of the Outflows} \label{52}
A key question is whether the outflows detected in our targets are extended on galactic scales. As shown in Fig. \ref{fig:o3map1}--\ref{fig:radial6} and discussed in Appendix \ref{4}, the analysis of the IFS data has revealed spatially resolved structures in the velocity fields of the outflow components, as well as excess flux relative to the PSF, in J0100$-$01, J0811$+$23, J0842$+$03, and J1009$+$26, strongly suggesting that the outflows in these galaxies are spatially resolved.
Similarly, the C2 components in J0906$+$56, J0954$+$47, and J1005$+$12, and the C1 component in J0842$+$03\ are also probably spatially resolved. However, the results of our analysis are inconclusive for the C3 components of J0906$+$56, J0954$+$47, and J1005$+$12.
An independent constraint on the spatial extent of the outflow components in J0906$+$56, J0954$+$47, and J1005$+$12\ may be derived from a more formal deconvolution of the data cubes.
For this, we follow the procedure explained in detail below, which is a simplified version of the deconvolution scheme introduced in \citet{Rupke2017}. First, we assume that the flux in the spaxel with the peak emission line flux (a 0.2\arcsec$\times$0.2\arcsec\ box for the GMOS data and a 0.15\arcsec$\times$0.15\arcsec\ one for the KCWI data) is dominated by AGN emission. The spectrum from this spaxel is treated as an AGN emission template from a point source. Next, we fit the spectrum of each spaxel $n$ with this AGN template $+$ smooth exponential continuum functions $+$ host emission lines, according to:
\begin{equation}
I^n_{total} = C_{AGN}I^n_{AGN} + I^n_{exp, continuum}+I^n_{emission}
\end{equation}
The scaling factor C$_{AGN}$ of the AGN emission template and the exponential continuum function in this equation are each the sum of four exponentials, so eq. (1) can be rewritten as:
\begin{equation}
I^n_{total} = \sum_{i=1}^{4}I_i^n I^n_{AGN} + \sum_{j=1}^{4}I_j^n + I^n_{emission}
\end{equation}
where the four exponentials are:
\begin{eqnarray}
I_1^n = a_1^n e^{-b_1^n<\lambda>}
\end{eqnarray}
\begin{eqnarray}
I_2^n = a_2^n e^{-b_{2}^n(1-<\lambda>)}
\end{eqnarray}
\begin{eqnarray}
I_3^n = a_3^n(1-e^{-b_{3}^n<\lambda>})
\end{eqnarray}
\begin{eqnarray}
I_4^n = a_4^n({1-e^{-b_{4}^n(1-<\lambda>)}}),
\end{eqnarray}
where ${a_i^n}\geq{0}$; ${b_i^n}\geq{0}$; ${<}\lambda{>}{=}\frac{\lambda-\lambda_{min}}{\lambda_{max}-\lambda_{min}}$; and $[\lambda_{min},\lambda_{max}]$ is the fit range. These exponentials are adopted because they are monotonic and positive-definite, and the four of them together allow for all combinations of concave/convex and monotonically increasing/decreasing shapes. We have not used stellar templates in these fits, since the stellar absorption features in individual spaxels are not strong enough to constrain the extra free parameters, and the fits diverge.
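For concreteness, the continuum model of equations (3)--(6) can be written compactly as a single Python function of the normalized wavelength; the function below is a schematic transcription of those equations, not the fitting code itself.
\begin{verbatim}
# A minimal sketch of the four positive, monotonic exponential continuum
# terms of eqs. (3)-(6), evaluated on the normalized wavelength <lambda>.
import numpy as np

def exp_continuum(wave, a, b, wave_min, wave_max):
    """Sum of the four exponential terms of eqs. (3)-(6).

    wave : wavelength array
    a, b : length-4 arrays of non-negative coefficients a_i, b_i
    """
    lam = (wave - wave_min) / (wave_max - wave_min)   # <lambda> in [0, 1]
    return (a[0] * np.exp(-b[0] * lam)
            + a[1] * np.exp(-b[1] * (1.0 - lam))
            + a[2] * (1.0 - np.exp(-b[2] * lam))
            + a[3] * (1.0 - np.exp(-b[3] * (1.0 - lam))))
\end{verbatim}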
The host emission lines are modeled with a maximum of two Gaussian components. The fits are iterative. In step 1), the cores of the emission lines are masked and the continuum is fit with the AGN template $+$ exponential continuum terms. In step 2), the best-fit model from step 1) is subtracted from the original spectrum, and the emission lines are fit. In step 3), the best-fit emission line models are used to determine a better emission line mask window for the continuum fit, and then steps 1) through 3) are repeated until the best-fit results are stable.
The results of this analysis on J0906$+$56, J0954$+$47, and J1005$+$12\ indicate 1) clear evidence for spatially extended narrow line emission originating on the scale of the host galaxy in all three targets; 2) blueshifted, broad line emission with a S/N of $\sim$3--8 tracing the outflows in the host galaxy in the spatially-stacked spectra for all three targets. The line widths of these components fall in between those of the C3 and C2 components in these targets; 3) but inconclusive (S/N $\lesssim$ 2 in general) evidence for spatially resolved line emission from the outflow components. The same analysis conducted on the other four targets confirms the presence of spatially resolved, blueshifted and/or redshifted velocity components from the host galaxy, corresponding to the outflow components detected in our more detailed kinematic analysis (Appendix \ref{4}).
Before concluding this section, it is important to repeat that the PSF deconvolution scheme described here relies on the assumption that the spectra used as AGN templates for these targets are indeed pure AGN emission (and thus from an unresolved point source). While the line ratios measured from these spectra fall in the AGN region of the BPT/VO87 diagrams (for the GMOS data of J0906$+$56) or the [O~{\sc ii}]/[O~{\sc iii}] vs [O~{\sc iii}]/H$\beta$\ diagram (for the KCWI data), there are reasons to believe that emission from the host galaxies themselves still contributes significantly to these spectra. First and foremost, weak to moderate (S/N $\simeq$ 2--9) Mg I$b$\ absorption features of stellar origin are detected in these spectra. In addition, we carried out a separate, power-law continuum $+$ stellar templates fit to the continuum emission of these spectra (in the ranges of $\sim$5000--7000 \AA\ for the GMOS data, and $\sim$3600--5500 \AA\ for the KCWI data), and an AGN-like power-law continuum component is not formally required by the best-fit results. Our PSF deconvolution procedure thus almost certainly overestimates (underestimates) the contribution from the unresolved AGN emission (resolved host emission), so the S/N of the spatially resolved outflow emission in J0906$+$56, J0954$+$47, and J1005$+$12\ obtained above should be considered conservative lower limits.
\begin{figure*}[!htb]
\begin{minipage}[t]{0.33\textwidth}
\includegraphics[width=\textwidth]{f6a.pdf}
\end{minipage}
\begin{minipage}[t]{0.33\textwidth}
\includegraphics[width=\textwidth]{f6b.pdf}
\end{minipage}
\begin{minipage}[t]{0.33\textwidth}
\includegraphics[width=\textwidth]{f6c.pdf}
\end{minipage}
\caption{[O~{\sc iii}]/H$\beta$\ vs [S~{\sc ii}]/H$\alpha$\ for the C2 (black) and C1 (gray) components of J0842$+$03, compared with AGN (left), shock (middle) and shock$+$precursor (right) models. The grids of the AGN models are color-coded by the power-law indices and ionization parameters of the AGN, and those of the shock and shock$+$precursor models are color-coded by the values of the magnetic parameter $b$ and the shock velocity v$_{shock}$. See Section \ref{53} for more details on these model parameters. In all three panels, the black, solid line is the theoretical boundary separating AGN (above right) from star-forming galaxies (below left) from \citet{Kewley2001}, and the black, dashed line is the theoretical boundary separating Seyferts (above left) from LINERs (below right) as defined in \citet{Kewley2006}.
\label{fig:lrmodel1}}
\end{figure*}
\begin{figure*}[!htb]
\begin{minipage}[t]{0.33\textwidth}
\includegraphics[width=\textwidth]{f7a.pdf}
\end{minipage}
\begin{minipage}[t]{0.33\textwidth}
\includegraphics[width=\textwidth]{f7b.pdf}
\end{minipage}
\begin{minipage}[t]{0.33\textwidth}
\includegraphics[width=\textwidth]{f7c.pdf}
\end{minipage}
\caption{ Same as Fig.\ \ref{fig:lrmodel1} but for the C3 (black) and C2 (gray) components of J0906$+$56.
\label{fig:lrmodel2}}
\end{figure*}
\begin{figure}[!htb]
\epsscale{2.3}
\plottwo{f8a.pdf}{f8b.pdf}
\caption{[S~{\sc ii}]/H$\alpha$\ ratios versus gas velocity dispersions for the outflow components in J0842$+$03\ (top) and J0906$+$56\ (bottom) based on the GMOS data.}
\label{fig:lrvel}
\end{figure}
\begin{figure*}[!htb]
\begin{minipage}[t]{0.33\textwidth}
\includegraphics[width=\textwidth]{f9a.pdf}
\end{minipage}
\begin{minipage}[t]{0.33\textwidth}
\includegraphics[width=\textwidth]{f9b.pdf}
\end{minipage}
\begin{minipage}[t]{0.33\textwidth}
\includegraphics[width=\textwidth]{f9c.pdf}
\end{minipage}
\begin{minipage}[t]{0.33\textwidth}
\includegraphics[width=\textwidth]{f9d.pdf}
\end{minipage}
\begin{minipage}[t]{0.33\textwidth}
\includegraphics[width=\textwidth]{f9e.pdf}
\end{minipage}
\begin{minipage}[t]{0.33\textwidth}
\includegraphics[width=\textwidth]{f9f.pdf}
\end{minipage}
\caption{[O~{\sc ii}]/[O~{\sc iii}] vs [O~{\sc iii}]/H$\beta$\ for the outflow components of all seven targets (1st row: C2 component in J0100$-$01, C1 component in J0811$+$23, C2 and C1 components in J0842$+$03, as well as C3 and C2 components in J0906$+$56; 2nd row: C3 and C2 components in J0954$+$47\ and J1005$+$12, as well as C2 component in J1009$+$26) based on the KCWI data, compared with AGN (left column), shock (middle column) and shock$+$precursor (right column) models (gray-scale model grids). The median values of the errors of the data points are noted by the black crosses on the top right corners. For the AGN models, the constant power-law indices of the AGN are drawn in solid lines and the constant ionization parameters are drawn in dashed lines. For the shock and shock$+$precursor models, the constant magnetic parameters $b$ are drawn in dashed lines, and the constant shock velocities v$_{shock}$ are drawn in solid lines. The red, solid lines represent the approximate upper boundary of line ratios that can be generated by star-forming activity, based on the Starburst99 models with continuous star formation history from \citet[][]{Levesque2010}. See Section \ref{53} for more details on the model parameters.}
\label{fig:lrmodelkcwi}
\end{figure*}
\begin{figure}[!htb]
\epsscale{2.3}
\plottwo{f10a.pdf}{f10b.pdf}
\caption{[O~{\sc iii}]/H$\beta$\ ratios versus gas velocity dispersions for the outflowing gas in all seven targets (The results are split into two panels for a better view of the data points) with detected outflows based on the KCWI data. The median values of the errors of the data points are shown as the black crosses in the top-right corners.}
\label{fig:lrvelkcwi}
\end{figure}
\subsection{Outflow Ionization: AGN or Shocks?} \label{53}
The line ratio maps and spatially resolved BPT and VO87 diagrams for J0842$+$03\ and J0906$+$56\ (Figs.\ \ref{fig:J0842bpt1} $-$ \ref{fig:J0842bpt2} and Figs.\ \ref{fig:J0906bpt2} $-$ \ref{fig:J0906bpt3} in Appendix \ref{4}, respectively) suggest that the outflows in our targets are largely photoionized by the AGN. Here we examine further the evidence that supports this statement. In particular, we examine the possibility that fast shocks caused by the interaction of the outflows with the surrounding ISM may contribute, or even dominate, the heating and ionization of the outflowing gas. Shock excitation is a telltale sign of fast starburst-driven winds \citep{VeilleuxRupke2002,Sharp2010}, and has also been suspected in a few AGN-driven outflows \citep[e.g.][]{Hinkle2019}.
First, we compare the BPT and VO87 line ratios measured in the clear outflow components, the C2 and C1 components in J0842$+$03\ and the C3 and C2 components in J0906$+$56, to those of typical AGN models \citep{Groves2004} and shock models \citep{Allen2008} extracted from the ITERA library \citep{ITERA}. For the AGN models, the free parameters are the gas number density, the metallicity, the photon index of the AGN continuum, $\alpha$, and the ionization parameter $U \equiv n_{\rm ion}/n_e$, where $n_{\rm ion}$ is the density of ionizing photons and $n_e$ is the electron density. We find that the line ratios probed by our data are not sensitive to the gas number density in the range (100--1000 cm$^{-3}$) relevant to our targets. We further compared AGN models with metallicities of 0.5 $\times$ solar and solar to the data, and conclude that the one with solar metallicity is a better match. Therefore, the gas number density and metallicity of the AGN model grids are fixed at 1000 cm$^{-3}$ and the solar value, respectively, in the following model comparison. For the shock models, we consider two types of models, one where only the ionization from the shock itself is considered (called shock model hereafter), and one where the ionization is caused by both the shock and the precursor region ahead of the shock front (called shock$+$precursor model hereafter). The free parameters for both sets of models are the pre-shock particle number density $n$, the metallicity, the shock velocity v$_{shock}$, and the magnetic parameter $b \equiv log[B/n^{\frac{1}{2}} / (1\ {\mu}G\ cm^{\frac{3}{2}}) ]$ (where $B$ is the transverse magnetic field). We have fixed the pre-shock particle number density $n$ to 1000 cm$^{-3}$ and the metallicity to the solar value, following the same set-up as for the AGN models. The full extent of the line ratio predictions from the shock and shock$+$precursor models with other density and metallicity settings is mostly covered by the model grids adopted here, and they are thus omitted from the discussion below.
The results for the [O~{\sc iii}]/H$\beta$\ vs [S~{\sc ii}]/H$\alpha$\ diagram are shown in Fig.\ \ref{fig:lrmodel1} for J0842$+$03\ and Fig.\ \ref{fig:lrmodel2} for J0906$+$56, where the comparison with the AGN, shocks, and shock$+$precursor models are displayed in the left, middle, and right panels, respectively. The results for the other two VO87 diagnostic diagrams, [O~{\sc iii}]/H$\beta$\ vs [N~{\sc ii}]/H$\alpha$\ and [O~{\sc iii}]/H$\beta$\ vs [O~{\sc i}]/H$\alpha$, are in general similar to those from the [O~{\sc iii}]/H$\beta$\ vs [S~{\sc ii}]/H$\alpha$\ diagram in terms of how well the data and the models match with each other. They are thus omitted in the following discussion.
For the C2 component of J0842$+$03, the AGN models match the observed line ratios with $-$3.5 $\lesssim$ log(U) $\lesssim$ $-$2 and $-$2 $\lesssim$ $\alpha$ $\lesssim$ $-$1.2. The shock models can reproduce the majority of the observed line ratios with relatively large $b$ parameters ($\gtrsim$1.5) and small shock velocities ($\lesssim$700 {km s$^{-1}$}). As for the shock$+$precursor models, either the observed [O~{\sc iii}]/H$\beta$\ ratios or the [S~{\sc ii}]/H$\alpha$\ ratios are systematically lower than the model predictions, by $\sim$0.3 dex on average. For the C1 component, most of the data points lie in, or close to, the region of star-forming galaxies in the diagram. This is consistent with their systematically lower [O~{\sc iii}]/H$\beta$\ ratios compared to the AGN models. In contrast, the shock and shock$+$precursor models are apparently better matches to the line ratios of the C1 component.
For J0906$+$56, the observed [O~{\sc iii}]/H$\beta$\ and [S~{\sc ii}]/H$\alpha$\ ratios of the C3 component can be mostly reproduced by AGN models with ionization parameters in the range of $-$3 $\lesssim$ log(U) $\lesssim$ $-$1 and photon indices in the full range provided by the model grids ($-$2 $<$ $\alpha$ $<$ $-$1.2). However, either the observed [O~{\sc iii}]/H$\beta$\ ratios or the [S~{\sc ii}]/H$\alpha$\ ratios are systematically larger than the predictions of the shock models by at least $\sim$0.3 dex, contrary to the case of J0842$+$03, and this discrepancy becomes larger as the shock velocity increases. Once the ionization from the precursor region is considered, the model predictions match the observed line ratios almost as well as the AGN models, although the data provide few constraints on the shock velocity and the $b$ parameter. As for the C2 component, the AGN models still match the data relatively well, except that $\sim$1/3 of the data points show slightly higher [O~{\sc iii}]/H$\beta$\ ratios. The shock models with relatively high $b$ parameters ($\gtrsim$1) are also a good match to the data. Finally, the shock$+$precursor models have some trouble explaining the $\sim$1/2 of the data points with lower [O~{\sc iii}]/H$\beta$\ ratios.
Overall, the AGN models more easily reproduce the observed [O~{\sc iii}]/H$\beta$\ and [S~{\sc ii}]/H$\alpha$\ ratios of the C2 component in J0842$+$03\ and the C3 component in J0906$+$56. The shock models generate line ratios consistent with the observations for J0842$+$03\ but not for J0906$+$56, while the shock$+$precursor models match the observations for J0906$+$56\ but not for J0842$+$03. As for the C1 component in J0842$+$03, the AGN models are a worse match to the data, which agrees with the expectation that this component is contaminated by emission from the host galaxy, as discussed in Appendix \ref{42}. Nevertheless, both the AGN and shock models can apparently explain the line ratios of the C2 component in J0906$+$56.
Second, as shown in Fig. \ref{fig:lrvel}, there is no positive correlation between the emission line widths ($\sigma_{gas}$) and the [S~{\sc ii}]/H$\alpha$\ line ratios for the individual outflow components of targets J0842$+$03\ and J0906$+$56, contrary to theoretical predictions \citep[e.g.,][]{Allen2008} and what is usually found in systems where shocks are the dominant source of ionization \citep[e.g.][]{Veilleux1995,Allen1999,Sharp2010,Rich2011,Rich2012,Rich2014,Ho2014}.
This conclusion still holds even when we consider the two outflow components together in each target. Overall, these results suggest that shock ionization is not important in J0842$+$03\ and J0906$+$56. The outflowing gas in these two objects thus appears to be primarily photoionized by the AGN.
For the other targets, where only KCWI data are available, the [N~{\sc ii}] $\lambda$$\lambda$6548,6583, [S~{\sc ii}] $\lambda$$\lambda$6716,6731, [O~{\sc i}] $\lambda$6300, and \mbox {H$\alpha$}\ emission lines are not covered by the data, so we cannot directly compare the results with model predictions in the BPT and VO87 diagrams. Instead, we compare the KCWI-based line ratios of the outflow components with model predictions in the [O~{\sc ii}]/[O~{\sc iii}] vs [O~{\sc iii}]/H$\beta$\ diagrams, as shown in Fig. \ref{fig:lrmodelkcwi}. The emission line fluxes are extinction corrected in the same way as stated in Section \ref{34}. The same AGN, shock, and shock$+$precursor models as those shown in Fig. \ref{fig:lrmodel1} and Fig. \ref{fig:lrmodel2} are adopted in this analysis. Additionally, we plot the approximate upper boundary of line ratios predicted by a set of star-forming galaxy models from \citet[][]{Levesque2010} as a red solid line in Fig. \ref{fig:lrmodelkcwi}. Excluding the outflow components with possible contributions from non-outflowing gas (i.e., the C2 components in J0906$+$56, J0954$+$47, and J1005$+$12, as well as the C1 component in J0842$+$03), the results suggest that: (1) the star-forming models cannot reproduce the observed line ratios of the outflowing gas in the targets, indicating that massive young stars are not the dominant ionization source of the outflowing gas; (2) the predictions from the shock models can match the observed line ratios of the outflowing gas relatively well, although the models may not be able to explain the observed data with the highest [O~{\sc iii}]/H$\beta$\ ratios and lowest [O~{\sc ii}]/[O~{\sc iii}]\ ratios; (3) the AGN and shock$+$precursor models can explain the observed line ratios equally well and are both slightly better matches to the observations than the shock models. Moreover, the C2 component of J0906$+$56\ and the C1 component of J0842$+$03\ have lower [O~{\sc ii}]/[O~{\sc iii}]\ ratios than the predictions of all three model sets in general, and the C2 component of J0954$+$47\ has lower [O~{\sc iii}]/H$\beta$\ ratios than those of the AGN models. These results are consistent with our conclusions in Appendices \ref{42}, \ref{41}, and \ref{43} that these outflow components are partially contaminated by emission from non-outflowing gas.
Next, we have examined the [O~{\sc iii}]/H$\beta$\ line ratios vs the emission line widths ($\sigma_{gas}$) based on the KCWI data for all seven targets with detected outflows in Fig.\ \ref{fig:lrvelkcwi}. To the first order, one would expect a positive correlation between the [O~{\sc iii}]/H$\beta$\ line ratios and gas velocity dispersions \citep[e.g., see Fig. 16 \& 17 in][]{Allen2008}. However, no such clear correlation is seen in our data, which is a similar conclusion to that derived from the [S~{\sc ii}]/H$\alpha$\ ratios. In addition, for the C2 components in both J0100$-$01\ and J1009$+$26, their observed line widths are significantly smaller (by $\sim$300--400 {km s$^{-1}$}\ on average) than the shock velocities predicted by the shock and shock$+$precursor models shown in the middle and right columns of Fig. \ref{fig:lrmodelkcwi}. This is apparently contradictory to the expectation that the emission line velocity dispersion reflects the shock velocity when the shocks dominate the ionization of the gas. These results again suggest that shock ionization is not important in our targets.
Overall, our analysis indicates that AGN is most likely the dominant source of ionization for the outflows in our targets.
\subsection{Electron Densities of the Outflows} \label{54}
The electron density, n$_e$, in the ionized gas may be derived from the [S~{\sc ii}] $\lambda$6716/[S~{\sc ii}] $\lambda$6731\ ratios or [O~{\sc ii}] $\lambda$3726/[O~{\sc ii}] $\lambda$3729\ ratios, following well-established calibrations \citep[e.g.,][]{Sanders2016}.
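For reference, a minimal sketch of this conversion is given below. The functional form and the [S~{\sc ii}] coefficients are the values we assume from the \citet{Sanders2016} calibration; they are quoted here only to make the conversion explicit and should not be read as a definitive implementation.
\begin{verbatim}
def electron_density_sii(ratio_6716_6731):
    # n_e = (c*R - a*b) / (a - R), with R = [S II] 6716/6731;
    # a, b, c are the assumed [S II] coefficients of the
    # Sanders et al. (2016) calibration.
    a, b, c = 0.4315, 2107.0, 627.1
    R = ratio_6716_6731
    return (c * R - a * b) / (a - R)   # n_e in cm^-3

print(electron_density_sii(1.2))       # a ratio of ~1.2 gives n_e ~ 200 cm^-3
\end{verbatim}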
For the two targets in our sample with GMOS observations, the spatially-resolved electron density maps derived from the flux ratios of the total [S~{\sc ii}] $\lambda$$\lambda$6716,6731\ line emission show possible radial trends of decreasing electron density outwards, but the errors on $n_e$ are too large to draw robust ($>$5$\sigma$) conclusions. For the other targets, the electron density maps derived from the [O~{\sc ii}] $\lambda$3726/[O~{\sc ii}] $\lambda$3729\ ratios from the KCWI data are even noisier, again preventing us from determining the radial trend of the electron densities. Consequently, the electron densities in individual velocity components cannot be measured reliably based on the spatially-resolved maps in these systems.
To further check for possible differences in the [S~II]-based electron densities among different velocity components, we turn to the spectra spatially integrated over the whole GMOS data cubes for targets J0842$+$03\ and J0906$+$56, and the Keck/LRIS spectra for the other targets\footnote{Notice that for the Keck/LRIS data, the emission line profiles are fit with two Gaussian components as described in \citet{ManzanoKing2019}, and here the outflow components in J0100$-$01, J0811$+$23, J0954$+$47, J1005$+$12, and J1009$+$26\ refer to the broad components from their best fits.}. However, for most of our targets, the measured electron densities of the outflow components still show large uncertainties, and thus no useful information on the electron density contrast among individual velocity components can be obtained from our data. The only exceptions are J0842$+$03\ and J1005$+$12, where no clear differences in electron densities are seen among individual velocity components. In the discussion below, we thus adopt the electron densities measured from the [S~{\sc ii}] $\lambda$6716/[S~{\sc ii}] $\lambda$6731\ ratios based on the total line flux in each object as the electron densities for the outflowing gas (see Table \ref{tab:energetics}).
\subsection{Dust Extinction of the Outflows} \label{55}
From the GMOS data of J0842$+$03\ and J0906$+$56, we find that the clearly outflowing line-emitting material (the C2 component in J0842$+$03\ and the C3 component in J0906$+$56) has H$\alpha$/H$\beta$\ ratios that are higher than the intrinsic values of typical H~II regions or AGN narrow line region \citep[2.87 and 3.1, respectively;][]{Osterbrock2006}, suggesting dust extinction affects the line emission of the outflows in these objects. Adopting the extinction curve from \citet{ccm89} with $R_V$ = 3.1, the derived extinction values, $A_V$, measured from the spectra integrated over the whole data cube, are on the order of 1 mag. For comparison, the other velocity components in these two targets show slightly smaller $A_V$ by $\sim$0.2 magnitude on average. A more detailed look at the spatially-resolved $A_V$ maps of the outflow components reveals possible radial trends of decreasing $A_V$ at larger radii in both targets. As for the other targets observed with KCWI, the outflow components in {H$\gamma$}\ are in general too faint to allow us to draw robust conclusions.
\subsection{Comparison with the Keck/LRIS Data} \label{56}
The fast outflows in our targets were initially discovered by \citet{ManzanoKing2019} based on Keck/LRIS long-slit data. The properties of the outflows measured from these long-slit data are in broad agreement with those reported here.
Column (10) in Table \ref{tab:obs} lists the 5-$\sigma$ detection limits of an [O~{\sc iii}] $\lambda$5007\ emission line with FWHM of 1000 {km s$^{-1}$}\ in the GMOS and KCWI data. Excluding the shallower observation of J0100$-$01, these detection limits are in general comparable to those of the Keck/LRIS data, which are in the range of $\sim$1--3$\times$10$^{-17}$ erg cm$^{-2}$ s$^{-1}$ arcsec$^{-2}$.
In J0100$-$01, J0842$+$03, J0906$+$56, J0954$+$47, and J1005$+$12\ (GMOS data and KCWI data with small slicer setup), the kinematic properties of the outflows ({v$_{50}$}\ and W$_{80}$) measured from these three data sets are similar, but the better spectral resolutions of the GMOS and KCWI IFS data compared with the LRIS data\footnote{Recall that FWHM $\simeq$ 100 {km s$^{-1}$}\ at 4610 \AA\ for GMOS, $\simeq$ 80 {km s$^{-1}$}\ at 4550 \AA\ for the small-slicer setup of KCWI, and $\simeq$ 190 {km s$^{-1}$}\ for Keck/LRIS \citep{ManzanoKing2019}.} reveal more details in the shapes of the emission line profiles in J0906$+$56, J0954$+$47, and J1005$+$12, where three Gaussian components are required to adequately describe the line profiles. The spatial extents of the outflows are broadly consistent with each other after taking into account the sensitivity of the various data sets.
In J0811$+$23\ and J1009$+$26\ (KCWI data with the medium and small slicer setup, respectively), blueshifted [O~{\sc iii}] $\lambda$5007\ velocity components are detected in both the Keck/LRIS and KCWI data sets, although they are narrower (by a factor of $\sim$3 on average) and show smaller blueshifts (by a factor of $\sim$4 on average) in the KCWI data when compared to those in the Keck/LRIS data. As for J0840$+$18\ (KCWI data with medium slicer setup), a very faint ($\sim$2$\times$10$^{-17}$ erg cm$^{-2}$ s$^{-1}$ arcsec$^{-2}$), broad (W$_{80}$\ $\simeq$ 1600 {km s$^{-1}$}), and redshifted ({v$_{50}$}\ $\simeq$ 150 {km s$^{-1}$}) velocity component is reported in the Keck/LRIS data, but it is not detected in the KCWI data. The origin of this apparent discrepancy is not clear although the slightly coarser spectral resolution of LRIS might make it more capable of detecting such a broad feature.
\section{Discussion} \label{6}
\subsection{Energetics of the Outflows} \label{61}
The ionized gas mass of the outflows can be calculated based on either the [O~{\sc iii}] $\lambda$$\lambda$4959,5007\ line luminosity or the Balmer line (\mbox {H$\alpha$}\ or \mbox {H$\beta$}) luminosity of the outflowing, line-emitting gas. We have compared the ionized gas mass of the outflows based on these emission lines, and find that the [O~III]-based values are systematically smaller than the \mbox {H$\alpha$}\ or H$\beta$-based values by $\sim$0.2 dex on average, assuming solar metallicity and following equation (29) in \citet{Veilleux2020} (If we assume instead a 0.5$\times$solar metallicity, the average difference increases to $\sim$0.5 dex). This difference may be caused by the uncertainties on the ionization fraction correction (which is assumed to be unity in the previous calculation) and gas-phase metallicity that is assumed in the [O~III]-based ionized gas mass. In order to avoid introducing such uncertainties into our results, the best global fits (Section\ \ref{322}) to the \mbox {H$\alpha$}\ (GMOS data) and \mbox {H$\beta$}\ (KCWI data) line emission are thus used to calculate the energetics of the outflows in the following discussion. From \citet{Osterbrock2006} and assuming case B recombination with $T$ = 10$^4$ K, we have
\begin{eqnarray}
M_{\rm out}= 4.48~\ensuremath{\mathrm{M}_{\odot}} \left(\frac{L_{H\alpha,corr}}{10^{35}~{\rm erg~s}^{-1}}\right) \left(\frac{<n_e>}{100\ {\rm cm}^{-3}}\right)^{-1}
\end{eqnarray}
where $L_{H\alpha,corr}$ is the extinction-corrected \mbox {H$\alpha$}\ luminosity using the measured Balmer decrement from the total emission line fluxes of the spatially-integrated spectra and adopting an intrinsic \mbox {H$\alpha$}/\mbox {H$\beta$}\ ratio of 2.87, appropriate for Case B recombination \citep[][]{Osterbrock2006}, and the \citet{ccm89} extinction curve with $R_V$ = 3.1. For the KCWI data sets, where \mbox {H$\alpha$}\ was not observed, we instead use the extinction-corrected \mbox {H$\beta$}\ luminosity $L_{H\beta,corr}$ and then convert it to $L_{H\alpha,corr}$ using $L_{H\alpha,corr}$ = 2.87 $L_{H\beta,corr}$ as above.
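To make the mass estimate concrete, the short sketch below evaluates the equation above; the input luminosity and density are purely illustrative values, not measurements from our data, and the constants 4.48 and 2.87 are those quoted in the text.
\begin{verbatim}
def ionized_gas_mass(L_halpha_corr, n_e):
    # M_out [Msun] from the extinction-corrected H-alpha luminosity [erg/s]
    # and the mean electron density [cm^-3], as in the equation above.
    return 4.48 * (L_halpha_corr / 1e35) / (n_e / 100.0)

def halpha_from_hbeta(L_hbeta_corr):
    # For the KCWI data (no H-alpha coverage), convert from H-beta assuming
    # the intrinsic Case B ratio of 2.87.
    return 2.87 * L_hbeta_corr

print(ionized_gas_mass(1e40, 500.0))   # illustrative inputs: ~9e4 Msun
\end{verbatim}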
The calculations of the mass, momentum, and kinetic energy outflow rates depend on the spatial extent of the outflows. As discussed in Section \ref{52}, while the outflows in J0100$-$01, J0811$+$23, J0842$+$03, and J1009$+$26\ are spatially resolved in the IFS data, our analysis of the IFS data on J0906$+$56, J0954$+$47, and J1005$+$12\ does not provide a conclusive outflow size in these objects. For the latter, we thus calculate the energetics of the outflows in both scenarios, one where the outflows are spatially resolved and one where they are not.
As presented in Section \ref{51}, while the outflows are mainly traced by the broadest/most blueshifted velocity components (C3 in J0906$+$56, J0954$+$47, J1005$+$12, C2 in J0100$-$01, J0842$+$03, J1009$+$26, and C1 in J0811$+$23) in the seven targets with detected outflows, the C2 components in J0906$+$56, J0954$+$47, and J1005$+$12, as well as the C1 component in J0842$+$03\ may also trace a significant portion of the outflowing gas in these systems. In the following calculations of the outflow energetics, we thus consider not only the primary outflow components of each target (C3 in J0906$+$56, J0954$+$47, J1005$+$12, C2 in J0100$-$01, J0842$+$03, J1009$+$26, and C1 in J0811$+$23), but also the C2 components in J0906$+$56, J0954$+$47, and J1005$+$12, as well as the C1 component in J0842$+$03, recording their results separately.
\subsubsection{Spatially Resolved Outflows} \label{611}
We begin with the scenario where the detected outflows are spatially resolved. The mass, momentum, and kinetic energy outflow rates are calculated using a time-averaged, thin-shell, free wind model \citep[e.g.][]{Shih2010,RupkeVeilleux2013b}, where the outflow is spherically-symmetric with a radius $R_{out}$ in 3D space.
Specifically, the energetics are calculated by summing up quantities over individual spaxels:
\begin{eqnarray}
dM/dt = \sum dm/dt = \sum \frac{m_{\rm out} v_{50,out}\sec\theta}{R_{out}}
\end{eqnarray}
\begin{eqnarray}
dp/dt = \sum (v_{50,out}\sec\theta)dm/dt
\end{eqnarray}
\begin{eqnarray}
dE/dt = \frac{1}{2}\sum [(v_{50,out}\sec\theta)^2+3{\sigma_{out}^2}]~dm/dt
\end{eqnarray}
where $m_{\rm out}$, $v_{50,out}$, $\sigma_{out}$ respectively are the ionized gas mass, absolute value of {v$_{50}$}, and velocity dispersion (= W$_{80}$/2.563) measured from the outflow components within individual spaxels. In these expressions, $\theta = {\rm sin}^{-1}(r_{spaxel}/R_{out})$ is the angle between the velocity vector of the outflow in 3D space and the line-of-sight. $R_{out}$, again, is the radius of the spherically-symmetric outflow in 3D space, and is calculated as the maximum extent that the outflow components are detected (S/N of the outflow component of [O~{\sc iii}] $\lambda$5007\ emission $>$ 2) in the sky plane plus half a spaxel, converted to an equivalent physical distance. The half spaxel is added artificially since a spherical outflow is formally travelling perpendicular to the line-of-sight at the maximum radius $R_{out}$ (i.e., the $v_{50,out}$ will be 0), and thus no outflow signal can be detected. r$_{spaxel}$ is the projected distance on the sky of a given spaxel with respect to the spaxel with peak outflow flux. In the calculations above, we exclude the spaxels with emission line fluxes that fall in the lowest 5\% of the full flux range. It should be emphasized that we adopt the [O III]-based $R_{out}$ in the calculation instead of the Balmer-line-based values, which are in general smaller when measured through the fainter \mbox {H$\beta$}\ feature. The mass, momentum, and kinetic energy outflow rates scale as $R_{out}^{-1}$ in the above equations and would thus be higher if the \mbox {H$\beta$}-based $R_{out}$ were used in the calculations.
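For clarity, a schematic, NumPy-based sketch of these per-spaxel sums is given below. The array names and unit conversions are illustrative assumptions rather than our reduction code, and the faint-spaxel exclusion and the factor of two for the far side of the outflow (discussed below) are omitted for brevity.
\begin{verbatim}
import numpy as np

def resolved_outflow_rates(m_out, v50_out, sigma_out, r_spaxel, R_out):
    # Per-spaxel inputs (hypothetical array names):
    #   m_out     : ionized gas mass of the outflow component [Msun]
    #   v50_out   : |v50| of the outflow component [km/s]
    #   sigma_out : velocity dispersion (= W80/2.563) [km/s]
    #   r_spaxel  : projected distance from the peak-flux spaxel [kpc]
    #   R_out     : radius of the spherical outflow in 3D [kpc]
    KPC_KM, MSUN_G, KM_CM, YR_S = 3.086e16, 1.989e33, 1.0e5, 3.154e7
    # R_out already includes an extra half spaxel, so r/R_out < 1;
    # the clip only guards against round-off.
    ratio = np.clip(r_spaxel / R_out, 0.0, 0.999999)
    v_dep = v50_out / np.cos(np.arcsin(ratio))        # v50 * sec(theta) [km/s]
    dm_dt = m_out * v_dep / (R_out * KPC_KM)          # [Msun/s] per spaxel
    dM_dt = np.sum(dm_dt) * YR_S                      # [Msun/yr]
    dp_dt = np.sum(v_dep * dm_dt) * MSUN_G * KM_CM    # [dyn]
    dE_dt = 0.5 * np.sum((v_dep**2 + 3.0 * sigma_out**2) * dm_dt) \
            * MSUN_G * KM_CM**2                       # [erg/s]
    return dM_dt, dp_dt, dE_dt
\end{verbatim}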
The electron densities used in the above equations are measured from the [S~{\sc ii}] $\lambda$6716/[S~{\sc ii}] $\lambda$6731\ ratios, following the conversion presented in \citet{Sanders2016}. As discussed in detail in Section \ref{54}, the [S~{\sc ii}] $\lambda$6716/[S~{\sc ii}] $\lambda$6731\ ratios are calculated using the total line fluxes from the spatially-integrated GMOS spectra or the Keck/LRIS spectra for the other targets without GMOS observations (see Table \ref{tab:energetics}). Neither the [S~{\sc ii}] $\lambda$6716/[S~{\sc ii}] $\lambda$6731\ ratios of individual spaxels nor the [S~{\sc ii}] $\lambda$6716/[S~{\sc ii}] $\lambda$6731\ ratios of the outflow components could be used due to their large uncertainties.
We multiply the energetics by a factor of two to account for the far side of the outflow that is blocked by the galaxy, except for the C2 component of J0906$+$56, which is purely redshifted and likely represents the back side of the outflow traced by the C3 component (see discussion in Appendix\ \ref{411}). The results of the calculations are listed in Table \ref{tab:energetics}.
It is important to point out that the geometries of the outflows in J0100$-$01\ and J1009$+$26\ may deviate significantly from the spherically-symmetric wind model adopted in the calculations, given the apparent biconical morphologies of the outflows on the sky plane. Nevertheless, if we assume a biconical geometry \citep[e.g., bipolar super-bubble as adopted in][]{RupkeVeilleux2013b} for the outflows in these targets, the estimated change in the mass, momentum and kinetic energy outflow rates are comparable to the errors listed in Table \ref{tab:energetics}. This may also be true for the C2 components in J0954$+$47\ and J1005$+$12, if their apparent biconical/asymmetric morphologies on the sky plane arise from the geometry of the outflowing gas.
\begin{deluxetable*}{ccccccccccccc}[!htb]
\tablecolumns{13}
\tabletypesize{\tiny}
\tablecaption{Energetics of the Outflows \label{tab:energetics}}
\tablehead{
\colhead{Name} & \colhead{Comp.} & \colhead{Data Set} & \colhead{$n_e$ (cm$^{-3}$)} & \colhead{log($M$/\ensuremath{\mathrm{M}_{\odot}})} & \colhead{R$_{out}$(kpc)} & \colhead{R$_{out,ur}$(kpc)} & \multicolumn{2}{c}{log[(d$M$/d$t$)/({M$_{\sun}$ yr$^{-1}$})]} & \multicolumn{2}{c}{log[(d$E$/d$t$)/(erg s$^{-1}$)]} & \multicolumn{2}{c}{log[($c$ d$p$/d$t$)/(\ensuremath{\mathrm{L}_{\odot}})]} \\
& & & & & & & resolved & unresolved & resolved & unresolved & resolved & unresolved \\
(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & (12) & (13)
}
\startdata
\hline
J0100$-$01 & C2 & KCWI & 60$\pm{50}$ & 7.3$^{+0.3}_{-0.8}$ & 3.1 & ... & $-$0.5$^{+0.3}_{-0.8}$ & ... & 40.8$^{+0.3}_{-0.8}$ & ... & 9.5$^{+0.3}_{-0.8}$ & ... \\
\\
J0811$+$23 & C1 & KCWI & 590$\pm{160}$ & 4.8$^{+0.1}_{-0.1}$ & 0.9 & ... & $-$2.5$^{+0.1}_{-0.1}$ & ... & 37.1$^{+0.1}_{-0.1}$ & ... & 6.8$^{+0.1}_{-0.1}$ & ... \\
\\
J0842$+$03 & C2 & GMOS & \multirow{4}{*}{470$\pm{150}$} & 5.4$^{+0.1}_{-0.2}$ & 0.8 & ... & $-$1.4$^{+0.1}_{-0.2}$ & ... & 39.3$^{+0.1}_{-0.2}$ & ... & 8.5$^{+0.1}_{-0.2}$ & ... \\
& C1 & GMOS & & 5.4$^{+0.2}_{-0.2}$ & 0.9 & ... & $-$1.6$^{+0.1}_{-0.2}$ & ... & 38.1$^{+0.1}_{-0.1}$ & ... & 8.0$^{+0.1}_{-0.2}$ & ... \\
& C2 & KCWI & & 5.9$^{+0.1}_{-0.2}$ & 1.6 & ... & $-$1.2$^{+0.1}_{-0.2}$ & ... & 39.4$^{+0.1}_{-0.2}$ & ... & 8.6$^{+0.1}_{-0.2}$ & ... \\
& C1 & KCWI & & 6.0$^{+0.1}_{-0.2}$ & 1.6 & ... & $-$1.6$^{+0.2}_{-0.3}$ & ... & 38.1$^{+0.1}_{-0.1}$ & ... & 7.8$^{+0.1}_{-0.1}$ & ... \\
\\
J0906$+$56 & C3 & GMOS & \multirow{4}{*}{570$\pm{360}$} & 5.8$^{+0.2}_{-0.4}$ & 1.1 & 0.3 & $-$1.8$^{+0.2}_{-0.4}$ & $>-$0.9 & 39.2$^{+0.2}_{-0.4}$ & $>$40.2 & 7.5$^{+0.2}_{-0.4}$ & $>$8.4 \\
& C2 & GMOS & & 5.4$^{+0.2}_{-0.4}$ & 1.2 & 0.3 & $-$2.1$^{+0.2}_{-0.4}$ & $>-$1.6 & 37.8$^{+0.2}_{-0.4}$ & $>$38.7 & 7.1$^{+0.2}_{-0.4}$ & $>$7.6 \\
& C3 & KCWI & & 5.9$^{+0.2}_{-0.4}$ & 2.1 & 0.4 & $-$1.5$^{+0.2}_{-0.4}$ & $>-$0.9 & 39.9$^{+0.2}_{-0.4}$ & $>$40.3 & 8.3$^{+0.2}_{-0.4}$ & $>$8.7 \\
& C2 & KCWI & & 6.1$^{+0.2}_{-0.4}$ & 2.2 & 0.4 & $-$1.4$^{+0.2}_{-0.4}$ & $>-$0.7 & 39.1$^{+0.2}_{-0.4}$ & $>$39.7 & 8.2$^{+0.2}_{-0.4}$ & $>$8.7 \\
\\
J0954$+$47 & C3 & KCWI & \multirow{2}{*}{470$\pm{80}$} & 5.8$^{+0.1}_{-0.1}$ & 1.6 & 0.4 & $-$1.5$^{+0.1}_{-0.1}$ & $>-$1.0 & 39.6$^{+0.1}_{-0.1}$ & $>$39.9 & 8.2$^{+0.1}_{-0.1}$ & $>$8.4 \\
& C2 & KCWI & & 6.3$^{+0.1}_{-0.1}$ & 1.8 & ... & $-$2.1$^{+0.1}_{-0.1}$ & ... & 38.9$^{+0.1}_{-0.1}$ & ... & 7.9$^{+0.1}_{-0.1}$ & ... \\
\\
J1005$+$12 & C3 & KCWI & \multirow{2}{*}{450$\pm{100}$} & 5.2$^{+0.1}_{-0.1}$ & 0.3 & 0.1 & $-$1.2$^{+0.1}_{-0.1}$ & $>-$0.6 & 40.1$^{+0.1}_{-0.1}$ & $>$40.4 & 9.0$^{+0.1}_{-0.1}$ & $>$9.2 \\
& C2 & KCWI & & 5.6$^{+0.1}_{-0.1}$ & 0.7 & ... & $-$1.7$^{+0.1}_{-0.3}$ & ... & 38.8$^{+0.1}_{-0.1}$ & ... & 7.8$^{+0.1}_{-0.1}$ & ... \\
\\
J1009$+$26 & C2 & KCWI & 150$\pm{60}$ & 5.5$^{+0.2}_{-0.2}$ & 0.8 & ... & $-$2.0$^{+0.2}_{-0.2}$ & ... & 38.2$^{+0.2}_{-0.2}$ & ... & 7.5$^{+0.2}_{-0.2}$ & ...
\enddata
\tablecomments{Column (1): Short name of the target; Column (2): Individual outflow components from the best fits; Column (3): Instrument used for the observations; Column (4): Electron density measured from the [S~{\sc ii}] $\lambda$$\lambda$6716,6731\ line ratio based on the total line flux from the spatially-integrated GMOS spectra or Keck/LRIS spectra (see Section \ref{54}); Column (5): Ionized gas mass of the corresponding outflow component; Column (6): Outflow radius adopted in the calculation of mass, momentum and kinetic energy outflow rates when the outflows are spatially resolved (Column (8), (10), and (12), respectively); Column (7): Outflow radius adopted in the calculation of mass, momentum and kinetic energy outflow rate when the outflow is spatially unresolved (Column (9), (11), and (13), respectively); Column (8): Ionized gas mass outflow rate of the corresponding outflow component when the outflow is spatially resolved; Column (9): Same as in Column (8) but with the assumption that the outflow is spatially unresolved; Column (10): Ionized gas kinetic energy outflow rate of the corresponding outflow component when the outflow is spatially resolved; Column (11): Same as in Column (10) but with the assumption that the outflow is spatially unresolved; Column (12): Ionized gas momentum outflow rate of the corresponding velocity component when the outflow is spatially resolved; Column (13): Same as in Column (12) but with the assumption that the outflow is spatially unresolved.}
\end{deluxetable*}
\subsubsection{Spatially Unresolved Outflows} \label{612}
If instead the outflows are unresolved by the IFS data, the total mass of the outflowing gas remains unchanged, but the time-averaged mass, momentum, and kinetic energy outflow rates are affected since they depend inversely on the size of the outflows. As discussed above, the C3 components of J0906$+$56, J0954$+$47, and J1005$+$12, and the C2 component of J0906$+$56\ may be spatially unresolved. In this scenario, we adopt $\frac{1}{2}$~$\times$~FWHM(PSF) as a conservative upper limit to the true outflow radius $R_{out,ur}$, and obtain:
\begin{eqnarray}
dM/dt = \frac{M_{\rm out,tot}\, v_{50,out}}{R_{out,ur}}
\end{eqnarray}
\begin{eqnarray}
dp/dt = v_{50,out}\, dM/dt
\end{eqnarray}
\begin{eqnarray}
dE/dt = \frac{1}{2} (v_{50,out}^2+3{\sigma_{out}^2}) dM/dt
\end{eqnarray}
Here, $M_{\rm out,tot}$ is the total mass of the outflowing gas, and $R_{out,ur}$ is the upper limit on the radius of the outflow. The quantities $v_{50,out}$ and $\sigma_{out}$ are the median values of {v$_{50}$}\ and $\sigma$ (= W$_{80}$/2.563) of the outflow components measured across the data cube (see Table \ref{tab:kinematics}). The adopted electron densities are the same as those in the spatially resolved scenario. The lower limits on the outflow rates obtained under these assumptions are listed in Table \ref{tab:energetics}.
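A corresponding sketch for the unresolved case, using the same illustrative unit conventions as in the resolved-case sketch above, is:
\begin{verbatim}
def unresolved_outflow_rates(M_out_tot, v50_med, sigma_med, fwhm_psf_kpc):
    # Lower limits when the outflow is unresolved; the upper limit on the
    # outflow radius is R_out,ur = 0.5 * FWHM(PSF). Inputs: total outflow
    # mass [Msun], median |v50| and sigma [km/s], and PSF FWHM [kpc].
    KPC_KM, MSUN_G, KM_CM, YR_S = 3.086e16, 1.989e33, 1.0e5, 3.154e7
    R_ur = 0.5 * fwhm_psf_kpc
    dM_dt = M_out_tot * v50_med / (R_ur * KPC_KM)             # [Msun/s]
    dp_dt = v50_med * dM_dt * MSUN_G * KM_CM                   # [dyn]
    dE_dt = 0.5 * (v50_med**2 + 3.0 * sigma_med**2) * dM_dt \
            * MSUN_G * KM_CM**2                                # [erg/s]
    return dM_dt * YR_S, dp_dt, dE_dt                          # Msun/yr, dyn, erg/s
\end{verbatim}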
\begin{figure}[!htb]
\epsscale{1.1}
\plotone{f11.pdf}
\caption{[O~{\sc iii}] $\lambda$5007\ line widths W$_{80}$\ vs [O~{\sc iii}] $\lambda$5007\ luminosities for the seven targets with detected outflows (red filled circles indicate the KCWI data and red open circles indicate the GMOS data of J0842$+$03\ and J0906$+$56) as well as more luminous AGN and ULIRGs taken from the literature \citep[black symbols;][]{Liu2013a,Liu2013b,Harrison2014,RZ2013}, as indicated in the legend. All measurements refer to the total, spatially-integrated [O~{\sc iii}] $\lambda$5007\ line emission from each object. The typical errors of the measurements are similar to the size of the data points.}
\label{fig:fenxiw80}
\end{figure}
\begin{figure*}[!htb]
\epsscale{1.1}
\plottwo{f12a.pdf}{f12b.pdf}
\caption{Ratios of the kinetic energy outflow rates, based on the KCWI data, to the AGN bolometric luminosities as a function of (left) the AGN bolometric luminosities and (right) H-band absolute magnitudes, for the seven targets with detected outflows in our sample (red circles) and lower limits (blue triangles) if the outflows in J0906$+$56, J0954$+$47, and J1005$+$12\ are spatially unresolved (see Section \ref{52} and \ref{612}). Here we have neglected the kinetic energy outflow rates calculated from the C2 components in J0906$+$56, J0954$+$47, J1005$+$12\ and the C1 component in J0842$+$03\ as their contributions are modest. Also plotted as a comparison are the values from a sample of $z <$ 0.3 but more powerful type 1 quasars and nearby Seyfert galaxies from or collected by \citet{Rupke2017}, as well as a sample of $z <$ 0.15, AGN-dominated ULIRGs from \citet{Rose2018}. The absolute H-band magnitudes shown in the right panel are derived from the 2MASS \citep{2MASS} H-band magnitudes taken from the IRSA/2MASS archive, except for those of the Type 1 quasars and three of the ULIRGs from \citet{Rose2018}, which are the AGN-subtracted, host-only H-band magnitudes quoted from \citet{veilleux2006,veilleux2009a}. The estimated typical errors of the data points are noted as black crosses in the upper-right corners of both panels.
}
\label{fig:ke}
\end{figure*}
\subsection{Comparison with More Luminous AGN} \label{62}
The most direct measure of the magnitude of an outflow is its velocity. Various definitions have been used in the literature to represent outflow velocities \citep[e.g., see a brief summary in Sec. 3.1 in][]{Veilleux2020}. W$_{80}$\ of the overall spatially integrated emission line profiles have been used as surrogates for characteristic outflow velocities in many studies \citep[e.g.][]{Liu2013a,Liu2013b,RZ2013,Harrison2014,nadiajenny2014}. In Fig. \ref{fig:fenxiw80}, the values of W$_{80}$\ derived from the [O~{\sc iii}] $\lambda$5007\ line emission integrated over our data cubes and the [O~{\sc iii}] $\lambda$5007\ luminosities (L$_{[O~{\sc III}]}$) of our targets are compared with published values in low-z AGN and/or Ultraluminous Infrared Galaxies (ULIRGs) with strong outflows. Remarkably, four of our targets (J0842$+$03, J0906$+$56, J0954$+$47, and J1005$+$12) have W$_{80}$\ that are comparable to those of AGN with L$_{[O~{\sc III}]}$\ that are two orders of magnitude larger than those of our targets. However, in general, the data points suggest a positive correlation between [O~{\sc iii}] $\lambda$5007\ W$_{80}$\ and luminosities, spanning 4 orders of magnitude in L$_{[O~{\sc III}]}$\ and 1.5 orders of magnitude in W$_{80}$. This correlation simply implies that more powerful AGN provide more energy to drive faster outflows.
A more physically meaningful, albeit also more model-dependent, estimate of the importance of an outflow is the kinetic energy outflow rate. In Fig. \ref{fig:ke}, the kinetic energy outflow rates of our targets (based on the KCWI data), normalized by their AGN luminosities (see Table \ref{tab:targets}), are compared with those of low-z Seyferts and type 1 quasars studied in \citet{Rupke2017}, as well as those of the z $<$ 0.15, AGN-dominated ULIRGs from \citet{Rose2018}. The results for the C2 components of J0906$+$56, J0954$+$47, and J1005$+$12, as well as the C1 component of J0842$+$03\ are omitted in this analysis due to their relatively modest contribution, as they have on average $\sim$1 dex smaller dE/dt than those of either the C3 or C2 components of these targets. The values shown in this figure assume the spatially-resolved scenario by default (red filled circles; see Section\ \ref{611}) for all of our sources. For J0906$+$56, J0954$+$47, and J1005$+$12, we also show the lower limits obtained by assuming that the outflow components are spatially unresolved (blue filled triangles). The measurements of J0842$+$03\ and J0906$+$56\ based on the GMOS data are also omitted as they have dE/dt smaller than (but close to) those based on the KCWI data. Compared with our targets, those Seyferts and quasars have both more powerful AGN (with higher median AGN luminosity by $\sim$1 to 3 orders of magnitude) and more massive host galaxies (with brighter median H-band absolute magnitudes\footnote{The absolute H-band magnitudes of our targets and all other sources are derived from the 2MASS \citep{2MASS} H-band magnitudes taken from the IRSA/2MASS archive \irsaurl, except for those of the Type 1 quasars and three of the ULIRGs from \citet{Rose2018}, which are the AGN-subtracted, host-only H-band magnitudes quoted from \citet{veilleux2006,veilleux2009a}. While the H-band magnitudes of the Seyfert galaxies are not AGN-subtracted, the contribution from the AGN is probably not substantial: the H-band magnitudes of the Seyfert galaxies are close to the QSO-subtracted ones of the Type 1 quasars, which is consistent with the fact that the stellar velocity dispersions of the two samples are comparable when they are measured or recorded in \citet{Rupke2017}.} by $\sim$4 to 5 mag.). Nevertheless, our targets have ratios of kinetic energy outflow rates to AGN luminosities that are comparable to those measured in the more luminous AGN. This result adds support to the idea that the outflows in the dwarf galaxies are scaled-down versions of the outflows in the more luminous AGN and are fundamentally driven by the same AGN processes. We examine this issue in more detail in Section \ref{63}.
\subsection{What Drives these Outflows: AGN or Starbursts?} \label{63}
The results from the previous sections favor AGN-related processes as the main driver of the detected outflows. First, the velocities of the outflows detected in our dwarf galaxies are often large. The maximum W$_{80}$\ of the outflow components exceeds 600 {km s$^{-1}$}\ in six targets, and exceeds 1000 {km s$^{-1}$}\ in three of them. If we adopt the definition of bulk outflow velocities V$_{out}=$W$_{80}$$/$1.3 as in some studies \citep[e.g.][where they assume spherically-symmetric or wide-angle bi-cone outflows]{Liu2013b,Harrison2014}, six out of the seven targets with detected outflows have outflow velocities $\gtrsim$500 {km s$^{-1}$}. To put these numbers into perspective, a velocity of 500 {km s$^{-1}$}\ is equivalent to an energy of 1 keV per particle, and is difficult to achieve with stellar processes \citep{Fabian2012araa}. The high velocities of the outflows seen in most of our targets thus suggest that the AGN plays an important role in driving these outflows.
Second, as shown in Fig.\ \ref{fig:ke} and discussed in Section \ref{62}, the AGN are also powerful enough to drive the outflows in our targets. The ratios of kinetic energy outflow rates to bolometric AGN luminosities of our targets are in the range of $\sim$1$\times$10$^{-5}$ -- 2$\times$10$^{-3}$. These ratios are far less than unity, and are within the range of values seen in other more luminous AGN, suggesting that the AGN are more than capable of driving these outflows.
The lower limits of the ionized gas mass entrainment efficiency $\eta$, defined as the ratio of ionized gas mass outflow rate over the star formation rate, are in the range of $\sim$0.1 -- 0.8, with a median of $\sim$0.3 (the range and median are $\sim$0.1 -- 0.6 and $\sim$0.2, respectively, if we exclude the contributions from the C2 components in J0906$+$56, J0954$+$47, and J1005$+$12, and from the C1 component in J0842$+$03). Note that these are lower limits since our adopted SFRs are upper limits (see Section\ \ref{35}). This is comparable to the average value ($\sim$0.19) measured for the neutral outflows in low-redshift, AGN/starburst-composite ULIRGs \citep{Rupke2005c}. In the more luminous AGN, apparently higher $\eta$\ are reported in the literature. For example, $\eta$ $\simeq$ 6 $-$ 20 are reported for a sample of $z <$ 0.2 luminous type 2 AGN \citep{Harrison2014}. Meanwhile, much lower $\eta$\ values, with a median of 0.8, are reported for a sample of type 1 quasars at z$<$ 0.3 in \citet{Rupke2017} once the quasar emission is subtracted and both the neutral and ionized phases of the outflows are considered. In their sample, the median value of $\eta$ drops further to 0.03 when the ionized phase alone is considered. In short, the $\eta$ measured in our targets fall in the wide range seen in various studies of outflows in more luminous AGN. In addition, if the outflows in J0906$+$56, J0954$+$47, and J1005$+$12\ are spatially unresolved, then the lower limits of $\eta$ can be as high as $\sim$3, uncomfortably high for starburst-driven outflows in the low-z universe \citep[e.g.][where $\eta < 1$ in general]{Arribas2014}. This is even more so if we also consider the possible contribution from the C2 components to the outflow energetics in these targets.
There is also circumstantial evidence against starburst driving of these outflows. Given the upper limits of SFR estimated from the [O~{\sc ii}] $\lambda$$\lambda$3726,3729\ emission, all of the galaxies in our sample lie either slightly, or significantly, below the main sequence of star-forming galaxies in the low-z universe \citep[e.g.][]{Brinchmann2004}, while the star formation-driven outflows are observed much more frequently in galaxies above the star formation main sequence \citep[e.g.][]{Heckman2015,Roberts-Borsani2020}.
More quantitatively, we can examine whether stellar processes are physically capable of driving the observed outflows. The typical kinetic energy output rate from core-collapse supernovae is $\sim$7$\times10^{41}(\alpha_{SN}/0.02)(\dot{M_{\star}}/\ensuremath{\mathrm{M}_{\odot}} \ yr^{-1})$ erg s$^{-1}$ \citep{Veilleux2005,Veilleux2020}. Adopting the SFR upper limits of our targets (Table \ref{tab:targets}), and assuming a constant supernova rate of $\alpha_{SN}=0.02$, the expected maximum kinetic energy output rates from core-collapse supernovae in our targets are in the range of $\sim$7$\times$10$^{39}$ -- 5$\times$10$^{41}$ erg $s^{-1}$, with a median of $\sim$2$\times$10$^{41}$ erg $s^{-1}$. These are $\sim$6 -- 720 times larger than the kinetic energy outflow rates based on the scenario that the outflows are spatially resolved. Stellar processes thus cannot be overlooked as a potential source of energy for these outflows.
However, it should be pointed out that we have only considered the warm ionized phase of the outflowing gas and adopted the energetics calculated in the spatially resolved scenario. If the outflows in J0906$+$56, J0954$+$47, and J1005$+$12\ are spatially unresolved, the kinetic energy outflow rates may be comparable to, if not larger than, the kinetic energy output from the stellar process as estimated above. This argument is slightly stronger if we consider the contribution from the C2 components to the outflow energetics in these targets, too. Additionally, it is possible that a significant fraction of the energy is carried in a hot, thin gas phase instead, which has been predicted by recent simulations \citep[e.g.][]{Koudmani2019,Koudmani2020}.
Overall, the outflows in our targets are likely driven by AGN, but we cannot formally rule out the possibility that star formation activity may also help in launching the outflows, as is often the case among low-z ULIRGs and luminous AGN \citep[e.g.][]{RupkeVeilleux2013b,Harrison2014,Fluetsch2019}. More stringent constraints on the star formation rates of our targets need to be obtained before we can draw a more robust conclusion about the role of stellar processes in these outflows.
\subsection{Does the Outflowing Gas Escape the Galaxies?} \label{64}
To help us evaluate the impact of these outflows on their host galaxies, it is interesting to examine the question of whether some of the outflowing gas is able to escape the host galaxy. This requires comparing the kinematics of the outflows with the local escape velocity, $v_{\rm esc}(r) = \sqrt{2[\Phi(\infty) - \Phi(r)]}$, where $\Phi(r)$ and $\Phi(\infty)$ are the values of the gravitational potential at $r$ and $r = \infty$, respectively, in the case of a spherically-symmetric galaxy.
One may estimate the escape velocity in terms of observed quantities, like the circular velocity $v_{\rm circ}$ of the galaxy, by assuming a simple density profile such as that of a singular isothermal sphere. A conservative estimate of the escape velocity in that case gives $v_{\rm esc} \simeq 3v_{\rm circ}$ \citep{Veilleux2020}. Our IFS data do not probe the flat portion of the rotation curve, so we adopt the maximum of the measured stellar velocities (v$_\star$) and velocity dispersions ($\sigma_\star$) to calculate the lower limits of the circular velocities in our targets, where $v_{\rm circ} = \sqrt{v_\star^2+2\sigma_\star^2}$ \citep[e.g., See Section 2.4 of ][]{Veilleux2020}. We have not applied any deprojection corrections to the circular velocities and outflow velocities, given that the 3D morphologies of the outflows are poorly constrained.
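A minimal sketch of this empirical estimate, with purely illustrative input values, is given below.
\begin{verbatim}
import numpy as np

def escape_velocity_sis(v_star, sigma_star):
    # Conservative escape velocity [km/s] for a singular isothermal sphere:
    # v_esc ~ 3 * v_circ, with v_circ = sqrt(v_star^2 + 2*sigma_star^2).
    return 3.0 * np.sqrt(v_star**2 + 2.0 * sigma_star**2)

print(escape_velocity_sis(50.0, 60.0))   # illustrative inputs: ~295 km/s
\end{verbatim}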
Alternatively, the escape velocity may be derived by assuming an NFW dark matter density profile \citep{NFW} and a total halo mass determined from abundance matching \citep{Moster2013}, as was done in \citet{ManzanoKing2019}. Since the escape velocity always peaks at the center, the central value can serve as a conservative upper limit to the escape velocity throughout the galaxy. For our targets, the escape velocities at $r$ = 0 obtained through this approach are larger by $\sim$50\% on average than those based on the empirical circular velocities above. We adopt the more conservative $r =$ 0, NFW-based escape velocities in the remainder of our discussion.
\begin{deluxetable}{ccc}[!htb]
\tabletypesize{\normalsize}
\tablecaption{Outflow Escape Fractions\label{tab:escape}}
\tablehead{
\colhead{Target} & \colhead{V$_{esc}$ [{km s$^{-1}$}]} & \colhead{$f_{esc}$} \\
\colhead{(1)} & \colhead{(2)} & \colhead{(3)}
}
\startdata
J0100$-$01 & 320 & 1\% \\
J0811$+$23 & 260 & 0.1\% \\
J0842$+$03 & 300 & 6\% \\
J0906$+$56 & 300 & 6\% \\
J0954$+$47 & 320 & 1\% \\
J1005$+$12 & 380 & 1\% \\
J1009$+$26 & 240 & 0.3\%
\enddata
\tablecomments{Column (1): Short name of the target; Column (2): Escape velocity at the center of each galaxy assuming an NFW density profile, rounded to the nearest 10 {km s$^{-1}$}; Column (3): Escape fraction of the [O~{\sc iii}] $\lambda$5007\ line-emitting gas, based on flux rather than mass. This number does not take into account possible density contrasts between the outflowing and quiescent gas components in these systems and projection effects; see Section\ \ref{64} for more details.}
\end{deluxetable}
For all of the targets, we next define the escape fraction ($f_{\rm esc}$) as the ratio of [O~{\sc iii}] $\lambda$5007\ flux with absolute velocities larger than the escape velocity summed up across the data cube, to the total emission line flux in the whole data cube. Notice here that the escape fraction is defined as a flux ratio rather than a mass ratio, so it does not take into account possible density contrasts between the outflowing and quiescent (non-outflowing) gas components \citep[e.g.][]{Hinkle2019,Fluetsch2019,Fluetsch2020}, which may affect the luminosity-to-mass conversion factor. In addition, the values of $f_{\rm esc}$ obtained here are conservatively low since we have not applied deprojection corrections to the gas velocities in the outflows. Some fraction of the escaping gas may not be accounted for here if the velocities of this gas, projected along our line of sight, fall below $v_{\rm esc}$.
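Schematically, and assuming hypothetical array names for the per-channel line fluxes and line-of-sight velocities, this flux-based escape fraction can be computed as in the sketch below; the details of how the velocity field is constructed from the line fits are omitted.
\begin{verbatim}
import numpy as np

def escape_fraction(flux, velocity, v_esc):
    # flux     : [O III] 5007 flux in each velocity channel of each spaxel
    # velocity : corresponding line-of-sight velocity [km/s]
    # v_esc    : adopted escape velocity of the galaxy [km/s]
    escaping = np.abs(velocity) > v_esc
    return np.sum(flux[escaping]) / np.sum(flux)
\end{verbatim}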
The results from our IFS data are summarized in Table \ref{tab:escape}. The escape fractions range from 0.1\% to 6\%. Taking into account that the escape velocities are likely overestimated for the reasons mentioned earlier and that the outflow velocities are potentially underestimated due to projection effects, this suggests that at least some small portion of the outflowing gas may travel a long way from the centers and help contribute to the metal enrichment of the circumgalactic medium in these dwarf galaxies \citep[as reported in a number of studies; e.g.][]{Bordoloi2014}.
\section{Conclusions} \label{7}
In this paper, we report the results from an integral field spectroscopic study with Gemini/GMOS and Keck/KCWI of the warm ionized gas in a sample of 8 low-redshift (0.01 $\lesssim$ z $\lesssim$ 0.05) dwarf galaxies with known AGN and suspected outflows. The main results are summarized as follows:
\begin{itemize}
\item
Warm ionized outflows are detected in 7 out of the 8 targets. The IFS data in most targets reveal broad, blueshifted velocity components tracing rapid outflows ({v$_{50}$}\ down to $\sim$$-$240 {km s$^{-1}$}\ and W$_{80}$\ up to $\sim$1200 {km s$^{-1}$}) and narrow components tracing the rotation of the host galaxies. In J0906$+$56, J0954$+$47, and J1005$+$12, the multi-Gaussian fits require a third velocity component with intermediate line widths, which probably traces a portion of the outflowing gas and/or turbulent gas. In J0811$+$23\ and J0842$+$03, the narrow components are in general blueshifted and may trace the outflows or a mixture of outflowing and rotating gas in these systems.
\item
The two-dimensional velocity structures and radial profiles of the outflowing kinematic components indicate that the outflows are spatially resolved by the IFS data in at least four cases (J0842$+$03, J0100$-$01, J1009$+$26, J0811$+$23), with the emission extending up to $\sim$3 kpc from the galactic centers. In J0100$-$01\ and J1009$+$26, the outflowing kinematic components show apparent biconical morphologies in projection. Additionally, clear non-radial velocity gradients/structures are also seen in those components of J0811$+$23\ and J0842$+$03. In J0906$+$56, J0954$+$47, and J1005$+$12, the kinematic components that have intermediate line widths and probably trace part of the outflows are also spatially resolved. However, the fast outflows traced by the kinematic components with the broadest line widths in these targets are not clearly spatially resolved. An attempt at deconvolving the data cubes gives inconclusive results.
\item
The clearly outflowing gas in all of the targets has line ratios that are consistent with AGN photoionization. A general lack of positive correlation between the gas kinematics and the [S~{\sc ii}]/H$\alpha$\ or [O~{\sc iii}]/H$\beta$\ line ratios, and inconsistencies between the observed line ratios and the predictions from shock models, indicate that shocks likely do not play a major role in heating and ionizing the outflowing gas in these systems.
\item
Assuming a simple thin-shell, free wind model, the warm, ionized gas mass outflow rates of our targets range from $\sim$3$\times$10$^{-3}$ to $\sim$3$\times$10$^{-1}$ {M$_{\sun}$ yr$^{-1}$}, and the kinetic energy outflow rates range from $\sim$1$\times$10$^{37}$ erg s$^{-1}$ to $\sim$6$\times$10$^{40}$ erg s$^{-1}$ (excluding the contribution from the velocity components that likely trace portion of the outflows in targets J0842$+$03, J0906$+$56, J0954$+$47, and J1005$+$12). In J0906$+$56, J0954$+$47, and J1005$+$12, where the outflows may be spatially unresolved, the lower limits of the mass outflow rates and kinetic energy outflow rates are $\sim$2--10 times higher than those obtained in the scenario where they are spatially resolved.
\item
The overall emission line widths measured from the spatially-integrated spectra of our targets, together with the results from samples of more luminous AGN studied in the recent literature, show a positive trend with increasing [O~{\sc iii}] $\lambda$5007\ luminosities. When normalized by the bolometric AGN luminosities, the kinetic energy outflow rates of these outflows are comparable to those of more luminous AGN in massive systems. The outflows in these dwarf galaxies act as scaled-down versions of those in more luminous AGN, in shallower potential wells.
\item
The outflows are likely driven by the central AGN, since (i) the outflows are faster than typical outflows driven by stellar processes; (ii) the AGN are powerful enough to drive the outflows, given the driving efficiencies observed in other low-redshift AGN; (iii) the lower limits of the ionized gas mass entrainment efficiency (i.e. mass outflow rates to SFR $\simeq$ 0.1--0.8, based on the upper limits on SFR estimated from the [O~{\sc ii}] $\lambda$$\lambda$3726,3729\ emission) fall in the wide range seen in various studies of outflows in more luminous AGN, and may be uncomfortably high (with lower limits up to $\sim$3) for starburst-driven outflows in the low-z universe if the outflows are spatially unresolved in targets J0906$+$56, J0954$+$47, and J1005$+$12; (iv) the dwarf galaxies of our sample all lie either slightly or significantly below the main sequence of star-forming galaxies, whereas starburst-driven outflows typically take place in star-forming galaxies above that main sequence. However, we cannot formally rule out, based on energetic arguments, the possibility that the star formation activity in these galaxies also partially contributes to driving these outflows.
\item
A small but non-negligible fraction (at least 0.1\%--6\%) of the outflowing ionized gas in our targets has velocities large enough to escape from the host galaxies, if no additional drag force is present. These outflows may thus contribute to the enrichment of the circumgalactic medium in dwarf galaxies.
\end{itemize}
If such AGN-driven outflows are also present in dwarf galaxies at high redshifts, they will increase the porosity of these dwarf galaxies and thus their contribution to the reionization of the universe \citep[e.g.][]{Silk2017}. They may also help explain the current core-cusp controversy regarding the dark matter distribution in dwarf galaxies \citep[e.g.][]{Maccio2020}. A proper treatment of such AGN feedback will need to be included in seed black hole formation models \citep[e.g.][]{Mezcua2019a}.
\clearpage
\acknowledgements
We thank the anonymous referee for thoughtful and constructive comments that improved this paper. S.V. and W.L. acknowledge partial support for this work provided by NASA through grants {\em HST} GO-15662.001A and GO-15915.001A from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555. This research also made use of NASA's Astrophysics Data System Abstract Service and the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. G.C. acknowledges partial support from the National Science Foundation, under grant number AST 1817233. Additional support was provided by NASA through a grant from the Space Telescope Science Institute (Program AR-14582.001-A), which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555.
A significant part of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. The rest of the observations were obtained at the international Gemini Observatory, a program of NSF’s NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation, on behalf of the Gemini Observatory partnership: the National Science Foundation (United States), National Research Council (Canada), Agencia Nacional de Investigaci\'{o}n y Desarrollo (Chile), Ministerio de Ciencia, Tecnolog\'{i}a e Innovaci\'{o}n (Argentina), Minist\'{e}rio da Ci\^{e}ncia, Tecnologia, Inova\c{c}\~{o}es e Comunica\c{c}\~{o}es (Brazil), and Korea Astronomy and Space Science Institute (Republic of Korea). The data were processed using the Gemini IRAF package. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation.
\software{Astropy (\url{http://dx.doi.org/10.1051/0004-6361/201322068}), Gemini IRAF package \citep{Gemini}, IFSRED \citep{ifsred}, IFSFIT \citep{ifsfit}, KCWI data reduction pipeline (\kcwiurl), MPFIT \citep{mpfit}, pPXF \citep{ppxf}, Kinemetry \citep{Kinemetry}.}
\section{More Applications of \texorpdfstring{{$i$-Mix}}{{i-Mix}}}
\label{sec:imix_more}
In this section, we introduce more variations of {{$i$-Mix}}.
For conciseness, we use $v_{i}$ to denote virtual labels for different methods.
We make the definition of $v_{i}$ for each application clear.
\ifdefined\supp
Here we restate the formulation of SimCLR~\citep{chen2020simple}:
\begin{alignmain}[2]
\ell_{\text{SimCLR}} ( x_{i} ; \mathcal{B} ) =
-\log \frac{ \exp \big( s(f_{i}, f_{(N+i) \bmod 2N}) / \tau \big) }
{ \sum_{k=1, k \neq i}^{2N} \exp \big( s(f_{i}, f_{k}) / \tau \big) }.
\label{eq:simclr}
\end{alignmain}
\fi
\subsection{\texorpdfstring{{$i$-Mix}}{{i-Mix}} for SimCLR}
\label{sec:simclr_imix}
For each anchor, SimCLR takes other anchors as negative samples such that the virtual labels must be extended.
Let $x_{N+i} \,{=}\, \tilde{x}_{i}$ for conciseness, and $v_{i} \,{\in}\, \{0,1\}^{2N}$ be the virtual label indicating the positive sample of each anchor, where $v_{i,N+i} = 1$ and $v_{i,j \neq N+i} \,{=}\, 0$.
Note that $v_{i,i} \,{=}\, 0$ because the anchor itself is not counted as a positive sample.
Then, Eq.~\eqref{eq:simclr} can be represented in the form of the cross-entropy loss:
\begin{align}
\ell_{\text{SimCLR}} ( x_{i}, v_{i} ; \mathcal{B} ) =
-\sum_{n=1}^{2N} v_{i,n}
\log \frac{ \exp \big( s(f_{i}, f_{n}) / \tau \big) }
{ \sum_{k=1, k \neq i}^{2N} \exp \big( s(f_{i}, f_{k}) / \tau \big) }.
\label{eq:simclr_v}
\end{align}
The application of {{$i$-Mix}} to SimCLR is straightforward:
for two data instances $(x_{i}, v_{i})$, $(x_{j}, v_{j})$ and a batch of data $\mathcal{B} = \{x_{i}\}_{i=1}^{2N}$, the {{$i$-Mix}} loss is defined as follows:\footnote{The $j$-th data can be excluded from the negative samples, but it does not result in a significant difference.}
\begin{align}
\ell_{\text{SimCLR}}^{\text{{$i$-Mix}}} \big( (x_{i}, v_{i}), (x_{j}, v_{j}) ; \mathcal{B}, \lambda \big) =
\ell_{\text{SimCLR}}( \lambda x_{i} + (1 - \lambda) x_{j}, \lambda v_{i} + (1 - \lambda) v_{j} ; \mathcal{B} ).
\label{eq:simclr_imix}
\end{align}
Note that only the input data of Eq.~\eqref{eq:simclr_imix} is mixed, such that $f_{i}$ in Eq.~\eqref{eq:simclr_v} is an embedding vector of the mixed data while the other $f_{n}$'s are the ones of clean data.
Because both clean and mixed data
need to be fed to the network $f$, {{$i$-Mix}} for SimCLR
requires roughly twice as much memory and training time as
SimCLR when the same batch size is used.
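To make the mixing of inputs and virtual labels explicit, a minimal NumPy sketch of the loss for a single mixed anchor is given below; it is a schematic illustration of Eq.~\eqref{eq:simclr_imix}, not the training code, and assumes that the embeddings of all $2N$ clean anchors are available.
\begin{verbatim}
import numpy as np

def imix_simclr_loss(f_mix, i, j, lam, F_clean, tau=0.2):
    # f_mix   : embedding of the mixed input lam*x_i + (1-lam)*x_j, shape (D,)
    # i, j    : indices of the two mixed instances among the 2N anchors
    # lam     : mixing coefficient sampled from Beta(alpha, alpha)
    # F_clean : embeddings of the 2N clean (unmixed) anchors, shape (2N, D)
    twoN = F_clean.shape[0]
    N = twoN // 2
    # cosine similarities between the mixed anchor and the clean embeddings
    cos = F_clean @ f_mix
    cos = cos / (np.linalg.norm(F_clean, axis=1) * np.linalg.norm(f_mix) + 1e-12)
    sims = cos / tau
    sims[i] = -np.inf                     # the anchor itself is not a negative
    m = np.max(sims)
    log_prob = sims - (m + np.log(np.sum(np.exp(sims - m))))
    # mixed virtual label: weight lam on the positive (other view) of x_i
    # and (1 - lam) on the positive of x_j
    pos_i, pos_j = (N + i) % twoN, (N + j) % twoN
    return -(lam * log_prob[pos_i] + (1.0 - lam) * log_prob[pos_j])
\end{verbatim}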
\subsection{\texorpdfstring{{$i$-Mix}}{{i-Mix}} for Supervised Contrastive Learning}
\label{sec:supcon_imix}
Supervised contrastive learning has recently shown to be effective for supervised representation learning and it often outperforms the standard end-to-end supervised classifier learning~\citep{khosla2020supervised}.
Suppose an one-hot label $y_{i} \in \{0,1\}^{C}$ is assigned to a data $x_{i}$, where $C$ is the number of classes.
Let $x_{N+i} \,{=}\, \tilde{x}_{i}$ and $y_{N+i} \,{=}\, y_{i}$ for conciseness.
For a batch of data pairs and their labels $\mathcal{B} \,{=}\, \{(x_{i}, y_{i})\}_{i=1}^{2N}$,
let $v_{i} \,{\in}\, \{0,1\}^{2N}$ be the virtual label indicating the positive samples of each anchor,
where $v_{i,j} = 1$ if $y_{i} \,{=}\, y_{j \neq i}$,
and otherwise $v_{i,j} \,{=}\, 0$.
Intuitively, $\sum_{j=1}^{2N} v_{i,j} = 2N_{y_{i}} - 1$ where $N_{y_{i}}$ is the number of data with the label $y_{i}$.
Then, the supervised learning version of the SimCLR (SupCLR) loss function is written as follows:
\begin{align}
\ell_{\text{SupCLR}} ( x_{i}, v_{i} ; \mathcal{B} ) =
-\frac{1}{2N_{y_{i}} - 1}
\sum_{n=1}^{2N} v_{i,n}
\log \frac{ \exp \big( s(f_{i}, f_{n}) / \tau \big) }
{ \sum_{k=1, k \neq i}^{2N} \exp \big( s(f_{i}, f_{k}) / \tau \big) }.
\label{eq:supcon}
\end{align}
The application of {{$i$-Mix}} to SupCLR is straightforward:
for two data instances $(x_{i}, v_{i})$, $(x_{j}, v_{j})$ and a batch of data $\mathcal{B} = \{x_{i}\}_{i=1}^{2N}$, the {{$i$-Mix}} loss is defined as follows:
\begin{align}
\ell_{\text{SupCLR}}^{\text{{{$i$-Mix}}}} \big( (x_{i}, v_{i}), (x_{j}, v_{j}) ; \mathcal{B}, \lambda \big) =
\ell_{\text{SupCLR}}( \lambda x_{i} + (1 - \lambda) x_{j}, \lambda v_{i} + (1 - \lambda) v_{j} ; \mathcal{B} ).
\label{eq:supcon_mix}
\end{align}
\subsection{\texorpdfstring{{$i$-Mix}}{{i-Mix}} for N-Pair Supervised Contrastive Learning}
\label{sec:npair_supcon_mix}
Note that {{$i$-Mix}} in Eq.~\eqref{eq:supcon_mix} is not as efficient as SupCLR in Eq.~\eqref{eq:supcon} for the same reason as in the case of SimCLR.
To overcome this, we reformulate SupCLR in the form of the N-pair loss~\citep{sohn2016improved}.
Suppose an one-hot label $y_{i} \in \{0,1\}^{C}$ is assigned to a data $x_{i}$, where $C$ is the number of classes.
For a batch of data pairs and their labels $\mathcal{B} \,{=}\, \{(x_{i}, \tilde{x}_{i}, y_{i})\}_{i=1}^{N}$,
let $v_{i} \,{\in}\, \{0,1\}^{N}$ be the virtual label indicating the positive samples of each anchor,
where $v_{i,j} = 1$ if $y_{i} \,{=}\, y_{j \neq i}$,
and otherwise $v_{i,j} \,{=}\, 0$.
Then, the supervised version of the N-pair (Sup-N-pair) contrastive loss function is written as follows:
\begin{align}
\ell_{\text{Sup-N-pair}} ( x_{i}, v_{i} ; \mathcal{B} ) =
-\frac{1}{N_{y_{i}}}
\sum_{n=1}^{N} v_{i,n}
\log \frac{ \exp \big( s(f_{i}, \tilde{f}_{n}) / \tau \big) }
{ \sum_{k=1}^{N} \exp \big( s(f_{i}, \tilde{f}_{k}) / \tau \big) }.
\label{eq:supcon_npair}
\end{align}
Then, the {{$i$-Mix}} loss for Sup-N-pair is defined as follows:
\begin{align}
\ell_{\text{Sup-N-pair}}^{\text{{{$i$-Mix}}}} \big( (x_{i}, v_{i}), (x_{j}, v_{j}) ; \mathcal{B}, \lambda \big) =
\ell_{\text{Sup-N-pair}}( \lambda x_{i} + (1 - \lambda) x_{j}, \lambda v_{i} + (1 - \lambda) v_{j} ; \mathcal{B} ).
\label{eq:supcon_npair_mix}
\end{align}
\section{Proof of the linearity of losses with respect to virtual labels}
\label{sec:linear}
\paragraph{Cross-entropy loss.}
The loss used in contrastive representation learning works, which is often referred to as InfoNCE~\citep{oord2018representation}, can be represented in the form of the cross-entropy loss as we showed for N-pair contrastive learning, SimCLR~\citep{chen2020simple}, and MoCo~\citep{he2020momentum}.
Here we provide an example in the case of N-pair contrastive learning.
Let $f_{ij}^{\lambda} \,{=}\, f(\lambda x_{i} \,{+}\, (1 \,{-}\, \lambda) x_{j})$ for conciseness.
\begin{align}
&\ell_{\text{N-pair}}^{\text{{$i$-Mix}}} \big( (x_{i}, v_{i}), (x_{j}, v_{j}) ; \mathcal{B}, \lambda \big) =
\ell_{\text{N-pair}}( \lambda x_{i} + (1 - \lambda) x_{j}, \lambda v_{i} + (1 - \lambda) v_{j} ; \mathcal{B}) \nonumber \\
&= -\sum_{n=1}^{N} (\lambda v_{i,n} + (1 - \lambda) v_{j,n})
\log \frac{ \exp \big( s(f_{ij}^{\lambda}, \tilde{f}_{n}) / \tau \big) }
{ \sum_{k=1}^{N} \exp \big( s(f_{ij}^{\lambda}, \tilde{f}_{k}) / \tau \big) } \nonumber \\
&= - \lambda \sum_{n=1}^{N} v_{i,n}
\log \frac{ \exp \big( s(f_{ij}^{\lambda}, \tilde{f}_{n}) / \tau \big) }
{ \sum_{k=1}^{N} \exp \big( s(f_{ij}^{\lambda}, \tilde{f}_{k}) / \tau \big) }
- (1 - \lambda) \sum_{n=1}^{N} v_{j,n}
\log \frac{ \exp \big( s(f_{ij}^{\lambda}, \tilde{f}_{n}) / \tau \big) }
{ \sum_{k=1}^{N} \exp \big( s(f_{ij}^{\lambda}, \tilde{f}_{k}) / \tau \big) } \nonumber \\
&= \lambda \ell_{\text{N-pair}}( \lambda x_{i} + (1 - \lambda) x_{j}, v_{i} ; \mathcal{B}) +
(1 - \lambda) \ell_{\text{N-pair}}( \lambda x_{i} + (1 - \lambda) x_{j}, v_{j} ; \mathcal{B}).
\label{eq:npair_imix_long}
\end{align}
\paragraph{L2 loss between L2-normalized feature vectors.} \hspace{0pt}
The BYOL~\citep{grill2020bootstrap} loss is in this type.
Let $\tilde{F} \,{=}\, [\tilde{f}_{1} / \lVert \tilde{f}_{1} \rVert, {\dots}, \tilde{f}_{N} / \lVert \tilde{f}_{N} \rVert] \,{\in}\, \mathbb{R}^{D {\times} N}$ such that $\tilde{f}_{i} / \lVert \tilde{f}_{i} \rVert \,{=}\, \tilde{F} v_{i}$, and
$\bar{g} \,{=}\, g(f( \lambda x_{i} \,{+}\, (1 \,{-}\, \lambda) x_{j} )) / \lVert g(f( \lambda x_{i} \,{+}\, (1 \,{-}\, \lambda) x_{j} )) \rVert$ for conciseness.
\begin{align}
&\ell_{\text{BYOL}}^{\text{{$i$-Mix}}} \big( (x_{i}, v_{i}), (x_{j}, v_{j}) ; \mathcal{B}, \lambda \big)
= \ell_{\text{BYOL}}( \lambda x_{i} + (1 - \lambda) x_{j}, \lambda v_{i} + (1 - \lambda) v_{j} ) \nonumber \\
&= \left\lVert \bar{g} - \tilde{F} ( \lambda v_{i} + (1 - \lambda) v_{j} ) \right\rVert^{2}
= \left\lVert \bar{g} - \left( \lambda \tilde{F} v_{i} + (1 - \lambda) \tilde{F} v_{j} \right) \right\rVert^{2} \nonumber \\
&= 1
- 2 \cdot \bar{g}^{\top} \left( \lambda \tilde{F} v_{i} + (1 - \lambda) \tilde{F} v_{j} \right)
+ \left\lVert \lambda \tilde{F} v_{i} + (1 - \lambda) \tilde{F} v_{j} \right\rVert^{2}
\nonumber \\
&= 2
- 2 \cdot \bar{g}^{\top} \left( \lambda \tilde{F} v_{i} + (1 - \lambda) \tilde{F} v_{j} \right)
+ \text{const} \nonumber \\
&= \lambda \lVert \bar{g} - \tilde{F} v_{i} \rVert^{2}
+ (1 - \lambda) \lVert \bar{g} - \tilde{F} v_{j} \rVert^{2}
+ \text{const} \nonumber \\
&= \lambda \ell_{\text{BYOL}}( \lambda x_{i} + (1 - \lambda) x_{j}, v_{i} ; \mathcal{B})
+ (1 - \lambda) \ell_{\text{BYOL}}( \lambda x_{i} + (1 - \lambda) x_{j}, v_{j} ; \mathcal{B})
+ \text{const}.
\label{eq:byol_imix_long}
\end{align}
Because $\tilde{F}$ is not backpropagated, it can be considered as a constant.
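The linearity in the virtual label can also be checked numerically; the following small sketch, with randomly generated logits, verifies Eq.~\eqref{eq:npair_imix_long} for a single mixed anchor.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, lam = 8, 0.3
logits = rng.normal(size=N)     # s(f(mix), f_tilde_k) / tau for one mixed anchor
m = logits.max()
log_prob = logits - (m + np.log(np.sum(np.exp(logits - m))))

v_i, v_j = np.zeros(N), np.zeros(N)
v_i[2], v_j[5] = 1.0, 1.0       # one-hot virtual labels of the two mixed instances

lhs = -np.dot(lam * v_i + (1 - lam) * v_j, log_prob)   # loss with the mixed label
rhs = lam * -np.dot(v_i, log_prob) + (1 - lam) * -np.dot(v_j, log_prob)
print(np.isclose(lhs, rhs))     # True: the loss is linear in the virtual label
\end{verbatim}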
\section{More on Experiments}
\label{sec:exp_more}
We describe details of the experimental settings and more experimental results.
For additional experiments below, we adapted the code for supervised contrastive learning~\citep{khosla2020supervised}.\footnote{\url{https://github.com/HobbitLong/SupContrast}}
\subsection{Setup}
\label{sec:setup_more}
In this section, we describe details of the experimental settings.
Note that the learning rate is scaled by the batch size~\citep{goyal2017accurate}:
$\text{ScaledLearningRate} \,{=}\, \text{LearningRate} \times \text{BatchSize} / 256$.
\paragraph{Image.}
The experiments on CIFAR-10 and 100~\citep{krizhevsky2009learning} and ImageNet~\citep{deng2009imagenet} are conducted in two stages:
following \citet{chen2020simple},
the convolutional neural network (CNN) part of ResNet-50~\citep{he2016deep}\footnote{%
For small-resolution data from CIFAR and Speech Commands, we replaced the kernel, stride, and padding sizes in the first convolutional layer from (7,2,3) to (3,1,1), and removed the first max pooling layer, following \citet{chen2020simple}.
}
followed by the two-layer multilayer perceptron (MLP) projection head (output dimensions are 2048 and 128, respectively) is trained on the unlabeled pretext dataset
with a batch size of 256 (i.e., 512 augmented data) using the stochastic gradient descent (SGD) optimizer with a momentum of 0.9 for up to 4000 epochs.
BYOL has an additional prediction head (with the same output dimensions as the projection head), which follows the projection head and is used only for the model updated by gradients.
We use 10 epochs of linear warmup to an initial learning rate of 0.125, followed by a cosine learning rate schedule~\citep{loshchilov2017sgdr}.
We use a weight decay of 0.0001 for the first stage.
For ImageNet, we use the same hyperparameters except that the batch size is 512 and the initial learning rate is 0.03.
Then, the head of the CNN is replaced with a linear classifier, and only the linear classifier is trained with the labeled downstream dataset.
For the second stage, we use a batch size of 256 with the SGD optimizer with a momentum of 0.9 and an initial learning rate chosen from \{1, 3, 5, 10, 30, 50, 70\} over 100 epochs, where the learning rate is decayed by 0.2 after 80, 90, and 95 epochs.
No weight decay is used at the second stage.
The quality of representation is evaluated by the top-1 accuracy on the downstream task.
We sample a single mixing coefficient $\lambda \,{\sim}\, \text{Beta}(1,1)$ for each training batch.
The temperature is set to $\tau \,{=}\, 0.2$.
Note that the optimal distribution of $\lambda$ and the optimal value of $\tau$ vary over different architectures, methods, and datasets, but the choices above result in reasonably good performance.
The memory bank size of MoCo is 65536 for ImageNet and 4096 for other datasets, and the momentum for the exponential moving average (EMA) update is 0.999 for MoCo and BYOL.
We do not symmetrize the BYOL loss, as it does not significantly improve the performance while increasing computational complexity.
For data augmentation, we follow \citet{chen2020simple}:
We apply a set of data augmentations randomly in sequence including
resized cropping~\citep{szegedy2015going},
horizontal flipping with a probability of 0.5,
color jittering,\footnote{Specifically, brightness, contrast, and saturation are scaled by a factor uniformly sampled from $[0.6,1.4]$ at random, and hue is rotated in the HSV space by a factor uniformly sampled from $[-0.1,0.1]$ at random.} and
gray scaling with a probability of 0.2.
Gaussian blurring with $\sigma \in [0.1, 2]$ and a kernel size of 10\% of the image height/width is applied for ImageNet.
For evaluation on downstream tasks, we apply padded cropping with the pad size of 4 and horizontal flipping for CIFAR-10 and 100, and resized cropping and horizontal flipping for ImageNet.
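For illustration, the CIFAR pretext-task augmentations above can be sketched with torchvision transforms as follows; the composition order and application probabilities not stated in the text are assumptions and may differ from the adapted code.
\begin{minted}[fontfamily=courier]{python}
from torchvision import transforms

# Sketch of the pretext-task augmentations for CIFAR (32x32 images).
cifar_pretext_aug = transforms.Compose([
    transforms.RandomResizedCrop(32),
    transforms.RandomHorizontalFlip(p=0.5),
    # Brightness/contrast/saturation factors in [0.6, 1.4], hue in [-0.1, 0.1].
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
    transforms.RandomGrayscale(p=0.2),
    transforms.ToTensor(),
])
\end{minted}
Gaussian blurring would be appended to this pipeline for ImageNet only.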
\paragraph{Speech.}
In the experiments on Speech Commands~\citep{warden2018speech},
the network is the same as in the image domain experiments, except that the number of input channels is one instead of three.
The temperature is set to $\tau \,{=}\, 0.5$ for the standard setting and $\tau \,{=}\, 0.2$ for the no augmentation setting.
10\% of silence data (all zero) are added when training.
At the first stage, the model is trained with the SGD optimizer with a momentum of 0.9 and an initial learning rate of 0.125 over 500 epochs, where the learning rate decays by 0.1 after 300 and 400 epochs and the weight decay is 0.0001.
The other settings are the same as in the experiments on CIFAR.
For data augmentation,\footnote{\url{https://github.com/tugstugi/pytorch-speech-commands}} we apply a set of data augmentations randomly in sequence including
changing amplitude, speed, and pitch in the time domain, and
stretching, time shifting, and adding background noise in the frequency domain.
Each data augmentation is applied with a probability of 0.5.
Augmented data are then transformed into a mel spectrogram of size 32\,$\times$\,32.
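A minimal sketch of this final spectrogram conversion is given below, assuming 1-second clips at 16\,kHz; the FFT and hop sizes are illustrative and chosen only so that the output is roughly 32\,$\times$\,32, and do not necessarily match the referenced augmentation code.
\begin{minted}[fontfamily=courier]{python}
import torch
import torchaudio

waveform = torch.randn(1, 16000)  # dummy 1-second clip at 16 kHz
to_mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=16000, n_fft=1024, hop_length=500, n_mels=32)
mel = torch.log(to_mel(waveform) + 1e-6)  # log-mel spectrogram
mel = mel[..., :32]                       # crop to a 32 x 32 input
\end{minted}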
\paragraph{Tabular.}
In the experiments on CovType and Higgs~\citep{asuncion2007uci},
we take a five-layer MLP with batch normalization as a backbone network.
The output dimensions of layers are (2048-2048-4096-4096-8192), where all layers have batch normalization followed by ReLU except for the last layer.
The last layer activation is maxout~\citep{goodfellow2013maxout} with 4 sets, such that the output dimension is 2048.
On top of this five-layer MLP, we attach a two-layer MLP (2048-128) as a projection head.
We sample a single mixing coefficient $\lambda \,{\sim}\, \text{Beta}(\alpha, \alpha)$ for each training batch, where $\alpha \,{=}\, 2$ for CovType and Higgs100k, and $\alpha \,{=}\, 1$ for Higgs1M.
The temperature is set to $\tau \,{=}\, 0.1$.
The other settings are the same as in the experiments on CIFAR, except that the batch size is 512 and the number of training epochs is 500.
At the second stage, the MLP head is replaced with a linear classifier.
For Higgs, the classifier is computed by linear regression from the feature matrix obtained without data augmentation to the label matrix using the pseudoinverse.
Since prior knowledge on tabular data is very limited,
only masking noise with a probability of 0.2 is used as a data augmentation.
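Two ingredients of this setup, the masking-noise augmentation and the maxout output layer, can be sketched as follows; the helper names are ours and the feature grouping used for maxout is an assumption.
\begin{minted}[fontfamily=courier]{python}
import torch

def masking_noise(x, p=0.2):
    # Zero out each feature independently with probability p.
    return x * (torch.rand_like(x) > p).float()

def maxout(h, num_sets=4):
    # Element-wise max over num_sets groups, e.g., 8192 -> 2048 features.
    n, d = h.shape
    return h.view(n, num_sets, d // num_sets).max(dim=1).values
\end{minted}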
\subsection{Variations of \texorpdfstring{{$i$-Mix}}{{i-Mix}}}
\label{sec:var}
\begin{table}[t]
\centering
\resizebox{\linewidth}{!}{%
\begin{tabular}{cccccccc}
\toprule
\multirow{2}{*}{Pretext} & \multirow{2}{*}{Downstream} & \multicolumn{3}{c}{N-pair} & \multicolumn{3}{c}{SimCLR} \cr
\cmidrule(rl){3-5}\cmidrule(rl){6-8}
& & Vanilla & {{$i$-MixUp}} & {{$i$-CutMix}} & Vanilla & {{$i$-MixUp}} & {{$i$-CutMix}} \cr
\cmidrule(rl){1-1}\cmidrule(rl){2-2}\cmidrule(rl){3-5}\cmidrule(rl){6-8}
\multirow{2}{*}{CIFAR-10} & CIFAR-10 &
92.4 \scriptsize{$\pm$ 0.1} & \textbf{94.8} \scriptsize{$\pm$ 0.2} & 94.7 \scriptsize{$\pm$ 0.1} &
92.5 \scriptsize{$\pm$ 0.1} & \textbf{94.8} \scriptsize{$\pm$ 0.2} & \textbf{94.8} \scriptsize{$\pm$ 0.2} \cr
& CIFAR-100 &
60.2 \scriptsize{$\pm$ 0.3} & \textbf{63.3} \scriptsize{$\pm$ 0.2} & 61.5 \scriptsize{$\pm$ 0.2} &
60.0 \scriptsize{$\pm$ 0.2} & \textbf{61.4} \scriptsize{$\pm$ 1.0} & 57.1 \scriptsize{$\pm$ 0.4} \cr
\cmidrule(rl){1-1}\cmidrule(rl){2-2}\cmidrule(rl){3-5}\cmidrule(rl){6-8}
\multirow{2}{*}{CIFAR-100} & CIFAR-10 &
84.4 \scriptsize{$\pm$ 0.2} & \textbf{86.2} \scriptsize{$\pm$ 0.2} & 85.1 \scriptsize{$\pm$ 0.2} &
84.4 \scriptsize{$\pm$ 0.2} & \textbf{85.2} \scriptsize{$\pm$ 0.3} & 83.7 \scriptsize{$\pm$ 0.6} \cr
& CIFAR-100 &
68.7 \scriptsize{$\pm$ 0.2} & \textbf{72.3} \scriptsize{$\pm$ 0.2} & \textbf{72.3} \scriptsize{$\pm$ 0.4} &
68.7 \scriptsize{$\pm$ 0.2} & \textbf{72.3} \scriptsize{$\pm$ 0.2} & 71.7 \scriptsize{$\pm$ 0.2} \cr
\bottomrule
\end{tabular}
}
\caption{
Comparison of N-pair contrastive learning and SimCLR, with and without {{$i$-MixUp}} and {{$i$-CutMix}}, using ResNet-50 on CIFAR-10 and 100.
We run all experiments for 1000 epochs.
{{$i$-MixUp}} improves the accuracy on the downstream task regardless of the data distribution shift between the pretext and downstream tasks.
{{$i$-CutMix}} shows a performance comparable to {{$i$-MixUp}} when the pretext and downstream datasets are the same, but not when a data distribution shift occurs.
}
\label{tb:variation}
\end{table}
\begin{table}[t]
\centering
\resizebox{\linewidth}{!}{%
\begin{tabular}{cccccccc}
\toprule
\multirow{2}{*}{Pretext} & \multirow{2}{*}{Downstream} & \multicolumn{3}{c}{Self-Supervised Pretext} & \multicolumn{3}{c}{Supervised Pretext} \cr
\cmidrule(rl){3-5}\cmidrule(rl){6-8}
& & SimCLR & N-pair & + {{$i$-Mix}} & SimCLR & N-pair & + {{$i$-Mix}} \cr
\cmidrule(rl){1-1}\cmidrule(rl){2-2}\cmidrule(rl){3-5}\cmidrule(rl){6-8}
\multirow{2}{*}{CIFAR-10} & CIFAR-10 &
92.5 \scriptsize{$\pm$ 0.1} & 92.4 \scriptsize{$\pm$ 0.1} & \textbf{94.8} \scriptsize{$\pm$ 0.2} &
95.6 \scriptsize{$\pm$ 0.3} & 95.7 \scriptsize{$\pm$ 0.1} & \textbf{97.0} \scriptsize{$\pm$ 0.1} \cr
& CIFAR-100 &
60.0 \scriptsize{$\pm$ 0.2} & 60.2 \scriptsize{$\pm$ 0.3} & \textbf{63.3} \scriptsize{$\pm$ 0.2} &
58.6 \scriptsize{$\pm$ 0.2} & \textbf{58.9} \scriptsize{$\pm$ 0.5} & 57.8 \scriptsize{$\pm$ 0.6} \cr
\cmidrule(rl){1-1}\cmidrule(rl){2-2}\cmidrule(rl){3-5}\cmidrule(rl){6-8}
\multirow{2}{*}{CIFAR-100} & CIFAR-10 &
84.4 \scriptsize{$\pm$ 0.2} & 84.4 \scriptsize{$\pm$ 0.2} & \textbf{86.2} \scriptsize{$\pm$ 0.2} &
86.5 \scriptsize{$\pm$ 0.4} & 86.7 \scriptsize{$\pm$ 0.2} & \textbf{88.7} \scriptsize{$\pm$ 0.2} \cr
& CIFAR-100 &
68.7 \scriptsize{$\pm$ 0.2} & 68.7 \scriptsize{$\pm$ 0.2} & \textbf{72.3} \scriptsize{$\pm$ 0.2} &
74.3 \scriptsize{$\pm$ 0.2} & 74.6 \scriptsize{$\pm$ 0.3} & \textbf{78.4} \scriptsize{$\pm$ 0.2} \cr
\bottomrule
\end{tabular}
}
\caption{
Comparison of N-pair self-supervised and supervised contrastive learning, with and without {{$i$-Mix}}, using ResNet-50 on CIFAR-10 and 100.
We also provide the performance of formulations proposed in prior works:
SimCLR~\citep{chen2020simple} and its supervised version~\citep{khosla2020supervised}.
We run all experiments for 1000 epochs.
{{$i$-Mix}} improves the accuracy on the downstream task regardless of the data distribution shift between the pretext and downstream tasks, except when the pretext task has a smaller number of classes than the downstream task.
The quality of representation depends on the pretext task in terms of the performance of transfer learning:
self-supervised learning is better on CIFAR-10, while supervised learning is better on CIFAR-100.
}
\label{tb:supcon}
\end{table}
We compare the MixUp~\citep{zhang2018mixup} and CutMix~\citep{yun2019cutmix} variation of {{$i$-Mix}} on N-pair contrastive learning and SimCLR.
To distinguish them, we call them {{$i$-MixUp}} and {{$i$-CutMix}}, respectively.
To keep the memory usage at the pretext stage comparable, we reduce the batch size of {{$i$-MixUp}} and {{$i$-CutMix}} by half (256 to 128) for SimCLR.
Following the learning rate adjustment strategy in \citet{goyal2017accurate}, we also decrease the learning rate by half (0.125 to 0.0625) when the batch size is reduced.
We note that {{$i$-MixUp}} and {{$i$-CutMix}} on SimCLR take approximately 2.5 times more training time to complete the same number of training epochs.
The results are provided in Table~\ref{tb:variation}.
We first verify that the N-pair formulation results in no worse performance than that of SimCLR.
This justifies conducting experiments with the N-pair formulation instead of that of SimCLR, as it is simpler and more efficient, especially when applying {{$i$-Mix}}, without losing performance.
When pretext and downstream tasks share the training dataset, {{$i$-CutMix}} often outperforms {{$i$-MixUp}}, though the margin is small.
However, {{$i$-CutMix}} shows a worse performance in transfer learning.
Table~\ref{tb:supcon} compares the performance of SimCLR, N-pair contrastive learning, and {{$i$-Mix}} on N-pair contrastive learning when the pretext task is self-supervised and supervised contrastive learning.
We confirm that the N-pair formulation results in no worse performance than that of SimCLR in supervised contrastive learning as well.
{{$i$-Mix}} improves the performance of supervised contrastive learning from 95.7\% to 97.0\% on CIFAR-10, similar to the improvement achieved by MixUp in supervised learning, where it improves the accuracy of a supervised classifier from 95.5\% to 96.6\%.
On the other hand, when the pretext dataset is CIFAR-100, the performance of supervised contrastive learning is not better than that of supervised learning:
MixUp improves the performance of supervised classifier learning from 78.9\% to 82.2\%, and {{$i$-Mix}} improves the performance of supervised contrastive learning from 74.6\% to 78.4\%.
While supervised {{$i$-Mix}} improves the classification accuracy on CIFAR-10 when trained on CIFAR-10, the representation does not transfer well to CIFAR-100, possibly due to overfitting to the 10-class classification task.
When the pretext dataset is CIFAR-100, supervised contrastive learning shows better performance than self-supervised contrastive learning regardless of the distribution shift, as it learns a sufficiently general representation for a linear classifier to work well on CIFAR-10 as well.
\subsection{Qualitative Embedding Analysis}
\label{sec:qual_embed}
\begin{figure}[t]
\vspace{-20pt}
\centering
\begin{subfigure}[h]{0.436\linewidth}
\includegraphics[width=\linewidth]{figures/tsne_embed_cifar10_cifar10_n1000_p30_r0_none.pdf}
\caption{Contrastive Learning on CIFAR-10}
\label{fig:tsne_npair_cifar10}
\end{subfigure}
\captionsetup[subfigure]{oneside,margin={0pt,40pt}}
\begin{subfigure}[h]{0.545\linewidth}
\includegraphics[width=\linewidth]{figures/tsne_embed_cifar10_cifar10_n1000_p30_r0_mixup.pdf}
\caption{{{$i$-Mix}} on CIFAR-10}
\label{fig:tsne_mixup_cifar10}
\end{subfigure} \\
\captionsetup[subfigure]{oneside,margin={0pt,0pt}}
\begin{subfigure}[h]{0.436\linewidth}
\includegraphics[width=\linewidth]{figures/tsne_embed_cifar10_cifar100_n1000_p30_r0_none.pdf}
\caption{Contrastive Learning on CIFAR-100}
\label{fig:tsne_npair_cifar100}
\end{subfigure}
\captionsetup[subfigure]{oneside,margin={0pt,40pt}}
\begin{subfigure}[h]{0.545\linewidth}
\includegraphics[width=\linewidth]{figures/tsne_embed_cifar10_cifar100_n1000_p30_r0_mixup_box.pdf}
\caption{{{$i$-Mix}} on CIFAR-100}
\label{fig:tsne_mixup_cifar100}
\end{subfigure} \\
\caption{
t-SNE visualization of embeddings trained by contrastive learning and {{$i$-Mix}} with ResNet-50 on CIFAR-10.
(a,b): Classes are well-clustered in both cases when applied to CIFAR-10.
(c,d): When models are transferred to CIFAR-100, classes are more clustered for {{$i$-Mix}} than contrastive learning, as highlighted in dashed boxes.
We show 10 classes for better visualization.
}
\label{fig:tsne}
\end{figure}
Figure~\ref{fig:tsne} visualizes embedding spaces learned by N-pair contrastive learning and {{$i$-Mix}} on CIFAR-10 and 100.
When the downstream dataset is the same as the pretext dataset,
both contrastive learning and {{$i$-Mix}} cluster classes well, as shown in Figure~\ref{fig:tsne_npair_cifar10} and \ref{fig:tsne_mixup_cifar10}.
However, when the downstream task is transferred to CIFAR-100, {{$i$-Mix}} in Figure~\ref{fig:tsne_mixup_cifar100} clusters classes better than contrastive learning in Figure~\ref{fig:tsne_npair_cifar100}.
Specifically, clusters of ``apple,'' ``chair,'' and ``dolphin'' can be found in Figure~\ref{fig:tsne_mixup_cifar100}, while they spread out
in Figure~\ref{fig:tsne_npair_cifar100}.
Also, ``rose'' and ``squirrel'' are more separated in Figure~\ref{fig:tsne_mixup_cifar100} than \ref{fig:tsne_npair_cifar100}.
This shows that the representation learned with {{$i$-Mix}} is more generalizable than vanilla contrastive learning.
\subsection{Quantitative Embedding Analysis}
\label{sec:quan_embed}
\begin{table}[t]
\centering
\begin{tabular}{cccccccc}
\toprule
\multirow{2}{*}{Pretext} & \multirow{2}{*}{Downstream} & \multicolumn{2}{c}{FED ($\times 10^{-4}$) ($\downarrow$)} & \multicolumn{2}{c}{Training Acc (\%) ($\uparrow$)} & \multicolumn{2}{c}{Test Acc (\%) ($\uparrow$)} \cr
\cmidrule(rl){3-4}\cmidrule(rl){5-6}\cmidrule(rl){7-8}
& & N-pair & + {{$i$-Mix}} & N-pair & + {{$i$-Mix}} & N-pair & + {{$i$-Mix}} \cr
\cmidrule(rl){1-1}\cmidrule(rl){2-2}\cmidrule(rl){3-4}\cmidrule(rl){5-6}\cmidrule(rl){7-8}
\multirow{2}{*}{CIFAR-10} & CIFAR-10 &
30.0 & \textbf{16.7} & \textbf{96.1} & \textbf{96.1} & 92.4 & \textbf{94.8} \cr
& CIFAR-100 &
13.8 & \textbf{7.9} & \textbf{70.7} & 69.5 & 60.2 & \textbf{63.3} \cr
\cmidrule(rl){1-1}\cmidrule(rl){2-2}\cmidrule(rl){3-4}\cmidrule(rl){5-6}\cmidrule(rl){7-8}
\multirow{2}{*}{CIFAR-100} & CIFAR-10 &
15.2 & \textbf{9.7} & 88.1 & \textbf{88.8} & 84.4 & \textbf{86.2} \cr
& CIFAR-100 &
30.4 & \textbf{13.3} & \textbf{85.6} & 79.0 & 68.7 & \textbf{72.3} \cr
\bottomrule
\end{tabular}
\caption{
Comparison of N-pair contrastive learning and {{$i$-Mix}} with ResNet-50 on CIFAR-10 and 100 in terms of the Fr{\'e}chet embedding distance (FED) between training and test data distribution on the embedding space, and training and test accuracy.
$\uparrow$ ($\downarrow$) indicates that higher (lower) is better.
{{$i$-Mix}} improves contrastive learning in all metrics, which shows that {{$i$-Mix}} is an effective regularization method for the pretext task, such that the learned representation generalizes better.
}
\vspace{-4pt}
\label{tb:fed}
\end{table}
To estimate the quality of representation by the similarity between training and test data distribution,
we measure the Fr{\'e}chet embedding distance (FED):
similarly to the Fr{\'e}chet inception distance (FID) introduced in \citet{heusel2017gans}, FED is the Fr{\'e}chet distance~\citep{frechet1957distance,vaserstein1969markov} between the set of training and test embedding vectors under the Gaussian distribution assumption.
For conciseness, let $\bar{f}_{i} \,{=}\, f(x_{i}) / \lVert f(x_{i}) \rVert$ be an $\ell_{2}$ normalized embedding vector;
we normalize embedding vectors as we do when we measure the cosine similarity.
Then, with the estimated mean
$m = \frac{1}{N} \sum_{i=1}^{N} \bar{f}_{i}$
and the estimated covariance
$S = \frac{1}{N} \sum_{i=1}^{N} (\bar{f}_{i} - m) (\bar{f}_{i} - m)^{\top}$,
the FED can be defined as
\begin{align}
d^{2} \big( ( m^{\texttt{tr}}, S^{\texttt{tr}} ), ( m^{\texttt{te}}, S^{\texttt{te}} ) \big) =
\lVert m^{\texttt{tr}} - m^{\texttt{te}} \rVert^{2}
+ \text{Tr} \big( S^{\texttt{tr}} + S^{\texttt{te}} - 2 ( S^{\texttt{tr}} S^{\texttt{te}} )^{\frac{1}{2}} \big).
\label{eq:fed}
\end{align}
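A minimal sketch of computing the FED from two sets of L2-normalized embedding vectors is given below; it follows Eq.~\eqref{eq:fed} directly and is not the exact evaluation script, and the function name is ours.
\begin{minted}[fontfamily=courier]{python}
import numpy as np
from scipy.linalg import sqrtm

def fed(train_emb, test_emb):
    # Rows of train_emb / test_emb are L2-normalized embedding vectors.
    m_tr, m_te = train_emb.mean(axis=0), test_emb.mean(axis=0)
    S_tr = np.cov(train_emb, rowvar=False, bias=True)
    S_te = np.cov(test_emb, rowvar=False, bias=True)
    covmean = sqrtm(S_tr @ S_te).real  # discard numerical imaginary parts
    return np.sum((m_tr - m_te) ** 2) + np.trace(S_tr + S_te - 2 * covmean)
\end{minted}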
As shown in Table~\ref{tb:fed}, {{$i$-Mix}} improves FED over contrastive learning, regardless of the distribution shift.
Note that the distance is large when the training dataset of the downstream task is the same as that of the pretext task.
This is because the model overfits the training dataset, so the distance to the test dataset, which is unseen during training, is large.
On the other hand, Table~\ref{tb:fed} shows that {{$i$-Mix}}
reduces the gap between the training and test accuracy.
This implies that {{$i$-Mix}} is an effective regularization method for pretext tasks, such that the learned representation is more generalizable on downstream tasks.
\section{Introduction}
\label{sec:intro}
Representation learning~\citep{bengio2013representation} is a fundamental task in machine learning since the success of machine learning relies on the quality of representation.
Self-supervised representation learning ({SSL}) has been successfully applied in several domains, including
image recognition~\citep{he2020momentum,chen2020simple},
natural language processing~\citep{mikolov2013distributed,devlin2018bert},
robotics~\citep{sermanet2018time,lee2019making},
speech recognition~\citep{ravanelli2020multi}, and
video understanding~\citep{korbar2018cooperative,owens2018audio}.
Since no label is available in the unsupervised setting, pretext tasks are proposed to provide self-supervision:
for example,
context prediction~\citep{doersch2015unsupervised},
inpainting~\citep{pathak2016context}, and
contrastive learning~\citep{wu2018unsupervised,hjelm2019learning,he2020momentum,chen2020simple}.
{SSL} has also been used as an auxiliary task to improve the performance on the main task, such as
generative model learning~\citep{chen2019self},
semi-supervised learning~\citep{zhai2019s4l}, and
improving robustness and uncertainty~\citep{hendrycks2019using}.
Recently, contrastive representation learning has gained increasing attention by showing state-of-the-art performance in {SSL} for large-scale image recognition~\citep{he2020momentum,chen2020simple}, which outperforms its supervised pre-training counterpart~\citep{he2016deep} on downstream tasks.
However, while the concept of contrastive learning is applicable to any domain, the quality of learned representations relies on domain-specific inductive bias:
as anchors and positive samples are obtained from the same data instance, data augmentation introduces semantically meaningful variation for better generalization.
To achieve a strong, yet semantically meaningful data augmentation, domain knowledge is required, e.g., color jittering in 2D images or structural information in video understanding.
Hence, contrastive representation learning in different domains requires an effort to develop effective data augmentations.
Furthermore, while recent works have focused on large-scale settings where millions of unlabeled examples are available, such settings are not always practical in real-world applications.
For example, in lithography, acquiring data is very expensive in terms of both time and cost due to the complexity of the manufacturing process~\citep{lin2018data,sim2019automatic}.
Meanwhile, MixUp~\citep{zhang2018mixup} has been shown to be a successful data augmentation for supervised learning in various domains and tasks, including
image classification~\citep{zhang2018mixup},
generative model learning~\citep{lucas2018mixed}, and
natural language processing~\citep{guo2019augmenting,guo2020nonlinear}.
In this paper, we explore the following natural, yet important question:
is the idea of MixUp useful for unsupervised, self-supervised, or contrastive representation learning across different domains?
To this end,
we propose \emph{instance Mix ({{$i$-Mix}})},
a domain-agnostic regularization strategy for contrastive representation learning.
The key idea of {{$i$-Mix}} is to introduce virtual labels in a batch and mix data instances and their corresponding virtual labels in the input and label spaces, respectively.
We first introduce the general formulation of {{$i$-Mix}}, and then we show the applicability of {{$i$-Mix}} to state-of-the-art contrastive representation learning methods, SimCLR~\citep{chen2020simple} and MoCo~\citep{he2020momentum}, and a self-supervised learning method
without negative pairs, BYOL~\citep{grill2020bootstrap}.
Through the experiments, we demonstrate the efficacy of {{$i$-Mix}} in a variety of settings.
First, we show the effectiveness of {{$i$-Mix}} by evaluating the discriminative performance of learned representations in multiple domains.
Specifically, we adapt {{$i$-Mix}} to the contrastive representation learning methods,
advancing state-of-the-art performance across different domains, including image~\citep{krizhevsky2009learning,deng2009imagenet}, speech~\citep{warden2018speech}, and tabular~\citep{asuncion2007uci} datasets.
Then, we study {{$i$-Mix}} in various conditions, including when
1)~the model and training dataset are small or large,
2)~domain knowledge is limited, and
3)~the learned representation is transferred to other datasets.
\paragraph{Contribution.} In summary, our contribution is three-fold:
\begin{itemize}
\item We propose {{$i$-Mix}}, a method for regularizing contrastive representation learning, motivated by MixUp~\citep{zhang2018mixup}.
We show how to apply {{$i$-Mix}} to state-of-the-art contrastive representation learning methods~\citep{chen2020simple,he2020momentum,grill2020bootstrap}.
\item We show that {{$i$-Mix}} consistently improves contrastive representation learning in both vision and non-vision domains.
In particular, the discriminative performance of representations learned with {{$i$-Mix}} is on par with fully supervised learning on CIFAR-10/100~\citep{krizhevsky2009learning} and Speech Commands~\citep{warden2018speech}.
\item We verify the regularization effect of {{$i$-Mix}} in a variety of settings.
We empirically observe that {{$i$-Mix}} significantly improves contrastive representation learning when
1) the training dataset size is small, or
2) the domain knowledge for data augmentations is insufficient.
\end{itemize}
\section{Related Work}
\label{sec:related}
\paragraph{Self-supervised representation learning ({SSL})}
aims at learning representations from unlabeled data by solving a pretext task that is derived from self-supervision.
Early works on {SSL} proposed pretext tasks based on data reconstruction by autoencoding~\citep{bengio2007greedy}, such as
context prediction~\citep{doersch2015unsupervised} and
inpainting~\citep{pathak2016context}.
Decoder-free {SSL} has made huge progress in recent years.
Exemplar CNN~\citep{dosovitskiy2014discriminative} learns by classifying individual instances with data augmentations.
{SSL} of visual representation, including
colorization~\citep{zhang2016colorful},
solving jigsaw puzzles~\citep{noroozi2016unsupervised},
counting the number of objects~\citep{noroozi2017representation},
rotation prediction~\citep{gidaris2018unsupervised},
next pixel prediction~\citep{oord2018representation,henaff2019data}, and
combinations of them~\citep{doersch2017multi,kim2018learning,noroozi2018boosting} often leverages image-specific properties to design pretext tasks.
Meanwhile, although deep clustering~\citep{caron2018deep,caron2019unsupervised,asano2020self} is often distinguished from {SSL},
it also leverages unsupervised clustering assignments as self-supervision for representation learning.
\paragraph{Contrastive representation learning}
has gained lots of attention for
{SSL}~\citep{he2020momentum,chen2020simple}.
As opposed to early works on exemplar CNN~\citep{dosovitskiy2014discriminative,dosovitskiy2015discriminative},
contrastive learning
maximizes similarities of positive pairs while minimizing similarities of negative pairs
instead of training an instance classifier.
As the choice of negative pairs is crucial for the quality of learned representations, recent works have carefully designed them.
Memory-based approaches~\citep{wu2018unsupervised,hjelm2019learning,bachman2019learning,misra2020self,tian2019contrastive} maintain a memory bank of embedding vectors of instances to keep negative samples, where the memory is updated with embedding vectors extracted from previous batches.
In addition, MoCo~\citep{he2020momentum}
showed that differentiating the model for anchors and positive/negative samples is effective, where the model for positive/negative samples is updated by the exponential moving average of the model for anchors.
On the other hand, recent works~\citep{ye2019unsupervised,misra2020self,chen2020simple,tian2019contrastive} showed that learning invariance to different views is important in contrastive representation learning.
The views can be generated through data augmentations carefully designed using domain knowledge~\citep{chen2020simple}, splitting input channels~\citep{tian2019contrastive}, or borrowing the idea of other pretext tasks, such as creating jigsaw puzzles or rotating inputs~\citep{misra2020self}.
In particular, SimCLR~\citep{chen2020simple} showed that a simple memory-free approach with a large batch size and strong data augmentations has a comparable performance to memory-based approaches.
InfoMin~\citep{tian2020makes} further studied a way to generate good views for contrastive representation learning and achieved state-of-the-art performance by combining prior works.
Different from other contrastive representation learning methods, BYOL~\citep{grill2020bootstrap} does not require negative pairs, where the proposed pretext task aims at predicting latent representations of one view from another.
While prior works have focused on {SSL} on large-scale visual recognition tasks, our work focuses on contrastive representation learning in both small- and large-scale settings in different domains.
\paragraph{Data augmentation}
is a technique to increase the diversity of data, especially when training data are insufficient for generalization.
Since the augmented data must preserve the semantics of the original data, data augmentations are carefully designed using domain knowledge about images~\citep{devries2017improved,cubuk2019autoaugment,cubuk2019randaugment,zhong2020random}, speech~\citep{amodei2016deep,park2019specaugment}, or natural languages~\citep{zhang2015character,wei2019eda}.
Some works have studied data augmentation with less domain knowledge:
\citet{devries2017dataset} proposed a domain-agnostic augmentation strategy by first encoding the dataset and then applying augmentations in the feature space.
MixUp~\citep{zhang2018mixup} is an effective data augmentation strategy in supervised learning,
which performs vicinal risk minimization instead of empirical risk minimization, by linearly interpolating input data and their labels on the data and label spaces, respectively.
On the other hand, MixUp has also shown its effectiveness in other tasks and non-vision domains, including
generative adversarial networks~\citep{lucas2018mixed},
improved robustness and uncertainty~\citep{hendrycks2020augmix}, and
sentence classification in natural language processing~\citep{guo2020nonlinear,guo2019augmenting}.
Other variations have also been investigated by interpolating in the feature space~\citep{verma2019manifold} or leveraging domain knowledge~\citep{yun2019cutmix}.
MixUp would not be directly applicable to some domains, such as point clouds,
but its adaptation can be effective~\citep{harris2020fmix}.
{{$i$-Mix}} is a kind of data augmentation for better generalization in contrastive representation learning, resulting in better performance on downstream tasks.
\paragraph{Concurrent works}
have leveraged the idea of MixUp for contrastive representation learning.
As discussed in Section~\ref{sec:inputmix}, only input data can be mixed for improving contrastive representation learning~\citep{shen2020rethinking,verma2020towards,zhou2020comparing}, which can be considered as injecting data-driven noises.
\citet{kalantidis2020hard} mixed hard negative samples on the embedding space.
\citet{kim2020mixco} reported similar observations to ours but focused on small image datasets.
\section{Approach}
\label{sec:approach}
In this section, we review MixUp~\citep{zhang2018mixup} in supervised learning and present {{$i$-Mix}} in contrastive learning~\citep{he2020momentum,chen2020simple,grill2020bootstrap}.
Throughout this section, let $\mathcal{X}$ be a data space, $\mathbb{R}^{D}$ be a $D$-dimensional embedding space, and a model $f \,{:}\, \mathcal{X} \,{\rightarrow}\, \mathbb{R}^{D}$ be a mapping between them.
For conciseness, $f_{i} \,{=}\, f(x_{i})$ and $\tilde{f}_{i} \,{=}\, f(\tilde{x}_{i})$ for $x_{i}, \tilde{x}_{i} \in \mathcal{X}$, and model parameters are omitted in loss functions.
\subsection{MixUp in Supervised Learning}
\label{sec:mixup}
Suppose a one-hot label $y_{i} \in \{0,1\}^{C}$ is assigned to a data instance $x_{i}$, where $C$ is the number of classes.
Let a linear classifier predicting the labels consist of weight vectors $\{w_{1}, \dots, w_{C}\}$, where $w_{c} \in \mathbb{R}^{D}$.\footnote{We omit bias terms for presentation clarity.}
Then, the cross-entropy loss for supervised learning is defined as:
\begin{align}
\ell_{\text{Sup}}(x_{i}, y_{i}) =
-\sum_{c=1}^{C} y_{i,c}
\log \frac{ \exp(w_{c}^{\top} f_{i}) }{ \sum_{k=1}^{C} \exp( w_{k}^{\top} f_{i}) }.
\label{eq:sup}
\end{align}
While the cross-entropy loss is widely used for supervised training of deep neural networks, there are several challenges of training with the cross-entropy loss, such as preventing overfitting or networks being overconfident.
Several regularization techniques have been proposed to alleviate these issues, including
label smoothing~\citep{szegedy2016rethinking},
adversarial training~\citep{miyato2018virtual}, and
confidence calibration~\citep{lee2018training}.
MixUp~\citep{zhang2018mixup} is an effective regularization with negligible computational overhead.
It conducts a linear interpolation of two data instances in both input and label spaces and trains a model by minimizing the cross-entropy loss defined on the interpolated data and labels.
Specifically, for two labeled data $(x_{i}, y_{i})$, $(x_{j}, y_{j})$, the MixUp loss is defined as follows:
\begin{align}
\ell_{\text{Sup}}^{\text{MixUp}} \big( (x_{i}, y_{i}), (x_{j}, y_{j}) ; \lambda \big) =
\ell_{\text{Sup}}( \lambda x_{i} + (1 - \lambda) x_{j}, \lambda y_{i} + (1 - \lambda) y_{j} ),
\label{eq:mixup}
\end{align}
where $\lambda \,{\sim}\, \text{Beta}(\alpha, \alpha)$ is a mixing coefficient sampled from the beta distribution.
MixUp is a vicinal risk minimization method~\citep{chapelle2001vicinal} that augments data and their labels
in a data-driven manner.
Not only improving the generalization on the supervised task, it also improves adversarial robustness~\citep{pang2019mixup} and confidence calibration~\citep{thulasidasan2019mixup}.
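For reference, one supervised MixUp training step can be sketched as follows; since the cross-entropy loss is linear in the labels, the convex combination of the two losses below equals the loss on the mixed one-hot labels. The helper name is ours.
\begin{minted}[fontfamily=courier]{python}
import torch
import torch.nn.functional as F

def mixup_loss(model, x, y, alpha=1.0):
    # x: batch of inputs, y: integer class labels.
    lam = torch.distributions.Beta(alpha, alpha).sample()
    idx = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[idx]      # mix inputs
    logits = model(x_mix)
    # Equivalent to cross-entropy against lam*y_i + (1-lam)*y_j.
    return lam * F.cross_entropy(logits, y) + \
           (1 - lam) * F.cross_entropy(logits, y[idx])
\end{minted}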
\subsection{\texorpdfstring{{$i$-Mix}}{{i-Mix}} in Contrastive Learning}
\label{sec:imix}
We introduce \emph{instance mix ({{$i$-Mix}})}, a data-driven augmentation strategy for contrastive representation learning to improve the generalization of learned representations.
Intuitively, instead of mixing class labels, {{$i$-Mix}} interpolates their \emph{virtual} labels, which indicate their identities in a batch.
Let $\mathcal{B} \,{=}\, \{(x_{i}, \tilde{x}_{i})\}_{i=1}^{N}$ be a batch of data pairs, where $N$ is the batch size, $x_{i}, \tilde{x}_{i} \,{\in}\, \mathcal{X}$ are two views of the same data, which are usually generated by different augmentations.
For each anchor $x_{i}$, we call $\tilde{x}_{i}$ and $\tilde{x}_{j {\neq} i}$ positive and negative samples, respectively.\footnote{%
Some literature~\citep{he2020momentum,chen2020improved} refers to them as query and positive/negative keys.
}
Then, the model $f$ learns to maximize similarities of positive pairs (instances from the same data) while minimizing similarities of negative pairs (instances from different data) in the embedding space.
The output of $f$ is L2-normalized,
which has been shown to be effective~\citep{wu2018incremental,he2020momentum,chen2020simple}.
Let $v_{i} \,{\in}\, \{0,1\}^{N}$ be the virtual label of $x_{i}$ and $\tilde{x}_{i}$ in a batch $\mathcal{B}$, where $v_{i,i} \,{=}\, 1$ and $v_{i,j {\neq} i} \,{=}\, 0$.
For a general sample-wise contrastive loss with virtual labels $\ell(x_{i}, v_{i})$, the {{$i$-Mix}} loss is defined as follows:
\begin{align}
\ell^{\text{{$i$-Mix}}} \big( (x_{i}, v_{i}), (x_{j}, v_{j}) ; \mathcal{B}, \lambda \big) =
\ell( \text{Mix}(x_{i}, x_{j}; \lambda), \lambda v_{i} + (1 - \lambda) v_{j} ; \mathcal{B}),
\label{eq:imix}
\end{align}
where $\lambda \,{\sim}\, \text{Beta}(\alpha, \alpha)$ is a mixing coefficient and $\text{Mix}$ is a mixing operator, which can be adapted depending on target domains:
for example,
$\text{MixUp}(x_{i}, x_{j}; \lambda) \,{=}\, \lambda x_{i} \,{+}\, (1 {-} \lambda) x_{j}$~\citep{zhang2018mixup} when data values are continuous,
and $\text{CutMix}(x_{i}, x_{j}; \lambda) \,{=}\, M_{\lambda} {\odot} x_{i} \,{+}\, (1{-}M_{\lambda}) {\odot} x_{j}$~\citep{yun2019cutmix} when data values have a spatial correlation with neighbors, where $M_{\lambda}$ is a binary mask filtering out a region whose relative area is $(1{-}\lambda)$, and $\odot$ is an element-wise multiplication.
Note that some mixing operators might not work well for some domains:
for example, CutMix would not be valid when data values and their spatial neighbors have no correlation.
However, the MixUp operator generally works well across domains including image, speech, and tabular;
we use it for {{$i$-Mix}} formulations and experiments, unless otherwise specified.
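As an illustration, the CutMix operator can be sketched as below for a single image; the box sampling (a random square whose relative area is approximately $(1{-}\lambda)$, clipped at the image boundary) is an assumption and may differ from the original CutMix implementation.
\begin{minted}[fontfamily=courier]{python}
import torch

def cutmix(x_i, x_j, lam):
    # x_i, x_j: (C, H, W) tensors; paste a box from x_j into x_i.
    _, H, W = x_i.shape
    cut_h, cut_w = int(H * (1 - lam) ** 0.5), int(W * (1 - lam) ** 0.5)
    cy, cx = torch.randint(H, (1,)).item(), torch.randint(W, (1,)).item()
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, H)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, W)
    out = x_i.clone()
    out[:, y1:y2, x1:x2] = x_j[:, y1:y2, x1:x2]
    return out
\end{minted}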
In the following, we show how to apply {{$i$-Mix}} to contrastive representation learning methods.
\paragraph{SimCLR~\citep{chen2020simple}}
is a simple contrastive representation learning method without a memory bank, where
each anchor has one positive sample and ($2N{-}2$) negative samples.
Let $x_{N+i} = \tilde{x}_{i}$ for conciseness.
Then, the ($2N{-}1$)-way discrimination loss is written as follows:
\begin{align}
\ell_{\text{SimCLR}} ( x_{i} ; \mathcal{B} ) =
-\log \frac{ \exp \big( s(f_{i}, f_{(N+i) \bmod 2N}) / \tau \big) }
{ \sum_{k=1, k \neq i}^{2N} \exp \big( s(f_{i}, f_{k}) / \tau \big) },
\label{eq:simclr}
\end{align}
where $\tau$ is a temperature scaling parameter and $s(f, \tilde{f}) \,{=}\, (f^{\top} \tilde{f}) / \lVert f \rVert \lVert \tilde{f} \rVert$ is the inner product of two L2-normalized vectors.
In this formulation, {{$i$-Mix}} is not directly applicable because virtual labels are defined differently for each anchor.\footnote{%
We present the application of {{$i$-Mix}} to the original SimCLR formulation in Appendix~\ref{sec:imix_more}.
}
To resolve this issue,
we simplify the formulation of SimCLR by excluding anchors from negative samples.
Then, with virtual labels, the $N$-way discrimination loss is written as follows:
\begin{align}
\ell_{\text{N-pair}} ( x_{i}, v_{i} ; \mathcal{B} ) =
-\sum_{n=1}^{N} v_{i,n}
\log \frac{ \exp \big( s(f_{i}, \tilde{f}_{n}) / \tau \big) }
{ \sum_{k=1}^{N} \exp \big( s(f_{i}, \tilde{f}_{k}) / \tau \big) },
\label{eq:npair}
\end{align}
which we call the \textbf{N-pair} contrastive loss, as the formulation is similar to the N-pair loss in the context of metric learning~\citep{sohn2016improved}.%
\footnote{%
InfoNCE~\citep{oord2018representation} is a similar loss inspired by the idea of noise-contrastive estimation~\citep{gutmann2010noise}.
}
For two data instances $(x_{i}, v_{i})$, $(x_{j}, v_{j})$ and a batch of data pairs $\mathcal{B} \,{=}\, \{(x_{i}, \tilde{x}_{i})\}_{i=1}^{N}$, the {{$i$-Mix}} loss is defined as follows:
\begin{align}
\ell_{\text{N-pair}}^{\text{{$i$-Mix}}} \big( (x_{i}, v_{i}), (x_{j}, v_{j}) ; \mathcal{B}, \lambda \big) =
\ell_{\text{N-pair}}( \lambda x_{i} + (1 - \lambda) x_{j}, \lambda v_{i} + (1 - \lambda) v_{j} ; \mathcal{B}).
\label{eq:npair_imix}
\end{align}
\begin{algorithm}[t]
\caption{Loss computation for {{$i$-Mix}} on N-pair contrastive learning in PyTorch-like style.}
\label{alg:imix}
\begin{minted}[fontfamily=courier]{python}
a, b = aug(x), aug(x) # two different views of input x
lam = Beta(alpha, alpha).sample() # mixing coefficient
randidx = randperm(len(x))
a = lam * a + (1-lam) * a[randidx]
logits = matmul(normalize(model(a)), normalize(model(b)).T) / t
loss = lam * CrossEntropyLoss(logits, arange(len(x))) + \
(1-lam) * CrossEntropyLoss(logits, randidx)
\end{minted}
\end{algorithm}
Algorithm~\ref{alg:imix} provides the pseudocode of {{$i$-Mix}} on N-pair contrastive learning for one iteration.\footnote{%
For losses linear with respect to labels (e.g., the cross-entropy loss), they are equivalent to
$\lambda \ell( \lambda x_{i} + (1 - \lambda) x_{j}, v_{i} ) + (1 - \lambda) \ell( \lambda x_{i} + (1 - \lambda) x_{j}, v_{j} )$, i.e., optimization to the mixed label is equivalent to joint optimization to original labels.
The proof for losses in contrastive learning methods is provided in Appendix~\ref{sec:linear}.
}
\paragraph{Pair relations in contrastive loss.}
To use contrastive loss for representation learning, one needs to properly define a pair relation $\{(x_{i}, \tilde{x}_{i})\}_{i=1}^{N}$.
For contrastive representation learning,
where semantic class labels are not provided, the pair relation is defined such that
1) a positive pair, $x_{i}$ and $\tilde{x}_{i}$, are different views of the same data and
2) a negative pair, $x_{i}$ and $\tilde{x}_{j \neq i}$, are different data instances.
For supervised representation learning,
$x_{i}$ and $\tilde{x}_{i}$ are two data instances from the same class, while $x_{i}$ and $\tilde{x}_{j \neq i}$ are from different classes.
Note that two augmented versions of the same data also belong to the same class,
so they can also be considered as a positive pair.
{{$i$-Mix}} is not limited to self-supervised contrastive representation learning, but it can also be used as a regularization method for supervised contrastive representation learning~\citep{khosla2020supervised} or deep metric learning~\citep{sohn2016improved,movshovitz2017no}.
\paragraph{MoCo~\citep{he2020momentum}.}
In contrastive representation learning, the number of negative samples affects the quality of learned representations~\citep{arora2019theoretical}.
Because SimCLR mines negative samples in the current batch, having a large batch size is crucial, which often requires a lot of computational resources~\citep{chen2020simple}.
For efficient training, recent works have maintained a memory bank $\mathcal{M} \,{=}\, \{\mu_{k}\}_{k=1}^{K}$, which is a queue of previously extracted embedding vectors, where $K$ is the size of the memory bank~\citep{wu2018unsupervised,he2020momentum,tian2019contrastive,tian2020makes}.
In addition, MoCo introduces an exponential moving average (EMA) model to extract positive and negative embedding vectors, whose parameters are updated as
$\theta_{f^{\texttt{EMA}}} \,{\leftarrow}\, m \theta_{f^{\texttt{EMA}}} \,{+}\, (1 \,{-}\, m) \theta_{f}$, where $m \,{\in}\, [0,1)$ is a momentum coefficient and $\theta$ is model parameters.
The loss is written as follows:
\begin{align}
\ell_{\text{MoCo}} ( x_{i} ; \mathcal{B}, \mathcal{M} ) =
-\log \frac{ \exp \big( s(f_{i}, \tilde{f}_{i}^{\texttt{EMA}}) / \tau \big) }
{ \exp \big( s(f_{i}, \tilde{f}_{i}^{\texttt{EMA}}) / \tau \big)
+ \sum_{k=1}^{K} \exp \big( s(f_{i}, \mu_{k}) / \tau \big) }.
\label{eq:moco}
\end{align}
The memory bank $\mathcal{M}$ is then updated with $\{\tilde{f}_{i}^{\texttt{EMA}}\}$ in the first-in first-out order.
In this ($K{+}1$)-way discrimination loss, data pairs are independent of each other,
such that {{$i$-Mix}} is not directly applicable because virtual labels are defined differently for each anchor.
To overcome this issue, we include the positive samples of other anchors as negative samples, similar to the N-pair contrastive loss in Eq.~\eqref{eq:npair}.
Let $\tilde{v}_{i} \,{\in}\, \{0,1\}^{N+K}$ be a virtual label indicating the positive sample of each anchor, where $\tilde{v}_{i,i} \,{=}\, 1$ and $\tilde{v}_{i,j \neq i} \,{=}\, 0$.
Then, the ($N{+}K$)-way discrimination loss is written as follows:
\begin{align}
\ell_{\text{MoCo}} ( x_{i}, \tilde{v}_{i} ; \mathcal{B}, \mathcal{M} ) =
-\sum_{n=1}^{N} \tilde{v}_{i,n}
\log \frac{ \exp \big( s(f_{i}, \tilde{f}_{n}^{\texttt{EMA}}) / \tau \big) }
{ \sum_{k=1}^{N} \exp \big( s(f_{i}, \tilde{f}_{k}^{\texttt{EMA}}) / \tau \big)
+ \sum_{k=1}^{K} \exp \big( s(f_{i}, \mu_{k}) / \tau \big) }.
\label{eq:moco_v}
\end{align}
As virtual labels are bounded in the same set in this formulation, {{$i$-Mix}} is directly applicable:
for two data instances $(x_{i}, \tilde{v}_{i})$, $(x_{j}, \tilde{v}_{j})$, a batch of data pairs $\mathcal{B} \,{=}\, \{(x_{i}, \tilde{x}_{i})\}_{i=1}^{N}$, and the memory bank $\mathcal{M}$, the {{$i$-Mix}} loss is defined as follows:
\begin{align}
\ell_{\text{MoCo}}^{\text{{$i$-Mix}}} \big( (x_{i}, \tilde{v}_{i}), (x_{j}, \tilde{v}_{j}) ; \mathcal{B}, \mathcal{M}, \lambda \big) =
\ell_{\text{MoCo}}( \lambda x_{i} + (1 - \lambda) x_{j}, \lambda \tilde{v}_{i} + (1 - \lambda) \tilde{v}_{j} ; \mathcal{B}, \mathcal{M} ).
\label{eq:moco_imix}
\end{align}
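A minimal sketch of the two MoCo-specific operations used above, the EMA update of the key encoder and the first-in first-out memory bank update, is given below; the helper names are ours.
\begin{minted}[fontfamily=courier]{python}
import torch

@torch.no_grad()
def momentum_update(f, f_ema, m=0.999):
    # theta_ema <- m * theta_ema + (1 - m) * theta
    for p, p_ema in zip(f.parameters(), f_ema.parameters()):
        p_ema.mul_(m).add_(p, alpha=1 - m)

@torch.no_grad()
def enqueue_dequeue(queue, keys):
    # queue: (K, D) memory bank; keys: (N, D) new EMA embeddings.
    return torch.cat([queue, keys], dim=0)[keys.size(0):]
\end{minted}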
\paragraph{BYOL~\citep{grill2020bootstrap}.}
Different from other contrastive representation learning methods, BYOL is a self-supervised representation learning method without contrasting negative pairs.
For two views of the same data $x_{i}, \tilde{x}_{i} \,{\in}\, \mathcal{X}$, the model $f$ learns to predict a view embedded with the EMA model $\tilde{f}_{i}^{\texttt{EMA}}$ from its embedding $f_{i}$.
Specifically, an additional prediction layer $g$ is introduced, such that the difference between $g(f_{i})$ and $\tilde{f}_{i}^{\texttt{EMA}}$ is learned to be minimized.
The BYOL loss is written as follows:
\begin{align}
\ell_{\text{BYOL}} ( x_{i}, \tilde{x}_{i} ) =
\left\lVert g(f_{i}) / \lVert g(f_{i}) \rVert - \tilde{f}_{i} / \lVert \tilde{f}_{i} \rVert \right\rVert^{2}
= 2 - 2 \cdot s(g(f_{i}), \tilde{f}_{i}).
\label{eq:byol}
\end{align}
This formulation can be represented in the form of the general contrastive loss in Eq.~\eqref{eq:imix}, as the second view $\tilde{x}_{i}$ can be accessed from the batch $\mathcal{B}$ with its virtual label $v_{i}$.
To derive {{$i$-Mix}} in BYOL, let $\tilde{F} \,{=}\, [\tilde{f}_{1} / \lVert \tilde{f}_{1} \rVert, {\dots}, \tilde{f}_{N} / \lVert \tilde{f}_{N} \rVert] \,{\in}\, \mathbb{R}^{D {\times} N}$ be the collection of L2-normalized embedding vectors of the second views, such that $\tilde{f}_{i} / \lVert \tilde{f}_{i} \rVert \,{=}\, \tilde{F} v_{i}$.
Then, the BYOL loss is written as follows:
\begin{align}
\ell_{\text{BYOL}} ( x_{i}, v_{i} ; \mathcal{B} ) =
\left\lVert g(f_{i}) / \lVert g(f_{i}) \rVert - \tilde{F} v_{i} \right\rVert^{2}
= 2 - 2 \cdot s(g(f_{i}), \tilde{F} v_{i}).
\label{eq:byol_v}
\end{align}
For two data instances $(x_{i}, v_{i})$, $(x_{j}, v_{j})$ and a batch of data pairs $\mathcal{B} \,{=}\, \{(x_{i}, \tilde{x}_{i})\}_{i=1}^{N}$, the {{$i$-Mix}} loss is defined as follows:
\begin{align}
\ell_{\text{BYOL}}^{\text{{$i$-Mix}}} \big( (x_{i}, v_{i}), (x_{j}, v_{j}) ; \mathcal{B}, \lambda \big) =
\ell_{\text{BYOL}}( \lambda x_{i} + (1 - \lambda) x_{j}, \lambda v_{i} + (1 - \lambda) v_{j} ; \mathcal{B}).
\label{eq:byol_imix}
\end{align}
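A minimal sketch of this loss is given below, assuming the online predictions for the mixed first views and the EMA embeddings of the unmixed second views are provided; the helper name and argument layout are ours.
\begin{minted}[fontfamily=courier]{python}
import torch.nn.functional as F

def byol_imix_loss(online_pred, target_emb, lam, randidx):
    # online_pred: g(f(mixed inputs)); target_emb: EMA embeddings of second views.
    p = F.normalize(online_pred, dim=1)
    z = F.normalize(target_emb, dim=1)
    # Mixed virtual-label target: lam * z_i + (1 - lam) * z_j.
    target = lam * z + (1 - lam) * z[randidx]
    return ((p - target) ** 2).sum(dim=1).mean()
\end{minted}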
\subsection{{InputMix}}
\label{sec:inputmix}
The contribution of data augmentations to the quality of learned representations is crucial in contrastive representation learning.
When domain knowledge about effective data augmentations is limited,
we propose to apply {InputMix}, which mixes input data but not their labels, together with {{$i$-Mix}}.
This method can be viewed as introducing structured, data-driven noise from auxiliary data into the principal data, i.e., the one with the largest mixing coefficient $\lambda$, whose label is assigned to the mixed data~\citep{shen2020rethinking,verma2020towards,zhou2020comparing}.
We applied {InputMix} and {{$i$-Mix}} together on image datasets in Table~\ref{tb:aug}.
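One possible instantiation of {InputMix} is sketched below for the case of a single auxiliary input; forcing the principal coefficient to be the larger one is our assumption of how to keep the principal (virtual) label, and more than one auxiliary input could be mixed in the same way.
\begin{minted}[fontfamily=courier]{python}
import torch

def inputmix(x, alpha=1.0):
    # Mix each input with another input in the batch; labels are not mixed.
    lam = torch.distributions.Beta(alpha, alpha).sample()
    lam = torch.maximum(lam, 1 - lam)  # the principal data dominates
    idx = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[idx]
\end{minted}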
\section{Experiments}
\label{sec:exp}
In this section, we demonstrate the effectiveness of {{$i$-Mix}}.
In all experiments, we conduct contrastive representation learning on a pretext dataset and evaluate the quality of representations via supervised classification on a downstream dataset. We report the accuracy averaged over up to five runs.
In the first stage, a convolutional neural network (CNN) or multilayer perceptron (MLP) followed by the two-layer MLP projection head is trained on an unlabeled dataset.
Then, we replace the projection head with a linear classifier and train only the linear classifier on a labeled dataset for the downstream task.
Except for transfer learning, datasets for the pretext and downstream tasks are the same.
For {{$i$-Mix}}, we sample a mixing coefficient $\lambda \,{\sim}\, \text{Beta}(\alpha,\alpha)$ for each data instance, where $\alpha \,{=}\, 1$ unless otherwise stated.%
\footnote{%
$\text{Beta}(\alpha,\alpha)$ is the uniform distribution when $\alpha \,{=}\, 1$, bell-shaped when $\alpha \,{>}\, 1$, and bimodal when $\alpha \,{<}\, 1$.
}
Additional details for the experimental settings and more experiments can be found in Appendix~\ref{sec:exp_more}.
\subsection{Experimental Setup}
\label{sec:setup}
\paragraph{Baselines and datasets.}
We consider
1) N-pair contrastive learning as a memory-free contrastive learning method,\footnote{%
We use the N-pair formulation in Eq.~\eqref{eq:npair} instead of SimCLR as it is simpler and more efficient to integrate {{$i$-Mix}}.
As shown in Appendix~\ref{sec:var}, the N-pair formulation results in no worse performance than SimCLR.
}
2) MoCo v2~\citep{he2020momentum,chen2020improved}
\footnote{%
MoCo v2 improves the performance of MoCo with a cosine learning rate schedule and additional data augmentations.
}
as a memory-based contrastive learning method, and
3) BYOL~\citep{grill2020bootstrap},
which is a self-supervised learning method without negative pairs.
We apply {{$i$-Mix}} to these methods and compare their performances.
To show the effectiveness of {{$i$-Mix}} across domains, we evaluate the methods on datasets from multiple domains, including
image, speech, and tabular datasets.
CIFAR-10/100~\citep{krizhevsky2009learning} consist of 50k training and 10k test images, and ImageNet~\citep{deng2009imagenet} has 1.3M training and 50k validation images; we use the validation images for evaluation.
For ImageNet, we also use a subset of randomly chosen 100 classes out of 1k classes to experiment at a different scale.
We apply a set of data augmentations randomly in sequence including random resized cropping,
horizontal flipping, color jittering, gray scaling,
and Gaussian blurring for ImageNet,
which has been shown to be effective~\citep{chen2020simple,chen2020improved}.
We use ResNet-50~\citep{he2016deep} as a backbone network.
Models are trained with a batch size of 256 (i.e., 512 including augmented data)
for up to 4000 epochs on CIFAR-10 and 100,
and with a batch size of 512 for 800 epochs on ImageNet.
For ImageNet experiments, we use the CutMix~\citep{yun2019cutmix} version of {{$i$-Mix}}.
The Speech Commands dataset~\citep{warden2018speech} contains 51k training, 7k validation, and 7k test data
in 12 classes.
We apply a set of data augmentations randomly in sequence:
changing amplitude, speed, and pitch in the time domain, and
stretching, time shifting, and adding background noise in the frequency domain.
Augmented data are then transformed into a $32{\times}32$ mel spectrogram.
We use the same architecture as in the image experiments.
Models are trained with a batch size of 256 for 500 epochs.
For tabular dataset experiments, we consider Forest Cover Type (CovType) and Higgs Boson (Higgs) from UCI repository~\citep{asuncion2007uci}.
CovType contains 15k training and 566k test data in 7 classes, and
Higgs contains 10.5M training and 0.5M test data for binary classification.
For Higgs, we use a subset of 100k and 1M training data to experiment at a different scale.
Since the domain knowledge for data augmentations on tabular data is limited,
only masking noise with a probability of 0.2 is used as a data augmentation.
We use a 5-layer MLP with batch normalization~\citep{ioffe2015batch} as a backbone network.
Models are trained with a batch size of 512 for 500 epochs.
We use $\alpha \,{=}\, 2$ for CovType and Higgs100k, as it is slightly better than $\alpha \,{=}\, 1$.
\subsection{Main Results}
\label{sec:domain}
\begin{table}[t]
\centering
\begin{tabular}{cccccccc}
\toprule
Domain & Dataset & N-pair & + {{$i$-Mix}} & MoCo v2 & + {{$i$-Mix}} & BYOL & + {{$i$-Mix}} \cr
\cmidrule(rl){1-1}\cmidrule(rl){2-2}\cmidrule(rl){3-4}\cmidrule(rl){5-6}\cmidrule(rl){7-8}
\multirow{2}{*}{Image} & CIFAR-10 &
{93.3} \scriptsize{$\pm$ 0.1} & \textbf{95.6} \scriptsize{$\pm$ 0.2} & {93.5} \scriptsize{$\pm$ 0.2} & \textbf{96.1} \scriptsize{$\pm$ 0.1} & {94.2} \scriptsize{$\pm$ 0.2} & \textbf{96.3} \scriptsize{$\pm$ 0.2} \cr
& CIFAR-100 &
{70.8} \scriptsize{$\pm$ 0.4} & \textbf{75.8} \scriptsize{$\pm$ 0.3} & {71.6} \scriptsize{$\pm$ 0.1} & \textbf{78.1} \scriptsize{$\pm$ 0.3} & {72.7} \scriptsize{$\pm$ 0.4} & \textbf{78.6} \scriptsize{$\pm$ 0.2} \cr
\cmidrule(rl){1-1}\cmidrule(rl){2-2}\cmidrule(rl){3-4}\cmidrule(rl){5-6}\cmidrule(rl){7-8}
Speech & Commands &
{94.9} \scriptsize{$\pm$ 0.1} & \textbf{98.3} \scriptsize{$\pm$ 0.1} & {96.3} \scriptsize{$\pm$ 0.1} & \textbf{98.4} \scriptsize{$\pm$ 0.0} & {94.8} \scriptsize{$\pm$ 0.2} & \textbf{98.3} \scriptsize{$\pm$ 0.0} \cr
\cmidrule(rl){1-1}\cmidrule(rl){2-2}\cmidrule(rl){3-4}\cmidrule(rl){5-6}\cmidrule(rl){7-8}
Tabular & CovType &
{68.5} \scriptsize{$\pm$ 0.3} & \textbf{72.1} \scriptsize{$\pm$ 0.2} & {70.5} \scriptsize{$\pm$ 0.2} & \textbf{73.1} \scriptsize{$\pm$ 0.1} & {72.1} \scriptsize{$\pm$ 0.2} & \textbf{74.1} \scriptsize{$\pm$ 0.2} \cr
\bottomrule
\end{tabular}
\caption{
Comparison of contrastive representation learning methods and {{$i$-Mix}} in different domains.
}
\label{tb:domain}
\end{table}
Table~\ref{tb:domain} shows the wide applicability of {{$i$-Mix}} to state-of-the-art contrastive representation learning methods in multiple domains.
{{$i$-Mix}} results in consistent improvements on the classification accuracy, e.g., up to 6.5\% when {{$i$-Mix}} is applied to MoCo v2 on CIFAR-100.
Interestingly, we observe that linear classifiers on top of representations learned with {{$i$-Mix}}, without fine-tuning the pre-trained part, often yield
a classification accuracy on par with
simple end-to-end supervised learning from random initialization,
e.g., {{$i$-Mix}} vs. end-to-end supervised learning performance is 96.3\% vs. 95.5\% on CIFAR-10, 78.6\% vs. 78.9\% on CIFAR-100, and 98.2\% vs. 98.0\% on Speech Commands.%
\footnote{%
Supervised learning with improved methods, e.g., MixUp, outperforms $i$-Mix.
However, linear evaluation on top of self-supervised representation learning is a proxy to measure the quality of representations learned without labels, such that it is not supposed to be compared with the performance of supervised learning.
}
\subsection{Regularization Effect of \texorpdfstring{{$i$-Mix}}{{i-Mix}}}
\label{sec:scalablility}
\begin{figure}[t]
\centering
\begin{subfigure}[h]{0.436\linewidth}
\includegraphics[width=\linewidth]{figures/arch_epoch_cifar10.pdf}
\vspace{-12pt}
\caption{CIFAR-10}
\label{fig:arch_epoch_cifar10}
\end{subfigure}
\hfill
\begin{subfigure}[h]{0.545\linewidth}
\includegraphics[width=\linewidth]{figures/arch_epoch_cifar100.pdf}
\vspace{-12pt}
\caption{CIFAR-100}
\label{fig:arch_epoch_cifar100}
\end{subfigure}
\caption{
Comparison of performance gains by applying {{$i$-Mix}} to MoCo v2 with different model sizes and number of epochs on CIFAR-10 and 100.
}
\label{fig:arch_epoch}
\end{figure}
\begin{figure}[t]
\centering
\begin{minipage}[h]{0.436\linewidth}
\centering
\resizebox{\linewidth}{!}{%
\begin{tabular}{cccc}
\toprule
Domain & Dataset & MoCo v2 & + {{$i$-Mix}} \cr
\cmidrule(rl){1-1}\cmidrule(rl){2-2}\cmidrule(rl){3-4}
\multirow{2}{*}{Image}
& ImageNet-100 &
84.1 & \textbf{87.0} \cr
& ImageNet-1k &
70.9 & \textbf{71.3} \cr
\bottomrule
\cr
\toprule
Domain & Dataset & MoCo v2 & + {{$i$-Mix}} \cr
\cmidrule(rl){1-1}\cmidrule(rl){2-2}\cmidrule(rl){3-4}
\multirow{2}{*}{Tabular}
& Higgs100k &
72.1 & \textbf{72.9} \cr
& Higgs1M &
\textbf{74.9} & 74.5 \cr
\bottomrule
\end{tabular}
}
\vspace{8pt}
\captionof{table}{
Comparison of MoCo v2 and {{$i$-Mix}} on large-scale datasets.
}
\label{tb:large}
\end{minipage}
\hfill
\begin{minipage}[h]{0.545\linewidth}
\includegraphics[width=\linewidth]{figures/dsize_imagenet_dr.pdf}
\vspace{-12pt}
\caption{
Comparison of MoCo v2 and {{$i$-Mix}} trained on the different size of ImageNet.
}
\label{fig:dsize}
\end{minipage}
\end{figure}
A better regularization method often benefits from longer training of deeper models, which is more critical when training on a small dataset.
To investigate the regularization effect of {{$i$-Mix}}, we first make a comparison between MoCo v2 and {{$i$-Mix}} by training with different model sizes and number of training epochs on the pretext task.
We train ResNet-18, 50, 101, and 152 models with varying number of training epochs from 200 to 2000.
Figure~\ref{fig:arch_epoch} shows the performance of MoCo v2 (solid box) and {{$i$-Mix}} (dashed box).
The improvement from applying {{$i$-Mix}} to MoCo v2 is consistent across different architecture sizes and numbers of training epochs.
Deeper models benefit from {{$i$-Mix}}, achieving 96.7\% on CIFAR-10
and 79.1\% on CIFAR-100
when the backbone network is ResNet-152.
On the other hand, models trained without {{$i$-Mix}} start to show a decrease in performance, possibly due to overfitting to the pretext task when trained longer.
The trend clearly shows that {{$i$-Mix}} results in better representations via improved regularization.
Next, we study the effect of {{$i$-Mix}} with varying dataset sizes for the pretext tasks.
Table~\ref{tb:large} shows the effect of {{$i$-Mix}} on large-scale datasets\footnote{%
Here, ``scale'' corresponds to the amount of data rather than image resolution.
}
from image and tabular domains.
We observe that {{$i$-Mix}} is particularly effective when the amount of training data is reduced,
e.g., ImageNet-100 consists of images from 100 classes and thus has only about 10\% of the training data of ImageNet-1k.
However, the performance gain is reduced when the amount of training data is large.
We further study representations learned with different pretext dataset sizes from 1\% to 100\% of the ImageNet training data in Figure~\ref{fig:dsize}.
Here, different from ImageNet-100, we reduce the amount of data for each class, but keep the number of classes the same.
We observe that the performance gain by {{$i$-Mix}} is more significant when the size of the pretext dataset is small.
Our study suggests that {{$i$-Mix}} is effective for regularizing self-supervised representation learning when training from a limited amount of data.
We believe that this is aligned with findings in \citet{zhang2018mixup} for MixUp in supervised learning.
Finally, when a large-scale unlabeled dataset is available, we expect {{$i$-Mix}} to still be useful for obtaining better representations when training longer with deeper and larger models.
\subsection{Contrastive Learning without Domain-Specific Data Augmentation}
\label{sec:aug}
\begin{table}[t]
\centering
\begingroup
\renewcommand{\thempfootnote}{\fnsymbol{mpfootnote}}
\begin{minipage}[h]{\linewidth}
\centering
\resizebox{\linewidth}{!}{%
\begin{tabular}{ccccccccccccc}
\toprule
\multirow{2}{*}{Aug} & \multicolumn{2}{c}{CIFAR-10} & \multicolumn{2}{c}{CIFAR-100} & \multicolumn{2}{c}{Speech Commands} & \multicolumn{2}{c}{CovType} & \multicolumn{2}{c}{Higgs100k} & \multicolumn{2}{c}{Higgs1M} \cr
\cmidrule(rl){2-3}\cmidrule(rl){4-5}\cmidrule(rl){6-7}\cmidrule(rl){8-9}\cmidrule(rl){10-11}\cmidrule(rl){12-13}
& MoCo v2 & + {{$i$-Mix}}\footnote{{InputMix} is applied when no other data augmentations are used.\label{fn:inputmix}} & MoCo v2 & + {{$i$-Mix}}\textsuperscript{\ref{fn:inputmix}} & MoCo v2 & + {{$i$-Mix}} & MoCo v2 & + {{$i$-Mix}} & MoCo v2 & + {{$i$-Mix}} & MoCo v2 & + {{$i$-Mix}} \cr
\cmidrule(rl){1-1}\cmidrule(rl){2-3}\cmidrule(rl){4-5}\cmidrule(rl){6-7}\cmidrule(rl){8-9}\cmidrule(rl){10-11}\cmidrule(rl){12-13}
- &
{47.7} \scriptsize{$\pm$ 1.3} & \textbf{83.4} \scriptsize{$\pm$ 0.4} &
{24.7} \scriptsize{$\pm$ 0.7} & \textbf{54.0} \scriptsize{$\pm$ 0.5} &
{76.9} \scriptsize{$\pm$ 1.7} & \textbf{92.8} \scriptsize{$\pm$ 0.5} &
{69.6} \scriptsize{$\pm$ 0.3} & \textbf{73.1} \scriptsize{$\pm$ 0.1} &
64.2 & \textbf{71.8} & 65.5 & \textbf{72.9} \cr
\ding{51} &
{93.5} \scriptsize{$\pm$ 0.2} & \textbf{96.1} \scriptsize{$\pm$ 0.1} &
{71.6} \scriptsize{$\pm$ 0.1} & \textbf{78.1} \scriptsize{$\pm$ 0.3} &
{96.3} \scriptsize{$\pm$ 0.1} & \textbf{98.4} \scriptsize{$\pm$ 0.0} &
{70.5} \scriptsize{$\pm$ 0.2} & \textbf{73.1} \scriptsize{$\pm$ 0.1} &
72.1 & \textbf{72.9} & \textbf{74.9} & 74.5 \cr
\bottomrule
\end{tabular}
}
\caption{
Comparison of MoCo v2 and {{$i$-Mix}} with and without data augmentations.
}
\label{tb:aug}
\end{minipage}
\endgroup
\end{table}
\begin{table}[t]
\centering
\begin{subtable}[h]{0.6\linewidth}
\centering
\scalebox{0.85}{%
\begin{tabular}{ccccc}
\toprule
Pretext & \multicolumn{2}{c}{CIFAR-10} & \multicolumn{2}{c}{CIFAR-100} \cr
\cmidrule(rl){1-1}\cmidrule(rl){2-3}\cmidrule(rl){4-5}
Downstream & MoCo v2 & + {{$i$-Mix}} & MoCo v2 & + {{$i$-Mix}} \cr
\cmidrule(rl){1-1}\cmidrule(rl){2-3}\cmidrule(rl){4-5}
CIFAR-10 & {93.5} \scriptsize{$\pm$ 0.2} & \textbf{96.1} \scriptsize{$\pm$ 0.1} & {85.9} \scriptsize{$\pm$ 0.3} & \textbf{90.0} \scriptsize{$\pm$ 0.4} \cr
CIFAR-100 & {64.1} \scriptsize{$\pm$ 0.4} & \textbf{70.8} \scriptsize{$\pm$ 0.4} & {71.6} \scriptsize{$\pm$ 0.1} & \textbf{78.1} \scriptsize{$\pm$ 0.3} \cr
\bottomrule
\end{tabular}
}
\caption{CIFAR-10 and 100 as the pretext dataset}
\end{subtable}
\begin{subtable}[h]{0.36\linewidth}
\centering
\scalebox{0.85}{%
\begin{tabular}{ccc}
\toprule
\multirow{2}{*}{\parbox{0.35\linewidth}{\centering VOC Object\\Detection}} & \multicolumn{2}{c}{ImageNet} \cr
\cmidrule(rl){2-3}
& MoCo v2 & + {{$i$-Mix}} \cr
\cmidrule(rl){1-1}\cmidrule(rl){2-3}
AP & 57.3 \scriptsize{$\pm$ 0.1} & \textbf{57.5} \scriptsize{$\pm$ 0.4} \cr
AP$_{50}$ & 82.5 \scriptsize{$\pm$ 0.2} & \textbf{82.7} \scriptsize{$\pm$ 0.2} \cr
AP$_{75}$ & 63.8 \scriptsize{$\pm$ 0.3} & \textbf{64.2} \scriptsize{$\pm$ 0.7} \cr
\bottomrule
\end{tabular}
}
\caption{ImageNet as the pretext dataset}
\end{subtable}
\caption{
Comparison of MoCo v2 and {{$i$-Mix}} in transfer learning.
}
\label{tb:transfer}
\end{table}
Data augmentations play a key role in contrastive representation learning,
which raises the question of how to apply it in domains where little or no knowledge of suitable augmentations is available.
In this section, we study the effectiveness of {{$i$-Mix}} as
a domain-agnostic strategy for contrastive representation learning
that can be adapted to different domains.
Table~\ref{tb:aug} shows the performance of MoCo v2 and {{$i$-Mix}} with and without data augmentations.
We observe significant performance gains with {{$i$-Mix}} when other data augmentations are not applied.
For example,
compared to the accuracy of 93.5\% on CIFAR-10 when other data augmentations are applied,
contrastive learning achieves 47.7\%
when trained without any data augmentations.
This suggests that data augmentation is an essential ingredient for the success of
contrastive representation learning~\citep{chen2020simple}.
However, {{$i$-Mix}} is able to learn meaningful representations without other data augmentations and achieves
an accuracy of 83.4\% on CIFAR-10.
In Table~\ref{tb:aug}, {InputMix} is applied together with {{$i$-Mix}} to further improve the performance on image datasets.
For each principal data point, we mix in two auxiliary data points, with mixing coefficients ($0.5 \lambda_{1} \,{+}\, 0.5$, $0.5 \lambda_{2}$, $0.5 \lambda_{3}$), where $\lambda_{1}, \lambda_{2}, \lambda_{3} \,{\sim}\, \text{Dirichlet}(1,1,1)$.\footnote{%
This guarantees that the mixing coefficient for the principal data point is larger than 0.5, which prevents training with noisy labels.
Note that \citet{beckham2019adversarial} also sampled mixing coefficients from the Dirichlet distribution for mixing more than two data.
}
In the above example, while {{$i$-Mix}} alone is already better than the baselines, adding {InputMix} further improves the performance of {{$i$-Mix}}, i.e., from 75.1\% to 83.4\% on CIFAR-10, and from 50.7\% to 54.0\% on CIFAR-100.
This confirms that {InputMix} can further improve the performance when domain-specific data augmentations are not available, as discussed in Section~\ref{sec:inputmix}.
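For concreteness, the following is a minimal Python sketch of this coefficient sampling for one principal and two auxiliary inputs; the function and variable names are illustrative only and do not correspond to a specific implementation.
\begin{verbatim}
import numpy as np

def inputmix(principal, aux1, aux2, rng=None):
    # Sample (l1, l2, l3) ~ Dirichlet(1, 1, 1) and rescale so that the
    # principal input always receives a weight strictly larger than 0.5;
    # the three weights sum to 1 by construction.
    rng = np.random.default_rng() if rng is None else rng
    l1, l2, l3 = rng.dirichlet((1.0, 1.0, 1.0))
    return (0.5 * l1 + 0.5) * principal + 0.5 * l2 * aux1 + 0.5 * l3 * aux2
\end{verbatim}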
Moreover, we verify its effectiveness on other domains beyond the image domain.
For example, the performance improves from 76.9\% to 92.8\% on the Speech Commands dataset when we assume no other data augmentations are available.
We also observe consistent improvements in accuracy for tabular datasets, even when the training dataset size is large.
Although domain knowledge about data augmentations is important for achieving state-of-the-art results,
our demonstration shows the potential of {{$i$-Mix}} for a wide range of application domains where such knowledge is particularly limited.
\subsection{Transferability of \texorpdfstring{{$i$-Mix}}{{i-Mix}}}
\label{sec:transfer}
In this section, we show the improved transferability of the representations learned with {{$i$-Mix}}.
The results are provided in Table~\ref{tb:transfer}.
First,
we train linear classifiers on downstream datasets different from the pretext dataset used to train the backbone networks and evaluate their performance,
e.g., CIFAR-10 as the pretext dataset and CIFAR-100 as the downstream dataset, or vice versa.
We observe consistent performance gains when learned representations from one dataset are evaluated on classification tasks of another dataset.
Next,
we transfer representations trained on ImageNet to the PASCAL VOC object detection task~\citep{everingham2010pascal}.
We follow the settings in prior works~\citep{he2020momentum,chen2020improved}:
the parameters of the pre-trained ResNet-50 are transferred to a Faster R-CNN detector with the ResNet50-C4 backbone~\citep{ren2015faster}, and fine-tuned end-to-end on the VOC 07+12 trainval dataset and evaluated on the VOC 07 test dataset.
We report the average precision (AP) averaged over IoU thresholds between 50\% and 95\% in steps of 5\%, as well as
AP$_{50}$ and AP$_{75}$, the AP values at IoU thresholds of 50\% and 75\%, respectively.
Similar to Table~\ref{tb:large}, we observe small but consistent performance gains in all metrics.
These results confirm that {{$i$-Mix}} improves the quality of the learned representations, which in turn improves performance on downstream tasks.
\section{Conclusion} \label{sec:conclusion}
We propose {{$i$-Mix}}, a domain-agnostic regularization strategy applicable to a class of self-supervised learning methods.
The key idea of {{$i$-Mix}} is to introduce a virtual label to each data instance, and mix both inputs and the corresponding virtual labels.
We show that {{$i$-Mix}} is applicable to state-of-the-art self-supervised representation learning methods,
including SimCLR, MoCo, and BYOL,
and that it consistently improves their performance in a variety of settings and domains.
Our experimental results indicate that {{$i$-Mix}} is particularly effective when the training dataset size is small or data augmentation is not available, both of which are prevalent in practice.
\section{\@ifstar{\origsection*}{\@startsection{section}{1}\z@{.7\linespacing\@plus\linespacing}{.5\linespacing}{\normalfont\scshape\centering\S}}}
\def\@startsection{section}{1}\z@{.7\linespacing\@plus\linespacing}{.5\linespacing}{\normalfont\scshape\centering\S}{\@startsection{section}{1}\z@{.7\linespacing\@plus\linespacing}{.5\linespacing}{\normalfont\scshape\centering\S}}
\makeatother
\usepackage{amsmath,amssymb,amsthm}
\usepackage{amsfonts}
\usepackage{mathrsfs}
\usepackage{mathabx}\changenotsign
\usepackage{dsfont}
\RequirePackage{bm}
\usepackage{xcolor}
\usepackage[backref]{hyperref}
\hypersetup{
colorlinks,
linkcolor={red!60!black},
citecolor={green!60!black},
urlcolor={blue!60!black}
}
\usepackage{cleveref}
\usepackage{graphicx}
\usepackage{verbatim}
\usepackage[open,openlevel=2,atend]{bookmark}
\usepackage[abbrev,msc-links,backrefs]{amsrefs}
\usepackage{doi}
\renewcommand{\doitext}{DOI\,}
\renewcommand{\PrintDOI}[1]{\doi{#1}}
\renewcommand{\eprint}[1]{\href{http://arxiv.org/abs/#1}{arXiv:#1}}
\usepackage[T1]{fontenc}
\usepackage{lmodern}
\usepackage[babel]{microtype}
\usepackage[english]{babel}
\linespread{1.3}
\usepackage{geometry}
\geometry{left=27.5mm,right=27.5mm, top=25mm, bottom=25mm}
\numberwithin{equation}{section}
\numberwithin{figure}{section}
\usepackage[inline]{enumitem}
\def\upshape({\itshape \roman*\,}){\upshape({\itshape \roman*\,})}
\def\upshape(\Roman*){\upshape(\Roman*)}
\def\upshape({\itshape \alph*\,}){\upshape({\itshape \alph*\,})}
\def\upshape({\itshape \Alph*\,}){\upshape({\itshape \Alph*\,})}
\def\upshape({\itshape \arabic*\,}){\upshape({\itshape \arabic*\,})}
\let\polishlcross=\ifmmode\ell\else\polishlcross\fi
\def\ifmmode\ell\else\polishlcross\fi{\ifmmode\ell\else\polishlcross\fi}
\def\ \text{and}\ {\ \text{and}\ }
\def\quad\text{and}\quad{\quad\text{and}\quad}
\def\qquad\text{and}\qquad{\qquad\text{and}\qquad}
\def\paragraph#1{%
\noindent\textbf{#1.}\enspace}
\let\emptyset=\varnothing
\let\setminus=\smallsetminus
\let\backslash=\smallsetminus
\let\sm=\setminus
\makeatletter
\def\mathpalette\mov@rlay{\mathpalette\mov@rlay}
\def\mov@rlay#1#2{\leavevmode\vtop{ \baselineskip\z@skip \lineskiplimit-\maxdimen
\ialign{\hfil$\m@th#1##$\hfil\cr#2\crcr}}}
\newcommand{\charfusion}[3][\mathord]{
#1{\ifx#1\mathop\vphantom{#2}\fi
\mathpalette\mov@rlay{#2\cr#3}
}
\ifx#1\mathop\expandafter\displaylimits\fi}
\makeatother
\newcommand{\charfusion[\mathbin]{\cup}{\cdot}}{\charfusion[\mathbin]{\cup}{\cdot}}
\newcommand{\charfusion[\mathop]{\bigcup}{\cdot}}{\charfusion[\mathop]{\bigcup}{\cdot}}
\DeclareFontFamily{U} {MnSymbolC}{}
\DeclareSymbolFont{MnSyC} {U} {MnSymbolC}{m}{n}
\DeclareFontShape{U}{MnSymbolC}{m}{n}{
<-6> MnSymbolC5
<6-7> MnSymbolC6
<7-8> MnSymbolC7
<8-9> MnSymbolC8
<9-10> MnSymbolC9
<10-12> MnSymbolC10
<12-> MnSymbolC12}{}
\DeclareMathSymbol{\powerset}{\mathord}{MnSyC}{180}
\theoremstyle{plain}
\newtheorem{thm}{Theorem}[section]
\newtheorem{theorem}[thm]{Theorem}
\newtheorem{lemma}[thm]{Lemma}
\newtheorem{corollary}[thm]{Corollary}
\newtheorem{proposition}[thm]{Proposition}
\newtheorem{problem}[thm]{Problem}
\newtheorem{conjecture}[thm]{Conjecture}
\newtheorem{question}[thm]{Question}
\newtheorem{thm-intro}{Theorem}[]
\newtheorem*{claim*}{Claim}
\newtheorem{claim}{Claim}[]
\theoremstyle{definition}
\newtheorem{definition}[thm]{Definition}
\newtheorem{remark}[thm]{Remark}
\newtheorem{remarks}[thm]{Remarks}
\newtheorem{observation}[thm]{Observation}
\newtheorem{situation}[thm]{Situation}
\newtheorem{example}[thm]{Example}
\usepackage{accents}
\newcommand{\seq}[1]{\accentset{\rightharpoonup}{#1}}
\usepackage[symbol]{footmisc}
\renewcommand*{\thefootnote}{\fnsymbol{footnote}}
\newcommand{\ensuremath{{\upharpoonright}}}{\ensuremath{{\upharpoonright}}}
\newcommand{\ensuremath{{\downharpoonright}}}{\ensuremath{{\downharpoonright}}}
\newcommand{\family}[2]{\ensuremath{{\big( {#1}\ \big|\ {#2} \big)}}}
\newcommand{\no}[1]{}
\newcommand{\abs}[1]{\ensuremath{{\lvert {#1} \rvert}}}
\newcommand{\arxiv}[1]{\href{http://arxiv.org/abs/#1}{\texttt{arXiv:#1}}}
\newcommand*{\Bidirected}[1]{\overset{\text{\tiny$\bm\leftrightarrow$}}{#1}}
\newcommand{$\mathsf{GB}$}{$\mathsf{GB}$}
\newcommand*{\Cut}[2]{\Fkt{\partial_{#1}}{#2}}
\newcommand*{\Fkt}[2]{#1\!\left( #2 \right)}
\newcommand*{\BigO}[1]{\mathcal{O}(#1)}
\newcommand*{\Set}[1]{\left\lbrace #1 \right\rbrace}
\author[Heuer, Steiner, Wiederrecht]{Karl Heuer \and Raphael Steiner \and Sebastian Wiederrecht}
\address[Heuer, Wiederrecht]{Technische Universit\"{a}t Berlin,
Institute of Software Engineering and Theoretical Computer Science,
Ernst-Reuter-Platz 7, 10587 Berlin, Germany}
\address[Steiner]{Technische Universit\"{a}t Berlin, Institute of Mathematics, Straße des 17. Juni 136, 10623 Berlin, Germany}
\email{\tt [email protected]}
\email{\tt [email protected]}
\email{\tt [email protected]}
\title[Even circuits in oriented matroids]{Even circuits in oriented matroids}
\subjclass[2010]{05B35, 05C20, 05C70, 05C75, 05C83, 05C85, 52C40}
\keywords{digraph, non-even, odd dijoin, oriented matroid, even directed circuit}
\begin{document}
\begin{abstract}
In this paper we generalise the even directed cycle problem, which asks whether a given digraph contains a directed cycle of even length, to orientations of regular matroids.
We define non-even oriented matroids generalising non-even digraphs, which played a central role in resolving the computational complexity of the even dicycle problem.
Then we show that the problem of detecting an even directed circuit in a regular matroid is polynomially equivalent to the recognition of non-even oriented matroids.
Our main result is a precise characterisation of the class of non-even oriented bond matroids in terms of forbidden minors, which complements an existing characterisation of non-even oriented graphic matroids by Seymour and Thomassen.
\end{abstract}
\maketitle
\section{Introduction}
\label{sec:intro}
Deciding whether a given digraph contains a directed cycle, briefly \textit{dicycle}, of even length is a fundamental problem for digraphs and often referred to as the \textit{even dicycle problem}.
The computational complexity of this problem was unknown for a long time and several polynomial time equivalent problems have been found~\cites{klee1984signsolvability, manber1986digraphs, thomassen1986sign, mccuaig2004polya}.
The question about the computational complexity was resolved by Robertson, Seymour and Thomas~\cite{robertson1999permanents} and independently by McCuaig~\cite{mccuaig2004polya} who stated polynomial time algorithms for one of the polynomially equivalent problems, and hence also for the even dicycle problem.
One of these polynomially equivalent problems makes use of the following definition.
\begin{definition}[\cite{seymour1987characterization}]
Let $D$ be a digraph.
We call $D$ \emph{non-even}, if there exists a set $J$ of directed edges in $D$ such that every directed cycle $C$ in $D$ intersects $J$ in an odd number of edges.
If such a set does not exist, we call $D$ \emph{even}.
\end{definition}
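For small digraphs, this definition can be checked directly by brute force; the following Python sketch, which assumes a simple \texttt{networkx} digraph without parallel edges, tests whether a candidate edge set $J$ certifies that $D$ is non-even.
\begin{verbatim}
import networkx as nx

def certifies_non_even(D, J):
    # Return True iff every directed cycle of the DiGraph D meets the edge
    # set J in an odd number of edges; exponential in general, so this is
    # intended for small examples only.
    J = set(J)
    for nodes in nx.simple_cycles(D):
        cycle_edges = {(nodes[i], nodes[(i + 1) % len(nodes)])
                       for i in range(len(nodes))}
        if len(cycle_edges & J) % 2 == 0:
            return False
    return True
\end{verbatim}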
Seymour and Thomassen proved that the decision problem whether a given digraph is non-even, is polynomially equivalent to the even dicycle problem.
\begin{theorem}[\cite{seymour1987characterization}]\label{seymthomequivalence}
The problem of deciding whether a given digraph contains an even directed cycle, and the problem of deciding whether a given digraph is non-even, are polynomially equivalent.
\end{theorem}
Furthermore, Seymour and Thomassen~\cite{seymour1987characterization} characterised being non-even in terms of forbidden subgraphs.
Their result can be stated more compactly by formulating it in terms of forbidden butterfly minors, which is a commonly used notion in directed graph structure theory~\cites{johnson2001directed, packingdicycles, digridthm}, instead of forbidden subgraphs.
Before we state their result, let us define the notion of butterfly minors and fix another notation.
Given a digraph $D$, an edge $e \in E(D)$ is called \emph{butterfly-contractible} if it is not a loop and if it is either the unique edge emanating from its tail or the unique edge entering its head.
A \emph{butterfly minor} (sometimes also called \emph{digraph minor} or just \emph{minor}) of a digraph $D$ is any digraph obtained from $D$ by a finite sequence of edge-deletions, vertex-deletions and contractions of butterfly-contractible edges.
Note that the main idea behind the concept of a butterfly-contractible edge $e$ within a digraph $D$ is that every directed cycle in $D/e$ either equals one in $D$ or induces one in $D$ by incorporating~$e$.
This property does not necessarily hold if arbitrary edges are contracted.
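As a small illustration, the following Python sketch, assuming a \texttt{networkx} \texttt{MultiDiGraph} $D$, tests whether a given edge is butterfly-contractible.
\begin{verbatim}
def is_butterfly_contractible(D, u, v):
    # The edge (u, v) of the digraph D is butterfly-contractible if it is
    # not a loop and is either the unique edge emanating from u or the
    # unique edge entering v (parallel edges count towards the degrees).
    if u == v or not D.has_edge(u, v):
        return False
    return D.out_degree(u) == 1 or D.in_degree(v) == 1
\end{verbatim}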
For every $k \ge 3 $ let $\Bidirected{C}_k$ denote the symmetrically oriented cycle of length $k$ (also called \emph{bicycle}), i.\@e.\@ the digraph obtained from $C_k$ by replacing every edge by a pair of anti-parallel directed edges.
Now we can state the result of Seymour and Thomassen as follows.
\begin{theorem}[\cite{seymour1987characterization}]\label{thm:seymthomtheorem}
A digraph $D$ is non-even if and only if no butterfly minor of $D$ is isomorphic to $\Bidirected{C}_k$ for some odd $k$.
\end{theorem}
The main purpose of this work is to lift the even dicycle problem to oriented matroids, and to extend \Cref{seymthomequivalence} and partially \Cref{thm:seymthomtheorem} to oriented matroids as well.
\subsection{The Even Directed Circuit Problem in Oriented Matroids}\label{subsec:evendirectedcircuitproblem}
${}$
\vspace{6pt} \newline
\indent In this paper we view a matroid as a tuple $M=(E,\mathcal{C})$ consisting of a finite \emph{ground set} $E(M):= E$ containing the \emph{elements} of $M$ and the family $\mathcal{C}$ of \emph{circuits} of $M$.
In what follows we introduce a generalisation of the graph theoretic notion of being non-even to oriented matroids and state the main results of this work.
For our purposes, the most important examples of matroids are \emph{graphical matroids} and \emph{bond matroids}.
Let $G=(V,E)$ be a graph.
The \emph{graphical matroid of $G$}, denoted by $M(G)$, is the matroid $(E,\mathcal{C})$ where the set $\mathcal{C}$ of circuits consists of all edge-sets of the cycles of $G$.
Analogously, the \emph{bond matroid of $G$} is $M^\ast(G)=(E,\mathcal{S})$ where $\mathcal{S}$ is the set of bonds (or minimal non-empty edge cuts) of $G$.
Note that $M(G)$ and $M^\ast(G)$ are the dual matroids of each another.
A matroid is called a \emph{graphic matroid}, resp.~a \emph{bond matroid} (also called \emph{cographic matroid}) if it is, respectively, isomorphic to the graphical or the bond matroid of some graph.
Digraphs can be seen as a special case of oriented matroids\footnote{For a formal and in-depth introduction of terms and notation used here please see \Cref{subsec:prelim}.} in the sense that every digraph $D$ has an associated oriented graphic matroid $M(D)$ whose signed circuits resemble the oriented cycles in the digraph $D$.
In this spirit, it is natural to lift questions concerning cycles in directed graphs to more general problems on circuits in oriented matroids.
The following algorithmic problem is the straightforward generalisation of the even dicycle problem to oriented matroids, and the main motivation of the paper at hand.
\begin{problem}\label{evencircuit1}
Given an oriented matroid $\vec{M}$, decide whether there exists a directed circuit of even size in $\vec{M}$.
\end{problem}
Our first contribution is to generalise the definition of non-even digraphs to oriented regular matroids in the following sense.
\begin{definition}
Let $\vec{M}$ be an oriented matroid.
We call $\vec{M}$ \emph{non-even} if its underlying matroid is regular and there exists a set $J \subseteq E(\vec{M})$ of elements such that every directed circuit in $\vec{M}$ intersects $J$ in an odd number of elements. If such a set does not exist, we call $\vec{M}$ \emph{even}.
\end{definition}
The reader might wonder why the preceding definition concerns only regular matroids.
There are several reasons for this.
The main reason is a classical result by Bland and Las Vergnas~\cite{orientability} which states that a binary matroid is orientable if and only if it is regular.
Hence, if we were to extend the analysis of non-even oriented matroids beyond the regular case, we would have to deal with orientations of matroids which are not representable over $\mathbb{F}_2$.
This has several disadvantages, most importantly that circuit bases, which constitute an important tool in all of our results, are no longer guaranteed to exist.
Furthermore, some of our proofs make use of the strong orthogonality property of oriented regular matroids\footnote{For a definition we refer to \Cref{subsec:prelim}}, which fails for non-binary oriented matroids.
Lastly, since \Cref{evencircuit1} is an algorithmic question, oriented regular matroids have the additional advantage that they allow for a compact encoding in terms of totally unimodular matrices, which is not a given for general oriented matroids.
The first result of this article is a generalisation of \Cref{seymthomequivalence} to oriented matroids as follows:
\begin{theorem}
\label{thm:oddjoinsandevencuts}
The problems of deciding whether an oriented regular matroid represented by a totally unimodular matrix contains an even directed circuit, and the problem of recognising whether an oriented regular matroid given by a totally unimodular matrix is non-even, are polynomially equivalent.
\end{theorem}
\Cref{thm:oddjoinsandevencuts} motivates a structural study of the class of non-even oriented matroids, as in many cases the design of a recognition algorithm for a class of objects is based on a good structural understanding of the class.
In order to state our main result, which is a generalisation of \Cref{thm:seymthomtheorem} to graphic and cographic oriented matroids, we have to introduce a new minor concept.
We naturally generalise the concept of butterfly minors to regular oriented matroids, in the form of so-called \emph{generalised butterfly minors}.
\begin{definition}
Let $\vec{M}$ be an orientation of a regular matroid $M$.
An element $e \in E(\vec{M})$ is called \emph{butterfly-contractible} if there exists a cocircuit $S$ in $M$ such that $(S\setminus\{e\},\{e\})$ forms a signed cocircuit of $\vec{M}$.\footnote{For a definition of a signed (co)circuits see~\Cref{subsec:prelim}.}
A \emph{generalised butterfly minor ($\mathsf{GB}$-minor for short)} of $\vec{M}$ is any oriented matroid obtained from $\vec{M}$ by a finite sequence of element deletions and contractions of butterfly-contractible elements (in arbitrary order).
\end{definition}
Note that the generalised butterfly-contraction captures the same fundamental idea as the initial one for digraphs while being more general:
Given a butterfly-contractible element $e$ of a regular oriented matroid~$\vec{M}$, we cannot have a directed circuit $C$ of $\vec{M} / e$ such that $(C, \{e\})$ is a signed circuit of $\vec{M}$ \footnote{In this case, $(C, \{e\})$ together with a signed cocircuit $(S\setminus\{e\},\{e\})$ would contradict the orthogonality property (see \Cref{subsec:prelim}, (\ref{equ:ortho})) for oriented matroids.}, and hence either $C$ or $C \cup \{e\}$ must form a directed circuit of $\vec{M}$.
Replacing the notion of butterfly minors by $\mathsf{GB}$-minors allows us to translate \Cref{thm:seymthomtheorem} to the setting of oriented matroids in the following way:
\begin{proposition}
\label{thm:forbiddengraphicminors}
An oriented graphic matroid $\vec{M}$ is non-even if and only if none of its $\mathsf{GB}$-minors is isomorphic to $M(\Bidirected{C}_k)$ for some odd $k \ge 3$.
\end{proposition}
As our main result, we complement \Cref{thm:forbiddengraphicminors} by determining the list of forbidden $\mathsf{GB}$-minors for cographic non-even oriented matroids.
We need the following notation:
For integers $m, n \ge 1$ we denote by $\vec{K}_{m,n}$ the digraph obtained from the complete bipartite graph $K_{m,n}$ by orienting all edges from the partition set of size $m$ towards the partition set of size $n$.
\begin{theorem}
\label{thm:forbiddencographicminors}
An oriented bond matroid $\vec{M}$ is non-even if and only if none of its $\mathsf{GB}$-minors is isomorphic to $M^\ast(\vec{K}_{m,n})$ for any $m, n \ge 2$ such that $m+n$ is odd.
\end{theorem}
To prove \Cref{thm:forbiddencographicminors} we study those digraphs whose oriented bond matroids are non-even.
Equivalently, these are the digraphs admitting an \emph{odd dijoin}, which is an edge set hitting every directed bond an odd number of times.
After translating $\mathsf{GB}$-minors into a corresponding minor concept on directed graphs, which we call \emph{cut minors}\footnote{See the beginning of \Cref{sec:dijoin} for a precise definition.}, we show that the class of digraphs with an odd dijoin is described by two infinite families of forbidden cut minors (\Cref{thm:forbiddenminors}).
Finally, we translate this result to oriented bond matroids in order to obtain a proof of \Cref{thm:forbiddencographicminors}.
The structure of this paper is as follows.
In \Cref{subsec:prelim} we introduce the needed notation and basic facts about digraphs, matroids and oriented matroids for this paper.
Furthermore, we prove that non-even oriented matroids are closed under $\mathsf{GB}$-minors (\Cref{prop:GBminorcontainment}), which is then used to prove \Cref{thm:forbiddengraphicminors} in the same section.
We start \Cref{sec:even-circuit complexity} by showing that the even directed circuit problem for general oriented matroids cannot be solved using only polynomially many calls to a signed circuit oracle~(\Cref{prop:oracleisexponential}).
The remainder of the section is devoted to the proof of \Cref{thm:oddjoinsandevencuts}. We also note that \emph{odd} directed circuits can be detected in polynomial time in orientations of regular matroids~(\Cref{prop:oddcycleproblem}).
In \Cref{sec:dijoin} we characterise those digraphs that admit an odd dijoin (\Cref{thm:forbiddenminors}) and use this to deduce our main result, \Cref{thm:forbiddencographicminors}.
\section{Background}\label{subsec:prelim}
This section is dedicated to a formal introduction of basic terms and notation used throughout this paper.
However, we assume basic familiarity with digraphs and matroid theory.
For basic notation and facts about digraphs we refer the reader to \cite{bang-jensen}.
For missing terminology and basic facts from matroid theory not mentioned or mentioned without proof in the following, please consult the standard reading \cites{oxley, welsh}.
For two sets $X, Y$ we denote by $X+Y:= (X \cup Y)\setminus (X \cap Y)$ their \emph{symmetric difference}. For $n \in \mathbb{N}$ we denote $[n]:=\{1,2,\ldots,n\}$.
\subsection*{(Di)graphs}
Graphs considered in this paper are multi-graphs and may include loops.
Digraphs may have loops and multiple (parallel and anti-parallel) directed edges (sometimes called \emph{edges}).
Given a digraph $D$, we denote by $V(D)$ its vertex set and by $E(D)$ the set of directed edges.
A directed edge with tail $u \in V(D)$ and head $v \in V(D)$ is denoted by $(u,v)$ if this does not lead to confusion with potential parallel edges.
By $U(D)$ we denote the \emph{underlying multi-graph} of $D$, which is the undirected multi-graph obtained from $D$ by forgetting the orientations of the edges.
Given a digraph $D$ and a partition $(X,Y)$ of its vertex set, the set $D[X,Y]$ of edges with one endpoint in $X$ and one endpoint in $Y$, if it is non-empty, is referred to as a \emph{cut}.
A cut of $D$ is called \emph{minimal} or a \emph{bond}, if there is no other cut of $D$ properly contained in it.
If there is no edge of $D$ with head in $X$ and tail in $Y$, the cut $D[X,Y]$ is called \emph{directed} and denoted by $\Cut{}{X}$ (the set of edges leaving $X$).
A \emph{dijoin} in a digraph is a set of edges intersecting every directed cut (resp.~every directed bond).
\subsection*{Matroids}
Matroids can be used to represent several algebraic and combinatorial structures of dependencies.
The so-called \emph{linear or representable} matroids are induced by vector configurations in linear spaces.
Let $V=\mathbb{F}^n$ be a vector space over a field $\mathbb{F}$ and let $X=\{x_1,\ldots,x_k\} \subseteq V$ for some $k \in \mathbb{N}$.
Let $A$ be the $n\times k$-matrix over $\mathbb{F}$ whose columns are $x_1,\ldots,x_k$.
Then we define the \emph{column matroid induced by $A$} as $M[A]:= (\{x_1,\ldots,x_k\},\mathcal{C}_A)$, where its set of circuits $\mathcal{C}_A$ consists of the inclusion-wise minimal collections of linearly dependent vectors from $\{x_1,\ldots,x_k\}$.
It is a well-known fact that $M[A]$ is indeed a matroid for any choice of a matrix $A$.
A matroid $M$ is called \emph{$\mathbb{F}$-linear} or \emph{representable over the field $\mathbb{F}$} if there is a matrix $A$ with entries in $\mathbb{F}$ such that $M \simeq M[A]$.
Graphic matroids and bond matroids, as introduced in \Cref{subsec:evendirectedcircuitproblem}, form part of a larger class, the so-called \emph{regular matroids}.
A matroid $M$ is called regular if it is $\mathbb{F}$-linear for every field $\mathbb{F}$.
The following equivalent characterisation of regular matroids is useful for encoding purposes.
A matrix with entries in $\mathbb{R}$ is called \emph{totally unimodular} if every square submatrix has determinant $-1, 0$ or $1$.
\begin{theorem}[\cite{tutteregularchar}]\label{tutteregularity}
Let $M$ be a matroid. Then $M$ is regular if and only if $M \simeq M[A]$ for a totally unimodular real-valued matrix $A$. Furthermore, for any field $\mathbb{F}$, reinterpreting the $\{-1,0,1\}$-entries of $A$ as elements of $\mathbb{F}$, we obtain an $\mathbb{F}$-linear representation of $M$.
\end{theorem}
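For small matrices, total unimodularity can be checked directly from the definition; the following brute-force Python sketch enumerates all square submatrices and is meant only as a sanity check on small examples.
\begin{verbatim}
import itertools
import numpy as np

def is_totally_unimodular(A, tol=1e-9):
    # Every square submatrix of A must have determinant -1, 0 or 1.
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in itertools.combinations(range(m), k):
            for cols in itertools.combinations(range(n), k):
                d = np.linalg.det(A[np.ix_(rows, cols)])
                if round(d) not in (-1, 0, 1) or abs(d - round(d)) > tol:
                    return False
    return True
\end{verbatim}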
Every graphic and every bond matroid is regular, but not vice-versa.
Regular matroids are in turn generalised by the \emph{binary matroids}, which are the $\mathbb{F}_2$-linear matroids.
We conclude this paragraph with the important notion of \emph{matroid minors}, which generalises the concept of minors in graph theory.
Given a matroid $M$ and an element $e \in E(M)$, we denote by $M-e$ and $M/e$ the matroids obtained from $M$ by \emph{deleting} and \emph{contracting}~$e$.
These operations are consistent with deletions and contractions in graph theory in the following sense:
If $G$ is a graph and $e \in E(G)$, let us denote by $G/e$ the graph obtained by contracting the edge $e$ and by $G-e$ the graph obtained by deleting $e$.
Then it holds that ${M(G/e) \simeq M(G)/e}$, $M(G-e) \simeq M(G)-e, M^\ast(G-e)=M^\ast(G)/e$, and finally $M^\ast(G/e) \simeq M^\ast(G)-e$.
\subsection*{Oriented Matroids}
For missing terminology and basic facts from the theory of oriented matroids not mentioned or mentioned without proof in the following, please consult the standard reading~\cite{bibel}.
An \emph{oriented matroid} $\vec{M}$ is a tuple $(E,\mathcal{C})$ consisting of a ground set $E$ of elements and a collection $\mathcal{C}$ of \emph{signed subsets} of $E$, i.\@e.\@ ordered partitions $(C^+,C^-)$ of subsets $C$ of $E$ into \emph{positive} and \emph{negative} parts such that the following axioms are satisfied:
\begin{itemize}
\item $(\emptyset,\emptyset) \notin \mathcal{C}$
\item If $(C^+,C^-) \in \mathcal{C}$, then $(C^-,C^+) \in \mathcal{C}$.
\item If $(C_1^+,C_1^-), (C_2^+,C_2^-) \in \mathcal{C}$ such that $C_1^+ \cup C_1^- \subseteq C_2^+ \cup C_2^-$, then one of the equations ${(C_1^+,C_1^-) = (C_2^+,C_2^-)}$ or ${(C_1^+,C_1^-) = (C_2^-,C_2^+)}$ holds.
\item Let $(C_1^+,C_1^-), (C_2^+,C_2^-) \in \mathcal{C}$ such that $(C_1^+,C_1^-) \neq (C_2^-,C_2^+)$, and let ${e \in C_1^+ \cap C_2^-}$.
Then there is some $(C^+,C^-) \in \mathcal{C}$ such that $C^+ \subseteq (C_1^+ \cup C_2^+) \setminus \{e\}$ and ${C^- \subseteq (C_1^- \cup C_2^-) \setminus \{e\}}$.
\end{itemize}
In case these axioms are satisfied, the elements of $\mathcal{C}$ are called \emph{signed circuits}.
Two oriented matroids $\vec{M}_1=(E_1,\mathcal{C}_1)$ and $\vec{M}_2=(E_2,\mathcal{C}_2)$ are called \emph{isomorphic} if there exists a bijection $\sigma:E_1 \rightarrow E_2$ such that $\{(\sigma(C^+),\sigma(C^-)) \; | \; (C^+,C^-) \in \mathcal{C}_1\}=\mathcal{C}_2$.
For every oriented matroid $\vec{M}=(E,\mathcal{C})$ and a signed circuit $X=(C^+,C^-) \in \mathcal{C}$, we denote by $\underline{X}:= C^+ \cup C^-$ the so-called \emph{support} of $X$.
From the axioms for signed circuits it follows that the set family $\underline{\mathcal{C}}:= \{\underline{X} \; | \; X \in \mathcal{C}\}$ over the ground set $E$ defines a matroid $M=(E,\underline{\mathcal{C}})$, which we refer to as the \emph{underlying matroid} of $\vec{M}$, and vice versa, $\vec{M}$ is called an \emph{orientation} of $M$.
A matroid is called \emph{orientable} if it admits at least one orientation.
A signed circuit $(C^+,C^-)$ is called \emph{directed} if either $C^+=\emptyset$ or $C^-=\emptyset$.
We use this definition also for the circuits of the underlying matroid $M$, i.\@e.\@, a circuit of $M$ is \emph{directed} in $\vec{M}$ if $(C,\emptyset)$ (or equivalently $(\emptyset,C)$) is a directed signed circuit of $\vec{M}$.
We say that $\vec{M}$ is \emph{totally cyclic} if every element of $M$ is contained in a directed circuit, and \emph{acyclic} if there exists no directed circuit.
Classical examples of oriented matroids can be derived from vector configurations in real-valued vector spaces and, most importantly for the investigations in this paper, from directed graphs.
Given a configuration $\{x_1,x_2,\ldots,x_k\} \subseteq \mathbb{R}^n$ of vectors for some $k \in \mathbb{N}$, consider the matroid $M[A]$ with $A=(x_1,x_2,\ldots,x_k) \in \mathbb{R}^{n \times k}$.
Given a circuit $C=\{x_{i_1},x_{i_2},\ldots,x_{i_\ell}\} \in \mathcal{C}_A$, there are scalars $\alpha_1,\ldots,\alpha_\ell \in \mathbb{R}$, not all zero, such that $\sum_{j=1}^{\ell}{\alpha_j x_{i_j}}=0$, and these coefficients are determined up to multiplication with a common scalar.
It is therefore natural to assign two signed sets to the circuit as follows: $X(C):= (C^+,C^-)$ and $-X(C):= (C^-,C^+)$, where $C^+:= \{x_{i_j} \; | \; \alpha_{j}>0\}$ and $C^-:= \{x_{i_j} \; | \; \alpha_{j}<0\}$.
The \emph{oriented matroid induced by $A$} is then defined as $\vec{M}[A]=(\{x_1,\ldots,x_k\},\{X(C),-X(C) \; | \; C \in \mathcal{C}_A\})$.
Given a digraph $D$ we can, as in the undirected case, associate with it two different kinds of oriented matroids with ground set $E(D)$.
Unsurprisingly, their underlying matroids are exactly the graphical respectively the bond matroid of $U(D)$.
\begin{definition}
Let $D$ be a digraph.
\begin{itemize}
\item For every cycle $C$ in $D$, let $(C^+,C^-),(C^-,C^+)$ be the two tuples describing a partition of $E(C)$ into sets of forward and backward edges, according to some choice of cyclical traversal of $C$. Then $\{(C^+,C^-),(C^-,C^+) \; | \; C \text{ cycle in }D\}$ forms the set of signed circuits of an orientation $M(D)$ of $M(U(D))$, called the \emph{oriented graphic matroid induced by $D$}.
\item For every bond $S=D[X,Y]$ in $D$, let $S^+$ be the set of edges in $S$ with tail in $X$ and head in $Y$, and let $S^-$ contain those edges on $S$ with tail in $Y$ and head in $X$. Then $\{(S^+,S^-), (S^-, S^+) \; | \; S\text{ is a bond in }D\}$ forms the set of signed circuits of an orientation $M^\ast(D)$ of $M^\ast(U(D))$, called the \emph{oriented bond matroid induced by $D$}.
\end{itemize}
\end{definition}
Note that the directed circuits of an oriented graphic matroid are exactly the edge-sets of the directed cycles of the corresponding digraph $D$.
Similarly, the directed circuits in an oriented bond-matroid are the edge-sets of the directed bonds in the corresponding digraph.
Given an oriented matroid $\vec{M}=(E,\mathcal{C})$ and an element $e \in E(\vec{M})$, we denote by $\vec{M}-e$ and $\vec{M}/e$ the matroids obtained from $\vec{M}$ by \emph{deleting} and \emph{contracting} $e$, respectively.
The signed circuits of these matroids are defined as follows:
\begin{align*}
\mathcal{C}(\vec{M}-e)&:= \{(C^+,C^-) \in \mathcal{C} \; | \; e \notin C^+ \cup C^-\},\\
\mathcal{C}(\vec{M}/e)&:= \{(C^+\setminus \{e\},C^-\setminus \{e\}) \; | \; (C^+,C^-) \in \mathcal{C}, \not\exists X \in \mathcal{C}: \underline{X}-e \subsetneq (C^+ \cup C^-)-e\}\setminus \{(\emptyset,\emptyset)\}.
\end{align*}
These definitions generalise to subsets $Z \subseteq E(\vec{M})$: here we denote by $\vec{M}-Z$ resp.~$\vec{M}/Z$ the oriented matroids obtained from $\vec{M}$ by successively deleting (resp.~contracting) all elements of $Z$ (in arbitrary order\footnote{It is well-known that the order in which elements are deleted resp.~contracted does not affect the outcome of the process.}). We also use the notation $\vec{M}[Z]=\vec{M}-(E(\vec{M})\setminus Z)$.
Again, in the case of graphic and cographic oriented matroids, the deletion and contraction operations resemble the same operations in directed graphs:
Given a digraph $D$ and $e \in E(D)$, denote by $D/e$ the digraph obtained by deleting $e$ and identifying the endpoints of $e$.
We then have $M(D)-e \simeq M(D-e)$ and $M(D)/e \simeq M(D/e)$, as well as $M^\ast(D)-e \simeq M^\ast(D/e)$ and $M^\ast(D)/e \simeq M^\ast(D-e)$.
For an oriented matroid $\vec{M}$ with a collection $\mathcal{C}$ of signed circuits, let $\mathcal{S}$ be defined as the set of signed vectors $(S^+,S^-)$ satisfying the following \emph{orthogonality property} for every signed circuit $C=(C^+,C^-) \in \mathcal{C}$:
\begin{equation*}\label{equ:ortho}
(S^+ \cap C^+) \cup (S^- \cap C^-) \neq \emptyset \Longleftrightarrow (S^+ \cap C^-) \cup (S^- \cap C^+) \neq \emptyset. \tag{$\ast$}
\end{equation*}
Then $\mathcal{S}$ is called the set of \emph{signed cocircuits} of $\vec{M}$.
The supports of the signed cocircuits form exactly the cocircuits of the underlying matroid $M$.
A signed cocircuit $(S^+,S^-)$ is called \emph{directed} if $S^+=\emptyset$ or $S^-=\emptyset$.
If the underlying matroid $M$ of $\vec{M}$ is regular, then the following stronger orthogonality property holds for every signed circuit $(C^+,C^-) \in \mathcal{C}$ and every signed cocircuit $(S^+,S^-) \in \mathcal{S}$:
\begin{equation*}\label{equ:strongortho}
|C^+ \cap S^+|+|C^- \cap S^-| = |C^+ \cap S^-| + |C^- \cap S^+|. \tag{$\ast \ast$}
\end{equation*}
For any digraph $D$ the signed cocircuits of $M(D)$ are the same as the signed circuits of $M^\ast(D)$, while the signed cocircuits of $M^\ast(D)$ are exactly the signed circuits of $M(D)$.
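Both properties are mechanical to verify for concrete signed sets; the following Python sketch checks (\ref{equ:ortho}) and (\ref{equ:strongortho}) for a signed circuit and a signed cocircuit, each encoded (in an ad-hoc convention of our own) as a pair of element sets.
\begin{verbatim}
def orthogonal(circuit, cocircuit):
    # Property (*): the two signed sets agree in sign on some common
    # element if and only if they disagree on some common element.
    Cp, Cm = map(set, circuit)
    Sp, Sm = map(set, cocircuit)
    agree = (Sp & Cp) | (Sm & Cm)
    disagree = (Sp & Cm) | (Sm & Cp)
    return bool(agree) == bool(disagree)

def strongly_orthogonal(circuit, cocircuit):
    # Property (**), valid for orientations of regular matroids.
    Cp, Cm = map(set, circuit)
    Sp, Sm = map(set, cocircuit)
    return len(Cp & Sp) + len(Cm & Sm) == len(Cp & Sm) + len(Cm & Sp)
\end{verbatim}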
We conclude this first part of the preliminary section by stating a couple of important facts concerning orientations of (regular) matroids from the literature.
\begin{theorem}[\cite{bibel}]\label{regularityOM}
Let $\vec{M}$ be an orientation of a regular matroid $M$. Then there exists a totally unimodular matrix $A$ such that $\vec{M} \simeq \vec{M}[A]$ and $M \simeq M[A]$.
\end{theorem}
We will also need the following matroidal version of the famous Farkas' Lemma:
\begin{theorem}[\cite{bibel}]\label{lemma:farkas}
Let $\vec{M}$ be an oriented matroid and $e \in E(M)$. Then $e$ is contained in a directed circuit of $\vec{M}$ if and only if it is not contained in a directed cocircuit.
\end{theorem}
\subsection{Non-Evenness and $\mathsf{GB}$-minors}\hfill\\
\indent Our main result, \Cref{thm:forbiddencographicminors}, builds on the important fact that the non-even oriented matroids are closed under the $\mathsf{GB}$-minor relation.
In this subsection we present a proof of this fact and use it to derive \Cref{thm:forbiddengraphicminors} from \Cref{thm:seymthomtheorem}.
\begin{lemma}\label{prop:GBminorcontainment}
Every $\mathsf{GB}$-minor of a non-even oriented matroid is non-even.
\end{lemma}
\begin{proof}
It suffices to show the following two statements:
For every non-even oriented matroid $\vec{M}$ and every element $e \in E(\vec{M})$, the oriented matroid $\vec{M}-e$ is non-even as well, and for every element $e \in E(\vec{M})$ which is butterfly-contractible, the oriented matroid $\vec{M}/e$ is non-even as well.
The claim then follows by repeatedly applying these two statements.
For the first claim, note that since the underlying matroid $M$ of $\vec{M}$ is regular, so is the underlying matroid of $\vec{M}-e$.
Let $J \subseteq E(\vec{M})$ be a set of elements intersecting every directed circuit in $\vec{M}$ an odd number of times.
Then clearly the set $J \setminus \{e\}$ intersects every directed circuit in $\vec{M}-e$ an odd number of times, proving that $\vec{M}-e$ is non-even.
For the second claim, let $e \in E(\vec{M})$ be butterfly-contractible.
Let $S$ be a cocircuit of $M$ such that $(S\setminus \{e\},\{e\})$ forms a signed cocircuit of $\vec{M}$.
Then the underlying matroid of $\vec{M}/e$ is a matroid minor of the regular matroid $M$ and is hence regular.
Define $J' \subseteq E(\vec{M}) \setminus \{ e \}$ via
\begin{align*}
J' :=
\begin{cases}
J & \mbox{if } e \notin J \\
J+S & \mbox{if } e \in J.
\end{cases}
\end{align*}
We claim that for every directed circuit $C$ in $\vec{M}/e$, the intersection $C \cap J'$ is odd.
Indeed, by definition either $C$ is a directed circuit also in $\vec{M}$ not containing $e$, or $C \cup \{e\}$ is a directed circuit in $\vec{M}$, or $(C,\{e\})$ is a signed circuit of $\vec{M}$.
The last case however is impossible, as then the signed circuit $(C,\{e\})$ and the signed cocircuit $(S\setminus \{e\},\{e\})$ in $\vec{M}$ would yield a contradiction to the orthogonality (\ref{equ:ortho}) of oriented matroids.
In the first case, since $e \notin C$, we must have $S \cap C=\emptyset$ as otherwise again $C$ and the signed cocircuit $(S\setminus \{e\},\{e\})$ form a contradiction to the orthogonality property (\ref{equ:ortho}).
This then shows that indeed $|C \cap J'|=|C \cap (J' \setminus S)|=|C \cap (J \setminus S)|=|C \cap J|$ is odd, as required.
In the second case, the orthogonality property (\ref{equ:strongortho}) of regular oriented matroids applied to the directed circuit $C \cup \{e\}$ and the signed cocircuit $(\{e\},S\setminus \{e\})$ within $\vec{M}$ yields that the equation ${|(C \cup \{ e \}) \cap (S \setminus \{e\})| = |(C \cup \{ e \}) \cap \{e\}|=1}$ holds.
So let $C \cap S = \{f\}$ for some element $f \in E(\vec{M}) \setminus \{ e \}$.
By definition of $J'$, if $e \notin J$, then we have ${|C \cap J'|=|C \cap J|=|(C \cup \{e\}) \cap J|}$, which is odd.
If $e \in J$, then we have (modulo $2$)
\begin{align*}
|C \cap J'|=|C \cap (J+S)|=|(C \cap J)+(C \cap S)| \equiv |C \cap J|+|\{f\}| = |(C \cup \{e\}) \cap J|,
\end{align*}
which is odd.
Hence, we have shown that $|C \cap J'|$ is odd in every case, which yields that $\vec{M}/e$ is a non-even oriented matroid.
This concludes the proof.
\end{proof}
\Cref{prop:GBminorcontainment} allows us to immediately prove the correctness of \Cref{thm:forbiddengraphicminors}.
\begin{proof}[Proof of \Cref{thm:forbiddengraphicminors}]
We prove both directions of the equivalence.
Suppose first that $\vec{M}$ is non-even.
Then by \Cref{prop:GBminorcontainment} every oriented matroid isomorphic to a $\mathsf{GB}$-minor of $\vec{M}$ is non-even as well.
Hence it suffices to observe that none of the matroids $M(\Bidirected{C}_k)$ for odd $k \ge 3$ is non-even.
However, this follows directly since any element set $J$ in $M(\Bidirected{C}_k)$ intersecting every directed circuit an odd number of times corresponds to an edge set in $\Bidirected{C}_k$ intersecting every directed cycle an odd number of times, which cannot exist since by \Cref{thm:seymthomtheorem} none of the digraphs $\Bidirected{C}_k$ is non-even for an odd $k \ge 3$.
Vice versa, suppose that no $\mathsf{GB}$-minor of $\vec{M}$ is isomorphic to $M(\Bidirected{C}_k)$ for any odd $k \ge 3$.
Let $D$ be a digraph such that $\vec{M} \simeq M(D)$.
We claim that $D$ must be non-even. Suppose not, then by \Cref{thm:seymthomtheorem} $D$ admits a butterfly minor isomorphic to $\Bidirected{C}_k$ for some odd $k \ge 3$.
We now claim that $M(D)$ has a $\mathsf{GB}$-minor isomorphic to $M(\Bidirected{C}_k)$.
For this, it evidently suffices to verify the following general statement:
If an edge $e$ of a digraph $F$ is butterfly-contractible in $F$, then within $M(F)$ the corresponding element $e$ of $M(F)$ is butterfly-contractible.
Indeed, let $e=(u,v)$ for distinct vertices $u, v \in V(D)$.
Then by definition either $u$ has out-degree $1$ or $v$ has in-degree $1$ in $D$.
In the first case, $e$ is the unique edge leaving $u$ in the cut $D[\{u\},V(D)\setminus \{u\}]$, while in the second case $e$ is the only edge entering $v$ in the cut $D[V(D)\setminus \{v\},\{v\}]$.
Since every cut is an edge-disjoint union of bonds, we can find in both cases a bond containing $e$ where $e$ is the only edge directed away resp.~towards the side of the bond that contains $u$ resp.~$v$.
Since the oriented bonds in $D$ yield the signed cocircuits of $M(D)$, this shows that there is a cocircuit $S$ in $M(D)$ such that $(S\setminus \{e\},\{e\})$ is a signed cocircuit.
Hence, $e$ is a butterfly-contractible element of $M(D)$.
This shows that $M(\Bidirected{C}_k)$ is isomorphic to a $\mathsf{GB}$-minor of $M(D) \simeq \vec{M}$ which contradicts our initial assumption that no $\mathsf{GB}$-minor of $\vec{M}$ is isomorphic to $M(\Bidirected{C}_k)$.
Hence, $D$ is non-even, and there exists $J \subseteq E(D)$ such that every directed cycle in $D$ contains an odd number of edges from $J$.
The same set $J$ also certifies that $\vec{M} \simeq M(D)$ is non-even, and this concludes the proof of the equivalence.
\end{proof}
\section{On the Complexity of the Even Directed Circuit Problem}
\label{sec:even-circuit complexity}
The formulation of \Cref{evencircuit1} is rather vague, as it is not clear by which means the oriented matroid $\vec{M}$ is given as an input to an algorithm designed for solving the problem, and in which way we will measure its efficiency.
For the latter, it is natural to aim for an algorithm which performs a polynomial number of elementary steps in terms of the number of elements of $\vec{M}$.
This also resembles the even dicycle problem in digraphs, where we aim to find an algorithm running in polynomial time in $|E(D)|$.
For the former, it is not immediately clear how to encode the (oriented) matroid, and hence how to make information contained in the (oriented) matroid available to the algorithm.
For instance, the list of all circuits of a matroid, if given as input to an algorithm, will usually have exponential size in the number of elements, and therefore disqualifies as a good reference value for the efficiency of the algorithm.
For that reason, different computational models (and efficiency measures) for algorithmic problems in matroids (see \cite{oraclesmatroids}) and oriented matroids (see \cite{oraclesOMs}) have been proposed in the literature.
These models are based on the concept of \emph{oracles}.
For a family $\mathcal{F} \subseteq 2^{E(M)}$ of objects characterising the matroid $M$, an \emph{oracle} is a function $f:2^{E(M)} \rightarrow \{\texttt{true}, \texttt{false}\}$ assigning to every subset a truth value indicating whether or not the set is contained in $\mathcal{F}$.
If $\mathcal{F}$ for instance corresponds to the collection of circuits, cocircuits, independent sets, or bases of a matroid, we speak of a \emph{circuit-, cocircuit-, independence-}, or \emph{basis-oracle}.
Similarly, for oriented matroids we can define several oracles~\cite{oraclesOMs}.
Maybe the most natural choice for an oriented matroid-oracle for \Cref{evencircuit1} is the \emph{circuit oracle}, which given any subset of the element set together with a $\{+, -\}$-signing of its elements, reveals whether or not this signed subset forms a signed circuit of the oriented matroid.
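In code, such an oracle is nothing more than a membership test for signed subsets; the following toy Python sketch (with an encoding of our own choosing) illustrates this access model.
\begin{verbatim}
def make_circuit_oracle(signed_circuits):
    # signed_circuits: a collection of signed subsets, each encoded as a
    # pair (frozenset of positive elements, frozenset of negative elements).
    # The returned function answers one membership query at a time, which
    # is the only access to the oriented matroid this model grants.
    signed_circuits = set(signed_circuits)
    def oracle(positive, negative):
        return (frozenset(positive), frozenset(negative)) in signed_circuits
    return oracle
\end{verbatim}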
This computational model applied to \Cref{evencircuit1} yields the following question.
\begin{question}\label{circuitoracle}
Does there exist an algorithm which, given an oriented matroid $\vec{M}$, decides whether there exists a directed circuit in $\vec{M}$ of even size, by calling the circuit-oracle of $\vec{M}$ only $\BigO{|E(\vec{M})|^c}$ times for some $c\in\mathbb{N}$?
\end{question}
However, as it turns out, the answer to the above problem is easily seen to be negative, even when the input oriented matroid $\vec{M}$ is graphic.
\begin{proposition}\label{prop:oracleisexponential}
Any algorithm deciding whether a given oriented graphic matroid on $n$ elements, for some $n \in \mathbb{N}$, contains an even directed circuit must use at least $2^{n-1}-1$ calls to the circuit-oracle for some instances.
\end{proposition}
\begin{proof}
Suppose towards a contradiction there was an algorithm which decides whether a given oriented graphic matroid contains an even directed circuit and uses at most $2^{n-1}-2$ oracle calls for any input oriented graphic matroid on the element set $E := \{1,\ldots,n\}$.
Now, playing the role of the oracle, we will answer all of the (at most $2^{n-1}-2$) calls of the algorithm by \texttt{false}.
Since there are exactly $2^{n-1}-1$ non-empty sets $Y \in 2^E$ of even size, there must be a non-empty subset $Y$ of $E$ of even size such that the algorithm did not call the oracle with any input signed set whose support is $Y$.
But this means the algorithm cannot distinguish between the oriented graphic matroids $(E,\mathcal{C}_0)$ and $(E,\mathcal{C}_Y)$, where $\mathcal{C}_0:= \emptyset$ and $\mathcal{C}_Y:= \{(Y,\emptyset),(\emptyset, Y)\}$, which result in the same oracle-answers to the calls by the algorithm, while $(E,\mathcal{C}_0)$ contains no even directed circuit, but $(E,\mathcal{C}_Y)$ does.
This shows that the algorithm does not work correctly, and this contradiction proves the assertion.
\end{proof}
The above result and its proof indicate that, in general, measuring the efficiency of algorithms solving \Cref{evencircuit1} by the number of calls to oriented matroid-oracles is doomed to fail.
One should therefore look for a different encoding of the input oriented matroids in order to obtain a sensible algorithmic problem.
In this paper, we solve this issue by restricting the class of possible input oriented matroids to \emph{oriented regular matroids}, which allow for a much simpler and compact encoding via their representation by totally unimodular matrices (cf.\@ \Cref{tutteregularity,regularityOM}).
The following finally is the actual algorithmic problem we are going to discuss in this paper.
\begin{problem}\label{therealproblem}
Is there an algorithm which decides, given as input a totally unimodular matrix $A \in \mathbb{R}^{m \times n}$ for some $m, n \in \mathbb{N}$, whether $\vec{M}[A]$ contains an even directed circuit, and runs in time polynomial in $mn$?
\end{problem}
The alert reader might wonder what happens if in the above problem we aim to detect odd instead of even directed circuits. The reason this problem is not a focus of our paper is that it admits a simple polynomial-time solution, which is given in the form of \Cref{prop:oddcycleproblem} at the end of this section.
The next statement translates the main results from~\cite{robertson1999permanents} and~\cite{mccuaig2004polya} to our setting to show that \Cref{therealproblem} has a positive answer if we restrict to graphic oriented matroids as inputs.
\begin{lemma}
There exists an algorithm which, given as input any totally unimodular matrix $A \in \mathbb{R}^{m \times n}$ for some $m, n \in \mathbb{N}$ such that $\vec{M}[A]$ is a graphic oriented matroid, decides whether $\vec{M}[A]$ contains a directed circuit of even size, and which runs in time polynomial in $mn$.
\end{lemma}
\begin{proof}
The main results of Robertson et al.\@~\cite{robertson1999permanents} and McCuaig~\cite{mccuaig2004polya} yield polynomial time algorithms which, given as input a digraph $D$ (by its vertex- and edge-list) returns whether or not $D$ contains an even directed cycle.
Therefore, given a totally unimodular matrix $A \in \mathbb{R}^{m \times n}$ such that $\vec{M}[A]$ is graphic, if we can construct in polynomial time in $mn$ a digraph $D$ such that $\vec{M}[A] \simeq M(D)$, then we can decide whether $\vec{M}[A]$ contains a directed circuit of even size by testing whether $D$ contains an even directed cycle using the algorithms from~\cites{mccuaig2004polya, robertson1999permanents}.
Such a digraph can be found as follows:
First, we consider the \emph{unoriented} matroid $M[A]$ defined by the matrix $A$, which is graphic.
It follows from a result of Seymour~\cite{recoggraphmatroids} that using a polynomial number in $|E(M[A])|=n$ of calls to an independence-oracle for $M[A]$, we can compute a connected graph $G$ with $n$ edges such that $M(G) \simeq M[A]$.
For every given subset of columns of $A$ we can test linear independence in polynomial time in $mn$, and hence we can execute the steps of Seymour's algorithm in polynomial time.
Since $M(G) \simeq M[A]$, there must exist an orientation of $M(G)$ isomorphic to $\vec{M}[A]$, and this orientation in turn can be realised as $M(D)$ where $D$ is an orientation of $G$\footnote{The fact that every orientation of $M(G)$ can be realised as $M(D)$ for an orientation $D$ of $G$ follows from a classical result by Bland and Las Vergnas~\cite{orientability}, who show that regular matroids (and particularly graphic ones) have a unique reorientation class.}.
To find the desired orientation $D$ of $G$ in polynomial time, we first compute a decomposition of $G$ into its blocks $G_1,\ldots,G_k$ (maximal connected subgraphs without cutvertices).
Next we (arbitrarily) select for every $i \in \{1,\ldots,k\}$ a special `reference'-edge $e_i \in E(G_i)$.
Note that two different orientations of $G$ obtained from each other by reversing all edges in one block result in the same oriented matroid, as cycles in $G$ are always entirely contained in one block.
Hence for every $i \in \{1,\ldots,k\}$ we can orient $e_i$ arbitrarily and assume w.\@l.\@o.\@g.\@ that this orientation coincides with the orientation in $D$.
Note that every block of $G$ which is not $2$-connected must be a $K_2$ forming a bridge in $G$.
In this case, the only edge of the block is our chosen reference-edge and already correctly oriented.
Now, for every $i \in \{1,\ldots,k\}$ such that $G_i$ is $2$-connected and every edge $e\in E(G_i)\setminus \{e_i\}$ there is a cycle $C$ in $G_i$ containing both $e_i$ and $e$.
This cycle can be computed in polynomial time using a disjoint-paths algorithm between the endpoints of $e$ and $e_i$.
Now we consider the minimally linearly dependent set of columns in $A$ corresponding to $C$, and compute the coefficients of a non-trivial linear combination resulting in $0$.
As we already know the orientation of $e_i \in E(C)$, this yields us the orientations of all edges on the cycle $C$ in $D$ and hence of the edge $e$.
In this way, we can compute all orientations of edges in $D$ in polynomial time in $mn$ and find the digraph $D$ such that $\vec{M}[A] \simeq M(D)$.
As discussed above, this concludes the proof.
\end{proof}
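To illustrate the orientation-recovery step in the above proof, the following Python sketch determines the signs of the columns along a single circuit from the (one-dimensional) kernel of the corresponding column submatrix, given the known sign of the reference edge; it covers only this step, not the full algorithm, and the interface is of our own choosing.
\begin{verbatim}
import numpy as np

def orient_cycle(A, cycle_cols, ref_col, ref_sign):
    # The columns indexed by cycle_cols are minimally linearly dependent,
    # so their kernel is one-dimensional; the signs of a kernel vector
    # encode the forward/backward orientation of the edges on the cycle.
    sub = np.asarray(A, dtype=float)[:, cycle_cols]
    _, _, vt = np.linalg.svd(sub)
    coeffs = vt[-1]                     # spans the kernel (up to scaling)
    i = list(cycle_cols).index(ref_col)
    coeffs = coeffs * ref_sign * np.sign(coeffs[i])  # match the reference
    return {c: int(np.sign(coeffs[j])) for j, c in enumerate(cycle_cols)}
\end{verbatim}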
\subsection{Proof of \Cref{thm:oddjoinsandevencuts}}\label{sec:equivalence}\hfill\\
\indent We prepare the proof by a set of useful definitions and lemmas dealing with circuit bases of regular matroids.
\begin{definition}
Let $M$ be a binary matroid.
The \emph{circuit space} of $M$ is the $\mathbb{F}_2$-linear vector space generated by the incidence vectors $\mathbf{1}_C \in \mathbb{F}_2^{E(M)}$ defined by $\mathbf{1}_C(e):= 1$ for $e \in C$ and $\mathbf{1}_C(e):= 0$ for $e \notin C$ and all circuits $C$ of $M$.
A \emph{circuit basis} of $M$ is a set of circuits of $M$ whose incidence vectors form a basis of the circuit space.
Equivalently, we can consider the circuit space as a $\mathbb{F}_2$-linear subspace of the vector space whose elements are all the subsets of $E$ and where the sum $X+Y$ of two sets $X, Y \subseteq E(M)$ is defined as their symmetric difference.
\end{definition}
\begin{definition}
Let $\vec{M}$ be a regular oriented matroid and $M$ be its underlying regular matroid.
We call a circuit basis $\mathcal{B}$ of $M$ \emph{directed} if all elements of $\mathcal{B}$ are directed circuits of~$\vec{M}$.
\end{definition}
The next proposition is a well-known fact about the circuit space of a binary matroid.
\begin{proposition}\label{proposition:basesize}
Let $M$ be a binary matroid. Then the dimension of the circuit space of $M$ equals $|E(M)|-r(M)$.
\end{proposition}
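For instance, if $M = M(G)$ is the graphic matroid of a connected graph $G$, then the circuit space of $M$ is the usual cycle space of $G$, and since $r(M(G)) = |V(G)|-1$, \Cref{proposition:basesize} recovers its familiar dimension $|E(G)|-|V(G)|+1$.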
The following lemma is crucial for the proof of \Cref{thm:oddjoinsandevencuts} as well as for our work on digraphs in \Cref{sec:dijoin}.
\begin{lemma}\label{lemma:circuitspace}
Let $\vec{M}$ be an oriented regular matroid.
If $\vec{M}$ is totally cyclic, then the underlying matroid $M$ admits a directed circuit basis.
Furthermore, for every coindependent set $A$ in $M$ such that $\vec{M}-A$ is totally cyclic, there exists a directed circuit basis of $M$ such that every $a \in A$ is contained in exactly one circuit of the basis.
\end{lemma}
\begin{proof}
We start by proving the first assertion concerning the existence of a directed circuit basis of $M$.
We use induction on $|E(M)|$.
If $M$ consists of a single element, the claim holds trivially, since every circuit is a loop and thus directed.
So assume now that $|E(M)|=k \ge 2$ and that the statement of the lemma holds for all oriented regular matroids on at most $k-1$ elements. Choose some $e \in E(M)$ arbitrarily.
Since $\vec{M}$ is totally cyclic, there exists a directed circuit $C_e$ containing $e$.
Let us now consider the oriented regular matroid $\vec{M}-e$.
If $\vec{M}-e$ is totally cyclic, then we can apply the induction hypothesis to $\vec{M}-e$ and find a directed circuit basis $\mathcal{B}^-$ of $M-e$.
Now consider the collection $\mathcal{B} = \mathcal{B}^- \cup \{ C_e \}$ of directed circuits in $\vec{M}$.
The incidence vectors of these circuits are linearly independent over $\mathbb{F}_2$, as $C_e$ is the only circuit yielding a non-zero entry at element $e$.
Furthermore, we get by induction that $ |\mathcal{B}| = |E(M)|-1-r(M-e)+1=|E(M)|-r(M-e)=|E(M)|-r(M)$.
The last equality holds since $e$ is contained in the circuit $C_e$ and hence is not a coloop.
As this matches the dimension of the circuit space of $M$, we have found a directed circuit basis of $M$, proving the inductive claim.
It remains to prove the case where $\vec{M}-e$ is not totally cyclic, i.\@e.\@, there is an element not contained in a directed circuit.
By Farkas' Lemma (\Cref{lemma:farkas}) applied to $\vec{M}-e$ and this element there exists a directed cocircuit $S$ in $\vec{M}-e$. Then either $(S, \emptyset), (S \cup \{e\},\emptyset)$ or $(S, \{e\})$ form a signed cocircuit of $\vec{M}$.
Since $\vec{M}$ is totally cyclic, it contains no directed cocircuits, and hence only the latter case is possible, i.\@e.\@, $(S,\{e\})$ must form a signed cocircuit.
Let us now consider the oriented regular matroid $\vec{M}/e$.
Since $\vec{M}$ is totally cyclic, so is $\vec{M}/e$.
By the induction hypothesis there exists a directed circuit basis $\mathcal{B}^-$ of $M/e$.
By definition, for every directed circuit $C \in \mathcal{B}^-$, either $C$ is a directed circuit in $\vec{M}$ not containing $e$, or $C \cup \{e\}$ is a directed circuit in $\vec{M}$, or $(C,\{e\})$ forms a signed circuit of $\vec{M}$.
The latter is however impossible, as in this case we can consider the signed cocircuit $X=(S,\{e\})$ and the signed circuit $Y=(C,\{e\})$ of $\vec{M}$, which satisfy $e \in X^- \cap Y^- \neq \emptyset$ but furthermore ${(X^+ \cap Y^-) \cup (X^- \cap Y^+)=\emptyset}$, violating the orthogonality property (\ref{equ:ortho}) of oriented matroids.
Hence, the set $\mathcal{B}:= \{C \; | \; C \in \mathcal{B}^-\text{ circuit in }M\}\cup \{C \cup \{e\} \; | \; C \in \mathcal{B}^-, C \cup \{e\}\text{ circuit in }M\}$ consists of $|\mathcal{B}|=|\mathcal{B}^-|=|E(M)|-1-r(M/e)=|E(M)|-r(M)$ many circuits of $M$, all of which are directed circuits of $\vec{M}$.
Note that for the last equality we used that $e$ is not a loop, as it is contained in the cocircuit $S \cup \{e\}$ of $M$.
Finally, we claim that the binary incidence vectors of the elements of $\mathcal{B}$ in $\mathbb{F}_2^{E(M)}$ are linearly independent.
This follows since the restrictions of these vectors to the coordinates in $E(M) \setminus \{ e \}$ are precisely the incidence vectors of the elements of $\mathcal{B}^-$, which form a circuit basis of $M/e$.
This shows that we have found a directed circuit basis of $M$, proving the inductive claim.
For the second assertion, let a coindependent set $A$ in $M$ be given and suppose that $\vec{M}-A$ is totally cyclic.
We claim that for every $a \in A$ there exists a directed circuit $C_a$ in $\vec{M}$ such that $C_a \cap A=\{a\}$.
Equivalently, we may show that the oriented matroid $\vec{M}-(A\setminus \{a\})$ has a directed circuit containing $a$.
Towards a contradiction, suppose this is not the case. Then by Farkas' Lemma (\Cref{lemma:farkas}) there exists a directed cocircuit $S$ in $\vec{M}-(A\setminus \{a\})$ containing $a$. Since $A$ is coindependent, $a$ is not a coloop of $M$ and hence $S \setminus \{a\} \neq \emptyset$.
Every directed circuit in $\vec{M}-(A\setminus \{a\})$ must be disjoint from $S$, and hence no $f \in S\setminus \{a\}$ is contained in a directed circuit of $\vec{M}-A$, contradicting our assumption that $\vec{M}-A$ is totally cyclic.
It follows that for each $a \in A$ a directed circuit $C_a$ with $C_a \cap A=\{a\}$ exists.
Next we apply the first assertion of this lemma to the totally cyclic oriented matroid $\vec{M}-A$.
We find that there exists a directed circuit basis $\mathcal{B}_A$ of $M-A$.
We claim that $\mathcal{B}:= \mathcal{B}_A \cup \{C_a \; | \; a \in A\}$ forms a directed circuit basis of $M$ satisfying the properties claimed in this lemma.
Indeed, every circuit in $\mathcal{B}$ is a directed circuit of $\vec{M}$, and for every $a \in A$ the circuit $C_a$ is the only circuit in $\mathcal{B}$ containing $a$.
Since the characteristic vectors of the elements of $\mathcal{B}_A$ are linearly independent as $\mathcal{B}_A$ is a circuit basis of $M-A$, we already get that the characteristic vectors of elements of $\mathcal{B}$ are linearly independent using that the characteristic vector of $C_a$ is the only basis-vector having a non-zero entry at the position corresponding to element~$a$.
To show that $\mathcal{B}$ indeed is a circuit basis of $M$, it remains to verify that it has the required size.
We have $|\mathcal{B}|=|A|+|\mathcal{B}_A|=|A|+|E(M-A)|-r(M-A)=|E(M)|-r(M)$, where for the latter equality we used that $r(M-A)=r(M)$ since $A$ is coindependent.
This concludes the proof of the second assertion.
\end{proof}
In order to prove our next lemma, we need the following result, which was already used by Seymour and Thomassen.
\begin{lemma}[\cite{seymour1987characterization}, Prop.~3.2] \label{lemma:oddandeven}
Let $E$ be a finite set and $\mathcal{F}$ a family of subsets of $E$. Then precisely one of the following statements holds:
\begin{enumerate}[label=(\roman*)]
\item There is a subset $J \subseteq E$ such that $|F \cap J|$ is odd for every $F \in \mathcal{F}$.
\item There are sets $F_1,\ldots,F_k \in \mathcal{F}$, where $k \in \mathbb{N}$ is odd, such that $\sum_{i=1}^{k}{F_i}=\emptyset$.
\end{enumerate}
\end{lemma}
Please note that (i) and (ii) cannot hold simultaneously: if $k$ is odd and $F_1,\dotsc,F_k$ all have odd intersection with a set $J$ as in (i), then the symmetric difference $\sum_{i=1}^k F_i$ also has odd intersection with $J$ and hence cannot be empty.
We now derive the following corollary for totally cyclic oriented regular matroids by using \Cref{lemma:circuitspace} and applying \Cref{lemma:oddandeven} to a directed circuit basis.
\begin{corollary}\label{cor:matroidweighting}
Let $\vec{M}$ be a totally cyclic oriented regular matroid, and let $\mathcal{B}$ be a directed circuit basis of $M$.
Then there exists $J \subseteq E(\vec{M})$ such that $|C \cap J|$ is odd for every $C \in \mathcal{B}$.
\end{corollary}
\begin{proof}
The claim is that $(i)$ in \Cref{lemma:oddandeven} with $E=E(\vec{M})$ and $\mathcal{F}:=\mathcal{B}$ holds true, so it suffices to rule out $(ii)$.
However, the latter would contradict the linear independence of the basis $\mathcal{B}$.
\end{proof}
Building on this corollary we derive equivalent properties for an oriented matroid to be non-even.
\begin{proposition} \label{proposition:matroidequivalence}
Let $\vec{M}$ be a totally cyclic oriented regular matroid and let $\mathcal B$ be a directed circuit basis of $M$.
Furthermore, let $J \subseteq E(M)$ be such that $|C \cap J|$ is odd for all $C \in \mathcal B$.
Then the following statements are equivalent:
\begin{enumerate}[label=(\roman*)]
\item $\vec{M}$ is non-even.
\item If $C_1,\ldots,C_k$ are directed circuits of $\vec{M}$ where $k \in \mathbb{N}$ is odd, then $\sum_{i=1}^k C_i \neq \emptyset$.
\item Every directed circuit of $\vec{M}$ is a sum of an odd number of elements of $\mathcal{B}$.
\item $|C \cap J|$ is odd for all directed circuits $C$ of $\vec{M}$.
\end{enumerate}
\end{proposition}
\begin{proof}
\noindent
\begin{description}
\item[``(i) $\Rightarrow$ (ii)''] This follows from \Cref{lemma:oddandeven} applied to the set of all directed circuits of $\vec{M}$.
\item[``(ii) $\Rightarrow$ (iii)''] Let $C$ be a directed circuit of $\vec{M}$.
Since $\mathcal B$ is a circuit basis of $M$, we can write $C = \sum_{i=1}^k C_i$ for some $k \in \mathbb{N}$ and $C_1,\ldots,C_k \in \mathcal B$.
If $k$ were even, then the sum $C + \sum_{i=1}^k C_i = \emptyset$ would yield a contradiction to (ii).
\item[``(iii) $\Rightarrow$ (iv)''] Let $C$ be a directed circuit of $\vec{M}$.
By assumption, $C = \sum_{i=1}^k C_i$ with $C_1,\ldots,C_k \in \mathcal B$ and $k \in \mathbb{N}$ being odd.
Since $J$ has odd intersection with all $C_i$, the set~$J$ has also odd intersection with $C$.
\item[``(iv) $\Rightarrow$ (i)''] This implication follows directly from the definition of non-even.
\end{description}
\end{proof}
Before we turn towards the proof of \Cref{thm:oddjoinsandevencuts} we need the following result, yielding a computational version of Farkas' lemma for oriented regular matroids. Although we suspect the statement is well-known among experts, we include a proof for the sake of completeness.
\begin{lemma}\label{lemma:farkas-compute}
There exists an algorithm that, given as input a regular oriented matroid $\vec{M}$ represented by a totally unimodular matrix $A \in \{-1,0,1\}^{m \times n}$ such that $\vec{M} \simeq \vec{M}[A]$, and an element $e \in E(\vec{M})$, outputs either a directed circuit of $\vec{M}$ containing $e$ or a directed cocircuit of $\vec{M}$ containing $e$, and which runs in polynomial time in $mn$.
\end{lemma}
\begin{proof}
We first observe that we can decide in polynomial time in $mn$ whether $e$ is contained in a directed circuit or in a directed cocircuit of $\vec{M}$ (by Farkas' Lemma, we know that exactly one of these two options must be satisfied).
Let us denote for every element $f \in E(\vec{M})$ by $x_f \in \{-1,0,1\}^{m}$ the corresponding column-vector of $A$.
We need the following claim:
The element $e$ is contained in a directed circuit of $\vec{M}$ if and only if there exist non-negative scalars $\alpha_f \ge 0$ for $f \in E(\vec{M}) \setminus \{e\}$ such that $-x_e=\sum_{f \in E(\vec{M}) \setminus \{e\}}{\alpha_f x_f}$.
The necessity of this condition follows directly by definition of $\vec{M}[A]$: If $e$ is contained in a directed circuit with elements $e,f_1,\ldots,f_k$, then there are coefficients $\beta_e>0$ and $\beta_i>0$ for $1 \le i \le k$ such that $\beta_e x_e+\sum_{i=1}^{k}{\beta_i x_{f_i}}=0$, i.\@e.\@, $-x_e=\sum_{i=1}^{k}{\frac{\beta_i}{\beta_e}x_{f_i}}$.
On the other hand, if $-x_e$ is contained in the conical hull of $\{x_f|f \in E(\vec{M})\setminus\{e\}\}$, then we can select an inclusion-wise minimal subset $F \subseteq E(\vec{M}) \setminus \{e\}$ such that $-x_e$ is contained in the conical hull of $\{x_f|f \in F\}$.
We claim that $\{e\} \cup F$ forms a directed circuit of $\vec{M}$.
By definition of $F$, it suffices to verify that the vectors $x_e$ and $x_f$ for $f \in F$ are minimally linearly dependent.
However, this follows directly by Carath\'{e}odory's Theorem: The dimension of the subspace spanned by $\{x_f|f \in F\}$ equals $|F|$, for otherwise we could select a subset of at most $|F|-1$ elements from $\{x_f|f \in F\}$ whose conical hull also contains $-x_e$, contradicting the minimality of $F$.
This shows the equivalence claimed above.
We can now use the well-known linear programming algorithm for linear programs with integral constraints by Khachiyan~\cites{khachiyan, gacslovasz} to decide in strongly polynomial time\footnote{Here we use the fact that all coefficients appearing in the linear system are $-1$,$0$ or $1$.} (and hence in polynomial time in $mn$) the feasibility of the linear inequality system
\begin{align*}
\sum_{f \in E(\vec{M}) \setminus \{e\}}{\alpha_f x_f}=-x_e, \text{ with } \alpha_f \ge 0.
\end{align*}
Therefore, we have shown that we can decide in polynomial time in $mn$ whether or not $e$ is contained in a directed circuit of $\vec{M}$.
Next we give an algorithm which, given that $e$ is contained in a directed circuit of $\vec{M}$, finds such a circuit in polynomial time:
During the procedure, we update a subset $Z \subseteq E(\vec{M})$, which maintains the property that it contains a directed circuit including $e$.
At the end of the procedure $Z$ will form such a directed circuit of $\vec{M}$. We initialise $Z:= E(\vec{M})$.
During each step of the procedure, we go through the elements $f \in Z \setminus \{e\}$ one by one and apply the above algorithm to test whether $\vec{M}[Z]-f$ contains a directed circuit including $e$.
At the first moment such an element is found, we put $Z:= Z\setminus \{f\}$ and repeat.
If no such element is found, we stop and output $Z$.
Since we reduce the size of the set $Z$ at each round of the procedure, the above algorithm runs in at most $n$ rounds and calls the above decision algorithm for the existence of a directed circuit including $e$ at most $n-1$ times in every round.
All in all, the algorithm runs in time polynomial in $mn$.
It is obvious that the procedure maintains the property that $Z$ contains a directed circuit including $e$ and that at the end of the procedure all elements of $Z$ must be contained in this circuit, i.\@e.\@, $Z$ forms a directed circuit with the desired properties.
To complete the proof we now give an algorithm which finds either a directed circuit or a directed cocircuit through a given element $e$ of $\vec{M}$ as follows:
First we apply the first (decision) algorithm, which either tells us that $e$ is contained in a directed circuit of $\vec{M}$, in which case we apply the second (detection) algorithm to find such a circuit.
Otherwise we know that $e$ is contained in a directed cocircuit of $\vec{M}$, in which case we compute in polynomial time a totally unimodular representing matrix $A^\ast$ with at most $n$ rows and $n$ columns\footnote{To find such a representing matrix, one can use Gaussian elimination to compute a basis $\mathcal{B}$ of $\text{ker}(A)$.
Since $A$ is totally unimodular, the vectors in $\mathcal{B}$ can be taken to be $\{-1,0,1\}$-vectors such that the matrix $A^\ast$ consisting of the elements of $\mathcal{B}$ written as row-vectors is totally unimodular as well.
It then follows from the orthogonality property of regular oriented matroids that $A^\ast$ indeed forms a representation of $\vec{M}^\ast$, using the fact that the row spaces of $A$ and $A^\ast$ are orthogonal complements.} of the dual regular oriented matroid $\vec{M}^\ast$.
As we know that $e$ is included in a directed circuit of $\vec{M}^\ast$, we can apply the second (detection) algorithm to $A^\ast$ and $\vec{M}^\ast$ instead of $A$ and $\vec{M}$ in order to find a directed cocircuit in $\vec{M}$ containing $e$ in polynomial time.
\end{proof}
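The two subroutines used in the proof above, the feasibility test for membership of $e$ in a directed circuit and the element-by-element shrinking that extracts such a circuit, can be sketched as follows (Python with \texttt{scipy.optimize.linprog}; the function names and the use of an off-the-shelf LP solver instead of Khachiyan's method are our illustrative choices, not part of the statement).
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def on_directed_circuit(A, e):
    # Is -A[:, e] a non-negative combination of the remaining columns?
    # (Feasibility LP with zero objective; see the claim in the proof.)
    others = [f for f in range(A.shape[1]) if f != e]
    res = linprog(c=np.zeros(len(others)),
                  A_eq=A[:, others], b_eq=-A[:, e],
                  bounds=[(0, None)] * len(others),
                  method="highs")
    return res.status == 0            # 0 means a feasible solution was found

def directed_circuit_through(A, e):
    # Assumes on_directed_circuit(A, e) holds; repeatedly delete elements
    # while e stays on a directed circuit, as in the proof.
    Z = list(range(A.shape[1]))
    shrinking = True
    while shrinking:
        shrinking = False
        for f in Z:
            if f == e:
                continue
            Z_minus = [g for g in Z if g != f]
            if on_directed_circuit(A[:, Z_minus], Z_minus.index(e)):
                Z, shrinking = Z_minus, True
                break
    return Z                          # the surviving columns form the circuit
\end{verbatim}
For the cocircuit branch, the same two routines are applied to a representing matrix of the dual oriented matroid, exactly as in the last paragraph of the proof.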
Given a regular oriented matroid $\vec{M}$ we shall denote by $TC(\vec{M})$ the largest totally cyclic deletion minor of $\vec{M}$, i.\@e.\@ the deletion minor of $\vec{M}$ whose ground set is
\begin{align*}
E(TC(\vec{M})) := \bigcup\{C \; | \; C \textnormal{ is a directed circuit of } \vec{M} \}.
\end{align*}
From~\Cref{lemma:farkas-compute} we directly have the following.
\begin{corollary}\label{cor:totallycyclic}
Let $\vec{M}$ be a regular oriented matroid represented by a totally unimodular matrix $A \in \{-1,0,1\}^{m \times n}$ for some $m \in \mathbb{N}$ and $n = |E(M)|$.
Then we can compute a totally unimodular representative matrix of $TC(\vec{M})$ in time polynomial in $mn$. \qed
\end{corollary}
The last ingredient we shall need for the proof of \Cref{thm:oddjoinsandevencuts} is a computational version of the first statement of \Cref{lemma:circuitspace} combined with \Cref{cor:matroidweighting}.
\begin{lemma}\label{lemma:compute_base_and_cover}
Let $\vec{M}$ be a totally cyclic regular oriented matroid represented by a totally unimodular matrix $A \in \{-1,0,1\}^{m \times n}$ for some $m \in \mathbb{N}$ and $n = |E(M)|$.
Then we can compute a directed circuit basis $\mathcal{B}$ of $\vec{M}$ together with a set $J \subseteq E(\vec{M})$ such that $|J \cap B| \equiv 1 (\text{mod }2)$ for every $B \in \mathcal{B}$ in time polynomial in $mn$.
\end{lemma}
\begin{proof}
We shall follow the inductive proof of \Cref{lemma:circuitspace} to obtain a recursive algorithm for finding a desired directed circuit basis together with the desired set $J$.
If $n = 1$, the unique element $e$ of $E(\vec{M})$ is a directed loop, since $\vec{M}$ is totally cyclic, and forms our desired directed circuit basis of $\vec{M}$.
Furthermore, by setting $J := \{ e \}$ we also get our desired set.
In the case $n \ge 2$, let us fix an arbitrary element $e$ of $E(\vec{M})$ and compute a directed circuit $C_e$ of $\vec{M}$ containing $e$ by applying \Cref{lemma:farkas-compute}.
Also using \Cref{lemma:farkas-compute}, we can test in time polynomial in $mn$ whether $\vec{M} - e$ is totally cyclic.
If so, we fix $C_e$ as an element of our desired directed circuit basis $\mathcal{B}$ of $\vec{M}$ and proceed as before with $\vec{M}-e$ instead of $\vec{M}$.
The set $J$ is updated as follows:
Suppose we have already computed a directed circuit basis $\mathcal{B}^-$ and a set $J^-$ as in the statement of this lemma, but with respect to $\vec{M} - e$.
Then we set $\mathcal{B} := \mathcal{B}^- \cup \{ C_e \}$.
Now we check the parity of $|J^- \cap C_e|$ and set
\begin{align*}
J :=
\begin{cases}
J^- & \mbox{if } |J^- \cap C_e| \equiv 1 \; (\text{mod }2) \\
J^- \cup \{ e \} & \mbox{if } |J^- \cap C_e| \equiv 0 \; (\text{mod }2).
\end{cases}
\end{align*}
As $C_e$ is the only element of $\mathcal{B}$ that contains $e$, the set $J$ has odd intersection with every element of $\mathcal{B}$, as desired.
If $\vec{M} - e$ is not totally cyclic, we compute a totally unimodular representative matrix ${A' \in \{-1,0,1\}^{(m-1) \times (n-1)}}$ of $\vec{M}/e$.
This task can be executed in time polynomial in $mn$\footnote{To compute $A'$, select a non-zero entry in the column of $A$ belonging to the element $e$. Pivoting on this entry and exchanging rows transforms $A$ in polynomial time in $mn$ into a totally unimodular representing matrix $A'' \in \{-1,0,1\}^{m \times n}$ of $\vec{M}$ in which the column corresponding to the element $e$ of $\vec{M}$ is $(1,0,\ldots,0)^\top$. Then $\vec{M}[A]=\vec{M}[A'']$, and the matrix $A'$ obtained from $A''$ by deleting the first row and the column corresponding to $e$ is a totally unimodular representation of $\vec{M}/e$.}.
Now $\vec{M}/e$ is totally cyclic as $\vec{M}$ is totally cyclic and we proceed as before with $\vec{M}/e$ instead of~$\vec{M}$.
However, when our recursive algorithm already yields a directed circuit basis $\mathcal{B}^-$ of $\vec{M}/e$ as well as a set $J^-$ for $\vec{M}/e$ as in the statement of this lemma, we know as argued in the proof of \Cref{lemma:circuitspace} that each element $C$ of $\mathcal{B}^-$ either is a directed circuit of $\vec{M}$ or $C \cup \{ e \}$ is a directed circuit of~$\vec{M}$.
Depending on this distinction we define our desired circuit basis $\mathcal{B}$ of $\vec{M}$ as in the proof of \Cref{lemma:circuitspace} via
\begin{align*}
\mathcal{B}:= \{C \; | \; C \in \mathcal{B}^-\text{ circuit in }M\}\cup \{C \cup \{e\} \; | \; C \in \mathcal{B}^-, C \cup \{e\}\text{ circuit in }M\}.
\end{align*}
To decide for each element $C \in \mathcal{B}^-$ whether $C$ or $C \cup \{e\}$ is a directed circuit of $\vec{M}$ we calculate $A \mathbf{1}_C$ where $\mathbf{1}_C$ denotes the incidence vector of $C$ with respect to $A$.
Then $C$ forms a directed circuit of $\vec{M}$ if and only if $A \mathbf{1}_C = 0$.
As $|\mathcal{B}^-| = |\mathcal{B}| = |E(\vec{M})| - r(\vec{M})$ as argued in the proof of \Cref{lemma:circuitspace} and by \Cref{proposition:basesize}, we have to do at most $n$ of these computations to compute $\mathcal{B}$ from $\mathcal{B}^-$.
Regarding the set $J$ we can simply set $J := J^-$.
\end{proof}
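The bookkeeping in the deletion branch of this recursion is particularly simple; the following fragment (Python, with circuits and $J$ represented as sets of element labels; an illustrative sketch, not a prescribed implementation) records how $\mathcal{B}$ and $J$ are extended once the directed circuit $C_e$ through $e$ has been found. In the contraction branch, $J := J^-$ is kept and each basis circuit $C \in \mathcal{B}^-$ is lifted to $C$ or $C \cup \{e\}$ by testing whether $A\mathbf{1}_C = 0$, as described above.
\begin{verbatim}
def extend_basis_and_J(basis_minus, J_minus, C_e, e):
    # One deletion step: basis_minus and J_minus are already computed for
    # M - e, and C_e is a directed circuit of M containing e.
    basis = basis_minus + [set(C_e)]
    if len(set(J_minus) & set(C_e)) % 2 == 1:
        J = set(J_minus)              # parity with C_e is already odd
    else:
        J = set(J_minus) | {e}        # adding e flips only the parity with C_e
    return basis, J
\end{verbatim}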
We are now ready for the proof of \Cref{thm:oddjoinsandevencuts}.
\begin{proof}[Proof of \Cref{thm:oddjoinsandevencuts}]
Assume first we have access to an oracle deciding whether an oriented regular matroid given by a representing totally unimodular matrix is non-even.
Suppose we are given a regular oriented matroid $\vec{M}$ represented by a totally unimodular matrix $A \in \{-1,0,1\}^{m \times n}$ for some $m, n \in \mathbb{N}$ and we want to decide whether it contains a directed circuit of even size.
First we compute $TC(\vec{M})$, which can be done in time polynomial in $mn$ by \Cref{cor:totallycyclic}.
Now we use \Cref{lemma:compute_base_and_cover} to compute a directed circuit basis of $TC(\vec{M})$ in time polynomial in~$mn$.
Then we go through the $|E(TC(\vec{M}))|-r(TC(\vec{M}))$ many elements of the basis and check whether one of these directed circuits has even size.
If so, the algorithm terminates.
Otherwise, every member of the basis has odd size.
By \Cref{proposition:matroidequivalence} with $J := E(TC(\vec{M}))$, we know that $TC(\vec{M})$ contains no directed circuit of even size if and only if $TC(\vec{M})$ is non-even.
Since $TC(\vec{M})$ is the largest deletion minor of $\vec{M}$, which has the same directed circuits as $\vec{M}$, we know that $TC(\vec{M})$ is non-even if and only if $\vec{M}$ is non-even.
So we can decide the question using the oracle.
Conversely, assume we have access to an oracle which decides whether a given oriented regular matroid contains a directed circuit of even size.
Again, our first step is to compute $TC(\vec{M})$ using \Cref{cor:totallycyclic}.
By \Cref{lemma:compute_base_and_cover} we then compute a directed circuit basis of $TC(\vec{M})$ and a set $J \subseteq E(TC(\vec{M}))$ such that every circuit in the basis has odd intersection with $J$.
Let $\vec{M}'$ be the oriented matroid obtained from $TC(\vec{M})$ by duplicating every element $e \in E(TC(\vec{M})) \setminus J$ into two copies $e_1$ and $e_2$\footnote{In particular, we transform every signed circuit of $TC(\vec{M})$ into a signed circuit of $\vec{M}'$ by replacing every occurrence of an element $e \in E(TC(\vec{M})) \setminus J$ in a signed partition by the two elements $e_1, e_2$ in the same set of the signed partition.
It is not hard to see that this indeed defines an oriented matroid, which is still regular.}. This way, every directed circuit in $\vec{M}'$ intersects $E(\vec{M}') \setminus J$ in an even number of elements. Thus, for every directed circuit $C$ in $TC(\vec{M})$, the size of the corresponding directed circuit in $\vec{M}'$ is odd if and only if $|C \cap J|$ is odd.
Hence, $J$ intersects every directed circuit in $TC(\vec{M})$ an odd number of times if and only if $\vec{M}'$ contains no even directed circuit.
By \Cref{proposition:matroidequivalence} this shows that $TC(\vec{M})$ is non-even if and only if $\vec{M}'$ has no directed circuit of even size.
Since $TC(\vec{M})$ is non-even if and only if $\vec{M}$ is non-even, we can decide the non-evenness of $\vec{M}$ by negating the output of the oracle with instance $\vec{M}'$.
\end{proof}
With the tools developed in this section at hand we are ready for the proof of Proposition~\ref{prop:oddcycleproblem}.
\begin{proposition}\label{prop:oddcycleproblem}
There is an algorithm which given as input a totally unimodular matrix $A \in \mathbb{R}^{m \times n}$ for some $m, n \in \mathbb{N}$, either returns an odd directed circuit of $\vec{M}[A]$ or concludes that no such circuit exists, and runs in time polynomial in $mn$.
\end{proposition}
\begin{proof}
Let $A \in \mathbb{R}^{m \times n}$ be a totally unimodular matrix given as input and let $\vec{M} := \vec{M}[A]$. To decide whether $\vec{M}$ contains a directed circuit of odd size, we first use \Cref{cor:totallycyclic} to compute a totally unimodular representation of $TC(\vec{M})$ in polynomial time in $mn$. We now apply \Cref{lemma:compute_base_and_cover} to compute in polynomial time a directed circuit basis $\mathcal{B}$ of $TC(\vec{M})$. Going through the elements of $\mathcal{B}$ one by one, we test whether one of the basis-circuits is odd, in which case the algorithm stops and returns this circuit. Otherwise, all circuits in $\mathcal{B}$ are even. Since every circuit in the underlying matroid of $TC(\vec{M})$ can be written as a symmetric difference of elements of $\mathcal{B}$, every circuit in this matroid must be even. In particular, $TC(\vec{M})$ and hence $\vec{M}$ do not contain any odd directed circuits, and the algorithm terminates with this conclusion.
\end{proof}
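Once a directed circuit basis of $TC(\vec{M})$ is available (for instance along the lines of the previous sketches), the final test in this proof is a plain parity scan; a minimal sketch, with the basis given as Python sets of element labels:
\begin{verbatim}
def odd_circuit_from_basis(directed_circuit_basis):
    # Return an odd circuit of the basis if one exists; otherwise every
    # circuit of TC(M) is a symmetric difference of even basis circuits and
    # hence even, so return None.
    for C in directed_circuit_basis:
        if len(C) % 2 == 1:
            return C
    return None
\end{verbatim}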
\section{Digraphs Admitting an Odd Dijoin}\label{sec:dijoin}
This section is dedicated to the proof of our main result, \Cref{thm:forbiddencographicminors}.
The overall strategy to achieve this goal is to work on digraphs and their families of bonds directly.
The object that certifies that the bond matroid of a digraph is non-even is called an \emph{odd dijoin}.
\begin{definition}
Let $D$ be a digraph.
A subset $J \subseteq E(D)$ is called an \emph{odd dijoin} if $|J \cap S|$ is odd for every directed bond $S$ in $D$.
\end{definition}
Let $D$ be a digraph.
The \emph{contraction} $D/A$ of an edge set $A \subseteq E(D)$ in $D$ is understood as the digraph arising from $D$ by deleting all edges of $A$ and identifying each weakly connected component of $D[A]$ into a single vertex.
Note that this might produce new loops, arising from edges not contained in $A$ whose endpoints lie in the same weakly connected component of $D[A]$. Note further that contracting a loop is equivalent to deleting the loop.
An edge $e=(x,y)$ of a digraph $D$, which is not a loop, is said to be \emph{deletable} (or \emph{transitively reducible}) if there is a directed path in $D$ starting in $x$ and ending in $y$ which does not use $e$. Note that an edge $e \in E(D)$ is deletable if and only if $e$ is a butterfly-contractible element of $M^\ast(D)$.
For two digraphs $D_1,D_2$, we say that $D_1$ is a \emph{cut minor} of $D_2$ if it can be obtained from $D_2$ by a finite series of edge contractions, deletions of deletable edges, and deletions of isolated vertices.
Our next lemma guarantees that the property of admitting an odd dijoin is closed under the cut minor relation.
\begin{lemma} \label{lemma:maintainment}
Let $D_1, D_2$ be digraphs such that $D_1$ is a cut minor of $D_2$.
If $D_2$ admits an odd dijoin, then so does $D_1$.
\end{lemma}
\begin{proof}
The statement follows by applying \Cref{prop:GBminorcontainment} to $M^\ast(D_1)$ and $M^\ast(D_2)$, noting that deleting isolated vertices from a digraph does not change the induced oriented bond matroid.
\end{proof}
Our goal will be to characterise the digraphs admitting an odd dijoin in terms of forbidden cut minors. In the following, we prepare this characterisation by providing a set of helpful statements. For an undirected graph $G$, we define the \emph{cut space} of $G$ as the $\mathbb{F}_2$-linear vector space generated by the bonds of $G$, whose addition operation is the symmetric difference and whose neutral element is the empty set. The following statements are all obtained in a straightforward way by applying the oriented matroid results \Cref{lemma:circuitspace}, \Cref{cor:matroidweighting} and \Cref{proposition:matroidequivalence}, respectively, to the oriented bond matroid $M^{\ast}(D)$ induced by $D$.
\begin{corollary} \label{cor:cutspace}
Let $D$ be a weakly connected and acyclic digraph with underlying multi-graph $G$.
Then the cut space of $G$ admits a basis $\mathcal{B}$ whose elements are the edge sets of minimal directed cuts in $D$.
Moreover, if $A \subseteq E(D)$ is a set of edges such that $D/A$ is acyclic and $G[A]$ is a forest, then one can choose $\mathcal{B}$ such that every edge $e \in A$ appears in exactly one cut of the basis. \qed
\end{corollary}
\begin{corollary} \label{cor:weighting}
Let $D$ be a digraph and let $\mathcal{B}$ be a basis of the cut space consisting of minimal directed cuts. Then there is an edge set $J' \subseteq E(D)$ such that $|J' \cap B|$ is odd for all $B \in \mathcal{B}$. \qed
\end{corollary}
\begin{proposition} \label{proposition:equivalence}
Let $D$ be a digraph, $\mathcal B$ be a basis of the cut space consisting of directed bonds, and let $J' \subseteq E(D)$ be such that $|B \cap J'|$ is odd for all $B \in \mathcal B$.
Then the following statements are equivalent:
\begin{enumerate}[label=(\roman*)]
\item $D$ has an odd dijoin.
\item If $B_1,\dotsc,B_k$ are directed bonds of $D$ with $k$ odd, then $\sum_{i=1}^k B_i \neq \emptyset$.
\item Every directed bond of $D$ can be written as $\sum_{i=1}^k B_i$ with $k$ odd and $B_1,\dotsc,B_k \in \mathcal B$.
\item $J'$ is an odd dijoin of $D$. \qed
\end{enumerate}
\end{proposition}
\subsection{Forbidden cut minors for digraphs with an odd dijoin}\hfill\\
\indent Next we characterise the digraphs admitting an odd dijoin in terms of forbidden cut minors.
For this purpose, we identify the digraphs without an odd dijoin for which every proper cut minor has an odd dijoin.
We call such a digraph a \emph{minimal obstruction}.
A digraph~$D=(V,E)$ is said to be \emph{oriented} if it has no loops, no parallel edges, and no anti-parallel edges.
Furthermore, $D$ is called \emph{transitively reduced} if for every edge $e = (v,w) \in E$ the only directed path in $D$ starting at $v$ and ending in $w$ consists of $e$ itself, or equivalently, if no edge in $D$ is deletable.
We start with the following crucial lemma, which will be used multiple times to successively narrow down the structure that minimal obstructions must have.
\begin{lemma} \label{lemma:basicproperties}
Let $D$ be a minimal obstruction.
Then the underlying multi-graph $G$ of $D$ is 2-vertex-connected.
Furthermore, $D$ is oriented, acyclic, and transitively reduced.
\end{lemma}
\begin{proof}
Assume that $D$ has no odd dijoin, but every cut minor of $D$ has one.
Then it is easy to check that $|V(D)| \ge 4$.
To prove that $G$ must be $2$-vertex-connected, suppose towards a contradiction that $G$ can be written as the union of two proper subgraphs $G_1, G_2$ with the property that $|V(G_1) \cap V(G_2)| \le 1$.
Then the orientations $D_1,D_2$ induced on $G_1, G_2$ by $D$ are proper cut minors of $D$:
Indeed, for $i \in \{1,2\}$ we can obtain $D_i$ from $D$ by contracting all edges in $D_{3-i}$ and then deleting all the resulting isolated vertices outside $V(D_i)$.
Since $D_1, D_2$ are proper cut minors of $D$, they must admit odd dijoins $J_1,J_2$, respectively.
However, since $D_1$ and $D_2$ share at most a single vertex, the directed bonds of $D$ are either directed bonds of $D_1$ or of $D_2$.
Hence, the disjoint union $J_1 \cup J_2$ defines an odd dijoin of $D$ and yields the desired contradiction.
To prove acyclicity, assume towards a contradiction that there is a directed cycle $C$ in~$D$.
Let us consider the digraph $D/E(C)$.
This is a proper cut minor of $D$ and therefore must have an odd dijoin $J$.
However, the directed bonds in $D/E(C)$ are the same as the directed bonds in $D$ edge-disjoint from $C$, and since $C$ is directed, these are already all the directed bonds of $D$. Hence $J$ is an odd dijoin also for~$D$, which is a contradiction.
To prove that $D$ is transitively reduced, assume towards a contradiction that there was an edge $e=(x,y) \in E(D)$ and a directed path $P$ from $x$ to $y$ not containing $e$.
Then $e$ is a deletable edge and $D-e$ is a cut minor of $D$, which therefore must have an odd dijoin $J \subseteq E(D) \setminus \{e\}$.
Note that a directed cut in $D$ either does not intersect $\{e\} \cup E(P)$ at all or contains $e$ and exactly one edge from $P$.
It follows from this that for every directed bond $B$ in $D$, we get that $B-e$ is a directed bond of $D-e$.
This directly yields that $J$ is also an odd dijoin of $D$, contradiction.
Clearly, the fact that $D$ is oriented follows from $D$ being simultaneously acyclic and transitively reduced.
This concludes the proof of the lemma.
\end{proof}
From this, we directly have the following useful observations.
\begin{corollary} \label{cor:contract}
Let $D$ be a minimal obstruction.
Then for every edge $e \in E(D)$, the digraph $D/e$ is acyclic.
Similarly, for every vertex $v \in V(D)$ which is either a source or a sink, the digraph $D / E(v)$, with $E(v) := D[ \{ v \},V(D) \setminus \{ v \}]$, is acyclic.
\end{corollary}
\begin{proof}
Let $e$ be an edge of $D$.
Since $D$ is a minimal obstruction, we know by \Cref{lemma:basicproperties} that $e$ is not a loop.
Now assume towards a contradiction that there was a directed cycle in $D/e$.
As $D$ itself is acyclic according to \Cref{lemma:basicproperties}, this implies that there is a directed path $P$ in $D$ connecting the end vertices of $e$, which does not contain $e$ itself.
This path together with $e$ now either contradicts the fact that $D$ is acyclic or the fact that $D$ is transitively reduced, both of which hold due to \Cref{lemma:basicproperties}.
For the second part assume w.\@l.\@o.\@g.\@ (using the symmetry given by reversing all edges) that $v$ is a source.
Suppose for a contradiction there was a directed cycle in $D/E(v)$.
This implies the existence of a directed path $P$ in $D-v$ which connects two different vertices in the neighbourhood of $v$, say it starts in $w_1 \in N(v)$ and ends in $w_2 \in N(v)$.
Now the directed path $(v,w_1)+P$ witnesses that the directed edge $(v,w_2)$ is deletable, contradicting that $D$ is transitively reduced.
This concludes the proof of the second statement.
\end{proof}
\begin{lemma} \label{lemma:acycliccontract}
Let $D$ be a minimal obstruction.
If $A \subseteq E(D)$ is such that $D/A$ is acyclic and such that $D[A]$ is an oriented forest, then there is a directed bond in $D$ which fully contains $A$.
\end{lemma}
\begin{proof}
By \Cref{cor:cutspace} there is a basis $\mathcal{B}$ of the cut space consisting of directed bonds such that each $e \in A$ is contained in exactly one of the bonds in the basis.
Moreover, by \Cref{cor:weighting} there is $J' \subseteq E(D)$ such that each $B \in \mathcal{B}$ has odd intersection with~$J'$.
Since $D$ has no odd dijoin, there has to be a directed bond $B_0$ in $D$ such that $|B_0 \cap J'|$ is even.
Let $B_0=B_1+\ldots+B_m$ be the unique linear combination with pairwise distinct $B_1,\ldots,B_m \in \mathcal{B}$.
Clearly, $m$ must be even.
Let $D'$ be the cut minor obtained from $D$ by contracting the edges in $E(D) \setminus \bigcup_{i=1}^{m}{B_i}$.
The bonds $B_0,B_1,\ldots,B_m$ are still directed bonds in $D'$ and satisfy $B_0+\ldots+B_m=\emptyset$, while $m+1$ is odd.
The equivalence of (i) and (ii) in \Cref{proposition:equivalence} now yields that $D'$ has no odd dijoin.
By the minimality of $D$ we thus must have $D=D'$ and $\bigcup_{i=1}^{m}{B_i}=E(D)$.
It follows that every $e \in A$ is contained in exactly one of the bonds $B_i$ and thus also in $B_0$.
Therefore, $B_0 \supseteq A$.
\end{proof}
\begin{corollary} \label{no separating directed cut}
Let $D = (V, E)$ be a minimal obstruction.
For $i \in \{1,2\}$ let $\emptyset \neq A_i \subseteq E$ be such that $D[A_i]$ is a forest and $D/A_i$ is acyclic.
Suppose there is a directed cut~$\Cut{}{X}$ in $D$ separating $A_1$ from $A_2$, i.\@e.\@, such that $A_1 \subseteq E(D[X])$ and $A_2 \subseteq E(D[V \setminus X])$. Then there exists a directed bond in $D$ containing $A_1 \cup A_2$.
\end{corollary}
\begin{proof}
Let $A := A_1 \mathbin{\dot\cup} A_2$.
As $A_1$ and $A_2$ induce vertex-disjoint forests, $D[A]$ is a forest as well.
Since no edge is directed from a vertex in $V \setminus X$ to a vertex in $X$, no directed circuit in $D/A$ can contain a contracted vertex from $A_1$ and a contracted vertex from $A_2$, so every directed circuit must already exist in $D/A_1$ or in $D/A_2$.
Because these two digraphs are acyclic, $D/A$ is acyclic.
Hence, by \Cref{lemma:acycliccontract}, $A$ is fully included in a directed bond of $D$. This proves the assertion.
\end{proof}
With the next proposition we shall make the structure of minimal obstructions much more precise.
To state the result, we shall make use of the following definition.
\begin{definition}
Let $n_0, n_1, n_2 \in \mathbb{N}$.
Then we denote by $\mathcal D(n_0,n_1,n_2)$ the digraph $(V,E)$, where $V = V_0 \mathbin{\dot\cup} V_1 \mathbin{\dot\cup} V_2$ with pairwise disjoint sets $V_i$ of size $n_i$ for $i \in \{0,1,2\}$, and $E = (V_0 \times V_1) \mathbin{\dot\cup} (V_1 \times V_2)$.
\end{definition}
\begin{proposition} \label{prop:threenumberreduction}
Let $D = (V, E)$ be a minimal obstruction.
Then $D$ is isomorphic to $\mathcal{D}(n_1,n_2,n_3)$ for some integers $n_1,n_2,n_3 \ge 0$.
\end{proposition}
\begin{proof}
We shall split the proof into several claims, starting with the following one.
\begin{claim}\label{claim:length3}
$D$ contains no directed path of length $3$.
\end{claim}
Suppose towards a contradiction that $v_0,e_1,v_1,e_2,v_2,e_3,v_3$ is a directed path of length $3$ in $D$ with $e_1=(v_0,v_1),e_2=(v_1,v_2), e_3=(v_2,v_3)$.
By \Cref{cor:contract}, $D/e_1$ and $D/e_3$ are acyclic.
Moreover, because $D$ is acyclic by \Cref{lemma:basicproperties}, the edge $e_2$ is contained in a directed cut~$\Cut{}{X}$ in $D$, separating $\{e_1\}$ and $\{e_3\}$.
By \Cref{no separating directed cut} this means that there is a directed bond $\partial(Y)$ in $D$ containing both $e_1$ and $e_3$.
This however means that $v_0, v_2 \in Y$ and $v_1, v_3 \notin Y$.
Hence, $e_2$ is an edge in $D$ starting in $V(D) \setminus Y$ and ending in $Y$, a contradiction since $\partial(Y)$ is a directed bond.
This completes the proof of \Cref{claim:length3}.
\vspace{6pt}
For $i \in \{ 0, 1, 2 \}$ let $V_i$ denote the set of vertices $v \in V$ such that the longest directed path ending in $v$ has length~$i$.
By definition of the $V_i$ and since $D$ is acyclic, there is no edge from a vertex in $V_i$ to a vertex in $V_j$ for $i \ge j$, as otherwise this would give rise to a directed path of length $i+1$ ending in a vertex of $V_j$.
By \Cref{claim:length3} we know that $V = V_0 \mathbin{\dot\cup} V_1 \mathbin{\dot\cup} V_2$ holds.
We move on by proving the following claim.
\begin{claim}\label{claim:all edges 0-1}
Every vertex~$v \in V_1$ is adjacent to every vertex~$u \in V_0$.
\end{claim}
Let $v \in V_1$ and $u \in V_0$.
Assume for a contradiction that $u$ is not adjacent to $v$.
By definition of $V_1$ there is an edge $f = (u',v)$ with $u' \in V_0$.
By \Cref{cor:contract}, $D/f$ and $D/E(u)$ are acyclic because $u$ is a source.
Let $X \supseteq \{u',v\}$ be the set of all vertices from which $v$ can be reached via a directed path.
Clearly $\Cut{}{X}$ is a directed cut in $D$.
As $u \in V_0 \setminus X$ is a source, we conclude that $\{u\} \cup N(u) \subseteq V \setminus X$.
This however means that the directed cut $\Cut{}{X}$ separates $f$ from the edges in $E(u)$.
By \Cref{no separating directed cut}, this means that there is a directed bond $\partial(Y)$ in $D$ containing $E(u) \cup \{f\}$.
Since $E(u) = \Cut{}{\{u\}}$ itself is a directed cut in $D$, this contradicts the fact that $\partial(Y)$ is an inclusion-wise minimal directed cut in $D$, and proves \Cref{claim:all edges 0-1}.
\vspace{6pt}
We proceed with another claim.
\begin{claim}\label{claim:no 2-layer edges}
$D$ does not contain any edge from $V_0$ to $V_2$.
\end{claim}
Let $u \in V_0$ and $w \in V_2$.
By definition of $V_2$ there is some $v \in V_1$ such that $(v,w) \in E$.
By \Cref{claim:all edges 0-1}, $(u,v) \in E$.
Because $D$ is transitively reduced by \Cref{lemma:basicproperties}, we obtain $(u,w) \notin E$.
So the proof of \Cref{claim:no 2-layer edges} is complete.
\vspace{6pt}
Now we come to the last claim we need for the proof of this proposition.
\begin{claim}
Every vertex~$v \in V_1$ is adjacent to every vertex~$w \in V_2$.
\end{claim}
Let $v \in V_1, w \in V_2$ and suppose for a contradiction that $w$ is not adjacent to $v$.
Let $f = (u,v)$ be an edge with $u \in V_0$.
By \Cref{cor:contract}, $D/f$ and $D/E(w)$ are acyclic because $w$ is a sink.
Let $X \supseteq \{u,v\}$ be the set of all vertices from which $v$ can be reached via a directed path.
Again, $\Cut{}{X}$ forms a directed cut in $D$.
\Cref{claim:no 2-layer edges} implies that $N(w) \subseteq V_1 \setminus \{v\} \subseteq V \setminus X$.
This means $\Cut{}{X}$ separates $f$ from the edges in $E(w)$, contradicting \Cref{no separating directed cut} again.
\vspace{6pt}
By combining all four claims we obtain $E=(V_0 \times V_1) \dot{\cup} (V_1 \times V_2)$, and the proof of this proposition is complete.
\end{proof}
Now \Cref{prop:threenumberreduction} puts us in the comfortable situation that the only possible minimal obstructions to having an odd dijoin are part of a $3$-parameter class of simply structured digraphs.
The rest of this section is devoted to determine the conditions on $n_1,n_2,n_3$ that need to be imposed such that $\mathcal{D}(n_1,n_2,n_3)$ is a minimal obstruction.
It will be helpful to use the well-known concept of so-called $T$-joins.
\begin{definition}
Let $G$ be an undirected graph and $T \subseteq V(G)$ be some vertex set.
A subset $J \subseteq E(G)$ of edges is called a \emph{$T$-join}, if in the subgraph $H:= G[J]$ of $G$, every vertex in $T$ has odd, and every vertex in $V(G) \setminus T$ has even degree.
\end{definition}
The following result is folklore.
\begin{lemma} \label{lemma:tjoin}
A graph $G$ with some vertex set $T \subseteq V(G)$ admits a $T$-join if and only if $T$ has an even number of vertices in each connected component of $G$.
\end{lemma}
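Although we only need the existence statement, a $T$-join is also easy to construct when it exists: pair up the vertices of $T$ within each connected component and take the symmetric difference of paths between the pairs. A small sketch (Python, for a connected input graph given by adjacency lists, with edges recorded as two-element frozensets; the encoding is our illustrative choice and not part of the lemma):
\begin{verbatim}
from collections import deque

def t_join(adj, T):
    # adj: dict mapping each vertex to an iterable of neighbours (connected
    # graph); T: vertices that must get odd degree, |T| assumed to be even.
    def path_edges(s, t):
        parent, queue = {s: None}, deque([s])
        while queue:                      # BFS from s until t is reached
            v = queue.popleft()
            if v == t:
                break
            for w in adj[v]:
                if w not in parent:
                    parent[w] = v
                    queue.append(w)
        edges, v = set(), t
        while parent[v] is not None:
            edges.add(frozenset((v, parent[v])))
            v = parent[v]
        return edges
    T = list(T)
    join = set()
    for s, t in zip(T[0::2], T[1::2]):    # pair up the T-vertices arbitrarily
        join ^= path_edges(s, t)          # symmetric difference of the paths
    return join
\end{verbatim}
Since degrees add modulo $2$ under symmetric difference and each vertex of $T$ is an endpoint of exactly one of the chosen paths, the returned edge set is indeed a $T$-join.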
We continue with an observation about odd dijoins in digraphs of the form $\mathcal{D}(n_1,n_2,0)$.
\begin{observation} \label{observation:obs}
Let $n_1,n_2 \ge 1$.
Then the digraph $\mathcal{D}(n_1,n_2,0) \simeq \mathcal{D}(0,n_1,n_2)$ has an odd dijoin if and only if either $\text{min}(n_1,n_2) \leq 1$, or $n_1,n_2 \ge 2$ and $n_1 \equiv n_2 \text{ (mod }2)$.
\end{observation}
\begin{proof}
If $\text{min}(n_1,n_2) \leq 1$, then all directed bonds in $\mathcal{D}(n_1,n_2,0)$ consist of single edges and thus, $J:= E(\mathcal{D}(n_1,n_2,0))$ defines an odd dijoin.
If $n_1,n_2 \ge 2$, the directed bonds in $\mathcal{D}(n_1,n_2,0)$ are exactly those cuts with one vertex on one side of the cut and all other vertices on the other side.
Hence, there is an odd dijoin if and only if the complete bipartite graph with partition classes of size $n_1,n_2$ has a $T$-join, where $T$ contains all $n_1+n_2$ vertices.
The statement is now implied by \Cref{lemma:tjoin}.
\end{proof}
Next we characterise when the digraphs $\mathcal{D}(n_1,n_2,n_3)$ admit an odd dijoin.
\begin{proposition} \label{proposition:threenumbers}
Let $n_1,n_2,n_3 \ge 1$ be integers.
Then $\mathcal{D}(n_1,n_2,n_3)$ has an odd dijoin if and only if one of the following holds:
\begin{enumerate}[label=(\roman*)]
\item $n_2=1$.
\item $n_2=2$ and $n_1 \equiv n_3 \text{ (mod }2)$.
\item $n_2 \ge 3$, and $n_1 \equiv n_3 \equiv 1 \text{ (mod }2)$.
\end{enumerate}
\end{proposition}
\begin{proof}
If $n_2=1$, then $\mathcal{D}(n_1,n_2,n_3)$ is an oriented star.
Clearly, here, the directed bonds consist of single edges, and therefore, $J := E(\mathcal{D}(n_1,1,n_3))$ defines an odd dijoin.
If $n_2=2$, it is easily seen that $\mathcal{D}(n_1,2,n_3)$ is a planar digraph, which admits a directed planar dual isomorphic to a bicycle $\Bidirected{C}_{n_1+n_3}$ of length $n_1+n_3$.
By planar duality, we know that $\mathcal{D}(n_1,2,n_3)$ has an odd dijoin if and only if there is a subset of edges of $\Bidirected{C}_{n_1+n_3}$ which intersects every directed cycle an odd number of times.
By \Cref{thm:seymthomtheorem} we know that such an edge set exists if and only if $n_1+n_3$ is even, that is, $n_1 \equiv n_3 \text{ (mod }2)$.
Therefore, we assume that $n_2 \ge 3$ for the rest of the proof.
We now first show the necessity of~(iii).
So assume that $D := \mathcal{D}(n_1,n_2,n_3)$ has an odd dijoin $J$.
We observe that the underlying multi-graph of $D$ is $2$-connected.
Hence, for every vertex $x \in V_1 \cup V_2 \cup V_3$, the cut $E(x)$ of all edges incident with $x$ is a minimal cut of the underlying multi-graph, and it is directed whenever $x \in V_1 \cup V_3$.
Therefore, $U(D[J])$ must have odd degree at every vertex in $V_1 \cup V_3$.
Moreover, we observe that for any proper non-empty subset $X \subsetneq V_2$, the cut in $D$ induced by the partition $(V_1 \cup X, (V_2 \setminus X) \cup V_3)$ is minimal and directed.
In the following, we denote this cut by $F(X)$.
Now for every vertex $x \in V_2$, choose some $x' \in V_2 \setminus \{x\}$ and consider the minimal directed cuts $F(\{x'\}), F(\{x,x'\})$.
Both are minimal directed cuts (here, we use that $n_2 \ge 3$) and thus must have odd intersection with~$J$.
Moreover, the symmetric difference $F(\{x'\})+F(\{x,x'\})$ contains exactly the set $E(x)$ of edges incident with $x$ in $D$.
We conclude the following:
\begin{align*}
|E(x) \cap J|=|(F(\{x'\})+F(\{x,x'\})) \cap J| & \equiv |F(\{x'\}) \cap J|+|F(\{x,x'\}) \cap J| \\
& \equiv 1+1 \equiv 0 \text{ (mod }2)
\end{align*}
As $x \in V_2$ was chosen arbitrarily, we conclude that $J$ must be a $T$-join of the underlying multi-graph of $\mathcal{D}(n_1,n_2,n_3)$ where $T = V_1 \cup V_3$.
Now \Cref{lemma:tjoin} implies that ${|T| = n_1+n_3}$ must be even and hence $n_1 \equiv n_3 \text{ (mod }2)$.
We claim that (iii) must be satisfied, i.\@e.\@, $n_1$ and $n_3$ are odd.
Assume towards a contradiction that this is not the case.
Hence, by our observation above both $n_1$ and $n_3$ are even.
Let $x \in V_2$ be some vertex, and consider the directed bond $F(\{x\})$.
We can rewrite this bond as the symmetric difference of the directed cut ${\Cut{}{V_1}=\{(v_1,v_2) \; | \; v_1 \in V_1, v_2 \in V_2\}}$ and the cut $E(x)$ of all edges incident with $x$.
Because $|E(u) \cap J|$ is odd for every $u \in V_1$, we obtain that $|\Cut{}{V_1} \cap J|=\sum_{u \in V_1}{|E(u) \cap J|}$ must be even.
However, since also $|E(x) \cap J|$ is even, this means that $|F(\{x\}) \cap J| \equiv |\Cut{}{V_1} \cap J|+|E(x) \cap J| \equiv 0 \text{ (mod }2)$, which is the desired contradiction, as $J$ is an odd dijoin.
So (iii) must be satisfied.
To prove the reverse direction, assume that (iii) is fulfilled, i.\@e.\@, $n_1 \equiv n_3 \equiv 1 \text{ (mod }2)$.
We shall construct an odd dijoin of $\mathcal{D}(n_1,n_2,n_3)$.
For this purpose, we choose $J$ to be a $T$-join of the underlying multi-graph where $T=V_1 \cup V_3$.
We claim that this defines an odd dijoin of $\mathcal{D}(n_1,n_2,n_3)$.
It is not hard to check that the directed bonds of $\mathcal{D}(n_1,n_2,n_3)$ are the cuts $E(v)$ for vertices $v \in V_1 \cup V_3$ and the cuts $F(X)$ as described above, where $\emptyset \neq X \subsetneq V_2$.
By the definition of a $T$-join, all of the directed bonds of the first type have an odd intersection with $J$, so it suffices to consider the bonds of the second type.
Consider again the directed cut $\Cut{}{V_1}$ in $\mathcal{D}(n_1,n_2,n_3)$.
For any $\emptyset \neq X \subsetneq V_2$, we can write $F(X)$ as the symmetric difference $F(X)=\Cut{}{V_1}+\sum_{x \in X}{E(x)}$.
We therefore conclude that
\begin{align*}
|F(X) \cap J| & \equiv |\Cut{}{V_1} \cap J| + \sum_{x \in X}{\underbrace{|E(x) \cap J|}_{\text{even}}} \text{ (mod }2) \\
& \equiv |\Cut{}{V_1} \cap J| = \sum_{x \in V_1}{\underbrace{|E(x) \cap J|}_{\text{odd}}} \equiv n_1 \equiv 1 \text{ (mod }2).
\end{align*}
This verifies that $J$ is an odd dijoin, and completes the proof of the proposition.
\end{proof}
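For small parameters, the description of the directed bonds used in this proof also yields a straightforward brute-force check of whether a given edge set is an odd dijoin of $\mathcal{D}(n_1,n_2,n_3)$. The following sketch (Python; the vertex and edge encodings are our own illustrative choices, and $n_1,n_2,n_3 \ge 1$ is assumed) enumerates exactly the bonds $E(v)$ for $v \in V_1 \cup V_3$ and $F(X)$ for $\emptyset \neq X \subsetneq V_2$:
\begin{verbatim}
from itertools import combinations

def is_odd_dijoin(n1, n2, n3, J):
    # Vertices are pairs (layer, index); edges are pairs of vertices, directed
    # from layer 1 to layer 2 and from layer 2 to layer 3.  J: a set of edges.
    V1 = [(1, i) for i in range(n1)]
    V2 = [(2, j) for j in range(n2)]
    V3 = [(3, k) for k in range(n3)]
    E = {(u, v) for u in V1 for v in V2} | {(v, w) for v in V2 for w in V3}

    bonds = [{e for e in E if x in e} for x in V1 + V3]       # the stars E(v)
    for size in range(1, n2):
        for X in map(set, combinations(V2, size)):
            # F(X): edges leaving V1 together with X, i.e. edges from V1 to
            # V2 \ X and edges from X to V3
            bonds.append({(u, v) for (u, v) in E
                          if (u in V1 and v not in X) or (u in X)})
    return all(len(B & set(J)) % 2 == 1 for B in bonds)
\end{verbatim}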
We shall now use these insights to characterise minimal obstructions.
For this let us first introduce new notation.
Let $D$ be a digraph consisting of a pair $h_1,h_2$ of ``hub vertices'' and other vertices $x_1,\ldots,x_n$, where $n \ge 3$, such that for every $i \in [n]$, the vertex $x_i$ has either precisely two outgoing or precisely two incoming edges to both $h_1,h_2$, and these are all edges of $D$.
In this case, we refer to $D$ as a \emph{diamond}.
Furthermore, we call any digraph isomorphic to $\vec{K}_{n_1,n_2}$ for some $n_1,n_2 \ge 2$, a \emph{one-direction}.
We shall call both diamonds and one-directions \emph{odd} if the total number of vertices of these digraphs is odd.
\begin{lemma} \label{lemma:verifyobstructions}
All odd diamonds and all odd one-directions are minimal obstructions.
\end{lemma}
\begin{proof}
It is directly seen from \Cref{observation:obs} and \Cref{proposition:threenumbers} that, indeed, odd diamonds and odd one-directions do not possess an odd dijoin.
Therefore it remains to show that all proper cut minors of these digraphs have odd dijoins.
Because both odd diamonds and odd one-directions are weakly $2$-connected, transitively reduced and acyclic, the only cut minor operation applicable to them in the first step is the contraction of a single edge.
By \Cref{lemma:maintainment} it therefore suffices to show that for both types of digraphs, the contraction of any edge results in a digraph admitting an odd dijoin.
We first consider odd diamonds.
Let $D=\mathcal{D}(n_1,2,n_3)$ with $n_1,n_3 \ge 1$ and $n_1+n_3$ odd, and let $e \in E(D)$ be arbitrary.
In the planar directed dual graph of $D$, an odd bicycle with $n_1+n_3$ vertices, there is a directed dual edge corresponding to $e$.
It is easily seen by duality that $D/e$ has an odd dijoin if and only if the odd bicycle of order $n_1+n_3 \ge 3$ with a single deleted edge has an edge set intersecting every directed cycle an odd number of times.
However, this is the case, because such a digraph is non-even by \Cref{thm:seymthomtheorem}.
Now we consider odd one-directions.
Let $D = \mathcal{D}(n_1,n_2,0)$ with $n_1,n_2 \ge 2$ and $n_1+n_2$ odd, and let $e=(x,y) \in E(D)$ be arbitrary.
Then in the digraph $D/e$, define $J$ to be the set of all edges incident with the contraction vertex.
It is easily observed that $J$ intersects every directed bond of $D/e$ exactly once and hence is an odd dijoin of $D/e$; as before, this implies that every proper cut minor of $D$ has an odd dijoin.
This completes the proof.
\end{proof}
Now we are able to prove a dual version of \Cref{thm:seymthomtheorem} and characterise the existence of odd dijoins in terms of forbidden cut minors.
\begin{theorem}\label{thm:forbiddenminors}
A digraph admits an odd dijoin if and only if it does neither have an odd diamond nor an odd one-direction as a cut minor.
\end{theorem}
\begin{proof}
By \Cref{lemma:maintainment}, a digraph has an odd dijoin if and only if it does not contain a minimal obstruction as a cut minor.
Hence it suffices to show that a digraph $D$ is a minimal obstruction if and only if it is isomorphic to an odd diamond or an odd one-direction.
The fact that these digraphs indeed are minimal obstructions was proved in \Cref{lemma:verifyobstructions}.
So it remains to show that these are the only minimal obstructions.
Let $D$ be an arbitrary minimal obstruction.
By \Cref{prop:threenumberreduction} there are integers $n_1,n_2,n_3 \ge 0$ such that $D \simeq \mathcal{D}(n_1,n_2,n_3)$.
By the definition of a minimal obstruction, we know that $D$ has no odd dijoin, while for every edge $e \in E(D)$, the digraph $D/e$ is a cut minor of $D$ and therefore has one.
We know due to \Cref{lemma:basicproperties} that $D$ is weakly $2$-connected.
Hence, we either have ${\text{min}(n_1,n_3) = 0}$, so (by symmetry) w.\@l.\@o.\@g.\@ $n_3=0$, or $n_1,n_3 \ge 1$ and therefore $n_2 \ge 2$.
In the first case, we know by \Cref{observation:obs} and using that $D$ has no odd dijoin, that $n_1,n_2 \ge 2$ and $n_1 \not\equiv n_2 \text{ (mod }2)$.
So $D$ is an odd one-direction, which verifies the claim in the case of $\text{min}(n_1,n_3)=0$.
Next assume that $n_1,n_3 \ge 1$ and $n_2 \ge 2$.
Let $e=(x_1,x_2) \in E(D)$ with $x_i \in V_i$ for $i=1,2$ be an arbitrary edge going from the first layer $V_1$ to the second layer $V_2$.
Denote by $c$ the vertex of $D/e$ corresponding to the contracted edge $e$.
Then in the digraph $D/e$, all edges $\{(c,v_3) \; | \; v_3 \in V_3\}$ as well as all the edges in $\{(v_1,v_2) \; | \; v_1 \in V_1 \setminus \{x_1\},v_2 \in V_2 \setminus \{x_2\}\}$ admit parallel paths since $n_2 \ge 2$ and, therefore, are deletable.
Successive deletion yields a cut minor $D'$ of $D/e$, and thus of $D$, with vertex set
\begin{align*}
V(D') = (V_1\setminus \{x_1\}) \cup \{c\} \cup (V_2\setminus \{x_2\}) \cup V_3
\end{align*}
and edge set
\begin{align*}
E(D')\! =\! \{(v_1,c) \; | \; v_1 \in V_1\setminus\{x_1\}\} \!\cup\! \{(c,v_2) \; | \; v_2 \in V_2 \setminus\{x_2\}\} \!\cup\! \{(v_2,v_3) \; | \; v_2 \in V_2 \setminus \{x_2\}, v_3 \in V_3\}.
\end{align*}
Now after contracting all edges of $D'$ of the set $\{(v_1,c) \; | \; v_1 \in V_1\setminus\{x_1\}\}$ we find that $D'$, and hence $D$, has a proper cut minor isomorphic to $\mathcal{D}(1,n_2-1,n_3)$ with corresponding layers $\{c\}, V_2 \setminus \{x_2\}$ and $V_3$.
Applying a symmetric argument (starting by contracting an edge going from $V_2$ to $V_3$), we find that $D$ also has a proper cut minor isomorphic to $\mathcal{D}(n_1,n_2-1,1)$.
Using these insights, we now show that $n_2 = 2$ holds.
Suppose for a contradiction that $n_2 \ge 3$ holds.
Assume first that $n_2 \ge 4$, and therefore $n_2 - 1 \ge 3$.
Using statement (iii) of \Cref{proposition:threenumbers} and that $\mathcal{D}(1,n_2-1,n_3)$ and $\mathcal{D}(n_1,n_2-1,1)$ both have odd dijoins, we must have $n_1 \equiv n_3 \equiv 1 \text{ (mod }2)$.
In the case that $n_2=3$, we similarly observe from statement (ii) of \Cref{proposition:threenumbers} with the digraphs $\mathcal{D}(1,2,n_3)$ and $\mathcal{D}(n_1,2,1)$ that both $n_1$ and $n_3$ must be odd.
Now using statement (iii) of \Cref{proposition:threenumbers} with the digraph $D \simeq \mathcal{D}(n_1,n_2,n_3)$ we can conclude that $D$ must admit an odd dijoin as well, a contradiction.
Hence, we must have $n_2=2$.
Using again statement (ii) of \Cref{proposition:threenumbers} with $D \simeq \mathcal{D}(n_1,2,n_3)$, we get that $n_1+n_3$ must be odd.
Therefore $D$ is isomorphic to an odd diamond with $2+n_1+n_3$ many vertices.
This concludes the proof of the theorem.
\end{proof}
We are now ready to give the proof of \Cref{thm:forbiddencographicminors}.
\begin{proof}[Proof of \Cref{thm:forbiddencographicminors}]
Let $\vec{M}$ be an oriented bond matroid, and let $D$ be a digraph such that $\vec{M} \simeq M^\ast(D)$.
Let us first note that by definition, $\vec{M}$ is non-even if and only if $D$ has an odd dijoin.
Hence, for the equivalence claimed in this theorem it suffices to show that $D$ has an odd dijoin if and only if $M^\ast(D)$ does not have a $\mathsf{GB}$-minor isomorphic to $M^\ast(\vec{K}_{m,n})$ for $m, n \ge 2$ such that $m+n$ is odd.
Suppose first that $D$ has an odd dijoin, i.\@e.\@, $M^\ast(D)$ is non-even. Then by \Cref{prop:GBminorcontainment}, every $\mathsf{GB}$-minor of $M^\ast(D)$ is non-even as well, and hence, no such minor can equal $M^\ast(\vec{K}_{m,n})$ for any $m, n \ge 2$ with $m+n$ odd, since $\vec{K}_{m,n}$ does not have an odd dijoin for any such $m$ and $n$ by \Cref{lemma:verifyobstructions}.
This proves the first implication of the equivalence.
Conversely, let us suppose that $M^\ast(D)$ does not have a $\mathsf{GB}$-minor isomorphic to $M^\ast(\vec{K}_{m,n})$ for any $m, n \ge 2$ such that $m+n$ is odd.
We shall show that $D$ admits an odd dijoin.
For this we use \Cref{thm:forbiddenminors} and verify that $D$ has neither an odd diamond nor an odd one-direction as a cut minor.
This however follows directly from the fact that the bond-matroid induced by any odd diamond of order $n$ is isomorphic to $M^\ast(\vec{K}_{2,n-2})$ as well as the easy observation that if $D'$ is a cut minor of $D$, then $M^\ast(D')$ is a $\mathsf{GB}$-minor of $M^\ast(D)$.
This finishes the proof of the claimed equivalence.
\end{proof}
\section{Concluding remarks}\label{sec:conclusion}
For every odd $k \ge 3$ it holds that $M(\Bidirected{C_k}) \simeq M^\ast(\vec{K}_{k,2}) \simeq M^\ast(\vec{K}_{2,k})$, and hence, the list of smallest excluded $\mathsf{GB}$-minors characterising non-evenness for cographic oriented matroids strictly extends the list for graphic ones.
We find this quite surprising and did not expect it when we initiated our research on the subject.
Seymour~\cite{seymregularsums} has proved a theorem about generating the class of regular matroids, showing that every regular matroid can be built up from graphic matroids, bond matroids and a certain $10$-element matroid $R_{10}$ by certain sum operations.
The matroid $R_{10}$ is regular, but neither graphic nor cographic.
It is given by the following totally unimodular representing matrix:
\begin{align*}
R_{10}=M\left[\left(\begin{matrix}
1 & 0 & 0 & 0 & 0 & -1 & 1 & 0 & 0 & 1 \cr
0 & 1 & 0 & 0 & 0 & 1 & -1 & 1 & 0 & 0 \cr
0 & 0 & 1 & 0 & 0 & 0 & 1 & -1 & 1 & 0 \cr
0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & -1 & 1 \cr
0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & -1
\end{matrix} \right) \right]
.
\end{align*}
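
Total unimodularity of this matrix can also be double-checked numerically. The following short Python sketch, which is our own illustrative snippet and unrelated to the brute-force check of orientations mentioned below, enumerates all square submatrices of the representing matrix and tests whether every determinant lies in $\{-1,0,1\}$; the enumeration is feasible here since the matrix has only five rows.
\begin{verbatim}
import itertools
import numpy as np

# Representing matrix of R_10, copied from the display above (5 x 10).
B = np.array([
    [1, 0, 0, 0, 0, -1,  1,  0,  0,  1],
    [0, 1, 0, 0, 0,  1, -1,  1,  0,  0],
    [0, 0, 1, 0, 0,  0,  1, -1,  1,  0],
    [0, 0, 0, 1, 0,  0,  0,  1, -1,  1],
    [0, 0, 0, 0, 1,  1,  0,  0,  1, -1],
])

def is_totally_unimodular(A):
    """Brute force: every square submatrix must have determinant -1, 0 or 1."""
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in itertools.combinations(range(m), k):
            for cols in itertools.combinations(range(n), k):
                d = round(float(np.linalg.det(A[np.ix_(rows, cols)])), 6)
                if d not in (-1.0, 0.0, 1.0):
                    return False
    return True

print(is_totally_unimodular(B))   # expected: True
\end{verbatim}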
Seymour introduced three different kinds of sum operations that join two regular matroids $M_1$ and $M_2$, whose element sets are either disjoint (1-sum), intersect in a single non-loop element (2-sum), or intersect in a common $3$-circuit (3-sum), into a bigger regular matroid $M_1 \Delta M_2$ (for a precise definition of these operations we refer to the introduction of \cite{seymregularsums}).
\begin{theorem}[\cite{seymregularsums}]
Every regular matroid can be built up from graphic matroids, bond matroids and $R_{10}$ by repeatedly applying 1-sums, 2-sums and 3-sums.
\end{theorem}
This theorem shows that graphic matroids, bond matroids and $R_{10}$ constitute the most important building blocks of regular matroids.
Using a brute-force implementation, we verified by computer that every orientation of $R_{10}$ that contains no $M^\ast(\vec{K}_{m,n})$ with $m, n \ge 2$ and $m+n$ odd as a $\mathsf{GB}$-minor is already non-even.
We therefore expect the total list of forbidden minors characterising non-evenness for regular oriented matroids to be no larger than the union of the forbidden minors for graphic (\Cref{thm:forbiddengraphicminors}) and cographic (\Cref{thm:forbiddencographicminors}) non-even oriented matroids. In other words, we conjecture the following.
\begin{conjecture}
A regular oriented matroid $M$ is non-even if and only if none of its $\mathsf{GB}$-minors is isomorphic to $M^\ast(\vec{K}_{m,n})$ for some $m, n \ge 2$ such that $m+n$ is odd.
\end{conjecture}
The natural way of working on this conjecture would be to try to show that a smallest counterexample is not decomposable as the $1$-, $2$- or $3$-sum of two smaller regular oriented matroids.
Apart from the obvious open problem of resolving the computational complexity of the even circuit problem (\Cref{evencircuit1}) for regular oriented matroids in general, it would already be interesting to resolve the case of bond matroids.
\begin{problem}
Is there a polynomially bounded algorithm that, given as input a digraph $D$, decides whether or not $D$ contains a directed bond of even size?
Equivalently, is there a polynomially bounded recognition algorithm for digraphs admitting an odd dijoin?
\end{problem}
Finally, given our characterisation of digraphs admitting an odd dijoin in terms of forbidden cut minors, the following question naturally arises.
\begin{problem}
Let $F$ be a fixed digraph.
Is there a polynomially bounded algorithm that, given as input a digraph $D$, decides whether or not $D$ contains a cut minor isomorphic to $F$?
\end{problem}
\section*{Acknowledgement}
We thank Winfried Hochst\"{a}ttler and Sven Jäger for discussions on the topic.
Karl Heuer was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (ERC consolidator grant DISTRUCT, agreement No.\ 648527).
Raphael Steiner was funded by DFG-GRK 2434.
\input{bib}
\end{document}
\section{Introduction}
\emph{Sensor arrays} are a key technology in, for example, radar, wireless communication, medical imaging, radio astronomy, sonar, and seismology \cite{vantrees2002optimum}. The key advantages of arrays include spatial selectivity and the capability to mitigate interference. However, conventional \emph{uniform array} configurations may become prohibitively expensive when a high spatial resolution, facilitated by a large electrical aperture and consequently a large number of sensors, is required.
\emph{Sparse arrays} allow for significantly reducing the number of sensors and costly RF-IF chains, whilst resolving vastly more scatterers or signal sources than sensors. This is facilitated by a virtual array model called the \emph{co-array} \cite{haubrich1968array,hoctor1990theunifying}, which is commonly defined in terms of the pairwise differences or sums of the physical sensor positions \cite{hoctor1990theunifying}. Uniform arrays have a co-array with redundant virtual sensors, which allows the number of physical sensors to be reduced without affecting the number of unique co-array elements. This enables sparse arrays to identify up to $ \mathcal{O}(N^2) $ signal sources using only $ N $ sensors \cite{koochakzadeh2016cramerrao,wang2017coarrays,liu2017cramerrao}. A \emph{contiguous} co-array is often desired, since it maximizes the number of virtual sensors for a given array aperture. The properties of the resulting uniform virtual array model, such as the Vandermonde structure of the virtual steering matrix, can also be leveraged in array processing \cite{pal2010nested,liu2015remarks}.
Typical sparse array designs, such as the \emph{Minimum-Redundancy Array} (MRA) \cite{moffet1968minimumredundancy,hoctor1996arrayredundancy}, seek to maximize the number of contiguous co-array elements for a given number of physical sensors $ N $. The minimum-redundancy property means that no physical array configuration can achieve a larger contiguous co-array. Although optimal, the MRA lacks a closed-form expression for its sensor positions, and quickly becomes impractical to compute, as the search space of the combinatorial optimization problem that needs to be solved grows exponentially with $ N $. Consequently, large sparse arrays have to be constructed by sub-optimal means, yielding low-redundancy, rather than provably minimum-redundancy, array configurations. For instance, smaller (but computable) MRAs can be extended to larger apertures by repeating regular substructures in the array \cite{hoctor1996arrayredundancy}, or by recursively nesting them in a fractal manner \cite{ishiguro1980minimum}. Recent research into such \emph{recursive} or \emph{fractal arrays} \cite{liu2017maximally,yang2018aunified,cohen2019sparsefractal,cohen2020sparse} has also revived interest in \emph{symmetric} array configurations. Symmetry in either the physical or co-array domain can further simplify array design \cite{haupt1994thinned} and calibration \cite{friedlander1991direction}, or be leveraged in detection \cite{xu1994detection}, source localization \cite{roy1989esprit,swindlehurst1992multiple,gao2005ageneralized, wang2013mixed}, and adaptive beamforming \cite[p.~721]{vantrees2002optimum}. \emph{Parametric} arrays are also of great interest, as their sensor positions can be expressed in closed form. This facilitates the simple design and optimization of the array geometry. For example, the redundancy of the array may be minimized to utilize the co-array as efficiently as possible. Notable parametric arrays include the Wichmann \cite{wichmann1963anote,pearson1990analgorithm,linebarger1993difference}, Co-prime \cite{vaidyanathan2011sparsesamplers}, Nested \cite{pal2010nested}, and Super Nested Array \cite{liu2016supernested,liu2016supernestedii}.
Sparse array configurations have been developed mainly for passive sensing, where the \emph{difference co-array} can be exploited if the source signals are incoherent or weakly correlated. Far fewer works consider the \emph{sum co-array}, which is more relevant in active sensing applications, such as radar or microwave and ultrasound imaging, where scatterers may (but need not) be fully coherent \cite{hoctor1990theunifying,hoctor1992highresolution,ahmad2004designandimplementation}. In particular, the design of low-redundancy sparse arrays with overlapping transmitting and receiving elements has not been extensively studied. Some of our recent works have addressed this sum co-array based array design problem by proposing symmetric extensions to existing parametric array configurations that were originally designed to only have a contiguous difference co-array \cite{rajamaki2017sparselinear,rajamaki2018symmetric,rajamaki2020sparselow}. Symmetry can thus provide a simple means of achieving a contiguous sum co-array. However, a unifying analysis and understanding of such symmetric sparse arrays is yet lacking from the literature. The current work attempts to fill this gap.
\subsection{Contributions and organization}
This paper focuses on the design of sparse linear active arrays with a contiguous sum co-array. The main contributions of the paper are twofold. Firstly, we propose a general symmetric sparse linear array design. We establish necessary and sufficient conditions under which the sum and difference co-array of this array are contiguous. We also determine sufficient conditions that greatly simplify array design by allowing for array configurations with a contiguous difference co-array to be leveraged, thanks to the symmetry of the proposed array design. This connects our work to the abundant literature on mostly asymmetric sparse arrays with a contiguous difference co-array \cite{linebarger1993difference,pal2010nested,liu2016supernestedarrays,zheng2019misc}. Moreover, it provides a unifying framework for symmetric configurations relevant to both active and passive sensing \cite{rajamaki2017sparselinear,rajamaki2018symmetric,rajamaki2020sparselow}.
The second main contribution is a detailed study of two specific instances of this symmetric array --- one based on the Nested Array (NA) \cite{pal2010nested}, and the other on the Kl{\o}ve-Mossige basis from additive combinatorics \cite{mossige1981algorithms}. In particular, we clarify the connection between these symmetric arrays and the recently studied \emph{Concatenated Nested Array} (CNA) \cite{rohrbach1937beitrag,rajamaki2017sparselinear} and \emph{Kl{\o}ve Array} (KA) \cite{klove1981class,rajamaki2020sparselow}. We also derive the minimum redundancy parameters for both the CNA and KA. Additionally, we show that the minimum-redundancy symmetric NA reduces to the CNA. Both the CNA and KA can be generated for practically any number of sensors, as their positions have closed-form expressions.
The paper is organized as follows. \cref{sec:preliminaries} introduces the signal model and the considered array figures of merit. \cref{sec:MRA} briefly reviews the MRA and some of its characteristics. \cref{sec:symmetric} then presents the general definition of the proposed symmetric array, and outlines both necessary and sufficient conditions for its sum co-array to be contiguous. In \cref{sec:generators}, we study two special cases of this array, and derive their minimum-redundancy parameters. Finally, we compare the discussed array configurations numerically in \cref{sec:numerical}, before concluding the paper and discussing future work in \cref{sec:conclusions}.
\subsection{Notation}
We denote matrices by bold uppercase, vectors by bold lowercase, and scalars by unbolded letters. Sets are denoted by calligraphic letters. The set of integers from $ a \in \mathbb{Z}$ to $ c\in\mathbb{Z} $ in steps of $ b\in\mathbb{N}_+ $ is denoted $ \{a:b:c\}= \{a,a+b,a+2b,\ldots,c\}$. Shorthand $ \{a:c\}$ denotes $\{a:1:c\} $. The sum of two sets is defined as the set of pairwise sums of elements, i.e., $ \mathcal{A}+\mathcal{B}\triangleq \{a+b\ |\ a\in\mathcal{A};b\in\mathcal{B} \} $. The sum of a set and a scalar is a special case, where either summand is a set with a single element. Similar definitions hold for the difference set. The cardinality of a set $ \mathcal{A} $ is denoted by $ |\mathcal{A}| $. The rounding operator $ \lceil \cdot \rfloor $ quantizes the scalar real argument to the closest integer. Similarly, the ceiling operator $ \lceil \cdot \rceil $ yields the smallest integer no less than the argument, and the floor operator $ \lfloor\cdot \rfloor $ yields the largest integer no greater than the argument.
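
As a concrete illustration of this set arithmetic, the following short Python sketch (our own illustrative helper functions, not part of any cited toolbox) implements the sum and difference of two integer sets.
\begin{verbatim}
def set_sum(A, B):
    """Pairwise sums {a + b : a in A, b in B}."""
    return {a + b for a in A for b in B}

def set_diff(A, B):
    """Pairwise differences {a - b : a in A, b in B}."""
    return {a - b for a in A for b in B}

A = {0, 1, 4}
print(sorted(set_sum(A, A)))    # [0, 1, 2, 4, 5, 8]
print(sorted(set_diff(A, A)))   # [-4, -3, -1, 0, 1, 3, 4]
\end{verbatim}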
\section{Preliminaries}\label{sec:preliminaries}
In this section, we briefly review the active sensing and sum co-array models. We then define the considered array figure of merit, which are summarized in \cref{tab:fom}.
\subsection{Signal model}\label{sec:signal_model}
Consider a linear array of $ N $ transmitting and receiving sensors, whose normalized positions are given by the set of non-negative integers $ \mathcal{D}\!=\!\{d_n\}_{n=1}^{N}\subseteq\mathbb{N} $. The first sensor of the array is located at $ d_1 = 0 $, and the last sensor at $ d_N=L$, where $ L=\max \mathcal{D}$ is the (normalized) array aperture. This array is used to actively sense $ K $ far field scatterers with reflectivities $\{\gamma_k\}_{k=1}^K\subseteq \mathbb{C}$ in
azimuth directions $ \{\varphi_k\}_{k=1}^K \subseteq[-\pi/2, \pi/2]$. Each transmitter illuminates the scattering scene using narrowband radiation in a sequential or simultaneous (orthogonal MIMO) manner \cite{hoctor1992highresolution,li2007mimoradar}. The reflectivities are assumed fixed during the coherence time of the scene, which may consist of one or several snapshots (pulses). The received signal after a single snapshot and matched filtering is then \cite{boudaher2015sparsitybased}
\begin{align}
\bm{x} = (\bm{A}\odot \bm{A})\bm{\gamma} + \bm{n}, \label{eq:z}
\end{align}
where $ \odot $ denotes the Khatri-Rao (columnwise Kronecker) product, $ \bm{A}\!\in\!\mathbb{C}^{N\times K} $ is the array steering matrix, $ \bm{\gamma}\!=\![\gamma_1,\ldots,\gamma_K]^\txt{T}\!\in\!\mathbb{C}^K$ is the scattering coefficient vector, and $ \bm{n}\!\in\!\mathbb{C}^{N^2} $ is a receiver noise vector following a zero-mean white complex circularly symmetric normal distribution. A typical array processing task is to estimate parameters $\{ \varphi_k, \gamma_k\}_{k=1}^K$, or some functions thereof, from the measurements $ \bm{x} $.
\subsection{Sum co-array}\label{sec:co-array}
The effective steering matrix in \eqref{eq:z} is given by $ \bm{A} \odot \bm{A}$. Assuming ideal omnidirectional sensors, we have
\begin{align*}
[\bm{A}\odot\bm{A}]_{(n-1)N+m,k}=\exp({j2\pi (d_n+d_m)\delta\sin\varphi_k}),
\end{align*}
where $ \delta $ is the unit inter-sensor spacing in carrier wavelengths (typically $ \delta=1/2 $). Consequently, the entries of $\bm{A}\odot\bm{A}$ are supported on a virtual array, known as the \emph{sum co-array}, which consists of the pairwise sums of the physical element locations.
\begin{definition}[Sum co-array]\label{def:sca}
The virtual element positions of the sum co-array of physical array $ \mathcal{D}$ are given by the set
\begin{align}
\mathcal{D}_\Sigma \triangleq \mathcal{D}+\mathcal{D} = \{d_n+d_m\ |\ d_n,d_m\in\mathcal{D} \}.\label{eq:sca}
\end{align}
\end{definition}
The relevance of the sum co-array is that it may have up to $N(N+1)/2 $ unique elements, which is vastly more than the number of physical sensors $ N $. This implies that $\mathcal{O}(N^2)$ coherent scatterers can be identified from \eqref{eq:z}, provided the set of physical sensor positions $ \mathcal{D} $ is judiciously designed.
The sum co-array is \emph{uniform} or \emph{contiguous}, if it equals a virtual \emph{Uniform linear array} (ULA) of aperture $ 2L=2\max\mathcal{D} $.
\begin{definition}[Contiguous sum co-array]
The sum co-array of $ \mathcal{D} $ is contiguous if $ \mathcal{D}+\mathcal{D} = \{0:2\max\mathcal{D} \} $.
\end{definition}
A contiguous co-array is desirable for two main reasons. Firstly, it maximizes the number of unique co-array elements for a given physical aperture. Secondly, it facilitates the use of many array processing algorithms designed for uniform arrays. For example, \emph{co-array MUSIC} \cite{pal2010nested,liu2015remarks} leverages the resulting Vandermonde structure to resolve more sources than sensors unambiguously in the infinite snapshot regime.
A closely related concept to the sum co-array is that of the difference co-array \cite{haubrich1968array,hoctor1990theunifying}. Defined as the set of pairwise element position differences, the difference co-array mainly emerges in passive sensing applications, where the incoherence of the source signals is leveraged. Other assumptions give rise to more exotic co-arrays models, such as the \emph{difference of the sum} co-array\footnote{Up to $\mathcal{O}(N^4)$ \emph{incoherent} scatterers can be resolved by utilizing the second-order statistics of \eqref{eq:z} and the difference of the sum co-array \cite{boudaher2015sparsitybased}.} \cite{chen2008minimumredundancymimo,weng2011nonuniform}, and the \emph{union} of the sum and difference co-array \cite{wang2017doaestimation,si2019improved}, which are not considered herein.
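
To make the sum co-array tangible, the following Python sketch (our own illustrative code) forms $\mathcal{D}+\mathcal{D}$ for a small example array and tests whether it is contiguous; the seven-sensor array used here is the MRA that reappears in \cref{sec:MRA}.
\begin{verbatim}
def sum_coarray(D):
    """Pairwise sums of the physical sensor positions."""
    return sorted({a + b for a in D for b in D})

def is_contiguous(D):
    """True if D + D = {0, 1, ..., 2 max(D)}."""
    return set(sum_coarray(D)) == set(range(2 * max(D) + 1))

D = {0, 1, 2, 5, 8, 9, 10}    # 7-sensor example (an MRA revisited later)
print(sum_coarray(D))
print(is_contiguous(D))       # True: all 21 lags 0..20 are present
\end{verbatim}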
\makeatletter \renewcommand{\@IEEEsectpunct}{\ \,}\makeatother
\subsection{Array figures of merit}\label{sec:fom}
\begin{table}[]
\centering
\caption{Frequently used symbols. The set of sensor positions is denoted by $\mathcal{D}$, the array aperture by $ L=\max\mathcal{D} $, and the number of sensors by $ N=|\mathcal{D}| $.}\label{tab:fom}
\resizebox{1\linewidth}{!}{%
\begin{tabular}{c|c|c}
Symbol&Explanation&Value range\\
\hline
$ |\mathcal{D}_\Sigma| $&Number of total DoFs&$ \{N:2L+1\}$\\
$ H $&Number of contiguous DoFs&$ \{1:|\mathcal{D}_\Sigma|\}$\\
$ R $&Redundancy&$ [1,\infty)$\\
$ S(d) $&$ d $-spacing multiplicity&$ \{0:\min(N-1,L-d+1)\}$\\
$ R_\infty $&Asymptotic redundancy&$ [1,\infty)$\\
$ F_\infty $&Asymptotic co-array filling ratio&$ [0,1]$\\
\end{tabular}
}
\end{table}
\subsubsection{Degrees of freedom (DoF).}\label{sec:dof}
The number of unique elements in the sum co-array $|\mathcal{D}_\Sigma|$ is often referred to as the total number of DoFs. Similarly, the number of virtual elements in the largest contiguous subarray contained in the sum co-array is called the number of contiguous DoFs.
\begin{definition}[Number of contiguous DoFs]\label{def:H}
The number of contiguous DoFs in the sum co-array of $\mathcal{D}$ is
\begin{align}
H\triangleq \argmax _{h,s\in\mathbb{N}}\big\{ h\ |\ s+\{0:h-1\}\subseteq \mathcal{D}+\mathcal{D}\big\}. \label{eq:H}
\end{align}
\end{definition}
If the offset $ s$ is zero, then $ H $ equals the position of the first hole in the sum co-array. Moreover, if the sum co-array is contiguous, then $ H=2L+1 $, where $ L $ is the array aperture.
\subsubsection{Redundancy,} \label{sec:R}
$ R $, quantifies the multiplicity of the co-array elements. A non-redundant array achieves $ R\!=\!1 $, whereas $R \!>\!1 $ holds for a redundant array.
\begin{definition}[Redundancy] \label{def:R}
The redundancy of an $ N $ sensor array with $ H$ contiguous sum co-array elements is
\begin{align}
R \triangleq \frac{N(N+1)/2}{H}. \label{eq:R}
\end{align}
\end{definition}
The numerator of $ R $ is the maximum number of unique pairwise sums generated by $ N $ numbers. The denominator is given by \eqref{eq:H}. \cref{def:R} is essentially \emph{Moffet's} definition of (difference co-array) redundancy \cite{moffet1968minimumredundancy} adapted to the sum co-array. It also coincides with \emph{Hoctor and Kassam's} definition \cite{hoctor1996arrayredundancy}, when the sum co-array is contiguous, i.e., $ H\!=\!2L\!+\!1 $, where $ 2L $ is the aperture of the sum co-array.
\subsubsection{The $d$-spacing multiplicity,} \label{sec:fom_S}
$ S(d) $, counts the number of sensor pairs separated by a displacement $ d $ in the array \cite{rajamaki2018sparseactive}. For linear arrays, $ S(d) $ simplifies to the \emph{weight} or \emph{multiplicity function} \cite{pal2010nested,rajamaki2017comparison} of the difference co-array (when $ d\geq 1 $).
\begin{definition}[$ d $-spacing multiplicity \cite{rajamaki2018sparseactive}] \label{def:S}
The multiplicity of inter-sensor displacement $ d \geq 1$ in a linear array $ \mathcal{D} $ is
\begin{align}
S(d) \triangleq \frac{1}{2}\sum_{d_n\in \mathcal{D}}\sum_{d_m \in \mathcal{D}}\mathbbm{1}(|d_n-d_m| = d).\label{eq:S}
\end{align}
\end{definition}
It is easy to verify that $ 0\leq S(d)\leq \min(N-1,L-d+1)$, where $d\in\mathbb{N}_+ $. If the difference co-array is contiguous, then $ S(d)\geq 1$ for $1 \leq d\leq L $.
Typically, a low value for $ S(d) $ is desired for small $ d $, as sensors that are closer to each other tend to interact more strongly and exhibit coupling \cite{friedlander1991direction}. Consequently, the severity of mutual coupling effects deteriorating the array performance may be reduced by decreasing $ S(d) $ \cite{boudaher2016mutualcoupling,liu2016supernested,liu2017hourglass,zheng2019misc}. This simplifies array design, but has its limitations. Specifically $ S(d) $ does not take into account important factors, such as the individual element gain patterns and the mounting platform, as well as the scan angle and the uniformity of the array \cite[Ch.~8]{balanis2016antenna}. Since treating such effects in a mathematically tractable way is challenging, proxies like the \emph{number of unit spacings} $ S(1) $ are sometimes considered instead for simplicity.
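
These figures of merit are straightforward to evaluate numerically. The following Python sketch (our own illustrative code) computes $H$, $R$ and $S(d)$ for one of the eight-sensor restricted MRAs listed in \cref{tab:MRA_example}.
\begin{verbatim}
def sum_coarray(D):
    return sorted({a + b for a in D for b in D})

def contiguous_dofs(D):
    """H: length of the longest run of consecutive elements in D + D."""
    s = sum_coarray(D)
    best = run = 1
    for prev, cur in zip(s, s[1:]):
        run = run + 1 if cur == prev + 1 else 1
        best = max(best, run)
    return best

def redundancy(D):
    """R = (N(N+1)/2) / H."""
    N = len(D)
    return N * (N + 1) / 2 / contiguous_dofs(D)

def spacing_multiplicity(D, d):
    """S(d): number of sensor pairs at displacement d >= 1."""
    return sum(1 for a in D for b in D if a - b == d)

D = {0, 1, 2, 5, 8, 11, 12, 13}                # an 8-sensor restricted MRA
print(contiguous_dofs(D), redundancy(D))       # 27, 1.333...
print([spacing_multiplicity(D, d) for d in (1, 2, 3)])  # [4, 2, 3]
\end{verbatim}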
\subsubsection{Asymptotic and relative quantities.}
The discussed figures of merit are generally functions of $ N $, $ H $, or $ L $, i.e., they depend on the size of the array. A convenient quantity that holds in the limit of large arrays is the \emph{asymptotic redundancy}
\begin{align}
R_\infty \triangleq \lim_{N\to\infty} R= \lim_{N\to\infty} \frac{N^2}{2H}.\label{eq:R_inf}
\end{align}
Another one is the \emph{asymptotic sum co-array filling ratio}
\begin{align}
F_\infty \triangleq \lim_{N\to\infty}\frac{H}{2L+1},\label{eq:F_inf}
\end{align}
which satisfies $ 0\!\leq\!F_\infty\!\leq\!1 $, as $ H\!\leq\!|\mathcal{D}_\Sigma|\!\leq\!2L+1 $. If the sum co-array is contiguous, then $ F_\infty\!=\!1 $. Note that the limits in \eqref{eq:R_inf} and \eqref{eq:F_inf} may equivalently be taken with respect to $ L $ or $ H $.
In many cases, we wish to evaluate the relative performance of a given array with respect to a reference array configuration of choice -- henceforth denoted by superscript ``ref''. Of particular interest are the three ratios
\begin{align*}
\frac{H}{H^\txt{ref}},\ \frac{N}{N^\txt{ref}}\ \txt{and}\ \frac{L}{L^\txt{ref}},
\end{align*}
or their asymptotic values, when either $ H,N $ or $ L $ is equal for both arrays and approaches infinity. \cref{tab:relative} shows that these asymptotic ratios can be expressed using \eqref{eq:R_inf} and \eqref{eq:F_inf}, provided the respective limits exist. For example, the second row of the first column in \cref{tab:relative} is interpreted as
\begin{align*}
\lim_{N\to\infty}\frac{H(N)}{H^\txt{ref}(N)} =\frac{R_\infty^\txt{ref}}{R_\infty},
\end{align*}
where the right-hand-side follows by simple manipulations from \eqref{eq:R}. The expressions in \cref{tab:relative} simplify greatly if both arrays have a contiguous sum co-array, since $ F_\infty=F_\infty^\txt{ref}=1 $.
\begin{table}[]
\centering
\caption{Asymptotic ratios of the number of contiguous DoFs $ H $, sensors $ N $, and array aperture $ L $. The variable approaching infinity is assumed equal for both arrays.}\label{tab:relative}
\begin{tabular}{c|c|c|c}
&$ H/H^\txt{ref} $&$ N/N^\txt{ref} $&$ L/L^\txt{ref} $\\
\hline
$ H\to\infty $&$ 1 $&$ \sqrt{R_\infty/R_\infty^\txt{ref}} $&$F_\infty^\txt{ref}/F_\infty $\\
$ N\to\infty $&$R_\infty^\txt{ref}/R_\infty$&$1$&$(R_\infty^\txt{ref}/R_\infty) (F_\infty^\txt{ref}/F_\infty )$\\
$ L\to\infty $&$ F_\infty/F_\infty^\txt{ref} $&$ \sqrt{R_\infty/R_\infty^\txt{ref}} \sqrt{F_\infty/F_\infty^\txt{ref}} $&$1$\\
\end{tabular}
\end{table}
\makeatletter \renewcommand{\@IEEEsectpunct}{:\ \,}\makeatother
\section{Minimum-Redundancy Array}\label{sec:MRA}
In this section, we present the sparse array design problem solved by the \emph{Minimum-redundancy array} (MRA). We then review some properties of the MRA, and briefly discuss an extension that is computationally easier to find.
The MRA is defined as the solution to either of two slightly different optimization problems, depending on if the sum co-array is constrained to be contiguous or not \cite{moffet1968minimumredundancy}.
\begin{definition}[Minimum-redundancy array (MRA)] \label{def:MRA}
The general Minimum-redundancy array (MRA) solves
\begin{opteq}
\underset{\mathcal{D}\subseteq\mathbb{N}; h\in\mathbb{N}}{\txt{maximize}}&\ h\nonumber \\
\txt{subject to}&\ |\mathcal{D}|=N\ \txt{and}\ \mathcal{D}+\mathcal{D} \supseteq \{0:h-1\}. \label{p:MRA_general}
\end{opteq}
The restricted MRA (R-MRA) is given by the solution to \eqref{p:MRA_general} with the extra constraint $ h=2\max\mathcal{D}+1$.
\end{definition}
The general MRA minimizes the redundancy $ R $, subject to a given number of sensors $ N $ and offset $s=0$ in \eqref{eq:H}. By \cref{def:R}, this is equivalent to maximizing $ H $, which by \cref{def:H} reduces to \eqref{p:MRA_general}. In contrast, the R-MRA constrains the sum co-array to be contiguous, and therefore maximizes the array aperture. A more generic definition of the general MRA is possible by including offset $s\in\mathbb{N}$ as an optimization variable in \eqref{p:MRA_general}. Note that the R-MRA implies that $s=0$, regardless of the definition of the MRA. Finding general MRAs with $s\neq0$ is an open problem that is left for future work.
MRAs can, but need not, be restricted \cite{kohonen2014addition,kohonen2014meet}. For example, two of the three MRAs with $ N=8 $ sensors in \cref{tab:MRA_example} are restricted. On the other hand, none of the general MRAs with $ N=11 $ sensors are restricted ($ H=47 $ in the general case, respectively, $ H=45 $ in the restricted case) \cite{kohonen2014addition,kohonen2014meet}. The ``sum MRAs'' in \cref{def:MRA} are equivalent to \emph{extremal additive 2-bases}, which have been extensively studied in number theory \cite{rohrbach1937beitrag,riddell1978someextremal,mossige1981algorithms,klove1981class,kohonen2014meet}, similarly to the \emph{difference bases} \cite{leech1956ontherepresentation,wichmann1963anote} corresponding to ``difference MRAs'' \cite{moffet1968minimumredundancy}. Solving \eqref{p:MRA_general} is nevertheless challenging, since the size of the search space grows exponentially with the number of elements $ N $. Consequently, MRAs are currently only known for $ N\leq 25 $ \cite{kohonen2014addition} and R-MRAs for $ N\leq48 $ \cite{kohonen2015early}.
The MRA can also be defined for a fixed aperture. E.g., the restricted MRA would minimize $ N $ subject to a contiguous co-array of length $ 2L+1 $. The difference MRA following this definition is referred to as the \emph{sparse ruler} \cite{shakeri2012directionofarrival}. For a given $ N $, several such rulers of different length may exist. However, we exclusively consider the MRA of \cref{def:MRA} henceforth.
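
To illustrate the exhaustive search that \cref{def:MRA} entails, the following Python sketch (our own illustrative code, practical for small $N$ only) finds a restricted MRA by enumerating the interior sensor positions for decreasing apertures; it fixes the sensors $\{0,1,L-1,L\}$ that any array with a contiguous sum co-array must contain (cf. \cref{thm:N_nec} below).
\begin{verbatim}
from itertools import combinations

def contiguous(D):
    return {a + b for a in D for b in D} == set(range(2 * max(D) + 1))

def restricted_mra(N):
    """Exhaustive search for a largest-aperture N-sensor array with a
    contiguous sum co-array (feasible for small N only)."""
    assert N >= 4
    L_max = (N * (N + 1) // 2 - 1) // 2        # since 2L + 1 <= N(N+1)/2
    for L in range(L_max, N - 2, -1):
        # both ends must contain two sensors: 0, 1, L-1, L
        for mid in combinations(range(2, L - 1), N - 4):
            D = (0, 1) + mid + (L - 1, L)
            if contiguous(D):
                return D
    return tuple(range(N))

print(restricted_mra(8))   # an aperture-13 array, e.g. (0, 1, 2, 5, 8, 11, 12, 13)
\end{verbatim}
The enumerated search space, of size $\binom{L-3}{N-4}$ per candidate aperture, grows combinatorially with $N$, which is precisely why the parametric constructions of \cref{sec:generators} are of practical interest.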
\subsection{Key properties}\label{sec:MRA_properties}
The lack of a closed-form expression for the sensor positions of the MRA makes its properties difficult to analyze precisely. Nevertheless, it is straightforward to see that a linear array with a contiguous sum co-array necessarily contains two sensors at each end of the array, as shown by the following lemma.
\begin{lemma}[Necessary sensors]\label{thm:N_nec}
Let $ N\geq 2 $. If $ \mathcal{D} $ has a contiguous sum co-array, then $ \mathcal{D}\supseteq \{0,1,L-1,L\} $. If $ \mathcal{D} $ has a contiguous difference co-array, then $ \mathcal{D}\supseteq \{0,1,L\} $.
\end{lemma}
\begin{proof}
Clearly, $ \mathcal{D}+\mathcal{D} \supseteq \{0,1,2L-1,2L\} $ if and only if $ \mathcal{D}\supseteq \{0,1,L-1,L\} $. Similarly, $ \mathcal{D}-\mathcal{D} \supseteq \{0,1,L-1,L\} $ if and only if $\mathcal{D}\supseteq \{0,1,L\} $ or $\mathcal{D}\supseteq \{0,L-1,L\}$. We may write $ \mathcal{D}\supseteq \{0,1,L\} $ without loss of generality, since any $ \mathcal{D}\supseteq \{0,L-1,L\} $ can be mirrored to satisfy $ L- \mathcal{D}\supseteq \{0,1,L\}$.
\end{proof}
\cref{thm:N_nec} implies that any array with a contiguous sum co-array and $ N\geq 4 $ sensors has at least two sensor pairs separated by a unit inter-element spacing, i.e., $ S(1)\geq2 $. \cref{thm:N_nec} also suggests that the R-MRA achieves redundancy $ R=1 $ in only two cases. These arrays are called \emph{perfect arrays}.
\begin{corollary}[Perfect arrays]
The only two unit redundancy R-MRAs are $ \{0\} $ and $ \{0,1\} $. All other R-MRAs are redundant.
\end{corollary}
\begin{proof}
By \cref{thm:N_nec}, the element at position $ L $ of the sum co-array can be represented in at least two ways, namely $ L=L+0=(L-1)+1 $. Consequently, if $ N\geq 4 $, then $ R>1 $ must hold. The R-MRAs for $ N\leq 3 $ are $ \{0\}, \{0,1\} $, and $ \{0,1,2\} $. Only the first two of these satisfy $ R=1 $.
\end{proof}
Generally, the redundancy of the MRA is unknown. However, the asymptotic redundancy can be bounded as follows.
\begin{thm}[Asymptotic redundancy of MRA \cite{yu2015anew,kohonen2017animproved,klove1981class,yu2009upper}]\label{thm:R_inf_MRA}
The asymptotic redundancy of the MRA satisfies
\begin{align*}
\txt{General MRA: }& 1.090 < R_\infty < 1.730\\
\txt{Restricted MRA: }& 1.190 < R_\infty < 1.917.
\end{align*}
\end{thm}
\begin{proof}
In the general case, the lower bound $ R_\infty\geq \frac{1}{0.917} > 1.090 $ follows directly from \cite[Theorem 1.1]{yu2015anew}, and the upper bound $R_\infty\leq \frac{147}{85} < 1.730 $ from \cite[Eq.~(1)]{kohonen2017animproved}. Similarly, in the restricted case $ R_\infty\geq \frac{11}{7+\sqrt{5}} > 1.190 $ \cite[Theorem 1.2]{yu2009upper}, and $R_\infty\leq \frac{23}{12} < 1.917 $ \cite[Theorem, p.~177]{klove1981class}.
\end{proof}
\cref{thm:R_inf_MRA} suggests that the R-MRA may be more redundant than the general MRA that does not constrain the sum co-array to be contiguous. The restricted definition of the MRA is nevertheless more widely adopted than the general one for the reasons listed in \cref{sec:co-array}. We note that difference bases or MRAs can also be of the general or restricted type \cite{leech1956ontherepresentation,moffet1968minimumredundancy}. Difference MRAs are typically less redundant than sum MRAs due to the commutativity of the sum, i.e., $ a+b=b+a $, but $ a-b\neq b-a $.
\subsection{Unique solution with fewest closely spaced sensors}\label{sec:MRA_unique}
Problem \eqref{p:MRA_general} may have several solutions for a given $ N $, which means that the MRA is not necessarily unique \cite{kohonen2014addition,kohonen2014meet}. In order to guarantee uniqueness, we introduce a secondary optimization criterion. In particular, we consider the MRA with the \emph{fewest closely spaced sensors}. This MRA is found by minimizing a weighted sum of $ d $-spacing multiplicities (see \cref{def:S}) among the solutions to \eqref{p:MRA_general}, which is equivalent to subtracting a regularizing term from the objective function. This regularizer $\varsigma\geq0 $ can be defined as, for example,
\begin{align}
\varsigma\triangleq \sum_{d=1}^LS(d)10^{-d(\lfloor \log L\rfloor + 1)}, \label{eq:varsigma}
\end{align}
where $ L \in\mathbb{N}_+$ is the largest aperture of the considered solutions. Consequently, any two solutions to the unregularized problem, say $ \mathcal{D}_a $ and $ \mathcal{D}_b $, satisfy $ \varsigma_a > \varsigma_b$ if and only if there exists an $ n\geq 1 $ such that $ S_a(n)>S_b(n) $ and $ S_a(d)=S_b(d)$ for all $ 1\leq d< n$. In words: \eqref{eq:varsigma} promotes large sensor displacements by prioritizing the value of $ S(1) $, then $ S(2) $, then $ S(3) $, etc. For example, \cref{tab:MRA_example} shows two R-MRAs with equal $ S(1) $ and $ S(2) $, but different $ S(3) $. The R-MRA with the smaller $S(3) $, and therefore the lower value of $ \varsigma $, is preferred.
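
The regularizer is simple to evaluate. The following Python sketch (our own illustrative code) computes $\varsigma$ for the three arrays of \cref{tab:MRA_example}, interpreting the logarithm in \eqref{eq:varsigma} as base-$10$, which reproduces the tabulated values.
\begin{verbatim}
import math

def spacing_multiplicity(D, d):
    return sum(1 for a in D for b in D if a - b == d)

def varsigma(D):
    """Regularizer: packs S(1), S(2), ... into successive digit groups,
    so that comparisons prioritize small inter-sensor spacings."""
    L = max(D)
    width = math.floor(math.log10(L)) + 1
    return sum(spacing_multiplicity(D, d) * 10.0 ** (-d * width)
               for d in range(1, L + 1))

for D in ({0, 1, 2, 5, 8, 11, 12, 13},
          {0, 1, 3, 4, 9, 10, 12, 13},
          {0, 1, 3, 5, 7, 8, 17, 18}):
    print(f"{varsigma(D):.6f}")   # 0.040203, 0.040204, 0.030302
\end{verbatim}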
\begin{table}[]
\centering
\caption{MRAs with $ N=8 $ sensors \cite{kohonen2014addition}. The bottom MRA has the fewest unit spacings. Of the two restricted MRAs, the first has fewer closely spaced sensors and hence a lower $\varsigma$.}\label{tab:MRA_example}
\resizebox{1\linewidth}{!}{%
\begin{tabular}{c|c|c|c|c|c}
Configuration&Restricted&$ S(1) $&$ S(2) $&$ S(3) $&$ \varsigma $\\
\hline
$ \{0,1,2,5,8,11,12,13\} $&\ding{51}&$ 4 $&$ 2 $&$ 3 $&$ 0.040203\ldots $\\
$ \{0,1,3,4,9,10,12,13\} $&\ding{51}&$ 4 $&$ 2 $&$ 4 $&$ 0.040204\ldots $\\
$ \{0,1,3,5,7,8,17,18\} $&\ding{55}&$ 3 $&$ 3 $&$ 2 $&$ 0.030302\ldots $\\
\end{tabular}%
}
\end{table}
\subsection{Symmetry and the Reduced Redundancy Array}
Many of the currently known sum MRAs are actually symmetric \cite{kohonen2014addition,kohonen2014meet}. In fact, there exists at least one symmetric R-MRA for each $ N\leq 48 $. Moreover, the R-MRA with the fewest closely spaced sensors (lowest $\varsigma$) turns out to be symmetric for $ N\leq 48 $. Indeed, symmetry seems to arise naturally from the additive problem structure (cf. \cite[Section~4.5.3]{kohonen2015exact}). Note that difference MRAs are generally asymmetric \cite{moffet1968minimumredundancy,ishiguro1980minimum}.
Imposing symmetry on the array design problem has the main advantage of reducing the size of the search space \cite{mossige1981algorithms,haupt1994thinned}. In case of the MRA, this can be achieved by adding constraint $ \mathcal{D}= \max \mathcal{D}-\mathcal{D} $ to \eqref{p:MRA_general}. Unfortunately, the search space of this symmetric MRA still scales exponentially with $ N $. Fortunately, another characteristic of the MRA may readily be exploited. Namely, MRAs tend to have a sparse mid section consisting of uniformly spaced elements \cite{kohonen2014meet}. The \emph{reduced redundancy array} (RRA) extends the aperture of the MRA by adding extra elements to this uniform mid section \cite{ishiguro1980minimum,hoctor1996arrayredundancy}.
\begin{definition}[Reduced redundancy array (RRA) \cite{hoctor1996arrayredundancy}]
The sensor positions of the RRA for a given MRA are given by
\begin{align*}
\mathcal{D}_\txt{RRA} \triangleq \mathcal{P} \cup (\mathcal{M}+\max \mathcal{P})\cup (\mathcal{S}+\max \mathcal{P}+\max \mathcal{M}),
\end{align*}
where $ \mathcal{P} $ is the prefix and $ \mathcal{S} $ the suffix of the MRA, and
\begin{align*}
\mathcal{M} = \{0:M:(N-|\mathcal{P}|-|\mathcal{S}|+1)M \}
\end{align*}
is the mid subarray, with inter-element spacing $ M \in\mathbb{N}_+$.
\end{definition}
The prefix $ \mathcal{P} $, suffix $ \mathcal{S} $, and the inter-element spacing $ M $ of $ \mathcal{M} $ are determined by the \emph{generator} MRA, i.e., the MRA that is extended. For example, an MRA with $ N=7 $ sensors is
\begin{align*}
\underbrace{\{0, 1, 2, 5, 8, 9, 10\}}_{\mathcal{D}_\txt{MRA}} = \underbrace{\{0, 1, 2\} }_{\mathcal{P}}\cup\underbrace{\{2:3:8\}}_{\mathcal{M}+2}\cup \underbrace{\{8, 9, 10\}}_{\mathcal{S}+8}.
\end{align*}
Note that $ \mathcal{S}= \max \mathcal{P}-\mathcal{P} $ holds if the generator MRA is symmetric, as above. The RRA also has a contiguous sum co-array if the generator MRA is restricted.
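
The construction is easy to mechanize. The following Python sketch (our own illustrative code) assembles an RRA from the prefix, suffix and mid-section spacing read off a generator MRA, here extending the $N=7$ MRA above to ten sensors.
\begin{verbatim}
def rra(prefix, suffix, M, num_sensors):
    """Reduced Redundancy Array: generator-MRA prefix, uniform mid
    section with spacing M, and suffix, totalling num_sensors sensors."""
    k = num_sensors - len(prefix) - len(suffix) + 1
    mid = list(range(0, (k + 1) * M, M))            # {0 : M : k*M}
    D = set(prefix)
    D |= {m + max(prefix) for m in mid}
    D |= {s + max(prefix) + max(mid) for s in suffix}
    return sorted(D)

# Extending the N = 7 MRA {0,1,2,5,8,9,10}: prefix {0,1,2}, suffix {0,1,2}, M = 3.
D = rra({0, 1, 2}, {0, 1, 2}, 3, 10)
print(D)  # [0, 1, 2, 5, 8, 11, 14, 17, 18, 19]
print({a + b for a in D for b in D} == set(range(2 * max(D) + 1)))  # still contiguous
\end{verbatim}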
The RRA has a low redundancy when $ |\mathcal{D}_\txt{RRA}|\approx|\mathcal{D}_\txt{MRA}| $. However, the redundancy of the RRA approaches infinity as $ |\mathcal{D}_\txt{RRA}|$ grows, since the aperture of the RRA only increases linearly with the number of sensors. Consequently, we will next consider a general class of symmetric arrays which scale better and admit a solution in polynomial time, provided the design space is constrained judiciously. In particular, we will show that this class of symmetric arrays naturally extends many of the established array configurations -- designed for passive sensing -- to the active sensing setting.
\section{Symmetric array --- general design and conditions for contiguous co-array} \label{sec:symmetric}
In this section, we establish a general framework for symmetric arrays with a contiguous sum (and difference) co-array. The proposed \emph{symmetric array with generator $\mathcal{G}$} (S-$\mathcal{G} $) is constructed by taking the union of a generator array\footnote{This terminology is adopted from the literature on fractal arrays \cite{werner2003anoverview,cohen2019sparsefractal}.} $\mathcal{G}$ and its mirror image shifted by a non-negative integer $ \lambda $. The generator, which is the basic construction block of the symmetric array, can be any array configuration of choice.
\begin{definition}[Symmetric array with generator $ \mathcal{G} $ (S-$\mathcal{G}$)]\label{def:SA}
The sensor positions of the S-$\mathcal{G}$ are given by
\begin{align}
\mathcal{D}_{\txt{S-}\mathcal{G}} \triangleq\mathcal{G}\cup (\max \mathcal{G}-\mathcal{G}+\lambda),\label{eq:SA}
\end{align}
where $\mathcal{G}$ is a generator array and $\lambda\in \mathbb{N}$ is a shift parameter\footnote{Note that $ \lambda \leq 0 $ is equivalent to considering the mirrored generator $ \max{\mathcal{G}}-{\mathcal{G} }$ for $ \lambda \geq 0 $. Therefore, we may set $ \lambda \geq 0$ without loss of generality.}.
\end{definition}
\cref{fig:S-G_array} shows a schematic of the S-$\mathcal{G} $. Note that the number of sensors satisfies $ |\mathcal{G}|\leq N\leq 2|\mathcal{G}|$,
depending on the overlap between $ \mathcal{G} $ and $L- \mathcal{G} $. The aperture $ L $ of the S-$\mathcal{G}$ is
\begin{align}
L= \max \mathcal{G} + \lambda. \label{eq:SA_L}
\end{align}
The exact properties of the S-$\mathcal{G}$ are determined by the particular choice of $ \mathcal{G} $ and $\lambda$. Specifically, the generator array $ \mathcal{G} $ influences how easily the symmetrized array S-$\mathcal{G}$ can be optimized to yield a large contiguous sum co-array. For example, a generator with a simple nested structure makes a convenient choice in this regard, as we will demonstrate in \cref{sec:generators}. Next, however, we establish necessary and sufficient conditions that any $\mathcal{G}$ and $\lambda$ must fulfill for S-$\mathcal{G} $ to have a contiguous sum co-array. These general conditions are later utilized when considering specific choices for $ \mathcal{G} $ in \cref{sec:generators}.
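
The construction of \cref{def:SA} is equally simple to implement. The Python sketch below (our own illustrative code) forms the S-$\mathcal{G}$ for an example nested-type generator and a few shift values, and reports the resulting number of sensors and aperture; note how $N$ varies between $|\mathcal{G}|$ and $2|\mathcal{G}|$ depending on the overlap.
\begin{verbatim}
def symmetric_array(G, lam):
    """S-G: the generator G united with its mirror image shifted by lam."""
    return sorted(set(G) | {max(G) - g + lam for g in G})

G = [0, 1, 2, 3, 7, 11]            # example nested-type generator
for lam in (0, 4, 9):
    D = symmetric_array(G, lam)
    print(lam, len(D), max(D), D)  # lam, N, aperture L = max(G) + lam, positions
\end{verbatim}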
\subsection{Necessary and sufficient conditions for contiguous co-array}
\cref{fig:S-G_coarray} illustrates the difference co-array of the S-$\mathcal{G}$, which is composed of the difference co-array and shifted sum co-arrays of the generator $\mathcal{G}$. By symmetry, the sum and difference co-array of the S-$ \mathcal{G} $ are equivalent up to a shift. This fact, along with \eqref{eq:SA}, allows us to express the necessary and sufficient condition for the contiguity of both co-arrays in terms of the generator $\mathcal{G}$ and shift parameter $ \lambda $. Moreover, we may conveniently decompose the condition into two simpler subconditions, as shown by the following theorem.
\begin{thm}[Conditions for contiguous co-array] \label{thm:SA_coarray_c}
The sum (and difference) co-array of the S-$\mathcal{G}$ is contiguous if and only~if
\begin{enumerate}[label=(C\arabic*),leftmargin=1.0cm]
\item $ (\mathcal{G}-\mathcal{G})\cup (\mathcal{G}+\mathcal{G}-L)\cup (L-(\mathcal{G}+\mathcal{G}))\supseteq \{0:\max\mathcal{G}\}$\label{c:gmg}
\item $ \mathcal{G}+\mathcal{G}\supseteq \{0:\lambda-1\}, $ \label{c:gpg}
\end{enumerate}
where $ L $ is the array aperture given by \eqref{eq:SA_L}.
\end{thm}
\begin{proof}
By symmetry of the physical array, the sum co-array is contiguous if and only if the difference co-array is contiguous (e.g., see \cite[Lemma~1]{rajamaki2018sparseactive}), that is,
\begin{align*}
\mathcal{D}_{\txt{S-}\mathcal{G}}+\mathcal{D}_{\txt{S-}\mathcal{G}} =\{0:2L\} \iff \mathcal{D}_{\txt{S-}\mathcal{G}}-\mathcal{D}_{\txt{S-}\mathcal{G}} =\{-L:L\}.
\end{align*}
By symmetry of the difference co-array, this is equivalent to requiring that
$ \mathcal{D}_{\txt{S-}\mathcal{G}}\!-\!\mathcal{D}_{\txt{S-}\mathcal{G}}\!\supseteq\!\{0:L\} $, or using \eqref{eq:SA} that
\begin{align*}
\mathcal{D}_{\txt{S-}\mathcal{G}}-\mathcal{D}_{\txt{S-}\mathcal{G}} &=\mathcal{G}\cup (L-\mathcal{G})-\mathcal{G}\cup (L-\mathcal{G})\\
&= (\mathcal{G}-\mathcal{G})\cup(\mathcal{G}+\mathcal{G}-L)\cup(L-(\mathcal{G}+\mathcal{G}))\\
&\supseteq \{0:L\}.
\end{align*}
Conditions \ref{c:gpg} and \ref{c:gmg} then directly follow from \eqref{eq:SA_L}.
\end{proof}
Note that \ref{c:gpg} may also be reformulated as $ \lambda\leq H$, where $ H $ denotes the position of the first hole in the sum co-array of $\mathcal{G}$, per \eqref{eq:H}. Since $ H\leq 2\max\mathcal{G}+1 $, the sum co-array of $\mathcal{D}_{\txt{S-}\mathcal{G}}$ is contiguous only if $ \lambda $ satisfies
\begin{align*}
\lambda \leq 2\max \mathcal{G}+1.
\end{align*}
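
Conditions \ref{c:gmg} and \ref{c:gpg} are also easy to check numerically. The following Python sketch (our own illustrative code) verifies, for an example generator with a contiguous difference co-array, that the sum co-array of the S-$\mathcal{G}$ is contiguous exactly when both conditions hold.
\begin{verbatim}
def condition_C1(G, lam):
    """(C1): (G-G), (G+G-L) and (L-(G+G)) together cover 0..max(G)."""
    L = max(G) + lam
    diff = {a - b for a in G for b in G}
    sums = {a + b for a in G for b in G}
    covered = diff | {s - L for s in sums} | {L - s for s in sums}
    return set(range(max(G) + 1)) <= covered

def condition_C2(G, lam):
    """(C2): the sum co-array of G covers 0..lam-1."""
    return set(range(lam)) <= {a + b for a in G for b in G}

def contiguous_sum_coarray(D):
    return {a + b for a in D for b in D} == set(range(2 * max(D) + 1))

G = [0, 1, 2, 3, 7, 11]   # generator with a contiguous difference co-array
for lam in range(25):
    D = sorted(set(G) | {max(G) - g + lam for g in G})
    both = condition_C1(G, lam) and condition_C2(G, lam)
    assert contiguous_sum_coarray(D) == both
print("conditions (C1)-(C2) match co-array contiguity for lam = 0..24")
\end{verbatim}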
\subsection{Sufficient conditions for contiguous co-array}
It is also instructive to consider some simple sufficient conditions for satisfying \ref{c:gmg} and \ref{c:gpg}, as outlined in the following two corollaries to \cref{thm:SA_coarray_c}.
\begin{corollary}[Sufficient conditions for \ref{c:gmg}]\label{thm:c_suff_gmg}
Condition~\ref{c:gmg} is satisfied if either of the following holds:
\begin{enumerate}[label=(\roman*)]
\item$ \mathcal{G} $ has a contiguous difference co-array.\label{gmg_1}
\item $ \mathcal{G} $ has a contiguous sum co-array and $\lambda\leq \max\mathcal{G}+1$.\label{gmg_2}
\end{enumerate}
\end{corollary}
\begin{proof}
Firstly, if the difference co-array of $ \mathcal{G}$ is contiguous, then $ \mathcal{G} - \mathcal{G}\supseteq \{0:\max\mathcal{G}\} $ holds by definition, implying \ref{c:gmg}. Secondly, if the sum co-array is contiguous, then $ \mathcal{G} + \mathcal{G}\supseteq \{0:2\max\mathcal{G}\} $ holds, which implies that
\begin{align*}
\mathcal{G}+\mathcal{G}-L &= \{-L:\max\mathcal{G}-\lambda\}\\
L-(\mathcal{G}+\mathcal{G}) &= \{-\max\mathcal{G}+\lambda:L\}.
\end{align*}
If $ \lambda\leq \max\mathcal{G} +1$, then the union of these two sets and $\mathcal{G}-\mathcal{G}$ covers $ \{-L:L\}\supseteq\{0:\max\mathcal{G}\} $, implying \ref{c:gmg}.
\end{proof}
\begin{corollary}[Sufficient conditions for \ref{c:gpg}]\label{thm:c_suff_gpg}
Condition \ref{c:gpg} is satisfied if any of the following hold:
\begin{enumerate}[label=(\roman*)]
\item $ \lambda \leq 1 $ \label{gpg_1}
\item $ \lambda \leq 3 $ and $ |\mathcal{G}|\geq 2 $ \label{gpg_2}
\item $ \lambda \leq 2\max\mathcal{G}+1 $ and $ \mathcal{G} $ has a contiguous sum co-array.\label{gpg_3}
\end{enumerate}
\end{corollary}
\begin{proof}
Firstly, $ \lambda \leq 1 $ implies \ref{c:gpg}, since $ \mathcal{G}\supseteq\{0\} $. Secondly, if $ |\mathcal{G}| \geq 2$, then $\mathcal{G}\supseteq\{0,1\}$ holds by \cref{thm:N_nec}. Consequently, $ |\mathcal{G}| \geq 2 $ and $ \lambda \leq 3 \implies$ \ref{c:gpg}. Thirdly, if $ \mathcal{G} $ has a contiguous sum co-array and $ \lambda\leq 2\max\mathcal{G}+1$, then $ \mathcal{G}\!+\!\mathcal{G}\!=\!\{0:2\max\mathcal{G}\}\supseteq\{0:\lambda\!-\!1\}$ holds by definition.
\end{proof}
We note that if $ |\mathcal{G}|\geq 2 $, then $ \lambda\leq \max\mathcal{G}+2 $ is actually sufficient in \cref{gmg_2} of \cref{thm:c_suff_gmg} (cf. \cref{thm:N_nec}). This is also sufficient for satisfying \ref{c:gpg} by \cref{gpg_3} of \cref{thm:c_suff_gpg}. \cref{gmg_1} of \cref{thm:c_suff_gmg} is of special interest, since it states that any $\mathcal{G}$ with a contiguous difference co-array satisfies \ref{c:gmg}. This greatly simplifies array design, as shown in the next section, where we develop two arrays leveraging this property.
\begin{figure*}[]
\centering
\subfigure[Physical array and constituent sub-arrays]{\includegraphics[width=0.3\textwidth]{S-G_array}\label{fig:S-G_array}}
\hfil
\subfigure[Difference co-array (same as sum co-array but shifted by $ -L $) and constituent sub-co-arrays]{\includegraphics[width=0.6\textwidth]{S-G_coarray}\label{fig:S-G_coarray}}
\caption{The symmetric array S-$ \mathcal{G} $ in (a) consists of the union of a generator array $ \mathcal{G} $ and its mirror image shifted by $\lambda\geq0$. The difference co-array of the S-$ \mathcal{G} $ in (b) can be decomposed into the union of the difference co-array of $ \mathcal{G} $, and two shifted copies of the sum co-array of $\mathcal{G}$ (of which one is also mirrored). Due to symmetry, the difference and sum co-array of the S-$ \mathcal{G} $ are equivalent up to a shift of $ L $, and contiguous only if $ \lambda\leq2\max\mathcal{G}+1 $.}
\end{figure*}
\section{Low-redundancy symmetric array designs using parametric generators}\label{sec:generators}
Similarly to the R-MRA in \eqref{p:MRA_general}, we wish to find the S-$\mathcal{G} $ with maximal aperture
satisfying the conditions in \cref{thm:SA_coarray_c}. Given a number of elements $ N $, and a class of generator arrays $ \mathscr{G} $, such that $\mathcal{G} \in\mathscr{G} $, this minimum-redundancy S-$\mathcal{G} $ design is found by solving the following optimization problem:
\begin{opteq}
\underset{\mathcal{G}\in\mathscr{G}, \lambda \in \mathbb{N}}{\text{maximize}}&\ \max \mathcal{G}+\lambda\nonumber \\
\text{subject to}&\ |\mathcal{G}\cup (\max \mathcal{G}-\mathcal{G}+\lambda)|=N\nonumber\\
&\ \txt{\ref{c:gmg} and \ref{c:gpg}}. \label{p:SA}
\end{opteq}
In general, \eqref{p:SA} is a non-convex problem, whose difficulty depends on the choice of $\mathscr{G}$. Solving \eqref{p:SA} may therefore require a grid search over $ \lambda $ and the elements of $ \mathscr{G} $, which can have exponential complexity at worst. At best, however, a solution can be found in polynomial time, or even in closed form.
We will now focus on a family of choices for $\mathscr{G}$, such that each $ \mathcal{G}\in\mathscr{G} $ has the following two convenient properties:
\begin{enumerate}[label=(\alph*)]
\item $\mathcal{G}$ has a contiguous difference co-array\label{i:G_diff_cont}
\item $\mathcal{G}$ has a closed-form expression for its aperture.\label{i:G_param}
\end{enumerate}
\cref{i:G_diff_cont} greatly simplifies \eqref{p:SA} by directly satisfying condition \ref{c:gmg}, whereas \cref{i:G_param} enables the straightforward optimization of the array parameters. Condition \ref{c:gpg} is typically easy to satisfy for an array with these two properties. This is the case with many sparse array configurations in the literature, such as the Nested \cite{pal2010nested} and Wichmann Array \cite{wichmann1963anote,pearson1990analgorithm,linebarger1993difference}. Constructing a symmetric array using a generator with a contiguous difference co-array is thus a practical way to synthesize a contiguous sum co-array.
\subsection{Symmetric Nested Array (S-NA)}
The \emph{Nested Array} (NA) \cite{pal2010nested} is a natural choice for a generator satisfying the previously outlined properties~\ref{i:G_diff_cont} and \ref{i:G_param}. The NA has a simple structure consisting of the union of a dense ULA, $\mathcal{D}_1$, and sparse ULA sub-array, $\mathcal{D}_2$, as shown in \cref{fig:G_NA}. When used as the generator $\mathcal{G}$ in \eqref{eq:SA}, the NA yields the \emph{Symmetric Nested Array} (S-NA), defined as follows:
\begin{definition}[Symmetric Nested Array (S-NA)] \label{def:S-NA}
The sensor positions of the S-NA are given by \eqref{eq:SA}, where
\begin{align*}
\mathcal{G} = \mathcal{D}_1 \cup (\mathcal{D}_2+N_1),
\end{align*}
with $\mathcal{D}_1 = \{0:N_1-1\};$ $\mathcal{D}_2=\{0:N_1+1:(N_2-1)(N_1+1)\}$; and array parameters $ N_1,N_2 \in\mathbb{N}$.
\end{definition}
Special cases of the S-NA have also been previously proposed. For example, \cite[Definition~2]{liu2018optimizing} considered the case $ \lambda=0 $ for improving the robustness of the NA to sensor failures. Next we briefly discuss a special case that is more relevant from the minimum-redundancy point of view.
\subsubsection{Concatenated Nested Array (CNA)}
The S-NA coincides with a restricted additive 2-basis studied by Rohrbach in the 1930's \cite[Satz~2]{rohrbach1937beitrag}, when the shift parameter $ \lambda $ satisfies
\begin{align}
\lambda= (N_1+1)k+N_1, \label{eq:lambda_CNA}
\end{align}
where $ k\in \{0:N_2\} $. Unaware of Rohrbach's early result, we later derived the same configuration based on the NA and called it the \emph{Concatenated Nested Array} (CNA) \cite{rajamaki2017sparselinear}.
\begin{definition}[Concatenated Nested Array (CNA) \cite{rajamaki2017sparselinear}]\label{def:CNA}
The sensor positions of the CNA are given by
\begin{align*}
\mathcal{D}_\txt{CNA} \triangleq \mathcal{D}_1 \cup (\mathcal{D}_2 +N_1) \cup (\mathcal{D}_1+N_2(N_1+1)),
\end{align*}
where $ N_1,N_2 \in\mathbb{N}$, and $ \mathcal{D}_1,\mathcal{D}_2$ follow \cref{def:S-NA}.
\end{definition}
The CNA is illustrated in \cref{fig:CNA}. When $N_1 = 0$ or $N_2 \in\{0,1\}$, the array degenerates to the ULA. In the interesting case when $ N_1+N_2\geq 1 $, the aperture of the CNA is\cite{rajamaki2017sparselinear}
\begin{align}
L &= (N_1+1)(N_2+1)-2. \label{eq:L_CNA}
\end{align}
Furthermore, the number of sensors is \cite{rajamaki2017sparselinear}
\begin{align}
N =
\begin{cases}
N_1,&\txt{if } N_2=0\\
2N_1+N_2,& \txt{otherwise,}
\end{cases} \label{eq:N_CNA}
\end{align}
and the number of unit spacings evaluates to
\begin{align}
S(1) &= \begin{cases}
N_1-1,&\txt{if } N_2=0\\
N_2-1,&\txt{if } N_1=0\\
2N_1,&\txt{otherwise}.
\end{cases}\label{eq:S1_CNA}
\end{align}
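
The following Python sketch (our own illustrative code) constructs a CNA from $(N_1,N_2)$ and numerically confirms \eqref{eq:L_CNA}, \eqref{eq:N_CNA} and the contiguity of the sum co-array for one example parameter choice.
\begin{verbatim}
def cna(N1, N2):
    """Concatenated Nested Array: dense ULA, sparse ULA, dense ULA."""
    D1 = set(range(N1))
    D2 = set(range(0, (N2 - 1) * (N1 + 1) + 1, N1 + 1)) if N2 else set()
    return sorted(D1 | {d + N1 for d in D2} | {d + N2 * (N1 + 1) for d in D1})

N1, N2 = 3, 6
D = cna(N1, N2)
print(D)                                          # 12 sensor positions
print(max(D) == (N1 + 1) * (N2 + 1) - 2)          # aperture expression
print(len(D) == 2 * N1 + N2)                      # sensor-count expression
print({a + b for a in D for b in D} == set(range(2 * max(D) + 1)))  # contiguous
\end{verbatim}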
\begin{figure}[]
\centering
\subfigure[Nested Array (NA) \cite{pal2010nested}]{\includegraphics[width=.6\linewidth]{G_NA}\label{fig:G_NA}}
\subfigure[Symmetric Nested Array (S-NA)]{\includegraphics[width=0.85\linewidth]{S-NA}\label{fig:S-NA}}
\subfigure[Concatenated Nested Array (CNA) \cite{rohrbach1937beitrag,rajamaki2017sparselinear}]{\includegraphics[width=1\linewidth]{CNA}\label{fig:CNA}}
\caption{The (a) NA generator and shift parameter $\lambda$ define the (b) S-NA. The S-NA reduces to the (c) CNA, when $\lambda$ follows \eqref{eq:lambda_CNA}. The minimum-redundancy S-NA solving \eqref{p:SA} is a CNA.}
\end{figure}
\subsubsection{Minimum-redundancy solution} \label{sec:S-NA_MRA}
The S-NA solving \eqref{p:SA} is actually a CNA. This follows from the fact that any S-NA that has a contiguous sum co-array, but is not a CNA, has redundant sensors, due to the reduced number of overlapping sensors between $\mathcal{G}$ and its shifted mirror image.
\begin{proposition}\label{thm:S-NA-CNA}
The S-NA solving \eqref{p:SA} is a CNA.
\end{proposition}
\begin{proof}
We write \ref{c:gmg} and \ref{c:gpg} as a constraint on $\lambda$, which we then show implies the proposition under reparametrization.
Firstly, \ref{c:gmg} holds by \cref{gmg_1} of \cref{thm:c_suff_gmg}, since the NA has a contiguous difference co-array \cite{pal2010nested}. Secondly, the location of the first hole in the sum co-array of the NA yields that \ref{c:gpg} is satisfied if and only if
\begin{align}
\lambda \leq
\begin{cases}
2(N_1+N_2)-1,& \txt{ if } N_1=0 \txt{ or } N_2=0\\
N_2(N_1+1)+N_1,& \txt{ otherwise}.
\end{cases} \label{eq:lambda_max_S_NA}
\end{align}
If $0\leq \lambda \leq N_1-1$, then a CNA with the same number of sensors ($ N=2|\mathcal{G}|-2 $), but a larger aperture ($ L=\max\mathcal{G}+\lambda $) is achieved by instead setting $ \lambda= \max\mathcal{G}-N_1 $. Otherwise, it is straightforward to verify that the S-NA either reduces to the CNA, or a CNA can be constructed with the same $ N $ but a larger $ L $, by satisfying \eqref{eq:lambda_max_S_NA} with equality.
\end{proof}
By \cref{thm:S-NA-CNA}, problem \eqref{p:SA} simplifies to maximizing the aperture of the CNA for a given number of sensors:
\begin{opteq}
\underset{N_1\in\mathbb{N},N_2\in\mathbb{N}_+}{\text{maximize}}&\ N_1N_2+N_1+N_2\ \ \text{s.t.}\ \ 2N_1+N_2=N. \label{p:CNA}
\end{opteq}
Problem \eqref{p:CNA} actually admits a closed-form solution \cite{rajamaki2017sparselinear}. In fact, we may state the following more general result for any two-variable integer program similar to \eqref{p:CNA}.
\begin{lemma} \label{thm:general_opt}
Let $ f$ be a concave function. The solution to
\begin{opteq}
\underset{x,y\in\mathbb{N}}{\text{maximize}}\ f(x,y)\ \text{subject to}\ g(x)=y \label{p:gen}
\end{opteq}
is given by $ x = \lceil z \rfloor + k$ and $ y=g(x) $, where $ z $ solves
\begin{align*}
\underset{z\in\mathbb{R}_+}{\text{maximize}}&\ f(z,g(z)),
\end{align*}
and $ |k|$ is the smallest non-negative integer satisfying $ g( \lceil z \rfloor + k) \in \mathbb{N}$, where $ k\in \{-\lceil z \rfloor,-\lceil z \rfloor+1,\ldots,-1\}\cup \mathbb{N}$.
\end{lemma}
\begin{proof}
Since $ f(z,g(z)) $ is concave, $ x=\lceil z\rfloor $ maximizes $ f(x,g(x)) $ among all $ x\in\mathbb{N}$. This is a global optimum of \eqref{p:gen}, if $ g(\lceil z\rfloor)\in\mathbb{N} $. Generally, the optimal solution can be expressed as $ x= \lceil z\rfloor + k$, where $ k \in \{-\lceil z \rfloor:-1\}\cup \mathbb{N}$. By concavity of $ f $, the smallest $ |k| $ satisfying $ g(\lceil z\rfloor + k)\in\mathbb{N} $ then yields the global optimum of \eqref{p:gen}.
\end{proof}
\cref{thm:general_opt} is useful for solving two-variable integer programs similar to \eqref{p:CNA} in closed-form. Such optimization problems often arise in, e.g., sparse array design \cite{rajamaki2018symmetric,rajamaki2020sparselow}. In our case, \cref{thm:general_opt} allows expressing the minimum-redundancy parameters of the CNA directly in terms of $ N $ as follows\footnote{An equivalent result is given in \cite[Eq.~(7) and (8)]{rohrbach1937beitrag} and \cite[Eq.~(13)]{rajamaki2017sparselinear}, but in a more inconvenient format due to the use of the rounding operator.}.
\begin{thm}[Minimum-redundancy parameters of CNA] \label{thm:param_CNA}
The parameters of the CNA solving \eqref{p:CNA} are
\begin{align}
N_1 &= (N-\alpha)/4 \label{eq:N1_CNA_opt}\\
N_2 &= (N+\alpha)/2, \label{eq:N2_CNA_opt}
\end{align}
where $ N=4m+k, m\in \mathbb{N}, k \in \{0:3\}$, and
\begin{align}
\alpha = (k+1)\bmod 4 -1.\label{eq:alpha_CNA}
\end{align}
\end{thm}
\begin{proof}
By \cref{thm:general_opt}, the optimal solution to \eqref{p:CNA} is given by $ N_1=\lceil (N-1)/4 \rfloor $ and $ N_2 = N-2N_1 $ \cite{rajamaki2017sparselinear}. Since any $ N\in\mathbb{N} $ may be expressed as $ N=4m+k$, where $m\in\mathbb{N} $ and $ k \in \{0:3\}$, we have
\begin{align*}
N_1 = \Big\lceil m+\frac{k-1}{4}\Big\rfloor
= \begin{cases}
m,& k \in \{0,1,2\}\\
m+1,& k = 3.
\end{cases}
\end{align*}
Since $ m = (N-k)/4 $, we have $ N_1 = m = (N-k)/4 $, when $ k \in \{0,1,2\} $. Similarly, when $ k=3 $, we have $ N_1 = m+1 = (N-k+4)/4 =(N+1)/4$, which yields \eqref{eq:N1_CNA_opt} and \eqref{eq:alpha_CNA}. Substituting \eqref{eq:N1_CNA_opt} into \eqref{eq:N_CNA} then yields \eqref{eq:N2_CNA_opt}.
\end{proof}
By \cref{thm:param_CNA}, the properties of the minimum-redundancy CNA can also be written compactly as follows.
\begin{corollary}[Properties of minimum-redundancy CNA]
The aperture $ L $, number of sensors $ N $, and number of unit spacings $ S(1) $ of the CNA solving \eqref{p:CNA} are
\begin{align*}
L &= (N^2+6N-7)/8-\beta\\
N &= 2\sqrt{2}\sqrt{L+2+\beta}-3\\
S(1) &= (N-\alpha)/2,
\end{align*}
where $ \beta= (\alpha -1)^2/8 $ and $ \alpha $ is given by \eqref{eq:alpha_CNA}.
\end{corollary}
\begin{proof}
This follows from \cref{thm:param_CNA}, and \eqref{eq:L_CNA}--\eqref{eq:S1_CNA}.
\end{proof}
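
The closed-form parameters are trivial to evaluate in practice. The Python sketch below (our own illustrative code) computes $(N_1,N_2)$ from \cref{thm:param_CNA} for a few values of $N$ and checks them against the constraint $2N_1+N_2=N$ and the aperture expression of the corollary.
\begin{verbatim}
def min_redundancy_cna_params(N):
    """Closed-form minimum-redundancy CNA parameters for N sensors."""
    alpha = ((N % 4) + 1) % 4 - 1            # alpha = ((N mod 4) + 1) mod 4 - 1
    return (N - alpha) // 4, (N + alpha) // 2

for N in (8, 12, 16, 25):
    N1, N2 = min_redundancy_cna_params(N)
    alpha = ((N % 4) + 1) % 4 - 1
    beta = (alpha - 1) ** 2 / 8
    L = (N1 + 1) * (N2 + 1) - 2
    assert 2 * N1 + N2 == N
    assert L == (N * N + 6 * N - 7) / 8 - beta
    print(N, (N1, N2), L)
\end{verbatim}
For $N=8$ the resulting parameters are $(N_1,N_2)=(2,4)$, and the corresponding CNA of \cref{def:CNA} is $\{0,1,2,5,8,11,12,13\}$, i.e., the first restricted MRA in \cref{tab:MRA_example}.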
\subsection{Symmetric Kl{\o}ve-Mossige array (S-KMA)} \label{sec:KA}
In the 1980's, \emph{Kl{\o}ve and Mossige} proposed an additive 2-basis with some interesting properties \cite{mossige1981algorithms}. In particular, the basis has a contiguous difference co-array (see Appendix~\ref{a:KMA_diff_coarray}) and a low asymptotic redundancy ($ R_\infty\!=\!1.75 $), despite having a non-contiguous sum co-array (see Appendix~\ref{a:S-KMA_lambda}). We call this construction the \emph{Kl{\o}ve-Mossige Array} (KMA). As shown in \cref{fig:G_KMA}, the KMA contains a CNA, and can therefore be interpreted as another extension of the NA. However, selecting the KMA as the generator in \eqref{eq:SA} yields the novel \emph{Symmetric Kl{\o}ve-Mossige Array} (S-KMA), shown in \cref{fig:S-KMA}.
\begin{definition}[Symmetric Kl{\o}ve-Mossige Array (S-KMA)]\label{def:S-KMA}
The sensor positions of the S-KMA are given by \eqref{eq:SA}, where
\begin{align*}
\mathcal{G} &=\mathcal{D}_\txt{CNA} \cup (\mathcal{D}_3+2\max \mathcal{D}_\txt{CNA}+1),\\
\mathcal{D}_3&= \{0:N_1:N_1^2\}+\bigcup_{i=1}^{N_3}\{(i-1)(N_1^2+\max \mathcal{D}_\txt{CNA}+1)\},
\end{align*}
$\mathcal{D}_\txt{CNA}$ follows \cref{def:CNA}, and parameters\footnote{The S-KMA is undefined for $ N_1=N_2=0 $, as $\mathcal{D}_\txt{CNA} = \emptyset$. Consequently, we will not consider this case further. However, we note that \cref{def:S-KMA} is easily modified so that the S-KMA degenerates to a ULA even in this case.} $ N_1, N_2,N_3\!\in\!\mathbb{N} $.
\end{definition}
\subsubsection{Kl{\o}ve array}
The structure of the S-KMA simplifies substantially when the shift parameter $ \lambda $ is of the form
\begin{align}
\lambda=
2\max\mathcal{D}_\txt{CNA}+1+(\max\mathcal{D}_\txt{CNA}+N_1^2)k, \label{eq:lambda_KA}
\end{align}
where $ k\!\in\!\{0\!:\!N_3\}$. In fact, this S-KMA coincides with the \emph{Kl{\o}ve array} (KA), which is based on a class of restricted additive 2-bases proposed by \emph{Kl{\o}ve} in the context of additive combinatorics \cite{klove1981class} (see also \cite{rajamaki2020sparselow}).
\begin{definition}[Kl{\o}ve array (KA) \cite{klove1981class}]
The sensor positions of the KA with parameters $N_1,N_2,N_3\!\in\!\mathbb{N}$ are given by
\begin{align*}
\mathcal{D}_\txt{KA} \triangleq&\ \mathcal{D}_\txt{CNA} \cup (\mathcal{D}_3+2\max \mathcal{D}_\txt{CNA}+1)\nonumber\\
&\cup (\mathcal{D}_\txt{CNA}+(N_3+2)\max \mathcal{D}_\txt{CNA}+N_3(N_1^2+1)+1),
\end{align*}
where $\mathcal{D}_\txt{CNA}$ follows \cref{def:CNA}, and $\mathcal{D}_3$ \cref{def:S-KMA}.
\end{definition}
\cref{fig:KA} illustrates the KA, which consists of two CNAs connected by a sparse mid-section consisting of $ N_3 $ widely separated and sub-sampled ULAs with $ N_1+1 $ elements each. The KA reduces to the CNA when $ N_1 =0 $ or $ N_2 =0 $. When $ N_2\geq 1 $, the aperture $ L $, and the number of sensors $ N $ of the KA evaluate to \cite[Lemma 3]{klove1981class}:
\begin{align}
L&=(N_1+1)(N_3(N_1+N_2)+3N_2+3)-5 \label{eq:L_KA}\\
N&=2(2N_1+N_2)+N_3(N_1+1).\label{eq:N_KA}
\end{align}
Furthermore, the number of unit spacings $ S(1) $ is \cite{rajamaki2020sparselow}
\begin{align}
S(1) &=
\begin{cases}
N_3+1,& \txt{if } N_1 = 0 \txt{ and } N_2 = 1\\
2(N_2-1),& \txt{if } N_1 = 0 \txt{ and } N_2\geq 2\\
N_3+4,& \txt{if } N_1 = 1\\
4N_1,& \txt{otherwise.}
\end{cases}\label{eq:S1_KA}
\end{align}
\begin{figure*}[]
\centering
\subfigure[Kl{\o}ve-Mossige array (KMA) \cite{mossige1981algorithms}]{\includegraphics[width=0.85\textwidth]{G_KMA}\label{fig:G_KMA}}
\hfil
\subfigure[Symmetric Kl{\o}ve-Mossige array (S-KMA)]{\includegraphics[width=1\textwidth]{S-KMA}\label{fig:S-KMA}}
\hfil
\subfigure[Kl{\o}ve array (KA) \cite{klove1981class,rajamaki2020sparselow}]{\includegraphics[width=.9\textwidth]{KA}\label{fig:KA}}
\caption{The Kl{\o}ve-Mossige generator in (a) has a contiguous difference co-array, but a non-contiguous sum co-array. The S-KMA in (b) reduces to the KA in (c), when $\lambda$ follows \eqref{eq:lambda_KA}. The KA has a contiguous sum co-array and may be interpreted as a generalization of the CNA.}
\end{figure*}
\subsubsection{Minimum-redundancy solution}
The simple structure of the KMA suggests that the minimum redundancy S-KMA is also a KA. Intuitively, any S-KMA that is not a KA is unnecessarily redundant, since increasing $ \lambda $ to its maximum value $ H $ yields a KA (cf. Appendix~\ref{a:S-KMA_lambda}). However, proving that the S-KMA solving \eqref{p:SA} is a KA is not as simple as in the case of the S-NA and CNA (see \cref{thm:S-NA-CNA} in \cref{sec:S-NA_MRA}). In particular, the S-KMA cannot easily be modified into a KA with an equal or larger aperture without affecting the number of sensors $ N $. A formal proof is therefore still an open problem.
Nevertheless, we may derive the minimum redundancy parameters of the KA. The \emph{minimum-redundancy Kl{\o}ve array} (KA$ _R $) maximizes the aperture for a given $ N $, which is equivalent to solving the following optimization problem:
\begin{opteq}
\underset{N_1,N_3\in\mathbb{N}, N_2\in\mathbb{N}_+}{\text{maximize}}&\ 3N_1(N_2+1)+3N_2+N_3(N_1+N_2)(N_1+1)\nonumber \\
\text{subject to}&\ 2(2N_1+N_2)+N_3(N_1+1)=N. \label{p:KA_R}
\end{opteq}
Problem \eqref{p:KA_R} is an integer program with three unknowns. This problem is challenging to solve in closed form for \emph{any} $ N $, in contrast to \eqref{p:CNA}, which only has two integer unknowns (cf. \cref{thm:param_CNA}). Nevertheless, \emph{some} $ N $ readily yield a closed-form solution to \eqref{p:KA_R}, as shown by the following theorem.
\begin{thm}[Minimum-redundancy parameters of KA] \label{thm:param_KA_R}
The parameters of the KA solving \eqref{p:KA_R} are
\begin{align}
N_1 &= (N+3)/23 \label{eq:N1_relaxed}\\
N_2 &= 5(N+3)/23\label{eq:N2_relaxed}\\
N_3 &= (9N-42)/(N+26),\label{eq:N3_relaxed}
\end{align}
when $ N\!\in\!\{20,43,66,112,250\} $.
\end{thm}
\begin{proof}\let\qed\relax
Under certain conditions, we may obtain a solution to \eqref{p:KA_R} by considering a relaxed problem. Specifically, solving \eqref{eq:N_KA} for $ N_3 $, substituting the result into \eqref{eq:L_KA}, and
relaxing $ N_1,N_2\in\mathbb{R} $ leads to the following concave quadratic program:
\begin{align*}
\underset{N_1,N_2\in\mathbb{R}}{\text{maximize}}&\ (N_1+N_2)(N\!+\!3)-3N_1N_2-4N_1^2-2N_2^2.
\end{align*}
At the critical point of the objective function we have
\begin{align*}
\partial L/\partial N_1&=N-8N_1-3N_2+3 = 0\\
\partial L/\partial N_2&=N-3N_1-4N_2+3 = 0.
\end{align*}
Solving these equations for $ N_1$ and $N_2 $ yields \eqref{eq:N1_relaxed} and \eqref{eq:N2_relaxed}, which when substituted into \eqref{eq:N_KA} yields \eqref{eq:N3_relaxed}. These are also solutions to \eqref{p:KA_R} if:
\begin{enumerate}[label=(\roman*)]
\item $ N_1 \in\mathbb{N}$, which holds when $ N= 23k-3, k\in\mathbb{N}_+ $, i.e., $ N =20,43,66,89,112,135,158,181,204,227,250,\ldots $
\item $ N_2 \in \mathbb{N} $, which holds when $ N_1\in\mathbb{N}$, as $ N_2 = 5 N_1 \in \mathbb{N} $
\item $ N_3 \in\mathbb{N} $, which holds when $ N=(26l+42)/(9-l)\in \mathbb{N} $ and $ l\in\{0:8\} $, i.e., $ N=20,43,66,112,250 $.
\end{enumerate}
The only integer-valued $ N $ satisfying all three conditions are $ N\!=\!20,43,66,112,250 $, as stated.
\end{proof}
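Conditions (i)--(iii) can also be verified computationally. The short Python sketch below (an illustrative check) confirms that $N\in\{20,43,66,112,250\}$ are precisely the values for which \eqref{eq:N1_relaxed}--\eqref{eq:N3_relaxed} are non-negative integers satisfying the constraint of \eqref{p:KA_R}:
\begin{verbatim}
# Integrality check of the relaxed solution (conditions (i)-(iii)).
from fractions import Fraction

valid = []
for N in range(1, 5001):
    N1 = Fraction(N + 3, 23)              # eq. (N1_relaxed)
    N2 = 5 * N1                           # eq. (N2_relaxed)
    N3 = Fraction(9 * N - 42, N + 26)     # eq. (N3_relaxed)
    if (N1.denominator == 1 and N3.denominator == 1
            and N3 >= 0 and N2 >= 1):
        valid.append(N)
        # constraint of (p:KA_R): 2(2 N1 + N2) + N3 (N1 + 1) = N
        assert 2 * (2 * N1 + N2) + N3 * (N1 + 1) == N
print(valid)   # -> [20, 43, 66, 112, 250]
\end{verbatim}
Note that $N_3 = 9-276/(N+26)$, so integrality of $N_3$ requires $N+26$ to divide $276$; the search range above therefore covers all candidates.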
The KA in \cref{thm:param_KA_R} also has the following properties:
\begin{corollary}[Properties of minimum-redundancy KA]
The aperture $ L $, number of sensors $ N $, and number of unit spacings $ S(1) $ of the KA solving \eqref{p:KA_R} are
\begin{align*}
L &= (3N^2+18N-19)/23 \label{eq:L_opt_KA}\\
N &= \sqrt{23/3}\sqrt{L+2}-3\\
S(1) &=
\begin{cases}
(N+22)/6& N_1 = 1\\
4(N+3)/23 & N_1\geq 2,
\end{cases}
\end{align*}
when $ N\!\in\!\{20,43,66,112,250\}. $
\end{corollary}
\begin{proof}
This follows from \cref{thm:param_KA_R} and \eqref{eq:L_KA}--\eqref{eq:S1_KA}.
\end{proof}
The minimum-redundancy KA achieves the asymptotic redundancy $ R_\infty\!=\!23/12 $, as shown in the following proposition, which is a reformulation of \cite[Theorem, p.~177]{klove1981class}.
\begin{proposition}[Asymptotic redundancy of KA \cite{klove1981class}]\label{thm:R_inf_KA_R}
The asymptotic redundancy of the solution to \eqref{p:KA_R} is
\begin{align*}
R_\infty = 23/12 \approx 1.92.
\end{align*}
\end{proposition}
\begin{proof}
Let $ N=23k+9$, where $k\in\mathbb{N} $. A feasible KA that is equivalent to the minimum-redundancy KA when $ N\to\infty $ is then given by the choice of parameters $ N_1 = (N-9)/23 $, $ N_2 = 5N_1 $, and $ N_3=9$. Substitution of these parameters into \eqref{eq:L_KA} yields $ L= 3N^2/23 +\mathcal{O}(N) $, i.e., $ R_\infty = 23/12 $.
\end{proof}
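The leading-order behavior used in this proof can be confirmed symbolically; the following short sketch expands \eqref{eq:L_KA} for the stated choice of parameters:
\begin{verbatim}
# Symbolic expansion of the aperture for N1=(N-9)/23, N2=5*N1, N3=9.
import sympy as sp

N = sp.symbols('N', positive=True)
N1 = (N - 9) / 23
N2 = 5 * N1
N3 = 9
L = (N1 + 1) * (N3 * (N1 + N2) + 3 * N2 + 3) - 5   # eq. (L_KA)
print(sp.expand(L))   # -> 3*N**2/23 + 18*N/23 - 451/23 = 3N^2/23 + O(N)
\end{verbatim}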
\subsubsection{Polynomial time grid search}
Although solving \eqref{p:KA_R} in closed form for any $ N $ is challenging, we can nevertheless obtain the solution in at most $\mathcal{O}(N\log N)$ objective function evaluations. This follows from the fact that the feasible set of \eqref{p:KA_R} only has $\mathcal{O}(N\log N)$ or fewer elements.
\begin{proposition}[Cardinality of feasible set in \eqref{p:KA_R}]\label{thm:KA_complexity}
The cardinality of the feasible set in \eqref{p:KA_R} is at most $ \mathcal{O}(N\log N)$.
\end{proposition}
\begin{proof}\let\qed\relax
We may verify from \eqref{eq:N_KA} that $ 0 \leq N_1\leq (N-2)/4 $ and $ 0 \leq N_3\leq (N-4N_1)/(N_1+1) $. Consequently, the number of grid points that need to be checked is
\begin{align*}
V = \sum_{N_1=0}^{\lfloor\frac{N-2}{4}\rfloor}\Bigg(\Bigg\lfloor\frac{N-4N_1}{N_1+1}\Bigg\rfloor+1\Bigg) \leq \int_{0}^{\frac{N+2}{4}} \frac{N-3x+5/2}{x+1/2}dx.
\end{align*}
The upper bound follows from ignoring the floor operations, and substituting $ N_1\!=\!x\!-\!1/2$, where $ x\!\in\!\mathbb{R} $, to account for the rectangular integration implied by the sum. Finally,
\[
V\!\leq\!(N\!+\!4)\log(N/2\!+\!2)\!-\!3(N\!+\!2)/4\!=\!\mathcal{O}(N\log N)
\]
follows by carrying out the integration.
\end{proof}
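This bound can be checked numerically (with the logarithm interpreted as the natural logarithm produced by the integral):
\begin{verbatim}
# Sanity check of the O(N log N) bound on the number of grid points V.
import math

for N in (10, 100, 1000, 10000):
    V = sum((N - 4 * N1) // (N1 + 1) + 1
            for N1 in range((N - 2) // 4 + 1))
    bound = (N + 4) * math.log(N / 2 + 2) - 3 * (N + 2) / 4
    assert V <= bound
    print(N, V, round(bound, 1))
\end{verbatim}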
\cref{alg:grid} summarizes a simple grid search that finds the solution to \eqref{p:KA_R} for any $ N $ in $ \mathcal{O}(N\log N)$ steps, as implied by \cref{thm:KA_complexity}. We iterate over $ N_1 $ and $ N_3 $, because this choice yields the least upper bound on the number of grid points that need to be checked\footnote{Selecting $ N_1 $ and $ N_2$, or $ N_2$ and $ N_3 $, yields $ \mathcal{O}(N^2) $ points.}. Since the solution to \eqref{p:KA_R} is not necessarily unique, we select the KA$_R $ with the fewest closely spaced sensors, similarly to the MRA in \cref{sec:MRA_unique}. Note that computing the regularizer $ \varsigma $ requires $ \mathcal{O}(N^2) $ floating point operations (flops), whereas evaluating the aperture $ L $ only requires $\mathcal{O}(1)$ flops. Consequently, the time complexity of finding the KA$_R $ with the fewest closely spaced sensors is\footnote{Actually, $ \varsigma $ only needs to be computed when $ L\!=\!L^\star $ in \cref{alg:grid}.} $\mathcal{O}(N^2)$, whereas finding any KA$ _R $, that is, solving \eqref{p:KA_R} in general, has a worst case complexity of $ \mathcal{O}(N\log N) $.
\begin{algorithm}[]
\caption{Minimum-redundancy parameters of Kl{\o}ve Array} \label{alg:grid}
\begin{algorithmic}[1]
\Procedure{KA$_R $}{$N$}
\State $ f\gets -\infty $\Comment{initialize objective function value}
\For{$ N_1 \in \{0:\lfloor (N-2)/4 \rfloor\} $}
\For{$ N_3\in \{0:\lfloor (N-4N_1)/(N_1+1) \rfloor\} $}
\State $ N_2 \gets (N-(N_1+1)N_3)/2-2N_1 $
\If{${N_2 \bmod 1\!=\!0}$}\Comment{$ N_2 $ valid if integer}
\State Compute $ L $ and $ \varsigma $ using \eqref{eq:L_KA} and \eqref{eq:varsigma} \label{line:varsigma}
\If{$L-\varsigma > f$} \Comment{better solution found}
\State $ f\gets L-\varsigma$\Comment{update objective fcn.}
\For{$ i\in\{1:3\} $} $ N_i^\star\gets N_i $\EndFor
\EndIf
\EndIf
\EndFor
\EndFor
\State \Return $N_1^\star,N_2^\star,N_3^\star$ \Comment{optimal array parameters}
\EndProcedure
\end{algorithmic}
\end{algorithm}
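For reference, a runnable Python counterpart of \cref{alg:grid} is sketched below. It maximizes the aperture \eqref{eq:L_KA} only, i.e., it omits the tie-breaking regularizer $\varsigma$ of \eqref{eq:varsigma} and resolves ties by the first maximizer found; it also enforces $N_2\geq 1$, as required by \eqref{p:KA_R}:
\begin{verbatim}
# Grid search for the minimum-redundancy Klove array parameters.
def ka_r_parameters(N):
    best_L, best = -1, None
    for N1 in range((N - 2) // 4 + 1):
        for N3 in range((N - 4 * N1) // (N1 + 1) + 1):
            num = N - (N1 + 1) * N3 - 4 * N1
            if num < 2 or num % 2:       # N2 must be a positive integer
                continue
            N2 = num // 2
            # aperture, eq. (L_KA), valid for N2 >= 1
            L = (N1 + 1) * (N3 * (N1 + N2) + 3 * N2 + 3) - 5
            if L > best_L:
                best_L, best = L, (N1, N2, N3)
    return best, best_L

# The search reproduces the closed-form apertures of the corollary above.
for N in (20, 43, 66, 112, 250):
    (N1, N2, N3), L = ka_r_parameters(N)
    assert L == (3 * N**2 + 18 * N - 19) // 23
\end{verbatim}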
In a recent related work, we also developed a KA with a constraint on the number of unit spacings \cite{rajamaki2020sparselow}. This \emph{constant unit spacing Kl{\o}ve Array} (KA$_S $) achieves $ S(1)=8 $ for any $ N $ at the expense of a slight increase in redundancy (asymptotic redundancy $ R_\infty=2 $). The minimum-redundancy parameters of the KA$_S $ are found in closed form by application of \cref{thm:general_opt}, similarly to the CNA (cf. \cite[Theorem~1]{rajamaki2020sparselow}).
\subsection{Other generator choices}
Naturally, other choices for the generator $\mathcal{G}$ may yield alternative low-redundancy symmetric array configurations. For example, the Wichmann generator with $ \lambda = 0 $ yields the \emph{Interleaved Wichmann Array} (IWA) \cite{rajamaki2018symmetric}. This array satisfies \ref{c:gmg} by \cref{gmg_1} of \cref{thm:c_suff_gmg} and \ref{c:gpg} by \cref{gpg_1} of \cref{thm:c_suff_gpg}. The IWA has the same asymptotic redundancy as the CNA, but fewer unit spacings. Although finding the minimum-redundancy parameters of the more general \emph{Symmetric Wichmann Array} (S-WA) in closed-form is cumbersome, numerical optimization of $\lambda$ and the other array parameters can slightly improve the non-asymptotic redundancy of the S-WA compared to the IWA. Nevertheless, neither the IWA nor S-WA are considered further in this work, since the KA achieves both lower $ R $ and $ S(1) $.
Numerical experiments suggest that some prominent array configurations, such as the \emph{Super Nested Array} \cite{liu2016supernested} or \emph{Co-prime Array} \cite{vaidyanathan2011sparsesamplers}, are not as well suited as generators $\mathcal{G}$ from the redundancy point of view (mirroring $\mathcal{G}$ does not help either). However, several other generators, both with and without contiguous difference co-arrays, remain to be explored in future work. For example, the difference MRA \cite{moffet1968minimumredundancy} and minimum-hole array \cite{bloom1977applications,vertatschitsch1986nonredundant} are interesting candidates.
\section{Performance evaluation of array designs} \label{sec:numerical}
In this section, we compare the sparse array configurations presented in \cref{sec:MRA,sec:generators} in terms of the array figures of merit in \cref{sec:fom}. We also demonstrate the proposed sparse array configurations' capability of resolving more scatterers than sensors in an active sensing scenario. Further numerical results can be found in the companion paper \cite{rajamaki2020sparselow}.
\subsection{Comparison of array figures of merit}\label{sec:comparison}
\cref{tab:summary,tab:asymptotic} summarize the key properties and figures of merit of the discussed sparse array configurations. The parameters of each configuration are chosen such that they maximize the number of contiguous DoFs $ H$ in \eqref{eq:H}. The optimal parameters of the NA and KMA are found similarly to those of the CNA and KA (cf. \cref{thm:param_CNA,thm:param_KA_R})\footnote{Derivations are straightforward and details are omitted for brevity.}. \cref{fig:arrays} illustrates the array configurations for $ N=24 $ sensors. The RRA is omitted since the MRA is known in this case. Note that the sum and difference co-arrays of the symmetric arrays are contiguous and merely shifted copies of each other.
\begin{table*}[]
\resizebox{1\linewidth}{!}{
\begin{threeparttable}[t]
\centering
\caption{Key properties of considered sparse array configurations. The arrays below the dashed line have a contiguous sum co-array. The symmetric arrays have equivalent sum and difference co-arrays.}\label{tab:summary}
\begin{tabular}{c|c|c|c|c|c|c}
Array configuration&Symmetric&Contiguous DoFs\tnote{a}, $ H $&Total DoFs, $ |\mathcal{D}_\Sigma| $&Aperture, $ L $& No. of sensors, $ N $&No. of unit spacings, $ S(1)$\\
\hline
General Minimum-Redundancy Array (MRA)&
no&
n/a&
n/a&
$ \geq (H-1)/2$ and $\leq H $&
n/a&
$ \geq 1 $\\
Nested Array (NA) \cite{pal2010nested}&
no&
$ (N^2+2N-4)/4$&
$ H+N/2-1 $&
$ (N^2+4N)/4$&
$ \sqrt{4L+5}-1 $&
$ N/2 $\\
Kl{\o}ve-Mossige Array (KMA) \cite{mossige1981algorithms}&
no&
$ (2N^2+8N+1)/7$&
$ H+6N/7+\mathcal{O}(1)$&
$ (11N^2+16N-61)/49$&
$ (7\sqrt{11L+15}-8)/11 $&
$2(N+2)/7$\\
\hdashline
Restricted Minimum-Redundancy Array (R-MRA)&
no\tnote{b}&
n/a&
$ H $&
$ (H-1)/2 $&
n/a&
$ \geq 2 $\\
Reduced-Redundancy Array (RRA) \cite{hoctor1996arrayredundancy}&
yes&
$ 30N-706 $&
$ H $&
$ 15N-353 $&
$ (L+353)/15 $&
$ 10 $\\
Concatenated Nested Array (CNA) \cite{rajamaki2017sparselinear}&
yes&
$ (N^2+6N-3)/4$&
$ H $&
$ (N^2+6N-7)/8$&
$ 2\sqrt{2}\sqrt{L+2}-3$&
$ N/2 - 1/2 $\\
Constant unit spacing Kl{\o}ve Array (KA$_S $) \cite{rajamaki2020sparselow}&
yes&
$(N^2+10N-83)/4 $&
$ H$&
$ (N^2+10N-87)/8$&
$2\sqrt{2}\sqrt{L+14}-5 $&
$ 8$\\
Minimum-Redundancy Kl{\o}ve Array (KA$ _R $)&
yes&
$(6N^2+36N-15)/23$&
$ H$&
$ (3N^2+18N-19)/23$&
$\sqrt{23/3}\sqrt{L+2}-3 $&
$ 4N/23+12/23 $\\
\hline
\end{tabular}
\begin{tablenotes}
\item[a] The tabulated results are representative of the scaling with $ N $ or $ L $ for the parameters maximizing $ H $. The expressions only hold exactly for specific values of $ N $ and $ L $, which may vary between configurations.
\item[b] The R-MRA has both asymmetric and symmetric solutions. A symmetric solution exists at least for all $ N\leq 48 $.
\end{tablenotes}
\end{threeparttable}
}
\end{table*}
\begin{table*}[]
\centering
\caption{Asymptotic array figures of merit. The KA$_R $ has at most $ 27\% $ more sensors than the R-MRA, which is less than other known arrays with closed-form sensor positions and a contiguous sum co-array.}\label{tab:asymptotic}
\begin{tabular}{c|c|c|c|c|c|c|c|c}
\multirow{2}{*}{Array}&
\multirow{2}{*}{$ R_\infty $}&
\multirow{2}{*}{$ F_\infty $}&
\multicolumn{2}{|c|}{$\lim H/H^\txt{R-MRA} $}&
\multicolumn{2}{|c|}{$ \lim N/N^\txt{R-MRA} $}&
\multicolumn{2}{|c}{$\lim L/L^\txt{R-MRA} $}\\
&&&$ N\to\infty $&$ L\to\infty $&$ H\to\infty $&$ L\to\infty $&$ H\to\infty $&$ N\to\infty $\\
\hline
MRA&$ 1.09 - 1.73 $&$ 0.5-1 $&$ 1-1.76 $&$ 0.5-1 $&$ 0.75-1$&$ 0.53-1 $&$ 1-2 $&$ 1-3.52 $\\
NA&$ 2 $&$ 0.5 $&$ 0.60-0.96 $&$ 0.5 $&$1.02-1.30 $&$ 0.72- 0.92$&$ 2 $&$ 1.19-1.92 $\\
KMA&$ 1.75 $&$ 0.64 $&$0.68-1.10 $&$0.64 $&$ 0.96-1.21$&$ 0.76-0.97 $&$ 1.57$&$ 1.07-1.72$\\
\hdashline
R-MRA&$ 1.19-1.92 $&$ 1 $&$ 1 $&$ 1 $&\multicolumn{2}{c|}{$ 1$}&$ 1 $&$ 1 $\\
RRA&$ \infty $&$ 1 $&$ 0 $&$ 1 $&\multicolumn{2}{c|}{$ \infty $}&$ 1 $&$ 0 $\\
CNA&$ 2 $&$ 1 $&$ 0.60-0.96 $&$ 1 $&\multicolumn{2}{c|}{$ 1.02-1.30 $}&$ 1 $&$ 0.60-0.96 $\\
KA$ _S$&$ 2$&$ 1 $&$ 0.60-0.96 $&$ 1 $&\multicolumn{2}{c|}{$ 1.02-1.30 $}&$ 1 $&$ 0.60 - 0.96 $\\
KA$ _R$&$1.92 $&$ 1 $&$ 0.62-1 $&$ 1 $&\multicolumn{2}{c|}{$1- 1.27$}&$ 1 $&$ 0.62 -1 $\\
\hline
\end{tabular}
\end{table*}
\begin{figure*}[]
\centering
\subfigure{\includegraphics[width=1\textwidth]{arrays_N_24}\label{fig:arrays_phys}}
\hfil
\subfigure{\includegraphics[width=1\textwidth]{coarray_s_N_24}\label{fig:arrays_sca}}
\hfil
\subfigure{\includegraphics[width=.95\textwidth]{coarray_d_N_24}\label{fig:arrays_dca}}
\caption{Sparse array configurations with $ N=24 $ sensors, and the corresponding sum and difference co-arrays (in units of the smallest inter-sensor spacing). Both co-arrays are contiguous for the R-MRA, CNA, KA$ _S $ and KA$ _R $. The sum and difference co-arrays of any symmetric physical array are merely translated copies of each other.}
\label{fig:arrays}
\end{figure*}
\subsubsection{Non-contiguous sum co-array}
We first briefly consider the configurations with a non-contiguous sum co-array before studying the arrays with a contiguous sum co-array in more detail. Note from \cref{tab:summary} that even when the sum co-array is non-contiguous, the number of total DoFs $ |\mathcal{D}_\Sigma| $ is only slightly larger than the number of contiguous DoFs $ H $. Specifically, the difference between the two is proportional to the number of physical sensors $ N$, which is much smaller than $ H\propto N^2 $, especially when $ N$ or $H$ grow large. \cref{tab:asymptotic} shows that for the same number of contiguous DoFs $ H\to\infty $, the general MRA has at most $ 25\% $ fewer sensors than the R-MRA by the most conservative estimate. For a fixed aperture $ L\to\infty $, the corresponding number is at most $47\% $, mainly due to the uncertainty related to $ L $, and thus the asymptotic co-array filling ratio $ F_\infty $, of the general MRA. In particular, any configuration seeking to maximize $ H $, such as the MRA, must satisfy $ H\geq L$. Moreover, $ H\leq 2L+1 $ holds by definition.
Among the considered configurations with closed-form sensor positions, the KMA achieves the largest number of contiguous DoFs $ H$, and therefore the lowest redundancy\footnote{The construction in \cite{kohonen2017animproved} achieves an approximately $ 1\% $ lower $ R_\infty $.} for a fixed number of sensors $ N $. However, its sum co-array contains holes, since $ F_\infty=7/11< 1$. For equal aperture $ L \to\infty$, the KA$ _R $ has a $57\% $ larger $ H $ than the KMA, but only $31\% $ more physical sensors $ N $. Similarly, the CNA has a two times larger $ H $ than the NA and $ 41\% $ larger $ N $. Conversely, for equal $ N \to\infty$, the CNA has a $ 50\% $ smaller physical aperture $ L $ than the NA, while still achieving the same $ H $. The KA$ _R $ has a $ 42\% $ smaller $ L $ than the KMA, but only a $ 9\% $ smaller $ H $.
\subsubsection{Contiguous sum co-array}
Next, we turn our attention to the configurations with a contiguous sum co-array. \cref{fig:N_vs_L} illustrates the array aperture $ L $ as a function of the number of sensors $ N $. The aperture scales quadratically with $ N $ for all configurations except the RRA with $L\propto N $. For reference, recall that the ULA satisfies $ L = N-1 $. The KA$_R $ achieves a slightly larger aperture than the rest of the considered parametric arrays. For the same number of sensors (approaching infinity), the KA$ _R $ has a $ 4\% $ larger aperture than the KA$ _S $ and CNA. Conversely, for a fixed aperture, the KA$ _R $ has $ 2\% $ fewer sensors. The differences between the configurations are clear in \cref{fig:N_vs_R}, which shows the redundancy $ R $ as a function of $ N $. By definition, the R-MRA is the least redundant array with a contiguous sum co-array. However, it is also computationally prohibitively expensive to find for large $ N $, and therefore only known for $ N \leq 48 $ \cite{kohonen2014meet,kohonen2015early}. For $ N \geq 49 $, one currently has to resort to alternative configurations that are cheaper to generate, such as the KA$ _R $. The KA$_R$ achieves the lowest redundancy when $ N\geq 72$. When $ 49 \leq N\leq 71$, the RRA is the least redundant configuration. However, the redundancy of the RRA goes to infinity with increasing $ N $. The KA$ _R $ has between $0$ and $ 27\% $ more sensors than the R-MRA, which is less than any other currently known parametric sparse array configuration with a contiguous sum co-array.
\begin{figure}[]
\centering
\includegraphics[width=1\linewidth]{N_vs_L_1000}
\caption{The aperture of the sparse arrays grows quadratically with the number of sensors $ N $. For a given $ N $, the KA$ _R $ has the largest aperture of all currently known parametric arrays with a contiguous sum co-array.}
\label{fig:N_vs_L}
\end{figure}
\begin{figure}[]
\includegraphics[width=1\linewidth]{N_vs_R_1000}
\caption{The KA$ _R $ achieves the lowest redundancy $ R $ for $ N\geq 72 $ sensors. When $ 49\leq N\leq 71 $, the RRA is less redundant. The R-MRA is the least redundant configuration with a contiguous sum co-array for any $ N $, but it is computationally
expensive to find and unknown for $ N\geq 49 $.}\label{fig:N_vs_R}
\end{figure}
\cref{fig:N_vs_S1} shows the number of unit spacings $ S(1) $ as a function of $ N $. In general, $ S(1) $ increases linearly with $ N $, and the KA$ _R $ has the smallest rate of growth. The two exceptions are the KA$ _S $ and RRA, which have a constant $ S(1) $. However, unlike the RRA, the redundancy of the KA$ _S $ is bounded (cf.~\cref{fig:N_vs_R}). As discussed in \cref{sec:fom_S}, the number of unit spacings $ S(1) $ may be used as a simplistic indicator of the robustness of the array to mutual coupling. Assessing the effects of coupling ultimately requires actual measurements of the array response, or simulations using an electromagnetic model for the antenna and mounting platform \cite{allen1966mutual,rubio2015mutual,craeye2011areview,friedlander2020theextended}. A detailed study of mutual coupling is therefore beyond the scope of this paper.
\begin{figure}[]
\centerline{\includegraphics[width=1\linewidth]{N_vs_S1_1000}}
\caption{The number of unit spacings $ S(1) $ of the KA$ _R $ grows linearly with the number of sensors $ N $. Both the RRA and KA$ _S $ have a constant $ S(1) $. However, the RRA does not have a bounded asymptotic redundancy.}\label{fig:N_vs_S1}
\end{figure}
Obviously, many other important figures of merit are omitted here for brevity of presentation. For example, \emph{fragility} and the achievable \emph{beampattern} are natural criteria for array design or performance evaluation. Fragility quantifies the sensitivity of the co-array to physical sensor failures \cite{liu2019robustness}. The array configurations studied in this paper only have \emph{essential} sensors, and therefore high fragility, since the difference (and sum) co-array ceases to be contiguous if a sensor is removed. This is the cost of low redundancy. The beampattern is of interest in applications employing linear processing\footnote{For exceptions where the beampattern is also relevant when employing non-linear processing, see, e.g., \cite{pal2010nested,cohen2018sparseconvolutional}.}. For example, in adaptive beamforming, the one-way (transmit \emph{or} receive) beampattern is critical, whereas in active imaging, the two-way (combined transmit \emph{and} receive) beampattern is more relevant. Although the one-way beampattern of a sparse array generally exhibits high sidelobes, a wide range of two-way beampatterns may be achieved using one \cite{cohen2020sparse} or several \cite{hoctor1990theunifying,kozick1991linearimaging} transmissions and receptions. The arrays discussed in this paper can achieve the same effective beampattern as the ULA of equivalent aperture by employing multiple transmissions and receptions.
\subsection{Active sensing performance}
The estimation of the angles and scattering coefficients from \eqref{eq:z} can be formulated as an on-grid \emph{sparse support recovery} problem, similarly to the passive sensing case described in \cite{pal2015pushing}. In particular, let $ \{\tilde{\varphi}_i\}_{i=1}^V $ denote a set of $V\gg K $ discretized angles. The task then becomes to solve
\begin{opteq}
\underset{\bm{\tilde{\gamma}}\in\mathbb{C}^{V}}{\txt{minimize}}&\ \|\bm{x}-(\bm{\tilde{A}}\odot \bm{\tilde{A}})\bm{\tilde{\gamma}}\|_2^2\
\txt{subject to}\ \|\bm{\tilde{\gamma}}\|_0=K, \label{p:cs}
\end{opteq}
where $ \bm{\tilde{A}} \in\mathbb{C}^{N\times V}$ is the known steering matrix sampled at the $ V $ angles, and $ \bm{\tilde{\gamma}} \in\mathbb{C}^V$ the unknown sparse scattering coefficient vector. The sparsity of $ \bm{\tilde{\gamma}} $ is enforced by the $ \ell_0 $ pseudonorm $ \|\bm{\tilde{\gamma}}\|_0 \triangleq \sum_{i=1}^V\mathbbm{1}(\tilde{\gamma}_i\neq 0)$, which enumerates the number of non-zero entries. Although \eqref{p:cs} is a non-convex optimization problem due to the $ \ell_0 $ pseudonorm, it can be approximately solved using \emph{Orthogonal Matching Pursuit} (OMP). OMP is a greedy algorithm that iteratively finds the $k = 1,2,\ldots, K $ columns in the dictionary matrix $ \bm{\tilde{A}}\odot \bm{\tilde{A}} $ whose linear combination best describes the data vector $ \bm{x} $ in an $ \ell_2 $ sense. For details on the OMP algorithm, see \cite{tropp2004greed}, \cite[p.~65]{foucart2013amathematical}.
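A minimal OMP sketch is given below for concreteness (a generic implementation, not taken from the cited references); it assumes the dictionary is available as an explicit matrix, with $\odot$ interpreted as the column-wise Khatri--Rao product:
\begin{verbatim}
# Minimal Orthogonal Matching Pursuit (OMP) for problem (p:cs).
import numpy as np

def omp(D, x, K):
    """Greedy K-sparse approximation of x in the dictionary D."""
    support, residual = [], x.copy()
    gamma = np.zeros(D.shape[1], dtype=complex)
    coef = np.zeros(0, dtype=complex)
    for _ in range(K):
        # column most correlated with the current residual
        idx = int(np.argmax(np.abs(D.conj().T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares fit on the current support
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    gamma[support] = coef
    return gamma

# Example dictionary for (p:cs), assuming the self Khatri-Rao product of
# the sampled steering matrix A (columns a_i):
#   D = np.stack([np.kron(A[:, i], A[:, i]) for i in range(A.shape[1])], 1)
\end{verbatim}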
We now compare the active sensing performance of the KMA and KA$ _R $ using OMP. As demonstrated in \cref{sec:comparison}, the KMA and KA$ _R $ achieve the lowest redundancy of the considered configurations with closed-form sensor positions. We consider $ K= 65$ scatterers with angles $ \varphi_k $ uniformly spaced between $ -60^\circ $ and $ 60^\circ $. The scattering coefficients are spherically distributed complex-valued random variables, i.e., $ \gamma_k=z_k/|z_k| $, where $ z_k\sim\mathcal{CN}(0,1) $. We assume that the arrays consist of ideal omnidirectional sensors with unit inter-sensor spacing $ \delta = 1/2 $, such that the $ (n,i) $th entry of the sampled steering matrix is $ \tilde{A}_{n,i} = \exp(j\pi d_n \sin\tilde{\varphi}_i) $. Angles $ \{\tilde{\varphi}_i\}_{i=1}^V $ form a uniform grid of $ V\!=\!10^4 $ points between $ -75^\circ $ and $ 75^\circ $.
\cref{fig:arrays2} shows that for a fixed aperture $ L=70 $, the KA$ _R $ has $ 141/101\approx 40\% $ more contiguous DoFs than the KMA, at the expense of $ 21/17\approx 24\% $ more sensors\footnote{For reference, the R-MRA with aperture $ L=70 $ has $ N=20 $ sensors.}. The KA$ _R $ can therefore resolve more scatterers than the KMA of equivalent aperture at little extra cost. This is illustrated in \cref{fig:omp_wo_noise}, which shows the noiseless spatial spectrum estimate produced by the OMP algorithm. Both arrays resolve more scatterers than sensors, although the KMA misses some scatterers and displays spurious peaks instead. The KA$_R $ approximately resolves all scatterers due to its larger co-array. \cref{fig:omp_w_noise} shows the noisy OMP spectra when the SNR of the scene, defined as $ K/\sigma^2 $, is $ 5 $ dB. The spectrum degrades more severely in the case of the KMA, partly because it has fewer physical sensors than the KA$ _R $. These conclusions are validated by the empirical \emph{root-mean square error} (RMSE) of the angle estimates, where $ 1000 $ random realizations of the noise and scattering coefficients were used. In the noiseless case, the RMSE of the KMA is $ 8.6^\circ $, whereas the KA$_R $ achieves $ 4.2^\circ $. In the noisy case the corresponding figures are $ 16.2^\circ $ (KMA) and $ 7.9^\circ $ (KA$ _R $).
\begin{figure}[]
\centering
\subfigure{\includegraphics[width=1\linewidth]{arrays_L_70}\label{fig:arrays_phys}}
\hfil
\subfigure{\includegraphics[width=1\linewidth]{coarray_s_L_70}\label{fig:arrays_sca}}
\caption{Equi-aperture array configurations. The KA$ _R $ has $ 4 $ more physical sensors than the KMA, but $ 40 $ more contiguous elements in its sum co-array.}
\label{fig:arrays2}
\end{figure}
\begin{figure}[]
\centering
\subfigure{\includegraphics[width=1\linewidth]{DoA_KMA_L_70_65_SNR_Inf}\label{fig:1}}
\hfil
\subfigure{\includegraphics[width=1\linewidth]{DoA_KA_R_L_70_65_SNR_Inf}\label{fig:2}}
\caption{Noiseless spatial spectrum estimate. The KA$ _R $ resolves more scatterers than the KMA, due to its larger sum co-array. Both arrays find more scatterers than sensors. The dashed lines indicate the true scatterer directions.}
\label{fig:omp_wo_noise}
\end{figure}
\begin{figure}[]
\centering
\subfigure{\includegraphics[width=1\linewidth]{DoA_KMA_L_70_65_SNR_5}\label{fig:1}}
\hfil
\subfigure{\includegraphics[width=1\linewidth]{DoA_KA_R_L_70_65_SNR_5}\label{fig:2}}
\caption{Noisy spatial spectrum estimate. The KMA displays multiple false peaks as it has fewer physical (and sum co-array) elements than the KA$ _R $.}
\label{fig:omp_w_noise}
\end{figure}
\section{Conclusions and future work} \label{sec:conclusions}
This paper proposed a general symmetric sparse linear array design suitable for both active and passive sensing. We established a necessary and sufficient condition for the sum and difference co-array to be contiguous, and identified sufficient conditions that substantially simplify the array design. We studied two special cases in detail, the CNA and KA, both of which achieve a low redundancy and can be generated for any number of sensors $ N $. The KA achieves the lowest asymptotic redundancy among the considered array configurations. This also yields an upper bound on the redundancy of the R-MRA, whose exact value remains an open question. The upper bound may perhaps be tightened by novel sparse array designs suggested by the proposed array design methodology.
In future work, it would be of interest to characterize the redundancy of other symmetric arrays, including recursive/fractal arrays \cite{yang2018aunified,cohen2019sparsefractal,cohen2020sparse} that have a contiguous sum co-array. Another direction is investigating the advantages of symmetric arrays over co-array equivalent asymmetric arrays in more detail. This could further increase the relevance of the symmetric sparse array configurations studied in this paper.
\section*{Acknowledgment}
The authors would like to thank Dr. Jukka Kohonen for bringing the Kl{\o}ve basis to their attention and for the feedback on this manuscript, as well as for the many stimulating discussions regarding additive bases.
\appendices
\section{Contiguous difference co-array of the KMA} \label{a:KMA_diff_coarray}
We now show that the Kl{\o}ve-Mossige array $\mathcal{G}$ in \cref{def:S-KMA} has a contiguous difference co-array. By symmetry of the difference co-array, it suffices to show that $ \mathcal{G}-\mathcal{G}\supseteq\mathcal{C}\supseteq\{0:\max\mathcal{G}\} $. We may easily verify that $ \mathcal{G}-\mathcal{G}\supseteq\mathcal{C} $ holds for
\begin{align*}
\mathcal{C}&=(\mathcal{D}_\txt{CNA}-\mathcal{D}_\txt{CNA})\cup (\mathcal{D}_3-\mathcal{D}_\txt{CNA}+2\max\mathcal{D}_\txt{CNA}+1)\\
&\supseteq\{0:\max\mathcal{D}_\txt{CNA}\}\cup (\mathcal{D}_3+\mathcal{D}_\txt{CNA}+\max\mathcal{D}_\txt{CNA}+1),
\end{align*}
where the second line follows from the fact that the CNA is symmetric and has a contiguous difference co-array. Consequently, $ \mathcal{C}\supseteq\{0:\max\mathcal{G}\} $ holds if and only if
\begin{align*}
\mathcal{D}_3+\mathcal{D}_\txt{CNA}=\{0:\max\mathcal{G}-\max\mathcal{D}_\txt{CNA}-1\}.
\end{align*}
Due to the periodicity of $ \mathcal{D}_3 $, this condition simplifies to
\begin{align*}
\mathcal{D}_4+\mathcal{D}_\txt{CNA}= \{0:\max\mathcal{D}_\txt{CNA}+N_1^2\},
\end{align*}
where $ \mathcal{D}_4=\{0:N_1:N_1^2\} $, and by \cref{def:CNA} we have
\begin{align*}
\mathcal{D}_4+\mathcal{D}_\txt{CNA}=\{\mathcal{D}_4+\mathcal{D}_1\}&\cup\{\mathcal{D}_4+\mathcal{D}_2+N_1\}\\ &\cup\{\mathcal{D}_4+\mathcal{D}_1+N_2(N_1+1)\}.
\end{align*}
As $ \mathcal{D}_1+\mathcal{D}_4=\{0:N_1(N_1+1)-1\} $, it suffices to show that
\begin{align*}
\mathcal{D}_4+\mathcal{D}_2 &\supseteq \{N_1^2:(N_2-1)(N_1+1)\}.
\end{align*}
By \cref{def:CNA,def:S-KMA}, we have
\begin{align*}
\mathcal{D}_4+\mathcal{D}_2 &=\{kN_1+l(N_1+1)\ |\ k\in\{0: N_1\}; l\in\{0: N_2-1\}\}\\
&=\{i(N_1+1)-k\ |\ k\in\{0: N_1\};i-k\in\{0: N_2-1\}\}\\
&\supseteq\{i(N_1+1)-k\ |\ k\in\{0: N_1\};i\in\{N_1: N_2-1\}\}\\
&\supseteq \{N_1^2:(N_2-1)(N_1+1)\},
\end{align*}
which implies that the difference co-array of $\mathcal{G}$ is contiguous.
\section{First hole in the sum co-array of the KMA}\label{a:S-KMA_lambda}
Let $\mathcal{G}$ denote the KMA in \cref{def:S-KMA}. Furthermore, let $ H\in\mathbb{N} $, as defined in \eqref{eq:H}, be the first hole in $\mathcal{G}+\mathcal{G}$. In the following, we show that
\begin{align*}
H = \begin{cases}
2\max\mathcal{G}+1,&\txt{if } N_1 +N_2=1\\
h+1,&\txt{if } N_1 \geq 1 \txt{ and } N_2 = 1\\
h,&\txt{otherwise},
\end{cases}
\end{align*}
where the non-negative integer $ h $ is
\begin{align}
h&=\max\mathcal{G}+\max\mathcal{D}_\txt{CNA}+1\nonumber\\
&=N_3(\max\mathcal{D}_\txt{CNA}+1+N_1^2) +2\max\mathcal{D}_\txt{CNA}+1. \label{eq:h}
\end{align}
The first case, which we only briefly mention here, follows trivially from the fact that $\mathcal{G}$ degenerates to the ULA when either $N_1=0 $ and $ N_2 = 1 $, or $ N_1=1 $ and $ N_2=0 $. We prove the latter two cases by contradiction, i.e., by showing that $h +1\in \mathcal{G}+\mathcal{G} $, respectively $h \in \mathcal{G}+\mathcal{G} $, leads to an impossibility. Verifying that indeed $\mathcal{G}+\mathcal{G}\supseteq\{0:h\} $, respectively $\mathcal{G}+\mathcal{G}\supseteq\{0:h-1\} $, is left as an exercise for the interested reader.
We start by explicitly writing the sum co-array of $\mathcal{G}$ as
\begin{align*}
\mathcal{G}+\mathcal{G} = (\mathcal{D}_\txt{CNA}+\mathcal{D}_\txt{CNA})&\cup(\mathcal{D}_\txt{CNA}+\mathcal{D}_3+2\max\mathcal{D}_\txt{CNA}+1)\\
&\cup (\mathcal{D}_3+\mathcal{D}_3+4\max\mathcal{D}_\txt{CNA}+2).
\end{align*}
Note that the CNA has a contiguous sum co-array, that is,
\begin{align*}
\mathcal{D}_\txt{CNA}+\mathcal{D}_\txt{CNA}=\{0:2\max\mathcal{D}_\txt{CNA}\}.
\end{align*}
Furthermore, it was shown in Appendix~\ref{a:KMA_diff_coarray} that
\begin{align*}
\mathcal{D}_\txt{CNA}+\mathcal{D}_3+2\max\mathcal{D}_\txt{CNA}+1 = \{2\max\mathcal{D}_\txt{CNA}+1:h-1\}.
\end{align*}
Consequently, $h\in \mathcal{G}+\mathcal{G} $ holds if and only if
\begin{align*}
h\in \mathcal{D}_3+\mathcal{D}_3+4\max\mathcal{D}_\txt{CNA}+2.
\end{align*}
By \cref{def:S-KMA}, there must therefore exist non-negative integers $ k\in\{0:2N_1\}$ and $ l\in\{0:2(N_3-1)\} $ such that
\begin{align}
h\!=\!kN_1+l(\max\mathcal{D}_\txt{CNA}+1+N_1^2)+4\max\mathcal{D}_\txt{CNA}+2.\label{eq:a2_c1}
\end{align}
Substituting \eqref{eq:h} into \eqref{eq:a2_c1} and rearranging the terms yields
\begin{align*}
(N_3-l)(\max\mathcal{D}_\txt{CNA}+1+N_1^2)=2\max\mathcal{D}_\txt{CNA}+1+kN_1.
\end{align*}
Since $ k\in\{0:2N_1\}$, the following inequality must hold:
\begin{align*}
\frac{2\max\mathcal{D}_\txt{CNA}+1}{N_1^2+\max\mathcal{D}_\txt{CNA}+1} \leq N_3-l \leq \frac{2N_1^2+2\max\mathcal{D}_\txt{CNA}+1}{N_1^2+\max\mathcal{D}_\txt{CNA}+1}.
\end{align*}
This reduces to $0 < N_3-l< 2 $, or more conveniently, $ N_3-l =1$, since $ N_3-l $ is an integer. Consequently, we have
\begin{align}
N_1(N_1-k)=\max\mathcal{D}_\txt{CNA},\label{eq:a2_c2}
\end{align}
where $ \max\mathcal{D}_\txt{CNA} \geq 0$ leads to $ N_1-k \in\{0:N_1\} $. Substituting $\max\mathcal{D}_\txt{CNA}=L$ in \eqref{eq:L_CNA} into \eqref{eq:a2_c2} yields
\begin{align}
N_1-k = N_2+1+\frac{N_2-1}{N_1}. \label{eq:a2_c3}
\end{align}
Combined with $ N_1-k\leq N_1 $, this implies that
\begin{align*}
N_1\geq \frac{N_2+1+\sqrt{(N_2+1)^2+4(N_2-1)}}{2}\geq N_2+1,
\end{align*}
since $ N_1,N_2\geq 1 $. We identify the following two cases:
\begin{enumerate}[label=\roman*)]
\item If $ N_2=1$, then \eqref{eq:a2_c3} yields that $ N_1-k=2$, implying that $H> h $. However, it is straightforward to verify that $H= h+1 $ from the fact that when $ h $ is replaced by $ h+1 $ in \eqref{eq:a2_c1}, no integer-valued $ N_1\geq 2 $ satisfies the equation.
\item If $ N_2\geq 2 $, then $ N_1 \leq N_2-1$ follows from \eqref{eq:a2_c3}, since $ (N_2-1)/N_1 $ must be an integer. This leads to a contradiction, since both $ N_1\geq N_2+1$ and $ N_1\leq N_2-1$ cannot hold simultaneously. Consequently, $H= h $ holds.
\end{enumerate}
Finally, $ H=h $ also holds when $ N_1=0 $ and $ N_2 \geq 2$, or $ N_1\geq 2 $ and $ N_2=0 $, since $\mathcal{G}$ degenerates into the NA in this case. This covers all of the possible values of $ H $.
\bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro}
\input{01-introDiscriminability.tex}
\section{Graph Neural Networks} \label{sec:GNN}
\input{02-graphNeuralNetworks.tex}
\section{Discriminability} \label{sec:disciminability}
\input{03-discriminability.tex}
\section{Numerical Experiments} \label{sec:sims}
\input{04-sims.tex}
\section{Conclusions} \label{sec:conclusions}
\input{05-conclusions.tex}
\bibliographystyle{bibFiles/IEEEbib}
\subsection{Notion of discriminability} \label{subsec:discriminabilityDef}
The stability bounds of the graph filter bank $\ccalH$ and the GNN $\bbPhi$ are proportional to the integral Lipschitz constant $C$, so that a smaller value of $C$ leads to a more stable GNN, see \cite[Thms. 2, 4]{Gama20-Stability}. Thus, stable architectures have a low value of $C$. However, a small $C$ causes the filters to become flat for smaller values of $\lambda$, see Fig.~\ref{fig:ILfilters}. More specifically, as $\lambda$ increases, the derivative of the filter $h'(\lambda)$ has to decrease, since $|\lambda h'(\lambda)| \leq C$. If $h'(\lambda)$ is small, then the filter is nearly flat. We can then define the \emph{cutoff frequency} $\lambda_{C}(\varepsilon)$ to be the eigenvalue such that $h'(\lambda) < \varepsilon$ for all $\lambda > \lambda_{C}(\varepsilon)$, for some given $\varepsilon$. Certainly, the cutoff frequency is parametrized by the integral Lipschitz constant $C$ in a way that, for a fixed value of $\varepsilon$, we have $\lambda_{C'}(\varepsilon) < \lambda_{C}(\varepsilon)$ for $C' < C$. Therefore, the more stable the GNN is, the lower $C$ is, and the lower the cutoff frequency $\lambda_{C}$ is.
\begin{figure}
\centering
\def 3.68 {3.68}
\input{figures/integralLipschitzFilters.tex}
\caption{Bank of integral Lipschitz filters.}
\label{fig:ILfilters}
\end{figure}
A small cutoff frequency implies that the filter bank is not able to tell apart signals whose difference lies in frequencies located above $\lambda_{C}$. This exhibits a tradeoff between stability (the need for a smaller $C$) and discriminability (the need for a higher $\lambda_{C}$) for integral Lipschitz filters. Let us fix $C$ and $\varepsilon$ such that, for the given support matrix $\bbS$, the eigenvalues satisfy
\begin{equation} \label{eq:ILconditionGSO}
|\lambda_{1}| \leq \cdots \leq |\lambda_{K}| < \lambda_{C}(\varepsilon) < |\lambda_{K+1}| \leq \cdots \leq |\lambda_{N}|.
\end{equation}
This means that signals whose difference has frequency content located in eigenvalues greater than $\lambda_{C}$ cannot be discriminated, see Fig.~\ref{fig:ILfilters}. These are signals whose difference can be written as a linear combination of eigenvectors $\bbv_{K+1},\ldots,\bbv_{N}$, or equivalently, signals whose difference lies in the column space of $\bbV_{N-K} = [\bbv_{K+1},\ldots,\bbv_{N}]$. Noting that the column space of $\bbV_{N-K}$ is equivalent to the null space of $\bbV_{K} = [\bbv_{1},\ldots,\bbv_{K}]$, i.e., the eigenvectors associated to the discriminable frequencies, we can then define the \emph{set of nondiscriminable signals} as
\begin{equation} \label{eq:nondiscriminableSet}
\ccalD = \big\{ \bbx, \bby \in \reals^{N} : (\bbx - \bby) \in \mathrm{Nul}(\bbV_{K}) \big\}.
\end{equation}
Since a linear operator does not create frequency content, it is immediate that the set of nondiscriminable signals is unchanged by the use of a graph filter bank \eqref{eq:graphFilterBank}, i.e. $\ccalD \equiv \ccalD_{\ccalH}$ with
\begin{equation} \label{eq:nondiscriminableSetFilter}
\ccalD_{\ccalH} = \Big\{ \bbx, \bby \in \reals^{N} : \big( \ccalH(\bbx;\bbS) - \ccalH(\bby;\bbS) \big) \in \mathrm{Nul}(\bbV_{K}) \Big\}.
\end{equation}
When considering GNNs of the form \eqref{eq:GNNsingle}, the set of nondiscriminable signals becomes
\begin{equation} \label{eq:nondiscriminableSetGNN}
\ccalD_{\bbPhi} = \Big\{ \bbx, \bby \in \reals^{N} : \big(\bbPhi(\bbx;\bbS) - \bbPhi(\bby;\bbS)\big) \in \mathrm{Nul}(\bbV_{K}) \Big\}.
\end{equation}
Due to the effect of the nonlinearity $\sigma$, the set $\ccalD_{\bbPhi}$ may be different from the set $\ccalD_{\ccalH}$ for the same filter bank $\{\bbH^{f}\}_{f=1}^{F}$.
\subsection{Enhanced discriminability of GNNs} \label{subsec:discriminabilityThm}
We now analyze and compare the discriminability of graph filters with that of GNNs. In particular, we first prove that any pair of signals that can be discriminated by the graph filter bank \eqref{eq:graphFilterBank} can also be discriminated by the GNN \eqref{eq:GNN}. Then, we characterize the signal pairs that cannot be discriminated by either the graph filter bank or the GNN. Finally, we prove that, for a $\tanh$ nonlinearity, the GNN is more discriminative than the graph filter bank.
We start by proving that GNNs are at least as discriminative as graph filters; i.e., there is no discriminability lost in adding a nonlinearity.
\begin{theorem} \label{thm:firstStep}
Let $\{\bbH^{f}\}_{f=1}^{F}$ be a bank of $F$ integral Lipschitz graph filters \eqref{eq:graphFilterBank} with a constant $C$ such that \eqref{eq:ILconditionGSO} holds for the given support matrix $\bbS$. Let $\bbPhi$ be a one-layer GNN as in \eqref{eq:GNNsingle} with a Lipschitz continuous, strictly monotone nonlinearity $\sigma$. If at least one filter $\bbH^{f}$ has a frequency response such that $h^{f}(\lambda)=0$ for $\lambda> \lambda_{C}$, then it holds that
\begin{equation} \label{eq:firstStep}
(\bbx,\bby) \notin \ccalD_{\ccalH} \Rightarrow (\bbx,\bby) \notin \ccalD_{\bbPhi}
\end{equation}
%
for all $(\bbx,\bby) \notin \ccalD_{\ccalH}$, with $\ccalD_{\ccalH}$ and $\ccalD_{\bbPhi}$ defined as in \eqref{eq:nondiscriminableSetFilter} and \eqref{eq:nondiscriminableSetGNN}, respectively.
\end{theorem}
\begin{proof}
See Appendix.
\end{proof}
\noindent Theorem~\ref{thm:firstStep} states that all pairs of signals that can be discriminated by the filter bank can also be discriminated by the GNN. More specifically, a single filter ($F=1$) satisfying $h^{1}(\lambda_{j}) =0$ for $j>K$ already suffices for the GNN to be at least as discriminative as the corresponding linear filter bank. This is a sensible condition, since setting $h^{f}(\lambda) = 0$ for $\lambda> \lambda_{C}$ for at least one filter guarantees that no nonlinearity-generated low-eigenvalue content (generated from the high-eigenvalue content) interferes with the low-eigenvalue content (which can already be discriminated). In essence, Theorem~\ref{thm:firstStep} guarantees that the GNN is at least as discriminative as the graph filter bank, meaning that the nonlinearity does not decrease the discriminatory power.
Next, we characterize the pairs of signals that are not discriminable by either the graph filter bank or the GNN.
\begin{theorem} \label{thm:secondStep}
Let $\{\bbH^{f}\}_{f=1}^{F}$ be a bank of $F \geq 2$ integral Lipschitz graph filters \eqref{eq:graphFilterBank} with a constant $C$ such that \eqref{eq:ILconditionGSO} holds for the given support matrix $\bbS$. Assume the first filter satisfies $h^{1}(\lambda) = 0$ for $\lambda > \lambda_{C}$. Let $\bbPhi$ be a one-layer GNN as in \eqref{eq:GNNsingle} with a Lipschitz continuous, strictly monotone nonlinearity $\sigma$. Let $(\bbx,\bby) \in \ccalD_{\ccalH}$. Then,
\begin{equation} \label{eq:NDpairGNN}
(\bbx,\bby) \in \ccalD_{\bbPhi} \quad \Leftrightarrow \quad b_{i}^{f} = b^{f} \ \forall\ i \in \{1,\ldots,N\}
\end{equation}
%
for all $f$ such that $h^{f}(\lambda) \neq 0$ for $\lambda > \lambda_{C}$, and where $b_{i}^{f} = (\sigma(x_{i}^{f}) - \sigma(y_{i}^{f}))/(x_{i}^{f}-y_{i}^{f})$ is the secant for $x_{i}^{f} - y_{i}^{f} \neq 0$ and $b_{i}^{f} = \sigma'(x_{i}^{f})$ is the derivative for $x_{i}^{f}=y_{i}^{f}$, with $x_{i}^{f} = [\bbH^{f}\bbx]_{i}$ and $y_{i}^{f} = [\bbH^{f}\bby]_{i}$, respectively.
\end{theorem}
\begin{proof}
See Appendix.
\end{proof}
\noindent Theorem~\ref{thm:secondStep} plays a key role in characterizing the signals that we will not be able to discriminate, even when using a GNN. More specifically, Theorem~\ref{thm:secondStep} states that if the secant (or the derivative) of the nonlinearity, evaluated at the output of the graph filter, is the same at all nodes, then the pair of signals will not be discriminated. Note that since the value of $b_{i}^{f}$ depends on the value of the secant at the output of the filter $h^{f}$, having a larger graph filter bank (large $F$) with nonzero frequency responses beyond the cutoff frequency increases the possibility that at least one filter will have a distinct $b_{i}^{f}$.
In terms of filter training (and design), we need to guarantee that at least one filter in the bank has a nonzero frequency response beyond the cutoff frequency. Otherwise, all pair of signals that are nondiscriminable with a filter bank will also be nondiscriminable with the GNN.
\begin{corollary} \label{cor:setEquivalence}
Under the setting of Theorem~\ref{thm:secondStep}, assume that all filters in the bank are such that $h^{f}(\lambda)=0$ for $\lambda > \lambda_{C}$, for all $f \in \{1,\ldots,F\}$. Then,
\begin{equation} \label{eq:setEquivalence}
\ccalD_{\ccalH} \equiv \ccalD_{\bbPhi}
\end{equation}
%
\end{corollary}
\begin{proof}
See Appendix.
\end{proof}
\noindent Corollary~\ref{cor:setEquivalence} states that having at least one filter with a nonzero frequency response beyond the cutoff frequency is a necessary condition for having a potentially more discriminative architecture. This makes sense, since if $h^{f}(\lambda) = 0$ for all $\lambda > \lambda_{C}$ and for all filters in the bank, then there is no filter that can actually pick up the high eigenvalue differences between $\bbx$ and $\bby$ (which are the only differences present, since $(\bbx,\bby)$ is in $\ccalD_{\ccalH}$ by hypothesis).
\begin{corollary} \label{cor:tanhSet}
Under the setting of Theorem~\ref{thm:secondStep}, let $\sigma = \tanh$, and assume $\lambda_{C}(\varepsilon)$ and $\bbS$ are such that $N-K > 1$ [cf. \eqref{eq:ILconditionGSO}]. Then,
\begin{equation} \label{eq:tanhSet}
\ccalD_{\bbPhi} \subset \ccalD_{\ccalH}
\end{equation}
%
\end{corollary}
\begin{proof}
See Appendix.
\end{proof}
\noindent Corollary~\ref{cor:tanhSet} shows one case in which the GNN is certifiably more discriminative than the graph filter bank.
In summary, we (i) proved that GNNs are, at the very least, as discriminative as graph filter banks (Theorem~\ref{thm:firstStep}), (ii) characterized the signals that the GNN will not be able to discriminate (Theorem~\ref{thm:secondStep}), (iii) established a case where the GNN is exactly as discriminative as the graph filter bank (Corollary~\ref{cor:setEquivalence}), and (iv) showed a practical case where the GNN is guaranteed to be more discriminative than the graph filter bank (Corollary~\ref{cor:tanhSet}).
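As a simple numerical illustration of these results, the following Python sketch builds a small symmetric support matrix, constructs a pair $(\bbx,\bby)\in\ccalD_{\ccalH}$ whose difference lies in the span of the high-eigenvalue eigenvectors, and checks that a $\tanh$ GNN recovers low-eigenvalue content that the linear filter bank cannot produce. It assumes a one-layer GNN that applies $\sigma$ pointwise to each filter output (consistent with the per-filter quantities $x_{i}^{f}$ in Theorem~\ref{thm:secondStep}), and uses a bank with one filter vanishing above the cutoff and one all-pass filter:
\begin{verbatim}
# Pair (x, y) in D_H discriminated by a tanh GNN but not by the bank.
import numpy as np

rng = np.random.default_rng(0)
N, K = 8, 4
A = rng.standard_normal((N, N))
S = (A + A.T) / 2                        # symmetric support matrix
lam, V = np.linalg.eigh(S)
V = V[:, np.argsort(np.abs(lam))]        # order eigenvectors by |lambda|
VK, Vh = V[:, :K], V[:, K:]              # low / high eigenvalue eigenvectors

h1 = np.concatenate([np.ones(K), np.zeros(N - K)])   # zero above cutoff
h2 = np.ones(N)                                      # nonzero above cutoff
H1 = V @ np.diag(h1) @ V.T
H2 = V @ np.diag(h2) @ V.T

x = rng.standard_normal(N)
y = x + Vh @ rng.standard_normal(N - K)  # (x - y) in Nul(V_K): (x,y) in D_H

# Linear filter bank: no low-eigenvalue content in the filtered difference.
print(np.linalg.norm(VK.T @ (H1 @ (x - y))),    # ~ 0
      np.linalg.norm(VK.T @ (H2 @ (x - y))))    # ~ 0
# tanh GNN (sigma applied to each filter output): generically nonzero,
# i.e., the pair is discriminated, illustrating D_Phi strictly inside D_H.
print(np.linalg.norm(VK.T @ (np.tanh(H2 @ x) - np.tanh(H2 @ y))))
\end{verbatim}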
\section{Introduction}
~~The attempt to reconcile gravity and quantum mechanics is one of the important subjects in theoretical physics which has not yet found a satisfactory solution. Several studies of quantum gravity theories, such as string theory \cite{stringHUP} and loop quantum gravity \cite{loopHUP}, propose to extend the standard Heisenberg Uncertainty Principle (HUP) to a form that incorporates gravitational phenomena at higher energies and converges to the HUP at lower energies, as follows:
\begin{eqnarray}
(\Delta X)(\Delta P)\geq\frac{\hbar}{2}(1+\beta (\Delta P)^{2}+...),
\label{HUP}
\end{eqnarray}
where $ \beta $ is a small deformation parameter with dimension of $(\text{Length})^{2}$ in natural units. It is also customary to define the dimensionless parameter $\beta_0$ by writing $\beta=\dfrac{\beta_{0}}{(M_{P}c)^{2}}$, where $M_{P}$ denotes the Planck mass. This Generalized Uncertainty Principle (GUP) includes a nonzero minimal uncertainty in position (minimal length), given by $(\Delta x)_{min}=\hbar\sqrt{\beta}$, which is of the order of the Planck length ($10^{-35}$ m) \cite{minimallength}. In recent years, the effects of GUP on various physical systems have been studied by many authors in both the low energy \cite{lowGUP} and high energy \cite{highGUP} regimes. Current experiments can also set upper bounds on $\beta_0$. It can be constrained by various phenomenological approaches, which can be summarized as follows: the best bounds of non-gravitational origin are in the range of $\beta_{0}<10^{6}$ \cite{Bushev2019}; the best upper bound of gravitational origin (with violation of the equivalence principle) is $\beta_{0}<10^{21}$ \cite{Ghosh2014}; and the best gravitational bound respecting the equivalence principle, obtained from the gravitational wave event GW150914, is $\beta_{0}<10^{60}$ \cite{Feng2017}. Motivated by string theory models, some authors also estimate the GUP parameter to be of order unity \cite{theo1, theo2}. There is a gap between theoretical predictions and experimental bounds, and closing it requires a big leap in experimental techniques to probe the vast region of $\beta_{0}$ below the current upper bounds, which range from $10^{6}$ \cite{Bushev2019} to $10^{78}$ \cite{78bound}. Indeed, in this regard, various quantum systems have been proposed to investigate the possibility of detecting GUP effects, which ultimately help in setting upper bounds on the GUP parameter \cite{lowGUP, highGUP, Bushev2019, Ghosh2014, Feng2017, theo1, theo2, 78bound, 1, 2, 5, GUPwork}.
\par
On the other hand, the geometrical phase proposed by Berry for adiabatic cyclic processes contains information on the geometrical properties of the parameter space of a quantum system \cite{berryphase}. The Berry phase can play a fundamental role in understanding the behavior of a variety of systems and phenomena, and thus, it is interesting to develop experimental probes to measure the Berry phase, as well as theoretical models that connect its behavior to microscopic information or external fields \cite{ABphase,BP1}. In recent years, spin-dependent transport experiments have demonstrated that it is possible to control the geometric phase of electrons by the application of in-plane fields in semiconductor devices such as quantum rings built on In-Ga-As structures \cite{In-Ga-As}. In such devices, when an electron is transmitted from the source to the drain, the spin precession and the spin-dependent phase (spin Berry phase) of the electron are controllable by the Rashba spin-orbit coupling, while this effect is regulated by a perpendicular electric field \cite{spinberry}. In addition, both the Aharonov-Bohm (AB) \cite{ABphase} and Aharonov-Casher (AC) \cite{ACphase} effects have been studied for quantum rings both experimentally and theoretically and may also be useful for controlling the spins of electrons \cite{spinAC}. The purpose of this work is to explore the geometrical Berry phase in the presence of GUP. In this regard, we show that the GUP provides new contributions to the Berry phase for electrons confined in a ring. We argue that the geometrical phase produced by spin-orbit interactions in condensed-matter systems offers a way to probe GUP effects. It is also addressed that current experiments in solid materials can indicate the best upper bound on the GUP parameter.
\par
The paper is organized as follows:
Section II describes a generalized framework for the GUP effect. Section III is devoted to deriving the Berry phase with the GUP correction. A discussion of GUP effects on the geometrical Berry factor for electrons inside a quantum ring is presented in Section IV, where the geometrical phase acquired by an electron in a ring is discussed in the presence of the Rashba and Dresselhaus interactions. In Section V, we summarize our results and conclusions.
\section{Generalized framework}
~~A general deformed Heisenberg algebra has been proposed in a variety of models of quantum gravity, which predict a leading correction to the HUP that is quadratic in the momenta, as formulated by Kempf, Mangano, and Mann \cite{HUP}.
\begin{eqnarray}
[X,P]=i\hbar(1+\beta P^{2}).
\label{GUP}
\end{eqnarray}
~~Various topics, such as the harmonic oscillator \cite{1}, the hydrogen atom \cite{2}, and the particle in a gravitational quantum well \cite{3}, have recently been studied using this modified version of quantum mechanics and its effects on the corresponding Schr\"{o}dinger equations. In the relativistic regime, it was used in studying the Dirac oscillator \cite{4}, the Klein-Gordon equation \cite{5}, the Casimir effect and the black body radiation \cite{6}. In general, GUP modifies the Heisenberg algebra \cite{HUP, HUP2} as follows:
\begin{eqnarray}
&[X_{i},P_{j}]&=i\hbar\{(1+\beta P^{2})\delta_{ij}+\beta'P_{i}P_{j}\},\nonumber\\
&[P_{i},P_{j}]&=0,\nonumber\\
&[X_{i},X_{j}]&=i\hbar\dfrac{2\beta-\beta'+\beta(2\beta+\beta')P^{2}}{1+\beta P^{2}}(P_{i}X_{j}-X_{i}P_{j}),
\label{GA}
\end{eqnarray}
where $\beta$ and $\beta'$ are the GUP parameters, producing two versions of deformed quantum mechanics. First, the momentum representation is given in Ref. \cite{HUP2} by
\begin{eqnarray}
X_{i}&=&i\hbar\bigg((1+\beta p^{2})\dfrac{\partial}{\partial p_{i}}+\beta'p_{i}p_{j}\dfrac{\partial}{\partial p_{j}}+\gamma p_{i}\bigg),\nonumber\\
P_{i}&=&p_{i}.
\label{mumentumspace}
\end{eqnarray}
~~Here, the operators $x_{i}$ and $p_{i}$ satisfy the standard commutation relations of ordinary quantum mechanics (they are canonical operators), and $\gamma$ is a parameter related to $\beta$ and $\beta'$. The solution of the deformed Schr\"{o}dinger equation in this approach is often not simple, and only a few problems have been solved exactly in the momentum approach with a minimal length in the case of $\beta'=2\beta$ \cite{2}.
\begin{eqnarray}
X_{i}&=&x_{i},\nonumber\\
P_{i}&=&p_{i}(1+\beta p^{2})+\mathcal{O}(\beta^{2}),
\label{coordinatespace}
\end{eqnarray}
which is valid in the case $\beta'= 2\beta$ up to the first order of $\beta$. The simplicity of applying the perturbation theory to solve the Schr\"{o}dinger equation modified by GUP is the main advantage of the position representation \cite{7}. Hence, we use this representation in this work.
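As a quick symbolic check, the representation (\ref{coordinatespace}) can be verified to reproduce the one-dimensional form of the algebra (\ref{GA}) with $\beta'=2\beta$, namely $[X,P]=i\hbar(1+3\beta p^{2})+\mathcal{O}(\beta^{2})$; the short sketch below (an illustration, not taken from the cited references) acts with the operators on a generic test function:
\begin{verbatim}
# Check that X = x, P = p(1 + beta p^2), with p = -i*hbar d/dx, gives
# [X, P] = i*hbar (1 + 3 beta p^2) + O(beta^2), i.e. the 1D deformed
# algebra with beta' = 2*beta.
import sympy as sp

x, hbar, beta = sp.symbols('x hbar beta', real=True)
f = sp.Function('f')(x)

def P(g):   # P g = -i hbar g' + i beta hbar^3 g'''  (first order in beta)
    return (-sp.I * hbar * sp.diff(g, x)
            + sp.I * beta * hbar**3 * sp.diff(g, x, 3))

commutator = sp.expand(x * P(f) - P(x * f))
expected = sp.I * hbar * (f - 3 * beta * hbar**2 * sp.diff(f, x, 2))
print(sp.simplify(commutator - expected))   # -> 0
\end{verbatim}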
\section{Berry phase with a GUP}
~~In the early 1980s, it was shown that a quantum mechanical system acquires a geometric phase for a cyclic motion in the parameter space. This geometric phase under adiabatic motion is called Berry phase \cite{berryphase} while its generalized, which encompasses the non-adiabatic motion is known as the Aharonov-Anandan phase \cite{AAphase}. A manifestation of the Berry phase is the well-known Aharonov-Bohm (AB) phase \cite{ABphase} of an electrical charge which cycles around a magnetic flux. Aside from the AB effect, the first experimental observation of Berry phase was reported in 1986 for photons in a wound optical fiber \cite{expberry}. Another important
Berry phase effect is the Aharonov-Casher (AC) effect \cite{ACphase}, which has been proposed to occur when an electron with a spin-orbit interaction propagates in a ring structure under the effect of an external magnetic field perpendicular to the ring plane. In solid materials, especially those with large spin-orbit coupling, the spin Berry phase manifests itself in various quantum phenomena, including the anomalous Hall effect, spin Hall effect, valley Hall effect, anomalous thermoelectric effect, electronic polarization, orbital magnetization, magnetoresistance, the magneto-optic effect, and 2D/3D topological insulators \cite{so}.
\par
To find the GUP effects on the geometrical Berry phase, let us first consider ordinary quantum mechanics with the general Hamiltonian
\begin{eqnarray}
H(p,x)=\frac{p^{2}}{2m}+V(x),
\label{H0}
\end{eqnarray}
where the corresponding Schr\"{o}dinger equation is given by
\begin{eqnarray}
H(p,x)|\phi_{n}>=E_{n}^{(0)}|\phi_{n}>.
\label{sheq}
\end{eqnarray}
~~Now, using the position representation of the GUP framework introduced in (\ref{coordinatespace}), the Hamiltonian changes as
\begin{eqnarray}
H(P,X)&=&\frac{P^{2}}{2m}+V(X)\nonumber\\
&=&\frac{p^{2}}{2m}+V(x)+\frac{\beta}{m}p^{4}+\mathcal{O}(\beta^{2}),
\label{H(P,X)}
\end{eqnarray}
up to first order in $\beta$. Since the GUP deformation parameter $\beta$ is very small, we can use non-degenerate perturbation theory to rewrite the above equation as
\begin{eqnarray}
H(P,X)=H(p,x)+\beta H_{p}+\mathcal{O}(\beta^{2}),
\end{eqnarray}
where $H_{p}=\dfrac{p^{4}}{m}$ is the perturbation Hamiltonian. For the energy spectrum and perturbed wave functions, one can obtain
\begin{eqnarray}
E_{n}=E_{n}^{(0)}+\beta E_{n}^{(1)}+\mathcal{O}(\beta^{2}),
\label{E}
\end{eqnarray}
and
\begin{eqnarray}
|\psi_{n}>=|\phi_{n}>+\sum_{k\neq n}C_{nk}|\phi_{k}>,
\label{state1}
\end{eqnarray}
respectively. Here,
\begin{eqnarray}
\beta E_{n}^{(1)}=<\phi_{n}|\beta H_{p}|\phi_{n}>,
\end{eqnarray}
and $|\phi_{n}>$ is the unperturbed wave function, while $C_{nk}$ is given by
\begin{eqnarray}
C_{nk}=\dfrac{<\phi_{k}|\beta H_{p}|\phi_{n}>}{E_{n}^{(0)}-E_{k}^{(0)}}.
\label{state2}
\end{eqnarray}
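For readers who wish to check the perturbative structure numerically, the following short sketch (an illustration added here, not part of the original calculation; all variable names are ours) builds the first-order energies and the mixing coefficients $C_{nk}$ for a generic non-degenerate Hamiltonian and compares the corrected energies with exact diagonalization.
\begin{verbatim}
import numpy as np

def first_order_perturbation(E0, V):
    """First-order corrections for H = H0 + beta*Hp.

    E0 : (N,) non-degenerate unperturbed energies
    V  : (N, N) matrix elements <phi_k| beta*Hp |phi_n> in the unperturbed basis
    Returns the corrected energies and the mixing coefficients C[n, k].
    """
    N = len(E0)
    E1 = np.real(np.diag(V))              # <phi_n| beta*Hp |phi_n>
    C = np.zeros((N, N), dtype=complex)
    for n in range(N):
        for k in range(N):
            if k != n:
                C[n, k] = V[k, n] / (E0[n] - E0[k])
    return E0 + E1, C

# toy check: small Hermitian perturbation of a diagonal Hamiltonian
rng = np.random.default_rng(1)
E0 = np.array([0.0, 1.0, 2.5, 4.0])
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
V = 1e-3 * (A + A.conj().T) / 2
E_pert, C = first_order_perturbation(E0, V)
E_exact = np.linalg.eigvalsh(np.diag(E0) + V)
print(np.max(np.abs(np.sort(E_pert) - E_exact)))   # difference is O(|V|^2)
\end{verbatim}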
~~~The general Berry phase relation, derived in ordinary quantum mechanics for an adiabatic cycle, is given in \cite{berryphase} by
\begin{eqnarray}
\eta_{B}=i\oint dR <\psi_{n}(R)| \nabla_R|\psi_{n}(R)>,
\label{eta}
\end{eqnarray}
where $|\psi_{n}(R)>$ and $ \nabla_R$ are the quantum state and the gradient operator with respect to the parameter space $R$, respectively. Substituting Eq.~(\ref{state1}) into the general form of the Berry phase on the right-hand side of Eq.~(\ref{eta}), one can find the correction to the Berry phase due to the GUP as:
\begin{eqnarray}
\eta_{B}^{GUP}&=&i\oint dR <\phi_{n}(R)|\nabla_R|\phi_{n}(R)>\nonumber\\
&+&i\beta\oint dR\bigg(\sum_{l\neq n}C_{nl}<\phi_{n}(R)|\nabla_R|\phi_{l}(R)>\nonumber\\
&+& \sum_{k\neq n}C_{nk}<\phi_{k}(R)|\nabla_R|\phi_{n}(R)>\bigg)+\mathcal{O}(\beta^{2}).
\label{etaGUP}
\end{eqnarray}
~~The first term in Eq.~(\ref{etaGUP}) is the Berry phase in the usual space, and the other terms give the correction to the Berry phase due to the GUP effect in the position representation (\ref{coordinatespace}). It should be noted that one recovers the usual form of the phase factor (Eq.~(\ref{eta})) in the limit $\beta\longrightarrow 0$.
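The Berry integral in Eq.~(\ref{eta}) can also be evaluated numerically on a discretized loop through the accumulated phases of the link overlaps $<\psi_{k}|\psi_{k+1}>$. The sketch below is an illustration with our own notation, using the textbook example of a spin-1/2 eigenstate following a field that traces a cone of opening angle $\theta$ (rather than the ring Hamiltonians of section IV); it reproduces the analytic result $-\pi(1-\cos\theta)$.
\begin{verbatim}
import numpy as np

def berry_phase_discrete(states):
    """eta_B = i * (closed-loop integral of <psi| d psi>), on a discretized loop."""
    M = len(states)
    phases = [np.angle(np.vdot(states[k], states[(k + 1) % M])) for k in range(M)]
    return -np.sum(phases)

# spin-1/2 eigenstate |+, n(phi)> for a field on a cone of opening angle theta
theta, M = 0.7, 2000
phis = np.linspace(0.0, 2.0 * np.pi, M, endpoint=False)
states = [np.array([np.cos(theta / 2), np.sin(theta / 2) * np.exp(1j * p)])
          for p in phis]
print(berry_phase_discrete(states))        # ~ -0.739 for theta = 0.7
print(-np.pi * (1.0 - np.cos(theta)))      # analytic value
\end{verbatim}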
\section{Berry phase in the presence of spin-orbit interaction with a GUP}
~~~Spin-orbit interactions lead to geometrical spin phase shifts in conducting quantum rings. In fact, when the electron spin is transported along a closed trajectory in momentum space, the effective magnetic field produces Berry phase effects in a conducting ring \cite{PRL1993}. Here, we consider a conducting quantum ring in the presence of Rashba and Dresselhaus interactions, and study the effect of the GUP on the Berry phase of such quantum systems.
\subsection{Rashba interaction and GUP effect}
~~~The full Hamiltonian of an electron confined in a quantum ring in the presence of an external magnetic field and Rashba interaction is described in Ref. \cite{RashbaH} by
\begin{eqnarray}
H_{R}=\dfrac{(p_{x}-\frac{e}{c}A_{x})^{2}+(p_{y}-\frac{e}{c}A_{y})^{2}}{2m^{*}}+\frac{\hbar}{2}\alpha_{R}[\sigma_{x}(p_{y}-\frac{e}{c}A_{y})-\sigma_{y}(p_{x}-\frac{e}{c}A_{x})]+\hbar\omega_{B}\sigma_{z},
\label{RahbaH}
\end{eqnarray}
where $\textbf{A}$ is the vector potential and $\omega_{B}=\dfrac{egB}{2m^{*}c}$ denotes the corresponding Larmor frequency. $\alpha_{R}$ and $m^{*}$ are the strength of the Rashba coupling and the effective mass of the electron in the material, respectively. Using polar coordinates and the tangential component of the vector potential $A_{\varphi}=\dfrac{\Phi}{2\pi r}$, in which $\Phi$ and $r$ are the magnetic flux and the radius of the ring, respectively, we get the following Hamiltonian for the electron in a closed ring
\begin{eqnarray}
H_{R}=\hbar\omega(-i\frac{\partial}{\partial\varphi}-\frac{\Phi}{\Phi_{0}})^{2}+\hbar\omega_{B}\sigma_{z}+\hbar\omega_{R}(\sigma_{x}\sin\varphi-\sigma_{y}\cos\varphi)(-i\frac{\partial}{\partial\varphi}-\frac{\Phi}{\Phi_{0}}).
\label{Rahbacly}
\end{eqnarray}
~~~Here, we defined $\omega=\dfrac{\hbar}{2m^{*}r^{2}}$ and $\omega_{R}=\dfrac{\hbar\alpha_{R}}{2r}$ as the characteristic frequencies of the kinetic and Rashba terms, respectively, and $\Phi_{0}=\dfrac{2\pi\hbar c}{e}$ denotes the magnetic flux quantum. The eigenenergies of this Hamiltonian for spin up and down are given in appendix A. These relations show that electrons travelling in opposite directions acquire different phases due to their spin, a result similar to the usual AB effect ($-2\pi\dfrac{\Phi}{\Phi_{0}}$) \cite{Rashbaberry}. By using the eigenspinors of the system (appendix A), the geometrical Berry phase for an electron in the state $(n=0, \lambda=+1, s=+1)$ is obtained as
\begin{eqnarray}
\eta^{R}_{B}(\theta)&=&\pi(1+\cos\theta),
\label{berry0Rashba}
\end{eqnarray}
where $\theta $ is exactly half of the solid angle subtended by the effective magnetic field.
~~~To explore the GUP effect, one can consider the position representation of the GUP modification, expressed in Eq.~(\ref{coordinatespace}), and rewrite Hamiltonian~(\ref{RahbaH}) in the form
\begin{eqnarray}
H_{R}^{GUP}&=&H_{R}+\frac{\beta}{m^{*}}p_{x}\overrightarrow{p}^{2}(p_{x}-\frac{e}{c}A_{x})+\frac{\beta}{m^{*}}p_{y}\overrightarrow{p}^{2}(p_{y}-\frac{e}{c}A_{y})\nonumber\\
&+&\frac{\hbar}{2}\beta\alpha_{R}\overrightarrow{p}^{2}[p_{y}\sigma_{x}-p_{x}\sigma_{y}]+\mathcal{O}(\beta^{2}),
\label{HRGUP}
\end{eqnarray}
in which $\overrightarrow{p}^{2}=p_{x}^{2}+p_{y}^{2}$.
The above Hamiltonian can be written as $H_{R}^{GUP}=H_{R}+\beta H_p^{R}$, where $H_{R}$ and $H_{p}^{R}$ are the unperturbed conducting-ring Hamiltonian and the perturbation Hamiltonian due to the GUP, respectively. Using polar coordinates (see appendix B), we rewrite $H_{p}^{R}$ as
\begin{eqnarray}
H_{p}^{R}=\dfrac{\hbar^{4}}{m^{*}r^{4}}\{(\dfrac{\partial}{\partial \varphi})^{4}-i\dfrac{\Phi}{\Phi_{0}}(\dfrac{\partial}{\partial \varphi})^{3}\}+\dfrac{i\hbar^{4}\alpha_{R}}{2r^{3}}(\dfrac{\partial}{\partial \varphi})^{3}(\sigma_{x}\sin\varphi-\sigma_{y}\cos\varphi).
\label{HPRahbacly}
\end{eqnarray}
~~~As shown in appendix B, the geometrical phase in the presence of the GUP is found, at lowest perturbative order, to be
\begin{eqnarray}
\eta_{B}^{R,GUP}(\theta)\approx\eta_{B}^{R}(\theta)+\frac{2\beta\pi\hbar^{2}}{r^{2}}\frac{[1+\cos\theta][3+\cos\theta]}{2+\sqrt{1+\tan^{2}\theta}},
\label{BerryRahbaGUP}
\end{eqnarray}
where the angle $\theta$ is given by $\tan\theta=\alpha_{R}m^{*}r$. Here, the first term denotes the Berry phase in the usual space (\ref{berry0Rashba}) and the second term is the correction to the Berry phase in the presence of the GUP; by taking the $\beta\rightarrow 0$ limit, we recover Eq.~(\ref{berry0Rashba}), as desired. Since the accuracy of current apparatus in measuring a geometrical phase is about $10^{-4}~ \text{rad}$ \cite{expberryphase}, a reasonable expectation is
\begin{eqnarray}
|\Delta \eta_{B}^{R,GUP}(\theta)|<10^{-4}~\text{rad}~,
\end{eqnarray}
leading to an upper bound on the GUP parameter in natural units ($\hbar=c=1$) of
\begin{eqnarray}
\beta<\dfrac{r^{2}(2+\sqrt{1+\tan^{2}\theta})}{2\pi[1+\cos\theta][3+\cos\theta]}10^{-4}.
\end{eqnarray}
One obtains the bounds on the GUP parameter and its dimensionless counterpart as $\beta<10^{8}~\rm GeV^{-2}$ and $\beta_{0}< 10^{46}$, respectively, by considering $m^{*}\simeq 0.02~m_{e}$, $r\simeq 1~\rm nm$, and $\alpha_{R}\simeq 10^{-4}$, for which $\tan\theta=\alpha_{R}m^{*}r\simeq 5\times10^{-3}$. In Fig. (\ref{fig:Rashba}), the dimensionless GUP parameter ($\beta_{0}$) is displayed for different materials, with the ring radius ranging from micrometers to nanometers ($\rm \mu m$ to nm) and the electron effective mass in the range 0.02$~m_{e}$ to 0.6$~m_{e}$. One can see that the tightest upper bound on the GUP parameter is obtained for a ring size of the order of a nanometer, with $m^{*}$ in the range (0.02 - 0.6)$~m_{e}$.
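As an illustration of the numbers quoted above, the following sketch evaluates the bound with the stated inputs ($m^{*}\simeq 0.02~m_{e}$, $r\simeq 1$~nm, $\alpha_{R}\simeq 10^{-4}$); the conversion constant $\hbar c\simeq0.1973$~GeV$\cdot$fm and the identification $\beta_{0}=\beta M_{\rm Pl}^{2}$ (with $M_{\rm Pl}\simeq1.22\times10^{19}$~GeV) are our assumptions for translating to natural and dimensionless units.
\begin{verbatim}
import numpy as np

hbar_c  = 0.1973e-15                 # GeV * m (conversion constant, assumed)
m_e     = 0.511e-3                   # electron mass in GeV
m_star  = 0.02 * m_e                 # effective mass
r       = 1e-9 / hbar_c              # ring radius of 1 nm, in GeV^-1
alpha_R = 1e-4
theta   = np.arctan(alpha_R * m_star * r)       # tan(theta) ~ 5e-3

beta_max = (r**2 * (2 + np.sqrt(1 + np.tan(theta)**2))
            / (2 * np.pi * (1 + np.cos(theta)) * (3 + np.cos(theta)))) * 1e-4
beta0_max = beta_max * (1.22e19)**2             # beta0 = beta * M_Pl^2 (assumed)
print(f"beta  < {beta_max:.1e} GeV^-2")         # of order 1e8
print(f"beta0 < {beta0_max:.1e}")               # of order 1e46
\end{verbatim}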
\begin{figure}[H]
\centering
\includegraphics[scale=0.7]{NR.pdf}
\caption{The upper bounds on the dimensionless GUP parameter from an electron confined in a quantum ring with space parameters ($m^{*},r$) in the presence of Rashba interaction.}
\label{fig:Rashba}
\end{figure}
\subsection{Dresselhaus interaction and GUP effect}
~~~A similar effect can be shown for the Dresselhaus interaction by considering a two-dimensional electron gas (2DEG) in a semiconductor. We consider a semiconductor with a normal $A_3B_5$ crystal, with the $z$-axis parallel to the interface of a plane with Miller index $(001)$, for a rectangular quantum well in an external magnetic field $B$; the corresponding Hamiltonian \cite{PRL1993} takes the form
\begin{eqnarray}
H_{D}=\dfrac{(p_{x}-\frac{e}{c}A_{x})^{2}+(p_{y}-\frac{e}{c}A_{y})^{2}}{2m^{*}}+\frac{\hbar}{2}\alpha_{D}[\sigma_{x}(p_{x}-\frac{e}{c}A_{x})-\sigma_{y}(p_{y}-\frac{e}{c}A_{y})]+\hbar\omega_{B}\sigma_{z},
\label{HDress}
\end{eqnarray}
where $\alpha_{D}$ denotes the strength of the Dresselhaus coupling. In polar coordinates, one gets
\begin{eqnarray}
H_{D}=\hbar\omega(-i\frac{\partial}{\partial\varphi}-\frac{\Phi}{\Phi_{0}})^{2}+\hbar\omega_{B}\sigma_{z}+\hbar\omega_{D}(\sigma_{x}\cos\varphi-\sigma_{y}\sin\varphi)(-i\frac{\partial}{\partial\varphi}-\frac{\Phi}{\Phi_{0}}),
\label{HDressrectal}
\end{eqnarray}
in which $\omega=\dfrac{\hbar}{2m^{*}r^{2}}$ and $\omega_{D}=\dfrac{\hbar\alpha_{D}}{2r}$. For $\lambda=+1$, $s=+1$, the usual Berry phase, computed with the eigenspinors introduced in appendix A, is given by
\begin{eqnarray}
\eta^{D}_{B}(\theta)=-\pi[2n-1+\cos\theta].
\label{Berry0Dress}
\end{eqnarray}
For the GUP Hamiltonian in the position representation, one arrives at
\begin{eqnarray}
H_{D}^{GUP}&=&H_{D}+\frac{\beta}{m^{*}}p_{x}\overrightarrow{p}^{2}(p_{x}-\frac{e}{c}A_{x})+\frac{\beta}{m^{*}}p_{y}\overrightarrow{p}^{2}(p_{y}-\frac{e}{c}A_{y})\nonumber\\
&+&\frac{\hbar}{2}\beta\alpha_{D}\overrightarrow{p}^{2}[p_{x}\sigma_{x}-p_{y}\sigma_{y}]+\mathcal{O}(\beta^{2}).
\label{HDGUP}
\end{eqnarray}
We can also write $H_{D}^{GUP}=H_{D}+\beta H_p^{D}$, in which $H_{D}$ and $H_{p}^{D}$ are the ring Hamiltonian with Dresselhaus coupling (Eq.~\ref{HDress}) and the perturbation Hamiltonian due to the GUP, respectively. The latter can finally be written as
\begin{eqnarray}
H_{p}^{D}=\dfrac{\hbar^{4}}{m^{*}r^{4}}\{(\dfrac{\partial}{\partial \varphi})^{4}-i\dfrac{\Phi}{\Phi_{0}}(\dfrac{\partial}{\partial \varphi})^{3}\}+\dfrac{i\hbar^{4}\alpha_{D}}{2r^{3}}(\dfrac{\partial}{\partial \varphi})^{3}(\sigma_{x}\cos\varphi-\sigma_{y}\sin\varphi).
\label{HPDahbacly}
\end{eqnarray}
After some straightforward calculations (see appendix B), we get
\begin{eqnarray}
\eta_{B}^{D,GUP}(\theta)&\approx&\eta_{B}^{D}(\theta)+\frac{4\beta\pi\hbar^{2}}{r^{2}}\frac{[1-\cos\theta][1+\cos\theta]}{3+\sqrt{9+8\tan^{2}\theta}},
\label{BerryDreslGUP}
\end{eqnarray}
as the GUP-corrected Berry phase in the presence of the Dresselhaus interaction. It is seen that the usual Berry phase is recovered by setting $\beta\longrightarrow 0$, while the pure GUP effect is obtained for $\alpha_{D}\longrightarrow 0$. Imposing the condition
\begin{eqnarray}
|\Delta \eta_{B}^{D,GUP}|<10^{-4}~\text{rad},
\end{eqnarray}
we find that the GUP parameter should satisfy the condition
\begin{eqnarray}
\beta<\dfrac{r^{2}(3+\sqrt{9+8\tan^{2}\theta})}{4\pi[1-\cos\theta][1+\cos\theta]}10^{-4}.
\end{eqnarray}
~~Here, $\tan\theta=\alpha_{D}m^{*}r\simeq 5\times10^{-3}$ corresponds to a small angle of order $\theta\simeq 0.005$. Therefore, the bounds on the GUP parameter and its dimensionless counterpart are obtained as $\beta<10^{13}~\rm GeV^{-2}$ and $\beta_{0}< 10^{51}$, respectively, with $m^{*}\simeq 0.02~m_{e}$, $r\simeq 1~\rm nm$ and $\alpha_{D}\simeq 10^{-4}$. Fig.~(\ref{fig:Dress}) illustrates the best upper bound on the GUP parameter for different materials in the logarithmic parameter space ($m^{*}, r$), over the same range as for the Rashba interaction, for materials where the Dresselhaus coupling is large. In the presence of the Dresselhaus interaction, using the current accuracy of Berry-phase detectors, the allowed region for the upper bound on the GUP parameter is depicted in Fig.~(\ref{fig:Dress}) for different materials.
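Reusing the quantities defined in the sketch of the previous subsection, the corresponding Dresselhaus bound can be evaluated in the same way; the $(1-\cos\theta)$ factor in the denominator is what makes this bound considerably weaker at small $\theta$ (again an illustration with our own assumed inputs).
\begin{verbatim}
alpha_D = 1e-4
theta_D = np.arctan(alpha_D * m_star * r)       # m_star, r as in the Rashba sketch
beta_max_D = (r**2 * (3 + np.sqrt(9 + 8 * np.tan(theta_D)**2))
              / (4 * np.pi * (1 - np.cos(theta_D)) * (1 + np.cos(theta_D)))) * 1e-4
print(f"beta < {beta_max_D:.1e} GeV^-2")        # of order 1e13
\end{verbatim}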
\begin{figure}[H]
\centering
\includegraphics[scale=0.7]{ND.pdf}
\caption{The upper bounds on the dimensionless GUP parameter from an electron confined in a quantum ring with space parameters ($m^{*},r$) in the presence of Dresselhaus interaction.}
\label{fig:Dress}
\end{figure}
\section{Conclusion}
~~~In this paper, employing the position representation of the GUP and perturbation theory, we found the modification to the Berry phase due to the presence of the GUP, up to first order in perturbation theory. Thereafter, focusing on the Rashba and Dresselhaus interactions of an electron in a quantum ring, we obtained the effects of the GUP on the corresponding Hamiltonians and, consequently, the modifications to the Berry phase in each case. With the help of the current accuracy of measurement apparatus in detecting the Berry phase, we also obtained upper bounds of $\beta<10^{8} ~\rm GeV^{-2}$ and $\beta<10^{13}~\rm GeV^{-2}$ from the Rashba and Dresselhaus interactions, respectively. Although these bounds are in agreement with some previous works, such as electroweak measurements \cite{Das}, they are still far from the minimum upper bound of $10^6$ reported in \cite{Bushev2019}. These bounds will improve as the accuracy of the measurements is increased.
\begin{appendix}
\section{Eigenenergies, Eigenspinors and Spin-orbit interaction}
~~~In the current appendix, we address the eigenenergies and eigenspinors of an electron confined in a quantum ring with spin-orbit interactions.
\begin{itemize}
\item Rashba interaction
\end{itemize}
~~~The 2D Hamiltonian for an electron with effective mass $m^{*}$ subject to the Rashba coupling with constant $\alpha_{R}$ is introduced in Eq.~(\ref{RahbaH}). This Hamiltonian is not diagonal in spin space. After diagonalization, the eigenvalues and the corresponding eigenfunctions are \cite{Rashbaberry}
\begin{eqnarray}
E_{ns}^{(R,0)}=\hbar\omega(l^{2}+\frac{1}{4})+s\hbar\sqrt{\omega_{R}^{2}l^{2}+(\omega_{B}-l\omega)^{2}},
\label{Rashbaenergy}
\end{eqnarray}
where $l=\lambda n+\frac{1}{2}+\frac{\Phi}{\Phi_{0}}$, $n$ is an integer, and $\lambda$ and $s$ denote the travel direction and spin quantum numbers for orbital quantum number $n\geq0$, respectively. The normalized eigenspinors corresponding to these eigenenergies are
\begin{eqnarray}
\phi_{n,\lambda,s}(\varphi)=e^{i\lambda n \varphi}\chi_{n,\lambda,s}; ~~~ \chi_{n,\lambda,s}=\begin{bmatrix}
\chi_{1}\\ \chi_{2}e^{i\varphi}
\end{bmatrix},
\label{rashbavectors}
\end{eqnarray}
where $\chi_{n, \lambda, s}$ contains the spin components. The wave functions for travel direction $\lambda=\pm 1$ and spin quantum number $s=\pm1$, associated with the spin-up ($\uparrow$) and spin-down ($\downarrow$) states, are given by
\begin{eqnarray}
\phi_{n,+,\uparrow}&=&e^{in\varphi}
\begin{bmatrix}
\sin\frac{\theta}{2}\\ \cos\frac{\theta}{2}e^{i\varphi}
\end{bmatrix},
\nonumber\\
\phi_{n,+,\downarrow}&=&e^{in\varphi}
\begin{bmatrix}
\cos\frac{\theta}{2}\\ -\sin\frac{\theta}{2}e^{i\varphi}
\end{bmatrix},
\nonumber\\
\phi_{n,-,\uparrow}&=&e^{-in\varphi}
\begin{bmatrix}
\cos\frac{\theta}{2}\\ -\sin\frac{\theta}{2}e^{i\varphi}
\end{bmatrix},
\nonumber\\
\phi_{n,-,\downarrow}&=&e^{-in\varphi}
\begin{bmatrix}
\sin\frac{\theta}{2}\\ \cos\frac{\theta}{2}e^{i\varphi}
\end{bmatrix}.
\label{waverashba}
\end{eqnarray}
~~~The general Berry phase with Rashba coupling, for both travel directions and with $\tan\theta=\dfrac{\omega_{R}}{\omega}$, is obtained from Eq.~(\ref{eta}) as
\begin{eqnarray}
\eta_{B}^{R}(\theta)&=&-\pi[2n-1-s\cos\theta],~~~ \lambda>0,\nonumber\\
&=&-\pi[2n+1-s\cos\theta],~~~\lambda<0.
\label{RashbaB}
\end{eqnarray}
~~~The expressions~(\ref{RashbaB}) are valid if the spin-orbit energy is larger than the Zeeman energy. We should mention that the AB phase ($\dfrac{\Phi}{\Phi_{0}}$) is ignored in this equation.
\begin{itemize}
\item Dresselhaus interaction
\end{itemize}
~~~The 2D Hamiltonian for an electron with the effective mass $m^{*}$ subjected to Dresselhaus interaction with constant $\alpha_{D}$ is introduced in Eq.~(\ref{HDress}). By diagonalizing the matrix representation of this Hamiltonian, similar to the Rashba interaction, one can find the following energy eigenvalues
\begin{eqnarray}
E_{n,s}^{(D,0)}=\hbar\omega(l^{2}+\frac{1}{4})+s\hbar\sqrt{\omega_{D}^{2}(l^{2}-\frac{1}{4})+(\omega_{B}+l\omega)^{2}},
\label{EDress}
\end{eqnarray}
where $l=\lambda n+\frac{1}{2}+\frac{\Phi}{\Phi_{0}}$, $n$ is an integer number. The corresponding wave functions, in the presence of Dresselhaus interaction, are summarized as \cite{PRL1993}
\begin{eqnarray}
\phi_{n,+,\uparrow}&=&e^{in\varphi}
\begin{bmatrix}
-\cos\frac{\theta}{2}\\ \sin\frac{\theta}{2}e^{-i\varphi}
\end{bmatrix},
\nonumber\\
\phi_{n,+,\downarrow}&=&e^{in\varphi}
\begin{bmatrix}
\sin\frac{\theta}{2}\\ \cos\frac{\theta}{2}e^{-i\varphi}
\end{bmatrix},
\nonumber\\
\phi_{n,-,\uparrow}&=&e^{-in\varphi}
\begin{bmatrix}
\sin\frac{\theta}{2}\\ \cos\frac{\theta}{2}e^{-i\varphi}
\end{bmatrix},
\nonumber\\
\phi_{n,-,\downarrow}&=&e^{-in\varphi}
\begin{bmatrix}
-\cos\frac{\theta}{2}\\ \sin\frac{\theta}{2}e^{-i\varphi}
\end{bmatrix}.
\label{Dressvectors}
\end{eqnarray}
~~~Hence, for an electron with spin up (down) in a ring, the geometrical Berry phase would be evaluated as
\begin{eqnarray}
\eta_{B}^{D}(\theta)&=&-\pi[2n-1+s\cos\theta],~~~ \lambda>0,\nonumber\\
&=&+\pi[2n+1+s\cos\theta],~~~ \lambda<0.
\end{eqnarray}
\section{Modified Berry phase with GUP}
~~~In this section, we derive the geometrical phase in the presence of GUP for an electron that is confined inside a quantum ring with spin-orbit interaction.
\begin{itemize}
\item Rashba interaction
\end{itemize}
~~~Eq.~(\ref{HRGUP}) gives the Hamiltonian of an electron in the presence of the Rashba interaction and the GUP ($H_{R}^{GUP}$), where the position representation of the GUP is applied. To investigate the quantum effects of this system, it is convenient to use polar coordinates ($r$, $\varphi$) in the following form
\begin{eqnarray}
(p-\frac{e}{c}A)_{x}=\frac{\hbar}{r}\cos\varphi(-i\frac{\partial}{\partial \varphi}-\frac{\Phi}{\Phi_{0}}),\nonumber\\
(p-\frac{e}{c}A)_{y}=\frac{\hbar}{r}\sin\varphi(-i\frac{\partial}{\partial \varphi}-\frac{\Phi}{\Phi_{0}}).
\label{polarcor}
\end{eqnarray}
~~~Therefore, by substituting~(\ref{polarcor}) into~(\ref{HRGUP}), the perturbation term in $H_{R}^{GUP}=H_{R}+\beta H_{p}^{R}$ becomes
\begin{eqnarray}
H_{p}^{R}=\dfrac{\hbar^{4}}{m^{*}r^{4}}\{(\dfrac{\partial}{\partial \varphi})^{4}-i\dfrac{\Phi}{\Phi_{0}}(\dfrac{\partial}{\partial \varphi})^{3}\}+\dfrac{i\hbar^{4}\alpha_{R}}{2r^{3}}(\dfrac{\partial}{\partial \varphi})^{3}(\sigma_{x}\sin\varphi-\sigma_{y}\cos\varphi).
\label{HpR}
\end{eqnarray}
~~~To find the modified Berry phase in the presence of GUP, we expand the expression~(\ref{etaGUP}) in the parameter space $\varphi$ as follows
\begin{eqnarray}
\eta_{B}^{R,GUP}&=& \eta_{B}^{R}(\theta)
+2i\int_{0}^{2\pi} d\varphi \sum_{k\neq n}C^{R}_{nk}<\phi_{n}(\varphi)|\frac{\partial}{\partial \varphi}|\phi_{k}(\varphi)>+\mathcal{O}(\beta^{2}),
\label{eq1}
\end{eqnarray}
where
\begin{eqnarray}
C^{R}_{nk}=\dfrac{<\phi_{k}(\varphi)| \beta H^{R}_{p}|\phi_{n}(\varphi)>}{E_{n}^{(R,0)}-E_{k}^{(R,0)}}.
\label{CnkR}
\end{eqnarray}
~~~It should be noted that the states of the system, $\phi_{n}(\varphi)\equiv \phi_{n,\lambda,s}(\varphi)$, are introduced in appendix A for the Rashba interaction. For simplicity, we take the initial state of the system to be $(n,\lambda=+1, s=+1)$; Eq.~(\ref{eq1}) can then be written as
\begin{eqnarray}
\eta_{B}^{R,GUP}&=& \eta_{B}^{R}(\theta)
+2i\beta\int_{0}^{2\pi} d\varphi \sum_{k\neq n}\dfrac{<\phi_{k,\lambda',s'}(\varphi)|H_{p}^{R}|\phi_{n,+1,+1}(\varphi)>}{E_{n,+1,+1}^{(R,0)}-E_{k,\lambda',s'}^{(R,0)}}\times\nonumber\\
&&<\phi_{n,+1,+1}(\varphi)|\frac{\partial}{\partial \varphi}|\phi_{k,\lambda',s'}(\varphi)>,
\label{eq2}
\end{eqnarray}
where by considering all intermediate states $(\lambda', s')$, the non-zero terms can be found in the form of
\begin{eqnarray}
\eta_{B}^{R,GUP}&=& \eta_{B}^{R}(\theta)
+2i\beta\int_{0}^{2\pi} d\varphi \sum_{k\neq n}\dfrac{<\phi_{k,+1,+1}(\varphi)|\frac{\hbar^{4}}{m^{*}r^{4}}(\frac{\partial}{\partial \varphi})^{4}|\phi_{n,+1,+1}(\varphi)>}{E_{n,+1,+1}^{(R,0)}-E_{k,+1,+1}^{(R,0)}}\times\nonumber\\
&&<\phi_{n,+1,+1}(\varphi)|\frac{\partial}{\partial \varphi}|\phi_{k,+1,+1}(\varphi)>.
\label{eq3}
\end{eqnarray}
~~~It should also be noted that the second term in $H_{p}^{R}$ gives no contribution, due to the orthogonality of the eigenspinors. Here, the assumption $\frac{\Phi}{\Phi_{0}}\longrightarrow 0$ is applied and, consequently, the numerator and denominator of Eq.~(\ref{eq3}) can be simplified by using the eigenspinors~(\ref{rashbavectors}) and eigenenergies~(\ref{Rashbaenergy}) of the Rashba part of appendix A, respectively, as
\begin{eqnarray}
&&e^{-ik\varphi}
\begin{bmatrix}
\sin\frac{\theta}{2}& \cos\frac{\theta}{2}e^{-i\varphi}
\end{bmatrix}(\frac{\partial}{\partial \varphi})^{4}
\begin{bmatrix}
\sin\frac{\theta}{2}\\ \cos\frac{\theta}{2}e^{i\varphi}
\end{bmatrix}e^{in\varphi}\times\nonumber\\
&&e^{-in\varphi}
\begin{bmatrix}
\sin\frac{\theta}{2}& \cos\frac{\theta}{2}e^{-i\varphi}
\end{bmatrix}(\frac{\partial}{\partial \varphi})
\begin{bmatrix}
\sin\frac{\theta}{2}\\ \cos\frac{\theta}{2}e^{i\varphi}
\end{bmatrix}e^{ik\varphi}
\nonumber\\
&&=i\{n^{4}+(6n^{2}+4n^{3}+4n+1)\cos^{2}\frac{\theta}{2}\}\{\frac{2k+1+\cos\theta}{2}\},
\label{num}
\end{eqnarray}
and
\begin{eqnarray}
E_{n,+1,+1}^{(R,0)}-E_{k,+1,+1}^{(R,0)}= \hbar\omega\{l^{2}-l'^{2}+(l-l')\sqrt{1+\tan^{2}\theta}\},
\label{deno}
\end{eqnarray}
where we set $\omega_{B}=0~(\Phi\longrightarrow 0)$, $\tan\theta=\frac{\omega_{R}}{\omega}=\alpha_{R}m^{*}r$, $l=n+\frac{1}{2}$, and $l'=k+\frac{1}{2}$. By doing some straightforward calculations and carrying out the integral over $\varphi$, Eq.~(\ref{eq3}) leads to the following expansion
\begin{eqnarray}
\eta_{B}^{R,GUP}(\theta)&=&\eta_{B}^{R}(\theta)+\frac{2\beta \pi \hbar^{2}}{r^{2}}[\frac{(1+\cos\theta)(3+\cos\theta)}{2+\sqrt{1+\tan^{2}\theta}}+\frac{(1+\cos\theta)(5+\cos\theta)}{6+2\sqrt{1+\tan^{2}\theta}}\nonumber\\
&+&\frac{(1+\cos\theta)(7+\cos\theta)}{12+3\sqrt{1+\tan^{2}\theta}}+...],
\label{etaBRashba}
\end{eqnarray}
in which we set $n=0$ and $k=1,2,\ldots$; moreover, the $\alpha_R\rightarrow 0$ limit gives the effect of the GUP on the Berry phase in the absence of the Rashba interaction. One can observe in Eq.~(\ref{etaBRashba}) that, for a given angle, the successive correction terms decrease, with the first term having a larger amplitude than the others. The significant Rashba and GUP effects at the lowest states thus come from the nearest intermediate states. Hence, we consider only the first term ($k=1$) as the effective contribution for constraining the GUP parameter, and the corresponding Berry phase can be written as
\begin{eqnarray}
\eta_{B}^{R,GUP}(\theta)\approx\eta_{B}^{R}(\theta)+\frac{2\beta\pi\hbar^{2}}{r^{2}}\frac{[1+\cos\theta][3+\cos\theta]}{2+\sqrt{1+\tan^{2}\theta}}.
\label{etaBRashba2}
\end{eqnarray}
The first term refers to Berry phase in the presence of Rashba coupling as
\begin{eqnarray}
\eta_{B}^{R}(\theta)=i\int_{0}^{2\pi} d\varphi <\phi_{0,+1,+1}(\varphi)|\frac{\partial}{\partial \varphi}|\phi_{0,+1,+1}(\varphi)>=\pi[1+\cos\theta],
\label{44}
\end{eqnarray}
which is introduced in appendix A by setting $n=0$, $\lambda=+1$, and $s=+1$. It should be noted that the modification of the Berry phase in Eq.~(\ref{etaBRashba}) suggests that different orders of magnitude of the GUP parameter ($\beta$) may be testable by a variety of experiments.\\
\begin{itemize}
\item Dresselhaus interaction
\end{itemize}
~~~To find the modified Berry phase in the presence of the Dresselhaus interaction and the GUP, one can follow the same procedure as for the Rashba interaction, using the corresponding Dresselhaus expressions. We start with the Hamiltonian introduced in Eq.~(\ref{HDGUP}) and rewrite it in the polar coordinates of Eq.~(\ref{polarcor}) as follows
\begin{eqnarray}
H_{p}^{D}=\dfrac{\hbar^{4}}{m^{*}r^{4}}\{(\dfrac{\partial}{\partial \varphi})^{4}-i\dfrac{\Phi}{\Phi_{0}}(\dfrac{\partial}{\partial \varphi})^{3}\}+\dfrac{i\hbar^{4}\alpha_{D}}{2r^{3}}(\dfrac{\partial}{\partial \varphi})^{3}(\sigma_{x}\cos\varphi-\sigma_{y}\sin\varphi).
\label{HpD}
\end{eqnarray}
~~~The Berry phase with GUP correction is written in the following form
\begin{eqnarray}
\eta_{B}^{D,GUP}&=& \eta_{B}^{D}(\theta)
+2i\int_{0}^{2\pi} d\varphi \sum_{k\neq n}C^{D}_{nk}<\phi_{n}(\varphi)|\frac{\partial}{\partial \varphi}|\phi_{k}(\varphi)>+\mathcal{O}(\beta^{2}),
\label{eq4}
\end{eqnarray}
where
\begin{eqnarray}
C^{D}_{nk}=\dfrac{<\phi_{k}(\varphi)| \beta H^{D}_{p}|\phi_{n}(\varphi)>}{E_{n}^{(D,0)}-E_{k}^{(D,0)}}.
\label{CnkD}
\end{eqnarray}
~~~By considering similar conditions for the quantum system in the presence of the Dresselhaus interaction, $(n,\lambda=+1, s=+1)$, the non-zero terms are given by
\begin{eqnarray}
\eta_{B}^{D,GUP}&=& \eta_{B}^{D}(\theta)
+2i\beta\int_{0}^{2\pi} d\varphi \sum_{k\neq n}\dfrac{<\phi_{k,+1,+1}(\varphi)|\frac{\hbar^{4}}{m^{*}r^{4}}(\frac{\partial}{\partial \varphi})^{4}|\phi_{n,+1,+1}(\varphi)>}{E_{n,+1,+1}^{(D,0)}-E_{k,+1,+1}^{(D,0)}}\times\nonumber\\
&&<\phi_{n,+1,+1}(\varphi)|\frac{\partial}{\partial \varphi}|\phi_{k,+1,+1}(\varphi)>.
\label{eq5}
\end{eqnarray}
~~~The numerator and denominator of Eq.~(\ref{eq5}) can be found by using the eigenspinors and eigenenergies given in the Dresselhaus part of appendix A, respectively, as
\begin{eqnarray}
&&e^{-ik\varphi}
\begin{bmatrix}
-\cos\frac{\theta}{2}& \sin\frac{\theta}{2}e^{i\varphi}
\end{bmatrix}(\frac{\partial}{\partial \varphi})^{4}
\begin{bmatrix}
-\cos\frac{\theta}{2}\\ \sin\frac{\theta}{2}e^{-i\varphi}
\end{bmatrix}e^{in\varphi}\times\nonumber\\
&&e^{-in\varphi}
\begin{bmatrix}
-\cos\frac{\theta}{2}& \sin\frac{\theta}{2}e^{i\varphi}
\end{bmatrix}(\frac{\partial}{\partial \varphi})
\begin{bmatrix}
-\cos\frac{\theta}{2}\\ \sin\frac{\theta}{2}e^{-i\varphi}
\end{bmatrix}e^{ik\varphi}
\nonumber\\
&&=i\{n^{4}+(6n^{2}-4n^{3}-4n+1)\sin^{2}\frac{\theta}{2}\}\{\frac{2k-1+\cos\theta}{2}\},
\label{numD}
\end{eqnarray}
and
\begin{eqnarray}
E_{n,+1,+1}^{(D,0)}-E_{k,+1,+1}^{(D,0)}&=&
\hbar\omega\{l^{2}-l'^{2}+\sqrt{\tan^{2}\theta(l^{2}-\frac{1}{4})+l^{2}}\nonumber\\
&-&\sqrt{\tan^{2}\theta(l'^{2}-\frac{1}{4})+l'^{2}}\}.
\label{denoD}
\end{eqnarray}
~~Here, we apply $\tan\theta=\frac{\omega_{D}}{\omega}$.
The GUP correction to the Berry phase in the presence of the Dresselhaus interaction can then be derived by some straightforward calculations as
\begin{eqnarray}
\eta_{B}^{D,GUP}(\theta)&=&\eta_{B}^{D}(\theta)+\frac{4\beta\pi\hbar^{2}}{r^{2}}[\frac{(1-\cos\theta)(1+\cos\theta)}{3+\sqrt{9+8\tan^{2}\theta}}\nonumber\\
&+&\frac{(1-\cos\theta)(3+\cos\theta)}{11+\sqrt{25+24\tan^{2}\theta}}
+\frac{(1-\cos\theta)(5+\cos\theta)}{23+\sqrt{49+48\tan^{2}\theta}}+...],
\label{etaBD}
\end{eqnarray}
where we set $n=0$ and $k=1,2,3,\ldots$. Keeping the effective contribution of the GUP effect, where the $k=1$ state provides the largest contribution compared with the other terms, as in the Rashba case, one obtains
\begin{eqnarray}
\eta_{B}^{D,GUP}(\theta)&\approx&\eta_{B}^{D}(\theta)+\frac{4\beta\pi\hbar^{2}}{r^{2}}\frac{[1-\cos\theta][1+\cos\theta]}{3+\sqrt{9+8\tan^{2}\theta}},
\label{etaD2}
\end{eqnarray}
where the angle $\theta$ is defined by $\tan\theta=\alpha_{D}m^{*}r$. The first term is the Berry phase in the presence of the Dresselhaus interaction, as shown in appendix A with the settings $n=0$, $\lambda=+1$, and $s=+1$:
\begin{eqnarray}
\eta_{B}^{D}(\theta)=i\int_{0}^{2\pi} d\varphi <\phi_{0,+1,+1}(\varphi)|\frac{\partial}{\partial \varphi}|\phi_{0,+1,+1}(\varphi)>=\pi[1-\cos\theta].
\label{55}
\end{eqnarray}
\end{appendix}
\section{Introduction}
\label{intro}
The study of scaling in financial systems has been a field of investigation for many years now (\cite{Dima2007}, \cite{DimaAsteDaco2005}, \cite{DimaAsteDaco2003}, \cite{Mand1963}, \cite{CalvFish2002}, \cite{BoucPottMeye2000}, \cite{MantStan1995}, \cite{Leba2001}, \cite{Kaiz2001}, \cite{Scal1998}, \cite{BartMellDiMa2007}, \cite{LiuLuxDiMa2007}, \cite{LiuDiMaLux2008}, \cite{Mand1997}, \cite{MiloHatiBarn2020},\cite{barunik2010hurst},\cite{kristoufek2011multifractal},\cite{jiang2019multifractal},\cite{barunik2012understanding}). These studies have shown that financial series, especially from stock-markets, display multiscaling, which is nowadays widely accepted as empirical stylized fact of financial time series. The (multi)scaling property of time series is particularly important in risk management, especially when the model used assumes independence of asset returns. In fact, the lack of this assumption to hold, may severely bias risk measures, especially if there is long-range dependence and this is acting with a different degree across the time series statistical moments. In recent years, multiscaling has been adopted as a formalism in two different branches of quantitative finance, i.e. econophysics and mathematical finance. The former devoted most of the attention to price and returns series in order to understand the source of multiscaling from an empirical and theoretical point of view \cite{mantegna_stanley_book,dacorogna_book,MantStan1995,Dima2007,CalvFish2002,lux1,lux_marchesi,DimaAsteDaco2005,buonocore2020interplay} and has recently identified a new stylized fact which relates (non-linearly) the strength of multiscaling and the dependence between stocks \cite{buonocore2020interplay}. The latter instead builds on the work of \cite{roughvola} on rough volatility and has been used to construct stochastic models with anti-persistent volatility dynamics \cite{roughvola,roughvola2,roughvola3,roughvola4}. Although these research fields try to answer different research questions, it is important to recognize the relevance that multiscaling has attained in finance. Multiscaling has been understood to originate from one or more phenomena related to trading dynamics. In particular, it can be attributed to (i) the fat tails in price change distributions, (ii) the auto-correlation of the absolute value of log-returns, (iii) liquidity dynamics, or (iv) (non-linear) correlation between high and low returns generated by the different time horizon of traders and the consequent volumes traded. It can also be caused by the endogeneity of markets for which a given order generates many other orders. The latter occurs especially in markets where algorithmic trading is prevalent \cite{brandi2020statistics}.
However, scaling in a financial time series has also been shown to vary with time. For example, there have been studies trying to link this variation with dynamical elements in the underlying title such as, for instance, the level of stability of a firm (\cite{MoraDiMatteo2011}). In \cite{DrozKowa2018} and \cite{DrozOswi2015} the authors discuss, by using Multi-Fractal Detrended Fluctuation Analysis method (MF-DFA), the dynamical evolution of the $f(\alpha)$ \textit{vs.} $\alpha$ multifractal spectrum in financial and other types of time-series, not only in terms of its width $\Delta \alpha = \alpha_{max}-\alpha_{min}$ but also in terms of its 'asymmetry', \textit{i.e.} looking at the evolution of the shape (skewness) of the spectrum and relating it to market events and underling dynamics. Other studies have tried to associate a time-varying Hurst exponent as a measure of the dynamically changing scaling of a financial time-series, with the development of stock-market bubbles (\cite{YalaRossMcKe2011}, \cite{FernSancMuno2017} and references therein), trading signals (\cite{KrohSkou2018}) and predictability of an index \cite{CapoGila2017}, raising the question whether scaling analysis can be used as a signaling tool for financial markets (\cite{GrechMazu2004},\cite{GrechPamu2008},\cite{Mitr2012}).
In the present study, we aim to contribute towards this discussion by studying the dynamical evolution of multiscaling using the structure function approach, also known as the Generalized Hurst exponent (GHE) method \citep{DimaAsteDaco2003,Dima2007,jiang2019multifractal}, on time-series from four stock market indices, two major ones (S\&P~500 and Tokyo-NIKKEI) and two from peripheral markets (the Athens Stock Exchange General Index (ASE) and Bombay-SENSEX). Employing the GHE method, the generalized Hurst exponents, $H_q$, are calculated for various values of the parameter $q$ corresponding to time scaling of the $q$-moment of the series difference distribution for a time delay $\tau$. In the time-dependent GHE approach, time-series of $H_q$ are generated for a range of $q$ values, by partitioning the underlying time-series into (usually overlapping) time segments and calculating the $H_q$ values for each segment. Looking at the relative values of the $H_q$ for the various $q$ at a particular time segment, one can evaluate the degree of multiscaling during that period. Alternative methods to GHE can also be used to extract the scaling exponent from time series, such as Rescaled range (R/S) analysis (\cite{Hurs1951,jiang2019multifractal}), MF-DFA (\cite{KantZschKosc2002}) and the Wavelet Transform Modulus Maxima (WTMM) introduced by \citep{muzy1991wavelets,muzy1993}. A more complete discussion on the use and misuse of various Hurst exponent estimation methods is given by Serinaldi \cite{Seri2010}, suggesting caution on the method used depending on the type of time-series considered. Recently, \cite{buonocore2020interplay} showed that the results retrieved by the GHE methodology and MF-DFA are qualitatively equivalent, while \citep{barunik2010hurst} showed empirically that the GHE approach outperforms the other methods under different data specifications. For this reason, throughout this work we will use the GHE methodology.
The scope of this work is threefold: (i) At a given time period, to detect \textit{differences} among the GHE temporal profiles of a time-series and the respective profiles of a surrogate randomly generated time-series of similar volatility temporal profile as the original series, using the exact same estimation method for both. (ii) To detect temporal changes in the GHE profiles of a time-series. We are thus interested in detecting statistically significant \textit{differences}, rather than absolute values, of GHE's relative to a specific reference series (the surrogate series) and the temporal evolution of these differences. (iii) To identify recurrent patterns in the temporal profiles that may correspond to particular market conditions. These patterns could characteristically emerge before or after critical time periods such as a stock-market bubble. In the first case, they can be used as warning signals for a particular future market event. In order to provide a rigorous definition of such patterns in GHE profiles and a systematic way to detect them, we introduce a visual methodology to algorithmically detect critical changes in the scaling of the underlying complex time-series. The methodology involves the strength of multiscaling at a particular time instance, the multiscaling trend, which is calculated by the Change-Point Analysis method, and a rigorous evaluation of the statistical significance of the identified patterns, by comparing to the output of the same analysis applied to randomly generated surrogate time-series that are constructed so that they have the same volatility temporal profile as the real series. Using this algorithm, we have identified particular patterns in the temporal co-evolution of the different GHE time-series. These patterns, which we call GHE Temporal Patterns (TP), distinguish in a statistically robust way, not only between time periods of uniscaling and multiscaling, but also among different types of multiscaling: \textit{symmetric} multiscaling and \textit{asymmetric} multiscaling. The latter type is characterized by a time-asymmetric dynamic of the scaling exponents for the extreme $q$ values, $q_1$ and $q_2$. The methodology shows that asymmetric multiscaling itself can be robustly divided into three subcategories that correspond to different dynamics. By applying the above visual methodology to historical data of the four indices mentioned above, we find that critical events are preceded by asymmetric multiscaling patterns, thus highlighting a warning signal. We also find that such behaviour is in general stronger for endogenous crises such as the Dot.com bubble, the 1991 Japanese bubble, or the 2000 Athens bubble, but much weaker for exogenously generated ones, such as the 2008 global financial crisis. Furthermore, we discuss the physical connection of the multiscaling TP's to underlying market trading dynamics.
The paper is structured as follows. Section \ref{methods} is devoted to the presentation of the methods and implementation used in the paper, section \ref{results} shows results of an empirical application of the methodology to stock market indices, while sections \ref{discussion} and \ref{conclusions} are devoted to the discussion of the results, conclusions and future work for the further development of the method used in this study and its possible application to financial time series or time series of other complex systems.
\section{Description of methods}
\label{methods}
\subsection{Generalized Hurst Exponents}
\label{GHE}
The Hurst exponent (\cite{Hurs1951}, \cite{HursBlacSima1965}) is a well-known tool used to study the scaling behavior of time series coming from any dynamical process. To compute the scaling exponents, it is necessary to study the $q$-order moments of the absolute value of the increments of the stochastic process \cite{Dima2007}. In particular, the process $(X_t)$ with stationary increments is analysed through
\begin{equation}\label{eq:Kq}
\Xi(\tau,q)= \mathbb{E}\left[|X_{( t+ \tau)} -X_t|^q\right]\sim K_q\tau^{qH_q},
\end{equation}
where $q=\{q_1,q_2,\dots,q_M\}$ is the set of evaluated moments, $\tau=\{\tau_1,\tau_2,\dots,\tau_N\}$ is the set of time aggregation used to compute the process increments, $K_q$ is the $q$-moment for $\tau=1$ and $H_q$ is the so called generalized Hurst exponent, which is a function of $q$. The function $q H_q$ is concave \cite{Mand1963,Mand1997} and codifies the scaling exponents of the process. A multiscaling proxy can be obtained by fitting the measured scaling exponents with a second degree polynomial (\cite{buonocore2020interplay,buonocore2016measuring}) of the form\footnote{Technical details of the choice of this functional form can be found in \cite{buonocore2020interplay,buonocore2016measuring}.}
\begin{equation}\label{mult_proxy2}
qH_q=Aq+Bq^2,
\end{equation}
or equivalently \cite{brandi2020statistics}:
\begin{equation}\label{mult_proxy3}
H_q=A+Bq,
\end{equation}
where $A$ and $B$ are two constants. In this setting, the measured $B$, $\widehat B$, represents the curvature of $qH_q$. If $\widehat B=0$, $H_q$ does not depend on $q$, i.e. $H_q=H$ for all $q$, hence the process is uniscaling, while if $\widehat B\neq0$, the process is multiscaling \cite{Dima2007,brandi2020statistics,buonocore2020interplay,buonocore2016measuring}. For $q=1$, the GHE is equivalent to the original Hurst exponent. Notice also that for $q=2$, $\Xi(\tau,2)$ is proportional to the auto-correlation function of $X_t$. For $H_1=0.5$, the evolution of the system in state-space is equivalent to a random walk, i.e. the underlying process is purely stochastic (diffusive). For a single variable time series, this is equivalent to saying that at any given time, the value of the series is equally likely to go up as it is to go down. For $H_1>0.5$, the system evolves faster than stochastic diffusion (super-diffusive process), which implies that -for a single-variable series- if a change occurs in one direction (up or down), it is more likely that the next change will be in the same direction rather than in the opposite. In such a case, the underlying process is characterized as a {\it persistent} process. Finally, for $H_1< 0.5$, the system evolves slower than stochastic diffusion (sub-diffusive process). For a single-variable series, this implies that, if a change occurs in one direction (up or down), it is more likely that the next change will be in the opposite direction. In the latter case, the process is characterized as an {\it anti-persistent} process. For the calculation of GHE's of higher and more positive values of $q$, the largest differences in a series are weighted more than smaller differences in (\ref{eq:Kq}) and therefore large-$q$ GHE's emphasize the tails of the distribution of differences. Conversely, lower (less positive) values of $q$ weigh small differences more than large ones. Computing a broad spectrum of GHE's, for several spread-out values of $q$, provides a more detailed 'signature' of the underlying dynamics of the system compared to considering only the original Hurst exponent. However, using high values of $q$ can bias the results if the data analysed is characterised by distributions with fat tails. In particular, for $q>\alpha$, where $\alpha$ is the tail exponent of the distribution of the data, the $q$-moments are not well defined. This introduces a bias on the expected value which in turn, produces a bias in the GHE estimation. Since financial time series are generally fat tailed, the choice of $q$ is relevant.
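For concreteness, a minimal implementation of the estimator in Equation (\ref{eq:Kq}) is sketched below (our own illustrative code, not the exact implementation used for the results of this paper): for each $q$, $\Xi(\tau,q)$ is computed for $\tau=1,\dots,\tau_{max}$ and $H_q$ is obtained from the slope of $\log\Xi(\tau,q)$ versus $\log\tau$, divided by $q$.
\begin{verbatim}
import numpy as np

def generalized_hurst(x, q_list, tau_max=19):
    """GHE of a log-price series x via the structure-function (GHE) method."""
    x = np.asarray(x, dtype=float)
    taus = np.arange(1, tau_max + 1)
    H = []
    for q in q_list:
        xi = [np.mean(np.abs(x[tau:] - x[:-tau]) ** q) for tau in taus]
        slope, _ = np.polyfit(np.log(taus), np.log(xi), 1)   # slope = q * H_q
        H.append(slope / q)
    return np.array(H)

# example: for a Gaussian random walk, H_q ~ 0.5 for all q (uniscaling)
rw = np.cumsum(np.random.default_rng(0).normal(size=5000))
print(generalized_hurst(rw, [0.5, 1, 2, 4]))
\end{verbatim}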
\subsection{Weighted GHE's}
\label{wGHE}
Recently, Morales \& Di~Matteo in \cite{MoraDiMatteo2011} have proposed a modification of the GHE, the weighted GHE ({\it w}GHE), by modifying the way the time averaging is carried out in Equation (\ref{eq:Kq}). Specifically, while taking the sum within a time interval $[t-\Delta t, t]$ of length $\Delta t$, each term of the time series is weighed by a factor that depends on how far back from the present time $t$ the term lies: the farther in the past, the less this term is weighed, so that more recent times have a higher contribution to the calculation of the moments in Equation (\ref{eq:Kq}). Thus, the averaging in Equation (\ref{eq:Kq}) is replaced by the following definition: For any function $f$ of a dynamic variable $X_t$, we have
\begin{equation}
\label{eq:wAver}
\mathbb{E}\left[f \left( X_{t} \right)\right]_{ \theta }= \sum _{s=0}^{ \Delta t-1}w_{s} \left( \theta \right) f \left( X_{(t-s)} \right),
\end{equation}
where the weighting factor $w_{s}$ is an exponentially decaying function of time defined as:
\begin{equation}
\label{eq:Ws}
w_{s} \left( \theta \right) =w_{o} \left( \theta \right) e^{-\frac{s}{ \theta }},
\end{equation}
where $\theta$ is the characteristic time for which $w$ drops to $1/e$ and $w_{o}=w_{o} \left( \theta \right) =\frac{1-e^{-\frac{1}{ \theta}}}{1-e^{-\frac{ \Delta t}{\theta}}}$ is a normalization constant that ensures that the sum of all weights $w$ within the interval $\Delta t$ equals 1. Thus, Equation (\ref{eq:Kq}) is now replaced by its weighted sum equivalent:
\begin{equation}
\label{eq:wKq}
\Xi(\tau,q,\theta)= \mathbb{E}\left[|X_{( t+ \tau)} -X_t|^q\right]_{\theta}\sim K_q\tau^{qH_q^{(\theta)}},
\end{equation}
where $H_q^{(\theta)}$ is the {\it w}GHE corresponding to a characteristic time $\theta$. Throughout the rest of this paper we will use the {\it w}GHE version as defined above. Its main advantage is that it allows one to use a fixed window $\Delta t$ for all calculations, varying only the characteristic time $\theta$ in order to increase or decrease the weighting of the short-term past relative to the long-term past. This provides enough data to obtain accurate estimates for {\it w}GHE's and at the same time gives flexibility in setting the characteristic weighting time scale, thus adjusting smoothly the importance of recent past to distant past in GHE computation.
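A sketch of the weighted estimator, implementing the weights of Equation (\ref{eq:Ws}) and the weighted moments of Equation (\ref{eq:wKq}), is given below. This is again illustrative code with our own naming; in particular, the alignment of the increments with the weights (the most recent increment receiving the largest weight $w_0$) is one natural reading of Equations (\ref{eq:wAver})-(\ref{eq:wKq}).
\begin{verbatim}
import numpy as np

def exp_weights(n, theta):
    """Normalized exponential weights w_s; s = 0 is the most recent observation."""
    s = np.arange(n)
    w0 = (1.0 - np.exp(-1.0 / theta)) / (1.0 - np.exp(-n / theta))
    return w0 * np.exp(-s / theta)

def weighted_ghe(x, q_list, dt=250, theta=250, tau_max=19):
    """wGHE at the last point of the log-price array x (needs len(x) >= dt + tau_max)."""
    x = np.asarray(x, dtype=float)
    w = exp_weights(dt, theta)
    t = len(x) - 1
    s = np.arange(dt)                      # days back from the present
    taus = np.arange(1, tau_max + 1)
    H = []
    for q in q_list:
        xi = []
        for tau in taus:
            incr = np.abs(x[t - s] - x[t - s - tau]) ** q
            xi.append(np.sum(w * incr))
        slope, _ = np.polyfit(np.log(taus), np.log(xi), 1)
        H.append(slope / q)
    return np.array(H)
\end{verbatim}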
Besides {\it w}GHE's, we also estimated the time evolution of the volatility of an index, defined as the standard deviation of the weighted log returns over a time window equal to $\Delta t$. For volatility calculation, the averaging was again carried out as a weighted average with a characteristic time $\theta$ using Equation (\ref{eq:wAver}):
\begin{equation}
\label{eq:volatility}
V(t) = \sigma\left( log\left(\frac{X_{\tau+1}}{X_\tau}\right) w_{t} \left( \theta \right)\right)_{\Delta t},
\end{equation}
where $log(...)$ is the natural logarithm, $\sigma(...)_{\Delta t}$ denotes standard deviation of the series for $\tau=1...\Delta t-1$ and the time weighting factor $w$ is given by Equation (\ref{eq:Ws}).
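One possible reading of Equation (\ref{eq:volatility}), used here purely as an illustration and reusing the weights defined in the sketch above, is an exponentially weighted standard deviation of the daily log-returns over the last $\Delta t$ days:
\begin{verbatim}
def weighted_volatility(prices, dt=250, theta=250):
    """Exponentially weighted std of daily log-returns over the last dt days."""
    r = np.diff(np.log(np.asarray(prices, dtype=float)))[-dt:]
    w = exp_weights(len(r), theta)[::-1]   # most recent return weighted most
    mu = np.sum(w * r)
    return np.sqrt(np.sum(w * (r - mu) ** 2))
\end{verbatim}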
\subsection{Surrogate stock market indices}
\label{SurrIndex}
In order to make sure that our results are not numerical artefacts of the finite data sizes due to the relatively short time windows used in the {\it w}GHE computations, we applied the same calculations on \textit{surrogate} time series. In order to produce such series, we did not apply the 'shuffling' method often used for this purpose, according to which the surrogate is constructed by a random permutation of the original time-series percentage differences, in order to destroy any long-term correlations of the original data. Instead, for each market index studied, we created a respective surrogate index as follows: Starting at the actual close price of the particular index at an initial date (the first date for which data was available), closing prices of all subsequent dates were artificially generated by a 'random walk' procedure, in which the day-to-day log price change was picked from a normal distribution with mean equal to zero and variance equal to the weighted average volatility $V(t)$ of the actual index at that particular date $t$. In this way, the surrogate index day-to-day relative price changes are randomly chosen, but the volatility variation of the surrogate index (i.e. the average magnitude of the relative daily changes) matches the temporal volatility profile of the actual index.
The reason for making this choice is that, in the present study, the surrogate series merely serve as reference series for the purpose of subtracting the effect in multiscaling properties that are \textit{solely} due to the finite-sized (short length) data segments used in calculations from any {\it w}GHE temporal variations of the real index that are the cause of the underlying market dynamics. In other words, the surrogate index serves as a measure of the "noise level" for the {\it w}GHE of the real indices, which, after being subtracted, will enable a more accurate quantitative evaluation of the departure of observed multiscaling behavior from a randomly generated finite data set whose distribution of differences is normal, by construction. Randomly shuffling a real index, on the other hand, destroys any temporal correlations but maintains the precise distribution of changes intact. Therefore, comparisons of the {\it w}GHE temporal profiles of the real index with the respective profiles of a shuffled surrogate, does not seclude the effect of the non-normal character of real price distributions, an effect that we want to measure. Another obvious choice for a surrogate index would be a randomly generated index with price changes picked from a normal distribution of uniform variance in time (i.e. ignoring the effect of a time-varying market volatility). In the present study, we chose to include the effects of the volatility variation with time, in order to subtract any residual effect it may have on the {\it w}GHE's. In this way, we are sure to measure the effects on the {\it w}GHE's profiles coming from the departure of the price change distributions from being normal (although with time-varying variance), as well as any temporal correlations within the close price time series themselves.
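The construction of the surrogate index described above can be sketched as follows (illustrative code with our own conventions; the real index enters only through its weighted volatility profile, and the first few steps use a shortened estimation window):
\begin{verbatim}
def surrogate_index(prices, dt=250, theta=250, seed=0):
    """Random-walk surrogate whose daily volatility tracks that of the real index."""
    rng = np.random.default_rng(seed)
    logp = np.log(np.asarray(prices, dtype=float))
    r = np.diff(logp)
    surr = np.empty_like(logp)
    surr[0] = logp[0]
    for t in range(1, len(logp)):
        seg = r[max(0, t - dt):t]               # returns observed up to day t
        w = exp_weights(len(seg), theta)[::-1]  # most recent return weighted most
        mu = np.sum(w * seg)
        vol = np.sqrt(np.sum(w * (seg - mu) ** 2))
        surr[t] = surr[t - 1] + rng.normal(0.0, vol)
    return np.exp(surr)
\end{verbatim}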
\section{Results}
\label{results}
\subsection{Data description}
\label{DataDescr}
For our analysis, we have used 4 stock market indices: New York stock exchange index (S\&P~500), Tokyo stock exchange index (NIKKEI), Athens Stock Exchange general index (ASE) and the Bombay stock index (SENSEX). Table \ref{tab:tabdata} shows the time period in which the data is analysed and the number of trading days in each series.
\begin{table}[!h]
\centering
\begin{tabular}{|c|c|c|}
\hline
\textbf{Market} & \textbf{Time period} & \textbf{Trading days} \\ \hline
S\&P~500 & 1927-2020 & 23138 \\ \hline
NIKKEI & 1969-2020 & 13068 \\ \hline
ASE & 1991-2020 & 7146 \\ \hline
SENSEX & 2001-2020 & 5450 \\ \hline
\end{tabular}
\caption{Time periods and the number of trading days analysed for each stock market.}
\label{tab:tabdata}
\end{table}
For each data series $X_t$, we used daily log prices, defined as the natural logarithm of the closing price of the index on each day, i.e. $X_t=log(P_t)$, where $P_t$ is the closing price of the index at time $t$.
\subsection{Standardized GHE, multiscaling proxies and parameter definition}
\label{Proxies}
We use a convenient normalization for $H_q$ defined as follows:
\begin{equation}
\label{eq:NormH}
H_q^{'(\theta)} = \frac{H_q^{(\theta)}-H^{*}}{\sigma\left( H_q^{surr(\theta)} \right)},
\end{equation}
where $H^{*}$ is the value of the Hurst exponent expected for a perfectly random series ($H^{*}=0.5$) and $\sigma\left( H^{surr(\theta)}_q \right)$ is the standard deviation of $H^{surr(\theta)}_q$, the {\it w}GHE of the respective surrogate series calculated over the entire timeline, which is computed as:
\begin{equation}
\label{eq:stdev}
\sigma\left(H^{surr}_q\right) = \sqrt{\frac{\sum_{\tau=1}^N \left( H^{surr}_q(\tau)-\mathbb{E}\left[H^{surr}_q\right]\right)^2 }{N-1}},
\end{equation}
where $\mathbb{E}\left[H^{surr}_q\right]$ is the average of the series:
\begin{equation}
\label{eq:mean}
\mathbb{E}\left[ H^{surr}_q\right] = \frac{1}{N} \sum_{t=1}^N H^{surr}_q(t),
\end{equation}
and $N$ is the total number of points in the time-series. This type of normalized GHE (to which we will refer, from now on, as the 'standardized' GHE in order to distinguish it from usual normalized versions that contain the standard deviation of the real series itself) has a convenient interpretation: a value of $H_q^{'(\theta)}\approx0$ signifies an underlying time-series with the same behavior as a random series. For other values, $H_q^{'(\theta)}$ is equal to the number of standard deviations of $H_q^{surr(\theta)}$ that the real index $H_q^{(\theta)}$ is above 0.5. The standard deviation of $H_q^{surr(\theta)}$ is a measure of the variability of the $H_q$ series of a random index and thus conveniently measures the degree of 'noise level', i.e. the variability of any $H_q$ series that is due to finite data size effects and not to the actual underlying dynamics (apart from the dynamical changes in volatility which -in the present work- are included in the generation of the random surrogate). Therefore, division by the standard deviation of the surrogate series, enables a quantification of the statistical strength of the observed persistent, anti-persistent or uniscaling/multiscaling behavior of the real time-series at each time period, compared to a random signal for which the \textit{w}GHE's are computed with the same window size $\Delta t$ and characteristic weighting factor $\theta$.
In order to assess multiscaling, we use two alternative measures:
\begin{itemize}
\item Multiscaling \textit{width} $W_{q_1,q_2}$
\item Multiscaling \textit{curvature} (or \textit{depth}) $B$ of Equation \ref{mult_proxy3}.
\end{itemize}
The multiscaling width $W_{q_1,q_2}$ is computed as the difference between the $H_{q_1}$ and $H_{q_2}$, i.e.
\begin{equation}
\label{eq:W}
W_{q_1,q_2}=H_{q_1} - H_{q_2},
\end{equation}
and conveys information on the span of the $H_q$ parameter. In turn, the multiscaling curvature $B$ is computed as the slope of the linear fit of $H_q^{(\theta)}$ against $q$, as described in Equation (\ref{mult_proxy3}) (\cite{brandi2020statistics,buonocore2020interplay}). If the process is uniscaling, both measures should be approximately zero, as $H_q^{(\theta)}$ does not depend on $q$. In order to run our procedure, we have to specify some input parameters, i.e. $\tau$, $q$, $\Delta t$ and $\theta$ in Equation \ref{eq:wKq}. Regarding the maximum $\tau$, we use 19 days, as prescribed in \cite{DimaAsteDaco2003,DimaAsteDaco2005}. Similarly to the standardized $H_q$, in order to detect statistically significant multiscaling at time $t$, the 'width' of the {\it w}GHE $q$-spectrum for the extreme $q$ values, $q_1$ and $q_2$ respectively, is also standardized as
\begin{equation}
\label{eq:Wnorm}
W'_{q_1,q_2}(t) = \frac{W_{q_1,q_2}(t)}{\sigma\left( W^{surr}_{q_1,q_2} \right)},
\end{equation}
where $\sigma\left( W^{surr}_{q_1,q_2} \right)$ is the \textit{pooled} standard deviation of the difference between surrogate series $H^{surr}_{q_1}(t)$ and $H^{surr}_{q_2}(t)$ given by:
\begin{equation}
\label{eq:Stdpooled}
\sigma\left( W^{surr}_{q_1,q_2} \right) = \sqrt{\sigma\left( H^{surr}_{q_1} \right)^2+\sigma\left( H^{surr}_{q_2} \right)^2},
\end{equation}
Finally, in order to compute the series of $B(t)$, we use the series $H_q(t)$ for several values of $q$ within a range $q'_1-q'_2$. The number of $q$ values affects the accuracy of determining $B(t)$ by the least-squares linear fit to the $H_q$ \textit{vs.} $q$ data for each time $t$. A number of about 20 values of $q$ is adequate for a good quality fit, yielding 'p-values', on average, above 0.98 and, in the worst case (rare outliers), 0.85. Similar to $W'_{0.1,4}(t)$, we standardize $B$ by using the standard deviation of $B$ computed on the surrogate data, $\sigma\left( B^{surr} \right)$:
\begin{equation}
\label{eq:Bnorm}
B'(t)=\frac{B(t)}{\sigma\left( B^{surr} \right)}.
\end{equation}
$\sigma\left( B^{surr} \right)$ is calculated via Equation (\ref{eq:stdev}) replacing $H_q(\tau)$ by $B^{surr}(\tau)$.
Regarding the choice of the extreme values of $q$, we used two sets: For the multiscaling width $W_{q_1,q_2}$, we used a large span, $q_1=0.1$ and $q_2=4$, in order to capture the strong 'biasing' effect of the tails of the price change distributions as it has been reported elsewhere \cite{buonocore2020interplay,brandi2020statistics} for financial time-series. We want to include this "biased" version of the width in order to capture the \textit{dynamics} of such bias and spot any transitions it may reveal in time. For the multiscaling proxy $B$, on the other hand, we used a short span $q'_1=0.1$ and $q'_2=1$ with a step of $\Delta q=0.04$ in order to concentrate on the small $q$ values that mostly weigh the small price changes and thus emphasize the center of the price change distributions. The step of $\Delta q=0.04$ provides 23 $H_q$ values for each point in time $t$, and thus the quality of the linear fit yielding $B$ is very good.
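Putting the pieces together, the two proxies at a given date can be obtained from the {\it w}GHE estimator sketched in section \ref{wGHE} as follows (illustrative code; the standardized versions $W'$ and $B'$ are then obtained by dividing by the corresponding surrogate standard deviations):
\begin{verbatim}
def multiscaling_proxies(x, dt=250, theta=250):
    """Width W_{0.1,4} and curvature B at the end of the log-price segment x."""
    # width from the two extreme moments q1 = 0.1 and q2 = 4
    H_ext = weighted_ghe(x, [0.1, 4.0], dt=dt, theta=theta)
    W = H_ext[0] - H_ext[1]
    # curvature B: slope of H_q vs q over q in [0.1, 1] with step 0.04 (23 values)
    qs = np.arange(0.1, 1.0 + 1e-9, 0.04)
    H_small = weighted_ghe(x, qs, dt=dt, theta=theta)
    B, _ = np.polyfit(qs, H_small, 1)
    return W, B
\end{verbatim}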
\subsubsection{Choice of $\Delta t$ and $\theta$}
One of the most important issues concerning the time-dependent {\it w}GHE's is the choice of the $\Delta t$ and $\theta$ parameters which represent the size of the time window and the time weighting parameter within that window that directly pertain to the {\it w}GHE's calculations. The optimum choice should be the result of a trade-off between reducing the finite-size effects (that increase the smaller $\Delta t$ and $\theta$ are), and capturing the short-term changes in multiscaling and {\it w}GHE's, a task for which the smaller $\Delta t$ and $\theta$, the better. If the time window length and time weighting parameter are too short, finite size effects will overwhelm the amount of multiscaling caused by the real dynamics. If, on the other hand, they are too large, finite size effects are ameliorated, but possible short-term multiscaling variations in the real dynamics are lost because they are averaged out in time. Moreover, the averaging-out effect may lead to another undesirable effect: to obtain spurious multiscaling estimation for the time period immediately after some extreme tail event which biases the width of the {\it w}GHE spectrum, especially for the large $q$ values. For example, if one picks $\Delta t=750$ trading days, then a large tail event will cause a bias in the {\it w}GHE's for a period of approximately 750 days (the characteristic decay time of the bias also depends on $\theta$). For a choice of $\Delta t=120$ trading days instead, the forward in time 'contamination' of the {\it w}GHE spectrum will have a much shorter duration, but finite-size effects will rise considerably for such small $\Delta t$. In order to make a proper choice, first of all we set $\Delta t=\theta$. This choice is arbitrary, but, without loss of generality, corresponds to a time window for which the last day in the past is weighted by a factor $1/e$ less than the most recent day. Then, $\Delta t$ is determined by the rule that it should be: i) as small as possible (in order to capture short-term dynamical changes and avoid long-term 'contamination' of multiscaling due to large tail events) and ii) sufficiently large that the noise level due to finite-size effects is satisfactorily low. In order to make a plausible choice meeting the above criteria, we calculated the width $W_{1,4}(t)$ time-series with a range of $\Delta t$ from 60-1250 trading days for the S\&P~500 index as well as its random surrogate. Then, for each $\Delta t$, we calculate the average value of the width of the time-series and plot it $vs.$ $\Delta t$ in figure~\ref{fig:OptimumTheta}. For the real index, error bars correspond to the standard \textit{error} of the average, whereas for the surrogate index the error bars correspond to the standard \textit{deviation} of the surrogate $W_{1,4}$ time-series. We see that the average width decreases with $\Delta t$, both for the real index and the surrogate, as finite-size effects are reduced as $\Delta t$ rises. For the real index, the average width naturally reaches a plateau that corresponds to the actual multiscaling strength (on the average) of the index, whereas for the random surrogate the width slowly drops to zero, the theoretical value for a random series. The dashed horizontal line in the figure shows the value of the plateau calculated as the average of the widths for $\Delta t=250,375,500,750,1000$ and $1250$.
We see that already for $\Delta t \sim 250$ the finite-size effects have been considerably reduced and the average width of the real index has reached the plateau value well within the standard error. We also see that $\Delta t \sim 250$ is the smallest value for which the width of the real series lies at least one standard deviation above the average width of the surrogate series, which means that, for this value of $\Delta t$, the observed multiscaling is statistically strong (above the 'noise' level). Finally, for each of the depicted values of $\Delta t$, we plot the rate of \% improvement of the average width of the real series per day if $\Delta t$ is increased beyond each specific value shown. We observe that for the lowest values of $\Delta t$ the rate of improvement is high. Again, $\Delta t \sim 250$ is the smallest value for which this rate drops significantly, which means that increasing $\Delta t$ beyond $\sim 250$ trading days does not yield a significant improvement in noise-level reduction. For all the above reasons we chose $\Delta t=\theta=250$ trading days as our optimum window size and time-weighting factor.
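The selection rule described above can be sketched as follows; the sketch assumes the $W_{1,4}(t)$ time-series have already been computed for each candidate $\Delta t$ (for the real index and its surrogate), and it uses a simplified standard error that treats the time points as independent, whereas the error bars in figure~\ref{fig:OptimumTheta} also account for the fitting uncertainty of the individual $H_q$ values. All names and the synthetic data are illustrative.
\begin{verbatim}
import numpy as np

def width_summary(width_by_dt_real, width_by_dt_surr):
    # For each candidate Delta t: mean width with standard error (real index)
    # and mean width with standard deviation (surrogate), i.e. the quantities
    # compared in figure OptimumTheta.
    rows = {}
    for dt in sorted(width_by_dt_real):
        w_r = np.asarray(width_by_dt_real[dt])
        w_s = np.asarray(width_by_dt_surr[dt])
        rows[dt] = {"mean_real": w_r.mean(),
                    "sem_real": w_r.std(ddof=1) / np.sqrt(len(w_r)),
                    "mean_surr": w_s.mean(),
                    "std_surr": w_s.std(ddof=1)}
    return rows

# Synthetic stand-ins for the wGHE width series at each window length
rng = np.random.default_rng(1)
dts = [60, 120, 250, 375, 500, 750, 1000, 1250]
real = {dt: 0.10 + 5.0 / dt + 0.02 * rng.standard_normal(500) for dt in dts}
surr = {dt: 5.0 / dt + 0.02 * rng.standard_normal(500) for dt in dts}
summary = width_summary(real, surr)
# The chosen Delta t is the smallest one whose mean real width lies within one
# standard error of the plateau and at least one surrogate standard deviation
# above the surrogate mean.
\end{verbatim}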
\begin{figure}[h!]
\includegraphics[scale=0.3]{FIGURES/OptimumTheta.png}
\caption{\label{fig:OptimumTheta} Average width $W_{1,4}$ of the real S\&P~500 and of a random surrogate of the same data length, for various values of $\Delta t$. For the calculations, $\Delta t = \theta$. Error bars for the real series correspond to the standard error of the mean. Error bars for the surrogate series correspond to the standard deviation of the surrogate $W_{1,4}$ time-series. The horizontal line shows the plateau of the S\&P~500 width. The \% rate of reduction of the deviation of the width from the plateau value, per one-day increase in $\Delta t$, is shown on the right axis.}
\end{figure}
\subsection{{\it w}GHE's vs. time}
\label{GHEresults}
Before we proceed with our main analysis of the GHE time-series and the introduction of the scaling pattern identification methodology, we present in figure~\ref{fig:SP500_RAW_250} the raw (non-standardized) time series of $H_{0.1}^{(\theta)}$ and $H_4^{(\theta)}$ for the S\&P~500 index log prices and $\Delta t=\theta=250$ trading days, together with the {\it w}GHE's of the respective S\&P~500 surrogate series. For comparison, in figure~\ref{fig:SP500_RAW_750}, we show the same quantities for $\Delta t=\theta=750$ trading days.\footnote{Throughout this paper, when we refer to the 'surrogate series' of a particular stock market index, we mean the randomly generated index according to the procedure highlighted in section \ref{SurrIndex}, where the surrogate series matches the temporal profile of the volatility of the real index.} The width of each line shown corresponds to the uncertainty of the {\it w}GHE's, which is equal to one standard deviation above and below the mean value of the {\it w}GHE's, as determined by the fitting procedure in the GHE algorithm. This error depends on finite-size effects and varies significantly for each time segment considered, based on the quality of the least-squares fit in the GHE algorithm for that particular segment. The error is larger the smaller the value of $\Delta t$ (and $\theta$), and it is also larger the bigger $q$ is, because high-$q$ GHE's are more strongly affected by rare and large events. The average error for $H^{250}_{1}$ over the entire time-line is $0.028\pm0.014$ and the respective error for $H^{250}_{4}$ is $0.034\pm0.017$. As is evident from figures~\ref{fig:SP500_RAW_250}~and~\ref{fig:SP500_RAW_750}, the two {\it w}GHE's of the randomly generated surrogates are evenly distributed around 0.5, the expected value for a random series, whereas the two {\it w}GHE's of the real S\&P~500 data clearly depart from these values. We also notice that, for the surrogate series, $H_{0.1}^{(\theta)}$ and $H_4^{(\theta)}$ evolve almost parallel to each other and are close to each other at all times, as expected for a uniscaling series. However, the two {\it w}GHE's of the real time-series clearly differ in certain time periods and, in some periods, they even follow completely different trends. Notice that there are certain points in the series where $H^{(\theta)}_4$ shows an abrupt drop relative to $H^{(\theta)}_{0.1}$, a drop which decays with a characteristic time that is proportional to $\theta$. These correspond (as we will discuss later in this paper) to large tail events (big rises or drops) that bias the value of the high-$q$ \textit{w}GHE. This biasing effect carries on into the future for a characteristic time proportional to $\Delta t$, the length of the averaging window for the \textit{w}GHE calculations, and is also dependent on $\theta$. This fact demonstrates why it is highly desirable to choose a $\Delta t$ value as small as possible, so that we avoid masking the true multiscaling for a prolonged time following such large tail events, as long as finite-size effects are also kept at an acceptably low level.
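For reference, the basic idea behind the volatility-matched surrogate (the full procedure is the one of section \ref{SurrIndex}) can be sketched as follows: Gaussian log-returns are drawn with a local standard deviation that follows the volatility profile of the real index. The rolling-window volatility estimate and all names below are simplifications of ours.
\begin{verbatim}
import numpy as np

def volatility_matched_surrogate(log_returns, window=20, seed=0):
    # Gaussian surrogate log-returns whose local standard deviation follows
    # the (here: rolling) volatility profile of the real series; returns the
    # surrogate log-price path.
    rng = np.random.default_rng(seed)
    r = np.asarray(log_returns, dtype=float)
    vol = np.empty(len(r))
    for i in range(len(r)):
        seg = r[max(0, i - window + 1): i + 1]
        vol[i] = seg.std(ddof=1) if len(seg) > 1 else r.std(ddof=1)
    surrogate_returns = rng.standard_normal(len(r)) * vol
    return np.cumsum(surrogate_returns)
\end{verbatim}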
\begin{figure}[h!]
\begin{center}
\includegraphics[height=0.5\textheight,width=1.05\textwidth]{FIGURES/FIG1_SP500_RAW_theta=250.png}
\caption{\label{fig:SP500_RAW_250} Time series of {\it w}GHE's for $q=0.1$ and $q=4$ and $\theta=250$ trading days of (a) the S\&P~500 index log close prices and (b) the S\&P~500 surrogate index log close prices. The width of each line is equal to two standard errors of $H_q$, as determined by the least-squares fitting performed by the GHE algorithm of \cite{MoraDiMatteo2011}.}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[height=0.5\textheight,width=1.05\textwidth]{FIGURES/FIG1_SP500_RAW_theta=750.png}
\caption{\label{fig:SP500_RAW_750} Time series of {\it w}GHE's for $q=0.1$ and $q=4$ and $\theta=750$ trading days of (a) the S\&P~500 index log close prices and (b) the S\&P~500 surrogate index log close prices.}
\end{center}
\end{figure}
Table \ref{tab:MeanH} shows the mean values of $H_{1}$ and of the extreme {\it w}GHE's, $H_{0.1}$ and $H_4$, as well as $\mathbb{E}\left[|W_{0.1,4}|\right]$, the mean of the absolute value of the difference between the extreme-$q$ {\it w}GHE's, for the S\&P~500 series and its respective surrogate. The mean values are calculated over the entire history of the S\&P~500 ($\approx$1929 until Feb.~14, 2020) using Equations (\ref{eq:mean}) and (\ref{eq:W}). The standard error of the mean, which depends on the uncertainty of determining each value of the exponent $H_q(t)$ from the GHE method, is also shown, preceded by a $\pm$ sign. The standard deviations of the S\&P~500 and of its respective surrogate time-series, calculated by Equation (\ref{eq:stdev}), are also reported on a separate line together with their standard errors. The higher the value of $\mathbb{E}\left[|W_{0.1,4}|\right]$ or of $|B|$, the more multiscaling the financial time-series is overall. We see that the mean value of the {\it w}GHE for $q=1$ of the real S\&P~500 data is higher than 0.5 beyond the standard error, while the mean value of the {\it w}GHE for $q=4$ is lower than 0.5 beyond the standard error. Also, the mean absolute value of $W_{0.1,4}$ and the magnitude of $B$ are greater than 0 beyond the standard error. All the above imply that, on average, the S\&P~500 index, over its entire historical time span, is characterised by multiscaling. Moreover, the fact that $H_1$ is statistically greater than 0.5 suggests that the S\&P~500 has been, historically and on average, a slightly persistent market. On the other hand, the average values of the respective quantities for the randomly generated surrogate S\&P~500 time-series show that, on average, all the Hurst exponents of a random time series with a varying volatility profile that matches that of the real series exhibit neutral behavior. The above results agree with previous studies of the Hurst exponent of the S\&P~500.
\begin{table}[ht!]
\centering
\caption{\label{tab:MeanH} Statistics of {\it w}GHE's for the SP500 index: comparison between real and surrogate data. $\theta=750$ days.}
\vspace{2pt}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
& $H_{0.1}^{(\theta)}$ & $H_1^{(\theta)}$ & $H_4^{(\theta)}$ & $\mathbb{E}\left[|W_{0.1,4}|\right]$ & $B$ \\ \hline
\textbf{S\&P~500} & \makecell{$0.5511$ \\ $\pm 4.7\times10^{-4}$} & \makecell{$0.5244$ \\ $\pm 3.2\times10^{-4}$} & \makecell{$ 0.4454$ \\ $\pm 4.9\times10^{-4}$} & \makecell{ $0.1058$ \\ $\pm 6.8\times10^{-4}$} & \makecell{ $-0.02951$ \\ $\pm 8.2\times10^{-6}$} \\ \hline
\textbf{(Standard deviation)} & \makecell{$0.06101$ \\ $\pm 7.1\times10^{-6}$} & \makecell{ $0.05671$ \\ $\pm 4.8\times10^{-6}$} & \makecell{$0.06623$ \\ $\pm 8.8\times10^{-6}$} & \makecell{ $0.09003$ \\ $\pm 1.1\times10^{-5}$} & \makecell{ $0.020028$ \\ $\pm 2.0\times10^{-7}$} \\ \hline
\textbf{Surrogate data} & \makecell{$0.4943$ \\ $\pm 3.7\times10^{-4}$} & \makecell{$0.4940$ \\ $\pm 2.5\times10^{-4}$} & \makecell{ $0.4853$ \\ $\pm 3.1\times10^{-4} $} & \makecell{ $0.0089$ \\ $\pm 4.8\times10^{-4}$} & \makecell{$-0.001976$ \\ $\pm 4.8\times10^{-7}$} \\ \hline
\textbf{(Standard deviation)} & \makecell{ $0.03615$ \\ $\pm 5.7\times10^{-6} $ } & \makecell{$ 0.03472$ \\ $\pm 2.0\times10^{-5}$} & \makecell{ $0.04324$ \\ $\pm 5.0\times10^{-6}$} & \makecell{ $0.05636$ \\ $\pm 7.6\times10^{-6}$} & \makecell{ $0.016943$ \\ $\pm 3.2\times10^{-7}$} \\ \hline
\end{tabular}
\end{table}
Turning to the temporal evolution of $H_q$ for various values of $q$, it is already apparent from figures~\ref{fig:SP500_RAW_250}~and~\ref{fig:SP500_RAW_750} that persistence and multiscaling may vary with time, as there are time periods when the index seems to be persistent, others when it is neutral and others when it is anti-persistent. Similarly, there are time periods when it is multiscaling and others when it is uniscaling, as indicated by the relative deviation between $H_{0.1}$ and $H_4$. There are also some time periods where $H_{0.1}$ and $H_4$ seem to evolve with similar local trends and some time periods where they seem to follow different or even opposite trends. The latter signifies an anomalous kind of $H_q$-profile evolution that is probably related to particular changes in the underlying dynamics of the market. In order to investigate these matters in more detail, we perform the analysis described in the next paragraphs.
First, we apply a 2nd-order polynomial smoothing filter to the $H_q^{'(\theta)}$ data, with a time window of length equal to 240 trading days ($\approx$1 year), in order to reduce noise and more clearly identify the underlying temporal patterns in the GHE spectra.
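One common implementation of such a 2nd-order polynomial smoothing is a Savitzky-Golay filter; the sketch below is merely an illustration of this step (the exact filter used in our calculations may be implemented differently). With the \textit{w}GHE's computed every 5 trading days, a window of about 48 points spans roughly 240 trading days; the filter requires an odd window length, hence 49 points are used here.
\begin{verbatim}
import numpy as np
from scipy.signal import savgol_filter

def smooth_ghe(h_q_series, window_points=49, polyorder=2):
    # 2nd-order polynomial (Savitzky-Golay) smoothing of a wGHE series
    # sampled every 5 trading days: ~49 points correspond to ~240 trading
    # days, i.e. roughly one calendar year.
    return savgol_filter(np.asarray(h_q_series, dtype=float),
                         window_length=window_points, polyorder=polyorder)
\end{verbatim}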
Next we inspect the smoothed $H_q^{'(\theta)}$ series for S\&P~500 log prices, and the five values $q=0.1,1,2,3,4$, as shown in figure~\ref{fig:PatternDefinition}. We identify several distinct \textit{Temporal Patterns} (TP) in the co-evolution of the series of the extreme $q$'s ($q=0.1, 4$) based on:
\begin{enumerate}
\item The standardized 'width' $W'_{0.1,4}$ of the {\it w}GHE $q$-spectrum, as defined in Equation (\ref{eq:Wnorm}), which, as already said, is a measure of the multiscaling of the index at time $t$. Looking at the co-evolution of the five $H'_q$'s shown in figure~\ref{fig:PatternDefinition}, we distinguish time periods where the five $H'_q$'s are very close to each other (signifying strongly uniscaling behavior), time periods where the five $H'_q$'s are clearly apart (signifying multiscaling) and time periods where they are strongly diverging, i.e. time periods where multiscaling is stronger. Therefore, we define three different levels of multiscaling by comparing $W'_{0.1,4}$ to a low and a high threshold value, $\phi_L$ and $\phi_H$. If $W'_{0.1,4}(t)>\phi_H$ we consider that the index is characterized by strong multiscaling (denoted by the letter M or A), while for small widths, $W'_{0.1,4}(t)\lesssim\phi_L$, it is characterized as uniscaling (denoted by the letter S). For intermediate widths, $\phi_L\lesssim W'_{0.1,4}(t)\lesssim\phi_H$, we characterise it as 'weakly multiscaling' and denote it by M$^L$ (or A$^L$). Similarly, when we measure multiscaling using the $B$-proxy instead of $W$, we use the standardized value $B'$, as defined in Equation (\ref{eq:Bnorm}), and compare it to $\phi_L$ and $\phi_H$: $B'>\phi_H$ denotes a strong multiscaling pattern (M or A), $B'<\phi_L$ a uniscaling S pattern, and $\phi_L\lesssim B'(t)\lesssim\phi_H$ a 'weakly multiscaling' M$^L$ or A$^L$ pattern.
\item The difference between the 'local trends' of the extreme \textit{w}GHE curves $H'_{0.1}(t)$ and $H'_4(t)$ at time $t$. The local trends could be defined as the time derivatives of the \textit{w}GHE series at time $t$, but in order to obtain a statistically significant measure we use the Change Point Analysis (CPA) method, as will be described later in the paper. We denote by the letter M (which stands for 'multiscaling'), or M$^L$ (which stands for 'low' multiscaling), a wide TP, as determined by the procedure described in the previous point, for which the local trends are statistically equal. In an M (or M$^L$) pattern, the extreme $H_q$ time-series move parallel to each other and thus the width $W$ remains statistically unchanged. Conversely, we denote by the letter A (or A$^L$), which stands for 'asymmetric' multiscaling, a wide TP in which the extreme {\it w}GHE's evolve in statistically different directions and/or at different rates.
\item The 'asymmetry' in the local trends: 'A' patterns come in the following variations: (i) A$^-$: a TP in which $H_4$ drops at a rate faster than $H_{0.1}$ either drops or rises; (ii) A$^+$: a TP in which $H_{0.1}$ rises at a rate faster than $H_4$ either drops or rises; (iii) A$^0$: a TP in which $H_{0.1}$ rises at approximately the same rate as $H_4$ drops; (iv) mA$^-$: a pattern in which $H_4$ rises at a rate faster than $H_{0.1}$ either drops or rises\footnote{The prefix $m$ stands for the mirror image of the pattern.}; (v) mA$^+$: a pattern in which $H_{0.1}$ drops at a rate faster than $H_4$ either drops or rises; (vi) mA$^0$: a pattern in which $H_{0.1}$ drops at approximately the same rate as $H_4$ rises. For the 'weakly multiscaling' asymmetric TP's A$^L$, we do not define any '+' or '-' variants, just the diverging TP A$^L$ and the converging (mirror) TP $mA^L$.
\item The relative variation among the GHE's across the $q$ values, \textit{i.e.} the ordering of the GHE's \textit{vs.} $q$ at a particular time instance. Specifically, in some time periods the concavity relation can be violated, giving way to a \textit{'reversed' TP}, in which {\it w}GHE's of higher $q$'s are larger than {\it w}GHE's of smaller $q$'s. We denote such TPs by attaching the prefix 'r' to the symbols of any of the above TPs. It is important to highlight that 'reversal' is a particularly rare phenomenon, as it entails that dependence is stronger for tail events than for common events. 'Reversal', however, is realistically expected for severe crisis periods, where the price change distribution strongly deviates from a Gaussian distribution and tail events are very frequent and highly correlated.
\item The transition state from a uniscaling period to a multiscaling period and vice-versa. If, at a particular time $t$, we have a weakly multiscaling TP, i.e. $W'_{0.1,4}>\phi_L$ (or $B'>\phi_L$), and the local trends of the extreme \textit{w}GHE's are statistically \textit{diverging}, tending to turn into a multiscaling pattern, we define a 'transition' weakly-multiscaling TP that we denote by A$^L$. If, on the other hand, the local trends of the extreme \textit{w}GHE's are statistically \textit{converging}, we define a 'transition' weakly-multiscaling TP that we denote by $mA^L$, i.e. the 'mirror' of A$^L$.
\end{enumerate}
In figure~\ref{fig:PatternRecognition} we summarize and schematically present the TP's described above.
\begin{figure}[h!]
\includegraphics[scale=0.35]{FIGURES/Fig02_SP500_PATTERN_DEFINITION_W0_14.png}
\caption{\label{fig:PatternDefinition} Temporal patterns in \textit{w}GHE time-series for the S\&P~500 index for the period Jan. 2, 1970 to Jan. 2, 2013.}
\end{figure}
\begin{figure}[h!]
\includegraphics[scale=0.17]{FIGURES/patterns_recognition.png}
\caption{\label{fig:PatternRecognition} Schematic depiction of GHE TP's. In the upper plot, a schematic representation of two $H_q$ time-series is shown for the extreme $q's$. Blue color represents the minimum $q$ series and the red color the maximum $q$ series. In the lower plot, the respective TP's are labeled.}
\end{figure}
\subsection{TP identification algorithmic procedure}
In this section we present the algorithmic procedure to extract the TPs from the {\it w}GHE series in a statistically rigorous way. The procedure consists of the following steps:
\begin{enumerate}
\item First, select the standardized metric $\gamma'$ to be used as a measure of multiscaling, $\gamma'=W'_{q_1,q_2}$ or $\gamma'=B'_{q'_1,q'_2}$, as well as the respective pair of extreme series $H'_{q_1}$ and $H'_{q_2}$ that will be used for determining the local trends. Select a $\theta$ value and a sliding window length $\Delta t$ and compute the relevant \textit{w}GHE's for both the real series and a random surrogate, with log-returns drawn from a normal distribution with standard deviation equal to the volatility profile of the real series. Compute the relevant standard deviations of the surrogate series from Equations~(\ref{eq:stdev}) and (\ref{eq:Stdpooled}) and obtain the standardized series from Equations~(\ref{eq:NormH}), (\ref{eq:Wnorm}) or (\ref{eq:Bnorm}). If $\gamma'$ is set equal to $B'_{q'_1,q'_2}$, then compute several $H_{q}$ series between the chosen extreme values. In this work we calculated a set of $H_{q}$'s in the range $q=0.1-1$, with the extreme series being those for $q'_1=0.1$ and $q'_2=1$. Then, for each time $t$, apply a linear least-squares fit to the data $H_q(t)$ \textit{vs.} $q$, the slope of which is equal to $B(t)$. For $\gamma'= W'_{q_1,q_2}$, we chose $q_1=0.1$ and $q_2=4$ in the present work.
\item Smooth out the computed raw standardized series using a 2nd order polynomial smoothing function. We used a smoothing window of 48 data points which (for a skipping window of 5 trading days that we used for \textit{w}GHE calculations) corresponds to 240 trading days, \textit{i.e.} approximately one calendar year.
\item Apply the Change Point Analysis (CPA) algorithm (\cite{killick2012optimal}) to the two extreme series $H'_{q_1}$ and $H'_{q_2}$, in order to obtain time intervals characterized by the same local trend (rate of increase of the {\it w}GHE's) as well as the values of those trends. The same or different segment limits ('binning') can be chosen for the two series. If the same binning is selected (we chose this option in the present work), then, in practice, CPA is applied to one of the two series (or, alternatively, to the $\gamma$ series), and the automatically extracted bin limits are then enforced when applying CPA to the other series. The CPA analysis breaks each series into a set of segments of potentially different lengths $\{\Delta t_i\}$, and outputs a unique slope value, $\beta_i^{q_1}$ and $\beta_i^{q_2}$ respectively, for each segment $i$ of the standardized \textit{w}GHE series for $q_1$ and $q_2$.
\item For each data point at time $t$, statistically determine the degree of multiscaling by checking $\gamma'$ against the predefined threshold values $\phi_L$ and $\phi_H$: if $\gamma'>\phi_H$, then the dynamics of the underlying series is multiscaling (M-type or one of the various A-type TP's); else, if $\gamma'<\phi_L$, it is characterised as uniscaling (S); otherwise it is characterised as 'weakly multiscaling' (M$^L$-type or A$^L$/$mA^L$-type TP's). A minimal code sketch of steps 4-7 is given after this list. $\phi_L$ should, in general, be much smaller than 1 and $\phi_H$ greater than 1. In the present work, we use $\phi_L=0.32$ and $\phi_H=1.64$ as threshold values, which correspond to the $25^{th}$ and $95^{th}$ percentiles of the Gaussian distribution, respectively. Other choices are of course possible. The rationale behind these particular choices is that the limit for true uniscaling should be considerably \textit{lower} than the 'noise level' of $W$, as defined by 1 standard deviation of the random surrogate, while the limit for strong multiscaling should be significantly \textit{higher} than the noise level. Therefore, $\phi_L=0.32$ signifies that only 25\% of the widths $W'$ in a random time-series are below this threshold, and thus the scaling of the real series at any point in time for which $W'$ is smaller than this value can be characterized as uniscaling in a statistically significant manner. Similarly, $\phi_H=1.64$ signifies that only 5\% of the widths in the random time-series are above this limit; therefore, the scaling of the real series at any point in time for which $W'$ is greater than this value can be characterized as multiscaling in a statistically significant manner. Finally, for times when $W'$ lies between these values, the scaling is characterized as 'weak' multiscaling.
\item For each data point on day $t$, compare the relative slopes of the extreme {\it w}GHE series, as extracted by CPA, at the time bin $i$ to which $t$ belongs, in order to detect the different forms of multiscaling, i.e. to identify whether the TP is an M-type or an A-type. In particular, to designate an A pattern, we require that the absolute difference in the slopes should be $\phi_S$ standard deviations above 0, i.e.:
\begin{equation}
\label{eq:AvsMcheck}
\frac{|\beta_i^{q_1}-\beta_i^{q_2}|}{\sigma(|\beta_{surr}^{q_1}-\beta_{surr}^{q_2}|)}>\phi_S,
\end{equation}
where $\beta_i^{q_1}$, $\beta_i^{q_2}$ are the slopes of $H_{q_1}$ and $H_{q_2}$ at bin $i$, $\beta_{surr}^{q_1}$, $\beta_{surr}^{q_2}$ are the respective slopes computed on the surrogate data, $\sigma(...)$ denotes the standard deviation over the surrogate segments and $\phi_S$ is the threshold of this test.\footnote{In general, one could use different thresholds $\phi$ for the width test involved in the multiscaling \textit{vs.} uniscaling characterization and for the slope tests involved in the characterization of the type of multiscaling. In the present work, we chose the same value $\phi_S=1.64$ (equal to $\phi_H$) for all tests.} In other words, this formulation returns an 'A' pattern only if the difference in the local trends is statistically greater than the variability of the local trends of the surrogate index, as measured by applying a similar CPA procedure to the extreme series $H_{q_1}$ and $H_{q_2}$ of the random surrogate data. In practice, one can use on the surrogate data the same binning as determined by the CPA of the real data, which is what we did in the present work.
\item If $W'_{q_1,q_2}>\phi_H$, then distinguish among the various $A$-type multiscaling TP's. First, compute the absolute value of the difference of the absolute values of the local trends $\beta_i^{q_1}$ and $\beta_i^{q_2}$, and check its statistical significance by
\begin{equation}
\label{eq:AvsAminuscheck}
\abs*{ \frac{|\beta_t^{q_1}|-|\beta_t^{q_2}|}{\sigma(|\beta_{surr}^{q_1}|-|\beta_{surr}^{q_2}|)}}>\phi_S.
\end{equation}
If this condition is false, then the TP is an A$^0$. If it is true, then it is either an A$^-$ or an A$^+$, or one of their respective mirrors mA$^-$, mA$^+$. In the latter case, in order to determine which one of the four it is, compare the absolute values of the two $\beta$'s and also use the sign of each $\beta$. Specifically:
\begin{itemize}
\item[i] if $|\beta_t^{q_1}|<|\beta_t^{q_2}|$ and $\beta_t^{q_2}<0$, it is A$^-$,
\item [ii] if $|\beta_t^{q_1}|<|\beta_t^{q_2}|$ and $\beta_t^{q_2}>0$, it is mA$^-$,
\item [iii] if $|\beta_t^{q_1}|>|\beta_t^{q_2}|$ and $\beta_t^{q_1}>0$, it is A$^+$,
\item [iv] if $|\beta_t^{q_1}|>|\beta_t^{q_2}|$ and $\beta_t^{q_1}<0$, it is mA$^+$.
\end{itemize}
\item If $\phi_L<W'_{q_1,q_2}<\phi_H$, then distinguish between the A$^L$ TP and the $mA^L$ TP, the first corresponding to a diverging weakly multiscaling asymmetric pattern and the second to a converging one. The first often precedes a transition from a uniscaling state (S) to an M or M$^L$ multiscaling state; the second precedes the reverse transition, \textit{i.e.} from a multiscaling to a uniscaling state. The condition is that the relative trends $\beta_t^{q_1}$ and $\beta_t^{q_2}$ are sufficiently different, i.e. that they satisfy condition~(\ref{eq:AvsMcheck}), and:
\begin{itemize}
\item[i] if $\beta_t^{q_1} > \beta_t^{q_2}$, then the TP is an A$^L$;
\item[ii] if $\beta_t^{q_1} < \beta_t^{q_2}$, then it is an $mA^L$.
\end{itemize}
\item In case of 'reversal', i.e. if $H'_{q_1}<H'_{q_2}$: then one must simply interchange $H'_{0.1}$ with $H'_{4}$ in the equations presented in all the above points. The resulting TP's will be the 'reverse' TP denoted by an extra letter 'r' in front of the respective TP symbol.
\end{enumerate}
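The per-time-point classification of steps 4-7 can be condensed into the following minimal sketch. It assumes that the per-segment slopes of the extreme standardized \textit{w}GHE series (and the corresponding slope variability of the surrogate) have already been obtained from the CPA step; the 'reversed' patterns of the last step are not handled, and all names are illustrative.
\begin{verbatim}
PHI_L, PHI_H, PHI_S = 0.32, 1.64, 1.64       # thresholds used in this work

def classify_tp(gamma, b1, b2, sigma_diff, sigma_absdiff):
    # gamma         : standardized multiscaling measure (W' or B') at time t
    # b1, b2        : CPA slopes of H'_{q1} (small q) and H'_{q2} (large q)
    #                 in the segment containing t
    # sigma_diff    : std of (beta^{q1} - beta^{q2}) over surrogate segments
    # sigma_absdiff : std of (|beta^{q1}| - |beta^{q2}|) over surrogate segments
    if gamma < PHI_L:                                  # uniscaling
        return "S"
    asymmetric = abs(b1 - b2) / sigma_diff > PHI_S     # Eq. (eq:AvsMcheck)
    if gamma < PHI_H:                                  # weak multiscaling
        if not asymmetric:
            return "M^L"
        return "A^L" if b1 > b2 else "mA^L"            # diverging vs converging
    if not asymmetric:                                 # strong, symmetric
        return "M"
    # strong, asymmetric: Eq. (eq:AvsAminuscheck) separates A^0 from the rest
    if abs(abs(b1) - abs(b2)) / sigma_absdiff <= PHI_S:
        return "A^0"
    if abs(b1) < abs(b2):                              # large-q slope dominates
        return "A^-" if b2 < 0 else "mA^-"
    return "A^+" if b1 > 0 else "mA^+"                 # small-q slope dominates

# Example: a strongly multiscaling point where H'_4 drops much faster
print(classify_tp(gamma=2.1, b1=0.004, b2=-0.009,
                  sigma_diff=0.003, sigma_absdiff=0.003))   # -> "A^-"
\end{verbatim}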
The results of the TP identification analysis presented above, as applied to the S\&P~500, NIKKEI, ASE and SENSEX indices, are shown in figures~\ref{fig:SP500close}-\ref{fig:SENSEXclose}. In each of these figures the following are plotted: in (a), the weighted volatility series of the index (left axis), calculated by Equation (\ref{eq:volatility}), and the index close prices (right axis); in (b), the normalized \textit{w}GHE time-series $H'(q)$ of the index for $q=0.1,1,2,3,4$, where we have marked the identified TP's obtained by setting $\gamma'=W'_{0.1,4}$ and using $H'_{0.1}$ and $H'_{4}$ as the extreme \textit{w}GHE's; TP's are marked by color mapping; in (c), the time evolution of the normalized \textit{w}GHE width $W'_{0.1,4}$ together with the width of the respective surrogate index; in (d), the $H'(q)$ time-series for $q=0.1, 0.5, 1$ with the TP's identified by setting $\gamma=-B$ and using $H'_{0.1}$ and $H'_{1}$ as the extreme \textit{w}GHE's; these TP's are marked by the same color mapping as in plot (b). Finally, in (e), we show the $B$-proxy time-series of the real index together with the respective $B$ proxy of the surrogate index.
By examining figures~\ref{fig:SP500close}~-~\ref{fig:SENSEXclose}, we notice several interesting facts:
\begin{enumerate}
\item As clearly seen from plots (b) and (c), the scaling of the various indices varies significantly with time: there are certain time periods when $W'_{0.1,4}$ is much higher than the average width of the GHE spectrum of the surrogate index, signifying a definite multiscaling structure of the underlying dynamics, while there are time periods when $W'_{0.1,4}$ is very small, signifying a uniscaling structure. Moreover, the transition from a period of multiscaling behavior to a period of uniscaling behavior can be rather sharp, a fact that points to transitions occurring in the underlying index dynamics.
\item There are time periods of persistent behaviour ($H_1^{'(\theta)}>0$), time periods of anti-persistent behavior ($H_1^{'(\theta)}<0$) and time periods of neutral behavior ($H_1^{'(\theta)} \approx 0$). If one generalizes the notion of 'persistence' to include GHE's for $q$ values different from 1, then there are time periods when the small-$q$ GHE's rise or stay approximately the same, while $H'_4$ is dropping, \textit{i.e.} moving in the opposite direction towards a more 'anti-persistent' scaling. This behavior, which is characterised by A$^0$ or A$^-$ TPs, is connected to one or more isolated, large price change events (tail events) that occur in a direction opposite to the local market trend (\textit{e.g.} a large price drop in an otherwise rising market or vice-versa). A notable example is the 'Black Monday' event that occurred on Monday, Oct. 19th 1987 (and Tuesday Oct. 20 in some markets), where the S\&P~500 (arrow~No~6), for example, lost more than 20\% in one day. The event was followed by a large rise the next day and the index made up for all the losses soon after. 'Black Monday' occurred amidst a bullish market period and was similarly followed by a rising trend. A single tail event of this size causes a large bias, especially in the high-$q$ \textit{w}GHE's, hence the pronounced A$^-$ TP observed in figures~\ref{fig:SP500close}b and \ref{fig:NIKKEIclose}b for both S\&P~500 and NIKKEI (arrow~No~4). Such TP's are also seen after the market drops related to the 'Asian' and the 'Russian' crises in 1997 and 1998 respectively\footnote{For the related dates of these isolated market drops, see table~\ref{tab:DeletedDates}.}, which also occurred amidst a rising market, as well as on several other occasions along the index price timeline, for example: (i) S\&P~500: April 16-17, 1935, when a $\approx 9\%$ drop was followed by a $\approx 9\%$ rise; May 16-17, 1935, when a $\approx 7\%$ drop was followed by a $\approx 9.4\%$ rise; and August 16 and 19, 1935, when a $\approx 8\%$ drop was followed by a $\approx 7\%$ rise (arrow~No~1 in figure~\ref{fig:SP500close}). (ii) NIKKEI: June 26-27, 1972, when a $\approx 8\%$ drop was followed by a $\approx 5.3\%$ rise (arrow~No~2 in figure~\ref{fig:NIKKEIclose}), and Jun. 26-27, 1972, when a $\approx +4\%$ rise occurred amidst a dropping trend (arrow~No~3 in figure~\ref{fig:NIKKEIclose}). Notably, these A$^-$ patterns are not present in the $B$-proxy series shown in plots (d) of the said figures, since the small-$q$ \textit{w}GHE's are not affected as much by tail events, except for the 1935 large A$^-$ TP for S\&P~500, which appears there too, because that particular TP was caused by more than one big tail event over an extended period of time. However, there are time periods when the small-$q$ \textit{w}GHE's show a sharp rise while $H'_4$ drops or is almost unchanged (a behavior that yields an A$^+$ TP). This behavior hints at a situation where one or more large events occur in the same direction as the current market trend, meaning that they give a large 'persistent' boost to the small-$q$ $H'_q$'s. An example of the latter behavior is the 2008 real-estate crisis (arrow~No~9 in figure~\ref{fig:SP500close}), an event exogenous to the stock market, where large index price drops occurred amidst a period of a rapidly falling market, evidence of persistent 'herding' behavior following the 2008 crash. These few large drops cause a sharp rise in $H'_{0.1}$ and $H'_1$, rather than a drop in $H'_4$.
The same pattern is seen in 2012 for the S\&P~500, where the observed A$^+$ TP is a result of big daily rises amidst a rising market period. One more example of an A$^+$ TP coming from several big rising events amidst a period of a rising market trend is the one shown by arrow~No~4 in figure~\ref{fig:SP500close}, in the period May~1955-Sep.~1956. Finally, there are other time periods when the scaling is consistently persistent (or anti-persistent, or neutral) for all values of $q$, meaning that either the period is void of large rising or dropping events and/or that both large and small events follow the same scaling behavior.
\item Multiscaling behavior is not necessarily correlated with periods of increased index volatility or periods of persistent scaling: there are time periods showing both high volatility and multiscaling/persistent behaviour, as well as time periods with low volatility and multiscaling/anti-persistent behaviour. Time periods when volatility and multiscaling are positively correlated include those which contain a single extreme market-drop tail event that is sufficiently large to impact both the volatility and the GHE calculations. An example of this is seen in the period 1987-1988, following 'Black Monday'. As an example of a period showing a large increase in multiscaling strength while volatility remains low, we mention the first semester of 1993 for the S\&P~500. During this period, we observe a type of asymmetric multiscaling which is the product of a sequence of smaller tail events distributed over a longer period of time, and which also depends on how these events are temporally correlated. Notice also that A$^-$ TP's caused by a single tail event (which necessarily leads to a sharp volatility rise as well) and A$^-$ TP's caused by temporal correlations and tail events distributed over an extended period of time have different shapes: in the first case, the width of the TP decays (at the characteristic rate that depends on the choice of $\theta$), whereas, in the second case, it does not decay immediately, but remains wide for a longer time, while the variation of its width does not depend on the choice of $\theta$.
\item At the beginning of a bubble, a strong uniscaling behaviour is observed, during which the investor heterogeneity seems to be low. As the market starts to grow, the complexity of the time series appears to increase in both measures of multiscaling through an asymmetric TP (usually A$^-$ or A$^L$) and then comes back to uniscaling or moderate multiscaling after the bubble has exploded. This is apparent in both the Dot.com bubble and the US real-estate bubble, but also in the ASE 2000 bubble, the ASE 1990 crash, as well as the Japanese 1991 bubble. It is even apparent before the 'Black Monday' crash, both for NIKKEI and S\&P~500, where we notice a clear transition from uniscaling to strong multiscaling via an A$^L$ TP starting back in 1986. In general, before any critical event we necessarily have a transition from uniscaling to multiscaling, occurring from a few months to a couple of years before the beginning of the bubble break or crash. It must be noted that the A$^-$/A$^L$/A$^0$-type TP's that we encounter in these transitions are not the 'after-effect' of single large tail market events, but rather the consequence of a transition to multiscaling behavior due to smaller tail events occurring over a prolonged period. Examples of such TP's are: (i) S\&P~500 1956-1957 (arrow~No~5 in figure~\ref{fig:SP500close}), an A$^L$ TP followed by an A$^0$ TP, which was actually followed by a small crash (micro-bubble) in the last quarter of 1957; (ii) S\&P~500 1961, an A$^-$ TP that was followed by a small crash in 1962; (iii) ASE: the pronounced A$^-$-A$^0$-A$^+$-A$^0$ sequence before the big ASE 2000 bubble, as well as the A$^-$-A$^0$ TP before the 1990 crash; (iv) the A$^L$ TP's just before the 2000 'dot.com' bubble in S\&P~500 (arrow~No~8 in figure~\ref{fig:SP500close}), NIKKEI (arrow~No~5 in figure~\ref{fig:NIKKEIclose}) and SENSEX (arrow~No~1 in figure~\ref{fig:SENSEXclose}); (v) the A$^L$ TP's just before the 2008 US real-estate crisis in S\&P~500 (arrow~No~9 in figure~\ref{fig:SP500close}), NIKKEI (arrow~No~6 in figure~\ref{fig:NIKKEIclose}), ASE (arrow~No~6 in figure~\ref{fig:ASEclose}) and SENSEX (arrow~No~3 in figure~\ref{fig:SENSEXclose}). See also the plots in the Appendix, showing zoomed versions of some particular time periods for S\&P~500 and NIKKEI.
\item The multiscaling width $W_{0.1,4}$ and the multiscaling depth $B$ convey different information in some cases. For example, during the 2008 great financial crisis, $W_{0.1,4}$ does not increase much while $B$ increases sharply. This is because $W_{0.1,4}$ better captures the heterogeneity in the market, which is, to some extent, lower when all investors go in the same direction (selling orders), while $B$ measures the complexity of such heterogeneity, as the distribution of $H_q$ inside a range of $q$'s matters instead of only its boundary values, as we also mentioned above.
\item Multiscaling does not necessarily imply bad market conditions. When we have multiscaling of type M, it usually reflects good market conditions, even if there is increased heterogeneity (and complexity) in the market.
\item The multiscaling time periods detected by the GHE-spectrum proxy $B$ are, overall, in line with the ones detected with $W$. However, some differences are observed in specific time periods. In particular, during crisis events, $B$ indicates a more symmetric multiscaling while $W$ shows many more asymmetries. This is due to the fact that $W$ is affected by the tails of the price change distributions considerably more than $B$. In general, it is useful to look at both measures of multiscaling, as they emphasize opposite ends of the price change distribution (small and large changes respectively) and are thus complementary to each other.
\end{enumerate}
\begin{figure}[h!]
\begin{center}
\includegraphics[height=0.6\textheight,width=1.05\textwidth]{FIGURES/SP500_final_arrow.png}
\caption{\label{fig:SP500close} S\&P~500 index price time-series and scaling TPs: (a) Index closing prices and weighted volatility. (b) Normalized \textit{w}GHE's for $q=0.1,1,2,3,4$ with identified TPs using $H(0.1)$, $H(4)$ and $\gamma=W_{0.1,4}$. (c) Width $W'_{0.1,4}$ of the S\&P~500 normalized GHE's for the real index data and the respective surrogate data. (d) Normalized \textit{w}GHE's for $q=0.1,0.5,1$ with identified TPs using $H(0.1)$, $H(1)$ and $\gamma=-B$. (e) $B$ proxy of the S\&P~500 normalized GHE's for the real index data and the respective surrogate data. Numbered arrows show particular market events, as referenced in the text.}
\end{center}
\end{figure}
We now elaborate on some events on each of the indexes analysed. Regarding S\&P~500, depicted in figure~\ref{fig:SP500close}, we can highlight the following facts:
\begin{enumerate}
\item The Hurst exponent $H_1$ has a positive trend up to 1971, when it reverses to a long-term negative trend, moving from a persistent signal to a more random one. This coincides with the end of the Bretton Woods system.
\item Before the Black Monday of October 1987, the time series presents a uniscaling pattern followed by a moderate asymmetric pattern which is then followed by strong multiscaling. At the same time, the volatility is quite low, meaning that the increased complexity is not driven by a single event but by the market structure.
\item Before the Dot.com bubble burst in the second quarter of 2000, we have, for both $W_{q_1,q_2}$ and $B$, a sequence of patterns, i.e. converging moderate multiscaling - uniscaling - diverging moderate multiscaling - strong multiscaling. This is accompanied by relatively low but increasing volatility. This is a signal that the market is going to saturate and a probable drop is expected. It can be attributed to the fact that the increasing multiscaling, along with a rising volatility, increases the market heterogeneity, which becomes driven by turbulence in trading patterns.
\end{enumerate}
\begin{figure}[h!]
\begin{center}
\includegraphics[height=0.5\textheight,width=1.05\textwidth]{FIGURES/NIKKEI_final_arrow.png}
\caption{\label{fig:NIKKEIclose} NIKKEI index price time-series and scaling TPs: (a),(b),(c),(d) and (e) exactly as described in caption of figure~\ref{fig:SP500close}. Numbered arrows show particular market events, as referenced in text.}
\end{center}
\end{figure}
In figure~\ref{fig:NIKKEIclose} we show another major index, the NIKKEI. Some particular features of this index are:
\begin{enumerate}
\item A uniscaling behaviour at the beginning of 1986 which evolves into an asymmetric multiscaling behaviour of type A$^0$ and A$^-$ and then into persistent multiscaling, even after the bubble exploded in 1991.
\item After the bubble exploded in 1991, the market follows an anomalous scaling. In fact, the market remains moderately multiscaling. This reflects the heterogeneity generated by the monetary policies adopted by the central bank of Japan.
\item The series appears persistent from 1970 up to the bubble explosion, after which a mix of neutral and anti-persistent behaviour becomes more prevalent. This behaviour persists up to 2007.
\end{enumerate}
\begin{figure}[h!]
\begin{center}
\includegraphics[height=0.6\textheight,width=1.05\textwidth]{FIGURES/ASE_final_arrow.png}
\caption{\label{fig:ASEclose} ASE index price time-series and scaling TPs: (a),(b),(c),(d) and (e) exactly as described in caption of figure~\ref{fig:SP500close}. Numbered arrows show particular market events, as referenced in text.}
\end{center}
\end{figure}
In figure~\ref{fig:ASEclose} we report the analysis related to the Athens stock market. The plots show:
\begin{enumerate}
\item Before the 2000 bubble, an asymmetric multiscaling period is identified, which is a signature of a turbulent period. This is retrieved using both the $W$ and $B$ metrics, which remain quite high for the subsequent period. This is a combination of the global turbulence of 1997 and 1998 and the Dot.com bubble which was about to break.
\item From 2005 to the third quarter of 2008 we have a succession of uniscaling and moderate multiscaling patterns, which is then followed by a long period of a moderate multiscaling pattern, suggesting that at the inception of the global financial crisis a complex dynamic with stronger heterogeneity was taking place.
\item The $B$ proxy agrees almost perfectly with the multiscaling width. This is mainly because, apart from the Dot.com bubble, the turbulent time periods were generated by complex dynamics which increase the heterogeneity of the process in a symmetric way rather than by extreme tail events.
\end{enumerate}
\begin{figure}[h!]
\begin{center}
\includegraphics[height=0.6\textheight,width=1.05\textwidth]{FIGURES/SENSEX_final_arrow.png}
\caption{\label{fig:SENSEXclose} SENSEX index price time-series and scaling TPs: (a),(b),(c),(d) and (e) exactly as described in caption of figure~\ref{fig:SP500close}. Numbered arrows show particular market events, as referenced in text.}
\end{center}
\end{figure}
Finally, we report in Figure~\ref{fig:SENSEXclose} the plots for the Bombay stock market (SENSEX). We observe the following:
\begin{enumerate}
\item Between the third quarter of 2003 and the first quarter of 2004 we see a short uniscaling behaviour followed by a strong multiscaling behaviour. This corresponds to the election of Sonia Gandhi's coalition in May 2004 (arrow~No~2), which generated a market drop of 15.52\% and consequent heterogeneity in market conditions. In particular, it is possible to notice that the highest and lowest $H_q^{'(\theta)}$ entering $W_{0.1,4}$ go in opposite directions, resulting in an A$^0$ type of pattern.
\item $H_1^{'(\theta)}$ is always higher than 0, which implies persistent behaviour, while $H_4^{'(\theta)}$ is, apart from a few local exceptions, always negative, implying anti-persistent behaviour.
\item For this index, the $B$ and $W$ multiscaling proxies disagree in most cases. In fact, it is possible to notice that in time periods of high width, such as 2001-2003 and 2011-2012, we have a relatively low $B$, which, together with the high volatility of the series, makes clear that the high width is due more to tail events than to temporal correlations.
\end{enumerate}
It is important to notice that major indices are affected by global events which also spill over to the peripheral ones, while the opposite is not always true. In fact, it is possible to notice that 'Black Monday', the Japanese bubble and the Dot.com bubble generated a scaling change also in markets different from the ones in which they originated. In contrast, the main shifting events in peripheral indices do not affect the main ones. Given the results depicted in figures~\ref{fig:SP500close}-\ref{fig:SENSEXclose}, we conclude that a transition from a uniscaling to a multiscaling pattern, usually through an asymmetric pattern \textbf{of type A$^-$ (strongly multiscaling and asymmetric) or A$^L$ (more weakly multiscaling and asymmetric)}, in combination with a relatively low (but rising) volatility, is a warning signal that the market is becoming saturated and a turbulent period can follow, with a possible crash.
\subsection{Robustness of TP's as warning signals}
\label{robustness}
Having noted all the above, the indication that the temporal evolution of multiscaling strength, in both its symmetric and asymmetric forms, as described by the TP's defined above and identified by the algorithmic procedure presented in this work, provides possible signals for future market behavior should be further investigated. In order to take the next step, we must first distinguish between the effect on multiscaling coming from the biasing of \textit{w}GHE values caused by tail events, which is observed immediately \textit{after} these events, and the effect on multiscaling coming from either tail events or temporal correlations in price changes that occur \textit{prior} to a market crisis, such as a stock market bubble under development, or before a market crash. The first is an 'after-effect' of single, extreme market events; the second is a signal preceding an actual critical event. In an attempt to address the issue, we deleted one or more single trading days from the index time-series that correspond to specific events and recalculated the $H'_q$ profiles of the modified index. More specifically, we deleted some key trading days in S\&P~500, NIKKEI, ASE and SENSEX that are directly related to one or more of these critical events: (i) 'Black Monday', (ii) the 1997 'Asian' crisis, (iii) the 1998 'Russian' crisis, (iv) the 2004 Gandhi election in India. The exact trading dates that were deleted for each index are shown in Table~\ref{tab:DeletedDates} together with the corresponding \% close price changes. Cells containing a dash denote that these dates were not deleted for the particular index. Notice that, for each event, we possibly deleted different days and a different number of days per index. This is because each market reacted differently to the particular crisis. In particular, we wanted to capture and remove the 'instantaneous' effect of a single market event on the GHE computations and the resulting multiscaling, not its possible short- or long-term after-effects on the actual market dynamics. Therefore, we deleted just 1 to 4 trading days directly associated with each single market event, usually a large drop followed by a big rise or other smaller rises/drops.
\begin{table}[!h]
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\textbf{Deleted dates} & \textbf{Market event} & \textbf{S\&P~500} & \textbf{NIKKEI} & \textbf{ASE} & \textbf{SENSEX} \\ \hline
19-Oct-1987 & B. M. & -20.47\% & - & - & - \\ \hline
20-Oct-1987 & B. M. & +5.33\% & -14.90\% & - & - \\ \hline
21-Oct-1987 & B. M. & +9.10\% & +9.30\% & - & - \\ \hline
22-Oct-1987 & B. M. & -3.92\% & - & - & - \\ \hline
23-Oct-1987 & B. M. & - & -4.93\% & - & - \\ \hline
26-Oct-1987 & B. M. & - & -4.30\% & - & - \\ \hline
27-Oct-1997 & A. C. & -6.87\% & - & - & - \\ \hline
28-Oct-1997 & A. C. & - & & - & - \\ \hline
31-Oct-1997 & A. C. & - & - & -4.02\% & - \\ \hline
04-Nov-1997 & A. C. & - & - & +4.72\% & - \\ \hline
06-Nov-1997 & A. C. & - & - & -4.23\% & - \\ \hline
28-Aug-1998 & R. C. & - & -3.46\% & - & - \\ \hline
31-Aug-1998 & R. C. & -6.80\% & - & - & - \\ \hline
01-Sep-1998 & R. C. & - & - & -3.81\% & - \\ \hline
02-Sep-1998 & R. C. & - & - & +5.15\% & - \\ \hline
14-May-2004 & G. E. & - & - & - & -0.0610 \% \\ \hline
17-May-2004 & G. E. & - & - & - & -11.14\% \\ \hline
18-May-2004 & G. E. & - & - & - & +8.25\% \\ \hline
\end{tabular}
\caption{Deleted dates for the modified indices, the corresponding market event and the \% close price change of that date per index. Cells with a '-' correspond to dates that were not deleted for the particular index in the respective column. 'B.M.' stands for 'Black Monday', 'A.C.' for 'Asian Crisis', 'R.C.' for 'Russian Crisis' and 'G.E.' for 'Gandhi 2004 Election'.}
\label{tab:DeletedDates}
\end{table}
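The day-removal procedure itself is straightforward; a minimal sketch is given below, assuming the index is held as a pandas series of close prices indexed by trading date. Dropping the rows of Table~\ref{tab:DeletedDates} and recomputing log-returns from the remaining prices yields the 'modified' index on which the \textit{w}GHE and TP analysis is then rerun; function names and the example dates list are illustrative.
\begin{verbatim}
import numpy as np
import pandas as pd

def modified_index_returns(close_prices: pd.Series, deleted_dates) -> pd.Series:
    # close_prices  : close prices indexed by trading date (DatetimeIndex)
    # deleted_dates : iterable of date strings to remove, e.g. "1987-10-19"
    drop = pd.to_datetime(list(deleted_dates))
    kept = close_prices.drop(index=drop, errors="ignore")
    # Log-returns of the spliced series: the day following a deleted date is
    # differenced directly against the last kept day before it.
    return np.log(kept).diff().dropna()

# Example: the S&P 500 'Black Monday' related days of Table tab:DeletedDates
sp500_deleted = ["1987-10-19", "1987-10-20", "1987-10-21", "1987-10-22"]
\end{verbatim}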
In figures~\ref{fig:SP500_zoom}~-~\ref{fig:SENSEX_zoom} a comparison between the \textit{w}GHE time-series, the $\gamma$ time-series and the identified TP's of the real indices and of the respective modified indices is shown, focusing on the time periods around the market events mentioned in Table~\ref{tab:DeletedDates}. For S\&P~500 and NIKKEI we observe that, after B.M., the strong 'post-event' A$^-$ TP that exists in the real index data after Oct. 1987 (figures~\ref{fig:SP500_zoom}b,~\ref{fig:NIKKEI_zoom}b), and which is directly related to the strong biasing induced in the tails of the price change distribution exclusively by the four deleted days, is almost eliminated in the modified index $W'_{0.1,4}$ TP's (figures~\ref{fig:SP500_zoom}c,~\ref{fig:NIKKEI_zoom}c). However, the pre-event 'warning' A$^0$ and A$^L$ TP signals, corresponding to a transition from uniscaling to multiscaling starting in 1986, \textit{well before} the B.M. event, are still present. Notice also that an A$^L$ pattern well after the 'Black Monday' event is still seen in the modified index TP's for NIKKEI, well before 1991. This suggests that the 'warning' $A$-type TP's observed well after Oct. 1987 are not an artefact of the B.M. event, but a consequence of market trading patterns before the NIKKEI 1991 bubble break-down. Similarly, the A$^-$-type TP before the year 2000 dot.com bubble is destroyed in S\&P~500 after the deletion of the A.C.- and R.C.-related extreme tail events, but there is a clear uniscaling to multiscaling transition via an A$^L$ TP well before the bubble break-down. The same A$^L$ TP is seen in NIKKEI before 2000. Again, this suggests that the $A$-type warning signal in the period 1997-1998 is not exclusively an 'after-effect' of the 1997 'Asian crisis' and 1998 'Russian crisis' events, but a product of the trading dynamics of an extended period just before the year 2000 bubble break-down.
\begin{figure}[h!]
\begin{center}
\includegraphics[height=0.6\textheight,width=1.05\textwidth]{FIGURES/SP500_final_zoom.png}
\caption{\label{fig:SP500_zoom} S\&P~500 index price time-series and scaling TPs in the period 1985-2001: Comparison between TPs obtained from S\&P~500 close prices (real index TP's) and TPs obtained after removing the 'Black Monday', 1997 'Asian crisis' and 1998 'Russian crisis' critical trading days (modified index TP's): (a) S\&P~500 close prices, (b) real index $W_{0.1,4}$ TP's, (c) modified index $W_{0.1,4}$ TP's, (d) real index $B$-proxy TP's and (e) modified index $B$-proxy TP's. The 'warning' $A$-type TP's before the 19th of October 1987 are maintained in the modified index results.}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[height=0.6\textheight,width=1.05\textwidth]{FIGURES/NIKKEI_final_zoom.png}
\caption{\label{fig:NIKKEI_zoom} NIKKEI index price time-series and scaling TPs in the period 1985-2001: Comparison between TPs obtained from NIKKEI close prices (real index TP's) and TPs obtained after removing the 'Black Monday', 1997 'Asian crisis' and 1998 'Russian crisis' critical trading days (modified index TP's): (a) Index close prices, (b) real index $W_{0.1,4}$ TP's, (c) modified index $W_{0.1,4}$ TP's, (d) real index $B$-proxy TP's and (e) modified index $B$-proxy TP's.}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[height=0.6\textheight,width=1.05\textwidth]{FIGURES/ASE_final_zoom.png}
\caption{\label{fig:ASE_zoom} ASE index price time-series and scaling TPs in the period 1995-2001: As in figure \ref{fig:NIKKEI_zoom}, comparison between real index TP's and modified index TP's of the ASE log close price $H_q$'s. The latter are obtained after removing the 1997 'Asian crisis' and 1998 'Russian crisis' critical trading days. (a), (b), (c), (d), (e) as in figure \ref{fig:NIKKEI_zoom}.}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[height=0.6\textheight,width=1.05\textwidth]{FIGURES/SENSEX_final_zoom.png}
\caption{\label{fig:SENSEX_zoom} SENSEX index price time-series and scaling TPs: Comparison between TPs obtained from SENSEX close prices (real index TP's) and TPs obtained after removing the 2004 'Gandhi election' crisis trading days (modified index TP's): (a) SENSEX close prices, (b) real index $W_{0.1,4}$ TP's, (c) modified index $W_{0.1,4}$ TP's, (d) real index $B$-proxy TP's and (e) modified index $B$-proxy TP's.}
\end{center}
\end{figure}
The same conclusion can be drawn by looking at the 'warning' A$^-$ and A$^0$ types of TPs before the year 2000 ASE bubble in figure~\ref{fig:ASE_zoom}. These patterns are maintained, almost intact, in the modified index TPs, after removing the A.C.- and R.C.-related trading days from ASE. Again, this means that the $A$-type patterns seen in the real index TPs are not just a product of 'after-effects' of one or two isolated big market events, but a product of an extended period of market trading patterns, well before a bubble bursts. This fact is particularly pronounced for the ASE 2000 bubble, which was well under development in the period when the Asian and Russian crises occurred, since the A$^-$ 'warning' TP's were barely affected by the removal of the few trading days related to these crises. However, the removal of the respective trading days before the break of the S\&P~500 'dot.com' bubble had a different effect. We observe that the existing A$^-$ TP has almost disappeared and the dynamics approximately two years before the break of the bubble is uniscaling. However, even here, the dynamics undergoes a clear uniscaling to multiscaling transition almost a year before the bubble burst, through an asymmetric A$^L$ TP.
\section{Discussion}
\label{discussion}
By examining the GHE results for all these indices, we confirm that there are common elements among many of them, especially in critical time periods such as a stock-market bubble or a financial crisis. However, each index also has unique features, indicating that the corresponding markets have different underlying dynamics, which can be related to global events for major stock indices and to local phenomena for peripheral ones. In particular, we notice that critical events are usually preceded by a uniscaling behaviour which is then followed by a usually sharp transition to multiscaling via 'asymmetric' multiscaling patterns.
In some time periods one clearly sees that the \textit{w}GHE's for higher values of $q$ show a sharp drop towards strongly anti-persistent behavior, whereas the respective small-$q$ \textit{w}GHE's are almost constant or rising, depicting neutral or persistent behavior. This behavior can be caused by two types of market changes: either (i) a critical single market event such as a market crash (e.g. Black Monday on Oct. 19, 1987, when the S\&P~500 lost 20.4\% in one day), or (ii) a more extended critical period where the market behaves in a bullish way for small day-to-day price changes but shows anti-persistent behavior for large price changes (fig.~\ref{fig:SP500close}). In case (i), the large change is a single extreme tail point in the price change distributions taken within the particular time-window where the GHE's are calculated, a tail point that mostly affects the large-$q$ GHE's, causing a sharp drop. The volatility also shows a sharp rise at that date, followed by a gradual decay. The time duration of the effect of this single market event on the GHE time-series, as well as on the volatility, is of the order of $\Delta t$, the time-window length used for calculating the price change distribution moments, and of $\theta$, and it leads to a pronounced A$^-$ pattern extended in time, although the actual scaling within this time window may be different. Removing this single event would largely destroy the pattern, as was seen, for instance, for the B.M. event in S\&P~500 and NIKKEI. In case (ii), we have shown that these patterns are a consequence of the increase of tail events that occur in a turbulent market period and of the way they are correlated. For example, during a critical period (of a developing bubble, for instance) there is an increased frequency of large market-drop tail events that are immediately followed (usually on the next trading day) by an equivalent rise. This combination of events, occurring amidst a rising market trend, causes a sharp drop in the high-$q$ \textit{w}GHE's while the small-$q$ \textit{w}GHE's are not affected as much. This type of market behavior, which leads to a transition from a more 'regular' and efficient market (uniscaling behavior) to a more 'nervous' one, has a plausible justification: when the majority of traders are afraid, or get the 'gut feeling', that the market has become saturated and a crash is imminent, they are more likely to revert to rapid sales (in order to secure profits) that drive the market down by a large amount during a single day. As the market is still in a rising trend, this sales spree is likely to be reversed and followed by a buying spree the next day, in anticipation of a continued market rise. This sort of 'nervous' behavior was particularly notable in the ASE 2000 bubble, where the very pronounced asymmetric multiscaling patterns in the period 1997-1998 were not at all a result of just the Asian or Russian crises that took place within that period. In fact, one may argue that even the large drops (followed by large rises) that are due to some justified market event (such as the A.C. and R.C. crises) are just a 'pretext' for a saturated market during a bubble development to correct itself.
It is also notable that the S\&P~500 changed from a rather long uniscaling (or very weakly multiscaling) period, lasting from the seventies to the mid-eighties, to a multiscaling period (the transition being via an A$^L$ TP) starting in 1986, more than a year before Black Monday, a stock-market historical event that remains completely unexplained by the economic surroundings of the preceding period. In a large survey carried out by Shiller \cite{Shil1988} over a sample of more than 800 investors, the most frequent answer they gave to the question of why they behaved the way they did on that day was that they had a 'gut feeling' that there was an impending crash. This 'gut feeling' was captured by the \textit{w}GHE's measuring the market scaling transition that occurred long before the event, as these traders developed particular trading habits which, over a rather long period before the crash, led to a sequence of tail events that sparked the A$^L$ TP and the multiscaling behavior seen in 1986-1987.
In conclusion, the case (i) 'A' patterns caused by large single events can be distinguished from the extended-in-time, case (ii), 'A' patterns by the fact that the former \textit{follow} a crash or bubble break, whereas the latter \textit{precede} a possible crash or bubble break. In this sense, 'A' patterns, especially when they follow a period of uniscaling behavior, can be used as warning signals for critical market periods.
We also noticed some differences between major and peripheral markets. In particular, in major stock indices more abrupt transitions between patterns are observed during critical time periods, while for peripheral markets the transitions are much smoother. This is probably due to the number of market participants and the amount of information available to them: in a global market, the shift caused by 'bad news' can completely alter the market dynamics within a relatively short time. A second difference lies in the fact that for major indices multiscaling is not directly associated with periods of recession or crisis, whereas for peripheral markets it mostly is. This is probably because in peripheral markets there is not enough liquidity to absorb the huge heterogeneity generated by the market participants.
\section{Conclusions}
\label{conclusions}
In this paper, we have presented for the first time how different temporal patterns can emerge from the dynamics of the time-dependent generalized Hurst exponents (GHE). In particular, we proposed several patterns which differentiate uniscaling from multiscaling and further differentiate two forms of multiscaling, i.e. symmetric and asymmetric multiscaling, in the temporal evolution of GHE time-series. These temporal patterns, combined with the analysis of the multiscaling width $W$ and the multiscaling depth $B$ (and their dynamics), offer an important set of tools for signalling critical events in financial time series and beyond. We also introduced a completely algorithmic and general procedure to identify such patterns in any time-series of GHE's, which allows one to determine these patterns in a statistically significant manner. Regarding the calculation of the GHE time-series, we also addressed the important issue of choosing a proper sliding time-window length $\Delta t$ and provided an empirical rule based on minimising the noise due to finite-size effects in the GHE calculations while at the same time capturing the actual local dynamical changes of the scaling over short time scales.
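As an illustration of the kind of computation involved (a minimal sketch only: the exponential-weighting scheme with characteristic time $\theta$ follows the usual weighted-GHE construction, but the specific $q$ values, window length $\Delta t$, step size, and the precise definitions of $W$ and $B$ used in this work are not reproduced here), one can slide a window over the log-price series, estimate $H(q)$ with exponentially weighted structure functions, and track a simple width proxy $W = H(q_{\min}) - H(q_{\max})$:
\begin{verbatim}
import numpy as np

def wghe(x, qs, taus, theta):
    # Weighted GHE over one window: the structure-function moments are
    # averaged with exponential weights exp(-age/theta), where age = 0
    # for the most recent increment in the window.
    H = {}
    for q in qs:
        logK = []
        for tau in taus:
            d = np.abs(x[tau:] - x[:-tau]) ** q
            age = np.arange(len(d))[::-1]
            w = np.exp(-age / theta)
            logK.append(np.log(np.sum(w * d) / np.sum(w)))
        H[q] = np.polyfit(np.log(taus), logK, 1)[0] / q
    return H

def ghe_series(logprice, dt=750, step=5, theta=250,
               qs=(1, 4), taus=np.arange(1, 20)):
    # Slide a window of length dt over the series; return, for each window
    # end, H(q_min), H(q_max) and the width proxy W = H(q_min) - H(q_max).
    rows = []
    for end in range(dt, len(logprice) + 1, step):
        H = wghe(logprice[end - dt:end], qs, taus, theta)
        rows.append((end, H[qs[0]], H[qs[-1]], H[qs[0]] - H[qs[-1]]))
    return np.array(rows)
\end{verbatim}
A near-zero $W$ over an extended stretch then corresponds to a uniscaling pattern, while a persistently large or rising $W$ flags the multiscaling regimes whose temporal patterns are classified in this work.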
Results showed very interesting patterns among major and peripheral markets. We found similar patterns among the markets considered, but also differences related to local behaviors. One of the common features is the existence of a (usually sharp) transition from a uniscaling to a multiscaling pattern in the rising period before a stock-market bubble breaks, such as the 2000 bubble in the S\&P~500 and ASE or the 1991 bubble for NIKKEI. ASE, being a small and peripheral market with low liquidity, had a much more pronounced and robust 'asymmetric' multiscaling warning pattern before its large 2000 bubble than major indices such as the S\&P~500 and NIKKEI. This feature is also present in stock-market crises that are externally caused, such as the 2008 real-estate crisis, but in a significantly weaker form. For example, for ASE, whereas the 2000 and 1990 crashes showed very pronounced and clear A$^-$ patterns, the 2008 crash showed a weaker A$^L$ pattern. Another feature is that there is some kind of notable scaling transition shortly before or after the break-down of a bubble, usually a change from strong asymmetric multiscaling to either a uniscaling or a moderate multiscaling TP. It should be stressed that the transition to an asymmetric multiscaling TP is manifested in past data sufficiently prior to the bubble breakdown, so that this feature could be used as a 'warning' signal of a bubble in development, in particular if the strong multiscaling is accompanied by relatively low (and possibly rising) volatility. In general, transitions always occur at some critical date marking either the beginning of a new period of development or the end of some type of crisis. However, in the case of a global crisis, different stock markets are affected in significantly different ways. The differences are pronounced when major indices, which correlate strongly with global events, are compared to peripheral or developing markets, which are mostly affected by local events. Indeed, major market crashes also affect the scaling behaviour of peripheral markets, while the reverse is not true. For several indices there are extended time periods of uniscaling behavior and time periods of clear multiscaling behavior, while some indices are, overall, more multiscaling than others. The rich variety of information conveyed by the newly introduced scaling patterns can be used as a valuable tool to obtain the 'fingerprint' of a possible turbulent market period and also to issue warning signals for impending market crashes or other critical events.
Finally, as the scaling temporal patterns defined here are clearly related, in a physically justifiable way, to the details of the underlying complex dynamics of the system, they offer (together with the algorithmic identification procedure presented in this work) a tool to characterise the dynamical evolution of the scaling of any complex system. Of particular interest would be to apply the temporal-pattern analysis presented in this work to low-dimensional complex systems that exhibit, apart from fat-tailed increment distributions, enhanced temporal correlations. Such systems would be optimal test-beds for assessing the 'predictive power' of GHE's for the future evolution of the system, especially for critical events. Future work will be devoted to disentangling the effects of fat tails and of correlations on the various forms of multiscaling in a robust statistical manner. A second extension of this work will be the \textit{quantification} of the asymmetries (based on empirically defined metrics depending on the GHE temporal profiles) in order to algorithmically detect strong asymmetries in scaling, which could be used by market participants in trading strategies, \textit{e.g.} issuing a sell order when the asymmetry exceeds its long-term value during specific market conditions such as a bullish market. The construction of an overall 'market risk' indicator depending on these GHE metrics is another very interesting possibility for future work. Such an indicator would be a very useful tool in the hands of investors and policy makers for detecting and quantitatively assessing financial risk.