\begin{document} \title{Entanglement and nonlocality in multi-particle systems } \author{M. D. Reid, Q. Y. He and P. D. Drummond} \affiliation{ARC Centre of Excellence for Quantum-Atom Optics, Centre for Atom Optics and Ultrafast Spectroscopy, Swinburne University of Technology, Melbourne 3122, Australia} \begin{abstract} \textbf{Entanglement, the Einstein-Podolsky-Rosen (EPR) paradox and Bell's failure of local-hidden-variable (LHV) theories are three historically famous forms of {}``quantum nonlocality''. We give experimental criteria for these three forms of nonlocality in multi-particle systems, with the aim of better understanding the transition from microscopic to macroscopic nonlocality. We examine the nonlocality of $N$ separated spin $J$ systems. First, we obtain multipartite Bell inequalities that address the correlation between spin values measured at each site, and then we review spin squeezing inequalities that address the degree of reduction in the variance of collective spins. The latter have been particularly useful as a tool for investigating entanglement in Bose-Einstein condensates (BEC). We present solutions for two topical quantum states: multi-qubit Greenberger-Horne-Zeilinger (GHZ) states, and the ground state of a two-well BEC. } \textbf{Keywords:} entanglement, quantum nonlocality, multi-particle, two-well Bose-Einstein condensates (BEC) \textbf{PACS numbers:} 03.65.Ta, 42.50.St, 03.65.Ud, 03.75.Gg \end{abstract} \maketitle \section{\textbf{Introduction}} Nonlocality in quantum mechanics has been extensively experimentally investigated. Results to date support the quantum prediction, first presented by Bell, that quantum theory is inconsistent with a combination of premises now generally called {}``local realism'' \cite{Bell,CHSH}. However, the extent to which quantum mechanics is inconsistent with local realism at a more mesoscopic or macroscopic level is still not well understood. Schr\"odinger presented the case that loss of realism macroscopically would be a concern, and raised the question of how to link the loss of local realism with macroscopic superposition states \cite{Schrodinger-1,Schrodinger-2,Schrodinger-3,legg}. The advent of entangled Bose-Einstein condensate (BEC) states leads to new possibilities for testing mesoscopic and macroscopic quantum mechanics. With this in mind, the objective of this article is to give an overview of a body of work that explores nonlocality in multi-particle or multi-site systems. Three types of nonlocality are reviewed: \emph{entanglement} \cite{Schrodinger-1}, the \emph{Einstein-Podolsky-Rosen (EPR) paradox} \cite{epr}, and \emph{Bell's nonlocality} \cite{Bell,CHSH}. Examples of criteria to demonstrate each of these nonlocalities are presented, first for multi-site {}``qubits'' (many spin $1/2$ particles) and then for multi-site {}``qudits'' (many systems of higher dimensionality such as high spin particles). The criteria presented in this paper are useful for detecting the nonlocality of the $N$-qubit (or $N$-qudit) Greenberger-Horne-Zeilinger (GHZ) states \cite{ghz,mermin90}. These states are extreme superpositions that were shown by GHZ to demonstrate a very striking {}``all or nothing'' type of nonlocality. This nonlocality can manifest as a violation of a Bell inequality, and at first glance these violations, because they increase exponentially with $N$, appear to indicate a more extreme nonlocality as the size $N$ of the system increases \cite{mermin bellghz}. 
We point out, however, that the detection of \emph{genuine} $N$-body nonlocality, as first discussed by Svetlichny \cite{genuine,collspinmol}, requires much higher thresholds. Genuine $N$-party nonlocality (e.g. genuine entanglement) requires that the nonlocality is shared among \emph{all} $N$ parties, or particles. The violations in this case do not increase with $N$, and the detection over many sites is very sensitive to loss and inefficiencies. Finally, we review and outline how to detect entanglement \cite{collspinmol} and the EPR paradox using collective spin measurements. This approach has recently been employed to establish a genuine entanglement of many particles in a BEC \cite{exp multi,treutnature}.
\section{Three Famous types of nonlocality} The earliest studies of nonlocality concerned bipartite systems. Einstein-Podolsky-Rosen (EPR) \cite{epr} began the debate about quantum nonlocality, by pointing out that for some quantum states there exists an inconsistency between the premises we now call {}``local realism'' and the completeness of quantum mechanics. \emph{Local realism} (LR) may be summarized as follows. EPR argued \cite{epr,epr rev } first for {}``locality'', by claiming that there could be no {}``action-at-a-distance''. A measurement made at one location cannot instantaneously affect the outcomes of measurements made at another distant location. EPR also argued for {}``reality'', which they considered in the following context. Suppose one can predict with certainty the result of a measurement made on a system, without disturbing that system. Realism implies that this prediction is possible only because the outcome for that measurement was a predetermined property of the system. EPR called this predetermined property an {}``\emph{element of reality}'', though most often the element of reality is interpreted as a {}``\emph{hidden variable}''. The essence of EPR's local realism assumption is that results of measurements made on a system at one location come about because of predetermined properties of that system, and because of their local interactions with the measurement apparatus, not because of measurements that are made simultaneously at distant locations. \subsection{EPR paradox} EPR argued that for states such as the spin $1/2$ singlet state \begin{equation} |\psi\rangle=\frac{1}{\sqrt{2}}\left(|\uparrow\rangle_{A}|\downarrow\rangle_{B}-|\downarrow\rangle_{A}|\uparrow\rangle_{B}\right)\label{eq:spinbohm} \end{equation} there arises an inconsistency of the LR premises with the quantum predictions. Here, we define $|\uparrow\rangle_{A/B}$ and $|\downarrow\rangle_{A/B}$ as the spin {}``up'' and {}``down'' eigenstates of $J_{A/B}^{Z}$ for a system at location $A/B$. For the state (\ref{eq:spinbohm}), the prediction of the spin component $J_{A}^{Z}$ can be made by measurement of the component $J_{B}^{Z}$ at $B$. From quantum theory, the two measurements are perfectly anticorrelated. According to EPR's Local Realism premise (as explained above), there must exist an {}``element of reality'' to describe the predetermined nature of the spin at $A$. We let this element of reality be symbolized by the variable $\lambda_{z}$, and we note that $\lambda_{z}$ assumes the values $\pm1/2$ for the state (\ref{eq:spinbohm}). Calculation shows that there is a similar prediction of a perfect anti-correlation for the other spin component pairs. Therefore, according to LR, each of the spin components $J_{A}^{X}$ and $J_{A}^{Y}$ can also be represented by an element of reality, which we denote $\lambda_{x}$ and $\lambda_{y}$ respectively. A moment's thought tells us that if there is a state for which all three spins are completely and precisely predetermined in this way, then this {}``state'' cannot be a quantum state. Such a {}``state'' is generally called a {}``local hidden variable (LHV) state'', and the set of three variables are {}``hidden'', since they are not part of standard quantum theory. Hence, EPR argued, quantum mechanics is incomplete. Since perfect anticorrelation is experimentally impossible, an operational criterion for an EPR paradox can be formulated as follows. Consider two observables $X$ and $P$, with commutators like position and momentum. 
The Heisenberg Uncertainty Principle is $\Delta X\Delta P\geq1$, where $\Delta X$ and $\Delta P$ are the variances of the outcomes of measurements for $X$ and $P$ respectively. The EPR paradox criterion is \cite{reidepr} \begin{equation} \Delta_{inf}X\Delta_{inf}P<1,\label{eq:eprcrit} \end{equation} where $\Delta_{inf}X\equiv V(X|O_{B})$ is the {}``variance of inference'' i.e. the variance of $X$ conditional on the measurement of an observable $O_{B}$ at a distant location $B$. The $\Delta_{inf}P\equiv V(P|Q_{B})$ is defined similarly, where $Q_{B}$ is a second observable for measurement at $B$. This criterion reflects that the combined uncertainty of inference is reduced below the Heisenberg limit. Of course, the reduced uncertainty applies over an ensemble of measurements, where only one of the conjugate measurements is made at a time. This criterion is also applicable to optical quadrature observables, for which the EPR paradox has been experimentally demonstrated, although without causal separation. With spin commutators, other types of uncertainty principle can be used to obtain analogous inferred uncertainty limits. The demonstration of an EPR paradox through the measurement of correlations satisfying Eq. (\ref{eq:eprcrit}) is a proof that local realism is inconsistent with the completeness of quantum mechanics (QM). Logically, one must discard local realism, the completeness of QM, or both. However, it does not indicate which alternative is correct. \subsection{Schr\"odinger's Entanglement} Schr\"odinger \cite{Schrodinger-1,Schrodinger-2,Schrodinger-3} noted that the state (\ref{eq:spinbohm}) is a special sort of state, which he called an \emph{entangled} state. An entangled state is one which cannot be factorized: for a pure state, we say there is entanglement between $A$ and $B$ if we cannot write the composite state $|\psi\rangle$ (that describes all measurements at the two locations) in the form $|\psi\rangle=|\psi\rangle_{A}|\psi\rangle_{B}$, where $|\psi\rangle_{A/B}$ is a state for the system at $A/B$ only. For mixed states, there is said to be entanglement when the density operator for the composite system cannot be written as a mixture of factorizable states \cite{peres}. A mixture of factorizable states is said to be a \emph{separable} state, which, where there are just two sites, is written as \begin{equation} \rho=\sum_{R}P_{R}\rho_{A}^{R}\rho_{B}^{R}.\label{eq:sep2} \end{equation} If the density operator cannot be written as (\ref{eq:sep2}), then the mixed system possesses \emph{entanglement} (between $A$ and $B$). More generally, for $N$ sites, full separability implies \begin{equation} \rho=\sum_{R}P_{R}\rho_{1}^{R}...\rho_{N}^{R}.\label{eq:sepN} \end{equation} If the density operator cannot be expressed in the fully separable form (\ref{eq:sepN}), there is entanglement between at least two of the sites. We consider measurements $\hat{X}_{k}$, with associated outcomes $X_{k}$, that can be performed on the $k$-th system ($k=1,...,N$). For a separable state (\ref{eq:sepN}), it follows that the joint probability for outcomes is expressible as \begin{equation} P(X_{1},...,X_{N})=\int_{\lambda}P(\lambda)P_{Q}(X_{1}|\lambda)...P_{Q}(X_{N}|\lambda)d\lambda\,,\label{eq:sepent} \end{equation} where we have replaced for convenience of notation the index $R$ by $\lambda$, and used a continuous summation symbolically, rather than a discrete one, so that $P(\lambda)\equiv P_{R}$. 
The subscript $Q$ represents {}``quantum'', because there exists the quantum density operator $\rho_{k}^{\lambda}\equiv\rho_{k}^{R}$ for which $P(X_{k}|\lambda)\equiv\langle X_{k}|\rho_{k}^{\lambda}|X_{k}\rangle$. In this case, we write $P(X_{k}|\lambda)\equiv P_{Q}(X_{k}|\lambda)$, where the subscript $Q$ reminds us that this is a quantum probability distribution. The model (\ref{eq:sepent}) implies (\ref{eq:sep2}) \cite{wisesteer,wisesteer2}, and has been studied in Ref. \cite{eric steer}, in which it is referred to as a \emph{quantum separable model }(QS). We can test nonlocality when each system $k$ is spatially separated. We will see from the next section that LR implies the form (\ref{eq:sepent}), but without the subscripts {}``Q'', that is, without the underlying local states designated by $\lambda$ necessarily being quantum states. If the quantum separable QS model can be shown to fail where each $k$ is spatially separated, one can only have consistency with Local Realism if there exist underlying local states that are \emph{non-quantum. }This is an EPR paradox, since it is an argument to complete quantum mechanics, based on a requirement that LR be valid. The EPR paradox necessarily requires entanglement \cite{epr rev ,mallon}. The reason for this is that for separable states (\ref{eq:sep2}-\ref{eq:sepent}), the uncertainty relation that applies to each of the states $|\psi\rangle_{A}$ and $|\psi\rangle_{B}$ will imply a minimum level of local uncertainty, which means that the noncommuting observables cannot be sufficiently correlated to obtain an EPR paradox. In other words, the entangled state (\ref{eq:spinbohm}) can possess a greater correlation than possible for (\ref{eq:sep2}). Schr\"odinger also pointed to two paradoxes \cite{Schrodinger-1,Schrodinger-2,Schrodinger-3} in relation to the EPR paper. These gedanken-experiments strengthen the apparent need for the existence of EPR {}``elements of reality'', in situations involving macroscopic systems, or spatially separated ones. The first is famously known as the Schr\"odinger's cat paradox, and emphasizes the importance of EPR's {}``elements of reality'' at a \emph{macroscopic} level. Reality applied to the state of a cat would imply a cat to be either dead or alive, prior to any measurement that might be made to determine its {}``state of living or death''. We can define an {}``element of reality'' $\lambda_{cat}$ , to represent that the cat is \emph{predetermined} to be dead (in which case $\lambda_{cat}=-1$) or alive (in which case $\lambda_{cat}=+1$). Thus, the observer looking inside a box, to make a measurement that gives the outcome {}``dead'' or {}``alive'', is simply uncovering the value of $\lambda_{cat}$. Schr\"odinger's point was that the element of reality specification is not present in the quantum description $|\Psi\rangle=\frac{1}{\sqrt{2}}\left(|dead\rangle+|alive\rangle\right)$ of a superposition of two macroscopically distinguishable states. The second paradox raised by Schr\"odinger concerns the apparent {}``action at-a-distance'' that seems to occur for the EPR entangled state. Unless one identifies an element of reality for the outcome $A$, it would seem to be the action of the measurement of $B$ that immediately enables prediction of the outcome for the measurement at $A$. Schr\"odinger thus introduced the notion of {}``\emph{steering}''. While all these paradoxes require entanglement, we emphasize that entanglement \emph{per se} is a relatively common situation in quantum mechanics. 
It is necessary for a quantum paradox, but does not by itself demonstrate any paradox. \subsection{Bell's nonlocality: failure of local hidden variables (LHV)} EPR claimed as a solution to their EPR paradox that hidden variables consistent with local realism would exist to further specify the quantum state. It is the famous work of Bell that proved the impossibility of finding such a theory. This narrows down the two alternatives possible from a demonstration of the EPR paradox, and shows that local realism itself is invalid. Specifically, Bell considered the predictions of a Local Hidden Variable (LHV) theory, to show that they would be different to the predictions of the spin-half EPR state (\ref{eq:spinbohm}). Following Bell \cite{Bell,CHSH}, we have a \emph{local hidden variable model} (LHV) if the joint probability for outcomes of simultaneous measurements performed on the $N$ spatially separated systems is given by \begin{equation} P(X_{1},...,X_{N})=\int_{\lambda}P(\lambda)P(X_{1}|\lambda)...P(X_{N}|\lambda)d\lambda\,.\label{eq:bell} \end{equation} Here $\lambda$ are the {}``local hidden variables'' and $P(X_{k}|\lambda)$ is the probability of $X_{k}$ given the values of $\lambda$, with $P(\lambda)$ being the probability distribution for $\lambda$. The factorization in the integrand is Bell's locality assumption, that $P(X_{k}|\lambda)$ depends on the parameters $\lambda$, and the measurement choice made at $k$ only. The hidden variables $\lambda$ describe a local state for each site, in that the probability distribution $P(X_{k}|\lambda)$ for the measurement at $k$ is given as a function of the $\lambda$. The form of (\ref{eq:bell}) is formally similar to (\ref{eq:sepN}) except in the latter there is the additional requirement that the local states are quantum states. If (\ref{eq:bell}) fails, then we have proved a failure of all LHV theories, which we refer to as a \emph{Bell violation} or \emph{Bell nonlocality} \cite{eric steer}. The famous Bell-Clauser-Horne-Shimony-Holt (CHSH) inequalities follow from the LHV model, in the $N=2$ case. Bell considered measurements of the spin components $J_{A}^{\theta}=\cos\theta J_{A}^{X}+\sin\theta J_{A}^{Y}$ and $J_{B}^{\phi}=\cos\phi J_{B}^{X}+\sin\phi J_{B}^{Y}$. He then defined the spin product $E(\theta,\phi)=\langle J_{A}^{\theta}J_{B}^{\phi}\rangle$ and showed that for the LHV model, there is always the constraint \begin{equation} B=E(\theta,\phi)-E(\theta,\phi')+E(\theta',\phi)+E(\theta',\phi')\leq2.\label{eq:bellchsh} \end{equation} The quantum prediction for an entangled Bell state (\ref{eq:spinbohm}) is $E(\theta,\phi)=\cos(\theta-\phi)$ and the inequality is violated for the choice of angles \begin{equation} \theta=0,\theta'=\pi/2,\phi=\pi/4,\phi'=3\pi/4\label{eq:angles} \end{equation} for which the quantum prediction becomes $B=2\sqrt{2}$. Tsirelson's theorem proves the value of $B=2\sqrt{2}$ to be the maximum violation possible for any quantum state \cite{tsirel}. We note that experimental inefficiencies mean that violation of the CHSH inequalities for causally separated detectors is difficult, and has so far always required additional assumptions in the interpretation of experimental data.
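As a quick check of these numbers (a worked step we add for clarity), substituting the angles (\ref{eq:angles}) into $E(\theta,\phi)=\cos(\theta-\phi)$ gives \[ B=\cos\frac{\pi}{4}-\cos\frac{3\pi}{4}+\cos\frac{\pi}{4}+\cos\frac{\pi}{4}=4\times\frac{1}{\sqrt{2}}=2\sqrt{2}, \] so the LHV bound of $2$ in (\ref{eq:bellchsh}) is exceeded by exactly the Tsirelson factor $\sqrt{2}$.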
\subsection{Steering as a special nonlocality} Recently, Wiseman et al (WJD) \cite{wisesteer,wisesteer2} have constructed a hybrid separability model, called the Local Hidden State Model (LHS), the violation of which is confirmation of Schr\"odinger's {}``steering'' (Figure 1). 
The bipartite local hidden state model (LHS) assumes \begin{equation} P(X_{A},X_{B})=\int_{\lambda}P(\lambda)P(X_{A}|\lambda)P_{Q}(X_{B}|\lambda)d\lambda\,.\label{eq:bell-1} \end{equation} Thus, for one site $A$ which we call {}``Alice'', we assume a local hidden variable (LHV) state, but at the second site $B$, which we call {}``Bob'', we assume a local quantum state (LQS). The violation of this model occurs iff there is a {}``\emph{steering}'' of Bob's state by Alice \cite{steerexp}. WJD pointed out the association of steering with the EPR paradox \cite{wisesteer}. The EPR criterion is also a criterion for steering, as defined by the violation of the LHS model. An analysis of the EPR argument when generalized to allow for imperfect correlation and arbitrary measurements reveals that violation of the LHS model occurs iff there is an EPR paradox \cite{eric steer,epr rev }. As a consequence, the violation of the LHS model is referred to as demonstration of a type of nonlocality called {}``\emph{EPR steering}'' \cite{eric steer}. EPR steering confirms the incompatibility of local realism with the \emph{completeness} of quantum mechanics, just as with the approach of EPR in their original paper \cite{epr}. The notion of steering can be generalized to consider $N$ sites, or observers \cite{ericmulti}. The multipartite LHS model is (Figure 1) \begin{multline} P(X_{1},...,X_{N})=\\ \int d\lambda P(\lambda)\prod_{j=1}^{T}P_{Q}(X_{j}|\lambda)\prod_{j=T+1}^{N}P(X_{j}|\lambda),\label{eq:LHS_model_multipartite} \end{multline} where here we have $T$ quantum states, and $N-T$ LHV local states. We use the symbol $T$ to represent the quantum states, since these are the {}``trusted sites'' in the picture put forward by WJD \cite{wisesteer}. This refers to an application of this generalized steering to a type of quantum cryptography in which an encrypted secret is being shared between sites. At some of the sites, the equipment and the observers are trusted, while at other sites this is not the case. In this picture, which is an application of the LHS model, an observer $C$ wishes to establish entanglement between two observers Alice and Bob. The violation of the QS model is sufficient to do this, provided each of the two observers Alice and Bob can be trusted to report the values for their local measurements. It is conceivable however that they report instead statistics that appear to give a violation of the QS model, so that it seems as if there is entanglement when there is not. WJD point out the extra security present if instead there is the stronger requirement of violation of the LHV model, in which the untrusted observers are identified with an LHV state. Cavalcanti et al \cite{ericmulti} have considered the multipartite model (\ref{eq:LHS_model_multipartite}), and shown that violation of (\ref{eq:LHS_model_multipartite}) where $T=1$ is sufficient to imply an \emph{EPR steering} paradox exists between at least two of the sites. Violation where $T=0$ is proof of Bell's nonlocality, and violation where $T=N$ is a confirmation of entanglement (quantum inseparability). \subsection{Hierarchy of nonlocality} WJD established formally the concept of a hierarchy of nonlocality \cite{wisesteer,wisesteer2}. Werner \cite{werner} showed that some classes of entangled state can be described by Local Hidden Variable theories and hence cannot exhibit a Bell nonlocality. WJD showed that not all entangled states are {}``steerable'' states, defined as those that can exhibit EPR steering. 
Similarly, they also showed that not all EPR steerable states exhibit Bell nonlocality. However, we see from the definitions that all EPR steering states must be entangled, and all Bell-nonlocal states (defined as those exhibiting Bell nonlocality) must be EPR steering states. Thus, the Bell-nonlocal states are a strict subset of EPR steering states, which are a strict subset of entangled states, and a hierarchy of nonlocality is established. \begin{figure} \caption{\emph{The LHS model.}} \end{figure}
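A standard two-qubit example (not part of the discussion above, but often quoted in this context) makes the hierarchy concrete: the Werner state \[ \rho_{W}=p|\psi\rangle\langle\psi|+(1-p)\frac{I}{4}, \] with $|\psi\rangle$ the singlet (\ref{eq:spinbohm}), $I$ the two-qubit identity and $0\leq p\leq1$, is entangled for $p>1/3$, is EPR steerable (with projective measurements) only for $p>1/2$, and violates the CHSH inequality (\ref{eq:bellchsh}) only for $p>1/\sqrt{2}$, so the three classes are strictly nested.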
\section{Multiparticle Nonlocality} Experiments that have been performed on many microscopic systems support quantum mechanics. Those that test Bell's theorem \cite{Bell,CHSH}, or the equivalent, are the most useful, since they directly refute the assumption of local realism. While these experiments still require additional assumptions, it is generally expected that improved technology will close the remaining loopholes. There remains however the very important question of whether reality will hold macroscopically. Quantum mechanics predicts the possibility of superpositions of two macroscopically distinguishable states \cite{legg}, like a cat in a superposition of dead and alive states. Despite the apparent paradox, there is increasing evidence for the existence of mesoscopic and macroscopic quantum superpositions. As with microscopic systems, there is a need to verify the loss of reality for macroscopic superpositions in an objective sense, by following Bell's example and comparing the predictions of quantum mechanics with those based on premises of local realism. The first steps toward this have been taken, through theoretical studies of nonlocality for multi-particle systems. Two limits have been rather extensively examined. The first is that of bipartite qudits. The second is multipartite qubits. Surprisingly, while it may have been thought that the violation of LR would diminish or vanish at a critical number of particles, failure of local realism has been shown possible according to quantum mechanics, for arbitrarily large numbers of particles. The third possibility of multipartite qudits has not been treated in as much detail. \subsection{Bipartite qudits} The simplest mesoscopic extension of the Bell case (\ref{eq:spinbohm}) is to consider bipartite qudits: two sites of higher dimensionality. The maximally entangled state in this case is \begin{equation} |\psi\rangle=\frac{1}{\sqrt{d}}\sum_{j=0}^{d-1}|jj\rangle,\label{eq:maxentqudit} \end{equation} where $|jj\rangle$ is abbreviation for $|j\rangle_{A}|j\rangle_{B}$, and $d$ is the dimensionality of the systems at $A$ and $B$. In this case at each site $A$ and $B$ the possible outcomes are $j=0,...,d-1$. This system can be realized by two spin $J$ systems, for which the outcomes $x$ are given by $-J,-J+1,...,J-1,J$, so that $d=2J+1$, and $j$ of Eq. (\ref{eq:maxentqudit}) is $j\equiv x+J$ where $x$ is the spin outcome. It can also be realized by multi-particle systems. It was shown initially by Mermin, Garg and Drummond, and Peres and others \cite{highd,multibell,drumspinbell,peresspin,gsisspin,franmrspin} that quantum systems could violate local realism for large $d$. The approach was to use the classic Bell inequalities derived for binary outcomes. Later, Kaszlikowski et al \cite{high D K} showed that for maximally entangled states (\ref{eq:maxentqudit}), the strength of violation actually becomes stronger for increasing $d$. A new set of Bell inequalities for bipartite qudits was presented by Collins et al (CGLMP) \cite{collins high d,fuqdit}, and it was shown subsequently by Acin et al \cite{acin} that greater violations can be obtained with non-maximally entangled states, and that the violations increase with $d$. Chen et al \cite{chen} have shown that the violation of CGLMP inequalities increases as $d\rightarrow\infty$ to a limiting value. We wish to address the question of how the entanglement and EPR steering nonlocalities increase with $d$. 
Since Bell nonlocality implies both EPR steering and entanglement, these nonlocalities also increase with $d$. However, since there are distinct nested classes of nonlocality, the violation could well be greater, for an appropriate set of measures of the nonlocalities, and this problem is not completely solved for the CGLMP approach. We later investigate alternative criteria that show differing levels of violation for the different classes of nonlocality. \subsection{Multipartite qubits: MABK Bell inequalities} The next mesoscopic - macroscopic scenario that we will consider is that of many distinct single particles $-$ the multi-site qubit system. The interest here began with the Greenberger-Horne-Zeilinger (GHZ) argument \cite{ghz}, which revealed a more extreme {}``all-or-nothing'' form of nonlocality for the case of three and four spin $1/2$ particles (three or four qubits), prepared in a so-called GHZ state. The $N$-qubit GHZ state is written \begin{equation} |\Psi\rangle_{GHZ}=\frac{1}{\sqrt{2}}\{|0\rangle^{\otimes N}+|1\rangle^{\otimes N}\},\label{eq:ghz-1} \end{equation} where $|0\rangle$ and $|1\rangle$ in this case are spin up/down eigenstates. Mermin then showed that for this extreme superposition, there corresponded a greater violation of LR, in the sense that the new {}``Mermin'' Bell inequalities were violated by an amount that increased exponentially with $N$ \cite{mermin bellghz}. These new multipartite Bell inequalities of Mermin were later generalized by Ardehali, Belinski and Klyshko, to give a set of MABK Bell inequalities \cite{ard,bkmabk}. The MABK inequalities involve moments like $\langle J_{A}^{+}J_{B}^{+}J_{C}^{-}\rangle$, where $J^{\pm}=J^{X}\pm iJ^{Y}$ and $J^{X}$, $J^{Y}$, $J^{Z}$, $J^{2}$ are the standard quantum spin operators. In the MABK case of qubits, Pauli operators are used, so that the spin outcomes $\pm1/2$ are normalized to $\pm1$. The $J^{X/Y}$ are redefined accordingly. The moments are defined generally by \begin{equation} \prod_{N}=\langle\Pi_{k=1}^{N}J_{k}^{s_{k}}\rangle\label{eq:prodNmabk} \end{equation} where $s_{k}=\pm1$, with $J_{k}^{s_{k}}\equiv J_{k}^{+}$ for $s_{k}=+1$ and $J_{k}^{s_{k}}\equiv J_{k}^{-}$ for $s_{k}=-1$. An LHV theory expresses such moments as the integral of a complex number product: \begin{equation} \prod_{N}=\int d\lambda P(\lambda)\Pi_{N,\lambda}\label{eq:prodNmabkhidden} \end{equation} where $\Pi_{N,\lambda}=\Pi_{k=1}^{N}\langle J_{k}^{s_{k}}\rangle_{\lambda}$ and $\langle J_{k}^{\pm}\rangle_{\lambda}=\langle J_{k}^{X}\rangle_{\lambda}\pm i\langle J_{k}^{Y}\rangle_{\lambda}$, where $\langle J_{k}^{X/Y}\rangle_{\lambda}$ is the expected value of outcome for measurement $J^{X/Y}$ made at site $k$ given the local hidden state $\lambda$. The $\Pi_{N,\lambda}$ is a complex number product, which Mermin \cite{mermin bellghz} showed has the following extremal values: for $N$ odd, a magnitude $2^{N/2}$ at angle $\pi/4$ to the real axis; for $N$ even, magnitude $2^{N/2}$ aligned along the real or imaginary axis. With this algebraic constraint, LR will imply the following inequalities, for odd $N$: \begin{equation} Re\prod_{N},\, Im\prod_{N}\leq2^{(N-1)/2}.\label{eq:mabkodd} \end{equation} For even $N$, the inequality $Re\prod_{N},\, Im\prod_{N}\leq2^{N/2}$ will hold. However, it is also true, for even $N$, that \cite{ard} \begin{equation} Re\prod_{N}+Im\prod_{N}\leq2^{N/2}.\label{eq:mabkeven} \end{equation} The Eqs. (\ref{eq:mabkodd}-\ref{eq:mabkeven}) are the MABK Bell inequalities \cite{bkmabk}. 
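As a minimal consistency check (our worked example), take $N=2$ with both $s_{k}=+1$, so that $\Pi_{2}=(J_{1}^{X}+iJ_{1}^{Y})(J_{2}^{X}+iJ_{2}^{Y})$ and \[ \mathrm{Re}\Pi_{2}=J_{1}^{X}J_{2}^{X}-J_{1}^{Y}J_{2}^{Y},\qquad\mathrm{Im}\Pi_{2}=J_{1}^{X}J_{2}^{Y}+J_{1}^{Y}J_{2}^{X}. \] The even-$N$ inequality (\ref{eq:mabkeven}) then reads $\langle J_{1}^{X}J_{2}^{X}\rangle+\langle J_{1}^{X}J_{2}^{Y}\rangle+\langle J_{1}^{Y}J_{2}^{X}\rangle-\langle J_{1}^{Y}J_{2}^{Y}\rangle\leq2$, which is the CHSH inequality (\ref{eq:bellchsh}) with the two settings at each site fixed along the $X$ and $Y$ directions, and the quantum maximum $2^{N-1/2}=2\sqrt{2}$ is the Tsirelson bound.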
Maximum violation of these inequalities is obtained for the $N$-qubit Greenberger-Horne-Zeilinger (GHZ) state (\ref{eq:ghz-1}) \cite{wernerwolf}. For optimal angle choice, a maximum value \begin{equation} \langle\mathrm{Re}\Pi_{N}\rangle,\langle\mathrm{Im}\Pi_{N}\rangle=2^{N-1}\label{eq:qm1} \end{equation} can be reached for the left-hand side of (\ref{eq:mabkodd}), while for a different optimal angle choice, the maximum value \begin{equation} \langle\mathrm{Re}\Pi_{N}\rangle+\langle\mathrm{Im}\Pi_{N}\rangle=2^{N-1/2}\label{eq:qm2} \end{equation} can be reached for the left-hand side of (\ref{eq:mabkeven}). The MABK Bell inequalities became famous for the prediction of exponential gain in violation as the number of particles (sites), $N$, increases. The size of violation is most easily measured as the ratio of the left-hand side to the right-hand side of the inequalities (\ref{eq:mabkodd},\ref{eq:mabkeven}), seen to be $2^{(N-1)/2}$ for the MABK inequalities. Werner and Wolf \cite{wernerwolf} showed the quantum prediction to be maximum for two-setting inequalities. \subsection{MABK-type EPR steering and entanglement inequalities for multipartite qubits} Recently, MABK-type inequalities have been derived for EPR steering and entanglement \cite{ericmulti}. Entanglement is a failure of quantum separability, where each of the local states in (\ref{eq:LHS_model_multipartite}) is a quantum state ($T=N$). EPR steering occurs when there is failure of the LHS model with $T=1$. To summarize the approach of Ref. \cite{ericmulti}, we note the statistics of each \emph{quantum} state must satisfy a quantum uncertainty relation \begin{equation} \Delta^{2}J^{X}+\Delta^{2}J^{Y}\geq1.\label{eq:hup-1} \end{equation} As a consequence, for every \emph{quantum} local state $\lambda$, \begin{equation} \langle J^{X}\rangle^{2}+\langle J^{Y}\rangle^{2}\leq1,\label{eq:hupconseqquantum} \end{equation} so that each quantum site contributes a factor of modulus at most one (though of arbitrary phase) to the complex number product. This leads to the new nonlocality inequalities, which apply for all $N$, even or odd, and $T>0$: \begin{eqnarray} \langle\mathrm{Re}\Pi_{N}\rangle,\langle\mathrm{Im}\Pi_{N}\rangle & \leq & 2^{(N-T)/2},\label{eq:merminsteer}\\ \langle\mathrm{Re}\Pi_{N}\rangle+\langle\mathrm{Im}\Pi_{N}\rangle & \leq & 2^{(N-T+1)/2}.\label{eq:merminsteerstat-2} \end{eqnarray} For $T=N$, these inequalities if violated will imply entanglement, as shown by Roy \cite{roy}. If violated for $T=1$, there is EPR steering. As pointed out in \cite{ericmulti}, the exponential gain factor of the violation with the number of particles $N$ increases for increasing $T$: the strength of violation as measured by the ratio of left-hand to right-hand side is $2^{(N+T-2)/2}$, the same for both inequalities (\ref{eq:merminsteer}-\ref{eq:merminsteerstat-2}).
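To make the exponential violation concrete (a worked example we add), consider $N=3$ qubits in the GHZ state (\ref{eq:ghz-1}) with Pauli-normalized spins. Expanding the moment gives $\mathrm{Re}\Pi_{3}=J_{1}^{X}J_{2}^{X}J_{3}^{X}-J_{1}^{X}J_{2}^{Y}J_{3}^{Y}-J_{1}^{Y}J_{2}^{X}J_{3}^{Y}-J_{1}^{Y}J_{2}^{Y}J_{3}^{X}$, and for the GHZ state $\langle J^{X}J^{X}J^{X}\rangle=+1$ while each mixed $XYY$-type correlation equals $-1$, so that \[ \langle\mathrm{Re}\Pi_{3}\rangle=1-(-1)-(-1)-(-1)=4=2^{N-1}. \] This exceeds the MABK (LHV) bound $2^{(N-1)/2}=2$ of (\ref{eq:mabkodd}), the $T=1$ EPR-steering bound $2^{(N-T)/2}=2$ of (\ref{eq:merminsteer}), and the full-separability ($T=N$) bound $2^{0}=1$.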
\subsection{CFRD Multipartite qudit Bell, EPR steering and entanglement inequalities} We now summarize an alternative approach to nonlocality inequalities, developed by Cavalcanti, Foster, Reid and Drummond (CFRD) \cite{cfrd,vogelcfrd,acincfrd,cfrd he func,cfrdhepra}. These hold for any operators, and are not restricted to spin-half or qubits. We shall apply this approach to the case of a hierarchy of inequalities, with some quantum and some classical hidden variable states. Consider \begin{eqnarray} |\prod_{N}| & \leq & \int d\lambda P(\lambda)\Pi_{k=1}^{N}|\langle J_{k}^{s_{k}}\rangle_{\lambda}|\label{eq:prodNmabkhidden-1}\\ & = & \int d\lambda P(\lambda)\Pi_{k=1}^{N}\{\langle J_{k}^{X}\rangle_{\lambda}^{2}+\langle J_{k}^{Y}\rangle_{\lambda}^{2}\}^{1/2}. \end{eqnarray} We can see that for any LHV model, because variances are non-negative, the following inequality holds for any operators \begin{equation} \langle J_{k}^{X}\rangle_{\lambda}^{2}+\langle J_{k}^{Y}\rangle_{\lambda}^{2}\leq\langle(J_{k}^{X})^{2}\rangle_{\lambda}+\langle(J_{k}^{Y})^{2}\rangle_{\lambda}\label{eq:LHVspinvar} \end{equation} but for a quantum state, in view of the uncertainty relation (\ref{eq:hup-1}), it is the case that for qubits (spin-1/2) \begin{equation} \langle J_{k}^{X}\rangle_{\lambda}^{2}+\langle J_{k}^{Y}\rangle_{\lambda}^{2}\leq\langle(J_{k}^{X})^{2}\rangle_{\lambda}+\langle(J_{k}^{Y})^{2}\rangle_{\lambda}-1.\label{eq:lqsvarhur} \end{equation} For the particular case of qubits, the outcomes are $\pm1$, so that simplification occurs to give final bounds identical to (\ref{eq:merminsteer}-\ref{eq:merminsteerstat-2}). We note that at $T=0$, there is also a CFRD Bell inequality, but it is weaker than that of MABK, in the sense that the violation is not as strong, and no violation is predicted for $N=2$. 
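To see the qubit reduction explicitly (our short summary of the simplification referred to above), note that for Pauli-normalized spins $(J_{k}^{X})^{2}=(J_{k}^{Y})^{2}=1$, so (\ref{eq:LHVspinvar}) gives $|\langle J_{k}^{s_{k}}\rangle_{\lambda}|\leq\sqrt{2}$ at an LHV site, while (\ref{eq:lqsvarhur}) gives $|\langle J_{k}^{s_{k}}\rangle_{\lambda}|\leq1$ at a quantum site. Hence \[ |\prod_{N}|\leq\int d\lambda P(\lambda)\,(1)^{T}(\sqrt{2})^{N-T}=2^{(N-T)/2}, \] which reproduces the scaling of the bounds (\ref{eq:merminsteer}).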
Since this approach holds for any operator, we can now generalize to arbitrary spin. The expressions (\ref{eq:LHVspinvar}-\ref{eq:lqsvarhur}) also hold for arbitrary spin, for which case we revert to the usual spin outcomes (rather than the Pauli spin outcomes of $\pm1$). The LHV result for arbitrary spin is constrained by (\ref{eq:LHVspinvar}). The quantum result however requires a more careful uncertainty relation that is relevant to higher spins. In fact, for systems of fixed dimensionality $d$, or fixed spin $J$, the {}``qudits'', the following uncertainty relation holds \begin{equation} \Delta^{2}J^{X}+\Delta^{2}J^{Y}\geq C_{J},\label{eq:cj} \end{equation} where $C_{J}$ has been derived and presented in Ref. \cite{cj}. The use of the more general result (\ref{eq:cj}) gives the following higher-spin (qudit) nonlocality inequalities derived in Ref. \cite{higherspin steerq}: \begin{multline} |\langle\prod_{k=1}^{N}J_{k}^{s_{k}}\rangle|^{2}\leq\int d\lambda P(\lambda)\prod_{k=1}^{N}|\langle J_{k}^{s_{k}}\rangle_{\lambda}|^{2}\\ \leq\left\langle \prod_{k=1}^{T}[(J_{k}^{X})^{2}+(J_{k}^{Y})^{2}-C_{J}]\prod_{k=T+1}^{N}[(J_{k}^{X})^{2}+(J_{k}^{Y})^{2}]\right\rangle .\label{eq:ineqcom} \end{multline} Thus: \begin{enumerate} \item Entanglement is verified if ($T=N$) \begin{eqnarray} |\langle\prod_{k=1}^{N}J_{k}^{s_{k}}\rangle|^{2} & > & \langle\prod_{k=1}^{N}[(J_{k})^{2}-(J_{k}^{Z})^{2}-C_{J}]\rangle.\label{eq:spinjent} \end{eqnarray} \item An EPR-steering nonlocality is verified if ($T=1$) \begin{eqnarray} |\langle\prod_{k=1}^{N}J_{k}^{s_{k}}\rangle|^{2} & > & \langle[(J_{1})^{2}-(J_{1}^{Z})^{2}-C_{J}]\nonumber \\ & & \ \ \times\prod_{k=2}^{N}[(J_{k}^{X})^{2}+(J_{k}^{Y})^{2}]\rangle.\label{eq:spinjsteer} \end{eqnarray} \item Bell inequality ($T=0$). The criterion to detect failure of the LHV theories is \begin{eqnarray} |\langle\prod_{k=1}^{N}J_{k}^{s_{k}}\rangle|^{2} & > & \langle\prod_{k=1}^{N}[(J_{k}^{X})^{2}+(J_{k}^{Y})^{2}]\rangle. \end{eqnarray} \end{enumerate} These criteria will be called the {}``$C_{J}$'' CFRD nonlocality criteria, and allow investigation of nonlocality in multisite qudits, where the spin $J$ is fixed. We investigate predictions for quantum states that are maximally entangled, or not so, according to measures of entanglement that are justified for pure states. Maximally entangled, highly correlated states for a fixed spin $J$ are written \begin{eqnarray} |\Psi\rangle_{max} & = & \frac{1}{\sqrt{d}}\sum_{m=-J}^{J}|m\rangle_{1}|m\rangle_{2}...|m\rangle_{N}\nonumber \\ & = & \frac{1}{\sqrt{d}}\sum_{j=0}^{d-1}|j\rangle_{1}|j\rangle_{2}...|j\rangle_{N},\label{eq:maxentst} \end{eqnarray} where $|m\rangle_{k}\equiv|J,m\rangle_{k}$ is the eigenstate of $J_{k}^{2}$ and $J_{k}^{Z}$ (eigenvalue $m$ for $J_{k}^{Z}$), defined at site $k$, and the dimensionality is $d=2J+1$. This state is the extension of (\ref{eq:maxentqudit}) to multiple sites. We follow \cite{acin} however and consider more generally the non-maximally entangled but highly correlated spin states of the form \begin{eqnarray} |\psi\rangle_{non} & = & \frac{1}{\sqrt{n}}[r_{-J}|J,-J\rangle^{\otimes N}+r_{-J+1}|J,-J+1\rangle^{\otimes N}\nonumber \\ & & \ \ \ \ +...+r_{J}|J,+J\rangle^{\otimes N}],\label{eq:nonmaximally state} \end{eqnarray} where $|J,m\rangle^{\otimes N}=\Pi_{k=1}^{N}|J,m\rangle_{k}$ and $n={\displaystyle \sum_{m=-J}^{J}}r_{m}^{2}$. Here we will restrict to the case of real parameters symmetrically distributed around $m=0$. 
The amplitudes $r_{m}$ can be selected to optimize the nonlocality result. It is known, for example, that with $N$ sites and a spin-$1$ system the optimized state \begin{eqnarray} |\psi\rangle & = & \frac{1}{\sqrt{r^{2}+2}}(|1,-1\rangle^{\otimes N}+r|1,0\rangle^{\otimes N}\nonumber \\ & & \ \ \ \ +|1,+1\rangle^{\otimes N}),\label{eq:staterspin1} \end{eqnarray} will give improved violation over the maximally entangled state (for which the amplitudes are uniform) for some Bell inequalities \cite{acin}. With the optimization described above, we summarize the results explained in Ref. \cite{higherspin steerq}: the growth of the violation of the nonlocality inequalities with increasing number $N$ of spin sites is maintained for arbitrary $d$. This is shown in Figure 2 for qudits $d=2$ and $d=3$ (spin $J=1/2$ and $J=1$), and means for higher $d$ that one can obtain in principle a violation of the inequalities for arbitrary $d$ by increasing $N$. Thus, quantum mechanics predicts that, at least for some states, increasing contradiction with separable theories is possible as the number of sites increases, even where one has at each site a system of high spin. These results are consistent with those obtained by other authors \cite{Cabello,multisitequdit,multisitequditson}. \begin{figure} \caption{\emph{Showing nonlocality to be possible for large numbers $N$ of spin systems.}} \end{figure}
\section{Genuine Multiparticle Nonlocality: qubit example} Svetlichny \cite{genuine} addressed the following question: how many particles are \emph{genuinely} entangled? The above nonlocality inequalities can be violated if separability/locality fails between just a single \emph{pair} of sites. To prove that \emph{all} $N$ sites are entangled, or that the Bell nonlocality is shared between \emph{all} $N$ sites, is a more challenging task, and one that relates more closely to the question of multi-particle quantum mechanics. To detect genuine nonlocality, one needs to construct different criteria. For example where $N=3$, to show genuine tripartite entanglement, we need to exclude that the statistics can be described by bipartite entanglement i.e., by the models \begin{equation} \rho=\sum_{R}P_{R}\rho_{AB}^{R}\rho_{C}^{R},\,\rho=\sum_{R}P_{R}\rho_{A}^{R}\rho_{BC}^{R},\,\rho=\sum_{R}P_{R}\rho_{B}^{R}\rho_{AC}^{R},\label{eq:genentmodel} \end{equation} where $\rho_{IJ}^{R}$ can be \emph{any} density operator for the composite system of $I$ and $J$. These models can fail \emph{only} if there is genuine tripartite entanglement. Thus, to show there is a genuine tripartite Bell nonlocality, one needs to falsify all models encompassing bipartite Bell nonlocality, i.e., \begin{equation} P(x_{\theta},x_{\phi},x_{\vartheta})=\int d\lambda P(\lambda)P_{AB}(x_{\theta},x_{\phi}|\lambda)P_{C}(x_{\vartheta}|\lambda)\label{eq:gennolocmodel} \end{equation} and the permutations. In the expansion (\ref{eq:gennolocmodel}), locality is not assumed between $A$ and $B$, but is assumed between the composite system $AB$ and $C$. This model allows bipartite entanglement between $A$ and $B$, but not tripartite entanglement. To test genuine nonlocality or entanglement, it is therefore useful to consider hybrid local-nonlocal models. What is a condition for genuine $N$-partite entanglement? Consider again the $N$-qubit system. A recent analysis \cite{ericmulti} follows Svetlichny \cite{genuine} and Collins \emph{et al.} (CGPRS) \cite{collspinmol}, to consider a hybrid local-nonlocal model in which Bell nonlocality \emph{can} exist, but only if shared among $k=N-1$ or fewer parties. Separability must then be retained between any two groups $A$ and $B$ of $k$ and $N-k$ parties respectively, if $k>N/2$, and one can write: \begin{equation} \langle\prod_{j=1}^{N}F_{j}^{s_{j}}\rangle=\int_{\lambda}d\lambda P(\lambda)\langle\prod_{j=1}^{k}F_{j}^{s_{j}}\rangle_{A,\lambda}\langle\prod_{j=k+1}^{N}F_{j}^{s_{j}}\rangle_{B,\lambda}.\label{eqn:sepave-1} \end{equation} Violation of all such {}``$k$-nonlocality'' models then implies the nonlocality to be genuinely {}``$(k+1)$-partite''. We summarize Ref. \cite{ericmulti}, which uses the hybrid model (\ref{eqn:sepave-1}) to consider its consequences for the three different types of nonlocality. 
Multiplying out $\prod_{j=1}^{N}F_{j}^{s_{j}}=\mathrm{Re}\Pi_{N}+i\mathrm{Im}\Pi_{N}$ reveals recursive relations $\mathrm{Re}\Pi_{N}=\mathrm{Re}\Pi_{N-1}\sigma_{x}^{N}-\mathrm{Im}\Pi_{N-1}\sigma_{y}^{N}$, $\mathrm{Im}\Pi_{N}=\mathrm{Re}\Pi_{N-1}\sigma_{y}^{N}+\mathrm{Im}\Pi_{N-1}\sigma_{x}^{N}$, which imply algebraic constraints that must hold for all theories \cite{mermin bellghz} \begin{eqnarray} \langle\mathrm{Re}\Pi_{N}\rangle,\langle\mathrm{Im}\Pi_{N}\rangle & \leq & 2^{N-1},\label{eq:alg1}\\ \langle\mathrm{Re}\Pi_{N}\rangle+\langle\mathrm{Im}\Pi_{N}\rangle & \leq & 2^{N}.\label{eq:alg2} \end{eqnarray} These recursive relations and the CHSH lemma summarized by Ardehali \cite{ard} give the Svetlichny-CGPRS inequality \cite{genuine,collspinmol} \[ \langle\mathrm{Re}\Pi_{N}\rangle+\langle\mathrm{Im}\Pi_{N}\rangle\leq2^{N-1}, \] the violation of which confirms genuine $N$-partite Bell nonlocality. The quantum prediction maximizes at (\ref{eq:qm2}) to predict violation by a \emph{constant} amount (a ratio $S_{N}=\sqrt{2}$) \cite{ghose-1,ghose2}. In order to investigate the other nonlocalities, for example the genuine multipartite steering, the authors of Ref. \cite{ericmulti} suggest the hybrid approach of \emph{quantizing} $B$, the group of $N-k$ qubits, but not group $A$. In this case, the extremal points of the hidden variable product $\langle\Pi_{k}^{A}\rangle_{\lambda}=\langle\prod_{j=1}^{k}F_{j}^{s_{j}}\rangle_{A,\lambda}$ of $A$ are constrained only by the \emph{algebraic} limit (\ref{eq:alg1}), whereas the product $\langle\Pi_{N-k}^{B}\rangle_{\lambda}\equiv\langle\prod_{j=k+1}^{N}F_{j}^{s_{j}}\rangle_{\lambda}$ for group $B$ is constrained by the \emph{quantum} result (\ref{eq:qm2}). We note that a criterion for genuine $N$-qubit entanglement is obtained by constraining \emph{both} $A$ and $B$ to be quantum, leading to the condition \begin{equation} \langle\mathrm{Re}\Pi_{N}\rangle,\langle\mathrm{Im}\Pi_{N}\rangle\leq2^{N-2} \end{equation} (as derived in Ref. \cite{tothguhne}), and $\langle\mathrm{Re}\Pi_{N}\rangle+\langle\mathrm{Im}\Pi_{N}\rangle\leq2^{N-3/2}$. These are violated by (\ref{eq:qm1}-\ref{eq:qm2}) to confirm genuine $N$-qubit entanglement ($S_{N}=2$). In short, genuine $N$-particle nonlocality can be confirmed using MABK-type inequalities for $N$ qubits, but a higher threshold is required. The threshold is exceeded by the quantum prediction for the GHZ states, but the higher bound implies the level of violation is \emph{no longer} exponentially increasing with $N$. As a related consequence, the higher threshold also implies a much higher bound for efficiency, which makes multi-particle nonlocality difficult to detect for increasingly larger systems. \begin{figure} \caption{\emph{Genuine nonlocality and entanglement.}} \end{figure}
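As a plug-in example (ours), for $N=3$ the Svetlichny-CGPRS bound is $2^{N-1}=4$, while the quantum maximum (\ref{eq:qm2}) for the GHZ state is $2^{N-1/2}=4\sqrt{2}\approx5.66$, a violation by the constant factor $S_{3}=\sqrt{2}$. The genuine tripartite entanglement bounds are $\langle\mathrm{Re}\Pi_{3}\rangle\leq2^{N-2}=2$ and $\langle\mathrm{Re}\Pi_{3}\rangle+\langle\mathrm{Im}\Pi_{3}\rangle\leq2^{N-3/2}=2\sqrt{2}$, exceeded by the GHZ maxima (\ref{eq:qm1}-\ref{eq:qm2}) by the factor $S_{3}=2$; in contrast, the ordinary MABK ratio $2^{(N-1)/2}$ keeps growing with $N$.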
\section{Investigating Entanglement using Collective measurements: spin squeezing inequalities} While detection of individual qubits could be achieved in many systems, the demonstration of a \emph{genuine} multi-particle nonlocality for large $N$ would likely require exceptional detection efficiencies. We thus review and outline a complementary approach, which is the measurement of the \emph{collective} spin of a system. \subsection{Spin squeezing entanglement criterion} Consider $N$ identical spin-$J$ particles (Figure 3). One defines the collective spin operator \begin{equation} J^{X}=\sum_{k=1}^{N}J_{k}^{X}\label{eq:spincoll} \end{equation} and similarly $J^{Y}$ and $J^{Z}$. Entanglement between the spin-$J$ particles can be inferred via measurements of these collective operators. The concept of spin squeezing was pioneered by Kitagawa and Ueda \cite{spinsq-1kueda}, and Wineland et al \cite{wineland e}. To investigate entanglement, we note that for each particle, or quantum site $k$, the Heisenberg uncertainty relation holds \begin{equation} \Delta J_{k}^{X}\Delta J_{k}^{Y}\geq|\langle J_{k}^{Z}\rangle|/2.\label{eq:hup} \end{equation} If the system is fully separable (no entanglement) then \begin{equation} \rho=\sum_{R}P_{R}\rho_{1}^{R}...\rho_{k}^{R}...\rho_{N}^{R}.\label{eq:fulsep} \end{equation} For a mixture, the variance is at least the average of the variances of the components, which for a product state is the sum of the individual variances \cite{hofman}. Thus, separability implies \begin{equation} \Delta^{2}J^{X}\geq\sum_{R}P_{R}\sum_{k=1}^{N}\Delta^{2}J_{k}^{X}.\label{eq:varmin} \end{equation} The next point to note is that for a fixed dimensionality spin-$J$ system, there is a constraint on the \emph{minimum} value for the variance of spin. The constraint on the minimum arises because of the constraint on the \emph{maximum} variance, which for fixed spin $J$ must be bounded by \begin{equation} \Delta^{2}J^{Y}\leq J^{2}.\label{eq:varmax} \end{equation} This implies, by the uncertainty relation, the lower bound on the minimum variance for a spin-$J$ system \begin{equation} \Delta^{2}J^{X}\geq\langle J^{Z}\rangle^{2}/4J^{2}.\label{eq:hupspinsq} \end{equation} Then we can prove, using (\ref{eq:varmin}) together with (\ref{eq:hupspinsq}) to get the first line, \begin{eqnarray} \Delta^{2}J^{X} & \geq & \frac{1}{4J^{2}}\sum_{k=1}^{N}\sum_{R}P_{R}\langle J_{k}^{Z}\rangle_{R}^{2}\nonumber \\ & \geq & \frac{1}{4J^{2}}\sum_{k=1}^{N}|\sum_{R}P_{R}\langle J_{k}^{Z}\rangle_{R}|^{2}\nonumber \\ & = & \frac{1}{4J^{2}}\sum_{k=1}^{N}|\langle J_{k}^{Z}\rangle|^{2}\label{eq:proofspinsq} \end{eqnarray} and the Cauchy-Schwarz inequality to get the second line (use $(\sum x^{2})(\sum y^{2})\geq|\sum xy|^{2}$ where $x=\sqrt{P_{R}}$ and $y=\sqrt{P_{R}}\langle J_{k}^{Z}\rangle_{R}$). We can rewrite and use the Cauchy-Schwarz inequality again (this time, $x=1/\sqrt{N}$ and $y=\langle J_{k}^{Z}\rangle/\sqrt{N}$), to obtain \begin{eqnarray} \Delta^{2}J^{X} & \geq & \frac{N}{4J^{2}}\sum_{k=1}^{N}\frac{1}{N}|\langle J_{k}^{Z}\rangle|^{2}\nonumber \\ & \geq & \frac{N}{4J^{2}}|\sum_{k=1}^{N}\frac{1}{N}\langle J_{k}^{Z}\rangle|^{2}\nonumber \\ & = & \frac{1}{4NJ^{2}}|\langle J^{Z}\rangle|^{2}.\label{eq:proofspinsq-1} \end{eqnarray} We can express this bound as the curve \begin{equation} y=x^{2}/4J,\label{eq:analyquad} \end{equation} where $y=\Delta^{2}J^{X}/(NJ)$ and $x=|\langle J^{Z}\rangle|/(NJ)$. 
For $J=1/2$, we obtain the result that for a fully separable state, \begin{equation} \Delta^{2}J^{X}\geq|\langle J^{Z}\rangle|^{2}/N\label{eq:spinsqentcrit} \end{equation} ($y=x^{2}/2$). This result for spin $1/2$ was first derived by Sorenson et al \cite{sorespinsqzoller}, and is referred to as the {}``spin squeezing criterion'' to detect entanglement. Failure of (\ref{eq:spinsqentcrit}) reflects a reduction in variance (hence {}``squeezing''), and is confirmation that there is entanglement between at least two particles (sites). The criterion is often expressed in terms of the parameter defined by Wineland et al \cite{wineland e}, which is a useful measure of interferometric enhancement, as \begin{equation} \xi=\frac{\sqrt{N}\Delta J^{X}}{|\langle J^{Z}\rangle|}<1.\label{eq:spinsqe} \end{equation} \begin{figure} \caption{Detecting entanglement within the atoms of a two-component BEC using the spin squeezing criterion (\ref{eq:spinsqe}).} \end{figure} The spin squeezing criterion has been used to investigate entanglement within a group of atoms in a BEC by Esteve et al, Gross et al and Riedel et al \cite{Germany-spin&entanglement,exp multi,treutnature}. In fact, spin squeezing is predicted for the ground state of the following two-mode Hamiltonian \begin{equation} H=\kappa(a^{\dagger}b+ab^{\dagger})+\frac{g}{2}[a^{\dagger}a^{\dagger}aa+b^{\dagger}b^{\dagger}bb],\label{hamgs} \end{equation} which is a good model for a two-component BEC. Here $\kappa$ denotes the conversion rate between the two components, and $g$ is a self-interaction term. More details on one method of solution of this Hamiltonian and some other possible entanglement criteria are given in Ref. \cite{eprbec he}. To summarize, collective spin operators can be defined in the Schwinger representation: \begin{eqnarray} J^{Z} & = & (a^{\dagger}a-b^{\dagger}b)/2,\nonumber \\ J^{X} & = & (a^{\dagger}b+ab^{\dagger})/2,\nonumber \\ J^{Y} & = & (a^{\dagger}b-ab^{\dagger})/(2i),\nonumber \\ J^{2} & = & \hat{N}(\hat{N}+2)/4,\nonumber \\ \hat{N} & = & a^{\dagger}a+b^{\dagger}b.\label{eq:schwinger} \end{eqnarray} The system is viewed as $N$ atoms, each with two levels (components) available to it. For each atom, the spin is defined in terms of boson operators $J_{i}^{Z}=(a_{i}^{\dagger}a_{i}-b_{i}^{\dagger}b_{i})/2$, where the total number for each atom is $N_{i}=1$, and the outcomes for $a_{i}^{\dagger}a_{i}$ and $b_{i}^{\dagger}b_{i}$ are $0$ and $1$. The collective spin defined as $J^{Z}=\sum_{i}J_{i}^{Z}=\sum_{i}(a_{i}^{\dagger}a_{i}-b_{i}^{\dagger}b_{i})/2$ can then be re-expressed in terms of the total occupation number sums of each level. Figure 4 shows predictions for the variance of the collective spins $J^{Z}$ or $J^{Y}$, where the mean spin is aligned along the direction $J^{X}$, as a function of the ratio $Ng/\kappa$, for a fixed number of atoms $N$ and a fixed intercomponent coupling. In nonlinear regimes, indicated by $g\neq0$, we see $\xi<1$ is predicted, which is sufficient to detect entanglement. Heisenberg relations imply $\xi\geq1/\sqrt{N}$. Sorenson and Molmer \cite{soremol} have evaluated the exact minimum spin variance for a fixed $J$. Their result for $J=1/2$ agrees with (\ref{eq:hupspinsq}) and also (\ref{eq:spinsqentcrit}), but for $J\geq1$ there is a tighter lower bound for the minimum variance, which can be expressed as \begin{equation} \Delta^{2}J^{X}/J\geq F_{J}(\langle J^{Z}\rangle/J),\label{eq:min functsm} \end{equation} where the functions $F_{J}$ are given in Ref. \cite{soremol}. 
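As a minimal numerical sketch (our illustration, under stated assumptions; it is not the calculation used for the figures, and the function names and parameter values are ours), the ground state of the two-mode Hamiltonian (\ref{hamgs}) at fixed atom number $N$ can be obtained by exact diagonalization in the Fock basis $|n_{a},N-n_{a}\rangle$, and a squeezing parameter $\xi=\sqrt{N}\Delta J^{Z}/|\langle J^{X}\rangle|$ (the criterion (\ref{eq:spinsqe}) with the axes relabeled to match the Figure 4 discussion) evaluated directly:
\begin{verbatim}
import numpy as np

def ground_state(N, kappa, g):
    """Ground state of H = kappa(a^dag b + a b^dag)
    + (g/2)(a^dag a^dag a a + b^dag b^dag b b)
    in the fixed-N subspace |n_a, N - n_a>, n_a = 0..N."""
    dim = N + 1
    Jz = np.diag([na - 0.5 * N for na in range(dim)])
    Jp = np.zeros((dim, dim))           # J^+ = a^dag b raises n_a by one
    for na in range(N):
        Jp[na + 1, na] = np.sqrt((na + 1) * (N - na))
    Jx = 0.5 * (Jp + Jp.T)
    Hint = np.diag([0.5 * g * (na * (na - 1) + (N - na) * (N - na - 1))
                    for na in range(dim)])
    H = kappa * (Jp + Jp.T) + Hint      # real symmetric matrix
    vals, vecs = np.linalg.eigh(H)
    return vecs[:, 0], Jx, Jz           # ground state and spin operators

def xi_number(N, kappa, g):
    """Squeezing parameter sqrt(N) * Delta(J^Z) / |<J^X>| for the ground state."""
    psi, Jx, Jz = ground_state(N, kappa, g)
    mean = lambda op: float(psi @ op @ psi)
    var_z = mean(Jz @ Jz) - mean(Jz) ** 2
    return np.sqrt(N * var_z) / abs(mean(Jx))

if __name__ == "__main__":
    N, kappa = 100, 1.0
    for ratio in [0.0, 1.0, 10.0, 100.0]:   # ratio = N g / kappa
        print(ratio, xi_number(N, kappa, ratio * kappa / N))
\end{verbatim}
For $g=0$ the ground state is a coherent spin state and $\xi=1$; increasing $Ng/\kappa$ suppresses the number-difference variance $\Delta^{2}J^{Z}$ and gives $\xi<1$, which by (\ref{eq:spinsqe}) signals entanglement.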
The above criteria hold for particles that are effectively indistinguishable. It is usually of most interest to detect entanglement when the particles involved are distinguishable, or, even better, causally separated. We ask how to detect entanglement between spatially-separated or at least distinguishable groups of spin $J$. We examined this question in Section III, and considered criteria that were useful for superposition states with mean zero spin amplitude. Another method put forward by Sorenson and Molmer (SM) is as follows. The separability assumption (\ref{eq:fulsep}) will imply \cite{soremol}, \begin{equation} \Delta^{2}J^{Z}\geq NJF_{J}(\langle J^{X}\rangle/NJ),\label{eq:smolvar} \end{equation} where we have for convenience exchanged the notation of $X$ and $Z$ directions (compared to (\ref{eq:min functsm})). The expression applies when considering $N$ states $\rho_{k}^{R}$ which have a fixed spin $J$, and could be useful where the mean spin is nonzero.
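As a consistency check (ours), for $J=1/2$ the function reduces to the quadratic $F_{1/2}(x)=x^{2}/2$ implied by the agreement noted above, and (\ref{eq:smolvar}) becomes \[ \Delta^{2}J^{Z}\geq\frac{N}{2}\cdot\frac{1}{2}\left(\frac{2\langle J^{X}\rangle}{N}\right)^{2}=\frac{|\langle J^{X}\rangle|^{2}}{N}, \] which is just the spin squeezing criterion (\ref{eq:spinsqentcrit}) with the $X$ and $Z$ directions exchanged.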
\subsection{Depth of entanglement and genuine entanglement} We note from (\ref{eq:min functsm}) that the minimum variance (maximum spin squeezing) reduces as $J$ increases. Sorenson and Molmer (SM) showed how this feature can be used to demonstrate that a minimum number of particles or sites are genuinely entangled \cite{soremol}. If \begin{equation} \Delta^{2}J^{Z}/NJ<F_{J_{0}}(\langle J^{X}\rangle/NJ),\label{eq:geentpart} \end{equation} then we must have $J>J_{0}$, and hence blocks of more than $N_{0}=2J_{0}$ particles must be involved (since the maximum spin for a block of $N_{0}$ atoms is $J_{0}=N_{0}/2$), to allow the higher spin value. It will be useful to summarize the proof of this result in some detail, as follows. Consider a system with the density matrix \begin{eqnarray} \rho & = & \sum_{R}P_{R}\rho^{R}\nonumber \\ \nonumber \\ & = & \sum_{R}P_{R}\prod_{i=1}^{N_{R}}\rho_{i}^{R}.\label{eq:rho-1} \end{eqnarray} We will consider for the sake of simplicity \emph{that the overall system has a fixed number of atoms $N_{T}$ and a fixed total spin $J_{tot}$.} The density operator (\ref{eq:rho-1}) describes a system in a mixture of states $\rho^{R}$, with probability $P_{R}$. For each possibility $R$, there are $N_{R}$ blocks, each with $N_{R,i}$ atoms and a total spin $J_{R,i}$ (note that $J_{R,i}\leq N_{R,i}/2$) (Figure 5). We note that if the maximum number of atoms in each block is $N_{0}$, then the \emph{maximum} spin for the block is $J_{0}=N_{0}/2$. Also, if the total number of atoms is fixed at $N_{T}$, then $N_{T}=\sum_{i=1}^{N_{R}}N_{R,i}$, which implies that each $\rho_{i}^{R}$ has a definite number $N_{R,i}$, meaning it cannot be a superposition of states of different numbers. Similarly, for a product state the total spin must be the sum of the individual spins (as readily verified on checking Clebsch-Gordan coefficients), which implies that if the total spin is fixed, then each $\rho_{i}^{R}$ has a fixed spin (that is, it cannot be in a superposition state of different spins). \begin{figure} \caption{\emph{Genuine multi-particle entanglement.}} \end{figure} Using again the facts that the variance of the mixture cannot be less than the average of the variances of its components, and that the variance of the product state $\rho^{R}$ is the sum of the variances $(\Delta^{2}J^{Z})_{R,i}$ of each factor state $\rho_{i}^{R}$, we apply (\ref{eq:min functsm}), which states that the variance has a lower bound determined by the spin $J_{R,i}$. Thus we can write: \begin{eqnarray} \Delta^{2}J^{Z} & \geq & \sum_{R}P_{R}\sum_{i=1}^{N_{R}}(\Delta^{2}J^{Z})_{R,i}\nonumber \\ & \geq & \sum_{R}P_{R}\sum_{i=1}^{N_{R}}J_{R,i}F_{J_{R,i}}(|\langle J^{X}\rangle|_{R,i}/J_{R,i}). \end{eqnarray} \textcolor{black}{Now we can use the fact that the curves $F_{J}$ are nested to form a decreasing sequence at each value of their domain as $J$ increases, as explained by Sorenson and Molmer. We then apply the steps of the SM proof (equations (6)-(8) of their paper), which use convexity of the functions $F_{J}$. We cannot exclude that the total spin of a block is zero, $J_{R,i}=0$, for which $(\Delta^{2}J^{Z})_{R,i}\geq0$, but such blocks do not contribute to the summation and can be formally excluded. We define the total spin $\sum_{i=1}^{N_{R}}J_{R,i}=J_{tot}^{R}$ for each $\rho^{R}$, but note that for fixed total spin this is equal to $J_{tot}$, and we also note that $J_{R,i}\leq J_{0}$. 
In the later steps below, we define the total spin as $J_{tot}=\sum_{R}P_{R}J_{tot}^{R}$ and use the collective spin operator $J^{Z}=\sum_{i}J_{i}^{Z}$.} \begin{eqnarray} \Delta^{2}J^{Z} & \geq & \sum_{R}P_{R}\sum_{i=1}^{N_{R}}J_{R,i}F_{J_{0}}(\langle J^{X}\rangle_{R,i}/J_{R,i})\nonumber \\ & = & \sum_{R}P_{R}J_{tot}^{R}\sum_{i=1}^{N_{R}}\frac{J_{R,i}}{J_{tot}^{R}}F_{J_{0}}(\langle J^{X}\rangle_{R,i}/J_{R,i})\nonumber \\ & \geq & \sum_{R}P_{R}J_{tot}^{R}F_{J_{0}}(\frac{\sum_{i=1}^{N_{R}}\langle J^{X}\rangle_{R,i}}{J_{tot}^{R}})\nonumber \\ & = & J_{tot}\sum_{R}P_{R}\frac{J_{tot}^{R}}{J_{tot}}F_{J_{0}}(\frac{\sum_{i=1}^{N_{R}}\langle J^{X}\rangle_{R,i}}{J_{tot}^{R}})\nonumber \\ & \geq & J_{tot}F_{J_{0}}(\sum_{R}P_{R}\frac{1}{J_{tot}}\sum_{i=1}^{N_{R}}\langle J^{X}\rangle{}_{R,i})\nonumber \\ & \geq & J_{tot}F_{J_{0}}(\frac{1}{J_{tot}}\sum_{R}P_{R}\sum_{i=1}^{N_{R}}\langle J^{X}\rangle{}_{R,i})\nonumber \\ & = & J_{tot}F_{J_{0}}(\langle J^{X}\rangle/J_{tot}).\label{eq:proof2-1} \end{eqnarray} The total spin $J_{tot}$ has a maximum value of $J_{tot}=N/2$, where $N$ is the total number of atoms across all blocks, and is assumed measurable. Thus, if the maximum number of atoms in each block does not exceed $N_{0}$, then the inequality (\ref{eq:proof2-1}) must always hold. The violation of (\ref{eq:proof2-1}) is a demonstration of a group of atoms that are genuinely entangled \cite{soremol}. \begin{figure} \caption{Detecting multi-particle entanglement in the ground state of a two-component BEC, as modeled by (\ref{hamgs}).} \end{figure} The predictions of the model (\ref{hamgs}) are given in Figure 6, for a range of values of $N$ (the total number of atoms). \textcolor{black}{In each case, there is a constant total spin, $J\equiv J_{tot}$, given by $\langle(J^{X})^{2}+(J^{Y})^{2}+(J^{Z})^{2}\rangle=J(J+1)$ where $J=N/2$. We keep $N$ and the interwell coupling $\kappa$ fixed, and note that the variance of $J^{Z}$ decreases with increasing $g$, while the variance in $J^{X}$ increases. Evaluation of the normalized quantities of the SM inequality (\ref{eq:proof2-1}) is given in the second plot of Figure 6. Comparison with the functions $F_{J_{0}}$ reveals the prediction of full $N$-particle entanglement. } We note that this treatment does not itself test nonlocality, or even the quantum separability models (\ref{eq:sepN}-\ref{eq:sepent}), because measurements are not taken at distinct locations. However, it can reveal, \emph{within a quantum framework}, an underlying entanglement, of the type that could give nonlocality if the individual spins could be measured at different locations. The great advantage of the collective criteria, however, is their reduced sensitivity to efficiency, since it is no longer necessary to measure the spin at each site. The depth of spin squeezing has recently been used, as reported at this conference, to infer blocks of entangled atoms in BECs \cite{exp multi,treutnature}. To test nonlocality between sites, the criteria will need to involve measurements made at the different spatial locations. How to detect entanglement between two modes using spin operators \cite{hillzub,schvogent,spinsq,toth2009,spinsqkorb}, and how to detect a true Einstein-Podolsky-Rosen (EPR) entanglement \cite{epr rev ,reidepr,eprbohmparadox,sumuncerduan,spinprodg,Kdechoum,cavalreiduncer} in BEC \cite{murraybecepr,eprbecbar,eprbec he}, are topics of much current interest. 
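For completeness, we sketch how the boundary functions $F_{J}$ entering (\ref{eq:proof2-1}) can be generated numerically and used to test for a given depth of entanglement. The sketch assumes, as in the Sorenson and Molmer construction, that the extremal states minimizing the variance of $J^{Z}$ for a given mean value of $J^{X}$ can be obtained as ground states of $(J^{Z})^{2}-\mu J^{X}$ for a Lagrange multiplier $\mu\geq0$; the code and function names are our own illustration rather than the procedure used for Figure 6.

\begin{verbatim}
import numpy as np

def spin_matrices(J):
    """J^X and J^Z for a single spin J, in the |J, m> basis."""
    m = np.arange(-J, J + 1)
    Jz = np.diag(m)
    Jp = np.diag(np.sqrt(J * (J + 1) - m[:-1] * (m[:-1] + 1)), k=-1)
    Jx = (Jp + Jp.T) / 2.0
    return Jx, Jz

def F_curve(J, mus=np.linspace(0.0, 50.0, 400)):
    """Approximate boundary curve for spin J: points (<J^X>/J, Var(J^Z)/J)
    traced by ground states of (J^Z)^2 - mu*J^X as mu is swept."""
    Jx, Jz = spin_matrices(J)
    xs, fs = [], []
    for mu in mus:
        _, vecs = np.linalg.eigh(Jz @ Jz - mu * Jx)
        gs = vecs[:, 0]
        mean_x = gs @ Jx @ gs
        var_z = gs @ (Jz @ Jz) @ gs - (gs @ Jz @ gs) ** 2
        xs.append(mean_x / J)
        fs.append(var_z / J)
    return np.array(xs), np.array(fs)

def depth_certified(var_Jz, mean_Jx, J_tot, J0):
    """True if the data violate Var(J^Z) >= J_tot * F_{J0}(<J^X>/J_tot),
    indicating blocks of spin larger than J0, i.e. entanglement among
    more than 2*J0 atoms."""
    xs, fs = F_curve(J0)
    bound = J_tot * np.interp(abs(mean_Jx) / J_tot, xs, fs)
    return var_Jz < bound

# Example: N = 100 atoms (J_tot = 50), tested against J0 = 1 (blocks of <= 2 atoms).
print(depth_certified(var_Jz=5.0, mean_Jx=49.0, J_tot=50.0, J0=1.0))
\end{verbatim}

In this sketch, a returned value of \texttt{True} for $J_{0}=N_{0}/2$ plays the role of a violation of (\ref{eq:proof2-1}), suggesting entanglement involving blocks of more than $N_{0}$ atoms.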
\subsection{EPR steering nonlocality with atoms } An interesting question is whether one can derive criteria, involving collective operators, to determine whether there are stronger underlying nonlocalities. How can we infer whether one group of atoms $A$ can {}``steer'' a second group $B$, as shown in schematic form in Figure 7? This would confirm an EPR paradox between the two groups, in the sense that the correlations imply an inconsistency between Local Realism (LR) and the completeness of quantum mechanics. This is an interesting task, since very little experimental work has been done on confirming EPR paradoxes even between single atoms. Steering paradoxes between groups of atoms raise even more fundamental questions about mesoscopic quantum mechanics. \begin{figure} \caption{Is {}``steering'' of one group of atoms $B$ by a second group $A$ possible? (Schematic.)} \end{figure} As an example, we thus consider the following. EPR steering is demonstrated between $N$ sites when the LHS model (\ref{eq:bell-1}) with $T=1$ fails. The system (which we will call $B$) at the one site corresponding to $T=1$ is described by a local quantum state (LQS), which means it is constrained by the uncertainty principle. All other groups are described by a Local Hidden Variable (LHV) theory, and thus are constrained only to have non-negative variances. For this first group $B$ (only), there is the SM minimum variance (implied by quantum mechanics): \begin{equation} \Delta^{2}J_{B}^{X}\geq J_{B}F_{J}(\langle J^{Z}\rangle/J_{B}).\label{eq:smolvar-1} \end{equation} Hence, with this assumption, we follow the approach of Section V.B to write (where we assume the maximum spin of the steered group $B$ is $J_{0}$) \begin{eqnarray} \Delta^{2}J^{X} & \geq & \sum_{R}P_{R}\{J_{R,B}F_{J_{0}}(\langle J^{Z}\rangle_{R,B}/J_{R,B})\}\nonumber \\ & = & \sum_{R}P_{R}J_{tot}^{R}\frac{J_{R,B}}{J_{tot}^{R}}F_{J_{0}}(\langle J^{Z}\rangle_{R,B}/J_{R,B})\nonumber \\ & \geq & \sum_{R}P_{R}J_{tot}^{R}F_{J_{0}}(\frac{\langle J^{Z}\rangle_{R,B}}{J_{tot}^{R}})\nonumber \\ & = & J_{tot}\sum_{R}P_{R}\frac{J_{tot}^{R}}{J_{tot}}F_{J_{0}}(\frac{\langle J^{Z}\rangle_{R,B}}{J_{tot}^{R}})\nonumber \\ & \geq & J_{tot}F_{J_{0}}(\sum_{R}P_{R}\frac{1}{J_{tot}}\langle J^{Z}\rangle{}_{R,B})\nonumber \\ & \geq & J_{tot}F_{J_{0}}(\frac{1}{J_{tot}}\sum_{R}P_{R}\langle J^{Z}\rangle{}_{R,B})\nonumber \\ & = & J_{tot}F_{J_{0}}(\langle J_{B}^{Z}\rangle/J_{tot}).\label{eq:proof2-1-1} \end{eqnarray} If the inequality is violated, {}``steering'' between the two groups is confirmed: group $A$ {}``steers'' group $B$. In this case, the spins of the spatially separated system $B$ would need to be measured, and potential {}``EPR'' systems of this kind have been proposed, with a view to this sort of experiment in the future.
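As a simple illustration of the bound, suppose the steered group $B$ contains at most a single atom, so that $J_{0}=1/2$. Using the $J=1/2$ curve $F_{1/2}(x)=x^{2}/2$ noted following (\ref{eq:spinsqentcrit}), the inequality (\ref{eq:proof2-1-1}) reduces to \begin{equation} \Delta^{2}J^{X}\geq\frac{|\langle J_{B}^{Z}\rangle|^{2}}{2J_{tot}}, \end{equation} so that a measured collective variance below this level would signify steering of the single atom $B$ by the remaining group $A$.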
\section{Conclusion} We have examined a strategy for testing multi-particle nonlocality, by first defining three distinct levels of nonlocality: (1) entanglement, (2) EPR paradox/steering, and (3) failure of local hidden variable (LHV) theories (which we call Bell's nonlocality). We next focused on two types of earlier studies that yielded information about nonlocality in systems of more than two particles. The first study originated with Greenberger, Horne and Zeilinger (GHZ) and considers $N$ spatially separated spin $1/2$ particles, on which individual spin measurements are made. The study revealed that nonlocality involving $N$ spatially separated (spin $1/2$) particles can become more extreme as $N$ increases. Mermin showed that the deviation of the quantum prediction from the classical LHV boundaries can grow exponentially with $N$ for this scenario. Here we have summarized some recent results of ours that reveal similar features for entanglement and EPR steering nonlocalities. Inequalities are presented that enable detection of these nonlocalities in this multipartite scenario, for certain correlated quantum states. The results are also applicable to $N$ spin $J$ particles (or systems), and thus reveal that nonlocality can survive for $N$ systems even where these systems have a higher dimensionality. We then examined the meaning of {}``multi-particle nonlocality'', in the sense originated by Svetlichny, that there is an {}``$N$-body'' nonlocality, necessarily shared among \emph{all} $N$ systems. For example, three-particle entanglement is defined as an entanglement that cannot be modeled using two-particle entangled or separable states only. Such entanglement, generalized to $N$ parties, is called genuine $N$-partite entanglement. We present some recent inequalities that detect such genuine nonlocality for the GHZ/Mermin scenario of $N$ spin $1/2$ particles, and show that a higher threshold is required, which implies a much greater sensitivity to inefficiencies $\eta$. In other words, the depth of violation of the Bell or nonlocality inequalities determines the level of \emph{genuine} multi-particle nonlocality. This led to the final focus of the paper, which examined criteria that employ collective spin measurements. For example, the spin squeezing entanglement criterion of Sorenson et al enables entanglement to be confirmed between $N$ spin $1/2$ particles, based on a reduction in the overall variance ({}``squeezing'') of a single collective spin component. The criterion works because of the finite dimensionality of the spin Hilbert space, which means that only higher spin systems $-$ as can be formed from entangled spin $1/2$ states $-$ can have larger variances in one spin component, and hence smaller variances in the other. As shown by Sorenson and Molmer, even greater squeezing of the spin variances implies a larger entanglement, between more particles. Hence the depth of spin squeezing, as with the depth of Bell violations in the GHZ/Mermin example above, implies genuine entanglement between a minimum number of particles. This result has recently been used to detect multi-particle entanglement experimentally in BEC systems. We presented a model of the ground state of the two-component BEC, calculating the extent of such multi-particle squeezing. 
We make the final point that, while collective spin measurements are useful in detecting multi-particle entanglement and overcoming problems that are encountered with detection inefficiencies, the method does not address tests of nonlocality unless the measured systems can, at least in principle, be spatially separated. This provides motivation for studies of entanglement and EPR steering between groups of atoms in spatially distinct environments. \begin{acknowledgments} We wish to thank the Humboldt Foundation, Heidelberg University, and the Australian Research Council for funding via AQUAO COE and Discovery grants, and Markus Oberthaler, Philip Treutlein, and Andrei Sidorov for useful discussions. \end{acknowledgments} \end{document}
\begin{document} \title{Faithfulness and learning hypergraphs from discrete distributions} \author{Anna Klimova \\ {\small{Institute of Science and Technology, Austria} }\\ {\small \texttt{[email protected]}}\\ {}\\ \and Caroline Uhler \\ {\small{Institute of Science and Technology, Austria} }\\ {\small \texttt{[email protected]}}\\ {}\\ \and Tam\'{a}s Rudas \\ {\small{E\"{o}tv\"{o}s Lor\'{a}nd University, Budapest, Hungary}}\\ {\small \texttt{[email protected]}}\\ } \date{} \maketitle \begin{abstract} In this paper, we study the concepts of faithfulness and strong-faithfulness for discrete distributions. In the discrete setting, graphs are not sufficient for describing the association structure. So we consider hypergraphs instead, and introduce the concept of parametric (strong-) faithfulness with respect to a hypergraph. Assuming strong-faithfulness, we build uniformly consistent parameter estimators and corresponding procedures for a hypergraph search. The strength of association in a discrete distribution can be quantified with various measures, leading to different concepts of strong-faithfulness. We explore these by computing lower and upper bounds for the proportions of distributions that do not satisfy strong-faithfulness. \end{abstract} \begin{keywords} contingency tables, directed acyclic graphs, hierarchical log-linear models, hypergraphs, (strong-) faithfulness \end{keywords} \section{Introduction}\label{intro} A graphical model is a set of probability distributions whose association structure can be identified with a graph. Given a graph, the Markov property entails a set of conditional independence relations that are fulfilled by distributions in the model. Distributions in the model that obey no further conditional independence relations are called \emph{faithful to the graph}. For each undirected graphical model, as well as for each directed acyclic graph (DAG) model, there is a distribution that is faithful to the graph \citep*[cf.][]{SpirtesBook}. Moreover, the Lebesgue measure of the set of parameters corresponding to distributions that are unfaithful to a graphical model is zero; this result was proven by \cite*{SpirtesBook} for the case of multivariate normal distributions, by \cite{MeekFaith} for discrete distributions on multi-way contingency tables, and by \cite*{Pena2009} for arbitrary sample spaces and dominating measures. It is also well known that a DAG model may include distributions that are unfaithful to it but are not Markov to any nested DAG. This kind of unfaithfulness may occur due to path cancellation and can arise both in the discrete and in the multivariate normal settings \citep[cf.][]{ZhangSp2008, UhlerRaskutti2013}. In the discrete case, the non-existence of a graph to which a distribution is faithful is related to the presence of higher than first order interactions in this distribution. Graph learning algorithms \citep[cf.][]{SpirtesBook}, which do not recognize the presence of higher order interactions, may produce a graph which does not reveal the true association structure \citep[cf.][]{StudenyBook}. In order to avoid such errors, graph learning algorithms usually assume the existence of a DAG to which the distribution is faithful. 
Since the Lebesgue measure of the set of parameters corresponding to the distributions that are unfaithful to the underlying graph is zero, the faithfulness assumption is not considered to be restrictive in the context of graphical search. While graph search procedures assuming faithfulness are pointwise consistent, they are not uniformly consistent and thus cannot simultaneously control Type I and Type II errors with a finite sample size \citep*{RSSWassUnifCons}. To ensure existence of a uniformly consistent learning procedure, strong-faithfulness of a distribution to the underlying DAG is needed \citep{ZhangSpirtesLambdaFaith}. \cite*{UhlerFaithGeometry} analyzed the Gaussian setting and showed that the strong-faithfulness assumption may, in fact, be very restrictive and the corresponding proportions of distributions which do not satisfy strong-faithfulness may become very large as the number of nodes grows. The concepts of faithfulness and strong-faithfulness were originally introduced in the causal search framework, where they are linked to identifiability of causal effects. However, as we show in this paper using the discrete setting, these concepts are also important for identifiability of more general parameters of association. In Section~\ref{sectionGraphFaith}, we define the concept of a model class being closed under a faithfulness relation: for each positive distribution, there exists a model in such a class to which it is faithful. By giving examples of distributions that are not faithful to any directed or undirected graphical model, we show that these model classes are not closed under the faithfulness relation which is based on the corresponding Markov property. Further, we introduce the concept of parametric faithfulness of a distribution to a hypergraph (instead of a graph). This concept seems more adequate for categorical data, where hypergraphs can be used to represent hierarchical log-linear models. Indeed, we show that the class of models associated with hypergraphs is closed under a parametric faithfulness relation. In Section~\ref{sectionStrongFaith}, we describe two major difficulties with the concept of strong-faithfulness in the discrete case. First, in contrast to the role of correlations in the multivariate normal case, there is no single standard measure of the strength of association in a joint distribution. Therefore, depending on the measure of association, different variants of strong-faithfulness may be considered. Second, the proportion of strong-faithful distributions depends on the parameterization used and can only be computed if the parameter space has finite volume. We explore the consequences of different parameterizations and measures of association for the case of the $2 \times 2$ contingency table. We define parametric strong-faithfulness with respect to a hypergraph under a parameterization based on the log-linear interaction parameters. Assuming strong-faithfulness, we show that the maximum likelihood estimators of the interaction parameters associated with the hyperedges are uniformly consistent. As a result, we give a set of conditions under which Type I and Type II errors can be controlled with a finite sample size. We also discuss the uniform consistency of model selection procedures for a hypergraph search, for example, using the approaches described by \cite{EdwardsBook, EdwardsNote}. 
In Section~\ref{SecProp}, we estimate the proportion of distributions that do not satisfy the parametric strong-faithfulness assumption with respect to a given hypergraph. We give an exact formulation of these proportions, under a parameterization based on conditional probabilities, for hypergraphs whose hyperedges form a decomposable set. The association structure of such distributions may be discovered incorrectly during a hypergraph learning procedure. Finally, we define the concept of projected strong-faithfulness, which applies to distributions which do not belong to the hypergraph, and estimate the proportions of projected strong-faithful distributions for several hypergraphs for the $2 \times 2 \times 2$ contingency table. In Section \ref{SecConcl}, we conclude the paper with a brief discussion of our results and their implications.
\section{Graphical and parametric faithfulness} \label{sectionGraphFaith} In this section, we first review the concept of faithfulness with respect to a graph. We then introduce parametric faithfulness with respect to a hypergraph and show that this is a more relevant concept for categorical data.
\subsection{Faithfulness with respect to a graph} Let $\mathcal{V}_1, \dots, \mathcal{V}_K$ be random variables taking values in $\mathcal{I} = \mathcal{I}_1 \times \dots \times \mathcal{I}_K $, a Cartesian product of finite sets. $\mathcal{I}$ describes a $K$-way contingency table and a vector $\boldsymbol i = (i_1,\dots,i_K) \in \mathcal{I}$ forms a cell. A subset $M \subseteq \{1,\dots ,K\}$ specifies a marginal of the joint distribution of $\mathcal{V}_1, \dots, \mathcal{V}_K$, and $M=\emptyset$ is the empty marginal. For $M=(k_1,\dots, k_t)$, the set $\mathcal{I}_M = \mathcal{I}_{k_1} \times \dots \times \mathcal{I}_{k_t}$ is a \emph{marginal table}, and the canonical projection $\boldsymbol i_M$ of the cell $\boldsymbol i$ onto the set $\mathcal{I}_M$ is a \emph{marginal cell}. We parameterize the population distribution by cell probabilities $\boldsymbol p =(p_{\boldsymbol i})_{\boldsymbol i\in \mathcal{I}}$, where $p_{\boldsymbol i} \in (0,1)$ and $\sum_{\boldsymbol i \in \mathcal{I} }p_{\boldsymbol i} = 1$, and denote by $\mathcal{P}$ the set of all distributions on $\mathcal{I}$. A subset of $\mathcal{P}$ is called a \emph{model}. For simplicity of exposition, we assume that $\mathcal{V}_1, \dots, \mathcal{V}_K$ are binary, $\mathcal{I}$ is treated as a sequence of cells ordered lexicographically, and a distribution $P\in \mathcal{P}$ is addressed by its parameter, $\boldsymbol p$. A \emph{graphical model} is a set of probability distributions, whose association structure can be identified with a graph with vertices $V = \{1, \dots, K\}$, where each vertex $i$ is associated with a random variable $\mathcal{V}_i$. In the following, we will identify each vertex with its associated random variable. The absence of an edge between two vertices means that the corresponding random variables satisfy some (conditional) independence relation. A detailed description of graphical models for discrete as well as for multivariate normal distributions can be found in \cite{EdwardsBook}, among others. In the sequel, we only consider undirected graphical models and DAG models. A graphical model identified with an undirected graph (also called a \emph{graphical log-linear model} in the discrete setting) is a set of probability distributions on $V$ that satisfy the \emph{local undirected Markov property}: Every node is conditionally independent of its non-neighbors given its neighbors. In the discrete case, such models are a subclass of hierarchical log-linear models. A graphical model identified with a directed acyclic graph, a DAG model, is a set of probability distributions on $V$ that satisfy the \emph{directed Markov property}: Every node is conditionally independent of its non-descendants given its parents. A distribution that satisfies the Markov property with respect to a graph is called \emph{Markov} to it. A distribution which is Markov to a graph is said to be \emph{faithful} to it if all conditional independencies in this distribution can be derived from the graph. 
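To make the contingency table notation concrete, the short sketch below (our own illustrative snippet, not part of the formal development) represents a $2\times 2\times 2$ table as an array and computes a marginal table by summing the cell probabilities over the variables outside $M$.

\begin{verbatim}
import numpy as np

# A 2 x 2 x 2 table of cell probabilities, cells ordered lexicographically.
p = np.arange(1, 9, dtype=float)
p /= p.sum()
table = p.reshape(2, 2, 2)          # axes 0, 1, 2 correspond to V_1, V_2, V_3

def marginal(table, M):
    """Marginal table over the variables indexed by M (0-based), obtained by
    summing the cell probabilities over the remaining axes."""
    other = tuple(ax for ax in range(table.ndim) if ax not in M)
    return table.sum(axis=other)

print(marginal(table, M=(0, 2)))    # the (V_1, V_3)-marginal, a 2 x 2 table
print(marginal(table, M=()))        # the empty marginal: total probability 1.0
\end{verbatim}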
The faithfulness relation can be thought of as a decision rule that classifies a distribution $\boldsymbol p$ in a model $\mathcal{M}$ as faithful or unfaithful to it: $$\mathbb{F}(\boldsymbol p, \mathcal{M}) = \left\{\begin{array}{ll} 1 & \mbox{ if } \boldsymbol p \mbox{ is faithful to } \, \mathcal{M},\\ 0 & \mbox{ otherwise}.\\ \end{array}\right.$$ \begin{definition} A class $\frak{C}$ of models on $\mathcal{P}$, where $\frak{C}$ is partially ordered with respect to inclusion, is said to be \emph{closed} under the faithfulness relation indicated by $\mathbb{F}$, if for every non-empty $\mathcal{M} \in \frak{C}$ and for every $\boldsymbol p \in \mathcal{M}$ such that $\mathbb{F}(\boldsymbol p , \mathcal{M}) = 0$, there exists an $\mathcal{M}' \in \frak{C}$ with $\mathcal{M}' \subset \mathcal{M}$ and $\mathbb{F}(\boldsymbol p , \mathcal{M}') = 1$. \end{definition} This definition implies that a class $\frak{C}$ is closed under the faithfulness relation indicated by $\mathbb{F}$ if and only if for every $\boldsymbol p \in \mathcal{P}$ there exists an $\mathcal{M} \in \frak{C}$, such that $\mathbb{F}(\boldsymbol p, \mathcal{M}) = 1$. Graphical log-linear models and DAG models are specified by a list of conditional independence relations which, in turn, comprise other conditional independencies. Thus, these model classes have a natural partial order implied by the conditional independence relation. We now show that these classes are not closed under the corresponding faithfulness relations. \begin{proposition} \label{prop_undirected} The class of graphical log-linear models is not closed under the faithfulness relation defined by the local undirected Markov property. \end{proposition} The following example is given as a proof. \begin{example}\label{FourVarUnfaith} Let $V=\{A, B, C, D\}$ and consider the log-linear model $[ABC][ABD]$ \citep[cf.][]{Agresti2002}. This is the model of conditional independence of $C$ and $D$ given $A$ and $B$. All distributions in this model are Markov to the graph in Figure \ref{CiDgivABgraph}. Consider the distribution parameterized by \begin{eqnarray*} \boldsymbol p &=& (0.022, 0.062, 0.063, 0.103, 0.103, 0.063, 0.062, 0.022, \\ &&0.103, 0.063, 0.062, 0.022,0.022,0.062,0.063,0.103)', \end{eqnarray*} where the cell probabilities are ordered lexicographically. In this distribution, the conditional odds ratios ($\mathcal{COR}$) of $C$ and $D$ given the levels of $A$ and $B$ are equal to $1$: \begin{equation*} \mathcal{COR}(CD\mid A=i, B=j) = \frac{p_{ij00}p_{ij11}}{p_{ij01}p_{ij10}} = 1, \,\, \mbox{for all } i, j \in \{0, 1\}. \end{equation*} Hence, the distribution is in the model. The $(A,B)$-marginal of this distribution is uniform: $$ \begin{array}{c|cc} & {B}=0 & {B}=1 \\ \hline {A} = 0& 1/4 & 1/4\\ {A} = 1& 1/4 & 1/4\\ \end{array}, $$ and thus $A \perp\!\!\!\perp B$. So the distribution is unfaithful to the graph in Figure \ref{CiDgivABgraph}. 
In addition, since the conditional odds ratios of $A$, $B$ and $C$ given $D$ and of $A$, $B$, and $D$ given $C$ are not equal to $1$: \begin{eqnarray*} \mathcal{COR}(ABC\mid D=0) &=& \frac{p_{0000}p_{1100}p_{0110}p_{1010}}{p_{0100}p_{1000}p_{0010}p_{1110}} \approx 0.04418483,\\ \mathcal{COR}(ABC\mid D=1) &=& \frac{p_{0001}p_{1101}p_{0111}p_{1011}}{p_{0101}p_{1001}p_{0011}p_{1111}} \approx 0.04418483,\\ \mathcal{COR}(ABD\mid C=0) &=& \frac{p_{0000}p_{1100}p_{0101}p_{1001}}{p_{0100}p_{1000}p_{0001}p_{1101}} \approx 0.04710518,\\ \mathcal{COR}(ABD\mid C=1) &=& \frac{p_{0010}p_{1110}p_{0111}p_{1011}}{p_{0110}p_{1010}p_{0011}p_{1111}} \approx 0.04710518, \end{eqnarray*} the distribution cannot be Markov to any nested undirected graph. \qed \end{example} The situation described in Example \ref{FourVarUnfaith} is distinctive to discrete distributions. In the Gaussian setting, marginal independence of more than two variables implies their joint independence. Thus, a multivariate normal distribution whose components are pairwise independent is Markov and faithful to a graph with no edges. But in the discrete case, a joint distribution of pairwise independent variables may have a non-trivial structure of higher than first order interactions. Next, we prove that also the class of DAG models is not closed with respect to the faithfulness relation. \begin{proposition} The class of DAG models is not closed under the faithfulness relation defined by the directed Markov property. \end{proposition} As a proof, two examples are given. The second example only pertains to the discrete case. \begin{figure} \caption{$C \perp\!\!\!\perp D \mid A,B$.} \label{CiDgivABgraph} \caption{$A \perp\!\!\!\perp C \mid B, \quad B \perp\!\!\!\perp D \mid A,C$.} \label{DAGcycle} \end{figure} \begin{example}\label{normalDAGunfaith} Let $V=\{A, B, C, D\}$ and consider the model specified by two conditional independence relations: $A \perp\!\!\!\perp C \mid B$ and $B \perp\!\!\!\perp D \mid A,C$. Any distribution in this model is Markov to the DAG in Figure \ref{DAGcycle}. For example, the distribution parameterized by \begin{eqnarray*} \boldsymbol p &=& (0.006, 0.006, 0.0288, 0.0192, 0.06, 0.06, 0.072, 0.048, 0.0056, \\ &&0.0504, 0.187148, 0.0368516, 0.021, 0.189, 0.175452, 0.0345484)', \end{eqnarray*} is in the model. However, this distribution also satisfies the additional independence relation $A \perp\!\!\!\perp D$. This independence relation is not reflected in the graph. Thus the distribution is unfaithful to the graph in Figure \ref{DAGcycle}. Next, we show that there is no DAG that fulfills all three (conditional) independence relations $A \perp\!\!\!\perp D$, $A \perp\!\!\!\perp C\mid B$, $B \perp\!\!\!\perp D\mid A,C$. If such a DAG existed, then its skeleton would have three edges: $AB$, $BC$, $CD$. In order to satisfy faithfulness, $A \perp\!\!\!\perp D$ requires that $A\to B\leftarrow C$ or $B\to C\leftarrow D$. However, $A \perp\!\!\!\perp C\mid B$ is unfaithful to $A\to B\leftarrow C$ and $B \perp\!\!\!\perp D\mid A,C$ is unfaithful to $B\to C\leftarrow D$. \qed \end{example} \begin{remark} \label{rem_Gaussian} One can also construct an instance of Example \ref{normalDAGunfaith} using multivariate normal distributions by choosing the partial correlations in such a way that the causal effect associated with the edge $A\to D$ cancels with the causal effect associated with the path $A\to B\to C\to D$ (see Figure \ref{DAGcycle}). 
This shows that Gaussian DAG models are also not closed under the faithfulness relation defined by the directed Markov property. \end{remark} The next example illustrates a situation that occurs only in the discrete case. To construct this example, we will use the fact that, in contrast to the Gaussian case, a discrete distribution with pairwise independent random variables can have non-vanishing interactions of higher than the first order. \begin{example}\label{ExampleIntro} Let $V=\{A, B, C\}$. Consider the distribution parameterized by \begin{equation}\label{distr18} \boldsymbol p = (1/8-\delta,1/8+\delta, 1/8+\delta,1/8-\delta, 1/8+\delta,1/8-\delta, 1/8-\delta,1/8+\delta)', \end{equation} where $\delta \in (-1/8, 1/8)$. Its marginals are uniform resulting in pairwise independence: $A \perp\!\!\!\perp B$, $A \perp\!\!\!\perp C$, and $B \perp\!\!\!\perp C$. The second order odds ratio of this distribution, \begin{equation*} \frac{p_{000}p_{011}p_{101}p_{110}}{p_{001}p_{010}p_{100}p_{111}} = \left(\frac{1/8-\delta}{1/8+\delta}\right)^{4}, \end{equation*} does not vanish, implying that $A$, $B$, and $C$ are not jointly independent. The distribution belongs to the graphical log-linear model that can be identified with the graph shown in row 1 of Table \ref{allgraphsABC}. Further, since each pairwise independence holds, the distribution is Markov to the DAGs shown in rows 2, 3, and 4 of Table \ref{allgraphsABC}. However, the distribution is not faithful to these DAGs and it is not Markov to any of the nested DAGs (rows 5, 6, 7 and 8). \qed
\end{example}
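The pairwise independencies and the non-vanishing second order interaction in Example \ref{ExampleIntro} can be checked directly; the short sketch below (our own illustration) does so for a particular value of $\delta$.

\begin{verbatim}
import numpy as np

delta = 0.05
# Cell probabilities of the example above, cells (a, b, c) in lexicographic order.
p = np.array([1/8 - delta, 1/8 + delta, 1/8 + delta, 1/8 - delta,
              1/8 + delta, 1/8 - delta, 1/8 - delta, 1/8 + delta]).reshape(2, 2, 2)

# Each two-way marginal is uniform, so the variables are pairwise independent.
print(p.sum(axis=2))   # (A, B)-marginal
print(p.sum(axis=1))   # (A, C)-marginal
print(p.sum(axis=0))   # (B, C)-marginal

# The second order odds ratio is not equal to 1, so A, B, C are not
# jointly independent.
second_order_or = (p[0, 0, 0] * p[0, 1, 1] * p[1, 0, 1] * p[1, 1, 0]) / \
                  (p[0, 0, 1] * p[0, 1, 0] * p[1, 0, 0] * p[1, 1, 1])
print(second_order_or, ((1/8 - delta) / (1/8 + delta)) ** 4)
\end{verbatim}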
The association structure of a distribution that is unfaithful to every model in a given class can be considered within a larger model class. We have described examples of discrete distributions for which there is no undirected graphical model or DAG model to which they are faithful. Graphical models (directed and undirected) for discrete distributions are a subclass of hierarchical marginal log-linear models \citep*{RudasBergsma, RudasBN2006} and can be considered within this larger class. We revisit Example \ref{normalDAGunfaith} to motivate the introduction of \emph{parametric faithfulness}, a generalization of the concept of faithfulness that can be applied to the class of hierarchical marginal log-linear models. In the following, we show that under this natural generalization of faithfulness, we can find a model in the class of hierarchical marginal log-linear models to which the distribution described in Example \ref{normalDAGunfaith} is faithful. \textbf{Example \ref{normalDAGunfaith}} (revisited): A marginal log-linear parameterization \citep{RudasBergsma} for the DAG in Figure~\ref{DAGcycle} can be derived from the set of marginals $$\mathcal{M} = \{(A, D), (A, B, C), (A, B, C, D)\}.$$ The corresponding parameters are: \begin{eqnarray}\label{MargEx} \lambda_{\emptyset}^{AD}, \, \, \lambda_{A*}^{AD}, \, \, \lambda_{*D}^{AD}, \, \, \lambda_{AD}^{AD}, \, \, \lambda_{*B*}^{ABC}, \, \, \lambda_{**C}^{ABC}, \, \, \lambda_{*BC}^{ABC}, \, \, \lambda_{AB*}^{ABC}, \, \, \lambda_{A*C}^{ABC}, \, \, \lambda_{ABC}^{ABC}, \nonumber \\ \\ \lambda_{*B*D}^{ABCD}, \, \, \lambda_{**CD}^{ABCD}, \, \, \lambda_{AB*D}^{ABCD}, \, \, \lambda_{A*CD}^{ABCD}, \, \lambda_{*BCD}^{ABCD}, \, \, \lambda_{ABCD}^{ABCD}. \nonumber \end{eqnarray} The conditional independencies $A \perp\!\!\!\perp C \mid B$ and $B \perp\!\!\!\perp D \mid A,C$ are obtained by taking \begin{eqnarray}\label{MargEx2} \lambda_{A*C}^{ABC} = 0, \,\, \lambda_{ABC}^{ABC} = 0, \,\, \lambda_{*B*D}^{ABCD} = 0, \,\, \lambda_{AB*D}^{ABCD} = 0, \,\, \lambda_{*BCD}^{ABCD} = 0, \,\, \lambda_{ABCD}^{ABCD} = 0. \end{eqnarray} Any distribution that is Markov to the DAG in Figure \ref{DAGcycle} can be parameterized by the remaining marginal log-linear parameters. The faithfulness relation in the class of marginal log-linear models can be defined as a relationship between the parameters of a distribution and the parameters of a model that contains the distribution. A distribution which also satisfies the marginal independence $A \perp\!\!\!\perp D$, has $\lambda_{AD}^{AD} = 0$ and thus belongs to a nested marginal log-linear model, to which it is faithful in the parametric sense. 
\qed This example motivates taking a parametric approach (instead of a graphical approach) to faithfulness. In the next section we introduce the concept of parametric faithfulness for discrete distributions more formally.
\subsection{Parametric faithfulness} \label{sec_par_faith} Let $\mathcal{P}$ denote the full exponential family of distributions. We choose a mixed parameterization $(\boldsymbol \mu, \boldsymbol \nu)$ of this family, where $\boldsymbol \mu$ denotes the vector of mean value parameters and $\boldsymbol \nu$ the vector of canonical parameters \citep[cf.][]{Barndorff1978}. Let $\frak{C}$ be a class of partially ordered exponential families that are obtained by setting some of the components of $\boldsymbol \mu$ and/or some of the components of $\boldsymbol \nu$ to zero, and let $\mathcal{M} \in \frak{C}$. Assume that $\mathcal{M}$ is parameterized by $(\boldsymbol \mu_{\mathcal{M}}, \boldsymbol \nu_{\mathcal{M}})$, where $\boldsymbol \mu_{\mathcal{M}} \subseteq \boldsymbol \mu$, $\boldsymbol \nu_{\mathcal{M}} \subseteq \boldsymbol \nu$, $\boldsymbol \mu \setminus \boldsymbol \mu_{\mathcal{M}} = \boldsymbol 0$, and $\boldsymbol \nu \setminus \boldsymbol \nu_{\mathcal{M}} = \boldsymbol 0$. We define faithfulness as a relationship between the parameters of a distribution and the parameters of a model containing the distribution under consideration. \begin{definition} \label{FaithNormalDef} A distribution $\boldsymbol p \in \mathcal{M}$ parameterized by $(\boldsymbol \mu_{\mathcal{M}}(\boldsymbol p), \boldsymbol \nu_{\mathcal{M}}(\boldsymbol p))$, satisfies the \emph{parametric faithfulness relation with respect to} $\mathcal{M}$ if none of the components of $\boldsymbol \mu_{\mathcal{M}}(\boldsymbol p)$ or $\boldsymbol \nu_{\mathcal{M}}(\boldsymbol p)$ vanish. \end{definition} The class of discrete exponential families, where the canonical parameters are the interactions of the variables in $V$ of order up to $K-1$, corresponds to the class of hierarchical log-linear models on $V$. More precisely, let $\mathcal{M}=\{M_1, \dots,M_T\}$ be a set of incomparable subsets of $V$. Then the hierarchical log-linear model generated by $\mathcal{M}$ is the set of distributions in $\mathcal{P}$ that satisfy \begin{equation}\label{LLMdef} \mbox{log } p_{\boldsymbol i} = \sum_{M \subseteq V: M \subseteq M_j \in \mathcal{M}} \gamma_M(\boldsymbol i_M), \end{equation} where $\gamma_{M'}(\boldsymbol i_{M'}) = 0$ implies $\gamma_{M''} (\boldsymbol i_{M''}) = 0$ for any $M''\supseteq M'$, and $\gamma_{M}$ are called the \emph{interaction parameters} (interactions for short). Their identifiability is assumed in the sequel. The set $\mathcal{M}$ partitions the power set of $V$ into a descending class, consisting of subsets of $M_1, \dots, M_T$, and a complementary ascending class. The partition induces a mixed parameterization of $\mathcal{P}$ with the canonical parameters equal to the conditional odds ratios (or their logarithms) of the subsets in the ascending class, given the remaining variables, and the mean value parameters equal to the marginal distributions of the subsets in the descending class. Under this parameterization, the canonical parameters of the distributions in the model generated by $\mathcal{M}$ are equal to $1$ (or zero) and the distributions are parameterized by the mean value parameters \citep{RudasSAGE}. The structure of the highest order interactions of the distributions in the hierarchical log-linear model generated by $\mathcal{M}=\{M_1, \dots,M_T\}$ is described next. In the sequel, $\bar{M}_t= V \setminus M_t$. 
\begin{lemma}\label{interactions} There exists a parameterization of $\mathcal{P}$ under which, for every $t = 1, \dots, T$, the interaction parameter $\gamma_t$ is equal to the logarithm of the conditional odds ratio of $M_t$ given $\bar{M}_t = \boldsymbol i_{\bar{M}_t}$, and does not depend on the choice of $\boldsymbol i_{\bar{M}_t}$. \end{lemma} \begin{proof} There exists a marginal log-linear parameterization of $\mathcal{P}$ under which for every $t = 1, \dots, T$, the interaction parameter $\gamma_{t}$, corresponding to the generating marginal $M_t$, is the average log conditional odds ratio of $M_t$ conditioned on and averaged over $\bar{M}_t$ \citep[cf.][]{RudasBN2006}: $$\gamma_{M_t} = \frac{1}{|\mathcal{I}_{\bar{M}_t}|}\sum_{\boldsymbol i_{\bar{M}_t}}\mbox{log } \mathcal{COR}(M_t\mid \bar{M}_t = \boldsymbol i_{\bar{M}_t}).$$ Since $M_{t}$ is a maximal interaction, $\mathcal{COR}(M'\mid \bar{M}' = \boldsymbol i_{\bar{M}'}) = 1,$ for any $M' \supsetneq M_t$. Further, it can be shown by induction on the elements of the ascending class of $M_1, \dots, M_T$, that $$\mathcal{COR}(M'\mid\bar{M}' = \boldsymbol i_{\bar{M}'}) = \frac{ \mathcal{COR}(M_t \mid (M'\setminus M_t)\cup\bar{M}' =(\boldsymbol i_{M'\setminus M_t}, \boldsymbol i_{\bar{M}'}))}{\mathcal{COR}(M_t \mid (M'\setminus M_t)\cup\bar{M}' =(\boldsymbol j_{M'\setminus M_t}, \boldsymbol i_{\bar{M}'}))} = \frac{\mathcal{COR}(M_t \mid \bar{M}_t = \boldsymbol i_{\bar{M}_t})}{\mathcal{COR}(M_t \mid \bar{M}_t = \boldsymbol j_{\bar{M}_t})},$$ and thus, $$\mbox{log } \mathcal{COR}(M_t\mid \bar{M}_t = \boldsymbol i_{\bar{M}_t}) = \mbox{log } \mathcal{COR}(M_t\mid \bar{M}_t = \boldsymbol j_{\bar{M}_t}),$$ for any $\boldsymbol i_{\bar{M}_t}$ and $\boldsymbol j_{\bar{M}_t}$. Hence, $$\gamma_{M_t} = \mbox{log } \mathcal{COR}(M_t\mid \bar{M}_t = \boldsymbol i_{\bar{M}_t}),$$ for any $\boldsymbol i_{\bar{M}_t}$. \end{proof} The association structure of a discrete distribution in a hierarchical log-linear model generated by $\mathcal{M}$ can be described with a hypergraph, $\mathcal{H}= \mathcal{H}(\mathcal{M})$ with vertices $V=\{1, \dots, K\}$ and hyperedges equal to the generating marginals, or, equivalently, to the maximum non-vanishing interactions in $\mathcal{M}$. Faithfulness to a hypergraph is naturally defined as follows: \begin{definition}\label{HypergrFaithDef} A distribution is \emph{faithful to a hypergraph} $\mathcal{H}$ if the non-vanishing maximal interactions of this distribution coincide with hyperedges of $\mathcal{H}$. \end{definition} This definition implies that a distribution in the log-linear model generated by $\mathcal{M}=\{M_1,\dots M_T\}$ is faithful to the hypergraph with hyperedges $M_1,\dots M_T$ if, for all $t \in \{1,\dots, T\}$, none of the conditional odds ratios of $M_t$ given the variables in $\bar{M}_t = V \setminus M_t$ is equal to $1$. In the following result, we show that the class of hypergraphs is closed under the parametric faithfulness relation. \begin{theorem} \label{pIII} The class of hypergraphs in $\mathcal{P}$ is closed under the faithfulness relation specified by Definition \ref{HypergrFaithDef}. \end{theorem} \begin{proof} Let $P \in \mathcal{P}$. In the following, we show that there exists a hypergraph to which $P$ is faithful. First, derive the ascending class, $\mathcal{A}$, of subsets of $V$ such that the log conditional odds ratios of the elements of $\mathcal{A}$ given the remaining variables vanish on $P$. 
Next, find the maximal (with respect to inclusion) elements, $M_1, \dots, M_T$, of the complement of $\mathcal{A}$. Then, by construction, $P$ is faithful to the hypergraph with hyperedges $M_1, \dots,M_T$. \end{proof} \begin{remark} This paper is solely concerned with discrete distributions. However, it is worth pointing out that Definition \ref{FaithNormalDef} makes sense for exponential families in general. In particular, multivariate normal distributions can be described using an exponential family whose canonical parameters correspond to pairwise interactions between the random variables in $V$. We mentioned in Remark \ref{rem_Gaussian} that there are examples of distributions in Gaussian DAG models that are not Markov to any nested DAG, and hence the class of Gaussian DAG models is not closed under the faithfulness relation. However, the class of multivariate normal exponential families is closed under the parametric faithfulness relation. This is the case since setting an additional canonical parameter to zero leads to a nested exponential family. \end{remark}
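The constructive proof of Theorem \ref{pIII} translates directly into a procedure for reading off the hypergraph of a known positive distribution. The sketch below (our own illustration, for binary variables) computes, for every subset $M$, the log conditional odds (ratios) of $M$ given each cell of the remaining variables as alternating sums of log cell probabilities, collects the subsets on which these all vanish (up to a numerical tolerance) into the ascending class, and returns the maximal elements of the complement as hyperedges.

\begin{verbatim}
import itertools
import numpy as np

def log_cond_odds_ratios(table, M):
    """All log conditional odds (ratios) of the binary variables in M, one for
    each fixed cell of the complementary variables: an alternating sum of log
    cell probabilities over the M-coordinates."""
    K = table.ndim
    rest = [ax for ax in range(K) if ax not in M]
    logs = []
    for fixed in itertools.product([0, 1], repeat=len(rest)):
        total = 0.0
        for vals in itertools.product([0, 1], repeat=len(M)):
            cell = [0] * K
            for ax, v in zip(M, vals):
                cell[ax] = v
            for ax, v in zip(rest, fixed):
                cell[ax] = v
            sign = (-1) ** (len(M) - sum(vals))
            total += sign * np.log(table[tuple(cell)])
        logs.append(total)
    return logs

def hyperedges(table, tol=1e-9):
    """Maximal subsets M whose conditional log odds ratios do not all vanish
    (the construction used in the proof of Theorem pIII)."""
    K = table.ndim
    nonvanishing = [M for r in range(1, K + 1)
                    for M in itertools.combinations(range(K), r)
                    if max(abs(x) for x in log_cond_odds_ratios(table, M)) > tol]
    return [M for M in nonvanishing
            if not any(set(M) < set(M2) for M2 in nonvanishing)]

# The three-variable distribution used earlier (cells 1/8 -/+ delta): pairwise
# independent, but with a non-vanishing second order interaction, so the only
# hyperedge is the full set {A, B, C}.
delta = 0.05
p = np.array([1/8 - delta, 1/8 + delta, 1/8 + delta, 1/8 - delta,
              1/8 + delta, 1/8 - delta, 1/8 - delta, 1/8 + delta]).reshape(2, 2, 2)
print(hyperedges(p))    # expected: [(0, 1, 2)]
\end{verbatim}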
\section{Parametric Strong-Faithfulness}\label{sectionStrongFaith} In order to test statistical hypotheses when working with data, a stronger version of faithfulness is needed. In this section, we generalize the notion of parametric faithfulness to parametric strong-faithfulness and discuss difficulties arising with this concept in the discrete setting. \begin{table}[b!] \centering \caption{Selected parameterizations, measures of association, and strong-faithfulness conditions for the $2 \times 2$ contingency table.} \begin{tabular}{l|p{20mm}|p{26mm}|p{59mm}} \hline & & & \\ Parametrization & Parameter space& Variation independence& Association function; $\lambda$-strong-faithfulness condition \\ \hline & & &\\ Cell probabilities: & & & \\ $p_{00}, p_{01},$ & \multicolumn{1}{c|}{Simplex} & \multicolumn{1}{c|}{No} & $\phi_1 = \left|\mbox{log} \left(\frac{p_{00}p_{11}}{p_{01}p_{10}} \right)\right| > \lambda$ \\ [5pt] $p_{10}, p_{11}$& \multicolumn{1}{c|}{$\Delta_3$} & \\ & & & $\phi_2 = \left|\frac{p_{00}p_{11} - p_{01}p_{10}}{p_{00}p_{11} + p_{01}p_{10}}\right| > \lambda$ \\ & & &\\ \hline & & &\\ Conditional probabilities: & & &\\ [5pt] $\theta_1 = \mathbb{P}(A=0)$, & \multicolumn{1}{c|}{$(0,1)^3$} & \multicolumn{1}{c|}{Yes} & $\phi_3 = |\theta_2 - \theta_3| > \lambda$ \\ $\theta_2 = \mathbb{P}(B=0\mid A=0)$, & & \\ $\theta_3 = \mathbb{P}(B=0\mid A=1)$ & & \\ \hline \end{tabular} \label{StrFtwoway} \end{table} \subsection{Strong-faithfulness in the discrete setting} \label{subsec_strong_faith} A distribution in a model is faithful to it if the model fully describes the conditional independence structure in this distribution. It is further called \emph{strong-faithful} if the conditional dependencies present in the distribution are strong enough. The concept of strong-faithfulness, originally defined by \cite{ZhangSpirtesLambdaFaith}, is usually applied to multivariate normal distributions: For a given $\lambda > 0$, a multivariate normal distribution in a DAG model is $\lambda$\emph{-strong-faithful} with respect to this DAG if all non-zero partial correlations are bounded away from zero by $\lambda$. A formal definition of strong-faithfulness in the discrete case has not been proposed, although some analogies were used. For example, \cite*{Zuk} made use of the assumption that the conditional probabilities in a Bayesian network are bounded between $\lambda$ and $1 - \lambda$. This can be seen as a form of strong-faithfulness. In the discrete setting, one problem is that many variants of strong-faithfulness relations can be considered. Whether a distribution is $\lambda$-strong-faithful to a model depends on the choice of parameterization and the measure of association. This is illustrated in the following example for two binary random variables. \begin{example}\label{22param} Let $V = \{A, B\}$ and consider the saturated model $[AB]$, which allows for interaction between $A$ and $B$. A distribution in which this interaction vanishes is unfaithful to $[AB]$ and belongs to the model of independence, $A \perp\!\!\!\perp B$. A distribution in which the association between $A$ and $B$ is strong enough is called strong-faithful to $[AB]$. While in the multivariate normal setting the partial correlations are a standard measure of association, in the discrete setting there are many viable choices of association measures, see \cite{GoodmanKruskal74}. 
Table \ref{StrFtwoway} illustrates different possible definitions of strong-faithfulness based on three different measures of association, the log odds ratio, $\phi_1$, Yule's coefficient of association, $\phi_2$, and the absolute difference between the conditional probabilities, $\phi_3$. In all three cases, the parameter space has finite volume. So it is possible to estimate the proportion (relative volume) of distributions that do not satisfy the $\lambda$-strong-faithfulness relation with respect to $[AB]$. Figure \ref{2by2Prop} shows that this proportion varies considerably depending on the chosen parameterization and association measure. \qed \end{example} \begin{figure} \caption{The proportions of distributions that are not $\lambda$-strong-faithful to the model $[AB]$ with respect to different association measures, see Example \ref{22param}.} \label{2by2Prop} \end{figure} The proportion of distributions in a model that do not satisfy the strong-faithfulness relation with respect to this model is of importance for model selection procedures, which are often based on the strong-faithfulness assumption. Lemma \ref{interactions} justifies the use of $\phi_1$ to define strong-faithfulness in the discrete case. In the following, we propose the concept of strong-faithfulness to a hypergraph and, assuming strong-faithfulness, prove existence of uniformly consistent estimators of the hypergraph parameters. \subsection{Strong-faithfulness with respect to a hypergraph} Let $\mathcal{H}$ be the hypergraph generated by a set of marginals $\mathcal{M}= \{M_1, \dots, M_T\}$. For $\boldsymbol p \in \mathcal{H}$ let $\boldsymbol \gamma(\boldsymbol p) = (\gamma_1(\boldsymbol p), \dots, \gamma_T(\boldsymbol p))$ denote the set of interaction parameters of $\boldsymbol p$ corresponding to the hyperedges of $\mathcal{H}$. \begin{definition} For $\lambda >0$, a distribution $\boldsymbol p \in \mathcal{H}$ is $\lambda$\emph{-strong-faithful} to $\mathcal{H}$ if \begin{equation}\label{tube} \operatorname{min} \{|\gamma_1(\boldsymbol p)|, \dots, |\gamma_T(\boldsymbol p)|\} > \lambda. \end{equation} \end{definition} As described in Section \ref{subsec_strong_faith}, one can, in principle, use different measures of association to define strong-faithfulness. The advantage of the definition given here is that it generalizes the original definition of strong-faithfulness given by \cite{ZhangSpirtesLambdaFaith}. For a hypergraph generated by two-way marginals the interactions $\boldsymbol \gamma(\boldsymbol p)$ are analogous to partial correlations of a multivariate normal distribution \citep[cf.][]{WermuthAnalogies}. Therefore, the definition of strong-faithfulness to a hypergraph proposed here is consistent with the original definition of strong-faithfulness of a multivariate normal distribution with respect to a DAG given by \cite{ZhangSpirtesLambdaFaith}. In addition, as we will show in Section \ref{sec_hypergraph_search}, strong-faithfulness with respect to a hypergraph allows one to build uniformly consistent algorithms for learning hypergraphs. In the following example, we illustrate the concept of strong-faithfulness with respect to a hypergraph for distributions on the $2 \times 2 \times 2$ contingency table. \begin{example}\label{222marginal} Let $V = \{A, B, C\}$. 
A distribution of $V$ can be parameterized by \begin{equation*} \mbox{log } \boldsymbol p = \mathbf{M} \boldsymbol \gamma, \end{equation*} where \begin{equation*} \mathbf{M}=\left(\begin{array}{cccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & 1 & 0 & 0 & 1 & 0 \\ 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 1 & 0 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 & 1 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \end{array}\right), \end{equation*} and $$\boldsymbol \gamma = (\gamma^{\emptyset}, \gamma^{A}_{1}, \gamma^{B}_{1}, \gamma^{C}_{1} , \gamma^{AB}_{11}, \gamma^{AC}_{11}, \gamma^{BC}_{11}, \gamma^{ABC}_{111})$$ are the interaction parameters corresponding to the marginal distributions indicated in the superscript. The matrix $\mathbf{M}$ is of full rank, and it can easily be shown that \begin{align*} &\gamma^{\emptyset} = \mbox{log } p_{000}, \hspace{14mm} \gamma^{A}_{1} = \mbox{log } \frac{p_{100}}{p_{000}} ,\\ &\gamma^{B}_{1} = \mbox{log } \frac{p_{010}}{p_{000}}, \hspace{13mm} \gamma^{C}_{1} = \mbox{log } \frac{p_{001}}{p_{000}},\\ &\gamma^{AB}_{11} = \mbox{log } \frac{p_{000}p_{110}}{p_{010}p_{100}}, \quad \gamma^{AC}_{11} = \mbox{log } \frac{p_{000}p_{101}}{p_{001}p_{100}}, \\ &\gamma^{BC}_{11} = \mbox{log } \frac{p_{000}p_{011}}{p_{001}p_{010}}, \quad \gamma^{ABC}_{111} = \mbox{log } \frac{p_{001}p_{010}p_{100}p_{111}}{p_{000}p_{011}p_{101}p_{110}}. \end{align*} \noindent In the following table we give the $\lambda$-strong-faithfulness conditions for several hypergraph models: \begin{equation*} \begin{tabular}{l| l} {Hyperedges} & \multicolumn{1}{c}{Strong-faithfulness constraints} \\ \hline & \\ $\{ABC\}$ & $|\gamma^{ABC}_{111}| > \lambda$ \\ [5pt] $\{AB\}, \{AC\}, \{BC\}$ & $\operatorname{min} \{|\gamma^{AB}_{11}|, |\gamma^{AC}_{11}|, |\gamma^{BC}_{11}|\} > \lambda$ \\ [5pt] $\{AC\}, \{BC\}$ & $\operatorname{min} \{|\gamma^{AC}_{11}|, |\gamma^{BC}_{11}|\} > \lambda$ \\ [5pt] $\{A\}, \{BC\}$ & $\operatorname{min} \{|\gamma^{A}_{1}|, |\gamma^{BC}_{11}|\} > \lambda$ \\ [5pt] $\{A\}, \{B\}, \{C\}$ & $\operatorname{min} \{|\gamma^{A}_{1}|, |\gamma^{B}_{1}|, |\gamma^{C}_{1}| \} > \lambda$ \\ [5pt] \end{tabular} \end{equation*} \end{example}
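The parameterization above can be inverted numerically: since $\mathbf{M}$ is of full rank, $\boldsymbol \gamma = \mathbf{M}^{-1}\mbox{log } \boldsymbol p$. The sketch below (an illustration of ours) recovers the interaction parameters of a $2\times2\times2$ distribution and evaluates the strong-faithfulness constraint for the hypergraph with hyperedges $\{AC\}, \{BC\}$.

\begin{verbatim}
import numpy as np

# Design matrix M of the example above (cells ordered lexicographically);
# columns: empty, A, B, C, AB, AC, BC, ABC.
M = np.array([[1,0,0,0,0,0,0,0],
              [1,0,0,1,0,0,0,0],
              [1,0,1,0,0,0,0,0],
              [1,0,1,1,0,0,1,0],
              [1,1,0,0,0,0,0,0],
              [1,1,0,1,0,1,0,0],
              [1,1,1,0,1,0,0,0],
              [1,1,1,1,1,1,1,1]], dtype=float)

def interactions(p):
    """Interaction parameters gamma solving log p = M gamma."""
    return np.linalg.solve(M, np.log(p))

def strong_faithful(p, hyperedge_indices, lam):
    """lambda-strong-faithfulness: all interactions attached to the
    hyperedges exceed lam in absolute value."""
    gamma = interactions(p)
    return min(abs(gamma[i]) for i in hyperedge_indices) > lam

# A positive distribution on the 2 x 2 x 2 table (randomly generated here).
rng = np.random.default_rng(0)
p = rng.uniform(0.05, 1.0, size=8)
p /= p.sum()

# Hyperedges {A,C} and {B,C} correspond to the columns gamma^{AC}, gamma^{BC}.
print(interactions(p))
print(strong_faithful(p, hyperedge_indices=[5, 6], lam=0.1))
\end{verbatim}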
\subsection{Hypergraph search} \label{sec_hypergraph_search} In this section, we discuss how to construct hypothesis tests when the association measure is based on the interaction parameters $\boldsymbol{\gamma}(\boldsymbol p)$, and how to perform a hypergraph search based on these hypothesis tests. Let $\mathcal{H}$ be a hypergraph generated by the marginals $M_1,\dots , M_T$ and let $\gamma_1,\dots ,\gamma_T$ be the corresponding interaction parameters. We denote by $\mathcal{H}_{\lambda}$ the set of distributions that are $\lambda$-strong-faithful to the hypergraph $\mathcal{H}$, i.e., $$\mathcal{H}_{\lambda} = \{\boldsymbol p \in \mathcal{H}: \,\, \operatorname{min}\{|\gamma_1(\boldsymbol p)|, \dots, |\gamma_T(\boldsymbol p)|\} > \lambda\},$$ and define $$\mathcal{H}_{\lambda, \delta} = \mathcal{H}_{\lambda} \cap \{ \boldsymbol p \in \mathcal{P}: \,\, p_{\boldsymbol i} \in [\delta, 1), \, \sum_{\boldsymbol i \in \mathcal{I}} p_{\boldsymbol i} = 1\},$$ where $\delta > 0$ is small enough so that $\mathcal{H}_{\lambda, \delta}$ is not empty. If $M_t$, for $t \in \{1, \dots, T\}$, is an interaction of order $h_t$, then the conditional odds ratio of $M_t$ given the variables in $\bar{M}_t$ is the ratio of the product of some $2^{h_t}$ cell probabilities and the product of a disjoint set of $2^{h_t}$ cell probabilities. Since $p_{\boldsymbol i} \in [\delta, 1)$ for all $\boldsymbol i \in \mathcal{I}$, the interaction parameter satisfies $|\gamma_t(\boldsymbol p)| \leq 2^{h_t}\mbox{log } ((1-\delta)/\delta)$, and therefore $$|\gamma_t(\boldsymbol p)| \leq C(\delta), \quad \mbox{for } t=1, \dots, T,$$ where $C(\delta) = 2^{{\operatorname{max}}\{h_1, \dots, h_T\}} \mbox{log } ((1-\delta)/\delta)$. Here, $C(\delta)$ is an upper bound on the interaction parameters (it plays the same role as the constant $M$ in Assumption (A4) of \cite{KalischBullm} for the Gaussian setting). \begin{theorem} Let $\mathbf Y$ have a multinomial distribution with parameters $N$ and $\boldsymbol p$. Assume that, under the log-linear model corresponding to $\mathcal{H}$, the maximum likelihood estimates of the interaction parameters $$\hat{\boldsymbol \gamma}^{(N)}(\boldsymbol p)=(\hat{\gamma}_1^{(N)}(\boldsymbol p), \dots, \hat{\gamma}_T^{(N)}(\boldsymbol p))=({\gamma}_1^{(N)}(\hat{\boldsymbol p}), \dots, {\gamma}_T^{(N)}(\hat{\boldsymbol p}))$$ exist and are unique. Then, $\hat{\boldsymbol \gamma}^{(N)}(\boldsymbol p)$ is a uniformly consistent estimator of ${\boldsymbol \gamma}(\boldsymbol p)$ over $\mathcal{H}_{\lambda, \delta}$. \end{theorem} \begin{proof} For $t \in \{1, \dots, T\}$, $\,\gamma_t(\boldsymbol p) = \boldsymbol c_{t}' \operatorname{log } \boldsymbol p$, where $\boldsymbol c_t$ is a vector in $\mathbb{Z}^{|\mathcal{I}|}$ whose components comprise an equal number, $2^{h_t}$ each, of $1$'s and $-1$'s, with the remaining components equal to $0$. By Theorem 14.6-4 in \cite{BFH}, as $N \to \infty$, $\hat{\gamma}_t^{(N)}(\boldsymbol p) - \gamma_t(\boldsymbol p)$ is asymptotically normal with mean zero and variance \begin{equation}\label{asVar} {\textrm{var}}(\hat{\gamma}_t) = \frac{1}{N}\boldsymbol c'_t \textrm{diag}^{-1}(\boldsymbol p)\boldsymbol c_t.
\end{equation} For every $\boldsymbol p \in \mathcal{H}_{\lambda, \delta}$, \begin{equation}\label{VarBound} {\textrm{var}}(\hat{\gamma}_t) \leq \frac{\boldsymbol c_t'\boldsymbol c_t}{N\delta}, \end{equation} and thus, using the Chebyshev inequality, \begin{eqnarray}\label{ConsBound} \mathbb{P}\left(|\hat{\gamma}_t^{(N)}(\boldsymbol p) - \gamma_t(\boldsymbol p)| < \epsilon\right) &=& \mathbb{P}\left(|Z|< \frac{\sqrt{N} \epsilon}{\sqrt{\boldsymbol c_{t}'\textrm{diag}^{-1}(\boldsymbol p)\boldsymbol c_{t}}}\right) \geq \mathbb{P}\left(|Z| < \epsilon\sqrt{\frac{N\delta}{\boldsymbol c_t'\boldsymbol c_t}}\right) \nonumber \\ &\geq& 1 - \frac{\boldsymbol c_t'\boldsymbol c_t}{N\delta\epsilon^2} \geq 1 - \frac{\underset{t =1, \dots, T}{\operatorname{max}}(\boldsymbol c_t'\boldsymbol c_t)}{N\delta\epsilon^2}, \quad \forall \epsilon > 0, \end{eqnarray} where $Z$ is a random variable with a standard normal distribution. Therefore, $\mathbb{P}(|\hat{\gamma}_t^{(N)}(\boldsymbol p) - {\gamma_t}(\boldsymbol p)| < \epsilon) \to 1$ for every $t = 1, \dots,T$, uniformly over $\boldsymbol p \in \mathcal{H}_{\lambda, \delta}$. Since the lower bound in (\ref{ConsBound}) does not depend on $t$, the proof is complete. \end{proof} We now address the question of how to select the threshold $\lambda$ in a hypergraph learning procedure. We fix a $t \in \{1, \dots, T\}$ and consider testing the ``one-hyperedge'' hypothesis $H_{0t}: \, \gamma_t = 0$ versus $H_{1t}: \, \gamma_t \neq 0$ under a significance level $\alpha$. Let $\hat{\boldsymbol p}$ be the observed distribution and let $\hat{\gamma}_t = \gamma_t(\hat{\boldsymbol p}) = \boldsymbol c_t' \mbox{log }\hat{\boldsymbol p}$ denote the corresponding interaction parameter. By Slutsky's Theorem, under $H_{0t}$, $$\sqrt{N} \frac{\hat{\gamma}_t}{\sqrt{\boldsymbol c'_t \textrm{diag}^{-1}(\hat{\boldsymbol p})\boldsymbol c_t}} \to N(0, 1), \, \mbox{ as } N \to \infty,$$ and thus we reject the null hypothesis if \begin{equation}\label{TestSt} \frac{|\hat{\gamma}_t|}{\sqrt{ \frac{1}{N}\boldsymbol c'_t \textrm{diag}^{-1}(\hat{\boldsymbol p})\boldsymbol c_t}} > z_{1-\alpha/2}, \end{equation} where $z_{1-\alpha/2} = \Phi^{-1}(1-\alpha/2)$ is the corresponding quantile of the standard normal distribution. With such a procedure, the probability of wrongly rejecting the null hypothesis asymptotically does not exceed $\alpha$. \begin{theorem}\label{powerTh} Let $\epsilon \in (0,1/2)$ and set \begin{equation}\label{LambdaVar} \lambda^*_N = \frac{z_{1-\alpha/2}}{N^{1/2-\epsilon}}{\underset{t =1, \dots, T}{\operatorname{min}}\sqrt{\boldsymbol c_t'\boldsymbol c_t}}. \end{equation} For the distributions that are $\lambda^*_N$-strong-faithful to $\mathcal{H}$, the power of the one-hyperedge test approaches $1$ as $N \to \infty$. \end{theorem} \begin{proof} The asymptotic variance of $\hat{\gamma}_t$ is bounded below as $${\textrm{var}}(\hat{\gamma}_t) = \frac{1}{N}\boldsymbol c'_t \textrm{diag}^{-1}(\hat{\boldsymbol p})\boldsymbol c_t \geq \frac{1}{N} \boldsymbol c_t'\boldsymbol c_t.$$ Since for an $h_t$-order interaction the vector $\boldsymbol c_t$ has $2^{h_t}$ components equal to $1$, $2^{h_t}$ components equal to $-1$, and the remaining components equal to zero, we have $\sqrt{\boldsymbol c_t'\boldsymbol c_t} = 2^{(h_t + 1)/2}$. The distributions that are $\lambda^*_N$-strong-faithful to $\mathcal{H}$ satisfy (\ref{TestSt}) for all $t = 1, \dots, T$.
For these distributions the power of the one-hyperedge test is bounded below: $$\Phi\left(\frac{|\hat{\gamma}_t|}{\sqrt{ \frac{1}{N}\boldsymbol c'_t \textrm{diag}^{-1}(\hat{\boldsymbol p})\boldsymbol c_t}} - z_{1-\alpha/2}\right) \geq \Phi\left(z_{1-\alpha/2}\left(\frac{\sqrt{\boldsymbol c_t'\boldsymbol c_t}N^{-1/2+\epsilon}}{\sqrt{ \frac{1}{N}\boldsymbol c'_t \textrm{diag}^{-1}(\hat{\boldsymbol p})\boldsymbol c_t}} - 1\right)\right) \textrm{ for all } t\in\{1,\dots, T\}$$ and approaches $1$ as $N \to \infty$. \end{proof} Examples of $\lambda^*_N$ computed for hyperedges of different sizes (with $\epsilon = 1/4$ in (\ref{LambdaVar})) are shown in Table \ref{lambdasTest}. In this paper, we do not investigate any multiple comparison issues arising from testing several one-hyperedge hypotheses at the same time. \begin{table} \centering \caption{Possible threshold values for the parameter $\lambda$.} \label{lambdasTest} \begin{tabular}{l|c|l} \hline & & \\ Hyperedge & \multicolumn{1}{c|}{Order of the odds ratio} & \multicolumn{1}{c}{$\lambda^*_N$} \\ & & \\ \hline & & \\ $[AB]$ & $h = 1$ & $\lambda^*_N = \frac{z_{1-\alpha/2}}{N^{1/4}} \cdot 2$\\ & & \\ $[ABC]$ & $h=2$ & $\lambda^*_N = \frac{z_{1-\alpha/2}}{N^{1/4}} \cdot 2^{3/2}$ \\ & & \\ $[ABCD]$ & $h=3$ & $\lambda^*_N = \frac{z_{1-\alpha/2}}{N^{1/4}}\cdot 4$\\ & & \\ $[ABCDE]$ & $h=4$ & $\lambda^*_N = \frac{z_{1-\alpha/2}}{N^{1/4}}\cdot 2^{5/2}$\\ \hline \end{tabular} \end{table} For learning a hypergraph, any model selection procedure for hierarchical log-linear models can be applied. A review of such procedures can be found in \cite{EdwardsBook}. Backward selection, which starts from the saturated model and, using the edge removal mechanism described by \cite{EdwardsNote}, goes through a sequence of nested hypergraphs, is a polynomial-time algorithm that is appropriate for high dimensions. Uniform consistency of the maximum likelihood estimates for the maximal interactions of the distributions in $\mathcal{H}_{\lambda, \delta}$ entails that backward selection is a uniformly consistent procedure and that the hypergraph will be determined correctly.
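For illustration, the following Python sketch (using NumPy and SciPy; the cell counts are hypothetical and not taken from this paper) carries out the one-hyperedge test (\ref{TestSt}) for the hyperedge $[AB]$ of a $2\times 2$ table and evaluates the corresponding threshold $\lambda^*_N$ of (\ref{LambdaVar}) with $\epsilon = 1/4$, matching the first row of Table \ref{lambdasTest}.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

# Hypothetical cell counts for the cells (00, 01, 10, 11) of a 2x2 table
counts = np.array([120.0, 80.0, 60.0, 140.0])
N = counts.sum()
p_hat = counts / N

c = np.array([1.0, -1.0, -1.0, 1.0])        # gamma_AB = c' log p  (log odds ratio)
gamma_hat = c @ np.log(p_hat)
var_hat = (c**2 / p_hat).sum() / N          # (1/N) c' diag^{-1}(p_hat) c

alpha = 0.05
z = norm.ppf(1 - alpha / 2)
reject = abs(gamma_hat) / np.sqrt(var_hat) > z      # test (TestSt)
print(f"gamma_hat = {gamma_hat:.3f}, reject H_0t -> {reject}")

eps = 0.25                                  # epsilon = 1/4 as in the table above
lam_star = z / N**(0.5 - eps) * np.sqrt(c @ c)      # threshold lambda*_N, sqrt(c'c) = 2
print(f"lambda*_N = {lam_star:.3f}")
\end{verbatim}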
\section{Proportions of strong-unfaithful distributions}\label{SecProp} As shown in the previous section, strong-faithfulness ensures the existence of uniformly consistent tests, and hence of methods for learning the underlying hypergraph. If the parameter space has finite volume, it is possible to estimate the proportion (relative volume) of the distributions that are not $\lambda$-strong-faithful to a model of interest and whose association structure may therefore be discovered incorrectly. \cite{UhlerFaithGeometry} analyzed the proportion of distributions that are not $\lambda$-strong-faithful to a DAG in the Gaussian setting. Partial correlations define varieties, and strong-unfaithful distributions correspond to the parameters that lie in a tube around these varieties. The relative volume of these tubes thus corresponds to the proportion of distributions that do not satisfy the strong-faithfulness assumption, and lower bounds on these volumes were given for different classes of DAGs. The following example illustrates how one can estimate such volumes in the discrete case. \textbf{Example \ref{22param}} (revisited): Consider a hierarchical log-linear parameterization of the distributions on the $2\times 2$ contingency table: \begin{align}\label{22corner} \operatorname{log} p_{00} &=\gamma^{\emptyset}, \nonumber \\ \operatorname{log} p_{01} &=\gamma^{\emptyset}+ \gamma^{B}_{1},\\ \operatorname{log} p_{10} &= \gamma^{\emptyset} + \gamma^{A}_{1}, \nonumber \\ \operatorname{log} p_{11} &= \gamma^{\emptyset} + \gamma^{A}_{1}+\gamma^{B}_{1}+ \gamma^{AB}_{11}. \nonumber \end{align} The interaction parameter $\gamma_{11}^{AB}$, which was denoted by $\phi_1$ in Example \ref{22param} and in the corresponding Table \ref{StrFtwoway} and Figure \ref{2by2Prop}, can be expressed in terms of the conditional probabilities $\theta_1 = \mathbb{P}(A = 0)$, $\theta_2 = \mathbb{P}(B=0\mid A=0)$, and $\theta_3 = \mathbb{P}(B=0\mid A=1)$: \begin{equation*} \gamma^{AB}_{11} =\mbox{log } \left(\frac{p_{00}p_{11}}{p_{01}p_{10}}\right)= \mbox{log } \frac{\theta_2(1-\theta_3)}{(1-\theta_2)\theta_3} = \mbox{log } \frac{\theta_2}{1-\theta_2} - \mbox{log } \frac{\theta_3}{1-\theta_3}. \end{equation*} Let \begin{eqnarray*} \mathcal{H}_{\lambda} &=&\left \{(\theta_1, \theta_2, \theta_3) \in (0,1)^3: \,\, \left|\mbox{log } \frac{\theta_2}{1-\theta_2} - \mbox{log } \frac{\theta_3}{1-\theta_3}\right| > \lambda \right\}. \end{eqnarray*} The volume of its complement, $\bar{\mathcal{H}}_{\lambda}$, is equal to: \begin{eqnarray} \mathrm{vol}(\bar{\mathcal{H}}_{\lambda}) &=& \mathrm{vol} \left \{(\theta_1, \theta_2, \theta_3) \in (0,1)^3: \,\, e^{-\lambda} < \frac{\theta_2}{1-\theta_2} \cdot \frac{1-\theta_3}{\theta_3} < e^{\lambda} \right \}\nonumber\\ &=& \int_{0}^1 d \theta_2 \left( \frac{\theta_2}{\theta_2(1-e^{-\lambda}) + e^{-\lambda}} - \frac{\theta_2}{\theta_2(1-e^{\lambda}) + e^{\lambda}} \right) \nonumber\\ &=& \frac{e^{2\lambda} - 2\lambda e^{\lambda} - 1}{(1-e^{\lambda})^2},\label{eq_gamma2} \end{eqnarray} where the integral was computed by substitution. The parameter space $(0,1)^3$ has unit volume. Hence, the relative proportion of distributions that are not $\lambda$-strong-faithful to $[AB]$ is equal to $\mathrm{vol}(\bar{\mathcal{H}}_{\lambda})$.
For small $\lambda$ this proportion is approximately $\frac{\lambda}{3}$, which is consistent with the simulation results for $\phi_1$ in Figure \ref{2by2Prop}.\qed \begin{theorem}\label{BoundMax} Let $\mathcal{H}$ be a hypergraph whose hyperedges $M_1, \dots, M_T$ are interactions of order $h_1, \dots, h_T$, respectively, and let $\bar{\mathcal{H}}_{\lambda}$ be the set of distributions that are not $\lambda$-strong-faithful to $\mathcal{H}$. Then, $$\mathrm{vol}(\bar{\mathcal{H}}_{\lambda}) \geq \underset{t \in \{1, \dots, T\}}{\operatorname{max}} \mathrm{vol} \{\boldsymbol p \in \mathcal{H}: \,\, |\gamma_{t}(\boldsymbol p)| < \lambda\} \geq \underset{t \in \{1, \dots, T\}}{\operatorname{max}}\left(\frac{e^{2\mu} - 2\mu e^{\mu} - 1}{(1-e^{\mu})^2}\right)^{2^{h_t-1}},$$ where $\mu=\lambda/2^{h_t-1}$. \end{theorem} \begin{proof} By Lemma \ref{interactions}, the parameter $\gamma_{t}$ for $t \in \{1, \dots, T\}$ is the log conditional odds ratio of $M_t$, given the variables in $\bar{M}_t$. We can express $\gamma_t$ using a corresponding set of $m$ variation independent conditional probabilities $\boldsymbol \theta \in (0,1)^m$, of which only $\theta_1, \dots, \theta_{2^{h_t}}$ enter $\gamma_t$: \begin{eqnarray*} \gamma_{t} = \mbox{log } \left(\frac{\theta_{1}}{1-\theta_{1}} \cdots \frac{\theta_{2^{h_t-1}}} {1-\theta_{2^{h_t-1}}}\cdot \frac{1-\theta_{2^{h_t-1}+1}}{\theta_{2^{h_t-1}+1}} \cdots \frac{1-\theta_{2^{h_t}}}{ \theta_{2^{h_t}}}\right).\end{eqnarray*} Therefore, \begin{eqnarray*} &&\mathrm{vol} \{\boldsymbol p \in \mathcal{H}: \,\, |\gamma_{t}(\boldsymbol p)| < \lambda\} = \mathrm{vol} \left\{\boldsymbol \theta \in (0,1)^m: \,\, |\gamma_{t}| < \lambda \right\}\\ &=&\mathrm{vol}\left\{(\theta_1, \dots, \theta_{2^{h_t}})\!\in\!(0,1)^{2^{h_t}}\!:\, \left| \mbox{log }\!\!\left(\!\frac{\theta_{1}}{1-\theta_{1}} \cdots \frac{\theta_{2^{h_t-1}}} {1-\theta_{2^{h_t-1}}}\cdot \frac{1-\theta_{2^{h_t-1}+1}}{\theta_{2^{h_t-1}+1}} \cdots \frac{1-\theta_{2^{h_t}}}{ \theta_{2^{h_t}}}\!\right)\!\right| < \lambda \right\} \\ &\geq& \left(\mathrm{vol}\,\left\{(\zeta_1, \zeta_2)\in(0,1)^2:\,\, \left| \mbox{log } \frac{\zeta_1}{1-\zeta_1} - \mbox{log } \frac{\zeta_2}{1-\zeta_2}\right| < \frac{\lambda}{2^{h_t-1}}\right\} \right)^{2^{h_t-1}} \\ &=& \left(\frac{e^{2\mu} - 2\mu e^{\mu} - 1}{(1-e^{\mu})^2}\right)^{2^{h_t-1}}, \end{eqnarray*} where $\mu = {\lambda}/{2^{h_t-1}}$, and for the last equation we used (\ref{eq_gamma2}). Since $\bar{\mathcal{H}}_{\lambda} = \{\boldsymbol p \in \mathcal{H}: \,\, |\gamma_t(\boldsymbol p)| < \lambda \mbox{ for at least one } t \in \{1, \dots, T\}\}$, we have, for every fixed $t \in \{1, \dots, T\}$, $$\mathrm{vol}(\bar{\mathcal{H}}_{\lambda}) \geq \mathrm{vol} \{\boldsymbol p \in \mathcal{H}: \,\, |\gamma_t(\boldsymbol p)| < \lambda\},$$ and thus $$\mathrm{vol}(\bar{\mathcal{H}}_{\lambda}) \geq \underset{t \in \{1, \dots, T\}}{\operatorname{max}} \left(\frac{e^{2\mu} - 2\mu e^{\mu} - 1}{(1-e^{\mu})^2}\right)^{2^{h_t-1}}, \quad \mbox{for } \mu = {\lambda}/{2^{h_t-1}}.$$ \end{proof} As we will see in the following result, for hypergraphs whose hyperedges are variation independent we can in fact give an exact formulation of the proportion of distributions that do not satisfy strong-faithfulness. \begin{theorem} \label{conjecture} Let $\mathcal{H}$ be the hypergraph generated by marginals $M_1, \dots, M_T$. Assume that there exists a parameterization of $\mathcal{H}$ under which the interaction parameters corresponding to $M_1, \dots, M_T$ are variation independent and, further, that the parameter space has finite volume.
Then, the proportion of distributions that are not $\lambda$-strong-faithful to $\mathcal{H}$ is \begin{equation}\label{VolumeDecomp} \mathrm{vol}(\bar{\mathcal{H}}_{\lambda}) = 1 - (1-\boldsymbol \nu_{M_1}) \cdots (1-\boldsymbol \nu_{M_T}), \end{equation} where $\boldsymbol \nu_{M_t}$, for $t = 1, \dots, T$, is the proportion of distributions that are not $\lambda$-strong-faithful to $M_t$. \end{theorem} \begin{proof} Consider a parameterization of $\mathcal{H}$ under which the maximal interaction parameters corresponding to $M_1, \dots, M_T$ are variation independent. For $t \in \{1, \dots, T\}$, let $\boldsymbol \nu_{M_t}$ denote the proportion of distributions that do not satisfy $\lambda$-strong-faithfulness with respect to $M_t$. Since the joint range of the variation independent parameters is equal to the Cartesian product of the individual ranges, the proportion of distributions that are not $\lambda$-strong-faithful to $\mathcal{H}$ is equal to $1 - (1-\boldsymbol \nu_{M_1}) \cdots (1-\boldsymbol \nu_{M_T}).$ \end{proof}
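The closed form (\ref{eq_gamma2}) and the product formula (\ref{VolumeDecomp}) can be spot-checked by simple Monte Carlo sampling. The following Python sketch (our own; the value of $\lambda$, the sample size and the random seed are arbitrary) estimates $\boldsymbol \nu_1$ for a single order-one hyperedge and the proportion of strong-unfaithful distributions for two variation independent order-one hyperedges.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
lam, n_samples = 0.5, 500_000

def logit(u):
    return np.log(u / (1 - u))

# nu_1 = vol{ (t1,t2) in (0,1)^2 : |logit t1 - logit t2| < lambda }
t1, t2 = rng.uniform(size=(2, n_samples))
nu1_mc = np.mean(np.abs(logit(t1) - logit(t2)) < lam)
nu1_exact = (np.exp(2*lam) - 2*lam*np.exp(lam) - 1) / (1 - np.exp(lam))**2
print(nu1_mc, nu1_exact)                 # the two values should agree closely

# Two variation independent order-1 hyperedges: proportion 1 - (1 - nu_1)^2
u = rng.uniform(size=(4, n_samples))
d1 = np.abs(logit(u[0]) - logit(u[1]))
d2 = np.abs(logit(u[2]) - logit(u[3]))
print(np.mean(np.minimum(d1, d2) < lam), 1 - (1 - nu1_exact)**2)
\end{verbatim}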
The maximal interaction parameters in decomposable log-linear models \citep{Haberman} and in ordered decomposable marginal log-linear models \citep{RudasBergsma} are variation independent, and Theorem \ref{conjecture} applies. Let $\mathcal{H}$ be the hypergraph generated by a decomposable sequence of marginals $M_1, \dots, M_T$. The interaction parameter associated with a hyperedge $M_t$ can be expressed using a variation independent set of conditional probabilities. Under this parameterization, the proportion of distributions that do not satisfy $\lambda$-strong-faithfulness with respect to the hyperedge $M_t$ is equal to \begin{eqnarray}\label{nu} \boldsymbol \nu_{h_t} = \mathrm{vol}\!\left\{\!(\theta_1, \dots, \theta_{2^{h_t}})\!:\, \left|\log \!\left(\frac{\theta_{1}}{1-\theta_{1}} \cdots \frac{\theta_{2^{h_t-1}}} {1-\theta_{2^{h_t-1}}}\cdot \frac{1-\theta_{2^{h_t-1}+1}}{\theta_{2^{h_t-1}+1}} \cdots \frac{1-\theta_{2^{h_t}}}{ \theta_{2^{h_t}}}\right)\!\right| < \lambda \right\}, \end{eqnarray} where $h_t$ denotes the order of interaction $M_t$. The proportion of distributions that are not $\lambda$-strong-faithful to $\mathcal{H}$ is calculated using (\ref{VolumeDecomp}). Figure \ref{Chains} shows values of $\boldsymbol \nu_{h}$ as functions of $h$ and $\lambda$. The concrete computations involved in Theorem \ref{conjecture} are illustrated in Example \ref{ChainExample}. We next analyze hypergraphs with a special ``chain'' structure and show that in this case Equation (\ref{VolumeDecomp}) simplifies. \begin{definition} A hypergraph $\mathcal{H}$ is called a chain of order $h$ if the generating sequence of marginals $\{M_1, \dots M_T\}$, where $\cup_{t = 1}^T M_t = V$, is decomposable and all of the hyperedges correspond to $h$-th order interactions of the joint distribution.
\end{definition} For example, the hypergraph generated by $\{A,B\}$, $\{B,C\}$, $\{C,D\}$, $\{D,E\}$ is a chain of order $1$ of length $4$, and the hypergraph generated by $\{A,B,C\}$ and $\{A,B,D\}$ is a chain of order $2$ of length $2$. \begin{corollary} \label{ChainH} Let a hypergraph $\mathcal{H}$ be a chain of order $h$ of length $L$. Then the proportion of distributions that are not $\lambda$-strong-faithful to $\mathcal{H}$ is equal to $$1 - (1-\boldsymbol \nu_h)^L,$$ where \begin{eqnarray}\label{nuH} \boldsymbol \nu_{h} = \mathrm{vol}\left\{(\theta_1, \dots, \theta_{2^{h}}):\,\, \left|\log \left(\frac{\theta_{1}}{1-\theta_{1}} \cdots \frac{\theta_{2^{h-1}}} {1-\theta_{2^{h-1}}}\cdot \frac{1-\theta_{2^{h-1}+1}}{\theta_{2^{h-1}+1}} \cdots \frac{1-\theta_{2^{h}}}{ \theta_{2^{h}}}\right)\right| < \lambda \right\}. \end{eqnarray} \end{corollary} For a chain of order $1$, the proportion of distributions that are not $\lambda$-strong-faithful to $\mathcal{H}$ is especially simple: $$\mathrm{vol}(\bar{\mathcal{H}}_{\lambda}) = 1 - (1-\boldsymbol \nu_1)^T,$$ where \begin{eqnarray}\label{nu1} \boldsymbol \nu_1 = \frac{e^{2\lambda} - 2\lambda e^{\lambda} - 1}{(1-e^{\lambda})^2}. \end{eqnarray} The proportions for chains of several orders were estimated using a Monte Carlo method and are displayed in Figure \ref{ChainsProportions}. \begin{figure} \caption{Proportions of distributions that do not satisfy strong-faithfulness with respect to a single hyperedge; see Equation (\ref{nu}).} \label{Chains} \caption{Proportions of distributions that are not $\lambda$-strong-faithful to a first-order chain; see Corollary \ref{ChainH}.} \label{ChainsProportions} \end{figure} \begin{example} \label{ChainExample} We demonstrate the volume computation using the chain $[AB][BC][CD]$ of order 1. The maximal interaction parameters corresponding to the hyperedges are: \begin{eqnarray*} \gamma_1 &=& \mbox{log } \mathcal{COR}(AB \mid CD) = \mbox{log } \frac{p_{00kl}p_{11kl}}{p_{01kl}p_{10kl}}, \\ \gamma_2 &=& \mbox{log } \mathcal{COR}(BC \mid AD) = \mbox{log } \frac{p_{i00l}p_{i11l}}{p_{i01l}p_{i10l}}, \\ \gamma_3 &=& \mbox{log } \mathcal{COR}(CD \mid AB) = \mbox{log } \frac{p_{ij00}p_{ij11}}{p_{ij01}p_{ij10}}, \end{eqnarray*} where $i, j, k, l \in \{0, 1\}$ are fixed categories of $A$, $B$, $C$, $D$, respectively. The chain can be described by two conditional independence relations: $A \perp\!\!\!\perp C \mid B$ and $AB \perp\!\!\!\perp D \mid C$. Thus the distributions in a chain model can be parameterized using the conditional probabilities: \begin{align*} &\theta_{0} = \mathbb{P}(B = 0), \\ &\theta_{10} = \mathbb{P}(A=0 \mid B = 0), \quad \theta_{11} = \mathbb{P}(A = 0 \mid B = 1), \\ &\theta_{20} = \mathbb{P}(C = 0 \mid B = 0), \quad \theta_{21} = \mathbb{P}(C = 0 \mid B = 1), \\ &\theta_{30} = \mathbb{P}(D = 0 \mid C = 0), \quad \theta_{31} = \mathbb{P}(D = 0 \mid C = 1). \end{align*} The parameters $\boldsymbol \theta$ are variation independent, and, for $t = 1, 2, 3$, $$\gamma_t = \mbox{log } \left(\frac{\theta_{t0}}{1-\theta_{t0}} \cdot \frac{1-\theta_{t1}}{\theta_{t1}}\right).$$ Let \begin{eqnarray*} \boldsymbol \nu_1 &=& \mathrm{vol}\left \{(\theta_1, \theta_2) \in (0,1)^2: \,\, \left|\mbox{log } \frac{\theta_1}{1-\theta_1} - \mbox{log } \frac{\theta_2}{1-\theta_2} \right| < \lambda \right\} =\frac{e^{2\lambda} - 2\lambda e^{\lambda} - 1}{(1-e^{\lambda})^2}. \end{eqnarray*} Using the binomial formula, we obtain that $\mathrm{vol}(\bar{\mathcal{H}}_{\lambda}) = 1 - (1 - \boldsymbol \nu_1)^3$.
\qed \end{example} \textbf{Example \ref{FourVarUnfaith}} (revisited): The maximal non-vanishing interactions of a distribution that is faithful to a hypergraph $\mathcal{H}$ with hyperedges $\{A,B,C\}$ and $\{A,B,D\}$ can be described using the interaction parameters equal to the logarithm of the second order conditional odds ratios of $ABC$ given $D$ and of $ABD$, given $C$: \begin{eqnarray*} \gamma^{ABC}_0 &=& \mbox{log } \mathcal{COR}(ABC\mid D=0),\\ \gamma^{ABC}_1 &=& \mbox{log } \mathcal{COR}(ABC\mid D=1),\\ \gamma^{ABD}_0 &=& \mbox{log } \mathcal{COR}(ABD\mid C=0),\\ \gamma^{ABD}_1 &=& \mbox{log } \mathcal{COR}(ABD\mid C=1). \end{eqnarray*} Using conditional probabilities, \begin{eqnarray*} \theta_1 &=& \mathbb{P}(C=0\mid A=0, B=0), \quad \theta_2 = \mathbb{P}(C=0\mid A=0, B=1), \\ \theta_3 &=& \mathbb{P}(C=0\mid A=1, B=0), \quad \theta_4 = \mathbb{P}(C=0\mid A=1, B=1), \\ \theta_5 &=& \mathbb{P}(D=0\mid A=0, B=0), \quad \theta_6 = \mathbb{P}(D=0\mid A=0, B=1), \\ \theta_7 &=& \mathbb{P}(D=0\mid A=1, B=0), \quad \theta_8 = \mathbb{P}(D=0\mid A=1, B=1), \end{eqnarray*} one obtains \begin{eqnarray*} \gamma^{ABC}_0 &=& \gamma^{ABC}_1 = \mbox{log } \frac{\theta_1}{1-\theta_1} + \mbox{log } \frac{\theta_4}{1-\theta_4} - \mbox{log } \frac{\theta_2}{1-\theta_2} - \mbox{log } \frac{\theta_3}{1-\theta_3}, \\ \gamma^{ABD}_0 &=& \gamma^{ABD}_1 = \mbox{log } \frac{\theta_5}{1-\theta_5} + \mbox{log } \frac{\theta_8}{1-\theta_8} - \mbox{log } \frac{\theta_6}{1-\theta_6} - \mbox{log } \frac{\theta_7}{1-\theta_7}. \end{eqnarray*} A distribution is not $\lambda$-strong-faithful to the hypergraph $\mathcal{H}$ if at least one of the following inequalities holds: $$|\gamma^{ABC}_0| < \lambda, \mbox{ or } |\gamma^{ABD}_0| < \lambda.$$ Hence, the proportion of distributions that are not $\lambda$-strong-faithful to the hypergraph $\mathcal{H}$ is equal to $1 - (1 - \boldsymbol \nu_2)^2$, where \begin{eqnarray*} \boldsymbol \nu_2 &=& \mathrm{vol}\left \{(\theta_1, \dots, \theta_4) \in (0,1)^4: \, \left|\mbox{log } \frac{\theta_1}{1-\theta_1} + \mbox{log } \frac{\theta_4}{1-\theta_4} - \mbox{log } \frac{\theta_2}{1-\theta_2} - \mbox{log } \frac{\theta_3}{1-\theta_3} \right| < \lambda \right\}. \end{eqnarray*} We were not able to find a closed-form expression for $\boldsymbol \nu_2$. It can be shown that $\boldsymbol \nu_2$ is bounded above by $\boldsymbol \nu_1$ and thus the volume of distributions that are not $\lambda$-strong-faithful to the hypergraph $\mathcal{H}$ is bounded above by the volume computed for the chain of the same length of order $1$, that is, $\mathrm{vol}(\bar{\mathcal{H}}_{\lambda}) \leq 1 - (1-\boldsymbol \nu_{1})^2.$ \qed \begin{remark} The concept of strong-faithfulness can be extended to distributions that do not belong to a given hypergraph model. Let $\boldsymbol p, \boldsymbol q \in \mathcal{P}$, and let $\rho$ be a divergence function. The distance from a distribution $\boldsymbol p$ to a hypergraph $\mathcal{H}$ can be defined as \begin{equation}\label{distModel} \rho(\boldsymbol p, \mathcal{H}) = \underset{\boldsymbol q \in \mathcal{H}}{\mbox{min }} \rho(\boldsymbol p, \boldsymbol q). \end{equation} In particular, $\rho(\boldsymbol p, \mathcal{H}) = 0$ if and only if $\boldsymbol p \in \mathcal{H}$. 
We denote by $\boldsymbol p_{\mathcal{H,\rho}}$ the projection of $\boldsymbol p$ onto the hypergraph $\mathcal{H}$, i.e.: $$\boldsymbol p_{\mathcal{H}, \rho} = \underset{\boldsymbol q \in \mathcal{H}}{\mbox{argmin }} \rho(\boldsymbol p, \boldsymbol q),$$ and call a distribution $\boldsymbol p$ \emph{projected-$\lambda$-strong-faithful to} $\mathcal{H}$ with respect to $\rho$ (for $\lambda >0$) if $\boldsymbol p_{\mathcal H, \rho}$ is $\lambda$-strong-faithful to $\mathcal{H}$. The concept of projected-strong-faithfulness is relevant in various estimation procedures. \end{remark} We end by illustrating the concept of projected-strong-faithfulness by estimating the proportions of projected-$\lambda$-strong-faithful distributions for several hypergraph models on the $2 \times 2 \times 2$ contingency table. \begin{figure} \caption{Proportions of distributions that are not projected-$\lambda$-strong-faithful computed for several hypergraphs on the $2 \times 2 \times 2$ contingency table.} \label{C1} \end{figure} \textbf{Example \ref{222marginal}} (revisited): To determine the distance from a distribution $\boldsymbol p$ to a hypergraph $\mathcal{H}$ we use the likelihood function under the corresponding log-linear model. Relative frequencies of distributions that do not satisfy the projected-$\lambda$-strong-faithfulness relation for different hypergraphs and different values of $\lambda$ are displayed in Figure \ref{C1}. \qed
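To make the notion of projected-strong-faithfulness concrete, the following Python sketch (our own; the random table, the choice of hypergraph and the threshold are arbitrary, and $\rho$ is taken to be the likelihood-based divergence used above) computes the likelihood projection of a distribution on the $2 \times 2 \times 2$ table onto the hypergraph with hyperedges $\{A,C\}, \{B,C\}$ via iterative proportional fitting and then evaluates the $\lambda$-strong-faithfulness condition on the projection.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
lam = 0.1

# Hypothetical 2x2x2 table of relative frequencies, axes ordered (A, B, C)
p = rng.dirichlet(np.ones(8)).reshape(2, 2, 2)

# Likelihood projection onto the model {A,C},{B,C}: iterative proportional
# fitting matches the AC and BC margins of p (MLE under the log-linear model)
q = np.full((2, 2, 2), 1 / 8)
for _ in range(100):
    q *= (p.sum(axis=1) / q.sum(axis=1))[:, None, :]   # fit the AC margin
    q *= (p.sum(axis=0) / q.sum(axis=0))[None, :, :]   # fit the BC margin

# Interaction parameters of the projection for the two hyperedges
gamma_AC = np.log(q[0,0,0] * q[1,0,1] / (q[0,0,1] * q[1,0,0]))
gamma_BC = np.log(q[0,0,0] * q[0,1,1] / (q[0,0,1] * q[0,1,0]))

# p is projected-lambda-strong-faithful to {A,C},{B,C} iff both exceed lambda
print(min(abs(gamma_AC), abs(gamma_BC)) > lam)
\end{verbatim}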
\section{Conclusion}\label{SecConcl} We demonstrated that the association structure of discrete data can be very complex, and some distributions are not faithful to any undirected graphical model or any DAG. Thus, the attractive simplicity of graphical models may be misleading. In Section \ref{sectionGraphFaith}, we proposed the concept of parametric faithfulness, which can be applied to any exponential family, including those which cannot be specified using Markov properties. We considered the class of hypergraphs which can be identified with hierarchical log-linear models. We showed that for any distribution there exists a hypergraph to which it is parametrically faithful and suggested conducting the search in this class. As the class also contains graphical models, if a model structure that can be described by a graph is appropriate, it will be discovered (see the consistency result in Section \ref{sectionStrongFaith}). Our work is relevant for the popular causal search algorithms, referred to in Section \ref{intro}, which assume (strong-) faithfulness. The findings described in Sections \ref{sectionStrongFaith} and \ref{SecProp} imply that, depending on the quantitative expression for association used to define strong-faithfulness and on the choice of the cut-off parameter $\lambda$, the resulting model selection procedures may yield different results for the same data. \begin{table} \centering \caption{Some graphical models on three nodes.} \label{allgraphsABC} \begin{tabular}{m{7mm}m{32mm}m{65mm}m{35mm}} \hline & & & \\ & \multicolumn{1}{c}{Log-Linear Model} & \multicolumn{1}{c}{Conditional Independence} & \multicolumn{1}{c}{Graph} \\ [2ex] \hline 1 & \multicolumn{1}{c}{[ABC]} & \multicolumn{1}{c}{None} & \multicolumn{1}{c}{\begin{minipage}{.2\textwidth}\includegraphics[scale=0.5]{ABCpic.pdf}\end{minipage}} \\ [4ex] \hline 2 & \multicolumn{1}{c}{[A][B]} & \multicolumn{1}{c}{$A \perp\!\!\!\perp B$} & \multicolumn{1}{c}{\begin{minipage}{.2\textwidth}\includegraphics[scale=0.5]{AindB.pdf}\end{minipage}} \\ [4ex] \hline 3 & \multicolumn{1}{c}{[A][C]} & \multicolumn{1}{c}{$A \perp\!\!\!\perp C$} & \multicolumn{1}{c}{\begin{minipage}{.2\textwidth}\includegraphics[scale=0.5]{AindC.pdf}\end{minipage}} \\ [4ex] \hline 4 & \multicolumn{1}{c}{[B][C]} & \multicolumn{1}{c}{$B \perp\!\!\!\perp C$} & \multicolumn{1}{c}{\begin{minipage}{.2\textwidth}\includegraphics[scale=0.5]{BindC.pdf}\end{minipage}} \\ [4ex] \hline 5 & \multicolumn{1}{c}{[AB][C]} & \multicolumn{1}{c}{$AB \perp\!\!\!\perp C$} & \multicolumn{1}{c}{\begin{minipage}{.2\textwidth}\includegraphics[scale=0.5]{CindAB.pdf}\end{minipage}} \\ [4ex] \hline 6 & \multicolumn{1}{c}{[AC][B]} & \multicolumn{1}{c}{$AC \perp\!\!\!\perp B$} & \multicolumn{1}{c}{\begin{minipage}{.2\textwidth}\includegraphics[scale=0.5]{BindAC.pdf}\end{minipage}} \\ [4ex] \hline 7 & \multicolumn{1}{c}{[A][BC]} & \multicolumn{1}{c}{$A \perp\!\!\!\perp BC$} & \multicolumn{1}{c}{\begin{minipage}{.2\textwidth}\includegraphics[scale=0.5]{AindBC.pdf}\end{minipage}} \\ [4ex] \hline 8 & \multicolumn{1}{c}{[A][B][C]} & \multicolumn{1}{c}{\begin{tabular}{lll} $A \perp\!\!\!\perp B$, & $A \perp\!\!\!\perp C$, & $B \perp\!\!\!\perp C$, \\ $A \perp\!\!\!\perp B | C$, & $A \perp\!\!\!\perp C | B$, & $B \perp\!\!\!\perp C | A$ \end{tabular}} & \multicolumn{1}{c}{\begin{minipage}{.2\textwidth}\includegraphics[scale=0.5]{A_B_Cpic.pdf}\end{minipage}} \\ [4ex] \hline \end{tabular} \end{table} \end{document}
\begin{document} \journalname{Mathematical Programming} \title{\TheTitle\thanks{\TheFunding.}} \titlerunning{\TheShortTitle} \author{ Puya Latafat\and Andreas Themelis\and Panagiotis Patrinos } \authorrunning{\TheShortAuthor} \institute{ P. Latafat \at Tel.: +32 (0)16 374408\\ \email{[email protected]} \and A. Themelis \at Tel.: +32 (0)16 374573\\ \email{[email protected]} \and P. Patrinos \at Tel.: +32 (0)16 374445\\ \email{[email protected]} \and \TheAddressKU. } \date{Received: date / Accepted: date} \maketitle \begin{abstract} \TheAbstract \keywords{\TheKeywords} \subclass{\TheSubjclass} \end{abstract} \section{Introduction}\label{sec:Introduction} This paper addresses block-coordinate (BC) proximal gradient methods for problems of the form \begin{equation}\label{eq:P} \minimize_{\bm x=(x_1,\dots,x_N)\in\R^{\sum_in_i}} \Phi(\bm x) {}\coloneqq{} F(\bm x) {}+{} G(\bm x), \quad\text{where}\quad \textstyle F(\bm x)\coloneqq\tfrac1N\sum_{i=1}^N f_i(x_i), \end{equation} in the following setting. \begin{ass}[problem setting]\label{ass:basic} In problem \eqref{eq:P} the following hold: \begin{enumeratass} \item\label{ass:f} function \(f_i\) is \(L_{f_i}\)-smooth (Lipschitz differentiable with modulus \(L_{f_i}\)), \(i\in[N]\); \item\label{ass:g} function \(G\) is proper and lower semicontinuous (lsc); \item\label{ass:phi} a solution exists: \(\argmin\Phi\neq\emptyset\). \end{enumeratass} \end{ass} Unlike typical cases analyzed in the literature where \(G\) is separable \cite{tseng2001convergence,tseng2009coordinate,nesterov2012efficiency,beck2013convergence,bolte2014proximal,richtarik2014iteration,lin2015accelerated,chouzenoux2016block,hong2017iteration,xu2017globally}, we here consider the complementary case where it is only the smooth term \(F\) that is assumed to be separable. The main challenge in analyzing convergence of BC schemes for \eqref{eq:P}, especially in the nonconvex setting, is the fact that even in expectation the cost does not necessarily decrease along the trajectories. Instead, we demonstrate that the forward-backward envelope (FBE) \cite{patrinos2013proximal,themelis2018forward} is a suitable Lyapunov function for such problems. Several BC-type algorithms that allow for a nonseparable nonsmooth term have been considered in the literature; all of them, however, address convex settings. In \cite{tseng2008block,tseng2010coordinate} a class of convex composite problems is studied that involves a linear constraint as the nonsmooth nonseparable term. A BC algorithm with a Gauss-Southwell-type rule is proposed and the convergence is established using the cost as Lyapunov function by exploiting linearity of the constraint to ensure feasibility.
A refined analysis in \cite{necoara2013random,necoara2014random} extends this to a random coordinate selection strategy. Another approach in the convex case is to consider randomized BC updates applied to general averaged operators. Although this approach can allow for fully nonseparable problems, usually separable nonsmooth functions are considered in the literature. The convergence analysis of such methods relies on establishing quasi-Fej\'er monotonicity \cite{iutzeler2013asynchronous,combettes2015stochastic,pesquet2015class,bianchi2016coordinate,peng2016arock,latafat2019new}. In a primal-dual setting in \cite{fercoq2019coordinate} a combination of Bregman and Euclidean distances is employed as a Lyapunov function. In \cite{hanzely2018sega} a BC algorithm is proposed for strongly convex problems that involves coordinate updates for the gradient followed by a full proximal step, and the distance from the (unique) solution is used as a Lyapunov function. The analysis and the Lyapunov functions in all of the above-mentioned works rely heavily on convexity and are not suitable for nonconvex settings. Because \(G\) is allowed to be nonconvex and nonseparable, many machine learning problems can be formulated as in \eqref{eq:P}, a primary example being constrained and/or regularized finite sum problems \cite{bertsekas2011incremental,shalevshwartz2013stochastic,defazio2014finito,defazio2014saga,mairal2015incremental,reddi2016proximal,reddi2016stochastic,schmidt2017minimizing} \begin{equation}\label{eq:FSP} \textstyle \minimize_{x\in\R^n} \varphi(x) {}\coloneqq{} \tfrac1N\sum_{i=1}^N f_i(x) {}+{} g(x), \end{equation} where \(\func{f_i}{\R^n}{\R}\) are smooth functions and \(\func{g}{\R^n}{\Rinf}\) is possibly nonsmooth, and everything here can be nonconvex. In fact, one way to cast \eqref{eq:FSP} into the form of problem \eqref{eq:P} is by setting \begin{equation}\label{eq:FINITOG} \textstyle G(\bm x) {}\coloneqq{} \tfrac1N\sum_{i=1}^Ng(x_i) {}+{} \indicator_C(\bm x), \end{equation} where \( C {}\coloneqq{} \set{\bm x\in\R^{nN}}[x_1=x_2=\dots=x_N] \) is the consensus set, and \(\indicator_C\) is the indicator function of set \(C\), namely \( \indicator_C(\bm x)=0 \) for \(\bm x\in C\) and \(\infty\) otherwise. Since the nonsmooth term \(g\) is allowed to be nonconvex, formulation \eqref{eq:FSP} can account for nonconvex constraints such as rank constraints or zero norm balls, and nonconvex regularizers such as \(\ell^p\) with \(p\in[0,1)\) \cite{hou2012complexity}. Another prominent example in distributed applications is the \emph{``sharing''} problem \cite{boyd2011distributed}: \begin{equation}\label{eq:SP} \minimize_{\bm x\in\R^{nN}}\Phi(\bm x) {}\coloneqq{} \textstyle \tfrac1N\sum_{i=1}^Nf_i(x_i) {}+{} g\Bigl(\sum_{i=1}^Nx_i\Bigr), \end{equation} where \(\func{f_i}{\R^n}{\R}\) are smooth functions and \(\func{g}{\R^n}{\Rinf}\) is nonsmooth, and all are possibly nonconvex. The sharing problem is cast as in \eqref{eq:P} by setting \(G\coloneqq g \circ A\), where \(A\coloneqq[\I_n~\dots~\I_n]\in\R^{n\times nN}\) (\(\I_r\) denotes the \(r\times r\) identity matrix).
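As a concrete illustration of the consensus reformulation \eqref{eq:FINITOG} (the sketch below is ours and is not taken from this paper; the stepsizes, the choice \(g=\mu\|{}\cdot{}\|_1\) and all numerical values are arbitrary), anticipating the proximal operation used in the block-coordinate scheme of \Cref{sec:BC}, note that on the consensus set all blocks coincide, so the corresponding weighted proximal minimization collapses to a single block variable and reduces to one proximal step on \(g\) at a weighted average of the blocks:
\begin{verbatim}
import numpy as np

N, n = 3, 4
rng = np.random.default_rng(0)
x = rng.normal(size=(N, n))            # the N blocks x_1, ..., x_N
gamma = np.array([0.5, 1.0, 2.0])      # block stepsizes gamma_i
mu = 0.3                               # illustrative g = mu * ||.||_1 (any proximable g works)

# On C all blocks equal some u, and (1/N) * sum_i g(u) = g(u), so the minimizer of
#   G(w) + 0.5 * sum_i ||w_i - x_i||^2 / gamma_i
# is (u, ..., u) with
#   u = argmin_u  g(u) + 0.5 * sum_i ||u - x_i||^2 / gamma_i
#     = prox_{t g}(xbar),  t = 1 / sum_i (1/gamma_i),
#   xbar = average of the x_i weighted by 1/gamma_i.
w = 1.0 / gamma
xbar = (w[:, None] * x).sum(axis=0) / w.sum()
t = 1.0 / w.sum()
u = np.sign(xbar) * np.maximum(np.abs(xbar) - t * mu, 0.0)   # soft-thresholding
z = np.tile(u, (N, 1))                 # one candidate proximal point, identical copies per block
\end{verbatim}
This reduction is what keeps per-block updates inexpensive for \eqref{eq:FSP} and is consistent with the connection to the Finito/MISO algorithm discussed in \Cref{sec:Finito}.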
\subsection{The main block-coordinate algorithm}\label{sec:BC} While gradient evaluations are the building blocks of smooth minimization, a fundamental tool to deal with a nonsmooth lsc term \(\func{\psi}{\R^r}{\Rinf}\) is its \DEF{\(V\)-proximal mapping} \begin{equation}\label{eq:prox} \prox_\psi^V(x) {}\coloneqq{} \argmin_{w\in\R^r}\set{ \psi(w) {}+{} \tfrac12\|w-x\|^2_V }, \end{equation} where \(V\) is a symmetric and positive definite matrix and \(\|{}\cdot{}\|_V\) indicates the norm induced by the scalar product \((x,y)\mapsto\innprod{x}{Vy}\). It is common to take \(V=t^{-1}\I_r\) as a multiple of the \(r\times r\) identity matrix \(\I_r\), in which case the notation \(\prox_{t\psi}\) is typically used and \(t\) is referred to as a stepsize. While this operator enjoys nice regularity properties when \(\psi\) is convex, such as (single valuedness and) Lipschitz continuity, for nonconvex \(\psi\) it may fail to be a well-defined function and rather has to be understood as a point-to-set mapping \(\ffunc{\prox_\psi^V}{\R^r}{\R^r}\). Nevertheless, the value function associated to the minimization problem in the definition \eqref{eq:prox}, namely the \emph{Moreau envelope} \begin{equation}\label{eq:Moreau} \psi^V(x) {}\coloneqq{} \min_{w\in\R^r}\set{ \psi(w) {}+{} \tfrac12\|w-x\|^2_V }, \end{equation} is a well-defined real-valued function, in fact locally Lipschitz continuous, that lower bounds \(\psi\) and shares with \(\psi\) infima and minimizers. The proximal mapping is available in closed form for many useful functions, many of which are widely used regularizers in machine learning; for instance, the proximal mappings of the \(\ell^0\) and \(\ell^1\) regularizers amount to hard- and soft-thresholding operators. In many applications the cost to be minimized is structured as the sum of a smooth term \(h\) and a proximable (\ie with easily computable proximal mapping) term \(\psi\). In these cases, the \emph{proximal gradient method} \cite{fukushima1981generalized,attouch2013convergence} constitutes a cornerstone iterative method that interleaves gradient descent steps on the smooth function and proximal operations on the nonsmooth function, resulting in iterations of the form \( x^+ {}\in{} \prox_{\gamma\psi}(x-\gamma\nabla h(x)) \) for some suitable stepsize $\gamma$. Our proposed scheme to address problem \eqref{eq:P} is a BC variant of the proximal gradient method, in the sense that only some coordinates are updated according to the proximal gradient rule, while the others are left unchanged. This concept is synopsized in \Cref{alg:BC}, which constitutes the general algorithm addressed in this paper. \begin{algorithm} \caption{General forward-backward block-coordinate scheme} \label{alg:BC}
\begin{algorithmic}[1] \Require \(\bm x^0\in\R^{\sum_in_i}\),~ \(\gamma_i\in(0,\nicefrac{N}{L_{f_i}})\), {\small \(i\in[N]\)} \Statex \( \Gamma=\blockdiag(\gamma_1\I_{n_1},\dots,\gamma_N\I_{n_N}) \),~ \(k=0\) \item[{\sc Repeat} until convergence] \State \( \bm z^k {}\in{} \prox_G^{\Gamma^{-1}}\bigl( \bm x^k-\Gamma\nabla F(\bm x^k) \bigr) \) \State\label{state:BC:sampling} select a set of indices \(I^{k+1}\subseteq[N]\) \State update~~ \(x_i^{k+1}= z_i^k\) ~for \(i\in I^{k+1}\) ~~and~~ \(x_i^{k+1}= x_i^k\) ~for \(i\notin I^{k+1}\),~ \(k\gets k+1\) \item[{\sc Return}] \(\bm z^k\) \end{algorithmic} \end{algorithm} Although seemingly wasteful, in many cases one can efficiently compute individual blocks without the need of full operations.
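For concreteness, the following minimal Python sketch (our own; the quadratic \(f_i\), the consensus choice of \(G\), the stepsizes and the sampling rule are all illustrative) runs \Cref{alg:BC} on a toy instance in which \(f_i(x_i)=\tfrac12\|x_i-a_i\|^2\) and \(G\) is the indicator of the consensus set, so that \(\prox_G^{\Gamma^{-1}}\) is the \(\Gamma^{-1}\)-weighted projection onto consensus.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, n = 5, 3
a = rng.normal(size=(N, n))                    # data defining f_i(x_i) = 0.5*||x_i - a_i||^2
gamma = 0.9 * N * np.ones(N)                   # gamma_i in (0, N / L_{f_i}),  L_{f_i} = 1

def grad_F(x):                                 # block i of grad F(x) equals (1/N)*(x_i - a_i)
    return (x - a) / N

def prox_G(v):                                 # Gamma^{-1}-weighted projection onto consensus
    w = 1.0 / gamma
    u = (w[:, None] * v).sum(axis=0) / w.sum()
    return np.tile(u, (N, 1))

x = rng.normal(size=(N, n))
for k in range(200):
    z = prox_G(x - gamma[:, None] * grad_F(x)) # z^k in prox_G^{Gamma^{-1}}(x^k - Gamma grad F(x^k))
    I = rng.choice(N, size=2, replace=False)   # sampled block indices I^{k+1}
    x[I] = z[I]                                # only the sampled blocks are updated

print(np.allclose(z, a.mean(axis=0), atol=1e-6))   # z approaches the consensus minimizer
\end{verbatim}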
In fact, \Cref{alg:BC} bridges the gap between a BC framework and a class of incremental methods where a global computation typically involving the full gradient is carried out incrementally by performing computations only for a subset of coordinates. Two such broad applications, problems \eqref{eq:FSP} and \eqref{eq:SP}, are discussed in the dedicated \Cref{sec:Finito,sec:Sharing}, where among other things we will show that \Cref{alg:BC} leads to the well known Finito/MISO algorithm \cite{defazio2014finito,mairal2015incremental}. \subsection{Contribution} \begin{enumerate}[ leftmargin=0pt, labelwidth=7pt, itemindent=\labelwidth+\labelsep, label=\rlap{{\bf\arabic*)}}\hspace*{\labelwidth}, ] \item To the best of our knowledge this is the first analysis of BC schemes with a nonseparable nonsmooth term and in the fully nonconvex setting. While the original cost \(\Phi\) cannot serve as a Lyapunov function, we show that the forward-backward envelope (FBE) \cite{patrinos2013proximal,themelis2018forward} decreases surely, not only in expectation (\Cref{thm:sure}). \item This allows for a quite general convergence analysis for different sampling criteria. This paper in particular covers randomized strategies (\Cref{sec:random}) where at each iteration one or more coordinates are sampled with possibly time-varying probabilities, as well as essentially cyclic (and in particular cyclic and shuffled) strategies in case the nonsmooth term is convex (\Cref{sec:cyclic}). \item We exploit the Kurdyka-\L ojasiewicz (KL) property to show global (as opposed to subsequential) and linear convergence when the sampling is essentially cyclic and the nonsmooth function is convex, without imposing convexity requirements on the smooth functions (\Cref{thm:cyclic:global}). \ifaccel \item When \(G\) is convex and \(F\) is twice continuously differentiable, the FBE is continuously differentiable. If, additionally, \(F\) is (strongly) convex and quadratic, then the FBE is (strongly) convex and has Lipschitz-continuous gradient. Owing to these favorable properties, we propose a new BC Nesterov-type acceleration algorithm for minimizing the sum of a block-separable convex quadratic plus a nonsmooth convex function, whose analysis directly follows from existing work on smooth BC minimization \cite{allen2016even}. \fi \item As immediate byproducts of our analysis we obtain {\bf (a)} an incremental algorithm for the sharing problem \cite{boyd2011distributed} that to the best of our knowledge is novel (\Cref{sec:Sharing}), and {\bf (b)} the Finito/MISO algorithm \cite{defazio2014finito,mairal2015incremental}, leading to a much simpler and more general analysis than available in the literature, with new convergence results both for randomized sampling strategies in the fully nonconvex setting and for essentially cyclic samplings when the nonsmooth term is convex (\Cref{sec:Finito}). \end{enumerate} \subsection{Organization} The rest of the paper is organized as follows. The core of the paper lies in the convergence analysis of \Cref{alg:BC} detailed in \Cref{sec:convergence}: \Cref{sec:FBE} introduces the FBE, the fundamental tool of our methodology, and lists some of its properties, whose proofs are detailed in the dedicated \Cref{sec:proofs:FBE}, followed by other ancillary results documented in \Cref{sec:auxiliary}.
The algorithmic analysis begins in \Cref{sec:sure} with a collection of facts that hold independently of the chosen sampling strategy, and later specializes to randomized and essentially cyclic samplings in the dedicated \Cref{sec:random,sec:cyclic}. \Cref{sec:Finito,sec:Sharing} discuss two particular instances of the investigated algorithmic framework, namely (a generalization of) the Finito/MISO algorithm for finite sum minimization and an incremental scheme for the sharing problem, both for fully nonconvex and nonsmooth formulations. Convergence results are immediately inferred from those of the more general BC \Cref{alg:BC}. \Cref{sec:Conclusions} concludes the paper. \section{Convergence analysis}\label{sec:convergence} We begin by observing that \Cref{ass:basic} is enough to guarantee the well definedness of the forward-backward operator in \Cref{alg:BC}, which for notational convenience will be henceforth denoted as \(\operatorname T_\Gamma^{\text{\sc fb}}(\bm x)\). Namely, \(\ffunc{\operatorname T_\Gamma^{\text{\sc fb}}}{\R^{\sum_in_i}}{\R^{\sum_in_i}}\) is the point-to-set mapping \begin{align*} \operatorname T_\Gamma^{\text{\sc fb}}(\bm x) {}\coloneqq{} & \prox_G^{\Gamma^{-1}}\left(\Fw{\bm x}\right) \\ \numberthis\label{eq:T} {}={} & \argmin_{\bm w\in\R^{\sum_in_i}}\set{ F(\bm x)+\innprod{\nabla F(\bm x)}{\bm w-\bm x} {}+{} G(\bm w) {}+{} \tfrac12\|\bm w-\bm x\|_{\Gamma^{-1}}^2 }. \end{align*} \begin{lem}\label{thm:osc} Suppose that \Cref{ass:basic} holds, and let \(\Gamma\coloneqq\blockdiag(\gamma_1\I_{n_1},\dots,\gamma_N\I_{n_N})\) with \(\gamma_i\in(0,\nicefrac{N}{L_{f_i}})\), \(i\in[N]\). Then \(\prox_G^{\Gamma^{-1}}\) and \(\operatorname T_\Gamma^{\text{\sc fb}}\) are locally bounded, outer semicontinuous (osc), nonempty- and compact-valued mappings. \begin{proof} See \Cref{proof:thm:osc}. \end{proof}
\end{lem} \subsection{The forward-backward envelope}\label{sec:FBE} The fundamental challenge in the analysis of \eqref{eq:P} is the fact that, without separability of \(G\), descent on the cost function cannot be established even in expectation. Instead, we show that the \emph{forward-backward envelope} (FBE) \cite{patrinos2013proximal,themelis2018forward} can be used as a Lyapunov function. This subsection formally introduces the FBE, here generalized to account for a matrix-valued stepsize parameter \(\Gamma\), and lists some of its basic properties needed for the convergence analysis of \Cref{alg:BC}. Although they are easy adaptations of similar results in \cite{patrinos2013proximal,themelis2018forward,themelis2019acceleration}, for the sake of self-containedness the proofs are detailed in the dedicated \Cref{sec:proofs:FBE}. \begin{subequations} \begin{defin}[forward-backward envelope]\label{def:FBE} In problem \eqref{eq:P}, let \(f_i\) be differentiable functions, \(i\in[N]\), and for \(\gamma_1,\dots,\gamma_N>0\) let \( \Gamma=\blockdiag(\gamma_1\I_{n_1},\dots,\gamma_N\I_{n_N}) \). The forward-backward envelope (FBE) associated to \eqref{eq:P} with stepsize \(\Gamma\) is the function \( \func{\Phi_\Gamma^{\text{\sc fb}}}{\R^{\sum_in_i}}{[-\infty,\infty)} \) defined as \begin{equation} \label{eq:FBE} \Phi_\Gamma^{\text{\sc fb}}(\bm x) {}\coloneqq{} \inf_{\bm w\in\R^{\sum_in_i}}\set{ F(\bm x)+\innprod{\nabla F(\bm x)}{\bm w-\bm x} {}+{} G(\bm w) {}+{} \tfrac12\|\bm w-\bm x\|_{\Gamma^{-1}}^2 }. \end{equation} \end{defin} \Cref{def:FBE} highlights an important symmetry between the Moreau envelope and the FBE: similarly to the relation between the Moreau envelope \eqref{eq:Moreau} and the proximal mapping \eqref{eq:prox}, the FBE \eqref{eq:FBE} is the value function associated with the proximal gradient mapping \eqref{eq:T}. By replacing any minimizer \(\bm z\in\operatorname T_\Gamma^{\text{\sc fb}}(\bm x)\) in the right-hand side of \eqref{eq:FBE} one obtains yet another interesting interpretation of the FBE in terms of the \(\Gamma^{-1}\)-augmented Lagrangian associated to \eqref{eq:P} \begin{align} \nonumber \LL(\bm x,\bm z,\bm y) {}\coloneqq{} & F(\bm x)+G(\bm z)+\innprod{\bm y}{\bm x-\bm z} {}+{} \tfrac12\|\bm x-\bm z\|_{\Gamma^{-1}}^2, \shortintertext{namely,} \label{eq:FBEz} \Phi_\Gamma^{\text{\sc fb}}(\bm x) {}={} & F(\bm x)+\innprod{\nabla F(\bm x)}{\bm z-\bm x} {}+{} G(\bm z) {}+{} \tfrac12\|\bm z-\bm x\|_{\Gamma^{-1}}^2 \\ {}={} & \LL(\bm x,\bm z,-\nabla F(\bm x)). \shortintertext{ Lastly, by rearranging the terms it can easily be seen that } \label{eq:FBEMoreau} \Phi_\Gamma^{\text{\sc fb}}(\bm x) {}={} & F(\bm x) {}-{} \tfrac12\|\nabla F(\bm x)\|_\Gamma^2 {}+{} G^{\Gamma^{-1}}(\Fw{\bm x}), \end{align} hence in particular that the FBE inherits regularity properties of \(G^{\Gamma^{-1}}\) and \(\nabla F\), some of which are summarized in the next result. \end{subequations} \begin{lem}[FBE: fundamental inequalities]\label{thm:FBEineq} Suppose that \Cref{ass:basic} is satisfied and let \(\gamma_i\in(0,\nicefrac{N}{L_{f_i}})\), \(i\in[N]\).
Then, the FBE \(\Phi_\Gamma^{\text{\sc fb}}\) is a (real-valued and) locally Lipschitz-continuous function. Moreover, the following hold for any \(\bm x\in\R^{\sum_in_i}\): \begin{enumerate} \item\label{thm:leq} \(\Phi_\Gamma^{\text{\sc fb}}(\bm x)\leq\Phi(\bm x)\). \item\label{thm:geq} \( \tfrac12\|\bm z-\bm x\|^2_{\Gamma^{-1}-\Lambda_F} {}\leq{} \Phi_\Gamma^{\text{\sc fb}}(\bm x)-\Phi(\bm z) {}\leq{} \tfrac12\|\bm z-\bm x\|^2_{\Gamma^{-1}+\Lambda_F} \) for any \(\bm z\in\operatorname T_\Gamma^{\text{\sc fb}}(\bm x)\), where \( \Lambda_F {}\coloneqq{} \tfrac1N \blockdiag\bigl(L_{f_1}\I_{n_1},\dots, L_{f_N}\I_{n_N}\bigr) \). \item\label{thm:strconcost} If in addition each $f_i$ is $\mu_{f_i}$-strongly convex and $G$ is convex, then for every \(\bm x\in\R^{\sum_in_i}\) \[ \tfrac12\|\bm z-\bm x^\star\|_{\mu_F}^2 {}\leq{} \Phi_\Gamma^{\text{\sc fb}}(\bm x)-\min\Phi \] where \(\bm x^\star\coloneqq\argmin\Phi\), \( \mu_F {}\coloneqq{} \frac1N\blockdiag\bigl(\mu_{f_1}\I_{n_1},\dots,\mu_{f_N}\I_{n_N}\bigr) \), and \(\bm z=\operatorname T_\Gamma^{\text{\sc fb}}(\bm x)\). \end{enumerate} \begin{proof} See \Cref{proof:thm:FBEineq}. \end{proof} \end{lem} Another key property that the FBE shares with the Moreau envelope is that minimizing the extended-real-valued function \(\Phi\) is equivalent to minimizing the continuous function \(\Phi_\Gamma^{\text{\sc fb}}\). Moreover, the former is level bounded iff so is the latter. This fact will be particularly useful for the analysis of \Cref{alg:BC}, as it will be shown in \Cref{thm:sure} that the FBE (surely) decreases along its iterates. As a consequence, despite the fact that the same does not hold for \(\Phi\) (in fact, iterates may even be infeasible), coercivity of \(\Phi\) is enough to guarantee boundedness of \(\seq{\bm x^k}\) and \(\seq{\bm z^k}\). \begin{lem}[FBE: minimization equivalence]\label{thm:FBEmin} Suppose that \Cref{ass:basic} is satisfied and that \(\gamma_i\in(0,\nicefrac{N}{L_{f_i}})\), \(i\in[N]\). Then the following hold: \begin{enumerate} \item\label{thm:min} \(\min\Phi_\Gamma^{\text{\sc fb}}=\min\Phi\); \item\label{thm:argmin} \(\argmin\Phi_\Gamma^{\text{\sc fb}}=\argmin\Phi\); \item\label{thm:LB} \(\Phi_\Gamma^{\text{\sc fb}}\) is level bounded iff so is \(\Phi\). \end{enumerate} \begin{proof} See \Cref{proof:thm:FBEmin}. \end{proof}
\end{lem} We remark that the kinship of \(\Phi_\Gamma^{\text{\sc fb}}\) and \(\Phi\) extends also to local minimality; the interested reader is referred to \cite[Th. 3.6]{themelis2018proximal} for details. \subsection{A sure descent lemma}\label{sec:sure} We now proceed to the theoretical analysis of \Cref{alg:BC}. Clearly, some assumptions on the index selection criterion are needed in order to establish reasonable convergence results, for little can be guaranteed if, for instance, one of the indices is never selected. Nevertheless, for the sake of a general analysis it is instrumental to first investigate which properties hold independently of such criteria. After listing some of these facts in \Cref{thm:sure}, in \Cref{sec:random,sec:cyclic} we will specialize the results to randomized and (essentially) cyclic sampling strategies. \begin{lem}[sure descent]\label{thm:sure} Suppose that \Cref{ass:basic} is satisfied. Then, the following hold for the iterates generated by \Cref{alg:BC}: \begin{enumerate} \item\label{thm:Igeq} \( \Phi_\Gamma^{\text{\sc fb}}(\bm x^{k+1}) {}\leq{} \Phi_\Gamma^{\text{\sc fb}}(\bm x^k) {}-{} \sum_{i\in I^{k+1}}\tfrac{\xi_i}{2\gamma_i}\|z_i^k-x_i^k\|^2 \), where \(\xi_i\coloneqq\frac{N-\gamma_iL_{f_i}}{N}\), \(i\in[N]\), are strictly positive; \item\label{thm:decrease} \(\seq{\Phi_\Gamma^{\text{\sc fb}}(\bm x^k)}\) monotonically decreases to a finite value \(\Phi_\star\geq\min\Phi\); \item\label{thm:omega} \(\Phi_\Gamma^{\text{\sc fb}}\) is constant (and equals \(\Phi_\star\) as above) on the set of accumulation points of \(\seq{\bm x^k}\); \item\label{thm:xdiff} the sequence \(\seq{\|\bm x^{k+1}-\bm x^k\|^2}\) has finite sum (and in particular vanishes); \item\label{thm:bounded} if \(\Phi\) is coercive, then \(\seq{\bm x^k}\) and \(\seq{\bm z^k}\) are bounded. \end{enumerate} \begin{proof} \begin{proofitemize} \item\ref{thm:Igeq}~ To ease notation, let \( \Lambda_F {}\coloneqq{} \tfrac1N \blockdiag\bigl(L_{f_1}\I_{n_1},\dots, L_{f_N}\I_{n_N}\bigr) \) and for \(\bm w\in\R^{\sum_in_i}\) let \(w_I\in\R^{\sum_{i\in I}n_i}\) denote the slice \((w_i)_{i\in I}\), and let \(\Lambda_{F_I},\Gamma_I\in\R^{\sum_{i\in I}n_i\times\sum_{i\in I}n_i}\) be defined accordingly. Start by observing that, since \(\bm z^{k+1}\in\prox_G^{\Gamma^{-1}}(\Fw{\bm x^{k+1}})\), from the proximal inequality on $G$ it follows that \begin{align*} G(\bm z^{k+1})-G(\bm z^k) {}\leq{} & \tfrac12\|\bm z^k-\bm x^{k+1}+\Gamma\nabla F(\bm x^{k+1})\|_{\Gamma^{-1}}^2 {}-{} \tfrac12\|\bm z^{k+1}-\bm x^{k+1}+\Gamma\nabla F(\bm x^{k+1})\|_{\Gamma^{-1}}^2 \\ ={} & \numberthis\label{eq:proxIneqFBS} \tfrac12\|\bm z^k-\bm x^{k+1}\|_{\Gamma^{-1}}^2 {}-{} \tfrac12\|\bm z^{k+1}-\bm x^{k+1}\|_{\Gamma^{-1}}^2 {}+{} \innprod{\nabla F(\bm x^{k+1})}{\bm z^k-\bm z^{k+1}}.
\end{align*} We have \ifarxiv\else \bgroup\mathtight[0.5] \fi \begin{align*} \@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}(\bm x^{k+1})-\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}(\bm x^k) {}={} & {\color{red} F(\bm x^{k+1}) } {}+{} \innprod{\nabla F(\bm x^{k+1})}{\bm z^{k+1}-\bm x^{k+1}} {\color{blue} {}+{} G(\bm z^{k+1}) } {}+{} \tfrac12\|\bm z^{k+1}-\bm x^{k+1}\|_{\Gamma^{-1}}^2 \\ & {}-{} \left( {\color{red} F(\bm x^k)+\innprod{\nabla F(\bm x^k)}{\bm z^k-\bm x^k} } {\color{blue} {}+{} G(\bm z^k) } {}+{} \tfrac12\|\bm z^k-\bm x^k\|_{\Gamma^{-1}}^2 \right) \shortintertext{ {\color{red} apply the upper bound in \eqref{eq:Lip} with \(\bm w=\bm x^{k+1}\) } and {\color{blue} the proximal inequality \eqref{eq:proxIneqFBS} } } {}\leq{} & {\color{red} \innprod{\nabla F(\bm x^k)}{\bm x^{k+1}-\bm z^k} {}+{} \tfrac12\|\bm x^{k+1}-\bm x^k\|_{\Lambda_F}^2 } {}+{} \innprod{\nabla F(\bm x^{k+1})}{{\color{blue}\bm z^k}-\bm x^{k+1}} \\ & {}-{} \tfrac12\|\bm z^k-\bm x^k\|_{\Gamma^{-1}}^2 {\color{blue} {}+{} \tfrac12\|\bm z^k-\bm x^{k+1}\|_{\Gamma^{-1}}^2 }. \end{align*} \ifarxiv\else \egroup \fi To conclude, notice that the \(\ell\)-th block of \(\nabla F(\bm x^k)-\nabla F(\bm x^{k+1})\) is zero for \(\ell\notin I\), and that the \(\ell\)-th block of \(\bm x^{k+1}-\bm z^k\) is zero if \(\ell\in I\). Hence, the scalar product vanishes. For similar reasons, one has \( \| \bm z^k-\bm x^{k+1} \|^2_{\Gamma^{-1}} {}-{} \|\bm z^k-\bm x^k\|_{\Gamma^{-1}}^2 {}={} {}-{} \|z_I^k-x_I^k\|_{\Gamma_I^{-1}}^2 \) and \( \|\bm x^{k+1}-\bm x^k\|_{\Lambda_F}^2 {}={} \|z_I^k-x_I^k\|_{\Lambda_{F_I}}^2 \), yielding the claimed expression. \item\ref{thm:decrease}~ Monotonic decrease of \(\seq{\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}(\bm x^k)}\) is a direct consequence of assert \ref{thm:Igeq}. This ensures that the sequence converges to some value \(\@ifstar\@@P\@Phi_\star\), bounded below by \(\min\@ifstar\@@P\@Phi\) in light of \Cref{thm:min}. \item\ref{thm:omega}~ Directly follows from assert \ref{thm:decrease} together with the continuity of \(\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}\), see \Cref{thm:FBEineq}. \item\ref{thm:xdiff}~ Denoting \( \xi_{\rm min} {}\coloneqq{} \min_{i\in[N]}\set{ \xi_i } \) which is a strictly positive constant, it follows from assert \ref{thm:Igeq} that for each \(k\in\N\) it holds that \begin{align*} \@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}(\bm x^{k+1})-\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}(\bm x^k) {}\leq{} & {}-{} \sum_{\mathclap{i\in I^{k+1}}}{ \tfrac{\xi_i}{2\gamma_i}\|z_i^k-x_i^k\|^2 } \\ {}\leq{} & {}-{} \tfrac{\xi_{\rm min}}{2} \sum_{i\in I^{k+1}}{ \gamma_i^{-1}\|z_i^k-x_i^k\|^2 } \\ \numberthis\label{eq:SDx} {}={} & {}-{} \tfrac{\xi_{\rm min}}{2} \|\bm x^{k+1}-\bm x^k\|_{\Gamma^{-1}}^2. \end{align*} By summing for \(k\in\N\) and using the positive definiteness of \(\Gamma^{-1}\) together with the fact that \(\min\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}=\min\@ifstar\@@P\@Phi>\infty\) as ensured by \Cref{thm:min} and \Cref{ass:phi}, we obtain that \( \sum_{k\in\N}\|\bm x^{k+1}-\bm x^k\|^2 {}<{} \infty \). 
\item\ref{thm:bounded}~ It follows from assert \ref{thm:decrease} that the entire sequence \(\seq{\bm x^k}\) is contained in the sublevel set \(\set{\bm w}[\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}(\bm w)\leq\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}(\bm x^0)]\), which is bounded provided that \(\@ifstar\@@P\@Phi\) is coercive as shown in \Cref{thm:LB}. In turn, boundedness of \(\seq{\bm z^k}\) then follows from local boundedness of \(\@ifstar\operatorname T_\gamma^{\text{\sc fb}}\operatorname T_\Gamma^{\text{\sc fb}}\), cf. \Cref{thm:osc}. \qedhere \end{proofitemize} \end{proof}
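To make the iterates \(\seq{\bm x^k}\) and \(\seq{\bm z^k}\) appearing above concrete, the following minimal Python sketch (ours; it assumes equal block sizes and a user-supplied \texttt{prox\_G}, block gradients \texttt{grad\_f} and sampler) implements the block-coordinate forward-backward update: compute \(\bm z^k\in\operatorname T_\Gamma^{\text{\sc fb}}(\bm x^k)\) and overwrite only the sampled blocks.
\begin{verbatim}
import numpy as np

# Sketch (ours) of the BC forward-backward iteration:
#   z^k in prox_G^{Gamma^{-1}}(x^k - Gamma*grad F(x^k)),
#   x_i^{k+1} = z_i^k for i in I^{k+1},  x_i^{k+1} = x_i^k otherwise.
# Blocks are rows of a (N, n) array; prox_G(U, gamma) and grad_f[i] are assumptions.
def bc_forward_backward(x0, grad_f, prox_G, gamma, sampler, num_iters=100):
    N = len(grad_f)
    x = x0.copy()
    for k in range(num_iters):
        grad = np.stack([grad_f[i](x[i]) for i in range(N)]) / N   # grad of F
        z = prox_G(x - gamma[:, None] * grad, gamma)               # z^k
        I = sampler(k)                                             # I^{k+1}
        x[I] = z[I]                                                # update sampled blocks
    return x, z

# example sampler complying with the randomized assumption below:
# each index enters I^{k+1} independently with probability p_i > 0 (mini-batches allowed)
def make_randomized_sampler(p, seed=0):
    rng = np.random.default_rng(seed)
    return lambda k: np.flatnonzero(rng.random(len(p)) < p)
\end{verbatim}
Note that this naive sketch recomputes the full forward step at every iteration; the incremental implementations presented later (\Cref{alg:Finito,alg:Sharing}) avoid touching the blocks that are not sampled.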
\subsection{Randomized sampling}\label{sec:random}

In this section we provide convergence results for \Cref{alg:BC} where the index selection criterion complies with the following requirement.

\begin{ass}[randomized sampling requirements]\label{ass:random}
There exist \(p_1,\dots,p_N>0\) such that, at any iteration and independently of the past, each \(i\in[N]\) is sampled with probability at least \(p_i\).
\end{ass}

Our notion of randomization is general enough to allow for time-varying probabilities and mini-batch selections. The role of the parameters \(p_i\) in \Cref{ass:random} is to prevent an index from being sampled with arbitrarily small probability. In more rigorous terms, \( \mathbb P\bigl[i\in I^{k+1}\bigr] \geq p_i \) shall hold for all \(i\in[N]\), where \(\mathbb P\) represents the probability conditional to the knowledge at iteration \(k\). Notice that we do not require the \(p_i\)'s to sum up to one, as multiple index selections are allowed, similar to the setting of \cite{bianchi2016coordinate,latafat2019new} in the convex case.

Due to the possible nonconvexity of problem \eqref{eq:P}, unless additional assumptions are made not much can be said about convergence of the iterates to a unique point. Nevertheless, the following result shows that any accumulation point \(\bm x^\star\) of sequences \(\seq{\bm x^k}\) and \(\seq{\bm z^k}\) generated by \Cref{alg:BC} is a stationary point, in the sense that it satisfies the necessary condition for minimality \( 0\in\hat\partial\Phi(\bm x^\star) \), where \(\hat\partial\) denotes the (regular) nonconvex subdifferential, see \cite[Th. 10.1]{rockafellar2011variational}.

\begin{thm}[randomized sampling: subsequential convergence]\label{thm:random:subseq}
Suppose that \Cref{ass:basic,ass:random} are satisfied. Then, the following hold almost surely for the iterates generated by \Cref{alg:BC}:
\begin{enumerate}
\item\label{thm:res} the sequence \(\seq{\|\bm x^k-\bm z^k\|^2}\) has finite sum (and in particular vanishes);
\item\label{thm:decreasez} the sequence \(\seq{\Phi(\bm z^k)}\) converges to \(\Phi_\star\) as in \Cref{thm:decrease};
\item\label{thm:cluster} \(\seq{\bm x^k}\) and \(\seq{\bm z^k}\) have the same cluster points, all stationary and on which \(\Phi\) and \(\Phi_\Gamma^{\text{\sc fb}}\) equal \(\Phi_\star\).
\end{enumerate}
\begin{proof}
In what follows, \(\mathbb E\) denotes the expectation conditional to the knowledge at iteration \(k\).
\begin{proofitemize}
\item\ref{thm:res}~ Let \( \xi_i\coloneqq\frac{N-\gamma_iL_{f_i}}{N}>0 \), \(i\in[N]\), be as in \Cref{thm:Igeq}. We have
\begin{align*}
\mathbb E\bigl[\Phi_\Gamma^{\text{\sc fb}}(\bm x^{k+1})\bigr]
{}\overrel*[\leq]{\ref{thm:Igeq}}{} &
\mathbb E\Bigl[
\Phi_\Gamma^{\text{\sc fb}}(\bm x^k)
{}-{}
\sum_{i\in I^{k+1}}{ \tfrac{\xi_i}{2\gamma_i}\|z_i^k-x_i^k\|^2 }
\Bigr]
\\
{}={} &
\Phi_\Gamma^{\text{\sc fb}}(\bm x^k)
{}-{}
\sum_{I\in\Omega}{ \mathbb P\bigl[\mathcal I^{k+1}=I\bigr] \sum_{i\in I}{ \tfrac{\xi_i}{2\gamma_i}\|z_i^k-x_i^k\|^2 } }
\\
{}={} &
\Phi_\Gamma^{\text{\sc fb}}(\bm x^k)
{}-{}
\sum_{i=1}^N{ \sum_{I\in\Omega,I\ni i}{ \mathbb P\bigl[\mathcal I^{k+1}=I\bigr] \tfrac{\xi_i}{2\gamma_i}\|z_i^k-x_i^k\|^2 } }
\\
\numberthis\label{eq:EFBE+}
{}\leq{} &
\Phi_\Gamma^{\text{\sc fb}}(\bm x^k)
{}-{}
\sum_{i=1}^N{ \tfrac{p_i\xi_i}{2\gamma_i}\|z_i^k-x_i^k\|^2 },
\end{align*}
where \(\Omega\subseteq 2^{[N]}\) is the sample space (\(2^{[N]}\) denotes the power set of \([N]\)). Therefore,
\begin{equation}\label{eq:ExSD}
\mathbb E\bigl[\Phi_\Gamma^{\text{\sc fb}}(\bm x^{k+1})\bigr]
{}\leq{}
\Phi_\Gamma^{\text{\sc fb}}(\bm x^k)
{}-{}
\tfrac\sigma2 \|\bm x^k-\bm z^k\|_{\Gamma^{-1}}^2
\quad\text{where }
\sigma \coloneqq \min_{i=1\dots N}{ p_i\xi_i } > 0.
\end{equation}
The claim follows from the Robbins-Siegmund supermartingale theorem, see \eg \cite{robbins1985convergence} or \cite[Prop. 2]{bertsekas2011incremental}.

\item\ref{thm:decreasez}~ Observe that
\( \Phi_\Gamma^{\text{\sc fb}}(\bm x^k)-\tfrac12\|\bm z^k-\bm x^k\|^2_{\Gamma^{-1}+\Lambda_F} {}\leq{} \Phi(\bm z^k) {}\leq{} \Phi_\Gamma^{\text{\sc fb}}(\bm x^k)-\tfrac12\|\bm z^k-\bm x^k\|^2_{\Gamma^{-1}-\Lambda_F} \)
holds (surely) for \(k\in\N\) in light of \Cref{thm:geq}. The claim then follows by invoking \Cref{thm:decrease} and assert \ref{thm:res}.

\item\ref{thm:cluster}~ In the rest of the proof, for conciseness, the ``almost sure'' nature of the results will be implied without mention. It follows from assert \ref{thm:res} that a subsequence \(\seq{\bm x^k}[k\in K]\) converges to some point \(\bm x^\star\) iff so does the subsequence \(\seq{\bm z^k}[k\in K]\). Since \(\operatorname T_\Gamma^{\text{\sc fb}}(\bm x^k)\ni\bm z^k\) and both \(\bm x^k\) and \(\bm z^k\) converge to \(\bm x^\star\) as \(K\ni k\to\infty\), the inclusion \(0\in\hat\partial\Phi(\bm x^\star)\) follows from \Cref{thm:critical}. Since the full sequences \(\seq{\Phi_\Gamma^{\text{\sc fb}}(\bm x^k)}\) and \(\seq{\Phi(\bm z^k)}\) converge to the same value \(\Phi_\star\) (cf. \Cref{thm:decrease} and assert \ref{thm:decreasez}), due to continuity of \(\Phi_\Gamma^{\text{\sc fb}}\) (\Cref{thm:FBEineq}) it holds that \(\Phi_\Gamma^{\text{\sc fb}}(\bm x^\star)=\Phi_\star\), and in turn the bounds in \Cref{thm:geq} together with assert \ref{thm:res} ensure that \(\Phi(\bm x^\star)=\Phi_\star\) too.
\qedhere
\end{proofitemize}
\end{proof}
\end{thm}

When \(G\) is convex and \(F\) is strongly convex (that is, each of the functions \(f_i\) is strongly convex), the FBE decreases \(Q\)-linearly in expectation along the iterates generated by the randomized BC-\Cref{alg:BC}.

\begin{thm}[randomized sampling: linear convergence under strong convexity]\label{thm:random:linear}
Additionally to \Cref{ass:basic,ass:random}, suppose that \(G\) is convex and that each \(f_i\) is \(\mu_{f_i}\)-strongly convex. Then, for all \(k\) the following hold for the iterates generated by \Cref{alg:BC}:
\begin{subequations}\label{subeq:random:linear}
\begin{align}
\label{eq:random:Qlinear}
\mathbb E\bigl[\Phi_\Gamma^{\text{\sc fb}}(\bm x^{k+1})-\min\Phi\bigr]
{}\leq{} &
(1-c) \bigl(\Phi_\Gamma^{\text{\sc fb}}(\bm x^k)-\min\Phi\bigr)
\\
\mathbb E\bigl[\Phi(\bm z^k)-\min\Phi\bigr]
{}\leq{} &
\bigl(\Phi(\bm x^0)-\min\Phi\bigr)(1-c)^k
\\
\tfrac12\mathbb E\bigl[\|\bm z^k-\bm x^\star\|^2_{\mu_F}\bigr]
{}\leq{} &
\bigl(\Phi(\bm x^0)-\min\Phi\bigr)(1-c)^k
\end{align}
\end{subequations}
where \(\bm x^\star\coloneqq\argmin\Phi\), \( \mu_F \coloneqq \tfrac1N \blockdiag\bigl(\mu_{f_1}\I_{n_1},\dots,\mu_{f_N}\I_{n_N}\bigr) \), and, denoting \(\xi_i=\frac{N-\gamma_iL_{f_i}}{N}\), \(i\in[N]\),
\begin{equation}\label{eq:cwc}
c
{}={}
\min_{i\in[N]}\set{\tfrac{\xi_ip_i}{\gamma_i}}
{}\bigg/{}
\max_{i\in[N]}\set{\tfrac{N-\gamma_i\mu_{f_i}}{\gamma_i^2\mu_{f_i}}}.
\end{equation}
Moreover, by setting the stepsizes \(\gamma_i\) and minimum sampling probabilities \(p_i\) as
\begin{equation}\label{eq:gammaLinear}
\gamma_i
{}={}
\tfrac{N}{\mu_{f_i}}
\left(1-\sqrt{1-1/\kappa_{i}}\right)
\quad\text{and}\quad
p_i
{}={}
\frac{
\left(\sqrt{\kappa_i}+\sqrt{\kappa_i-1}\right)^2
}{
\sum_{j=1}^N\left(\sqrt{\kappa_j}+\sqrt{\kappa_j-1}\right)^2
}
\end{equation}
with \( \kappa_i\coloneqq\frac{L_{f_i}}{\mu_{f_i}} \), \(i\in[N]\), the constant \(c\) in \eqref{subeq:random:linear} can be tightened to
\begin{equation}\label{eq:cbc}
c
{}={}
\tfrac{1}{ \sum_{i=1}^N\left( \sqrt{\kappa_i}+\sqrt{\kappa_i-1} \right)^2 }.
\end{equation}
\begin{proof}
Since \(\bm z^k\) is a minimizer in \eqref{eq:FBE}, the necessary stationarity condition reads \( \Gamma^{-1}(\bm x^k-\bm z^k)-\nabla F(\bm x^k) \in \partial G(\bm z^k) \). Convexity of \(G\) then implies
\[
G(\bm x^\star)
{}\geq{}
G(\bm z^k)
{}+{}
\innprod{\Gamma^{-1}(\bm x^k-\bm z^k)-\nabla F(\bm x^k)}{\bm x^\star-\bm z^k},
\]
whereas from strong convexity of \(F\) we have
\[
F(\bm x^\star)
{}\geq{}
F(\bm x^k)
{}+{}
\innprod{\nabla F(\bm x^k)}{\bm x^\star-\bm x^k}
{}+{}
\tfrac12\|\bm x^k-\bm x^\star\|^2_{\mu_F}.
\]
By combining these inequalities into \eqref{eq:FBEz}, and denoting \(\Phi_\star\coloneqq\min\Phi=\min\Phi_\Gamma^{\text{\sc fb}}\) (cf. \Cref{thm:min}), we have
\begin{align*}
\Phi_\Gamma^{\text{\sc fb}}(\bm x^k)-\Phi_\star
{}\leq{} &
\tfrac12\|\bm z^k-\bm x^k\|^2_{\Gamma^{-1}}
{}-{}
\tfrac12\|\bm x^\star-\bm x^k\|^2_{\mu_F}
{}+{}
\innprod{\Gamma^{-1}(\bm z^k-\bm x^k)}{\bm x^\star-\bm z^k}
\\
{}={} &
\tfrac12\|\bm z^k-\bm x^k\|_{\Gamma^{-1}-\mu_F}^2
{}+{}
\innprod{(\Gamma^{-1}-\mu_F)(\bm z^k-\bm x^k)}{\bm x^\star-\bm z^k}
{}-{}
\tfrac12\|\bm x^\star-\bm z^k\|_{\mu_F}^2.
\end{align*}
Next, by using the inequality \( \innprod{\bm a}{\bm b} \leq \tfrac12\|\bm a\|_{\mu_F}^2 + \tfrac12\|\bm b\|^2_{\mu_F^{-1}} \) to cancel out the last term, we obtain
\begin{align*}
\Phi_\Gamma^{\text{\sc fb}}(\bm x^k)-\Phi_\star
{}\leq{} &
\tfrac12\|\bm z^k-\bm x^k\|_{\Gamma^{-1}-\mu_F}^2
{}+{}
\tfrac12\|(\Gamma^{-1}-\mu_F)(\bm x^k-\bm z^k)\|_{\mu_F^{-1}}^2
\\
{}={} &
\tfrac12\|\bm z^k-\bm x^k\|_{\Gamma^{-2}\mu_F^{-1}(\I-\Gamma\mu_F)}^2,
\numberthis\label{eq:QUB}
\end{align*}
where the last identity uses the fact that the matrices are diagonal. Combined with \eqref{eq:EFBE+}, the claimed \(Q\)-linear convergence \eqref{eq:random:Qlinear} with factor \(c\) as in \eqref{eq:cwc} is obtained. The \(R\)-linear rates in terms of the cost function and distance from the solution are obtained by repeated application of \eqref{eq:random:Qlinear}, after taking (unconditional) expectation on both sides and using \Cref{thm:FBEineq}.

To obtain the tighter estimate \eqref{eq:cbc}, observe that \eqref{eq:EFBE+} with the choice
\[
\textstyle
p_i
{}\coloneqq{}
\tfrac{1}{\gamma_i\mu_{f_i}}
\tfrac{N-\gamma_i\mu_{f_i}}{N-\gamma_iL_{f_i}}
\left(
\sum_j{ \tfrac{1}{\gamma_j\mu_{f_j}} \tfrac{N-\gamma_j\mu_{f_j}}{N-\gamma_jL_{f_j}} }
\right)^{-1},
\]
which equals the one in \eqref{eq:gammaLinear} with \(\gamma_i\) as prescribed, yields
\begin{align*}
\mathbb E\bigl[\Phi_\Gamma^{\text{\sc fb}}(\bm x^{k+1})-\Phi_\star\bigr]
{}\leq{} &
\Phi_\Gamma^{\text{\sc fb}}(\bm x^k)-\Phi_\star
{}-{}
\left( \textstyle 2N\sum_j{ \tfrac{1}{\gamma_j\mu_{f_j}} \tfrac{N-\gamma_j\mu_{f_j}}{N-\gamma_jL_{f_j}} } \right)^{-1}
\sum_{i=1}^N{ \tfrac{N-\gamma_i\mu_{f_i}}{\gamma_i^2\mu_{f_i}} \|z_i^k-x_i^k\|^2 }
\\
{}={} &
\textstyle
\Phi_\Gamma^{\text{\sc fb}}(\bm x^k)-\Phi_\star
{}-{}
\left( 2N\sum_j{ \tfrac{1}{\gamma_j\mu_{f_j}} \tfrac{N-\gamma_j\mu_{f_j}}{N-\gamma_jL_{f_j}} } \right)^{-1}
\|\bm z^k-\bm x^k\|_{\Gamma^{-1}\mu_F^{-1}(\Gamma^{-1}-\mu_F)}^2.
\end{align*}
The assert now follows by combining this with \eqref{eq:QUB} and replacing the values of \(\gamma_i\) as proposed in \eqref{eq:gammaLinear}.
\end{proof}
\end{thm}
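As a small numerical illustration (ours), the snippet below computes the tuned stepsizes and sampling probabilities of \eqref{eq:gammaLinear} from made-up moduli \(L_{f_i},\mu_{f_i}\), and compares the resulting contraction factor \eqref{eq:cbc} with the generic one of \eqref{eq:cwc}.
\begin{verbatim}
import numpy as np

L  = np.array([10.0, 4.0, 1.5, 1.0])     # example Lipschitz moduli L_{f_i}
mu = np.array([ 1.0, 0.5, 1.0, 0.5])     # example strong-convexity moduli mu_{f_i}
N  = len(L)
kappa = L / mu                           # condition numbers kappa_i

gamma = (N / mu) * (1.0 - np.sqrt(1.0 - 1.0 / kappa))       # eq. (gammaLinear)
w     = (np.sqrt(kappa) + np.sqrt(kappa - 1.0)) ** 2
p     = w / w.sum()                                         # eq. (gammaLinear)

xi      = (N - gamma * L) / N
c_wc    = np.min(xi * p / gamma) / np.max((N - gamma * mu) / (gamma**2 * mu))  # eq. (cwc)
c_tight = 1.0 / w.sum()                                     # eq. (cbc)

print(f"c (generic bound)   = {c_wc:.5f}")
print(f"c (tightened bound) = {c_tight:.5f}")   # if all kappa_i = 1, c = 1/N
\end{verbatim}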
Notice that as the \(\kappa_i\)'s approach \(1\) the linear rate tends to \(1-\nicefrac1N\).

\subsection{Cyclic, shuffled and essentially cyclic samplings}\label{sec:cyclic}

In this section we analyze the convergence of the BC-\Cref{alg:BC} when a cyclic, shuffled cyclic or (more generally) an essentially cyclic sampling \cite{tseng1987relaxation,tseng2001convergence,hong2017iteration,chow2017cyclic,xu2017globally} is used. As formalized in the following standing assumption, an additional convexity requirement for the nonsmooth term \(G\) is needed.

\begin{ass}[essentially cyclic sampling requirements]\label{ass:cyclic}
In problem \eqref{eq:P}, function \(G\) is convex. Moreover, there exists \(T\geq 1\) such that in \Cref{alg:BC} each index is selected at least once within any interval of \(T\) iterations.
\end{ass}

Note that having \(T<N\) is possible because of our general sampling strategy where sets of indices can be sampled within the same iteration. For instance, \(T=1\) corresponds to \(I^{k+1}=[N]\) for all \(k\), in which case \Cref{alg:BC} would reduce to a (full) proximal gradient scheme. Two notable special cases of single-index selection rules are the cyclic and shuffled cyclic sampling strategies; a code sketch of both generators is given right after \Cref{thm:cyclic:subseq} below.
\begin{itemize}[ leftmargin=*, label={}, itemindent=0cm, labelsep=0pt, partopsep=0pt, parsep=0pt, listparindent=0pt, topsep=0pt, ]
\item{\sc Shuffled cyclic sampling:} corresponds to setting
\begin{equation}\label{eq:ShufCyclicRule}
I^{k+1}=\set{\pi_{\lfloor\nicefrac kN\rfloor}\bigl(\mod(k,N)+1\bigr)}\quad \text{for all}\quad k\in\N,
\end{equation}
where \(\pi_0,\pi_1,\dots\) are permutations of the set of indices \([N]\) (chosen randomly or deterministically).
\item{\sc Cyclic sampling:} corresponds to the case \eqref{eq:ShufCyclicRule} with \(\pi_{\lfloor\nicefrac kN\rfloor}=\id\), \ie,
\begin{equation}\label{eq:cyclicRule}
I^{k+1}=\set{\mod(k,N)+1}\quad \text{for all}\quad k\in\N.
\end{equation}
\end{itemize}
We remark that in practice it has been observed that an effective sampling technique is to use random shuffling after each cycle \cite[\S2]{bertsekas2015convex}. Consistently with the deterministic nature of the essentially cyclic sampling, all results of the previous section hold surely, as opposed to almost surely.

\begin{thm}[essentially cyclic sampling: subsequential convergence]\label{thm:cyclic:subseq}
Suppose that \Cref{ass:basic,ass:cyclic} are satisfied. Then, all the asserts of \Cref{thm:random:subseq} hold surely.
\begin{proof}
We first establish an important descent inequality for \(\Phi_\Gamma^{\text{\sc fb}}\) after every \(T\) iterations, cf. \eqref{eq:Essential_cyclic_descent}. Convexity of \(G\), entailing Lipschitz continuity of \(\prox_{G}^{\Gamma^{-1}}\) (cf. \Cref{thm:FNE}), allows the employment of techniques similar to those in \cite[Lemma 3.3]{beck2013convergence}. Since all indices are updated at least once every \(T\) iterations,
\begin{equation}\label{eq:ki}
\ki
{}\coloneqq{}
\min\set{t\in[T]}[ \text{\(i\) is sampled at iteration \(T\nu+t-1\)} ]
\end{equation}
is well defined for each index \(i\in[N]\) and \(\nu\in\N\). Since \(i\) is sampled at iteration \(T\nu+\ki-1\) and \(x_i^{T\nu}=x_i^{T\nu+1}=\dots=x_i^{T\nu+\ki-1}\) by definition of \(\ki\), it holds that
\begin{align*}
x_i^{T\nu+\ki}
{}={} &
x_i^{T\nu+\ki-1}
{}+{}
\trans{U_i}\,\left( \operatorname T_\Gamma^{\text{\sc fb}}(\bm x^{T\nu+\ki-1}) {}-{} \bm x^{T\nu+\ki-1} \right)
\\
\numberthis\label{eq:equiiter_ki}
{}={} &
x_i^{T\nu}
{}+{}
\trans{U_i}\,\left( \operatorname T_\Gamma^{\text{\sc fb}}(\bm x^{T\nu+\ki-1}) {}-{} \bm x^{T\nu+\ki-1} \right),
\end{align*}
\ifaccel\else
where \(U_i\in\R^{(\sum_jn_j)\times n_i}\) denotes the \(i\)-th block column of the identity matrix, so that for a vector \(v\in\R^{n_i}\)
\begin{equation}\label{eq:U}
U_iv
{}={}
\trans{(0,\dots, 0, \!\overbracket{\,v\,}^{\mathclap{i\text{-th}}}\!, 0, \dots, 0)}.
\end{equation}
\fi
For all \(t\in[T]\) the following holds:
\begin{align*}
\Phi_\Gamma^{\text{\sc fb}}(\bm x^{T(\nu+1)})
{}-{}
\Phi_\Gamma^{\text{\sc fb}}(\bm x^{T\nu})
{}={} &
\sum_{\tau=1}^T\left( \Phi_\Gamma^{\text{\sc fb}}(\bm x^{T\nu+\tau}) {}-{} \Phi_\Gamma^{\text{\sc fb}}(\bm x^{T\nu+\tau-1}) \right)
\\
{}\leq{} &
\Phi_\Gamma^{\text{\sc fb}}(\bm x^{T\nu+t})
{}-{}
\Phi_\Gamma^{\text{\sc fb}}(\bm x^{T\nu+t-1})
\\
{}\leq{} &
-\tfrac{\xi_{\rm min}}{2} \|\bm x^{T\nu+t}-\bm x^{T\nu+t-1}\|_{\Gamma^{-1}}^2,
\numberthis\label{eq:descent_esscyc}
\end{align*}
where \(\xi_i\coloneqq \tfrac{N-\gamma_{i}L_{f_{i}}}{N}\) is as in \Cref{thm:Igeq}, \(\xi_{\rm min}\coloneqq \min_{i\in[N]}\set{\xi_i}\), and the two inequalities follow from \Cref{thm:Igeq}. Moreover, using the triangle inequality for \(i\in[N]\) yields
\begin{align*}
\|\bm x^{T\nu+\ki-1}-\bm x^{T\nu}\|_{\Gamma^{-1}}
{}\leq{} &
\sum_{\tau=1}^{\ki-1}\|\bm x^{T\nu+\tau}-\bm x^{T\nu+\tau-1}\|_{\Gamma^{-1}}
\\
\numberthis\label{eq:new25}
{}\leq{} &
\tfrac{T}{\sqrt{\xi_{\rm min}/2}}
\left(
\Phi_\Gamma^{\text{\sc fb}}(\bm x^{T\nu})
{}-{}
\Phi_\Gamma^{\text{\sc fb}}(\bm x^{T(\nu+1)})
\right)^{\nicefrac12},
\end{align*}
where the second inequality follows from \eqref{eq:descent_esscyc} together with the fact that \(\ki\leq T\). For all \(i\in[N]\), from the triangle inequality and the \(L_{\bf T}\)-Lipschitz continuity of \(\operatorname T_\Gamma^{\text{\sc fb}}\) (\Cref{thm:TLip}) we have
\begin{align*}
\gamma_i^{-\nicefrac12} \|\trans{U_i}\,(\bm x^{T\nu}-\operatorname T_\Gamma^{\text{\sc fb}}(\bm x^{T\nu}))\|
{}\leq{} &
\gamma_i^{-\nicefrac12} \|\trans{U_i}\,\bigl(\bm x^{T\nu}-\operatorname T_\Gamma^{\text{\sc fb}}(\bm x^{T\nu+\ki-1})\bigr)\|
\\
& {}+{}
\gamma_i^{-\nicefrac12} \|\trans{U_i}\,\bigl(\operatorname T_\Gamma^{\text{\sc fb}}(\bm x^{T\nu+\ki-1})-\operatorname T_\Gamma^{\text{\sc fb}}(\bm x^{T\nu})\bigr)\|
\\
{}\leq{} &
\gamma_i^{-\nicefrac12} \|x_i^{T\nu+\ki-1}-x_i^{T\nu+\ki}\|
{}+{}
\|\operatorname T_\Gamma^{\text{\sc fb}}(\bm x^{T\nu+\ki-1})-\operatorname T_\Gamma^{\text{\sc fb}}(\bm x^{T\nu})\|_{\Gamma^{-1}}
\\
{}\leq{} &
\|\bm x^{T\nu+\ki-1}-\bm x^{T\nu+\ki}\|_{\Gamma^{-1}}
{}+{}
L_{\bf T}\|\bm x^{T\nu+\ki-1}-\bm x^{T\nu}\|_{\Gamma^{-1}}
\\
\numberthis\label{eq:sqrtbound}
{}\overrel[\leq]{\eqref{eq:descent_esscyc},~\eqref{eq:new25}}{} &
\tfrac{1+TL_{\bf T}}{\sqrt{\xi_{\rm min}/2}}
\left(
\Phi_\Gamma^{\text{\sc fb}}(\bm x^{T\nu})
{}-{}
\Phi_\Gamma^{\text{\sc fb}}(\bm x^{T(\nu+1)})
\right)^{\nicefrac12}.
\end{align*}
By squaring and summing over \(i\in[N]\), we obtain
\begin{equation}\label{eq:Essential_cyclic_descent}
\Phi_\Gamma^{\text{\sc fb}}(\bm x^{T(\nu+1)})-\Phi_\Gamma^{\text{\sc fb}}(\bm x^{T\nu})
{}\leq{}
-\tfrac{\xi_{\rm min}}{2N(1+TL_{\bf T})^2}
\|\bm z^{T\nu}-\bm x^{T\nu}\|^{2}_{\Gamma^{-1}}.
\end{equation}
By telescoping the inequality and using the fact that \(\min\Phi_\Gamma^{\text{\sc fb}}=\min\Phi\) \ifarxiv by \else shown in \fi \Cref{thm:min}, we obtain that \( \seq{\|\bm z^{T\nu}-\bm x^{T\nu}\|^2_{\Gamma^{-1}}}[\nu\in\N] \) has finite sum, and in particular vanishes. Clearly, by suitably shifting, for every \(t\in[T]\) the same can be said for the sequence \( \seq{\|\bm z^{T\nu+t}-\bm x^{T\nu+t}\|^2_{\Gamma^{-1}}}[\nu\in\N] \). The whole sequence \( \seq{\|\bm z^k-\bm x^k\|^2} \) is thus summable, and we may now infer the claim as done in the proof of \Cref{thm:random:subseq}.
\end{proof}
\end{thm}
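The following Python sketch (ours) generates the index sequences of the cyclic rule \eqref{eq:cyclicRule} and of the shuffled cyclic rule \eqref{eq:ShufCyclicRule}; indices are 0-based in the code, while the paper counts from 1.
\begin{verbatim}
import random
from typing import Callable, List

def make_cyclic_sampler(N: int) -> Callable[[int], List[int]]:
    """I^{k+1} = {mod(k, N) + 1} in the paper's (1-based) notation."""
    return lambda k: [k % N]

def make_shuffled_cyclic_sampler(N: int, seed: int = 0) -> Callable[[int], List[int]]:
    """A fresh permutation pi_{floor(k/N)} is drawn at the start of every cycle."""
    rng = random.Random(seed)
    perm: List[int] = []
    def sampler(k: int) -> List[int]:
        nonlocal perm
        if k % N == 0:                 # new cycle of N iterations: reshuffle
            perm = list(range(N))
            rng.shuffle(perm)
        return [perm[k % N]]
    return sampler

# both rules select each index exactly once per cycle; the cyclic rule is
# essentially cyclic with T = N, the shuffled one with T <= 2N - 1
cyc, shuf = make_cyclic_sampler(4), make_shuffled_cyclic_sampler(4)
print([cyc(k) for k in range(8)])      # [[0], [1], [2], [3], [0], [1], [2], [3]]
print([shuf(k) for k in range(8)])
\end{verbatim}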
In the next theorem, explicit linear convergence rates are derived under the additional strong convexity assumption on the smooth functions. The cyclic and shuffled cyclic cases are treated separately, as tighter bounds can be obtained by leveraging the fact that within cycles of \(N\) iterations every index is updated exactly once.

\begin{thm}[essentially cyclic sampling: linear convergence under strong convexity]\label{thm:cyclic:linear}
Additionally to \Cref{ass:basic,ass:cyclic}, suppose that each function \(f_i\) is \(\mu_{f_i}\)-strongly convex. Then, denoting \( \delta \coloneqq \min_{i\in[N]}\set{ \tfrac{\gamma_i\mu_{f_i}}{N} } \) and \( \Delta \coloneqq \max_{i\in[N]}\set{ \tfrac{\gamma_iL_{f_i}}{N} } \), for all \(\nu\in\N\) the following hold for the iterates generated by \Cref{alg:BC}:
\begin{subequations}\label{subeq:cyclic:linear}
\begin{align}
\label{eq:cyclic:Qlinear}
\Phi_\Gamma^{\text{\sc fb}}(\bm x^{T(\nu+1)})-\min\Phi
{}\leq{} &
(1-c) \bigl(\Phi_\Gamma^{\text{\sc fb}}(\bm x^{T\nu})-\min\Phi\bigr)
\\
\label{eq:cyclic:Rlinear1}
\Phi(\bm z^{T\nu})-\min\Phi
{}\leq{} &
\bigl(\Phi(\bm x^0)-\min\Phi\bigr)(1-c)^\nu
\\
\label{eq:cyclic:Rlinear2}
\tfrac12\|\bm z^{T\nu}-\bm x^\star\|^2_{\mu_F}
{}\leq{} &
\bigl(\Phi(\bm x^0)-\min\Phi\bigr)(1-c)^\nu
\end{align}
\end{subequations}
where \(\bm x^\star\coloneqq\argmin\Phi\), \( \mu_F \coloneqq \tfrac1N \blockdiag\bigl(\mu_{f_1}\I_{n_1},\dots,\mu_{f_N}\I_{n_N}\bigr) \), and
\begin{equation}\label{eq:cyclic:cwc}
c
{}={}
\frac{ \delta(1-\Delta) }{ N\bigl(1+T(1-\delta)\bigr)^2 (1-\delta) }.
\end{equation}
In the case of shuffled cyclic \eqref{eq:ShufCyclicRule} or cyclic \eqref{eq:cyclicRule} sampling, the inequalities can be tightened by replacing \(T\) with \(N\) and
\begin{equation}\label{eq:linearShuffledCyclic}
c
{}={}
\frac{\delta(1-\Delta)}{N\left(2-\delta\right)^{2}\left(1-\delta\right)}.
\end{equation}
\begin{proof}
\begin{proofitemize}
\item\emph{The general essentially cyclic case.}~
Since \(\operatorname T_\Gamma^{\text{\sc fb}}\) is \(L_{\bf T}\)-Lipschitz continuous with \(L_{\bf T}=1-\delta\), as shown in \Cref{thm:contractive}, inequality \eqref{eq:Essential_cyclic_descent} becomes
\[
\Phi_\Gamma^{\text{\sc fb}}(\bm x^{T(\nu+1)})
{}-{}
\Phi_\Gamma^{\text{\sc fb}}(\bm x^{T\nu})
{}\leq{}
-\tfrac{1-\Delta}{2N(1+T(1-\delta))^2}
\|\bm z^{T\nu}-\bm x^{T\nu}\|^2_{\Gamma^{-1}}.
\]
Moreover, it follows from \eqref{eq:QUB} that
\begin{equation}\label{eq:strongLB}
\Phi_\Gamma^{\text{\sc fb}}(\bm x^{T\nu})-\Phi_\star
{}\leq{}
\tfrac12 (\delta^{-1}-1) \|\bm z^{T\nu}-\bm x^{T\nu}\|_{\Gamma^{-1}}^2.
\end{equation}
By combining the two inequalities, the claimed \(Q\)-linear convergence \eqref{eq:cyclic:Qlinear} with factor \(c\) as in \eqref{eq:cyclic:cwc} is obtained. In turn, the \(R\)-linear rates \eqref{eq:cyclic:Rlinear1} and \eqref{eq:cyclic:Rlinear2} follow from \Cref{thm:FBEineq}.

\item\emph{The shuffled cyclic case.}~
Let us now suppose that the sampling strategy follows a shuffled rule as in \eqref{eq:ShufCyclicRule} with permutations \(\pi_0,\pi_1,\dots\) (hence in the cyclic case \(\pi_\nu=\id\) for all \(\nu\in\N\)). Let \(U_i\) be as in \eqref{eq:U} and \(\xi_{\rm min}\) as in the proof of \Cref{thm:cyclic:subseq}. Observe that \(\ki=\pi_\nu^{-1}(i)\leq N\) for \(\ki\) as defined in \eqref{eq:ki}. For all \(t\in[N]\),
\begin{align*}
\Phi_\Gamma^{\text{\sc fb}}(\bm x^{N(\nu+1)})
-\Phi_\Gamma^{\text{\sc fb}}(\bm x^{N\nu})
{}\leq{} &
\Phi_\Gamma^{\text{\sc fb}}(\bm x^{N\nu+t-1})
-\Phi_\Gamma^{\text{\sc fb}}(\bm x^{N\nu})
\\
{}\leq{} &
-\tfrac{\xi_{\rm min}}{2}\sum_{\tau=1}^{t-1} \|\bm x^{N\nu+\tau}-\bm x^{N\nu+\tau-1}\|^2_{\Gamma^{-1}}
\\
\numberthis\label{eq:tighterNoTri}
{}={} &
-\tfrac{\xi_{\rm min}}{2}\|\bm x^{N\nu+t-1}-\bm x^{N\nu}\|^2_{\Gamma^{-1}},
\end{align*}
where the equality follows from the fact that at every iteration a different coordinate is updated (and that \(\Gamma\) is diagonal), and the inequalities from \Cref{thm:Igeq}. Similarly, \eqref{eq:descent_esscyc} holds with \(T\) replaced by \(N\) (despite the fact that \(T\) is not necessarily equal to \(N\), but is rather bounded as \(T\leq 2N-1\)). By using \eqref{eq:tighterNoTri} in place of \eqref{eq:new25}, inequality \eqref{eq:sqrtbound} is tightened as follows:
\[
\gamma_i^{-\nicefrac12} \|\trans{U_i}(\bm x^{N\nu}-\operatorname T_\Gamma^{\text{\sc fb}}(\bm x^{N\nu}))\|
{}\leq{}
\tfrac{1+L_{\bf T}}{\sqrt{\xi_{\rm min}/2}}
\left(
\Phi_\Gamma^{\text{\sc fb}}(\bm x^{N\nu})
{}-{}
\Phi_\Gamma^{\text{\sc fb}}(\bm x^{N(\nu+1)})
\right)^{\nicefrac12}.
\]
By squaring and summing for \(i\in[N]\) we obtain
\begin{equation}\label{eq:cyclic_descent}
\Phi_\Gamma^{\text{\sc fb}}(\bm x^{N(\nu+1)})-\Phi_\Gamma^{\text{\sc fb}}(\bm x^{N\nu})
{}\leq{}
-\tfrac{\xi_{\rm min}}{2N(1+L_{\bf T})^2} \|\bm z^{N\nu}-\bm x^{N\nu}\|^2_{\Gamma^{-1}}
{}={}
-\tfrac{1-\Delta}{2N(1+L_{\bf T})^2} \|\bm z^{N\nu}-\bm x^{N\nu}\|^2_{\Gamma^{-1}},
\end{equation}
where \(L_{\bf T}=1-\delta\) as discussed above. By combining this and \eqref{eq:strongLB} (with \(T\) replaced by \(N\)), the improved coefficient \eqref{eq:linearShuffledCyclic} is obtained.
\qedhere
\end{proofitemize}
\end{proof}
\end{thm}

Note that if one sets \(\gamma_i = \alpha N/L_{f_i}\) for some \(\alpha\in(0,1)\), then \(\delta = \alpha\min_{i\in[N]} \set{\nicefrac{\mu_{f_i}}{L_{f_i}}}\) and \(\Delta=\alpha\). With this selection, as the condition numbers approach \(1\) the rate in \eqref{eq:linearShuffledCyclic} tends to \(1-\frac{\alpha}{N\left(2-\alpha\right)^{2}}\).

\subsection{Global and linear convergence with KL inequality}

The convergence analyses of the randomized and essentially cyclic cases both rely on a descent property of the FBE that quantifies the progress in the minimization of \(\Phi_\Gamma^{\text{\sc fb}}\) in terms of the squared forward-backward residual \(\|\bm x-\bm z\|^2\). A subtle but important difference, however, is that the inequality \eqref{eq:ExSD} in the former case involves a conditional expectation, whereas \eqref{eq:Essential_cyclic_descent} in the latter does not. The \emph{sure} descent property occurring for essentially cyclic sampling strategies is the key for establishing global (as opposed to subsequential) convergence based on the Kurdyka-\L ojasiewicz (KL) property \cite{lojasiewicz1963propriete,lojasiewicz1993geometrie,kurdyka1998gradients}. A similar result is achieved in \cite{xu2017globally}, which however considers the complementary case to problem \eqref{eq:P} where the nonsmooth function \(G\) is assumed to be separable, and thus the cost function itself can serve as a Lyapunov function.

\begin{defin}[KL property with exponent \(\theta\)]\label{def:KL}
A proper lsc function \(\func{h}{\R^n}{\Rinf}\) is said to have the \DEF{Kurdyka-{\L}ojasiewicz} (KL) property with exponent \(\theta\in(0,1)\) at \(\bar w\in\dom h\) if there exist \(\varepsilon,\eta,\varrho>0\) such that
\[
\psi'(h(w)-h(\bar w))\dist(0,\partial h(w))\geq 1
\]
holds for all \(w\) such that \(\|w-\bar w\|<\varepsilon\) and \(h(\bar w)<h(w)<h(\bar w)+\eta\), where \(\psi(s)\coloneqq\varrho s^{1-\theta}\). We say that \(h\) satisfies the KL property with exponent \(\theta\) (without mention of \(\bar w\)) if it satisfies the KL property with exponent \(\theta\) at any \(\bar w\in\dom\partial h\).
\end{defin}

Semialgebraic functions comprise a wide class of functions that enjoy this property \cite{bolte2007clarke,bolte2007lojasiewicz}, which has been extensively exploited to provide convergence rates of optimization algorithms \cite{attouch2009convergence,attouch2010proximal,attouch2013convergence,bolte2014proximal,frankel2015splitting,ochs2014ipiano,li2016douglas,xu2013block}. Based on this, in the next result we provide sufficient conditions ensuring global and \(R\)-linear convergence of \Cref{alg:BC} with essentially cyclic sampling.

\begin{thm}[essentially cyclic sampling: global and linear convergence]\label{thm:cyclic:global}
Additionally to \Cref{ass:basic,ass:cyclic}, suppose that \(\Phi\) has the KL property with exponent \(\theta\in(0,1)\) (as is the case when the \(f_i\) and \(G\) are semialgebraic) and is coercive. Then, any sequences \(\seq{\bm x^k}\) and \(\seq{\bm z^k}\) generated by \Cref{alg:BC} converge to (the same) stationary point \(\bm x^\star\). Moreover, if \(\theta\leq\nicefrac12\), then \(\seq{\|\bm z^k-\bm x^k\|}\), \(\seq{\bm x^k}\) and \(\seq{\bm z^k}\) converge at an \(R\)-linear rate.
\begin{proof}
Let \(\seq{\bm x^k}\) and \(\seq{\bm z^k}\) be sequences generated by \Cref{alg:BC} with essentially cyclic sampling, and let \(\Phi_\star\) be the limit of the sequence \(\seq{\Phi_\Gamma^{\text{\sc fb}}(\bm x^k)}\) as in \Cref{thm:decrease}. To avoid trivialities, we may assume that \(\Phi_\Gamma^{\text{\sc fb}}(\bm x^k)\gneqq\Phi_\star\) for all \(k\), for otherwise the sequence \(\seq{\bm x^k}\) is asymptotically constant, and thus so is \(\seq{\bm z^k}\). Let \(\Omega\) be the set of accumulation points of \(\seq{\bm x^k}\), which is compact and such that \(\Phi_\Gamma^{\text{\sc fb}}\equiv\Phi_\star\) on \(\Omega\), as ensured by \Cref{thm:cyclic:subseq}. It follows from \Cref{thm:loja} and \cite[Lem. 1(ii)]{attouch2009convergence} that \(\Phi_\Gamma^{\text{\sc fb}}\) enjoys a \emph{uniform} KL property on \(\Omega\); in particular,
\( \psi'(\Phi_\Gamma^{\text{\sc fb}}(\bm x^k)-\Phi_\star)\dist(0,\partial\Phi_\Gamma^{\text{\sc fb}}(\bm x^k)) \geq 1 \)
holds for all \(k\) large enough such that \(\bm x^k\) is sufficiently close to \(\Omega\) and \(\Phi_\Gamma^{\text{\sc fb}}(\bm x^k)\) is sufficiently close to \(\Phi_\star\), where \(\psi(s)=\varrho s^{1-\theta'}\) for some \(\varrho>0\) and \(\theta'=\max\set{\theta,\nicefrac12}\). Combined with \Cref{thm:subdiffdist}, for all \(k\) large enough we thus have
\begin{equation}\label{eq:KL}
\psi'(\Phi_\Gamma^{\text{\sc fb}}(\bm x^k)-\Phi_\star)
{}\geq{}
\frac{c}{\|\bm x^k-\bm z^k\|_{\Gamma^{-1}}},
\end{equation}
where \( c \coloneqq \frac{ N\min_i\set{\sqrt{\gamma_i}} }{ N+\max_i\set{\gamma_iL_{f_i}} } > 0 \). Let \( \Delta_k\coloneqq\psi(\Phi_\Gamma^{\text{\sc fb}}(\bm x^k)-\Phi_\star) \). By combining \eqref{eq:KL} and \eqref{eq:Essential_cyclic_descent} we have that there exists a constant \(c'>0\) such that
\begin{equation}\label{eq:KLinequality}
\Delta_{(\nu+1)T}
{}-{}
\Delta_{\nu T}
{}\leq{}
\psi'(\Phi_\Gamma^{\text{\sc fb}}(\bm x^{\nu T})-\Phi_\star)
\left(\Phi_\Gamma^{\text{\sc fb}}(\bm x^{(\nu+1)T})-\Phi_\Gamma^{\text{\sc fb}}(\bm x^{\nu T})\right)
{}\leq{}
-c' \|\bm x^{\nu T}-\bm z^{\nu T}\|_{\Gamma^{-1}}
\end{equation}
holds for all \(\nu\in\N\) large enough (the first inequality uses concavity of \(\psi\)). By summing over \(\nu\), (sure) summability of the sequence \(\seq{\|\bm x^{\nu T}-\bm z^{\nu T}\|}[\nu\in\N]\) is obtained. By suitably shifting, for every \(t\in[T]\) the same can be said for the sequence \( \seq{\|\bm z^{T\nu+t}-\bm x^{T\nu+t}\|}[\nu\in\N] \), and since \(T\) is finite we conclude that the whole sequence \( \seq{\|\bm z^k-\bm x^k\|} \) is summable. Since \(\|\bm x^{k+1}-\bm x^k\|\leq\|\bm z^k-\bm x^k\|\), we conclude that \(\seq{\bm x^k}\) has finite length and is thus convergent (to a single point), and consequently so is \(\seq{\bm z^k}\).
Suppose now that \(\theta\leq\nicefrac12\), so that \(\psi(s)=\varrho\sqrt s\). Then,
\[
\|\bm x^{\nu T}-\bm z^{\nu T}\|_{\Gamma^{-1}}
{}\overrel[\geq]{\eqref{eq:KL}}{}
\tfrac{2c}{\varrho} \sqrt{\Phi_\Gamma^{\text{\sc fb}}(\bm x^{\nu T})-\Phi_\star}
{}={}
\tfrac{2c}{\varrho^2} \psi(\Phi_\Gamma^{\text{\sc fb}}(\bm x^{\nu T})-\Phi_\star)
{}={}
\tfrac{2c}{\varrho^2} \Delta_{\nu T}.
\]
Combined with \eqref{eq:KLinequality}, it follows that \(\seq{\Delta_{\nu T}}[\nu\in\N]\) converges \(Q\)-linearly. By rearranging \eqref{eq:KLinequality} as
\[
c'\|\bm x^{\nu T}-\bm z^{\nu T}\|_{\Gamma^{-1}}
{}\leq{}
\Delta_{\nu T}
{}-{}
\Delta_{(\nu+1)T}
{}\leq{}
\Delta_{\nu T},
\]
\(R\)-linear convergence of \(\seq{\|\bm x^{\nu T}-\bm z^{\nu T}\|}[\nu\in\N]\) follows. By suitably shifting, for every \(t\in[T]\) the same can be said for the sequence \( \seq{\|\bm z^{T\nu+t}-\bm x^{T\nu+t}\|}[\nu\in\N] \), and since \(T\) is finite we conclude that the whole sequence \( \seq{\|\bm z^k-\bm x^k\|} \) converges \(R\)-linearly. On the other hand, since \(\|\bm x^{k+1}-\bm x^k\|\leq\|\bm z^k-\bm x^k\|\), also \(\seq{\|\bm x^{k+1}-\bm x^k\|}\) converges \(R\)-linearly, hence so does \(\seq{\bm x^k}\). By combining the two, we conclude that \(\seq{\bm z^k}\) also converges \(R\)-linearly.
\end{proof}
\end{thm} \section{Nonconvex finite sum problems: the Finito/MISO algorithm}\label{sec:Finito} As mentioned in \Cref{sec:Introduction}, if \(G\) is of the form \eqref{eq:FINITOG} then problem \eqref{eq:P} reduces to the finite sum minimization presented in \eqref{eq:FSP}. Most importantly, the proximal mapping of the original nonsmooth function \(G\) can be easily expressed in terms of that of the small function \(g\) in the reduced finite sum reformulation, as shown in the next lemma. \begin{lem} Given \(\gamma_i>0\), \(i\in[N]\), let \( \Gamma {}\coloneqq{} \blockdiag(\gamma_1I_n,\dots,\gamma_NI_n) \) and \( \hat\gamma {}\coloneqq{} \bigl(\sum_{i=1}^N\gamma_i^{-1}\bigr)^{-1} \). Then, for \(G\) as in \eqref{eq:FINITOG} and any \(\bm u\in\R^{Nn}\) \[ \textstyle \prox_G^{\Gamma^{-1}}(\bm u) {}={} \set{(\hat v,\dots,\hat v)}[ \hat v {}\in{} \prox_{\hat\gamma g}(\hat u) ] \quad\text{where}\quad \hat u {}\coloneqq{} \hat\gamma \sum_{i=1}^N\gamma_i^{-1}u_i. \] \begin{proof} Observe first that for every \(w\in\R^n\) one has \begin{align*} \textstyle \sum_i\gamma_i^{-1}\|w-u_i\|^2 {}={} & \textstyle \sum_i\gamma_i^{-1}\|\hat u-u_i\|^2 {}+{} \sum_i\gamma_i^{-1}\|w-\hat u\|^2 {}+{} \smashoverbrace{ \textstyle 2\sum_i\gamma_i^{-1}\innprod{\hat u-u_i}{w-\hat u} }{ =0 } \\ \numberthis\label{eq:mean} {}={} & \textstyle \sum_i\gamma_i^{-1}\|\hat u-u_i\|^2 {}+{} \hat\gamma^{-1}\|w-\hat u\|^2. \end{align*} Next, observe that since \(\dom G\subseteq C\) (the consensus set), \begin{align*} \prox_G^{\Gamma^{-1}}(\bm u) {}={} & \argmin_{\bm w\in\R^{Nn}}\set{\textstyle G(\bm w)+\sum_{i=1}^N\tfrac{1}{2\gamma_i}\|w_i-u_i\|^2 } \\ {}={} & \argmin_{\bm w\in\R^{Nn}}\set{\textstyle G(\bm w)+\sum_{i=1}^N\tfrac{1}{2\gamma_i}\|w_i-u_i\|^2 }[ w_1=\dots=w_N ] \\ {}={} & \argmin_{(w,\dots,w)}\set{\textstyle g(w)+\sum_{i=1}^N\tfrac{1}{2\gamma_i}\|w-u_i\|^2 } \\ {}\overrel*{\eqref{eq:mean}}{} & \argmin_{(w,\dots,w)}\set{\textstyle g(w) {}+{} \tfrac{1}{2\hat\gamma}\|w-\hat u\|^2 } {}={} \set{(\hat v,\dots,\hat v)}[ \hat v\in\prox_{\hat\gamma g}(\hat u) ] \end{align*} as claimed. \end{proof}
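\end{lem}

A direct numerical transcription (ours) of this formula is sketched below; the choice \(g=\lambda\|{}\cdot{}\|_1\) is only for illustration, and any prox-friendly \(g\) would do.
\begin{verbatim}
import numpy as np

# Sketch (ours) of the consensus prox of the preceding lemma:
#   prox_G^{Gamma^{-1}}(u) = (v_hat, ..., v_hat),  v_hat in prox_{gamma_hat*g}(u_hat),
#   gamma_hat = (sum_i 1/gamma_i)^{-1},  u_hat = gamma_hat * sum_i u_i/gamma_i.
def prox_consensus_G(U, gamma, prox_g):
    """U has shape (N, n): the i-th row is the block u_i."""
    gamma_hat = 1.0 / np.sum(1.0 / gamma)
    u_hat = gamma_hat * np.sum(U / gamma[:, None], axis=0)   # weighted average
    v_hat = prox_g(u_hat, gamma_hat)
    return np.tile(v_hat, (U.shape[0], 1))                    # (v_hat, ..., v_hat)

# example: g = lam*||.||_1, whose prox is soft-thresholding
lam = 0.5
prox_g = lambda u, t: np.sign(u) * np.maximum(np.abs(u) - lam * t, 0.0)

rng = np.random.default_rng(1)
N, n = 3, 5
U = rng.normal(size=(N, n))
gamma = np.array([1.0, 2.0, 4.0])
V = prox_consensus_G(U, gamma, prox_g)
assert np.allclose(V, V[0])   # all blocks agree, as imposed by the consensus set
\end{verbatim}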
If all stepsizes are set to the same value \(\gamma\), so that \(\Gamma=\gamma\I_{Nn}\), then the forward-backward step reduces to
\begin{align*}
\bm z \in \prox_G^{\Gamma^{-1}}(\bm x-\Gamma\nabla F(\bm x))
\quad\Leftrightarrow\quad
& \bm z=(\bar z,\dots,\bar z),
\\
\numberthis\label{eq:FBFinito}
& \bar z \in \prox_{\gamma g\nicefrac{}N}\left( \textstyle \tfrac1N\sum_{j=1}^N\bigl( x_j-\tfrac\gamma N\nabla f_j(x_j) \bigr) \right).
\end{align*}
The argument of \(\prox_{\gamma g\nicefrac{}{N}}\) is the (unweighted) average of the forward operator. By applying \Cref{alg:BC} with \eqref{eq:FBFinito}, Finito/MISO \cite{defazio2014finito,mairal2015incremental} is recovered. Differently from the existing convergence analyses, ours covers fully nonconvex and nonsmooth problems, more general sampling strategies, and the possibility to select different stepsizes \(\gamma_i\) for each block, which can have a significant impact on the performance compared to the case where all stepsizes are equal. Moreover, to the best of our knowledge this is the first work that shows global convergence and linear rates even when the smooth functions are nonconvex. The resulting scheme is presented in \Cref{alg:Finito}; an illustrative code sketch is given at the end of this section. We remark that the consensus formulation to recover Finito/MISO (although from a different umbrella algorithm) was also observed in \cite{davis2016smart} in the convex case. Moreover, the Finito/MISO algorithm with cyclic sampling is also studied in \cite{mokhtari2018surpassing} when \(g\equiv0\) and the \(f_i\) are strongly convex functions; consistently with \Cref{ass:cyclic}, our analysis covers the more general essentially cyclic sampling even in the presence of a nonsmooth convex term \(g\), and allows the smooth functions \(f_i\) to be nonconvex. Randomized Finito/MISO with \(g\equiv 0\) is also studied in the recent work \cite{qian2019miso}; although their analysis is limited to a single stepsize, in the convex case it is allowed to be larger than our worst-case stepsize \(\min_i\gamma_i\).

\begin{algorithm}
\caption{Nonconvex proximal Finito/MISO for problem \eqref{eq:FSP} }
\label{alg:Finito}
\begin{algorithmic}[1]
\item[{\sc Require}] \( x^{\rm init}\in\R^n \),~ \(\gamma_i\in(0,\nicefrac{N}{L_{f_i}})\), {\small \(i\in[N]\)}
\Statex \( \hat\gamma \coloneqq \bigl(\sum_{i=1}^N\gamma_i^{-1}\bigr)^{-1} \),~~ \( s_i = x^{\rm init}-\frac{\gamma_i}{N}\nabla f_i(x^{\rm init}) \)~ \(i\in[N]\),~~ \( \hat s = {\hat\gamma}\sum_{i=1}^N\gamma_i^{-1}s_i \)
\item[{\sc Repeat} until convergence]
\State select a set of indices \(I\subseteq[N]\)
\State \( z \in \prox_{\hat\gamma g}(\hat s) \)
\For{ \(i\in I\) }
\State \( v \gets z-\frac{\gamma_i}{N}\nabla f_i(z) \)
\State update~~ \( \hat s \gets \hat s+\tfrac{\hat\gamma}{\gamma_i}(v-s_i) \) ~~and~~ \( s_i\gets v \)
\EndFor
\item[{\sc Return} $z$ ]
\end{algorithmic}
\end{algorithm}

The convergence results from \Cref{sec:convergence} are immediately translated to this setting by noting that the bold variable \(\bm z^k\) corresponds to \((z^k,\dots,z^k)\). Therefore, \(\Phi(\bm z^k)=\varphi(z^k)\), where \(\varphi\) is the cost function of the finite sum problem.

\begin{cor}[subsequential convergence of \Cref{alg:Finito}]\label{thm:Finito:convergence}
In the finite sum problem \eqref{eq:FSP} suppose that \(\argmin\varphi\) is nonempty, \(g\) is proper and lsc, and each \(f_i\) is \(L_{f_i}\)-Lipschitz differentiable, \(i\in[N]\). Then, the following hold almost surely (resp. surely) for the sequence \(\seq{z^k}\) generated by \Cref{alg:Finito} with a randomized sampling strategy as in \Cref{ass:random} (resp. with any essentially cyclic sampling strategy and \(g\) convex as required in \Cref{ass:cyclic}):
\begin{enumerate}
\item the sequence \(\seq{\varphi(z^k)}\) converges to a finite value \(\varphi_\star\leq\varphi(x^{\rm init})\);
\item all cluster points of the sequence \(\seq{z^k}\) are stationary and on which \(\varphi\) equals \(\varphi_\star\).
\end{enumerate}
If, additionally, \(\varphi\) is coercive, then the following also holds:
\begin{enumerate}[resume]
\item \(\seq{z^k}\) is bounded (in fact, this holds surely for arbitrary sampling criteria).
\end{enumerate}
\end{cor}

\begin{cor}[linear convergence of \Cref{alg:Finito} under strong convexity]
Additionally to the assumptions of \Cref{thm:Finito:convergence}, suppose that \(g\) is convex and that each \(f_i\) is \(\mu_{f_i}\)-strongly convex. The following hold for the iterates generated by \Cref{alg:Finito}:
\begin{itemize}[leftmargin=*,label={},itemindent=-0.5cm,labelsep=0pt,partopsep=0pt,parsep=0pt,listparindent=0pt,topsep=0pt]
\item{\sc Randomized sampling:} under \Cref{ass:random},
\begin{align*}
\mathbb E\bigl[\varphi(z^k)-\min\varphi\bigr]
{}\leq{} &
(\varphi(x^{\rm init})-\min\varphi) (1-c)^k
\\
\tfrac12\mathbb E\bigl[\|z^k-x^\star\|^2\bigr]
{}\leq{} &
\frac{ N(\varphi(x^{\rm init})-\min\varphi) }{ \sum_i\mu_{f_i} } (1-c)^k
\end{align*}
holds for all \(k\in\N\), where \(c\) is as in \eqref{eq:cwc} and \(x^\star\coloneqq\argmin\varphi\). If the stepsizes \(\gamma_i\) and the sampling probabilities \(p_i\) are set as in \Cref{thm:random:linear}, then the tighter constant \(c\) as in \eqref{eq:cbc} is obtained.
\item{\sc Shuffled cyclic or cyclic sampling:} under either sampling strategy \eqref{eq:ShufCyclicRule} or \eqref{eq:cyclicRule},
\begin{align*}
\varphi(z^{\nu N})-\min\varphi
{}\leq{} &
(\varphi(x^{\rm init})-\min\varphi) (1-c)^\nu
\\
\tfrac12\|z^{\nu N}-x^\star\|^2
{}\leq{} &
\frac{ N(\varphi(x^{\rm init})-\min\varphi) }{ \sum_i\mu_{f_i} } (1-c)^\nu
\end{align*}
holds surely for all \(\nu\in\N\), where \(c\) is as in \eqref{eq:linearShuffledCyclic}.
\end{itemize}
\end{cor}

The next result follows from \Cref{thm:cyclic:global} once the needed properties of \(\Phi\) as in the umbrella formulation \eqref{eq:P} are shown to hold.

\begin{cor}[global convergence of \Cref{alg:Finito}]\label{thm:Finito:global}
In the finite sum problem \eqref{eq:FSP}, suppose that \(\varphi\) has the KL property with exponent \(\theta\in(0,1)\) (as is the case when the \(f_i\) and \(g\) are semialgebraic) and is coercive, \(g\) is proper convex and lsc, and each \(f_i\) is \(L_{f_i}\)-Lipschitz differentiable, \(i\in[N]\). Then the sequence \(\seq{z^k}\) generated by \Cref{alg:Finito} with any essentially cyclic sampling strategy as in \Cref{ass:cyclic} converges surely to a stationary point of \(\varphi\). Moreover, if \(\theta\leq\nicefrac12\), then it converges at an \(R\)-linear rate.
\begin{proof}
The function \(\Phi=F+G\) with \(G\) as in \eqref{eq:FINITOG} is clearly coercive and satisfies \Cref{ass:basic}. In order to invoke \Cref{thm:cyclic:global} it suffices to show that there exists a constant \(c>0\) such that
\begin{equation}\label{eq:Dist}
\dist(0,\partial\Phi(\bm x))
{}\geq{}
c\dist(0,\partial\varphi(x))
\quad
\text{for all \(x\in\R^n\) and \(\bm x=(x,\dots,x)\),}
\end{equation}
as this will ensure that \(\Phi\) enjoys the KL property at \(\bm x^\star=(x^\star,\dots,x^\star)\) with the same desingularizing function (up to a positive scaling). Notice that for \(x\in\R^n\) and \(\bm x=(x,\dots,x)\), one has \( \bm v\in\partial G(\bm x) \) iff \( \frac1N\sum_{i=1}^Nv_i \in \partial g(x) \). Since \( \partial\Phi(\bm x) = \tfrac1N\mathop\times_{i=1}^N\nabla f_i(x_i)+\partial G(\bm x) \) and \( \partial\varphi(x) = \tfrac1N\sum_{i=1}^N\nabla f_i(x) + \partial g(x) \), see \cite[Ex. 8.8(c) and Prop. 10.5]{rockafellar2011variational}, for \(x\in\R^n\) and denoting \(\bm x=(x,\dots,x)\) we have
\begin{align*}
\dist(0,\partial\varphi(x))
{}\leq{} &
\inf_{\bm v\in\partial G(\bm x)}{ \left\|\textstyle \tfrac1N\sum_{i=1}^N\nabla f_i(x) {}+{} \tfrac1N\sum_{i=1}^Nv_i \right\| }
\\
{}\leq{} &
\tfrac1N\inf_{\bm v\in\partial G(\bm x)}{ \textstyle \sum_{i=1}^N\|\nabla f_i(x)+v_i\| }
{}={}
\tfrac1N\inf_{\bm u\in\partial\Phi(\bm x)}{ \newnorm{\bm u} },
\end{align*}
where \(\newnorm{{}\cdot{}}\) is the norm on \(\R^{Nn}\) given by \( \newnorm{\bm w}=\sum_{i=1}^N\|w_i\| \). Inequality \eqref{eq:Dist} then follows by observing that \( \inf_{\bm u\in\partial\Phi(\bm x)}{ \newnorm{\bm u} } \) is the distance of \(0\) from \(\partial\Phi(\bm x)\) in the norm \(\newnorm{{}\cdot{}}\), hence that \(\newnorm{{}\cdot{}}\leq c'\|{}\cdot{}\|\) for some \(c'>0\).
\end{proof}
\end{cor}
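For concreteness, the following Python sketch (ours, not the authors' implementation) runs \Cref{alg:Finito} with user-supplied \(\nabla f_i\), \(\prox_{\hat\gamma g}\) and sampler, on a made-up least-squares-plus-\(\ell_1\) toy instance; the nonconvex setting covered by the theory is admitted as well, the convex toy is only chosen to keep the example short.
\begin{verbatim}
import numpy as np

# Sketch (ours) of proximal Finito/MISO (Algorithm 2) for
#   min_x (1/N) sum_i f_i(x) + g(x).
def finito_miso(x_init, grad_f, prox_g, gamma, sampler, num_iters=1000):
    N = len(grad_f)
    gamma_hat = 1.0 / np.sum(1.0 / gamma)
    # s_i = x_init - (gamma_i/N) grad f_i(x_init),  s_hat = gamma_hat * sum_i s_i/gamma_i
    s = np.stack([x_init - (gamma[i] / N) * grad_f[i](x_init) for i in range(N)])
    s_hat = gamma_hat * np.sum(s / gamma[:, None], axis=0)
    for k in range(num_iters):
        z = prox_g(s_hat, gamma_hat)
        for i in sampler(k):
            v = z - (gamma[i] / N) * grad_f[i](z)
            s_hat = s_hat + (gamma_hat / gamma[i]) * (v - s[i])   # incremental update
            s[i] = v
    return prox_g(s_hat, gamma_hat)

# toy usage: f_i(x) = 0.5*||A_i x - b_i||^2,  g = 0.1*||x||_1
rng = np.random.default_rng(0)
N, n = 5, 20
A = [rng.normal(size=(30, n)) for _ in range(N)]
b = [rng.normal(size=30) for _ in range(N)]
grad_f = [lambda x, A=A[i], b=b[i]: A.T @ (A @ x - b) for i in range(N)]
L = np.array([np.linalg.norm(A[i], 2) ** 2 for i in range(N)])   # L_{f_i}
gamma = 0.9 * N / L                                              # gamma_i < N/L_{f_i}
prox_g = lambda u, t: np.sign(u) * np.maximum(np.abs(u) - 0.1 * t, 0.0)
sampler = lambda k: [rng.integers(N)]                            # randomized, p_i = 1/N
z_final = finito_miso(np.zeros(n), grad_f, prox_g, gamma, sampler)
\end{verbatim}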
\end{cor} \section{Nonconvex sharing problem}\label{sec:Sharing} In this section we consider the sharing problem \eqref{eq:SP}. As discussed in \Cref{sec:Introduction}, \eqref{eq:SP} fits into the problem framework \eqref{eq:P} by simply letting \(G\coloneqq g \circ A\), where \(A\coloneqq[\I_n~\dots~\I_n]\in\R^{n\times nN}\). By arguing as in \cite[Th. 6.15]{beck2017first} it can be shown that, when \(A\) has full row rank, the proximal mapping of $G=g\circ A$ is given by \begin{equation}\label{eq:sharingprox} \prox_G^{\Gamma^{-1}}(\bm u) {}={} \bm u+\Gamma\trans A(A\Gamma\trans A\,)^{-1}\left(\prox^{(A\Gamma\trans A\,)^{-1}}_g\left(A\bm u\right)-A\bm u\right). \end{equation} Since \(A\Gamma\trans A=\sum_{i=1}^N\gamma_i\) for the sharing problem \eqref{eq:SP}, \begin{align*} \bm v {}\in{} \prox_G^{\Gamma^{-1}}(\bm u) ~~\Leftrightarrow~~ & \bm v {}={} (u_1+\gamma_1w,\dots,u_N+\gamma_Nw) \\ & \textstyle w {}\in{} \tilde\gamma^{-1}\left(\prox_{\tilde{\gamma}g}(\tilde u)-\tilde u\right), ~~ \tilde\gamma\coloneqq\sum_{i=1}^N\gamma_i, ~~ \tilde u\coloneqq \sum_{i=1}^Nu_i. \end{align*} Consequently general BC \Cref{alg:BC} when applied to the sharing problem \eqref{eq:SP} reduces to \Cref{alg:Sharing}. \begin{algorithm} \caption{Block-coordinate method for nonconvex sharing problem \eqref{eq:SP}} \label{alg:Sharing} \begin{algorithmic}[1] \item[{\sc Require}] \( x_i^{\rm init}\in\R^{n} \),~ \(\gamma_i\in(0,\nicefrac{N}{L_{f_i}})\), {\small \(i\in [N]\)} \Statex \( \tilde\gamma {}\coloneqq{} \sum_{i=1}^N\gamma_i \),~~ \( s_i {}={} x_i^{\rm init}-\frac{\gamma_i}{N}\nabla f_i(x_i^{\rm init}) \)~ \(i\in [N]\),~~\( \tilde s {}={} \sum_{i=1}^N s_i \) \item[{\sc Repeat} until convergence] \State select a set of indices \(I\subseteq[N]\) \State $w \gets \tilde{\gamma}^{-1}(\prox_{\tilde{\gamma}g}(\tilde s)-\tilde s)$ \For{ \(i\in I\) } \State \( v_i {}\gets{} s_i + \gamma_i w - \tfrac{\gamma_i}{N} \nabla f_i(s_i + \gamma_i w ) \) \State update~~ \( \tilde s {}\gets{} \tilde s+(v_i-s_i) \) ~~and~~ \( s_i \gets v_i \) \@ifstar\@@E\@EndFor \item[{\sc Return}] $\bm z=(s_1 + \gamma_1 w ,\dots,s_N + \gamma_N w)$ with $w\in\tilde{\gamma}^{-1}(\prox_{\tilde{\gamma}g}(\tilde s)-\tilde s)$ \end{algorithmic} \end{algorithm} \begin{rem}[generalized sharing constraint] Another notable instance of $G=g\circ A$ well suited for the BC framework of \Cref{alg:BC} is when \(g=\indicator_{\set0}\) and \(A=[A_1~\dots~A_N]\), \(A_i\in\R^{n\times n_i}\) such that $A$ is full rank. This models the generalized sharing problem \[ \minimize_{\bm x\in\R^{\sum_in_i}}{\textstyle \tfrac1N\sum_{i=1}^Nf_i(x_i) } \quad\stt{}\textstyle \sum_{i=1}^NA_ix_i=0. \] In this case \eqref{eq:sharingprox} simplifies to \[ \left(\prox_G^{\Gamma^{-1}}(\bm u)\right)_i {}={} u_i-\gamma_i\trans{A_i}\mathcal A^{-1}\sum_{i=1}^NA_iu_i, \] where $\mathcal A\coloneqq A\Gamma\trans A$ can be factored offline and \(\sum_{i=1}^NA_ix_i\) can be updated in an incremental fashion in the same spirit of \Cref{alg:Sharing}. \end{rem} The convergence results for \Cref{alg:Sharing} summarized below fall as special cases of those in \Cref{sec:convergence}. \begin{cor}[convergence of \Cref{alg:Sharing}]\label{thm:sharing:convergence} In the sharing problem \eqref{eq:SP}, suppose that \(\argmin\@ifstar\@@P\@Phi\) is nonempty, \(g\) is proper and lsc, and each \(f_i\) is \(L_{f_i}\)-Lipschitz differentiable, \(i\in[N]\). 
Consider the sequences $\seq{w^k}$ and $\seq{\bm s^k}$ generated by \Cref{alg:Sharing} and let $\seq{\bm z^k}=\seq{s_1^k + \gamma_1 w^k ,\dots,s_N^k + \gamma_N w^k}$. Then, the following hold almost surely (resp. surely) with randomized sampling strategy as in \Cref{ass:random} (resp. with any essentially cyclic sampling strategy and $g$ convex as required in \Cref{ass:cyclic}): \begin{enumerate} \item the sequence \(\seq{\@ifstar\@@P\@Phi(\bm z^k)}\) converges to a finite value \(\@ifstar\@@P\@Phi_\star\leq\@ifstar\@@P\@Phi(\bm x^{\rm init})\); \item all cluster points of the sequence \(\seq{\bm z^k}\) are stationary and on which \(\@ifstar\@@P\@Phi\) equals \(\@ifstar\@@P\@Phi_\star\). \end{enumerate} If, additionally, \(\@ifstar\@@P\@Phi\) is coercive, then the following also hold: \begin{enumerate}[resume] \item \(\seq{\bm z^k}\) is bounded (in fact, this holds surely for arbitrary sampling criteria). \end{enumerate} \end{cor} \begin{cor}[linear convergence of \Cref{alg:Sharing} under strong convexity]\label{cor:RLinSharing} Additionally to the assumptions of \Cref{thm:sharing:convergence}, suppose that \(g\) is convex and that each \(f_i\) is \(\mu_{f_i}\)-strongly convex. The following hold: \begin{itemize}[leftmargin=*,label={},itemindent=-0.5cm,labelsep=0pt,partopsep=0pt,parsep=0pt,listparindent=0pt,topsep=0pt] \item{\sc Randomized sampling:} under \Cref{ass:random}, \begin{align*} \@ifstar\@@E\@E[]{\@ifstar\@@P\@Phi(\bm z^k)-\min\@ifstar\@@P\@Phi} {}\leq{} & \bigl(\@ifstar\@@P\@Phi(\bm x^{\rm init})-\min\@ifstar\@@P\@Phi\bigr)(1-c)^k \\ \tfrac12\@ifstar\@@E\@E[]{\|\bm z^k-\bm x^\star\|^2_{\mu_F}} {}\leq{} & \bigl(\@ifstar\@@P\@Phi(\bm x^{\rm init})-\min\@ifstar\@@P\@Phi\bigr)(1-c)^k \end{align*} holds for all \(k\in\N\), where \(\bm x^\star\coloneqq\argmin\@ifstar\@@P\@Phi\), \( \mu_F {}\coloneqq{} \tfrac1N \blockdiag\bigl(\mu_{f_1}\I_{n_1},\dots\mu_{f_n}\I_{n_N}\bigr) \), and \(c\) is as in \eqref{eq:cwc}. If the stepsizes \(\gamma_i\) and the sampling probabilities \(p_i\) are set as in \Cref{thm:random:linear}, then the tighter constant \(c\) as in \eqref{eq:cbc} is obtained. \item{\sc Shuffled cyclic or cyclic sampling:} under either sampling strategy \eqref{eq:ShufCyclicRule} or \eqref{eq:cyclicRule}, \begin{align*} \@ifstar\@@P\@Phi(\bm z^{N\nu})-\min\@ifstar\@@P\@Phi {}\leq{} & \bigl(\@ifstar\@@P\@Phi(\bm x^{\rm init})-\min\@ifstar\@@P\@Phi\bigr)(1-c)^\nu \\ \tfrac12\|\bm z^{N\nu}-\bm x^\star\|^2_{\mu_F} {}\leq{} & \bigl(\@ifstar\@@P\@Phi(\bm x^{\rm init})-\min\@ifstar\@@P\@Phi\bigr)(1-c)^\nu \end{align*} holds surely for all \(\nu\in\N\), where \(c\) is as in \eqref{eq:linearShuffledCyclic}. \end{itemize} \end{cor} We conclude with an immediate consequence of \Cref{thm:cyclic:global} that shows that (strong) convexity is in fact not necessary for global or linear convergence to hold. \begin{cor}[global and linear convergence of \Cref{alg:Sharing}]\label{thm:Sharing:global} In problem \eqref{eq:SP}, suppose that \(\@ifstar\@@P\@Phi\) has the KL property with exponent \(\theta\in(0,1)\) (as is the case when \(g\) and \(f_i\) are semialgebraic) and is coercive, \(g\) is proper convex lsc, and each \(f_i\) is \(L_{f_i}\)-Lipschitz differentiable, \(i\in[N]\). Then the sequence $\seq{\bm z^k}$ as defined in \Cref{thm:sharing:convergence} with any essentially cyclic sampling strategy as in \Cref{ass:cyclic} converges surely to a stationary point for \(\@ifstar\@@P\@Phi\). Moreover, if \(\theta\leq\nicefrac12\) it converges with \(R\)-linear rate. 
\end{cor}
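For illustration only, the following minimal Python/NumPy sketch shows how \Cref{alg:Sharing} may be implemented; it is not part of the formal development. The handles \texttt{grad\_f} (a list containing the gradients \(\nabla f_i\)), \texttt{prox\_g} (with \texttt{prox\_g(u, t)} returning \(\prox_{tg}(u)\)), the randomized single-index sampling and the fixed number of epochs are illustrative choices; any sampling strategy covered by \Cref{thm:sharing:convergence} could be used instead.
\begin{verbatim}
import numpy as np

def bc_sharing(grad_f, prox_g, x_init, gamma, epochs=100, seed=0):
    # Minimal sketch of the block-coordinate method for the sharing
    # problem: minimize (1/N) sum_i f_i(x_i) + g(sum_i x_i).
    rng = np.random.default_rng(seed)
    N = len(grad_f)
    gamma = np.asarray(gamma, dtype=float)        # gamma_i in (0, N/L_{f_i})
    gamma_tot = gamma.sum()
    # s_i = x_i^init - (gamma_i/N) * grad f_i(x_i^init),  s_tot = sum_i s_i
    s = [x_init[i] - (gamma[i] / N) * grad_f[i](x_init[i]) for i in range(N)]
    s_tot = sum(s)
    for _ in range(epochs * N):
        i = int(rng.integers(N))                  # randomized single-block sampling
        w = (prox_g(s_tot, gamma_tot) - s_tot) / gamma_tot
        y = s[i] + gamma[i] * w                   # forward point for block i
        v = y - (gamma[i] / N) * grad_f[i](y)
        s_tot = s_tot + (v - s[i])                # incremental update of the sum
        s[i] = v
    w = (prox_g(s_tot, gamma_tot) - s_tot) / gamma_tot
    return [s[i] + gamma[i] * w for i in range(N)]
\end{verbatim}
For instance, with \(g=\lambda\|{}\cdot{}\|_1\) the handle \texttt{prox\_g(u, t)} is componentwise soft-thresholding of \(u\) at level \(t\lambda\), so that each iteration only requires one gradient evaluation and one thresholding of the running sum.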
\ifaccel \section{Accelerated block-coordinate proximal gradient} The work \cite{allen2016even} introduced a coordinate descent method for smooth convex minimization, in which each coordinate is randomly sampled according to an ad hoc probability distribution that provably leads to a remarkable speed up with respect to uniform sampling strategies. The unified analysis of BC-algorithms and the analytical tool introduced in this paper, the forward backward envelope function, allow the extention of this approach to nonsmooth convex minimization of the form \eqref{eq:P}, where functions \(f_i\) are convex quadratic and \(G\) is convex but possibly nonsmooth: \begin{ass}[requirements for the fast BC-\Cref{alg:Fast}]\label{ass:Fast} In problem \eqref{eq:P}, \(\func{G}{\R^{\sum_in_i}}{\Rinf}\) is proper convex and lsc, and \(f_i(x_i)\coloneqq\tfrac12\trans{x_i}H_ix_i+\trans{q_i}x_i\) is convex quadratic, with \(L_{f_i}\coloneqq\lambda_{\rm max}(H_i)\) and \(\mu_{f_i}\coloneqq\lambda_{\rm min}(H_i)\geq0\), \(i\in[N]\). \end{ass} Let $U_i\in \R^{{\sum_{i=1}^N n_i}\times n_i}$ denote the $i$-th block column of the identity matrix so that for a vector $v\in \R^{n_i}$ \begin{equation}\label{eq:U} U_iv= (0,\dots, 0, \!\overbracket{\,v\,}^{\mathclap{i\text{-th}}}\!, 0, \dots, 0). \end{equation} The accelerated BC scheme based on \cite{allen2016even} (for both strongly convex and convex cases) is given in \Cref{alg:Fast}. Similarly to the approach of \cite{patrinos2014douglas} where an accelerated Douglas-Rachford algorithm is proposed, in order to derive \Cref{alg:Fast} we consider the scaled problem \( \minimize_{\tilde{\bm x}} \@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}C(\tilde{\bm x})\) where $\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}C \coloneqq \@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}\circ Q^{-1/2}$, and $Q$ is the symmetric positive definite matrix \begin{equation} \label{eq:QQ} Q {}\coloneqq{} \blockdiag(Q_1,\dots,Q_N)\succ0 \quad \text{with } Q_i {}\coloneqq{} \gamma_i^{-1}\I-\tfrac{1}{N}H_i\in\R^{n_i\times n_i},~i\in[N]. \end{equation} As detailed in \Cref{thm:convex}, whenever \Cref{ass:Fast} is satisfied $\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}C$ is a convex Lipschitz-differentiable function, and its gradient is given by $\nabla \@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}C(\tilde{\bm x}) = Q^{1/2}(\bm x-\prox_G^{\Gamma^{-1}}(\bm x-\Gamma\nabla F(\bm x)))$ where $\bm x=Q^{-1/2}\tilde{\bm x}$. Note that, based on \Cref{thm:convex}, \(\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}C\) is \(1\)-smooth along the \(i\)-th block (in the notation of \cite{allen2016even}, \(L_i=1\), $S_\alpha=N$, and \(p_i=\nicefrac1N\)). Hence the parameters of the algorithm simplify substantially resulting in uniform sampling. Moreover, when functions \(f_i\) are \(\mu_{f_i}\)-strongly convex, by \Cref{thm:convex} \(\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}C\) is $\sigma$-strongly convex with $\sigma= \frac{1}{N} \min_{i\in [N]} \{\gamma_i\mu_{f_i}\}$. \Cref{alg:Fast} is obtained by applying the fast BC to this problem and scaling the variables by $Q^{-1/2}$. 
Specifically, the update rule as in \cite{allen2016even} reads \[ \begin{cases}[ r @{{}={}} l ] \tilde{\bm x}^+ & \tau\tilde{\bm w}+(1-\tau)\tilde{\bm y} \\ \tilde{\bm y}^+ & \tilde{\bm x}-U_i\trans{U_i}\nabla\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}C(\tilde{\bm x}^+) {}={} \tilde{\bm x}-U_iQ_i^{\nicefrac12}(x_i^+-z_i^+) \\ \tilde{\bm w}^+ & \tfrac{1}{1+\eta\sigma}(\tilde{\bm w}+\eta\sigma\tilde{\bm x}^+-N\eta U_i\trans{U_i}\nabla\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}C(\tilde{\bm x}^+)) {}={} \tfrac{1}{1+\eta\sigma}(\tilde{\bm w}+\eta\sigma\tilde{\bm x}^+-N\eta U_iQ_i^{\nicefrac12}(x_i^+-z_i^+)), \end{cases} \] where \(\bm z^+=\prox_G^{\Gamma^{-1}}(\bm x^+-\Gamma\nabla F(\bm x^+))\). Since \(Q^{-\nicefrac12}U_iQ_i^{\nicefrac12}=U_i\), premultiplying by \(Q^{-\nicefrac12}\) yields \[ \begin{cases}[ r @{{}={}} l ] \bm x^+ & \tau\bm z+(1-\tau)\bm y \\ \bm z^+ & \prox_G^{\Gamma^{-1}}(\bm x^+-\Gamma\nabla F(\bm x^+)) \\ \bm y^+ & \bm x+U_i(z_i^+-x_i^+) \\ \bm w^+ & \tfrac{1}{1+\eta\sigma}(\bm w+\eta\sigma\bm x^++N\eta U_i(z_i^+-x_i^+)). \end{cases} \] For computational efficiency, vectors $\Gamma\nabla F(\bm x^k)$ and $\Gamma\nabla F(\bm w^k)$ are stored in variables $\bm r^k$ and $\bm v^k$ and updated recursively using the fact that gradients are affine, in such a way that each iteration requires only the evaluation of the sampled gradient (see \Cref{state:d}). For similar reasons, in \Cref{alg:Fast} the iterates start with the $\bm y$-update rather than the $\bm x$-update as in \cite{allen2016even}. Moreover, in the same spirit of \Cref{alg:BC} this accelerated variant can be implemented efficiently whenever the individual blocks of \(\bm z^+\) can be computed efficiently, similarly to the cases discussed in \Cref{sec:Finito,sec:Sharing}. 
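For completeness, we record why the identity \(Q^{-\nicefrac12}U_iQ_i^{\nicefrac12}=U_i\) used above holds: since \(Q\) is block diagonal, \(Q^{-\nicefrac12}=\blockdiag(Q_1^{-\nicefrac12},\dots,Q_N^{-\nicefrac12})\), hence \(Q^{-\nicefrac12}U_i=U_iQ_i^{-\nicefrac12}\) and therefore \(Q^{-\nicefrac12}U_iQ_i^{\nicefrac12}=U_iQ_i^{-\nicefrac12}Q_i^{\nicefrac12}=U_i\).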
\begin{algorithm} \caption{Accelerated block-coordinate proximal gradient for problem \eqref{eq:P} under \Cref{ass:Fast}} \label{alg:Fast} \begin{algorithmic}[1] \item[{\sc Require}] \(\bm x^0\in\R^{\sum_in_i}\),~ \(\gamma_i\in(0,\nicefrac{N}{L_{f_i}})\),~$i\in[N]$, \( \sigma {}\coloneqq{} \frac{1}{N} \min_{i\in [N]} \{\gamma_i\mu_{f_i}\} \) \State\label{state:sigma_beta} {\bf if} \(\sigma=0\),~~{\bf then}~~ \(\eta = \nicefrac{1}{N^2}\) ~~{\bf otherwise}~~ set \( \tau {}={} \frac{2}{1+\sqrt{1+\nicefrac{4N^2}{\sigma}}} \), \( \eta {}={} \frac{1}{\tau N^2} \)~~ {\bf end if} \State \( \bm w^0 {}={} \bm x^0 \),~ \( (\bm v^0,\bm r^0) {}={} (\Gamma\nabla F(\bm x^0),\Gamma\nabla F(\bm x^0)) \),~ \( \bm z^{0} {}={} \prox_G^{\Gamma^{-1}}\bigl(\bm x^{0}-\bm r^{0}\bigr) \) \def\myVar#1{ \fillwidthof[l]{\bm x^{k+1}}{#1} } \For{ \(k=0,1,\dots\) } \State sample \(i\in[N]\) uniformly \State\label{state:d} \( \myVar{\bm y^{k+1}} {}\gets{} \bm x^{k}+U_{i}\bigl(z_{i}^{k}-x_{i}^{k}\bigr) \),\quad \( d {}\gets{} \tfrac{\gamma_i}{N}\nabla f_i(z_i^{k})-r_i^{k} \) \State \( \myVar{\bm v^{k+1}} {}={} \frac{1}{1+\eta\sigma}\Bigl(\bm v^{k}+\eta\sigma\bm r^{k}+N{\eta} U_id\Bigr) \),\quad \( \bm w^{k+1} {}={} \frac{1}{1+\eta\sigma}\Bigl(\bm w^{k}+\eta\sigma\bm x^{k}+{N\eta}U_{i}\bigl(z_{i}^{k}-x_{i}^{k}\bigr)\Bigr) \) \State{\bf if}~~\(\sigma=0\),~~{\bf then}~~ \( \eta {}\gets{} \frac{k+3}{2N^2} \),~~ \( \tau {}\gets{} \frac{2}{k+3} \); ~~{\bf end if} \State\label{state:FBEgrad} \( \myVar{\bm x^{k+1}} {}={} \tau \bm w^{k+1}+(1-\tau)\bm y^{k+1} \),\quad \( \bm r^{k+1} {}={} \tau \bm v^{k+1}+(1-\tau)(\bm r^{k}+U_i d) \) \State \( \myVar{\bm z^{k+1}} {}={} \prox_G^{\Gamma^{-1}}\bigl(\bm x^{k+1}-\bm r^{k+1}\bigr) \) \@ifstar\@@E\@EndFor{} \end{algorithmic} \end{algorithm} The convergence rate results follow directly from those of \cite{allen2016even} with parameters \(L_i=1\) and $S_\alpha=N$ as described above. \begin{thm}[convergence rates of \Cref{alg:Fast}] Suppose that \Cref{ass:basic,ass:Fast} are satisfied. Then, the iterates generated by \Cref{alg:Fast} satisfy \[ \mathbb{E}\left[{\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}(\bm y^k)-\min \@ifstar\@@P\@Phi}\right] {}\leq{} \frac{2N^2\|\bm x^0 - \bm x^\star\|^2_Q}{(k+1)^2}, \] where $Q$ is as in \eqref{eq:QQ}. Moreover, in the strongly convex case ($\sigma= \frac{1}{N} \min_{i\in [N]} \{\gamma_i\mu_{f_i}\}>0$) \[ \mathbb{E}\left[{\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}(\bm y^k)-\min \@ifstar\@@P\@Phi}\right] {}\leq{} O(1) (1-c)^k\left( \@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}(\bm x^0)-\min \@ifstar\@@P\@Phi \right) \quad\text{where}\quad \textstyle c {}={} \left( \frac12 {}+{} \sqrt{ \frac14 {}+{} \frac{N^2}{\sigma} } \right)^{-1}. \]
\end{thm} Note that in the strongly convex case it follows from \Cref{thm:strconcost} that the distance from the solution decreases \(R\)-linearly as \[ \@ifstar\@@E\@E[]{ \|\bm y^{k}-\bm x^\star\|^2_M} {}\leq{} O(1)\left(1-c\right)^k \left(\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}(\bm x^0)-\min\@ifstar\@@P\@Phi\right), \] where $M$ is as in \Cref{thm:strconcost}. \fi \section{Conclusions}\label{sec:Conclusions} We presented a general block-coordinate forward-backward algorithm for minimizing the sum of a separable smooth and a nonseparable nonsmooth function, both allowed to be nonconvex. The framework is general enough to encompass regularized finite sum minimization and sharing problems, and leads to (a generalization of) the Finito/MISO algorithm \cite{defazio2014finito,mairal2015incremental} with new convergence results and with another novel incremental-type algorithm. The forward-backward envelope is shown to be a particularly suitable Lyapunov function for establishing convergence: additionally to enjoying favorable continuity properties, \emph{sure} descent (as opposed to in expectation) occurs along the iterates. Possible future developments include extending the framework to account for a nonseparable smooth term, for instance by ``quantifying the strength of coupling'' between blocks of variables as in \cite[\S7.5]{bertsekas1989parallel}. \ifarxiv \fi \begin{appendix} \section{The key tool: the forward-backward envelope}\label{sec:appendix} This appendix contains some proofs and auxiliary results omitted in the main body. We begin by observing that, since \(F\) and \(-F\) are 1-smooth in the metric induced by \( \Lambda_F\coloneqq\tfrac1N\blockdiag(L_{f_1}\I_{n_1},\dots,L_{f_N}\I_{n_N}) \), one has \begin{equation}\label{eq:Lip} F(\bm x)+\innprod{\nabla F(\bm x)}{\bm w-\bm x} {}-{} \tfrac12\|\bm w-\bm x\|_{\Lambda_F}^2 {}\leq{} F(\bm w) {}\leq{} F(\bm x)+\innprod{\nabla F(\bm x)}{\bm w-\bm x} {}+{} \tfrac12\|\bm w-\bm x\|_{\Lambda_F}^2 \end{equation} for all \(\bm x,\bm w\in\R^{\sum_in_i}\), see \cite[Prop. A.24]{bertsekas2016nonlinear}. Let us denote \[ \M(\bm w,\bm x) {}\coloneqq{} F(\bm x)+\innprod{\nabla F(\bm x)}{\bm w-\bm x} {}+{} G(\bm w) {}+{} \tfrac12\|\bm w-\bm x\|_{\Gamma^{-1}}^2 \] the quantity being minimized (with respect to \(\bm w\)) in the definition \eqref{eq:FBE} of the FBE.
It follows from \eqref{eq:Lip} that \begin{equation}\label{eq:bounds} \@ifstar\@@P\@Phi(\bm w) {}+{} \tfrac12\|\bm w-\bm x\|^2_{\Gamma^{-1}-\Lambda_F} {}\leq{} \M(\bm w,\bm x) {}\leq{} \@ifstar\@@P\@Phi(\bm w) {}+{} \tfrac12\|\bm w-\bm x\|^2_{\Gamma^{-1}+\Lambda_F} \end{equation} holds for all \(\bm x,\bm w\in\R^{\sum_in_i}\). In particular, \(\M\) is a \emph{majorizing model} for \(\@ifstar\@@P\@Phi\), in the sense that \(\M(\bm x,\bm x)=\@ifstar\@@P\@Phi(\bm x)\) and \(\M(\bm w,\bm x)\geq\@ifstar\@@P\@Phi(\bm w)\) for all \(\bm x,\bm w\in\R^{\sum_in_i}\). In fact, as explained in \Cref{sec:FBE}, while a \(\Gamma\)-forward-backward step \(\bm z\in\@ifstar\operatorname T_\gamma^{\text{\sc fb}}\operatorname T_\Gamma^{\text{\sc fb}}(\bm x)\) amounts to evaluating a minimizer of \(\M({}\cdot{},\bm x)\), the FBE is defined instead as the minimization value, namely \(\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}(\bm x)=\M(\bm z,\bm x)\) where \(\bm z\) is any element of \(\@ifstar\operatorname T_\gamma^{\text{\sc fb}}\operatorname T_\Gamma^{\text{\sc fb}}(\bm x)\). \subsection{Proofs of \texorpdfstring{\Cref{sec:FBE}}{\S\ref*{sec:FBE}}}\label{sec:proofs:FBE} \begin{appendixproof}{thm:osc} For \(\bm x^\star\in\argmin\@ifstar\@@P\@Phi\) it follows from \eqref{eq:Lip} that \[ \min\@ifstar\@@P\@Phi {}\leq{} F(\bm x) {}+{} G(\bm x) {}\leq{} G(\bm x) {}+{} F(\bm x^\star) {}+{} \innprod{\nabla F(\bm x^\star)}{\bm x-\bm x^\star} {}+{} \tfrac12\|\bm x^\star-\bm x\|_{\Lambda_F}^2. \] Therefore, \(G\) is lower bounded by a quadratic function with quadratic term \(-\tfrac12\|{}\cdot{}\|_{\Lambda_F}^2\), and thus is prox-bounded in the sense of \cite[Def. 1.23]{rockafellar2011variational}. The claim then follows from \cite[Th. 1.25 and Ex. 5.23(b)]{rockafellar2011variational} and the continuity of the forward mapping \(\Fw{}\). \end{appendixproof} \begin{appendixproof}{thm:FBEineq} Local Lipschitz continuity \ifarxiv of the FBE \fi follows from \eqref{eq:FBEMoreau} in light of \Cref{thm:osc} and \cite[Ex. 10.32]{rockafellar2011variational}. \begin{proofitemize} \item\ref{thm:leq}~ Follows by replacing \(\bm w=\bm x\) in \eqref{eq:FBE}. \item\ref{thm:geq}~ Directly follows from \eqref{eq:bounds} and the identity \(\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}(\bm x)=\M(\bm z,\bm x)\) for \(\bm z\in\@ifstar\operatorname T_\gamma^{\text{\sc fb}}\operatorname T_\Gamma^{\text{\sc fb}}(\bm x)\). \item\ref{thm:strconcost}~ By strong convexity, denoting \(\@ifstar\@@P\@Phi_\star\coloneqq\min\@ifstar\@@P\@Phi\), we have \[ \@ifstar\@@P\@Phi_\star {}\leq{} \@ifstar\@@P\@Phi(\bm z)-\tfrac12\|\bm z-\bm x^\star\|_{\mu_F}^2 {}\leq{} \@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}(\bm x) {}-{} \tfrac12\|\bm z-\bm x^\star\|_{\mu_F}^2 \] where the second inequality follows from \Cref{thm:geq}. \qed
here \end{proofitemize} \end{appendixproof} \begin{appendixproof}{thm:FBEmin} \begin{proofitemize} \item\ref{thm:min} and \ref{thm:argmin}~ It follows from \Cref{thm:leq} that \(\inf\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}\leq\min\@ifstar\@@P\@Phi\). Conversely, let \(\seq{\bm x^k}\) be such that \(\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}(\bm x^k)\to\inf\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}\) as \(k\to\infty\), and for each \(k\) let \(\bm z^k\in\@ifstar\operatorname T_\gamma^{\text{\sc fb}}\operatorname T_\Gamma^{\text{\sc fb}}(\bm x^k)\). It then follows from \Cref{thm:leq,thm:geq} that \[ \inf\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}} {}\leq{} \min\@ifstar\@@P\@Phi {}\leq{} \liminf_{k\to\infty}\@ifstar\@@P\@Phi(\bm z^k) {}\leq{} \liminf_{k\to\infty}\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}(\bm x^k) {}={} \inf\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}, \] hence \(\min\@ifstar\@@P\@Phi=\inf\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}\). Suppose now that \(\bm x\in\argmin\@ifstar\@@P\@Phi\) (which exists by \Cref{ass:basic}); then it follows from \Cref{thm:geq} that \(\@ifstar\operatorname T_\gamma^{\text{\sc fb}}\operatorname T_\Gamma^{\text{\sc fb}}(\bm x)=\set{\bm x}\) (for otherwise another element would belong to a lower level set of \(\@ifstar\@@P\@Phi\)). Combining with \Cref{thm:leq} with \(\bm z=\bm x\) we then have \[ \min\@ifstar\@@P\@Phi {}={} \@ifstar\@@P\@Phi(\bm z) {}\leq{} \@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}(\bm x) {}\leq{} \@ifstar\@@P\@Phi(\bm x) {}={} \min\@ifstar\@@P\@Phi. \] Since \(\min\@ifstar\@@P\@Phi=\inf\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}\), we conclude that \(\bm x\in\argmin\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}\), and that in particular \(\inf\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}=\min\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}\). Conversely, suppose \(\bm x\in\argmin\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}\) and let \(\bm z\in\@ifstar\operatorname T_\gamma^{\text{\sc fb}}\operatorname T_\Gamma^{\text{\sc fb}}(\bm x)\). By combining \Cref{thm:leq,thm:geq} we have that \(\bm z=\bm x\), that is, that \(\@ifstar\operatorname T_\gamma^{\text{\sc fb}}\operatorname T_\Gamma^{\text{\sc fb}}(\bm x)=\set{\bm x}\). It then follows from \Cref{thm:geq} and assert \ref{thm:min} that \[ \@ifstar\@@P\@Phi(\bm x) {}={} \@ifstar\@@P\@Phi(\bm z) {}\leq{} \@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}(\bm x) {}={} \min\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}} {}={} \min\@ifstar\@@P\@Phi, \] hence \(\bm x\in\argmin\@ifstar\@@P\@Phi\). \item\ref{thm:LB}~ Due to \Cref{thm:leq}, if \(\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}\) is level bounded clearly so is \(\@ifstar\@@P\@Phi\). Conversely, suppose that \(\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}\) is not level bounded. 
Then, there exist \(\alpha\in\R\) and \(\seq{\bm x^k}\subseteq\lev_{\leq\alpha}\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}\) such that \(\|\bm x^k\|\to\infty\) as \(k\to\infty\). Let \(\lambda=\min_i\set{\gamma_i^{-1}-L_{f_i}N^{-1}}>0\), and for each \(k\in\N\) let \(\bm z^k\in\@ifstar\operatorname T_\gamma^{\text{\sc fb}}\operatorname T_\Gamma^{\text{\sc fb}}(\bm x^k)\). It then follows from \Cref{thm:geq} that \[ \min\@ifstar\@@P\@Phi {}\leq{} \@ifstar\@@P\@Phi(\bm z^k) {}\leq{} \@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}(\bm x^k) {}-{} \tfrac\lambda2\|\bm x^k-\bm z^k\|^2 {}\leq{} \alpha {}-{} \tfrac\lambda2\|\bm x^k-\bm z^k\|^2, \] hence \(\seq{\bm z^k}\subseteq\lev_{\leq\alpha}\@ifstar\@@P\@Phi\) and \( \|\bm x^k-\bm z^k\|^2 {}\leq{} \tfrac2\lambda(\alpha-\min\@ifstar\@@P\@Phi) \). Consequently, also the sequence \(\seq{\bm z^k}\subseteq\lev_{\leq\alpha}\@ifstar\@@P\@Phi\) is unbounded, proving that \(\@ifstar\@@P\@Phi\) is not level bounded. \qedhere \end{proofitemize} \end{appendixproof} \subsection{Further results}\label{sec:auxiliary} This section contains a list of auxiliary results invoked in the main proofs of \Cref{sec:convergence}. \begin{lem}\label{thm:critical} Suppose that \Cref{ass:basic} holds, and let two sequences \(\seq{\bm u^k}\) and \(\seq{\bm v^k}\) satisfy \(\bm v^k\in\@ifstar\operatorname T_\gamma^{\text{\sc fb}}\operatorname T_\Gamma^{\text{\sc fb}}(\bm u^k)\) for all \(k\) and be such that both converge to a point \(\bm u^\star\) as \(k\to\infty\). Then, \(\bm u^\star\in\@ifstar\operatorname T_\gamma^{\text{\sc fb}}\operatorname T_\Gamma^{\text{\sc fb}}(\bm u^\star)\), and in particular \(0\in\hat\partial\@ifstar\@@P\@Phi(\bm u^\star)\). \begin{proof} Since \(\nabla F\) is continuous, it holds that \(\Fw{\bm u^k}\to\Fw{\bm u^\star}\) as \(k\to\infty\). From outer semicontinuity of \(\prox_G^{\Gamma^{-1}}\) \cite[Ex. 5.23(b)]{rockafellar2011variational} it then follows that \[ \bm u^\star {}={} \lim_{k\to\infty} \bm v^k {}\in{} \limsup_{k\to\infty} \prox_G^{\Gamma^{-1}}(\Fw{\bm u^k}) {}\subseteq{} \prox_G^{\Gamma^{-1}}(\Fw{\bm u^\star}) {}={} \@ifstar\operatorname T_\gamma^{\text{\sc fb}}\operatorname T_\Gamma^{\text{\sc fb}}(\bm u^\star), \] where the limit superior is meant in the Painlevé-Kuratowski sense, cf. \cite[Def. 4.1]{rockafellar2011variational}. The optimality conditions defining \(\prox_G^{\Gamma^{-1}}\) \cite[Th. 10.1]{rockafellar2011variational} then read \begin{align*} 0 {}\in{} & \hat\partial\left( G+\tfrac12\|{}\cdot{}-(\Fw{\bm u^\star})\|_{\Gamma^{-1}}^2 \right)(\bm u^\star) {}={} \hat\partial G(\bm u^\star) {}+{} \Gamma^{-1}\left( \bm u^\star - (\Fw{\bm u^\star}) \right) \\ {}={} & \hat\partial G(\bm u^\star) {}+{} \nabla F(\bm u^\star) {}={} \hat\partial\@ifstar\@@P\@Phi(\bm u^\star), \end{align*} where the first and last equalities follow from \cite[Ex. 8.8(c)]{rockafellar2011variational}. \end{proof}
\end{lem} \begin{lem} Suppose that \Cref{ass:basic} holds and that function \(G\) is convex. Then, the following hold: \begin{enumerate} \item\label{thm:FNE} \(\prox_G^{\Gamma^{-1}}\) is (single-valued and) firmly nonexpansive (FNE) in the metric $\|{}\cdot{}\|_{\Gamma^{-1}}$; namely, \[ \| \prox_G^{\Gamma^{-1}}(\bm u) {}-{} \prox_G^{\Gamma^{-1}}(\bm v) \|_{\Gamma^{-1}}^2 {}\leq{} \innprod{ \prox_G^{\Gamma^{-1}}(\bm u) {}-{} \prox_G^{\Gamma^{-1}}(\bm v) }{ \Gamma^{-1}(\bm u-\bm v) } {}\leq{} \| \bm u {}-{} \bm v \|_{\Gamma^{-1}}^2 \quad\forall\bm u,\bm v; \] \item\label{thm:MoreauGrad} the Moreau envelope \(G^{\Gamma^{-1}}\) is differentiable with \(\nabla G^{\Gamma^{-1}}=\Gamma^{-1}(\id-\prox_G^{\Gamma^{-1}})\); \item\label{thm:subdiffdist} for every \(\bm x\in\R^{\sum_in_i}\) it holds that \( \dist(0,\partial\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}(\bm x)) {}\leq{} \tfrac{ N+\max_i\set{\gamma_iL_{f_i}} }{ N\min_i\set{\sqrt{\gamma_i}} } \|\bm x-\@ifstar\operatorname T_\gamma^{\text{\sc fb}}\operatorname T_\Gamma^{\text{\sc fb}}(\bm x)\|_{\Gamma^{-1}} \); \item\label{thm:TLip}\label{thm:contractive} \(\@ifstar\operatorname T_\gamma^{\text{\sc fb}}\operatorname T_\Gamma^{\text{\sc fb}}\) is \(L_{\bf T}\)-Lipschitz continuous in the metric $\|{}\cdot{}\|_{\Gamma^{-1}}$ for some \(L_{\bf T}\geq0\); if in addition \(f_i\) is \(\mu_{f_i}\)-strongly convex, \(i\in[N]\), then \(L_{\bf T}\leq 1-\delta\) for \(\delta=\frac1N\min_{i\in[N]}\set{\gamma_i\mu_{f_i}}\). \end{enumerate} \begin{proof} \begin{proofitemize} \item\ref{thm:FNE} and \ref{thm:MoreauGrad}~ See \cite[Prop.s 12.28 and 12.30]{bauschke2017convex}. \item\ref{thm:subdiffdist} Let \(D\subseteq\R^{\sum_in_i}\) be the set of points at which \(\nabla F\) is differentiable. From the chain rule of differentiation applied to the expression \eqref{eq:FBEMoreau} and using assert \ref{thm:MoreauGrad}, we have that \(\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}\) is differentiable on \(D\) with gradient \[ \nabla\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}(\bm x) {}={} \bigl[ \I-\Gamma\nabla^2F(\bm x) \bigr] \Gamma^{-1} \bigl[ \bm x-\@ifstar\operatorname T_\gamma^{\text{\sc fb}}\operatorname T_\Gamma^{\text{\sc fb}}(\bm x) \bigr] \quad \forall\bm x\in D. \] Since \(D\) is dense in \(\R^{\sum_in_i}\) owing to Lipschitz continuity of \(\nabla F\), we may invoke \cite[Th. 9.61]{rockafellar2011variational} to infer that \(\partial\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}(\bm x)\) is nonempty for every \(\bm x\in\R^{\sum_in_i}\) and \[ \partial\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}(\bm x) {}\supseteq{} \partial_B\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}(\bm x) {}={} \bigl[ \I-\Gamma\partial_B\nabla F(\bm x) \bigr] \Gamma^{-1} \bigl[ \bm x-\@ifstar\operatorname T_\gamma^{\text{\sc fb}}\operatorname T_\Gamma^{\text{\sc fb}}(\bm x) \bigr] {}={} \bigl[ \Gamma^{-1}-\partial_B\nabla F(\bm x) \bigr] \bigl[ \bm x-\@ifstar\operatorname T_\gamma^{\text{\sc fb}}\operatorname T_\Gamma^{\text{\sc fb}}(\bm x) \bigr], \] where \(\partial_B\) denotes the (set-valued) Bouligand differential \cite[\S7.1]{facchinei2003finite}. 
The claim now follows by observing that \( \partial_B\nabla F(\bm x) {}={} \tfrac1N\blockdiag(\partial_B\nabla f_1(x_1),\dots,\partial_B\nabla f_N(x_N)) \) and that each element of \(\partial_B\nabla f_i(x_i)\) has norm bounded by \(L_{f_i}\). \item\ref{thm:TLip}~ Lipschitz continuity follows from assert \ref{thm:FNE} together with the fact that Lipschitz continuity is preserved by composition. Suppose now that \(f_i\) is \(\mu_{f_i}\)-strongly convex, \(i\in[N]\). By \cite[Thm 2.1.12]{nesterov2013introductory} for all $x_i,y_i\in\R^{n_i}$ \begin{equation}\label{eq:smoothStrcvx} \langle\nabla f_i(x_i)-\nabla f_i(y_i),x_i-y_i\rangle\geq\tfrac{\mu_{f_i}L_{f_i}}{\mu_{f_i}+L_{f_i}}\|x_i-y_i\|^2+\tfrac1{\mu_{f_i}+L_{f_i}}\|\nabla f_i(x_i)-\nabla f_i(y_i)\|^2. \end{equation} For the forward operator we have \begin{align*} & \| (\id-\tfrac{\gamma_i}{N}\nabla f_i)(x_i) {}-{} (\id-\tfrac{\gamma_i}{N}\nabla f_i)(y_i) \|^2 \\ {}={} & \|x_i-y_i\|^2 {}+{} \tfrac{\gamma_i^2}{N^2} \|\nabla f_i(x_i)-\nabla f_i(y_i)\|^2 {}-{} \tfrac{2\gamma_i}{N} \innprod{x_i-y_i}{\nabla f_i(x_i)-\nabla f_i(y_i)} \\ \overrel[\leq]{\eqref{eq:smoothStrcvx}}{} & \Bigl( 1-\tfrac{\gamma_i^2\mu_{f_i}L_{f_i}}{N^2} \Bigr) \|x_i-y_i\|^2 {}-{} \tfrac{\gamma_i}{N} \Bigl( 2-\tfrac{\gamma_i}{N}(\mu_{f_i}+L_{f_i}) \Bigr) \innprod{\nabla f_i(x_i)-\nabla f_i(y_i)}{x_i-y_i} \\ {}\leq{} & \left(1-\tfrac{\gamma_i^2\mu_{f_i}L_{f_i}}{N^2}\right) \|x_i-y_i\|^2 {}-{} \tfrac{\gamma_i\mu_{f_i}}{N} \left(2-\tfrac{\gamma_i}{N}(\mu_{f_i}+L_{f_i})\right) \|x_i-y_i\|^2 \\ {}={} & \left(1-\tfrac{\gamma_i\mu_{f_i}}{N}\right)^2 \|x_i-y_i\|^2, \end{align*} where strong convexity and the fact that $\gamma_i<\nicefrac{N}{L_{f_i}}\leq\nicefrac{2N}{(\mu_{f_i}+L_{f_i})}$ was used in the second inequality. Multiplying by $\gamma_i^{-1}$ and summing over $i$ shows that \(\id-\Gamma\nabla F\) is \((1-\delta)\)-contractive in the metric \(\|{}\cdot{}\|_{\Gamma^{-1}}\), and so is \(\@ifstar\operatorname T_\gamma^{\text{\sc fb}}\operatorname T_\Gamma^{\text{\sc fb}}=\prox_G^{\Gamma^{-1}}\circ(\Fw{})\) as it follows from assert \ref{thm:FNE}. \qedhere \end{proofitemize} \end{proof}
\end{lem} The next result recaps an important property that the FBE inherits from the cost function \(\@ifstar\@@P\@Phi\) that is instrumental for establishing global convergence and asymptotic linear rates for the BC-\Cref{alg:BC}. The result falls as special case of \cite[Th. 5.2]{yu2019deducing} after observing that \[ \@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}(\bm x) {}={} \inf_{\bm w}\set{ \@ifstar\@@P\@Phi(\bm w) {}+{} D_H(\bm w,\bm x) }, \] where \( D_H(\bm w,\bm x) {}={} H(\bm w)-H(\bm x)-\innprod{\nabla H(\bm x)}{\bm w-\bm x} \) is the Bregman distance with kernel \(H=\tfrac12\|{}\cdot{}\|_{\Gamma^{-1}}^2-F\). \begin{lem}[{\cite[Th. 5.2]{yu2019deducing}}]\label{thm:loja} Suppose that \Cref{ass:basic} holds and for \(\gamma_i\in(0,\nicefrac{N}{L_{f_i}})\), \(i\in[N]\), let \(\Gamma=\blockdiag(\gamma_1\I_{n_1},\dots,\gamma_N\I_{n_N})\). If \(\@ifstar\@@P\@Phi\) has the KL property with exponent \(\theta\in(0,1)\) (as is the case when \(f_i\) and \(G\) are semialgebraic), then so does \(\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}\) with exponent \( \max\set{\nicefrac12,\theta} \). \end{lem} \ifaccel \begin{lem}[FBE: convexity and block-smoothness]\label{thm:convex} Suppose that \Cref{ass:basic,ass:Fast} are satisfied, and consider the notation introduced therein. Let \(\gamma_i\in(0,\nicefrac{N}{L_{f_i}})\) be fixed. Define \( Q_i {}\coloneqq{} \gamma_i^{-1}\I-\tfrac{1}{N}H_i\in\R^{n_i\times n_i} \), \( Q {}\coloneqq{} \blockdiag(Q_1,\dots,Q_N) \), and \( H {}\coloneqq{} \tfrac1N\blockdiag(H_1,\dots,H_N) \). Then, $\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}C \coloneqq\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}} \circ Q^{-1/2}$ is convex and smooth with $\nabla\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}C (\tilde{\bm x}) = Q^{1/2}(\bm x-\@ifstar\operatorname T_\gamma^{\text{\sc fb}}\operatorname T_\Gamma^{\text{\sc fb}}(\bm x))$ where $\bm x=Q^{-1/2}\tilde{\bm x}$. In fact, for any \(\tilde{\bm x},\tilde{\bm x}'\in\R^{\sum_in_i}\) it holds that \begin{equation}\label{eq:FBEComposedsmooth} 0 {}\leq{} \innprod{\nabla\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}C(\tilde{\bm x}')-\nabla\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}C(\tilde{\bm x})}{\tilde{\bm x}'-\tilde{\bm x}} {}\leq{} \|\tilde{\bm x}'-\tilde{\bm x}\|^2. \end{equation} In particular, function $\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}C$ is $1$-smooth along each block $i\in[N]$. If, additionally, all functions \(f_i\) are strongly convex, then $\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}C$ is \(\sigma\)-strongly convex with $\sigma\coloneqq \tfrac{1}{N}\min_{i\in [N]}\left\{\gamma_i\mu_{f_i}\right\}$. \begin{proof} Since $\gamma_i<N/L_{f_i}$, $Q$ is positive definite. We begin by showing that for any \(\bm x,\bm x'\in\R^{\sum_in_i}\) it holds that \begin{equation}\label{eq:FBEsmooth} 0\leq \|\bm x'-\bm x\|^2_{Q}- \|Q(\bm x'-\bm x)\|^2_{\Gamma} {}\leq{} \innprod{\nabla\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}(\bm x')-\nabla\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}(\bm x)}{\bm x'-\bm x} {}\leq{} \|\bm x'-\bm x\|^2_Q. 
\end{equation} It follows from \Cref{thm:MoreauGrad}, the chain rule of differentiation applied to \eqref{eq:FBEMoreau}, and the twice continuous differentiability of \(F\) that \(\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}\) is continuously differentiable with \( \nabla\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}(\bm x) {}={} Q(\bm x-\bm z) \). For \(\bm z^x\coloneqq\@ifstar\operatorname T_\gamma^{\text{\sc fb}}\operatorname T_\Gamma^{\text{\sc fb}}(\bm x)\) and \(\bm z^{x'}\coloneqq\@ifstar\operatorname T_\gamma^{\text{\sc fb}}\operatorname T_\Gamma^{\text{\sc fb}}(\bm {x'})\) it holds that \begin{equation}\label{eq:innprodGrad} \innprod{\nabla\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}(\bm {x'})-\nabla\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}(\bm x)}{\bm {x'}-\bm x} {}={} \innprod{ Q(\bm {x'}-\bm z^{x'}-\bm x+\bm z^x) }{ \bm {x'}-\bm x } {}={} \|\bm {x'}-\bm x\|^2_Q {}-{} \innprod{ \bm z^{x'}-\bm z^x }{ Q(\bm {x'}-\bm x) }. \end{equation} In order to bound the last scalar product, observe that \[ 0 {}\leq{} \innprod{ \Gamma^{-1}(\bm z^{x'}-\bm z^x) }{ (\bm {x'}-\Gamma\nabla F(\bm {x'})) {}-{} (\bm x-\Gamma\nabla F(\bm x)) } {}\leq{} \bigl\| (\bm {x'}-\Gamma\nabla F(\bm {x'})) {}-{} (\bm x-\Gamma\nabla F(\bm x)) \bigr\|_{\Gamma^{-1}}^2, \] as it follows from \Cref{thm:FNE}. Since \(\id-\Gamma\nabla F=\Gamma Q{}\cdot{} - \Gamma\bm q\) (with $\bm q\coloneqq(\tfrac1Nq_1,\dots,\tfrac1Nq_N)$), the above inequality simplifies to \[ 0 {}\leq{} \innprod{ \bm z^{x'}-\bm z^x }{ Q(\bm {x'}-\bm x) } {}\leq{} \|\Gamma Q(\bm {x'}-\bm x)\|_{\Gamma^{-1}}^2, \] which combined with \eqref{eq:innprodGrad} results in the claimed \eqref{eq:FBEsmooth}. If additionally \(\mu_{f_i}>0\) for all \(i\), then \(\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}\) is \(1\)-strongly convex in the metric \(\|{}\cdot{}\|^2_{Q - Q\Gamma Q}\) (by observing that \(Q-Q\Gamma Q\succ 0\)). The result in \eqref{eq:FBEComposedsmooth} follows by using \eqref{eq:FBEsmooth} with the change of variables $\bm x=Q^{-1/2}\tilde{\bm x}$, $\bm x'=Q^{-1/2}\tilde{\bm x}'$ and noting that $\nabla \@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}C(\tilde{\bm x}) = Q^{-1/2}\nabla \@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}} (\bm x)$. Since $\Gamma$ is block-wise a multiple of identity it commutes with any block-diagonal matrix. Therefore, when $f_i$ are strongly convex, using the lower bound in \eqref{eq:FBEsmooth} and the above change of variable we obtain that $\@ifstar\@ifstar\@@P\@Phi_\gamma^{\text{\sc fb}}\@ifstar\@@P\@Phi_\Gamma^{\text{\sc fb}}C$ is strongly convex in the metric \(\|{}\cdot{}\|^2_{\I - \Gamma Q}\). The result follows by noting that $\I - \Gamma Q= \Gamma H$. \end{proof} \end{lem} \fi \end{appendix} \ifarxiv \else \phantomsection \addcontentsline{toc}{section}{References} \fi \end{document}
\begin{document} \title{On diagram groups over Fibonacci-like semigroup presentations and their generalizations} \author{ V. S. Guba\thanks{This work is partially supported by the Russian Foundation for Basic Research, project no. 19-01-00591 A.}\\ Vologda State University,\\ 15 Lenin Street,\\ Vologda\\ Russia\\ 160600\\ E-mail: guba{@}uni-vologda.ac.ru} \date{} \maketitle \begin{abstract} We answer a question by Matt Brin on the structure of diagram groups over the semigroup presentation ${\mathcal P}=\langle\, a,b,c\mid a=bc,b=ca,c=ab\,\rangle$. In a talk at an Oberwolfach workshop, Brin conjectured that the diagram group over $\mathcal P$ with base $a$ is isomorphic to the generalized Thompson's group $F_9$. We confirm this conjecture and consider some generalizations of this fact. \end{abstract}
\section{Introduction} \langle\,bel{backgr} In this background Section we recall the concept of diagram groups and introduce some terminology. The contents of the present Section is essentially known. Some defininions and examples from here repeat the ones from \cite{Gu04}. Detailed information about diagram groups can be found in \cite{GbS}. First of all, let us recall the concept of a semigroup diagram and introduce some notation. To do this, we consider the following example. Let ${\mathcal P}=\langle\, a,b\mid aba=b,bab=a\,\rangle$ be the semigroup presentation. (In the next Section we will work with it.) It is easy to see by the following algebraic calculation $$ a^5=a(bab)a(bab)a=(aba)(bab)(aba)=bab=a $$ that the words $a^5$ and $a$ are equal modulo ${\mathcal P}$. The same can be seen from the following picture \begin{center} \begin{picture}(90.00,37.00) \put(00.00,23.00){\circle*{1.00}} \put(10.00,23.00){\circle*{1.00}} \put(20.00,23.00){\circle*{1.00}} \put(30.00,23.00){\circle*{1.00}} \put(30.00,23.00){\circle*{1.00}} \put(40.00,23.00){\circle*{1.00}} \put(50.00,23.00){\circle*{1.00}} \put(60.00,23.00){\circle*{1.00}} \put(60.00,23.00){\circle*{1.00}} \put(70.00,23.00){\circle*{1.00}} \put(80.00,23.00){\circle*{1.00}} \put(90.00,23.00){\circle*{1.00}} \put(00.00,23.00){\line(1,0){90.00}} \bezier{152}(10.00,23.00)(25.00,35.00)(40.00,23.00) \bezier{240}(50.00,23.00)(80.00,23.00)(50.00,23.00) \bezier{164}(50.00,23.00)(65.00,37.00)(80.00,23.00) \bezier{240}(0.00,23.00)(30.00,23.00)(0.00,23.00) \bezier{156}(0.00,23.00)(17.00,11.00)(30.00,23.00) \bezier{164}(30.00,23.00)(44.00,9.00)(60.00,23.00) \bezier{164}(60.00,23.00)(74.00,9.00)(90.00,23.00) \put(5.00,25.00){\makebox(0,0)[cc]{$a$}} \put(24.00,32.00){\makebox(0,0)[cc]{$a$}} \put(45.00,25.00){\makebox(0,0)[cc]{$a$}} \put(65.00,32.00){\makebox(0,0)[cc]{$a$}} \put(84.00,25.00){\makebox(0,0)[cc]{$a$}} \put(23.00,16.00){\makebox(0,0)[cc]{$b$}} \put(44.00,13.00){\makebox(0,0)[cc]{$a$}} \put(65.00,16.00){\makebox(0,0)[cc]{$b$}} \put(15.00,20.00){\makebox(0,0)[cc]{$b$}} \put(24.00,25.00){\makebox(0,0)[cc]{$a$}} \put(35.00,21.00){\makebox(0,0)[cc]{$b$}} \put(53.00,21.00){\makebox(0,0)[cc]{$b$}} \put(66.00,25.00){\makebox(0,0)[cc]{$a$}} \put(74.00,21.00){\makebox(0,0)[cc]{$b$}} \bezier{520}(0.00,23.00)(45.00,-24.00)(90.00,23.00) \put(44.00,2.00){\makebox(0,0)[cc]{$a$}} \end{picture} \end{center} This is a {\em diagram\/} $\Delta$ over the semigroup presentation ${\mathcal P}$. It is a plane graph with $10$ vertices, $15$ (geometric) edges and $6$ faces or {\em cells\/}. Each cell corresponds to an elementary transformation of a word, that is, a transformation of the form $p\cdot u\cdot q\to p\cdot v\cdot q$, where $p$, $q$ are words (possibly, empty), $u=v$ or $v=u$ belongs to the set of defining relations. The diagram $\Delta$ has the leftmost vertex denoted by $\iota(\Delta)$ and the rightmost vertex denoted by $\tau(\Delta)$. It also has the {\em top path\/} $\mathop{\mbox{\bf top}}(\Delta)$ and the {\em bottom path\/} $\mathop{\mbox{\bf bot}}(\Delta)$ from $\iota(\Delta)$ to $\tau(\Delta)$. Each cell $\pi$ of a diagram can be regarded as a diagram itself. The above functions $\iota$, $\tau$, $\mathop{\mbox{\bf top}}$, $\mathop{\mbox{\bf bot}}$ can be applied to $\pi$ as well. We do not distinguish isotopic diagrams. We say that $\Delta$ is a $(w_1,w_2)$-diagram whenever the label of its top path is $w_1$ and the label of its bottom path is $w_2$. In our example, we deal with an $(a^5,a)$-diagram. 
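As a quick sanity check for derivations of this kind, equality of two words modulo ${\mathcal P}$ can be verified mechanically. The following small Python sketch (an illustration only; it is not used anywhere in the sequel) performs a breadth-first search over the words obtained by applying the defining relations $aba=b$, $bab=a$ in both directions, with a crude bound on the word length.
\begin{verbatim}
from collections import deque

RULES = [("aba", "b"), ("bab", "a")]   # defining relations of P

def neighbours(w, max_len):
    # all words obtained from w by one application of a relation
    out = set()
    for lhs, rhs in RULES:
        for a, b in ((lhs, rhs), (rhs, lhs)):
            i = w.find(a)
            while i >= 0:
                v = w[:i] + b + w[i + len(a):]
                if len(v) <= max_len:
                    out.add(v)
                i = w.find(a, i + 1)
    return out

def equal_mod_P(u, v, max_len=9):
    # breadth-first search from u among words of length <= max_len
    seen, queue = {u}, deque([u])
    while queue:
        w = queue.popleft()
        if w == v:
            return True
        for x in neighbours(w, max_len):
            if x not in seen:
                seen.add(x)
                queue.append(x)
    return False

print(equal_mod_P("aaaaa", "a"))   # True: a^5 = a modulo P
\end{verbatim}
The bound of $9$ on the word length suffices here because the derivation above only passes through words of length at most $9$.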
If we have two diagrams such that the bottom path of the first of them has the same label as the top path of the second, then we can naturally {\em concatenate\/} these diagrams by identifying the bottom path of the first diagram with the top path of the second diagram. The result of the concatenation of a $(w_1,w_2)$-diagram and a $(w_2,w_3)$-diagram obviously is a $(w_1,w_3)$-diagram. We use the sign $\circ$ for the operation of concatenation. For any diagram $\Delta$ over ${\mathcal P}$ one can consider its {\em mirror image\/} $\Delta^{-1}$. A diagram may have {\em dipoles\/}, that is, subdiagrams of the form $\pi\circ\pi^{-1}$, where $\pi$ is a single cell. To {\em cancel\/} (or {\em reduce\/}) the dipole means to remove the common boundary of $\pi$ and $\pi^{-1}$ identifying $\mathop{\mbox{\bf top}}(\pi)$ with $\mathop{\mbox{\bf bot}}(\pi^{-1})$. In any diagram, we can cancel all its dipoles, step by step. The result does not depend on the order of cancellations. A diagram is {\em irreducible\/} whenever it has no dipoles. The operation of cancelling dipoles has an inverse operation called the {\em insertion\/} of a dipole. These operations induce an equivalence relation on the set of diagrams (two diagrams are {\em equivalent\/} whenever one can go from one of them to the other by a finite sequence of cancelling/inserting dipoles). Each equivalence class contains exactly one irreducible diagram. For any nonempty word $w$, the set of all $(w,w)$-diagrams forms a monoid with the identity element $\varepsilon(w)$ (the diagram with no cells). The operation $\circ$ naturally induces some operation on the set of equivalence classes of diagrams. This operation is called a {\em product\/} and equivalent diagrams are called {\em equal\/}. (The sign $\equiv$ will be used to denote that two diagrams are isotopic.) So the set of all equivalence classes of $(w,w)$-diagrams forms a group that is called the {\em diagram group\/} over ${\mathcal P}$ with {\em base\/} $w$. We denote this group by ${\mathcal D}({\mathcal P},w)$. We can think of this group as of the set of all irreducible $(w,w)$-diagrams. The group operation is the concatenation with cancelling all dipoles in the result. An inverse element of a diagram is its mirror image. We also need one more natural operation on the set of diagrams. By the {\em sum\/} of two diagrams we mean the diagram obtained by identifying the rightmost vertex of the first summand with the leftmost vertex of the second summand. This operation is also associative. The sum of diagrams $\Delta_1$, $\Delta_2$ is denoted by $\Delta_1+\Delta_2$. Now let us recall some information about generalized Thompson's groups $F_r$. This family was introduced by K. S. Brown in \cite{Bro}. Additional facts about these groups can be found in \cite{BCS,Stein}. The family of generalized Thompson's groups can be defined as follows. The group $F_r$ is the group of all piecewise linear self homeomorphisms of the unit interval $[0,1]$ that are orientation preserving (that is, send $0$ to zero and $1$ to $1$) with all slopes integer powers of $r$ and such that their singularities (breakpoints of the derivative) belong to $\mathbb Z[\,\frac1r\,]$. The group $F_r$ admits a presentation given by \be{presfp} \langle\, x_0,x_1,x_2,\ldots\mid x_jx_i=x_ix_{j+r-1}\ (i<j)\,\rangle. \end{equation} \noindent This presentation is infinite, but a close examination shows that the group is actually finitely generated, since $x_0$, $x_1$, \dots, $x_{p-1}$ are sufficient to generate it. 
In fact, the group is finitely presented; see \cite{Bro}. The finite presentation is awkward, and it is not used much. The symmetric and simple nature of the infinite presentation makes it much more adequate for almost all purposes. One way in which the infinite presentation is very useful is in the construction of the normal forms. A word given in the generators $x_i$ and their inverses can have its generators moved around according to the relators, and the result is the following well-known statement: \begin{thm} \label{stnf} An element in $F_r$ always admits an expression of the form $$ x_{i_1}x_{i_2}\cdots x_{i_m}x_{j_n}^{-1}\cdots x_{j_2}^{-1}x_{j_1}^{-1}, $$ where $$ i_1\le i_2\le\cdots\le i_m,\ j_1\le j_2\le\cdots\le j_n. $$ \end{thm} In general, this expression is not unique, but for every element there is a unique word of this type which satisfies a certain technical condition. This unique word is called the {\em standard normal form\/} for the element of $F_r$. The case $r=2$ corresponds to the famous R. Thompson's group $F=F_2$. It is known \cite{GbS} that the groups $F_r$ are diagram groups over the semigroup presentation ${\mathcal P}_r=\langle\, x\mid x=x^r\,\rangle$ with base $x$ (note that for any base $x^k$, where $k\ge1$, we get an isomorphic group). Now let us compare the diagram representation of $F$ with the representation of its elements by piecewise-linear homeomorphisms of the closed unit interval $[0,1]$. Let $\Delta$ be an $(x^p,x^q)$-diagram over ${\mathcal P}$. We will show how to assign to it a piecewise-linear function from $[0,p]$ onto $[0,q]$. Each positive edge of $\Delta$ is homeomorphic to the unit interval $[0,1]$. So we assign a coordinate to each point of this edge (the leftmost end of an edge has coordinate $0$, the rightmost one has coordinate $1$). Let $\pi$ be an $(x,x^r)$-cell of $\Delta$. Let us map $\mathop{\mbox{\bf top}}(\pi)$ onto $\mathop{\mbox{\bf bot}}(\pi)$ linearly, that is, the point on the edge $\mathop{\mbox{\bf top}}(\pi)$ with coordinate $t\in[0,1]$ is taken to the point on $\mathop{\mbox{\bf bot}}(\pi)$ with coordinate $rt$ (the bottom path of $\pi$ has length $r$ so it is naturally homeomorphic to $[0,r]$). The same thing can be done for an $(x^r,x)$-cell of $\Delta$. Thus for any cell $\pi$ of $\Delta$ we have a natural mapping $T_\pi$ from $\mathop{\mbox{\bf top}}(\pi)$ onto $\mathop{\mbox{\bf bot}}(\pi)$ (we call it a {\em transition map\/}). Now let $t$ be any number in $[0,p]$. We consider the point $o$ on $\mathop{\mbox{\bf top}}(\Delta)$ that has coordinate $t$. If $o$ is not a point of $\mathop{\mbox{\bf bot}}(\Delta)$, then it is an internal point on the top path of some cell. Thus we can apply the corresponding transition map to $o$. We repeat this operation until we get a point $o'$ on the path $\mathop{\mbox{\bf bot}}(\Delta)$. The coordinate of this point is a number in $[0,q]$. Hence we have a function $f_\Delta\colon[0,p]\to[0,q]$ induced by $\Delta$. It is easy to see that this is a piecewise-linear function. When we concatenate diagrams, this corresponds to the composition of the PL functions induced by these diagrams. For the groups $F_r$, realized as the diagram groups ${\mathcal D}({\mathcal P}_r,x)$, this gives a homomorphism to $PLF[0,1]$. It is known that this homomorphism is a monomorphism. The following elementary fact was essentially used several times in \cite{GuSa99,Gu00} and some other papers. \begin{lm} \label{longpath} Let ${\mathcal P}=\langle\, X\mid{\mathcal R}\,\rangle$ be a semigroup presentation.
Suppose that all defining relations of ${\mathcal P}$ have the form $a=A$, where $a\in X$ and $A$ is a word of length at least $2$. Also assume that all letters in the left-hand sides of the defining relations are different. Then any irreducible diagram $\Delta$ over ${\mathcal P}$ is the concatenation of the form $\Delta_1\circ\Delta_2^{-1}$, where the top path of each cell of both $\Delta_1$, $\Delta_2$ has length $1$. The longest positive path in $\Delta$ from $\iota(\Delta)$ to $\tau(\Delta)$ coincides with the bottom path of $\Delta_1$ and the top path of $\Delta_2^{-1}$. \end{lm}
Note that $\langle\, x\mid x=x^r\,\rangle$ obviously satisfies the conditions of the Lemma. The same concerns the presentation $\langle\, a,b\mid a=bab, b=aba\,\rangle$, which was considered in the beginning of this Section. Let us recall the idea of the proof. Let $p$ be the longest positive path in $\Delta$ from $\iota(\Delta)$ to $\tau(\Delta)$. It cuts $\Delta$ into two parts. It suffices to prove that all cells in the ``upper'' part correspond to the defining relations of the form $a=A$, where $a$ is a letter, and none of them corresponds to $A=a$. Assume the contrary. Suppose that there is a cell $\pi$ in the upper part of $\Delta$ with the top label $A$ and the bottom label $a$. The bottom path of $\pi$ cannot be a subpath in $p$ since $p$ is chosen the longest. So the bottom edge of $\pi$ belongs to the top path of some cell $\pi'$. The diagram $\Delta$ has no dipoles. All letters in the left-hand sides of the defining relations are different. So the top path of $\pi'$ cannot have length $1$. This means that we have found a new cell in the upper part of $\Delta$ that also corresponds to the defining relation of the form $A=a$. Applying the same argument to $\pi'$, we get a process that never terminates. This is impossible since the cells that appear during the process cannot repeat. This completes the proof.
\section{Main Results} \label{flp} Let $a_1$, $a_2$, \ldots, $a_n$ be a finite alphabet. By definition, $a_{n+1}=a_1$, $a_{n+2}=a_2$. Consider the following semigroup presentation \be{fs} {\mathcal P}_n=\langle\, a_1,\ldots,a_n\mid a_i=a_{i+1}a_{i+2}\ (1\le i\le n)\,\rangle. \end{equation} The semigroup presented by ${\mathcal P}_n$ is called the {\em Fibonacci semigroup\/}. One can ask what the diagram groups $G_n=\mathcal D(\mathcal P_n,a_1)$ are. The case $n=1$ is trivial: it gives the diagram group over $\langle\, x\mid x=xx\,\rangle$, so it is Thompson's group $F$. For $n=2$ one has the presentation $\langle\, a,b\mid a=ba,b=ab\,\rangle$. It was shown in \cite{GoSa17} that $G_2$ (the so-called Jones' subgroup) is isomorphic to $F_3$. In his talk at an Oberwolfach workshop, Matt Brin asked about the group $G_3$; see \cite[Question 73]{BBN}. He conjectured that this diagram group is isomorphic to $F_9$. Notice that $\mathcal P_3$ can be written as $\langle\, a,b,c\mid a=bc,b=ca,c=ab\,\rangle$. This presentation is not {\em complete\/}. This means that the Thue system $ab\to c$, $bc\to a$, $ca\to b$ does not have unique normal forms. For complete semigroup presentations, there exists a technique for computing the corresponding diagram groups from \cite{GbS}. Sometimes it is possible to pass to a completion, but here it has a complicated form. Indeed, the semigroup given by $\mathcal P_3$ is the quaternion group $Q_8$. So this way of describing the group does not look promising. Here we present a purely geometric way to find the diagram group. First of all, let us mention that one can avoid the generator $c$ by replacing it with $ab$. In the generators $a$, $b$ the presentation becomes $\langle\, a,b\mid a=bab,b=aba\,\rangle$. It was considered as an example at the beginning of the Introduction. The semigroup given by it is the same as above. There is a fact from \cite[Section 4]{GuSa05} that ordinary Tietze transformations of semigroup presentations lead to the same diagram groups. (This can also be shown directly.) Now we introduce one more generalization of the class of semigroup presentations under consideration. Let $a_1$, $a_2$, \ldots, $a_n$ be a finite alphabet as above and let $r\ge2$ be an integer. For any $j$ from $1$ to $r$ we set $a_{n+j}=a_j$. Now for every $i$ from $1$ to $n$ we consider a relation of the form $a_i=a_{i+1}\ldots a_{i+r}$. By $\mathcal P_{nr}$ we denote the semigroup presentation given by these relations: \be{pnr} {\mathcal P}_{nr}=\langle\, a_1,\ldots,a_n\mid a_i=a_{i+1}\ldots a_{i+r}\ (1\le i\le n)\,\rangle. \end{equation} This class of presentations was introduced by Johnson in \cite{Jo74} in order to generalize the concept of a Fibonacci group. Since $\mathcal P_{nr}$ is also a semigroup presentation, one can introduce the corresponding semigroups as well. For $r=2$ we recover the Fibonacci presentations above. Now we can consider the diagram groups $G_{nr}$ defined as $\mathcal D(\mathcal P_{nr},a_1)$. The group we are interested in is $G_{32}\cong G_{23}$. We confirm Brin's conjecture about it. \begin{thm} \label{f9} The diagram group with base $a$ over the semigroup presentation $\langle\, a,b,c\mid a=bc,b=ca,c=ab\,\rangle$ is isomorphic to generalized Thompson's group $F_9$. \end{thm} {\bf Proof.}\ We consider this group as a diagram group over $\mathcal P_{23}=\langle\, a,b\mid a=bab,b=aba\,\rangle$. It is known that the groups $F_r$ have no proper non-Abelian homomorphic images.
So it suffices to construct a homomorphism from $F_9$ to the diagram group $G=G_{23}$ and show that it is surjective; this will then give us an isomorphism. The group $F_9$ will be considered as the diagram group over $\langle\, x\mid x=x^9\,\rangle$ with base $x$. A diagram over this presentation is a plane graph composed of cycles of even length. By induction on the number of cells it is easy to show that the graph is bipartite. So we can $2$-colour its vertices. Let the initial vertex of a diagram $\Delta$ get colour $1$. Then the other vertices get their colours uniquely. Now we relabel the diagram: if a positive edge goes from a vertex of colour $1$ to a vertex of colour $2$, then we give it label $a$; otherwise it has label $b$. As a result, we get a diagram denoted by $\Delta'$. Each cell $x=x^9$ becomes a cell of one of the two forms $a=a(ba)^4$ or $b=b(ab)^4$, and the same holds for inverse cells. We have the following derivation over $\mathcal P_{23}$: $a=bab=(aba)(bab)(aba)=a(ba)^4$, and similarly for the other equality. The semigroup diagrams for these equalities consist of $4$ cells. They will be called {\em basic\/}. We fill the cells of the above form by basic diagrams. This gives us a diagram $\Delta''$ over $\mathcal P_{23}$. The rule $\Delta\mapsto\Delta''$ induces a homomorphism of groupoids of diagrams. (Notice that cancelling a dipole in a diagram $\Delta$ over $x=x^9$ leads to cancelling $4$ dipoles in $\Delta''$, so the mapping we have defined preserves equivalence of diagrams.) In particular, we have a homomorphism from $F_9$, as the diagram group over $x=x^9$ with base $x$, to $G$, as the diagram group over $\mathcal P_{23}$ with base $a$. Now let $\Psi$ be a reduced $(a,a)$-diagram over $\mathcal P_{23}$. We would like to find a preimage of it in $F_9$. According to Lemma~\ref{longpath}, we decompose $\Psi$ as $\Psi_1\circ\Psi_2^{-1}$, where $\Psi_1$, $\Psi_2$ are positive diagrams. It holds that $\mathop{\mbox{\bf bot}}(\Psi_1)=\mathop{\mbox{\bf top}}(\Psi_2^{-1})=p$, where $p$ is the longest positive path in $\Psi$ from $\iota(\Psi)$ to $\tau(\Psi)$. Now we will change $\Psi=\Psi_1\circ\Psi_2^{-1}$ and the path $p$ step by step by inserting some dipoles; we keep the same notation at each step. Suppose that the first edge of $p$ has label $b$. In this case we replace the subdiagram $\varepsilon(b)$ that consists of one edge by a dipole consisting of the two cells $(b=aba)\circ(aba=b)$. The new longest path in the diagram we obtain will still be denoted by $p$. Now look at the subwords of the form $aa$ or $bb$ in the label of $p$. Choose the leftmost of them. If it is $aa$, then we replace the second edge labelled by $a$ by the dipole $(a=bab)\circ(bab=a)$. If it is $bb$, then we also replace the second edge of it by the dipole $(b=aba)\circ(aba=b)$. After a finite number of steps, the label of the longest path $p$ becomes $abab\ldots$\ . The last letter in it will be $a$; this follows from parity arguments and the fact that the terminal vertex of $\Psi$ has colour $2$. Now we have $\Psi=\Psi_1\circ\Psi_2^{-1}$, where $\Psi_1$, $\Psi_2$ are positive $(a,a(ba)^m)$-diagrams for some $m$. It suffices to show that each diagram with this property belongs to the image of our mapping $\Delta\mapsto\Delta''$. This means that every positive $(a,a(ba)^m)$-diagram over $\mathcal P_{23}$ can be composed from basic diagrams. We also claim a symmetric statement: every positive $(b,b(ab)^m)$-diagram over $\mathcal P_{23}$ can be composed from basic diagrams.
Let $\Phi$ be one of these diagrams. We proceed by induction on the number of cells in it. If there are no cells ($m=0$), then there is nothing to prove. Otherwise, let us define the {\em depth\/} of an edge in the diagram. The top edge has depth $0$ by definition. Every other edge belongs to the bottom path of some cell $\pi$; if the top edge of $\pi$ has depth $d$, then we assign depth $d+1$ to our edge. The only important thing for us is whether $d$ is even or odd, so we talk about even and odd edges. We make the following two observations. 1) Let $e_1$, \dots, $e_s$ be all edges coming out of a vertex, read from top to bottom. Then their labels alternate between $a$ and $b$, and the same holds for the parity of their depths. The same is true for the edges coming into a vertex. 2) If two consecutive edges have the same label, then they have different parity; otherwise, if the labels are $ab$ or $ba$, the parity is the same. The first part is clear. As for the second one, let us consider only the case of edges labelled by $ab$. Let $e$ be the highest edge that ends at $v$ (the vertex between $a$ and $b$) and let $f$ be the highest edge that starts at $v$. It is easy to see that $ef$ is a part of the bottom path of a cell. Therefore, $e$ and $f$ have different labels and the same depth. Now everything follows from 1). The cases $ba$, $aa$, $bb$ are similar. Now we look again at the path $p$ (the bottom of $\Phi$). Its first label is $a$, so by 1) the first edge is even, since the top edge of $\Phi$ has label $a$ and depth $0$. Therefore, all edges of $p$ are even according to 2), since $p$ has label $abab\ldots a$. If $m > 0$, then $\Phi$ has a top cell $a=bab$ with bottom path $e_1e_2e_3$. Deleting the top cell gives us a sum of $3$ diagrams: $(e_1,p_1)+(e_2,p_2)+(e_3,p_3)$, where $p=p_1p_2p_3$. Each edge of $p_i$ has odd parity in the $i$-th summand. Therefore, $e_i$ does not belong to $p_i$. So there exists a top cell in each of the summands. Together with the cell $a=bab$ we have deleted, these three cells form a basic diagram. Removing the three cells with top edges $e_i$ ($i=1,2,3$), we get a sum of $9$ positive diagrams. Now all edges of $p$ again have even depth, so the inductive assumption can be applied to these summands. This completes the proof. So this answers Brin's question, and now we look at some generalizations. The next Fibonacci-like presentation in the series is $\mathcal P_{42}$. Its relations are $a=bc$, $b=cd$, $c=da$, $d=ab$. Applying Tietze transformations, we rewrite the presentation as $\langle\, a,b\mid a=baba,b=abaab\,\rangle$, where $d\to ab$, $c\to da\to aba$. In the second relation $b=abaab$ we replace the third occurrence of $a$ in the right-hand side by $(ba)^2$. This gives us a Tietze-equivalent presentation $\mathcal P=\langle\, a,b\mid a=(ba)^2,b=(ab)^4\,\rangle$. The diagram group over $\mathcal P$ with base $a$ is the same as the one over $\mathcal P_{42}$, according to general facts from \cite{GuSa05}. \begin{thm} \label{f11} The diagram group with base $a$ over the semigroup presentation $\langle\, a,b,c,d\mid a=bc,b=cd,c=da,d=ab\,\rangle$ is isomorphic to generalized Thompson's group $F_{11}$. \end{thm} {\bf Proof.}\ The idea of the proof is similar to the one for Theorem~\ref{f9}. We will work with the presentation $\mathcal P=\langle\, a,b\mid a=(ba)^2,b=(ab)^4\,\rangle$ instead of $\mathcal P_{42}$. Our aim is to construct a homomorphism from $F_{11}$ to $G=\mathcal D(\mathcal P,a)$. Notice that we no longer have a symmetry between $a$ and $b$. The group $F_{11}$ will be the diagram group with base $x$ over $x=x^{11}$, as usual.
Any diagram $\Delta$ over it is still a bipartite graph, since $11$ is odd and hence the boundary of every cell is a cycle of even length. So each vertex gets a colour $1$ or $2$, and each edge gets a label $a$ or $b$ by the same rules as above. This new diagram over $a=a(ba)^5$, $b=b(ab)^5$ will be denoted by $\Delta'$. Both relations can be derived from $\mathcal P$. Indeed, $a=baba$, and then we replace the first occurrence of $b$ in the right-hand side by $(ab)^4$. Thus we have a diagram of two cells over $\mathcal P$ for $a=a(ba)^5$. As for the second equality, we take $b=(ab)^4$ and replace the first occurrence of $a$ in the right-hand side by $(ba)^2$. This gives a two-cell diagram over $\mathcal P$ for $b=b(ab)^5$. These two diagrams over $\mathcal P$ will be called basic. Replacing the cells of $\Delta'$ by basic diagrams leads to the diagram $\Delta''$. In a standard way, the mapping $\Delta\mapsto\Delta''$ induces a homomorphism of the groupoids of diagrams, and therefore we have a group homomorphism from $F_{11}$ to $G$. Our aim is to establish its surjectivity. Now let $\Psi$ be a reduced $(a,a)$-diagram over $\mathcal P$. As in the proof of the previous theorem, we write $\Psi=\Psi_1\circ\Psi_2^{-1}$, where $p$ is the common part of the two pieces. We are going to insert certain dipoles into $\Psi$ in such a way that the label of $p$ will have the form $abab\ldots$\ . Suppose that the label of $p$ starts with $b$. Then we insert a dipole of the form $(b=(ab)^4)\circ((ab)^4=b)$ in place of the first edge of $p$. The new path is still denoted by $p$. If its label has an occurrence of $aa$ or $bb$, then we take the leftmost of them. In case it is $aa$, we replace the second edge by the dipole $(a=(ba)^2)\circ((ba)^2=a)$. In case it is $bb$, the second edge is replaced by a dipole of the form given at the beginning of this paragraph. So in a finite number of steps we get a decomposition into a product of two diagrams, one positive and one negative. It thus suffices to show that every positive $(a,abab\ldots)$-diagram $\Phi$ is in the image of the mapping $\Delta\mapsto\Delta''$.
Now we prove that any positive $(a,abab\ldots)$-diagram over $\mathcal P$ can be composed from basic diagrams, together with an analogous statement for positive $(b,baba\ldots)$-diagrams over $\mathcal P$. We prove both facts simultaneously by induction on the number of cells in a diagram $\Phi$ with this property. If $\Phi$ has no cells, there is nothing to prove. First let $\Phi$ have $a$ as its top label. Notice that the defining relations of $\mathcal P$ always preserve the last letter of a word. So $a$ cannot be equal modulo this presentation to a word that ends with $b$. Hence $\Phi$ is an $(a,(ab)^ma)$-diagram for some $m\ge1$. The top cell of $\Phi$ has the form $a=(ba)^2$. Since the bottom path $p$ starts with $a$, the first letter $b$ of the word $(ba)^2$ must correspond to the top edge of a cell $b=(ab)^4$. These two cells form a basic diagram, so we can cut it off. The rest is a diagram with top path labelled $a(ba)^5$ and bottom path $p$ labelled by $(ab)^ma$. All vertices of a positive diagram belong to its bottom path, so this diagram decomposes into a sum of diagrams, each of which has top label $a$ or $b$. If the top label is $a$, then the bottom label of the summand ends with $a$. The length of this bottom path is odd, so the bottom label has the form $(ab)^ka$ for some $k\ge0$. If the top label of a summand is $b$, the same argument shows that the bottom label is of the form $(ba)^kb$. Thus all the summands satisfy the inductive assumption (they have fewer cells than $\Phi$), and therefore they can be decomposed into basic diagrams. Now let $\Phi$ have $b$ as its top label. The top cell now is $b=(ab)^4$, and the bottom path $p$ now starts with $b$. Thus the first letter $a$ of $(ab)^4$ corresponds to the top edge of a cell $a=(ba)^2$. The two cells together form a basic diagram. We cut it off and then repeat the same arguments as in the previous paragraph. The image of the homomorphism is not Abelian. As above, we use the fact that the generalized Thompson groups $F_r$ have no proper non-Abelian homomorphic images. Thus we have an isomorphism $F_{11}\cong G_{42}$. The proof is complete. Notice that the Fibonacci {\bf group} presented by $\mathcal P$ is the cyclic group $\mathbb Z_5$. The semigroup with the same presentation is also finite; it has $10$ elements.
However, for $n\ge5$ the Fibonacci semigroups presented by (\ref{fs}) turn out to be infinite. This makes the structure of the diagram groups $G_{n2}$ unclear in that case (it is even possible that these groups are trivial). As for the generalization in the other direction, we are able to describe completely the diagram groups over (\ref{pnr}) for the case $n=2$. \begin{thm} \label{johnn} Let $s$ be a positive integer. The diagram group with base $a$ over $\langle\, a,b\mid a=b(ab)^s,b=a(ba)^s\,\rangle$ is isomorphic to generalized Thompson's group $F_{(2s+1)^2}$. The diagram group with base $a$ over $\langle\, a,b\mid a=(ba)^s,b=(ab)^s\,\rangle$ is isomorphic to generalized Thompson's group $F_{4s-1}$. So the group $G_{2r}=\mathcal D(\mathcal P_{2r},a)$ is isomorphic to $F_{r^2}$ for odd $r$ and to $F_{2r-1}$ for even $r$, where $\mathcal P_{2r}=\langle\, a,b\mid a=ba\ldots,b=ab\ldots\,\rangle$ with the right-hand sides of the defining relations of length $r\ge2$. \end{thm} {\bf Proof.}\ The case of odd $r=2s+1$ has the same proof as Theorem~\ref{f9}. The basic diagrams here consist of $r+1$ cells. They correspond to the derivation that starts with $a=b(ab)^s$ and then replaces each of the $r$ letters of the right-hand side according to the defining relations, and similarly for $b=a(ba)^s$ (we have complete symmetry here). The bottom labels of the basic diagrams have length $r^2$. The proof goes through without any changes for general odd $r$. Now let $r=2s$ be even. The construction of the basic diagrams here is simpler: they consist of two cells only. There is some similarity here to the construction from the proof of Theorem~\ref{f11}. Namely, we take the cell $a=(ba)^s$ and replace the first letter of the right-hand side by $(ab)^s$. As a result, we get an $(a,(ab)^{2s-1}a)$-diagram of two cells. We call it basic, as well as the analogous $(b,(ba)^{2s-1}b)$-diagram of two cells. The bottom paths here have length $4s-1=2r-1$, so we are able to construct a homomorphism from $F_{2r-1}$ to the diagram group and then show that it is an isomorphism. The construction here is slightly easier than the one from the proof of Theorem~\ref{f11} because of the symmetry. This completes the proof. \end{document}
\begin{document} \ifJOC \TITLE{SOS-SDP: an Exact Solver for Minimum Sum-of-Squares Clustering} \ARTICLEAUTHORS{ \AUTHOR{Veronica Piccialli, Antonio M.~Sudoso} \AFF{University of Rome Tor Vergata, \EMAIL{\href{mailto:[email protected]}{[email protected]}}, ORCiD: 0000-0002-3357-9608, \EMAIL{\href{mailto:[email protected]}{[email protected]}}, ORCiD: 0000-0002-2936-9931, \URL{}} \AUTHOR{Angelika Wiegele} \AFF{Universität Klagenfurt, \EMAIL{\href{mailto:[email protected]}{[email protected]}}, ORCiD: 0000-0003-1670-7951} } \else \title{SOS-SDP: an Exact Solver for Minimum Sum-of-Squares Clustering} \date{\today} \author{Veronica Piccialli, Antonio M.~Sudoso, Angelika Wiegele} \fi \ifJOC \ABSTRACT{ The minimum sum-of-squares clustering problem (MSSC) consists of partitioning $n$ observations into $k$ clusters in order to minimize the sum of squared distances from the points to the centroid of their cluster. In this paper, we propose an exact algorithm for the MSSC problem based on the branch-and-bound technique. The lower bound is computed by using a cutting-plane procedure where valid inequalities are iteratively added to the Peng-Wei SDP relaxation. The upper bound is computed with the constrained version of $k$-means where the initial centroids are extracted from the solution of the SDP relaxation. In the branch-and-bound procedure, we incorporate instance-level must-link and cannot-link constraints to express knowledge about which data points should or should not be grouped together. We manage to reduce the size of the problem at each level while preserving the structure of the SDP problem itself. The obtained results show that the approach allows us to successfully solve, for the first time, real-world instances with up to 4000 data points. } \maketitle \else \maketitle \begin{abstract} The minimum sum-of-squares clustering problem (MSSC) consists of partitioning $n$ observations into $k$ clusters in order to minimize the sum of squared distances from the points to the centroid of their cluster. In this paper, we propose an exact algorithm for the MSSC problem based on the branch-and-bound technique. The lower bound is computed by using a cutting-plane procedure where valid inequalities are iteratively added to the Peng-Wei SDP relaxation. The upper bound is computed with the constrained version of $k$-means where the initial centroids are extracted from the solution of the SDP relaxation. In the branch-and-bound procedure, we incorporate instance-level must-link and cannot-link constraints to express knowledge about which data points should or should not be grouped together. We manage to reduce the size of the problem at each level while preserving the structure of the SDP problem itself. The obtained results show that the approach allows us to successfully solve, for the first time, real-world instances with up to 4000 data points. \end{abstract} \fi
\section{Introduction}\label{sec:intro} Clustering is the task of partitioning a set of objects into homogeneous and/or well-separated groups, called clusters. Cluster analysis is the discipline that studies methods and algorithms for clustering objects according to a suitable similarity measure. It belongs to unsupervised learning since it does not use class labels. Two main clustering approaches exist: hierarchical clustering, which assumes a tree structure in the data and builds nested clusters, and partitional clustering. Partitional clustering generates all the clusters at the same time without assuming a nested structure. Among partitional clustering methods, the minimum sum-of-squares clustering problem (MSSC), or sum-of-squares (SOS) clustering, is one of the most popular and well studied. MSSC asks to partition $n$ given data points into $k$ clusters so that the sum of the squared Euclidean distances from each data point to its cluster centroid is minimized. The MSSC commonly arises in a wide range of disciplines and applications, as for example image segmentation \citep{dhanachandra2015image, shi2000normalized}, credit risk evaluation \citep{CARUSO2021100850}, biology \citep{jiang2004cluster}, customer segmentation \citep{syakur2018integration}, document clustering \citep{mahdavi2009harmony}, and as a technique for missing-value imputation \citep{zhang2006clustering}. The MSSC can be stated as follows for fixed $k$: \begin{subequations} \label{eq:MSSC} \begin{align} \min~ & \sum_{i=1}^n \sum_{j=1}^k x_{ij}\|p_i - c_j\|^2 \\ \textrm{s.t.}~ & \sum_{j=1}^k x_{ij} = 1,\quad \forall i \in \{1,\dots,n\} \label{eq:MSSCa}\\ & \sum_{i=1}^n x_{ij} \ge 1, \quad \forall j \in \{1,\dots,k\}\label{eq:MSSCb}\\ & x_{ij} \in \{0,1\}, \quad \forall i \in \{1,\dots,n\}\; \forall j \in \{1,\dots,k\} \\ & c_j \in \mathbb{R}^d, \quad \forall j \in \{1, \dots, k \}. \end{align} \end{subequations} Here, $p_i \in \mathbb{R}^d$, $i \in \{1,\dots,n\}$, are the data points, where $d$ is the number of features, and the centers of the $k$ clusters are at the (unknown) points $c_j$, $j\in \{1,\dots,k\}$. For convenience, we sometimes collect all the data points $p_i$ as rows in a matrix $W_p$. The binary decision variable $x_{ij}$ expresses whether data point $i$ is assigned to cluster $j$ or not. Constraints~\eqref{eq:MSSCa} make sure that each point is assigned to a cluster, and constraints~\eqref{eq:MSSCb} guarantee that none of the $k$ clusters is empty. Setting the gradient of the objective function with respect to $c$ to zero yields \begin{equation*} \sum_{i=1}^n x_{ij}(c^r_j - p^r_i) = 0,\quad \forall j \in \{1,\dots,k\}\; \forall r \in \{1,\dots,d\} \end{equation*} and we obtain the formula for the centroid of each cluster \begin{equation*} c^r_j = \frac{\sum_{i=1}^n x_{ij} p^r_i}{\sum_{i=1}^n x_{ij}}, \quad \forall j \in \{1,\dots,k\}\; \forall r \in \{1,\dots,d\}. \end{equation*} Substituting this formula for $c$ into~\eqref{eq:MSSC}, we get \begin{subequations} \label{eq:MSSC2} \begin{align} \min~ & \sum_{i=1}^n \sum_{j=1}^k x_{ij}\Big\|p_i - \frac{\sum_{l=1}^n x_{lj} p_l}{\sum_{l=1}^n x_{lj}}\Big\|^2 \\ \textrm{s.t.}~ & \sum_{j=1}^k x_{ij} = 1,\quad \forall i \in \{1,\dots,n\}\\ & \sum_{i=1}^n x_{ij} \ge 1, \quad \forall j \in \{1,\dots,k\}\\ & x_{ij} \in \{0,1\}, \quad \forall i \in \{1,\dots,n\}\; \forall j \in \{1,\dots,k\}. \end{align} \end{subequations}
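To make the reduced formulation~\eqref{eq:MSSC2} concrete, here is a short NumPy sketch (our own illustration, not part of the paper; all names are ours) that evaluates the MSSC objective for a given hard assignment by computing the cluster centroids and summing the squared distances.
\begin{verbatim}
import numpy as np

def mssc_objective(P, labels, k):
    """Evaluate the MSSC objective for data P (n x d) and a hard
    assignment labels[i] in {0, ..., k-1} (illustrative sketch)."""
    total = 0.0
    for j in range(k):
        cluster = P[labels == j]            # points assigned to cluster j
        if cluster.shape[0] == 0:
            raise ValueError("empty cluster: assignment is infeasible")
        centroid = cluster.mean(axis=0)     # c_j = sum_i x_ij p_i / sum_i x_ij
        total += ((cluster - centroid) ** 2).sum()
    return total

# Example: five points in the plane, k = 2
P = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1], [4.0, 4.0], [4.1, 3.9]])
labels = np.array([0, 0, 0, 1, 1])
print(mssc_objective(P, labels, k=2))
\end{verbatim}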
\subsection{Literature Review} The MSSC is known to be NP-hard in $\mathbb{R}^2$ for general values of $k$ \citep{mahajan2012planar}, and in higher dimension even for $k=2$ \citep{aloise2009np}. The one-dimensional case is proven to be solvable in polynomial time. In particular, \cite{wang2011ckmeans} proposed an $O(kn^2)$ time and $O(kn)$ space dynamic programming algorithm for solving this special case. Because of MSSC's computational complexity, heuristic approaches and approximate algorithms are usually preferred over exact methods. The most popular heuristic for solving MSSC is $k$-means \citep{macqueen1967some, lloyd1982least}, which alternates between assigning each point to its closest centroid and recomputing the centroids, until the centroids no longer move. The main disadvantage of $k$-means is that it produces locally optimal solutions that can be far from the global minimum, and it is extremely sensitive to the initial choice of centroids. For this reason, a lot of research has been dedicated to finding efficient initializations for $k$-means (see for example \cite{arthur2006k,improvedkmeans2018,franti2019much} and references therein). However, an efficient initialization may not be enough on some instances, so different strategies have been implemented in order to improve the exploration capability of the algorithm. A variety of heuristics and metaheuristics have been proposed, following the standard metaheuristic framework, e.g., simulated annealing \citep{lee2021simulated}, tabu search \citep{ALSULTAN19951443}, variable neighborhood search \citep{HANSEN2001405,Orlov2018}, iterated local search \citep{likas2003global}, and evolutionary algorithms \citep{MAULIK20001455,SARKAR1997975}. In the works of \cite{tao2014new,BAGIROV201612,KARMITSA2017367,KARMITSA2018245}, DC (Difference of Convex functions) programming is used to define efficient heuristic algorithms for clustering large datasets. The algorithm $k$-means has also been used as a local-search subroutine within other algorithms, as in the population-based metaheuristic developed in \cite{gribel2019hg} and in the differential evolution scheme proposed in \cite{Schoen2021}. Recently, thanks to the enhancements in computational power and to the progress in mathematical programming, the exact resolution of MSSC has become much more achievable. In this direction, mathematical programming algorithms based on branch-and-bound and column generation have produced guaranteed globally optimal solutions for small- and medium-scale instances. Due to the NP-hardness of the MSSC, the computational time of globally optimal algorithms quickly increases with the size of the problem. However, besides the importance of finding optimal solutions for some clustering applications, certified optimal solutions remain extremely valuable as a benchmark tool, since they can be used for evaluating, improving, and developing heuristics and approximate methods. Compared to the huge number of papers proposing heuristics and approximate methods for the MSSC problem, the number of articles proposing exact algorithms is much smaller. One of the earliest attempts was the integer programming formulation proposed by \citet{rao1971cluster}, which requires the cluster sizes to be fixed in advance and is limited to small instances. A first branch-and-bound algorithm was proposed by \citet{koontz1975branch} and extended by \citet{diehr1985evaluation}.
The idea is to use partial clustering solutions on a subset $S$ of the main dataset $D$ to determine improved bounds and clusters on the entire sample by a branch-and-bound search. The key observation is that the optimal objective function value of the MSSC on $D$ is greater than or equal to the optimal objective function value of the MSSC on $S$ plus the optimal objective function value of the MSSC on $D - S$. This approach was later improved by \citet{brusco2006repetitive}, who developed a repetitive-branch-and-bound algorithm (RBBA). After a proper reordering of the entities in $D$, RBBA solves a sequence of subproblems of increasing size with the branch-and-bound technique. While performing a branch-and-bound for a certain subproblem, Brusco's algorithm exploits the optimal solutions found for the previous subproblems, which provide tighter bounds compared to the ones used by \cite{koontz1975branch} and \cite{diehr1985evaluation}. RBBA provided optimal solutions for well-separated synthetic datasets with up to 240 objects. Poorly separated problems with no inherent cluster structure were optimally solved for up to 60 objects. \citet{sherali2005global} proposed a different branch-and-bound algorithm where tight lower bounds are determined by using the reformulation-linearization technique (RLT), see \citet{sherali1998reformulation}. The authors claim that this algorithm allows for the exact resolution of problems of size up to 1000 entities, but those results seem to be hard to reproduce. The computing times in an attempted replication by \citet{aloise2011evaluating} were already high for real datasets with about 20 objects. A column generation algorithm for MSSC was proposed by \citet{du1999interior}. The master problem is solved by an interior point method, whereas the auxiliary problem of finding a column with negative reduced cost is expressed as a hyperbolic program with binary variables. Variable-neighborhood-search heuristics are used to find a good initial solution and to accelerate the resolution of the auxiliary problem. This approach has been considered a successful one, since it solved for the first time medium-size benchmark instances (i.e., instances with 100--200 entities), including the popular Iris dataset, which contains 150 entities. However, the bottleneck of the algorithm lies in the resolution of the auxiliary problem, and more precisely, in the unconstrained quadratic 0-1 optimization problem. Later this algorithm was further improved by \citet{aloise2012improved}, who defined a different geometric-based approach for solving the auxiliary problem. In particular, the solution of the auxiliary problem is achieved by solving a certain number of convex quadratic problems. If the points to be clustered are in the plane, the maximum number of convex problems to solve is polynomially bounded. When the points are not in the plane, in order to solve the auxiliary problems the cliques in a certain graph (induced by the current solution of the master problem) have to be found. The algorithm is more efficient when the graph is sparse, and the graph becomes sparser when the number of clusters $k$ increases. Therefore, the algorithm proposed in \citet{aloise2012improved} is particularly efficient in the plane and when $k$ is large. Their method was able to provide exact solutions for large-scale problems, including one instance with 2300 entities, when the ratio between $n$ and $k$ is small.
Recently, by using matrix arguments, \citet{peng2007approximating} proved the equivalence between the MSSC formulation and a model called 0-1 semidefinite programming (SDP), in which the eigenvalues of the matrix variable are binary. Using this result, \citet{aloise2009branch} proposed a branch-and-cut algorithm for MSSC where lower bounds are obtained from the linear programming relaxation of the 0-1 SDP model. This algorithm manages to obtain exact solutions for datasets with up to 200 entities, with computing times comparable to those obtained by the column generation method proposed by \citet{du1999interior}. Constant-factor approximation algorithms have also been developed in the literature, both for a fixed number of clusters $k$ and for a fixed dimension $d$ \citep{kanungo2004local}. Among these methods, \citet{peng2007approximating} proposed a rounding procedure to extract a feasible solution of the original MSSC from the approximate solution of the relaxed SDP problem. More in detail, they use Principal Component Analysis (PCA) to reduce the dimension of the dataset and then perform clustering in the projected PCA space. They showed that this algorithm can provide a 2-approximate solution to the MSSC. More recently, \citet{prasad2018improved} proposed a new approximation algorithm that utilizes an improved copositive conic reformulation of the MSSC. Starting from this reformulation, the authors derived a hierarchy of accurate SDP relaxations obtained by replacing the completely positive cone with progressively tighter semidefinite outer approximations. Their SDP relaxations provide better lower bounds than the Peng-Wei one but do not scale well when the size of the problem increases. \subsection*{Main results and outline} The main contributions of this paper are the following: \begin{description} \item[(i)] we define the first SDP-based branch-and-bound algorithm for MSSC, and we use a cutting-plane procedure for strengthening the bound, following a recent strand of research \citep{demeijer2021sdpbased}; \item[(ii)] we define a shrinking procedure that allows reducing the size of the problem when introducing must-link constraints; \item[(iii)] we exploit the SDP solution for a smart initialization of the constrained version of $k$-means that yields high-quality upper bounds; \item[(iv)] for the first time, we manage to find the exact solution for instances of size up to $n=4000$. \end{description} This paper is structured as follows. In Section~\ref{sec:bound} we introduce equivalent formulations for the MSSC and derive relaxations based on semidefinite programming (SDP). In Section~\ref{sec:branching} we analyze the SDP problems that arise at each node within the branch-and-bound tree and discuss the selection of the branching variable. In Section~\ref{sec:bab} the details of the bound computation are discussed, including a post-processing procedure that produces a ``safe'' bound from an SDP that is solved to medium precision only. Section~\ref{sec:heuristic} gives all the details on the heuristic used to generate feasible clusterings. The details of our implementation and exhaustive numerical results are presented in Section~\ref{sec:numericalresults}. Finally, Section~\ref{sec:conclusion} concludes the paper. \subsection*{Notation} Let ${\mathcal S}^n$ denote the set of all $n\times n$ real symmetric matrices. We denote by $M\succeq 0$ that matrix $M$ is positive semidefinite and let ${\mathcal S}_+^n$ be the set of all positive semidefinite matrices of order $n\times n$.
We denote by $\inprod{\cdot}{\cdot}$ the trace inner product. That is, for any $M, N \in \mathbb{R}^{n\times n}$, we define $\inprod{M}{N}:= \textrm{trace} (M^\top N )$. Its associated norm is the Frobenius norm, denoted by $\| M\|_F := \sqrt{\textrm{trace} (M^\top M )}$. We define the linear map $\mathcal{A}: {\mathcal S}^n \rightarrow \mathbb{R}^{m_1}$ as $(\mathcal{A}(X))_i = \inprod{A_i}{X}$, where $A_i \in {\mathcal S}^n$, $i=1,\dots,m_1$, and the linear map $\mathcal{B}: {\mathcal S}^n \rightarrow \mathbb{R}^{m_2}$ as $(\mathcal{B}(X))_i = \inprod{B_i}{X}$, where $B_i \in {\mathcal S}^n$, $i=1,\dots,m_2$. We denote by $e_n$ the vector of all ones of length $n$ and omit the subscript when the dimension is clear from the context. We denote by $E_{i}$ the symmetric matrix such that $\inprod{E_{i}}{Z}$ is the sum of row~$i$ of $Z$.
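As a small added illustration of this notation (ours, not part of the paper), the following NumPy snippet checks the relation between the trace inner product and the Frobenius norm, and verifies one possible choice of $E_i$, namely $E_i=\tfrac12(e_ie^\top+ee_i^\top)$; this explicit matrix is an assumption on our part, since the paper does not spell it out.
\begin{verbatim}
import numpy as np

n, i = 4, 1                               # illustrative size and index (ours)
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
Z = A + A.T                               # a symmetric test matrix

inprod = lambda M, N: np.trace(M.T @ N)   # <M, N> = trace(M^T N)
print(np.isclose(inprod(Z, Z), np.linalg.norm(Z, "fro") ** 2))

# One possible E_i with <E_i, Z> = sum of row i (for symmetric Z)
e_i = np.eye(n)[i]
e = np.ones(n)
E_i = 0.5 * (np.outer(e_i, e) + np.outer(e, e_i))
print(np.isclose(inprod(E_i, Z), Z[i, :].sum()))
\end{verbatim}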
\section{A Lower Bound based on Semidefinite Programming}\label{sec:bound} We briefly recall the Peng-Wei SDP relaxation of Problem~\eqref{eq:MSSC2}, which will be the basis of the bounding procedures within our exact algorithm. Consider the matrix $W$ whose entries are the inner products of the data points, i.e., $W_{ij} = p_i^\top p_j$ for $i,j \in \{1,\dots,n\}$. Furthermore, collect the binary decision variables $x_{ij}$ from~\eqref{eq:MSSC2} in the $n\times k$ matrix $X$ and define the matrix $Z$ as \begin{equation*} Z = X(X^\top X)^{-1}X^\top. \end{equation*} \citet{peng2007approximating} introduced a different but equivalent formulation for the MSSC, yielding the following optimization problem: \begin{subequations} \label{eq:PengSDP} \begin{align} \min~ & \inprod{-W}{Z} \\ \textrm{s.t.}~ & Ze = e\\ & \textrm{tr}(Z) = k\\ & Z \ge 0, \ Z^2 = Z, \ Z = Z^\top. \end{align} \end{subequations} We can convert Problem~\eqref{eq:PengSDP} into a rank-constrained optimization problem. In fact, we can replace the constraints $Z^2 = Z$ and $Z = Z^\top$ with a rank constraint and a positive semidefiniteness constraint on $Z$, yielding the following problem: \begin{subequations} \label{eq:RankSDP} \begin{align} \min~ & \inprod{-W}{Z} \\ \textrm{s.t.}~ & Ze = e\\ & \textrm{tr}(Z) = k\\ & Z \ge 0, \ Z \in {\mathcal S}^n_+\\ & \textrm{rank}(Z) = k. \end{align} \end{subequations} In order to prove the equivalence of Problems~\eqref{eq:PengSDP} and~\eqref{eq:RankSDP}, we need the definition of an idempotent matrix and its characterization in terms of eigenvalues given by Lemma~\ref{lemma:ideig}. \begin{definition} A symmetric matrix $Z$ is idempotent if $Z^2 = ZZ = Z$. \end{definition} \begin{lemma}\label{lemma:ideig} A symmetric matrix $Z$ is idempotent if and only if all its eigenvalues are either 0 or 1. \end{lemma} \ifJOC \proof{Proof.} \else \begin{proof} \fi Let $Z$ be idempotent, let $\lambda$ be an eigenvalue of $Z$ and $v$ a corresponding eigenvector; then $\lambda v = Zv = ZZv = \lambda Zv = \lambda^2 v$. Since $v \neq 0$, we find $\lambda - \lambda^2 = \lambda (1 - \lambda) = 0$, so either $\lambda = 0$ or $\lambda = 1$. To prove the other direction, consider the eigenvalue decomposition of $Z$, $Z = P \Lambda P^\top$, where $\Lambda$ is a diagonal matrix having the eigenvalues $0$ and $1$ on the diagonal, and $P$ is orthogonal. Then, since $\Lambda^2 = \Lambda$, we get \begin{equation*} Z^2 = P \Lambda P^\top P \Lambda P^\top = P \Lambda^2 P^\top = P \Lambda P^\top = Z. \end{equation*} \ifJOC \Halmos \endproof \else \end{proof} \fi \begin{theorem} Problems~\eqref{eq:PengSDP} and~\eqref{eq:RankSDP} are equivalent. \end{theorem} \ifJOC \proof{Proof.} \else \begin{proof} \fi Let $Z$ be a feasible solution of Problem \eqref{eq:PengSDP}. We first show that $Z^2 = Z$ and $Z = Z^\top$ imply $Z \in {\mathcal S}^n_+$. In fact, for all $v$ we have: \begin{equation*} v^\top Z v = v^\top Z^2 v = v^\top Z Z v = v^\top Z (v^\top Z^\top)^\top = (v^\top Z) (v^\top Z)^\top = \| v^\top Z \|_2^2 \geq 0. \end{equation*} Since $Z$ is symmetric idempotent, the number of eigenvalues equal to 1 is $\textrm{tr}(Z) = \textrm{rank}(Z) = k$. To prove the other direction, let $Z$ be a feasible solution of Problem \eqref{eq:RankSDP}. Since $\textrm{rank}(Z) = k$, $Z$ has $n-k$ eigenvalues equal to 0.
Furthermore, let $\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_n\ge 0$ be the eigenvalues of $Z$; then \begin{equation*} \textrm{tr}(Z) = \sum_{i=1}^{n} \lambda_i = \sum_{i=1}^{k} \lambda_i + \sum_{i=k+1}^{n} \lambda_i = \sum_{i=1}^{k} \lambda_i = k. \end{equation*} Constraints $Z\succeq 0$, $Z\ge 0$ and $Ze=e$ imply that the eigenvalues of $Z$ are bounded by one (see, e.g., Lemma~\ref{lem:eigboundZ}). Hence, the trace constraint is satisfied if and only if the $k$ positive eigenvalues are all equal to~1. This shows that $\lambda(Z) \in \{0, 1\}$ and therefore $Z$ is symmetric idempotent. \ifJOC \par \Halmos \endproof \else \end{proof} \fi By dropping the non-convex rank constraint from Problem~\eqref{eq:RankSDP}, we obtain the SDP relaxation, which is the convex optimization problem \begin{subequations} \label{eq:SDP} \begin{align} \min~ & \inprod{-W}{Z} \\ \textrm{s.t.}~ & Ze = e\\ & \textrm{tr}(Z) = k\\ & Z \ge 0, \ Z \in {\mathcal S}^n_+. \end{align} \end{subequations} \subsection{Strengthening the Bound through Inequalities} The SDP relaxation~\eqref{eq:SDP} can be tightened by adding valid inequalities and solving the resulting SDP in a cutting-plane fashion. In this section, we present the classes of inequalities we use for strengthening the bound; the corresponding separation routines are described in Section~\ref{sec:boundcomp}. We consider three different sets of inequalities: \begin{description} \item[Pair inequalities.] In any feasible solution of~\eqref{eq:RankSDP}, it holds that \begin{equation}\label{eq:pairs} Z_{ij}\le Z_{ii},\quad Z_{ij}\le Z_{jj}\quad \forall i,j \in \{1,\dots,n\}, i\not=j. \end{equation} This set of $n(n-1)$ inequalities was used by \citet{peng2005new} and in the branch-and-cut algorithm proposed by \citet{aloise2009branch}. \item[Triangle inequalities.] The triangle inequalities are based on the observation that if points $i$ and $j$ are in the same cluster and points $j$ and $h$ are in the same cluster, then points $i$ and $h$ necessarily must be in the same cluster. The resulting $3\binom{n}{3}$ inequalities are: \begin{equation}\label{eq:triangle} Z_{ij}+Z_{ih}\le Z_{ii}+Z_{jh}\quad \forall i,j,h \in \{1,\dots,n\}, i,j,h ~\mathrm{distinct}. \end{equation} These inequalities were already introduced by \citet{peng2005new}, and used also by \citet{aloise2009branch}. \item[Clique inequalities.] If the number of clusters is $k$, then for any subset $Q$ of $k+1$ points at least two of them have to be in the same cluster (meaning that at least one entry $Z_{ij}$ with $i,j\in Q$, $i\not=j$, must be positive and equal to $Z_{ii}$). This can be enforced by the following inequalities: \begin{equation}\label{eq:clique} \sum_{i,j\in Q,\,i<j}Z_{ij}\ge \frac{1}{n-k+1} \quad\forall Q\subset\{1,\ldots,n\},\,|Q|=k+1. \end{equation} These $\binom{n}{k+1}$ inequalities are similar to the clique inequalities for the $k$-partitioning problem \citep{chopra1993partition}; the difference lies in the right-hand side, which in that case is equal to~1, whereas here we use the smallest possible value that a diagonal element of $Z$ can take. \end{description} Pair and triangle inequalities are known to be valid for Problem~\eqref{eq:PengSDP}, see \cite{peng2005new} and \cite{de2020ratio}. It remains to show that the clique inequalities are also valid. \begin{lemma} The clique inequalities~\eqref{eq:clique} are valid for Problem~\eqref{eq:PengSDP}.
\end{lemma} \ifJOC \proof{Proof.} \else \begin{proof} \fi The left-hand side of \eqref{eq:clique} has $\binom{k+1}{2}$ terms, and we know that $Z_{ii}\ge\frac{1}{n-k+1}$, since the cardinality of a cluster can be at most $n-k+1$. Given that the number of clusters is $k$, for any set of $k+1$ points at least two of them have to be in the same cluster, say points $i$ and $j$. Then, for any feasible clustering $Z$, at least the element $Z_{ij}$ in the left-hand side of \eqref{eq:clique} is different from zero, and therefore equal to $Z_{ii}\ge\frac{1}{n-k+1}$. Since all the other terms are nonnegative, \eqref{eq:clique} must hold. \ifJOC \Halmos \endproof \else \end{proof} \fi
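As an added numerical illustration (ours, not part of the paper; all names are our choices), the following NumPy sketch builds $Z = X(X^\top X)^{-1}X^\top$ from a hard assignment and checks the constraints of~\eqref{eq:PengSDP} together with the pair, triangle, and clique inequalities~\eqref{eq:pairs}--\eqref{eq:clique} on a tiny instance.
\begin{verbatim}
import numpy as np
from itertools import combinations, permutations

def clustering_matrix(labels, k):
    """Z = X (X^T X)^{-1} X^T for a hard assignment; Z_ij = 1/|C| if i and j
    share cluster C, and 0 otherwise (illustrative sketch)."""
    n = len(labels)
    X = np.zeros((n, k))
    X[np.arange(n), labels] = 1.0
    return X @ np.linalg.inv(X.T @ X) @ X.T

labels = np.array([0, 0, 1, 1, 1, 2])
n, k = len(labels), 3
Z = clustering_matrix(labels, k)

e = np.ones(n)
print(np.allclose(Z @ e, e), np.isclose(np.trace(Z), k))   # Ze = e, tr(Z) = k
print(np.allclose(Z @ Z, Z), (Z >= -1e-12).all())          # Z^2 = Z, Z >= 0

# Pair and triangle inequalities (all hold for a feasible clustering)
pair_ok = all(Z[i, j] <= Z[i, i] + 1e-9
              for i in range(n) for j in range(n) if i != j)
tri_ok = all(Z[i, j] + Z[i, h] <= Z[i, i] + Z[j, h] + 1e-9
             for i, j, h in permutations(range(n), 3))
# Clique inequalities: every k+1 points contribute at least 1/(n-k+1)
clique_ok = all(sum(Z[a, b] for a, b in combinations(Q, 2))
                >= 1.0 / (n - k + 1) - 1e-9
                for Q in combinations(range(n), k + 1))
print(pair_ok, tri_ok, clique_ok)
\end{verbatim}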
\section{Branching: Subproblems within a Branch-and-Bound Algorithm and Variable Selection}\label{sec:branching} Our final goal is to develop a branch-and-bound scheme to solve the MSSC to optimality using relaxation~\eqref{eq:SDP} strengthened by some of the inequalities~\eqref{eq:pairs}--\eqref{eq:clique}. In this section we examine the problems that arise after branching. To keep the presentation simple, and since everything carries over in a straightforward way, we omit in this section the inclusion of inequalities~\eqref{eq:pairs}--\eqref{eq:clique}. The branching decisions are as follows. Given a pair $(i,j)$, \begin{itemize} \item points $p_i$ and $p_j$ should be in different clusters, i.e., they \textit{cannot link}, or \item points $p_i$ and $p_j$ should be in the same cluster, i.e., they \textit{must link}. \end{itemize} By adding constraints due to the branching decisions, the problem changes. However, the structure of the SDP remains similar. In this section we describe the subproblems to be solved at each node in the branch-and-bound tree. Each such SDP is of the form \begin{subequations} \label{eq:SDPbab} \begin{align} \min~ & \inprod{-\mathcal{T}^{\ell} W (\mathcal{T}^{\ell})^\top}{Z^{\ell}} \\ \textrm{s.t.}~ & Z^{\ell} e^{\ell} = e \\ & \inprod{\textrm{Diag}(e^{\ell})}{Z^{\ell}} = k\\ & Z^\ell_{ij} = 0 \quad (i,j) \in \textrm{CL}\\ &Z^{\ell} \ge 0, \ Z^{\ell} \in \mathcal{S}^+_{n-\ell} \end{align} \end{subequations} where $\textrm{CL}$ (cannot link) is the set of pairs that must be in different clusters, and the matrix $\mathcal{T}^{\ell}$ and the vector $e^\ell$ encode the branching decisions that ask data points to be in the same cluster (i.e., they must link). We describe this in detail in the subsequent sections. \subsection{Branching Decisions} In case we want to have $i$ and $j$ in different clusters, we add the constraint $Z_{ij} = 0$ to the SDP, i.e., we add the pair $(i,j)$ to the set $\textrm{CL}$. In the other case, i.e., when the decision is to have $i$ and $j$ in the same cluster, we proceed as follows. Assume at the current node we have $n$ points and we decide that on this branch the two points $p_i$ and $p_j$ have to be in the same cluster. We can reduce the size of $W_p$ (the matrix having data points $p_i$ as rows) by substituting row $i$ by $p_{i}+p_j$ and omitting row $j$. To formalize this procedure, we introduce the following notation. Let $b(r) = (i,j)$, $i<j$, be the branching pair of the branching decision at level $r$ and $b(1),\dots,b(\ell)$ a sequence of consecutive branching decisions. Furthermore, let $g(r)=(\underline{i}, \underline{j})$ be the corresponding global indices. Define $\mathcal{T}^{\ell} \in \{0,1\}^{(n-\ell) \times n}$ as \[ \mathcal{T}^{\ell} = T^{b(\ell)} T^{b(\ell-1)} \dots T^{b(1)} \] where the $(n-r)\times (n-r+1)$ matrix $T^{b(r)}$ for branching decision $b(r)=(i,j)$ is defined by \[ T^{b(r)}_{s,\cdot} = \left\{ \begin{array}{ll} u_s & \textrm{if}~ 1 \le s < i ~\textrm{or}~ i< s< j\\ u_i + u_j & \textrm{if}~ s=i\\ u_{s+1} & \textrm{if}~ j\le s\le n-r \end{array}\right. \] with $u_s$ being the $s$-th unit row vector of size $(n-r+1)$. Furthermore, we define $T^{b(0)} = I_n$. Note that multiplying a matrix $M$ with $n-r+1$ rows by $T^{b(r)}$ from the left produces a matrix with $n-r$ rows, obtained by adding rows $i$ and $j$ of $M$ and putting the result into row $i$, while row $j$ is removed and all other rows remain the same. We also define the vector $e^\ell \in \mathbb{R}^{n-\ell}$ as \[ e^\ell = \mathcal{T}^\ell e \] where $e$ is the vector of all ones of length $n$.
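To illustrate the shrinking operator concretely, here is a minimal NumPy sketch (our own illustration, not part of the paper; function and variable names are ours, and indices are 0-based rather than 1-based).
\begin{verbatim}
import numpy as np

def shrink_matrix(m, i, j):
    """Return the (m-1) x m matrix T^{b(r)} that merges rows i and j
    (0-based, i < j) into row i and removes row j (illustrative sketch)."""
    T = np.delete(np.eye(m), j, axis=0)   # drop row j, keep the others
    T[i, j] = 1.0                         # row i becomes u_i + u_j
    return T

# Two consecutive must-link decisions on n = 5 points: merge (0,3), then (1,2)
n = 5
rng = np.random.default_rng(1)
P = rng.standard_normal((n, 2))
W = P @ P.T                               # Gram matrix of the data points

T1 = shrink_matrix(n, 0, 3)               # level 1: 4 x 5
T2 = shrink_matrix(n - 1, 1, 2)           # level 2: 3 x 4
T = T2 @ T1                               # the accumulated operator

W_shrunk = T @ W @ T.T                    # objective data of the shrunk SDP
e_ell = T @ np.ones(n)                    # multiplicities of the merged points
print(e_ell)                              # -> [2. 2. 1.]
print(np.allclose(T @ T.T, np.diag(e_ell)))   # sanity check: T T^T = Diag(e^ell)
\end{verbatim}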
\begin{remark}\label{rem:ell} Note that the entries of $e^\ell$ give the numbers of original data points that have been merged together along the branching decisions $b(1),\dots,b(\ell)$. Furthermore, $\mathcal{T}^\ell(\mathcal{T}^{\ell})^\top = \textrm{Diag}(e^\ell)$. \ifJOC $\triangle$ \fi \end{remark} We now show that this shrinking operation corresponds to the must-link branching decisions. Consider the following two semidefinite programs. \begin{subequations} \label{eq:SDPell} \begin{align} \min~ & -\inprod{\mathcal{T}^{\ell} W (\mathcal{T}^{\ell})^\top}{Z^{\ell}} \\ \textrm{s.t.}~ & Z^{\ell} e^{\ell} = e_{n-\ell} \label{eq:SDPellb}\\ & \inprod{\mathcal{T}^{\ell}(\mathcal{T}^{\ell})^\top}{Z^{\ell}} = k \label{eq:SDPellk}\\ &Z^{\ell} \ge 0, Z^{\ell} \in \mathcal{S}^+_{n-\ell} \end{align} \end{subequations} and \begin{subequations} \label{eq:SDPbranch} \begin{align} \min~ & -\inprod{W}{Z} \\ \textrm{s.t.}~ & Ze = e\\ & \inprod{I}{Z} = k\\ & Z_{i\cdot} = Z_{j\cdot} \quad \forall (i,j) = g(l), ~ l \in \{1,\dots, \ell\} \label{eq:SDbranch-rows}\\ &Z \ge 0, Z \in {\mathcal S}^n_+ \end{align} \end{subequations} \begin{theorem} Problems~\eqref{eq:SDPell} and~\eqref{eq:SDPbranch} are equivalent. \end{theorem} \ifJOC \proof{Proof.} \else \begin{proof} \fi Let $Z^{\ell}$ be a feasible solution of Problem~\eqref{eq:SDPell}. Define $Z = (\mathcal{T}^{\ell})^\top Z^{\ell} \mathcal{T}^{\ell}$. This is equivalent to expanding the matrix by replicating rows and columns according to the branching decisions. Therefore, \eqref{eq:SDbranch-rows} holds by construction. Clearly, $Z \ge 0$ and $Z\in {\mathcal S}^n_+$ hold as well. Moreover, we have that \[\inprod{I}{Z} = \inprod{I}{(\mathcal{T}^{\ell})^\top Z^{\ell} \mathcal{T}^{\ell}} = \inprod{\mathcal{T}^{\ell}(\mathcal{T}^{\ell})^\top}{Z^{\ell}} = k \] and \[ Ze = (\mathcal{T}^{\ell})^\top Z^{\ell} \mathcal{T}^{\ell} e = (\mathcal{T}^{\ell})^\top Z^{\ell} e^{\ell} = (\mathcal{T}^{\ell})^\top e_{n-\ell} = e_n. \] Furthermore, \[ \inprod{W}{Z} = \inprod{W}{(\mathcal{T}^{\ell})^\top Z^{\ell} \mathcal{T}^{\ell}} = \inprod{(\mathcal{T}^{\ell})W(\mathcal{T}^{\ell})^\top}{Z^{\ell}} \] and thus $Z$ is a feasible solution of Problem~\eqref{eq:SDPbranch} and the values of the objective functions coincide. We next prove that any feasible solution of Problem~\eqref{eq:SDPbranch} can be transformed into a feasible solution of Problem~\eqref{eq:SDPell} with the same objective function value. In order to do so, we define the matrix \[ \mathcal{D}^\ell = \textrm{Diag}(1/e^\ell)\] where $1/e^\ell$ denotes the elementwise inverse of $e^\ell$. It is straightforward to check that \[ \mathcal{D}^\ell \mathcal{T}^\ell (\mathcal{T}^\ell)^\top \mathcal{D}^\ell = \mathcal{D}^\ell. \] Assume that $Z$ is a feasible solution of Problem~\eqref{eq:SDPbranch} and set $Z^{\ell} = \mathcal{D}^\ell \mathcal{T}^{\ell} Z (\mathcal{T}^{\ell})^\top \mathcal{D}^\ell$. Since $Z$ is nonnegative and positive semidefinite, so is $Z^\ell$.
Furthermore, we can derive \begin{align*} \inprod{\mathcal{T}^{\ell}(\mathcal{T}^{\ell})^\top}{Z^{\ell}} & =\inprod{\mathcal{T}^{\ell}(\mathcal{T}^{\ell})^\top}{\mathcal{D}^\ell \mathcal{T}^{\ell} Z (\mathcal{T}^\ell)^\top \mathcal{D}^\ell}\\ &= \inprod{\mathcal{D}^\ell\mathcal{T}^{\ell}(\mathcal{T}^{\ell})^\top \mathcal{D}^\ell}{\mathcal{T}^\ell Z (\mathcal{T}^\ell)^\top }\\ &= \inprod{\mathcal{D}^\ell}{ \mathcal{T}^\ell Z (\mathcal{T}^\ell)^\top } = \sum_{l=1}^{n-\ell} \frac{1}{e^\ell_l} \sum_{j \in g(l)} \sum_{i\in g(l)} Z_{ij}\\ &\stackrel{(*)}{=} \sum_{l=1}^{n-\ell} \frac{1}{e^\ell_l} \sum_{j \in g(l)} \sum_{i\in g(l)} Z_{ii} = \sum_{l=1}^{n-\ell} \frac{1}{e^\ell_l} e^\ell_l \sum_{i\in g(l)} Z_{ii}\\ &= \sum_{l=1}^{n-\ell} \sum_{i\in g(l)} Z_{ii}= \sum_{i=1}^n Z_{ii}= k. \end{align*} Here, with a slight abuse of notation, $g(l)$ for $l\in\{1,\dots,n-\ell\}$ denotes the set of original indices that are merged into the shrunk index $l$. Note that the equality~$(*)$ holds since $Z_{i,j} = Z_{r,s}$ for any $i,j,r,s \in g(l)$. This ensures that constraint~\eqref{eq:SDPellk} holds for $Z^\ell$. To prove~\eqref{eq:SDPellb}, consider the equations \begin{align*} Z^\ell e^\ell & = \mathcal{D}^\ell \mathcal{T}^{\ell} Z (\mathcal{T}^{\ell})^\top \mathcal{D}^\ell e^\ell = \mathcal{D}^\ell \mathcal{T}^{\ell} Z (\mathcal{T}^{\ell})^\top e_{n-\ell} \\ &= \mathcal{D}^\ell \mathcal{T}^{\ell} Z e = \mathcal{D}^\ell \mathcal{T}^{\ell} e = \mathcal{D}^\ell e^\ell = e_{n-\ell}. \end{align*} It remains to show that the objective function values coincide: \begin{align*} \inprod{\mathcal{T}^{\ell} W (\mathcal{T}^{\ell})^\top}{Z^{\ell}} &= \inprod{\mathcal{T}^{\ell} W (\mathcal{T}^{\ell})^\top}{\mathcal{D}^\ell \mathcal{T}^{\ell} Z (\mathcal{T}^{\ell})^\top \mathcal{D}^\ell}\\ &= \inprod{ W }{(\mathcal{T}^{\ell})^\top\mathcal{D}^\ell \mathcal{T}^{\ell} Z (\mathcal{T}^{\ell})^\top \mathcal{D}^\ell\mathcal{T}^{\ell}}\\ &= \inprod{ W }{Z}. \end{align*} As for the last equation, note that pre- and postmultiplying $Z$ by $(\mathcal{T}^{\ell})^\top\mathcal{D}^\ell \mathcal{T}^{\ell}$ ``averages'' over the respective rows and columns of the matrix $Z$. Since these respective rows (and, by symmetry, columns) are identical due to~\eqref{eq:SDbranch-rows}, the last equation holds. \ifJOC \Halmos \endproof \else \end{proof} \fi \begin{remark} The addition of constraints $Z_{ij}=0$ for data points $i,j$ that should not belong to the same cluster also goes through in the above equivalence. However, to keep the presentation simple we did not include it in the statement of the theorem above. \ifJOC $\triangle$ \fi \end{remark} \begin{remark} It is straightforward to include the additional constraints~\eqref{eq:pairs}, \eqref{eq:triangle}, and~\eqref{eq:clique} in the subproblems, i.e., in case of shrinking the problem, the constraints are still valid. Again, to keep notation simple, we omitted these constraints in the presentation above. Further discussion on including these inequalities can be found in Section~\ref{sec:boundcomp}. \ifJOC $\triangle$ \fi \end{remark} \subsection{Variable Selection for Branching}\label{sec:variableselection} In a matrix $Z$ corresponding to a clustering, for each pair $(i,j)$ either $Z_{ij}=0$ or $Z_{ii} = Z_{ij}$. \citet{peng2005new} propose a simple branching scheme: if for the optimal solution of the SDP relaxation there are indices $i$ and $j$ such that $Z_{ij}(Z_{ii}-Z_{ij}) \neq 0$, then one can produce a cannot-link branch with $Z_{ij} = 0$ and a must-link branch with $Z_{ii} = Z_{ij}$. Regarding the variable selection, the idea is to choose indices $i$ and $j$ such that in both branches we expect a significant improvement of the lower bound.
In~\cite{peng2005new} the branching pair is chosen as \[\argmax_{i,j} \{ \min \{Z_{ij},Z_{ii}-Z_{ij}\} \}.\] Here we propose a variable selection strategy that is consistent with the way we generate the cannot-link and the must-link subproblems. In fact, we observe that in a matrix $Z$ corresponding to a clustering, for each pair $(i, j)$ either $Z_{ij} = 0$ or $Z_{i\cdot} = Z_{j\cdot}$. This motivates the following strategy to select a pair of data points to branch on: \[\argmax_{i,j} \{ \min \{Z_{ij}, \|Z_{i\cdot} - Z_{j\cdot}\|_2^2 \} \}.\] In case the value attained by this maximizer is close to zero, say below $10^{-5}$, the SDP solution corresponds to a feasible clustering. \subsubsection*{Variable selection on the shrunk problem} The strategy for the variable selection carries over to the shrunk problems. Since $Z$ is obtained from $Z^\ell$ only by repeating rows and columns, every pair $(Z^\ell_{ij}, Z^\ell_{ii})$ appears also in $Z$ and vice versa. Moreover, for points that have already been merged we have, by construction, $Z_{i\cdot}=Z_{j\cdot}$ and $Z_{ij}=Z_{ii}$, and hence such a pair can never be a branching candidate again.
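As an added illustration (ours, not from the paper), the following sketch implements the selection rule above on a given matrix $Z$; the tolerance and all names are our choices.
\begin{verbatim}
import numpy as np

def select_branching_pair(Z, tol=1e-5):
    """Return the pair (i, j) maximizing min(Z_ij, ||Z_i. - Z_j.||^2),
    or None if the maximum is below tol, i.e. Z already corresponds to a
    feasible clustering (illustrative sketch of the rule described above)."""
    n = Z.shape[0]
    best, best_pair = -np.inf, None
    for i in range(n):
        for j in range(i + 1, n):
            score = min(Z[i, j], np.sum((Z[i, :] - Z[j, :]) ** 2))
            if score > best:
                best, best_pair = score, (i, j)
    return best_pair if best > tol else None

# A slightly "fractional" SDP-like solution on 3 points (toy data)
Z = np.array([[0.6, 0.4, 0.0],
              [0.4, 0.6, 0.0],
              [0.0, 0.0, 1.0]])
print(select_branching_pair(Z))   # -> (0, 1): branch on this pair
\end{verbatim}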
\section{Branch-and-Bound Algorithm}\label{sec:bab} We now put the bound computation (see Sections~\ref{sec:bound} and~\ref{sec:branching}) together with our way of branching (see Section~\ref{sec:variableselection}) to form our algorithm \texttt{SOS-SDP}. The final ingredient, a heuristic for providing upper bounds, is described in Section~\ref{sec:heuristic}. \subsection{The Bound Computation}\label{sec:boundcomp} In order to obtain a strong lower bound, we solve the SDP relaxation~\eqref{eq:SDP} strengthened by the inequalities given in Section~\ref{sec:bound}. The enumeration of all pair and triangle inequalities is computationally intractable even for medium size instances. Therefore, we use a similar separation routine for both types of inequalities: \begin{enumerate} \item Randomly generate up to $t$ inequalities violated by at least $\varepsilon_{\mathrm{viol}}$. \item Sort the $t$ inequalities by decreasing violation. \item Add the $p\ll t$ most violated ones to the current bounding problem. \end{enumerate} As for the clique inequalities, we use the heuristic separation routine described in \cite{ghaddar2011branch} for the minimum $k$-partition problem, which returns at most $n$ valid clique inequalities. In more detail, at each cutting-plane iteration, these cuts are determined by finding $n$ subsets $Q$ in a greedy fashion. For each point $i \in S = \{1, \dots, n\}$, $Q$ is initialized as $Q = \{i\}$. Then, until the cardinality of $Q$ reaches $k+1$, $Q$ is updated as $Q = Q \cup \{\argmin_{j \in S \setminus Q} \sum_{q \in Q} Z_{qj} \}$. We denote by $\mathcal{A}(Z^\ell) = b$ the equations from the must-link and cannot-link constraints and by $l \leq \mathcal{B}(Z^\ell) \leq u$ the inequalities representing the cutting planes. The cutting-plane procedure performed at each node is outlined in Algorithm~\ref{alg:cpproc}. We stop the procedure when we reach the maximum number of iterations $cp_\textrm{max}$. Another stopping criterion is based on the relative variation of the bound between two consecutive iterations. If the variation is lower than a tolerance $\varepsilon_{\textrm{cp}}$, the cutting-plane method terminates, and we branch. At each node, we use a cut inheritance procedure to quickly retrieve several effective inequalities from the parent node and save a significant number of cutting-plane iterations during the bound computation of the children. In more detail, the inequalities that were included in the parent node during the last cutting-plane iteration are passed to its children and included in their problem from the beginning. When inheriting inequalities in the $(i,j)$ must-link child, the shrinking procedure must be taken into account: the indices in the inherited inequalities are updated, and inequalities involving both points $i$ and $j$ are deleted. In addition to cut inheritance, we use a cut management procedure. A standard cutting-plane algorithm does not touch valid inequalities once they have been included. However, the efficiency of state-of-the-art SDP solvers deteriorates considerably as these cuts accumulate, especially when solving instances that are large in terms of $n$. For this reason, after solving the current SDP, we remove the constraints that are not active at the optimum.
Of course, inactive constraints may become active again in a subsequent cutting-plane iteration, and this operation could prevent the lower bounds from increasing monotonically; however, empirical results show that this situation happens rarely, and when it does, we stop the cutting-plane procedure and branch. From a practical standpoint, removing inactive constraints makes a huge difference since it keeps the SDP problem at a computationally tractable size. The result is that each cutting-plane iteration is more lightweight in comparison to the standard version, and this significantly impacts the overall efficiency of our branch-and-bound algorithm. Our strategy turns out to be more efficient than adding cuts only at the root node and inheriting them in the children. Indeed, if we add cuts only at the root node, the number of nodes in the tree increases since the bound does not improve as much as by repeating the separation routine in each node. Even though each single node is then faster, since only one SDP is solved, the overall computational time increases. \begin{algorithm} \SetKw{Init}{Initialization:} \SetKw{Or}{or} \SetKw{Stop}{stop;} \KwData{A subproblem defined through the current set of equalities $\mathcal{A}(Z^\ell) = b$, and inequalities $l \leq \mathcal{B}(Z^\ell) \leq u$, the current global upper bound $\varphi$, the maximum number of cutting-plane iterations $cp_{\max}$, the cutting-plane tolerance $\varepsilon_{\mathrm{cp}}$, the cuts violation tolerance $\varepsilon_{\mathrm{viol}}$, and the cuts removal tolerance $\varepsilon_{\mathrm{act}}$.} \KwResult{A lower bound $\hat{\delta}^\ell$ on the optimal value of the subproblem} \Init $i \leftarrow 1$, $\hat{\delta}_0^\ell \leftarrow -\infty$\; \Repeat{no violated inequalities found}{ solve the current SDP relaxation: \begin{equation*} \hat{\delta}_i^\ell = \min \big\{ \inprod{-\mathcal{T}^{\ell} W (\mathcal{T}^{\ell})^\top}{Z^{\ell}} \colon \mathcal{A}(Z^\ell) = b, \ l \leq \mathcal{B}(Z^\ell) \leq u, \ Z^\ell \ge 0, \ Z^\ell \in {\mathcal S}^{n-\ell}_+ \big\} \end{equation*} and let $\hat{Z}_i^\ell$ be the optimizer\; \If{$\hat{\delta}_i^\ell \geq \varphi$}{ \Stop the node can be pruned\; } \If{$i \geq cp_{\max}$ \Or $\frac{| \hat{\delta}_i^\ell - \hat{\delta}_{i-1}^\ell |}{|\hat{\delta}_{i-1}^\ell|} \leq \varepsilon_{\mathrm{cp}}$}{\Stop return the lower bound $\hat{\delta}_i^\ell$ and branch\;} remove inactive inequalities with tolerance $\varepsilon_{\mathrm{act}}$ by updating $(\mathcal{B}(\cdot), l, u)$\; apply the separation routines for pair, triangle and clique inequalities with tolerance $\varepsilon_{\mathrm{viol}}$\; \eIf{no violated inequalities found}{\Stop return the lower bound $\hat{\delta}_i^\ell$ and branch\;} {add the inequalities by updating $(\mathcal{B}(\cdot), l, u)$\; set $i \leftarrow i + 1$\;} } \caption{The node processing loop in the branch-and-cut algorithm} \label{alg:cpproc} \end{algorithm}
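To summarize the control flow of Algorithm~\ref{alg:cpproc} outside of pseudocode, the following Python sketch reproduces the main loop; the callables \texttt{solve\_sdp} and \texttt{separate\_cuts} are placeholders for solver-specific code (e.g., a call to SDPNAL+) and are illustrative assumptions rather than part of the actual implementation.
\begin{verbatim}
def cutting_plane_bound(solve_sdp, separate_cuts, upper_bound,
                        cp_max=50, eps_cp=1e-4):
    """Sketch of the node-processing loop.

    solve_sdp(cuts) -> (bound, Z, active): solves the current relaxation
        and flags which of the passed cuts are active at the optimum.
    separate_cuts(Z) -> list of violated inequalities (possibly empty).
    """
    cuts, prev = [], float("-inf")
    for _ in range(cp_max):
        bound, Z, active = solve_sdp(cuts)
        if bound >= upper_bound:              # the node can be pruned
            return bound, Z, True
        if prev > float("-inf") and abs(bound - prev) <= eps_cp * max(abs(prev), 1.0):
            break                             # tailing off: stop and branch
        cuts = [c for c, a in zip(cuts, active) if a]   # drop inactive cuts
        new_cuts = separate_cuts(Z)
        if not new_cuts:
            break                             # no violated inequalities left
        cuts.extend(new_cuts)
        prev = bound
    return bound, Z, False
\end{verbatim}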
\subsection{Post-processing Using Error Bounds} Using the optimal solution of the SDP relaxation within a branch-and-bound framework requires the computation of ``safe'' bounds. Such safe bounds are obtained by solving the SDP to high precision, which, however, is out of reach when using first-order methods. In order to obtain a safe bound, we run a post-processing procedure based on the method of \citet{JaChayKeil2007} for computing rigorous lower bounds on the optimal value of our SDP relaxation. Before describing our post-processing, we state a result bounding the eigenvalues of any feasible solution of~\eqref{eq:SDP}. \begin{lemma}\label{lem:eigboundZ} Let $Z\succeq 0$ and $Z\ge 0$. Furthermore, let $Ze=e$. Then the eigenvalues of $Z$ are bounded by one. \end{lemma} \ifJOC \proof{Proof.} \else \begin{proof} \fi Let $\lambda$ be an eigenvalue of $Z$ with eigenvector $v$, i.e., $Zv = \lambda v$. This implies \begin{equation*} \lambda |v_i| = \Big| \sum_{j=1}^n z_{ij}v_j\Big| \le \max_{1\le j\le n} |v_j| \sum_{j=1}^n z_{ij} = \max_{1\le j\le n} |v_j| \mbox{ for all } i\in \{1,\dots,n\} \end{equation*} by nonnegativity of $Z$ and since the row sums of $Z$ are one. Therefore, the inequality \begin{equation*} \lambda \le \frac{\max_{1\le j\le n} |v_j|}{|v_i|} \end{equation*} holds for all $i\in \{1,\dots,n\}$, and in particular for $i \in \argmax_{1\le j\le n} |v_j|$, which proves $\lambda \le 1$. \ifJOC \Halmos \endproof \else \end{proof} \fi We now restate Lemma~3.1 from~\cite{JaChayKeil2007} in our context. \begin{lemma}\label{lem:jansson} Let $S$, $Z$ be symmetric matrices that satisfy $0 \leq \lambda_{\min}(Z)$ and $\lambda_{\max}(Z) \leq \bar{z}$ for some $\bar{z} \in \mathbb{R}$. Then the inequality \begin{equation*} \left\langle S,Z\right\rangle \geq \bar{z}\sum_{i \colon \lambda_i(S) <0}\lambda_i(S) \end{equation*} holds. \end{lemma} \ifJOC \proof{Proof.} \else \begin{proof} \fi Let $S$ have the eigenvalue decomposition $S=Q\Lambda Q^\top$ where $QQ^\top=I$ and $\Lambda=\textrm{Diag}(\lambda(S))$. Then \begin{equation*} \left\langle S,Z\right\rangle = \inprod{Q\Lambda Q^\top}{Z} = \inprod{\Lambda}{Q^\top Z Q} = \sum_{i=1}^n \lambda_i(S)Q_{\cdot,i}^\top Z Q_{\cdot,i} \end{equation*} where $Q_{\cdot,i}$ is column $i$ of matrix $Q$. Because of the bounds on the eigenvalues of $Z$, we have $0 \leq Q_{\cdot,i}^\top Z Q_{\cdot,i} \leq \bar{z}$. Therefore $\left\langle S,Z\right\rangle \geq \bar{z}\sum_{i \colon \lambda_i(S) <0}\lambda_i(S)$. \ifJOC \Halmos \endproof \else \end{proof} \fi \begin{theorem}\label{thm:errorbound} Consider the SDP~\eqref{eq:SDP} together with equations $\mathcal{A}(Z)=b$ (e.g., from cannot-link constraints) and inequalities $l \le \mathcal{B}(Z) \le u$ (representing cutting planes) with optimal objective function value $p^*$. Denote the dual variables by $(\tilde{y},\tilde{u},\tilde{v},\tilde{w},\tilde{P})$, with $\tilde{y}\in \mathbb{R}^{n+1}$, with $\tilde{u}$, $\tilde{v}$, $\tilde{w}$ vectors of appropriate size, and with $\tilde{P}\in {\mathcal S}^n$, $\tilde{P} \ge 0$, and set $\tilde{S} = -W - \sum_{i=1}^n \tilde{y}_iE_i - \tilde{y}_{n+1}I - \mathcal{A}^\top(\tilde{u}) + \mathcal{B}^\top(\tilde{v}) - \mathcal{B}^\top(\tilde{w}) - \tilde{P}$. Then \begin{equation*} p^* \ge \sum_{i=1}^n\tilde{y}_i + k\tilde{y}_{n+1} + b^\top \tilde{u} - l^\top \tilde{v} + u^\top\tilde{w} + \sum_{i\colon \lambda_i(\tilde{S}) < 0} \lambda_i(\tilde{S}).
\end{equation*} \end{theorem} \ifJOC \proof{Proof.} \else \begin{proof} \fi Let $Z^*$ be an optimal solution of~\eqref{eq:SDP} with the additional constraints $\mathcal{A}(Z) = b$ and $l \le \mathcal{B}(Z) \le u$, and let $(\tilde{y},\tilde{u},\tilde{v},\tilde{w},\tilde{P})$ be dual feasible. Then \begin{align*} \inprod{-W}{Z^*} &- ( \sum_{i=1}^n \tilde{y}_i + k\tilde{y}_{n+1} + b^\top \tilde{u} - l^\top \tilde{v} + u^\top\tilde{w}) \\ & = \inprod{-W}{Z^*} - \sum_{i=1}^n \tilde{y}_i\inprod{E_i}{Z^*} - \tilde{y}_{n+1}\inprod{I}{Z^*} - \inprod{\mathcal{A}(Z^*)}{\tilde{u}} + \inprod{\mathcal{B}(Z^*)}{\tilde{v}} - \inprod{\mathcal{B}(Z^*)}{\tilde{w}} \\ &= \inprod{-W - \sum_{i=1}^n \tilde{y}_iE_i - \tilde{y}_{n+1}I - \mathcal{A}^\top(\tilde{u}) + \mathcal{B}^\top(\tilde{v}) - \mathcal{B}^\top(\tilde{w})}{Z^*} \\ & = \inprod{\tilde{P} + \tilde{S}}{Z^*} = \inprod{\tilde{P}}{Z^*} + \inprod{\tilde{S}}{Z^*}. \end{align*} We have $\tilde{P}\ge 0$ and $Z^* \ge 0$, hence $\inprod{\tilde{P}}{Z^*} \ge 0$. Furthermore, the eigenvalues of $Z^*$ are nonnegative and bounded by one (Lemma~\ref{lem:eigboundZ}). Using this and Lemma~\ref{lem:jansson}, we obtain \begin{align*} p^* = \inprod{-W}{Z^*} &\ge \sum_{i=1}^n \tilde{y}_i + k\tilde{y}_{n+1} + b^\top \tilde{u} - l^\top \tilde{v} + u^\top\tilde{w} + \inprod{\tilde{S}}{Z^*} \\ & \ge \sum_{i=1}^n \tilde{y}_i + k\tilde{y}_{n+1} + b^\top \tilde{u} - l^\top \tilde{v} + u^\top\tilde{w} + \sum_{i\colon \lambda_i(\tilde{S}) < 0} \lambda_i(\tilde{S}). \end{align*} \ifJOC \Halmos \endproof \else \end{proof} \fi Before stating the result used in the branch-and-bound tree after merging data points, we introduce the following notation. Let $E_i^\ell$ be the symmetric matrix such that $\inprod{E_i^\ell}{Z^\ell} = (Z^\ell e^\ell)_i$. \begin{corollary} Consider the SDP~\eqref{eq:SDPell} together with equations $\mathcal{A}(Z^\ell)=b$ (e.g., from cannot-link constraints) and inequalities $l \le \mathcal{B}(Z^\ell) \le u$ (representing cutting planes) with optimal objective function value $p^*$. Let $\tilde{y} \in \mathbb{R}^{n-\ell+1}$, let $\tilde{u}$, $\tilde{v}$, $\tilde{w}$ be vectors of appropriate size, let $\tilde{P}\in {\mathcal S}^{n-\ell}$ with $\tilde{P} \ge 0$, and set $\tilde{S} = -W^\ell - \sum_{i=1}^{n-\ell} \tilde{y}_i E^\ell_i - \tilde{y}_{n-\ell+1}\textrm{Diag}(e^\ell) - \mathcal{A}^\top(\tilde{u}) + \mathcal{B}^\top(\tilde{v}) - \mathcal{B}^\top(\tilde{w}) - \tilde{P}$. Then \begin{equation*} p^* \ge \sum_{i=1}^{n-\ell} \tilde{y}_i + k\tilde{y}_{n-\ell+1} + b^\top \tilde{u} - l^\top \tilde{v} + u^\top\tilde{w} + \sum_{i\colon \lambda_i(\tilde{S}) < 0} \lambda_i(\tilde{S}). \end{equation*} \end{corollary} \ifJOC \proof{Proof.} \else \begin{proof} \fi Constraint~\eqref{eq:SDPellb} implies that the row sum of any row in $Z^\ell$ is bounded by one: since $z^\ell_{ij} \ge 0$ and $e^\ell_j \ge 1$, we have \begin{equation*} \sum_{j=1}^{n-\ell} z^\ell_{ij} \le \sum_{j=1}^{n-\ell} z^\ell_{ij}e^\ell_j = 1 \quad \mbox{for all } i \in \{1,\dots, n-\ell\}. \end{equation*} Hence, using the same arguments as in Lemma~\ref{lem:eigboundZ}, we can bound the eigenvalues of $Z^\ell$ by one and apply Theorem~\ref{thm:errorbound}. \ifJOC \Halmos \endproof \else \end{proof} \fi
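In practice, the post-processing of Theorem~\ref{thm:errorbound} reduces to a few linear-algebra operations on the (approximate) dual information returned by the solver. The following Python/NumPy sketch is purely illustrative; the variable names mirror the notation above, and the eigenvalue bound is taken as one, as justified by Lemma~\ref{lem:eigboundZ}.
\begin{verbatim}
import numpy as np

def safe_lower_bound(S_tilde, y, k, b, u_dual, l_bnd, v_dual, u_bnd, w_dual):
    """Rigorous lower bound from dual variables: even if the SDP is solved
    only approximately, the returned value is a valid lower bound."""
    S_sym = (S_tilde + S_tilde.T) / 2            # symmetrize for numerical safety
    eigvals = np.linalg.eigvalsh(S_sym)
    neg_part = eigvals[eigvals < 0].sum()        # sum of negative eigenvalues
    return (y[:-1].sum() + k * y[-1]
            + b @ u_dual - l_bnd @ v_dual + u_bnd @ w_dual
            + neg_part)
\end{verbatim}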
\section{Heuristic}\label{sec:heuristic} The most popular heuristic for solving the MSSC is $k$-means \citep{macqueen1967some, lloyd1982least}. It can be viewed as a greedy algorithm. During each update step, all the data points are assigned to their nearest centers. Afterwards, the cluster centers are repositioned by computing the mean of the observations assigned to them. The update process is repeated until the centroids no longer change and all observations remain in their assigned clusters. In this paper, we use COP $k$-means \citep{wagstaff2001constrained}, a constrained version of $k$-means that aims at finding high-quality clusters using prior knowledge. COP $k$-means is a constrained clustering algorithm that belongs to the class of semi-supervised machine learning algorithms. Constrained clustering incorporates a set of must-link and cannot-link constraints that define a relationship between two data instances: a must-link constraint (ML) specifies that the two points in the must-link relation should be in the same cluster, whereas a cannot-link constraint (CL) specifies that the two points in the cannot-link relation should not be in the same cluster. These sets of constraints, which are naturally available as branching decisions while visiting the branch-and-bound tree, represent the prior knowledge on the problem: $k$-means attempts to find clusters that satisfy the specified ML and CL constraints. The algorithm returns an empty partition if no clustering satisfying the constraints is found. COP $k$-means is described in Algorithm~\ref{alg:copkmeans}. \begin{algorithm} \caption{COP $k$-means} \label{alg:copkmeans} \FuncSty{COP-K-MEANS(}\ArgSty{dataset $\mathcal{D}$, initial cluster centers $m_1, \dots, m_k$, must-link constraints ML $\subseteq \mathcal{D} \times \mathcal{D}$, cannot-link constraints CL $\subseteq \mathcal{D} \times \mathcal{D}$}\FuncSty{)} \Repeat{convergence}{ \ForEach{data point $s_i \in \mathcal{D}$}{ $j \leftarrow \argmin \big\{ \|s_i - m_h\|^2 \colon h \in \{1,\dots,k\} \And $ \\ \hspace{3cm}$\texttt{VIOLATE\_CONSTRAINTS}(s_i, C_h, \textrm{ML}, \textrm{CL}) \mathrm{~is~ false}\big\}$\; \eIf{such an index $j$ exists}{ assign $s_i$ to $C_j$\;}{ \KwRet empty partition\;}} \ForEach{cluster $C_j$}{ $m_j \leftarrow $ mean of the data points $s_i$ assigned to $C_j$\;} } \KwRet $C_1, \dots, C_k$ \FuncSty{VIOLATE\_CONSTRAINTS(}\ArgSty{data point $s_i$, cluster $C_j$, must-link constraints $\textrm{ML} \subseteq \mathcal{D} \times \mathcal{D}$, cannot-link constraints $\textrm{CL} \subseteq \mathcal{D} \times \mathcal{D}$}\FuncSty{)} \ForEach{$(s_i, s_h) \in \textrm{ML}$}{ \lIf{$s_h \notin C_j$}{ \KwRet true}} \ForEach{$(s_i, s_h) \in \textrm{CL}$}{ \lIf{$s_h \in C_j$}{ \KwRet true}} \KwRet false\; \end{algorithm} Like other local solvers for non-convex optimization problems, $k$-means (both in the unconstrained and the constrained version) is very sensitive to the choice of the initial centroids; therefore, it often converges to a local minimum rather than the global minimum of the MSSC objective. To overcome this drawback, the algorithm is initialized with several different starting points, and the clustering with the lowest objective function value is then chosen \citep{franti2019much}. In the literature, several initialization algorithms have been proposed to prevent $k$-means from getting stuck in a low-quality local minimum. The most popular strategy for initializing $k$-means is $k$-means++ \citep{arthur2006k}.
The basic idea behind this approach is to spread out the $k$ initial cluster centers in order to avoid the poor clusterings that the standard $k$-means algorithm with random initialization may produce. In more detail, in $k$-means++ the first cluster center is chosen at random from the data points. Then, each subsequent cluster center is chosen from the remaining data points with probability proportional to its squared distance from the closest center chosen so far. We aim to exploit the information available in the solution of the SDP relaxation in order to extract a centroid initialization for COP $k$-means. In the literature, theoretical properties of the Peng-Wei relaxation have been studied under specific stochastic models. A feasible clustering can be derived from the solution of the SDP relaxation~\eqref{eq:SDP} via a rounding step. Sometimes, the rounding step is unnecessary because the SDP relaxation finds a solution that is feasible for the original MSSC. This phenomenon is known in the literature as exact recovery or tightness of the relaxation. Recovery guarantees have been established under a model called the subgaussian mixtures model, whose special cases include the stochastic ball model and the Gaussian mixture model \citep{awasthi2015relax, iguchi2017probably, mixon2017clustering, li2020birds}. Under this distributional setting, cluster recovery is guaranteed with high probability whenever the distances between the clusters are sufficiently large. However, real data may not satisfy the generative assumption, which implies that in general a rounding procedure is needed and, if possible, also a bound improvement. Instead of building a rounding procedure, we decide to derive a ``smart'' initialization for the constrained $k$-means based on the solution of our bounding problem. Here, we build the initialization by exploiting the matrix $Z_{SDP}$, the solution of the current bounding problem. The idea is that if the relaxation were tight, then $Z_{SDP}$ would be a clustering matrix feasible for the rank-constrained SDP \eqref{eq:RankSDP}, and hence would allow us to easily recover the centroids. If the relaxation is not tight, the closest rank-$k$ approximation is built and used to recover the centroids. In more detail, let $Z$ be a feasible solution of the rank-constrained SDP~\eqref{eq:RankSDP}. It is straightforward~\citep{mixon2017clustering} to see that $Z$ can be written as the sum of $k$ rank-one matrices: \begin{equation} \label{eq:clustering_matrix} Z = \sum_{j=1}^{k} \frac{1}{|C_j|} \mathbbm{1}_{C_j} \mathbbm{1}^\top_{C_j}, \end{equation} where $\mathbbm{1}_{C_j} \in \{0,1\}^n$ is the indicator vector of the $j$-th cluster, i.e., the $i$-th component of $\mathbbm{1}_{C_j}$ is 1 if the data point $p_i \in C_j$ and 0 otherwise. If we post-multiply $Z$ by the data matrix $W_p \in \mathbb{R}^{n\times d}$ whose $i$-th row is the data point $p_i$, we obtain a matrix $M = ZW_p$ with a well-defined structure. In fact, from equation~\eqref{eq:clustering_matrix} it follows that, for each $j \in \{1, \dots, k\}$, $M$ contains $|C_j|$ rows equal to the centroid of the data points assigned to $C_j$. If the SDP relaxation is tight, the distinct rows of $M$ are exactly the optimal centroids. In this case, it is natural to use the convex relaxation directly to obtain the underlying ground-truth solution without the need for a rounding step.
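The structure of $M$ can be checked directly on a toy assignment. The following NumPy snippet (purely illustrative, with arbitrary data) builds $Z$ from a label vector as in \eqref{eq:clustering_matrix} and verifies that every row of $M = ZW_p$ equals the centroid of its cluster.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 12, 2, 3
labels = np.arange(n) % k               # toy assignment with non-empty clusters
W_p = rng.normal(size=(n, d))           # data matrix: one data point per row

Z = np.zeros((n, n))
for j in range(k):
    idx = np.flatnonzero(labels == j)
    Z[np.ix_(idx, idx)] = 1.0 / idx.size   # block with value 1/|C_j|

M = Z @ W_p
for j in range(k):
    idx = np.flatnonzero(labels == j)
    assert np.allclose(M[idx], W_p[idx].mean(axis=0))   # rows = centroid of C_j
\end{verbatim}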
In practice, the optimizer of the SDP relaxation may not always be a clustering matrix, i.e., a low-rank solution as described by equation~\eqref{eq:clustering_matrix}. The idea is then to build the rank-$k$ approximation $\hat{Z}$, which is obtained by exploiting the following result. \begin{proposition} \citep{eckart1936approximation} Let $X$ be a positive semidefinite matrix with eigenvalues $\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_n\ge 0$ and corresponding eigenvectors $v_1, v_2,\ldots,v_n$. If $X$ has rank $r$, then for any $k < r$ the best rank-$k$ approximation of $X$, for both the Frobenius and the spectral norm, is given by \begin{equation}\label{eq:rankkapp} \hat{X} = \sum_{i=1}^k \lambda_i v_i v_i^\top, \end{equation} which is the truncated eigenvalue decomposition of $X$. \end{proposition} Then, we compute the approximate centroid matrix $M=\hat{Z}W_p$. In order to derive the $k$ centroids, unconstrained $k$-means is applied to the rows of matrix $M$. Finally, the obtained centroids are used to initialize the COP $k$-means algorithm, which is run just once. The procedure is summarized in Algorithm~\ref{alg:sdpinit}. \begin{algorithm} \caption{SDP-based initialization of $k$-means} \label{alg:sdpinit} \FuncSty{SDP-INIT(}\ArgSty{dataset $\mathcal{D}$, number of clusters $k$, must-link constraints ML $\subseteq \mathcal{D} \times \mathcal{D}$, cannot-link constraints CL $\subseteq \mathcal{D} \times \mathcal{D}$}\FuncSty{)} solve the SDP relaxation and obtain the optimizer $Z_{SDP}$\; find the best rank-$k$ approximation $\hat{Z}$ of $Z_{SDP}$ by \eqref{eq:rankkapp}\; compute $M = \hat{Z}W_p$\; cluster the rows of $M$ with unconstrained $k$-means to get the centroids $m_1, \dots, m_k$\; use $m_1, \dots, m_k$ as the starting point of constrained $k$-means\; \end{algorithm} The intuition is that the better the SDP solution, the better the initialization, and hence the produced clustering. In order to confirm this intuition, we show the behavior of the heuristic on a synthetic example with 150 points in 2 dimensions. We denote by circles the points in $W_p$, by crosses the rows of matrix $M$ produced at Step 3, and by diamonds the centroids obtained by clustering the rows of $M$ at Step 4 of Algorithm~\ref{alg:sdpinit}. In Figure~\ref{fig:heuristic3} we assume $k=3$ and apply our heuristic to different solutions of the SDPs generated during our bounding procedure: in Figure~\ref{fig:heuristic3}~(a) we use as $Z_{SDP}$ the solution obtained by solving problem \eqref{eq:SDP}, and we can see that there is some gap (the upper and lower bounds are displayed on top of each figure) and that matrix $M$ has many different rows. In Figures~\ref{fig:heuristic3}~(b), (c), and (d) we consider as $Z_{SDP}$ the solution of the SDP obtained by performing 1, 2, and 3 cutting-plane iterations, respectively, i.e., by solving problem~\eqref{eq:SDP} with some additional constraints~\eqref{eq:pairs}--\eqref{eq:clique}. It is clear how the rows of $M$ converge to three different centroids that, in this case, correspond to the optimal solution (the gap here is zero). The use of \texttt{SDP-INIT} as a standalone initialization procedure could be expensive since it needs to solve a certain number of SDP problems and to perform an eigenvalue decomposition of the solution that gives the best lower bound.
However, when embedded in our branch-and-bound, the extra cost of running \texttt{SDP-INIT} is only the computation of the spectral decomposition of the SDP solution providing the lower bound at the node, which is negligible with respect to the bound computation. The effectiveness of the proposed heuristic algorithm is confirmed by the numerical results presented in Section~\ref{sec:heurnr}. \begin{comment} \begin{figure} \caption{An instance with 150~points and $k = 2$.} \label{fig:heuristic2} \end{figure} \end{comment} \begin{figure} \caption{An instance with 150~points and $k = 3$.} \label{fig:heuristic3} \end{figure} \begin{comment} \begin{figure} \caption{$k = 4$.} \label{fig:heuristic4} \end{figure} \end{comment} \begin{comment} \begin{figure} \caption{$k = 5$.} \label{fig:heuristic5} \end{figure} \end{comment}
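For concreteness, the core of Algorithm~\ref{alg:sdpinit} (Steps 2--4) can be sketched in a few lines of Python; the use of scikit-learn's \texttt{KMeans} and the function name below are illustrative choices and do not reflect the actual C++/MATLAB implementation of \texttt{SOS-SDP}.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def sdp_init_centroids(Z_sdp, W_p, k):
    """Extract k starting centroids for COP k-means from the SDP solution."""
    vals, vecs = np.linalg.eigh((Z_sdp + Z_sdp.T) / 2)   # ascending eigenvalues
    top_vecs = vecs[:, -k:]                              # k leading eigenvectors
    Z_hat = (top_vecs * vals[-k:]) @ top_vecs.T          # truncated eigendecomposition
    M = Z_hat @ W_p                                      # approximate centroid matrix
    km = KMeans(n_clusters=k, n_init=10).fit(M)          # cluster the rows of M
    return km.cluster_centers_                           # initialization for COP k-means
\end{verbatim}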
\section{Numerical Results}\label{sec:numericalresults} In this section, we describe the implementation details and show the numerical results of \texttt{SOS-SDP} on synthetic and real-world datasets. \subsection{Details on the Implementation} \texttt{SOS-SDP} is implemented in C++ and uses SDPNAL+ \citep{sdpnalplus, zhao-sun-toh-2010}, which is implemented in MATLAB, as the internal subroutine for computing the bound. SDPNAL+ is called using the MATLAB Engine API, which enables running MATLAB code from C++ programs. We note that solvers based on interior point methods are not practical when solving instances with such a large number of constraints. We run our experiments on a machine with an Intel(R) Xeon(R) 8124M CPU @ 3.00GHz with 16 cores, 64 GB of RAM, and Ubuntu Server 20.04. The C++ Armadillo library \citep{sanderson2016armadillo} is extensively used to handle matrices and linear algebra operations efficiently. \texttt{SOS-SDP} can be efficiently executed in a multi-thread environment. In order to guarantee an easy and highly configurable parallelization, we use the thread pool pattern. This pattern allows us to control the number of threads the branch-and-bound creates and to save resources by reusing threads for processing different nodes of the tree. We adopt the same branch-and-bound configuration for each instance. In particular, we visit the tree with the best-first search strategy. When the problem at a given level is divided into the \emph{must-link} and the \emph{cannot-link} sub-problems, each node is submitted to the thread pool and run in parallel with the other threads of the pool. Each thread of the branch-and-bound algorithm runs in a separate MATLAB session. Furthermore, since numerical linear algebra functions are multi-threaded in MATLAB, these functions automatically execute on multiple computational threads in a single MATLAB session. To balance resource allocations for multiple MATLAB sessions and use all the available cores of the machine, we set a maximum number of computational threads allowed in each session. \paragraph{Branch-and-bound setting} In all the numerical tests, we adopt the following parameter setting. As for the pair and triangle inequalities, we randomly separate at most 100000 valid cuts, sort them in decreasing order of violation, and select the most violated 5\%, yielding at most 5000 pairs and at most 5000 triangles added in each cutting-plane iteration. Since effective inequalities are inherited from the parent to its children, at the root node the maximum number of cutting-plane iterations is set to $cp_{\mathrm{max}} = 50$, whereas for the children this number is set to 30. The tolerance for checking the violation of the cuts is set to $\varepsilon_{\mathrm{viol}} = 10^{-4}$, whereas the tolerance for identifying the active inequalities is set to $\varepsilon_{\mathrm{act}} = 10^{-6}$. Finally, we set the accuracy tolerance of SDPNAL+ to $10^{-5}$. As for the parallel setting, we use different configurations depending on the size of the instances, since the solver requires a higher number of threads to efficiently solve large size problems. For small instances ($n < 500$) we create a pool of 16 threads, each of them running on a session with a single computational thread. For medium instances ($500 \leq n < 1000$) we use a pool of 8 threads, each of them running on a session with 2 computational threads. For larger instances ($1000 \leq n < 1500$) we use a pool of 4 threads, each of them running on a session with 4 computational threads.
Finally, for large scale instances ($n \geq 1500$) we use a pool of 2 threads, each of them running on a session with 8 computational threads. In all cases, the MATLAB session for the computation at the root node uses all the available cores. The source code is available at \url{https://github.com/INFORMSJoC/2021.0096} \citep{SOS-SDP2021}. \subsection{Benchmark Instances} In order to extensively test the efficiency of \texttt{SOS-SDP}, we use both artificial datasets, built to be compliant with the MSSC assumptions, and real-world datasets. \paragraph{Artificial Instances} Due to the minimization of the sum of squared Euclidean distances, an algorithm that solves the MSSC finds spherically distributed clusters around the centers. In order to show the effectiveness of our algorithm on instances compliant with the MSSC assumptions, we generate very large scale Gaussian datasets in the plane ($d=2$) with a varying number of data points $n \in \{2000, 2500, 3000 \}$, number of clusters $k \in \{10, 15\}$, and degree of overlap. In more detail, we sample $n$ points from a mixture of $k$ Gaussian distributions $\mathcal{N}(\mu_j, \Sigma_j)$ with equal mixing proportions, mean $\mu_j$ and shared spherical covariance matrix $\Sigma_j = \sigma I$, where $\sigma \in \{0.5, 1.0\}$ is the standard deviation. The cluster centers $\mu_j$ are sampled from a uniform distribution in the interval $[-\frac{n}{1000}-k, \frac{n}{1000}+k]$. We use the following notation to name the instances: $\{n\}\_\{k\}\_\{\sigma\}$. Note that in this case we know in advance the correct number of clusters, so we only solve the instances for that value of~$k$. \paragraph{Real-world Datasets} We use a set of 34 real-world datasets coming from different domains, with a number of entities $n$ ranging between $75$ and $4177$, and with a number of features $d$ ranging between $2$ and $20531$. The datasets' characteristics are reported in Table~\ref{tab:datasets}. \begin{table} \begin{center} \begin{tabular}{lcc} \toprule Dataset & $n$ & $d$ \\ \midrule Ruspini & 75 & 2 \\ Voice & 126 & 310 \\ Iris & 150 & 4 \\ Wine & 178 & 13 \\ Gr202 & 202 & 2 \\ Seeds & 210 & 7 \\ Glass & 214 & 9 \\ CatsDogs & 328 & 14773 \\ Accent & 329 & 12 \\ Ecoli & 336 & 7 \\ RealEstate & 414 & 5 \\ Wholesale & 440 & 11 \\ ECG5000 & 500 & 140 \\ Hungarian & 522 & 20 \\ Wdbc & 569 & 30 \\ Control & 600 & 60 \\ Heartbeat & 606 & 3053 \\ \bottomrule \end{tabular}\qquad \begin{tabular}{lcc} \toprule Dataset & $n$ & $d$ \\ \midrule Strawberry & 613 & 235 \\ Energy & 768 & 16 \\ Gene & 801 & 20531 \\ SalesWeekly & 810 & 106 \\ Vehicle & 846 & 18 \\ Arcene & 900 & 10000 \\ Wafer & 1000 & 152 \\ Power & 1096 & 24 \\ Phishing & 1353 & 9 \\ Aspirin & 1500 & 63 \\ Car & 1727 & 11 \\ Wifi & 2000 & 7 \\ Ethanol & 2000 & 27 \\ Mallat & 2400 & 1024 \\ Advertising & 3279 & 1558 \\ Rice & 3810 & 7 \\ Abalone & 4177 & 10 \\ \bottomrule \end{tabular} \end{center} \caption{Characteristics of the real-world datasets.
They can all be downloaded from the UCI \citep{uci}, UCR \citep{UCRArchive2018}, and sGDML \citep{chmiela2019sgdml} websites.} \label{tab:datasets} \end{table} \subsection{Branch-and-Bound Results on Artificial Instances} In Table \ref{tab:res_art} we report the dataset name according to the notation $\{n\}\_\{k\}\_\{\sigma\}$, the optimal objective function value $f_\textrm{opt}$, the number of cutting-plane iterations at the root (cp), the number of cuts added in the last cutting-plane iteration at the root ($\textrm{cuts}_\textrm{cp}$), the gap at the root ($gap_0$) when problem \eqref{eq:SDPbab} is solved without adding valid inequalities, together with the gap at the end of the cutting-plane procedure at the root node ($gap_{cp}$) in brackets, the number of nodes of the branch-and-bound tree (N), and the wall clock time in seconds (time). \begin{table} \begin{center} \begin{tabular}{lcccccc} \toprule Dataset & $f_\textrm{opt}$ & cp & $\textrm{cuts}_\textrm{cp}$ & $gap_0$ $(gap_{cp})$ & N & time \\ \midrule 2000\_10\_0.5 & 955.668 & 0 & 0 & 0.000039 (0.000039) & 1 & 848.88 \\ 2000\_10\_1.0 & 3601.310 & 3 & 10999 & 0.006171 (0.003578) & 3 & 8794.17 \\ 2000\_15\_0.5 & 955.800 & 1 & 6177 & 0.001556 (0.000009) & 1 & 1155.06 \\ 2000\_15\_1.0 & 3658.730 & 3 & 11035 & 0.006192 (0.002059) & 3 & 8351.91 \\ 2500\_10\_0.5 & 1199.080 & 1 & 5249 & 0.000184 (0.000083) & 1 & 2859.30 \\ 2500\_10\_1.0 & 4522.350 & 12 & 11539 & 0.008008 (0.000553) & 1 & 20495.43 \\ 2500\_15\_0.5 & 1194.550 & 0 & 0 & 0.000699 (0.000699) & 1 & 1049.76 \\ 2500\_15\_1.0 & 4574.360 & 6 & 10146 & 0.005311 (0.000971) & 1 & 10245.69 \\ 3000\_10\_0.5 & 1446.480 & 0 & 0 & 0.000067 (0.000067) & 1 & 2220.21 \\ 3000\_10\_1.0 & 5512.370 & 9 & 10769 & 0.004601 (0.000606) & 1 & 27781.38 \\ 3000\_15\_0.5 & 1439.940 & 0 & 0 & 0.000433 (0.000433) & 1 & 2003.94 \\ 3000\_15\_1.0 & 5537.200 & 10 & 15608 & 0.006245 (0.001205) & 3 & 38330.01 \\ \bottomrule \end{tabular} \end{center} \caption{Results for the artificial datasets.} \label{tab:res_art} \end{table} As we increase $\sigma$, the cluster separation decreases, and the degree of overlap increases (see Figure \ref{fig:art}). In this scenario, the SDP relaxation is no longer tight and the global minimum is certified by our specialized branch-and-bound algorithm. For $\sigma = 0.5$, each problem is solved at the root with zero cutting-plane iterations (i.e., the SDP relaxation is tight) or with at most one. As we decrease the cluster separation by increasing $\sigma$, the problem becomes harder since some clusters overlap and the cluster boundaries are less clear. In this case, more cutting-plane iterations are needed (up to a maximum of 12). In any case, we need at most 3 nodes for solving these instances, and this confirms that, if the generative assumption is met, the cutting-plane procedure at the root node is the main ingredient for success. In the next section, we show how the behavior changes on real-world instances, where we have no information on the data distribution and on the correct value of $k$. In this case, the overall branch-and-bound algorithm becomes fundamental for solving the problems. \begin{figure} \caption{Artificial instances for $n=2000$ and $d=2$.} \label{fig:art} \end{figure}
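A minimal sketch of the generation of these Gaussian instances (in Python; the sampling details beyond those stated above, such as the random number generator and seed handling, are illustrative assumptions) is the following.
\begin{verbatim}
import numpy as np

def generate_instance(n, k, sigma, d=2, seed=0):
    """Sample n points from a mixture of k Gaussians with centers drawn
    uniformly from [-n/1000 - k, n/1000 + k]^d and covariance sigma * I,
    as described in the text, with equal mixing proportions."""
    rng = np.random.default_rng(seed)
    half = n / 1000 + k
    centers = rng.uniform(-half, half, size=(k, d))
    labels = rng.integers(0, k, size=n)               # equal mixing proportions
    noise = rng.multivariate_normal(np.zeros(d), sigma * np.eye(d), size=n)
    return centers[labels] + noise, labels
\end{verbatim}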
\subsection{Branch-and-Bound Results on Real-World Datasets} The MSSC requires the user to specify the number of clusters $k$ to generate. Determining the right $k$ for a dataset is a different issue from solving the clustering problem itself. This is still an open problem since, depending on the chosen distance measure, one value of $k$ may be better than another one. Hence, choosing $k$ is often based on assumptions on the application, prior knowledge of the properties of the dataset, and practical experience. In the literature, clustering validity indices in conjunction with the $k$-means algorithm are commonly used to determine the ``right'' number of clusters. Most of these methods minimize or maximize a validity index by running a clustering algorithm (for example $k$-means) several times for different values of $k$. We recall that the basic idea behind the MSSC is to define clusters such that the total within-cluster sum of squares is minimized. This objective function measures the compactness of the clustering, and we want it to be as small as possible. The ``elbow method'' is probably the most popular method for determining the number of clusters. It requires running the $k$-means algorithm with an increasing number of clusters. The suggested $k$ can be determined by looking at the MSSC objective as a function of $k$ and by finding the inflection point. The location of the inflection point (knee) in the plot is generally considered as an indicator of the appropriate number of clusters. The drawback of this method is that the identification of the knee may not be obvious. Hence, different validity indices have been proposed in the literature to identify the suitable number of clusters or to check whether a given dataset exhibits some kind of structure that can be captured by a clustering algorithm for a given $k$. All these indices are computed a posteriori, given the clusterings produced for different values of $k$. In addition to the elbow method, we use three cluster validity measures that are compliant with the assumptions of the MSSC: namely the Silhouette index \citep{rousseeuw1987silhouettes}, the Calinski–Harabasz (CH) index \citep{calinski1974dendrite}, and the Davies–Bouldin (DB) index \citep{davies1979cluster}. The Silhouette index determines how well each object lies within its cluster and is given by the average Silhouette coefficient over all the data points. The Silhouette coefficient is defined for each data point and is composed of two scores: the mean distance between a sample and all other points in the same cluster, and the mean distance between a sample and all other points in the next nearest cluster. The CH index is the ratio of the between-cluster dispersion to the within-cluster dispersion for all clusters. The DB index is defined as the average similarity between each cluster and its most similar one. The Silhouette index and the CH index are higher when the clusters are dense and well separated, which relates to the standard concept of clustering, whereas for the DB index lower values indicate a better partition. Since the exact resolution of the MSSC problem can be computationally expensive and time consuming, one may be interested in finding the global solution only for a specified or restricted number of clusters. In practice, one can run the $k$-means algorithm for different values of $k$ and then use the exact algorithm to find and certify the global optimum for the $k$ suitable for the application of interest.
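As an illustration of this index-based selection of $k$, the three validity measures above are available in standard libraries; the following sketch (using scikit-learn, which is not part of the \texttt{SOS-SDP} implementation and is shown here only for illustration) computes them, together with the MSSC objective, for a range of candidate values of $k$.
\begin{verbatim}
from sklearn.cluster import KMeans
from sklearn.metrics import (silhouette_score,
                             calinski_harabasz_score,
                             davies_bouldin_score)

def score_candidate_k(X, k_values):
    """Run k-means for each candidate k and collect the validity indices.
    Higher Silhouette/CH and lower DB values indicate a better partition."""
    results = {}
    for k in k_values:
        km = KMeans(n_clusters=k, n_init=50).fit(X)
        results[k] = {
            "mssc": km.inertia_,                       # within-cluster sum of squares
            "silhouette": silhouette_score(X, km.labels_),
            "ch": calinski_harabasz_score(X, km.labels_),
            "db": davies_bouldin_score(X, km.labels_),
        }
    return results
\end{verbatim}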
Hence, we choose to run our algorithm on a large number of datasets, and for each dataset we run it only for the suggested number of clusters obtained with the help of the criteria mentioned above. Whenever there is some ambiguity, i.e., the different criteria suggest different values of $k$, we run our algorithm for all the suggested values. With this criterion, we end up solving $54$ clustering instances with different sizes $n$, different dimensions $d$, and different values of $k$. In Table \ref{tab:res_dataset} we report: \begin{itemize} \item the dataset name \item the number of clusters ($k$) \item the optimal objective function value ($f_\textrm{opt}$); we add a $(*)$ whenever the optimum we certify is not found by $k$-means at the root node \item the number of cutting-plane iterations at the root (cp) \item the number of inequalities of the last SDP problem solved at the root in the cutting-plane procedure ($\textrm{cuts}_\textrm{cp}$) \item the gap at the root ($gap_0$) when problem \eqref{eq:SDPbab} is solved without adding valid inequalities, and in brackets the gap at the end of the cutting-plane procedure at the root node ($gap_{cp}$) \item the number of nodes (N) of the branch-and-bound tree \item the wall clock time in seconds (time). \end{itemize}
Small and medium scale instances ($n < 1000$) are considered solved when the relative gap tolerance is less or equal than $10^{-4}$, whereas for large scale instances ($n \geq 1000$) the branch-and-bound algorithm is stopped when the tolerance is less or equal than $10^{-3}$, which we feel is an adequate tolerance for large scale real-world applications. The gap measures the difference between the best upper and lower bounds and it is calculated as $(UB - LB) / UB$. The numerical results show that our method is able to solve successfully all the instances up to a size of $n=4177$ entities. \begin{longtable}{lcccccccc} \toprule Dataset & $k$ & $f_\textrm{opt}$ & cp & $\textrm{cuts}_\textrm{cp}$ & $gap_0$ $(gap_{cp})$ & N & time \\ \midrule Ruspini & 4 & 1.28811e+04 & 0 & 0 & 2.23e-04 (2.23e-04) & 1 & 2.55 \\ Voice & 2 & 1.13277e+22 & 2 & 7593 & 5.40e-02 (1.66e-06) & 1 & 14.45 \\ Voice & 9 & 5.74324e+20* & 4 & 6115 & 1.07e-01 (6.45e-04) & 3 & 128.35 \\ Iris & 2 & 1.52348e+02 & 2 & 7701 & 1.10e-02 (2.19e-06) & 1 & 17 \\ Iris & 3 & 7.88514e+01 & 4 & 7136 & 4.23e-02 (1.18e-04) & 5 & 83.3 \\ Iris & 4 & 5.72285e+01 & 4 & 7262 & 4.28e-02 (4.20e-04) & 3 & 104.55 \\ Wine & 2 & 4.54375e+06 & 3 & 8162 & 3.45e-02 (2.69e-07) & 1 & 53.55 \\ Wine & 7 & 4.12138e+05* & 4 & 5759 & 5.81e-02 (1.03e-04) & 3 & 87.55 \\ Gr202 & 6 & 6.76488e+03 & 6 & 6607 & 6.72e-02 (8.53e-04) & 17 & 298.35 \\ Seeds & 2 & 1.01161e+03* & 9 & 10186 & 4.31e-02 (4.77e-04) & 29 & 957.1 \\ Seeds & 3 & 5.87319e+02 & 4 & 6620 & 2.67e-02 (1.26e-05) & 1 & 68.85 \\ Glass & 3 & 1.14341e+02 & 5 & 6799 & 4.68e-02 (1.64e-04) & 3 & 193.8 \\ Glass & 6 & 7.29647e+01* & 7 & 3014 & 5.45e-02 (4.36e-04) & 5 & 198.9 \\ CatsDogs & 2 & 1.14099e+05 & 1 & 5368 & 1.83e-03 (2.23e-09) & 1 & 108.8 \\ Accent & 2 & 3.28685e+04 & 0 & 0 & 6.55e-06 (6.55e-06) & 1 & 11.05 \\ Accent & 6 & 1.84360e+04* & 8 & 4523 & 2.94e-02 (2.08e-05) & 1 & 244.8 \\ Ecoli & 3 & 2.32610e+01 & 4 & 10101 & 7.71e-03 (1.89e-04) & 3 & 181.9 \\ RealEstate & 3 & 5.50785e+07 & 3 & 6236 & 1.59e-02 (3.51e-05) & 1 & 104.55 \\ RealEstate & 5 & 2.18711e+07 & 5 & 8006 & 6.82e-02 (2.64e-05) & 1 & 258.4 \\ Wholesale & 5 & 2.04735e+03 & 6 & 7668 & 6.43e-02 (2.06e-05) & 1 & 421.6 \\ Wholesale & 6 & 1.73496e+03* & 10 & 11161 & 6.32e-02 (7.06e-04) & 3 & 1782.45 \\ ECG5000 & 2 & 1.61359e+04 & 3 & 9312 & 1.02e-03 (7.49e-05) & 1 & 119 \\ ECG5000 & 5 & 1.15458e+04 & 25 & 6289 & 4.93e-02 (1.01e-04) & 3 & 2524.5 \\ Hungarian & 2 & 8.80283e+06 & 7 & 11265 & 1.03e-02 (1.31e-05) & 1 & 551.65 \\ Wdbc & 2 & 7.79431e+07 & 5 & 8645 & 3.21e-02 (2.10e-05) & 1 & 436.05 \\ Wdbc & 5 & 2.05352e+07* & 23 & 10662 & 7.45e-02 (5.27e-04) & 15 & 2436.95 \\ Control & 3 & 1.23438e+06 & 6 & 12381 & 2.80e-03 (1.26e-04) & 9 & 895.9 \\ Heartbeat & 2 & 2.79391e+04 & 0 & 0 & 8.15e-06 (8.15e-06) & 1 & 66.3 \\ Strawberry & 2 & 2.79363e+03 & 15 & 23776 & 5.44e-02 (4.02e-04) & 37 & 5250.45 \\ Energy & 2 & 9.64123e+03 & 0 & 0 & 9.86e-09 (9.86e-09) & 1 & 18.7 \\ Energy & 12 & 4.87456e+03 & 0 & 0 & 4.03e-07 (4.03e-07) & 1 & 29.75 \\ Gene & 5 & 1.78019e+07* & 2 & 15589 & 1.83e-03 (1.30e-04) & 3 & 3851.35 \\ Gene & 6 & 1.70738e+07 & 5 & 14620 & 3.82e-03 (2.08e-04) & 11 & 9896.55 \\ SalesWeekly & 2 & 1.44942e+06* & 6 & 8508 & 2.50e-02 (1.33e-03) & 9 & 2341.75 \\ SalesWeekly & 3 & 7.09183e+05* & 4 & 9096 & 1.03e-03 (9.44e-05) & 1 & 262.65 \\ SalesWeekly & 5 & 5.20938e+05* & 4 & 11811 & 1.67e-03 (1.12e-04) & 5 & 1045.5 \\ Vehicle & 2 & 7.29088e+06 & 5 & 10395 & 7.68e-03 (3.72e-04) & 11 & 1842.8 \\ Arcene & 2 & 3.48490e+10 & 3 & 36100 & 2.59e-03 
(1.26e-04) & 3 & 1369.35 \\ Arcene & 3 & 2.02369e+10 & 0 & 0 & 3.50e-06 (3.50e-06) & 1 & 758.2 \\ Arcene & 5 & 1.69096e+10* & 7 & 8327 & 7.57e-03 (1.55e-04) & 27 & 6885 \\ Wafer & 2 & 6.19539e+04 & 3 & 7254 & 7.82e-04 (1.00e-04) & 1 & 379.1 \\ Wafer & 4 & 4.42751e+04 & 22 & 16957 & 1.97e-02 (8.76e-04) & 1 & 6756.65 \\ Power & 2 & 3.22063e+03 & 3 & 11350 & 1.05e-02 (2.89e-03) & 3 & 3381.3 \\ Phishing & 9 & 3.15888e+03* & 46 & 12459 & 2.48e-02 (7.00e-04) & 1 & 18866.6 \\ Aspirin & 3 & 1.27669e+04 & 2 & 10000 & 4.39e-03 (3.02e-03) & 9 & 3779.1 \\ Car & 4 & 5.61600e+03 & 23 & 38582 & 1.61e-03 (1.02e-05) & 1 & 5989.95 \\ Ethanol & 2 & 7.26854e+03 & 0 & 0 & 5.33e-08 (5.33e-08) & 1 & 310.25 \\ Wifi & 5 & 2.04311e+05 & 7 & 20886 & 1.13e-02 (2.18e-03) & 7 & 22754.5 \\ Mallat & 3 & 9.08648e+04 & 5 & 17092 & 3.61e-03 (9.59e-04) & 1 & 5970.4 \\ Mallat & 4 & 7.45227e+04 & 6 & 15305 & 6.80e-03 (4.49e-03) & 5 & 26344.9 \\ Advertising & 2 & 5.00383e+06* & 1 & 12533 & 1.53e-03 (2.16e-05) & 1 & 6465.1 \\ Advertising & 8 & 4.54497e+06* & 4 & 19948 & 2.98e-03 (1.08e-04) & 1 & 25114.1 \\ Rice & 2 & 1.39251e+04 & 24 & 7258 & 1.43e-02 (7.14e-03) & 5 & 103710.2 \\ Abalone & 3 & 1.00507e+03 & 0 & 0 & 3.14e-04 (3.14e-04) & 1 & 9428.2 \\ \bottomrule \caption{Results for the real world datasets} \label{tab:res_dataset} \end{longtable}
To the best of our knowledge, the exact algorithm proposed in \cite{aloise2012improved} represents the current state of the art. Indeed, it is the only algorithm able to exactly solve instances of size larger than 1000, provided that one of the following strong assumptions (due to the geometrical approach involved) is satisfied: either the instance is in the plane ($d=2$) or the required number of clusters is large with respect to the number of points. Indeed, the authors were able to solve a TSP instance with $d=2$ of size $n=2392$ for numbers of clusters ranging from $k=2$ to $k=10$ and for large numbers of clusters ($k$ between 100 and 400), and an instance of size $n=2310$ with $d=19$, but only for large numbers of clusters ($k$ between 230 and 500). Our algorithm has, in some sense, orthogonal capabilities to the one proposed in \cite{aloise2012improved}, since it is not influenced by the number of features (we solve problems with thousands of features, which would be completely out of reach for the algorithm in~\cite{aloise2012improved}). Indeed, in the SDP formulation, the number of features is hidden in the matrix $W$, which is computed only once, so that it does not influence the computational cost of the algorithm. On the other hand, it is well known that the difficulty (and the gap) of the SDP relaxation \eqref{eq:SDPbab} increases when the boundaries of the clusters are unclear, and this phenomenon becomes more frequent when the number of clusters is high with respect to the number of points and far away from the correct $k$ for the MSSC objective function. The strength of our bounding procedure is confirmed by 28 problems out of 54 being solved at the root. Among these 28 problems, only 8 are tight, in the sense that problem \eqref{eq:SDPbab} without inequalities produces the optimal solution. The efficiency of \texttt{SOS-SDP} comes from the combination of the cutting-plane procedure, which allows us to close a significant amount of the gap even when the bound without inequalities is not tight, and the heuristic, which finds the optimal solution whenever the SDP solution is good. Note that in 15 out of 34 instances, our algorithm certifies the optimality of a solution that $k$-means at the root could not find. Overall, the number of nodes of the branch-and-bound tree is always smaller than 40, but the computational cost of a single node may be high due to the high number of cutting-plane iterations. The values of cuts$_{\mathrm{cp}}$ confirm that the removal of inactive inequalities is effective and allows us to keep the number of inequalities moderate, so that the SDP at each cutting-plane iteration remains computationally tractable.
\subsection{Numerical Results of \texttt{SDP-INIT}}\label{sec:heurnr} In order to test the efficiency of our initialization of constrained $k$-means, we report the behaviour at the root node on a subset of real-world datasets. We selected the most popular ones on the UCI website with sizes in the range 150--569. To obtain more difficult instances, we run the heuristic for all values of $k$ in the range from $2$ to $10$. Note that for $k$ far from the values suggested by the validation indices, the optimal solution may consist of overlapping and poorly separated clusters that are more difficult to find for any heuristic. In Table~\ref{tab:heuristic}, we report the results obtained by our heuristic, compared with 50 runs of $k$-means initialized with $k$-means++ and with random initialization. For each instance, we report: \begin{itemize} \item the lower bound obtained by solving the basic SDP relaxation ($LB_0$) and the corresponding heuristic solution ($UB_0$) \item the lower bound obtained after performing $CP$ cutting-plane iterations ($LB_{CP}$) and the corresponding heuristic solution ($UB_{CP}$) \item the solution produced by $k$-means after 50 runs initialized with $k$-means++ ($UB_{++}$) \item the solution produced by $k$-means after 50 randomly initialized runs ($UB_{RAND}$) \end{itemize} We highlight the best solution in boldface. The results show that the solution $UB_{CP}$ is the best in all but one case. Note that in many cases, the solution $UB_{0}$ is already fairly competitive both in terms of bound quality and computational effort, since it requires the solution of exactly one SDP.
\begin{table} \begin{center} \footnotesize \begin{tabular}{cccccccc} \toprule $K$ & $CP$ & $LB_{0}$ & $LB_{CP}$ & $UB_{0}$ & $UB_{CP}$ & $UB_{++}$ & $UB_{RAND}$ \\ \midrule \multicolumn{8}{l}{Iris dataset}\\ \midrule 2 & 2 & 1.50679e+02 & 1.52348e+02 & \fontseries{b}\selectfont{1.52348e+02} & \fontseries{b}\selectfont{1.52348e+02} & \fontseries{b}\selectfont{1.52348e+02} & \fontseries{b}\selectfont{1.52348e+02} \\ 3 & 4 & 7.55144e+01 & 7.88421e+01 & 7.88557e+01 & \fontseries{b}\selectfont{7.88514e+01} & 7.88518e+01 & 7.88527e+01 \\ 4 & 6 & 5.47766e+01 & 5.72281e+01 & \fontseries{b}\selectfont{5.72285e+01} & \fontseries{b}\selectfont{5.72285e+01} & 5.72560e+01 & 5.72560e+01 \\ 5 & 3 & 4.38467e+01 & 4.64369e+01 & 4.64612e+01 & \fontseries{b}\selectfont{4.64462e+01} & \fontseries{b}\selectfont{4.64462e+01} & 4.64612e+01 \\ 6 & 4 & 3.67110e+01 & 3.90175e+01 & 3.90660e+01 & \fontseries{b}\selectfont{3.90400e+01} & 3.90660e+01 & \fontseries{b}\selectfont{3.90400e+00} \\ 7 & 6 & 3.18467e+01 & 3.42788e+01 & 3.43058e+01 & \fontseries{b}\selectfont{3.42982e+01} & 3.44090e+01 & 3.43859e+01 \\ 8 & 3 & 2.88697e+01 & 2.99660e+01 & \fontseries{b}\selectfont{2.99904e+01} & \fontseries{b}\selectfont{2.99904e+01} & \fontseries{b}\selectfont{2.99904e+01} & 3.04762e+01 \\ 9 & 3 & 2.64849e+01 & 2.77836e+01 & 2.79408e+01 & \fontseries{b}\selectfont{2.77861e+01} & 2.78921e+01 & 2.83071e+01 \\ 10 & 4 & 2.44186e+01 & 2.58329e+01 & 2.62712e+01 & \fontseries{b}\selectfont{2.58341e+01} & 2.59644e+01 & 2.65776e+01 \\ \midrule \multicolumn{8}{l}{Glass dataset}\\ \midrule 2 & 6 & 1.35499e+02 & 1.36525e+02 & 1.36537e+02 & \fontseries{b}\selectfont{1.36528e+02} & \fontseries{b}\selectfont{1.36528e+02} & 1.36537e+02 \\ 3 & 5 & 1.08991e+02 & 1.14320e+02 & \fontseries{b}\selectfont{1.14341e+02} & \fontseries{b}\selectfont{1.14341e+02} & \fontseries{b}\selectfont{1.14341e+02} & \fontseries{b}\selectfont{1.14341e+02} \\ 4 & 5 & 9.14749e+01 & 9.47742e+01 & 9.48402e+01 & \fontseries{b}\selectfont{9.47899e+01} & 9.48402e+01 & 9.48402e+01 \\ 5 & 6 & 7.87104e+01 & 8.34045e+01 & 8.40062e+01 & \fontseries{b}\selectfont{8.35054e+01} & 8.42973e+01 & 8.40502e+01 \\ 6 & 8 & 6.89918e+01 & 7.29430e+01 & \fontseries{b}\selectfont{7.29647e+01} & \fontseries{b}\selectfont{7.29647e+01} & 7.37947e+01 & 7.43696e+01 \\ 7 & 8 & 6.19552e+01 & 6.47908e+01 & 6.53398e+01 & \fontseries{b}\selectfont{6.47973e+01} & 7.08087e+01 & 6.66828e+01 \\ 8 & 6 & 5.61534e+01 & 5.85654e+01 & 5.87606e+01 & \fontseries{b}\selectfont{5.85699e+01} & 5.90119e+01 & 6.08941e+01 \\ 9 & 10 & 5.12932e+01 & 5.37277e+01 & 5.41810e+01 & \fontseries{b}\selectfont{5.37580e+01} & 5.55979e+01 & 5.61847e+01 \\ 10 & 4 & 4.70718e+01 & 4.93411e+01 & 4.97866e+01 & \fontseries{b}\selectfont{4.97382e+01} & 5.15837e+01 & 5.25047e+01 \\ \midrule \multicolumn{8}{l}{Wholesale dataset}\\ \midrule 2 & 2 & 3.48221e+03 & 3.48656e+03 & \fontseries{b}\selectfont{3.48657e+03} & \fontseries{b}\selectfont{3.48657e+03} & \fontseries{b}\selectfont{3.48657e+03} & \fontseries{b}\selectfont{3.48657e+03} \\ 3 & 5 & 2.85705e+03 & 2.91234e+03 & \fontseries{b}\selectfont{2.91252e+03} & \fontseries{b}\selectfont{2.91252e+03} &\fontseries{b}\selectfont{2.91252e+03} & \fontseries{b}\selectfont{2.91254e+03} \\ 4 & 9 & 2.33207e+03 & 2.46555e+03 & \fontseries{b}\selectfont{2.46558e+03} & \fontseries{b}\selectfont{2.46558e+03} & \fontseries{b}\selectfont{2.46558e+03} & \fontseries{b}\selectfont{2.46558e+03} \\ 5 & 7 & 1.91575e+03 & 2.04735e+03 & 2.04741e+03 & \fontseries{b}\selectfont{2.04735e+03} & 2.04891e+03 & 
2.04891e+03 \\ 6 & 10 & 1.63098e+03 & 1.73382e+03 & 1.74322e+03 & \fontseries{b}\selectfont{1.73496e+03} & 1.74096e+03 & 1.75359e+03 \\ 7 & 12 & 1.44236e+03 & 1.52350e+03 & 1.52551e+03 & \fontseries{b}\selectfont{1.52383e+03} & 1.52693e+03 & 1.53677e+03 \\ 8 & 11 & 1.28695e+03 & 1.36289e+03 & 1.36949e+03 & \fontseries{b}\selectfont{1.36290e+03} & 1.36621e+03 & 1.39735e+03 \\ 9 & 10 & 1.14692e+03 & 1.21928e+03 & 1.22008e+03 & \fontseries{b}\selectfont{1.21978e+03} & \fontseries{b}\selectfont{1.21978e+03} & 1.26105e+03 \\ 10 & 6 & 1.03078e+03 & 1.07843e+03 & 1.08010e+03 & \fontseries{b}\selectfont{1.07843e+03} & 1.13670e+03 & 1.21282e+03 \\ \midrule \multicolumn{8}{l}{Wdbc dataset}\\ \midrule 2 & 6 & 7.54429e+07 & 7.79415e+07 & \fontseries{b}\selectfont{7.79431e+07} & \fontseries{b}\selectfont{7.79431e+07} & \fontseries{b}\selectfont{7.79431e+07} & \fontseries{b}\selectfont{7.79431e+07} \\ 3 & 27 & 4.14673e+07 & 4.72612e+07 & 4.74219e+07 & \fontseries{b}\selectfont{4.72648e+07} & \fontseries{b}\selectfont{4.72648e+07} & 4.74999e+07 \\ 4 & 22 & 2.62662e+07 & 2.91013e+07 & 2.92269e+07 & \fontseries{b}\selectfont{2.92265e+07} & \fontseries{b}\selectfont{2.92265e+07} & \fontseries{b}\selectfont{2.92265e+07} \\ 5 & 20 & 1.90062e+07 & 2.05248e+07 & 2.05806e+07 & \fontseries{b}\selectfont{2.05352e+07} & \fontseries{b}\selectfont{2.05352e+07} & 2.06727e+07 \\ 6 & 6 & 1.47880e+07 & 1.55897e+07 & 1.69771e+07 & 1.69343e+07 & \fontseries{b}\selectfont{1.66461e+07} & 1.71215e+07 \\ 7 & 22 & 1.20747e+07 & 1.31868e+07 & 1.32742e+07 & \fontseries{b}\selectfont{1.32470e+07} & 1.32655e+07 & 1.33533e+07 \\ 8 & 8 & 1.02027e+07 & 1.07390e+07 & 1.12114e+07 & \fontseries{b}\selectfont{1.12064e+07} & 1.12441e+07 & 1.15090e+07 \\ 9 & 3 & 8.83658e+06 & 9.09983e+06 & \fontseries{b}\selectfont{9.43290e+06} & \fontseries{b}\selectfont{9.43290e+06} & 9.47386e+06 & 1.05951e+07 \\ 10 & 1 & 7.72013e+06 & 7.72013e+06 & \fontseries{b}\selectfont{8.37902e+06} & \fontseries{b}\selectfont{8.37902e+06} & 8.54589e+06 & 9.83225e+06 \\ \bottomrule \end{tabular} \end{center} \caption{Heuristic performance for selected datasets. Best upper bounds found are typeset in boldface.}\label{tab:heuristic} \end{table}
\section{Conclusions}\label{sec:conclusion} We developed an exact solution algorithm for the minimum sum-of-squares clustering problem (MSSC) using tools from semidefinite programming. We use a semidefinite relaxation that exploits three types of valid inequalities in a cutting-plane fashion to generate tight lower bounds for the MSSC. Besides these lower bounds, the semidefinite relaxation also provides a primal solution that can be used for generating data to initialize constrained $k$-means, which is known to be sensitive to the starting point. Numerical experiments clearly demonstrate the advantage of using this initialization procedure. We implemented a branch-and-bound algorithm using the ingredients described above. Our way of branching allows us to decrease the size of the problem while going down the branch-and-bound tree. Notably, the shrinking procedure preserves the structure of the problem, which is beneficial for our routine computing the bounds in each node of the branch-and-bound tree. Our code is parallelized in two ways: the nodes in the branch-and-bound tree are evaluated in parallel, and the bound computation within a node is executed in a multi-threaded MATLAB environment. The numerical results impressively exhibit the efficiency of our algorithm: we can solve real-world instances with up to 4000~data points. To the best of our knowledge, no other exact solution method can handle generic instances of that size. Moreover, the dimension of the data points does not influence the performance of our algorithm; we solve instances with more than 20\,000 features. Our algorithm can be extended to deal with certain constrained versions of sum-of-squares clustering like those with diameter constraints, split constraints, density constraints, or capacity constraints \citep{davidson2005clustering, duong:2017}. This is left for future work. Also, kernel-based clustering is a promising extension that we plan to consider \citep{dhillon2004kernel}. Finally, we have ideas on how to use our algorithm in a heuristic fashion for obtaining high-quality solutions for huge graphs. \ifJOC \ACKNOWLEDGMENT{ Parts of this project were carried out during a research stay of the third author at the University of Rome Tor Vergata, funded by the University of Rome Tor Vergata Visiting Professor grant 2018. Furthermore, this project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk{\l}odowska-Curie grant agreement MINOA No~764759. We thank Kim-Chuan Toh for bringing our attention to the work of \citet{JaChayKeil2007} and for providing an implementation of the method therein.} \else {\small \vspace*{1ex}\noindent Veronica Piccialli, \href{mailto:[email protected]}{\url{[email protected]}}, University of Rome Tor Vergata, via del Politecnico, 00133 Rome, Italy ORCiD: 0000-0002-3357-9608 \vspace*{1ex}\noindent Antonio M. Sudoso, \href{mailto:[email protected]}{\url{[email protected]}}, University of Rome Tor Vergata, via del Politecnico, 1, 00133 Rome, Italy ORCiD: 0000-0002-2936-9931 \vspace*{1ex}\noindent Angelika Wiegele, \href{mailto:[email protected]}{\url{[email protected]}}, Alpen-Adria-Universität Klagenfurt, Universitätsstraße 65--67, 9020 Klagenfurt, Austria, ORCiD: 0000-0003-1670-7951 } \fi \end{document}
\begin{document} \begin{abstract} Given a multivariate complex polynomial ${p\in\mathbb{C}[z_1,\ldots,z_n]}$, the imaginary projection $\mathcal{I}(p)$ of $p$ is defined as the projection of the variety $\mathcal{V}(p)$ onto its imaginary part. We focus on studying the imaginary projection of complex polynomials and we state explicit results for certain families of them with arbitrarily large degree or dimension. Then, we restrict to complex conic sections and give a full characterization of their imaginary projections, which generalizes a classification for the case of real conics. That is, given a bivariate complex polynomial $p\in\C[z_1,z_2]$ of total degree two, we describe the number and the boundedness of the components in the complement of $\mathcal{I}(p)$ as well as their boundary curves and the spectrahedral structure of the components. We further show a realizability result for strictly convex complement components which is in sharp contrast to the case of real polynomials. \end{abstract} \maketitle
\section{Introduction\label{se:intro}} Given a polynomial $p\in\C[\z]:=\C[z_1, \ldots ,z_n]$, the imaginary projection $\I(p)$ as introduced in \cite{jtw-2019} is the projection of the variety $\mathcal{V}(p)\subseteq\C^n$ onto its imaginary part, that is, \begin{equation} \label{eq:imagproj1} \ \I(p)= \ \left\{ \z_{\rm im} = ((z_1)_{\rm im}, \ldots, (z_n)_{\rm im}) \ : \ \mathbf{z} \in \mathcal{V}(p)\right\} \ \subseteq \ \R^n, \end{equation} where $(\cdot)_{\rm im}$ is the imaginary part of a complex number. Recently, there has been wide-spread research interest in mathematical branches which are directly connected to the imaginary projection of polynomials. As a primary motivation, the imaginary projection provides a comprehensive geometric view for notions of \emph{stability of polynomials} and generalizations thereof. A polynomial $p\in\C[\z]$ is called \textit{stable}, if $p(\z)=0$ implies $(z_j)_{\rm im}\leq 0$ for some $j\in[n]$. In terms of the imaginary projection $\I(p)$, we can express the stability of $p$ as the condition $\I(p)\cap\R^n_{>0}=\emptyset$. Stable polynomials have applications in many branches of mathematics including combinatorics (\cite{braenden-hpp} and see \cite{brown-wagner-2020} for the connection of the imaginary projection to combinatorics), differential equations \cite{borcea-braenden-2010}, optimization \cite{straszak-vishnoi-2017}, probability theory \cite{bbl-2009}, and applied algebraic geometry \cite{volcic-2019}. Further application areas include theoretical computer science \cite{mss-interlacing1, mss-interlacing2}, statistical physics \cite{borcea-braenden-leeyang1}, and control theory \cite{ms-2000}, see also the surveys \cite{pemantle-2012} and \cite{wagner-2010}. Recently, various generalizations and variations of the stability notion have been studied, such as stability with respect to a polyball \cite{gkv-2016,gkv-2017}, conic stability \cite{dgt-conic-pos-map-2019,joergens-theobald-conic}, Lorentzian polynomials \cite{braenden-huh-2020}, or positively hyperbolic varieties \cite{rvy-2021}. Exemplarily, regarding the conic stability, a polynomial $p\in\C[\z]$ is called \textit{$K$-stable} for a proper cone $K\subset \R^n$ if $p(\z)\neq 0$, whenever $\z_{\rm im}\in \inter K$, where $\inter$ is the interior. In terms of the imaginary projection, this condition can be equivalently expressed as $\I(p)\cap \inter K=\emptyset$. Another motivation comes from the close connection of the imaginary projection to hyperbolic polynomials and hyperbolicity cones \cite{garding-59}. As shown in \cite{joergens-theobald-hyperbolicity}, in case of a real \emph{homogeneous} polynomial $p$, the components of the complement $\I(p)^\mathsf{c}$ coincide with the hyperbolicity cones of $p$. These concepts play a central role in hyperbolic programming, see \cite{gueler-97,naldi-plaumann-2018, nesterov-tuncel-2016,saunderson-2019}. A prominent open question in this research direction is the generalized Lax conjecture, which claims that every hyperbolicity cone is spectrahedral, see \cite{vinnikov-2012}. Representing convex sets by spectrahedra is not only motivated by the general Lax conjecture, but also by the question of effective handling convex semialgebraic sets (see, for example, \cite{bpt-2013,kpv-2015}). Recently, the conjecture that every convex semialgebraic set would be the linear projection of a spectrahedron, the ``Helton-Nie conjecture'', has been disproven by Scheiderer \cite{scheiderer-spectrahedral-shadows}. 
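To fix the notation with a minimal univariate example (a toy illustration added here for concreteness, not taken from the cited works): for $p_{\pm}(z)=z\mp{\rm i}$ we have $\mathcal{V}(p_+)=\{{\rm i}\}$ and $\mathcal{V}(p_-)=\{-{\rm i}\}$, hence
\[
\I(p_+)=\{1\}\quad\text{and}\quad \I(p_-)=\{-1\}.
\]
Thus $\I(p_+)\cap\R_{>0}\neq\emptyset$, so $p_+$ is not stable, whereas $\I(p_-)\cap\R_{>0}=\emptyset$ and $p_-$ is stable.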
Moreover, the imaginary projection closely relates to and complements the notions of \emph{amoebas}, as introduced by Gel'fand, Kapranov and Zelevinsky \cite{gkz-1994}, and \emph{coamoebas}. The amoeba $\mathcal{A}(p)$ of a polynomial $p$ is defined as {\small $\,{\!\mathcal{A}(p)\! := \!\{(\ln|z_1|, \ldots,\ln|z_n|) \!:\! \mathbf{z} \in\! \mathcal{V}(p) \cap (\C^*)^n \}}$}, so it considers the logarithm of the absolute value of a complex number rather than its imaginary part. The coamoeba of a polynomial deals with the phase of a complex number. Each of these three viewpoints of a complex variety gives a set in a real space with the characteristic property that the complement of the closure consists of finitely many \emph{convex} connected components. See \cite{forsgard-johansson-2015}, \cite{gkz-1994} and \cite{jtw-2019} for the convexity properties of amoebas, coamoebas, and imaginary projections, respectively. Due to their convexity phenomenon, these structures provide natural classes in recent developments of convex algebraic geometry. For amoebas, an exact upper bound on the number of components in the complement is known \cite{gkz-1994}. For the coamoeba of a polynomial $p$, it has been conjectured that there are at most $n!\vol\New(p)$ connected components in the complement, where $\vol$ denotes the volume and $\New(p)$ the Newton polytope of $p$, see \cite{forsgard-johansson-2015} for more background as well as a proof for the special case $n=2$. For imaginary projections, a tight upper bound is known in the homogeneous case \cite{joergens-theobald-hyperbolicity}, but for the non-homogeneous case there only exists a lower bound \cite{jtw-2019}. Currently, no efficient method is known to calculate the imaginary projection for a general real or complex polynomial. For some families of polynomials, the imaginary projection has been explicitly characterized, including complex linear polynomials and real quadratic polynomials, see \cite{jtw-2019} and \cite[Proposition 3.2]{joergens-theobald-conic}. However, since imaginary projections for non-linear complex polynomials exhibit new structural phenomena compared to the real case, even the characterization of the imaginary projection of complex conics had remained elusive so far. Our primary goal is to reveal fundamental and surprising differences between imaginary projections of \emph{real polynomials} and \emph{complex polynomials}. In fixed degree and dimension, for a polynomial $p$ with non-real coefficients, the algebraic degree of the {\it boundary} of the imaginary projection $\partial\I(p):= \overline{\I(p)}\cap\overline{\I(p)^\mathsf{c}}$ can be higher than the case of real coefficients. Here $(.)^\mathsf{c}$ and $\overline{(.)}$ are the complement and Euclidean closure, respectively. These incidences already begin when the degree and dimension are both two. However, the contrast is not only concerning the boundary degrees, but also the arrangements and the strict convexity of the components in $\I(p)^\mathsf{c}$. We start with structural results which serve to work out the differences between the case of real and complex coefficients. Our first result is a sufficient criterion on the roots of the \emph{initial form} of an arbitrarily large degree non-real bivariate complex polynomial to have the real plane as its imaginary projection, see Theorem~\ref{th:EvenDeg} and Corollary~\ref{co:wholeplane2}. 
Next, we characterize the imaginary projections of $n$-dimensional multivariate complex quadratics with hyperbolic initial form, see Theorem~\ref{th:ndimhyperbol1} and Corollary~\ref{co:ndimhyperbol2}. In the two-dimensional case, although by generalizing from real to complex conics, the bounds on the number of bounded and unbounded components in the complement of the imaginary projections remain unchanged, the possible arrangements of these components, strictness of their convexity, and the algebraic degrees of their boundaries strongly differ. See Corollaries \ref{co:alg-degrees} and \ref{cor:oneUnbdd}. For conic sections with real coefficients, it was shown by J\"orgens, Theobald, and de~Wolff \cite{jtw-2019} that the boundary $\partial\I(p)$ consists of pieces which are algebraic curves of degree at most two. In sharp contrast to this, for complex polynomials, the boundary may not be algebraic and the degree of its irreducible pieces can go up to 8. For example, despite the simple expression of the polynomial $p = z_1^2+{\rm i}z_2^2+z_2$, an exact description of $\I(p)$ is \begin{equation} \label{eq:example1} \begin{array}{r@{\hspace*{0.5ex}}l} \mathcal{I}(p) \ = \ & \{ y \in \R^2 \, : \, -64 y_1^8-128 y_1^4 y_2^4-64 y_2^8 +256 y_1^4 y_2^3+256 y_2^7-272 y_1^4 y_2^2 \\ [1ex] & \, -400 y_2^6+144 y_1^4 y_2+304 y_2^5-27 y_1^4-112 y_2^4+16 y_2^3 \le 0\} \setminus \{(0,1/2)\}, \end{array} \end{equation} and the describing polynomial in~\eqref{eq:example1} is irreducible over $\C$. In this example, the set ${\I(p)}^{\mathsf{c}}$ consists of a single convex connected and bounded component. Any polynomial vanishing on the boundary will also vanish on the single point $(0,1/2)$ which is not part of the boundary $\partial \I(p)$. Thus, $\partial \I(p)$ is not algebraic. See Figure~\ref{fi:example1} for an illustration and we return to this example in Section \ref{se:higherdegree} and at the end of Section~\ref{se:non-hyperbolic}. \begin{figure} \caption{{\small(A)} \label{fi:example1} \end{figure} Since the topology of the imaginary projection in $\R^n$ is invariant under the action of $G_n:=\C^n\rtimes \GL_n(\R)$, that is the semi-direct product of $\GL_n(\R)$ and complex translations, the problem to understand the imaginary projections naturally leads to a polynomial classification problem. As starting point, recall that under the action of the affine group $\text{Aff}(\C^2)$, there are precisely five orbits for complex conics, with the following representatives: \[ \begin{matrix} z_1^2 \text{ (one line)},&&& z_1^2+1 \text{ (two parallel lines)},&&& z_1^2-z_2 \text{ (parabola)}, \end{matrix} \] \[ \begin{matrix} z_1^2+z_2^2 \text{ (two crossing lines)},&&& z_1^2+z_2^2-1 \text{ (circle)}. \end{matrix} \] However, the arrangement of the components in $\I(p)^\mathsf{c}$ is not invariant under the action of $\text{Aff}(\C^2)$, but only under its restriction to $G_2$. There are several other related classifications of complex conic sections. Newstead \cite{Newstead} has classified the set of projective complex conics under real linear transformations. However, out of a projective setting his method becomes ineffective as it is based on the arrangements of four intersection points between a conic and its conjugate. On the other hand, by considering the real part and the imaginary part of a complex conic $p$, under the action of $G_2$ the classification of conic sections has some relations to the problem of classifying pairs of real conics. 
Systematic classifications of this kind are mostly done in the projective setting and are well understood. See \cite{briand-2007,levy-1964,petitjean-2010,uhlig-1976}. However, those classifications rely on the invariance of the number and multiplicity of real intersection points between the two real conics. The drawback here is that under complex translations on $p$, these numbers are no longer invariant, except at infinity. To capture the invariance under $G_2$, we develop a novel classification based on the initial forms of complex conics. This classification is adapted to the imaginary projection; it is rather fine, but coarse enough to allow handling the inherent algebraic degree of 8 in the boundary description of the imaginary projection. Finally, we show that non-real complex conics can significantly improve a realization result on the complement of the imaginary projections. In \cite{joergens-theobald-hyperbolicity}, for any given integer $k\ge 1$, the authors present a polynomial $p$ of degree $d=4\lceil \frac{k}{4}\rceil+2$, given as a product of real conics, such that $\I(p)^\mathsf{c}$ has at least $k$ components that are strictly convex and bounded. Using non-real conics, we furnish a polynomial of degree $d/2+1$ having exactly $k$ components with these properties. See Theorem~\ref{th:StrictlyConvexComplex} and Question \ref{ques:deg}. The paper is structured as follows. Section \ref{se:prelim} provides our notation and the necessary background on the imaginary projection of polynomials and contains the classification of the imaginary projection for the case of real conics. Section~\ref{se:higherdegree} deals with complex plane curves and provides an example which highlights the remarkable difference that complex versus real coefficients can make for the complexity of the imaginary projection. Moreover, we determine a family of non-real plane curves of arbitrarily large degree with a full-space imaginary projection, based on the arrangement of the roots of the initial form.
In Section~\ref{se:QuadraticsWithHyperbolicInit}, we set the degree to be two and let the dimension grow and we classify the imaginary projections of complex quadratics with hyperbolic initial form. In Sections~\ref{se:mainclassification} and~\ref{se:non-hyperbolic}, we restrict the degree and dimension both to be two and we provide a full classification of the imaginary projections for affine complex conics based on their initial forms. Moreover, we determine in which classes the components in the complement of the imaginary projection have a spectrahedral description and also state them explicitly. Section~\ref{se:mainclassification} contains our main classification theorems and the corollaries differentiating the cases of complex and real coefficients. The part where the initial form is hyperbolic is already covered in \ref{se:QuadraticsWithHyperbolicInit}. Each subsection of Section~\ref{se:non-hyperbolic} treats one of the remaining classes and explains their spectrahedral structure. In particular, we show that the only class where the components in the complement are not necessarily spectrahedral is the case where the initial form has two distinct non-real roots in $\P^1_\C$ such that they do not form a complex conjugate pair. In Section~\ref{se:convex}, we prove a realization result for strictly convex complement components, which highlights another contrast between the imaginary projections of complex and real polynomials. Section~\ref{se:outlook} gives some open questions. \section{Preliminaries and background}\label{se:prelim} For a set $S\subseteq\R^n$, we denote by $\overline{S}$ the topological closure of $S$ with respect to the Euclidean topology on $\R^n$ and by $S^{\mathsf{c}}$ the complement of $S$ in $\R^n$. The \textit{algebraic degree} of $S$ is the degree of its closure with respect to the Zariski topology. The set of non-negative and the set of strictly positive real numbers are abbreviated by $\R_{\ge 0}$ and $\R_{>0}$ throughout the text. Moreover, bold letters will denote $n$-dimensional vectors. By $\P^n$ and $\P^n_{\R}$, we denote the $n$-dimensional complex and real projective spaces, respectively. For a polynomial $p \in \C[\mathbf{z}]$, the imaginary projection $\mathcal{I}(p)$ is defined in~\eqref{eq:imagproj1} and its boundary $ \overline{\I(p)}\cap\overline{\I(p)^\mathsf{c}}$ is denote by $\partial\I(p)$. \begin{theorem}\cite{jtw-2019} Let $p\in\C[\z]$ be a complex polynomial. The set $\overline{\I(p)}^\mathsf{c}$ consists of a finite number of convex connected components. \end{theorem} We denote by $a_{\rm{re}}$ and $a_{\rm{im}}$ the real and the imaginary parts of a complex number $a\in\C$, i.e., $a$ is written in the form $a_{\rm re}+{\rm i}a_{\rm im}$, such that $a_{\rm re},a_{\rm im}\in\R$. Let $p\in\C[\z]$ be a complex polynomial. After substituting $z_j = x_j+{\rm i}y_j$ for all $1\le j\le n$, the complex polynomial can be written in the form \[p(\z) =p_{\rm re}(\x,\y)+{\rm i}p_{\rm im}(\x,\y),\] such that $p_{\rm re},p_{\rm im}\in\R[\x,\y]$. We call the real polynomials $p_{\rm re}$ and $p_{\rm im}$, the \textit{real part} and the \textit{imaginary part} of $p$, respectively. Thus, finding $\I(p)$ is equivalent to determining the values of $\y$ for which the real polynomial system \begin{equation}\label{PolySystem} p_{\rm re}\,(\x,\y)=0 \; \text{ and } \; p_{\rm im}(\x,\y)=0 \end{equation} has real solutions for $\x$. 
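The splitting in \eqref{PolySystem} can be carried out mechanically in any computer algebra system. The following short SymPy sketch (our own illustration; the choice of system is immaterial) performs it for the polynomial $p=z_1^2+{\rm i}z_2^2+z_2$ from the Introduction:
\begin{verbatim}
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2', real=True)
z1, z2 = x1 + sp.I*y1, x2 + sp.I*y2

p = z1**2 + sp.I*z2**2 + z2               # the polynomial p = z1^2 + i*z2^2 + z2
p_re, p_im = sp.expand(p).as_real_imag()  # real and imaginary part

print(sp.expand(p_re))  # x1**2 - y1**2 - 2*x2*y2 + x2   (up to term order)
print(sp.expand(p_im))  # 2*x1*y1 + x2**2 - y2**2 + y2   (up to term order)
\end{verbatim}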
\begin{definition}\label{def:complexConic} Let $p\in\C[z_1,z_2]$ be a quadratic polynomial, i.e., $p = a z_1^2 + b z_1 z_2 + c z_2^2 + d z_1 + e z_2 +f$ such that $a,b,c,d,e,f\in\C$. We say that $p$ is the defining polynomial of a complex conic, or shortly, a \textit{complex conic} if its total degree equals two, i.e., at least one of the coefficients $a,b$, or $c$ is non-zero. A complex conic $p$ is called a \textit{real conic} if all coefficients of $p$ are real. \end{definition} The following lemma from \cite{jtw-2019} shows how real linear transformations and complex translations act on the imaginary projection. These are the key ingredients for computing the imaginary projection of every class of conic sections. \begin{lemma}\label{le:group-actions-improj} Let $p\in\C[\z]$ and $A\in\R^{n\times n}$ be an invertible matrix. Then \[{\I(p(A\z))=A^{-1}\I(p(\z)).}\] Moreover, a real translation $\z\mapsto \z+\aaa, \ \aaa\in\R^n$ does not change the imaginary projection. An imaginary translation $\z\mapsto \z+{\rm i}\aaa, \ \aaa\in\R^n$ shifts the imaginary projection into the direction $-\aaa$. \end{lemma} By the previous lemma, to classify the imaginary projection of polynomials we consider their orbits under the action of the group $G_n:=\C^n\rtimes \text{GL}_n(\R)$, given by real linear transformations and complex translations. Further let $\text{Aff}(\K^n):=\K^n\rtimes\text{GL}_n(\K)$ be the general affine group for $\K=\R$ or $\K=\C$. The real dimensions of these groups are \[ \begin{matrix} \dim_\R(\text{Aff}(\C^n))=2\dim_\R(\text{Aff}(\R^n))=2(n^2+n),&&\dim_\R(G_n)=n^2+2n. \end{matrix} \] Up to the action of $G_2$, a real conic $p\in\R[z_1,z_2]$ is equivalent to a conic given by one of the following polynomials. \setlength{\columnsep}{0pt} \begin{multicols}{2} \begin{itemize} \item[($i$)] $z_1^2+z_2^2-1$ (ellipse), \item[($ii$)] $z_1^2-z_2^2-1$ (hyperbola), \item[($iii$)] $z_1^2+z_2$ (parabola), \item [($iv$)]$z_1^2+z_2^2+1$ (empty set), \item[($v$)] $z_1^2-z_2^2$ (pair of crossing lines), \item[($vi$)] $z_1^2-1$ (parallel lines/one line $z_1^2$), \item [($vii$)]$z_1^2+z_2^2$ (isolated point), \item [($viii$)]$z_1^2+1$ (empty set). \end{itemize} \end{multicols} In \cite{jtw-2019}, a full classification of the imaginary projection for real quadratics was shown. In particular, the following theorem is the classification for real conics. For illustrations of the cases, see Figure~\ref{fi:real-classification}. The theorem that comes after provides the imaginary projection of some families of real quadratics. Furthermore, they state the subsequent question as an open problem. \begin{theorem}\label{th:RealConicChar} Let $p\in\R[z_1,z_2]$ be a real conic. For the normal forms (i)--(viii) from above, the imaginary projections $\I(p)\subseteq\R^2$ are as follows. \hspace{-3mm} \begin{multicols}{2} \begin{itemize} \item[($i$)] $\I(p) =\R^2$, \item[($ii$)] $\I(p) = \{-1\le y_1^2-y_2^2<0\}\cup\{\mathbf{0}\}$, \item[($iii$)] $\I(p) =\R^2\setminus\{(0,y_2):y_2\neq 0\}$, \item [($iv$)]$\I(p) =\{\y\in\R^2:y_1^2+y_2^2-1\ge 0\}$, \item[($v$)] $\I(p) =\{\y\in\R^2:y_1^2=y_2^2\}$, \item[($vi$)] $\I(p) =\{\y\in\R^2:y_1=0\}$, \item [($vii$)]$\I(p) =\R^2$, \item [($viii$)]$\I(p) =\{\y\in\R^2:y_1=\pm 1\}$. \end{itemize} \end{multicols} \end{theorem} \begin{theorem}\label{th:real-Quad} Let $p\in\C[z_1,\dots,z_n]$ be $p = \sum_{i=1}^{n-1}z_i^2-z_{n}^2+k$ for $k\in\{\pm 1\}$. 
Then \[ \I(p) = \begin{cases} \left\{\y \in \R^n \ : \ y_n^2<\sum_{i=1}^{n-1} y_i^2 \right\} \cup \{\mathbf{0}\} & \text{if } k =1, \\ \left\{\y \in \R^n \ : \ y_{n}^{2}-\sum_{i=1}^{n-1} y_i^2 \le 1 \right\} & \text{if } k =-1. \\ \end{cases} \] \end{theorem} \begin{figure} \caption{The imaginary projections of the real conic sections and their complements are colored in gray and blue, respectively. The cases $(i)$ and $(vii)$ are skipped, as their imaginary projection is the whole plane.} \label{fi:real-classification} \end{figure} The following question, which is true for real quadratics $p\in\C[\z]$, was asked in \cite[Open problem 3.4]{jtw-2019}. In Section~\ref{subs:real-non-real}, we show that it is not true in general even for complex conics. \begin{question}\label{que:open-close} Let $p\in\C[\z]$ be a polynomial. Is $\I(p)$ open if and only if $\I(p)=\R^n$? \end{question} We use the {\it initial form} of $p$ abbreviated by $\init(p)(\z)=p^h(\z,0)$ , where $p^h$ is the homogenization of $p$. The initial form consists of the terms of $p$ with the maximal total degree. Furthermore, a complex polynomial $p \in \C[\z]$ is called \emph{hyperbolic} w.r.t. $\e\in\R^n$ if the univariate polynomial $t\mapsto p(\x+t\e)$ is real-rooted. Note that any hyperbolic polynomial is a, possibly complex, multiple of a real polynomial. Finally, a \textit{spectrahedron} is a set of the form
\[ \{ \x \in \R^n \ : \ A_0 + \sum_{j=1}^n A_j x_j \succeq 0\}, \] where $A_1, \ldots, A_n$ are real symmetric matrices of size $d$. Here, ``$\succeq 0$'' denotes the positive semidefiniteness of a matrix. We also speak of a spectrahedral set if the set is given by positive definite conditions, i.e., by strict conditions. 
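For instance (a standard example, recalled only for concreteness), the closed unit disk in $\R^2$ is a spectrahedron:
\[
\{\x\in\R^2 : x_1^2+x_2^2\le 1\}=\Big\{\x\in\R^2 : \begin{pmatrix}1+x_1 & x_2\\ x_2 & 1-x_1\end{pmatrix}\succeq 0\Big\},
\]
since a symmetric $2\times 2$ matrix is positive semidefinite if and only if its diagonal entries and its determinant, here $1-x_1^2-x_2^2$, are non-negative.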
\section{Imaginary projections of complex plane curves\label{se:higherdegree}} In this section, we determine the imaginary projection of some families of arbitrarily high degree complex plane curves. Our point of departure is the characterization of real conics in Theorem \ref{th:RealConicChar}. In the following example, which is an affine version of case ($B_{+}$) in Newstead's classification \cite{Newstead}, we show that by allowing non-real coefficients the imaginary projection of a complex conic can significantly change in terms of the algebraic degree of its boundary. See Corollary \ref{co:alg-degrees}. \begin{remark}\label{re:quarticRoots} Recall that the discriminant of a univariate polynomial $p(z) = \sum_{j=0}^n a_j z^j$ is given by $\Disc(p) = (-1)^{\frac{1}{2}n(n-1)}\frac{1}{a_n} \Res(p,p')$, where $\Res$ denotes the resultant. For a quartic, having negative discriminant implies the existence of a real root. However, a positive discriminant can correspond to either four real roots or none. Let {\small \[ P = 8 a_2 a_4 - 3 a_3^2, \, R = a_3^{3}+8a_1a_4^{2}-4a_4a_3a_2,\, D = 64a_4^{3}a_0-16a_4^{2}a_2^{2}+16a_4a_3^{2}a_2-16a_4^{2}a_3a_1-3a_3^{4}. \]} If $\Disc(p)>0$, then $p=0$ has four real roots if $P < 0$ and $D<0$, and no real roots otherwise. Finally, if the discriminant is zero, the only conditions under which there is no real solution is having $D=R=0$ and $P>0$ (see, e.g., \cite[Theorem 9.13 (vii)]{janson-2011}). \end{remark} \begin{example}\label{ex:caseB} Let $p=z_1^2+{\rm i}z_2^2+z_2$. For simplifying the calculations, we use the translation $z_2\mapsto z_2+{\rm{i}}/2$ to eliminate the linear term. This turns the equation $p=0$ into $ q := z_1^2+{\rm i}z_2^2+{\rm{i}}/4=0. $ Building the real polynomial system as introduced in (\ref{PolySystem}) implies \[q_{\mathrm{re}} = x_1^2-2x_2y_2-y_1^2 = 0 \; \text{ and } \; q_{\mathrm{im}} = 4x_2^2 +8x_1y_1 - 4y_2^2 + 1= 0. \] First assume $y_1\neq 0$. Substituting $x_1$ from $q_{\mathrm{im}}=0$ into $q_{\mathrm{re}}=0$ gives\[ 16x_2^4 + (-32y_2^2 + 8)x_2^2 - 128y_1^2y_2x_2 - 64y_1^4 + 16y_2^4 - 8y_2^2 + 1 = 0. \] We calculate the discriminant of the above equation with respect to $x_2$. By the previous remark, there is a real solution for $x_2$ if the discriminant is negative, i.e., \[ -64y_1^8 - 128y_1^4y_2^4 - 64y_2^8 - 80y_1^4y_2^2 + 48y_2^6 + y_1^4 - 12y_2^4 + y_2^2< 0. \] Now we need to check the conditions where the discriminant is zero or positive. To show the positive discriminant implies no real solution for $x_2$, we rewrite the condition with the substitution $u = y_1^4$: \[ \Delta:=-64u^2 + (-128y_2^4 - 80y_2^2 + 1)u - 64y_2^8 + 48y_2^6 - 12y_2^4 + y_2^2>0. \] It is a quadratic polynomial in $u$ with negative leading coefficient. It can only be positive between the two roots for $u$ in $\Delta=0$. Those are \[ -y_2^4 - \frac{5}{8}y_2^2 + \frac{1}{128} \pm \frac{\sqrt{32768y_2^6 + 3072y_2^4 + 96y_2^2 + 1}}{128}. \] To obtain $\Delta >0$, we need to have a solution $u>0$, i.e., we need to have either $-y_2^4 - \frac{5}{8}y_2^2 + \frac{1}{128}\ge 0$ or otherwise {\small \[ \left(-y_2^4 - \frac{5}{8}y_2^2 + \frac{1}{128}\right)^2>\frac{32768y_2^6 + 3072y_2^4 + 96y_2^2 + 1}{128^2}. \] } The first inequality implies $y_2^2\le \frac{3\sqrt{3}-5}{16}$ and after simplifications the second inequality implies $y_2^2< 1/4$. The polynomial $P$ from the previous remark for the quartic polynomials evaluates to $ 4(1-4y_2^2),$ which is positive for $y_2^2< 1/4$. Therefore, for $\Delta>0$, there is no real solution for $x_2$. 
It remains now to consider the case $\Delta=0$. Since $y_1\neq 0$, to have $R=-262144y_2y_1^2=0$ we need $y_2=0$. Substituting $y_2=0$ in $D=0$ implies $-4096y_1^4 - 960=0$, which is a contradiction. Therefore, if $y_1\neq 0$, the imaginary projection of $q$ consists of points $\y \in\R^2$ for which $\Delta\le 0$. Now assume $y_1=0$. From $q_{\mathrm{im}}=0$ we can observe that $\mathbf{0} \not\in \mathcal{I}(q)$. Thus, assume $y_2\neq 0$. Solving $q_{\mathrm{re}}=0$ for $x_2$ and substituting in $q_{\mathrm{im}}=0$ implies $x_1^4-y_2^2(4y_2^2 - 1)=0.$ This equation has a real solution if and only if $-y_2^2(4y_2^2 - 1)\le0$. Substituting $y_1=0$ in $\Delta$ allows to write $\Delta$ in terms of $y_2$, which gives $\Delta_{y_2} = -y_2^2(4y_2^2 - 1)^3.$ Therefore, the imaginary projection on the $y_2$-axis is $\{(0,y_2)\in\R^2 :\Delta_{y_2}\le 0\}\setminus\{(0,0)\}$. Thus, \[ \mathcal{I}(q)=\{\y \in\R^2:-64y_1^8 - 128y_1^4y_2^4 - 64y_2^8 - 80y_1^4y_2^2 + 48y_2^6 + y_1^4 - 12y_2^4 + y_2^2\le 0\}\setminus\{\mathbf0\}. \] The irreducibility of the polynomial above over $\C$ can be verified for example using {\sc Maple}. For the original polynomial $p$, this gives the inequality description for $\mathcal{I}(p)$ stated in~\eqref{eq:example1} in the Introduction. \end{example} Even in the case of real polynomials, extending the case of real conics by letting the degree or the number of variables be greater than two dramatically increases the difficulty of characterizing the imaginary projection. Let us see one such example of a cubic plane curve, i.e., where we have two unknowns and the total degree is three. \begin{example} Let $p\in\R[\z] = \R[z_1,z_2]$ be of the form $p = z_1^3 + z_2^3 - 1$. The similar attempt as before to calculate the imaginary projection $\I(p)$ is to separate the real and the imaginary parts of $p$ according to \eqref{PolySystem}, \[ p_{\rm re} = x_1^3 - 3x_1y_1^2 + x_2^3 - 3x_2y_2^2 - 1=0 \; \text{ and } \; p_{\rm im} = 3x_1^2y_1 + 3x_2^2y_2 - y_1^3 - y_2^3 =0. \]
Despite the simplicity of the polynomial $p$, one cannot use the previous techniques to find the values of $\y \in\R^2$ such that the above system has real solutions for $\x$. The reason is that both $x_1$ and $x_2$ appear with degree higher than one in both equations. The resultant with respect to one of $x_1$ or $x_2$ is a univariate polynomial of degree six in the other, where we lack the exact tools to specify the reality of the roots. \end{example} In the following theorem, we show that the imaginary projection of a generic complex plane curve of odd degree is the whole plane. \begin{thm}\label{th:EvenDeg} Let $p\in\C[z_1,z_2]$ be a complex bivariate polynomial of total degree $d$ such that its initial form has no real roots in $\P^1$. If $d$ is odd, then the imaginary projection $\mathcal{I}(p)$ is $\R^2$. As a consequence, the imaginary projection of a generic complex bivariate polynomial of odd total degree is $\R^2$. \end{thm} \begin{proof} Since the initial form has no real roots, it can be written in the form \[ \init(p) = \prod_{j=1}^{d}(z_1-\alpha_j z_2), \] where $\alpha_j\notin\R$ for $1\le j \le d$. Substitute $z_j=x_j+ {\rm i}y_j$ for $j=1,2$ in $p$ and form the polynomial system $p_{\rm re} = p_{\rm im} = 0$ as introduced in (\ref{PolySystem}). For any fixed $\y \in\R^2$, both equations are of total degree $d$ in $x_1$ and $x_2$. Denote by $p_{\mathrm{re}}^h$ and $p_{\mathrm{im}}^h$ the homogenizations of these two polynomials by a new variable $x_3$. Since both $p_{\mathrm{re}}^h$ and $p_{\mathrm{im}}^h$ have odd degree, the number of complex intersection points (counted with multiplicities) is odd, while the number of non-real intersection points (counted with multiplicities) is even. Thus, there is a real intersection point in $\P^2_{\R}$. We claim that this intersection point lies in the affine piece where $x_3 = 1$. This implies that for any given $\y \in\R^2$, there exist $x_1,x_2\in\R$ for which $p_{\mathrm{re}}=p_{\mathrm{im}}=0$, and therefore completes the proof. To prove our claim, we show that the two curves defined by $p_{\mathrm{re}}^h=0$ and $p_{\mathrm{im}}^h=0$ do not intersect at infinity, i.e., their intersection point has $x_3\neq 0$. Let us assume that they intersect at infinity and set $x_3 = 0$ in $p_{\mathrm{re}}^h$ and $p_{\mathrm{im}}^h$. This substitution turns the complex polynomial $p_{\mathrm{re}}^h+ {\rm i}p_{\mathrm{im}}^h$ into \[ q:=\prod_{j=1}^{d}(x_1-\alpha_j x_2). \] Thus, for the two projective curves to intersect at infinity we need to have $q=0$. 
Since $\alpha_j\notin\R$ for $1\le j \le d$, the only real solution of $q=0$ is $x_1=x_2=0$, which together with $x_3=0$ does not define a point of $\P^2_{\R}$. This is a contradiction, so the two curves do not intersect at infinity. \end{proof}
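Returning for a moment to Example~\ref{ex:caseB}: membership in an imaginary projection can also be checked pointwise by solving the system \eqref{PolySystem} directly. The following SymPy sketch (our own illustrative cross-check, not part of the argument) confirms for $p=z_1^2+{\rm i}z_2^2+z_2$ that $(0,1)\in\I(p)$, while the excluded point $(0,1/2)$ indeed does not belong to $\I(p)$:
\begin{verbatim}
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)

def in_improj(y1, y2):
    # does (y1, y2) lie in I(p) for p = z1^2 + i*z2^2 + z2 ?
    z1, z2 = x1 + sp.I*y1, x2 + sp.I*y2
    p_re, p_im = sp.expand(z1**2 + sp.I*z2**2 + z2).as_real_imag()
    sols = sp.solve([p_re, p_im], [x1, x2], dict=True)
    return any(all(v.is_real for v in s.values()) for s in sols)

print(in_improj(0, 1))                  # True
print(in_improj(0, sp.Rational(1, 2)))  # False
\end{verbatim}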
\begin{cor} \label{co:wholeplane2} Let $p\in\C[z_1,z_2]$ be a complex bivariate polynomial. The imaginary projection $\I(p)$ is $\R^2$ if $p$ has a factor $q$ such that the total degree of $q$ is odd and its initial form has no real roots in $\P^1$. \end{cor} \begin{proof} Since for $p_1,p_2 \in \C[\z]$, we have $\I(p_1 \cdot p_2) = \I(p_1) \cup \I(p_2)$, we claim that if there is a factor $q$ in $p$ whose imaginary projection is $\R^2$, then $\I(p)=\R^2$. The result now follows from the previous theorem. \end{proof} In the following section, instead of the dimension we set the degree to be two and characterize the imaginary projection for a certain family of quadratic hypersurfaces. \section{Complex quadratics with hyperbolic initial form\label{se:QuadraticsWithHyperbolicInit}} As we have seen in Example \ref{ex:caseB}, the methods used to compute the imaginary projection of real quadratics is not always useful for complex ones. However, for a certain family, namely the quadratics with hyperbolic initial form, one can build up on the methods for the real case. To classify the imaginary projections of any family of polynomials, Lemma \ref{le:group-actions-improj} suggests bringing them to their proper normal forms. \begin{lemma}\label{le:hyp-Init-quadratic-forms} Under the action of $G_n$, any quadratic polynomial $p\in\C[z_1, \dots, z_n]$ with hyperbolic initial form can be transformed to one of the following normal forms: \begin{enumerate} \item $z_1^2+\alpha z_2+ r z_3+\gamma$,\\ \item $\sum_{i=1}^{j}z_i^2 - z_{j+1}^2+\alpha z_{j+2}+r z_{j+3}+\gamma \qquad\text{for some } j=1,\dots,n-1$, \end{enumerate} \noindent such that terms containing $z_k$ do not appear for $k>n$, and $\alpha,r,\gamma\in\C$. \end{lemma} \begin{proof} The initial form $\init(p)$ is a hyperbolic polynomial of degree two. That is, after a real linear transformation it can be either $z_1^2$ or of the form $\z'^TM\z'$ such that $\z' = (z_1,\dots, z_{j+1})$ for some $1\le j\le n-1$ and $M$ is a square matrix of size $j+1$ with signature $(j,1)$. See \cite{garding-59}. This explains the initial forms in (1) and (2). Any term $\lambda z_j$ for some $1\le j \le n$, such that $z_j$ appears in our transformed initial forms, cancels out by one of the translations $z_j\mapsto z_j\pm \frac{\lambda}{2}$ without changing the initial form. Finally, we show that the number of linear terms in the rest of the variables is at most two. Consider the complex linear form $\sum_{j=1}^{n}\lambda_j z_j$. For $1\le j \le n$, let $\lambda_j= r_j + {\rm i} s_j$ such that $r_j,s_j\in\R$. We can now write the sum as $ (\sum_{j=1}^{n}r_j z_j)+{\rm i}(\sum_{j=1}^{n}s_j z_j) $. If in the real part at least one of the $r_j$, say, $r_1$, is non-zero, then a sequence of linear transformations $z_1\mapsto z_1-\frac{r_j}{r_1}z_j$ for $j=2,\dots,n$, cancels out $\sum_{j=2}^{n}r_j z_j$. Similarly, the complex part reduces to only one term. \end{proof} We first focus on the case where $n=2$. In this case, we explicitly express the unbounded spectrahedral components forming $\I(p)^\mathsf{c}$. The following subsection covers part of the proof of Theorem \ref{th:complex-classification1}.
\subsection{Complex conics with hyperbolic initial form}\label{subs:Conic-Hyp-Init} To match them with our classification of conics in Theorem~\ref{th:conic-classification1}, we do a real linear transformation in the case (2) and write them as \[ \begin{matrix} \text{(1a.1)} \,p=z_1^2+\gamma,&&&&\text{(1a.2)}\,p=z_1^2+\gamma z_2\,\,\,\,\,\,\gamma\neq 0,&&&&\text{(1b)} \,p=z_1z_2+\gamma, \end{matrix} \] \noindent for some $\gamma\in\C$. To find $\I(p)$ for each normal form, we compute the resultant of the two real polynomials, as introduced in (\ref{PolySystem}), with respect to $x_i$ to have a univariate polynomial in $x_j$, where $i,j\in\{1,2\}$, and $i\neq j$. Then we use the discriminantal conditions on the univariate polynomials to argue about the real roots. First consider the normal form (1a.1). If $\gamma_{\mathrm{im}}=0$, then we have the real conics of the cases $(vi)$ and $(viii)$ in Theorem \ref{th:RealConicChar}. The two real polynomials $ p_{\mathrm{re}} = x_1^2-y_1^2+\gamma_{\mathrm{re}} =0 \; \text{ and } \; p_{\mathrm{im}} = 2 x_1 y_1+\gamma_{\mathrm{im}} =0 $ form the system (\ref{PolySystem}) here. From $\gamma_{\mathrm{im}}\neq 0$, we need to have $y_1\neq 0$. Now substituting $x_1 = \frac{-\gamma_{\mathrm{im}}}{2 y_1}$ from $p_{\mathrm{im}}=0$ into $p_{\mathrm{re}}=0$ and solving for $y_1^2$ implies $ y_1^2 = \frac{1}{2}\left(\gamma_{\mathrm{re}}+\sqrt{\gamma_{\mathrm{re}}^2+\gamma_{\mathrm{im}}^2}\right). $ Therefore, \begin{equation} \tag{\text{1a.1}} \mathcal{I}(p) = \begin{cases} \text{A unique line} &\text{if } \gamma\in\R_{\le 0},\\ \text{Two parallel lines} &\text{otherwise}. \end{cases} \end{equation} Clearly, the closures of the components in the complement are spectrahedra. Now consider (1a.2) which is a generalization of the parabola case $(iii)$ in Theorem~\ref{th:RealConicChar}, where $\gamma=1$. Similarly to the previous case, we build the corresponding polynomial system as (\ref{PolySystem}). The discriminantal condition after substituting $x_2$ from $p_{\mathrm{im}} = 0$ into $p_{\mathrm{re}} = 0$ implies that there exists a real $x_1$ if and only if ${4|\gamma|^2(y_1^2+\gamma_{\mathrm{im}}y_2)\ge 0}$. Hence, $\I(p)^\mathsf{c}$ consists of $\y\in\R^2$ such that $y_1^2+\gamma_{\mathrm{im}}y_2< 0$. This inequality specifies the open subset of $\R^2$ bounded by the parabola $y_1^2+\gamma_{\mathrm{im}}y_2= 0$ and containing its focus. Therefore, \begin{equation} \tag{\text{1a.2}} \mathcal{I}(p) = \begin{cases} \R^2\setminus \{(0,y_2):y_2\neq 0\} &\text{if } \gamma\in\R, \\ \{\y \in\R^2 : y_1^2+\gamma_{\mathrm{im}}y_2\ge 0\}&\text{otherwise.} \end{cases} \end{equation} Notice that this incidence of $\I(p)^\mathsf{c}$ consisting of one unbounded component does not occur for real conics. See Corollary \ref{cor:oneUnbdd}. Further, $\I(p)^\mathsf{c}$ for $\gamma\notin\R$ is given by the unbounded spectrahedral set defined by \begin{equation*} \left(\begin{matrix} 1 & y_1 \\ y_1 & -\gamma_{\mathrm{im}}y_2 \end{matrix}\right)\succ 0. \end{equation*} For the last case (1b) from the corresponding real polynomial system $p_{\mathrm{re}} =p_{\mathrm{im}}=0$, one can simply check that $\gamma=0$ implies $\mathcal{I}(p)=\{\y\in\R^2:y_1y_2=0\}$. Now let $\gamma\neq 0$ and first assume $y_1y_2\neq 0$. After the substitution of $x_2$ from $p_{\mathrm{im}}=0$ to $p_{\mathrm{re}}=0$, the discriminantal condition on the quadratic univariate polynomial to have a real $x_1$ implies \[ \gamma_{\mathrm{re}} - |\gamma| \le 2 y_1 y_2 \le \gamma_{\mathrm{re}}+ |\gamma|. 
\] If $\gamma\in\R\setminus\{0\}$, then $\mathbf{0}$ is the only point with $y_1y_2=0$ that is included in $\I(p)$. If $\gamma\notin\R$, then the union of the two axes except the origin is included in $\I(p)$. Thus, \begin{equation} \tag{\text{1b}} \I(p)= \begin{cases} \text{The union of the two axes $y_1$ and $y_2$} & \text{if } \gamma = 0, \\ \left\{\y \in \R^2 \ : \ 0 < y_1 y_2 \le \gamma \right\} \cup \{\mathbf{0}\} & \text{if } \gamma \in \R_{>0}, \\ \left\{\y \in \R^2 \ : \ \gamma \le y_1 y_2 < 0 \right\} \cup \{\mathbf{0}\} & \text{if } \gamma \in \R\setminus\R_{\ge0}, \\ \left\{\y \in \R^2 \ : \ \frac{1}{2}(\gamma_{\mathrm{re}} - |\gamma|) \le y_1 y_2 \le \frac{1}{2}( \gamma_{\mathrm{re}} + |\gamma| ) \right\} \setminus \{\mathbf{0}\} & \text{if } \gamma \not\in \R. \end{cases} \end{equation} \begin{corollary}\label{co:spectra-double-real-root} Let $p\in\C[z_1,z_2]$ be a complex conic with hyperbolic initial form. The complement $\I(p)^\mathsf{c}$ of the imaginary projection consists of only unbounded spectrahedral components. \end{corollary} \begin{proof} We saw this already for the cases (1a.1) and (1a.2). Therefore, we only prove the statement for (1b). There are four unbounded components, namely in each quadrant one, and no bounded component in $\mathcal{I}(p)^\mathsf{c}$. The closures of the four unbounded components after setting \[w = \sqrt{\frac{1}{2}( |\gamma|+\gamma_{\mathrm{re}})}\,\quad \text{and}\quad u = \sqrt{\frac{1}{2}( |\gamma| - \gamma_{\mathrm{re}})}\,\] have the following representations as spectrahedra. In the quadrants $y_1y_2\geq 0$, they are expressed by ${y_1 y_2 - \frac{1}{2}(\gamma_{\mathrm{re}} + |\gamma|) \ge 0}$, or equivalently, $S_1(y_1,y_2)\succeq 0$ and ${ S_2(y_1,y_2)\succeq 0}$, where \[ S_1(y_1,y_2)=\left( \begin{array}{ccc} y_1 && w \\ w & &y_2\\ \end{array} \right), \quad S_2(y_1,y_2)=\left( \begin{array}{cc} -y_1 & w \\ w & -y_2 \\ \end{array} \right). \] In the quadrants with $y_1y_2\leq 0$, they are expressed by ${y_1 y_2 - \frac{1}{2} (\gamma_{\mathrm{re}} - |\gamma|) \le 0}$, or equivalently, $S_3(y_1,y_2)\succeq 0$ and $S_4(y_1,y_2)\succeq 0$, where \[ S_3(y_1,y_2)=\left( \begin{array}{cc} y_1 & u \\ u & - y_2 \\ \end{array} \right), \quad S_4(y_1,y_2)=\left( \begin{array}{ccc} - y_1 && u \\ u & &y_2 \end{array} \right). \] \end{proof} Given a conic $q$, an explicit description of the components of $\I(q)^\mathsf{c}$ can be derived by using those of its normal form $p$ and applying on $\y$ the inverse operations turning $q$ to $p$. We close this subsection by providing two examples for the cases (1a.2) and (1b) and their corresponding spectrahedral components. \begin{example}\label{ex:(1a.2)} Let $q(z_1,z_2)=z_1^2+2z_1z_2+z_2^2+2{\rm i}z_2+1$. By applying the transformation $A$ and the translation $\w$ given by \[ A:=\begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix} \quad \text{and} \quad \w:=\begin{pmatrix} 0 \\ \rm{i/2} \end{pmatrix}, \] the conic $q$ is transformed to its normal form $p=z_1^2+2{\rm i}z_2$. Thus, we have {\small \[\I(p)^\mathsf{c}=\left\{ y \in\R^2:\begin{pmatrix} 1 & y_1 \\ y_1 & -2y_2 \end{pmatrix}\succ 0\right\} \ \text{and} \ \ \mathcal{I}(q)^\mathsf{c}=\left\{ y \in\R^2:\begin{pmatrix} 1 & y_1+y_2 \\ y_1+y_2 & -2y_2+1 \end{pmatrix}\succ 0\right\}, \] } \\ such that $\I(q)^\mathsf{c}$ is obtained by the inverse transformations for $\y$ in $\I(p)^\mathsf{c}$. Figure \ref{fig:improjComplex} (1a) illustrates $\I(q)^\mathsf{c}$. \end{example} \begin{example}\label{ex:1b} Let $q(z_1,z_2)=z_1^2-z_2^2+2{\rm i}$. 
Applying $A=\frac{1}{2}\left(\begin{matrix} -1 & -1 \\ -1 & 1 \end{matrix}\right)$ transfers the conic $q$ into $p = z_1z_2+2{\rm i}=0$. The value of both $u$ and $w$ introduced in the proof of Corollary \ref{co:spectra-double-real-root} is 1. By applying $A^{-1}$ to $\y$, the matrices $S_1,\ldots,S_4$ transform to {\small \begin{align*} T_1(y_1,y_2)=& \begin{pmatrix} -y_1-y_2 & 1 \\ 1 & -y_1+y_2 \end{pmatrix}, \ \qquad T_2(y_1,y_2)= \ \left(\begin{matrix} y_1+y_2 && 1 \\ 1 && y_1-y_2 \end{matrix}\right), \\ T_3(y_1,y_2)=& \ \begin{pmatrix} -y_1-y_2 &1 &\!\!\\ 1& y_1-y_2&\!\! \end{pmatrix}, \ \qquad T_4(y_1,y_2)= \ \begin{pmatrix} y_1+y_2 &1\, \\ 1& -y_1+y_2\, \end{pmatrix}. \end{align*} } Thus, the complement of the imaginary projection as shown in Figure~\ref{fig:dist-real} is given by \[ \overline{\I(q)^\mathsf{c}}=\bigcup_{j=1}^4\left\{ \y \in\R^2:T_j(y_1,y_2)\succeq 0\right\}. \] \begin{figure} \caption{The first four pictures represent $T_j(y_1,y_2)\succeq 0$ for $1\le j\le4$, and the last one shows their union, which gives $\I(q)^\mathsf{c} \label{fig:dist-real} \end{figure} \end{example} In the example above all four components are strictly convex, which can not occur in the case of real conics. This provides a key ingredient in the proof of Theorem \ref{th:StrictlyConvexComplex}.
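As a quick numerical companion to Example~\ref{ex:1b} (our own sketch, not part of the original argument), one may sample the union $\bigcup_{j=1}^{4}\{T_j(y_1,y_2)\succeq 0\}$ on a grid and recover, in coarse ASCII form, the four strictly convex regions of Figure~\ref{fig:dist-real}:
\begin{verbatim}
import numpy as np

# the matrices T_1,...,T_4 from Example ex:1b (q = z1^2 - z2^2 + 2i)
def T(j, y1, y2):
    if j == 1: return np.array([[-y1 - y2, 1.0], [1.0, -y1 + y2]])
    if j == 2: return np.array([[ y1 + y2, 1.0], [1.0,  y1 - y2]])
    if j == 3: return np.array([[-y1 - y2, 1.0], [1.0,  y1 - y2]])
    return             np.array([[ y1 + y2, 1.0], [1.0, -y1 + y2]])

def in_union(y1, y2):
    # a point belongs to the displayed union iff some T_j is positive semidefinite
    return any(np.all(np.linalg.eigvalsh(T(j, y1, y2)) >= -1e-12) for j in range(1, 5))

for y2 in np.linspace(3, -3, 13):
    print("".join("#" if in_union(y1, y2) else "." for y1 in np.linspace(-3, 3, 25)))
\end{verbatim}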
\subsection{Higher dimensional complex quadratics} We now let the dimension to be at least three and we use the normal forms provided in Lemma \ref{le:hyp-Init-quadratic-forms} to show the following classification of the imaginary projection. To avoid redundancy, for each quadratic polynomial we set $n$ to be the largest index of $z$ appearing in its normal form. Since we have already covered the case of conics, we need to consider $n\ge 3$. \begin{theorem} \label{th:ndimhyperbol1} Let $n \ge 3$ and $p\in\C[z_1,\dots,z_n]$ be a quadratic polynomial with hyperbolic initial form. Up to the action of $G_n$, the imaginary projection $\I(p)$ is either $\R^n$, $\R^n\setminus\{(0,\dots,0,y_n)\in\R^n : y_n\neq 0\}$, or otherwise we can write $p$ as $p= \sum_{i=1}^{n-1}z_i^2 - z_{n}^2+\gamma$ for some $\gamma\in\C$ such that $|\gamma|=1$ and we get \begin{equation*} \mathcal{I}(p)= \begin{cases} \left\{\y \in \R^n \ : \ y_n^2<\sum_{i=1}^{n-1} y_i^2 \right\} \cup \{\mathbf{0}\} & \text{if } \gamma =1, \\ \left\{\y \in \R^n \ : \ y_{n}^{2}-\sum_{i=1}^{n-1} y_i^2 \le 1 \right\} & \text{if } \gamma =-1, \\ \left\{\y \in \R^n \ : \ y_{n}^{2}-\sum_{i=1}^{n-1} y_i^2 \le\frac{1}{2}(1-\gamma_{\rm re}) \right\} \setminus \{\mathbf{0}\} & \text{if } \gamma \not\in \R. \end{cases} \end{equation*} \end{theorem} \begin{proof} By real scaling and complex translations, any of the forms in Lemma \ref{le:hyp-Init-quadratic-forms} drops into one of the following cases: \[ \begin{matrix} \text{(a)}\, \alpha=r=\gamma=0,&&& \text{(b)}\, \alpha=1,\,\, \text{and}\,\, r=\gamma=0,&&& \text{(c)}\,\alpha\notin\R,\,\, \text{and} \,\,r,\gamma=0, \end{matrix} \] \[ \begin{matrix} \text{(d)}\, \alpha\notin\R,\, r=1,\,\, \text{and} \,\,\gamma=0,&&& \text{(e)}\,\alpha=r=0, \,\,\text{and} \,\,\gamma\neq 0. \end{matrix} \] For the normal form (1) all cases but (d) drop into the conic sections discussed previously. Case (d) is similar for both normal forms (1) and (2). Thus we focus on (2). The imaginary projection for the cases (a) and (b) are known from the real classification and they are $\R^n$ and $ \R^n\setminus\{(0,\dots,0,y_n)\in\R^n : y_n\neq 0\}$, respectively. See \cite[Theorem 5.4]{jtw-2019}. In case (c) after building the system (\ref{PolySystem}) and considering two cases, based on whether the real part of $\alpha$ is zero or not, one can then check that $\mathcal{I}(p) = \R^n$ as follows. We have \[ \begin{array}{rcl} p_{\rm re} &=&\sum_{i=1}^{n-2}x_i^2-x_{n-1}^{2}-\sum_{i=1}^{n-2}y_i^2+y_{n-1}^{2}+\alpha_{\rm re}x_n-\alpha_{\rm im}y_n,\\ p_{\rm im} &=&2\sum_{i=1}^{n-2}x_iy_i-2 x_{n-1} y_{n-1}+\alpha_{\rm im} x_{n}+\alpha_{\rm re} y_{n}. \end{array} \] First assume $\alpha_{\rm re} = 0$. For any $\y\in\R^n$, the equation $p_{\rm re} = 0$ has solutions $(x_1,\dots,x_{n-1})\in\R^{n-1}$. By substituting any of those solutions in $p_{\rm im} = 0$ we can solve it for $x_n$ and get a real solution. Now let $\alpha_{\rm re}\neq 0$. In this case, we substitute $x_n$ from the second equation into the first. For any $\y\in\R^n$, we get $\sum_{i=1}^{n-2}(x_i-r_i)^2-(x_{n-1}-r_{n-1})^{2} = r_n$ for some $r_1,\dots,r_{n}\in\R$ and therefore, there always exists a real solution $(x_1,\dots,x_{n-1})\in\R^{n-1}$. Similarly, in the case (d), for any $\y\in\R^n$, there exists a real solution $(x_1,\dots,x_{n-1})\in\R^{n-1}$ for $p_{\rm im} = 0$ and for any $\y\in\R^n$ and any $(x_1,\dots,x_{n-1})\in\R^{n-1}$, there exists a real $x_n$ for $p_{\rm re}=0$. Thus $\I(p) = \R^n$ in this case, too. Now we focus on case (e). 
Let $p= \sum_{i=1}^{n-1}z_i^2 - z_{n}^2+\gamma$ for some $\gamma\in\C \setminus \{0\}$. Building the real system (\ref{PolySystem}) for $p$ yields \[ \begin{matrix} p_{\rm re} &=&\sum_{i=1}^{n-1}x_i^2-x_{n}^{2}-\sum_{i=1}^{n-1}y_i^2+y_{n}^{2}+\gamma_{\rm re},&& p_{\rm im} &=&2\sum_{i=1}^{n-1}x_iy_i-2 x_{n} y_{n}+\gamma_{\rm im}. \end{matrix} \] We can assume $|\gamma| = 1$. Note that $\{\mathbf{0}\}\in\mathcal{I}(p)$ if and only if $\gamma\in\R$. We can thus exclude the origin in the following calculations. Moreover, Theorem \ref{th:real-Quad} shows the cases where $\gamma = \pm 1$. Thus, we need to consider the case $\gamma\notin\R$. Let $T$ be an orthogonal transformation on $\R^{n-1}$. Invariance of the polynomials $\sum_{j=1}^{n-1}{y}_j^2$ and $\sum_{j=1}^{n-1}x_jy_j$ under the mapping $(x,y) \mapsto (T(x),T(y))$ implies \begin{center} $(y_1,y_2,\dots,y_{n}) \in \mathcal{I}(p)\qquad$ if and only if $\qquad (y'_1,\dots,y'_{n-1},y_n) \in \mathcal{I}(p)$, \end{center} where $(y'_1,\dots,y'_{n-1}) = T(y_1,\dots,y_{n-1})$. For a given $\y\in\I(p)$, let $T$ be a transformation with the property $T(y_1,\dots,y_{n-1})=(\sqrt{\sum_{i=1}^{n-1}y_i^2},0,\dots, 0)$ and set $(x'_1,\dots,x'_{n-1})=T(x_1,\dots,x_{n-1})$. We can now rewrite the simplified polynomial system as \[ \begin{matrix} p_{\rm re} &=&\sum_{i=1}^{n-1}{x'_{i}}^{2}-x_{n}^{2}-{y'_1}^{2}+y_{n}^{2}+\gamma_{\rm re}, &&& p_{\rm im} &=& 2 x'_{1} y'_{1}-2 x_{n} y_{n}+\gamma_{\rm im}. \end{matrix} \] First consider $y'_1 = 0$. This implies $y_n\neq 0$. Solving $p_{\rm im} =0$ for $x_n$ and substituting in $p_{\rm re} = 0$ implies \[ 4y_{n}^2(\sum_{i=1}^{n-1}{x'_i}^{2})=\left(\gamma_{\rm re}^{2}+\gamma_{\rm im}^{2}\right)-\left(2y_{n}^{2}+\gamma_{\rm re}^{}\right)^{2} = 1-\left(2y_{n}^{2}+\gamma_{\rm re}\right)^{2}. \] This has a real solution for $(x'_1,\dots,x'_{n-1})$ if and only if $y_n^2\le\frac{1-\gamma_{\rm re}}{2}$. Now assume $y'_1 \neq 0$. Observe that if ${y'_1}^2 = y_n^2$ then we always get a real solution. Thus assume $\frac{y_{n}^{2}}{{y'_1}^2}-1\neq 0$. Solving $p_{\rm im} =0$ for $x'_1$ and substituting in $p_{\rm re} = 0$ implies {\small\[ \left(\frac{y_{n}^{2}}{{y'_1}^2}-1\right) \left(x_{n}-\frac{\gamma_{\rm im} y_{n}}{2 {y'_1}^2 \left(\frac{y_{n}^{2}}{{y'_1}^2}-1\right)}\right)^{2}+\sum_{i=2}^{n-1}{x'_i}^{2}+ \frac{\left(y_{n}^{2}-{y'_1}^2\right)^{2}+\gamma_{\rm re}\left(y_{n}^{2}-{y'_1}^2\right)-\left(\frac{\gamma_{\rm im}^{}}{2}\right)^{2}}{ y_{n}^{2}- {y'_1}^2}=0. \]} If ${y'_1}^2 > y_n^2$, there always is a real solution and otherwise, it has a real solution if and only if $\left(y_{n}^{2}-{y'_1}^{2}\right)^{2}+\gamma_{\rm re}\left(y_{n}^{2}-{y'_1}^{2}\right)-\left(\frac{\gamma_{\rm im}^{}}{2}\right)^{2} \le 0$. That is, $y_{n}^{2}-{y'_1}^{2}\le\frac{1-\gamma_{\rm re}^{}}{2}$. To get the imaginary projection of the original system, it is enough to do the inverse transformation $T^{-1}$. This completes the proof. \end{proof}
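For concreteness (a worked instance added here, obtained by simply substituting into the statement), take $n=3$ and $\gamma={\rm i}$, i.e., $p=z_1^2+z_2^2-z_3^2+{\rm i}$. Since $\gamma_{\rm re}=0$, the third case of Theorem~\ref{th:ndimhyperbol1} gives
\[
\I(p)=\Big\{\y\in\R^3 : y_3^2-y_1^2-y_2^2\le\frac{1}{2}\Big\}\setminus\{\mathbf{0}\},
\]
so $\I(p)^\mathsf{c}$ consists of the two unbounded convex regions $\{y_3^2-y_1^2-y_2^2>\frac{1}{2},\ \pm y_3>0\}$ together with the single point $\mathbf{0}$; this matches the alternative ``two unbounded components and a single point'' in Corollary~\ref{co:ndimhyperbol2} below.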
\begin{cor} \label{co:ndimhyperbol2} Let $p\in\C[z_1,\dots,z_n]$ be a quadratic polynomial with hyperbolic initial form. Then \begin{itemize} \item[(1)] the complement $\mathcal{I}(p)^\mathsf{c}$ is either empty or it consists of \subitem- one, two, three, or four unbounded components; or \subitem- two unbounded components and a single point. \item[(2)] the complement of the closure $\overline{\mathcal{I}(p)}^\mathsf{c}$ is either empty or unbounded. \item[(3)] the algebraic degrees of the irreducible components in $\partial\I(p)$ are at most two. \end{itemize} \end{cor} \section{The main classification of complex conics\label{se:mainclassification}} In this section, we give a classification of the imaginary projection $\mathcal{I}(p)$ where $p\in\C[\z] = \C[z_1,z_2]$ is a complex conic as in Definition \ref{def:complexConic}. We state our topological classification in terms of the number and boundedness of the components in $\mathcal{I}(p)^\mathsf{c}$. In particular, this implies that the number of bounded and unbounded components do not exceed one and four, respectively. Furthermore, $\mathcal{I}(p)^\mathsf{c}$ cannot contain both bounded and unbounded components for some complex conic $p$. A main achievement of this section is to establish a suitable classification and normal forms of complex conics under the action of the group $G_2$. There are infinitely many orbits on the set of complex conics under this action, since the real dimension of $G_2$ is $8$ and the set of complex conics has real dimension $10$. Each of our normal forms corresponds to infinitely many orbits that share their topology of imaginary projection by Lemma \ref{le:group-actions-improj}. As a consequence of the obstructions in the existing classifications of conics that we discussed in the Introduction, we developed our own classification of conic sections. It is based on the five distinct arrangement possibilities for the roots of the initial form in $\P^1$ that are grouped in two main cases, depending on whether the initial form of the complex conic is hyperbolic or not: \setlength{\columnsep}{-10pt} \begin{multicols}{2} \begin{itemize} \item[] \hspace{-0.82cm}\underline{Hyperbolic initial form} \item[] \item[(1a)] A double real root \item [(1b)]Two distinct real roots \item[] \hspace{-0.85cm}\underline{Non-hyperbolic initial form} \hspace{-3mm} \item [(2a)]A double non-real root \item [(2b)]One real and one non-real root \item [(2c)]Two distinct non-real roots \end{itemize} \end{multicols} \begin{thm}[\textbf{Topological Classification}] \label{th:complex-classification1} Let $p\in\C[z_1,z_2]$ be a complex conic. For the above five cases, the set $\I(p)^\mathsf{c}$ is \setlength{\columnsep}{-10pt} \begin{multicols}{2} \begin{itemize} \item[] \item[(1a)] the union of one, two, or three \item[]unbounded components. \item [(1b)] the union of four \item[]unbounded components. \item[] \item [(2a)] empty. \item[(2b)] empty, a single point, \item[]or a line segment. \item [(2c)]empty or one bounded component, \item[]possibly open. \end{itemize} \end{multicols} In particular, the components of $\mathcal{I}(p)^\mathsf{c}$ are spectrahedral in all the first four classes. This is not true in general for the last class {\rm(2c)}. \end{thm} The following corollary relates the boundedness of the components in $\mathcal{I}(p)^\mathsf{c}$ to the hyperbolicity of the initial form $\init(p)$. \begin{cor}\label{co:hyperbolicityBdd} Let $p\in\C[z_1,z_2]$ be a complex conic. 
Then $\mathcal{I}(p)^\mathsf{c}$ consists of unbounded components if and only if the initial form of $p$ is hyperbolic. Otherwise, $\mathcal{I}(p)^\mathsf{c}$ is empty or consists of one bounded component. Moreover, if there is a bounded component with non-empty interior, then $\init(p)$ has two distinct non-real roots. \end{cor} Figure~\ref{fig:improjComplex} represents the types that do not appear for real coefficients. For instance, the middle picture, labeled as (2b), shows the case where $\I(p)^\mathsf{c}$ consists of a bounded component with empty interior. This can not occur if $p$ has only real coefficients. The other two pictures are discussed in the next two corollaries. The following corollary compares the algebraic degrees of the irreducible components in the boundary $\partial\I(p)$. Its proof comes at the end of the next section. \begin{figure} \caption{The complements of the imaginary projections are colored in blue. The pictures show cases in the classification of the imaginary projection for complex conics which do not appear for real conics. The orange line in the right figure represents a generic line intersecting the boundary in two points, which is used to prove the non-spectrahedrality of this example in Section~\ref{se:non-hyperbolic} \label{fig:improjComplex} \end{figure} \begin{cor} \label{co:alg-degrees} Let $p\in\C[z_1,z_2]$ be a complex conic. \begin{enumerate} \item The boundary $\partial\mathcal{I}(p)$ may not be algebraic. The algebraic degree of any irreducible component in its Zariski closure is at most 8. The bound is tight. If $\mathcal{I}(p)^\mathsf{c}$ has no bounded components, then $\partial\mathcal{I}(p)$ is algebraic and it consists of irreducible pieces of degree at most two. \item If all coefficients are real, then $\partial\mathcal{I}(p)$ is algebraic and it consists of irreducible pieces of degree at most two. \end{enumerate} \end{cor} Example \ref{ex:caseB}, that is shown in Figure \ref{fig:improjComplex} (2c), illustrates an instance where the above contrast appears. The next corollary compares the number and strict convexity of the unbounded components that occur in $\mathcal{I}(p)^\mathsf{c}$ when $p$ is a complex or a real conic. \begin{cor}\label{cor:oneUnbdd} Let $p\in\C[z_1,z_2]$ be a complex conic. \begin{enumerate} \item The number of unbounded components in $\mathcal{I}(p)^\mathsf{c}$ can be any integer $0\le k\le 4$ and up to 4 of them can be strictly convex. \item If all coefficients are real, the number of unbounded components in $\mathcal{I}(p)^\mathsf{c}$ can be any integer $0\le k\le 4$ except for $k=1$ and up to 2 of them can be strictly convex. \end{enumerate} \end{cor} The proof follows from Theorems \ref{th:RealConicChar} and \ref{th:complex-classification1}, together with Example \ref{ex:1b}. The highlighting difference in the previous corollary, i.e., when $\I(p)^\mathsf{c}$ has one unbounded component, appears in the first class (1a) where the initial form has a double real root. Example \ref{ex:(1a.2)} provides such an instance and is shown in Figure \ref{fig:improjComplex} (1a). Theorem~\ref{th:complex-classification1} is only proven by the end of Section \ref{se:non-hyperbolic}. In the previous section, we discussed the case where $p$ has hyperbolic initial form in details. It remains to consider the case where $\init(p)$ is not hyperbolic. 
As in Subsection \ref{subs:Conic-Hyp-Init}, we first need to compute proper normal forms and then by Lemma~\ref{le:group-actions-improj}, it suffices to compute the imaginary projections of those forms for each case. \begin{thm}[\textbf{Normal Form Classification}] \label{th:conic-classification1} With respect to the group $G_2$, there are infinitely many orbits for the complex conic sections with the following representatives. \setlength{\columnsep}{-20pt} \begin{multicols}{2} \begin{itemize} \item[] \item[] \item[(1a)] $\begin{array}{l} {\rm(1a.1)}\,\,p=z_1^2+\gamma \\ {\rm(1a.2)}\,\ p=z_1^2+\gamma z_2 \end{array}$ \item[] \item[(1b)] $p= z_1z_2+\gamma$ \item[] \item[] \item[(2a)] $\begin{array}{l} {\rm(2a.1)}\,\,p=(z_1-{\rm i} z_2)^2 + \gamma \\ {\rm(2a.2)}\,\ p = (z_1 - {\rm i}z_2)^2 + \gamma z_2 \end{array}$ \item[] \item[(2b)] $p = z_2 (z_1 - \alpha z_2) + \gamma$ \item[(2c)] $\begin{array}{l} {\rm(2c.1)}\,\,p = z_1^2+z_2^2+\gamma \\ {\rm(2c.2)}\,\ p=(z_1 - {\rm i} z_2)(z_1 - \alpha z_2)+\gamma \end{array}$ \end{itemize} \end{multicols} \noindent for some $\gamma,\alpha\in\C$ such that, to avoid overlapping, we assume $\gamma\neq 0$ in {\rm(1a.2)} and {\rm(2a.2)}, $\alpha\notin\R$ in {\rm(2b)} and {\rm(2c.2)}, and finally $\alpha\neq \pm {\rm i}$ in \rm{(2c.2)}. \end{thm} \begin{proof} By applying a real linear transformation we first map the roots of $\init(p)$ to $(0:1)$ in (1a), to $(1:0)$ and $(0:1)$ in (1b), to $({\rm i : 1})$ in (2a), to $(1:0)$ and $(\alpha,1)$ such that $\alpha\notin\R$ in (2b), to $(\pm{\rm i} : 1)$ in (2c.1), and to $({\rm i} : 1)$ and $(\alpha : 1)$ such that $\alpha\notin\R$ and $\alpha\neq\pm{\rm i}$ in (2c.2). Then, similar to the proof of Lemma \ref{le:hyp-Init-quadratic-forms}, by eliminating some linear terms or the constant by complex translations we arrive at the given normal forms for each case. Since the arrangements of the two roots in $\P^1$ is invariant under the action of $G_2$, the given five cases lie in different orbits. Note that the orbits of the subcases in each case do not overlap. For the subcases of (1a), in (1a.2), $z_1$ and $z_2$ may be transformed to $az_1+bz_2+e$ and $cz_1+dz_2+f$ with $a,b,c,d\in\R$ and $e,f\in\C$. This leads to $(az_1+bz_2+e)^2+\gamma$. Since $z_2^2$ does not appear in the normal form of case (1a.2), we get $b=0$ and thus $z_2$ can not appear. Further $z_1^2+\gamma_1$ and $z_1^2+\gamma_2$ with $\gamma_1\neq \gamma_2$ belong to different orbits since the previous argument enforces $a=1,b=0,e=0$. The other cases are similar. Thus, for any of the eight normal forms, there are infinitely many orbits corresponding to each $\gamma\in\C$ (and $\alpha\in\C$ in some cases). \end{proof}
3,522
33,646
en
train
0.4989.9
\section{Complex conics with non-hyperbolic initial form\label{se:non-hyperbolic}} We complete the proof of the Topological Classification Theorem~\ref{th:complex-classification1} by treating the case where the complex conic $p \in \C[\z] = \C[z_1,z_2]$ does not have a hyperbolic initial form. In particular, we see that, as previously stated in Corollary \ref{co:hyperbolicityBdd}, if the initial form of $p$ is not hyperbolic, then $\mathcal{I}(p)^\mathsf{c}$ is empty or consists of one bounded component whose interior is non-empty only if $\init(p)$ has two distinct non-real roots in $\P^1$. The overall steps in computing the imaginary projection of the cases with non-hyperbolic initial form are as follows. After building up the real polynomial system for the classes (2b) and (2c.1) of Theorem~\ref{th:conic-classification1} as in~\eqref{PolySystem}, we use the same techniques as in Subsection \ref{subs:Conic-Hyp-Init}. However, in the case (2a), by the nature of the polynomial system, we directly argue that the imaginary projection is $\R^2$. In the last case (2c.2), we do not explicitly represent the components of $\I(p)^\mathsf{c}$. Instead, in Theorem~\ref{OnebddComp} we prove that it does not contain any unbounded components and the number of bounded components does not exceed one. \subsection{A double non-real root (2a)} We show that in this case we have a full space imaginary projection. First consider the normal form (2a.1). We have \[ \begin{matrix} p_{\mathrm{re}} & = & x_1 ^2- x_2^2+ 2 y_2 x_1+ 2 y_1 x_2 + \gamma_{\mathrm{re}}x_2-y_1^2 + y_2^2 -\gamma_{\mathrm{im}}y_2&=& 0, \\ p_{\mathrm{im}} & = & - 2 x_1 x_2 +2 y_1x_1 - 2y_2 x_2 + \gamma_{\mathrm{im}}x_2 + 2 y_1 y_2 + \gamma_{\mathrm{re}}y_2&= &0. \end{matrix}\] We prove $\mathcal{I}(p) = \R^2$ by showing that for every given $\y \in \R^2$, these two real conics in $\x=(x_1,x_2)$ have a real intersection point. For any fixed $\y \in \R^2$, the bivariate polynomial $p_\mathrm{re}$ in $\x$ has the quadratic part $x_1^2-x_2^2$, and hence, the equation $p_{\mathrm{re}}=0$ defines a real hyperbola in $\x$ with asymptotes $x_1=x_2+c_1$ and $x_1=-x_2+c_2$ for some constants $c_1,c_2 \in \R$; possibly the hyperbola degenerates to a union of these two lines. The degree two part of the polynomial $p_{\mathrm{im}}$ is given by $-2x_1x_2$ and hence, the equation $p_{\mathrm{im}}=0$ defines a real hyperbola in $\x$ with asymptotes $x_1= d_1$ and $x_2=d_2$ for some constants $d_1, d_2 \in \R$; possibly the hyperbola may degenerate to a union of these two lines. Since the two hyperbolas have a real intersection point, the claim follows. The case (2a.2) is similar. \subsection{One real and one non-real root (2b)}\label{subs:real-non-real} This case gives the system of equations \[ \begin{matrix} p_{\mathrm{re}} & = & - \alpha_{\mathrm{re}} x_2^2+ x_1 x_2 + 2 \alpha_{\mathrm{im}} y_2 x_2 + \alpha_{\mathrm{re}} y_2^2 - y_1 y_2 + \gamma_{\mathrm{re}} & = & 0, \\ p_{\mathrm{im}} & = & - \alpha_{\mathrm{im}} x_2^2+ y_2 x_1+ y_1x_2 -2 \alpha_{\mathrm{re}}y_2 x_2 + \alpha_{\mathrm{im}} y_2^2 + \gamma_{\mathrm{im}} & =& 0. \end{matrix} \] First assume $y_2\neq 0$. By solving the second equation for $x_1$, substituting the solution into the first equation and clearing the denominator, we get a univariate cubic polynomial in $x_2$ with non-zero leading coefficient. Since real cubic polynomials always have a real root, this shows that for $\y \in \R^2$ with $y_2 \neq 0$, there is a solution $\x \in \R^2$. It remains to consider $y_2 = 0$. 
In this case, the second equation has a real solution in $x_2$ whenever the corresponding discriminant $y_1^2 + 4 \alpha_{\mathrm{im}} \gamma_{\mathrm{im}}$ is non-negative, and if one of these solutions is non-zero, the first equation then gives a real solution for $x_1$. The special case that in the second equation both solutions for $x_2$ are zero, can only occur for $y_1 = 0$ and $\gamma_{\mathrm{im}} = 0$. Then the first equation has a real solution for $x_1$ if and only if $\gamma_{\mathrm{re}} = 0$. Altogether, we obtain \begin{equation*} \tag{\text{2b}} \mathcal{I}(p) \ = \ \begin{cases} \R^2 & \text{ if } \gamma=0 \ \ \text{or} \ \ \alpha_{\mathrm{im}}\gamma_{\mathrm{im}} > 0, \\ \R^2 \setminus \{\mathbf{0}\} & \text{ if } \gamma\in\R\setminus\{0\},\\ \R^2 \setminus \{(y_1,0) \ : \ y_1^2 < -4 \alpha_{\mathrm{im}} \gamma_{\mathrm{im}}\} & \text{ if } \alpha_{\mathrm{im}} \gamma_{\mathrm{im}} < 0. \end{cases} \end{equation*} Note that when $\gamma\in\R\setminus\{0\}$ then $\I(p)$ is open but not $\R^2$. This answers Question \ref{que:open-close}. See Figure \ref{fig:improjComplex} (2b) for the imaginary projection of $p = z_2(z_1-{\rm i} z_2)-{\rm i}$ from this class. \subsection{Two distinct non-real roots (2c)}\label{subs:2complex} First we show that in (2c.1), i.e., where the roots of the initial form are complex conjugate, the imaginary projection is one open bounded component. After forming the polynomial system (\ref{PolySystem}), the same methods as those in Subsection \ref{subs:Conic-Hyp-Init}, i.e., taking the resultant of the two polynomials $p_{\rm re}$ and $p_{\rm im}$ with respect to $x_2$ and checking the discriminantal conditions to have a real $x_1$, lead to the imaginary projection \begin{equation*}\label{k-disc} \tag{\text{2c.1}} \I(p) = \Big\{\y\in\R^2 : y_1^2+y_2^2\ge \frac{1}{2}(\gamma_{\mathrm{re}}+\sqrt{\gamma_{\mathrm{re}}^2+\gamma_{\mathrm{im}}^2})\Big\}. \end{equation*} In particular, we have $\I(p) = \R^2$ if and only if $\gamma_{\mathrm{im}}=0$ and $\gamma_{\mathrm{re}}\le0$. Hence, in the case of two non-real conjugate roots, $\mathcal{I}(p)^{\mathsf{c}}$ consists of either one or zero bounded component and it is a spectrahedral set. The subsequent lemma shows that for the case (2c) in general $\mathcal{I}(p)^\mathsf{c}$ is either empty or consists of one bounded component. \begin{lemma}\label{OnebddComp} Let $p = (z_1 - \alpha z_2) (z_1 - \beta z_2) + d z_1 + e z_2 + f$ with $\alpha, \beta \not \in \R$ and $d,e,f \in \C$. Then \begin{enumerate} \item $\mathcal{I}(p)^{\mathsf{c}}$ has at most one bounded component. \item $\mathcal{I}(p)^{\mathsf{c}}$ does not have unbounded components. \end{enumerate} \end{lemma} \begin{proof} (1) Assume that there are at least two bounded components in $\mathcal{I}(p)^{\mathsf{c}}$. By Lemma~\ref{le:group-actions-improj}, we can assume without loss of generality that the $y_1$-axis intersects both components. Solving $p=0$ for $z_1$ gives {\small \begin{equation} \label{eq:one-component-branch1} z_1 \ = \ \frac{\alpha + \beta}{2}z_2 - \frac{d}{2} + \sqrt[\C]{\left( \frac{\alpha-\beta}{2} \right)^2 z_2^2 - e z_2 -f } \, . \end{equation} } By letting $z_2\in\R$ we obtain two continuous branches $y_1^{(1)}(z_2)$ and $y_1^{(2)}(z_2)$ satisfying~\eqref{eq:one-component-branch1}. Therefore, the set $\mathcal{I}(p) \cap \{\y \in \R^2 \, : \, y_2 = 0\}$ has at most two connected components. This is a contradiction to our assumption that the $y_1$-axis intersects the two bounded components in $\mathcal{I}(p)^\mathsf{c}$. 
For (2), assume that there exists an unbounded component in the complement of $\mathcal{I}(p)$. The convexity implies that it must contain a ray. By Lemma~\ref{le:group-actions-improj}, we can assume without loss of generality that the ray is the non-negative part of the $y_1$-axis. Similarly to the proof of (1), we set $y_2 = 0$ and check the imaginary projection on $y_1$-axis, using the two complex solutions in~\eqref{eq:one-component-branch1}. Since $\alpha \neq \beta$, we have $D:= \left( \frac{\alpha-\beta}{2} \right)^2 \neq 0$, where $D$ is the discriminant of $\init(p)$ with $z_2$ substituted to 1. We consider two cases: $D \not\in \R_{>0}$ and $D \in \R_{>0}$. In both cases we get into a contradiction to the assumption that the unbounded component contains the non-negative part of the $y_1$-axis. First assume $D \not\in \R_{>0}$. For $z_2 \to \pm \infty$, the imaginary part of the radicand is dominated by the imaginary part of the square root of $D$. Since $D \not\in \R_{>0}$ at least one of the two expressions {\small \begin{equation*} \label{eq:dominating1} \left(\frac{\alpha+\beta}{2}\right)_{\mathrm{im}} \pm \sqrt{\frac{-D_{\mathrm{re}}+\sqrt{D_{\mathrm{re}}^2+D_{\mathrm{im}}^2}}{2}}\,\, \end{equation*}} is non-zero. Thus, letting $z_2\mapsto\pm\infty$, implies $y_1\mapsto+\infty$ in at least one of the branches. Now assume $D \in \R_{>0}$. This implies $(\alpha - \beta)/2 \in \R$. Thus $(\alpha + \beta)/2 \notin \R$, since otherwise it contradicts with $\alpha,\beta\notin\R$. In this case, by letting $z_2$ grow to infinity, the dominating expression for $y_1$ is $\frac{1}{2}(\alpha+\beta)_{\rm im}z_2.$ Therefore, $y_1$ converges to $+\infty$ in one of the two branches. In both cases, for some $s>0$, the ray $\{(y_1,0)\in\R^2:y_1\ge s\}$ lies in the imaginary projection. This completes the proof. \end{proof}
3,559
33,646
en
train
0.4989.10
Before, in Example \ref{ex:caseB} we have shown that the defining polynomial of the imaginary projection can be irreducible of degree 8. The previous lemma enables us to show that $\mathcal{I}(q)^\mathsf{c}$ has exactly one bounded component. Note that $\mathbf{0}\in\mathcal{I}(q)^\mathsf{c}$. Let $B_\epsilon$ be an open ball with center at the origin and radius $\epsilon$. By letting $y_1$ and $y_2$ converge to zero, the dominating part of $\Delta$ is $y_1^4+y_2^2$. Thus, for sufficiently small $\epsilon$, any non-zero point in $B_\epsilon$ has $\Delta>0$. Therefore, $\mathcal{I}(q)^\mathsf{c}$ contains an open ball around the origin. Now the claim follows from Theorems \ref{OnebddComp}. In this example, the imaginary projection is Euclidean closed, i.e., $\overline{\mathcal{I}(q)}=\mathcal{I}(q)$, however, its boundary is not Zariski closed. We claim that the set $\mathcal{I}(q)^\mathsf{c}$ is not a spectrahedron. By the characterization of Helton and Vinnikov \cite{helton-vinnikov-2007}, it suffices to show that $\overline{\mathcal{I}(q)}$ is not rigidly convex. That is, if $h$ is a defining polynomial of minimal degree for the component $\mathcal{I}(q)^\mathsf{c}$, then we have to show that a generic line $\ell$ through the interior of $\mathcal{I}(q)^\mathsf{c}$ does not meet the variety $V:=\{\x \in \R^2 \, : \, h(\x) = 0\}$ in exactly $\deg(h)$ many real points, counting multiplicities. However, this can be checked immediately. For example, the line $y_1 = 1/3$ intersects the variety $V$ in exactly two real points, and any sufficiently small perturbation of the line preserves the number of real intersection points. See Figure \ref{fig:improjComplex} (2c). This completes the proof of Theorem~\ref{th:complex-classification1}. We now prove Corollary \ref{co:alg-degrees} by showing that 8 is an upper bound. \noindent \textit{Proof of Corollary \ref{co:alg-degrees}}. For the first four classes we have precisely computed the boundaries $\partial\I(p)$ and they are algebraic with irreducible components of degree at most two. It remains to consider the case (2c), more precisely (2c.2), where $p=(z_1 - {\rm i} z_2)(z_1 - \alpha z_2)+\gamma$ for some $\alpha,\gamma\in\C$, $\alpha\notin\R$, and $\alpha\neq {\rm \pm i}$. Using Remark \ref{re:quarticRoots}, we show that the degrees of the irreducible components in the Zariski closure of $\partial\I(p)$ do not exceed $8$. This, together with Example \ref{ex:caseB}, completes the proof of (1). We separate the real and the imaginary parts as before. {\small \[ p_{\mathrm{re}} =x_{1}^{2}\!+(\!(\alpha_{\mathrm{im}}+1) y_{2}\!)-\alpha_{\mathrm{re}} x_{2}) x_{1}-\alpha_{\mathrm{im}} x_{2}^{2}+(\!(\!\alpha_{\mathrm{im}}+1) y_{1}-2 \alpha_{\mathrm{re}} y_{2}) x_{2}+\alpha_{\mathrm{re}} y_{2} y_{1}+\alpha_{\mathrm{im}} y_{2}^{2}-y_{1}^{2}\!+\gamma_{\mathrm{re}}=0, \]\[ p_{\mathrm{im}} =\! ((\alpha_{\mathrm{im}}+1) x_{2}+\alpha_{\mathrm{re}} y_{2}-2 y_{1}) x_{1}-\alpha_{\mathrm{re}} x_{2}^{2}+(\alpha_{\mathrm{re}} y_{1}+2 \alpha_{\mathrm{im}} y_{2}) x_{2}+\alpha_{\mathrm{re}} y_{2}^{2}-(\alpha_{\mathrm{im}}+1) y_{1} y_{2}-\gamma_{\mathrm{im}}= 0. \] } First we assume $(\alpha_{\mathrm{im}}+1) x_{2}+\alpha_{\mathrm{re}} y_{2}-2 y_{1}\neq0$. 
Solving $p_{\mathrm{im}} =0$ for $x_1$ and substituting in $p_{\mathrm{re}} = 0$ returns {\small \[ \Big( \alpha_{\mathrm{im}}(\alpha_{\mathrm{re}}^{2}+(\alpha_{\mathrm{im}}+1)^2) \Big) x_2^4 -\Big((\alpha_{1}^{2}+\alpha_{2}^{2}+6 \alpha_{2}+1) (-\alpha_{1} y_{2}+y_{1} (\alpha_{2}+1)) \Big) x_2^3 +\Big((\alpha_{1}^{2}+5 \alpha_{2}^{2}+14 \alpha_{2}+5) y_{1}^{2} \] \[ -y_{1} \alpha_{1} (\alpha_{1}^{2}+\alpha_{2}^{2}+14 \alpha_{2}+9) y_{2}+(4 \alpha_{1}^{2}+\alpha_{2} (\alpha_{1}^{2}+(\alpha_{2}-1)^2)) y_{2}^{2} +(k_{2} \alpha_{1}-2 k_{1} -k_{1} \alpha_{2})\alpha_{2}-k_{2} \alpha_{1}-k_{1}\Big) x_{2}^{2} \] \[ +\Big(8(-\alpha_{2}-1) y_{1}^{3}+8 \alpha_{1} (\alpha_{2}+2) y_{1}^{2} y_{2}-(\alpha_{2} (\alpha_{1}^{2}+\alpha_{2}^{2}-\alpha_{2}-1)+9\alpha_{1}^{2}+1) y_{1} y_{2}^{2}+\alpha_{1} (\alpha_{1}^{2}+(\alpha_{2}^{}-1)^{2}) y_{2}^{3} \] \[ +4 k_{1}(\alpha_{2}+1) y_{1}+((\alpha_{1}^{2}-(\alpha_{2}-1)^{2}) k_{2}-2 k_{1} \alpha_{1} (\alpha_{2}+1)) y_{2}\Big)x_2 + 4 y_{1}^{4}-8 \alpha_{1} y_{1}^{3} y_{2}+(5 \alpha_{1}^{2}+(\alpha_{2}-1)^{2}) y_{1}^{2} y_{2}^{2} \] \[ -\alpha_{1} (\alpha_{1}^{2}+(\alpha_{2}-1)^{2}) y_{1} y_{2}^{3}-4 k_{1} y_{1}^{2}+4\alpha_{1} k_{1} y_{1} y_{2}-\alpha_{1} (k_{1} \alpha_{1}+\alpha_{2} k_{2}-k_{2}) y_{2}^{2}-k_{2}^{2}. \] } Since $\alpha\notin\R$, the leading coefficient is non-zero. Therefore, we have a quartic univariate polynomial in $x_2$. The relevant polynomials for the decision of whether this polynomial has a real root for $x_2$ are $P,D$ and the discriminant $\Disc$ from Remark~\ref{re:quarticRoots}. By computing these polynomials, we observe that $\Disc$ decomposes as $Q_1^2\cdot q$, where $Q_1$ is a quadratic polynomial and $q$ is of degree $8$ in $\mathbf{y}$. The total degrees of $P$ and $D$ are $2$ and $4$, respectively. Now let us assume $(\alpha_{\mathrm{im}}+1) x_{2}+\alpha_{\mathrm{re}} y_{2}-2 y_{1} = 0$. If $\alpha_{\mathrm{im}}\neq -1$, then substituting $x_2 = \frac{-\alpha_{\mathrm{re}} y_{2}+2 y_{1}}{\alpha_{\mathrm{im}}+1}$ into $p_{\mathrm{im}}=0$ is the quadratic $Q_1$. Otherwise, the substitution $\alpha_{\mathrm{im}}= -1$ and $y_{1}=\frac{\alpha_{\mathrm{re}} y_{2}}{2}$ in $p_{\mathrm{re}}$ and $ p_{\mathrm{im}}$, and setting $s = 2p_{\mathrm{im}}-\alpha_{\mathrm{re}}p_{\mathrm{re}}$ simplifies the original system to \[\begin{matrix} p_{\mathrm{re}} &=& \alpha_{\mathrm{re}}^{2} y_{2}^{2}-4 \alpha_{\mathrm{re}} x_{1} x_{2}-8 \alpha_{\mathrm{re}} x_{2} y_{2}+4 x_{1}^{2}+4 x_{2}^{2}-4 y_{2}^{2}+4 \gamma_{\mathrm{re}}&=&0,\\ \\ s &=& 2 (2 \alpha_{\mathrm{re}}^{2} x_{1}+3 \alpha_{\mathrm{re}}^{2} y_{2}+4 y_{2})x_{2}-(\alpha_{\mathrm{re}}^{3} y_{2}^{2}+4 \alpha_{\mathrm{re}} x_{1}^{2}+4 \gamma_{\mathrm{re}} \alpha_{1}-4 \gamma_{\mathrm{im}})&=&0. \end{matrix} \] If the coefficient of $x_2$ in $s$ is non-zero, then solving $s=0$ for $x_2$ and substituting in $p_{\mathrm{re}}=0$ results in a quartic polynomial in $x_1$ with non-zero leading coefficient. In this case, the polynomials Disc, P, and D from Remark \ref{re:quarticRoots} are all univariate in $y_2$. The decomposition of the discriminant in this case consists of the polynomial $q$ after the substitution $y_{1}=\frac{\alpha_{\mathrm{re}} y_{2}}{2}$ and the square of a quadratic polynomial $Q_2$. The total degrees of $P$ and $D$ are $2$ and $4$, respectively. Otherwise, solving $2 \alpha_{\mathrm{re}}^{2} x_{1}+3 \alpha_{\mathrm{re}}^{2} y_{2}+4 y_{2}=0$ for $x_1$ and substituting in $s=0$, results in $Q_2$. 
In all the cases that we have discussed above, the degree of none of the irreducible factors appearing in the polynomials that could possibly form the $\partial\I(p)$ exceeds 8. Example \ref{ex:caseB} shows an example where this bound is reached. This completes the proof of (1). (2) follows from Theorem \ref{th:RealConicChar}. $\Box$ We have precisely verified the imaginary projections for all the normal forms in Theorem \ref{th:conic-classification1} except for (2c.2) . In particular, we have shown that if $p$ is not of the class (2c.2), then $\I(p) = \R^2$ if and only if there exist some $\gamma,\alpha\in\C$, and $\alpha\notin\R$ such that $p$ can be transformed to one of the following normal forms. \begin{equation}\label{list:ConicImprojR2} \begin{cases} (2a): (z_1 - {\rm i}z_2)^2 + \gamma z_2\quad\text{or}\quad (z_1-{\rm i} z_2)^2 + \gamma\\ (2b): z_2 (z_1 - \alpha z_2) + \gamma & \text{for}\quad \gamma=0 \,\,\,\text{or}\,\,\, \alpha_{\mathrm{im}}\gamma_{\mathrm{im}} < 0,\\ (2c.1): z_1^2+z_2^2+\gamma & \text{for}\quad \gamma_{\mathrm{im}}=0 \,\,\,\text{and}\,\,\, \gamma_{\mathrm{re}}\le0. \end{cases} \end{equation} An example for a complex conic of class (2c.2) where the imaginary projection is $\R^2$ is $p = z_1^2 - 3{\rm i}z_1z_2-2z_2^2$. The reason is that for any given $(y_1,y_2)\in\R^2$, the polynomial $p$ vanishes on the point $(-y_2+{\rm i} y_1 , y_1+{\rm i} y_2)$. Answering the following question completes the verification of complex conics with a full-space imaginary projection. \begin{question} Let $p\in\C[z_1,z_2]$ be a complex conic of the form $p=(z_1 - {\rm i}z_2) (z_1 - \alpha z_2) + \gamma$ such that $\alpha\notin\R$ and $\alpha\neq\pm {\rm i}$. Under which conditions on the coefficients $\gamma,\alpha\in\C$ does $\mathcal{I}(p)$ coincide with $\R^2$? \end{question}
3,453
33,646
en
train
0.4989.11
We have precisely verified the imaginary projections for all the normal forms in Theorem \ref{th:conic-classification1} except for (2c.2) . In particular, we have shown that if $p$ is not of the class (2c.2), then $\I(p) = \R^2$ if and only if there exist some $\gamma,\alpha\in\C$, and $\alpha\notin\R$ such that $p$ can be transformed to one of the following normal forms. \begin{equation}\label{list:ConicImprojR2} \begin{cases} (2a): (z_1 - {\rm i}z_2)^2 + \gamma z_2\quad\text{or}\quad (z_1-{\rm i} z_2)^2 + \gamma\\ (2b): z_2 (z_1 - \alpha z_2) + \gamma & \text{for}\quad \gamma=0 \,\,\,\text{or}\,\,\, \alpha_{\mathrm{im}}\gamma_{\mathrm{im}} < 0,\\ (2c.1): z_1^2+z_2^2+\gamma & \text{for}\quad \gamma_{\mathrm{im}}=0 \,\,\,\text{and}\,\,\, \gamma_{\mathrm{re}}\le0. \end{cases} \end{equation} An example for a complex conic of class (2c.2) where the imaginary projection is $\R^2$ is $p = z_1^2 - 3{\rm i}z_1z_2-2z_2^2$. The reason is that for any given $(y_1,y_2)\in\R^2$, the polynomial $p$ vanishes on the point $(-y_2+{\rm i} y_1 , y_1+{\rm i} y_2)$. Answering the following question completes the verification of complex conics with a full-space imaginary projection. \begin{question} Let $p\in\C[z_1,z_2]$ be a complex conic of the form $p=(z_1 - {\rm i}z_2) (z_1 - \alpha z_2) + \gamma$ such that $\alpha\notin\R$ and $\alpha\neq\pm {\rm i}$. Under which conditions on the coefficients $\gamma,\alpha\in\C$ does $\mathcal{I}(p)$ coincide with $\R^2$? \end{question} \section{convexity results}\label{se:convex} For the case of complex plane conics, we have shown in Theorem \ref{OnebddComp} that there can be at most one bounded component in the complement of its imaginary projection. An example of such a conic is $z_1^2+z_2^2+1 = 0$, where the unique bounded component is the unit disc, which in particular is strictly convex. In the following theorem, we show that for any $k>0$, there exists a complex plane curve whose complement of the imaginary projection has exactly $k$ strictly convex bounded components. For the case of real coefficients, only the lower bound of $k$ and no exactness result is known (see \cite[Theorem 1.3]{joergens-theobald-hyperbolicity}). Allowing non-real coefficients lets us break the symmetry of the imaginary projection with respect to the origin and this enables us to fix the number of components exactly instead of giving a lower bound. Furthermore, using a non-real conic which has four strictly convex unbounded components, illustrated in Figure \ref{fig:dist-real}, notably drops the degree of the corresponding polynomial. \begin{theorem}\label{th:StrictlyConvexComplex} For any $k>0$ there exists a polynomial $p\in\C[z_1,z_2]$ of degree $2\lceil \frac{k}{4}\rceil+2$ such that $\mathcal{I}(p)^\mathsf{c}$ consists of exactly $k$ strictly convex bounded components. \end{theorem} \begin{proof} Let $R^{\varphi}$ be the rotation map and $g:\C^2\rightarrow\C^2$ be defined as \[ g(z_1,z_2) = z_1z_2+2{\rm i}. \] Note that the equation \begin{equation}\label{eq:m=2} \prod_{j=0}^{m-1}(g\circ R^{\pi j/2m})(z_1,z_2) =0 \end{equation} where $m=\lceil \frac{k}{4}\rceil$ as before, has $4m$ unbounded components in the complement of its imaginary projection. We need to find a circle that intersects with $k$ of them and does not intersect with the rest $4m-k$ components. By symmetry of the construction of the equation above, the smallest distance between the origin $O$ and each component is the same for all the components. The following picture shows the case $m=2$. 
\begin{figure} \caption{The imaginary projection of (\ref{eq:m=2} \end{figure} Let $C$ be the boundary of the imaginary projection of $ z_1^2 +z_2^2 +r^2 $ where $r = |OA_1|$. The center of $C$ is the origin and it passes through all $4m$ points $A_1,\dots,A_{4m}$ that minimize the distance from the origin to each component. A sufficiently small perturbation of the center and the diameter can result in a circle $C'$ with center $(a,b)$ and radius $s$ that only intersects the interiors of the first $k$ unbounded components. Now define \[q:=(z_1-{\rm i}a)^2+(z_2-{\rm i}b)^2+s^2.\] By Lemma \ref{le:group-actions-improj} and the fact that the imaginary projection of the multiplication of two polynomials is the union of their imaginary projections, the polynomial \[ p := q \cdot \prod_{j=0}^{m-1}(g\circ R^{\pi j/2m})(z_1,z_2), \] has exactly $k$ strictly convex bounded components in $\mathcal{I}(p)^\mathsf{c}$. \end{proof}
1,679
33,646
en
train
0.4989.12
Although, by generalizing from real to complex coefficients, we improved the degree of the desired polynomial from $d=4\lceil \frac{k}{4}\rceil+2$ to $d/2+1$, it is not the optimal degree. For instance if $k=1$, the polynomial $z_1^2+z_2^2+1$ has the desired imaginary projection, while the degree is $2<4$. Thus, we can ask the following question. \begin{question}\label{ques:deg} For $k>0$, what is the smallest integer $d>0$ for which there exists a polynomial $p\in\C[z_1,z_2]$ of degree $d$ such that $\mathcal{I}(p)^\mathsf{c}$ consists of exactly $k$ strictly convex bounded components. \end{question} \section{Conclusion and open questions\label{se:outlook}} We have classified the imaginary projections of complex conics and revealed some phenomena for polynomials with complex coefficients in higher degrees and dimensions. It seems widely open to come up with a classification of the imaginary projections of bivariate cubic polynomials, even in the case of real coefficients. In particular, the maximum number of components in the complement of the imaginary projection for both complex and real polynomials of degree $d$ where $d\ge 3$ is currently unknown. We have shown that in degree two they coincide for real and complex conics, however, this may not be the case for cubic polynomials. \subsection*{Acknowledgment.} We thank the anonymous referees for their helpful comments. \end{document}
435
33,646
en
train
0.4990.0
\begin{document} \title{Exact rate analysis for quantum repeaters with imperfect memories and \\entanglement swapping as soon as possible} \author{Lars Kamin} \email{[email protected]} \author{Evgeny Shchukin} \email{[email protected]} \author{Frank Schmidt} \email{[email protected]} \author{Peter van Loock} \email{[email protected]} \affiliation{Johannes-Gutenberg University of Mainz, Institute of Physics, Staudingerweg 7, 55128 Mainz, Germany} \begin{abstract} We present an exact rate analysis for a secret key that can be shared among two parties employing a linear quantum repeater chain. One of our main motivations is to address the question whether simply placing quantum memories along a quantum communication channel can be beneficial in a realistic setting. The underlying model assumes deterministic entanglement swapping of single-spin quantum memories and it excludes probabilistic entanglement distillation, and thus two-way classical communication, on higher nesting levels. Within this framework, we identify the essential properties of any optimal repeater scheme: entanglement distribution in parallel, entanglement swapping as soon and parallel quantum storage as little as possible. While these features are obvious or trivial for the simplest repeater with one middle station, for more stations they cannot always be combined. We propose an optimal scheme including channel loss and memory dephasing, proving its optimality for the case of two stations and conjecturing it for the general case. In an even more realistic setting, we consider additional tools and parameters such as memory cut-offs, multiplexing, initial state and swapping gate fidelities, and finite link coupling efficiencies in order to identify potential regimes in memory-assisted quantum key distribution beyond one middle station that exceed the rates of the smallest quantum repeaters as well as those obtainable in all-optical schemes unassisted by stationary memory qubits and two-way classical communication. Our analytical treatment enables us to determine simultaneous trade-offs between various parameters, their scaling, and their influence on the performance ordering among different types of protocols, comparing two-photon interference after dual-rail qubit transmission with one-photon interference of single-rail qubits or, similarly, optical interference of coherent states. We find that for experimental parameter values that are highly demanding but not impossible (up to 10s coherence time, about 80\% link coupling, and state or gate infidelities in the regime of 1-2\%), one secret bit can be shared per second over a total distance of 800km with repeater stations placed at every 100km -- a significant improvement over ideal point-to-point or realistic twin-field quantum key distribution at GHz clock rates. \end{abstract} \pacs{03.67.Mn, 03.65.Ud, 42.50.Dv} \keywords{quantum repeaters, quantum memory} \maketitle
746
85,618
en
train
0.4990.1
\section{Introduction}\label{sec:Introduction} Recent progress on quantum computers with tens of qubits led to experimental demonstrations of quantum devices that are able to solve specifically adapted problems which are not soluble in an efficient manner with the help of classical computers alone. These devices are primarily based upon solid-state (superconducting) systems \cite{Arute2019,Qiskit}, however, there are also photonics approaches \cite{Pan2020}. While these schemes still have to be enhanced in terms of size, i.e. the number of qubits (scalability), their error robustness and corresponding logical encoding (fault tolerance), as well as their range of applicability (eventually reaching universality), this progress represents a threat to common classical communication systems. Eventually, this may compromise our current key distribution protocols. Although there are recent developments in classical cryptography to address the threat imposed by such quantum devices (``post-quantum cryptography''), quantum mechanics also gives a possible solution to this by means of quantum key distribution (QKD) \cite{NLRMP,PirRMP}. Many QKD protocols have been proposed such as the most prominent, so-called BB84 scheme \cite{BB84}. Indeed among the various quantum technologies that promise to enable their users to fulfil tasks impossible without quantum resources, quantum communication is special. Unlike quantum computers there are already commercially available quantum communication systems intended for costumers who wish to communicate in the classical, real world in a basically unconditionally secure fashion -- independent of mathematically unproven assumptions exploiting the concept of QKD. QKD systems are naturally realized for photonic systems using non-classical optical quantum states such as single-photon, weak \cite{Hwang2002, Lo2004} or even bright coherent states \cite{PirRMP}. \subsection{Previous works and state of the art} Current point-to-point QKD systems, directly connecting the sender (Alice) and the receiver (Bob) via an optical-fiber channel, are limited in distance due to the exponentially growing transmission loss along the channel. Typical maximal distances are 100-200km. A very recent QKD variant, so-called twin-field (TF) QKD\cite{Lucamarini}, allows to push these limits farther (basically doubling the effective distance) by placing an (untrusted) middle station between Alice and Bob. Remarkably, TF QKD achieves this loss scaling advantage in an all-optical fashion with no need for quantum storage at the middle station and at an, in principle, unlimited clock rate with no need for two-way classical communication. It further inherits the improved security features of measurement-device-independent (MDI) QKD schemes \cite{LoCurty,PirBraun}. However, the original TF QKD concept is not known to be further scalable beyond the effective distance doubling. In classical communication, the distance problem is straightforwardly overcome by introducing repeater stations along the fiber channel (about every 50-100km) in order to reamplify (and typically reshape) the optical pulses. On a fundamental level, the famous No-Cloning-theorem \cite{NoCloning,Dieks1982}, prohibits such solutions for quantum communication. As a possible remedy, the concept of quantum repeaters has been developed \cite{BriegelDur,Dur1999,Hartmann2007}. 
With the help of sufficiently short-range entanglement distributions, quantum memories, entanglement distillation and swapping, in principle, scalable long-distance, fiber-based quantum communication becomes possible, including long-range QKD. The original quantum repeater proposals assumed small-scale non-universal quantum computers at each repeater node in order to perform the necessary gates for the entangled-pair manipulations, and hence clearly appeared to be technologically less demanding than a fully-fledged fault-tolerant and universal quantum computer. Related to this, for QKD applications including those over large distances, there are very powerful, classical post-processing techniques which allow to relax the minimal requirements on the experimental states and gates. Nonetheless, as a whole, these original quantum repeater systems would still have high experimental requirements. This led to some quantum repeater proposals specifically adapted to certain matter memory systems and light-matter interfaces. Probably the most prominent such proposal is the ``DLCZ'' quantum repeater \cite{DLCZ, Sangouard}, based upon atomic-ensemble nodes that no longer rely upon the execution of difficult two-qubit entangling gates, but instead only require linear-optical state manipulations and photon detectors. Other schemes rely upon single emitters in solid-state repeater nodes, especially colour centers in diamond \cite{ChildressNV, Humphreys2017}. Alternative proposals employ optical coherent states and their cavity-QED interactions with single-spin-based quantum memory nodes \cite{HybridPRL}. These proposals made a possible realization of a large-scale quantum repeater more likely, but as a complete implementation, they would still be fundamentally limited in their achievable (secret) key rates per second. The reason for this is the need for two-way classical communication on all, including the highest ``nesting'' levels in order to conduct entanglement distillation and confirm successful entanglement swappings when these are probabilistic. Today this type of quantum repeater schemes are referred to as 1st-generation quantum repeaters. A memory-assisted QKD scheme was proposed in Ref. \cite{tf_repeater}, extending the TF concept to memory-based quantum repeaters. In principle, this scheme achieves an effective distance doubling compared with standard quantum repeaters or, equivalently, it exhibits the standard loss scaling with about half as many memory stations as in a standard quantum repeater (while the other half are all-optical stations with beam splitter and photon detectors). Apart from a certain level of memory assistance, this repeater scheme also relies upon two-way classical communication (between the nearest stations) and hence can operate only at a limited clock rate determined by the classical signalling time per segment. Moreover, for its large-scale operation the scheme would require an additional element for quantum error correction. Alternative schemes circumventing the fundamental limitations are the so-called 2nd- and 3rd-generation quantum repeaters that exploit quantum error correction codes to suppress the effect of gate and memory errors or channel loss, respectively \cite{JiangRvw}. A 3rd-generation quantum repeater no longer requires quantum memories and two-way classical communication and so it can be, in principle, realized in an all-optical fashion at a clock rate only limited by the local error correction operations. 
It is important to stress that all these quantum repeaters are designed to allow for a genuine long-distance quantum state transfer. In the QKD context, this means that the intermediate stations along the repeater channel may be untrusted. If instead sufficiently many trusted stations can be placed along the communication channel between Alice and Bob, and the quantum signals can be converted into classical information at each station (as a whole, effectively corresponding to classically connected, independent, sufficiently short-range QKD links), large-scale QKD is already possible and being demonstrated \cite{ChinaDaily}. Conceptually, this also applies to long-range links enabled by satellites \cite{Yin2017, Vallone2015}. It is only the genuine quantum repeater that incorporates two main features at the same time: {\it long-distance scalability and long-distance privacy}. From a practical point of view, it is expected that global quantum communication systems will be a combination of both elements: genuine fiber-based quantum repeaters over intermediate distances (thousands of km) and satellite-based quantum links bridging even longer distances (tens of thousands of km; the earth's circumference is about 40000km). While such truly global quantum communication may eventually lead to some form of a ``quantum internet'' \cite{WehnerHanson}, only the coherent long-distance quantum state transfer as enabled by a genuine quantum repeater allows to consider applications that go beyond long-range QKD. In fact, the original quantum repeater proposals were not specifically intended for or adapted to long-range QKD. They can be used for any application that relies upon the distribution of entangled states over large distances including large-scale quantum networks. Obvious applications are distributed quantum tasks such as distributed quantum computing, coherently connecting quantum computers which are spatially far apart. These ultimate long-distance quantum communication applications will then impose much higher demands on the fault tolerance of the experimental quantum states and gates. In particular, QKD-specifc classical post-processing will no longer be applicable. In this work, we shall consider small to intermediate-scale quantum repeaters that allow to do QKD or coherently connect quantum nodes at a corresponding size and at a reasonably practical clock rate.
2,167
85,618
en
train
0.4990.2
\subsection{This work} In this work, we will focus on small-scale or medium-size quantum repeater systems beyond a single middle station and without probabilistic entanglement distillation on higher ``nesting levels". This class of quantum repeaters is of great interest for at least two reasons. (i) There are now first experiments of memory-enhanced quantum communication basically demonstrating memory-assisted MDI QKD \cite{Lukin, Rempe}. Therefore the natural next step for the experimentalists will be to connect such elementary modules to obtain larger repeater systems with {\it two or more intermediate stations}, thus bridging larger distances and, unlike memory-assisted MDI QKD, ultimately relying upon classical communication between the repeater stations \cite{White}. These next near-term experiments will aim at a distance extension still independent of additional and more complicated schemes such as entanglement distillation on ``higher nesting levels''. Restricting the entanglement manipulations to the level of the elementary repeater segments will also help to avoid the use of long-distance two-way classical signalling like in a fully scalable 1st-generation quantum repeater, and hence allow for still limited but reasonable repeater clock rates. In this regime, comparing (secret key) rates per second of the quantum repeaters with those of an (ideal) point-to-point link or TF QKD scheme is in some way most fair and meaningful. While the current experimental repeater demonstrations with a single repeater station \cite{Lukin,Rempe} would still suffer from too low clock rates and link coupling efficiencies before giving a practical repeater advantage, an urgent theoretical question is whether, under practical realistic circumstances, it really helps to place memory stations along a quantum communication channel and execute memory-assisted QKD without extra active quantum error correction. In principle, placing a middle station between Alice and Bob allows to gain a repeater advantage per channel use \cite{NL,WehPar,White}. Omitting the non-scalable all-optical TF approach, is there a practical benefit also in terms of secret bits per second when using a two-segment quantum repeater? Moreover, and this is the focus of the present work, is there even a further advantage when adding more stations beyond a single middle station under realistic assumptions and with no extra quantum error correction? We will see that for up to eight repeater segments, covering distances up to around 800km, the quantum repeaters treated in this work, assuming experimental parameter values that are demanding but not impossible to achieve in practice, can exceed the performance limits of the other schemes. For larger distances, the attainable absolute rates of point-to-point quantum communication become extremely small. However, for quantum repeaters, additional elements of quantum error correction will be needed, as otherwise the final rates would vanish and no gain can be expected over point-to-point communication. (ii) The second point refers to the theoretical treatment. Typically, the repeater rates can be calculated either numerically including many protocol variations and (experimental) degrees of freedom \cite{Coopmans} or approximately in certain regimes \cite{Sangouard} (there are also semi-analytical approaches, see Refs. \cite{Kuzmin2021,Kuzmin2019}). 
If errors are neglected an exact and even optimized raw rate calculation is possible even for non-unit (but constant) entanglement swapping probabilities using the formalism of Markov chains and decision processes \cite{PvL,Shchukin2021} (see also Refs. \cite{Vinay2019,Khatri2019}). This approach works well for repeaters up to about ten segments; for too many repeater segments the resulting linear equation systems become intractable. Nonetheless, for the smallest repeaters with only a single middle station, it was shown how to calculate secret key rates even including various experimental parameters, though partially also employing approximations for the raw rates \cite{NL,WehPar}. In this work we will go beyond the case of a single middle station and present exact calculations of {\it secret key rates obtainable with realistic small and intermediate-scale quantum repeaters}. The theoretical difficulty here is, even already when only channel loss and memory dephasing is considered, that for repeaters beyond a single middle station there are various distribution and swapping strategies and so it becomes non-trivial to determine the optimal ones. The usual treatment in this case is based upon the so-called doubling strategy where for a repeater with a power-of-two number of segments only certain pairs of segments will be connected in order to double the distances at each repeater level. As a consequence, sometimes entanglement connections will be postponed even though neighboring pairs may be ready already, thus unnecessarily accumulating more memory dephasing errors. With regards to memory dephasing, the best strategy appears to be {\it to swap as soon as possible} and here we will show how this type of repeater strategy can be exactly and analytically treated. This element is the crucial step that enables us to propose optimal quantum repeater schemes. On the hardware side, memory-based quantum repeaters require sufficiently long-lived quantum memories and efficient, typically light-matter-based interfaces converting flying into stationary qubits. In the context of our theoretical treatment, the stationary qubits are assumed to be represented by single spins in a suitable solid-state quantum node such as colour (NV or SiV) centers in diamond, usually separately treated as short-lived electronic and long-lived nuclear spins \cite{LukinSiV, HansonNV}. As for efficient quantum emitters and short-lived quantum memories semiconductor quantum dots may be considered too \cite{White}. Alternatively, various types of atom or ion qubits could be taken into account \cite{White}. While all these different hardware platforms have their own assets and disadvantages (e.g. the required temperatures which range from room or modestly low temperatures for atoms/ions/NV to cryogenic temperatures for NV/SiV/quantum dots), and every one eventually requires a specifically adapted physical model, to a certain extent the quantum repeater performance based on these elements and assuming only a single repeater station can be assessed (or at least qualitatively bounded from above) using a fairly simple physical model that includes {\it three experimental parameters}: the link coupling efficiency, the memory coherence time, and the experimental clock rate \cite{White}. 
In order to incorporate an appropriate experimental memory coherence time into the model, qubit dephasing errors can be considered where the stationary qubit is never lost but subject to random phase flips with a probability exponentially growing with the storage time. Already this rather simple model is theoretically non-trivial, because it leads to two distinct impacts on the final secret key rates. On the one hand, a finite link coupling efficiency (including all constant inefficiencies per segment from the sources, detectors, and interfaces) and a segment-length-dependent transmission efficiency affect the raw rate of the qubit transmission (which, if expressed as rate per second, also directly depends on the repeater clock rate). Thereby, in logarithmic rate-versus-distance plots (like those frequently shown later in this article), a finite link coupling leads to an offset towards smaller rates at zero distance, while a finite channel transmission results in a certain (negative) slope. On the other hand, a finite memory coherence time influences the final Alice-Bob state fidelity or QKD error rate (which also indirectly depends on the repeater clock rate, i.e. the time duration per entanglement distribution attempt per segment, determining the possible number of distribution attempts within a given memory coherence time). This becomes manifest as an increase of the (negative) slope for growing distances, moving from an initially repeater-like slope towards one corresponding to a point-to-point transmission. There are interesting concepts to suppress this latter effect by introducing more sophisticated memory models such as memory buffers or cut-offs. Especially a memory cut-off \cite{CollinsPrl} has turned out to be useful without the need for additional experimental resources. It means that a maximal storage time is imposed at every memory node and any loaded stationary qubits waiting for a longer duration will be reinitialized. As a result, state fidelities can be kept high at the expense of a decreasing raw rate due to the frequently occurring reinitializations (which implies that a memory cut-off must neither be set too low nor too high). Theoretically, including memory cut-offs into the rate analysis significantly increases the complexity (becoming manifest in e.g. quickly growing Markov-chain matrices) \cite{PvL}. For small quantum repeaters, especially those with only one middle station, a secret key rate analysis remains possible \cite{WehPar, White}. For larger quantum repeaters, the effective rates may be calculated via recursively obtained expressions \cite{Jiang}, via different kinds of approximations and assumptions \cite{Elkouss2021} or with the help of numerical simulations \cite{Coopmans}. Nonetheless, in our treatment, we shall explicitly include a memory cut-off in some protocols allowing us to extrapolate its positive impact on other schemes. We choose to incorporate random dephasing as the dominating source of memory errors. While memory dephasing is generally an error to be taken into account, it is particularly important for those stationary qubits encoded into single solid-state spins, e.g. for colour centers or quantum dots \cite{White}. We omit (time-dependent) memory decay (loss) which additionally becomes relevant for atomic memories, either as collective spin modes of atomic ensembles or in the form of an individual atom in a cavity (generally, atoms and trapped ions may be subject to both dephasing and decay) \cite{DLCZ,HybridPRL,HybridNJP,Rempe}. 
It turns out that the effect of memory dephasing can be accurately included into the statistical repeater model, since the total, accumulated dephasing in the final Alice-Bob density operator follows a simple sum rule \cite{tf_repeater}. Thus, the statistical averaging can be applied to the final state, for which we derive a recursive formula that also includes depolarizing errors from the initially distributed states and from the imperfect Bell measurement gates in every entanglement swapping operation. The main complication will be to determine the correct dephasing variables for the different swapping strategies and identify the optimal schemes. As a result, we extend the simple model of Ref.~\cite{White} not only with regards to the repeater's size, but also to include additional experimental parameters: {\it besides the above three parameters we then have one or two extra parameters for the initially distributed states} (taking into account initial dephasing or depolarization errors depending on the protocol) {\it and one extra depolarization parameter for the local gates and Bell measurements.} Our analytical treatment enables us to identify the scaling of the various parameters, their specific impact onto the repeater performance (for QKD, affecting either the raw rate or the error-dependent secret key fraction), and the resulting trade-offs. Most apparent is the trade-off for quantum repeaters with $n$ segments and $n-1$ intermediate memory stations leading to an improved loss scaling with an $n$-times bigger effective attenuation distance compared with a point-to-point link ($n=1$), but a final state fidelity parameter decreasing as the power of $2n-1$ (assuming equal gate and initial state error rates). We will then be able to consider repeater protocol variations with an improved scaling of the basic loss and fidelity parameters. Based upon the above-mentioned TF concept with coherent states or basically replacing two-photon by one-photon interferences at the beam splitter stations, these repeaters exhibit a $2n$-times bigger effective attenuation distance while keeping the $2n-1$ power scaling of the final state fidelity parameter for $n-1$ memory stations. However, they are subject to some extra intrinsic (dephasing) errors even when only channel loss is considered, which will turn out to be an essential complication that prevents to fully exploit the improved scaling of the basic parameters in comparison with the standard repeater protocols that do not suffer from intrinsic dephasing. Comparing different repeater protocols and incorporating the optimized memory dephasing from our statistical model into them, we find that for experimental parameter values that are highly demanding but not impossible (up to 10s coherence time, 80\% link coupling, and state or gate infidelities in the regime of 1-2\%), one secret bit can be shared per second over a total distance of 800km. This represents a significant improvement over ideal point-to-point or realistic TF QKD at GHz clock rates. In particular, the repeaterless, point-to-point bound \cite{PLOB}, for e.g. 800km is $3 \times 10^{-16}$ bits per channel use or $0.3 \mu$bits per second (at GHz clock rate). We will see that, in order to clearly beat this with those reasonable experimental parameters from above, the number of repeater stations must neither be too high nor too low, and so placing a station at every 100km will work well. 
As mentioned before, our schemes are generally independent of the typically used doubling strategies in quantum repeaters (which are most suitable to incorporate entanglement distillation in a systematic way and which are included as a special case in our sets of swapping strategies). Instead we will consider general memory-assisted entanglement distribution with possible QKD applications. Compatible with our analysis are also schemes that aim at an enhanced initial state distribution efficiency or fidelity as, for example, in multiplexing-assisted or the above-mentioned 2nd-generation quantum repeaters. In any case, the subsequent steps after the initial distributions in each repeater segment are simple entanglement swapping steps combined with quantum storage in single spins. For the entanglement swapping we assume unit success probability. This assumption is experimentally justified for systems where Bell measurements or, more generally, gates can be performed in a deterministic fashion, for instance, with atoms or ions or solid-state-based spin qubits \cite{White}. For a linear quantum repeater chain, this system is still remarkably complex. The assumption of deterministic entanglement swapping will allow us to calculate the exact (secret key) rates in a quantum repeater up to eight segments. We will distinguish schemes with sequential and parallel entanglement distributions and also consider different swapping strategies. Based on {\it two characteristic random variables}, the total repeater waiting time and the accumulated dephasing time of the final state, and their probability generating functions, we will be able to determine exact, optimized secret key rates. In principle, this gives us access to the {\it full statistics of this class of quantum repeaters.} Optimality here refers to the minimal dephasing among all parallel-distribution (and hence maximal raw-rate) schemes. For three segments and two intermediate stations, we show that the resulting secret key rates are optimal among all schemes. For more segments and stations we conjecture this to hold too, however, there is the loophole that sequential-distribution schemes (generally exhibiting smaller raw rates) may accumulate less dephasing and as a result, in combination, lead to a higher secret key rate. We conclude that our treatment gives evidence for any optimal scheme to distribute entangled pairs in parallel, to swap as soon as possible, and to simultaneously store qubits as little as possible. However, here the first and the third property are not compatible, which leads to another trade-off between high efficiencies (raw rates) and small state fidelities (high error rates) as commonly encountered for entanglement distribution and quantum repeaters. The (partially or fully) sequential schemes have the advantage that parallel storage of qubits can be avoided to a certain (or even a full) extent. However, since the sequential schemes are overall slower, their total dephasing may still exceed that of the fastest repeater schemes with parallel storage. For up to eight repeater segments, our optimal scheme, exhibiting the smallest total dephasing among all fast repeater schemes, also exhibits a smaller total dephasing than the fully sequential scheme. The outline of this paper is as follows. 
In Sec.~\ref{sec:QRwonemiddlestation} we will first review the known results and existing approaches to analyzing secret key rates for the smallest possible quantum repeater based upon a single middle station, including calculations of the repeater raw rate and physical error models to describe the evolution of the relevant density operators. The methods for the statistical analysis (probability generating functions) and the figure of merit used to quantitatively assess the repeater performance (a QKD secret key rate) will be introduced in Sec.~\ref{sec:Methods}.
In Sec.~\ref{sec:Physical Modelling} we will then start introducing our new, generalized treatment for quantum repeaters beyond a single middle station. For this, we present two subsections on the two characteristic random variables -- the waiting time and the dephasing time, which contain the entire statistical information of the class of quantum repeaters considered in our work.
In order to take into account optimal strategies for the initial entanglement distribution and the subsequent entanglement swapping in more complex quantum repeaters with two or more intermediate repeater stations, we discuss in detail in various subsections sequential and parallel distribution as well as optimal swapping schemes. Still in Sec.~\ref{sec:Physical Modelling}, we show how these optimizations can be applied to the statistics of various quantum repeaters, explicitly calculating the probability generating functions of the two basic random variables for two-, three-, four- and eight-segment quantum repeaters. In particular, for the four- and eight-segment cases we will show how and to what extent our optimized and exact treatment of the memory dephasing improves the relevant quantities of the final state density operators as compared with the usually employed, canonical schemes such as ``doubling''. The interesting case of a three-segment repeater and its optimization will be discussed in more detail in an appendix. Finally, in Sec.~\ref{sec:Secret Key Rate} we will analyze the secret key rates of all proposed schemes and compare them for various repeater sizes with the ``PLOB'' bound \cite{PLOB}. For this, we will explicitly consider the extended set of experimental parameters and insert experimentally meaningful values (representing current and future experimental capabilities) for them. A particular focus will be on the initial state and gate parameters and their impact on the repeater performance. We shall compare the performances of different schemes, discuss the possibility of including multiplexing, and examine what influence a memory cut-off can have and what (scaling) advantages the different types of encoding for the flying qubits can offer. For the latter, we discuss in more detail schemes based on the TF concept and, for the comparison between different schemes and encodings, the final secret key rates per second. Sec.~\ref{sec:Conclusion} concludes the paper with a final summary of the results and their implications. Various additional technical details can be found in the appendices.
\section{Quantum repeaters with one middle station}\label{sec:QRwonemiddlestation} A small quantum repeater composed of two segments and one middle station, as schematically shown in Fig.~\ref{fig:2seg}, is well understood, and it is known how to obtain the secret key rates in a QKD scheme assisted by a single memory station, even in the presence of experimental imperfections \cite{NL,WehPar,White,tf_repeater}, with memory cut-offs \cite{WehPar,White,PvL, CollinsPrl}, and for general, probabilistic entanglement swapping \cite{PvL}. First experimental demonstrations of memory-enhanced quantum communication are also based on this simplest repeater setting \cite{Lukin}. In such a small quantum repeater, there is only a single Bell measurement on the spin memories at the central station, and so the entanglement swapping ``strategy'' is clear. Later we will briefly discuss the two-segment case as a special case of our more general rate analysis treatment, easily deriving the statistical properties of the two basic random repeater variables, the total waiting and dephasing times, and obtaining the optimal scheme \cite{White, tf_repeater}. \begin{figure} \caption{A two-segment quantum repeater. Each segment has length $L_0$ and is characterized by a distribution success probability $p$, a (geometrically distributed) random number of distribution attempts $N$ (with expectation value $\bar N = 1/p$), and a ``final'' two-qubit state $\hat\rho$ (subscripts denote segments or qubits at the nodes). ``Final'' here means that the, in general, imperfectly distributed states may be further subject to memory dephasing for a maximal number of $m$ time steps (distribution attempts). After an imperfect swapping operation $\mathcal S$ (error parameter $\mu$), the repeater end nodes share an entangled state over distance $2 L_0$.} \label{fig:2seg} \end{figure} The smallest, two-segment quantum repeater also serves as a basic building block for general, larger quantum repeaters. In the scheme of Fig.~\ref{fig:2seg}, each segment distributes an entangled pair of (mostly) stationary qubits by connecting its end nodes through flying qubits. The goal is to share entanglement between the two qubits at the end nodes of the whole repeater. The specific entanglement distribution scheme in each segment depends on the repeater protocol and may involve memory nodes sending or receiving photons \cite{White}. In the notation of Fig.~\ref{fig:2seg}, from an entangled state $\hat{\varrho}_{12}$ of qubits 1 and 2 and an entangled state $\hat{\varrho}_{34}$ of qubits 3 and 4, we create an entangled state $\hat{\varrho}_{14}$ of qubits 1 and 4. Here the states $\hat{\varrho}_{12}$ and $\hat{\varrho}_{34}$ subject to the Bell measurement for the entanglement swapping operation are those quantum states present in the segments at the moment when the swapping is performed. If, for example, segment 1 generates an entangled state earlier than segment 2, then $\hat{\varrho}_{12}$ enters the swapping step in the form of the initially distributed state (which is not necessarily a pure maximally entangled state) after having been subject to memory dephasing while waiting for segment 2. Thus, our physical model includes state imperfections that originate from the initial distribution as well as from the storage time, as we shall discuss in detail below. In addition, we will include an error parameter for the swapping gate itself.
\subsection{Raw rate}\label{sec:rawrate} The entanglement distribution in an elementary segment is typically not a deterministic process, and several attempts are necessary to successfully share an entangled pair of qubits between two neighboring stations. If the probability of successful generation in each attempt is $p$, then the number of time steps until success is a geometrically distributed random variable $N$ with success parameter $p$. We denote the failure probability as $q = 1 - p$. The parameter $p$ is primarily given by the probability that a photonic qubit is successfully transmitted via a fiber channel of length $L_0$ connecting two stations, $\exp(-L_0/\unit[22]{km})$. It also includes local state preparation/detection, fiber coupling, frequency conversion, and memory ``write-in'' efficiencies. The random variables for different segments (in Fig.~\ref{fig:2seg} denoted as $N_1$ and $N_2$ for the first and the second segment, respectively) are independent and identically distributed geometric random variables. Only when both segments have generated an entangled state do we perform a swapping operation on the adjacent ends (nodes 2 and 3) of the segments and, when successful, we will be left with an entangled state of qubits 1 and 4. In general, the swapping operation is also non-deterministic, but here we consider only the case of deterministic swapping. Under this simple assumption we can still cover a large class of physically relevant and realistic repeater schemes and obtain exact and optimized rates for them. Moreover, especially for larger repeaters (still with no entanglement distillation), this assumption allows us to circumvent the need for classical communication times longer than the elementary time $\tau$ (as defined below) in order to confirm successful entanglement swapping operations on ``higher'' repeater levels beyond the initial distributions in each segment. Physically, this assumption requires that in our schemes the Bell measurements for entanglement swapping (including the memory ``read-out'' operations) can be performed deterministically. Nonetheless, the swapping operations can still be imperfect, introducing errors in the states, as will be described below. Due to the non-deterministic nature of the initial entanglement generation, the whole process of entanglement distribution is also non-deterministic and fully described by the number of attempts up to and including the successful distribution (so this number is always larger than zero). The real, wall-clock time needed for entanglement generation or distribution can be obtained from the number of attempts by multiplying it with an elementary time unit, typically $\tau = L_0/c_f$, where again $L_0$ is the length of the segment and $c_f = c/n_\mathrm{r}$ is the speed of light in the optical fiber ($c$ is the speed of light in vacuum and $n_\mathrm{r}$ is the index of refraction of the fiber; depending on the specific distribution protocol there may be an extra factor of 2). The elementary time unit is composed of the classical (and quantum) signalling time per segment $\tau$ and the local processing time. However, for typical $L_0$ values as considered here, the former largely dominates over the latter, and so we may neglect the local times, as they would hardly change the final secret key rates \cite{White}. If one of the two segments generates entanglement earlier than the other, then the created state must be kept in memory.
The exact technique employed to implement this quantum memory is irrelevant for our analysis. The simplest model assumes that the state can be kept in memory for an arbitrarily long time. A useful modification in the realistic setting with imperfect quantum memories is to impose a limit of $m$ time units on the memory storage time, restarting the creation process whenever this threshold is reached.
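As a minimal numerical sketch of these raw-rate ingredients (all values below, such as the segment length, the additional coupling efficiency, and the fiber parameters, are illustrative choices of ours and not fixed by the model, and both segments are assumed to attempt distribution in parallel), one may estimate $p$, the elementary time unit $\tau$, and the resulting average two-segment waiting time by direct sampling:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

L0 = 100.0        # segment length in km (illustrative)
L_att = 22.0      # fiber attenuation length in km
eta_link = 0.8    # assumed additional coupling/detection efficiency
c_f = 2.0e5       # speed of light in fiber, km/s

p = eta_link * np.exp(-L0 / L_att)   # success probability per attempt
tau = L0 / c_f                       # elementary time unit in seconds

# number of attempts per segment: i.i.d. geometric random variables N1, N2
N = rng.geometric(p, size=(2, 100_000))
K2 = N.max(axis=0)                   # both segments must succeed before swapping

print(f"p = {p:.4f}, tau = {tau*1e3:.2f} ms, 1/p = {1/p:.0f} attempts")
print(f"average two-segment waiting time: {K2.mean()*tau:.3f} s")
\end{verbatim}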
\subsection{Errors} When the quantum repeater is employed for long-range QKD, errors become manifest in terms of a reduced secret key fraction, as introduced in the subsequent section. In order to compute this secret key fraction, we need to know the finally distributed state (density operator) of the complete repeater system, and for this we require a more detailed physical model. We shall express the finally distributed state as a function of the initial states in each segment and of the various errors that appear in the process of entanglement distribution. The physical model is rather common and has been used before in several works, both analytical and numerical. In particular, a two-segment quantum repeater can be treated analytically based on simple Pauli errors representing memory dephasing and gate (Bell measurement) errors. We address the effect of imperfect quantum storage at a memory node via a dephasing model applied while the stored quantum state waits for an adjacent segment to successfully generate or distribute entanglement. This kind of memory error can be modelled by a one-qubit dephasing channel, \begin{equation}\label{eq:Gl} \Gamma_\lambda(\hat{\varrho})=(1 - \lambda) \hat{\varrho} +\lambda Z \hat{\varrho} Z, \end{equation} where $Z$ is the qubit Pauli phase-flip operator. We assume that $0 \leqslant \lambda < 1/2$, and any such number can be represented as $\lambda = (1 - e^{-\alpha})/2$ for some $\alpha > 0$. We denote the map in Eq.~\eqref{eq:Gl} also as $\Gamma_\alpha$. To avoid confusion, throughout this work we use the following definition: \begin{equation}\label{eq:Ga} \Gamma_\alpha(\hat{\varrho}) = \frac{1 + e^{-\alpha}}{2} \hat{\varrho} + \frac{1 - e^{-\alpha}}{2} Z \hat{\varrho} Z. \end{equation} The definition for a dephasing two-qubit channel is obtained from Eqs.~\eqref{eq:Gl}-\eqref{eq:Ga} by the replacement $Z \to Z \otimes I$ if the dephasing acts on the first qubit and by $Z \to I \otimes Z$ if the dephasing acts on the second qubit. Errors may also occur when a Bell state measurement is performed. This kind of error is modelled by a two-qubit depolarizing channel, \begin{equation} \tilde{\Gamma}_\mu(\hat{\varrho}) = \mu \hat{\varrho} + (1-\mu) \frac{\hat{\mathbb{1}}}{4}. \end{equation} We do not consider dark counts of the detectors, since the optical propagation distances $L_0$ after which a detection attempt takes place remain sufficiently small in any quantum relay or repeater. Thanks to recent technological developments, typical dark count rates can be reduced far below 1 dark count per second. In Ref.~\cite{Schuck2013} they were shown to be in the range of \(\unit{mHz} \). Dark counts of such a low frequency have no significant impact on the secret key rate in our schemes. Let us now apply this to the case of a two-segment quantum repeater. The Bell measurement of qubits 2 and 3 produces from a pair of states $\hat{\varrho}_{12}$ and $\hat{\varrho}_{34}$ a state $\hat{\varrho}_{14}$, see Fig.~\ref{fig:2seg}.
The initial state $\hat{\varrho}_{1234} = \hat{\varrho}_{12} \otimes \hat{\varrho}_{34}$ of all four qubits 1, 2, 3 and 4 is the product of the states of qubits 1, 2 and qubits 3, 4. After the measurement the state $\hat{\varrho}_{14}$ of qubits 1 and 4 becomes \begin{equation}\label{eq:Srho} \hat{\varrho}_{14} \equiv \mathcal{S}(\hat{\varrho}_{1234}) = \frac{\Tr_{23}(\hat{P}_{23} \tilde{\Gamma}_{\mu, 23}(\hat{\varrho}_{1234}) \hat{P}_{23})}{\Tr(\hat{P}_{23} \tilde{\Gamma}_{\mu, 23}(\hat{\varrho}_{1234}) \hat{P}_{23})}, \end{equation} where $\mu$ describes the imperfection of the measurement and $\hat{P}_{23} = |\Psi^+\rangle_{23}\langle\Psi^+|$ is one of the four measurement operators in the two-qubit Bell state basis of the central subsystem (qubits 2 and 3), $\{ |\Phi^\pm\rangle_{23}\langle\Phi^\pm|,\,|\Psi^\pm\rangle_{23}\langle\Psi^\pm| \}$, where $|\Phi^\pm\rangle = (|00\rangle \pm |11\rangle)/\sqrt{2},\, |\Psi^\pm\rangle = (|10\rangle \pm |01\rangle)/\sqrt{2}$, for qubits defined via the two $Z$ eigenstates $|0\rangle, \,|1\rangle$ (for any one of the other three Bell measurement outcomes, the analysis below is similarly applicable). In this case, Eq.~\eqref{eq:Srho} reduces to \begin{equation}\label{eq:rhod} \hat{\varrho}_{14} \equiv \mathcal{S}(\hat{\varrho}_{1234}) = \frac{_{23}\langle\Psi^+|\tilde{\Gamma}_{\mu, 23}(\hat{\varrho}_{1234})|\Psi^+\rangle_{23}} {\Tr(_{23}\langle\Psi^+|\tilde{\Gamma}_{\mu, 23}(\hat{\varrho}_{1234})|\Psi^+\rangle_{23})}. \end{equation} A simple way to compute the right-hand side of this relation for an arbitrary density operator $\hat{\varrho}_{1234}$ is given in App.~\ref{app:Trace Identities}. In general, states of the form \begin{equation}\label{eq:rho0} \hat{\varrho}_0 = \tilde{\Gamma}_{\mu_0}\bigl(F_0\dyad{\Psi^+} + (1-F_0)\dyad{\Psi^-}\bigr) \end{equation} play an important role in the full theory presented below. It is easy to verify that \begin{equation} (I \otimes Z) \hat{\varrho}_0 (I \otimes Z) = (Z \otimes I) \hat{\varrho}_0 (Z \otimes I), \end{equation} so it does not matter whether $\Gamma_\alpha$ acts on the first or the second qubit of $\hat{\varrho}_0$, and we simply denote either application as $\Gamma_\alpha(\hat{\varrho}_0)$. An easily checked relation is \begin{equation}\label{eq:Ga2} \Gamma_\alpha(\hat{\varrho}_0) = \tilde{\Gamma}_{\mu_0}\bigl(F\dyad{\Psi^+} + (1-F)\dyad{\Psi^-}\bigr), \end{equation} where the new parameter $F$ is expressed in terms of the original one, $F_0$, as \begin{equation}\label{eq:Fprime} F = \frac{1}{2}(2F_0-1) e^{-\alpha} + \frac{1}{2}. \end{equation} The initial fidelity parameter $F_0$ (describing an initial dephasing of the distributed states) and the $\mu_0$-dependent initial depolarization are both included in the initial state $\hat{\varrho}_0$ in Eq.~\eqref{eq:rho0}, because later this will allow for an elegant recursive state relation for larger repeaters. It will also allow us to switch between different initial physical errors depending on the specific repeater realization. In general, the maps in Eq.~\eqref{eq:Ga} satisfy the relation $\Gamma_\alpha \circ \Gamma_\beta = \Gamma_{\alpha + \beta}$. In particular, we have $\Gamma_\alpha \circ \ldots \circ \Gamma_\alpha = \Gamma_{k\alpha}$, where $\Gamma_\alpha$ is used $k$ times on the left-hand side. So, when applying $\Gamma_\alpha$ to the state $\hat{\varrho}_0$ given by Eq.~\eqref{eq:rho0} several times, we have to multiply $\alpha$ in Eq.~\eqref{eq:Fprime} by this number of applications.
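These properties are easy to verify numerically. The following minimal sketch (ours; plain \texttt{numpy}, with all parameter values chosen arbitrarily, the Bell states written in the convention defined above, and the $\mu_0$-dependent depolarization omitted for brevity) applies $\Gamma_\alpha$ of Eq.~\eqref{eq:Ga} to a dephased Bell mixture and checks both the composition rule $\Gamma_\alpha \circ \Gamma_\beta = \Gamma_{\alpha+\beta}$ and the fidelity shift of Eq.~\eqref{eq:Fprime}:
\begin{verbatim}
import numpy as np

Z, I2 = np.diag([1.0, -1.0]), np.eye(2)
ZI = np.kron(Z, I2)                  # phase flip on the first qubit of a pair

def gamma(rho, alpha):
    # one-qubit dephasing channel, lambda = (1 - exp(-alpha)) / 2
    lam = 0.5 * (1 - np.exp(-alpha))
    return (1 - lam) * rho + lam * ZI @ rho @ ZI

# Bell states |Psi^+>, |Psi^-> in the basis {|00>, |01>, |10>, |11>}
psi_p = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)
psi_m = np.array([0.0, -1.0, 1.0, 0.0]) / np.sqrt(2)

F0, alpha, beta = 0.95, 0.3, 0.7
rho0 = F0 * np.outer(psi_p, psi_p) + (1 - F0) * np.outer(psi_m, psi_m)

# composition rule: Gamma_alpha o Gamma_beta = Gamma_{alpha + beta}
assert np.allclose(gamma(gamma(rho0, alpha), beta), gamma(rho0, alpha + beta))

# fidelity shift: F = (2 F0 - 1) exp(-alpha) / 2 + 1/2
F = psi_p @ gamma(rho0, alpha) @ psi_p
print(F, 0.5 * (2 * F0 - 1) * np.exp(-alpha) + 0.5)   # both ~0.833
\end{verbatim}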
In a two-segment quantum repeater, if we start with the distributed states $\hat{\varrho}_{12}$ and $\hat{\varrho}_{34}$ of the special form (similar to Eq.~\eqref{eq:rho0}) \begin{equation}\label{eq:Drho} \begin{split} \hat{\varrho}_{12} &= \tilde{\Gamma}_{\mu_1}\bigl(F_1|\Psi^+\rangle_{12}\langle\Psi^+| + (1 - F_1)|\Psi^-\rangle_{12}\langle\Psi^-|\bigr), \\ \hat{\varrho}_{34} &= \tilde{\Gamma}_{\mu_2}\bigl(F_2|\Psi^+\rangle_{34}\langle\Psi^+| + (1 - F_2)|\Psi^-\rangle_{34}\langle\Psi^-|\bigr), \\ \end{split} \end{equation} then the ``swapped'', finally distributed state $\hat{\varrho}_{14}$, given by Eq.~\eqref{eq:rhod}, is also of the same form, \begin{equation}\label{eq:rho14} \hat{\varrho}_{14} = \tilde{\Gamma}_{\mu_d}\bigl(F_d|\Psi^+\rangle_{14}\langle\Psi^+| + (1 - F_d)|\Psi^-\rangle_{14}\langle\Psi^-|\bigr), \end{equation} where $\mu_d = \mu \mu_1 \mu_2$ and $F_d$ reads as \begin{equation}\label{eq:Fd} F_d = \frac{1}{2}(2F_1 - 1)(2F_2 - 1) + \frac{1}{2}. \end{equation} We see that the form of the state is preserved by the total distribution procedure of a two-segment repeater. The same conclusion will be applicable to larger repeaters as well --- if all segments start in a state of the form given by Eq.~\eqref{eq:rho0}, then the finally distributed state will also be of the same form. For the two-segment repeater, let us now assume that both segments generate the same state as in Eq.~\eqref{eq:rho0}, but not necessarily simultaneously, so that, in general, only after some waiting time do we perform the entanglement swapping and distribute entanglement over the two segments. If the first segment generates entanglement after $N_1$ time units, and the second segment after $N_2$ time units, and we perform the entanglement swapping after $N$ time units, with $N \geqslant N_1, N_2$, then the states $\hat{\varrho}_{12}$ and $\hat{\varrho}_{34}$ prior to swapping will be of the form in Eq.~\eqref{eq:Drho} with $\mu_1 = \mu_2 = \mu_0$ and \begin{equation} \begin{split} F_1 &= \frac{1}{2}(2F_0 - 1)e^{-(N-N_1)\alpha} + \frac{1}{2}, \\ F_2 &= \frac{1}{2}(2F_0 - 1)e^{-(N-N_2)\alpha} + \frac{1}{2}. \\ \end{split} \end{equation} The final, distributed state is then given by Eq.~\eqref{eq:rho14} where, according to Eq.~\eqref{eq:Fd}, the parameters are $\mu_d = \mu \mu^2_0$ and \begin{equation} F_d = \frac{1}{2}(2F_0-1)^2 e^{-(2N-N_1-N_2)\alpha} + \frac{1}{2}. \end{equation} This distributed state is subject to less dephasing when we swap as early as possible, i.e. $N = \max(N_1, N_2)$, so that the integer term in front of $\alpha$ is equal to $2\max(N_1, N_2) - N_1 - N_2 = |N_1 - N_2|$. Here we omitted explicit factors depending on the number of memory qubits that are subject to dephasing in a single repeater segment (in our model one or two spins; in particular, a factor of 2 for one spin pair); such factors can always be absorbed into $\alpha$. The precise physical meaning of $\alpha$ will be discussed later when we calculate the memory-assisted secret key rates in a quantum repeater.
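For a concrete instance, suppose the first segment succeeds after $N_1 = 3$ attempts and the second after $N_2 = 7$. Swapping as early as possible means $N = \max(N_1, N_2) = 7$: the first pair dephases for $N - N_1 = 4$ time units while waiting, and the final state carries the factor $e^{-4\alpha}$, i.e. $F_d = \frac{1}{2}(2F_0-1)^2 e^{-4\alpha} + \frac{1}{2}$ with $\mu_d = \mu\mu_0^2$; delaying the swap to any $N > 7$ would only increase this exponent.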
\section{Methods and figure of merit}\label{sec:Methods} Before we move to the more general case of more than two segments and more than just one middle station, we need some general methods and tools from statistics. This will enable us to derive an analytic, statistical model for larger quantum repeaters beyond one middle station (the physical model remains basically the same as for the small, elementary two-segment quantum repeater), where we calculate average values or moments of two random variables: the total repeater waiting time $K_n$ and the total (i.e., accumulated) memory dephasing time $D_n$. As a quantitative figure of merit, it is useful to consider the secret key rate of QKD, as it combines in a single quantity the two typically competing effects in a quantum repeater system: the speed at which quantum states can be distributed over the entire communication distance and the quality of the finally distributed quantum states. These two effects are naturally related to the above-mentioned two random variables. For our purposes here, we shall rely throughout on asymptotic expressions for the secret key rate, omitting effects of finite key lengths. Of course, alternatively, one could also treat the total state distribution efficiencies and qualities (fidelities) separately and individually, and then also consider quantum repeater applications beyond long-range QKD. \subsection{Probability generating function} The method of probability generating functions (PGFs) plays an important role in our treatment of the statistical properties of quantum repeaters. For any random variable $X$ taking non-negative integer values, its PGF $G_{X}(t)$ is defined via \begin{equation} G_X(t) = \mathbf{E}[t^X] = \sum^{+\infty}_{k = 0} \mathbf{P}(X = k)t^k. \end{equation} The series on the right-hand side converges at least for all complex values of $t$ such that $|t| \leqslant 1$. The PGF contains all statistical information about $X$, which can be easily extracted if an explicit expression for $G_X(t)$ is known. For example, the average value of $X$, $\mathbf{E}[X] \equiv \overline{X}$, and its variance $\mathbf{V}[X] \equiv \sigma^2_X = \mathbf{E}[(X - \overline{X})^2]$, are expressed as follows: \begin{equation}\label{eq:PGF} \begin{split} \mathbf{E}[X] &= G'_X(1), \\ \mathbf{V}[X] &= G^{\prime\prime}_X(1) + G'_X(1) - G^{\prime 2}_X(1). \end{split} \end{equation} For any $\alpha \geqslant 0$ the random variable $e^{-\alpha X}$ has a finite average value, which can be computed as \begin{equation}\label{eq:PGF_2} \mathbf{E}[e^{-\alpha X}] = G_X(e^{-\alpha}). \end{equation} Note that for this random variable, besides the mean or average value, any statistical moment can be easily obtained; the $k$th moment simply becomes $\mathbf{E}[e^{-\alpha k X}] = G_X(e^{-k \alpha})$. Two kinds of random variables appear in our model of quantum repeaters, one related to the raw rate and the other to the secret key fraction of QKD, as introduced below. It is not always possible to get a compact expression for the PGF of these random variables explicitly, but when it is, we use the equations above to obtain the statistical properties of the corresponding random variables. \subsection{Secret key rate}\label{sec:skr} The main figure of merit in our study is the quantum repeater secret key rate, which can be defined as the product of two quantities, \begin{equation} S = Rr, \end{equation} where $R$ is the raw rate and $r$ is the secret key fraction.
The raw rate is simply the inverse average waiting time, \begin{equation} R = \frac{1}{T}, \end{equation} where $T = \mathbf{E}[K]$ is the average number of steps $K$ needed to successfully distribute one entangled qubit pair over the entire communication distance between Alice and Bob (giving an average time duration in seconds when multiplied by an appropriate time unit $\tau$). The secret key fraction of the BB84 QKD protocol \cite{BB84,PirRMP}, assuming one-way post-processing, is given by \begin{equation}\label{eq:skf} r = 1 - h(\overline{e_x}) - h(\overline{e_z}), \end{equation} where $e_x$ and $e_z$ are the quantum bit error rates (QBERs), \begin{equation}\label{eq:exez} \begin{split} e_z &= \langle 00|\hat{\varrho}_n|00\rangle + \langle 11|\hat{\varrho}_n|11\rangle, \\ e_x &= \langle +-|\hat{\varrho}_n|+-\rangle + \langle -+|\hat{\varrho}_n|-+\rangle, \end{split} \end{equation} and $h(p)$ is the binary entropy function, \begin{equation} h(p) = -p\log_2(p) - (1-p)\log_2(1-p). \end{equation} The QBERs $e_x$ and $e_z$ in Eq.~\eqref{eq:exez} are obtained from the final, distributed state $\hat{\varrho}_n$ of an $n$-segment quantum repeater, which in our case will depend on the dephasing random variable, and so we have to insert average values in Eq.~\eqref{eq:skf}, as indicated by the bars. We thus need a complete model of quantum repeaters to compute the statistical properties of the relevant random variables associated with the number of steps to distribute entanglement or with the density operator of the distributed state. Given such a model, the aim of our work is to compute and analyze secret key rates of quantum repeaters of increasing size, up to eight segments, considering and optimizing different distribution and swapping schemes. Besides the most common BB84 QKD protocol, we may alternatively consider the six-state protocol \cite{sixstate}, which would slightly improve the secret key rate. Assuming again one-way post-processing, the secret key fraction $r$ of the six-state protocol is given by $1-H(\boldsymbol{\lambda})$ \cite[App. A]{RevModPhys.81.1301}, where $H(\cdot)$ is the Shannon entropy and the vector $\boldsymbol{\lambda}$ must contain the corresponding weights of the four Bell states in the final density operator $\hat{\varrho}_n$. Throughout this work all secret key rates are calculated from their asymptotic expressions, and hence effects of finite key lengths are not included. This simplifies the analytical treatment of a quantum repeater chain, which, as we will see, quickly becomes rather complex for a growing number of stations, involving many distinct choices and strategies for the entanglement manipulations. Moreover, our rate analysis should also be useful for assessing and comparing the performances of different quantum repeaters in applications beyond QKD.
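To make these definitions concrete, the following minimal sketch (ours; all numbers are purely illustrative and not tied to a specific repeater) evaluates the PGF of the geometric number of attempts per segment, $G_N(t) = pt/(1-qt)$, and combines an average waiting time with assumed QBERs into an asymptotic BB84 key rate per channel use:
\begin{verbatim}
import numpy as np

p, alpha = 0.01, 1e-3          # illustrative success probability and dephasing rate
q = 1 - p

G = lambda t: p * t / (1 - q * t)   # PGF of the geometric number of attempts

mean_N = 1 / p                 # = G'(1): mean number of attempts
exp_moment = G(np.exp(-alpha)) # = E[exp(-alpha N)] obtained via the PGF

def h(x):
    # binary entropy function
    return 0.0 if x in (0.0, 1.0) else -x * np.log2(x) - (1 - x) * np.log2(1 - x)

ex, ez = 0.02, 0.01            # assumed QBERs of some final state
S = max(0.0, 1 - h(ex) - h(ez)) / mean_N   # S = R * r per channel use
print(mean_N, exp_moment, S)   # divide by tau for a rate per second
\end{verbatim}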
\section{Quantum repeaters beyond one middle station}\label{sec:Physical Modelling} \begin{figure*} \caption{``Doubling'' swapping scheme for a four-segment quantum repeater. This is the most common swapping strategy, which allows one to systematically include entanglement distillation at each repeater ``nesting level''. Without extra distillation, however, ``doubling'' is never optimal: combined with fast, parallel distributions it exhibits increased parallel storage times and hence memory dephasing (while combined with sequential distributions the repeater waiting times become suboptimal). Memory cut-off parameters are omitted in the illustration.} \label{fig:4segD} \end{figure*} \begin{figure*} \caption{``Iterative'' swapping scheme for a four-segment quantum repeater. The swapping operations are performed step by step (here from left to right). This scheme too, when executed with parallel distributions in each segment, leads to an increase of the total dephasing. However, if combined with sequential distributions, the accumulated dephasing times can be reduced (with always at most one spin or spin pair being subject to a long dephasing) at the expense of a growing repeater waiting time. Memory cut-off parameters are omitted in the illustration.} \label{fig:4segI} \end{figure*} Larger repeaters with more than two segments and one middle station can now be modeled in a way similar to the two-segment case discussed above. However, the extended, more general case is also more complex, and there are both different ways to perform the initial entanglement distributions in all elementary segments and different ways to connect the successfully distributed segments via entanglement swapping. For the initial distributions we make a distinction between sequential and parallel schemes, where the former refers to a scheme in which, according to a predetermined order, the distributions are attempted step by step starting from, e.g., the first segment. In a parallel scheme, the distributions are attempted simultaneously in all segments, which obviously leads to a smaller total repeater waiting time than for the sequential distribution schemes. Nonetheless, since the sequential schemes do make use of the quantum memories, they already offer the repeater-like scaling advantage over point-to-point quantum communication links. Even for a two-segment quantum repeater, we may choose a sequential scheme, where we first distribute only, e.g., the left segment and only once we have succeeded there do we attempt to distribute the right segment. Experimentally, this can be of relevance for those realizations where only a single short-term quantum memory is available at every station for the light-matter interface and another quantum memory for the longer-term storage (e.g., respectively, an electronic and a nuclear spin in colour-center-based repeater nodes) \cite{WehnerNV,HansonNV}. Theoretically and conceptually, there are at least two advantages of a (fully) sequential distribution approach \cite{tf_repeater}. First, the two basic random variables of a quantum repeater are very simple, and so the secret key rates are fairly easy to calculate. Second, at any time at most one entangled qubit pair (or even only a single spin if, e.g., Alice measures her qubit immediately) is subject to memory dephasing during the distribution steps. For the entanglement connections via entanglement swapping, the two-segment case is special, as there is only one swapping to be performed at the end when pairs in both segments are available.
However, already with three segments and two repeater stations there is no unique swapping order anymore, and we may either fix the order or ``dynamically'' choose where we swap as soon as swapping is possible for two neighboring, successfully distributed segments. In a fixed scheme, two neighboring segments, though ready, may have to wait before being connected. Thus, the choice of the entanglement swapping scheme has a significant impact on the totally accumulated dephasing time. In a worst-case scenario, we could wait until all segments have been distributed and then do all the entanglement connections at the very end; for deterministic entanglement swapping, as in our model, this would not affect the raw waiting times, but it would lead to a maximal total dephasing. In this case, a sequential distribution where entanglement swapping takes place immediately when a new, successfully bridged segment is available can lead to a higher secret key rate than a combination of parallel distribution and swapping at the end (where the rates of the latter scheme may still only be obtainable approximately) \cite{tf_repeater}. The crucial innovation in our analytical treatment here is that we will be able to obtain the exact secret key rates for schemes that combine fast, parallel distributions with fast, immediate swapping (and hence a suppressed level of parallel storage). In other words, among all parallel-distribution schemes we will calculate the exact rates that are optimized with regards to the total repeater dephasing. \subsection{Waiting times} The average total waiting times in a quantum repeater, or even the full statistics of the waiting-time random variable, can in principle be obtained via the Markov chain formalism, even when the swapping is probabilistic \cite{PvL, Shchukin2021}. More generally, the PGFs as introduced earlier contain the full statistical information, and for deterministic swapping, we can obtain the PGF of $K_n$ through combinatorics. In order to minimize the total waiting time, the distributions should occur in parallel. However, there is no unique way to perform the entanglement swapping, and so let us briefly consider this aspect in the context of the waiting times. For example, for a four-segment repeater, two possible swapping strategies are shown in Figs.~\ref{fig:4segD} and \ref{fig:4segI}. Both schemes are for a fixed swapping order, while we may distribute the individual segments in parallel. In the first scheme, typically referred to as ``doubling'', we swap the two halves of the repeater independently, and only when both are ready do we swap them as well. In the second scheme, we swap the segments one after the other starting at one of the repeater's ends (here the left end); we may refer to this scheme as ``iterative'' swapping. Other schemes are possible, and the more segments the repeater has, the more possibilities for performing swappings there are. The raw rate of a repeater is characterized by the number of steps, $K_n$, needed to successfully distribute an entangled pair, and this random variable can be expressed in terms of the geometric random variables $N_i$ associated with each segment. For example, for the swapping schemes shown in Figs.~\ref{fig:4segD} and \ref{fig:4segI}, when combined with parallel distributions, we have $K_4 = \max(N_1, N_2, N_3, N_4)$, so the two schemes have the same raw rate. In general, the waiting times of all such schemes that distribute in parallel are of a similar form.
Those schemes that we later classify as ``optimal'' in terms of the whole secret key rate are assumed to be parallel distribution schemes. Conversely, combining iterative swapping with sequential distribution can lead to a reduced accumulated dephasing time at the expense of an increased total repeater waiting time. We shall discuss the accumulated dephasing times next. \subsection{Dephasing times} In order to treat the total dephasing time in a quantum repeater with more than two segments, we have to generalize the methods and the model that led to the result for the distributed state for two segments, Eqs.~\eqref{eq:rho14} and \eqref{eq:Fd} and the discussion below, to larger repeaters with, in principle, an arbitrary number of segments $n$. In fact, we did the two-segment derivations in such a way that an $n$-segment extension is now straightforward. We obtain the following expression for the final, distributed state in the general case: \begin{equation}\label{eq:rhon} \begin{split} \hat{\varrho}_n = \tilde{\Gamma}_{\mu_n}\Biggl[&\frac{1 +(2F_0 - 1)^n e^{-\alpha D_n}}{2} \dyad{\Psi^+} \Biggr. \\ +\Biggl.&\frac{1 - (2F_0 - 1)^n e^{-\alpha D_n}}{2} \dyad{\Psi^-}\Biggr], \end{split} \end{equation} where $\mu_n = \mu^{n-1} \mu^n_0$ and $D_n = D_n(N_1, \ldots, N_n)$ is a random variable describing the total number of time units that contribute to the total dephasing in the final output state. For $n=2$, the expression $D_2(N_1, N_2) = |N_1 - N_2|$ has been obtained above; for larger $n$, the value of $D_n$ depends on the swapping scheme. As before, we omitted explicit factors depending on the number of memory qubits that are subject to dephasing in a single repeater segment (one or two spins in our model), a number which also depends on the application and the specific execution of the protocol. Such factors can always be absorbed into $\alpha$. The precise physical meaning of $\alpha$ will be discussed later when we calculate the memory-assisted secret key rates in a quantum repeater. The QBERs for the state in Eq.~\eqref{eq:rhon} are easy to compute, \begin{equation}\label{eq:QBER} \begin{split} e_z &= \frac{1}{2}(1 - \mu^{n-1}\mu^n_0), \\ e_x &= \frac{1}{2}(1 - \mu^{n-1}\mu^n_0 (2F_0 - 1)^n e^{-\alpha D_n}). \end{split} \end{equation} For one of the averages, we have $\overline{e_z} = e_z$, and in order to obtain the other average $\overline{e_x}$ we need to calculate the expectation value $\mathbf{E}[e^{-\alpha D_n}]$. This average can be obtained with the help of Eq.~\eqref{eq:PGF_2} if we know the PGF of $D_n$. Again, in principle, we can get the full statistics of $D_n$ (and of functions of it) from this PGF. More specifically, according to Eq.~\eqref{eq:PGF_2}, for the random variable $e^{-\alpha D_n}$ we can easily obtain all statistical moments of order $k$, $\mathbf{E}[e^{-k \alpha D_n}]$. This may be useful for a rate analysis that includes keys of finite length, though in this work we shall focus on asymptotic keys. The PGF of $D_n$, however, is generally harder to obtain than that of $K_n$. For example, the PGF of $D_n$ is not obtainable via the absorption time of a Markov chain (unlike that of $K_n$, which is obtainable even when the entanglement swapping is probabilistic) \cite{PvL, Shchukin2021}.
Nonetheless, at least without considering the more complicated case including a memory cut-off, we can calculate the relevant PGF of $D_n$ by analyzing all permutations of the basic variables (there are also other, more elegant, but still not particularly efficient or scalable methods to treat the statistics of $D_n$, e.g. based on algebraic geometry). We see that in order to compute the secret key rate of a quantum repeater we need to study the two integer-valued random variables $K_n$ and $D_n$. The former describes the number of steps to successfully distribute entanglement and is responsible for the repeater's raw rate. The latter describes the quality of the final state and strongly depends on the swapping scheme. For example, for a four-segment repeater with a predetermined swapping order like the iterative scheme in Fig.~\ref{fig:4segI}, we could actually also choose to adapt the initial entanglement distributions to the swapping strategy and hence wait with every subsequent distribution step until the corresponding connection from the left has been performed. Since this is no longer parallel distribution (it is ``sequential'' distribution), we would obtain an increased total waiting time. However, the accumulated dephasing time may be reduced this way, as we discuss in the next subsection. In general, we may also consider schemes with a memory cut-off, where we put a certain restriction of $m$ time units on the maximum time a qubit can be kept in memory. So, in this case, we study four variables --- the total number of distribution steps and the total dephasing, both with and without cut-off. In order to maximize the secret key rate we need a scheme with small $\mathbf{E}[K_n]$ and large $\mathbf{E}[e^{-\alpha D_n}]$. In the following subsections, we will introduce different schemes for performing the entanglement swapping and, where possible, compute the PGFs of the corresponding random variables. The PGF of $K_n$ is denoted as $G_n(t)$ and that of $D_n$ as $\tilde{G}_n(t)$. For the corresponding quantities with cut-off $m$ we use the superscript $[m]$, e.g. $K^{[m]}_n$. We will see and argue that there are three basic properties that a quantum repeater protocol (unassisted by additional quantum error detection or correction) should exhibit: distribute the entangled states in each segment in parallel, swap the initially distributed states as soon as possible, and avoid parallel storage of already distributed pairs as much as possible. It is obvious that these three ``rules'' cannot all be fully obeyed at the same time. In particular, parallel distribution will ultimately lead to some degree of parallel storage.
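As an illustration of how $\mathbf{E}[e^{-\alpha D_n}]$ enters the averaged QBERs, the following minimal sketch (ours; all parameter values are illustrative) estimates $\overline{e_x}$ by direct sampling for the simplest case $n=2$, where $D_2 = |N_1 - N_2|$ when swapping as early as possible:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 2                                     # two segments: D_2 = |N_1 - N_2|
p, alpha = 0.01, 1e-3                     # illustrative values
mu, mu0, F0 = 0.99, 0.99, 0.99

N = rng.geometric(p, size=(n, 200_000))   # attempts per segment
D = np.abs(N[0] - N[1])                   # dephasing when swapping as early as possible
dephasing = np.mean(np.exp(-alpha * D))   # Monte Carlo estimate of E[exp(-alpha D_n)]

ez = 0.5 * (1 - mu**(n - 1) * mu0**n)
ex = 0.5 * (1 - mu**(n - 1) * mu0**n * (2 * F0 - 1)**n * dephasing)
print(dephasing, ez, ex)
\end{verbatim}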
\subsection{Sequential distribution schemes} In what we refer to as a sequential entanglement distribution scheme, the initial, individual pairs are no longer distributed in parallel but strictly sequentially according to a predetermined order. If this order is chosen in a suitable way, it is possible that at any time during the repeater protocol at most one entangled pair is subject to dephasing (apart from small constant dephasing units for single attempts), because once a new pair is available an entanglement connection can be performed immediately, and only then does another new segment start distributing. This may lead to a reduced accumulated dephasing time. Moreover, from a secret key rate analysis point of view, an appropriate sequential scheme can allow for a straightforward calculation of the statistics of both random variables, the total waiting and the accumulated dephasing times, even when a memory cut-off is included. Let us consider a simple, sequential distribution and swapping scheme where the above discussion applies and the secret key rate can be computed exactly by means of elementary combinatorics. In this scheme, we start by distributing entanglement in segment 1 (the leftmost segment), and only after a success do we start to attempt distributions in segment 2. As soon as we succeed there too, we immediately swap segments 1 and 2 and start to distribute entanglement in segment 3. As soon as we succeed with the distribution in segment 3, we swap segment 3 with the first two, already connected segments, start distributing in segment 4, and so on, repeating this process until entanglement has also been distributed in the rightmost segment, followed by a final entanglement swapping step. This scheme, for $n=4$, is also illustrated by Fig.~\ref{fig:4segI}. The variables $K_n$ and $D_n$ for this scheme and general $n$ are thus defined as \begin{equation} K^{\mathrm{seq}}_n = N_1 + \ldots + N_n, \quad D^{\mathrm{seq}}_n = N_2 + \ldots + N_n. \end{equation} The PGFs of these random variables are just powers of the PGF of the geometric distribution: \begin{equation} G^{\mathrm{seq}}_n(t) = \left(\frac{pt}{1 - qt}\right)^n, \ \tilde{G}^{\mathrm{seq}}_n(t) = \left(\frac{pt}{1 - qt}\right)^{n-1}. \end{equation} In App.~\ref{app:SeqPGF} we derive the following expressions for the PGFs of the random variables with memory cut-off. We assume an accumulated, global cut-off where the total storage (dephasing) time across all segments must not exceed the value $m$. The PGF of $K^{[m]}_n$ is given by \begin{equation}\label{eq:Gmnt} G^{[m]}_n(t) = \frac{p^n t^n \sum^{m-n+1}_{j=0}\binom{j+n-2}{n-2}q^j t^j}{1-qt-p\sum^{n-2}_{i=0}\binom{m}{i}p^i q^{m-i} t^{m+1}}, \end{equation} and the PGF of $D^{[m]}_n$ becomes \begin{equation} \tilde{G}^{[m]}_n(t) = \frac{t^{n-1}\sum^{m-n+1}_{j=0} \binom{j+n-2}{n-2}q^j t^j}{\sum^{m-n+1}_{i=0}\binom{m}{i+n-1}p^i q^{m-n+1-i}}. \end{equation} Because it takes at least one time step for each segment to succeed, we have the inequalities $n \leqslant K^{[m]}_n$ and $n-1 \leqslant D^{[m]}_n \leqslant m$, which agree with the PGFs of these quantities presented above. Moreover, for $m \to +\infty$ we have \begin{equation}\label{eq:GGinf} G^{[+\infty]}_n(t) = G^{\mathrm{seq}}_n(t), \quad \tilde{G}^{[+\infty]}_n(t) = \tilde{G}^{\mathrm{seq}}_n(t). \end{equation} These relations are easy to prove; just note that \begin{equation} \begin{split} \sum^{m-n+1}_{i=0} &\binom{m}{i+n-1}p^i q^{m-n+1-i} \\ &= \frac{1}{p^{n-1}} \left[1 - \sum^{n-2}_{i=0}\binom{m}{i}p^i q^{m-i}\right].
\end{split} \end{equation} The binomial coefficient $\binom{m}{i}$ is a polynomial in $m$ of degree $i$, and thus $\binom{m}{i} q^m \to 0$ when $m \to +\infty$ for all $i = 0, \ldots, n-2$, which proves the relations of Eq.~\eqref{eq:GGinf}. There are also variations of the above sequential cut-off scheme. In the previous scheme we only abort a round when we have already waited $m$ time units. Now consider the case where we have already waited $m/2$ time units, but only a small number of segments have succeeded. Hence, it is highly unlikely that we will succeed in all segments within the $m$ time steps. Therefore, it is better not to waste time and to abort the current round early in order to start from scratch. A very simple strategy following this idea makes use of an individual (local) cut-off in each segment. However, it is beneficial to use a different cut-off in every segment; one should choose a smaller cut-off in the first segments and then increase the cut-off for later segments. The rationale behind this is that in the first segments we have not invested much effort and can discard rather aggressively, whereas later we should discard less aggressively, since we have already consumed many resources. The advanced protocol is uniquely defined by a vector of cut-offs $\vec{m}=(m_1,\dots,m_{n-1})$, and the random variables $K_n$ and $D_n$ for this protocol and general $n$ are given by \begin{equation} K_{n}^{\mathrm{seq},\vec{m}}= \tilde{N}^{(m_{n-1})}+(T_{n-1}-1)m_{n-1}+\sum_{j=1}^{T_{n-1}} K_{n-1,j}^{\mathrm{seq},\vec{m}}, \end{equation} where $K_1^{\mathrm{seq},\vec{m}}$ is geometrically distributed with parameter $p$, $\tilde{N}^{(m_{n-1})}$ follows a truncated geometric distribution with cut-off $m_{n-1}$, and $T_{n-1}$ is a geometric random variable with parameter $(1-q^{m_{n-1}})$ describing the number of starts of the protocol. For the dephasing we have \begin{equation} D_{n}^{\mathrm{seq},\vec{m}}=\tilde{N}^{(m_1)}+\ldots+\tilde{N}^{(m_{n-1})}. \end{equation} The PGF of $K_{n}^{\mathrm{seq},\vec{m}}$ is calculated in App.~\ref{app:SeqPGF} and given recursively by \begin{equation} G^{[\vec{m}]}_n(t)=\tilde{G}_2^{[m_{n-1}]}(t)t^{-m_{n-1}}P^{(m_{n-1})}\left(G_{n-1}^{[\vec{m}]}(t)t^{m_{n-1}}\right), \end{equation} where $P^{(m)}(t)=\frac{(1-q^m)t}{1-q^m t}$ and $G^{[\vec{m}]}_1=G^{\mathrm{seq}}_1$. The PGF of $D_{n}^{\mathrm{seq},\vec{m}}$ is simply given by \begin{equation} \tilde{G}^{[\vec{m}]}_{n}(t)=\prod_{j=1}^{n-1} \tilde{G}^{[m_j]}_2(t)\,, \end{equation} since the PGF of a sum of independent random variables is the product of their PGFs. As the state quality depends only on the total dephasing time, the best sequential protocol would count the total number of storage steps and discard according to a cut-off that is a function of the number of segments that have already succeeded, and it may also make use of the early, aggressive discarding described above.
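Without a cut-off, the closed forms above are straightforward to check numerically; a minimal sketch (ours, with illustrative parameter values) compares Monte Carlo estimates of $\mathbf{E}[K^{\mathrm{seq}}_n] = n/p$ and $\mathbf{E}[e^{-\alpha D^{\mathrm{seq}}_n}] = \tilde{G}^{\mathrm{seq}}_n(e^{-\alpha})$ with the expressions obtained from the PGFs:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
p, alpha, n = 0.05, 1e-3, 4            # illustrative values
q = 1 - p

N = rng.geometric(p, size=(n, 200_000))
K_seq = N.sum(axis=0)                  # K_n^seq = N_1 + ... + N_n
D_seq = N[1:].sum(axis=0)              # D_n^seq = N_2 + ... + N_n

# closed forms following from the PGFs above
print(K_seq.mean(), n / p)
print(np.mean(np.exp(-alpha * D_seq)),
      (p * np.exp(-alpha) / (1 - q * np.exp(-alpha)))**(n - 1))
\end{verbatim}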
\subsection{Parallel distribution schemes} A more efficient class of schemes is constructed when we do not wait for some segments to finish before we start others. In these schemes we start all segments independently and distribute in parallel. It follows that for these schemes without cut-off we have \begin{equation} K^{\mathrm{par}}_n = \max(N_1, \ldots, N_n), \end{equation} which means that all such schemes give the same raw rate. In App.~\ref{app:GKn} we derive the following expressions for the PGF of $K_n$: \begin{equation}\label{eq:GKn} \begin{split} G^{\mathrm{par}}_n(t) &= t\sum^n_{i = 1}(-1)^{i+1} \binom{n}{i}\frac{1-q^i}{1 - q^i t} \\ &= 1 + (1-t)\sum^n_{i=1} (-1)^i \binom{n}{i} \frac{1}{1-q^i t}. \end{split} \end{equation} The two expressions are identical, since their difference reduces to $(1 - 1)^n = 0$. From the first expression it is clear that the values of $K_n$ start at 1, as it must be, because it takes at least one time unit to distribute entanglement. In the other expression the necessary property of all PGFs becomes manifest, $G_n(1) = 1$. From the first relation of Eqs.~\eqref{eq:PGF} we get the well-known expression for the average waiting time of a quantum repeater with parallel distribution and deterministic entanglement swapping (at any time when possible, e.g. at the very end) \begin{equation}\label{eq:Knpar} \overline{K^{\mathrm{par}}_n} = \frac{\mathrm{d}}{\mathrm{d}t}G^{\mathrm{par}}_n(t) \Big\vert_{t=1} = \sum^n_{i=1} (-1)^{i+1} \binom{n}{i} \frac{1}{1 - q^i}, \end{equation} which has been obtained in Ref.~\cite{PhysRevA.83.012323} (but the full waiting time probability distribution has not). Importantly, however, all other relevant expressions, the total number of distribution steps including memory cut-off as well as the finally distributed quantum state including memory imperfections, both for the model with and without memory cut-off, depend on the particular swapping strategy chosen (e.g. unnecessarily postponing some or even all entanglement swapping steps until the very end maximizes the amount of parallel storage and hence the total dephasing in the final state). For this, there is a growing number of choices for larger repeaters, and in the following we shall derive an optimal swapping scheme that results in a minimal total dephasing time (while sharing the high raw rates, i.e. the minimal total waiting times, with all parallel distribution schemes). \subsubsection{Optimal swapping scheme}\label{sec:optimalswapscheme} Because all schemes (without cut-off) considered in this subsection have equal raw rates, the best secret key rate is determined by the optimal scheme with regards to the secret key fraction. In this subsection we shall present this scheme. In contrast to the schemes presented in Figs.~\ref{fig:4segD} and \ref{fig:4segI}, which are fixed, the optimal swapping scheme is dynamic. In a fixed scheme the order of swappings is fixed at the beginning and does not depend on the order in which the segments become ready. For example, for the ``doubling'' scheme as shown in Fig.~\ref{fig:4segD} for $n=4$, we never swap segments 2 and 3, even if they are ready and segments 1 and 4 are not. We always wait for segments 1 and 2 or segments 3 and 4 to become ready, swap these pairs, and then swap the larger segments to finish the entanglement distribution over the whole repeater. In a dynamical scheme we do not follow a prescribed order and can swap the segments based on their state. 
Of course, we can freely mix and match fixed and dynamic behaviours. For example, for $n=8$, we can first swap four pairs of segments in a fixed way and then swap the four new, larger segments dynamically. We now show that the fully dynamic scheme, where we always swap the segments that are ready, is the optimal one. To prove this statement, we give two characterizations of this fully dynamic scheme. One is a straightforward translation of the verbal description into a formal definition, but it is not manifestly optimal. The other is optimal by construction, but not manifestly fully dynamic. We then show that the two constructions coincide, which demonstrates the validity of our statement. Swapping an earliest pair of segments means that we choose an index $i$ for which $\max(N_i, N_{i+1})$ is minimal (if there are several such indices, we choose one of them arbitrarily), swap the pair of segments $i$ and $i+1$, and recursively apply this procedure to the remaining segments. If we denote the dephasing random variable of this scheme as $\tilde{D}_n$, then its formal definition reads as \begin{widetext} \begin{equation}\label{eq:Dtilde} \tilde{D}_n(N_1, \ldots, N_n) = |N_{i_0} - N_{i_0 + 1}| +\tilde{D}_{n-1}(N_1, \ldots, N_{i_0-1}, \max(N_{i_0}, N_{i_0+1}), N_{i_0+2}, \ldots, N_n), \end{equation} \end{widetext} where $i_0 = \argmin_i \max(N_i, N_{i+1})$. This definition describes a greedy, locally optimal scheme, which optimizes only a single step. As is known from the theory of algorithms, greedy algorithms do not always produce globally optimal results: by taking only locally optimal steps, we may miss a much better reward in the future that would have required a locally non-optimal step now. Fortunately, in this case the greedy, locally optimal scheme expressed by Eq.~\eqref{eq:Dtilde} does give the globally optimal result, as we show below. In any scheme, the first step will be to swap a pair of neighbouring segments, let us say segments $i$ and $i+1$. We do this at the time moment $\max(N_i, N_{i+1})$, and the contribution of these segments to the total dephasing is $|N_i - N_{i+1}|$. After this swapping, we are left with $n-1$ new segments, one of which is the combination of two original ones. Any initial segment $j$, where $j \not= i, i+1$, generates an entangled state after $N_j$ time units, and the combined segment ``generates'' entanglement after $\max(N_i, N_{i+1})$ time units. If swapping these $n-1$ segments in any way accumulates $D_{n-1}$ dephasing time units, then the total dephasing is $D_n = |N_i - N_{i+1}| + D_{n-1}$ time units. To find the minimal dephasing we simply take the minimum of this expression over $i = 1, \ldots, n-1$, and recursively apply the same rule to the new segments. If we denote the dephasing random variable corresponding to this scheme as $D^\star_n$, then this description translates into the following definition: \begin{equation}\label{eq:Dstar} \begin{split} &D^\star_n(N_1, \ldots, N_n) = \min_{i = 1, \ldots, n-1}\Bigl[|N_i - N_{i+1}| \Bigr. \\ \Bigl.&+ D^\star_{n-1}(N_1, \ldots, N_{i-1}, \max(N_i, N_{i+1}), N_{i+2}, \ldots, N_n) \Bigr]. \end{split} \end{equation} The base case of this recursive definition is $D^\star_2(N_1, N_2) \equiv D_2(N_1, N_2) = |N_1 - N_2|$. By construction, this definition gives the globally minimal number of dephasing time units required to distribute long-distance entanglement if it takes $N_i$ time units for segment $i$ to generate entanglement.
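Both definitions translate directly into code. The following Python sketch is only an illustration (function names are ours); it implements Eq.~\eqref{eq:Dtilde} iteratively and Eq.~\eqref{eq:Dstar} by exhaustive recursion and, anticipating the comparison carried out below, spot-checks on random inputs with distinct $N_i$ that the two quantities coincide for $n \leqslant 8$.
\begin{verbatim}
import random

def D_tilde(N):
    """Swap-an-earliest-pair (greedy) dephasing, Eq. (Dtilde)."""
    N = list(N)
    D = 0
    while len(N) > 1:
        # pair (i, i+1) whose later segment finishes first
        i0 = min(range(len(N) - 1), key=lambda i: max(N[i], N[i + 1]))
        D += abs(N[i0] - N[i0 + 1])
        N[i0:i0 + 2] = [max(N[i0], N[i0 + 1])]   # merge the swapped pair
    return D

def D_star(N):
    """Globally minimal dephasing, Eq. (Dstar), by exhaustive recursion."""
    N = tuple(N)
    if len(N) == 2:
        return abs(N[0] - N[1])
    return min(abs(N[i] - N[i + 1])
               + D_star(N[:i] + (max(N[i], N[i + 1]),) + N[i + 2:])
               for i in range(len(N) - 1))

# spot check D_tilde == D_star on random distinct inputs for n = 2, ..., 8
for _ in range(2000):
    n = random.randint(2, 8)
    N = random.sample(range(1, 1000), n)
    assert D_tilde(N) == D_star(N), N
print("greedy and globally optimal dephasing agree on all sampled inputs")
\end{verbatim}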
We now have two quantities, the locally optimal one, given by Eq.~\eqref{eq:Dtilde}, and the globally optimal one, given by Eq.~\eqref{eq:Dstar}. The former has the semantics of swapping the earliest pair, but is not manifestly globally optimal. The latter is optimal by construction, but does not obviously correspond to the swap-the-earliest strategy. It turns out that the two quantities coincide, at least for all $n = 2, \ldots, 8$. A straightforward way to check this is to consider all possible inequality relations between the $N_i$. There are $n!$ such relations, which correspond to the permutations of the $N_i$ in the following inequality \begin{equation}\label{eq:N1Nn} N_1 \leqslant \ldots \leqslant N_n. \end{equation} For any given inequality relation between the $N_i$ we can compute both quantities explicitly in terms of the $N_i$. For example, for the relation in Eq.~\eqref{eq:N1Nn} both quantities reduce to the same expression, $\tilde{D}_n = D^\star_n = N_n - N_1$. For all other possible relations we have \begin{equation} \tilde{D}_n(N_1, \ldots, N_n) = D^\star_n(N_1, \ldots, N_n), \end{equation} for all $n = 2, \ldots, 8$. This can easily be verified with the help of a computer algebra system. Our conjecture is that the statement is valid for all $n \geqslant 2$, but in this work we consider repeaters with up to eight segments only, and for such $n$ we have verified the statement directly. In contrast to the sequential scheme introduced earlier, there is no compact expression for the PGF of the optimal scheme. Each case will be considered separately in the next subsections. Where possible, we present explicit expressions for the PGFs of the quantities in question. The main difficulty arises for schemes with a memory cut-off; hence, when including a cut-off, even for smaller repeaters (with $n>2$) we only consider the fully sequential scheme, for which we have exact expressions. In the following subsections, we discuss quantum repeaters for $n=2$, $3$, $4$, and $8$ segments. Although the case $n=2$ is rather well known and there is no set of different swapping strategies to choose from in this case, it will be briefly reproduced based on the formalism introduced in this work. The case $n=3$ is interesting, as it represents the simplest nontrivial case beyond one middle station, already requiring a choice regarding distribution and swapping strategies (here, in the main text, the focus remains on schemes with an optimal dephasing for parallel distribution; in App.~\ref{app:Optimality 3 segments}, we discuss the full secret key rate for $n=3$ including all possible distribution schemes). Finally, the cases $n=4$ and $n=8$ are chosen, as they allow for a comparison with ``doubling'' (see Fig.~\ref{fig:4segD}). Larger quantum repeaters with $n>8$ become increasingly difficult to treat (in terms of the optimized total dephasing). We will also see later that for $n=8$, without additional methods of quantum error detection or correction, the necessary experimental parameter values in our model already become highly demanding. \subsubsection{Two-segment repeater} This is the simplest kind of quantum repeater. The PGF $G_2(t)$ of $K_2 = \max(N_1, N_2)$ is given by Eq.~\eqref{eq:GKn} with $n=2$ and in this case reads as \begin{equation} G_2(t) = \frac{p^2 t (1 + qt)}{(1 - qt)(1 - q^2 t)}. \end{equation} As we noted before, there is only one choice for the dephasing variable, $D_2 = |N_1 - N_2|$ (parallel distribution).
In Appendix~\ref{app:PGF Parallel schemes}, we derive the following expression for the PGF of this variable: \begin{equation} \tilde{G}_2(t) = \frac{p^2}{1 - q^2} \frac{1 + q t}{1 - q t}. \end{equation} There we also show that the PGFs of the variables with cut-offs are \begin{equation} \begin{split} G^{[m]}_2(t) &= \frac{p^2 t (1 + qt - 2(qt)^{m+1})}{(1 - qt)(1 - q^2 t - 2p (qt)^{m+1})}, \\ \tilde{G}^{[m]}_2(t) &= \frac{p}{1 + q - 2q^{m+1}} \frac{1 + qt - 2(qt)^{m+1}}{1 - qt}. \end{split} \end{equation} It is obvious that we have the same consistency relations as for the sequential distribution scheme: \begin{equation} G^{[+\infty]}_2(t) = G_2(t), \quad \tilde{G}^{[+\infty]}_2(t) = \tilde{G}_2(t). \end{equation}
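These closed-form PGFs can be re-verified symbolically. The short sympy sketch below is illustrative only; it checks the normalisation $G(1)=1$, the first moment $\mathbf{E}[K_2]$ given by Eq.~\eqref{eq:Knpar} for $n=2$, and the first moment of the dephasing, which evaluates to $2q/(1-q^2)$ (our evaluation of $\tilde{G}_2'(1)$).
\begin{verbatim}
import sympy as sp

p, t = sp.symbols('p t', positive=True)
q = 1 - p

G2  = p**2 * t * (1 + q*t) / ((1 - q*t) * (1 - q**2 * t))   # PGF of K_2
Gt2 = p**2 / (1 - q**2) * (1 + q*t) / (1 - q*t)             # PGF of D_2

# normalisation of both PGFs
assert G2.subs(t, 1).equals(1) and Gt2.subs(t, 1).equals(1)

# E[K_2] = 2/(1-q) - 1/(1-q^2), i.e. Eq. (Knpar) for n = 2
assert sp.diff(G2, t).subs(t, 1).equals(2/(1 - q) - 1/(1 - q**2))

# E[D_2] = E|N_1 - N_2| = 2q/(1 - q^2)
assert sp.diff(Gt2, t).subs(t, 1).equals(2*q/(1 - q**2))
print("two-segment PGFs: normalisation and first moments verified")
\end{verbatim}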
\subsubsection{Three-segment repeater}\label{sssec:Par-distr: 3-segment repeater} For three segments there are various ways to distribute entanglement. One could use a fully sequential scheme, starting at one end and distributing entanglement in consecutive segments. Alternatively, one could consider schemes where a pair of segments generates entanglement in parallel and the remaining segment goes last or, the other way around, goes first. There are also combined distribution schemes with ``overlapping" parallel and sequential distributions. Finally, there are those schemes which attempt to generate entanglement in all segments at once and which then differ only in the swapping scheme employed. Among the latter, only the potentially optimal scheme is of interest here, as it minimizes the accumulated dephasing while having the same total waiting time as any other parallel distribution scheme. However, it could still be the case that a scheme from the other, slower class of schemes performs better in terms of the full secret key rate. This is possible because there is typically a trade-off between the raw rate and the dephasing or, more generally, the QBER. In particular, the fully sequential distribution scheme is interesting, since its total dephasing becomes minimal, as there is basically always only one segment waiting at every time step. On the other hand, for the fully parallel schemes the raw rate is optimal. In App.~\ref{app:Optimality 3 segments} we present all possible schemes for $n=3$ and calculate the PGFs of their total waiting and dephasing times. Then we use these results to obtain the secret key rate for each scheme and to compare the different schemes. We also show in the appendix that the PGF of the optimal dephasing random variable, equivalently defined by Eqs.~\eqref{eq:Dtilde} and \eqref{eq:Dstar}, reads as \begin{equation} \tilde{G}^\star_3(t) = \frac{p^3}{1-q^3} \frac{1 + (q+2q^2)t - (2q^2+q^3)t^3 - q^4 t^4}{(1-qt)(1-q^2t)(1-qt^2)}. \end{equation} It turns out that, with regard to the full secret key rate, the parallel-distribution optimal-dephasing scheme is indeed optimal in all relevant regimes and especially in the limit of improving hardware parameters, which can be seen in Fig.~\ref{fig:Comparison_3_segments_non tau=0.1} and Fig.~\ref{fig:Comparison_3_segments_non tau=10} for two different memory coherence times.
There one can also find a more detailed discussion of the figures. In addition, aiming at the most general treatment of the $n=3$ case, we also consider the scenario where Alice and Bob measure their qubits immediately, thus suppressing their memory dephasing, and we apply this to all possible schemes. The comparison of these ``immediate-measurement" schemes is shown in Fig.~\ref{fig:Comparison_3_segments_immediate tau=0.1} and Fig.~\ref{fig:Comparison_3_segments_immediate tau=10}, again for two different coherence times. The conclusion remains the same: overall, ``optimal" is optimal. However, note that the option of immediate measurements for Alice and Bob only exists when they operate the quantum repeater for the purpose of long-range QKD. More advanced quantum repeater applications may require quantum storage for the qubits at each end (user) node. In any case, the memory qubits at each intermediate repeater node are (jointly) measured as soon as possible when the two adjacent segments are filled with an entangled pair (or even later, depending on the particular swapping strategy, but in App.~\ref{app:Optimality 3 segments} we only consider swap-as-soon-as-possible schemes that minimize the dephasing). The above discussion leads us to the conclusion that there are three basic properties that a quantum repeater protocol (unassisted by additional quantum error detection or correction) should exhibit: distribute the entangled states in each segment in parallel, swap the initially distributed states as soon as possible, and avoid parallel storage of already distributed pairs as much as possible. It is obvious that these three ``rules" cannot all be fully obeyed at the same time. However, our optimal scheme strikes the optimal balance with regard to these rules for three segments. We conjecture that this also holds true for larger $n>3$-segment repeaters. \subsubsection{Four-segment repeater}\label{sssec:Par-distr: 4-segment repeater} Of particular interest to us is the case of a four-segment repeater, which is commonly operated via ``doubling". Here we are now able to discuss more general schemes, especially those that would always swap as soon as possible, unlike doubling, where the second and third segments may not be immediately connected even when they are both ready. Overall there are many more schemes than in the previous $n=3$ case, and here for $n=4$ we focus on the parallel-distribution schemes. All these schemes (without cut-off) have identical $K_4 = \max(N_1, N_2, N_3, N_4)$, whose PGF is given by Eq.~\eqref{eq:GKn} for $n=4$. The dephasing variable $D_4$ and its PGF, however, differ between the schemes. One such scheme, the common ``doubling", is illustrated in Fig.~\ref{fig:4segD}, where we first swap the pairs of segments 1, 2 and 3, 4 independently and then swap the two larger segments. Note that the swappings will typically take place at different moments in time: one pair of segments will usually swap earlier than the other. The state of the faster pair that goes into the final swapping operation is the state of these segments after their connection and at the moment when the final swapping is done, and so this state has been subject to a corresponding memory dephasing. For example, if the swapping of segments 1 and 2 is done first, the state distributed over segments 1 and 2 just after the swapping is $\hat{\varrho}_{14} = \mathcal{S}(\hat{\varrho}_{12} \otimes \hat{\varrho}_{34})$.
If $k$ time units later segments 3 and 4 swap, producing the state $\hat{\varrho}_{58} = \mathcal{S}(\hat{\varrho}_{56} \otimes \hat{\varrho}_{78})$, the former state becomes $\Gamma_{k\alpha}(\hat{\varrho}_{14})$, and the state distributed over the whole repeater is \begin{equation}\label{eq:Sk} \hat{\varrho}_{18} = \mathcal{S}(\Gamma_{k\alpha}(\mathcal{S}(\hat{\varrho}_{12} \otimes \hat{\varrho}_{34})) \otimes \mathcal{S}(\hat{\varrho}_{56} \otimes \hat{\varrho}_{78})), \end{equation} instead of just $\hat{\varrho}_{18} = \mathcal{S}(\mathcal{S}(\hat{\varrho}_{12} \otimes \hat{\varrho}_{34}) \otimes \mathcal{S}(\hat{\varrho}_{56} \otimes \hat{\varrho}_{78}))$. Again, as before, we omitted any extra factors that depend on the number of spins subject to dephasing in a single repeater segment. So, Fig.~\ref{fig:4segD} shows just a workflow of swapping operations, while the exact expressions should be adjusted according to the respective time differences. The dephasing variable $D_4$ in this doubling scheme is defined as follows: \begin{equation} \begin{split}\label{eq:doublingvariable} D^{\mathrm{dbl}}_4 &= |N_1 - N_2| + |N_3 - N_4| \\ &+ |\max(N_1, N_2) - \max(N_3, N_4)|. \end{split} \end{equation} The first two terms are due to the possible time difference for generating entangled states within each pair of segments. The last term is due to the time difference between the pairs (e.g. the difference of the two maxima is $k$ time steps in Eq.~\eqref{eq:Sk}). Note that this particular form of $D^{\mathrm{dbl}}_4$ is consistent with the commonly used ``doubling", where the initial distributions happen in parallel, but the swapping strategy is fixed and sometimes does not allow swapping as soon as possible. In Appendix~\ref{app:PGF Parallel schemes}, we derive the PGF of this random dephasing variable, \begin{equation} \tilde{G}^{\mathrm{dbl}}_4(t) = \frac{p^4}{1-q^4} \frac{P^{\mathrm{dbl}}_4(q, t)}{Q^{\mathrm{dbl}}_4(q, t)}, \end{equation} where the numerator and denominator are given by \begin{displaymath} \begin{split} P^{\mathrm{dbl}}_4(q, t) &= 1 + (q^2+3q^3)t + (3q+3q^2-q^5)t^2 \\ &- (q^3-q^5)t^3 + (q^3-3q^6-3q^7)t^4 \\ &- (3q^5+q^6)t^5 - q^8t^6, \\ Q^{\mathrm{dbl}}_4(q, t) &= (1-q^2t)(1-q^3t)(1-qt^2)(1-q^2t^2). \end{split} \end{displaymath} The dephasing variable corresponding to the iterated scheme as shown in Fig.~\ref{fig:4segI} differs from that of the doubling scheme. In the iterative scheme we first distribute entanglement over segments 1 and 2, then extend it over segment 3, and finally over segment 4. Note that the figure can be understood to illustrate both sequential distribution and iterated swapping. In the sequential distribution scheme, we would start to generate entanglement in each segment only when all previous segments (e.g. from left to right) have successfully generated entanglement. In the iterated swapping scheme, all segments may start simultaneously (parallel distribution), thus increasing the chances to swap sooner, but also the number of qubits potentially stored in parallel. The variable $D^{\mathrm{itr}}_4$ for this scheme is \begin{displaymath} \begin{split} D^{\mathrm{itr}}_4(N_1, N_2, N_3, N_4) &= |N_1 - N_2| + |\max(N_1, N_2) - N_3| \\ &+ |\max(N_1, N_2, N_3) - N_4|.
\end{split} \end{displaymath} The PGF of this random variable is rather large and reads as \begin{equation} \tilde{G}^{\mathrm{itr}}_4(t) = \frac{p^4}{1-q^4} \frac{P^{\mathrm{itr}}_4(q, t)}{Q^{\mathrm{itr}}_4(q, t)}, \end{equation} where the numerator and denominator are given by \begin{displaymath} \begin{split} P^{\mathrm{itr}}_4(q, t) &= 1+3q^3t+(4q^2-q^4-2q^5)t^2 \\ &+(q-q^2-3q^3-6q^4+2q^5+q^6)t^3\\ &+(-2q^2-5q^3+q^4+2q^5-q^6-3q^7)t^4\\ &+(-2q^2+4q^4-4q^6+2q^8)t^5\\ &+(3q^3+q^4-2q^5-q^6+5q^7+2q^8)t^6 \\ &+(-q^4-2q^5+6q^6+3q^7+q^8-q^9)t^7\\ &+(2q^5+q^6-4q^8)t^8-3q^7t^9-q^{10}t^{10}, \\ Q^{\mathrm{itr}}_4(q, t) &= (1-qt)(1-q^2t)(1-q^3t)(1-qt^2)\\ &\times (1-q^2t^2)(1-qt^3). \end{split} \end{displaymath} We present an example for another, mixed swapping strategy in App.~\ref{app:mixedstr}. For the dephasing random variable $D^\star_4$, corresponding to the optimal swapping scheme given by Eq.~\eqref{eq:Dstar} for $n=4$, we derive the following PGF: \begin{equation} \tilde{G}^\star_4(t) = \frac{p^4}{1-q^4} \frac{P^\star_4(q, t)}{Q^\star_4(q, t)}, \end{equation} where the numerator and denominator read as \begin{displaymath} \begin{split} P^\star_4(q, &t) = 1 + (q+2q^2+3q^3)t + (q+2q^2+q^4)t^2 \\ &-(3q^2+4q^3+4q^4)t^3 - (4q^5+4q^6+3q^7)t^4 \\ &+ (q^5+2q^7+q^8)t^5 + (3q^6+2q^7+q^8)t^6 + q^9t^7, \\ Q^\star_4(q, &t) = (1-qt)(1-q^2t)(1-q^3t)(1-qt^2)(1-q^2t^2). \end{split} \end{displaymath}
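The dephasing variables of the three schemes are also easy to sample directly. The Python sketch below is illustrative only (the names and the parameter values $p=0.1$, $\alpha=0.05$ are ours); it estimates $\mathbf{E}[D]$ and $\mathbf{E}[e^{-\alpha D}]$ for the doubling, iterative and optimal schemes, so that the sample means can be compared with the derivatives of the PGFs quoted above, and it also checks the sampled $\mathbf{E}[K_4]$ against Eq.~\eqref{eq:Knpar}.
\begin{verbatim}
import math, random

def geometric(p):
    """One waiting time N >= 1 with P(N = k) = (1-p)^(k-1) * p."""
    return 1 + int(math.log(1.0 - random.random()) / math.log(1.0 - p))

def D_dbl(N1, N2, N3, N4):   # doubling, Eq. (doublingvariable)
    return abs(N1 - N2) + abs(N3 - N4) + abs(max(N1, N2) - max(N3, N4))

def D_itr(N1, N2, N3, N4):   # iterative swapping
    return (abs(N1 - N2) + abs(max(N1, N2) - N3)
            + abs(max(N1, N2, N3) - N4))

def D_opt(*N):               # optimal scheme, Eq. (Dstar)
    if len(N) == 2:
        return abs(N[0] - N[1])
    return min(abs(N[i] - N[i + 1])
               + D_opt(*N[:i], max(N[i], N[i + 1]), *N[i + 2:])
               for i in range(len(N) - 1))

p, alpha, runs = 0.1, 0.05, 200000
q = 1 - p
samples = [[geometric(p) for _ in range(4)] for _ in range(runs)]

# raw waiting time: Monte Carlo versus the closed form of Eq. (Knpar)
mc_K4 = sum(max(s) for s in samples) / runs
exact_K4 = sum((-1)**(i + 1) * math.comb(4, i) / (1 - q**i) for i in range(1, 5))
print("E[K_4]:", mc_K4, "vs", exact_K4)

for name, D in (("dbl", D_dbl), ("itr", D_itr), ("opt", D_opt)):
    vals = [D(*s) for s in samples]
    print(name,
          sum(vals) / runs,                                 # estimate of E[D]
          sum(math.exp(-alpha * v) for v in vals) / runs)   # estimate of E[exp(-alpha*D)]
\end{verbatim}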
\subsubsection{Eight-segment repeater}\label{sssec:Par-distr: 8-segment repeater} As before, all parallel-distribution schemes (without cut-off) have identical total waiting times, $K_8 = \max(N_1, \ldots, N_8)$, whose PGF is given by Eq.~\eqref{eq:GKn} for $n=8$. For the dephasing variable there are many more possibilities now. We shall consider and compare five different schemes -- the doubling and the optimal schemes, and three less important schemes, which nevertheless exhibit interesting behavior. The somewhat less important ones are described and discussed in App.~\ref{app:mixedstr}. The optimal dephasing $D^\star_8$ is defined equivalently by Eqs.~\eqref{eq:Dtilde}-\eqref{eq:Dstar} for $n=8$ and the doubling dephasing $D^{\mathrm{dbl}}_8$ is defined recursively as \begin{equation} \begin{split} D^{\mathrm{dbl}}_8&(N_1, \ldots, N_8) = D^{\mathrm{dbl}}_4(N_1, \ldots, N_4) \\ &+ D^{\mathrm{dbl}}_4(N_5, \ldots, N_8) \\ &+ |\max(N_1, \ldots, N_4) - \max(N_5, \ldots, N_8)|, \end{split} \end{equation} with $D^{\mathrm{dbl}}_4$ defined as in Eq.~\eqref{eq:doublingvariable}. The comparison of the five different schemes can be found in App.~\ref{app:mixedstr}. There we present some figures showing the ratios between the average dephasing of the four sub-optimal schemes and the optimal scheme, with and without exponentiation. We can then compare the relative positions of the curves in Fig.~\ref{fig:Ee} with those of the curves of the ratios \begin{equation}\label{eq:r3} \frac{\mathbf{E}[D^{\mathrm{sch}}_8]}{\mathbf{E}[D^{\mathrm{opt}}_8]} = \frac{\tilde{G}^{\mathrm{sch}\prime}_8(1)}{\tilde{G}^{\mathrm{opt}\prime}_8(1)}, \end{equation} which are shown in Fig.~\ref{fig:Ea}. Looking at the two figures, we see that \begin{equation} \mathbf{E}[D^{\mathrm{dbl}}_8] > \mathbf{E}[D^{44}_8], \quad \mathbf{E}[e^{-\alpha D^{\mathrm{dbl}}_8}] < \mathbf{E}[e^{-\alpha D^{44}_8}]. \end{equation} This behavior is in full agreement with the properties of the exponential function: if $x > y \geqslant 0$ and $\alpha > 0$, then $e^{-\alpha x} < e^{-\alpha y}$. But for the other pair of schemes we have \begin{equation}\label{eq:DD} \mathbf{E}[D^{242}_8] > \mathbf{E}[D^{2222}_8], \quad \mathbf{E}[e^{-\alpha D^{242}_8}] > \mathbf{E}[e^{-\alpha D^{2222}_8}]. \end{equation} Nonetheless, there is no contradiction here. This is a known property of nonlinear functions of random variables, and it can be observed even in the simplest case of random variables $X$ and $Y$ each taking two values only: one can easily construct an example such that $\mathbf{E}[X] > \mathbf{E}[Y]$ and $\mathbf{E}[e^{-\alpha X}] > \mathbf{E}[e^{-\alpha Y}]$. However, the inequalities \eqref{eq:DD} show that it is not necessary to consider artificial constructions; this property can be observed for simple and natural schemes. The important conclusion is that the optimal scheme by construction minimizes $\mathbf{E}[D]$, but to have the highest fidelity of the distributed state we need to maximize $\mathbf{E}[e^{-\alpha D}]$. For an ordinary nonnegative function $f(x)$ and a positive parameter $\alpha > 0$, the argument minimizing $f(x)$ also maximizes $e^{-\alpha f(x)}$ and vice versa, but for expectations of random variables this correspondence does not necessarily hold. Strictly speaking, in general, we know only the scheme that minimizes $\mathbf{E}[D]$, but not the scheme that maximizes $\mathbf{E}[e^{-\alpha D}]$. The two schemes appear to be identical, but there is no strict proof of this statement.
We have to rely on evidence based on computing the properties of some schemes explicitly and comparing them. For the examples for $n=8$ given in this section and in the appendix, we see that dividing the average exponentiated dephasing of all other schemes by that of the optimal scheme gives a number smaller than one, whereas the same ratios without exponentiation give a number greater than one. Thus, minimal dephasing corresponds to minimal dephasing errors, and the optimal dephasing scheme exhibits the smallest fraction of dephasing errors. To summarize, our optimization of the secret key rates obtainable with different distribution and swapping strategies is based on three steps. First, we can rely upon the proof of the minimal dephasing variable for up to $n=8$ segments given in Sec.~\ref{sec:optimalswapscheme} assuming parallel initial distributions (it is already non-trivial to extend this proof to larger $n>8$). Second, in order to compare the average dephasing errors in the final density operators, we need to consider the average dephasing exponentials for the different schemes. Finally, in order to assess the optimality of the secret key rate over all possible schemes, we also have to take into account those schemes where the initial distributions no longer occur in parallel, which generally leads to smaller raw rates, but at the same time can result in a smaller dephasing by (partially) avoiding parallel storage. For the first non-trivial case beyond a single middle station, we have explicitly gone through all these three steps, namely for the case of a three-segment repeater with two intermediate stations (App.~\ref{app:Optimality 3 segments}), and found that ``optimal" is optimal. For larger repeaters beyond eight segments, $n>8$, we conjecture that our ``optimal" scheme also gives the best secret key rate. This includes conjecturing that our minimized dephasing is minimal also for $n>8$, that it minimizes the dephasing errors in the final density operator, and that overall the dephasing-optimized parallel-distribution approach is superior to any partially or fully sequential distribution scheme. The last point, especially, cannot be taken for granted. In App.~\ref{app:8segmentsimmediate} we present some rate calculations for $n=8$ where, beyond a certain distance, ``optimal" can be beaten by a sequential scheme. However, there we allow for immediate measurements at an end node only for the sequential scheme (for which this is easy to include), but not for ``optimal"; a comparison which is slightly unfair and also only relevant for QKD applications. In the case of non-immediate-measurement schemes including potential beyond-QKD applications, ``optimal" remains optimal.
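Returning to the point about nonlinear functions of random variables made around Eq.~\eqref{eq:DD}, here is the kind of two-valued toy example mentioned there (an illustration only, unrelated to any particular repeater scheme): with $X\in\{0,20\}$ taken with probabilities $\{0.9,0.1\}$, $Y\in\{1,2\}$ with probabilities $\{0.5,0.5\}$, and $\alpha=1$, one has $\mathbf{E}[X]>\mathbf{E}[Y]$ and at the same time $\mathbf{E}[e^{-\alpha X}]>\mathbf{E}[e^{-\alpha Y}]$.
\begin{verbatim}
import math

alpha = 1.0
X = {0: 0.9, 20: 0.1}          # value: probability
Y = {1: 0.5, 2: 0.5}

mean    = lambda rv: sum(v * pr for v, pr in rv.items())
exp_dec = lambda rv: sum(math.exp(-alpha * v) * pr for v, pr in rv.items())

print(mean(X), mean(Y))        # 2.0  > 1.5
print(exp_dec(X), exp_dec(Y))  # 0.90 > 0.25: larger mean AND larger E[e^{-aD}]
\end{verbatim}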
\section{Secret key rate analysis}\label{sec:Secret Key Rate} A useful and practically relevant figure of merit for quantifying a quantum repeater's performance is its secret key rate in long-range QKD, which determines the amount of secret key generated in bits per channel use or per second. As briefly reviewed in Sec.~\ref{sec:skr}, the secret key rate consists of two parts: the raw rate or yield and the secret key fraction. The former quantifies how long it takes to send a raw quantum bit or to (effectively) generate entanglement, independent of the quality of the final state; the latter then determines the average amount of secret key that can be extracted from a single raw bit, depending on the particular QKD protocol chosen and including the corresponding procedures for the classical post-processing. Here we will focus on the asymptotic BB84 secret key rate $S=Rr=r/T$ with one-way post-processing. In the most general scenario of long-range memory-assisted QKD, i.e. including a finite swapping probability $a$ and a memory cut-off parameter $m$, this secret key rate is given by \begin{equation}\label{eq:secret key rate} S(p,a,m)=\frac{1-h(\overline{e_x}(p,a,m))-h(\overline{e_z}(p,a,m))}{T(p,a,m)}, \end{equation} where \(h\) is the binary entropy function, \(T\) is the average number of steps needed to successfully distribute long-distance entanglement, and \(e_x\), \(e_z\) are the QBERs of Eq.~\eqref{eq:QBER}. The probability of successful entanglement generation in a single attempt in a single elementary segment is $p$, as introduced in Sec.~\ref{sec:rawrate}. The denominator of $S$, $T = \mathbf{E}[K]$, is basically the total raw waiting time of the repeater, which generally depends on $p$ and $a$, where $a$ is the (finite) success probability of the entanglement swapping; we use the same notation as in Refs.~\cite{PvL,Shchukin2021} (where it was shown how to compute \cite{PvL} and optimize \cite{Shchukin2021} $T=\mathbf{E}[K]$ for arbitrary $a$). The dependence on the cut-off parameter $m$ is as follows: the smaller $m$ becomes, the longer it takes to distribute an entangled state. The numerator of $S$, $r$, generally also depends on $p$, $a$, and $m$ through the QBERs. Recall that we have to take the averages here: $\overline{e_z} = e_z$, while $\overline{e_x}$ is obtainable via $\mathbf{E}[e^{-\alpha D_n}]$. A smaller $m$ can lead to a higher state quality with a smaller total dephasing and thus to a larger secret key fraction $r$. It is generally hard to optimize $S$ over general $p$, $a$, and $m$. Our approach here is based on the simplifying (and experimentally still relevant) assumption $a=1$ (deterministic entanglement swapping) and the idea that the highest secret key rates will be obtainable with the fastest schemes (parallel distributions minimizing the total waiting time) and, among these, with those that swap entanglement as soon as possible (minimizing the total dephasing time, see Sec.~\ref{sec:optimalswapscheme}). While for a two-segment repeater the cases of deterministic and non-deterministic swapping can be treated similarly, for repeater chains with more than a single middle station ($n>2$) our results for optimizing distribution and swapping strategies only hold for the deterministic swapping case. Using the results of all previous sections, the secret key rate can then be calculated. Hence, in what follows we always set $a=1$. The above secret key rate $S$ is expressed in terms of bits per channel use.
For a rate per second, the average total number of distribution attempts $T$ must be multiplied by the duration of a single attempt in seconds, i.e. the elementary time unit $\tau = L_0/c_f$. Note that a single attempt or channel use is uniquely defined only for direct channel transmission in a point-to-point link, whereas the channel in a quantum repeater is used directly only between neighboring memory stations. Since our model always assumes that the interfaces at each station connect a single channel (to the left or to the right) with a single memory qubit (unit memory ``buffer"), those channel segments that belong to already successfully distributed pairs remain unused until new attempts in these segments are started (e.g. when the memory cut-off has been exceeded or when a long-distance pair has been finally created). Nonetheless, at every attempt, we shall always count a full channel use over the entire distance despite the growing number of unused channel segments during memory-assisted long-distance entanglement distribution. Thus, strictly speaking, we underestimate the secret key rate per channel use, and one could continue distributing pairs in all channel segments provided sufficient memory qubits are available. The parameter values as given in Tab.~\ref{tab:constants} have been used to obtain the quantitative results discussed in this section. Most parameters there have been introduced in the previous sections in the context of our physical model. The resulting probability to distribute entanglement over one link in terms of the parameters of Tab.~\ref{tab:constants} now includes a zero-distance link-coupling efficiency \begin{equation} p(L_0)=p_{\mathrm{link}}\cdot e^{-\frac{L_0}{L_{\mathrm{att}}}}, \end{equation} with $p(0) = p_{\mathrm{link}}$ and where $p_{\mathrm{link}} = \eta_\mathrm{c} \cdot \eta_\mathrm{d} \cdot \eta_\mathrm{p}$ incorporates various efficiencies of the experimental hardware independent of the channel transmission itself, especially wavelength conversion, fiber coupling, preparation, and detector efficiencies. \begin{table*} \begin{tabular}{c|c|c|c} Constant & Meaning & Current value & Improved value \\ \hline \hline $a$ & swapping probability & $1$ & $1$\\ $\tau_{\mathrm{coh}}$ & coherence time & $\unit[0.1]{s}$ & $\unit[10]{s}$ \\ $\mu$ & gate depolarisation (Bell measurement) & $0.97$ & $1$ \\ $\mu_0$ & initial state depolarisation & $0.97$ & $1$ \\ $F_0$ & initial state fidelity (dephasing) & $1$ & $1$ \\ $L_{\mathrm{att}}$ & attenuation length & $\unit[22]{km}$ & $\unit[22]{km}$ \\ $n_\mathrm{r}$ & index of refraction & $1.44$ & $1.44$ \\ $\eta_\mathrm{p}$ & preparation efficiency & * & * \\ $\eta_\mathrm{c}$ & \begin{tabular}{@{}c@{}} photon-fibre coupling efficiency $\times$\\ wavelength conversion\\ \end{tabular} & * & *\\ $\eta_\mathrm{d}$ & detector efficiency & * & * \\ \hline $p_{\mathrm{link}}:=\eta_\mathrm{c} \cdot \eta_\mathrm{d} \cdot \eta_\mathrm{p}$ & total efficiency & $0.05$ & $0.7$ \end{tabular} \caption{Experimental parameter values used to calculate secret key rates. The star symbols * allow for various choices. The exact choices vary for each experimental platform.
Some of the ``improved values" are the ideal values, which allow us to consider idealized, fundamental scenarios such as ``channel-loss-only" or ``channel-loss-and-memory-dephasing-only" (for which we may also set $p_{\mathrm{link}}=1$).} \label{tab:constants} \end{table*} In the context of our statistical and physical model the memory coherence time \(\tau_{\mathrm{coh}}\) in Tab.~\ref{tab:constants}, an experimentally determined parameter that describes the average speed of the memory dephasing, can be converted into a (dimensionless) effective coherence time in units of the repeater's elementary time unit, $\tau_{\mathrm{coh}}/\tau$. Equivalently, we can say that the (number of) dephasing time (steps) $D_n$ is to be multiplied by an elementary time $\tau$ before it can be divided by $\tau_{\mathrm{coh}}$ in $\mathbf{E}[e^{-D_n \tau/\tau_{\mathrm{coh}}}]$. In any case, we absorb both $\tau$ and $\tau_{\mathrm{coh}}$ into our dimensionless dephasing parameter $\alpha$, \begin{equation} \alpha(L_0)=\frac{\tau}{\tau_{\mathrm{coh}}}=\frac{L_0}{c_f \tau_{\mathrm{coh}}}. \end{equation} Thus, $\alpha$ can be referred to as an inverse effective coherence time. Note that in order to count the dephasing times appropriately in a specific protocol, we may have to add an extra factor of 2 (depending on the number of spins dephasing at each time step in a certain elementary or extended segment) and a constant dephasing term $\sim 2n$ that takes into account memory dephasing that occurs even when the first distribution attempt in a segment succeeds. Any missing factors in the dephasing can be reinterpreted in terms of $\alpha$ or $\tau_{\mathrm{coh}}$, e.g. a missing factor of 2 corresponds to a coherence time twice as large. In Tab.~\ref{tab:constants}, two sets of current and improved parameter values are listed, which specifically refer to $\tau_{\mathrm{coh}}$ and $p_{\mathrm{link}}$, for which we choose $\unit[0.1]{s}$ or $\unit[10]{s}$ and $0.05$ or $0.7$, respectively. The other state and gate fidelity parameters will either be set to unity or close to but below one (in some of the following plots we will also treat them as a free parameter). We will see that in memory-assisted QKD without additional quantum error detection or correction, the fidelity parameters must always be above a certain threshold value which (obviously) grows with the number of stations (and which generally depends on the particular QKD protocol and the classical post-processing method). To compare the performance of each repeater protocol with a direct point-to-point link over the total distance $L$, we will use the PLOB bound \cite{PLOB}, which is given by \begin{equation} S^{\mathrm{PLOB}}(L)=-\log_2(1-e^{-\frac{L}{L_{\mathrm{att}}}}). \end{equation} It represents an upper bound on the number of secret bits that can be shared per channel use. For example, for $e^{-\frac{L}{L_{\mathrm{att}}}}=1/2$, corresponding to $L\approx\unit[15]{km}$, we have $S^{\mathrm{PLOB}} = 1$, and so at most one secret bit can be distributed per channel use (per mode) independent of the optical encoding. It will also be useful to consider an upper bound on the number of secret bits that can be shared with the help of a quantum repeater \cite{PLOB_QR}, \begin{equation} S^{\mathrm{PLOB,QR}}(L_0)=-\log_2(1-e^{-\frac{L_0}{L_{\mathrm{att}}}}), \end{equation} corresponding to the PLOB bound for one segment (in the case of equal segment lengths $L_0$). For a point-to-point link, $n=1$ with $L=L_0$, we thus use the notation $S^{\mathrm{PLOB}}=S^{\mathrm{PLOB,QR}}$.
The rates we will focus on first in the following are to be understood as secret key rates per channel use. Later we shall also discuss secret key rates per second.
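For reference, the ingredients of the rate analysis can be collected in a few helper functions. The Python sketch below is illustrative only: the averaged QBERs $\overline{e_x}$, $\overline{e_z}$ of Eq.~\eqref{eq:QBER} are computed elsewhere in our formalism and enter here as plain inputs, clipping a negative secret key fraction to zero is our convention for ``no key'', and the example numbers in the last lines are arbitrary.
\begin{verbatim}
import math

L_ATT = 22.0                 # attenuation length [km]
C_F   = 3.0e5 / 1.44         # speed of light in fibre [km/s], n_r = 1.44

def h(x):
    """Binary entropy h(x)."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def p_segment(L0, p_link):
    """Per-attempt success probability in one segment, p(L_0)."""
    return p_link * math.exp(-L0 / L_ATT)

def alpha(L0, tau_coh):
    """Dimensionless dephasing parameter alpha = tau/tau_coh = L_0/(c_f tau_coh)."""
    return L0 / (C_F * tau_coh)

def plob(L):
    """PLOB bound, secret bits per channel use over distance L."""
    return -math.log2(1.0 - math.exp(-L / L_ATT))

def bb84_key_rate(ex_bar, ez_bar, T):
    """Asymptotic BB84 rate per channel use: (1 - h(ex) - h(ez)) / T,
    clipped at zero; T = E[K] is the average total waiting time."""
    return max(0.0, 1.0 - h(ex_bar) - h(ez_bar)) / T

# example values: per-second rate = per-channel-use rate divided by tau = L0/c_f
L0, p_link, tau_coh = 100.0, 0.7, 10.0
tau = L0 / C_F
print(p_segment(L0, p_link), alpha(L0, tau_coh), plob(2 * L0),
      bb84_key_rate(0.02, 0.02, 50.0) / tau)
\end{verbatim}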
\subsection{Two-segment repeater}\label{sec:Two-Segment Repeater} Let us start with the rates for the simplest case: a two-segment quantum repeater with one middle station. We shall only consider one scheme, the ``optimal" scheme, with and without a memory cut-off. First, we address the question of whether and when it is possible to overcome the PLOB bound with a two-segment repeater given the (current and improved) parameter values from Tab.~\ref{tab:constants}. We stick to \(F_0=1\) and, for illustrative clarity, we set \(\mu=\mu_0\) (at first, $\mu$ itself is treated as a free parameter). Physically, this means that the repeater states, when initially distributed in each segment and then manipulated at the middle station for the Bell measurement, are subject to the same depolarizing error channels (and there is no extra initial dephasing). The cut-off parameter \(m\) is chosen such that the final secret key rate is close to optimal over the entire distance range. In Fig.~\ref{fig:Contour_2_segments} one can see various contour plots of the secret key rate. For convenience, we translated the error parameter \(\mu\) into a fidelity, $F = (3\mu + 1)/4$. The plots clearly indicate the minimal fidelity values below which the rates drop below the PLOB bound or even to zero, for different total repeater distances \(L\). The resulting contours are color-coded such that a particular color represents, for example, a secret key rate equal to twice the rate of the PLOB bound. Thus, one can see that in certain parameter regimes it becomes impossible to beat the PLOB bound with a two-segment repeater. However, if both the memory coherence time $\tau_{\mathrm{coh}}$ and the link efficiency \(p_{\mathrm{link}}\) take on their improved values, it is possible to reach secret key rates as high as \(500\)-times the rate of the PLOB bound, and beyond, in a certain distance regime. In Fig.~\ref{fig:SKR_2_segments}, we show the resulting secret key rates for the experimental parameters from Tab.~\ref{tab:constants}, for the schemes with and without a memory cut-off. This time the error parameter \(\mu=\mu_0\) is fixed, and it takes on either its ``current" or its ``improved" (ideal) value. For comparison, as a reference, we also included the raw rates in each case. The loss scaling of the rates in all schemes is, as expected, proportional to $p_{\mathrm{link}} \,e^{-\frac{L}{2 L_{\mathrm{att}}}}=p_{\mathrm{link}}\sqrt{e^{-\frac{L}{L_{\mathrm{att}}}}}$ (appearing as a linear decrease with distance in the logarithmic-scale representation). The effect of the different experimental parameter values is clearly visible. The choice of $p_{\mathrm{link}}=0.05$ or $p_{\mathrm{link}}=0.7$ determines the offset along the $y$-axis (rate axis) at zero distance. A higher $p_{\mathrm{link}}$ allows the PLOB bound to be crossed at a smaller distance. Note that the PLOB bound itself can arbitrarily exceed the value of one secret bit towards zero distance; in our schemes we always distribute qubits, and so one secret bit per channel use is the maximum (and depending on the number of modes used to encode the photonic qubits there could be extra factors, ``per mode"). The choice of $\tau_{\mathrm{coh}}=\unit[0.1]{s}$ or $\tau_{\mathrm{coh}}=\unit[10]{s}$ determines when (at which distance) the (negative) slope of the secret key rate increases such that the repeater switches from a $\sqrt{e^{-\frac{L}{L_{\mathrm{att}}}}}$ to a $e^{-\frac{L}{L_{\mathrm{att}}}}$ (PLOB-like) scaling, or even worse.
This is an effect of the memory dephasing that occurs even when \(\mu=\mu_0=1\). If, in addition, \(\mu=\mu_0=0.97<1\), the secret key rates can drop abruptly down to zero, since then the QBERs have nonzero contributions both in $e_z$ and $e_x$, see Eq.~\eqref{eq:QBER}. Note that this effect also happens when either of the two parameters, $\mu$ or $\mu_0$, drops below one, i.e. when either the gates or the initial states become imperfect. Also note that non-unit $\mu$ or $\mu_0$ in addition lead to an increased $y$-axis offset, which becomes more apparent for larger repeaters with larger $n$. However, a memory cut-off can significantly change the picture, and it can increase the achievable distance compared to the scheme without a cut-off (compare the solid yellow with the solid green curves in Fig.~\ref{fig:SKR_2_segments}). More specifically, beyond the distances where the rates of the no-cut-off scheme drop dramatically, the cut-off scheme still scales proportionally to the PLOB bound. Note that for the scheme with cut-off, even the raw rates (dashed green curves) can switch from an $L/2$ to an $L$ scaling (like PLOB), because a finite cut-off value ``simulates" an imperfect memory in the raw rate (whose loss scaling resembles the scaling without a quantum memory, i.e. that of the PLOB bound, in the limit of $m=1$) \cite{CollinsPrl}. Again, one can also see that with ``current" parameter values, see Fig.~\ref{fig:SKR_2_segments}(a), it is impossible to beat the PLOB bound (here even when \(\mu=\mu_0=1\), see Fig.~\ref{fig:SKR_2_segments}(b)), but with improving values for the coherence time and the link efficiency, it becomes possible. This holds even when only one of the two parameters, $p_{\mathrm{link}}$ or $\tau_{\mathrm{coh}}$, is improved, as long as we can cross PLOB at a sufficiently small distance or maintain the repeater's slope for sufficiently long, respectively. \begin{figure*} \caption{Contour plots illustrating the minimal fidelity requirements to overcome the PLOB bound by a two-segment repeater for different parameter sets. In all contour plots, \(\mu = \mu_0\) and \(F_0=1\) has been used.} \label{fig:Contour_2_segments} \end{figure*} \begin{figure*} \caption{Rates (secret key or raw) for a two-segment repeater over distance \(L\) for different experimental parameters.} \label{fig:SKR_2_segments} \end{figure*} In the next subsection we turn to a four-segment repeater (a three-segment repeater is discussed in great detail in App.~\ref{app:Optimality 3 segments}). \subsection{Four-segment repeater}\label{sec:Four-Segment Repeater} As we have seen in Sec.~\ref{sssec:Par-distr: 4-segment repeater}, there are various possible swapping strategies for a four-segment repeater, in contrast to a simple two-segment repeater. Our conjecture is (see also App.~\ref{app:Optimality 3 segments} for the case $n=3$) that the ``optimal" scheme is optimal in the regimes of increasingly good hardware parameters. Thus, let us first again focus on the minimal fidelities needed to overcome the PLOB bound for this scheme, similar to our analysis for two segments, but now only for the case without a cut-off. The results are shown in Fig.~\ref{fig:Contour_4_segments}. It becomes apparent that now a much higher fidelity, or equivalently \(\mu\), is needed, but in turn also much higher secret key rates, \(10^4\)-times the PLOB rate and beyond, are possible. Since we have $n=4$ now, non-unit $\mu$ values have a stronger impact on the QBERs, see Eq.~\eqref{eq:QBER}.
At the same time, however, the loss scaling becomes proportional to $p_{\mathrm{link}} \,e^{-\frac{L}{4 L_{\mathrm{att}}}}=p_{\mathrm{link}}\sqrt[4]{e^{-\frac{L}{L_{\mathrm{att}}}}}$. Furthermore, note that a different scaling of the contours is observable. This effect is due to the lack of a memory cut-off. \begin{figure*} \caption{Contour plots illustrating the minimal fidelity requirements to overcome the PLOB bound by a four-segment repeater for different parameter sets. In all contour plots, \(\mu = \mu_0\) and \(F_0=1\) has been used.} \label{fig:Contour_4_segments} \end{figure*} Next, we consider the secret key rates for a particular choice of the experimental parameters, including $\mu = \mu_0$, according to Tab.~\ref{tab:constants}. Besides the ``optimal" scheme, we now also include the sequential and the doubling schemes in the rate analysis (sequential/iterative swapping together with sequential distributions, and doubling with parallel distributions). In Fig.~\ref{fig:SKR_4_segments}, one can see the PLOB bound and the secret key rates for the sequential scheme with and without a cut-off, for the doubling scheme and for the optimal scheme (both without a cut-off). In addition, the raw rates are again shown as a reference; the corresponding three dashed curves are the raw rates for (equivalently) doubling and ``optimal", and for the sequential scheme with and without cut-off. Compared to the previous two-segment repeater, it is now easier to overcome the PLOB bound, but the crossing happens at longer distances, since the four-segment repeater starts with a lower rate at \(L=\unit[0]{km}\). \begin{figure*} \caption{Rates (secret key or raw) for a four-segment repeater over distance \(L\) for different experimental parameters.} \label{fig:SKR_4_segments} \end{figure*} \subsection{Eight-segment repeater}\label{sec:Eight-Segment Repeater} In comparison with the usual treatment of quantum repeaters via doubling the links at each repeater level, the next logical step is to consider an eight-segment repeater. For eight segments, there is an increasing number of possible distribution and swapping strategies; for the swapping, this has been discussed in more detail in Sec.~\ref{sssec:Par-distr: 8-segment repeater}. Here we will only consider the sequential, the doubling, and the optimal schemes (the first with sequential distributions, the latter two with parallel distributions). Again, in Fig.~\ref{fig:Contour_8_segments}, we present limitations on the error parameter \(\mu\) to overcome the PLOB rate at different distances. The regions are color-coded as before. Compared to the limits observed for a two-segment repeater, they exhibit a different behaviour now, but this is again due to the fact that we do not consider a cut-off scheme here. The requirements for the fidelity or \(\mu\) are higher, but this was expected, since the secret key fraction includes terms \(\propto \mu^{2n-1} \), again setting \(\mu_0=\mu\). Nevertheless, for sufficiently high fidelities, the attainable secret key rates are much higher than for any of the previously considered repeater schemes, becoming as high as \(10^8\)-times the rate of the PLOB bound, and beyond. Finally, we have also evaluated the performance of an eight-segment repeater for our experimental parameter set. Now caution is required when these plots are compared directly with the previous ones, as we had to improve the ``current", non-unit value of \(\mu\) to \(\mu=0.99\).
Without this fidelity adjustment, it would be impossible to achieve a non-zero secret key rate for an eight-segment repeater (see next section). The $\mu$-scaling with $n$ in the QBERs prevents a realistic quantum repeater from being scaled up to arbitrarily large distances and $n$ values, as long as no extra elements for quantum error detection or correction are included. For example, in a 2nd-generation quantum repeater, the effective $\mu_0$ and $\mu$ values could be kept close to one, at the expense of extra resources for quantum error correction and a typically decreasing initial distribution efficiency $p$ (for instance, due to an extra step of entanglement distillation for the distributed, encoded memory qubits). In principle, our formalism could also be applied to such a more sophisticated scenario by considering the effective changes of $\mu$, $\mu_0$, and $p$ (and possibly $\alpha$ too). Nevertheless, our plots presented in Fig.~\ref{fig:SKR_8_segments} show that an eight-segment quantum repeater in a memory-assisted QKD scheme is, in principle, already able to cover large distances by reaching usable rates up to \(\unit[1000]{km}\) or even \(\unit[1200]{km}\), provided that \(\mu=0.99\) or \(\mu\rightarrow 1\), respectively. Apart from this, the behaviour of an eight-segment repeater is very similar to that of the previous four-segment repeater.
\subsection{Minimal $\mu$ values} We have already seen that the secret key rate of memory-assisted QKD is highly sensitive to the depolarizing errors that we use to model the imperfect gates and the imperfect initial states in the quantum repeater. Here let us explicitly give some minimal values for the error parameter $\mu$ which must at least be achieved in order to obtain a non-zero secret key fraction for QKD protocols restricted to one-way post-processing (see Tab.~\ref{tab_minimalmu}). More generally, in principle, much higher error rates can be tolerated by allowing for two-way post-processing in the QKD protocols \cite{twowayqkd}. However, in this work, we primarily utilize the secret key rate as a practical and useful quantitative figure of merit to assess a quantum repeater's performance. Nonetheless, the quantum repeater schemes that we consider may also be employed for other, more general quantum information and communication tasks. Thus, we decided not to include schemes with two-way post-processing, as this would certainly lead to a narrower specialization towards QKD applications.
Clearly, in the context of long-range QKD, we believe that considering schemes with two-way post-processing will be very valuable, since potential, future large-scale quantum repeaters will be rather noisy and therefore protocols which still work for large error rates are very useful. Such a further optimization of our schemes with a special focus on long-range QKD is possible, and we leave this option for future work. It is easy to check that the concatenation of two depolarizing channels with parameters $\mu_1$ and $\mu_2$ is equivalent to a single depolarizing channel with parameter $\mu_1\mu_2$. Thus, for an $n$-segment repeater, we would expect a total depolarizing channel with parameter $\mu_n=\mu_0^n\mu^{n-1}$. We have carefully and systematically checked and confirmed this in the first part of the paper, including other parameters too, such as constant initial and time-dependent memory dephasing. For the BB84 and the six-state protocols, the amount of tolerable noise, such that a secret key can still be obtained with one-way post-processing, has been extensively studied. For BB84 the error threshold lies at $Q=11.0\%$ and for the six-state protocol it is $Q=12.6\%$ \cite[App. A]{RevModPhys.81.1301}. Since a maximally mixed state results in an error rate of $50\%$, this gives us the constraint $\mu_n\geq1-2Q$ on the minimal values. More specifically, the BB84 secret key fraction of Eq.~\eqref{eq:skf} on which we focus here vanishes when the two QBERs both exceed $Q=11\%$. This is the case for $\mu_n < 1-2Q$ even when all other elements are perfect, i.e. even when there is no memory dephasing at all ($\alpha \rightarrow 0$). In this case, the two QBERs as described by Eq.~\eqref{eq:QBER} coincide (also assuming no initial dephasing, $F_0=1$) and neither includes a random variable. These two constant QBERs then express the sole faultiness of the repeater elements without any time-dependent quantum storage (i.e., only the initial states and the gates), which can suffice to prevent Alice and Bob from finally sharing a non-zero secret key. \begin{table}[] \begin{tabular}{l|l|l|l|l} $n$ & \begin{tabular}{@{}c@{}}\quad $\mu_0=1$, \quad \\\quad BB84 \quad \end{tabular} & \begin{tabular}{@{}c@{}} \quad $\mu_0=\mu$, \quad \\ \quad BB84 \quad \end{tabular} & \begin{tabular}{@{}c@{}} \quad $\mu_0=1$, \quad \\ \quad 6-state \quad \end{tabular} & \begin{tabular}{@{}c@{}} \quad $\mu_0=\mu$, \quad \\ \quad 6-state \quad \end{tabular} \\\hline 2 & 0.780 & 0.920 & 0.748 & 0.908 \\\hline 4 & 0.920 & 0.965 & 0.908 & 0.959 \\\hline 8 & 0.965 & 0.984 & 0.959 & 0.981 \end{tabular} \caption{Minimal values of $\mu$ required for a non-zero secret key rate in one-way post-processing protocols.} \label{tab_minimalmu} \end{table} \begin{figure*} \caption{Contour plots illustrating the minimal fidelity requirements to overcome the PLOB bound by an eight-segment repeater for different parameter sets. In all contour plots, \(\mu = \mu_0\) and \(F_0=1\) has been used.} \label{fig:Contour_8_segments} \end{figure*} \begin{figure*} \caption{Rates (secret key/raw) for an eight-segment repeater over distance \(L\) for different experimental parameters.} \label{fig:SKR_8_segments} \end{figure*}
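The entries of Tab.~\ref{tab_minimalmu} follow directly from $\mu_n=\mu_0^n\mu^{n-1}\geq 1-2Q$: for $\mu_0=1$ one needs $\mu\geq(1-2Q)^{1/(n-1)}$, and for $\mu_0=\mu$ one needs $\mu\geq(1-2Q)^{1/(2n-1)}$. The following few lines of Python (illustrative only) reproduce the table.
\begin{verbatim}
Q = {"BB84": 0.110, "6-state": 0.126}   # one-way post-processing QBER thresholds

print(" n  protocol   mu_0=1   mu_0=mu")
for n in (2, 4, 8):
    for proto, q_thr in Q.items():
        target = 1 - 2 * q_thr                  # required mu_n
        mu_ideal = target ** (1 / (n - 1))      # mu_0 = 1:  mu^(n-1)  >= target
        mu_equal = target ** (1 / (2 * n - 1))  # mu_0 = mu: mu^(2n-1) >= target
        print(f"{n:2d}  {proto:8s}   {mu_ideal:.3f}    {mu_equal:.3f}")
\end{verbatim}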
\subsection{Comparisons}
\subsubsection{Sequential vs. doubling vs. optimal schemes}\label{sec:Comparison: Sequential vs. Doubling vs. Optimal scheme}
In the previous sections (together with the appendix) we have presented our results for the obtainable secret key rates of two-, three-, four- and eight-segment quantum repeaters based on various entanglement distribution and swapping strategies. While it is generally straightforward to include a memory cut-off for the case of two segments, for more than two segments, we have achieved this only for the fully sequential scheme. This was depicted in green in the (non-contour) plots for four and eight segments. The memory cut-off makes it possible to maintain a scaling proportional to the PLOB bound even beyond the distance where the scheme without cut-off drops more quickly. As a consequence, the cut-off can significantly increase the achievable distance. However, it is hard to obtain an exact result for the secret key rate for the more complicated swapping strategies. Nonetheless, for larger distances, one could extrapolate the behaviour of the doubling and optimal schemes including a cut-off by simply continuing the curves with lines parallel to the PLOB bound after the drops. Alternatively, inferring from our plots, at larger distances one can rely on a continuation of the curves that behaves exactly like the sequential scheme with memory cut-off. Both approaches give us a fairly good picture of the behaviour of the doubling and optimal schemes including the cut-off. Among the schemes without a cut-off, the optimal scheme outperforms all others before each of them drops completely. The doubling scheme achieves almost the same rates, although it starts to decline earlier. The secret key rates are similar thanks to the equivalent, high raw rates of the doubling and optimal schemes (both being based upon parallel entanglement distributions), and due to our general assumption of deterministic entanglement swapping with $a=1$ \cite{Shchukin2021} \footnote{For $a<1$, regimes exist where in terms of the raw rates ``doubling'' performs strictly worse than ``swap as soon as possible'' \cite{Shchukin2021}, similar to regimes here for the full secret key rates with $a=1$ when the dephasing becomes dominant.}. Thus, for the doubling scheme one could additionally incorporate nested entanglement distillations in the usual, well-known way, which would make it possible to reduce the QBERs at the expense of the effective raw rates and with the need for extra physical resources. While the differences between the doubling and optimal schemes may not be so large for the repeater sizes mainly considered here ($n\leq 8$), our exact statistical treatment enabled us to determine the optimal swapping scheme (optimizing the dephasing) and thus allows for a rigorous, quantitative comparison with the non-optimal doubling and possible other (including ``mixed'') schemes. The fully sequential scheme, based on sequential entanglement distributions, leads to the lowest raw rate. The longer total waiting times of this scheme also contribute to an increased accumulated dephasing. On the other hand, the dephasing of the fully sequential scheme remains limited, as only one segment is waiting at any time step. Thus, although theoretically the sequential scheme is the easiest to calculate, experimentally it would typically result in the lowest secret key rate. Nonetheless, the fully sequential scheme is conceptually special and serves as a very useful reference for comparison with the other schemes.
\subsubsection{Two- vs. four- vs. eight-segment repeaters}\label{sec:Comparison: 2 vs. 4 vs. 8 segment repeaters}
In this section, let us finally address one of the main questions that motivates the exact secret key rate analysis that we have presented: is there an actual benefit of additional (memory) stations and repeater segments compared with schemes that work entirely without quantum memories (such as point-to-point links or twin-field QKD) or compared to schemes with a smaller number of memory stations? More specifically, is it useful to replace a simple two-segment repeater by a four- or eight-segment repeater in a realistic setting, i.e. even when the extra quantum memories are subject to additional preparation and operational errors and contribute to an increased accumulated memory dephasing? In the preceding section, with Tab.~\ref{tab_minimalmu}, we saw that the faultiness of the memory-qubit initial states and gates alone, even with no time- and distance-dependent memory dephasing, can make the secret key rate vanish completely, and this effect grows with the segment number $n$. In the last section of the paper, we shall also look at schemes that minimize the actual number of memory stations by combining the twin-field QKD and repeater memory concepts, for instance, in a four-segment scheme with only one of the three intermediate stations being equipped with memory qubits. Here we only consider the ``optimal'' scheme (which, as discussed before, we can treat rigorously only without a memory cut-off), since this ensures that we always consider the highest possible secret key rates. By adding extra repeater stations, the requirements on the initial state preparations and the Bell measurements become much higher, since the corresponding terms in the QBERs scale as \(\propto \mu^{n-1} \mu_0^n \). We stress again that in order to achieve a non-zero secret key rate for the eight-segment repeater, we had to alter the non-ideal value of \(\mu\) of Tab.~\ref{tab:constants} to a sufficiently large value, \(\mu=0.99\), see also Tab.~\ref{tab_minimalmu}. For a fair comparison, this value is then also used here to obtain the curves of the two- and four-segment repeaters.

\begin{figure*}
\caption{Comparison of secret key rates of the two-, four-, and eight-segment repeaters at total distances \(L\) for different experimental parameters.}
\label{fig:SKR_comparison}
\end{figure*}

The resulting secret key rates can be seen in Fig.~\ref{fig:SKR_comparison}. As one would expect, for example, the scaling changes from \(\sqrt{e^{-\frac{L}{L_{\mathrm{att}}}}} \) to \(\sqrt[8]{e^{-\frac{L}{L_{\mathrm{att}}}}} \) when going from a two-segment to an eight-segment repeater. However, the rate at \(L=\unit[0]{km} \) decreases with an increasing number of segments. This effect occurs for the raw rates (and the secret key rates assuming $\mu=1$), but it becomes more apparent for $\mu=0.99$. Still, at long distances, eight segments are superior to a smaller number of segments. Therefore, acknowledging that the necessary $\mu$ requirements are extremely demanding but not entirely impossible to achieve in practice, we conclude that it is indeed beneficial to add repeater stations. In particular, the effect of the memory dephasing alone (besides channel loss), for possible coherence times like those in Tab.~\ref{tab:constants} and used throughout the plots, will not prevent the benefit of adding more stations.
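To make the loss-scaling statement above explicit, the following minimal sketch evaluates the bare $n$-th-root transmission scaling $e^{-L/(n L_{\mathrm{att}})}$; the attenuation length $L_{\mathrm{att}}=22$ km used below is an assumed illustrative value (cf. Tab.~\ref{tab:constants}), and all efficiencies, error terms, and the decreasing rate offset at $L=0$ are deliberately ignored here.
\begin{verbatim}
import math

L_ATT = 22.0   # km, assumed attenuation length (illustration only)

def transmission_scaling(L, n):
    """n-th-root loss scaling of an n-segment repeater."""
    return math.exp(-L / (n * L_ATT))

for L in (400.0, 800.0):   # total distance in km
    print(L, ["%.2e" % transmission_scaling(L, n) for n in (1, 2, 4, 8)])
# n=2 gives the square root and n=8 the eighth root of exp(-L/L_att);
# the rate offset at L=0 (which worsens with n) is not modelled here.
\end{verbatim}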
Even when both $p_{\mathrm{link}}$ and $\tau_{\mathrm{coh}}$ take on the lower of the two considered values, as shown in Fig.~\ref{fig:SKR_comparison}(b), by placing seven memory stations along the channel it is in principle still possible to exceed the PLOB bound significantly. However, realistically, when $\mu<1$ as in Fig.~\ref{fig:SKR_comparison}(a), all secret key rates stay below the PLOB bound. In this case it becomes crucial that either $p_{\mathrm{link}}$ (Fig.~\ref{fig:SKR_comparison}(c)) or $\tau_{\mathrm{coh}}$ (Fig.~\ref{fig:SKR_comparison}(e)) is sufficiently large such that the curves can cross the PLOB bound at a sufficiently small distance (thanks to the small $y$-axis offset) or maintain their repeater loss scaling for sufficiently long distances, respectively. Recall that all rates shown and discussed here are per channel use. Furthermore, it should be stressed that we did not explicitly include time-dependent memory loss (assuming that the memory imperfections are dominated by the time-dependent memory dephasing), which can additionally jeopardise the benefits of adding more, in this case lossy, memory stations \cite{PirEisert}. (If this loss is detectable it may lead to a non-deterministic entanglement swapping like in the ``DLCZ'' quantum repeater, which is harder to analyze and optimize accurately even for a constant swapping probability \cite{Shchukin2021}; if the loss remains partially undetected at each station, it can lead to a reduced final state fidelity and thus an increased QBER.)

Let us discuss the comparison of repeaters with different segment numbers in a little more detail. It is indeed quite subtle, and for this we shall also take into account larger repeater systems, far beyond the $n=8$ case. For the general discussion, it is helpful to first consider the fully sequential scheme, as in this case we have access to all relevant (physical and statistical) quantities even for large repeaters, see Tab.~\ref{tab:seqperchanneluse}. If we only consider channel loss or, equivalently, if we only look at the raw rates, there is an optimal number of segments for a given total distance. In Tab.~\ref{tab:seqperchanneluse}, among the possibilities considered there, this is $n=80$ for $L=800$km, and so we should put stations every $L_0=10$km. If we include the memory dephasing (``channel-loss-and-memory-dephasing-only case''), we observe that not only the average (number of) waiting time (steps) $\mathbf{E}[K_n]$, but also the average (number of) dephasing time (steps) $\mathbf{E}[D_n]$ is minimized for $n=80$ when $L=800$km. In fact, these two averages, $n/p$ and $(n-1)/p$, respectively, become identical for larger $n$, and both grow in the two limits of many and very few segments, $L_0\rightarrow 0$ ($n\rightarrow \infty$) and $L_0\rightarrow L/2$ ($n\rightarrow 2$), respectively. However, when changing the segment length $L_0$, the inverse effective coherence time $\alpha=L_0/(c_f \tau_{\mathrm{coh}})$ will also change; $\alpha$ is simply maximal at $L_0=L/2$ and steadily becomes smaller when $L_0\rightarrow 0$ at fixed $\tau_{\mathrm{coh}}$. Note that below a certain $L_0$ value the repeater's elementary time unit is no longer dominated by the classical communication times and instead the maximal local processing times must go into $\alpha$, which we then refer to as $\alpha^{\mathrm{loc}}$.
This effect implies that in order to maximize the effective coherence time $\tau_{\mathrm{coh}}/\tau$, one should simply use as many stations as possible, eventually approaching the limitation given by the local processing times at each station. For these we may typically assume $\alpha^{\mathrm{loc}}_1=\tau/\tau_{\mathrm{coh}}={\rm MHz}^{-1}/0.1{\rm s}=0.00001$ and $\alpha^{\mathrm{loc}}_2=\tau/\tau_{\mathrm{coh}}={\rm MHz}^{-1}/10{\rm s}=0.0000001$. However, the first really relevant quantity to assess the effect of the memory dephasing is the effective average dephasing time $\alpha \mathbf{E}[D_n]$ that is related to the memory dephasing channel evolution. Interestingly, for the fully sequential scheme, this quantity, $\alpha \mathbf{E}[D_n]=(L/n)(n-1)/(c_f \tau_{\mathrm{coh}} p)$, converges for growing $n$ (small $L_0$) to $L/(c_f \tau_{\mathrm{coh}} p)$ with $p\rightarrow 1$. For example, in Tab.~\ref{tab:seqperchanneluse}, for $L=800$km, we have $L/(c_f \tau_{\mathrm{coh}} p)=0.0374$ for $\tau_{\mathrm{coh}}=0.1$s and $L/(c_f \tau_{\mathrm{coh}} p)=0.0004$ for $\tau_{\mathrm{coh}}=10$s. These limits are attainable for about $n=8000$ and for $n=800$, respectively. With $\tau_{\mathrm{coh}}=10$s the limit is also almost attainable for $n=80$, so again $L_0=10$km, and there is no further benefit by further increasing $n$. However, we also have $\alpha^{\mathrm{loc}}_1 \mathbf{E}[D_n]=0.00001\times (n-1)/p=0.0804$ for $n=8000$ and $\alpha^{\mathrm{loc}}_2 \mathbf{E}[D_n]=0.0000001\times (n-1)/p=0.0001$ for $n=800$.
\begin{table*}
\begin{tabular}{c|c|c|c|c|c|c|c}
$n$ & 1 & 2 & 4 & 8 & 80 & 800 & 8000 \\ \hline \hline
$L_0$[km] & 800 & 400 & 200 & 100 & 10 & 1 & 0.1 \\ \hline
$\mathbf{E}[K_n]$ & $\sim 10^{16}$ & $\sim 10^{8}$ & 35497 & 754 & 126 & 837 & 8036 \\ \hline
$R$ & $\sim 10^{-16}$ & $\sim 10^{-8}$ & $\sim 10^{-5}$ & 0.0013 & 0.0079 & 0.0012 & 0.0001 \\ \hline
$\mathbf{E}[D_n]$ & - & $\sim 10^{8}$ & 26623 & 659 & 124 & 836 & 8035 \\ \hline
$\alpha_1$ & - & 0.0192 & 0.0096 & 0.0048 & 0.0005 & $\sim 10^{-5}$ & $\sim 10^{-6}$ \\ \hline
$\alpha_1 \mathbf{E}[D_n]$ & - & $\sim 10^{6}$ & 256 & 3.1674 & 0.0598 & 0.0402 & 0.0386 \\ \hline
$\alpha_2$ & - & 0.0002 & 0.0001 & $\sim 10^{-5}$ & $\sim 10^{-6}$ & $\sim 10^{-7}$ & $\sim 10^{-8}$ \\ \hline
$\alpha_2 \mathbf{E}[D_n]$ & - & 15131 & 2.5576 & 0.0317 & 0.0006 & 0.0004 & 0.0004 \\ \hline
$\mathbf{E}[e^{-\alpha_1 D_n}]$ & - & $\sim 10^{-7}$ & $\sim 10^{-6}$ & $0.0729$ & $0.9420$ & $0.9606$ & $0.9621$ \\ \hline
$\mathbf{E}[e^{-\alpha_2 D_n}]$ & - & $\sim 10^{-6}$ & $0.1573$ & $0.9689$ & $0.9994$ & $0.9996$ & $0.9996$ \\ \hline
$r_1(\mu=1)$ & - & $\sim 10^{-13}$ & $\sim 10^{-12}$ & $0.0038$ & $0.8106$ & $0.8603$ & $0.8646$ \\ \hline
$r_2(\mu=1)$ & - & $\sim 10^{-9}$ & $0.0179$ & $0.8843$ & $0.9961$ & $0.9972$ & $0.9973$ \\ \hline
$r_1(\mu=0.99)$ & - & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\ \hline
$r_2(\mu=0.99)$ & - & $0$ & $0$ & $0.2203$ & $0$ & $0$ & $0$ \\ \hline
$S_1(\mu=1)$ & - & $\sim 10^{-21}$ & $\sim 10^{-17}$ & $\sim 10^{-6}$ & $0.0064$ & $0.0010$ & $0.0001$ \\ \hline
$S_2(\mu=1)$ & - & $\sim 10^{-17}$ & $\sim 10^{-7}$ & $0.0012$ & $0.0079$ & $0.0012$ & $0.0001$ \\ \hline
$S_1(\mu=0.99)$ & - & $0$ & $0$ & $0$ & $0$ & $0$ & $0$ \\ \hline
$S_2(\mu=0.99)$ & - & $0$ & $0$ & $0.0003$ & $0$ & $0$ & $0$ \\ \hline
$S^{\mathrm{PLOB,QR}}(L_0)$ & $\sim 10^{-16}$ & $\sim 10^{-8}$ & 0.0002 & 0.0154 & 1.4530 & 4.4921 & 7.7846
\end{tabular}
\caption{Overview of the relevant quantities for the {\it fully sequential scheme}: segment number $n$, segment length $L_0$[km], average (number of) waiting time (steps) $\mathbf{E}[K_n]$, raw rate $R$, average (number of) dephasing time (steps) $\mathbf{E}[D_n]$, inverse effective coherence time $\alpha_1=L_0/(c_f 0.1{\rm s})$, effective average dephasing time $\alpha_1 \mathbf{E}[D_n]$, inverse effective coherence time $\alpha_2=L_0/(c_f 10{\rm s})$, effective average dephasing time $\alpha_2 \mathbf{E}[D_n]$, average dephasing fractions
$\mathbf{E}[e^{-\alpha_1 D_n}]$ and $\mathbf{E}[e^{-\alpha_2 D_n}]$, secret key fractions and rates, $r$ and $S$, for different $\mu=\mu_0$ (subscript corresponds to the choice of $\alpha_1$ or $\alpha_2$, $\mu=1$ is the channel-loss-and-memory-dephasing-only case), and the (repeater-assisted) capacity bound $S^{\mathrm{PLOB,QR}}(L_0)$. We further assumed $p_{\mathrm{link}}=F_0=1$ for the link coupling efficiency and the initial state dephasing.} \label{tab:seqperchanneluse} \end{table*} \begin{table*} \begin{tabular}{c|c|c|c|c|c|c|c} $n$ &1 & 2 & 4 & 8 & 80 & 800 & 8000\\ \hline \hline $L_0$[km] &800 & 400 & 200 & 100 & 10 & 1 & 0.1\\ \hline $ \mathbf{E}[K_n] $ & $ \sim 10^{16} $ & $ \sim 10^{8} $ & $ 18487$ & $ 255$ & $ 5.4 $ & $ 2.9 $ & $ 2.2 $\\ \hline $R $ & $ \sim 10^{-16} $ & $ \sim 10^{-8} $ & $ \sim 10^{-5} $ & $ 0.0039$ & $ 0.1841 $ & $ 0.3490 $ & $ 0.4646 $\\ \hline $\mathbf{E}[D_n]$ & - & $ \sim 10^{8} $ & $ 22923$ & $ 488$ & $<124$ & $<836$ & $<8035$\\ \hline $\alpha_1$ & - & $ 0.0192$ & $ 0.0096$ & $ 0.0048$ & $ 0.0005$ & $ \sim 10^{-5} $ & $ \sim 10^{-6} $\\ \hline $\alpha_1 \mathbf{E}[D_n]$ &- & $ \sim 10^{6} $ & $ 220$ & $ 2.3484$ & $ <0.0582$ & $ <0.0391$ & $ <0.0376$ \\ \hline $\alpha_2$ & - & $ 0.0002$ & $ 0.0001 $ & $ \sim 10^{-5} $ & $ \sim 10^{-6} $ & $ \sim 10^{-7} $ & $ \sim 10^{-8} $\\ \hline $\alpha_2 \mathbf{E}[D_n]$ &- & $ 15131$ & $ 2.2022$ & $ 0.0235$ & $<0.0006$ & $<0.0004$ & $<0.0004$\\ \hline $\mathbf{E}[e^{-\alpha_1 D_n}]$ &- &$ \sim 10^{-6} $ & $ \sim 10^{-5} $ & $ 0.1552$ & $ >0.9420$ & $ >0.9606$ & $ >0.9621$ \\ \hline $\mathbf{E}[e^{-\alpha_2 D_n}]$ &- &$ \sim 10^{-4} $ & $ 0.2215$ & $ 0.9769$ & $>0.9994$ & $>0.9996$ & $>0.9996$ \\ \hline $r_1(\mu=1)$ &- &$ \sim 10^{-13} $ & $ \sim 10^{-11} $ & $ 0.0174$ & $>0.8106$ & $>0.8603$ & $>0.8646$ \\ \hline $r_2(\mu=1)$ &- &$ \sim 10^{-9} $ & $ 0.0357$ & $ 0.9090$ & $>0.9961$ & $>0.9972$ & $>0.9973$ \\ \hline $r_1(\mu=0.99)$ &- &$ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$\\ \hline $r_2(\mu=0.99)$ &- &$ 0$ & $ 0$ & $ 0.2323$ & $ 0$ & $ 0$ & $ 0$\\ \hline $S_1(\mu=1)$ &- &$ \sim 10^{-21} $ & $ \sim 10^{-15} $ & $ 0.0001 $ & $>0.0064$ & $>0.0010$ & $>0.0001$ \\ \hline $S_2(\mu=1)$ &- &$ \sim 10^{-17} $ & $ \sim 10^{-6} $ & $ 0.0036$ & $>0.0079$ & $>0.0012$ & $>0.0001$ \\ \hline $S_1(\mu=0.99)$ &- &$ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$ & $ 0$\\ \hline $S_2(\mu=0.99)$ &- &$ 0$ & $ 0$ & $ 0.0009$ & $ 0$ & $ 0$ & $ 0$\\ \hline $S^{\mathrm{PLOB,QR}}(L_0)$ & $\sim 10^{-16}$ & $\sim 10^{-8}$ & 0.0002 & 0.0154 & 1.4530 & 4.4921 & 7.7846 \end{tabular} \caption{Overview of the relevant quantities for the {\it optimal scheme}: segment number $n$, segment length $L_0$[km], average (number of) waiting time (steps) $\mathbf{E}[K_n]$, raw rate $R$, average (number of) dephasing time (steps) $\mathbf{E}[D_n]$, inverse effective coherence time $\alpha_1=L_0/(c_f 0.1{\rm s})$, effective average dephasing time $\alpha_1 \mathbf{E}[D_n]$, inverse effective coherence time $\alpha_2=L_0/(c_f 10{\rm s})$, effective average dephasing time $\alpha_2 \mathbf{E}[D_n]$, average dephasing fractions $\mathbf{E}[e^{-\alpha_1 D_n}]$ and $\mathbf{E}[e^{-\alpha_2 D_n}]$, secret key fractions and rates, $r$ and $S$, for different $\mu=\mu_0$ (subscript corresponds to the choice of $\alpha_1$ or $\alpha_2$, $\mu=1$ is the channel-loss-and-memory-dephasing-only case), and the (repeater-assisted) capacity bound $S^{\mathrm{PLOB,QR}}(L_0)$. 
For the cases $n>8$, not all exact values are available and hence we inserted approximate values or (lower or upper) bounds. We assumed $p_{\mathrm{link}}=F_0=1$ for the link coupling efficiency and the initial state dephasing.} \label{tab:optperchanneluse} \end{table*}
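For the reader who wishes to reproduce the orders of magnitude in Tab.~\ref{tab:seqperchanneluse}, the following minimal sketch evaluates the fully sequential statistics from the relations $\mathbf{E}[K_n]=n/p$ and $\mathbf{E}[D_n]=(n-1)/p$ quoted above, additionally assuming i.i.d. geometric waiting times per segment (so that $\mathbf{E}[e^{-\alpha D_n}]$ factorizes) and the illustrative constants $L_{\mathrm{att}}=22$ km, $c_f\approx 2.08\times 10^5$ km/s, and $p_{\mathrm{link}}=1$; the resulting numbers agree with the corresponding table entries up to rounding.
\begin{verbatim}
import math

L_ATT, C_F, L = 22.0, 2.083e5, 800.0    # km, km/s, km (assumed values)

def sequential_stats(n, tau_coh):
    L0 = L / n
    p = math.exp(-L0 / L_ATT)           # per-attempt success probability
    E_K, E_D = n / p, (n - 1) / p       # waiting / dephasing steps
    alpha = L0 / (C_F * tau_coh)        # inverse effective coherence time
    # E[exp(-alpha*D_n)] if D_n is a sum of n-1 i.i.d. geometric variables
    m = p * math.exp(-alpha) / (1.0 - (1.0 - p) * math.exp(-alpha))
    return E_K, E_D, alpha * E_D, m ** (n - 1)

for n in (4, 8, 80):
    print(n, ["%.3g" % x for x in sequential_stats(n, tau_coh=0.1)])
# e.g. n=8 yields approximately 754, 659, 3.17, 0.0729
\end{verbatim}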
Next, let us consider the relevant quantities for the optimal scheme as presented in Tab.~\ref{tab:optperchanneluse}. In this case, we no longer have access to all exact values for larger repeaters, $n>8$. However, there is a distinction between the waiting times $K_n$ and the dephasing times $D_n$. For the total waiting times or the raw rates $R$ we can calculate the numbers for small and also for larger $n$ according to the exact analytical expression in Eq.~\eqref{eq:Knpar}. There are also good approximations for both small $n$ (small $p$) and larger $n$ ($p$ closer to one) which may be easier to calculate \cite{PvL, Elkouss2021, Eisenberg}. Importantly, unlike the case of the fully sequential scheme, the raw rate $R$ now grows monotonically with $n$ (though slowly for larger $n$) thanks to the fast, parallel distributions in all segments together with the loss scaling that improves with $n$. This behaviour even matches that of the repeater-assisted capacity bounds for increasing $n$, as given in the last row of Tab.~\ref{tab:optperchanneluse}. However, recall that for our qubit-based quantum repeaters the raw rate can never exceed one bit per channel use, whereas $S^{\mathrm{PLOB,QR}}(L_0)$ can, for decreasing $L_0$.

For the average total dephasing we can calculate the exact values up to $n=8$. Comparing these values in Tabs.~\ref{tab:seqperchanneluse} and \ref{tab:optperchanneluse}, we see that the optimal scheme accumulates less dephasing than the fully sequential scheme when $n=4, 8$. The two competing effects in the fully sequential scheme, long total waiting time versus minimal number of simultaneously stored memory qubits per elementary time unit, overall result in a larger accumulated dephasing in comparison with our optimal scheme for $n\leq 8$. We extrapolate this relative behaviour to larger $n$ and therefore assume that the dephasing values of the fully sequential scheme may serve as upper bounds on those for the optimal scheme when $n>8$ in Tab.~\ref{tab:optperchanneluse}. We make the same assumption for the other dephasing-dependent quantities, in particular, the secret key fractions, for which the fully sequential values then serve as lower bounds. Looking at the entries of Tab.~\ref{tab:optperchanneluse} for the optimal scheme, as a final result, we conclude that while for $\mu=1$ (``channel-loss-and-memory-dephasing-only'' case) it may be best to choose as many segments as $n=80$ (i.e. stations placed every 10 km), similar to what is best for the fully sequential scheme (Tab.~\ref{tab:seqperchanneluse}), for $\mu=0.99<1$ we must not go to segment numbers higher than $n=8$. In fact, for $\mu=0.99$, both for the sequential and the optimal schemes, effectively the only non-zero secret key rate is obtainable for $n=8$ and the larger of the two coherence times considered, with a factor-three enhancement for the optimal scheme over the sequential one. If $n>8$, the faulty states and gates make $S$ vanish; if $n<8$, the small raw rates and the high effective average dephasing times do not permit practically usable secret key rates. Note that the entire discussion here in the context of Tabs.~\ref{tab:seqperchanneluse} and \ref{tab:optperchanneluse} is for a total distance of $L=800$km.
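Similarly, the raw-rate and waiting-time entries for the parallel distribution in Tab.~\ref{tab:optperchanneluse} can be checked against the expected maximum of $n$ i.i.d. geometric waiting times, written below in a standard inclusion-exclusion form as an illustrative stand-in for the exact expression of Eq.~\eqref{eq:Knpar} (same assumed constants as before).
\begin{verbatim}
import math
from math import comb

L_ATT, L = 22.0, 800.0   # km (assumed attenuation length, total distance)

def expected_max_geometric(n, p):
    # E[max of n i.i.d. geometric variables] via inclusion-exclusion;
    # numerically fine for the small n used here (for very large n a direct
    # summation of the survival function is preferable).
    q = 1.0 - p
    return sum(comb(n, j) * (-1) ** (j + 1) / (1.0 - q ** j)
               for j in range(1, n + 1))

for n in (2, 4, 8):
    p = math.exp(-(L / n) / L_ATT)       # per-segment success probability
    E_K = expected_max_geometric(n, p)   # parallel distribution in all segments
    print(n, "E[K_n] ~ %.6g" % E_K, "R ~ %.2g" % (1.0 / E_K))
# e.g. n=4 and n=8 give E[K_n] of about 18487 and 255, respectively
\end{verbatim}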
We may infer that an elementary segment length of $L_0 \sim 100$km is not only highly compatible with existing classical repeater and fiber network architectures, but also seems to offer a good balance between an improved memory-assisted loss scaling and an only limited addition of extra faulty elements. This conclusion holds for our repeater setting based upon heralded loss-tolerant entanglement distribution, deterministic entanglement swapping, and a memory dephasing model. Similar elementary lengths have been used before for schemes with probabilistic entanglement swapping and memory loss \cite{DLCZ, Sangouard}. For schemes with deterministic entanglement swapping but a less loss-tolerant entanglement distribution mechanism \cite{HybridPRL}, smaller segment lengths may be preferable. We will include such schemes, which exhibit an intrinsic channel-loss-dependent dephasing, in the discussion in a later section. Let us now consider a simple form of multiplexing in order to improve the repeater performance, provided sufficient extra resources are available.
\subsection{Multiplexing}
Operating $M$ repeater chains in parallel automatically leads to an enhancement of the overall rates by a factor of $M$. However, since in this case the corresponding number of channels grows as well by a factor of $M$, the rates per channel use remain unchanged. The situation becomes different though when the chains can ``interact'' with each other. In particular, the loss scaling of heralded entanglement distributions can be improved, at least for small systems in an MDI QKD setting (even without the use of quantum memories but with the need for a nondestructive heralding) \cite{Azuma2015}. For memory-based quantum repeaters, memory imperfections may be compensated via multiplexing techniques \cite{CollinsPrl,MunroNatPhot,LutPRA,RazLut}. Experimentally, multiplexing can be realized through various degrees of freedom. Apart from spatial multiplexing with additional memory qubits at each station that can be coupled to additional fiber channels, one can also employ temporal or spectral multiplexing, where a single fiber may be used sequentially at a high clock rate \cite{Cody_Jones} or simultaneously with multiple wavelengths, respectively. In this section, we shall incorporate a simple form of multiplexing into our formalism and our repeater models and systems. We have seen that either high total efficiencies or sufficiently long coherence times are needed to achieve usable secret key rates at long distances. We will now see that multiplexing can be understood as a means to effectively enhance the memory coherence time. In the following we will describe in more detail which kind of multiplexing we consider and why it indeed effectively increases the coherence time.

The simplest way to include multiplexing in our repeater models is by using \(M\) memories simultaneously to generate entanglement. These memories can either be connected to the same fiber by a switch or they may each be coupled to their own fiber channel. For simplicity, we consider the switch to be perfect such that both approaches become equivalent (and where the additional channel uses take place either in time or in space). A lossy switch could be easily incorporated into our model by using an additional parameter which is included in \(p_{\mathrm{link}}\) (note that the loss from the switch is time-independent and so always the same). A possible setup for a two-segment repeater with multiplexing is shown in Fig.~\ref{fig:figure-multiplexing}. Here all entanglement distribution attempts happen simultaneously. Since we have \(M\) replicas of all memories and channels, this setup acts as if \( p \mapsto 1-(1-p)^M\), provided that memory qubits from different chains can talk to each other in the middle station so that we may again swap as soon as possible.

\begin{figure}
\caption{Multiplexing in a two-segment repeater.}
\label{fig:figure-multiplexing}
\end{figure}

For $M$-fold multiplexing let us thus define the effective distribution probability $p_{\mathrm{eff}}=1-(1-p)^M$. For small $p$, only keeping linear terms, we have $p_{\mathrm{eff}}\approx M p$. As the expected waiting time in a single segment is then given by $\frac{1}{Mp}$, we can already gain insight into the possibility that multiplexing increases the effective coherence time by a factor of $M$. More specifically, for example, for the fully sequential scheme the expectation value of $D_n$ is $(n-1)/p$, thus the transition \( p \mapsto p_{\mathrm{eff}}\approx M p\) reduces the average number of dephasing steps by a factor of $M$.
This is equivalent to an increase of the coherence time by a factor $M$. In the following, let us be more precise and show what `small' $p$ really means in terms of the corresponding segment length $L_0$. In fact, including multiplexing, the secret key rates as a function of the repeater distance behave in a more complicated way: for small distances the rate is nearly constant, and only for larger distances do the rates behave as we would expect from the non-multiplexed schemes. In the general, exact model using $p_{\mathrm{eff}}=1-(1-p)^M$, it becomes clear that the above-mentioned behaviour originates from this general expression for $p_{\mathrm{eff}}$. In Fig.~\ref{fig:ruleofthumb}(a) one can see that $p_{\mathrm{eff}}$ can be divided into three regimes. In the first regime of small $L_0$, $p_{\mathrm{eff}}$ is a constant. In the second regime of large $L_0$, $p_{\mathrm{eff}}$ is a simple exponential decay, while in between it has a more complicated form interpolating between the two regimes. In the first regime, the effective probability is nearly constant, because in our simple multiplexing protocol we only make use of a single `entanglement excitation' in each segment of the parallelized repeater chains, but for small $L_0$ we would typically have multiple excitations in each segment. Thus, increasing $L_0$ decreases the number of excitations, but as we anyway only make use of a single one, this barely matters (making use of more excitations and keeping the `residual entanglement' could potentially further enhance the rates \cite{RazaviProc}; however, here our focus is on a simple and clear interpretation of the impact of the multiplexing on the coherence time and the memory dephasing in our statistical model). In the second regime of rather large $L_0$, the contributions of multiple excitations can be neglected and therefore the rates behave exactly like in the $M=1$ case. Hence, regime two is exactly the one where we can increase the effective coherence time by a factor of $M$ with the help of multiplexing. We can give a rough rule of thumb for the minimal segment length $L_0$ above which one may use the simple approximation of increasing the coherence time by a factor of $M$. For this we assume $p=\exp(-\frac{L_0}{L_{\mathrm{att}}})$ \footnote{When considering $p_{\mathrm{link}}<1$ one can incorporate this as an additional length of $-\ln(p_{\mathrm{link}})L_{\mathrm{att}}$ regarding $L_0$.} and take the minimizing argument of $\frac{\partial^2\ln\left(p_{\mathrm{eff}}\right)}{\partial L_0^2}$ for a given $M$ in order to estimate the midpoint of the interpolating regime. For general $M$, the resulting values can be nicely fitted to an expression of the form $c_1 \ln\left(c_2 M+c_3\right)+c_4$, as one can see in Fig.~\ref{fig:ruleofthumb}(b). One should then consider $L_0$ to be slightly larger for the approximation to hold.

\begin{figure}
\caption{(a) $p_{\mathrm{eff}}$ as a function of the segment length $L_0$, illustrating the three regimes discussed in the text. (b) Midpoint of the interpolating regime as a function of $M$, together with the fit described in the text.}
\label{fig:ruleofthumb}
\end{figure}

\begin{figure*}
\caption{Rates (secret key/raw) of (a,b) two- and (c,d) four-segment repeaters using multiplexing $M=10$ at distances \(L\) for different experimental parameters. The rate of a repeater without multiplexing, but with the same coherence time is shown in orange, whereas the rate of a repeater using multiplexing is shown in red. Additionally, a repeater without multiplexing, but with an equivalent effective coherence time is presented in dashed black.
All rates are expressed per channel use and hence include a division by $M$.}
\label{fig:SKR_multiplexing}
\end{figure*}

Let us give another, more rigorous derivation of the effective coherence time in the presence of multiplexing. The coherence time primarily characterises how quickly the secret key rate declines with distance. However, a massive drop actually happens when the secret key fraction \(r\) reaches zero, which is possible when $e_z>0$, i.e. when $\mu<1$ or $\mu_0<1$. Thus, let us determine the probability at which \(r=0\) holds with multiplexing and from that deduce an equivalent coherence time without multiplexing. Since the QBER \(e_z\) is constant ($e_z=\overline{e_z}$), we have to solve for the expectation value $\overline{e_x}$ such that
\begin{equation}
1-h(e_z) \overset{!}{=} h(\overline{e_x}).
\end{equation}
In order to find the probability \(p\) or equivalently the distance at which the drop happens, let us use the Taylor series of the binary entropy function at \(x=\frac{1}{2}\),
\begin{align}
h(x)= 1- \frac{1}{2 \ln(2)}\sum_{n=1}^{\infty} \frac{\left(1-2x\right)^{2n}}{n\left(2n-1\right)},\; \forall\; 0<x<1.
\end{align}
Then one finds for \(\overline{e_x}\) up to first order:
\begin{equation}
\overline{e_x}= \frac{1}{2} - \sqrt{\frac{\ln(2)h(e_z)}{2}},
\end{equation}
where only the negative root is possible, as \(0\leq e_x \leq \frac{1}{2}\). Inserting \(\overline{e_x}\) and solving for $\mathbf{E}[e^{-\alpha D_n}]$ gives
\begin{equation}
\mathbf{E}[e^{-\alpha D_n}] = \frac{\sqrt{2\ln(2)h(e_z)}}{\mu^{n-1} \mu_0^{n} \left(2 F_0-1\right)^n}.
\end{equation}
If $\mu=\mu_0=1$, including especially the channel-loss-and-memory-dephasing-only case (for which also $F_0=1$), we have $h(e_z)=0$ and so the requirement becomes $\mathbf{E}[e^{-\alpha D_n}]=0$, which is impossible. However, as soon as $e_z>0$, i.e. $\mu<1$ or $\mu_0<1$, a sufficiently small non-zero (average) dephasing fraction $\mathbf{E}[e^{-\alpha D_n}]$ leads to a zero secret key fraction. As we can always calculate this expectation value by our previously derived PGFs, we now have an accurate and systematic way to derive the probability $p$ (or the total distance $L=n L_0$) at which the drop takes place for given values of $n$, $\tau_{\mathrm{coh}}$, $\mu$, $\mu_0$, and $F_0$. Recall that the inverse effective coherence time $\alpha=L_0/(c_f \tau_{\mathrm{coh}})$ typically also depends on $L_0$. On the other hand, we may use the above relation to determine an (inverse) effective coherence time by calculating the drop for a repeater with multiplexing and then the equivalent \(\alpha\), which would be needed to achieve the same distance without any multiplexing. From this \(\alpha\) one can recover the coherence time \(\tau_{\mathrm{coh}}\) and find the approximate relation
\begin{equation}\label{eq:MPrelation}
\tau_{\mathrm{coh}} \mapsto M \cdot \tau_{\mathrm{coh}},
\end{equation}
when a multiplexing of \(M\) is used and the remaining setup is kept the same. Thus, one can achieve an \(M\)-times longer effective coherence time with the help of multiplexing. In Fig.~\ref{fig:SKR_multiplexing}, we show the rates of two- and four-segment repeaters using a multiplexing of \(M=10\) in red. Note that because we use the secret key rate per channel use, the rates are obtained including a division by $M$. The rates of the same repeaters without multiplexing are presented in orange.
Furthermore, a repeater without multiplexing, but with the equivalent `effective' coherence time of \(\tau_{\mathrm{eff}} = M \tau_{\mathrm{coh}}\), is shown in dashed black. One can see that for small distances, i.e. large probabilities, the multiplexed repeater does not quite behave like its non-multiplexed counterpart with an effectively increased coherence time. A clear splitting between the red and black curves is visible. However, for larger distances, especially after crossing the PLOB bound, the multiplexed repeater behaves exactly as if memories with an effectively longer coherence time were simply used. For smaller link efficiencies, the splitting becomes much less pronounced, as can be seen in the plots on the right of Fig.~\ref{fig:SKR_multiplexing}. All this holds for both two and four segments, according to Fig.~\ref{fig:SKR_multiplexing}. In particular, for small link efficiencies, the secret key rate of an equivalent repeater with \(\tau_{\mathrm{eff}} = M \tau_{\mathrm{coh}}\) is almost indistinguishable from that of a repeater with multiplexing. This is in agreement with the above discussion on the occurrence of single versus multiple `entanglement excitations' in each segment, where the latter are highly suppressed even at short distances due to the small value of $p_{\mathrm{link}}$. Thus, for practical purposes, in all our discussions, we may treat several cases equivalently: for instance, a repeater with $\tau_{\mathrm{coh}}=10$s and $M=1$ would be equivalent to a repeater with $\tau_{\mathrm{coh}}=1$s and $M=10$.
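As a minimal illustration of this effective coherence-time gain, the short sketch below evaluates $p_{\mathrm{eff}}=1-(1-p)^M$ and the ratio $p_{\mathrm{eff}}/p$, which approaches $M$ once $p$ is small enough, i.e. once $L_0$ lies in the second regime discussed above (again assuming $L_{\mathrm{att}}=22$ km and $p_{\mathrm{link}}=1$ purely for illustration).
\begin{verbatim}
import math

L_ATT = 22.0    # km, assumed attenuation length (illustration only)

def p_eff(L0, M):
    """Effective distribution probability with M-fold multiplexing."""
    p = math.exp(-L0 / L_ATT)
    return 1.0 - (1.0 - p) ** M

# The gain p_eff/p approaches M for small p (large L0), which is the regime
# where multiplexing acts like tau_coh -> M*tau_coh, e.g. via E[D_n]=(n-1)/p.
for L0 in (50.0, 100.0, 200.0):
    p = math.exp(-L0 / L_ATT)
    print(L0, "p=%.3e" % p, "p_eff(M=10)=%.3e" % p_eff(L0, 10),
          "gain=%.2f" % (p_eff(L0, 10) / p))
\end{verbatim}
For the parameters above, the gain is already close to $10$ at $L_0=100$ km and essentially exactly $10$ at $L_0=200$ km, consistent with the rule of thumb for the minimal $L_0$ given earlier.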
\subsection{Secret key rate per second}\label{sec:Secret Key Rate per Second}
In a real-world application, the important figure of merit is not the rate per channel use but the rate per second. In particular, a memory-assisted QKD system or, more generally, a memory-based quantum repeater, typically based upon light-matter interactions and classical communication at least between neighboring stations, has a limited `clock rate'. Classical communication is needed to declare successful transmission of photons for the entanglement distribution. In general, extra communication would also be needed to signal any successful entanglement swapping, but since we assumed deterministic swapping, no such communication is needed in our repeater models. As we already discussed frequently throughout the paper, a repeater's performance generally depends on an elementary time unit $\tau$, which is contained in the inverse effective coherence time $\alpha=\tau/\tau_{\mathrm{coh}}$, where generally $\tau=\tau_{\mathrm{clock}}+L_0/c_f$ including the experimental local processing time $\tau_{\mathrm{clock}}$. We have mostly argued that in the relevant distance regimes, this quantity is dominated by the (quantum and classical) communication times between neighboring stations, thus $\tau = L_0/c_f$ and $\alpha=L_0/(c_f \tau_{\mathrm{coh}})$. Already with segment lengths above \(\unit[10]{km}\), one can neglect the local processing times, since the local clock rates are much higher than the rates set by the transmission times. An extra factor of two could be included in $\tau$ for some protocols due to the $L_0$-transmission of a photon entangled with a memory qubit and the classical answer (sent back over $L_0$) heralding its successful transmission. However, this would depend on the specific protocol and so we have chosen the simplest, minimal form $\tau = L_0/c_f$. Only for very short segment lengths do we have $\alpha\approx \alpha^{\mathrm{loc}}=\tau_{\mathrm{clock}}/\tau_{\mathrm{coh}}={\rm MHz}^{-1}/\tau_{\mathrm{coh}}$, assuming experimental clock rates $\tau_{\mathrm{clock}}^{-1}$ typically of the order of MHz.

However, there are repeater schemes that are independent of additional classical communication, where the decision to keep or reinitialize a memory state can be made at the memory station. These schemes may be referred to as ``node receives photons'' (NRP) as opposed to the class of schemes with ``node sends photons'' (NSP) \cite{White}. An NRP protocol and application that circumvents the need for extra signal waiting times can be realized with two ``segments'' and a middle station in memory-assisted MDI QKD \cite{White}. Such a scheme, when treated as an elementary quantum repeater unit or module, many of which a large-scale repeater can be built from, may be referred to as a ``quantum repeater cell'', actually composed of two half-segments \cite[Fig. 6b]{White}. In this case, even for large (half-)segment length $L_0$, we have $\alpha = \alpha^{\mathrm{loc}}=\tau_{\mathrm{clock}}/\tau_{\mathrm{coh}}$. For completeness, we show the rates of such an NRP-based two-segment scheme in the form of contour plots in App.~\ref{app:nrp}. Since the need for extra classical communication is circumvented and the effective memory dephasing is thus significantly reduced, the minimal state and gate fidelity values can even be kept constant over large distance regimes. However, as soon as the NRP concept is applied to repeaters beyond a single middle station effectively connecting complete repeater segments
\cite[Fig. 6a]{White}, the need for extra classical communication to initiate an entanglement swapping operation can no longer be entirely avoided (though there are ideas to still partially benefit from the NRP concept) \cite{Cody_Jones}. A quantum repeater cell can also be considered employing the NSP protocol \cite{NL}, and one such cell (two half-segments) or the corresponding complete segment can then be used as an elementary quantum repeater unit \cite[Fig. 4]{White}. For the NSP concept, the extra signal waiting time is generally required at every distribution attempt. In any case or protocol, the repeater's elementary time unit $\tau$ determines the effective coherence time $\tau_{\mathrm{coh}}/\tau$ and as such, even when the rates per channel use are considered, it determines how many distribution attempts are possible within a given $\tau_{\mathrm{coh}}$ and hence how big the effective dephasing time $\alpha D_n$ becomes.

Compared with memory-assisted quantum communication schemes, a big asset of an all-optical point-to-point quantum communication link is that it can operate at a very high clock rate, typically of the order of GHz, only limited by the speed of Alice's laser (quantum state) source and Bob's (quantum state) detector. For such a direct state transmission, no extra classical communication is required, in contrast to the heralding of the successful transfer of entangled photons between repeater links. Thus, the rate per second is simply given by the two local clock rates, especially the time it takes to generate the photonic qubit states or any other quantum states in QKD based on different types of encoding (however, thanks to the known linear bounds on the key distribution via a long and lossy point-to-point quantum communication channel \cite{PLOB, TGW}, it is clear that the rate scaling of qubit-based QKD cannot be beaten by any form of non-qubit encoding). Other all-optical schemes such as MDI QKD or twin-field QKD, which are no longer point-to-point and do include a middle station between Alice and Bob, also benefit from such high clock rates. The remarkable feature of twin-field QKD is that it shares both advantages: the high clock rate with point-to-point quantum communication and the $L \rightarrow L/2$ loss scaling gain with memory-based two-segment quantum repeaters. In order to assess whether there is a real benefit of employing a two-segment quantum repeater or even adding extra repeater stations, we must eventually consider the rates per second and take into account the corresponding clock rates in all schemes. As a consequence, comparing clock rates of MHz with those of GHz (of memory-based versus all-optical quantum communication), there is a penalty of a factor of about 1000 from the start for the memory-based approach. In the regime where $\alpha\approx L_0/(c_f \tau_{\mathrm{coh}})$, this penalty becomes even worse. In this case, when $\tau\approx L_0/c_f$, there are at least two disadvantages of $\tau$ growing with $L_0$: a reduced effective coherence time $\tau_{\mathrm{coh}}/\tau$ and a reduced raw rate per second $R/\tau$. Beating the PLOB bound for the rates per channel use is only a necessary criterion for a quantum repeater to be beneficial. In order to confirm a real benefit, we have to consider the secret key rates per second $S/\tau=r R/\tau$. Thus, even with perfect memories ($\tau_{\mathrm{coh}}\rightarrow\infty$), the different $\tau$ values matter. The situation is similar to throwing two or more dice at once at a fast rate.
Getting all dice to show a six this way may still be faster than throwing them very slowly while being allowed to continue with only the unsuccessful dice in each round. The final raw and secret key rates per second obtainable with our two most prominent and most frequently discussed repeater schemes, the fully sequential scheme and the optimal scheme, are given in Tabs.~\ref{tab:seqpersecond} and \ref{tab:optpersecond}, respectively.

\begin{table*}
\begin{tabular}{c|c|c|c|c|c|c|c}
$n$ &1 & 2 & 4 & 8 & 80 & 800 & 8000\\ \hline \hline
$L_0$[km] &800 & 400 & 200 & 100 & 10 & 1 & 0.1\\ \hline
$R /\tau $ & $\unit[\sim 10^{-14}]{Hz}$ & $\unit[\sim 10^{-6}]{Hz}$ & $\unit[0.0293]{Hz}$ & $\unit[2.8]{Hz}$ & $\unit[165.2]{Hz}$ & $\unit[248.7]{Hz}$ & $\unit[259.1]{Hz}$\\ \hline
$S_1(\mu=1)/\tau $ &- &$\unit[\sim 10^{-18}]{Hz}$ & $\unit[\sim 10^{-14}]{Hz}$ & $\unit[0.0106]{Hz}$ & $\unit[133.9]{Hz}$ & $\unit[213.9]{Hz}$ & $\unit[224.0]{Hz}$\\ \hline
$S_2(\mu=1)/\tau $ &- &$\unit[\sim 10^{-14}]{Hz}$ & $\unit[0.0005]{Hz}$ & $\unit[2.4]{Hz}$ & $\unit[164.5]{Hz}$ & $\unit[248.0]{Hz}$ & $\unit[258.4]{Hz}$\\ \hline
$S_1(\mu=0.99)/\tau $ &- &$\unit[0]{Hz}$ & $\unit[0]{Hz}$ & $\unit[0]{Hz}$ & $\unit[0]{Hz}$ & $\unit[0]{Hz}$ & $\unit[0]{Hz}$\\ \hline
$S_2(\mu=0.99)/\tau $ &- &$\unit[0]{Hz}$ & $\unit[0]{Hz}$ & $\unit[0.6086]{Hz}$ & $\unit[0]{Hz}$ & $\unit[0]{Hz}$ & $\unit[0]{Hz}$\\ \hline
$S^{\mathrm{PLOB,QR}}(L_0)/\tau $ &$\unit[\sim 10^{-7}]{Hz}$ & $\unit[18.3]{Hz}$ & $\unit[0.2]{MHz}$ & $\unit[15.5]{MHz}$ & $\unit[1.5]{GHz}$ & $\unit[4.5]{GHz}$ & $\unit[7.8]{GHz}$
\end{tabular}
\caption{Overview of the relevant quantities for the {\it fully sequential scheme} of Tab.~\ref{tab:seqperchanneluse} calculated per second (shown are only those entries that change, but again with segment number $n$, segment length $L_0$[km]): raw rate $R/\tau$, secret key rate $S/\tau$ for different $\mu=\mu_0$ (again the subscript corresponds to the choice of $\alpha_1$ or $\alpha_2$, and $\mu=1$ is the channel-loss-and-memory-dephasing-only case), and the (repeater-assisted) capacity bound per elementary time unit $S^{\mathrm{PLOB,QR}}(L_0)/\tau$, where we choose $\tau={\rm GHz}^{-1}$ for the cases $n=1,2$, i.e. the bounds, expressed per second, on all-optical point-to-point and twin-field QKD. Note that for realistic but still GHz-clock-rate twin-field QKD we rather have $S/\tau\sim1$Hz. In any of the other, memory-based scenarios, we choose $\tau=\tau_{\mathrm{clock}}+L_0/c_f$ with $\tau_{\mathrm{clock}}={\rm MHz}^{-1}$.
We again assumed $p_{\mathrm{link}}=F_0=1$ for the link coupling efficiency and the initial state dephasing.} \label{tab:seqpersecond} \end{table*} \begin{table*} \begin{tabular}{c|c|c|c|c|c|c|c} $n$ &1 & 2 & 4 & 8 & 80 & 800 & 8000\\ \hline \hline $L_0$[km] &800 & 400 & 200 & 100 & 10 & 1 & 0.1\\ \hline $R /\tau $ & $\unit[\sim 10^{-14}]{Hz}$ & $\unit[\sim 10^{-6}]{Hz}$ & $\unit[0.0563]{Hz}$ & $\unit[8.2]{Hz}$ & $\unit[3.8]{kHz}$ & $\unit[72.7]{kHz}$ & $\unit[967.2]{kHz}$\\ \hline $S_1(\mu=1)/\tau $ &- &$\unit[\sim 10^{-18}]{Hz}$ & $\unit[\sim 10^{-12}]{Hz}$ & $\unit[0.1423]{Hz}$ & $>\unit[3.1]{kHz}$ & $>\unit[62.5]{kHz}$ & $>\unit[832.1]{kHz}$\\ \hline $S_2(\mu=1)/\tau $ &- &$\unit[\sim 10^{-14}]{Hz}$ & $\unit[0.0020]{Hz}$ & $\unit[7.4]{Hz}$ & $>\unit[3.8]{kHz}$ & $>\unit[72.4]{kHz}$ & $>\unit[964.5]{kHz}$\\ \hline $S_1(\mu=0.99)/\tau $ &- &$\unit[0]{Hz}$ & $\unit[0]{Hz}$ & $\unit[0]{Hz}$ & $\unit[0]{Hz}$ & $\unit[0]{Hz}$ & $\unit[0]{Hz}$\\ \hline $S_2(\mu=0.99)/\tau $ &- &$\unit[0]{Hz}$ & $\unit[0]{Hz}$ & $\unit[1.9]{Hz}$ & $\unit[0]{Hz}$ & $\unit[0]{Hz}$ & $\unit[0]{Hz}$\\ \hline $S^{\mathrm{PLOB,QR}}(L_0)/\tau $ &$\unit[\sim 10^{-7}]{Hz}$ & $\unit[18.3]{Hz}$ & $\unit[0.2]{MHz}$ & $\unit[15.5]{MHz}$ & $\unit[1.5]{GHz}$ & $\unit[4.5]{GHz}$ & $\unit[7.8]{GHz}$ \end{tabular} \caption{Overview of the relevant quantities for the {\it optimal scheme} of Tab.~\ref{tab:optperchanneluse} calculated per second (shown are only those entries that change, but again with segment number $n$, segment length $L_0$[km]): raw rate $R/\tau$, secret key rate $S/\tau$ for different $\mu=\mu_0$ (again subscript corresponds to the choice of $\alpha_1$ or $\alpha_2$, $\mu=1$ is the channel-loss-and-memory-dephasing-only case), and the (repeater-assisted) capacity bound per elementary time unit $S^{\mathrm{PLOB,QR}}(L_0)/\tau$ where we choose $\tau={\rm GHz}^{-1}$ for the cases $n=1,2$, i.e. the bounds, expressed per second, on all-optical point-to-point and twin-field QKD. Note that for realistic but still GHz-clock-rate twin-field QKD we rather have $S/\tau\sim1$Hz. In any of the other, memory-based scenarios, we choose $\tau=\tau_{\mathrm{clock}}+L_0/c_f$ with $\tau_{\mathrm{clock}}={\rm MHz}^{-1}$. We again assumed $p_{\mathrm{link}}=F_0=1$ for the link coupling efficiency and the initial state dephasing.} \label{tab:optpersecond} \end{table*}
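The conversion from rates per channel use to rates per second in Tabs.~\ref{tab:seqpersecond} and \ref{tab:optpersecond} amounts to a division by the elementary time unit $\tau=\tau_{\mathrm{clock}}+L_0/c_f$, as summarized by the following minimal sketch; the constants are again illustrative assumptions ($c_f\approx 2.08\times 10^5$ km/s, $\tau_{\mathrm{clock}}={\rm MHz}^{-1}$).
\begin{verbatim}
# Rate per second = (rate per channel use) / tau,
# with tau = tau_clock + L0/c_f (assumed illustrative constants).
C_F = 2.083e5       # km/s, signal velocity in fiber
TAU_CLOCK = 1e-6    # s, local processing time (MHz clock rate)

def per_second(rate_per_use, L0_km):
    tau = TAU_CLOCK + L0_km / C_F
    return rate_per_use / tau

# e.g. the n=8 optimal-scheme entry S_2(mu=0.99) = 0.0009 per channel use
# at L0 = 100 km gives roughly 1.9 Hz, cf. the per-second table above:
print("%.2f Hz" % per_second(0.0009, 100.0))
\end{verbatim}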
\subsection{Application and comparison of protocols}
Let us now consider various quantum repeater protocols based on different types of optical encoding and calculate their corresponding secret key rates per second using the methods developed in the preceding sections. We shall look at (i) a kind of standard scheme employing two-mode (dual-rail, DR) photonic qubits distributed through the optical-fiber channels (either emitted from a central source of entangled photon pairs and written into the spin memory qubits or emitted from the repeater nodes employing spin-photon entangled states and utilizing two-photon interference in the middle of each segment) \cite{White}, (ii) a scheme based upon spin-photon (spin-light-mode) entanglement and one-photon interference with an encoding similar to that introduced by Cabrillo et al. \cite{cabrillo}, effectively using one-mode (single-rail, SR) photonic qubits, and (iii) a scheme that extends the concepts of twin-field QKD with coherent states to a specific variant of memory-assisted QKD, i.e. a kind of twin-field quantum repeater \cite{tf_repeater}. We refer to scheme (ii) as the Cabrillo scheme and discuss it in more detail in App.~\ref{app:cabrillo}. For all three schemes we consider a quantum repeater with $n=1,2,3,4,8$ segments, matching the size of the repeater systems that we have formally and theoretically treated in great detail in the first parts of this paper. We always use the previously derived ``optimal'' quantum repeater protocol that belongs to the fastest schemes and gives the smallest dephasing among all fast schemes.

The two schemes (ii) and (iii) share the potential benefit that for quantum repeaters with $n$ segments and $n-1$ intermediate memory stations (not counting the memories at Alice and Bob or assuming immediate measurements there) they lead to an improved loss scaling with a $2n$-times bigger effective attenuation distance compared with a point-to-point link (unlike the standard scheme (i) that only achieves an $n$-times bigger effective attenuation distance), but a final state fidelity parameter still decreasing as the power of $2n-1$ (assuming equal gate and initial state error rates) like the standard scheme (i). However, scheme (ii) has an intrinsic error during the distribution step due to the initial two-photon terms in combination with channel loss. Similarly, scheme (iii) is more sensitive to channel loss, exhibiting an intrinsic loss-dependent dephasing error, because the optical state is a phase-sensitive continuous-variable state \cite{HybridPRL}. The two models of channel-loss-induced errors for schemes (ii) and (iii) thus slightly differ, while the transmission loss scaling is identical. As a consequence, for both (ii) and (iii), we have the constraint that the excitation amplitudes (the weights of the non-vacuum terms) must not become too large. Despite the above-mentioned benefits compared with scheme (i), it will turn out that the intrinsic errors of schemes (ii) and (iii) represent an essential complication that prevents one from fully exploiting the improved scaling of the basic parameters in comparison with the standard repeater protocols. For a fair comparison, assuming similar types of initial state imperfections in all three schemes, we set $\mu_0=1$ with $F_0=0.99,0.98$ and so replace the initial depolarizing error for scheme (i) by an initial dephasing error.
Thus, in the expressions of the QBERs as given by Eq.~\eqref{eq:QBER}, the contribution of $\mu_0^n$ to the initial error scaling from the analysis of the preceding sections (where $F_0=1$) is now replaced by a corresponding scaling with $F_0<1$. The gate error scaling with $\mu^{n-1}$ remains unchanged in all schemes. Of course, our formalism also allows us to focus on specific schemes including initial state errors with $\mu_0<1$. In this case, the specific contributions of the different elements in each elementary repeater unit (segments, half-segments, ``cells'') \cite{White} to the link coupling efficiency $p_{\mathrm{link}}$ and the initial state error parameters $\mu_0$ or $F_0$ depend on the particular protocol \cite{White}. For example, zooming in on an NSP segment \cite{White}, we have a squared contribution from the two spin-photon entangled states on the left and on the right, $\mu_{\mathrm{sp,ph}}^2$, and another possible gate error factor, $\mu_{\mathrm{OBM}}$, coming from the optical Bell measurement in the middle of the segment. In this scenario, already in a single segment, we effectively have one imperfect entanglement swapping operation (acting on the two photons in the middle of the segment) connecting two initially distributed, depolarized entangled states (the two spin-photon states), to which our physical model directly applies, replacing our initial $\mu_0$ for one segment according to $\mu_0 \rightarrow \mu_{\mathrm{sp,ph}}^2 \mu_{\mathrm{OBM}}$. This overall initial distribution error will most likely be dominated by the imperfect spin-photon states, assuming near-error-free (though probabilistic) photonic Bell measurements, thus $\mu_0 \sim \mu_{\mathrm{sp,ph}}^2$. In a full NRP segment, the memory write-in may be realized via quantum teleportation using a locally prepared spin-photon state and an optical Bell measurement on the photon that arrives from the fiber channel and the local photon. In this scenario, already in a single complete segment, we may effectively have three initial entangled states (two local spin-photon states on the left and on the right together with one distributed entangled photon pair emitted from a source in the middle of the segment) and two optical Bell measurements \cite[Fig. 6a]{White}, with our model resulting in a $\mu_0 \sim \mu_{\mathrm{ph,ph}} \mu_{\mathrm{sp,ph}}^2 \mu_{\mathrm{OBM}}^2$ scaling of the initial error parameter for one segment (i.e., similar to the effective final scaling of a three-segment repeater in our more abstract model, with $\mu_0 \rightarrow \mu_{\mathrm{sp,ph}}$ and $\mu \rightarrow \mu_{\mathrm{OBM}}$, and setting for this simplifying analogy, quite unrealistically, $\mu_{\mathrm{sp,ph}}=\mu_{\mathrm{ph,ph}}$). Assuming near-error-free Bell measurements, and near-perfect (though possibly only probabilistically created) photon pairs, we would again arrive at an overall scaling of $\mu_0 \sim \mu_{\mathrm{sp,ph}}^2$ for the initial error parameter. In the case of an entangled photon pair source that deterministically produces imperfect photon-photon states (such as a quantum dot source), we would have $\mu_0 \sim \mu_{\mathrm{ph,ph}}\mu_{\mathrm{sp,ph}}^2$ instead. There is also the option of a heralded memory write-in that no longer relies on the generation of local spin-photon states and optical Bell measurements \cite{Rempe}.
In this case, our physical model has to be slightly adapted to such a scenario, and a decomposition of the different error channels, including an imperfect memory write-in operation, into one effective initial error channel should be considered. Thus, zooming in on our general initial-state error parameters $\mu_0$ or $F_0$ for a specific implementation is straightforwardly possible, but it will eventually lead to even stronger fidelity requirements for the individual experimental components that contribute to $\mu_0$ or $F_0$. The different contributions to the link coupling efficiencies $p_{\mathrm{link}}$ can be similarly decomposed into the different experimental elements, also including some differences for the different types of quantum repeater units and protocols \cite{White}. However, note that for our comparison in this section, especially assuming that two photonic states are combined in the middle of each segment (i.e. in a kind of NSP scenario), the two-photon interference of scheme (i) results in a quadratic disadvantage not only for the channel transmission but also in terms of the link coupling efficiency $p_{\mathrm{link}}$ in comparison with the protocols based on one-photon interference (schemes (ii) and (iii)), $p_{\mathrm{link,(i)}}=p_{\mathrm{link,(ii)}}^2=p_{\mathrm{link,(iii)}}^2$. For this let us write in short $p_{\mathrm{link,DR}}=p_{\mathrm{link,TF}}^2$, given the similarity of schemes (ii) and (iii).

In Fig.~\ref{fig:SKR_per_sec_tf} we compare the secret key rates for the dual-rail scheme (i) (DR), the Cabrillo scheme (ii), and the twin-field repeater (iii) (TF). The two twin-field-type schemes include a free parameter describing the number of excitations. More excitations lead to a higher transmission rate at the expense of a lower state quality. In the plots we optimize this parameter for each data point to obtain the maximal secret key rate. Recall that for the DR scheme we introduce a small dephasing via the parameter $F_0<1$ in order to avoid comparing perfect initial entangled states with noisy ones. When comparing schemes (ii) and (iii), one can see that for $\mu\approx1$ scheme (iii) performs better, while for lower $\mu$ scheme (ii) is the better performing one. This is because the probability of an error is smaller for the Cabrillo scheme, but the error would affect both QBERs of the BB84 protocol, significantly reducing the secret key rate. For the TF scheme (iii) we have an effect on only one of the two error rates. When $\mu$ gets smaller, all schemes have a non-vanishing error rate in both bases and therefore the lower error rate of the Cabrillo scheme is helpful. Figure~\ref{fig:SKR_per_sec_tf} shows that, although the DR scheme has a scaling disadvantage in comparison to both other schemes, it is often highly competitive, since both twin-field-type schemes suffer from their low initial probabilities of success when only weak excitations can be used to avoid introducing too much noise from the loss channel. Considering a memory coherence time of 10 seconds, a gate error parameter $\mu\geq0.97$, and coupling efficiencies of $p_{\mathrm{link,TF}}=0.9$, one can already overcome the PLOB bound with only three memory stations using either the DR scheme (i) or the TF protocol (iii). For this comparison in terms of secret bits per second, we assume a source repetition rate of \unit[1]{GHz} for an ideal point-to-point link as associated with the PLOB bound per channel use.
Note that we do not include an extra factor of $1/2$ for the final rates, which would strictly be needed for the DR-based scheme in a comparison with the PLOB bound for a single-mode loss channel. Here the parallel transmission of the two modes of a DR qubit does not change the rates per second, and this optical encoding does not cause an extra experimental resource overhead (in fact, it even simplifies the optical transmission by circumventing the need for long-distance phase stabilization as required for the TF-type schemes). Moreover, an optical point-to-point direct transmission would most likely be based on DR qubit transmission as well. The other, previously mentioned factor of $2$ in front of the effective inverse coherence time $\alpha$, which occurs when the two spins of a two-qubit spin pair dephase simultaneously while waiting in one segment, has now been included for each segment (i.e., a small improvement would be possible if Alice and Bob measured their spins immediately). In Fig.~\ref{fig:SKR_per_sec_tf}, we always assume a coherence time $\tau_{\mathrm{coh}}=\unit[10]{s}$, $p_{\mathrm{link,TF}}=0.9$, and $M=1$. Recall from our discussions of the possibility of multiplexing that we may equivalently consider schemes for which, for instance, $\tau_{\mathrm{coh}}=\unit[1]{s}$ and $M=10$ according to Eq.~\eqref{eq:MPrelation}. The plots lead to the following observations. The two TF-type schemes (ii) and (iii) rely more heavily on sufficiently good error parameters than the DR scheme (i) does. In Figs.~\ref{fig:SKR_per_sec_tf}(a) and (b), for two different initial dephasing fidelities (relevant only for DR), we see that only for a gate error parameter as good as $\mu=0.999$ does the TF scheme (iii) perform as well as DR. In this case, for the given parameters, TF even reaches slightly larger distances than DR, with both going well above $L=1200$~km and still giving more than a hundredth of a secret bit per second at such distances. Note that in order to achieve this, the TF scheme requires a loss scaling with a $16$-times bigger effective attenuation distance compared with a point-to-point link, whereas the DR scheme only has to exhibit an $8$-times bigger effective attenuation distance (``$n=8$ TF'' vs. ``$n=8$ DR''). The number of memory stations is the same for both, namely seven (not counting those at Alice and Bob). With increasing gate errors, $\mu\leq 0.99$, as shown in Figs.~\ref{fig:SKR_per_sec_tf}(c)-(g), only the DR scheme reaches distances near or above $L=1000$~km. If the error parameters for the gates, $\mu$, and for the initial states, $F_0$, are no longer sufficiently good (individually or in combination), the DR scheme also ceases to reach large distances and barely beats the PLOB bound (see Figs.~\ref{fig:SKR_per_sec_tf}(f) and (g)). For the two TF-type schemes (ii) and (iii), we generally checked both types of detectors, on-off as well as photon-number-resolving (Fig.~14 shows the results for on-off detections), and we did not see a significant difference in the logarithmic plots of the secret key rates for either scheme. The reason is that for larger distances the two-photon events at either of the two detectors (detectable via PNRDs) become increasingly unlikely compared with one-photon detection events coming from the two-photon terms in combination with the loss of one photon during transmission (causing errors which remain undetectable via PNRDs). The practically most relevant situation is shown in Figs.~\ref{fig:SKR_per_sec_tf}(c)-(e).
In particular, for the numbers chosen there, i.e., state and gate errors on the order of 1--2\%, the DR scheme reaches a distance of $L=800$~km with about one secret bit per second, and even larger distances at a lower rate. The link coupling efficiency for this scenario, as in all others, is $p_{\mathrm{link,DR}}=p_{\mathrm{link,TF}}^2=0.81$; the coherence time is $\tau_{\mathrm{coh}}=\unit[10]{s}$. The number of segments is $n=8$ (``$n=8$ DR'', dotted yellow curve), corresponding to a memory station placed every $L_0=100$~km. The result for this scheme is consistent with the results obtained for $S_2(\mu=0.99)$ and especially $S_2(\mu=0.99)/\tau$ in Tabs.~\ref{tab:optperchanneluse} and \ref{tab:optpersecond}, respectively, for $n=8$. However, note that for the values in Tabs.~\ref{tab:optperchanneluse} and \ref{tab:optpersecond} we chose $p_{\mathrm{link}}=F_0=1$ and $\mu=\mu_0$, slightly different from the parameter choice for Fig.~\ref{fig:SKR_per_sec_tf}(c), where $\mu_0=1$ and $F_0=0.99$ plays the role of the imperfect state parameter instead of $\mu_0$ (in addition, we have $p_{\mathrm{link}}=0.81$ for DR, and the dephasing of both spins at every time step is included). Reiterating the previous discussions in Secs.~\ref{sec:Comparison: 2 vs. 4 vs. 8 segment repeaters}, the choice of $L_0 \sim 100$~km is not only highly compatible with existing classical repeater and fiber network architectures, but also offers a good balance between an improved memory-assisted loss scaling and only a limited number of additional faulty elements. Here we found, in particular, that the standard DR scheme (i) is another good choice for actually benefiting from these well-balanced parameters.
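To put these numbers into perspective, the repeaterless PLOB bound at the same distance can be estimated with a short sketch (assuming a standard fiber attenuation of 0.2~dB/km, which is an assumption not fixed in this section, together with the \unit[1]{GHz} clock rate quoted above):
\begin{verbatim}
import math

alpha_db_per_km = 0.2   # assumed fiber attenuation (not fixed in the text)
L = 800.0               # total distance in km
clock = 1e9             # assumed source repetition rate (1 GHz)

eta = 10 ** (-alpha_db_per_km * L / 10)        # channel transmissivity
plob_per_use = -math.log1p(-eta) / math.log(2) # PLOB capacity per channel use
                                               # (log1p for tiny eta)
print(plob_per_use * clock)   # ~1.4e-7 secret bits per second,
# compared with about one secret bit per second for the n=8 DR repeater
\end{verbatim}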
Finally, we also considered the six-state QKD protocol \cite{sixstate} instead of BB84, but this only improved the final rates marginally. In the case of $\mu=0.98$ and $\mu_0=1$, the rate could, in principle, be improved significantly for $n=8$, but for these parameters it is easier in practice to use BB84 and $n=4$ instead. When considering sufficiently good error parameter values like $\mu=0.99$, such that $n=8$ outperforms $n=4$, there is again only a minimal improvement from employing the six-state QKD protocol. \begin{figure*} \caption{Secret key rates per second. We always assume a coherence time $\tau_{\mathrm{coh}}=\unit[10]{s}$, $p_{\mathrm{link,TF}}=0.9$, and $M=1$.} \label{fig:SKR_per_sec_tf} \end{figure*}
\section{Conclusion}\label{sec:Conclusion} We presented a statistical model based on two random variables and their probability-generating functions (PGFs) in order to describe, in principle, the full statistics of the rates obtainable in a memory-based quantum repeater chain. The physical repeater model assumes a heralded initial entanglement distribution with a certain elementary probability for each repeater segment (including fiber channel transmission and all link coupling efficiencies), deterministic entanglement swapping to connect the segments, and single-spin quantum memories at each repeater station that are subject to time-dependent memory dephasing. No active quantum error correction is performed on any of the repeater ``levels''; in fact, our model does not even rely upon the basic assumption of a nested repeater level structure. The two basic statistical variables associated with this physical repeater model are the total repeater waiting time and the total, accumulated dephasing time. In the context of an application in long-range quantum cryptography, our model corresponds to a form of memory-assisted quantum key distribution, for which we calculated the (asymptotic, primarily BB84-type) secret key rates as a figure of merit to assess the repeater performance against known benchmarks and all-optical quantum communication schemes. Apart from the theoretical complexity that grows with the size of the repeater (i.e., the number of repeater segments), it was clear from the start that, experimentally, the memory-assisted schemes of our model cannot go arbitrarily far while still producing a non-zero secret key rate. One motivation and goal of our work was to quantify this intuition and to provide an answer to the question of whether it is actually beneficial, in a real setting, to add faulty memory stations to a quantum communication line. Existing works focused on the smallest repeaters with only two segments and one middle station. The aim was therefore to further explore these smallest repeaters and then extend them to repeaters of a larger scale, answering the above question. Within this framework, we determined an optimal repeater scheme that belongs to the class of the fastest schemes (minimizing the average total waiting time and hence maximizing the long-distance entanglement distribution ``raw rate'') and, in addition, minimizes the average accumulated memory dephasing within this class. We have achieved this optimization for medium-size quantum repeaters with up to eight segments. In particular, minimizing the dephasing led us to a scheme that ``swaps as soon as possible''. The technically most challenging element of our treatment is to determine an explicit analytical expression for the random dephasing variable of the fast schemes and its PGF. In order to confirm the correspondence of the minimum of the dephasing variable with the minimal QKD quantum bit error rate (for the QBER related to memory dephasing), we calculated the relevant expectation values and compared the optimal scheme with schemes based on other, different swapping strategies. More generally, our formalism also enables one to consider mixed strategies in which different types of entanglement distribution and swapping can be combined, including the traditionally used doubling strategy that allows one to systematically incorporate methods for quantum error detection (entanglement distillation).
Our new results especially apply to quantum repeaters beyond one middle station, for which an optimization of the distribution and swapping strategies is no longer obvious. For the special case of three repeater segments, assuming only channel loss and memory dephasing, we showed that our optimal scheme gives the highest secret key rate not only among all the fastest schemes but among all schemes, including overall slower schemes that may still potentially lead to a smaller accumulated dephasing. We conjecture that our optimal scheme also gives the highest secret key rate for more than three segments under the same physical assumptions. A rigorous proof of this is non-trivial, because the number of distinct swapping and distribution strategies grows rapidly with the number of repeater segments. Moreover, in a long-range QKD application, some of the spin qubits may be measured immediately, which is generally hard to include in the statistical analysis and the optimization of all possible schemes; for three segments, though, we did include this additional complexity of the protocols. For applications beyond QKD, this extra variation may no longer be relevant. We identified three criteria that should be satisfied by an optimal repeater scheme: distribute entanglement in parallel as fast as possible, store entanglement in parallel as little as possible, and swap entanglement as soon as possible. It is not always possible to satisfy these conditions at the same time, and we discussed specific schemes that are particularly good or bad with regard to some of the criteria. For example, a fully sequential repeater scheme is particularly slow, but avoids parallel storage of many spin qubits. Nonetheless, since it is overall slow, the fully sequential scheme can still accumulate more dephasing. We presented a detailed analysis comparing such different repeater protocols and approaches. With regard to more realistic quantum repeater modelling, we considered additional tools and parameters such as memory cut-offs, multiplexing, and initial state and swapping gate fidelities in order to identify potential regimes in memory-assisted quantum key distribution beyond one middle station where, exploiting our optimized swapping strategy, it becomes useful to add further memory stations along the communication line and connect them via two-qubit swapping operations. Importantly, we found that the initial state and gate fidelities must exceed certain minimal values (generally depending on the specific QKD protocol including post-processing), as otherwise the faultiness of the spin-qubit preparations and operations alone prevents a non-zero secret key rate, even when no imperfect quantum storage (no memory dephasing) takes place at all and independently of the finite channel transmission. This effect becomes stronger with an increasing number of repeater nodes, with the error parameters entering the QKD secret key rate with the power $2n-1$. Once this minimal state and gate fidelity criterion is fulfilled, and when the other experimental imperfections are included too, especially the time-dependent memory dephasing, it is essential to consider the exact secret key rates obtainable in optimized repeater protocols in order to conclude whether a genuine quantum repeater advantage over direct transmission schemes is possible or not. This is what our work aimed at and achieved, based on the standard notion of asymptotic QKD figures of merit.
By quantifying the influence of (within our physical model) essentially all relevant experimental parameters on the final long-range QKD rate, we were able to determine the scaling and trade-offs of these parameters and to analytically calculate exact, optimal rates. A quantum repeater of $n=L/L_0$ segments is thereby characterized by the parameter set $(p,a,\alpha)$, where $p$ is the entanglement distribution probability per segment (including the $n$-dependent channel transmission and the zero-distance link coupling efficiency per segment), $a$ is the entanglement swapping success probability, and $\alpha$ is the inverse effective memory coherence time which, in most protocols, depends on $n$ via the quantum and classical communication times per distribution attempt (we also considered small-scale two-segment protocols without this dependence, and ideas exist to minimize the impact of the inevitable signal waiting times for the elementary units of larger repeaters in combination with high experimental source and processing clock rates \cite{Cody_Jones}). In addition, we have introduced a set of initial state and gate parameters $(\mu_0/F_0, \mu)$, where $\mu_0$ and $F_0$ can be adapted to the specific protocols. Additional memory parameters can be collected as $(m,M,B)$, where $m$ is the memory cut-off (the maximal time for which any spin qubit is stored), $M$ is the number of simultaneously employed memory qubits in a simple multiplexing scenario with $M$ repeater chains used in parallel, and $B$ is the ``memory buffer'' (the number of memory qubits per half station in a single repeater chain). In our work, we focussed on schemes with $a=1$ and $B=1$. The use of $B>1$ memories at each station would allow one to continue the optical quantum state transfer even in segments that already possess successfully distributed states and to potentially replace the earlier-distributed, lower-quality pairs (subject to memory dephasing) by the later-distributed ones. We also did not put the main emphasis on the use and optimization of $m$, though we did include this option in some schemes. We found that $M>1$ leads to an effective improvement of the memory coherence time by a factor of $M$. In this setting, the three essential experimental parameters that have to be sufficiently good are the link coupling efficiency (via $p$), the memory coherence time (via $\alpha$), and the state/gate error parameter $\mu_0$/$\mu$. While the latter must not drop below the above-mentioned limits, as a rule of thumb at least two of these three parameters should be sufficiently good in order to exceed the repeaterless bound and obtain practically meaningful rates. If this is the case, or, even better, if all three are of high quality, memory-assisted quantum key distribution based on heralded entanglement distribution and swapping, without additional quantum error correction or detection, becomes possible, allowing Alice and Bob to share a secret key at a rate orders of magnitude faster than in all-optical quantum state transmission schemes. For instance, for a total distance of 800~km and experimental parameter values that are highly demanding but not impossible (up to 10~s coherence time, about 80\% link coupling, and state or gate infidelities in the regime of 1--2\%), one secret bit can be shared per second with repeater stations placed every 100~km, providing the best balance between a minimal number of extra faulty repeater elements and a sufficient number of repeater stations for an improved loss scaling.
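For reference, the parameter sets introduced above can be collected in a simple container (a sketch only; the field names are ours and carry no meaning beyond this summary):
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class RepeaterParameters:
    # physical repeater model
    p: float      # entanglement distribution probability per segment
    a: float      # entanglement swapping success probability (here a = 1)
    alpha: float  # inverse effective memory coherence time
    # state and gate errors
    mu0: float    # initial (distributed) state error parameter (or F_0)
    mu: float     # swapping gate error parameter
    # memory-related parameters
    m: int        # memory cut-off (0 meaning: no cut-off)
    M: int        # number of repeater chains used in parallel (multiplexing)
    B: int        # memory buffer per half station (here B = 1)

example = RepeaterParameters(p=0.1, a=1.0, alpha=1e-3,
                             mu0=0.99, mu=0.99, m=0, M=1, B=1)
\end{verbatim}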
{\it Acknowledgement:} We thank the BMBF in Germany for support via Q.Link.X/QR.X and the BMBF/EU for support via QuantERA/ShoQC. \appendix \section{Derivation of Eq.~\eqref{eq:GKn}}\label{app:GKn} In this section we derive the PGF $G_n(t)$ of the random variable $K_n$ defined via \begin{equation} K_n = \max(N_1, \ldots, N_n), \end{equation} where $N_i$ are the geometrically distributed random variables with parameter $p$. We have \begin{equation}\label{eq:app:GKn} \begin{split} G_n(t) &= \sum^{+\infty}_{k_1, \ldots, k_n = 1} p q^{k_1 - 1} \ldots p q^{k_n - 1} t^{\max(k_1, \ldots, k_n)} \\ &= p^n t F_n(q, t), \end{split} \end{equation} where the function $F_n(x, t)$ is defined as \begin{equation} F_n(x, t) = \sum^{+\infty}_{k_1, \ldots, k_n = 0} x^{k_1 + \ldots + k_n} t^{\max(k_1, \ldots, k_n)}. \end{equation} The series on the right-hand side of this definition converges for all $|x|<1$ and $|t| \leqslant 1$, since we have \begin{equation} |F_n(x, t)| \leqslant \sum^{+\infty}_{k_1, \ldots, k_n = 0} |x|^{k_1 + \ldots + k_n} = \frac{1}{(1-|x|)^n}. \end{equation} The function $F_n(x, t)$ can be written in a compact form, having only a finite number of terms. We have \begin{equation} \begin{split} &\frac{F_n(x, t)}{1-t} = \sum^{+\infty}_{k_1, \ldots, k_n = 0} \sum^{+\infty}_{k = \max(k_1, \ldots, k_n)} x^{k_1 + \ldots + k_n} t^k \\ &= \sum^{+\infty}_{k = 0} t^k \sum^k_{k_1, \ldots, k_n = 0} x^{k_1 + \ldots + k_n} = \sum^{+\infty}_{k = 0} t^k \left(\frac{1 - x^{k+1}}{1-x}\right)^n. \end{split} \end{equation} Expanding the $n$-th power on the right-hand side and applying simple algebraic transformations, we obtain the following compact expression: \begin{equation} F_n(x, t) = \frac{1 - t}{(1 - x)^n t} \sum^n_{i = 0} (-1)^i \binom{n}{i} \frac{1}{1 - x^i t}. \end{equation} From Eq.~\eqref{eq:app:GKn} we derive the following expression for the PGF of $K_n$: \begin{equation} \begin{split} G_n(t) &= (1 - t) \sum^n_{i = 0} (-1)^i \binom{n}{i} \frac{1}{1 - q^i t} \\ &= 1 + (1-t)\sum^n_{i = 1} (-1)^i \binom{n}{i} \frac{1}{1 - q^i t}, \end{split} \end{equation} which is exactly the expression presented in the main text.
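As a quick numerical cross-check of this closed form (a sketch only, not part of the derivation; it uses the elementary identity $\mathbf{P}(K_n \leqslant k) = (1-q^k)^n$ for the maximum of i.i.d.\ geometric variables):
\begin{verbatim}
# Compare the closed-form PGF of K_n = max(N_1, ..., N_n) with a direct
# (truncated) summation based on P(K_n <= k) = (1 - q^k)^n.
from math import comb

def G_closed(t, n, p):
    q = 1 - p
    return 1 + (1 - t) * sum((-1)**i * comb(n, i) / (1 - q**i * t)
                             for i in range(1, n + 1))

def G_direct(t, n, p, kmax=5000):
    q = 1 - p
    return sum(((1 - q**k)**n - (1 - q**(k - 1))**n) * t**k
               for k in range(1, kmax + 1))

n, p, t = 4, 0.3, 0.7
print(G_closed(t, n, p), G_direct(t, n, p))  # the two values agree
\end{verbatim}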
\section{Trace identities}\label{app:Trace Identities} We have \begin{equation}\label{eq:Trid} \begin{split} {}_{23}\langle&\Psi^+| \tilde{\Gamma}_{\mu, 23}(\hat{\varrho}_{1234})|\Psi^+\rangle_{23} \\ &= \mu \cdot {}_{23}\langle\Psi^+| \hat{\varrho}_{1234}|\Psi^+\rangle_{23} + \frac{1 - \mu}{4} \Tr_{23}(\hat{\varrho}_{1234}). \end{split} \end{equation} Here we show how to compute the quantities on the right-hand side of this equality. A simple way is to work with density matrices. We use the order of basis elements induced by the tensor product. From the one-qubit basis $(|0\rangle, |1\rangle)^T$ we obtain the two-qubit basis \begin{equation}\label{eq:B2} \begin{pmatrix} |0\rangle \\ |1\rangle \end{pmatrix} \otimes \begin{pmatrix} |0\rangle \\ |1\rangle \end{pmatrix} = \begin{pmatrix} |00\rangle \\ |01\rangle \\ |10\rangle \\ |11\rangle \end{pmatrix}. \end{equation} Taking the tensor product once again, we obtain the ordering of four-qubit basis vectors $|0000\rangle$, $|0001\rangle$, $|0010\rangle$, $|0011\rangle$, $|0100\rangle$, $|0101\rangle$, $|0110\rangle$, $|0111\rangle$, $|1000\rangle$, $|1001\rangle$, $|1010\rangle$, $|1011\rangle$, $|1100\rangle$, $|1101\rangle$, $|1110\rangle$, $|1111\rangle$.
If a four-qubit state is described by a density operator $\hat{\varrho}_{1234}$ which has a $16 \times 16$ density matrix $\varrho$ in the standard basis ordered as described above, then the two-qubit partial diagonal states have the following matrices in the basis \eqref{eq:B2}: \begin{equation}\label{eq:D23} \begin{split} {}_{23}\langle 00|\hat{\varrho}_{1234}|00\rangle_{23} &= \varrho[1, 2, 9, 10] \\ {}_{23}\langle 01|\hat{\varrho}_{1234}|01\rangle_{23} &= \varrho[3, 4, 11, 12] \\ {}_{23}\langle 10|\hat{\varrho}_{1234}|10\rangle_{23} &= \varrho[5, 6, 13, 14] \\ {}_{23}\langle 11|\hat{\varrho}_{1234}|11\rangle_{23} &= \varrho[7, 8, 15, 16], \end{split} \end{equation} where $\varrho[I]$, $I$ being a set of 1-based indices, is the submatrix of $\varrho$ with row and column indices in $I$. For the off-diagonal states we have \begin{equation}\label{eq:O23} \begin{split} {}_{23}\langle 01|\hat{\varrho}_{1234}|10\rangle_{23} &= \varrho[3, 4, 11, 12 | 5, 6, 13, 14] \\ {}_{23}\langle 10|\hat{\varrho}_{1234}|01\rangle_{23} &= \varrho[5, 6, 13, 14 | 3, 4, 11, 12], \end{split} \end{equation} where $\varrho[I|J]$ is the submatrix of $\varrho$ with row indices in $I$ and column indices in $J$. The state of the form given by Eq.~\eqref{eq:Drho}, \begin{equation} \hat{\varrho} = \tilde{\Gamma}_{\mu}\bigl(F|\Psi^+\rangle\langle\Psi^+| + (1 - F)|\Psi^-\rangle\langle\Psi^-|\bigr), \end{equation} has the following density matrix in the basis \eqref{eq:B2}: \begin{equation} \varrho = \frac{1}{4} \begin{pmatrix} 1 - \mu & 0 & 0 & 0 \\ 0 & 1 + \mu & 2\mu(2F - 1) & 0 \\ 0 & 2\mu(2F - 1) & 1 + \mu & 0 \\ 0 & 0 & 0 & 1 - \mu \end{pmatrix}. \end{equation} Taking the Kronecker product of two states of this form, Eq.~\eqref{eq:Trid} together with the relations in Eqs.~\eqref{eq:D23}-\eqref{eq:O23} leads to the final form of the distributed state given by Eq.~\eqref{eq:rho14}. \section{Computing PGFs of the sequential scheme}\label{app:SeqPGF} \begin{figure*} \caption{A visualization of the entanglement distribution process with the sequential scheme for $n=4$.} \label{fig:seq1} \end{figure*} In the sequential scheme the number of steps $K_n$ and the dephasing $D_n$ are given by \begin{equation} K_n = N_1 + \ldots + N_n, \quad D_n = N_2 + \ldots + N_n. \end{equation} Their PGFs are thus the $n$-th and $(n-1)$-th powers of the single-segment PGF: \begin{equation} G_n(t) = \left(\frac{p t}{1 - q t}\right)^n, \quad \tilde{G}_n(t) = \left(\frac{p t}{1 - q t}\right)^{n-1}. \end{equation} In the case of a cutoff, the process of entanglement distribution is visualized in Fig.~\ref{fig:seq1}. There are zero or more failure parts, with number-of-steps generating function $B^{[m]}_n(t)$, and exactly one success part, with generating function $A^{[m]}_n(t)$. The total PGF $G^{[m]}_n(t)$ of the number of steps $K^{[m]}_n$ is thus given by \begin{equation} G^{[m]}_n(t) = \frac{A^{[m]}_n(t)}{1 - B^{[m]}_n(t)}. \end{equation} We start with the derivation of the failure part's PGF. The PGF of the top line is clearly \begin{equation} G_0(t) = \frac{pt}{1-qt}. \end{equation} Among the remaining $n-1$ lines there are $i$ lines that succeed, where $0 \leqslant i \leqslant n-2$, so we have to put $i$ $p$'s into $m$ places while the remaining $m-i$ places are taken by $q$'s. We thus have \begin{equation} B^{[m]}_n(t) = G_0(t) \sum^{n-2}_{i=0} \binom{m}{i}p^iq^{m-i}t^m. 
\end{equation} For the success part's PGF we have \begin{equation} A^{[m]}_n(t) = G_0(t) \sum^m_{j=n-1} \binom{j-1}{n-2}p^{n-1}q^{j-n+1} t^j, \end{equation} since the length of the success part can vary from $n-1$ to $m$ (we need at least $n-1$ places for the $n-1$ $p$'s). The position of the last $p$ is fixed, so we need to place $n-2$ $p$'s into $j-1$ places, and the remaining $j-n+1$ places are taken by $q$'s. Making the substitution $j \to j-n+1$, we arrive at the expression \eqref{eq:Gmnt} of the main text. The random variable for the waiting time of the scheme involving multiple cutoffs is given by \begin{equation} K_{n}^{\mathrm{seq},\vec{m}}= \tilde{N}^{(m_{n-1})}-m_{n-1}+\sum_{j=1}^{T_{n-1}} \left(K_{n-1,j}+m_{n-1}\right)\,. \end{equation} Exploiting the fact that sums of independent random variables correspond to products of their PGFs and using \cite[Satz 3.8]{Klenke2020} for the sum, one immediately obtains the result in the main text.
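The single-cutoff PGF $G^{[m]}_n(t)=A^{[m]}_n(t)/(1-B^{[m]}_n(t))$ derived above can also be checked against a direct simulation of the process (a minimal Monte Carlo sketch under our reading of the scheme, not part of the derivation):
\begin{verbatim}
import random
from math import comb

def G_cutoff(t, n, m, p):
    # Closed form G = A/(1-B) with A and B as derived above
    q = 1 - p
    G0 = p * t / (1 - q * t)
    B = G0 * sum(comb(m, i) * p**i * q**(m - i) for i in range(n - 1)) * t**m
    A = G0 * sum(comb(j - 1, n - 2) * p**(n - 1) * q**(j - n + 1) * t**j
                 for j in range(n - 1, m + 1))
    return A / (1 - B)

def sample_steps(n, m, p, rng):
    # One run: the first segment succeeds geometrically (the "top line");
    # then the remaining n-1 segments are attempted sequentially within m
    # further steps; if fewer than n-1 succeed, everything is restarted.
    steps = 0
    while True:
        while True:
            steps += 1
            if rng.random() < p:
                break
        successes = 0
        for _ in range(m):
            steps += 1
            if rng.random() < p:
                successes += 1
                if successes == n - 1:
                    return steps

rng = random.Random(1)
n, m, p, t = 3, 10, 0.3, 0.8
mc = sum(t**sample_steps(n, m, p, rng) for _ in range(200000)) / 200000
print(G_cutoff(t, n, m, p), mc)  # agreement within Monte Carlo error
\end{verbatim}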
\section{Computing dephasing PGFs for parallel schemes}\label{app:PGF Parallel schemes} In this section we derive explicit expressions for the PGFs of the dephasing random variables $D_n$ for the different schemes considered in the main text. All these schemes share the same property --- if the order of the $N_i$'s is known, then one can obtain an explicit analytical expression for the corresponding random variable $D_n$. Having an explicit expression for $D_n$, we can compute the part of its PGF corresponding to a given order of the arguments. Combining these parts for all possible orderings of the arguments, we get the expression for the PGF of $D_n$. More formally, the space $\Omega = \mathbb{N}^n$ of elementary events consists of all $n$-vectors $\vec{N} = (N_1, \ldots, N_n)$ of positive integers. The components $N_i$ are independent identically distributed (i.i.d.) random variables with a geometric distribution with success probability $p$, so $N_i$ is the number of attempts (including the last, successful one) of the $i$-th segment to distribute entanglement. We denote the failure probability by $q = 1-p$. To every point $\vec{N} = (N_1, \ldots, N_n) \in \Omega$ we assign the probability \begin{equation} \mathbf{P}(\vec{N}) = pq^{N_1 - 1} \ldots pq^{N_n-1} = p^n q^{N_1 + \ldots + N_n - n}. \end{equation} The sum of these probabilities is obviously 1, so we have a valid probability space $(\Omega, \mathbf{P})$. The PGF of every component $N_i$ is given by the following simple expression: \begin{equation} g_{N_i}(t) = \frac{pt}{1-qt}. \end{equation} To find the PGFs of more complicated random variables involving several components, we appropriately partition $\Omega$, compute the partial PGF on each part and then combine these partial results into the full expression. For every permutation $\pi \in S_n$ we define a subset of $\Omega$ which is determined by the corresponding relations between the $n$ arguments. For $n=2$ we have two permutations $(12)$ and $(21)$ with corresponding relations $N_1 \leqslant N_2$ and $N_2 < N_1$. For $n=3$ we have six permutations and six corresponding relations \begin{equation} \begin{split} &N_1 \leqslant N_2 \leqslant N_3 \quad N_1 \leqslant N_3 < N_2 \quad N_2 < N_1 \leqslant N_3 \\ &N_2 \leqslant N_3 < N_1 \quad N_3 < N_1 \leqslant N_2 \quad N_3 < N_2 < N_1. \end{split} \end{equation} To make all these subsets non-overlapping, we use a strict inequality at each inversion and a non-strict inequality at all other positions between the numbers in a permutation. We thus have the following decomposition: \begin{equation}\label{eq:omega} \Omega = \bigsqcup_{\pi \in S_n} \Omega_\pi, \end{equation} where $\Omega_\pi$ is the subset determined by the relations corresponding to $\pi$. For any point $\vec{N} \in \Omega_\pi$ we can obtain an explicit expression for $D_n$ for any scheme. In Table~\ref{tbl:1} we show all possible relations between four arguments and the expressions corresponding to the optimal and doubling schemes in the case of $n=4$. Expressions corresponding to different $\pi$ might coincide, as can be seen for the doubling scheme. The PGF of $D_n$ is defined as \begin{equation} \tilde{G}_n(t) = \sum^{+\infty}_{d=0} \mathbf{P}(D_n=d) t^d = \sum_{\vec{N} \in \Omega} \mathbf{P}(\vec{N}) t^{D_n(\vec{N})}. 
\end{equation} Using the decomposition in Eq.~\eqref{eq:omega}, we introduce the partial PGFs via \begin{equation} \tilde{G}_n(\pi|t) = \sum_{\vec{N} \in \Omega_\pi} p^nq^{N_1 + \ldots + N_n - n} t^{D_n(N_1, \ldots, N_n)}, \end{equation} where $D_n(N_1, \ldots, N_n)$ is given explicitly as an appropriate linear combination of the $N_i$'s. The total PGF $\tilde{G}_n(t)$ is then just the sum of all of these partial PGFs: \begin{equation} \tilde{G}_n(t) = \sum_{\pi \in S_n} \tilde{G}_n(\pi|t). \end{equation} We demonstrate the computation of these sums with an example for $n=4$. We have the correspondence \begin{equation} \pi = (2134) \to N_2 < N_1 \leqslant N_3 \leqslant N_4 \end{equation} and the explicit expressions \begin{equation} \begin{split} D^\star_4(N_1, N_2, N_3, N_4) &= N_4 - N_2, \\ D^{\mathrm{dbl}}_4(N_1, N_2, N_3, N_4) &= 2N_4 - N_2 - N_3. \end{split} \end{equation} For the partial PGFs we have \begin{widetext} \begin{equation} \begin{split} \tilde{G}^\star_4(\pi|t) &= \sum^{+\infty}_{N_2=1}\sum^{+\infty}_{N_1=N_2+1}\sum^{+\infty}_{N_3=N_1}\sum^{+\infty}_{N_4=N_3} p^4 q^{N_1+N_2+N_3+N_4-4} t^{N_4 - N_2} = \frac{p^4}{1-q^4}\frac{q^3 t}{(1-qt)(1-q^2t)(1-q^3t)}, \\ \tilde{G}^{\mathrm{dbl}}_4(\pi|t) &= \sum^{+\infty}_{N_2=1}\sum^{+\infty}_{N_1=N_2+1}\sum^{+\infty}_{N_3=N_1}\sum^{+\infty}_{N_4=N_3} p^4 q^{N_1+N_2+N_3+N_4-4} t^{2N_4 - N_2- N_3} = \frac{p^4}{1-q^4}\frac{q^3 t}{(1-q^2t)(1-q^3t)(1-qt^2)}. \end{split} \end{equation} \end{widetext} Summing up the expressions for all $\pi \in S_4$, we obtain the expressions for $\tilde{G}^\star_4(t)$ and $\tilde{G}^{\mathrm{dbl}}_4(t)$ presented in the main text. For completeness, we also give the optimal PGFs for $n=2$ and $n=3$: \begin{displaymath} \begin{split} \tilde{G}^\star_2(t) &= \frac{p^2}{1-q^2} \frac{1+qt}{1-qt}, \\ \tilde{G}^\star_3(t) &= \frac{p^3}{1-q^3} \frac{1+(q+2q^2)t-(2q^2+q^3)t^3-q^4t^4}{(1-qt)(1-q^2t)(1-qt^2)}. \end{split} \end{displaymath} The size of the expressions grows rather quickly with $n$, so we do not present them explicitly for $n>4$. We see that obtaining $\tilde{G}_n(t)$ reduces to computing sums of many geometric series, which is a rather trivial task. The only nontrivial aspect of this algorithm is its superexponential $n!$ complexity, so it is applicable only for small $n$; we used it up to $n=8$, which is of practical relevance. 
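For instance, for $n=2$ the optimal dephasing reduces to $D^\star_2=|N_1-N_2|$ (the segment finishing first waits for the other), and the expression for $\tilde{G}^\star_2(t)$ can be reproduced by a short numerical check (a sketch only, not part of the derivation):
\begin{verbatim}
import random

def G2_closed(t, p):
    # Closed-form PGF of D*_2 = |N_1 - N_2|
    q = 1 - p
    return p**2 / (1 - q**2) * (1 + q * t) / (1 - q * t)

def geom(p, rng):
    # Geometric random variable starting at 1
    n = 1
    while rng.random() >= p:
        n += 1
    return n

rng = random.Random(2)
p, t, samples = 0.25, 0.6, 200000
mc = sum(t**abs(geom(p, rng) - geom(p, rng)) for _ in range(samples)) / samples
print(G2_closed(t, p), mc)  # agreement within Monte Carlo error
\end{verbatim}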
\begin{table}[ht] \begin{tabular}{|l|l|l|} \hline \hfil Permutation & \hfil $D^\star_4(\vec{N})$ & \hfil $D^{\mathrm{dbl}}_4(\vec{N})$ \\ \hline $N_1 \leqslant N_2 \leqslant N_3 \leqslant N_4$ & $N_4-N_1$ & $2N_4-N_1-N_3$ \\ $N_1 \leqslant N_2 \leqslant N_4 < N_3$ & $2N_3-N_1-N_4$ & $2N_3-N_1-N_4$ \\ $N_1 \leqslant N_3 < N_2 \leqslant N_4$ & $N_2+N_4-N_1-N_3$ & $2N_4-N_1-N_3$ \\ $N_1 \leqslant N_3 \leqslant N_4 < N_2$ & $2N_2-N_1-N_3$ & $2N_2-N_1-N_3$ \\ $N_1 \leqslant N_4 < N_2 \leqslant N_3$ & $2N_3-N_1-N_4$ & $2N_3-N_1-N_4$ \\ $N_1 \leqslant N_4 < N_3 < N_2$ & $2N_2-N_1-N_4$ & $2N_2-N_1-N_4$ \\ $N_2 < N_1 \leqslant N_3 \leqslant N_4$ & $N_4-N_2$ & $2N_4-N_2-N_3$ \\ $N_2 < N_1 \leqslant N_4 < N_3$ & $2N_3-N_2-N_4$ & $2N_3-N_2-N_4$ \\ $N_2 \leqslant N_3 < N_1 \leqslant N_4$ & $N_4-N_2$ & $2N_4-N_2-N_3$ \\ $N_2 \leqslant N_3 \leqslant N_4 < N_1$ & $N_1-N_2$ & $2N_1-N_2-N_3$ \\ $N_2 \leqslant N_4 < N_1 \leqslant N_3$ & $2N_3-N_2-N_4$ & $2N_3-N_2-N_4$ \\ $N_2 \leqslant N_4 < N_3 < N_1$ & $N_1+N_3-N_2-N_4$ & $2N_1-N_2-N_4$ \\ $N_3 < N_1 \leqslant N_2 \leqslant N_4$ & $N_2+N_4-N_1-N_3$ & $2N_4-N_1-N_3$ \\ $N_3 < N_1 \leqslant N_4 < N_2$ & $2N_2-N_1-N_3$ & $2N_2-N_1-N_3$ \\ $N_3 < N_2 < N_1 \leqslant N_4$ & $N_4-N_3$ & $2N_4-N_2-N_3$ \\ $N_3 < N_2 \leqslant N_4 < N_1$ & $N_1-N_3$ & $2N_1-N_2-N_3$ \\ $N_3 \leqslant N_4 < N_1 \leqslant N_2$ & $2N_2-N_1-N_3$ & $2N_2-N_1-N_3$ \\ $N_3 \leqslant N_4 < N_2 < N_1$ & $N_1-N_3$ & $2N_1-N_2-N_3$ \\ $N_4 < N_1 \leqslant N_2 \leqslant N_3$ & $2N_3-N_1-N_4$ & $2N_3-N_1-N_4$ \\ $N_4 < N_1 \leqslant N_3 < N_2$ & $2N_2-N_1-N_4$ & $2N_2-N_1-N_4$ \\ $N_4 < N_2 < N_1 \leqslant N_3$ & $2N_3-N_2-N_4$ & $2N_3-N_2-N_4$ \\ $N_4 < N_2 \leqslant N_3 < N_1$ & $N_1+N_3-N_2-N_4$ & $2N_1-N_2-N_4$ \\ $N_4 < N_3 < N_1 \leqslant N_2$ & $2N_2-N_1-N_4$ & $2N_2-N_1-N_4$ \\ $N_4 < N_3 < N_2 < N_1$ & $N_1-N_4$ & $2N_1-N_2-N_4$ \\ \hline \end{tabular} \caption{Explicit expressions for the optimal and doubling dephasing for all possible relations between arguments in the case of $n=4$.} \label{tbl:1} \end{table}
\section{Optimality for three segments}\label{app:Optimality 3 segments} Here we compare all possible schemes for a three-segment repeater when swapping is applied as soon as possible. We do not consider any scheme that swaps only at the end or delays the entanglement swapping, as this increases the dephasing even further. For each scheme we calculate the random variables for the waiting time and the dephasing. In the case of the dephasing, the probability generating function is most useful, whereas for the waiting time we only state the expectation value. Moreover, we consider two different types of schemes. The first type, indicated by a subscript ``imm'', describes schemes where Alice and Bob measure their qubits immediately; this scenario is especially useful in QKD applications. The second type, indicated by a subscript ``non'', describes schemes where Alice and Bob do not measure immediately; such schemes are important in non-QKD applications. A possible use case for these schemes is transferring quantum information between quantum computers by exchanging entangled photons. Here Alice and Bob will not measure their qubits until they share entanglement with each other. \subsection{Sequential schemes} \begin{figure} \caption{Sequential arrangements of entanglement generation in a three-segment repeater. The number in each segment corresponds to the moment when it starts.} \label{fig:Sequential a} \label{fig:Sequential b} \label{fig:Sequential c} \label{fig:Different schemes 3 segments seq} \end{figure} Let us start with sequential schemes, where entanglement generation only takes place in one segment after another. There are three possibilities. First, one starts generating entanglement in Alice's or Bob's segment and always connects adjacent segments after the previous one has finished successfully. Note that here entanglement swapping is performed as soon as possible. We will call this scheme ``sequential a'', see Fig.~\ref{fig:Sequential a}. The second possibility is given by starting with the left or right segment, followed by the segment on the opposite side. Thus, no entanglement swapping is possible. Finally, the middle segment is connected. Let us call this scheme ``sequential b'', see Fig.~\ref{fig:Sequential b}. The third possible arrangement is given by starting in the middle, continuing with the left or right segment and finishing off with the remaining segment on the opposite side, see Fig.~\ref{fig:Sequential c}. All other sequential arrangements for three segments are equivalent to these three schemes. The three sequential schemes share the same waiting time, which is \begin{equation} K^{\mathrm{seq}}_3=N_1+ N_2 +N_3, \end{equation} and has the expectation value \begin{equation} \mathbf{E}[K^{\mathrm{seq}}_3]=\frac{3}{p}. \end{equation} Obviously, the dephasing of the schemes differs, and we also have to distinguish between schemes measuring immediately and non-immediately. At first, let us consider the immediate schemes; as it will turn out, the random variables of the non-immediate schemes are just scaled by a factor of two, although possibly not the random variable of the same scheme. We find \begin{equation} \begin{split} D^{\mathrm{seq,a}}_{3,\mathrm{imm}} &= N_2 + N_3 ,\\ D^{\mathrm{seq,b}}_{3,\mathrm{imm}} &= 2 N_2 + N_3, \\ D^{\mathrm{seq,c}}_{3,\mathrm{imm}} &= 2 N_1 + N_3 . 
\end{split} \end{equation} Since \(N_2\) and \(N_3\) are i.i.d., the probability generating function (PGF) of \(D^{\mathrm{seq,a}}_{3,\mathrm{imm}}\) is given by \begin{equation} \tilde{G}^{\mathrm{seq,a}}_{3, \mathrm{imm}}(t)=g_{N_2}(t) \cdot g_{N_3}(t) = \left( \frac{pt}{1-qt} \right)^2. \end{equation} Due to the general relation \begin{equation} g_{2X}(t)=\mathbf{E}[t^{2X}]=\mathbf{E}[(t^2)^X]=g_X(t^2), \end{equation} valid for any discrete random variable \(X\), we have \begin{align} \tilde{G}^{\mathrm{seq,b}}_{3,\mathrm{imm}}(t)= g_{N_2}(t^2) \cdot g_{N_3}(t) = \frac{p^2 t^3}{(1-qt)(1-qt^2)}. \end{align} The same holds true for the PGF of the immediate-measurement scheme ``sequential c'', because \(N_1\) and \(N_2\) are i.i.d. Thus, its PGF is also given by \begin{equation} \tilde{G}^{\mathrm{seq,c}}_{3,\mathrm{imm}}(t)= \frac{p^2 t^3}{(1-qt)(1-qt^2)}, \end{equation} which shows that this scheme is actually equivalent to ``sequential b'' and will not be considered separately in the later comparison. On the other hand, for non-immediate measurements we find the random variables to be \begin{equation} \begin{split} D^{\mathrm{seq,a}}_{3,\mathrm{non}} &= 2 D^{\mathrm{seq,a}}_{3,\mathrm{imm}} = 2\left( N_2 + N_3 \right), \\ D^{\mathrm{seq,b}}_{3,\mathrm{non}} &= 2 D^{\mathrm{seq,b}}_{3,\mathrm{imm}} = 2\left( 2 N_2 + N_3 \right), \\ D^{\mathrm{seq,c}}_{3,\mathrm{non}} &= 2 D^{\mathrm{seq,a}}_{3,\mathrm{imm}} = 2\left( N_1 + N_3 \right). \end{split} \end{equation} By using the same argument as before, we find the corresponding PGFs \begin{equation} \begin{split} \tilde{G}^{\mathrm{seq,a}}_{3,\mathrm{non}}(t) &= \tilde{G}^{\mathrm{seq,a}}_{3,\mathrm{imm}}(t^2) ,\\ \tilde{G}^{\mathrm{seq,b}}_{3,\mathrm{non}}(t) &= \tilde{G}^{\mathrm{seq,b}}_{3,\mathrm{imm}}(t^2) , \\ \tilde{G}^{\mathrm{seq,c}}_{3,\mathrm{non}}(t) &= \tilde{G}^{\mathrm{seq,a}}_{3,\mathrm{imm}}(t^2). \end{split} \end{equation} Again, the scheme ``sequential c'' is equivalent to another scheme, but now it is ``sequential a''. Therefore, the non-immediate version of ``sequential c'' will not be treated separately from ``sequential a''. \subsection{Two segments simultaneously at the start} \begin{figure} \caption{Possible arrangements of entanglement generation in a three-segment repeater, when two segments start simultaneously. The number in each segment corresponds to the moment when it starts.} \label{fig:two start a} \label{fig:two start b} \label{fig:Different schemes 3 segments sim start} \end{figure} When we generate entanglement in two segments simultaneously, we can do so by starting with these two segments or by finishing with them. Here we consider the case where one starts with them, which leaves only two different arrangements. However, we still have to distinguish between measuring immediately and not. For the first scheme under consideration, the middle and the left (or equivalently the right) segment start generating entanglement at once. They swap as soon as both are done, and then the last segment starts generating entanglement, see Fig.~\ref{fig:two start a}. Let us call this scheme ``start a''. The dephasing random variables in this case are \begin{equation} \begin{split} D^{\mathrm{start,a}}_{3,\mathrm{imm}}&= \begin{cases} N_2-N_1+N_3 & N_1 \leq N_2\\ 2\left(N_1-N_2\right)+N_3 & N_2 < N_1 \end{cases}, \\ D^{\mathrm{start,a}}_{3,\mathrm{non}}&=2|N_1-N_2|+2N_3. 
\end{split} \end{equation} The PGF of \(D^{\mathrm{start,a}}_{3,\mathrm{non}}\) obviously reads \begin{equation} \tilde{G}^{\mathrm{start,a}}_{3,\mathrm{non}}(t)=\tilde{G}_{2}(t^2) \cdot g_{N_3}(t^2) = \frac{p^3t^2(1+qt^2)}{(1-q^2)(1-qt^2)^2}. \end{equation} For immediate measurements, we use the methods presented in the previous section and derive the PGF of \(D^{\mathrm{start,a}}_{3,\mathrm{imm}}\), \begin{equation} \tilde{G}^{\mathrm{start,a}}_{3,\mathrm{imm}}(t) = \frac{p^3 t (1-q^2 t^3)}{(1-q^2) (1-q t)^2 (1-q t^2)}. \end{equation} The second scheme is realised when we start with both the left and the right segment at once. As in the second sequential scheme, no swapping is possible when both segments have finished, and one has to wait for the middle segment. We will call this scheme ``start b''; it is shown schematically in Fig.~\ref{fig:two start b}. Here the dephasing random variables are \begin{equation} \begin{split} D^{\mathrm{start,b}}_{3,\mathrm{imm}} &= |N_1-N_3|+2N_2, \\ D^{\mathrm{start,b}}_{3,\mathrm{non}} &= 2|N_1-N_3|+4N_2 = 2D^{\mathrm{start,b}}_{3,\mathrm{imm}}. \end{split} \end{equation} We can simplify the calculation by considering the immediate scheme first and using \(g_{2X}(t)=g_X(t^2)\). The PGF is given by \begin{displaymath} \tilde{G}^{\mathrm{start,b}}_{3,\mathrm{imm}}(t) = \tilde{G}_{2}(t) \cdot g_{2N_2}(t) = \frac{p^3 t^2 (1+qt)}{(1-q^2)(1-qt)(1-qt^2)}. \end{displaymath} Hence, the PGF of the non-immediate version is simply \begin{align} \tilde{G}^{\mathrm{start,b}}_{3,\mathrm{non}}(t) = \tilde{G}^{\mathrm{start,b}}_{3,\mathrm{imm}}(t^2). \end{align} The waiting time is the same for both schemes in this subsection and amounts to \begin{equation} K^{\mathrm{simult.}}_3=\max(N_1,N_2)+N_3, \end{equation} with an expectation value of \begin{equation} \mathbf{E}[K^{\mathrm{simult.}}_3]=\frac{5-3p}{(2-p)p}. \end{equation} \subsection{Two segments simultaneously at the end} \begin{figure} \caption{Possible arrangements of entanglement generation in a three-segment repeater, when only one segment starts and the rest finishes simultaneously. The number in each segment corresponds to the moment when it starts.} \label{fig:two end a} \label{fig:two end b} \label{fig:Different schemes 3 segments sim end} \end{figure} Finally, the last possible arrangement of two simultaneous segments is to start these two segments last. The waiting time stays the same as in the previous case, but again there are two possibilities for the dephasing and two ways to perform the measurements, i.e., immediate or non-immediate. The first scheme is realised when we start with the segment in the middle; when it finishes, the left and right segments start generating entanglement simultaneously. We will call this scheme ``end a''; it is shown schematically in Fig.~\ref{fig:two end a}. In this case the dephasing random variables are given by \begin{equation} \begin{split} D^{\mathrm{end,a}}_{3,\mathrm{imm}} &= N_1 + N_3 , \\ D^{\mathrm{end,a}}_{3,\mathrm{non}} &=2\max(N_1,N_3), \end{split} \end{equation} with the PGFs \begin{equation} \begin{split} \tilde{G}^{\mathrm{end,a}}_{3,\mathrm{imm}}(t) &= \tilde{G}^{\mathrm{seq,a}}_{3, \mathrm{imm}}(t) = \left( \frac{pt}{1-qt} \right)^2, \\ \tilde{G}^{\mathrm{end,a}}_{3,\mathrm{non}}(t) &= G^{\mathrm{par}}_2(t^2) = \frac{p^2 t^2 (1+qt^2)}{(1-qt^2)(1-q^2t^2)}. \end{split} \end{equation} The second possibility is to start with the left or right segment and, after it has finished, generate entanglement simultaneously in the remaining two segments. 
The schemes and random variables are equivalent independent of whether one starts with the left or the right segment. We will call this scheme ``end b''; its schematic representation, when starting with the left segment, is shown in Fig.~\ref{fig:two end b}. Similarly to the scheme ``start a'', the dephasing random variables depend on the order of successful entanglement generation. As an example, let us consider the scheme where we do not measure immediately. First, assume that we started with the left segment and that it finished successfully after \(N_1\) attempts. Then both the middle and the right segment start generating entanglement simultaneously. If the middle segment succeeds first, after \(N_2\) attempts, we can swap immediately and again have only one segment waiting. Eventually, the right segment will succeed after \(N_3\) attempts, and we can also swap it. In total the dephasing will equal \(D^{\mathrm{end,b}}_{3,\mathrm{non}}=2N_3\), because \(2N_2\) cancels out. This is the optimal case of this scheme. Alternatively, it could also happen that the right segment finishes first, so that we have two segments waiting for the middle one to succeed. In this case, we have \(D^{\mathrm{end,b}}_{3,\mathrm{non}}=4N_2-2N_3\). Hence, in total the dephasing is \begin{equation} D^{\mathrm{end,b}}_{3,\mathrm{non}}= \begin{cases} 2N_3 & N_3 \geq N_2 \\ 4N_2-2N_3 & N_3 < N_2 \end{cases}. \end{equation} A similar consideration yields the dephasing random variable of the immediate-measurement scheme, \begin{equation} D^{\mathrm{end,b}}_{3,\mathrm{imm}}= \begin{cases} N_3 & N_3 \geq N_2 \\ 2N_2-N_3 & N_3 < N_2 \end{cases}. \end{equation} As mentioned a few times already, we can exploit that \(g_{2X}(t)=g_X(t^2)\), and thus we calculate the PGF of the immediate scheme first, which reads \begin{equation} \tilde{G}^{\mathrm{end,b}}_{3,\mathrm{imm}}(t) = \frac{p^2 t \left(1-q^2 t^3\right)}{(1-qt) \left(1- q^2 t\right) \left(1-q t^2\right)}. \end{equation} Therefore, the PGF of \(D^{\mathrm{end,b}}_{3,\mathrm{non}}\) is given by \begin{equation} \tilde{G}^{\mathrm{end,b}}_{3,\mathrm{non}}(t) = \tilde{G}^{\mathrm{end,b}}_{3,\mathrm{imm}}(t^2), \end{equation} and we have covered all possible schemes of this subsection.
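These case distinctions can again be checked numerically; a minimal Monte Carlo sketch for $\tilde{G}^{\mathrm{end,b}}_{3,\mathrm{imm}}(t)$ (not part of the derivation) is:
\begin{verbatim}
import random

def G_endb_imm(t, p):
    # Closed-form PGF of D^{end,b}_{3,imm}
    q = 1 - p
    return (p**2 * t * (1 - q**2 * t**3)
            / ((1 - q * t) * (1 - q**2 * t) * (1 - q * t**2)))

def geom(p, rng):
    n = 1
    while rng.random() >= p:
        n += 1
    return n

rng = random.Random(3)
p, t, samples = 0.3, 0.7, 200000
acc = 0.0
for _ in range(samples):
    n2, n3 = geom(p, rng), geom(p, rng)
    d = n3 if n3 >= n2 else 2 * n2 - n3  # case distinction from the text
    acc += t**d
print(G_endb_imm(t, p), acc / samples)   # agreement within Monte Carlo error
\end{verbatim}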
\subsection{Overlapping schemes} Before considering fully parallel schemes, we turn our attention to a mixture of the previous simultaneous schemes, which we will call overlapping schemes. The procedure is as follows: we start generating entanglement in two segments simultaneously, and as soon as one of the two segments finishes, we start with the remaining one as well. Thus, the two processes of entanglement generation overlap, explaining the name. In Fig.~\ref{fig:Different overlapping schemes} a schematic version of the overlapping schemes can be seen. \begin{figure} \caption{Possible arrangements of entanglement generation in a three-segment repeater, when two segments start simultaneously and the remaining segment starts as soon as one is successful. The number in each segment corresponds to the moment when it starts and the star indicates that this segment starts as soon as one of the others finished.} \label{fig:Overlapping a} \label{fig:Overlapping b} \label{fig:Different overlapping schemes} \end{figure} There are two different possible arrangements, presented in Fig.~\ref{fig:Overlapping a} and Fig.~\ref{fig:Overlapping b}. In the former, the left (or equivalently the right) and the middle segment start from the beginning. This scheme will be called ``overlapping, a''. The latter scheme starts with both outer segments and will be called ``overlapping, b''. For the scheme ``overlapping, a'' with immediate measurements we find the dephasing random variable to be \begin{equation} D^{\mathrm{over,a}}_{3,\mathrm{imm}}= \begin{cases} N_3 &\Omega_1 \\ 2(N_2-N_1)-N_3 & \Omega_2 \\ N_1-N_2+N_3 &\Omega_3 \\ \end{cases} \end{equation} where we have chosen the partition $\Omega = \mathbb{N}^3 = \Omega_1 \sqcup \Omega_2 \sqcup \Omega_3$ given by the following inequalities: \begin{equation} \begin{split} \Omega_1 &= N_1 \leq N_2, N_2-N_1 \leq N_3, \\ \Omega_2 &= N_1 < N_2, N_2-N_1 > N_3, \\ \Omega_3 &= N_2< N_1. \end{split} \end{equation} The dephasing varies with the order in which the segments finish, since whether one can already swap or measure depends on which segment is done first. Thus, we have three different cases. One can calculate the full PGF of the dephasing in a similar way to the previous schemes and finds \begin{align} \tilde{G}^{\mathrm{over,a}}_{3,\mathrm{imm}}(t) = \frac{p^3 t (1 + q - 2 q^2 t - q t^2 + q^4 t^4)} { (1 - q^2) (1 -q t)^2 (1 - q^2 t) (1-q t^2)}. \end{align} For the non-immediate version of the scheme ``overlapping, a'', we do not have to take the measurements into account, but this still does not result in more symmetries simplifying the expression. 
Returning to the non-immediate version of the scheme ``overlapping, a'', one has to consider all possible orders separately, and we find the dephasing to be
\begin{equation}
D^{\mathrm{over,a}}_{3,\mathrm{non}}=
\begin{cases}
2N_3 & \Omega_1 \\
2\left(2\left(N_2-N_1\right)-N_3\right) & \Omega_2 \\
2N_3 & \Omega_3 \\
2\left(N_1-N_2\right) & \Omega_4 \\
\end{cases}
\end{equation}
where the partition in this case is given by
\begin{equation}
\begin{split}
\Omega_1 &= N_1 \leq N_2, N_2-N_1 \leq N_3, \\
\Omega_2 &= N_1 < N_2, N_2-N_1 > N_3, \\
\Omega_3 &= N_2< N_1, N_1-N_2 \leq N_3, \\
\Omega_4 &= N_2< N_1, N_1-N_2 > N_3.
\end{split}
\end{equation}
The resulting PGF reads
\begin{displaymath}
\tilde{G}^{\mathrm{over,a}}_{3,\mathrm{non}}(t) = \frac{p^3 t^2 (1 +2 q - q (1 + q) t^4 - q^3 t^6)} {(1 - q^2) (1 - q t^2) (1 - q^2 t^2) (1 - q t^4)}.
\end{displaymath}
The other overlapping scheme possesses more symmetry, and thus we find more compact expressions for the random variables; the dephasing mainly depends on the difference in the number of steps between the outer segments. For the immediate and non-immediate scheme we find
\begin{equation}
\begin{split}
D^{\mathrm{over,b}}_{3,\mathrm{imm}} &=
\begin{cases}
2N_2 - \abs{N_1 - N_3} & \abs{N_1-N_3} < N_2 \\
\abs{N_1-N_3} & \abs{N_1-N_3} \geq N_2
\end{cases}, \\
D^{\mathrm{over,b}}_{3,\mathrm{non}} &=
\begin{cases}
4N_2 - 2\abs{N_1- N_3} & \abs{N_1-N_3} < N_2 \\
2\abs{N_1-N_3} & \abs{N_1-N_3} \geq N_2
\end{cases}.
\end{split}
\end{equation}
By case analysis we derive the PGFs
\begin{equation}
\begin{split}
\tilde{G}^{\mathrm{over,b}}_{3,\mathrm{imm}}(t) &= \frac{p^3 t (t + q (2 - t^2 (1 + q + q^2 t)))}{(1 - q^2) (1 - q t) (1 - q^2 t) (1 - q t^2)}, \\
\tilde{G}^{\mathrm{over,b}}_{3,\mathrm{non}}(t) &= \tilde{G}^{\mathrm{over,b}}_{3,\mathrm{imm}}(t^2).
\end{split}
\end{equation}
Finally, the only missing piece is the waiting time of the overlapping schemes and its expectation value. The waiting-time random variable is
\begin{equation}
K^{\mathrm{over}}_3 = \min(N_1,N_2) + \max(|N_1-N_2|,N_3).
\end{equation}
Its expectation value is found to be
\begin{equation}
E[K^{\mathrm{over}}_3] = \frac{8 - 3p \left(3 - p \right)}{p\left(2 - p\right)^2}.
\end{equation}
\subsection{Parallel schemes}
Here we only consider the potentially optimal scheme, since all parallel schemes possess the same raw rate but differ in their dephasing. In the optimal scheme the dephasing is minimized, so that it has the best secret key rate of all schemes of this class.
\begin{table}[ht]
\begin{tabular}{|l|l|l|}
\hline
\hfil Domain & \hfil $D^\star_{3, \mathrm{non}}$ & \hfil $D^\star_{3, \mathrm{imm}}$ \\ \hline
$N_1 \leqslant N_2 \leqslant N_3$ & $2(N_3 - N_1)$ & $N_3 - N_1$ \\
$N_1 \leqslant N_3 < N_2$ & $2(2N_2 - N_1 - N_3)$ & $2N_2 - N_3 - N_1$ \\
$N_2 < N_1 \leqslant N_3$ & $2(N_3 - N_2)$ & $N_1 + N_3 - 2N_2$ \\
$N_2 \leqslant N_3 < N_1$ & $2(N_1 - N_2)$ & $N_1 + N_3 - 2N_2$ \\
$N_3 < N_1 \leqslant N_2$ & $2(2N_2 - N_1 - N_3)$ & $2N_2 - N_3 - N_1$ \\
$N_3 < N_2 < N_1$ & $2(N_1 - N_3)$ & $N_1 - N_3$ \\
\hline
\end{tabular}
\caption{The values of $D^\star_{3, \mathrm{non}}$ and $D^\star_{3, \mathrm{imm}}$ on the domains of the partition.}\label{tbl:D3nm}
\end{table}
The waiting time is \(K^{\mathrm{par}}_3=\max(N_1,N_2,N_3)\), and following \eqref{eq:Knpar} or Appendix~\ref{app:GKn} its expectation value is
\begin{equation}
E[K^{\mathrm{par}}_3]=\frac{1 + q \left(4 + 3 q \left(1 + q\right)\right)}{1 + q - q^3 - q^4}.
\end{equation}
The dephasing PGF can again be computed with our partitioning approach. Before doing so, both waiting-time expectations above can be verified numerically, as sketched below.
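The following Python sketch (same geometric assumption on the \(N_i\) as before; all names and the value of \(p\) are illustrative) compares sampled means of \(K^{\mathrm{over}}_3\) and \(K^{\mathrm{par}}_3\) with the closed-form expectation values quoted above.

\begin{verbatim}
# Monte Carlo check of E[K^over_3] and E[K^par_3] against the closed forms.
# Assumption (ours): N1, N2, N3 i.i.d. geometric on {1, 2, ...} with parameter p.
import numpy as np

rng = np.random.default_rng(1)
p = 0.4                                   # example value (our choice)
q = 1.0 - p
N1, N2, N3 = rng.geometric(p, size=(3, 10**6))

K_over = np.minimum(N1, N2) + np.maximum(np.abs(N1 - N2), N3)
K_par  = np.maximum(np.maximum(N1, N2), N3)

print(K_over.mean(), (8 - 3*p*(3 - p)) / (p * (2 - p)**2))
print(K_par.mean(),  (1 + q*(4 + 3*q*(1 + q))) / (1 + q - q**3 - q**4))
# each printed pair should agree up to sampling error
\end{verbatim}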
The six domains and the values of the dephasing variables on these domains are given in Table~\ref{tbl:D3nm}. The final results read
\begin{displaymath}
\begin{split}
\tilde{G}^\star_{3,\mathrm{non}}(t) &= \frac{p^3}{1-q^3} \frac{1+q(1+2q)t^2-q^2(2+q)t^6-q^4t^8}{(1-qt^2)(1-q^2t^2)(1-qt^4)} , \\
\tilde{G}^\star_{3,\mathrm{imm}}(t) &= \frac{p^3}{1 - q^3} \frac{1+q^2t-2q^3t^2-2q^2t^3+q^3t^4+q^5t^5}{(1-qt)^2(1-q^2t)(1-qt^2)} .
\end{split}
\end{displaymath}
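The remaining closed-form dephasing results can be cross-checked in the same way before the schemes are compared. The Python sketch below (our own illustration, under the same geometric assumption on the \(N_i\)) implements the case distinctions of the scheme ``overlapping, b'' and of Table~\ref{tbl:D3nm} directly and compares sampled values of \(E[t^{D}]\) with the corresponding PGFs. Note that on every domain of the table the immediate-measurement entry equals \(|N_1-N_2|+|N_2-N_3|\); the sketch uses this as an internal consistency check.

\begin{verbatim}
# Monte Carlo cross-check of the remaining dephasing PGFs: "overlapping, b"
# (immediate) and the optimal parallel scheme (immediate and non-immediate,
# via Table tbl:D3nm).  Same assumption as before: N_i i.i.d. geometric(p).
import numpy as np

rng = np.random.default_rng(2)
p, t = 0.35, 0.9                          # example values (our choice)
q = 1.0 - p
N1, N2, N3 = rng.geometric(p, size=(3, 10**6))

# "overlapping, b", immediate measurements.
M = np.abs(N1 - N3)
D_over_b = np.where(M < N2, 2*N2 - M, M)
G_over_b = (p**3 * t * (t + q*(2 - t**2*(1 + q + q**2*t)))
            / ((1 - q**2) * (1 - q*t) * (1 - q**2*t) * (1 - q*t**2)))
print(np.mean(t**D_over_b), G_over_b)

# Optimal parallel scheme: evaluate Table tbl:D3nm domain by domain.
conds = [(N1 <= N2) & (N2 <= N3), (N1 <= N3) & (N3 < N2), (N2 < N1) & (N1 <= N3),
         (N2 <= N3) & (N3 < N1), (N3 < N1) & (N1 <= N2), (N3 < N2) & (N2 < N1)]
D_imm = np.select(conds, [N3 - N1, 2*N2 - N1 - N3, N1 + N3 - 2*N2,
                          N1 + N3 - 2*N2, 2*N2 - N1 - N3, N1 - N3])
D_non = np.select(conds, [2*(N3 - N1), 2*(2*N2 - N1 - N3), 2*(N3 - N2),
                          2*(N1 - N2), 2*(2*N2 - N1 - N3), 2*(N1 - N3)])

# Internal consistency: on every domain the immediate entry is |N1-N2| + |N2-N3|.
assert np.array_equal(D_imm, np.abs(N1 - N2) + np.abs(N2 - N3))

G_imm = (p**3 / (1 - q**3)
         * (1 + q**2*t - 2*q**3*t**2 - 2*q**2*t**3 + q**3*t**4 + q**5*t**5)
         / ((1 - q*t)**2 * (1 - q**2*t) * (1 - q*t**2)))
G_non = (p**3 / (1 - q**3)
         * (1 + q*(1 + 2*q)*t**2 - q**2*(2 + q)*t**6 - q**4*t**8)
         / ((1 - q*t**2) * (1 - q**2*t**2) * (1 - q*t**4)))
print(np.mean(t**D_imm), G_imm)   # each printed pair should agree
print(np.mean(t**D_non), G_non)   # up to sampling error
\end{verbatim}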
\subsection{Comparisons}
Finally, having calculated all necessary statistical quantities, we are able to compare the previously discussed schemes. As a reminder, we only consider schemes that swap as soon as possible, since delaying the entanglement swapping increases the dephasing, which in turn decreases the SKR.
First, we consider the immediate-measurement schemes. Fig.~\ref{fig:Comparison_3_segments_immediate tau=0.1} ($\tau_{\mathrm{coh}}= \unit[0.1]{s}$) and Fig.~\ref{fig:Comparison_3_segments_immediate tau=10} ($\tau_{\mathrm{coh}}= \unit[10]{s}$) show a comparison of all immediate-measurement schemes for a three-segment repeater. In both figures the SKR of the ``optimal'' scheme is represented in orange. As mentioned earlier, the scheme ``seq, c'' is equivalent to ``seq, b'' in this setting and thus is not considered separately. For both coherence times the optimal scheme outperforms all other schemes. Especially for shorter distances, the optimal scheme performs clearly better than the others. Only for longer distances, where the rate of any three-segment repeater drops, do the schemes ``over, b'', ``over, a'' and ``end, b'' catch up, but they do not surpass it. Typically, one would not operate a repeater in this regime, as the rates are too low. Additionally, in the limit of increasing hardware resources, i.e. \( p_{\mathrm{link}} \rightarrow 1 ,\; \mu \rightarrow 1 , \; \mu_0 \rightarrow 1 \), the optimal scheme keeps performing the best. Thus, we conclude that the immediate-measurement version of the optimal scheme is truly optimal for \(n \leq 3\).
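This ordering can be made plausible directly from the PGFs quoted in the previous subsections, since the mean dephasing of each scheme is \(E[D]=\tilde G_D'(1)\). The Python sketch below (our own illustration; it uses only the PGFs printed in this section and an example value of \(p\), and mean dephasing is of course only one ingredient of the SKR, the waiting time being the other) evaluates these means for the immediate-measurement schemes; at \(p=1/4\), for instance, the optimal scheme comes out lowest.

\begin{verbatim}
# Mean dephasing E[D] = dG/dt at t = 1 for the immediate-measurement schemes
# whose PGFs are quoted in this section, at an example success probability p.
# (Illustrative only; smaller E[D] means less dephasing, but the SKR also
# depends on the waiting time, which differs between the schemes.)
import sympy as sp

t, p = sp.symbols('t p', positive=True)
q = 1 - p

pgfs = {
    "end a":   (p*t / (1 - q*t))**2,
    "end b":   p**2*t*(1 - q**2*t**3) / ((1 - q*t)*(1 - q**2*t)*(1 - q*t**2)),
    "over a":  p**3*t*(1 + q - 2*q**2*t - q*t**2 + q**4*t**4)
               / ((1 - q**2)*(1 - q*t)**2*(1 - q**2*t)*(1 - q*t**2)),
    "over b":  p**3*t*(t + q*(2 - t**2*(1 + q + q**2*t)))
               / ((1 - q**2)*(1 - q*t)*(1 - q**2*t)*(1 - q*t**2)),
    "optimal": p**3/(1 - q**3)
               * (1 + q**2*t - 2*q**3*t**2 - 2*q**2*t**3 + q**3*t**4 + q**5*t**5)
               / ((1 - q*t)**2*(1 - q**2*t)*(1 - q*t**2)),
}

for name, G in pgfs.items():
    mean_D = sp.diff(G, t).subs(t, 1)
    print(f"{name:8s}  E[D] = {sp.N(mean_D.subs(p, sp.Rational(1, 4)), 5)}")
\end{verbatim}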
Next, Fig.~\ref{fig:Comparison_3_segments_non tau=0.1} ($\tau_{\mathrm{coh}}= \unit[0.1]{s}$) and Fig.~\ref{fig:Comparison_3_segments_non tau=10} ($\tau_{\mathrm{coh}}= \unit[10]{s}$) show the same comparison of the different swapping schemes using non-immediate measurements. Again, the ``optimal'' scheme is presented in orange. This time the sequential schemes ``seq, a'' and ``seq, c'' are equivalent and thus are not considered separately. As one can see, the optimal scheme outperforms all other schemes in the ideal case \(\mu=\mu_0=1\) for all choices of \(\tau_{\mathrm{coh}}\) and \(p_{\mathrm{link}}\). Furthermore, it also provides the highest secret key rate in the non-ideal case until close to the drop-off. The scheme ``end a'' surpasses it only at distances close to or beyond the point where both rates start to decline dramatically, thereby increasing the achievable distance. As discussed before, one would typically not operate a repeater in this regime. However, if the main goal is to reach the longest possible distance, then the scheme ``end a'' performs best. In the end, the optimal scheme provides the best secret key rate under most realistic use scenarios. Moreover, it is truly optimal in the limit of increasing hardware parameters, i.e. \( p_{\mathrm{link}} \rightarrow 1 ,\; \mu \rightarrow 1 , \; \mu_0 \rightarrow 1 \). Thus, it will be beneficial to use the ``optimal'' scheme as technology progresses and the hardware resources increase. Hence, our conclusion for non-immediate schemes is that the ``optimal'' scheme is optimal under improving hardware parameters for \(n \leq 3\). We conjecture that the same is true for both immediate and non-immediate measurement schemes for all repeaters with \(n\geq 3\) segments. This should be investigated in future research.
\begin{figure*}
\caption{Comparison of secret key rates of three-segment repeaters performing \emph{immediate} measurements ($\tau_{\mathrm{coh}}= \unit[0.1]{s}$).}
\label{fig:Comparison_3_segments_immediate tau=0.1}
\end{figure*}
\begin{figure*}
\caption{Comparison of secret key rates of three-segment repeaters performing \emph{immediate} measurements ($\tau_{\mathrm{coh}}= \unit[10]{s}$).}
\label{fig:Comparison_3_segments_immediate tau=10}
\end{figure*}
\begin{figure*}
\caption{Comparison of secret key rates of three-segment repeaters performing \emph{non-immediate} measurements ($\tau_{\mathrm{coh}}= \unit[0.1]{s}$).}
\label{fig:Comparison_3_segments_non tau=0.1}
\end{figure*}
\begin{figure*}
\caption{Comparison of secret key rates of three-segment repeaters performing \emph{non-immediate} measurements ($\tau_{\mathrm{coh}}= \unit[10]{s}$).}
\label{fig:Comparison_3_segments_non tau=10}
\end{figure*}
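As an aside, one convenient property of the PGFs used throughout this section is that, for any per-time-step decay factor \(0<\lambda\le 1\), the average accumulated factor is simply \(E[\lambda^{D}]=\tilde G_{D}(\lambda)\); this is one way dephasing statistics of this kind can be folded into rate estimates. The short Python sketch below is our own illustration only: the value of \(\lambda\) is a placeholder and is not the parameter mapping used for the figures.

\begin{verbatim}
# Evaluate the average accumulated decay factor E[lambda^D] = G_D(lambda) for
# the optimal parallel scheme, using the PGFs quoted earlier.  lambda is a
# generic per-time-step factor chosen purely for illustration.
import sympy as sp

t, p = sp.symbols('t p', positive=True)
q = 1 - p

G_non = (p**3/(1 - q**3)
         * (1 + q*(1 + 2*q)*t**2 - q**2*(2 + q)*t**6 - q**4*t**8)
         / ((1 - q*t**2)*(1 - q**2*t**2)*(1 - q*t**4)))
G_imm = (p**3/(1 - q**3)
         * (1 + q**2*t - 2*q**3*t**2 - 2*q**2*t**3 + q**3*t**4 + q**5*t**5)
         / ((1 - q*t)**2*(1 - q**2*t)*(1 - q*t**2)))

lam, p_val = sp.Rational(98, 100), sp.Rational(1, 4)   # illustrative numbers only
for label, G in (("non-immediate", G_non), ("immediate", G_imm)):
    print(label, sp.N(G.subs({p: p_val, t: lam}), 6))  # closer to 1 = less dephasing
\end{verbatim}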